author    Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-21 11:54:28 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-21 11:54:28 +0000
commit    e6918187568dbd01842d8d1d2c808ce16a894239 (patch)
tree      64f88b554b444a49f656b6c656111a145cbbaa28  /doc/cephfs
parentInitial commit. (diff)
Adding upstream version 18.2.2.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/cephfs')
-rw-r--r--  doc/cephfs/add-remove-mds.rst  118
-rw-r--r--  doc/cephfs/administration.rst  392
-rw-r--r--  doc/cephfs/api/index.rst  11
-rw-r--r--  doc/cephfs/api/libcephfs-java.rst  18
-rw-r--r--  doc/cephfs/api/libcephfs-py.rst  13
-rw-r--r--  doc/cephfs/app-best-practices.rst  82
-rw-r--r--  doc/cephfs/cache-configuration.rst  211
-rw-r--r--  doc/cephfs/capabilities.rst  182
-rw-r--r--  doc/cephfs/ceph-dokan.rst  102
-rw-r--r--  doc/cephfs/cephfs-architecture.svg  1
-rw-r--r--  doc/cephfs/cephfs-io-path.rst  50
-rw-r--r--  doc/cephfs/cephfs-journal-tool.rst  238
-rw-r--r--  doc/cephfs/cephfs-mirroring.rst  414
-rw-r--r--  doc/cephfs/cephfs-top.png  bin 0 -> 37292 bytes
-rw-r--r--  doc/cephfs/cephfs-top.rst  116
-rw-r--r--  doc/cephfs/client-auth.rst  261
-rw-r--r--  doc/cephfs/client-config-ref.rst  75
-rw-r--r--  doc/cephfs/createfs.rst  135
-rw-r--r--  doc/cephfs/dirfrags.rst  101
-rw-r--r--  doc/cephfs/disaster-recovery-experts.rst  322
-rw-r--r--  doc/cephfs/disaster-recovery.rst  61
-rw-r--r--  doc/cephfs/dynamic-metadata-management.rst  90
-rw-r--r--  doc/cephfs/eviction.rst  190
-rw-r--r--  doc/cephfs/experimental-features.rst  42
-rw-r--r--  doc/cephfs/file-layouts.rst  272
-rw-r--r--  doc/cephfs/fs-volumes.rst  653
-rw-r--r--  doc/cephfs/full.rst  60
-rw-r--r--  doc/cephfs/health-messages.rst  240
-rw-r--r--  doc/cephfs/index.rst  213
-rw-r--r--  doc/cephfs/journaler.rst  7
-rw-r--r--  doc/cephfs/kernel-features.rst  47
-rw-r--r--  doc/cephfs/lazyio.rst  76
-rw-r--r--  doc/cephfs/mantle.rst  263
-rw-r--r--  doc/cephfs/mdcache.rst  77
-rw-r--r--  doc/cephfs/mds-config-ref.rst  67
-rw-r--r--  doc/cephfs/mds-journaling.rst  90
-rw-r--r--  doc/cephfs/mds-state-diagram.dot  75
-rw-r--r--  doc/cephfs/mds-states.rst  230
-rw-r--r--  doc/cephfs/mount-prerequisites.rst  75
-rw-r--r--  doc/cephfs/mount-using-fuse.rst  103
-rw-r--r--  doc/cephfs/mount-using-kernel-driver.rst  159
-rw-r--r--  doc/cephfs/multifs.rst  54
-rw-r--r--  doc/cephfs/multimds.rst  286
-rw-r--r--  doc/cephfs/nfs.rst  98
-rw-r--r--  doc/cephfs/posix.rst  104
-rw-r--r--  doc/cephfs/quota.rst  111
-rw-r--r--  doc/cephfs/recover-fs-after-mon-store-loss.rst  51
-rw-r--r--  doc/cephfs/scrub.rst  156
-rw-r--r--  doc/cephfs/snap-schedule.rst  209
-rw-r--r--  doc/cephfs/standby.rst  187
-rw-r--r--  doc/cephfs/subtree-partitioning.svg  1
-rw-r--r--  doc/cephfs/troubleshooting.rst  429
-rw-r--r--  doc/cephfs/upgrading.rst  90
53 files changed, 7708 insertions, 0 deletions
diff --git a/doc/cephfs/add-remove-mds.rst b/doc/cephfs/add-remove-mds.rst
new file mode 100644
index 000000000..4f5ee06aa
--- /dev/null
+++ b/doc/cephfs/add-remove-mds.rst
@@ -0,0 +1,118 @@
+.. _cephfs_add_remote_mds:
+
+.. note::
+ It is highly recommended to use :doc:`/cephadm/index` or another Ceph
+ orchestrator to set up the Ceph cluster. Use the approach below only if you
+ are setting up the cluster manually. If you still intend to deploy MDS
+ daemons manually, :doc:`/cephadm/services/mds/` can also be consulted.
+
+============================
+ Deploying Metadata Servers
+============================
+
+Each CephFS file system requires at least one MDS. The cluster operator will
+generally use their automated deployment tool to launch required MDS servers as
+needed. Rook and Ansible (via the ceph-ansible playbooks) are recommended
+tools for doing this. For clarity, we also show the systemd commands here,
+which may be run by the deployment technology if executed on bare metal.
+
+See `MDS Config Reference`_ for details on configuring metadata servers.
+
+
+Provisioning Hardware for an MDS
+================================
+
+The present version of the MDS is single-threaded and CPU-bound for most
+activities, including responding to client requests. An MDS under the most
+aggressive client loads uses about 2 to 3 CPU cores. This is due to the other
+miscellaneous upkeep threads working in tandem.
+
+Even so, it is recommended that an MDS server be well provisioned with an
+advanced CPU with sufficient cores. Development is on-going to make better use
+of available CPU cores in the MDS; it is expected in future versions of Ceph
+that the MDS server will improve performance by taking advantage of more cores.
+
+The other dimension to MDS performance is the available RAM for caching. The
+MDS necessarily manages a distributed and cooperative metadata cache among all
+clients and other active MDSs. Therefore it is essential to provide the MDS
+with sufficient RAM to enable faster metadata access and mutation. The default
+MDS cache size (see also :doc:`/cephfs/cache-configuration`) is 4GB. It is
+recommended to provision at least 8GB of RAM for the MDS to support this cache
+size.
+
+Generally, an MDS serving a large cluster of clients (1000 or more) will use at
+least 64GB of cache. An MDS with a larger cache is not well explored in the
+largest known community clusters; there may be diminishing returns where
+management of such a large cache negatively impacts performance in surprising
+ways. It would be best to do analysis with expected workloads to determine if
+provisioning more RAM is worthwhile.
+
+In a bare-metal cluster, the best practice is to over-provision hardware for
+the MDS server. Even if a single MDS daemon is unable to fully utilize the
+hardware, it may be desirable later on to start more active MDS daemons on the
+same node to fully utilize the available cores and memory. Additionally, it may
+become clear with workloads on the cluster that performance improves with
+multiple active MDS on the same node rather than over-provisioning a single
+MDS.
+
+Finally, be aware that CephFS achieves high availability by supporting standby
+MDS daemons (see also :ref:`mds-standby`) for rapid failover. To get a real
+benefit from deploying standbys, it is usually necessary to distribute MDS
+daemons across at least two nodes in the cluster. Otherwise, a hardware failure
+on a single node may result in the file system becoming unavailable.
+
+Co-locating the MDS with other Ceph daemons (hyperconverged) is an effective
+and recommended way to accomplish this so long as all daemons are configured to
+use available hardware within certain limits. For the MDS, this generally
+means limiting its cache size.
+
+
+Adding an MDS
+=============
+
+#. Create an MDS data directory ``/var/lib/ceph/mds/ceph-${id}``. The daemon only uses this directory to store its keyring.
+
+#. Create the authentication key, if you use CephX: ::
+
+ $ sudo ceph auth get-or-create mds.${id} mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' > /var/lib/ceph/mds/ceph-${id}/keyring
+
+#. Start the service: ::
+
+ $ sudo systemctl start ceph-mds@${id}
+
+#. The status of the cluster should show: ::
+
+ mds: ${id}:1 {0=${id}=up:active} 2 up:standby
+
+#. Optionally, configure the file system the MDS should join (:ref:`mds-join-fs`): ::
+
+ $ ceph config set mds.${id} mds_join_fs ${fs}
+
+
+Removing an MDS
+===============
+
+If you have a metadata server in your cluster that you'd like to remove, you may use
+the following method.
+
+#. (Optional) Create a new replacement metadata server. If there is no
+ replacement MDS to take over once this one is removed, the file system will
+ become unavailable to clients. If that is not desirable, consider adding a
+ replacement metadata server before tearing down the metadata server you
+ would like to take offline.
+
+#. Stop the MDS to be removed. ::
+
+ $ sudo systemctl stop ceph-mds@${id}
+
+ The MDS will automatically notify the Ceph monitors that it is going down.
+ This enables the monitors to perform instantaneous failover to an available
+ standby, if one exists. It is unnecessary to use administrative commands to
+ effect this failover, e.g. through the use of ``ceph mds fail mds.${id}``.
+
+#. Remove the ``/var/lib/ceph/mds/ceph-${id}`` directory on the MDS. ::
+
+ $ sudo rm -rf /var/lib/ceph/mds/ceph-${id}
+
+.. _MDS Config Reference: ../mds-config-ref
diff --git a/doc/cephfs/administration.rst b/doc/cephfs/administration.rst
new file mode 100644
index 000000000..cd912b42a
--- /dev/null
+++ b/doc/cephfs/administration.rst
@@ -0,0 +1,392 @@
+.. _cephfs-administration:
+
+CephFS Administrative commands
+==============================
+
+File Systems
+------------
+
+.. note:: The names of the file systems, metadata pools, and data pools can
+ only have characters in the set [a-zA-Z0-9\_-.].
+
+These commands operate on the CephFS file systems in your Ceph cluster.
+Note that by default only one file system is permitted: to enable
+creation of multiple file systems use ``ceph fs flag set enable_multiple true``.
+
+::
+
+ ceph fs new <file system name> <metadata pool name> <data pool name>
+
+This command creates a new file system. The file system name and metadata pool
+name are self-explanatory. The specified data pool is the default data pool and
+cannot be changed once set. Each file system has its own set of MDS daemons
+assigned to ranks so ensure that you have sufficient standby daemons available
+to accommodate the new file system.
+
+::
+
+ ceph fs ls
+
+List all file systems by name.
+
+::
+
+ ceph fs lsflags <file system name>
+
+List all the flags set on a file system.
+
+::
+
+ ceph fs dump [epoch]
+
+This dumps the FSMap at the given epoch (default: current) which includes all
+file system settings, MDS daemons and the ranks they hold, and the list of
+standby MDS daemons.
+
+
+::
+
+ ceph fs rm <file system name> [--yes-i-really-mean-it]
+
+Destroy a CephFS file system. This wipes information about the state of the
+file system from the FSMap. The metadata pool and data pools are untouched and
+must be destroyed separately.
+
+::
+
+ ceph fs get <file system name>
+
+Get information about the named file system, including settings and ranks. This
+is a subset of the same information from the ``ceph fs dump`` command.
+
+::
+
+ ceph fs set <file system name> <var> <val>
+
+Change a setting on a file system. These settings are specific to the named
+file system and do not affect other file systems.
+
+::
+
+ ceph fs add_data_pool <file system name> <pool name/id>
+
+Add a data pool to the file system. This pool can be used for file layouts
+as an alternate location to store file data.
+
+::
+
+ ceph fs rm_data_pool <file system name> <pool name/id>
+
+This command removes the specified pool from the list of data pools for the
+file system. If any files have layouts for the removed data pool, the file
+data will become unavailable. The default data pool (when creating the file
+system) cannot be removed.
+
+::
+
+ ceph fs rename <file system name> <new file system name> [--yes-i-really-mean-it]
+
+Rename a Ceph file system. This also changes the application tags on the data
+pools and metadata pool of the file system to the new file system name.
+The CephX IDs authorized for the old file system name need to be reauthorized
+for the new name. Any ongoing client operations using these IDs may be
+disrupted. Mirroring is expected to be disabled on the file system.
+
+
+Settings
+--------
+
+::
+
+ ceph fs set <fs name> max_file_size <size in bytes>
+
+CephFS has a configurable maximum file size, which is 1TB by default.
+You may wish to set this limit higher if you expect to store large files
+in CephFS. It is a 64-bit field.
+
+Setting ``max_file_size`` to 0 does not disable the limit. It would
+simply limit clients to only creating empty files.
+
+
+Maximum file sizes and performance
+----------------------------------
+
+CephFS enforces the maximum file size limit at the point of appending to
+files or setting their size. It does not affect how anything is stored.
+
+When a user creates a file of enormous size (without necessarily
+writing any data to it), some operations (such as deletes) cause the MDS
+to perform a large number of operations in order to check whether any of
+the RADOS objects that could exist within the range implied by the file
+size actually exist.
+
+The ``max_file_size`` setting prevents users from creating files that
+appear to be, e.g., exabytes in size, causing load on the MDS as it tries
+to enumerate the objects during operations like stats or deletes.
+
+
+Taking the cluster down
+-----------------------
+
+Taking a CephFS cluster down is done by setting the down flag:
+
+::
+
+ ceph fs set <fs_name> down true
+
+To bring the cluster back online:
+
+::
+
+ ceph fs set <fs_name> down false
+
+This will also restore the previous value of ``max_mds``. MDS daemons are
+brought down in a way such that journals are flushed to the metadata pool
+and all client I/O is stopped.
+
+
+Taking the cluster down rapidly for deletion or disaster recovery
+-----------------------------------------------------------------
+
+To allow rapidly deleting a file system (for testing) or to quickly bring the
+file system and MDS daemons down, use the ``ceph fs fail`` command:
+
+::
+
+ ceph fs fail <fs_name>
+
+This command sets a file system flag to prevent standbys from
+activating on the file system (the ``joinable`` flag).
+
+This process can also be done manually by doing the following:
+
+::
+
+ ceph fs set <fs_name> joinable false
+
+Then the operator can fail all of the ranks which causes the MDS daemons to
+respawn as standbys. The file system will be left in a degraded state.
+
+::
+
+ # For all ranks, 0-N:
+ ceph mds fail <fs_name>:<n>
+
+Once all ranks are inactive, the file system may also be deleted or left in
+this state for other purposes (perhaps disaster recovery).
+
+To bring the cluster back up, simply set the joinable flag:
+
+::
+
+ ceph fs set <fs_name> joinable true
+
+
+Daemons
+-------
+
+Most commands manipulating MDSs take a ``<role>`` argument which can take one
+of three forms:
+
+::
+
+ <fs_name>:<rank>
+ <fs_id>:<rank>
+ <rank>
+
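As a sketch, the three accepted role forms could be parsed like this (illustrative only; the helper below is hypothetical and the real parsing happens inside the Ceph monitors):

```python
def parse_role(role: str):
    """Parse a <role> of the form <fs_name>:<rank>, <fs_id>:<rank>, or <rank>.

    Returns (fs, rank); fs is None when only a bare rank is given, and is
    left as a string since it may be either a file system name or id.
    """
    if ":" in role:
        fs, _, rank = role.partition(":")
        return fs, int(rank)
    return None, int(role)

print(parse_role("cephfs:0"))  # ('cephfs', 0)
print(parse_role("1"))         # (None, 1)
```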
+Commands to manipulate MDS daemons:
+
+::
+
+ ceph mds fail <gid/name/role>
+
+Mark an MDS daemon as failed. This is equivalent to what the cluster
+would do if an MDS daemon had failed to send a message to the mon
+for ``mds_beacon_grace`` seconds. If the daemon was active and a suitable
+standby is available, using ``ceph mds fail`` will force a failover to the
+standby.
+
+If the MDS daemon was in reality still running, then using ``ceph mds fail``
+will cause the daemon to restart. If it was active and a standby was
+available, then the "failed" daemon will return as a standby.
+
+
+::
+
+ ceph tell mds.<daemon name> command ...
+
+Send a command to the MDS daemon(s). Use ``mds.*`` to send a command to all
+daemons. Use ``ceph tell mds.* help`` to learn available commands.
+
+::
+
+ ceph mds metadata <gid/name/role>
+
+Get metadata about the given MDS known to the Monitors.
+
+::
+
+ ceph mds repaired <role>
+
+Mark the file system rank as repaired. Despite what the name suggests, this
+command does not change an MDS; it manipulates the file system rank which has
+been marked damaged.
+
+
+Required Client Features
+------------------------
+
+It is sometimes desirable to set features that clients must support to talk to
+CephFS. Clients without those features may disrupt other clients or behave in
+surprising ways. Or, you may want to require newer features to prevent older
+and possibly buggy clients from connecting.
+
+Commands to manipulate required client features of a file system:
+
+::
+
+ ceph fs required_client_features <fs name> add reply_encoding
+ ceph fs required_client_features <fs name> rm reply_encoding
+
+To list all CephFS features:
+
+::
+
+ ceph fs feature ls
+
+Clients that are missing newly added features will be evicted automatically.
+
+Here are the current CephFS features and the first release in which they appeared:
+
++------------------+--------------+-----------------+
+| Feature | Ceph release | Upstream Kernel |
++==================+==============+=================+
+| jewel | jewel | 4.5 |
++------------------+--------------+-----------------+
+| kraken | kraken | 4.13 |
++------------------+--------------+-----------------+
+| luminous | luminous | 4.13 |
++------------------+--------------+-----------------+
+| mimic | mimic | 4.19 |
++------------------+--------------+-----------------+
+| reply_encoding | nautilus | 5.1 |
++------------------+--------------+-----------------+
+| reclaim_client | nautilus | N/A |
++------------------+--------------+-----------------+
+| lazy_caps_wanted | nautilus | 5.1 |
++------------------+--------------+-----------------+
+| multi_reconnect | nautilus | 5.1 |
++------------------+--------------+-----------------+
+| deleg_ino | octopus | 5.6 |
++------------------+--------------+-----------------+
+| metric_collect | pacific | N/A |
++------------------+--------------+-----------------+
+| alternate_name | pacific | PLANNED |
++------------------+--------------+-----------------+
+
+CephFS Feature Descriptions
+
+
+::
+
+ reply_encoding
+
+The MDS encodes request replies in an extensible format if the client supports this feature.
+
+
+::
+
+ reclaim_client
+
+The MDS allows a new client to reclaim another (dead) client's state. This
+feature is used by NFS-Ganesha.
+
+
+::
+
+ lazy_caps_wanted
+
+When a stale client resumes, the MDS only needs to re-issue the caps that are
+explicitly wanted, provided the client supports this feature.
+
+
+::
+
+ multi_reconnect
+
+When an MDS fails over, clients send reconnect messages to the new MDS to
+reestablish cache state. If the MDS supports this feature, a client can split
+a large reconnect message into multiple pieces.
+
+
+::
+
+ deleg_ino
+
+The MDS delegates inode numbers to the client if the client supports this
+feature. Holding delegated inode numbers is a prerequisite for the client to
+perform asynchronous file creation.
+
+
+::
+
+ metric_collect
+
+Clients can send performance metrics to the MDS if the MDS supports this feature.
+
+::
+
+ alternate_name
+
+Clients can set and understand "alternate names" for directory entries. This is
+to be used for encrypted file name support.
+
+
+Global settings
+---------------
+
+
+::
+
+ ceph fs flag set <flag name> <flag val> [<confirmation string>]
+
+Sets a global CephFS flag (i.e. one that is not specific to a particular file
+system). Currently, the only such flag is ``enable_multiple``, which allows
+having multiple CephFS file systems.
+
+Some flags require you to confirm your intentions with
+``--yes-i-really-mean-it`` or a similar string they will prompt you with.
+Consider these actions carefully before proceeding; they are placed on
+especially dangerous activities.
+
+.. _advanced-cephfs-admin-settings:
+
+Advanced
+--------
+
+These commands are not required in normal operation, and exist
+for use in exceptional circumstances. Incorrect use of these
+commands may cause serious problems, such as an inaccessible
+file system.
+
+::
+
+ ceph mds rmfailed
+
+This removes a rank from the failed set.
+
+::
+
+ ceph fs reset <file system name>
+
+This command resets the file system state to defaults, except for the name and
+pools. Non-zero ranks are saved in the stopped set.
+
+
+::
+
+ ceph fs new <file system name> <metadata pool name> <data pool name> --fscid <fscid> --force
+
+This command creates a file system with a specific **fscid** (file system cluster ID).
+You may want to do this when an application expects the file system's ID to be
+stable after it has been recovered, e.g., after monitor databases are lost and
+rebuilt. Consequently, file system IDs don't always keep increasing with newer
+file systems.
diff --git a/doc/cephfs/api/index.rst b/doc/cephfs/api/index.rst
new file mode 100644
index 000000000..bb6250200
--- /dev/null
+++ b/doc/cephfs/api/index.rst
@@ -0,0 +1,11 @@
+.. _cephfs api:
+
+============
+ CephFS APIs
+============
+
+.. toctree::
+ :maxdepth: 2
+
+ libcephfs (Java) <libcephfs-java>
+ libcephfs (Python) <libcephfs-py>
diff --git a/doc/cephfs/api/libcephfs-java.rst b/doc/cephfs/api/libcephfs-java.rst
new file mode 100644
index 000000000..83b5a6638
--- /dev/null
+++ b/doc/cephfs/api/libcephfs-java.rst
@@ -0,0 +1,18 @@
+===================
+Libcephfs (JavaDoc)
+===================
+
+.. warning::
+
+ CephFS Java bindings are no longer tested by CI. They may not work properly
+ or corrupt data.
+
+ Developers interested in reviving these bindings by fixing and writing tests
+ are encouraged to contribute!
+
+..
+ The admin/build-docs script runs Ant to build the JavaDoc files, and
+ copies them to api/libcephfs-java/javadoc/.
+
+
+View the auto-generated `JavaDoc pages for the CephFS Java bindings <javadoc/>`_.
diff --git a/doc/cephfs/api/libcephfs-py.rst b/doc/cephfs/api/libcephfs-py.rst
new file mode 100644
index 000000000..039401f22
--- /dev/null
+++ b/doc/cephfs/api/libcephfs-py.rst
@@ -0,0 +1,13 @@
+===================
+ LibCephFS (Python)
+===================
+
+.. highlight:: python
+
+The ``cephfs`` Python module provides access to the CephFS service.
+
+API calls
+=========
+
+.. automodule:: cephfs
+ :members: DirEntry, DirResult, LibCephFS
diff --git a/doc/cephfs/app-best-practices.rst b/doc/cephfs/app-best-practices.rst
new file mode 100644
index 000000000..50bd3b689
--- /dev/null
+++ b/doc/cephfs/app-best-practices.rst
@@ -0,0 +1,82 @@
+
+Application best practices for distributed file systems
+=======================================================
+
+CephFS is POSIX compatible, and therefore should work with any existing
+applications that expect a POSIX file system. However, because it is a
+network file system (unlike e.g. XFS) and it is highly consistent (unlike
+e.g. NFS), there are some consequences that application authors may
+benefit from knowing about.
+
+The following sections describe some areas where distributed file systems
+may have noticeably different performance behaviours compared with
+local file systems.
+
+
+ls -l
+-----
+
+When you run "ls -l", the ``ls`` program
+is first doing a directory listing, and then calling ``stat`` on every
+file in the directory.
+
+This is usually far in excess of what an application really needs, and
+it can be slow for large directories. If you don't really need all
+this metadata for each file, then use a plain ``ls``.
+
+ls/stat on files being extended
+-------------------------------
+
+If another client is currently extending files in the listed directory,
+then an ``ls -l`` may take an exceptionally long time to complete, as
+the lister must wait for the writer to flush data in order to do a valid
+read of every file's size. So unless you *really* need to know the
+exact size of every file in the directory, just don't do it!
+
+This would also apply to any application code that was directly
+issuing ``stat`` system calls on files being appended from
+another node.
+
+Very large directories
+----------------------
+
+Do you really need that 10,000,000 file directory? While directory
+fragmentation enables CephFS to handle it, it is always going to be
+less efficient than splitting your files into more modest-sized directories.
+
+Even standard userspace tools can become quite slow when operating on very
+large directories. For example, the default behaviour of ``ls``
+is to give an alphabetically ordered result, but ``readdir`` system
+calls do not give an ordered result (this is true in general, not just
+with CephFS). So when you run ``ls`` on a million-file directory, it loads
+a list of a million names into memory, sorts the list, and then writes it
+out to the display.
+
+Hard links
+----------
+
+Hard links have an intrinsic cost in terms of the internal housekeeping
+that a file system has to do to keep two references to the same data. In
+CephFS there is a particular performance cost: with normal files the inode
+is embedded in the directory (i.e. there is no extra fetch of the inode
+after looking up the path), an optimization that hard-linked files forgo.
+
+Working set size
+----------------
+
+The MDS acts as a cache for the metadata stored in RADOS. Metadata
+performance is very different depending on whether a workload's metadata
+fits within that cache.
+
+If your workload has more files than fit in your cache (configured using
+``mds_cache_memory_limit`` settings), then make sure you test it
+appropriately: don't test your system with a small number of files and then
+expect equivalent performance when you move to a much larger number of files.
+
+Do you need a file system?
+--------------------------
+
+Remember that Ceph also includes an object storage interface. If your
+application needs to store huge flat collections of files where you just
+read and write whole files at once, then you might well be better off
+using the :ref:`Object Gateway <object-gateway>`.
diff --git a/doc/cephfs/cache-configuration.rst b/doc/cephfs/cache-configuration.rst
new file mode 100644
index 000000000..3fc757005
--- /dev/null
+++ b/doc/cephfs/cache-configuration.rst
@@ -0,0 +1,211 @@
+=======================
+MDS Cache Configuration
+=======================
+
+The Metadata Server coordinates a distributed cache among all MDS and CephFS
+clients. The cache serves to improve metadata access latency and allow clients
+to safely (coherently) mutate metadata state (e.g. via `chmod`). The MDS issues
+**capabilities** and **directory entry leases** to indicate what state clients
+may cache and what manipulations clients may perform (e.g. writing to a file).
+
+The MDS and clients both try to enforce a cache size. The mechanism for
+specifying the MDS cache size is described below. Note that the MDS cache size
+is not a hard limit. The MDS always allows clients to lookup new metadata
+which is loaded into the cache. This is an essential policy as it avoids
+deadlock in client requests (some requests may rely on held capabilities before
+capabilities are released).
+
+When the MDS cache is too large, the MDS will **recall** client state so cache
+items become unpinned and eligible to be dropped. The MDS can only drop cache
+state when no clients refer to the metadata to be dropped. Also described below
+is how to configure the MDS recall settings for your workload's needs. This is
+necessary if the internal throttles on the MDS recall cannot keep up with the
+client workload.
+
+
+MDS Cache Size
+--------------
+
+You can limit the size of the Metadata Server (MDS) cache by a byte count. This
+is done through the `mds_cache_memory_limit` configuration:
+
+.. confval:: mds_cache_memory_limit
+
+In addition, you can specify a cache reservation by using the
+`mds_cache_reservation` parameter for MDS operations:
+
+.. confval:: mds_cache_reservation
+
+The cache reservation is
+limited as a percentage of the memory and is set to 5% by default. The intent
+of this parameter is to have the MDS maintain an extra reserve of memory for
+its cache for new metadata operations to use. As a consequence, the MDS should
+in general operate below its memory limit because it will recall old state from
+clients in order to drop unused metadata in its cache.
+
+If the MDS cannot keep its cache under the target size, the MDS will send a
+health alert to the Monitors indicating the cache is too large. This is
+controlled by the `mds_health_cache_threshold` configuration which is by
+default 150% of the maximum cache size:
+
+.. confval:: mds_health_cache_threshold
+
+Because the cache limit is not a hard limit, potential bugs in the CephFS
+client, MDS, or misbehaving applications might cause the MDS to exceed its
+cache size. The health warnings are intended to help the operator detect this
+situation and make necessary adjustments or investigate buggy clients.
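As a numeric sketch of the two bounds just described (assuming the reservation is simply a fraction held back from the limit, and using the stated defaults of 5% and 150%):

```python
def mds_cache_bounds(limit_bytes, reservation=0.05, health_threshold=1.5):
    """Return (working target, health-warning size) for an MDS cache limit.

    The MDS tries to keep usage below limit * (1 - reservation); the
    monitors raise a health warning once usage exceeds
    limit * health_threshold (defaults taken from the text above).
    """
    return int(limit_bytes * (1 - reservation)), int(limit_bytes * health_threshold)

target, warn = mds_cache_bounds(4 * 2**30)  # the default 4GB cache limit
# target is ~3.8 GiB of working cache; warn is exactly 6 GiB (1.5 * 4 GiB)
print(target, warn)
```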
+
+MDS Cache Trimming
+------------------
+
+There are two configurations for throttling the rate of cache trimming in the MDS:
+
+.. confval:: mds_cache_trim_threshold
+
+.. confval:: mds_cache_trim_decay_rate
+
+The intent of the throttle is to prevent the MDS from spending too much time
+trimming its cache. This may limit its ability to handle client requests or
+perform other upkeep.
+
+The trim configurations control an internal **decay counter**. Anytime metadata
+is trimmed from the cache, the counter is incremented. The threshold sets the
+maximum size of the counter while the decay rate indicates the exponential half
+life for the counter. If the MDS is continually removing items from its cache,
+it will reach a steady state of ``-ln(0.5)/rate*threshold`` items removed per
+second.
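The steady-state expression above can be checked numerically. A minimal sketch (the threshold and half-life values below are illustrative, not the shipped defaults):

```python
import math

def max_sustained_rate(threshold, decay_half_life):
    """Items/sec at which a decay counter sits exactly at its threshold.

    The counter decays exponentially with the given half-life, so at
    equilibrium: increments/sec = ln(2) / half_life * threshold, which is
    the -ln(0.5)/rate*threshold expression from the text.
    """
    return -math.log(0.5) / decay_half_life * threshold

# e.g. a threshold of 64K items with a 1-second half-life sustains
# roughly 45K trims per second before the throttle engages; doubling
# the half-life (a higher decay rate setting) halves the sustained rate.
print(round(max_sustained_rate(64 * 1024, 1.0)))
```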
+
+.. note:: Increasing the value of the configuration setting
+ ``mds_cache_trim_decay_rate`` leads to the MDS spending less time
+ trimming the cache. To increase the cache trimming rate, set a lower
+ value.
+
+The defaults are conservative and may need to be changed for production MDS with
+large cache sizes.
+
+
+MDS Recall
+----------
+
+MDS limits its recall of client state (capabilities/leases) to prevent creating
+too much work for itself handling release messages from clients. This is controlled
+via the following configurations:
+
+
+The maximum number of capabilities to recall from a single client in a given recall
+event:
+
+.. confval:: mds_recall_max_caps
+
+The threshold and decay rate for the decay counter on a session:
+
+.. confval:: mds_recall_max_decay_threshold
+
+.. confval:: mds_recall_max_decay_rate
+
+The session decay counter controls the rate of recall for an individual
+session. The behavior of the counter works the same as for cache trimming
+above. Each capability that is recalled increments the counter.
+
+There is also a global decay counter that throttles for all session recall:
+
+.. confval:: mds_recall_global_max_decay_threshold
+
+Its decay rate is the same as ``mds_recall_max_decay_rate``. Any recalled
+capability for any session also increments this counter.
+
+If clients are slow to release state, the warning "failing to respond to cache
+pressure" or ``MDS_HEALTH_CLIENT_RECALL`` will be reported. Each session's rate
+of release is monitored by another decay counter configured by:
+
+.. confval:: mds_recall_warning_threshold
+
+.. confval:: mds_recall_warning_decay_rate
+
+Each time a capability is released, the counter is incremented. If clients do
+not release capabilities quickly enough and there is cache pressure, the
+counter will indicate if the client is slow to release state.
+
+Some workloads and client behaviors may require faster recall of client state
+to keep up with capability acquisition. It is recommended to increase the above
+counters as needed to resolve any slow recall warnings in the cluster health
+state.
+
+
+MDS Cap Acquisition Throttle
+----------------------------
+
+A trivial "find" command on a large directory hierarchy will cause the client
+to receive caps significantly faster than it will release. The MDS will try
+to have the client reduce its caps below the ``mds_max_caps_per_client`` limit
+but the recall throttles prevent it from catching up to the pace of acquisition.
+So the readdir is throttled to control cap acquisition via the following
+configurations:
+
+
+The threshold and decay rate for the readdir cap acquisition decay counter:
+
+.. confval:: mds_session_cap_acquisition_throttle
+
+.. confval:: mds_session_cap_acquisition_decay_rate
+
+The cap acquisition decay counter controls the rate of cap acquisition via
+readdir. The behavior of the decay counter is the same as for cache trimming or
+caps recall. Each readdir call increments the counter by the number of files in
+the result.
+
+.. confval:: mds_session_max_caps_throttle_ratio
+
+.. confval:: mds_cap_acquisition_throttle_retry_request_timeout
+
+If the number of caps held by the client's session is greater than
+``mds_session_max_caps_throttle_ratio`` multiplied by ``mds_max_caps_per_client``,
+and the cap acquisition decay counter is greater than
+``mds_session_cap_acquisition_throttle``, the readdir is throttled. The readdir
+request is retried after ``mds_cap_acquisition_throttle_retry_request_timeout``
+seconds.
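The throttle decision described above can be sketched as follows. This is a simplified model, not the MDS source; the function name and the default values shown are illustrative assumptions.

```python
def readdir_throttled(session_caps, max_caps_per_client,
                      acquisition_counter,
                      throttle_ratio=1.1,
                      acquisition_throttle=500_000):
    """Return True when a readdir should be throttled (hypothetical sketch).

    Throttle only when BOTH conditions hold, mirroring the text above:
    the session holds too many caps relative to the configured limit AND
    the readdir cap acquisition decay counter is above its threshold.
    """
    over_cap_limit = session_caps > throttle_ratio * max_caps_per_client
    over_acquisition = acquisition_counter > acquisition_throttle
    return over_cap_limit and over_acquisition
```

A throttled readdir is not failed; as described above, it is retried after ``mds_cap_acquisition_throttle_retry_request_timeout`` seconds.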
+
+
+Session Liveness
+----------------
+
+The MDS also keeps track of whether sessions are quiescent. If a client session
+is not utilizing its capabilities or is otherwise quiet, the MDS will begin
+recalling state from the session even if it's not under cache pressure. This
+helps the MDS avoid future work when the cluster workload is hot and cache
+pressure is forcing the MDS to recall state. The expectation is that a client
+not utilizing its capabilities is unlikely to use those capabilities anytime
+in the near future.
+
+Determining whether a given session is quiescent is controlled by the following
+configuration variables:
+
+.. confval:: mds_session_cache_liveness_magnitude
+
+.. confval:: mds_session_cache_liveness_decay_rate
+
+The configuration ``mds_session_cache_liveness_decay_rate`` indicates the
+half-life for the decay counter tracking the use of capabilities by the client.
+Each time a client manipulates or acquires a capability, the MDS will increment
+the counter. This is a rough but effective way to monitor the utilization of the
+client cache.
+
+``mds_session_cache_liveness_magnitude`` sets a base-2 magnitude difference
+between the liveness decay counter and the number of capabilities outstanding
+for the session. For example, if the client has ``1*2^20`` (1M) capabilities
+outstanding but uses fewer than ``1*2^(20-mds_session_cache_liveness_magnitude)``
+(1K with the defaults), the MDS will consider the client quiescent and begin
+recall.
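The quiescence test can be written as a one-line check. This is a hypothetical helper, not MDS code; the default magnitude of 10 is assumed from the 1M/1K example above.

```python
def session_is_quiescent(caps_outstanding, liveness, magnitude=10):
    """Hypothetical sketch of the MDS liveness check.

    A session is considered quiescent when its liveness decay counter is
    below the outstanding cap count scaled down by 2**magnitude.
    """
    return liveness < caps_outstanding / (2 ** magnitude)


# With 2**20 (1M) caps outstanding and magnitude 10, the threshold is 1024.
idle = session_is_quiescent(2 ** 20, 500)    # activity well below threshold
busy = session_is_quiescent(2 ** 20, 2048)   # activity above threshold
```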
+
+
+Capability Limit
+----------------
+
+The MDS also tries to prevent a single client from acquiring too many
+capabilities. This helps prevent recovery from taking a long time in some
+situations. It is not generally necessary for a client to have such a large
+cache. The limit is configured via:
+
+.. confval:: mds_max_caps_per_client
+
+It is not recommended to set this value above 5M but it may be helpful with
+some workloads.
diff --git a/doc/cephfs/capabilities.rst b/doc/cephfs/capabilities.rst
new file mode 100644
index 000000000..79dfd3ace
--- /dev/null
+++ b/doc/cephfs/capabilities.rst
@@ -0,0 +1,182 @@
+======================
+Capabilities in CephFS
+======================
+When a client wants to operate on an inode, it queries the MDS in various
+ways, and the MDS grants the client a set of **capabilities**: permissions to
+operate on the inode in particular ways. One of the major differences from
+other network file systems (e.g. NFS or SMB) is that the capabilities granted
+are quite granular, and it's possible for multiple clients to hold different
+capabilities on the same inode.
+
+Types of Capabilities
+---------------------
+There are several "generic" capability bits. These denote what sort of ability
+the capability grants.
+
+::
+
+ /* generic cap bits */
+ #define CEPH_CAP_GSHARED 1 /* (metadata) client can read (s) */
+ #define CEPH_CAP_GEXCL 2 /* (metadata) client can read and update (x) */
+ #define CEPH_CAP_GCACHE 4 /* (file) client can cache reads (c) */
+ #define CEPH_CAP_GRD 8 /* (file) client can read (r) */
+ #define CEPH_CAP_GWR 16 /* (file) client can write (w) */
+ #define CEPH_CAP_GBUFFER 32 /* (file) client can buffer writes (b) */
+ #define CEPH_CAP_GWREXTEND 64 /* (file) client can extend EOF (a) */
+ #define CEPH_CAP_GLAZYIO 128 /* (file) client can perform lazy io (l) */
+
+These are then shifted by a particular number of bits. These denote a part of
+the inode's data or metadata on which the capability is being granted:
+
+::
+
+ /* per-lock shift */
+ #define CEPH_CAP_SAUTH 2 /* A */
+ #define CEPH_CAP_SLINK 4 /* L */
+ #define CEPH_CAP_SXATTR 6 /* X */
+ #define CEPH_CAP_SFILE 8 /* F */
+
+Only certain generic cap types are ever granted for some of those "shifts",
+however. In particular, only the FILE shift ever has more than the first two
+bits.
+
+::
+
+ | AUTH | LINK | XATTR | FILE
+ 2 4 6 8
+
+From the above we get a number of constants, generated by shifting each bit
+value to the correct position in the word:
+
+::
+
+ #define CEPH_CAP_AUTH_SHARED (CEPH_CAP_GSHARED << CEPH_CAP_SAUTH)
+
+These bits can then be or'ed together to make a bitmask denoting a set of
+capabilities.
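The shift arithmetic can be reproduced in Python to show how the constants come out. The constant names mirror the C defines quoted above; ``CEPH_CAP_FILE_RD`` and ``CEPH_CAP_FILE_WR`` are assumed analogues of the ``CEPH_CAP_AUTH_SHARED`` example.

```python
# Generic cap bits (from the C defines above).
CEPH_CAP_GSHARED = 1
CEPH_CAP_GRD = 8
CEPH_CAP_GWR = 16

# Per-lock shifts (from the C defines above).
CEPH_CAP_SAUTH = 2
CEPH_CAP_SFILE = 8

# Shift each generic bit into its lock's bit range.
CEPH_CAP_AUTH_SHARED = CEPH_CAP_GSHARED << CEPH_CAP_SAUTH  # 1 << 2 = 4
CEPH_CAP_FILE_RD = CEPH_CAP_GRD << CEPH_CAP_SFILE          # 8 << 8 = 2048
CEPH_CAP_FILE_WR = CEPH_CAP_GWR << CEPH_CAP_SFILE          # 16 << 8 = 4096

# Or'ing bits together builds a capability mask, e.g. file read+write:
mask = CEPH_CAP_FILE_RD | CEPH_CAP_FILE_WR                 # 6144
```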
+
+There is one exception:
+
+::
+
+ #define CEPH_CAP_PIN 1 /* no specific capabilities beyond the pin */
+
+The "pin" just pins the inode into memory, without granting any other caps.
+
+Graphically:
+
+::
+
+ +---+---+---+---+---+---+---+---+
+ | p | _ |As x |Ls x |Xs x |
+ +---+---+---+---+---+---+---+---+
+ |Fs x c r w b a l |
+ +---+---+---+---+---+---+---+---+
+
+The second bit is currently unused.
+
+Abilities granted by each cap
+-----------------------------
+While that is how capabilities are granted (and communicated), the important
+bit is what they actually allow the client to do:
+
+* **PIN**: This just pins the inode into memory. This is sufficient to allow
+ the client to get to the inode number, as well as other immutable things like
+ major or minor numbers in a device inode, or symlink contents.
+
+* **AUTH**: This grants the ability to get to the authentication-related metadata.
+ In particular, the owner, group and mode. Note that doing a full permission
+ check may require getting at ACLs as well, which are stored in xattrs.
+
+* **LINK**: The link count of the inode.
+
+* **XATTR**: Ability to access or manipulate xattrs. Note that since ACLs are
+ stored in xattrs, it's also sometimes necessary to access them when checking
+ permissions.
+
+* **FILE**: This is the big one. This allows the client to access and manipulate
+ file data. It also covers certain metadata relating to file data -- the
+ size, mtime, atime and ctime, in particular.
+
+Shorthand
+---------
+Note that the client logging can also present a compact representation of the
+capabilities. For example:
+
+::
+
+ pAsLsXsFs
+
+The 'p' represents the pin. Each capital letter corresponds to the shift
+values, and the lowercase letters after each shift are for the actual
+capabilities granted in each shift.
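A hypothetical parser for this shorthand illustrates the encoding (not part of Ceph; for demonstration only):

```python
def parse_cap_string(s):
    """Split a cap string like 'pAsLsXsFs' into the pin flag and a map
    from each shift letter (A, L, X, F) to the lowercase caps held there.
    Hypothetical helper, for illustration only."""
    pinned = s.startswith("p")
    body = s[1:] if pinned else s
    caps = {}
    current = None
    for ch in body:
        if ch.isupper():
            current = ch          # start of a new shift (A, L, X, F)
            caps[current] = ""
        elif current is not None:
            caps[current] += ch   # lowercase cap granted in that shift
    return pinned, caps
```

For example, ``parse_cap_string("pAsLsXsFs")`` reports the pin plus the shared (``s``) cap in each of the AUTH, LINK, XATTR, and FILE shifts.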
+
+The relation between the lock states and the capabilities
+---------------------------------------------------------
+In the MDS there are four different locks for each inode: simplelock,
+scatterlock, filelock, and locallock. Each lock has several different lock
+states, and the MDS will issue capabilities to clients based on the lock
+state.
+
+In each state the MDS Locker will always try to issue all the allowed
+capabilities to the clients, even if some of them are not needed or wanted,
+because pre-issuing capabilities can reduce latency in some cases.
+
+If there is only one client, it will usually be the loner client for all the
+inodes. With multiple clients, the MDS will try to choose a loner client for
+each inode based on the capabilities the clients have requested (needed |
+wanted), but usually no loner can be chosen. The loner client always gets all
+the capabilities.
+
+The filelock controls access to part of the file's metadata and to the file
+contents. The covered metadata includes **mtime**, **atime**, and **size**.
+
+* **Fs**: Once a client has it, all other clients are denied **Fw**.
+
+* **Fx**: Only the loner client is allowed this capability. Once the lock state
+ transitions to LOCK_EXCL, the loner client is granted this along with all other
+ file capabilities except the **Fl**.
+
+* **Fr**: Once a client has it, the **Fb** capability will already have been
+  revoked from all the other clients.
+
+  If clients only request to read the file, the lock state will transition
+  directly to the LOCK_SYNC stable state. All the clients can be granted
+  **Fscrl** capabilities from the auth MDS and **Fscr** capabilities from the
+  replica MDSes.
+
+  If multiple clients read from and write to the same file, the lock state
+  will eventually transition to the LOCK_MIX stable state; all the clients can
+  then hold the **Frwl** capabilities from the auth MDS and **Fr** from the
+  replica MDSes. The **Fcb** capabilities will not be granted, so the clients
+  perform synchronous reads/writes.
+
+* **Fw**: If there is no loner client, then once a client has this capability
+  the **Fsxcb** capabilities will not be granted to other clients.
+
+  If multiple clients read from and write to the same file, the lock state
+  will eventually transition to the LOCK_MIX stable state; all the clients can
+  then hold the **Frwl** capabilities from the auth MDS and **Fr** from the
+  replica MDSes. The **Fcb** capabilities will not be granted, so the clients
+  perform synchronous reads/writes.
+
+* **Fc**: This capability means the client may cache file reads. It should be
+  issued together with the **Fr** capability, and only in that case does it
+  make sense.
+
+  In some stable or interim transitional states, however, the MDS tends to
+  keep **Fc** allowed even when the **Fr** capability is not granted, because
+  this avoids forcing clients to drop their entire cache, for example in a
+  simple file extension or truncation use case.
+
+* **Fb**: This capability means the client may buffer file writes. It should
+  be issued together with the **Fw** capability, and only in that case does it
+  make sense.
+
+  In some stable or interim transitional states, however, the MDS tends to
+  keep **Fb** allowed even when the **Fw** capability is not granted, because
+  this avoids forcing clients to drop dirty buffers, for example in a simple
+  file extension or truncation use case.
+
+* **Fl**: This capability means the client may perform lazy I/O. LazyIO
+  relaxes POSIX semantics: buffered reads/writes are allowed even when a file
+  is opened by multiple applications on multiple clients. Applications are
+  responsible for managing cache coherency themselves.
diff --git a/doc/cephfs/ceph-dokan.rst b/doc/cephfs/ceph-dokan.rst
new file mode 100644
index 000000000..b9fb6c592
--- /dev/null
+++ b/doc/cephfs/ceph-dokan.rst
@@ -0,0 +1,102 @@
+.. _ceph-dokan:
+
+=======================
+Mount CephFS on Windows
+=======================
+
+``ceph-dokan`` can be used to mount CephFS filesystems on Windows.
+It leverages Dokany, a Windows driver that allows implementing filesystems in
+userspace, much like FUSE.
+
+Please check the `installation guide`_ to get started.
+
+Usage
+=====
+
+Mounting filesystems
+--------------------
+
+To mount a Ceph filesystem, use the following command::
+
+ ceph-dokan.exe -c c:\ceph.conf -l x
+
+This will mount the default ceph filesystem using the drive letter ``x``.
+If ``ceph.conf`` is placed at the default location, which is
+``%ProgramData%\ceph\ceph.conf``, then this argument becomes optional.
+
+The ``-l`` argument also allows using an empty folder as a mountpoint
+instead of a drive letter.
+
+The uid and gid used for mounting the filesystem default to 0 and may be
+changed using the following ``ceph.conf`` options::
+
+ [client]
+ # client_permissions = true
+ client_mount_uid = 1000
+ client_mount_gid = 1000
+
+If you have more than one FS on your Ceph cluster, use the option
+``--client_fs`` to mount the non-default FS::
+
+ mkdir -Force C:\mnt\mycephfs2
+ ceph-dokan.exe --mountpoint C:\mnt\mycephfs2 --client_fs mycephfs2
+
+CephFS subdirectories can be mounted using the ``--root-path`` parameter::
+
+ ceph-dokan -l y --root-path /a
+
+If the ``-o --removable`` flags are set, the mounts will show up in the
+``Get-Volume`` results::
+
+ PS C:\> Get-Volume -FriendlyName "Ceph*" | `
+ Select-Object -Property @("DriveLetter", "Filesystem", "FilesystemLabel")
+
+ DriveLetter Filesystem FilesystemLabel
+ ----------- ---------- ---------------
+ Z Ceph Ceph
+ W Ceph Ceph - new_fs
+
+Please use ``ceph-dokan --help`` for a full list of arguments.
+
+Credentials
+-----------
+
+The ``--id`` option passes the name of the CephX user whose keyring we intend to
+use for mounting CephFS. The following commands are equivalent::
+
+ ceph-dokan --id foo -l x
+ ceph-dokan --name client.foo -l x
+
+Unmounting filesystems
+----------------------
+
+The mount can be removed by either issuing ctrl-c or using the unmap command,
+like so::
+
+ ceph-dokan.exe unmap -l x
+
+Note that when unmapping Ceph filesystems, the exact same mountpoint argument
+must be used as when the mapping was created.
+
+Limitations
+-----------
+
+Be aware that Windows ACLs are ignored. POSIX ACLs are supported but cannot be
+modified using the current CLI. In the future, we may add some command actions
+to change file ownership or permissions.
+
+Another thing to note is that CephFS doesn't support mandatory file locks,
+which Windows relies upon heavily. At the moment, we're letting Dokan handle
+file locks, which are only enforced locally.
+
+Unlike ``rbd-wnbd``, ``ceph-dokan`` doesn't currently provide a ``service``
+command. In order for the CephFS mount to survive host reboots, consider using
+``NSSM``.
+
+Troubleshooting
+===============
+
+Please consult the `Windows troubleshooting`_ page.
+
+.. _Windows troubleshooting: ../../install/windows-troubleshooting
+.. _installation guide: ../../install/windows-install
diff --git a/doc/cephfs/cephfs-architecture.svg b/doc/cephfs/cephfs-architecture.svg
new file mode 100644
index 000000000..e44c52a3f
--- /dev/null
+++ b/doc/cephfs/cephfs-architecture.svg
@@ -0,0 +1 @@
+<svg version="1.1" viewBox="0.0 0.0 784.4199475065617 481.56692913385825" fill="none" stroke="none" stroke-linecap="square" stroke-miterlimit="10" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg"><clipPath id="p.0"><path d="m0 0l784.4199 0l0 481.56693l-784.4199 0l0 -481.56693z" clip-rule="nonzero"/></clipPath><g clip-path="url(#p.0)"><path fill="#000000" fill-opacity="0.0" d="m0 0l784.4199 0l0 481.56693l-784.4199 0z" fill-rule="evenodd"/><path fill="#b6d7a8" d="m30.272966 79.10533l0 0c0 -31.388912 24.684267 -56.83465 55.133858 -56.83465l0 0c14.622414 0 28.64592 5.987919 38.985527 16.646484c10.3396 10.658562 16.14833 25.114677 16.14833 40.188164l0 0c0 31.388908 -24.684265 56.83464 -55.133858 56.83464l0 0c-30.449589 0 -55.133858 -25.445732 -55.133858 -56.83464z" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m30.272966 79.10533l0 0c0 -31.388912 24.684267 -56.83465 55.133858 -56.83465l0 0c14.622414 0 28.64592 5.987919 38.985527 16.646484c10.3396 10.658562 16.14833 25.114677 16.14833 40.188164l0 0c0 31.388908 -24.684265 56.83464 -55.133858 56.83464l0 0c-30.449589 0 -55.133858 -25.445732 -55.133858 -56.83464z" fill-rule="evenodd"/><path fill="#000000" d="m62.688076 85.08783q0.6875 0 1.3124962 -0.375q0.640625 -0.390625 1.03125 -1.03125l0.96875 0.625q-0.578125 0.953125 -1.359375 1.4375q-0.7812462 0.46875 -1.9062462 0.46875q-1.296875 0 -2.3125 -0.640625q-1.015625 -0.640625 -1.625 -1.96875q-0.609375 -1.328125 -0.609375 -3.328125q0 -2.140625 0.65625 -3.46875q0.65625 -1.34375 1.65625 -1.921875q1.0 -0.578125 2.171875 -0.578125q1.21875 0 2.1249962 0.625q0.90625 0.609375 1.375 1.671875l-1.125 0.515625q-0.015625 0 -0.015625 0q0 -0.015625 0 -0.015625q-0.484375 -0.96875 -1.0781212 -1.359375q-0.59375 -0.390625 -1.328125 -0.390625q-1.46875 0 -2.328125 1.28125q-0.84375 1.28125 -0.84375 3.546875q0 1.4375 0.421875 2.5625q0.4375 1.109375 1.171875 1.734375q0.734375 0.609375 1.640625 
0.609375zm2.1406212 -8.015625q0.03125 -0.046875 0.03125 -0.046875q0.046875 0 0.15625 0.125l-0.125 0.046875l-0.0625 -0.125zm0.21875 0.046875q0.078125 0.171875 -0.03125 0.03125l0.03125 -0.03125zm3.171875 8.90625l0 -1.078125l2.53125 0l0 -10.25l-2.421875 0l0 -1.078125l3.78125 0l0 11.328125l2.5 0l0 1.078125l-6.390625 0zm9.71875 0l0 -1.078125l2.1875076 0l0 -6.359375l-2.0625076 0l0 -1.09375l3.4062576 0l0 7.453125l2.0 0l0 1.078125l-5.5312576 0zm2.7812576 -10.3125q-0.390625 0 -0.671875 -0.28125q-0.28125 -0.28125 -0.28125 -0.671875q0 -0.40625 0.265625 -0.6875q0.28125 -0.28125 0.6875 -0.28125q0.390625 0 0.671875 0.296875q0.296875 0.28125 0.296875 0.671875q0 0.390625 -0.296875 0.671875q-0.28125 0.28125 -0.671875 0.28125zm9.765625 10.5q-1.90625 0 -3.03125 -1.15625q-1.125 -1.15625 -1.125 -3.265625q0 -1.40625 0.515625 -2.421875q0.515625 -1.015625 1.40625 -1.546875q0.890625 -0.53125 1.984375 -0.53125q1.546875 0 2.5 1.03125q0.96875 1.03125 0.96875 3.015625q0 0.203125 -0.03125 0.625l-6.0625 0q0.078125 1.5625 0.875 2.375q0.796875 0.796875 2.03125 0.796875q1.34375 0 2.203125 -0.953125l0.75 0.71875q-1.078125 1.3125 -2.984375 1.3125zm1.859375 -5.296875q0 -1.203125 -0.609375 -1.890625q-0.59375 -0.703125 -1.59375 -0.703125q-0.921875 0 -1.625 0.65625q-0.6875 0.640625 -0.859375 1.9375l4.6875 0zm3.734375 -3.421875l1.3125 0l0 1.515625q0.5 -0.78125 1.265625 -1.25q0.765625 -0.46875 1.625 -0.46875q1.125 0 1.8125 0.875q0.6875 0.859375 0.6875 2.6875l0 5.171875l-1.3125 0l0 -5.125q0 -1.28125 -0.4375 -1.859375q-0.421875 -0.578125 -1.125 -0.578125q-0.578125 0 -1.171875 0.34375q-0.578125 0.328125 -0.96875 0.9375q-0.375 0.609375 -0.375 1.375l0 4.90625l-1.3125 0l0 -8.53125zm16.265625 7.75q-1.234375 0.90625 -2.703125 0.90625q-1.421875 0 -2.015625 -0.84375q-0.578125 -0.859375 -0.578125 -2.8125q0 -0.3125 0.03125 -1.09375l0.171875 -2.796875l-1.875 0l0 -1.109375l1.953125 0l0.125 -2.265625l1.5 -0.25l0.1875 -0.015625l0.015625 0.109375q-0.125 0.1875 -0.203125 0.328125q-0.0625 0.125 -0.078125 0.40625l-0.203125 
1.6875l2.828125 0l0 1.109375l-2.90625 0l-0.171875 2.890625q-0.03125 0.75 -0.03125 0.984375q0 1.5 0.34375 2.03125q0.34375 0.53125 1.109375 0.53125q0.5625 0 1.03125 -0.203125q0.46875 -0.21875 1.0625 -0.65625l0.40625 1.0625z" fill-rule="nonzero"/><path fill="#e06666" d="m245.87401 93.125015l0 0c0 -33.441494 27.109695 -60.55118 60.551193 -60.55118l0 0c16.059174 0 31.460602 6.3794823 42.81613 17.735031c11.35556 11.355549 17.735046 26.756977 17.735046 42.81615l0 0c0 33.44149 -27.10968 60.551178 -60.551178 60.551178l0 0c-33.441498 0 -60.551193 -27.109688 -60.551193 -60.551178z" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m245.87401 93.125015l0 0c0 -33.441494 27.109695 -60.55118 60.551193 -60.55118l0 0c16.059174 0 31.460602 6.3794823 42.81613 17.735031c11.35556 11.355549 17.735046 26.756977 17.735046 42.81615l0 0c0 33.44149 -27.10968 60.551178 -60.551178 60.551178l0 0c-33.441498 0 -60.551193 -27.109688 -60.551193 -60.551178z" fill-rule="evenodd"/><path fill="#000000" d="m286.12833 89.04501l-1.171875 -3.390625l-3.75 0l-1.1875 3.390625l-1.28125 0l4.296875 -11.828125l0.140625 0l4.296875 11.828125l-1.34375 0zm-4.578125 -4.40625l3.0625 0l-1.53125 -4.4375l-1.53125 4.4375zm11.609375 3.40625q1.28125 0 2.21875 -1.046875l0.78125 0.90625q-1.25 1.34375 -3.078125 1.34375q-1.234375 0 -2.203125 -0.578125q-0.96875 -0.578125 -1.515625 -1.59375q-0.546875 -1.015625 -0.546875 -2.28125q0 -1.265625 0.546875 -2.265625q0.546875 -1.015625 1.515625 -1.59375q0.96875 -0.578125 2.1875 -0.578125q1.03125 0 1.859375 0.421875q0.84375 0.40625 1.375 1.15625l-0.84375 0.828125l-0.015625 0.015625q-0.546875 -0.71875 -1.125 -1.0q-0.5625 -0.296875 -1.390625 -0.296875q-0.734375 0 -1.359375 0.40625q-0.625 0.40625 -1.0 1.140625q-0.375 0.71875 -0.375 1.671875q0 0.953125 0.375 1.71875q0.390625 0.765625 1.0625 1.203125q0.6875 0.421875 1.53125 0.421875zm2.078125 -5.25q0 -0.109375 0.15625 0.015625l-0.0625 0.078125l-0.09375 -0.09375zm0.203125 
-0.015625q0.09375 0.125 0.046875 0.09375q-0.046875 -0.046875 -0.09375 -0.0625l0.046875 -0.03125zm9.9375 5.484375q-1.234375 0.90625 -2.703125 0.90625q-1.421875 0 -2.015625 -0.84375q-0.578125 -0.859375 -0.578125 -2.8125q0 -0.3125 0.03125 -1.09375l0.171875 -2.796875l-1.875 0l0 -1.109375l1.953125 0l0.125 -2.265625l1.5 -0.25l0.1875 -0.015625l0.015625 0.109375q-0.125 0.1875 -0.203125 0.328125q-0.0625 0.125 -0.078125 0.40625l-0.203125 1.6875l2.828125 0l0 1.109375l-2.90625 0l-0.171875 2.890625q-0.03125 0.75 -0.03125 0.984375q0 1.5 0.34375 2.03125q0.34375 0.53125 1.109375 0.53125q0.5625 0 1.03125 -0.203125q0.46875 -0.21875 1.0625 -0.65625l0.40625 1.0625zm2.90625 0.78125l0 -1.078125l2.1875 0l0 -6.359375l-2.0625 0l0 -1.09375l3.40625 0l0 7.453125l2.0 0l0 1.078125l-5.53125 0zm2.78125 -10.3125q-0.390625 0 -0.671875 -0.28125q-0.28125 -0.28125 -0.28125 -0.671875q0 -0.40625 0.265625 -0.6875q0.28125 -0.28125 0.6875 -0.28125q0.390625 0 0.671875 0.296875q0.296875 0.28125 0.296875 0.671875q0 0.390625 -0.296875 0.671875q-0.28125 0.28125 -0.671875 0.28125zm13.171875 1.78125q-0.296875 1.515625 -1.25 3.734375l-2.0625 4.796875l-1.046875 0l-3.375 -8.53125l1.34375 0l2.625 6.6875l1.375 -3.15625q0.859375 -1.9375 1.125 -3.53125l1.265625 0zm5.921875 8.71875q-1.90625 0 -3.03125 -1.15625q-1.125 -1.15625 -1.125 -3.265625q0 -1.40625 0.515625 -2.421875q0.515625 -1.015625 1.40625 -1.546875q0.890625 -0.53125 1.984375 -0.53125q1.546875 0 2.5 1.03125q0.96875 1.03125 0.96875 3.015625q0 0.203125 -0.03125 0.625l-6.0625 0q0.078125 1.5625 0.875 2.375q0.796875 0.796875 2.03125 0.796875q1.34375 0 2.203125 -0.953125l0.75 0.71875q-1.078125 1.3125 -2.984375 1.3125zm1.859375 -5.296875q0 -1.203125 -0.609375 -1.890625q-0.59375 -0.703125 -1.59375 -0.703125q-0.921875 0 -1.625 0.65625q-0.6875 0.640625 -0.859375 1.9375l4.6875 0z" fill-rule="nonzero"/><path fill="#000000" d="m293.19864 111.04501l0 -11.625l1.03125 0l2.875 5.6875l2.921875 -5.703125l0.984375 0l0 11.640625l-1.234375 0l0 -8.765625l-2.515625 4.6875l-0.5 
0l-2.34375 -4.640625l0 8.71875l-1.21875 0zm9.5625 -11.625l2.71875 0q1.4375 0 2.265625 0.390625q0.84375 0.375 1.453125 1.203125q1.140625 1.53125 1.140625 4.28125q-0.109375 2.828125 -1.3125 4.3125q-1.1875 1.46875 -3.78125 1.453125l-2.484375 0l0 -11.640625zm2.4375 10.625q3.84375 0 3.84375 -4.671875q-0.03125 -2.375 -0.890625 -3.609375q-0.84375 -1.234375 -2.75 -1.234375l-1.40625 0l0 9.515625l1.203125 0zm11.453125 -5.421875q1.65625 0.6875 2.28125 1.421875q0.640625 0.734375 0.640625 1.828125q0 0.859375 -0.421875 1.625q-0.40625 0.765625 -1.296875 1.25q-0.875 0.484375 -2.15625 0.484375q-2.265625 0 -3.640625 -1.46875l0.671875 -1.1875l0 -0.015625q0.015625 0 0.015625 0.015625q0 0 0 0q0.515625 0.671875 1.296875 1.078125q0.796875 0.40625 1.84375 0.40625q1.03125 0 1.71875 -0.578125q0.6875 -0.578125 0.6875 -1.421875q0 -0.546875 -0.21875 -0.90625q-0.21875 -0.359375 -0.78125 -0.71875q-0.546875 -0.359375 -1.671875 -0.84375q-1.71875 -0.671875 -2.453125 -1.515625q-0.734375 -0.859375 -0.734375 -1.9375q0 -1.296875 0.953125 -2.078125q0.953125 -0.78125 2.59375 -0.78125q0.953125 0 1.78125 0.390625q0.828125 0.375 1.4375 1.0625l-0.71875 0.96875l-0.015625 0.015625q-0.5625 -0.765625 -1.171875 -1.0625q-0.609375 -0.296875 -1.515625 -0.296875q-0.90625 0 -1.453125 0.5q-0.546875 0.484375 -0.546875 1.1875q0 0.546875 0.234375 0.953125q0.234375 0.390625 0.84375 0.78125q0.625 0.375 1.796875 0.84375zm1.59375 -2.875q0 -0.046875 0.078125 0q0.078125 0.03125 0.09375 0.03125l-0.046875 0.0625l-0.125 -0.0625l0 -0.03125zm0.21875 -0.03125q0.109375 0.15625 0.03125 0.109375q-0.0625 -0.046875 -0.078125 -0.046875l0.046875 -0.0625zm-5.53125 6.796875q0 0.046875 -0.078125 0.015625q-0.0625 -0.046875 -0.09375 -0.046875l0.046875 -0.0625l0.125 0.0625l0 0.03125zm-0.203125 0.046875q-0.109375 -0.140625 0.03125 -0.078125l-0.03125 0.078125z" fill-rule="nonzero"/><path fill="#6fa8dc" d="m247.85564 203.24673l100.84248 0l0 55.47885c-50.418518 0 -50.418518 21.345428 -100.84248 10.672699zm8.308182 0l0 -7.1924896l100.15378 0l0 
55.82686c-3.8070374 0 -7.6194763 0.3867035 -7.6194763 0.3867035l0 -49.021072zm7.809265 -7.1924896l0 -7.018463l101.021454 0l0 55.652832c-4.3384705 0 -8.676941 0.2900238 -8.676941 0.2900238l0 -48.924393z" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m247.85564 203.24673l100.84248 0l0 55.47885c-50.418518 0 -50.418518 21.345428 -100.84248 10.672699zm8.308182 0l0 -7.1924896l100.15378 0l0 55.82686c-3.8070374 0 -7.6194763 0.3867035 -7.6194763 0.3867035m-84.72504 -56.213562l0 -7.018463l101.021454 0l0 55.652832c-4.3384705 0 -8.676941 0.2900238 -8.676941 0.2900238" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m247.85564 269.3983c50.423965 10.6727295 50.423965 -10.672699 100.84248 -10.672699l0 -6.457779c0 0 3.812439 -0.3867035 7.6194763 -0.3867035l0 -6.902466c0 0 4.3384705 -0.2900238 8.676941 -0.2900238l0 -55.652832l-101.021454 0l0 7.018463l-7.809265 0l0 7.1924896l-8.308182 0z" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m247.85564 203.24673l100.84248 0l0 55.47885c-50.418518 0 -50.418518 21.345428 -100.84248 10.672699zm8.308182 0l0 -7.1924896l100.15378 0l0 55.82686c-3.8070374 0 -7.6194763 0.3867035 -7.6194763 0.3867035m-84.72504 -56.213562l0 -7.018463l101.021454 0l0 55.652832c-4.3384705 0 -8.676941 0.2900238 -8.676941 0.2900238" fill-rule="evenodd"/><path fill="#000000" d="m272.36282 232.7269l0 6.609375q0.03125 1.421875 -0.375 2.34375q-0.40625 0.90625 -1.109375 1.34375q-0.703125 0.421875 -1.5625 0.421875q-1.703125 0 -2.765625 -1.28125l0.734375 -0.9375l0.015625 -0.015625l0.03125 0.015625q0.515625 0.5625 0.953125 0.8125q0.4375 0.25 1.0 0.25q0.953125 0 1.375 -0.65625q0.421875 -0.671875 0.421875 -2.265625l0 -6.640625l-2.25 0l0 -1.109375l5.328125 0l0 1.109375l-1.796875 0zm-4.84375 8.421875q0 0.078125 -0.0625 0.078125q-0.0625 -0.015625 -0.109375 -0.0625l0.078125 -0.09375l0.09375 0.078125zm-0.21875 0.0625q-0.09375 -0.09375 -0.03125 -0.078125q0.0625 0.015625 0.078125 
0.03125l-0.046875 0.046875zm12.328125 2.1875q-1.140625 0 -2.046875 -0.5625q-0.890625 -0.5625 -1.390625 -1.5625q-0.5 -1.015625 -0.5 -2.296875q0 -1.296875 0.5 -2.296875q0.5 -1.015625 1.390625 -1.578125q0.90625 -0.578125 2.046875 -0.578125q1.125 0 2.015625 0.578125q0.90625 0.5625 1.40625 1.578125q0.5 1.0 0.5 2.296875q0 1.28125 -0.5 2.296875q-0.5 1.0 -1.40625 1.5625q-0.890625 0.5625 -2.015625 0.5625zm0 -1.125q0.71875 0 1.28125 -0.421875q0.578125 -0.4375 0.90625 -1.1875q0.328125 -0.765625 0.328125 -1.71875q0 -1.453125 -0.71875 -2.375q-0.703125 -0.921875 -1.796875 -0.921875q-1.109375 0 -1.828125 0.921875q-0.703125 0.921875 -0.703125 2.375q0 0.953125 0.328125 1.71875q0.328125 0.75 0.890625 1.1875q0.578125 0.421875 1.3125 0.421875zm8.78125 1.171875q-1.34375 0 -2.15625 -1.0q-0.796875 -1.0 -0.78125 -2.96875l0.03125 -4.765625l1.296875 0l0 4.765625q0 1.53125 0.53125 2.21875q0.53125 0.671875 1.40625 0.671875q0.96875 0 1.625 -0.75q0.671875 -0.765625 0.671875 -2.203125l0 -4.703125l1.3125 0l0 7.203125q0 0.453125 0.015625 0.75q0.03125 0.28125 0.171875 0.578125l-1.296875 0q-0.125 -0.28125 -0.15625 -0.578125q-0.03125 -0.296875 -0.03125 -0.734375q-0.40625 0.71875 -1.109375 1.125q-0.6875 0.390625 -1.53125 0.390625zm13.15625 -6.953125l0 0.015625q-0.546875 -0.5 -0.921875 -0.671875q-0.359375 -0.171875 -0.84375 -0.171875q-0.671875 0 -1.265625 0.34375q-0.59375 0.328125 -0.96875 1.015625q-0.375 0.671875 -0.375 1.703125l0 4.53125l-1.359375 0l0 -8.546875l1.40625 0l-0.046875 1.578125q0.359375 -0.84375 1.09375 -1.3125q0.734375 -0.46875 1.609375 -0.46875q1.375 0 2.265625 0.9375l-0.59375 1.046875zm0 0.015625q0.125 0.109375 0.0625 0.09375q-0.0625 -0.015625 -0.109375 -0.03125l0.046875 -0.0625zm-0.203125 0.0625q0 -0.0625 0.046875 -0.046875q0.0625 0 0.109375 0.046875l-0.03125 0.09375l-0.125 -0.078125l0 -0.015625zm2.921875 -1.859375l1.3125 0l0 1.515625q0.5 -0.78125 1.265625 -1.25q0.765625 -0.46875 1.625 -0.46875q1.125 0 1.8125 0.875q0.6875 0.859375 0.6875 2.6875l0 5.171875l-1.3125 0l0 -5.125q0 -1.28125 
-0.4375 -1.859375q-0.421875 -0.578125 -1.125 -0.578125q-0.578125 0 -1.171875 0.34375q-0.578125 0.328125 -0.96875 0.9375q-0.375 0.609375 -0.375 1.375l0 4.90625l-1.3125 0l0 -8.53125zm12.53125 -0.1875q1.828125 0 2.78125 0.953125q0.96875 0.953125 0.96875 3.234375l0 4.53125l-1.453125 0l0 -1.3125q-0.78125 1.515625 -2.90625 1.515625q-1.421875 0 -2.21875 -0.640625q-0.796875 -0.640625 -0.796875 -1.703125q0 -0.90625 0.578125 -1.578125q0.578125 -0.671875 1.546875 -1.03125q0.984375 -0.359375 2.15625 -0.359375q1.0625 0 1.921875 0.109375q-0.109375 -1.4375 -0.75 -2.015625q-0.640625 -0.59375 -1.921875 -0.59375q-0.625 0 -1.21875 0.25q-0.578125 0.234375 -1.0625 0.703125l-0.65625 -0.859375q1.1875 -1.203125 3.03125 -1.203125zm-0.484375 7.890625q1.390625 0 2.1875 -0.796875q0.796875 -0.8125 0.875 -2.34375q-0.84375 -0.140625 -1.8125 -0.140625q-1.40625 0 -2.234375 0.46875q-0.828125 0.46875 -0.828125 1.421875q0 1.390625 1.8125 1.390625zm6.734375 0.828125l0 -1.078125l2.53125 0l0 -10.25l-2.421875 0l0 -1.078125l3.78125 0l0 11.328125l2.5 0l0 1.078125l-6.390625 0z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m306.4252 153.6762l8.062988 35.370087" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m306.4252 153.6762l6.729431 29.520172" fill-rule="evenodd"/><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m311.54422 183.56346l2.619049 4.05748l0.6017761 -4.7917023z" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m673.7428 143.05284l100.28345 0l0 73.44881l-100.28345 0z" fill-rule="evenodd"/><path fill="#000000" d="m683.5084 169.97284l2.46875 -11.625l1.03125 0l1.671875 5.6875l4.125 -5.703125l0.984375 0l-2.46875 11.640625l-1.234375 0l1.859375 -8.765625l-3.515625 4.6875l-0.5 0l-1.34375 -4.640625l-1.859375 8.71875l-1.21875 0zm13.59375 0.1875q-1.90625 0 -2.78125 -1.15625q-0.875 -1.15625 -0.421875 -3.265625q0.296875 -1.40625 1.015625 -2.421875q0.734375 -1.015625 1.734375 
-1.546875q1.015625 -0.53125 2.109375 -0.53125q1.546875 0 2.28125 1.03125q0.75 1.03125 0.328125 3.015625q-0.046875 0.203125 -0.171875 0.625l-6.0625 0q-0.25 1.5625 0.375 2.375q0.625 0.796875 1.859375 0.796875q1.34375 0 2.40625 -0.953125l0.59375 0.71875q-1.359375 1.3125 -3.265625 1.3125zm3.0 -5.296875q0.25 -1.203125 -0.203125 -1.890625q-0.453125 -0.703125 -1.453125 -0.703125q-0.921875 0 -1.765625 0.65625q-0.828125 0.640625 -1.265625 1.9375l4.6875 0zm9.75 4.328125q-1.4375 0.90625 -2.90625 0.90625q-1.421875 0 -1.828125 -0.84375q-0.40625 -0.859375 0.015625 -2.8125q0.0625 -0.3125 0.265625 -1.09375l0.765625 -2.796875l-1.875 0l0.234375 -1.109375l1.953125 0l0.609375 -2.265625l1.546875 -0.25l0.1875 -0.015625l0 0.109375q-0.171875 0.1875 -0.265625 0.328125q-0.09375 0.125 -0.171875 0.40625l-0.5625 1.6875l2.828125 0l-0.234375 1.109375l-2.90625 0l-0.78125 2.890625q-0.203125 0.75 -0.25 0.984375q-0.3125 1.5 -0.09375 2.03125q0.234375 0.53125 1.0 0.53125q0.5625 0 1.078125 -0.203125q0.515625 -0.21875 1.203125 -0.65625l0.1875 1.0625zm7.28125 -7.9375q1.828125 0 2.578125 0.953125q0.765625 0.953125 0.28125 3.234375l-0.96875 4.53125l-1.453125 0l0.28125 -1.3125q-1.109375 1.515625 -3.234375 1.515625q-1.421875 0 -2.078125 -0.640625q-0.65625 -0.640625 -0.4375 -1.703125q0.1875 -0.90625 0.90625 -1.578125q0.734375 -0.671875 1.78125 -1.03125q1.0625 -0.359375 2.234375 -0.359375q1.0625 0 1.890625 0.109375q0.203125 -1.4375 -0.3125 -2.015625q-0.515625 -0.59375 -1.796875 -0.59375q-0.625 0 -1.265625 0.25q-0.640625 0.234375 -1.21875 0.703125l-0.484375 -0.859375q1.453125 -1.203125 3.296875 -1.203125zm-2.171875 7.890625q1.390625 0 2.359375 -0.796875q0.96875 -0.8125 1.375 -2.34375q-0.8125 -0.140625 -1.78125 -0.140625q-1.40625 0 -2.34375 0.46875q-0.921875 0.46875 -1.125 1.421875q-0.296875 1.390625 1.515625 1.390625zm9.40625 1.015625q-0.90625 0 -1.609375 -0.515625q-0.6875 -0.515625 -0.96875 -1.53125q-0.28125 -1.03125 0.03125 -2.484375q0.3125 -1.484375 1.046875 -2.46875q0.734375 -0.984375 1.65625 
-1.453125q0.921875 -0.484375 1.828125 -0.484375q0.859375 0 1.40625 0.390625q0.5625 0.390625 0.765625 1.078125l1.09375 -5.125l1.421875 0l-0.03125 0.125q-0.140625 0.125 -0.203125 0.25q-0.046875 0.125 -0.109375 0.453125l-2.171875 10.25q-0.09375 0.453125 -0.125 0.75q-0.03125 0.28125 0.03125 0.578125l-1.328125 0q-0.0625 -0.296875 -0.03125 -0.578125q0.03125 -0.296875 0.125 -0.75q-0.5625 0.71875 -1.296875 1.125q-0.71875 0.390625 -1.53125 0.390625zm0.46875 -1.171875q1.15625 0 1.890625 -0.90625q0.734375 -0.921875 1.0625 -2.421875q0.3125 -1.515625 -0.078125 -2.421875q-0.390625 -0.921875 -1.578125 -0.921875q-1.109375 0 -1.890625 0.84375q-0.78125 0.828125 -1.078125 2.265625q-0.34375 1.640625 0.0625 2.609375q0.421875 0.953125 1.609375 0.953125zm10.953125 -7.734375q1.828125 0 2.578125 0.953125q0.765625 0.953125 0.28125 3.234375l-0.96875 4.53125l-1.453125 0l0.28125 -1.3125q-1.109375 1.515625 -3.234375 1.515625q-1.421875 0 -2.078125 -0.640625q-0.65625 -0.640625 -0.4375 -1.703125q0.1875 -0.90625 0.90625 -1.578125q0.734375 -0.671875 1.78125 -1.03125q1.0625 -0.359375 2.234375 -0.359375q1.0625 0 1.890625 0.109375q0.203125 -1.4375 -0.3125 -2.015625q-0.515625 -0.59375 -1.796875 -0.59375q-0.625 0 -1.265625 0.25q-0.640625 0.234375 -1.21875 0.703125l-0.484375 -0.859375q1.453125 -1.203125 3.296875 -1.203125zm-2.171875 7.890625q1.390625 0 2.359375 -0.796875q0.96875 -0.8125 1.375 -2.34375q-0.8125 -0.140625 -1.78125 -0.140625q-1.40625 0 -2.34375 0.46875q-0.921875 0.46875 -1.125 1.421875q-0.296875 1.390625 1.515625 1.390625zm13.546875 0.046875q-1.4375 0.90625 -2.90625 0.90625q-1.421875 0 -1.828125 -0.84375q-0.40625 -0.859375 0.015625 -2.8125q0.0625 -0.3125 0.265625 -1.09375l0.765625 -2.796875l-1.875 0l0.234375 -1.109375l1.953125 0l0.609375 -2.265625l1.546875 -0.25l0.1875 -0.015625l0 0.109375q-0.171875 0.1875 -0.265625 0.328125q-0.09375 0.125 -0.171875 0.40625l-0.5625 1.6875l2.828125 0l-0.234375 1.109375l-2.90625 0l-0.78125 2.890625q-0.203125 0.75 -0.25 0.984375q-0.3125 1.5 -0.09375 
2.03125q0.234375 0.53125 1.0 0.53125q0.5625 0 1.078125 -0.203125q0.515625 -0.21875 1.203125 -0.65625l0.1875 1.0625zm7.28125 -7.9375q1.828125 0 2.578125 0.953125q0.765625 0.953125 0.28125 3.234375l-0.96875 4.53125l-1.453125 0l0.28125 -1.3125q-1.109375 1.515625 -3.234375 1.515625q-1.421875 0 -2.078125 -0.640625q-0.65625 -0.640625 -0.4375 -1.703125q0.1875 -0.90625 0.90625 -1.578125q0.734375 -0.671875 1.78125 -1.03125q1.0625 -0.359375 2.234375 -0.359375q1.0625 0 1.890625 0.109375q0.203125 -1.4375 -0.3125 -2.015625q-0.515625 -0.59375 -1.796875 -0.59375q-0.625 0 -1.265625 0.25q-0.640625 0.234375 -1.21875 0.703125l-0.484375 -0.859375q1.453125 -1.203125 3.296875 -1.203125zm-2.171875 7.890625q1.390625 0 2.359375 -0.796875q0.96875 -0.8125 1.375 -2.34375q-0.8125 -0.140625 -1.78125 -0.140625q-1.40625 0 -2.34375 0.46875q-0.921875 0.46875 -1.125 1.421875q-0.296875 1.390625 1.515625 1.390625z" fill-rule="nonzero"/><path fill="#000000" d="m683.5084 191.97284l2.46875 -11.625l1.03125 0l1.671875 5.6875l4.125 -5.703125l0.984375 0l-2.46875 11.640625l-1.234375 0l1.859375 -8.765625l-3.515625 4.6875l-0.5 0l-1.34375 -4.640625l-1.859375 8.71875l-1.21875 0zm12.640625 0.203125q-1.34375 0 -1.9375 -1.0q-0.59375 -1.0 -0.15625 -2.96875l1.046875 -4.765625l1.296875 0l-1.015625 4.765625q-0.328125 1.53125 0.0625 2.21875q0.390625 0.671875 1.265625 0.671875q0.96875 0 1.796875 -0.75q0.828125 -0.765625 1.125 -2.203125l1.0 -4.703125l1.3125 0l-1.53125 7.203125q-0.09375 0.453125 -0.140625 0.75q-0.03125 0.28125 0.046875 0.578125l-1.296875 0q-0.0625 -0.28125 -0.03125 -0.578125q0.03125 -0.296875 0.125 -0.734375q-0.5625 0.71875 -1.34375 1.125q-0.78125 0.390625 -1.625 0.390625zm13.703125 -0.984375q-1.4375 0.90625 -2.90625 0.90625q-1.421875 0 -1.828125 -0.84375q-0.40625 -0.859375 0.015625 -2.8125q0.0625 -0.3125 0.265625 -1.09375l0.765625 -2.796875l-1.875 0l0.234375 -1.109375l1.953125 0l0.609375 -2.265625l1.546875 -0.25l0.1875 -0.015625l0 0.109375q-0.171875 0.1875 -0.265625 0.328125q-0.09375 0.125 -0.171875 
0.40625l-0.5625 1.6875l2.828125 0l-0.234375 1.109375l-2.90625 0l-0.78125 2.890625q-0.203125 0.75 -0.25 0.984375q-0.3125 1.5 -0.09375 2.03125q0.234375 0.53125 1.0 0.53125q0.5625 0 1.078125 -0.203125q0.515625 -0.21875 1.203125 -0.65625l0.1875 1.0625zm7.28125 -7.9375q1.828125 0 2.578125 0.953125q0.765625 0.953125 0.28125 3.234375l-0.96875 4.53125l-1.453125 0l0.28125 -1.3125q-1.109375 1.515625 -3.234375 1.515625q-1.421875 0 -2.078125 -0.640625q-0.65625 -0.640625 -0.4375 -1.703125q0.1875 -0.90625 0.90625 -1.578125q0.734375 -0.671875 1.78125 -1.03125q1.0625 -0.359375 2.234375 -0.359375q1.0625 0 1.890625 0.109375q0.203125 -1.4375 -0.3125 -2.015625q-0.515625 -0.59375 -1.796875 -0.59375q-0.625 0 -1.265625 0.25q-0.640625 0.234375 -1.21875 0.703125l-0.484375 -0.859375q1.453125 -1.203125 3.296875 -1.203125zm-2.171875 7.890625q1.390625 0 2.359375 -0.796875q0.96875 -0.8125 1.375 -2.34375q-0.8125 -0.140625 -1.78125 -0.140625q-1.40625 0 -2.34375 0.46875q-0.921875 0.46875 -1.125 1.421875q-0.296875 1.390625 1.515625 1.390625zm13.546875 0.046875q-1.4375 0.90625 -2.90625 0.90625q-1.421875 0 -1.828125 -0.84375q-0.40625 -0.859375 0.015625 -2.8125q0.0625 -0.3125 0.265625 -1.09375l0.765625 -2.796875l-1.875 0l0.234375 -1.109375l1.953125 0l0.609375 -2.265625l1.546875 -0.25l0.1875 -0.015625l0 0.109375q-0.171875 0.1875 -0.265625 0.328125q-0.09375 0.125 -0.171875 0.40625l-0.5625 1.6875l2.828125 0l-0.234375 1.109375l-2.90625 0l-0.78125 2.890625q-0.203125 0.75 -0.25 0.984375q-0.3125 1.5 -0.09375 2.03125q0.234375 0.53125 1.0 0.53125q0.5625 0 1.078125 -0.203125q0.515625 -0.21875 1.203125 -0.65625l0.1875 1.0625zm2.734375 0.78125l0.234375 -1.078125l2.1875 0l1.34375 -6.359375l-2.0625 0l0.234375 -1.09375l3.40625 0l-1.578125 7.453125l2.0 0l-0.234375 1.078125l-5.53125 0zm4.96875 -10.3125q-0.390625 0 -0.609375 -0.28125q-0.21875 -0.28125 -0.140625 -0.671875q0.09375 -0.40625 0.421875 -0.6875q0.328125 -0.28125 0.734375 -0.28125q0.390625 0 0.625 0.296875q0.234375 0.28125 0.140625 0.671875q-0.078125 0.390625 
-0.4375 0.671875q-0.34375 0.28125 -0.734375 0.28125zm7.140625 10.46875q-1.140625 0 -1.921875 -0.5625q-0.78125 -0.5625 -1.0625 -1.5625q-0.28125 -1.015625 -0.015625 -2.296875q0.28125 -1.296875 0.984375 -2.296875q0.71875 -1.015625 1.734375 -1.578125q1.03125 -0.578125 2.171875 -0.578125q1.125 0 1.890625 0.578125q0.78125 0.5625 1.0625 1.578125q0.296875 1.0 0.015625 2.296875q-0.265625 1.28125 -0.984375 2.296875q-0.71875 1.0 -1.734375 1.5625q-1.015625 0.5625 -2.140625 0.5625zm0.234375 -1.125q0.71875 0 1.375 -0.421875q0.671875 -0.4375 1.15625 -1.1875q0.484375 -0.765625 0.6875 -1.71875q0.3125 -1.453125 -0.203125 -2.375q-0.515625 -0.921875 -1.609375 -0.921875q-1.109375 0 -2.015625 0.921875q-0.90625 0.921875 -1.21875 2.375q-0.203125 0.953125 -0.03125 1.71875q0.171875 0.75 0.640625 1.1875q0.484375 0.421875 1.21875 0.421875zm7.609375 -7.5625l1.3125 0l-0.328125 1.515625q0.671875 -0.78125 1.53125 -1.25q0.875 -0.46875 1.734375 -0.46875q1.125 0 1.625 0.875q0.5 0.859375 0.109375 2.6875l-1.09375 5.171875l-1.3125 0l1.09375 -5.125q0.265625 -1.28125 -0.046875 -1.859375q-0.296875 -0.578125 -1.0 -0.578125q-0.578125 0 -1.234375 0.34375q-0.65625 0.328125 -1.171875 0.9375q-0.515625 0.609375 -0.671875 1.375l-1.046875 4.90625l-1.3125 0l1.8125 -8.53125z" fill-rule="nonzero"/><path fill="#000000" d="m689.08655 208.95721q1.515625 0.484375 2.046875 1.0625q0.53125 0.5625 0.34375 1.46875q-0.25 1.15625 -1.3125 1.921875q-1.0625 0.75 -2.78125 0.75q-2.171875 0 -3.328125 -1.34375l1.015625 -1.265625l0.015625 -0.015625l0.015625 0.015625q0.421875 0.75 0.953125 1.125q0.546875 0.359375 1.609375 0.359375q1.015625 0 1.671875 -0.359375q0.65625 -0.359375 0.78125 -1.0q0.109375 -0.53125 -0.28125 -0.890625q-0.390625 -0.359375 -1.515625 -0.75q-3.015625 -0.90625 -2.65625 -2.640625q0.21875 -1.0 1.15625 -1.578125q0.9375 -0.578125 2.453125 -0.578125q1.15625 0 1.875 0.328125q0.71875 0.328125 1.234375 1.015625l-0.96875 0.9375l-0.015625 0.015625q-0.265625 -0.59375 -0.921875 -0.9375q-0.640625 -0.34375 -1.390625 
-0.34375q-0.796875 0 -1.375 0.28125q-0.578125 0.28125 -0.671875 0.78125q-0.109375 0.46875 0.328125 0.859375q0.453125 0.390625 1.71875 0.78125zm2.09375 -1.390625q0.015625 -0.09375 0.140625 0.03125l-0.078125 0.0625l-0.0625 -0.09375zm0.21875 -0.03125q0.03125 0.078125 0.015625 0.09375q-0.015625 0.015625 -0.046875 0q-0.03125 -0.015625 -0.046875 -0.03125l0.078125 -0.0625zm-6.09375 3.9375q-0.015625 0.078125 -0.171875 0l0.078125 -0.09375l0.09375 0.078125l0 0.015625zm-0.21875 0.0625q-0.046875 -0.09375 -0.03125 -0.09375q0.03125 0 0.078125 0.03125l-0.046875 0.0625z" fill-rule="nonzero"/><path fill="#e06666" d="m424.69684 98.478035l0 0c0 -33.441498 27.10971 -60.55118 60.551178 -60.55118l0 0c16.059174 0 31.460602 6.3794785 42.816193 17.735027c11.35553 11.355553 17.734985 26.75698 17.734985 42.816154l0 0c0 33.44149 -27.10968 60.551186 -60.551178 60.551186l0 0c-33.441467 0 -60.551178 -27.109695 -60.551178 -60.551186z" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m424.69684 98.478035l0 0c0 -33.441498 27.10971 -60.55118 60.551178 -60.55118l0 0c16.059174 0 31.460602 6.3794785 42.816193 17.735027c11.35553 11.355553 17.734985 26.75698 17.734985 42.816154l0 0c0 33.44149 -27.10968 60.551186 -60.551178 60.551186l0 0c-33.441467 0 -60.551178 -27.109695 -60.551178 -60.551186z" fill-rule="evenodd"/><path fill="#000000" d="m458.16208 87.97616q1.65625 0.6875 2.28125 1.421875q0.640625 0.734375 0.640625 1.828125q0 0.859375 -0.421875 1.625q-0.40625 0.765625 -1.296875 1.25q-0.875 0.484375 -2.15625 0.484375q-2.265625 0 -3.640625 -1.46875l0.671875 -1.1875l0 -0.015625q0.015625 0 0.015625 0.015625q0 0 0 0q0.515625 0.671875 1.296875 1.078125q0.796875 0.40625 1.84375 0.40625q1.03125 0 1.71875 -0.578125q0.6875 -0.578125 0.6875 -1.421875q0 -0.546875 -0.21875 -0.90625q-0.21875 -0.359375 -0.78125 -0.71875q-0.546875 -0.359375 -1.671875 -0.84375q-1.71875 -0.671875 -2.453125 -1.515625q-0.734375 -0.859375 -0.734375 -1.9375q0 -1.296875 0.953125 
-2.078125q0.953125 -0.78125 2.59375 -0.78125q0.953125 0 1.78125 0.390625q0.828125 0.375 1.4375 1.0625l-0.71875 0.96875l-0.015625 0.015625q-0.5625 -0.765625 -1.171875 -1.0625q-0.609375 -0.296875 -1.515625 -0.296875q-0.90625 0 -1.453125 0.5q-0.546875 0.484375 -0.546875 1.1875q0 0.546875 0.234375 0.953125q0.234375 0.390625 0.84375 0.78125q0.625 0.375 1.796875 0.84375zm1.59375 -2.875q0 -0.046875 0.078125 0q0.078125 0.03125 0.09375 0.03125l-0.046875 0.0625l-0.125 -0.0625l0 -0.03125zm0.21875 -0.03125q0.109375 0.15625 0.03125 0.109375q-0.0625 -0.046875 -0.078125 -0.046875l0.046875 -0.0625zm-5.53125 6.796875q0 0.046875 -0.078125 0.015625q-0.0625 -0.046875 -0.09375 -0.046875l0.046875 -0.0625l0.125 0.0625l0 0.03125zm-0.203125 0.046875q-0.109375 -0.140625 0.03125 -0.078125l-0.03125 0.078125zm15.96875 1.703125q-1.234375 0.90625 -2.703125 0.90625q-1.421875 0 -2.015625 -0.84375q-0.578125 -0.859375 -0.578125 -2.8125q0 -0.3125 0.03125 -1.09375l0.171875 -2.796875l-1.875 0l0 -1.109375l1.953125 0l0.125 -2.265625l1.5 -0.25l0.1875 -0.015625l0.015625 0.109375q-0.125 0.1875 -0.203125 0.328125q-0.0625 0.125 -0.078125 0.40625l-0.203125 1.6875l2.828125 0l0 1.109375l-2.90625 0l-0.171875 2.890625q-0.03125 0.75 -0.03125 0.984375q0 1.5 0.34375 2.03125q0.34375 0.53125 1.109375 0.53125q0.5625 0 1.03125 -0.203125q0.46875 -0.21875 1.0625 -0.65625l0.40625 1.0625zm5.59375 -7.9375q1.828125 0 2.78125 0.953125q0.96875 0.953125 0.96875 3.234375l0 4.53125l-1.453125 0l0 -1.3125q-0.78125 1.515625 -2.90625 1.515625q-1.421875 0 -2.21875 -0.640625q-0.796875 -0.640625 -0.796875 -1.703125q0 -0.90625 0.578125 -1.578125q0.578125 -0.671875 1.546875 -1.03125q0.984375 -0.359375 2.15625 -0.359375q1.0625 0 1.921875 0.109375q-0.109375 -1.4375 -0.75 -2.015625q-0.640625 -0.59375 -1.921875 -0.59375q-0.625 0 -1.21875 0.25q-0.578125 0.234375 -1.0625 0.703125l-0.65625 -0.859375q1.1875 -1.203125 3.03125 -1.203125zm-0.484375 7.890625q1.390625 0 2.1875 -0.796875q0.796875 -0.8125 0.875 -2.34375q-0.84375 -0.140625 -1.8125 
-0.140625q-1.40625 0 -2.234375 0.46875q-0.828125 0.46875 -0.828125 1.421875q0 1.390625 1.8125 1.390625zm6.609375 -7.703125l1.3125 0l0 1.515625q0.5 -0.78125 1.265625 -1.25q0.765625 -0.46875 1.625 -0.46875q1.125 0 1.8125 0.875q0.6875 0.859375 0.6875 2.6875l0 5.171875l-1.3125 0l0 -5.125q0 -1.28125 -0.4375 -1.859375q-0.421875 -0.578125 -1.125 -0.578125q-0.578125 0 -1.171875 0.34375q-0.578125 0.328125 -0.96875 0.9375q-0.375 0.609375 -0.375 1.375l0 4.90625l-1.3125 0l0 -8.53125zm12.34375 8.71875q-0.90625 0 -1.71875 -0.515625q-0.796875 -0.515625 -1.296875 -1.53125q-0.5 -1.03125 -0.5 -2.484375q0 -1.484375 0.515625 -2.46875q0.53125 -0.984375 1.34375 -1.453125q0.828125 -0.484375 1.734375 -0.484375q0.859375 0 1.5 0.390625q0.640625 0.390625 0.984375 1.078125l0 -5.125l1.421875 0l0 0.125q-0.109375 0.125 -0.140625 0.25q-0.03125 0.125 -0.03125 0.453125l0.015625 10.25q0 0.453125 0.03125 0.75q0.03125 0.28125 0.15625 0.578125l-1.328125 0q-0.125 -0.296875 -0.15625 -0.578125q-0.03125 -0.296875 -0.03125 -0.75q-0.40625 0.71875 -1.046875 1.125q-0.640625 0.390625 -1.453125 0.390625zm0.21875 -1.171875q1.15625 0 1.6875 -0.90625q0.546875 -0.921875 0.546875 -2.421875q0 -1.515625 -0.59375 -2.421875q-0.578125 -0.921875 -1.765625 -0.921875q-1.109375 0 -1.71875 0.84375q-0.59375 0.828125 -0.59375 2.265625q0 1.640625 0.625 2.609375q0.625 0.953125 1.8125 0.953125zm9.71875 1.1875q-0.765625 0 -1.421875 -0.34375q-0.65625 -0.34375 -1.109375 -0.984375l-0.453125 1.125l-0.859375 0l0 -12.40625l1.53125 0l0 0.125q-0.125 0.125 -0.15625 0.25q-0.015625 0.125 -0.015625 0.453125l0 4.359375q0.40625 -0.6875 1.109375 -1.09375q0.703125 -0.421875 1.421875 -0.421875q1.609375 0 2.5625 1.125q0.953125 1.109375 0.953125 3.265625q0 1.453125 -0.515625 2.484375q-0.5 1.03125 -1.328125 1.546875q-0.8125 0.515625 -1.71875 0.515625zm-0.171875 -1.1875q1.0 0 1.671875 -0.796875q0.671875 -0.796875 0.671875 -2.484375q0 -1.640625 -0.625 -2.46875q-0.625 -0.84375 -1.6875 -0.84375q-1.046875 0 -1.703125 0.9375q-0.640625 0.9375 -0.640625 
2.40625q0 3.25 2.3125 3.25zm12.7969055 -7.546875q-0.109375 0.765625 -0.34375 1.515625q-0.234375 0.75 -0.671875 1.984375l-2.125 6.0q-0.421875 1.203125 -1.109375 1.734375q-0.6875305 0.546875 -1.6406555 0.546875q-1.203125 0 -1.96875 -0.734375l0.578125 -0.90625l0.015625 -0.03125l0.03125 0.03125q0.296875 0.28125 0.625 0.40625q0.34375 0.125 0.78125 0.125q0.703125 0 1.1094055 -0.484375q0.40625 -0.484375 0.796875 -1.59375l-3.4844055 -8.59375l1.390625 0l2.6562805 6.828125l1.125 -3.421875q0.40625 -1.296875 0.59375 -1.96875q0.203125 -0.6875 0.296875 -1.4375l1.34375 0zm-7.0469055 10.0625q0 0.03125 -0.046875 0.03125l-0.140625 -0.03125l0.0625 -0.09375l0.125 0.078125l0 0.015625zm-0.21875 0.046875q-0.078125 -0.078125 -0.046875 -0.0625q0.046875 0.015625 0.078125 0.015625l-0.03125 0.046875z" fill-rule="nonzero"/><path fill="#000000" d="m472.02145 116.39803l0 -11.625l1.03125 0l2.875 5.6875l2.921875 -5.703125l0.984375 0l0 11.640625l-1.234375 0l0 -8.765625l-2.515625 4.6875l-0.5 0l-2.34375 -4.640625l0 8.71875l-1.21875 0zm9.5625 -11.625l2.71875 0q1.4375 0 2.265625 0.390625q0.84375 0.375 1.453125 1.203125q1.140625 1.53125 1.140625 4.28125q-0.109375 2.828125 -1.3125 4.3125q-1.1875 1.46875 -3.78125 1.453125l-2.484375 0l0 -11.640625zm2.4375 10.625q3.84375 0 3.84375 -4.671875q-0.03125 -2.375 -0.890625 -3.609375q-0.84375 -1.234375 -2.75 -1.234375l-1.40625 0l0 9.515625l1.203125 0zm11.453125 -5.421875q1.65625 0.6875 2.28125 1.421875q0.640625 0.734375 0.640625 1.828125q0 0.859375 -0.421875 1.625q-0.40625 0.765625 -1.296875 1.25q-0.875 0.484375 -2.15625 0.484375q-2.265625 0 -3.640625 -1.46875l0.671875 -1.1875l0 -0.015625q0.015625 0 0.015625 0.015625q0 0 0 0q0.515625 0.671875 1.296875 1.078125q0.796875 0.40625 1.84375 0.40625q1.03125 0 1.71875 -0.578125q0.6875 -0.578125 0.6875 -1.421875q0 -0.546875 -0.21875 -0.90625q-0.21875 -0.359375 -0.78125 -0.71875q-0.546875 -0.359375 -1.671875 -0.84375q-1.71875 -0.671875 -2.453125 -1.515625q-0.734375 -0.859375 -0.734375 -1.9375q0 -1.296875 0.953125 
-2.078125q0.953125 -0.78125 2.59375 -0.78125q0.953125 0 1.78125 0.390625q0.828125 0.375 1.4375 1.0625l-0.71875 0.96875l-0.015625 0.015625q-0.5625 -0.765625 -1.171875 -1.0625q-0.609375 -0.296875 -1.515625 -0.296875q-0.90625 0 -1.453125 0.5q-0.546875 0.484375 -0.546875 1.1875q0 0.546875 0.234375 0.953125q0.234375 0.390625 0.84375 0.78125q0.625 0.375 1.796875 0.84375zm1.59375 -2.875q0 -0.046875 0.078125 0q0.078125 0.03125 0.09375 0.03125l-0.046875 0.0625l-0.125 -0.0625l0 -0.03125zm0.21875 -0.03125q0.109375 0.15625 0.03125 0.109375q-0.0625 -0.046875 -0.078125 -0.046875l0.046875 -0.0625zm-5.53125 6.796875q0 0.046875 -0.078125 0.015625q-0.0625 -0.046875 -0.09375 -0.046875l0.046875 -0.0625l0.125 0.0625l0 0.03125zm-0.203125 0.046875q-0.109375 -0.140625 0.03125 -0.078125l-0.03125 0.078125z" fill-rule="nonzero"/><path fill="#e06666" d="m603.52234 93.125015l0 0c0 -33.441494 27.10968 -60.55118 60.551147 -60.55118l0 0c16.059204 0 31.460632 6.3794823 42.816162 17.735031c11.35553 11.355549 17.735046 26.756977 17.735046 42.81615l0 0c0 33.44149 -27.10968 60.551178 -60.55121 60.551178l0 0c-33.441467 0 -60.551147 -27.109688 -60.551147 -60.551178z" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m603.52234 93.125015l0 0c0 -33.441494 27.10968 -60.55118 60.551147 -60.55118l0 0c16.059204 0 31.460632 6.3794823 42.816162 17.735031c11.35553 11.355549 17.735046 26.756977 17.735046 42.81615l0 0c0 33.44149 -27.10968 60.551178 -60.55121 60.551178l0 0c-33.441467 0 -60.551147 -27.109688 -60.551147 -60.551178z" fill-rule="evenodd"/><path fill="#000000" d="m643.7766 89.04501l-1.171875 -3.390625l-3.75 0l-1.1875 3.390625l-1.28125 0l4.296875 -11.828125l0.140625 0l4.296875 11.828125l-1.34375 0zm-4.578125 -4.40625l3.0625 0l-1.53125 -4.4375l-1.53125 4.4375zm11.609375 3.40625q1.28125 0 2.21875 -1.046875l0.78125 0.90625q-1.25 1.34375 -3.078125 1.34375q-1.234375 0 -2.203125 -0.578125q-0.96875 -0.578125 -1.515625 -1.59375q-0.546875 -1.015625 
-0.546875 -2.28125q0 -1.265625 0.546875 -2.265625q0.546875 -1.015625 1.515625 -1.59375q0.96875 -0.578125 2.1875 -0.578125q1.03125 0 1.859375 0.421875q0.84375 0.40625 1.375 1.15625l-0.84375 0.828125l-0.015625 0.015625q-0.546875 -0.71875 -1.125 -1.0q-0.5625 -0.296875 -1.390625 -0.296875q-0.734375 0 -1.359375 0.40625q-0.625 0.40625 -1.0 1.140625q-0.375 0.71875 -0.375 1.671875q0 0.953125 0.375 1.71875q0.390625 0.765625 1.0625 1.203125q0.6875 0.421875 1.53125 0.421875zm2.078125 -5.25q0 -0.109375 0.15625 0.015625l-0.0625 0.078125l-0.09375 -0.09375zm0.203125 -0.015625q0.09375 0.125 0.046875 0.09375q-0.046875 -0.046875 -0.09375 -0.0625l0.046875 -0.03125zm9.9375 5.484375q-1.234375 0.90625 -2.703125 0.90625q-1.421875 0 -2.015625 -0.84375q-0.578125 -0.859375 -0.578125 -2.8125q0 -0.3125 0.03125 -1.09375l0.171875 -2.796875l-1.875 0l0 -1.109375l1.953125 0l0.125 -2.265625l1.5 -0.25l0.1875 -0.015625l0.015625 0.109375q-0.125 0.1875 -0.203125 0.328125q-0.0625 0.125 -0.078125 0.40625l-0.203125 1.6875l2.828125 0l0 1.109375l-2.90625 0l-0.171875 2.890625q-0.03125 0.75 -0.03125 0.984375q0 1.5 0.34375 2.03125q0.34375 0.53125 1.109375 0.53125q0.5625 0 1.03125 -0.203125q0.46875 -0.21875 1.0625 -0.65625l0.40625 1.0625zm2.90625 0.78125l0 -1.078125l2.1875 0l0 -6.359375l-2.0625 0l0 -1.09375l3.40625 0l0 7.453125l2.0 0l0 1.078125l-5.53125 0zm2.78125 -10.3125q-0.390625 0 -0.671875 -0.28125q-0.28125 -0.28125 -0.28125 -0.671875q0 -0.40625 0.265625 -0.6875q0.28125 -0.28125 0.6875 -0.28125q0.390625 0 0.671875 0.296875q0.296875 0.28125 0.296875 0.671875q0 0.390625 -0.296875 0.671875q-0.28125 0.28125 -0.671875 0.28125zm13.171875 1.78125q-0.296875 1.515625 -1.25 3.734375l-2.0625 4.796875l-1.046875 0l-3.375 -8.53125l1.34375 0l2.625 6.6875l1.375 -3.15625q0.859375 -1.9375 1.125 -3.53125l1.265625 0zm5.921875 8.71875q-1.90625 0 -3.03125 -1.15625q-1.125 -1.15625 -1.125 -3.265625q0 -1.40625 0.515625 -2.421875q0.515625 -1.015625 1.40625 -1.546875q0.890625 -0.53125 1.984375 -0.53125q1.546875 0 2.5 1.03125q0.96875 
1.03125 0.96875 3.015625q0 0.203125 -0.03125 0.625l-6.0625 0q0.078125 1.5625 0.875 2.375q0.796875 0.796875 2.03125 0.796875q1.34375 0 2.203125 -0.953125l0.75 0.71875q-1.078125 1.3125 -2.984375 1.3125zm1.859375 -5.296875q0 -1.203125 -0.609375 -1.890625q-0.59375 -0.703125 -1.59375 -0.703125q-0.921875 0 -1.625 0.65625q-0.6875 0.640625 -0.859375 1.9375l4.6875 0z" fill-rule="nonzero"/><path fill="#000000" d="m650.8469 111.04501l0 -11.625l1.03125 0l2.875 5.6875l2.921875 -5.703125l0.984375 0l0 11.640625l-1.234375 0l0 -8.765625l-2.515625 4.6875l-0.5 0l-2.34375 -4.640625l0 8.71875l-1.21875 0zm9.5625 -11.625l2.71875 0q1.4375 0 2.265625 0.390625q0.84375 0.375 1.453125 1.203125q1.140625 1.53125 1.140625 4.28125q-0.109375 2.828125 -1.3125 4.3125q-1.1875 1.46875 -3.78125 1.453125l-2.484375 0l0 -11.640625zm2.4375 10.625q3.84375 0 3.84375 -4.671875q-0.03125 -2.375 -0.890625 -3.609375q-0.84375 -1.234375 -2.75 -1.234375l-1.40625 0l0 9.515625l1.203125 0zm11.453125 -5.421875q1.65625 0.6875 2.28125 1.421875q0.640625 0.734375 0.640625 1.828125q0 0.859375 -0.421875 1.625q-0.40625 0.765625 -1.296875 1.25q-0.875 0.484375 -2.15625 0.484375q-2.265625 0 -3.640625 -1.46875l0.671875 -1.1875l0 -0.015625q0.015625 0 0.015625 0.015625q0 0 0 0q0.515625 0.671875 1.296875 1.078125q0.796875 0.40625 1.84375 0.40625q1.03125 0 1.71875 -0.578125q0.6875 -0.578125 0.6875 -1.421875q0 -0.546875 -0.21875 -0.90625q-0.21875 -0.359375 -0.78125 -0.71875q-0.546875 -0.359375 -1.671875 -0.84375q-1.71875 -0.671875 -2.453125 -1.515625q-0.734375 -0.859375 -0.734375 -1.9375q0 -1.296875 0.953125 -2.078125q0.953125 -0.78125 2.59375 -0.78125q0.953125 0 1.78125 0.390625q0.828125 0.375 1.4375 1.0625l-0.71875 0.96875l-0.015625 0.015625q-0.5625 -0.765625 -1.171875 -1.0625q-0.609375 -0.296875 -1.515625 -0.296875q-0.90625 0 -1.453125 0.5q-0.546875 0.484375 -0.546875 1.1875q0 0.546875 0.234375 0.953125q0.234375 0.390625 0.84375 0.78125q0.625 0.375 1.796875 0.84375zm1.59375 -2.875q0 -0.046875 0.078125 0q0.078125 0.03125 0.09375 
0.03125l-0.046875 0.0625l-0.125 -0.0625l0 -0.03125zm0.21875 -0.03125q0.109375 0.15625 0.03125 0.109375q-0.0625 -0.046875 -0.078125 -0.046875l0.046875 -0.0625zm-5.53125 6.796875q0 0.046875 -0.078125 0.015625q-0.0625 -0.046875 -0.09375 -0.046875l0.046875 -0.0625l0.125 0.0625l0 0.03125zm-0.203125 0.046875q-0.109375 -0.140625 0.03125 -0.078125l-0.03125 0.078125z" fill-rule="nonzero"/><path fill="#6fa8dc" d="m597.18896 215.2218l100.84253 0l0 55.47882c-50.41858 0 -50.41858 21.345459 -100.84253 10.6727295zm8.3081665 0l0 -7.1924896l100.15381 0l0 55.82686c-3.8070068 0 -7.619446 0.38668823 -7.619446 0.38668823l0 -49.021057zm7.809265 -7.1924896l0 -7.0184784l101.021484 0l0 55.652863c-4.338501 0 -8.676941 0.29000854 -8.676941 0.29000854l0 -48.924393z" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m597.18896 215.2218l100.84253 0l0 55.47882c-50.41858 0 -50.41858 21.345459 -100.84253 10.6727295zm8.3081665 0l0 -7.1924896l100.15381 0l0 55.82686c-3.8070068 0 -7.619446 0.38668823 -7.619446 0.38668823m-84.7251 -56.213547l0 -7.0184784l101.021484 0l0 55.652863c-4.338501 0 -8.676941 0.29000854 -8.676941 0.29000854" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m597.18896 281.37335c50.42395 10.6727295 50.42395 -10.6727295 100.84253 -10.6727295l0 -6.4577637c0 0 3.812439 -0.38668823 7.619446 -0.38668823l0 -6.902466c0 0 4.33844 -0.29000854 8.676941 -0.29000854l0 -55.652863l-101.021484 0l0 7.0184784l-7.809265 0l0 7.1924896l-8.3081665 0z" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m597.18896 215.2218l100.84253 0l0 55.47882c-50.41858 0 -50.41858 21.345459 -100.84253 10.6727295zm8.3081665 0l0 -7.1924896l100.15381 0l0 55.82686c-3.8070068 0 -7.619446 0.38668823 -7.619446 0.38668823m-84.7251 -56.213547l0 -7.0184784l101.021484 0l0 55.652863c-4.338501 0 -8.676941 0.29000854 -8.676941 0.29000854" fill-rule="evenodd"/><path fill="#000000" d="m621.69617 244.70195l0 6.609375q0.03125 1.421875 -0.375 
2.34375q-0.40625 0.90625 -1.109375 1.34375q-0.703125 0.421875 -1.5625 0.421875q-1.703125 0 -2.765625 -1.28125l0.734375 -0.9375l0.015625 -0.015625l0.03125 0.015625q0.515625 0.5625 0.953125 0.8125q0.4375 0.25 1.0 0.25q0.953125 0 1.375 -0.65625q0.421875 -0.671875 0.421875 -2.265625l0 -6.640625l-2.25 0l0 -1.109375l5.328125 0l0 1.109375l-1.796875 0zm-4.84375 8.421875q0 0.078125 -0.0625 0.078125q-0.0625 -0.015625 -0.109375 -0.0625l0.078125 -0.09375l0.09375 0.078125zm-0.21875 0.0625q-0.09375 -0.09375 -0.03125 -0.078125q0.0625 0.015625 0.078125 0.03125l-0.046875 0.046875zm12.328125 2.1875q-1.140625 0 -2.046875 -0.5625q-0.890625 -0.5625 -1.390625 -1.5625q-0.5 -1.015625 -0.5 -2.296875q0 -1.296875 0.5 -2.296875q0.5 -1.015625 1.390625 -1.578125q0.90625 -0.578125 2.046875 -0.578125q1.125 0 2.015625 0.578125q0.90625 0.5625 1.40625 1.578125q0.5 1.0 0.5 2.296875q0 1.28125 -0.5 2.296875q-0.5 1.0 -1.40625 1.5625q-0.890625 0.5625 -2.015625 0.5625zm0 -1.125q0.71875 0 1.28125 -0.421875q0.578125 -0.4375 0.90625 -1.1875q0.328125 -0.765625 0.328125 -1.71875q0 -1.453125 -0.71875 -2.375q-0.703125 -0.921875 -1.796875 -0.921875q-1.109375 0 -1.828125 0.921875q-0.703125 0.921875 -0.703125 2.375q0 0.953125 0.328125 1.71875q0.328125 0.75 0.890625 1.1875q0.578125 0.421875 1.3125 0.421875zm8.78125 1.171875q-1.34375 0 -2.15625 -1.0q-0.796875 -1.0 -0.78125 -2.96875l0.03125 -4.765625l1.296875 0l0 4.765625q0 1.53125 0.53125 2.21875q0.53125 0.671875 1.40625 0.671875q0.96875 0 1.625 -0.75q0.671875 -0.765625 0.671875 -2.203125l0 -4.703125l1.3125 0l0 7.203125q0 0.453125 0.015625 0.75q0.03125 0.28125 0.171875 0.578125l-1.296875 0q-0.125 -0.28125 -0.15625 -0.578125q-0.03125 -0.296875 -0.03125 -0.734375q-0.40625 0.71875 -1.109375 1.125q-0.6875 0.390625 -1.53125 0.390625zm13.15625 -6.953125l0 0.015625q-0.546875 -0.5 -0.921875 -0.671875q-0.359375 -0.171875 -0.84375 -0.171875q-0.671875 0 -1.265625 0.34375q-0.59375 0.328125 -0.96875 1.015625q-0.375 0.671875 -0.375 1.703125l0 4.53125l-1.359375 0l0 
-8.546875l1.40625 0l-0.046875 1.578125q0.359375 -0.84375 1.09375 -1.3125q0.734375 -0.46875 1.609375 -0.46875q1.375 0 2.265625 0.9375l-0.59375 1.046875zm0 0.015625q0.125 0.109375 0.0625 0.09375q-0.0625 -0.015625 -0.109375 -0.03125l0.046875 -0.0625zm-0.203125 0.0625q0 -0.0625 0.046875 -0.046875q0.0625 0 0.109375 0.046875l-0.03125 0.09375l-0.125 -0.078125l0 -0.015625zm2.921875 -1.859375l1.3125 0l0 1.515625q0.5 -0.78125 1.265625 -1.25q0.765625 -0.46875 1.625 -0.46875q1.125 0 1.8125 0.875q0.6875 0.859375 0.6875 2.6875l0 5.171875l-1.3125 0l0 -5.125q0 -1.28125 -0.4375 -1.859375q-0.421875 -0.578125 -1.125 -0.578125q-0.578125 0 -1.171875 0.34375q-0.578125 0.328125 -0.96875 0.9375q-0.375 0.609375 -0.375 1.375l0 4.90625l-1.3125 0l0 -8.53125zm12.53125 -0.1875q1.828125 0 2.78125 0.953125q0.96875 0.953125 0.96875 3.234375l0 4.53125l-1.453125 0l0 -1.3125q-0.78125 1.515625 -2.90625 1.515625q-1.421875 0 -2.21875 -0.640625q-0.796875 -0.640625 -0.796875 -1.703125q0 -0.90625 0.578125 -1.578125q0.578125 -0.671875 1.546875 -1.03125q0.984375 -0.359375 2.15625 -0.359375q1.0625 0 1.921875 0.109375q-0.109375 -1.4375 -0.75 -2.015625q-0.640625 -0.59375 -1.921875 -0.59375q-0.625 0 -1.21875 0.25q-0.578125 0.234375 -1.0625 0.703125l-0.65625 -0.859375q1.1875 -1.203125 3.03125 -1.203125zm-0.484375 7.890625q1.390625 0 2.1875 -0.796875q0.796875 -0.8125 0.875 -2.34375q-0.84375 -0.140625 -1.8125 -0.140625q-1.40625 0 -2.234375 0.46875q-0.828125 0.46875 -0.828125 1.421875q0 1.390625 1.8125 1.390625zm6.734375 0.828125l0 -1.078125l2.53125 0l0 -10.25l-2.421875 0l0 -1.078125l3.78125 0l0 11.328125l2.5 0l0 1.078125l-6.390625 0z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m664.0735 153.6762l-0.25195312 47.338577" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m664.0735 153.6762l-0.22003174 41.33867" fill-rule="evenodd"/><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m662.2017 
195.00607l1.6275635 4.546829l1.6759033 -4.529236z" fill-rule="evenodd"/><path fill="#e6b8af" d="m3.406824 307.91898l777.6063 0l0 169.13385l-777.6063 0z" fill-rule="evenodd"/><path fill="#000000" d="m344.085 381.96402l7.890625 0q4.46875 0 6.59375 1.96875q2.125 1.953125 2.125 5.46875q0 1.375 -0.609375 2.75q-0.59375 1.359375 -1.671875 2.40625q-1.078125 1.03125 -2.484375 1.484375l5.828125 10.84375l-5.03125 0l-5.15625 -10.3125l-2.90625 0l0 10.3125l-4.578125 0l0 -24.921875zm8.171875 10.796875q1.921875 0 2.875 -0.890625q0.96875 -0.90625 0.96875 -2.46875q0 -1.75 -0.9375 -2.609375q-0.921875 -0.859375 -2.90625 -0.859375l-3.59375 0l0 6.828125l3.59375 0zm24.921875 14.125l-1.78125 -6.453125l-6.375 0l-1.8125 6.453125l-4.609375 0l8.328125 -24.921875l2.609375 0l8.328125 24.921875l-4.6875 0zm-7.265625 -9.546875l4.625 0l-2.28125 -8.28125l-2.34375 8.28125zm14.0625 -15.375l6.4375 0q3.1875 0 5.0625 0.859375q1.890625 0.84375 3.21875 2.609375q2.5625 3.484375 2.5625 9.171875q0 5.890625 -2.75 9.09375q-2.734375 3.1875 -8.578125 3.1875l-5.953125 0l0 -24.921875zm5.875 21.40625q3.5 0 5.203125 -2.203125q1.703125 -2.21875 1.703125 -6.453125q0 -9.015625 -6.5 -9.015625l-1.9375 0l0 17.671875l1.53125 0zm22.484375 4.234375q-2.953125 0 -5.078125 -1.5625q-2.109375 -1.578125 -3.234375 -4.546875q-1.125 -2.984375 -1.125 -7.125q0 -4.09375 1.125 -7.015625q1.125 -2.9375 3.234375 -4.484375q2.125 -1.546875 5.078125 -1.546875q2.9375 0 5.0625 1.546875q2.125 1.546875 3.234375 4.484375q1.125 2.921875 1.125 7.015625q0 4.140625 -1.125 7.125q-1.109375 2.96875 -3.234375 4.546875q-2.125 1.5625 -5.0625 1.5625zm0 -3.59375q1.484375 0 2.5625 -1.21875q1.078125 -1.21875 1.65625 -3.453125q0.578125 -2.234375 0.578125 -5.203125q0 -2.765625 -0.578125 -4.84375q-0.578125 -2.078125 -1.65625 -3.21875q-1.078125 -1.140625 -2.5625 -1.140625q-1.46875 0 -2.5625 1.140625q-1.09375 1.140625 -1.671875 3.21875q-0.578125 2.078125 -0.578125 4.84375q0 2.96875 0.578125 5.203125q0.578125 2.234375 1.65625 3.453125q1.09375 1.21875 2.578125 
1.21875zm21.75 -11.5625q2.546875 1.21875 3.953125 2.34375q1.40625 1.109375 1.984375 2.34375q0.59375 1.234375 0.59375 2.875q0 1.984375 -0.921875 3.71875q-0.921875 1.734375 -2.90625 2.8125q-1.984375 1.0625 -5.03125 1.0625q-4.859375 0 -8.4375 -3.46875l1.953125 -3.40625l0.03125 -0.03125l0.03125 0.03125q2.90625 3.03125 6.796875 3.03125q1.875 0 2.890625 -0.875q1.03125 -0.890625 1.03125 -2.640625q0 -0.875 -0.4375 -1.59375q-0.4375 -0.71875 -1.5 -1.4375q-1.0625 -0.71875 -2.9375 -1.5625q-3.484375 -1.5 -5.046875 -3.375q-1.5625 -1.875 -1.5625 -4.359375q0 -1.796875 1.0 -3.3125q1.015625 -1.515625 2.78125 -2.390625q1.765625 -0.890625 3.984375 -0.890625q2.46875 0 4.265625 0.828125q1.796875 0.8125 3.4375 2.53125l-2.234375 3.125l-0.03125 0.03125q-0.015625 0 -0.015625 0q0 -0.015625 0 -0.015625q-0.921875 -1.46875 -2.34375 -2.140625q-1.421875 -0.671875 -3.125 -0.671875q-1.09375 0 -1.890625 0.390625q-0.78125 0.375 -1.1875 1.0q-0.390625 0.625 -0.390625 1.359375q0 0.875 0.453125 1.5625q0.453125 0.671875 1.5625 1.390625q1.125 0.703125 3.25 1.734375zm3.140625 -4.671875q0 -0.03125 0.046875 -0.03125q0.03125 0 0.15625 0.09375q0.140625 0.09375 0.234375 0.15625l-0.0625 0.09375l-0.375 -0.25l0 -0.0625zm0.546875 0.0625q0.140625 0.21875 0.140625 0.28125q0 0.015625 -0.015625 0.015625q-0.0625 0 -0.234375 -0.140625l0.109375 -0.15625zm-11.90625 12.875q0 0.03125 -0.046875 0.03125q-0.046875 0 -0.203125 -0.078125q-0.140625 -0.078125 -0.234375 -0.109375l0.078125 -0.140625l0.390625 0.234375l0.015625 0.0625zm-0.5625 -0.015625q-0.15625 -0.15625 -0.15625 -0.21875q0 0 0 0q0 -0.015625 0.015625 -0.015625l0.21875 0.09375l-0.078125 0.140625z" fill-rule="nonzero"/><path fill="#cfe2f3" d="m168.86089 338.05414l0 0c0 7.6123962 24.68428 13.783478 55.133865 13.783478c30.449585 0 55.13385 -6.1710815 55.13385 -13.783478l0 119.078735c0 7.6124268 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.133865 -6.171051 -55.133865 -13.783478z" fill-rule="evenodd"/><path fill="#e2edf7" d="m168.86089 338.05414l0 0c0 -7.6123962 
24.68428 -13.783447 55.133865 -13.783447c30.449585 0 55.13385 6.171051 55.13385 13.783447l0 0c0 7.6123962 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.133865 -6.1710815 -55.133865 -13.783478z" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m279.1286 338.05414l0 0c0 7.6123962 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.133865 -6.1710815 -55.133865 -13.783478l0 0c0 -7.6123962 24.68428 -13.783447 55.133865 -13.783447c30.449585 0 55.13385 6.171051 55.13385 13.783447l0 119.078735c0 7.6124268 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.133865 -6.171051 -55.133865 -13.783478l0 -119.078735" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m279.1286 338.05414l0 0c0 7.6123962 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.133865 -6.1710815 -55.133865 -13.783478l0 0c0 -7.6123962 24.68428 -13.783447 55.133865 -13.783447c30.449585 0 55.13385 6.171051 55.13385 13.783447l0 119.078735c0 7.6124268 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.133865 -6.171051 -55.133865 -13.783478l0 -119.078735" fill-rule="evenodd"/><path fill="#000000" d="m183.01819 399.78024l2.71875 0q1.4375 0 2.265625 0.390625q0.84375 0.375 1.453125 1.203125q1.140625 1.53125 1.140625 4.28125q-0.109375 2.828125 -1.3125 4.3125q-1.1875 1.46875 -3.78125 1.453125l-2.484375 0l0 -11.640625zm2.4375 10.625q3.84375 0 3.84375 -4.671875q-0.03125 -2.375 -0.890625 -3.609375q-0.84375 -1.234375 -2.75 -1.234375l-1.40625 0l0 9.515625l1.203125 0zm10.4375 -7.71875q1.828125 0 2.78125 0.953125q0.96875 0.953125 0.96875 3.234375l0 4.53125l-1.453125 0l0 -1.3125q-0.78125 1.515625 -2.90625 1.515625q-1.421875 0 -2.21875 -0.640625q-0.796875 -0.640625 -0.796875 -1.703125q0 -0.90625 0.578125 -1.578125q0.578125 -0.671875 1.546875 -1.03125q0.984375 -0.359375 2.15625 -0.359375q1.0625 0 1.921875 0.109375q-0.109375 -1.4375 -0.75 -2.015625q-0.640625 -0.59375 -1.921875 -0.59375q-0.625 0 -1.21875 
0.25q-0.578125 0.234375 -1.0625 0.703125l-0.65625 -0.859375q1.1875 -1.203125 3.03125 -1.203125zm-0.484375 7.890625q1.390625 0 2.1875 -0.796875q0.796875 -0.8125 0.875 -2.34375q-0.84375 -0.140625 -1.8125 -0.140625q-1.40625 0 -2.234375 0.46875q-0.828125 0.46875 -0.828125 1.421875q0 1.390625 1.8125 1.390625zm13.546875 0.046875q-1.234375 0.90625 -2.703125 0.90625q-1.421875 0 -2.015625 -0.84375q-0.578125 -0.859375 -0.578125 -2.8125q0 -0.3125 0.03125 -1.09375l0.171875 -2.796875l-1.875 0l0 -1.109375l1.953125 0l0.125 -2.265625l1.5 -0.25l0.1875 -0.015625l0.015625 0.109375q-0.125 0.1875 -0.203125 0.328125q-0.0625 0.125 -0.078125 0.40625l-0.203125 1.6875l2.828125 0l0 1.109375l-2.90625 0l-0.171875 2.890625q-0.03125 0.75 -0.03125 0.984375q0 1.5 0.34375 2.03125q0.34375 0.53125 1.109375 0.53125q0.5625 0 1.03125 -0.203125q0.46875 -0.21875 1.0625 -0.65625l0.40625 1.0625zm5.59375 -7.9375q1.828125 0 2.78125 0.953125q0.96875 0.953125 0.96875 3.234375l0 4.53125l-1.453125 0l0 -1.3125q-0.78125 1.515625 -2.90625 1.515625q-1.421875 0 -2.21875 -0.640625q-0.796875 -0.640625 -0.796875 -1.703125q0 -0.90625 0.578125 -1.578125q0.578125 -0.671875 1.546875 -1.03125q0.984375 -0.359375 2.15625 -0.359375q1.0625 0 1.921875 0.109375q-0.109375 -1.4375 -0.75 -2.015625q-0.640625 -0.59375 -1.921875 -0.59375q-0.625 0 -1.21875 0.25q-0.578125 0.234375 -1.0625 0.703125l-0.65625 -0.859375q1.1875 -1.203125 3.03125 -1.203125zm-0.484375 7.890625q1.390625 0 2.1875 -0.796875q0.796875 -0.8125 0.875 -2.34375q-0.84375 -0.140625 -1.8125 -0.140625q-1.40625 0 -2.234375 0.46875q-0.828125 0.46875 -0.828125 1.421875q0 1.390625 1.8125 1.390625zm15.6875 -10.796875l3.734375 0q1.84375 0 2.75 0.921875q0.921875 0.90625 0.921875 2.359375q0 1.4375 -0.890625 2.328125q-0.890625 0.890625 -2.6875 0.890625l-2.484375 0l0 5.125l-1.34375 0l0 -11.625zm3.6875 5.34375q1.21875 0 1.796875 -0.53125q0.578125 -0.53125 0.578125 -1.484375q0 -0.921875 -0.59375 -1.5q-0.59375 -0.59375 -1.765625 -0.59375l-2.359375 0l0 4.109375l2.34375 0zm9.21875 
6.4375q-1.140625 0 -2.046875 -0.5625q-0.890625 -0.5625 -1.390625 -1.5625q-0.5 -1.015625 -0.5 -2.296875q0 -1.296875 0.5 -2.296875q0.5 -1.015625 1.390625 -1.578125q0.90625 -0.578125 2.046875 -0.578125q1.125 0 2.015625 0.578125q0.90625 0.5625 1.40625 1.578125q0.5 1.0 0.5 2.296875q0 1.28125 -0.5 2.296875q-0.5 1.0 -1.40625 1.5625q-0.890625 0.5625 -2.015625 0.5625zm0 -1.125q0.71875 0 1.28125 -0.421875q0.578125 -0.4375 0.90625 -1.1875q0.328125 -0.765625 0.328125 -1.71875q0 -1.453125 -0.71875 -2.375q-0.703125 -0.921875 -1.796875 -0.921875q-1.109375 0 -1.828125 0.921875q-0.703125 0.921875 -0.703125 2.375q0 0.953125 0.328125 1.71875q0.328125 0.75 0.890625 1.1875q0.578125 0.421875 1.3125 0.421875zm9.328125 1.125q-1.140625 0 -2.046875 -0.5625q-0.890625 -0.5625 -1.390625 -1.5625q-0.5 -1.015625 -0.5 -2.296875q0 -1.296875 0.5 -2.296875q0.5 -1.015625 1.390625 -1.578125q0.90625 -0.578125 2.046875 -0.578125q1.125 0 2.015625 0.578125q0.90625 0.5625 1.40625 1.578125q0.5 1.0 0.5 2.296875q0 1.28125 -0.5 2.296875q-0.5 1.0 -1.40625 1.5625q-0.890625 0.5625 -2.015625 0.5625zm0 -1.125q0.71875 0 1.28125 -0.421875q0.578125 -0.4375 0.90625 -1.1875q0.328125 -0.765625 0.328125 -1.71875q0 -1.453125 -0.71875 -2.375q-0.703125 -0.921875 -1.796875 -0.921875q-1.109375 0 -1.828125 0.921875q-0.703125 0.921875 -0.703125 2.375q0 0.953125 0.328125 1.71875q0.328125 0.75 0.890625 1.1875q0.578125 0.421875 1.3125 0.421875zm6.125 0.96875l0 -1.078125l2.53125 0l0 -10.25l-2.421875 0l0 -1.078125l3.78125 0l0 11.328125l2.5 0l0 1.078125l-6.390625 0z" fill-rule="nonzero"/><path fill="#cfe2f3" d="m500.87402 338.05414l0 0c0 7.6123962 24.684265 13.783478 55.13385 13.783478c30.449585 0 55.13385 -6.1710815 55.13385 -13.783478l0 119.078735c0 7.6124268 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.13385 -6.171051 -55.13385 -13.783478z" fill-rule="evenodd"/><path fill="#e2edf7" d="m500.87402 338.05414l0 0c0 -7.6123962 24.684265 -13.783447 55.13385 -13.783447c30.449585 0 55.13385 6.171051 55.13385 13.783447l0 0c0 
7.6123962 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.13385 -6.1710815 -55.13385 -13.783478z" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m611.1417 338.05414l0 0c0 7.6123962 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.13385 -6.1710815 -55.13385 -13.783478l0 0c0 -7.6123962 24.684265 -13.783447 55.13385 -13.783447c30.449585 0 55.13385 6.171051 55.13385 13.783447l0 119.078735c0 7.6124268 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.13385 -6.171051 -55.13385 -13.783478l0 -119.078735" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m611.1417 338.05414l0 0c0 7.6123962 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.13385 -6.1710815 -55.13385 -13.783478l0 0c0 -7.6123962 24.684265 -13.783447 55.13385 -13.783447c30.449585 0 55.13385 6.171051 55.13385 13.783447l0 119.078735c0 7.6124268 -24.684265 13.783478 -55.13385 13.783478c-30.449585 0 -55.13385 -6.171051 -55.13385 -13.783478l0 -119.078735" fill-rule="evenodd"/><path fill="#000000" d="m519.461 400.40524l0 -11.625l1.03125 0l2.875 5.6875l2.921875 -5.703125l0.984375 0l0 11.640625l-1.234375 0l0 -8.765625l-2.515625 4.6875l-0.5 0l-2.34375 -4.640625l0 8.71875l-1.21875 0zm13.640625 0.1875q-1.90625 0 -3.03125 -1.15625q-1.125 -1.15625 -1.125 -3.265625q0 -1.40625 0.515625 -2.421875q0.515625 -1.015625 1.40625 -1.546875q0.890625 -0.53125 1.984375 -0.53125q1.546875 0 2.5 1.03125q0.96875 1.03125 0.96875 3.015625q0 0.203125 -0.03125 0.625l-6.0625 0q0.078125 1.5625 0.875 2.375q0.796875 0.796875 2.03125 0.796875q1.34375 0 2.203125 -0.953125l0.75 0.71875q-1.078125 1.3125 -2.984375 1.3125zm1.859375 -5.296875q0 -1.203125 -0.609375 -1.890625q-0.59375 -0.703125 -1.59375 -0.703125q-0.921875 0 -1.625 0.65625q-0.6875 0.640625 -0.859375 1.9375l4.6875 0zm10.671875 4.328125q-1.234375 0.90625 -2.703125 0.90625q-1.421875 0 -2.015625 -0.84375q-0.578125 -0.859375 -0.578125 -2.8125q0 -0.3125 0.03125 -1.09375l0.171875 
-2.796875l-1.875 0l0 -1.109375l1.953125 0l0.125 -2.265625l1.5 -0.25l0.1875 -0.015625l0.015625 0.109375q-0.125 0.1875 -0.203125 0.328125q-0.0625 0.125 -0.078125 0.40625l-0.203125 1.6875l2.828125 0l0 1.109375l-2.90625 0l-0.171875 2.890625q-0.03125 0.75 -0.03125 0.984375q0 1.5 0.34375 2.03125q0.34375 0.53125 1.109375 0.53125q0.5625 0 1.03125 -0.203125q0.46875 -0.21875 1.0625 -0.65625l0.40625 1.0625zm5.59375 -7.9375q1.828125 0 2.78125 0.953125q0.96875 0.953125 0.96875 3.234375l0 4.53125l-1.453125 0l0 -1.3125q-0.78125 1.515625 -2.90625 1.515625q-1.421875 0 -2.21875 -0.640625q-0.796875 -0.640625 -0.796875 -1.703125q0 -0.90625 0.578125 -1.578125q0.578125 -0.671875 1.546875 -1.03125q0.984375 -0.359375 2.15625 -0.359375q1.0625 0 1.921875 0.109375q-0.109375 -1.4375 -0.75 -2.015625q-0.640625 -0.59375 -1.921875 -0.59375q-0.625 0 -1.21875 0.25q-0.578125 0.234375 -1.0625 0.703125l-0.65625 -0.859375q1.1875 -1.203125 3.03125 -1.203125zm-0.484375 7.890625q1.390625 0 2.1875 -0.796875q0.796875 -0.8125 0.875 -2.34375q-0.84375 -0.140625 -1.8125 -0.140625q-1.40625 0 -2.234375 0.46875q-0.828125 0.46875 -0.828125 1.421875q0 1.390625 1.8125 1.390625zm9.625 1.015625q-0.90625 0 -1.71875 -0.515625q-0.796875 -0.515625 -1.296875 -1.53125q-0.5 -1.03125 -0.5 -2.484375q0 -1.484375 0.515625 -2.46875q0.53125 -0.984375 1.34375 -1.453125q0.828125 -0.484375 1.734375 -0.484375q0.859375 0 1.5 0.390625q0.640625 0.390625 0.984375 1.078125l0 -5.125l1.421875 0l0 0.125q-0.109375 0.125 -0.140625 0.25q-0.03125 0.125 -0.03125 0.453125l0.015625 10.25q0 0.453125 0.03125 0.75q0.03125 0.28125 0.15625 0.578125l-1.328125 0q-0.125 -0.296875 -0.15625 -0.578125q-0.03125 -0.296875 -0.03125 -0.75q-0.40625 0.71875 -1.046875 1.125q-0.640625 0.390625 -1.453125 0.390625zm0.21875 -1.171875q1.15625 0 1.6875 -0.90625q0.546875 -0.921875 0.546875 -2.421875q0 -1.515625 -0.59375 -2.421875q-0.578125 -0.921875 -1.765625 -0.921875q-1.109375 0 -1.71875 0.84375q-0.59375 0.828125 -0.59375 2.265625q0 1.640625 0.625 2.609375q0.625 0.953125 
1.8125 0.953125zm9.296875 -7.734375q1.828125 0 2.78125 0.953125q0.96875 0.953125 0.96875 3.234375l0 4.53125l-1.453125 0l0 -1.3125q-0.78125 1.515625 -2.90625 1.515625q-1.421875 0 -2.21875 -0.640625q-0.796875 -0.640625 -0.796875 -1.703125q0 -0.90625 0.578125 -1.578125q0.578125 -0.671875 1.546875 -1.03125q0.984375 -0.359375 2.15625 -0.359375q1.0625 0 1.921875 0.109375q-0.109375 -1.4375 -0.75 -2.015625q-0.640625 -0.59375 -1.921875 -0.59375q-0.625 0 -1.21875 0.25q-0.578125 0.234375 -1.0625 0.703125l-0.65625 -0.859375q1.1875 -1.203125 3.03125 -1.203125zm-0.484375 7.890625q1.390625 0 2.1875 -0.796875q0.796875 -0.8125 0.875 -2.34375q-0.84375 -0.140625 -1.8125 -0.140625q-1.40625 0 -2.234375 0.46875q-0.828125 0.46875 -0.828125 1.421875q0 1.390625 1.8125 1.390625zm13.546875 0.046875q-1.234375 0.90625 -2.703125 0.90625q-1.421875 0 -2.015625 -0.84375q-0.578125 -0.859375 -0.578125 -2.8125q0 -0.3125 0.03125 -1.09375l0.171875 -2.796875l-1.875 0l0 -1.109375l1.953125 0l0.125 -2.265625l1.5 -0.25l0.1875 -0.015625l0.015625 0.109375q-0.125 0.1875 -0.203125 0.328125q-0.0625 0.125 -0.078125 0.40625l-0.203125 1.6875l2.828125 0l0 1.109375l-2.90625 0l-0.171875 2.890625q-0.03125 0.75 -0.03125 0.984375q0 1.5 0.34375 2.03125q0.34375 0.53125 1.109375 0.53125q0.5625 0 1.03125 -0.203125q0.46875 -0.21875 1.0625 -0.65625l0.40625 1.0625zm5.59375 -7.9375q1.828125 0 2.78125 0.953125q0.96875 0.953125 0.96875 3.234375l0 4.53125l-1.453125 0l0 -1.3125q-0.78125 1.515625 -2.90625 1.515625q-1.421875 0 -2.21875 -0.640625q-0.796875 -0.640625 -0.796875 -1.703125q0 -0.90625 0.578125 -1.578125q0.578125 -0.671875 1.546875 -1.03125q0.984375 -0.359375 2.15625 -0.359375q1.0625 0 1.921875 0.109375q-0.109375 -1.4375 -0.75 -2.015625q-0.640625 -0.59375 -1.921875 -0.59375q-0.625 0 -1.21875 0.25q-0.578125 0.234375 -1.0625 0.703125l-0.65625 -0.859375q1.1875 -1.203125 3.03125 -1.203125zm-0.484375 7.890625q1.390625 0 2.1875 -0.796875q0.796875 -0.8125 0.875 -2.34375q-0.84375 -0.140625 -1.8125 -0.140625q-1.40625 0 -2.234375 
0.46875q-0.828125 0.46875 -0.828125 1.421875q0 1.390625 1.8125 1.390625z" fill-rule="nonzero"/><path fill="#000000" d="m538.4454 410.78024l3.734375 0q1.84375 0 2.75 0.921875q0.921875 0.90625 0.921875 2.359375q0 1.4375 -0.890625 2.328125q-0.890625 0.890625 -2.6875 0.890625l-2.484375 0l0 5.125l-1.34375 0l0 -11.625zm3.6875 5.34375q1.21875 0 1.796875 -0.53125q0.578125 -0.53125 0.578125 -1.484375q0 -0.921875 -0.59375 -1.5q-0.59375 -0.59375 -1.765625 -0.59375l-2.359375 0l0 4.109375l2.34375 0zm9.21875 6.4375q-1.140625 0 -2.046875 -0.5625q-0.890625 -0.5625 -1.390625 -1.5625q-0.5 -1.015625 -0.5 -2.296875q0 -1.296875 0.5 -2.296875q0.5 -1.015625 1.390625 -1.578125q0.90625 -0.578125 2.046875 -0.578125q1.125 0 2.015625 0.578125q0.90625 0.5625 1.40625 1.578125q0.5 1.0 0.5 2.296875q0 1.28125 -0.5 2.296875q-0.5 1.0 -1.40625 1.5625q-0.890625 0.5625 -2.015625 0.5625zm0 -1.125q0.71875 0 1.28125 -0.421875q0.578125 -0.4375 0.90625 -1.1875q0.328125 -0.765625 0.328125 -1.71875q0 -1.453125 -0.71875 -2.375q-0.703125 -0.921875 -1.796875 -0.921875q-1.109375 0 -1.828125 0.921875q-0.703125 0.921875 -0.703125 2.375q0 0.953125 0.328125 1.71875q0.328125 0.75 0.890625 1.1875q0.578125 0.421875 1.3125 0.421875zm9.328125 1.125q-1.140625 0 -2.046875 -0.5625q-0.890625 -0.5625 -1.390625 -1.5625q-0.5 -1.015625 -0.5 -2.296875q0 -1.296875 0.5 -2.296875q0.5 -1.015625 1.390625 -1.578125q0.90625 -0.578125 2.046875 -0.578125q1.125 0 2.015625 0.578125q0.90625 0.5625 1.40625 1.578125q0.5 1.0 0.5 2.296875q0 1.28125 -0.5 2.296875q-0.5 1.0 -1.40625 1.5625q-0.890625 0.5625 -2.015625 0.5625zm0 -1.125q0.71875 0 1.28125 -0.421875q0.578125 -0.4375 0.90625 -1.1875q0.328125 -0.765625 0.328125 -1.71875q0 -1.453125 -0.71875 -2.375q-0.703125 -0.921875 -1.796875 -0.921875q-1.109375 0 -1.828125 0.921875q-0.703125 0.921875 -0.703125 2.375q0 0.953125 0.328125 1.71875q0.328125 0.75 0.890625 1.1875q0.578125 0.421875 1.3125 0.421875zm6.125 0.96875l0 -1.078125l2.53125 0l0 -10.25l-2.421875 0l0 -1.078125l3.78125 0l0 11.328125l2.5 0l0 
1.078125l-6.390625 0z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m85.40682 135.93997l138.58267 188.34647" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m85.40682 135.93997l135.0268 183.51367" fill-rule="evenodd"/><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m219.10321 320.43256l4.0198975 2.676361l-1.3590851 -4.6341553z" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m47.26509 193.37305l76.28346 0l0 73.44882l-76.28346 0z" fill-rule="evenodd"/><path fill="#000000" d="m76.1412 213.54305l0 0.015625q-0.453125 -0.5 -0.78125 -0.671875q-0.328125 -0.171875 -0.8125 -0.171875q-0.671875 0 -1.34375 0.34375q-0.65625 0.328125 -1.1875 1.015625q-0.515625 0.671875 -0.734375 1.703125l-0.953125 4.53125l-1.359375 0l1.8125 -8.546875l1.40625 0l-0.375 1.578125q0.53125 -0.84375 1.359375 -1.3125q0.84375 -0.46875 1.71875 -0.46875q1.375 0 2.0625 0.9375l-0.8125 1.046875zm0 0.015625q0.09375 0.109375 0.03125 0.09375q-0.046875 -0.015625 -0.09375 -0.03125l0.0625 -0.0625zm-0.21875 0.0625q0.015625 -0.0625 0.0625 -0.046875q0.046875 0 0.09375 0.046875l-0.0625 0.09375l-0.09375 -0.078125l0 -0.015625zm5.1875 6.859375q-1.90625 0 -2.78125 -1.15625q-0.875 -1.15625 -0.421875 -3.265625q0.296875 -1.40625 1.015625 -2.421875q0.734375 -1.015625 1.734375 -1.546875q1.015625 -0.53125 2.109375 -0.53125q1.546875 0 2.28125 1.03125q0.75 1.03125 0.328125 3.015625q-0.046875 0.203125 -0.171875 0.625l-6.0625 0q-0.25 1.5625 0.375 2.375q0.625 0.796875 1.859375 0.796875q1.34375 0 2.40625 -0.953125l0.59375 0.71875q-1.359375 1.3125 -3.265625 1.3125zm3.0 -5.296875q0.25 -1.203125 -0.203125 -1.890625q-0.453125 -0.703125 -1.453125 -0.703125q-0.921875 0 -1.765625 0.65625q-0.828125 0.640625 -1.265625 1.9375l4.6875 0zm7.703125 -3.609375q1.828125 0 2.578125 0.953125q0.765625 0.953125 0.28125 3.234375l-0.96875 4.53125l-1.453125 0l0.28125 -1.3125q-1.109375 1.515625 -3.234375 1.515625q-1.421875 0 
-2.078125 -0.640625q-0.65625 -0.640625 -0.4375 -1.703125q0.1875 -0.90625 0.90625 -1.578125q0.734375 -0.671875 1.78125 -1.03125q1.0625 -0.359375 2.234375 -0.359375q1.0625 0 1.890625 0.109375q0.203125 -1.4375 -0.3125 -2.015625q-0.515625 -0.59375 -1.796875 -0.59375q-0.625 0 -1.265625 0.25q-0.640625 0.234375 -1.21875 0.703125l-0.484375 -0.859375q1.453125 -1.203125 3.296875 -1.203125zm-2.171875 7.890625q1.390625 0 2.359375 -0.796875q0.96875 -0.8125 1.375 -2.34375q-0.8125 -0.140625 -1.78125 -0.140625q-1.40625 0 -2.34375 0.46875q-0.921875 0.46875 -1.125 1.421875q-0.296875 1.390625 1.515625 1.390625zm9.40625 1.015625q-0.90625 0 -1.609375 -0.515625q-0.6875 -0.515625 -0.96875 -1.53125q-0.28125 -1.03125 0.03125 -2.484375q0.3125 -1.484375 1.046875 -2.46875q0.734375 -0.984375 1.65625 -1.453125q0.921875 -0.484375 1.828125 -0.484375q0.859375 0 1.40625 0.390625q0.5625 0.390625 0.765625 1.078125l1.09375 -5.125l1.421875 0l-0.03125 0.125q-0.140625 0.125 -0.203125 0.25q-0.046875 0.125 -0.109375 0.453125l-2.171875 10.25q-0.09375 0.453125 -0.125 0.75q-0.03125 0.28125 0.03125 0.578125l-1.328125 0q-0.0625 -0.296875 -0.03125 -0.578125q0.03125 -0.296875 0.125 -0.75q-0.5625 0.71875 -1.296875 1.125q-0.71875 0.390625 -1.53125 0.390625zm0.46875 -1.171875q1.15625 0 1.890625 -0.90625q0.734375 -0.921875 1.0625 -2.421875q0.3125 -1.515625 -0.078125 -2.421875q-0.390625 -0.921875 -1.578125 -0.921875q-1.109375 0 -1.890625 0.84375q-0.78125 0.828125 -1.078125 2.265625q-0.34375 1.640625 0.0625 2.609375q0.421875 0.953125 1.609375 0.953125z" fill-rule="nonzero"/><path fill="#000000" d="m64.195885 233.7618l1.171875 0l-0.0625 6.859375l2.734375 -6.046875l0.8125 0l0.53125 6.015625q1.28125 -3.484375 1.625 -4.578125q0.359375 -1.09375 0.546875 -1.96875l0.0625 -0.28125l1.25 0q-1.484375 4.328125 -3.21875 8.546875l-1.28125 0l-0.390625 -5.515625l-2.65625 5.515625l-1.21875 0l0.09375 -8.546875zm16.609375 1.78125l0 0.015625q-0.453125 -0.5 -0.78125 -0.671875q-0.328125 -0.171875 -0.8125 -0.171875q-0.671875 0 -1.34375 
0.34375q-0.65625 0.328125 -1.1875 1.015625q-0.515625 0.671875 -0.734375 1.703125l-0.953125 4.53125l-1.359375 0l1.8125 -8.546875l1.40625 0l-0.375 1.578125q0.53125 -0.84375 1.359375 -1.3125q0.84375 -0.46875 1.71875 -0.46875q1.375 0 2.0625 0.9375l-0.8125 1.046875zm0 0.015625q0.09375 0.109375 0.03125 0.09375q-0.046875 -0.015625 -0.09375 -0.03125l0.0625 -0.0625zm-0.21875 0.0625q0.015625 -0.0625 0.0625 -0.046875q0.046875 0 0.09375 0.046875l-0.0625 0.09375l-0.09375 -0.078125l0 -0.015625zm2.015625 6.671875l0.234375 -1.078125l2.1875 0l1.34375 -6.359375l-2.0625 0l0.234375 -1.09375l3.40625 0l-1.578125 7.453125l2.0 0l-0.234375 1.078125l-5.53125 0zm4.96875 -10.3125q-0.390625 0 -0.609375 -0.28125q-0.21875 -0.28125 -0.140625 -0.671875q0.09375 -0.40625 0.421875 -0.6875q0.328125 -0.28125 0.734375 -0.28125q0.390625 0 0.625 0.296875q0.234375 0.28125 0.140625 0.671875q-0.078125 0.390625 -0.4375 0.671875q-0.34375 0.28125 -0.734375 0.28125zm10.953125 9.53125q-1.4375 0.90625 -2.90625 0.90625q-1.421875 0 -1.828125 -0.84375q-0.40625 -0.859375 0.015625 -2.8125q0.0625 -0.3125 0.265625 -1.09375l0.765625 -2.796875l-1.875 0l0.234375 -1.109375l1.953125 0l0.609375 -2.265625l1.546875 -0.25l0.1875 -0.015625l0 0.109375q-0.171875 0.1875 -0.265625 0.328125q-0.09375 0.125 -0.171875 0.40625l-0.5625 1.6875l2.828125 0l-0.234375 1.109375l-2.90625 0l-0.78125 2.890625q-0.203125 0.75 -0.25 0.984375q-0.3125 1.5 -0.09375 2.03125q0.234375 0.53125 1.0 0.53125q0.5625 0 1.078125 -0.203125q0.515625 -0.21875 1.203125 -0.65625l0.1875 1.0625zm5.90625 0.96875q-1.90625 0 -2.78125 -1.15625q-0.875 -1.15625 -0.421875 -3.265625q0.296875 -1.40625 1.015625 -2.421875q0.734375 -1.015625 1.734375 -1.546875q1.015625 -0.53125 2.109375 -0.53125q1.546875 0 2.28125 1.03125q0.75 1.03125 0.328125 3.015625q-0.046875 0.203125 -0.171875 0.625l-6.0625 0q-0.25 1.5625 0.375 2.375q0.625 0.796875 1.859375 0.796875q1.34375 0 2.40625 -0.953125l0.59375 0.71875q-1.359375 1.3125 -3.265625 1.3125zm3.0 -5.296875q0.25 -1.203125 -0.203125 
-1.890625q-0.453125 -0.703125 -1.453125 -0.703125q-0.921875 0 -1.765625 0.65625q-0.828125 0.640625 -1.265625 1.9375l4.6875 0z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m647.6129 281.37335l-91.590515 42.897644" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m647.6129 281.37335l-86.15698 40.352753" fill-rule="evenodd"/><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m560.7554 320.23032l-3.4091187 3.4206238l4.8103027 -0.4290161z" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m298.2796 269.3983l257.7323 54.86615" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m298.2796 269.3983l251.86383 53.616882" fill-rule="evenodd"/><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m549.7995 324.63068l4.7825317 -0.6706238l-4.0947266 -2.5604248z" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m621.2573 135.94116l-65.25983 188.31497" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m621.2573 135.94116l-63.295166 182.64575" fill-rule="evenodd"/><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m556.4015 318.04605l0.07470703 4.828766l3.0466309 -3.7470703z" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m420.3832 219.50296l161.32285 0l0 52.503937l-161.32285 0z" fill-rule="evenodd"/><path fill="#000000" d="m460.40915 235.90733l-1.40625 6.609375q-0.265625 1.421875 -0.875 2.34375q-0.59375 0.90625 -1.390625 1.34375q-0.796875 0.421875 -1.65625 0.421875q-1.703125 0 -2.484375 -1.28125l0.921875 -0.9375l0.03125 -0.015625l0.015625 0.015625q0.40625 0.5625 0.78125 0.8125q0.390625 0.25 0.953125 0.25q0.953125 0 1.515625 -0.65625q0.5625 -0.671875 0.90625 -2.265625l1.40625 -6.640625l-2.25 0l0.234375 -1.109375l5.328125 0l-0.234375 
1.109375l-1.796875 0zm-6.640625 8.421875q-0.015625 0.078125 -0.078125 0.078125q-0.046875 -0.015625 -0.09375 -0.0625l0.109375 -0.09375l0.0625 0.078125zm-0.21875 0.0625q-0.078125 -0.09375 -0.03125 -0.078125q0.0625 0.015625 0.078125 0.03125l-0.046875 0.046875zm11.859375 2.1875q-1.140625 0 -1.921875 -0.5625q-0.78125 -0.5625 -1.0625 -1.5625q-0.28125 -1.015625 -0.015625 -2.296875q0.28125 -1.296875 0.984375 -2.296875q0.71875 -1.015625 1.734375 -1.578125q1.03125 -0.578125 2.171875 -0.578125q1.125 0 1.890625 0.578125q0.78125 0.5625 1.0625 1.578125q0.296875 1.0 0.015625 2.296875q-0.265625 1.28125 -0.984375 2.296875q-0.71875 1.0 -1.734375 1.5625q-1.015625 0.5625 -2.140625 0.5625zm0.234375 -1.125q0.71875 0 1.375 -0.421875q0.671875 -0.4375 1.15625 -1.1875q0.484375 -0.765625 0.6875 -1.71875q0.3125 -1.453125 -0.203125 -2.375q-0.515625 -0.921875 -1.609375 -0.921875q-1.109375 0 -2.015625 0.921875q-0.90625 0.921875 -1.21875 2.375q-0.203125 0.953125 -0.03125 1.71875q0.171875 0.75 0.640625 1.1875q0.484375 0.421875 1.21875 0.421875zm8.53125 1.171875q-1.34375 0 -1.9375 -1.0q-0.59375 -1.0 -0.15625 -2.96875l1.046875 -4.765625l1.296875 0l-1.015625 4.765625q-0.328125 1.53125 0.0625 2.21875q0.390625 0.671875 1.265625 0.671875q0.96875 0 1.796875 -0.75q0.828125 -0.765625 1.125 -2.203125l1.0 -4.703125l1.3125 0l-1.53125 7.203125q-0.09375 0.453125 -0.140625 0.75q-0.03125 0.28125 0.046875 0.578125l-1.296875 0q-0.0625 -0.28125 -0.03125 -0.578125q0.03125 -0.296875 0.125 -0.734375q-0.5625 0.71875 -1.34375 1.125q-0.78125 0.390625 -1.625 0.390625zm14.640625 -6.953125l0 0.015625q-0.453125 -0.5 -0.78125 -0.671875q-0.328125 -0.171875 -0.8125 -0.171875q-0.671875 0 -1.34375 0.34375q-0.65625 0.328125 -1.1875 1.015625q-0.515625 0.671875 -0.734375 1.703125l-0.953125 4.53125l-1.359375 0l1.8125 -8.546875l1.40625 0l-0.375 1.578125q0.53125 -0.84375 1.359375 -1.3125q0.84375 -0.46875 1.71875 -0.46875q1.375 0 2.0625 0.9375l-0.8125 1.046875zm0 0.015625q0.09375 0.109375 0.03125 0.09375q-0.046875 -0.015625 -0.09375 
-0.03125l0.0625 -0.0625zm-0.21875 0.0625q0.015625 -0.0625 0.0625 -0.046875q0.046875 0 0.09375 0.046875l-0.0625 0.09375l-0.09375 -0.078125l0 -0.015625zm3.3125 -1.859375l1.3125 0l-0.328125 1.515625q0.671875 -0.78125 1.53125 -1.25q0.875 -0.46875 1.734375 -0.46875q1.125 0 1.625 0.875q0.5 0.859375 0.109375 2.6875l-1.09375 5.171875l-1.3125 0l1.09375 -5.125q0.265625 -1.28125 -0.046875 -1.859375q-0.296875 -0.578125 -1.0 -0.578125q-0.578125 0 -1.234375 0.34375q-0.65625 0.328125 -1.171875 0.9375q-0.515625 0.609375 -0.671875 1.375l-1.046875 4.90625l-1.3125 0l1.8125 -8.53125zm12.578125 -0.1875q1.828125 0 2.578125 0.953125q0.765625 0.953125 0.28125 3.234375l-0.96875 4.53125l-1.453125 0l0.28125 -1.3125q-1.109375 1.515625 -3.234375 1.515625q-1.421875 0 -2.078125 -0.640625q-0.65625 -0.640625 -0.4375 -1.703125q0.1875 -0.90625 0.90625 -1.578125q0.734375 -0.671875 1.78125 -1.03125q1.0625 -0.359375 2.234375 -0.359375q1.0625 0 1.890625 0.109375q0.203125 -1.4375 -0.3125 -2.015625q-0.515625 -0.59375 -1.796875 -0.59375q-0.625 0 -1.265625 0.25q-0.640625 0.234375 -1.21875 0.703125l-0.484375 -0.859375q1.453125 -1.203125 3.296875 -1.203125zm-2.171875 7.890625q1.390625 0 2.359375 -0.796875q0.96875 -0.8125 1.375 -2.34375q-0.8125 -0.140625 -1.78125 -0.140625q-1.40625 0 -2.34375 0.46875q-0.921875 0.46875 -1.125 1.421875q-0.296875 1.390625 1.515625 1.390625zm6.5625 0.828125l0.234375 -1.078125l2.53125 0l2.1719055 -10.25l-2.4219055 0l0.234375 -1.078125l3.7812805 0l-2.40625 11.328125l2.5 0l-0.234375 1.078125l-6.3906555 0zm18.640656 0l2.46875 -11.640625l6.703125 0l-0.234375 1.140625l-5.390625 0l-0.78125 3.65625l4.34375 0l-0.234375 1.140625l-4.34375 0l-1.21875 5.703125l-1.3125 0zm9.34375 0l0.234375 -1.078125l2.53125 0l2.171875 -10.25l-2.421875 0l0.234375 -1.078125l3.78125 0l-2.40625 11.328125l2.5 0l-0.234375 1.078125l-6.390625 0zm11.9375 0.203125q-1.34375 0 -1.9375 -1.0q-0.59375 -1.0 -0.15625 -2.96875l1.046875 -4.765625l1.296875 0l-1.015625 4.765625q-0.328125 1.53125 0.0625 2.21875q0.390625 0.671875 
1.265625 0.671875q0.96875 0 1.796875 -0.75q0.828125 -0.765625 1.125 -2.203125l1.0 -4.703125l1.3125 0l-1.53125 7.203125q-0.09375 0.453125 -0.140625 0.75q-0.03125 0.28125 0.046875 0.578125l-1.296875 0q-0.0625 -0.28125 -0.03125 -0.578125q0.03125 -0.296875 0.125 -0.734375q-0.5625 0.71875 -1.34375 1.125q-0.78125 0.390625 -1.625 0.390625zm11.59375 -5.21875q1.515625 0.484375 2.046875 1.0625q0.53125 0.5625 0.34375 1.46875q-0.25 1.15625 -1.3125 1.921875q-1.0625 0.75 -2.78125 0.75q-2.171875 0 -3.328125 -1.34375l1.015625 -1.265625l0.015625 -0.015625l0.015625 0.015625q0.421875 0.75 0.953125 1.125q0.546875 0.359375 1.609375 0.359375q1.015625 0 1.671875 -0.359375q0.65625 -0.359375 0.78125 -1.0q0.109375 -0.53125 -0.28125 -0.890625q-0.390625 -0.359375 -1.515625 -0.75q-3.015625 -0.90625 -2.65625 -2.640625q0.21875 -1.0 1.15625 -1.578125q0.9375 -0.578125 2.453125 -0.578125q1.15625 0 1.875 0.328125q0.71875 0.328125 1.234375 1.015625l-0.96875 0.9375l-0.015625 0.015625q-0.265625 -0.59375 -0.921875 -0.9375q-0.640625 -0.34375 -1.390625 -0.34375q-0.796875 0 -1.375 0.28125q-0.578125 0.28125 -0.671875 0.78125q-0.109375 0.46875 0.328125 0.859375q0.453125 0.390625 1.71875 0.78125zm2.09375 -1.390625q0.015625 -0.09375 0.140625 0.03125l-0.078125 0.0625l-0.0625 -0.09375zm0.21875 -0.03125q0.03125 0.078125 0.015625 0.09375q-0.015625 0.015625 -0.046875 0q-0.03125 -0.015625 -0.046875 -0.03125l0.078125 -0.0625zm-6.09375 3.9375q-0.015625 0.078125 -0.171875 0l0.078125 -0.09375l0.09375 0.078125l0 0.015625zm-0.21875 0.0625q-0.046875 -0.09375 -0.03125 -0.09375q0.03125 0 0.078125 0.03125l-0.046875 0.0625zm10.984375 -9.96875l1.46875 0l-0.03125 0.125q-0.140625 0.125 -0.1875 0.25q-0.046875 0.125 -0.109375 0.453125l-0.984375 4.5625q0.671875 -0.78125 1.53125 -1.25q0.859375 -0.46875 1.640625 -0.46875q1.203125 0 1.703125 0.859375q0.515625 0.859375 0.125 2.703125l-1.09375 5.171875l-1.3125 0l1.09375 -5.125q0.265625 -1.28125 -0.046875 -1.859375q-0.296875 -0.578125 -1.0 -0.578125q-0.59375 0 -1.25 0.328125q-0.640625 
0.328125 -1.15625 0.9375q-0.515625 0.609375 -0.671875 1.390625l-1.046875 4.90625l-1.3125 0l2.640625 -12.40625z" fill-rule="nonzero"/><path fill="#ffffff" d="m341.60367 41.995308l0 0c0 -5.917248 4.796875 -10.714127 10.714142 -10.714127l42.855194 0l0 0c2.8415833 0 5.566742 1.1288071 7.57605 3.1380959c2.0092773 2.0092888 3.138092 4.7344666 3.138092 7.5760307l0 42.855217c0 5.917244 -4.796875 10.714119 -10.714142 10.714119l-42.855194 0c-5.917267 0 -10.714142 -4.796875 -10.714142 -10.714119z" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m341.60367 41.995308l0 0c0 -5.917248 4.796875 -10.714127 10.714142 -10.714127l42.855194 0l0 0c2.8415833 0 5.566742 1.1288071 7.57605 3.1380959c2.0092773 2.0092888 3.138092 4.7344666 3.138092 7.5760307l0 42.855217c0 5.917244 -4.796875 10.714119 -10.714142 10.714119l-42.855194 0c-5.917267 0 -10.714142 -4.796875 -10.714142 -10.714119z" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m371.98032 52.367798l22.11023 22.110237" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m374.66028 55.04773l16.750305 16.750374" fill-rule="evenodd"/><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m372.33386 52.72135c0.64242554 -0.6424141 1.68396 -0.6424141 2.3263855 0c0.64242554 0.6424103 0.64242554 1.6839676 0 2.3263817c-0.64242554 0.6424103 -1.68396 0.6424103 -2.3263855 0c-0.642395 -0.6424141 -0.642395 -1.6839714 0 -2.3263817z" fill-rule="nonzero"/><path stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m393.737 74.12448c-0.64242554 0.6424103 -1.68396 0.6424103 -2.3263855 0c-0.642395 -0.6424103 -0.642395 -1.6839676 0 -2.3263855c0.64242554 -0.6424103 1.68396 -0.6424103 2.3263855 0c0.64242554 0.6424179 0.64242554 1.6839752 0 2.3263855z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m374.6601 52.79299l-21.259827 21.259842" fill-rule="evenodd"/><path 
stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m374.6601 52.792995l-18.579865 18.579906" fill-rule="evenodd"/><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m357.24338 72.536095l-2.3263855 2.3263779l-2.326355 -2.3263779l2.326355 -2.3263855l2.3263855 2.3263855z" fill-rule="nonzero"/><path fill="#ffffff" d="m702.1522 41.993996l0 0c0 -5.917248 4.796875 -10.714127 10.714111 -10.714127l42.855225 0l0 0c2.8415527 0 5.5667725 1.1288071 7.57605 3.1380959c2.0092773 2.0092888 3.1380615 4.7344666 3.1380615 7.5760307l0 42.855217c0 5.917244 -4.796875 10.714119 -10.714111 10.714119l-42.855225 0c-5.9172363 0 -10.714111 -4.796875 -10.714111 -10.714119z" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m702.1522 41.993996l0 0c0 -5.917248 4.796875 -10.714127 10.714111 -10.714127l42.855225 0l0 0c2.8415527 0 5.5667725 1.1288071 7.57605 3.1380959c2.0092773 2.0092888 3.1380615 4.7344666 3.1380615 7.5760307l0 42.855217c0 5.917244 -4.796875 10.714119 -10.714111 10.714119l-42.855225 0c-5.9172363 0 -10.714111 -4.796875 -10.714111 -10.714119z" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m716.458 43.87436l22.11023 22.110237" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m719.13794 46.554295l16.750366 16.750366" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m716.8116 44.227913c0.642395 -0.6424141 1.68396 -0.6424141 2.326355 0c0.642395 0.6424103 0.642395 1.6839676 0 2.3263817c-0.642395 0.6424103 -1.68396 0.6424103 -2.326355 0c-0.64245605 -0.6424141 -0.64245605 -1.6839714 0 -2.3263817z" fill-rule="nonzero"/><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m738.21466 65.63104c-0.642395 0.6424103 -1.68396 0.6424103 -2.326355 0c-0.642395 -0.6424103 -0.642395 -1.6839676 0 -2.3263817c0.642395 -0.6424141 
1.68396 -0.6424141 2.326355 0c0.64245605 0.6424141 0.64245605 1.6839714 0 2.3263817z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m738.14307 61.709003l-21.259888 21.259846" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m738.14307 61.709003l-18.579895 18.579914" fill-rule="evenodd"/><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m720.7263 81.4521l-2.326355 2.3263855l-2.326416 -2.3263855l2.326416 -2.3263779l2.326355 2.3263779z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m735.96716 61.34155l21.259888 21.259846" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m735.96716 61.34155l18.579895 18.579914" fill-rule="evenodd"/><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m755.71027 78.75827l2.326416 2.3263779l-2.326416 2.3263855l-2.326355 -2.3263855l2.326355 -2.3263779z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m361.7454 127.52721c13.486847 7.3713837 49.210907 36.856964 80.92093 44.228348c31.710052 7.371399 81.85028 8.16481 109.33923 0c27.489014 -8.164795 46.328796 -40.82402 55.594604 -48.988815" fill-rule="evenodd"/><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m366.87332 130.64238l0.061187744 0.03933716c0.28182983 0.18249512 0.5683594 0.36921692 0.8595886 0.5599823c2.3294678 1.5260925 4.9551697 3.3113708 7.8257446 5.269394c5.7411804 3.9160461 12.462006 8.523178 19.75232 13.130295c14.580627 9.214233 31.439148 18.428467 47.29419 22.114166c31.709991 7.371399 81.85025 8.16481 109.3392 0c13.744507 -4.0823975 25.326721 -14.288406 34.63098 -24.494415c4.6521606 -5.102997 8.734863 -10.205994 12.233582 -14.543549c0.8746948 -1.0843811 1.7128906 -2.1209412 2.5143433 -3.0976868c0.40075684 -0.4883728 0.79229736 -0.961792 1.1746216 -1.4187622c0.19122314 -0.22849274 0.38006592 
<!-- remaining vector path data for cephfs-architecture.svg omitted --></g></svg> \ No newline at end of file
diff --git a/doc/cephfs/cephfs-io-path.rst b/doc/cephfs/cephfs-io-path.rst
new file mode 100644
index 000000000..8c7810ba0
--- /dev/null
+++ b/doc/cephfs/cephfs-io-path.rst
@@ -0,0 +1,50 @@
+=========================
+ Ceph File System IO Path
+=========================
+
+All file data in CephFS is stored as RADOS objects. CephFS clients can directly
+access RADOS to operate on file data. The MDS handles only metadata operations.
+
+To read or write a CephFS file, a client needs 'file read/write' capabilities
+for the corresponding inode. If the client does not have the required
+capabilities, it sends a 'cap message' to the MDS, telling the MDS what it
+wants. The MDS issues capabilities to the client when possible. Once the
+client has 'file read/write' capabilities, it can directly access RADOS to
+read/write file data. File data is stored as RADOS objects named
+<inode number>.<object index>. See the 'Data Striping' section of
+`Architecture`_ for more information. If a file is opened by only one client,
+the MDS also issues 'file cache/buffer' capabilities to that client. The
+'file cache' capability means that file reads can be satisfied by the client
+cache. The 'file buffer' capability means that file writes can be buffered in
+the client cache.
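The ``<inode number>.<object index>`` naming can be sketched as follows. This is a minimal illustration assuming the default 4 MiB object size and the zero-padded hexadecimal form commonly seen in object listings; actual layouts are configurable via file layouts, so treat the constants here as assumptions:

```python
# Sketch: mapping a file byte offset to the RADOS object that stores it.
# Assumes the default 4 MiB object size; real layouts are configurable.

OBJECT_SIZE = 4 * 1024 * 1024  # assumed default: 4 MiB

def rados_object_name(inode: int, offset: int,
                      object_size: int = OBJECT_SIZE) -> str:
    """Return the '<inode number>.<object index>' name (both hexadecimal)."""
    index = offset // object_size
    return f"{inode:x}.{index:08x}"

# The first 4 MiB of inode 0x10000000000 lives in one object; byte
# 5,000,000 falls into the next one.
print(rados_object_name(0x10000000000, 0))          # -> 10000000000.00000000
print(rados_object_name(0x10000000000, 5_000_000))  # -> 10000000000.00000001
```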
+
+
+.. ditaa::
+
+ +---------------------+
+ | Application |
+ +---------------------+
+ |
+ V
+ +---------------------+ Data IOs +--------------------+
+ | CephFS Library | ---------> | LibRados |
+ +---------------------+ +--------------------+
+ | |
+ | Metadata Operations | Objects Read/Write
+ V V
+ +---------------------+ +--------------------+
+ | MDSs | -=-------> | OSDs |
+ +---------------------+ +--------------------+
+
+
+ +----------------------+ +---------------------+
+ | CephFS kernel client | Data IOs | Ceph kernel library |
+ | (ceph.ko) | --------> | (libceph.ko) |
+ +----------------------+ +---------------------+
+ | |
+ | Metadata Operations | Objects Read/Write
+ v v
+ +---------------------+ +--------------------+
+ | MDSs | -=-------> | OSDs |
+ +---------------------+ +--------------------+
+
+.. _Architecture: ../architecture
diff --git a/doc/cephfs/cephfs-journal-tool.rst b/doc/cephfs/cephfs-journal-tool.rst
new file mode 100644
index 000000000..64a113091
--- /dev/null
+++ b/doc/cephfs/cephfs-journal-tool.rst
@@ -0,0 +1,238 @@
+
+cephfs-journal-tool
+===================
+
+Purpose
+-------
+
+If a CephFS journal has become damaged, expert intervention may be required
+to restore the file system to a working state.
+
+The ``cephfs-journal-tool`` utility provides functionality to aid experts in
+examining, modifying, and extracting data from journals.
+
+.. warning::
+
+ This tool is **dangerous** because it directly modifies internal
+ data structures of the file system. Make backups, be careful, and
+ seek expert advice. If you are unsure, do not run this tool.
+
+Syntax
+------
+
+::
+
+ cephfs-journal-tool journal <inspect|import|export|reset>
+ cephfs-journal-tool header <get|set>
+ cephfs-journal-tool event <get|splice|apply> [filter] <list|json|summary|binary>
+
+
+The tool has three modes: ``journal``, ``header`` and ``event``, operating on
+the whole journal, the journal header, and the individual events within the
+journal, respectively.
+
+Journal mode
+------------
+
+This should be your starting point to assess the state of a journal.
+
+* ``inspect`` reports on the health of the journal. This will identify any
+ missing objects or corruption in the stored journal. Note that this does
+ not identify inconsistencies in the events themselves, just that events are
+ present and can be decoded.
+
+* ``import`` and ``export`` read and write binary dumps of the journal
+ in a sparse file format. Pass the filename as the last argument. The
+ export operation may not work reliably for journals which are damaged (missing
+ objects).
+
+* ``reset`` truncates a journal, discarding any information within it.
+
+
+Example: journal inspect
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+ # cephfs-journal-tool journal inspect
+ Overall journal integrity: DAMAGED
+ Objects missing:
+ 0x1
+ Corrupt regions:
+ 0x400000-ffffffffffffffff
+
+Example: Journal import/export
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+ # cephfs-journal-tool journal export myjournal.bin
+ journal is 4194304~80643
+ read 80643 bytes at offset 4194304
+ wrote 80643 bytes at offset 4194304 to myjournal.bin
+ NOTE: this is a _sparse_ file; you can
+ $ tar cSzf myjournal.bin.tgz myjournal.bin
+ to efficiently compress it while preserving sparseness.
+
+ # cephfs-journal-tool journal import myjournal.bin
+ undump myjournal.bin
+ start 4194304 len 80643
+ writing header 200.00000000
+ writing 4194304~80643
+ done.
+
+.. note::
+
+ It is wise to use the ``journal export <backup file>`` command to make a journal backup
+ before any further manipulation.
+
+Header mode
+-----------
+
+* ``get`` outputs the current content of the journal header
+
+* ``set`` modifies an attribute of the header. Allowed attributes are
+ ``trimmed_pos``, ``expire_pos`` and ``write_pos``.
+
+Example: header get/set
+~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+ # cephfs-journal-tool header get
+ { "magic": "ceph fs volume v011",
+ "write_pos": 4274947,
+ "expire_pos": 4194304,
+ "trimmed_pos": 4194303,
+ "layout": { "stripe_unit": 4194304,
+ "stripe_count": 4194304,
+ "object_size": 4194304,
+ "cas_hash": 4194304,
+ "object_stripe_unit": 4194304,
+ "pg_pool": 4194304}}
+
+ # cephfs-journal-tool header set trimmed_pos 4194303
+ Updating trimmed_pos 0x400000 -> 0x3fffff
+ Successfully updated header.
+
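In a healthy journal the header positions are generally ordered ``trimmed_pos <= expire_pos <= write_pos``. A small sketch, using the field names from the ``header get`` output above, that checks this ordering before attempting a ``header set`` (the helper name is invented for illustration):

```python
# Sketch: sanity-check journal header positions before a 'header set'.
# In a healthy journal the positions advance in this order:
#   trimmed_pos <= expire_pos <= write_pos

def header_positions_ok(header: dict) -> bool:
    """Return True if the header's positions are consistently ordered."""
    return header["trimmed_pos"] <= header["expire_pos"] <= header["write_pos"]

# Values from the 'header get' example above.
hdr = {"write_pos": 4274947, "expire_pos": 4194304, "trimmed_pos": 4194303}
print(header_positions_ok(hdr))  # -> True
```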
+
+Event mode
+----------
+
+Event mode allows detailed examination and manipulation of the contents of the journal. It
+can operate on all events in the journal, or filters may be applied to restrict the set of events.
+
+The arguments following ``cephfs-journal-tool event`` consist of an action, optional filter
+parameters, and an output mode:
+
+::
+
+ cephfs-journal-tool event <action> [filter] <output>
+
+Actions:
+
+* ``get`` reads the events from the log
+* ``splice`` erases events or regions in the journal
+* ``apply`` extracts file system metadata from events and attempts to apply it to the metadata store.
+
+Filtering:
+
+* ``--range <int begin>..[int end]`` only include events within the range begin (inclusive) to end (exclusive)
+* ``--path <path substring>`` only include events referring to metadata containing the specified string
+* ``--inode <int>`` only include events referring to metadata containing the specified inode
+* ``--type <type string>`` only include events of this type
+* ``--frag <ino>[.frag id]`` only include events referring to this directory fragment
+* ``--dname <string>`` only include events referring to this named dentry within a directory
+  fragment (may only be used in conjunction with ``--frag``)
+* ``--client <int>`` only include events from this client session ID
+
+Filters may be combined on an AND basis (i.e. only events matching every specified filter are included).
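The AND semantics of combined filters can be pictured as intersecting predicates. A hypothetical sketch follows; the event fields, client ID, and paths here are invented for illustration and are not the tool's internal representation:

```python
# Sketch: AND-combination of event filters, modelled as predicates.
# Fields and values are illustrative only.

events = [
    {"type": "UPDATE", "client": 4115, "path": "dirbravo/filebravo1"},
    {"type": "UPDATE", "client": 4116, "path": "diralpha/filealpha1"},
    {"type": "SESSION", "client": 4115, "path": ""},
]

filters = [
    lambda e: e["type"] == "UPDATE",   # like --type UPDATE
    lambda e: e["client"] == 4115,     # like --client 4115 (hypothetical ID)
]

# An event is kept only if every active filter matches it (intersection).
matched = [e for e in events if all(f(e) for f in filters)]
print([e["path"] for e in matched])  # -> ['dirbravo/filebravo1']
```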
+
+Output modes:
+
+* ``binary``: write each event as a binary file, within a folder whose name is controlled by ``--path``
+* ``json``: write all events to a single file, as a JSON serialized list of objects
+* ``summary``: write a human readable summary of the events read to standard out
+* ``list``: write a human readable terse listing of the type of each event, and
+ which file paths the event affects.
+
+
+Example: event mode
+~~~~~~~~~~~~~~~~~~~
+
+::
+
+ # cephfs-journal-tool event get json --path output.json
+ Wrote output to JSON file 'output.json'
+
+ # cephfs-journal-tool event get summary
+ Events by type:
+ NOOP: 2
+ OPEN: 2
+ SESSION: 2
+ SUBTREEMAP: 1
+ UPDATE: 43
+
+ # cephfs-journal-tool event get list
+ 0x400000 SUBTREEMAP: ()
+ 0x400308 SESSION: ()
+ 0x4003de UPDATE: (setattr)
+ /
+ 0x40068b UPDATE: (mkdir)
+ diralpha
+ 0x400d1b UPDATE: (mkdir)
+ diralpha/filealpha1
+ 0x401666 UPDATE: (unlink_local)
+ stray0/10000000001
+ diralpha/filealpha1
+ 0x40228d UPDATE: (unlink_local)
+ diralpha
+ stray0/10000000000
+ 0x402bf9 UPDATE: (scatter_writebehind)
+ stray0
+ 0x403150 UPDATE: (mkdir)
+ dirbravo
+ 0x4037e0 UPDATE: (openc)
+ dirbravo/.filebravo1.swp
+ 0x404032 UPDATE: (openc)
+ dirbravo/.filebravo1.swpx
+
+ # cephfs-journal-tool event get --path filebravo1 list
+ 0x40785a UPDATE: (openc)
+ dirbravo/filebravo1
+ 0x4103ee UPDATE: (cap update)
+ dirbravo/filebravo1
+
+ # cephfs-journal-tool event splice --range 0x40f754..0x410bf1 summary
+ Events by type:
+ OPEN: 1
+ UPDATE: 2
+
+ # cephfs-journal-tool event apply --range 0x410bf1.. summary
+ Events by type:
+ NOOP: 1
+ SESSION: 1
+ UPDATE: 9
+
+ # cephfs-journal-tool event get --inode=1099511627776 list
+ 0x40068b UPDATE: (mkdir)
+ diralpha
+ 0x400d1b UPDATE: (mkdir)
+ diralpha/filealpha1
+ 0x401666 UPDATE: (unlink_local)
+ stray0/10000000001
+ diralpha/filealpha1
+ 0x40228d UPDATE: (unlink_local)
+ diralpha
+ stray0/10000000000
+
+ # cephfs-journal-tool event get --frag=1099511627776 --dname=filealpha1 list
+ 0x400d1b UPDATE: (mkdir)
+ diralpha/filealpha1
+ 0x401666 UPDATE: (unlink_local)
+ stray0/10000000001
+ diralpha/filealpha1
+
+ # cephfs-journal-tool event get binary --path bin_events
+ Wrote output to binary files in directory 'bin_events'
+
diff --git a/doc/cephfs/cephfs-mirroring.rst b/doc/cephfs/cephfs-mirroring.rst
new file mode 100644
index 000000000..fd00a1eef
--- /dev/null
+++ b/doc/cephfs/cephfs-mirroring.rst
@@ -0,0 +1,414 @@
+.. _cephfs-mirroring:
+
+=========================
+CephFS Snapshot Mirroring
+=========================
+
+CephFS supports asynchronous replication of snapshots to a remote CephFS file system via
+the `cephfs-mirror` tool. Snapshots are synchronized by mirroring snapshot data followed by
+creating a remote snapshot with the same name (for a given directory on the remote file system) as
+the source snapshot.
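The two-step sequence described above (mirror the snapshot data, then create a same-named remote snapshot) can be sketched as follows. All function names and bodies below are placeholders; the real work is performed internally by the `cephfs-mirror` daemon:

```python
# Sketch of the per-snapshot synchronization sequence. Snapshots of a
# CephFS directory appear under its '.snap' subdirectory; the helper
# bodies below are placeholders, not the daemon's real implementation.

def copy_snapshot_data(src_dir: str, snap_name: str) -> None:
    # Placeholder: transfer the file data of <src_dir>/.snap/<snap_name>
    # to the corresponding directory on the remote file system.
    pass

def create_remote_snapshot(dst_dir: str, snap_name: str) -> str:
    # Placeholder: creating <dst_dir>/.snap/<snap_name> on the remote
    # file system takes the same-named snapshot there.
    return f"{dst_dir}/.snap/{snap_name}"

def synchronize_snapshot(src_dir: str, dst_dir: str, snap_name: str) -> str:
    copy_snapshot_data(src_dir, snap_name)             # step 1: mirror data
    return create_remote_snapshot(dst_dir, snap_name)  # step 2: remote snap

print(synchronize_snapshot("/vol/projects", "/vol/projects", "snap-1"))
```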
+
+Requirements
+------------
+
+The primary (local) and secondary (remote) Ceph clusters should be running Ceph Pacific or a later release.
+
+.. _cephfs_mirroring_creating_users:
+
+Creating Users
+--------------
+
+Start by creating a Ceph user (on the primary/local cluster) for the `cephfs-mirror` daemon. This user
+requires write capability on the metadata pool to create RADOS objects (index objects)
+for watch/notify operation and read capability on the data pool(s)::
+
+ $ ceph auth get-or-create client.mirror mon 'profile cephfs-mirror' mds 'allow r' osd 'allow rw tag cephfs metadata=*, allow r tag cephfs data=*' mgr 'allow r'
+
+Create a Ceph user for each file system peer (on the secondary/remote cluster). This user needs
+to have full capabilities on the MDS (to take snapshots) and the OSDs::
+
+ $ ceph fs authorize <fs_name> client.mirror_remote / rwps
+
+This user will be supplied as part of the peer specification when adding a peer.
+
+Starting Mirror Daemon
+----------------------
+
+The mirror daemon should be spawned using `systemctl(1)` unit files::
+
+ $ systemctl enable cephfs-mirror@mirror
+ $ systemctl start cephfs-mirror@mirror
+
+The `cephfs-mirror` daemon can also be run in the foreground::
+
+ $ cephfs-mirror --id mirror --cluster site-a -f
+
+.. note:: The user specified here is `mirror`, the creation of which is
+ described in the :ref:`Creating Users<cephfs_mirroring_creating_users>`
+ section.
+
+Multiple ``cephfs-mirror`` daemons may be deployed for concurrent
+synchronization and high availability. Mirror daemons share the synchronization
+load using a simple ``M/N`` policy, where ``M`` is the number of directories
+and ``N`` is the number of ``cephfs-mirror`` daemons.
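+
+The ``M/N`` policy amounts to spreading the tracked directories evenly across
+the available daemons. Below is a minimal sketch of such an assignment
+(hypothetical; this is an illustration, not the module's actual rebalancing
+code):
+
```python
# Hypothetical sketch of an M/N policy: M directories are spread evenly
# across N cephfs-mirror daemons, so each daemon synchronizes roughly
# M/N directories.
def assign_directories(directories, daemons):
    """Round-robin directories across daemons; returns {daemon: [paths]}."""
    assignment = {d: [] for d in daemons}
    for i, path in enumerate(sorted(directories)):
        assignment[daemons[i % len(daemons)]].append(path)
    return assignment

mapping = assign_directories(
    ["/d0", "/d1", "/d2", "/d3", "/d4"],
    ["daemon.a", "daemon.b"],
)
```
+
+With five directories and two daemons, one daemon ends up synchronizing three
+directories and the other two.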
+
+When ``cephadm`` is used to manage a Ceph cluster, ``cephfs-mirror`` daemons can be
+deployed by running the following command:
+
+.. prompt:: bash $
+
+ ceph orch apply cephfs-mirror
+
+To deploy multiple mirror daemons, run a command of the following form:
+
+.. prompt:: bash $
+
+ ceph orch apply cephfs-mirror --placement=<placement-spec>
+
+For example, to deploy 3 `cephfs-mirror` daemons on different hosts, run the following command:
+
+.. prompt:: bash $
+
+   ceph orch apply cephfs-mirror --placement="3 host1,host2,host3"
+
+Interface
+---------
+
+The `Mirroring` module (manager plugin) provides interfaces for managing
+directory snapshot mirroring. These are (mostly) wrappers around monitor
+commands for managing file system mirroring, and they are the recommended
+control interface.
+
+Mirroring Module
+----------------
+
+The mirroring module is responsible for assigning directories to mirror daemons
+for synchronization. Multiple mirror daemons can be spawned to achieve
+concurrency in directory snapshot synchronization. When mirror daemons are
+spawned (or terminated), the mirroring module discovers the modified set of
+mirror daemons and rebalances directory assignments across the new set, thus
+providing high-availability.
+
+.. note:: Deploying a single mirror daemon is recommended. Running multiple
+ daemons is untested.
+
+The mirroring module is disabled by default. To enable the mirroring module,
+run the following command:
+
+.. prompt:: bash $
+
+ ceph mgr module enable mirroring
+
+The mirroring module provides a family of commands that can be used to control
+the mirroring of directory snapshots. To add or remove directories, mirroring
+must be enabled for a given file system. To enable mirroring for a given file
+system, run a command of the following form:
+
+.. prompt:: bash $
+
+ ceph fs snapshot mirror enable <fs_name>
+
+.. note:: "Mirroring module" commands are prefixed with ``fs snapshot mirror``.
+ This distinguishes them from "monitor commands", which are prefixed with ``fs
+ mirror``. Be sure (in this context) to use module commands.
+
+To disable mirroring for a given file system, run a command of the following form:
+
+.. prompt:: bash $
+
+ ceph fs snapshot mirror disable <fs_name>
+
+After mirroring is enabled, add a peer to which directory snapshots are to be
+mirrored. Peers are specified by the ``<client>@<cluster>`` format, which is
+referred to elsewhere in this document as the ``remote_cluster_spec``. Peers
+are assigned a unique-id (UUID) when added. See the :ref:`Creating
+Users<cephfs_mirroring_creating_users>` section for instructions that describe
+how to create Ceph users for mirroring.
+
+To add a peer, run a command of the following form:
+
+.. prompt:: bash $
+
+ ceph fs snapshot mirror peer_add <fs_name> <remote_cluster_spec> [<remote_fs_name>] [<remote_mon_host>] [<cephx_key>]
+
+``<remote_cluster_spec>`` is of the format ``client.<id>@<cluster_name>``.
+
+``<remote_fs_name>`` is optional, and defaults to `<fs_name>` (on the remote
+cluster).
+
+For this command to succeed, the remote cluster's Ceph configuration and user
+keyring must be available in the primary cluster. For example, if a user named
+``client.mirror_remote`` is created on the remote cluster with ``rwps``
+permissions for the remote file system named ``remote_fs`` (see `Creating
+Users`) and the remote cluster is named ``remote_ceph`` (that is, the remote
+cluster configuration file is named ``remote_ceph.conf`` on the primary
+cluster), run the following command to add the remote file system as a peer to
+the primary file system ``primary_fs``:
+
+.. prompt:: bash $
+
+ ceph fs snapshot mirror peer_add primary_fs client.mirror_remote@remote_ceph remote_fs
+
+To avoid having to maintain the remote cluster configuration file and remote
+ceph user keyring in the primary cluster, users can bootstrap a peer (which
+stores the relevant remote cluster details in the monitor config store on the
+primary cluster). See the :ref:`Bootstrap
+Peers<cephfs_mirroring_bootstrap_peers>` section.
+
+The ``peer_add`` command supports passing the remote cluster monitor address
+and the user key. However, bootstrapping a peer is the recommended way to add a
+peer.
+
+.. note:: Only a single peer is currently supported.
+
+To remove a peer, run a command of the following form:
+
+.. prompt:: bash $
+
+ ceph fs snapshot mirror peer_remove <fs_name> <peer_uuid>
+
+To list file system mirror peers, run a command of the following form:
+
+.. prompt:: bash $
+
+ ceph fs snapshot mirror peer_list <fs_name>
+
+To configure a directory for mirroring, run a command of the following form:
+
+.. prompt:: bash $
+
+ ceph fs snapshot mirror add <fs_name> <path>
+
+To stop mirroring directory snapshots, run a command of the following form:
+
+.. prompt:: bash $
+
+ ceph fs snapshot mirror remove <fs_name> <path>
+
+Only absolute directory paths are allowed.
+
+Paths are normalized by the mirroring module. This means that ``/a/b/../b`` is
+equivalent to ``/a/b``. Paths always start from the CephFS file-system root and
+not from the host system mount point.
+
+For example::
+
+ $ mkdir -p /d0/d1/d2
+ $ ceph fs snapshot mirror add cephfs /d0/d1/d2
+ {}
+ $ ceph fs snapshot mirror add cephfs /d0/d1/../d1/d2
+ Error EEXIST: directory /d0/d1/d2 is already tracked
+
+After a directory is added for mirroring, the additional mirroring of
+subdirectories or ancestor directories is disallowed::
+
+ $ ceph fs snapshot mirror add cephfs /d0/d1
+ Error EINVAL: /d0/d1 is a ancestor of tracked path /d0/d1/d2
+ $ ceph fs snapshot mirror add cephfs /d0/d1/d2/d3
+ Error EINVAL: /d0/d1/d2/d3 is a subtree of tracked path /d0/d1/d2
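+
+The normalization and overlap rules above can be sketched with standard path
+handling. The following is an illustrative approximation (not the mirroring
+module's actual code):
+
```python
import posixpath

def normalize(path):
    """Collapse '..' segments and duplicate separators, like the module does."""
    return posixpath.normpath(posixpath.join("/", path))

def add_tracked(tracked, candidate):
    """Add a directory, rejecting duplicates, ancestors and subtrees."""
    cand = normalize(candidate)
    for t in tracked:
        if cand == t:
            raise ValueError(f"directory {cand} is already tracked")
        if t.startswith(cand + "/"):
            raise ValueError(f"{cand} is an ancestor of tracked path {t}")
        if cand.startswith(t + "/"):
            raise ValueError(f"{cand} is a subtree of tracked path {t}")
    tracked.add(cand)
    return cand

tracked = set()
add_tracked(tracked, "/d0/d1/d2")   # accepted
```
+
+Adding ``/d0/d1/../d1/d2``, ``/d0/d1`` or ``/d0/d1/d2/d3`` to the same set
+then fails for the reasons shown in the command output above.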
+
+The :ref:`Mirroring Status<cephfs_mirroring_mirroring_status>` section contains
+information about the commands for checking the directory mapping (to mirror
+daemons) and for checking the directory distribution.
+
+.. _cephfs_mirroring_bootstrap_peers:
+
+Bootstrap Peers
+---------------
+
+Adding a peer (via `peer_add`) requires the peer cluster configuration and user keyring
+to be available in the primary cluster (manager host and hosts running the mirror daemon).
+This can be avoided by bootstrapping and importing a peer token. Peer bootstrap involves
+creating a bootstrap token on the peer cluster via::
+
+ $ ceph fs snapshot mirror peer_bootstrap create <fs_name> <client_entity> <site-name>
+
+e.g.::
+
+ $ ceph fs snapshot mirror peer_bootstrap create backup_fs client.mirror_remote site-remote
+ {"token": "eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ=="}
+
+`site-name` is a user-defined string that identifies the remote file system. In the
+context of the `peer_add` interface, `site-name` is the `cluster` name passed in
+`remote_cluster_spec`.
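+
+Since the bootstrap token is a base64-encoded JSON document, its contents (for
+example the site name and the target file system) can be inspected before it is
+imported. A quick sketch using the token from the example above:
+
```python
import base64
import json

# Token produced by `peer_bootstrap create` in the example above.
token = "eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ=="

# Decode the token into a dict describing the peer.
peer = json.loads(base64.b64decode(token))
print(peer["site_name"], peer["filesystem"])   # site-remote backup_fs
```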
+
+Import the bootstrap token in the primary cluster via::
+
+ $ ceph fs snapshot mirror peer_bootstrap import <fs_name> <token>
+
+e.g.::
+
+ $ ceph fs snapshot mirror peer_bootstrap import cephfs eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==
+
+
+.. _cephfs_mirroring_mirroring_status:
+
+Mirroring Status
+----------------
+
+The CephFS mirroring module provides a `mirror daemon status` interface for checking mirror daemon status::
+
+ $ ceph fs snapshot mirror daemon status
+ [
+ {
+ "daemon_id": 284167,
+ "filesystems": [
+ {
+ "filesystem_id": 1,
+ "name": "a",
+ "directory_count": 1,
+ "peers": [
+ {
+ "uuid": "02117353-8cd1-44db-976b-eb20609aa160",
+ "remote": {
+ "client_name": "client.mirror_remote",
+ "cluster_name": "ceph",
+ "fs_name": "backup_fs"
+ },
+ "stats": {
+ "failure_count": 1,
+ "recovery_count": 0
+ }
+ }
+ ]
+ }
+ ]
+ }
+ ]
+
+An entry per mirror daemon instance is displayed along with information such as configured
+peers and basic stats. For more detailed stats, use the admin socket interface as detailed
+below.
+
+CephFS mirror daemons provide admin socket commands for querying mirror status. To
+list the available mirror status commands, use::
+
+ $ ceph --admin-daemon /path/to/mirror/daemon/admin/socket help
+ {
+ ....
+ ....
+ "fs mirror status cephfs@360": "get filesystem mirror status",
+ ....
+ ....
+ }
+
+Commands prefixed with `fs mirror status` provide mirror status for mirror-enabled
+file systems. Note that `cephfs@360` is of the format `filesystem-name@filesystem-id`.
+This format is required because mirror daemons are asynchronously notified of
+file system mirror status (a file system can be deleted and recreated with the
+same name).
+
+This command currently provides minimal information regarding mirror status::
+
+ $ ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror status cephfs@360
+ {
+ "rados_inst": "192.168.0.5:0/1476644347",
+ "peers": {
+ "a2dc7784-e7a1-4723-b103-03ee8d8768f8": {
+ "remote": {
+ "client_name": "client.mirror_remote",
+ "cluster_name": "site-a",
+ "fs_name": "backup_fs"
+ }
+ }
+ },
+ "snap_dirs": {
+ "dir_count": 1
+ }
+ }
+
+The `peers` section in the command output above shows the peer information,
+including the unique peer-id (UUID) and specification. The peer-id is required
+when removing an existing peer, as described in the `Mirroring Module` section.
+
+Commands prefixed with `fs mirror peer status` provide peer synchronization status.
+This command is of the format `filesystem-name@filesystem-id peer-uuid`::
+
+ $ ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror peer status cephfs@360 a2dc7784-e7a1-4723-b103-03ee8d8768f8
+ {
+ "/d0": {
+ "state": "idle",
+ "last_synced_snap": {
+ "id": 120,
+ "name": "snap1",
+ "sync_duration": 0.079997898999999997,
+ "sync_time_stamp": "274900.558797s"
+ },
+ "snaps_synced": 2,
+ "snaps_deleted": 0,
+ "snaps_renamed": 0
+ }
+ }
+
+Synchronization stats including `snaps_synced`, `snaps_deleted` and `snaps_renamed` are reset
+on daemon restart and/or when a directory is reassigned to another mirror daemon (when
+multiple mirror daemons are deployed).
+
+A directory can be in one of the following states:
+
+- `idle`: The directory is not currently being synchronized
+- `syncing`: The directory is currently being synchronized
+- `failed`: The directory has hit the upper limit of consecutive failures
+
+When a directory experiences a configured number of consecutive synchronization
+failures, the mirror daemon marks it as `failed`. Synchronization for failed
+directories is retried less frequently. The number of consecutive failures before
+a directory is marked as failed is controlled by the
+`cephfs_mirror_max_consecutive_failures_per_directory` configuration option
+(default: 10), and the retry interval for failed directories is controlled by the
+`cephfs_mirror_retry_failed_directories_interval` configuration option (default: 60s).
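+
+The failure handling described above amounts to a small per-directory state
+machine. Below is a simplified sketch (hypothetical; the daemon tracks more
+state than this, and the names are invented for illustration):
+
```python
MAX_FAILURES = 10        # cephfs_mirror_max_consecutive_failures_per_directory
RETRY_INTERVAL = 60.0    # cephfs_mirror_retry_failed_directories_interval (s)

class DirSyncState:
    """Tracks consecutive synchronization failures for one directory."""

    def __init__(self):
        self.failures = 0
        self.state = "idle"
        self.last_attempt = 0.0

    def record_result(self, ok, now):
        self.last_attempt = now
        if ok:
            # any success clears the consecutive-failure count
            self.failures = 0
            self.state = "idle"
        else:
            self.failures += 1
            if self.failures >= MAX_FAILURES:
                self.state = "failed"

    def should_retry(self, now):
        if self.state != "failed":
            return True
        # failed directories are retried less frequently
        return now - self.last_attempt >= RETRY_INTERVAL

d = DirSyncState()
for _ in range(MAX_FAILURES):
    d.record_result(ok=False, now=0.0)
print(d.state)                   # failed
print(d.should_retry(now=30.0))  # False (retry interval not yet elapsed)
print(d.should_retry(now=61.0))  # True
```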
+
+For example, adding a regular file for synchronization results in a `failed` status::
+
+ $ ceph fs snapshot mirror add cephfs /f0
+ $ ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror peer status cephfs@360 a2dc7784-e7a1-4723-b103-03ee8d8768f8
+ {
+ "/d0": {
+ "state": "idle",
+ "last_synced_snap": {
+ "id": 120,
+ "name": "snap1",
+ "sync_duration": 0.079997898999999997,
+ "sync_time_stamp": "274900.558797s"
+ },
+ "snaps_synced": 2,
+ "snaps_deleted": 0,
+ "snaps_renamed": 0
+ },
+ "/f0": {
+ "state": "failed",
+ "snaps_synced": 0,
+ "snaps_deleted": 0,
+ "snaps_renamed": 0
+ }
+ }
+
+This allows a user to add a non-existent directory for synchronization. The mirror daemon
+will mark such a directory as failed and retry (less frequently). When the directory is
+created, the mirror daemon will clear the failed state upon successful synchronization.
+
+When mirroring is disabled, the respective `fs mirror status` command for the file system
+will not show up in command help.
+
+Configuration Options
+---------------------
+
+.. confval:: cephfs_mirror_max_concurrent_directory_syncs
+.. confval:: cephfs_mirror_action_update_interval
+.. confval:: cephfs_mirror_restart_mirror_on_blocklist_interval
+.. confval:: cephfs_mirror_max_snapshot_sync_per_cycle
+.. confval:: cephfs_mirror_directory_scan_interval
+.. confval:: cephfs_mirror_max_consecutive_failures_per_directory
+.. confval:: cephfs_mirror_retry_failed_directories_interval
+.. confval:: cephfs_mirror_restart_mirror_on_failure_interval
+.. confval:: cephfs_mirror_mount_timeout
+
+Re-adding Peers
+---------------
+
+When re-adding (reassigning) a peer to a file system in another cluster, ensure that
+all mirror daemons have stopped synchronization to the peer. This can be checked
+via the `fs mirror status` admin socket command (the peer UUID should not show up
+in the command output). It is also recommended to purge synchronized directories
+from the peer before re-adding it to another file system (especially those directories
+which might exist in the new primary file system). This is not required if you are
+re-adding a peer to the same primary file system it was earlier synchronized from.
diff --git a/doc/cephfs/cephfs-top.png b/doc/cephfs/cephfs-top.png
new file mode 100644
index 000000000..5215eb98c
--- /dev/null
+++ b/doc/cephfs/cephfs-top.png
Binary files differ
diff --git a/doc/cephfs/cephfs-top.rst b/doc/cephfs/cephfs-top.rst
new file mode 100644
index 000000000..49439a4bd
--- /dev/null
+++ b/doc/cephfs/cephfs-top.rst
@@ -0,0 +1,116 @@
+.. _cephfs-top:
+
+==================
+CephFS Top Utility
+==================
+
+CephFS provides a `top(1)`-like utility to display various Ceph file system metrics
+in real time. `cephfs-top` is a curses-based Python script that uses the `stats`
+plugin in the Ceph Manager to fetch (and display) the metrics.
+
+Manager Plugin
+==============
+
+Ceph file system clients periodically forward various metrics to the Ceph Metadata
+Servers (MDSs). Each active MDS forwards its respective set of metrics to MDS rank
+zero, where the metrics are aggregated and forwarded to the Ceph Manager.
+
+Metrics are divided into two categories: global and per-MDS. Global metrics
+represent a set of metrics for the file system as a whole (e.g., client read
+latency), whereas per-MDS metrics are for a particular MDS rank (e.g., the number
+of subtrees handled by an MDS).
+
+.. note:: Currently, only global metrics are tracked.
+
+The `stats` plugin is disabled by default and should be enabled via::
+
+ $ ceph mgr module enable stats
+
+Once enabled, Ceph Filesystem metrics can be fetched via::
+
+ $ ceph fs perf stats
+
+The output format is JSON and contains fields as follows:
+
+- `version`: Version of stats output
+- `global_counters`: List of global performance metrics
+- `counters`: List of per-mds performance metrics
+- `client_metadata`: Ceph Filesystem client metadata
+- `global_metrics`: Global performance counters
+- `metrics`: Per-MDS performance counters (currently, empty) and delayed ranks
+
+.. note:: `delayed_ranks` is the set of active MDS ranks that are reporting stale
+   metrics. This can happen in cases such as a (temporary) network issue between
+   MDS rank zero and the other active MDSs.
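+
+As an illustration, stale ranks can be picked out of the JSON command output
+programmatically. The sample below is hypothetical and trimmed to the fields
+listed above; the exact layout may vary between releases:
+
```python
import json

# Hypothetical, reduced sample of `ceph fs perf stats` output.
sample = json.loads("""
{
  "version": 2,
  "global_counters": ["cap_hit", "read_latency", "write_latency"],
  "counters": [],
  "client_metadata": {"client.1234": {"mount_root": "/"}},
  "global_metrics": {"client.1234": [[0, 0], [0, 0], [0, 0]]},
  "metrics": {"delayed_ranks": [1]}
}
""")

def stale_ranks(stats):
    """Return the set of active MDS ranks reporting stale metrics."""
    return set(stats.get("metrics", {}).get("delayed_ranks", []))

print(stale_ranks(sample))   # {1}
```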
+
+Metrics can be fetched for a particular client and/or for a set of active MDSs. To fetch metrics
+for a particular client (e.g., for client-id: 1234)::
+
+ $ ceph fs perf stats --client_id=1234
+
+To fetch metrics only for a subset of active MDSs (e.g., MDS rank 1 and 2)::
+
+ $ ceph fs perf stats --mds_rank=1,2
+
+`cephfs-top`
+============
+
+The `cephfs-top` utility relies on the `stats` plugin to fetch performance metrics
+and display them in a `top(1)`-like format. `cephfs-top` is available as part of the
+`cephfs-top` package.
+
+By default, `cephfs-top` uses the `client.fstop` user to connect to a Ceph cluster::
+
+ $ ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r'
+ $ cephfs-top
+
+Command-Line Options
+--------------------
+
+To use a non-default user (other than `client.fstop`), use::
+
+ $ cephfs-top --id <name>
+
+By default, `cephfs-top` connects to cluster name `ceph`. To use a non-default cluster name::
+
+ $ cephfs-top --cluster <cluster>
+
+`cephfs-top` refreshes stats every second by default. To choose a different refresh interval use::
+
+ $ cephfs-top -d <seconds>
+
+The refresh interval must be a positive integer.
+
+To dump the metrics to stdout without creating a curses display use::
+
+ $ cephfs-top --dump
+
+To dump the metrics of the given filesystem to stdout without creating a curses display use::
+
+ $ cephfs-top --dumpfs <fs_name>
+
+Interactive Commands
+--------------------
+
+1. m : Filesystem selection
+ Displays a menu of filesystems for selection.
+
+2. s : Sort field selection
+ Designates the sort field. 'cap_hit' is the default.
+
+3. l : Client limit
+ Sets the limit on the number of clients to be displayed.
+
+4. r : Reset
+ Resets the sort field and limit value to the default.
+
+5. q : Quit
+ Exit the utility if you are at the home screen (all filesystem info),
+ otherwise escape back to the home screen.
+
+The metrics display can be scrolled using the Arrow Keys, PgUp/PgDn, Home/End and mouse.
+
+Sample screenshot running `cephfs-top` with 2 filesystems:
+
+.. image:: cephfs-top.png
+
+.. note:: The minimum compatible Python version for `cephfs-top` is 3.6.0. `cephfs-top` is supported on RHEL 8, Ubuntu 18.04, CentOS 8, and later.
diff --git a/doc/cephfs/client-auth.rst b/doc/cephfs/client-auth.rst
new file mode 100644
index 000000000..a7dea5251
--- /dev/null
+++ b/doc/cephfs/client-auth.rst
@@ -0,0 +1,261 @@
+================================
+CephFS Client Capabilities
+================================
+
+Use Ceph authentication capabilities to restrict your file system clients
+to the lowest possible level of authority needed.
+
+.. note:: Path restriction and layout modification restriction are new features
+ in the Jewel release of Ceph.
+
+.. note:: Using Erasure Coded (EC) pools with CephFS is supported only with the
+   BlueStore backend. EC pools cannot be used as metadata pools, and overwrites
+   must be enabled on EC data pools.
+
+
+Path restriction
+================
+
+By default, clients are not restricted in what paths they are allowed to
+mount. Further, when clients mount a subdirectory, e.g., ``/home/user``, the
+MDS does not by default verify that subsequent operations are ‘locked’ within
+that directory.
+
+To restrict clients to only mount and work within a certain directory, use
+path-based MDS authentication capabilities.
+
+Note that this restriction *only* impacts the filesystem hierarchy -- the metadata
+tree managed by the MDS. Clients will still be able to access the underlying
+file data in RADOS directly. To segregate clients fully, you must also isolate
+untrusted clients in their own RADOS namespace. You can place a client's
+filesystem subtree in a particular namespace using `file layouts`_ and then
+restrict their RADOS access to that namespace using `OSD capabilities`_
+
+.. _file layouts: ./file-layouts
+.. _OSD capabilities: ../rados/operations/user-management/#authorization-capabilities
+
+Syntax
+------
+
+To grant rw access to the specified directory only, specify that directory while
+creating the key for the client, using the following syntax::
+
+ ceph fs authorize <fs_name> client.<client_id> <path-in-cephfs> rw
+
+For example, to restrict client ``foo`` to writing only in the ``bar``
+directory of file system ``cephfs_a``, run::
+
+    ceph fs authorize cephfs_a client.foo / r /bar rw
+
+This results in::
+
+    client.foo
+      key: *key*
+      caps: [mds] allow r, allow rw path=/bar
+      caps: [mon] allow r
+      caps: [osd] allow rw tag cephfs data=cephfs_a
+
+To completely restrict the client to the ``bar`` directory, omit the
+root directory ::
+
+ ceph fs authorize cephfs_a client.foo /bar rw
+
+Note that if a client's read access is restricted to a path, they will only
+be able to mount the file system when specifying a readable path in the
+mount command (see below).
+
+Supplying ``all`` or ``*`` as the file system name will grant access to every
+file system. Note that it is usually necessary to quote ``*`` to protect it
+from the shell.
+
+See `User Management - Add a User to a Keyring`_ for additional details on
+user management.
+
+To restrict a client to a specified sub-directory only, specify that directory
+while mounting, using the following syntax::
+
+    ceph-fuse -n client.<client_id> <mount-path> -r <directory_to_be_mounted>
+
+For example, to restrict client ``foo`` to the ``/bar`` directory (mounted at
+``mnt``), use::
+
+ ceph-fuse -n client.foo mnt -r /bar
+
+Free space reporting
+--------------------
+
+By default, when a client is mounting a sub-directory, the used space (``df``)
+will be calculated from the quota on that sub-directory, rather than reporting
+the overall amount of space used on the cluster.
+
+If you would like the client to report the overall usage of the file system,
+and not just the quota usage on the sub-directory mounted, then set the
+following config option on the client::
+
+ client quota df = false
+
+If quotas are not enabled, or no quota is set on the sub-directory mounted,
+then the overall usage of the file system will be reported irrespective of
+the value of this setting.
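+
+The resulting behaviour can be summarized as a small decision rule. The sketch
+below is illustrative only (the function name and signature are invented, not
+the client's actual code):
+
```python
def df_total(quota_bytes, cluster_total, client_quota_df=True):
    """Total capacity reported by df for a mounted sub-directory.

    quota_bytes is the quota on the mounted sub-directory, or None if no
    quota is set; client_quota_df mirrors the `client quota df` option.
    """
    if client_quota_df and quota_bytes:
        return quota_bytes       # report usage against the quota
    return cluster_total         # report overall file system usage

gib = 1 << 30
print(df_total(10 * gib, 500 * gib))                         # quota-based
print(df_total(None, 500 * gib))                             # no quota set
print(df_total(10 * gib, 500 * gib, client_quota_df=False))  # option disabled
```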
+
+Layout and Quota restriction (the 'p' flag)
+===========================================
+
+To set layouts or quotas, clients require the 'p' flag in addition to 'rw'.
+This restricts all the attributes that are set by special extended attributes
+with a "ceph." prefix, as well as restricting other means of setting
+these fields (such as openc operations with layouts).
+
+For example, in the following snippet client.0 can modify layouts and quotas
+on the file system cephfs_a, but client.1 cannot::
+
+ client.0
+ key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
+ caps: [mds] allow rwp
+ caps: [mon] allow r
+ caps: [osd] allow rw tag cephfs data=cephfs_a
+
+ client.1
+ key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
+ caps: [mds] allow rw
+ caps: [mon] allow r
+ caps: [osd] allow rw tag cephfs data=cephfs_a
+
+
+Snapshot restriction (the 's' flag)
+===========================================
+
+To create or delete snapshots, clients require the 's' flag in addition to
+'rw'. Note that when the capability string also contains the 'p' flag, the 's'
+flag must appear after it (all flags except 'rw' must be specified in
+alphabetical order).
+
+For example, in the following snippet client.0 can create or delete snapshots
+in the ``bar`` directory of file system ``cephfs_a``::
+
+ client.0
+ key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
+ caps: [mds] allow rw, allow rws path=/bar
+ caps: [mon] allow r
+ caps: [osd] allow rw tag cephfs data=cephfs_a
+
+
+.. _User Management - Add a User to a Keyring: ../../rados/operations/user-management/#add-a-user-to-a-keyring
+
+Network restriction
+===================
+
+For example, the following capabilities restrict client ``foo`` to connecting only from the ``10.0.0.0/8`` network::
+
+ client.foo
+ key: *key*
+ caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8
+ caps: [mon] allow r network 10.0.0.0/8
+ caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8
+
+The optional ``{network/prefix}`` is a standard network name and
+prefix length in CIDR notation (e.g., ``10.3.0.0/16``). If present,
+the use of this capability is restricted to clients connecting from
+this network.
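+
+Checking whether a client address falls within such a prefix is plain CIDR
+matching, as sketched below (illustrative only; Ceph performs this check
+internally, and the function shown here is invented):
+
```python
import ipaddress

def cap_applies(client_addr, network=None):
    """True if a capability restricted to `network` applies to client_addr."""
    if network is None:
        return True    # capability without a network restriction
    return ipaddress.ip_address(client_addr) in ipaddress.ip_network(network)

print(cap_applies("10.3.1.7", "10.3.0.0/16"))     # True
print(cap_applies("192.168.1.7", "10.0.0.0/8"))   # False
print(cap_applies("192.168.1.7"))                 # True (unrestricted)
```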
+
+.. _fs-authorize-multifs:
+
+File system Information Restriction
+===================================
+
+If desired, the monitor cluster can present a limited view of the file systems
+available. In this case, the monitor cluster will only inform clients about
+file systems specified by the administrator. Other file systems will not be
+reported and commands affecting them will fail as if the file systems do
+not exist.
+
+Consider the following example. The Ceph cluster has 2 file systems::
+
+ $ ceph fs ls
+ name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
+ name: cephfs2, metadata pool: cephfs2_metadata, data pools: [cephfs2_data ]
+
+But we authorize client ``someuser`` for only one FS::
+
+ $ ceph fs authorize cephfs client.someuser / rw
+ [client.someuser]
+ key = AQAmthpf89M+JhAAiHDYQkMiCq3x+J0n9e8REQ==
+ $ cat ceph.client.someuser.keyring
+ [client.someuser]
+ key = AQAmthpf89M+JhAAiHDYQkMiCq3x+J0n9e8REQ==
+ caps mds = "allow rw fsname=cephfs"
+ caps mon = "allow r fsname=cephfs"
+ caps osd = "allow rw tag cephfs data=cephfs"
+
+And the client can only see the FS that it has authorization for::
+
+ $ ceph fs ls -n client.someuser -k ceph.client.someuser.keyring
+ name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
+
+Standby MDS daemons will always be displayed. Note that the information about
+restricted MDS daemons and file systems may become available by other means,
+such as ``ceph health detail``.
+
+MDS communication restriction
+=============================
+
+By default, user applications may communicate with any MDS, regardless of
+whether they are allowed to modify data on an associated file system (see
+`Path restriction` above). A client's communication can be restricted to the
+MDS daemons associated with particular file system(s) by adding MDS caps for
+that particular file system. Consider the following example, where the Ceph
+cluster has 2 file systems::
+
+ $ ceph fs ls
+ name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
+ name: cephfs2, metadata pool: cephfs2_metadata, data pools: [cephfs2_data ]
+
+Client ``someuser`` is authorized only for one FS::
+
+ $ ceph fs authorize cephfs client.someuser / rw
+ [client.someuser]
+ key = AQBPSARfg8hCJRAAEegIxjlm7VkHuiuntm6wsA==
+ $ ceph auth get client.someuser > ceph.client.someuser.keyring
+ exported keyring for client.someuser
+ $ cat ceph.client.someuser.keyring
+ [client.someuser]
+ key = AQBPSARfg8hCJRAAEegIxjlm7VkHuiuntm6wsA==
+ caps mds = "allow rw fsname=cephfs"
+ caps mon = "allow r"
+ caps osd = "allow rw tag cephfs data=cephfs"
+
+Mounting the authorized file system ``cephfs`` (here at mount point ``/mnt/cephfs1``) with ``someuser`` works::
+
+ $ sudo ceph-fuse /mnt/cephfs1 -n client.someuser -k ceph.client.someuser.keyring --client-fs=cephfs
+ ceph-fuse[96634]: starting ceph client
+ ceph-fuse[96634]: starting fuse
+ $ mount | grep ceph-fuse
+ ceph-fuse on /mnt/cephfs1 type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
+
+But mounting ``cephfs2`` does not::
+
+ $ sudo ceph-fuse /mnt/cephfs2 -n client.someuser -k ceph.client.someuser.keyring --client-fs=cephfs2
+ ceph-fuse[96599]: starting ceph client
+ ceph-fuse[96599]: ceph mount failed with (1) Operation not permitted
+
+Root squash
+===========
+
+The ``root squash`` feature is implemented as a safety measure to prevent
+scenarios such as an accidental ``sudo rm -rf /path``. You can enable
+``root_squash`` mode in MDS caps to disallow clients with uid=0 or gid=0 from
+performing write operations (e.g., rm, rmdir, rmsnap, mkdir, mksnap).
+However, unlike in some other file systems, this mode still allows read
+operations by a root client.
+
+The following example enables root_squash for a file system, except within the
+``/volumes`` directory tree::
+
+ $ ceph fs authorize a client.test_a / rw root_squash /volumes rw
+ $ ceph auth get client.test_a
+ [client.test_a]
+ key = AQBZcDpfEbEUKxAADk14VflBXt71rL9D966mYA==
+ caps mds = "allow rw fsname=a root_squash, allow rw fsname=a path=/volumes"
+ caps mon = "allow r fsname=a"
+ caps osd = "allow rw tag cephfs data=a"
diff --git a/doc/cephfs/client-config-ref.rst b/doc/cephfs/client-config-ref.rst
new file mode 100644
index 000000000..5167906b3
--- /dev/null
+++ b/doc/cephfs/client-config-ref.rst
@@ -0,0 +1,75 @@
+Client Configuration
+====================
+
+Updating Client Configuration
+-----------------------------
+
+Certain client configurations can be applied at runtime. To check whether a configuration option can be applied (taken into effect by a client) at runtime, use the `config help` command::
+
+ ceph config help debug_client
+ debug_client - Debug level for client
+ (str, advanced) Default: 0/5
+ Can update at runtime: true
+
+ The value takes the form 'N' or 'N/M' where N and M are values between 0 and 99. N is the debug level to log (all values below this are included), and M is the level to gather and buffer in memory. In the event of a crash, the most recent items <= M are dumped to the log file.
+
+`config help` tells you whether a given configuration option can be applied at runtime, along with its default value and a description.
+
+To update a configuration option at runtime, use the `config set` command::
+
+ ceph config set client debug_client 20/20
+
+Note that this changes a given configuration for all clients.
+
+To check currently configured options, use the `config get` command::
+
+ ceph config get client
+ WHO MASK LEVEL OPTION VALUE RO
+ client advanced debug_client 20/20
+ global advanced osd_pool_default_min_size 1
+ global advanced osd_pool_default_size 3
+
+Client Config Reference
+------------------------
+
+.. confval:: client_acl_type
+.. confval:: client_cache_mid
+.. confval:: client_cache_size
+.. confval:: client_caps_release_delay
+.. confval:: client_debug_force_sync_read
+.. confval:: client_dirsize_rbytes
+.. confval:: client_max_inline_size
+.. confval:: client_metadata
+.. confval:: client_mount_gid
+.. confval:: client_mount_timeout
+.. confval:: client_mount_uid
+.. confval:: client_mountpoint
+.. confval:: client_oc
+.. confval:: client_oc_max_dirty
+.. confval:: client_oc_max_dirty_age
+.. confval:: client_oc_max_objects
+.. confval:: client_oc_size
+.. confval:: client_oc_target_dirty
+.. confval:: client_permissions
+.. confval:: client_quota_df
+.. confval:: client_readahead_max_bytes
+.. confval:: client_readahead_max_periods
+.. confval:: client_readahead_min
+.. confval:: client_reconnect_stale
+.. confval:: client_snapdir
+.. confval:: client_tick_interval
+.. confval:: client_use_random_mds
+.. confval:: fuse_default_permissions
+.. confval:: fuse_max_write
+.. confval:: fuse_disable_pagecache
+
+Developer Options
+#################
+
+.. important:: These options are internal. They are listed here only to complete the list of options.
+
+.. confval:: client_debug_getattr_caps
+.. confval:: client_debug_inject_tick_delay
+.. confval:: client_inject_fixed_oldest_tid
+.. confval:: client_inject_release_failure
+.. confval:: client_trace
diff --git a/doc/cephfs/createfs.rst b/doc/cephfs/createfs.rst
new file mode 100644
index 000000000..4a282e562
--- /dev/null
+++ b/doc/cephfs/createfs.rst
@@ -0,0 +1,135 @@
+=========================
+Create a Ceph file system
+=========================
+
+Creating pools
+==============
+
+A Ceph file system requires at least two RADOS pools, one for data and one for metadata.
+There are important considerations when planning these pools:
+
+- We recommend configuring *at least* 3 replicas for the metadata pool,
+ as data loss in this pool can render the entire file system inaccessible.
+ Configuring 4 would not be extreme, especially since the metadata pool's
+ capacity requirements are quite modest.
+- We recommend the fastest feasible low-latency storage devices (NVMe, Optane,
+ or at the very least SAS/SATA SSD) for the metadata pool, as this will
+ directly affect the latency of client file system operations.
+- We strongly suggest that the CephFS metadata pool be provisioned on dedicated
+ SSD / NVMe OSDs. This ensures that high client workload does not adversely
+ impact metadata operations. See :ref:`device_classes` to configure pools this
+ way.
+- The data pool used to create the file system is the "default" data pool and
+ the location for storing all inode backtrace information, which is used for hard link
+ management and disaster recovery. For this reason, all CephFS inodes
+ have at least one object in the default data pool. If erasure-coded
+ pools are planned for file system data, it is best to configure the default as
+ a replicated pool to improve small-object write and
+ read performance when updating backtraces. Separately, another erasure-coded
+ data pool can be added (see also :ref:`ecpool`) that can be used on an entire
+ hierarchy of directories and files (see also :ref:`file-layouts`).
+
+Refer to :doc:`/rados/operations/pools` to learn more about managing pools. For
+example, to create two pools with default settings for use with a file system, you
+might run the following commands:
+
+.. code:: bash
+
+ $ ceph osd pool create cephfs_data
+ $ ceph osd pool create cephfs_metadata
+
+The metadata pool will typically hold at most a few gigabytes of data. For
+this reason, a smaller PG count is usually recommended. 64 or 128 is commonly
+used in practice for large clusters.
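+
+If the pools were created with default settings, the PG count and replica
+count of the metadata pool can be adjusted afterwards. For example, using
+the pool name created above (the values shown are illustrative):
+
+.. code:: bash
+
+   $ ceph osd pool set cephfs_metadata pg_num 64
+   $ ceph osd pool set cephfs_metadata size 3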
+
+.. note:: The names of the file systems, metadata pools, and data pools can
+ only have characters in the set [a-zA-Z0-9\_-.].
+
+Creating a file system
+======================
+
+Once the pools are created, you may enable the file system using the ``fs new`` command:
+
+.. code:: bash
+
+ $ ceph fs new <fs_name> <metadata> <data> [--force] [--allow-dangerous-metadata-overlay] [<fscid:int>] [--recover]
+
+This command creates a new file system with the specified metadata and data pools.
+The specified data pool is the default data pool and cannot be changed once set.
+Each file system has its own set of MDS daemons assigned to ranks, so ensure that
+you have sufficient standby daemons available to accommodate the new file system.
+
+The ``--force`` option is used to achieve any of the following:
+
+- To set an erasure-coded pool for the default data pool. Use of an EC pool for the
+  default data pool is discouraged. Refer to `Creating pools`_ for details.
+- To use a non-empty pool (one that already contains objects) as the metadata pool.
+- To create a file system with a specific file system ID (fscid). The
+  ``--force`` option is required when the ``--fscid`` option is used.
+
+The ``--allow-dangerous-metadata-overlay`` option permits the reuse of metadata and
+data pools that are already in use. This should only be done in emergencies and
+after careful reading of the documentation.
+
+If the ``--fscid`` option is provided then this creates a file system with a
+specific fscid. This can be used when an application expects the file system's ID
+to be stable after it has been recovered, e.g., after monitor databases are
+lost and rebuilt. Consequently, file system IDs don't always keep increasing
+with newer file systems.
+
+The ``--recover`` option sets the state of the file system's rank 0 to existing but
+failed, so that when an MDS daemon eventually picks up rank 0, it reads the
+existing in-RADOS metadata and does not overwrite it. The flag also prevents
+standby MDS daemons from joining the file system.
+
+For example:
+
+.. code:: bash
+
+ $ ceph fs new cephfs cephfs_metadata cephfs_data
+ $ ceph fs ls
+ name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
+
+Once a file system has been created, your MDS(s) will be able to enter
+an *active* state. For example, in a single MDS system:
+
+.. code:: bash
+
+ $ ceph mds stat
+ cephfs-1/1/1 up {0=a=up:active}
+
+Once the file system is created and the MDS is active, you are ready to mount
+the file system. If you have created more than one file system, you will
+choose which to use when mounting.
+
+ - `Mount CephFS`_
+ - `Mount CephFS as FUSE`_
+ - `Mount CephFS on Windows`_
+
+.. _Mount CephFS: ../../cephfs/mount-using-kernel-driver
+.. _Mount CephFS as FUSE: ../../cephfs/mount-using-fuse
+.. _Mount CephFS on Windows: ../../cephfs/ceph-dokan
+
+If you have created more than one file system, and a client does not
+specify a file system when mounting, you can control which file system
+they will see by using the ``ceph fs set-default`` command.
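+
+For example, assuming the ``cephfs`` file system created above:
+
+.. code:: bash
+
+   $ ceph fs set-default cephfs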
+
+Adding a Data Pool to the File System
+-------------------------------------
+
+See :ref:`adding-data-pool-to-file-system`.
+
+
+Using Erasure Coded pools with CephFS
+=====================================
+
+You may use Erasure Coded pools as CephFS data pools as long as they have overwrites enabled, which is done as follows:
+
+.. code:: bash
+
+ ceph osd pool set my_ec_pool allow_ec_overwrites true
+
+Note that EC overwrites are only supported when using OSDs with the BlueStore backend.
+
+You may not use Erasure Coded pools as CephFS metadata pools, because CephFS metadata is stored using RADOS *OMAP* data structures, which EC pools cannot store.
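+
+Putting this together, an erasure-coded data pool might be created, enabled
+for overwrites, and attached to an existing file system as follows (the pool
+name here is illustrative, and the default erasure-code profile is assumed):
+
+.. code:: bash
+
+   $ ceph osd pool create cephfs_data_ec erasure
+   $ ceph osd pool set cephfs_data_ec allow_ec_overwrites true
+   $ ceph fs add_data_pool cephfs cephfs_data_ec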
+
diff --git a/doc/cephfs/dirfrags.rst b/doc/cephfs/dirfrags.rst
new file mode 100644
index 000000000..9f177c19f
--- /dev/null
+++ b/doc/cephfs/dirfrags.rst
@@ -0,0 +1,101 @@
+
+===================================
+Configuring Directory Fragmentation
+===================================
+
+In CephFS, directories are *fragmented* when they become very large
+or very busy. This splits up the metadata so that it can be shared
+between multiple MDS daemons, and between multiple objects in the
+metadata pool.
+
+In normal operation, directory fragmentation is invisible to
+users and administrators, and all the configuration settings mentioned
+here should be left at their default values.
+
+While directory fragmentation enables CephFS to handle very large
+numbers of entries in a single directory, application programmers should
+remain conservative about creating very large directories, as they still
+have a resource cost in situations such as a CephFS client listing
+the directory, where all the fragments must be loaded at once.
+
+.. tip:: The root directory cannot be fragmented.
+
+All directories are initially created as a single fragment. This fragment
+may be *split* to divide up the directory into more fragments, and these
+fragments may be *merged* to reduce the number of fragments in the directory.
+
+Splitting and merging
+=====================
+
+When an MDS identifies a directory fragment to be split, it does not
+do the split immediately. Because splitting interrupts metadata IO,
+a short delay is used to allow short bursts of client IO to complete
+before the split begins. This delay is configured with
+``mds_bal_fragment_interval``, which defaults to 5 seconds.
+
+When the split is done, the directory fragment is broken up into
+a power of two number of new fragments. The number of new
+fragments is given by two to the power ``mds_bal_split_bits``, i.e.
+if ``mds_bal_split_bits`` is 2, then four new fragments will be
+created. The default setting is 3, i.e. splits create 8 new fragments.
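+
+For example, this setting can be inspected or changed at runtime with the
+``ceph config`` commands (a sketch; as noted above, the default rarely
+needs changing):
+
+::
+
+   ceph config get mds mds_bal_split_bits
+   ceph config set mds mds_bal_split_bits 3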
+
+The criteria for initiating a split or a merge are described in the
+following sections.
+
+Size thresholds
+===============
+
+A directory fragment is eligible for splitting when its size exceeds
+``mds_bal_split_size`` (default 10000 directory entries). Ordinarily this
+split is delayed by ``mds_bal_fragment_interval``, but if the fragment size
+exceeds ``mds_bal_fragment_fast_factor`` times the split size,
+the split will happen immediately (holding up any client metadata
+IO on the directory).
+
+``mds_bal_fragment_size_max`` is the hard limit on the size of
+directory fragments. If it is reached, clients will receive
+ENOSPC errors if they try to create files in the fragment. On
+a properly configured system, this limit should never be reached on
+ordinary directories, as they will have split long before. By default,
+this is set to 10 times the split size, giving a dirfrag size limit of
+100000 directory entries. Increasing this limit may lead to oversized
+directory fragment objects in the metadata pool, which the OSDs may not
+be able to handle.
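+
+For example, the current thresholds can be inspected with ``ceph config get``
+(a sketch; these defaults normally need no tuning):
+
+::
+
+   ceph config get mds mds_bal_split_size
+   ceph config get mds mds_bal_fragment_size_max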
+
+A directory fragment is eligible for merging when its size is less
+than ``mds_bal_merge_size``. There is no merge equivalent of the
+"fast splitting" explained above; fast splitting exists to avoid
+creating oversized directory fragments, and there is no equivalent issue
+to avoid when merging. The default merge size is 50 directory entries.
+
+Activity thresholds
+===================
+
+In addition to splitting fragments based
+on their size, the MDS may split directory fragments if their
+activity exceeds a threshold.
+
+The MDS maintains separate time-decaying load counters for read and write
+operations on directory fragments. The decaying load counters have an
+exponential decay based on the ``mds_decay_halflife`` setting.
+
+On writes, the write counter is
+incremented, and compared with ``mds_bal_split_wr``, triggering a
+split if the threshold is exceeded. Write operations include metadata IO
+such as renames, unlinks and creations.
+
+The ``mds_bal_split_rd`` threshold is applied based on the read operation
+load counter, which tracks readdir operations.
+
+The ``mds_bal_split_rd`` and ``mds_bal_split_wr`` settings represent the
+popularity thresholds. In the MDS these are measured as "read/write temperatures",
+which are closely related to the number of respective read/write operations.
+By default, the read threshold is 25000 operations and the write
+threshold is 10000 operations, i.e. 2.5x as many reads as writes would be
+required to trigger a split.
+
+After fragments are split due to the activity thresholds, they are only
+merged based on the size threshold (``mds_bal_merge_size``), so
+a spike in activity may cause a directory to stay fragmented
+forever unless some entries are unlinked.
+
diff --git a/doc/cephfs/disaster-recovery-experts.rst b/doc/cephfs/disaster-recovery-experts.rst
new file mode 100644
index 000000000..c881c2423
--- /dev/null
+++ b/doc/cephfs/disaster-recovery-experts.rst
@@ -0,0 +1,322 @@
+
+.. _disaster-recovery-experts:
+
+Advanced: Metadata repair tools
+===============================
+
+.. warning::
+
+ If you do not have expert knowledge of CephFS internals, you will
+ need to seek assistance before using any of these tools.
+
+   The tools mentioned here can easily cause damage as well as fix it.
+
+ It is essential to understand exactly what has gone wrong with your
+ file system before attempting to repair it.
+
+ If you do not have access to professional support for your cluster,
+ consult the ceph-users mailing list or the #ceph IRC channel.
+
+
+Journal export
+--------------
+
+Before attempting dangerous operations, make a copy of the journal like so:
+
+::
+
+ cephfs-journal-tool journal export backup.bin
+
+Note that this command may not always work if the journal is badly corrupted,
+in which case a RADOS-level copy should be made (http://tracker.ceph.com/issues/9902).
+
+
+Dentry recovery from journal
+----------------------------
+
+If a journal is damaged or for any reason an MDS is incapable of replaying it,
+attempt to recover what file metadata we can like so:
+
+::
+
+ cephfs-journal-tool event recover_dentries summary
+
+By default, this command acts on MDS rank 0; pass ``--rank=<n>`` to operate on other ranks.
+
+This command will write any inodes/dentries recoverable from the journal
+into the backing store, if these inodes/dentries are higher-versioned
+than the previous contents of the backing store. If any regions of the journal
+are missing/damaged, they will be skipped.
+
+Note that in addition to writing out dentries and inodes, this command will update
+the InoTables of each 'in' MDS rank, to indicate that any written inodes' numbers
+are now in use. In simple cases, this will result in an entirely valid backing
+store state.
+
+.. warning::
+
+ The resulting state of the backing store is not guaranteed to be self-consistent,
+ and an online MDS scrub will be required afterwards. The journal contents
+  will not be modified by this command; you should truncate the journal
+ separately after recovering what you can.
+
+Journal truncation
+------------------
+
+If the journal is corrupt or MDSs cannot replay it for any reason, you can
+truncate it like so:
+
+::
+
+ cephfs-journal-tool [--rank=N] journal reset
+
+Specify the MDS rank using the ``--rank`` option when the file system has or had
+multiple active MDS daemons.
+
+.. warning::
+
+ Resetting the journal *will* lose metadata unless you have extracted
+ it by other means such as ``recover_dentries``. It is likely to leave
+ some orphaned objects in the data pool. It may result in re-allocation
+ of already-written inodes, such that permissions rules could be violated.
+
+MDS table wipes
+---------------
+
+After the journal has been reset, it may no longer be consistent with respect
+to the contents of the MDS tables (InoTable, SessionMap, SnapServer).
+
+To reset the SessionMap (erase all sessions), use:
+
+::
+
+ cephfs-table-tool all reset session
+
+This command acts on the tables of all 'in' MDS ranks. Replace 'all' with an MDS
+rank to operate on that rank only.
+
+The session table is the table most likely to need resetting, but if you know you
+also need to reset the other tables then replace 'session' with 'snap' or 'inode'.
+
+MDS map reset
+-------------
+
+Once the in-RADOS state of the file system (i.e. contents of the metadata pool)
+is somewhat recovered, it may be necessary to update the MDS map to reflect
+the contents of the metadata pool. Use the following command to reset the MDS
+map to a single MDS:
+
+::
+
+ ceph fs reset <fs name> --yes-i-really-mean-it
+
+Once this is run, any in-RADOS state for MDS ranks other than 0 will be ignored;
+as a result, this may cause data loss.
+
+One might wonder what the difference is between 'fs reset' and 'fs remove; fs new'. The
+key distinction is that doing a remove/new will leave rank 0 in 'creating' state, such
+that it would overwrite any existing root inode on disk and orphan any existing files. In
+contrast, the 'reset' command will leave rank 0 in 'active' state such that the next MDS
+daemon to claim the rank will go ahead and use the existing in-RADOS metadata.
+
+Recovery from missing metadata objects
+--------------------------------------
+
+Depending on what objects are missing or corrupt, you may need to
+run various commands to regenerate default versions of the
+objects.
+
+::
+
+ # Session table
+ cephfs-table-tool 0 reset session
+ # SnapServer
+ cephfs-table-tool 0 reset snap
+ # InoTable
+ cephfs-table-tool 0 reset inode
+ # Journal
+ cephfs-journal-tool --rank=0 journal reset
+ # Root inodes ("/" and MDS directory)
+ cephfs-data-scan init
+
+Finally, you can regenerate metadata objects for missing files
+and directories based on the contents of a data pool. This is
+a three-phase process: first, scanning *all* objects to calculate
+size and mtime metadata for inodes; second, scanning the first
+object from every file to collect this metadata and inject it into
+the metadata pool; and third, checking inode linkages and fixing any
+errors found.
+
+::
+
+ cephfs-data-scan scan_extents [<data pool> [<extra data pool> ...]]
+ cephfs-data-scan scan_inodes [<data pool>]
+ cephfs-data-scan scan_links
+
+The 'scan_extents' and 'scan_inodes' commands may take a *very long* time
+if there are many files or very large files in the data pool.
+
+To accelerate the process, run multiple instances of the tool.
+
+Decide on a number of workers, and pass each worker a number within
+the range 0-(worker_m - 1).
+
+The example below shows how to run 4 workers simultaneously:
+
+::
+
+ # Worker 0
+ cephfs-data-scan scan_extents --worker_n 0 --worker_m 4
+ # Worker 1
+ cephfs-data-scan scan_extents --worker_n 1 --worker_m 4
+ # Worker 2
+ cephfs-data-scan scan_extents --worker_n 2 --worker_m 4
+ # Worker 3
+ cephfs-data-scan scan_extents --worker_n 3 --worker_m 4
+
+ # Worker 0
+ cephfs-data-scan scan_inodes --worker_n 0 --worker_m 4
+ # Worker 1
+ cephfs-data-scan scan_inodes --worker_n 1 --worker_m 4
+ # Worker 2
+ cephfs-data-scan scan_inodes --worker_n 2 --worker_m 4
+ # Worker 3
+ cephfs-data-scan scan_inodes --worker_n 3 --worker_m 4
+
+It is **important** to ensure that all workers have completed the
+scan_extents phase before any workers enter the scan_inodes phase.
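+
+The workers can be launched in parallel from a shell. A minimal sketch that
+also enforces this ordering, assuming four workers and auto-detected pools:
+
+::
+
+  # Run all scan_extents workers and wait for every one to finish
+  for n in 0 1 2 3 ; do
+      cephfs-data-scan scan_extents --worker_n $n --worker_m 4 &
+  done
+  wait
+
+  # Only then run the scan_inodes workers
+  for n in 0 1 2 3 ; do
+      cephfs-data-scan scan_inodes --worker_n $n --worker_m 4 &
+  done
+  wait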
+
+After completing the metadata recovery, you may want to run the cleanup
+operation to delete ancillary data generated during recovery.
+
+::
+
+ cephfs-data-scan cleanup [<data pool>]
+
+Note that the data pool parameters for the 'scan_extents', 'scan_inodes' and
+'cleanup' commands are optional, and usually the tool will be able to
+detect the pools automatically. You may still override this. The
+'scan_extents' command needs all data pools to be specified, while the
+'scan_inodes' and 'cleanup' commands need only the main data pool.
+
+
+Using an alternate metadata pool for recovery
+---------------------------------------------
+
+.. warning::
+
+ There has not been extensive testing of this procedure. It should be
+ undertaken with great care.
+
+If an existing file system is damaged and inoperative, it is possible to create
+a fresh metadata pool and attempt to reconstruct the file system metadata into
+this new pool, leaving the old metadata in place. This could be used to make a
+safer attempt at recovery since the existing metadata pool would not be
+modified.
+
+.. caution::
+
+ During this process, multiple metadata pools will contain data referring to
+ the same data pool. Extreme caution must be exercised to avoid changing the
+ data pool contents while this is the case. Once recovery is complete, the
+ damaged metadata pool should be archived or deleted.
+
+To begin, the existing file system should be taken down, if not done already,
+to prevent further modification of the data pool. Unmount all clients and then
+mark the file system failed:
+
+::
+
+ ceph fs fail <fs_name>
+
+.. note::
+
+ <fs_name> here and below indicates the original, damaged file system.
+
+Next, create a recovery file system, pairing a new metadata pool with the
+original data pool. We will populate the new metadata pool below.
+
+::
+
+ ceph osd pool create cephfs_recovery_meta
+ ceph fs new cephfs_recovery cephfs_recovery_meta <data_pool> --recover --allow-dangerous-metadata-overlay
+
+.. note::
+
+ You may rename the recovery metadata pool and file system at a future time.
+ The ``--recover`` flag prevents any MDS from joining the new file system.
+
+Next, we will create the initial metadata for the file system:
+
+::
+
+ cephfs-table-tool cephfs_recovery:0 reset session
+ cephfs-table-tool cephfs_recovery:0 reset snap
+ cephfs-table-tool cephfs_recovery:0 reset inode
+ cephfs-journal-tool --rank cephfs_recovery:0 journal reset --force
+
+Now perform the recovery of the metadata pool from the data pool:
+
+::
+
+ cephfs-data-scan init --force-init --filesystem cephfs_recovery --alternate-pool cephfs_recovery_meta
+ cephfs-data-scan scan_extents --alternate-pool cephfs_recovery_meta --filesystem <fs_name>
+ cephfs-data-scan scan_inodes --alternate-pool cephfs_recovery_meta --filesystem <fs_name> --force-corrupt
+ cephfs-data-scan scan_links --filesystem cephfs_recovery
+
+.. note::
+
+ Each scan procedure above goes through the entire data pool. This may take a
+ significant amount of time. See the previous section on how to distribute
+ this task among workers.
+
+If the damaged file system contains dirty journal data, it may be recovered next
+with:
+
+::
+
+ cephfs-journal-tool --rank=<fs_name>:0 event recover_dentries list --alternate-pool cephfs_recovery_meta
+
+After recovery, some recovered directories will have incorrect statistics.
+Ensure the parameters ``mds_verify_scatter`` and ``mds_debug_scatterstat`` are
+set to false (the default) to prevent the MDS from checking the statistics:
+
+::
+
+ ceph config rm mds mds_verify_scatter
+ ceph config rm mds mds_debug_scatterstat
+
+.. note::
+
+ Also verify the config has not been set globally or with a local ceph.conf file.
+
+Now, allow an MDS to join the recovery file system:
+
+::
+
+ ceph fs set cephfs_recovery joinable true
+
+Finally, run a forward :doc:`scrub </cephfs/scrub>` to repair recursive statistics.
+Ensure you have an MDS running and issue:
+
+::
+
+ ceph tell mds.cephfs_recovery:0 scrub start / recursive,repair,force
+
+.. note::
+
+    `Symbolic link recovery <https://tracker.ceph.com/issues/46166>`_ is supported starting with Quincy.
+    Previously, symbolic links were recovered as empty regular files.
+
+It is recommended to migrate any data from the recovery file system as soon as
+possible. Do not restore the old file system while the recovery file system is
+operational.
+
+.. note::
+
+ If the data pool is also corrupt, some files may not be restored because
+ backtrace information is lost. If any data objects are missing (due to
+ issues like lost Placement Groups on the data pool), the recovered files
+ will contain holes in place of the missing data.
+
diff --git a/doc/cephfs/disaster-recovery.rst b/doc/cephfs/disaster-recovery.rst
new file mode 100644
index 000000000..a728feb55
--- /dev/null
+++ b/doc/cephfs/disaster-recovery.rst
@@ -0,0 +1,61 @@
+.. _cephfs-disaster-recovery:
+
+Disaster recovery
+=================
+
+Metadata damage and repair
+--------------------------
+
+If a file system has inconsistent or missing metadata, it is considered
+*damaged*. You may find out about damage from a health message, or in some
+unfortunate cases from an assertion in a running MDS daemon.
+
+Metadata damage can result either from data loss in the underlying RADOS
+layer (e.g. multiple disk failures that lose all copies of a PG), or from
+software bugs.
+
+CephFS includes some tools that may be able to recover a damaged file system,
+but to use them safely requires a solid understanding of CephFS internals.
+The documentation for these potentially dangerous operations is on a
+separate page: :ref:`disaster-recovery-experts`.
+
+Data pool damage (files affected by lost data PGs)
+--------------------------------------------------
+
+If a PG is lost in a *data* pool, then the file system will continue
+to operate normally, but some parts of some files will simply
+be missing (reads will return zeros).
+
+Losing a data PG may affect many files. Files are split into many objects,
+so identifying which files are affected by loss of particular PGs requires
+a full scan over all object IDs that may exist within the size of a file.
+This type of scan may be useful for identifying which files require
+restoring from a backup.
+
+.. danger::
+
+ This command does not repair any metadata, so when restoring files in
+ this case you must *remove* the damaged file, and replace it in order
+ to have a fresh inode. Do not overwrite damaged files in place.
+
+If you know that objects have been lost from PGs, use the ``pg_files``
+subcommand to scan for files that may have been damaged as a result:
+
+::
+
+ cephfs-data-scan pg_files <path> <pg id> [<pg id>...]
+
+For example, if you have lost data from PGs 1.4 and 4.5, and you would like
+to know which files under /home/bob might have been damaged:
+
+::
+
+ cephfs-data-scan pg_files /home/bob 1.4 4.5
+
+The output will be a list of paths to potentially damaged files, one
+per line.
+
+Note that this command acts as a normal CephFS client to find all the
+files in the file system and read their layouts, so the MDS must be
+up and running.
+
diff --git a/doc/cephfs/dynamic-metadata-management.rst b/doc/cephfs/dynamic-metadata-management.rst
new file mode 100644
index 000000000..6e7ada9fc
--- /dev/null
+++ b/doc/cephfs/dynamic-metadata-management.rst
@@ -0,0 +1,90 @@
+==================================
+CephFS Dynamic Metadata Management
+==================================
+Metadata operations usually make up more than 50 percent of all
+file system operations. Metadata also scales in a more complex
+fashion than storage (which scales I/O throughput roughly linearly),
+because of the hierarchical and interdependent nature of file system
+metadata. In CephFS, the metadata workload is therefore decoupled from
+the data workload to avoid placing unnecessary strain on the RADOS
+cluster, and is handled by a cluster of Metadata Servers (MDSs).
+CephFS distributes metadata across MDSs via `Dynamic Subtree Partitioning <https://ceph.com/assets/pdfs/weil-mds-sc04.pdf>`__.
+
+Dynamic Subtree Partitioning
+----------------------------
+In traditional subtree partitioning, subtrees of the file system
+hierarchy are assigned to individual MDSs. This metadata distribution
+strategy provides good hierarchical locality, linear growth of
+cache and horizontal scaling across MDSs and a fairly good distribution
+of metadata across MDSs.
+
+.. image:: subtree-partitioning.svg
+
+The problem with traditional subtree partitioning is that workload
+growth within a single subtree (and hence a single MDS) leads to a hotspot
+of activity. This results in a lack of vertical scaling and leaves
+non-busy MDSs underutilized.
+
+This led to the adoption of a more dynamic way of handling
+metadata: Dynamic Subtree Partitioning, where load-intensive portions
+of the directory hierarchy are migrated from busy MDSs to less busy MDSs.
+
+This strategy ensures that activity hotspots are relieved as they
+appear and so leads to vertical scaling of the metadata workload in
+addition to horizontal scaling.
+
+Export Process During Subtree Migration
+---------------------------------------
+
+Once the exporter verifies that the subtree is permissible to be exported
+(Non degraded cluster, non-frozen subtree root), the subtree root
+directory is temporarily auth pinned, the subtree freeze is initiated,
+and the exporter is committed to the subtree migration, barring an
+intervening failure of the importer or itself.
+
+The MExportDiscover message is exchanged to ensure that the inode for the
+base directory being exported is open on the destination node. It is
+auth pinned by the importer to prevent it from being trimmed. This occurs
+before the exporter completes the freeze of the subtree to ensure that
+the importer is able to replicate the necessary metadata. When the
+exporter receives the MDiscoverAck, it allows the freeze to proceed by
+removing its temporary auth pin.
+
+A warning stage occurs only if the base subtree directory is open by
+nodes other than the importer and exporter. If it is not, then this
+implies that no metadata within or nested beneath the subtree is
+replicated by any node other than the importer and exporter. If it is,
+then an MExportWarning message informs any bystanders that the
+authority for the region is temporarily ambiguous, and lists both the
+exporter and importer as authoritative MDS nodes. In particular,
+bystanders who are trimming items from their cache must send
+MCacheExpire messages to both the old and new authorities. This is
+necessary to ensure that the surviving authority reliably receives all
+expirations even if the importer or exporter fails. While the subtree
+is frozen (on both the importer and exporter), expirations will not be
+immediately processed; instead, they will be queued until the region
+is unfrozen and it can be determined that the node is or is not
+authoritative.
+
+The exporter then packages an MExport message containing all metadata
+of the subtree and flags the objects as non-authoritative. The MExport message sends
+the actual subtree metadata to the importer. Upon receipt, the
+importer inserts the data into its cache, marks all objects as
+authoritative, and logs a copy of all metadata in an EImportStart
+journal message. Once that has safely flushed, it replies with an
+MExportAck. The exporter can now log an EExport journal entry, which
+ultimately specifies that the export was a success. In the presence
+of failures, it is the existence of the EExport entry only that
+disambiguates authority during recovery.
+
+Once logged, the exporter will send an MExportNotify to any
+bystanders, informing them that the authority is no longer ambiguous
+and cache expirations should be sent only to the new authority (the
+importer). Once these are acknowledged back to the exporter,
+implicitly flushing the bystander to exporter message streams of any
+stray expiration notices, the exporter unfreezes the subtree, cleans
+up its migration-related state, and sends a final MExportFinish to the
+importer. Upon receipt, the importer logs an EImportFinish(true)
+(noting locally that the export was indeed a success), unfreezes its
+subtree, processes any queued cache expirations, and cleans up its
+state.
diff --git a/doc/cephfs/eviction.rst b/doc/cephfs/eviction.rst
new file mode 100644
index 000000000..eb6f70a8e
--- /dev/null
+++ b/doc/cephfs/eviction.rst
@@ -0,0 +1,190 @@
+
+================================
+Ceph file system client eviction
+================================
+
+When a file system client is unresponsive or otherwise misbehaving, it
+may be necessary to forcibly terminate its access to the file system. This
+process is called *eviction*.
+
+Evicting a CephFS client prevents it from communicating further with MDS
+daemons and OSD daemons. If a client was doing buffered IO to the file system,
+any un-flushed data will be lost.
+
+Clients may either be evicted automatically (if they fail to communicate
+promptly with the MDS), or manually (by the system administrator).
+
+The client eviction process applies to clients of all kinds: FUSE mounts,
+kernel mounts, NFS-Ganesha gateways, and any process using
+libcephfs.
+
+Automatic client eviction
+=========================
+
+There are three situations in which a client may be evicted automatically.
+
+#. On an active MDS daemon, if a client has not communicated with the MDS for over
+ ``session_autoclose`` (a file system variable) seconds (300 seconds by
+ default), then it will be evicted automatically.
+
+#. On an active MDS daemon, if a client has not responded to cap revoke messages
+ for over ``mds_cap_revoke_eviction_timeout`` (configuration option) seconds.
+ This is disabled by default.
+
+#. During MDS startup (including on failover), the MDS passes through a
+ state called ``reconnect``. During this state, it waits for all the
+ clients to connect to the new MDS daemon. If any clients fail to do
+ so within the time window (``mds_reconnect_timeout``, 45 seconds by default)
+ then they will be evicted.
+
+A warning message is sent to the cluster log if any of these situations
+arises.
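+
+For example, to enable the cap-revoke eviction described above with a
+five-minute timeout (a sketch; this option is disabled by default):
+
+::
+
+   ceph config set mds mds_cap_revoke_eviction_timeout 300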
+
+Manual client eviction
+======================
+
+Sometimes, the administrator may want to evict a client manually. This
+could happen if a client has died and the administrator does not
+want to wait for its session to time out, or it could happen if
+a client is misbehaving and the administrator does not have access to
+the client node to unmount it.
+
+It is useful to inspect the list of clients first:
+
+::
+
+ ceph tell mds.0 client ls
+
+ [
+ {
+ "id": 4305,
+ "num_leases": 0,
+ "num_caps": 3,
+ "state": "open",
+ "replay_requests": 0,
+ "completed_requests": 0,
+ "reconnecting": false,
+ "inst": "client.4305 172.21.9.34:0/422650892",
+ "client_metadata": {
+ "ceph_sha1": "ae81e49d369875ac8b569ff3e3c456a31b8f3af5",
+ "ceph_version": "ceph version 12.0.0-1934-gae81e49 (ae81e49d369875ac8b569ff3e3c456a31b8f3af5)",
+ "entity_id": "0",
+ "hostname": "senta04",
+ "mount_point": "/tmp/tmpcMpF1b/mnt.0",
+ "pid": "29377",
+ "root": "/"
+ }
+ }
+ ]
+
+
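+If the list is long, a filter can help to locate the session of interest. For
+example, assuming the ``jq`` utility is available, something like the
+following would print the IDs of all sessions coming from the host
+``senta04``:
+
+::
+
+    ceph tell mds.0 client ls | jq '.[] | select(.client_metadata.hostname == "senta04") | .id'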
+
+Once you have identified the client you want to evict, you can
+do that using its unique ID, or various other attributes to identify it:
+
+::
+
+    # These all work
+    ceph tell mds.0 client evict id=4305
+    ceph tell mds.0 client evict client_metadata.entity_id=0
+    ceph tell mds.0 client evict client_metadata.hostname=senta04
+
+
+Advanced: Un-blocklisting a client
+==================================
+
+Ordinarily, a blocklisted client may not reconnect to the servers: it
+must be unmounted and then mounted anew.
+
+However, in some situations it may be useful to permit a client that
+was evicted to attempt to reconnect.
+
+Because CephFS uses the RADOS OSD blocklist to control client eviction,
+CephFS clients can be permitted to reconnect by removing them from
+the blocklist:
+
+::
+
+ $ ceph osd blocklist ls
+ listed 1 entries
+ 127.0.0.1:0/3710147553 2018-03-19 11:32:24.716146
+ $ ceph osd blocklist rm 127.0.0.1:0/3710147553
+ un-blocklisting 127.0.0.1:0/3710147553
+
+
+Doing this may put data integrity at risk if other clients have accessed
+files that the blocklisted client was doing buffered IO to. It is also not
+guaranteed to result in a fully functional client: the best way to get
+a fully healthy client back after an eviction is to unmount the client
+and do a fresh mount.
+
+If you are trying to reconnect clients in this way, you may also
+find it useful to set ``client_reconnect_stale`` to true in the
+FUSE client, to prompt the client to try to reconnect.
+
+Advanced: Configuring blocklisting
+==================================
+
+If you are experiencing frequent client evictions, due to slow
+client hosts or an unreliable network, and you cannot fix the underlying
+issue, then you may want to ask the MDS to be less strict.
+
+It is possible to respond to slow clients by simply dropping their
+MDS sessions, but permit them to re-open sessions and permit them
+to continue talking to OSDs. To enable this mode, set
+``mds_session_blocklist_on_timeout`` to false on your MDS nodes.
+
+For the equivalent behaviour on manual evictions, set
+``mds_session_blocklist_on_evict`` to false.
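+
+For example, one way to apply both relaxations through the centralized
+configuration database (the options apply to all MDS daemons):
+
+::
+
+    ceph config set mds mds_session_blocklist_on_timeout false
+    ceph config set mds mds_session_blocklist_on_evict false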
+
+Note that if blocklisting is disabled, then evicting a client will
+only have an effect on the MDS you send the command to. On a system
+with multiple active MDS daemons, you would need to send an
+eviction command to each active daemon. When blocklisting is enabled
+(the default), sending an eviction command to just a single
+MDS is sufficient, because the blocklist propagates it to the others.
+
+.. _background_blocklisting_and_osd_epoch_barrier:
+
+Background: Blocklisting and OSD epoch barrier
+==============================================
+
+After a client is blocklisted, it is necessary to make sure that
+other clients and MDS daemons have the latest OSDMap (including
+the blocklist entry) before they try to access any data objects
+that the blocklisted client might have been accessing.
+
+This is ensured using an internal "osdmap epoch barrier" mechanism.
+
+The purpose of the barrier is to ensure that when we hand out any
+capabilities which might allow touching the same RADOS objects, the
+clients we hand out the capabilities to must have a sufficiently recent
+OSD map to not race with cancelled operations (from ENOSPC) or
+blocklisted clients (from evictions).
+
+More specifically, the cases where an epoch barrier is set are:
+
+ * Client eviction (where the client is blocklisted and other clients
+ must wait for a post-blocklist epoch to touch the same objects).
+ * OSD map full flag handling in the client (where the client may
+ cancel some OSD ops from a pre-full epoch, so other clients must
+ wait until the full epoch or later before touching the same objects).
+ * MDS startup, because we don't persist the barrier epoch, so must
+ assume that latest OSD map is always required after a restart.
+
+Note that this is a global value, for simplicity. We could maintain it on
+a per-inode basis, but we don't, because:
+
+ * It would be more complicated.
+ * It would use an extra 4 bytes of memory for every inode.
+ * It would not be much more efficient as, almost always, everyone has
+ the latest OSD map. And, in most cases everyone will breeze through this
+ barrier rather than waiting.
+ * This barrier is done in very rare cases, so any benefit from per-inode
+ granularity would only very rarely be seen.
+
+The epoch barrier is transmitted along with all capability messages, and
+instructs the receiver of the message to avoid sending any more RADOS
+operations to OSDs until it has seen this OSD epoch. This mainly applies
+to clients (doing their data writes directly to files), but also applies
+to the MDS because things like file size probing and file deletion are
+done directly from the MDS.
diff --git a/doc/cephfs/experimental-features.rst b/doc/cephfs/experimental-features.rst
new file mode 100644
index 000000000..ba60d12c7
--- /dev/null
+++ b/doc/cephfs/experimental-features.rst
@@ -0,0 +1,42 @@
+=====================
+Experimental Features
+=====================
+
+CephFS includes a number of experimental features which are not fully
+stabilized or qualified for users to turn on in real deployments. We generally
+do our best to clearly demarcate these and fence them off so they cannot be
+used by mistake.
+
+Some of these features are closer to being done than others, though. We
+describe each of them with an approximation of how risky they are and briefly
+describe what is required to enable them. Note that enabling a feature will
+*irrevocably* mark the maps in the monitor as having once had that feature
+enabled, in order to improve debugging and support processes.
+
+Inline data
+-----------
+By default, all CephFS file data is stored in RADOS objects. The inline data
+feature enables small files (generally <2KB) to be stored in the inode
+and served out of the MDS. This may improve small-file performance but increases
+load on the MDS. It is not sufficiently tested for us to support at this time,
+although failures within it are unlikely to make non-inlined data inaccessible.
+
+Inline data has always been off by default and requires setting
+the ``inline_data`` flag.
+
+Inline data has been declared deprecated for the Octopus release, and will
+likely be removed altogether in the Q release.
+
+Mantle: Programmable Metadata Load Balancer
+-------------------------------------------
+
+Mantle is a programmable metadata balancer built into the MDS. The idea is to
+protect the mechanisms for balancing load (migration, replication,
+fragmentation) but stub out the balancing policies using Lua. For details, see
+:doc:`/cephfs/mantle`.
+
+LazyIO
+------
+LazyIO relaxes POSIX semantics. Buffered reads/writes are allowed even when a
+file is opened by multiple applications on multiple clients. Applications are
+responsible for managing cache coherency themselves.
diff --git a/doc/cephfs/file-layouts.rst b/doc/cephfs/file-layouts.rst
new file mode 100644
index 000000000..2cdb26efc
--- /dev/null
+++ b/doc/cephfs/file-layouts.rst
@@ -0,0 +1,272 @@
+.. _file-layouts:
+
+File layouts
+============
+
+The layout of a file controls how its contents are mapped to Ceph RADOS objects. You can
+read and write a file's layout using *virtual extended attributes* or xattrs.
+
+The name of the layout xattrs depends on whether a file is a regular file or a directory. Regular
+files' layout xattrs are called ``ceph.file.layout``, whereas directories' layout xattrs are called
+``ceph.dir.layout``. Where subsequent examples refer to ``ceph.file.layout``, substitute ``dir`` as appropriate
+when dealing with directories.
+
+.. tip::
+
+   Your Linux distribution may not ship with commands for manipulating xattrs
+   by default; the required package is usually called ``attr``.
+
+Layout fields
+-------------
+
+pool
+    String, giving the ID or name of the RADOS pool in which a file's data
+    objects will be stored. Pool names may only contain characters in the set
+    ``[a-zA-Z0-9_-.]``.
+
+pool_id
+    String of digits. This is the pool ID that the system assigned to the
+    RADOS pool when it was created.
+
+pool_name
+    String. This is the name that the user gave the RADOS pool when it was
+    created.
+
+pool_namespace
+    String with only characters in the set ``[a-zA-Z0-9_-.]``. Within the data
+    pool, which RADOS namespace the objects will be written to. Empty by
+    default (i.e. the default namespace).
+
+stripe_unit
+    Integer in bytes. The size (in bytes) of a block of data used in the
+    RAID 0 distribution of a file. All stripe units for a file have equal
+    size. The last stripe unit is typically incomplete: it represents the data
+    at the end of the file as well as the unused "space" beyond it, up to the
+    end of the fixed stripe unit size.
+
+stripe_count
+    Integer. The number of consecutive stripe units that constitute a RAID 0
+    "stripe" of file data.
+
+object_size
+    Integer in bytes. File data is chunked into RADOS objects of this size.
+
+.. tip::
+
+ RADOS enforces a configurable limit on object sizes: if you increase CephFS
+ object sizes beyond that limit then writes may not succeed. The OSD
+ setting is ``osd_max_object_size``, which is 128MB by default.
+ Very large RADOS objects may prevent smooth operation of the cluster,
+ so increasing the object size limit past the default is not recommended.
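+
+The relationship between these three fields can be illustrated with a small
+worked example. The layout values below are hypothetical; the arithmetic
+follows the RAID 0 mapping that these fields describe:
+
+.. code-block:: bash
+
+    # Hypothetical layout: stripe_unit=1 MiB, stripe_count=4, object_size=4 MiB
+    stripe_unit=$((1 << 20))
+    stripe_count=4
+    object_size=$((4 << 20))
+    su_per_object=$((object_size / stripe_unit))    # 4 stripe units per object
+
+    offset=$((13 << 20))                            # a byte 13 MiB into the file
+    stripe_no=$((offset / stripe_unit))             # 13th stripe unit overall
+    obj_in_set=$((stripe_no % stripe_count))        # column within the stripe: 1
+    row=$((stripe_no / stripe_count))               # stripe row: 3
+    set_no=$((row / su_per_object))                 # object set: 0
+    obj_no=$((set_no * stripe_count + obj_in_set))  # RADOS object index: 1
+    echo "offset $offset lands in object $obj_no"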
+
+Reading layouts with ``getfattr``
+---------------------------------
+
+Read the layout information as a single string:
+
+.. code-block:: bash
+
+ $ touch file
+ $ getfattr -n ceph.file.layout file
+ # file: file
+ ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data"
+
+Read individual layout fields:
+
+.. code-block:: bash
+
+ $ getfattr -n ceph.file.layout.pool_name file
+ # file: file
+ ceph.file.layout.pool_name="cephfs_data"
+ $ getfattr -n ceph.file.layout.pool_id file
+ # file: file
+ ceph.file.layout.pool_id="5"
+ $ getfattr -n ceph.file.layout.pool file
+ # file: file
+ ceph.file.layout.pool="cephfs_data"
+ $ getfattr -n ceph.file.layout.stripe_unit file
+ # file: file
+ ceph.file.layout.stripe_unit="4194304"
+ $ getfattr -n ceph.file.layout.stripe_count file
+ # file: file
+ ceph.file.layout.stripe_count="1"
+ $ getfattr -n ceph.file.layout.object_size file
+ # file: file
+ ceph.file.layout.object_size="4194304"
+
+.. note::
+
+ When reading layouts, the pool will usually be indicated by name. However, in
+ rare cases when pools have only just been created, the ID may be output instead.
+
+Directories do not have an explicit layout until one is set. Attempts to read
+the layout will fail if it has never been modified: this indicates that the
+layout of the closest ancestor directory with an explicit layout will be used.
+
+.. code-block:: bash
+
+ $ mkdir dir
+ $ getfattr -n ceph.dir.layout dir
+ dir: ceph.dir.layout: No such attribute
+ $ setfattr -n ceph.dir.layout.stripe_count -v 2 dir
+ $ getfattr -n ceph.dir.layout dir
+ # file: dir
+ ceph.dir.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"
+
+The layout can also be retrieved in JSON format, using the
+``ceph.dir.layout.json`` vxattr for directories and ``ceph.file.layout.json``
+for files. If no layout is set on the particular inode, the system traverses
+the directory path backwards, finds the closest ancestor directory with a
+layout, and returns that layout.
+
+A virtual field named ``inheritance`` is added to the JSON output to show the
+status of the layout. The ``inheritance`` field can have the following values:
+
+* ``@default``: the layout is the system default
+* ``@set``: a specific layout has been set on this particular inode
+* ``@inherited``: the returned layout has been inherited from an ancestor
+
+.. code-block:: bash
+
+ $ getfattr -n ceph.dir.layout.json --only-values /mnt/mycephs/accounts
+ {"stripe_unit": 4194304, "stripe_count": 1, "object_size": 4194304, "pool_name": "cephfs.a.data", "pool_id": 3, "pool_namespace": "", "inheritance": "@default"}
+
+
+Writing layouts with ``setfattr``
+---------------------------------
+
+Layout fields are modified using ``setfattr``:
+
+.. code-block:: bash
+
+ $ ceph osd lspools
+ 0 rbd
+ 1 cephfs_data
+ 2 cephfs_metadata
+
+ $ setfattr -n ceph.file.layout.stripe_unit -v 1048576 file2
+ $ setfattr -n ceph.file.layout.stripe_count -v 8 file2
+ $ setfattr -n ceph.file.layout.object_size -v 10485760 file2
+ $ setfattr -n ceph.file.layout.pool -v 1 file2 # Setting pool by ID
+ $ setfattr -n ceph.file.layout.pool -v cephfs_data file2 # Setting pool by name
+ $ setfattr -n ceph.file.layout.pool_id -v 1 file2 # Setting pool by ID
+ $ setfattr -n ceph.file.layout.pool_name -v cephfs_data file2 # Setting pool by name
+
+.. note::
+
+ When the layout fields of a file are modified using ``setfattr``, this file must be empty, otherwise an error will occur.
+
+.. code-block:: bash
+
+ # touch an empty file
+ $ touch file1
+ # modify layout field successfully
+ $ setfattr -n ceph.file.layout.stripe_count -v 3 file1
+
+ # write something to file1
+ $ echo "hello world" > file1
+ $ setfattr -n ceph.file.layout.stripe_count -v 4 file1
+ setfattr: file1: Directory not empty
+
+File and directory layouts can also be set using the JSON format.
+The ``inheritance`` field is ignored when setting the layout. If both the
+``pool_name`` and ``pool_id`` fields are specified, ``pool_name`` is given
+preference for better disambiguation.
+
+.. code-block:: bash
+
+ $ setfattr -n ceph.file.layout.json -v '{"stripe_unit": 4194304, "stripe_count": 1, "object_size": 4194304, "pool_name": "cephfs.a.data", "pool_id": 3, "pool_namespace": "", "inheritance": "@default"}' file1
+
+Clearing layouts
+----------------
+
+If you wish to remove an explicit layout from a directory, to revert to
+inheriting the layout of its ancestor, you can do so:
+
+.. code-block:: bash
+
+ setfattr -x ceph.dir.layout mydir
+
+Similarly, if you have set the ``pool_namespace`` attribute and wish
+to modify the layout to use the default namespace instead:
+
+.. code-block:: bash
+
+ # Create a dir and set a namespace on it
+ mkdir mydir
+ setfattr -n ceph.dir.layout.pool_namespace -v foons mydir
+ getfattr -n ceph.dir.layout mydir
+ ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data_a pool_namespace=foons"
+
+ # Clear the namespace from the directory's layout
+ setfattr -x ceph.dir.layout.pool_namespace mydir
+ getfattr -n ceph.dir.layout mydir
+ ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data_a"
+
+
+Inheritance of layouts
+----------------------
+
+Files inherit the layout of their parent directory at creation time. However, subsequent
+changes to the parent directory's layout do not affect children.
+
+.. code-block:: bash
+
+ $ getfattr -n ceph.dir.layout dir
+ # file: dir
+ ceph.dir.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"
+
+ # Demonstrate file1 inheriting its parent's layout
+ $ touch dir/file1
+ $ getfattr -n ceph.file.layout dir/file1
+ # file: dir/file1
+ ceph.file.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"
+
+ # Now update the layout of the directory before creating a second file
+ $ setfattr -n ceph.dir.layout.stripe_count -v 4 dir
+ $ touch dir/file2
+
+ # Demonstrate that file1's layout is unchanged
+ $ getfattr -n ceph.file.layout dir/file1
+ # file: dir/file1
+ ceph.file.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"
+
+ # ...while file2 has the parent directory's new layout
+ $ getfattr -n ceph.file.layout dir/file2
+ # file: dir/file2
+ ceph.file.layout="stripe_unit=4194304 stripe_count=4 object_size=4194304 pool=cephfs_data"
+
+
+Files created as descendants of the directory also inherit the layout, if the intermediate
+directories do not have layouts set:
+
+.. code-block:: bash
+
+ $ getfattr -n ceph.dir.layout dir
+ # file: dir
+ ceph.dir.layout="stripe_unit=4194304 stripe_count=4 object_size=4194304 pool=cephfs_data"
+ $ mkdir dir/childdir
+ $ getfattr -n ceph.dir.layout dir/childdir
+ dir/childdir: ceph.dir.layout: No such attribute
+ $ touch dir/childdir/grandchild
+ $ getfattr -n ceph.file.layout dir/childdir/grandchild
+ # file: dir/childdir/grandchild
+ ceph.file.layout="stripe_unit=4194304 stripe_count=4 object_size=4194304 pool=cephfs_data"
+
+
+.. _adding-data-pool-to-file-system:
+
+Adding a data pool to the File System
+-------------------------------------
+
+Before you can use a pool with CephFS, you must add it to the file system so
+that the Metadata Servers know about it:
+
+.. code-block:: bash
+
+ $ ceph fs add_data_pool cephfs cephfs_data_ssd
+ $ ceph fs ls # Pool should now show up
+ .... data pools: [cephfs_data cephfs_data_ssd ]
+
+Make sure that your cephx keys allow the client to access this new pool.
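+
+One possible way to do this, assuming an existing client named ``client.foo``
+that currently has access only to ``cephfs_data``, is to update its
+capabilities so that both data pools are listed:
+
+.. code-block:: bash
+
+    $ ceph auth caps client.foo mon 'allow r' mds 'allow rw' \
+          osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_data_ssd'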
+
+You can then update the layout on a directory in CephFS to use the pool you added:
+
+.. code-block:: bash
+
+ $ mkdir /mnt/cephfs/myssddir
+ $ setfattr -n ceph.dir.layout.pool -v cephfs_data_ssd /mnt/cephfs/myssddir
+
+All new files created within that directory will now inherit its layout and place their data in your newly added pool.
+
+You may notice that object counts in your primary data pool (the one passed to ``fs new``) continue to increase, even if files are being created in the pool you added. This is normal: the file data is stored in the pool specified by the layout, but a small amount of metadata is kept in the primary data pool for all files.
+
+
diff --git a/doc/cephfs/fs-volumes.rst b/doc/cephfs/fs-volumes.rst
new file mode 100644
index 000000000..e7fd377bf
--- /dev/null
+++ b/doc/cephfs/fs-volumes.rst
@@ -0,0 +1,653 @@
+.. _fs-volumes-and-subvolumes:
+
+FS volumes and subvolumes
+=========================
+
+The volumes module of the :term:`Ceph Manager` daemon (ceph-mgr) provides a
+single source of truth for CephFS exports. The OpenStack shared file system
+service (manila_) and the Ceph Container Storage Interface (CSI_) storage
+administrators use the common CLI provided by the ceph-mgr ``volumes`` module
+to manage CephFS exports.
+
+The ceph-mgr ``volumes`` module implements the following file system export
+abstractions:
+
+* FS volumes, an abstraction for CephFS file systems
+
+* FS subvolumes, an abstraction for independent CephFS directory trees
+
+* FS subvolume groups, an abstraction for a directory level higher than FS
+ subvolumes. Used to effect policies (e.g., :doc:`/cephfs/file-layouts`)
+ across a set of subvolumes
+
+Some possible use-cases for the export abstractions:
+
+* FS subvolumes used as Manila shares or CSI volumes
+
+* FS subvolume groups used as Manila share groups
+
+Requirements
+------------
+
+* Nautilus (14.2.x) or later Ceph release
+
+* Cephx client user (see :doc:`/rados/operations/user-management`) with
+ at least the following capabilities::
+
+ mon 'allow r'
+ mgr 'allow rw'
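+
+  Such a user can be created with, for example, the following command (the
+  client name ``client.volclient`` is illustrative)::
+
+      ceph auth get-or-create client.volclient mon 'allow r' mgr 'allow rw'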
+
+FS Volumes
+----------
+
+Create a volume by running the following command:
+
+.. prompt:: bash #
+
+ ceph fs volume create <vol_name> [placement]
+
+This creates a CephFS file system and its data and metadata pools. It can also
+deploy MDS daemons for the filesystem using a ceph-mgr orchestrator module (for
+example Rook). See :doc:`/mgr/orchestrator`.
+
+``<vol_name>`` is the volume name (an arbitrary string). ``[placement]`` is an
+optional string that specifies the :ref:`orchestrator-cli-placement-spec` for
+the MDS. See also :ref:`orchestrator-cli-cephfs` for more examples on
+placement.
+
+.. note:: Specifying placement via a YAML file is not supported through the
+ volume interface.
+
+To remove a volume, run the following command::
+
+ $ ceph fs volume rm <vol_name> [--yes-i-really-mean-it]
+
+This removes a file system and its data and metadata pools. It also tries to
+remove MDS daemons using the enabled ceph-mgr orchestrator module.
+
+.. note:: After volume deletion, it is recommended to restart `ceph-mgr`
+ if a new file system is created on the same cluster and subvolume interface
+ is being used. Please see https://tracker.ceph.com/issues/49605#note-5
+ for more details.
+
+List volumes by running the following command::
+
+ $ ceph fs volume ls
+
+Rename a volume by running the following command::
+
+ $ ceph fs volume rename <vol_name> <new_vol_name> [--yes-i-really-mean-it]
+
+Renaming a volume can be an expensive operation that requires the following:
+
+- Renaming the orchestrator-managed MDS service to match ``<new_vol_name>``.
+  This involves launching an MDS service with ``<new_vol_name>`` and bringing
+  down the MDS service with ``<vol_name>``.
+- Renaming the file system matching ``<vol_name>`` to ``<new_vol_name>``.
+- Changing the application tags on the data and metadata pools of the file system
+ to ``<new_vol_name>``.
+- Renaming the metadata and data pools of the file system.
+
+The CephX IDs that are authorized for ``<vol_name>`` must be reauthorized for
+``<new_vol_name>``. Any ongoing operations of the clients using these IDs may
+be disrupted. Ensure that mirroring is disabled on the volume.
+
+To fetch the information of a CephFS volume, run the following command::
+
+ $ ceph fs volume info vol_name [--human_readable]
+
+The ``--human_readable`` flag shows used and available pool capacities in KB/MB/GB.
+
+The output format is JSON and contains fields as follows:
+
+* ``pools``: Attributes of data and metadata pools
+
+  * ``avail``: The amount of free space available in bytes
+  * ``used``: The amount of storage consumed in bytes
+  * ``name``: Name of the pool
+
+* ``mon_addrs``: List of Ceph monitor addresses
+* ``used_size``: Current used size of the CephFS volume in bytes
+* ``pending_subvolume_deletions``: Number of subvolumes pending deletion
+
+Sample output of the ``volume info`` command::
+
+ $ ceph fs volume info vol_name
+ {
+ "mon_addrs": [
+ "192.168.1.7:40977"
+ ],
+ "pending_subvolume_deletions": 0,
+ "pools": {
+ "data": [
+ {
+ "avail": 106288709632,
+ "name": "cephfs.vol_name.data",
+ "used": 4096
+ }
+ ],
+ "metadata": [
+ {
+ "avail": 106288709632,
+ "name": "cephfs.vol_name.meta",
+ "used": 155648
+ }
+ ]
+ },
+ "used_size": 0
+ }
+
+FS Subvolume groups
+-------------------
+
+Create a subvolume group by running the following command::
+
+ $ ceph fs subvolumegroup create <vol_name> <group_name> [--size <size_in_bytes>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>]
+
+The command succeeds even if the subvolume group already exists.
+
+When creating a subvolume group you can specify its data pool layout (see
+:doc:`/cephfs/file-layouts`), uid, gid, file mode in octal numerals, and
+size in bytes. The size of the subvolume group is specified by setting
+a quota on it (see :doc:`/cephfs/quota`). By default, the subvolume group
+is created with octal file mode ``755``, uid ``0``, gid ``0`` and the data pool
+layout of its parent directory.
+
+Remove a subvolume group by running a command of the following form::
+
+ $ ceph fs subvolumegroup rm <vol_name> <group_name> [--force]
+
+The removal of a subvolume group fails if the subvolume group is not empty or
+does not exist. The ``--force`` flag allows the "remove" command to succeed
+even if the subvolume group does not exist.
+
+Fetch the absolute path of a subvolume group by running a command of the
+following form::
+
+ $ ceph fs subvolumegroup getpath <vol_name> <group_name>
+
+List subvolume groups by running a command of the following form::
+
+ $ ceph fs subvolumegroup ls <vol_name>
+
+.. note:: The subvolume group snapshot feature is no longer supported in
+   mainline CephFS (existing group snapshots can still be listed and deleted).
+
+Fetch the metadata of a subvolume group by running a command of the following form:
+
+.. prompt:: bash #
+
+ ceph fs subvolumegroup info <vol_name> <group_name>
+
+The output format is JSON and contains fields as follows:
+
+* ``atime``: access time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
+* ``mtime``: modification time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
+* ``ctime``: change time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
+* ``uid``: uid of the subvolume group path
+* ``gid``: gid of the subvolume group path
+* ``mode``: mode of the subvolume group path
+* ``mon_addrs``: list of monitor addresses
+* ``bytes_pcent``: quota used in percentage if quota is set, else displays "undefined"
+* ``bytes_quota``: quota size in bytes if quota is set, else displays "infinite"
+* ``bytes_used``: current used size of the subvolume group in bytes
+* ``created_at``: creation time of the subvolume group in the format "YYYY-MM-DD HH:MM:SS"
+* ``data_pool``: data pool to which the subvolume group belongs
+
+Check the presence of any subvolume group by running a command of the following
+form:
+
+.. prompt:: bash $
+
+ ceph fs subvolumegroup exist <vol_name>
+
+The ``exist`` command outputs:
+
+* "subvolumegroup exists": if any subvolumegroup is present
+* "no subvolumegroup exists": if no subvolumegroup is present
+
+.. note:: This command checks for the presence of custom groups and not the
+   presence of the default one. To validate the emptiness of the volume, a
+   subvolumegroup existence check alone is not sufficient. Subvolume existence
+   also needs to be checked, as there might be subvolumes in the default group.
+
+Resize a subvolume group by running a command of the following form:
+
+.. prompt:: bash $
+
+ ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]
+
+The command resizes the subvolume group quota, using the size specified by
+``new_size``. The ``--no_shrink`` flag prevents the subvolume group from
+shrinking below the current used size.
+
+The subvolume group may be resized to an infinite size by passing ``inf`` or
+``infinite`` as the ``new_size``.
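+
+For example (the volume and group names are illustrative)::
+
+    ceph fs subvolumegroup resize vol1 group1 10737418240 --no_shrink  # grow to 10 GiB
+    ceph fs subvolumegroup resize vol1 group1 inf                      # remove the size limit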
+
+Remove a snapshot of a subvolume group by running a command of the following form:
+
+.. prompt:: bash $
+
+ ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]
+
+Supplying the ``--force`` flag allows the command to succeed when it would
+otherwise fail due to the nonexistence of the snapshot.
+
+List snapshots of a subvolume group by running a command of the following form:
+
+.. prompt:: bash $
+
+ ceph fs subvolumegroup snapshot ls <vol_name> <group_name>
+
+
+FS Subvolumes
+-------------
+
+Create a subvolume using::
+
+ $ ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes>] [--group_name <subvol_group_name>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>] [--namespace-isolated]
+
+
+The command succeeds even if the subvolume already exists.
+
+When creating a subvolume you can specify its subvolume group, data pool layout,
+uid, gid, file mode in octal numerals, and size in bytes. The size of the subvolume is
+specified by setting a quota on it (see :doc:`/cephfs/quota`). The subvolume can be
+created in a separate RADOS namespace by specifying the ``--namespace-isolated`` option. By
+default a subvolume is created within the default subvolume group, with an octal file
+mode ``755``, the uid and gid of its subvolume group, the data pool layout of
+its parent directory, and no size limit.
+
+Remove a subvolume using::
+
+ $ ceph fs subvolume rm <vol_name> <subvol_name> [--group_name <subvol_group_name>] [--force] [--retain-snapshots]
+
+The command removes the subvolume and its contents. It does this in two steps.
+First, it moves the subvolume to a trash folder, and then asynchronously purges
+its contents.
+
+The removal of a subvolume fails if it has snapshots or does not exist. The
+``--force`` flag allows the command to succeed even if the subvolume does not
+exist.
+
+A subvolume can be removed while retaining its existing snapshots by using the
+``--retain-snapshots`` option. If snapshots are retained, the subvolume is
+considered empty for all operations not involving the retained snapshots.
+
+.. note:: Snapshot-retained subvolumes can be recreated using ``ceph fs subvolume create``
+
+.. note:: Retained snapshots can be used as a clone source to recreate the subvolume, or clone to a newer subvolume.
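+
+For illustration, the following sequence (with hypothetical volume, subvolume
+and snapshot names) removes a subvolume while keeping a snapshot around::
+
+    ceph fs subvolume snapshot create vol1 sv1 snap1
+    ceph fs subvolume rm vol1 sv1 --retain-snapshots
+
+    # the subvolume now reports state "snapshot-retained" and can later be
+    # recreated, or used as a clone source via snap1
+    ceph fs subvolume info vol1 sv1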
+
+Resize a subvolume using::
+
+ $ ceph fs subvolume resize <vol_name> <subvol_name> <new_size> [--group_name <subvol_group_name>] [--no_shrink]
+
+The command resizes the subvolume quota using the size specified by ``new_size``.
+The ``--no_shrink`` flag prevents the subvolume from shrinking below the
+currently used size.
+
+The subvolume can be resized to an unlimited (but sparse) logical size by
+passing ``inf`` or ``infinite`` as ``new_size``.
+
+Authorize cephx auth IDs with read or read-write access to fs subvolumes::
+
+ $ ceph fs subvolume authorize <vol_name> <sub_name> <auth_id> [--group_name=<group_name>] [--access_level=<access_level>]
+
+The ``access_level`` takes ``r`` or ``rw`` as value.
+
+Deauthorize cephx auth IDs, removing their read or read-write access to fs subvolumes::
+
+ $ ceph fs subvolume deauthorize <vol_name> <sub_name> <auth_id> [--group_name=<group_name>]
+
+List cephx auth IDs authorized to access fs subvolume::
+
+ $ ceph fs subvolume authorized_list <vol_name> <sub_name> [--group_name=<group_name>]
+
+Evict fs clients based on auth ID and subvolume mounted::
+
+ $ ceph fs subvolume evict <vol_name> <sub_name> <auth_id> [--group_name=<group_name>]
+
+Fetch the absolute path of a subvolume using::
+
+ $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]
+
+Fetch the information of a subvolume using::
+
+ $ ceph fs subvolume info <vol_name> <subvol_name> [--group_name <subvol_group_name>]
+
+The output format is JSON and contains fields as follows.
+
+* ``atime``: access time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
+* ``mtime``: modification time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
+* ``ctime``: change time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
+* ``uid``: uid of the subvolume path
+* ``gid``: gid of the subvolume path
+* ``mode``: mode of the subvolume path
+* ``mon_addrs``: list of monitor addresses
+* ``bytes_pcent``: quota used in percentage if quota is set, else displays ``undefined``
+* ``bytes_quota``: quota size in bytes if quota is set, else displays ``infinite``
+* ``bytes_used``: current used size of the subvolume in bytes
+* ``created_at``: creation time of the subvolume in the format "YYYY-MM-DD HH:MM:SS"
+* ``data_pool``: data pool to which the subvolume belongs
+* ``path``: absolute path of a subvolume
+* ``type``: subvolume type indicating whether it's a clone or a subvolume
+* ``pool_namespace``: RADOS namespace of the subvolume
+* ``features``: features supported by the subvolume
+* ``state``: current state of the subvolume
+
+If a subvolume has been removed retaining its snapshots, the output contains only fields as follows.
+
+* ``type``: subvolume type indicating whether it's a clone or a subvolume
+* ``features``: features supported by the subvolume
+* ``state``: current state of the subvolume
+
+A subvolume's ``features`` are based on the internal version of the subvolume and are
+a subset of the following:
+
+* ``snapshot-clone``: supports cloning using a subvolume's snapshot as the source
+* ``snapshot-autoprotect``: supports automatically protecting snapshots, that are active clone sources, from deletion
+* ``snapshot-retention``: supports removing subvolume contents, retaining any existing snapshots
+
+A subvolume's ``state`` is based on the current state of the subvolume and contains one of the following values.
+
+* ``complete``: subvolume is ready for all operations
+* ``snapshot-retained``: subvolume is removed but its snapshots are retained
+
+List subvolumes using::
+
+ $ ceph fs subvolume ls <vol_name> [--group_name <subvol_group_name>]
+
+.. note:: Subvolumes that are removed but have retained snapshots are also listed.
+
+Check the presence of any subvolume using::
+
+ $ ceph fs subvolume exist <vol_name> [--group_name <subvol_group_name>]
+
+These are the possible results of the ``exist`` command:
+
+* ``subvolume exists``: at least one subvolume is present in the given group
+* ``no subvolume exists``: no subvolume is present in the given group
+
+Set custom metadata on the subvolume as a key-value pair using::
+
+ $ ceph fs subvolume metadata set <vol_name> <subvol_name> <key_name> <value> [--group_name <subvol_group_name>]
+
+.. note:: If ``key_name`` already exists, the old value will be replaced by the new value.
+
+.. note:: ``key_name`` and ``value`` should be strings of ASCII characters (as specified in Python's ``string.printable``). ``key_name`` is case-insensitive and is always stored in lower case.
+
+.. note:: Custom metadata on a subvolume is not preserved when snapshotting the subvolume, and hence, is also not preserved when cloning the subvolume snapshot.
+
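The ASCII restriction above can be checked on the client side before invoking the command. The validator below is a hypothetical sketch mirroring the documented behaviour (printable ASCII only, key stored in lower case):

```python
import string

PRINTABLE = set(string.printable)

def normalize_metadata(key, value):
    """Reject non-printable or non-ASCII strings and lower-case
    the key, mirroring how the metadata key is stored."""
    for s in (key, value):
        if not set(s) <= PRINTABLE:
            raise ValueError("metadata must be printable ASCII: %r" % s)
    return key.lower(), value

print(normalize_metadata("Owner", "alice"))  # ('owner', 'alice')
```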
+Get custom metadata set on the subvolume using the metadata key::
+
+ $ ceph fs subvolume metadata get <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>]
+
+List custom metadata (key-value pairs) set on the subvolume using::
+
+ $ ceph fs subvolume metadata ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]
+
+Remove custom metadata set on the subvolume using the metadata key::
+
+ $ ceph fs subvolume metadata rm <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>] [--force]
+
+Using the ``--force`` flag allows the command to succeed even if the metadata
+key does not exist.
+
+Create a snapshot of a subvolume using::
+
+ $ ceph fs subvolume snapshot create <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
+
+Remove a snapshot of a subvolume using::
+
+ $ ceph fs subvolume snapshot rm <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>] [--force]
+
+Using the ``--force`` flag allows the command to succeed even if the snapshot
+does not exist.
+
+.. note:: If the last snapshot within a snapshot-retained subvolume is removed, the subvolume is also removed.
+
+List snapshots of a subvolume using::
+
+ $ ceph fs subvolume snapshot ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]
+
+Fetch the information of a snapshot using::
+
+ $ ceph fs subvolume snapshot info <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
+
+The output is in JSON format and contains the following fields.
+
+* ``created_at``: creation time of the snapshot in the format "YYYY-MM-DD HH:MM:SS:ffffff"
+* ``data_pool``: data pool to which the snapshot belongs
+* ``has_pending_clones``: ``yes`` if snapshot clone is in progress, otherwise ``no``
+* ``pending_clones``: list of in-progress or pending clones and their target group if any exist, otherwise this field is not shown
+* ``orphan_clones_count``: count of orphan clones if the snapshot has orphan clones, otherwise this field is not shown
+
+Sample output when snapshot clones are in progress or pending::
+
+ $ ceph fs subvolume snapshot info cephfs subvol snap
+ {
+ "created_at": "2022-06-14 13:54:58.618769",
+ "data_pool": "cephfs.cephfs.data",
+ "has_pending_clones": "yes",
+ "pending_clones": [
+ {
+ "name": "clone_1",
+ "target_group": "target_subvol_group"
+ },
+ {
+ "name": "clone_2"
+ },
+ {
+ "name": "clone_3",
+ "target_group": "target_subvol_group"
+ }
+ ]
+ }
+
+Sample output when no snapshot clone is in progress or pending::
+
+ $ ceph fs subvolume snapshot info cephfs subvol snap
+ {
+ "created_at": "2022-06-14 13:54:58.618769",
+ "data_pool": "cephfs.cephfs.data",
+ "has_pending_clones": "no"
+ }
+
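The JSON shown above is easy to consume programmatically. For example, a small helper (hypothetical, not part of Ceph) that extracts the names of in-progress or pending clones:

```python
import json

def pending_clone_names(snapshot_info_json):
    """Return the names of in-progress or pending clones, or an empty
    list when none exist. The pending_clones field is only present
    when has_pending_clones is "yes"."""
    info = json.loads(snapshot_info_json)
    if info.get("has_pending_clones") != "yes":
        return []
    return [c["name"] for c in info.get("pending_clones", [])]

sample = '''{
    "created_at": "2022-06-14 13:54:58.618769",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "yes",
    "pending_clones": [{"name": "clone_1", "target_group": "target_subvol_group"},
                       {"name": "clone_2"}]
}'''
print(pending_clone_names(sample))  # ['clone_1', 'clone_2']
```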
+Set custom key-value metadata on the snapshot by running::
+
+ $ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
+
+.. note:: If ``key_name`` already exists, the old value will be replaced by the new value.
+
+.. note:: The ``key_name`` and ``value`` should be strings of ASCII characters (as specified in Python's ``string.printable``). The ``key_name`` is case-insensitive and always stored in lowercase.
+
+.. note:: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
+
+Get custom metadata set on the snapshot using the metadata key::
+
+ $ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]
+
+List custom metadata (key-value pairs) set on the snapshot using::
+
+ $ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
+
+Remove custom metadata set on the snapshot using the metadata key::
+
+ $ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
+
+Using the ``--force`` flag allows the command to succeed even if the metadata
+key does not exist.
+
+Cloning Snapshots
+-----------------
+
+Subvolumes can be created by cloning subvolume snapshots. Cloning is an asynchronous operation that copies
+data from a snapshot to a subvolume. Due to this bulk copying, cloning is inefficient for very large
+data sets.
+
+.. note:: Removing a snapshot (source subvolume) will fail if there are pending or in-progress clone operations.
+
+Protecting snapshots prior to cloning was a prerequisite in the Nautilus release, and the commands to protect/unprotect
+snapshots were introduced for this purpose. This prerequisite, and hence the commands to protect/unprotect, is being
+deprecated and may be removed from a future release.
+
+The commands being deprecated are:
+
+.. prompt:: bash #
+
+ ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
+ ceph fs subvolume snapshot unprotect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
+
+.. note:: Using the above commands will not result in an error, but they serve no useful purpose.
+
+.. note:: Use the ``subvolume info`` command to fetch subvolume metadata regarding supported ``features`` to help decide if protect/unprotect of snapshots is required, based on the availability of the ``snapshot-autoprotect`` feature.
+
+To initiate a clone operation use::
+
+ $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name>
+
+If a snapshot (source subvolume) is a part of a non-default group, the group name must be specified::
+
+ $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --group_name <subvol_group_name>
+
+Cloned subvolumes can be a part of a different group than the source snapshot (by default, cloned subvolumes are created in the default group). To clone to a particular group use::
+
+ $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --target_group_name <subvol_group_name>
+
+Similar to specifying a pool layout when creating a subvolume, a pool layout can be specified when creating a cloned subvolume. To create a cloned subvolume with a specific pool layout use::
+
+ $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --pool_layout <pool_layout>
+
+Configure the maximum number of concurrent clones. The default is 4::
+
+ $ ceph config set mgr mgr/volumes/max_concurrent_clones <value>
+
+To check the status of a clone operation use::
+
+ $ ceph fs clone status <vol_name> <clone_name> [--group_name <group_name>]
+
+A clone can be in one of the following states:
+
+#. ``pending``: Clone operation has not started
+#. ``in-progress``: Clone operation is in progress
+#. ``complete``: Clone operation has finished successfully
+#. ``failed``: Clone operation has failed
+#. ``canceled``: Clone operation has been canceled by the user
+
+When a clone fails, the reason is shown in the following fields:
+
+#. ``errno``: error number
+#. ``error_msg``: failure error string
+
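The states above form a simple lifecycle: ``pending`` and ``in-progress`` are transient, while the other three are terminal. Below is a hedged sketch (not part of Ceph) of how a client might interpret the parsed ``clone status`` output, using the field names from the examples in this section:

```python
TERMINAL_STATES = {"complete", "failed", "canceled"}

def interpret_clone_status(status_doc):
    """Given the parsed JSON from `ceph fs clone status`, return
    (done, message). The failure section exists only for clones
    in the failed or canceled state."""
    status = status_doc["status"]
    state = status["state"]
    if state not in TERMINAL_STATES:
        return (False, "clone is %s" % state)
    if state == "complete":
        return (True, "clone finished")
    failure = status.get("failure", {})
    return (True, "clone %s: %s" % (state, failure.get("errstr", "unknown")))

print(interpret_clone_status({"status": {"state": "complete"}}))
# (True, 'clone finished')
```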
+Here is an example of an ``in-progress`` clone::
+
+ $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
+ $ ceph fs clone status cephfs clone1
+ {
+ "status": {
+ "state": "in-progress",
+ "source": {
+ "volume": "cephfs",
+ "subvolume": "subvol1",
+ "snapshot": "snap1"
+ }
+ }
+ }
+
+.. note:: The ``failure`` section is shown only if the clone's state is ``failed`` or ``canceled``
+
+Here is an example of a ``failed`` clone::
+
+ $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
+ $ ceph fs clone status cephfs clone1
+ {
+ "status": {
+ "state": "failed",
+ "source": {
+ "volume": "cephfs",
+ "subvolume": "subvol1",
+            "snapshot": "snap1",
+            "size": "104857600"
+ },
+ "failure": {
+ "errno": "122",
+ "errstr": "Disk quota exceeded"
+ }
+ }
+ }
+
+.. note:: Because ``subvol1`` is in the default group, the ``source`` object in the ``clone status`` output does not include the group name.
+
+.. note:: Cloned subvolumes are accessible only after the clone operation has successfully completed.
+
+After a successful clone operation, ``clone status`` will look like the below::
+
+ $ ceph fs clone status cephfs clone1
+ {
+ "status": {
+ "state": "complete"
+ }
+ }
+
+If a clone operation is unsuccessful, the ``state`` value will be ``failed``.
+
+To retry a failed clone operation, the incomplete clone must be deleted and the clone operation must be issued again.
+To delete a partial clone use::
+
+ $ ceph fs subvolume rm <vol_name> <clone_name> [--group_name <group_name>] --force
+
+.. note:: Cloning synchronizes only directories, regular files and symbolic links. Inode timestamps (access and
+ modification times) are synchronized up to seconds granularity.
+
+An ``in-progress`` or a ``pending`` clone operation may be canceled. To cancel a clone operation use the ``clone cancel`` command::
+
+ $ ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]
+
+On successful cancellation, the cloned subvolume is moved to the ``canceled`` state::
+
+ $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
+ $ ceph fs clone cancel cephfs clone1
+ $ ceph fs clone status cephfs clone1
+ {
+ "status": {
+ "state": "canceled",
+ "source": {
+ "volume": "cephfs",
+ "subvolume": "subvol1",
+ "snapshot": "snap1"
+ }
+ }
+ }
+
+.. note:: A canceled clone may be deleted by supplying the ``--force`` option to the ``fs subvolume rm`` command.
+
+
+.. _subvol-pinning:
+
+Pinning Subvolumes and Subvolume Groups
+---------------------------------------
+
+Subvolumes and subvolume groups may be automatically pinned to ranks according
+to policies. This can distribute load across MDS ranks in predictable and
+stable ways. Review :ref:`cephfs-pinning` and :ref:`cephfs-ephemeral-pinning`
+for details on how pinning works.
+
+Pinning is configured by::
+
+ $ ceph fs subvolumegroup pin <vol_name> <group_name> <pin_type> <pin_setting>
+
+or for subvolumes::
+
+ $ ceph fs subvolume pin <vol_name> <subvol_name> <pin_type> <pin_setting>
+
+Typically you will want to set subvolume group pins. The ``pin_type`` may be
+one of ``export``, ``distributed``, or ``random``. The ``pin_setting``
+corresponds to the extended attribute "value" as in the pinning documentation
+referenced above.
+
+For example, to set a distributed pinning strategy on a subvolume group::
+
+  $ ceph fs subvolumegroup pin cephfilesystem-a csi distributed 1
+
+This will enable the distributed subtree partitioning policy for the "csi"
+subvolume group. Every subvolume within the group will be automatically
+pinned to one of the available ranks on the file system.
+
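Conceptually, the distributed policy spreads the subvolumes of a group across the active ranks. The sketch below only illustrates the idea of hashing names onto ranks; the real MDS uses consistent hashing on directory inodes, not this function:

```python
import zlib

def conceptual_rank(subvolume_name, num_active_ranks):
    """Illustration only: map a subvolume name onto one of the
    active MDS ranks. Not how the MDS actually computes the pin."""
    return zlib.crc32(subvolume_name.encode()) % num_active_ranks

# With many subvolumes, the load spreads over all active ranks.
ranks = {conceptual_rank("subvol-%d" % i, 4) for i in range(100)}
print(sorted(ranks))
```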
+
+.. _manila: https://github.com/openstack/manila
+.. _CSI: https://github.com/ceph/ceph-csi
diff --git a/doc/cephfs/full.rst b/doc/cephfs/full.rst
new file mode 100644
index 000000000..fe0616cb6
--- /dev/null
+++ b/doc/cephfs/full.rst
@@ -0,0 +1,60 @@
+
+Handling a full Ceph file system
+================================
+
+When a RADOS cluster reaches its ``mon_osd_full_ratio`` (default
+95%) capacity, it is marked with the OSD full flag. This flag causes
+most normal RADOS clients to pause all operations until it is resolved
+(for example by adding more capacity to the cluster).
+
+The file system has some special handling of the full flag, explained below.
+
+Hammer and later
+----------------
+
+Since the hammer release, a full file system will lead to ENOSPC
+results from:
+
+ * Data writes on the client
+ * Metadata operations other than deletes and truncates
+
+Because the full condition may not be encountered until
+data is flushed to disk (sometime after a ``write`` call has already
+returned successfully), the ENOSPC error may not be seen until the application
+calls ``fsync`` or ``fclose`` (or equivalent) on the file handle.
+
+Calling ``fsync`` is guaranteed to reliably indicate whether the data
+made it to disk, and will return an error if it doesn't. ``fclose`` will
+only return an error if buffered data happened to be flushed since
+the last write -- a successful ``fclose`` does not guarantee that the
+data made it to disk, and in a full-space situation, buffered data
+may be discarded after an ``fclose`` if no space is available to persist it.
+
+.. warning::
+ If an application appears to be misbehaving on a full file system,
+ check that it is performing ``fsync()`` calls as necessary to ensure
+ data is on disk before proceeding.
+
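In practice this means checking the result of ``fsync`` explicitly rather than relying on ``write`` or ``fclose``. A minimal Python sketch of the pattern (the same applies to ``fsync(2)`` in C):

```python
import errno
import os
import tempfile

def write_and_verify(fd, data):
    """Write data and force it to disk; surface ENOSPC explicitly.
    A successful write() alone does not prove the data is durable."""
    os.write(fd, data)
    try:
        os.fsync(fd)  # a full file system surfaces ENOSPC here
    except OSError as e:
        if e.errno == errno.ENOSPC:
            return "ENOSPC"  # file system full; buffered data may be lost
        raise
    return "ok"

with tempfile.TemporaryFile() as f:
    print(write_and_verify(f.fileno(), b"hello"))  # ok
```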
+Data writes may be cancelled by the client if they are in flight at the
+time the OSD full flag is sent. Clients update the ``osd_epoch_barrier``
+when releasing capabilities on files affected by cancelled operations, in
+order to ensure that these cancelled operations do not interfere with
+subsequent access to the data objects by the MDS or other clients. For
+more on the epoch barrier mechanism, see :ref:`background_blocklisting_and_osd_epoch_barrier`.
+
+Legacy (pre-hammer) behavior
+----------------------------
+
+In versions of Ceph earlier than hammer, the MDS would ignore
+the full status of the RADOS cluster, and any data writes from
+clients would stall until the cluster ceased to be full.
+
+There are two dangerous conditions to watch for with this behaviour:
+
+* If a client had pending writes to a file, then it was not possible
+ for the client to release the file to the MDS for deletion: this could
+ lead to difficulty clearing space on a full file system
+* If clients continued to create a large number of empty files, the
+ resulting metadata writes from the MDS could lead to total exhaustion
+ of space on the OSDs such that no further deletions could be performed.
+
diff --git a/doc/cephfs/health-messages.rst b/doc/cephfs/health-messages.rst
new file mode 100644
index 000000000..7edc1262f
--- /dev/null
+++ b/doc/cephfs/health-messages.rst
@@ -0,0 +1,240 @@
+
+.. _cephfs-health-messages:
+
+======================
+CephFS health messages
+======================
+
+Cluster health checks
+=====================
+
+The Ceph monitor daemons will generate health messages in response
+to certain states of the file system map structure (and the enclosed MDS maps).
+
+Message: mds rank(s) *ranks* have failed
+Description: One or more MDS ranks are not currently assigned to
+an MDS daemon; the cluster will not recover until a suitable replacement
+daemon starts.
+
+Message: mds rank(s) *ranks* are damaged
+Description: One or more MDS ranks has encountered severe damage to
+its stored metadata, and cannot start again until it is repaired.
+
+Message: mds cluster is degraded
+Description: One or more MDS ranks are not currently up and running, clients
+may pause metadata IO until this situation is resolved. This includes
+ranks being failed or damaged, and additionally includes ranks
+which are running on an MDS but have not yet made it to the *active*
+state (e.g. ranks currently in *replay* state).
+
+Message: mds *names* are laggy
+Description: The named MDS daemons have failed to send beacon messages
+to the monitor for at least ``mds_beacon_grace`` (default 15s), while
+they are supposed to send beacon messages every ``mds_beacon_interval``
+(default 4s). The daemons may have crashed. The Ceph monitor will
+automatically replace laggy daemons with standbys if any are available.
+
+Message: insufficient standby daemons available
+Description: One or more file systems are configured to have a certain number
+of standby daemons available (including daemons in standby-replay) but the
+cluster does not have enough standby daemons. The standby daemons not in replay
+count towards any file system (i.e. they may overlap). This warning can be
+configured by setting ``ceph fs set <fs> standby_count_wanted <count>``. Use
+zero for ``count`` to disable.
+
+
+Daemon-reported health checks
+=============================
+
+MDS daemons can identify a variety of unwanted conditions, and
+indicate these to the operator in the output of ``ceph status``.
+These conditions have human readable messages, and additionally
+a unique code starting with ``MDS_``.
+
+.. highlight:: console
+
+``ceph health detail`` shows the details of the conditions. Following
+is a typical health report from a cluster experiencing MDS related
+performance issues::
+
+ $ ceph health detail
+ HEALTH_WARN 1 MDSs report slow metadata IOs; 1 MDSs report slow requests
+ MDS_SLOW_METADATA_IO 1 MDSs report slow metadata IOs
+ mds.fs-01(mds.0): 3 slow metadata IOs are blocked > 30 secs, oldest blocked for 51123 secs
+ MDS_SLOW_REQUEST 1 MDSs report slow requests
+ mds.fs-01(mds.0): 5 slow requests are blocked > 30 secs
+
+Here, ``MDS_SLOW_REQUEST`` is the unique code representing the
+condition where requests are taking a long time to complete, and the
+accompanying description shows its severity and the MDS daemons which are
+serving these slow requests.
+
+This page lists the health checks raised by MDS daemons. For the checks from
+other daemons, please see :ref:`health-checks`.
+
+``MDS_TRIM``
+------------
+
+ Message
+ "Behind on trimming..."
+ Description
+ CephFS maintains a metadata journal that is divided into
+    *log segments*. The length of the journal (in number of segments) is controlled
+ by the setting ``mds_log_max_segments``, and when the number of segments
+ exceeds that setting the MDS starts writing back metadata so that it
+ can remove (trim) the oldest segments. If this writeback is happening
+ too slowly, or a software bug is preventing trimming, then this health
+ message may appear. The threshold for this message to appear is controlled by
+ the config option ``mds_log_warn_factor``, the default is 2.0.
+
+``MDS_HEALTH_CLIENT_LATE_RELEASE``, ``MDS_HEALTH_CLIENT_LATE_RELEASE_MANY``
+---------------------------------------------------------------------------
+
+ Message
+ "Client *name* failing to respond to capability release"
+ Description
+ CephFS clients are issued *capabilities* by the MDS, which
+ are like locks. Sometimes, for example when another client needs access,
+ the MDS will request clients release their capabilities. If the client
+ is unresponsive or buggy, it might fail to do so promptly or fail to do
+ so at all. This message appears if a client has taken longer than
+ ``session_timeout`` (default 60s) to comply.
+
+``MDS_CLIENT_RECALL``, ``MDS_HEALTH_CLIENT_RECALL_MANY``
+--------------------------------------------------------
+
+ Message
+ "Client *name* failing to respond to cache pressure"
+ Description
+ Clients maintain a metadata cache. Items (such as inodes) in the
+ client cache are also pinned in the MDS cache, so when the MDS needs to shrink
+ its cache (to stay within ``mds_cache_memory_limit``), it sends messages to
+ clients to shrink their caches too. If the client is unresponsive or buggy,
+ this can prevent the MDS from properly staying within its cache limits and it
+ may eventually run out of memory and crash. This message appears if a client
+ has failed to release more than
+ ``mds_recall_warning_threshold`` capabilities (decaying with a half-life of
+ ``mds_recall_max_decay_rate``) within the last
+    ``mds_recall_warning_decay_rate`` seconds.
+
+``MDS_CLIENT_OLDEST_TID``, ``MDS_CLIENT_OLDEST_TID_MANY``
+---------------------------------------------------------
+
+ Message
+ "Client *name* failing to advance its oldest client/flush tid"
+ Description
+ The CephFS client-MDS protocol uses a field called the
+ *oldest tid* to inform the MDS of which client requests are fully
+ complete and may therefore be forgotten about by the MDS. If a buggy
+ client is failing to advance this field, then the MDS may be prevented
+ from properly cleaning up resources used by client requests. This message
+    appears if a client has more than ``max_completed_requests``
+ (default 100000) requests that are complete on the MDS side but haven't
+ yet been accounted for in the client's *oldest tid* value.
+
+``MDS_DAMAGE``
+--------------
+
+ Message
+ "Metadata damage detected"
+ Description
+ Corrupt or missing metadata was encountered when reading
+ from the metadata pool. This message indicates that the damage was
+ sufficiently isolated for the MDS to continue operating, although
+ client accesses to the damaged subtree will return IO errors. Use
+ the ``damage ls`` admin socket command to get more detail on the damage.
+ This message appears as soon as any damage is encountered.
+
+``MDS_HEALTH_READ_ONLY``
+------------------------
+
+ Message
+ "MDS in read-only mode"
+ Description
+ The MDS has gone into readonly mode and will return EROFS
+ error codes to client operations that attempt to modify any metadata. The
+ MDS will go into readonly mode if it encounters a write error while
+ writing to the metadata pool, or if forced to by an administrator using
+ the *force_readonly* admin socket command.
+
+``MDS_SLOW_REQUEST``
+--------------------
+
+ Message
+ "*N* slow requests are blocked"
+
+ Description
+ One or more client requests have not been completed promptly,
+ indicating that the MDS is either running very slowly, or that the RADOS
+ cluster is not acknowledging journal writes promptly, or that there is a bug.
+ Use the ``ops`` admin socket command to list outstanding metadata operations.
+ This message appears if any client requests have taken longer than
+ ``mds_op_complaint_time`` (default 30s).
+
+``MDS_CACHE_OVERSIZED``
+-----------------------
+
+ Message
+ "Too many inodes in cache"
+ Description
+ The MDS is not succeeding in trimming its cache to comply with the
+ limit set by the administrator. If the MDS cache becomes too large, the daemon
+ may exhaust available memory and crash. By default, this message appears if
+ the actual cache size (in memory) is at least 50% greater than
+ ``mds_cache_memory_limit`` (default 4GB). Modify ``mds_health_cache_threshold``
+ to set the warning ratio.
+
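The warning condition amounts to a simple comparison, sketched below (an illustration of the documented 50% threshold, not actual MDS code):

```python
def cache_oversized(actual_bytes, mds_cache_memory_limit,
                    mds_health_cache_threshold=1.5):
    """True when the resident cache exceeds the configured limit by
    the warning ratio (default 1.5, i.e. 50% over the limit)."""
    return actual_bytes > mds_cache_memory_limit * mds_health_cache_threshold

limit = 4 * 1024**3  # default mds_cache_memory_limit: 4 GiB
print(cache_oversized(5 * 1024**3, limit))  # False: only 25% over
print(cache_oversized(7 * 1024**3, limit))  # True
```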
+``FS_WITH_FAILED_MDS``
+----------------------
+
+ Message
+ "Some MDS ranks do not have standby replacements"
+
+ Description
+ Normally, a failed MDS rank will be replaced by a standby MDS. This situation
+ is transient and is not considered critical. However, if there are no standby
+ MDSs available to replace an active MDS rank, this health warning is generated.
+
+``MDS_INSUFFICIENT_STANDBY``
+----------------------------
+
+ Message
+ "Insufficient number of available standby(-replay) MDS daemons than configured"
+
+ Description
+  The minimum number of standby(-replay) MDS daemons can be configured by setting
+  the ``standby_count_wanted`` configuration variable. This health warning is
+  generated when the number of available standby(-replay) MDS daemons does not
+  match the configured value.
+
+``FS_DEGRADED``
+----------------------------
+
+ Message
+ "Some MDS ranks have been marked failed or damaged"
+
+ Description
+  One or more MDS ranks have ended up in the failed or damaged state due to
+  an unrecoverable error. The file system may be partially or fully
+  unavailable while one (or more) ranks are offline.
+
+``MDS_UP_LESS_THAN_MAX``
+----------------------------
+
+ Message
+ "Number of active ranks are less than configured number of maximum MDSs"
+
+ Description
+  The maximum number of MDS ranks can be configured by setting the ``max_mds``
+  configuration variable. This health warning is generated when the number
+  of active MDS ranks falls below the configured value.
+
+``MDS_ALL_DOWN``
+----------------------------
+
+ Message
+ "None of the MDS ranks are available (file system offline)"
+
+ Description
+  All MDS ranks are unavailable, leaving the file system completely
+  offline.
diff --git a/doc/cephfs/index.rst b/doc/cephfs/index.rst
new file mode 100644
index 000000000..3d52aef38
--- /dev/null
+++ b/doc/cephfs/index.rst
@@ -0,0 +1,213 @@
+.. _ceph-file-system:
+
+=================
+ Ceph File System
+=================
+
+The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
+top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
+a state-of-the-art, multi-use, highly available, and performant file store for
+a variety of applications, including traditional use-cases like shared home
+directories, HPC scratch space, and distributed workflow shared storage.
+
+CephFS achieves these goals through the use of some novel architectural
+choices. Notably, file metadata is stored in a separate RADOS pool from file
+data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
+which may scale to support higher throughput metadata workloads. Clients of
+the file system have direct access to RADOS for reading and writing file data
+blocks. For this reason, workloads may linearly scale with the size of the
+underlying RADOS object store; that is, there is no gateway or broker mediating
+data I/O for clients.
+
+Access to data is coordinated through the cluster of MDS which serve as
+authorities for the state of the distributed metadata cache cooperatively
+maintained by clients and MDS. Mutations to metadata are aggregated by each MDS
+into a series of efficient writes to a journal on RADOS; no metadata state is
+stored locally by the MDS. This model allows for coherent and rapid
+collaboration between clients within the context of a POSIX file system.
+
+.. image:: cephfs-architecture.svg
+
+CephFS is the subject of numerous academic papers for its novel designs and
+contributions to file system research. It is the oldest storage interface in
+Ceph and was once the primary use-case for RADOS. Now it is joined by two
+other storage interfaces to form a modern unified storage system: RBD (Ceph
+Block Devices) and RGW (Ceph Object Storage Gateway).
+
+
+Getting Started with CephFS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For most deployments of Ceph, setting up your first CephFS file system is as simple as:
+
+.. prompt:: bash
+
+ # Create a CephFS volume named (for example) "cephfs":
+ ceph fs volume create cephfs
+
+The Ceph `Orchestrator`_ will automatically create and configure MDS for
+your file system if the back-end deployment technology supports it (see
+`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
+as needed`_. You can also `create other CephFS volumes`_.
+
+Finally, to mount CephFS on your client nodes, see `Mount CephFS:
+Prerequisites`_ page. Additionally, a command-line shell utility is available
+for interactive access or scripting via the `cephfs-shell`_.
+
+.. _Orchestrator: ../mgr/orchestrator
+.. _deploy MDS manually as needed: add-remove-mds
+.. _create other CephFS volumes: fs-volumes
+.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
+.. _Mount CephFS\: Prerequisites: mount-prerequisites
+.. _cephfs-shell: ../man/8/cephfs-shell
+
+
+.. raw:: html
+
+ <!---
+
+Administration
+^^^^^^^^^^^^^^
+
+.. raw:: html
+
+ --->
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+
+ Create a CephFS file system <createfs>
+ Administrative commands <administration>
+ Creating Multiple File Systems <multifs>
+ Provision/Add/Remove MDS(s) <add-remove-mds>
+ MDS failover and standby configuration <standby>
+ MDS Cache Configuration <cache-configuration>
+ MDS Configuration Settings <mds-config-ref>
+ Manual: ceph-mds <../../man/8/ceph-mds>
+ Export over NFS <nfs>
+ Application best practices <app-best-practices>
+ FS volume and subvolumes <fs-volumes>
+ CephFS Quotas <quota>
+ Health messages <health-messages>
+ Upgrading old file systems <upgrading>
+ CephFS Top Utility <cephfs-top>
+ Scheduled Snapshots <snap-schedule>
+ CephFS Snapshot Mirroring <cephfs-mirroring>
+
+.. raw:: html
+
+ <!---
+
+Mounting CephFS
+^^^^^^^^^^^^^^^
+
+.. raw:: html
+
+ --->
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+
+ Client Configuration Settings <client-config-ref>
+ Client Authentication <client-auth>
+ Mount CephFS: Prerequisites <mount-prerequisites>
+ Mount CephFS using Kernel Driver <mount-using-kernel-driver>
+ Mount CephFS using FUSE <mount-using-fuse>
+ Mount CephFS on Windows <ceph-dokan>
+ Use the CephFS Shell <../../man/8/cephfs-shell>
+ Supported Features of Kernel Driver <kernel-features>
+ Manual: ceph-fuse <../../man/8/ceph-fuse>
+ Manual: mount.ceph <../../man/8/mount.ceph>
+ Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>
+
+
+.. raw:: html
+
+ <!---
+
+CephFS Concepts
+^^^^^^^^^^^^^^^
+
+.. raw:: html
+
+ --->
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+
+ MDS States <mds-states>
+ POSIX compatibility <posix>
+ MDS Journaling <mds-journaling>
+ File layouts <file-layouts>
+ Distributed Metadata Cache <mdcache>
+ Dynamic Metadata Management in CephFS <dynamic-metadata-management>
+ CephFS IO Path <cephfs-io-path>
+ LazyIO <lazyio>
+ Directory fragmentation <dirfrags>
+ Multiple active MDS daemons <multimds>
+
+
+.. raw:: html
+
+ <!---
+
+Troubleshooting and Disaster Recovery
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. raw:: html
+
+ --->
+
+.. toctree::
+ :hidden:
+
+ Client eviction <eviction>
+ Scrubbing the File System <scrub>
+ Handling full file systems <full>
+ Metadata repair <disaster-recovery-experts>
+ Troubleshooting <troubleshooting>
+ Disaster recovery <disaster-recovery>
+ cephfs-journal-tool <cephfs-journal-tool>
+ Recovering file system after monitor store loss <recover-fs-after-mon-store-loss>
+
+
+.. raw:: html
+
+ <!---
+
+Developer Guides
+^^^^^^^^^^^^^^^^
+
+.. raw:: html
+
+ --->
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+
+ Journaler Configuration <journaler>
+ Client's Capabilities <capabilities>
+ Java and Python bindings <api/index>
+ Mantle <mantle>
+
+
+.. raw:: html
+
+ <!---
+
+Additional Details
+^^^^^^^^^^^^^^^^^^
+
+.. raw:: html
+
+ --->
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+
+ Experimental Features <experimental-features>
diff --git a/doc/cephfs/journaler.rst b/doc/cephfs/journaler.rst
new file mode 100644
index 000000000..d2fb2b0ab
--- /dev/null
+++ b/doc/cephfs/journaler.rst
@@ -0,0 +1,7 @@
+===========
+ Journaler
+===========
+
+.. confval:: journaler_write_head_interval
+.. confval:: journaler_prefetch_periods
+.. confval:: journaler_prezero_periods
diff --git a/doc/cephfs/kernel-features.rst b/doc/cephfs/kernel-features.rst
new file mode 100644
index 000000000..c025c7a52
--- /dev/null
+++ b/doc/cephfs/kernel-features.rst
@@ -0,0 +1,47 @@
+
+Supported Features of the Kernel Driver
+========================================
+The kernel driver is developed separately from the core Ceph code, and as
+such it sometimes differs from the FUSE driver in feature implementation.
+The following details the implementation status of various CephFS features
+in the kernel driver.
+
+Inline data
+-----------
+Inline data was introduced by the Firefly release. This feature is being
+deprecated in mainline CephFS, and may be removed from a future kernel
+release.
+
+Linux kernel clients >= 3.19 can read inline data and convert existing
+inline data to RADOS objects when file data is modified. At present,
+Linux kernel clients do not store file data as inline data.
+
+See `Experimental Features`_ for more information.
+
+Quotas
+------
+Quotas were first introduced in the hammer release. The quota disk format was
+changed in the Mimic release. Linux kernel clients >= 4.17 support the new
+quota format. At present, no Linux kernel client supports the old quota format.
+
+See `Quotas`_ for more information.
+
+Multiple file systems within a Ceph cluster
+-------------------------------------------
+The feature was introduced by the Jewel release. Linux kernel clients >= 4.7
+can support it.
+
+See `Experimental Features`_ for more information.
+
+Multiple active metadata servers
+--------------------------------
+The feature has been supported since the Luminous release. It is recommended to
+use Linux kernel clients >= 4.14 when there are multiple active MDS.
+
+Snapshots
+---------
+The feature has been supported since the Mimic release. It is recommended to
+use Linux kernel clients >= 4.17 if snapshot is used.
+
+.. _Experimental Features: ../experimental-features
+.. _Quotas: ../quota
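As a rough illustration of the version thresholds above, the following Python sketch (not part of Ceph; the feature names and minimum versions are transcribed from this page) compares the running kernel's release string against each minimum:

```python
# Illustrative only: check a kernel release string against the minimum
# kernel versions listed on this page for various CephFS features.
import platform

FEATURE_MIN_KERNEL = {
    "inline data (read/convert)": (3, 19),
    "new-format quotas": (4, 17),
    "multiple file systems": (4, 7),
    "multiple active MDS (recommended)": (4, 14),
    "snapshots (recommended)": (4, 17),
}

def parse_kernel(release: str) -> tuple:
    """Extract (major, minor) from a release string like '5.15.0-91-generic'."""
    major, minor = release.split(".")[:2]
    return int(major), int("".join(ch for ch in minor if ch.isdigit()))

def supported_features(release: str) -> dict:
    """Map each feature to whether the given kernel meets its minimum."""
    ver = parse_kernel(release)
    return {feat: ver >= minimum for feat, minimum in FEATURE_MIN_KERNEL.items()}

if __name__ == "__main__":
    for feat, ok in supported_features(platform.release()).items():
        print(f"{feat}: {'yes' if ok else 'kernel too old'}")
```

Note that a version check is only a heuristic: distribution kernels often backport CephFS features, so consult your distribution's documentation as well.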
diff --git a/doc/cephfs/lazyio.rst b/doc/cephfs/lazyio.rst
new file mode 100644
index 000000000..b30058774
--- /dev/null
+++ b/doc/cephfs/lazyio.rst
@@ -0,0 +1,76 @@
+======
+LazyIO
+======
+
+LazyIO relaxes POSIX semantics. Buffered reads/writes are allowed even when a
+file is opened by multiple applications on multiple clients. Applications are
+responsible for managing cache coherency themselves.
+
+Libcephfs has supported LazyIO since the Nautilus release.
+
+Enable LazyIO
+=============
+
+LazyIO can be enabled in the following ways:
+
+- The ``client_force_lazyio`` option enables LAZY_IO globally for libcephfs and
+  ceph-fuse mounts.
+
+- ``ceph_lazyio(...)`` and ``ceph_ll_lazyio(...)`` enable LAZY_IO for a file
+  handle in libcephfs.
+
+Using LazyIO
+============
+
+LazyIO includes two methods ``lazyio_propagate()`` and ``lazyio_synchronize()``.
+With LazyIO enabled, writes may not be visible to other clients until
+``lazyio_propagate()`` is called. Reads may come from local cache (irrespective of
+changes to the file by other clients) until ``lazyio_synchronize()`` is called.
+
+- ``lazyio_propagate(int fd, loff_t offset, size_t count)`` - Ensures that any
+  buffered writes of the client, in the given region (offset to offset+count),
+  have been propagated to the shared file. If offset and count are both 0, the
+  operation is performed on the entire file. Currently, this is the only
+  supported mode.
+
+- ``lazyio_synchronize(int fd, loff_t offset, size_t count)`` - Ensures that the
+  client is, in a subsequent read call, able to read the updated file with all
+  the propagated writes of the other clients. In CephFS this is facilitated by
+  invalidating the file caches pertaining to the inode, which forces the client
+  to refetch/recache the data from the updated file. If the write cache of the
+  calling client is dirty (not propagated), ``lazyio_synchronize()`` flushes it
+  as well.
+
+An example usage (utilizing libcephfs) is given below. This is a sample I/O loop for a
+particular client/file descriptor in a parallel application:
+
+::
+
+    /* Client a (ca) opens the shared file shared_file.txt */
+    int fda = ceph_open(ca, "shared_file.txt", O_CREAT|O_RDWR, 0644);
+
+    /* Enable LazyIO for fda */
+    ceph_lazyio(ca, fda, 1);
+
+    for(i = 0; i < num_iters; i++) {
+       char out_buf[] = "fooooooooo";
+
+       ceph_write(ca, fda, out_buf, sizeof(out_buf), i);
+       /* Propagate the writes associated with fda to the backing storage */
+       ceph_lazyio_propagate(ca, fda, 0, 0);
+
+       /* The barrier makes sure changes associated with all file descriptors
+          are propagated so that there is certainty that the backing file
+          is up to date */
+       application_specific_barrier();
+
+       char in_buf[40];
+       /* Calling ceph_lazyio_synchronize here ensures that ca will read the
+          updated file with the propagated changes and not read stale cached
+          data */
+       ceph_lazyio_synchronize(ca, fda, 0, 0);
+       ceph_read(ca, fda, in_buf, sizeof(in_buf), 0);
+
+       /* A barrier is required here before returning to the next write
+          phase so as to avoid overwriting the portion of the shared file still
+          being read by another file descriptor */
+       application_specific_barrier();
+    }
diff --git a/doc/cephfs/mantle.rst b/doc/cephfs/mantle.rst
new file mode 100644
index 000000000..dc9e62461
--- /dev/null
+++ b/doc/cephfs/mantle.rst
@@ -0,0 +1,263 @@
+Mantle
+======
+
+.. warning::
+
+ Mantle is for research and development of metadata balancer algorithms,
+ not for use on production CephFS clusters.
+
+Multiple, active MDSs can migrate directories to balance metadata load. The
+policies for when, where, and how much to migrate are hard-coded into the
+metadata balancing module. Mantle is a programmable metadata balancer built
+into the MDS. The idea is to protect the mechanisms for balancing load
+(migration, replication, fragmentation) but stub out the balancing policies
+using Lua. Mantle is based on [1] but the current implementation does *NOT*
+have the following features from that paper:
+
+1. Balancing API: in the paper, the user fills in when, where, how much, and
+ load calculation policies; currently, Mantle only requires that Lua policies
+ return a table of target loads (e.g., how much load to send to each MDS)
+2. "How much" hook: in the paper, there was a hook that let the user control
+ the fragment selector policy; currently, Mantle does not have this hook
+3. Instantaneous CPU utilization as a metric
+
+[1] Supercomputing '15 Paper:
+http://sc15.supercomputing.org/schedule/event_detail-evid=pap168.html
+
+Quickstart with vstart
+----------------------
+
+.. warning::
+
+ Developing balancers with vstart is difficult because running all daemons
+ and clients on one node can overload the system. Let it run for a while, even
+ though you will likely see a bunch of lost heartbeat and laggy MDS warnings.
+ Most of the time this guide will work but sometimes all MDSs lock up and you
+ cannot actually see them spill. It is much better to run this on a cluster.
+
+As a prerequisite, we assume you have installed `mdtest
+<https://sourceforge.net/projects/mdtest/>`_ or pulled the `Docker image
+<https://hub.docker.com/r/michaelsevilla/mdtest/>`_. We use mdtest because we
+need to generate enough load to get over the MIN_OFFLOAD threshold that is
+arbitrarily set in the balancer. For example, this does not create enough
+metadata load:
+
+::
+
+ while true; do
+ touch "/cephfs/blah-`date`"
+ done
+
+
+Mantle with `vstart.sh`
+~~~~~~~~~~~~~~~~~~~~~~~
+
+1. Start Ceph and tune the logging so we can see migrations happen:
+
+::
+
+ cd build
+ ../src/vstart.sh -n -l
+ for i in a b c; do
+ bin/ceph --admin-daemon out/mds.$i.asok config set debug_ms 0
+ bin/ceph --admin-daemon out/mds.$i.asok config set debug_mds 2
+ bin/ceph --admin-daemon out/mds.$i.asok config set mds_beacon_grace 1500
+ done
+
+
+2. Put the balancer into RADOS:
+
+::
+
+ bin/rados put --pool=cephfs_metadata_a greedyspill.lua ../src/mds/balancers/greedyspill.lua
+
+
+3. Activate Mantle:
+
+::
+
+ bin/ceph fs set cephfs max_mds 5
+ bin/ceph fs set cephfs_a balancer greedyspill.lua
+
+
+4. Mount CephFS in another window:
+
+::
+
+ bin/ceph-fuse /cephfs -o allow_other &
+ tail -f out/mds.a.log
+
+
+ Note that if you look at the last MDS (which could be a, b, or c -- it's
+ random), you will see an attempt to index a nil value. This is because the
+ last MDS tries to check the load of its neighbor, which does not exist.
+
+5. Run a simple benchmark. In our case, we use the Docker mdtest image to
+ create load:
+
+::
+
+ for i in 0 1 2; do
+ docker run -d \
+ --name=client$i \
+ -v /cephfs:/cephfs \
+ michaelsevilla/mdtest \
+ -F -C -n 100000 -d "/cephfs/client-test$i"
+ done
+
+
+6. When you are done, you can kill all the clients with:
+
+::
+
+    for i in 0 1 2; do docker rm -f client$i; done
+
+
+Output
+~~~~~~
+
+Looking at the log for the first MDS (could be a, b, or c), we see that
+everyone has no load:
+
+::
+
+ 2016-08-21 06:44:01.763930 7fd03aaf7700 0 lua.balancer MDS0: < auth.meta_load=0.0 all.meta_load=0.0 req_rate=1.0 queue_len=0.0 cpu_load_avg=1.35 > load=0.0
+ 2016-08-21 06:44:01.763966 7fd03aaf7700 0 lua.balancer MDS1: < auth.meta_load=0.0 all.meta_load=0.0 req_rate=0.0 queue_len=0.0 cpu_load_avg=1.35 > load=0.0
+ 2016-08-21 06:44:01.763982 7fd03aaf7700 0 lua.balancer MDS2: < auth.meta_load=0.0 all.meta_load=0.0 req_rate=0.0 queue_len=0.0 cpu_load_avg=1.35 > load=0.0
+ 2016-08-21 06:44:01.764010 7fd03aaf7700 2 lua.balancer when: not migrating! my_load=0.0 hisload=0.0
+ 2016-08-21 06:44:01.764033 7fd03aaf7700 2 mds.0.bal mantle decided that new targets={}
+
+
+After the job starts, MDS0 gets about 1953 units of load. The greedy spill
+balancer dictates that half the load goes to your neighbor MDS, so we see that
+Mantle tries to send about 976 load units to MDS1.
+
+::
+
+ 2016-08-21 06:45:21.869994 7fd03aaf7700 0 lua.balancer MDS0: < auth.meta_load=5834.188908912 all.meta_load=1953.3492228857 req_rate=12591.0 queue_len=1075.0 cpu_load_avg=3.05 > load=1953.3492228857
+ 2016-08-21 06:45:21.870017 7fd03aaf7700 0 lua.balancer MDS1: < auth.meta_load=0.0 all.meta_load=0.0 req_rate=0.0 queue_len=0.0 cpu_load_avg=3.05 > load=0.0
+ 2016-08-21 06:45:21.870027 7fd03aaf7700 0 lua.balancer MDS2: < auth.meta_load=0.0 all.meta_load=0.0 req_rate=0.0 queue_len=0.0 cpu_load_avg=3.05 > load=0.0
+ 2016-08-21 06:45:21.870034 7fd03aaf7700 2 lua.balancer when: migrating! my_load=1953.3492228857 hisload=0.0
+ 2016-08-21 06:45:21.870050 7fd03aaf7700 2 mds.0.bal mantle decided that new targets={0=0,1=976.675,2=0}
+ 2016-08-21 06:45:21.870094 7fd03aaf7700 0 mds.0.bal - exporting [0,0.52287 1.04574] 1030.88 to mds.1 [dir 100000006ab /client-test2/ [2,head] auth pv=33 v=32 cv=32/0 ap=2+3+4 state=1610612802|complete f(v0 m2016-08-21 06:44:20.366935 1=0+1) n(v2 rc2016-08-21 06:44:30.946816 3790=3788+2) hs=1+0,ss=0+0 dirty=1 | child=1 dirty=1 authpin=1 0x55d2762fd690]
+ 2016-08-21 06:45:21.870151 7fd03aaf7700 0 mds.0.migrator nicely exporting to mds.1 [dir 100000006ab /client-test2/ [2,head] auth pv=33 v=32 cv=32/0 ap=2+3+4 state=1610612802|complete f(v0 m2016-08-21 06:44:20.366935 1=0+1) n(v2 rc2016-08-21 06:44:30.946816 3790=3788+2) hs=1+0,ss=0+0 dirty=1 | child=1 dirty=1 authpin=1 0x55d2762fd690]
+
+
+Eventually load moves around:
+
+::
+
+ 2016-08-21 06:47:10.210253 7fd03aaf7700 0 lua.balancer MDS0: < auth.meta_load=415.77414300449 all.meta_load=415.79000078186 req_rate=82813.0 queue_len=0.0 cpu_load_avg=11.97 > load=415.79000078186
+ 2016-08-21 06:47:10.210277 7fd03aaf7700 0 lua.balancer MDS1: < auth.meta_load=228.72023977691 all.meta_load=186.5606496623 req_rate=28580.0 queue_len=0.0 cpu_load_avg=11.97 > load=186.5606496623
+ 2016-08-21 06:47:10.210290 7fd03aaf7700 0 lua.balancer MDS2: < auth.meta_load=0.0 all.meta_load=0.0 req_rate=1.0 queue_len=0.0 cpu_load_avg=11.97 > load=0.0
+ 2016-08-21 06:47:10.210298 7fd03aaf7700 2 lua.balancer when: not migrating! my_load=415.79000078186 hisload=186.5606496623
+ 2016-08-21 06:47:10.210311 7fd03aaf7700 2 mds.0.bal mantle decided that new targets={}
+
+
+Implementation Details
+----------------------
+
+Most of the implementation is in MDBalancer. Metrics are passed to the balancer
+policies via the Lua stack and a list of loads is returned back to MDBalancer.
+It sits alongside the current balancer implementation and it's enabled with a
+Ceph CLI command ("ceph fs set cephfs balancer mybalancer.lua"). If the Lua policy
+fails (for whatever reason), we fall back to the original metadata load
+balancer. The balancer is stored in the RADOS metadata pool and a string in the
+MDSMap tells the MDSs which balancer to use.
+
+Exposing Metrics to Lua
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Metrics are exposed directly to the Lua code as global variables instead of
+using a well-defined function signature. There is a global "mds" table, where
+each index is an MDS number (e.g., 0) and each value is a dictionary of metrics
+and values. The Lua code can grab metrics using something like this:
+
+::
+
+ mds[0]["queue_len"]
+
+
+This is in contrast to cls-lua in the OSDs, which has well-defined arguments
+(e.g., input/output bufferlists). Exposing the metrics directly makes it easier
+to add new metrics without having to change the API on the Lua side; we want
+the API to grow and shrink as we explore which metrics matter. The downside of
+this approach is that the person programming Lua balancer policies has to look
+at the Ceph source code to see which metrics are exposed. We figure that the
+Mantle developer will be in touch with MDS internals anyway.
+
+The metrics exposed to the Lua policy are the same ones that are already stored
+in mds_load_t: auth.meta_load(), all.meta_load(), req_rate, queue_length,
+cpu_load_avg.
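To illustrate the shape of this interface, here is a hedged Python model of a greedy-spill-style policy. This is purely illustrative: real Mantle policies are Lua scripts (such as ``greedyspill.lua``), and the ``MIN_OFFLOAD`` value and the metric values below are invented for the sketch. It reads the per-MDS metrics table (mirroring the global ``mds`` table exposed to Lua) and returns a table of target loads:

```python
# Illustrative Python model of a Mantle policy; real policies are Lua.
# `mds` mirrors the global table exposed to Lua: rank -> dict of metrics.
MIN_OFFLOAD = 0.1  # hypothetical threshold; the real value lives in the balancer

def greedy_spill(mds: dict, whoami: int) -> dict:
    """Return a table of target loads, as the Lua policy must."""
    targets = {rank: 0.0 for rank in mds}
    my_load = mds[whoami]["all.meta_load"]
    neighbor = whoami + 1
    if neighbor not in mds or my_load <= MIN_OFFLOAD:
        return targets  # nothing to spill, or no neighbor to spill to
    his_load = mds[neighbor]["all.meta_load"]
    if my_load > his_load:
        targets[neighbor] = my_load / 2  # spill half of our load to the neighbor
    return targets

# Metric values invented to resemble the log output shown earlier.
metrics = {
    0: {"all.meta_load": 1953.35, "queue_len": 1075.0},
    1: {"all.meta_load": 0.0, "queue_len": 0.0},
    2: {"all.meta_load": 0.0, "queue_len": 0.0},
}
print(greedy_spill(metrics, 0))  # rank 1 is targeted with half of rank 0's load
```

The key design point carried over from Mantle is that the policy's only output is the target-load table; the mechanism for actually migrating subtrees stays in the MDS.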
+
+Compile/Execute the Balancer
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Here we use `lua_pcall` instead of `lua_call` because we want to handle errors
+in the MDBalancer. We do not want the error propagating up the call chain. The
+cls_lua class wants to handle the error itself because it must fail gracefully.
+For Mantle, we don't care if a Lua error crashes our balancer -- in that case,
+we will fall back to the original balancer.
+
+The performance improvement of using `lua_call` over `lua_pcall` would not be
+leveraged here because the balancer is invoked every 10 seconds by default.
+
+Returning Policy Decision to C++
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We force the Lua policy engine to return a table of values, corresponding to
+the amount of load to send to each MDS. These loads are inserted directly into
+the MDBalancer "my_targets" vector. We do not allow the MDS to return a table
+of MDSs and metrics because we want the decision to be completely made on the
+Lua side.
+
+Iterating through tables returned by Lua is done through the stack. In Lua
+jargon: a dummy value is pushed onto the stack and the next iterator replaces
+the top of the stack with a (k, v) pair. After reading each value, pop that
+value but keep the key for the next call to `lua_next`.
+
+Reading from RADOS
+~~~~~~~~~~~~~~~~~~
+
+All MDSs will read balancing code from RADOS when the balancer version changes
+in the MDS Map. The balancer pulls the Lua code from RADOS synchronously. We do
+this with a timeout: if the asynchronous read does not come back within half
+the balancing tick interval the operation is cancelled and a Connection Timeout
+error is returned. By default, the balancing tick interval is 10 seconds, so
+Mantle will use a 5 second timeout. This design allows Mantle to
+immediately return an error if anything RADOS-related goes wrong.
+
+We use this implementation because we do not want to do a blocking OSD read
+from inside the global MDS lock. Doing so would bring down the MDS cluster if
+any of the OSDs are not responsive -- this is tested in the ceph-qa-suite by
+setting all OSDs to down/out and making sure the MDS cluster stays active.
+
+One approach would be to asynchronously fire the read when handling the MDS Map
+and fill in the Lua code in the background. We cannot do this because the MDS
+does not support daemon-local fallbacks and the balancer assumes that all MDSs
+come to the same decision at the same time (e.g., importers, exporters, etc.).
+
+Debugging
+~~~~~~~~~
+
+Logging in a Lua policy will appear in the MDS log. The syntax is the same as
+the cls logging interface:
+
+::
+
+ BAL_LOG(0, "this is a log message")
+
+
+It is implemented by passing a function that wraps the `dout` logging framework
+(`dout_wrapper`) to Lua with the `lua_register()` primitive. The Lua code is
+actually calling the `dout` function in C++.
+
+Warning and Info messages are centralized using the clog/Beacon. Successful
+messages are only sent on version changes by the first MDS to avoid spamming
+the `ceph -w` utility. These messages are used for the integration tests.
+
+Testing
+~~~~~~~
+
+Testing is done with the ceph-qa-suite (tasks.cephfs.test_mantle). We do not
+test invalid balancer logging and loading the actual Lua VM.
diff --git a/doc/cephfs/mdcache.rst b/doc/cephfs/mdcache.rst
new file mode 100644
index 000000000..f2e20238c
--- /dev/null
+++ b/doc/cephfs/mdcache.rst
@@ -0,0 +1,77 @@
+=================================
+CephFS Distributed Metadata Cache
+=================================
+While the data for inodes in a Ceph file system is stored in RADOS and
+accessed by the clients directly, inode metadata and directory
+information is managed by the Ceph metadata server (MDS). The MDSs
+act as mediators for all metadata-related activity, storing the resulting
+information in a separate RADOS pool from the file data.
+
+CephFS clients can request that the MDS fetch or change inode metadata
+on their behalf, but an MDS can also grant the client **capabilities**
+(aka **caps**) for each inode (see :doc:`/cephfs/capabilities`).
+
+A capability grants the client the ability to cache and possibly
+manipulate some portion of the data or metadata associated with the
+inode. When another client needs access to the same information, the MDS
+will revoke the capability and the client will eventually return it,
+along with an updated version of the inode's metadata (in the event that
+it made changes to it while it held the capability).
+
+Clients can request capabilities and will generally get them, but when
+there is competing access or memory pressure on the MDS, they may be
+**revoked**. When a capability is revoked, the client is responsible for
+returning it as soon as it is able. Clients that fail to do so in a
+timely fashion may end up **blocklisted** and unable to communicate with
+the cluster.
+
+Since the cache is distributed, the MDS must take great care to ensure
+that no client holds capabilities that may conflict with other clients'
+capabilities, or with operations that the MDS does itself. This allows
+CephFS clients to rely on much greater cache coherence than a filesystem like
+NFS, where the client may cache data and metadata beyond the point where
+it has changed on the server.
+
+Client Metadata Requests
+------------------------
+When a client needs to query/change inode metadata or perform an
+operation on a directory, it has two options. It can make a request to
+the MDS directly, or serve the information out of its cache. With
+CephFS, the latter is only possible if the client has the necessary
+caps.
+
+Clients can send simple requests to the MDS to query or request changes
+to certain metadata. The replies to these requests may also grant the
+client a certain set of caps for the inode, allowing it to perform
+subsequent requests without consulting the MDS.
+
+Clients can also request caps directly from the MDS, which is necessary
+in order to read or write file data.
+
+Distributed Locks in an MDS Cluster
+-----------------------------------
+When an MDS wants to read or change information about an inode, it must
+gather the appropriate locks for it. The MDS cluster may have a series
+of different types of locks on the given inode and each MDS may have
+disjoint sets of locks.
+
+If there are outstanding caps that would conflict with these locks, then
+they must be revoked before the lock can be acquired. Once the competing
+caps are returned to the MDS, then it can get the locks and do the
+operation.
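The revoke-then-lock flow described above can be sketched as follows. This is a minimal illustration, not Ceph's real implementation: the cap names are simplified, the conflict table is invented, and real revocation is asynchronous (the MDS waits for clients to return caps rather than assuming they do so instantly).

```python
# Illustrative sketch: before an MDS takes a lock on an inode, client
# capabilities that conflict with that lock must be revoked and returned.
class Inode:
    def __init__(self, ino):
        self.ino = ino
        self.caps = {}      # client_id -> set of held caps, e.g. {"Fr", "Fw"}
        self.locked = False

# Simplified conflict table: a write lock conflicts with cached reads and
# buffered writes; a read lock conflicts only with buffered writes.
CONFLICTS = {"write": {"Fr", "Fw"}, "read": {"Fw"}}

def acquire_lock(inode: Inode, mode: str, revoke_cb) -> None:
    """Revoke conflicting caps (via revoke_cb), then take the lock."""
    for client, caps in list(inode.caps.items()):
        held = caps & CONFLICTS[mode]
        if held:
            revoke_cb(client, held)  # ask the client to return these caps
            caps -= held             # here we assume the client returns promptly
    inode.locked = True

ino = Inode(0x10000000001)
ino.caps = {"client.a": {"Fr", "Fw"}, "client.b": {"Fr"}}
revoked = []
acquire_lock(ino, "write", lambda c, h: revoked.append((c, sorted(h))))
# Both clients held conflicting caps, so both were revoked before locking.
```

In the real MDS, a client that fails to return revoked caps in a timely fashion risks being blocklisted, as noted earlier.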
+
+On a filesystem served by multiple MDSs, the metadata cache is also
+distributed among the MDSs in the cluster. For every inode, at any given
+time, only one MDS in the cluster is considered **authoritative**. Any
+requests to change that inode must be done by the authoritative MDS,
+though non-authoritative MDSs can forward requests to the authoritative
+one.
+
+Non-auth MDSs can also obtain read locks that prevent the auth MDS from
+changing the data until the lock is dropped, so that they can serve
+inode info to the clients.
+
+The auth MDS for an inode can change over time as well. The MDSs will
+actively balance responsibility for the inode cache amongst
+themselves, but this can be overridden by **pinning** certain subtrees
+to a single MDS.
diff --git a/doc/cephfs/mds-config-ref.rst b/doc/cephfs/mds-config-ref.rst
new file mode 100644
index 000000000..5b68053a0
--- /dev/null
+++ b/doc/cephfs/mds-config-ref.rst
@@ -0,0 +1,67 @@
+======================
+ MDS Config Reference
+======================
+
+.. confval:: mds_cache_mid
+.. confval:: mds_dir_max_commit_size
+.. confval:: mds_dir_max_entries
+.. confval:: mds_decay_halflife
+.. confval:: mds_beacon_interval
+.. confval:: mds_beacon_grace
+.. confval:: mon_mds_blocklist_interval
+.. confval:: mds_reconnect_timeout
+.. confval:: mds_tick_interval
+.. confval:: mds_dirstat_min_interval
+.. confval:: mds_scatter_nudge_interval
+.. confval:: mds_client_prealloc_inos
+.. confval:: mds_early_reply
+.. confval:: mds_default_dir_hash
+.. confval:: mds_log_skip_corrupt_events
+.. confval:: mds_log_max_events
+.. confval:: mds_log_max_segments
+.. confval:: mds_bal_sample_interval
+.. confval:: mds_bal_replicate_threshold
+.. confval:: mds_bal_unreplicate_threshold
+.. confval:: mds_bal_split_size
+.. confval:: mds_bal_split_rd
+.. confval:: mds_bal_split_wr
+.. confval:: mds_bal_split_bits
+.. confval:: mds_bal_merge_size
+.. confval:: mds_bal_interval
+.. confval:: mds_bal_fragment_interval
+.. confval:: mds_bal_fragment_fast_factor
+.. confval:: mds_bal_fragment_size_max
+.. confval:: mds_bal_idle_threshold
+.. confval:: mds_bal_max
+.. confval:: mds_bal_max_until
+.. confval:: mds_bal_mode
+.. confval:: mds_bal_min_rebalance
+.. confval:: mds_bal_min_start
+.. confval:: mds_bal_need_min
+.. confval:: mds_bal_need_max
+.. confval:: mds_bal_midchunk
+.. confval:: mds_bal_minchunk
+.. confval:: mds_replay_interval
+.. confval:: mds_shutdown_check
+.. confval:: mds_thrash_exports
+.. confval:: mds_thrash_fragments
+.. confval:: mds_dump_cache_on_map
+.. confval:: mds_dump_cache_after_rejoin
+.. confval:: mds_verify_scatter
+.. confval:: mds_debug_scatterstat
+.. confval:: mds_debug_frag
+.. confval:: mds_debug_auth_pins
+.. confval:: mds_debug_subtrees
+.. confval:: mds_kill_mdstable_at
+.. confval:: mds_kill_export_at
+.. confval:: mds_kill_import_at
+.. confval:: mds_kill_link_at
+.. confval:: mds_kill_rename_at
+.. confval:: mds_inject_skip_replaying_inotable
+.. confval:: mds_kill_skip_replaying_inotable
+.. confval:: mds_wipe_sessions
+.. confval:: mds_wipe_ino_prealloc
+.. confval:: mds_skip_ino
+.. confval:: mds_min_caps_per_client
+.. confval:: mds_symlink_recovery
+.. confval:: mds_extraordinary_events_dump_interval
diff --git a/doc/cephfs/mds-journaling.rst b/doc/cephfs/mds-journaling.rst
new file mode 100644
index 000000000..92b32bb45
--- /dev/null
+++ b/doc/cephfs/mds-journaling.rst
@@ -0,0 +1,90 @@
+MDS Journaling
+==============
+
+CephFS Metadata Pool
+--------------------
+
+CephFS uses a separate (metadata) pool for managing file metadata (inodes and
+dentries) in a Ceph File System. The metadata pool has all the information about
+files in a Ceph File System including the File System hierarchy. Additionally,
+CephFS maintains meta information related to other entities in a file system
+such as file system journals, open file table, session map, etc.
+
+This document describes how Ceph Metadata Servers use and rely on journaling.
+
+CephFS MDS Journaling
+---------------------
+
+CephFS metadata servers stream a journal of metadata events into RADOS in the metadata
+pool prior to executing a file system operation. Active MDS daemon(s) manage metadata
+for files and directories in CephFS.
+
+CephFS uses journaling for a couple of reasons:
+
+#. Consistency: On an MDS failover, the journal events can be replayed to reach a
+ consistent file system state. Also, metadata operations that require multiple
+ updates to the backing store need to be journaled for crash consistency (along
+ with other consistency mechanisms such as locking, etc..).
+
+#. Performance: Journal updates are (mostly) sequential, hence updates to journals
+   are fast. Furthermore, updates can be batched into a single write, thereby
+   saving the disk seek time involved in updates to different parts of a file.
+   Having a large journal also helps a standby MDS to warm its cache, which helps
+   indirectly during MDS failover.
+
+Each active metadata server maintains its own journal in the metadata pool. Journals
+are striped over multiple objects. Journal entries which are not required (deemed as
+old) are trimmed by the metadata server.
+
+Journal Events
+--------------
+
+Apart from journaling file system metadata updates, CephFS journals various other
+events, such as client session info and directory import/export state, to name a
+few. These events are used by the metadata server to reestablish correct state
+as required. For example, on restart the Ceph MDS tries to reconnect clients
+when journal events are replayed and a specific event type in the journal
+records that a client had a session with the MDS before it was restarted.
+
+To examine the list of such events recorded in the journal, CephFS provides a command
+line utility `cephfs-journal-tool` which can be used as follows:
+
+::
+
+ cephfs-journal-tool --rank=<fs>:<rank> event get list
+
+`cephfs-journal-tool` is also used to discover and repair a damaged Ceph File System.
+(See :doc:`/cephfs/cephfs-journal-tool` for more details)
+
+Journal Event Types
+-------------------
+
+The following are the various event types that are journaled by the MDS.
+
+#. `EVENT_COMMITTED`: Mark a request (id) as committed.
+
+#. `EVENT_EXPORT`: Maps directories to an MDS rank.
+
+#. `EVENT_FRAGMENT`: Tracks various stages of directory fragmentation (split/merge).
+
+#. `EVENT_IMPORTSTART`: Logged when an MDS rank starts importing directory fragments.
+
+#. `EVENT_IMPORTFINISH`: Logged when an MDS rank finishes importing directory fragments.
+
+#. `EVENT_NOOP`: No operation event type for skipping over a journal region.
+
+#. `EVENT_OPEN`: Tracks which inodes have open file handles.
+
+#. `EVENT_RESETJOURNAL`: Used to mark a journal as `reset` post truncation.
+
+#. `EVENT_SESSION`: Tracks open client sessions.
+
+#. `EVENT_SLAVEUPDATE`: Logs various stages of an operation that has been forwarded to a (slave) MDS.
+
+#. `EVENT_SUBTREEMAP`: Map of directory inodes to directory contents (subtree partition).
+
+#. `EVENT_TABLECLIENT`: Log transition states of the MDS's view of client tables (snap/anchor).
+
+#. `EVENT_TABLESERVER`: Log transition states of the MDS's view of server tables (snap/anchor).
+
+#. `EVENT_UPDATE`: Log file operations on an inode.
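The replay mechanism that uses these events can be sketched as a toy loop. This is illustrative only: the event names follow the list above, but the payloads are invented for the sketch, and the real MDS replays many more event types with much richer state.

```python
# Toy replay loop (illustrative only): an MDS taking over a rank replays
# journal events in order to rebuild its in-memory state.
def replay(events):
    sessions, inodes = set(), {}
    for etype, payload in events:
        if etype == "EVENT_SESSION":      # a client had an open session
            sessions.add(payload)
        elif etype == "EVENT_UPDATE":     # a file operation on an inode
            ino, meta = payload
            inodes[ino] = meta            # later events supersede earlier ones
        elif etype == "EVENT_NOOP":       # skip over a journal region
            continue
    return sessions, inodes

# A hypothetical journal segment; payload formats are invented.
journal = [
    ("EVENT_SESSION", "client.4161"),
    ("EVENT_UPDATE", (0x1000000a, {"size": 4096})),
    ("EVENT_NOOP", None),
    ("EVENT_UPDATE", (0x1000000a, {"size": 8192})),  # later update wins
]
sessions, inodes = replay(journal)
```

The ordering guarantee is the important property: because events are applied in journal order, the rebuilt state reflects the most recent journaled update to each inode, which is what makes failover transparent to clients.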
diff --git a/doc/cephfs/mds-state-diagram.dot b/doc/cephfs/mds-state-diagram.dot
new file mode 100644
index 000000000..8c9fa25d0
--- /dev/null
+++ b/doc/cephfs/mds-state-diagram.dot
@@ -0,0 +1,75 @@
+digraph {
+
+node [shape=circle,style=unfilled,fixedsize=true,width=2.0]
+
+node [color=blue,peripheries=1];
+N0 [label="up:boot"]
+
+node [color=green,peripheries=1];
+S1 [label="up:standby"]
+N0 -> S1 [color=green,penwidth=2.0];
+S2 [label="up:standby_replay"]
+S1 -> S2 [color=green,penwidth=2.0];
+
+node [color=orange,peripheries=2];
+N1 [label="up:creating"]
+S1 -> N1 [color=orange,penwidth=2.0];
+N2 [label="up:starting"]
+S1 -> N2 [color=orange,penwidth=2.0];
+N3 [label="up:replay"]
+S1 -> N3 [color=orange,penwidth=2.0];
+S2 -> N3 [color=orange,penwidth=2.0];
+N4 [label="up:resolve"]
+N3 -> N4 [color=orange,penwidth=2.0];
+N5 [label="up:reconnect"]
+N3 -> N5 [color=orange,penwidth=2.0];
+N4 -> N5 [color=orange,penwidth=2.0];
+N6 [label="up:rejoin"]
+N5 -> N6 [color=orange,penwidth=2.0];
+N7 [label="up:clientreplay"]
+N6 -> N7 [color=orange,penwidth=2.0];
+
+node [color=green,peripheries=2];
+S0 [label="up:active"]
+N7 -> S0 [color=green,penwidth=2.0];
+N1 -> S0 [color=green,penwidth=2.0];
+N2 -> S0 [color=green,penwidth=2.0];
+N6 -> S0 [color=green,penwidth=2.0];
+
+// going down but still accessible by clients
+node [color=purple,peripheries=2];
+S3 [label="up:stopping"]
+S0 -> S3 [color=purple,penwidth=2.0];
+
+// terminal (but "in")
+node [shape=polygon,sides=6,color=red,peripheries=2];
+D0 [label="down:failed"]
+N2 -> D0 [color=red,penwidth=2.0];
+N3 -> D0 [color=red,penwidth=2.0];
+N4 -> D0 [color=red,penwidth=2.0];
+N5 -> D0 [color=red,penwidth=2.0];
+N6 -> D0 [color=red,penwidth=2.0];
+N7 -> D0 [color=red,penwidth=2.0];
+S0 -> D0 [color=red,penwidth=2.0];
+S3 -> D0 [color=red,penwidth=2.0];
+D0 -> N3 [color=red,penwidth=2.0];
+
+// terminal (but not "in")
+node [shape=polygon,sides=6,color=black,peripheries=1];
+D1 [label="down:damaged"]
+S2 -> D1 [color=black,penwidth=2.0];
+N3 -> D1 [color=black,penwidth=2.0];
+N4 -> D1 [color=black,penwidth=2.0];
+N5 -> D1 [color=black,penwidth=2.0];
+N6 -> D1 [color=black,penwidth=2.0];
+N7 -> D1 [color=black,penwidth=2.0];
+S0 -> D1 [color=black,penwidth=2.0];
+S3 -> D1 [color=black,penwidth=2.0];
+D1 -> D0 [color=red,penwidth=2.0]
+
+node [shape=polygon,sides=6,color=purple,peripheries=1];
+D3 [label="down:stopped"]
+S3 -> D3 [color=purple,penwidth=2.0];
+N6 -> D3 [color=purple,penwidth=2.0];
+
+}
diff --git a/doc/cephfs/mds-states.rst b/doc/cephfs/mds-states.rst
new file mode 100644
index 000000000..c7da639a0
--- /dev/null
+++ b/doc/cephfs/mds-states.rst
@@ -0,0 +1,230 @@
+
+MDS States
+==========
+
+
+The Metadata Server (MDS) goes through several states during normal operation
+in CephFS. For example, some states indicate that the MDS is recovering from a
+failover by a previous instance of the MDS. Here we'll document all of these
+states and include a state diagram to visualize the transitions.
+
+State Descriptions
+------------------
+
+Common states
+~~~~~~~~~~~~~~
+
+
+::
+
+ up:active
+
+This is the normal operating state of the MDS. It indicates that the MDS
+and its rank in the file system are available.
+
+
+::
+
+ up:standby
+
+The MDS is available to take over for a failed rank (see also :ref:`mds-standby`).
+The monitor will automatically assign an MDS in this state to a failed rank
+once one is available.
+
+
+::
+
+ up:standby_replay
+
+The MDS is following the journal of another ``up:active`` MDS. Should the
+active MDS fail, having a standby MDS in replay mode is desirable as the MDS is
+replaying the live journal and will take over more quickly. A downside to having
+standby replay MDSs is that they are not available to take over for any other
+MDS that fails, only the MDS they follow.
+
+
+Less common or transitory states
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+::
+
+ up:boot
+
+This state is broadcast to the Ceph monitors during startup. This state is
+never visible as the monitor immediately assigns the MDS to an available rank or
+commands the MDS to operate as a standby. The state is documented here for
+completeness.
+
+
+::
+
+ up:creating
+
+The MDS is creating a new rank (perhaps rank 0) by constructing some per-rank
+metadata (like the journal) and entering the MDS cluster.
+
+
+::
+
+ up:starting
+
+The MDS is restarting a stopped rank. It opens associated per-rank metadata
+and enters the MDS cluster.
+
+
+::
+
+ up:stopping
+
+When a rank is stopped, the monitors command an active MDS to enter the
+``up:stopping`` state. In this state, the MDS accepts no new client
+connections, migrates all subtrees to other ranks in the file system, flushes
+its metadata journal, and, if it holds the last rank (0), evicts all clients
+and shuts down (see also :ref:`cephfs-administration`).
+
+
+::
+
+ up:replay
+
+The MDS is taking over a failed rank. This state represents that the MDS is
+recovering its journal and other metadata.
+
+
+::
+
+ up:resolve
+
+The MDS enters this state from ``up:replay`` if the Ceph file system has
+multiple ranks (including this one), i.e. it's not a single active MDS cluster.
+The MDS is resolving any uncommitted inter-MDS operations. All ranks in the
+file system must be in this state or later for progress to be made, i.e. no
+rank can be failed/damaged or ``up:replay``.
+
+
+::
+
+ up:reconnect
+
+An MDS enters this state from ``up:replay`` or ``up:resolve``. This state is to
+solicit reconnections from clients. Any client which had a session with this
+rank must reconnect during this time, configurable via
+``mds_reconnect_timeout``.
+
+
+::
+
+ up:rejoin
+
+The MDS enters this state from ``up:reconnect``. In this state, the MDS is
+rejoining the MDS cluster cache. In particular, all inter-MDS locks on metadata
+are reestablished.
+
+If there are no known client requests to be replayed, the MDS directly becomes
+``up:active`` from this state.
+
+
+::
+
+ up:clientreplay
+
+The MDS may enter this state from ``up:rejoin``. The MDS is replaying any
+client requests which were replied to but not yet durable (not journaled).
+Clients resend these requests during ``up:reconnect`` and the requests are
+replayed once again. The MDS enters ``up:active`` after completing replay.
+
+
+Failed states
+~~~~~~~~~~~~~
+
+::
+
+ down:failed
+
+No MDS actually holds this state. Instead, it is applied to the rank in the file system. For example:
+
+::
+
+ $ ceph fs dump
+ ...
+ max_mds 1
+ in 0
+ up {}
+ failed 0
+ ...
+
+Rank 0 is part of the failed set and is waiting to be taken over by a standby
+MDS. If this state persists, it indicates that no suitable MDS daemon could be
+found to assign to this rank. This may be caused by having too few standby
+daemons, or because all standby daemons have an incompatible compat (see also
+:ref:`upgrade-mds-cluster`).
+
+
+::
+
+ down:damaged
+
+No MDS actually holds this state. Instead, it is applied to the rank in the file system. For example:
+
+::
+
+ $ ceph fs dump
+ ...
+ max_mds 1
+ in 0
+ up {}
+ failed
+ damaged 0
+ ...
+
+Rank 0 has become damaged (see also :ref:`cephfs-disaster-recovery`) and has
+been placed in the ``damaged`` set. An MDS which was running as rank 0 found
+metadata damage that could not be automatically recovered. Operator
+intervention is required.
+
+
+::
+
+ down:stopped
+
+No MDS actually holds this state. Instead, it is applied to the rank in the file system. For example:
+
+::
+
+ $ ceph fs dump
+ ...
+ max_mds 1
+ in 0
+ up {}
+ failed
+ damaged
+ stopped 1
+ ...
+
+The rank has been stopped by reducing ``max_mds`` (see also :ref:`cephfs-multimds`).
+
+State Diagram
+-------------
+
+This state diagram shows the possible state transitions for the MDS/rank. The legend is as follows:
+
+Color
+~~~~~
+
+- Green: MDS is active.
+- Orange: MDS is in transient state trying to become active.
+- Red: MDS is indicating a state that causes the rank to be marked failed.
+- Purple: MDS and rank is stopping.
+- Black: MDS is indicating a state that causes the rank to be marked damaged.
+
+Shape
+~~~~~
+
+- Circle: an MDS holds this state.
+- Hexagon: no MDS holds this state (it is applied to the rank).
+
+Lines
+~~~~~
+
+- A double-lined shape indicates the rank is "in".
+
+.. graphviz:: mds-state-diagram.dot
diff --git a/doc/cephfs/mount-prerequisites.rst b/doc/cephfs/mount-prerequisites.rst
new file mode 100644
index 000000000..6ed8a19b6
--- /dev/null
+++ b/doc/cephfs/mount-prerequisites.rst
@@ -0,0 +1,75 @@
+Mount CephFS: Prerequisites
+===========================
+
+You can use CephFS by mounting it to your local filesystem or by using
+`cephfs-shell`_. Mounting CephFS requires superuser privileges, because the
+client trims dentries by issuing a remount of itself. CephFS can be mounted
+`using kernel`_ as well as `using FUSE`_. Both have their own
+advantages. Read the following section to understand more about both of
+these ways to mount CephFS.
+
+For Windows CephFS mounts, please check the `ceph-dokan`_ page.
+
+Which CephFS Client?
+--------------------
+
+The FUSE client is the most accessible and the easiest to upgrade to the
+version of Ceph used by the storage cluster, while the kernel client generally
+gives better performance.
+
+When encountering bugs or performance issues, it is often instructive to
+try using the other client, in order to find out whether the bug was
+client-specific or not (and then to let the developers know).
+
+General Pre-requisite for Mounting CephFS
+-----------------------------------------
+Before mounting CephFS, ensure that the client host (where CephFS has to be
+mounted and used) has a copy of the Ceph configuration file (i.e.
+``ceph.conf``) and a keyring of the CephX user that has permission to access
+the MDS. Both of these files must already be present on the host where the
+Ceph MON resides.
+
+#. Generate a minimal conf file for the client host and place it at a
+ standard location::
+
+ # on client host
+ mkdir -p -m 755 /etc/ceph
+ ssh {user}@{mon-host} "sudo ceph config generate-minimal-conf" | sudo tee /etc/ceph/ceph.conf
+
+ Alternatively, you may copy the conf file. But the above method generates
+ a conf with minimal details which is usually sufficient. For more
+ information, see `Client Authentication`_ and :ref:`bootstrap-options`.
+
+#. Ensure that the conf has appropriate permissions::
+
+ chmod 644 /etc/ceph/ceph.conf
+
+#. Create a CephX user and get its secret key::
+
+ ssh {user}@{mon-host} "sudo ceph fs authorize cephfs client.foo / rw" | sudo tee /etc/ceph/ceph.client.foo.keyring
+
+ In the above command, replace ``cephfs`` with the name of your CephFS,
+ ``foo`` with the name you want for your CephX user, and ``/`` with the path
+ within your CephFS to which you want to allow the client host access;
+ ``rw`` grants both read and write permissions. Alternatively, you may copy
+ the Ceph keyring from the MON host to the client host at ``/etc/ceph``, but
+ creating a keyring specific to the client host is better. When creating a
+ CephX keyring/client, using the same client name across multiple machines
+ is perfectly fine.
+
+ .. note:: If you get two password prompts while running either of the two
+ commands above, run ``sudo ls`` (or any other trivial command with
+ sudo) immediately before these commands.
+
+#. Ensure that the keyring has appropriate permissions::
+
+ chmod 600 /etc/ceph/ceph.client.foo.keyring
+
+.. note:: There might be a few more prerequisites specific to kernel and FUSE
+ mounts; please check the respective mount documents.
+
+.. _Client Authentication: ../client-auth
+.. _cephfs-shell: ../cephfs-shell
+.. _using kernel: ../mount-using-kernel-driver
+.. _using FUSE: ../mount-using-fuse
+.. _ceph-dokan: ../ceph-dokan
diff --git a/doc/cephfs/mount-using-fuse.rst b/doc/cephfs/mount-using-fuse.rst
new file mode 100644
index 000000000..f3ac054c9
--- /dev/null
+++ b/doc/cephfs/mount-using-fuse.rst
@@ -0,0 +1,103 @@
+========================
+ Mount CephFS using FUSE
+========================
+
+`ceph-fuse`_ is an alternate way of mounting CephFS that runs in userspace.
+Therefore, FUSE performance can be relatively lower, but FUSE clients can be
+more manageable, especially while upgrading CephFS.
+
+Prerequisites
+=============
+
+Go through the prerequisites required by both kernel and FUSE mounts, on the
+`Mount CephFS: Prerequisites`_ page.
+
+.. note:: Mounting CephFS using FUSE requires superuser privileges, because
+ the client trims dentries by issuing a remount of itself.
+
+Synopsis
+========
+In general, the command to mount CephFS via FUSE looks like this::
+
+ ceph-fuse {mountpoint} {options}
+
+Mounting CephFS
+===============
+To FUSE-mount the Ceph file system, use the ``ceph-fuse`` command::
+
+ mkdir /mnt/mycephfs
+ ceph-fuse --id foo /mnt/mycephfs
+
+Option ``--id`` passes the name of the CephX user whose keyring we intend to
+use for mounting CephFS. In the above command, it's ``foo``. You can also use
+``-n`` instead, although ``--id`` is evidently easier::
+
+ ceph-fuse -n client.foo /mnt/mycephfs
+
+In case the keyring is not present in standard locations, you may pass it
+too::
+
+ ceph-fuse --id foo -k /path/to/keyring /mnt/mycephfs
+
+You may pass the MON's socket too, although this is not mandatory::
+
+ ceph-fuse --id foo -m 192.168.0.1:6789 /mnt/mycephfs
+
+You can also mount a specific directory within CephFS instead of mounting
+root of CephFS on your local FS::
+
+ ceph-fuse --id foo -r /path/to/dir /mnt/mycephfs
+
+If you have more than one FS on your Ceph cluster, use the option
+``--client_fs`` to mount the non-default FS::
+
+ ceph-fuse --id foo --client_fs mycephfs2 /mnt/mycephfs2
+
+You may also add a ``client_fs`` setting to your ``ceph.conf``. Alternatively, the option
+``--client_mds_namespace`` is supported for backward compatibility.
+
+Unmounting CephFS
+=================
+
+Use ``umount`` to unmount CephFS like any other FS::
+
+ umount /mnt/mycephfs
+
+.. tip:: Ensure that you are not within the file system directories before
+ executing this command.
+
+Persistent Mounts
+=================
+
+To mount CephFS as a file system in user space, add the following to ``/etc/fstab``::
+
+ #DEVICE PATH TYPE OPTIONS
+ none /mnt/mycephfs fuse.ceph ceph.id={user-ID}[,ceph.conf={path/to/conf.conf}],_netdev,defaults 0 0
+
+For example::
+
+ none /mnt/mycephfs fuse.ceph ceph.id=myuser,_netdev,defaults 0 0
+ none /mnt/mycephfs fuse.ceph ceph.id=myuser,ceph.conf=/etc/ceph/foo.conf,_netdev,defaults 0 0
+
+Ensure you use the ID (e.g., ``myuser``, not ``client.myuser``). You can pass
+any valid ``ceph-fuse`` option to the command line this way.
+
+To mount a subdirectory of the CephFS, add the following to ``/etc/fstab``::
+
+ none /mnt/mycephfs fuse.ceph ceph.id=myuser,ceph.client_mountpoint=/path/to/dir,_netdev,defaults 0 0
+
+``ceph-fuse@.service`` and ``ceph-fuse.target`` systemd units are available.
+As usual, these unit files declare the default dependencies and recommended
+execution context for ``ceph-fuse``. After making the fstab entry shown above,
+run the following commands::
+
+ systemctl start ceph-fuse@/mnt/mycephfs.service
+ systemctl enable ceph-fuse.target
+ systemctl enable ceph-fuse@-mnt-mycephfs.service
+
+See :ref:`User Management <user-management>` for details on CephX user management and `ceph-fuse`_
+manual for more options it can take. For troubleshooting, see
+:ref:`ceph_fuse_debugging`.
+
+.. _ceph-fuse: ../../man/8/ceph-fuse/#options
+.. _Mount CephFS\: Prerequisites: ../mount-prerequisites
diff --git a/doc/cephfs/mount-using-kernel-driver.rst b/doc/cephfs/mount-using-kernel-driver.rst
new file mode 100644
index 000000000..9d9a4a683
--- /dev/null
+++ b/doc/cephfs/mount-using-kernel-driver.rst
@@ -0,0 +1,159 @@
+=================================
+ Mount CephFS using Kernel Driver
+=================================
+
+The CephFS kernel driver is part of the Linux kernel. It allows mounting
+CephFS as a regular file system with native kernel performance. It is the
+client of choice for most use-cases.
+
+.. note:: CephFS mount device string now uses a new (v2) syntax. The mount
+ helper (and the kernel) is backward compatible with the old syntax.
+ This means that the old syntax can still be used for mounting with
+ newer mount helpers and kernel. However, it is recommended to use
+ the new syntax whenever possible.
+
+Prerequisites
+=============
+
+Complete General Prerequisites
+------------------------------
+Go through the prerequisites required by both kernel and FUSE mounts, on the
+`Mount CephFS: Prerequisites`_ page.
+
+Is the mount helper present?
+----------------------------
+The ``mount.ceph`` helper is installed by Ceph packages. The helper passes the
+monitor address(es) and CephX user keyring automatically, saving the Ceph
+admin the effort of passing these details explicitly while mounting CephFS. If
+the helper is not present on the client machine, CephFS can still be mounted
+with the kernel driver by passing these details explicitly to the ``mount``
+command. To check whether the helper is present on your system, run::
+
+ stat /sbin/mount.ceph
+
+Which Kernel Version?
+---------------------
+
+Because the kernel client is distributed as part of the linux kernel (not
+as part of packaged ceph releases), you will need to consider which kernel
+version to use on your client nodes. Older kernels are known to include buggy
+ceph clients, and may not support features that more recent Ceph clusters
+support.
+
+Remember that the "latest" kernel in a stable linux distribution is likely
+to be years behind the latest upstream linux kernel where Ceph development
+takes place (including bug fixes).
+
+As a rough guide, as of Ceph 10.x (Jewel), you should be using at least a 4.x
+kernel. If you absolutely have to use an older kernel, you should use the
+fuse client instead of the kernel client.
+
+This advice does not apply if you are using a linux distribution that
+includes CephFS support, as in this case the distributor will be responsible
+for backporting fixes to their stable kernel: check with your vendor.
+
+Synopsis
+========
+In general, the command to mount CephFS via kernel driver looks like this::
+
+ mount -t ceph {device-string}={path-to-mounted} {mount-point} -o {key-value-args} {other-args}
+
+Mounting CephFS
+===============
+On Ceph clusters, CephX is enabled by default. Use ``mount`` command to
+mount CephFS with the kernel driver::
+
+ mkdir /mnt/mycephfs
+ mount -t ceph <name>@<fsid>.<fs_name>=/ /mnt/mycephfs
+
+``name`` is the username of the CephX user we are using to mount CephFS.
+``fsid`` is the FSID of the ceph cluster which can be found using
+``ceph fsid`` command. ``fs_name`` is the file system to mount. The kernel
+driver requires MON's socket and the secret key for the CephX user, e.g.::
+
+ mount -t ceph cephuser@b3acfc0d-575f-41d3-9c91-0e7ed3dbb3fa.cephfs=/ -o mon_addr=192.168.0.1:6789,secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==
+
+When using the mount helper, monitor hosts and FSID are optional. The
+``mount.ceph`` helper figures out these details automatically by finding and
+reading the Ceph conf file, e.g.::
+
+ mount -t ceph cephuser@.cephfs=/ -o secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==
+
+.. note:: The dot (``.``) still needs to be a part of the device string.
+
+A potential problem with the above command is that the secret key is left in your
+shell's command history. To prevent this, copy the secret key into a file and
+pass the file path using the option ``secretfile`` instead of ``secret``::
+
+ mount -t ceph cephuser@.cephfs=/ /mnt/mycephfs -o secretfile=/etc/ceph/cephuser.secret
+
+Ensure the permissions on the secret key file are appropriate (preferably, ``600``).
+
+Multiple monitor hosts can be passed by separating each address with a ``/``::
+
+ mount -t ceph cephuser@.cephfs=/ /mnt/mycephfs -o mon_addr=192.168.0.1:6789/192.168.0.2:6789,secretfile=/etc/ceph/cephuser.secret
+
+In case CephX is disabled, you can omit any credential related options::
+
+ mount -t ceph cephuser@.cephfs=/ /mnt/mycephfs
+
+.. note:: The ceph user name still needs to be passed as part of the device string.
+
+To mount a subtree of the CephFS root, append the path to the device string::
+
+ mount -t ceph cephuser@.cephfs=/subvolume/dir1/dir2 /mnt/mycephfs -o secretfile=/etc/ceph/cephuser.secret
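The new-style device string used above has a fixed shape: ``<name>@<fsid>.<fs_name>=<path>``. As a hedged illustration (this helper is hypothetical and not part of Ceph or ``mount.ceph``), assembling such a string might look like this:

```python
def cephfs_device_string(user, fs_name, path="/", fsid=""):
    """Assemble a v2 CephFS mount device string:
    <name>@<fsid>.<fs_name>=<path>.

    When the mount.ceph helper is available, fsid may be left empty,
    but the dot separator must still be present.
    """
    if not path.startswith("/"):
        raise ValueError("path must be absolute within the file system")
    return f"{user}@{fsid}.{fs_name}={path}"

print(cephfs_device_string("cephuser", "cephfs"))
# cephuser@.cephfs=/
print(cephfs_device_string("cephuser", "cephfs", "/subvolume/dir1/dir2"))
# cephuser@.cephfs=/subvolume/dir1/dir2
```

Note how the empty ``fsid`` still leaves the dot in place, matching the note above.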
+
+Backward Compatibility
+======================
+The old syntax is supported for backward compatibility.
+
+To mount CephFS with the kernel driver::
+
+ mkdir /mnt/mycephfs
+ mount -t ceph :/ /mnt/mycephfs -o name=admin
+
+The key-value argument right after option ``-o`` is the CephX credential;
+``name`` is the username of the CephX user we are using to mount CephFS.
+
+To mount a non-default FS ``cephfs2``, in case the cluster has multiple FSs::
+
+ mount -t ceph :/ /mnt/mycephfs -o name=admin,fs=cephfs2
+
+ or
+
+ mount -t ceph :/ /mnt/mycephfs -o name=admin,mds_namespace=cephfs2
+
+.. note:: The option ``mds_namespace`` is deprecated. Use ``fs=`` instead when using the old syntax for mounting.
+
+Unmounting CephFS
+=================
+To unmount the Ceph file system, use the ``umount`` command as usual::
+
+ umount /mnt/mycephfs
+
+.. tip:: Ensure that you are not within the file system directories before
+ executing this command.
+
+Persistent Mounts
+==================
+
+To mount CephFS in your file systems table as a kernel driver, add the
+following to ``/etc/fstab``::
+
+ {name}@.{fs_name}=/ {mount}/{mountpoint} ceph [mon_addr={ipaddress},secret=secretkey|secretfile=/path/to/secretfile],[{mount.options}] {fs_freq} {fs_passno}
+
+For example::
+
+ cephuser@.cephfs=/ /mnt/ceph ceph mon_addr=192.168.0.1:6789,noatime,_netdev 0 0
+
+If the ``secret`` or ``secretfile`` options are not specified then the mount helper
+will attempt to find a secret for the given ``name`` in one of the configured keyrings.
+
+See `User Management`_ for details on CephX user management and mount.ceph_
+manual for more options it can take. For troubleshooting, see
+:ref:`kernel_mount_debugging`.
+
+.. _fstab: ../fstab/#kernel-driver
+.. _Mount CephFS\: Prerequisites: ../mount-prerequisites
+.. _mount.ceph: ../../man/8/mount.ceph/
+.. _User Management: ../../rados/operations/user-management/
diff --git a/doc/cephfs/multifs.rst b/doc/cephfs/multifs.rst
new file mode 100644
index 000000000..2dcba7ae0
--- /dev/null
+++ b/doc/cephfs/multifs.rst
@@ -0,0 +1,54 @@
+.. _cephfs-multifs:
+
+Multiple Ceph File Systems
+==========================
+
+
+Beginning with the Pacific release, multiple file system support is stable
+and ready to use. This functionality allows configuring separate file systems
+with full data separation on separate pools.
+
+Existing clusters must set a flag to enable multiple file systems::
+
+ ceph fs flag set enable_multiple true
+
+New Ceph clusters automatically set this.
+
+
+Creating a new Ceph File System
+-------------------------------
+
+The new ``volumes`` plugin interface (see: :doc:`/cephfs/fs-volumes`) automates
+most of the work of configuring a new file system. The "volume" concept is
+simply a new file system. This can be done via::
+
+ ceph fs volume create <fs_name>
+
+Ceph will create the new pools and automate the deployment of new MDS daemons
+to support the new file system. The deployment technology used, e.g. cephadm,
+will also configure the MDS affinity (see: :ref:`mds-join-fs`) of new MDS
+daemons to operate the new file system.
+
+
+Securing access
+---------------
+
+The ``fs authorize`` command allows configuring the client's access to a
+particular file system. See also in :ref:`fs-authorize-multifs`. The client will
+only have visibility of authorized file systems and the Monitors/MDS will
+reject access to clients without authorization.
+
+
+Other Notes
+-----------
+
+Multiple file systems do not share pools. This is particularly important for
+snapshots but also because no measures are in place to prevent duplicate
+inodes. The Ceph commands prevent this dangerous configuration.
+
+Each file system has its own set of MDS ranks. Consequently, each new file
+system requires more MDS daemons to operate and increases operational costs.
+This can be useful for increasing metadata throughput by application or user
+base but also adds cost to the creation of a file system. Generally, a single
+file system with subtree pinning is a better choice for isolating load between
+applications.
diff --git a/doc/cephfs/multimds.rst b/doc/cephfs/multimds.rst
new file mode 100644
index 000000000..e50a5148e
--- /dev/null
+++ b/doc/cephfs/multimds.rst
@@ -0,0 +1,286 @@
+.. _cephfs-multimds:
+
+Configuring multiple active MDS daemons
+---------------------------------------
+
+*Also known as: multi-mds, active-active MDS*
+
+Each CephFS file system is configured for a single active MDS daemon
+by default. To scale metadata performance for large scale systems, you
+may enable multiple active MDS daemons, which will share the metadata
+workload with one another.
+
+When should I use multiple active MDS daemons?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You should configure multiple active MDS daemons when your metadata performance
+is bottlenecked on the single MDS that runs by default.
+
+Adding more daemons may not increase performance on all workloads. Typically,
+a single application running on a single client will not benefit from an
+increased number of MDS daemons unless the application is doing a lot of
+metadata operations in parallel.
+
+Workloads that typically benefit from a larger number of active MDS daemons
+are those with many clients, perhaps working on many separate directories.
+
+
+Increasing the MDS active cluster size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Each CephFS file system has a *max_mds* setting, which controls how many ranks
+will be created. The actual number of ranks in the file system will only be
+increased if a spare daemon is available to take on the new rank. For example,
+if there is only one MDS daemon running, and max_mds is set to two, no second
+rank will be created. (Note that such a configuration is not Highly Available
+(HA) because no standby is available to take over for a failed rank. The
+cluster will complain via health warnings when configured this way.)
+
+Set ``max_mds`` to the desired number of ranks. In the following examples
+the "fsmap" line of "ceph status" is shown to illustrate the expected
+result of commands.
+
+::
+
+ # fsmap e5: 1/1/1 up {0=a=up:active}, 2 up:standby
+
+ ceph fs set <fs_name> max_mds 2
+
+ # fsmap e8: 2/2/2 up {0=a=up:active,1=c=up:creating}, 1 up:standby
+ # fsmap e9: 2/2/2 up {0=a=up:active,1=c=up:active}, 1 up:standby
+
+The newly created rank (1) will pass through the 'creating' state
+and then enter the 'active' state.
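The ``fsmap`` summary lines shown in these examples have a regular shape, so the rank/daemon/state triples can be extracted programmatically. The following is an illustrative sketch only, not an official Ceph parser:

```python
import re

def parse_fsmap_ranks(fsmap_line):
    """Extract {rank: (daemon_name, state)} from a "ceph status" fsmap line,
    e.g. "fsmap e9: 2/2/2 up {0=a=up:active,1=c=up:active}, 1 up:standby".
    """
    match = re.search(r"\{([^}]*)\}", fsmap_line)
    if not match or not match.group(1):
        return {}  # no ranks are up
    ranks = {}
    for entry in match.group(1).split(","):
        rank, name, state = entry.split("=", 2)
        ranks[int(rank)] = (name, state)
    return ranks

line = "fsmap e8: 2/2/2 up {0=a=up:active,1=c=up:creating}, 1 up:creating"
print(parse_fsmap_ranks(line))
```

This kind of parsing is handy in monitoring scripts that want to alert when a rank stays in a transient state too long.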
+
+Standby daemons
+~~~~~~~~~~~~~~~
+
+Even with multiple active MDS daemons, a highly available system **still
+requires standby daemons** to take over if any of the servers running
+an active daemon fail.
+
+Consequently, the practical maximum of ``max_mds`` for highly available systems
+is at most one less than the total number of MDS servers in your system.
+
+To remain available in the event of multiple server failures, increase the
+number of standby daemons in the system to match the number of server failures
+you wish to withstand.
+
+Decreasing the number of ranks
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Reducing the number of ranks is as simple as reducing ``max_mds``:
+
+::
+
+ # fsmap e9: 2/2/2 up {0=a=up:active,1=c=up:active}, 1 up:standby
+ ceph fs set <fs_name> max_mds 1
+ # fsmap e10: 2/2/1 up {0=a=up:active,1=c=up:stopping}, 1 up:standby
+ # fsmap e10: 2/2/1 up {0=a=up:active,1=c=up:stopping}, 1 up:standby
+ ...
+ # fsmap e10: 1/1/1 up {0=a=up:active}, 2 up:standby
+
+The cluster will automatically stop extra ranks incrementally until ``max_mds``
+is reached.
+
+See :doc:`/cephfs/administration` for more details on which forms ``<role>``
+can take.
+
+Note: a stopped rank will first enter the stopping state for a period of
+time while it hands off its share of the metadata to the remaining active
+daemons. This phase can take from seconds to minutes. If the MDS appears to
+be stuck in the stopping state then that should be investigated as a possible
+bug.
+
+If an MDS daemon crashes or is killed while in the ``up:stopping`` state, a
+standby will take over and the cluster monitors will again try to stop
+the daemon.
+
+When a daemon finishes stopping, it will respawn itself and go back to being a
+standby.
+
+
+.. _cephfs-pinning:
+
+Manually pinning directory trees to a particular rank
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In multiple active metadata server configurations, a balancer runs which works
+to spread metadata load evenly across the cluster. This usually works well
+enough for most users but sometimes it is desirable to override the dynamic
+balancer with explicit mappings of metadata to particular ranks. This can allow
+the administrator or users to evenly spread application load or limit impact of
+users' metadata requests on the entire cluster.
+
+The mechanism provided for this purpose is called an ``export pin``, an
+extended attribute of directories. The name of this extended attribute is
+``ceph.dir.pin``. Users can set this attribute using standard commands:
+
+::
+
+ setfattr -n ceph.dir.pin -v 2 path/to/dir
+
+The value of the extended attribute is the rank to assign the directory subtree
+to. A default value of ``-1`` indicates the directory is not pinned.
+
+A directory's export pin is inherited from its closest parent with a set export
+pin. In this way, setting the export pin on a directory affects all of its
+children. However, the parent's pin can be overridden by setting the child
+directory's export pin. For example:
+
+::
+
+ mkdir -p a/b
+ # "a" and "a/b" both start without an export pin set
+ setfattr -n ceph.dir.pin -v 1 a/
+ # a and b are now pinned to rank 1
+ setfattr -n ceph.dir.pin -v 0 a/b
+ # a/b is now pinned to rank 0 and a/ and the rest of its children are still pinned to rank 1
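The closest-parent rule illustrated above can be modeled in a few lines. This is a hypothetical sketch for illustration only; the MDS implements inheritance internally on inodes, not on path strings:

```python
def effective_pin(path, pins):
    """Resolve a directory's effective export pin by walking up to the
    closest ancestor (or the path itself) that carries an explicit pin.

    `pins` maps directory paths to ranks; -1 means "not pinned".
    """
    parts = path.strip("/").split("/")
    # Check the path itself first, then each ancestor, nearest first.
    for i in range(len(parts), 0, -1):
        candidate = "/".join(parts[:i])
        if pins.get(candidate, -1) != -1:
            return pins[candidate]
    return -1  # no ancestor pinned: placement is left to the balancer

# Mirrors the example above: "a" pinned to rank 1, "a/b" overridden to rank 0.
pins = {"a": 1, "a/b": 0}
print(effective_pin("a/c", pins))    # 1 (inherited from "a")
print(effective_pin("a/b/d", pins))  # 0 (closest parent is "a/b")
```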
+
+
+.. _cephfs-ephemeral-pinning:
+
+Setting subtree partitioning policies
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It is also possible to setup **automatic** static partitioning of subtrees via
+a set of **policies**. In CephFS, this automatic static partitioning is
+referred to as **ephemeral pinning**. Any directory (inode) which is
+ephemerally pinned will be automatically assigned to a particular rank
+according to a consistent hash of its inode number. The set of all
+ephemerally pinned directories should be uniformly distributed across all
+ranks.
+
+Ephemerally pinned directories are so named because the pin may not persist
+once the directory inode is dropped from cache. However, an MDS failover does
+not affect the ephemeral nature of the pinned directory. The MDS records what
+subtrees are ephemerally pinned in its journal so MDS failovers do not drop
+this information.
+
+A directory is either ephemerally pinned or not. Which rank it is pinned to is
+derived from its inode number and a consistent hash. This means that
+ephemerally pinned directories are somewhat evenly spread across the MDS
+cluster. The **consistent hash** also minimizes redistribution when the MDS
+cluster grows or shrinks. So, growing an MDS cluster may automatically increase
+your metadata throughput with no other administrative intervention.
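To make the consistent-hash idea concrete, here is a hedged sketch using rendezvous (highest-random-weight) hashing, one standard way to obtain the property that growing the cluster moves only a small fraction of pins. The hash actually used by the MDS is internal and differs from this illustration:

```python
import hashlib

def ephemeral_rank(inode, num_ranks):
    """Pick a rank for an inode with rendezvous (highest-random-weight)
    hashing: each (inode, rank) pair gets a pseudo-random weight, and the
    rank with the highest weight wins. Adding one rank then moves only
    about 1/num_ranks of the pinned directories.
    """
    def weight(rank):
        digest = hashlib.sha256(f"{inode}:{rank}".encode()).hexdigest()
        return int(digest, 16)
    return max(range(num_ranks), key=weight)

# Growing the cluster from 3 to 4 ranks relocates only a fraction of pins.
moved = sum(1 for ino in range(1000)
            if ephemeral_rank(ino, 3) != ephemeral_rank(ino, 4))
print(f"{moved} of 1000 directories moved")  # roughly a quarter
```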
+
+Presently, there are two types of ephemeral pinning:
+
+**Distributed Ephemeral Pins**: This policy causes a directory to fragment
+(even well below the normal fragmentation thresholds) and distribute its
+fragments as ephemerally pinned subtrees. This has the effect of distributing
+immediate children across a range of MDS ranks. The canonical example use-case
+would be the ``/home`` directory: we want every user's home directory to be
+spread across the entire MDS cluster. This can be set via:
+
+::
+
+ setfattr -n ceph.dir.pin.distributed -v 1 /cephfs/home
+
+
+**Random Ephemeral Pins**: This policy indicates any descendent sub-directory
+may be ephemerally pinned. This is set through the extended attribute
+``ceph.dir.pin.random`` with the value set to the percentage of directories
+that should be pinned. For example:
+
+::
+
+ setfattr -n ceph.dir.pin.random -v 0.5 /cephfs/tmp
+
+This would cause any directory loaded into cache or created under ``/tmp``
+to be ephemerally pinned 50 percent of the time.
+
+It is recommended to only set this to small values, like ``.001`` (i.e. 0.1%).
+Having too many subtrees may degrade performance. For this reason, the config
+``mds_export_ephemeral_random_max`` enforces a cap on the maximum of this
+percentage (default: ``.01``). The MDS returns ``EINVAL`` when attempting to
+set a value beyond this config.
+
+Both random and distributed ephemeral pin policies are off by default in
+Octopus. The features may be enabled via the
+``mds_export_ephemeral_random`` and ``mds_export_ephemeral_distributed``
+configuration options.
+
+Ephemeral pins may override parent export pins and vice versa. What determines
+which policy is followed is the rule of the closest parent: if a closer parent
+directory has a conflicting policy, use that one instead. For example:
+
+::
+
+ mkdir -p foo/bar1/baz foo/bar2
+ setfattr -n ceph.dir.pin -v 0 foo
+ setfattr -n ceph.dir.pin.distributed -v 1 foo/bar1
+
+The ``foo/bar1/baz`` directory will be ephemerally pinned because the
+``foo/bar1`` policy overrides the export pin on ``foo``. The ``foo/bar2``
+directory will obey the pin on ``foo`` normally.
+
+For the reverse situation:
+
+::
+
+ mkdir -p home/{patrick,john}
+ setfattr -n ceph.dir.pin.distributed -v 1 home
+ setfattr -n ceph.dir.pin -v 2 home/patrick
+
+The ``home/patrick`` directory and its children will be pinned to rank 2
+because its export pin overrides the policy on ``home``.
+
+To remove a partitioning policy, remove the respective extended attribute
+or set the value to 0.
+
+.. code:: bash
+
+ $ setfattr -n ceph.dir.pin.distributed -v 0 home
+ # or
+ $ setfattr -x ceph.dir.pin.distributed home
+
+For export pins, remove the extended attribute or set the extended attribute
+value to ``-1``.
+
+.. code:: bash
+
+ $ setfattr -n ceph.dir.pin -v -1 home
+
+
+Dynamic subtree partitioning with Balancer on specific ranks
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The CephFS file system provides the ``bal_rank_mask`` option to enable the balancer
+to dynamically rebalance subtrees within particular active MDS ranks. This
+allows administrators to employ both the dynamic subtree partitioning and
+static pinning schemes in different active MDS ranks so that metadata loads
+are optimized based on user demand. For instance, in realistic cloud
+storage environments, where a lot of subvolumes are allotted to multiple
+computing nodes (e.g., VMs and containers), some subvolumes that require
+high performance are managed by static partitioning, whereas most subvolumes
+that experience a moderate workload are managed by the balancer. As the balancer
+evenly spreads the metadata workload to all active MDS ranks, performance of
+statically pinned subvolumes may inevitably be affected or degraded. If this
+option is enabled, subtrees managed by the balancer are not affected by
+statically pinned subtrees.
+
+This option can be configured with the ``ceph fs set`` command. For example:
+
+::
+
+ ceph fs set <fs_name> bal_rank_mask <hex>
+
+Each bitfield of the ``<hex>`` number represents a dedicated rank. If the ``<hex>`` is
+set to ``0x3``, the balancer runs on active ``0`` and ``1`` ranks. For example:
+
+::
+
+ ceph fs set <fs_name> bal_rank_mask 0x3
+
+If the ``bal_rank_mask`` is set to ``-1`` or ``all``, all active ranks are masked
+and utilized by the balancer. As an example:
+
+::
+
+ ceph fs set <fs_name> bal_rank_mask -1
+
+On the other hand, if the balancer needs to be disabled,
+the ``bal_rank_mask`` should be set to ``0x0``. For example:
+
+::
+
+ ceph fs set <fs_name> bal_rank_mask 0x0
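As a hedged illustration of how such a hex mask maps onto ranks (a sketch for explanation only, not Ceph code), each set bit of the mask selects one rank for the balancer:

```python
def ranks_in_mask(mask, max_mds):
    """Decode a bal_rank_mask-style value into the set of active ranks
    the balancer may manage: bit r of the mask selects rank r, while
    "-1" or "all" select every active rank.
    """
    if mask in ("-1", "all"):
        return set(range(max_mds))
    bits = int(mask, 16)
    return {rank for rank in range(max_mds) if bits & (1 << rank)}

print(ranks_in_mask("0x3", 4))  # {0, 1}: balancer runs on ranks 0 and 1
print(ranks_in_mask("0x0", 4))  # set(): balancer disabled
```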
diff --git a/doc/cephfs/nfs.rst b/doc/cephfs/nfs.rst
new file mode 100644
index 000000000..75e804e20
--- /dev/null
+++ b/doc/cephfs/nfs.rst
@@ -0,0 +1,98 @@
+.. _cephfs-nfs:
+
+===
+NFS
+===
+
+CephFS namespaces can be exported over NFS protocol using the `NFS-Ganesha NFS
+server`_. This document provides information on configuring NFS-Ganesha
+clusters manually. The simplest and preferred way of managing NFS-Ganesha
+clusters and CephFS exports is using the ``ceph nfs ...`` commands, as the
+deployment is then handled by cephadm or rook. See :doc:`/mgr/nfs` for more
+details.
+
+Requirements
+============
+
+- Ceph file system
+- ``libcephfs2``, ``nfs-ganesha`` and ``nfs-ganesha-ceph`` packages on NFS
+ server host machine.
+- NFS-Ganesha server host connected to the Ceph public network
+
+.. note::
+ It is recommended to use the 3.5 or later stable version of NFS-Ganesha
+ packages with the Pacific (16.2.x) or later stable version of Ceph packages.
+
+Configuring NFS-Ganesha to export CephFS
+========================================
+
+NFS-Ganesha provides a File System Abstraction Layer (FSAL) to plug in
+different storage backends. FSAL_CEPH_ is the plugin FSAL for CephFS. For
+each NFS-Ganesha export, FSAL_CEPH_ uses a libcephfs client to mount the
+CephFS path that NFS-Ganesha exports.
+
+Setting up NFS-Ganesha with CephFS involves setting up NFS-Ganesha's and
+Ceph's configuration files and the CephX access credentials for the Ceph
+clients created by NFS-Ganesha to access CephFS.
+
+NFS-Ganesha configuration
+-------------------------
+
+Here's a `sample ganesha.conf`_ configured with FSAL_CEPH_. It is suitable
+for a standalone NFS-Ganesha server, or an active/passive configuration of
+NFS-Ganesha servers, to be managed by some sort of clustering software
+(e.g., Pacemaker). Important details about the options are added as comments
+in the sample conf. There are options to do the following:
+
+- minimize Ganesha caching wherever possible since the libcephfs clients
+ (of FSAL_CEPH_) also cache aggressively
+
+- read from Ganesha config files stored in RADOS objects
+
+- store client recovery data in the RADOS OMAP key-value interface
+
+- mandate NFSv4.1+ access
+
+- enable read delegations (need at least v13.0.1 ``libcephfs2`` package
+ and v2.6.0 stable ``nfs-ganesha`` and ``nfs-ganesha-ceph`` packages)
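+
+Putting these together, a minimal sketch of such a configuration might look
+like the following. This is illustrative only, and the export path, pseudo
+path, cephx user id, and RADOS pool/namespace are placeholder values; see the
+`sample ganesha.conf`_ for the complete, authoritative version::
+
+    NFS_CORE_PARAM {
+        # disable NFSv3 side protocols and allow only NFSv4
+        Enable_NLM = false;
+        Enable_RQUOTA = false;
+        Protocols = 4;
+    }
+
+    NFSv4 {
+        # store client recovery data in RADOS
+        RecoveryBackend = rados_ng;
+        # mandate NFSv4.1+
+        Minor_Versions = 1, 2;
+    }
+
+    MDCACHE {
+        # minimize Ganesha caching; the libcephfs clients cache aggressively
+        Dir_Chunk = 0;
+    }
+
+    EXPORT {
+        Export_ID = 100;
+        Path = "/";
+        Pseudo = "/cephfs";
+        Access_Type = RW;
+        Squash = No_Root_Squash;
+        FSAL {
+            Name = CEPH;
+            User_Id = "ganesha";
+        }
+    }
+
+    RADOS_KV {
+        # pool and namespace holding the client recovery data
+        pool = "cephfs_metadata";
+        namespace = "ganesha";
+    }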
+
+.. important::
+
+ Under certain conditions, NFS access using the CephFS FSAL fails. This
+ causes an error to be thrown that reads "Input/output error". Under these
+ circumstances, the application metadata must be set for the CephFS metadata
+ and CephFS data pools. Do this by running the following command:
+
+ .. prompt:: bash $
+
+ ceph osd pool application set <cephfs_metadata_pool> cephfs <cephfs_data_pool> cephfs
+
+
+Configuration for libcephfs clients
+-----------------------------------
+
+The ``ceph.conf`` for libcephfs clients includes a ``[client]`` section with
+the ``mon_host`` option set so that the clients can connect to the Ceph
+cluster's monitors. It is usually generated via ``ceph config generate-minimal-conf``.
+For example::
+
+ [client]
+ mon host = [v2:192.168.1.7:3300,v1:192.168.1.7:6789], [v2:192.168.1.8:3300,v1:192.168.1.8:6789], [v2:192.168.1.9:3300,v1:192.168.1.9:6789]
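+
+The CephX credentials for these clients can be created with ``ceph fs
+authorize`` and placed in a keyring file readable by NFS-Ganesha. A sketch,
+assuming a file system named ``cephfs`` and a client name ``client.ganesha``
+(both placeholders)::
+
+    ceph fs authorize cephfs client.ganesha / rw
+    ceph auth get client.ganesha > /etc/ceph/ceph.client.ganesha.keyring
+
+The ``keyring`` option in the ``[client]`` section can then point at this
+file.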
+
+Mount using NFSv4 clients
+=========================
+
+It is preferred to mount the NFS-Ganesha exports using NFSv4.1+ protocols
+to get the benefit of sessions.
+
+Conventions for mounting NFS resources are platform-specific. The
+following conventions work on Linux and some Unix platforms:
+
+.. code:: bash
+
+ mount -t nfs -o nfsvers=4.1,proto=tcp <ganesha-host-name>:<ganesha-pseudo-path> <mount-point>
+
+
+.. _FSAL_CEPH: https://github.com/nfs-ganesha/nfs-ganesha/tree/next/src/FSAL/FSAL_CEPH
+.. _NFS-Ganesha NFS server: https://github.com/nfs-ganesha/nfs-ganesha/wiki
+.. _sample ganesha.conf: https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/ceph.conf
diff --git a/doc/cephfs/posix.rst b/doc/cephfs/posix.rst
new file mode 100644
index 000000000..d79730c6f
--- /dev/null
+++ b/doc/cephfs/posix.rst
@@ -0,0 +1,104 @@
+========================
+ Differences from POSIX
+========================
+
+CephFS aims to adhere to POSIX semantics wherever possible. For
+example, in contrast to many other common network file systems like
+NFS, CephFS maintains strong cache coherency across clients. The goal
+is for processes communicating via the file system to behave the same
+when they are on different hosts as when they are on the same host.
+
+However, there are a few places where CephFS diverges from strict
+POSIX semantics for various reasons:
+
+- If a client is writing to a file and fails, its writes are not
+ necessarily atomic. That is, the client may call write(2) on a file
+ opened with O_SYNC with an 8 MB buffer and then crash and the write
+ may be only partially applied. (Almost all file systems, even local
+ file systems, have this behavior.)
+- In shared simultaneous writer situations, a write that crosses
+ object boundaries is not necessarily atomic. This means that you
+ could have writer A write "aa|aa" and writer B write "bb|bb"
+ simultaneously (where | is the object boundary), and end up with
+ "aa|bb" rather than the proper "aa|aa" or "bb|bb".
+- Sparse files propagate incorrectly to the stat(2) st_blocks field.
+ Because CephFS does not explicitly track which parts of a file are
+ allocated/written, the st_blocks field is always populated by the
+ file size divided by the block size. This will cause tools like
+ du(1) to overestimate consumed space. (The recursive size field,
+ maintained by CephFS, also includes file "holes" in its count.)
+- When a file is mapped into memory via mmap(2) on multiple hosts,
+ writes are not coherently propagated to other clients' caches. That
+ is, if a page is cached on host A, and then updated on host B, host
+ A's page is not coherently invalidated. (Shared writable mmap
+ appears to be quite rare--we have yet to hear any complaints about this
+ behavior, and implementing cache coherency properly is complex.)
+- CephFS clients present a hidden ``.snap`` directory that is used to
+ access, create, delete, and rename snapshots. Although the virtual
+ directory is excluded from readdir(2), any process that tries to
+ create a file or directory with the same name will get an error
+ code. The name of this hidden directory can be changed at mount
+ time with ``-o snapdirname=.somethingelse`` (Linux) or the config
+ option ``client_snapdir`` (libcephfs, ceph-fuse).
+- CephFS does not currently maintain the ``atime`` field. Most applications
+ do not care, though this impacts some backup and data tiering
+ applications that can move unused data to a secondary storage system.
+ You may be able to work around this for some use cases, as CephFS does
+ support setting ``atime`` via the ``setattr`` operation.
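+
+As an illustration of the last point, a client can explicitly update
+``atime`` with standard tools, since these translate into a ``setattr``
+request to the MDS (a sketch; the mount point and file name are
+placeholders)::
+
+    # explicitly set the access time; this issues a setattr rather than
+    # relying on automatic atime updates, which CephFS does not maintain
+    touch -a -d "2022-01-01 00:00:00" /mnt/cephfs/somefile
+    stat -c %x /mnt/cephfs/somefile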
+
+Perspective
+-----------
+
+People talk a lot about "POSIX compliance," but in reality most file
+system implementations do not strictly adhere to the spec, including
+local Linux file systems like ext4 and XFS. For example, for
+performance reasons, the atomicity requirements for reads are relaxed:
+processes reading from a file that is also being written may see torn
+results.
+
+Similarly, NFS has extremely weak consistency semantics when multiple
+clients are interacting with the same files or directories, opting
+instead for "close-to-open". In the world of network attached
+storage, where most environments use NFS, whether or not the server's
+file system is "fully POSIX" may not be relevant, and whether client
+applications notice depends on whether data is being shared between
+clients or not. NFS may also "tear" the results of concurrent writers
+as client data may not even be flushed to the server until the file is
+closed (and more generally writes will be significantly more
+time-shifted than CephFS, leading to less predictable results).
+
+Regardless, these are all similar enough to POSIX, and applications still work
+most of the time. Many other storage systems (e.g., HDFS) claim to be
+"POSIX-like" but diverge significantly from the standard by dropping support
+for things like in-place file modifications, truncate, or directory renames.
+
+Bottom line
+-----------
+
+CephFS relaxes more than local Linux kernel file systems (for example, writes
+spanning object boundaries may be torn). It relaxes strictly less
+than NFS when it comes to multiclient consistency, and generally less
+than NFS when it comes to write atomicity.
+
+In other words, when it comes to POSIX, ::
+
+ HDFS < NFS < CephFS < {XFS, ext4}
+
+
+fsync() and error reporting
+---------------------------
+
+POSIX is somewhat vague about the state of an inode after fsync reports
+an error. In general, CephFS uses the standard error-reporting
+mechanisms in the client's kernel, and therefore follows the same
+conventions as other file systems.
+
+In modern Linux kernels (v4.17 or later), writeback errors are reported
+once to every file description that is open at the time of the error. In
+addition, unreported errors that occurred before the file description was
+opened will also be returned on fsync.
+
+See `PostgreSQL's summary of fsync() error reporting across operating systems
+<https://wiki.postgresql.org/wiki/Fsync_Errors>`_ and `Matthew Wilcox's
+presentation on Linux IO error handling
+<https://www.youtube.com/watch?v=74c19hwY2oE>`_ for more information.
diff --git a/doc/cephfs/quota.rst b/doc/cephfs/quota.rst
new file mode 100644
index 000000000..0bc56be12
--- /dev/null
+++ b/doc/cephfs/quota.rst
@@ -0,0 +1,111 @@
+CephFS Quotas
+=============
+
+CephFS allows quotas to be set on any directory in the file system. The
+quota can restrict the number of *bytes* or the number of *files*
+stored beneath that point in the directory hierarchy.
+
+Like most other things in CephFS, quotas are configured using virtual
+extended attributes:
+
+ * ``ceph.quota.max_files`` -- file limit
+ * ``ceph.quota.max_bytes`` -- byte limit
+
+If the extended attributes appear on a directory that means a quota is
+configured there. If they are not present then no quota is set on that
+directory (although one may still be configured on a parent directory).
+
+To set a quota, set the extended attribute on a CephFS directory with a
+value::
+
+ setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir # 100 MB
+ setfattr -n ceph.quota.max_files -v 10000 /some/dir # 10,000 files
+
+To view quota limit::
+
+ $ getfattr -n ceph.quota.max_bytes /some/dir
+ # file: some/dir
+ ceph.quota.max_bytes="100000000"
+ $
+ $ getfattr -n ceph.quota.max_files /some/dir
+ # file: some/dir
+ ceph.quota.max_files="10000"
+
+.. note:: Running ``getfattr /some/dir -d -m -`` for a CephFS directory will
+ print none of the CephFS extended attributes. This is because the CephFS
+ kernel and FUSE clients hide this information from the ``listxattr(2)``
+ system call. Instead, a specific CephFS extended attribute can be viewed by
+ running ``getfattr /some/dir -n ceph.<some-xattr>``.
+
+To remove a quota, set the value of the extended attribute to ``0``::
+
+ $ setfattr -n ceph.quota.max_bytes -v 0 /some/dir
+ $ getfattr /some/dir -n ceph.quota.max_bytes
+ some/dir: ceph.quota.max_bytes: No such attribute
+ $
+ $ setfattr -n ceph.quota.max_files -v 0 /some/dir
+ $ getfattr /some/dir -n ceph.quota.max_files
+ some/dir: ceph.quota.max_files: No such attribute
+
+Space Usage Reporting and CephFS Quotas
+---------------------------------------
+When the root directory of the CephFS mount has a quota set on it, the
+available space on the CephFS reported by space usage reporting tools (like
+``df``) is based on the quota limit. That is, ``available space = quota limit
+- used space`` instead of ``available space = total space - used space``.
+
+This behaviour can be disabled by setting the following option in the client
+section of ``ceph.conf``::
+
+ client quota df = false
+
+Limitations
+-----------
+
+#. *Quotas are cooperative and non-adversarial.* CephFS quotas rely on
+ the cooperation of the client who is mounting the file system to
+ stop writers when a limit is reached. A modified or adversarial
+ client cannot be prevented from writing as much data as it needs.
+ Quotas should not be relied on to prevent filling the system in
+ environments where the clients are fully untrusted.
+
+#. *Quotas are imprecise.* Processes that are writing to the file
+ system will be stopped a short time after the quota limit is
+ reached. They will inevitably be allowed to write some amount of
+ data over the configured limit. How far over the quota they are
+ able to go depends primarily on the amount of time, not the amount
+ of data. Generally speaking, writers will be stopped within tens of
+ seconds of crossing the configured limit.
+
+#. *Quotas are implemented in the kernel client 4.17 and higher.*
+ Quotas are supported by the userspace client (libcephfs, ceph-fuse).
+ Linux kernel clients >= 4.17 support CephFS quotas, but only on
+ mimic+ clusters. Kernel clients (even recent versions) will fail
+ to handle quotas on older clusters, even though they may be able to
+ set the quota extended attributes.
+
+#. *Quotas must be configured carefully when used with path-based
+ mount restrictions.* The client needs to have access to the
+ directory inode on which quotas are configured in order to enforce
+ them. If the client has restricted access to a specific path
+ (e.g., ``/home/user``) based on the MDS capability, and a quota is
+ configured on an ancestor directory they do not have access to
+ (e.g., ``/home``), the client will not enforce it. When using
+ path-based access restrictions, be sure to configure the quota on
+ the directory the client is restricted to (e.g., ``/home/user``)
+ or something nested beneath it.
+
+ In the case of a kernel client, it needs to have access to the parent
+ of the directory inode on which quotas are configured in order to
+ enforce them. If a quota is configured on a directory path
+ (e.g., ``/home/volumes/group``), the kclient needs to have access
+ to the parent (e.g., ``/home/volumes``).
+
+ An example command to create such a user is shown below::
+
+ $ ceph auth get-or-create client.guest mds 'allow r path=/home/volumes, allow rw path=/home/volumes/group' mgr 'allow rw' osd 'allow rw tag cephfs metadata=*' mon 'allow r'
+
+ See also: https://tracker.ceph.com/issues/55090
+
+#. *Snapshot file data which has since been deleted or changed does not count
+ towards the quota.* See also: http://tracker.ceph.com/issues/24284
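+
+The cooperative and imprecise nature of enforcement described above can be
+observed directly. A sketch, assuming a CephFS mount at ``/mnt/cephfs`` (a
+placeholder): set a small quota and write past it; the write is expected to
+eventually fail with ``EDQUOT`` ("Disk quota exceeded"), though some amount
+of data beyond the limit may be written first::
+
+    mkdir /mnt/cephfs/quota_test
+    setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/quota_test
+    # attempt to write 200 MB into a directory with a 100 MB quota;
+    # the client stops the writer shortly after the limit is crossed
+    dd if=/dev/zero of=/mnt/cephfs/quota_test/big bs=1M count=200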
diff --git a/doc/cephfs/recover-fs-after-mon-store-loss.rst b/doc/cephfs/recover-fs-after-mon-store-loss.rst
new file mode 100644
index 000000000..d44269c83
--- /dev/null
+++ b/doc/cephfs/recover-fs-after-mon-store-loss.rst
@@ -0,0 +1,51 @@
+Recovering the file system after catastrophic Monitor store loss
+================================================================
+
+During rare occasions, all the monitor stores of a cluster may get corrupted
+or lost. To recover the cluster in such a scenario, you need to rebuild the
+monitor stores using the OSDs (see :ref:`mon-store-recovery-using-osds`),
+and get back the pools intact (active+clean state). However, the rebuilt monitor
+stores don't restore the file system maps ("FSMap"). Additional steps are required
+to bring back the file system. The steps to recover a multiple active MDS file
+system or multiple file systems are yet to be identified. Currently, only the steps
+to recover a **single active MDS** file system with no additional file systems
+in the cluster have been identified and tested. Briefly the steps are:
+recreate the FSMap with basic defaults; and allow MDSs to recover from
+the journal/metadata stored in the filesystem's pools. The steps are described
+in more detail below.
+
+First, recreate the file system using the recovered file system pools. The
+new FSMap will have the file system's default settings. However, user-defined
+file system settings such as ``standby_count_wanted``, ``required_client_features``,
+extra data pools, etc., are lost and need to be reapplied later.
+
+::
+
+ ceph fs new <fs_name> <metadata_pool> <data_pool> --force --recover
+
+The ``recover`` flag sets the state of the file system's rank 0 to existing
+but failed, so that when an MDS daemon eventually picks up rank 0, the daemon
+reads the existing in-RADOS metadata and doesn't overwrite it. The flag also
+prevents the standby MDS daemons from activating the file system.
+
+The file system cluster ID, fscid, of the file system will not be preserved.
+This behaviour may not be desirable for certain applications (e.g., Ceph CSI)
+that expect the file system to be unchanged across recovery. To fix this, you
+can optionally set the ``fscid`` option in the above command (see
+:ref:`advanced-cephfs-admin-settings`).
+
+Allow standby MDS daemons to join the file system.
+
+::
+
+ ceph fs set <fs_name> joinable true
+
+
+Check that the file system is no longer in degraded state and has an active
+MDS.
+
+::
+
+ ceph fs status
+
+Reapply any other custom file system settings.
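+
+For example, reapplying the settings mentioned above might look like this (a
+sketch; the file system name, feature, count and pool name are
+placeholders)::
+
+    ceph fs set <fs_name> standby_count_wanted <count>
+    ceph fs required_client_features <fs_name> add <feature>
+    ceph fs add_data_pool <fs_name> <extra_data_pool>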
diff --git a/doc/cephfs/scrub.rst b/doc/cephfs/scrub.rst
new file mode 100644
index 000000000..5b813f1c4
--- /dev/null
+++ b/doc/cephfs/scrub.rst
@@ -0,0 +1,156 @@
+.. _mds-scrub:
+
+======================
+Ceph File System Scrub
+======================
+
+CephFS provides the cluster admin (operator) with a set of scrub commands to
+check the consistency of a file system. Scrub can be classified into two parts:
+
+#. Forward Scrub: In which the scrub operation starts at the root of the file system
+ (or a sub directory) and looks at everything that can be touched in the hierarchy
+ to ensure consistency.
+
+#. Backward Scrub: In which the scrub operation looks at every RADOS object in the
+ file system pools and maps it back to the file system hierarchy.
+
+This document details commands to initiate and control forward scrub
+(referred to as scrub hereafter).
+
+.. warning::
+
+ CephFS forward scrubs are started and manipulated on rank 0. All scrub
+ commands must be directed at rank 0.
+
+Initiate File System Scrub
+==========================
+
+To start a scrub operation for a directory tree use the following command::
+
+ ceph tell mds.<fsname>:0 scrub start <path> [scrubopts] [tag]
+
+where ``scrubopts`` is a comma delimited list of ``recursive``, ``force``, or
+``repair`` and ``tag`` is an optional custom string tag (the default is a generated
+UUID). An example command is::
+
+ ceph tell mds.cephfs:0 scrub start / recursive
+ {
+ "return_code": 0,
+ "scrub_tag": "6f0d204c-6cfd-4300-9e02-73f382fd23c1",
+ "mode": "asynchronous"
+ }
+
+Recursive scrub is asynchronous (as hinted by `mode` in the output above).
+Asynchronous scrubs must be polled using ``scrub status`` to determine the
+status.
+
+The scrub tag is used to differentiate scrubs and also to mark each inode's
+first data object in the default data pool (where the backtrace information is
+stored) with a ``scrub_tag`` extended attribute with the value of the tag. You
+can verify an inode was scrubbed by looking at the extended attribute using the
+RADOS utilities.
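+
+For example, the check could look like this (a sketch; the mount point, file
+name and data pool name are placeholders)::
+
+    # find the file's inode number and convert it to hexadecimal
+    printf '%x\n' "$(stat -c %i /mnt/cephfs/somefile)"
+    # read the scrub_tag xattr from the inode's first object
+    # (object names are <hex inode>.00000000) in the default data pool
+    rados -p cephfs_data getxattr <hex_inode>.00000000 scrub_tag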
+
+Scrubs work with multiple active MDS daemons (multiple ranks). The scrub is
+managed by rank 0 and distributed across MDS ranks as appropriate.
+
+
+Monitor (ongoing) File System Scrubs
+====================================
+
+The status of ongoing scrubs can be monitored and polled using the ``scrub
+status`` command. This command lists ongoing scrubs (identified by tag) along
+with the path and options used to initiate the scrub::
+
+ ceph tell mds.cephfs:0 scrub status
+ {
+ "status": "scrub active (85 inodes in the stack)",
+ "scrubs": {
+ "6f0d204c-6cfd-4300-9e02-73f382fd23c1": {
+ "path": "/",
+ "options": "recursive"
+ }
+ }
+ }
+
+`status` shows the number of inodes that are scheduled to be scrubbed at any
+point in time and hence can change on subsequent `scrub status` invocations.
+In addition, a high-level summary of the scrub operation (which includes the
+operation state and the paths on which scrub was triggered) is displayed in
+`ceph status`::
+
+ ceph status
+ [...]
+
+ task status:
+ scrub status:
+ mds.0: active [paths:/]
+
+ [...]
+
+A scrub is complete when it no longer shows up in this list (although that may
+change in future releases). Any damage will be reported via cluster health warnings.
+
+Control (ongoing) File System Scrubs
+====================================
+
+- Pause: Pausing ongoing scrub operations results in no new or pending inodes being
+ scrubbed after in-flight RADOS ops (for the inodes that are currently being scrubbed)
+ finish::
+
+ ceph tell mds.cephfs:0 scrub pause
+ {
+ "return_code": 0
+ }
+
+ The ``scrub status`` after pausing reflects the paused state. At this point,
+ initiating new scrub operations (via ``scrub start``) would just queue the
+ inode for scrub::
+
+ ceph tell mds.cephfs:0 scrub status
+ {
+ "status": "PAUSED (66 inodes in the stack)",
+ "scrubs": {
+ "6f0d204c-6cfd-4300-9e02-73f382fd23c1": {
+ "path": "/",
+ "options": "recursive"
+ }
+ }
+ }
+
+- Resume: Resuming kick starts a paused scrub operation::
+
+ ceph tell mds.cephfs:0 scrub resume
+ {
+ "return_code": 0
+ }
+
+- Abort: Aborting ongoing scrub operations removes pending inodes from the scrub
+ queue (thereby aborting the scrub) after in-flight RADOS ops (for the inodes that
+ are currently being scrubbed) finish::
+
+ ceph tell mds.cephfs:0 scrub abort
+ {
+ "return_code": 0
+ }
+
+Damages
+=======
+
+The types of damage that can be reported and repaired by File System Scrub are:
+
+* DENTRY : Inode's dentry is missing.
+
+* DIR_FRAG : Inode's directory fragment(s) is missing.
+
+* BACKTRACE : Inode's backtrace in the data pool is corrupted.
+
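+Damage found by scrub is recorded against the rank that detected it and can
+be inspected with the ``damage ls`` command (and, once repaired, forgotten
+with ``damage rm``). For example::
+
+    ceph tell mds.<fsname>:0 damage ls
+    ceph tell mds.<fsname>:0 damage rm <damage_id>
+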
+Evaluate strays using recursive scrub
+=====================================
+
+- In order to evaluate strays, i.e. to purge stray directories in ``~mdsdir``, use the following command::
+
+ ceph tell mds.<fsname>:0 scrub start ~mdsdir recursive
+
+- ``~mdsdir`` is not enqueued by default when scrubbing at the CephFS root. In order to perform stray evaluation
+ at root, run scrub with flags ``scrub_mdsdir`` and ``recursive``::
+
+ ceph tell mds.<fsname>:0 scrub start / recursive,scrub_mdsdir
diff --git a/doc/cephfs/snap-schedule.rst b/doc/cephfs/snap-schedule.rst
new file mode 100644
index 000000000..2b8873699
--- /dev/null
+++ b/doc/cephfs/snap-schedule.rst
@@ -0,0 +1,209 @@
+.. _snap-schedule:
+
+==========================
+Snapshot Scheduling Module
+==========================
+This module implements scheduled snapshots for CephFS.
+It provides a user interface to add, query and remove snapshot schedules and
+retention policies, as well as a scheduler that takes the snapshots and prunes
+existing snapshots accordingly.
+
+
+How to enable
+=============
+
+The *snap_schedule* module is enabled with::
+
+ ceph mgr module enable snap_schedule
+
+Usage
+=====
+
+This module uses :doc:`/dev/cephfs-snapshots`, please consider this documentation
+as well.
+
+This module's subcommands live under the `ceph fs snap-schedule` namespace.
+Arguments can either be supplied as positional arguments or as keyword
+arguments. Once a keyword argument was encountered, all following arguments are
+assumed to be keyword arguments too.
+
+Snapshot schedules are identified by path, their repeat interval and their
+start time. The repeat interval defines the time between two subsequent
+snapshots. It is specified by a number and a period multiplier, one of
+`h(our)`, `d(ay)` and `w(eek)`. E.g. a repeat interval of `12h` specifies one
+snapshot every 12 hours.
+
+The start time is specified as a time string (more details about passing times
+below). By default the start time is last midnight. So when a snapshot
+schedule with repeat interval `1h` is added at 13:50 with the default start
+time, the first snapshot will be taken at 14:00. The time zone is assumed to
+be UTC if none is explicitly included in the string. An explicit time zone
+will be mapped to UTC at execution. The start time must be in ISO 8601
+format. Examples below:
+
+* UTC: ``2022-08-08T05:30:00`` i.e. 5:30 AM UTC, without explicit time zone offset
+* IDT: ``2022-08-08T09:00:00+03:00`` i.e. 6:00 AM UTC
+* EDT: ``2022-08-08T05:30:00-04:00`` i.e. 9:30 AM UTC
+
+Retention specifications are identified by path and the retention spec itself. A
+retention spec consists of either a number and a time period separated by a
+space or concatenated pairs of `<number><time period>`.
+The semantics are that a spec will ensure `<number>` snapshots are kept that are
+at least `<time period>` apart. For example, `7d` means the user wants to keep 7
+snapshots that are at least one day (but potentially longer) apart from each other.
+The following time periods are recognized: `h(our), d(ay), w(eek), m(onth),
+y(ear)` and `n`. The latter is a special modifier where e.g. `10n` means keep
+the last 10 snapshots regardless of timing.
+
+All subcommands take an optional `fs` argument to specify paths in
+multi-fs setups and :doc:`/cephfs/fs-volumes` managed setups. If not
+passed, `fs` defaults to the first file system listed in the fs_map.
+When using :doc:`/cephfs/fs-volumes` the argument `fs` is equivalent to a
+`volume`.
+
+When a timestamp is passed (the `start` argument in the `add`, `remove`,
+`activate` and `deactivate` subcommands) the ISO format `%Y-%m-%dT%H:%M:%S` will
+always be accepted. When either Python 3.7 or newer is used or
+https://github.com/movermeyer/backports.datetime_fromisoformat is installed, any
+valid ISO timestamp that can be parsed by Python's `datetime.fromisoformat` is accepted.
+
+When no subcommand is supplied a synopsis is printed::
+
+ #> ceph fs snap-schedule
+ no valid command found; 8 closest matches:
+ fs snap-schedule status [<path>] [<fs>] [<format>]
+ fs snap-schedule list <path> [--recursive] [<fs>] [<format>]
+ fs snap-schedule add <path> <snap_schedule> [<start>] [<fs>]
+ fs snap-schedule remove <path> [<repeat>] [<start>] [<fs>]
+ fs snap-schedule retention add <path> <retention_spec_or_period> [<retention_count>] [<fs>]
+ fs snap-schedule retention remove <path> <retention_spec_or_period> [<retention_count>] [<fs>]
+ fs snap-schedule activate <path> [<repeat>] [<start>] [<fs>]
+ fs snap-schedule deactivate <path> [<repeat>] [<start>] [<fs>]
+ Error EINVAL: invalid command
+
+Note:
+^^^^^
+A `subvolume` argument is no longer accepted by the commands.
+
+
+Inspect snapshot schedules
+--------------------------
+
+The module offers two subcommands to inspect existing schedules: `list` and
+`status`. Both offer plain and JSON output via the optional `format` argument.
+The default is plain.
+The `list` sub-command will list all schedules on a path in a short single line
+format. It offers a `recursive` argument to list all schedules in the specified
+directory and all contained directories.
+The `status` subcommand prints all available schedules and retention specs for a
+path.
+
+Examples::
+
+ ceph fs snap-schedule status /
+ ceph fs snap-schedule status /foo/bar --format=json
+ ceph fs snap-schedule list /
+ ceph fs snap-schedule list / --recursive=true # list all schedules in the tree
+
+
+Add and remove schedules
+------------------------
+The `add` and `remove` subcommands add and remove snapshot schedules
+respectively. Both require at least a `path` argument; `add` additionally
+requires a `schedule` argument as described in the USAGE section.
+
+Multiple different schedules can be added to a path. Two schedules are considered
+different from each other if they differ in their repeat interval and their
+start time.
+
+If multiple schedules have been set on a path, `remove` can remove individual
+schedules on a path by specifying the exact repeat interval and start time, or
+the subcommand can remove all schedules on a path when just a `path` is
+specified.
+
+Examples::
+
+ ceph fs snap-schedule add / 1h
+ ceph fs snap-schedule add / 1h 11:55
+ ceph fs snap-schedule add / 2h 11:55
+ ceph fs snap-schedule remove / 1h 11:55 # removes one single schedule
+ ceph fs snap-schedule remove / 1h # removes all schedules with --repeat=1h
+ ceph fs snap-schedule remove / # removes all schedules on path /
+
+Add and remove retention policies
+---------------------------------
+The `retention add` and `retention remove` subcommands manage
+retention policies. One path has exactly one retention policy. A policy can,
+however, contain multiple count-time period pairs in order to specify complex
+retention policies.
+Retention policies can be added and removed individually or in bulk via the
+forms `ceph fs snap-schedule retention add <path> <time period> <count>` and
+`ceph fs snap-schedule retention add <path> <countTime period>[countTime period]`
+
+Examples::
+
+ ceph fs snap-schedule retention add / h 24 # keep 24 snapshots at least an hour apart
+ ceph fs snap-schedule retention add / d 7 # and 7 snapshots at least a day apart
+ ceph fs snap-schedule retention remove / h 24 # remove retention for 24 hourlies
+ ceph fs snap-schedule retention add / 24h4w # add 24 hourly and 4 weekly to retention
+ ceph fs snap-schedule retention remove / 7d4w # remove 7 daily and 4 weekly, leaves 24 hourly
+
+.. note:: When adding a path to snap-schedule, remember to strip off the mount
+ point path prefix. Paths to snap-schedule should start at the appropriate
+ CephFS file system root and not at the host file system root.
+ E.g. if the Ceph File System is mounted at ``/mnt`` and the path under which
+ snapshots need to be taken is ``/mnt/some/path`` then the actual path required
+ by snap-schedule is only ``/some/path``.
+
+.. note:: It should be noted that the "created" field in the snap-schedule status
+ command output is the timestamp at which the schedule was created. The "created"
+ timestamp has nothing to do with the creation of actual snapshots. The actual
+ snapshot creation is accounted for in the "created_count" field, which is a
+ cumulative count of the total number of snapshots created so far.
+
+.. note:: The maximum number of snapshots to retain per directory is limited by the
+ config tunable `mds_max_snaps_per_dir`. This tunable defaults to 100.
+ To ensure a new snapshot can be created, one snapshot less than this will be
+ retained. So by default, a maximum of 99 snapshots will be retained.
+
+.. note:: The ``--fs`` argument is now required if there is more than one file system.
+
+Active and inactive schedules
+-----------------------------
+Snapshot schedules can be added for a path that doesn't exist yet in the
+directory tree. Similarly a path can be removed without affecting any snapshot
+schedules on that path.
+If a directory is not present when a snapshot is scheduled to be taken, the
+schedule will be set to inactive and will be excluded from scheduling until
+it is activated again.
+A schedule can manually be set to inactive to pause the creation of scheduled
+snapshots.
+The module provides the `activate` and `deactivate` subcommands for this
+purpose.
+
+Examples::
+
+ ceph fs snap-schedule activate / # activate all schedules on the root directory
+ ceph fs snap-schedule deactivate / 1d # deactivates daily snapshots on the root directory
+
+Limitations
+-----------
+Snapshots are scheduled using python Timers. Under normal circumstances
+specifying 1h as the schedule will result in snapshots 1 hour apart fairly
+precisely. If the mgr daemon is under heavy load, however, the Timer threads
+might not get scheduled right away, resulting in a slightly delayed snapshot.
+If this happens, the next snapshot will be scheduled as if the previous one
+was not delayed, i.e. one or more delayed snapshots will not cause drift in
+the overall schedule.
+
+In order to somewhat limit the overall number of snapshots in a file system,
+the module will only keep a maximum of 50 snapshots per directory. If the
+retention policy results in more than 50 retained snapshots, the retention
+list will be shortened to the newest 50 snapshots.
+
+Data storage
+------------
+The snapshot schedule data is stored in a RADOS object in the CephFS metadata
+pool. At runtime, all data lives in an SQLite database that is serialized and
+stored as a RADOS object.
diff --git a/doc/cephfs/standby.rst b/doc/cephfs/standby.rst
new file mode 100644
index 000000000..367c6762b
--- /dev/null
+++ b/doc/cephfs/standby.rst
@@ -0,0 +1,187 @@
+.. _mds-standby:
+
+Terminology
+-----------
+
+A Ceph cluster may have zero or more CephFS *file systems*. Each CephFS has
+a human readable name (set at creation time with ``fs new``) and an integer
+ID. The ID is called the file system cluster ID, or *FSCID*.
+
+Each CephFS file system has a number of *ranks*, numbered beginning with zero.
+By default there is one rank per file system. A rank may be thought of as a
+metadata shard. Management of ranks is described in :doc:`/cephfs/multimds` .
+
+Each CephFS ``ceph-mds`` daemon starts without a rank. It may be assigned one
+by the cluster's monitors. A daemon may only hold one rank at a time, and only
+give up a rank when the ``ceph-mds`` process stops.
+
+If a rank is not associated with any daemon, that rank is considered ``failed``.
+Once a rank is assigned to a daemon, the rank is considered ``up``.
+
+Each ``ceph-mds`` daemon has a *name* that is assigned statically by the
+administrator when the daemon is first configured. Each daemon's *name* is
+typically that of the hostname where the process runs.
+
+A ``ceph-mds`` daemon may be assigned to a specific file system by
+setting its ``mds_join_fs`` configuration option to the file system's
+``name``.
+
+When a ``ceph-mds`` daemon starts, it is also assigned an integer ``GID``,
+which is unique to this current daemon's process. In other words, when a
+``ceph-mds`` daemon is restarted, it runs as a new process and is assigned a
+*new* ``GID`` that is different from that of the previous process.
+
+Referring to MDS daemons
+------------------------
+
+Most administrative commands that refer to a ``ceph-mds`` daemon (MDS)
+accept a flexible argument format that may specify a ``rank``, a ``GID``
+or a ``name``.
+
+Where a ``rank`` is used, it may optionally be qualified by
+a leading file system ``name`` or ``FSCID``. If a daemon is a standby (i.e.
+it is not currently assigned a ``rank``), then it may only be
+referred to by ``GID`` or ``name``.
+
+For example, say we have an MDS daemon with ``name`` 'myhost' and
+``GID`` 5446, and which is assigned ``rank`` 0 for the file system 'myfs'
+with ``FSCID`` 3. Any of the following are suitable forms of the ``fail``
+command:
+
+::
+
+ ceph mds fail 5446 # GID
+ ceph mds fail myhost # Daemon name
+ ceph mds fail 0 # Unqualified rank
+ ceph mds fail 3:0 # FSCID and rank
+ ceph mds fail myfs:0 # File System name and rank
+
+Managing failover
+-----------------
+
+If an MDS daemon stops communicating with the cluster's monitors, the monitors
+will wait ``mds_beacon_grace`` seconds (default 15) before marking the daemon as
+*laggy*. If a standby MDS is available, the monitors will immediately replace
+the laggy daemon.
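+
+The grace period can be tuned if your environment needs slower or faster
+failure detection. For example, to raise it to 30 seconds (the target and
+value below are illustrative, not a recommendation):
+
+::
+
+    ceph config set mon mds_beacon_grace 30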
+
+Each file system may specify a minimum number of standby daemons in order to be
+considered healthy. This number includes daemons in the ``standby-replay`` state
+waiting for a ``rank`` to fail. Note that a ``standby-replay`` daemon will not
+be assigned to take over a failure for another ``rank``, or a failure in a
+different CephFS file system. The pool of standby daemons not in
+``standby-replay`` counts towards the standby count of every file system.
+Each file system may set the desired number of standby daemons with:
+
+::
+
+ ceph fs set <fs name> standby_count_wanted <count>
+
+Setting ``count`` to 0 will disable the health check.
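+
+For example, to require at least one available standby for a file system named
+``cephfs`` (the file system name here is illustrative):
+
+::
+
+    ceph fs set cephfs standby_count_wanted 1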
+
+
+.. _mds-standby-replay:
+
+Configuring standby-replay
+--------------------------
+
+Each CephFS file system may be configured to add ``standby-replay`` daemons.
+These standby daemons follow the active MDS's metadata journal in order to
+reduce failover time in the event that the active MDS becomes unavailable. Each
+active MDS may have only one ``standby-replay`` daemon following it.
+
+Enable ``standby-replay`` on a file system with:
+
+::
+
+ ceph fs set <fs name> allow_standby_replay <bool>
+
+Once set, the monitors will assign available standby daemons to follow the
+active MDSs in that file system.
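+
+For example, to enable ``standby-replay`` on a file system named ``cephfs``
+(an illustrative name):
+
+::
+
+    ceph fs set cephfs allow_standby_replay true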
+
+Once an MDS has entered the ``standby-replay`` state, it will only be used as a
+standby for the ``rank`` that it is following. If another ``rank`` fails, this
+``standby-replay`` daemon will not be used as a replacement, even if no other
+standbys are available. For this reason, it is advised that if ``standby-replay``
+is used then *every* active MDS should have a ``standby-replay`` daemon.
+
+.. _mds-join-fs:
+
+Configuring MDS file system affinity
+------------------------------------
+
+You might elect to dedicate an MDS to a particular file system. Or, perhaps you
+have MDS daemons that run on better hardware, and those should be preferred
+over last-resort standbys running on modest or over-provisioned systems. To
+express this preference, CephFS provides the MDS configuration option
+``mds_join_fs``, which enforces this affinity.
+
+When failing over MDS daemons, a cluster's monitors will prefer standby daemons
+whose ``mds_join_fs`` equals the ``name`` of the file system with the failed
+``rank``. If no standby exists with a matching ``mds_join_fs``, the monitors
+will choose an unqualified standby (one with no ``mds_join_fs`` set) for the
+replacement, or any other available standby as a last resort. Note that this
+does not change the behavior that ``standby-replay`` daemons are always
+selected before other standbys.
+
+Furthermore, the monitors will regularly examine the CephFS file systems, even
+when stable, to check whether a standby with stronger affinity is available to
+replace an MDS with lower affinity. This process is also done for
+``standby-replay`` daemons: if a regular standby has stronger affinity than the
+``standby-replay`` MDS, it will replace the ``standby-replay`` MDS.
+
+For example, given this stable and healthy file system:
+
+::
+
+ $ ceph fs dump
+ dumped fsmap epoch 399
+ ...
+ Filesystem 'cephfs' (27)
+ ...
+ e399
+ max_mds 1
+ in 0
+ up {0=20384}
+ failed
+ damaged
+ stopped
+ ...
+ [mds.a{0:20384} state up:active seq 239 addr [v2:127.0.0.1:6854/966242805,v1:127.0.0.1:6855/966242805]]
+
+ Standby daemons:
+
+ [mds.b{-1:10420} state up:standby seq 2 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]]
+
+
+You may set ``mds_join_fs`` on the standby to enforce your preference: ::
+
+ $ ceph config set mds.b mds_join_fs cephfs
+
+After automatic failover: ::
+
+ $ ceph fs dump
+ dumped fsmap epoch 405
+ e405
+ ...
+ Filesystem 'cephfs' (27)
+ ...
+ max_mds 1
+ in 0
+ up {0=10420}
+ failed
+ damaged
+ stopped
+ ...
+ [mds.b{0:10420} state up:active seq 274 join_fscid=27 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]]
+
+ Standby daemons:
+
+ [mds.a{-1:10720} state up:standby seq 2 addr [v2:127.0.0.1:6854/1340357658,v1:127.0.0.1:6855/1340357658]]
+
+Note in the above example that ``mds.b`` now has ``join_fscid=27``. In this
+output, the file system name from ``mds_join_fs`` is displayed as the file
+system identifier (27). If the file system is recreated with the same name, the
+standby will follow the new file system as expected.
+
+Finally, if the file system is degraded or undersized, no failover will occur
+to enforce ``mds_join_fs``.
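+
+To remove an affinity previously set on a standby, the configuration option can
+simply be cleared (``mds.b`` here is the daemon name from the example above):
+
+::
+
+    ceph config rm mds.b mds_join_fs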
diff --git a/doc/cephfs/subtree-partitioning.svg b/doc/cephfs/subtree-partitioning.svg
new file mode 100644
index 000000000..20c60de92
--- /dev/null
+++ b/doc/cephfs/subtree-partitioning.svg
@@ -0,0 +1 @@
+<svg version="1.1" viewBox="0.0 0.0 908.7690288713911 373.36482939632543" fill="none" stroke="none" stroke-linecap="square" stroke-miterlimit="10" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg"><clipPath id="p.0"><path d="m0 0l908.76904 0l0 373.36484l-908.76904 0l0 -373.36484z" clip-rule="nonzero"/></clipPath><g clip-path="url(#p.0)"><path fill="#000000" fill-opacity="0.0" d="m0 0l908.76904 0l0 373.36484l-908.76904 0z" fill-rule="evenodd"/><g filter="url(#shadowFilter-p.1)"><use xlink:href="#p.1" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.1" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.1"><path fill="#ebeded" d="m259.5493 158.67822c-0.29309082 0 -0.5570679 0.2373352 -0.5570679 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570679 0.5012207l26.164001 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.164001 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m259.5493 158.67822c-0.29309082 0 -0.5570679 0.2373352 -0.5570679 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570679 0.5012207l26.164001 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.164001 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.2)"><use xlink:href="#p.2" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.2" 
filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.2"><path fill="#717d84" d="m437.25455 5.221119c-0.43972778 0 -0.8501892 0.36909866 -0.8501892 0.7646594l0 13.396389c0 0.39556122 0.41046143 0.79112244 0.8501892 0.79112244l58.752655 0c0.44024658 0 0.87994385 -0.39556122 0.87994385 -0.79112244l0 -13.396389c0 -0.39556074 -0.43969727 -0.7646594 -0.87994385 -0.7646594l-58.752655 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m437.25455 5.221119c-0.43972778 0 -0.8501892 0.36909866 -0.8501892 0.7646594l0 13.396389c0 0.39556122 0.41046143 0.79112244 0.8501892 0.79112244l58.752655 0c0.44024658 0 0.87994385 -0.39556122 0.87994385 -0.79112244l0 -13.396389c0 -0.39556074 -0.43969727 -0.7646594 -0.87994385 -0.7646594l-58.752655 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.3)"><use xlink:href="#p.3" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.3" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.3"><path fill="#90dae3" d="m595.4631 55.41167c-0.46917725 0 -0.90875244 0.3957138 -0.90875244 0.81785583l0 14.326885c0 0.4221344 0.4395752 0.8442764 0.90875244 0.8442764l41.4339 0c0.46917725 0 0.9383545 -0.42214203 0.9383545 
-0.8442764l0 -14.326885c0 -0.42214203 -0.46917725 -0.81785583 -0.9383545 -0.81785583l-41.4339 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m595.4631 55.41167c-0.46917725 0 -0.90875244 0.3957138 -0.90875244 0.81785583l0 14.326885c0 0.4221344 0.4395752 0.8442764 0.90875244 0.8442764l41.4339 0c0.46917725 0 0.9383545 -0.42214203 0.9383545 -0.8442764l0 -14.326885c0 -0.42214203 -0.46917725 -0.81785583 -0.9383545 -0.81785583l-41.4339 0" fill-rule="evenodd"/></g><path fill="#000000" fill-opacity="0.0" d="m315.78055 55.438137l150.90213 -50.2082" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m315.78055 55.438137l150.90213 -50.2082" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m466.6484 5.2211194l149.58078 50.208202" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m466.6484 5.2211194l149.58078 50.208202" fill-rule="evenodd"/><g filter="url(#shadowFilter-p.4)"><use xlink:href="#p.4" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.4" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.4"><path fill="#717d84" d="m295.0487 55.41167c-0.46920776 0 -0.9088135 0.3957138 -0.9088135 0.81785583l0 14.326885c0 0.4221344 0.4396057 0.8442764 0.9088135 0.8442764l41.433838 0c0.46920776 0 0.9384155 -0.42214203 0.9384155 -0.8442764l0 -14.326885c0 -0.42214203 -0.46920776 -0.81785583 -0.9384155 -0.81785583l-41.433838 0" 
fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m295.0487 55.41167c-0.46920776 0 -0.9088135 0.3957138 -0.9088135 0.81785583l0 14.326885c0 0.4221344 0.4396057 0.8442764 0.9088135 0.8442764l41.433838 0c0.46920776 0 0.9384155 -0.42214203 0.9384155 -0.8442764l0 -14.326885c0 -0.42214203 -0.46920776 -0.81785583 -0.9384155 -0.81785583l-41.433838 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.5)"><use xlink:href="#p.5" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.5" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.5"><path fill="#717d84" d="m302.62457 101.419685c-0.29312134 0 -0.5566406 0.23719788 -0.5566406 0.5007782l0 9.093201c0 0.26358032 0.2635193 0.52716064 0.5566406 0.52716064l26.135773 0c0.2928772 0 0.585968 -0.26358032 0.585968 -0.52716064l0 -9.093201c0 -0.26358032 -0.29309082 -0.5007782 -0.585968 -0.5007782l-26.135773 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m302.62457 101.419685c-0.29312134 0 -0.5566406 0.23719788 -0.5566406 0.5007782l0 9.093201c0 0.26358032 0.2635193 0.52716064 0.5566406 0.52716064l26.135773 0c0.2928772 0 0.585968 -0.26358032 0.585968 -0.52716064l0 -9.093201c0 -0.26358032 -0.29309082 -0.5007782 -0.585968 -0.5007782l-26.135773 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.6)"><use xlink:href="#p.6" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.6" filterUnits="userSpaceOnUse"><feGaussianBlur 
in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.6"><path fill="#ebeded" d="m445.97543 55.41167c-0.46939087 0 -0.90893555 0.3957138 -0.90893555 0.81785583l0 14.326885c0 0.4221344 0.43954468 0.8442764 0.90893555 0.8442764l41.286926 0c0.4694214 0 0.93844604 -0.42214203 0.93844604 -0.8442764l0 -14.326885c0 -0.42214203 -0.46902466 -0.81785583 -0.93844604 -0.81785583l-41.286926 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m445.97543 55.41167c-0.46939087 0 -0.90893555 0.3957138 -0.90893555 0.81785583l0 14.326885c0 0.4221344 0.43954468 0.8442764 0.90893555 0.8442764l41.286926 0c0.4694214 0 0.93844604 -0.42214203 0.93844604 -0.8442764l0 -14.326885c0 -0.42214203 -0.46902466 -0.81785583 -0.93844604 -0.81785583l-41.286926 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.7)"><use xlink:href="#p.7" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.7" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.7"><path fill="#717d84" d="m361.4975 101.419685c-0.2929077 0 -0.556427 0.23719788 -0.556427 0.5007782l0 9.093201c0 0.26358032 0.2635193 0.52716064 0.556427 0.52716064l26.160553 0c0.2929077 0 0.5858154 -0.26358032 0.5858154 -0.52716064l0 -9.093201c0 -0.26358032 -0.2929077 
-0.5007782 -0.5858154 -0.5007782l-26.160553 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m361.4975 101.419685c-0.2929077 0 -0.556427 0.23719788 -0.556427 0.5007782l0 9.093201c0 0.26358032 0.2635193 0.52716064 0.556427 0.52716064l26.160553 0c0.2929077 0 0.5858154 -0.26358032 0.5858154 -0.52716064l0 -9.093201c0 -0.26358032 -0.2929077 -0.5007782 -0.5858154 -0.5007782l-26.160553 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.8)"><use xlink:href="#p.8" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.8" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.8"><path fill="#90dae3" d="m542.25726 101.419685c-0.29309082 0 -0.5566406 0.23719788 -0.5566406 0.5007782l0 9.093201c0 0.26358032 0.2635498 0.52716064 0.5566406 0.52716064l26.135742 0c0.2929077 0 0.58599854 -0.26358032 0.58599854 -0.52716064l0 -9.093201c0 -0.26358032 -0.29309082 -0.5007782 -0.58599854 -0.5007782l-26.135742 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m542.25726 101.419685c-0.29309082 0 -0.5566406 0.23719788 -0.5566406 0.5007782l0 9.093201c0 0.26358032 0.2635498 0.52716064 0.5566406 0.52716064l26.135742 0c0.2929077 0 0.58599854 -0.26358032 0.58599854 -0.52716064l0 -9.093201c0 -0.26358032 -0.29309082 -0.5007782 -0.58599854 -0.5007782l-26.135742 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.9)"><use xlink:href="#p.9" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.9" 
filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.9"><path fill="#ebeded" d="m302.62503 131.78285c-0.29312134 0 -0.5570984 0.2373352 -0.5570984 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570984 0.5012207l26.16397 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.16397 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m302.62503 131.78285c-0.29312134 0 -0.5570984 0.2373352 -0.5570984 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570984 0.5012207l26.16397 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.16397 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.10)"><use xlink:href="#p.10" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.10" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.10"><path fill="#ebeded" d="m602.9806 101.419685c-0.29315186 0 -0.5569458 0.23719788 -0.5569458 0.5007782l0 9.093201c0 0.26358032 0.26379395 0.52716064 0.5569458 0.52716064l26.359985 0c0.26379395 0 0.55718994 -0.26358032 0.55718994 -0.52716064l0 
-9.093201c0 -0.26358032 -0.293396 -0.5007782 -0.55718994 -0.5007782l-26.359985 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m602.9806 101.419685c-0.29315186 0 -0.5569458 0.23719788 -0.5569458 0.5007782l0 9.093201c0 0.26358032 0.26379395 0.52716064 0.5569458 0.52716064l26.359985 0c0.26379395 0 0.55718994 -0.26358032 0.55718994 -0.52716064l0 -9.093201c0 -0.26358032 -0.293396 -0.5007782 -0.55718994 -0.5007782l-26.359985 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.11)"><use xlink:href="#p.11" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.11" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.11"><path fill="#ff4b47" d="m581.3394 131.78285c-0.2929077 0 -0.5564575 0.2373352 -0.5564575 0.50120544l0 9.074539c0 0.23744202 0.2635498 0.5012207 0.5564575 0.5012207l26.307251 0c0.2929077 0 0.58599854 -0.2637787 0.58599854 -0.5012207l0 -9.074539c0 -0.26387024 -0.29309082 -0.50120544 -0.58599854 -0.50120544l-26.307251 0" fill-rule="evenodd"/><path stroke="#e93317" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m581.3394 131.78285c-0.2929077 0 -0.5564575 0.2373352 -0.5564575 0.50120544l0 9.074539c0 0.23744202 0.2635498 0.5012207 0.5564575 0.5012207l26.307251 0c0.2929077 0 0.58599854 -0.2637787 0.58599854 -0.5012207l0 -9.074539c0 -0.26387024 -0.29309082 -0.50120544 -0.58599854 -0.50120544l-26.307251 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.12)"><use xlink:href="#p.12" transform="matrix(1.0 0.0 0.0 1.0 0.0 
2.418897573716371)"/></g><defs><filter id="shadowFilter-p.12" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.12"><path fill="#ebeded" d="m245.95436 101.419685c-0.29327393 0 -0.55729675 0.23719788 -0.55729675 0.5007782l0 9.093201c0 0.26358032 0.26402283 0.52716064 0.55729675 0.52716064l26.285995 0c0.2640381 0 0.557312 -0.26358032 0.557312 -0.52716064l0 -9.093201c0 -0.26358032 -0.29327393 -0.5007782 -0.557312 -0.5007782l-26.285995 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m245.95436 101.419685c-0.29327393 0 -0.55729675 0.23719788 -0.55729675 0.5007782l0 9.093201c0 0.26358032 0.26402283 0.52716064 0.55729675 0.52716064l26.285995 0c0.2640381 0 0.557312 -0.26358032 0.557312 -0.52716064l0 -9.093201c0 -0.26358032 -0.29327393 -0.5007782 -0.557312 -0.5007782l-26.285995 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.13)"><use xlink:href="#p.13" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.13" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.13"><path fill="#717d84" d="m259.5493 131.78285c-0.29309082 0 -0.5570679 0.2373352 -0.5570679 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570679 
0.5012207l26.164001 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.164001 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m259.5493 131.78285c-0.29309082 0 -0.5570679 0.2373352 -0.5570679 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570679 0.5012207l26.164001 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.164001 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.14)"><use xlink:href="#p.14" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.14" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.14"><path fill="#ebeded" d="m541.8753 131.78285c-0.2929077 0 -0.5563965 0.2373352 -0.5563965 0.50120544l0 9.074539c0 0.23744202 0.26348877 0.5012207 0.5563965 0.5012207l26.160583 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.160583 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m541.8753 131.78285c-0.2929077 0 -0.5563965 0.2373352 -0.5563965 0.50120544l0 9.074539c0 0.23744202 0.26348877 0.5012207 0.5563965 0.5012207l26.160583 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.160583 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.15)"><use 
xlink:href="#p.15" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.15" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.15"><path fill="#ebeded" d="m581.3394 158.67822c-0.2929077 0 -0.5564575 0.2373352 -0.5564575 0.50120544l0 9.074539c0 0.23744202 0.2635498 0.5012207 0.5564575 0.5012207l26.307251 0c0.2929077 0 0.58599854 -0.2637787 0.58599854 -0.5012207l0 -9.074539c0 -0.26387024 -0.29309082 -0.50120544 -0.58599854 -0.50120544l-26.307251 0" fill-rule="evenodd"/><path stroke="#e93317" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m581.3394 158.67822c-0.2929077 0 -0.5564575 0.2373352 -0.5564575 0.50120544l0 9.074539c0 0.23744202 0.2635498 0.5012207 0.5564575 0.5012207l26.307251 0c0.2929077 0 0.58599854 -0.2637787 0.58599854 -0.5012207l0 -9.074539c0 -0.26387024 -0.29309082 -0.50120544 -0.58599854 -0.50120544l-26.307251 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.16)"><use xlink:href="#p.16" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.16" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.16"><path fill="#ebeded" d="m622.8298 158.67822c-0.29309082 0 -0.5566406 0.2373352 -0.5566406 0.50120544l0 9.074539c0 
0.23744202 0.2635498 0.5012207 0.5566406 0.5012207l26.28247 0c0.29309082 0 0.5859375 -0.2637787 0.5859375 -0.5012207l0 -9.074539c0 -0.26387024 -0.29284668 -0.50120544 -0.5859375 -0.50120544l-26.28247 0" fill-rule="evenodd"/><path stroke="#e93317" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m622.8298 158.67822c-0.29309082 0 -0.5566406 0.2373352 -0.5566406 0.50120544l0 9.074539c0 0.23744202 0.2635498 0.5012207 0.5566406 0.5012207l26.28247 0c0.29309082 0 0.5859375 -0.2637787 0.5859375 -0.5012207l0 -9.074539c0 -0.26387024 -0.29284668 -0.50120544 -0.5859375 -0.50120544l-26.28247 0" fill-rule="evenodd"/></g><path fill="#000000" fill-opacity="0.0" d="m315.7218 101.44615l0.09790039 -46.03889" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m315.7218 101.44615l0.09790039 -46.03889" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m374.6243 101.44615l-58.82419 -46.03889" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m374.6243 101.44615l-58.82419 -46.03889" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m259.10968 101.44615l56.695343 -46.03889" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m259.10968 101.44615l56.695343 -46.03889" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m315.7218 131.80933l0.024475098 -30.39846" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m315.7218 131.80933l0.024475098 -30.39846" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m376.35672 131.80933l-1.7617798 -30.39846" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m376.35672 131.80933l-1.7617798 -30.39846" 
fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m315.7512 101.419685l-43.1149 30.39846" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m315.7512 101.419685l-43.1149 30.39846" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m272.6461 158.70467l0.02444458 -26.890945" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m272.6461 158.70467l0.02444458 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m466.6484 55.438137l0.024475098 -50.2082" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m466.6484 55.438137l0.024475098 -50.2082" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m555.3545 101.44615l60.879578 -46.03889" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m555.3545 101.44615l60.879578 -46.03889" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m616.1655 101.44615l0.07342529 -46.03889" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m616.1655 101.44615l0.07342529 -46.03889" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m679.0027 101.975586l-62.812683 -46.590385" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m679.0027 101.975586l-62.812683 -46.590385" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m554.9728 131.80933l0.4159546 -30.39846" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m554.9728 131.80933l0.4159546 -30.39846" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m594.5543 
131.80933l-39.224304 -30.39846" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m594.5543 131.80933l-39.224304 -30.39846" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m636.0151 158.70467l-41.49994 -26.890945" fill-rule="evenodd"/><path stroke="#e93317" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m636.0151 158.70467l-41.49994 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m594.5249 158.70467l0.024475098 -26.890945" fill-rule="evenodd"/><path stroke="#e93317" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m594.5249 158.70467l0.024475098 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m510.6638 131.80933l44.729828 -30.39846" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m510.6638 131.80933l44.729828 -30.39846" fill-rule="evenodd"/><g filter="url(#shadowFilter-p.17)"><use xlink:href="#p.17" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.17" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.17"><path fill="#ebeded" d="m407.98 131.78285c-0.29309082 0 -0.5570679 0.2373352 -0.5570679 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570679 0.5012207l26.164001 0c0.26397705 0 0.5572815 -0.2637787 0.5572815 -0.5012207l0 -9.074539c0 -0.26387024 -0.29330444 -0.50120544 -0.5572815 -0.50120544l-26.164001 0" fill-rule="evenodd"/><path stroke="#47545c" 
stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m407.98 131.78285c-0.29309082 0 -0.5570679 0.2373352 -0.5570679 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570679 0.5012207l26.164001 0c0.26397705 0 0.5572815 -0.2637787 0.5572815 -0.5012207l0 -9.074539c0 -0.26387024 -0.29330444 -0.50120544 -0.5572815 -0.50120544l-26.164001 0" fill-rule="evenodd"/></g><path fill="#000000" fill-opacity="0.0" d="m421.10617 131.80933l-46.516113 -30.39846" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m421.10617 131.80933l-46.516113 -30.39846" fill-rule="evenodd"/><g filter="url(#shadowFilter-p.18)"><use xlink:href="#p.18" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.18" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.18"><path fill="#717d84" d="m363.2306 131.78285c-0.29312134 0 -0.5570984 0.2373352 -0.5570984 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570984 0.5012207l26.16397 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.16397 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m363.2306 131.78285c-0.29312134 0 -0.5570984 0.2373352 -0.5570984 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570984 0.5012207l26.16397 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.16397 0" 
fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.19)"><use xlink:href="#p.19" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.19" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.19"><path fill="#f9afaa" d="m192.04312 101.419685c-0.29301453 0 -0.5567932 0.23719788 -0.5567932 0.5007782l0 9.093201c0 0.26358032 0.2637787 0.52716064 0.5567932 0.52716064l26.233093 0c0.2932434 0 0.58625793 -0.26358032 0.58625793 -0.52716064l0 -9.093201c0 -0.26358032 -0.29301453 -0.5007782 -0.58625793 -0.5007782l-26.233093 0" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m192.04312 101.419685c-0.29301453 0 -0.5567932 0.23719788 -0.5567932 0.5007782l0 9.093201c0 0.26358032 0.2637787 0.52716064 0.5567932 0.52716064l26.233093 0c0.2932434 0 0.58625793 -0.26358032 0.58625793 -0.52716064l0 -9.093201c0 -0.26358032 -0.29301453 -0.5007782 -0.58625793 -0.5007782l-26.233093 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.20)"><use xlink:href="#p.20" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.20" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.20"><path fill="#f9afaa" d="m184.55559 
55.41167c-0.46939087 0 -0.9092102 0.3957138 -0.9092102 0.81785583l0 14.326885c0 0.4221344 0.43981934 0.8442764 0.9092102 0.8442764l41.384506 0c0.46903992 0 0.9384308 -0.42214203 0.9384308 -0.8442764l0 -14.326885c0 -0.42214203 -0.46939087 -0.81785583 -0.9384308 -0.81785583l-41.384506 0" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m184.55559 55.41167c-0.46939087 0 -0.9092102 0.3957138 -0.9092102 0.81785583l0 14.326885c0 0.4221344 0.43981934 0.8442764 0.9092102 0.8442764l41.384506 0c0.46903992 0 0.9384308 -0.42214203 0.9384308 -0.8442764l0 -14.326885c0 -0.42214203 -0.46939087 -0.81785583 -0.9384308 -0.81785583l-41.384506 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.21)"><use xlink:href="#p.21" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.21" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.21"><path fill="#ebeded" d="m736.84656 55.41167c-0.46899414 0 -0.90875244 0.3957138 -0.90875244 0.81785583l0 14.326885c0 0.4221344 0.4397583 0.8442764 0.90875244 0.8442764l41.483215 0c0.46899414 0 0.9379883 -0.42214203 0.9379883 -0.8442764l0 -14.326885c0 -0.42214203 -0.46899414 -0.81785583 -0.9379883 -0.81785583l-41.483215 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m736.84656 55.41167c-0.46899414 0 -0.90875244 0.3957138 -0.90875244 0.81785583l0 14.326885c0 0.4221344 0.4397583 0.8442764 0.90875244 0.8442764l41.483215 0c0.46899414 0 0.9379883 -0.42214203 0.9379883 -0.8442764l0 -14.326885c0 -0.42214203 
-0.46899414 -0.81785583 -0.9379883 -0.81785583l-41.483215 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.22)"><use xlink:href="#p.22" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.22" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.22"><path fill="#90dae3" d="m665.8761 101.94912c-0.29315186 0 -0.5566406 0.23719788 -0.5566406 0.5007782l0 9.093201c0 0.26358032 0.26348877 0.5271683 0.5566406 0.5271683l26.135742 0c0.2929077 0 0.58599854 -0.26358795 0.58599854 -0.5271683l0 -9.093201c0 -0.26358032 -0.29309082 -0.5007782 -0.58599854 -0.5007782l-26.135742 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m665.8761 101.94912c-0.29315186 0 -0.5566406 0.23719788 -0.5566406 0.5007782l0 9.093201c0 0.26358032 0.26348877 0.5271683 0.5566406 0.5271683l26.135742 0c0.2929077 0 0.58599854 -0.26358795 0.58599854 -0.5271683l0 -9.093201c0 -0.26358032 -0.29309082 -0.5007782 -0.58599854 -0.5007782l-26.135742 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.23)"><use xlink:href="#p.23" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.23" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g 
id="p.23"><path fill="#ebeded" d="m192.04282 131.78285c-0.2929077 0 -0.55648804 0.2373352 -0.55648804 0.50120544l0 9.074539c0 0.23744202 0.26358032 0.5012207 0.55648804 0.5012207l26.331833 0c0.2929077 0 0.5858307 -0.2637787 0.5858307 -0.5012207l0 -9.074539c0 -0.26387024 -0.29292297 -0.50120544 -0.5858307 -0.50120544l-26.331833 0" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m192.04282 131.78285c-0.2929077 0 -0.55648804 0.2373352 -0.55648804 0.50120544l0 9.074539c0 0.23744202 0.26358032 0.5012207 0.55648804 0.5012207l26.331833 0c0.2929077 0 0.5858307 -0.2637787 0.5858307 -0.5012207l0 -9.074539c0 -0.26387024 -0.29292297 -0.50120544 -0.5858307 -0.50120544l-26.331833 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.24)"><use xlink:href="#p.24" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.24" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.24"><path fill="#ebeded" d="m149.17256 158.67822c-0.2929077 0 -0.556427 0.2373352 -0.556427 0.50120544l0 9.074539c0 0.23744202 0.2635193 0.5012207 0.556427 0.5012207l26.160568 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.160568 0" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m149.17256 158.67822c-0.2929077 0 -0.556427 0.2373352 -0.556427 0.50120544l0 9.074539c0 0.23744202 0.2635193 0.5012207 0.556427 0.5012207l26.160568 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 
-0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.160568 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.25)"><use xlink:href="#p.25" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.25" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.25"><path fill="#ebeded" d="m665.8765 131.78285c-0.29309082 0 -0.5570679 0.2373352 -0.5570679 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570679 0.5012207l26.164001 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.164001 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m665.8765 131.78285c-0.29309082 0 -0.5570679 0.2373352 -0.5570679 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570679 0.5012207l26.164001 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.164001 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.26)"><use xlink:href="#p.26" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.26" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" 
intercept="0"/></feComponentTransfer></filter></defs><g id="p.26"><path fill="#ebeded" d="m707.5716 131.78285c-0.2929077 0 -0.5564575 0.2373352 -0.5564575 0.50120544l0 9.074539c0 0.23744202 0.2635498 0.5012207 0.5564575 0.5012207l26.184875 0c0.29296875 0 0.5859375 -0.2637787 0.5859375 -0.5012207l0 -9.074539c0 -0.26387024 -0.29296875 -0.50120544 -0.5859375 -0.50120544l-26.184875 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m707.5716 131.78285c-0.2929077 0 -0.5564575 0.2373352 -0.5564575 0.50120544l0 9.074539c0 0.23744202 0.2635498 0.5012207 0.5564575 0.5012207l26.184875 0c0.29296875 0 0.5859375 -0.2637787 0.5859375 -0.5012207l0 -9.074539c0 -0.26387024 -0.29296875 -0.50120544 -0.5859375 -0.50120544l-26.184875 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.27)"><use xlink:href="#p.27" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.27" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.27"><path fill="#ebeded" d="m750.88226 158.67822c-0.2929077 0 -0.55651855 0.2373352 -0.55651855 0.50120544l0 9.074539c0 0.23744202 0.26361084 0.5012207 0.55651855 0.5012207l26.331848 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.331848 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m750.88226 158.67822c-0.2929077 0 -0.55651855 0.2373352 -0.55651855 0.50120544l0 9.074539c0 0.23744202 0.26361084 0.5012207 0.55651855 
0.5012207l26.331848 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.331848 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.28)"><use xlink:href="#p.28" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.28" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.28"><path fill="#ebeded" d="m106.53792 158.67822c-0.29310608 0 -0.55708313 0.2373352 -0.55708313 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.55708313 0.5012207l26.163986 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.163986 0" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m106.53792 158.67822c-0.29310608 0 -0.55708313 0.2373352 -0.55708313 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.55708313 0.5012207l26.163986 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.163986 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.29)"><use xlink:href="#p.29" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.29" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" 
intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.29"><path fill="#ebeded" d="m191.01506 158.67822c-0.2928772 0 -0.55644226 0.2373352 -0.55644226 0.50120544l0 9.074539c0 0.23744202 0.26356506 0.5012207 0.55644226 0.5012207l26.307266 0c0.2928772 0 0.58599854 -0.2637787 0.58599854 -0.5012207l0 -9.074539c0 -0.26387024 -0.29312134 -0.50120544 -0.58599854 -0.50120544l-26.307266 0" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m191.01506 158.67822c-0.2928772 0 -0.55644226 0.2373352 -0.55644226 0.50120544l0 9.074539c0 0.23744202 0.26356506 0.5012207 0.55644226 0.5012207l26.307266 0c0.2928772 0 0.58599854 -0.2637787 0.58599854 -0.5012207l0 -9.074539c0 -0.26387024 -0.29312134 -0.50120544 -0.58599854 -0.50120544l-26.307266 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.30)"><use xlink:href="#p.30" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.30" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.30"><path fill="#0433ff" d="m303.35846 158.67822c-0.2929077 0 -0.556427 0.2373352 -0.556427 0.50120544l0 9.074539c0 0.23744202 0.2635193 0.5012207 0.556427 0.5012207l26.160553 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.160553 0" fill-rule="evenodd"/><path stroke="#021ca1" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m303.35846 158.67822c-0.2929077 0 -0.556427 0.2373352 -0.556427 0.50120544l0 9.074539c0 
0.23744202 0.2635193 0.5012207 0.556427 0.5012207l26.160553 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.160553 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.31)"><use xlink:href="#p.31" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.31" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.31"><path fill="#ebeded" d="m363.2306 158.67822c-0.29312134 0 -0.5570984 0.2373352 -0.5570984 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570984 0.5012207l26.16397 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.16397 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m363.2306 158.67822c-0.29312134 0 -0.5570984 0.2373352 -0.5570984 0.50120544l0 9.074539c0 0.23744202 0.26397705 0.5012207 0.5570984 0.5012207l26.16397 0c0.26397705 0 0.557312 -0.2637787 0.557312 -0.5012207l0 -9.074539c0 -0.26387024 -0.29333496 -0.50120544 -0.557312 -0.50120544l-26.16397 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.32)"><use xlink:href="#p.32" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.32" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB 
type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.32"><path fill="#ebeded" d="m409.41852 158.67822c-0.2932129 0 -0.5567932 0.2373352 -0.5567932 0.50120544l0 9.074539c0 0.23744202 0.26358032 0.5012207 0.5567932 0.5012207l26.380066 0c0.29296875 0 0.5861511 -0.2637787 0.5861511 -0.5012207l0 -9.074539c0 -0.26387024 -0.29318237 -0.50120544 -0.5861511 -0.50120544l-26.380066 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m409.41852 158.67822c-0.2932129 0 -0.5567932 0.2373352 -0.5567932 0.50120544l0 9.074539c0 0.23744202 0.26358032 0.5012207 0.5567932 0.5012207l26.380066 0c0.29296875 0 0.5861511 -0.2637787 0.5861511 -0.5012207l0 -9.074539c0 -0.26387024 -0.29318237 -0.50120544 -0.5861511 -0.50120544l-26.380066 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.33)"><use xlink:href="#p.33" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.33" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.33"><path fill="#ebeded" d="m707.5716 158.67822c-0.2929077 0 -0.5564575 0.2373352 -0.5564575 0.50120544l0 9.074539c0 0.23744202 0.2635498 0.5012207 0.5564575 0.5012207l26.184875 0c0.29296875 0 0.5859375 -0.2637787 0.5859375 -0.5012207l0 -9.074539c0 -0.26387024 -0.29296875 -0.50120544 -0.5859375 -0.50120544l-26.184875 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m707.5716 158.67822c-0.2929077 0 -0.5564575 0.2373352 -0.5564575 
0.50120544l0 9.074539c0 0.23744202 0.2635498 0.5012207 0.5564575 0.5012207l26.184875 0c0.29296875 0 0.5859375 -0.2637787 0.5859375 -0.5012207l0 -9.074539c0 -0.26387024 -0.29296875 -0.50120544 -0.5859375 -0.50120544l-26.184875 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.34)"><use xlink:href="#p.34" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.34" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.34"><path fill="#ebeded" d="m497.4783 158.67822c-0.2929077 0 -0.55648804 0.2373352 -0.55648804 0.50120544l0 9.074539c0 0.23744202 0.26358032 0.5012207 0.55648804 0.5012207l26.331818 0c0.2929077 0 0.58587646 -0.2637787 0.58587646 -0.5012207l0 -9.074539c0 -0.26387024 -0.29296875 -0.50120544 -0.58587646 -0.50120544l-26.331818 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m497.4783 158.67822c-0.2929077 0 -0.55648804 0.2373352 -0.55648804 0.50120544l0 9.074539c0 0.23744202 0.26358032 0.5012207 0.55648804 0.5012207l26.331818 0c0.2929077 0 0.58587646 -0.2637787 0.58587646 -0.5012207l0 -9.074539c0 -0.26387024 -0.29296875 -0.50120544 -0.58587646 -0.50120544l-26.331818 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.35)"><use xlink:href="#p.35" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.35" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" 
slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.35"><path fill="#f9afaa" d="m149.17256 131.78285c-0.2929077 0 -0.556427 0.2373352 -0.556427 0.50120544l0 9.074539c0 0.23744202 0.2635193 0.5012207 0.556427 0.5012207l26.160568 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.160568 0" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m149.17256 131.78285c-0.2929077 0 -0.556427 0.2373352 -0.556427 0.50120544l0 9.074539c0 0.23744202 0.2635193 0.5012207 0.556427 0.5012207l26.160568 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.160568 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.36)"><use xlink:href="#p.36" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.36" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.36"><path fill="#90dae3" d="m497.4783 131.78285c-0.2929077 0 -0.55648804 0.2373352 -0.55648804 0.50120544l0 9.074539c0 0.23744202 0.26358032 0.5012207 0.55648804 0.5012207l26.331818 0c0.2929077 0 0.58587646 -0.2637787 0.58587646 -0.5012207l0 -9.074539c0 -0.26387024 -0.29296875 -0.50120544 -0.58587646 -0.50120544l-26.331818 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m497.4783 131.78285c-0.2929077 0 
-0.55648804 0.2373352 -0.55648804 0.50120544l0 9.074539c0 0.23744202 0.26358032 0.5012207 0.55648804 0.5012207l26.331818 0c0.2929077 0 0.58587646 -0.2637787 0.58587646 -0.5012207l0 -9.074539c0 -0.26387024 -0.29296875 -0.50120544 -0.58587646 -0.50120544l-26.331818 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.37)"><use xlink:href="#p.37" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.37" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.37"><path fill="#90dae3" d="m750.88226 131.78285c-0.2929077 0 -0.55651855 0.2373352 -0.55651855 0.50120544l0 9.074539c0 0.23744202 0.26361084 0.5012207 0.55651855 0.5012207l26.331848 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.331848 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m750.88226 131.78285c-0.2929077 0 -0.55651855 0.2373352 -0.55651855 0.50120544l0 9.074539c0 0.23744202 0.26361084 0.5012207 0.55651855 0.5012207l26.331848 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.331848 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.38)"><use xlink:href="#p.38" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.38" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" 
intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.38"><path fill="#ebeded" d="m459.74677 158.67822c-0.29284668 0 -0.5565796 0.2373352 -0.5565796 0.50120544l0 9.074539c0 0.23744202 0.2637329 0.5012207 0.5565796 0.5012207l26.111206 0c0.2930603 0 0.5861206 -0.2637787 0.5861206 -0.5012207l0 -9.074539c0 -0.26387024 -0.2930603 -0.50120544 -0.5861206 -0.50120544l-26.111206 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m459.74677 158.67822c-0.29284668 0 -0.5565796 0.2373352 -0.5565796 0.50120544l0 9.074539c0 0.23744202 0.2637329 0.5012207 0.5565796 0.5012207l26.111206 0c0.2930603 0 0.5861206 -0.2637787 0.5861206 -0.5012207l0 -9.074539c0 -0.26387024 -0.2930603 -0.50120544 -0.5861206 -0.50120544l-26.111206 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.39)"><use xlink:href="#p.39" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.39" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.39"><path fill="#ebeded" d="m536.7074 158.67822c-0.2929077 0 -0.5564575 0.2373352 -0.5564575 0.50120544l0 9.074539c0 0.23744202 0.2635498 0.5012207 0.5564575 0.5012207l26.160583 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.160583 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" 
d="m536.7074 158.67822c-0.2929077 0 -0.5564575 0.2373352 -0.5564575 0.50120544l0 9.074539c0 0.23744202 0.2635498 0.5012207 0.5564575 0.5012207l26.160583 0c0.2929077 0 0.5858154 -0.2637787 0.5858154 -0.5012207l0 -9.074539c0 -0.26387024 -0.2929077 -0.50120544 -0.5858154 -0.50120544l-26.160583 0" fill-rule="evenodd"/></g><path fill="#000000" fill-opacity="0.0" d="m205.25764 55.438137l261.40546 -50.2082" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m205.25764 55.438137l261.40546 -50.2082" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m757.63727 55.438137l-290.98883 -50.2082" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m757.63727 55.438137l-290.98883 -50.2082" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m205.16956 101.44615l0.12234497 -46.03889" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m205.16956 101.44615l0.12234497 -46.03889" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m205.25764 131.80933l-0.09786987 -30.39846" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m205.25764 131.80933l-0.09786987 -30.39846" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m162.26999 101.975586l43.017014 -46.590385" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m162.26999 101.975586l43.017014 -46.590385" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m162.26999 131.80933l42.919144 -30.39846" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m162.26999 131.80933l42.919144 -30.39846" fill-rule="evenodd"/><path fill="#000000" 
fill-opacity="0.0" d="m678.9733 131.80933l0.024475098 -29.869026" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m678.9733 131.80933l0.024475098 -29.869026" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m720.69836 131.80933l43.6532 -30.39846" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m720.69836 131.80933l43.6532 -30.39846" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m764.06775 131.80933l0.29364014 -30.39846" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m764.06775 131.80933l0.29364014 -30.39846" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m119.634705 158.70467l42.67444 -26.890945" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m119.634705 158.70467l42.67444 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m204.22993 158.70467l-41.96483 -26.890945" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m204.22993 158.70467l-41.96483 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m316.48526 158.70467l-43.84897 -26.890945" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m316.48526 158.70467l-43.84897 -26.890945" fill-rule="evenodd"/><g filter="url(#shadowFilter-p.40)"><use xlink:href="#p.40" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.40" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" 
intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.40"><path fill="#ebeded" d="m149.17256 101.94912c-0.2929077 0 -0.556427 0.23719788 -0.556427 0.5007782l0 9.093201c0 0.26358032 0.2635193 0.5271683 0.556427 0.5271683l26.160568 0c0.2929077 0 0.5858154 -0.26358795 0.5858154 -0.5271683l0 -9.093201c0 -0.26358032 -0.2929077 -0.5007782 -0.5858154 -0.5007782l-26.160568 0" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m149.17256 101.94912c-0.2929077 0 -0.556427 0.23719788 -0.556427 0.5007782l0 9.093201c0 0.26358032 0.2635193 0.5271683 0.556427 0.5271683l26.160568 0c0.2929077 0 0.5858154 -0.26358795 0.5858154 -0.5271683l0 -9.093201c0 -0.26358032 -0.2929077 -0.5007782 -0.5858154 -0.5007782l-26.160568 0" fill-rule="evenodd"/></g><path fill="#000000" fill-opacity="0.0" d="m422.66238 158.70467l-46.320343 -26.890945" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m422.66238 158.70467l-46.320343 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m549.8342 158.70467l-39.175354 -26.890945" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m549.8342 158.70467l-39.175354 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m472.81467 158.70467l37.87845 -26.890945" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m472.81467 158.70467l37.87845 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m720.69836 158.70467l43.40851 -26.890945" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" 
stroke-linecap="butt" d="m720.69836 158.70467l43.40851 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m804.6476 158.70467l-40.57007 -26.890945" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m804.6476 158.70467l-40.57007 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m764.06775 158.70467l0.024475098 -26.890945" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m764.06775 158.70467l0.024475098 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m376.32736 158.70467l0.024475098 -26.890945" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m376.32736 158.70467l0.024475098 -26.890945" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m510.6638 158.70467l0.024475098 -26.890945" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m510.6638 158.70467l0.024475098 -26.890945" fill-rule="evenodd"/><g filter="url(#shadowFilter-p.41)"><use xlink:href="#p.41" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.41" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.41"><path fill="#90dae3" d="m751.14703 101.419685c-0.29315186 0 -0.5569458 0.23719788 -0.5569458 0.5007782l0 9.093201c0 0.26358032 0.26379395 0.52716064 0.5569458 0.52716064l26.360046 0c0.26379395 0 0.5571289 -0.26358032 
0.5571289 -0.52716064l0 -9.093201c0 -0.26358032 -0.29333496 -0.5007782 -0.5571289 -0.5007782l-26.360046 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m751.14703 101.419685c-0.29315186 0 -0.5569458 0.23719788 -0.5569458 0.5007782l0 9.093201c0 0.26358032 0.26379395 0.52716064 0.5569458 0.52716064l26.360046 0c0.26379395 0 0.5571289 -0.26358032 0.5571289 -0.52716064l0 -9.093201c0 -0.26358032 -0.29333496 -0.5007782 -0.5571289 -0.5007782l-26.360046 0" fill-rule="evenodd"/></g><path fill="#000000" fill-opacity="0.0" d="m764.3614 101.44615l-148.16156 -46.03889" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="4.005249343832021" stroke-linejoin="round" stroke-linecap="butt" d="m764.3614 101.44615l-148.16156 -46.03889" fill-rule="evenodd"/><g filter="url(#shadowFilter-p.42)"><use xlink:href="#p.42" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.42" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.42"><path fill="#d7dada" d="m276.6083 192.50917c-1.1743164 0 -2.3179016 1.0310669 -2.3179016 2.0888672l0 36.888687c0 1.0578003 1.1435852 2.115265 2.3179016 2.115265l107.05585 0c1.1734009 0 2.3477173 -1.0574646 2.3477173 -2.115265l0 -36.888687c0 -1.0578003 -1.1743164 -2.0888672 -2.3477173 -2.0888672l-107.05585 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m276.6083 192.50917c-1.1743164 0 -2.3179016 1.0310669 -2.3179016 2.0888672l0 36.888687c0 1.0578003 1.1435852 2.115265 2.3179016 2.115265l107.05585 
0c1.1734009 0 2.3477173 -1.0574646 2.3477173 -2.115265l0 -36.888687c0 -1.0578003 -1.1743164 -2.0888672 -2.3477173 -2.0888672l-107.05585 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.43)"><use xlink:href="#p.43" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.43" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.43"><path fill="#717d84" d="m287.06165 197.2741c-0.9097595 0 -1.8187866 0.8192291 -1.8187866 1.6384583l0 28.965668c0 0.8192291 0.9090271 1.6648407 1.8187866 1.6648407l85.71365 0c0.9097595 0 1.8485718 -0.8456116 1.8485718 -1.6648407l0 -28.965668c0 -0.8192291 -0.93881226 -1.6384583 -1.8485718 -1.6384583l-85.71365 0" fill-rule="evenodd"/><path stroke="#65bbc6" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m287.06165 197.2741c-0.9097595 0 -1.8187866 0.8192291 -1.8187866 1.6384583l0 28.965668c0 0.8192291 0.9090271 1.6648407 1.8187866 1.6648407l85.71365 0c0.9097595 0 1.8485718 -0.8456116 1.8485718 -1.6648407l0 -28.965668c0 -0.8192291 -0.93881226 -1.6384583 -1.8485718 -1.6384583l-85.71365 0" fill-rule="evenodd"/></g><path fill="#000000" fill-opacity="0.0" d="m305.03363 202.62143l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m305.03363 202.62143l66.60541 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m305.03363 206.5922l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m305.03363 206.5922l66.60541 0" 
fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m305.03363 210.53648l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m305.03363 210.53648l66.60541 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m305.03363 214.50725l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m305.03363 214.50725l66.60541 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m305.03363 218.45154l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m305.03363 218.45154l66.60541 0" fill-rule="evenodd"/><g filter="url(#shadowFilter-p.44)"><use xlink:href="#p.44" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.44" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.44"><path fill="#d7dada" d="m139.9815 192.50917c-1.1738129 0 -2.3178253 1.0310669 -2.3178253 2.0888672l0 36.888687c0 1.0578003 1.1440125 2.115265 2.3178253 2.115265l107.00708 0c1.1738129 0 2.3476105 -1.0574646 2.3476105 -2.115265l0 -36.888687c0 -1.0578003 -1.1737976 -2.0888672 -2.3476105 -2.0888672l-107.00708 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m139.9815 192.50917c-1.1738129 0 -2.3178253 1.0310669 -2.3178253 2.0888672l0 36.888687c0 1.0578003 1.1440125 2.115265 2.3178253 2.115265l107.00708 0c1.1738129 0 2.3476105 -1.0574646 2.3476105 
-2.115265l0 -36.888687c0 -1.0578003 -1.1737976 -2.0888672 -2.3476105 -2.0888672l-107.00708 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.45)"><use xlink:href="#p.45" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.45" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.45"><path fill="#717d84" d="m150.37743 197.2741c-0.9100189 0 -1.8200226 0.8192291 -1.8200226 1.6384583l0 28.965668c0 0.8192291 0.91000366 1.6648407 1.8200226 1.6648407l85.838806 0c0.91000366 0 1.8200073 -0.8456116 1.8200073 -1.6648407l0 -28.965668c0 -0.8192291 -0.91000366 -1.6384583 -1.8200073 -1.6384583l-85.838806 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m150.37743 197.2741c-0.9100189 0 -1.8200226 0.8192291 -1.8200226 1.6384583l0 28.965668c0 0.8192291 0.91000366 1.6648407 1.8200226 1.6648407l85.838806 0c0.91000366 0 1.8200073 -0.8456116 1.8200073 -1.6648407l0 -28.965668c0 -0.8192291 -0.91000366 -1.6384583 -1.8200073 -1.6384583l-85.838806 0" fill-rule="evenodd"/></g><path fill="#000000" fill-opacity="0.0" d="m168.37753 202.62143l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m168.37753 202.62143l66.60541 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m168.37753 206.5922l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m168.37753 206.5922l66.60541 0" fill-rule="evenodd"/><path fill="#000000" 
fill-opacity="0.0" d="m168.37753 210.53648l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m168.37753 210.53648l66.60541 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m168.37753 214.50725l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m168.37753 214.50725l66.60541 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m168.37753 218.45154l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m168.37753 218.45154l66.60541 0" fill-rule="evenodd"/><g filter="url(#shadowFilter-p.46)"><use xlink:href="#p.46" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.46" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.46"><path fill="#d7dada" d="m413.99884 192.72095c-1.174469 0 -2.3182373 1.0310669 -2.3182373 2.0888672l0 36.888687c0 1.0578003 1.1437683 2.115265 2.3182373 2.115265l106.98181 0c1.173523 0 2.3480225 -1.0574646 2.3480225 -2.115265l0 -36.888687c0 -1.0578003 -1.1744995 -2.0888672 -2.3480225 -2.0888672l-106.98181 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m413.99884 192.72095c-1.174469 0 -2.3182373 1.0310669 -2.3182373 2.0888672l0 36.888687c0 1.0578003 1.1437683 2.115265 2.3182373 2.115265l106.98181 0c1.173523 0 2.3480225 -1.0574646 2.3480225 -2.115265l0 -36.888687c0 -1.0578003 -1.1744995 
-2.0888672 -2.3480225 -2.0888672l-106.98181 0" fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.47)"><use xlink:href="#p.47" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.47" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.47"><path fill="#717d84" d="m424.36423 197.45943c-0.91000366 0 -1.8192749 0.8189697 -1.8192749 1.6382294l0 28.988144c0 0.8192444 0.90927124 1.6646271 1.8192749 1.6646271l85.810455 0c0.91000366 0 1.8491211 -0.8453827 1.8491211 -1.6646271l0 -28.988144c0 -0.81925964 -0.93911743 -1.6382294 -1.8491211 -1.6382294l-85.810455 0" fill-rule="evenodd"/><path stroke="#f57469" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m424.36423 197.45943c-0.91000366 0 -1.8192749 0.8189697 -1.8192749 1.6382294l0 28.988144c0 0.8192444 0.90927124 1.6646271 1.8192749 1.6646271l85.810455 0c0.91000366 0 1.8491211 -0.8453827 1.8491211 -1.6646271l0 -28.988144c0 -0.81925964 -0.93911743 -1.6382294 -1.8491211 -1.6382294l-85.810455 0" fill-rule="evenodd"/></g><path fill="#000000" fill-opacity="0.0" d="m442.36508 202.83318l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m442.36508 202.83318l66.60541 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m442.36508 206.7775l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m442.36508 206.7775l66.60541 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m442.36508 210.74826l66.60541 
0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m442.36508 210.74826l66.60541 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m442.36508 214.69255l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m442.36508 214.69255l66.60541 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m442.36508 218.66333l66.60541 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m442.36508 218.66333l66.60541 0" fill-rule="evenodd"/><g filter="url(#shadowFilter-p.48)"><use xlink:href="#p.48" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.48" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.48"><path fill="#d7dada" d="m551.53564 192.72095c-1.1734619 0 -2.3180542 1.0310669 -2.3180542 2.0888672l0 36.888687c0 1.0578003 1.1445923 2.115265 2.3180542 2.115265l107.15332 0c1.1744385 0 2.3479004 -1.0574646 2.3479004 -2.115265l0 -36.888687c0 -1.0578003 -1.1734619 -2.0888672 -2.3479004 -2.0888672l-107.15332 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m551.53564 192.72095c-1.1734619 0 -2.3180542 1.0310669 -2.3180542 2.0888672l0 36.888687c0 1.0578003 1.1445923 2.115265 2.3180542 2.115265l107.15332 0c1.1744385 0 2.3479004 -1.0574646 2.3479004 -2.115265l0 -36.888687c0 -1.0578003 -1.1734619 -2.0888672 -2.3479004 -2.0888672l-107.15332 0" 
fill-rule="evenodd"/></g><g filter="url(#shadowFilter-p.49)"><use xlink:href="#p.49" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.49" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.49"><path fill="#717d84" d="m561.9303 197.45943c-0.90948486 0 -1.8189697 0.8189697 -1.8189697 1.6382294l0 28.988144c0 0.8192444 0.90948486 1.6646271 1.8189697 1.6646271l85.909546 0c0.90948486 0 1.8481445 -0.8453827 1.8481445 -1.6646271l0 -28.988144c0 -0.81925964 -0.93865967 -1.6382294 -1.8481445 -1.6382294l-85.909546 0" fill-rule="evenodd"/><path stroke="#e93317" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m561.9303 197.45943c-0.90948486 0 -1.8189697 0.8189697 -1.8189697 1.6382294l0 28.988144c0 0.8192444 0.90948486 1.6646271 1.8189697 1.6646271l85.909546 0c0.90948486 0 1.8481445 -0.8453827 1.8481445 -1.6646271l0 -28.988144c0 -0.81925964 -0.93865967 -1.6382294 -1.8481445 -1.6382294l-85.909546 0" fill-rule="evenodd"/></g><path fill="#000000" fill-opacity="0.0" d="m579.9902 202.83318l66.65436 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m579.9902 202.83318l66.65436 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m579.9902 206.7775l66.65436 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m579.9902 206.7775l66.65436 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m579.9902 210.74826l66.65436 0" fill-rule="evenodd"/><path stroke="#ebeded" 
stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m579.9902 210.74826l66.65436 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m579.9902 214.69255l66.65436 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m579.9902 214.69255l66.65436 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m579.9902 218.66333l66.65436 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m579.9902 218.66333l66.65436 0" fill-rule="evenodd"/><g filter="url(#shadowFilter-p.50)"><use xlink:href="#p.50" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.50" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.50"><path fill="#d7dada" d="m686.0774 192.72095c-1.1738281 0 -2.317871 1.0310669 -2.317871 2.0888672l0 36.888687c0 1.0578003 1.144043 2.115265 2.317871 2.115265l107.00708 0c1.1737671 0 2.3475952 -1.0574646 2.3475952 -2.115265l0 -36.888687c0 -1.0578003 -1.1738281 -2.0888672 -2.3475952 -2.0888672l-107.00708 0" fill-rule="evenodd"/><path stroke="#47545c" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m686.0774 192.72095c-1.1738281 0 -2.317871 1.0310669 -2.317871 2.0888672l0 36.888687c0 1.0578003 1.144043 2.115265 2.317871 2.115265l107.00708 0c1.1737671 0 2.3475952 -1.0574646 2.3475952 -2.115265l0 -36.888687c0 -1.0578003 -1.1738281 -2.0888672 -2.3475952 -2.0888672l-107.00708 0" fill-rule="evenodd"/></g><g 
filter="url(#shadowFilter-p.51)"><use xlink:href="#p.51" transform="matrix(1.0 0.0 0.0 1.0 0.0 2.418897573716371)"/></g><defs><filter id="shadowFilter-p.51" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" stdDeviation="0.0" result="blur"/><feComponentTransfer in="blur" color-interpolation-filters="sRGB"><feFuncR type="linear" slope="0" intercept="0.0"/><feFuncG type="linear" slope="0" intercept="0.0"/><feFuncB type="linear" slope="0" intercept="0.0"/><feFuncA type="linear" slope="0.349" intercept="0"/></feComponentTransfer></filter></defs><g id="p.51"><path fill="#717d84" d="m696.53125 197.45943c-0.9100342 0 -1.8192749 0.8189697 -1.8192749 1.6382294l0 28.988144c0 0.8192444 0.9092407 1.6646271 1.8192749 1.6646271l85.810486 0c0.90997314 0 1.84906 -0.8453827 1.84906 -1.6646271l0 -28.988144c0 -0.81925964 -0.9390869 -1.6382294 -1.84906 -1.6382294l-85.810486 0" fill-rule="evenodd"/><path stroke="#021ca1" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m696.53125 197.45943c-0.9100342 0 -1.8192749 0.8189697 -1.8192749 1.6382294l0 28.988144c0 0.8192444 0.9092407 1.6646271 1.8192749 1.6646271l85.810486 0c0.90997314 0 1.84906 -0.8453827 1.84906 -1.6646271l0 -28.988144c0 -0.81925964 -0.9390869 -1.6382294 -1.84906 -1.6382294l-85.810486 0" fill-rule="evenodd"/></g><path fill="#000000" fill-opacity="0.0" d="m714.50275 202.83318l66.55646 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m714.50275 202.83318l66.55646 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m714.50275 206.7775l66.55646 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m714.50275 206.7775l66.55646 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m714.50275 210.74826l66.55646 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" 
stroke-linejoin="round" stroke-linecap="butt" d="m714.50275 210.74826l66.55646 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m714.50275 214.69255l66.55646 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m714.50275 214.69255l66.55646 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m714.50275 218.66333l66.55646 0" fill-rule="evenodd"/><path stroke="#ebeded" stroke-width="2.6824146981627295" stroke-linejoin="round" stroke-linecap="butt" d="m714.50275 218.66333l66.55646 0" fill-rule="evenodd"/><path fill="#000000" fill-opacity="0.0" d="m158.46217 234.2839l70.10451 0l0 32.91327l-70.10451 0z" fill-rule="evenodd"/><path fill="#000000" d="m168.8528 261.2039l0 -13.35936l2.65625 0l3.15625 9.45311q0.4375 1.328125 0.640625 1.984375q0.234375 -0.734375 0.703125 -2.140625l3.203125 -9.29686l2.375 0l0 13.35936l-1.703125 0l0 -11.17186l-3.875 11.17186l-1.59375 0l-3.859375 -11.374985l0 11.374985l-1.703125 0zm15.587677 0l0 -13.35936l4.609375 0q1.546875 0 2.375 0.203125q1.140625 0.25 1.953125 0.953125q1.0625 0.890625 1.578125 2.28125q0.53125 1.390625 0.53125 3.171875q0 1.515625 -0.359375 2.7031097q-0.359375 1.171875 -0.921875 1.9375q-0.546875 0.765625 -1.203125 1.21875q-0.65625 0.4375 -1.59375 0.671875q-0.9375 0.21875 -2.140625 0.21875l-4.828125 0zm1.765625 -1.578125l2.859375 0q1.3125 0 2.0625 -0.234375q0.75 -0.25 1.203125 -0.703125q0.625 -0.625 0.96875 -1.6875q0.359375 -1.0624847 0.359375 -2.5781097q0 -2.09375 -0.6875 -3.21875q-0.6875 -1.125 -1.671875 -1.5q-0.703125 -0.28125 -2.28125 -0.28125l-2.8125 0l0 10.20311zm11.113571 -2.71875l1.65625 -0.140625q0.125 1.0 0.546875 1.640625q0.4375 0.640625 1.34375 1.046875q0.921875 0.390625 2.0625 0.390625q1.0 0 1.78125 -0.296875q0.78125 -0.296875 1.15625 -0.8125q0.375 -0.53125 0.375 -1.15625q0 -0.625 -0.375 -1.09375q-0.359375 -0.46875 -1.1875 -0.79685974q-0.546875 -0.203125 -2.390625 -0.640625q-1.828125 -0.453125 -2.5625 
-0.84375q-0.96875 -0.5 -1.4375 -1.234375q-0.46875 -0.75 -0.46875 -1.671875q0 -1.0 0.578125 -1.875q0.578125 -0.890625 1.671875 -1.34375q1.109375 -0.453125 2.453125 -0.453125q1.484375 0 2.609375 0.484375q1.140625 0.46875 1.75 1.40625q0.609375 0.921875 0.65625 2.09375l-1.6875 0.125q-0.140625 -1.265625 -0.9375 -1.90625q-0.78125 -0.65625 -2.3125 -0.65625q-1.609375 0 -2.34375 0.59375q-0.734375 0.59375 -0.734375 1.421875q0 0.71875 0.53125 1.171875q0.5 0.46875 2.65625 0.96875q2.15625 0.484375 2.953125 0.84375q1.171875 0.53125 1.71875 1.359375q0.5625 0.82810974 0.5625 1.9062347q0 1.0625 -0.609375 2.015625q-0.609375 0.9375 -1.75 1.46875q-1.140625 0.515625 -2.578125 0.515625q-1.8125 0 -3.046875 -0.53125q-1.21875 -0.53125 -1.921875 -1.59375q-0.6875 -1.0625 -0.71875 -2.40625z" fill-rule="nonzero"/><path fill="#000000" d="m168.24342 276.61014q0 -2.359375 0.484375 -3.796875q0.484375 -1.453125 1.4375 -2.234375q0.96875 -0.78125 2.421875 -0.78125q1.078125 0 1.890625 0.4375q0.8125 0.421875 1.328125 1.25q0.53125 0.8125 0.828125 1.984375q0.3125 1.15625 0.3125 3.140625q0 2.359375 -0.484375 3.8125q-0.484375 1.4375 -1.453125 2.234375q-0.953125 0.78125 -2.421875 0.78125q-1.921875 0 -3.03125 -1.390625q-1.3125 -1.671875 -1.3125 -5.4375zm1.671875 0q0 3.296875 0.765625 4.390625q0.78125 1.078125 1.90625 1.078125q1.140625 0 1.90625 -1.09375q0.765625 -1.09375 0.765625 -4.375q0 -3.296875 -0.765625 -4.375q-0.765625 -1.078125 -1.921875 -1.078125q-1.125 0 -1.796875 0.953125q-0.859375 1.21875 -0.859375 4.5z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m295.11298 234.2839l70.10452 0l0 32.91327l-70.10452 0z" fill-rule="evenodd"/><path fill="#000000" d="m305.5036 261.2039l0 -13.35936l2.65625 0l3.15625 9.45311q0.4375 1.328125 0.640625 1.984375q0.234375 -0.734375 0.703125 -2.140625l3.203125 -9.29686l2.375 0l0 13.35936l-1.703125 0l0 -11.17186l-3.875 11.17186l-1.59375 0l-3.859375 -11.374985l0 11.374985l-1.703125 0zm15.587677 0l0 -13.35936l4.609375 0q1.546875 0 2.375 0.203125q1.140625 0.25 
1.953125 0.953125q1.0625 0.890625 1.578125 2.28125q0.53125 1.390625 0.53125 3.171875q0 1.515625 -0.359375 2.7031097q-0.359375 1.171875 -0.921875 1.9375q-0.546875 0.765625 -1.203125 1.21875q-0.65625 0.4375 -1.59375 0.671875q-0.9375 0.21875 -2.140625 0.21875l-4.828125 0zm1.765625 -1.578125l2.859375 0q1.3125 0 2.0625 -0.234375q0.75 -0.25 1.203125 -0.703125q0.625 -0.625 0.96875 -1.6875q0.359375 -1.0624847 0.359375 -2.5781097q0 -2.09375 -0.6875 -3.21875q-0.6875 -1.125 -1.671875 -1.5q-0.703125 -0.28125 -2.28125 -0.28125l-2.8125 0l0 10.20311zm11.113556 -2.71875l1.65625 -0.140625q0.125 1.0 0.546875 1.640625q0.4375 0.640625 1.34375 1.046875q0.921875 0.390625 2.0625 0.390625q1.0 0 1.78125 -0.296875q0.78125 -0.296875 1.15625 -0.8125q0.375 -0.53125 0.375 -1.15625q0 -0.625 -0.375 -1.09375q-0.359375 -0.46875 -1.1875 -0.79685974q-0.546875 -0.203125 -2.390625 -0.640625q-1.828125 -0.453125 -2.5625 -0.84375q-0.96875 -0.5 -1.4375 -1.234375q-0.46875 -0.75 -0.46875 -1.671875q0 -1.0 0.578125 -1.875q0.578125 -0.890625 1.671875 -1.34375q1.109375 -0.453125 2.453125 -0.453125q1.484375 0 2.609375 0.484375q1.140625 0.46875 1.75 1.40625q0.609375 0.921875 0.65625 2.09375l-1.6875 0.125q-0.140625 -1.265625 -0.9375 -1.90625q-0.78125 -0.65625 -2.3125 -0.65625q-1.609375 0 -2.34375 0.59375q-0.734375 0.59375 -0.734375 1.421875q0 0.71875 0.53125 1.171875q0.5 0.46875 2.65625 0.96875q2.15625 0.484375 2.953125 0.84375q1.171875 0.53125 1.71875 1.359375q0.5625 0.82810974 0.5625 1.9062347q0 1.0625 -0.609375 2.015625q-0.609375 0.9375 -1.75 1.46875q-1.140625 0.515625 -2.578125 0.515625q-1.8125 0 -3.046875 -0.53125q-1.21875 -0.53125 -1.921875 -1.59375q-0.6875 -1.0625 -0.71875 -2.40625z" fill-rule="nonzero"/><path fill="#000000" d="m311.0661 283.2039l-1.640625 0l0 -10.453125q-0.59375 0.5625 -1.5625 1.140625q-0.953125 0.5625 -1.71875 0.84375l0 -1.59375q1.375 -0.640625 2.40625 -1.5625q1.03125 -0.921875 1.453125 -1.78125l1.0625 0l0 13.40625z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m437.7791 
234.2839l70.10452 0l0 32.91327l-70.10452 0z" fill-rule="evenodd"/><path fill="#000000" d="m448.16974 261.2039l0 -13.35936l2.65625 0l3.15625 9.45311q0.4375 1.328125 0.640625 1.984375q0.234375 -0.734375 0.703125 -2.140625l3.203125 -9.29686l2.375 0l0 13.35936l-1.703125 0l0 -11.17186l-3.875 11.17186l-1.59375 0l-3.859375 -11.374985l0 11.374985l-1.703125 0zm15.587677 0l0 -13.35936l4.609375 0q1.546875 0 2.375 0.203125q1.140625 0.25 1.953125 0.953125q1.0625 0.890625 1.578125 2.28125q0.53125 1.390625 0.53125 3.171875q0 1.515625 -0.359375 2.7031097q-0.359375 1.171875 -0.921875 1.9375q-0.546875 0.765625 -1.203125 1.21875q-0.65625 0.4375 -1.59375 0.671875q-0.9375 0.21875 -2.140625 0.21875l-4.828125 0zm1.765625 -1.578125l2.859375 0q1.3125 0 2.0625 -0.234375q0.75 -0.25 1.203125 -0.703125q0.625 -0.625 0.96875 -1.6875q0.359375 -1.0624847 0.359375 -2.5781097q0 -2.09375 -0.6875 -3.21875q-0.6875 -1.125 -1.671875 -1.5q-0.703125 -0.28125 -2.28125 -0.28125l-2.8125 0l0 10.20311zm11.113586 -2.71875l1.65625 -0.140625q0.125 1.0 0.546875 1.640625q0.4375 0.640625 1.34375 1.046875q0.921875 0.390625 2.0625 0.390625q1.0 0 1.78125 -0.296875q0.78125 -0.296875 1.15625 -0.8125q0.375 -0.53125 0.375 -1.15625q0 -0.625 -0.375 -1.09375q-0.359375 -0.46875 -1.1875 -0.79685974q-0.546875 -0.203125 -2.390625 -0.640625q-1.828125 -0.453125 -2.5625 -0.84375q-0.96875 -0.5 -1.4375 -1.234375q-0.46875 -0.75 -0.46875 -1.671875q0 -1.0 0.578125 -1.875q0.578125 -0.890625 1.671875 -1.34375q1.109375 -0.453125 2.453125 -0.453125q1.484375 0 2.609375 0.484375q1.140625 0.46875 1.75 1.40625q0.609375 0.921875 0.65625 2.09375l-1.6875 0.125q-0.140625 -1.265625 -0.9375 -1.90625q-0.78125 -0.65625 -2.3125 -0.65625q-1.609375 0 -2.34375 0.59375q-0.734375 0.59375 -0.734375 1.421875q0 0.71875 0.53125 1.171875q0.5 0.46875 2.65625 0.96875q2.15625 0.484375 2.953125 0.84375q1.171875 0.53125 1.71875 1.359375q0.5625 0.82810974 0.5625 1.9062347q0 1.0625 -0.609375 2.015625q-0.609375 0.9375 -1.75 1.46875q-1.140625 0.515625 -2.578125 
0.515625q-1.8125 0 -3.046875 -0.53125q-1.21875 -0.53125 -1.921875 -1.59375q-0.6875 -1.0625 -0.71875 -2.40625z" fill-rule="nonzero"/><path fill="#000000" d="m456.16974 281.62576l0 1.578125l-8.828125 0q-0.015625 -0.59375 0.1875 -1.140625q0.34375 -0.90625 1.078125 -1.78125q0.75 -0.875 2.15625 -2.015625q2.171875 -1.78125 2.9375 -2.828125q0.765625 -1.046875 0.765625 -1.96875q0 -0.984375 -0.703125 -1.640625q-0.6875 -0.671875 -1.8125 -0.671875q-1.1875 0 -1.90625 0.71875q-0.703125 0.703125 -0.703125 1.953125l-1.6875 -0.171875q0.171875 -1.890625 1.296875 -2.875q1.140625 -0.984375 3.03125 -0.984375q1.921875 0 3.046875 1.0625q1.125 1.0625 1.125 2.640625q0 0.796875 -0.328125 1.578125q-0.328125 0.78125 -1.09375 1.640625q-0.75 0.84375 -2.53125 2.34375q-1.46875 1.234375 -1.890625 1.6875q-0.421875 0.4375 -0.6875 0.875l6.546875 0z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m570.0907 234.2839l70.10455 0l0 32.91327l-70.10455 0z" fill-rule="evenodd"/><path fill="#000000" d="m580.4813 261.2039l0 -13.35936l2.65625 0l3.15625 9.45311q0.4375 1.328125 0.640625 1.984375q0.234375 -0.734375 0.703125 -2.140625l3.203125 -9.29686l2.375 0l0 13.35936l-1.703125 0l0 -11.17186l-3.875 11.17186l-1.59375 0l-3.859375 -11.374985l0 11.374985l-1.703125 0zm15.5877075 0l0 -13.35936l4.609375 0q1.546875 0 2.375 0.203125q1.140625 0.25 1.953125 0.953125q1.0625 0.890625 1.578125 2.28125q0.53125 1.390625 0.53125 3.171875q0 1.515625 -0.359375 2.7031097q-0.359375 1.171875 -0.921875 1.9375q-0.546875 0.765625 -1.203125 1.21875q-0.65625 0.4375 -1.59375 0.671875q-0.9375 0.21875 -2.140625 0.21875l-4.828125 0zm1.765625 -1.578125l2.859375 0q1.3125 0 2.0625 -0.234375q0.75 -0.25 1.203125 -0.703125q0.625 -0.625 0.96875 -1.6875q0.359375 -1.0624847 0.359375 -2.5781097q0 -2.09375 -0.6875 -3.21875q-0.6875 -1.125 -1.671875 -1.5q-0.703125 -0.28125 -2.28125 -0.28125l-2.8125 0l0 10.20311zm11.113525 -2.71875l1.65625 -0.140625q0.125 1.0 0.546875 1.640625q0.4375 0.640625 1.34375 1.046875q0.921875 0.390625 2.0625 
0.390625q1.0 0 1.78125 -0.296875q0.78125 -0.296875 1.15625 -0.8125q0.375 -0.53125 0.375 -1.15625q0 -0.625 -0.375 -1.09375q-0.359375 -0.46875 -1.1875 -0.79685974q-0.546875 -0.203125 -2.390625 -0.640625q-1.828125 -0.453125 -2.5625 -0.84375q-0.96875 -0.5 -1.4375 -1.234375q-0.46875 -0.75 -0.46875 -1.671875q0 -1.0 0.578125 -1.875q0.578125 -0.890625 1.671875 -1.34375q1.109375 -0.453125 2.453125 -0.453125q1.484375 0 2.609375 0.484375q1.140625 0.46875 1.75 1.40625q0.609375 0.921875 0.65625 2.09375l-1.6875 0.125q-0.140625 -1.265625 -0.9375 -1.90625q-0.78125 -0.65625 -2.3125 -0.65625q-1.609375 0 -2.34375 0.59375q-0.734375 0.59375 -0.734375 1.421875q0 0.71875 0.53125 1.171875q0.5 0.46875 2.65625 0.96875q2.15625 0.484375 2.953125 0.84375q1.171875 0.53125 1.71875 1.359375q0.5625 0.82810974 0.5625 1.9062347q0 1.0625 -0.609375 2.015625q-0.609375 0.9375 -1.75 1.46875q-1.140625 0.515625 -2.578125 0.515625q-1.8125 0 -3.046875 -0.53125q-1.21875 -0.53125 -1.921875 -1.59375q-0.6875 -1.0625 -0.71875 -2.40625z" fill-rule="nonzero"/><path fill="#000000" d="m579.87195 279.67264l1.640625 -0.21875q0.28125 1.40625 0.953125 2.015625q0.6875 0.609375 1.65625 0.609375q1.15625 0 1.953125 -0.796875q0.796875 -0.796875 0.796875 -1.984375q0 -1.125 -0.734375 -1.859375q-0.734375 -0.734375 -1.875 -0.734375q-0.46875 0 -1.15625 0.171875l0.1875 -1.4375q0.15625 0.015625 0.265625 0.015625q1.046875 0 1.875 -0.546875q0.84375 -0.546875 0.84375 -1.671875q0 -0.90625 -0.609375 -1.5q-0.609375 -0.59375 -1.578125 -0.59375q-0.953125 0 -1.59375 0.609375q-0.640625 0.59375 -0.8125 1.796875l-1.640625 -0.296875q0.296875 -1.640625 1.359375 -2.546875q1.0625 -0.90625 2.65625 -0.90625q1.09375 0 2.0 0.46875q0.921875 0.46875 1.40625 1.28125q0.5 0.8125 0.5 1.71875q0 0.859375 -0.46875 1.578125q-0.46875 0.703125 -1.375 1.125q1.1875 0.28125 1.84375 1.140625q0.65625 0.859375 0.65625 2.15625q0 1.734375 -1.28125 2.953125q-1.265625 1.21875 -3.21875 1.21875q-1.765625 0 -2.921875 -1.046875q-1.15625 -1.046875 -1.328125 -2.71875z" 
fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m704.5576 234.2839l70.10449 0l0 32.91327l-70.10449 0z" fill-rule="evenodd"/><path fill="#000000" d="m714.94824 261.2039l0 -13.35936l2.65625 0l3.15625 9.45311q0.4375 1.328125 0.640625 1.984375q0.234375 -0.734375 0.703125 -2.140625l3.203125 -9.29686l2.375 0l0 13.35936l-1.703125 0l0 -11.17186l-3.875 11.17186l-1.59375 0l-3.859375 -11.374985l0 11.374985l-1.703125 0zm15.5876465 0l0 -13.35936l4.609375 0q1.546875 0 2.375 0.203125q1.140625 0.25 1.953125 0.953125q1.0625 0.890625 1.578125 2.28125q0.53125 1.390625 0.53125 3.171875q0 1.515625 -0.359375 2.7031097q-0.359375 1.171875 -0.921875 1.9375q-0.546875 0.765625 -1.203125 1.21875q-0.65625 0.4375 -1.59375 0.671875q-0.9375 0.21875 -2.140625 0.21875l-4.828125 0zm1.765625 -1.578125l2.859375 0q1.3125 0 2.0625 -0.234375q0.75 -0.25 1.203125 -0.703125q0.625 -0.625 0.96875 -1.6875q0.359375 -1.0624847 0.359375 -2.5781097q0 -2.09375 -0.6875 -3.21875q-0.6875 -1.125 -1.671875 -1.5q-0.703125 -0.28125 -2.28125 -0.28125l-2.8125 0l0 10.20311zm11.113586 -2.71875l1.65625 -0.140625q0.125 1.0 0.546875 1.640625q0.4375 0.640625 1.34375 1.046875q0.921875 0.390625 2.0625 0.390625q1.0 0 1.78125 -0.296875q0.78125 -0.296875 1.15625 -0.8125q0.375 -0.53125 0.375 -1.15625q0 -0.625 -0.375 -1.09375q-0.359375 -0.46875 -1.1875 -0.79685974q-0.546875 -0.203125 -2.390625 -0.640625q-1.828125 -0.453125 -2.5625 -0.84375q-0.96875 -0.5 -1.4375 -1.234375q-0.46875 -0.75 -0.46875 -1.671875q0 -1.0 0.578125 -1.875q0.578125 -0.890625 1.671875 -1.34375q1.109375 -0.453125 2.453125 -0.453125q1.484375 0 2.609375 0.484375q1.140625 0.46875 1.75 1.40625q0.609375 0.921875 0.65625 2.09375l-1.6875 0.125q-0.140625 -1.265625 -0.9375 -1.90625q-0.78125 -0.65625 -2.3125 -0.65625q-1.609375 0 -2.34375 0.59375q-0.734375 0.59375 -0.734375 1.421875q0 0.71875 0.53125 1.171875q0.5 0.46875 2.65625 0.96875q2.15625 0.484375 2.953125 0.84375q1.171875 0.53125 1.71875 1.359375q0.5625 0.82810974 0.5625 1.9062347q0 1.0625 -0.609375 
2.015625q-0.609375 0.9375 -1.75 1.46875q-1.140625 0.515625 -2.578125 0.515625q-1.8125 0 -3.046875 -0.53125q-1.21875 -0.53125 -1.921875 -1.59375q-0.6875 -1.0625 -0.71875 -2.40625z" fill-rule="nonzero"/><path fill="#000000" d="m719.58887 283.2039l0 -3.203125l-5.796875 0l0 -1.5l6.09375 -8.65625l1.34375 0l0 8.65625l1.796875 0l0 1.5l-1.796875 0l0 3.203125l-1.640625 0zm0 -4.703125l0 -6.015625l-4.1875 6.015625l4.1875 0z" fill-rule="nonzero"/><path fill="#000000" fill-opacity="0.0" d="m27.929134 282.41995l877.6693 0l0 27.937012l-877.6693 0z" fill-rule="evenodd"/><path fill="#000000" d="m38.429134 306.77994l0 -11.453125l1.515625 0l0 11.453125l-1.515625 0zm4.0078125 0l0 -8.296875l1.265625 0l0 1.171875q0.90625 -1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm16.40625 -1.265625l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3828125 1.265625l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.4843712 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.2968712 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm8.898434 -9.84375l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 
8.296875l-1.40625 0zm2.9921875 -2.484375l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm13.3359375 2.484375l0 -7.203125l-1.234375 0l0 -1.09375l1.234375 0l0 -0.890625q0 -0.828125 0.15625 -1.234375q0.203125 -0.546875 0.703125 -0.890625q0.515625 -0.34375 1.4375 -0.34375q0.59375 0 1.3125 0.140625l-0.203125 1.234375q-0.4375 -0.078125 -0.828125 -0.078125q-0.640625 0 -0.90625 0.28125q-0.265625 0.265625 -0.265625 1.015625l0 0.765625l1.609375 0l0 1.09375l-1.609375 0l0 7.203125l-1.40625 0zm4.1171875 -9.84375l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm3.5234375 0l0 -11.453125l1.40625 0l0 11.453125l-1.40625 0zm9.2578125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 
2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm11.71875 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm8.5 5.6875l-0.15625 -1.328125q0.453125 0.125 0.796875 0.125q0.46875 0 0.75 -0.15625q0.28125 -0.15625 0.46875 -0.4375q0.125 -0.203125 0.421875 -1.046875q0.046875 -0.109375 0.125 -0.34375l-3.140625 -8.3125l1.515625 0l1.71875 4.796875q0.34375 0.921875 0.609375 1.921875q0.234375 -0.96875 0.578125 -1.890625l1.765625 -4.828125l1.40625 0l-3.15625 8.4375q-0.5 1.375 -0.78125 1.890625q-0.375 0.6875 -0.859375 1.015625q-0.484375 0.328125 -1.15625 0.328125q-0.40625 0 -0.90625 -0.171875zm7.5 -5.6875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.5468826 0.4375 1.5000076 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 
-0.1875 -1.4375 -0.46875q-1.5468826 -0.390625 -2.1562576 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.5781326 -0.171875 1.2187576 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.9687576 0 -1.3906326 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.2500076 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.6250076 0 -2.4687576 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm11.625008 1.21875l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm7.0546875 -1.40625l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm7.8359375 4.953125l0 -8.296875l1.25 0l0 1.15625q0.390625 -0.609375 1.03125 -0.96875q0.65625 -0.375 
1.484375 -0.375q0.921875 0 1.515625 0.390625q0.59375 0.375 0.828125 1.0625q0.984375 -1.453125 2.5625 -1.453125q1.234375 0 1.890625 0.6875q0.671875 0.671875 0.671875 2.09375l0 5.703125l-1.390625 0l0 -5.234375q0 -0.84375 -0.140625 -1.203125q-0.140625 -0.375 -0.5 -0.59375q-0.359375 -0.234375 -0.84375 -0.234375q-0.875 0 -1.453125 0.578125q-0.578125 0.578125 -0.578125 1.859375l0 4.828125l-1.40625 0l0 -5.390625q0 -0.9375 -0.34375 -1.40625q-0.34375 -0.46875 -1.125 -0.46875q-0.59375 0 -1.09375 0.3125q-0.5 0.3125 -0.734375 0.921875q-0.21875 0.59375 -0.21875 1.71875l0 4.3125l-1.40625 0zm17.773438 0l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm8.8984375 -9.84375l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm9.2265625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm7.8203125 4.953125l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 
0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm10.75 -1.03125q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm3.5859375 4.171875l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm10.75 -3.046875l1.390625 0.1875q-0.234375 1.421875 -1.171875 2.234375q-0.921875 0.8125 -2.28125 0.8125q-1.703125 0 -2.75 -1.109375q-1.03125 -1.125 -1.03125 -3.203125q0 -1.34375 0.4375 -2.34375q0.453125 -1.015625 1.359375 -1.515625q0.921875 -0.5 1.984375 -0.5q1.359375 0 2.21875 0.6875q0.859375 0.671875 1.09375 1.9375l-1.359375 0.203125q-0.203125 -0.828125 -0.703125 -1.25q-0.484375 -0.421875 -1.1875 -0.421875q-1.0625 0 -1.734375 0.765625q-0.65625 0.75 -0.65625 2.40625q0 
1.671875 0.640625 2.4375q0.640625 0.75 1.671875 0.75q0.828125 0 1.375 -0.5q0.5625 -0.515625 0.703125 -1.578125zm2.59375 3.046875l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm8.8359375 3.203125l-0.15625 -1.328125q0.453125 0.125 0.796875 0.125q0.46875 0 0.75 -0.15625q0.28125 -0.15625 0.46875 -0.4375q0.125 -0.203125 0.421875 -1.046875q0.046875 -0.109375 0.125 -0.34375l-3.140625 -8.3125l1.515625 0l1.71875 4.796875q0.34375 0.921875 0.609375 1.921875q0.234375 -0.96875 0.578125 -1.890625l1.765625 -4.828125l1.40625 0l-3.15625 8.4375q-0.5 1.375 -0.78125 1.890625q-0.375 0.6875 -0.859375 1.015625q-0.484375 0.328125 -1.15625 0.328125q-0.40625 0 -0.90625 -0.171875zm15.5703125 -4.46875l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3671875 1.265625l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 
1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.5078125 2.28125l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm8.1953125 4.953125l0 -1.609375l1.609375 0l0 1.609375q0 0.890625 -0.3125 1.421875q-0.3125 0.546875 -1.0 0.84375l-0.390625 -0.59375q0.453125 -0.203125 0.65625 -0.578125q0.21875 -0.375 0.234375 -1.09375l-0.796875 0zm11.59375 -1.265625l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3828125 1.265625l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 
4.546875l-1.40625 0zm14.5703125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm11.71875 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm8.5625 2.484375l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 
0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm14.3046875 -1.03125q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm8.9765625 4.171875l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm13.6328125 1.46875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 
-2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.2109375 4.953125l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm12.40625 4.140625l0 -8.296875l1.265625 0l0 1.171875q0.90625 -1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm8.3671875 -4.15625q0 -2.296875 1.28125 -3.40625q1.078125 -0.921875 2.609375 -0.921875q1.71875 0 2.796875 1.125q1.09375 1.109375 1.09375 3.09375q0 1.59375 -0.484375 2.515625q-0.484375 0.921875 -1.40625 1.4375q-0.90625 0.5 -2.0 0.5q-1.734375 0 -2.8125 -1.109375q-1.078125 -1.125 -1.078125 -3.234375zm1.453125 0q0 1.59375 0.6875 2.390625q0.703125 0.796875 1.75 0.796875q1.046875 0 1.734375 -0.796875q0.703125 -0.796875 0.703125 -2.4375q0 -1.53125 -0.703125 
-2.328125q-0.6875 -0.796875 -1.734375 -0.796875q-1.046875 0 -1.75 0.796875q-0.6875 0.78125 -0.6875 2.375zm13.3515625 4.15625l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm13.6328125 1.46875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm7.2734375 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 
0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm12.9921875 2.484375l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm7.8359375 8.140625l0 -11.484375l1.28125 0l0 1.078125q0.453125 -0.640625 1.015625 -0.953125q0.578125 -0.3125 1.390625 -0.3125q1.0625 0 1.875 0.546875q0.8125 0.546875 1.21875 1.546875q0.421875 0.984375 0.421875 2.171875q0 1.28125 -0.46875 2.296875q-0.453125 1.015625 -1.328125 1.5625q-0.859375 0.546875 -1.828125 0.546875q-0.703125 0 -1.265625 -0.296875q-0.546875 -0.296875 -0.90625 -0.75l0 4.046875l-1.40625 0zm1.265625 -7.296875q0 1.609375 0.640625 2.375q0.65625 0.765625 1.578125 0.765625q0.9375 0 1.609375 -0.796875q0.671875 -0.796875 0.671875 -2.453125q0 -1.59375 -0.65625 -2.375q-0.65625 -0.796875 -1.5625 -0.796875q-0.890625 0 -1.59375 0.84375q-0.6875 0.84375 
-0.6875 2.4375zm7.6171875 4.109375l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm7.2734375 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 
-2.0zm14.234375 -0.1875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm7.8359375 4.953125l0 -8.296875l1.265625 0l0 1.171875q0.90625 -1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm11.9609375 -1.265625l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm11.203125 1.265625l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 
-1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm7.9609375 -5.703125l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm3.5390625 0l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.2421875 1.90625l1.390625 0.1875q-0.234375 1.421875 -1.171875 2.234375q-0.921875 0.8125 -2.28125 0.8125q-1.703125 0 -2.75 -1.109375q-1.03125 -1.125 -1.03125 -3.203125q0 -1.34375 0.4375 -2.34375q0.453125 -1.015625 1.359375 -1.515625q0.921875 -0.5 1.984375 -0.5q1.359375 0 2.21875 0.6875q0.859375 0.671875 1.09375 1.9375l-1.359375 0.203125q-0.203125 -0.828125 -0.703125 -1.25q-0.484375 -0.421875 -1.1875 -0.421875q-1.0625 0 -1.734375 0.765625q-0.65625 0.75 -0.65625 2.40625q0 1.671875 0.640625 2.4375q0.640625 0.75 1.671875 0.75q0.828125 0 1.375 -0.5q0.5625 -0.515625 0.703125 -1.578125zm5.65625 1.78125l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 
4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm0.8515625 -2.890625q0 -2.296875 1.28125 -3.40625q1.078125 -0.921875 2.609375 -0.921875q1.71875 0 2.796875 1.125q1.09375 1.109375 1.09375 3.09375q0 1.59375 -0.484375 2.515625q-0.484375 0.921875 -1.40625 1.4375q-0.90625 0.5 -2.0 0.5q-1.734375 0 -2.8125 -1.109375q-1.078125 -1.125 -1.078125 -3.234375zm1.453125 0q0 1.59375 0.6875 2.390625q0.703125 0.796875 1.75 0.796875q1.046875 0 1.734375 -0.796875q0.703125 -0.796875 0.703125 -2.4375q0 -1.53125 -0.703125 -2.328125q-0.6875 -0.796875 -1.734375 -0.796875q-1.046875 0 -1.75 0.796875q-0.6875 0.78125 -0.6875 2.375zm7.9609375 4.15625l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm5.34375 -9.84375l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm9.2265625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm7.2734375 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 
-0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm18.414062 1.453125q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm3.6015625 4.171875l0 -8.296875l1.265625 0l0 1.171875q0.90625 
-1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm14.2734375 0l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm17.84375 4.140625l0 -1.21875q-0.96875 1.40625 -2.640625 1.40625q-0.734375 0 -1.375 -0.28125q-0.625 -0.28125 -0.9375 -0.703125q-0.3125 -0.4375 -0.4375 -1.046875q-0.078125 -0.421875 -0.078125 -1.3125l0 -5.140625l1.40625 0l0 4.59375q0 1.109375 0.078125 1.484375q0.140625 0.5625 0.5625 0.875q0.4375 0.3125 1.0625 0.3125q0.640625 0 1.1875 -0.3125q0.5625 -0.328125 0.78125 -0.890625q0.234375 -0.5625 0.234375 -1.625l0 -4.4375l1.40625 0l0 8.296875l-1.25 0zm3.4609375 0l0 -8.296875l1.265625 0l0 1.171875q0.90625 -1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm8.3359375 -2.484375l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 
-0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm8.5625 2.484375l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm14.3046875 -1.03125q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 
0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm8.9765625 4.171875l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm13.6328125 1.46875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.2109375 4.953125l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 
0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm12.40625 4.140625l0 -8.296875l1.265625 0l0 1.171875q0.90625 -1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm8.3671875 -4.15625q0 -2.296875 1.28125 -3.40625q1.078125 -0.921875 2.609375 -0.921875q1.71875 0 2.796875 1.125q1.09375 1.109375 1.09375 3.09375q0 1.59375 -0.484375 2.515625q-0.484375 0.921875 -1.40625 1.4375q-0.90625 0.5 -2.0 0.5q-1.734375 0 -2.8125 -1.109375q-1.078125 -1.125 -1.078125 -3.234375zm1.453125 0q0 1.59375 0.6875 2.390625q0.703125 0.796875 1.75 0.796875q1.046875 0 1.734375 -0.796875q0.703125 -0.796875 0.703125 -2.4375q0 -1.53125 -0.703125 -2.328125q-0.6875 -0.796875 -1.734375 -0.796875q-1.046875 0 -1.75 0.796875q-0.6875 0.78125 -0.6875 2.375zm13.3515625 4.15625l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm13.6328125 1.46875l1.453125 0.171875q-0.34375 1.28125 -1.28125 
1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm7.2734375 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm13.3359375 2.484375l0 -7.203125l-1.234375 0l0 -1.09375l1.234375 0l0 -0.890625q0 -0.828125 0.15625 -1.234375q0.203125 -0.546875 0.703125 -0.890625q0.515625 -0.34375 1.4375 -0.34375q0.59375 0 1.3125 0.140625l-0.203125 1.234375q-0.4375 -0.078125 -0.828125 -0.078125q-0.640625 0 -0.90625 0.28125q-0.265625 0.265625 -0.265625 1.015625l0 0.765625l1.609375 0l0 1.09375l-1.609375 0l0 7.203125l-1.40625 
0zm4.1171875 -9.84375l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm3.5234375 0l0 -11.453125l1.40625 0l0 11.453125l-1.40625 0zm9.2578125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm7.2734375 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm8.953125 2.484375l0 -1.609375l1.609375 0l0 1.609375l-1.609375 0zm11.3046875 0l0 -10.109375l-3.78125 0l0 -1.34375l9.078125 0l0 1.34375l-3.78125 0l0 
10.109375l-1.515625 0zm6.6796875 0l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm14.5703125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm11.71875 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 
0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm14.0 2.484375l0 -1.21875q-0.96875 1.40625 -2.640625 1.40625q-0.734375 0 -1.375 -0.28125q-0.625 -0.28125 -0.9375 -0.703125q-0.3125 -0.4375 -0.4375 -1.046875q-0.078125 -0.421875 -0.078125 -1.3125l0 -5.140625l1.40625 0l0 4.59375q0 1.109375 0.078125 1.484375q0.140625 0.5625 0.5625 0.875q0.4375 0.3125 1.0625 0.3125q0.640625 0 1.1875 -0.3125q0.5625 -0.328125 0.78125 -0.890625q0.234375 -0.5625 0.234375 -1.625l0 -4.4375l1.40625 0l0 8.296875l-1.25 0zm4.7578125 0l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm10.6796875 2.953125l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3671875 1.265625l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 
-2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.5078125 2.28125l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm7.2734375 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 
-0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm18.414062 1.453125q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm3.5859375 4.171875l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 
0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875z" fill-rule="nonzero"/><path fill="#000000" d="m37.991634 328.96744l0 -11.484375l1.28125 0l0 1.078125q0.453125 -0.640625 1.015625 -0.953125q0.578125 -0.3125 1.390625 -0.3125q1.0625 0 1.875 0.546875q0.8125 0.546875 1.21875 1.546875q0.421875 0.984375 0.421875 2.171875q0 1.28125 -0.46875 2.296875q-0.453125 1.015625 -1.328125 1.5625q-0.859375 0.546875 -1.828125 0.546875q-0.703125 0 -1.265625 -0.296875q-0.546875 -0.296875 -0.90625 -0.75l0 4.046875l-1.40625 0zm1.265625 -7.296875q0 1.609375 0.640625 2.375q0.65625 0.765625 1.578125 0.765625q0.9375 0 1.609375 -0.796875q0.671875 -0.796875 0.671875 -2.453125q0 -1.59375 -0.65625 -2.375q-0.65625 -0.796875 -1.5625 -0.796875q-0.890625 0 -1.59375 0.84375q-0.6875 0.84375 -0.6875 2.4375zm13.0390625 3.078125q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 
0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm3.5859375 4.171875l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm8.406246 -1.265625l0.203125 1.25q-0.5937462 0.125 -1.0624962 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.4062462 0l0 1.09375l-1.4062462 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.6093712 -0.0625zm1.3828125 -8.578125l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm6.6171875 -1.265625l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3828125 -8.578125l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm3.0234375 -4.15625q0 -2.296875 1.28125 -3.40625q1.078125 -0.921875 2.609375 -0.921875q1.71875 0 2.796875 1.125q1.09375 1.109375 1.09375 3.09375q0 1.59375 -0.484375 2.515625q-0.484375 0.921875 -1.40625 1.4375q-0.90625 0.5 -2.0 0.5q-1.734375 0 -2.8125 -1.109375q-1.078125 -1.125 -1.078125 -3.234375zm1.453125 0q0 1.59375 0.6875 2.390625q0.703125 0.796875 1.75 0.796875q1.046875 0 1.734375 -0.796875q0.703125 -0.796875 0.703125 -2.4375q0 -1.53125 -0.703125 -2.328125q-0.6875 -0.796875 
-1.734375 -0.796875q-1.046875 0 -1.75 0.796875q-0.6875 0.78125 -0.6875 2.375zm7.9765625 4.15625l0 -8.296875l1.265625 0l0 1.171875q0.90625 -1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm14.5703125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.2109375 4.953125l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm12.40625 -5.703125l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm3.5546875 0l0 -8.296875l1.265625 0l0 1.171875q0.90625 -1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 
0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm12.781258 -2.484375l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm14.0 2.484375l0 -1.21875q-0.96875 1.40625 -2.640625 1.40625q-0.734375 0 -1.375 -0.28125q-0.625 -0.28125 -0.9375 -0.703125q-0.3125 -0.4375 -0.4375 -1.046875q-0.078125 -0.421875 -0.078125 -1.3125l0 -5.140625l1.40625 0l0 4.59375q0 1.109375 0.078125 1.484375q0.140625 0.5625 0.5625 0.875q0.4375 0.3125 1.0625 0.3125q0.640625 0 1.1875 -0.3125q0.5625 -0.328125 0.78125 -0.890625q0.234375 -0.5625 0.234375 -1.625l0 -4.4375l1.40625 0l0 8.296875l-1.25 0zm8.8671875 -3.046875l1.390625 0.1875q-0.234375 1.421875 -1.171875 2.234375q-0.921875 0.8125 -2.28125 0.8125q-1.703125 0 -2.75 -1.109375q-1.03125 -1.125 -1.03125 -3.203125q0 -1.34375 0.4375 -2.34375q0.453125 
-1.015625 1.359375 -1.515625q0.921875 -0.5 1.984375 -0.5q1.359375 0 2.21875 0.6875q0.859375 0.671875 1.09375 1.9375l-1.359375 0.203125q-0.203125 -0.828125 -0.703125 -1.25q-0.484375 -0.421875 -1.1875 -0.421875q-1.0625 0 -1.734375 0.765625q-0.65625 0.75 -0.65625 2.40625q0 1.671875 0.640625 2.4375q0.640625 0.75 1.671875 0.75q0.828125 0 1.375 -0.5q0.5625 -0.515625 0.703125 -1.578125zm2.59375 3.046875l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm18.75 -1.03125q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm9.578125 4.171875l-2.546875 -8.296875l1.453125 0l1.328125 
4.78125l0.484375 1.78125q0.03125 -0.125 0.4375 -1.703125l1.3125 -4.859375l1.453125 0l1.234375 4.8125l0.421875 1.578125l0.46875 -1.59375l1.421875 -4.796875l1.375 0l-2.59375 8.296875l-1.46875 0l-1.3125 -4.96875l-0.328125 -1.421875l-1.671875 6.390625l-1.46875 0zm15.4296875 -1.03125q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm3.5390625 7.375l-0.15625 -1.328125q0.453125 0.125 0.796875 0.125q0.46875 0 0.75 -0.15625q0.28125 -0.15625 0.46875 -0.4375q0.125 -0.203125 0.421875 -1.046875q0.046875 -0.109375 0.125 -0.34375l-3.140625 -8.3125l1.515625 0l1.71875 4.796875q0.34375 0.921875 0.609375 1.921875q0.234375 -0.96875 0.578125 -1.890625l1.765625 -4.828125l1.40625 0l-3.15625 8.4375q-0.5 1.375 -0.78125 1.890625q-0.375 0.6875 -0.859375 1.015625q-0.484375 0.328125 -1.15625 0.328125q-0.40625 0 -0.90625 -0.171875zm15.5703125 -4.46875l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 
-0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3828125 1.265625l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm14.3046875 -1.03125q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm6.6640625 2.90625l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 
-0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm8.890625 0l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3828125 1.265625l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm14.5703125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm12.28125 4.953125l0 -8.296875l1.25 0l0 1.15625q0.390625 -0.609375 1.03125 -0.96875q0.65625 -0.375 1.484375 -0.375q0.921875 0 1.515625 0.390625q0.59375 0.375 0.828125 1.0625q0.984375 -1.453125 2.5625 -1.453125q1.234375 0 1.890625 0.6875q0.671875 0.671875 0.671875 2.09375l0 5.703125l-1.390625 0l0 -5.234375q0 
-0.84375 -0.140625 -1.203125q-0.140625 -0.375 -0.5 -0.59375q-0.359375 -0.234375 -0.84375 -0.234375q-0.875 0 -1.453125 0.578125q-0.578125 0.578125 -0.578125 1.859375l0 4.828125l-1.40625 0l0 -5.390625q0 -0.9375 -0.34375 -1.40625q-0.34375 -0.46875 -1.125 -0.46875q-0.59375 0 -1.09375 0.3125q-0.5 0.3125 -0.734375 0.921875q-0.21875 0.59375 -0.21875 1.71875l0 4.3125l-1.40625 0zm19.0 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm10.8984375 3.6875l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm6.7890625 0.234375q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 
1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm8.9765625 4.171875l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm13.3671875 3.109375q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 
-3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm6.6640625 2.90625l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm6.7890625 0.234375q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm8.375 4.171875l0 -7.203125l-1.234375 0l0 -1.09375l1.234375 0l0 -0.890625q0 -0.828125 0.15625 -1.234375q0.203125 -0.546875 0.703125 -0.890625q0.515625 
-0.34375 1.4375 -0.34375q0.59375 0 1.3125 0.140625l-0.203125 1.234375q-0.4375 -0.078125 -0.828125 -0.078125q-0.640625 0 -0.90625 0.28125q-0.265625 0.265625 -0.265625 1.015625l0 0.765625l1.609375 0l0 1.09375l-1.609375 0l0 7.203125l-1.40625 0zm3.5859375 -4.15625q0 -2.296875 1.28125 -3.40625q1.078125 -0.921875 2.609375 -0.921875q1.71875 0 2.796875 1.125q1.09375 1.109375 1.09375 3.09375q0 1.59375 -0.484375 2.515625q-0.484375 0.921875 -1.40625 1.4375q-0.90625 0.5 -2.0 0.5q-1.734375 0 -2.8125 -1.109375q-1.078125 -1.125 -1.078125 -3.234375zm1.453125 0q0 1.59375 0.6875 2.390625q0.703125 0.796875 1.75 0.796875q1.046875 0 1.734375 -0.796875q0.703125 -0.796875 0.703125 -2.4375q0 -1.53125 -0.703125 -2.328125q-0.6875 -0.796875 -1.734375 -0.796875q-1.046875 0 -1.75 0.796875q-0.6875 0.78125 -0.6875 2.375zm7.9609375 4.15625l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm12.8515625 -1.265625l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3828125 1.265625l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm14.5703125 -2.671875l1.453125 0.171875q-0.34375 
1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm12.015625 5.640625l1.375 0.203125q0.078125 0.640625 0.46875 0.921875q0.53125 0.390625 1.4375 0.390625q0.96875 0 1.5 -0.390625q0.53125 -0.390625 0.71875 -1.09375q0.109375 -0.421875 0.109375 -1.8125q-0.921875 1.09375 -2.296875 1.09375q-1.71875 0 -2.65625 -1.234375q-0.9375 -1.234375 -0.9375 -2.96875q0 -1.1875 0.421875 -2.1875q0.4375 -1.0 1.25 -1.546875q0.828125 -0.546875 1.921875 -0.546875q1.46875 0 2.421875 1.1875l0 -1.0l1.296875 0l0 7.171875q0 1.9375 -0.390625 2.75q-0.390625 0.8125 -1.25 1.28125q-0.859375 0.46875 -2.109375 0.46875q-1.484375 0 -2.40625 -0.671875q-0.90625 -0.671875 -0.875 -2.015625zm1.171875 -4.984375q0 1.625 0.640625 2.375q0.65625 0.75 1.625 0.75q0.96875 0 1.625 -0.734375q0.65625 -0.75 0.65625 -2.34375q0 -1.53125 -0.671875 -2.296875q-0.671875 -0.78125 -1.625 -0.78125q-0.9375 0 -1.59375 0.765625q-0.65625 0.765625 -0.65625 2.265625zm7.9765625 4.296875l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 
2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm7.7734375 8.15625l-0.15625 -1.328125q0.453125 0.125 0.796875 0.125q0.46875 0 0.75 -0.15625q0.28125 -0.15625 0.46875 -0.4375q0.125 -0.203125 0.421875 -1.046875q0.046875 -0.109375 0.125 -0.34375l-3.140625 -8.3125l1.515625 0l1.71875 4.796875q0.34375 0.921875 0.609375 1.921875q0.234375 -0.96875 0.578125 -1.890625l1.765625 -4.828125l1.40625 0l-3.15625 8.4375q-0.5 1.375 -0.78125 1.890625q-0.375 0.6875 -0.859375 1.015625q-0.484375 0.328125 -1.15625 0.328125q-0.40625 0 -0.90625 -0.171875zm11.9453125 -5.6875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm14.0 2.484375l0 -1.21875q-0.96875 1.40625 -2.640625 1.40625q-0.734375 0 -1.375 -0.28125q-0.625 
-0.28125 -0.9375 -0.703125q-0.3125 -0.4375 -0.4375 -1.046875q-0.078125 -0.421875 -0.078125 -1.3125l0 -5.140625l1.40625 0l0 4.59375q0 1.109375 0.078125 1.484375q0.140625 0.5625 0.5625 0.875q0.4375 0.3125 1.0625 0.3125q0.640625 0 1.1875 -0.3125q0.5625 -0.328125 0.78125 -0.890625q0.234375 -0.5625 0.234375 -1.625l0 -4.4375l1.40625 0l0 8.296875l-1.25 0zm4.7578125 0l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm10.6796875 2.953125l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3671875 1.265625l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 
1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.5078125 2.28125l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm12.28125 -4.890625l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm2.9921875 -2.484375l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 
0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm13.0078125 2.484375l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm14.3046875 -1.03125q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm3.6015625 4.171875l0 -8.296875l1.265625 0l0 1.171875q0.90625 -1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 
0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm14.2734375 0l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm7.9296875 4.140625l0 -11.453125l1.40625 0l0 11.453125l-1.40625 0zm9.2578125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.2109375 4.953125l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm13.703125 
4.140625l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm7.5546875 7.421875l-0.15625 -1.328125q0.453125 0.125 0.796875 0.125q0.46875 0 0.75 -0.15625q0.28125 -0.15625 0.46875 -0.4375q0.125 -0.203125 0.421875 -1.046875q0.046875 -0.109375 0.125 -0.34375l-3.140625 -8.3125l1.515625 0l1.71875 4.796875q0.34375 0.921875 0.609375 1.921875q0.234375 -0.96875 0.578125 -1.890625l1.765625 -4.828125l1.40625 0l-3.15625 8.4375q-0.5 1.375 -0.78125 1.890625q-0.375 0.6875 -0.859375 1.015625q-0.484375 0.328125 -1.15625 0.328125q-0.40625 0 -0.90625 -0.171875zm12.6328125 -3.203125l0 -11.453125l2.28125 0l2.71875 8.109375q0.375 1.125 0.546875 1.6875q0.1875 -0.625 0.609375 -1.828125l2.734375 -7.96875l2.046875 0l0 11.453125l-1.46875 0l0 -9.59375l-3.328125 9.59375l-1.359375 0l-3.3125 -9.75l0 9.75l-1.46875 0zm13.375 0l0 -11.453125l3.953125 0q1.328125 0 2.03125 0.15625q0.984375 0.234375 1.6875 0.828125q0.90625 0.765625 1.34375 1.953125q0.453125 1.1875 0.453125 2.71875q0 1.3125 -0.3125 2.328125q-0.296875 1.0 -0.78125 1.65625q-0.46875 0.65625 -1.03125 1.046875q-0.5625 0.375 -1.375 0.578125q-0.796875 0.1875 -1.828125 0.1875l-4.140625 0zm1.515625 -1.359375l2.453125 0q1.125 0 1.765625 -0.203125q0.65625 -0.21875 1.03125 -0.59375q0.546875 -0.546875 0.84375 -1.453125q0.296875 -0.90625 0.296875 -2.203125q0 -1.796875 -0.59375 -2.765625q-0.578125 -0.96875 -1.421875 -1.296875q-0.609375 -0.234375 -1.96875 -0.234375l-2.40625 0l0 8.75zm9.5234375 -2.328125l1.4375 -0.125q0.09375 0.859375 0.46875 
1.421875q0.375 0.546875 1.15625 0.890625q0.78125 0.328125 1.75 0.328125q0.875 0 1.53125 -0.25q0.671875 -0.265625 0.984375 -0.703125q0.328125 -0.453125 0.328125 -0.984375q0 -0.546875 -0.3125 -0.9375q-0.3125 -0.40625 -1.03125 -0.6875q-0.453125 -0.171875 -2.03125 -0.546875q-1.578125 -0.390625 -2.21875 -0.71875q-0.8125 -0.4375 -1.21875 -1.0625q-0.40625 -0.640625 -0.40625 -1.4375q0 -0.859375 0.484375 -1.609375q0.5 -0.765625 1.4375 -1.15625q0.953125 -0.390625 2.109375 -0.390625q1.28125 0 2.25 0.421875q0.96875 0.40625 1.484375 1.203125q0.53125 0.796875 0.578125 1.796875l-1.453125 0.109375q-0.125 -1.078125 -0.796875 -1.625q-0.671875 -0.5625 -2.0 -0.5625q-1.375 0 -2.0 0.5q-0.625 0.5 -0.625 1.21875q0 0.609375 0.4375 1.015625q0.4375 0.390625 2.28125 0.8125q1.859375 0.421875 2.546875 0.734375q1.0 0.453125 1.46875 1.171875q0.484375 0.703125 0.484375 1.625q0 0.90625 -0.53125 1.71875q-0.515625 0.8125 -1.5 1.265625q-0.984375 0.453125 -2.203125 0.453125q-1.5625 0 -2.609375 -0.453125q-1.046875 -0.46875 -1.65625 -1.375q-0.59375 -0.90625 -0.625 -2.0625zm15.0703125 -1.96875q0 -2.03125 0.40625 -3.265625q0.421875 -1.234375 1.25 -1.90625q0.828125 -0.671875 2.078125 -0.671875q0.921875 0 1.609375 0.375q0.703125 0.359375 1.15625 1.0625q0.453125 0.703125 0.703125 1.703125q0.265625 1.0 0.265625 2.703125q0 2.015625 -0.421875 3.265625q-0.40625 1.234375 -1.234375 1.921875q-0.828125 0.671875 -2.078125 0.671875q-1.65625 0 -2.609375 -1.203125q-1.125 -1.421875 -1.125 -4.65625zm1.4375 0q0 2.828125 0.65625 3.765625q0.671875 0.921875 1.640625 0.921875q0.96875 0 1.625 -0.9375q0.65625 -0.9375 0.65625 -3.75q0 -2.828125 -0.65625 -3.75q-0.65625 -0.9375 -1.640625 -0.9375q-0.96875 0 -1.546875 0.828125q-0.734375 1.046875 -0.734375 3.859375zm8.2109375 5.65625l0 -1.609375l1.609375 0l0 1.609375q0 0.890625 -0.3125 1.421875q-0.3125 0.546875 -1.0 0.84375l-0.390625 -0.59375q0.453125 -0.203125 0.65625 -0.578125q0.21875 -0.375 0.234375 -1.09375l-0.796875 0zm11.59375 -1.265625l0.203125 1.25q-0.59375 0.125 -1.0625 
0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3828125 1.265625l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm14.5703125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm12.25 4.953125l0 -11.453125l1.40625 0l0 11.453125l-1.40625 0zm3.5859375 -9.84375l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm3.2890625 0.6875l1.375 0.203125q0.078125 0.640625 0.46875 0.921875q0.53125 0.390625 1.4375 0.390625q0.96875 0 1.5 -0.390625q0.53125 -0.390625 0.71875 -1.09375q0.109375 -0.421875 0.109375 -1.8125q-0.921875 1.09375 -2.296875 1.09375q-1.71875 0 -2.65625 -1.234375q-0.9375 -1.234375 -0.9375 -2.96875q0 -1.1875 0.421875 -2.1875q0.4375 -1.0 1.25 -1.546875q0.828125 -0.546875 1.921875 -0.546875q1.46875 0 2.421875 1.1875l0 -1.0l1.296875 0l0 
7.171875q0 1.9375 -0.390625 2.75q-0.390625 0.8125 -1.25 1.28125q-0.859375 0.46875 -2.109375 0.46875q-1.484375 0 -2.40625 -0.671875q-0.90625 -0.671875 -0.875 -2.015625zm1.171875 -4.984375q0 1.625 0.640625 2.375q0.65625 0.75 1.625 0.75q0.96875 0 1.625 -0.734375q0.65625 -0.75 0.65625 -2.34375q0 -1.53125 -0.671875 -2.296875q-0.671875 -0.78125 -1.625 -0.78125q-0.9375 0 -1.59375 0.765625q-0.65625 0.765625 -0.65625 2.265625zm7.9921875 4.296875l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm11.9609375 -1.265625l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm7.125 1.265625l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm7.5859375 4.21875l0 -11.453125l1.40625 0l0 11.453125l-1.40625 0zm9.0234375 0l0 -1.21875q-0.96875 1.40625 -2.640625 1.40625q-0.734375 0 -1.375 -0.28125q-0.625 
-0.28125 -0.9375 -0.703125q-0.3125 -0.4375 -0.4375 -1.046875q-0.078125 -0.421875 -0.078125 -1.3125l0 -5.140625l1.40625 0l0 4.59375q0 1.109375 0.078125 1.484375q0.140625 0.5625 0.5625 0.875q0.4375 0.3125 1.0625 0.3125q0.640625 0 1.1875 -0.3125q0.5625 -0.328125 0.78125 -0.890625q0.234375 -0.5625 0.234375 -1.625l0 -4.4375l1.40625 0l0 8.296875l-1.25 0zm9.1328125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm11.71875 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 
-1.078125 -2.0zm14.0 2.484375l0 -1.21875q-0.96875 1.40625 -2.640625 1.40625q-0.734375 0 -1.375 -0.28125q-0.625 -0.28125 -0.9375 -0.703125q-0.3125 -0.4375 -0.4375 -1.046875q-0.078125 -0.421875 -0.078125 -1.3125l0 -5.140625l1.40625 0l0 4.59375q0 1.109375 0.078125 1.484375q0.140625 0.5625 0.5625 0.875q0.4375 0.3125 1.0625 0.3125q0.640625 0 1.1875 -0.3125q0.5625 -0.328125 0.78125 -0.890625q0.234375 -0.5625 0.234375 -1.625l0 -4.4375l1.40625 0l0 8.296875l-1.25 0zm4.7578125 0l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm10.6796875 2.953125l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3671875 1.265625l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 
2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.5078125 2.28125l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.578125 4.953125l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm7.5546875 7.421875l-0.15625 -1.328125q0.453125 0.125 0.796875 0.125q0.46875 0 0.75 -0.15625q0.28125 -0.15625 0.46875 -0.4375q0.125 -0.203125 0.421875 -1.046875q0.046875 -0.109375 0.125 -0.34375l-3.140625 -8.3125l1.515625 0l1.71875 4.796875q0.34375 0.921875 0.609375 1.921875q0.234375 -0.96875 0.578125 -1.890625l1.765625 -4.828125l1.40625 0l-3.15625 8.4375q-0.5 1.375 -0.78125 1.890625q-0.375 
0.6875 -0.859375 1.015625q-0.484375 0.328125 -1.15625 0.328125q-0.40625 0 -0.90625 -0.171875zm12.6328125 -3.203125l0 -11.453125l2.28125 0l2.71875 8.109375q0.375 1.125 0.546875 1.6875q0.1875 -0.625 0.609375 -1.828125l2.734375 -7.96875l2.046875 0l0 11.453125l-1.46875 0l0 -9.59375l-3.328125 9.59375l-1.359375 0l-3.3125 -9.75l0 9.75l-1.46875 0zm13.375 0l0 -11.453125l3.953125 0q1.328125 0 2.03125 0.15625q0.984375 0.234375 1.6875 0.828125q0.90625 0.765625 1.34375 1.953125q0.453125 1.1875 0.453125 2.71875q0 1.3125 -0.3125 2.328125q-0.296875 1.0 -0.78125 1.65625q-0.46875 0.65625 -1.03125 1.046875q-0.5625 0.375 -1.375 0.578125q-0.796875 0.1875 -1.828125 0.1875l-4.140625 0zm1.515625 -1.359375l2.453125 0q1.125 0 1.765625 -0.203125q0.65625 -0.21875 1.03125 -0.59375q0.546875 -0.546875 0.84375 -1.453125q0.296875 -0.90625 0.296875 -2.203125q0 -1.796875 -0.59375 -2.765625q-0.578125 -0.96875 -1.421875 -1.296875q-0.609375 -0.234375 -1.96875 -0.234375l-2.40625 0l0 8.75zm9.5234375 -2.328125l1.4375 -0.125q0.09375 0.859375 0.46875 1.421875q0.375 0.546875 1.15625 0.890625q0.78125 0.328125 1.75 0.328125q0.875 0 1.53125 -0.25q0.671875 -0.265625 0.984375 -0.703125q0.328125 -0.453125 0.328125 -0.984375q0 -0.546875 -0.3125 -0.9375q-0.3125 -0.40625 -1.03125 -0.6875q-0.453125 -0.171875 -2.03125 -0.546875q-1.578125 -0.390625 -2.21875 -0.71875q-0.8125 -0.4375 -1.21875 -1.0625q-0.40625 -0.640625 -0.40625 -1.4375q0 -0.859375 0.484375 -1.609375q0.5 -0.765625 1.4375 -1.15625q0.953125 -0.390625 2.109375 -0.390625q1.28125 0 2.25 0.421875q0.96875 0.40625 1.484375 1.203125q0.53125 0.796875 0.578125 1.796875l-1.453125 0.109375q-0.125 -1.078125 -0.796875 -1.625q-0.671875 -0.5625 -2.0 -0.5625q-1.375 0 -2.0 0.5q-0.625 0.5 -0.625 1.21875q0 0.609375 0.4375 1.015625q0.4375 0.390625 2.28125 0.8125q1.859375 0.421875 2.546875 0.734375q1.0 0.453125 1.46875 1.171875q0.484375 0.703125 0.484375 1.625q0 0.90625 -0.53125 1.71875q-0.515625 0.8125 -1.5 1.265625q-0.984375 0.453125 -2.203125 0.453125q-1.5625 0 -2.609375 
-0.453125q-1.046875 -0.46875 -1.65625 -1.375q-0.59375 -0.90625 -0.625 -2.0625zm20.367188 3.6875l-1.40625 0l0 -8.96875q-0.515625 0.484375 -1.34375 0.96875q-0.8125 0.484375 -1.46875 0.734375l0 -1.359375q1.171875 -0.5625 2.046875 -1.34375q0.890625 -0.796875 1.265625 -1.53125l0.90625 0l0 11.5zm4.3515625 0l0 -1.609375l1.609375 0l0 1.609375q0 0.890625 -0.3125 1.421875q-0.3125 0.546875 -1.0 0.84375l-0.390625 -0.59375q0.453125 -0.203125 0.65625 -0.578125q0.21875 -0.375 0.234375 -1.09375l-0.796875 0z" fill-rule="nonzero"/><path fill="#000000" d="m41.054134 343.5143l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3828125 1.265625l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm14.5703125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm11.749996 
0.796875q0 -2.296875 1.28125 -3.40625q1.078125 -0.921875 2.609375 -0.921875q1.71875 0 2.796875 1.125q1.09375 1.109375 1.09375 3.09375q0 1.59375 -0.484375 2.515625q-0.484375 0.921875 -1.40625 1.4375q-0.90625 0.5 -2.0 0.5q-1.734375 0 -2.8125 -1.109375q-1.078125 -1.125 -1.078125 -3.234375zm1.453125 0q0 1.59375 0.6875 2.390625q0.703125 0.796875 1.75 0.796875q1.046875 0 1.734375 -0.796875q0.703125 -0.796875 0.703125 -2.4375q0 -1.53125 -0.703125 -2.328125q-0.6875 -0.796875 -1.734375 -0.796875q-1.046875 0 -1.75 0.796875q-0.6875 0.78125 -0.6875 2.375zm7.9609375 4.15625l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm10.75 -1.03125q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 
-0.515625zm3.6015625 4.171875l0 -8.296875l1.265625 0l0 1.171875q0.90625 -1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm8.6328125 0.6875l1.375 0.203125q0.078125 0.640625 0.46875 0.921875q0.53125 0.390625 1.4375 0.390625q0.96875 0 1.5 -0.390625q0.53125 -0.390625 0.71875 -1.09375q0.109375 -0.421875 0.109375 -1.8125q-0.921875 1.09375 -2.296875 1.09375q-1.71875 0 -2.65625 -1.234375q-0.9375 -1.234375 -0.9375 -2.96875q0 -1.1875 0.421875 -2.1875q0.4375 -1.0 1.25 -1.546875q0.828125 -0.546875 1.921875 -0.546875q1.46875 0 2.421875 1.1875l0 -1.0l1.296875 0l0 7.171875q0 1.9375 -0.390625 2.75q-0.390625 0.8125 -1.25 1.28125q-0.859375 0.46875 -2.109375 0.46875q-1.484375 0 -2.40625 -0.671875q-0.90625 -0.671875 -0.875 -2.015625zm1.171875 -4.984375q0 1.625 0.640625 2.375q0.65625 0.75 1.625 0.75q0.96875 0 1.625 -0.734375q0.65625 -0.75 0.65625 -2.34375q0 -1.53125 -0.671875 -2.296875q-0.671875 -0.78125 -1.625 -0.78125q-0.9375 0 -1.59375 0.765625q-0.65625 0.765625 -0.65625 2.265625zm13.6640625 1.625l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm11.71875 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 
0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm14.000008 2.484375l0 -1.21875q-0.96875 1.40625 -2.640625 1.40625q-0.734375 0 -1.375 -0.28125q-0.6250076 -0.28125 -0.9375076 -0.703125q-0.3125 -0.4375 -0.4375 -1.046875q-0.078125 -0.421875 -0.078125 -1.3125l0 -5.140625l1.4062576 0l0 4.59375q0 1.109375 0.078125 1.484375q0.140625 0.5625 0.5625 0.875q0.4375 0.3125 1.0625 0.3125q0.640625 0 1.1875 -0.3125q0.5625 -0.328125 0.78125 -0.890625q0.234375 -0.5625 0.234375 -1.625l0 -4.4375l1.40625 0l0 8.296875l-1.25 0zm4.7578125 0l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 
2.3125zm10.6796875 2.953125l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3671875 1.265625l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.5078125 2.28125l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.578125 4.953125l-1.3125 
0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm7.5546875 7.421875l-0.15625 -1.328125q0.453125 0.125 0.796875 0.125q0.46875 0 0.75 -0.15625q0.28125 -0.15625 0.46875 -0.4375q0.125 -0.203125 0.421875 -1.046875q0.046875 -0.109375 0.125 -0.34375l-3.140625 -8.3125l1.515625 0l1.71875 4.796875q0.34375 0.921875 0.609375 1.921875q0.234375 -0.96875 0.578125 -1.890625l1.765625 -4.828125l1.40625 0l-3.15625 8.4375q-0.5 1.375 -0.78125 1.890625q-0.375 0.6875 -0.859375 1.015625q-0.484375 0.328125 -1.15625 0.328125q-0.40625 0 -0.90625 -0.171875zm12.6328125 -3.203125l0 -11.453125l2.28125 0l2.71875 8.109375q0.375 1.125 0.546875 1.6875q0.1875 -0.625 0.609375 -1.828125l2.734375 -7.96875l2.046875 0l0 11.453125l-1.46875 0l0 -9.59375l-3.328125 9.59375l-1.359375 0l-3.3125 -9.75l0 9.75l-1.46875 0zm13.375 0l0 -11.453125l3.953125 0q1.328125 0 2.03125 0.15625q0.984375 0.234375 1.6875 0.828125q0.90625 0.765625 1.34375 1.953125q0.453125 1.1875 0.453125 2.71875q0 1.3125 -0.3125 2.328125q-0.296875 1.0 -0.78125 1.65625q-0.46875 0.65625 -1.03125 1.046875q-0.5625 0.375 -1.375 0.578125q-0.796875 0.1875 -1.828125 0.1875l-4.140625 0zm1.515625 -1.359375l2.453125 0q1.125 0 1.765625 -0.203125q0.65625 -0.21875 1.03125 -0.59375q0.546875 -0.546875 0.84375 -1.453125q0.296875 -0.90625 0.296875 -2.203125q0 -1.796875 -0.59375 -2.765625q-0.578125 -0.96875 -1.421875 -1.296875q-0.609375 -0.234375 -1.96875 -0.234375l-2.40625 0l0 8.75zm9.5234375 -2.328125l1.4375 -0.125q0.09375 0.859375 0.46875 1.421875q0.375 
0.546875 1.15625 0.890625q0.78125 0.328125 1.75 0.328125q0.875 0 1.53125 -0.25q0.671875 -0.265625 0.984375 -0.703125q0.328125 -0.453125 0.328125 -0.984375q0 -0.546875 -0.3125 -0.9375q-0.3125 -0.40625 -1.03125 -0.6875q-0.453125 -0.171875 -2.03125 -0.546875q-1.578125 -0.390625 -2.21875 -0.71875q-0.8125 -0.4375 -1.21875 -1.0625q-0.40625 -0.640625 -0.40625 -1.4375q0 -0.859375 0.484375 -1.609375q0.5 -0.765625 1.4375 -1.15625q0.953125 -0.390625 2.109375 -0.390625q1.28125 0 2.25 0.421875q0.96875 0.40625 1.484375 1.203125q0.53125 0.796875 0.578125 1.796875l-1.453125 0.109375q-0.125 -1.078125 -0.796875 -1.625q-0.671875 -0.5625 -2.0 -0.5625q-1.375 0 -2.0 0.5q-0.625 0.5 -0.625 1.21875q0 0.609375 0.4375 1.015625q0.4375 0.390625 2.28125 0.8125q1.859375 0.421875 2.546875 0.734375q1.0 0.453125 1.46875 1.171875q0.484375 0.703125 0.484375 1.625q0 0.90625 -0.53125 1.71875q-0.515625 0.8125 -1.5 1.265625q-0.984375 0.453125 -2.203125 0.453125q-1.5625 0 -2.609375 -0.453125q-1.046875 -0.46875 -1.65625 -1.375q-0.59375 -0.90625 -0.625 -2.0625zm22.460938 2.328125l0 1.359375l-7.578125 0q-0.015625 -0.515625 0.171875 -0.984375q0.28125 -0.765625 0.921875 -1.515625q0.640625 -0.75 1.84375 -1.734375q1.859375 -1.53125 2.515625 -2.421875q0.65625 -0.90625 0.65625 -1.703125q0 -0.828125 -0.59375 -1.40625q-0.59375 -0.578125 -1.5625 -0.578125q-1.015625 0 -1.625 0.609375q-0.609375 0.609375 -0.609375 1.6875l-1.453125 -0.140625q0.15625 -1.625 1.125 -2.46875q0.96875 -0.84375 2.59375 -0.84375q1.65625 0 2.609375 0.921875q0.96875 0.90625 0.96875 2.25q0 0.6875 -0.28125 1.359375q-0.28125 0.65625 -0.9375 1.390625q-0.65625 0.734375 -2.171875 2.015625q-1.265625 1.0625 -1.625 1.453125q-0.359375 0.375 -0.59375 0.75l5.625 0zm2.2578125 1.359375l0 -1.609375l1.609375 0l0 1.609375q0 0.890625 -0.3125 1.421875q-0.3125 0.546875 -1.0 0.84375l-0.390625 -0.59375q0.453125 -0.203125 0.65625 -0.578125q0.21875 -0.375 0.234375 -1.09375l-0.796875 0zm11.59375 -1.265625l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 
-0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3828125 1.265625l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm14.5703125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm12.265625 4.953125l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 
0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.2109375 4.953125l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm11.84375 1.65625l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm14.0 2.484375l0 
-1.21875q-0.96875 1.40625 -2.640625 1.40625q-0.734375 0 -1.375 -0.28125q-0.625 -0.28125 -0.9375 -0.703125q-0.3125 -0.4375 -0.4375 -1.046875q-0.078125 -0.421875 -0.078125 -1.3125l0 -5.140625l1.40625 0l0 4.59375q0 1.109375 0.078125 1.484375q0.140625 0.5625 0.5625 0.875q0.4375 0.3125 1.0625 0.3125q0.640625 0 1.1875 -0.3125q0.5625 -0.328125 0.78125 -0.890625q0.234375 -0.5625 0.234375 -1.625l0 -4.4375l1.40625 0l0 8.296875l-1.25 0zm4.7578125 0l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm10.6796875 2.953125l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3671875 1.265625l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 
2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.5078125 2.28125l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.578125 4.953125l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm7.5546875 7.421875l-0.15625 -1.328125q0.453125 0.125 0.796875 0.125q0.46875 0 0.75 -0.15625q0.28125 -0.15625 0.46875 -0.4375q0.125 -0.203125 0.421875 -1.046875q0.046875 -0.109375 0.125 -0.34375l-3.140625 -8.3125l1.515625 0l1.71875 4.796875q0.34375 0.921875 0.609375 1.921875q0.234375 -0.96875 0.578125 -1.890625l1.765625 -4.828125l1.40625 0l-3.15625 8.4375q-0.5 1.375 -0.78125 1.890625q-0.375 0.6875 -0.859375 
1.015625q-0.484375 0.328125 -1.15625 0.328125q-0.40625 0 -0.90625 -0.171875zm12.6328125 -3.203125l0 -11.453125l2.28125 0l2.71875 8.109375q0.375 1.125 0.546875 1.6875q0.1875 -0.625 0.609375 -1.828125l2.734375 -7.96875l2.046875 0l0 11.453125l-1.46875 0l0 -9.59375l-3.328125 9.59375l-1.359375 0l-3.3125 -9.75l0 9.75l-1.46875 0zm13.375 0l0 -11.453125l3.953125 0q1.328125 0 2.03125 0.15625q0.984375 0.234375 1.6875 0.828125q0.90625 0.765625 1.34375 1.953125q0.453125 1.1875 0.453125 2.71875q0 1.3125 -0.3125 2.328125q-0.296875 1.0 -0.78125 1.65625q-0.46875 0.65625 -1.03125 1.046875q-0.5625 0.375 -1.375 0.578125q-0.796875 0.1875 -1.828125 0.1875l-4.140625 0zm1.515625 -1.359375l2.453125 0q1.125 0 1.765625 -0.203125q0.65625 -0.21875 1.03125 -0.59375q0.546875 -0.546875 0.84375 -1.453125q0.296875 -0.90625 0.296875 -2.203125q0 -1.796875 -0.59375 -2.765625q-0.578125 -0.96875 -1.421875 -1.296875q-0.609375 -0.234375 -1.96875 -0.234375l-2.40625 0l0 8.75zm9.5234375 -2.328125l1.4375 -0.125q0.09375 0.859375 0.46875 1.421875q0.375 0.546875 1.15625 0.890625q0.78125 0.328125 1.75 0.328125q0.875 0 1.53125 -0.25q0.671875 -0.265625 0.984375 -0.703125q0.328125 -0.453125 0.328125 -0.984375q0 -0.546875 -0.3125 -0.9375q-0.3125 -0.40625 -1.03125 -0.6875q-0.453125 -0.171875 -2.03125 -0.546875q-1.578125 -0.390625 -2.21875 -0.71875q-0.8125 -0.4375 -1.21875 -1.0625q-0.40625 -0.640625 -0.40625 -1.4375q0 -0.859375 0.484375 -1.609375q0.5 -0.765625 1.4375 -1.15625q0.953125 -0.390625 2.109375 -0.390625q1.28125 0 2.25 0.421875q0.96875 0.40625 1.484375 1.203125q0.53125 0.796875 0.578125 1.796875l-1.453125 0.109375q-0.125 -1.078125 -0.796875 -1.625q-0.671875 -0.5625 -2.0 -0.5625q-1.375 0 -2.0 0.5q-0.625 0.5 -0.625 1.21875q0 0.609375 0.4375 1.015625q0.4375 0.390625 2.28125 0.8125q1.859375 0.421875 2.546875 0.734375q1.0 0.453125 1.46875 1.171875q0.484375 0.703125 0.484375 1.625q0 0.90625 -0.53125 1.71875q-0.515625 0.8125 -1.5 1.265625q-0.984375 0.453125 -2.203125 0.453125q-1.5625 0 -2.609375 -0.453125q-1.046875 
-0.46875 -1.65625 -1.375q-0.59375 -0.90625 -0.625 -2.0625zm15.0703125 0.65625l1.40625 -0.1875q0.25 1.203125 0.828125 1.734375q0.578125 0.515625 1.421875 0.515625q0.984375 0 1.671875 -0.6875q0.6875 -0.6875 0.6875 -1.703125q0 -0.96875 -0.640625 -1.59375q-0.625 -0.625 -1.609375 -0.625q-0.390625 0 -0.984375 0.15625l0.15625 -1.234375q0.140625 0.015625 0.21875 0.015625q0.90625 0 1.625 -0.46875q0.71875 -0.46875 0.71875 -1.453125q0 -0.765625 -0.53125 -1.265625q-0.515625 -0.515625 -1.34375 -0.515625q-0.828125 0 -1.375 0.515625q-0.546875 0.515625 -0.703125 1.546875l-1.40625 -0.25q0.265625 -1.421875 1.171875 -2.1875q0.921875 -0.78125 2.28125 -0.78125q0.9375 0 1.71875 0.40625q0.796875 0.390625 1.203125 1.09375q0.421875 0.6875 0.421875 1.46875q0 0.75 -0.40625 1.359375q-0.390625 0.609375 -1.171875 0.96875q1.015625 0.234375 1.578125 0.96875q0.5625 0.734375 0.5625 1.84375q0 1.5 -1.09375 2.546875q-1.09375 1.046875 -2.765625 1.046875q-1.5 0 -2.5 -0.890625q-1.0 -0.90625 -1.140625 -2.34375zm19.140625 2.0q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 
1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm3.6015625 4.171875l0 -8.296875l1.265625 0l0 1.171875q0.90625 -1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm14.2734375 0l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm15.46875 2.875l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3828125 1.265625l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm14.5703125 
-2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm17.65625 4.953125l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm13.3671875 3.109375q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 
0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm3.5859375 4.171875l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm5.34375 0l0 -11.453125l1.40625 0l0 6.53125l3.328125 -3.375l1.828125 0l-3.171875 3.078125l3.484375 5.21875l-1.734375 0l-2.734375 -4.25l-1.0 0.953125l0 3.296875l-1.40625 0zm13.7421875 0l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm7.5859375 4.21875l0 -11.453125l1.40625 0l0 11.453125l-1.40625 0zm9.0234375 0l0 -1.21875q-0.96875 1.40625 -2.640625 1.40625q-0.734375 0 -1.375 -0.28125q-0.625 -0.28125 -0.9375 -0.703125q-0.3125 -0.4375 -0.4375 -1.046875q-0.078125 -0.421875 -0.078125 -1.3125l0 -5.140625l1.40625 0l0 4.59375q0 1.109375 0.078125 1.484375q0.140625 0.5625 0.5625 0.875q0.4375 0.3125 1.0625 0.3125q0.640625 0 1.1875 -0.3125q0.5625 -0.328125 0.78125 -0.890625q0.234375 -0.5625 0.234375 -1.625l0 -4.4375l1.40625 0l0 8.296875l-1.25 0zm9.1328125 -2.671875l1.453125 
0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm11.71875 2.46875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm14.0 2.484375l0 -1.21875q-0.96875 1.40625 -2.640625 1.40625q-0.734375 0 -1.375 -0.28125q-0.625 -0.28125 -0.9375 -0.703125q-0.3125 -0.4375 -0.4375 -1.046875q-0.078125 -0.421875 -0.078125 -1.3125l0 -5.140625l1.40625 0l0 4.59375q0 1.109375 0.078125 1.484375q0.140625 0.5625 0.5625 0.875q0.4375 0.3125 1.0625 0.3125q0.640625 0 1.1875 -0.3125q0.5625 -0.328125 0.78125 
-0.890625q0.234375 -0.5625 0.234375 -1.625l0 -4.4375l1.40625 0l0 8.296875l-1.25 0zm4.7578125 0l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm10.6796875 2.953125l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm1.3671875 1.265625l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 
0.640625 -0.71875 1.71875zm13.5078125 2.28125l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm10.5234375 8.328125q-1.171875 -1.46875 -1.984375 -3.4375q-0.796875 -1.984375 -0.796875 -4.09375q0 -1.859375 0.609375 -3.5625q0.703125 -1.96875 2.171875 -3.9375l1.0 0q-0.9375 1.625 -1.25 2.328125q-0.46875 1.078125 -0.75 2.25q-0.328125 1.453125 -0.328125 2.9375q0 3.75 2.328125 7.515625l-1.0 0zm8.6171875 -3.375l-2.546875 -8.296875l1.453125 0l1.328125 4.78125l0.484375 1.78125q0.03125 -0.125 0.4375 -1.703125l1.3125 -4.859375l1.453125 0l1.234375 4.8125l0.421875 1.578125l0.46875 -1.59375l1.421875 -4.796875l1.375 0l-2.59375 8.296875l-1.46875 0l-1.3125 -4.96875l-0.328125 -1.421875l-1.671875 6.390625l-1.46875 0zm10.0234375 0l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm8.8984375 -9.84375l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm8.9609375 -3.046875l1.390625 0.1875q-0.234375 1.421875 -1.171875 2.234375q-0.921875 0.8125 -2.28125 0.8125q-1.703125 0 -2.75 -1.109375q-1.03125 -1.125 -1.03125 -3.203125q0 -1.34375 0.4375 -2.34375q0.453125 -1.015625 
1.359375 -1.515625q0.921875 -0.5 1.984375 -0.5q1.359375 0 2.21875 0.6875q0.859375 0.671875 1.09375 1.9375l-1.359375 0.203125q-0.203125 -0.828125 -0.703125 -1.25q-0.484375 -0.421875 -1.1875 -0.421875q-1.0625 0 -1.734375 0.765625q-0.65625 0.75 -0.65625 2.40625q0 1.671875 0.640625 2.4375q0.640625 0.75 1.671875 0.75q0.828125 0 1.375 -0.5q0.5625 -0.515625 0.703125 -1.578125zm2.59375 3.046875l0 -11.453125l1.40625 0l0 4.109375q0.984375 -1.140625 2.484375 -1.140625q0.921875 0 1.59375 0.359375q0.6875 0.359375 0.96875 1.0q0.296875 0.640625 0.296875 1.859375l0 5.265625l-1.40625 0l0 -5.265625q0 -1.046875 -0.453125 -1.53125q-0.453125 -0.484375 -1.296875 -0.484375q-0.625 0 -1.171875 0.328125q-0.546875 0.328125 -0.78125 0.890625q-0.234375 0.546875 -0.234375 1.515625l0 4.546875l-1.40625 0zm13.34375 -9.84375l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm2.9921875 -2.484375l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm12.9921875 -7.34375l0 -1.625l1.40625 0l0 
1.625l-1.40625 0zm-1.78125 13.046875l0.265625 -1.1875q0.421875 0.109375 0.671875 0.109375q0.421875 0 0.625 -0.296875q0.21875 -0.28125 0.21875 -1.421875l0 -8.71875l1.40625 0l0 8.75q0 1.53125 -0.390625 2.140625q-0.515625 0.78125 -1.6875 0.78125q-0.578125 0 -1.109375 -0.15625zm10.7890625 -3.21875l0 -1.21875q-0.96875 1.40625 -2.640625 1.40625q-0.734375 0 -1.375 -0.28125q-0.625 -0.28125 -0.9375 -0.703125q-0.3125 -0.4375 -0.4375 -1.046875q-0.078125 -0.421875 -0.078125 -1.3125l0 -5.140625l1.40625 0l0 4.59375q0 1.109375 0.078125 1.484375q0.140625 0.5625 0.5625 0.875q0.4375 0.3125 1.0625 0.3125q0.640625 0 1.1875 -0.3125q0.5625 -0.328125 0.78125 -0.890625q0.234375 -0.5625 0.234375 -1.625l0 -4.4375l1.40625 0l0 8.296875l-1.25 0zm2.8984375 -2.484375l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 -0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm11.625 1.21875l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 
-0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm11.234375 0.234375q-0.78125 0.671875 -1.5 0.953125q-0.71875 0.265625 -1.546875 0.265625q-1.375 0 -2.109375 -0.671875q-0.734375 -0.671875 -0.734375 -1.703125q0 -0.609375 0.28125 -1.109375q0.28125 -0.515625 0.71875 -0.8125q0.453125 -0.3125 1.015625 -0.46875q0.421875 -0.109375 1.25 -0.203125q1.703125 -0.203125 2.515625 -0.484375q0 -0.296875 0 -0.375q0 -0.859375 -0.390625 -1.203125q-0.546875 -0.484375 -1.609375 -0.484375q-0.984375 0 -1.46875 0.359375q-0.46875 0.34375 -0.6875 1.21875l-1.375 -0.1875q0.1875 -0.875 0.609375 -1.421875q0.4375 -0.546875 1.25 -0.828125q0.8125 -0.296875 1.875 -0.296875q1.0625 0 1.71875 0.25q0.671875 0.25 0.984375 0.625q0.3125 0.375 0.4375 0.953125q0.078125 0.359375 0.078125 1.296875l0 1.875q0 1.96875 0.078125 2.484375q0.09375 0.515625 0.359375 1.0l-1.46875 0q-0.21875 -0.4375 -0.28125 -1.03125zm-0.109375 -3.140625q-0.765625 0.3125 -2.296875 0.53125q-0.875 0.125 -1.234375 0.28125q-0.359375 0.15625 -0.5625 0.46875q-0.1875 0.296875 -0.1875 0.65625q0 0.5625 0.421875 0.9375q0.4375 0.375 1.25 0.375q0.8125 0 1.4375 -0.34375q0.640625 -0.359375 0.9375 -0.984375q0.234375 -0.46875 0.234375 -1.40625l0 -0.515625zm7.484375 1.6875l1.390625 -0.21875q0.109375 0.84375 0.640625 1.296875q0.546875 0.4375 1.5 0.4375q0.96875 0 1.4375 -0.390625q0.46875 -0.40625 0.46875 -0.9375q0 -0.46875 -0.40625 -0.75q-0.296875 -0.1875 -1.4375 -0.46875q-1.546875 -0.390625 -2.15625 -0.671875q-0.59375 -0.296875 -0.90625 -0.796875q-0.296875 -0.5 -0.296875 -1.109375q0 -0.5625 0.25 -1.03125q0.25 -0.46875 0.6875 -0.78125q0.328125 -0.25 0.890625 -0.40625q0.578125 -0.171875 1.21875 -0.171875q0.984375 0 1.71875 0.28125q0.734375 0.28125 1.078125 0.765625q0.359375 0.46875 0.5 1.28125l-1.375 0.1875q-0.09375 -0.640625 -0.546875 -1.0q-0.453125 -0.359375 -1.265625 -0.359375q-0.96875 0 -1.390625 0.328125q-0.40625 0.3125 
-0.40625 0.734375q0 0.28125 0.171875 0.5q0.171875 0.21875 0.53125 0.375q0.21875 0.078125 1.25 0.359375q1.484375 0.390625 2.078125 0.65625q0.59375 0.25 0.921875 0.734375q0.34375 0.484375 0.34375 1.203125q0 0.703125 -0.421875 1.328125q-0.40625 0.609375 -1.1875 0.953125q-0.765625 0.34375 -1.734375 0.34375q-1.625 0 -2.46875 -0.671875q-0.84375 -0.671875 -1.078125 -2.0zm8.5625 -7.359375l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm3.5546875 0l0 -8.296875l1.265625 0l0 1.171875q0.90625 -1.359375 2.640625 -1.359375q0.75 0 1.375 0.265625q0.625 0.265625 0.9375 0.703125q0.3125 0.4375 0.4375 1.046875q0.078125 0.390625 0.078125 1.359375l0 5.109375l-1.40625 0l0 -5.046875q0 -0.859375 -0.171875 -1.28125q-0.15625 -0.4375 -0.578125 -0.6875q-0.40625 -0.25 -0.96875 -0.25q-0.90625 0 -1.5625 0.578125q-0.640625 0.5625 -0.640625 2.15625l0 4.53125l-1.40625 0zm8.6328125 0.6875l1.375 0.203125q0.078125 0.640625 0.46875 0.921875q0.53125 0.390625 1.4375 0.390625q0.96875 0 1.5 -0.390625q0.53125 -0.390625 0.71875 -1.09375q0.109375 -0.421875 0.109375 -1.8125q-0.921875 1.09375 -2.296875 1.09375q-1.71875 0 -2.65625 -1.234375q-0.9375 -1.234375 -0.9375 -2.96875q0 -1.1875 0.421875 -2.1875q0.4375 -1.0 1.25 -1.546875q0.828125 -0.546875 1.921875 -0.546875q1.46875 0 2.421875 1.1875l0 -1.0l1.296875 0l0 7.171875q0 1.9375 -0.390625 2.75q-0.390625 0.8125 -1.25 1.28125q-0.859375 0.46875 -2.109375 0.46875q-1.484375 0 -2.40625 -0.671875q-0.90625 -0.671875 -0.875 -2.015625zm1.171875 -4.984375q0 1.625 0.640625 2.375q0.65625 0.75 1.625 0.75q0.96875 0 1.625 -0.734375q0.65625 -0.75 0.65625 -2.34375q0 -1.53125 -0.671875 -2.296875q-0.671875 -0.78125 -1.625 -0.78125q-0.9375 0 -1.59375 0.765625q-0.65625 0.765625 -0.65625 2.265625zm7.9609375 4.296875l0 -11.453125l1.40625 0l0 11.453125l-1.40625 0zm9.2578125 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 
-2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 -0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm17.65625 4.953125l0 -1.046875q-0.78125 1.234375 -2.3125 1.234375q-1.0 0 -1.828125 -0.546875q-0.828125 -0.546875 -1.296875 -1.53125q-0.453125 -0.984375 -0.453125 -2.25q0 -1.25 0.40625 -2.25q0.421875 -1.015625 1.25 -1.546875q0.828125 -0.546875 1.859375 -0.546875q0.75 0 1.328125 0.3125q0.59375 0.3125 0.953125 0.828125l0 -4.109375l1.40625 0l0 11.453125l-1.3125 0zm-4.4375 -4.140625q0 1.59375 0.671875 2.390625q0.671875 0.78125 1.578125 0.78125q0.921875 0 1.5625 -0.75q0.65625 -0.765625 0.65625 -2.3125q0 -1.703125 -0.65625 -2.5q-0.65625 -0.796875 -1.625 -0.796875q-0.9375 0 -1.5625 0.765625q-0.625 0.765625 -0.625 2.421875zm7.9609375 -5.703125l0 -1.609375l1.40625 0l0 1.609375l-1.40625 0zm0 9.84375l0 -8.296875l1.40625 0l0 8.296875l-1.40625 0zm3.5390625 0l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 0zm11.015625 -2.671875l1.453125 0.171875q-0.34375 1.28125 -1.28125 1.984375q-0.921875 0.703125 -2.359375 0.703125q-1.828125 0 -2.890625 -1.125q-1.0625 -1.125 -1.0625 -3.140625q0 -2.09375 1.078125 -3.25q1.078125 -1.15625 2.796875 -1.15625q1.65625 0 2.703125 1.140625q1.0625 1.125 1.0625 3.171875q0 0.125 0 0.375l-6.1875 0q0.078125 1.375 0.765625 2.109375q0.703125 0.71875 1.734375 0.71875q0.78125 0 1.328125 -0.40625q0.546875 -0.40625 0.859375 -1.296875zm-4.609375 -2.28125l4.625 0q-0.09375 -1.046875 
-0.53125 -1.5625q-0.671875 -0.8125 -1.734375 -0.8125q-0.96875 0 -1.640625 0.65625q-0.65625 0.640625 -0.71875 1.71875zm13.2421875 1.90625l1.390625 0.1875q-0.234375 1.421875 -1.171875 2.234375q-0.921875 0.8125 -2.28125 0.8125q-1.703125 0 -2.75 -1.109375q-1.03125 -1.125 -1.03125 -3.203125q0 -1.34375 0.4375 -2.34375q0.453125 -1.015625 1.359375 -1.515625q0.921875 -0.5 1.984375 -0.5q1.359375 0 2.21875 0.6875q0.859375 0.671875 1.09375 1.9375l-1.359375 0.203125q-0.203125 -0.828125 -0.703125 -1.25q-0.484375 -0.421875 -1.1875 -0.421875q-1.0625 0 -1.734375 0.765625q-0.65625 0.75 -0.65625 2.40625q0 1.671875 0.640625 2.4375q0.640625 0.75 1.671875 0.75q0.828125 0 1.375 -0.5q0.5625 -0.515625 0.703125 -1.578125zm5.65625 1.78125l0.203125 1.25q-0.59375 0.125 -1.0625 0.125q-0.765625 0 -1.1875 -0.234375q-0.421875 -0.25 -0.59375 -0.640625q-0.171875 -0.40625 -0.171875 -1.671875l0 -4.765625l-1.03125 0l0 -1.09375l1.03125 0l0 -2.0625l1.40625 -0.84375l0 2.90625l1.40625 0l0 1.09375l-1.40625 0l0 4.84375q0 0.609375 0.0625 0.78125q0.078125 0.171875 0.25 0.28125q0.171875 0.09375 0.484375 0.09375q0.234375 0 0.609375 -0.0625zm0.8515625 -2.890625q0 -2.296875 1.28125 -3.40625q1.078125 -0.921875 2.609375 -0.921875q1.71875 0 2.796875 1.125q1.09375 1.109375 1.09375 3.09375q0 1.59375 -0.484375 2.515625q-0.484375 0.921875 -1.40625 1.4375q-0.90625 0.5 -2.0 0.5q-1.734375 0 -2.8125 -1.109375q-1.078125 -1.125 -1.078125 -3.234375zm1.453125 0q0 1.59375 0.6875 2.390625q0.703125 0.796875 1.75 0.796875q1.046875 0 1.734375 -0.796875q0.703125 -0.796875 0.703125 -2.4375q0 -1.53125 -0.703125 -2.328125q-0.6875 -0.796875 -1.734375 -0.796875q-1.046875 0 -1.75 0.796875q-0.6875 0.78125 -0.6875 2.375zm7.9609375 4.15625l0 -8.296875l1.265625 0l0 1.25q0.484375 -0.875 0.890625 -1.15625q0.40625 -0.28125 0.90625 -0.28125q0.703125 0 1.4375 0.453125l-0.484375 1.296875q-0.515625 -0.296875 -1.03125 -0.296875q-0.453125 0 -0.828125 0.28125q-0.359375 0.265625 -0.515625 0.765625q-0.234375 0.75 -0.234375 1.640625l0 4.34375l-1.40625 
0zm5.28125 3.203125l-0.15625 -1.328125q0.453125 0.125 0.796875 0.125q0.46875 0 0.75 -0.15625q0.28125 -0.15625 0.46875 -0.4375q0.125 -0.203125 0.421875 -1.046875q0.046875 -0.109375 0.125 -0.34375l-3.140625 -8.3125l1.515625 0l1.71875 4.796875q0.34375 0.921875 0.609375 1.921875q0.234375 -0.96875 0.578125 -1.890625l1.765625 -4.828125l1.40625 0l-3.15625 8.4375q-0.5 1.375 -0.78125 1.890625q-0.375 0.6875 -0.859375 1.015625q-0.484375 0.328125 -1.15625 0.328125q-0.40625 0 -0.90625 -0.171875zm8.984375 0.171875l-1.015625 0q2.34375 -3.765625 2.34375 -7.515625q0 -1.46875 -0.34375 -2.921875q-0.265625 -1.171875 -0.734375 -2.25q-0.3125 -0.703125 -1.265625 -2.34375l1.015625 0q1.46875 1.96875 2.171875 3.9375q0.59375 1.703125 0.59375 3.5625q0 2.109375 -0.8125 4.09375q-0.796875 1.96875 -1.953125 3.4375zm10.1484375 -3.375l-1.3125 0l0 -11.453125l1.40625 0l0 4.078125q0.890625 -1.109375 2.28125 -1.109375q0.765625 0 1.4375 0.3125q0.6875 0.296875 1.125 0.859375q0.453125 0.5625 0.703125 1.359375q0.25 0.78125 0.25 1.671875q0 2.140625 -1.0625 3.3125q-1.046875 1.15625 -2.53125 1.15625q-1.46875 0 -2.296875 -1.234375l0 1.046875zm-0.015625 -4.21875q0 1.5 0.40625 2.15625q0.65625 1.09375 1.796875 1.09375q0.921875 0 1.59375 -0.796875q0.671875 -0.8125 0.671875 -2.390625q0 -1.625 -0.65625 -2.390625q-0.640625 -0.78125 -1.546875 -0.78125q-0.921875 0 -1.59375 0.796875q-0.671875 0.796875 -0.671875 2.3125zm7.5546875 7.421875l-0.15625 -1.328125q0.453125 0.125 0.796875 0.125q0.46875 0 0.75 -0.15625q0.28125 -0.15625 0.46875 -0.4375q0.125 -0.203125 0.421875 -1.046875q0.046875 -0.109375 0.125 -0.34375l-3.140625 -8.3125l1.515625 0l1.71875 4.796875q0.34375 0.921875 0.609375 1.921875q0.234375 -0.96875 0.578125 -1.890625l1.765625 -4.828125l1.40625 0l-3.15625 8.4375q-0.5 1.375 -0.78125 1.890625q-0.375 0.6875 -0.859375 1.015625q-0.484375 0.328125 -1.15625 0.328125q-0.40625 0 -0.90625 -0.171875z" fill-rule="nonzero"/><path fill="#000000" d="m38.116634 366.33994l0 -11.453125l2.28125 0l2.71875 8.109375q0.375 1.125 
0.546875 1.6875q0.1875 -0.625 0.609375 -1.828125l2.734375 -7.96875l2.046875 0l0 11.453125l-1.46875 0l0 -9.59375l-3.328125 9.59375l-1.359375 0l-3.3125 -9.75l0 9.75l-1.46875 0zm13.375 0l0 -11.453125l3.953125 0q1.328125 0 2.03125 0.15625q0.984375 0.234375 1.6875 0.828125q0.90625 0.765625 1.34375 1.953125q0.453125 1.1875 0.453125 2.71875q0 1.3125 -0.3125 2.328125q-0.296875 1.0 -0.78125 1.65625q-0.46875 0.65625 -1.03125 1.046875q-0.5625 0.375 -1.375 0.578125q-0.796875 0.1875 -1.828125 0.1875l-4.140625 0zm1.515625 -1.359375l2.453125 0q1.125 0 1.765625 -0.203125q0.65625 -0.21875 1.03125 -0.59375q0.546875 -0.546875 0.84375 -1.453125q0.296875 -0.90625 0.296875 -2.203125q0 -1.796875 -0.59375 -2.765625q-0.578125 -0.96875 -1.421875 -1.296875q-0.609375 -0.234375 -1.96875 -0.234375l-2.40625 0l0 8.75zm9.5234375 -2.328125l1.4375 -0.125q0.093746185 0.859375 0.4687462 1.421875q0.375 0.546875 1.15625 0.890625q0.78125 0.328125 1.75 0.328125q0.875 0 1.53125 -0.25q0.671875 -0.265625 0.984375 -0.703125q0.328125 -0.453125 0.328125 -0.984375q0 -0.546875 -0.3125 -0.9375q-0.3125 -0.40625 -1.03125 -0.6875q-0.453125 -0.171875 -2.03125 -0.546875q-1.578125 -0.390625 -2.21875 -0.71875q-0.8124962 -0.4375 -1.2187462 -1.0625q-0.40625 -0.640625 -0.40625 -1.4375q0 -0.859375 0.484375 -1.609375q0.5 -0.765625 1.4374962 -1.15625q0.953125 -0.390625 2.109375 -0.390625q1.28125 0 2.25 0.421875q0.96875 0.40625 1.484375 1.203125q0.53125 0.796875 0.578125 1.796875l-1.453125 0.109375q-0.125 -1.078125 -0.796875 -1.625q-0.671875 -0.5625 -2.0 -0.5625q-1.375 0 -2.0 0.5q-0.625 0.5 -0.625 1.21875q0 0.609375 0.4375 1.015625q0.4375 0.390625 2.28125 0.8125q1.859375 0.421875 2.546875 0.734375q1.0 0.453125 1.46875 1.171875q0.484375 0.703125 0.484375 1.625q0 0.90625 -0.53125 1.71875q-0.515625 0.8125 -1.5 1.265625q-0.984375 0.453125 -2.203125 0.453125q-1.5625 0 -2.609375 -0.453125q-1.0468712 -0.46875 -1.6562462 -1.375q-0.59375 -0.90625 -0.625 -2.0625zm19.570309 3.6875l0 -2.75l-4.96875 0l0 -1.28125l5.234375 -7.421875l1.140625 
0l0 7.421875l1.546875 0l0 1.28125l-1.546875 0l0 2.75l-1.40625 0zm0 -4.03125l0 -5.171875l-3.578125 5.171875l3.578125 0zm5.1796875 4.03125l0 -1.609375l1.609375 0l0 1.609375l-1.609375 0z" fill-rule="nonzero"/></g></svg> \ No newline at end of file
diff --git a/doc/cephfs/troubleshooting.rst b/doc/cephfs/troubleshooting.rst
new file mode 100644
index 000000000..0e511526b
--- /dev/null
+++ b/doc/cephfs/troubleshooting.rst
@@ -0,0 +1,429 @@
+=================
+ Troubleshooting
+=================
+
+Slow/stuck operations
+=====================
+
+If you are experiencing apparently hung operations, the first task is to identify
+where the problem is occurring: in the client, the MDS, or the network connecting
+them. Start by looking to see if either side has stuck operations
+(:ref:`slow_requests`, below), and narrow it down from there.
+
+We can get hints about what's going on by dumping the MDS cache ::
+
+ ceph daemon mds.<name> dump cache /tmp/dump.txt
+
+.. note:: The file `dump.txt` is written on the machine running the MDS. For
+   systemd-controlled MDS services, it lands in a tmpfs inside the MDS
+   container. Use `nsenter(1)` to locate `dump.txt`, or specify another
+   system-wide path.
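
As a sketch of the `nsenter(1)` approach suggested in the note above (the process match and dump path are examples; adjust both for your deployment):

```shell
# Hypothetical example: read /tmp/dump.txt from inside the mount namespace
# of the running ceph-mds process, since for containerized MDS services the
# dump lands in a tmpfs that is not visible from the host's own namespace.
MDS_PID=$(pgrep -x ceph-mds | head -n 1)
nsenter --target "$MDS_PID" --mount cat /tmp/dump.txt
```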
+
+If high logging levels have been set on the MDS, the logs will almost certainly
+hold the information we need to diagnose and solve the issue.
+
+Stuck during recovery
+=====================
+
+Stuck in up:replay
+------------------
+
+If your MDS is stuck in ``up:replay`` then it is likely that the journal is
+very long. Did you see ``MDS_HEALTH_TRIM`` cluster warnings saying the MDS is
+behind on trimming its journal? If the journal has grown very large, it can
+take hours to read the journal. There is no way to work around this, but there
+are things you can do to speed it along:
+
+Reduce MDS debugging to 0. Even at the default settings, the MDS logs some
+messages to memory for dumping in the event of a fatal error. You can avoid
+this overhead entirely:
+
+.. code:: bash
+
+ ceph config set mds debug_mds 0
+ ceph config set mds debug_ms 0
+ ceph config set mds debug_monc 0
+
+Note that if the MDS fails, there will be virtually no information to determine
+why. If you can calculate when ``up:replay`` will complete, you should restore
+these configs just prior to entering the next state:
+
+.. code:: bash
+
+ ceph config rm mds debug_mds
+ ceph config rm mds debug_ms
+ ceph config rm mds debug_monc
+
+Once you've got replay moving along faster, you can calculate when the MDS will
+complete. This is done by examining the journal replay status:
+
+.. code:: bash
+
+ $ ceph tell mds.<fs_name>:0 status | jq .replay_status
+ {
+ "journal_read_pos": 4195244,
+ "journal_write_pos": 4195244,
+ "journal_expire_pos": 4194304,
+ "num_events": 2,
+ "num_segments": 2
+ }
+
+Replay completes when the ``journal_read_pos`` reaches the
+``journal_write_pos``. The write position will not change during replay. Track
+the progression of the read position to compute the expected time to complete.
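To make that extrapolation concrete, here is a small sketch (the function and its names are ours, not part of Ceph) that turns two ``replay_status`` samples into an estimated time to completion:

```python
def replay_eta(read_pos_then, read_pos_now, write_pos, interval_secs):
    """Estimate seconds until journal_read_pos catches up to
    journal_write_pos, given two samples of the read position taken
    interval_secs apart."""
    rate = (read_pos_now - read_pos_then) / interval_secs  # bytes replayed per second
    if rate <= 0:
        return None  # replay is not making progress
    return (write_pos - read_pos_now) / rate

# Example: pass journal_read_pos from two `ceph tell ... status` calls
# taken 60 seconds apart, plus the (stable) journal_write_pos.
```

Poll every few minutes; if the estimate keeps growing, replay is stalling rather than progressing.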
+
+
+Avoiding recovery roadblocks
+----------------------------
+
+When trying to urgently restore your file system during an outage, here are some
+things to do:
+
+* **Deny all reconnect to clients.** This effectively blocklists all existing
+ CephFS sessions so all mounts will hang or become unavailable.
+
+.. code:: bash
+
+ ceph config set mds mds_deny_all_reconnect true
+
+ Remember to undo this after the MDS becomes active.
+
+.. note:: This does not prevent new sessions from connecting. For that, see the ``refuse_client_session`` file system setting.
+
+* **Extend the MDS heartbeat grace period**. This avoids replacing an MDS that appears
+ "stuck" doing some operation. Sometimes recovery of an MDS may involve an
+ operation that may take longer than expected (from the programmer's
+  perspective). This is more likely when recovery is already taking longer than
+  normal to complete (likely the case if you are reading this document).
+  Avoid unnecessary replacement loops by extending the heartbeat grace period:
+
+.. code:: bash
+
+ ceph config set mds mds_heartbeat_grace 3600
+
+ This has the effect of having the MDS continue to send beacons to the monitors
+ even when its internal "heartbeat" mechanism has not been reset (beat) in one
+ hour. Note the previous mechanism for achieving this was via the
+  ``mds_beacon_grace`` monitor setting.
+
+* **Disable open file table prefetch.** Normally, the MDS will prefetch
+ directory contents during recovery to heat up its cache. During long
+ recovery, the cache is probably already hot **and large**. So this behavior
+ can be undesirable. Disable using:
+
+.. code:: bash
+
+ ceph config set mds mds_oft_prefetch_dirfrags false
+
+* **Turn off clients.** Clients reconnecting to the newly ``up:active`` MDS may
+ cause new load on the file system when it's just getting back on its feet.
+ There will likely be some general maintenance to do before workloads should be
+ resumed. For example, expediting journal trim may be advisable if the recovery
+  took a long time because replay was reading an overly large journal.
+
+ You can do this manually or use the new file system tunable:
+
+.. code:: bash
+
+ ceph fs set <fs_name> refuse_client_session true
+
+ That prevents any clients from establishing new sessions with the MDS.
+
+
+
+Expediting MDS journal trim
+===========================
+
+If your MDS journal grew too large (maybe your MDS was stuck in ``up:replay`` for a
+long time!), you will want to have the MDS trim its journal more frequently.
+You will know the journal is too large because of ``MDS_HEALTH_TRIM`` warnings.
+
+The main tunable available to do this is to modify the MDS tick interval. The
+"tick" interval drives several upkeep activities in the MDS. It is strongly
+recommended no significant file system load be present when modifying this tick
+interval. This setting only affects an MDS in ``up:active``. The MDS does not
+trim its journal during recovery.
+
+.. code:: bash
+
+ ceph config set mds mds_tick_interval 2
+
+
+RADOS Health
+============
+
+If part of the CephFS metadata or data pools is unavailable and CephFS is not
+responding, it is probably because RADOS itself is unhealthy. Resolve those
+problems first (:doc:`../../rados/troubleshooting/index`).
+
+The MDS
+=======
+
+If an operation is hung inside the MDS, it will eventually show up in ``ceph health``,
+identifying "slow requests are blocked". It may also identify clients as
+"failing to respond" or misbehaving in other ways. If the MDS identifies
+specific clients as misbehaving, you should investigate why they are doing so.
+
+Generally it will be the result of one of the following:
+
+#. Overloading the system (if you have extra RAM, increase the
+ "mds cache memory limit" config from its default 1GiB; having a larger active
+ file set than your MDS cache is the #1 cause of this!).
+
+#. Running an older (misbehaving) client.
+
+#. Underlying RADOS issues.
+
+Otherwise, you have probably discovered a new bug and should report it to
+the developers!
+
+.. _slow_requests:
+
+Slow requests (MDS)
+-------------------
+You can list current operations via the admin socket by running::
+
+ ceph daemon mds.<name> dump_ops_in_flight
+
+from the MDS host. Identify the stuck commands and examine why they are stuck.
+Usually the last "event" will have been an attempt to gather locks, or sending
+the operation off to the MDS log. If it is waiting on the OSDs, fix them. If
+operations are stuck on a specific inode, you probably have a client holding
+caps which prevent others from using it, either because the client is trying
+to flush out dirty data or because you have encountered a bug in CephFS'
+distributed file lock code (the file "capabilities" ["caps"] system).
+
+If it's a result of a bug in the capabilities code, restarting the MDS
+is likely to resolve the problem.
+
+If there are no slow requests reported on the MDS, and it is not reporting
+that clients are misbehaving, either the client has a problem or its
+requests are not reaching the MDS.
+
+.. _ceph_fuse_debugging:
+
+ceph-fuse debugging
+===================
+
+ceph-fuse also supports ``dump_ops_in_flight``. See if it has any and where they are
+stuck.
+
+Debug output
+------------
+
+To get more debugging information from ceph-fuse, try running in the foreground
+with logging to the console (``-d``) and enabling client debug
+(``--debug-client=20``), enabling prints for each message sent
+(``--debug-ms=1``).
+
+If you suspect a potential monitor issue, enable monitor debugging as well
+(``--debug-monc=20``).
+
+.. _kernel_mount_debugging:
+
+Kernel mount debugging
+======================
+
+If there is an issue with the kernel client, the most important thing is
+figuring out whether the problem is with the kernel client or the MDS. Generally,
+this is easy to work out. If the kernel client broke directly, there will be
+output in ``dmesg``. Collect it and any inappropriate kernel state.
+
+Slow requests
+-------------
+
+Unfortunately the kernel client does not support the admin socket, but it has
+similar (if limited) interfaces if your kernel has debugfs enabled. There
+will be a folder in ``/sys/kernel/debug/ceph/``, and that folder (whose name will
+look something like ``28f7427e-5558-4ffd-ae1a-51ec3042759a.client25386880``)
+will contain a variety of files that produce interesting output when you ``cat``
+them. These files are described below; the most interesting when debugging
+slow requests are probably the ``mdsc`` and ``osdc`` files.
+
+* bdi: BDI info about the Ceph system (blocks dirtied, written, etc)
+* caps: counts of file "caps" structures in-memory and used
+* client_options: dumps the options provided to the CephFS mount
+* dentry_lru: Dumps the CephFS dentries currently in-memory
+* mdsc: Dumps current requests to the MDS
+* mdsmap: Dumps the current MDSMap epoch and MDSes
+* mds_sessions: Dumps the current sessions to MDSes
+* monc: Dumps the current maps from the monitor, and any "subscriptions" held
+* monmap: Dumps the current monitor map epoch and monitors
+* osdc: Dumps the current ops in-flight to OSDs (i.e., file data IO)
+* osdmap: Dumps the current OSDMap epoch, pools, and OSDs
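As an illustrative sketch (not a Ceph tool; the helper name is ours), you can snapshot these per-mount files with a few lines of Python run as root:

```python
import glob
import os

def collect_debugfs_state(base="/sys/kernel/debug/ceph",
                          files=("mdsc", "osdc")):
    """Read selected kernel-client debug files for every CephFS mount
    under ``base`` and return {client_dir: {file_name: contents}}.
    The default path requires root and a mounted debugfs."""
    state = {}
    for client_dir in sorted(glob.glob(os.path.join(base, "*"))):
        entry = {}
        for name in files:
            path = os.path.join(client_dir, name)
            if os.path.isfile(path):
                with open(path) as f:
                    entry[name] = f.read()
        if entry:
            state[os.path.basename(client_dir)] = entry
    return state
```

Capturing ``mdsc`` and ``osdc`` together in one pass makes it easier to tell whether a stuck request is waiting on the MDS or on the OSDs.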
+
+If the data pool is in a NEARFULL condition, then the kernel cephfs client
+will switch to doing writes synchronously, which is quite slow.
+
+Disconnected+Remounted FS
+=========================
+Because CephFS has a "consistent cache", if your network connection is
+disrupted for a long enough time, the client will be forcibly
+disconnected from the system. At this point, the kernel client is in
+a bind: it cannot safely write back dirty data, and many applications
+do not handle IO errors correctly on close().
+At the moment, the kernel client will remount the FS, but outstanding file system
+IO may or may not be satisfied. In these cases, you may need to reboot your
+client system.
+
+You can identify that you are in this situation if ``dmesg``/``kern.log`` reports something like::
+
+ Jul 20 08:14:38 teuthology kernel: [3677601.123718] ceph: mds0 closed our session
+ Jul 20 08:14:38 teuthology kernel: [3677601.128019] ceph: mds0 reconnect start
+ Jul 20 08:14:39 teuthology kernel: [3677602.093378] ceph: mds0 reconnect denied
+ Jul 20 08:14:39 teuthology kernel: [3677602.098525] ceph: dropping dirty+flushing Fw state for ffff8802dc150518 1099935956631
+ Jul 20 08:14:39 teuthology kernel: [3677602.107145] ceph: dropping dirty+flushing Fw state for ffff8801008e8518 1099935946707
+ Jul 20 08:14:39 teuthology kernel: [3677602.196747] libceph: mds0 172.21.5.114:6812 socket closed (con state OPEN)
+ Jul 20 08:14:40 teuthology kernel: [3677603.126214] libceph: mds0 172.21.5.114:6812 connection reset
+ Jul 20 08:14:40 teuthology kernel: [3677603.132176] libceph: reset on mds0
+
+This is an area of ongoing work to improve the behavior. Kernels will soon
+be reliably issuing error codes to in-progress IO, although your application(s)
+may not deal with them well. In the longer-term, we hope to allow reconnect
+and reclaim of data in cases where it won't violate POSIX semantics (generally,
+data which hasn't been accessed or modified by other clients).
+
+Mounting
+========
+
+Mount 5 Error
+-------------
+
+A mount 5 error typically occurs if an MDS server is laggy or if it crashed.
+Ensure at least one MDS is up and running, and the cluster is ``active +
+healthy``.
+
+Mount 12 Error
+--------------
+
+A mount 12 error with ``cannot allocate memory`` usually occurs if you have a
+version mismatch between the :term:`Ceph Client` version and the :term:`Ceph
+Storage Cluster` version. Check the versions using::
+
+ ceph -v
+
+If the Ceph Client is behind the Ceph cluster, try to upgrade it::
+
+ sudo apt-get update && sudo apt-get install ceph-common
+
+You may need to uninstall, autoclean and autoremove ``ceph-common``
+and then reinstall it so that you have the latest version.
+
+Dynamic Debugging
+=================
+
+You can enable dynamic debug against the CephFS module.
+
+Please see: https://github.com/ceph/ceph/blob/master/src/script/kcon_all.sh
+
+In-memory Log Dump
+==================
+
+In-memory logs can be dumped by setting ``mds_extraordinary_events_dump_interval``
+during a lower level debugging (log level < 10). ``mds_extraordinary_events_dump_interval``
+is the interval in seconds for dumping the recent in-memory logs when there is an Extra-Ordinary event.
+
+The Extra-Ordinary events are classified as:
+
+* Client Eviction
+* Missed Beacon ACK from the monitors
+* Missed Internal Heartbeats
+
+In-memory Log Dump is disabled by default to prevent log file bloat in a production environment.
+The following commands, run in that order, enable it::
+
+ $ ceph config set mds debug_mds <log_level>/<gather_level>
+ $ ceph config set mds mds_extraordinary_events_dump_interval <seconds>
+
+The ``log_level`` should be < 10 and ``gather_level`` should be >= 10 to enable in-memory log dump.
+When it is enabled, the MDS checks for the extra-ordinary events every
+``mds_extraordinary_events_dump_interval`` seconds and if any of them occurs, MDS dumps the
+in-memory logs containing the relevant event details in ceph-mds log.
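The enabling conditions can be summarized in one predicate (a sketch of the rule described above, not Ceph code):

```python
def in_memory_dump_active(log_level, gather_level, interval_secs):
    """True when the MDS will dump in-memory logs on an extraordinary
    event: debug_mds log level below 10, gather level at least 10, and
    a non-zero mds_extraordinary_events_dump_interval."""
    return interval_secs > 0 and log_level < 10 and gather_level >= 10
```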
+
+.. note:: For higher log levels (``log_level`` >= 10) there is no reason to dump the
+          in-memory logs, and a lower gather level (``gather_level`` < 10) is
+          insufficient to gather them. Thus a log level >= 10 or a gather level < 10
+          in ``debug_mds`` prevents the in-memory log dump from being enabled. If
+          enabling fails for this reason, reset ``mds_extraordinary_events_dump_interval``
+          to 0 before retrying with the above commands.
+
+The In-memory Log Dump can be disabled using::
+
+ $ ceph config set mds mds_extraordinary_events_dump_interval 0
+
+Filesystems Become Inaccessible After an Upgrade
+================================================
+
+.. note::
+ You can avoid ``operation not permitted`` errors by running this procedure
+ before an upgrade. As of May 2023, it seems that ``operation not permitted``
+ errors of the kind discussed here occur after upgrades after Nautilus
+ (inclusive).
+
+IF
+
+you have CephFS file systems that have data and metadata pools that were
+created by a ``ceph fs new`` command (meaning that they were not created
+with the defaults)
+
+OR
+
+you have an existing CephFS file system and are upgrading to a new post-Nautilus
+major version of Ceph
+
+THEN
+
+in order for the documented ``ceph fs authorize...`` commands to function as
+documented (and to avoid 'operation not permitted' errors when doing file I/O
+or similar security-related problems for all users except the ``client.admin``
+user), you must first run:
+
+.. prompt:: bash $
+
+ ceph osd pool application set <your metadata pool name> cephfs metadata <your ceph fs filesystem name>
+
+and
+
+.. prompt:: bash $
+
+ ceph osd pool application set <your data pool name> cephfs data <your ceph fs filesystem name>
+
+Otherwise, when the OSDs receive a request to read or write data (not the
+directory info, but file data) they will not know which Ceph file system name
+to look up. This is true also of pool names, because the 'defaults' themselves
+changed in the major releases, from::
+
+ data pool=fsname
+ metadata pool=fsname_metadata
+
+to::
+
+ data pool=fsname.data and
+ metadata pool=fsname.meta
+
+Any setup that used ``client.admin`` for all mounts did not run into this
+problem, because the admin key gave blanket permissions.
+
+A temporary fix involves changing mount requests to the 'client.admin' user and
+its associated key. A less drastic, but only partial, fix is to change the OSD cap for
+your user to just ``caps osd = "allow rw"`` and delete ``tag cephfs
+data=....``
+
+Reporting Issues
+================
+
+If you have identified a specific issue, please report it with as much
+information as possible. Especially important information:
+
+* Ceph versions installed on client and server
+* Whether you are using the kernel or fuse client
+* If you are using the kernel client, what kernel version?
+* How many clients are in play, doing what kind of workload?
+* If a system is 'stuck', is that affecting all clients or just one?
+* Any ceph health messages
+* Any backtraces in the ceph logs from crashes
+
+If you are satisfied that you have found a bug, please file it on `the bug
+tracker`_. For more general queries, please write to the `ceph-users mailing
+list`_.
+
+.. _the bug tracker: http://tracker.ceph.com
+.. _ceph-users mailing list: http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com/
diff --git a/doc/cephfs/upgrading.rst b/doc/cephfs/upgrading.rst
new file mode 100644
index 000000000..c06e3e8e0
--- /dev/null
+++ b/doc/cephfs/upgrading.rst
@@ -0,0 +1,90 @@
+.. _upgrade-mds-cluster:
+
+Upgrading the MDS Cluster
+=========================
+
+Currently the MDS cluster does not have built-in versioning or file system
+flags to support seamless upgrades of the MDSs without potentially causing
+assertions or other faults due to incompatible messages or other functional
+differences. For this reason, it's necessary during any cluster upgrade to
+first reduce the number of active MDS daemons for each file system to one, so
+that two active MDS daemons running different versions do not communicate.
+
+The proper sequence for upgrading the MDS cluster is:
+
+1. For each file system, disable and stop standby-replay daemons.
+
+::
+
+ ceph fs set <fs_name> allow_standby_replay false
+
+In Pacific, the standby-replay daemons are stopped for you after running this
+command. Older versions of Ceph require you to stop these daemons manually.
+
+::
+
+ ceph fs dump # find standby-replay daemons
+ ceph mds fail mds.<X>
+
+
+2. For each file system, reduce the number of ranks to 1:
+
+::
+
+ ceph fs set <fs_name> max_mds 1
+
+3. Wait for the cluster to stop the non-zero ranks, so that only rank 0 is
+   active and the rest of the MDS daemons are standbys.
+
+::
+
+ ceph status # wait for MDS to finish stopping
+
+4. For each MDS, upgrade packages and restart. Note: to reduce failovers, it is
+ recommended -- but not strictly necessary -- to first upgrade standby daemons.
+
+::
+
+ # use package manager to update cluster
+ systemctl restart ceph-mds.target
+
+5. For each file system, restore the previous max_mds and allow_standby_replay settings for your cluster:
+
+::
+
+ ceph fs set <fs_name> max_mds <old_max_mds>
+ ceph fs set <fs_name> allow_standby_replay <old_allow_standby_replay>
+
+
+Upgrading pre-Firefly file systems past Jewel
+=============================================
+
+.. tip::
+
+ This advice only applies to users with file systems
+ created using versions of Ceph older than *Firefly* (0.80).
+ Users creating new file systems may disregard this advice.
+
+Pre-Firefly versions of Ceph used a now-deprecated format
+for storing CephFS directory objects, called TMAPs. Support
+for reading these in RADOS will be removed after the Jewel
+release of Ceph, so for upgrading CephFS users it is important
+to ensure that any old directory objects have been converted.
+
+After installing Jewel on all your MDS and OSD servers, and restarting
+the services, run the following command:
+
+::
+
+ cephfs-data-scan tmap_upgrade <metadata pool name>
+
+This only needs to be run once, and it is not necessary to
+stop any other services while it runs. The command may take some
+time to execute, as it iterates over all objects in your metadata
+pool. It is safe to continue using your file system as normal while
+it executes. If the command aborts for any reason, it is safe
+to simply run it again.
+
+If you are upgrading a pre-Firefly CephFS file system to a newer Ceph version
+than Jewel, you must first upgrade to Jewel and run the ``tmap_upgrade``
+command before completing your upgrade to the latest version.
+