Diffstat (limited to 'docs/docsite/rst/user_guide')
-rw-r--r--  docs/docsite/rst/user_guide/basic_concepts.rst | 12
-rw-r--r--  docs/docsite/rst/user_guide/become.rst | 702
-rw-r--r--  docs/docsite/rst/user_guide/collections_using.rst | 324
-rw-r--r--  docs/docsite/rst/user_guide/command_line_tools.rst | 20
-rw-r--r--  docs/docsite/rst/user_guide/complex_data_manipulation.rst | 246
-rw-r--r--  docs/docsite/rst/user_guide/connection_details.rst | 116
-rw-r--r--  docs/docsite/rst/user_guide/guide_rolling_upgrade.rst | 324
-rw-r--r--  docs/docsite/rst/user_guide/index.rst | 133
-rw-r--r--  docs/docsite/rst/user_guide/intro.rst | 15
-rw-r--r--  docs/docsite/rst/user_guide/intro_adhoc.rst | 206
-rw-r--r--  docs/docsite/rst/user_guide/intro_bsd.rst | 106
-rw-r--r--  docs/docsite/rst/user_guide/intro_dynamic_inventory.rst | 249
-rw-r--r--  docs/docsite/rst/user_guide/intro_getting_started.rst | 139
-rw-r--r--  docs/docsite/rst/user_guide/intro_inventory.rst | 788
-rw-r--r--  docs/docsite/rst/user_guide/intro_patterns.rst | 171
-rw-r--r--  docs/docsite/rst/user_guide/intro_windows.rst | 4
-rw-r--r--  docs/docsite/rst/user_guide/modules.rst | 36
-rw-r--r--  docs/docsite/rst/user_guide/modules_intro.rst | 52
-rw-r--r--  docs/docsite/rst/user_guide/modules_support.rst | 70
-rw-r--r--  docs/docsite/rst/user_guide/playbook_pathing.rst | 42
-rw-r--r--  docs/docsite/rst/user_guide/playbooks.rst | 21
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_advanced_syntax.rst | 112
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_async.rst | 161
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_best_practices.rst | 167
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_blocks.rst | 189
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_checkmode.rst | 97
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_conditionals.rst | 508
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_debugger.rst | 329
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_delegation.rst | 136
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_environment.rst | 141
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_error_handling.rst | 245
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_filters.rst | 1696
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_filters_ipaddr.rst | 744
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_handlers.rst | 148
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_intro.rst | 151
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_lookups.rst | 37
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_loops.rst | 445
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_module_defaults.rst | 143
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_prompts.rst | 116
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_python_version.rst | 64
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_reuse.rst | 201
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_reuse_includes.rst | 32
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_reuse_roles.rst | 490
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_roles.rst | 19
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_special_topics.rst | 8
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_startnstep.rst | 40
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_strategies.rst | 216
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_tags.rst | 428
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_templating.rst | 55
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_tests.rst | 395
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_variables.rst | 466
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_vars_facts.rst | 680
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_vault.rst | 6
-rw-r--r--  docs/docsite/rst/user_guide/plugin_filtering_config.rst | 26
-rw-r--r--  docs/docsite/rst/user_guide/quickstart.rst | 20
-rw-r--r--  docs/docsite/rst/user_guide/sample_setup.rst | 285
-rw-r--r--  docs/docsite/rst/user_guide/shared_snippets/SSH_password_prompt.txt | 2
-rw-r--r--  docs/docsite/rst/user_guide/shared_snippets/with2loop.txt | 205
-rw-r--r--  docs/docsite/rst/user_guide/vault.rst | 653
-rw-r--r--  docs/docsite/rst/user_guide/windows.rst | 21
-rw-r--r--  docs/docsite/rst/user_guide/windows_dsc.rst | 505
-rw-r--r--  docs/docsite/rst/user_guide/windows_faq.rst | 236
-rw-r--r--  docs/docsite/rst/user_guide/windows_performance.rst | 61
-rw-r--r--  docs/docsite/rst/user_guide/windows_setup.rst | 573
-rw-r--r--  docs/docsite/rst/user_guide/windows_usage.rst | 513
-rw-r--r--  docs/docsite/rst/user_guide/windows_winrm.rst | 913
66 files changed, 16454 insertions, 0 deletions
diff --git a/docs/docsite/rst/user_guide/basic_concepts.rst b/docs/docsite/rst/user_guide/basic_concepts.rst
new file mode 100644
index 00000000..76adc684
--- /dev/null
+++ b/docs/docsite/rst/user_guide/basic_concepts.rst
@@ -0,0 +1,12 @@
+.. _basic_concepts:
+
+****************
+Ansible concepts
+****************
+
+These concepts are common to all uses of Ansible. You need to understand them to use Ansible for any kind of automation. This basic introduction provides the background you need to follow the rest of the User Guide.
+
+.. contents::
+ :local:
+
+.. include:: /shared_snippets/basic_concepts.txt
diff --git a/docs/docsite/rst/user_guide/become.rst b/docs/docsite/rst/user_guide/become.rst
new file mode 100644
index 00000000..fed806bb
--- /dev/null
+++ b/docs/docsite/rst/user_guide/become.rst
@@ -0,0 +1,702 @@
+.. _become:
+
+******************************************
+Understanding privilege escalation: become
+******************************************
+
+Ansible uses existing privilege escalation systems to execute tasks with root privileges or with another user's permissions. Because this feature allows you to 'become' another user, different from the user that logged into the machine (remote user), we call it ``become``. The ``become`` keyword leverages existing privilege escalation tools like `sudo`, `su`, `pfexec`, `doas`, `pbrun`, `dzdo`, `ksu`, `runas`, `machinectl` and others.
+
+.. contents::
+ :local:
+
+Using become
+============
+
+You can control the use of ``become`` with play or task directives, connection variables, or at the command line. If you set privilege escalation properties in multiple ways, review the :ref:`general precedence rules<general_precedence_rules>` to understand which settings will be used.
+
+A full list of all become plugins that are included in Ansible can be found in the :ref:`become_plugin_list`.
+
+Become directives
+-----------------
+
+You can set the directives that control ``become`` at the play or task level. You can override these by setting connection variables, which often differ from one host to another. These variables and directives are independent. For example, setting ``become_user`` does not set ``become``.
+
+become
+ set to ``yes`` to activate privilege escalation.
+
+become_user
+ set to user with desired privileges — the user you `become`, NOT the user you log in as. Does NOT imply ``become: yes``, to allow it to be set at host level. Default value is ``root``.
+
+become_method
+ (at play or task level) overrides the default method set in ansible.cfg, set to use any of the :ref:`become_plugins`.
+
+become_flags
+ (at play or task level) permit the use of specific flags for the tasks or role. One common use is to change the user to nobody when the shell is set to nologin. Added in Ansible 2.2.
+
+For example, to manage a system service (which requires ``root`` privileges) when connected as a non-``root`` user, you can use the default value of ``become_user`` (``root``):
+
+.. code-block:: yaml
+
+ - name: Ensure the httpd service is running
+ service:
+ name: httpd
+ state: started
+ become: yes
+
+To run a command as the ``apache`` user:
+
+.. code-block:: yaml
+
+ - name: Run a command as the apache user
+ command: somecommand
+ become: yes
+ become_user: apache
+
+To do something as the ``nobody`` user when the shell is nologin:
+
+.. code-block:: yaml
+
+ - name: Run a command as nobody
+ command: somecommand
+ become: yes
+ become_method: su
+ become_user: nobody
+ become_flags: '-s /bin/sh'
+
+To specify a password for sudo, run ``ansible-playbook`` with ``--ask-become-pass`` (``-K`` for short).
+If you run a playbook utilizing ``become`` and the playbook seems to hang, most likely it is stuck at the privilege escalation prompt. Stop it with `CTRL-c`, then execute the playbook with ``-K`` and the appropriate password.
+
+Become connection variables
+---------------------------
+
+You can define different ``become`` options for each managed node or group. You can define these variables in inventory or use them as normal variables.
+
+ansible_become
+ equivalent of the become directive, decides if privilege escalation is used or not.
+
+ansible_become_method
+ which privilege escalation method should be used
+
+ansible_become_user
+ set the user you become through privilege escalation; does not imply ``ansible_become: yes``
+
+ansible_become_password
+ set the privilege escalation password. See :ref:`playbooks_vault` for details on how to avoid having secrets in plain text
+
+For example, if you want to run all tasks as ``root`` on a server named ``webserver``, but you can only connect as the ``manager`` user, you could use an inventory entry like this:
+
+.. code-block:: text
+
+ webserver ansible_user=manager ansible_become=yes
+
+.. note::
+ The variables defined above are generic for all become plugins, but plugin-specific ones can also be set instead.
+ Please see the documentation for each plugin for a list of all options the plugin has and how they can be defined.
+ A full list of become plugins in Ansible can be found at :ref:`become_plugins`.
+
+Become command-line options
+---------------------------
+
+--ask-become-pass, -K
+ ask for privilege escalation password; does not imply become will be used. Note that this password will be used for all hosts.
+
+--become, -b
+ run operations with become (no password implied)
+
+--become-method=BECOME_METHOD
+ privilege escalation method to use (default=sudo),
+ valid choices: [ sudo | su | pbrun | pfexec | doas | dzdo | ksu | runas | machinectl ]
+
+--become-user=BECOME_USER
+ run operations as this user (default=root), does not imply --become/-b
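+
+For example, a quick sketch of how these options combine on the command line (the playbook name and user are only placeholders):
+
+.. code-block:: console
+
+    # run site.yml, escalating to the postgres user and prompting for the escalation password
+    $ ansible-playbook site.yml --become --become-user=postgres --ask-become-pass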
+
+Risks and limitations of become
+===============================
+
+Although privilege escalation is mostly intuitive, there are a few limitations
+on how it works. Users should be aware of these to avoid surprises.
+
+Risks of becoming an unprivileged user
+--------------------------------------
+
+Ansible modules are executed on the remote machine by first substituting the
+parameters into the module file, then copying the file to the remote machine,
+and finally executing it there.
+
+Everything is fine if the module file is executed without using ``become``,
+when the ``become_user`` is root, or when the connection to the remote machine
+is made as root. In these cases Ansible creates the module file with permissions
+that only allow reading by the user and root, or only allow reading by the unprivileged
+user being switched to.
+
+However, when both the connection user and the ``become_user`` are unprivileged,
+the module file is written as the user that Ansible connects as, but the file needs to
+be readable by the user Ansible is set to ``become``. In this case, Ansible makes
+the module file world-readable for the duration of the Ansible module execution.
+Once the module is done executing, Ansible deletes the temporary file.
+
+If any of the parameters passed to the module are sensitive in nature, and you do
+not trust the client machines, then this is a potential danger.
+
+Ways to resolve this include:
+
+* Use `pipelining`. When pipelining is enabled, Ansible does not save the
+ module to a temporary file on the client. Instead it pipes the module to
+ the remote python interpreter's stdin. Pipelining does not work for
+ python modules involving file transfer (for example: :ref:`copy <copy_module>`,
+ :ref:`fetch <fetch_module>`, :ref:`template <template_module>`), or for non-python modules.
+ See the configuration sketch after this list for one way to enable it.
+
+* Install POSIX.1e filesystem acl support on the
+ managed host. If the temporary directory on the remote host is mounted with
+ POSIX acls enabled and the :command:`setfacl` tool is in the remote ``PATH``
+ then Ansible will use POSIX acls to share the module file with the second
+ unprivileged user instead of having to make the file readable by everyone.
+
+* Avoid becoming an unprivileged
+ user. Temporary files are protected by UNIX file permissions when you
+ ``become`` root or do not use ``become``. In Ansible 2.1 and above, UNIX
+ file permissions are also secure if you make the connection to the managed
+ machine as root and then use ``become`` to access an unprivileged account.
+
+.. warning:: Although the Solaris ZFS filesystem has filesystem ACLs, the ACLs
+ are not POSIX.1e filesystem acls (they are NFSv4 ACLs instead). Ansible
+ cannot use these ACLs to manage its temp file permissions so you may have
+ to resort to ``allow_world_readable_tmpfiles`` if the remote machines use ZFS.
+
+.. versionchanged:: 2.1
+
+Ansible makes it hard to unknowingly use ``become`` insecurely. Starting in Ansible 2.1,
+Ansible defaults to issuing an error if it cannot execute securely with ``become``.
+If you cannot use pipelining or POSIX ACLs, and you must connect as an unprivileged user
+and then ``become`` a different unprivileged user, you can turn on
+``allow_world_readable_tmpfiles`` in the :file:`ansible.cfg` file, provided you accept that
+your managed nodes are secure enough for the modules you want to run there to be world
+readable. Setting ``allow_world_readable_tmpfiles`` changes this from an error into
+a warning and allows the task to run as it did prior to 2.1.
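+
+A minimal sketch of that setting in :file:`ansible.cfg` (only do this if you accept the world-readable risk described above):
+
+.. code-block:: ini
+
+    [defaults]
+    # downgrade the world-readable temporary file error to a warning
+    allow_world_readable_tmpfiles = True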
+
+Not supported by all connection plugins
+---------------------------------------
+
+Privilege escalation methods must also be supported by the connection plugin
+used. Most connection plugins will warn if they do not support become. Some
+will just ignore it as they always run as root (jail, chroot, and so on).
+
+Only one method may be enabled per host
+---------------------------------------
+
+Methods cannot be chained. You cannot use ``sudo /bin/su -`` to become a user,
+you need to have privileges to run the command as that user in sudo or be able
+to su directly to it (the same for pbrun, pfexec or other supported methods).
+
+Privilege escalation must be general
+------------------------------------
+
+You cannot limit privilege escalation permissions to certain commands.
+Ansible does not always
+use a specific command to do something but runs modules (code) from
+a temporary file whose name changes every time. If you have '/sbin/service'
+or '/bin/chmod' as the allowed commands, this will fail with Ansible, as those
+paths won't match the temporary file that Ansible creates to run the
+module. If you have security rules that constrain your sudo/pbrun/doas environment
+to running specific command paths only, use Ansible from a special account that
+does not have this constraint, or use :ref:`ansible_tower` to manage indirect access to SSH credentials.
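+
+For illustration, a restricted sudoers entry like the following (the user name is hypothetical) is the kind of rule that will not work with Ansible, because modules run from randomly named temporary files rather than fixed paths:
+
+.. code-block:: text
+
+    # this only authorizes fixed command paths, which Ansible's temporary module files never match
+    deploy ALL=(ALL) NOPASSWD: /sbin/service, /bin/chmod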
+
+May not access environment variables populated by pamd_systemd
+--------------------------------------------------------------
+
+For most Linux distributions using ``systemd`` as their init, the default
+methods used by ``become`` do not open a new "session", in the sense of
+systemd. Because the ``pam_systemd`` module will not fully initialize a new
+session, you might have surprises compared to a normal session opened through
+ssh: some environment variables set by ``pam_systemd``, most notably
+``XDG_RUNTIME_DIR``, are not populated for the new user and instead inherited
+or just emptied.
+
+This might cause trouble when trying to invoke systemd commands that depend on
+``XDG_RUNTIME_DIR`` to access the bus:
+
+.. code-block:: console
+
+ $ echo $XDG_RUNTIME_DIR
+
+ $ systemctl --user status
+ Failed to connect to bus: Permission denied
+
+To force ``become`` to open a new systemd session that goes through
+``pam_systemd``, you can use ``become_method: machinectl``.
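+
+A brief sketch of what that can look like at the task level (the user name is a placeholder):
+
+.. code-block:: yaml
+
+    - name: Check the systemd user session through a machinectl become session
+      ansible.builtin.command: systemctl --user status
+      become: yes
+      become_user: webadmin
+      become_method: machinectl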
+
+For more information, see `this systemd issue
+<https://github.com/systemd/systemd/issues/825#issuecomment-127917622>`_.
+
+.. _become_network:
+
+Become and network automation
+=============================
+
+As of version 2.6, Ansible supports ``become`` for privilege escalation (entering ``enable`` mode or privileged EXEC mode) on all Ansible-maintained network platforms that support ``enable`` mode. Using ``become`` replaces the ``authorize`` and ``auth_pass`` options in a ``provider`` dictionary.
+
+You must set the connection type to either ``connection: ansible.netcommon.network_cli`` or ``connection: ansible.netcommon.httpapi`` to use ``become`` for privilege escalation on network devices. Check the :ref:`platform_options` documentation for details.
+
+You can use escalated privileges on only the specific tasks that need them, on an entire play, or on all plays. Adding ``become: yes`` and ``become_method: enable`` instructs Ansible to enter ``enable`` mode before executing the task, play, or playbook where those parameters are set.
+
+If you see this error message, the task that generated it requires ``enable`` mode to succeed:
+
+.. code-block:: console
+
+ Invalid input (privileged mode required)
+
+To set ``enable`` mode for a specific task, add ``become`` at the task level:
+
+.. code-block:: yaml
+
+ - name: Gather facts (eos)
+ arista.eos.eos_facts:
+ gather_subset:
+ - "!hardware"
+ become: yes
+ become_method: enable
+
+To set enable mode for all tasks in a single play, add ``become`` at the play level:
+
+.. code-block:: yaml
+
+ - hosts: eos-switches
+ become: yes
+ become_method: enable
+ tasks:
+ - name: Gather facts (eos)
+ arista.eos.eos_facts:
+ gather_subset:
+ - "!hardware"
+
+Setting enable mode for all tasks
+---------------------------------
+
+Often you want all tasks in all plays to run using privileged mode. That is best achieved by using ``group_vars``:
+
+**group_vars/eos.yml**
+
+.. code-block:: yaml
+
+ ansible_connection: ansible.netcommon.network_cli
+ ansible_network_os: arista.eos.eos
+ ansible_user: myuser
+ ansible_become: yes
+ ansible_become_method: enable
+
+Passwords for enable mode
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you need a password to enter ``enable`` mode, you can specify it in one of two ways:
+
+* providing the :option:`--ask-become-pass <ansible-playbook --ask-become-pass>` command line option
+* setting the ``ansible_become_password`` connection variable
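+
+For the second option, a minimal group_vars sketch (the group file and vaulted variable name are hypothetical):
+
+.. code-block:: yaml
+
+    # group_vars/eos.yml
+    ansible_become: yes
+    ansible_become_method: enable
+    ansible_become_password: "{{ vault_enable_password }}"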
+
+.. warning::
+
+ As a reminder passwords should never be stored in plain text. For information on encrypting your passwords and other secrets with Ansible Vault, see :ref:`vault`.
+
+authorize and auth_pass
+-----------------------
+
+Ansible still supports ``enable`` mode with ``connection: local`` for legacy network playbooks. To enter ``enable`` mode with ``connection: local``, use the module options ``authorize`` and ``auth_pass``:
+
+.. code-block:: yaml
+
+ - hosts: eos-switches
+ ansible_connection: local
+ tasks:
+ - name: Gather facts (eos)
+ eos_facts:
+ gather_subset:
+ - "!hardware"
+ provider:
+ authorize: yes
+ auth_pass: "{{ secret_auth_pass }}"
+
+We recommend updating your playbooks to use ``become`` for network-device ``enable`` mode consistently. The use of ``authorize`` and of ``provider`` dictionaries will be deprecated in the future. Check the :ref:`platform_options` and :ref:`network_modules` documentation for details.
+
+.. _become_windows:
+
+Become and Windows
+==================
+
+Since Ansible 2.3, ``become`` can be used on Windows hosts through the
+``runas`` method. Become on Windows uses the same inventory setup and
+invocation arguments as ``become`` on a non-Windows host, so the setup and
+variable names are the same as what is defined in this document.
+
+While ``become`` can be used to assume the identity of another user, there are other uses for
+it with Windows hosts. One important use is to bypass some of the
+limitations that are imposed when running on WinRM, such as constrained network
+delegation or accessing forbidden system calls like the WUA API. You can use
+``become`` with the same user as ``ansible_user`` to bypass these limitations
+and run commands that are not normally accessible in a WinRM session.
+
+Administrative rights
+---------------------
+
+Many tasks in Windows require administrative privileges to complete. When using
+the ``runas`` become method, Ansible will attempt to run the module with the
+full privileges that are available to the remote user. If it fails to elevate
+the user token, it will continue to use the limited token during execution.
+
+A user must have the ``SeDebugPrivilege`` to run a become process with elevated
+privileges. This privilege is assigned to Administrators by default. If the
+debug privilege is not available, the become process will run with a limited
+set of privileges and groups.
+
+To determine the type of token that Ansible was able to get, run the following
+task:
+
+.. code-block:: yaml
+
+ - name: Check my user name
+ ansible.windows.win_whoami:
+ become: yes
+
+The output will look similar to the following:
+
+.. code-block:: ansible-output
+
+ ok: [windows] => {
+ "account": {
+ "account_name": "vagrant-domain",
+ "domain_name": "DOMAIN",
+ "sid": "S-1-5-21-3088887838-4058132883-1884671576-1105",
+ "type": "User"
+ },
+ "authentication_package": "Kerberos",
+ "changed": false,
+ "dns_domain_name": "DOMAIN.LOCAL",
+ "groups": [
+ {
+ "account_name": "Administrators",
+ "attributes": [
+ "Mandatory",
+ "Enabled by default",
+ "Enabled",
+ "Owner"
+ ],
+ "domain_name": "BUILTIN",
+ "sid": "S-1-5-32-544",
+ "type": "Alias"
+ },
+ {
+ "account_name": "INTERACTIVE",
+ "attributes": [
+ "Mandatory",
+ "Enabled by default",
+ "Enabled"
+ ],
+ "domain_name": "NT AUTHORITY",
+ "sid": "S-1-5-4",
+ "type": "WellKnownGroup"
+ },
+ ],
+ "impersonation_level": "SecurityAnonymous",
+ "label": {
+ "account_name": "High Mandatory Level",
+ "domain_name": "Mandatory Label",
+ "sid": "S-1-16-12288",
+ "type": "Label"
+ },
+ "login_domain": "DOMAIN",
+ "login_time": "2018-11-18T20:35:01.9696884+00:00",
+ "logon_id": 114196830,
+ "logon_server": "DC01",
+ "logon_type": "Interactive",
+ "privileges": {
+ "SeBackupPrivilege": "disabled",
+ "SeChangeNotifyPrivilege": "enabled-by-default",
+ "SeCreateGlobalPrivilege": "enabled-by-default",
+ "SeCreatePagefilePrivilege": "disabled",
+ "SeCreateSymbolicLinkPrivilege": "disabled",
+ "SeDebugPrivilege": "enabled",
+ "SeDelegateSessionUserImpersonatePrivilege": "disabled",
+ "SeImpersonatePrivilege": "enabled-by-default",
+ "SeIncreaseBasePriorityPrivilege": "disabled",
+ "SeIncreaseQuotaPrivilege": "disabled",
+ "SeIncreaseWorkingSetPrivilege": "disabled",
+ "SeLoadDriverPrivilege": "disabled",
+ "SeManageVolumePrivilege": "disabled",
+ "SeProfileSingleProcessPrivilege": "disabled",
+ "SeRemoteShutdownPrivilege": "disabled",
+ "SeRestorePrivilege": "disabled",
+ "SeSecurityPrivilege": "disabled",
+ "SeShutdownPrivilege": "disabled",
+ "SeSystemEnvironmentPrivilege": "disabled",
+ "SeSystemProfilePrivilege": "disabled",
+ "SeSystemtimePrivilege": "disabled",
+ "SeTakeOwnershipPrivilege": "disabled",
+ "SeTimeZonePrivilege": "disabled",
+ "SeUndockPrivilege": "disabled"
+ },
+ "rights": [
+ "SeNetworkLogonRight",
+ "SeBatchLogonRight",
+ "SeInteractiveLogonRight",
+ "SeRemoteInteractiveLogonRight"
+ ],
+ "token_type": "TokenPrimary",
+ "upn": "vagrant-domain@DOMAIN.LOCAL",
+ "user_flags": []
+ }
+
+Under the ``label`` key, the ``account_name`` entry determines whether the user
+has Administrative rights. Here are the labels that can be returned and what
+they represent:
+
+* ``Medium``: Ansible failed to get an elevated token and ran under a limited
+ token. Only a subset of the privileges assigned to user are available during
+ the module execution and the user does not have administrative rights.
+
+* ``High``: An elevated token was used and all the privileges assigned to the
+ user are available during the module execution.
+
+* ``System``: The ``NT AUTHORITY\System`` account is used and has the highest
+ level of privileges available.
+
+The output will also show the list of privileges that have been granted to the
+user. When the privilege value is ``disabled``, the privilege is assigned to
+the logon token but has not been enabled. In most scenarios these privileges
+are automatically enabled when required.
+
+If you are running a version of Ansible older than 2.5, or the normal
+``runas`` escalation process fails, you can retrieve an elevated token by doing one of the following:
+
+* Set the ``become_user`` to ``System`` which has full control over the
+ operating system.
+
+* Grant ``SeTcbPrivilege`` to the user Ansible connects with on
+ WinRM. ``SeTcbPrivilege`` is a high-level privilege that grants
+ full control over the operating system. No user is given this privilege by
+ default, and care should be taken if you grant this privilege to a user or group.
+ For more information on this privilege, please see
+ `Act as part of the operating system <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn221957(v=ws.11)>`_.
+ You can use the below task to set this privilege on a Windows host:
+
+ .. code-block:: yaml
+
+ - name: grant the ansible user the SeTcbPrivilege right
+ ansible.windows.win_user_right:
+ name: SeTcbPrivilege
+ users: '{{ansible_user}}'
+ action: add
+
+* Turn UAC off on the host and reboot before trying to become the user. UAC is
+ a security protocol that is designed to run accounts with the
+ ``least privilege`` principle. You can turn UAC off by running the following
+ tasks:
+
+ .. code-block:: yaml
+
+ - name: turn UAC off
+ win_regedit:
+ path: HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system
+ name: EnableLUA
+ data: 0
+ type: dword
+ state: present
+ register: uac_result
+
+ - name: reboot after disabling UAC
+ win_reboot:
+ when: uac_result is changed
+
+.. Note:: Granting the ``SeTcbPrivilege`` or turning UAC off can cause Windows
+ security vulnerabilities and care should be given if these steps are taken.
+
+Local service accounts
+----------------------
+
+Prior to Ansible version 2.5, ``become`` only worked on Windows with a local or domain
+user account. Local service accounts like ``System`` or ``NetworkService``
+could not be used as ``become_user`` in these older versions. This restriction
+has been lifted since the 2.5 release of Ansible. The three service accounts
+that can be set under ``become_user`` are:
+
+* System
+* NetworkService
+* LocalService
+
+Because local service accounts do not have passwords, the
+``ansible_become_password`` parameter is not required and is ignored if
+specified.
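+
+For example, a short sketch of running a module as the ``System`` account:
+
+.. code-block:: yaml
+
+    - name: Check the user that the module runs as
+      ansible.windows.win_whoami:
+      become: yes
+      become_method: runas
+      become_user: System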
+
+Become without setting a password
+---------------------------------
+
+As of Ansible 2.8, ``become`` can be used to become a Windows local or domain account
+without requiring a password for that account. For this method to work, the
+following requirements must be met:
+
+* The connection user has the ``SeDebugPrivilege`` privilege assigned
+* The connection user is part of the ``BUILTIN\Administrators`` group
+* The ``become_user`` has either the ``SeBatchLogonRight`` or ``SeNetworkLogonRight`` user right
+
+Using become without a password is achieved in one of two ways:
+
+* Duplicating an existing logon session's token if the account is already logged on
+* Using S4U to generate a logon token that is valid on the remote host only
+
+In the first scenario, the become process is spawned from another logon of that
+user account. This could be an existing RDP or console logon, but such a session is
+not guaranteed to exist at all times. This is similar to the
+``Run only when user is logged on`` option for a Scheduled Task.
+
+In the case where another logon of the become account does not exist, S4U is
+used to create a new logon and run the module through that. This is similar to
+the ``Run whether user is logged on or not`` with the ``Do not store password``
+option for a Scheduled Task. In this scenario, the become process will not be
+able to access any network resources like a normal WinRM process.
+
+To make a distinction between using become with no password and becoming an
+account that has no password, make sure to keep ``ansible_become_password``
+undefined or set ``ansible_become_password:`` with no value.
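+
+A sketch of becoming another account without supplying a password (the domain account is hypothetical and must meet the requirements listed above):
+
+.. code-block:: yaml
+
+    - name: Run a module as another user without providing the become password
+      ansible.windows.win_whoami:
+      vars:
+        ansible_become: yes
+        ansible_become_method: runas
+        ansible_become_user: DOMAIN\user2
+        # ansible_become_password is deliberately left undefined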
+
+.. Note:: Because there are no guarantees an existing token will exist for a
+ user when Ansible runs, there's a high chance the become process will only
+ have access to local resources. Use become with a password if the task needs
+ to access network resources.
+
+Accounts without a password
+---------------------------
+
+.. Warning:: As a general security best practice, you should avoid allowing accounts without passwords.
+
+Ansible can be used to become a Windows account that does not have a password (like the
+``Guest`` account). To become an account without a password, set up the
+variables like normal but set ``ansible_become_password: ''``.
+
+Before become can work on an account like this, the local policy
+`Accounts: Limit local account use of blank passwords to console logon only <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852174(v=ws.11)>`_
+must be disabled. This can either be done through a Group Policy Object (GPO)
+or with this Ansible task:
+
+.. code-block:: yaml
+
+ - name: allow blank password on become
+ ansible.windows.win_regedit:
+ path: HKLM:\SYSTEM\CurrentControlSet\Control\Lsa
+ name: LimitBlankPasswordUse
+ data: 0
+ type: dword
+ state: present
+
+.. Note:: This is only for accounts that do not have a password. You still need
+ to set the account's password under ``ansible_become_password`` if the
+ become_user has a password.
+
+Become flags for Windows
+------------------------
+
+Ansible 2.5 added the ``become_flags`` parameter to the ``runas`` become method.
+This parameter can be set using the ``become_flags`` task directive or set in
+Ansible's configuration using ``ansible_become_flags``. The two valid values
+that are initially supported for this parameter are ``logon_type`` and
+``logon_flags``.
+
+.. Note:: These flags should only be set when becoming a normal user account, not a local service account like LocalSystem.
+
+The key ``logon_type`` sets the type of logon operation to perform. The value
+can be set to one of the following:
+
+* ``interactive``: The default logon type. The process will be run under a
+ context that is the same as when running a process locally. This bypasses all
+ WinRM restrictions and is the recommended method to use.
+
+* ``batch``: Runs the process under a batch context that is similar to a
+ scheduled task with a password set. This should bypass most WinRM
+ restrictions and is useful if the ``become_user`` is not allowed to log on
+ interactively.
+
+* ``new_credentials``: Runs under the same credentials as the calling user, but
+ outbound connections are run under the context of the ``become_user`` and
+ ``become_password``, similar to ``runas.exe /netonly``. The ``logon_flags``
+ flag should also be set to ``netcredentials_only``. Use this flag if
+ the process needs to access a network resource (like an SMB share) using a
+ different set of credentials.
+
+* ``network``: Runs the process under a network context without any cached
+ credentials. This results in the same type of logon session as running a
+ normal WinRM process without credential delegation, and operates under the same
+ restrictions.
+
+* ``network_cleartext``: Like the ``network`` logon type, but instead caches
+ the credentials so it can access network resources. This is the same type of
+ logon session as running a normal WinRM process with credential delegation.
+
+For more information, see
+`dwLogonType <https://docs.microsoft.com/en-gb/windows/desktop/api/winbase/nf-winbase-logonusera>`_.
+
+The ``logon_flags`` key specifies how Windows will log the user on when creating
+the new process. The value can be set to none, one, or more of the following:
+
+* ``with_profile``: The default logon flag set. The process will load the
+ user's profile in the ``HKEY_USERS`` registry key to ``HKEY_CURRENT_USER``.
+
+* ``netcredentials_only``: The process will use the same token as the caller
+ but will use the ``become_user`` and ``become_password`` when accessing a remote
+ resource. This is useful in inter-domain scenarios where there is no trust
+ relationship, and should be used with the ``new_credentials`` ``logon_type``.
+
+By default ``logon_flags=with_profile`` is set. If the profile should not be
+loaded, set ``logon_flags=``; if the profile should be loaded along with
+``netcredentials_only``, set ``logon_flags=with_profile,netcredentials_only``.
+
+For more information, see `dwLogonFlags <https://docs.microsoft.com/en-gb/windows/desktop/api/winbase/nf-winbase-createprocesswithtokenw>`_.
+
+Here are some examples of how to use ``become_flags`` with Windows tasks:
+
+.. code-block:: yaml
+
+ - name: copy a file from a fileshare with custom credentials
+ ansible.windows.win_copy:
+ src: \\server\share\data\file.txt
+ dest: C:\temp\file.txt
+ remote_src: yes
+ vars:
+ ansible_become: yes
+ ansible_become_method: runas
+ ansible_become_user: DOMAIN\user
+ ansible_become_password: Password01
+ ansible_become_flags: logon_type=new_credentials logon_flags=netcredentials_only
+
+ - name: run a command under a batch logon
+ ansible.windows.win_whoami:
+ become: yes
+ become_flags: logon_type=batch
+
+ - name: run a command and not load the user profile
+ ansible.windows.win_whoami:
+ become: yes
+ become_flags: logon_flags=
+
+
+Limitations of become on Windows
+--------------------------------
+
+* Running a task with ``async`` and ``become`` on Windows Server 2008, 2008 R2
+ and Windows 7 only works when using Ansible 2.7 or newer.
+
+* By default, the become user logs on with an interactive session, so it must
+ have the right to do so on the Windows host. If it does not inherit the
+ ``SeAllowLogOnLocally`` privilege or inherits the ``SeDenyLogOnLocally``
+ privilege, the become process will fail. Either add the privilege or set the
+ ``logon_type`` flag to change the logon type used.
+
+* Prior to Ansible version 2.3, become only worked when
+ ``ansible_winrm_transport`` was either ``basic`` or ``credssp``. This
+ restriction has been lifted since the 2.4 release of Ansible for all hosts
+ except Windows Server 2008 (non R2 version).
+
+* The Secondary Logon service ``seclogon`` must be running to use ``ansible_become_method: runas``
+
+.. seealso::
+
+ `Mailing List <https://groups.google.com/forum/#!forum/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
+ `webchat.freenode.net <https://webchat.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/collections_using.rst b/docs/docsite/rst/user_guide/collections_using.rst
new file mode 100644
index 00000000..a9530a9e
--- /dev/null
+++ b/docs/docsite/rst/user_guide/collections_using.rst
@@ -0,0 +1,324 @@
+
+.. _collections:
+
+*****************
+Using collections
+*****************
+
+Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. As modules move from the core Ansible repository into collections, the module documentation will move to the :ref:`collections pages <list_of_collections>`.
+
+You can install and use collections through `Ansible Galaxy <https://galaxy.ansible.com>`_.
+
+* For details on how to *develop* collections see :ref:`developing_collections`.
+* For the current development status of Collections and FAQ see `Ansible Collections Community Guide <https://github.com/ansible-collections/overview/blob/main/README.rst>`_.
+
+.. contents::
+ :local:
+ :depth: 2
+
+.. _collections_installing:
+
+Installing collections
+======================
+
+
+Installing collections with ``ansible-galaxy``
+----------------------------------------------
+
+.. include:: ../shared_snippets/installing_collections.txt
+
+.. _collections_older_version:
+
+Installing an older version of a collection
+-------------------------------------------
+
+.. include:: ../shared_snippets/installing_older_collection.txt
+
+Installing a collection from a git repository
+---------------------------------------------
+
+.. include:: ../shared_snippets/installing_collections_git_repo.txt
+
+.. _collection_requirements_file:
+
+Install multiple collections with a requirements file
+-----------------------------------------------------
+
+.. include:: ../shared_snippets/installing_multiple_collections.txt
+
+.. _collection_offline_download:
+
+Downloading a collection for offline use
+-----------------------------------------
+
+.. include:: ../shared_snippets/download_tarball_collections.txt
+
+
+.. _galaxy_server_config:
+
+Configuring the ``ansible-galaxy`` client
+------------------------------------------
+
+.. include:: ../shared_snippets/galaxy_server_list.txt
+
+.. _collections_downloading:
+
+Downloading collections
+=======================
+
+To download a collection and its dependencies for an offline install, run ``ansible-galaxy collection download``. This
+downloads the collections specified and their dependencies to the specified folder and creates a ``requirements.yml``
+file which can be used to install those collections on a host without access to a Galaxy server. All the collections
+are downloaded by default to the ``./collections`` folder.
+
+Just like the ``install`` command, the collections are sourced based on the
+:ref:`configured galaxy server config <galaxy_server_config>`. Even if a collection to download was specified by a URL
+or path to a tarball, the collection will be redownloaded from the configured Galaxy server.
+
+Collections can be specified as one or multiple collections or with a ``requirements.yml`` file just like
+``ansible-galaxy collection install``.
+
+To download a single collection and its dependencies:
+
+.. code-block:: bash
+
+ ansible-galaxy collection download my_namespace.my_collection
+
+To download a single collection at a specific version:
+
+.. code-block:: bash
+
+ ansible-galaxy collection download my_namespace.my_collection:1.0.0
+
+To download multiple collections, either specify multiple collections as command line arguments as shown above or use a
+requirements file in the format documented with :ref:`collection_requirements_file`.
+
+.. code-block:: bash
+
+ ansible-galaxy collection download -r requirements.yml
+
+All the collections are downloaded by default to the ``./collections`` folder but you can use ``-p`` or
+``--download-path`` to specify another path:
+
+.. code-block:: bash
+
+ ansible-galaxy collection download my_namespace.my_collection -p ~/offline-collections
+
+Once you have downloaded the collections, the folder contains the collections specified, their dependencies, and a
+``requirements.yml`` file. You can use this folder as is with ``ansible-galaxy collection install`` to install the
+collections on a host without access to a Galaxy or Automation Hub server.
+
+.. code-block:: bash
+
+ # This must be run from the folder that contains the offline collections and requirements.yml file downloaded
+ # by the internet-connected host
+ cd ~/offline-collections
+ ansible-galaxy collection install -r requirements.yml
+
+.. _collections_listing:
+
+Listing collections
+===================
+
+To list installed collections, run ``ansible-galaxy collection list``. This shows all of the installed collections found in the configured collections search paths. It will also show collections under development, which contain a galaxy.yml file instead of a MANIFEST.json. The path where the collections are located is displayed, as well as version information. If no version information is available, a ``*`` is displayed for the version number.
+
+.. code-block:: shell
+
+ # /home/astark/.ansible/collections/ansible_collections
+ Collection Version
+ -------------------------- -------
+ cisco.aci 0.0.5
+ cisco.mso 0.0.4
+ sandwiches.ham *
+ splunk.es 0.0.5
+
+ # /usr/share/ansible/collections/ansible_collections
+ Collection Version
+ ----------------- -------
+ fortinet.fortios 1.0.6
+ pureport.pureport 0.0.8
+ sensu.sensu_go 1.3.0
+
+Run with ``-vvv`` to display more detailed information.
+
+To list a specific collection, pass a valid fully qualified collection name (FQCN) to the command ``ansible-galaxy collection list``. All instances of the collection will be listed.
+
+.. code-block:: shell
+
+ > ansible-galaxy collection list fortinet.fortios
+
+ # /home/astark/.ansible/collections/ansible_collections
+ Collection Version
+ ---------------- -------
+ fortinet.fortios 1.0.1
+
+ # /usr/share/ansible/collections/ansible_collections
+ Collection Version
+ ---------------- -------
+ fortinet.fortios 1.0.6
+
+To search other paths for collections, use the ``-p`` option. Specify multiple search paths by separating them with a ``:``. The list of paths specified on the command line will be added to the beginning of the configured collections search paths.
+
+.. code-block:: shell
+
+ > ansible-galaxy collection list -p '/opt/ansible/collections:/etc/ansible/collections'
+
+ # /opt/ansible/collections/ansible_collections
+ Collection Version
+ --------------- -------
+ sandwiches.club 1.7.2
+
+ # /etc/ansible/collections/ansible_collections
+ Collection Version
+ -------------- -------
+ sandwiches.pbj 1.2.0
+
+ # /home/astark/.ansible/collections/ansible_collections
+ Collection Version
+ -------------------------- -------
+ cisco.aci 0.0.5
+ cisco.mso 0.0.4
+ fortinet.fortios 1.0.1
+ sandwiches.ham *
+ splunk.es 0.0.5
+
+ # /usr/share/ansible/collections/ansible_collections
+ Collection Version
+ ----------------- -------
+ fortinet.fortios 1.0.6
+ pureport.pureport 0.0.8
+ sensu.sensu_go 1.3.0
+
+
+.. _using_collections:
+
+Verifying collections
+=====================
+
+Verifying collections with ``ansible-galaxy``
+---------------------------------------------
+
+Once installed, you can verify that the content of the installed collection matches the content of the collection on the server. This feature expects that the collection is installed in one of the configured collection paths and that the collection exists on one of the configured galaxy servers.
+
+.. code-block:: bash
+
+ ansible-galaxy collection verify my_namespace.my_collection
+
+The output of the ``ansible-galaxy collection verify`` command is quiet if it is successful. If a collection has been modified, the altered files are listed under the collection name.
+
+.. code-block:: bash
+
+ ansible-galaxy collection verify my_namespace.my_collection
+ Collection my_namespace.my_collection contains modified content in the following files:
+ my_namespace.my_collection
+ plugins/inventory/my_inventory.py
+ plugins/modules/my_module.py
+
+You can use the ``-vvv`` flag to display additional information, such as the version and path of the installed collection, the URL of the remote collection used for validation, and successful verification output.
+
+.. code-block:: bash
+
+ ansible-galaxy collection verify my_namespace.my_collection -vvv
+ ...
+ Verifying 'my_namespace.my_collection:1.0.0'.
+ Installed collection found at '/path/to/ansible_collections/my_namespace/my_collection/'
+ Remote collection found at 'https://galaxy.ansible.com/download/my_namespace-my_collection-1.0.0.tar.gz'
+ Successfully verified that checksums for 'my_namespace.my_collection:1.0.0' match the remote collection
+
+If you have a pre-release or non-latest version of a collection installed, you should include the specific version to verify. If the version is omitted, the installed collection is verified against the latest version available on the server.
+
+.. code-block:: bash
+
+ ansible-galaxy collection verify my_namespace.my_collection:1.0.0
+
+In addition to the ``namespace.collection_name:version`` format, you can provide the collections to verify in a ``requirements.yml`` file. Dependencies listed in ``requirements.yml`` are not included in the verify process and should be verified separately.
+
+.. code-block:: bash
+
+ ansible-galaxy collection verify -r requirements.yml
+
+Verifying against ``tar.gz`` files is not supported. If your ``requirements.yml`` contains paths to tar files or URLs for installation, you can use the ``--ignore-errors`` flag to ensure that all collections using the ``namespace.name`` format in the file are processed.
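+
+For example, to skip the unsupported entries and still verify the rest:
+
+.. code-block:: bash
+
+    ansible-galaxy collection verify -r requirements.yml --ignore-errors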
+
+.. _collections_using_playbook:
+
+Using collections in a Playbook
+===============================
+
+Once installed, you can reference collection content by its fully qualified collection name (FQCN):
+
+.. code-block:: yaml
+
+ - hosts: all
+ tasks:
+ - my_namespace.my_collection.mymodule:
+ option1: value
+
+This works for roles or any type of plugin distributed within the collection:
+
+.. code-block:: yaml
+
+ - hosts: all
+ tasks:
+ - import_role:
+ name: my_namespace.my_collection.role1
+
+ - my_namespace.my_collection.mymodule:
+ option1: value
+
+ - debug:
+ msg: '{{ lookup("my_namespace.my_collection.lookup1", "param1") | my_namespace.my_collection.filter1 }}'
+
+Simplifying module names with the ``collections`` keyword
+=========================================================
+
+The ``collections`` keyword lets you define a list of collections that your role or playbook should search for unqualified module and action names. So you can use the ``collections`` keyword, then simply refer to modules and action plugins by their short-form names throughout that role or playbook.
+
+.. warning::
+ If your playbook uses both the ``collections`` keyword and one or more roles, the roles do not inherit the collections set by the playbook. See below for details.
+
+Using ``collections`` in roles
+------------------------------
+
+Within a role, you can control which collections Ansible searches for the tasks inside the role using the ``collections`` keyword in the role's ``meta/main.yml``. Ansible will use the collections list defined inside the role even if the playbook that calls the role defines different collections in a separate ``collections`` keyword entry. Roles defined inside a collection always implicitly search their own collection first, so you don't need to use the ``collections`` keyword to access modules, actions, or other roles contained in the same collection.
+
+.. code-block:: yaml
+
+ # myrole/meta/main.yml
+ collections:
+ - my_namespace.first_collection
+ - my_namespace.second_collection
+ - other_namespace.other_collection
+
+Using ``collections`` in playbooks
+----------------------------------
+
+In a playbook, you can control the collections Ansible searches for modules and action plugins to execute. However, any roles you call in your playbook define their own collections search order; they do not inherit the calling playbook's settings. This is true even if the role does not define its own ``collections`` keyword.
+
+.. code-block:: yaml
+
+ - hosts: all
+ collections:
+ - my_namespace.my_collection
+
+ tasks:
+ - import_role:
+ name: role1
+
+ - mymodule:
+ option1: value
+
+ - debug:
+ msg: '{{ lookup("my_namespace.my_collection.lookup1", "param1") | my_namespace.my_collection.filter1 }}'
+
+The ``collections`` keyword merely creates an ordered 'search path' for non-namespaced plugin and role references. It does not install content or otherwise change Ansible's behavior around the loading of plugins or roles. Note that an FQCN is still required for non-action or module plugins (for example, lookups, filters, tests).
+
+.. seealso::
+
+ :ref:`developing_collections`
+ Develop or modify a collection.
+ :ref:`collections_galaxy_meta`
+ Understand the collections metadata structure.
+ `Mailing List <https://groups.google.com/group/ansible-devel>`_
+ The development mailing list
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/command_line_tools.rst b/docs/docsite/rst/user_guide/command_line_tools.rst
new file mode 100644
index 00000000..56561b59
--- /dev/null
+++ b/docs/docsite/rst/user_guide/command_line_tools.rst
@@ -0,0 +1,20 @@
+.. _command_line_tools:
+
+Working with command line tools
+===============================
+
+Most users are familiar with `ansible` and `ansible-playbook`, but those are not the only utilities Ansible provides.
+Below is a complete list of Ansible utilities. Each page contains a description of the utility and a listing of supported parameters.
+
+.. toctree::
+ :maxdepth: 1
+
+ ../cli/ansible.rst
+ ../cli/ansible-config.rst
+ ../cli/ansible-console.rst
+ ../cli/ansible-doc.rst
+ ../cli/ansible-galaxy.rst
+ ../cli/ansible-inventory.rst
+ ../cli/ansible-playbook.rst
+ ../cli/ansible-pull.rst
+ ../cli/ansible-vault.rst
diff --git a/docs/docsite/rst/user_guide/complex_data_manipulation.rst b/docs/docsite/rst/user_guide/complex_data_manipulation.rst
new file mode 100644
index 00000000..253362b7
--- /dev/null
+++ b/docs/docsite/rst/user_guide/complex_data_manipulation.rst
@@ -0,0 +1,246 @@
+.. _complex_data_manipulation:
+
+Data manipulation
+#########################
+
+In many cases, you need to do some complex operations with your variables. While Ansible is not recommended as a data processing/manipulation tool, you can use the existing Jinja2 templating in conjunction with the many added Ansible filters, lookups and tests to do some very complex transformations.
+
+Let's start with a quick definition of each type of plugin:
+ - lookups: Mainly used to query 'external data', in Ansible these were the primary part of loops using the ``with_<lookup>`` construct, but they can be used independently to return data for processing. They normally return a list due to their primary function in loops as mentioned previously. Used with the ``lookup`` or ``query`` Jinja2 operators.
+ - filters: used to change/transform data, used with the ``|`` Jinja2 operator.
+ - tests: used to validate data, used with the ``is`` Jinja2 operator.
+
+.. note::
+ * Some tests and filters are provided directly by Jinja2, so their availability depends on the Jinja2 version, not Ansible.
+
+.. _for_loops_or_list_comprehensions:
+
+Loops and list comprehensions
+=============================
+
+Most programming languages have loops (``for``, ``while``, and so on) and list comprehensions to do transformations on lists including lists of objects. Jinja2 has a few filters that provide this functionality: ``map``, ``select``, ``reject``, ``selectattr``, ``rejectattr``.
+
+- map: this is a basic for loop that lets you change every item in a list; using the 'attribute' keyword you can do the transformation based on attributes of the list elements.
+- select/reject: this is a for loop with a condition that allows you to create a subset of a list that matches (or does not match) based on the result of the condition.
+- selectattr/rejectattr: very similar to the above but it uses a specific attribute of the list elements for the conditional statement.
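+
+A small sketch of chaining these filters (the ``interfaces`` variable and its fields are hypothetical):
+
+.. code-block:: YAML+Jinja
+
+    vars:
+      interfaces:
+        - { name: eth0, up: true }
+        - { name: eth1, up: false }
+      # keep only the names of interfaces that are up
+      up_interface_names: "{{ interfaces | selectattr('up') | map(attribute='name') | list }}"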
+
+
+.. _keys_from_dict_matching_list:
+
+Extract keys from a dictionary matching elements from a list
+------------------------------------------------------------
+
+The Python equivalent code would be:
+
+.. code-block:: python
+
+ chains = [1, 2]
+ for chain in chains:
+ for config in chains_config[chain]['configs']:
+ print(config['type'])
+
+There are several ways to do it in Ansible, this is just one example:
+
+.. code-block:: YAML+Jinja
+ :emphasize-lines: 3
+ :caption: Way to extract matching keys from a list of dictionaries
+
+ tasks:
+ - name: Show extracted list of keys from a list of dictionaries
+ ansible.builtin.debug:
+ msg: "{{ chains | map('extract', chains_config) | map(attribute='configs') | flatten | map(attribute='type') | flatten }}"
+ vars:
+ chains: [1, 2]
+ chains_config:
+ 1:
+ foo: bar
+ configs:
+ - type: routed
+ version: 0.1
+ - type: bridged
+ version: 0.2
+ 2:
+ foo: baz
+ configs:
+ - type: routed
+ version: 1.0
+ - type: bridged
+ version: 1.1
+
+
+.. code-block:: ansible-output
+ :caption: Results of debug task, a list with the extracted keys
+
+ ok: [localhost] => {
+ "msg": [
+ "routed",
+ "bridged",
+ "routed",
+ "bridged"
+ ]
+ }
+
+
+.. _find_mount_point:
+
+Find mount point
+----------------
+
+In this case, we want to find the mount point for a given path across our machines, since we already collect mount facts, we can use the following:
+
+.. code-block:: YAML+Jinja
+ :caption: Use selectattr to filter mounts into list I can then sort and select the last from
+ :emphasize-lines: 7
+
+ - hosts: all
+ gather_facts: True
+ vars:
+ path: /var/lib/cache
+ tasks:
+ - name: The mount point for {{path}}, found using the Ansible mount facts, [-1] is the same as the 'last' filter
+ ansible.builtin.debug:
+ msg: "{{(ansible_facts.mounts | selectattr('mount', 'in', path) | list | sort(attribute='mount'))[-1]['mount']}}"
+
+
+
+Omit elements from a list
+-------------------------
+
+The special ``omit`` variable ONLY works with module options, but we can still use it in other ways as an identifier to tailor a list of elements:
+
+.. code-block:: YAML+Jinja
+ :caption: Inline list filtering when feeding a module option
+ :emphasize-lines: 3, 7
+
+ - name: Enable a list of Windows features, by name
+ ansible.builtin.set_fact:
+ win_feature_list: "{{ namestuff | reject('equalto', omit) | list }}"
+ vars:
+ namestuff:
+ - "{{ (fs_installed_smb_v1 | default(False)) | ternary(omit, 'FS-SMB1') }}"
+ - "foo"
+ - "bar"
+
+
+Another way is to avoid adding elements to the list in the first place, so you can just use it directly:
+
+.. code-block:: YAML+Jinja
+ :caption: Using set_fact in a loop to increment a list conditionally
+ :emphasize-lines: 3, 4, 6
+
+ - name: Build unique list with some items conditionally omitted
+ ansible.builtin.set_fact:
+ namestuff: ' {{ (namestuff | default([])) | union([item]) }}'
+ when: item != omit
+ loop:
+ - "{{ (fs_installed_smb_v1 | default(False)) | ternary(omit, 'FS-SMB1') }}"
+ - "foo"
+ - "bar"
+
+
+.. _complex_type_transfomations:
+
+Complex Type transformations
+=============================
+
+Jinja provides filters for simple data type transformations (``int``, ``bool``, and so on), but when you want to transform data structures things are not as easy.
+You can use loops and list comprehensions as shown above to help, also other filters and lookups can be chained and leveraged to achieve more complex transformations.
+
+
+.. _create_dictionary_from_list:
+
+Create dictionary from list
+---------------------------
+
+In most languages it is easy to create a dictionary (a.k.a. map/associative array/hash and so on) from a list of pairs, in Ansible there are a couple of ways to do it and the best one for you might depend on the source of your data.
+
+
+These examples produce ``{"a": "b", "c": "d"}``
+
+.. code-block:: YAML+Jinja
+ :caption: Simple list to dict by assuming the list is [key, value , key, value, ...]
+
+ vars:
+ single_list: [ 'a', 'b', 'c', 'd' ]
+ mydict: "{{ dict(single_list | slice(2) | list) }}"
+
+
+.. code-block:: YAML+Jinja
+ :caption: It is simpler when we have a list of pairs:
+
+ vars:
+ list_of_pairs: [ ['a', 'b'], ['c', 'd'] ]
+ mydict: "{{ dict(list_of_pairs) }}"
+
+Both end up being the same thing, with the ``slice(2) | list`` transforming ``single_list`` to the same structure as ``list_of_pairs``.
+
+
+
+A bit more complex, using ``set_fact`` and a ``loop`` to create/update a dictionary with key value pairs from 2 lists:
+
+.. code-block:: YAML+Jinja
+ :caption: Using set_fact to create a dictionary from a set of lists
+ :emphasize-lines: 3, 4
+
+ - name: Uses 'combine' to update the dictionary and 'zip' to make pairs of both lists
+ ansible.builtin.set_fact:
+ mydict: "{{ mydict | default({}) | combine({item[0]: item[1]}) }}"
+ loop: "{{ (keys | zip(values)) | list }}"
+ vars:
+ keys:
+ - foo
+ - var
+ - bar
+ values:
+ - a
+ - b
+ - c
+
+This results in ``{"foo": "a", "var": "b", "bar": "c"}``.
+
+
+You can even combine these simple examples with other filters and lookups to create a dictionary dynamically by matching patterns to variable names:
+
+.. code-block:: YAML+Jinja
+ :caption: Using 'vars' to define dictionary from a set of lists without needing a task
+
+ vars:
+ myvarnames: "{{ q('varnames', '^my') }}"
+ mydict: "{{ dict(myvarnames | zip(q('vars', *myvarnames))) }}"
+
+A quick explanation, since there is a lot to unpack from these two lines:
+
+ - The ``varnames`` lookup returns a list of the names of variables that begin with ``my``.
+ - The list from the previous step is then fed into the ``vars`` lookup to get the list of values.
+ The ``*`` is used to 'dereference the list' (a pythonism that works in Jinja); otherwise it would take the list as a single argument.
+ - Both lists get passed to the ``zip`` filter to pair them off into a list of pairs ((key, value), (key2, value2), ...).
+ - The dict function then takes this 'list of pairs' to create the dictionary.
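+
+To make those steps concrete, here is a small sketch with two hypothetical variables; the ``app_`` prefix and the variable names are illustrative only:
+
+.. code-block:: YAML+Jinja
+   :caption: A sketch of the pattern-matching approach with hypothetical variables
+
+   vars:
+     app_port: 8080
+     app_user: deploy
+     matched_names: "{{ q('varnames', '^app_') }}"
+     # matched_names would be something like ['app_port', 'app_user']
+     matched_dict: "{{ dict(matched_names | zip(q('vars', *matched_names))) }}"
+     # matched_dict would then be something like {'app_port': 8080, 'app_user': 'deploy'}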
+
+
+An example of how to use facts to find a host's data that meets condition X:
+
+
+.. code-block:: YAML+Jinja
+
+ vars:
+ uptime_of_host_most_recently_rebooted: "{{ ansible_play_hosts_all | map('extract', hostvars, 'ansible_uptime_seconds') | sort | first }}"
+
+
+Using an example from @zoradache on reddit, to show the 'uptime in days/hours/minutes' (assumes facts were gathered).
+https://www.reddit.com/r/ansible/comments/gj5a93/trying_to_get_uptime_from_seconds/fqj2qr3/
+
+.. code-block:: YAML+Jinja
+
+ - name: Show the uptime in a certain format
+ ansible.builtin.debug:
+ msg: Timedelta {{ now() - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds) }}
+
+
+.. seealso::
+
+ :doc:`playbooks_filters`
+ Jinja2 filters included with Ansible
+ :doc:`playbooks_tests`
+ Jinja2 tests included with Ansible
+ `Jinja2 Docs <http://jinja.pocoo.org/docs/>`_
+ Jinja2 documentation, includes lists for core filters and tests
diff --git a/docs/docsite/rst/user_guide/connection_details.rst b/docs/docsite/rst/user_guide/connection_details.rst
new file mode 100644
index 00000000..60f93cad
--- /dev/null
+++ b/docs/docsite/rst/user_guide/connection_details.rst
@@ -0,0 +1,116 @@
+.. _connections:
+
+******************************
+Connection methods and details
+******************************
+
+This section shows you how to expand and refine the connection methods Ansible uses for your inventory.
+
+ControlPersist and paramiko
+---------------------------
+
+By default, Ansible uses native OpenSSH, because it supports ControlPersist (a performance feature), Kerberos, and options in ``~/.ssh/config`` such as Jump Host setup. If your control machine uses an older version of OpenSSH that does not support ControlPersist, Ansible will fall back to a Python implementation of the SSH protocol called 'paramiko'.
+
+.. _connection_set_user:
+
+Setting a remote user
+---------------------
+
+By default, Ansible connects to all remote devices with the user name you are using on the control node. If that user name does not exist on a remote device, you can set a different user name for the connection. If you just need to do some tasks as a different user, look at :ref:`become`. You can set the connection user in a playbook:
+
+.. code-block:: yaml
+
+ ---
+ - name: update webservers
+ hosts: webservers
+ remote_user: admin
+
+ tasks:
+ - name: thing to do first in this playbook
+ . . .
+
+as a host variable in inventory:
+
+.. code-block:: text
+
+ other1.example.com ansible_connection=ssh ansible_user=myuser
+ other2.example.com ansible_connection=ssh ansible_user=myotheruser
+
+or as a group variable in inventory:
+
+.. code-block:: yaml
+
+    cloud:
+      hosts:
+        cloud1:
+          ansible_host: my_backup.cloud.com
+        cloud2:
+          ansible_host: my_backup2.cloud.com
+      vars:
+        ansible_user: admin
+
+Setting up SSH keys
+-------------------
+
+By default, Ansible assumes you are using SSH keys to connect to remote machines. SSH keys are encouraged, but you can use password authentication if needed with the ``--ask-pass`` option. If you need to provide a password for :ref:`privilege escalation <become>` (sudo, pbrun, and so on), use ``--ask-become-pass``.
+
+.. include:: shared_snippets/SSH_password_prompt.txt
+
+To set up ssh-agent to avoid retyping your key passphrase, you can do:
+
+.. code-block:: bash
+
+ $ ssh-agent bash
+ $ ssh-add ~/.ssh/id_rsa
+
+Depending on your setup, you may wish to use Ansible's ``--private-key`` command line option to specify a pem file instead. You can also add the private key file:
+
+.. code-block:: bash
+
+ $ ssh-agent bash
+ $ ssh-add ~/.ssh/keypair.pem
+
+Another way to add private key files without using ssh-agent is using ``ansible_ssh_private_key_file`` in an inventory file as explained here: :ref:`intro_inventory`.
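+
+For example, a minimal sketch of a YAML inventory entry using this variable; the host name and key path are illustrative:
+
+.. code-block:: yaml
+
+   webservers:
+     hosts:
+       web1.example.com:
+         ansible_ssh_private_key_file: ~/.ssh/keypair.pem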
+
+Running against localhost
+-------------------------
+
+You can run commands against the control node by using "localhost" or "127.0.0.1" for the server name:
+
+.. code-block:: bash
+
+ $ ansible localhost -m ping -e 'ansible_python_interpreter="/usr/bin/env python"'
+
+You can specify localhost explicitly by adding this to your inventory file:
+
+.. code-block:: text
+
+ localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python"
+
+.. _host_key_checking_on:
+
+Managing host key checking
+--------------------------
+
+Ansible enables host key checking by default. Checking host keys guards against server spoofing and man-in-the-middle attacks, but it does require some maintenance.
+
+If a host is reinstalled and has a different key in 'known_hosts', this will result in an error message until corrected. If a new host is not in 'known_hosts', your control node may prompt for confirmation of the key. This results in an interactive experience if you are running Ansible from, say, cron, which you might not want.
+
+If you understand the implications and wish to disable this behavior, you can do so by editing ``/etc/ansible/ansible.cfg`` or ``~/.ansible.cfg``:
+
+.. code-block:: text
+
+ [defaults]
+ host_key_checking = False
+
+Alternatively this can be set by the :envvar:`ANSIBLE_HOST_KEY_CHECKING` environment variable:
+
+.. code-block:: bash
+
+ $ export ANSIBLE_HOST_KEY_CHECKING=False
+
+Also note that host key checking in paramiko mode is reasonably slow, so switching to 'ssh' is recommended when using this feature.
+
+Other connection methods
+------------------------
+
+Ansible can use a variety of connection methods beyond SSH. You can select any connection plugin, including managing things locally and managing chroot, lxc, and jail containers.
+A mode called 'ansible-pull' can also invert the system and have systems 'phone home' via scheduled git checkouts to pull configuration directives from a central repository.
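+
+As a sketch, connection plugins can be selected per host or group in inventory; the host names below are illustrative, and the chroot example assumes the ``community.general.chroot`` connection plugin is installed:
+
+.. code-block:: yaml
+
+   all:
+     hosts:
+       web1.example.com:
+         ansible_connection: ssh
+       build-chroot:
+         ansible_host: /srv/chroots/buildroot
+         ansible_connection: community.general.chroot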
diff --git a/docs/docsite/rst/user_guide/guide_rolling_upgrade.rst b/docs/docsite/rst/user_guide/guide_rolling_upgrade.rst
new file mode 100644
index 00000000..6f2ca742
--- /dev/null
+++ b/docs/docsite/rst/user_guide/guide_rolling_upgrade.rst
@@ -0,0 +1,324 @@
+**********************************************************
+Playbook Example: Continuous Delivery and Rolling Upgrades
+**********************************************************
+
+.. contents::
+ :local:
+
+.. _lamp_introduction:
+
+What is continuous delivery?
+============================
+
+Continuous delivery (CD) means frequently delivering updates to your software application.
+
+The idea is that by updating more often, you do not have to wait for a specific release window, and your organization
+gets better at the process of responding to change.
+
+Some Ansible users are deploying updates to their end users on an hourly or even more frequent basis -- sometimes every time
+there is an approved code change. To achieve this, you need tools to be able to quickly apply those updates in a zero-downtime way.
+
+This document describes in detail how to achieve this goal, using one of Ansible's most complete example
+playbooks as a template: lamp_haproxy. This example uses a lot of Ansible features: roles, templates,
+and group variables, and it also comes with an orchestration playbook that can do zero-downtime
+rolling upgrades of the web application stack.
+
+.. note::
+
+ `Click here for the latest playbooks for this example
+ <https://github.com/ansible/ansible-examples/tree/master/lamp_haproxy>`_.
+
+The playbooks deploy Apache, PHP, MySQL, Nagios, and HAProxy to a CentOS-based set of servers.
+
+We're not going to cover how to run these playbooks here. Read the included README in the GitHub project along with the
+example for that information. Instead, we're going to take a close look at every part of the playbook and describe what it does.
+
+.. _lamp_deployment:
+
+Site deployment
+===============
+
+Let's start with ``site.yml``. This is our site-wide deployment playbook. It can be used to initially deploy the site, as well
+as push updates to all of the servers:
+
+.. code-block:: yaml
+
+ ---
+ # This playbook deploys the whole application stack in this site.
+
+ # Apply common configuration to all hosts
+ - hosts: all
+
+ roles:
+ - common
+
+ # Configure and deploy database servers.
+ - hosts: dbservers
+
+ roles:
+ - db
+
+ # Configure and deploy the web servers. Note that we include two roles
+ # here, the 'base-apache' role which simply sets up Apache, and 'web'
+ # which includes our example web application.
+
+ - hosts: webservers
+
+ roles:
+ - base-apache
+ - web
+
+ # Configure and deploy the load balancer(s).
+ - hosts: lbservers
+
+ roles:
+ - haproxy
+
+ # Configure and deploy the Nagios monitoring node(s).
+ - hosts: monitoring
+
+ roles:
+ - base-apache
+ - nagios
+
+.. note::
+
+ If you're not familiar with terms like playbooks and plays, you should review :ref:`working_with_playbooks`.
+
+In this playbook we have 5 plays. The first one targets ``all`` hosts and applies the ``common`` role to all of the hosts.
+This is for site-wide things like yum repository configuration, firewall configuration, and anything else that needs to apply to all of the servers.
+
+The next four plays run against specific host groups and apply specific roles to those servers.
+Along with the roles for Nagios monitoring, the database, and the web application, we've implemented a
+``base-apache`` role that installs and configures a basic Apache setup. This is used by both the
+sample web application and the Nagios hosts.
+
+.. _lamp_roles:
+
+Reusable content: roles
+=======================
+
+By now you should have a bit of understanding about roles and how they work in Ansible. Roles are a way to organize
+content: tasks, handlers, templates, and files, into reusable components.
+
+This example has six roles: ``common``, ``base-apache``, ``db``, ``haproxy``, ``nagios``, and ``web``. How you organize
+your roles is up to you and your application, but most sites will have one or more common roles that are applied to
+all systems, and then a series of application-specific roles that install and configure particular parts of the site.
+
+Roles can have variables and dependencies, and you can pass in parameters to roles to modify their behavior.
+You can read more about roles in the :ref:`playbooks_reuse_roles` section.
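+
+For example, a role parameter can override a variable such as ``httpd_port`` (defined later in this example's group variables). A minimal sketch, not part of the lamp_haproxy playbooks themselves:
+
+.. code-block:: yaml
+
+   - hosts: webservers
+
+     roles:
+       - common
+       - role: base-apache
+         vars:
+           httpd_port: 8080
+       - web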
+
+.. _lamp_group_variables:
+
+Configuration: group variables
+==============================
+
+Group variables are variables that are applied to groups of servers. They can be used in templates and in
+playbooks to customize behavior and to provide easily-changed settings and parameters. They are stored in
+a directory called ``group_vars`` in the same location as your inventory.
+Here is lamp_haproxy's ``group_vars/all`` file. As you might expect, these variables are applied to all of the machines in your inventory:
+
+.. code-block:: yaml
+
+ ---
+ httpd_port: 80
+ ntpserver: 192.0.2.23
+
+This is a YAML file, and you can create lists and dictionaries for more complex variable structures.
+In this case, we are just setting two variables, one for the port for the web server, and one for the
+NTP server that our machines should use for time synchronization.
+
+Here's another group variables file. This is ``group_vars/dbservers`` which applies to the hosts in the ``dbservers`` group:
+
+.. code-block:: yaml
+
+ ---
+ mysqlservice: mysqld
+ mysql_port: 3306
+ dbuser: root
+ dbname: foodb
+ upassword: usersecret
+
+If you look in the example, you will see similar group variables for the ``webservers`` and ``lbservers`` groups.
+
+These variables are used in a variety of places. You can use them in playbooks, like this, in ``roles/db/tasks/main.yml``:
+
+.. code-block:: yaml
+
+ - name: Create Application Database
+ mysql_db:
+ name: "{{ dbname }}"
+ state: present
+
+ - name: Create Application DB User
+ mysql_user:
+ name: "{{ dbuser }}"
+ password: "{{ upassword }}"
+ priv: "*.*:ALL"
+ host: '%'
+ state: present
+
+You can also use these variables in templates, like this, in ``roles/common/templates/ntp.conf.j2``:
+
+.. code-block:: text
+
+ driftfile /var/lib/ntp/drift
+
+ restrict 127.0.0.1
+ restrict -6 ::1
+
+ server {{ ntpserver }}
+
+ includefile /etc/ntp/crypto/pw
+
+ keys /etc/ntp/keys
+
+You can see that the variable substitution syntax of {{ and }} is the same for both templates and variables. The syntax
+inside the curly braces is Jinja2, and you can do all sorts of operations and apply different filters to the
+data inside. In templates, you can also use for loops and if statements to handle more complex situations,
+like this, in ``roles/common/templates/iptables.j2``:
+
+.. code-block:: jinja
+
+ {% if inventory_hostname in groups['dbservers'] %}
+ -A INPUT -p tcp --dport 3306 -j ACCEPT
+ {% endif %}
+
+This is testing to see if the inventory name of the machine we're currently operating on (``inventory_hostname``)
+exists in the inventory group ``dbservers``. If so, that machine will get an iptables ACCEPT line for port 3306.
+
+Here's another example, from the same template:
+
+.. code-block:: jinja
+
+ {% for host in groups['monitoring'] %}
+ -A INPUT -p tcp -s {{ hostvars[host].ansible_default_ipv4.address }} --dport 5666 -j ACCEPT
+ {% endfor %}
+
+This loops over all of the hosts in the group called ``monitoring``, and adds an ACCEPT line for
+each monitoring host's default IPv4 address to the current machine's iptables configuration, so that Nagios can monitor those hosts.
+
+You can learn a lot more about Jinja2 and its capabilities `here <http://jinja.pocoo.org/docs/>`_, and you
+can read more about Ansible variables in general in the :ref:`playbooks_variables` section.
+
+.. _lamp_rolling_upgrade:
+
+The rolling upgrade
+===================
+
+Now you have a fully-deployed site with web servers, a load balancer, and monitoring. How do you update it? This is where Ansible's
+orchestration features come into play. While some applications use the term 'orchestration' to mean basic ordering or command-blasting, Ansible
+refers to orchestration as 'conducting machines like an orchestra', and has a pretty sophisticated engine for it.
+
+Ansible has the capability to do operations on multi-tier applications in a coordinated way, making it easy to orchestrate a sophisticated zero-downtime rolling upgrade of our web application. This is implemented in a separate playbook, called ``rolling_update.yml``.
+
+Looking at the playbook, you can see it is made up of two plays. The first play is very simple and looks like this:
+
+.. code-block:: yaml
+
+ - hosts: monitoring
+ tasks: []
+
+What's going on here, and why are there no tasks? You might know that Ansible gathers "facts" from the servers before operating upon them. These facts are useful for all sorts of things: networking information, OS/distribution versions, and so on. In our case, we need to know something about all of the monitoring servers in our environment before we perform the update, so this simple play forces a fact-gathering step on our monitoring servers. You will see this pattern sometimes, and it's a useful trick to know.
+
+The next part is the update play. The first part looks like this:
+
+.. code-block:: yaml
+
+ - hosts: webservers
+ user: root
+ serial: 1
+
+This is just a normal play definition, operating on the ``webservers`` group. The ``serial`` keyword tells Ansible how many servers to operate on at once. If it's not specified, Ansible will parallelize these operations up to the default "forks" limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once. If you have just a handful of webservers, you may want to set ``serial`` to 1, for one host at a time. If you have 100, maybe you could set ``serial`` to 10, for ten at a time.
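+
+The ``serial`` keyword also accepts a percentage, or a list of batch sizes if you want the rollout to start small and then speed up. A sketch:
+
+.. code-block:: yaml
+
+   - hosts: webservers
+     serial:
+       - 1
+       - 10
+       - "100%"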
+
+Here is the next part of the update play:
+
+.. code-block:: yaml
+
+ pre_tasks:
+ - name: disable nagios alerts for this host webserver service
+ nagios:
+ action: disable_alerts
+ host: "{{ inventory_hostname }}"
+ services: webserver
+ delegate_to: "{{ item }}"
+ loop: "{{ groups.monitoring }}"
+
+ - name: disable the server in haproxy
+ shell: echo "disable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
+ delegate_to: "{{ item }}"
+ loop: "{{ groups.lbservers }}"
+
+.. note::
+ - The ``serial`` keyword forces the play to be executed in 'batches'. Each batch counts as a full play with a subselection of hosts.
+ This has some consequences for play behavior. For example, if all hosts in a batch fail, the play fails, which in turn fails the entire run. You should consider this when combining with ``max_fail_percentage``.
+
+The ``pre_tasks`` keyword just lets you list tasks to run before the roles are called. This will make more sense in a minute. If you look at the names of these tasks, you can see that we are disabling Nagios alerts and then removing the webserver that we are currently updating from the HAProxy load balancing pool.
+
+The ``delegate_to`` and ``loop`` arguments, used together, cause Ansible to loop over each monitoring server and load balancer, and perform that operation (delegate that operation) on the monitoring or load balancing server, "on behalf" of the webserver. In programming terms, the outer loop is the list of web servers, and the inner loop is the list of monitoring servers.
+
+Note that the HAProxy step looks a little complicated. We're using HAProxy in this example because it's freely available, though if you have (for instance) an F5 or Netscaler in your infrastructure (or maybe you have an AWS Elastic IP setup?), you can use Ansible modules to communicate with them instead. You might also wish to use other monitoring modules instead of nagios, but this just shows the main goal of the 'pre tasks' section -- take the server out of monitoring, and take it out of rotation.
+
+The next step simply re-applies the proper roles to the web servers. This will cause any configuration management declarations in ``web`` and ``base-apache`` roles to be applied to the web servers, including an update of the web application code itself. We don't have to do it this way--we could instead just purely update the web application, but this is a good example of how roles can be used to reuse tasks:
+
+.. code-block:: yaml
+
+ roles:
+ - common
+ - base-apache
+ - web
+
+Finally, in the ``post_tasks`` section, we reverse the changes to the Nagios configuration and put the web server back in the load balancing pool:
+
+.. code-block:: yaml
+
+ post_tasks:
+ - name: Enable the server in haproxy
+ shell: echo "enable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
+ delegate_to: "{{ item }}"
+ loop: "{{ groups.lbservers }}"
+
+ - name: re-enable nagios alerts
+ nagios:
+ action: enable_alerts
+ host: "{{ inventory_hostname }}"
+ services: webserver
+ delegate_to: "{{ item }}"
+ loop: "{{ groups.monitoring }}"
+
+Again, if you were using a Netscaler or F5 or Elastic Load Balancer, you would just substitute in the appropriate modules instead.
+
+.. _lamp_end_notes:
+
+Managing other load balancers
+=============================
+
+In this example, we use the simple HAProxy load balancer to front-end the web servers. It's easy to configure and easy to manage. As we have mentioned, Ansible has support for a variety of other load balancers like Citrix NetScaler, F5 BigIP, Amazon Elastic Load Balancers, and more. See the :ref:`working_with_modules` documentation for more information.
+
+For other load balancers, you may need to send shell commands to them (like we do for HAProxy above), or call an API, if your load balancer exposes one. For the load balancers for which Ansible has modules, you may want to run them as a ``local_action`` if they contact an API. You can read more about local actions in the :ref:`playbooks_delegation` section. Should you develop anything interesting for some hardware where there is not a module, it might make for a good contribution!
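+
+As a sketch, a delegated API call might look like the following; the module choice, URL, and endpoint are purely illustrative:
+
+.. code-block:: yaml
+
+   - name: Disable this host in a hypothetical load balancer API
+     ansible.builtin.uri:
+       url: "https://lb.example.com/api/pools/myapplb/{{ inventory_hostname }}/disable"
+       method: POST
+     delegate_to: localhost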
+
+.. _lamp_end_to_end:
+
+Continuous delivery end-to-end
+==============================
+
+Now that you have an automated way to deploy updates to your application, how do you tie it all together? A lot of organizations use a continuous integration tool like `Jenkins <https://jenkins.io/>`_ or `Atlassian Bamboo <https://www.atlassian.com/software/bamboo>`_ to tie the development, test, release, and deploy steps together. You may also want to use a tool like `Gerrit <https://www.gerritcodereview.com/>`_ to add a code review step to commits to either the application code itself, or to your Ansible playbooks, or both.
+
+Depending on your environment, you might be deploying continuously to a test environment, running an integration test battery against that environment, and then deploying automatically into production. Or you could keep it simple and just use the rolling-update for on-demand deployment into test or production specifically. This is all up to you.
+
+For integration with Continuous Integration systems, you can easily trigger playbook runs using the ``ansible-playbook`` command line tool, or, if you're using :ref:`ansible_tower`, the ``tower-cli`` or the built-in REST API. (The tower-cli command 'joblaunch' will spawn a remote job over the REST API and is pretty slick).
+
+This should give you a good idea of how to structure a multi-tier application with Ansible, and orchestrate operations upon that app, with the eventual goal of continuous delivery to your customers. You could extend the idea of the rolling upgrade to lots of different parts of the app; maybe add front-end web servers along with application servers, for instance, or replace the SQL database with something like MongoDB or Riak. Ansible gives you the capability to easily manage complicated environments and automate common operations.
+
+.. seealso::
+
+ `lamp_haproxy example <https://github.com/ansible/ansible-examples/tree/master/lamp_haproxy>`_
+ The lamp_haproxy example discussed here.
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_reuse_roles`
+ An introduction to playbook roles
+ :ref:`playbooks_variables`
+ An introduction to Ansible variables
+ `Ansible.com: Continuous Delivery <https://www.ansible.com/use-cases/continuous-delivery>`_
+ An introduction to Continuous Delivery with Ansible
diff --git a/docs/docsite/rst/user_guide/index.rst b/docs/docsite/rst/user_guide/index.rst
new file mode 100644
index 00000000..e3f2aaf3
--- /dev/null
+++ b/docs/docsite/rst/user_guide/index.rst
@@ -0,0 +1,133 @@
+.. _user_guide_index:
+
+##########
+User Guide
+##########
+
+Welcome to the Ansible User Guide! This guide covers how to work with Ansible, including using the command line, working with inventory, interacting with data, writing tasks, plays, and playbooks, executing playbooks, and consulting reference materials. This page outlines the most common situations and questions that bring readers to this section. If you prefer a traditional table of contents, you can find one at the bottom of the page.
+
+Getting started
+===============
+
+* I'd like an overview of how Ansible works. Where can I find:
+
+ * a :ref:`quick video overview <quickstart_guide>`
+ * a :ref:`text introduction <intro_getting_started>`
+
+* I'm ready to learn about Ansible. What :ref:`basic_concepts` do I need to learn?
+* I want to use Ansible without writing a playbook. How do I use :ref:`ad-hoc commands <intro_adhoc>`?
+
+Writing tasks, plays, and playbooks
+===================================
+
+* I'm writing my first playbook. What should I :ref:`know before I begin <playbooks_tips_and_tricks>`?
+* I have a specific use case for a task or play:
+
+ * Executing tasks with elevated privileges or as a different user with :ref:`become <become>`
+ * Repeating a task once for each item in a list with :ref:`loops <playbooks_loops>`
+ * Executing tasks on a different machine with :ref:`delegation <playbooks_delegation>`
+ * Running tasks only when certain conditions apply with :ref:`conditionals <playbooks_conditionals>` and evaluating conditions with :ref:`tests <playbooks_tests>`
+ * Grouping a set of tasks together with :ref:`blocks <playbooks_blocks>`
+ * Running tasks only when something has changed with :ref:`handlers <handlers>`
+ * Changing the way Ansible :ref:`handles failures <playbooks_error_handling>`
+ * Setting remote :ref:`environment values <playbooks_environment>`
+
+* I want to leverage the power of re-usable Ansible artifacts. How do I create re-usable :ref:`files <playbooks_reuse>` and :ref:`roles <playbooks_reuse_roles>`?
+* I need to incorporate one file or playbook inside another. What is the difference between :ref:`including and importing <playbooks_reuse_includes>`?
+* I want to run selected parts of my playbook. How do I add and use :ref:`tags <tags>`?
+
+Working with inventory
+======================
+
+* I have a list of servers and devices I want to automate. How do I create :ref:`inventory <intro_inventory>` to track them?
+* I use cloud services and constantly have servers and devices starting and stopping. How do I track them using :ref:`dynamic inventory <intro_dynamic_inventory>`?
+* I want to automate specific sub-sets of my inventory. How do I use :ref:`patterns <intro_patterns>`?
+
+Interacting with data
+=====================
+
+* I want to use a single playbook against multiple systems with different attributes. How do I use :ref:`variables <playbooks_variables>` to handle the differences?
+* I want to retrieve data about my systems. How do I access :ref:`Ansible facts <vars_and_facts>`?
+* I need to access sensitive data like passwords with Ansible. How can I protect that data with :ref:`Ansible vault <vault>`?
+* I want to change the data I have, so I can use it in a task. How do I use :ref:`filters <playbooks_filters>` to transform my data?
+* I need to retrieve data from an external datastore. How do I use :ref:`lookups <playbooks_lookups>` to access databases and APIs?
+* I want to ask playbook users to supply data. How do I get user input with :ref:`prompts <playbooks_prompts>`?
+* I use certain modules frequently. How do I streamline my inventory and playbooks by :ref:`setting default values for module parameters <module_defaults>`?
+
+Executing playbooks
+===================
+
+Once your playbook is ready to run, you may need to use these topics:
+
+* Executing "dry run" playbooks with :ref:`check mode and diff <check_mode_dry>`
+* Running playbooks while troubleshooting with :ref:`start and step <playbooks_start_and_step>`
+* Correcting tasks during execution with the :ref:`Ansible debugger <playbook_debugger>`
+* Controlling how my playbook executes with :ref:`strategies and more <playbooks_strategies>`
+* Running tasks, plays, and playbooks :ref:`asynchronously <playbooks_async>`
+
+Advanced features and reference
+===============================
+
+* Using :ref:`advanced syntax <playbooks_advanced_syntax>`
+* Manipulating :ref:`complex data <complex_data_manipulation>`
+* Using :ref:`plugins <plugins_lookup>`
+* Using :ref:`playbook keywords <playbook_keywords>`
+* Using :ref:`command-line tools <command_line_tools>`
+* Rejecting :ref:`specific modules <plugin_filtering_config>`
+* Module :ref:`maintenance <modules_support>`
+
+Traditional Table of Contents
+=============================
+
+If you prefer to read the entire User Guide, here's a list of the pages in order:
+
+.. toctree::
+ :maxdepth: 2
+
+ quickstart
+ basic_concepts
+ intro_getting_started
+ intro_adhoc
+ playbooks
+ playbooks_intro
+ playbooks_best_practices
+ become
+ playbooks_loops
+ playbooks_delegation
+ playbooks_conditionals
+ playbooks_tests
+ playbooks_blocks
+ playbooks_handlers
+ playbooks_error_handling
+ playbooks_environment
+ playbooks_reuse
+ playbooks_reuse_roles
+ playbooks_reuse_includes
+ playbooks_tags
+ intro_inventory
+ intro_dynamic_inventory
+ intro_patterns
+ connection_details
+ command_line_tools
+ playbooks_variables
+ playbooks_vars_facts
+ vault
+ playbooks_filters
+ playbooks_lookups
+ playbooks_prompts
+ playbooks_module_defaults
+ playbooks_checkmode
+ playbooks_startnstep
+ playbooks_debugger
+ playbooks_strategies
+ playbooks_async
+ playbooks_advanced_syntax
+ complex_data_manipulation
+ plugin_filtering_config
+ sample_setup
+ modules
+ ../plugins/plugins
+ ../reference_appendices/playbooks_keywords
+ intro_bsd
+ windows
+ collections_using
diff --git a/docs/docsite/rst/user_guide/intro.rst b/docs/docsite/rst/user_guide/intro.rst
new file mode 100644
index 00000000..d6ff243f
--- /dev/null
+++ b/docs/docsite/rst/user_guide/intro.rst
@@ -0,0 +1,15 @@
+:orphan:
+
+Introduction
+============
+
+Before we start exploring the main components of Ansible -- playbooks, configuration management, deployment, and orchestration -- we'll learn how to get Ansible installed and cover some basic concepts. We'll also go over how to execute ad-hoc commands in parallel across your nodes using /usr/bin/ansible, and see what modules are available in Ansible's core (you can also write your own, which is covered later).
+
+.. toctree::
+ :maxdepth: 1
+
+ ../installation_guide/index
+ ../dev_guide/overview_architecture
+ ../installation_guide/intro_configuration
+ intro_bsd
+ intro_windows
diff --git a/docs/docsite/rst/user_guide/intro_adhoc.rst b/docs/docsite/rst/user_guide/intro_adhoc.rst
new file mode 100644
index 00000000..a7aa8da3
--- /dev/null
+++ b/docs/docsite/rst/user_guide/intro_adhoc.rst
@@ -0,0 +1,206 @@
+.. _intro_adhoc:
+
+*******************************
+Introduction to ad-hoc commands
+*******************************
+
+An Ansible ad-hoc command uses the `/usr/bin/ansible` command-line tool to automate a single task on one or more managed nodes. Ad-hoc commands are quick and easy, but they are not reusable. So why learn about ad-hoc commands first? Ad-hoc commands demonstrate the simplicity and power of Ansible. The concepts you learn here will port over directly to the playbook language. Before reading and executing these examples, please read :ref:`intro_inventory`.
+
+.. contents::
+ :local:
+
+Why use ad-hoc commands?
+========================
+
+Ad-hoc commands are great for tasks you repeat rarely. For example, if you want to power off all the machines in your lab for Christmas vacation, you could execute a quick one-liner in Ansible without writing a playbook. An ad-hoc command looks like this:
+
+.. code-block:: bash
+
+ $ ansible [pattern] -m [module] -a "[module options]"
+
+You can learn more about :ref:`patterns<intro_patterns>` and :ref:`modules<working_with_modules>` on other pages.
+
+Use cases for ad-hoc tasks
+==========================
+
+Ad-hoc tasks can be used to reboot servers, copy files, manage packages and users, and much more. You can use any Ansible module in an ad-hoc task. Ad-hoc tasks, like playbooks, use a declarative model,
+calculating and executing the actions required to reach a specified final state. They
+achieve a form of idempotence by checking the current state before they begin and doing nothing unless the current state is different from the specified final state.
+
+Rebooting servers
+-----------------
+
+The default module for the ``ansible`` command-line utility is the :ref:`ansible.builtin.command module<command_module>`. You can use an ad-hoc task to call the command module and reboot all web servers in Atlanta, 10 at a time. Before Ansible can do this, you must have all servers in Atlanta listed in a group called [atlanta] in your inventory, and you must have working SSH credentials for each machine in that group. To reboot all the servers in the [atlanta] group:
+
+.. code-block:: bash
+
+ $ ansible atlanta -a "/sbin/reboot"
+
+By default Ansible uses only 5 simultaneous processes. If you have more hosts than the value set for the fork count, Ansible will talk to them, but it will take a little longer. To reboot the [atlanta] servers with 10 parallel forks:
+
+.. code-block:: bash
+
+ $ ansible atlanta -a "/sbin/reboot" -f 10
+
+/usr/bin/ansible will default to running from your user account. To connect as a different user:
+
+.. code-block:: bash
+
+ $ ansible atlanta -a "/sbin/reboot" -f 10 -u username
+
+Rebooting probably requires privilege escalation. You can connect to the server as ``username`` and run the command as the ``root`` user by using the :ref:`become <become>` keyword:
+
+.. code-block:: bash
+
+ $ ansible atlanta -a "/sbin/reboot" -f 10 -u username --become [--ask-become-pass]
+
+If you add ``--ask-become-pass`` or ``-K``, Ansible prompts you for the password to use for privilege escalation (sudo/su/pfexec/doas/etc).
+
+.. note::
+ The :ref:`command module <command_module>` does not support extended shell syntax like piping and
+ redirects (although shell variables will always work). If your command requires shell-specific
+ syntax, use the `shell` module instead. Read more about the differences on the
+ :ref:`working_with_modules` page.
+
+So far all our examples have used the default 'command' module. To use a different module, pass ``-m`` for module name. For example, to use the :ref:`ansible.builtin.shell module <shell_module>`:
+
+.. code-block:: bash
+
+ $ ansible raleigh -m ansible.builtin.shell -a 'echo $TERM'
+
+When running any command with the Ansible *ad hoc* CLI (as opposed to
+:ref:`Playbooks <working_with_playbooks>`), pay particular attention to shell quoting rules, so
+the local shell retains the variable and passes it to Ansible.
+For example, using double rather than single quotes in the above example would
+evaluate the variable on the box you were on.
+
+.. _file_transfer:
+
+Managing files
+--------------
+
+An ad-hoc task can harness the power of Ansible and SCP to transfer many files to multiple machines in parallel. To transfer a file directly to all servers in the [atlanta] group:
+
+.. code-block:: bash
+
+ $ ansible atlanta -m ansible.builtin.copy -a "src=/etc/hosts dest=/tmp/hosts"
+
+If you plan to repeat a task like this, use the :ref:`ansible.builtin.template<template_module>` module in a playbook.
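+
+A minimal sketch of what that playbook might look like; the template path is illustrative:
+
+.. code-block:: yaml
+
+   - hosts: atlanta
+     tasks:
+       - name: Template the hosts file onto the atlanta servers
+         ansible.builtin.template:
+           src: templates/hosts.j2
+           dest: /tmp/hosts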
+
+The :ref:`ansible.builtin.file<file_module>` module allows changing ownership and permissions on files. These
+same options can be passed directly to the ``copy`` module as well:
+
+.. code-block:: bash
+
+ $ ansible webservers -m ansible.builtin.file -a "dest=/srv/foo/a.txt mode=600"
+ $ ansible webservers -m ansible.builtin.file -a "dest=/srv/foo/b.txt mode=600 owner=mdehaan group=mdehaan"
+
+The ``file`` module can also create directories, similar to ``mkdir -p``:
+
+.. code-block:: bash
+
+ $ ansible webservers -m ansible.builtin.file -a "dest=/path/to/c mode=755 owner=mdehaan group=mdehaan state=directory"
+
+As well as delete directories (recursively) and delete files:
+
+.. code-block:: bash
+
+ $ ansible webservers -m ansible.builtin.file -a "dest=/path/to/c state=absent"
+
+.. _managing_packages:
+
+Managing packages
+-----------------
+
+You might also use an ad-hoc task to install, update, or remove packages on managed nodes using a package management module like yum. To ensure a package is installed without updating it:
+
+.. code-block:: bash
+
+ $ ansible webservers -m ansible.builtin.yum -a "name=acme state=present"
+
+To ensure a specific version of a package is installed:
+
+.. code-block:: bash
+
+ $ ansible webservers -m ansible.builtin.yum -a "name=acme-1.5 state=present"
+
+To ensure a package is at the latest version:
+
+.. code-block:: bash
+
+ $ ansible webservers -m ansible.builtin.yum -a "name=acme state=latest"
+
+To ensure a package is not installed:
+
+.. code-block:: bash
+
+ $ ansible webservers -m ansible.builtin.yum -a "name=acme state=absent"
+
+Ansible has modules for managing packages under many platforms. If there is no module for your package manager, you can install packages using the command module or create a module for your package manager.
+
+.. _users_and_groups:
+
+Managing users and groups
+-------------------------
+
+You can create, manage, and remove user accounts on your managed nodes with ad-hoc tasks:
+
+.. code-block:: bash
+
+ $ ansible all -m ansible.builtin.user -a "name=foo password=<crypted password here>"
+
+ $ ansible all -m ansible.builtin.user -a "name=foo state=absent"
+
+See the :ref:`ansible.builtin.user <user_module>` module documentation for details on all of the available options, including
+how to manipulate groups and group membership.
+
+.. _managing_services:
+
+Managing services
+-----------------
+
+Ensure a service is started on all webservers:
+
+.. code-block:: bash
+
+ $ ansible webservers -m ansible.builtin.service -a "name=httpd state=started"
+
+Alternatively, restart a service on all webservers:
+
+.. code-block:: bash
+
+ $ ansible webservers -m ansible.builtin.service -a "name=httpd state=restarted"
+
+Ensure a service is stopped:
+
+.. code-block:: bash
+
+ $ ansible webservers -m ansible.builtin.service -a "name=httpd state=stopped"
+
+.. _gathering_facts:
+
+Gathering facts
+---------------
+
+Facts represent discovered variables about a system. You can use facts to implement conditional execution of tasks, or simply to get ad-hoc information about your systems. To see all facts:
+
+.. code-block:: bash
+
+ $ ansible all -m ansible.builtin.setup
+
+You can also filter this output to display only certain facts; see the :ref:`ansible.builtin.setup <setup_module>` module documentation for details.
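+
+For example, the ``filter`` parameter accepts a shell-style wildcard (ad hoc: ``-a "filter=ansible_*_mb"``). The same thing as a playbook task, as a sketch:
+
+.. code-block:: yaml
+
+   - name: Gather only the memory-related facts
+     ansible.builtin.setup:
+       filter: "ansible_*_mb"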
+
+Now that you understand the basic elements of Ansible execution, you are ready to learn to automate repetitive tasks using :ref:`Ansible Playbooks <playbooks_intro>`.
+
+.. seealso::
+
+ :ref:`intro_configuration`
+ All about the Ansible config file
+ :ref:`list_of_collections`
+ Browse existing collections, modules, and plugins
+ :ref:`working_with_playbooks`
+ Using Ansible for configuration management & deployment
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/intro_bsd.rst b/docs/docsite/rst/user_guide/intro_bsd.rst
new file mode 100644
index 00000000..68a62f31
--- /dev/null
+++ b/docs/docsite/rst/user_guide/intro_bsd.rst
@@ -0,0 +1,106 @@
+.. _working_with_bsd:
+
+Ansible and BSD
+===============
+
+Managing BSD machines is different from managing other Unix-like machines. If you have managed nodes running BSD, review these topics.
+
+.. contents::
+ :local:
+
+Connecting to BSD nodes
+-----------------------
+
+Ansible connects to managed nodes using OpenSSH by default. This works on BSD if you use SSH keys for authentication. However, if you use SSH passwords for authentication, Ansible relies on sshpass. Most
+versions of sshpass do not deal well with BSD login prompts, so when using SSH passwords against BSD machines, use ``paramiko`` to connect instead of OpenSSH. You can do this in ansible.cfg globally or you can set it as an inventory/group/host variable. For example:
+
+.. code-block:: text
+
+ [freebsd]
+ mybsdhost1 ansible_connection=paramiko
+
+.. _bootstrap_bsd:
+
+Bootstrapping BSD
+-----------------
+
+Ansible is agentless by default; however, it requires Python on managed nodes. Only the :ref:`raw <raw_module>` module will operate without Python. Although this module can be used to bootstrap Ansible and install Python on BSD variants (see below), it is very limited and the use of Python is required to make full use of Ansible's features.
+
+The following examples install Python, which includes the json library required for full functionality of Ansible.
+On your control machine you can execute the following for most versions of FreeBSD:
+
+.. code-block:: bash
+
+ ansible -m raw -a "pkg install -y python27" mybsdhost1
+
+Or for OpenBSD:
+
+.. code-block:: bash
+
+ ansible -m raw -a "pkg_add python%3.7"
+
+Once this is done, you can use other Ansible modules apart from the ``raw`` module.
+
+.. note::
+ This example demonstrates using ``pkg`` on FreeBSD and ``pkg_add`` on OpenBSD; however, you should be able to substitute the appropriate package tool for your BSD variant. The package name may also differ, so refer to the package list or documentation of the BSD variant you are using for the exact Python package name you intend to install.
+
+.. _BSD_python_location:
+
+Setting the Python interpreter
+------------------------------
+
+To support a variety of Unix-like operating systems and distributions, Ansible cannot always rely on the existing environment or ``env`` variables to locate the correct Python binary. By default, modules point at ``/usr/bin/python`` as this is the most common location. On BSD variants, this path may differ, so it is advised to inform Ansible of the binary's location, through the ``ansible_python_interpreter`` inventory variable. For example:
+
+.. code-block:: text
+
+ [freebsd:vars]
+ ansible_python_interpreter=/usr/local/bin/python2.7
+ [openbsd:vars]
+ ansible_python_interpreter=/usr/local/bin/python3.7
+
+If you use additional plugins beyond those bundled with Ansible, you can set similar variables for ``bash``, ``perl`` or ``ruby``, depending on how the plugin is written. For example:
+
+.. code-block:: text
+
+ [freebsd:vars]
+ ansible_python_interpreter=/usr/local/bin/python
+ ansible_perl_interpreter=/usr/bin/perl5
+
+
+Which modules are available?
+----------------------------
+
+The majority of the core Ansible modules are written for a combination of Unix-like machines and other generic services, so most should function well on the BSDs with the obvious exception of those that are aimed at Linux-only technologies (such as LVG).
+
+Using BSD as the control node
+-----------------------------
+
+Using BSD as the control machine is as simple as installing the Ansible package for your BSD variant or following the ``pip`` or 'from source' instructions.
+
+.. _bsd_facts:
+
+BSD facts
+---------
+
+Ansible gathers facts from the BSDs in a similar manner to Linux machines, but since the data, names and structures can vary for network, disks and other devices, one should expect the output to be slightly different yet still familiar to a BSD administrator.
+
+.. _bsd_contributions:
+
+BSD efforts and contributions
+-----------------------------
+
+BSD support is important to us at Ansible. Even though the majority of our contributors use and target Linux we have an active BSD community and strive to be as BSD-friendly as possible.
+Please feel free to report any issues or incompatibilities you discover with BSD; pull requests with an included fix are also welcome!
+
+.. seealso::
+
+ :ref:`intro_adhoc`
+ Examples of basic commands
+ :ref:`working_with_playbooks`
+ Learning ansible's configuration management language
+ :ref:`developing_modules`
+ How to write modules
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/intro_dynamic_inventory.rst b/docs/docsite/rst/user_guide/intro_dynamic_inventory.rst
new file mode 100644
index 00000000..69016655
--- /dev/null
+++ b/docs/docsite/rst/user_guide/intro_dynamic_inventory.rst
@@ -0,0 +1,249 @@
+.. _intro_dynamic_inventory:
+.. _dynamic_inventory:
+
+******************************
+Working with dynamic inventory
+******************************
+
+.. contents::
+ :local:
+
+If your Ansible inventory fluctuates over time, with hosts spinning up and shutting down in response to business demands, the static inventory solutions described in :ref:`inventory` will not serve your needs. You may need to track hosts from multiple sources: cloud providers, LDAP, `Cobbler <https://cobbler.github.io>`_, and/or enterprise CMDB systems.
+
+Ansible integrates all of these options through a dynamic external inventory system. Ansible supports two ways to connect with external inventory: :ref:`inventory_plugins` and `inventory scripts`.
+
+Inventory plugins take advantage of the most recent updates to the Ansible core code. We recommend plugins over scripts for dynamic inventory. You can :ref:`write your own plugin <developing_inventory>` to connect to additional dynamic inventory sources.
+
+You can still use inventory scripts if you choose. When we implemented inventory plugins, we ensured backwards compatibility through the script inventory plugin. The examples below illustrate how to use inventory scripts.
+
+If you prefer a GUI for handling dynamic inventory, the :ref:`ansible_tower` inventory database syncs with all your dynamic inventory sources, provides web and REST access to the results, and offers a graphical inventory editor. With a database record of all of your hosts, you can correlate past event history and see which hosts have had failures on their last playbook runs.
+
+.. _cobbler_example:
+
+Inventory script example: Cobbler
+=================================
+
+Ansible integrates seamlessly with `Cobbler <https://cobbler.github.io>`_, a Linux installation server originally written by Michael DeHaan and now led by James Cammarata, who works for Ansible.
+
+While primarily used to kick off OS installations and manage DHCP and DNS, Cobbler has a generic
+layer that can represent data for multiple configuration management systems (even at the same time) and serve as a 'lightweight CMDB'.
+
+To tie your Ansible inventory to Cobbler, copy `this script <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/cobbler.py>`_ to ``/etc/ansible`` and ``chmod +x`` the file. Run ``cobblerd`` any time you use Ansible and use the ``-i`` command line option (for example, ``-i /etc/ansible/cobbler.py``) to communicate with Cobbler using Cobbler's XMLRPC API.
+
+Add a ``cobbler.ini`` file in ``/etc/ansible`` so Ansible knows where the Cobbler server is and can take advantage of caching. For example:
+
+.. code-block:: text
+
+ [cobbler]
+
+ # Set Cobbler's hostname or IP address
+ host = http://127.0.0.1/cobbler_api
+
+ # API calls to Cobbler can be slow. For this reason, we cache the results of an API
+ # call. Set this to the path you want cache files to be written to. Two files
+ # will be written to this directory:
+ # - ansible-cobbler.cache
+ # - ansible-cobbler.index
+
+ cache_path = /tmp
+
+ # The number of seconds a cache file is considered valid. After this many
+ # seconds, a new API call will be made, and the cache file will be updated.
+
+ cache_max_age = 900
+
+
+First test the script by running ``/etc/ansible/cobbler.py`` directly. You should see some JSON data output, but it may not have anything in it just yet.
+
+Let's explore what this does. In Cobbler, assume a scenario somewhat like the following:
+
+.. code-block:: bash
+
+ cobbler profile add --name=webserver --distro=CentOS6-x86_64
+ cobbler profile edit --name=webserver --mgmt-classes="webserver" --ksmeta="a=2 b=3"
+ cobbler system edit --name=foo --dns-name="foo.example.com" --mgmt-classes="atlanta" --ksmeta="c=4"
+ cobbler system edit --name=bar --dns-name="bar.example.com" --mgmt-classes="atlanta" --ksmeta="c=5"
+
+In the example above, the system 'foo.example.com' is addressable by Ansible directly, but is also addressable when using the group names 'webserver' or 'atlanta'. Since Ansible uses SSH, it contacts system foo as 'foo.example.com', never just 'foo'. Similarly, if you tried "ansible foo", it would not find the system... but "ansible 'foo*'" would, because the system's DNS name starts with 'foo'.
+
+The script provides more than host and group info. As a bonus, when the 'setup' module is run (which happens automatically when using playbooks), the variables 'a', 'b', and 'c' will all be auto-populated in the templates:
+
+.. code-block:: text
+
+ # file: /srv/motd.j2
+ Welcome, I am templated with a value of a={{ a }}, b={{ b }}, and c={{ c }}
+
+Which could be executed just like this:
+
+.. code-block:: bash
+
+ ansible webserver -m setup
+ ansible webserver -m template -a "src=/tmp/motd.j2 dest=/etc/motd"
+
+.. note::
+ The name 'webserver' came from Cobbler, as did the variables for
+ the config file. You can still pass in your own variables like
+ normal in Ansible, but variables from the external inventory script
+ will override any that have the same name.
+
+So, with the template above (``motd.j2``), this results in the following data being written to ``/etc/motd`` for system 'foo':
+
+.. code-block:: text
+
+ Welcome, I am templated with a value of a=2, b=3, and c=4
+
+And on system 'bar' (bar.example.com):
+
+.. code-block:: text
+
+ Welcome, I am templated with a value of a=2, b=3, and c=5
+
+And technically, though there is no major good reason to do it, this also works:
+
+.. code-block:: bash
+
+ ansible webserver -m ansible.builtin.shell -a "echo {{ a }}"
+
+So, in other words, you can use those variables in arguments/actions as well.
+
+.. _openstack_example:
+
+Inventory script example: OpenStack
+===================================
+
+If you use an OpenStack-based cloud, instead of manually maintaining your own inventory file, you can use the ``openstack_inventory.py`` dynamic inventory to pull information about your compute instances directly from OpenStack.
+
+You can download the latest version of the OpenStack inventory script `here <https://raw.githubusercontent.com/openstack/ansible-collections-openstack/master/scripts/inventory/openstack_inventory.py>`_.
+
+You can use the inventory script explicitly (by passing the `-i openstack_inventory.py` argument to Ansible) or implicitly (by placing the script at `/etc/ansible/hosts`).
+
+Explicit use of OpenStack inventory script
+------------------------------------------
+
+Download the latest version of the OpenStack dynamic inventory script and make it executable::
+
+ wget https://raw.githubusercontent.com/openstack/ansible-collections-openstack/master/scripts/inventory/openstack_inventory.py
+ chmod +x openstack_inventory.py
+
+.. note::
+ Do not name it `openstack.py`. This name will conflict with imports from openstacksdk.
+
+Source an OpenStack RC file:
+
+.. code-block:: bash
+
+ source openstack.rc
+
+.. note::
+
+ An OpenStack RC file contains the environment variables required by the client tools to establish a connection with the cloud provider, such as the authentication URL, user name, password and region name. For more information on how to download, create or source an OpenStack RC file, please refer to `Set environment variables using the OpenStack RC file <https://docs.openstack.org/user-guide/common/cli_set_environment_variables_using_openstack_rc.html>`_.
+
+You can confirm the file has been successfully sourced by running a simple command, such as `nova list` and ensuring it returns no errors.
+
+.. note::
+
+ The OpenStack command line clients are required to run the `nova list` command. For more information on how to install them, please refer to `Install the OpenStack command-line clients <https://docs.openstack.org/user-guide/common/cli_install_openstack_command_line_clients.html>`_.
+
+You can test the OpenStack dynamic inventory script manually to confirm it is working as expected::
+
+ ./openstack_inventory.py --list
+
+After a few moments you should see some JSON output with information about your compute instances.
+
+Once you confirm the dynamic inventory script is working as expected, you can tell Ansible to use the `openstack_inventory.py` script as an inventory file, as illustrated below:
+
+.. code-block:: bash
+
+ ansible -i openstack_inventory.py all -m ansible.builtin.ping
+
+Implicit use of OpenStack inventory script
+------------------------------------------
+
+Download the latest version of the OpenStack dynamic inventory script, make it executable and copy it to `/etc/ansible/hosts`:
+
+.. code-block:: bash
+
+ wget https://raw.githubusercontent.com/openstack/ansible-collections-openstack/master/scripts/inventory/openstack_inventory.py
+ chmod +x openstack_inventory.py
+ sudo cp openstack_inventory.py /etc/ansible/hosts
+
+Download the sample configuration file, modify it to suit your needs and copy it to `/etc/ansible/openstack.yml`:
+
+.. code-block:: bash
+
+ wget https://raw.githubusercontent.com/openstack/ansible-collections-openstack/master/scripts/inventory/openstack.yml
+ vi openstack.yml
+ sudo cp openstack.yml /etc/ansible/
+
+You can test the OpenStack dynamic inventory script manually to confirm it is working as expected:
+
+.. code-block:: bash
+
+ /etc/ansible/hosts --list
+
+After a few moments you should see some JSON output with information about your compute instances.
+
+Refreshing the cache
+--------------------
+
+Note that the OpenStack dynamic inventory script will cache results to avoid repeated API calls. To explicitly clear the cache, you can run the openstack_inventory.py (or hosts) script with the ``--refresh`` parameter:
+
+.. code-block:: bash
+
+ ./openstack_inventory.py --refresh --list
+
+.. _other_inventory_scripts:
+
+Other inventory scripts
+=======================
+
+In Ansible 2.10 and later, inventory scripts moved to their associated collections. Many are now in the `community.general scripts/inventory directory <https://github.com/ansible-collections/community.general/tree/main/scripts/inventory>`_. We recommend you use :ref:`inventory_plugins` instead.
+
+.. _using_multiple_sources:
+
+Using inventory directories and multiple inventory sources
+==========================================================
+
+If the location given to ``-i`` in Ansible is a directory (or is configured that way in ``ansible.cfg``), Ansible can use multiple inventory sources
+at the same time. When doing so, it is possible to mix both dynamic and statically managed inventory sources in the same Ansible run. Instant
+hybrid cloud!
+
+In an inventory directory, executable files are treated as dynamic inventory sources and most other files as static sources. Files which end with any of the following are ignored:
+
+.. code-block:: text
+
+ ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo
+
+You can replace this list with your own selection by configuring an ``inventory_ignore_extensions`` list in ``ansible.cfg``, or setting the :envvar:`ANSIBLE_INVENTORY_IGNORE` environment variable. The value in either case must be a comma-separated list of patterns, as shown above.
+
+Any ``group_vars`` and ``host_vars`` subdirectories in an inventory directory are interpreted as expected, making inventory directories a powerful way to organize different sets of configurations. See :ref:`using_multiple_inventory_sources` for more information.
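+
+A sketch of such a directory; the file names are illustrative:
+
+.. code-block:: text
+
+   inventory/
+     openstack_inventory.py     # executable, treated as a dynamic source
+     static_hosts               # static inventory file
+     group_vars/
+       all.yml
+       webservers.yml
+     host_vars/
+       web1.example.com.yml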
+
+.. _static_groups_of_dynamic:
+
+Static groups of dynamic groups
+===============================
+
+When defining groups of groups in the static inventory file, the child groups
+must also be defined in the static inventory file, otherwise ansible returns an
+error. If you want to define a static group of dynamic child groups, define
+the dynamic groups as empty in the static inventory file. For example:
+
+.. code-block:: text
+
+ [tag_Name_staging_foo]
+
+ [tag_Name_staging_bar]
+
+ [staging:children]
+ tag_Name_staging_foo
+ tag_Name_staging_bar
+
+
+.. seealso::
+
+ :ref:`intro_inventory`
+ All about static inventory files
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/intro_getting_started.rst b/docs/docsite/rst/user_guide/intro_getting_started.rst
new file mode 100644
index 00000000..0fde0281
--- /dev/null
+++ b/docs/docsite/rst/user_guide/intro_getting_started.rst
@@ -0,0 +1,139 @@
+.. _intro_getting_started:
+
+***************
+Getting Started
+***************
+
+Now that you have read the :ref:`installation guide<installation_guide>` and installed Ansible on a control node, you are ready to learn how Ansible works. A basic Ansible command or playbook:
+
+* selects machines to execute against from inventory
+* connects to those machines (or network devices, or other managed nodes), usually over SSH
+* copies one or more modules to the remote machines and starts execution there
+
+Ansible can do much more, but you should understand the most common use case before exploring all the powerful configuration, deployment, and orchestration features of Ansible. This page illustrates the basic process with a simple inventory and an ad-hoc command. Once you understand how Ansible works, you can read more details about :ref:`ad-hoc commands<intro_adhoc>`, organize your infrastructure with :ref:`inventory<intro_inventory>`, and harness the full power of Ansible with :ref:`playbooks<playbooks_intro>`.
+
+.. contents::
+ :local:
+
+Selecting machines from inventory
+=================================
+
+Ansible reads information about which machines you want to manage from your inventory. Although you can pass an IP address to an ad-hoc command, you need inventory to take advantage of the full flexibility and repeatability of Ansible.
+
+Action: create a basic inventory
+--------------------------------
+For this basic inventory, edit (or create) ``/etc/ansible/hosts`` and add a few remote systems to it. For this example, use either IP addresses or FQDNs:
+
+.. code-block:: text
+
+ 192.0.2.50
+ aserver.example.org
+ bserver.example.org
+
+Beyond the basics
+-----------------
+Your inventory can store much more than IPs and FQDNs. You can create :ref:`aliases<inventory_aliases>`, set variable values for a single host with :ref:`host vars<host_variables>`, or set variable values for multiple hosts with :ref:`group vars<group_variables>`.
+
+.. _remote_connection_information:
+
+Connecting to remote nodes
+==========================
+
+Ansible communicates with remote machines over the `SSH protocol <https://www.ssh.com/ssh/protocol/>`_. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.
+
+Action: check your SSH connections
+----------------------------------
+Confirm that you can connect using SSH to all the nodes in your inventory using the same username. If necessary, add your public SSH key to the ``authorized_keys`` file on those systems.
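+
+For example, assuming the hosts from the example inventory above, you might check access like this (``ssh-copy-id`` is one convenient way to install your public key):
+
+.. code-block:: bash
+
+    $ ssh-copy-id aserver.example.org   # install your public key on the remote host if needed
+    $ ssh aserver.example.org           # confirm you can log in with your current user name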
+
+Beyond the basics
+-----------------
+You can override the default remote user name in several ways, including:
+
+* passing the ``-u`` parameter at the command line
+* setting user information in your inventory file
+* setting user information in your configuration file
+* setting environment variables
+
+See :ref:`general_precedence_rules` for details on the (sometimes unintuitive) precedence of each method of passing user information. You can read more about connections in :ref:`connections`.
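+
+For example, a sketch of setting the remote user for a single host in your inventory (``bruce`` is just a placeholder user name):
+
+.. code-block:: text
+
+    aserver.example.org ansible_user=bruce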
+
+Copying and executing modules
+=============================
+
+Once it has connected, Ansible transfers the modules required by your command or playbook to the remote machine(s) for execution.
+
+Action: run your first Ansible commands
+---------------------------------------
+Use the ping module to ping all the nodes in your inventory:
+
+.. code-block:: bash
+
+ $ ansible all -m ping
+
+Now run a live command on all of your nodes:
+
+.. code-block:: bash
+
+ $ ansible all -a "/bin/echo hello"
+
+You should see output for each host in your inventory, similar to this:
+
+.. code-block:: ansible-output
+
+ aserver.example.org | SUCCESS => {
+ "ansible_facts": {
+ "discovered_interpreter_python": "/usr/bin/python"
+ },
+ "changed": false,
+ "ping": "pong"
+ }
+
+Beyond the basics
+-----------------
+By default Ansible uses SFTP to transfer files. If the machine or device you want to manage does not support SFTP, you can switch to SCP mode in :ref:`intro_configuration`. The files are placed in a temporary directory and executed from there.
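+
+For example, one way to switch is in ``ansible.cfg`` (a sketch; see :ref:`intro_configuration` for the authoritative option names):
+
+.. code-block:: ini
+
+    [ssh_connection]
+    scp_if_ssh = True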
+
+If you need privilege escalation (sudo and similar) to run a command, pass the ``become`` flags:
+
+.. code-block:: bash
+
+ # as bruce
+ $ ansible all -m ping -u bruce
+ # as bruce, sudoing to root (sudo is default method)
+ $ ansible all -m ping -u bruce --become
+ # as bruce, sudoing to batman
+ $ ansible all -m ping -u bruce --become --become-user batman
+
+You can read more about privilege escalation in :ref:`become`.
+
+Congratulations! You have contacted your nodes using Ansible. You used a basic inventory file and an ad-hoc command to direct Ansible to connect to specific remote nodes, copy a module file there and execute it, and return output. You have a fully working infrastructure.
+
+Resources
+=================================
+- `Product Demos <https://github.com/ansible/product-demos>`_
+- `Katacoda <https://katacoda.com/rhel-labs>`_
+- `Workshops <https://github.com/ansible/workshops>`_
+- `Ansible Examples <https://github.com/ansible/ansible-examples>`_
+- `Ansible Baseline <https://github.com/ansible/ansible-baseline>`_
+
+Next steps
+==========
+Next you can read about more real-world cases in :ref:`intro_adhoc`,
+explore what you can do with different modules, or read about the Ansible
+:ref:`working_with_playbooks` language. Ansible is not just about running commands, it
+also has powerful configuration management and deployment features.
+
+.. seealso::
+
+ :ref:`intro_inventory`
+ More information about inventory
+ :ref:`intro_adhoc`
+ Examples of basic commands
+ :ref:`working_with_playbooks`
+ Learning Ansible's configuration management language
+ `Ansible Demos <https://github.com/ansible/product-demos>`_
+ Demonstrations of different Ansible usecases
+ `RHEL Labs <https://katacoda.com/rhel-labs>`_
+ Labs to provide further knowledge on different topics
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/intro_inventory.rst b/docs/docsite/rst/user_guide/intro_inventory.rst
new file mode 100644
index 00000000..0b8b002c
--- /dev/null
+++ b/docs/docsite/rst/user_guide/intro_inventory.rst
@@ -0,0 +1,788 @@
+.. _intro_inventory:
+.. _inventory:
+
+***************************
+How to build your inventory
+***************************
+
+Ansible works against multiple managed nodes or "hosts" in your infrastructure at the same time, using a list or group of lists known as inventory. Once your inventory is defined, you use :ref:`patterns <intro_patterns>` to select the hosts or groups you want Ansible to run against.
+
+The default location for inventory is a file called ``/etc/ansible/hosts``. You can specify a different inventory file at the command line using the ``-i <path>`` option. You can also use multiple inventory files at the same time, and/or pull inventory from dynamic or cloud sources or different formats (YAML, ini, and so on), as described in :ref:`intro_dynamic_inventory`.
+Introduced in version 2.4, Ansible has :ref:`inventory_plugins` to make this flexible and customizable.
+
+.. contents::
+ :local:
+
+.. _inventoryformat:
+
+Inventory basics: formats, hosts, and groups
+============================================
+
+The inventory file can be in one of many formats, depending on the inventory plugins you have.
+The most common formats are INI and YAML. A basic INI ``/etc/ansible/hosts`` might look like this:
+
+.. code-block:: text
+
+ mail.example.com
+
+ [webservers]
+ foo.example.com
+ bar.example.com
+
+ [dbservers]
+ one.example.com
+ two.example.com
+ three.example.com
+
+The headings in brackets are group names, which are used in classifying hosts
+and deciding what hosts you are controlling at what times and for what purpose.
+Group names should follow the same guidelines as :ref:`valid_variable_names`.
+
+Here's that same basic inventory file in YAML format:
+
+.. code-block:: yaml
+
+ all:
+ hosts:
+ mail.example.com:
+ children:
+ webservers:
+ hosts:
+ foo.example.com:
+ bar.example.com:
+ dbservers:
+ hosts:
+ one.example.com:
+ two.example.com:
+ three.example.com:
+
+.. _default_groups:
+
+Default groups
+--------------
+
+There are two default groups: ``all`` and ``ungrouped``. The ``all`` group contains every host.
+The ``ungrouped`` group contains all hosts that don't have another group aside from ``all``.
+Every host will always belong to at least 2 groups (``all`` and ``ungrouped`` or ``all`` and some other group). Though ``all`` and ``ungrouped`` are always present, they can be implicit and not appear in group listings like ``group_names``.
+
+.. _host_multiple_groups:
+
+Hosts in multiple groups
+------------------------
+
+You can (and probably will) put each host in more than one group. For example a production webserver in a datacenter in Atlanta might be included in groups called [prod] and [atlanta] and [webservers]. You can create groups that track:
+
+* What - An application, stack or microservice (for example, database servers, web servers, and so on).
+* Where - A datacenter or region, to talk to local DNS, storage, and so on (for example, east, west).
+* When - The development stage, to avoid testing on production resources (for example, prod, test).
+
+Extending the previous YAML inventory to include what, when, and where would look like:
+
+.. code-block:: yaml
+
+ all:
+ hosts:
+ mail.example.com:
+ children:
+ webservers:
+ hosts:
+ foo.example.com:
+ bar.example.com:
+ dbservers:
+ hosts:
+ one.example.com:
+ two.example.com:
+ three.example.com:
+ east:
+ hosts:
+ foo.example.com:
+ one.example.com:
+ two.example.com:
+ west:
+ hosts:
+ bar.example.com:
+ three.example.com:
+ prod:
+ hosts:
+ foo.example.com:
+ one.example.com:
+ two.example.com:
+ test:
+ hosts:
+ bar.example.com:
+ three.example.com:
+
+You can see that ``one.example.com`` exists in the ``dbservers``, ``east``, and ``prod`` groups.
+
+You can also use nested groups to simplify ``prod`` and ``test`` in this inventory, for the same result:
+
+.. code-block:: yaml
+
+ all:
+ hosts:
+ mail.example.com:
+ children:
+ webservers:
+ hosts:
+ foo.example.com:
+ bar.example.com:
+ dbservers:
+ hosts:
+ one.example.com:
+ two.example.com:
+ three.example.com:
+ east:
+ hosts:
+ foo.example.com:
+ one.example.com:
+ two.example.com:
+ west:
+ hosts:
+ bar.example.com:
+ three.example.com:
+ prod:
+ children:
+ east:
+ test:
+ children:
+ west:
+
+You can find more examples on how to organize your inventories and group your hosts in :ref:`inventory_setup_examples`.
+
+Adding ranges of hosts
+----------------------
+
+If you have a lot of hosts with a similar pattern, you can add them as a range rather than listing each hostname separately:
+
+In INI:
+
+.. code-block:: text
+
+ [webservers]
+ www[01:50].example.com
+
+In YAML:
+
+.. code-block:: yaml
+
+ ...
+ webservers:
+ hosts:
+ www[01:50].example.com:
+
+You can specify a stride (increments between sequence numbers) when defining a numeric range of hosts:
+
+In INI:
+
+.. code-block:: text
+
+ [webservers]
+ www[01:50:2].example.com
+
+In YAML:
+
+.. code-block:: yaml
+
+ ...
+ webservers:
+ hosts:
+ www[01:50:2].example.com:
+
+For numeric patterns, leading zeros can be included or removed, as desired. Ranges are inclusive. You can also define alphabetic ranges:
+
+.. code-block:: text
+
+ [databases]
+ db-[a:f].example.com
+
+.. _variables_in_inventory:
+
+Adding variables to inventory
+=============================
+
+You can store variable values that relate to a specific host or group in inventory. To start with, you may add variables directly to the hosts and groups in your main inventory file. As you add more and more managed nodes to your Ansible inventory, however, you will likely want to store variables in separate host and group variable files. See :ref:`define_variables_in_inventory` for details.
+
+.. _host_variables:
+
+Assigning a variable to one machine: host variables
+===================================================
+
+You can easily assign a variable to a single host, then use it later in playbooks. In INI:
+
+.. code-block:: text
+
+ [atlanta]
+ host1 http_port=80 maxRequestsPerChild=808
+ host2 http_port=303 maxRequestsPerChild=909
+
+In YAML:
+
+.. code-block:: yaml
+
+ atlanta:
+ host1:
+ http_port: 80
+ maxRequestsPerChild: 808
+ host2:
+ http_port: 303
+ maxRequestsPerChild: 909
+
+Unique values like non-standard SSH ports work well as host variables. You can add them to your Ansible inventory by adding the port number after the hostname with a colon:
+
+.. code-block:: text
+
+ badwolf.example.com:5309
+
+Connection variables also work well as host variables:
+
+.. code-block:: text
+
+ [targets]
+
+ localhost ansible_connection=local
+ other1.example.com ansible_connection=ssh ansible_user=myuser
+ other2.example.com ansible_connection=ssh ansible_user=myotheruser
+
+.. note:: If you list non-standard SSH ports in your SSH config file, the ``openssh`` connection will find and use them, but the ``paramiko`` connection will not.
+
+.. _inventory_aliases:
+
+Inventory aliases
+-----------------
+
+You can also define aliases in your inventory:
+
+In INI:
+
+.. code-block:: text
+
+ jumper ansible_port=5555 ansible_host=192.0.2.50
+
+In YAML:
+
+.. code-block:: yaml
+
+ ...
+ hosts:
+ jumper:
+ ansible_port: 5555
+ ansible_host: 192.0.2.50
+
+In the above example, running Ansible against the host alias "jumper" will connect to 192.0.2.50 on port 5555. See :ref:`behavioral inventory parameters <behavioral_parameters>` to further customize the connection to hosts.
+
+.. note::
+ Values passed in the INI format using the ``key=value`` syntax are interpreted differently depending on where they are declared:
+
+ * When declared inline with the host, INI values are interpreted as Python literal structures (strings, numbers, tuples, lists, dicts, booleans, None). Host lines accept multiple ``key=value`` parameters per line. Therefore they need a way to indicate that a space is part of a value rather than a separator; quoting the value (single or double quotes) does this.
+
+ * When declared in a ``:vars`` section, INI values are interpreted as strings. For example ``var=FALSE`` would create a string equal to 'FALSE'. Unlike host lines, ``:vars`` sections accept only a single entry per line, so everything after the ``=`` must be the value for the entry.
+
+ * If a variable value set in an INI inventory must be a certain type (for example, a string or a boolean value), always specify the type with a filter in your task. Do not rely on types set in INI inventories when consuming variables. See the example after this note.
+
+ * Consider using YAML format for inventory sources to avoid confusion on the actual type of a variable. The YAML inventory plugin processes variable values consistently and correctly.
+
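+For example, a minimal sketch of casting an INI inventory value before using it (``use_tls`` is a hypothetical variable name):
+
+.. code-block:: yaml
+
+    - name: Report the flag only when the inventory value is truthy
+      ansible.builtin.debug:
+        msg: "TLS is enabled"
+      when: use_tls | bool
+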
+Generally speaking, this is not the best way to define variables that describe your system policy. Setting variables in the main inventory file is only a shorthand. See :ref:`splitting_out_vars` for guidelines on storing variable values in individual files in the 'host_vars' directory.
+
+.. _group_variables:
+
+Assigning a variable to many machines: group variables
+======================================================
+
+If all hosts in a group share a variable value, you can apply that variable to an entire group at once. In INI:
+
+.. code-block:: text
+
+ [atlanta]
+ host1
+ host2
+
+ [atlanta:vars]
+ ntp_server=ntp.atlanta.example.com
+ proxy=proxy.atlanta.example.com
+
+In YAML:
+
+.. code-block:: yaml
+
+ atlanta:
+ hosts:
+ host1:
+ host2:
+ vars:
+ ntp_server: ntp.atlanta.example.com
+ proxy: proxy.atlanta.example.com
+
+Group variables are a convenient way to apply variables to multiple hosts at once. Before executing, however, Ansible always flattens variables, including inventory variables, to the host level. If a host is a member of multiple groups, Ansible reads variable values from all of those groups. If you assign different values to the same variable in different groups, Ansible chooses which value to use based on internal :ref:`rules for merging <how_we_merge>`.
+
+.. _subgroups:
+
+Inheriting variable values: group variables for groups of groups
+----------------------------------------------------------------
+
+You can make groups of groups using the ``:children`` suffix in INI or the ``children:`` entry in YAML.
+You can apply variables to these groups of groups using ``:vars`` or ``vars:``:
+
+In INI:
+
+.. code-block:: text
+
+ [atlanta]
+ host1
+ host2
+
+ [raleigh]
+ host2
+ host3
+
+ [southeast:children]
+ atlanta
+ raleigh
+
+ [southeast:vars]
+ some_server=foo.southeast.example.com
+ halon_system_timeout=30
+ self_destruct_countdown=60
+ escape_pods=2
+
+ [usa:children]
+ southeast
+ northeast
+ southwest
+ northwest
+
+In YAML:
+
+.. code-block:: yaml
+
+ all:
+ children:
+ usa:
+ children:
+ southeast:
+ children:
+ atlanta:
+ hosts:
+ host1:
+ host2:
+ raleigh:
+ hosts:
+ host2:
+ host3:
+ vars:
+ some_server: foo.southeast.example.com
+ halon_system_timeout: 30
+ self_destruct_countdown: 60
+ escape_pods: 2
+ northeast:
+ northwest:
+ southwest:
+
+If you need to store lists or hash data, or prefer to keep host and group specific variables separate from the inventory file, see :ref:`splitting_out_vars`.
+
+Child groups have a few properties to note:
+
+ - Any host that is a member of a child group is automatically a member of the parent group.
+ - A child group's variables have higher precedence than (override) a parent group's variables.
+ - Groups can have multiple parents and children, but not circular relationships.
+ - Hosts can be in multiple groups, but there will only be **one** instance of a host, merging the data from all of its groups.
+
+.. _splitting_out_vars:
+
+Organizing host and group variables
+===================================
+
+Although you can store variables in the main inventory file, storing separate host and group variables files may help you organize your variable values more easily. Host and group variable files must use YAML syntax. Valid file extensions include '.yml', '.yaml', '.json', or no file extension.
+See :ref:`yaml_syntax` if you are new to YAML.
+
+Ansible loads host and group variable files by searching paths relative to the inventory file or the playbook file. If your inventory file at ``/etc/ansible/hosts`` contains a host named 'foosball' that belongs to two groups, 'raleigh' and 'webservers', that host will use variables in YAML files at the following locations:
+
+.. code-block:: bash
+
+ /etc/ansible/group_vars/raleigh # can optionally end in '.yml', '.yaml', or '.json'
+ /etc/ansible/group_vars/webservers
+ /etc/ansible/host_vars/foosball
+
+For example, if you group hosts in your inventory by datacenter, and each datacenter uses its own NTP server and database server, you can create a file called ``/etc/ansible/group_vars/raleigh`` to store the variables for the ``raleigh`` group:
+
+.. code-block:: yaml
+
+ ---
+ ntp_server: acme.example.org
+ database_server: storage.example.org
+
+You can also create *directories* named after your groups or hosts. Ansible will read all the files in these directories in lexicographical order. An example with the 'raleigh' group:
+
+.. code-block:: bash
+
+ /etc/ansible/group_vars/raleigh/db_settings
+ /etc/ansible/group_vars/raleigh/cluster_settings
+
+All hosts in the 'raleigh' group will have the variables defined in these files
+available to them. This can be very useful to keep your variables organized when a single
+file gets too big, or when you want to use :ref:`Ansible Vault<playbooks_vault>` on some group variables.
+
+You can also add ``group_vars/`` and ``host_vars/`` directories to your playbook directory. The ``ansible-playbook`` command looks for these directories in the current working directory by default. Other Ansible commands (for example, ``ansible``, ``ansible-console``, and so on) will only look for ``group_vars/`` and ``host_vars/`` in the inventory directory. If you want other commands to load group and host variables from a playbook directory, you must provide the ``--playbook-dir`` option on the command line.
+If you load inventory files from both the playbook directory and the inventory directory, variables in the playbook directory will override variables set in the inventory directory.
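+
+For example, a sketch of pointing the ``ansible`` command at a playbook directory (paths here are illustrative):
+
+.. code-block:: bash
+
+    ansible webservers -m ping -i inventory/hosts --playbook-dir ./deploy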
+
+Keeping your inventory file and variables in a git repo (or other version control)
+is an excellent way to track changes to your inventory and host variables.
+
+.. _how_we_merge:
+
+How variables are merged
+========================
+
+By default variables are merged/flattened to the specific host before a play is run. This keeps Ansible focused on the Host and Task, so groups don't really survive outside of inventory and host matching. By default, Ansible overwrites variables including the ones defined for a group and/or host (see :ref:`DEFAULT_HASH_BEHAVIOUR<DEFAULT_HASH_BEHAVIOUR>`). The order/precedence is (from lowest to highest):
+
+- all group (because it is the 'parent' of all other groups)
+- parent group
+- child group
+- host
+
+By default Ansible merges groups at the same parent/child level in ASCII order, and the last group loaded overwrites the previous groups. For example, ``a_group`` is merged with ``b_group``, and any matching ``b_group`` variables overwrite the ones in ``a_group``.
+
+You can change this behavior by setting the group variable ``ansible_group_priority`` to change the merge order for groups of the same level (after the parent/child order is resolved). The larger the number, the later it will be merged, giving it higher priority. This variable defaults to ``1`` if not set. For example:
+
+.. code-block:: yaml
+
+ a_group:
+ testvar: a
+ ansible_group_priority: 10
+ b_group:
+ testvar: b
+
+In this example, if both groups had the same priority, the result would normally be ``testvar == b``, but since we are giving ``a_group`` a higher priority the result will be ``testvar == a``.
+
+.. note:: ``ansible_group_priority`` can only be set in the inventory source and not in group_vars/, as the variable is used in the loading of group_vars.
+
+.. _using_multiple_inventory_sources:
+
+Using multiple inventory sources
+================================
+
+You can target multiple inventory sources (directories, dynamic inventory scripts
+or files supported by inventory plugins) at the same time by giving multiple inventory parameters from the command
+line or by configuring :envvar:`ANSIBLE_INVENTORY`. This can be useful when you want to target normally
+separate environments, like staging and production, at the same time for a specific action.
+
+Target two sources from the command line like this:
+
+.. code-block:: bash
+
+ ansible-playbook get_logs.yml -i staging -i production
+
+Keep in mind that if there are variable conflicts in the inventories, they are resolved according
+to the rules described in :ref:`how_we_merge` and :ref:`ansible_variable_precedence`.
+The merging order is controlled by the order of the inventory source parameters.
+If ``[all:vars]`` in staging inventory defines ``myvar = 1``, but production inventory defines ``myvar = 2``,
+the playbook will be run with ``myvar = 2``. The result would be reversed if the playbook was run with
+``-i production -i staging``.
+
+**Aggregating inventory sources with a directory**
+
+You can also create an inventory by combining multiple inventory sources and source types under a directory.
+This can be useful for combining static and dynamic hosts and managing them as one inventory.
+The following inventory combines an inventory plugin source, a dynamic inventory script,
+and a file with static hosts:
+
+.. code-block:: text
+
+ inventory/
+ openstack.yml # configure inventory plugin to get hosts from Openstack cloud
+ dynamic-inventory.py # add additional hosts with dynamic inventory script
+ static-inventory # add static hosts and groups
+ group_vars/
+ all.yml # assign variables to all hosts
+
+You can target this inventory directory simply like this:
+
+.. code-block:: bash
+
+ ansible-playbook example.yml -i inventory
+
+It can be useful to control the merging order of the inventory sources if there are variable
+conflicts or 'group of groups' dependencies on the other inventory sources. The inventories
+are merged in ASCII order according to the filenames, so the result can
+be controlled by adding prefixes to the files:
+
+.. code-block:: text
+
+ inventory/
+ 01-openstack.yml # configure inventory plugin to get hosts from Openstack cloud
+ 02-dynamic-inventory.py # add additional hosts with dynamic inventory script
+ 03-static-inventory # add static hosts
+ group_vars/
+ all.yml # assign variables to all hosts
+
+If ``01-openstack.yml`` defines ``myvar = 1`` for the group ``all``, ``02-dynamic-inventory.py`` defines ``myvar = 2``,
+and ``03-static-inventory`` defines ``myvar = 3``, the playbook will be run with ``myvar = 3``.
+
+For more details on inventory plugins and dynamic inventory scripts see :ref:`inventory_plugins` and :ref:`intro_dynamic_inventory`.
+
+.. _behavioral_parameters:
+
+Connecting to hosts: behavioral inventory parameters
+====================================================
+
+As described above, setting the following variables controls how Ansible interacts with remote hosts.
+
+Host connection:
+
+.. include:: shared_snippets/SSH_password_prompt.txt
+
+ansible_connection
+ Connection type to the host. This can be the name of any of Ansible's connection plugins. SSH protocol types are ``smart``, ``ssh`` or ``paramiko``. The default is ``smart``. Non-SSH based types are described in the next section.
+
+General for all connections:
+
+ansible_host
+ The name of the host to connect to, if different from the alias you wish to give to it.
+ansible_port
+ The connection port number, if not the default (22 for ssh)
+ansible_user
+ The user name to use when connecting to the host
+ansible_password
+ The password to use to authenticate to the host (never store this variable in plain text; always use a vault. See :ref:`tip_for_variables_and_vaults`)
+
+
+Specific to the SSH connection:
+
+ansible_ssh_private_key_file
+ Private key file used by ssh. Useful if using multiple keys and you don't want to use SSH agent.
+ansible_ssh_common_args
+ This setting is always appended to the default command line for :command:`sftp`, :command:`scp`,
+ and :command:`ssh`. Useful to configure a ``ProxyCommand`` for a certain host (or
+ group).
+ansible_sftp_extra_args
+ This setting is always appended to the default :command:`sftp` command line.
+ansible_scp_extra_args
+ This setting is always appended to the default :command:`scp` command line.
+ansible_ssh_extra_args
+ This setting is always appended to the default :command:`ssh` command line.
+ansible_ssh_pipelining
+ Determines whether or not to use SSH pipelining. This can override the ``pipelining`` setting in :file:`ansible.cfg`.
+ansible_ssh_executable (added in version 2.2)
+ This setting overrides the default behavior to use the system :command:`ssh`. This can override the ``ssh_executable`` setting in :file:`ansible.cfg`.
+
+
+Privilege escalation (see :ref:`Ansible Privilege Escalation<become>` for further details):
+
+ansible_become
+ Equivalent to ``ansible_sudo`` or ``ansible_su``, allows you to force privilege escalation
+ansible_become_method
+ Allows you to set the privilege escalation method
+ansible_become_user
+ Equivalent to ``ansible_sudo_user`` or ``ansible_su_user``, allows you to set the user you become through privilege escalation
+ansible_become_password
+ Equivalent to ``ansible_sudo_password`` or ``ansible_su_password``, allows you to set the privilege escalation password (never store this variable in plain text; always use a vault. See :ref:`tip_for_variables_and_vaults`)
+ansible_become_exe
+ Equivalent to ``ansible_sudo_exe`` or ``ansible_su_exe``, allows you to set the executable for the selected escalation method
+ansible_become_flags
+ Equivalent to ``ansible_sudo_flags`` or ``ansible_su_flags``, allows you to set the flags passed to the selected escalation method. This can also be set globally in :file:`ansible.cfg` in the ``sudo_flags`` option
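+
+For example, a hypothetical host entry combining connection and privilege escalation settings:
+
+.. code-block:: text
+
+    webserver1.example.com ansible_user=admin ansible_become=true ansible_become_user=root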
+
+Remote host environment parameters:
+
+.. _ansible_shell_type:
+
+ansible_shell_type
+ The shell type of the target system. You should not use this setting unless you have set the
+ :ref:`ansible_shell_executable<ansible_shell_executable>` to a non-Bourne (sh) compatible shell. By default commands are
+ formatted using ``sh``-style syntax. Setting this to ``csh`` or ``fish`` will cause commands
+ executed on target systems to follow that shell's syntax instead.
+
+.. _ansible_python_interpreter:
+
+ansible_python_interpreter
+ The target host Python path. This is useful for systems with more
+ than one Python, systems where Python is not located at :command:`/usr/bin/python` (such as \*BSD), or systems where :command:`/usr/bin/python`
+ is not a 2.X series Python. We do not use the :command:`/usr/bin/env` mechanism because it requires the remote user's
+ path to be set correctly and also assumes the :program:`python` executable is named python, while the executable might
+ be named something like :program:`python2.6`.
+
+ansible_*_interpreter
+ Works for anything such as Ruby or Perl, just like :ref:`ansible_python_interpreter<ansible_python_interpreter>`.
+ This replaces the shebang of modules which will run on that host.
+
+.. versionadded:: 2.1
+
+.. _ansible_shell_executable:
+
+ansible_shell_executable
+ This sets the shell the Ansible controller will use on the target machine;
+ it overrides ``executable`` in :file:`ansible.cfg`, which defaults to
+ :command:`/bin/sh`. You should really only change it if it is not possible
+ to use :command:`/bin/sh` (in other words, if :command:`/bin/sh` is not installed on the target
+ machine or cannot be run from sudo).
+
+Examples from an Ansible-INI host file:
+
+.. code-block:: text
+
+ some_host ansible_port=2222 ansible_user=manager
+ aws_host ansible_ssh_private_key_file=/home/example/.ssh/aws.pem
+ freebsd_host ansible_python_interpreter=/usr/local/bin/python
+ ruby_module_host ansible_ruby_interpreter=/usr/bin/ruby.1.9.3
+
+Non-SSH connection types
+------------------------
+
+As stated in the previous section, Ansible executes playbooks over SSH but it is not limited to this connection type.
+With the host specific parameter ``ansible_connection=<connector>``, the connection type can be changed.
+The following non-SSH based connectors are available:
+
+**local**
+
+This connector can be used to deploy the playbook to the control machine itself.
+
+**docker**
+
+This connector deploys the playbook directly into Docker containers using the local Docker client. The following parameters are processed by this connector:
+
+ansible_host
+ The name of the Docker container to connect to.
+ansible_user
+ The user name to operate within the container. The user must exist inside the container.
+ansible_become
+ If set to ``true`` the ``become_user`` will be used to operate within the container.
+ansible_docker_extra_args
+ A string with any additional arguments understood by Docker that are not command specific. This parameter is mainly used to configure a remote Docker daemon to use.
+
+Here is an example of how to instantly deploy to created containers:
+
+.. code-block:: yaml
+
+ - name: Create a jenkins container
+ community.general.docker_container:
+ docker_host: myserver.net:4243
+ name: my_jenkins
+ image: jenkins
+
+ - name: Add the container to inventory
+ ansible.builtin.add_host:
+ name: my_jenkins
+ ansible_connection: docker
+ ansible_docker_extra_args: "--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/client-cert.pem --tlskey=/path/to/client-key.pem -H=tcp://myserver.net:4243"
+ ansible_user: jenkins
+ changed_when: false
+
+ - name: Create a directory for ssh keys
+ delegate_to: my_jenkins
+ ansible.builtin.file:
+ path: "/var/jenkins_home/.ssh/jupiter"
+ state: directory
+
+For a full list with available plugins and examples, see :ref:`connection_plugin_list`.
+
+.. note:: If you're reading the docs from the beginning, this may be the first example you've seen of an Ansible playbook. This is not an inventory file.
+ Playbooks will be covered in great detail later in the docs.
+
+.. _inventory_setup_examples:
+
+Inventory setup examples
+========================
+
+See also :ref:`sample_setup`, which shows inventory along with playbooks and other Ansible artifacts.
+
+.. _inventory_setup-per_environment:
+
+Example: One inventory per environment
+--------------------------------------
+
+If you need to manage multiple environments it's sometimes prudent to
+have only hosts of a single environment defined per inventory. This
+way, it is harder to, for instance, accidentally change the state of
+nodes inside the "test" environment when you actually wanted to update
+some "staging" servers.
+
+For the example mentioned above you could have an
+:file:`inventory_test` file:
+
+.. code-block:: ini
+
+ [dbservers]
+ db01.test.example.com
+ db02.test.example.com
+
+ [appservers]
+ app01.test.example.com
+ app02.test.example.com
+ app03.test.example.com
+
+That file only includes hosts that are part of the "test"
+environment. Define the "staging" machines in another file
+called :file:`inventory_staging`:
+
+.. code-block:: ini
+
+ [dbservers]
+ db01.staging.example.com
+ db02.staging.example.com
+
+ [appservers]
+ app01.staging.example.com
+ app02.staging.example.com
+ app03.staging.example.com
+
+To apply a playbook called :file:`site.yml`
+to all the app servers in the test environment, use the
+following command::
+
+ ansible-playbook -i inventory_test site.yml -l appservers
+
+.. _inventory_setup-per_function:
+
+Example: Group by function
+--------------------------
+
+In the previous section you already saw an example for using groups in
+order to cluster hosts that have the same function. This allows you,
+for instance, to define firewall rules inside a playbook or role
+without affecting database servers:
+
+.. code-block:: yaml
+
+ - hosts: dbservers
+ tasks:
+ - name: Allow access from 10.0.0.1
+ ansible.builtin.iptables:
+ chain: INPUT
+ jump: ACCEPT
+ source: 10.0.0.1
+
+.. _inventory_setup-per_location:
+
+Example: Group by location
+--------------------------
+
+Other tasks might be focused on where a certain host is located. Let's
+say that ``db01.test.example.com`` and ``app01.test.example.com`` are
+located in DC1 while ``db02.test.example.com`` is in DC2:
+
+.. code-block:: ini
+
+ [dc1]
+ db01.test.example.com
+ app01.test.example.com
+
+ [dc2]
+ db02.test.example.com
+
+In practice, you might even end up mixing all these setups as you
+might need to, on one day, update all nodes in a specific data center
+while, on another day, update all the application servers no matter
+their location.
+
+.. seealso::
+
+ :ref:`inventory_plugins`
+ Pulling inventory from dynamic or static sources
+ :ref:`intro_dynamic_inventory`
+ Pulling inventory from dynamic sources, such as cloud providers
+ :ref:`intro_adhoc`
+ Examples of basic commands
+ :ref:`working_with_playbooks`
+ Learning Ansible's configuration, deployment, and orchestration language.
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/intro_patterns.rst b/docs/docsite/rst/user_guide/intro_patterns.rst
new file mode 100644
index 00000000..edc25ad6
--- /dev/null
+++ b/docs/docsite/rst/user_guide/intro_patterns.rst
@@ -0,0 +1,171 @@
+.. _intro_patterns:
+
+Patterns: targeting hosts and groups
+====================================
+
+When you execute Ansible through an ad-hoc command or by running a playbook, you must choose which managed nodes or groups you want to execute against. Patterns let you run commands and playbooks against specific hosts and/or groups in your inventory. An Ansible pattern can refer to a single host, an IP address, an inventory group, a set of groups, or all hosts in your inventory. Patterns are highly flexible - you can exclude or require subsets of hosts, use wildcards or regular expressions, and more. Ansible executes on all inventory hosts included in the pattern.
+
+.. contents::
+ :local:
+
+Using patterns
+--------------
+
+You use a pattern almost any time you execute an ad-hoc command or a playbook. The pattern is the only element of an :ref:`ad-hoc command<intro_adhoc>` that has no flag. It is usually the second element::
+
+ ansible <pattern> -m <module_name> -a "<module options>"
+
+For example::
+
+ ansible webservers -m service -a "name=httpd state=restarted"
+
+In a playbook the pattern is the content of the ``hosts:`` line for each play:
+
+.. code-block:: yaml
+
+ - name: <play_name>
+ hosts: <pattern>
+
+For example::
+
+ - name: restart webservers
+ hosts: webservers
+
+Since you often want to run a command or playbook against multiple hosts at once, patterns often refer to inventory groups. Both the ad-hoc command and the playbook above will execute against all machines in the ``webservers`` group.
+
+.. _common_patterns:
+
+Common patterns
+---------------
+
+This table lists common patterns for targeting inventory hosts and groups.
+
+.. table::
+ :class: documentation-table
+
+ ====================== ================================ ===================================================
+ Description Pattern(s) Targets
+ ====================== ================================ ===================================================
+ All hosts all (or \*)
+
+ One host host1
+
+ Multiple hosts host1:host2 (or host1,host2)
+
+ One group webservers
+
+ Multiple groups webservers:dbservers all hosts in webservers plus all hosts in dbservers
+
+ Excluding groups webservers:!atlanta all hosts in webservers except those in atlanta
+
+ Intersection of groups webservers:&staging any hosts in webservers that are also in staging
+ ====================== ================================ ===================================================
+
+.. note:: You can use either a comma (``,``) or a colon (``:``) to separate a list of hosts. The comma is preferred when dealing with ranges and IPv6 addresses.
+
+Once you know the basic patterns, you can combine them. This example::
+
+ webservers:dbservers:&staging:!phoenix
+
+targets all machines in the groups 'webservers' and 'dbservers' that are also in
+the group 'staging', except any machines in the group 'phoenix'.
+
+You can use wildcard patterns with FQDNs or IP addresses, as long as the hosts are named in your inventory by FQDN or IP address::
+
+ 192.0.\*
+ \*.example.com
+ \*.com
+
+You can mix wildcard patterns and groups at the same time::
+
+ one*.com:dbservers
+
+Limitations of patterns
+-----------------------
+
+Patterns depend on inventory. If a host or group is not listed in your inventory, you cannot use a pattern to target it. If your pattern includes an IP address or hostname that does not appear in your inventory, you will see an error like this:
+
+.. code-block:: text
+
+ [WARNING]: No inventory was parsed, only implicit localhost is available
+ [WARNING]: Could not match supplied host pattern, ignoring: *.not_in_inventory.com
+
+Your pattern must match your inventory syntax. If you define a host as an :ref:`alias<inventory_aliases>`:
+
+.. code-block:: yaml
+
+ atlanta:
+ host1:
+ http_port: 80
+ maxRequestsPerChild: 808
+ host: 127.0.0.2
+
+you must use the alias in your pattern. In the example above, you must use ``host1`` in your pattern. If you use the IP address, you will once again get the error::
+
+ [WARNING]: Could not match supplied host pattern, ignoring: 127.0.0.2
+
+Advanced pattern options
+------------------------
+
+The common patterns described above will meet most of your needs, but Ansible offers several other ways to define the hosts and groups you want to target.
+
+Using variables in patterns
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can use variables to enable passing group specifiers via the ``-e`` argument to ansible-playbook::
+
+ webservers:!{{ excluded }}:&{{ required }}
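+
+For example, a sketch of supplying those group names at run time (``atlanta`` and ``staging`` are placeholders)::
+
+    ansible-playbook site.yml -e "excluded=atlanta required=staging"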
+
+Using group position in patterns
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can define a host or subset of hosts by its position in a group. For example, given the following group::
+
+ [webservers]
+ cobweb
+ webbing
+ weber
+
+you can use subscripts to select individual hosts or ranges within the webservers group::
+
+ webservers[0] # == cobweb
+ webservers[-1] # == weber
+ webservers[0:2] # == webservers[0],webservers[1]
+ # == cobweb,webbing
+ webservers[1:] # == webbing,weber
+ webservers[:3] # == cobweb,webbing,weber
+
+Using regexes in patterns
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can specify a pattern as a regular expression by starting the pattern with ``~``::
+
+ ~(web|db).*\.example\.com
+
+Patterns and ansible-playbook flags
+-----------------------------------
+
+You can change the behavior of the patterns defined in playbooks using command-line options. For example, you can run a playbook that defines ``hosts: all`` on a single host by specifying ``-i 127.0.0.2,`` (note the trailing comma). This works even if the host you target is not defined in your inventory. You can also limit the hosts you target on a particular run with the ``--limit`` flag::
+
+ ansible-playbook site.yml --limit datacenter2
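+
+For example, a sketch of the single-host form mentioned above (the trailing comma tells Ansible the value is an inventory list, not a file name)::
+
+    ansible-playbook site.yml -i 127.0.0.2,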
+
+Finally, you can use ``--limit`` to read the list of hosts from a file by prefixing the file name with ``@``::
+
+ ansible-playbook site.yml --limit @retry_hosts.txt
+
+If :ref:`RETRY_FILES_ENABLED` is set to ``True``, a ``.retry`` file will be created after the ``ansible-playbook`` run, containing a list of failed hosts from all plays. This file is overwritten each time ``ansible-playbook`` finishes running. You can then retry just the failed hosts::
+
+ ansible-playbook site.yml --limit @site.retry
+
+To apply your knowledge of patterns with Ansible commands and playbooks, read :ref:`intro_adhoc` and :ref:`playbooks_intro`.
+
+.. seealso::
+
+ :ref:`intro_adhoc`
+ Examples of basic commands
+ :ref:`working_with_playbooks`
+ Learning the Ansible configuration management language
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/intro_windows.rst b/docs/docsite/rst/user_guide/intro_windows.rst
new file mode 100644
index 00000000..ba81f6d6
--- /dev/null
+++ b/docs/docsite/rst/user_guide/intro_windows.rst
@@ -0,0 +1,4 @@
+Windows Support
+===============
+
+This page has been split up and moved to the new section :ref:`windows`.
diff --git a/docs/docsite/rst/user_guide/modules.rst b/docs/docsite/rst/user_guide/modules.rst
new file mode 100644
index 00000000..70dac884
--- /dev/null
+++ b/docs/docsite/rst/user_guide/modules.rst
@@ -0,0 +1,36 @@
+.. _working_with_modules:
+
+Working With Modules
+====================
+
+.. toctree::
+ :maxdepth: 1
+
+ modules_intro
+ modules_support
+ ../reference_appendices/common_return_values
+
+
+Ansible ships with a number of modules (called the 'module library')
+that can be executed directly on remote hosts or through :ref:`Playbooks <working_with_playbooks>`.
+
+Users can also write their own modules. These modules can control system resources,
+like services, packages, or files (anything really), or handle executing system commands.
+
+
+.. seealso::
+
+ :ref:`intro_adhoc`
+ Examples of using modules in /usr/bin/ansible
+ :ref:`playbooks_intro`
+ Introduction to using modules with /usr/bin/ansible-playbook
+ :ref:`developing_modules_general`
+ How to write your own modules
+ :ref:`developing_api`
+ Examples of using modules with the Python API
+ :ref:`interpreter_discovery`
+ Configuring the right Python interpreter on target hosts
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/modules_intro.rst b/docs/docsite/rst/user_guide/modules_intro.rst
new file mode 100644
index 00000000..bb6d2cd7
--- /dev/null
+++ b/docs/docsite/rst/user_guide/modules_intro.rst
@@ -0,0 +1,52 @@
+.. _intro_modules:
+
+Introduction to modules
+=======================
+
+Modules (also referred to as "task plugins" or "library plugins") are discrete units of code that can be used from the command line or in a playbook task. Ansible executes each module, usually on the remote managed node, and collects return values. In Ansible 2.10 and later, most modules are hosted in collections.
+
+You can execute modules from the command line::
+
+ ansible webservers -m service -a "name=httpd state=started"
+ ansible webservers -m ping
+ ansible webservers -m command -a "/sbin/reboot -t now"
+
+Each module supports taking arguments. Nearly all modules take ``key=value`` arguments, space delimited. Some modules take no arguments, and the command/shell modules simply take the string of the command you want to run.
+
+From playbooks, Ansible modules are executed in a very similar way::
+
+ - name: reboot the servers
+ command: /sbin/reboot -t now
+
+Another way to pass arguments to a module is using YAML syntax, also called 'complex args' ::
+
+ - name: restart webserver
+ service:
+ name: httpd
+ state: restarted
+
+All modules return JSON format data. This means modules can be written in any programming language. Modules should be idempotent, and should avoid making any changes if they detect that the current state matches the desired final state. When used in an Ansible playbook, modules can trigger 'change events' in the form of notifying :ref:`handlers <handlers>` to run additional tasks.
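+
+For example, a minimal sketch of a play where a task notifies a handler only when it reports a change (names are illustrative)::
+
+    - hosts: webservers
+      tasks:
+        - name: Write the Apache config file
+          template:
+            src: httpd.conf.j2
+            dest: /etc/httpd/conf/httpd.conf
+          notify: restart apache
+      handlers:
+        - name: restart apache
+          service:
+            name: httpd
+            state: restarted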
+
+You can access the documentation for each module from the command line with the ansible-doc tool::
+
+ ansible-doc yum
+
+For a list of all available modules, see the :ref:`Collection docs <list_of_collections>`, or run the following at a command prompt::
+
+ ansible-doc -l
+
+
+.. seealso::
+
+ :ref:`intro_adhoc`
+ Examples of using modules in /usr/bin/ansible
+ :ref:`working_with_playbooks`
+ Examples of using modules with /usr/bin/ansible-playbook
+ :ref:`developing_modules`
+ How to write your own modules
+ :ref:`developing_api`
+ Examples of using modules with the Python API
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/modules_support.rst b/docs/docsite/rst/user_guide/modules_support.rst
new file mode 100644
index 00000000..6faa7333
--- /dev/null
+++ b/docs/docsite/rst/user_guide/modules_support.rst
@@ -0,0 +1,70 @@
+.. _modules_support:
+
+****************************
+Module Maintenance & Support
+****************************
+
+If you are using a module and you discover a bug, you may want to know where to report that bug, who is responsible for fixing it, and how you can track changes to the module. If you are a Red Hat subscriber, you may want to know whether you can get support for the issue you are facing.
+
+Starting in Ansible 2.10, most modules live in collections. The distribution method for each collection reflects the maintenance and support for the modules in that collection.
+
+.. contents::
+ :local:
+
+Maintenance
+===========
+
+.. table::
+ :class: documentation-table
+
+ ============================= ========================================== ==========================
+ Collection Code location Maintained by
+ ============================= ========================================== ==========================
+ ansible.builtin `ansible/ansible repo`_ on GitHub core team
+
+ distributed on Galaxy various; follow ``repo`` link community or partners
+
+ distributed on Automation Hub various; follow ``repo`` link content team or partners
+ ============================= ========================================== ==========================
+
+.. _ansible/ansible repo: https://github.com/ansible/ansible/tree/devel/lib/ansible/modules
+
+Issue Reporting
+===============
+
+If you find a bug that affects a plugin in the main Ansible repo, also known as ``ansible-base``:
+
+ #. Confirm that you are running the latest stable version of Ansible or the devel branch.
+ #. Look at the `issue tracker in the Ansible repo <https://github.com/ansible/ansible/issues>`_ to see if an issue has already been filed.
+ #. Create an issue if one does not already exist. Include as much detail as you can about the behavior you discovered.
+
+If you find a bug that affects a plugin in a Galaxy collection:
+
+ #. Find the collection on Galaxy.
+ #. Find the issue tracker for the collection.
+ #. Look there to see if an issue has already been filed.
+ #. Create an issue if one does not already exist. Include as much detail as you can about the behavior you discovered.
+
+Some partner collections may be hosted in private repositories.
+
+If you are not sure whether the behavior you see is a bug, if you have questions, if you want to discuss development-oriented topics, or if you just want to get in touch, use one of our Google groups or IRC channels to :ref:`communicate with Ansiblers <communication>`.
+
+If you find a bug that affects a module in an Automation Hub collection:
+
+ #. If the collection offers an Issue Tracker link on Automation Hub, click there and open an issue on the collection repository. If it does not, follow the standard process for reporting issues on the `Red Hat Customer Portal <https://access.redhat.com/>`_. You must have a subscription to the Red Hat Ansible Automation Platform to create an issue on the portal.
+
+Support
+=======
+
+All plugins that remain in ``ansible-base`` and all collections hosted in Automation Hub are supported by Red Hat. No other plugins or collections are supported by Red Hat. If you have a subscription to the Red Hat Ansible Automation Platform, you can find more information and resources on the `Red Hat Customer Portal. <https://access.redhat.com/>`_
+
+.. seealso::
+
+ :ref:`intro_adhoc`
+ Examples of using modules in /usr/bin/ansible
+ :ref:`working_with_playbooks`
+ Examples of using modules with /usr/bin/ansible-playbook
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbook_pathing.rst b/docs/docsite/rst/user_guide/playbook_pathing.rst
new file mode 100644
index 00000000..7fc6059b
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbook_pathing.rst
@@ -0,0 +1,42 @@
+:orphan:
+
+***********************
+Search paths in Ansible
+***********************
+
+You can control the paths Ansible searches to find resources on your control node (including configuration, modules, roles, ssh keys, and more) as well as resources on the remote nodes you are managing. Use absolute paths to tell Ansible where to find resources whenever you can. However, absolute paths are not always practical. This page covers how Ansible interprets relative search paths, along with ways to troubleshoot when Ansible cannot find the resource you need.
+
+.. contents::
+ :local:
+
+Config paths
+============
+
+By default, these paths are relative to the config file. Some are specifically relative to the current working directory or to the playbook, and should have this noted in their description. Things like SSH keys are left relative to the current working directory because that mirrors how the underlying tools use them.
+
+
+Task paths
+==========
+
+Task paths include two different scopes: task evaluation and task execution. For task evaluation, all paths are local, like in lookups. For task execution, which usually happens on the remote nodes, local paths do not usually apply. However, if a task uses an action plugin, it uses a local path. The template and copy modules are examples of modules that use action plugins, and therefore use local paths.
+
+The magic of 'local' paths
+--------------------------
+
+Lookups and action plugins both use a special 'search magic' to find things. Taking the current play into account, the search goes from the most specific to the most general playbook directory in which a task is contained (this includes roles and includes).
+
+Using this magic, relative paths are tried first with 'files', 'templates', or 'vars' appended (if not already present), depending on the action being taken; 'files' is the default (in other words, ``include_vars`` will use ``vars/``). The paths are searched from most specific to most general (in other words, role before play).
+Dependent roles WILL be traversed (in other words, if a task is in role2, and role2 is a dependency of role1, role2 will be looked at first, then role1, then the play).
+For example::
+
+ role search path is rolename/{files|vars|templates}/, rolename/tasks/.
+ play search path is playdir/{files|vars|templates}/, playdir/.
+
+
+By default, Ansible does not search the current working directory unless it happens to coincide with one of the paths above. If you `include` a task file from a role, it will NOT trigger role behavior; that only happens when running as a role, so use `include_role` if you need it. The `ansible_search_path` variable holds the search path used, in order (but without the appended subdirectories). Running with five "v"s (`-vvvvv`) shows the detail of the search as it happens.
+
+As for includes, they try the path of the included file first and fall back to the play/role that includes them.
+
+
+
+.. note:: The current working directory might vary depending on the connection plugin and whether the action is local or remote. For remote actions it is normally the directory in which the login shell puts the user. For local actions it is either the directory you executed ansible from or, in some cases, the playbook directory.
diff --git a/docs/docsite/rst/user_guide/playbooks.rst b/docs/docsite/rst/user_guide/playbooks.rst
new file mode 100644
index 00000000..8c851c12
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks.rst
@@ -0,0 +1,21 @@
+.. _working_with_playbooks:
+
+Working with playbooks
+======================
+
+Playbooks record and execute Ansible's configuration, deployment, and orchestration functions. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.
+
+If Ansible modules are the tools in your workshop, playbooks are your instruction manuals, and your inventory of hosts are your raw material.
+
+At a basic level, playbooks can be used to manage configurations of and deployments to remote machines. At a more advanced level, they can sequence multi-tier rollouts involving rolling updates, and can delegate actions to other hosts, interacting with monitoring servers and load balancers along the way.
+
+Playbooks are designed to be human-readable and are developed in a basic text language. There are multiple ways to organize playbooks and the files they include, and we'll offer up some suggestions on that and making the most out of Ansible.
+
+You should look at `Example Playbooks <https://github.com/ansible/ansible-examples>`_ while reading along with the playbook documentation. These illustrate best practices as well as how to put many of the various concepts together.
+
+.. toctree::
+ :maxdepth: 2
+
+ playbooks_templating
+ playbooks_special_topics
+ guide_rolling_upgrade
diff --git a/docs/docsite/rst/user_guide/playbooks_advanced_syntax.rst b/docs/docsite/rst/user_guide/playbooks_advanced_syntax.rst
new file mode 100644
index 00000000..03d4243f
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_advanced_syntax.rst
@@ -0,0 +1,112 @@
+.. _playbooks_advanced_syntax:
+
+***************
+Advanced Syntax
+***************
+
+The advanced YAML syntax examples on this page give you more control over the data placed in YAML files used by Ansible. You can find additional information about Python-specific YAML in the official `PyYAML Documentation <https://pyyaml.org/wiki/PyYAMLDocumentation#YAMLtagsandPythontypes>`_.
+
+.. contents::
+ :local:
+
+.. _unsafe_strings:
+
+Unsafe or raw strings
+=====================
+
+When handling values returned by lookup plugins, Ansible uses a data type called ``unsafe`` to block templating. Marking data as unsafe prevents malicious users from abusing Jinja2 templates to execute arbitrary code on target machines. The Ansible implementation ensures that unsafe values are never templated. It is more comprehensive than escaping Jinja2 with ``{% raw %} ... {% endraw %}`` tags.
+
+You can use the same ``unsafe`` data type in variables you define, to prevent templating errors and information disclosure. You can mark values supplied by :ref:`vars_prompts<unsafe_prompts>` as unsafe. You can also use ``unsafe`` in playbooks. The most common use cases include passwords that allow special characters like ``{`` or ``%``, and JSON arguments that look like templates but should not be templated. For example:
+
+.. code-block:: yaml
+
+ ---
+ mypassword: !unsafe 234%234{435lkj{{lkjsdf
+
+In a playbook::
+
+ ---
+ - hosts: all
+   vars:
+     my_unsafe_variable: !unsafe 'unsafe % value'
+   tasks:
+     ...
+
+For complex variables such as hashes or arrays, use ``!unsafe`` on the individual elements::
+
+ ---
+ my_unsafe_array:
+ - !unsafe 'unsafe element'
+ - 'safe element'
+
+ my_unsafe_hash:
+ unsafe_key: !unsafe 'unsafe value'
+
+.. _anchors_and_aliases:
+
+YAML anchors and aliases: sharing variable values
+=================================================
+
+`YAML anchors and aliases <https://yaml.org/spec/1.2/spec.html#id2765878>`_ help you define, maintain, and use shared variable values in a flexible way.
+You define an anchor with ``&``, then refer to it using an alias, denoted with ``*``. Here's an example that sets three values with an anchor, uses two of those values with an alias, and overrides the third value::
+
+ ---
+ ...
+ vars:
+ app1:
+ jvm: &jvm_opts
+ opts: '-Xms1G -Xmx2G'
+ port: 1000
+ path: /usr/lib/app1
+ app2:
+ jvm:
+ <<: *jvm_opts
+ path: /usr/lib/app2
+ ...
+
+Here, ``app1`` and ``app2`` share the values for ``opts`` and ``port`` using the anchor ``&jvm_opts`` and the alias ``*jvm_opts``.
+The anchored values are merged into ``app2`` with the ``<<`` or `merge operator <https://yaml.org/type/merge.html>`_, and ``app2`` then overrides the value for ``path``.
+
+Anchors and aliases also let you share complex sets of variable values, including nested variables. If you have one variable value that includes another variable value, you can define them separately::
+
+ vars:
+ webapp_version: 1.0
+ webapp_custom_name: ToDo_App-1.0
+
+This is inefficient and, at scale, means more maintenance. To incorporate the version value in the name, you can define an anchor on ``version`` and refer to it with an alias in ``custom_name``::
+
+ vars:
+ webapp:
+ version: &my_version 1.0
+ custom_name:
+ - "ToDo_App"
+ - *my_version
+
+Now, you can re-use the value of ``version`` within the value of ``custom_name`` and use the output in a template::
+
+ ---
+ - name: Using values nested inside dictionary
+ hosts: localhost
+ vars:
+ webapp:
+ version: &my_version 1.0
+ custom_name:
+ - "ToDo_App"
+ - *my_version
+ tasks:
+ - name: Using Anchor value
+ ansible.builtin.debug:
+ msg: My app is called "{{ webapp.custom_name | join('-') }}".
+
+You've anchored the value of ``version`` with the ``&my_version`` anchor, and re-used it with the ``*my_version`` alias. Anchors and aliases let you access nested values inside dictionaries.
+
+.. seealso::
+
+ :ref:`playbooks_variables`
+ All about variables
+ :doc:`complex_data_manipulation`
+ Doing complex data manipulation in Ansible
+ `User Mailing List <https://groups.google.com/group/ansible-project>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_async.rst b/docs/docsite/rst/user_guide/playbooks_async.rst
new file mode 100644
index 00000000..09fe5d5d
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_async.rst
@@ -0,0 +1,161 @@
+.. _playbooks_async:
+
+Asynchronous actions and polling
+================================
+
+By default Ansible runs tasks synchronously, holding the connection to the remote node open until the action is completed. Within a playbook, this means each task blocks the next: subsequent tasks will not run until the current task completes. This behavior can create challenges. For example, a task may take longer to complete than the SSH session allows for, causing a timeout. Or you may want a long-running process to execute in the background while you perform other tasks concurrently. Asynchronous mode lets you control how long-running tasks execute.
+
+.. contents::
+ :local:
+
+Asynchronous ad-hoc tasks
+-------------------------
+
+You can execute long-running operations in the background with :ref:`ad-hoc tasks <intro_adhoc>`. For example, to execute ``long_running_operation`` asynchronously in the background, with a timeout (``-B``) of 3600 seconds, and without polling (``-P``)::
+
+ $ ansible all -B 3600 -P 0 -a "/usr/bin/long_running_operation --do-stuff"
+
+To check on the job status later, use the ``async_status`` module, passing it the job ID that was returned when you ran the original job in the background::
+
+ $ ansible web1.example.com -m async_status -a "jid=488359678239.2844"
+
+Ansible can also check on the status of your long-running job automatically with polling. In most cases, Ansible will keep the connection to your remote node open between polls. To run for 30 minutes and poll for status every 60 seconds::
+
+ $ ansible all -B 1800 -P 60 -a "/usr/bin/long_running_operation --do-stuff"
+
+Poll mode is smart so all jobs will be started before polling begins on any machine. Be sure to use a high enough ``--forks`` value if you want to get all of your jobs started very quickly. After the time limit (in seconds) runs out (``-B``), the process on the remote nodes will be terminated.
+
+Asynchronous mode is best suited to long-running shell commands or software upgrades. Running the copy module asynchronously, for example, does not do a background file transfer.
+
+Asynchronous playbook tasks
+---------------------------
+
+:ref:`Playbooks <working_with_playbooks>` also support asynchronous mode and polling, with a simplified syntax. You can use asynchronous mode in playbooks to avoid connection timeouts or to avoid blocking subsequent tasks. The behavior of asynchronous mode in a playbook depends on the value of ``poll``.
+
+Avoid connection timeouts: poll > 0
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want to set a longer timeout limit for a certain task in your playbook, use ``async`` with ``poll`` set to a positive value. Ansible will still block the next task in your playbook, waiting until the async task either completes, fails or times out. However, the task will only time out if it exceeds the timeout limit you set with the ``async`` parameter.
+
+To avoid timeouts on a task, specify its maximum runtime and how frequently you would like to poll for status::
+
+ ---
+
+ - hosts: all
+ remote_user: root
+
+ tasks:
+
+ - name: Simulate long running op (15 sec), wait for up to 45 sec, poll every 5 sec
+ ansible.builtin.command: /bin/sleep 15
+ async: 45
+ poll: 5
+
+.. note::
+ The default poll value is set by the :ref:`DEFAULT_POLL_INTERVAL` setting.
+ There is no default for the async time limit. If you leave off the
+ 'async' keyword, the task runs synchronously, which is Ansible's
+ default.
+
+.. note::
+ As of Ansible 2.3, async does not support check mode and will fail the
+ task when run in check mode. See :ref:`check_mode_dry` on how to
+ skip a task in check mode.
+
+Run tasks concurrently: poll = 0
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want to run multiple tasks in a playbook concurrently, use ``async`` with ``poll`` set to 0. When you set ``poll: 0``, Ansible starts the task and immediately moves on to the next task without waiting for a result. Each async task runs until it either completes, fails or times out (runs longer than its ``async`` value). The playbook run ends without checking back on async tasks.
+
+To run a playbook task asynchronously::
+
+ ---
+
+ - hosts: all
+ remote_user: root
+
+ tasks:
+
+ - name: Simulate long running op, allow to run for 45 sec, fire and forget
+ ansible.builtin.command: /bin/sleep 15
+ async: 45
+ poll: 0
+
+.. note::
+ Do not specify a poll value of 0 with operations that require exclusive locks (such as yum transactions) if you expect to run other commands later in the playbook against those same resources.
+
+.. note::
+ Using a higher value for ``--forks`` will result in kicking off asynchronous tasks even faster. This also increases the efficiency of polling.
+
+If you need a synchronization point with an async task, you can register it to obtain its job ID and use the :ref:`async_status <async_status_module>` module to observe it in a later task. For example::
+
+ - name: Run an async task
+ ansible.builtin.yum:
+ name: docker-io
+ state: present
+ async: 1000
+ poll: 0
+ register: yum_sleeper
+
+ - name: Check on an async task
+ async_status:
+ jid: "{{ yum_sleeper.ansible_job_id }}"
+ register: job_result
+ until: job_result.finished
+ retries: 100
+ delay: 10
+
+.. note::
+   If the value of ``async:`` is not high enough, this will cause the
+   "check on it later" task to fail because the temporary status file that
+   the ``async_status:`` task looks for will not have been written yet, or will no longer exist.
+
+To run multiple asynchronous tasks while limiting the number of tasks running concurrently::
+
+ #####################
+ # main.yml
+ #####################
+ - name: Run items asynchronously in batch of two items
+ vars:
+ sleep_durations:
+ - 1
+ - 2
+ - 3
+ - 4
+ - 5
+ durations: "{{ item }}"
+ include_tasks: execute_batch.yml
+ loop: "{{ sleep_durations | batch(2) | list }}"
+
+ #####################
+ # execute_batch.yml
+ #####################
+ - name: Async sleeping for batched_items
+ ansible.builtin.command: sleep {{ async_item }}
+ async: 45
+ poll: 0
+ loop: "{{ durations }}"
+ loop_control:
+ loop_var: "async_item"
+ register: async_results
+
+ - name: Check sync status
+ async_status:
+ jid: "{{ async_result_item.ansible_job_id }}"
+ loop: "{{ async_results.results }}"
+ loop_control:
+ loop_var: "async_result_item"
+ register: async_poll_results
+ until: async_poll_results.finished
+ retries: 30
+
+.. seealso::
+
+ :ref:`playbooks_strategies`
+ Options for controlling playbook execution
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_best_practices.rst b/docs/docsite/rst/user_guide/playbooks_best_practices.rst
new file mode 100644
index 00000000..86915f51
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_best_practices.rst
@@ -0,0 +1,167 @@
+.. _playbooks_tips_and_tricks:
+.. _playbooks_best_practices:
+
+***************
+Tips and tricks
+***************
+
+These tips and tricks have helped us optimize our Ansible usage, and we offer them here as suggestions. We hope they will help you organize content, write playbooks, maintain inventory, and execute Ansible. Ultimately, though, you should use Ansible in the way that makes most sense for your organization and your goals.
+
+.. contents::
+ :local:
+
+General tips
+============
+
+These concepts apply to all Ansible activities and artifacts.
+
+Keep it simple
+--------------
+
+Whenever you can, do things simply. Use advanced features only when necessary, and select the feature that best matches your use case. For example, you will probably not need ``vars``, ``vars_files``, ``vars_prompt`` and ``--extra-vars`` all at once, while also using an external inventory file. If something feels complicated, it probably is. Take the time to look for a simpler solution.
+
+Use version control
+-------------------
+
+Keep your playbooks, roles, inventory, and variables files in git or another version control system and make commits to the repository when you make changes. Version control gives you an audit trail describing when and why you changed the rules that automate your infrastructure.
+
+Playbook tips
+=============
+
+These tips help make playbooks and roles easier to read, maintain, and debug.
+
+Use whitespace
+--------------
+
+Generous use of whitespace, for example, a blank line before each block or task, makes a playbook easy to scan.
+
+Always name tasks
+-----------------
+
+Task names are optional, but extremely useful. In its output, Ansible shows you the name of each task it runs. Choose names that describe what each task does and why.
+
+Always mention the state
+------------------------
+
+For many modules, the 'state' parameter is optional. Different modules have different default settings for 'state', and some modules support several 'state' settings. Explicitly setting 'state=present' or 'state=absent' makes playbooks and roles clearer.
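+
+For example, a short sketch (the package and file names are purely illustrative) that states the desired outcome explicitly::
+
+    tasks:
+      - name: Ensure Apache is installed
+        ansible.builtin.yum:
+          name: httpd
+          state: present
+
+      - name: Ensure the legacy config file is removed
+        ansible.builtin.file:
+          path: /etc/httpd/conf.d/legacy.conf
+          state: absent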
+
+Use comments
+------------
+
+Even with task names and explicit state, sometimes a part of a playbook or role (or inventory/variable file) needs more explanation. Adding a comment (any line starting with '#') helps others (and possibly yourself in future) understand what a play or task (or variable setting) does, how it does it, and why.
+
+Inventory tips
+==============
+
+These tips help keep your inventory well organized.
+
+Use dynamic inventory with clouds
+---------------------------------
+
+With cloud providers and other systems that maintain canonical lists of your infrastructure, use :ref:`dynamic inventory <intro_dynamic_inventory>` to retrieve those lists instead of manually updating static inventory files. With cloud resources, you can use tags to differentiate production and staging environments.
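+
+As a sketch, an inventory plugin configuration along these lines (assuming the ``amazon.aws.aws_ec2`` inventory plugin and an ``env`` tag on your instances) builds groups from cloud tags instead of from a hand-maintained file::
+
+    # inventory_aws_ec2.yml
+    plugin: amazon.aws.aws_ec2
+    regions:
+      - us-east-1
+    keyed_groups:
+      # creates groups such as env_production and env_staging from the "env" tag
+      - prefix: env
+        key: tags.env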
+
+Group inventory by function
+---------------------------
+
+A system can be in multiple groups. See :ref:`intro_inventory` and :ref:`intro_patterns`. If you create groups named for the function of the nodes in the group, for example *webservers* or *dbservers*, your playbooks can target machines based on function. You can assign function-specific variables using the group variable system, and design Ansible roles to handle function-specific use cases. See :ref:`playbooks_reuse_roles`.
+
+Separate production and staging inventory
+-----------------------------------------
+
+You can keep your production environment separate from development, test, and staging environments by using separate inventory files or directories for each environment. This way you choose which environment you are targeting with the ``-i`` option. Keeping all your environments in one file can lead to surprises!
+
+.. _tip_for_variables_and_vaults:
+
+Keep vaulted variables safely visible
+-------------------------------------
+
+You should encrypt sensitive or secret variables with Ansible Vault. However, encrypting the variable names as well as the variable values makes it hard to find the source of the values. You can keep the names of your variables accessible (by ``grep``, for example) without exposing any secrets by adding a layer of indirection:
+
+#. Create a ``group_vars/`` subdirectory named after the group.
+#. Inside this subdirectory, create two files named ``vars`` and ``vault``.
+#. In the ``vars`` file, define all of the variables needed, including any sensitive ones.
+#. Copy all of the sensitive variables over to the ``vault`` file and prefix these variables with ``vault_``.
+#. Adjust the variables in the ``vars`` file to point to the matching ``vault_`` variables using jinja2 syntax: ``db_password: {{ vault_db_password }}``.
+#. Encrypt the ``vault`` file to protect its contents.
+#. Use the variable name from the ``vars`` file in your playbooks.
+
+When running a playbook, Ansible finds the variables in the unencrypted file, which pulls the sensitive variable values from the encrypted file. There is no limit to the number of variable and vault files or their names.
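+
+As a minimal sketch, assuming a group named ``webservers`` and a single database password, the two files might look like this::
+
+    # group_vars/webservers/vars (unencrypted; variable names stay greppable)
+    db_user: myapp
+    db_password: "{{ vault_db_password }}"
+
+    # group_vars/webservers/vault (encrypt this file with ansible-vault)
+    vault_db_password: 'S3cr3t-value'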
+
+Execution tricks
+================
+
+These tips apply to using Ansible, rather than to Ansible artifacts.
+
+Try it in staging first
+-----------------------
+
+Testing changes in a staging environment before rolling them out in production is always a great idea. Your environments need not be the same size and you can use group variables to control the differences between those environments.
+
+Update in batches
+-----------------
+
+Use the 'serial' keyword to control how many machines you update at once in the batch. See :ref:`playbooks_delegation`.
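+
+For example, a play along these lines (the group and role names are placeholders) updates three hosts at a time::
+
+    - hosts: webservers
+      serial: 3
+      roles:
+        - apply_update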
+
+.. _os_variance:
+
+Handling OS and distro differences
+----------------------------------
+
+Group variables files and the ``group_by`` module work together to help Ansible execute across a range of operating systems and distributions that require different settings, packages, and tools. The ``group_by`` module creates a dynamic group of hosts matching certain criteria. This group does not need to be defined in the inventory file. This approach lets you execute different tasks on different operating systems or distributions. For example::
+
+ ---
+
+ - name: talk to all hosts just so we can learn about them
+ hosts: all
+ tasks:
+ - name: Classify hosts depending on their OS distribution
+ group_by:
+ key: os_{{ ansible_facts['distribution'] }}
+
+ # now just on the CentOS hosts...
+
+ - hosts: os_CentOS
+ gather_facts: False
+ tasks:
+ - # tasks that only happen on CentOS go in this play
+
+The first play categorizes all systems into dynamic groups based on the operating system name. Later plays can use these groups as patterns on the ``hosts`` line. You can also add group-specific settings in group vars files. All three names must match: the name created by the ``group_by`` task, the name of the pattern in subsequent plays, and the name of the group vars file. For example::
+
+ ---
+ # file: group_vars/all
+ asdf: 10
+
+ ---
+ # file: group_vars/os_CentOS.yml
+ asdf: 42
+
+In this example, CentOS machines get the value of '42' for asdf, but other machines get '10'.
+This can be used not only to set variables, but also to apply certain roles to only certain systems.
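+
+For example, a later play can apply a role only to the dynamically created group (the role name here is a placeholder)::
+
+    - hosts: os_CentOS
+      roles:
+        - centos_only_role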
+
+You can use the same setup with ``include_vars`` when you only need OS-specific variables, not tasks::
+
+ - hosts: all
+ tasks:
+ - name: Set OS distribution dependent variables
+ include_vars: "os_{{ ansible_facts['distribution'] }}.yml"
+ - debug:
+ var: asdf
+
+This pulls in variables from a file named after the OS distribution, for example ``os_CentOS.yml``.
+
+.. seealso::
+
+ :ref:`yaml_syntax`
+ Learn about YAML syntax
+ :ref:`working_with_playbooks`
+ Review the basic playbook features
+ :ref:`list_of_collections`
+ Browse existing collections, modules, and plugins
+ :ref:`developing_modules`
+ Learn how to extend Ansible by writing your own modules
+ :ref:`intro_patterns`
+ Learn about how to select hosts
+ `GitHub examples directory <https://github.com/ansible/ansible-examples>`_
+ Complete playbook files from the github project source
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
diff --git a/docs/docsite/rst/user_guide/playbooks_blocks.rst b/docs/docsite/rst/user_guide/playbooks_blocks.rst
new file mode 100644
index 00000000..dc516312
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_blocks.rst
@@ -0,0 +1,189 @@
+.. _playbooks_blocks:
+
+******
+Blocks
+******
+
+Blocks create logical groups of tasks. Blocks also offer ways to handle task errors, similar to exception handling in many programming languages.
+
+.. contents::
+ :local:
+
+Grouping tasks with blocks
+==========================
+
+All tasks in a block inherit directives applied at the block level. Most of what you can apply to a single task (with the exception of loops) can be applied at the block level, so blocks make it much easier to set data or directives common to the tasks. The directive does not affect the block itself; it is only inherited by the tasks enclosed by the block. For example, a `when` statement is applied to the tasks within a block, not to the block itself.
+
+.. code-block:: YAML
+ :emphasize-lines: 3
+ :caption: Block example with named tasks inside the block
+
+ tasks:
+ - name: Install, configure, and start Apache
+ block:
+ - name: Install httpd and memcached
+ ansible.builtin.yum:
+ name:
+ - httpd
+ - memcached
+ state: present
+
+ - name: Apply the foo config template
+ ansible.builtin.template:
+ src: templates/src.j2
+ dest: /etc/foo.conf
+
+ - name: Start service bar and enable it
+ ansible.builtin.service:
+ name: bar
+ state: started
+ enabled: True
+ when: ansible_facts['distribution'] == 'CentOS'
+ become: true
+ become_user: root
+ ignore_errors: yes
+
+In the example above, the 'when' condition will be evaluated before Ansible runs each of the three tasks in the block. All three tasks also inherit the privilege escalation directives, running as the root user. Finally, ``ignore_errors: yes`` ensures that Ansible continues to execute the playbook even if some of the tasks fail.
+
+Names for blocks have been available since Ansible 2.3. We recommend using names in all tasks, within blocks or elsewhere, for better visibility into the tasks being executed when you run the playbook.
+
+.. _block_error_handling:
+
+Handling errors with blocks
+===========================
+
+You can control how Ansible responds to task errors using blocks with ``rescue`` and ``always`` sections.
+
+Rescue blocks specify tasks to run when an earlier task in a block fails. This approach is similar to exception handling in many programming languages. Ansible only runs rescue blocks after a task returns a 'failed' state. Bad task definitions and unreachable hosts will not trigger the rescue block.
+
+.. _block_rescue:
+.. code-block:: YAML
+ :emphasize-lines: 3,10
+ :caption: Block error handling example
+
+ tasks:
+ - name: Handle the error
+ block:
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: 'I execute normally'
+
+ - name: Force a failure
+ ansible.builtin.command: /bin/false
+
+ - name: Never print this
+ ansible.builtin.debug:
+ msg: 'I never execute, due to the above task failing, :-('
+ rescue:
+ - name: Print when errors
+ ansible.builtin.debug:
+ msg: 'I caught an error, can do stuff here to fix it, :-)'
+
+You can also add an ``always`` section to a block. Tasks in the ``always`` section run no matter what the task status of the previous block is.
+
+.. _block_always:
+.. code-block:: YAML
+ :emphasize-lines: 2,9
+ :caption: Block with always section
+
+ - name: Always do X
+ block:
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: 'I execute normally'
+
+ - name: Force a failure
+ ansible.builtin.command: /bin/false
+
+ - name: Never print this
+ ansible.builtin.debug:
+ msg: 'I never execute :-('
+ always:
+ - name: Always do this
+ ansible.builtin.debug:
+ msg: "This always executes, :-)"
+
+Together, these elements offer complex error handling.
+
+.. code-block:: YAML
+ :emphasize-lines: 2,9,16
+ :caption: Block with all sections
+
+ - name: Attempt and graceful roll back demo
+ block:
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: 'I execute normally'
+
+ - name: Force a failure
+ ansible.builtin.command: /bin/false
+
+ - name: Never print this
+ ansible.builtin.debug:
+ msg: 'I never execute, due to the above task failing, :-('
+ rescue:
+ - name: Print when errors
+ ansible.builtin.debug:
+ msg: 'I caught an error'
+
+ - name: Force a failure in middle of recovery! >:-)
+ ansible.builtin.command: /bin/false
+
+ - name: Never print this
+ ansible.builtin.debug:
+ msg: 'I also never execute :-('
+ always:
+ - name: Always do this
+ ansible.builtin.debug:
+ msg: "This always executes"
+
+The tasks in the ``block`` execute normally. If any tasks in the block return ``failed``, the ``rescue`` section executes tasks to recover from the error. The ``always`` section runs regardless of the results of the ``block`` and ``rescue`` sections.
+
+If an error occurs in the block and the rescue task succeeds, Ansible reverts the failed status of the original task for the run and continues to run the play as if the original task had succeeded. The rescued task is considered successful, and does not trigger ``max_fail_percentage`` or ``any_errors_fatal`` configurations. However, Ansible still reports a failure in the playbook statistics.
+
+You can use blocks with ``flush_handlers`` in a rescue task to ensure that all handlers run even if an error occurs:
+
+.. code-block:: YAML
+ :emphasize-lines: 6,10
+ :caption: Block run handlers in error handling
+
+ tasks:
+ - name: Attempt and graceful roll back demo
+ block:
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: 'I execute normally'
+ changed_when: yes
+ notify: run me even after an error
+
+ - name: Force a failure
+ ansible.builtin.command: /bin/false
+ rescue:
+ - name: Make sure all handlers run
+ meta: flush_handlers
+ handlers:
+ - name: Run me even after an error
+ ansible.builtin.debug:
+ msg: 'This handler runs even on error'
+
+
+.. versionadded:: 2.1
+
+Ansible provides a couple of variables for tasks in the ``rescue`` portion of a block:
+
+ansible_failed_task
+ The task that returned 'failed' and triggered the rescue. For example, to get the name use ``ansible_failed_task.name``.
+
+ansible_failed_result
+    The captured return result of the failed task that triggered the rescue. This is equivalent to the result you would have captured with the ``register`` keyword.
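+
+As a quick sketch of how these variables can be used, a ``rescue`` section might report what failed::
+
+    tasks:
+      - name: Demonstrate the rescue variables
+        block:
+          - name: Force a failure
+            ansible.builtin.command: /bin/false
+        rescue:
+          - name: Report the failed task
+            ansible.builtin.debug:
+              msg: "'{{ ansible_failed_task.name }}' failed: {{ ansible_failed_result.msg | default('no message') }}"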
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ :ref:`playbooks_reuse_roles`
+ Playbook organization by roles
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_checkmode.rst b/docs/docsite/rst/user_guide/playbooks_checkmode.rst
new file mode 100644
index 00000000..36b16aa8
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_checkmode.rst
@@ -0,0 +1,97 @@
+.. _check_mode_dry:
+
+******************************************
+Validating tasks: check mode and diff mode
+******************************************
+
+Ansible provides two modes of execution that validate tasks: check mode and diff mode. These modes can be used separately or together. They are useful when you are creating or editing a playbook or role and you want to know what it will do. In check mode, Ansible runs without making any changes on remote systems. Modules that support check mode report the changes they would have made. Modules that do not support check mode report nothing and do nothing. In diff mode, Ansible provides before-and-after comparisons. Modules that support diff mode display detailed information. You can combine check mode and diff mode for detailed validation of your playbook or role.
+
+.. contents::
+ :local:
+
+Using check mode
+================
+
+Check mode is just a simulation. It will not generate output for tasks that use :ref:`conditionals based on registered variables <conditionals_registered_vars>` (results of prior tasks). However, it is great for validating configuration management playbooks that run on one node at a time. To run a playbook in check mode::
+
+ ansible-playbook foo.yml --check
+
+.. _forcing_to_run_in_check_mode:
+
+Enforcing or preventing check mode on tasks
+-------------------------------------------
+
+.. versionadded:: 2.2
+
+If you want certain tasks to run in check mode always, or never, regardless of whether you run the playbook with or without ``--check``, you can add the ``check_mode`` option to those tasks:
+
+ - To force a task to run in check mode, even when the playbook is called without ``--check``, set ``check_mode: yes``.
+ - To force a task to run in normal mode and make changes to the system, even when the playbook is called with ``--check``, set ``check_mode: no``.
+
+For example::
+
+ tasks:
+ - name: This task will always make changes to the system
+ ansible.builtin.command: /something/to/run --even-in-check-mode
+ check_mode: no
+
+ - name: This task will never make changes to the system
+ ansible.builtin.lineinfile:
+ line: "important config"
+ dest: /path/to/myconfig.conf
+ state: present
+ check_mode: yes
+ register: changes_to_important_config
+
+Running single tasks with ``check_mode: yes`` can be useful for testing Ansible modules, either to test the module itself or to test the conditions under which a module would make changes. You can register variables (see :ref:`playbooks_conditionals`) on these tasks for even more detail on the potential changes.
+
+.. note:: Prior to version 2.2 only the equivalent of ``check_mode: no`` existed. The notation for that was ``always_run: yes``.
+
+Skipping tasks or ignoring errors in check mode
+-----------------------------------------------
+
+.. versionadded:: 2.1
+
+If you want to skip a task or ignore errors on a task when you run Ansible in check mode, you can use a boolean magic variable ``ansible_check_mode``, which is set to ``True`` when Ansible runs in check mode. For example::
+
+ tasks:
+
+ - name: This task will be skipped in check mode
+ ansible.builtin.git:
+ repo: ssh://git@github.com/mylogin/hello.git
+ dest: /home/mylogin/hello
+ when: not ansible_check_mode
+
+ - name: This task will ignore errors in check mode
+ ansible.builtin.git:
+ repo: ssh://git@github.com/mylogin/hello.git
+ dest: /home/mylogin/hello
+ ignore_errors: "{{ ansible_check_mode }}"
+
+.. _diff_mode:
+
+Using diff mode
+===============
+
+The ``--diff`` option for ansible-playbook can be used alone or with ``--check``. When you run in diff mode, any module that supports diff mode reports the changes made or, if used with ``--check``, the changes that would have been made. Diff mode is most common in modules that manipulate files (for example, the template module) but other modules might also show 'before and after' information (for example, the user module).
+
+Diff mode produces a large amount of output, so it is best used when checking a single host at a time. For example::
+
+ ansible-playbook foo.yml --check --diff --limit foo.example.com
+
+.. versionadded:: 2.4
+
+Enforcing or preventing diff mode on tasks
+------------------------------------------
+
+Because the ``--diff`` option can reveal sensitive information, you can disable it for a task by specifying ``diff: no``. For example::
+
+ tasks:
+ - name: This task will not report a diff when the file changes
+ ansible.builtin.template:
+ src: secret.conf.j2
+ dest: /etc/secret.conf
+ owner: root
+ group: root
+ mode: '0600'
+ diff: no
diff --git a/docs/docsite/rst/user_guide/playbooks_conditionals.rst b/docs/docsite/rst/user_guide/playbooks_conditionals.rst
new file mode 100644
index 00000000..76599cb3
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_conditionals.rst
@@ -0,0 +1,508 @@
+.. _playbooks_conditionals:
+
+************
+Conditionals
+************
+
+In a playbook, you may want to execute different tasks, or have different goals, depending on the value of a fact (data about the remote system), a variable, or the result of a previous task. You may want the value of some variables to depend on the value of other variables. Or you may want to create additional groups of hosts based on whether the hosts match other criteria. You can do all of these things with conditionals.
+
+Ansible uses Jinja2 :ref:`tests <playbooks_tests>` and :ref:`filters <playbooks_filters>` in conditionals. Ansible supports all the standard tests and filters, and adds some unique ones as well.
+
+.. note::
+
+ There are many options to control execution flow in Ansible. You can find more examples of supported conditionals at `<https://jinja.palletsprojects.com/en/master/templates/#comparisons>`_.
+
+.. contents::
+ :local:
+
+.. _the_when_statement:
+
+Basic conditionals with ``when``
+================================
+
+The simplest conditional statement applies to a single task. Create the task, then add a ``when`` statement that applies a test. The ``when`` clause is a raw Jinja2 expression without double curly braces (see :ref:`group_by_module`). When you run the task or playbook, Ansible evaluates the test for all hosts. On any host where the test passes (returns a value of True), Ansible runs that task. For example, if you are installing mysql on multiple machines, some of which have SELinux enabled, you might have a task to configure SELinux to allow mysql to run. You would only want that task to run on machines that have SELinux enabled:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Configure SELinux to start mysql on any port
+ ansible.posix.seboolean:
+ name: mysql_connect_any
+ state: true
+ persistent: yes
+ when: ansible_selinux.status == "enabled"
+ # all variables can be used directly in conditionals without double curly braces
+
+Conditionals based on ansible_facts
+-----------------------------------
+
+Often you want to execute or skip a task based on facts. Facts are attributes of individual hosts, including IP address, operating system, the status of a filesystem, and many more. With conditionals based on facts:
+
+ - You can install a certain package only when the operating system is a particular version.
+ - You can skip configuring a firewall on hosts with internal IP addresses.
+ - You can perform cleanup tasks only when a filesystem is getting full.
+
+See :ref:`commonly_used_facts` for a list of facts that frequently appear in conditional statements. Not all facts exist for all hosts. For example, the 'lsb_major_release' fact used in an example below only exists when the lsb_release package is installed on the target host. To see what facts are available on your systems, add a debug task to your playbook::
+
+ - name: Show facts available on the system
+ ansible.builtin.debug:
+ var: ansible_facts
+
+Here is a sample conditional based on a fact:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Shut down Debian flavored systems
+ ansible.builtin.command: /sbin/shutdown -t now
+ when: ansible_facts['os_family'] == "Debian"
+
+If you have multiple conditions, you can group them with parentheses:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Shut down CentOS 6 and Debian 7 systems
+ ansible.builtin.command: /sbin/shutdown -t now
+ when: (ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_major_version'] == "6") or
+ (ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "7")
+
+You can use `logical operators <https://jinja.palletsprojects.com/en/master/templates/#logic>`_ to combine conditions. When you have multiple conditions that all need to be true (that is, a logical ``and``), you can specify them as a list::
+
+ tasks:
+ - name: Shut down CentOS 6 systems
+ ansible.builtin.command: /sbin/shutdown -t now
+ when:
+ - ansible_facts['distribution'] == "CentOS"
+ - ansible_facts['distribution_major_version'] == "6"
+
+If a fact or variable is a string, and you need to run a mathematical comparison on it, use a filter to ensure that Ansible reads the value as an integer::
+
+ tasks:
+ - ansible.builtin.shell: echo "only on Red Hat 6, derivatives, and later"
+ when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
+
+.. _conditionals_registered_vars:
+
+Conditions based on registered variables
+----------------------------------------
+
+Often in a playbook you want to execute or skip a task based on the outcome of an earlier task. For example, you might want to configure a service after it is upgraded by an earlier task. To create a conditional based on a registered variable:
+
+ #. Register the outcome of the earlier task as a variable.
+ #. Create a conditional test based on the registered variable.
+
+You create the name of the registered variable using the ``register`` keyword. A registered variable always contains the status of the task that created it as well as any output that task generated. You can use registered variables in templates and action lines as well as in conditional ``when`` statements. You can access the string contents of the registered variable using ``variable.stdout``. For example::
+
+ - name: Test play
+ hosts: all
+
+ tasks:
+
+ - name: Register a variable
+ ansible.builtin.shell: cat /etc/motd
+ register: motd_contents
+
+ - name: Use the variable in conditional statement
+ ansible.builtin.shell: echo "motd contains the word hi"
+ when: motd_contents.stdout.find('hi') != -1
+
+You can use registered results in the loop of a task if the variable is a list. If the variable is not a list, you can convert it into a list, with either ``stdout_lines`` or with ``variable.stdout.split()``. You can also split the lines by other fields::
+
+ - name: Registered variable usage as a loop list
+ hosts: all
+ tasks:
+
+ - name: Retrieve the list of home directories
+ ansible.builtin.command: ls /home
+ register: home_dirs
+
+ - name: Add home dirs to the backup spooler
+ ansible.builtin.file:
+ path: /mnt/bkspool/{{ item }}
+ src: /home/{{ item }}
+ state: link
+ loop: "{{ home_dirs.stdout_lines }}"
+ # same as loop: "{{ home_dirs.stdout.split() }}"
+
+The string content of a registered variable can be empty. If you want to run another task only on hosts where the stdout of your registered variable is empty, check the registered variable's string contents for emptiness:
+
+.. code-block:: yaml
+
+ - name: check registered variable for emptiness
+ hosts: all
+
+ tasks:
+
+ - name: List contents of directory
+ ansible.builtin.command: ls mydir
+ register: contents
+
+ - name: Check contents for emptiness
+ ansible.builtin.debug:
+ msg: "Directory is empty"
+ when: contents.stdout == ""
+
+Ansible always registers something in a registered variable for every host, even on hosts where a task fails or Ansible skips a task because a condition is not met. To run a follow-up task on these hosts, query the registered variable for ``is skipped`` (not for "undefined" or "default"). See :ref:`registered_variables` for more information. Here are sample conditionals based on the success or failure of a task. Remember to ignore errors if you want Ansible to continue executing on a host when a failure occurs:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Register a variable, ignore errors and continue
+ ansible.builtin.command: /bin/false
+ register: result
+ ignore_errors: true
+
+ - name: Run only if the task that registered the "result" variable fails
+ ansible.builtin.command: /bin/something
+ when: result is failed
+
+ - name: Run only if the task that registered the "result" variable succeeds
+ ansible.builtin.command: /bin/something_else
+ when: result is succeeded
+
+ - name: Run only if the task that registered the "result" variable is skipped
+ ansible.builtin.command: /bin/still/something_else
+ when: result is skipped
+
+.. note:: Older versions of Ansible used ``success`` and ``fail``, but ``succeeded`` and ``failed`` use the correct tense. All of these options are now valid.
+
+
+Conditionals based on variables
+-------------------------------
+
+You can also create conditionals based on variables defined in the playbooks or inventory. Because conditionals require boolean input (a test must evaluate as True to trigger the condition), you must apply the ``| bool`` filter to non-boolean variables, such as string variables with content like 'yes', 'on', '1', or 'true'. You can define variables like this:
+
+.. code-block:: yaml
+
+ vars:
+ epic: true
+ monumental: "yes"
+
+With the variables above, Ansible would run one of these tasks and skip the other:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Run the command if "epic" or "monumental" is true
+ ansible.builtin.shell: echo "This certainly is epic!"
+ when: epic or monumental | bool
+
+ - name: Run the command if "epic" is false
+ ansible.builtin.shell: echo "This certainly isn't epic!"
+ when: not epic
+
+If a required variable has not been set, you can skip or fail using Jinja2's `defined` test. For example:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Run the command if "foo" is defined
+ ansible.builtin.shell: echo "I've got '{{ foo }}' and am not afraid to use it!"
+ when: foo is defined
+
+ - name: Fail if "bar" is undefined
+ ansible.builtin.fail: msg="Bailing out. This play requires 'bar'"
+ when: bar is undefined
+
+This is especially useful in combination with the conditional import of vars files (see below).
+As the examples show, you do not need to use `{{ }}` to use variables inside conditionals, as these are already implied.
+
+.. _loops_and_conditionals:
+
+Using conditionals in loops
+---------------------------
+
+If you combine a ``when`` statement with a :ref:`loop <playbooks_loops>`, Ansible processes the condition separately for each item. This is by design, so you can execute the task on some items in the loop and skip it on other items. For example:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Run with items greater than 5
+ ansible.builtin.command: echo {{ item }}
+ loop: [ 0, 2, 4, 6, 8, 10 ]
+ when: item > 5
+
+If you need to skip the whole task when the loop variable is undefined, use the `|default` filter to provide an empty iterator. For example, when looping over a list:
+
+.. code-block:: yaml
+
+ - name: Skip the whole task when a loop variable is undefined
+ ansible.builtin.command: echo {{ item }}
+ loop: "{{ mylist|default([]) }}"
+ when: item > 5
+
+You can do the same thing when looping over a dict:
+
+.. code-block:: yaml
+
+ - name: The same as above using a dict
+ ansible.builtin.command: echo {{ item.key }}
+ loop: "{{ query('dict', mydict|default({})) }}"
+ when: item.value > 5
+
+.. _loading_in_custom_facts:
+
+Loading custom facts
+--------------------
+
+You can provide your own facts, as described in :ref:`developing_modules`. To run them, just make a call to your own custom fact gathering module at the top of your list of tasks, and variables returned there will be accessible to future tasks:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Gather site specific fact data
+ action: site_facts
+
+ - name: Use a custom fact
+ ansible.builtin.command: /usr/bin/thingy
+ when: my_custom_fact_just_retrieved_from_the_remote_system == '1234'
+
+.. _when_with_reuse:
+
+Conditionals with re-use
+------------------------
+
+You can use conditionals with re-usable tasks files, playbooks, or roles. Ansible executes these conditional statements differently for dynamic re-use (includes) and for static re-use (imports). See :ref:`playbooks_reuse` for more information on re-use in Ansible.
+
+.. _conditional_imports:
+
+Conditionals with imports
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When you add a conditional to an import statement, Ansible applies the condition to all tasks within the imported file. This behavior is the equivalent of :ref:`tag_inheritance`. Ansible applies the condition to every task, and evaluates each task separately. For example, you might have a playbook called ``main.yml`` and a tasks file called ``other_tasks.yml``::
+
+ # all tasks within an imported file inherit the condition from the import statement
+ # main.yml
+ - import_tasks: other_tasks.yml # note "import"
+ when: x is not defined
+
+ # other_tasks.yml
+ - name: Set a variable
+ ansible.builtin.set_fact:
+ x: foo
+
+ - name: Print a variable
+ ansible.builtin.debug:
+ var: x
+
+Ansible expands this at execution time to the equivalent of::
+
+ - name: Set a variable if not defined
+ ansible.builtin.set_fact:
+ x: foo
+ when: x is not defined
+ # this task sets a value for x
+
+ - name: Do the task if "x" is not defined
+      ansible.builtin.debug:
+ var: x
+ when: x is not defined
+ # Ansible skips this task, because x is now defined
+
+Thus if ``x`` is initially undefined, the ``debug`` task will be skipped. If this is not the behavior you want, use an ``include_*`` statement to apply a condition only to that statement itself.
+
+You can apply conditions to ``import_playbook`` as well as to the other ``import_*`` statements. When you use this approach, Ansible returns a 'skipped' message for every task on every host that does not match the criteria, creating repetitive output. In many cases the :ref:`group_by module <group_by_module>` can be a more streamlined way to accomplish the same objective; see :ref:`os_variance`.
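+
+For example, a top-level playbook might conditionally import another playbook (the file name and variable here are illustrative)::
+
+    - import_playbook: webservers.yml
+      when: configure_webservers | default(false) | bool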
+
+.. _conditional_includes:
+
+Conditionals with includes
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When you use a conditional on an ``include_*`` statement, the condition is applied only to the include task itself and not to any other tasks within the included file(s). To contrast with the example used for conditionals on imports above, look at the same playbook and tasks file, but using an include instead of an import::
+
+ # Includes let you re-use a file to define a variable when it is not already defined
+
+ # main.yml
+ - include_tasks: other_tasks.yml
+ when: x is not defined
+
+ # other_tasks.yml
+ - name: Set a variable
+ ansible.builtin.set_fact:
+ x: foo
+
+ - name: Print a variable
+ ansible.builtin.debug:
+ var: x
+
+Ansible expands this at execution time to the equivalent of::
+
+ # main.yml
+ - include_tasks: other_tasks.yml
+ when: x is not defined
+ # if condition is met, Ansible includes other_tasks.yml
+
+ # other_tasks.yml
+ - name: Set a variable
+ ansible.builtin.set_fact:
+ x: foo
+ # no condition applied to this task, Ansible sets the value of x to foo
+
+ - name: Print a variable
+ ansible.builtin.debug:
+ var: x
+ # no condition applied to this task, Ansible prints the debug statement
+
+By using ``include_tasks`` instead of ``import_tasks``, both tasks from ``other_tasks.yml`` will be executed as expected. For more information on the differences between ``include`` and ``import``, see :ref:`playbooks_reuse`.
+
+Conditionals with roles
+^^^^^^^^^^^^^^^^^^^^^^^
+
+There are three ways to apply conditions to roles:
+
+ - Add the same condition or conditions to all tasks in the role by placing your ``when`` statement under the ``roles`` keyword. See the example in this section.
+ - Add the same condition or conditions to all tasks in the role by placing your ``when`` statement on a static ``import_role`` in your playbook.
+ - Add a condition or conditions to individual tasks or blocks within the role itself. This is the only approach that allows you to select or skip some tasks within the role based on your ``when`` statement. To select or skip tasks within the role, you must have conditions set on individual tasks or blocks, use the dynamic ``include_role`` in your playbook, and add the condition or conditions to the include. When you use this approach, Ansible applies the condition to the include itself plus any tasks in the role that also have that ``when`` statement.
+
+When you incorporate a role in your playbook statically with the ``roles`` keyword, Ansible adds the conditions you define to all the tasks in the role. For example:
+
+.. code-block:: yaml
+
+ - hosts: webservers
+ roles:
+ - role: debian_stock_config
+ when: ansible_facts['os_family'] == 'Debian'
+
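+The other two approaches attach the condition to an ``import_role`` or ``include_role`` task instead. A sketch, reusing the hypothetical ``debian_stock_config`` role from the example above::
+
+    - hosts: webservers
+      tasks:
+        - name: Apply the condition to every task in the role (static import)
+          ansible.builtin.import_role:
+            name: debian_stock_config
+          when: ansible_facts['os_family'] == 'Debian'
+
+        - name: Apply the condition to the include itself (dynamic include)
+          ansible.builtin.include_role:
+            name: debian_stock_config
+          when: ansible_facts['os_family'] == 'Debian'
+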
+.. _conditional_variable_and_files:
+
+Selecting variables, files, or templates based on facts
+-------------------------------------------------------
+
+Sometimes the facts about a host determine the values you want to use for certain variables or even the file or template you want to select for that host. For example, the names of packages are different on CentOS and on Debian. The configuration files for common services are also different on different OS flavors and versions. To load different variables files, templates, or other files based on a fact about the hosts:
+
+ 1) name your vars files, templates, or files to match the Ansible fact that differentiates them
+
+ 2) select the correct vars file, template, or file for each host with a variable based on that Ansible fact
+
+Ansible separates variables from tasks, keeping your playbooks from turning into arbitrary code with nested conditionals. This approach results in more streamlined and auditable configuration rules because there are fewer decision points to track.
+
+Selecting variables files based on facts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can create a playbook that works on multiple platforms and OS versions with a minimum of syntax by placing your variable values in vars files and conditionally importing them. If you want to install Apache on some CentOS and some Debian servers, create variables files with YAML keys and values. For example::
+
+ ---
+ # for vars/RedHat.yml
+ apache: httpd
+ somethingelse: 42
+
+Then import those variables files based on the facts you gather on the hosts in your playbook::
+
+ ---
+ - hosts: webservers
+ remote_user: root
+ vars_files:
+ - "vars/common.yml"
+ - [ "vars/{{ ansible_facts['os_family'] }}.yml", "vars/os_defaults.yml" ]
+ tasks:
+ - name: Make sure apache is started
+ ansible.builtin.service:
+ name: '{{ apache }}'
+ state: started
+
+Ansible gathers facts on the hosts in the webservers group, then interpolates the variable "ansible_facts['os_family']" into a list of filenames. If you have hosts with Red Hat operating systems (CentOS, for example), Ansible looks for 'vars/RedHat.yml'. If that file does not exist, Ansible attempts to load 'vars/os_defaults.yml'. For Debian hosts, Ansible first looks for 'vars/Debian.yml', before falling back on 'vars/os_defaults.yml'. If no files in the list are found, Ansible raises an error.
+
+Selecting files and templates based on facts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can use the same approach when different OS flavors or versions require different configuration files or templates. Select the appropriate file or template based on the variables assigned to each host. This approach is often much cleaner than putting a lot of conditionals into a single template to cover multiple OS or package versions.
+
+For example, you can template out a configuration file that is very different between, say, CentOS and Debian::
+
+ - name: Template a file
+ ansible.builtin.template:
+ src: "{{ item }}"
+ dest: /etc/myapp/foo.conf
+ loop: "{{ query('first_found', { 'files': myfiles, 'paths': mypaths}) }}"
+ vars:
+ myfiles:
+ - "{{ ansible_facts['distribution'] }}.conf"
+ - default.conf
+ mypaths: ['search_location_one/somedir/', '/opt/other_location/somedir/']
+
+.. _commonly_used_facts:
+
+Commonly-used facts
+===================
+
+The following Ansible facts are frequently used in conditionals.
+
+.. _ansible_distribution:
+
+ansible_facts['distribution']
+-----------------------------
+
+Possible values (sample, not complete list)::
+
+ Alpine
+ Altlinux
+ Amazon
+ Archlinux
+ ClearLinux
+ Coreos
+ CentOS
+ Debian
+ Fedora
+ Gentoo
+ Mandriva
+ NA
+ OpenWrt
+ OracleLinux
+ RedHat
+ Slackware
+ SLES
+ SMGL
+ SUSE
+ Ubuntu
+ VMwareESX
+
+.. See `OSDIST_LIST`
+
+.. _ansible_distribution_major_version:
+
+ansible_facts['distribution_major_version']
+-------------------------------------------
+
+The major version of the operating system. For example, the value is `16` for Ubuntu 16.04.
+
+.. _ansible_os_family:
+
+ansible_facts['os_family']
+--------------------------
+
+Possible values (sample, not complete list)::
+
+ AIX
+ Alpine
+ Altlinux
+ Archlinux
+ Darwin
+ Debian
+ FreeBSD
+ Gentoo
+ HP-UX
+ Mandrake
+ RedHat
+ SGML
+ Slackware
+ Solaris
+ Suse
+ Windows
+
+.. Ansible checks `OS_FAMILY_MAP`; if there's no match, it returns the value of `platform.system()`.
+
+.. seealso::
+
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_reuse_roles`
+ Playbook organization by roles
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ :ref:`playbooks_variables`
+ All about variables
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_debugger.rst b/docs/docsite/rst/user_guide/playbooks_debugger.rst
new file mode 100644
index 00000000..cc330cc5
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_debugger.rst
@@ -0,0 +1,329 @@
+.. _playbook_debugger:
+
+***************
+Debugging tasks
+***************
+
+Ansible offers a task debugger so you can fix errors during execution instead of editing your playbook and running it again to see if your change worked. You have access to all of the features of the debugger in the context of the task. You can check or set the value of variables, update module arguments, and re-run the task with the new variables and arguments. The debugger lets you resolve the cause of the failure and continue with playbook execution.
+
+.. contents::
+ :local:
+
+Enabling the debugger
+=====================
+
+The debugger is not enabled by default. If you want to invoke the debugger during playbook execution, you must enable it first.
+
+Use one of these three methods to enable the debugger:
+
+ * with the debugger keyword
+ * in configuration or an environment variable, or
+ * as a strategy
+
+Enabling the debugger with the ``debugger`` keyword
+---------------------------------------------------
+
+.. versionadded:: 2.5
+
+You can use the ``debugger`` keyword to enable (or disable) the debugger for a specific play, role, block, or task. This option is especially useful when developing or extending playbooks, plays, and roles. You can enable the debugger on new or updated tasks. If they fail, you can fix the errors efficiently. The ``debugger`` keyword accepts five values:
+
+.. table::
+ :class: documentation-table
+
+ ========================= ======================================================
+ Value Result
+ ========================= ======================================================
+ always Always invoke the debugger, regardless of the outcome
+
+ never Never invoke the debugger, regardless of the outcome
+
+ on_failed Only invoke the debugger if a task fails
+
+ on_unreachable Only invoke the debugger if a host is unreachable
+
+ on_skipped Only invoke the debugger if the task is skipped
+
+ ========================= ======================================================
+
+When you use the ``debugger`` keyword, the value you specify overrides any global configuration to enable or disable the debugger. If you define ``debugger`` at multiple levels, such as in a role and in a task, Ansible honors the most granular definition. The definition at the play or role level applies to all blocks and tasks within that play or role, unless they specify a different value. The definition at the block level overrides the definition at the play or role level, and applies to all tasks within that block, unless they specify a different value. The definition at the task level always applies to the task; it overrides the definitions at the block, play, or role level.
+
+Examples of using the ``debugger`` keyword
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Example of setting the ``debugger`` keyword on a task:
+
+.. code-block:: yaml
+
+ - name: Execute a command
+ ansible.builtin.command: "false"
+ debugger: on_failed
+
+Example of setting the ``debugger`` keyword on a play:
+
+.. code-block:: yaml
+
+ - name: My play
+ hosts: all
+ debugger: on_skipped
+ tasks:
+ - name: Execute a command
+ ansible.builtin.command: "true"
+ when: False
+
+Example of setting the ``debugger`` keyword at multiple levels:
+
+.. code-block:: yaml
+
+
+ - name: Play
+ hosts: all
+ debugger: never
+ tasks:
+ - name: Execute a command
+ ansible.builtin.command: "false"
+ debugger: on_failed
+
+In this example, the debugger is set to ``never`` at the play level and to ``on_failed`` at the task level. If the task fails, Ansible invokes the debugger, because the definition on the task overrides the definition on its parent play.
+
+Enabling the debugger in configuration or an environment variable
+-----------------------------------------------------------------
+
+.. versionadded:: 2.5
+
+You can enable the task debugger globally with a setting in ansible.cfg or with an environment variable. The only options are ``True`` or ``False``. If you set the configuration option or environment variable to ``True``, Ansible runs the debugger on failed tasks by default.
+
+To enable the task debugger from ansible.cfg, add this setting to the defaults section::
+
+ [defaults]
+ enable_task_debugger = True
+
+To enable the task debugger with an environment variable, pass the variable when you run your playbook::
+
+ ANSIBLE_ENABLE_TASK_DEBUGGER=True ansible-playbook -i hosts site.yml
+
+When you enable the debugger globally, every failed task invokes the debugger, unless the role, play, block, or task explicitly disables the debugger. If you need more granular control over what conditions trigger the debugger, use the ``debugger`` keyword.
+
+Enabling the debugger as a strategy
+-----------------------------------
+
+If you are running legacy playbooks or roles, you may see the debugger enabled as a :ref:`strategy <strategy_plugins>`. You can do this at the play level, in ansible.cfg, or with the environment variable ``ANSIBLE_STRATEGY=debug``. For example:
+
+.. code-block:: yaml
+
+ - hosts: test
+ strategy: debug
+ tasks:
+ ...
+
+Or in ansible.cfg::
+
+ [defaults]
+ strategy = debug
+
+.. note::
+
+ This backwards-compatible method, which matches Ansible versions before 2.5, may be removed in a future release.
+
+Resolving errors in the debugger
+================================
+
+After Ansible invokes the debugger, you can use the seven :ref:`debugger commands <available_commands>` to resolve the error that Ansible encountered. Consider this example playbook, which defines the ``var1`` variable but uses the undefined ``wrong_var`` variable in a task by mistake.
+
+.. code-block:: yaml
+
+ - hosts: test
+ debugger: on_failed
+ gather_facts: no
+ vars:
+ var1: value1
+ tasks:
+ - name: Use a wrong variable
+ ansible.builtin.ping: data={{ wrong_var }}
+
+If you run this playbook, Ansible invokes the debugger when the task fails. From the debug prompt, you can change the module arguments or the variables and run the task again.
+
+.. code-block:: none
+
+ PLAY ***************************************************************************
+
+ TASK [wrong variable] **********************************************************
+ fatal: [192.0.2.10]: FAILED! => {"failed": true, "msg": "ERROR! 'wrong_var' is undefined"}
+ Debugger invoked
+ [192.0.2.10] TASK: wrong variable (debug)> p result._result
+ {'failed': True,
+ 'msg': 'The task includes an option with an undefined variable. The error '
+ "was: 'wrong_var' is undefined\n"
+ '\n'
+ 'The error appears to have been in '
+ "'playbooks/debugger.yml': line 7, "
+ 'column 7, but may\n'
+ 'be elsewhere in the file depending on the exact syntax problem.\n'
+ '\n'
+ 'The offending line appears to be:\n'
+ '\n'
+ ' tasks:\n'
+ ' - name: wrong variable\n'
+ ' ^ here\n'}
+ [192.0.2.10] TASK: wrong variable (debug)> p task.args
+ {u'data': u'{{ wrong_var }}'}
+ [192.0.2.10] TASK: wrong variable (debug)> task.args['data'] = '{{ var1 }}'
+ [192.0.2.10] TASK: wrong variable (debug)> p task.args
+ {u'data': '{{ var1 }}'}
+ [192.0.2.10] TASK: wrong variable (debug)> redo
+ ok: [192.0.2.10]
+
+ PLAY RECAP *********************************************************************
+ 192.0.2.10 : ok=1 changed=0 unreachable=0 failed=0
+
+Changing the task arguments in the debugger to use ``var1`` instead of ``wrong_var`` makes the task run successfully.
+
+.. _available_commands:
+
+Available debug commands
+========================
+
+You can use these seven commands at the debug prompt:
+
+.. table::
+ :class: documentation-table
+
+ ========================== ============ =========================================================
+ Command Shortcut Action
+ ========================== ============ =========================================================
+ print p Print information about the task
+
+ task.args[*key*] = *value* no shortcut Update module arguments
+
+ task_vars[*key*] = *value* no shortcut Update task variables (you must ``update_task`` next)
+
+ update_task u Recreate a task with updated task variables
+
+ redo r Run the task again
+
+ continue c Continue executing, starting with the next task
+
+ quit q Quit the debugger
+
+ ========================== ============ =========================================================
+
+For more details, see the individual descriptions and examples below.
+
+.. _pprint_command:
+
+Print command
+-------------
+
+``print *task/task.args/task_vars/host/result*`` prints information about the task::
+
+ [192.0.2.10] TASK: install package (debug)> p task
+ TASK: install package
+ [192.0.2.10] TASK: install package (debug)> p task.args
+ {u'name': u'{{ pkg_name }}'}
+ [192.0.2.10] TASK: install package (debug)> p task_vars
+ {u'ansible_all_ipv4_addresses': [u'192.0.2.10'],
+ u'ansible_architecture': u'x86_64',
+ ...
+ }
+ [192.0.2.10] TASK: install package (debug)> p task_vars['pkg_name']
+ u'bash'
+ [192.0.2.10] TASK: install package (debug)> p host
+ 192.0.2.10
+ [192.0.2.10] TASK: install package (debug)> p result._result
+ {'_ansible_no_log': False,
+ 'changed': False,
+ u'failed': True,
+ ...
+ u'msg': u"No package matching 'not_exist' is available"}
+
+.. _update_args_command:
+
+Update args command
+-------------------
+
+``task.args[*key*] = *value*`` updates a module argument. This sample playbook has an invalid package name::
+
+ - hosts: test
+ strategy: debug
+ gather_facts: yes
+ vars:
+ pkg_name: not_exist
+ tasks:
+ - name: Install a package
+ ansible.builtin.apt: name={{ pkg_name }}
+
+When you run the playbook, the invalid package name triggers an error, and Ansible invokes the debugger. You can fix the package name by viewing, then updating the module argument::
+
+ [192.0.2.10] TASK: install package (debug)> p task.args
+ {u'name': u'{{ pkg_name }}'}
+ [192.0.2.10] TASK: install package (debug)> task.args['name'] = 'bash'
+ [192.0.2.10] TASK: install package (debug)> p task.args
+ {u'name': 'bash'}
+ [192.0.2.10] TASK: install package (debug)> redo
+
+After you update the module argument, use ``redo`` to run the task again with the new args.
+
+.. _update_vars_command:
+
+Update vars command
+-------------------
+
+``task_vars[*key*] = *value*`` updates the ``task_vars``. You could fix the playbook above by viewing, then updating the task variables instead of the module args::
+
+ [192.0.2.10] TASK: install package (debug)> p task_vars['pkg_name']
+ u'not_exist'
+ [192.0.2.10] TASK: install package (debug)> task_vars['pkg_name'] = 'bash'
+ [192.0.2.10] TASK: install package (debug)> p task_vars['pkg_name']
+ 'bash'
+ [192.0.2.10] TASK: install package (debug)> update_task
+ [192.0.2.10] TASK: install package (debug)> redo
+
+After you update the task variables, you must use ``update_task`` to load the new variables before using ``redo`` to run the task again.
+
+.. note::
+   In Ansible 2.5, this was updated from ``vars`` to ``task_vars`` to avoid conflicts with the ``vars()`` Python function.
+
+.. _update_task_command:
+
+Update task command
+-------------------
+
+.. versionadded:: 2.8
+
+``u`` or ``update_task`` recreates the task from the original task data structure and templates with updated task variables. See the entry :ref:`update_vars_command` for an example of use.
+
+.. _redo_command:
+
+Redo command
+------------
+
+``r`` or ``redo`` runs the task again.
+
+.. _continue_command:
+
+Continue command
+----------------
+
+``c`` or ``continue`` continues executing, starting with the next task.
+
+.. _quit_command:
+
+Quit command
+------------
+
+``q`` or ``quit`` quits the debugger. The playbook execution is aborted.
+
+How the debugger interacts with the free strategy
+=================================================
+
+With the default ``linear`` strategy enabled, Ansible halts execution while the debugger is active, and runs the debugged task immediately after you enter the ``redo`` command. With the ``free`` strategy enabled, however, Ansible does not wait for all hosts, and may queue later tasks on one host before a task fails on another host. With the ``free`` strategy, Ansible does not queue or execute any tasks while the debugger is active. However, all queued tasks remain in the queue and run as soon as you exit the debugger. If you use ``redo`` to reschedule a task from the debugger, other queued tasks may execute before your rescheduled task. For more information about strategies, see :ref:`playbooks_strategies`.
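+
+For example, a minimal sketch (the task is illustrative) that combines the ``free`` strategy with the debugger enabled on failure::
+
+    - hosts: all
+      strategy: free
+      debugger: on_failed
+      tasks:
+        # illustrative task that fails and invokes the debugger on this host
+        - name: Execute a command that may fail
+          ansible.builtin.command: "false"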
+
+.. seealso::
+
+ :ref:`playbooks_start_and_step`
+ Running playbooks while debugging or testing
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_delegation.rst b/docs/docsite/rst/user_guide/playbooks_delegation.rst
new file mode 100644
index 00000000..1042bafb
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_delegation.rst
@@ -0,0 +1,136 @@
+.. _playbooks_delegation:
+
+Controlling where tasks run: delegation and local actions
+=========================================================
+
+By default Ansible gathers facts and executes all tasks on the machines that match the ``hosts`` line of your playbook. This page shows you how to delegate tasks to a different machine or group, delegate facts to specific machines or groups, or run an entire playbook locally. Using these approaches, you can manage inter-related environments precisely and efficiently. For example, when updating your webservers, you might need to remove them from a load-balanced pool temporarily. You cannot perform this task on the webservers themselves. By delegating the task to localhost, you keep all the tasks within the same play.
+
+.. contents::
+ :local:
+
+Tasks that cannot be delegated
+------------------------------
+
+Some tasks always execute on the controller. These tasks, including ``include``, ``add_host``, and ``debug``, cannot be delegated.
+
+.. _delegation:
+
+Delegating tasks
+----------------
+
+If you want to perform a task on one host with reference to other hosts, use the ``delegate_to`` keyword on a task. This is ideal for managing nodes in a load balanced pool or for controlling outage windows. You can use delegation with the :ref:`serial <rolling_update_batch_size>` keyword to control the number of hosts executing at one time::
+
+ ---
+ - hosts: webservers
+ serial: 5
+
+ tasks:
+ - name: Take out of load balancer pool
+ ansible.builtin.command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
+ delegate_to: 127.0.0.1
+
+ - name: Actual steps would go here
+ ansible.builtin.yum:
+ name: acme-web-stack
+ state: latest
+
+ - name: Add back to load balancer pool
+ ansible.builtin.command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
+ delegate_to: 127.0.0.1
+
+The first and third tasks in this play run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that you can use on a per-task basis: ``local_action``. Here is the same playbook as above, but using the shorthand syntax for delegating to 127.0.0.1::
+
+ ---
+ # ...
+
+ tasks:
+ - name: Take out of load balancer pool
+ local_action: ansible.builtin.command /usr/bin/take_out_of_pool {{ inventory_hostname }}
+
+ # ...
+
+ - name: Add back to load balancer pool
+ local_action: ansible.builtin.command /usr/bin/add_back_to_pool {{ inventory_hostname }}
+
+You can use a local action to call 'rsync' to recursively copy files to the managed servers::
+
+ ---
+ # ...
+
+ tasks:
+ - name: Recursively copy files from management server to target
+ local_action: ansible.builtin.command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/
+
+Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync asks for a passphrase.
+
+To specify more arguments, use the following syntax::
+
+ ---
+ # ...
+
+ tasks:
+ - name: Send summary mail
+ local_action:
+ module: community.general.mail
+ subject: "Summary Mail"
+ to: "{{ mail_recipient }}"
+ body: "{{ mail_body }}"
+ run_once: True
+
+The ``ansible_host`` variable reflects the host a task is delegated to.
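+
+For example, a minimal sketch of a delegated task that records the delegated host::
+
+    # ansible_host is templated in the context of the delegated host (127.0.0.1 here)
+    - name: Record which host this task actually runs against
+      ansible.builtin.command: echo "Running on {{ ansible_host }}"
+      delegate_to: 127.0.0.1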
+
+.. _delegate_facts:
+
+Delegating facts
+----------------
+
+Delegating Ansible tasks is like delegating tasks in the real world - your groceries belong to you, even if someone else delivers them to your home. Similarly, any facts gathered by a delegated task are assigned by default to the `inventory_hostname` (the current host), not to the host that produced the facts (the delegated-to host). To assign gathered facts to the delegated host instead of the current host, set ``delegate_facts`` to ``true``::
+
+ ---
+ - hosts: app_servers
+
+ tasks:
+ - name: Gather facts from db servers
+ ansible.builtin.setup:
+ delegate_to: "{{ item }}"
+ delegate_facts: true
+ loop: "{{ groups['dbservers'] }}"
+
+This task gathers facts for the machines in the dbservers group and assigns the facts to those machines, even though the play targets the app_servers group. This way you can look up `hostvars['dbhost1']['ansible_default_ipv4']['address']` even though dbservers were not part of the play, or were left out by using `--limit`.
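+
+For example, a follow-up task in the same play could reference the delegated facts (the host name ``dbhost1`` is illustrative)::
+
+    # assumes 'dbhost1' is a member of the dbservers group whose facts were gathered above
+    - name: Show a fact gathered on a db server
+      ansible.builtin.debug:
+        msg: "{{ hostvars['dbhost1']['ansible_default_ipv4']['address'] }}"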
+
+.. _local_playbooks:
+
+Local playbooks
+---------------
+
+It may be useful to run a playbook locally on a remote host, rather than connecting over SSH. This approach can be useful for assuring the configuration of a system by placing a playbook in a crontab, or for running a playbook inside an OS installer, such as an Anaconda kickstart.
+
+To run an entire playbook locally, just set the ``hosts:`` line to ``hosts: 127.0.0.1`` and then run the playbook like so::
+
+ ansible-playbook playbook.yml --connection=local
+
+Alternatively, a local connection can be used in a single playbook play, even if other plays in the playbook
+use the default remote connection type::
+
+ ---
+ - hosts: 127.0.0.1
+ connection: local
+
+.. note::
+ If you set the connection to local and there is no ansible_python_interpreter set, modules will run under /usr/bin/python and not
+ under {{ ansible_playbook_python }}. Be sure to set ansible_python_interpreter: "{{ ansible_playbook_python }}" in
+ host_vars/localhost.yml, for example. You can avoid this issue by using ``local_action`` or ``delegate_to: localhost`` instead.
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ :ref:`playbooks_strategies`
+ More ways to control how and where Ansible executes
+ `Ansible Examples on GitHub <https://github.com/ansible/ansible-examples>`_
+ Many examples of full-stack deployments
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_environment.rst b/docs/docsite/rst/user_guide/playbooks_environment.rst
new file mode 100644
index 00000000..7d97b954
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_environment.rst
@@ -0,0 +1,141 @@
+.. _playbooks_environment:
+
+Setting the remote environment
+==============================
+
+.. versionadded:: 1.1
+
+You can use the ``environment`` keyword at the play, block, or task level to set an environment variable for an action on a remote host. With this keyword, you can enable using a proxy for a task that makes HTTP requests, set the required environment variables for language-specific version managers, and more.
+
+When you set a value with ``environment:`` at the play or block level, it is available only to tasks within the play or block that are executed by the same user. The ``environment:`` keyword does not affect Ansible itself, Ansible configuration settings, the environment for other users, or the execution of other plugins like lookups and filters. Variables set with ``environment:`` do not automatically become Ansible facts, even when you set them at the play level. You must include an explicit ``gather_facts`` task in your playbook and set the ``environment`` keyword on that task to turn these values into Ansible facts.
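+
+For example, a minimal sketch (the proxy URL is illustrative) of an explicit fact-gathering task that applies the ``environment`` keyword, so the value becomes part of the gathered facts::
+
+    - hosts: all
+      gather_facts: false
+      tasks:
+        # gathering facts with the environment set makes the value visible to the fact run
+        - name: Gather facts with the environment set
+          ansible.builtin.setup:
+          environment:
+            http_proxy: http://proxy.example.com:8080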
+
+.. contents::
+ :local:
+
+Setting the remote environment in a task
+----------------------------------------
+
+You can set the environment directly at the task level::
+
+ - hosts: all
+ remote_user: root
+
+ tasks:
+
+ - name: Install cobbler
+ ansible.builtin.package:
+ name: cobbler
+ state: present
+ environment:
+ http_proxy: http://proxy.example.com:8080
+
+You can re-use environment settings by defining them as variables in your play and accessing them in a task as you would access any stored Ansible variable::
+
+ - hosts: all
+ remote_user: root
+
+ # create a variable named "proxy_env" that is a dictionary
+ vars:
+ proxy_env:
+ http_proxy: http://proxy.example.com:8080
+
+ tasks:
+
+ - name: Install cobbler
+ ansible.builtin.package:
+ name: cobbler
+ state: present
+ environment: "{{ proxy_env }}"
+
+You can store environment settings for re-use in multiple playbooks by defining them in a group_vars file::
+
+ ---
+ # file: group_vars/boston
+
+ ntp_server: ntp.bos.example.com
+ backup: bak.bos.example.com
+ proxy_env:
+ http_proxy: http://proxy.bos.example.com:8080
+ https_proxy: http://proxy.bos.example.com:8080
+
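+A play that targets hosts in the ``boston`` group can then reuse those settings; for example, a minimal sketch::
+
+    - hosts: boston
+
+      tasks:
+        # assumes proxy_env is defined in group_vars/boston as shown above
+        - name: Install cobbler behind the proxy defined in group_vars
+          ansible.builtin.package:
+            name: cobbler
+            state: present
+          environment: "{{ proxy_env }}"
+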
+You can set the remote environment at the play level::
+
+ - hosts: testing
+
+ roles:
+ - php
+ - nginx
+
+ environment:
+ http_proxy: http://proxy.example.com:8080
+
+These examples show proxy settings, but you can provide any number of settings this way.
+
+Working with language-specific version managers
+===============================================
+
+Some language-specific version managers (such as rbenv and nvm) require you to set environment variables while these tools are in use. When using these tools manually, you usually source some environment variables from a script or from lines added to your shell configuration file. In Ansible, you can do this with the environment keyword at the play level::
+
+ ---
+ ### A playbook demonstrating a common npm workflow:
+ # - Check for package.json in the application directory
+ # - If package.json exists:
+ # * Run npm prune
+ # * Run npm install
+
+ - hosts: application
+ become: false
+
+ vars:
+ node_app_dir: /var/local/my_node_app
+
+ environment:
+ NVM_DIR: /var/local/nvm
+ PATH: /var/local/nvm/versions/node/v4.2.1/bin:{{ ansible_env.PATH }}
+
+ tasks:
+ - name: Check for package.json
+ ansible.builtin.stat:
+ path: '{{ node_app_dir }}/package.json'
+ register: packagejson
+
+ - name: Run npm prune
+ ansible.builtin.command: npm prune
+ args:
+ chdir: '{{ node_app_dir }}'
+ when: packagejson.stat.exists
+
+ - name: Run npm install
+ community.general.npm:
+ path: '{{ node_app_dir }}'
+ when: packagejson.stat.exists
+
+.. note::
+   The example above uses ``ansible_env`` as part of the PATH. Basing variables on ``ansible_env`` is risky. Ansible populates ``ansible_env`` values by gathering facts, so the value of the variables depends on the ``remote_user`` or ``become_user`` Ansible used when gathering those facts. If you change ``remote_user``/``become_user``, the values in ``ansible_env`` may not be the ones you expect.
+
+.. warning::
+ Environment variables are normally passed in clear text (shell plugin dependent) so they are not a recommended way of passing secrets to the module being executed.
+
+You can also specify the environment at the task level::
+
+ ---
+ - name: Install ruby 2.3.1
+ ansible.builtin.command: rbenv install {{ rbenv_ruby_version }}
+ args:
+ creates: '{{ rbenv_root }}/versions/{{ rbenv_ruby_version }}/bin/ruby'
+ vars:
+ rbenv_root: /usr/local/rbenv
+ rbenv_ruby_version: 2.3.1
+ environment:
+ CONFIGURE_OPTS: '--disable-install-doc'
+ RBENV_ROOT: '{{ rbenv_root }}'
+ PATH: '{{ rbenv_root }}/bin:{{ rbenv_root }}/shims:{{ rbenv_plugins }}/ruby-build/bin:{{ ansible_env.PATH }}'
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_error_handling.rst b/docs/docsite/rst/user_guide/playbooks_error_handling.rst
new file mode 100644
index 00000000..c73067cc
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_error_handling.rst
@@ -0,0 +1,245 @@
+.. _playbooks_error_handling:
+
+***************************
+Error handling in playbooks
+***************************
+
+When Ansible receives a non-zero return code from a command or a failure from a module, by default it stops executing on that host and continues on other hosts. However, in some circumstances you may want different behavior. Sometimes a non-zero return code indicates success. Sometimes you want a failure on one host to stop execution on all hosts. Ansible provides tools and settings to handle these situations and help you get the behavior, output, and reporting you want.
+
+.. contents::
+ :local:
+
+.. _ignoring_failed_commands:
+
+Ignoring failed commands
+========================
+
+By default Ansible stops executing tasks on a host when a task fails on that host. You can use ``ignore_errors`` to continue on in spite of the failure::
+
+ - name: Do not count this as a failure
+ ansible.builtin.command: /bin/false
+ ignore_errors: yes
+
+The ``ignore_errors`` directive only works when the task is able to run and returns a value of 'failed'. It does not make Ansible ignore undefined variable errors, connection failures, execution issues (for example, missing packages), or syntax errors.
+
+Ignoring unreachable host errors
+================================
+
+.. versionadded:: 2.7
+
+You can ignore a task failure due to the host instance being 'UNREACHABLE' with the ``ignore_unreachable`` keyword. Ansible ignores the task errors, but continues to execute future tasks against the unreachable host. For example, at the task level::
+
+ - name: This executes, fails, and the failure is ignored
+ ansible.builtin.command: /bin/true
+ ignore_unreachable: yes
+
+ - name: This executes, fails, and ends the play for this host
+ ansible.builtin.command: /bin/true
+
+And at the playbook level::
+
+ - hosts: all
+ ignore_unreachable: yes
+ tasks:
+ - name: This executes, fails, and the failure is ignored
+ ansible.builtin.command: /bin/true
+
+ - name: This executes, fails, and ends the play for this host
+ ansible.builtin.command: /bin/true
+ ignore_unreachable: no
+
+.. _resetting_unreachable:
+
+Resetting unreachable hosts
+===========================
+
+If Ansible cannot connect to a host, it marks that host as 'UNREACHABLE' and removes it from the list of active hosts for the run. You can use ``meta: clear_host_errors`` to reactivate all hosts, so subsequent tasks can try to reach them again.
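+
+For example, a minimal sketch::
+
+    # reactivates hosts that were marked unreachable earlier in the play
+    - name: Reset the list of active hosts
+      ansible.builtin.meta: clear_host_errors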
+
+
+.. _handlers_and_failure:
+
+Handlers and failure
+====================
+
+Ansible runs :ref:`handlers <handlers>` at the end of each play. If a task notifies a handler but
+another task fails later in the play, by default the handler does *not* run on that host,
+which may leave the host in an unexpected state. For example, a task could update
+a configuration file and notify a handler to restart some service. If a
+task later in the same play fails, the configuration file might be changed but
+the service will not be restarted.
+
+You can change this behavior with the ``--force-handlers`` command-line option,
+by including ``force_handlers: True`` in a play, or by adding ``force_handlers = True``
+to ansible.cfg. When handlers are forced, Ansible will run all notified handlers on
+all hosts, even hosts with failed tasks. (Note that certain errors could still prevent
+the handler from running, such as a host becoming unreachable.)
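+
+For example, a minimal sketch (the file paths and service name are illustrative) of forcing handlers in a play::
+
+    - hosts: webservers
+      force_handlers: True
+      tasks:
+        - name: Update a configuration file
+          ansible.builtin.template:
+            src: templates/app.conf.j2
+            dest: /etc/app/app.conf
+          notify: Restart app
+
+        # even if this task fails, the notified handler still runs
+        - name: A later task that may fail
+          ansible.builtin.command: "false"
+
+      handlers:
+        - name: Restart app
+          ansible.builtin.service:
+            name: app
+            state: restarted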
+
+.. _controlling_what_defines_failure:
+
+Defining failure
+================
+
+Ansible lets you define what "failure" means in each task using the ``failed_when`` conditional. As with all conditionals in Ansible, lists of multiple ``failed_when`` conditions are joined with an implicit ``and``, meaning the task only fails when *all* conditions are met. If you want to trigger a failure when any of the conditions is met, you must define the conditions in a string with an explicit ``or`` operator.
+
+You may check for failure by searching for a word or phrase in the output of a command::
+
+ - name: Fail task when the command error output prints FAILED
+ ansible.builtin.command: /usr/bin/example-command -x -y -z
+ register: command_result
+ failed_when: "'FAILED' in command_result.stderr"
+
+or based on the return code::
+
+ - name: Fail task when both files are identical
+ ansible.builtin.raw: diff foo/file1 bar/file2
+ register: diff_cmd
+ failed_when: diff_cmd.rc == 0 or diff_cmd.rc >= 2
+
+You can also combine multiple conditions for failure. This task will fail if both conditions are true::
+
+ - name: Check if a file exists in temp and fail task if it does
+ ansible.builtin.command: ls /tmp/this_should_not_be_here
+ register: result
+ failed_when:
+ - result.rc == 0
+ - '"No such" not in result.stdout'
+
+If you want the task to fail when only one condition is satisfied, change the ``failed_when`` definition to::
+
+ failed_when: result.rc == 0 or "No such" not in result.stdout
+
+If you have too many conditions to fit neatly into one line, you can split it into a multi-line yaml value with ``>``::
+
+ - name: example of many failed_when conditions with OR
+ ansible.builtin.shell: "./myBinary"
+ register: ret
+ failed_when: >
+ ("No such file or directory" in ret.stdout) or
+ (ret.stderr != '') or
+ (ret.rc == 10)
+
+.. _override_the_changed_result:
+
+Defining "changed"
+==================
+
+Ansible lets you define when a particular task has "changed" a remote node using the ``changed_when`` conditional. This lets you determine, based on return codes or output, whether a change should be reported in Ansible statistics and whether a handler should be triggered or not. As with all conditionals in Ansible, lists of multiple ``changed_when`` conditions are joined with an implicit ``and``, meaning the task only reports a change when *all* conditions are met. If you want to report a change when any of the conditions is met, you must define the conditions in a string with an explicit ``or`` operator. For example::
+
+ tasks:
+
+ - name: Report 'changed' when the return code is not equal to 2
+ ansible.builtin.shell: /usr/bin/billybass --mode="take me to the river"
+ register: bass_result
+ changed_when: "bass_result.rc != 2"
+
+ - name: This will never report 'changed' status
+ ansible.builtin.shell: wall 'beep'
+ changed_when: False
+
+You can also combine multiple conditions to override "changed" result::
+
+ - name: Combine multiple conditions to override 'changed' result
+ ansible.builtin.command: /bin/fake_command
+ register: result
+ ignore_errors: True
+ changed_when:
+ - '"ERROR" in result.stderr'
+ - result.rc == 2
+
+See :ref:`controlling_what_defines_failure` for more conditional syntax examples.
+
+Ensuring success for command and shell
+======================================
+
+The :ref:`command <command_module>` and :ref:`shell <shell_module>` modules care about return codes, so if you have a command whose successful exit code is not zero, you can do this::
+
+ tasks:
+ - name: Run this command and ignore the result
+ ansible.builtin.shell: /usr/bin/somecommand || /bin/true
+
+
+Aborting a play on all hosts
+============================
+
+Sometimes you want a failure on a single host, or failures on a certain percentage of hosts, to abort the entire play on all hosts. You can stop play execution after the first failure happens with ``any_errors_fatal``. For finer-grained control, you can use ``max_fail_percentage`` to abort the run after a given percentage of hosts has failed.
+
+Aborting on the first error: any_errors_fatal
+---------------------------------------------
+
+If you set ``any_errors_fatal`` and a task returns an error, Ansible finishes the fatal task on all hosts in the current batch, then stops executing the play on all hosts. Subsequent tasks and plays are not executed. You can recover from fatal errors by adding a :ref:`rescue section <block_error_handling>` to the block. You can set ``any_errors_fatal`` at the play or block level::
+
+ - hosts: somehosts
+ any_errors_fatal: true
+ roles:
+ - myrole
+
+ - hosts: somehosts
+ tasks:
+ - block:
+ - include_tasks: mytasks.yml
+ any_errors_fatal: true
+
+You can use this feature when all tasks must be 100% successful to continue playbook execution. For example, if you run a service on machines in multiple data centers with load balancers to pass traffic from users to the service, you want all load balancers to be disabled before you stop the service for maintenance. To ensure that any failure in the task that disables the load balancers will stop all other tasks::
+
+ ---
+ - hosts: load_balancers_dc_a
+ any_errors_fatal: true
+
+ tasks:
+ - name: Shut down datacenter 'A'
+ ansible.builtin.command: /usr/bin/disable-dc
+
+ - hosts: frontends_dc_a
+
+ tasks:
+ - name: Stop service
+ ansible.builtin.command: /usr/bin/stop-software
+
+ - name: Update software
+ ansible.builtin.command: /usr/bin/upgrade-software
+
+ - hosts: load_balancers_dc_a
+
+ tasks:
+ - name: Start datacenter 'A'
+ ansible.builtin.command: /usr/bin/enable-dc
+
+In this example Ansible starts the software upgrade on the front ends only if all of the load balancers are successfully disabled.
+
+.. _maximum_failure_percentage:
+
+Setting a maximum failure percentage
+------------------------------------
+
+By default, Ansible continues to execute tasks as long as there are hosts that have not yet failed. In some situations, such as when executing a rolling update, you may want to abort the play when a certain threshold of failures has been reached. To achieve this, you can set a maximum failure percentage on a play::
+
+ ---
+ - hosts: webservers
+ max_fail_percentage: 30
+ serial: 10
+
+The ``max_fail_percentage`` setting applies to each batch when you use it with :ref:`serial <rolling_update_batch_size>`. In the example above, if more than 3 of the 10 servers in the first (or any) batch of servers failed, the rest of the play would be aborted.
+
+.. note::
+
+ The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort the play when 2 of the systems failed, set the max_fail_percentage at 49 rather than 50.
+
+Controlling errors in blocks
+============================
+
+You can also use blocks to define responses to task errors. This approach is similar to exception handling in many programming languages. See :ref:`block_error_handling` for details and examples.
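+
+For example, a minimal sketch of a ``block`` with a ``rescue`` section that recovers from a failed task::
+
+    - name: Attempt a task and recover on failure
+      block:
+        - name: Run a command that may fail
+          ansible.builtin.command: "false"
+      rescue:
+        # the rescue tasks run only if a task in the block fails
+        - name: Handle the failure
+          ansible.builtin.debug:
+            msg: "The task in the block failed; running recovery steps instead."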
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ :ref:`playbooks_conditionals`
+ Conditional statements in playbooks
+ :ref:`playbooks_variables`
+ All about variables
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_filters.rst b/docs/docsite/rst/user_guide/playbooks_filters.rst
new file mode 100644
index 00000000..f009900a
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_filters.rst
@@ -0,0 +1,1696 @@
+.. _playbooks_filters:
+
+********************************
+Using filters to manipulate data
+********************************
+
+Filters let you transform JSON data into YAML data, split a URL to extract the hostname, get the SHA1 hash of a string, add or multiply integers, and much more. You can use the Ansible-specific filters documented here to manipulate your data, or use any of the standard filters shipped with Jinja2 - see the list of :ref:`built-in filters <jinja2:builtin-filters>` in the official Jinja2 template documentation. You can also use :ref:`Python methods <jinja2:python-methods>` to transform data. You can :ref:`create custom Ansible filters as plugins <developing_filter_plugins>`, though we generally welcome new filters into the ansible-base repo so everyone can use them.
+
+Because templating happens on the Ansible controller, **not** on the target host, filters execute on the controller and transform data locally.
+
+.. contents::
+ :local:
+
+Handling undefined variables
+============================
+
+Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the ``mandatory`` filter.
+
+.. _defaulting_undefined_variables:
+
+Providing default values
+------------------------
+
+You can provide default values for variables directly in your templates using the Jinja2 'default' filter. This is often a better approach than failing if a variable is not defined::
+
+ {{ some_variable | default(5) }}
+
+In the above example, if the variable 'some_variable' is not defined, Ansible uses the default value 5, rather than raising an "undefined variable" error and failing. If you are working within a role, you can also add a ``defaults/main.yml`` to define the default values for variables in your role.
+
+Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use
+a default with a value in a nested data structure (in other words, :code:`{{ foo.bar.baz | default('DEFAULT') }}`) when you do not know if the intermediate values are defined.
+
+If you want to use the default value when variables evaluate to false or an empty string you have to set the second parameter to ``true``::
+
+ {{ lookup('env', 'MY_USER') | default('admin', true) }}
+
+.. _omitting_undefined_variables:
+
+Making variables optional
+-------------------------
+
+By default Ansible requires values for all variables in a templated expression. However, you can make specific variables optional. For example, you might want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable ``omit``::
+
+ - name: Touch files with an optional mode
+ ansible.builtin.file:
+ dest: "{{ item.path }}"
+ state: touch
+ mode: "{{ item.mode | default(omit) }}"
+ loop:
+ - path: /tmp/foo
+ - path: /tmp/bar
+ - path: /tmp/baz
+ mode: "0444"
+
+In this example, the default mode for the files ``/tmp/foo`` and ``/tmp/bar`` is determined by the umask of the system. Ansible does not send a value for ``mode``. Only the third file, ``/tmp/baz``, receives the ``mode=0444`` option.
+
+.. note:: If you are "chaining" additional filters after the ``default(omit)`` filter, you should instead do something like this:
+ ``"{{ foo | default(None) | some_filter or omit }}"``. In this example, the default ``None`` (Python null) value will cause the later filters to fail, which will trigger the ``or omit`` portion of the logic. Using ``omit`` in this manner is very specific to the later filters you are chaining though, so be prepared for some trial and error if you do this.
+
+.. _forcing_variables_to_be_defined:
+
+Defining mandatory values
+-------------------------
+
+If you configure Ansible to ignore undefined variables, you may want to define some values as mandatory. By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting :ref:`DEFAULT_UNDEFINED_VAR_BEHAVIOR` to ``false``. In that case, you may want to require some variables to be defined. You can do this with::
+
+ {{ variable | mandatory }}
+
+The variable value will be used as is, but the template evaluation will raise an error if it is undefined.
+
+Defining different values for true/false/null (ternary)
+=======================================================
+
+You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9)::
+
+ {{ (status == 'needs_restart') | ternary('restart', 'continue') }}
+
+In addition, you can define one value to use when the test returns true, one value for false, and a third value for null (new in version 2.8)::
+
+ {{ enabled | ternary('no shutdown', 'shutdown', omit) }}
+
+Managing data types
+===================
+
+You might need to know, change, or set the data type on a variable. For example, a registered variable might contain a dictionary when your next task needs a list, or a user :ref:`prompt <playbooks_prompts>` might return a string when your playbook needs a boolean value. Use the ``type_debug``, ``dict2items``, and ``items2dict`` filters to manage data types. You can also use the data type itself to cast a value as a specific data type.
+
+Discovering the data type
+-------------------------
+
+.. versionadded:: 2.3
+
+If you are unsure of the underlying Python type of a variable, you can use the ``type_debug`` filter to display it. This is useful in debugging when you need a particular type of variable::
+
+ {{ myvar | type_debug }}
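+
+For example, a minimal sketch (assuming a previously registered variable named ``result``) that prints the data type from a task::
+
+    # prints the underlying Python type of the registered variable
+    - name: Show the data type of a registered result
+      ansible.builtin.debug:
+        msg: "{{ result | type_debug }}"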
+
+
+.. _dict_filter:
+
+Transforming dictionaries into lists
+------------------------------------
+
+.. versionadded:: 2.6
+
+
+Use the ``dict2items`` filter to transform a dictionary into a list of items suitable for :ref:`looping <playbooks_loops>`::
+
+ {{ dict | dict2items }}
+
+Dictionary data (before applying the ``dict2items`` filter)::
+
+ tags:
+ Application: payment
+ Environment: dev
+
+List data (after applying the ``dict2items`` filter)::
+
+ - key: Application
+ value: payment
+ - key: Environment
+ value: dev
+
+.. versionadded:: 2.8
+
+The ``dict2items`` filter is the reverse of the ``items2dict`` filter.
+
+If you want to configure the names of the keys, the ``dict2items`` filter accepts 2 keyword arguments. Pass the ``key_name`` and ``value_name`` arguments to configure the names of the keys in the list output::
+
+ {{ files | dict2items(key_name='file', value_name='path') }}
+
+Dictionary data (before applying the ``dict2items`` filter)::
+
+ files:
+ users: /etc/passwd
+ groups: /etc/group
+
+List data (after applying the ``dict2items`` filter)::
+
+ - file: users
+ path: /etc/passwd
+ - file: groups
+ path: /etc/group
+
+
+Transforming lists into dictionaries
+------------------------------------
+
+.. versionadded:: 2.7
+
+Use the ``items2dict`` filter to transform a list into a dictionary, mapping the content into ``key: value`` pairs::
+
+ {{ tags | items2dict }}
+
+List data (before applying the ``items2dict`` filter)::
+
+ tags:
+ - key: Application
+ value: payment
+ - key: Environment
+ value: dev
+
+Dictionary data (after applying the ``items2dict`` filter)::
+
+ Application: payment
+ Environment: dev
+
+The ``items2dict`` filter is the reverse of the ``dict2items`` filter.
+
+Not all lists use ``key`` to designate keys and ``value`` to designate values. For example::
+
+ fruits:
+ - fruit: apple
+ color: red
+ - fruit: pear
+ color: yellow
+ - fruit: grapefruit
+ color: yellow
+
+In this example, you must pass the ``key_name`` and ``value_name`` arguments to configure the transformation. For example::
+
+ {{ tags | items2dict(key_name='fruit', value_name='color') }}
+
+If you do not pass these arguments, or do not pass the correct values for your list, you will see ``KeyError: key`` or ``KeyError: my_typo``.
+
+Forcing the data type
+---------------------
+
+You can cast values as certain types. For example, if you expect the input "True" from a :ref:`vars_prompt <playbooks_prompts>` and you want Ansible to recognize it as a boolean value instead of a string::
+
+ - debug:
+ msg: test
+ when: some_string_value | bool
+
+If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string::
+
+ - shell: echo "only on Red Hat 6, derivatives, and later"
+ when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
+
+
+.. versionadded:: 1.6
+
+.. _filters_for_formatting_data:
+
+Formatting data: YAML and JSON
+==============================
+
+You can switch a data structure in a template from or to JSON or YAML format, with options for formatting, indenting, and loading data. The basic filters are occasionally useful for debugging::
+
+ {{ some_variable | to_json }}
+ {{ some_variable | to_yaml }}
+
+For human readable output, you can use::
+
+ {{ some_variable | to_nice_json }}
+ {{ some_variable | to_nice_yaml }}
+
+You can change the indentation of either format::
+
+ {{ some_variable | to_nice_json(indent=2) }}
+ {{ some_variable | to_nice_yaml(indent=8) }}
+
+The ``to_yaml`` and ``to_nice_yaml`` filters use the `PyYAML library`_, which has a default 80-character line-length limit. That causes an unexpected line break after the 80th character (if there is a space after the 80th character).
+To avoid this behavior and generate long lines, use the ``width`` option. You must use a hardcoded number to define the width, instead of a construction like ``float("inf")``, because the filter does not support proxying Python functions. For example::
+
+ {{ some_variable | to_yaml(indent=8, width=1337) }}
+ {{ some_variable | to_nice_yaml(indent=8, width=1337) }}
+
+The filter does support passing through other YAML parameters. For a full list, see the `PyYAML documentation`_.
+
+If you are reading in some already formatted data::
+
+ {{ some_variable | from_json }}
+ {{ some_variable | from_yaml }}
+
+for example::
+
+ tasks:
+ - name: Register JSON output as a variable
+ ansible.builtin.shell: cat /some/path/to/file.json
+ register: result
+
+ - name: Set a variable
+ ansible.builtin.set_fact:
+ myvar: "{{ result.stdout | from_json }}"
+
+
+Filter `to_json` and Unicode support
+------------------------------------
+
+By default `to_json` and `to_nice_json` will convert data received to ASCII, so::
+
+ {{ 'München'| to_json }}
+
+will return::
+
+ 'M\u00fcnchen'
+
+To keep Unicode characters, pass the parameter `ensure_ascii=False` to the filter::
+
+ {{ 'München'| to_json(ensure_ascii=False) }}
+
+ 'München'
+
+.. versionadded:: 2.7
+
+To parse multi-document YAML strings, use the ``from_yaml_all`` filter, which returns a generator of parsed YAML documents.
+
+for example::
+
+ tasks:
+ - name: Register a file content as a variable
+ ansible.builtin.shell: cat /some/path/to/multidoc-file.yaml
+ register: result
+
+ - name: Print the transformed variable
+ ansible.builtin.debug:
+ msg: '{{ item }}'
+ loop: '{{ result.stdout | from_yaml_all | list }}'
+
+Combining and selecting data
+============================
+
+You can combine data from multiple sources and types, and select values from large data structures, giving you precise control over complex data.
+
+.. _zip_filter:
+
+Combining items from multiple lists: zip and zip_longest
+--------------------------------------------------------
+
+.. versionadded:: 2.3
+
+To get a list combining the elements of other lists use ``zip``::
+
+ - name: Give me list combo of two lists
+ ansible.builtin.debug:
+ msg: "{{ [1,2,3,4,5] | zip(['a','b','c','d','e','f']) | list }}"
+
+ - name: Give me shortest combo of two lists
+ ansible.builtin.debug:
+ msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}"
+
+To always exhaust all lists use ``zip_longest``::
+
+ - name: Give me longest combo of three lists , fill with X
+ ansible.builtin.debug:
+ msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}"
+
+Similarly to the output of the ``items2dict`` filter mentioned above, these filters can be used to construct a ``dict``::
+
+ {{ dict(keys_list | zip(values_list)) }}
+
+List data (before applying the ``zip`` filter)::
+
+ keys_list:
+ - one
+ - two
+ values_list:
+ - apple
+ - orange
+
+Dictionary data (after applying the ``zip`` filter)::
+
+ one: apple
+ two: orange
+
+Combining objects and subelements
+---------------------------------
+
+.. versionadded:: 2.7
+
+The ``subelements`` filter produces a product of an object and the subelement values of that object, similar to the ``subelements`` lookup. This lets you specify individual subelements to use in a template. For example, this expression::
+
+ {{ users | subelements('groups', skip_missing=True) }}
+
+Data before applying the ``subelements`` filter::
+
+ users:
+ - name: alice
+ authorized:
+ - /tmp/alice/onekey.pub
+ - /tmp/alice/twokey.pub
+ groups:
+ - wheel
+ - docker
+ - name: bob
+ authorized:
+ - /tmp/bob/id_rsa.pub
+ groups:
+ - docker
+
+Data after applying the ``subelements`` filter::
+
+ -
+ - name: alice
+ groups:
+ - wheel
+ - docker
+ authorized:
+ - /tmp/alice/onekey.pub
+ - /tmp/alice/twokey.pub
+ - wheel
+ -
+ - name: alice
+ groups:
+ - wheel
+ - docker
+ authorized:
+ - /tmp/alice/onekey.pub
+ - /tmp/alice/twokey.pub
+ - docker
+ -
+ - name: bob
+ authorized:
+ - /tmp/bob/id_rsa.pub
+ groups:
+ - docker
+ - docker
+
+You can use the transformed data with ``loop`` to iterate over the same subelement for multiple objects::
+
+ - name: Set authorized ssh key, extracting just that data from 'users'
+ ansible.posix.authorized_key:
+ user: "{{ item.0.name }}"
+ key: "{{ lookup('file', item.1) }}"
+ loop: "{{ users | subelements('authorized') }}"
+
+.. _combine_filter:
+
+Combining hashes/dictionaries
+-----------------------------
+
+.. versionadded:: 2.0
+
+The ``combine`` filter allows hashes to be merged. For example, the following would override keys in one hash::
+
+ {{ {'a':1, 'b':2} | combine({'b':3}) }}
+
+The resulting hash would be::
+
+ {'a':1, 'b':3}
+
+The filter can also take multiple arguments to merge::
+
+ {{ a | combine(b, c, d) }}
+ {{ [a, b, c, d] | combine }}
+
+In this case, keys in ``d`` would override those in ``c``, which would override those in ``b``, and so on.
+
+The filter also accepts two optional parameters: ``recursive`` and ``list_merge``.
+
+recursive
+ Is a boolean, default to ``False``.
+ Should the ``combine`` recursively merge nested hashes.
+ Note: It does **not** depend on the value of the ``hash_behaviour`` setting in ``ansible.cfg``.
+
+list_merge
+ Is a string, its possible values are ``replace`` (default), ``keep``, ``append``, ``prepend``, ``append_rp`` or ``prepend_rp``.
+ It modifies the behaviour of ``combine`` when the hashes to merge contain arrays/lists.
+
+.. code-block:: yaml
+
+ default:
+ a:
+ x: default
+ y: default
+ b: default
+ c: default
+ patch:
+ a:
+ y: patch
+ z: patch
+ b: patch
+
+If ``recursive=False`` (the default), nested hashes are not merged::
+
+ {{ default | combine(patch) }}
+
+This would result in::
+
+ a:
+ y: patch
+ z: patch
+ b: patch
+ c: default
+
+If ``recursive=True``, ``combine`` recurses into nested hashes and merges their keys::
+
+ {{ default | combine(patch, recursive=True) }}
+
+This would result in::
+
+ a:
+ x: default
+ y: patch
+ z: patch
+ b: patch
+ c: default
+
+If ``list_merge='replace'`` (the default), arrays from the right hash will "replace" the ones in the left hash::
+
+ default:
+ a:
+ - default
+ patch:
+ a:
+ - patch
+
+.. code-block:: jinja
+
+ {{ default | combine(patch) }}
+
+This would result in::
+
+ a:
+ - patch
+
+If ``list_merge='keep'``, arrays from the left hash will be kept::
+
+ {{ default | combine(patch, list_merge='keep') }}
+
+This would result in::
+
+ a:
+ - default
+
+If ``list_merge='append'``, arrays from the right hash will be appended to the ones in the left hash::
+
+ {{ default | combine(patch, list_merge='append') }}
+
+This would result in::
+
+ a:
+ - default
+ - patch
+
+If ``list_merge='prepend'``, arrays from the right hash will be prepended to the ones in the left hash::
+
+ {{ default | combine(patch, list_merge='prepend') }}
+
+This would result in::
+
+ a:
+ - patch
+ - default
+
+If ``list_merge='append_rp'``, arrays from the right hash will be appended to the ones in the left hash. Elements of arrays in the left hash that are also in the corresponding array of the right hash will be removed ("rp" stands for "remove present"). Duplicate elements that aren't in both hashes are kept::
+
+ default:
+ a:
+ - 1
+ - 1
+ - 2
+ - 3
+ patch:
+ a:
+ - 3
+ - 4
+ - 5
+ - 5
+
+.. code-block:: jinja
+
+ {{ default | combine(patch, list_merge='append_rp') }}
+
+This would result in::
+
+ a:
+ - 1
+ - 1
+ - 2
+ - 3
+ - 4
+ - 5
+ - 5
+
+If ``list_merge='prepend_rp'``, the behavior is similar to the one for ``append_rp``, but elements of arrays in the right hash are prepended::
+
+ {{ default | combine(patch, list_merge='prepend_rp') }}
+
+This would result in::
+
+ a:
+ - 3
+ - 4
+ - 5
+ - 5
+ - 1
+ - 1
+ - 2
+
+``recursive`` and ``list_merge`` can be used together::
+
+ default:
+ a:
+ a':
+ x: default_value
+ y: default_value
+ list:
+ - default_value
+ b:
+ - 1
+ - 1
+ - 2
+ - 3
+ patch:
+ a:
+ a':
+ y: patch_value
+ z: patch_value
+ list:
+ - patch_value
+ b:
+ - 3
+ - 4
+ - 4
+ - key: value
+
+.. code-block:: jinja
+
+ {{ default | combine(patch, recursive=True, list_merge='append_rp') }}
+
+This would result in::
+
+ a:
+ a':
+ x: default_value
+ y: patch_value
+ z: patch_value
+ list:
+ - default_value
+ - patch_value
+ b:
+ - 1
+ - 1
+ - 2
+ - 3
+ - 4
+ - 4
+ - key: value
+
+
+.. _extract_filter:
+
+Selecting values from arrays or hashtables
+-------------------------------------------
+
+.. versionadded:: 2.1
+
+The `extract` filter is used to map from a list of indices to a list of values from a container (hash or array)::
+
+ {{ [0,2] | map('extract', ['x','y','z']) | list }}
+ {{ ['x','y'] | map('extract', {'x': 42, 'y': 31}) | list }}
+
+The results of the above expressions would be::
+
+ ['x', 'z']
+ [42, 31]
+
+The filter can take another argument::
+
+ {{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | list }}
+
+This takes the list of hosts in group 'x', looks them up in `hostvars`, and then looks up the `ec2_ip_address` of the result. The final result is a list of IP addresses for the hosts in group 'x'.
+
+The third argument to the filter can also be a list, for a recursive lookup inside the container::
+
+ {{ ['a'] | map('extract', b, ['x','y']) | list }}
+
+This would return a list containing the value of `b['a']['x']['y']`.
+
+Combining lists
+---------------
+
+This set of filters returns a list of combined lists.
+
+
+permutations
+^^^^^^^^^^^^
+To get permutations of a list::
+
+ - name: Give me largest permutations (order matters)
+ ansible.builtin.debug:
+ msg: "{{ [1,2,3,4,5] | permutations | list }}"
+
+ - name: Give me permutations of sets of three
+ ansible.builtin.debug:
+ msg: "{{ [1,2,3,4,5] | permutations(3) | list }}"
+
+combinations
+^^^^^^^^^^^^
+Combinations always require a set size::
+
+ - name: Give me combinations for sets of two
+ ansible.builtin.debug:
+ msg: "{{ [1,2,3,4,5] | combinations(2) | list }}"
+
+Also see the :ref:`zip_filter`
+
+products
+^^^^^^^^
+The product filter returns the `cartesian product <https://docs.python.org/3/library/itertools.html#itertools.product>`_ of the input iterables. This is roughly equivalent to nested for-loops in a generator expression.
+
+For example::
+
+ - name: Generate multiple hostnames
+ ansible.builtin.debug:
+ msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}"
+
+This would result in::
+
+ { "msg": "foo.com,bar.com" }
+
+.. _json_query_filter:
+
+Selecting JSON data: JSON queries
+---------------------------------
+
+To select a single element or a data subset from a complex data structure in JSON format (for example, Ansible facts), use the ``json_query`` filter. The ``json_query`` filter lets you query a complex JSON structure and iterate over it using a loop structure.
+
+.. note::
+
+ This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
+
+
+.. note:: This filter is built upon **jmespath**, and you can use the same syntax. For examples, see `jmespath examples <http://jmespath.org/examples.html>`_.
+
+Consider this data structure::
+
+ {
+ "domain_definition": {
+ "domain": {
+ "cluster": [
+ {
+ "name": "cluster1"
+ },
+ {
+ "name": "cluster2"
+ }
+ ],
+ "server": [
+ {
+ "name": "server11",
+ "cluster": "cluster1",
+ "port": "8080"
+ },
+ {
+ "name": "server12",
+ "cluster": "cluster1",
+ "port": "8090"
+ },
+ {
+ "name": "server21",
+ "cluster": "cluster2",
+ "port": "9080"
+ },
+ {
+ "name": "server22",
+ "cluster": "cluster2",
+ "port": "9090"
+ }
+ ],
+ "library": [
+ {
+ "name": "lib1",
+ "target": "cluster1"
+ },
+ {
+ "name": "lib2",
+ "target": "cluster2"
+ }
+ ]
+ }
+ }
+ }
+
+To extract all clusters from this structure, you can use the following query::
+
+ - name: Display all cluster names
+ ansible.builtin.debug:
+ var: item
+ loop: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}"
+
+To extract all server names::
+
+ - name: Display all server names
+ ansible.builtin.debug:
+ var: item
+ loop: "{{ domain_definition | community.general.json_query('domain.server[*].name') }}"
+
+To extract ports from cluster1::
+
+    - name: Display all ports from cluster1
+      ansible.builtin.debug:
+ var: item
+ loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
+ vars:
+ server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port"
+
+.. note:: You can use a variable to make the query more readable.
+
+To print out the ports from cluster1 in a comma separated string::
+
+ - name: Display all ports from cluster1 as a string
+ ansible.builtin.debug:
+ msg: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}"
+
+.. note:: In the example above, quoting literals using backticks avoids escaping quotes and maintains readability.
+
+You can use YAML `single quote escaping <https://yaml.org/spec/current.html#id2534365>`_::
+
+ - name: Display all ports from cluster1
+ ansible.builtin.debug:
+ var: item
+ loop: "{{ domain_definition | community.general.json_query('domain.server[?cluster==''cluster1''].port') }}"
+
+.. note:: Escaping single quotes within single quotes in YAML is done by doubling the single quote.
+
+To get a hash map with all ports and names of a cluster::
+
+ - name: Display all server ports and names from cluster1
+ ansible.builtin.debug:
+ var: item
+ loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
+ vars:
+        server_name_cluster1_query: "domain.server[?cluster=='cluster1'].{name: name, port: port}"
+
+
+Randomizing data
+================
+
+When you need a randomly generated value, use one of these filters.
+
+
+.. _random_mac_filter:
+
+Random MAC addresses
+--------------------
+
+.. versionadded:: 2.6
+
+This filter can be used to generate a random MAC address from a string prefix.
+
+.. note::
+
+ This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
+
+To get a random MAC address from a string prefix starting with '52:54:00'::
+
+ "{{ '52:54:00' | community.general.random_mac }}"
+ # => '52:54:00:ef:1c:03'
+
+Note that if anything is wrong with the prefix string, the filter will issue an error.
+
+.. versionadded:: 2.9
+
+As of Ansible version 2.9, you can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses::
+
+ "{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}"
+
+
+.. _random_filter:
+
+Random items or numbers
+-----------------------
+
+The ``random`` filter in Ansible is an extension of the default Jinja2 random filter, and can be used to return a random item from a sequence of items or to generate a random number based on a range.
+
+To get a random item from a list::
+
+ "{{ ['a','b','c'] | random }}"
+ # => 'c'
+
+To get a random number between 0 and a specified number::
+
+ "{{ 60 | random }} * * * * root /script/from/cron"
+ # => '21 * * * * root /script/from/cron'
+
+To get a random number from 0 to 100 but in steps of 10::
+
+ {{ 101 | random(step=10) }}
+ # => 70
+
+To get a random number from 1 to 100 but in steps of 10::
+
+ {{ 101 | random(1, 10) }}
+ # => 31
+ {{ 101 | random(start=1, step=10) }}
+ # => 51
+
+You can initialize the random number generator from a seed to create random-but-idempotent numbers::
+
+ "{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron"
+
+Shuffling a list
+----------------
+
+The ``shuffle`` filter randomizes an existing list, giving a different order every invocation.
+
+To get a random list from an existing list::
+
+ {{ ['a','b','c'] | shuffle }}
+ # => ['c','a','b']
+ {{ ['a','b','c'] | shuffle }}
+ # => ['b','c','a']
+
+You can initialize the shuffle generator from a seed to generate a random-but-idempotent order::
+
+ {{ ['a','b','c'] | shuffle(seed=inventory_hostname) }}
+ # => ['b','a','c']
+
+The shuffle filter returns a list whenever possible. If you use it with a non 'listable' item, the filter does nothing.
+
+
+.. _list_filters:
+
+Managing list variables
+=======================
+
+You can search for the minimum or maximum value in a list, or flatten a multi-level list.
+
+To get the minimum value from a list of numbers::
+
+ {{ list1 | min }}
+
+To get the maximum value from a list of numbers::
+
+ {{ [3, 4, 2] | max }}
+
+.. versionadded:: 2.5
+
+Flatten a list (same thing the `flatten` lookup does)::
+
+ {{ [3, [4, 2] ] | flatten }}
+
+Flatten only the first level of a list (akin to the `items` lookup)::
+
+ {{ [3, [4, [2]] ] | flatten(levels=1) }}
+
+
+.. _set_theory_filters:
+
+Selecting from sets or lists (set theory)
+=========================================
+
+You can select or combine items from sets or lists.
+
+.. versionadded:: 1.4
+
+To get a unique set from a list::
+
+ # list1: [1, 2, 5, 1, 3, 4, 10]
+ {{ list1 | unique }}
+ # => [1, 2, 5, 3, 4, 10]
+
+To get a union of two lists::
+
+ # list1: [1, 2, 5, 1, 3, 4, 10]
+ # list2: [1, 2, 3, 4, 5, 11, 99]
+ {{ list1 | union(list2) }}
+ # => [1, 2, 5, 1, 3, 4, 10, 11, 99]
+
+To get the intersection of 2 lists (unique list of all items in both)::
+
+ # list1: [1, 2, 5, 3, 4, 10]
+ # list2: [1, 2, 3, 4, 5, 11, 99]
+ {{ list1 | intersect(list2) }}
+ # => [1, 2, 5, 3, 4]
+
+To get the difference of 2 lists (items in 1 that don't exist in 2)::
+
+ # list1: [1, 2, 5, 1, 3, 4, 10]
+ # list2: [1, 2, 3, 4, 5, 11, 99]
+ {{ list1 | difference(list2) }}
+ # => [10]
+
+To get the symmetric difference of 2 lists (items exclusive to each list)::
+
+ # list1: [1, 2, 5, 1, 3, 4, 10]
+ # list2: [1, 2, 3, 4, 5, 11, 99]
+ {{ list1 | symmetric_difference(list2) }}
+ # => [10, 11, 99]
+
+.. _math_stuff:
+
+Calculating numbers (math)
+==========================
+
+.. versionadded:: 1.9
+
+You can calculate logs, powers, and roots of numbers with Ansible filters. Jinja2 provides other mathematical functions like abs() and round().
+
+Get the logarithm (default is e)::
+
+ {{ myvar | log }}
+
+Get the base 10 logarithm::
+
+ {{ myvar | log(10) }}
+
+Give me the power of 2! (or 5)::
+
+ {{ myvar | pow(2) }}
+ {{ myvar | pow(5) }}
+
+Square root, or the 5th::
+
+ {{ myvar | root }}
+ {{ myvar | root(5) }}
+
+
+Managing network interactions
+=============================
+
+These filters help you with common network tasks.
+
+.. note::
+
+ These filters have migrated to the `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_ collection. Follow the installation instructions to install that collection.
+
+.. _ipaddr_filter:
+
+IP address filters
+------------------
+
+.. versionadded:: 1.9
+
+To test if a string is a valid IP address::
+
+ {{ myvar | ansible.netcommon.ipaddr }}
+
+You can also require a specific IP protocol version::
+
+ {{ myvar | ansible.netcommon.ipv4 }}
+ {{ myvar | ansible.netcommon.ipv6 }}
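+
+Because these filters return the input value when it is valid and ``False`` otherwise, they can also be used to validate variables before acting on them. A minimal sketch, assuming a hypothetical ``vpn_endpoint`` variable:
+
+.. code-block:: yaml
+
+    - name: Fail early if the configured endpoint is not a valid IPv4 address
+      ansible.builtin.assert:
+        that:
+          - vpn_endpoint | ansible.netcommon.ipv4
+        fail_msg: "vpn_endpoint must be a valid IPv4 address"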
+
+The IP address filter can also be used to extract specific information from an IP
+address. For example, to get the IP address itself from a CIDR, you can use::
+
+ {{ '192.0.2.1/24' | ansible.netcommon.ipaddr('address') }}
+
+More information about the ``ipaddr`` filter and a complete usage guide can be found
+in :ref:`playbooks_filters_ipaddr`.
+
+.. _network_filters:
+
+Network CLI filters
+-------------------
+
+.. versionadded:: 2.4
+
+To convert the output of a network device CLI command into structured JSON
+output, use the ``parse_cli`` filter::
+
+ {{ output | ansible.netcommon.parse_cli('path/to/spec') }}
+
+The ``parse_cli`` filter loads the spec file and passes the command output
+through it, returning JSON output. The spec file is a YAML document that defines how to parse the
+CLI output and map it into the returned JSON data.
+
+Below is an example of a valid spec file that
+will parse the output from the ``show vlan`` command.
+
+.. code-block:: yaml
+
+ ---
+ vars:
+ vlan:
+ vlan_id: "{{ item.vlan_id }}"
+ name: "{{ item.name }}"
+ enabled: "{{ item.state != 'act/lshut' }}"
+ state: "{{ item.state }}"
+
+ keys:
+ vlans:
+ value: "{{ vlan }}"
+ items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
+ state_static:
+ value: present
+
+
+The spec file above will return a JSON data structure that is a list of hashes
+with the parsed VLAN information.
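+
+In a playbook, you would typically register the raw command output and apply the filter to the registered value. The following sketch assumes a Cisco IOS device managed with ``cisco.ios.ios_command`` and a spec file saved as ``specs/show_vlan.yaml``; both names are illustrative:
+
+.. code-block:: yaml
+
+    - name: Run show vlan on the device
+      cisco.ios.ios_command:
+        commands: show vlan
+      register: vlan_output
+
+    - name: Parse the command output using the spec file
+      ansible.builtin.set_fact:
+        parsed_vlans: "{{ vlan_output.stdout[0] | ansible.netcommon.parse_cli('specs/show_vlan.yaml') }}"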
+
+The same command could be parsed into a hash by using the key and values
+directives. Here is an example of how to parse the output into a hash
+value using the same ``show vlan`` command.
+
+.. code-block:: yaml
+
+ ---
+ vars:
+ vlan:
+ key: "{{ item.vlan_id }}"
+ values:
+ vlan_id: "{{ item.vlan_id }}"
+ name: "{{ item.name }}"
+ enabled: "{{ item.state != 'act/lshut' }}"
+ state: "{{ item.state }}"
+
+ keys:
+ vlans:
+ value: "{{ vlan }}"
+ items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
+ state_static:
+ value: present
+
+Another common use case for parsing CLI commands is to break large command output
+into blocks that can be parsed individually. Use the ``start_block`` and
+``end_block`` directives to mark where each block starts and ends.
+
+.. code-block:: yaml
+
+ ---
+ vars:
+ interface:
+ name: "{{ item[0].match[0] }}"
+ state: "{{ item[1].state }}"
+ mode: "{{ item[2].match[0] }}"
+
+ keys:
+ interfaces:
+ value: "{{ interface }}"
+ start_block: "^Ethernet.*$"
+ end_block: "^$"
+ items:
+ - "^(?P<name>Ethernet\\d\\/\\d*)"
+ - "admin state is (?P<state>.+),"
+ - "Port mode is (.+)"
+
+
+The example above will parse the output of ``show interface`` into a list of
+hashes.
+
+The network filters also support parsing the output of a CLI command using the
+TextFSM library. To parse the CLI output with TextFSM use the following
+filter::
+
+ {{ output.stdout[0] | ansible.netcommon.parse_cli_textfsm('path/to/fsm') }}
+
+Use of the TextFSM filter requires the TextFSM library to be installed.
+
+Network XML filters
+-------------------
+
+.. versionadded:: 2.5
+
+To convert the XML output of a network device command into structured JSON
+output, use the ``parse_xml`` filter::
+
+ {{ output | ansible.netcommon.parse_xml('path/to/spec') }}
+
+The ``parse_xml`` filter will load the spec file and pass the command output
+through it, returning JSON output.
+
+The spec file should be valid formatted YAML. It defines how to parse the XML
+output and return JSON data.
+
+Below is an example of a valid spec file that
+will parse the output from the ``show vlan | display xml`` command.
+
+.. code-block:: yaml
+
+ ---
+ vars:
+ vlan:
+ vlan_id: "{{ item.vlan_id }}"
+ name: "{{ item.name }}"
+ desc: "{{ item.desc }}"
+ enabled: "{{ item.state.get('inactive') != 'inactive' }}"
+ state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
+
+ keys:
+ vlans:
+ value: "{{ vlan }}"
+ top: configuration/vlans/vlan
+ items:
+ vlan_id: vlan-id
+ name: name
+ desc: description
+ state: ".[@inactive='inactive']"
+
+
+The spec file above will return a JSON data structure that is a list of hashes
+with the parsed VLAN information.
+
+The same command could be parsed into a hash by using the key and values
+directives. Here is an example of how to parse the output into a hash
+value using the same ``show vlan | display xml`` command.
+
+.. code-block:: yaml
+
+ ---
+ vars:
+ vlan:
+ key: "{{ item.vlan_id }}"
+ values:
+ vlan_id: "{{ item.vlan_id }}"
+ name: "{{ item.name }}"
+ desc: "{{ item.desc }}"
+ enabled: "{{ item.state.get('inactive') != 'inactive' }}"
+ state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
+
+ keys:
+ vlans:
+ value: "{{ vlan }}"
+ top: configuration/vlans/vlan
+ items:
+ vlan_id: vlan-id
+ name: name
+ desc: description
+ state: ".[@inactive='inactive']"
+
+
+The value of ``top`` is the XPath relative to the XML root node.
+In the example XML output given below, the value of ``top`` is ``configuration/vlans/vlan``,
+which is an XPath expression relative to the root node (<rpc-reply>).
+``configuration`` in the value of ``top`` is the outer most container node, and ``vlan``
+is the inner-most container node.
+
+``items`` is a dictionary of key-value pairs that map user-defined names to XPath expressions
+that select elements. Each XPath expression is relative to the XPath value contained in ``top``.
+For example, ``vlan_id`` in the spec file is a user-defined name and its value ``vlan-id`` is an
+XPath expression relative to the value of ``top``.
+
+Attributes of XML tags can be extracted using XPath expressions. The value of ``state`` in the spec
+is an XPath expression used to get the attributes of the ``vlan`` tag in output XML.::
+
+ <rpc-reply>
+ <configuration>
+ <vlans>
+ <vlan inactive="inactive">
+ <name>vlan-1</name>
+ <vlan-id>200</vlan-id>
+ <description>This is vlan-1</description>
+ </vlan>
+ </vlans>
+ </configuration>
+ </rpc-reply>
+
+.. note::
+ For more information on supported XPath expressions, see `XPath Support <https://docs.python.org/2/library/xml.etree.elementtree.html#xpath-support>`_.
+
+Network VLAN filters
+--------------------
+
+.. versionadded:: 2.8
+
+Use the ``vlan_parser`` filter to transform an unsorted list of VLAN integers into a
+sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties:
+
+* VLANs are listed in ascending order.
+* Three or more consecutive VLANs are listed with a dash.
+* The first line of the list can be ``first_line_len`` characters long.
+* Subsequent list lines can be ``other_line_len`` characters long.
+
+To sort a VLAN list::
+
+ {{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | ansible.netcommon.vlan_parser }}
+
+This example renders the following sorted list::
+
+ ['100,1688,3002-3005,3999']
+
+
+Another example Jinja template::
+
+ {% set parsed_vlans = vlans | ansible.netcommon.vlan_parser %}
+ switchport trunk allowed vlan {{ parsed_vlans[0] }}
+ {% for i in range (1, parsed_vlans | count) %}
+ switchport trunk allowed vlan add {{ parsed_vlans[i] }}
+ {% endfor %}
+
+This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration.
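+
+To render a template like this from a playbook, you can pass the raw VLAN list in as a variable. A minimal sketch, with illustrative file names:
+
+.. code-block:: yaml
+
+    - name: Render the allowed VLAN configuration
+      vars:
+        vlans: [3003, 3004, 3005, 100, 1688, 3002, 3999]
+      ansible.builtin.template:
+        src: trunk_vlans.j2
+        dest: /tmp/trunk_vlans.cfg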
+
+
+.. _hash_filters:
+
+Encrypting and checksumming strings and passwords
+=================================================
+
+.. versionadded:: 1.9
+
+To get the sha1 hash of a string::
+
+ {{ 'test1' | hash('sha1') }}
+
+To get the md5 hash of a string::
+
+ {{ 'test1' | hash('md5') }}
+
+Get a string checksum::
+
+ {{ 'test2' | checksum }}
+
+Other hashes (platform dependent)::
+
+ {{ 'test2' | hash('blowfish') }}
+
+To get a sha512 password hash (random salt)::
+
+ {{ 'passwordsaresecret' | password_hash('sha512') }}
+
+To get a sha256 password hash with a specific salt::
+
+ {{ 'secretpassword' | password_hash('sha256', 'mysecretsalt') }}
+
+An idempotent method to generate unique hashes per system is to use a salt that is consistent between runs::
+
+ {{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }}
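+
+This pattern is commonly combined with the ``user`` module, so that repeated playbook runs produce the same hash and do not report a change. A minimal sketch (the user name is only an example):
+
+.. code-block:: yaml
+
+    - name: Create a user with an idempotent password hash
+      ansible.builtin.user:
+        name: deploy
+        password: "{{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }}"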
+
+The hash types available depend on the system running Ansible (the control node): 'hash' depends on hashlib, and password_hash depends on passlib (https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html).
+
+.. versionadded:: 2.7
+
+Some hash types allow providing a rounds parameter::
+
+ {{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=10000) }}
+
+.. _other_useful_filters:
+
+Manipulating text
+=================
+
+Several filters work with text, including URLs, file names, and path names.
+
+.. _comment_filter:
+
+Adding comments to files
+------------------------
+
+The ``comment`` filter lets you create comments in a file from text in a template, with a variety of comment styles. By default Ansible uses ``#`` to start a comment line and adds a blank comment line above and below your comment text. For example the following::
+
+ {{ "Plain style (default)" | comment }}
+
+produces this output:
+
+.. code-block:: text
+
+ #
+ # Plain style (default)
+ #
+
+Ansible offers styles for comments in C (``//...``), C block
+(``/*...*/``), Erlang (``%...``) and XML (``<!--...-->``)::
+
+ {{ "C style" | comment('c') }}
+ {{ "C block style" | comment('cblock') }}
+ {{ "Erlang style" | comment('erlang') }}
+ {{ "XML style" | comment('xml') }}
+
+You can define a custom comment character. This filter::
+
+ {{ "My Special Case" | comment(decoration="! ") }}
+
+produces:
+
+.. code-block:: text
+
+ !
+ ! My Special Case
+ !
+
+You can fully customize the comment style::
+
+ {{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }}
+
+That creates the following output:
+
+.. code-block:: text
+
+ #######
+ #
+ # Custom style
+ #
+ #######
+ ###
+ #
+
+The filter can also be applied to any Ansible variable. For example to
+make the output of the ``ansible_managed`` variable more readable, we can
+change the definition in the ``ansible.cfg`` file to this:
+
+.. code-block:: ini
+
+ [defaults]
+
+ ansible_managed = This file is managed by Ansible.%n
+ template: {file}
+ date: %Y-%m-%d %H:%M:%S
+ user: {uid}
+ host: {host}
+
+and then use the variable with the `comment` filter::
+
+ {{ ansible_managed | comment }}
+
+which produces this output:
+
+.. code-block:: sh
+
+ #
+ # This file is managed by Ansible.
+ #
+ # template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2
+ # date: 2015-09-10 11:02:58
+ # user: ansible
+ # host: myhost
+ #
+
+Splitting URLs
+--------------
+
+.. versionadded:: 2.4
+
+The ``urlsplit`` filter extracts the fragment, hostname, netloc, password, path, port, query, scheme, and username from a URL. With no arguments, returns a dictionary of all the fields::
+
+ {{ "http://user:password@www.acme.com:9000/dir/index.html?query=term#fragment" | urlsplit('hostname') }}
+ # => 'www.acme.com'
+
+ {{ "http://user:password@www.acme.com:9000/dir/index.html?query=term#fragment" | urlsplit('netloc') }}
+ # => 'user:password@www.acme.com:9000'
+
+ {{ "http://user:password@www.acme.com:9000/dir/index.html?query=term#fragment" | urlsplit('username') }}
+ # => 'user'
+
+ {{ "http://user:password@www.acme.com:9000/dir/index.html?query=term#fragment" | urlsplit('password') }}
+ # => 'password'
+
+ {{ "http://user:password@www.acme.com:9000/dir/index.html?query=term#fragment" | urlsplit('path') }}
+ # => '/dir/index.html'
+
+ {{ "http://user:password@www.acme.com:9000/dir/index.html?query=term#fragment" | urlsplit('port') }}
+ # => '9000'
+
+ {{ "http://user:password@www.acme.com:9000/dir/index.html?query=term#fragment" | urlsplit('scheme') }}
+ # => 'http'
+
+ {{ "http://user:password@www.acme.com:9000/dir/index.html?query=term#fragment" | urlsplit('query') }}
+ # => 'query=term'
+
+ {{ "http://user:password@www.acme.com:9000/dir/index.html?query=term#fragment" | urlsplit('fragment') }}
+ # => 'fragment'
+
+ {{ "http://user:password@www.acme.com:9000/dir/index.html?query=term#fragment" | urlsplit }}
+ # =>
+ # {
+ # "fragment": "fragment",
+ # "hostname": "www.acme.com",
+ # "netloc": "user:password@www.acme.com:9000",
+ # "password": "password",
+ # "path": "/dir/index.html",
+ # "port": 9000,
+ # "query": "query=term",
+ # "scheme": "http",
+ # "username": "user"
+ # }
+
+Searching strings with regular expressions
+------------------------------------------
+
+To search a string with a regex, use the "regex_search" filter::
+
+ # search for "foo" in "foobar"
+ {{ 'foobar' | regex_search('(foo)') }}
+
+ # will return empty if it cannot find a match
+ {{ 'ansible' | regex_search('(foobar)') }}
+
+ # case insensitive search in multiline mode
+ {{ 'foo\nBAR' | regex_search("^bar", multiline=True, ignorecase=True) }}
+
+
+To search for all occurrences of regex matches, use the "regex_findall" filter::
+
+ # Return a list of all IPv4 addresses in the string
+ {{ 'Some DNS servers are 8.8.8.8 and 8.8.4.4' | regex_findall('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}
+
+
+To replace text in a string with regex, use the "regex_replace" filter::
+
+ # convert "ansible" to "able"
+ {{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}
+
+ # convert "foobar" to "bar"
+ {{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
+
+ # convert "localhost:80" to "localhost, 80" using named groups
+ {{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }}
+
+ # convert "localhost:80" to "localhost"
+ {{ 'localhost:80' | regex_replace(':80') }}
+
+ # change a multiline string
+ {{ var | regex_replace('^', '#CommentThis#', multiline=True) }}
+
+.. note::
+   If you want to match the whole string and you are using ``*``, make sure to always wrap your regular expression in start/end anchors. For example ``^(.*)$`` will always match only one result, while ``(.*)`` on some Python versions will match the whole string and an empty string at the end, which means it will make two replacements::
+
+ # add "https://" prefix to each item in a list
+ GOOD:
+ {{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }}
+ {{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }}
+ {{ hosts | map('regex_replace', '^', 'https://') | list }}
+
+ BAD:
+ {{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }}
+
+ # append ':80' to each item in a list
+ GOOD:
+ {{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }}
+ {{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }}
+ {{ hosts | map('regex_replace', '$', ':80') | list }}
+
+ BAD:
+ {{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }}
+
+.. note::
+   Prior to Ansible 2.0, if the "regex_replace" filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments), then you needed to escape backreferences (for example, ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
+
+.. versionadded:: 2.0
+
+To escape special characters within a standard Python regex, use the "regex_escape" filter (using the default re_type='python' option)::
+
+ # convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
+ {{ '^f.*o(.*)$' | regex_escape() }}
+
+.. versionadded:: 2.8
+
+To escape special characters within a POSIX basic regex, use the "regex_escape" filter with the re_type='posix_basic' option::
+
+ # convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$'
+ {{ '^f.*o(.*)$' | regex_escape('posix_basic') }}
+
+
+Managing file names and path names
+----------------------------------
+
+To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt'::
+
+ {{ path | basename }}
+
+To get the last name of a windows style file path (new in version 2.0)::
+
+ {{ path | win_basename }}
+
+To separate the windows drive letter from the rest of a file path (new in version 2.0)::
+
+ {{ path | win_splitdrive }}
+
+To get only the windows drive letter::
+
+ {{ path | win_splitdrive | first }}
+
+To get the rest of the path without the drive letter::
+
+ {{ path | win_splitdrive | last }}
+
+To get the directory from a path::
+
+ {{ path | dirname }}
+
+To get the directory from a windows path (new version 2.0)::
+
+ {{ path | win_dirname }}
+
+To expand a path containing a tilde (`~`) character (new in version 1.5)::
+
+ {{ path | expanduser }}
+
+To expand a path containing environment variables::
+
+ {{ path | expandvars }}
+
+.. note:: `expandvars` expands local variables; using it on remote paths can lead to errors.
+
+.. versionadded:: 2.6
+
+To get the real path of a link (new in version 1.8)::
+
+ {{ path | realpath }}
+
+To get the relative path of a link, from a start point (new in version 1.7)::
+
+ {{ path | relpath('/etc') }}
+
+To get the root and extension of a path or file name (new in version 2.0)::
+
+ # with path == 'nginx.conf' the return would be ('nginx', '.conf')
+ {{ path | splitext }}
+
+The ``splitext`` filter returns the root and the extension as a pair. The individual components can be accessed by using the ``first`` and ``last`` filters::
+
+ # with path == 'nginx.conf' the return would be 'nginx'
+ {{ path | splitext | first }}
+
+ # with path == 'nginx.conf' the return would be '.conf'
+ {{ path | splitext | last }}
+
+To join one or more path components::
+
+ {{ ('/etc', path, 'subdir', file) | path_join }}
+
+.. versionadded:: 2.10
+
+Manipulating strings
+====================
+
+To add quotes for shell usage::
+
+ - name: Run a shell command
+ ansible.builtin.shell: echo {{ string_value | quote }}
+
+To concatenate a list into a string::
+
+ {{ list | join(" ") }}
+
+To work with Base64 encoded strings::
+
+ {{ encoded | b64decode }}
+ {{ decoded | string | b64encode }}
+
+As of version 2.6, you can define the type of encoding to use, the default is ``utf-8``::
+
+ {{ encoded | b64decode(encoding='utf-16-le') }}
+ {{ decoded | string | b64encode(encoding='utf-16-le') }}
+
+.. note:: The ``string`` filter is only required for Python 2 and ensures that text to encode is a unicode string. Without that filter before b64encode the wrong value will be encoded.
+
+.. versionadded:: 2.6
+
+Managing UUIDs
+==============
+
+To create a namespaced UUIDv5::
+
+ {{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }}
+
+.. versionadded:: 2.10
+
+To create a namespaced UUIDv5 using the default Ansible namespace '361E6D51-FAEC-444A-9079-341386DA8E2E'::
+
+ {{ string | to_uuid }}
+
+.. versionadded:: 1.9
+
+To make use of one attribute from each item in a list of complex variables, use the :func:`Jinja2 map filter <jinja2:map>`::
+
+ # get a comma-separated list of the mount points (for example, "/,/mnt/stuff") on a host
+ {{ ansible_mounts | map(attribute='mount') | join(',') }}
+
+Handling dates and times
+========================
+
+To get a date object from a string use the `to_datetime` filter::
+
+ # Get total amount of seconds between two dates. Default date format is %Y-%m-%d %H:%M:%S but you can pass your own format
+ {{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).total_seconds() }}
+
+ # Get the seconds component of the delta, ignoring whole days. NOTE: .seconds does NOT include days, but it DOES include hours and minutes. To convert the entire delta to seconds, use total_seconds()
+ {{ (("2016-08-14 20:00:12" | to_datetime) - ("2016-08-14 18:00:00" | to_datetime)).seconds }}
+ # This expression evaluates to "7212", because the delta of 2 hours and 12 seconds is less than one day
+
+ # get amount of days between two dates. This returns only number of days and discards remaining hours, minutes, and seconds
+ {{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }}
+
+.. versionadded:: 2.4
+
+To format a date using a string (like with the shell date command), use the "strftime" filter::
+
+ # Display year-month-day
+ {{ '%Y-%m-%d' | strftime }}
+
+ # Display hour:min:sec
+ {{ '%H:%M:%S' | strftime }}
+
+ # Use ansible_date_time.epoch fact
+ {{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }}
+
+ # Use arbitrary epoch value
+ {{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01
+ {{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04
+
+.. note:: To get all string possibilities, check https://docs.python.org/3/library/time.html#time.strftime
+
+Getting Kubernetes resource names
+=================================
+
+.. note::
+
+ These filters have migrated to the `community.kubernetes <https://galaxy.ansible.com/community/kubernetes>`_ collection. Follow the installation instructions to install that collection.
+
+Use the "k8s_config_resource_name" filter to obtain the name of a Kubernetes ConfigMap or Secret,
+including its hash::
+
+ {{ configmap_resource_definition | community.kubernetes.k8s_config_resource_name }}
+
+This can then be used to reference hashes in Pod specifications::
+
+ my_secret:
+ kind: Secret
+ name: my_secret_name
+
+ deployment_resource:
+ kind: Deployment
+ spec:
+ template:
+ spec:
+ containers:
+ - envFrom:
+ - secretRef:
+ name: {{ my_secret | community.kubernetes.k8s_config_resource_name }}
+
+.. versionadded:: 2.8
+
+.. _PyYAML library: https://pyyaml.org/
+
+.. _PyYAML documentation: https://pyyaml.org/wiki/PyYAMLDocumentation
+
+
+.. seealso::
+
+ :ref:`about_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_conditionals`
+ Conditional statements in playbooks
+ :ref:`playbooks_variables`
+ All about variables
+ :ref:`playbooks_loops`
+ Looping in playbooks
+ :ref:`playbooks_reuse_roles`
+ Playbook organization by roles
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_filters_ipaddr.rst b/docs/docsite/rst/user_guide/playbooks_filters_ipaddr.rst
new file mode 100644
index 00000000..0a6d4825
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_filters_ipaddr.rst
@@ -0,0 +1,744 @@
+:orphan:
+
+.. _playbooks_filters_ipaddr:
+
+ipaddr filter
+`````````````
+
+.. versionadded:: 1.9
+
+``ipaddr()`` is a Jinja2 filter designed to provide an interface to the `netaddr`_
+Python package from within Ansible. It can operate on strings or lists of
+items, test various data to check if they are valid IP addresses, and manipulate
+the input data to extract requested information. ``ipaddr()`` works with both
+IPv4 and IPv6 addresses in various forms. There are also additional functions
+available to manipulate IP subnets and MAC addresses.
+
+.. note::
+
+ The ``ipaddr()`` filter migrated to the `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_ collection. Follow the installation instructions to install that collection.
+
+To use this filter in Ansible, you need to install the `netaddr`_ Python library on
+a computer on which you use Ansible (it is not required on remote hosts).
+It can usually be installed with either your system package manager or using
+``pip``::
+
+ pip install netaddr
+
+.. _netaddr: https://pypi.org/project/netaddr/
+
+.. contents:: Topics
+ :local:
+ :depth: 2
+ :backlinks: top
+
+
+Basic tests
+^^^^^^^^^^^
+
+``ipaddr()`` is designed to return the input value if a query is True, and
+``False`` if a query is False. This way it can be easily used in chained
+filters. To use the filter, pass a string to it:
+
+.. code-block:: none
+
+ {{ '192.0.2.0' | ansible.netcommon.ipaddr }}
+
+You can also pass the values as variables::
+
+ {{ myvar | ansible.netcommon.ipaddr }}
+
+Here are some example test results of various input strings::
+
+ # These values are valid IP addresses or network ranges
+ '192.168.0.1' -> 192.168.0.1
+ '192.168.32.0/24' -> 192.168.32.0/24
+ 'fe80::100/10' -> fe80::100/10
+ 45443646733 -> ::a:94a7:50d
+ '523454/24' -> 0.7.252.190/24
+
+ # Values that are not valid IP addresses or network ranges
+ 'localhost' -> False
+ True -> False
+ 'space bar' -> False
+ False -> False
+ '' -> False
+ ':' -> False
+ 'fe80:/10' -> False
+
+Sometimes you need either IPv4 or IPv6 addresses. To filter only for a particular
+type, the ``ipaddr()`` filter has two "aliases", ``ipv4()`` and ``ipv6()``.
+
+Example use of an IPv4 filter::
+
+ {{ myvar | ansible.netcommon.ipv4 }}
+
+A similar example of an IPv6 filter::
+
+ {{ myvar | ansible.netcommon.ipv6 }}
+
+Here's some example test results to look for IPv4 addresses::
+
+ '192.168.0.1' -> 192.168.0.1
+ '192.168.32.0/24' -> 192.168.32.0/24
+ 'fe80::100/10' -> False
+ 45443646733 -> False
+ '523454/24' -> 0.7.252.190/24
+
+And the same data filtered for IPv6 addresses::
+
+ '192.168.0.1' -> False
+ '192.168.32.0/24' -> False
+ 'fe80::100/10' -> fe80::100/10
+ 45443646733 -> ::a:94a7:50d
+ '523454/24' -> False
+
+
+Filtering lists
+^^^^^^^^^^^^^^^
+
+You can filter entire lists - ``ipaddr()`` will return a list with values
+valid for a particular query::
+
+ # Example list of values
+ test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']
+
+ # {{ test_list | ansible.netcommon.ipaddr }}
+ ['192.24.2.1', '::1', '192.168.32.0/24', 'fe80::100/10', '2001:db8:32c:faad::/64']
+
+ # {{ test_list | ansible.netcommon.ipv4 }}
+ ['192.24.2.1', '192.168.32.0/24']
+
+ # {{ test_list | ansible.netcommon.ipv6 }}
+ ['::1', 'fe80::100/10', '2001:db8:32c:faad::/64']
+
+
+Wrapping IPv6 addresses in [ ] brackets
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Some configuration files require IPv6 addresses to be "wrapped" in square
+brackets (``[ ]``). To accomplish that, you can use the ``ipwrap()`` filter. It
+will wrap all IPv6 addresses and leave any other strings intact::
+
+ # {{ test_list | ansible.netcommon.ipwrap }}
+ ['192.24.2.1', 'host.fqdn', '[::1]', '192.168.32.0/24', '[fe80::100]/10', True, '', '[2001:db8:32c:faad::]/64']
+
+As you can see, ``ipwrap()`` did not filter out non-IP address values, which is
+usually what you want when, for example, you are mixing IP addresses with
+hostnames. If you still want to filter out all non-IP address values, you can
+chain both filters together::
+
+ # {{ test_list | ansible.netcommon.ipaddr | ansible.netcommon.ipwrap }}
+ ['192.24.2.1', '[::1]', '192.168.32.0/24', '[fe80::100]/10', '[2001:db8:32c:faad::]/64']
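+
+This makes ``ipwrap()`` handy in templates that render address and port pairs. A short Jinja2 sketch, assuming a hypothetical ``listen_addresses`` variable that mixes IPv4 and IPv6 addresses:
+
+.. code-block:: jinja
+
+    {% for address in listen_addresses | ansible.netcommon.ipaddr('address') | ansible.netcommon.ipwrap %}
+    listen {{ address }}:8080
+    {% endfor %}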
+
+
+Basic queries
+^^^^^^^^^^^^^
+
+You can provide a single argument to each ``ipaddr()`` filter. The filter will then
+treat it as a query and return values modified by that query. Lists will
+contain only values that you are querying for.
+
+Types of queries include:
+
+- query by name: ``ansible.netcommon.ipaddr('address')``, ``ansible.netcommon.ipv4('network')``;
+- query by CIDR range: ``ansible.netcommon.ipaddr('192.168.0.0/24')``, ``ansible.netcommon.ipv6('2001:db8::/32')``;
+- query by index number: ``ansible.netcommon.ipaddr('1')``, ``ansible.netcommon.ipaddr('-1')``;
+
+If a query type is not recognized, Ansible will raise an error.
+
+
+Getting information about hosts and networks
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Here's our test list again::
+
+ # Example list of values
+ test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']
+
+Let's take the list above and get only those elements that are host IP addresses
+and not network ranges::
+
+ # {{ test_list | ansible.netcommon.ipaddr('address') }}
+ ['192.24.2.1', '::1', 'fe80::100']
+
+As you can see, even though some values had a host address with a CIDR prefix,
+they were dropped by the filter. If you want host IP addresses with their correct
+CIDR prefixes (as is common with IPv6 addressing), you can use the
+``ipaddr('host')`` filter::
+
+ # {{ test_list | ansible.netcommon.ipaddr('host') }}
+ ['192.24.2.1/32', '::1/128', 'fe80::100/10']
+
+Filtering by IP address type also works::
+
+ # {{ test_list | ansible.netcommon.ipv4('address') }}
+ ['192.24.2.1']
+
+ # {{ test_list | ansible.netcommon.ipv6('address') }}
+ ['::1', 'fe80::100']
+
+You can check if IP addresses or network ranges are accessible on a public
+Internet, or if they are in private networks::
+
+ # {{ test_list | ansible.netcommon.ipaddr('public') }}
+ ['192.24.2.1', '2001:db8:32c:faad::/64']
+
+ # {{ test_list | ansible.netcommon.ipaddr('private') }}
+ ['192.168.32.0/24', 'fe80::100/10']
+
+You can check which values are specifically network ranges::
+
+ # {{ test_list | ansible.netcommon.ipaddr('net') }}
+ ['192.168.32.0/24', '2001:db8:32c:faad::/64']
+
+You can also check how many IP addresses can be in a certain range::
+
+ # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('size') }}
+ [256, 18446744073709551616L]
+
+By specifying a network range as a query, you can check if a given value is in
+that range::
+
+ # {{ test_list | ansible.netcommon.ipaddr('192.0.0.0/8') }}
+ ['192.24.2.1', '192.168.32.0/24']
+
+If you specify a positive or negative integer as a query, ``ipaddr()`` will
+treat this as an index and will return the specific IP address from a network
+range, in the 'host/prefix' format::
+
+ # First IP address (network address)
+ # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('0') }}
+ ['192.168.32.0/24', '2001:db8:32c:faad::/64']
+
+ # Second IP address (usually the gateway host)
+ # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('1') }}
+ ['192.168.32.1/24', '2001:db8:32c:faad::1/64']
+
+ # Last IP address (the broadcast address in IPv4 networks)
+ # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('-1') }}
+ ['192.168.32.255/24', '2001:db8:32c:faad:ffff:ffff:ffff:ffff/64']
+
+You can also select IP addresses from a range by their index, from the start or
+end of the range::
+
+ # Returns from the start of the range
+ # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('200') }}
+ ['192.168.32.200/24', '2001:db8:32c:faad::c8/64']
+
+ # Returns from the end of the range
+ # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('-200') }}
+ ['192.168.32.56/24', '2001:db8:32c:faad:ffff:ffff:ffff:ff38/64']
+
+ # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('400') }}
+ ['2001:db8:32c:faad::190/64']
+
+
+Getting information from host/prefix values
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You frequently use a combination of IP addresses and subnet prefixes
+("CIDR"), this is even more common with IPv6. The ``ansible.netcommon.ipaddr()`` filter can extract
+useful data from these prefixes.
+
+Here's an example set of two host prefixes (with some "control" values)::
+
+ host_prefix = ['2001:db8:deaf:be11::ef3/64', '192.0.2.48/24', '127.0.0.1', '192.168.0.0/16']
+
+First, let's make sure that we only work with correct host/prefix values, not
+just subnets or single IP addresses::
+
+ # {{ host_prefix | ansible.netcommon.ipaddr('host/prefix') }}
+ ['2001:db8:deaf:be11::ef3/64', '192.0.2.48/24']
+
+In Debian-based systems, the network configuration stored in the ``/etc/network/interfaces`` file uses a combination of IP address, network address, netmask and broadcast address to configure an IPv4 network interface. We can get these values from a single 'host/prefix' combination:
+
+.. code-block:: jinja
+
+ # Jinja2 template
+ {% set ipv4_host = host_prefix | unique | ansible.netcommon.ipv4('host/prefix') | first %}
+ iface eth0 inet static
+ address {{ ipv4_host | ansible.netcommon.ipaddr('address') }}
+ network {{ ipv4_host | ansible.netcommon.ipaddr('network') }}
+ netmask {{ ipv4_host | ansible.netcommon.ipaddr('netmask') }}
+ broadcast {{ ipv4_host | ansible.netcommon.ipaddr('broadcast') }}
+
+ # Generated configuration file
+ iface eth0 inet static
+ address 192.0.2.48
+ network 192.0.2.0
+ netmask 255.255.255.0
+ broadcast 192.0.2.255
+
+In the above example, we needed to handle the fact that values were stored in
+a list, which is unusual in IPv4 networks, where only a single IP address can be
+set on an interface. However, IPv6 networks can have multiple IP addresses set
+on an interface:
+
+.. code-block:: jinja
+
+ # Jinja2 template
+ iface eth0 inet6 static
+ {% set ipv6_list = host_prefix | unique | ansible.netcommon.ipv6('host/prefix') %}
+ address {{ ipv6_list[0] }}
+ {% if ipv6_list | length > 1 %}
+ {% for subnet in ipv6_list[1:] %}
+ up /sbin/ip address add {{ subnet }} dev eth0
+ down /sbin/ip address del {{ subnet }} dev eth0
+ {% endfor %}
+ {% endif %}
+
+ # Generated configuration file
+ iface eth0 inet6 static
+ address 2001:db8:deaf:be11::ef3/64
+
+If needed, you can extract subnet and prefix information from the 'host/prefix' value:
+
+.. code-block:: jinja
+
+ # {{ host_prefix | ansible.netcommon.ipaddr('host/prefix') | ansible.netcommon.ipaddr('subnet') }}
+ ['2001:db8:deaf:be11::/64', '192.0.2.0/24']
+
+ # {{ host_prefix | ansible.netcommon.ipaddr('host/prefix') | ansible.netcommon.ipaddr('prefix') }}
+ [64, 24]
+
+
+Converting subnet masks to CIDR notation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Given a subnet in the form of network address and subnet mask, the ``ipaddr()`` filter can convert it into CIDR notation. This can be useful for converting Ansible facts gathered about network configuration from subnet masks into CIDR format::
+
+ ansible_default_ipv4: {
+ address: "192.168.0.11",
+ alias: "eth0",
+ broadcast: "192.168.0.255",
+ gateway: "192.168.0.1",
+ interface: "eth0",
+ macaddress: "fa:16:3e:c4:bd:89",
+ mtu: 1500,
+ netmask: "255.255.255.0",
+ network: "192.168.0.0",
+ type: "ether"
+ }
+
+First concatenate the network and netmask::
+
+ net_mask = "{{ ansible_default_ipv4.network }}/{{ ansible_default_ipv4.netmask }}"
+ '192.168.0.0/255.255.255.0'
+
+This result can be converted to canonical form with ``ipaddr()`` to produce a subnet in CIDR format::
+
+ # {{ net_mask | ansible.netcommon.ipaddr('prefix') }}
+ '24'
+
+ # {{ net_mask | ansible.netcommon.ipaddr('net') }}
+ '192.168.0.0/24'
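+
+Put together in a task, this might look like the following sketch, which stores the result as a new fact (the fact name is arbitrary):
+
+.. code-block:: yaml
+
+    - name: Store the default IPv4 network in CIDR notation
+      ansible.builtin.set_fact:
+        default_ipv4_cidr: "{{ (ansible_default_ipv4.network ~ '/' ~ ansible_default_ipv4.netmask) | ansible.netcommon.ipaddr('net') }}"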
+
+
+Getting information about the network in CIDR notation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Given an IP address, the ``ipaddr()`` filter can produce the network address in CIDR notation.
+This can be useful when you want to obtain the network address from the IP address in CIDR format.
+
+Here's an example of IP address::
+
+ ip_address = "{{ ansible_default_ipv4.address }}/{{ ansible_default_ipv4.netmask }}"
+ '192.168.0.11/255.255.255.0'
+
+This can be used to obtain the network address in CIDR notation format::
+
+ # {{ ip_address | ansible.netcommon.ipaddr('network/prefix') }}
+ '192.168.0.0/24'
+
+
+IP address conversion
+^^^^^^^^^^^^^^^^^^^^^
+
+Here's our test list again::
+
+ # Example list of values
+ test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']
+
+You can convert IPv4 addresses into IPv6 addresses::
+
+ # {{ test_list | ansible.netcommon.ipv4('ipv6') }}
+ ['::ffff:192.24.2.1/128', '::ffff:192.168.32.0/120']
+
+Converting from IPv6 to IPv4 works very rarely::
+
+ # {{ test_list | ansible.netcommon.ipv6('ipv4') }}
+ ['0.0.0.1/32']
+
+But we can make a double conversion if needed::
+
+ # {{ test_list | ansible.netcommon.ipaddr('ipv6') | ansible.netcommon.ipaddr('ipv4') }}
+ ['192.24.2.1/32', '0.0.0.1/32', '192.168.32.0/24']
+
+You can convert IP addresses to integers, the same way that you can convert
+integers into IP addresses::
+
+ # {{ test_list | ansible.netcommon.ipaddr('address') | ansible.netcommon.ipaddr('int') }}
+ [3222798849, 1, '3232243712/24', '338288524927261089654018896841347694848/10', '42540766412265424405338506004571095040/64']
+
+You can convert IPv4 address to `Hexadecimal notation <https://en.wikipedia.org/wiki/Hexadecimal>`_ with optional delimiter::
+
+ # {{ '192.168.1.5' | ansible.netcommon.ip4_hex }}
+ c0a80105
+ # {{ '192.168.1.5' | ansible.netcommon.ip4_hex(':') }}
+ c0:a8:01:05
+
+You can convert IP addresses to PTR records::
+
+ # {% for address in test_list | ansible.netcommon.ipaddr %}
+ # {{ address | ansible.netcommon.ipaddr('revdns') }}
+ # {% endfor %}
+ 1.2.24.192.in-addr.arpa.
+ 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa.
+ 0.32.168.192.in-addr.arpa.
+ 0.0.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.e.f.ip6.arpa.
+ 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.d.a.a.f.c.2.3.0.8.b.d.0.1.0.0.2.ip6.arpa.
+
+
+Converting IPv4 address to a 6to4 address
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A `6to4`_ tunnel is a way to access the IPv6 Internet from an IPv4-only network. If you
+have a public IPv4 address, you can automatically configure its IPv6
+equivalent in the ``2002::/16`` network range. After conversion you will gain
+access to a ``2002:xxxx:xxxx::/48`` subnet which could be split into 65535
+``/64`` subnets if needed.
+
+To convert your IPv4 address, just send it through the ``'6to4'`` filter. It will
+be automatically converted to a router address (with a ``::1/48`` host address)::
+
+ # {{ '193.0.2.0' | ansible.netcommon.ipaddr('6to4') }}
+ 2002:c100:0200::1/48
+
+.. _6to4: https://en.wikipedia.org/wiki/6to4
+
+
+Finding IP addresses within a range
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To find usable IP addresses within an IP range, try these ``ipaddr`` filters:
+
+To find the next usable IP address in a range, use ``next_usable`` ::
+
+ # {{ '192.168.122.1/24' | ansible.netcommon.ipaddr('next_usable') }}
+ 192.168.122.2
+
+To find the last usable IP address from a range, use ``last_usable``::
+
+ # {{ '192.168.122.1/24' | ansible.netcommon.ipaddr('last_usable') }}
+ 192.168.122.254
+
+To find the available range of IP addresses from the given network address, use ``range_usable``::
+
+ # {{ '192.168.122.1/24' | ansible.netcommon.ipaddr('range_usable') }}
+ 192.168.122.1-192.168.122.254
+
+To find the peer IP address for a point to point link, use ``peer``::
+
+ # {{ '192.168.122.1/31' | ansible.netcommon.ipaddr('peer') }}
+ 192.168.122.0
+ # {{ '192.168.122.1/30' | ansible.netcommon.ipaddr('peer') }}
+ 192.168.122.2
+
+To return the nth ip from a network, use the filter ``nthhost``::
+
+ # {{ '10.0.0.0/8' | ansible.netcommon.nthhost(305) }}
+ 10.0.1.49
+
+``nthhost`` also supports a negative value::
+
+ # {{ '10.0.0.0/8' | ansible.netcommon.nthhost(-1) }}
+ 10.255.255.255
+
+To find the next nth usable IP address in relation to another within a range, use ``next_nth_usable``.
+In the example, ``next_nth_usable`` returns the second usable IP address for the given IP range::
+
+ # {{ '192.168.122.1/24' | ansible.netcommon.next_nth_usable(2) }}
+ 192.168.122.3
+
+If there is no usable address, it returns an empty string::
+
+ # {{ '192.168.122.254/24' | ansible.netcommon.next_nth_usable(2) }}
+ ""
+
+Just like ``next_nth_usable``, ``previous_nth_usable`` finds the previous usable address::
+
+ # {{ '192.168.122.10/24' | ansible.netcommon.previous_nth_usable(2) }}
+ 192.168.122.8
+
+
+Testing if an address belongs to a network range
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``network_in_usable`` filter returns whether an address passed as an argument is usable in a network.
+Usable addresses are addresses that can be assigned to a host. The network ID and the broadcast address
+are not usable addresses.::
+
+ # {{ '192.168.0.0/24' | ansible.netcommon.network_in_usable( '192.168.0.1' ) }}
+ True
+
+ # {{ '192.168.0.0/24' | ansible.netcommon.network_in_usable( '192.168.0.255' ) }}
+ False
+
+ # {{ '192.168.0.0/16' | ansible.netcommon.network_in_usable( '192.168.0.255' ) }}
+ True
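+
+Because ``network_in_usable`` returns a boolean, it works well in a ``when`` condition. A minimal sketch with hypothetical variable names:
+
+.. code-block:: yaml
+
+    - name: Report the gateway only if it is a usable address on the management network
+      ansible.builtin.debug:
+        msg: "Using gateway {{ gateway_address }} on {{ management_network }}"
+      when: management_network | ansible.netcommon.network_in_usable(gateway_address)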
+
+The ``network_in_network`` filter returns whether an address or a network passed as argument is in a network.::
+
+ # {{ '192.168.0.0/24' | ansible.netcommon.network_in_network( '192.168.0.1' ) }}
+ True
+
+ # {{ '192.168.0.0/24' | ansible.netcommon.network_in_network( '192.168.0.0/24' ) }}
+ True
+
+ # {{ '192.168.0.0/24' | ansible.netcommon.network_in_network( '192.168.0.255' ) }}
+ True
+
+ # Check if a network is part of another network
+ # {{ '192.168.0.0/16' | ansible.netcommon.network_in_network( '192.168.0.0/24' ) }}
+ True
+
+To check whether multiple addresses belong to a network, use the ``reduce_on_network`` filter::
+
+ # {{ ['192.168.0.34', '10.3.0.3', '192.168.2.34'] | ansible.netcommon.reduce_on_network( '192.168.0.0/24' ) }}
+ ['192.168.0.34']
+
+
+IP Math
+^^^^^^^
+
+.. versionadded:: 2.7
+
+The ``ipmath()`` filter can be used to do simple IP math/arithmetic.
+
+Here are a few simple examples::
+
+ # Get the next five addresses based on an IP address
+ # {{ '192.168.1.5' | ansible.netcommon.ipmath(5) }}
+ 192.168.1.10
+
+ # Get the ten previous addresses based on an IP address
+ # {{ '192.168.0.5' | ansible.netcommon.ipmath(-10) }}
+ 192.167.255.251
+
+ # Get the next five addresses using CIDR notation
+ # {{ '192.168.1.1/24' | ansible.netcommon.ipmath(5) }}
+ 192.168.1.6
+
+ # Get the previous five addresses using CIDR notation
+ # {{ '192.168.1.6/24' | ansible.netcommon.ipmath(-5) }}
+ 192.168.1.1
+
+ # Get the previous ten addresses using CIDR notation
+ # It returns an address in the previous network range
+ # {{ '192.168.2.6/24' | ansible.netcommon.ipmath(-10) }}
+ 192.168.1.252
+
+ # Get the next ten addresses in IPv6
+ # {{ '2001::1' | ansible.netcommon.ipmath(10) }}
+ 2001::b
+
+ # Get the previous ten addresses in IPv6
+ # {{ '2001::5' | ansible.netcommon.ipmath(-10) }}
+ 2000:ffff:ffff:ffff:ffff:ffff:ffff:fffb
+
+
+
+Subnet manipulation
+^^^^^^^^^^^^^^^^^^^
+
+The ``ipsubnet()`` filter can be used to manipulate network subnets in several ways.
+
+Here is an example IP address and subnet::
+
+ address = '192.168.144.5'
+ subnet = '192.168.0.0/16'
+
+To check if a given string is a subnet, pass it through the filter without any
+arguments. If the given string is an IP address, it will be converted into
+a subnet::
+
+ # {{ address | ansible.netcommon.ipsubnet }}
+ 192.168.144.5/32
+
+ # {{ subnet | ansible.netcommon.ipsubnet }}
+ 192.168.0.0/16
+
+If you specify a subnet size as the first parameter of the ``ipsubnet()`` filter, and
+the subnet size is **smaller than the current one**, you will get the number of subnets
+a given subnet can be split into::
+
+ # {{ subnet | ansible.netcommon.ipsubnet(20) }}
+ 16
+
+The second argument of the ``ipsubnet()`` filter is an index number; by specifying it
+you can get a new subnet with the specified size::
+
+ # First subnet
+ # {{ subnet | ansible.netcommon.ipsubnet(20, 0) }}
+ 192.168.0.0/20
+
+ # Last subnet
+ # {{ subnet | ansible.netcommon.ipsubnet(20, -1) }}
+ 192.168.240.0/20
+
+ # Fifth subnet
+ # {{ subnet | ansible.netcommon.ipsubnet(20, 5) }}
+ 192.168.80.0/20
+
+ # Fifth to last subnet
+ # {{ subnet | ansible.netcommon.ipsubnet(20, -5) }}
+ 192.168.176.0/20
+
+If you specify an IP address instead of a subnet, and give a subnet size as
+the first argument, the ``ipsubnet()`` filter will instead return the biggest subnet that
+contains that given IP address::
+
+ # {{ address | ansible.netcommon.ipsubnet(20) }}
+ 192.168.144.0/20
+
+By specifying an index number as a second argument, you can select smaller and
+smaller subnets::
+
+ # First subnet
+ # {{ address | ansible.netcommon.ipsubnet(18, 0) }}
+ 192.168.128.0/18
+
+ # Last subnet
+ # {{ address | ansible.netcommon.ipsubnet(18, -1) }}
+ 192.168.144.4/31
+
+ # Fifth subnet
+ # {{ address | ansible.netcommon.ipsubnet(18, 5) }}
+ 192.168.144.0/23
+
+ # Fifth to last subnet
+ # {{ address | ansible.netcommon.ipsubnet(18, -5) }}
+ 192.168.144.0/27
+
+By specifying another subnet as a second argument, if the second subnet includes
+the first, you can determine the rank of the first subnet in the second ::
+
+ # The rank of the IP in the subnet (the IP is the 36870th /32 of the subnet)
+ # {{ address | ansible.netcommon.ipsubnet(subnet) }}
+ 36870
+
+ # The rank in the /24 that contains the address
+ # {{ address | ansible.netcommon.ipsubnet('192.168.144.0/24') }}
+ 6
+
+ # An IP with the subnet in the first /30 in a /24
+ # {{ '192.168.144.1/30' | ansible.netcommon.ipsubnet('192.168.144.0/24') }}
+ 1
+
+ # The fifth subnet /30 in a /24
+ # {{ '192.168.144.16/30' | ansible.netcommon.ipsubnet('192.168.144.0/24') }}
+ 5
+
+If the second subnet doesn't include the first subnet, the ``ipsubnet()`` filter raises an error.
+
+
+You can use the ``ipsubnet()`` filter with the ``ipaddr()`` filter to, for example, split
+a given ``/48`` prefix into smaller ``/64`` subnets::
+
+ # {{ '193.0.2.0' | ansible.netcommon.ipaddr('6to4') | ansible.netcommon.ipsubnet(64, 58820) | ansible.netcommon.ipaddr('1') }}
+ 2002:c100:200:e5c4::1/64
+
+Because of the size of IPv6 subnets, iteration over all of them to find the
+correct one may take some time on slower computers, depending on the size
+difference between the subnets.
+
+
+Subnet Merging
+^^^^^^^^^^^^^^
+
+.. versionadded:: 2.6
+
+The ``cidr_merge()`` filter can be used to merge subnets or individual addresses
+into their minimal representation, collapsing overlapping subnets and merging
+adjacent ones wherever possible::
+
+ {{ ['192.168.0.0/17', '192.168.128.0/17', '192.168.128.1' ] | cidr_merge }}
+ # => ['192.168.0.0/16']
+
+ {{ ['192.168.0.0/24', '192.168.1.0/24', '192.168.3.0/24'] | cidr_merge }}
+ # => ['192.168.0.0/23', '192.168.3.0/24']
+
+Changing the action from 'merge' to 'span' will instead return the smallest
+subnet which contains all of the inputs::
+
+ {{ ['192.168.0.0/24', '192.168.3.0/24'] | ansible.netcommon.cidr_merge('span') }}
+ # => '192.168.0.0/22'
+
+ {{ ['192.168.1.42', '192.168.42.1'] | ansible.netcommon.cidr_merge('span') }}
+ # => '192.168.0.0/18'
+
+
+MAC address filter
+^^^^^^^^^^^^^^^^^^
+
+You can use the ``hwaddr()`` filter to check if a given string is a MAC address or
+convert it between various formats. Examples::
+
+ # Example MAC address
+ macaddress = '1a:2b:3c:4d:5e:6f'
+
+ # Check if given string is a MAC address
+ # {{ macaddress | ansible.netcommon.hwaddr }}
+ 1a:2b:3c:4d:5e:6f
+
+ # Convert MAC address to PostgreSQL format
+ # {{ macaddress | ansible.netcommon.hwaddr('pgsql') }}
+ 1a2b3c:4d5e6f
+
+ # Convert MAC address to Cisco format
+ # {{ macaddress | ansible.netcommon.hwaddr('cisco') }}
+ 1a2b.3c4d.5e6f
+
+The supported formats result in the following conversions for the ``1a:2b:3c:4d:5e:6f`` MAC address::
+
+ bare: 1A2B3C4D5E6F
+ bool: True
+ int: 28772997619311
+ cisco: 1a2b.3c4d.5e6f
+ eui48 or win: 1A-2B-3C-4D-5E-6F
+ linux or unix: 1a:2b:3c:4d:5e:6f
+ pgsql, postgresql, or psql: 1a2b3c:4d5e6f
+
+
+Generate an IPv6 address in Stateless Configuration (SLAAC)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``slaac()`` filter generates an IPv6 address for a given network and MAC address using Stateless Address Autoconfiguration (SLAAC)::
+
+ # {{ 'fdcf:1894:23b5:d38c:0000:0000:0000:0000' | ansible.netcommon.slaac('c2:31:b3:83:bf:2b') }}
+ fdcf:1894:23b5:d38c:c031:b3ff:fe83:bf2b
+
+.. seealso::
+
+
+ `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_
+ Ansible network collection for common code
+ :ref:`about_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_filters`
+ Introduction to Jinja2 filters and their uses
+ :ref:`playbooks_conditionals`
+ Conditional statements in playbooks
+ :ref:`playbooks_variables`
+ All about variables
+ :ref:`playbooks_loops`
+ Looping in playbooks
+ :ref:`playbooks_reuse_roles`
+ Playbook organization by roles
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_handlers.rst b/docs/docsite/rst/user_guide/playbooks_handlers.rst
new file mode 100644
index 00000000..4650d5e7
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_handlers.rst
@@ -0,0 +1,148 @@
+.. _handlers:
+
+Handlers: running operations on change
+======================================
+
+Sometimes you want a task to run only when a change is made on a machine. For example, you may want to restart a service if a task updates the configuration of that service, but not if the configuration is unchanged. Ansible uses handlers to address this use case. Handlers are tasks that only run when notified. Each handler should have a globally unique name.
+
+.. contents::
+ :local:
+
+Handler example
+---------------
+
+This playbook, ``verify-apache.yml``, contains a single play with a handler::
+
+ ---
+ - name: Verify apache installation
+ hosts: webservers
+ vars:
+ http_port: 80
+ max_clients: 200
+ remote_user: root
+ tasks:
+ - name: Ensure apache is at the latest version
+ ansible.builtin.yum:
+ name: httpd
+ state: latest
+
+ - name: Write the apache config file
+ ansible.builtin.template:
+ src: /srv/httpd.j2
+ dest: /etc/httpd.conf
+ notify:
+ - Restart apache
+
+ - name: Ensure apache is running
+ ansible.builtin.service:
+ name: httpd
+ state: started
+
+ handlers:
+ - name: Restart apache
+ ansible.builtin.service:
+ name: httpd
+ state: restarted
+
+In this example playbook, the second task notifies the handler. A single task can notify more than one handler::
+
+ - name: Template configuration file
+ ansible.builtin.template:
+ src: template.j2
+ dest: /etc/foo.conf
+ notify:
+ - Restart memcached
+ - Restart apache
+
+ handlers:
+ - name: Restart memcached
+ ansible.builtin.service:
+ name: memcached
+ state: restarted
+
+ - name: Restart apache
+ ansible.builtin.service:
+ name: apache
+ state: restarted
+
+Controlling when handlers run
+-----------------------------
+
+By default, handlers run after all the tasks in a particular play have been completed. This approach is efficient, because the handler only runs once, regardless of how many tasks notify it. For example, if multiple tasks update a configuration file and notify a handler to restart Apache, Ansible only bounces Apache once to avoid unnecessary restarts.
+
+If you need handlers to run before the end of the play, add a task to flush them using the :ref:`meta module <meta_module>`, which executes Ansible actions::
+
+ tasks:
+ - name: Some tasks go here
+ ansible.builtin.shell: ...
+
+ - name: Flush handlers
+ meta: flush_handlers
+
+ - name: Some other tasks
+ ansible.builtin.shell: ...
+
+The ``meta: flush_handlers`` task triggers any handlers that have been notified at that point in the play.
+
+Using variables with handlers
+-----------------------------
+
+You may want your Ansible handlers to use variables. For example, if the name of a service varies slightly by distribution, you want your output to show the exact name of the restarted service for each target machine. Avoid placing variables in the name of the handler. Since handler names are templated early on, Ansible may not have a value available for a handler name like this::
+
+ handlers:
+ # This handler name may cause your play to fail!
+ - name: Restart "{{ web_service_name }}"
+
+If the variable used in the handler name is not available, the entire play fails. Changing that variable mid-play **will not** result in a newly created handler.
+
+Instead, place variables in the task parameters of your handler. You can load the values using ``include_vars`` like this:
+
+.. code-block:: yaml+jinja
+
+ tasks:
+ - name: Set host variables based on distribution
+ include_vars: "{{ ansible_facts.distribution }}.yml"
+
+ handlers:
+ - name: Restart web service
+ ansible.builtin.service:
+ name: "{{ web_service_name | default('httpd') }}"
+ state: restarted
+
+Handlers can also "listen" to generic topics, and tasks can notify those topics as follows::
+
+ handlers:
+ - name: Restart memcached
+ ansible.builtin.service:
+ name: memcached
+ state: restarted
+ listen: "restart web services"
+
+ - name: Restart apache
+ ansible.builtin.service:
+ name: apache
+ state: restarted
+ listen: "restart web services"
+
+ tasks:
+ - name: Restart everything
+ ansible.builtin.command: echo "this task will restart the web services"
+ notify: "restart web services"
+
+This use makes it much easier to trigger multiple handlers. It also decouples handlers from their names,
+making it easier to share handlers among playbooks and roles (especially when using 3rd party roles from
+a shared source like Galaxy).
+
+.. note::
+ * Handlers always run in the order they are defined, not in the order listed in the notify-statement. This is also the case for handlers using `listen`.
+ * Handler names and `listen` topics live in a global namespace.
+ * Handler names are templatable and `listen` topics are not.
+ * Use unique handler names. If you trigger more than one handler with the same name, the first one(s) get overwritten. Only the last one defined will run.
+ * You can notify a handler defined inside a static include.
+ * You cannot notify a handler defined inside a dynamic include.
+
+When using handlers within roles, note that:
+
+* handlers notified within ``pre_tasks``, ``tasks``, and ``post_tasks`` sections are automatically flushed at the end of the section where they were notified.
+* handlers notified within ``roles`` section are automatically flushed at the end of ``tasks`` section, but before any ``tasks`` handlers.
+* handlers are play scoped and as such can be used outside of the role they are defined in.
diff --git a/docs/docsite/rst/user_guide/playbooks_intro.rst b/docs/docsite/rst/user_guide/playbooks_intro.rst
new file mode 100644
index 00000000..24037b3e
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_intro.rst
@@ -0,0 +1,151 @@
+.. _about_playbooks:
+.. _playbooks_intro:
+
+******************
+Intro to playbooks
+******************
+
+Ansible Playbooks offer a repeatable, re-usable, simple configuration management and multi-machine deployment system, one that is well suited to deploying complex applications. If you need to execute a task with Ansible more than once, write a playbook and put it under source control. Then you can use the playbook to push out new configuration or confirm the configuration of remote systems. The playbooks in the `ansible-examples repository <https://github.com/ansible/ansible-examples>`_ illustrate many useful techniques. You may want to look at these in another tab as you read the documentation.
+
+Playbooks can:
+
+* declare configurations
+* orchestrate steps of any manual ordered process, on multiple sets of machines, in a defined order
+* launch tasks synchronously or :ref:`asynchronously <playbooks_async>`
+
+.. contents::
+ :local:
+
+.. _playbook_language_example:
+
+Playbook syntax
+===============
+
+Playbooks are expressed in YAML format with a minimum of syntax. If you are not familiar with YAML, look at our overview of :ref:`yaml_syntax` and consider installing an add-on for your text editor (see :ref:`other_tools_and_programs`) to help you write clean YAML syntax in your playbooks.
+
+A playbook is composed of one or more 'plays' in an ordered list. The terms 'playbook' and 'play' are sports analogies. Each play executes part of the overall goal of the playbook, running one or more tasks. Each task calls an Ansible module.
+
+Playbook execution
+==================
+
+A playbook runs in order from top to bottom. Within each play, tasks also run in order from top to bottom. Playbooks with multiple 'plays' can orchestrate multi-machine deployments, running one play on your webservers, then another play on your database servers, then a third play on your network infrastructure, and so on. At a minimum, each play defines two things:
+
+* the managed nodes to target, using a :ref:`pattern <intro_patterns>`
+* at least one task to execute
+
+In this example, the first play targets the web servers; the second play targets the database servers::
+
+ ---
+ - name: update web servers
+ hosts: webservers
+ remote_user: root
+
+ tasks:
+ - name: ensure apache is at the latest version
+ yum:
+ name: httpd
+ state: latest
+ - name: write the apache config file
+ template:
+ src: /srv/httpd.j2
+ dest: /etc/httpd.conf
+
+ - name: update db servers
+ hosts: databases
+ remote_user: root
+
+ tasks:
+ - name: ensure postgresql is at the latest version
+ yum:
+ name: postgresql
+ state: latest
+ - name: ensure that postgresql is started
+ service:
+ name: postgresql
+ state: started
+
+Your playbook can include more than just a hosts line and tasks. For example, the playbook above sets a ``remote_user`` for each play. This is the user account for the SSH connection. You can add other :ref:`playbook_keywords` at the playbook, play, or task level to influence how Ansible behaves. Playbook keywords can control the :ref:`connection plugin <connection_plugins>`, whether to use :ref:`privilege escalation <become>`, how to handle errors, and more. To support a variety of environments, Ansible lets you set many of these parameters as command-line flags, in your Ansible configuration, or in your inventory. Learning the :ref:`precedence rules <general_precedence_rules>` for these sources of data will help you as you expand your Ansible ecosystem.
+
+.. _tasks_list:
+
+Task execution
+--------------
+
+By default, Ansible executes each task in order, one at a time, against all machines matched by the host pattern. Each task executes a module with specific arguments. When a task has executed on all target machines, Ansible moves on to the next task. You can use :ref:`strategies <playbooks_strategies>` to change this default behavior. Within each play, Ansible applies the same task directives to all hosts. If a task fails on a host, Ansible takes that host out of the rotation for the rest of the playbook.
+
+When you run a playbook, Ansible returns information about connections, the ``name`` lines of all your plays and tasks, whether each task has succeeded or failed on each machine, and whether each task has made a change on each machine. At the bottom of the playbook execution, Ansible provides a summary of the nodes that were targeted and how they performed. General failures and fatal "unreachable" communication attempts are kept separate in the counts.
+
+.. _idempotency:
+
+Desired state and 'idempotency'
+-------------------------------
+
+Most Ansible modules check whether the desired final state has already been achieved, and exit without performing any actions if so, so that repeating the task does not change the final state. Modules that behave this way are often called 'idempotent.' Whether you run a playbook once, or multiple times, the outcome should be the same. However, not all playbooks and not all modules behave this way. If you are unsure, test your playbooks in a sandbox environment before running them multiple times in production.
+
+.. _executing_a_playbook:
+
+Running playbooks
+-----------------
+
+To run your playbook, use the :ref:`ansible-playbook` command::
+
+ ansible-playbook playbook.yml -f 10
+
+The ``-f 10`` in this example sets ten parallel forks. Use the ``--verbose`` flag when running your playbook to see detailed output from successful modules as well as unsuccessful ones.
+
+.. _playbook_ansible-pull:
+
+Ansible-Pull
+============
+
+If you want to invert the architecture of Ansible, so that nodes check in to a central location instead of having
+configuration pushed out to them, you can.
+
+``ansible-pull`` is a small script that checks out a repo of configuration instructions from git and then
+runs ``ansible-playbook`` against that content.
+
+Assuming you load balance your checkout location, ``ansible-pull`` scales essentially infinitely.
+
+Run ``ansible-pull --help`` for details.
+
+There's also a `clever playbook <https://github.com/ansible/ansible-examples/blob/master/language_features/ansible_pull.yml>`_ available to configure ``ansible-pull`` via a crontab from push mode.
+
+Verifying playbooks
+===================
+
+You may want to verify your playbooks to catch syntax errors and other problems before you run them. The :ref:`ansible-playbook` command offers several options for verification, including ``--check``, ``--diff``, ``--list-hosts``, ``--list-tasks``, and ``--syntax-check``. The :ref:`validate-playbook-tools` describes other tools for validating and testing playbooks.
+
+.. _linting_playbooks:
+
+ansible-lint
+------------
+
+You can use `ansible-lint <https://docs.ansible.com/ansible-lint/index.html>`_ for detailed, Ansible-specific feedback on your playbooks before you execute them. For example, if you save the playbook near the top of this page as ``verify-apache.yml`` and run ``ansible-lint`` against it, you should get the following results:
+
+.. code-block:: bash
+
+ $ ansible-lint verify-apache.yml
+ [403] Package installs should not use latest
+ verify-apache.yml:8
+ Task/Handler: ensure apache is at the latest version
+
+The `ansible-lint default rules <https://docs.ansible.com/ansible-lint/rules/default_rules.html>`_ page describes each error. For ``[403]``, the recommended fix is to change ``state: latest`` to ``state: present`` in the playbook.
+
+.. seealso::
+
+ `ansible-lint <https://docs.ansible.com/ansible-lint/index.html>`_
+ Learn how to test Ansible Playbooks syntax
+ :ref:`yaml_syntax`
+ Learn about YAML syntax
+ :ref:`playbooks_best_practices`
+ Tips for managing playbooks in the real world
+ :ref:`list_of_collections`
+ Browse existing collections, modules, and plugins
+ :ref:`developing_modules`
+ Learn to extend Ansible by writing your own modules
+ :ref:`intro_patterns`
+ Learn about how to select hosts
+ `GitHub examples directory <https://github.com/ansible/ansible-examples>`_
+ Complete end-to-end playbook examples
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
diff --git a/docs/docsite/rst/user_guide/playbooks_lookups.rst b/docs/docsite/rst/user_guide/playbooks_lookups.rst
new file mode 100644
index 00000000..004db708
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_lookups.rst
@@ -0,0 +1,37 @@
+.. _playbooks_lookups:
+
+*******
+Lookups
+*******
+
+Lookup plugins retrieve data from outside sources such as files, databases, key/value stores, APIs, and other services. Like all templating, lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. Before Ansible 2.5, lookups were mostly used indirectly in ``with_<lookup>`` constructs for looping. Starting with Ansible 2.5, lookups are used more explicitly as part of Jinja2 expressions fed into the ``loop`` keyword.
+
+.. _lookups_and_variables:
+
+Using lookups in variables
+==========================
+
+You can populate variables using lookups. Ansible evaluates the value each time it is used in a task (or template)::
+
+ vars:
+ motd_value: "{{ lookup('file', '/etc/motd') }}"
+ tasks:
+ - debug:
+ msg: "motd value is {{ motd_value }}"
+
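+You can also feed the output of a lookup directly to the ``loop`` keyword. Here is a minimal sketch using the ``env`` lookup through ``query`` (the environment variables chosen are arbitrary); the values are read on the control machine::
+
+    tasks:
+      - name: Show two environment variables from the control machine
+        ansible.builtin.debug:
+          msg: "{{ item }}"
+        loop: "{{ query('env', 'HOME', 'USER') }}"
+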
+For more details and a list of lookup plugins in ansible-base, see :ref:`plugins_lookup`. You may also find lookup plugins in collections. You can review a list of lookup plugins installed on your control machine with the command ``ansible-doc -l -t lookup``.
+
+.. seealso::
+
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_conditionals`
+ Conditional statements in playbooks
+ :ref:`playbooks_variables`
+ All about variables
+ :ref:`playbooks_loops`
+ Looping in playbooks
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_loops.rst b/docs/docsite/rst/user_guide/playbooks_loops.rst
new file mode 100644
index 00000000..0934eeed
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_loops.rst
@@ -0,0 +1,445 @@
+.. _playbooks_loops:
+
+*****
+Loops
+*****
+
+Sometimes you want to repeat a task multiple times. In computer programming, this is called a loop. Common Ansible loops include changing ownership on several files and/or directories with the :ref:`file module <file_module>`, creating multiple users with the :ref:`user module <user_module>`, and
+repeating a polling step until a certain result is reached. Ansible offers two keywords for creating loops: ``loop`` and ``with_<lookup>``.
+
+.. note::
+ * We added ``loop`` in Ansible 2.5. It is not yet a full replacement for ``with_<lookup>``, but we recommend it for most use cases.
+ * We have not deprecated the use of ``with_<lookup>`` - that syntax will still be valid for the foreseeable future.
+ * We are looking to improve ``loop`` syntax - watch this page and the `changelog <https://github.com/ansible/ansible/tree/devel/changelogs>`_ for updates.
+
+.. contents::
+ :local:
+
+Comparing ``loop`` and ``with_*``
+=================================
+
+* The ``with_<lookup>`` keywords rely on :ref:`lookup_plugins` - even ``items`` is a lookup.
+* The ``loop`` keyword is equivalent to ``with_list``, and is the best choice for simple loops.
+* The ``loop`` keyword will not accept a string as input, see :ref:`query_vs_lookup`.
+* Generally speaking, any use of ``with_*`` covered in :ref:`migrating_to_loop` can be updated to use ``loop``.
+* Be careful when changing ``with_items`` to ``loop``, as ``with_items`` performed implicit single-level flattening. You may need to use ``flatten(1)`` with ``loop`` to match the exact outcome. For example, to get the same output as:
+
+.. code-block:: yaml
+
+ with_items:
+ - 1
+ - [2,3]
+ - 4
+
+you would need::
+
+ loop: "{{ [1, [2,3] ,4] | flatten(1) }}"
+
+* Any ``with_*`` statement that requires using ``lookup`` within a loop should not be converted to use the ``loop`` keyword. For example, instead of doing:
+
+.. code-block:: yaml
+
+ loop: "{{ lookup('fileglob', '*.txt', wantlist=True) }}"
+
+it's cleaner to keep::
+
+ with_fileglob: '*.txt'
+
+.. _standard_loops:
+
+Standard loops
+==============
+
+Iterating over a simple list
+----------------------------
+
+Repeated tasks can be written as standard loops over a simple list of strings. You can define the list directly in the task::
+
+ - name: Add several users
+ ansible.builtin.user:
+ name: "{{ item }}"
+ state: present
+ groups: "wheel"
+ loop:
+ - testuser1
+ - testuser2
+
+You can define the list in a variables file, or in the 'vars' section of your play, then refer to the name of the list in the task::
+
+ loop: "{{ somelist }}"
+
+Either of these examples would be the equivalent of::
+
+ - name: Add user testuser1
+ ansible.builtin.user:
+ name: "testuser1"
+ state: present
+ groups: "wheel"
+
+ - name: Add user testuser2
+ ansible.builtin.user:
+ name: "testuser2"
+ state: present
+ groups: "wheel"
+
+You can pass a list directly to a parameter for some plugins. Most of the packaging modules, like :ref:`yum <yum_module>` and :ref:`apt <apt_module>`, have this capability. When available, passing the list to a parameter is better than looping over the task. For example::
+
+ - name: Optimal yum
+ ansible.builtin.yum:
+ name: "{{ list_of_packages }}"
+ state: present
+
+ - name: Non-optimal yum, slower and may cause issues with interdependencies
+ ansible.builtin.yum:
+ name: "{{ item }}"
+ state: present
+ loop: "{{ list_of_packages }}"
+
+Check the :ref:`module documentation <modules_by_category>` to see if you can pass a list to any particular module's parameter(s).
+
+Iterating over a list of hashes
+-------------------------------
+
+If you have a list of hashes, you can reference subkeys in a loop. For example::
+
+ - name: Add several users
+ ansible.builtin.user:
+ name: "{{ item.name }}"
+ state: present
+ groups: "{{ item.groups }}"
+ loop:
+ - { name: 'testuser1', groups: 'wheel' }
+ - { name: 'testuser2', groups: 'root' }
+
+When combining :ref:`conditionals <playbooks_conditionals>` with a loop, the ``when:`` statement is processed separately for each item.
+See :ref:`the_when_statement` for examples.
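+
+For instance, a minimal sketch that skips the smaller items::
+
+    - name: Show only the items greater than 5
+      ansible.builtin.debug:
+        msg: "{{ item }} is greater than 5"
+      loop: [ 0, 2, 4, 6, 8, 10 ]
+      when: item > 5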
+
+Iterating over a dictionary
+---------------------------
+
+To loop over a dict, use the :ref:`dict2items <dict_filter>` filter:
+
+.. code-block:: yaml
+
+ - name: Using dict2items
+ ansible.builtin.debug:
+ msg: "{{ item.key }} - {{ item.value }}"
+ loop: "{{ tag_data | dict2items }}"
+ vars:
+ tag_data:
+ Environment: dev
+ Application: payment
+
+Here, we are iterating over ``tag_data`` and printing the key and the value from it.
+
+Registering variables with a loop
+=================================
+
+You can register the output of a loop as a variable. For example::
+
+ - name: Register loop output as a variable
+ ansible.builtin.shell: "echo {{ item }}"
+ loop:
+ - "one"
+ - "two"
+ register: echo
+
+When you use ``register`` with a loop, the data structure placed in the variable will contain a ``results`` attribute that is a list of all responses from the module. This differs from the data structure returned when using ``register`` without a loop::
+
+ {
+ "changed": true,
+ "msg": "All items completed",
+ "results": [
+ {
+ "changed": true,
+ "cmd": "echo \"one\" ",
+ "delta": "0:00:00.003110",
+ "end": "2013-12-19 12:00:05.187153",
+ "invocation": {
+ "module_args": "echo \"one\"",
+ "module_name": "shell"
+ },
+ "item": "one",
+ "rc": 0,
+ "start": "2013-12-19 12:00:05.184043",
+ "stderr": "",
+ "stdout": "one"
+ },
+ {
+ "changed": true,
+ "cmd": "echo \"two\" ",
+ "delta": "0:00:00.002920",
+ "end": "2013-12-19 12:00:05.245502",
+ "invocation": {
+ "module_args": "echo \"two\"",
+ "module_name": "shell"
+ },
+ "item": "two",
+ "rc": 0,
+ "start": "2013-12-19 12:00:05.242582",
+ "stderr": "",
+ "stdout": "two"
+ }
+ ]
+ }
+
+A subsequent loop over the registered variable to inspect the results might look like::
+
+ - name: Fail if return code is not 0
+ ansible.builtin.fail:
+ msg: "The command ({{ item.cmd }}) did not have a 0 return code"
+ when: item.rc != 0
+ loop: "{{ echo.results }}"
+
+During iteration, the result of the current item is placed in the registered variable, so per-item keywords such as ``changed_when`` can use it::
+
+ - name: Place the result of the current item in the variable
+ ansible.builtin.shell: echo "{{ item }}"
+ loop:
+ - one
+ - two
+ register: echo
+ changed_when: echo.stdout != "one"
+
+.. _complex_loops:
+
+Complex loops
+=============
+
+Iterating over nested lists
+---------------------------
+
+You can use Jinja2 expressions to iterate over complex lists. For example, a loop can combine nested lists::
+
+ - name: Give users access to multiple databases
+ community.mysql.mysql_user:
+ name: "{{ item[0] }}"
+ priv: "{{ item[1] }}.*:ALL"
+ append_privs: yes
+ password: "foo"
+ loop: "{{ ['alice', 'bob'] |product(['clientdb', 'employeedb', 'providerdb'])|list }}"
+
+
+.. _do_until_loops:
+
+Retrying a task until a condition is met
+----------------------------------------
+
+.. versionadded:: 1.4
+
+You can use the ``until`` keyword to retry a task until a certain condition is met. Here's an example::
+
+ - name: Retry a task until a certain condition is met
+ ansible.builtin.shell: /usr/bin/foo
+ register: result
+ until: result.stdout.find("all systems go") != -1
+ retries: 5
+ delay: 10
+
+This task runs up to 5 times with a delay of 10 seconds between each attempt. If the result of any attempt has "all systems go" in its stdout, the task succeeds. The default value for ``retries`` is 3 and ``delay`` is 5.
+
+To see the results of individual retries, run the play with ``-vv``.
+
+When you run a task with ``until`` and register the result as a variable, the registered variable will include a key called ``attempts``, which records the number of retries attempted for the task.
+
+.. note:: You must set the ``until`` parameter if you want a task to retry. If ``until`` is not defined, the value for the ``retries`` parameter is forced to 1.
+
+Looping over inventory
+----------------------
+
+To loop over your inventory, or just a subset of it, you can use a regular ``loop`` with the ``ansible_play_batch`` or ``groups`` variables::
+
+ - name: Show all the hosts in the inventory
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ loop: "{{ groups['all'] }}"
+
+ - name: Show all the hosts in the current play
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ loop: "{{ ansible_play_batch }}"
+
+There is also a specific lookup plugin ``inventory_hostnames`` that can be used like this::
+
+ - name: Show all the hosts in the inventory
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ loop: "{{ query('inventory_hostnames', 'all') }}"
+
+ - name: Show all the hosts matching the pattern, ie all but the group www
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ loop: "{{ query('inventory_hostnames', 'all:!www') }}"
+
+More information on the patterns can be found in :ref:`intro_patterns`.
+
+.. _query_vs_lookup:
+
+Ensuring list input for ``loop``: using ``query`` rather than ``lookup``
+========================================================================
+
+The ``loop`` keyword requires a list as input, but the ``lookup`` keyword returns a string of comma-separated values by default. Ansible 2.5 introduced a new Jinja2 function named :ref:`query <query>` that always returns a list, offering a simpler interface and more predictable output from lookup plugins when using the ``loop`` keyword.
+
+You can force ``lookup`` to return a list to ``loop`` by using ``wantlist=True``, or you can use ``query`` instead.
+
+These examples do the same thing::
+
+ loop: "{{ query('inventory_hostnames', 'all') }}"
+
+ loop: "{{ lookup('inventory_hostnames', 'all', wantlist=True) }}"
+
+
+.. _loop_control:
+
+Adding controls to loops
+========================
+.. versionadded:: 2.1
+
+The ``loop_control`` keyword lets you manage your loops in useful ways.
+
+Limiting loop output with ``label``
+-----------------------------------
+.. versionadded:: 2.2
+
+When looping over complex data structures, the console output of your task can be enormous. To limit the displayed output, use the ``label`` directive with ``loop_control``::
+
+ - name: Create servers
+ digital_ocean:
+ name: "{{ item.name }}"
+ state: present
+ loop:
+ - name: server1
+ disks: 3gb
+ ram: 15Gb
+ network:
+ nic01: 100Gb
+ nic02: 10Gb
+ ...
+ loop_control:
+ label: "{{ item.name }}"
+
+The output of this task will display just the ``name`` field for each ``item`` instead of the entire contents of the multi-line ``{{ item }}`` variable.
+
+.. note:: This is for making console output more readable, not protecting sensitive data. If there is sensitive data in ``loop``, set ``no_log: yes`` on the task to prevent disclosure.
+
+Pausing within a loop
+---------------------
+.. versionadded:: 2.2
+
+To control the time (in seconds) between the execution of each item in a task loop, use the ``pause`` directive with ``loop_control``::
+
+ # main.yml
+ - name: Create servers, pause 3s before creating next
+ community.digitalocean.digital_ocean:
+ name: "{{ item }}"
+ state: present
+ loop:
+ - server1
+ - server2
+ loop_control:
+ pause: 3
+
+Tracking progress through a loop with ``index_var``
+---------------------------------------------------
+.. versionadded:: 2.5
+
+To keep track of where you are in a loop, use the ``index_var`` directive with ``loop_control``. This directive specifies a variable name to contain the current loop index::
+
+ - name: Count our fruit
+ ansible.builtin.debug:
+ msg: "{{ item }} with index {{ my_idx }}"
+ loop:
+ - apple
+ - banana
+ - pear
+ loop_control:
+ index_var: my_idx
+
+.. note:: ``index_var`` is 0 indexed.
+
+Defining inner and outer variable names with ``loop_var``
+---------------------------------------------------------
+.. versionadded:: 2.1
+
+You can nest two looping tasks using ``include_tasks``. However, by default Ansible sets the loop variable ``item`` for each loop. This means the inner, nested loop will overwrite the value of ``item`` from the outer loop.
+You can specify the name of the variable for each loop using ``loop_var`` with ``loop_control``::
+
+ # main.yml
+ - include_tasks: inner.yml
+ loop:
+ - 1
+ - 2
+ - 3
+ loop_control:
+ loop_var: outer_item
+
+ # inner.yml
+ - name: Print outer and inner items
+ ansible.builtin.debug:
+ msg: "outer item={{ outer_item }} inner item={{ item }}"
+ loop:
+ - a
+ - b
+ - c
+
+.. note:: If Ansible detects that the current loop is using a variable which has already been defined, it will raise an error to fail the task.
+
+Extended loop variables
+-----------------------
+.. versionadded:: 2.8
+
+As of Ansible 2.8 you can get extended loop information using the ``extended`` option to ``loop_control``. This option exposes the following information.
+
+========================== ===========
+Variable Description
+-------------------------- -----------
+``ansible_loop.allitems`` The list of all items in the loop
+``ansible_loop.index`` The current iteration of the loop. (1 indexed)
+``ansible_loop.index0`` The current iteration of the loop. (0 indexed)
+``ansible_loop.revindex`` The number of iterations from the end of the loop (1 indexed)
+``ansible_loop.revindex0`` The number of iterations from the end of the loop (0 indexed)
+``ansible_loop.first`` ``True`` if first iteration
+``ansible_loop.last`` ``True`` if last iteration
+``ansible_loop.length`` The number of items in the loop
+``ansible_loop.previtem`` The item from the previous iteration of the loop. Undefined during the first iteration.
+``ansible_loop.nextitem`` The item from the following iteration of the loop. Undefined during the last iteration.
+========================== ===========
+
+::
+
+ loop_control:
+ extended: yes
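+
+For example, a task can report its position in the loop (a minimal sketch)::
+
+    - name: Show progress through the loop
+      ansible.builtin.debug:
+        msg: "item {{ ansible_loop.index }} of {{ ansible_loop.length }}: {{ item }}"
+      loop:
+        - apple
+        - banana
+        - pear
+      loop_control:
+        extended: yes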
+
+Accessing the name of your loop_var
+-----------------------------------
+.. versionadded:: 2.8
+
+As of Ansible 2.8 you can get the name of the value provided to ``loop_control.loop_var`` using the ``ansible_loop_var`` variable.
+
+For role authors writing roles that allow loops, instead of dictating the required ``loop_var`` value, you can gather the value with::
+
+ "{{ lookup('vars', ansible_loop_var) }}"
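+
+For example, a task file included in a loop can print the current item without knowing which ``loop_var`` the caller chose (a minimal sketch; ``inner.yml`` and the calling task are hypothetical)::
+
+    # inner.yml, included with any loop_var, for example:
+    #   - include_tasks: inner.yml
+    #     loop: [ a, b, c ]
+    #     loop_control:
+    #       loop_var: my_item
+    - name: Print the current loop item, whatever the caller named its loop variable
+      ansible.builtin.debug:
+        msg: "{{ lookup('vars', ansible_loop_var) }}"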
+
+.. _migrating_to_loop:
+
+Migrating from with_X to loop
+=============================
+
+.. include:: shared_snippets/with2loop.txt
+
+.. seealso::
+
+ :ref:`about_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_reuse_roles`
+ Playbook organization by roles
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ :ref:`playbooks_conditionals`
+ Conditional statements in playbooks
+ :ref:`playbooks_variables`
+ All about variables
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_module_defaults.rst b/docs/docsite/rst/user_guide/playbooks_module_defaults.rst
new file mode 100644
index 00000000..f1260e22
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_module_defaults.rst
@@ -0,0 +1,143 @@
+.. _module_defaults:
+
+Module defaults
+===============
+
+If you frequently call the same module with the same arguments, it can be useful to define default arguments for that particular module using the ``module_defaults`` attribute.
+
+Here is a basic example::
+
+ - hosts: localhost
+ module_defaults:
+ ansible.builtin.file:
+ owner: root
+ group: root
+ mode: 0755
+ tasks:
+ - name: Create file1
+ ansible.builtin.file:
+ state: touch
+ path: /tmp/file1
+
+ - name: Create file2
+ ansible.builtin.file:
+ state: touch
+ path: /tmp/file2
+
+ - name: Create file3
+ ansible.builtin.file:
+ state: touch
+ path: /tmp/file3
+
+The ``module_defaults`` attribute can be used at the play, block, and task level. Any module arguments explicitly specified in a task will override any established default for that module argument::
+
+ - block:
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: "Different message"
+ module_defaults:
+ ansible.builtin.debug:
+ msg: "Default message"
+
+You can remove any previously established defaults for a module by specifying an empty dict::
+
+ - name: Create file1
+ ansible.builtin.file:
+ state: touch
+ path: /tmp/file1
+ module_defaults:
+ file: {}
+
+.. note::
+ Any module defaults set at the play level (and block/task level when using ``include_role`` or ``import_role``) will apply to any roles used, which may cause unexpected behavior in the role.
+
+Here are some more realistic use cases for this feature.
+
+Interacting with an API that requires auth::
+
+ - hosts: localhost
+ module_defaults:
+ ansible.builtin.uri:
+ force_basic_auth: true
+ user: some_user
+ password: some_password
+ tasks:
+ - name: Interact with a web service
+ ansible.builtin.uri:
+ url: http://some.api.host/v1/whatever1
+
+ - name: Interact with a web service
+ ansible.builtin.uri:
+ url: http://some.api.host/v1/whatever2
+
+ - name: Interact with a web service
+ ansible.builtin.uri:
+ url: http://some.api.host/v1/whatever3
+
+Setting a default AWS region for specific EC2-related modules::
+
+ - hosts: localhost
+ vars:
+ my_region: us-west-2
+ module_defaults:
+ amazon.aws.ec2:
+ region: '{{ my_region }}'
+ community.aws.ec2_instance_info:
+ region: '{{ my_region }}'
+ amazon.aws.ec2_vpc_net_info:
+ region: '{{ my_region }}'
+
+.. _module_defaults_groups:
+
+Module defaults groups
+----------------------
+
+.. versionadded:: 2.7
+
+Ansible 2.7 adds a preview-status feature to group together modules that share common sets of parameters. This makes it easier to author playbooks making heavy use of API-based modules such as cloud modules.
+
++---------+---------------------------+-----------------+
+| Group | Purpose | Ansible Version |
++=========+===========================+=================+
+| aws | Amazon Web Services | 2.7 |
++---------+---------------------------+-----------------+
+| azure | Azure | 2.7 |
++---------+---------------------------+-----------------+
+| gcp | Google Cloud Platform | 2.7 |
++---------+---------------------------+-----------------+
+| k8s | Kubernetes | 2.8 |
++---------+---------------------------+-----------------+
+| os | OpenStack | 2.8 |
++---------+---------------------------+-----------------+
+| acme | ACME | 2.10 |
++---------+---------------------------+-----------------+
+| docker* | Docker | 2.10 |
++---------+---------------------------+-----------------+
+| ovirt | oVirt | 2.10 |
++---------+---------------------------+-----------------+
+| vmware | VMware | 2.10 |
++---------+---------------------------+-----------------+
+
+* The :ref:`docker_stack <docker_stack_module>` module is not included in the ``docker`` defaults group.
+
+Use the groups with ``module_defaults`` by prefixing the group name with ``group/`` - for example ``group/aws``.
+
+In a playbook, you can set module defaults for whole groups of modules, such as setting a common AWS region.
+
+.. code-block:: yaml
+
+ # example_play.yml
+ - hosts: localhost
+ module_defaults:
+ group/aws:
+ region: us-west-2
+ tasks:
+ - name: Get info
+ aws_s3_bucket_info:
+
+ # now the region is shared between both info modules
+
+ - name: Get info
+ ec2_ami_info:
+ filters:
+ name: 'RHEL*7.5*'
diff --git a/docs/docsite/rst/user_guide/playbooks_prompts.rst b/docs/docsite/rst/user_guide/playbooks_prompts.rst
new file mode 100644
index 00000000..856f7037
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_prompts.rst
@@ -0,0 +1,116 @@
+.. _playbooks_prompts:
+
+**************************
+Interactive input: prompts
+**************************
+
+If you want your playbook to prompt the user for certain input, add a 'vars_prompt' section. Prompting the user for variables lets you avoid recording sensitive data like passwords. In addition to security, prompts support flexibility. For example, if you use one playbook across multiple software releases, you could prompt for the particular release version.
+
+.. contents::
+ :local:
+
+Here is a basic example::
+
+ ---
+ - hosts: all
+ vars_prompt:
+
+ - name: username
+ prompt: What is your username?
+ private: no
+
+ - name: password
+ prompt: What is your password?
+
+ tasks:
+
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: 'Logging in as {{ username }}'
+
+The user input is hidden by default but it can be made visible by setting ``private: no``.
+
+.. note::
+ Prompts for individual ``vars_prompt`` variables will be skipped for any variable that is already defined through the command line ``--extra-vars`` option, or when running from a non-interactive session (such as cron or Ansible Tower). See :ref:`passing_variables_on_the_command_line`.
+
+If you have a variable that changes infrequently, you can provide a default value that can be overridden::
+
+ vars_prompt:
+
+ - name: release_version
+ prompt: Product release version
+ default: "1.0"
+
+Encrypting values supplied by ``vars_prompt``
+---------------------------------------------
+
+You can encrypt the entered value so you can use it, for instance, with the user module to define a password::
+
+ vars_prompt:
+
+ - name: my_password2
+ prompt: Enter password2
+ private: yes
+ encrypt: sha512_crypt
+ confirm: yes
+ salt_size: 7
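+
+For example, the prompted (and hashed) value can then be passed to the :ref:`user module <user_module>` to set an account password (a minimal sketch; the user name is arbitrary)::
+
+    tasks:
+
+      - name: Set the password for a user from the prompted value
+        ansible.builtin.user:
+          name: testuser
+          password: "{{ my_password2 }}"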
+
+If you have `Passlib <https://passlib.readthedocs.io/en/stable/>`_ installed, you can use any crypt scheme the library supports:
+
+- *des_crypt* - DES Crypt
+- *bsdi_crypt* - BSDi Crypt
+- *bigcrypt* - BigCrypt
+- *crypt16* - Crypt16
+- *md5_crypt* - MD5 Crypt
+- *bcrypt* - BCrypt
+- *sha1_crypt* - SHA-1 Crypt
+- *sun_md5_crypt* - Sun MD5 Crypt
+- *sha256_crypt* - SHA-256 Crypt
+- *sha512_crypt* - SHA-512 Crypt
+- *apr_md5_crypt* - Apache's MD5-Crypt variant
+- *phpass* - PHPass' Portable Hash
+- *pbkdf2_digest* - Generic PBKDF2 Hashes
+- *cta_pbkdf2_sha1* - Cryptacular's PBKDF2 hash
+- *dlitz_pbkdf2_sha1* - Dwayne Litzenberger's PBKDF2 hash
+- *scram* - SCRAM Hash
+- *bsd_nthash* - FreeBSD's MCF-compatible nthash encoding
+
+The only parameters accepted are ``salt`` or ``salt_size``. You can use your own salt by defining
+``salt``, or have one generated automatically using ``salt_size``. By default Ansible generates a salt
+of size 8.
+
+.. versionadded:: 2.7
+
+If you do not have Passlib installed, Ansible uses the `crypt <https://docs.python.org/2/library/crypt.html>`_ library as a fallback. Depending on your platform, at most the following four crypt schemes are supported:
+
+- *bcrypt* - BCrypt
+- *md5_crypt* - MD5 Crypt
+- *sha256_crypt* - SHA-256 Crypt
+- *sha512_crypt* - SHA-512 Crypt
+
+.. versionadded:: 2.8
+.. _unsafe_prompts:
+
+Allowing special characters in ``vars_prompt`` values
+-----------------------------------------------------
+
+Some special characters, such as ``{`` and ``%``, can create templating errors. If you need to accept special characters, use the ``unsafe`` option::
+
+ vars_prompt:
+ - name: my_password_with_weird_chars
+ prompt: Enter password
+ unsafe: yes
+ private: yes
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ :ref:`playbooks_conditionals`
+ Conditional statements in playbooks
+ :ref:`playbooks_variables`
+ All about variables
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_python_version.rst b/docs/docsite/rst/user_guide/playbooks_python_version.rst
new file mode 100644
index 00000000..60821b37
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_python_version.rst
@@ -0,0 +1,64 @@
+.. _pb-py-compat:
+
+********************
+Python3 in templates
+********************
+
+Ansible uses Jinja2 to leverage Python data types and standard functions in templates and variables.
+You can use these data types and standard functions to perform a rich set of operations on your data. However,
+if you use templates, you must be aware of differences between Python versions.
+
+These topics help you design templates that work on both Python2 and Python3. They might also help if you are upgrading from Python2 to Python3. Upgrading within Python2 or Python3 does not usually introduce changes that affect Jinja2 templates.
+
+.. _pb-py-compat-dict-views:
+
+Dictionary views
+================
+
+In Python2, the :meth:`dict.keys`, :meth:`dict.values`, and :meth:`dict.items`
+methods return a list. Jinja2 returns that to Ansible via a string
+representation that Ansible can turn back into a list.
+
+In Python3, those methods return a :ref:`dictionary view <python3:dict-views>` object. The
+string representation that Jinja2 returns for dictionary views cannot be parsed back
+into a list by Ansible. It is, however, easy to make this portable by
+using the :func:`list <jinja2:list>` filter whenever using :meth:`dict.keys`,
+:meth:`dict.values`, or :meth:`dict.items`::
+
+ vars:
+ hosts:
+ testhost1: 127.0.0.2
+ testhost2: 127.0.0.3
+ tasks:
+ - debug:
+ msg: '{{ item }}'
+ # Only works with Python 2
+ #loop: "{{ hosts.keys() }}"
+ # Works with both Python 2 and Python 3
+ loop: "{{ hosts.keys() | list }}"
+
+.. _pb-py-compat-iteritems:
+
+dict.iteritems()
+================
+
+Python2 dictionaries have :meth:`~dict.iterkeys`, :meth:`~dict.itervalues`, and :meth:`~dict.iteritems` methods.
+
+Python3 dictionaries do not have these methods. Use :meth:`dict.keys`, :meth:`dict.values`, and :meth:`dict.items` to make your playbooks and templates compatible with both Python2 and Python3::
+
+ vars:
+ hosts:
+ testhost1: 127.0.0.2
+ testhost2: 127.0.0.3
+ tasks:
+ - debug:
+ msg: '{{ item }}'
+ # Only works with Python 2
+ #loop: "{{ hosts.iteritems() }}"
+ # Works with both Python 2 and Python 3
+ loop: "{{ hosts.items() | list }}"
+
+.. seealso::
+ * The :ref:`pb-py-compat-dict-views` entry for information on
+ why the :func:`list filter <jinja2:list>` is necessary
+ here.
diff --git a/docs/docsite/rst/user_guide/playbooks_reuse.rst b/docs/docsite/rst/user_guide/playbooks_reuse.rst
new file mode 100644
index 00000000..3e80f5c2
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_reuse.rst
@@ -0,0 +1,201 @@
+.. _playbooks_reuse:
+
+**************************
+Re-using Ansible artifacts
+**************************
+
+You can write a simple playbook in one very large file, and most users learn the one-file approach first. However, breaking tasks up into different files is an excellent way to organize complex sets of tasks and reuse them. Smaller, more distributed artifacts let you re-use the same variables, tasks, and plays in multiple playbooks to address different use cases. You can use distributed artifacts across multiple parent playbooks or even multiple times within one playbook. For example, you might want to update your customer database as part of several different playbooks. If you put all the tasks related to updating your database in a tasks file, you can re-use them in many playbooks while only maintaining them in one place.
+
+.. contents::
+ :local:
+
+Creating re-usable files and roles
+==================================
+
+Ansible offers four distributed, re-usable artifacts: variables files, task files, playbooks, and roles.
+
+ - A variables file contains only variables.
+ - A task file contains only tasks.
+ - A playbook contains at least one play, and may contain variables, tasks, and other content. You can re-use tightly focused playbooks, but you can only re-use them statically, not dynamically.
+ - A role contains a set of related tasks, variables, defaults, handlers, and even modules or other plugins in a defined file-tree. Unlike variables files, task files, or playbooks, roles can be easily uploaded and shared via Ansible Galaxy. See :ref:`playbooks_reuse_roles` for details about creating and using roles.
+
+.. versionadded:: 2.4
+
+Re-using playbooks
+==================
+
+You can incorporate multiple playbooks into a master playbook. However, you can only use imports to re-use playbooks. For example:
+
+.. code-block:: yaml
+
+ - import_playbook: webservers.yml
+ - import_playbook: databases.yml
+
+Importing incorporates playbooks in other playbooks statically. Ansible runs the plays and tasks in each imported playbook in the order they are listed, just as if they had been defined directly in the master playbook.
+
+Re-using files and roles
+========================
+
+Ansible offers two ways to re-use files and roles in a playbook: dynamic and static.
+
+ - For dynamic re-use, add an ``include_*`` task in the tasks section of a play:
+
+ - :ref:`include_role <include_role_module>`
+ - :ref:`include_tasks <include_tasks_module>`
+ - :ref:`include_vars <include_vars_module>`
+
+ - For static re-use, add an ``import_*`` task in the tasks section of a play:
+
+ - :ref:`import_role <import_role_module>`
+ - :ref:`import_tasks <import_tasks_module>`
+
+Task include and import statements can be used at arbitrary depth.
+
+You can still use the bare :ref:`roles <roles_keyword>` keyword at the play level to incorporate a role in a playbook statically. However, the bare :ref:`include <include_module>` keyword, once used for both task files and playbook-level includes, is now deprecated.
+
+Includes: dynamic re-use
+------------------------
+
+Including roles, tasks, or variables adds them to a playbook dynamically. Ansible processes included files and roles as they come up in a playbook, so included tasks can be affected by the results of earlier tasks within the top-level playbook. Included roles and tasks are similar to handlers - they may or may not run, depending on the results of other tasks in the top-level playbook.
+
+The primary advantage of using ``include_*`` statements is looping. When a loop is used with an include, the included tasks or role will be executed once for each item in the loop.
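+
+For example, a task file can be included once per item (a minimal sketch; ``add_user.yml`` is a hypothetical task file that uses the ``item`` variable):
+
+.. code-block:: yaml
+
+    tasks:
+      - name: Run the same task file once per user
+        include_tasks: add_user.yml
+        loop:
+          - alice
+          - bob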
+
+You can pass variables into includes. See :ref:`ansible_variable_precedence` for more details on variable inheritance and precedence.
+
+Imports: static re-use
+----------------------
+
+Importing roles, tasks, or playbooks adds them to a playbook statically. Ansible pre-processes imported files and roles before it runs any tasks in a playbook, so imported content is never affected by other tasks within the top-level playbook.
+
+You can pass variables to imports. You must pass variables if you want to run an imported file more than once in a playbook. For example:
+
+.. code-block:: yaml
+
+ tasks:
+ - import_tasks: wordpress.yml
+ vars:
+ wp_user: timmy
+
+ - import_tasks: wordpress.yml
+ vars:
+ wp_user: alice
+
+ - import_tasks: wordpress.yml
+ vars:
+ wp_user: bob
+
+See :ref:`ansible_variable_precedence` for more details on variable inheritance and precedence.
+
+.. _dynamic_vs_static:
+
+Comparing includes and imports: dynamic and static re-use
+------------------------------------------------------------
+
+Each approach to re-using distributed Ansible artifacts has advantages and limitations. You may choose dynamic re-use for some playbooks and static re-use for others. Although you can use both dynamic and static re-use in a single playbook, it is best to select one approach per playbook. Mixing static and dynamic re-use can introduce difficult-to-diagnose bugs into your playbooks. This table summarizes the main differences so you can choose the best approach for each playbook you create.
+
+.. table::
+ :class: documentation-table
+
+ ========================= ======================================== ========================================
+ .. Include_* Import_*
+ ========================= ======================================== ========================================
+ Type of re-use Dynamic Static
+
+ When processed At runtime, when encountered Pre-processed during playbook parsing
+
+ Task or play All includes are tasks ``import_playbook`` cannot be a task
+
+ Task options Apply only to include task itself Apply to all child tasks in import
+
+ Calling from loops Executed once for each loop item Cannot be used in a loop
+
+ Using ``--list-tags`` Tags within includes not listed All tags appear with ``--list-tags``
+
+ Using ``--list-tasks`` Tasks within includes not listed All tasks appear with ``--list-tasks``
+
+ Notifying handlers Cannot trigger handlers within includes Can trigger individual imported handlers
+
+ Using ``--start-at-task`` Cannot start at tasks within includes Can start at imported tasks
+
+ Using inventory variables Can ``include_*: {{ inventory_var }}`` Cannot ``import_*: {{ inventory_var }}``
+
+ With playbooks No ``include_playbook`` Can import full playbooks
+
+ With variables files Can include variables files Use ``vars_files:`` to import variables
+
+ ========================= ======================================== ========================================
+
+Re-using tasks as handlers
+==========================
+
+You can also use includes and imports in the :ref:`handlers` section of a playbook. For instance, if you want to define how to restart Apache, you only have to do that once for all of your playbooks. You might make a ``restarts.yml`` file that looks like:
+
+.. code-block:: yaml
+
+ # restarts.yml
+ - name: Restart apache
+ ansible.builtin.service:
+ name: apache
+ state: restarted
+
+ - name: Restart mysql
+ ansible.builtin.service:
+ name: mysql
+ state: restarted
+
+You can trigger handlers from either an import or an include, but the procedure is different for each method of re-use. If you include the file, you must notify the include itself, which triggers all the tasks in ``restarts.yml``. If you import the file, you must notify the individual task(s) within ``restarts.yml``. You can mix direct tasks and handlers with included or imported tasks and handlers.
+
+Triggering included (dynamic) handlers
+--------------------------------------
+
+Includes are executed at run-time, so the name of the include exists during play execution, but the included tasks do not exist until the include itself is triggered. To use the ``Restart apache`` task with dynamic re-use, refer to the name of the include itself. This approach triggers all tasks in the included file as handlers. For example, with the task file shown above:
+
+.. code-block:: yaml
+
+    - name: Trigger an included (dynamic) handler
+      hosts: localhost
+      handlers:
+        - name: Restart services
+          include_tasks: restarts.yml
+      tasks:
+        - command: "true"
+          notify: Restart services
+
+Triggering imported (static) handlers
+-------------------------------------
+
+Imports are processed before the play begins, so the name of the import no longer exists during play execution, but the names of the individual imported tasks do exist. To use the ``Restart apache`` task with static re-use, refer to the name of each task or tasks within the imported file. For example, with the task file shown above:
+
+.. code-block:: yaml
+
+    - name: Trigger an imported (static) handler
+      hosts: localhost
+      handlers:
+        - name: Restart services
+          import_tasks: restarts.yml
+      tasks:
+        - command: "true"
+          notify: Restart apache
+        - command: "true"
+          notify: Restart mysql
+
+.. seealso::
+
+ :ref:`utilities_modules`
+ Documentation of the ``include*`` and ``import*`` modules discussed here.
+ :ref:`working_with_playbooks`
+ Review the basic Playbook language features
+ :ref:`playbooks_variables`
+ All about variables in playbooks
+ :ref:`playbooks_conditionals`
+ Conditionals in playbooks
+ :ref:`playbooks_loops`
+ Loops in playbooks
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ :ref:`ansible_galaxy`
+ How to share roles on galaxy, role management
+ `GitHub Ansible examples <https://github.com/ansible/ansible-examples>`_
+ Complete playbook files from the GitHub project source
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
diff --git a/docs/docsite/rst/user_guide/playbooks_reuse_includes.rst b/docs/docsite/rst/user_guide/playbooks_reuse_includes.rst
new file mode 100644
index 00000000..ecce954a
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_reuse_includes.rst
@@ -0,0 +1,32 @@
+:orphan:
+
+.. _playbooks_reuse_includes:
+
+Including and importing
+=======================
+
+The content on this page has been moved to :ref:`playbooks_reuse`.
+
+
+.. seealso::
+
+ :ref:`yaml_syntax`
+ Learn about YAML syntax
+ :ref:`working_with_playbooks`
+ Review the basic Playbook language features
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ :ref:`playbooks_variables`
+ All about variables in playbooks
+ :ref:`playbooks_conditionals`
+ Conditionals in playbooks
+ :ref:`playbooks_loops`
+ Loops in playbooks
+ :ref:`list_of_collections`
+ Browse existing collections, modules, and plugins
+ :ref:`developing_modules`
+ Learn how to extend Ansible by writing your own modules
+ `GitHub Ansible examples <https://github.com/ansible/ansible-examples>`_
+ Complete playbook files from the GitHub project source
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
diff --git a/docs/docsite/rst/user_guide/playbooks_reuse_roles.rst b/docs/docsite/rst/user_guide/playbooks_reuse_roles.rst
new file mode 100644
index 00000000..56093d3d
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_reuse_roles.rst
@@ -0,0 +1,490 @@
+.. _playbooks_reuse_roles:
+
+*****
+Roles
+*****
+
+Roles let you automatically load related vars_files, tasks, handlers, and other Ansible artifacts based on a known file structure. Once you group your content in roles, you can easily reuse them and share them with other users.
+
+.. contents::
+ :local:
+
+Role directory structure
+========================
+
+An Ansible role has a defined directory structure with eight main standard directories. You must include at least one of these directories in each role. You can omit any directories the role does not use. For example:
+
+.. code-block:: text
+
+ # playbooks
+ site.yml
+ webservers.yml
+ fooservers.yml
+ roles/
+ common/
+ tasks/
+ handlers/
+ library/
+ files/
+ templates/
+ vars/
+ defaults/
+ meta/
+ webservers/
+ tasks/
+ defaults/
+ meta/
+
+By default Ansible will look in each directory within a role for a ``main.yml`` file for relevant content (also ``main.yaml`` and ``main``):
+
+- ``tasks/main.yml`` - the main list of tasks that the role executes.
+- ``handlers/main.yml`` - handlers, which may be used within or outside this role.
+- ``library/my_module.py`` - modules, which may be used within this role (see :ref:`embedding_modules_and_plugins_in_roles` for more information).
+- ``defaults/main.yml`` - default variables for the role (see :ref:`playbooks_variables` for more information). These variables have the lowest priority of any variables available, and can be easily overridden by any other variable, including inventory variables.
+- ``vars/main.yml`` - other variables for the role (see :ref:`playbooks_variables` for more information).
+- ``files/main.yml`` - files that the role deploys.
+- ``templates/main.yml`` - templates that the role deploys.
+- ``meta/main.yml`` - metadata for the role, including role dependencies.
+
+You can add other YAML files in some directories. For example, you can place platform-specific tasks in separate files and refer to them in the ``tasks/main.yml`` file:
+
+.. code-block:: yaml
+
+ # roles/example/tasks/main.yml
+ - name: Install the correct web server for RHEL
+ import_tasks: redhat.yml
+ when: ansible_facts['os_family']|lower == 'redhat'
+
+ - name: Install the correct web server for Debian
+ import_tasks: debian.yml
+ when: ansible_facts['os_family']|lower == 'debian'
+
+ # roles/example/tasks/redhat.yml
+ - name: Install web server
+ ansible.builtin.yum:
+ name: "httpd"
+ state: present
+
+ # roles/example/tasks/debian.yml
+ - name: Install web server
+ ansible.builtin.apt:
+ name: "apache2"
+ state: present
+
+Roles may also include modules and other plugin types in a directory called ``library``. For more information, please refer to :ref:`embedding_modules_and_plugins_in_roles` below.
+
+.. _role_search_path:
+
+Storing and finding roles
+=========================
+
+By default, Ansible looks for roles in two locations:
+
+- in a directory called ``roles/``, relative to the playbook file
+- in ``/etc/ansible/roles``
+
+If you store your roles in a different location, set the :ref:`roles_path <DEFAULT_ROLES_PATH>` configuration option so Ansible can find your roles. Checking shared roles into a single location makes them easier to use in multiple playbooks. See :ref:`intro_configuration` for details about managing settings in ansible.cfg.
+
+Alternatively, you can call a role with a fully qualified path:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ roles:
+ - role: '/path/to/my/roles/common'
+
+Using roles
+===========
+
+You can use roles in three ways:
+
+- at the play level with the ``roles`` option: This is the classic way of using roles in a play.
+- at the tasks level with ``include_role``: You can reuse roles dynamically anywhere in the ``tasks`` section of a play using ``include_role``.
+- at the tasks level with ``import_role``: You can reuse roles statically anywhere in the ``tasks`` section of a play using ``import_role``.
+
+.. _roles_keyword:
+
+Using roles at the play level
+-----------------------------
+
+The classic (original) way to use roles is with the ``roles`` option for a given play:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ roles:
+ - common
+ - webservers
+
+When you use the ``roles`` option at the play level, for each role 'x':
+
+- If roles/x/tasks/main.yml exists, Ansible adds the tasks in that file to the play.
+- If roles/x/handlers/main.yml exists, Ansible adds the handlers in that file to the play.
+- If roles/x/vars/main.yml exists, Ansible adds the variables in that file to the play.
+- If roles/x/defaults/main.yml exists, Ansible adds the variables in that file to the play.
+- If roles/x/meta/main.yml exists, Ansible adds any role dependencies in that file to the list of roles.
+- Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely.
+
+When you use the ``roles`` option at the play level, Ansible treats the roles as static imports and processes them during playbook parsing. Ansible executes your playbook in this order:
+
+- Any ``pre_tasks`` defined in the play.
+- Any handlers triggered by pre_tasks.
+- Each role listed in ``roles:``, in the order listed. Any role dependencies defined in the role's ``meta/main.yml`` run first, subject to tag filtering and conditionals. See :ref:`role_dependencies` for more details.
+- Any ``tasks`` defined in the play.
+- Any handlers triggered by the roles or tasks.
+- Any ``post_tasks`` defined in the play.
+- Any handlers triggered by post_tasks.
+
+.. note::
+ If using tags with tasks in a role, be sure to also tag your pre_tasks, post_tasks, and role dependencies and pass those along as well, especially if the pre/post tasks and role dependencies are used for monitoring outage window control or load balancing. See :ref:`tags` for details on adding and using tags.
+
+You can pass other keywords to the ``roles`` option:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ roles:
+ - common
+ - role: foo_app_instance
+ vars:
+ dir: '/opt/a'
+ app_port: 5000
+ tags: typeA
+ - role: foo_app_instance
+ vars:
+ dir: '/opt/b'
+ app_port: 5001
+ tags: typeB
+
+When you add a tag to the ``role`` option, Ansible applies the tag to ALL tasks within the role.
+
+When using ``vars:`` within the ``roles:`` section of a playbook, the variables are added to the play variables, making them available to all tasks within the play before and after the role. This behavior can be changed by :ref:`DEFAULT_PRIVATE_ROLE_VARS`.
+
+Including roles: dynamic reuse
+------------------------------
+
+You can reuse roles dynamically anywhere in the ``tasks`` section of a play using ``include_role``. While roles added in a ``roles`` section run before any other tasks in a playbook, included roles run in the order they are defined. If there are other tasks before an ``include_role`` task, the other tasks will run first.
+
+To include a role:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ tasks:
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: "this task runs before the example role"
+
+ - name: Include the example role
+ include_role:
+ name: example
+
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: "this task runs after the example role"
+
+You can pass other keywords, including variables and tags, when including roles:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ tasks:
+ - name: Include the foo_app_instance role
+ include_role:
+ name: foo_app_instance
+ vars:
+ dir: '/opt/a'
+ app_port: 5000
+ tags: typeA
+ ...
+
+When you add a :ref:`tag <tags>` to an ``include_role`` task, Ansible applies the tag *only* to the include itself. This means you can pass ``--tags`` to run only selected tasks from the role, if those tasks themselves have the same tag as the include statement. See :ref:`selective_reuse` for details.
+
+You can conditionally include a role:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ tasks:
+ - name: Include the some_role role
+ include_role:
+ name: some_role
+ when: "ansible_facts['os_family'] == 'RedHat'"
+
+Importing roles: static reuse
+-----------------------------
+
+You can reuse roles statically anywhere in the ``tasks`` section of a play using ``import_role``. The behavior is the same as using the ``roles`` keyword. For example:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ tasks:
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: "before we run our role"
+
+ - name: Import the example role
+ import_role:
+ name: example
+
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: "after we ran our role"
+
+You can pass other keywords, including variables and tags, when importing roles:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ tasks:
+ - name: Import the foo_app_instance role
+ import_role:
+ name: foo_app_instance
+ vars:
+ dir: '/opt/a'
+ app_port: 5000
+ ...
+
+When you add a tag to an ``import_role`` statement, Ansible applies the tag to *all* tasks within the role. See :ref:`tag_inheritance` for details.
+
+.. _run_role_twice:
+
+Running a role multiple times in one playbook
+=============================================
+
+Ansible only executes each role once, even if you define it multiple times, unless the parameters defined on the role are different for each definition. For example, Ansible only runs the role ``foo`` once in a play like this:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ roles:
+ - foo
+ - bar
+ - foo
+
+You have two options to force Ansible to run a role more than once.
+
+Passing different parameters
+----------------------------
+
+You can pass different parameters in each role definition as follows:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ roles:
+ - { role: foo, vars: { message: "first" } }
+ - { role: foo, vars: { message: "second" } }
+
+or
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ roles:
+ - role: foo
+ vars:
+ message: "first"
+ - role: foo
+ vars:
+ message: "second"
+
+In this example, because each role definition has different parameters, Ansible runs ``foo`` twice.
+
+Using ``allow_duplicates: true``
+--------------------------------
+
+Add ``allow_duplicates: true`` to the ``meta/main.yml`` file for the role:
+
+.. code-block:: yaml
+
+ # playbook.yml
+ ---
+ - hosts: webservers
+ roles:
+ - foo
+ - foo
+
+ # roles/foo/meta/main.yml
+ ---
+ allow_duplicates: true
+
+In this example, Ansible runs ``foo`` twice because we have explicitly enabled it to do so.
+
+.. _role_dependencies:
+
+Using role dependencies
+=======================
+
+Role dependencies let you automatically pull in other roles when using a role. Ansible does not execute role dependencies when you include or import a role. You must use the ``roles`` keyword if you want Ansible to execute role dependencies.
+
+Role dependencies are stored in the ``meta/main.yml`` file within the role directory. This file should contain a list of roles and parameters to insert before the specified role. For example:
+
+.. code-block:: yaml
+
+ # roles/myapp/meta/main.yml
+ ---
+ dependencies:
+ - role: common
+ vars:
+ some_parameter: 3
+ - role: apache
+ vars:
+ apache_port: 80
+ - role: postgres
+ vars:
+ dbname: blarg
+ other_parameter: 12
+
+Ansible always executes role dependencies before the role that includes them. Ansible executes recursive role dependencies as well. If one role depends on a second role, and the second role depends on a third role, Ansible executes the third role, then the second role, then the first role.
+
+Running role dependencies multiple times in one playbook
+--------------------------------------------------------
+
+Ansible treats duplicate role dependencies like duplicate roles listed under ``roles:``: Ansible only executes role dependencies once, even if defined multiple times, unless the parameters, tags, or when clause defined on the role are different for each definition. If two roles in a playbook both list a third role as a dependency, Ansible only runs that role dependency once, unless you pass different parameters, tags, when clause, or use ``allow_duplicates: true`` in the dependent (third) role. See :ref:`Galaxy role dependencies <galaxy_dependencies>` for more details.
+
+For example, a role named ``car`` depends on a role named ``wheel`` as follows:
+
+.. code-block:: yaml
+
+ ---
+ dependencies:
+ - role: wheel
+ vars:
+ n: 1
+ - role: wheel
+ vars:
+ n: 2
+ - role: wheel
+ vars:
+ n: 3
+ - role: wheel
+ vars:
+ n: 4
+
+And the ``wheel`` role depends on two roles: ``tire`` and ``brake``. The ``meta/main.yml`` for wheel would then contain the following:
+
+.. code-block:: yaml
+
+ ---
+ dependencies:
+ - role: tire
+ - role: brake
+
+And the ``meta/main.yml`` for ``tire`` and ``brake`` would contain the following:
+
+.. code-block:: yaml
+
+ ---
+ allow_duplicates: true
+
+The resulting order of execution would be as follows:
+
+.. code-block:: text
+
+ tire(n=1)
+ brake(n=1)
+ wheel(n=1)
+ tire(n=2)
+ brake(n=2)
+ wheel(n=2)
+ ...
+ car
+
+To use ``allow_duplicates: true`` with role dependencies, you must specify it for the dependent role, not for the parent role. In the example above, ``allow_duplicates: true`` appears in the ``meta/main.yml`` of the ``tire`` and ``brake`` roles. The ``wheel`` role does not require ``allow_duplicates: true``, because each instance defined by ``car`` uses different parameter values.
+
+.. note::
+ See :ref:`playbooks_variables` for details on how Ansible chooses among variable values defined in different places (variable inheritance and scope).
+
+.. _embedding_modules_and_plugins_in_roles:
+
+Embedding modules and plugins in roles
+======================================
+
+If you write a custom module (see :ref:`developing_modules`) or a plugin (see :ref:`developing_plugins`), you might wish to distribute it as part of a role. For example, if you write a module that helps configure your company's internal software, and you want other people in your organization to use this module, but you do not want to tell everyone how to configure their Ansible library path, you can include the module in your internal_config role.
+
+To add a module or a plugin to a role, create a directory named ``library`` alongside the ``tasks`` and ``handlers`` directories of the role, then place the module file directly inside that ``library`` directory.
+
+Assuming you had this:
+
+.. code-block:: text
+
+ roles/
+ my_custom_modules/
+ library/
+ module1
+ module2
+
+The module will be usable in the role itself, as well as any roles that are called *after* this role, as follows:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ roles:
+ - my_custom_modules
+ - some_other_role_using_my_custom_modules
+ - yet_another_role_using_my_custom_modules
+
+If necessary, you can also embed a module in a role to modify a module in Ansible's core distribution. For example, you can use the development version of a particular module before it is released in production releases by copying the module and embedding the copy in a role. Use this approach with caution, as API signatures may change in core components, and this workaround is not guaranteed to work.
+
+You can use the same mechanism to embed and distribute plugins in a role, using the same directory layout. For example, for a filter plugin:
+
+.. code-block:: text
+
+ roles/
+ my_custom_filter/
+ filter_plugins
+ filter1
+ filter2
+
+These filters can then be used in a Jinja template in any role called after 'my_custom_filter'.
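+
+For example, assuming the ``filter1`` plugin file defines a filter that is also named ``filter1`` (the filter and variable names here are placeholders for illustration), a task or template in a later role could use it like any other filter:
+
+.. code-block:: yaml
+
+   - name: Use a filter shipped inside the my_custom_filter role
+     ansible.builtin.debug:
+       msg: "{{ some_variable | filter1 }}"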
+
+Sharing roles: Ansible Galaxy
+=============================
+
+`Ansible Galaxy <https://galaxy.ansible.com>`_ is a free site for finding, downloading, rating, and reviewing all kinds of community-developed Ansible roles and can be a great way to get a jumpstart on your automation projects.
+
+The client ``ansible-galaxy`` is included in Ansible. The Galaxy client allows you to download roles from Ansible Galaxy, and also provides an excellent default framework for creating your own roles.
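+
+For example (the role names are placeholders):
+
+.. code-block:: bash
+
+   # create the default skeleton for a new role
+   ansible-galaxy init my_new_role
+
+   # download a role shared on Ansible Galaxy
+   ansible-galaxy install some_namespace.some_role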
+
+Read the `Ansible Galaxy documentation <https://galaxy.ansible.com/docs/>`_ page for more information.
+
+.. seealso::
+
+ :ref:`ansible_galaxy`
+ How to create new roles, share roles on Galaxy, role management
+ :ref:`yaml_syntax`
+ Learn about YAML syntax
+ :ref:`working_with_playbooks`
+ Review the basic Playbook language features
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ :ref:`playbooks_variables`
+ Variables in playbooks
+ :ref:`playbooks_conditionals`
+ Conditionals in playbooks
+ :ref:`playbooks_loops`
+ Loops in playbooks
+ :ref:`tags`
+ Using tags to select or skip roles/tasks in long playbooks
+ :ref:`list_of_collections`
+ Browse existing collections, modules, and plugins
+ :ref:`developing_modules`
+ Extending Ansible by writing your own modules
+ `GitHub Ansible examples <https://github.com/ansible/ansible-examples>`_
+ Complete playbook files from the GitHub project source
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
diff --git a/docs/docsite/rst/user_guide/playbooks_roles.rst b/docs/docsite/rst/user_guide/playbooks_roles.rst
new file mode 100644
index 00000000..f79e2308
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_roles.rst
@@ -0,0 +1,19 @@
+:orphan:
+
+Playbook Roles and Include Statements
+=====================================
+
+.. contents:: Topics
+
+
+The documentation regarding roles and includes for playbooks has moved. Its new location is here: :ref:`playbooks_reuse`. Please update any links you may have made directly to this page.
+
+.. seealso::
+
+ :ref:`ansible_galaxy`
+ How to share roles on galaxy, role management
+ :ref:`working_with_playbooks`
+ Review the basic Playbook language features
+ :ref:`playbooks_reuse`
+ Creating reusable Playbooks.
+
diff --git a/docs/docsite/rst/user_guide/playbooks_special_topics.rst b/docs/docsite/rst/user_guide/playbooks_special_topics.rst
new file mode 100644
index 00000000..5df72c11
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_special_topics.rst
@@ -0,0 +1,8 @@
+:orphan:
+
+.. _playbooks_special_topics:
+
+Advanced playbooks features
+===========================
+
+This page is obsolete. Refer to the :ref:`main User Guide index page <user_guide_index>` for links to all playbook-related topics. Please update any links you may have made directly to this page.
diff --git a/docs/docsite/rst/user_guide/playbooks_startnstep.rst b/docs/docsite/rst/user_guide/playbooks_startnstep.rst
new file mode 100644
index 00000000..e3b62961
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_startnstep.rst
@@ -0,0 +1,40 @@
+.. _playbooks_start_and_step:
+
+***************************************
+Executing playbooks for troubleshooting
+***************************************
+
+When you are testing new plays or debugging playbooks, you may need to run the same play multiple times. To make this more efficient, Ansible offers two alternative ways to execute a playbook: start-at-task and step mode.
+
+.. _start_at_task:
+
+start-at-task
+-------------
+
+To start executing your playbook at a particular task (usually the task that failed on the previous run), use the ``--start-at-task`` option::
+
+ ansible-playbook playbook.yml --start-at-task="install packages"
+
+In this example, Ansible starts executing your playbook at a task named "install packages". This feature does not work with tasks inside dynamically re-used roles or tasks (``include_*``); see :ref:`dynamic_vs_static`.
+
+.. _step:
+
+Step mode
+---------
+
+To execute a playbook interactively, use ``--step``::
+
+ ansible-playbook playbook.yml --step
+
+With this option, Ansible stops on each task, and asks if it should execute that task. For example, if you have a task called "configure ssh", the playbook run will stop and ask::
+
+ Perform task: configure ssh (y/n/c):
+
+Answer "y" to execute the task, answer "n" to skip the task, and answer "c" to exit step mode, executing all remaining tasks without asking.
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ :ref:`playbook_debugger`
+ Using the Ansible debugger
diff --git a/docs/docsite/rst/user_guide/playbooks_strategies.rst b/docs/docsite/rst/user_guide/playbooks_strategies.rst
new file mode 100644
index 00000000..a97f0447
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_strategies.rst
@@ -0,0 +1,216 @@
+.. _playbooks_strategies:
+
+Controlling playbook execution: strategies and more
+===================================================
+
+By default, Ansible runs each task on all hosts affected by a play before starting the next task on any host, using 5 forks. If you want to change this default behavior, you can use a different strategy plugin, change the number of forks, or apply one of several keywords like ``serial``.
+
+.. contents::
+ :local:
+
+Selecting a strategy
+--------------------
+The default behavior described above is the :ref:`linear strategy<linear_strategy>`. Ansible offers other strategies, including the :ref:`debug strategy<debug_strategy>` (see also :ref:`playbook_debugger`) and the :ref:`free strategy<free_strategy>`, which allows each host to run until the end of the play as fast as it can::
+
+ - hosts: all
+ strategy: free
+ tasks:
+ ...
+
+You can select a different strategy for each play as shown above, or set your preferred strategy globally in ``ansible.cfg``, under the ``defaults`` stanza::
+
+ [defaults]
+ strategy = free
+
+All strategies are implemented as :ref:`strategy plugins<strategy_plugins>`. Please review the documentation for each strategy plugin for details on how it works.
+
+Setting the number of forks
+---------------------------
+If you have the processing power available and want to use more forks, you can set the number in ``ansible.cfg``::
+
+ [defaults]
+ forks = 30
+
+or pass it on the command line: ``ansible-playbook -f 30 my_playbook.yml``.
+
+Using keywords to control execution
+-----------------------------------
+
+In addition to strategies, several :ref:`keywords<playbook_keywords>` also affect play execution. You can set a number, a percentage, or a list of numbers of hosts you want to manage at a time with ``serial``. Ansible completes the play on the specified number or percentage of hosts before starting the next batch of hosts. You can restrict the number of workers allotted to a block or task with ``throttle``. You can control how Ansible selects the next host in a group to execute against with ``order``. You can run a task on a single host with ``run_once``. These keywords are not strategies. They are directives or options applied to a play, block, or task.
+
+.. _rolling_update_batch_size:
+
+Setting the batch size with ``serial``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+By default, Ansible runs in parallel against all the hosts in the :ref:`pattern <intro_patterns>` you set in the ``hosts:`` field of each play. If you want to manage only a few machines at a time, for example during a rolling update, you can define how many hosts Ansible should manage at a single time using the ``serial`` keyword::
+
+ ---
+ - name: test play
+ hosts: webservers
+ serial: 2
+ gather_facts: False
+
+ tasks:
+ - name: first task
+ command: hostname
+ - name: second task
+ command: hostname
+
+In the above example, if we had 4 hosts in the group 'webservers', Ansible would execute the play completely (both tasks) on 2 of the hosts before moving on to the next 2 hosts::
+
+
+ PLAY [webservers] ****************************************
+
+ TASK [first task] ****************************************
+ changed: [web2]
+ changed: [web1]
+
+ TASK [second task] ***************************************
+ changed: [web1]
+ changed: [web2]
+
+ PLAY [webservers] ****************************************
+
+ TASK [first task] ****************************************
+ changed: [web3]
+ changed: [web4]
+
+ TASK [second task] ***************************************
+ changed: [web3]
+ changed: [web4]
+
+ PLAY RECAP ***********************************************
+ web1 : ok=2 changed=2 unreachable=0 failed=0
+ web2 : ok=2 changed=2 unreachable=0 failed=0
+ web3 : ok=2 changed=2 unreachable=0 failed=0
+ web4 : ok=2 changed=2 unreachable=0 failed=0
+
+
+You can also specify a percentage with the ``serial`` keyword. Ansible applies the percentage to the total number of hosts in a play to determine the number of hosts per pass::
+
+ ---
+ - name: test play
+ hosts: webservers
+ serial: "30%"
+
+If the number of hosts does not divide equally into the number of passes, the final pass contains the remainder. In this example, if you had 20 hosts in the webservers group, the first batch would contain 6 hosts, the second batch would contain 6 hosts, the third batch would contain 6 hosts, and the last batch would contain 2 hosts.
+
+You can also specify batch sizes as a list. For example::
+
+ ---
+ - name: test play
+ hosts: webservers
+ serial:
+ - 1
+ - 5
+ - 10
+
+In the above example, the first batch would contain a single host, the next would contain 5 hosts, and every following batch would contain 10 hosts (or all remaining hosts, if fewer than 10 remain).
+
+You can list multiple batch sizes as percentages::
+
+ ---
+ - name: test play
+ hosts: webservers
+ serial:
+ - "10%"
+ - "20%"
+ - "100%"
+
+You can also mix and match the values::
+
+ ---
+ - name: test play
+ hosts: webservers
+ serial:
+ - 1
+ - 5
+ - "20%"
+
+.. note::
+ No matter how small the percentage, the number of hosts per pass will always be 1 or greater.
+
+Restricting execution with ``throttle``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``throttle`` keyword limits the number of workers for a particular task. It can be set at the block and task level. Use ``throttle`` to restrict tasks that may be CPU-intensive or interact with a rate-limiting API::
+
+ tasks:
+ - command: /path/to/cpu_intensive_command
+ throttle: 1
+
+If you have already restricted the number of forks or the number of machines to execute against in parallel, you can reduce the number of workers with ``throttle``, but you cannot increase it. In other words, to have an effect, your ``throttle`` setting must be lower than your ``forks`` or ``serial`` setting if you are using them together.
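+
+``throttle`` can also be set on a block, limiting all tasks inside it. A minimal sketch (the URLs and task names are illustrative)::
+
+    tasks:
+    - block:
+      - name: Query a rate-limited API for the first object
+        ansible.builtin.uri:
+          url: https://example.com/api/object1
+
+      - name: Query a rate-limited API for the second object
+        ansible.builtin.uri:
+          url: https://example.com/api/object2
+      throttle: 3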
+
+Ordering execution based on inventory
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``order`` keyword controls the order in which hosts are run. Possible values for order are:
+
+inventory:
+ (default) The order provided in the inventory
+reverse_inventory:
+ The reverse of the order provided by the inventory
+sorted:
+    Sorted alphabetically by name
+reverse_sorted:
+ Sorted by name in reverse alphabetical order
+shuffle:
+ Randomly ordered on each run
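+
+For example, to walk through the hosts of a play in alphabetical order (a minimal sketch)::
+
+    - hosts: webservers
+      order: sorted
+      gather_facts: false
+      tasks:
+      - name: Print each host name in sorted order
+        ansible.builtin.debug:
+          msg: "{{ inventory_hostname }}"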
+
+Other keywords that affect play execution include ``ignore_errors``, ``ignore_unreachable``, and ``any_errors_fatal``. These options are documented in :ref:`playbooks_error_handling`.
+
+.. _run_once:
+
+Running on a single machine with ``run_once``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want a task to run only on the first host in your batch of hosts, set ``run_once`` to true on that task::
+
+ ---
+ # ...
+
+ tasks:
+
+ # ...
+
+ - command: /opt/application/upgrade_db.py
+ run_once: true
+
+ # ...
+
+Ansible executes this task on the first host in the current batch and applies all results and facts to all the hosts in the same batch. This approach is similar to applying a conditional to a task such as::
+
+ - command: /opt/application/upgrade_db.py
+ when: inventory_hostname == webservers[0]
+
+However, with ``run_once``, the results are applied to all the hosts. To run the task on a specific host, instead of the first host in the batch, delegate the task::
+
+ - command: /opt/application/upgrade_db.py
+ run_once: true
+ delegate_to: web01.example.org
+
+As always with :ref:`delegation <playbooks_delegation>`, the action will be executed on the delegated host, but the information is still that of the original host in the task.
+
+.. note::
+   When used together with ``serial``, tasks marked as ``run_once`` will be run on one host in *each* serial batch. If the task must run only once regardless of ``serial`` mode, use the
+   :code:`when: inventory_hostname == ansible_play_hosts_all[0]` construct.
+
+.. note::
+   Any conditional (in other words, ``when:``) uses the variables of the 'first host' to decide whether the task runs; no other hosts are tested.
+
+.. note::
+ If you want to avoid the default behavior of setting the fact for all hosts, set ``delegate_facts: True`` for the specific task or block.
+
+.. seealso::
+
+ :ref:`about_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_delegation`
+ Running tasks on or assigning facts to specific machines
+ :ref:`playbooks_reuse_roles`
+ Playbook organization by roles
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_tags.rst b/docs/docsite/rst/user_guide/playbooks_tags.rst
new file mode 100644
index 00000000..93c26636
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_tags.rst
@@ -0,0 +1,428 @@
+.. _tags:
+
+****
+Tags
+****
+
+If you have a large playbook, it may be useful to run only specific parts of it instead of running the entire playbook. You can do this with Ansible tags. Using tags to execute or skip selected tasks is a two-step process:
+
+ #. Add tags to your tasks, either individually or with tag inheritance from a block, play, role, or import.
+ #. Select or skip tags when you run your playbook.
+
+.. contents::
+ :local:
+
+Adding tags with the tags keyword
+=================================
+
+You can add tags to a single task or include. You can also add tags to multiple tasks by defining them at the level of a block, play, role, or import. The keyword ``tags`` addresses all these use cases. The ``tags`` keyword always defines tags and adds them to tasks; it does not select or skip tasks for execution. You can only select or skip tasks based on tags at the command line when you run a playbook. See :ref:`using_tags` for more details.
+
+Adding tags to individual tasks
+-------------------------------
+
+At the simplest level, you can apply one or more tags to an individual task. You can add tags to tasks in playbooks, in task files, or within a role. Here is an example that tags two tasks with different tags:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Install the servers
+ ansible.builtin.yum:
+ name:
+ - httpd
+ - memcached
+ state: present
+ tags:
+ - packages
+ - webservers
+
+ - name: Configure the service
+ ansible.builtin.template:
+ src: templates/src.j2
+ dest: /etc/foo.conf
+ tags:
+ - configuration
+
+You can apply the same tag to more than one individual task. This example tags several tasks with the same tag, "ntp":
+
+.. code-block:: yaml
+
+ ---
+ # file: roles/common/tasks/main.yml
+
+ - name: Install ntp
+ ansible.builtin.yum:
+ name: ntp
+ state: present
+ tags: ntp
+
+ - name: Configure ntp
+ ansible.builtin.template:
+ src: ntp.conf.j2
+ dest: /etc/ntp.conf
+ notify:
+ - restart ntpd
+ tags: ntp
+
+ - name: Enable and run ntpd
+ ansible.builtin.service:
+ name: ntpd
+ state: started
+ enabled: yes
+ tags: ntp
+
+ - name: Install NFS utils
+ ansible.builtin.yum:
+ name:
+ - nfs-utils
+ - nfs-util-lib
+ state: present
+ tags: filesharing
+
+If you ran these four tasks in a playbook with ``--tags ntp``, Ansible would run the three tasks tagged ``ntp`` and skip the one task that does not have that tag.
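+
+For example, if these tasks were part of a playbook called ``example.yml`` (an illustrative name), you could run only the NTP-related tasks with:
+
+.. code-block:: bash
+
+   ansible-playbook example.yml --tags ntp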
+
+.. _tags_on_includes:
+
+Adding tags to includes
+-----------------------
+
+You can apply tags to dynamic includes in a playbook. As with tags on an individual task, tags on an ``include_*`` task apply only to the include itself, not to any tasks within the included file or role. If you add ``mytag`` to a dynamic include, then run that playbook with ``--tags mytag``, Ansible runs the include itself, runs any tasks within the included file or role tagged with ``mytag``, and skips any tasks within the included file or role without that tag. See :ref:`selective_reuse` for more details.
+
+You add tags to includes the same way you add tags to any other task:
+
+.. code-block:: yaml
+
+ ---
+ # file: roles/common/tasks/main.yml
+
+ - name: Dynamic re-use of database tasks
+ include_tasks: db.yml
+ tags: db
+
+You can add a tag only to the dynamic include of a role. In this example, the ``foo`` tag will `not` apply to tasks inside the ``bar`` role:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ tasks:
+ - name: Include the bar role
+ include_role:
+ name: bar
+ tags:
+ - foo
+
+With plays, blocks, the ``role`` keyword, and static imports, Ansible applies tag inheritance, adding the tags you define to every task inside the play, block, role, or imported file. However, tag inheritance does *not* apply to dynamic re-use with ``include_role`` and ``include_tasks``. With dynamic re-use (includes), the tags you define apply only to the include itself. If you need tag inheritance, use a static import. If you cannot use an import because the rest of your playbook uses includes, see :ref:`apply_keyword` for ways to work around this behavior.
+
+.. _tag_inheritance:
+
+Tag inheritance: adding tags to multiple tasks
+----------------------------------------------
+
+If you want to apply the same tag or tags to multiple tasks without adding a ``tags`` line to every task, you can define the tags at the level of your play or block, or when you add a role or import a file. Ansible applies the tags down the dependency chain to all child tasks. With roles and imports, Ansible appends the tags set by the ``roles`` section or import to any tags set on individual tasks or blocks within the role or imported file. This is called tag inheritance. Tag inheritance is convenient, because you do not have to tag every task. However, the tags still apply to the tasks individually.
+
+Adding tags to blocks
+^^^^^^^^^^^^^^^^^^^^^
+
+If you want to apply a tag to many, but not all, of the tasks in your play, use a :ref:`block <playbooks_blocks>` and define the tags at that level. For example, we could edit the NTP example shown above to use a block:
+
+.. code-block:: yaml
+
+    # myrole/tasks/main.yml
+    - block:
+      - name: Install ntp
+        ansible.builtin.yum:
+          name: ntp
+          state: present
+
+      - name: Configure ntp
+        ansible.builtin.template:
+          src: ntp.conf.j2
+          dest: /etc/ntp.conf
+        notify:
+        - restart ntpd
+
+      - name: Enable and run ntpd
+        ansible.builtin.service:
+          name: ntpd
+          state: started
+          enabled: yes
+      tags: ntp
+
+    - name: Install NFS utils
+      ansible.builtin.yum:
+        name:
+        - nfs-utils
+        - nfs-util-lib
+        state: present
+      tags: filesharing
+
+Adding tags to plays
+^^^^^^^^^^^^^^^^^^^^
+
+If all the tasks in a play should get the same tag, you can add the tag at the level of the play. For example, if you had a play with only the NTP tasks, you could tag the entire play:
+
+.. code-block:: yaml
+
+ - hosts: all
+ tags: ntp
+ tasks:
+ - name: Install ntp
+ ansible.builtin.yum:
+ name: ntp
+ state: present
+
+ - name: Configure ntp
+ ansible.builtin.template:
+ src: ntp.conf.j2
+ dest: /etc/ntp.conf
+ notify:
+ - restart ntpd
+
+ - name: Enable and run ntpd
+ ansible.builtin.service:
+ name: ntpd
+ state: started
+ enabled: yes
+
+ - hosts: fileservers
+ tags: filesharing
+ tasks:
+ ...
+
+Adding tags to roles
+^^^^^^^^^^^^^^^^^^^^
+
+There are three ways to add tags to roles:
+
+   #. Add the same tag or tags to all tasks in the role by setting tags under ``roles``. See examples in this section.
+   #. Add the same tag or tags to all tasks in the role by setting tags on a static ``import_role`` in your playbook. See examples in :ref:`tags_on_imports`.
+   #. Add a tag or tags to individual tasks or blocks within the role itself. This is the only approach that allows you to select or skip some tasks within the role. To select or skip tasks within the role, you must have tags set on individual tasks or blocks, use the dynamic ``include_role`` in your playbook, and add the same tag or tags to the include. When you use this approach, and then run your playbook with ``--tags foo``, Ansible runs the include itself plus any tasks in the role that also have the tag ``foo``. See :ref:`tags_on_includes` for details.
+
+When you incorporate a role in your playbook statically with the ``roles`` keyword, Ansible adds any tags you define to all the tasks in the role. For example:
+
+.. code-block:: yaml
+
+ roles:
+ - role: webserver
+ vars:
+ port: 5000
+ tags: [ web, foo ]
+
+or:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ roles:
+ - role: foo
+ tags:
+ - bar
+ - baz
+ # using YAML shorthand, this is equivalent to:
+ # - { role: foo, tags: ["bar", "baz"] }
+
+.. _tags_on_imports:
+
+Adding tags to imports
+^^^^^^^^^^^^^^^^^^^^^^
+
+You can also apply a tag or tags to all the tasks imported by the static ``import_role`` and ``import_tasks`` statements:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: webservers
+ tasks:
+ - name: Import the foo role
+ import_role:
+ name: foo
+ tags:
+ - bar
+ - baz
+
+ - name: Import tasks from foo.yml
+ import_tasks: foo.yml
+ tags: [ web, foo ]
+
+.. _apply_keyword:
+
+Tag inheritance for includes: blocks and the ``apply`` keyword
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+By default, Ansible does not apply :ref:`tag inheritance <tag_inheritance>` to dynamic re-use with ``include_role`` and ``include_tasks``. If you add tags to an include, they apply only to the include itself, not to any tasks in the included file or role. This allows you to execute selected tasks within a role or task file when you run your playbook - see :ref:`selective_reuse`.
+
+If you want tag inheritance, you probably want to use imports. However, using both includes and imports in a single playbook can lead to difficult-to-diagnose bugs. For this reason, if your playbook uses ``include_*`` to re-use roles or tasks, and you need tag inheritance on one include, Ansible offers two workarounds. You can use the ``apply`` keyword:
+
+.. code-block:: yaml
+
+ - name: Apply the db tag to the include and to all tasks in db.yaml
+ include_tasks:
+ file: db.yml
+ # adds 'db' tag to tasks within db.yml
+ apply:
+ tags: db
+ # adds 'db' tag to this 'include_tasks' itself
+ tags: db
+
+Or you can use a block:
+
+.. code-block:: yaml
+
+ - block:
+ - name: Include tasks from db.yml
+ include_tasks: db.yml
+ tags: db
+
+.. _special_tags:
+
+Special tags: always and never
+==============================
+
+Ansible reserves two tag names for special behavior: always and never. If you assign the ``always`` tag to a task or play, Ansible will always run that task or play, unless you specifically skip it (``--skip-tags always``).
+
+For example:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: "Always runs"
+ tags:
+ - always
+
+ - name: Print a message
+ ansible.builtin.debug:
+ msg: "runs when you use tag1"
+ tags:
+ - tag1
+
+.. warning::
+ * Fact gathering is tagged with 'always' by default. It is only skipped if
+ you apply a tag and then use a different tag in ``--tags`` or the same
+ tag in ``--skip-tags``.
+
+.. versionadded:: 2.5
+
+If you assign the ``never`` tag to a task or play, Ansible will skip that task or play unless you specifically request it (``--tags never``).
+
+For example:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Run the rarely-used debug task
+ ansible.builtin.debug:
+ msg: '{{ showmevar }}'
+ tags: [ never, debug ]
+
+The rarely-used debug task in the example above only runs when you specifically request the ``debug`` or ``never`` tags.
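+
+For example, if the task above were part of ``example.yml``, you could trigger it with:
+
+.. code-block:: bash
+
+   ansible-playbook example.yml --tags debug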
+
+.. _using_tags:
+
+Selecting or skipping tags when you run a playbook
+==================================================
+
+Once you have added tags to your tasks, includes, blocks, plays, roles, and imports, you can selectively execute or skip tasks based on their tags when you run :ref:`ansible-playbook`. Ansible runs or skips all tasks with tags that match the tags you pass at the command line. If you have added a tag at the block or play level, with ``roles``, or with an import, that tag applies to every task within the block, play, role, or imported role or file. If you have a role with lots of tags and you want to call subsets of the role at different times, either :ref:`use it with dynamic includes <selective_reuse>`, or split the role into multiple roles.
+
+:ref:`ansible-playbook` offers five tag-related command-line options:
+
+* ``--tags all`` - run all tasks, ignore tags (default behavior)
+* ``--tags [tag1, tag2]`` - run only tasks with the tags ``tag1`` and ``tag2``
+* ``--skip-tags [tag3, tag4]`` - run all tasks except those with the tags ``tag3`` and ``tag4``
+* ``--tags tagged`` - run only tasks with at least one tag
+* ``--tags untagged`` - run only tasks with no tags
+
+For example, to run only tasks and blocks tagged ``configuration`` and ``packages`` in a very long playbook:
+
+.. code-block:: bash
+
+ ansible-playbook example.yml --tags "configuration,packages"
+
+To run all tasks except those tagged ``packages``:
+
+.. code-block:: bash
+
+ ansible-playbook example.yml --skip-tags "packages"
+
+Previewing the results of using tags
+------------------------------------
+
+When you run a role or playbook, you might not know or remember which tasks have which tags, or which tags exist at all. Ansible offers two command-line flags for :ref:`ansible-playbook` that help you manage tagged playbooks:
+
+* ``--list-tags`` - generate a list of available tags
+* ``--list-tasks`` - when used with ``--tags tagname`` or ``--skip-tags tagname``, generate a preview of tagged tasks
+
+For example, if you do not know whether the tag for configuration tasks is ``config`` or ``conf`` in a playbook, role, or tasks file, you can display all available tags without running any tasks:
+
+.. code-block:: bash
+
+ ansible-playbook example.yml --list-tags
+
+If you do not know which tasks have the tags ``configuration`` and ``packages``, you can pass those tags and add ``--list-tasks``. Ansible lists the tasks but does not execute any of them.
+
+.. code-block:: bash
+
+ ansible-playbook example.yml --tags "configuration,packages" --list-tasks
+
+These command-line flags have one limitation: they cannot show tags or tasks within dynamically included files or roles. See :ref:`dynamic_vs_static` for more information on differences between static imports and dynamic includes.
+
+.. _selective_reuse:
+
+Selectively running tagged tasks in re-usable files
+---------------------------------------------------
+
+If you have a role or a tasks file with tags defined at the task or block level, you can selectively run or skip those tagged tasks in a playbook if you use a dynamic include instead of a static import. You must use the same tag on the included tasks and on the include statement itself. For example, you might create a file with some tagged and some untagged tasks:
+
+.. code-block:: yaml
+
+    # mixed.yml
+    - name: Run the task with no tags
+      ansible.builtin.debug:
+        msg: this task has no tags
+
+    - name: Run the tagged task
+      ansible.builtin.debug:
+        msg: this task is tagged with mytag
+      tags: mytag
+
+    - block:
+      - name: Run the first block task with mytag
+        ...
+      - name: Run the second block task with mytag
+        ...
+      tags:
+      - mytag
+
+And you might include the tasks file above in a playbook:
+
+.. code-block:: yaml
+
+    # myplaybook.yml
+    - hosts: all
+      tasks:
+      - name: Run tasks from mixed.yml
+        include_tasks:
+          file: mixed.yml
+        tags: mytag
+
+When you run the playbook with ``ansible-playbook -i hosts myplaybook.yml --tags "mytag"``, Ansible skips the task with no tags, runs the tagged individual task, and runs the two tasks in the block.
+
+Configuring tags globally
+-------------------------
+
+If you run or skip certain tags by default, you can use the :ref:`TAGS_RUN` and :ref:`TAGS_SKIP` options in Ansible configuration to set those defaults.
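+
+For example, to run tasks tagged ``configuration`` by default and always skip tasks tagged ``debug``, you could add the following to ``ansible.cfg`` (a sketch; check the :ref:`TAGS_RUN` and :ref:`TAGS_SKIP` entries for the authoritative section and key names):
+
+.. code-block:: ini
+
+   [tags]
+   run = configuration
+   skip = debug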
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ :ref:`playbooks_reuse_roles`
+ Playbook organization by roles
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_templating.rst b/docs/docsite/rst/user_guide/playbooks_templating.rst
new file mode 100644
index 00000000..162ab813
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_templating.rst
@@ -0,0 +1,55 @@
+.. _playbooks_templating:
+
+*******************
+Templating (Jinja2)
+*******************
+
+Ansible uses Jinja2 templating to enable dynamic expressions and access to variables. Ansible includes a lot of specialized filters and tests for templating. You can use all the standard filters and tests included in Jinja2 as well. Ansible also offers a new plugin type: :ref:`lookup_plugins`.
+
+All templating happens on the Ansible controller **before** the task is sent and executed on the target machine. This approach minimizes the package requirements on the target (jinja2 is only required on the controller). It also limits the amount of data Ansible passes to the target machine. Ansible parses templates on the controller and passes only the information needed for each task to the target machine, instead of passing all the data on the controller and parsing it on the target.
+
+.. contents::
+ :local:
+
+.. toctree::
+ :maxdepth: 2
+
+ playbooks_filters
+ playbooks_tests
+ playbooks_lookups
+ playbooks_python_version
+
+.. _templating_now:
+
+Get the current time
+====================
+
+.. versionadded:: 2.8
+
+The ``now()`` Jinja2 function retrieves a Python datetime object or a string representation for the current time.
+
+The ``now()`` function supports 2 arguments:
+
+utc
+ Specify ``True`` to get the current time in UTC. Defaults to ``False``.
+
+fmt
+ Accepts a `strftime <https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior>`_ string that returns a formatted date time string.
+
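+For example, to record a UTC timestamp when a play starts (the variable name is illustrative):
+
+.. code-block:: yaml
+
+   vars:
+     play_start: "{{ now(utc=true, fmt='%Y-%m-%dT%H:%M:%SZ') }}"
+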
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ :ref:`playbooks_conditionals`
+ Conditional statements in playbooks
+ :ref:`playbooks_loops`
+ Looping in playbooks
+ :ref:`playbooks_reuse_roles`
+ Playbook organization by roles
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_tests.rst b/docs/docsite/rst/user_guide/playbooks_tests.rst
new file mode 100644
index 00000000..0a1aa8d9
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_tests.rst
@@ -0,0 +1,395 @@
+.. _playbooks_tests:
+
+*****
+Tests
+*****
+
+`Tests <http://jinja.pocoo.org/docs/dev/templates/#tests>`_ in Jinja are a way of evaluating template expressions and returning True or False. Jinja ships with many of these. See `builtin tests`_ in the official Jinja template documentation.
+
+The main difference between tests and filters is that Jinja tests are used for comparisons, whereas filters are used for data manipulation; they have different applications in Jinja. Tests can also be used in list processing filters, like ``map()`` and ``select()``, to choose items in the list.
+
+Like all templating, tests always execute on the Ansible controller, **not** on the target of a task, as they test local data.
+
+In addition to those Jinja2 tests, Ansible supplies a few more and users can easily create their own.
+
+.. contents::
+ :local:
+
+.. _test_syntax:
+
+Test syntax
+===========
+
+`Test syntax <http://jinja.pocoo.org/docs/dev/templates/#tests>`_ varies from `filter syntax <http://jinja.pocoo.org/docs/dev/templates/#filters>`_ (``variable | filter``). Historically Ansible has registered tests as both jinja tests and jinja filters, allowing for them to be referenced using filter syntax.
+
+As of Ansible 2.5, using a jinja test as a filter will generate a warning.
+
+The syntax for using a jinja test is as follows::
+
+ variable is test_name
+
+Such as::
+
+ result is failed
+
+.. _testing_strings:
+
+Testing strings
+===============
+
+To match strings against a substring or a regular expression, use the ``match``, ``search`` or ``regex`` tests::
+
+ vars:
+ url: "http://example.com/users/foo/resources/bar"
+
+ tasks:
+ - debug:
+ msg: "matched pattern 1"
+ when: url is match("http://example.com/users/.*/resources/")
+
+ - debug:
+ msg: "matched pattern 2"
+ when: url is search("/users/.*/resources/.*")
+
+ - debug:
+ msg: "matched pattern 3"
+ when: url is search("/users/")
+
+ - debug:
+ msg: "matched pattern 4"
+ when: url is regex("example.com/\w+/foo")
+
+``match`` succeeds if it finds the pattern at the beginning of the string, while ``search`` succeeds if it finds the pattern anywhere within the string. By default, ``regex`` works like ``search``, but ``regex`` can be configured to perform other tests as well, by passing the ``match_type`` keyword argument. In particular, ``match_type`` determines the ``re`` method that gets used to perform the search. The full list can be found in the relevant Python documentation `here <https://docs.python.org/3/library/re.html#regular-expression-objects>`_.
+
+All of the string tests also take optional ``ignorecase`` and ``multiline`` arguments. These correspond to ``re.I`` and ``re.M`` from Python's ``re`` library, respectively.
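+
+For example, to repeat the earlier URL checks without regard to case::
+
+    - debug:
+        msg: "matched pattern 5"
+      when: url is search("/USERS/", ignorecase=True)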
+
+.. _testing_vault:
+
+Vault
+=====
+
+.. versionadded:: 2.10
+
+You can test whether a variable is an inline single vault encrypted value using the ``vault_encrypted`` test.
+
+.. code-block:: yaml
+
+ vars:
+ variable: !vault |
+ $ANSIBLE_VAULT;1.2;AES256;dev
+ 61323931353866666336306139373937316366366138656131323863373866376666353364373761
+ 3539633234313836346435323766306164626134376564330a373530313635343535343133316133
+ 36643666306434616266376434363239346433643238336464643566386135356334303736353136
+ 6565633133366366360a326566323363363936613664616364623437336130623133343530333739
+ 3039
+
+ tasks:
+ - debug:
+ msg: '{{ (variable is vault_encrypted) | ternary("Vault encrypted", "Not vault encrypted") }}'
+
+.. _testing_truthiness:
+
+Testing truthiness
+==================
+
+.. versionadded:: 2.10
+
+As of Ansible 2.10, you can now perform Python-like truthy and falsy checks.
+
+.. code-block:: yaml
+
+ - debug:
+ msg: "Truthy"
+ when: value is truthy
+ vars:
+ value: "some string"
+
+ - debug:
+ msg: "Falsy"
+ when: value is falsy
+ vars:
+ value: ""
+
+Additionally, the ``truthy`` and ``falsy`` tests accept an optional parameter called ``convert_bool`` that will attempt
+to convert boolean indicators to actual booleans.
+
+.. code-block:: yaml
+
+ - debug:
+ msg: "Truthy"
+ when: value is truthy(convert_bool=True)
+ vars:
+ value: "yes"
+
+ - debug:
+ msg: "Falsy"
+ when: value is falsy(convert_bool=True)
+ vars:
+ value: "off"
+
+.. _testing_versions:
+
+Comparing versions
+==================
+
+.. versionadded:: 1.6
+
+.. note:: In 2.5 ``version_compare`` was renamed to ``version``
+
+To compare a version number, such as checking if the ``ansible_facts['distribution_version']``
+version is greater than or equal to '12.04', you can use the ``version`` test.
+
+For example::
+
+ {{ ansible_facts['distribution_version'] is version('12.04', '>=') }}
+
+If ``ansible_facts['distribution_version']`` is greater than or equal to 12.04, this test returns True, otherwise False.
+
+The ``version`` test accepts the following operators::
+
+ <, lt, <=, le, >, gt, >=, ge, ==, =, eq, !=, <>, ne
+
+This test also accepts a 3rd parameter, ``strict``, which defines whether strict version parsing as defined by ``distutils.version.StrictVersion`` should be used. The default is ``False`` (using ``distutils.version.LooseVersion``); ``True`` enables strict version parsing::
+
+ {{ sample_version_var is version('1.0', operator='lt', strict=True) }}
+
+When using ``version`` in a playbook or role, don't use ``{{ }}`` as described in the `FAQ <https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#when-should-i-use-also-how-to-interpolate-variables-or-dynamic-variable-names>`_::
+
+ vars:
+ my_version: 1.2.3
+
+ tasks:
+ - debug:
+ msg: "my_version is higher than 1.0.0"
+ when: my_version is version('1.0.0', '>')
+
+.. _math_tests:
+
+Set theory tests
+================
+
+.. versionadded:: 2.1
+
+.. note:: In 2.5 ``issubset`` and ``issuperset`` were renamed to ``subset`` and ``superset``
+
+To see if a list includes or is included by another list, you can use 'subset' and 'superset'::
+
+ vars:
+ a: [1,2,3,4,5]
+ b: [2,3]
+ tasks:
+ - debug:
+ msg: "A includes B"
+ when: a is superset(b)
+
+ - debug:
+ msg: "B is included in A"
+ when: b is subset(a)
+
+.. _contains_test:
+
+Testing if a list contains a value
+==================================
+
+.. versionadded:: 2.8
+
+Ansible includes a ``contains`` test which operates similarly to, but in reverse of, the Jinja2-provided ``in`` test.
+The ``contains`` test is designed to work with the ``select``, ``reject``, ``selectattr``, and ``rejectattr`` filters::
+
+ vars:
+ lacp_groups:
+ - master: lacp0
+ network: 10.65.100.0/24
+ gateway: 10.65.100.1
+ dns4:
+ - 10.65.100.10
+ - 10.65.100.11
+ interfaces:
+ - em1
+ - em2
+
+ - master: lacp1
+ network: 10.65.120.0/24
+ gateway: 10.65.120.1
+ dns4:
+ - 10.65.100.10
+ - 10.65.100.11
+ interfaces:
+ - em3
+ - em4
+
+ tasks:
+ - debug:
+ msg: "{{ (lacp_groups|selectattr('interfaces', 'contains', 'em1')|first).master }}"
+
+.. versionadded:: 2.4
+
+Testing if a list value is True
+===============================
+
+You can use `any` and `all` to check if any or all elements in a list are true or not::
+
+ vars:
+ mylist:
+ - 1
+ - "{{ 3 == 3 }}"
+ - True
+ myotherlist:
+ - False
+ - True
+ tasks:
+
+ - debug:
+ msg: "all are true!"
+ when: mylist is all
+
+ - debug:
+ msg: "at least one is true"
+ when: myotherlist is any
+
+.. _path_tests:
+
+Testing paths
+=============
+
+.. note:: In 2.5 the following tests were renamed to remove the ``is_`` prefix
+
+The following tests can provide information about a path on the controller::
+
+ - debug:
+ msg: "path is a directory"
+ when: mypath is directory
+
+ - debug:
+ msg: "path is a file"
+ when: mypath is file
+
+ - debug:
+ msg: "path is a symlink"
+ when: mypath is link
+
+ - debug:
+ msg: "path already exists"
+ when: mypath is exists
+
+ - debug:
+ msg: "path is {{ (mypath is abs)|ternary('absolute','relative')}}"
+
+ - debug:
+ msg: "path is the same file as path2"
+ when: mypath is same_file(path2)
+
+ - debug:
+ msg: "path is a mount"
+ when: mypath is mount
+
+
+Testing size formats
+====================
+
+The ``human_readable`` and ``human_to_bytes`` filters let you test your
+playbooks to make sure you are using the right size format in your tasks, and that
+you provide byte format to computers and human-readable format to people.
+
+Human readable
+--------------
+
+Asserts whether the given string is human readable or not.
+
+For example::
+
+ - name: "Human Readable"
+ assert:
+ that:
+ - '"1.00 Bytes" == 1|human_readable'
+ - '"1.00 bits" == 1|human_readable(isbits=True)'
+ - '"10.00 KB" == 10240|human_readable'
+ - '"97.66 MB" == 102400000|human_readable'
+ - '"0.10 GB" == 102400000|human_readable(unit="G")'
+ - '"0.10 Gb" == 102400000|human_readable(isbits=True, unit="G")'
+
+This would result in::
+
+ { "changed": false, "msg": "All assertions passed" }
+
+Human to bytes
+--------------
+
+Returns the given string in the Bytes format.
+
+For example::
+
+ - name: "Human to Bytes"
+ assert:
+ that:
+ - "{{'0'|human_to_bytes}} == 0"
+ - "{{'0.1'|human_to_bytes}} == 0"
+ - "{{'0.9'|human_to_bytes}} == 1"
+ - "{{'1'|human_to_bytes}} == 1"
+ - "{{'10.00 KB'|human_to_bytes}} == 10240"
+ - "{{ '11 MB'|human_to_bytes}} == 11534336"
+ - "{{ '1.1 GB'|human_to_bytes}} == 1181116006"
+ - "{{'10.00 Kb'|human_to_bytes(isbits=True)}} == 10240"
+
+This would result in::
+
+ { "changed": false, "msg": "All assertions passed" }
+
+
+.. _test_task_results:
+
+Testing task results
+====================
+
+The following tasks are illustrative of the tests meant to check the status of tasks::
+
+ tasks:
+
+ - shell: /usr/bin/foo
+ register: result
+ ignore_errors: True
+
+ - debug:
+ msg: "it failed"
+ when: result is failed
+
+ # in most cases you'll want a handler, but if you want to do something right now, this is nice
+ - debug:
+ msg: "it changed"
+ when: result is changed
+
+ - debug:
+ msg: "it succeeded in Ansible >= 2.1"
+ when: result is succeeded
+
+ - debug:
+ msg: "it succeeded"
+ when: result is success
+
+ - debug:
+ msg: "it was skipped"
+ when: result is skipped
+
+.. note:: From 2.1, you can also use success, failure, change, and skip so that the grammar matches, for those who need to be strict about it.
+
+
+.. _builtin tests: http://jinja.palletsprojects.com/templates/#builtin-tests
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ :ref:`playbooks_conditionals`
+ Conditional statements in playbooks
+ :ref:`playbooks_variables`
+ All about variables
+ :ref:`playbooks_loops`
+ Looping in playbooks
+ :ref:`playbooks_reuse_roles`
+ Playbook organization by roles
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_variables.rst b/docs/docsite/rst/user_guide/playbooks_variables.rst
new file mode 100644
index 00000000..eb2b58f7
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_variables.rst
@@ -0,0 +1,466 @@
+.. _playbooks_variables:
+
+***************
+Using Variables
+***************
+
+Ansible uses variables to manage differences between systems. With Ansible, you can execute tasks and playbooks on multiple different systems with a single command. To represent the variations among those different systems, you can create variables with standard YAML syntax, including lists and dictionaries. You can define these variables in your playbooks, in your :ref:`inventory <intro_inventory>`, in re-usable :ref:`files <playbooks_reuse>` or :ref:`roles <playbooks_reuse_roles>`, or at the command line. You can also create variables during a playbook run by registering the return value or values of a task as a new variable.
+
+After you create variables, either by defining them in a file, passing them at the command line, or registering the return value or values of a task as a new variable, you can use those variables in module arguments, in :ref:`conditional "when" statements <playbooks_conditionals>`, in :ref:`templates <playbooks_templating>`, and in :ref:`loops <playbooks_loops>`. The `ansible-examples github repository <https://github.com/ansible/ansible-examples>`_ contains many examples of using variables in Ansible.
+
+Once you understand the concepts and examples on this page, read about :ref:`Ansible facts <vars_and_facts>`, which are variables you retrieve from remote systems.
+
+.. contents::
+ :local:
+
+.. _valid_variable_names:
+
+Creating valid variable names
+=============================
+
+Not all strings are valid Ansible variable names. A variable name can only include letters, numbers, and underscores. `Python keywords`_ or :ref:`playbook keywords<playbook_keywords>` are not valid variable names. A variable name cannot begin with a number.
+
+Variable names can begin with an underscore. In many programming languages, variables that begin with an underscore are private. This is not true in Ansible. Variables that begin with an underscore are treated exactly the same as any other variable. Do not rely on this convention for privacy or security.
+
+This table gives examples of valid and invalid variable names:
+
+.. table::
+ :class: documentation-table
+
+ ====================== ====================================================================
+ Valid variable names Not valid
+ ====================== ====================================================================
+ ``foo`` ``*foo``, `Python keywords`_ such as ``async`` and ``lambda``
+
+ ``foo_env`` :ref:`playbook keywords<playbook_keywords>` such as ``environment``
+
+ ``foo_port`` ``foo-port``, ``foo port``, ``foo.port``
+
+ ``foo5``, ``_foo`` ``5foo``, ``12``
+ ====================== ====================================================================
+
+.. _Python keywords: https://docs.python.org/3/reference/lexical_analysis.html#keywords
+
+Simple variables
+================
+
+Simple variables combine a variable name with a single value. You can use this syntax (and the syntax for lists and dictionaries shown below) in a variety of places. For details about setting variables in inventory, in playbooks, in reusable files, in roles, or at the command line, see :ref:`setting_variables`.
+
+Defining simple variables
+-------------------------
+
+You can define a simple variable using standard YAML syntax. For example::
+
+ remote_install_path: /opt/my_app_config
+
+Referencing simple variables
+----------------------------
+
+After you define a variable, use Jinja2 syntax to reference it. Jinja2 variables use double curly braces. For example, the expression ``My amp goes to {{ max_amp_value }}`` demonstrates the most basic form of variable substitution. You can use Jinja2 syntax in playbooks. For example::
+
+ ansible.builtin.template:
+ src: foo.cfg.j2
+ dest: '{{ remote_install_path }}/foo.cfg'
+
+In this example, the variable defines the location of a file, which can vary from one system to another.
+
+.. note::
+
+ Ansible allows Jinja2 loops and conditionals in :ref:`templates <playbooks_templating>` but not in playbooks. You cannot create a loop of tasks. Ansible playbooks are pure machine-parseable YAML.
+
+.. _yaml_gotchas:
+
+When to quote variables (a YAML gotcha)
+=======================================
+
+If you start a value with ``{{ foo }}``, you must quote the whole expression to create valid YAML syntax. If you do not quote the whole expression, the YAML parser cannot interpret the syntax - it might be a variable or it might be the start of a YAML dictionary. For guidance on writing YAML, see the :ref:`yaml_syntax` documentation.
+
+If you use a variable without quotes like this::
+
+ - hosts: app_servers
+ vars:
+ app_path: {{ base_path }}/22
+
+You will see: ``ERROR! Syntax Error while loading YAML.`` If you add quotes, Ansible works correctly::
+
+ - hosts: app_servers
+ vars:
+ app_path: "{{ base_path }}/22"
+
+.. _list_variables:
+
+List variables
+==============
+
+A list variable combines a variable name with multiple values. The multiple values can be stored as an itemized list or in square brackets ``[]``, separated with commas.
+
+Defining variables as lists
+---------------------------
+
+You can define variables with multiple values using YAML lists. For example::
+
+ region:
+ - northeast
+ - southeast
+ - midwest
+
+Referencing list variables
+--------------------------
+
+When you use variables defined as a list (also called an array), you can use individual, specific fields from that list. The first item in a list is item 0, the second item is item 1. For example::
+
+ region: "{{ region[0] }}"
+
+The value of this expression would be "northeast".
+
+.. _dictionary_variables:
+
+Dictionary variables
+====================
+
+A dictionary stores the data in key-value pairs. Usually, dictionaries are used to store related data, such as the information contained in an ID or a user profile.
+
+Defining variables as key:value dictionaries
+--------------------------------------------
+
+You can define more complex variables using YAML dictionaries. A YAML dictionary maps keys to values. For example::
+
+ foo:
+ field1: one
+ field2: two
+
+Referencing key:value dictionary variables
+------------------------------------------
+
+When you use variables defined as a key:value dictionary (also called a hash), you can use individual, specific fields from that dictionary using either bracket notation or dot notation::
+
+ foo['field1']
+ foo.field1
+
+Both of these examples reference the same value ("one"). Bracket notation always works. Dot notation can cause problems because some keys collide with attributes and methods of python dictionaries. Use bracket notation if you use keys which start and end with two underscores (which are reserved for special meanings in python) or are any of the known public attributes:
+
+``add``, ``append``, ``as_integer_ratio``, ``bit_length``, ``capitalize``, ``center``, ``clear``, ``conjugate``, ``copy``, ``count``, ``decode``, ``denominator``, ``difference``, ``difference_update``, ``discard``, ``encode``, ``endswith``, ``expandtabs``, ``extend``, ``find``, ``format``, ``fromhex``, ``fromkeys``, ``get``, ``has_key``, ``hex``, ``imag``, ``index``, ``insert``, ``intersection``, ``intersection_update``, ``isalnum``, ``isalpha``, ``isdecimal``, ``isdigit``, ``isdisjoint``, ``is_integer``, ``islower``, ``isnumeric``, ``isspace``, ``issubset``, ``issuperset``, ``istitle``, ``isupper``, ``items``, ``iteritems``, ``iterkeys``, ``itervalues``, ``join``, ``keys``, ``ljust``, ``lower``, ``lstrip``, ``numerator``, ``partition``, ``pop``, ``popitem``, ``real``, ``remove``, ``replace``, ``reverse``, ``rfind``, ``rindex``, ``rjust``, ``rpartition``, ``rsplit``, ``rstrip``, ``setdefault``, ``sort``, ``split``, ``splitlines``, ``startswith``, ``strip``, ``swapcase``, ``symmetric_difference``, ``symmetric_difference_update``, ``title``, ``translate``, ``union``, ``update``, ``upper``, ``values``, ``viewitems``, ``viewkeys``, ``viewvalues``, ``zfill``.
+
+.. _registered_variables:
+
+Registering variables
+=====================
+
+You can create variables from the output of an Ansible task with the task keyword ``register``. You can use registered variables in any later tasks in your play. For example::
+
+ - hosts: web_servers
+
+ tasks:
+
+ - name: Run a shell command and register its output as a variable
+ ansible.builtin.shell: /usr/bin/foo
+ register: foo_result
+ ignore_errors: true
+
+ - name: Run a shell command using output of the previous task
+ ansible.builtin.shell: /usr/bin/bar
+ when: foo_result.rc == 5
+
+For more examples of using registered variables in conditions on later tasks, see :ref:`playbooks_conditionals`. Registered variables may be simple variables, list variables, dictionary variables, or complex nested data structures. The documentation for each module includes a ``RETURN`` section describing the return values for that module. To see the values for a particular task, run your playbook with ``-v``.
+
+Registered variables are stored in memory. You cannot cache registered variables for use in future plays. Registered variables are only valid on the host for the rest of the current playbook run.
+
+Registered variables are host-level variables. When you register a variable in a task with a loop, the registered variable contains a value for each item in the loop. The data structure placed in the variable during the loop will contain a ``results`` attribute, that is a list of all responses from the module. For a more in-depth example of how this works, see the :ref:`playbooks_loops` section on using register with a loop.
+
+.. note:: If a task fails or is skipped, Ansible still registers a variable with a failure or skipped status, unless the task is skipped based on tags. See :ref:`tags` for information on adding and using tags.
+
+.. _accessing_complex_variable_data:
+
+Referencing nested variables
+============================
+
+Many registered variables (and :ref:`facts <vars_and_facts>`) are nested YAML or JSON data structures. You cannot access values from these nested data structures with the simple ``{{ foo }}`` syntax. You must use either bracket notation or dot notation. For example, to reference an IP address from your facts using the bracket notation::
+
+ {{ ansible_facts["eth0"]["ipv4"]["address"] }}
+
+To reference an IP address from your facts using the dot notation::
+
+ {{ ansible_facts.eth0.ipv4.address }}
+
+.. _about_jinja2:
+.. _jinja2_filters:
+
+Transforming variables with Jinja2 filters
+==========================================
+
+Jinja2 filters let you transform the value of a variable within a template expression. For example, the ``capitalize`` filter capitalizes any value passed to it; the ``to_yaml`` and ``to_json`` filters change the format of your variable values. Jinja2 includes many `built-in filters <http://jinja.pocoo.org/docs/templates/#builtin-filters>`_ and Ansible supplies many more filters. To find more examples of filters, see :ref:`playbooks_filters`.
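+
+For example, a minimal sketch (the variable names ``app_name`` and ``app_settings`` are hypothetical and assumed to be defined elsewhere)::
+
+    {{ app_name | capitalize }}
+    {{ app_settings | to_json }}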
+
+.. _setting_variables:
+
+Where to set variables
+======================
+
+You can define variables in a variety of places, such as in inventory, in playbooks, in reusable files, in roles, and at the command line. Ansible loads every possible variable it finds, then chooses the variable to apply based on :ref:`variable precedence rules <ansible_variable_precedence>`.
+
+.. _define_variables_in_inventory:
+
+Defining variables in inventory
+-------------------------------
+
+You can define different variables for each individual host, or set shared variables for a group of hosts in your inventory. For example, if all machines in the ``[Boston]`` group use 'boston.ntp.example.com' as an NTP server, you can set a group variable. The :ref:`intro_inventory` page has details on setting :ref:`host variables <host_variables>` and :ref:`group variables <group_variables>` in inventory.
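+
+As an illustrative sketch (the hostnames are hypothetical), an INI inventory with such a group variable might look like this::
+
+    [Boston]
+    host1.example.com
+    host2.example.com
+
+    [Boston:vars]
+    ntp_server=boston.ntp.example.com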
+
+.. _playbook_variables:
+
+Defining variables in a playbook
+--------------------------------
+
+You can define variables directly in a playbook::
+
+ - hosts: webservers
+ vars:
+ http_port: 80
+
+When you define variables in a playbook, they are visible to anyone who runs that playbook. This is especially useful if you share playbooks widely.
+
+.. _included_variables:
+.. _variable_file_separation_details:
+
+Defining variables in included files and roles
+----------------------------------------------
+
+You can define variables in reusable variables files and/or in reusable roles. When you define variables in reusable variable files, the sensitive variables are separated from playbooks. This separation enables you to store your playbooks in source control software and even share the playbooks, without the risk of exposing passwords or other sensitive and personal data. For information about creating reusable files and roles, see :ref:`playbooks_reuse`.
+
+This example shows how you can include variables defined in an external file::
+
+ ---
+
+ - hosts: all
+ remote_user: root
+ vars:
+ favcolor: blue
+ vars_files:
+ - /vars/external_vars.yml
+
+ tasks:
+
+ - name: This is just a placeholder
+ ansible.builtin.command: /bin/echo foo
+
+The content of each variables file is a simple YAML dictionary. For example::
+
+ ---
+ # in the above example, this would be vars/external_vars.yml
+ somevar: somevalue
+ password: magic
+
+.. note::
+ You can keep per-host and per-group variables in similar files. To learn about organizing your variables, see :ref:`splitting_out_vars`.
+
+.. _passing_variables_on_the_command_line:
+
+Defining variables at runtime
+-----------------------------
+
+You can define variables when you run your playbook by passing variables at the command line using the ``--extra-vars`` (or ``-e``) argument. You can also request user input with a ``vars_prompt`` (see :ref:`playbooks_prompts`). When you pass variables at the command line, use a single quoted string that contains one or more variables, in one of the formats below.
+
+key=value format
+^^^^^^^^^^^^^^^^
+
+Values passed in using the ``key=value`` syntax are interpreted as strings. Use the JSON format if you need to pass non-string values such as Booleans, integers, floats, lists, and so on.
+
+.. code-block:: text
+
+ ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
+
+JSON string format
+^^^^^^^^^^^^^^^^^^
+
+.. code-block:: text
+
+ ansible-playbook release.yml --extra-vars '{"version":"1.23.45","other_variable":"foo"}'
+ ansible-playbook arcade.yml --extra-vars '{"pacman":"mrs","ghosts":["inky","pinky","clyde","sue"]}'
+
+When passing variables with ``--extra-vars``, you must escape quotes and other special characters appropriately for both your markup (for example, JSON) and for your shell::
+
+ ansible-playbook arcade.yml --extra-vars "{\"name\":\"Conan O\'Brien\"}"
+ ansible-playbook arcade.yml --extra-vars '{"name":"Conan O'\\\''Brien"}'
+ ansible-playbook script.yml --extra-vars "{\"dialog\":\"He said \\\"I just can\'t get enough of those single and double-quotes"\!"\\\"\"}"
+
+If you have a lot of special characters, use a JSON or YAML file containing the variable definitions.
+
+vars from a JSON or YAML file
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: text
+
+ ansible-playbook release.yml --extra-vars "@some_file.json"
+
+
+.. _ansible_variable_precedence:
+
+Variable precedence: Where should I put a variable?
+===================================================
+
+You can set multiple variables with the same name in many different places. When you do this, Ansible loads every possible variable it finds, then chooses the variable to apply based on variable precedence. In other words, the different variables will override each other in a certain order.
+
+Teams and projects that agree on guidelines for defining variables (where to define certain types of variables) usually avoid variable precedence concerns. We suggest that you define each variable in one place: figure out where to define a variable, and keep it simple. For examples, see :ref:`variable_examples`.
+
+Some behavioral parameters that you can set in variables you can also set in Ansible configuration, as command-line options, and using playbook keywords. For example, you can define the user Ansible uses to connect to remote devices as a variable with ``ansible_user``, in a configuration file with ``DEFAULT_REMOTE_USER``, as a command-line option with ``-u``, and with the playbook keyword ``remote_user``. If you define the same parameter in a variable and by another method, the variable overrides the other setting. This approach allows host-specific settings to override more general settings. For examples and more details on the precedence of these various settings, see :ref:`general_precedence_rules`.
+
+Understanding variable precedence
+---------------------------------
+
+Ansible does apply variable precedence, and you might have a use for it. Here is the order of precedence from least to greatest (the last listed variables override all other variables):
+
+ #. command line values (for example, ``-u my_user``, these are not variables)
+ #. role defaults (defined in role/defaults/main.yml) [1]_
+ #. inventory file or script group vars [2]_
+ #. inventory group_vars/all [3]_
+ #. playbook group_vars/all [3]_
+ #. inventory group_vars/* [3]_
+ #. playbook group_vars/* [3]_
+ #. inventory file or script host vars [2]_
+ #. inventory host_vars/* [3]_
+ #. playbook host_vars/* [3]_
+ #. host facts / cached set_facts [4]_
+ #. play vars
+ #. play vars_prompt
+ #. play vars_files
+ #. role vars (defined in role/vars/main.yml)
+ #. block vars (only for tasks in block)
+ #. task vars (only for the task)
+ #. include_vars
+ #. set_facts / registered vars
+ #. role (and include_role) params
+ #. include params
+  #. extra vars (for example, ``-e "user=my_user"``) (always win precedence)
+
+In general, Ansible gives precedence to variables that were defined more recently, more actively, and with more explicit scope. Variables in the defaults folder inside a role are easily overridden. Anything in the vars directory of the role overrides previous versions of that variable in the namespace. Host and/or inventory variables override role defaults, but explicit includes such as the vars directory or an ``include_vars`` task override inventory variables.
+
+Ansible merges different variables set in inventory so that more specific settings override more generic settings. For example, ``ansible_ssh_user`` specified as a group_var is overridden by ``ansible_user`` specified as a host_var. For details about the precedence of variables set in inventory, see :ref:`how_we_merge`.
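+
+As an illustrative sketch (the file names and values are hypothetical), the more specific host-level setting wins over the group-level setting::
+
+    # group_vars/all.yml
+    ansible_ssh_user: deploy
+
+    # host_vars/db1.example.com.yml
+    ansible_user: admin    # this host-level value overrides the group-level ansible_ssh_user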
+
+.. rubric:: Footnotes
+
+.. [1] Tasks in each role see their own role's defaults. Tasks defined outside of a role see the last role's defaults.
+.. [2] Variables defined in inventory file or provided by dynamic inventory.
+.. [3] Includes vars added by 'vars plugins' as well as host_vars and group_vars which are added by the default vars plugin shipped with Ansible.
+.. [4] When created with ``set_fact``'s ``cacheable`` option, variables have the high precedence of ``set_fact`` in the play, but have the same precedence as host facts when they come from the cache.
+
+.. note:: Within any section, redefining a var overrides the previous instance.
+ If multiple groups have the same variable, the last one loaded wins.
+ If you define a variable twice in a play's ``vars:`` section, the second one wins.
+.. note:: The previous descriptions assume the default configuration ``hash_behaviour=replace``; switch to ``merge`` to only partially overwrite.
+
+.. _variable_scopes:
+
+Scoping variables
+-----------------
+
+You can decide where to set a variable based on the scope you want that value to have. Ansible has three main scopes:
+
+ * Global: this is set by config, environment variables and the command line
+ * Play: each play and contained structures, vars entries (vars; vars_files; vars_prompt), role defaults and vars.
+ * Host: variables directly associated with a host, like inventory, include_vars, facts or registered task outputs
+
+Inside a template, you automatically have access to all variables that are in scope for a host, plus any registered variables, facts, and magic variables.
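+
+For example, a hypothetical template (``motd.j2``) could combine a host-scoped variable, a fact, and a magic variable (the ``ntp_server`` variable is assumed to be defined in inventory)::
+
+    Welcome to {{ inventory_hostname }} ({{ ansible_facts['distribution'] | default('unknown') }})
+    This host uses NTP server {{ ntp_server }}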
+
+.. _variable_examples:
+
+Tips on where to set variables
+------------------------------
+
+You should choose where to define a variable based on the kind of control you might want over values.
+
+Set variables in inventory that deal with geography or behavior. Since groups are frequently the entity that maps roles onto hosts, you can often set variables on the group instead of defining them on a role. Remember: child groups override parent groups, and host variables override group variables. See :ref:`define_variables_in_inventory` for details on setting host and group variables.
+
+Set common defaults in a ``group_vars/all`` file. See :ref:`splitting_out_vars` for details on how to organize host and group variables in your inventory. Group variables are generally placed alongside your inventory file, but they can also be returned by dynamic inventory (see :ref:`intro_dynamic_inventory`) or defined in :ref:`ansible_tower` from the UI or API::
+
+ ---
+ # file: /etc/ansible/group_vars/all
+ # this is the site wide default
+ ntp_server: default-time.example.com
+
+Set location-specific variables in ``group_vars/my_location`` files. All groups are children of the ``all`` group, so variables set here override those set in ``group_vars/all``::
+
+ ---
+ # file: /etc/ansible/group_vars/boston
+ ntp_server: boston-time.example.com
+
+If one host used a different NTP server, you could set that in a host_vars file, which would override the group variable::
+
+ ---
+ # file: /etc/ansible/host_vars/xyz.boston.example.com
+ ntp_server: override.example.com
+
+Set defaults in roles to avoid undefined-variable errors. If you share your roles, other users can rely on the reasonable defaults you added in the ``roles/x/defaults/main.yml`` file, or they can easily override those values in inventory or at the command line. See :ref:`playbooks_reuse_roles` for more info. For example::
+
+ ---
+ # file: roles/x/defaults/main.yml
+ # if no other value is supplied in inventory or as a parameter, this value will be used
+ http_port: 80
+
+Set variables in roles to ensure a value is used in that role, and is not overridden by inventory variables. If you are not sharing your role with others, you can define app-specific behaviors like ports this way, in ``roles/x/vars/main.yml``. If you are sharing roles with others, putting variables here makes them harder to override, although they can still be overridden by passing a parameter to the role or setting a variable with ``-e``::
+
+ ---
+ # file: roles/x/vars/main.yml
+ # this will absolutely be used in this role
+ http_port: 80
+
+Pass variables as parameters when you call roles for maximum clarity, flexibility, and visibility. This approach overrides any defaults that exist for a role. For example::
+
+ roles:
+ - role: apache
+ vars:
+ http_port: 8080
+
+When you read this playbook it is clear that you have chosen to set a variable or override a default. You can also pass multiple values, which allows you to run the same role multiple times. See :ref:`run_role_twice` for more details. For example::
+
+ roles:
+ - role: app_user
+ vars:
+ myname: Ian
+ - role: app_user
+ vars:
+ myname: Terry
+ - role: app_user
+ vars:
+ myname: Graham
+ - role: app_user
+ vars:
+ myname: John
+
+Variables set in one role are available to later roles. You can set variables in a ``roles/common_settings/vars/main.yml`` file and use them in other roles and elsewhere in your playbook::
+
+ roles:
+ - role: common_settings
+ - role: something
+ vars:
+ foo: 12
+ - role: something_else
+
+.. note:: There are some protections in place to avoid the need to namespace variables.
+ In this example, variables defined in 'common_settings' are available to 'something' and 'something_else' tasks, but tasks in 'something' have foo set at 12, even if 'common_settings' sets foo to 20.
+
+Instead of worrying about variable precedence, we encourage you to think about how easily or how often you want to override a variable when deciding where to set it. If you are not sure what other variables are defined, and you need a particular value, use ``--extra-vars`` (``-e``) to override all other variables.
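+
+For example, this command overrides any other definition of ``ntp_server`` for a single run (the playbook name and variable are hypothetical):
+
+.. code-block:: text
+
+    ansible-playbook site.yml --extra-vars "ntp_server=override.example.com"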
+
+Using advanced variable syntax
+==============================
+
+For information about advanced YAML syntax used to declare variables and have more control over the data placed in YAML files used by Ansible, see :ref:`playbooks_advanced_syntax`.
+
+.. seealso::
+
+ :ref:`about_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_conditionals`
+ Conditional statements in playbooks
+ :ref:`playbooks_filters`
+ Jinja2 filters and their uses
+ :ref:`playbooks_loops`
+ Looping in playbooks
+ :ref:`playbooks_reuse_roles`
+ Playbook organization by roles
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ :ref:`special_variables`
+ List of special variables
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/playbooks_vars_facts.rst b/docs/docsite/rst/user_guide/playbooks_vars_facts.rst
new file mode 100644
index 00000000..3828b8e3
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_vars_facts.rst
@@ -0,0 +1,680 @@
+.. _vars_and_facts:
+
+************************************************
+Discovering variables: facts and magic variables
+************************************************
+
+With Ansible you can retrieve or discover certain variables containing information about your remote systems or about Ansible itself. Variables related to remote systems are called facts. With facts, you can use the behavior or state of one system as configuration on other systems. For example, you can use the IP address of one system as a configuration value on another system. Variables related to Ansible are called magic variables.
+
+.. contents::
+ :local:
+
+Ansible facts
+=============
+
+Ansible facts are data related to your remote systems, including operating systems, IP addresses, attached filesystems, and more. You can access this data in the ``ansible_facts`` variable. By default, you can also access some Ansible facts as top-level variables with the ``ansible_`` prefix. You can disable this behavior using the :ref:`INJECT_FACTS_AS_VARS` setting. To see all available facts, add this task to a play::
+
+ - name: Print all available facts
+ ansible.builtin.debug:
+ var: ansible_facts
+
+To see the 'raw' information as gathered, run this command at the command line::
+
+ ansible <hostname> -m ansible.builtin.setup
+
+Facts include a large amount of variable data, which may look like this:
+
+.. code-block:: json
+
+ {
+ "ansible_all_ipv4_addresses": [
+ "REDACTED IP ADDRESS"
+ ],
+ "ansible_all_ipv6_addresses": [
+ "REDACTED IPV6 ADDRESS"
+ ],
+ "ansible_apparmor": {
+ "status": "disabled"
+ },
+ "ansible_architecture": "x86_64",
+ "ansible_bios_date": "11/28/2013",
+ "ansible_bios_version": "4.1.5",
+ "ansible_cmdline": {
+ "BOOT_IMAGE": "/boot/vmlinuz-3.10.0-862.14.4.el7.x86_64",
+ "console": "ttyS0,115200",
+ "no_timer_check": true,
+ "nofb": true,
+ "nomodeset": true,
+ "ro": true,
+ "root": "LABEL=cloudimg-rootfs",
+ "vga": "normal"
+ },
+ "ansible_date_time": {
+ "date": "2018-10-25",
+ "day": "25",
+ "epoch": "1540469324",
+ "hour": "12",
+ "iso8601": "2018-10-25T12:08:44Z",
+ "iso8601_basic": "20181025T120844109754",
+ "iso8601_basic_short": "20181025T120844",
+ "iso8601_micro": "2018-10-25T12:08:44.109968Z",
+ "minute": "08",
+ "month": "10",
+ "second": "44",
+ "time": "12:08:44",
+ "tz": "UTC",
+ "tz_offset": "+0000",
+ "weekday": "Thursday",
+ "weekday_number": "4",
+ "weeknumber": "43",
+ "year": "2018"
+ },
+ "ansible_default_ipv4": {
+ "address": "REDACTED",
+ "alias": "eth0",
+ "broadcast": "REDACTED",
+ "gateway": "REDACTED",
+ "interface": "eth0",
+ "macaddress": "REDACTED",
+ "mtu": 1500,
+ "netmask": "255.255.255.0",
+ "network": "REDACTED",
+ "type": "ether"
+ },
+ "ansible_default_ipv6": {},
+ "ansible_device_links": {
+ "ids": {},
+ "labels": {
+ "xvda1": [
+ "cloudimg-rootfs"
+ ],
+ "xvdd": [
+ "config-2"
+ ]
+ },
+ "masters": {},
+ "uuids": {
+ "xvda1": [
+ "cac81d61-d0f8-4b47-84aa-b48798239164"
+ ],
+ "xvdd": [
+ "2018-10-25-12-05-57-00"
+ ]
+ }
+ },
+ "ansible_devices": {
+ "xvda": {
+ "holders": [],
+ "host": "",
+ "links": {
+ "ids": [],
+ "labels": [],
+ "masters": [],
+ "uuids": []
+ },
+ "model": null,
+ "partitions": {
+ "xvda1": {
+ "holders": [],
+ "links": {
+ "ids": [],
+ "labels": [
+ "cloudimg-rootfs"
+ ],
+ "masters": [],
+ "uuids": [
+ "cac81d61-d0f8-4b47-84aa-b48798239164"
+ ]
+ },
+ "sectors": "83883999",
+ "sectorsize": 512,
+ "size": "40.00 GB",
+ "start": "2048",
+ "uuid": "cac81d61-d0f8-4b47-84aa-b48798239164"
+ }
+ },
+ "removable": "0",
+ "rotational": "0",
+ "sas_address": null,
+ "sas_device_handle": null,
+ "scheduler_mode": "deadline",
+ "sectors": "83886080",
+ "sectorsize": "512",
+ "size": "40.00 GB",
+ "support_discard": "0",
+ "vendor": null,
+ "virtual": 1
+ },
+ "xvdd": {
+ "holders": [],
+ "host": "",
+ "links": {
+ "ids": [],
+ "labels": [
+ "config-2"
+ ],
+ "masters": [],
+ "uuids": [
+ "2018-10-25-12-05-57-00"
+ ]
+ },
+ "model": null,
+ "partitions": {},
+ "removable": "0",
+ "rotational": "0",
+ "sas_address": null,
+ "sas_device_handle": null,
+ "scheduler_mode": "deadline",
+ "sectors": "131072",
+ "sectorsize": "512",
+ "size": "64.00 MB",
+ "support_discard": "0",
+ "vendor": null,
+ "virtual": 1
+ },
+ "xvde": {
+ "holders": [],
+ "host": "",
+ "links": {
+ "ids": [],
+ "labels": [],
+ "masters": [],
+ "uuids": []
+ },
+ "model": null,
+ "partitions": {
+ "xvde1": {
+ "holders": [],
+ "links": {
+ "ids": [],
+ "labels": [],
+ "masters": [],
+ "uuids": []
+ },
+ "sectors": "167770112",
+ "sectorsize": 512,
+ "size": "80.00 GB",
+ "start": "2048",
+ "uuid": null
+ }
+ },
+ "removable": "0",
+ "rotational": "0",
+ "sas_address": null,
+ "sas_device_handle": null,
+ "scheduler_mode": "deadline",
+ "sectors": "167772160",
+ "sectorsize": "512",
+ "size": "80.00 GB",
+ "support_discard": "0",
+ "vendor": null,
+ "virtual": 1
+ }
+ },
+ "ansible_distribution": "CentOS",
+ "ansible_distribution_file_parsed": true,
+ "ansible_distribution_file_path": "/etc/redhat-release",
+ "ansible_distribution_file_variety": "RedHat",
+ "ansible_distribution_major_version": "7",
+ "ansible_distribution_release": "Core",
+ "ansible_distribution_version": "7.5.1804",
+ "ansible_dns": {
+ "nameservers": [
+ "127.0.0.1"
+ ]
+ },
+ "ansible_domain": "",
+ "ansible_effective_group_id": 1000,
+ "ansible_effective_user_id": 1000,
+ "ansible_env": {
+ "HOME": "/home/zuul",
+ "LANG": "en_US.UTF-8",
+ "LESSOPEN": "||/usr/bin/lesspipe.sh %s",
+ "LOGNAME": "zuul",
+ "MAIL": "/var/mail/zuul",
+ "PATH": "/usr/local/bin:/usr/bin",
+ "PWD": "/home/zuul",
+ "SELINUX_LEVEL_REQUESTED": "",
+ "SELINUX_ROLE_REQUESTED": "",
+ "SELINUX_USE_CURRENT_RANGE": "",
+ "SHELL": "/bin/bash",
+ "SHLVL": "2",
+ "SSH_CLIENT": "REDACTED 55672 22",
+ "SSH_CONNECTION": "REDACTED 55672 REDACTED 22",
+ "USER": "zuul",
+ "XDG_RUNTIME_DIR": "/run/user/1000",
+ "XDG_SESSION_ID": "1",
+ "_": "/usr/bin/python2"
+ },
+ "ansible_eth0": {
+ "active": true,
+ "device": "eth0",
+ "ipv4": {
+ "address": "REDACTED",
+ "broadcast": "REDACTED",
+ "netmask": "255.255.255.0",
+ "network": "REDACTED"
+ },
+ "ipv6": [
+ {
+ "address": "REDACTED",
+ "prefix": "64",
+ "scope": "link"
+ }
+ ],
+ "macaddress": "REDACTED",
+ "module": "xen_netfront",
+ "mtu": 1500,
+ "pciid": "vif-0",
+ "promisc": false,
+ "type": "ether"
+ },
+ "ansible_eth1": {
+ "active": true,
+ "device": "eth1",
+ "ipv4": {
+ "address": "REDACTED",
+ "broadcast": "REDACTED",
+ "netmask": "255.255.224.0",
+ "network": "REDACTED"
+ },
+ "ipv6": [
+ {
+ "address": "REDACTED",
+ "prefix": "64",
+ "scope": "link"
+ }
+ ],
+ "macaddress": "REDACTED",
+ "module": "xen_netfront",
+ "mtu": 1500,
+ "pciid": "vif-1",
+ "promisc": false,
+ "type": "ether"
+ },
+ "ansible_fips": false,
+ "ansible_form_factor": "Other",
+ "ansible_fqdn": "centos-7-rax-dfw-0003427354",
+ "ansible_hostname": "centos-7-rax-dfw-0003427354",
+ "ansible_interfaces": [
+ "lo",
+ "eth1",
+ "eth0"
+ ],
+ "ansible_is_chroot": false,
+ "ansible_kernel": "3.10.0-862.14.4.el7.x86_64",
+ "ansible_lo": {
+ "active": true,
+ "device": "lo",
+ "ipv4": {
+ "address": "127.0.0.1",
+ "broadcast": "host",
+ "netmask": "255.0.0.0",
+ "network": "127.0.0.0"
+ },
+ "ipv6": [
+ {
+ "address": "::1",
+ "prefix": "128",
+ "scope": "host"
+ }
+ ],
+ "mtu": 65536,
+ "promisc": false,
+ "type": "loopback"
+ },
+ "ansible_local": {},
+ "ansible_lsb": {
+ "codename": "Core",
+ "description": "CentOS Linux release 7.5.1804 (Core)",
+ "id": "CentOS",
+ "major_release": "7",
+ "release": "7.5.1804"
+ },
+ "ansible_machine": "x86_64",
+ "ansible_machine_id": "2db133253c984c82aef2fafcce6f2bed",
+ "ansible_memfree_mb": 7709,
+ "ansible_memory_mb": {
+ "nocache": {
+ "free": 7804,
+ "used": 173
+ },
+ "real": {
+ "free": 7709,
+ "total": 7977,
+ "used": 268
+ },
+ "swap": {
+ "cached": 0,
+ "free": 0,
+ "total": 0,
+ "used": 0
+ }
+ },
+ "ansible_memtotal_mb": 7977,
+ "ansible_mounts": [
+ {
+ "block_available": 7220998,
+ "block_size": 4096,
+ "block_total": 9817227,
+ "block_used": 2596229,
+ "device": "/dev/xvda1",
+ "fstype": "ext4",
+ "inode_available": 10052341,
+ "inode_total": 10419200,
+ "inode_used": 366859,
+ "mount": "/",
+ "options": "rw,seclabel,relatime,data=ordered",
+ "size_available": 29577207808,
+ "size_total": 40211361792,
+ "uuid": "cac81d61-d0f8-4b47-84aa-b48798239164"
+ },
+ {
+ "block_available": 0,
+ "block_size": 2048,
+ "block_total": 252,
+ "block_used": 252,
+ "device": "/dev/xvdd",
+ "fstype": "iso9660",
+ "inode_available": 0,
+ "inode_total": 0,
+ "inode_used": 0,
+ "mount": "/mnt/config",
+ "options": "ro,relatime,mode=0700",
+ "size_available": 0,
+ "size_total": 516096,
+ "uuid": "2018-10-25-12-05-57-00"
+ }
+ ],
+ "ansible_nodename": "centos-7-rax-dfw-0003427354",
+ "ansible_os_family": "RedHat",
+ "ansible_pkg_mgr": "yum",
+ "ansible_processor": [
+ "0",
+ "GenuineIntel",
+ "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
+ "1",
+ "GenuineIntel",
+ "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
+ "2",
+ "GenuineIntel",
+ "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
+ "3",
+ "GenuineIntel",
+ "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
+ "4",
+ "GenuineIntel",
+ "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
+ "5",
+ "GenuineIntel",
+ "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
+ "6",
+ "GenuineIntel",
+ "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
+ "7",
+ "GenuineIntel",
+ "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz"
+ ],
+ "ansible_processor_cores": 8,
+ "ansible_processor_count": 8,
+ "ansible_processor_nproc": 8,
+ "ansible_processor_threads_per_core": 1,
+ "ansible_processor_vcpus": 8,
+ "ansible_product_name": "HVM domU",
+ "ansible_product_serial": "REDACTED",
+ "ansible_product_uuid": "REDACTED",
+ "ansible_product_version": "4.1.5",
+ "ansible_python": {
+ "executable": "/usr/bin/python2",
+ "has_sslcontext": true,
+ "type": "CPython",
+ "version": {
+ "major": 2,
+ "micro": 5,
+ "minor": 7,
+ "releaselevel": "final",
+ "serial": 0
+ },
+ "version_info": [
+ 2,
+ 7,
+ 5,
+ "final",
+ 0
+ ]
+ },
+ "ansible_python_version": "2.7.5",
+ "ansible_real_group_id": 1000,
+ "ansible_real_user_id": 1000,
+ "ansible_selinux": {
+ "config_mode": "enforcing",
+ "mode": "enforcing",
+ "policyvers": 31,
+ "status": "enabled",
+ "type": "targeted"
+ },
+ "ansible_selinux_python_present": true,
+ "ansible_service_mgr": "systemd",
+ "ansible_ssh_host_key_ecdsa_public": "REDACTED KEY VALUE",
+ "ansible_ssh_host_key_ed25519_public": "REDACTED KEY VALUE",
+ "ansible_ssh_host_key_rsa_public": "REDACTED KEY VALUE",
+ "ansible_swapfree_mb": 0,
+ "ansible_swaptotal_mb": 0,
+ "ansible_system": "Linux",
+ "ansible_system_capabilities": [
+ ""
+ ],
+ "ansible_system_capabilities_enforced": "True",
+ "ansible_system_vendor": "Xen",
+ "ansible_uptime_seconds": 151,
+ "ansible_user_dir": "/home/zuul",
+ "ansible_user_gecos": "",
+ "ansible_user_gid": 1000,
+ "ansible_user_id": "zuul",
+ "ansible_user_shell": "/bin/bash",
+ "ansible_user_uid": 1000,
+ "ansible_userspace_architecture": "x86_64",
+ "ansible_userspace_bits": "64",
+ "ansible_virtualization_role": "guest",
+ "ansible_virtualization_type": "xen",
+ "gather_subset": [
+ "all"
+ ],
+ "module_setup": true
+ }
+
+You can reference the model of the first disk in the facts shown above in a template or playbook as::
+
+ {{ ansible_facts['devices']['xvda']['model'] }}
+
+To reference the system hostname::
+
+ {{ ansible_facts['nodename'] }}
+
+You can use facts in conditionals (see :ref:`playbooks_conditionals`) and also in templates. You can also use facts to create dynamic groups of hosts that match particular criteria, see the :ref:`group_by module <group_by_module>` documentation for details.
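+
+For example, a minimal sketch of a conditional task keyed on a fact (the package choice is illustrative only)::
+
+    - name: Install Apache on RedHat-family hosts
+      ansible.builtin.yum:
+        name: httpd
+        state: present
+      when: ansible_facts['os_family'] == "RedHat"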
+
+.. _fact_requirements:
+
+Package requirements for fact gathering
+---------------------------------------
+
+On some distros, you may see missing fact values or facts set to default values because the packages that support gathering those facts are not installed by default. You can install the necessary packages on your remote hosts using the OS package manager. Known dependencies include:
+
+* Linux Network fact gathering - Depends on the ``ip`` binary, commonly included in the ``iproute2`` package.
+
+.. _fact_caching:
+
+Caching facts
+-------------
+
+Like registered variables, facts are stored in memory by default. However, unlike registered variables, facts can be gathered independently and cached for repeated use. With cached facts, you can refer to facts from one system when configuring a second system, even if Ansible executes the current play on the second system first. For example::
+
+ {{ hostvars['asdf.example.com']['ansible_facts']['os_family'] }}
+
+Caching is controlled by the cache plugins. By default, Ansible uses the memory cache plugin, which stores facts in memory for the duration of the current playbook run. To retain Ansible facts for repeated use, select a different cache plugin. See :ref:`cache_plugins` for details.
+
+Fact caching can improve performance. If you manage thousands of hosts, you can configure fact caching to run nightly, then manage configuration on a smaller set of servers periodically throughout the day. With cached facts, you have access to variables and information about all hosts even when you are only managing a small number of servers.
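+
+As a sketch of one possible setup (the cache location is hypothetical), you could enable the ``jsonfile`` cache plugin in ``ansible.cfg``::
+
+    [defaults]
+    gathering = smart
+    fact_caching = jsonfile
+    fact_caching_connection = /tmp/ansible_fact_cache
+    fact_caching_timeout = 86400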
+
+.. _disabling_facts:
+
+Disabling facts
+---------------
+
+By default, Ansible gathers facts at the beginning of each play. If you do not need to gather facts (for example, if you know everything about your systems centrally), you can turn off fact gathering at the play level to improve scalability. Disabling facts may particularly improve performance in push mode with very large numbers of systems, or if you are using Ansible on experimental platforms. To disable fact gathering::
+
+ - hosts: whatever
+ gather_facts: no
+
+Adding custom facts
+-------------------
+
+The setup module in Ansible automatically discovers a standard set of facts about each host. If you want to add custom values to your facts, you can write a custom facts module, set temporary facts with an ``ansible.builtin.set_fact`` task, or provide permanent custom facts using the facts.d directory.
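+
+For example, a minimal ``ansible.builtin.set_fact`` sketch (the variable name and value are hypothetical)::
+
+    - name: Set a temporary custom fact
+      ansible.builtin.set_fact:
+        app_port: 8080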
+
+.. _local_facts:
+
+facts.d or local facts
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. versionadded:: 1.3
+
+You can add static custom facts by adding static files to facts.d, or add dynamic facts by adding executable scripts to facts.d. For example, you can add a list of all users on a host to your facts by creating and running a script in facts.d.
+
+To use facts.d, create an ``/etc/ansible/facts.d`` directory on the remote host or hosts. If you prefer a different directory, create it and specify it using the ``fact_path`` play keyword. Add files to the directory to supply your custom facts. All file names must end with ``.fact``. The files can be JSON, INI, or executable files returning JSON.
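+
+For example, a play could point fact gathering at an alternative directory (the path shown is hypothetical)::
+
+    - hosts: all
+      fact_path: /opt/ansible/facts.d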
+
+To add static facts, simply add a file with the ``.fact`` extension. For example, create ``/etc/ansible/facts.d/preferences.fact`` with this content::
+
+ [general]
+ asdf=1
+ bar=2
+
+The next time fact gathering runs, your facts will include a hash variable fact named ``general`` with ``asdf`` and ``bar`` as members. To validate this, run the following::
+
+ ansible <hostname> -m ansible.builtin.setup -a "filter=ansible_local"
+
+And you will see your custom fact added::
+
+ "ansible_local": {
+ "preferences": {
+ "general": {
+ "asdf" : "1",
+ "bar" : "2"
+ }
+ }
+ }
+
+The ansible_local namespace separates custom facts created by facts.d from system facts or variables defined elsewhere in the playbook, so variables will not override each other. You can access this custom fact in a template or playbook as::
+
+ {{ ansible_local['preferences']['general']['asdf'] }}
+
+.. note:: The key part in the key=value pairs will be converted into lowercase inside the ansible_local variable. Using the example above, if the ini file contained ``XYZ=3`` in the ``[general]`` section, then you should expect to access it as: ``{{ ansible_local['preferences']['general']['xyz'] }}`` and not ``{{ ansible_local['preferences']['general']['XYZ'] }}``. This is because Ansible uses Python's `ConfigParser`_ which passes all option names through the `optionxform`_ method and this method's default implementation converts option names to lower case.
+
+.. _ConfigParser: https://docs.python.org/2/library/configparser.html
+.. _optionxform: https://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser.optionxform
+
+You can also use facts.d to execute a script on the remote host, generating dynamic custom facts in the ``ansible_local`` namespace. For example, you can generate a list of all users that exist on a remote host as a fact about that host. To generate dynamic custom facts using facts.d:
+
+ #. Write and test a script to generate the JSON data you want.
+ #. Save the script in your facts.d directory.
+ #. Make sure your script has the ``.fact`` file extension.
+ #. Make sure your script is executable by the Ansible connection user.
+ #. Gather facts to execute the script and add the JSON output to ansible_local.
+
+By default, fact gathering runs once at the beginning of each play. If you create a custom fact using facts.d in a playbook, it will be available in the next play that gathers facts. If you want to use it in the same play where you created it, you must explicitly re-run the setup module. For example::
+
+ - hosts: webservers
+ tasks:
+
+ - name: Create directory for ansible custom facts
+ ansible.builtin.file:
+ state: directory
+ recurse: yes
+ path: /etc/ansible/facts.d
+
+ - name: Install custom ipmi fact
+ ansible.builtin.copy:
+ src: ipmi.fact
+ dest: /etc/ansible/facts.d
+
+ - name: Re-read facts after adding custom fact
+ ansible.builtin.setup:
+ filter: ansible_local
+
+If you use this pattern frequently, a custom facts module would be more efficient than facts.d.
+
+.. _magic_variables_and_hostvars:
+
+Information about Ansible: magic variables
+==========================================
+
+You can access information about Ansible operations, including the python version being used, the hosts and groups in inventory, and the directories for playbooks and roles, using "magic" variables. Like connection variables, magic variables are :ref:`special_variables`. Magic variable names are reserved - do not set variables with these names. The variable ``environment`` is also reserved.
+
+The most commonly used magic variables are ``hostvars``, ``groups``, ``group_names``, and ``inventory_hostname``. With ``hostvars``, you can access variables defined for any host in the play, at any point in a playbook. You can access Ansible facts using the ``hostvars`` variable too, but only after you have gathered (or cached) facts.
+
+If you want to configure your database server using the value of a 'fact' from another node, or the value of an inventory variable assigned to another node, you can use ``hostvars`` in a template or on an action line::
+
+ {{ hostvars['test.example.com']['ansible_facts']['distribution'] }}
+
+With ``groups``, a list of all the groups (and hosts) in the inventory, you can enumerate all hosts within a group. For example:
+
+.. code-block:: jinja
+
+ {% for host in groups['app_servers'] %}
+ # something that applies to all app servers.
+ {% endfor %}
+
+You can use ``groups`` and ``hostvars`` together to find all the IP addresses in a group.
+
+.. code-block:: jinja
+
+ {% for host in groups['app_servers'] %}
+ {{ hostvars[host]['ansible_facts']['eth0']['ipv4']['address'] }}
+ {% endfor %}
+
+You can use this approach to point a frontend proxy server to all the hosts in your app servers group, to set up the correct firewall rules between servers, and so on. You must either cache facts or gather facts for those hosts before the task that fills out the template.
+
+With ``group_names``, a list (array) of all the groups the current host is in, you can create templated files that vary based on the group membership (or role) of the host:
+
+.. code-block:: jinja
+
+ {% if 'webserver' in group_names %}
+ # some part of a configuration file that only applies to webservers
+ {% endif %}
+
+You can use the magic variable ``inventory_hostname``, the name of the host as configured in your inventory, as an alternative to ``ansible_hostname`` when fact-gathering is disabled. If you have a long FQDN, you can use ``inventory_hostname_short``, which contains the part up to the first period, without the rest of the domain.
+
+Other useful magic variables refer to the current play or playbook. You can use these variables to fill out templates with multiple hostnames or to inject the list into the rules for a load balancer.
+
+``ansible_play_hosts`` is the list of all hosts still active in the current play.
+
+``ansible_play_batch`` is a list of hostnames that are in scope for the current 'batch' of the play.
+
+The batch size is defined by ``serial``; when not set it is equivalent to the whole play (making it the same as ``ansible_play_hosts``).
+
+``ansible_playbook_python`` is the path to the python executable used to invoke the Ansible command line tool.
+
+``inventory_dir`` is the pathname of the directory holding Ansible's inventory host file.
+
+``inventory_file`` is the pathname and the filename pointing to Ansible's inventory host file.
+
+``playbook_dir`` contains the playbook base directory.
+
+``role_path`` contains the current role's pathname and only works inside a role.
+
+``ansible_check_mode`` is a boolean, set to ``True`` if you run Ansible with ``--check``.
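+
+For example, a minimal sketch of a task that is skipped when you run with ``--check`` (the command is hypothetical)::
+
+    - name: Skip this step in check mode
+      ansible.builtin.command: /usr/bin/send-notification
+      when: not ansible_check_mode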
+
+.. _ansible_version:
+
+Ansible version
+---------------
+
+.. versionadded:: 1.8
+
+To adapt playbook behavior to different versions of Ansible, you can use the variable ``ansible_version``, which has the following structure::
+
+ "ansible_version": {
+ "full": "2.10.1",
+ "major": 2,
+ "minor": 10,
+ "revision": 1,
+ "string": "2.10.1"
+ }
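+
+For example, a minimal sketch of a task gated on the Ansible version (the version threshold is illustrative)::
+
+    - name: Only run on Ansible 2.10 or later
+      ansible.builtin.debug:
+        msg: "Running with Ansible {{ ansible_version.full }}"
+      when: ansible_version.full is version('2.10', '>=')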
diff --git a/docs/docsite/rst/user_guide/playbooks_vault.rst b/docs/docsite/rst/user_guide/playbooks_vault.rst
new file mode 100644
index 00000000..03bd2c04
--- /dev/null
+++ b/docs/docsite/rst/user_guide/playbooks_vault.rst
@@ -0,0 +1,6 @@
+:orphan:
+
+Using vault in playbooks
+========================
+
+The documentation regarding Ansible Vault has moved. The new location is here: :ref:`vault`. Please update any links you may have made directly to this page.
diff --git a/docs/docsite/rst/user_guide/plugin_filtering_config.rst b/docs/docsite/rst/user_guide/plugin_filtering_config.rst
new file mode 100644
index 00000000..2e9900c9
--- /dev/null
+++ b/docs/docsite/rst/user_guide/plugin_filtering_config.rst
@@ -0,0 +1,26 @@
+.. _plugin_filtering_config:
+
+Blacklisting modules
+====================
+
+If you want to avoid using certain modules, you can blacklist them to prevent Ansible from loading them. To blacklist plugins, create a yaml configuration file. The default location for this file is :file:`/etc/ansible/plugin_filters.yml`, or you can select a different path for the blacklist file using the :ref:`PLUGIN_FILTERS_CFG` setting in the ``defaults`` section of your ansible.cfg. Here is an example blacklist file:
+
+.. code-block:: YAML
+
+ ---
+ filter_version: '1.0'
+ module_blacklist:
+ # Deprecated
+ - docker
+ # We only allow pip, not easy_install
+ - easy_install
+
+The file contains two fields:
+
+ * A file version so that you can update the format while keeping backwards compatibility in the future. The present version should be the string ``"1.0"``
+
+ * A list of modules to blacklist. Any module in this list will not be loaded by Ansible when it searches for a module to invoke for a task.
+
+.. note::
+
+ You cannot blacklist the ``stat`` module, as it is required for Ansible to run.
diff --git a/docs/docsite/rst/user_guide/quickstart.rst b/docs/docsite/rst/user_guide/quickstart.rst
new file mode 100644
index 00000000..7e97d9ab
--- /dev/null
+++ b/docs/docsite/rst/user_guide/quickstart.rst
@@ -0,0 +1,20 @@
+.. _quickstart_guide:
+
+Ansible Quickstart Guide
+========================
+
+We've recorded a short video that introduces Ansible.
+
+The `quickstart video <https://www.ansible.com/resources/videos/quick-start-video>`_ is about 13 minutes long and gives you a high level
+introduction to Ansible -- what it does and how to use it. We'll also tell you about other products in the Ansible ecosystem.
+
+Enjoy, and be sure to visit the rest of the documentation to learn more.
+
+.. seealso::
+
+ `A system administrators guide to getting started with Ansible <https://www.redhat.com/en/blog/system-administrators-guide-getting-started-ansible-fast>`_
+ A step by step introduction to Ansible
+ `Ansible Automation for SysAdmins <https://opensource.com/downloads/ansible-quickstart>`_
+ A downloadable guide for getting started with Ansible
+ :ref:`network_getting_started`
+ A guide for network engineers using Ansible for the first time
diff --git a/docs/docsite/rst/user_guide/sample_setup.rst b/docs/docsite/rst/user_guide/sample_setup.rst
new file mode 100644
index 00000000..9be60004
--- /dev/null
+++ b/docs/docsite/rst/user_guide/sample_setup.rst
@@ -0,0 +1,285 @@
+.. _sample_setup:
+
+********************
+Sample Ansible setup
+********************
+
+You have learned about playbooks, inventory, roles, and variables. This section pulls all those elements together, outlining a sample setup for automating a web service. You can find more example playbooks illustrating these patterns in our `ansible-examples repository <https://github.com/ansible/ansible-examples>`_. (NOTE: These may not use all of the features in the latest release, but are still an excellent reference!).
+
+The sample setup organizes playbooks, roles, inventory, and variables files by function, with tags at the play and task level for greater granularity and control. This is a powerful and flexible approach, but there are other ways to organize Ansible content. Your usage of Ansible should fit your needs, not ours, so feel free to modify this approach and organize your content as you see fit.
+
+.. contents::
+ :local:
+
+Sample directory layout
+-----------------------
+
+This layout organizes most tasks in roles, with a single inventory file for each environment and a few playbooks in the top-level directory::
+
+ production # inventory file for production servers
+ staging # inventory file for staging environment
+
+ group_vars/
+ group1.yml # here we assign variables to particular groups
+ group2.yml
+ host_vars/
+ hostname1.yml # here we assign variables to particular systems
+ hostname2.yml
+
+ library/ # if any custom modules, put them here (optional)
+ module_utils/ # if any custom module_utils to support modules, put them here (optional)
+ filter_plugins/ # if any custom filter plugins, put them here (optional)
+
+ site.yml # master playbook
+ webservers.yml # playbook for webserver tier
+ dbservers.yml # playbook for dbserver tier
+ tasks/ # task files included from playbooks
+ webservers-extra.yml # <-- avoids confusing playbook with task files
+
+ roles/
+ common/ # this hierarchy represents a "role"
+ tasks/ #
+ main.yml # <-- tasks file can include smaller files if warranted
+ handlers/ #
+ main.yml # <-- handlers file
+ templates/ # <-- files for use with the template resource
+ ntp.conf.j2 # <------- templates end in .j2
+ files/ #
+ bar.txt # <-- files for use with the copy resource
+ foo.sh # <-- script files for use with the script resource
+ vars/ #
+ main.yml # <-- variables associated with this role
+ defaults/ #
+ main.yml # <-- default lower priority variables for this role
+ meta/ #
+ main.yml # <-- role dependencies
+ library/ # roles can also include custom modules
+ module_utils/ # roles can also include custom module_utils
+ lookup_plugins/ # or other types of plugins, like lookup in this case
+
+ webtier/ # same kind of structure as "common" was above, done for the webtier role
+ monitoring/ # ""
+ fooapp/ # ""
+
+.. note:: By default, Ansible assumes your playbooks are stored in one directory with roles stored in a sub-directory called ``roles/``. As you use Ansible to automate more tasks, you may want to move your playbooks into a sub-directory called ``playbooks/``. If you do this, you must configure the path to your ``roles/`` directory using the ``roles_path`` setting in ansible.cfg.
+
+Alternative directory layout
+----------------------------
+
+Alternatively you can put each inventory file with its ``group_vars``/``host_vars`` in a separate directory. This is particularly useful if your ``group_vars``/``host_vars`` don't have that much in common in different environments. The layout could look something like this::
+
+ inventories/
+ production/
+ hosts # inventory file for production servers
+ group_vars/
+ group1.yml # here we assign variables to particular groups
+ group2.yml
+ host_vars/
+ hostname1.yml # here we assign variables to particular systems
+ hostname2.yml
+
+ staging/
+ hosts # inventory file for staging environment
+ group_vars/
+ group1.yml # here we assign variables to particular groups
+ group2.yml
+ host_vars/
+ stagehost1.yml # here we assign variables to particular systems
+ stagehost2.yml
+
+ library/
+ module_utils/
+ filter_plugins/
+
+ site.yml
+ webservers.yml
+ dbservers.yml
+
+ roles/
+ common/
+ webtier/
+ monitoring/
+ fooapp/
+
+This layout gives you more flexibility for larger environments, as well as a total separation of inventory variables between different environments. However, this approach is harder to maintain, because there are more files. For more information on organizing group and host variables, see :ref:`splitting_out_vars`.
+
+.. _groups_and_hosts:
+
+Sample group and host variables
+-------------------------------
+
+These sample group and host variables files record the variable values that apply to each machine or group of machines. For instance, the data center in Atlanta has its own NTP servers, so when setting up ntp.conf, we should use them::
+
+ ---
+ # file: group_vars/atlanta
+ ntp: ntp-atlanta.example.com
+ backup: backup-atlanta.example.com
+
+Similarly, the webservers have some configuration that does not apply to the database servers::
+
+ ---
+ # file: group_vars/webservers
+ apacheMaxRequestsPerChild: 3000
+ apacheMaxClients: 900
+
+Default values, or values that are universally true, belong in a file called group_vars/all::
+
+ ---
+ # file: group_vars/all
+ ntp: ntp-boston.example.com
+ backup: backup-boston.example.com
+
+If necessary, you can define specific hardware variance in systems in a host_vars file::
+
+ ---
+ # file: host_vars/db-bos-1.example.com
+ foo_agent_port: 86
+ bar_agent_port: 99
+
+Again, if you are using :ref:`dynamic inventory <dynamic_inventory>`, Ansible creates many dynamic groups automatically. So a tag like "class:webserver" would load variables from the file "group_vars/ec2_tag_class_webserver" automatically.
+
+.. _split_by_role:
+
+Sample playbooks organized by function
+--------------------------------------
+
+With this setup, a single playbook can define all the infrastructure. The site.yml playbook imports two other playbooks, one for the webservers and one for the database servers::
+
+ ---
+ # file: site.yml
+ - import_playbook: webservers.yml
+ - import_playbook: dbservers.yml
+
+The webservers.yml file, also at the top level, maps the configuration of the webservers group to the roles related to the webservers group::
+
+ ---
+ # file: webservers.yml
+ - hosts: webservers
+ roles:
+ - common
+ - webtier
+
+With this setup, you can configure your whole infrastructure by "running" site.yml, or run a subset by running webservers.yml. This is analogous to the Ansible "--limit" parameter but a little more explicit::
+
+ ansible-playbook site.yml --limit webservers
+ ansible-playbook webservers.yml
+
+.. _role_organization:
+
+Sample task and handler files in a function-based role
+------------------------------------------------------
+
+Ansible loads any file called ``main.yml`` in a role sub-directory. This sample ``tasks/main.yml`` file is simple - it sets up NTP, but it could do more if we wanted::
+
+ ---
+ # file: roles/common/tasks/main.yml
+
+ - name: be sure ntp is installed
+ yum:
+ name: ntp
+ state: present
+ tags: ntp
+
+ - name: be sure ntp is configured
+ template:
+ src: ntp.conf.j2
+ dest: /etc/ntp.conf
+ notify:
+ - restart ntpd
+ tags: ntp
+
+ - name: be sure ntpd is running and enabled
+ service:
+ name: ntpd
+ state: started
+ enabled: yes
+ tags: ntp
+
+Here is an example handlers file. As a review, handlers are only fired when certain tasks report changes, and are run at the end
+of each play::
+
+ ---
+ # file: roles/common/handlers/main.yml
+ - name: restart ntpd
+ service:
+ name: ntpd
+ state: restarted
+
+See :ref:`playbooks_reuse_roles` for more information.
+
+
+.. _organization_examples:
+
+What the sample setup enables
+-----------------------------
+
+The basic organizational structure described above enables a lot of different automation options. To reconfigure your entire infrastructure::
+
+ ansible-playbook -i production site.yml
+
+To reconfigure NTP on everything::
+
+ ansible-playbook -i production site.yml --tags ntp
+
+To reconfigure only the webservers::
+
+ ansible-playbook -i production webservers.yml
+
+To reconfigure only the webservers in Boston::
+
+ ansible-playbook -i production webservers.yml --limit boston
+
+To reconfigure only the first 10 webservers in Boston, and then the next 10::
+
+ ansible-playbook -i production webservers.yml --limit boston[0:9]
+ ansible-playbook -i production webservers.yml --limit boston[10:19]
+
+The sample setup also supports basic ad-hoc commands::
+
+ ansible boston -i production -m ping
+ ansible boston -i production -m command -a '/sbin/reboot'
+
+To discover what tasks would run or what hostnames would be affected by a particular Ansible command::
+
+ # confirm what task names would be run if I ran this command and said "just ntp tasks"
+ ansible-playbook -i production webservers.yml --tags ntp --list-tasks
+
+ # confirm what hostnames might be communicated with if I said "limit to boston"
+ ansible-playbook -i production webservers.yml --limit boston --list-hosts
+
+.. _dep_vs_config:
+
+Organizing for deployment or configuration
+------------------------------------------
+
+The sample setup models a typical configuration topology. When doing multi-tier deployments, there are going
+to be some additional playbooks that hop between tiers to roll out an application. In this case, 'site.yml'
+may be augmented by playbooks like 'deploy_exampledotcom.yml' but the general concepts still apply. Ansible allows you to deploy and configure using the same tool, so you would likely reuse groups and keep the OS configuration in separate playbooks or roles from the app deployment.
+
+Consider "playbooks" as a sports metaphor -- you can have one set of plays to use against all your infrastructure and situational plays that you use at different times and for different purposes.
+
+.. _ship_modules_with_playbooks:
+
+Using local Ansible modules
+---------------------------
+
+If a playbook has a :file:`./library` directory relative to its YAML file, this directory can be used to add Ansible modules that will
+automatically be in the Ansible module path. This is a great way to keep modules that go with a playbook together. This is shown
+in the directory structure example at the start of this section.
+
+.. seealso::
+
+ :ref:`yaml_syntax`
+ Learn about YAML syntax
+ :ref:`working_with_playbooks`
+ Review the basic playbook features
+ :ref:`list_of_collections`
+ Browse existing collections, modules, and plugins
+ :ref:`developing_modules`
+ Learn how to extend Ansible by writing your own modules
+ :ref:`intro_patterns`
+ Learn about how to select hosts
+ `GitHub examples directory <https://github.com/ansible/ansible-examples>`_
+ Complete playbook files from the github project source
+ `Mailing List <https://groups.google.com/group/ansible-project>`_
+ Questions? Help? Ideas? Stop by the list on Google Groups
diff --git a/docs/docsite/rst/user_guide/shared_snippets/SSH_password_prompt.txt b/docs/docsite/rst/user_guide/shared_snippets/SSH_password_prompt.txt
new file mode 100644
index 00000000..dc61ab38
--- /dev/null
+++ b/docs/docsite/rst/user_guide/shared_snippets/SSH_password_prompt.txt
@@ -0,0 +1,2 @@
+.. note::
+ Ansible does not expose a channel to allow communication between the user and the ssh process to accept a password manually to decrypt an ssh key when using the ssh connection plugin (which is the default). The use of ``ssh-agent`` is highly recommended.
diff --git a/docs/docsite/rst/user_guide/shared_snippets/with2loop.txt b/docs/docsite/rst/user_guide/shared_snippets/with2loop.txt
new file mode 100644
index 00000000..5217f942
--- /dev/null
+++ b/docs/docsite/rst/user_guide/shared_snippets/with2loop.txt
@@ -0,0 +1,205 @@
+In most cases, loops work best with the ``loop`` keyword instead of ``with_X`` style loops. The ``loop`` syntax is usually best expressed using filters instead of more complex use of ``query`` or ``lookup``.
+
+These examples show how to convert many common ``with_`` style loops to ``loop`` and filters.
+
+with_list
+---------
+
+``with_list`` is directly replaced by ``loop``.
+
+.. code-block:: yaml+jinja
+
+ - name: with_list
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ with_list:
+ - one
+ - two
+
+ - name: with_list -> loop
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ loop:
+ - one
+ - two
+
+with_items
+----------
+
+``with_items`` is replaced by ``loop`` and the ``flatten`` filter.
+
+.. code-block:: yaml+jinja
+
+ - name: with_items
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ with_items: "{{ items }}"
+
+ - name: with_items -> loop
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ loop: "{{ items|flatten(levels=1) }}"
+
+with_indexed_items
+------------------
+
+``with_indexed_items`` is replaced by ``loop``, the ``flatten`` filter and ``loop_control.index_var``.
+
+.. code-block:: yaml+jinja
+
+ - name: with_indexed_items
+ ansible.builtin.debug:
+ msg: "{{ item.0 }} - {{ item.1 }}"
+ with_indexed_items: "{{ items }}"
+
+ - name: with_indexed_items -> loop
+ ansible.builtin.debug:
+ msg: "{{ index }} - {{ item }}"
+ loop: "{{ items|flatten(levels=1) }}"
+ loop_control:
+ index_var: index
+
+with_flattened
+--------------
+
+``with_flattened`` is replaced by ``loop`` and the ``flatten`` filter.
+
+.. code-block:: yaml+jinja
+
+ - name: with_flattened
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ with_flattened: "{{ items }}"
+
+ - name: with_flattened -> loop
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ loop: "{{ items|flatten }}"
+
+with_together
+-------------
+
+``with_together`` is replaced by ``loop`` and the ``zip`` filter.
+
+.. code-block:: yaml+jinja
+
+ - name: with_together
+ ansible.builtin.debug:
+ msg: "{{ item.0 }} - {{ item.1 }}"
+ with_together:
+ - "{{ list_one }}"
+ - "{{ list_two }}"
+
+ - name: with_together -> loop
+ ansible.builtin.debug:
+ msg: "{{ item.0 }} - {{ item.1 }}"
+ loop: "{{ list_one|zip(list_two)|list }}"
+
+Another example with complex data
+
+.. code-block:: yaml+jinja
+
+ - name: with_together -> loop
+ ansible.builtin.debug:
+ msg: "{{ item.0 }} - {{ item.1 }} - {{ item.2 }}"
+ loop: "{{ data[0]|zip(*data[1:])|list }}"
+ vars:
+ data:
+ - ['a', 'b', 'c']
+ - ['d', 'e', 'f']
+ - ['g', 'h', 'i']
+
+with_dict
+---------
+
+``with_dict`` can be substituted by ``loop`` and either the ``dictsort`` or ``dict2items`` filters.
+
+.. code-block:: yaml+jinja
+
+ - name: with_dict
+ ansible.builtin.debug:
+ msg: "{{ item.key }} - {{ item.value }}"
+ with_dict: "{{ dictionary }}"
+
+ - name: with_dict -> loop (option 1)
+ ansible.builtin.debug:
+ msg: "{{ item.key }} - {{ item.value }}"
+ loop: "{{ dictionary|dict2items }}"
+
+ - name: with_dict -> loop (option 2)
+ ansible.builtin.debug:
+ msg: "{{ item.0 }} - {{ item.1 }}"
+ loop: "{{ dictionary|dictsort }}"
+
+with_sequence
+-------------
+
+``with_sequence`` is replaced by ``loop`` and the ``range`` function, and potentially the ``format`` filter.
+
+.. code-block:: yaml+jinja
+
+ - name: with_sequence
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ with_sequence: start=0 end=4 stride=2 format=testuser%02x
+
+ - name: with_sequence -> loop
+ ansible.builtin.debug:
+ msg: "{{ 'testuser%02x' | format(item) }}"
+ # range is exclusive of the end point
+ loop: "{{ range(0, 4 + 1, 2)|list }}"
+
+with_subelements
+----------------
+
+``with_subelements`` is replaced by ``loop`` and the ``subelements`` filter.
+
+.. code-block:: yaml+jinja
+
+ - name: with_subelements
+ ansible.builtin.debug:
+ msg: "{{ item.0.name }} - {{ item.1 }}"
+ with_subelements:
+ - "{{ users }}"
+ - mysql.hosts
+
+ - name: with_subelements -> loop
+ ansible.builtin.debug:
+ msg: "{{ item.0.name }} - {{ item.1 }}"
+ loop: "{{ users|subelements('mysql.hosts') }}"
+
+with_nested/with_cartesian
+--------------------------
+
+``with_nested`` and ``with_cartesian`` are replaced by loop and the ``product`` filter.
+
+.. code-block:: yaml+jinja
+
+ - name: with_nested
+ ansible.builtin.debug:
+ msg: "{{ item.0 }} - {{ item.1 }}"
+ with_nested:
+ - "{{ list_one }}"
+ - "{{ list_two }}"
+
+ - name: with_nested -> loop
+ ansible.builtin.debug:
+ msg: "{{ item.0 }} - {{ item.1 }}"
+ loop: "{{ list_one|product(list_two)|list }}"
+
+with_random_choice
+------------------
+
+``with_random_choice`` is replaced by use of the ``random`` filter, without the need for ``loop``.
+
+.. code-block:: yaml+jinja
+
+ - name: with_random_choice
+ ansible.builtin.debug:
+ msg: "{{ item }}"
+ with_random_choice: "{{ my_list }}"
+
+ - name: with_random_choice -> loop (No loop is needed here)
+ ansible.builtin.debug:
+ msg: "{{ my_list|random }}"
+ tags: random
diff --git a/docs/docsite/rst/user_guide/vault.rst b/docs/docsite/rst/user_guide/vault.rst
new file mode 100644
index 00000000..abb2fadb
--- /dev/null
+++ b/docs/docsite/rst/user_guide/vault.rst
@@ -0,0 +1,653 @@
+.. _vault:
+
+*************************************
+Encrypting content with Ansible Vault
+*************************************
+
+Ansible Vault encrypts variables and files so you can protect sensitive content such as passwords or keys rather than leaving it visible as plaintext in playbooks or roles. To use Ansible Vault you need one or more passwords to encrypt and decrypt content. If you store your vault passwords in a third-party tool such as a secret manager, you need a script to access them. Use the passwords with the :ref:`ansible-vault` command-line tool to create and view encrypted variables, create encrypted files, encrypt existing files, or edit, re-key, or decrypt files. You can then place encrypted content under source control and share it more safely.
+
+.. warning::
+ * Encryption with Ansible Vault ONLY protects 'data at rest'. Once the content is decrypted ('data in use'), play and plugin authors are responsible for avoiding any secret disclosure, see :ref:`no_log <keep_secret_data>` for details on hiding output and :ref:`vault_securing_editor` for security considerations on editors you use with Ansible Vault.
+
+You can use encrypted variables and files in ad-hoc commands and playbooks by supplying the passwords you used to encrypt them. You can modify your ``ansible.cfg`` file to specify the location of a password file or to always prompt for the password.
+
+.. contents::
+ :local:
+
+Managing vault passwords
+========================
+
+Managing your encrypted content is easier if you develop a strategy for managing your vault passwords. A vault password can be any string you choose. There is no special command to create a vault password. However, you need to keep track of your vault passwords. Each time you encrypt a variable or file with Ansible Vault, you must provide a password. When you use an encrypted variable or file in a command or playbook, you must provide the same password that was used to encrypt it. To develop a strategy for managing vault passwords, start with two questions:
+
+ * Do you want to encrypt all your content with the same password, or use different passwords for different needs?
+ * Where do you want to store your password or passwords?
+
+Choosing between a single password and multiple passwords
+---------------------------------------------------------
+
+If you have a small team or few sensitive values, you can use a single password for everything you encrypt with Ansible Vault. Store your vault password securely in a file or a secret manager as described below.
+
+If you have a larger team or many sensitive values, you can use multiple passwords. For example, you can use different passwords for different users or different levels of access. Depending on your needs, you might want a different password for each encrypted file, for each directory, or for each environment. For example, you might have a playbook that includes two vars files, one for the dev environment and one for the production environment, encrypted with two different passwords. When you run the playbook, select the correct vault password for the environment you are targeting, using a vault ID.
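+
+As a sketch of that setup, a playbook could pick the vars file by environment, and you could supply the matching password at run time. The group names, file names, and the ``target_env`` and ``db_password`` variables below are illustrative, not part of an existing example:
+
+.. code-block:: yaml+jinja
+
+    # Run with, for example:
+    #   ansible-playbook site.yml -e target_env=dev --vault-id dev@prompt
+    #   ansible-playbook site.yml -e target_env=prod --vault-id prod@prompt
+    - hosts: "{{ target_env }}_servers"
+      vars_files:
+        - "vars/{{ target_env }}-secrets.yml"   # each file encrypted with its own vault ID
+      tasks:
+        - name: Confirm the encrypted vars file was decrypted and loaded
+          ansible.builtin.debug:
+            msg: "Secrets for {{ target_env }} loaded: {{ db_password is defined }}"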
+
+.. _vault_ids:
+
+Managing multiple passwords with vault IDs
+------------------------------------------
+
+If you use multiple vault passwords, you can differentiate one password from another with vault IDs. You use the vault ID in three ways:
+
+ * Pass it with :option:`--vault-id <ansible-playbook --vault-id>` to the :ref:`ansible-vault` command when you create encrypted content
+ * Include it wherever you store the password for that vault ID (see :ref:`storing_vault_passwords`)
+ * Pass it with :option:`--vault-id <ansible-playbook --vault-id>` to the :ref:`ansible-playbook` command when you run a playbook that uses content you encrypted with that vault ID
+
+When you pass a vault ID as an option to the :ref:`ansible-vault` command, you add a label (a hint or nickname) to the encrypted content. This label documents which password you used to encrypt it. The encrypted variable or file includes the vault ID label in plain text in the header. The vault ID is the last element before the encrypted content. For example::
+
+    my_encrypted_var: !vault |
+ $ANSIBLE_VAULT;1.2;AES256;dev
+ 30613233633461343837653833666333643061636561303338373661313838333565653635353162
+ 3263363434623733343538653462613064333634333464660a663633623939393439316636633863
+ 61636237636537333938306331383339353265363239643939666639386530626330633337633833
+ 6664656334373166630a363736393262666465663432613932613036303963343263623137386239
+ 6330
+
+In addition to the label, you must provide a source for the related password. The source can be a prompt, a file, or a script, depending on how you are storing your vault passwords. The pattern looks like this:
+
+.. code-block:: bash
+
+ --vault-id label@source
+
+If your playbook uses multiple encrypted variables or files that you encrypted with different passwords, you must pass the vault IDs when you run that playbook. You can use :option:`--vault-id <ansible-playbook --vault-id>` by itself, with :option:`--vault-password-file <ansible-playbook --vault-password-file>`, or with :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>`. The pattern is the same as when you create encrypted content: include the label and the source for the matching password.
+
+See below for examples of encrypting content with vault IDs and using content encrypted with vault IDs. The :option:`--vault-id <ansible-playbook --vault-id>` option works with any Ansible command that interacts with vaults, including :ref:`ansible-vault`, :ref:`ansible-playbook`, and so on.
+
+Limitations of vault IDs
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Ansible does not enforce using the same password every time you use a particular vault ID label. You can encrypt different variables or files with the same vault ID label but different passwords. This usually happens when you type the password at a prompt and make a mistake. It is possible to use different passwords with the same vault ID label on purpose. For example, you could use each label as a reference to a class of passwords, rather than a single password. In this scenario, you must always know which specific password or file to use in context. However, you are more likely to encrypt two files with the same vault ID label and different passwords by mistake. If you encrypt two files with the same label but different passwords by accident, you can :ref:`rekey <rekeying_files>` one file to fix the issue.
+
+Enforcing vault ID matching
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+By default the vault ID label is only a hint to remind you which password you used to encrypt a variable or file. Ansible does not check that the vault ID in the header of the encrypted content matches the vault ID you provide when you use the content. Ansible decrypts all files and variables called by your command or playbook that are encrypted with the password you provide. To check the encrypted content and decrypt it only when the vault ID it contains matches the one you provide with ``--vault-id``, set the config option :ref:`DEFAULT_VAULT_ID_MATCH`. When you set :ref:`DEFAULT_VAULT_ID_MATCH`, each password is only used to decrypt data that was encrypted with the same label. This is efficient, predictable, and can reduce errors when different values are encrypted with different passwords.
+
+.. note::
+ Even with the :ref:`DEFAULT_VAULT_ID_MATCH` setting enabled, Ansible does not enforce using the same password every time you use a particular vault ID label.
+
+.. _storing_vault_passwords:
+
+Storing and accessing vault passwords
+-------------------------------------
+
+You can memorize your vault password, or manually copy vault passwords from any source and paste them at a command-line prompt, but most users store them securely and access them as needed from within Ansible. You have two options for storing vault passwords that work from within Ansible: in files, or in a third-party tool such as the system keyring or a secret manager. If you store your passwords in a third-party tool, you need a vault password client script to retrieve them from within Ansible.
+
+Storing passwords in files
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To store a vault password in a file, enter the password as a string on a single line in the file. Make sure the permissions on the file are appropriate. Do not add password files to source control.
+
+.. _vault_password_client_scripts:
+
+Storing passwords in third-party tools with vault password client scripts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can store your vault passwords on the system keyring, in a database, or in a secret manager and retrieve them from within Ansible using a vault password client script. Enter the password as a string on a single line. If your password has a vault ID, store it in a way that works with your password storage tool.
+
+To create a vault password client script:
+
+ * Create a file with a name ending in ``-client.py``
+ * Make the file executable
+ * Within the script itself:
+ * Print the passwords to standard output
+ * Accept a ``--vault-id`` option
+ * If the script prompts for data (for example, a database password), send the prompts to standard error
+
+When you run a playbook that uses vault passwords stored in a third-party tool, specify the script as the source within the ``--vault-id`` flag. For example:
+
+.. code-block:: bash
+
+ ansible-playbook --vault-id dev@contrib/vault/vault-keyring-client.py
+
+Ansible executes the client script with a ``--vault-id`` option so the script knows which vault ID label you specified. For example, a script loading passwords from a secret manager can use the vault ID label to pick either the 'dev' or 'prod' password. The example command above results in the following execution of the client script:
+
+.. code-block:: bash
+
+ contrib/vault/vault-keyring-client.py --vault-id dev
+
+For an example of a client script that loads passwords from the system keyring, see :file:`contrib/vault/vault-keyring-client.py`.
+
+
+Encrypting content with Ansible Vault
+=====================================
+
+Once you have a strategy for managing and storing vault passwords, you can start encrypting content. You can encrypt two types of content with Ansible Vault: variables and files. Encrypted content always includes the ``!vault`` tag, which tells Ansible and YAML that the content needs to be decrypted, and a ``|`` character, which allows multi-line strings. Encrypted content created with ``--vault-id`` also contains the vault ID label. For more details about the encryption process and the format of content encrypted with Ansible Vault, see :ref:`vault_format`. This table shows the main differences between encrypted variables and encrypted files:
+
+.. table::
+ :class: documentation-table
+
+ ====================== ================================= ====================================
+ .. Encrypted variables Encrypted files
+ ====================== ================================= ====================================
+ How much is encrypted? Variables within a plaintext file The entire file
+
+ When is it decrypted? On demand, only when needed Whenever loaded or referenced [#f1]_
+
+ What can be encrypted? Only variables Any structured data file
+
+ ====================== ================================= ====================================
+
+.. [#f1] Ansible cannot know if it needs content from an encrypted file unless it decrypts the file, so it decrypts all encrypted files referenced in your playbooks and roles.
+
+.. _encrypting_variables:
+.. _single_encrypted_variable:
+
+Encrypting individual variables with Ansible Vault
+--------------------------------------------------
+
+You can encrypt single values inside a YAML file using the :ref:`ansible-vault encrypt_string <ansible_vault_encrypt_string>` command. For one way to keep your vaulted variables safely visible, see :ref:`tip_for_variables_and_vaults`.
+
+Advantages and disadvantages of encrypting variables
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+With variable-level encryption, your files are still easily legible. You can mix plaintext and encrypted variables, even inline in a play or role. However, password rotation is not as simple as with file-level encryption. You cannot :ref:`rekey <rekeying_files>` encrypted variables. Also, variable-level encryption only works on variables. If you want to encrypt tasks or other content, you must encrypt the entire file.
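+
+For example, a vars file can keep readable plaintext values next to a vaulted one; only the vaulted value is protected at rest. This is a sketch: the variable names are illustrative and the ciphertext is abbreviated, not real vault output:
+
+.. code-block:: yaml+jinja
+
+    db_user: app                  # plaintext, easy to read and review
+    db_password: !vault |         # only this value is encrypted
+      $ANSIBLE_VAULT;1.2;AES256;dev
+      30613233633461343837653833666333643061636561303338373661313838...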
+
+.. _encrypt_string_for_use_in_yaml:
+
+Creating encrypted variables
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :ref:`ansible-vault encrypt_string <ansible_vault_encrypt_string>` command encrypts and formats any string you type (or copy or generate) into a format that can be included in a playbook, role, or variables file. To create a basic encrypted variable, pass three options to the :ref:`ansible-vault encrypt_string <ansible_vault_encrypt_string>` command:
+
+ * a source for the vault password (prompt, file, or script, with or without a vault ID)
+ * the string to encrypt
+ * the string name (the name of the variable)
+
+The pattern looks like this:
+
+.. code-block:: bash
+
+ ansible-vault encrypt_string <password_source> '<string_to_encrypt>' --name '<string_name_of_variable>'
+
+For example, to encrypt the string 'foobar' using the only password stored in 'a_password_file' and name the variable 'the_secret':
+
+.. code-block:: bash
+
+ ansible-vault encrypt_string --vault-password-file a_password_file 'foobar' --name 'the_secret'
+
+The command above creates this content::
+
+ the_secret: !vault |
+ $ANSIBLE_VAULT;1.1;AES256
+ 62313365396662343061393464336163383764373764613633653634306231386433626436623361
+ 6134333665353966363534333632666535333761666131620a663537646436643839616531643561
+ 63396265333966386166373632626539326166353965363262633030333630313338646335303630
+ 3438626666666137650a353638643435666633633964366338633066623234616432373231333331
+ 6564
+
+To encrypt the string 'foooodev', add the vault ID label 'dev' with the 'dev' vault password stored in 'a_password_file', and call the encrypted variable 'the_dev_secret':
+
+.. code-block:: bash
+
+ ansible-vault encrypt_string --vault-id dev@a_password_file 'foooodev' --name 'the_dev_secret'
+
+The command above creates this content::
+
+ the_dev_secret: !vault |
+ $ANSIBLE_VAULT;1.2;AES256;dev
+ 30613233633461343837653833666333643061636561303338373661313838333565653635353162
+ 3263363434623733343538653462613064333634333464660a663633623939393439316636633863
+ 61636237636537333938306331383339353265363239643939666639386530626330633337633833
+ 6664656334373166630a363736393262666465663432613932613036303963343263623137386239
+ 6330
+
+To encrypt the string 'letmein' read from stdin, add the vault ID 'dev' using the 'dev' vault password stored in `a_password_file`, and name the variable 'db_password':
+
+.. code-block:: bash
+
+    echo -n 'letmein' | ansible-vault encrypt_string --vault-id dev@a_password_file --stdin-name 'db_password'
+
+.. warning::
+
+ Typing secret content directly at the command line (without a prompt) leaves the secret string in your shell history. Do not do this outside of testing.
+
+The command above creates this output::
+
+ Reading plaintext input from stdin. (ctrl-d to end input, twice if your content does not already have a new line)
+ db_password: !vault |
+ $ANSIBLE_VAULT;1.2;AES256;dev
+ 61323931353866666336306139373937316366366138656131323863373866376666353364373761
+ 3539633234313836346435323766306164626134376564330a373530313635343535343133316133
+ 36643666306434616266376434363239346433643238336464643566386135356334303736353136
+ 6565633133366366360a326566323363363936613664616364623437336130623133343530333739
+ 3039
+
+To be prompted for a string to encrypt, encrypt it with the 'dev' vault password from 'a_password_file', name the variable 'new_user_password' and give it the vault ID label 'dev':
+
+.. code-block:: bash
+
+ ansible-vault encrypt_string --vault-id dev@a_password_file --stdin-name 'new_user_password'
+
+The command above triggers this prompt:
+
+.. code-block:: text
+
+ Reading plaintext input from stdin. (ctrl-d to end input, twice if your content does not already have a new line)
+
+Type the string to encrypt (for example, 'hunter2'), hit ctrl-d, and wait.
+
+.. warning::
+
+ Do not press ``Enter`` after supplying the string to encrypt. That will add a newline to the encrypted value.
+
+The sequence above creates this output::
+
+ new_user_password: !vault |
+ $ANSIBLE_VAULT;1.2;AES256;dev
+ 37636561366636643464376336303466613062633537323632306566653533383833366462366662
+ 6565353063303065303831323539656138653863353230620a653638643639333133306331336365
+ 62373737623337616130386137373461306535383538373162316263386165376131623631323434
+ 3866363862363335620a376466656164383032633338306162326639643635663936623939666238
+ 3161
+
+You can add the output from any of the examples above to any playbook, variables file, or role for future use. Encrypted variables are larger than plain-text variables, but they protect your sensitive content while leaving the rest of the playbook, variables file, or role in plain text so you can easily read it.
+
+Viewing encrypted variables
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can view the original value of an encrypted variable using the debug module. You must pass the password that was used to encrypt the variable. For example, if you stored the variable created by the last example above in a file called 'vars.yml', you could view the unencrypted value of that variable like this:
+
+.. code-block:: console
+
+ ansible localhost -m ansible.builtin.debug -a var="new_user_password" -e "@vars.yml" --vault-id dev@a_password_file
+
+ localhost | SUCCESS => {
+ "new_user_password": "hunter2"
+ }
+
+
+Encrypting files with Ansible Vault
+-----------------------------------
+
+Ansible Vault can encrypt any structured data file used by Ansible, including:
+
+ * group variables files from inventory
+ * host variables files from inventory
+ * variables files passed to ansible-playbook with ``-e @file.yml`` or ``-e @file.json``
+ * variables files loaded by ``include_vars`` or ``vars_files``
+ * variables files in roles
+ * defaults files in roles
+ * tasks files
+ * handlers files
+ * binary files or other arbitrary files
+
+The full file is encrypted in the vault.
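+
+For example, a play can reference a fully encrypted vars file exactly as it would reference a plaintext one; Ansible decrypts it when the play loads it, provided you supply the matching vault password. The file and variable names in this sketch are illustrative:
+
+.. code-block:: yaml+jinja
+
+    - hosts: webservers
+      vars_files:
+        - vars/secrets.yml        # the whole file is encrypted with ansible-vault
+      tasks:
+        - name: Use a variable defined in the encrypted file
+          ansible.builtin.debug:
+            msg: "api_token is defined: {{ api_token is defined }}"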
+
+.. note::
+
+ Ansible Vault uses an editor to create or modify encrypted files. See :ref:`vault_securing_editor` for some guidance on securing the editor.
+
+
+Advantages and disadvantages of encrypting files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+File-level encryption is easy to use. Password rotation for encrypted files is straightforward with the :ref:`rekey <rekeying_files>` command. Encrypting files can hide not only sensitive values, but also the names of the variables you use. However, with file-level encryption the contents of files are no longer easy to access and read. This may be a problem with encrypted tasks files. When encrypting a variables file, see :ref:`tip_for_variables_and_vaults` for one way to keep references to these variables in a non-encrypted file. Ansible always decrypts the entire encrypted file whenever it is loaded or referenced, because Ansible cannot know whether it needs the content unless it decrypts it.
+
+.. _creating_files:
+
+Creating encrypted files
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+To create a new encrypted data file called 'foo.yml' with the 'test' vault password from 'multi_password_file':
+
+.. code-block:: bash
+
+ ansible-vault create --vault-id test@multi_password_file foo.yml
+
+The tool launches an editor (whatever editor you have defined with $EDITOR; the default is vi). Add the content. When you close the editor session, the file is saved as encrypted data. The file header reflects the vault ID used to create it:
+
+.. code-block:: text
+
+    $ANSIBLE_VAULT;1.2;AES256;test
+
+To create a new encrypted data file with the vault ID 'my_new_password' assigned to it and be prompted for the password:
+
+.. code-block:: bash
+
+ ansible-vault create --vault-id my_new_password@prompt foo.yml
+
+Again, add content to the file in the editor and save. Be sure to store the new password you created at the prompt, so you can find it when you want to decrypt that file.
+
+.. _encrypting_files:
+
+Encrypting existing files
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To encrypt an existing file, use the :ref:`ansible-vault encrypt <ansible_vault_encrypt>` command. This command can operate on multiple files at once. For example:
+
+.. code-block:: bash
+
+ ansible-vault encrypt foo.yml bar.yml baz.yml
+
+To encrypt existing files with the 'project' ID and be prompted for the password:
+
+.. code-block:: bash
+
+ ansible-vault encrypt --vault-id project@prompt foo.yml bar.yml baz.yml
+
+
+.. _viewing_files:
+
+Viewing encrypted files
+^^^^^^^^^^^^^^^^^^^^^^^
+
+To view the contents of an encrypted file without editing it, you can use the :ref:`ansible-vault view <ansible_vault_view>` command:
+
+.. code-block:: bash
+
+ ansible-vault view foo.yml bar.yml baz.yml
+
+
+.. _editing_encrypted_files:
+
+Editing encrypted files
+^^^^^^^^^^^^^^^^^^^^^^^
+
+To edit an encrypted file in place, use the :ref:`ansible-vault edit <ansible_vault_edit>` command. This command decrypts the file to a temporary file, allows you to edit the content, then saves and re-encrypts the content and removes the temporary file when you close the editor. For example:
+
+.. code-block:: bash
+
+ ansible-vault edit foo.yml
+
+To edit a file encrypted with the ``vault2`` password file and assigned the vault ID ``pass2``:
+
+.. code-block:: bash
+
+ ansible-vault edit --vault-id pass2@vault2 foo.yml
+
+
+.. _rekeying_files:
+
+Changing the password and/or vault ID on encrypted files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To change the password on an encrypted file or files, use the :ref:`rekey <ansible_vault_rekey>` command:
+
+.. code-block:: bash
+
+ ansible-vault rekey foo.yml bar.yml baz.yml
+
+This command can rekey multiple data files at once and will ask for the original password and also the new password. To set a different ID for the rekeyed files, pass the new ID to ``--new-vault-id``. For example, to rekey a list of files encrypted with the 'preprod1' vault ID from the 'ppold' file to the 'preprod2' vault ID and be prompted for the new password:
+
+.. code-block:: bash
+
+ ansible-vault rekey --vault-id preprod1@ppold --new-vault-id preprod2@prompt foo.yml bar.yml baz.yml
+
+
+.. _decrypting_files:
+
+Decrypting encrypted files
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you have an encrypted file that you no longer want to keep encrypted, you can permanently decrypt it by running the :ref:`ansible-vault decrypt <ansible_vault_decrypt>` command. This command will save the file unencrypted to the disk, so be sure you do not want to :ref:`edit <ansible_vault_edit>` it instead.
+
+.. code-block:: bash
+
+ ansible-vault decrypt foo.yml bar.yml baz.yml
+
+
+.. _vault_securing_editor:
+
+Steps to secure your editor
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Ansible Vault relies on your configured editor, which can be a source of disclosures. Most editors have ways to prevent loss of data, but these normally rely on extra plain text files that can have a clear text copy of your secrets. Consult your editor documentation to configure the editor to avoid disclosing secure data. The following sections provide some guidance on common editors but should not be taken as a complete guide to securing your editor.
+
+
+vim
+...
+
+You can set the following ``vim`` options in command mode to avoid cases of disclosure. There may be more settings you need to modify to ensure security, especially when using plugins, so consult the ``vim`` documentation.
+
+
+1. Disable swapfiles that act like an autosave in case of crash or interruption.
+
+.. code-block:: text
+
+ set noswapfile
+
+2. Disable creation of backup files.
+
+.. code-block:: text
+
+ set nobackup
+ set nowritebackup
+
+3. Disable the viminfo file from copying data from your current session.
+
+.. code-block:: text
+
+ set viminfo=
+
+4. Disable copying to the system clipboard.
+
+.. code-block:: text
+
+ set clipboard=
+
+
+You can optionally add these settings in ``.vimrc`` for all files, or just specific paths or extensions. See the ``vim`` manual for details.
+
+
+Emacs
+......
+
+You can set the following Emacs options to avoid cases of disclosure. There may be more settings you need to modify to ensure security, especially when using plugins, so consult the Emacs documentation.
+
+1. Do not copy data to the system clipboard.
+
+.. code-block:: text
+
+ (setq x-select-enable-clipboard nil)
+
+2. Disable creation of backup files.
+
+.. code-block:: text
+
+ (setq make-backup-files nil)
+
+3. Disable autosave files.
+
+.. code-block:: text
+
+ (setq auto-save-default nil)
+
+
+.. _playbooks_vault:
+.. _providing_vault_passwords:
+
+Using encrypted variables and files
+===================================
+
+When you run a task or playbook that uses encrypted variables or files, you must provide the passwords to decrypt the variables or files. You can do this at the command line or in the playbook itself.
+
+Passing a single password
+-------------------------
+
+If all the encrypted variables and files your task or playbook needs use a single password, you can use the :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>` or :option:`--vault-password-file <ansible-playbook --vault-password-file>` cli options.
+
+To prompt for the password:
+
+.. code-block:: bash
+
+ ansible-playbook --ask-vault-pass site.yml
+
+To retrieve the password from the :file:`/path/to/my/vault-password-file` file:
+
+.. code-block:: bash
+
+ ansible-playbook --vault-password-file /path/to/my/vault-password-file site.yml
+
+To get the password from the vault password client script :file:`my-vault-password-client.py`:
+
+.. code-block:: bash
+
+    ansible-playbook --vault-password-file my-vault-password-client.py site.yml
+
+
+.. _specifying_vault_ids:
+
+Passing vault IDs
+-----------------
+
+You can also use the :option:`--vault-id <ansible-playbook --vault-id>` option to pass a single password with its vault label. This approach is clearer when multiple vaults are used within a single inventory.
+
+To prompt for the password for the 'dev' vault ID:
+
+.. code-block:: bash
+
+ ansible-playbook --vault-id dev@prompt site.yml
+
+To retrieve the password for the 'dev' vault ID from the :file:`dev-password` file:
+
+.. code-block:: bash
+
+ ansible-playbook --vault-id dev@dev-password site.yml
+
+To get the password for the 'dev' vault ID from the vault password client script :file:`my-vault-password-client.py`:
+
+.. code-block:: bash
+
+    ansible-playbook --vault-id dev@my-vault-password-client.py site.yml
+
+Passing multiple vault passwords
+--------------------------------
+
+If your task or playbook requires multiple encrypted variables or files that you encrypted with different vault IDs, you must pass multiple :option:`--vault-id <ansible-playbook --vault-id>` options, specifying the vault IDs ('dev', 'prod', 'cloud', 'db') and the sources for the passwords (prompt, file, script). For example, to use a 'dev' password read from a file and to be prompted for the 'prod' password:
+
+.. code-block:: bash
+
+ ansible-playbook --vault-id dev@dev-password --vault-id prod@prompt site.yml
+
+By default the vault ID labels (dev, prod and so on) are only hints. Ansible attempts to decrypt vault content with each password. The password with the same label as the encrypted data will be tried first; after that, each vault secret will be tried in the order it was provided on the command line.
+
+Where the encrypted data has no label, or the label does not match any of the provided labels, the passwords will be tried in the order they are specified. In the example above, the 'dev' password will be tried first, then the 'prod' password for cases where Ansible doesn't know which vault ID is used to encrypt something.
+
+Using ``--vault-id`` without a vault ID
+---------------------------------------
+
+The :option:`--vault-id <ansible-playbook --vault-id>` option can also be used without specifying a vault ID. This behavior is equivalent to :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>` or :option:`--vault-password-file <ansible-playbook --vault-password-file>`, so it is rarely used.
+
+For example, to use a password file :file:`dev-password`:
+
+.. code-block:: bash
+
+ ansible-playbook --vault-id dev-password site.yml
+
+To prompt for the password:
+
+.. code-block:: bash
+
+ ansible-playbook --vault-id @prompt site.yml
+
+To get the password from an executable script :file:`my-vault-password-client.py`:
+
+.. code-block:: bash
+
+    ansible-playbook --vault-id my-vault-password-client.py site.yml
+
+
+Configuring defaults for using encrypted content
+================================================
+
+Setting a default vault ID
+--------------------------
+
+If you use one vault ID more frequently than any other, you can set the config option :ref:`DEFAULT_VAULT_IDENTITY_LIST` to specify a default vault ID and password source. Ansible will use the default vault ID and source any time you do not specify :option:`--vault-id <ansible-playbook --vault-id>`. You can set multiple values for this option. Setting multiple values is equivalent to passing multiple :option:`--vault-id <ansible-playbook --vault-id>` cli options.
+
+Setting a default password source
+---------------------------------
+
+If you use one vault password file more frequently than any other, you can set the :ref:`DEFAULT_VAULT_PASSWORD_FILE` config option or the :envvar:`ANSIBLE_VAULT_PASSWORD_FILE` environment variable to specify that file. For example, if you set ``ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt``, Ansible will automatically search for the password in that file. This is useful if, for example, you use Ansible from a continuous integration system such as Jenkins.
+
+When are encrypted files made visible?
+======================================
+
+In general, content you encrypt with Ansible Vault remains encrypted after execution. However, there is one exception. If you pass an encrypted file as the ``src`` argument to the :ref:`copy <copy_module>`, :ref:`template <template_module>`, :ref:`unarchive <unarchive_module>`, :ref:`script <script_module>` or :ref:`assemble <assemble_module>` module, the file will not be encrypted on the target host (assuming you supply the correct vault password when you run the play). This behavior is intended and useful. You can encrypt a configuration file or template to avoid sharing the details of your configuration, but when you copy that configuration to servers in your environment, you want it to be decrypted so local users and processes can access it.
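+
+For example, in this sketch (the file paths are illustrative) the source file is encrypted in your repository, but the file written to the target host is plaintext:
+
+.. code-block:: yaml+jinja
+
+    - name: Deploy a configuration file that is vault-encrypted in source control
+      ansible.builtin.copy:
+        src: files/app.conf       # encrypted with ansible-vault
+        dest: /etc/app/app.conf   # arrives on the target decrypted
+        owner: root
+        group: root
+        mode: '0600'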
+
+.. _speeding_up_vault:
+
+Speeding up Ansible Vault
+=========================
+
+If you have many encrypted files, decrypting them at startup may cause a perceptible delay. To speed this up, install the cryptography package:
+
+.. code-block:: bash
+
+ pip install cryptography
+
+
+.. _vault_format:
+
+Format of files encrypted with Ansible Vault
+============================================
+
+Ansible Vault creates UTF-8 encoded text files. The file format includes a newline-terminated header. For example::
+
+ $ANSIBLE_VAULT;1.1;AES256
+
+or::
+
+ $ANSIBLE_VAULT;1.2;AES256;vault-id-label
+
+The header contains up to four elements, separated by semi-colons (``;``).
+
+ 1. The format ID (``$ANSIBLE_VAULT``). Currently ``$ANSIBLE_VAULT`` is the only valid format ID. The format ID identifies content that is encrypted with Ansible Vault (via vault.is_encrypted_file()).
+
+ 2. The vault format version (``1.X``). All supported versions of Ansible will currently default to '1.1' or '1.2' if a labeled vault ID is supplied. The '1.0' format is supported for reading only (and will be converted automatically to the '1.1' format on write). The format version is currently used as an exact string compare only (version numbers are not currently 'compared').
+
+ 3. The cipher algorithm used to encrypt the data (``AES256``). Currently ``AES256`` is the only supported cipher algorithm. Vault format 1.0 used 'AES', but current code always uses 'AES256'.
+
+ 4. The vault ID label used to encrypt the data (optional, ``vault-id-label``) For example, if you encrypt a file with ``--vault-id dev@prompt``, the vault-id-label is ``dev``.
+
+Note: In the future, the header could change. Fields after the format ID and format version depend on the format version, and future vault format versions may add more cipher algorithm options and/or additional fields.
+
+The rest of the content of the file is the 'vaulttext'. The vaulttext is a text armored version of the
+encrypted ciphertext. Each line is 80 characters wide, except for the last line which may be shorter.
+
+Ansible Vault payload format 1.1 - 1.2
+--------------------------------------
+
+The vaulttext is a concatenation of the ciphertext and a SHA256 digest, with the result 'hexlified'.
+
+'hexlify' refers to the ``hexlify()`` method of the Python Standard Library's `binascii <https://docs.python.org/3/library/binascii.html>`_ module.
+
+Specifically, the vaulttext is the hexlify()'ed result of:
+
+- hexlify()'ed string of the salt, followed by a newline (``0x0a``)
+- hexlify()'ed string of the HMAC, followed by a newline. The HMAC is:
+
+ - a `RFC2104 <https://www.ietf.org/rfc/rfc2104.txt>`_ style HMAC
+
+ - inputs are:
+
+ - The AES256 encrypted ciphertext
+ - A PBKDF2 key. This key, the cipher key, and the cipher IV are generated from:
+
+ - the salt, in bytes
+ - 10000 iterations
+ - SHA256() algorithm
+ - the first 32 bytes are the cipher key
+ - the second 32 bytes are the HMAC key
+ - remaining 16 bytes are the cipher IV
+
+- hexlify()'ed string of the ciphertext. The ciphertext is:
+
+ - AES256 encrypted data. The data is encrypted using:
+
+ - AES-CTR stream cipher
+ - cipher key
+ - IV
+ - a 128 bit counter block seeded from an integer IV
+ - the plaintext
+
+ - the original plaintext
+ - padding up to the AES256 blocksize. (The data used for padding is based on `RFC5652 <https://tools.ietf.org/html/rfc5652#section-6.3>`_)
diff --git a/docs/docsite/rst/user_guide/windows.rst b/docs/docsite/rst/user_guide/windows.rst
new file mode 100644
index 00000000..24277189
--- /dev/null
+++ b/docs/docsite/rst/user_guide/windows.rst
@@ -0,0 +1,21 @@
+.. _windows:
+
+Windows Guides
+``````````````
+
+The following sections provide information on managing
+Windows hosts with Ansible.
+
+Because Windows is a non-POSIX-compliant operating system, the way Ansible interacts
+with Windows hosts differs from the way it interacts with Linux/Unix hosts. These guides
+highlight some of those differences.
+
+.. toctree::
+ :maxdepth: 2
+
+ windows_setup
+ windows_winrm
+ windows_usage
+ windows_dsc
+ windows_performance
+ windows_faq
diff --git a/docs/docsite/rst/user_guide/windows_dsc.rst b/docs/docsite/rst/user_guide/windows_dsc.rst
new file mode 100644
index 00000000..40416305
--- /dev/null
+++ b/docs/docsite/rst/user_guide/windows_dsc.rst
@@ -0,0 +1,505 @@
+Desired State Configuration
+===========================
+
+.. contents:: Topics
+ :local:
+
+What is Desired State Configuration?
+````````````````````````````````````
+Desired State Configuration, or DSC, is a tool built into PowerShell that can
+be used to define a Windows host setup through code. The overall purpose of DSC
+is the same as Ansible's; it is just executed in a different manner. Ansible 2.4
+added the ``win_dsc`` module, which can be used to leverage existing DSC
+resources when interacting with a Windows host.
+
+More details on DSC can be viewed at `DSC Overview <https://docs.microsoft.com/en-us/powershell/scripting/dsc/overview/overview>`_.
+
+Host Requirements
+`````````````````
+To use the ``win_dsc`` module, a Windows host must have PowerShell v5.0 or
+newer installed. All supported hosts, except for Windows Server 2008 (non R2), can be
+upgraded to PowerShell v5.
+
+Once the PowerShell requirements have been met, using DSC is as simple as
+creating a task with the ``win_dsc`` module.
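+
+For example, a minimal task using the built-in ``WindowsFeature`` DSC resource might look like the following sketch (the feature name is only an illustration):
+
+.. code-block:: yaml+jinja
+
+    - name: Ensure the Telnet client feature is removed through DSC
+      win_dsc:
+        resource_name: WindowsFeature
+        Name: Telnet-Client
+        Ensure: Absent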
+
+Why Use DSC?
+````````````
+DSC and Ansible modules have a common goal: to define and ensure the state of a
+resource. Because of
+this, resources like the DSC `File resource <https://docs.microsoft.com/en-us/powershell/scripting/dsc/reference/resources/windows/fileresource>`_
+and Ansible ``win_file`` can be used to achieve the same result. Deciding which to use depends
+on the scenario.
+
+Reasons for using an Ansible module over a DSC resource:
+
+* The host does not support PowerShell v5.0, or it cannot easily be upgraded
+* The DSC resource does not offer a feature present in an Ansible module. For example,
+  ``win_regedit`` can manage the ``REG_NONE`` property type, while the DSC
+ ``Registry`` resource cannot
+* DSC resources have limited check mode support, while some Ansible modules have
+ better checks
+* DSC resources do not support diff mode, while some Ansible modules do
+* Custom resources require further installation steps to be run on the host
+  beforehand, while Ansible modules are built into Ansible
+* There are bugs in a DSC resource where an Ansible module works
+
+Reasons for using a DSC resource over an Ansible module:
+
+* The Ansible module does not support a feature present in a DSC resource
+* There is no Ansible module available
+* There are bugs in an existing Ansible module
+
+In the end, it doesn't matter whether the task is performed with DSC or an
+Ansible module; what matters is that the task is performed correctly and the
+playbooks are still readable. If you have more experience with DSC than with Ansible
+and it does the job, just use DSC for that task.
+
+How to Use DSC?
+```````````````
+The ``win_dsc`` module takes free-form options, so the options it accepts change
+according to the resource it is managing. A list of built-in resources can be
+found at `resources <https://docs.microsoft.com/en-us/powershell/scripting/dsc/resources/resources>`_.
+
+Using the `Registry <https://docs.microsoft.com/en-us/powershell/scripting/dsc/reference/resources/windows/registryresource>`_
+resource as an example, this is the DSC definition as documented by Microsoft:
+
+.. code-block:: powershell
+
+ Registry [string] #ResourceName
+ {
+ Key = [string]
+ ValueName = [string]
+        [ Ensure = [string] { Present | Absent } ]
+ [ Force = [bool] ]
+ [ Hex = [bool] ]
+ [ DependsOn = [string[]] ]
+ [ ValueData = [string[]] ]
+ [ ValueType = [string] { Binary | Dword | ExpandString | MultiString | Qword | String } ]
+ }
+
+When defining the task, ``resource_name`` must be set to the DSC resource being
+used - in this case the ``resource_name`` should be set to ``Registry``. The
+``module_version`` can refer to a specific version of the DSC resource
+installed; if left blank it will default to the latest version. The other
+options are parameters that are used to define the resource, such as ``Key`` and
+``ValueName``. While the options in the task are not case sensitive,
+keeping the case as-is is recommended because it makes it easier to distinguish DSC
+resource options from Ansible's ``win_dsc`` options.
+
+This is what the Ansible task version of the above DSC Registry resource would look like:
+
+.. code-block:: yaml+jinja
+
+ - name: Use win_dsc module with the Registry DSC resource
+ win_dsc:
+ resource_name: Registry
+ Ensure: Present
+ Key: HKEY_LOCAL_MACHINE\SOFTWARE\ExampleKey
+ ValueName: TestValue
+ ValueData: TestData
+
+Starting in Ansible 2.8, the ``win_dsc`` module automatically validates the
+input options from Ansible with the DSC definition. This means Ansible will
+fail if the option name is incorrect, a mandatory option is not set, or the
+value is not a valid choice. When running Ansible with a verbosity level of 3
+or more (``-vvv``), the return value will contain the possible invocation
+options based on the ``resource_name`` specified. Here is an example of the
+invocation output for the above ``Registry`` task:
+
+.. code-block:: ansible-output
+
+ changed: [2016] => {
+ "changed": true,
+ "invocation": {
+ "module_args": {
+ "DependsOn": null,
+ "Ensure": "Present",
+ "Force": null,
+ "Hex": null,
+ "Key": "HKEY_LOCAL_MACHINE\\SOFTWARE\\ExampleKey",
+ "PsDscRunAsCredential_password": null,
+ "PsDscRunAsCredential_username": null,
+ "ValueData": [
+ "TestData"
+ ],
+ "ValueName": "TestValue",
+ "ValueType": null,
+ "module_version": "latest",
+ "resource_name": "Registry"
+ }
+ },
+ "module_version": "1.1",
+ "reboot_required": false,
+ "verbose_set": [
+ "Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = ResourceSet,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' = root/Microsoft/Windows/DesiredStateConfiguration'.",
+ "An LCM method call arrived from computer SERVER2016 with user sid S-1-5-21-3088887838-4058132883-1884671576-1105.",
+ "[SERVER2016]: LCM: [ Start Set ] [[Registry]DirectResourceAccess]",
+ "[SERVER2016]: [[Registry]DirectResourceAccess] (SET) Create registry key 'HKLM:\\SOFTWARE\\ExampleKey'",
+ "[SERVER2016]: [[Registry]DirectResourceAccess] (SET) Set registry key value 'HKLM:\\SOFTWARE\\ExampleKey\\TestValue' to 'TestData' of type 'String'",
+ "[SERVER2016]: LCM: [ End Set ] [[Registry]DirectResourceAccess] in 0.1930 seconds.",
+ "[SERVER2016]: LCM: [ End Set ] in 0.2720 seconds.",
+ "Operation 'Invoke CimMethod' complete.",
+ "Time taken for configuration job to complete is 0.402 seconds"
+ ],
+ "verbose_test": [
+ "Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = ResourceTest,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' = root/Microsoft/Windows/DesiredStateConfiguration'.",
+ "An LCM method call arrived from computer SERVER2016 with user sid S-1-5-21-3088887838-4058132883-1884671576-1105.",
+ "[SERVER2016]: LCM: [ Start Test ] [[Registry]DirectResourceAccess]",
+ "[SERVER2016]: [[Registry]DirectResourceAccess] Registry key 'HKLM:\\SOFTWARE\\ExampleKey' does not exist",
+ "[SERVER2016]: LCM: [ End Test ] [[Registry]DirectResourceAccess] False in 0.2510 seconds.",
+ "[SERVER2016]: LCM: [ End Set ] in 0.3310 seconds.",
+ "Operation 'Invoke CimMethod' complete.",
+ "Time taken for configuration job to complete is 0.475 seconds"
+ ]
+ }
+
+The ``invocation.module_args`` key shows the actual values that were set as
+well as other possible values that were not set. Unfortunately this will not
+show the default value for a DSC property, only what was set from the Ansible
+task. Any ``*_password`` option will be masked in the output for security
+reasons. If there are any other sensitive module options, set ``no_log: True``
+on the task to stop all task output from being logged.
+
+
+Property Types
+--------------
+Each DSC resource property has a type that is associated with it. Ansible
+will try to convert the defined options to the correct type during execution.
+For simple types like ``[string]`` and ``[bool]`` this is a simple operation,
+but complex types like ``[PSCredential]`` or arrays (like ``[string[]]``) this
+require certain rules.
+
+PSCredential
+++++++++++++
+A ``[PSCredential]`` object is used to store credentials in a secure way, but
+Ansible has no way to serialize this over JSON. To set a DSC PSCredential property,
+the definition of that parameter should have two entries that are suffixed with
+``_username`` and ``_password`` for the username and password respectively.
+For example:
+
+.. code-block:: yaml+jinja
+
+ PsDscRunAsCredential_username: '{{ ansible_user }}'
+ PsDscRunAsCredential_password: '{{ ansible_password }}'
+
+ SourceCredential_username: AdminUser
+ SourceCredential_password: PasswordForAdminUser
+
+.. Note:: On versions of Ansible older than 2.8, you should set ``no_log: yes``
+ on the task definition in Ansible to ensure any credentials used are not
+ stored in any log file or console output.
+
+A ``[PSCredential]`` is defined with ``EmbeddedInstance("MSFT_Credential")`` in
+a DSC resource MOF definition.
+
+CimInstance Type
+++++++++++++++++
+A ``[CimInstance]`` object is used by DSC to store a dictionary object based on
+a custom class defined by that resource. Defining a value that takes in a
+``[CimInstance]`` in YAML is the same as defining a dictionary in YAML.
+For example, to define a ``[CimInstance]`` value in Ansible:
+
+.. code-block:: yaml+jinja
+
+ # [CimInstance]AuthenticationInfo == MSFT_xWebAuthenticationInformation
+ AuthenticationInfo:
+ Anonymous: no
+ Basic: yes
+ Digest: no
+ Windows: yes
+
+In the above example, the CIM instance is a representation of the class
+`MSFT_xWebAuthenticationInformation <https://github.com/dsccommunity/xWebAdministration/blob/master/source/DSCResources/MSFT_xWebSite/MSFT_xWebSite.schema.mof>`_.
+This class accepts four boolean variables, ``Anonymous``, ``Basic``,
+``Digest``, and ``Windows``. The keys to use in a ``[CimInstance]`` depend on
+the class it represents. Please read through the documentation of the resource
+to determine the keys that can be used and the types of each key value. The
+class definition is typically located in the ``<resource name>.schema.mof``.
+
+HashTable Type
+++++++++++++++
+A ``[HashTable]`` object is also a dictionary but does not have a strict set of
+keys that can/need to be defined. Like a ``[CimInstance]``, define it like a
+normal dictionary value in YAML. A ``[HashTable]`` is defined with
+``EmbeddedInstance("MSFT_KeyValuePair")`` in a DSC resource MOF definition.
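+
+For example, a ``[HashTable]`` property is written as a normal mapping in the task. In this sketch, ``Properties`` stands in for whichever hash table parameter the resource you are using actually exposes; check the resource documentation for the real name and keys:
+
+.. code-block:: yaml+jinja
+
+    # [HashTable]
+    Properties:
+      AcceptLicense: 'True'
+      InstallPath: C:\Program Files\ExampleApp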
+
+Arrays
+++++++
+Simple type arrays like ``[string[]]`` or ``[UInt32[]]`` are defined as a list
+or as a comma-separated string, which is then cast to its type. Using a list
+is recommended because the values are not manually parsed by the ``win_dsc``
+module before being passed to the DSC engine. For example, to define a simple
+type array in Ansible:
+
+.. code-block:: yaml+jinja
+
+ # [string[]]
+ ValueData: entry1, entry2, entry3
+ ValueData:
+ - entry1
+ - entry2
+ - entry3
+
+ # [UInt32[]]
+ ReturnCode: 0,3010
+ ReturnCode:
+ - 0
+ - 3010
+
+Complex type arrays like ``[CimInstance[]]`` (array of dicts), can be defined
+like this example:
+
+.. code-block:: yaml+jinja
+
+ # [CimInstance[]]BindingInfo == MSFT_xWebBindingInformation
+ BindingInfo:
+ - Protocol: https
+ Port: 443
+ CertificateStoreName: My
+ CertificateThumbprint: C676A89018C4D5902353545343634F35E6B3A659
+ HostName: DSCTest
+ IPAddress: '*'
+ SSLFlags: 1
+ - Protocol: http
+ Port: 80
+ IPAddress: '*'
+
+The above example is an array with two values of the class `MSFT_xWebBindingInformation <https://github.com/dsccommunity/xWebAdministration/blob/master/source/DSCResources/MSFT_xWebSite/MSFT_xWebSite.schema.mof>`_.
+When defining a ``[CimInstance[]]``, be sure to read the resource documentation
+to find out what keys to use in the definition.
+
+DateTime
+++++++++
+A ``[DateTime]`` object is a DateTime string representing the date and time in
+the `ISO 8601 <https://www.w3.org/TR/NOTE-datetime>`_ date time format. The
+value for a ``[DateTime]`` field should be quoted in YAML to ensure the string
+is properly serialized to the Windows host. Here is an example of how to define
+a ``[DateTime]`` value in Ansible:
+
+.. code-block:: yaml+jinja
+
+ # As UTC-0 (No timezone)
+ DateTime: '2019-02-22T13:57:31.2311892+00:00'
+
+ # As UTC+4
+ DateTime: '2019-02-22T17:57:31.2311892+04:00'
+
+ # As UTC-4
+ DateTime: '2019-02-22T09:57:31.2311892-04:00'
+
+All the values above are equal to a UTC date time of February 22nd 2019 at
+1:57pm with 31 seconds and 2311892 milliseconds.
+
+Run As Another User
+-------------------
+By default, DSC runs each resource as the SYSTEM account and not the account
+that Ansible uses to run the module. This means that resources that are dynamically
+loaded based on a user profile, like the ``HKEY_CURRENT_USER`` registry hive,
+will be loaded under the ``SYSTEM`` profile. The ``PsDscRunAsCredential``
+parameter can be set for every DSC resource to force the DSC engine to run under
+a different account. As ``PsDscRunAsCredential`` has a type of ``PSCredential``,
+it is defined with the ``_username`` and ``_password`` suffix.
+
+Using the Registry resource type as an example, this is how to define a task
+to access the ``HKEY_CURRENT_USER`` hive of the Ansible user:
+
+.. code-block:: yaml+jinja
+
+ - name: Use win_dsc with PsDscRunAsCredential to run as a different user
+ win_dsc:
+ resource_name: Registry
+ Ensure: Present
+ Key: HKEY_CURRENT_USER\ExampleKey
+ ValueName: TestValue
+ ValueData: TestData
+ PsDscRunAsCredential_username: '{{ ansible_user }}'
+ PsDscRunAsCredential_password: '{{ ansible_password }}'
+ no_log: yes
+
+Custom DSC Resources
+````````````````````
+DSC resources are not limited to the built-in options from Microsoft. Custom
+modules can be installed to manage other resources that are not usually available.
+
+Finding Custom DSC Resources
+----------------------------
+You can use the
+`PSGallery <https://www.powershellgallery.com/>`_ to find custom resources, along with documentation on how to install them on a Windows host.
+
+The ``Find-DscResource`` cmdlet can also be used to find custom resources. For example:
+
+.. code-block:: powershell
+
+ # Find all DSC resources in the configured repositories
+ Find-DscResource
+
+ # Find all DSC resources that relate to SQL
+ Find-DscResource -ModuleName "*sql*"
+
+.. Note:: DSC resources developed by Microsoft that start with ``x`` are
+   experimental and come with no support.
+
+Installing a Custom Resource
+----------------------------
+There are three ways that a DSC resource can be installed on a host:
+
+* Manually with the ``Install-Module`` cmdlet
+* Using the ``win_psmodule`` Ansible module
+* Saving the module manually and copying it to another host
+
+This is an example of installing the ``xWebAdministration`` resources using
+``win_psmodule``:
+
+.. code-block:: yaml+jinja
+
+ - name: Install xWebAdministration DSC resource
+ win_psmodule:
+ name: xWebAdministration
+ state: present
+
+Once installed, the ``win_dsc`` module can use the resource by referencing it
+with the ``resource_name`` option.
+
+The first two methods above only work when the host has access to the internet.
+When a host does not have internet access, the module must first be installed
+using the methods above on another host with internet access and then copied
+across. To save a module to a local filepath, the following PowerShell cmdlet
+can be run::
+
+ Save-Module -Name xWebAdministration -Path C:\temp
+
+This will create a folder called ``xWebAdministration`` in ``C:\temp`` which
+can be copied to any host. For PowerShell to see this offline resource, it must
+be copied to a directory set in the ``PSModulePath`` environment variable.
+In most cases the path ``C:\Program Files\WindowsPowerShell\Modules`` is set
+through this variable, but the ``win_path`` module can be used to add different
+paths.
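+
+For example, a sketch of using ``win_path`` to add an offline module folder to the machine-wide ``PSModulePath`` (the folder path is illustrative):
+
+.. code-block:: yaml+jinja
+
+    - name: Add a copied DSC module folder to PSModulePath
+      win_path:
+        name: PSModulePath
+        scope: machine
+        state: present
+        elements:
+          - C:\DSCModules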
+
+Examples
+````````
+Extract a zip file
+------------------
+
+.. code-block:: yaml+jinja
+
+ - name: Extract a zip file
+ win_dsc:
+ resource_name: Archive
+ Destination: C:\temp\output
+ Path: C:\temp\zip.zip
+ Ensure: Present
+
+Create a directory
+------------------
+
+.. code-block:: yaml+jinja
+
+ - name: Create file with some text
+ win_dsc:
+ resource_name: File
+ DestinationPath: C:\temp\file
+ Contents: |
+ Hello
+ World
+ Ensure: Present
+ Type: File
+
+    - name: Create a directory that is hidden and set with the System attribute
+ win_dsc:
+ resource_name: File
+ DestinationPath: C:\temp\hidden-directory
+ Attributes: Hidden,System
+ Ensure: Present
+ Type: Directory
+
+Interact with Azure
+-------------------
+
+.. code-block:: yaml+jinja
+
+ - name: Install xAzure DSC resources
+ win_psmodule:
+ name: xAzure
+ state: present
+
+ - name: Create virtual machine in Azure
+ win_dsc:
+ resource_name: xAzureVM
+ ImageName: a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201409.01-en.us-127GB.vhd
+ Name: DSCHOST01
+ ServiceName: ServiceName
+ StorageAccountName: StorageAccountName
+ InstanceSize: Medium
+ Windows: yes
+ Ensure: Present
+ Credential_username: '{{ ansible_user }}'
+ Credential_password: '{{ ansible_password }}'
+
+Setup IIS Website
+-----------------
+
+.. code-block:: yaml+jinja
+
+ - name: Install xWebAdministration module
+ win_psmodule:
+ name: xWebAdministration
+ state: present
+
+ - name: Install IIS features that are required
+ win_dsc:
+ resource_name: WindowsFeature
+ Name: '{{ item }}'
+ Ensure: Present
+ loop:
+ - Web-Server
+ - Web-Asp-Net45
+
+ - name: Setup web content
+ win_dsc:
+ resource_name: File
+ DestinationPath: C:\inetpub\IISSite\index.html
+ Type: File
+ Contents: |
+ <html>
+ <head><title>IIS Site</title></head>
+ <body>This is the body</body>
+ </html>
+ Ensure: present
+
+ - name: Create new website
+ win_dsc:
+ resource_name: xWebsite
+ Name: NewIISSite
+ State: Started
+ PhysicalPath: C:\inetpub\IISSite\index.html
+ BindingInfo:
+ - Protocol: https
+ Port: 8443
+ CertificateStoreName: My
+ CertificateThumbprint: C676A89018C4D5902353545343634F35E6B3A659
+ HostName: DSCTest
+ IPAddress: '*'
+ SSLFlags: 1
+ - Protocol: http
+ Port: 8080
+ IPAddress: '*'
+ AuthenticationInfo:
+ Anonymous: no
+ Basic: yes
+ Digest: no
+ Windows: yes
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ :ref:`List of Windows Modules <windows_modules>`
+ Windows specific module list, all implemented in PowerShell
+ `User Mailing List <https://groups.google.com/group/ansible-project>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/windows_faq.rst b/docs/docsite/rst/user_guide/windows_faq.rst
new file mode 100644
index 00000000..75e99d2e
--- /dev/null
+++ b/docs/docsite/rst/user_guide/windows_faq.rst
@@ -0,0 +1,236 @@
+.. _windows_faq:
+
+Windows Frequently Asked Questions
+==================================
+
+Here are some commonly asked questions in regards to Ansible and Windows and
+their answers.
+
+.. note:: This document covers questions about managing Microsoft Windows servers with Ansible.
+ For questions about Ansible Core, please see the
+ :ref:`general FAQ page <ansible_faq>`.
+
+Does Ansible work with Windows XP or Server 2003?
+``````````````````````````````````````````````````
+Ansible does not work with Windows XP or Server 2003 hosts. Ansible does work with these Windows operating system versions:
+
+* Windows Server 2008 :sup:`1`
+* Windows Server 2008 R2 :sup:`1`
+* Windows Server 2012
+* Windows Server 2012 R2
+* Windows Server 2016
+* Windows Server 2019
+* Windows 7 :sup:`1`
+* Windows 8.1
+* Windows 10
+
+1 - See the :ref:`Server 2008 FAQ <windows_faq_server2008>` entry for more details.
+
+Ansible also has minimum PowerShell version requirements - please see
+:ref:`windows_setup` for the latest information.
+
+.. _windows_faq_server2008:
+
+Are Server 2008, 2008 R2 and Windows 7 supported?
+`````````````````````````````````````````````````
+Microsoft ended Extended Support for these versions of Windows on January 14th, 2020, and Ansible deprecated official support in the 2.10 release. No new feature development will occur targeting these operating systems, and automated testing has ceased. However, existing modules and features will likely continue to work, and simple pull requests to resolve issues with these Windows versions may be accepted.
+
+Can I manage Windows Nano Server with Ansible?
+``````````````````````````````````````````````
+Ansible does not currently work with Windows Nano Server, since it does
+not have access to the full .NET Framework that is used by the majority of the
+modules and internal components.
+
+Can Ansible run on Windows?
+```````````````````````````
+No, Ansible can only manage Windows hosts. Ansible cannot run on a Windows host
+natively, though it can run under the Windows Subsystem for Linux (WSL).
+
+.. note:: The Windows Subsystem for Linux is not supported by Ansible and
+ should not be used for production systems.
+
+To install Ansible on WSL, the following commands
+can be run in the bash terminal:
+
+.. code-block:: shell
+
+ sudo apt-get update
+ sudo apt-get install python-pip git libffi-dev libssl-dev -y
+ pip install --user ansible pywinrm
+
+To run Ansible from source instead of a release on the WSL, simply uninstall the pip
+installed version and then clone the git repo.
+
+.. code-block:: shell
+
+ pip uninstall ansible -y
+ git clone https://github.com/ansible/ansible.git
+ source ansible/hacking/env-setup
+
+ # To enable Ansible on login, run the following
+ echo ". ~/ansible/hacking/env-setup -q' >> ~/.bashrc
+
+Can I use SSH keys to authenticate to Windows hosts?
+````````````````````````````````````````````````````
+You cannot use SSH keys with the WinRM or PSRP connection plugins.
+These connection plugins use X509 certificates for authentication instead
+of the SSH key pairs that SSH uses.
+
+The way X509 certificates are generated and mapped to a user is different
+from the SSH implementation; consult the :ref:`windows_winrm` documentation for
+more information.
+
+Ansible 2.8 has added an experimental option to use the SSH connection plugin,
+which uses SSH keys for authentication, for Windows servers. See :ref:`this question <windows_faq_ssh>`
+for more information.
+
+.. _windows_faq_winrm:
+
+Why can I run a command locally that does not work under Ansible?
+`````````````````````````````````````````````````````````````````
+Ansible executes commands through WinRM. These processes are different from
+running a command locally in these ways:
+
+* Unless using an authentication option like CredSSP or Kerberos with
+ credential delegation, the WinRM process does not have the ability to
+ delegate the user's credentials to a network resource, causing ``Access is
+ Denied`` errors.
+
+* All processes run under WinRM are in a non-interactive session. Applications
+ that require an interactive session will not work.
+
+* When running through WinRM, Windows restricts access to internal Windows
+ APIs like the Windows Update API and DPAPI, which some installers and
+ programs rely on.
+
+Some ways to bypass these restrictions are to:
+
+* Use ``become``, which runs a command as it would when run locally. This will
+ bypass most WinRM restrictions, as Windows is unaware the process is running
+ under WinRM when ``become`` is used. See the :ref:`become` documentation for more
+ information.
+
+* Use a scheduled task, which can be created with ``win_scheduled_task``. Like
+ ``become``, it will bypass all WinRM restrictions, but it can only be used to run
+ commands, not modules.
+
+* Use ``win_psexec`` to run a command on the host. PSExec does not use WinRM
+ and so will bypass any of the restrictions.
+
+* To access network resources without any of these workarounds, you can use
+ CredSSP or Kerberos with credential delegation enabled.
+
+See :ref:`become` for more information on how to use become. The limitations section at
+:ref:`windows_winrm` has more details about WinRM limitations.
+
+This program won't install on Windows with Ansible
+``````````````````````````````````````````````````
+See :ref:`this question <windows_faq_winrm>` for more information about WinRM limitations.
+
+What Windows modules are available?
+```````````````````````````````````
+Most of the Ansible modules in Ansible Core are written for a combination of
+Linux/Unix machines and arbitrary web services. These modules are written in
+Python and most of them do not work on Windows.
+
+Because of this, there are dedicated Windows modules that are written in
+PowerShell and are meant to be run on Windows hosts. A list of these modules
+can be found :ref:`here <windows_modules>`.
+
+In addition, the following Ansible Core modules/action-plugins work with Windows:
+
+* add_host
+* assert
+* async_status
+* debug
+* fail
+* fetch
+* group_by
+* include
+* include_role
+* include_vars
+* meta
+* pause
+* raw
+* script
+* set_fact
+* set_stats
+* setup
+* slurp
+* template (also: win_template)
+* wait_for_connection
+
+Can I run Python modules on Windows hosts?
+``````````````````````````````````````````
+No, the WinRM connection protocol is set to use PowerShell modules, so Python
+modules will not work. A way to bypass this issue is to use
+``delegate_to: localhost`` to run a Python module on the Ansible controller.
+This is useful if during a playbook, an external service needs to be contacted
+and there is no equivalent Windows module available.
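+
+For example, a task along the following lines could contact an external REST
+service from the controller while the rest of the play targets Windows hosts
+(the URL and module arguments below are illustrative only):
+
+.. code-block:: yaml+jinja
+
+    - name: Notify an external service from the Ansible controller
+      uri:
+        url: https://deployment.example.com/api/notify
+        method: POST
+        body_format: json
+        body:
+          host: '{{ inventory_hostname }}'
+      delegate_to: localhost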
+
+.. _windows_faq_ssh:
+
+Can I connect to Windows hosts over SSH?
+````````````````````````````````````````
+Ansible 2.8 has added an experimental option to use the SSH connection plugin
+to manage Windows hosts. To connect to Windows hosts over SSH, you must install and configure the `Win32-OpenSSH <https://github.com/PowerShell/Win32-OpenSSH>`_
+fork that is in development with Microsoft on
+the Windows host(s). While most of the basics should work with SSH,
+``Win32-OpenSSH`` is rapidly changing, with new features added and bugs
+fixed in every release. It is highly recommended that you `install <https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH>`_ the latest release
+of ``Win32-OpenSSH`` from the GitHub Releases page when using it with Ansible
+on Windows hosts.
+
+To use SSH as the connection to a Windows host, set the following variables in
+the inventory::
+
+ ansible_connection=ssh
+
+ # Set either cmd or powershell not both
+ ansible_shell_type=cmd
+ # ansible_shell_type=powershell
+
+The value for ``ansible_shell_type`` should either be ``cmd`` or ``powershell``.
+Use ``cmd`` if the ``DefaultShell`` has not been configured on the SSH service
+and ``powershell`` if that has been set as the ``DefaultShell``.
+
+Why is connecting to a Windows host via SSH failing?
+````````````````````````````````````````````````````
+Unless you are using ``Win32-OpenSSH`` as described above, you must connect to
+Windows hosts using :ref:`windows_winrm`. If your Ansible output indicates that
+SSH was used, either you did not set the connection vars properly or the host is not inheriting them correctly.
+
+Make sure ``ansible_connection: winrm`` is set in the inventory for the Windows
+host(s).
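+
+A minimal sketch of group vars that force WinRM for a group of Windows hosts
+might look like this (the group name and credential values are placeholders):
+
+.. code-block:: yaml+jinja
+
+    # group_vars/windows.yml
+    ansible_connection: winrm
+    ansible_user: Administrator
+    ansible_password: SecretPasswordGoesHere
+    ansible_winrm_transport: ntlm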
+
+Why are my credentials being rejected?
+``````````````````````````````````````
+This can be due to a myriad of reasons unrelated to incorrect credentials.
+
+See HTTP 401/Credentials Rejected at :ref:`windows_setup` for a more detailed
+guide to what this could mean.
+
+Why am I getting an error SSL CERTIFICATE_VERIFY_FAILED?
+````````````````````````````````````````````````````````
+When the Ansible controller is running on Python 2.7.9+ or an older version of Python that
+has backported SSLContext (like Python 2.7.5 on RHEL 7), the controller will attempt to
+validate the certificate WinRM is using for an HTTPS connection. If the
+certificate cannot be validated (such as in the case of a self signed cert), it will
+fail the verification process.
+
+To ignore certificate validation, add
+``ansible_winrm_server_cert_validation: ignore`` to inventory for the Windows
+host.
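+
+For example, the host vars for a lab host that uses a self-signed certificate
+could look like this sketch (the values are placeholders):
+
+.. code-block:: yaml+jinja
+
+    # host_vars/winserver01.yml
+    ansible_connection: winrm
+    ansible_port: 5986
+    ansible_winrm_server_cert_validation: ignore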
+
+.. seealso::
+
+ :ref:`windows`
+ The Windows documentation index
+ :ref:`about_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ `User Mailing List <https://groups.google.com/group/ansible-project>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/windows_performance.rst b/docs/docsite/rst/user_guide/windows_performance.rst
new file mode 100644
index 00000000..5eb5dbbd
--- /dev/null
+++ b/docs/docsite/rst/user_guide/windows_performance.rst
@@ -0,0 +1,61 @@
+.. _windows_performance:
+
+Windows performance
+===================
+This document offers some performance optimizations you might like to apply to
+your Windows hosts to speed them up, both specifically in the context of using
+Ansible with them and in general.
+
+Optimise PowerShell performance to reduce Ansible task overhead
+---------------------------------------------------------------
+To speed up the startup of PowerShell by around 10x, run the following
+PowerShell snippet in an Administrator session. Expect it to take tens of
+seconds.
+
+.. note::
+
+ If native images have already been created by the ngen task or service, you
+ will observe no difference in performance (but this snippet will at that
+ point execute faster than otherwise).
+
+.. code-block:: powershell
+
+ function Optimize-PowershellAssemblies {
+ # NGEN powershell assembly, improves startup time of powershell by 10x
+ $old_path = $env:path
+ try {
+ $env:path = [Runtime.InteropServices.RuntimeEnvironment]::GetRuntimeDirectory()
+ [AppDomain]::CurrentDomain.GetAssemblies() | % {
+        if (! $_.location) {return} # skip assemblies without a location; 'continue' would stop the whole pipeline
+ $Name = Split-Path $_.location -leaf
+ if ($Name.startswith("Microsoft.PowerShell.")) {
+ Write-Progress -Activity "Native Image Installation" -Status "$name"
+ ngen install $_.location | % {"`t$_"}
+ }
+ }
+ } finally {
+ $env:path = $old_path
+ }
+ }
+ Optimize-PowershellAssemblies
+
+PowerShell is used by every Windows Ansible module. This optimisation reduces
+the time PowerShell takes to start up, removing that overhead from every invocation.
+
+This snippet uses `the native image generator, ngen <https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen-exe-native-image-generator#WhenToUse>`_
+to pre-emptively create native images for the assemblies that PowerShell relies on.
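+
+If you would rather apply this optimisation from Ansible than from an interactive
+session, a rough sketch using ``win_copy`` and ``win_shell`` could look like the
+following (the file paths are assumptions, and ``Optimize-PowershellAssemblies.ps1``
+is simply the snippet above saved to a file):
+
+.. code-block:: yaml+jinja
+
+    - name: Copy the ngen optimisation snippet to the host
+      win_copy:
+        src: files/Optimize-PowershellAssemblies.ps1
+        dest: C:\temp\Optimize-PowershellAssemblies.ps1
+
+    - name: Pre-generate native images for the PowerShell assemblies
+      win_shell: C:\temp\Optimize-PowershellAssemblies.ps1
+      become: yes
+      become_method: runas
+      become_user: SYSTEM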
+
+Fix high-CPU-on-boot for VMs/cloud instances
+--------------------------------------------
+If you are creating golden images to spawn instances from, you can avoid a disruptive
+high CPU task near startup via `processing the ngen queue <https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen-exe-native-image-generator#native-image-service>`_
+within your golden image creation, if you know the CPU types won't change between
+the golden image build process and runtime.
+
+Place the following near the end of your playbook, bearing in mind the factors that can cause native images to be invalidated (`see MSDN <https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen-exe-native-image-generator#native-images-and-jit-compilation>`_).
+
+.. code-block:: yaml
+
+ - name: generate native .NET images for CPU
+ win_dotnet_ngen:
+
diff --git a/docs/docsite/rst/user_guide/windows_setup.rst b/docs/docsite/rst/user_guide/windows_setup.rst
new file mode 100644
index 00000000..910fa06f
--- /dev/null
+++ b/docs/docsite/rst/user_guide/windows_setup.rst
@@ -0,0 +1,573 @@
+.. _windows_setup:
+
+Setting up a Windows Host
+=========================
+This document discusses the setup that is required before Ansible can communicate with a Microsoft Windows host.
+
+.. contents::
+ :local:
+
+Host Requirements
+`````````````````
+For Ansible to communicate to a Windows host and use Windows modules, the
+Windows host must meet these requirements:
+
+* Ansible can generally manage Windows versions under current
+ and extended support from Microsoft. Ansible can manage desktop OSs including
+ Windows 7, 8.1, and 10, and server OSs including Windows Server 2008,
+ 2008 R2, 2012, 2012 R2, 2016, and 2019.
+
+* Ansible requires PowerShell 3.0 or newer and at least .NET 4.0 to be
+ installed on the Windows host.
+
+* A WinRM listener should be created and activated. More details for this can be
+ found below.
+
+.. Note:: While these are the base requirements for Ansible connectivity, some Ansible
+ modules have additional requirements, such as a newer OS or PowerShell
+ version. Please consult the module's documentation page
+ to determine whether a host meets those requirements.
+
+Upgrading PowerShell and .NET Framework
+---------------------------------------
+Ansible requires PowerShell version 3.0 and .NET Framework 4.0 or newer to function on older operating systems like Server 2008 and Windows 7. The base image does not meet this
+requirement. You can use the `Upgrade-PowerShell.ps1 <https://github.com/jborean93/ansible-windows/blob/master/scripts/Upgrade-PowerShell.ps1>`_ script to update these.
+
+This is an example of how to run this script from PowerShell:
+
+.. code-block:: powershell
+
+ $url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Upgrade-PowerShell.ps1"
+ $file = "$env:temp\Upgrade-PowerShell.ps1"
+ $username = "Administrator"
+ $password = "Password"
+
+ (New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
+ Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Force
+
+ # Version can be 3.0, 4.0 or 5.1
+ &$file -Version 5.1 -Username $username -Password $password -Verbose
+
+Once completed, you will need to remove auto logon
+and set the execution policy back to the default of ``Restricted``. You can
+do this with the following PowerShell commands:
+
+.. code-block:: powershell
+
+ # This isn't needed but is a good security practice to complete
+ Set-ExecutionPolicy -ExecutionPolicy Restricted -Force
+
+ $reg_winlogon_path = "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon"
+ Set-ItemProperty -Path $reg_winlogon_path -Name AutoAdminLogon -Value 0
+ Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultUserName -ErrorAction SilentlyContinue
+ Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultPassword -ErrorAction SilentlyContinue
+
+The script works by checking to see what programs need to be installed
+(such as .NET Framework 4.5.2) and what PowerShell version is required. If a reboot
+is required and the ``username`` and ``password`` parameters are set, the
+script will automatically reboot and logon when it comes back up from the
+reboot. The script will continue until no more actions are required and the
+PowerShell version matches the target version. If the ``username`` and
+``password`` parameters are not set, the script will prompt the user to
+manually reboot and logon when required. When the user is next logged in, the
+script will continue where it left off and the process continues until no more
+actions are required.
+
+.. Note:: If running on Server 2008, then SP2 must be installed. If running on
+ Server 2008 R2 or Windows 7, then SP1 must be installed.
+
+.. Note:: Windows Server 2008 can only install PowerShell 3.0; specifying a
+ newer version will result in the script failing.
+
+.. Note:: The ``username`` and ``password`` parameters are stored in plain text
+ in the registry. Make sure the cleanup commands are run after the script finishes
+ to ensure no credentials are still stored on the host.
+
+WinRM Memory Hotfix
+-------------------
+When running on PowerShell v3.0, there is a bug with the WinRM service that
+limits the amount of memory available to WinRM. Without this hotfix installed,
+Ansible will fail to execute certain commands on the Windows host. These
+hotfixes should be installed as part of the system bootstrapping or
+imaging process. The script `Install-WMF3Hotfix.ps1 <https://github.com/jborean93/ansible-windows/blob/master/scripts/Install-WMF3Hotfix.ps1>`_ can be used to install the hotfix on affected hosts.
+
+The following PowerShell command will install the hotfix:
+
+.. code-block:: powershell
+
+ $url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Install-WMF3Hotfix.ps1"
+ $file = "$env:temp\Install-WMF3Hotfix.ps1"
+
+ (New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
+ powershell.exe -ExecutionPolicy ByPass -File $file -Verbose
+
+For more details, please refer to the `Hotfix document <https://support.microsoft.com/en-us/help/2842230/out-of-memory-error-on-a-computer-that-has-a-customized-maxmemorypersh>`_ from Microsoft.
+
+WinRM Setup
+```````````
+Once PowerShell has been upgraded to at least version 3.0, the final step is to
+configure the WinRM service so that Ansible can connect to it. There are two
+main components of the WinRM service that govern how Ansible can interface with
+the Windows host: the ``listener`` and the ``service`` configuration settings.
+
+Details about each component can be read below, but the script
+`ConfigureRemotingForAnsible.ps1 <https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1>`_
+can be used to set up the basics. This script sets up both HTTP and HTTPS
+listeners with a self-signed certificate and enables the ``Basic``
+authentication option on the service.
+
+To use this script, run the following in PowerShell:
+
+.. code-block:: powershell
+
+ $url = "https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1"
+ $file = "$env:temp\ConfigureRemotingForAnsible.ps1"
+
+ (New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
+
+ powershell.exe -ExecutionPolicy ByPass -File $file
+
+There are different switches and parameters (like ``-EnableCredSSP`` and
+``-ForceNewSSLCert``) that can be set alongside this script. The documentation
+for these options is located at the top of the script itself.
+
+.. Note:: The ConfigureRemotingForAnsible.ps1 script is intended for training and
+ development purposes only and should not be used in a
+ production environment, since it enables settings (like ``Basic`` authentication)
+ that can be inherently insecure.
+
+WinRM Listener
+--------------
+The WinRM service listens for requests on one or more ports. Each of these ports must have a
+listener created and configured.
+
+To view the current listeners that are running on the WinRM service, run the
+following command:
+
+.. code-block:: powershell
+
+ winrm enumerate winrm/config/Listener
+
+This will output something like::
+
+ Listener
+ Address = *
+ Transport = HTTP
+ Port = 5985
+ Hostname
+ Enabled = true
+ URLPrefix = wsman
+ CertificateThumbprint
+ ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80::
+ ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7
+
+ Listener
+ Address = *
+ Transport = HTTPS
+ Port = 5986
+ Hostname = SERVER2016
+ Enabled = true
+ URLPrefix = wsman
+ CertificateThumbprint = E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE
+ ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80::
+ ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7
+
+In the example above there are two listeners activated; one is listening on
+port 5985 over HTTP and the other is listening on port 5986 over HTTPS. Some of
+the key options that are useful to understand are:
+
+* ``Transport``: Whether the listener runs over HTTP or HTTPS. It is
+  recommended to use a listener over HTTPS, as the data is encrypted without
+  any further changes required.
+
+* ``Port``: The port the listener runs on. By default this is ``5985`` for HTTP
+  and ``5986`` for HTTPS. The port can be changed to whatever is required and
+  corresponds to the host var ``ansible_port`` (an inventory example follows
+  this list).
+
+* ``URLPrefix``: The URL prefix to listen on. By default this is ``wsman``. If
+  this is changed, the host var ``ansible_winrm_path`` must be set to the same
+  value.
+
+* ``CertificateThumbprint``: If running over an HTTPS listener, this is the
+ thumbprint of the certificate in the Windows Certificate Store that is used
+ in the connection. To get the details of the certificate itself, run this
+ command with the relevant certificate thumbprint in PowerShell::
+
+ $thumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE"
+ Get-ChildItem -Path cert:\LocalMachine\My -Recurse | Where-Object { $_.Thumbprint -eq $thumbprint } | Select-Object *
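+
+If a listener runs on a non-default port or URL prefix, the host vars for that
+host need to match it. A minimal sketch (the values below are illustrative
+only) could look like this:
+
+.. code-block:: yaml+jinja
+
+    # host_vars/winserver01.yml
+    ansible_connection: winrm
+    ansible_port: 8443
+    ansible_winrm_path: /wsman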
+
+Setup WinRM Listener
+++++++++++++++++++++
+There are three ways to set up a WinRM listener:
+
+* Using ``winrm quickconfig`` for HTTP or
+  ``winrm quickconfig -transport:https`` for HTTPS. This is the easiest option
+  to use when running outside of a domain environment and a simple listener is
+  required. Unlike the other options, this process also has the added benefit of
+  opening up the firewall for the required ports and starting the WinRM service.
+
+* Using Group Policy Objects. This is the best way to create a listener when the
+ host is a member of a domain because the configuration is done automatically
+ without any user input. For more information on group policy objects, see the
+ `Group Policy Objects documentation <https://msdn.microsoft.com/en-us/library/aa374162(v=vs.85).aspx>`_.
+
+* Using PowerShell to create the listener with a specific configuration. This
+ can be done by running the following PowerShell commands:
+
+ .. code-block:: powershell
+
+ $selector_set = @{
+ Address = "*"
+ Transport = "HTTPS"
+ }
+ $value_set = @{
+ CertificateThumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE"
+ }
+
+ New-WSManInstance -ResourceURI "winrm/config/Listener" -SelectorSet $selector_set -ValueSet $value_set
+
+ To see the other options with this PowerShell cmdlet, see
+ `New-WSManInstance <https://docs.microsoft.com/en-us/powershell/module/microsoft.wsman.management/new-wsmaninstance?view=powershell-5.1>`_.
+
+.. Note:: When creating an HTTPS listener, a certificate must already exist and
+    be stored in the ``LocalMachine\My`` certificate store. Without a
+    certificate being present in this store, most commands will fail.
+
+Delete WinRM Listener
++++++++++++++++++++++
+To remove a WinRM listener::
+
+ # Remove all listeners
+ Remove-Item -Path WSMan:\localhost\Listener\* -Recurse -Force
+
+ # Only remove listeners that are run over HTTPS
+ Get-ChildItem -Path WSMan:\localhost\Listener | Where-Object { $_.Keys -contains "Transport=HTTPS" } | Remove-Item -Recurse -Force
+
+.. Note:: The ``Keys`` object is an array of strings, so it can contain different
+    values. By default it contains a key for ``Transport=`` and ``Address=``,
+    which correspond to the values from ``winrm enumerate winrm/config/Listener``.
+
+WinRM Service Options
+---------------------
+There are a number of options that can be set to control the behavior of the WinRM service component,
+including authentication options and memory settings.
+
+To get an output of the current service configuration options, run the
+following command:
+
+.. code-block:: powershell
+
+ winrm get winrm/config/Service
+ winrm get winrm/config/Winrs
+
+This will output something like::
+
+ Service
+ RootSDDL = O:NSG:BAD:P(A;;GA;;;BA)(A;;GR;;;IU)S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD)
+ MaxConcurrentOperations = 4294967295
+ MaxConcurrentOperationsPerUser = 1500
+ EnumerationTimeoutms = 240000
+ MaxConnections = 300
+ MaxPacketRetrievalTimeSeconds = 120
+ AllowUnencrypted = false
+ Auth
+ Basic = true
+ Kerberos = true
+ Negotiate = true
+ Certificate = true
+ CredSSP = true
+ CbtHardeningLevel = Relaxed
+ DefaultPorts
+ HTTP = 5985
+ HTTPS = 5986
+ IPv4Filter = *
+ IPv6Filter = *
+ EnableCompatibilityHttpListener = false
+ EnableCompatibilityHttpsListener = false
+ CertificateThumbprint
+ AllowRemoteAccess = true
+
+ Winrs
+ AllowRemoteShellAccess = true
+ IdleTimeout = 7200000
+ MaxConcurrentUsers = 2147483647
+ MaxShellRunTime = 2147483647
+ MaxProcessesPerShell = 2147483647
+ MaxMemoryPerShellMB = 2147483647
+ MaxShellsPerUser = 2147483647
+
+While many of these options should rarely be changed, a few can easily impact
+the operations over WinRM and are useful to understand. Some of the important
+options are:
+
+* ``Service\AllowUnencrypted``: This option defines whether WinRM will allow
+ traffic that is run over HTTP without message encryption. Message level
+ encryption is only possible when ``ansible_winrm_transport`` is ``ntlm``,
+ ``kerberos`` or ``credssp``. By default this is ``false`` and should only be
+ set to ``true`` when debugging WinRM messages.
+
+* ``Service\Auth\*``: These flags define what authentication
+ options are allowed with the WinRM service. By default, ``Negotiate (NTLM)``
+ and ``Kerberos`` are enabled.
+
+* ``Service\Auth\CbtHardeningLevel``: Specifies whether channel binding tokens are
+ not verified (None), verified but not required (Relaxed), or verified and
+ required (Strict). CBT is only used when connecting with NTLM or Kerberos
+ over HTTPS.
+
+* ``Service\CertificateThumbprint``: This is the thumbprint of the certificate
+ used to encrypt the TLS channel used with CredSSP authentication. By default
+ this is empty; a self-signed certificate is generated when the WinRM service
+ starts and is used in the TLS process.
+
+* ``Winrs\MaxShellRunTime``: This is the maximum time, in milliseconds, that a
+ remote command is allowed to execute.
+
+* ``Winrs\MaxMemoryPerShellMB``: This is the maximum amount of memory allocated
+ per shell, including the shell's child processes.
+
+To modify a setting under the ``Service`` key in PowerShell::
+
+ # substitute {path} with the path to the option after winrm/config/Service
+ Set-Item -Path WSMan:\localhost\Service\{path} -Value "value here"
+
+ # for example, to change Service\Auth\CbtHardeningLevel run
+ Set-Item -Path WSMan:\localhost\Service\Auth\CbtHardeningLevel -Value Strict
+
+To modify a setting under the ``Winrs`` key in PowerShell::
+
+ # Substitute {path} with the path to the option after winrm/config/Winrs
+ Set-Item -Path WSMan:\localhost\Shell\{path} -Value "value here"
+
+ # For example, to change Winrs\MaxShellRunTime run
+ Set-Item -Path WSMan:\localhost\Shell\MaxShellRunTime -Value 2147483647
+
+.. Note:: If running in a domain environment, some of these options are set by
+ GPO and cannot be changed on the host itself. When a key has been
+ configured with GPO, it contains the text ``[Source="GPO"]`` next to the value.
+
+Common WinRM Issues
+-------------------
+Because WinRM has a wide range of configuration options, it can be difficult
+to set up and configure. Because of this complexity, issues that are shown by Ansible
+could in fact be issues with the host setup instead.
+
+One easy way to determine whether a problem is a host issue is to
+run the following command from another Windows host to connect to the
+target Windows host::
+
+ # Test out HTTP
+ winrs -r:http://server:5985/wsman -u:Username -p:Password ipconfig
+
+ # Test out HTTPS (will fail if the cert is not verifiable)
+ winrs -r:https://server:5986/wsman -u:Username -p:Password -ssl ipconfig
+
+ # Test out HTTPS, ignoring certificate verification
+ $username = "Username"
+ $password = ConvertTo-SecureString -String "Password" -AsPlainText -Force
+ $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
+
+ $session_option = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
+ Invoke-Command -ComputerName server -UseSSL -ScriptBlock { ipconfig } -Credential $cred -SessionOption $session_option
+
+If this fails, the issue is probably related to the WinRM setup. If it works, the issue may not be related to the WinRM setup; please continue reading for more troubleshooting suggestions.
+
+HTTP 401/Credentials Rejected
++++++++++++++++++++++++++++++
+An HTTP 401 error indicates the authentication process failed during the initial
+connection. Some things to check for this are:
+
+* Verify that the credentials are correct and set properly in your inventory with
+ ``ansible_user`` and ``ansible_password``
+
+* Ensure that the user is a member of the local Administrators group or has been explicitly
+ granted access (a connection test with the ``winrs`` command can be used to
+ rule this out).
+
+* Make sure that the authentication option set by ``ansible_winrm_transport`` is enabled under
+ ``Service\Auth\*``
+
+* If running over HTTP and not HTTPS, use ``ntlm``, ``kerberos`` or ``credssp``
+  with ``ansible_winrm_message_encryption: auto`` to enable message encryption
+  (see the example after this list). If using another authentication option, or
+  if the installed pywinrm version cannot be upgraded, ``Service\AllowUnencrypted``
+  can be set to ``true``, but this is only recommended for troubleshooting.
+
+* Ensure the downstream packages ``pywinrm``, ``requests-ntlm``,
+ ``requests-kerberos``, and/or ``requests-credssp`` are up to date using ``pip``.
+
+* If using Kerberos authentication, ensure that ``Service\Auth\CbtHardeningLevel`` is
+ not set to ``Strict``.
+
+* When using Basic or Certificate authentication, make sure that the user is a local account and
+ not a domain account. Domain accounts do not work with Basic and Certificate
+ authentication.
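+
+As an example of the message encryption option mentioned above, host vars for a
+host that is only reachable over HTTP could look like this sketch (the
+credentials are placeholders):
+
+.. code-block:: yaml+jinja
+
+    ansible_connection: winrm
+    ansible_port: 5985
+    ansible_winrm_transport: ntlm
+    ansible_winrm_message_encryption: auto
+    ansible_user: LocalAdmin
+    ansible_password: SecretPasswordGoesHere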
+
+HTTP 500 Error
+++++++++++++++
+These indicate an error has occurred with the WinRM service. Some things
+to check for include:
+
+* Verify that the number of currently open shells has not exceeded
+  ``WinRsMaxShellsPerUser`` and that none of the other Winrs quotas have been
+  exceeded.
+
+Timeout Errors
++++++++++++++++
+These usually indicate an error with the network connection where
+Ansible is unable to reach the host. Some things to check for include:
+
+* Make sure the firewall is not set to block the configured WinRM listener ports
+* Ensure that a WinRM listener is enabled on the port and path set by the host vars
+* Ensure that the ``winrm`` service is running on the Windows host and configured for
+ automatic start
+
+Connection Refused Errors
++++++++++++++++++++++++++
+These usually indicate an error when trying to communicate with the
+WinRM service on the host. Some things to check for:
+
+* Ensure that the WinRM service is up and running on the host. Use
+ ``(Get-Service -Name winrm).Status`` to get the status of the service.
+* Check that the host firewall is allowing traffic over the WinRM port. By default
+ this is ``5985`` for HTTP and ``5986`` for HTTPS.
+
+Sometimes an installer may restart the WinRM or HTTP service and cause this error. The
+best way to deal with this is to use ``win_psexec`` from another
+Windows host.
+
+Failure to Load Builtin Modules
++++++++++++++++++++++++++++++++
+If PowerShell fails with an error message similar to ``The 'Out-String' command was found in the module 'Microsoft.PowerShell.Utility', but the module could not be loaded.``
+then there could be a problem trying to access all the paths specified by the ``PSModulePath`` environment variable.
+A common cause of this issue is that the ``PSModulePath`` environment variable contains a UNC path to a file share and,
+because of the double hop/credential delegation issue, the Ansible process cannot access these folders. The way around
+this problem is to either:
+
+* Remove the UNC path from the ``PSModulePath`` environment variable, or
+* Use an authentication option that supports credential delegation like ``credssp`` or ``kerberos`` with credential delegation enabled
+
+See `KB4076842 <https://support.microsoft.com/en-us/help/4076842>`_ for more information on this problem.
+
+
+Windows SSH Setup
+`````````````````
+Ansible 2.8 has added an experimental SSH connection for Windows managed nodes.
+
+.. warning::
+ Use this feature at your own risk!
+    Using SSH with Windows is experimental. The implementation may make
+    backwards incompatible changes in feature releases, and the server-side
+    components can be unreliable depending on the version that is installed.
+
+Installing Win32-OpenSSH
+------------------------
+The first step to using SSH with Windows is to install the `Win32-OpenSSH <https://github.com/PowerShell/Win32-OpenSSH>`_
+service on the Windows host. Microsoft offers a way to install ``Win32-OpenSSH`` through a Windows
+capability but currently the version that is installed through this process is
+too old to work with Ansible. To install ``Win32-OpenSSH`` for use with
+Ansible, select one of these four installation options:
+
+* Manually install the service, following the `install instructions <https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH>`_
+ from Microsoft.
+
+* Install the `openssh <https://chocolatey.org/packages/openssh>`_ package using Chocolatey::
+
+ choco install --package-parameters=/SSHServerFeature openssh
+
+* Use ``win_chocolatey`` to install the service::
+
+ - name: install the Win32-OpenSSH service
+ win_chocolatey:
+ name: openssh
+ package_params: /SSHServerFeature
+ state: present
+
+* Use an existing Ansible Galaxy role like `jborean93.win_openssh <https://galaxy.ansible.com/jborean93/win_openssh>`_::
+
+ # Make sure the role has been downloaded first
+ ansible-galaxy install jborean93.win_openssh
+
+ # main.yml
+ - name: install Win32-OpenSSH service
+ hosts: windows
+ gather_facts: no
+ roles:
+ - role: jborean93.win_openssh
+ opt_openssh_setup_service: True
+
+.. note:: ``Win32-OpenSSH`` is still a beta product and is constantly
+    being updated to include new features and bugfixes. If you are using SSH as
+    a connection option for Windows, it is highly recommended that you install the
+    latest release using one of the four installation methods above.
+
+Configuring the Win32-OpenSSH shell
+-----------------------------------
+
+By default ``Win32-OpenSSH`` will use ``cmd.exe`` as a shell. To configure a
+different shell, use an Ansible task to define the registry setting::
+
+ - name: set the default shell to PowerShell
+ win_regedit:
+ path: HKLM:\SOFTWARE\OpenSSH
+ name: DefaultShell
+ data: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
+ type: string
+ state: present
+
+ # Or revert the settings back to the default, cmd
+ - name: set the default shell to cmd
+ win_regedit:
+ path: HKLM:\SOFTWARE\OpenSSH
+ name: DefaultShell
+ state: absent
+
+Win32-OpenSSH Authentication
+----------------------------
+Win32-OpenSSH authentication with Windows is similar to SSH
+authentication on Unix/Linux hosts. You can use a plaintext password or
+SSH public key authentication, add public keys to an ``authorized_keys`` file
+in the ``.ssh`` folder of the user's profile directory, and configure the
+service using the ``sshd_config`` file used by the SSH service as you would on
+a Unix/Linux host.
+
+When using SSH key authentication with Ansible, the remote session won't have access to the
+user's credentials and will fail when attempting to access a network resource.
+This is also known as the double-hop or credential delegation issue. There are
+two ways to work around this issue:
+
+* Use plaintext password auth by setting ``ansible_password``
+* Use ``become`` on the task with the credentials of the user that needs access to the remote resource (see the sketch after this list)
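+
+As an illustration of the second workaround, a task that copies a file from a
+network share might use ``become`` roughly like this (the share path, user and
+password below are placeholders):
+
+.. code-block:: yaml+jinja
+
+    - name: Copy a file from a network share
+      win_copy:
+        src: \\fileserver\share\app.zip
+        dest: C:\temp\app.zip
+        remote_src: yes
+      become: yes
+      become_method: runas
+      vars:
+        ansible_become_user: DOMAIN\svc_deploy
+        ansible_become_password: SecretPasswordGoesHere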
+
+Configuring Ansible for SSH on Windows
+--------------------------------------
+To configure Ansible to use SSH for Windows hosts, you must set two connection variables:
+
+* set ``ansible_connection`` to ``ssh``
+* set ``ansible_shell_type`` to ``cmd`` or ``powershell``
+
+The ``ansible_shell_type`` variable should reflect the ``DefaultShell``
+configured on the Windows host. Set to ``cmd`` for the default shell or set to
+``powershell`` if the ``DefaultShell`` has been changed to PowerShell.
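+
+For example, host vars for a Windows host that is managed over SSH and has
+PowerShell configured as the ``DefaultShell`` might look like this sketch
+(the username is a placeholder):
+
+.. code-block:: yaml+jinja
+
+    ansible_connection: ssh
+    ansible_shell_type: powershell
+    ansible_user: Administrator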
+
+Known issues with SSH on Windows
+--------------------------------
+Using SSH with Windows is experimental, and we expect to uncover more issues.
+Here are the known ones:
+
+* Win32-OpenSSH versions older than ``v7.9.0.0p1-Beta`` do not work when ``powershell`` is the shell type
+* While SCP should work, SFTP is the recommended SSH file transfer mechanism to use when copying or fetching a file
+
+
+.. seealso::
+
+ :ref:`about_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ :ref:`List of Windows Modules <windows_modules>`
+ Windows specific module list, all implemented in PowerShell
+ `User Mailing List <https://groups.google.com/group/ansible-project>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/windows_usage.rst b/docs/docsite/rst/user_guide/windows_usage.rst
new file mode 100644
index 00000000..b39413cd
--- /dev/null
+++ b/docs/docsite/rst/user_guide/windows_usage.rst
@@ -0,0 +1,513 @@
+Using Ansible and Windows
+=========================
+When using Ansible to manage Windows, much of the syntax and many of the rules that apply
+to Unix/Linux hosts also apply to Windows, but there are still some differences
+when it comes to components like path separators and OS-specific tasks.
+This document covers details specific to using Ansible for Windows.
+
+.. contents:: Topics
+ :local:
+
+Use Cases
+`````````
+Ansible can be used to orchestrate a multitude of tasks on Windows servers.
+Below are some examples and info about common tasks.
+
+Installing Software
+-------------------
+There are three main ways that Ansible can be used to install software:
+
+* Using the ``win_chocolatey`` module. This sources the program data from the default
+ public `Chocolatey <https://chocolatey.org/>`_ repository. Internal repositories can
+ be used instead by setting the ``source`` option.
+
+* Using the ``win_package`` module. This installs software using an MSI or .exe installer
+ from a local/network path or URL.
+
+* Using the ``win_command`` or ``win_shell`` module to run an installer manually.
+
+The ``win_chocolatey`` module is recommended since it has the most complete logic for checking to see if a package has already been installed and is up-to-date.
+
+Below are some examples of using all three options to install 7-Zip:
+
+.. code-block:: yaml+jinja
+
+ # Install/uninstall with chocolatey
+ - name: Ensure 7-Zip is installed via Chocolatey
+ win_chocolatey:
+ name: 7zip
+ state: present
+
+ - name: Ensure 7-Zip is not installed via Chocolatey
+ win_chocolatey:
+ name: 7zip
+ state: absent
+
+ # Install/uninstall with win_package
+ - name: Download the 7-Zip package
+ win_get_url:
+ url: https://www.7-zip.org/a/7z1701-x64.msi
+ dest: C:\temp\7z.msi
+
+ - name: Ensure 7-Zip is installed via win_package
+ win_package:
+ path: C:\temp\7z.msi
+ state: present
+
+ - name: Ensure 7-Zip is not installed via win_package
+ win_package:
+ path: C:\temp\7z.msi
+ state: absent
+
+ # Install/uninstall with win_command
+ - name: Download the 7-Zip package
+ win_get_url:
+ url: https://www.7-zip.org/a/7z1701-x64.msi
+ dest: C:\temp\7z.msi
+
+ - name: Check if 7-Zip is already installed
+ win_reg_stat:
+ name: HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{23170F69-40C1-2702-1701-000001000000}
+ register: 7zip_installed
+
+ - name: Ensure 7-Zip is installed via win_command
+ win_command: C:\Windows\System32\msiexec.exe /i C:\temp\7z.msi /qn /norestart
+ when: 7zip_installed.exists == false
+
+ - name: Ensure 7-Zip is uninstalled via win_command
+ win_command: C:\Windows\System32\msiexec.exe /x {23170F69-40C1-2702-1701-000001000000} /qn /norestart
+ when: 7zip_installed.exists == true
+
+Some installers like Microsoft Office or SQL Server require credential delegation or
+access to components restricted by WinRM. The best method to bypass these
+issues is to use ``become`` with the task. With ``become``, Ansible will run
+the installer as if it were run interactively on the host.
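+
+A rough sketch of an installer task run with ``become`` might look like the
+following (the path, product id, and user below are illustrative, and a become
+password would also need to be supplied, for example through
+``ansible_become_password``):
+
+.. code-block:: yaml+jinja
+
+    - name: Install SQL Server with become to allow credential delegation
+      win_package:
+        path: C:\temp\sqlserver_setup.exe
+        product_id: '{00000000-0000-0000-0000-000000000000}'
+        arguments: /quiet /norestart
+        state: present
+      become: yes
+      become_method: runas
+      become_user: Administrator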
+
+.. Note:: Many installers do not properly pass back error information over WinRM. In these cases, if the install has been verified to work locally the recommended method is to use become.
+
+.. Note:: Some installers restart the WinRM or HTTP services, or cause them to become temporarily unavailable, making Ansible assume the system is unreachable.
+
+Installing Updates
+------------------
+The ``win_updates`` and ``win_hotfix`` modules can be used to install updates
+or hotfixes on a host. The module ``win_updates`` is used to install multiple
+updates by category, while ``win_hotfix`` can be used to install a single
+update or hotfix file that has been downloaded locally.
+
+.. Note:: The ``win_hotfix`` module has a requirement that the DISM PowerShell cmdlets are
+ present. These cmdlets were only added by default on Windows Server 2012
+ and newer and must be installed on older Windows hosts.
+
+The following example shows how ``win_updates`` can be used:
+
+.. code-block:: yaml+jinja
+
+ - name: Install all critical and security updates
+ win_updates:
+ category_names:
+ - CriticalUpdates
+ - SecurityUpdates
+ state: installed
+ register: update_result
+
+ - name: Reboot host if required
+ win_reboot:
+ when: update_result.reboot_required
+
+The following example shows how ``win_hotfix`` can be used to install a single
+update or hotfix:
+
+.. code-block:: yaml+jinja
+
+ - name: Download KB3172729 for Server 2012 R2
+ win_get_url:
+ url: http://download.windowsupdate.com/d/msdownload/update/software/secu/2016/07/windows8.1-kb3172729-x64_e8003822a7ef4705cbb65623b72fd3cec73fe222.msu
+ dest: C:\temp\KB3172729.msu
+
+ - name: Install hotfix
+ win_hotfix:
+ hotfix_kb: KB3172729
+ source: C:\temp\KB3172729.msu
+ state: present
+ register: hotfix_result
+
+ - name: Reboot host if required
+ win_reboot:
+ when: hotfix_result.reboot_required
+
+Set Up Users and Groups
+-----------------------
+Ansible can be used to create Windows users and groups both locally and on a domain.
+
+Local
++++++
+The modules ``win_user``, ``win_group`` and ``win_group_membership`` manage
+Windows users, groups and group memberships locally.
+
+The following is an example of creating local accounts and groups that can
+access a folder on the same host:
+
+.. code-block:: yaml+jinja
+
+ - name: Create local group to contain new users
+ win_group:
+ name: LocalGroup
+ description: Allow access to C:\Development folder
+
+ - name: Create local user
+ win_user:
+ name: '{{ item.name }}'
+ password: '{{ item.password }}'
+ groups: LocalGroup
+ update_password: no
+ password_never_expires: yes
+ loop:
+ - name: User1
+ password: Password1
+ - name: User2
+ password: Password2
+
+ - name: Create Development folder
+ win_file:
+ path: C:\Development
+ state: directory
+
+ - name: Set ACL of Development folder
+ win_acl:
+ path: C:\Development
+ rights: FullControl
+ state: present
+ type: allow
+ user: LocalGroup
+
+ - name: Remove parent inheritance of Development folder
+ win_acl_inheritance:
+ path: C:\Development
+ reorganize: yes
+ state: absent
+
+Domain
+++++++
+The modules ``win_domain_user`` and ``win_domain_group`` manage users and
+groups in a domain. Below is an example of ensuring a batch of domain users
+are created:
+
+.. code-block:: yaml+jinja
+
+ - name: Ensure each account is created
+ win_domain_user:
+ name: '{{ item.name }}'
+ upn: '{{ item.name }}@MY.DOMAIN.COM'
+ password: '{{ item.password }}'
+ password_never_expires: no
+ groups:
+ - Test User
+ - Application
+ company: Ansible
+ update_password: on_create
+ loop:
+ - name: Test User
+ password: Password
+ - name: Admin User
+ password: SuperSecretPass01
+ - name: Dev User
+ password: '@fvr3IbFBujSRh!3hBg%wgFucD8^x8W5'
+
+Running Commands
+----------------
+In cases where there is no appropriate module available for a task,
+a command or script can be run using the ``win_shell``, ``win_command``, ``raw``, and ``script`` modules.
+
+The ``raw`` module simply executes a PowerShell command remotely. Since ``raw``
+has none of the wrappers that Ansible typically uses, ``become``, ``async``
+and environment variables do not work.
+
+The ``script`` module executes a script from the Ansible controller on
+one or more Windows hosts. Like ``raw``, ``script`` currently does not support
+``become``, ``async``, or environment variables.
+
+The ``win_command`` module is used to execute a command which is either an
+executable or batch file, while the ``win_shell`` module is used to execute commands within a shell.
+
+Choosing Command or Shell
++++++++++++++++++++++++++
+The ``win_shell`` and ``win_command`` modules can both be used to execute a command or commands.
+The ``win_shell`` module is run within a shell-like process like ``PowerShell`` or ``cmd``, so it has access to shell
+operators like ``<``, ``>``, ``|``, ``;``, ``&&``, and ``||``. Multi-lined commands can also be run in ``win_shell``.
+
+The ``win_command`` module simply runs a process outside of a shell. It can still
+run a shell command like ``mkdir`` or ``New-Item`` by passing the shell commands
+to a shell executable like ``cmd.exe`` or ``PowerShell.exe``.
+
+Here are some examples of using ``win_command`` and ``win_shell``:
+
+.. code-block:: yaml+jinja
+
+ - name: Run a command under PowerShell
+ win_shell: Get-Service -Name service | Stop-Service
+
+ - name: Run a command under cmd
+ win_shell: mkdir C:\temp
+ args:
+ executable: cmd.exe
+
+    - name: Run multiple shell commands
+ win_shell: |
+ New-Item -Path C:\temp -ItemType Directory
+ Remove-Item -Path C:\temp -Force -Recurse
+ $path_info = Get-Item -Path C:\temp
+ $path_info.FullName
+
+ - name: Run an executable using win_command
+ win_command: whoami.exe
+
+ - name: Run a cmd command
+ win_command: cmd.exe /c mkdir C:\temp
+
+ - name: Run a vbs script
+ win_command: cscript.exe script.vbs
+
+.. Note:: Some commands like ``mkdir``, ``del``, and ``copy`` only exist in
+ the CMD shell. To run them with ``win_command`` they must be
+ prefixed with ``cmd.exe /c``.
+
+Argument Rules
+++++++++++++++
+When running a command through ``win_command``, the standard Windows argument
+rules apply:
+
+* Each argument is delimited by a white space, which can either be a space or a
+ tab.
+
+* An argument can be surrounded by double quotes ``"``. Anything inside these
+ quotes is interpreted as a single argument even if it contains whitespace.
+
+* A double quote preceded by a backslash ``\`` is interpreted as just a double
+ quote ``"`` and not as an argument delimiter.
+
+* Backslashes are interpreted literally unless they immediately precede double
+  quotes; for example ``\`` == ``\`` and ``\"`` == ``"``
+
+* If an even number of backslashes is followed by a double quote, one
+ backslash is used in the argument for every pair, and the double quote is
+ used as a string delimiter for the argument.
+
+* If an odd number of backslashes is followed by a double quote, one backslash
+ is used in the argument for every pair, and the double quote is escaped and
+ made a literal double quote in the argument.
+
+With those rules in mind, here are some examples of quoting:
+
+.. code-block:: yaml+jinja
+
+ - win_command: C:\temp\executable.exe argument1 "argument 2" "C:\path\with space" "double \"quoted\""
+
+ argv[0] = C:\temp\executable.exe
+ argv[1] = argument1
+ argv[2] = argument 2
+ argv[3] = C:\path\with space
+ argv[4] = double "quoted"
+
+ - win_command: '"C:\Program Files\Program\program.exe" "escaped \\\" backslash" unquoted-end-backslash\'
+
+ argv[0] = C:\Program Files\Program\program.exe
+ argv[1] = escaped \" backslash
+ argv[2] = unquoted-end-backslash\
+
+ # Due to YAML and Ansible parsing '\"' must be written as '{% raw %}\\{% endraw %}"'
+ - win_command: C:\temp\executable.exe C:\no\space\path "arg with end \ before end quote{% raw %}\\{% endraw %}"
+
+ argv[0] = C:\temp\executable.exe
+ argv[1] = C:\no\space\path
+ argv[2] = arg with end \ before end quote\"
+
+For more information, see `escaping arguments <https://msdn.microsoft.com/en-us/library/17w5ykft(v=vs.85).aspx>`_.
+
+Creating and Running a Scheduled Task
+-------------------------------------
+WinRM has some restrictions in place that cause errors when running certain
+commands. One way to bypass these restrictions is to run a command through a
+scheduled task. A scheduled task is a Windows component that provides the
+ability to run an executable on a schedule and under a different account.
+
+Ansible version 2.5 added modules that make it easier to work with scheduled tasks in Windows.
+The following is an example of running a script as a scheduled task that deletes itself after
+running:
+
+.. code-block:: yaml+jinja
+
+ - name: Create scheduled task to run a process
+ win_scheduled_task:
+ name: adhoc-task
+ username: SYSTEM
+ actions:
+ - path: PowerShell.exe
+ arguments: |
+ Start-Sleep -Seconds 30 # This isn't required, just here as a demonstration
+ New-Item -Path C:\temp\test -ItemType Directory
+ # Remove this action if the task shouldn't be deleted on completion
+ - path: cmd.exe
+ arguments: /c schtasks.exe /Delete /TN "adhoc-task" /F
+ triggers:
+ - type: registration
+
+ - name: Wait for the scheduled task to complete
+ win_scheduled_task_stat:
+ name: adhoc-task
+ register: task_stat
+ until: (task_stat.state is defined and task_stat.state.status != "TASK_STATE_RUNNING") or (task_stat.task_exists == False)
+ retries: 12
+ delay: 10
+
+.. Note:: The modules used in the above example were updated/added in Ansible
+ version 2.5.
+
+Path Formatting for Windows
+```````````````````````````
+Windows differs from a traditional POSIX operating system in many ways. One of
+the major changes is the shift from ``/`` as the path separator to ``\``. This
+can cause major issues with how playbooks are written, since ``\`` is often used
+as an escape character on POSIX systems.
+
+Ansible allows two different styles of syntax; each deals with path separators for Windows differently:
+
+YAML Style
+----------
+When using the YAML syntax for tasks, the rules are well-defined by the YAML
+standard:
+
+* When using a normal string (without quotes), YAML will not consider the
+ backslash an escape character.
+
+* When using single quotes ``'``, YAML will not consider the backslash an
+ escape character.
+
+* When using double quotes ``"``, the backslash is considered an escape
+  character and needs to be escaped with another backslash.
+
+.. Note:: You should only quote strings when it is absolutely
+ necessary or required by YAML, and then use single quotes.
+
+The YAML specification considers the following `escape sequences <https://yaml.org/spec/current.html#id2517668>`_:
+
+* ``\0``, ``\\``, ``\"``, ``\_``, ``\a``, ``\b``, ``\e``, ``\f``, ``\n``, ``\r``, ``\t``,
+ ``\v``, ``\L``, ``\N`` and ``\P`` -- Single character escape
+
+* ``<TAB>``, ``<SPACE>``, ``<NBSP>``, ``<LNSP>``, ``<PSP>`` -- Special
+ characters
+
+* ``\x..`` -- 2-digit hex escape
+
+* ``\u....`` -- 4-digit hex escape
+
+* ``\U........`` -- 8-digit hex escape
+
+Here are some examples on how to write Windows paths::
+
+ # GOOD
+ tempdir: C:\Windows\Temp
+
+ # WORKS
+ tempdir: 'C:\Windows\Temp'
+ tempdir: "C:\\Windows\\Temp"
+
+ # BAD, BUT SOMETIMES WORKS
+ tempdir: C:\\Windows\\Temp
+ tempdir: 'C:\\Windows\\Temp'
+ tempdir: C:/Windows/Temp
+
+This is an example which will fail:
+
+.. code-block:: text
+
+ # FAILS
+ tempdir: "C:\Windows\Temp"
+
+This example shows the use of single quotes when they are required::
+
+ ---
+ - name: Copy tomcat config
+ win_copy:
+ src: log4j.xml
+ dest: '{{tc_home}}\lib\log4j.xml'
+
+Legacy key=value Style
+----------------------
+The legacy ``key=value`` syntax is used on the command line for ad-hoc commands,
+or inside playbooks. The use of this style is discouraged within playbooks
+because backslash characters need to be escaped, making playbooks harder to read.
+The legacy syntax depends on the specific implementation in Ansible, and quoting
+(both single and double) does not have any effect on how it is parsed by
+Ansible.
+
+The Ansible ``key=value`` parser, ``parse_kv()``, considers the following escape
+sequences:
+
+* ``\``, ``'``, ``"``, ``\a``, ``\b``, ``\f``, ``\n``, ``\r``, ``\t`` and
+ ``\v`` -- Single character escape
+
+* ``\x..`` -- 2-digit hex escape
+
+* ``\u....`` -- 4-digit hex escape
+
+* ``\U........`` -- 8-digit hex escape
+
+* ``\N{...}`` -- Unicode character by name
+
+This means that the backslash is an escape character for some sequences, and it
+is usually safer to escape a backslash when in this form.
+
+Here are some examples of using Windows paths with the key=value style:
+
+.. code-block:: ini
+
+ # GOOD
+ tempdir=C:\\Windows\\Temp
+
+ # WORKS
+ tempdir='C:\\Windows\\Temp'
+ tempdir="C:\\Windows\\Temp"
+
+ # BAD, BUT SOMETIMES WORKS
+ tempdir=C:\Windows\Temp
+ tempdir='C:\Windows\Temp'
+ tempdir="C:\Windows\Temp"
+ tempdir=C:/Windows/Temp
+
+ # FAILS
+ tempdir=C:\Windows\temp
+ tempdir='C:\Windows\temp'
+ tempdir="C:\Windows\temp"
+
+The failing examples don't fail outright but will substitute ``\t`` with the
+``<TAB>`` character resulting in ``tempdir`` being ``C:\Windows<TAB>emp``.
+
+Limitations
+```````````
+Some things you cannot do with Ansible and Windows are:
+
+* Upgrade PowerShell
+
+* Interact with the WinRM listeners
+
+Because WinRM is reliant on the services being online and running during normal operations, you cannot upgrade PowerShell or interact with WinRM listeners with Ansible. Both of these actions will cause the connection to fail. This can technically be avoided by using ``async`` or a scheduled task, but those methods are fragile if the process they run breaks the underlying connection Ansible uses, and are best left to the bootstrapping process or before an image is
+created.
+
+Developing Windows Modules
+``````````````````````````
+Because Ansible modules for Windows are written in PowerShell, the development
+guides for Windows modules differ substantially from those for standard modules. Please see
+:ref:`developing_modules_general_windows` for more information.
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ :ref:`List of Windows Modules <windows_modules>`
+ Windows specific module list, all implemented in PowerShell
+ `User Mailing List <https://groups.google.com/group/ansible-project>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/user_guide/windows_winrm.rst b/docs/docsite/rst/user_guide/windows_winrm.rst
new file mode 100644
index 00000000..03421cfb
--- /dev/null
+++ b/docs/docsite/rst/user_guide/windows_winrm.rst
@@ -0,0 +1,913 @@
+.. _windows_winrm:
+
+Windows Remote Management
+=========================
+Unlike Linux/Unix hosts, which use SSH by default, Windows hosts are
+configured with WinRM. This topic covers how to configure and use WinRM with Ansible.
+
+.. contents:: Topics
+ :local:
+
+What is WinRM?
+``````````````
+WinRM is a management protocol used by Windows to remotely communicate with
+another server. It is a SOAP-based protocol that communicates over HTTP/HTTPS, and is
+included in all recent Windows operating systems. Since Windows
+Server 2012, WinRM has been enabled by default, but in most cases extra
+configuration is required to use WinRM with Ansible.
+
+Ansible uses the `pywinrm <https://github.com/diyan/pywinrm>`_ package to
+communicate with Windows servers over WinRM. It is not installed by default
+with the Ansible package, but can be installed by running the following:
+
+.. code-block:: shell
+
+ pip install "pywinrm>=0.3.0"
+
+.. Note:: On distributions with multiple Python versions, use pip2 or pip2.x,
+    where x matches the Python minor version Ansible is running under.
+
+.. Warning::
+    Using the ``winrm`` or ``psrp`` connection plugins in Ansible on macOS in
+    the latest releases typically fails. This is a known problem that occurs
+    deep within the Python stack and cannot be changed by Ansible. The only
+    workaround today is to set the environment variable ``no_proxy=*`` and
+    avoid using Kerberos auth.
+
+
+Authentication Options
+``````````````````````
+When connecting to a Windows host, there are several different options that can be used
+when authenticating with an account. The authentication type may be set on inventory
+hosts or groups with the ``ansible_winrm_transport`` variable.
+
+The following matrix is a high level overview of the options:
+
++-------------+----------------+---------------------------+-----------------------+-----------------+
+| Option | Local Accounts | Active Directory Accounts | Credential Delegation | HTTP Encryption |
++=============+================+===========================+=======================+=================+
+| Basic | Yes | No | No | No |
++-------------+----------------+---------------------------+-----------------------+-----------------+
+| Certificate | Yes | No | No | No |
++-------------+----------------+---------------------------+-----------------------+-----------------+
+| Kerberos | No | Yes | Yes | Yes |
++-------------+----------------+---------------------------+-----------------------+-----------------+
+| NTLM | Yes | Yes | No | Yes |
++-------------+----------------+---------------------------+-----------------------+-----------------+
+| CredSSP | Yes | Yes | Yes | Yes |
++-------------+----------------+---------------------------+-----------------------+-----------------+
+
+Basic
+-----
+Basic authentication is one of the simplest authentication options to use, but is
+also the most insecure. This is because the username and password are simply
+base64 encoded, and if a secure channel is not in use (for example, HTTPS) then they can be
+decoded by anyone. Basic authentication can only be used for local accounts (not domain accounts).
+
+The following example shows host vars configured for basic authentication:
+
+.. code-block:: yaml+jinja
+
+ ansible_user: LocalUsername
+ ansible_password: Password
+ ansible_connection: winrm
+ ansible_winrm_transport: basic
+
+Basic authentication is not enabled by default on a Windows host but can be
+enabled by running the following in PowerShell::
+
+ Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
+
+Certificate
+-----------
+Certificate authentication uses certificates as keys similar to SSH key
+pairs, but the file format and key generation process is different.
+
+The following example shows host vars configured for certificate authentication:
+
+.. code-block:: yaml+jinja
+
+ ansible_connection: winrm
+ ansible_winrm_cert_pem: /path/to/certificate/public/key.pem
+ ansible_winrm_cert_key_pem: /path/to/certificate/private/key.pem
+ ansible_winrm_transport: certificate
+
+Certificate authentication is not enabled by default on a Windows host but can
+be enabled by running the following in PowerShell::
+
+ Set-Item -Path WSMan:\localhost\Service\Auth\Certificate -Value $true
+
+.. Note:: Encrypted private keys cannot be used because the urllib3 library
+ used by Ansible for WinRM does not support this functionality.
+
+Generate a Certificate
+++++++++++++++++++++++
+A certificate must be generated before it can be mapped to a local user.
+This can be done using one of the following methods:
+
+* OpenSSL
+* PowerShell, using the ``New-SelfSignedCertificate`` cmdlet
+* Active Directory Certificate Services
+
+Active Directory Certificate Services is beyond the scope of this documentation, but may be
+the best option to use when running in a domain environment. For more information,
+see the `Active Directory Certificate Services documentation <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732625(v=ws.11)>`_.
+
+.. Note:: Using the PowerShell cmdlet ``New-SelfSignedCertificate`` to generate
+ a certificate for authentication only works when it is generated on a
+ Windows 10 or Windows Server 2012 R2 host or later. OpenSSL is still required to
+ extract the private key from the PFX certificate to a PEM file for Ansible
+ to use.
+
+To generate a certificate with ``OpenSSL``:
+
+.. code-block:: shell
+
+ # Set the name of the local user that the key will be mapped to
+ USERNAME="username"
+
+ cat > openssl.conf << EOL
+ distinguished_name = req_distinguished_name
+ [req_distinguished_name]
+ [v3_req_client]
+ extendedKeyUsage = clientAuth
+ subjectAltName = otherName:1.3.6.1.4.1.311.20.2.3;UTF8:$USERNAME@localhost
+ EOL
+
+ export OPENSSL_CONF=openssl.conf
+ openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out cert.pem -outform PEM -keyout cert_key.pem -subj "/CN=$USERNAME" -extensions v3_req_client
+ rm openssl.conf
+
+
+To generate a certificate with ``New-SelfSignedCertificate``:
+
+.. code-block:: powershell
+
+ # Set the name of the local user that will have the key mapped
+ $username = "username"
+ $output_path = "C:\temp"
+
+ # Instead of generating a file, the cert will be added to the personal
+ # LocalComputer folder in the certificate store
+ $cert = New-SelfSignedCertificate -Type Custom `
+ -Subject "CN=$username" `
+ -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2","2.5.29.17={text}upn=$username@localhost") `
+ -KeyUsage DigitalSignature,KeyEncipherment `
+ -KeyAlgorithm RSA `
+ -KeyLength 2048
+
+ # Export the public key
+ $pem_output = @()
+ $pem_output += "-----BEGIN CERTIFICATE-----"
+ $pem_output += [System.Convert]::ToBase64String($cert.RawData) -replace ".{64}", "$&`n"
+ $pem_output += "-----END CERTIFICATE-----"
+ [System.IO.File]::WriteAllLines("$output_path\cert.pem", $pem_output)
+
+ # Export the private key in a PFX file
+ [System.IO.File]::WriteAllBytes("$output_path\cert.pfx", $cert.Export("Pfx"))
+
+
+.. Note:: To convert the PFX file to a private key that pywinrm can use, run
+ the following command with OpenSSL
+ ``openssl pkcs12 -in cert.pfx -nocerts -nodes -out cert_key.pem -passin pass: -passout pass:``
+
+Import a Certificate to the Certificate Store
++++++++++++++++++++++++++++++++++++++++++++++
+Once a certificate has been generated, the issuing certificate needs to be
+imported into the ``Trusted Root Certificate Authorities`` of the
+``LocalMachine`` store, and the client certificate public key must be present
+in the ``Trusted People`` folder of the ``LocalMachine`` store. For this example,
+both the issuing certificate and public key are the same.
+
+The following example shows how to import the issuing certificate:
+
+.. code-block:: powershell
+
+ $cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2
+ $cert.Import("cert.pem")
+
+ $store_name = [System.Security.Cryptography.X509Certificates.StoreName]::Root
+ $store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
+ $store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
+ $store.Open("MaxAllowed")
+ $store.Add($cert)
+ $store.Close()
+
+
+.. Note:: If using ADCS to generate the certificate, then the issuing
+ certificate will already be imported and this step can be skipped.
+
+The code to import the client certificate public key is:
+
+.. code-block:: powershell
+
+ $cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2
+ $cert.Import("cert.pem")
+
+ $store_name = [System.Security.Cryptography.X509Certificates.StoreName]::TrustedPeople
+ $store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
+ $store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
+ $store.Open("MaxAllowed")
+ $store.Add($cert)
+ $store.Close()
+
+
+Mapping a Certificate to an Account
++++++++++++++++++++++++++++++++++++
+Once the certificate has been imported, map it to the local user account::
+
+ $username = "username"
+ $password = ConvertTo-SecureString -String "password" -AsPlainText -Force
+ $credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
+
+ # This is the issuer thumbprint which in the case of a self generated cert
+ # is the public key thumbprint, additional logic may be required for other
+ # scenarios
+ $thumbprint = (Get-ChildItem -Path cert:\LocalMachine\root | Where-Object { $_.Subject -eq "CN=$username" }).Thumbprint
+
+ New-Item -Path WSMan:\localhost\ClientCertificate `
+ -Subject "$username@localhost" `
+ -URI * `
+ -Issuer $thumbprint `
+ -Credential $credential `
+ -Force
+
+
+Once this is complete, the hostvar ``ansible_winrm_cert_pem`` should be set to
+the path of the public key and the ``ansible_winrm_cert_key_pem`` variable should be set to
+the path of the private key.
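+
+For reference, a complete host vars sketch for certificate authentication might
+look like the following. The file paths are placeholders, and
+``ansible_winrm_server_cert_validation: ignore`` is only needed while the WinRM
+listener still uses a self-signed certificate:
+
+.. code-block:: yaml+jinja
+
+ ansible_connection: winrm
+ ansible_port: 5986
+ ansible_winrm_transport: certificate
+ ansible_winrm_cert_pem: /path/to/certificate/public/key.pem
+ ansible_winrm_cert_key_pem: /path/to/certificate/private/key.pem
+ # Only needed while the listener certificate cannot be validated
+ ansible_winrm_server_cert_validation: ignore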
+
+NTLM
+----
+NTLM is an older authentication mechanism used by Microsoft that can support
+both local and domain accounts. NTLM is enabled by default on the WinRM
+service, so no setup is required before using it.
+
+NTLM is the easiest authentication protocol to use and is more secure than
+``Basic`` authentication. If running in a domain environment, ``Kerberos`` should be used
+instead of NTLM.
+
+Kerberos has several advantages over using NTLM:
+
+* NTLM is an older protocol and does not support newer encryption
+ protocols.
+* NTLM is slower to authenticate because it requires more round trips to the host in
+ the authentication stage.
+* Unlike Kerberos, NTLM does not allow credential delegation.
+
+This example shows host variables configured to use NTLM authentication:
+
+.. code-block:: yaml+jinja
+
+ ansible_user: LocalUsername
+ ansible_password: Password
+ ansible_connection: winrm
+ ansible_winrm_transport: ntlm
+
+Kerberos
+--------
+Kerberos is the recommended authentication option to use when running in a
+domain environment. Kerberos supports features like credential delegation and
+message encryption over HTTP and is one of the more secure options that
+is available through WinRM.
+
+Kerberos requires some additional setup work on the Ansible host before it can be
+used properly.
+
+The following example shows host vars configured for Kerberos authentication:
+
+.. code-block:: yaml+jinja
+
+ ansible_user: username@MY.DOMAIN.COM
+ ansible_password: Password
+ ansible_connection: winrm
+ ansible_winrm_transport: kerberos
+
+As of Ansible version 2.3, the Kerberos ticket will be created based on
+``ansible_user`` and ``ansible_password``. If running on an older version of
+Ansible or when ``ansible_winrm_kinit_mode`` is ``manual``, a Kerberos
+ticket must already be obtained. See below for more details.
+
+There are some extra host variables that can be set::
+
+ ansible_winrm_kinit_mode: managed/manual (manual means Ansible will not obtain a ticket)
+ ansible_winrm_kinit_cmd: the kinit binary to use to obtain a Kerberos ticket (defaults to kinit)
+ ansible_winrm_service: overrides the SPN prefix that is used, the default is ``HTTP`` and should rarely ever need changing
+ ansible_winrm_kerberos_delegation: allows the credentials to traverse multiple hops
+ ansible_winrm_kerberos_hostname_override: the hostname to be used for the Kerberos exchange
+
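+As a sketch, a host vars entry that turns off automatic ticket management and
+enables credential delegation could combine these variables as follows (the
+values shown are examples only):
+
+.. code-block:: yaml+jinja
+
+ ansible_user: username@MY.DOMAIN.COM
+ ansible_connection: winrm
+ ansible_winrm_transport: kerberos
+ # Ansible will not run kinit; a ticket must already exist in the cache
+ ansible_winrm_kinit_mode: manual
+ # Allow the credentials to traverse more than one hop (double hop scenarios)
+ ansible_winrm_kerberos_delegation: true
+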
+Installing the Kerberos Library
++++++++++++++++++++++++++++++++
+Some system dependencies must be installed before Kerberos can be used. The script below lists the dependencies based on the distro:
+
+.. code-block:: shell
+
+ # Via Yum (RHEL/Centos/Fedora)
+ yum -y install gcc python-devel krb5-devel krb5-libs krb5-workstation
+
+ # Via Apt (Ubuntu)
+ sudo apt-get install python-dev libkrb5-dev krb5-user
+
+ # Via Portage (Gentoo)
+ emerge -av app-crypt/mit-krb5
+ emerge -av dev-python/setuptools
+
+ # Via Pkg (FreeBSD)
+ sudo pkg install security/krb5
+
+ # Via OpenCSW (Solaris)
+ pkgadd -d http://get.opencsw.org/now
+ /opt/csw/bin/pkgutil -U
+ /opt/csw/bin/pkgutil -y -i libkrb5_3
+
+ # Via Pacman (Arch Linux)
+ pacman -S krb5
+
+
+Once the dependencies have been installed, the ``python-kerberos`` wrapper can
+be installed using ``pip``:
+
+.. code-block:: shell
+
+ pip install pywinrm[kerberos]
+
+
+.. note::
+ While Ansible has supported Kerberos auth through ``pywinrm`` for some
+ time, optional features or more secure options may only be available in
+ newer versions of the ``pywinrm`` and/or ``pykerberos`` libraries. It is
+ recommended you upgrade each version to the latest available to resolve
+ any warnings or errors. This can be done through tools like ``pip`` or a
+ system package manager like ``dnf``, ``yum``, ``apt`` but the package
+ names and versions available may differ between tools.
+
+
+Configuring Host Kerberos
++++++++++++++++++++++++++
+Once the dependencies have been installed, Kerberos needs to be configured so
+that it can communicate with a domain. This configuration is done through the
+``/etc/krb5.conf`` file, which is installed with the packages in the script above.
+
+To configure Kerberos, edit the section that starts with:
+
+.. code-block:: ini
+
+ [realms]
+
+Add the full domain name and the fully qualified domain names of the primary
+and secondary Active Directory domain controllers. It should look something
+like this:
+
+.. code-block:: ini
+
+ [realms]
+ MY.DOMAIN.COM = {
+ kdc = domain-controller1.my.domain.com
+ kdc = domain-controller2.my.domain.com
+ }
+
+In the section that starts with:
+
+.. code-block:: ini
+
+ [domain_realm]
+
+Add a line like the following for each domain that Ansible needs access to:
+
+.. code-block:: ini
+
+ [domain_realm]
+ .my.domain.com = MY.DOMAIN.COM
+
+You can configure other settings in this file such as the default domain. See
+`krb5.conf <https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html>`_
+for more details.
+
+Automatic Kerberos Ticket Management
+++++++++++++++++++++++++++++++++++++
+Ansible version 2.3 and later defaults to automatically managing Kerberos tickets
+when both ``ansible_user`` and ``ansible_password`` are specified for a host. In
+this process, a new ticket is created in a temporary credential cache for each
+host. This is done before each task executes to minimize the chance of ticket
+expiration. The temporary credential caches are deleted after each task
+completes and will not interfere with the default credential cache.
+
+To disable automatic ticket management, set ``ansible_winrm_kinit_mode=manual``
+via the inventory.
+
+Automatic ticket management requires a standard ``kinit`` binary on the control
+host system path. To specify a different location or binary name, set the
+``ansible_winrm_kinit_cmd`` hostvar to the fully qualified path to an MIT krb5
+``kinit``-compatible binary.
+
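+For example, a host vars sketch that keeps automatic ticket management but
+points Ansible at a non-standard ``kinit`` binary might look like this (the
+path below is purely illustrative):
+
+.. code-block:: yaml+jinja
+
+ ansible_winrm_kinit_mode: managed
+ # Hypothetical path to an MIT krb5-compatible kinit binary
+ ansible_winrm_kinit_cmd: /opt/krb5/bin/kinit
+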
+Manual Kerberos Ticket Management
++++++++++++++++++++++++++++++++++
+To manually manage Kerberos tickets, the ``kinit`` binary is used. To
+obtain a new ticket the following command is used:
+
+.. code-block:: shell
+
+ kinit username@MY.DOMAIN.COM
+
+.. Note:: The domain must match the configured Kerberos realm exactly, and must be in upper case.
+
+To see what tickets (if any) have been acquired, use the following command:
+
+.. code-block:: shell
+
+ klist
+
+To destroy all the tickets that have been acquired, use the following command:
+
+.. code-block:: shell
+
+ kdestroy
+
+Troubleshooting Kerberos
+++++++++++++++++++++++++
+Kerberos is reliant on a properly-configured environment to
+work. To troubleshoot Kerberos issues, ensure that:
+
+* The hostname set for the Windows host is the FQDN and not an IP address.
+
+* The forward and reverse DNS lookups are working properly in the domain. To
+ test this, ping the Windows host by name and then use the IP address returned
+ with ``nslookup``. The same name should be returned when using ``nslookup``
+ on the IP address.
+
+* The Ansible host's clock is synchronized with the domain controller. Kerberos
+ is time sensitive, and a little clock drift can cause the ticket generation
+ process to fail.
+
+* Ensure that the fully qualified domain name for the domain is configured in
+ the ``krb5.conf`` file. To check this, run::
+
+ kinit -C username@MY.DOMAIN.COM
+ klist
+
+ If the domain name returned by ``klist`` is different from the one requested,
+ an alias is being used. The ``krb5.conf`` file needs to be updated so that
+ the fully qualified domain name is used and not an alias.
+
+* If the default Kerberos tooling has been replaced or modified (some IdM solutions may do this), this may cause issues when installing or upgrading the Python Kerberos library. As of the time of this writing, this library is called ``pykerberos`` and is known to work with both MIT and Heimdal Kerberos libraries. To resolve ``pykerberos`` installation issues, ensure the system dependencies for Kerberos have been met (see: `Installing the Kerberos Library`_), remove any custom Kerberos tooling paths from the PATH environment variable, and retry the installation of the Python Kerberos library package.
+
+CredSSP
+-------
+CredSSP authentication is a newer authentication protocol that allows
+credential delegation. This is achieved by encrypting the username and password
+after authentication has succeeded and sending that to the server using the
+CredSSP protocol.
+
+Because the username and password are sent to the server to be used for double
+hop authentication, ensure that the hosts that the Windows host communicates with are
+not compromised and are trusted.
+
+CredSSP can be used for both local and domain accounts and also supports
+message encryption over HTTP.
+
+To use CredSSP authentication, the host vars are configured like so:
+
+.. code-block:: yaml+jinja
+
+ ansible_user: Username
+ ansible_password: Password
+ ansible_connection: winrm
+ ansible_winrm_transport: credssp
+
+There are some extra host variables that can be set as shown below::
+
+ ansible_winrm_credssp_disable_tlsv1_2: when true, will not use TLS 1.2 in the CredSSP auth process
+
+CredSSP authentication is not enabled by default on a Windows host, but can
+be enabled by running the following in PowerShell:
+
+.. code-block:: powershell
+
+ Enable-WSManCredSSP -Role Server -Force
+
+Installing CredSSP Library
+++++++++++++++++++++++++++
+
+The ``requests-credssp`` wrapper can be installed using ``pip``:
+
+.. code-block:: bash
+
+ pip install pywinrm[credssp]
+
+CredSSP and TLS 1.2
++++++++++++++++++++
+By default the ``requests-credssp`` library is configured to authenticate over
+the TLS 1.2 protocol. TLS 1.2 is installed and enabled by default on Windows Server 2012,
+Windows 8, and more recent releases.
+
+There are two ways that older hosts can be used with CredSSP:
+
+* Install and enable a hotfix to enable TLS 1.2 support (recommended
+ for Server 2008 R2 and Windows 7).
+
+* Set ``ansible_winrm_credssp_disable_tlsv1_2=True`` in the inventory to run
+ over TLS 1.0, as shown in the example below. This is the only option when connecting to Windows Server 2008, which
+ has no way of supporting TLS 1.2.
+
+See :ref:`winrm_tls12` for more information on how to enable TLS 1.2 on the
+Windows host.
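+
+If the hotfix cannot be applied, the fallback mentioned above can be expressed
+in host vars roughly as follows. Only use this for hosts that genuinely cannot
+support TLS 1.2:
+
+.. code-block:: yaml+jinja
+
+ ansible_user: Username
+ ansible_password: Password
+ ansible_connection: winrm
+ ansible_winrm_transport: credssp
+ # Fall back to TLS 1.0 for the CredSSP handshake (Server 2008 only)
+ ansible_winrm_credssp_disable_tlsv1_2: true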
+
+Set CredSSP Certificate
++++++++++++++++++++++++
+CredSSP works by encrypting the credentials through the TLS protocol and uses a self-signed certificate by default. The ``CertificateThumbprint`` option under the WinRM service configuration can be used to specify the thumbprint of
+another certificate.
+
+.. Note:: This certificate configuration is independent of the WinRM listener
+ certificate. With CredSSP, message transport still occurs over the WinRM listener,
+ but the TLS-encrypted messages inside the channel use the service-level certificate.
+
+To explicitly set the certificate to use for CredSSP::
+
+ # Note the value $certificate_thumbprint will be different in each
+ # situation, this needs to be set based on the cert that is used.
+ $certificate_thumbprint = "7C8DCBD5427AFEE6560F4AF524E325915F51172C"
+
+ # Set the thumbprint value
+ Set-Item -Path WSMan:\localhost\Service\CertificateThumbprint -Value $certificate_thumbprint
+
+Non-Administrator Accounts
+``````````````````````````
+WinRM is configured by default to only allow connections from accounts in the local
+``Administrators`` group. This can be changed by running:
+
+.. code-block:: powershell
+
+ winrm configSDDL default
+
+This will display an ACL editor, where new users or groups may be added. To run commands
+over WinRM, users and groups must have at least the ``Read`` and ``Execute`` permissions
+enabled.
+
+While non-administrative accounts can be used with WinRM, most typical server administration
+tasks require some level of administrative access, so the utility is usually limited.
+
+WinRM Encryption
+````````````````
+By default WinRM will fail to work when running over an unencrypted channel.
+The WinRM protocol considers the channel to be encrypted if using TLS over HTTP
+(HTTPS) or using message level encryption. Using WinRM with TLS is the
+recommended option as it works with all authentication options, but requires
+a certificate to be created and used on the WinRM listener.
+
+The ``ConfigureRemotingForAnsible.ps1`` script creates a self-signed certificate and
+creates the listener with that certificate. If in a domain environment, ADCS
+can also create a certificate for the host that is issued by the domain itself.
+
+If using HTTPS is not an option, then HTTP can be used when the authentication
+option is ``NTLM``, ``Kerberos`` or ``CredSSP``. These protocols will encrypt
+the WinRM payload with their own encryption method before sending it to the
+server. The message-level encryption is not used when running over HTTPS because the
+encryption uses the more secure TLS protocol instead. If both transport and
+message encryption are required, set ``ansible_winrm_message_encryption=always``
+in the host vars.
+
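+As a sketch, host vars that request message encryption on top of an HTTPS
+connection (assuming a transport such as Kerberos that supports message
+encryption) could look like this:
+
+.. code-block:: yaml+jinja
+
+ ansible_connection: winrm
+ ansible_winrm_transport: kerberos
+ # Encrypt the payload with the authentication protocol as well as TLS
+ ansible_winrm_message_encryption: always
+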
+.. Note:: Message encryption over HTTP requires pywinrm>=0.3.0.
+
+A last resort is to disable the encryption requirement on the Windows host. This
+should only be used for development and debugging purposes, as anything sent
+from Ansible can be viewed and manipulated, and the remote session can be completely
+taken over by anyone on the same network. To disable the encryption
+requirement::
+
+ Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true
+
+.. Note:: Do not disable the encryption check unless it is
+ absolutely required. Doing so could allow sensitive information like
+ credentials and files to be intercepted by others on the network.
+
+Inventory Options
+`````````````````
+Ansible's Windows support relies on a few standard variables to indicate the
+username, password, and connection type of the remote hosts. These variables
+are most easily set up in the inventory, but can be set at the ``host_vars``/
+``group_vars`` level.
+
+When setting up the inventory, the following variables are required:
+
+.. code-block:: yaml+jinja
+
+ # It is suggested that these be encrypted with ansible-vault:
+ # ansible-vault edit group_vars/windows.yml
+ ansible_connection: winrm
+
+ # May also be passed on the command-line via --user
+ ansible_user: Administrator
+
+ # May also be supplied at runtime with --ask-pass
+ ansible_password: SecretPasswordGoesHere
+
+
+Using the variables above, Ansible will connect to the Windows host with Basic
+authentication through HTTPS. If ``ansible_user`` has a UPN value like
+``username@MY.DOMAIN.COM`` then the authentication option will automatically attempt
+to use Kerberos unless ``ansible_winrm_transport`` has been set to something other than
+``kerberos``.
+
+The following custom inventory variables are also supported
+for additional configuration of WinRM connections:
+
+* ``ansible_port``: The port WinRM will run over. HTTPS uses ``5986``, which is
+ the default, while HTTP uses ``5985``
+
+* ``ansible_winrm_scheme``: Specify the connection scheme (``http`` or
+ ``https``) to use for the WinRM connection. Ansible uses ``https`` by default
+ unless ``ansible_port`` is ``5985``
+
+* ``ansible_winrm_path``: Specify an alternate path to the WinRM endpoint,
+ Ansible uses ``/wsman`` by default
+
+* ``ansible_winrm_realm``: Specify the realm to use for Kerberos
+ authentication. If ``ansible_user`` contains ``@``, Ansible will use the part
+ of the username after ``@`` by default
+
+* ``ansible_winrm_transport``: Specify one or more authentication transport
+ options as a comma-separated list. By default, Ansible will use ``kerberos,
+ basic`` if the ``kerberos`` module is installed and a realm is defined,
+ otherwise it will be ``plaintext``
+
+* ``ansible_winrm_server_cert_validation``: Specify the server certificate
+ validation mode (``ignore`` or ``validate``). Ansible defaults to
+ ``validate`` on Python 2.7.9 and higher, which will result in certificate
+ validation errors against the Windows self-signed certificates. Unless
+ verifiable certificates have been configured on the WinRM listeners, this
+ should be set to ``ignore``
+
+* ``ansible_winrm_operation_timeout_sec``: Increase the default timeout for
+ WinRM operations, Ansible uses ``20`` by default
+
+* ``ansible_winrm_read_timeout_sec``: Increase the WinRM read timeout, Ansible
+ uses ``30`` by default. Useful if there are intermittent network issues and
+ read timeout errors keep occurring
+
+* ``ansible_winrm_message_encryption``: Specify the message encryption
+ operation (``auto``, ``always``, ``never``) to use, Ansible uses ``auto`` by
+ default. ``auto`` means message encryption is only used when
+ ``ansible_winrm_scheme`` is ``http`` and ``ansible_winrm_transport`` supports
+ message encryption. ``always`` means message encryption will always be used
+ and ``never`` means message encryption will never be used
+
+* ``ansible_winrm_ca_trust_path``: Used to specify a different cacert container
+ than the one used in the ``certifi`` module. See the HTTPS Certificate
+ Validation section for more details.
+
+* ``ansible_winrm_send_cbt``: When using ``ntlm`` or ``kerberos`` over HTTPS,
+ the authentication library will try to send channel binding tokens to
+ mitigate against man in the middle attacks. This flag controls whether these
+ bindings will be sent or not (default: ``yes``).
+
+* ``ansible_winrm_*``: Any additional keyword arguments supported by
+ ``winrm.Protocol`` may be provided in place of ``*``
+
+In addition, there are also specific variables that need to be set
+for each authentication option. See the section on authentication above for more information.
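+
+As an illustration of how several of these options can be combined, a
+hypothetical host vars file for a host with a self-signed HTTPS listener and
+slow WinRM operations might look like this (all values are examples only):
+
+.. code-block:: yaml+jinja
+
+ ansible_user: LocalUsername
+ ansible_password: Password
+ ansible_connection: winrm
+ ansible_winrm_transport: ntlm
+ ansible_port: 5986
+ ansible_winrm_scheme: https
+ # The listener uses a self-signed certificate in this example
+ ansible_winrm_server_cert_validation: ignore
+ # Allow longer-running WinRM operations before timing out
+ ansible_winrm_operation_timeout_sec: 60
+ ansible_winrm_read_timeout_sec: 70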
+
+.. Note:: Ansible 2.0 has deprecated the "ssh" from ``ansible_ssh_user``,
+ ``ansible_ssh_pass``, ``ansible_ssh_host``, and ``ansible_ssh_port`` to
+ become ``ansible_user``, ``ansible_password``, ``ansible_host``, and
+ ``ansible_port``. If using a version of Ansible prior to 2.0, the older
+ style (``ansible_ssh_*``) should be used instead. The shorter variables
+ are ignored, without warning, in older versions of Ansible.
+
+.. Note:: ``ansible_winrm_message_encryption`` is different from transport
+ encryption done over TLS. The WinRM payload is still encrypted with TLS
+ when run over HTTPS, even if ``ansible_winrm_message_encryption=never``.
+
+IPv6 Addresses
+``````````````
+IPv6 addresses can be used instead of IPv4 addresses or hostnames. This option
+is normally set in an inventory. Ansible will attempt to parse the address
+using the `ipaddress <https://docs.python.org/3/library/ipaddress.html>`_
+package and pass it to pywinrm correctly.
+
+When defining a host using an IPv6 address, just add the IPv6 address as you
+would an IPv4 address or hostname:
+
+.. code-block:: ini
+
+ [windows-server]
+ 2001:db8::1
+
+ [windows-server:vars]
+ ansible_user=username
+ ansible_password=password
+ ansible_connection=winrm
+
+
+.. Note:: The ipaddress library is only included by default in Python 3.x. To
+ use IPv6 addresses in Python 2.7, make sure to run ``pip install ipaddress`` which installs
+ a backported package.
+
+HTTPS Certificate Validation
+````````````````````````````
+As part of the TLS protocol, the certificate is validated to ensure the host
+matches the subject and the client trusts the issuer of the server certificate.
+When using a self-signed certificate or setting
+``ansible_winrm_server_cert_validation: ignore`` these security mechanisms are
+bypassed. While self-signed certificates will always need the ``ignore`` flag,
+certificates that have been issued from a certificate authority can still be
+validated.
+
+One of the more common ways of setting up an HTTPS listener in a domain
+environment is to use Active Directory Certificate Services (AD CS). AD CS is
+used to generate signed certificates from a Certificate Signing Request (CSR).
+If the WinRM HTTPS listener is using a certificate that has been signed by
+another authority, like AD CS, then Ansible can be set up to trust that
+issuer as part of the TLS handshake.
+
+To get Ansible to trust a Certificate Authority (CA) like AD CS, the issuer
+certificate of the CA can be exported as a PEM encoded certificate. This
+certificate can then be copied locally to the Ansible controller and used as a
+source of certificate validation, otherwise known as a CA chain.
+
+The CA chain can contain a single or multiple issuer certificates and each
+entry is contained on a new line. To then use the custom CA chain as part of
+the validation process, set ``ansible_winrm_ca_trust_path`` to the path of the
+file. If this variable is not set, the default CA chain is used instead which
+is located in the install path of the Python package
+`certifi <https://github.com/certifi/python-certifi>`_.
+
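+For example, assuming the CA chain from AD CS has been exported to a PEM file
+on the Ansible controller (the path below is a placeholder), the host vars
+could be sketched as:
+
+.. code-block:: yaml+jinja
+
+ ansible_connection: winrm
+ ansible_winrm_transport: kerberos
+ # Placeholder path to the exported CA chain on the Ansible controller
+ ansible_winrm_ca_trust_path: /etc/pki/ansible/adcs-issuing-ca.pem
+ ansible_winrm_server_cert_validation: validate
+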
+.. Note:: Each HTTP call is done by the Python requests library which does not
+ use the system's built-in certificate store as a trust authority.
+ Certificate validation will fail if the server's certificate issuer is
+ only added to the system's truststore.
+
+.. _winrm_tls12:
+
+TLS 1.2 Support
+```````````````
+As WinRM runs over the HTTP protocol, using HTTPS means that the TLS protocol
+is used to encrypt the WinRM messages. TLS will automatically attempt to
+negotiate the best protocol and cipher suite that is available to both the
+client and the server. If a match cannot be found then Ansible will error out
+with a message similar to::
+
+ HTTPSConnectionPool(host='server', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLError(1, '[SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1056)')))
+
+Commonly this occurs when the Windows host has not been configured to support
+TLS v1.2, but it could also mean the Ansible controller has an older OpenSSL
+version installed.
+
+Windows 8 and Windows Server 2012 come with TLS v1.2 installed and enabled by
+default, but on older hosts, like Server 2008 R2 and Windows 7, it has to be enabled
+manually.
+
+.. Note:: There is a bug with the TLS 1.2 patch for Server 2008 which will stop
+ Ansible from connecting to the Windows host. This means that Server 2008
+ cannot be configured to use TLS 1.2. Server 2008 R2 and Windows 7 are not
+ affected by this issue and can use TLS 1.2.
+
+To verify what protocol the Windows host supports, you can run the following
+command on the Ansible controller::
+
+ openssl s_client -connect <hostname>:5986
+
+The output will contain information about the TLS session and the ``Protocol``
+line will display the version that was negotiated::
+
+ New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
+ Server public key is 2048 bit
+ Secure Renegotiation IS supported
+ Compression: NONE
+ Expansion: NONE
+ No ALPN negotiated
+ SSL-Session:
+ Protocol : TLSv1
+ Cipher : ECDHE-RSA-AES256-SHA
+ Session-ID: 962A00001C95D2A601BE1CCFA7831B85A7EEE897AECDBF3D9ECD4A3BE4F6AC9B
+ Session-ID-ctx:
+ Master-Key: ....
+ Start Time: 1552976474
+ Timeout : 7200 (sec)
+ Verify return code: 21 (unable to verify the first certificate)
+ ---
+
+ New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
+ Server public key is 2048 bit
+ Secure Renegotiation IS supported
+ Compression: NONE
+ Expansion: NONE
+ No ALPN negotiated
+ SSL-Session:
+ Protocol : TLSv1.2
+ Cipher : ECDHE-RSA-AES256-GCM-SHA384
+ Session-ID: AE16000050DA9FD44D03BB8839B64449805D9E43DBD670346D3D9E05D1AEEA84
+ Session-ID-ctx:
+ Master-Key: ....
+ Start Time: 1552976538
+ Timeout : 7200 (sec)
+ Verify return code: 21 (unable to verify the first certificate)
+
+If the host is returning ``TLSv1`` then it should be configured so that
+TLS v1.2 is enabled. You can do this by running the following PowerShell
+script:
+
+.. code-block:: powershell
+
+ Function Enable-TLS12 {
+ param(
+ [ValidateSet("Server", "Client")]
+ [String]$Component = "Server"
+ )
+
+ $protocols_path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
+ New-Item -Path "$protocols_path\TLS 1.2\$Component" -Force
+ New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name Enabled -Value 1 -Type DWORD -Force
+ New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name DisabledByDefault -Value 0 -Type DWORD -Force
+ }
+
+ Enable-TLS12 -Component Server
+
+ # Not required but highly recommended to enable the Client side TLS 1.2 components
+ Enable-TLS12 -Component Client
+
+ Restart-Computer
+
+The below Ansible tasks can also be used to enable TLS v1.2:
+
+.. code-block:: yaml+jinja
+
+ - name: enable TLSv1.2 support
+ win_regedit:
+ path: HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\{{ item.type }}
+ name: '{{ item.property }}'
+ data: '{{ item.value }}'
+ type: dword
+ state: present
+ register: enable_tls12
+ loop:
+ - type: Server
+ property: Enabled
+ value: 1
+ - type: Server
+ property: DisabledByDefault
+ value: 0
+ - type: Client
+ property: Enabled
+ value: 1
+ - type: Client
+ property: DisabledByDefault
+ value: 0
+
+ - name: reboot if TLS config was applied
+ win_reboot:
+ when: enable_tls12 is changed
+
+There are other ways to configure the TLS protocols as well as the cipher
+suites that are offered by the Windows host. One tool that can give you a GUI
+to manage these settings is `IIS Crypto <https://www.nartac.com/Products/IISCrypto/>`_
+from Nartac Software.
+
+Limitations
+```````````
+Due to the design of the WinRM protocol, there are a few limitations
+when using WinRM that can cause issues when creating playbooks for Ansible.
+These include:
+
+* Credentials are not delegated for most authentication types, which causes
+ authentication errors when accessing network resources or installing certain
+ programs.
+
+* Many calls to the Windows Update API are blocked when running over WinRM.
+
+* Some programs fail to install with WinRM due to no credential delegation or
+ because they access forbidden Windows APIs like WUA over WinRM.
+
+* Commands under WinRM are done under a non-interactive session, which can prevent
+ certain commands or executables from running.
+
+* You cannot run a process that interacts with ``DPAPI``, which is used by some
+ installers (like Microsoft SQL Server).
+
+Some of these limitations can be mitigated by doing one of the following:
+
+* Set ``ansible_winrm_transport`` to ``credssp`` or ``kerberos`` (with
+ ``ansible_winrm_kerberos_delegation=true``) to bypass the double hop issue
+ and access network resources
+
+* Use ``become`` to bypass all WinRM restrictions and run a command as it would
+ run locally. Unlike using an authentication transport like ``credssp``, this will
+ also remove the non-interactive restriction and API restrictions like WUA and
+ DPAPI (see the example after this list)
+
+* Use a scheduled task to run a command which can be created with the
+ ``win_scheduled_task`` module. Like ``become``, this bypasses all WinRM
+ restrictions but can only run a command and not modules.
+
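+As referenced in the ``become`` item above, a rough task sketch that copies an
+installer from a network share (a classic double hop failure under plain WinRM)
+could look like the following. The share path, account, and vaulted password
+variable are placeholders:
+
+.. code-block:: yaml+jinja
+
+ - name: copy an installer from a network share using become
+   win_copy:
+     src: \\fileserver\share\setup.exe
+     dest: C:\Temp\setup.exe
+     remote_src: yes
+   vars:
+     ansible_become: yes
+     ansible_become_method: runas
+     # Placeholder domain account and vaulted password
+     ansible_become_user: DOMAIN\svc_ansible
+     ansible_become_pass: '{{ vault_become_password }}'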
+
+.. seealso::
+
+ :ref:`playbooks_intro`
+ An introduction to playbooks
+ :ref:`playbooks_best_practices`
+ Tips and tricks for playbooks
+ :ref:`List of Windows Modules <windows_modules>`
+ Windows specific module list, all implemented in PowerShell
+ `User Mailing List <https://groups.google.com/group/ansible-project>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel