authorDaniel Baumann <daniel.baumann@progress-linux.org>2024-05-14 20:03:01 +0000
committerDaniel Baumann <daniel.baumann@progress-linux.org>2024-05-14 20:03:01 +0000
commita453ac31f3428614cceb99027f8efbdb9258a40b (patch)
treef61f87408f32a8511cbd91799f9cececb53e0374 /docs/docsite/rst/scenario_guides
parentInitial commit. (diff)
Adding upstream version 2.10.7+merged+base+2.10.8+dfsg.upstream/2.10.7+merged+base+2.10.8+dfsgupstream
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'docs/docsite/rst/scenario_guides')
-rw-r--r--docs/docsite/rst/scenario_guides/cloud_guides.rst22
-rw-r--r--docs/docsite/rst/scenario_guides/guide_aci.rst661
-rw-r--r--docs/docsite/rst/scenario_guides/guide_alicloud.rst125
-rw-r--r--docs/docsite/rst/scenario_guides/guide_aws.rst281
-rw-r--r--docs/docsite/rst/scenario_guides/guide_azure.rst480
-rw-r--r--docs/docsite/rst/scenario_guides/guide_cloudstack.rst377
-rw-r--r--docs/docsite/rst/scenario_guides/guide_docker.rst227
-rw-r--r--docs/docsite/rst/scenario_guides/guide_gce.rst302
-rw-r--r--docs/docsite/rst/scenario_guides/guide_infoblox.rst292
-rw-r--r--docs/docsite/rst/scenario_guides/guide_kubernetes.rst63
-rw-r--r--docs/docsite/rst/scenario_guides/guide_meraki.rst193
-rw-r--r--docs/docsite/rst/scenario_guides/guide_online.rst41
-rw-r--r--docs/docsite/rst/scenario_guides/guide_oracle.rst103
-rw-r--r--docs/docsite/rst/scenario_guides/guide_packet.rst311
-rw-r--r--docs/docsite/rst/scenario_guides/guide_rax.rst810
-rw-r--r--docs/docsite/rst/scenario_guides/guide_scaleway.rst293
-rw-r--r--docs/docsite/rst/scenario_guides/guide_vagrant.rst136
-rw-r--r--docs/docsite/rst/scenario_guides/guide_vmware.rst33
-rw-r--r--docs/docsite/rst/scenario_guides/guide_vultr.rst171
-rw-r--r--docs/docsite/rst/scenario_guides/guides.rst43
-rw-r--r--docs/docsite/rst/scenario_guides/network_guides.rst16
-rw-r--r--docs/docsite/rst/scenario_guides/scenario_template.rst53
-rw-r--r--docs/docsite/rst/scenario_guides/virt_guides.rst15
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/faq.rst26
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_clone_template.rst222
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_find_vm_folder.rst120
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_remove_vm.rst126
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_rename_vm.rst173
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_vmware_http.rst161
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_concepts.rst45
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_external_doc_links.rst11
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_getting_started.rst9
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_intro.rst53
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory.rst90
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_filters.rst216
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_hostnames.rst128
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_vm_attributes.rst1183
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_module_reference.rst9
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_requirements.rst44
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_scenarios.rst16
-rw-r--r--docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_troubleshooting.rst102
41 files changed, 7782 insertions, 0 deletions
diff --git a/docs/docsite/rst/scenario_guides/cloud_guides.rst b/docs/docsite/rst/scenario_guides/cloud_guides.rst
new file mode 100644
index 00000000..d430bdda
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/cloud_guides.rst
@@ -0,0 +1,22 @@
+.. _cloud_guides:
+
+*******************
+Public Cloud Guides
+*******************
+
+The guides in this section cover using Ansible with a range of public cloud platforms. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+
+.. toctree::
+ :maxdepth: 1
+
+ guide_alicloud
+ guide_aws
+ guide_cloudstack
+ guide_gce
+ guide_azure
+ guide_online
+ guide_oracle
+ guide_packet
+ guide_rax
+ guide_scaleway
+ guide_vultr
diff --git a/docs/docsite/rst/scenario_guides/guide_aci.rst b/docs/docsite/rst/scenario_guides/guide_aci.rst
new file mode 100644
index 00000000..5fe4c648
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_aci.rst
@@ -0,0 +1,661 @@
+.. _aci_guide:
+
+Cisco ACI Guide
+===============
+
+
+.. _aci_guide_intro:
+
+What is Cisco ACI?
+-------------------
+
+Application Centric Infrastructure (ACI)
+........................................
+The Cisco Application Centric Infrastructure (ACI) allows application requirements to define the network. This architecture simplifies, optimizes, and accelerates the entire application deployment life cycle.
+
+
+Application Policy Infrastructure Controller (APIC)
+...................................................
+The APIC manages the scalable ACI multi-tenant fabric. The APIC provides a unified point of automation and management, policy programming, application deployment, and health monitoring for the fabric. The APIC, which is implemented as a replicated synchronized clustered controller, optimizes performance, supports any application anywhere, and provides unified operation of the physical and virtual infrastructure.
+
+The APIC enables network administrators to easily define the optimal network for applications. Data center operators can clearly see how applications consume network resources, easily isolate and troubleshoot application and infrastructure problems, and monitor and profile resource usage patterns.
+
+The Cisco Application Policy Infrastructure Controller (APIC) API enables applications to directly connect with a secure, shared, high-performance resource pool that includes network, compute, and storage capabilities.
+
+
+ACI Fabric
+..........
+The Cisco Application Centric Infrastructure (ACI) Fabric includes Cisco Nexus 9000 Series switches with the APIC to run in the leaf/spine ACI fabric mode. These switches form a "fat-tree" network by connecting each leaf node to each spine node; all other devices connect to the leaf nodes. The APIC manages the ACI fabric.
+
+The ACI fabric provides consistent low-latency forwarding across high-bandwidth links (40 Gbps, with a 100-Gbps future capability). Traffic with the source and destination on the same leaf switch is handled locally, and all other traffic travels from the ingress leaf to the egress leaf through a spine switch. Although this architecture appears as two hops from a physical perspective, it is actually a single Layer 3 hop because the fabric operates as a single Layer 3 switch.
+
+The ACI fabric object-oriented operating system (OS) runs on each Cisco Nexus 9000 Series node. It enables programming of objects for each configurable element of the system. The ACI fabric OS renders policies from the APIC into a concrete model that runs in the physical infrastructure. The concrete model is analogous to compiled software; it is the form of the model that the switch operating system can execute.
+
+All the switch nodes contain a complete copy of the concrete model. When an administrator creates a policy in the APIC that represents a configuration, the APIC updates the logical model. The APIC then performs the intermediate step of creating a fully elaborated policy that it pushes into all the switch nodes where the concrete model is updated.
+
+The APIC is responsible for fabric activation, switch firmware management, network policy configuration, and instantiation. While the APIC acts as the centralized policy and network management engine for the fabric, it is completely removed from the data path, including the forwarding topology. Therefore, the fabric can still forward traffic even when communication with the APIC is lost.
+
+
+More information
+................
+Various resources exist to start learning ACI; here is a list of interesting articles from the community.
+
+- `Adam Raffe: Learning ACI <https://adamraffe.com/learning-aci/>`_
+- `Luca Relandini: ACI for dummies <https://lucarelandini.blogspot.be/2015/03/aci-for-dummies.html>`_
+- `Cisco DevNet Learning Labs about ACI <https://learninglabs.cisco.com/labs/tags/ACI>`_
+
+
+.. _aci_guide_modules:
+
+Using the ACI modules
+---------------------
+The Ansible ACI modules provide a user-friendly interface for managing your ACI environment using Ansible playbooks.
+
+For instance, you can ensure that a specific tenant exists with the following Ansible task, using the aci_tenant module:
+
+.. code-block:: yaml
+
+ - name: Ensure tenant customer-xyz exists
+ aci_tenant:
+ host: my-apic-1
+ username: admin
+ password: my-password
+
+ tenant: customer-xyz
+ description: Customer XYZ
+ state: present
+
+A complete list of existing ACI modules is available on the content tab of the `ACI collection on Ansible Galaxy <https://galaxy.ansible.com/cisco/aci>`_.
+
+If you want to learn how to write your own ACI modules to contribute, look at the :ref:`Developing Cisco ACI modules <aci_dev_guide>` section.
+
+Querying ACI configuration
+..........................
+
+A module can also be used to query a specific object.
+
+.. code-block:: yaml
+
+ - name: Query tenant customer-xyz
+ aci_tenant:
+ host: my-apic-1
+ username: admin
+ password: my-password
+
+ tenant: customer-xyz
+ state: query
+ register: my_tenant
+
+Or query all objects.
+
+.. code-block:: yaml
+
+ - name: Query all tenants
+ aci_tenant:
+ host: my-apic-1
+ username: admin
+ password: my-password
+
+ state: query
+ register: all_tenants
+
+After registering the return values of the aci_tenant task as shown above, you can access all tenant information from the variable ``all_tenants``.
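+
+As a minimal sketch (assuming the APIC's standard object format in the returned ``current`` list, described under `Return values`_ below), you could list the discovered tenants like this:
+
+.. code-block:: yaml
+
+    - name: Show the name of every tenant found by the query
+      debug:
+        msg: '{{ item.fvTenant.attributes.name }}'
+      loop: '{{ all_tenants.current }}'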
+
+
+Running on the controller locally
+.................................
+As originally designed, Ansible modules are shipped to and run on the remote target(s). However, the ACI modules (like most network-related modules) do not run on the network devices or controller (in this case the APIC); instead, they talk directly to the APIC's REST interface.
+
+For this reason, the modules need to run on the local Ansible controller (or be delegated to another system that *can* connect to the APIC).
+
+
+Gathering facts
+```````````````
+Because we run the modules on the Ansible controller, gathering facts will not work. That is why fact gathering must be disabled when using these ACI modules. You can do this globally in your ``ansible.cfg`` or by adding ``gather_facts: no`` to every play.
+
+.. code-block:: yaml
+ :emphasize-lines: 3
+
+ - name: Another play in my playbook
+ hosts: my-apic-1
+ gather_facts: no
+ tasks:
+ - name: Create a tenant
+ aci_tenant:
+ ...
+
+Delegating to localhost
+```````````````````````
+Let us assume we have our target configured in the inventory, using the FQDN as the ``ansible_host`` value, as shown below.
+
+.. code-block:: yaml
+   :emphasize-lines: 4
+
+   apics:
+     hosts:
+       my-apic-1:
+         ansible_host: apic01.fqdn.intra
+         ansible_user: admin
+         ansible_password: my-password
+
+One way to set this up is to add the directive ``delegate_to: localhost`` to every task.
+
+.. code-block:: yaml
+ :emphasize-lines: 8
+
+ - name: Query all tenants
+ aci_tenant:
+ host: '{{ ansible_host }}'
+ username: '{{ ansible_user }}'
+ password: '{{ ansible_password }}'
+
+ state: query
+ delegate_to: localhost
+ register: all_tenants
+
+If you forget to add this directive, Ansible will attempt to connect to the APIC using SSH, copy the module over, and run it remotely. This will fail with a clear error, yet may be confusing to some.
+
+
+Using the local connection method
+`````````````````````````````````
+Another frequently used option is to tie the ``local`` connection method to this target so that every subsequent task for this target uses the local connection method (hence running locally, rather than using SSH).
+
+In this case the inventory may look like this:
+
+.. code-block:: yaml
+   :emphasize-lines: 7
+
+   apics:
+     hosts:
+       my-apic-1:
+         ansible_host: apic01.fqdn.intra
+         ansible_user: admin
+         ansible_password: my-password
+         ansible_connection: local
+
+With this setup, the tasks themselves do not need anything special added.
+
+.. code-block:: yaml
+
+ - name: Query all tenants
+ aci_tenant:
+ host: '{{ ansible_host }}'
+ username: '{{ ansible_user }}'
+ password: '{{ ansible_password }}'
+
+ state: query
+ register: all_tenants
+
+.. hint:: For clarity, we have added ``delegate_to: localhost`` to all the examples in the module documentation. This helps to ensure first-time users can easily copy and paste parts and make them work with a minimum of effort.
+
+
+Common parameters
+.................
+Every Ansible ACI module accepts the following parameters that influence the module's communication with the APIC REST API:
+
+ host
+ Hostname or IP address of the APIC.
+
+ port
+ Port to use for communication. (Defaults to ``443`` for HTTPS, and ``80`` for HTTP)
+
+ username
+ User name used to log on to the APIC. (Defaults to ``admin``)
+
+ password
+ Password for ``username`` to log on to the APIC, using password-based authentication.
+
+ private_key
+  Private key for ``username`` to log on to the APIC, using signature-based authentication.
+  This can be either the raw private key content (including header/footer) or a file that stores the key content.
+  *New in version 2.5*
+
+ certificate_name
+  Name of the certificate in the ACI Web GUI.
+  This defaults to either the ``username`` value or the base name of the ``private_key`` file.
+  *New in version 2.5*
+
+ timeout
+ Timeout value for socket-level communication.
+
+ use_proxy
+ Use system proxy settings. (Defaults to ``yes``)
+
+ use_ssl
+ Use HTTPS or HTTP for APIC REST communication. (Defaults to ``yes``)
+
+ validate_certs
+ Validate certificate when using HTTPS communication. (Defaults to ``yes``)
+
+ output_level
+ Influence the level of detail ACI modules return to the user. (One of ``normal``, ``info`` or ``debug``) *New in version 2.5*
+
+
+Proxy support
+.............
+By default, if an environment variable ``<protocol>_proxy`` is set on the target host, requests will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see :ref:`playbooks_environment`), or by using the ``use_proxy`` module parameter.
+
+HTTP redirects can redirect from HTTP to HTTPS, so ensure that the proxy environment for both protocols is correctly configured.
+
+If proxy support is not needed but the system may have it configured nevertheless, use the parameter ``use_proxy: no`` to avoid accidental use of the system proxy.
+
+.. hint:: Selective proxy support using the ``no_proxy`` environment variable is also supported.
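+
+As a minimal sketch, disabling proxy usage for a single task looks like this (the connection values are the same placeholders used throughout this guide):
+
+.. code-block:: yaml
+
+    - name: Query all tenants without using the system proxy
+      aci_tenant:
+        host: my-apic-1
+        username: admin
+        password: my-password
+        use_proxy: no
+        state: query
+      delegate_to: localhost
+      register: all_tenants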
+
+
+Return values
+.............
+
+.. versionadded:: 2.5
+
+The following values are always returned:
+
+ current
+ The resulting state of the managed object, or results of your query.
+
+The following values are returned when ``output_level: info``:
+
+ previous
+ The original state of the managed object (before any change was made).
+
+ proposed
+ The proposed config payload, based on user-supplied values.
+
+ sent
+ The sent config payload, based on user-supplied values and the existing configuration.
+
+The following values are returned when ``output_level: debug`` or ``ANSIBLE_DEBUG=1``:
+
+ filter_string
+ The filter used for specific APIC queries.
+
+ method
+ The HTTP method used for the sent payload. (Either ``GET`` for queries, ``DELETE`` or ``POST`` for changes)
+
+ response
+ The HTTP response from the APIC.
+
+ status
+ The HTTP status code for the request.
+
+ url
+ The url used for the request.
+
+.. note:: The module return values are documented in detail as part of each module's documentation.
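+
+As a sketch of how these values surface in practice, raise ``output_level`` and register the result; which keys are present follows the rules above:
+
+.. code-block:: yaml
+
+    - name: Ensure tenant customer-xyz exists
+      aci_tenant:
+        host: my-apic-1
+        username: admin
+        password: my-password
+        tenant: customer-xyz
+        output_level: info
+        state: present
+      delegate_to: localhost
+      register: tenant_result
+
+    - name: Show the config payload that was sent to the APIC
+      debug:
+        msg: '{{ tenant_result.sent }}'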
+
+
+More information
+................
+Various resources exist to learn more about ACI programmability; we recommend the following links:
+
+- :ref:`Developing Cisco ACI modules <aci_dev_guide>`
+- `Jacob McGill: Automating Cisco ACI with Ansible <https://blogs.cisco.com/developer/automating-cisco-aci-with-ansible-eliminates-repetitive-day-to-day-tasks>`_
+- `Cisco DevNet Learning Labs about ACI and Ansible <https://learninglabs.cisco.com/labs/tags/ACI,Ansible>`_
+
+
+.. _aci_guide_auth:
+
+ACI authentication
+------------------
+
+Password-based authentication
+.............................
+If you want to log on using a username and password, you can use the following parameters with your ACI modules:
+
+.. code-block:: yaml
+
+ username: admin
+ password: my-password
+
+Password-based authentication is very simple to work with, but it is not the most efficient form of authentication from ACI's point of view, as it requires a separate login request and an open session to work. To avoid having your session time out and requiring another login, you can use the more efficient signature-based authentication.
+
+.. note:: Password-based authentication may also trigger anti-DoS measures in ACI v3.1+ that cause session throttling and result in HTTP 503 errors and login failures.
+
+.. warning:: Never store passwords in plain text.
+
+The "Vault" feature of Ansible allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plain text in your playbooks or roles. These vault files can then be distributed or placed in source control. See :ref:`playbooks_vault` for more information.
+
+
+Signature-based authentication using certificates
+.................................................
+
+.. versionadded:: 2.5
+
+Using signature-based authentication is more efficient and more reliable than password-based authentication.
+
+Generate certificate and private key
+````````````````````````````````````
+Signature-based authentication requires a (self-signed) X.509 certificate with private key, and a configuration step for your AAA user in ACI. To generate a working X.509 certificate and private key, use the following procedure:
+
+.. code-block:: bash
+
+ $ openssl req -new -newkey rsa:1024 -days 36500 -nodes -x509 -keyout admin.key -out admin.crt -subj '/CN=Admin/O=Your Company/C=US'
+
+Configure your local user
+`````````````````````````
+Perform the following steps:
+
+- Add the X.509 certificate to your ACI AAA local user at :guilabel:`ADMIN` » :guilabel:`AAA`
+- Click :guilabel:`AAA Authentication`
+- Check that in the :guilabel:`Authentication` field the :guilabel:`Realm` field displays :guilabel:`Local`
+- Expand :guilabel:`Security Management` » :guilabel:`Local Users`
+- Click the name of the user you want to add a certificate to, in the :guilabel:`User Certificates` area
+- Click the :guilabel:`+` sign and, in the :guilabel:`Create X509 Certificate` dialog box, enter a certificate name in the :guilabel:`Name` field
+
+ * If you use the basename of your private key here, you don't need to enter ``certificate_name`` in Ansible
+
+- Copy and paste your X.509 certificate in the :guilabel:`Data` field.
+
+You can automate this by using the following Ansible task:
+
+.. code-block:: yaml
+
+ - name: Ensure we have a certificate installed
+ aci_aaa_user_certificate:
+ host: my-apic-1
+ username: admin
+ password: my-password
+
+ aaa_user: admin
+ certificate_name: admin
+ certificate: "{{ lookup('file', 'pki/admin.crt') }}" # This will read the certificate data from a local file
+
+.. note:: Signature-based authentication only works with local users.
+
+
+Use signature-based authentication with Ansible
+```````````````````````````````````````````````
+You need the following parameters with your ACI module(s) for this to work:
+
+.. code-block:: yaml
+ :emphasize-lines: 2,3
+
+ username: admin
+ private_key: pki/admin.key
+ certificate_name: admin # This could be left out !
+
+or you can use the private key content:
+
+.. code-block:: yaml
+ :emphasize-lines: 2,3
+
+ username: admin
+ private_key: |
+ -----BEGIN PRIVATE KEY-----
+ <<your private key content>>
+ -----END PRIVATE KEY-----
+ certificate_name: admin # This could be left out !
+
+
+.. hint:: If you use a certificate name in ACI that matches the private key's basename, you can leave out the ``certificate_name`` parameter, as in the example above.
+
+
+Using Ansible Vault to encrypt the private key
+``````````````````````````````````````````````
+.. versionadded:: 2.8
+
+To start, encrypt the private key and give it a strong password.
+
+.. code-block:: bash
+
+ ansible-vault encrypt admin.key
+
+Use a text editor to open the private key. You should now see the encrypted contents.
+
+.. code-block:: bash
+
+ $ANSIBLE_VAULT;1.1;AES256
+ 56484318584354658465121889743213151843149454864654151618131547984132165489484654
+ 45641818198456456489479874513215489484843614848456466655432455488484654848489498
+ ....
+
+Copy and paste the encrypted private key into your playbook as a new variable.
+
+.. code-block:: yaml
+
+ private_key: !vault |
+ $ANSIBLE_VAULT;1.1;AES256
+ 56484318584354658465121889743213151843149454864654151618131547984132165489484654
+ 45641818198456456489479874513215489484843614848456466655432455488484654848489498
+ ....
+
+Use the new variable for the ``private_key`` parameter:
+
+.. code-block:: yaml
+
+ username: admin
+ private_key: "{{ private_key }}"
+ certificate_name: admin # This could be left out !
+
+When running the playbook, use ``--ask-vault-pass`` to decrypt the private key.
+
+.. code-block:: bash
+
+ ansible-playbook site.yaml --ask-vault-pass
+
+
+More information
+````````````````
+- Detailed information about Signature-based Authentication is available from `Cisco APIC Signature-Based Transactions <https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_KB_Signature_Based_Transactions.html>`_.
+- More information on Ansible Vault can be found on the :ref:`Ansible Vault <vault>` page.
+
+
+.. _aci_guide_rest:
+
+Using ACI REST with Ansible
+---------------------------
+While a lot of ACI modules already exist in the Ansible distribution, and the most common actions can be performed with them, there is always something that may not be possible with off-the-shelf modules.
+
+The aci_rest module provides you with direct access to the APIC REST API and enables you to perform any task not already covered by the existing modules. This may seem like a complex undertaking, but you can effortlessly generate the needed REST payload for any action performed in the ACI web interface.
+
+
+Built-in idempotency
+....................
+Because the APIC REST API is intrinsically idempotent and can report whether a change was made, the aci_rest module automatically inherits both capabilities and is a first-class solution for automating your ACI infrastructure. As a result, users that require more powerful low-level access to their ACI infrastructure don't have to give up on idempotency and don't have to guess whether a change was performed when using the aci_rest module.
+
+
+Using the aci_rest module
+.........................
+The aci_rest module accepts the native XML and JSON payloads, but additionally accepts an inline YAML payload (structured like JSON). The XML payload requires you to use a path ending with ``.xml``, whereas JSON or YAML require the path to end with ``.json``.
+
+When you are making modifications, you can use the POST or DELETE methods, whereas queries require the GET method.
+
+For instance, if you would like to ensure a specific tenant exists on ACI, the four examples below are functionally identical:
+
+**XML** (Native ACI REST)
+
+.. code-block:: yaml
+
+ - aci_rest:
+ host: my-apic-1
+ private_key: pki/admin.key
+
+ method: post
+ path: /api/mo/uni.xml
+ content: |
+ <fvTenant name="customer-xyz" descr="Customer XYZ"/>
+
+**JSON** (Native ACI REST)
+
+.. code-block:: yaml
+
+ - aci_rest:
+ host: my-apic-1
+ private_key: pki/admin.key
+
+ method: post
+ path: /api/mo/uni.json
+ content:
+ {
+ "fvTenant": {
+ "attributes": {
+ "name": "customer-xyz",
+ "descr": "Customer XYZ"
+ }
+ }
+ }
+
+**YAML** (Ansible-style REST)
+
+.. code-block:: yaml
+
+ - aci_rest:
+ host: my-apic-1
+ private_key: pki/admin.key
+
+ method: post
+ path: /api/mo/uni.json
+ content:
+ fvTenant:
+ attributes:
+ name: customer-xyz
+ descr: Customer XYZ
+
+**Ansible task** (Dedicated module)
+
+.. code-block:: yaml
+
+ - aci_tenant:
+ host: my-apic-1
+ private_key: pki/admin.key
+
+ tenant: customer-xyz
+ description: Customer XYZ
+ state: present
+
+
+.. hint:: The XML format is more practical when there is a need to template the REST payload (inline), but the YAML format is more convenient for maintaining your infrastructure-as-code and feels more naturally integrated with Ansible playbooks. The dedicated modules offer a simpler, more abstracted, but also more limited experience. Use what feels best for your use case.
+
+
+More information
+................
+Plenty of resources exist to learn about ACI's APIC REST interface; we recommend the links below:
+
+- `The ACI collection on Ansible Galaxy <https://galaxy.ansible.com/cisco/aci>`_
+- `APIC REST API Configuration Guide <https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/2-x/rest_cfg/2_1_x/b_Cisco_APIC_REST_API_Configuration_Guide.html>`_ -- Detailed guide on how the APIC REST API is designed and used, incl. many examples
+- `APIC Management Information Model reference <https://developer.cisco.com/docs/apic-mim-ref/>`_ -- Complete reference of the APIC object model
+- `Cisco DevNet Learning Labs about ACI and REST <https://learninglabs.cisco.com/labs/tags/ACI,REST>`_
+
+
+.. _aci_guide_ops:
+
+Operational examples
+--------------------
+Here is a small overview of useful operational tasks to reuse in your playbooks.
+
+Feel free to contribute more useful snippets.
+
+
+Waiting for all controllers to be ready
+.......................................
+You can use the task below, after you have started to build your APICs and configured the cluster, to wait until all the APICs have come online. It waits until the number of controllers equals the number listed in the ``apic`` inventory group.
+
+.. code-block:: yaml
+
+    - name: Waiting for all controllers to be ready
+      aci_rest:
+        host: my-apic-1
+        private_key: pki/admin.key
+        method: get
+        path: /api/node/class/topSystem.json?query-target-filter=eq(topSystem.role,"controller")
+      register: topsystem
+      until: topsystem is success and topsystem.totalCount|int >= groups['apic']|count >= 3
+      retries: 20
+      delay: 30
+
+
+Waiting for cluster to be fully-fit
+...................................
+The example below waits until the cluster is fully fit. In this example, you know the number of APICs in the cluster, and you verify that each APIC reports a 'fully-fit' status.
+
+.. code-block:: yaml
+
+    - name: Waiting for cluster to be fully-fit
+      aci_rest:
+        host: my-apic-1
+        private_key: pki/admin.key
+        method: get
+        path: /api/node/class/infraWiNode.json?query-target-filter=wcard(infraWiNode.dn,"topology/pod-1/node-1/av")
+      register: infrawinode
+      until: >
+        infrawinode is success and
+        infrawinode.totalCount|int >= groups['apic']|count >= 3 and
+        infrawinode.imdata[0].infraWiNode.attributes.health == 'fully-fit' and
+        infrawinode.imdata[1].infraWiNode.attributes.health == 'fully-fit' and
+        infrawinode.imdata[2].infraWiNode.attributes.health == 'fully-fit'
+      retries: 30
+      delay: 30
+
+
+.. _aci_guide_errors:
+
+APIC error messages
+-------------------
+The following error messages may occur; this section can help you understand exactly what is going on and how to fix or avoid them.
+
+ APIC Error 122: unknown managed object class 'polUni'
+  If you receive this error although your aci_rest payload and object classes seem correct, the issue might be that your payload is not in fact correct JSON (for example, the sent payload is using single quotes, rather than double quotes), and as a result the APIC is not correctly parsing your object classes from the payload. One way to avoid this is by using a YAML or an XML formatted payload, which are easier to construct correctly and modify later.
+
+
+ APIC Error 400: invalid data at line '1'. Attributes are missing, tag 'attributes' must be specified first, before any other tag
+  Although the JSON specification allows unordered elements, the APIC REST API requires that the JSON ``attributes`` element precede the ``children`` array or other elements, so you need to ensure that your payload conforms to this requirement. Sorting your dictionary keys will do the trick just fine. If you don't have any attributes, it may be necessary to add ``attributes: {}``, as the APIC does expect the entry to precede any ``children``.
+
+
+ APIC Error 801: property descr of uni/tn-TENANT/ap-AP failed validation for value 'A "legacy" network'
+  Some values in the APIC have strict format rules to comply with, and the internal APIC validation check for the provided value failed. In the above case, the ``description`` parameter (internally known as ``descr``) only accepts values conforming to `Regex: [a-zA-Z0-9\\!#$%()*,-./:;@ _{|}~?&+]+ <https://pubhub-prod.s3.amazonaws.com/media/apic-mim-ref/docs/MO-fvAp.html#descr>`_; in general, it must not include quotes or square brackets.
+
+
+.. _aci_guide_known_issues:
+
+Known issues
+------------
+The aci_rest module is a wrapper around the APIC REST API. As a result, any issues related to the APIC will be reflected in the use of this module.
+
+All of the issues below have been reported to the vendor, and most can simply be avoided.
+
+ Too many consecutive API calls may result in connection throttling
+  Starting with ACI v3.1, the APIC will actively throttle password-based authenticated connection rates over a specific threshold. This is part of an anti-DDoS measure, but it can get in the way when using Ansible with ACI and password-based authentication. Currently, one solution is to increase this threshold within the nginx configuration, but using signature-based authentication is recommended.
+
+  **NOTE:** It is advisable to use signature-based authentication with ACI, as it not only prevents connection throttling, but also improves general performance when using the ACI modules.
+
+
+ Specific requests may not reflect changes correctly (`#35041 <https://github.com/ansible/ansible/issues/35041>`_)
+  There is a known issue where specific requests to the APIC do not properly reflect changes in the resulting output, even when we request those changes explicitly from the APIC. In one instance, using the path ``api/node/mo/uni/infra.xml`` fails, whereas ``api/node/mo/uni/infra/.xml`` does work correctly.
+
+ **NOTE:** A workaround is to register the task return values (for example, ``register: this``) and influence when the task should report a change by adding: ``changed_when: this.imdata != []``.
+
+
+ Specific requests are known to not be idempotent (`#35050 <https://github.com/ansible/ansible/issues/35050>`_)
+  The behaviour of the APIC is inconsistent with regard to the use of ``status="created"`` and ``status="deleted"``. The result is that when you use ``status="created"`` in your payload, the resulting tasks are not idempotent and creation will fail when the object was already created. However, this is not the case with ``status="deleted"``, where such a call to a non-existing object does not cause any failure whatsoever.
+
+  **NOTE:** A workaround is to avoid using ``status="created"`` and instead use ``status="modified"`` when idempotency is essential to your workflow.
+
+
+ Setting user password is not idempotent (`#35544 <https://github.com/ansible/ansible/issues/35544>`_)
+  Due to an inconsistency in the APIC REST API, a task that sets the password of a locally-authenticated user is not idempotent. The APIC will complain with the message ``Password history check: user dag should not use previous 5 passwords``.
+
+ **NOTE:** There is no workaround for this issue.
+
+
+.. _aci_guide_community:
+
+ACI Ansible community
+---------------------
+If you have specific issues with the ACI modules, have a feature request, or would like to contribute to the ACI project by proposing changes or documentation updates, look at the Ansible Community wiki ACI page at: https://github.com/ansible/community/wiki/Network:-ACI
+
+You will find our roadmap, an overview of open ACI issues and pull-requests, and more information about who we are. If you have an interest in using ACI with Ansible, feel free to join! We occasionally meet online to track progress and prepare for new Ansible releases.
+
+
+.. seealso::
+
+ `ACI collection on Ansible Galaxy <https://galaxy.ansible.com/cisco/aci>`_
+ View the content tab for a complete list of supported ACI modules.
+ :ref:`Developing Cisco ACI modules <aci_dev_guide>`
+       A walkthrough on how to develop new Cisco ACI modules to contribute back.
+ `ACI community <https://github.com/ansible/community/wiki/Network:-ACI>`_
+ The Ansible ACI community wiki page, includes roadmap, ideas and development documentation.
+ :ref:`network_guide`
+ A detailed guide on how to use Ansible for automating network infrastructure.
+ `Network Working Group <https://github.com/ansible/community/tree/master/group-network>`_
+ The Ansible Network community page, includes contact information and meeting information.
+ `#ansible-network <https://webchat.freenode.net/?channels=ansible-network>`_
+ The #ansible-network IRC chat channel on Freenode.net.
+ `User Mailing List <https://groups.google.com/group/ansible-project>`_
+ Have a question? Stop by the google group!
diff --git a/docs/docsite/rst/scenario_guides/guide_alicloud.rst b/docs/docsite/rst/scenario_guides/guide_alicloud.rst
new file mode 100644
index 00000000..c91eaf7f
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_alicloud.rst
@@ -0,0 +1,125 @@
+Alibaba Cloud Compute Services Guide
+====================================
+
+.. _alicloud_intro:
+
+Introduction
+````````````
+
+Ansible contains several modules for controlling and managing Alibaba Cloud Compute Services (Alicloud). This guide
+explains how to use the Alicloud Ansible modules together.
+
+All Alicloud modules require ``footmark`` - install it on your control machine with ``pip install footmark``.
+
+Cloud modules, including Alicloud modules, execute on your local machine (the control machine) with ``connection: local``, rather than on remote machines defined in your hosts.
+
+Normally, you'll use the following pattern for plays that provision Alicloud resources::
+
+ - hosts: localhost
+ connection: local
+ vars:
+ - ...
+ tasks:
+ - ...
+
+.. _alicloud_authentication:
+
+Authentication
+``````````````
+
+You can specify your Alicloud authentication credentials (access key and secret key) by passing them as
+environment variables or by storing them in a vars file.
+
+To pass authentication credentials as environment variables::
+
+ export ALICLOUD_ACCESS_KEY='Alicloud123'
+ export ALICLOUD_SECRET_KEY='AlicloudSecret123'
+
+To store authentication credentials in a vars_file, encrypt them with :ref:`Ansible Vault<vault>` to keep them secure, then list them::
+
+ ---
+ alicloud_access_key: "--REMOVED--"
+ alicloud_secret_key: "--REMOVED--"
+
+Note that if you store your credentials in a vars_file, you need to refer to them in each Alicloud module. For example::
+
+ - ali_instance:
+ alicloud_access_key: "{{alicloud_access_key}}"
+ alicloud_secret_key: "{{alicloud_secret_key}}"
+ image_id: "..."
+
+.. _alicloud_provisioning:
+
+Provisioning
+````````````
+
+Alicloud modules create Alicloud ECS instances, disks, virtual private clouds, virtual switches, security groups and other resources.
+
+You can use the ``count`` parameter to control the number of resources you create or terminate. For example, if you want exactly 5 instances tagged ``NewECS``,
+set the ``count`` of instances to 5 and the ``count_tag`` to ``NewECS``, as shown in the last task of the example playbook below.
+If there are no instances with the tag ``NewECS``, the task creates 5 new instances. If there are 2 instances with that tag, the task
+creates 3 more. If there are 8 instances with that tag, the task terminates 3 of those instances.
+
+If you do not specify a ``count_tag``, the task creates the number of instances you specify in ``count`` with the ``instance_name`` you provide.
+
+::
+
+ # alicloud_setup.yml
+
+ - hosts: localhost
+ connection: local
+
+ tasks:
+
+ - name: Create VPC
+ ali_vpc:
+ cidr_block: '{{ cidr_block }}'
+ vpc_name: new_vpc
+ register: created_vpc
+
+ - name: Create VSwitch
+ ali_vswitch:
+ alicloud_zone: '{{ alicloud_zone }}'
+ cidr_block: '{{ vsw_cidr }}'
+ vswitch_name: new_vswitch
+ vpc_id: '{{ created_vpc.vpc.id }}'
+ register: created_vsw
+
+ - name: Create security group
+ ali_security_group:
+ name: new_group
+ vpc_id: '{{ created_vpc.vpc.id }}'
+ rules:
+ - proto: tcp
+ port_range: 22/22
+ cidr_ip: 0.0.0.0/0
+ priority: 1
+ rules_egress:
+ - proto: tcp
+ port_range: 80/80
+ cidr_ip: 192.168.0.54/32
+ priority: 1
+ register: created_group
+
+ - name: Create a set of instances
+ ali_instance:
+ security_groups: '{{ created_group.group_id }}'
+ instance_type: ecs.n4.small
+ image_id: "{{ ami_id }}"
+ instance_name: "My-new-instance"
+ instance_tags:
+ Name: NewECS
+ Version: 0.0.1
+ count: 5
+ count_tag:
+ Name: NewECS
+ allocate_public_ip: true
+ max_bandwidth_out: 50
+ vswitch_id: '{{ created_vsw.vswitch.id}}'
+ register: create_instance
+
+In the example playbook above, data about the VPC, vswitch, security group, and instances created by this playbook
+is saved in the variables defined by the "register" keyword in each task.
+
+Each Alicloud module offers a variety of parameter options. Not all options are demonstrated in the above example.
+See each individual module for further details and examples.
diff --git a/docs/docsite/rst/scenario_guides/guide_aws.rst b/docs/docsite/rst/scenario_guides/guide_aws.rst
new file mode 100644
index 00000000..ba453195
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_aws.rst
@@ -0,0 +1,281 @@
+Amazon Web Services Guide
+=========================
+
+.. _aws_intro:
+
+Introduction
+````````````
+
+Ansible contains a number of modules for controlling Amazon Web Services (AWS). The purpose of this
+section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in an AWS context.
+
+Requirements for the AWS modules are minimal.
+
+All of the modules require and are tested against recent versions of boto, usually boto3. Check the module documentation for the minimum required version for each module. You must have the boto3 Python module installed on your control machine. You may also need the original boto package. You can install these modules from your OS distribution or using the Python package installer: ``pip install boto3``.
+
+Whereas classically Ansible will execute tasks in its host loop against multiple remote machines, most cloud-control steps occur on your local machine with reference to the regions to control.
+
+In your playbook steps, we'll typically be using the following pattern for provisioning steps::
+
+ - hosts: localhost
+ gather_facts: False
+ tasks:
+ - ...
+
+.. _aws_authentication:
+
+Authentication
+``````````````
+
+Authentication with the AWS-related modules is handled by either
+specifying your access and secret keys as environment variables or as module arguments.
+
+For environment variables::
+
+ export AWS_ACCESS_KEY_ID='AK123'
+ export AWS_SECRET_ACCESS_KEY='abc123'
+
+For storing these in a vars_file, ideally encrypted with ansible-vault::
+
+ ---
+ ec2_access_key: "--REMOVED--"
+ ec2_secret_key: "--REMOVED--"
+
+Note that if you store your credentials in a vars_file, you need to refer to them in each AWS module. For example::
+
+    - ec2:
+        aws_access_key: "{{ec2_access_key}}"
+        aws_secret_key: "{{ec2_secret_key}}"
+        image: "..."
+
+.. _aws_provisioning:
+
+Provisioning
+````````````
+
+The ec2 module provisions and de-provisions instances within EC2.
+
+An example of making sure there are only 5 instances tagged 'Demo' in EC2 follows.
+
+In the example below, the "exact_count" of instances is set to 5. This means if there are 0 instances already existing, then
+5 new instances would be created. If there were 2 instances, only 3 would be created, and if there were 8 instances, 3 instances would
+be terminated.
+
+What is being counted is specified by the "count_tag" parameter. The parameter "instance_tags" is used to apply tags to the newly created
+instance.::
+
+ # demo_setup.yml
+
+ - hosts: localhost
+ gather_facts: False
+
+ tasks:
+
+ - name: Provision a set of instances
+ ec2:
+ key_name: my_key
+ group: test
+ instance_type: t2.micro
+ image: "{{ ami_id }}"
+ wait: true
+ exact_count: 5
+ count_tag:
+ Name: Demo
+ instance_tags:
+ Name: Demo
+ register: ec2
+
+The data about what instances are created is being saved by the "register" keyword in the variable named "ec2".
+
+From this, we'll use the add_host module to dynamically create a host group consisting of these new instances. This facilitates performing configuration actions on the hosts immediately in a subsequent task.::
+
+ # demo_setup.yml
+
+ - hosts: localhost
+ gather_facts: False
+
+ tasks:
+
+ - name: Provision a set of instances
+ ec2:
+ key_name: my_key
+ group: test
+ instance_type: t2.micro
+ image: "{{ ami_id }}"
+ wait: true
+ exact_count: 5
+ count_tag:
+ Name: Demo
+ instance_tags:
+ Name: Demo
+ register: ec2
+
+ - name: Add all instance public IPs to host group
+ add_host: hostname={{ item.public_ip }} groups=ec2hosts
+ loop: "{{ ec2.instances }}"
+
+With the host group now created, a second play at the bottom of the same provisioning playbook file might now have some configuration steps::
+
+ # demo_setup.yml
+
+ - name: Provision a set of instances
+ hosts: localhost
+ # ... AS ABOVE ...
+
+ - hosts: ec2hosts
+ name: configuration play
+ user: ec2-user
+ gather_facts: true
+
+ tasks:
+
+ - name: Check NTP service
+ service: name=ntpd state=started
+
+.. _aws_security_groups:
+
+Security Groups
+```````````````
+
+Security groups on AWS are stateful. The response of a request from your instance is allowed to flow in regardless of inbound security group rules and vice-versa.
+If you want to allow traffic only to the AWS S3 service, you need to fetch the current IP ranges of AWS S3 for one region and apply them as an egress rule::
+
+ - name: fetch raw ip ranges for aws s3
+ set_fact:
+ raw_s3_ranges: "{{ lookup('aws_service_ip_ranges', region='eu-central-1', service='S3', wantlist=True) }}"
+
+ - name: prepare list structure for ec2_group module
+ set_fact:
+ s3_ranges: "{{ s3_ranges | default([]) + [{'proto': 'all', 'cidr_ip': item, 'rule_desc': 'S3 Service IP range'}] }}"
+ loop: "{{ raw_s3_ranges }}"
+
+ - name: set S3 IP ranges to egress rules
+ ec2_group:
+ name: aws_s3_ip_ranges
+ description: allow outgoing traffic to aws S3 service
+ region: eu-central-1
+ state: present
+ vpc_id: vpc-123456
+ purge_rules: true
+ purge_rules_egress: true
+ rules: []
+ rules_egress: "{{ s3_ranges }}"
+ tags:
+ Name: aws_s3_ip_ranges
+
+.. _aws_host_inventory:
+
+Host Inventory
+``````````````
+
+Once your nodes are spun up, you'll probably want to talk to them again. With a cloud setup, it's best to not maintain a static list of cloud hostnames
+in text files. Rather, the best way to handle this is to use the aws_ec2 inventory plugin. See :ref:`dynamic_inventory`.
+
+The plugin will also return instances that were created outside of Ansible and allow Ansible to manage them.
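+
+A minimal sketch of an inventory source for the plugin (the file name and region are placeholders; the file name must end in ``aws_ec2.yml``)::
+
+    # demo.aws_ec2.yml
+    plugin: aws_ec2
+    regions:
+      - us-east-1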
+
+.. _aws_tags_and_groups:
+
+Tags And Groups And Variables
+`````````````````````````````
+
+When using the inventory plugin, you can configure extra inventory structure based on the metadata returned by AWS.
+
+For instance, you might use ``keyed_groups`` to create groups from instance tags::
+
+ plugin: aws_ec2
+ keyed_groups:
+ - prefix: tag
+ key: tags
+
+
+You can then target all instances with a "class" tag where the value is "webserver" in a play::
+
+ - hosts: tag_class_webserver
+ tasks:
+      - ping:
+
+You can also use these groups with 'group_vars' to set variables that are automatically applied to matching instances. See :ref:`splitting_out_vars`.
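+
+For example, a sketch of a group_vars file for the group above (the file path and variable are illustrative)::
+
+    # group_vars/tag_class_webserver.yml
+    http_port: 8080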
+
+.. _aws_pull:
+
+Autoscaling with Ansible Pull
+`````````````````````````````
+
+Amazon Autoscaling features automatically increase or decrease capacity based on load. There are also Ansible modules shown in the cloud documentation that
+can configure autoscaling policy.
+
+When nodes come online, it may not be sufficient to wait for the next cycle of an ansible command to come along and configure that node.
+
+To do this, pre-bake machine images which contain the necessary ansible-pull invocation. Ansible-pull is a command line tool that fetches a playbook from a git server and runs it locally.
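+
+A sketch of such an invocation, for example from cron or instance user data (the repository URL and playbook name are placeholders)::
+
+    # fetch the configuration repository and apply local.yml to this host
+    ansible-pull -U https://git.example.com/ops/ansible-config.git local.yml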
+
+One of the challenges of this approach is that there needs to be a centralized way to store data about the results of pull commands in an autoscaling context.
+For this reason, the autoscaling solution provided below in the next section can be a better approach.
+
+Read :ref:`ansible-pull` for more information on pull-mode playbooks.
+
+.. _aws_autoscale:
+
+Autoscaling with Ansible Tower
+``````````````````````````````
+
+:ref:`ansible_tower` also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call
+a defined URL and the server will "dial out" to the requester and configure an instance that is spinning up. This can be a great way
+to reconfigure ephemeral nodes. See the Tower install and product documentation for more details.
+
+A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared
+with remote hosts.
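+
+As an illustration only (the Tower hostname, job template ID, and host config key below are placeholders; see the Tower documentation for the exact callback URL of your job template)::
+
+    curl -s --data "host_config_key=HOST_CONFIG_KEY" \
+        https://tower.example.com/api/v2/job_templates/42/callback/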
+
+.. _aws_cloudformation_example:
+
+Ansible With (And Versus) CloudFormation
+````````````````````````````````````````
+
+CloudFormation is an Amazon technology for defining a cloud stack as a JSON or YAML document.
+
+Ansible modules provide an easier-to-use interface than CloudFormation in many examples, without defining a complex JSON/YAML document.
+This is recommended for most users.
+
+However, for users that have decided to use CloudFormation, there is an Ansible module that can be used to apply a CloudFormation template
+to Amazon.
+
+When using Ansible with CloudFormation, typically Ansible will be used with a tool like Packer to build images, and CloudFormation will launch
+those images, or Ansible will be invoked through user data once the image comes online, or a combination of the two.
+
+Please see the examples in the Ansible CloudFormation module for more details.
+
+.. _aws_image_build:
+
+AWS Image Building With Ansible
+```````````````````````````````
+
+Many users may want to have images boot to a more complete configuration rather than configuring them entirely after instantiation. To do this,
+one of many programs can be used with Ansible playbooks to define and upload a base image, which will then get its own AMI ID for usage with
+the ec2 module or other Ansible AWS modules such as ec2_asg or the cloudformation module. Possible tools include Packer, aminator, and Ansible's
+ec2_ami module.
+
+Generally speaking, we find most users using Packer.
+
+See the Packer documentation of the `Ansible local Packer provisioner <https://www.packer.io/docs/provisioners/ansible-local.html>`_ and `Ansible remote Packer provisioner <https://www.packer.io/docs/provisioners/ansible.html>`_.
+
+If you do not want to adopt Packer at this time, configuring a base-image with Ansible after provisioning (as shown above) is acceptable.
+
+.. _aws_next_steps:
+
+Next Steps: Explore Modules
+```````````````````````````
+
+Ansible ships with lots of modules for configuring a wide array of EC2 services. Browse the "Cloud" category of the module
+documentation for a full list with examples.
+
+.. seealso::
+
+ :ref:`list_of_collections`
+ Browse existing collections, modules, and plugins
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_delegation`
+       Delegation, useful for working with load balancers, clouds, and locally executed steps.
+ `User Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Have a question? Stop by the google group!
+ `irc.freenode.net <http://irc.freenode.net>`_
+ #ansible IRC chat channel
diff --git a/docs/docsite/rst/scenario_guides/guide_azure.rst b/docs/docsite/rst/scenario_guides/guide_azure.rst
new file mode 100644
index 00000000..2317ade4
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_azure.rst
@@ -0,0 +1,480 @@
+Microsoft Azure Guide
+=====================
+
+Ansible includes a suite of modules for interacting with Azure Resource Manager, giving you the tools to easily create
+and orchestrate infrastructure on the Microsoft Azure Cloud.
+
+Requirements
+------------
+
+Using the Azure Resource Manager modules requires having specific Azure SDK modules
+installed on the host running Ansible.
+
+.. code-block:: bash
+
+ $ pip install 'ansible[azure]'
+
+If you are running Ansible from source, you can install the dependencies from the
+root directory of the Ansible repo.
+
+.. code-block:: bash
+
+ $ pip install .[azure]
+
+You can also directly run Ansible in `Azure Cloud Shell <https://shell.azure.com>`_, where Ansible is pre-installed.
+
+Authenticating with Azure
+-------------------------
+
+Using the Azure Resource Manager modules requires authenticating with the Azure API. You can choose from two authentication strategies:
+
+* Active Directory Username/Password
+* Service Principal Credentials
+
+Follow the directions for the strategy you wish to use, then proceed to `Providing Credentials to Azure Modules`_ for
+instructions on how to actually use the modules and authenticate with the Azure API.
+
+
+Using Service Principal
+.......................
+
+There is now a detailed official tutorial describing `how to create a service principal <https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal>`_.
+
+After stepping through the tutorial you will have:
+
+* Your Client ID, which is found in the "client id" box in the "Configure" page of your application in the Azure portal
+* Your Secret key, generated when you created the application. You cannot show the key after creation.
+  If you lose the key, you must create a new one in the "Configure" page of your application.
+* And finally, a tenant ID. It's a UUID (for example, ABCDEFGH-1234-ABCD-1234-ABCDEFGHIJKL) pointing to the AD containing your
+ application. You will find it in the URL from within the Azure portal, or in the "view endpoints" of any given URL.
+
+
+Using Active Directory Username/Password
+........................................
+
+To create an Active Directory username/password:
+
+* Connect to the Azure Classic Portal with your admin account
+* Create a user in your default AAD. You must NOT activate Multi-Factor Authentication
+* Go to Settings - Administrators
+* Click on Add and enter the email of the new user.
+* Check the checkbox of the subscription you want to test with this user.
+* Log in to the Azure Portal with this new user to change the temporary password to a new one. You will not be able to use the
+ temporary password for OAuth login.
+
+Providing Credentials to Azure Modules
+......................................
+
+The modules offer several ways to provide your credentials. For a CI/CD tool such as Ansible Tower or Jenkins, you will
+most likely want to use environment variables. For local development you may wish to store your credentials in a file
+within your home directory. And of course, you can always pass credentials as parameters to a task within a playbook. The
+order of precedence is parameters, then environment variables, and finally a file found in your home directory.
+
+Using Environment Variables
+```````````````````````````
+
+To pass service principal credentials via the environment, define the following variables:
+
+* AZURE_CLIENT_ID
+* AZURE_SECRET
+* AZURE_SUBSCRIPTION_ID
+* AZURE_TENANT
+
+To pass Active Directory username/password via the environment, define the following variables:
+
+* AZURE_AD_USER
+* AZURE_PASSWORD
+* AZURE_SUBSCRIPTION_ID
+
+To pass Active Directory username/password in ADFS via the environment, define the following variables:
+
+* AZURE_AD_USER
+* AZURE_PASSWORD
+* AZURE_CLIENT_ID
+* AZURE_TENANT
+* AZURE_ADFS_AUTHORITY_URL
+
+"AZURE_ADFS_AUTHORITY_URL" is optional. It's necessary only when you have own ADFS authority like https://yourdomain.com/adfs.
+
+Storing in a File
+`````````````````
+
+When working in a development environment, it may be desirable to store credentials in a file. The modules will look
+for credentials in ``$HOME/.azure/credentials``. This file is an ini style file. It will look as follows:
+
+.. code-block:: ini
+
+ [default]
+ subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ secret=xxxxxxxxxxxxxxxxx
+ tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+
+.. note:: If your secret values contain non-ASCII characters, you must `URL Encode <https://www.w3schools.com/tags/ref_urlencode.asp>`_ them to avoid login errors.
+
+It is possible to store multiple sets of credentials within the credentials file by creating multiple sections. Each
+section is considered a profile. The modules look for the [default] profile automatically. Define AZURE_PROFILE in the
+environment or pass a profile parameter to specify a specific profile.
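+
+For example, a sketch of a credentials file with an additional profile (the section name is illustrative):
+
+.. code-block:: ini
+
+    [default]
+    subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+    client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+    secret=xxxxxxxxxxxxxxxxx
+    tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+
+    [testing]
+    subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+    client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+    secret=xxxxxxxxxxxxxxxxx
+    tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+
+Selecting the additional profile through the environment:
+
+.. code-block:: bash
+
+    export AZURE_PROFILE=testing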
+
+Passing as Parameters
+`````````````````````
+
+If you wish to pass credentials as parameters to a task, use the following parameters for service principal:
+
+* client_id
+* secret
+* subscription_id
+* tenant
+
+Or, pass the following parameters for Active Directory username/password:
+
+* ad_user
+* password
+* subscription_id
+
+Or, pass the following parameters for ADFS username/password:
+
+* ad_user
+* password
+* client_id
+* tenant
+* adfs_authority_url
+
+"adfs_authority_url" is optional. It's necessary only when you have own ADFS authority like https://yourdomain.com/adfs.
+
+
+Other Cloud Environments
+------------------------
+
+To use an Azure Cloud other than the default public cloud (for example, Azure China Cloud, Azure US Government Cloud, Azure Stack),
+pass the "cloud_environment" argument to modules, configure it in a credential profile, or set the "AZURE_CLOUD_ENVIRONMENT"
+environment variable. The value is either a cloud name as defined by the Azure Python SDK (for example, "AzureChinaCloud",
+"AzureUSGovernment"; defaults to "AzureCloud") or an Azure metadata discovery URL (for Azure Stack).
+
+Creating Virtual Machines
+-------------------------
+
+There are two ways to create a virtual machine, both involving the azure_rm_virtualmachine module. We can either create
+a storage account, network interface, security group and public IP address and pass the names of these objects to the
+module as parameters, or we can let the module do the work for us and accept the defaults it chooses.
+
+Creating Individual Components
+..............................
+
+An Azure module is available to help you create a storage account, virtual network, subnet, network interface,
+security group and public IP. Here is a full example of creating each of these and passing the names to the
+azure_rm_virtualmachine module at the end:
+
+.. code-block:: yaml
+
+ - name: Create storage account
+ azure_rm_storageaccount:
+ resource_group: Testing
+ name: testaccount001
+ account_type: Standard_LRS
+
+ - name: Create virtual network
+ azure_rm_virtualnetwork:
+ resource_group: Testing
+ name: testvn001
+ address_prefixes: "10.10.0.0/16"
+
+ - name: Add subnet
+ azure_rm_subnet:
+ resource_group: Testing
+ name: subnet001
+ address_prefix: "10.10.0.0/24"
+ virtual_network: testvn001
+
+ - name: Create public ip
+ azure_rm_publicipaddress:
+ resource_group: Testing
+ allocation_method: Static
+ name: publicip001
+
+ - name: Create security group that allows SSH
+ azure_rm_securitygroup:
+ resource_group: Testing
+ name: secgroup001
+ rules:
+ - name: SSH
+ protocol: Tcp
+ destination_port_range: 22
+ access: Allow
+ priority: 101
+ direction: Inbound
+
+ - name: Create NIC
+ azure_rm_networkinterface:
+ resource_group: Testing
+ name: testnic001
+ virtual_network: testvn001
+ subnet: subnet001
+ public_ip_name: publicip001
+ security_group: secgroup001
+
+ - name: Create virtual machine
+ azure_rm_virtualmachine:
+ resource_group: Testing
+ name: testvm001
+ vm_size: Standard_D1
+ storage_account: testaccount001
+ storage_container: testvm001
+ storage_blob: testvm001.vhd
+ admin_username: admin
+ admin_password: Password!
+ network_interfaces: testnic001
+ image:
+ offer: CentOS
+ publisher: OpenLogic
+ sku: '7.1'
+ version: latest
+
+Each of the Azure modules offers a variety of parameter options. Not all options are demonstrated in the above example.
+See each individual module for further details and examples.
+
+
+Creating a Virtual Machine with Default Options
+...............................................
+
+If you simply want to create a virtual machine without specifying all the details, you can do that as well. The only
+caveat is that you will need a virtual network with one subnet already in your resource group. Assuming you have a
+virtual network already with an existing subnet, you can run the following to create a VM:
+
+.. code-block:: yaml
+
+ azure_rm_virtualmachine:
+ resource_group: Testing
+ name: testvm10
+ vm_size: Standard_D1
+ admin_username: chouseknecht
+ ssh_password_enabled: false
+ ssh_public_keys: "{{ ssh_keys }}"
+ image:
+ offer: CentOS
+ publisher: OpenLogic
+ sku: '7.1'
+ version: latest
+
+
+Creating a Virtual Machine in Availability Zones
+..................................................
+
+If you want to create a VM in an availability zone,
+consider the following:
+
+* Both OS disk and data disk must be a 'managed disk', not an 'unmanaged disk'.
+* When creating a VM with the ``azure_rm_virtualmachine`` module,
+ you need to explicitly set the ``managed_disk_type`` parameter
+ to change the OS disk to a managed disk.
+  Otherwise, the OS disk becomes an unmanaged disk.
+* When you create a data disk with the ``azure_rm_manageddisk`` module,
+ you need to explicitly specify the ``storage_account_type`` parameter
+ to make it a managed disk.
+ Otherwise, the data disk will be an unmanaged disk.
+* A managed disk does not require a storage account or a storage container,
+  unlike an unmanaged disk.
+ In particular, note that once a VM is created on an unmanaged disk,
+ an unnecessary storage container named "vhds" is automatically created.
+* When you create an IP address with the ``azure_rm_publicipaddress`` module,
+ you must set the ``sku`` parameter to ``standard``.
+ Otherwise, the IP address cannot be used in an availability zone.
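+
+Putting these points together, here is a minimal sketch of creating a zoned VM with a managed OS disk and a
+standard-SKU public IP. It reuses the ``testvn001``/``subnet001`` network objects from the example above; the
+resource names are illustrative:
+
+.. code-block:: yaml
+
+    - name: Create public IP usable in an availability zone
+      azure_rm_publicipaddress:
+        resource_group: Testing
+        name: zonedip001
+        allocation_method: Static
+        sku: standard
+
+    - name: Create NIC using the zoned public IP
+      azure_rm_networkinterface:
+        resource_group: Testing
+        name: zonednic001
+        virtual_network: testvn001
+        subnet: subnet001
+        public_ip_name: zonedip001
+
+    - name: Create VM in availability zone 1 with a managed OS disk
+      azure_rm_virtualmachine:
+        resource_group: Testing
+        name: testvmzone1
+        vm_size: Standard_D1
+        managed_disk_type: Standard_LRS
+        zones:
+          - '1'
+        network_interfaces: zonednic001
+        admin_username: adminuser
+        ssh_password_enabled: false
+        ssh_public_keys: "{{ ssh_keys }}"
+        image:
+          offer: CentOS
+          publisher: OpenLogic
+          sku: '7.1'
+          version: latest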
+
+
+Dynamic Inventory Script
+------------------------
+
+If you are not familiar with Ansible's dynamic inventory scripts, check out :ref:`Intro to Dynamic Inventory <intro_dynamic_inventory>`.
+
+The Azure Resource Manager inventory script is called `azure_rm.py <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/azure_rm.py>`_. It authenticates with the Azure API exactly the same as the
+Azure modules, which means you will either define the same environment variables described above in `Using Environment Variables`_,
+create a ``$HOME/.azure/credentials`` file (also described above in `Storing in a File`_), or pass command line parameters. To see available command
+line options execute the following:
+
+.. code-block:: bash
+
+ $ wget https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/azure_rm.py
+ $ ./azure_rm.py --help
+
+As with all dynamic inventory scripts, the script can be executed directly, passed as a parameter to the ansible command,
+or passed directly to ansible-playbook using the -i option. No matter how it is executed the script produces JSON representing
+all of the hosts found in your Azure subscription. You can narrow this down to just hosts found in a specific set of
+Azure resource groups, or even down to a specific host.
+
+For a given host, the inventory script provides the following host variables:
+
+.. code-block:: JSON
+
+ {
+ "ansible_host": "XXX.XXX.XXX.XXX",
+ "computer_name": "computer_name2",
+ "fqdn": null,
+ "id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Compute/virtualMachines/object-name",
+ "image": {
+ "offer": "CentOS",
+ "publisher": "OpenLogic",
+ "sku": "7.1",
+ "version": "latest"
+ },
+ "location": "westus",
+ "mac_address": "00-00-5E-00-53-FE",
+ "name": "object-name",
+ "network_interface": "interface-name",
+ "network_interface_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkInterfaces/object-name1",
+ "network_security_group": null,
+ "network_security_group_id": null,
+ "os_disk": {
+ "name": "object-name",
+ "operating_system_type": "Linux"
+ },
+ "plan": null,
+ "powerstate": "running",
+ "private_ip": "172.26.3.6",
+ "private_ip_alloc_method": "Static",
+ "provisioning_state": "Succeeded",
+ "public_ip": "XXX.XXX.XXX.XXX",
+ "public_ip_alloc_method": "Static",
+ "public_ip_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/publicIPAddresses/object-name",
+ "public_ip_name": "object-name",
+ "resource_group": "galaxy-production",
+ "security_group": "object-name",
+ "security_group_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkSecurityGroups/object-name",
+ "tags": {
+ "db": "mysql"
+ },
+ "type": "Microsoft.Compute/virtualMachines",
+ "virtual_machine_size": "Standard_DS4"
+ }
+
+Host Groups
+...........
+
+By default hosts are grouped by:
+
+* azure (all hosts)
+* location name
+* resource group name
+* security group name
+* tag key
+* tag key_value
+* os_disk operating_system_type (Windows/Linux)
+
+You can control host groupings and host selection by either defining environment variables or creating an
+azure_rm.ini file in your current working directory.
+
+NOTE: An .ini file will take precedence over environment variables.
+
+NOTE: The name of the .ini file is the basename of the inventory script (in other words, 'azure_rm') with a '.ini'
+extension. This allows you to copy, rename and customize the inventory script and have matching .ini files all in
+the same directory.
+
+Control grouping using the following variables defined in the environment:
+
+* AZURE_GROUP_BY_RESOURCE_GROUP=yes
+* AZURE_GROUP_BY_LOCATION=yes
+* AZURE_GROUP_BY_SECURITY_GROUP=yes
+* AZURE_GROUP_BY_TAG=yes
+* AZURE_GROUP_BY_OS_FAMILY=yes
+
+Select hosts within specific resource groups by assigning a comma separated list to:
+
+* AZURE_RESOURCE_GROUPS=resource_group_a,resource_group_b
+
+Select hosts for specific tag key by assigning a comma separated list of tag keys to:
+
+* AZURE_TAGS=key1,key2,key3
+
+Select hosts for specific locations by assigning a comma separated list of locations to:
+
+* AZURE_LOCATIONS=eastus,eastus2,westus
+
+Or, select hosts for specific tag key:value pairs by assigning a comma separated list key:value pairs to:
+
+* AZURE_TAGS=key1:value1,key2:value2
+
+If you don't need the powerstate, you can improve performance by turning off powerstate fetching:
+
+* AZURE_INCLUDE_POWERSTATE=no
+
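+For example, to skip security group grouping and powerstate fetching for a single run:
+
+.. code-block:: bash
+
+    $ export AZURE_GROUP_BY_SECURITY_GROUP=no
+    $ export AZURE_INCLUDE_POWERSTATE=no
+    $ ansible -i azure_rm.py azure -m ping
+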
+A sample azure_rm.ini file is included along with the inventory script
+`here <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/azure_rm.ini>`_.
+The .ini file contains the following:
+
+.. code-block:: ini
+
+ [azure]
+ # Control which resource groups are included. By default all resources groups are included.
+ # Set resource_groups to a comma separated list of resource groups names.
+ #resource_groups=
+
+ # Control which tags are included. Set tags to a comma separated list of keys or key:value pairs
+ #tags=
+
+ # Control which locations are included. Set locations to a comma separated list of locations.
+ #locations=
+
+ # Include powerstate. If you don't need powerstate information, turning it off improves runtime performance.
+ # Valid values: yes, no, true, false, True, False, 0, 1.
+ include_powerstate=yes
+
+ # Control grouping with the following boolean flags. Valid values: yes, no, true, false, True, False, 0, 1.
+ group_by_resource_group=yes
+ group_by_location=yes
+ group_by_security_group=yes
+ group_by_tag=yes
+ group_by_os_family=yes
+
+Examples
+........
+
+Here are some examples using the inventory script:
+
+.. code-block:: bash
+
+ # Download inventory script
+ $ wget https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/azure_rm.py
+
+ # Execute /bin/uname on all instances in the Testing resource group
+ $ ansible -i azure_rm.py Testing -m shell -a "/bin/uname -a"
+
+ # Execute win_ping on all Windows instances
+ $ ansible -i azure_rm.py windows -m win_ping
+
+ # Execute ping on all Linux instances
+ $ ansible -i azure_rm.py linux -m ping
+
+ # Use the inventory script to print instance specific information
+ $ ./azure_rm.py --host my_instance_host_name --resource-groups=Testing --pretty
+
+ # Use the inventory script with ansible-playbook
+ $ ansible-playbook -i ./azure_rm.py test_playbook.yml
+
+Here is a simple playbook to exercise the Azure inventory script:
+
+.. code-block:: yaml
+
+ - name: Test the inventory script
+ hosts: azure
+ connection: local
+ gather_facts: no
+ tasks:
+ - debug:
+ msg: "{{ inventory_hostname }} has powerstate {{ powerstate }}"
+
+You can execute the playbook with something like:
+
+.. code-block:: bash
+
+ $ ansible-playbook -i ./azure_rm.py test_azure_inventory.yml
+
+
+Disabling certificate validation on Azure endpoints
+...................................................
+
+When an HTTPS proxy is present, or when using Azure Stack, it may be necessary to disable certificate validation for
+Azure endpoints in the Azure modules. This is not a recommended security practice, but may be necessary when the system
+CA store cannot be altered to include the necessary CA certificate. Certificate validation can be controlled by setting
+the "cert_validation_mode" value in a credential profile, via the "AZURE_CERT_VALIDATION_MODE" environment variable, or
+by passing the "cert_validation_mode" argument to any Azure module. The default value is "validate"; setting the value
+to "ignore" will prevent all certificate validation. The module argument takes precedence over a credential profile value,
+which takes precedence over the environment value.
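+
+For example, to bypass certificate validation for a single task (the module choice is illustrative):
+
+.. code-block:: yaml
+
+    - name: Gather facts about a resource group on Azure Stack
+      azure_rm_resourcegroup_info:
+        name: Testing
+        cert_validation_mode: ignore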
diff --git a/docs/docsite/rst/scenario_guides/guide_cloudstack.rst b/docs/docsite/rst/scenario_guides/guide_cloudstack.rst
new file mode 100644
index 00000000..fcfb8120
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_cloudstack.rst
@@ -0,0 +1,377 @@
+CloudStack Cloud Guide
+======================
+
+.. _cloudstack_introduction:
+
+Introduction
+````````````
+The purpose of this section is to explain how to put Ansible modules together to use Ansible in a CloudStack context. You will find more usage examples in the details section of each module.
+
+Ansible contains a number of extra modules for interacting with CloudStack based clouds. All modules support check mode, are designed to be idempotent, have been created and tested, and are maintained by the community.
+
+.. note:: Some of the modules will require domain admin or root admin privileges.
+
+Prerequisites
+`````````````
+Prerequisites for using the CloudStack modules are minimal. In addition to Ansible itself, all of the modules require the Python library `cs <https://pypi.org/project/cs/>`_.
+
+You'll need this Python module installed on the execution host, usually your workstation.
+
+.. code-block:: bash
+
+ $ pip install cs
+
+Alternatively, starting with Debian 9 and Ubuntu 16.04:
+
+.. code-block:: bash
+
+ $ sudo apt install python-cs
+
+.. note:: cs also includes a command line interface for ad-hoc interaction with the CloudStack API, for example ``$ cs listVirtualMachines state=Running``.
+
+Limitations and Known Issues
+````````````````````````````
+VPC support has been improved since Ansible 2.3 but is still not yet fully implemented. The community is working on the VPC integration.
+
+Credentials File
+````````````````
+You can pass credentials and the endpoint of your cloud as module arguments, however in most cases it is far less work to store your credentials in the cloudstack.ini file.
+
+The python library cs looks for the credentials file in the following order (last one wins):
+
+* A ``.cloudstack.ini`` (note the dot) file in the home directory.
+* A ``CLOUDSTACK_CONFIG`` environment variable pointing to an .ini file.
+* A ``cloudstack.ini`` (without the dot) file in the current working directory, in the same directory as your playbooks.
+
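+For example, to point the library at a credentials file in a non-default location (the path is illustrative):
+
+.. code-block:: bash
+
+    $ export CLOUDSTACK_CONFIG=$HOME/.config/cloudstack.ini
+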
+The structure of the ini file must look like this:
+
+.. code-block:: bash
+
+ $ cat $HOME/.cloudstack.ini
+ [cloudstack]
+ endpoint = https://cloud.example.com/client/api
+ key = api key
+ secret = api secret
+ timeout = 30
+
+.. Note:: The section ``[cloudstack]`` is the default section. ``CLOUDSTACK_REGION`` environment variable can be used to define the default section.
+
+.. versionadded:: 2.4
+
+The ``CLOUDSTACK_*`` environment variables documented for the library ``cs``, such as ``CLOUDSTACK_TIMEOUT`` and ``CLOUDSTACK_METHOD``, are supported by Ansible as well. It is even possible to have an incomplete config in your cloudstack.ini:
+
+.. code-block:: bash
+
+ $ cat $HOME/.cloudstack.ini
+ [cloudstack]
+ endpoint = https://cloud.example.com/client/api
+ timeout = 30
+
+and fulfill the missing data by either setting ENV variables or tasks params:
+
+.. code-block:: yaml
+
+ ---
+ - name: provision our VMs
+ hosts: cloud-vm
+ tasks:
+ - name: ensure VMs are created and running
+ delegate_to: localhost
+ cs_instance:
+ api_key: your api key
+ api_secret: your api secret
+ ...
+
+Regions
+```````
+If you use more than one CloudStack region, you can define as many sections as you want and name them as you like, for example:
+
+.. code-block:: bash
+
+ $ cat $HOME/.cloudstack.ini
+ [exoscale]
+ endpoint = https://api.exoscale.ch/compute
+ key = api key
+ secret = api secret
+
+ [example_cloud_one]
+ endpoint = https://cloud-one.example.com/client/api
+ key = api key
+ secret = api secret
+
+ [example_cloud_two]
+ endpoint = https://cloud-two.example.com/client/api
+ key = api key
+ secret = api secret
+
+.. Hint:: Sections can also be used to log in to the same region using different accounts.
+
+Pass the ``api_region`` argument to the CloudStack modules to select the region you want.
+
+.. code-block:: yaml
+
+ - name: ensure my ssh public key exists on Exoscale
+ cs_sshkeypair:
+ name: my-ssh-key
+ public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
+ api_region: exoscale
+ delegate_to: localhost
+
+Or by looping over a regions list if you want to do the task in every region:
+
+.. code-block:: yaml
+
+ - name: ensure my ssh public key exists in all CloudStack regions
+ local_action: cs_sshkeypair
+ name: my-ssh-key
+ public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
+ api_region: "{{ item }}"
+ loop:
+ - exoscale
+ - example_cloud_one
+ - example_cloud_two
+
+Environment Variables
+`````````````````````
+.. versionadded:: 2.3
+
+Since Ansible 2.3 it is possible to use environment variables for domain (``CLOUDSTACK_DOMAIN``), account (``CLOUDSTACK_ACCOUNT``), project (``CLOUDSTACK_PROJECT``), VPC (``CLOUDSTACK_VPC``) and zone (``CLOUDSTACK_ZONE``). This simplifies the tasks by not repeating the arguments for every task.
+
+Below is an example of how it can be used in combination with Ansible's block feature:
+
+.. code-block:: yaml
+
+ - hosts: cloud-vm
+ tasks:
+ - block:
+ - name: ensure my ssh public key
+ cs_sshkeypair:
+ name: my-ssh-key
+ public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
+
+ - name: ensure the VM is created and running
+ cs_instance:
+ display_name: "{{ inventory_hostname_short }}"
+ template: Linux Debian 7 64-bit 20GB Disk
+ service_offering: "{{ cs_offering }}"
+ ssh_key: my-ssh-key
+ state: running
+
+ delegate_to: localhost
+ environment:
+ CLOUDSTACK_DOMAIN: root/customers
+ CLOUDSTACK_PROJECT: web-app
+ CLOUDSTACK_ZONE: sf-1
+
+.. Note:: You are still able overwrite the environment variables using the module arguments, for example ``zone: sf-2``
+
+.. Note:: Unlike ``CLOUDSTACK_REGION`` these additional environment variables are ignored in the CLI ``cs``.
+
+Use Cases
+`````````
+The following should give you some ideas of how to use the modules to provision VMs in the cloud. As always, there isn't only one way to do it, but keeping it simple at the beginning is a good start.
+
+Use Case: Provisioning in an Advanced Networking CloudStack setup
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+Our CloudStack cloud has an advanced networking setup: we would like to provision web servers, which get a static NAT and open firewall ports 80 and 443. Further, we provision database servers, to which we do not give any access. For accessing the VMs by SSH we use an SSH jump host.
+
+This is what our inventory looks like:
+
+.. code-block:: none
+
+ [cloud-vm:children]
+ webserver
+ db-server
+ jumphost
+
+ [webserver]
+ web-01.example.com public_ip=198.51.100.20
+ web-02.example.com public_ip=198.51.100.21
+
+ [db-server]
+ db-01.example.com
+ db-02.example.com
+
+ [jumphost]
+ jump.example.com public_ip=198.51.100.22
+
+As you can see, the public IPs for our web servers and jumphost have been assigned as the variable ``public_ip`` directly in the inventory.
+
+To configure the jumphost, web servers, and database servers, we use ``group_vars``. The ``group_vars`` directory contains 4 files for configuration of the groups: cloud-vm, jumphost, webserver and db-server. The cloud-vm file specifies the defaults of our cloud infrastructure.
+
+.. code-block:: yaml
+
+ # file: group_vars/cloud-vm
+ ---
+ cs_offering: Small
+ cs_firewall: []
+
+Our database servers should get more CPU and RAM, so we define to use a ``Large`` offering for them.
+
+.. code-block:: yaml
+
+ # file: group_vars/db-server
+ ---
+ cs_offering: Large
+
+The web servers should get a ``Small`` offering as we would scale them horizontally, which is also our default offering. We also ensure the known web ports are opened for the world.
+
+.. code-block:: yaml
+
+ # file: group_vars/webserver
+ ---
+ cs_firewall:
+ - { port: 80 }
+ - { port: 443 }
+
+Further, we provision a jump host with only port 22 open, for accessing the VMs from our office IPv4 network.
+
+.. code-block:: yaml
+
+ # file: group_vars/jumphost
+ ---
+ cs_firewall:
+ - { port: 22, cidr: "17.17.17.0/24" }
+
+Now to the fun part. We create a playbook to build our infrastructure and call it ``infra.yaml``:
+
+.. code-block:: yaml
+
+ # file: infra.yaml
+ ---
+ - name: provision our VMs
+ hosts: cloud-vm
+ tasks:
+ - name: run all enclosed tasks from localhost
+ delegate_to: localhost
+ block:
+ - name: ensure VMs are created and running
+ cs_instance:
+ name: "{{ inventory_hostname_short }}"
+ template: Linux Debian 7 64-bit 20GB Disk
+ service_offering: "{{ cs_offering }}"
+ state: running
+
+ - name: ensure firewall ports opened
+ cs_firewall:
+ ip_address: "{{ public_ip }}"
+ port: "{{ item.port }}"
+ cidr: "{{ item.cidr | default('0.0.0.0/0') }}"
+ loop: "{{ cs_firewall }}"
+ when: public_ip is defined
+
+ - name: ensure static NATs
+ cs_staticnat: vm="{{ inventory_hostname_short }}" ip_address="{{ public_ip }}"
+ when: public_ip is defined
+
+In the above play we define three tasks and use the group ``cloud-vm`` as the target to handle all VMs in the cloud, but instead of SSHing to these VMs, we use ``delegate_to: localhost`` to execute the API calls locally from our workstation.
+
+In the first task, we ensure we have a running VM created with the Debian template. If the VM is already created but stopped, it would just start it. If you want to change the offering on an existing VM, you must add ``force: yes`` to the task, which would stop the VM, change the offering and start the VM again.
+
+In the second task we ensure the ports are opened if we give a public IP to the VM.
+
+In the third task we add static NAT to the VMs having a public IP defined.
+
+
+.. Note:: The public IP addresses must have been acquired in advance; see also ``cs_ip_address``.
+
+.. Note:: For some modules, for example ``cs_sshkeypair``, you usually want this to be executed only once, not for every VM. Therefore you would make a separate play for it targeting localhost. You will find an example in the use cases below.
+
+Use Case: Provisioning on a Basic Networking CloudStack setup
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+A basic networking CloudStack setup is slightly different: Every VM gets a public IP directly assigned and security groups are used for access restriction policy.
+
+This is what our inventory looks like:
+
+.. code-block:: none
+
+ [cloud-vm:children]
+ webserver
+
+ [webserver]
+ web-01.example.com
+ web-02.example.com
+
+The default for your VMs looks like this:
+
+.. code-block:: yaml
+
+ # file: group_vars/cloud-vm
+ ---
+ cs_offering: Small
+ cs_securitygroups: [ 'default' ]
+
+Our webserver will also be in security group ``web``:
+
+.. code-block:: yaml
+
+ # file: group_vars/webserver
+ ---
+ cs_securitygroups: [ 'default', 'web' ]
+
+The playbook looks like the following:
+
+.. code-block:: yaml
+
+ # file: infra.yaml
+ ---
+ - name: cloud base setup
+ hosts: localhost
+ tasks:
+ - name: upload ssh public key
+ cs_sshkeypair:
+ name: defaultkey
+ public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
+
+ - name: ensure security groups exist
+ cs_securitygroup:
+ name: "{{ item }}"
+ loop:
+ - default
+ - web
+
+ - name: add inbound SSH to security group default
+ cs_securitygroup_rule:
+ security_group: default
+ start_port: "{{ item }}"
+ end_port: "{{ item }}"
+ loop:
+ - 22
+
+ - name: add inbound TCP rules to security group web
+ cs_securitygroup_rule:
+ security_group: web
+ start_port: "{{ item }}"
+ end_port: "{{ item }}"
+ loop:
+ - 80
+ - 443
+
+ - name: install VMs in the cloud
+ hosts: cloud-vm
+ tasks:
+ - delegate_to: localhost
+ block:
+ - name: create and run VMs on CloudStack
+ cs_instance:
+ name: "{{ inventory_hostname_short }}"
+ template: Linux Debian 7 64-bit 20GB Disk
+ service_offering: "{{ cs_offering }}"
+ security_groups: "{{ cs_securitygroups }}"
+ ssh_key: defaultkey
+ state: running
+ register: vm
+
+ - name: show VM IP
+ debug: msg="VM {{ inventory_hostname }} {{ vm.default_ip }}"
+
+ - name: assign IP to the inventory
+ set_fact: ansible_ssh_host={{ vm.default_ip }}
+
+ - name: waiting for SSH to come up
+ wait_for: port=22 host={{ vm.default_ip }} delay=5
+
+In the first play we set up the security groups; in the second play the VMs are created and assigned to these groups. Further, you see that we assign the public IP returned from the modules to the host inventory. This is needed because we do not know the IPs in advance. As a next step you would configure the DNS servers with these IPs for accessing the VMs with their DNS name.
+
+In the last task we wait for SSH to be accessible, so any later play would be able to access the VM by SSH without failure.
diff --git a/docs/docsite/rst/scenario_guides/guide_docker.rst b/docs/docsite/rst/scenario_guides/guide_docker.rst
new file mode 100644
index 00000000..c3f019bd
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_docker.rst
@@ -0,0 +1,227 @@
+Docker Guide
+============
+
+The `community.docker collection <https://galaxy.ansible.com/community/docker>`_ offers several modules and plugins for orchestrating Docker containers and Docker Swarm.
+
+.. contents::
+ :local:
+ :depth: 1
+
+
+Requirements
+------------
+
+Most of the modules and plugins in community.docker require the `Docker SDK for Python <https://docker-py.readthedocs.io/en/stable/>`_. The SDK needs to be installed on the machines where the modules and plugins are executed, and for the Python version(s) with which the modules and plugins are executed. You can use the :ref:`community.general.python_requirements_info module <ansible_collections.community.general.python_requirements_info_module>` to make sure that the Docker SDK for Python is installed on the correct machine and for the Python version used by Ansible.
+
+Note that plugins (inventory plugins and connection plugins) are always executed in the context of Ansible itself. If you use a plugin that requires the Docker SDK for Python, you need to install it on the machine running ``ansible`` or ``ansible-playbook`` and for the same Python interpreter used by Ansible. To see which Python is used, run ``ansible --version``.
+
+You can install the Docker SDK for Python for Python 2.7 or Python 3 as follows:
+
+.. code-block:: bash
+
+ $ pip install docker
+
+For Python 2.6, you need a version before 2.0. For these versions, the SDK was called ``docker-py``, so you need to install it as follows:
+
+.. code-block:: bash
+
+ $ pip install 'docker-py>=1.10.0'
+
+Please install only one of ``docker`` or ``docker-py``. Installing both will result in a broken installation. If this happens, Ansible will detect it and inform you about it; you must then uninstall both and reinstall the correct version.
+
+If in doubt, always install ``docker`` and never ``docker-py``.
+
+
+Connecting to the Docker API
+----------------------------
+
+You can connect to a local or remote API using parameters passed to each task or by setting environment variables. The order of precedence is command line parameters and then environment variables. If neither a command line option nor an environment variable is found, Ansible uses the default value provided under `Parameters`_.
+
+
+Parameters
+..........
+
+Most plugins and modules can be configured by the following parameters:
+
+ docker_host
+ The URL or Unix socket path used to connect to the Docker API. Defaults to ``unix://var/run/docker.sock``. To connect to a remote host, provide the TCP connection string (for example: ``tcp://192.0.2.23:2376``). If TLS is used to encrypt the connection to the API, then the module will automatically replace 'tcp' in the connection URL with 'https'.
+
+ api_version
+ The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported by the Docker SDK for Python installed.
+
+ timeout
+ The maximum amount of time in seconds to wait on a response from the API. Defaults to 60 seconds.
+
+ tls
+ Secure the connection to the API by using TLS without verifying the authenticity of the Docker host server. Defaults to ``false``.
+
+ validate_certs
+ Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server. Defaults to ``false``.
+
+ cacert_path
+ Use a CA certificate when performing server verification by providing the path to a CA certificate file.
+
+ cert_path
+ Path to the client's TLS certificate file.
+
+ key_path
+ Path to the client's TLS key file.
+
+ tls_hostname
+ When verifying the authenticity of the Docker Host server, provide the expected name of the server. Defaults to ``localhost``.
+
+ ssl_version
+ Provide a valid SSL version number. The default value is determined by the Docker SDK for Python.
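+
+As an illustration, a task connecting to a remote daemon over TLS might pass these parameters explicitly (the host address and certificate paths are placeholders):
+
+.. code-block:: yaml
+
+    - name: Retrieve information on the remote Docker host
+      community.docker.docker_host_info:
+        docker_host: tcp://192.0.2.23:2376
+        validate_certs: yes
+        cacert_path: /etc/docker/certs/ca.pem
+        cert_path: /etc/docker/certs/cert.pem
+        key_path: /etc/docker/certs/key.pem
+      register: docker_info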
+
+
+Environment variables
+.....................
+
+You can also control how the plugins and modules connect to the Docker API by setting the following environment variables.
+
+For plugins, they have to be set for the environment Ansible itself runs in. For modules, they have to be set for the environment the modules are executed in. For modules running on remote machines, the environment variables have to be set on that machine, for the user the modules are executed as.
+
+ DOCKER_HOST
+ The URL or Unix socket path used to connect to the Docker API.
+
+ DOCKER_API_VERSION
+ The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported by the Docker SDK for Python.
+
+ DOCKER_TIMEOUT
+ The maximum amount of time in seconds to wait on a response from the API.
+
+ DOCKER_CERT_PATH
+ Path to the directory containing the client certificate, client key and CA certificate.
+
+ DOCKER_SSL_VERSION
+ Provide a valid SSL version number.
+
+ DOCKER_TLS
+ Secure the connection to the API by using TLS without verifying the authenticity of the Docker Host.
+
+ DOCKER_TLS_VERIFY
+ Secure the connection to the API by using TLS and verify the authenticity of the Docker Host.
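+
+For example, a play can provide these variables to its tasks with the ``environment`` keyword (the host address is a placeholder):
+
+.. code-block:: yaml
+
+    - hosts: localhost
+      environment:
+        DOCKER_HOST: tcp://192.0.2.23:2376
+        DOCKER_TLS_VERIFY: '1'
+      tasks:
+        - name: Retrieve information on the Docker host
+          community.docker.docker_host_info:
+          register: docker_info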
+
+
+Plain Docker daemon: images, networks, volumes, and containers
+--------------------------------------------------------------
+
+For working with a plain Docker daemon, that is without Swarm, there are connection plugins, an inventory plugin, and several modules available:
+
+ docker connection plugin
+ The :ref:`community.docker.docker connection plugin <ansible_collections.community.docker.docker_connection>` uses the Docker CLI utility to connect to Docker containers and execute modules in them. It essentially wraps ``docker exec`` and ``docker cp``. This connection plugin is supported by the :ref:`ansible.posix.synchronize module <ansible_collections.ansible.posix.synchronize_module>`.
+
+ docker_api connection plugin
+ The :ref:`community.docker.docker_api connection plugin <ansible_collections.community.docker.docker_api_connection>` talks directly to the Docker daemon to connect to Docker containers and execute modules in them.
+
+ docker_containers inventory plugin
+ The :ref:`community.docker.docker_containers inventory plugin <ansible_collections.community.docker.docker_containers_inventory>` allows you to dynamically add Docker containers from a Docker Daemon to your Ansible inventory. See :ref:`dynamic_inventory` for details on dynamic inventories.
+
+ The `docker inventory script <https://github.com/ansible-collections/community.general/blob/main/scripts/inventory/docker.py>`_ is deprecated. Please use the inventory plugin instead. The inventory plugin has several compatibility options. If you need to collect Docker containers from multiple Docker daemons, you need to add every Docker daemon as an individual inventory source.
+
+ docker_host_info module
+ The :ref:`community.docker.docker_host_info module <ansible_collections.community.docker.docker_host_info_module>` allows you to retrieve information on a Docker daemon, such as all containers, images, volumes, networks and so on.
+
+ docker_login module
+ The :ref:`community.docker.docker_login module <ansible_collections.community.docker.docker_login_module>` allows you to log in and out of a remote registry, such as Docker Hub or a private registry. It provides similar functionality to the ``docker login`` and ``docker logout`` CLI commands.
+
+ docker_prune module
+ The :ref:`community.docker.docker_prune module <ansible_collections.community.docker.docker_prune_module>` allows you to prune no longer needed containers, images, volumes and so on. It provides similar functionality to the ``docker prune`` CLI command.
+
+ docker_image module
+ The :ref:`community.docker.docker_image module <ansible_collections.community.docker.docker_image_module>` provides full control over images, including: build, pull, push, tag and remove.
+
+ docker_image_info module
+ The :ref:`community.docker.docker_image_info module <ansible_collections.community.docker.docker_image_info_module>` allows you to list and inspect images.
+
+ docker_network module
+ The :ref:`community.docker.docker_network module <ansible_collections.community.docker.docker_network_module>` provides full control over Docker networks.
+
+ docker_network_info module
+ The :ref:`community.docker.docker_network_info module <ansible_collections.community.docker.docker_network_info_module>` allows you to inspect Docker networks.
+
+ docker_volume_info module
+ The :ref:`community.docker.docker_volume_info module <ansible_collections.community.docker.docker_volume_info_module>` allows you to inspect Docker volumes.
+
+ docker_volume module
+ The :ref:`community.docker.docker_volume module <ansible_collections.community.docker.docker_volume_module>` provides full control over Docker volumes.
+
+ docker_container module
+ The :ref:`community.docker.docker_container module <ansible_collections.community.docker.docker_container_module>` manages the container lifecycle by providing the ability to create, update, stop, start and destroy a Docker container.
+
+ docker_container_info module
+ The :ref:`community.docker.docker_container_info module <ansible_collections.community.docker.docker_container_info_module>` allows you to inspect a Docker container.
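+
+For example, a minimal sketch of starting a container with the ``docker_container`` module (the image and names are illustrative):
+
+.. code-block:: yaml
+
+    - name: Start an nginx container
+      community.docker.docker_container:
+        name: web
+        image: nginx:latest
+        state: started
+        published_ports:
+          - "8080:80"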
+
+
+Docker Compose
+--------------
+
+The :ref:`community.docker.docker_compose module <ansible_collections.community.docker.docker_compose_module>`
+allows you to use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm.
+Supports compose versions 1 and 2.
+
+In addition to the Docker SDK for Python, you need to install `docker-compose <https://github.com/docker/compose>`_ on the remote machines to use the module.
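+
+A minimal sketch of a task using an existing compose project directory (the path is illustrative):
+
+.. code-block:: yaml
+
+    - name: Bring up the services defined in a compose file
+      community.docker.docker_compose:
+        project_src: /opt/myapp
+        state: present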
+
+
+Docker Machine
+--------------
+
+The :ref:`community.docker.docker_machine inventory plugin <ansible_collections.community.docker.docker_machine_inventory>` allows you to dynamically add Docker Machine hosts to your Ansible inventory.
+
+
+Docker stack
+------------
+
+The :ref:`community.docker.docker_stack module <ansible_collections.community.docker.docker_stack_module>` allows you to control Docker stacks. Information on stacks can be retrieved by the :ref:`community.docker.docker_stack_info module <ansible_collections.community.docker.docker_stack_info_module>`, and information on stack tasks can be retrieved by the :ref:`community.docker.docker_stack_task_info module <ansible_collections.community.docker.docker_stack_task_info_module>`.
+
+
+Docker Swarm
+------------
+
+The community.docker collection provides multiple plugins and modules for managing Docker Swarms.
+
+Swarm management
+................
+
+One inventory plugin and several modules are provided to manage Docker Swarms:
+
+ docker_swarm inventory plugin
+ The :ref:`community.docker.docker_swarm inventory plugin <ansible_collections.community.docker.docker_swarm_inventory>` allows you to dynamically add all Docker Swarm nodes to your Ansible inventory.
+
+ docker_swarm module
+ The :ref:`community.docker.docker_swarm module <ansible_collections.community.docker.docker_swarm_module>` allows you to globally configure Docker Swarm manager nodes to join and leave swarms, and to change the Docker Swarm configuration.
+
+ docker_swarm_info module
+ The :ref:`community.docker.docker_swarm_info module <ansible_collections.community.docker.docker_swarm_info_module>` allows you to retrieve information on Docker Swarm.
+
+ docker_node module
+ The :ref:`community.docker.docker_node module <ansible_collections.community.docker.docker_node_module>` allows you to manage Docker Swarm nodes.
+
+ docker_node_info module
+ The :ref:`community.docker.docker_node_info module <ansible_collections.community.docker.docker_node_info_module>` allows you to retrieve information on Docker Swarm nodes.
+
+Configuration management
+........................
+
+The community.docker collection offers modules to manage Docker Swarm configurations and secrets:
+
+ docker_config module
+ The :ref:`community.docker.docker_config module <ansible_collections.community.docker.docker_config_module>` allows you to create and modify Docker Swarm configs.
+
+ docker_secret module
+ The :ref:`community.docker.docker_secret module <ansible_collections.community.docker.docker_secret_module>` allows you to create and modify Docker Swarm secrets.
+
+
+Swarm services
+..............
+
+Docker Swarm services can be created and updated with the :ref:`community.docker.docker_swarm_service module <ansible_collections.community.docker.docker_swarm_service_module>`, and information on them can be queried by the :ref:`community.docker.docker_swarm_service_info module <ansible_collections.community.docker.docker_swarm_service_info_module>`.
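+
+For example, a minimal sketch of creating a replicated service (the image and name are illustrative):
+
+.. code-block:: yaml
+
+    - name: Create a replicated nginx service
+      community.docker.docker_swarm_service:
+        name: web
+        image: nginx:latest
+        replicas: 3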
+
+
+Helpful links
+-------------
+
+Still using Dockerfile to build images? Check out `ansible-bender <https://github.com/ansible-community/ansible-bender>`_, and start building images from your Ansible playbooks.
+
+Use `Ansible Operator <https://learn.openshift.com/ansibleop/ansible-operator-overview/>`_ to launch your docker-compose file on `OpenShift <https://www.okd.io/>`_. Go from an app on your laptop to a fully scalable app in the cloud with Kubernetes in just a few moments.
diff --git a/docs/docsite/rst/scenario_guides/guide_gce.rst b/docs/docsite/rst/scenario_guides/guide_gce.rst
new file mode 100644
index 00000000..6d9ca65a
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_gce.rst
@@ -0,0 +1,302 @@
+Google Cloud Platform Guide
+===========================
+
+.. _gce_intro:
+
+Introduction
+--------------------------
+
+Ansible + Google have been working together on a set of auto-generated
+Ansible modules designed to consistently and comprehensively cover the entirety
+of the Google Cloud Platform (GCP).
+
+Ansible contains modules for managing Google Cloud Platform resources,
+including creating instances, controlling network access, working with
+persistent disks, managing load balancers, and a lot more.
+
+These new modules can be found under a new consistent name scheme "gcp_*"
+(Note: gcp_target_proxy and gcp_url_map are legacy modules, despite the "gcp_*"
+name. Please use gcp_compute_target_proxy and gcp_compute_url_map instead).
+
+Additionally, the gcp_compute inventory plugin can discover all
+Google Compute Engine (GCE) instances
+and make them automatically available in your Ansible inventory.
+
+You may see a collection of other GCP modules that do not conform to this
+naming convention. These are the original modules primarily developed by the
+Ansible community. You will find some overlapping functionality such as with
+the "gce" module and the new "gcp_compute_instance" module. Either can be
+used, but you may experience issues trying to use them together.
+
+While the community GCP modules are not going away, Google is investing effort
+into the new "gcp_*" modules. Google is committed to ensuring the Ansible
+community has a great experience with GCP and therefore recommends adopting
+these new modules if possible.
+
+
+Requirements
+---------------
+The GCP modules require both the ``requests`` and the
+``google-auth`` libraries to be installed.
+
+.. code-block:: bash
+
+ $ pip install requests google-auth
+
+Alternatively, for RHEL / CentOS, the ``python-requests`` package is also
+available to satisfy the ``requests`` dependency.
+
+.. code-block:: bash
+
+ $ yum install python-requests
+
+Credentials
+-----------
+It's easy to create a GCP account with credentials for Ansible. You have multiple options to
+get your credentials - here are two of the most common options:
+
+* Service Accounts (Recommended): Use JSON service accounts with specific permissions.
+* Machine Accounts: Use the permissions associated with the GCP Instance you're using Ansible on.
+
+For the following examples, we'll be using service account credentials.
+
+To work with the GCP modules, you'll first need to get some credentials in the
+JSON format:
+
+1. `Create a Service Account <https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount>`_
+2. `Download JSON credentials <https://support.google.com/cloud/answer/6158849?hl=en&ref_topic=6262490#serviceaccounts>`_
+
+Once you have your credentials, there are two different ways to provide them to Ansible:
+
+* by specifying them directly as module parameters
+* by setting environment variables
+
+Providing Credentials as Module Parameters
+``````````````````````````````````````````
+
+For the GCE modules you can specify the credentials as arguments:
+
+* ``auth_kind``: type of authentication being used (choices: machineaccount, serviceaccount, application)
+* ``service_account_email``: email associated with the project
+* ``service_account_file``: path to the JSON credentials file
+* ``project``: id of the project
+* ``scopes``: The specific scopes that you want the actions to use.
+
+For example, to create a new IP address using the ``gcp_compute_address`` module,
+you can use the following configuration:
+
+.. code-block:: yaml
+
+ - name: Create IP address
+ hosts: localhost
+ gather_facts: no
+
+ vars:
+ service_account_file: /home/my_account.json
+ project: my-project
+ auth_kind: serviceaccount
+ scopes:
+ - https://www.googleapis.com/auth/compute
+
+ tasks:
+
+ - name: Allocate an IP Address
+ gcp_compute_address:
+ state: present
+ name: 'test-address1'
+ region: 'us-west1'
+ project: "{{ project }}"
+ auth_kind: "{{ auth_kind }}"
+ service_account_file: "{{ service_account_file }}"
+ scopes: "{{ scopes }}"
+
+Providing Credentials as Environment Variables
+``````````````````````````````````````````````
+
+Set the following environment variables before running Ansible in order to configure your credentials:
+
+.. code-block:: bash
+
+ GCP_AUTH_KIND
+ GCP_SERVICE_ACCOUNT_EMAIL
+ GCP_SERVICE_ACCOUNT_FILE
+ GCP_SCOPES
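+
+For example, you might export them as follows (the path is a placeholder):
+
+.. code-block:: bash
+
+    $ export GCP_AUTH_KIND=serviceaccount
+    $ export GCP_SERVICE_ACCOUNT_FILE=/home/my_account.json
+    $ export GCP_SCOPES=https://www.googleapis.com/auth/compute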
+
+GCE Dynamic Inventory
+---------------------
+
+The best way to interact with your hosts is to use the gcp_compute inventory plugin, which dynamically queries GCE and tells Ansible what nodes can be managed.
+
+To be able to use this GCE dynamic inventory plugin, you need to enable it first by specifying the following in the ``ansible.cfg`` file:
+
+.. code-block:: ini
+
+ [inventory]
+ enable_plugins = gcp_compute
+
+Then, create a file that ends in ``.gcp.yml`` in your root directory.
+
+The gcp_compute inventory plugin accepts the same authentication information as any GCP module.
+
+Here's an example of a valid inventory file:
+
+.. code-block:: yaml
+
+ plugin: gcp_compute
+ projects:
+ - graphite-playground
+ auth_kind: serviceaccount
+ service_account_file: /home/alexstephen/my_account.json
+
+
+Executing ``ansible-inventory --list -i <filename>.gcp.yml`` will create a list of GCP instances that are ready to be configured using Ansible.
+
+Create an instance
+``````````````````
+
+The full range of GCP modules provide the ability to create a wide variety of
+GCP resources with the full support of the entire GCP API.
+
+The following playbook creates a GCE Instance. This instance relies on other GCP
+resources like Disk. By creating other resources separately, we can give as
+much detail as necessary about how we want to configure the other resources, for example
+formatting of the Disk. By registering it to a variable, we can simply insert the
+variable into the instance task. The gcp_compute_instance module will figure out the
+rest.
+
+.. code-block:: yaml
+
+ - name: Create an instance
+ hosts: localhost
+ gather_facts: no
+ vars:
+ gcp_project: my-project
+ gcp_cred_kind: serviceaccount
+ gcp_cred_file: /home/my_account.json
+ zone: "us-central1-a"
+ region: "us-central1"
+
+ tasks:
+ - name: create a disk
+ gcp_compute_disk:
+ name: 'disk-instance'
+ size_gb: 50
+ source_image: 'projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts'
+ zone: "{{ zone }}"
+ project: "{{ gcp_project }}"
+ auth_kind: "{{ gcp_cred_kind }}"
+ service_account_file: "{{ gcp_cred_file }}"
+ scopes:
+ - https://www.googleapis.com/auth/compute
+ state: present
+ register: disk
+ - name: create an address
+ gcp_compute_address:
+ name: 'address-instance'
+ region: "{{ region }}"
+ project: "{{ gcp_project }}"
+ auth_kind: "{{ gcp_cred_kind }}"
+ service_account_file: "{{ gcp_cred_file }}"
+ scopes:
+ - https://www.googleapis.com/auth/compute
+ state: present
+ register: address
+ - name: create an instance
+ gcp_compute_instance:
+ state: present
+ name: test-vm
+ machine_type: n1-standard-1
+ disks:
+ - auto_delete: true
+ boot: true
+ source: "{{ disk }}"
+ network_interfaces:
+ - network: null # use default
+ access_configs:
+ - name: 'External NAT'
+ nat_ip: "{{ address }}"
+ type: 'ONE_TO_ONE_NAT'
+ zone: "{{ zone }}"
+ project: "{{ gcp_project }}"
+ auth_kind: "{{ gcp_cred_kind }}"
+ service_account_file: "{{ gcp_cred_file }}"
+ scopes:
+ - https://www.googleapis.com/auth/compute
+ register: instance
+
+ - name: Wait for SSH to come up
+ wait_for: host={{ address.address }} port=22 delay=10 timeout=60
+
+ - name: Add host to groupname
+ add_host: hostname={{ address.address }} groupname=new_instances
+
+
+ - name: Manage new instances
+ hosts: new_instances
+ connection: ssh
+ become: True
+ roles:
+ - base_configuration
+ - production_server
+
+Note that use of the "add_host" module above creates a temporary, in-memory group. This means that a play in the same playbook can then manage machines
+in the 'new_instances' group, if so desired. Any sort of arbitrary configuration is possible at this point.
+
+For more information about Google Cloud, please visit the `Google Cloud website <https://cloud.google.com>`_.
+
+Migration Guides
+----------------
+
+gce.py -> gcp_compute_instance.py
+`````````````````````````````````
+As of Ansible 2.8, we're encouraging everyone to move from the ``gce`` module to the
+``gcp_compute_instance`` module. The ``gcp_compute_instance`` module has better
+support for all of GCP's features, fewer dependencies, more flexibility, and
+better support for GCP's authentication systems.
+
+The ``gcp_compute_instance`` module supports all of the features of the ``gce``
+module (and more!). Below is a mapping of ``gce`` fields over to
+``gcp_compute_instance`` fields.
+
+============================ ========================================== ======================
+ gce.py gcp_compute_instance.py Notes
+============================ ========================================== ======================
+ state state/status State on gce has multiple values: "present", "absent", "stopped", "started", "terminated". State on gcp_compute_instance is used to describe if the instance exists (present) or does not (absent). Status is used to describe if the instance is "started", "stopped" or "terminated".
+ image disks[].initialize_params.source_image You'll need to create a single disk using the disks[] parameter and set it to be the boot disk (disks[].boot = true)
+ image_family disks[].initialize_params.source_image See above.
+ external_projects disks[].initialize_params.source_image The name of the source_image will include the name of the project.
+ instance_names Use a loop or multiple tasks. Using loops is a more Ansible-centric way of creating multiple instances and gives you the most flexibility.
+ service_account_email service_accounts[].email This is the service_account email address that you want the instance to be associated with. It is not the service_account email address that is used for the credentials necessary to create the instance.
+ service_account_permissions service_accounts[].scopes These are the permissions you want to grant to the instance.
+ pem_file Not supported. We recommend using JSON service account credentials instead of PEM files.
+ credentials_file service_account_file
+ project_id project
+ name name This field does not accept an array of names. Use a loop to create multiple instances.
+ num_instances Use a loop For maximum flexibility, we're encouraging users to use Ansible features to create multiple instances, rather than letting the module do it for you.
+ network network_interfaces[].network
+ subnetwork network_interfaces[].subnetwork
+ persistent_boot_disk disks[].type = 'PERSISTENT'
+ disks disks[]
+ ip_forward can_ip_forward
+ external_ip network_interfaces[].access_configs.nat_ip This field takes multiple types of values. You can create an IP address with ``gcp_compute_address`` and place the name/output of the address here. You can also place the string value of the IP address's GCP name or the actual IP address.
+ disks_auto_delete disks[].auto_delete
+ preemptible scheduling.preemptible
+ disk_size disks[].initialize_params.disk_size_gb
+============================ ========================================== ======================
+
+An example playbook is below:
+
+.. code-block:: yaml
+
+ gcp_compute_instance:
+ name: "{{ item }}"
+ machine_type: n1-standard-1
+ ... # any other settings
+ zone: us-central1-a
+ project: "my-project"
+ auth_kind: "serviceaccount"
+ service_account_file: "~/my_account.json"
+ state: present
+ loop:
+ - instance-1
+ - instance-2
diff --git a/docs/docsite/rst/scenario_guides/guide_infoblox.rst b/docs/docsite/rst/scenario_guides/guide_infoblox.rst
new file mode 100644
index 00000000..d4597d90
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_infoblox.rst
@@ -0,0 +1,292 @@
+.. _nios_guide:
+
+************************
+ Infoblox Guide
+************************
+
+.. contents:: Topics
+
+This guide describes how to use Ansible with the Infoblox Network Identity Operating System (NIOS). With Ansible integration, you can use Ansible playbooks to automate Infoblox Core Network Services for IP address management (IPAM), DNS, and inventory tracking.
+
+You can review simple example tasks in the documentation for any of the :ref:`NIOS modules <nios_net tools_modules>` or look at the `Use cases with modules`_ section for more elaborate examples. See the `Infoblox <https://www.infoblox.com/>`_ website for more information on the Infoblox product.
+
+.. note:: You can retrieve most of the example playbooks used in this guide from the `network-automation/infoblox_ansible <https://github.com/network-automation/infoblox_ansible>`_ GitHub repository.
+
+Prerequisites
+=============
+Before using Ansible ``nios`` modules with Infoblox, you must install the ``infoblox-client`` on your Ansible control node:
+
+.. code-block:: bash
+
+ $ sudo pip install infoblox-client
+
+.. note::
+ You need an NIOS account with the WAPI feature enabled to use Ansible with Infoblox.
+
+.. _nios_credentials:
+
+Credentials and authenticating
+==============================
+
+To use Infoblox ``nios`` modules in playbooks, you need to configure the credentials to access your Infoblox system. The examples in this guide use credentials stored in ``<playbookdir>/group_vars/nios.yml``. Replace these values with your Infoblox credentials:
+
+.. code-block:: yaml
+
+ ---
+ nios_provider:
+ host: 192.0.0.2
+ username: admin
+ password: ansible
+
+NIOS lookup plugins
+===================
+
+Ansible includes the following lookup plugins for NIOS:
+
+- :ref:`nios <nios_lookup>` - Uses the Infoblox WAPI API to fetch NIOS specified objects, for example network views, DNS views, and host records.
+- :ref:`nios_next_ip <nios_next_ip_lookup>` - Provides the next available IP address from a network. You'll see an example of this in `Creating a host record`_.
+- :ref:`nios_next_network <nios_next_network_lookup>` - Returns the next available network range for a network-container.
+
+You must run the NIOS lookup plugins locally by specifying ``connection: local``. See :ref:`lookup plugins <lookup_plugins>` for more detail.
+
+
+Retrieving all network views
+----------------------------
+
+To retrieve all network views and save them in a variable, use the :ref:`set_fact <set_fact_module>` module with the :ref:`nios <nios_lookup>` lookup plugin:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: fetch all networkview objects
+ set_fact:
+ networkviews: "{{ lookup('nios', 'networkview', provider=nios_provider) }}"
+
+ - name: check the networkviews
+ debug:
+ var: networkviews
+
+
+Retrieving a host record
+------------------------
+
+To retrieve a set of host records, use the ``set_fact`` module with the ``nios`` lookup plugin and include a filter for the specific hosts you want to retrieve:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: fetch host leaf01
+ set_fact:
+ host: "{{ lookup('nios', 'record:host', filter={'name': 'leaf01.ansible.com'}, provider=nios_provider) }}"
+
+ - name: check the leaf01 return variable
+ debug:
+ var: host
+
+ - name: debug specific variable (ipv4 address)
+ debug:
+ var: host.ipv4addrs[0].ipv4addr
+
+ - name: fetch host leaf02
+ set_fact:
+ host: "{{ lookup('nios', 'record:host', filter={'name': 'leaf02.ansible.com'}, provider=nios_provider) }}"
+
+ - name: check the leaf02 return variable
+ debug:
+ var: host
+
+
+If you run this ``get_host_record.yml`` playbook, you should see results similar to the following:
+
+.. code-block:: none
+
+ $ ansible-playbook get_host_record.yml
+
+ PLAY [localhost] ***************************************************************************************
+
+ TASK [fetch host leaf01] ******************************************************************************
+ ok: [localhost]
+
+ TASK [check the leaf01 return variable] *************************************************************
+ ok: [localhost] => {
+ < ...output shortened...>
+ "host": {
+ "ipv4addrs": [
+ {
+ "configure_for_dhcp": false,
+ "host": "leaf01.ansible.com",
+ }
+ ],
+ "name": "leaf01.ansible.com",
+ "view": "default"
+ }
+ }
+
+ TASK [debug specific variable (ipv4 address)] ******************************************************
+ ok: [localhost] => {
+ "host.ipv4addrs[0].ipv4addr": "192.168.1.11"
+ }
+
+ TASK [fetch host leaf02] ******************************************************************************
+ ok: [localhost]
+
+ TASK [check the leaf02 return variable] *************************************************************
+ ok: [localhost] => {
+ < ...output shortened...>
+ "host": {
+ "ipv4addrs": [
+ {
+ "configure_for_dhcp": false,
+ "host": "leaf02.example.com",
+ "ipv4addr": "192.168.1.12"
+ }
+ ],
+ }
+ }
+
+ PLAY RECAP ******************************************************************************************
+ localhost : ok=5 changed=0 unreachable=0 failed=0
+
+The output above shows the host record for ``leaf01.ansible.com`` and ``leaf02.ansible.com`` that were retrieved by the ``nios`` lookup plugin. This playbook saves the information in variables which you can use in other playbooks. This allows you to use Infoblox as a single source of truth to gather and use information that changes dynamically. See :ref:`playbooks_variables` for more information on using Ansible variables. See the :ref:`nios <nios_lookup>` examples for more data options that you can retrieve.
+
+You can access these playbooks at `Infoblox lookup playbooks <https://github.com/network-automation/infoblox_ansible/tree/master/lookup_playbooks>`_.
+
+Use cases with modules
+======================
+
+You can use the ``nios`` modules in tasks to simplify common Infoblox workflows. Be sure to set up your :ref:`NIOS credentials<nios_credentials>` before following these examples.
+
+Configuring an IPv4 network
+---------------------------
+
+To configure an IPv4 network, use the :ref:`nios_network <nios_network_module>` module:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: Create a network on the default network view
+ nios_network:
+ network: 192.168.100.0/24
+ comment: sets the IPv4 network
+ options:
+ - name: domain-name
+ value: ansible.com
+ state: present
+ provider: "{{nios_provider}}"
+
+Notice the last parameter, ``provider``, uses the variable ``nios_provider`` defined in the ``group_vars/`` directory.
+
+Creating a host record
+----------------------
+
+To create a host record named ``leaf03.ansible.com`` on the newly-created IPv4 network:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: configure an IPv4 host record
+ nios_host_record:
+ name: leaf03.ansible.com
+ ipv4addrs:
+ - ipv4addr:
+ "{{ lookup('nios_next_ip', '192.168.100.0/24', provider=nios_provider)[0] }}"
+ state: present
+ provider: "{{nios_provider}}"
+
+Notice the IPv4 address in this example uses the :ref:`nios_next_ip <nios_next_ip_lookup>` lookup plugin to find the next available IPv4 address on the network.
+
+Creating a forward DNS zone
+---------------------------
+
+To configure a forward DNS zone, use the ``nios_zone`` module:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: Create a forward DNS zone called ansible-test.com
+ nios_zone:
+ name: ansible-test.com
+ comment: local DNS zone
+ state: present
+ provider: "{{ nios_provider }}"
+
+Creating a reverse DNS zone
+---------------------------
+
+To configure a reverse DNS zone:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: configure a reverse mapping zone on the system using IPV6 zone format
+ nios_zone:
+ name: 100::1/128
+ zone_format: IPV6
+ state: present
+ provider: "{{ nios_provider }}"
+
+Dynamic inventory script
+========================
+
+You can use the Infoblox dynamic inventory script to import your network node inventory with Infoblox NIOS. To gather the inventory from Infoblox, you need two files:
+
+- `infoblox.yaml <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/infoblox.yaml>`_ - A file that specifies the NIOS provider arguments and optional filters.
+
+- `infoblox.py <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/infoblox.py>`_ - The Python script that retrieves the NIOS inventory.
+
+.. note::
+
+   Please note that the inventory script only works when Ansible 2.9, 2.10, or 3 has been installed. The inventory script will eventually be removed from `community.general <https://galaxy.ansible.com/community/general>`_, and will not work if ``community.general`` is only installed with ``ansible-galaxy collection install``. Please use the inventory plugin from `infoblox.nios_modules <https://galaxy.ansible.com/infoblox/nios_modules>`_ instead.
+
+To use the Infoblox dynamic inventory script:
+
+#. Download the ``infoblox.yaml`` file and save it in the ``/etc/ansible`` directory.
+
+#. Modify the ``infoblox.yaml`` file with your NIOS credentials.
+
+#. Download the ``infoblox.py`` file and save it in the ``/etc/ansible/hosts`` directory.
+
+#. Change the permissions on the ``infoblox.py`` file to make the file an executable:
+
+.. code-block:: bash
+
+ $ sudo chmod +x /etc/ansible/hosts/infoblox.py
+
+You can optionally use ``./infoblox.py --list`` to test the script. After a few minutes, you should see your Infoblox inventory in JSON format. You can explicitly use the Infoblox dynamic inventory script as follows:
+
+.. code-block:: bash
+
+ $ ansible -i infoblox.py all -m ping
+
+You can also implicitly use the Infoblox dynamic inventory script by including it in your inventory directory (``/etc/ansible/hosts`` by default). See :ref:`dynamic_inventory` for more details.
+
+.. seealso::
+
+ `Infoblox website <https://www.infoblox.com//>`_
+ The Infoblox website
+ `Infoblox and Ansible Deployment Guide <https://www.infoblox.com/resources/deployment-guides/infoblox-and-ansible-integration>`_
+ The deployment guide for Ansible integration provided by Infoblox.
+ `Infoblox Integration in Ansible 2.5 <https://www.ansible.com/blog/infoblox-integration-in-ansible-2.5>`_
+ Ansible blog post about Infoblox.
+ :ref:`Ansible NIOS modules <nios_net tools_modules>`
+ The list of supported NIOS modules, with examples.
+ `Infoblox Ansible Examples <https://github.com/network-automation/infoblox_ansible>`_
+ Infoblox example playbooks.
diff --git a/docs/docsite/rst/scenario_guides/guide_kubernetes.rst b/docs/docsite/rst/scenario_guides/guide_kubernetes.rst
new file mode 100644
index 00000000..abd548de
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_kubernetes.rst
@@ -0,0 +1,63 @@
+Kubernetes and OpenShift Guide
+==============================
+
+Modules for interacting with the Kubernetes (K8s) and OpenShift API are under development, and can be used in preview mode. To use them, review the requirements, and then follow the installation and use instructions.
+
+Requirements
+------------
+
+To use the modules, you'll need the following:
+
+- Run Ansible from source. For assistance, view :ref:`from_source`.
+- `OpenShift Rest Client <https://github.com/openshift/openshift-restclient-python>`_ installed on the host that will execute the modules (an install sketch follows this list).
+
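+The client is published on PyPI, so installing it typically looks like the following (a sketch, assuming the ``openshift`` package name used by the client's repository):
+
+.. code-block:: bash
+
+   $ pip install openshift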
+
+Installation and use
+--------------------
+
+The Kubernetes modules are part of the `Ansible Kubernetes collection <https://github.com/ansible-collections/community.kubernetes>`_.
+
+To install the collection, run the following:
+
+.. code-block:: bash
+
+ $ ansible-galaxy collection install community.kubernetes
+
+Next, include it in a playbook, as follows:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: localhost
+ tasks:
+ - name: Create a pod
+ community.kubernetes.k8s:
+ state: present
+ definition:
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: "utilitypod-1"
+ namespace: default
+ labels:
+ app: galaxy
+ spec:
+ containers:
+ - name: utilitypod
+ image: busybox
+
+
+Authenticating with the API
+---------------------------
+
+By default, the OpenShift Rest Client will look for ``~/.kube/config``, and if found, connect using the active context. You can override the location of the file using the ``kubeconfig`` parameter, and the context using the ``context`` parameter.
+
+Basic authentication is also supported using the ``username`` and ``password`` options. You can override the URL using the ``host`` parameter. Certificate authentication works through the ``ssl_ca_cert``, ``cert_file``, and ``key_file`` parameters, and for token authentication, use the ``api_key`` parameter.
+
+To disable SSL certificate verification, set ``verify_ssl`` to false.
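+
+For example, connecting with a bearer token instead of a kubeconfig file might look like the following (a minimal sketch; the host URL and token variable are placeholders):
+
+.. code-block:: yaml
+
+   - name: List pods using token authentication
+     community.kubernetes.k8s_info:
+       kind: Pod
+       namespace: default
+       host: https://k8s.example.com:6443
+       api_key: "{{ k8s_api_token }}"
+       verify_ssl: no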
+
+Filing issues
+`````````````
+
+If you find a bug or have a suggestion regarding modules, please file issues at `Ansible Kubernetes collection <https://github.com/ansible-collections/community.kubernetes>`_.
+If you find a bug regarding OpenShift client, please file issues at `OpenShift REST Client issues <https://github.com/openshift/openshift-restclient-python/issues>`_.
diff --git a/docs/docsite/rst/scenario_guides/guide_meraki.rst b/docs/docsite/rst/scenario_guides/guide_meraki.rst
new file mode 100644
index 00000000..94c5b161
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_meraki.rst
@@ -0,0 +1,193 @@
+.. _meraki_guide:
+
+******************
+Cisco Meraki Guide
+******************
+
+.. contents::
+ :local:
+
+
+.. _meraki_guide_intro:
+
+What is Cisco Meraki?
+=====================
+
+Cisco Meraki is an easy-to-use, cloud-based, network infrastructure platform for enterprise environments. While most network hardware uses command-line interfaces (CLIs) for configuration, Meraki uses an easy-to-use Dashboard hosted in the Meraki cloud. No on-premises management hardware or software is required - only the network infrastructure to run your business.
+
+MS Switches
+-----------
+
+Meraki MS switches come in multiple flavors and form factors. Meraki switches support 10/100/1000/10000 ports, as well as Cisco's mGig technology for 2.5/5/10Gbps copper connectivity. 8, 24, and 48 port flavors are available with PoE (802.3af/802.3at/UPoE) available on many models.
+
+MX Firewalls
+------------
+
+Meraki's MX firewalls support full layer 3-7 deep packet inspection. MX firewalls are compatible with a variety of VPN technologies including IPSec, SSL VPN, and Meraki's easy-to-use AutoVPN.
+
+MR Wireless Access Points
+-------------------------
+
+MR access points are enterprise-class, high-performance access points for the enterprise. MR access points have MIMO technology and integrated beamforming built-in for high performance applications. BLE allows for advanced location applications to be developed with no on-premises analytics platforms.
+
+Using the Meraki modules
+========================
+
+Meraki modules provide a user-friendly interface to manage your Meraki environment using Ansible. For example, details about SNMP settings for a particular organization can be discovered using the :ref:`meraki_snmp <meraki_snmp_module>` module.
+
+.. code-block:: yaml
+
+ - name: Query SNMP settings
+ meraki_snmp:
+ api_key: abc123
+ org_name: AcmeCorp
+ state: query
+ delegate_to: localhost
+
+Information about a particular object can be queried. For example, the :ref:`meraki_admin <meraki_admin_module>` module supports querying for a specific administrator:
+
+.. code-block:: yaml
+
+ - name: Gather information about Jane Doe
+ meraki_admin:
+ api_key: abc123
+ org_name: AcmeCorp
+ state: query
+ email: janedoe@email.com
+ delegate_to: localhost
+
+Common Parameters
+=================
+
+All Ansible Meraki modules support the following parameters which affect communication with the Meraki Dashboard API. Most of these should only be used by Meraki developers and not the general public.
+
+ host
+ Hostname or IP of Meraki Dashboard.
+
+ use_https
+ Specifies whether communication should be over HTTPS. (Defaults to ``yes``)
+
+ use_proxy
+ Whether to use a proxy for any communication.
+
+ validate_certs
+ Determine whether certificates should be validated or trusted. (Defaults to ``yes``)
+
+These are the common parameters which are used for almost every module.
+
+ org_name
+ Name of organization to perform actions in.
+
+ org_id
+ ID of organization to perform actions in.
+
+ net_name
+ Name of network to perform actions in.
+
+ net_id
+ ID of network to perform actions in.
+
+ state
+ General specification of what action to take. ``query`` does lookups. ``present`` creates or edits. ``absent`` deletes.
+
+.. hint:: Use the ``org_id`` and ``net_id`` parameters when possible. ``org_name`` and ``net_name`` require additional behind-the-scenes API calls to learn the ID values. ``org_id`` and ``net_id`` will perform faster.
+
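+For example, the SNMP query shown earlier can be written with an ID instead (a sketch; the ID value is a placeholder):
+
+.. code-block:: yaml
+
+   - name: Query SNMP settings by organization ID
+     meraki_snmp:
+       api_key: abc123
+       org_id: 123456
+       state: query
+     delegate_to: localhost
+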
+Meraki Authentication
+=====================
+
+All API access with the Meraki Dashboard requires an API key. An API key can be generated from the organization's settings page. Each play in a playbook requires the ``api_key`` parameter to be specified.
+
+The "Vault" feature of Ansible allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plain text in your playbooks or roles. These vault files can then be distributed or placed in source control. See :ref:`playbooks_vault` for more information.
+
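+For example, the API key can come from a vaulted variable instead of appearing in the play (a sketch; it assumes ``meraki_api_key`` is defined in an encrypted vars file):
+
+.. code-block:: yaml
+
+   - name: Query SNMP settings with a vaulted key
+     meraki_snmp:
+       api_key: "{{ meraki_api_key }}"
+       org_name: AcmeCorp
+       state: query
+     delegate_to: localhost
+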
+Meraki's API returns a 404 error if the API key is not correct. It does not provide any specific error saying the key is incorrect. If you receive a 404 error, check the API key first.
+
+Returned Data Structures
+========================
+
+Meraki and its related Ansible modules return most information in the form of a list. For example, this is the information returned by ``meraki_admin`` when querying administrators. It returns a list even though there is only one administrator.
+
+.. code-block:: json
+
+ [
+ {
+ "orgAccess": "full",
+ "name": "John Doe",
+ "tags": [],
+ "networks": [],
+ "email": "john@doe.com",
+ "id": "12345677890"
+ }
+ ]
+
+Handling Returned Data
+======================
+
+Since Meraki's response data uses lists instead of properly keyed dictionaries for responses, certain strategies should be used when querying data for particular information. For many situations, use the ``selectattr()`` Jinja2 function.
+
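+For example, to pull one administrator's ID out of the list returned by ``meraki_admin`` (a sketch; it assumes the query result was registered as ``admins``):
+
+.. code-block:: yaml
+
+   - set_fact:
+       admin_id: "{{ (admins.data | selectattr('email', 'equalto', 'john@doe.com') | list | first).id }}"
+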
+Merging Existing and New Data
+=============================
+
+Ansible's Meraki modules do not allow for manipulating data. For example, you may need to insert a rule in the middle of a firewall ruleset. Ansible and the Meraki modules lack a way to directly merge or manipulate data in place. However, a playbook can use a few tasks to split the list where you need to insert a rule and then merge the pieces together again with the new rule added. The steps involved are as follows:
+
+1. Create blank "front" and "back" lists.
+ ::
+
+ vars:
+ - front_rules: []
+ - back_rules: []
+2. Get existing firewall rules from Meraki and create a new variable.
+ ::
+
+ - name: Get firewall rules
+ meraki_mx_l3_firewall:
+ auth_key: abc123
+ org_name: YourOrg
+ net_name: YourNet
+ state: query
+ delegate_to: localhost
+ register: rules
+ - set_fact:
+ original_ruleset: '{{rules.data}}'
+3. Write the new rule. The new rule needs to be in a list so it can be concatenated with the other lists in an upcoming step. The leading ``-`` puts the rule in a list so it can be merged.
+ ::
+
+    - set_fact:
+        new_rule:
+          - comment: Block traffic to server
+            src_cidr: 192.0.1.0/24
+            src_port: any
+            dst_cidr: 192.0.1.2/32
+            dst_port: any
+            protocol: any
+            policy: deny
+4. Split the rules into two lists. This assumes the existing ruleset is 2 rules long.
+ ::
+
+    - set_fact:
+        front_rules: '{{ front_rules + original_ruleset[:1] }}'
+    - set_fact:
+        back_rules: '{{ back_rules + original_ruleset[1:] }}'
+5. Merge rules with the new rule in the middle.
+ ::
+
+ - set_fact:
+ new_ruleset: '{{front_rules + new_rule + back_rules}}'
+6. Upload new ruleset to Meraki.
+ ::
+
+ - name: Set two firewall rules
+ meraki_mx_l3_firewall:
+ auth_key: abc123
+ org_name: YourOrg
+ net_name: YourNet
+ state: present
+ rules: '{{ new_ruleset }}'
+ delegate_to: localhost
+
+Error Handling
+==============
+
+Ansible's Meraki modules will often fail if improper or incompatible parameters are specified. However, there will likely be scenarios where the module accepts the information but the Meraki API rejects the data. If this happens, the error will be returned in the ``body`` field of the response, along with an HTTP 400 status code.
+
+Meraki's API returns a 404 error if the API key is not correct. It does not provide any specific error saying the key is incorrect. If you receive a 404 error, check the API key first. 404 errors can also occur if improper object IDs (ex. ``org_id``) are specified.
diff --git a/docs/docsite/rst/scenario_guides/guide_online.rst b/docs/docsite/rst/scenario_guides/guide_online.rst
new file mode 100644
index 00000000..2c181a94
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_online.rst
@@ -0,0 +1,41 @@
+****************
+Online.net Guide
+****************
+
+Introduction
+============
+
+Online is a French hosting company mainly known for providing bare-metal servers named Dedibox.
+Check it out: `https://www.online.net/en <https://www.online.net/en>`_
+
+Dynamic inventory for Online resources
+--------------------------------------
+
+Ansible has a dynamic inventory plugin that can list your resources.
+
+1. Create a YAML configuration such as ``online_inventory.yml`` with this content:
+
+.. code-block:: yaml
+
+ plugin: online
+
+2. Set your ``ONLINE_TOKEN`` environment variable with your token.
+ You need to open an account and log into it before you can get a token.
+ You can find your token at the following page: `https://console.online.net/en/api/access <https://console.online.net/en/api/access>`_
+
+3. You can test that your inventory is working by running:
+
+.. code-block:: bash
+
+ $ ansible-inventory -v -i online_inventory.yml --list
+
+
+4. Now you can run your playbook or any other module with this inventory:
+
+.. code-block:: console
+
+ $ ansible all -i online_inventory.yml -m ping
+ sd-96735 | SUCCESS => {
+ "changed": false,
+ "ping": "pong"
+ }
diff --git a/docs/docsite/rst/scenario_guides/guide_oracle.rst b/docs/docsite/rst/scenario_guides/guide_oracle.rst
new file mode 100644
index 00000000..170ea903
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_oracle.rst
@@ -0,0 +1,103 @@
+===================================
+Oracle Cloud Infrastructure Guide
+===================================
+
+************
+Introduction
+************
+
+Oracle provides a number of Ansible modules to interact with Oracle Cloud Infrastructure (OCI). In this guide, we will explain how you can use these modules to orchestrate, provision and configure your infrastructure on OCI.
+
+************
+Requirements
+************
+To use the OCI Ansible modules, you must have the following prerequisites on your control node, the computer from which Ansible playbooks are executed.
+
+1. `An Oracle Cloud Infrastructure account. <https://cloud.oracle.com/en_US/tryit>`_
+
+2. A user created in that account, in a security group with a policy that grants the necessary permissions for working with resources in those compartments. For guidance, see `How Policies Work <https://docs.cloud.oracle.com/iaas/Content/Identity/Concepts/policies.htm>`_.
+
+3. The necessary credentials and OCID information.
+
+************
+Installation
+************
+1. Install the Oracle Cloud Infrastructure Python SDK (`detailed installation instructions <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/installation.html>`_):
+
+.. code-block:: bash
+
+ pip install oci
+
+2. Install the Ansible OCI Modules in one of two ways:
+
+a. From Galaxy:
+
+.. code-block:: bash
+
+ ansible-galaxy install oracle.oci_ansible_modules
+
+b. From GitHub:
+
+.. code-block:: bash
+
+ $ git clone https://github.com/oracle/oci-ansible-modules.git
+
+.. code-block:: bash
+
+ $ cd oci-ansible-modules
+
+
+Run one of the following commands:
+
+- If Ansible is installed only for your user:
+
+.. code-block:: bash
+
+ $ ./install.py
+
+- If Ansible is installed as root:
+
+.. code-block:: bash
+
+ $ sudo ./install.py
+
+*************
+Configuration
+*************
+
+When creating and configuring Oracle Cloud Infrastructure resources, Ansible modules use the authentication information outlined `here <https://docs.cloud.oracle.com/iaas/Content/API/Concepts/sdkconfig.htm>`_.
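+
+By default, this information lives in an ``~/.oci/config`` file. A minimal sketch of its format (the OCIDs, fingerprint, and key path are placeholders):
+
+.. code-block:: ini
+
+   [DEFAULT]
+   user=ocid1.user.oc1..<unique_id>
+   fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
+   key_file=~/.oci/oci_api_key.pem
+   tenancy=ocid1.tenancy.oc1..<unique_id>
+   region=us-ashburn-1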
+
+********
+Examples
+********
+Launch a compute instance
+=========================
+This `sample launch playbook <https://github.com/oracle/oci-ansible-modules/tree/master/samples/compute/launch_compute_instance>`_
+launches a public Compute instance and then accesses the instance from an Ansible module over an SSH connection. The sample illustrates how to:
+
+- Generate a temporary, host-specific SSH key pair.
+- Specify the public key from the key pair for connecting to the instance, and then launch the instance.
+- Connect to the newly launched instance using SSH.
+
+Create and manage Autonomous Data Warehouses
+============================================
+This `sample warehouse playbook <https://github.com/oracle/oci-ansible-modules/tree/master/samples/database/autonomous_data_warehouse>`_ creates an Autonomous Data Warehouse and manage its lifecycle. The sample shows how to:
+
+- Set up an Autonomous Data Warehouse.
+- List all of the Autonomous Data Warehouse instances available in a compartment, filtered by the display name.
+- Get the "facts" for a specified Autonomous Data Warehouse.
+- Stop and start an Autonomous Data Warehouse instance.
+- Delete an Autonomous Data Warehouse instance.
+
+Create and manage Autonomous Transaction Processing
+===================================================
+This `sample playbook <https://github.com/oracle/oci-ansible-modules/tree/master/samples/database/autonomous_database>`_
+creates an Autonomous Transaction Processing database and manage its lifecycle. The sample shows how to:
+
+- Set up an Autonomous Transaction Processing database instance.
+- List all of the Autonomous Transaction Processing instances in a compartment, filtered by the display name.
+- Get the "facts" for a specified Autonomous Transaction Processing instance.
+- Delete an Autonomous Transaction Processing database instance.
+
+You can find more examples here: `Sample Ansible Playbooks <https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/ansiblesamples.htm>`_.
diff --git a/docs/docsite/rst/scenario_guides/guide_packet.rst b/docs/docsite/rst/scenario_guides/guide_packet.rst
new file mode 100644
index 00000000..c08eb947
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_packet.rst
@@ -0,0 +1,311 @@
+**********************************
+Packet.net Guide
+**********************************
+
+Introduction
+============
+
+`Packet.net <https://packet.net>`_ is a bare metal infrastructure host that's supported by Ansible (>=2.3) via a dynamic inventory script and two cloud modules. The two modules are:
+
+- packet_sshkey: adds a public SSH key from file or value to the Packet infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized_keys.
+- packet_device: manages servers on Packet. You can use this module to create, restart and delete devices.
+
+Note, this guide assumes you are familiar with Ansible and how it works. If you're not, have a look at the :ref:`docs <ansible_documentation>` before getting started.
+
+Requirements
+============
+
+The Packet modules and inventory script connect to the Packet API using the packet-python package. You can install it with pip:
+
+.. code-block:: bash
+
+ $ pip install packet-python
+
+In order to check the state of devices created by Ansible on Packet, it's a good idea to install one of the `Packet CLI clients <https://www.packet.net/developers/integrations/>`_. Otherwise you can check them via the `Packet portal <https://app.packet.net/portal>`_.
+
+To use the modules and inventory script you'll need a Packet API token. You can generate an API token via the Packet portal `here <https://app.packet.net/portal#/api-keys>`__. The simplest way to authenticate yourself is to set the Packet API token in an environment variable:
+
+.. code-block:: bash
+
+ $ export PACKET_API_TOKEN=Bfse9F24SFtfs423Gsd3ifGsd43sSdfs
+
+If you're not comfortable exporting your API token, you can pass it as a parameter to the modules.
+
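+For example (a sketch; it assumes the token is stored in a vaulted variable named ``packet_api_token``):
+
+.. code-block:: yaml
+
+  - packet_sshkey:
+      auth_token: "{{ packet_api_token }}"
+      key_file: ./id_rsa.pub
+      label: tutorial key
+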
+On Packet, devices and reserved IP addresses belong to `projects <https://www.packet.com/developers/api/#projects>`_. In order to use the packet_device module, you need to specify the UUID of the project in which you want to create or manage devices. You can find a project's UUID in the Packet portal `here <https://app.packet.net/portal#/projects/list/table/>`_ (it's just under the project table) or via one of the available `CLIs <https://www.packet.net/developers/integrations/>`_.
+
+
+If you want to use a new SSH keypair in this tutorial, you can generate it to ``./id_rsa`` and ``./id_rsa.pub`` as:
+
+.. code-block:: bash
+
+ $ ssh-keygen -t rsa -f ./id_rsa
+
+If you want to use an existing keypair, just copy the private and public key over to the playbook directory.
+
+
+Device Creation
+===============
+
+The following code block is a simple playbook that creates one `Type 0 <https://www.packet.com/cloud/servers/t1-small/>`_ server (the 'plan' parameter). You have to supply 'plan' and 'operating_system'. 'facility' defaults to 'ewr1' (Parsippany, NJ). You can find all the possible values for the parameters via a `CLI client <https://www.packet.net/developers/integrations/>`_.
+
+.. code-block:: yaml
+
+ # playbook_create.yml
+
+ - name: create ubuntu device
+ hosts: localhost
+ tasks:
+
+ - packet_sshkey:
+ key_file: ./id_rsa.pub
+ label: tutorial key
+
+ - packet_device:
+ project_id: <your_project_id>
+ hostnames: myserver
+ operating_system: ubuntu_16_04
+ plan: baremetal_0
+ facility: sjc1
+
+After running ``ansible-playbook playbook_create.yml``, you should have a server provisioned on Packet. You can verify via a CLI or in the `Packet portal <https://app.packet.net/portal#/projects/list/table>`__.
+
+If you get an error with the message "failed to set machine state present, error: Error 404: Not Found", please verify your project UUID.
+
+
+Updating Devices
+================
+
+The two parameters used to uniquely identify Packet devices are: "device_ids" and "hostnames". Both parameters accept either a single string (later converted to a one-element list), or a list of strings.
+
+The 'device_ids' and 'hostnames' parameters are mutually exclusive. The following values are all acceptable:
+
+- device_ids: a27b7a83-fc93-435b-a128-47a5b04f2dcf
+
+- hostnames: mydev1
+
+- device_ids: [a27b7a83-fc93-435b-a128-47a5b04f2dcf, 4887130f-0ccd-49a0-99b0-323c1ceb527b]
+
+- hostnames: [mydev1, mydev2]
+
+In addition, hostnames can contain a special '%d' formatter along with a 'count' parameter that lets you easily expand hostnames that follow a simple name and number pattern; in other words, ``hostnames: "mydev%d", count: 2`` will expand to [mydev1, mydev2].
+
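+A sketch of that expansion, reusing the parameters from the creation example above (the project ID is a placeholder):
+
+.. code-block:: yaml
+
+  - packet_device:
+      project_id: <your_project_id>
+      hostnames: "mydev%d"
+      count: 2
+      operating_system: ubuntu_16_04
+      plan: baremetal_0
+      facility: sjc1
+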
+If your playbook acts on existing Packet devices, you can only pass the 'hostnames' and 'device_ids' parameters. The following playbook shows how you can reboot a specific Packet device by setting the 'hostnames' parameter:
+
+.. code-block:: yaml
+
+ # playbook_reboot.yml
+
+ - name: reboot myserver
+ hosts: localhost
+ tasks:
+
+ - packet_device:
+ project_id: <your_project_id>
+ hostnames: myserver
+ state: rebooted
+
+You can also identify specific Packet devices with the 'device_ids' parameter. The device's UUID can be found in the `Packet Portal <https://app.packet.net/portal>`_ or by using a `CLI <https://www.packet.net/developers/integrations/>`_. The following playbook removes a Packet device using the 'device_ids' field:
+
+.. code-block:: yaml
+
+ # playbook_remove.yml
+
+ - name: remove a device
+ hosts: localhost
+ tasks:
+
+ - packet_device:
+ project_id: <your_project_id>
+ device_ids: <myserver_device_id>
+ state: absent
+
+
+More Complex Playbooks
+======================
+
+In this example, we'll create a CoreOS cluster with `user data <https://packet.com/developers/docs/servers/key-features/user-data/>`_.
+
+
+The CoreOS cluster will use `etcd <https://etcd.io/>`_ for discovery of other servers in the cluster. Before provisioning your servers, you'll need to generate a discovery token for your cluster:
+
+.. code-block:: bash
+
+ $ curl -w "\n" 'https://discovery.etcd.io/new?size=3'
+
+The following playbook will create an SSH key, 3 Packet servers, and then wait until SSH is ready (or until 5 minutes have passed). Make sure to substitute the discovery token URL in 'user_data', and the 'project_id' before running ``ansible-playbook``. Also, feel free to change 'plan' and 'facility'.
+
+.. code-block:: yaml
+
+ # playbook_coreos.yml
+
+ - name: Start 3 CoreOS nodes in Packet and wait until SSH is ready
+ hosts: localhost
+ tasks:
+
+ - packet_sshkey:
+ key_file: ./id_rsa.pub
+ label: new
+
+ - packet_device:
+ hostnames: [coreos-one, coreos-two, coreos-three]
+ operating_system: coreos_beta
+ plan: baremetal_0
+ facility: ewr1
+ project_id: <your_project_id>
+ wait_for_public_IPv: 4
+ user_data: |
+ #cloud-config
+ coreos:
+ etcd2:
+ discovery: https://discovery.etcd.io/<token>
+ advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
+ initial-advertise-peer-urls: http://$private_ipv4:2380
+ listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
+ listen-peer-urls: http://$private_ipv4:2380
+ fleet:
+ public-ip: $private_ipv4
+ units:
+ - name: etcd2.service
+ command: start
+ - name: fleet.service
+ command: start
+ register: newhosts
+
+ - name: wait for ssh
+ wait_for:
+ delay: 1
+ host: "{{ item.public_ipv4 }}"
+ port: 22
+ state: started
+ timeout: 500
+ loop: "{{ newhosts.results[0].devices }}"
+
+
+As with most Ansible modules, the default states of the Packet modules are idempotent, meaning the resources in your project will remain the same after re-runs of a playbook. Thus, we can keep the ``packet_sshkey`` module call in our playbook. If the public key is already in your Packet account, the call will have no effect.
+
+The second module call provisions 3 Packet Type 0 (specified using the 'plan' parameter) servers in the project identified via the 'project_id' parameter. The servers are all provisioned with CoreOS beta (the 'operating_system' parameter) and are customized with cloud-config user data passed to the 'user_data' parameter.
+
+The ``packet_device`` module has a ``wait_for_public_IPv`` parameter that is used to specify the version of the IP address to wait for (valid values are ``4`` or ``6`` for IPv4 or IPv6). If specified, Ansible will wait until the GET API call for a device contains an Internet-routable IP address of the specified version. When referring to an IP address of a created device in subsequent module calls, it's wise to use the ``wait_for_public_IPv`` parameter, or ``state: active`` in the packet_device module call.
+
+Run the playbook:
+
+.. code-block:: bash
+
+ $ ansible-playbook playbook_coreos.yml
+
+Once the playbook quits, your new devices should be reachable via SSH. Try to connect to one and check if etcd has started properly:
+
+.. code-block:: bash
+
+ tomk@work $ ssh -i id_rsa core@$one_of_the_servers_ip
+ core@coreos-one ~ $ etcdctl cluster-health
+
+Once you create a couple of devices, you might appreciate the dynamic inventory script...
+
+
+Dynamic Inventory Script
+========================
+
+The dynamic inventory script queries the Packet API for a list of hosts, and exposes it to Ansible so you can easily identify and act on Packet devices.
+
+You can find it in Ansible Community General Collection's git repo at `scripts/inventory/packet_net.py <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/packet_net.py>`_.
+
+The inventory script is configurable via an `ini file <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/packet_net.ini>`_.
+
+If you want to use the inventory script, you must first export your Packet API token to a PACKET_API_TOKEN environment variable.
+
+You can either copy the inventory and ini config out from the cloned git repo, or you can download it to your working directory like so:
+
+.. code-block:: bash
+
+ $ wget https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/packet_net.py
+ $ chmod +x packet_net.py
+ $ wget https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/packet_net.ini
+
+In order to understand what the inventory script gives to Ansible, you can run:
+
+.. code-block:: bash
+
+ $ ./packet_net.py --list
+
+It should print a JSON document similar to the following trimmed dictionary:
+
+.. code-block:: json
+
+ {
+ "_meta": {
+ "hostvars": {
+ "147.75.64.169": {
+ "packet_billing_cycle": "hourly",
+ "packet_created_at": "2017-02-09T17:11:26Z",
+ "packet_facility": "ewr1",
+ "packet_hostname": "coreos-two",
+ "packet_href": "/devices/d0ab8972-54a8-4bff-832b-28549d1bec96",
+ "packet_id": "d0ab8972-54a8-4bff-832b-28549d1bec96",
+ "packet_locked": false,
+ "packet_operating_system": "coreos_beta",
+ "packet_plan": "baremetal_0",
+ "packet_state": "active",
+ "packet_updated_at": "2017-02-09T17:16:35Z",
+ "packet_user": "core",
+ "packet_userdata": "#cloud-config\ncoreos:\n etcd2:\n discovery: https://discovery.etcd.io/e0c8a4a9b8fe61acd51ec599e2a4f68e\n advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001\n initial-advertise-peer-urls: http://$private_ipv4:2380\n listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001\n listen-peer-urls: http://$private_ipv4:2380\n fleet:\n public-ip: $private_ipv4\n units:\n - name: etcd2.service\n command: start\n - name: fleet.service\n command: start"
+ }
+ }
+ },
+ "baremetal_0": [
+ "147.75.202.255",
+ "147.75.202.251",
+ "147.75.202.249",
+ "147.75.64.129",
+ "147.75.192.51",
+ "147.75.64.169"
+ ],
+ "coreos_beta": [
+ "147.75.202.255",
+ "147.75.202.251",
+ "147.75.202.249",
+ "147.75.64.129",
+ "147.75.192.51",
+ "147.75.64.169"
+ ],
+ "ewr1": [
+ "147.75.64.129",
+ "147.75.192.51",
+ "147.75.64.169"
+ ],
+ "sjc1": [
+ "147.75.202.255",
+ "147.75.202.251",
+ "147.75.202.249"
+ ],
+ "coreos-two": [
+ "147.75.64.169"
+ ],
+ "d0ab8972-54a8-4bff-832b-28549d1bec96": [
+ "147.75.64.169"
+ ]
+ }
+
+In the ``['_meta']['hostvars']`` key, there is a list of devices (uniquely identified by their public IPv4 address) with their parameters. The other top-level keys are lists of devices grouped by some parameter. Here, the groupings are plan (all devices are of type baremetal_0), operating system, and facility (ewr1 and sjc1).
+
+In addition to the parameter groups, there are also one-item groups with the UUID or hostname of the device.
+
+You can now target groups in playbooks! The following playbook will install a role that supplies resources for an Ansible target into all devices in the "coreos_beta" group:
+
+.. code-block:: yaml
+
+ # playbook_bootstrap.yml
+
+  - hosts: coreos_beta
+    gather_facts: false
+    roles:
+      - defunctzombie.coreos-bootstrap
+
+Don't forget to supply the dynamic inventory in the ``-i`` argument!
+
+.. code-block:: bash
+
+ $ ansible-playbook -u core -i packet_net.py playbook_bootstrap.yml
+
+
+If you have any questions or comments, let us know! help@packet.net
diff --git a/docs/docsite/rst/scenario_guides/guide_rax.rst b/docs/docsite/rst/scenario_guides/guide_rax.rst
new file mode 100644
index 00000000..b6100b8b
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_rax.rst
@@ -0,0 +1,810 @@
+Rackspace Cloud Guide
+=====================
+
+.. _rax_introduction:
+
+Introduction
+````````````
+
+.. note:: This section of the documentation is under construction. We are in the process of adding more examples about the Rackspace modules and how they work together. Once complete, there will also be examples for Rackspace Cloud in `ansible-examples <https://github.com/ansible/ansible-examples/>`_.
+
+Ansible contains a number of core modules for interacting with Rackspace Cloud.
+
+The purpose of this section is to explain how to put Ansible modules together
+(and use inventory scripts) to use Ansible in a Rackspace Cloud context.
+
+Prerequisites for using the rax modules are minimal. In addition to ansible itself,
+all of the modules require and are tested against pyrax 1.5 or higher.
+You'll need this Python module installed on the execution host.
+
+``pyrax`` is not currently available in many operating system
+package repositories, so you will likely need to install it via pip:
+
+.. code-block:: bash
+
+ $ pip install pyrax
+
+Ansible creates an implicit localhost that executes in the same context as ``ansible-playbook`` and the other CLI tools.
+If for any reason you need or want to have it in your inventory you should do something like the following:
+
+.. code-block:: ini
+
+ [localhost]
+ localhost ansible_connection=local ansible_python_interpreter=/usr/local/bin/python2
+
+For more information see :ref:`Implicit Localhost <implicit_localhost>`
+
+In playbook steps, we'll typically be using the following pattern:
+
+.. code-block:: yaml
+
+ - hosts: localhost
+ gather_facts: False
+ tasks:
+
+.. _credentials_file:
+
+Credentials File
+````````````````
+
+The `rax.py` inventory script and all `rax` modules support a standard `pyrax` credentials file that looks like:
+
+.. code-block:: ini
+
+ [rackspace_cloud]
+ username = myraxusername
+ api_key = d41d8cd98f00b204e9800998ecf8427e
+
+Setting the environment variable ``RAX_CREDS_FILE`` to the path of this file tells Ansible where to load
+this information from.
+
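+For example, if you saved the file above as ``~/.raxpub``:
+
+.. code-block:: bash
+
+    $ export RAX_CREDS_FILE=~/.raxpub
+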
+More information about this credentials file can be found at
+https://github.com/pycontribs/pyrax/blob/master/docs/getting_started.md#authenticating
+
+
+.. _virtual_environment:
+
+Running from a Python Virtual Environment (Optional)
+++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to.
+
+There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done via the interpreter line in modules. However, when instructed by setting the inventory variable 'ansible_python_interpreter', Ansible will use this specified path instead to find Python. This can be a cause of confusion, as one may assume that modules running on 'localhost', or perhaps running via 'local_action', are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:
+
+.. code-block:: ini
+
+ [localhost]
+ localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python
+
+.. note::
+
+ pyrax may be installed in the global Python package scope or in a virtual environment. There are no special considerations to keep in mind when installing pyrax.
+
+.. _provisioning:
+
+Provisioning
+````````````
+
+Now for the fun parts.
+
+The 'rax' module provides the ability to provision instances within Rackspace Cloud. Typically the provisioning task will be performed from your Ansible control server (in our example, localhost) against the Rackspace cloud API. This is done for several reasons:
+
+ - Avoiding installing the pyrax library on remote nodes
+ - No need to encrypt and distribute credentials to remote nodes
+ - Speed and simplicity
+
+.. note::
+
+ Authentication with the Rackspace-related modules is handled by either
+ specifying your username and API key as environment variables or passing
+ them as module arguments, or by specifying the location of a credentials
+ file.
+
+Here is a basic example of provisioning an instance in ad-hoc mode:
+
+.. code-block:: bash
+
+ $ ansible localhost -m rax -a "name=awx flavor=4 image=ubuntu-1204-lts-precise-pangolin wait=yes"
+
+Here's what it would look like in a playbook, assuming the parameters were defined in variables:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Provision a set of instances
+ rax:
+ name: "{{ rax_name }}"
+ flavor: "{{ rax_flavor }}"
+ image: "{{ rax_image }}"
+ count: "{{ rax_count }}"
+ group: "{{ group }}"
+ wait: yes
+ register: rax
+ delegate_to: localhost
+
+The rax module returns data about the nodes it creates, like IP addresses, hostnames, and login passwords. By registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the following example, the servers that were successfully created using the above task are dynamically added to a group called "raxhosts", with each node's hostname, IP address, and root password being added to the inventory.
+
+.. code-block:: yaml
+
+ - name: Add the instances we created (by public IP) to the group 'raxhosts'
+ add_host:
+ hostname: "{{ item.name }}"
+ ansible_host: "{{ item.rax_accessipv4 }}"
+ ansible_password: "{{ item.rax_adminpass }}"
+ groups: raxhosts
+ loop: "{{ rax.success }}"
+ when: rax.action == 'create'
+
+With the host group now created, the next play in this playbook could now configure servers belonging to the raxhosts group.
+
+.. code-block:: yaml
+
+ - name: Configuration play
+ hosts: raxhosts
+ user: root
+ roles:
+ - ntp
+ - webserver
+
+The method above ties the configuration of a host with the provisioning step. This isn't always what you want, and leads us
+to the next section.
+
+.. _host_inventory:
+
+Host Inventory
+``````````````
+
+Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the "rax" inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up cloud instances via other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, and so on. Utilizing metadata is highly recommended in "rax" and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.
+
+In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not.
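+
+For example (a sketch; ``static.ini`` and ``site.yml`` are hypothetical names for your INI inventory and playbook):
+
+.. code-block:: bash
+
+    $ ls inventory/
+    rax.py  static.ini
+    $ chmod +x inventory/rax.py
+    $ ansible-playbook -i inventory/ site.yml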
+
+.. _raxpy:
+
+rax.py
+++++++
+
+To use the Rackspace dynamic inventory script, copy ``rax.py`` into your inventory directory and make it executable. You can specify a credentials file for ``rax.py`` utilizing the ``RAX_CREDS_FILE`` environment variable.
+
+.. note:: Dynamic inventory scripts (like ``rax.py``) are saved in ``/usr/share/ansible/inventory`` if Ansible has been installed globally. If installed to a virtualenv, the inventory scripts are installed to ``$VIRTUALENV/share/inventory``.
+
+.. note:: Users of :ref:`ansible_tower` will note that dynamic inventory is natively supported by Tower, and all you have to do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through these steps::
+
+ $ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup
+
+``rax.py`` also accepts a ``RAX_REGION`` environment variable, which can contain an individual region, or a comma separated list of regions.
+
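+For example, to limit the inventory to two regions:
+
+.. code-block:: bash
+
+    $ RAX_CREDS_FILE=~/.raxpub RAX_REGION=ORD,DFW ansible all -i inventory/ -m setup
+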
+When using ``rax.py``, you will not have a 'localhost' defined in the inventory.
+
+As mentioned previously, you will often be running most of these modules outside of the host loop, and will need 'localhost' defined. The recommended way to do this would be to create an ``inventory`` directory, and place both the ``rax.py`` script and a file containing ``localhost`` in it.
+
+Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead
+of an individual file will cause ansible to evaluate each file in that directory for inventory.
+
+Let's test our inventory script to see if it can talk to Rackspace Cloud.
+
+.. code-block:: bash
+
+ $ RAX_CREDS_FILE=~/.raxpub ansible all -i inventory/ -m setup
+
+Assuming things are properly configured, the ``rax.py`` inventory script will output information similar to the
+following information, which will be utilized for inventory and variables.
+
+.. code-block:: json
+
+ {
+ "ORD": [
+ "test"
+ ],
+ "_meta": {
+ "hostvars": {
+ "test": {
+ "ansible_host": "198.51.100.1",
+ "rax_accessipv4": "198.51.100.1",
+ "rax_accessipv6": "2001:DB8::2342",
+ "rax_addresses": {
+ "private": [
+ {
+ "addr": "192.0.2.2",
+ "version": 4
+ }
+ ],
+ "public": [
+ {
+ "addr": "198.51.100.1",
+ "version": 4
+ },
+ {
+ "addr": "2001:DB8::2342",
+ "version": 6
+ }
+ ]
+ },
+ "rax_config_drive": "",
+ "rax_created": "2013-11-14T20:48:22Z",
+ "rax_flavor": {
+ "id": "performance1-1",
+ "links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
+ "rel": "bookmark"
+ }
+ ]
+ },
+ "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
+ "rax_human_id": "test",
+ "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rax_image": {
+ "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
+ "links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
+ "rel": "bookmark"
+ }
+ ]
+ },
+ "rax_key_name": null,
+ "rax_links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rel": "self"
+ },
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rel": "bookmark"
+ }
+ ],
+ "rax_metadata": {
+ "foo": "bar"
+ },
+ "rax_name": "test",
+ "rax_name_attr": "name",
+ "rax_networks": {
+ "private": [
+ "192.0.2.2"
+ ],
+ "public": [
+ "198.51.100.1",
+ "2001:DB8::2342"
+ ]
+ },
+ "rax_os-dcf_diskconfig": "AUTO",
+ "rax_os-ext-sts_power_state": 1,
+ "rax_os-ext-sts_task_state": null,
+ "rax_os-ext-sts_vm_state": "active",
+ "rax_progress": 100,
+ "rax_status": "ACTIVE",
+ "rax_tenant_id": "111111",
+ "rax_updated": "2013-11-14T20:49:27Z",
+ "rax_user_id": "22222"
+ }
+ }
+ }
+ }
+
+.. _standard_inventory:
+
+Standard Inventory
+++++++++++++++++++
+
+When utilizing a standard ini formatted inventory file (as opposed to the inventory plugin), it may still be advantageous to retrieve discoverable hostvar information from the Rackspace API.
+
+This can be achieved with the ``rax_facts`` module and an inventory file similar to the following:
+
+.. code-block:: ini
+
+ [test_servers]
+ hostname1 rax_region=ORD
+ hostname2 rax_region=ORD
+
+.. code-block:: yaml
+
+ - name: Gather info about servers
+ hosts: test_servers
+ gather_facts: False
+ tasks:
+ - name: Get facts about servers
+ rax_facts:
+ credentials: ~/.raxpub
+ name: "{{ inventory_hostname }}"
+ region: "{{ rax_region }}"
+ delegate_to: localhost
+ - name: Map some facts
+ set_fact:
+ ansible_host: "{{ rax_accessipv4 }}"
+
+While you don't need to know how it works, it may be interesting to know what kind of variables are returned.
+
+The ``rax_facts`` module provides facts as follows, which match the ``rax.py`` inventory script:
+
+.. code-block:: json
+
+ {
+ "ansible_facts": {
+ "rax_accessipv4": "198.51.100.1",
+ "rax_accessipv6": "2001:DB8::2342",
+ "rax_addresses": {
+ "private": [
+ {
+ "addr": "192.0.2.2",
+ "version": 4
+ }
+ ],
+ "public": [
+ {
+ "addr": "198.51.100.1",
+ "version": 4
+ },
+ {
+ "addr": "2001:DB8::2342",
+ "version": 6
+ }
+ ]
+ },
+ "rax_config_drive": "",
+ "rax_created": "2013-11-14T20:48:22Z",
+ "rax_flavor": {
+ "id": "performance1-1",
+ "links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
+ "rel": "bookmark"
+ }
+ ]
+ },
+ "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
+ "rax_human_id": "test",
+ "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rax_image": {
+ "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
+ "links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
+ "rel": "bookmark"
+ }
+ ]
+ },
+ "rax_key_name": null,
+ "rax_links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rel": "self"
+ },
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rel": "bookmark"
+ }
+ ],
+ "rax_metadata": {
+ "foo": "bar"
+ },
+ "rax_name": "test",
+ "rax_name_attr": "name",
+ "rax_networks": {
+ "private": [
+ "192.0.2.2"
+ ],
+ "public": [
+ "198.51.100.1",
+ "2001:DB8::2342"
+ ]
+ },
+ "rax_os-dcf_diskconfig": "AUTO",
+ "rax_os-ext-sts_power_state": 1,
+ "rax_os-ext-sts_task_state": null,
+ "rax_os-ext-sts_vm_state": "active",
+ "rax_progress": 100,
+ "rax_status": "ACTIVE",
+ "rax_tenant_id": "111111",
+ "rax_updated": "2013-11-14T20:49:27Z",
+ "rax_user_id": "22222"
+ },
+ "changed": false
+ }
+
+
+Use Cases
+`````````
+
+This section covers some additional usage examples built around a specific use case.
+
+.. _network_and_server:
+
+Network and Server
+++++++++++++++++++
+
+Create an isolated cloud network and build a server
+
+.. code-block:: yaml
+
+ - name: Build Servers on an Isolated Network
+ hosts: localhost
+ gather_facts: False
+ tasks:
+ - name: Network create request
+ rax_network:
+ credentials: ~/.raxpub
+ label: my-net
+ cidr: 192.168.3.0/24
+ region: IAD
+ state: present
+ delegate_to: localhost
+
+ - name: Server create request
+ rax:
+ credentials: ~/.raxpub
+ name: web%04d.example.org
+ flavor: 2
+ image: ubuntu-1204-lts-precise-pangolin
+ disk_config: manual
+ networks:
+ - public
+ - my-net
+ region: IAD
+ state: present
+ count: 5
+ exact_count: yes
+ group: web
+ wait: yes
+ wait_timeout: 360
+ register: rax
+ delegate_to: localhost
+
+.. _complete_environment:
+
+Complete Environment
+++++++++++++++++++++
+
+Build a complete webserver environment with servers, custom networks and load balancers, install nginx and create a custom index.html
+
+.. code-block:: yaml
+
+ ---
+ - name: Build environment
+ hosts: localhost
+ gather_facts: False
+ tasks:
+ - name: Load Balancer create request
+ rax_clb:
+ credentials: ~/.raxpub
+ name: my-lb
+ port: 80
+ protocol: HTTP
+ algorithm: ROUND_ROBIN
+ type: PUBLIC
+ timeout: 30
+ region: IAD
+ wait: yes
+ state: present
+ meta:
+ app: my-cool-app
+ register: clb
+
+ - name: Network create request
+ rax_network:
+ credentials: ~/.raxpub
+ label: my-net
+ cidr: 192.168.3.0/24
+ state: present
+ region: IAD
+ register: network
+
+ - name: Server create request
+ rax:
+ credentials: ~/.raxpub
+ name: web%04d.example.org
+ flavor: performance1-1
+ image: ubuntu-1204-lts-precise-pangolin
+ disk_config: manual
+ networks:
+ - public
+ - private
+ - my-net
+ region: IAD
+ state: present
+ count: 5
+ exact_count: yes
+ group: web
+ wait: yes
+ register: rax
+
+ - name: Add servers to web host group
+ add_host:
+ hostname: "{{ item.name }}"
+ ansible_host: "{{ item.rax_accessipv4 }}"
+ ansible_password: "{{ item.rax_adminpass }}"
+ ansible_user: root
+ groups: web
+ loop: "{{ rax.success }}"
+ when: rax.action == 'create'
+
+ - name: Add servers to Load balancer
+ rax_clb_nodes:
+ credentials: ~/.raxpub
+ load_balancer_id: "{{ clb.balancer.id }}"
+ address: "{{ item.rax_networks.private|first }}"
+ port: 80
+ condition: enabled
+ type: primary
+ wait: yes
+ region: IAD
+ loop: "{{ rax.success }}"
+ when: rax.action == 'create'
+
+ - name: Configure servers
+ hosts: web
+ handlers:
+ - name: restart nginx
+ service: name=nginx state=restarted
+
+ tasks:
+ - name: Install nginx
+ apt: pkg=nginx state=latest update_cache=yes cache_valid_time=86400
+ notify:
+ - restart nginx
+
+ - name: Ensure nginx starts on boot
+ service: name=nginx state=started enabled=yes
+
+ - name: Create custom index.html
+ copy: content="{{ inventory_hostname }}" dest=/usr/share/nginx/www/index.html
+ owner=root group=root mode=0644
+
+.. _rackconnect_and_manged_cloud:
+
+RackConnect and Managed Cloud
++++++++++++++++++++++++++++++
+
+When using RackConnect version 2 or Rackspace Managed Cloud there are Rackspace automation tasks that are executed on the servers you create after they are successfully built. If your automation executes before the RackConnect or Managed Cloud automation, you can cause failures and unusable servers.
+
+These examples show creating servers, and ensuring that the Rackspace automation has completed before Ansible continues onwards.
+
+For simplicity, these examples are joined; however, both are only needed when using RackConnect. When only using Managed Cloud, the RackConnect portion can be ignored.
+
+The RackConnect portions only apply to RackConnect version 2.
+
+.. _using_a_control_machine:
+
+Using a Control Machine
+***********************
+
+.. code-block:: yaml
+
+ - name: Create an exact count of servers
+ hosts: localhost
+ gather_facts: False
+ tasks:
+ - name: Server build requests
+ rax:
+ credentials: ~/.raxpub
+ name: web%03d.example.org
+ flavor: performance1-1
+ image: ubuntu-1204-lts-precise-pangolin
+ disk_config: manual
+ region: DFW
+ state: present
+ count: 1
+ exact_count: yes
+ group: web
+ wait: yes
+ register: rax
+
+ - name: Add servers to in memory groups
+ add_host:
+ hostname: "{{ item.name }}"
+ ansible_host: "{{ item.rax_accessipv4 }}"
+ ansible_password: "{{ item.rax_adminpass }}"
+ ansible_user: root
+ rax_id: "{{ item.rax_id }}"
+ groups: web,new_web
+ loop: "{{ rax.success }}"
+ when: rax.action == 'create'
+
+ - name: Wait for rackconnect and managed cloud automation to complete
+ hosts: new_web
+ gather_facts: false
+ tasks:
+ - name: ensure we run all tasks from localhost
+ delegate_to: localhost
+ block:
+            - name: Wait for rackconnect automation to complete
+ rax_facts:
+ credentials: ~/.raxpub
+ id: "{{ rax_id }}"
+ region: DFW
+ register: rax_facts
+ until: rax_facts.ansible_facts['rax_metadata']['rackconnect_automation_status']|default('') == 'DEPLOYED'
+ retries: 30
+ delay: 10
+
+ - name: Wait for managed cloud automation to complete
+ rax_facts:
+ credentials: ~/.raxpub
+ id: "{{ rax_id }}"
+ region: DFW
+ register: rax_facts
+ until: rax_facts.ansible_facts['rax_metadata']['rax_service_level_automation']|default('') == 'Complete'
+ retries: 30
+ delay: 10
+
+ - name: Update new_web hosts with IP that RackConnect assigns
+ hosts: new_web
+ gather_facts: false
+ tasks:
+ - name: Get facts about servers
+ rax_facts:
+ name: "{{ inventory_hostname }}"
+ region: DFW
+ delegate_to: localhost
+ - name: Map some facts
+ set_fact:
+ ansible_host: "{{ rax_accessipv4 }}"
+
+ - name: Base Configure Servers
+ hosts: web
+ roles:
+ - role: users
+
+ - role: openssh
+ opensshd_PermitRootLogin: "no"
+
+ - role: ntp
+
+.. _using_ansible_pull:
+
+Using Ansible Pull
+******************
+
+.. code-block:: yaml
+
+ ---
+ - name: Ensure Rackconnect and Managed Cloud Automation is complete
+ hosts: all
+ tasks:
+ - name: ensure we run all tasks from localhost
+ delegate_to: localhost
+ block:
+ - name: Check for completed bootstrap
+ stat:
+ path: /etc/bootstrap_complete
+ register: bootstrap
+
+ - name: Get region
+ command: xenstore-read vm-data/provider_data/region
+ register: rax_region
+ when: bootstrap.stat.exists != True
+
+ - name: Wait for rackconnect automation to complete
+ uri:
+ url: "https://{{ rax_region.stdout|trim }}.api.rackconnect.rackspace.com/v1/automation_status?format=json"
+ return_content: yes
+ register: automation_status
+ when: bootstrap.stat.exists != True
+ until: automation_status['automation_status']|default('') == 'DEPLOYED'
+ retries: 30
+ delay: 10
+
+ - name: Wait for managed cloud automation to complete
+ wait_for:
+ path: /tmp/rs_managed_cloud_automation_complete
+ delay: 10
+ when: bootstrap.stat.exists != True
+
+ - name: Set bootstrap completed
+ file:
+ path: /etc/bootstrap_complete
+ state: touch
+ owner: root
+ group: root
+ mode: 0400
+
+ - name: Base Configure Servers
+ hosts: all
+ roles:
+ - role: users
+
+ - role: openssh
+ opensshd_PermitRootLogin: "no"
+
+ - role: ntp
+
+.. _using_ansible_pull_with_xenstore:
+
+Using Ansible Pull with XenStore
+********************************
+
+.. code-block:: yaml
+
+ ---
+ - name: Ensure Rackconnect and Managed Cloud Automation is complete
+ hosts: all
+ tasks:
+ - name: Check for completed bootstrap
+ stat:
+ path: /etc/bootstrap_complete
+ register: bootstrap
+
+ - name: Wait for rackconnect_automation_status xenstore key to exist
+ command: xenstore-exists vm-data/user-metadata/rackconnect_automation_status
+ register: rcas_exists
+ when: bootstrap.stat.exists != True
+ failed_when: rcas_exists.rc|int > 1
+ until: rcas_exists.rc|int == 0
+ retries: 30
+ delay: 10
+
+ - name: Wait for rackconnect automation to complete
+ command: xenstore-read vm-data/user-metadata/rackconnect_automation_status
+ register: rcas
+ when: bootstrap.stat.exists != True
+ until: rcas.stdout|replace('"', '') == 'DEPLOYED'
+ retries: 30
+ delay: 10
+
+ - name: Wait for rax_service_level_automation xenstore key to exist
+ command: xenstore-exists vm-data/user-metadata/rax_service_level_automation
+ register: rsla_exists
+ when: bootstrap.stat.exists != True
+ failed_when: rsla_exists.rc|int > 1
+ until: rsla_exists.rc|int == 0
+ retries: 30
+ delay: 10
+
+ - name: Wait for managed cloud automation to complete
+          command: xenstore-read vm-data/user-metadata/rax_service_level_automation
+          register: rsla
+          when: bootstrap.stat.exists != True
+          until: rsla.stdout|replace('"', '') == 'Complete'
+ retries: 30
+ delay: 10
+
+ - name: Set bootstrap completed
+ file:
+ path: /etc/bootstrap_complete
+ state: touch
+ owner: root
+ group: root
+ mode: 0400
+
+ - name: Base Configure Servers
+ hosts: all
+ roles:
+ - role: users
+
+ - role: openssh
+ opensshd_PermitRootLogin: "no"
+
+ - role: ntp
+
+.. _advanced_usage:
+
+Advanced Usage
+``````````````
+
+.. _awx_autoscale:
+
+Autoscaling with Tower
+++++++++++++++++++++++
+
+:ref:`ansible_tower` also contains a very nice feature for auto-scaling use cases.
+In this mode, a simple curl script can call a defined URL and the server will "dial out" to the requester
+and configure an instance that is spinning up. This can be a great way to reconfigure ephemeral nodes.
+See the Tower documentation for more details.
+
+A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded
+and less information has to be shared with remote hosts.
+
+.. _pending_information:
+
+Orchestration in the Rackspace Cloud
+++++++++++++++++++++++++++++++++++++
+
+Ansible is a powerful orchestration tool, and rax modules give you the opportunity to orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other piece of software in an environment. Complex deployments might have previously required manual manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additional nodes contingent on the current number of running nodes, or the configuration of a clustered application dependent on the number of nodes with common metadata. One could automate the following scenarios, for example (a sketch of the first scenario follows the list):
+
+* Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool
+* Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed
+* A procedure where app log files are uploaded to a central location, like Cloud Files, before a node is decommissioned
+* Servers and load balancers that have DNS records created and destroyed on creation and decommissioning, respectively
+
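+A sketch of the first scenario's opening step, draining one node out of a Cloud Load Balancer before updating it (the ID variables are placeholders):
+
+.. code-block:: yaml
+
+    - name: Drain a node from the load balancer
+      rax_clb_nodes:
+        credentials: ~/.raxpub
+        load_balancer_id: "{{ clb_id }}"
+        node_id: "{{ node_id }}"
+        condition: draining
+        wait: yes
+        region: IAD
+      delegate_to: localhost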
+
+
+
diff --git a/docs/docsite/rst/scenario_guides/guide_scaleway.rst b/docs/docsite/rst/scenario_guides/guide_scaleway.rst
new file mode 100644
index 00000000..77af9ba7
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_scaleway.rst
@@ -0,0 +1,293 @@
+.. _guide_scaleway:
+
+**************
+Scaleway Guide
+**************
+
+.. _scaleway_introduction:
+
+Introduction
+============
+
+`Scaleway <https://scaleway.com>`_ is a cloud provider supported by Ansible (version 2.6 or higher) via a dynamic inventory plugin and modules.
+Those modules are:
+
+- :ref:`scaleway_sshkey_module`: adds a public SSH key from a file or value to the Scaleway infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized_keys.
+- :ref:`scaleway_compute_module`: manages servers on Scaleway. You can use this module to create, restart and delete servers.
+- :ref:`scaleway_volume_module`: manages volumes on Scaleway.
+
+.. note::
+ This guide assumes you are familiar with Ansible and how it works.
+ If you're not, have a look at :ref:`ansible_documentation` before getting started.
+
+.. _scaleway_requirements:
+
+Requirements
+============
+
+The Scaleway modules and inventory plugin connect to the Scaleway API using the `Scaleway REST API <https://developer.scaleway.com>`_.
+To use the modules and inventory plugin you'll need a Scaleway API token.
+You can generate an API token via the Scaleway console `here <https://cloud.scaleway.com/#/credentials>`__.
+The simplest way to authenticate yourself is to set the Scaleway API token in an environment variable:
+
+.. code-block:: bash
+
+ $ export SCW_TOKEN=00000000-1111-2222-3333-444444444444
+
+If you're not comfortable exporting your API token, you can pass it as a parameter to the modules using the ``api_token`` argument.
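+
+For example, here is a minimal sketch of passing the token directly to one of the modules covered below (the token would come from somewhere secure, such as Ansible Vault; the variable name is illustrative):
+
+.. code-block:: yaml
+
+    - name: Add an SSH key, authenticating with the api_token parameter
+      scaleway_sshkey:
+        ssh_pub_key: "ssh-rsa AAAA..."
+        state: present
+        api_token: "{{ my_scw_token }}"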
+
+If you want to use a new SSH keypair in this tutorial, you can generate it to ``./id_rsa`` and ``./id_rsa.pub`` as:
+
+.. code-block:: bash
+
+ $ ssh-keygen -t rsa -f ./id_rsa
+
+If you want to use an existing keypair, just copy the private and public key over to the playbook directory.
+
+.. _scaleway_add_sshkey:
+
+How to add an SSH key?
+======================
+
+Connections to Scaleway Compute nodes use Secure Shell (SSH).
+SSH keys are stored at the account level, which means that you can re-use the same SSH key on multiple nodes.
+The first step to configure Scaleway compute resources is to have at least one SSH key configured.
+
+:ref:`scaleway_sshkey_module` is a module that manages SSH keys on your Scaleway account.
+You can add an SSH key to your account by including the following task in a playbook:
+
+.. code-block:: yaml
+
+ - name: "Add SSH key"
+ scaleway_sshkey:
+ ssh_pub_key: "ssh-rsa AAAA..."
+ state: "present"
+
+The ``ssh_pub_key`` parameter contains your SSH public key as a string. Here is an example inside a playbook:
+
+
+.. code-block:: yaml
+
+ - name: Test SSH key lifecycle on a Scaleway account
+ hosts: localhost
+ gather_facts: no
+ environment:
+ SCW_API_KEY: ""
+
+ tasks:
+
+ - scaleway_sshkey:
+ ssh_pub_key: "ssh-rsa AAAAB...424242 developer@example.com"
+ state: present
+ register: result
+
+ - assert:
+ that:
+ - result is success and result is changed
+
+.. _scaleway_create_instance:
+
+How to create a compute instance?
+=================================
+
+Now that we have an SSH key configured, the next step is to spin up a server!
+:ref:`scaleway_compute_module` is a module that can create, update and delete Scaleway compute instances:
+
+.. code-block:: yaml
+
+ - name: Create a server
+ scaleway_compute:
+ name: foobar
+ state: present
+ image: 00000000-1111-2222-3333-444444444444
+ organization: 00000000-1111-2222-3333-444444444444
+ region: ams1
+ commercial_type: START1-S
+
+Here are the parameter details for the example shown above:
+
+- ``name`` is the name of the instance (the one that will show up in your web console).
+- ``image`` is the UUID of the system image you would like to use.
+ A list of all images is available for each availability zone.
+- ``organization`` represents the organization that your account is attached to.
+- ``region`` represents the Availability Zone in which your instance is located (for this example, ``par1`` or ``ams1``).
+- ``commercial_type`` represents the name of the commercial offer.
+ You can check out the Scaleway pricing page to find which instance is right for you.
+
+Take a look at this short playbook to see a working example using ``scaleway_compute``:
+
+.. code-block:: yaml
+
+ - name: Test compute instance lifecycle on a Scaleway account
+ hosts: localhost
+ gather_facts: no
+ environment:
+ SCW_API_KEY: ""
+
+ tasks:
+
+ - name: Create a server
+ register: server_creation_task
+ scaleway_compute:
+ name: foobar
+ state: present
+ image: 00000000-1111-2222-3333-444444444444
+ organization: 00000000-1111-2222-3333-444444444444
+ region: ams1
+ commercial_type: START1-S
+ wait: true
+
+ - debug: var=server_creation_task
+
+ - assert:
+ that:
+ - server_creation_task is success
+ - server_creation_task is changed
+
+ - name: Run it
+ scaleway_compute:
+ name: foobar
+ state: running
+ image: 00000000-1111-2222-3333-444444444444
+ organization: 00000000-1111-2222-3333-444444444444
+ region: ams1
+ commercial_type: START1-S
+ wait: true
+ tags:
+ - web_server
+ register: server_run_task
+
+ - debug: var=server_run_task
+
+ - assert:
+ that:
+ - server_run_task is success
+ - server_run_task is changed
+
+.. _scaleway_dynamic_inventory_tutorial:
+
+Dynamic Inventory Plugin
+========================
+
+Ansible ships with :ref:`scaleway_inventory`.
+You can now get a complete inventory of your Scaleway resources through this plugin and filter it on
+different parameters (``regions`` and ``tags`` are currently supported).
+
+Let's create an example!
+Suppose that we want to get all hosts that have the tag ``web_server``.
+Create a file named ``scaleway_inventory.yml`` with the following content:
+
+.. code-block:: yaml
+
+ plugin: scaleway
+ regions:
+ - ams1
+ - par1
+ tags:
+ - web_server
+
+This inventory means that we want all hosts that have the tag ``web_server`` in the zones ``ams1`` and ``par1``.
+Once you have configured this file, you can get the information using the following command:
+
+.. code-block:: bash
+
+ $ ansible-inventory --list -i scaleway_inventory.yml
+
+The output will be:
+
+.. code-block:: json
+
+ {
+ "_meta": {
+ "hostvars": {
+ "dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d": {
+ "ansible_verbosity": 6,
+ "arch": "x86_64",
+ "commercial_type": "START1-S",
+ "hostname": "foobar",
+ "ipv4": "192.0.2.1",
+ "organization": "00000000-1111-2222-3333-444444444444",
+ "state": "running",
+ "tags": [
+ "web_server"
+ ]
+ }
+ }
+ },
+ "all": {
+ "children": [
+ "ams1",
+ "par1",
+ "ungrouped",
+ "web_server"
+ ]
+ },
+ "ams1": {},
+ "par1": {
+ "hosts": [
+ "dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d"
+ ]
+ },
+ "ungrouped": {},
+ "web_server": {
+ "hosts": [
+ "dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d"
+ ]
+ }
+ }
+
+As you can see, we get different groups of hosts.
+``par1`` and ``ams1`` are groups based on location.
+``web_server`` is a group based on a tag.
+
+If a filter parameter is not defined, the plugin assumes all possible values are wanted.
+This means that for each tag that exists on your Scaleway compute nodes, a group based on each tag will be created.
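+
+As a sketch of how these generated groups can be used, a play can target the ``web_server`` group directly (assuming SSH connectivity to the nodes is already set up):
+
+.. code-block:: yaml
+
+    - hosts: web_server
+      gather_facts: no
+      tasks:
+        - name: Show which Scaleway host this play is running against
+          debug:
+            var: inventory_hostname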
+
+Scaleway S3 object storage
+==========================
+
+`Object Storage <https://www.scaleway.com/object-storage>`_ allows you to store any kind of objects (documents, images, videos, and so on).
+As the Scaleway API is S3 compatible, Ansible supports it natively through the modules: :ref:`s3_bucket_module`, :ref:`aws_s3_module`.
+
+You can find many examples in the `scaleway_s3 integration tests <https://github.com/ansible/ansible-legacy-tests/tree/devel/test/legacy/roles/scaleway_s3>`_.
+
+.. code-block:: yaml+jinja
+
+ - hosts: myserver
+ vars:
+ scaleway_region: nl-ams
+ s3_url: https://s3.nl-ams.scw.cloud
+ environment:
+ # AWS_ACCESS_KEY matches your scaleway organization id available at https://cloud.scaleway.com/#/account
+ AWS_ACCESS_KEY: 00000000-1111-2222-3333-444444444444
+ # AWS_SECRET_KEY matches a secret token that you can retrieve at https://cloud.scaleway.com/#/credentials
+ AWS_SECRET_KEY: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
+ module_defaults:
+ group/aws:
+ s3_url: '{{ s3_url }}'
+ region: '{{ scaleway_region }}'
+ tasks:
+ # use a fact instead of a variable, otherwise the template is evaluated each time the variable is used
+ - set_fact:
+ bucket_name: "{{ 99999999 | random | to_uuid }}"
+
+ # "requester_pays:" is mandatory because Scaleway doesn't implement related API
+ # another way is to use aws_s3 and "mode: create" !
+ - s3_bucket:
+ name: '{{ bucket_name }}'
+ requester_pays:
+
+ - name: Another way to create the bucket
+ aws_s3:
+ bucket: '{{ bucket_name }}'
+ mode: create
+ encrypt: false
+ register: bucket_creation_check
+
+ - name: Add an object to the bucket
+ aws_s3:
+ mode: put
+ bucket: '{{ bucket_name }}'
+ src: /tmp/test.txt # needs to be created beforehand
+ object: test.txt
+ encrypt: false # server side encryption must be disabled
diff --git a/docs/docsite/rst/scenario_guides/guide_vagrant.rst b/docs/docsite/rst/scenario_guides/guide_vagrant.rst
new file mode 100644
index 00000000..f49477b0
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_vagrant.rst
@@ -0,0 +1,136 @@
+Vagrant Guide
+=============
+
+.. _vagrant_intro:
+
+Introduction
+````````````
+
+`Vagrant <https://www.vagrantup.com/>`_ is a tool to manage virtual machine
+environments, and allows you to configure and use reproducible work
+environments on top of various virtualization and cloud platforms.
+It also has integration with Ansible as a provisioner for these virtual
+machines, and the two tools work together well.
+
+This guide will describe how to use Vagrant 1.7+ and Ansible together.
+
+If you're not familiar with Vagrant, you should visit `the documentation
+<https://www.vagrantup.com/docs/>`_.
+
+This guide assumes that you already have Ansible installed and working.
+Running from a Git checkout is fine. Follow the :ref:`installation_guide`
+guide for more information.
+
+.. _vagrant_setup:
+
+Vagrant Setup
+`````````````
+
+The first step once you've installed Vagrant is to create a ``Vagrantfile``
+and customize it to suit your needs. This is covered in detail in the Vagrant
+documentation, but here is a quick example that includes a section to use the
+Ansible provisioner to manage a single machine:
+
+.. code-block:: ruby
+
+ # This guide is optimized for Vagrant 1.8 and above.
+ # Older versions of Vagrant put less info in the inventory they generate.
+ Vagrant.require_version ">= 1.8.0"
+
+ Vagrant.configure(2) do |config|
+
+ config.vm.box = "ubuntu/bionic64"
+
+ config.vm.provision "ansible" do |ansible|
+ ansible.verbose = "v"
+ ansible.playbook = "playbook.yml"
+ end
+ end
+
+Notice the ``config.vm.provision`` section that refers to an Ansible playbook
+called ``playbook.yml`` in the same directory as the ``Vagrantfile``. Vagrant
+runs the provisioner once the virtual machine has booted and is ready for SSH
+access.
+
+There are a lot of Ansible options you can configure in your ``Vagrantfile``.
+Visit the `Ansible Provisioner documentation
+<https://www.vagrantup.com/docs/provisioning/ansible.html>`_ for more
+information. To start the VM and provision it, run:
+
+.. code-block:: bash
+
+ $ vagrant up
+
+This will start the VM, and run the provisioning playbook (on the first VM
+startup).
+
+
+To re-run a playbook on an existing VM, just run:
+
+.. code-block:: bash
+
+ $ vagrant provision
+
+This will re-run the playbook against the existing VM.
+
+Note that having the ``ansible.verbose`` option enabled will instruct Vagrant
+to show the full ``ansible-playbook`` command used behind the scenes, as
+illustrated by this example:
+
+.. code-block:: bash
+
+ $ PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --limit="default" --inventory-file=/home/someone/coding-in-a-project/.vagrant/provisioners/ansible/inventory -v playbook.yml
+
+This information can be quite useful to debug integration issues and can also
+be used to manually execute Ansible from a shell, as explained in the next
+section.
+
+.. _running_ansible:
+
+Running Ansible Manually
+````````````````````````
+
+Sometimes you may want to run Ansible manually against the machines. This is
+faster than re-running ``vagrant provision`` and pretty easy to do.
+
+With our ``Vagrantfile`` example, Vagrant automatically creates an Ansible
+inventory file in ``.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory``.
+This inventory is configured according to the SSH tunnel that Vagrant
+automatically creates. A typical automatically-created inventory file for a
+single machine environment may look something like this:
+
+.. code-block:: none
+
+ # Generated by Vagrant
+
+ default ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='/home/someone/coding-in-a-project/.vagrant/machines/default/virtualbox/private_key'
+
+If you want to run Ansible manually, make sure to pass the
+``ansible`` or ``ansible-playbook`` commands the correct arguments, at least
+for the *inventory*:
+
+.. code-block:: bash
+
+ $ ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml
+
+Advanced Usage
+```````````````
+
+The "Tips and Tricks" chapter of the `Ansible Provisioner documentation
+<https://www.vagrantup.com/docs/provisioning/ansible.html>`_ provides detailed information about more advanced Ansible features like:
+
+ - how to execute a playbook in parallel within a multi-machine environment
+ - how to integrate a local ``ansible.cfg`` configuration file
+
+.. seealso::
+
+ `Vagrant Home <https://www.vagrantup.com/>`_
+ The Vagrant homepage with downloads
+ `Vagrant Documentation <https://www.vagrantup.com/docs/>`_
+ Vagrant Documentation
+ `Ansible Provisioner <https://www.vagrantup.com/docs/provisioning/ansible.html>`_
+ The Vagrant documentation for the Ansible provisioner
+ `Vagrant Issue Tracker <https://github.com/hashicorp/vagrant/issues?q=is%3Aopen+is%3Aissue+label%3Aprovisioners%2Fansible>`_
+ The open issues for the Ansible provisioner in the Vagrant project
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
diff --git a/docs/docsite/rst/scenario_guides/guide_vmware.rst b/docs/docsite/rst/scenario_guides/guide_vmware.rst
new file mode 100644
index 00000000..b31553d5
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_vmware.rst
@@ -0,0 +1,33 @@
+.. _vmware_ansible:
+
+******************
+VMware Guide
+******************
+
+Welcome to the Ansible for VMware Guide!
+
+The purpose of this guide is to teach you everything you need to know about using Ansible with VMware.
+
+To get started, please select one of the following topics.
+
+.. toctree::
+ :maxdepth: 1
+
+ vmware_scenarios/vmware_intro
+ vmware_scenarios/vmware_concepts
+ vmware_scenarios/vmware_requirements
+ vmware_scenarios/vmware_inventory
+ vmware_scenarios/vmware_inventory_vm_attributes
+ vmware_scenarios/vmware_inventory_hostnames
+ vmware_scenarios/vmware_inventory_filters
+ vmware_scenarios/vmware_scenarios
+ vmware_scenarios/vmware_troubleshooting
+ vmware_scenarios/vmware_external_doc_links
+ vmware_scenarios/faq
+.. comments look like this - start with two dots
+.. getting_started content not ready
+.. vmware_scenarios/vmware_getting_started
+.. module index page not ready
+.. vmware_scenarios/vmware_module_reference
+.. always exclude the template file
+.. vmware_scenarios/vmware_scenario_1
diff --git a/docs/docsite/rst/scenario_guides/guide_vultr.rst b/docs/docsite/rst/scenario_guides/guide_vultr.rst
new file mode 100644
index 00000000..c5d5adec
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_vultr.rst
@@ -0,0 +1,171 @@
+Vultr Guide
+===========
+
+Ansible offers a set of modules to interact with the `Vultr <https://www.vultr.com>`_ cloud platform.
+
+This set of modules forms a framework that allows one to easily manage and orchestrate one's infrastructure on the Vultr cloud platform.
+
+
+Requirements
+------------
+
+There are no technical requirements beyond an already created Vultr account.
+
+
+Configuration
+-------------
+
+The Vultr modules offer a flexible way to handle configuration.
+
+Configuration is read in the following order:
+
+- Environment Variables (eg. ``VULTR_API_KEY``, ``VULTR_API_TIMEOUT``)
+- File specified by environment variable ``VULTR_API_CONFIG``
+- ``vultr.ini`` file located in current working directory
+- ``$HOME/.vultr.ini``
+
+
+The INI file is structured this way:
+
+.. code-block:: ini
+
+ [default]
+ key = MY_API_KEY
+ timeout = 60
+
+ [personal_account]
+ key = MY_PERSONAL_ACCOUNT_API_KEY
+ timeout = 30
+
+
+If ``VULTR_API_ACCOUNT`` environment variable or ``api_account`` module parameter is not specified, modules will look for the section named "default".
+
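+For example, a task can select the ``personal_account`` section from the INI file above through the ``api_account`` parameter (a sketch using the block storage module shown later in this guide):
+
+.. code-block:: yaml
+
+    - name: Create a volume using the personal account section
+      vultr_block_storage:
+        api_account: personal_account
+        name: my_disk
+        size: 10
+        region: New Jersey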
+
+Authentication
+--------------
+
+Before using the Ansible modules to interact with Vultr, one needs an API key.
+If you don't own one yet, log in to `Vultr <https://www.vultr.com>`_, go to Account, then API, and enable the API; the API key should then show up.
+
+Ensure you allow the usage of the API key from the proper IP addresses.
+
+Refer to the Configuration section to find out where to put this information.
+
+To check that everything is working properly run the following command:
+
+.. code-block:: console
+
+ #> VULTR_API_KEY=XXX ansible -m vultr_account_info localhost
+ localhost | SUCCESS => {
+ "changed": false,
+ "vultr_account_info": {
+ "balance": -8.9,
+ "last_payment_amount": -10.0,
+ "last_payment_date": "2018-07-21 11:34:46",
+ "pending_charges": 6.0
+ },
+ "vultr_api": {
+ "api_account": "default",
+ "api_endpoint": "https://api.vultr.com",
+ "api_retries": 5,
+ "api_timeout": 60
+ }
+ }
+
+
+If similar output displays, then everything is set up properly; otherwise, please ensure that the proper ``VULTR_API_KEY`` has been specified and that the Access Control settings on the Vultr > Account > API page are accurate.
+
+
+Usage
+-----
+
+Since `Vultr <https://www.vultr.com>`_ offers a public API, the execution of the modules that manage the infrastructure on their platform happens on localhost. This translates to:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: localhost
+ tasks:
+ - name: Create a 10G volume
+ vultr_block_storage:
+ name: my_disk
+ size: 10
+ region: New Jersey
+
+
+From that point on, the only limit is your creativity. Make sure to read the documentation of the `available modules <https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html#vultr>`_.
+
+
+Dynamic Inventory
+-----------------
+
+Ansible provides a dynamic inventory plugin for `Vultr <https://www.vultr.com>`_.
+The configuration process is exactly the same as the one for the modules.
+
+To be able to use it you need to enable it first by specifying the following in the ``ansible.cfg`` file:
+
+.. code-block:: ini
+
+ [inventory]
+ enable_plugins=vultr
+
+Then provide a configuration file to be used with the plugin. The minimal configuration file looks like this:
+
+.. code-block:: yaml
+
+ ---
+ plugin: vultr
+
+To list the available hosts one can simply run:
+
+.. code-block:: console
+
+ #> ansible-inventory -i vultr.yml --list
+
+
+For example, this allows you to take action on nodes grouped by location or OS name:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: Amsterdam
+ tasks:
+ - name: Rebooting the machine
+ shell: reboot
+ become: True
+
+
+Integration tests
+-----------------
+
+Ansible includes integration tests for all Vultr modules.
+
+These tests are meant to run against the public Vultr API, which is why they require a valid key to access the API.
+
+Prepare the test setup:
+
+.. code-block:: shell
+
+ $ cd ansible # location of the Ansible source checkout
+ $ source ./hacking/env-setup
+
+Set the Vultr API key:
+
+.. code-block:: shell
+
+ $ cd test/integration
+ $ cp cloud-config-vultr.ini.template cloud-config-vultr.ini
+ $ vi cloud-config-vultr.ini
+
+Run all Vultr tests:
+
+.. code-block:: shell
+
+ $ ansible-test integration cloud/vultr/ -v --diff --allow-unsupported
+
+
+To run a specific test, for example vultr_account_info:
+
+.. code-block:: shell
+
+ $ ansible-test integration cloud/vultr/vultr_account_info -v --diff --allow-unsupported
diff --git a/docs/docsite/rst/scenario_guides/guides.rst b/docs/docsite/rst/scenario_guides/guides.rst
new file mode 100644
index 00000000..2ff65bbc
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guides.rst
@@ -0,0 +1,43 @@
+:orphan:
+
+.. unified index page included for backwards compatibility
+
+******************
+Scenario Guides
+******************
+
+The guides in this section cover integrating Ansible with a variety of
+platforms, products, and technologies. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Public Cloud Guides
+
+ guide_alicloud
+ guide_aws
+ guide_cloudstack
+ guide_gce
+ guide_azure
+ guide_online
+ guide_oracle
+ guide_packet
+ guide_rax
+ guide_scaleway
+ guide_vultr
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Network Technology Guides
+
+ guide_aci
+ guide_meraki
+ guide_infoblox
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Virtualization & Containerization Guides
+
+ guide_docker
+ guide_kubernetes
+ guide_vagrant
+ guide_vmware
diff --git a/docs/docsite/rst/scenario_guides/network_guides.rst b/docs/docsite/rst/scenario_guides/network_guides.rst
new file mode 100644
index 00000000..2b538ff0
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/network_guides.rst
@@ -0,0 +1,16 @@
+.. _network_guides:
+
+*************************
+Network Technology Guides
+*************************
+
+The guides in this section cover using Ansible with specific network technologies. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+
+.. toctree::
+ :maxdepth: 1
+
+ guide_aci
+ guide_meraki
+ guide_infoblox
+
+To learn more about Network Automation with Ansible, see :ref:`network_getting_started` and :ref:`network_advanced`.
diff --git a/docs/docsite/rst/scenario_guides/scenario_template.rst b/docs/docsite/rst/scenario_guides/scenario_template.rst
new file mode 100644
index 00000000..14695bed
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/scenario_template.rst
@@ -0,0 +1,53 @@
+:orphan:
+
+.. _scenario_template:
+
+*************************************
+Sample scenario for Ansible platforms
+*************************************
+
+*Use this rst file as a starting point to create a scenario guide for your platform. The sections below are suggestions on what should be in a scenario guide.*
+
+Introductory paragraph.
+
+.. contents::
+ :local:
+
+Prerequisites
+=============
+
+Describe the requirements and assumptions for this scenario. This should include applicable subsections for hardware, software, and any other caveats to using the scenarios in this guide.
+
+Credentials and authenticating
+==============================
+
+Describe credential requirements and how to authenticate to this platform.
+
+Using dynamic inventory
+=========================
+
+If applicable, describe how to use a dynamic inventory plugin for this platform.
+
+
+Example description
+===================
+
+Description and code here. Change the section header to something descriptive about this example, such as "Renaming a virtual machine". The goal is that this is the text someone would search for to find your example.
+
+
+Example output
+--------------
+
+What the user should expect to see.
+
+
+Troubleshooting
+---------------
+
+What to look for if it breaks.
+
+
+Conclusion and where to go next
+===============================
+
+Recap of important points. For more information please see: links.
diff --git a/docs/docsite/rst/scenario_guides/virt_guides.rst b/docs/docsite/rst/scenario_guides/virt_guides.rst
new file mode 100644
index 00000000..b623799f
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/virt_guides.rst
@@ -0,0 +1,15 @@
+.. _virtualization_guides:
+
+******************************************
+Virtualization and Containerization Guides
+******************************************
+
+The guides in this section cover integrating Ansible with popular tools for creating virtual machines and containers. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+
+.. toctree::
+ :maxdepth: 1
+
+ guide_docker
+ guide_kubernetes
+ guide_vagrant
+ guide_vmware
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/faq.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/faq.rst
new file mode 100644
index 00000000..6987df0b
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/faq.rst
@@ -0,0 +1,26 @@
+.. _vmware_faq:
+
+******************
+Ansible VMware FAQ
+******************
+
+vmware_guest
+============
+
+Can I deploy a virtual machine on a standalone ESXi server?
+------------------------------------------------------------
+
+Yes. ``vmware_guest`` can deploy a virtual machine with required settings on a standalone ESXi server.
+However, you must have a paid license to deploy virtual machines this way. If you are using the free version, the API is read-only.
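+
+A minimal sketch, assuming a standalone ESXi host reachable as ``esxi01.example.com`` (the host name, datastore, and sizes are illustrative; ``ha-datacenter`` is the implicit datacenter name on a standalone host):
+
+.. code-block:: yaml
+
+    - name: Deploy a virtual machine on a standalone ESXi server
+      vmware_guest:
+        hostname: esxi01.example.com
+        username: root
+        password: "{{ esxi_password }}"
+        validate_certs: no
+        datacenter: ha-datacenter
+        name: testvm_1
+        guest_id: centos7_64Guest
+        disk:
+          - size_gb: 10
+            type: thin
+            datastore: datastore1
+        hardware:
+          memory_mb: 512
+          num_cpus: 1
+        state: poweredoff
+      delegate_to: localhost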
+
+Is ``/vm`` required for the ``vmware_guest`` module?
+-------------------------------------------------------
+
+Prior to Ansible version 2.5, ``folder`` was an optional parameter with a default value of ``/vm``.
+
+The folder parameter was used to discover information about virtual machines in the given infrastructure.
+
+Starting with Ansible version 2.5, ``folder`` is still an optional parameter, but it has no default value.
+This parameter is now used to identify a user's virtual machine when multiple virtual machines or virtual
+machine templates are found with the same name. VMware does not restrict the system administrator from creating virtual
+machines with the same name.
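+
+As a sketch, two virtual machines named ``testvm`` in different folders can be told apart by passing the folder path (the folder and variable names below are illustrative):
+
+.. code-block:: yaml
+
+    - name: Power on the copy of testvm that lives in the staging folder
+      vmware_guest:
+        hostname: "{{ vcenter_server }}"
+        username: "{{ vcenter_user }}"
+        password: "{{ vcenter_pass }}"
+        validate_certs: no
+        datacenter: DC1
+        folder: /DC1/vm/staging
+        name: testvm
+        state: poweredon
+      delegate_to: localhost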
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_clone_template.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_clone_template.rst
new file mode 100644
index 00000000..2c7647ef
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_clone_template.rst
@@ -0,0 +1,222 @@
+.. _vmware_guest_from_template:
+
+****************************************
+Deploy a virtual machine from a template
+****************************************
+
+.. contents:: Topics
+
+Introduction
+============
+
+This guide will show you how to utilize Ansible to clone a virtual machine from already existing VMware template or existing VMware guest.
+
+Scenario Requirements
+=====================
+
+* Software
+
+ * Ansible 2.5 or later must be installed
+
+ * The Python module ``Pyvmomi`` must be installed on the Ansible control node (or target host if not executing against localhost)
+
+ * Installing the latest ``Pyvmomi`` via ``pip`` is recommended (as the OS provided packages are usually out of date and incompatible)
+
+* Hardware
+
+ * vCenter Server with at least one ESXi server
+
+* Access / Credentials
+
+ * Ansible (or the target server) must have network access to either the vCenter server or the ESXi server you will be deploying to
+
+ * Username and Password
+
+ * Administrator user with following privileges
+
+ - ``Datastore.AllocateSpace`` on the destination datastore or datastore folder
+ - ``Network.Assign`` on the network to which the virtual machine will be assigned
+ - ``Resource.AssignVMToPool`` on the destination host, cluster, or resource pool
+ - ``VirtualMachine.Config.AddNewDisk`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Config.AddRemoveDevice`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Interact.PowerOn`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Inventory.CreateFromExisting`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Provisioning.Clone`` on the virtual machine you are cloning
+ - ``VirtualMachine.Provisioning.Customize`` on the virtual machine or virtual machine folder if you are customizing the guest operating system
+ - ``VirtualMachine.Provisioning.DeployTemplate`` on the template you are using
+ - ``VirtualMachine.Provisioning.ReadCustSpecs`` on the root vCenter Server if you are customizing the guest operating system
+
+ Depending on your requirements, you could also need one or more of the following privileges:
+
+ - ``VirtualMachine.Config.CPUCount`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Config.Memory`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Config.DiskExtend`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Config.Annotation`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Config.AdvancedConfig`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Config.EditDevice`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Config.Resource`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Config.Settings`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Config.UpgradeVirtualHardware`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Interact.SetCDMedia`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Interact.SetFloppyMedia`` on the datacenter or virtual machine folder
+ - ``VirtualMachine.Interact.DeviceConnection`` on the datacenter or virtual machine folder
+
+Assumptions
+===========
+
+- All variable names and VMware object names are case sensitive
+- VMware allows the creation of virtual machines and templates with the same name across datacenters and within datacenters
+- You need to use Python 2.7.9 or later in order to use the ``validate_certs`` option, as this version is capable of changing the SSL verification behaviour
+
+Caveats
+=======
+
+- Hosts in the ESXi cluster must have access to the datastore that the template resides on.
+- Multiple templates with the same name will cause module failures.
+- In order to utilize Guest Customization, VMware Tools must be installed on the template. For Linux, the ``open-vm-tools`` package is recommended, and it requires that ``Perl`` be installed.
+
+
+Example Description
+===================
+
+In this use case / example, we will be selecting a virtual machine template and cloning it into a specific folder in our Datacenter / Cluster. The following Ansible playbook showcases the basic parameters that are needed for this.
+
+.. code-block:: yaml
+
+ ---
+ - name: Create a VM from a template
+ hosts: localhost
+ gather_facts: no
+ tasks:
+ - name: Clone the template
+ vmware_guest:
+ hostname: "{{ vcenter_ip }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: False
+ name: testvm_2
+ template: template_el7
+ datacenter: "{{ datacenter_name }}"
+ folder: /DC1/vm
+ state: poweredon
+ cluster: "{{ cluster_name }}"
+ wait_for_ip_address: yes
+
+
+Since Ansible utilizes the VMware API to perform actions, in this use case we will be connecting directly to the API from our localhost. This means that our playbooks will not be running from the vCenter or ESXi Server. We do not necessarily need to collect facts about our localhost, so the ``gather_facts`` parameter will be disabled. You can run these modules against another server that would then connect to the API if your localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server.
+
+To begin, there are a few bits of information we will need. First and foremost is the hostname of the ESXi server or vCenter server. After this, you will need the username and password for this server. For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using :ref:`ansible-vault` or using `Ansible Tower credentials <https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html>`_. If your vCenter or ESXi server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the ``validate_certs`` parameter. To do this you need to set ``validate_certs=False`` in your playbook.
+
+Now you need to supply the information about the virtual machine which will be created. Give your virtual machine a name, one that conforms to all VMware requirements for naming conventions. Next, select the display name of the template from which you want to clone the new virtual machine. This must match what's displayed in the VMware Web UI exactly. Then you can specify a folder to place this new virtual machine in. This path can either be a relative path or a full path to the folder including the Datacenter. You may need to specify a state for the virtual machine. This simply tells the module which action you want to take; in this case you will ensure that the virtual machine exists and is powered on. An optional parameter is ``wait_for_ip_address``, which tells Ansible to wait until the virtual machine has fully booted and VMware Tools is running before completing this task.
+
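+If you prefer to keep the connection details out of the playbook, here is a sketch of a vars file you could load with ``vars_files`` (the file name and values are illustrative; encrypt the file with ``ansible-vault`` in practice):
+
+.. code-block:: yaml
+
+    # vcenter_vars.yml -- illustrative values only
+    vcenter_ip: vcenter.example.com
+    vcenter_username: administrator@vsphere.local
+    vcenter_password: secret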
+
+What to expect
+--------------
+
+- You will see a bit of JSON output after this playbook completes. This output shows various parameters that are returned from the module and from vCenter about the newly created VM.
+
+.. code-block:: json
+
+ {
+ "changed": true,
+ "instance": {
+ "annotation": "",
+ "current_snapshot": null,
+ "customvalues": {},
+ "guest_consolidation_needed": false,
+ "guest_question": null,
+ "guest_tools_status": "guestToolsNotRunning",
+ "guest_tools_version": "0",
+ "hw_cores_per_socket": 1,
+ "hw_datastores": [
+ "ds_215"
+ ],
+ "hw_esxi_host": "192.0.2.44",
+ "hw_eth0": {
+ "addresstype": "assigned",
+ "ipaddresses": null,
+ "label": "Network adapter 1",
+ "macaddress": "00:50:56:8c:19:f4",
+ "macaddress_dash": "00-50-56-8c-19-f4",
+ "portgroup_key": "dvportgroup-17",
+ "portgroup_portkey": "0",
+ "summary": "DVSwitch: 50 0c 5b 22 b6 68 ab 89-fc 0b 59 a4 08 6e 80 fa"
+ },
+ "hw_files": [
+ "[ds_215] testvm_2/testvm_2.vmx",
+ "[ds_215] testvm_2/testvm_2.vmsd",
+ "[ds_215] testvm_2/testvm_2.vmdk"
+ ],
+ "hw_folder": "/DC1/vm",
+ "hw_guest_full_name": null,
+ "hw_guest_ha_state": null,
+ "hw_guest_id": null,
+ "hw_interfaces": [
+ "eth0"
+ ],
+ "hw_is_template": false,
+ "hw_memtotal_mb": 512,
+ "hw_name": "testvm_2",
+ "hw_power_status": "poweredOff",
+ "hw_processor_count": 2,
+ "hw_product_uuid": "420cb25b-81e8-8d3b-dd2d-a439ee54fcc5",
+ "hw_version": "vmx-13",
+ "instance_uuid": "500cd53b-ed57-d74e-2da8-0dc0eddf54d5",
+ "ipv4": null,
+ "ipv6": null,
+ "module_hw": true,
+ "snapshots": []
+ },
+ "invocation": {
+ "module_args": {
+ "annotation": null,
+ "cdrom": {},
+ "cluster": "DC1_C1",
+ "customization": {},
+ "customization_spec": null,
+ "customvalues": [],
+ "datacenter": "DC1",
+ "disk": [],
+ "esxi_hostname": null,
+ "folder": "/DC1/vm",
+ "force": false,
+ "guest_id": null,
+ "hardware": {},
+ "hostname": "192.0.2.44",
+ "is_template": false,
+ "linked_clone": false,
+ "name": "testvm_2",
+ "name_match": "first",
+ "networks": [],
+ "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ "port": 443,
+ "resource_pool": null,
+ "snapshot_src": null,
+ "state": "present",
+ "state_change_timeout": 0,
+ "template": "template_el7",
+ "username": "administrator@vsphere.local",
+ "uuid": null,
+ "validate_certs": false,
+ "vapp_properties": [],
+ "wait_for_ip_address": true
+ }
+ }
+ }
+
+- ``changed`` is set to ``True``, which indicates that the virtual machine was built using the given template. The module will not complete until the clone task in VMware is finished. This can take some time depending on your environment.
+
+- If you utilize the ``wait_for_ip_address`` parameter, then it will also increase the clone time as it will wait until the virtual machine boots into the OS and an IP address has been assigned to the given NIC.
+
+
+
+Troubleshooting
+---------------
+
+If your playbook fails, inspect the following:
+
+- Check if the values provided for username and password are correct
+- Check if the datacenter you provided is available
+- Check if the template specified exists and you have permissions to access the datastore
+- Ensure the full folder path you specified already exists. It will not create folders automatically for you
+
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_find_vm_folder.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_find_vm_folder.rst
new file mode 100644
index 00000000..62758867
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_find_vm_folder.rst
@@ -0,0 +1,120 @@
+.. _vmware_guest_find_folder:
+
+******************************************************
+Find folder path of an existing VMware virtual machine
+******************************************************
+
+.. contents:: Topics
+
+Introduction
+============
+
+This guide will show you how to utilize Ansible to find folder path of an existing VMware virtual machine.
+
+Scenario Requirements
+=====================
+
+* Software
+
+ * Ansible 2.5 or later must be installed.
+
+ * The Python module ``Pyvmomi`` must be installed on the Ansible control node (or Target host if not executing against localhost).
+
+ * We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
+
+* Hardware
+
+ * At least one standalone ESXi server or
+
+ * vCenter Server with at least one ESXi server
+
+* Access / Credentials
+
+ * Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
+
+ * Username and Password for vCenter or ESXi server
+
+Caveats
+=======
+
+- All variable names and VMware object names are case sensitive.
+- You need to use Python 2.7.9 or later in order to use the ``validate_certs`` option, as this version is capable of changing the SSL verification behaviour.
+
+
+Example Description
+===================
+
+With the following Ansible playbook you can find the folder path of an existing virtual machine using its name.
+
+.. code-block:: yaml
+
+ ---
+ - name: Find folder path of an existing virtual machine
+ hosts: localhost
+ gather_facts: False
+ vars_files:
+ - vcenter_vars.yml
+ vars:
+ ansible_python_interpreter: "/usr/bin/env python3"
+ tasks:
+ - set_fact:
+ vm_name: "DC0_H0_VM0"
+
+ - name: "Find folder for VM - {{ vm_name }}"
+ vmware_guest_find:
+ hostname: "{{ vcenter_server }}"
+ username: "{{ vcenter_user }}"
+ password: "{{ vcenter_pass }}"
+ validate_certs: False
+ name: "{{ vm_name }}"
+ delegate_to: localhost
+ register: vm_facts
+
+
+Since Ansible utilizes the VMware API to perform actions, in this use case it will be connecting directly to the API from localhost.
+
+This means that playbooks will not be running from the vCenter or ESXi Server.
+
+Note that this play disables the ``gather_facts`` parameter, since you don't want to collect facts about localhost.
+
+You can run these modules against another server that would then connect to the API if localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server. We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
+
+Before you begin, make sure you have:
+
+- Hostname of the ESXi server or vCenter server
+- Username and password for the ESXi or vCenter server
+- Name of the existing Virtual Machine for which you want to collect the folder path
+
+For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using :ref:`ansible-vault` or using `Ansible Tower credentials <https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html>`_.
+
+If your vCenter or ESXi server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the ``validate_certs`` parameter. To do this you need to set ``validate_certs=False`` in your playbook.
+
+The name of the existing virtual machine will be used as input for the ``vmware_guest_find`` module via the ``name`` parameter.
+
+
+What to expect
+--------------
+
+Running this playbook can take some time, depending on your environment and network connectivity. When the run is complete you will see output like this:
+
+.. code-block:: yaml
+
+ "vm_facts": {
+ "changed": false,
+ "failed": false,
+ ...
+ "folders": [
+ "/F0/DC0/vm/F0"
+ ]
+ }
+
+
+Troubleshooting
+---------------
+
+If your playbook fails:
+
+- Check if the values provided for username and password are correct.
+- Check if the datacenter you provided is available.
+- Check if the virtual machine specified exists and that you have the respective permissions to access the VMware object.
+- Ensure the full folder path you specified already exists.
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_remove_vm.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_remove_vm.rst
new file mode 100644
index 00000000..620f8e0a
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_remove_vm.rst
@@ -0,0 +1,126 @@
+.. _vmware_guest_remove_virtual_machine:
+
+*****************************************
+Remove an existing VMware virtual machine
+*****************************************
+
+.. contents:: Topics
+
+Introduction
+============
+
+This guide will show you how to utilize Ansible to remove an existing VMware virtual machine.
+
+Scenario Requirements
+=====================
+
+* Software
+
+ * Ansible 2.5 or later must be installed.
+
+ * The Python module ``Pyvmomi`` must be installed on the Ansible control node (or Target host if not executing against localhost).
+
+ * We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
+
+* Hardware
+
+ * At least one standalone ESXi server or
+
+ * vCenter Server with at least one ESXi server
+
+* Access / Credentials
+
+ * Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
+
+ * Username and Password for vCenter or ESXi server
+
+ * Hosts in the ESXi cluster must have access to the datastore that the template resides on.
+
+Caveats
+=======
+
+- All variable names and VMware object names are case sensitive.
+- You need to use Python 2.7.9 or later in order to use the ``validate_certs`` option, as this version is capable of changing the SSL verification behaviour.
+- The ``vmware_guest`` module tries to mimic the VMware Web UI workflow, so the virtual machine must be in a powered-off state in order to remove it from the VMware inventory.
+
+.. warning::
+
+ Removing a VMware virtual machine using the ``vmware_guest`` module is a destructive operation and cannot be reverted, so it is strongly recommended that you back up the virtual machine and related files (vmx and vmdk files) before proceeding.
+
+Example Description
+===================
+
+In this use case / example, the user will be removing a virtual machine by name. The following Ansible playbook showcases the basic parameters that are needed for this.
+
+.. code-block:: yaml
+
+ ---
+ - name: Remove virtual machine
+ gather_facts: no
+ vars_files:
+ - vcenter_vars.yml
+ vars:
+ ansible_python_interpreter: "/usr/bin/env python3"
+ hosts: localhost
+ tasks:
+ - set_fact:
+ vm_name: "VM_0003"
+ datacenter: "DC1"
+
+ - name: Remove "{{ vm_name }}"
+ vmware_guest:
+ hostname: "{{ vcenter_server }}"
+ username: "{{ vcenter_user }}"
+ password: "{{ vcenter_pass }}"
+ validate_certs: no
+ cluster: "DC1_C1"
+ name: "{{ vm_name }}"
+ state: absent
+ delegate_to: localhost
+ register: facts
+
+
+Since Ansible utilizes the VMware API to perform actions, in this use case it will be connecting directly to the API from localhost.
+
+This means that playbooks will not be running from the vCenter or ESXi Server.
+
+Note that this play disables the ``gather_facts`` parameter, since you don't want to collect facts about localhost.
+
+You can run these modules against another server that would then connect to the API if localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server. We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
+
+Before you begin, make sure you have:
+
+- Hostname of the ESXi server or vCenter server
+- Username and password for the ESXi or vCenter server
+- Name of the existing Virtual Machine you want to remove
+
+For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using :ref:`ansible-vault` or using `Ansible Tower credentials <https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html>`_.
+
+If your vCenter or ESXi server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the ``validate_certs`` parameter. To do this you need to set ``validate_certs=False`` in your playbook.
+
+The name of the existing virtual machine will be used as input for the ``vmware_guest`` module via the ``name`` parameter.
+
+
+What to expect
+--------------
+
+- Compared to other operations performed using the ``vmware_guest`` module, you will see very little JSON output after this playbook completes:
+
+.. code-block:: json
+
+ {
+ "changed": true
+ }
+
+- ``changed`` is set to ``True``, which indicates that the virtual machine was removed from the VMware inventory. This can take some time depending upon your environment and network connectivity.
+
+
+Troubleshooting
+---------------
+
+If your playbook fails:
+
+- Check if the values provided for username and password are correct.
+- Check if the datacenter you provided is available.
+- Check if the virtual machine specified exists and you have permissions to access the datastore.
+- Ensure the full folder path you specified already exists. It will not create folders automatically for you.
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_rename_vm.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_rename_vm.rst
new file mode 100644
index 00000000..81272897
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_rename_vm.rst
@@ -0,0 +1,173 @@
+.. _vmware_guest_rename_virtual_machine:
+
+**********************************
+Rename an existing virtual machine
+**********************************
+
+.. contents:: Topics
+
+Introduction
+============
+
+This guide will show you how to utilize Ansible to rename an existing virtual machine.
+
+Scenario Requirements
+=====================
+
+* Software
+
+ * Ansible 2.5 or later must be installed.
+
+ * The Python module ``Pyvmomi`` must be installed on the Ansible control node (or Target host if not executing against localhost).
+
+ * We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
+
+* Hardware
+
+ * At least one standalone ESXi server or
+
+ * vCenter Server with at least one ESXi server
+
+* Access / Credentials
+
+ * Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
+
+ * Username and Password for vCenter or ESXi server
+
+ * Hosts in the ESXi cluster must have access to the datastore that the template resides on.
+
+Caveats
+=======
+
+- All variable names and VMware object names are case sensitive.
+- You need to use Python 2.7.9 or later in order to use the ``validate_certs`` option, as this version is capable of changing the SSL verification behaviour.
+
+
+Example Description
+===================
+
+With the following Ansible playbook you can rename an existing virtual machine, identifying it by its UUID.
+
+.. code-block:: yaml
+
+ ---
+ - name: Rename virtual machine from old name to new name using UUID
+ gather_facts: no
+ vars_files:
+ - vcenter_vars.yml
+ vars:
+ ansible_python_interpreter: "/usr/bin/env python3"
+ hosts: localhost
+ tasks:
+ - set_fact:
+ vm_name: "old_vm_name"
+ new_vm_name: "new_vm_name"
+ datacenter: "DC1"
+ cluster_name: "DC1_C1"
+
+ - name: Get VM "{{ vm_name }}" uuid
+ vmware_guest_facts:
+ hostname: "{{ vcenter_server }}"
+ username: "{{ vcenter_user }}"
+ password: "{{ vcenter_pass }}"
+ validate_certs: False
+ datacenter: "{{ datacenter }}"
+ folder: "/{{datacenter}}/vm"
+ name: "{{ vm_name }}"
+ register: vm_facts
+
+ - name: Rename "{{ vm_name }}" to "{{ new_vm_name }}"
+ vmware_guest:
+ hostname: "{{ vcenter_server }}"
+ username: "{{ vcenter_user }}"
+ password: "{{ vcenter_pass }}"
+ validate_certs: False
+ cluster: "{{ cluster_name }}"
+ uuid: "{{ vm_facts.instance.hw_product_uuid }}"
+ name: "{{ new_vm_name }}"
+
+Since Ansible utilizes the VMware API to perform actions, in this use case it will be connecting directly to the API from localhost.
+
+This means that playbooks will not be running from the vCenter or ESXi Server.
+
+Note that this play disables the ``gather_facts`` parameter, since you don't want to collect facts about localhost.
+
+You can run these modules against another server that would then connect to the API if localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server. We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
+
+Before you begin, make sure you have:
+
+- Hostname of the ESXi server or vCenter server
+- Username and password for the ESXi or vCenter server
+- Name of the existing Virtual Machine you want to rename (the playbook looks up its UUID)
+
+For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using :ref:`ansible-vault` or using `Ansible Tower credentials <https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html>`_.
+
+If your vCenter or ESXi server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the ``validate_certs`` parameter. To do this you need to set ``validate_certs=False`` in your playbook.
+
+Now you need to supply the information about the existing virtual machine which will be renamed. For renaming a virtual machine, the ``vmware_guest`` module uses the VMware UUID, which is unique across a vCenter environment. This value is autogenerated and cannot be changed. You will use the ``vmware_guest_facts`` module to find the virtual machine and get information about its VMware UUID.
+
+This value will be used as input for the ``vmware_guest`` module. In the ``name`` parameter, specify a new name for the virtual machine that conforms to all VMware requirements for naming conventions. Also, provide the VMware UUID as the value of the ``uuid`` parameter.
+
+What to expect
+--------------
+
+Running this playbook can take some time, depending on your environment and network connectivity. When the run is complete you will see
+
+.. code-block:: json
+
+ {
+ "changed": true,
+ "instance": {
+ "annotation": "",
+ "current_snapshot": null,
+ "customvalues": {},
+ "guest_consolidation_needed": false,
+ "guest_question": null,
+ "guest_tools_status": "guestToolsNotRunning",
+ "guest_tools_version": "10247",
+ "hw_cores_per_socket": 1,
+ "hw_datastores": ["ds_204_2"],
+ "hw_esxi_host": "10.x.x.x",
+ "hw_eth0": {
+ "addresstype": "assigned",
+ "ipaddresses": [],
+ "label": "Network adapter 1",
+ "macaddress": "00:50:56:8c:b8:42",
+ "macaddress_dash": "00-50-56-8c-b8-42",
+ "portgroup_key": "dvportgroup-31",
+ "portgroup_portkey": "15",
+ "summary": "DVSwitch: 50 0c 3a 69 df 78 2c 7b-6e 08 0a 89 e3 a6 31 17"
+ },
+ "hw_files": ["[ds_204_2] old_vm_name/old_vm_name.vmx", "[ds_204_2] old_vm_name/old_vm_name.nvram", "[ds_204_2] old_vm_name/old_vm_name.vmsd", "[ds_204_2] old_vm_name/vmware.log", "[ds_204_2] old_vm_name/old_vm_name.vmdk"],
+ "hw_folder": "/DC1/vm",
+ "hw_guest_full_name": null,
+ "hw_guest_ha_state": null,
+ "hw_guest_id": null,
+ "hw_interfaces": ["eth0"],
+ "hw_is_template": false,
+ "hw_memtotal_mb": 1024,
+ "hw_name": "new_vm_name",
+ "hw_power_status": "poweredOff",
+ "hw_processor_count": 1,
+ "hw_product_uuid": "420cbebb-835b-980b-7050-8aea9b7b0a6d",
+ "hw_version": "vmx-13",
+ "instance_uuid": "500c60a6-b7b4-8ae5-970f-054905246a6f",
+ "ipv4": null,
+ "ipv6": null,
+ "module_hw": true,
+ "snapshots": []
+ }
+ }
+
+confirming that you've renamed the virtual machine.
+
+
+Troubleshooting
+---------------
+
+If your playbook fails:
+
+- Check if the values provided for username and password are correct.
+- Check if the datacenter you provided is available.
+- Check if the virtual machine specified exists and you have permissions to access the datastore.
+- Ensure the full folder path you specified already exists.
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_vmware_http.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_vmware_http.rst
new file mode 100644
index 00000000..e893c9d0
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/scenario_vmware_http.rst
@@ -0,0 +1,161 @@
+.. _vmware_http_api_usage:
+
+**************************************
+Using the VMware HTTP API with Ansible
+**************************************
+
+.. contents:: Topics
+
+Introduction
+============
+
+This guide will show you how to utilize Ansible and the VMware HTTP API to automate various tasks.
+
+Scenario Requirements
+=====================
+
+* Software
+
+ * Ansible 2.5 or later must be installed.
+
+ * We recommend installing the latest version with pip: ``pip install Pyvmomi`` on the Ansible control node
+ (as the OS packages are usually out of date and incompatible) if you are planning to use any existing VMware modules.
+
+* Hardware
+
+ * vCenter Server 6.5 and above with at least one ESXi server
+
+* Access / Credentials
+
+ * Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
+
+ * Username and Password for vCenter
+
+Caveats
+=======
+
+- All variable names and VMware object names are case sensitive.
+- You need to use Python 2.7.9 version in order to use ``validate_certs`` option, as this version is capable of changing the SSL verification behaviours.
+- The VMware HTTP APIs were introduced in vSphere 6.5, so the minimum vSphere level required is 6.5.
+- There is a very limited number of APIs exposed, so you may need to rely on the XMLRPC-based VMware modules.
+
+
+Example Description
+===================
+
+With the following Ansible playbook you can find the VMware ESXi host system(s) and can perform various tasks depending on the list of host systems.
+This is a generic example to show how Ansible can be utilized to consume VMware HTTP APIs.
+
+.. code-block:: yaml
+
+ ---
+ - name: Example showing VMware HTTP API utilization
+ hosts: localhost
+ gather_facts: no
+ vars_files:
+ - vcenter_vars.yml
+ vars:
+ ansible_python_interpreter: "/usr/bin/env python3"
+ tasks:
+ - name: Login into vCenter and get cookies
+ uri:
+ url: https://{{ vcenter_server }}/rest/com/vmware/cis/session
+ force_basic_auth: yes
+ validate_certs: no
+ method: POST
+ user: "{{ vcenter_user }}"
+ password: "{{ vcenter_pass }}"
+ register: login
+
+ - name: Get all hosts from vCenter using cookies from last task
+ uri:
+ url: https://{{ vcenter_server }}/rest/vcenter/host
+ force_basic_auth: yes
+ validate_certs: no
+ headers:
+ Cookie: "{{ login.set_cookie }}"
+ register: vchosts
+
+ - name: Change Log level configuration of the given hostsystem
+ vmware_host_config_manager:
+ hostname: "{{ vcenter_server }}"
+ username: "{{ vcenter_user }}"
+ password: "{{ vcenter_pass }}"
+ esxi_hostname: "{{ item.name }}"
+ options:
+ 'Config.HostAgent.log.level': 'error'
+ validate_certs: no
+ loop: "{{ vchosts.json.value }}"
+ register: host_config_results
+
+
+Since Ansible utilizes the VMware HTTP API using the ``uri`` module to perform actions, in this use case it will be connecting directly to the VMware HTTP API from localhost.
+
+This means that playbooks will not be running from the vCenter or ESXi Server.
+
+Note that this play disables the ``gather_facts`` parameter, since you don't want to collect facts about localhost.
+
+Before you begin, make sure you have:
+
+- Hostname of the vCenter server
+- Username and password for the vCenter server
+- Version of vCenter is at least 6.5
+
+For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using :ref:`ansible-vault` or using `Ansible Tower credentials <https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html>`_.
+
+If your vCenter server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the ``validate_certs`` parameter. To do this you need to set ``validate_certs=False`` in your playbook.
+
+As you can see, the first task uses the ``uri`` module to log in to the vCenter server and registers the result in the ``login`` variable. The second task then uses the session cookie from the first task to gather information about the ESXi host systems.
+
+Using this information, the third task changes the advanced configuration of each ESXi host system.
+
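+Optionally, you can also close the REST session once you are done with it. The following task is a minimal sketch, assuming the same ``login`` result registered in the first task:
+
+.. code-block:: yaml
+
+    - name: Logout of vCenter and invalidate the session cookie
+      uri:
+        url: https://{{ vcenter_server }}/rest/com/vmware/cis/session
+        validate_certs: no
+        method: DELETE
+        headers:
+          Cookie: "{{ login.set_cookie }}"
+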
+What to expect
+--------------
+
+Running this playbook can take some time, depending on your environment and network connectivity. When the run is complete you will see output similar to the following:
+
+.. code-block:: yaml
+
+ "results": [
+ {
+ ...
+ "invocation": {
+ "module_args": {
+ "cluster_name": null,
+ "esxi_hostname": "10.76.33.226",
+ "hostname": "10.65.223.114",
+ "options": {
+ "Config.HostAgent.log.level": "error"
+ },
+ "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ "port": 443,
+ "username": "administrator@vsphere.local",
+ "validate_certs": false
+ }
+ },
+ "item": {
+ "connection_state": "CONNECTED",
+ "host": "host-21",
+ "name": "10.76.33.226",
+ "power_state": "POWERED_ON"
+ },
+ "msg": "Config.HostAgent.log.level changed."
+ ...
+ }
+ ]
+
+
+Troubleshooting
+---------------
+
+If your playbook fails:
+
+- Check if the values provided for username and password are correct.
+- Check that your vCenter server is version 6.5 or later, since earlier versions do not expose these HTTP APIs.
+
+.. seealso::
+
+ `VMware vSphere and Ansible From Zero to Useful by @arielsanchezmor <https://www.youtube.com/watch?v=0_qwOKlBlo8>`_
+ vBrownBag session video related to VMware HTTP APIs
+ `Sample Playbooks for using VMware HTTP APIs <https://github.com/Akasurde/ansible-vmware-http>`_
+ GitHub repo for examples of Ansible playbook to manage VMware using HTTP APIs
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_concepts.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_concepts.rst
new file mode 100644
index 00000000..ce1e831a
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_concepts.rst
@@ -0,0 +1,45 @@
+.. _vmware_concepts:
+
+***************************
+Ansible for VMware Concepts
+***************************
+
+Some of these concepts are common to all uses of Ansible, including VMware automation; some are specific to VMware. You need to understand them to use Ansible for VMware automation. This introduction provides the background you need to follow the :ref:`scenarios<vmware_scenarios>` in this guide.
+
+.. contents::
+ :local:
+
+Control Node
+============
+
+Any machine with Ansible installed. You can run commands and playbooks, invoking ``/usr/bin/ansible`` or ``/usr/bin/ansible-playbook``, from any control node. You can use any computer that has Python installed on it as a control node - laptops, shared desktops, and servers can all run Ansible. However, you cannot use a Windows machine as a control node. You can have multiple control nodes.
+
+Delegation
+==========
+
+Delegation allows you to select the system that executes a given task. If you do not have ``pyVmomi`` installed on your control node, use the ``delegate_to`` keyword on VMware-specific tasks to execute them on any host where you have ``pyVmomi`` installed.
+
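+For example, the following task is a minimal sketch that runs a VMware module on a hypothetical ``helper_host`` that has ``pyVmomi`` installed; all variables shown are placeholders:
+
+.. code-block:: yaml
+
+    - name: Gather information about a virtual machine from a delegated host
+      vmware_guest_info:
+        hostname: "{{ vcenter_server }}"
+        username: "{{ vcenter_user }}"
+        password: "{{ vcenter_pass }}"
+        datacenter: "{{ datacenter_name }}"
+        name: "{{ vm_name }}"
+        validate_certs: no
+      delegate_to: helper_host
+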
+Modules
+=======
+
+The units of code Ansible executes. Each module has a particular use, from creating virtual machines on vCenter to managing distributed virtual switches in the vCenter environment. You can invoke a single module with a task, or invoke several different modules in a playbook. For an idea of how many modules Ansible includes, take a look at the :ref:`list of cloud modules<cloud_modules>`, which includes VMware modules.
+
+Playbooks
+=========
+
+Ordered lists of tasks, saved so you can run those tasks in that order repeatedly. Playbooks can include variables as well as tasks. Playbooks are written in YAML and are easy to read, write, share and understand.
+
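+A minimal sketch of a playbook with a single VMware task (the connection variables and ``vm_name`` are placeholders) could look like this:
+
+.. code-block:: yaml
+
+    ---
+    - name: Example VMware playbook
+      hosts: localhost
+      gather_facts: no
+      tasks:
+        - name: Power on a virtual machine
+          vmware_guest:
+            hostname: "{{ vcenter_server }}"
+            username: "{{ vcenter_user }}"
+            password: "{{ vcenter_pass }}"
+            name: "{{ vm_name }}"
+            state: poweredon
+            validate_certs: no
+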
+pyVmomi
+=======
+
+Ansible VMware modules are written on top of `pyVmomi <https://github.com/vmware/pyvmomi>`_. ``pyVmomi`` is the official Python SDK for the VMware vSphere API that allows users to manage ESX, ESXi, and vCenter infrastructure.
+
+You need to install this Python SDK on the host from which you want to invoke VMware automation. For example, if your tasks run on the control node, then ``pyVmomi`` must be installed on the control node.
+
+If you delegate tasks with ``delegate_to`` to a host that is different from your control node, you need to install ``pyVmomi`` on that ``delegate_to`` node as well.
+
+You can install pyVmomi using pip:
+
+.. code-block:: bash
+
+ $ pip install pyvmomi
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_external_doc_links.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_external_doc_links.rst
new file mode 100644
index 00000000..b50837dd
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_external_doc_links.rst
@@ -0,0 +1,11 @@
+.. _vmware_external_doc_links:
+
+*****************************
+Other useful VMware resources
+*****************************
+
+* `VMware API and SDK Documentation <https://www.vmware.com/support/pubs/sdk_pubs.html>`_
+* `VCSIM test container image <https://quay.io/repository/ansible/vcenter-test-container>`_
+* `Ansible VMware community wiki page <https://github.com/ansible/community/wiki/VMware>`_
+* `VMware's official Guest Operating system customization matrix <https://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf>`_
+* `VMware Compatibility Guide <https://www.vmware.com/resources/compatibility/search.php>`_
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_getting_started.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_getting_started.rst
new file mode 100644
index 00000000..fc5691b7
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_getting_started.rst
@@ -0,0 +1,9 @@
+:orphan:
+
+.. _vmware_ansible_getting_started:
+
+***************************************
+Getting Started with Ansible for VMware
+***************************************
+
+This page will contain a basic "hello world" scenario/walkthrough that introduces the user to the basics.
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_intro.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_intro.rst
new file mode 100644
index 00000000..7006e665
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_intro.rst
@@ -0,0 +1,53 @@
+.. _vmware_ansible_intro:
+
+**********************************
+Introduction to Ansible for VMware
+**********************************
+
+.. contents:: Topics
+
+Introduction
+============
+
+Ansible provides various modules to manage VMware infrastructure, including datacenters,
+clusters, host systems, and virtual machines.
+
+Requirements
+============
+
+Ansible VMware modules are written on top of `pyVmomi <https://github.com/vmware/pyvmomi>`_.
+pyVmomi is the Python SDK for the VMware vSphere API that allows users to manage ESX, ESXi,
+and vCenter infrastructure. You can install pyVmomi using pip:
+
+.. code-block:: bash
+
+ $ pip install pyvmomi
+
+Ansible VMware modules that leverage the latest vSphere (6.0+) features use the `vSphere Automation Python SDK <https://github.com/vmware/vsphere-automation-sdk-python>`_. The vSphere Automation Python SDK also has client libraries, documentation, and sample code for the VMware Cloud on AWS Console APIs, NSX VMware Cloud on AWS integration APIs, VMware Cloud on AWS site recovery APIs, and NSX-T APIs.
+
+You can install vSphere Automation Python SDK using pip:
+
+.. code-block:: bash
+
+ $ pip install --upgrade git+https://github.com/vmware/vsphere-automation-sdk-python.git
+
+.. note::
+
+    Installing the vSphere Automation Python SDK also installs ``pyvmomi``. A separate installation of ``pyvmomi`` is not required.
+
+vmware_guest module
+===================
+
+The :ref:`vmware_guest<vmware_guest_module>` module manages various operations related to virtual machines in the given ESXi or vCenter server.
+
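+For example, the following task is a minimal sketch that ensures a virtual machine exists; the connection variables, datacenter, folder, and datastore values are placeholders for your environment:
+
+.. code-block:: yaml
+
+    - name: Create a virtual machine
+      vmware_guest:
+        hostname: "{{ vcenter_server }}"
+        username: "{{ vcenter_user }}"
+        password: "{{ vcenter_pass }}"
+        validate_certs: no
+        datacenter: "{{ datacenter_name }}"
+        folder: /vm
+        name: test_vm_0001
+        state: present
+        guest_id: centos64Guest
+        disk:
+          - size_gb: 10
+            type: thin
+            datastore: datastore1
+        hardware:
+          memory_mb: 512
+          num_cpus: 1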
+
+.. seealso::
+
+ `pyVmomi <https://github.com/vmware/pyvmomi>`_
+ The GitHub Page of pyVmomi
+ `pyVmomi Issue Tracker <https://github.com/vmware/pyvmomi/issues>`_
+ The issue tracker for the pyVmomi project
+ `govc <https://github.com/vmware/govmomi/tree/master/govc>`_
+ govc is a vSphere CLI built on top of govmomi
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
+
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory.rst
new file mode 100644
index 00000000..f942dd00
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory.rst
@@ -0,0 +1,90 @@
+.. _vmware_ansible_inventory:
+
+*************************************
+Using VMware dynamic inventory plugin
+*************************************
+
+.. contents:: Topics
+
+VMware Dynamic Inventory Plugin
+===============================
+
+
+The best way to interact with your hosts is to use the VMware dynamic inventory plugin, which dynamically queries VMware APIs and
+tells Ansible what nodes can be managed.
+
+Requirements
+------------
+
+To use the VMware dynamic inventory plugins, you must install `pyVmomi <https://github.com/vmware/pyvmomi>`_
+on your control node (the host running Ansible).
+
+To include tag-related information for the virtual machines in your dynamic inventory, you also need the `vSphere Automation SDK <https://code.vmware.com/web/sdk/65/vsphere-automation-python>`_, which supports REST API features like tagging and content libraries, on your control node.
+You can install the ``vSphere Automation SDK`` following `these instructions <https://github.com/vmware/vsphere-automation-sdk-python#installing-required-python-packages>`_.
+
+.. code-block:: bash
+
+ $ pip install pyvmomi
+
+To use this VMware dynamic inventory plugin, you need to enable it first by specifying the following in the ``ansible.cfg`` file:
+
+.. code-block:: ini
+
+ [inventory]
+ enable_plugins = vmware_vm_inventory
+
+Then, create a file that ends in ``.vmware.yml`` or ``.vmware.yaml`` in your working directory.
+
+The ``vmware_vm_inventory`` inventory plugin takes in the same authentication information as any VMware module.
+
+Here's an example of a valid inventory file:
+
+.. code-block:: yaml
+
+ plugin: vmware_vm_inventory
+ strict: False
+ hostname: 10.65.223.31
+ username: administrator@vsphere.local
+ password: Esxi@123$%
+ validate_certs: False
+ with_tags: True
+
+
+Executing ``ansible-inventory --list -i <filename>.vmware.yml`` will create a list of VMware instances that are ready to be configured using Ansible.
+
+Using vaulted configuration files
+=================================
+
+Since the inventory configuration file contains the vCenter password in plain text, which is a security risk, you may want to
+encrypt your entire inventory configuration file.
+
+You can encrypt a valid inventory configuration file as follows:
+
+.. code-block:: bash
+
+ $ ansible-vault encrypt <filename>.vmware.yml
+ New Vault password:
+ Confirm New Vault password:
+ Encryption successful
+
+And you can use this vaulted inventory configuration file using:
+
+.. code-block:: bash
+
+ $ ansible-inventory -i filename.vmware.yml --list --vault-password-file=/path/to/vault_password_file
+
+
+.. seealso::
+
+ `pyVmomi <https://github.com/vmware/pyvmomi>`_
+ The GitHub Page of pyVmomi
+ `pyVmomi Issue Tracker <https://github.com/vmware/pyvmomi/issues>`_
+ The issue tracker for the pyVmomi project
+ `vSphere Automation SDK GitHub Page <https://github.com/vmware/vsphere-automation-sdk-python>`_
+ The GitHub Page of vSphere Automation SDK for Python
+ `vSphere Automation SDK Issue Tracker <https://github.com/vmware/vsphere-automation-sdk-python/issues>`_
+ The issue tracker for vSphere Automation SDK for Python
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_vault`
+ Using Vault in playbooks
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_filters.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_filters.rst
new file mode 100644
index 00000000..1208dcad
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_filters.rst
@@ -0,0 +1,216 @@
+.. _vmware_ansible_inventory_using_filters:
+
+***********************************************
+Using VMware dynamic inventory plugin - Filters
+***********************************************
+
+.. contents::
+ :local:
+
+VMware dynamic inventory plugin - filtering VMware guests
+=========================================================
+
+
+The VMware inventory plugin allows you to filter VMware guests using the ``filters`` configuration parameter.
+
+This section shows how you configure ``filters`` for the given VMware guest in the inventory.
+
+Requirements
+------------
+
+To use the VMware dynamic inventory plugins, you must install `pyVmomi <https://github.com/vmware/pyvmomi>`_
+on your control node (the host running Ansible).
+
+To include tag-related information for the virtual machines in your dynamic inventory, you also need the `vSphere Automation SDK <https://code.vmware.com/web/sdk/65/vsphere-automation-python>`_, which supports REST API features such as tagging and content libraries, on your control node.
+You can install the ``vSphere Automation SDK`` following `these instructions <https://github.com/vmware/vsphere-automation-sdk-python#installing-required-python-packages>`_.
+
+.. code-block:: bash
+
+ $ pip install pyvmomi
+
+Starting in Ansible 2.10, the VMware dynamic inventory plugin is available in the ``community.vmware`` collection included in Ansible.
+Alternatively, to install the latest ``community.vmware`` collection:
+
+.. code-block:: bash
+
+ $ ansible-galaxy collection install community.vmware
+
+To use this VMware dynamic inventory plugin:
+
+1. Enable it first by specifying the following in the ``ansible.cfg`` file:
+
+.. code-block:: ini
+
+ [inventory]
+ enable_plugins = community.vmware.vmware_vm_inventory
+
+2. Create a file that ends in ``vmware.yml`` or ``vmware.yaml`` in your working directory.
+
+The ``vmware_vm_inventory`` inventory plugin takes in the same authentication information as any other VMware module does.
+
+Let us assume we want to list all RHEL7 VMs whose power state is "poweredOn". A valid inventory file with filters for the given VMware guests looks as follows:
+
+.. code-block:: yaml
+
+ plugin: community.vmware.vmware_vm_inventory
+ strict: False
+ hostname: 10.65.223.31
+ username: administrator@vsphere.local
+ password: Esxi@123$%
+ validate_certs: False
+ with_tags: False
+ hostnames:
+ - config.name
+ filters:
+ - config.guestId == "rhel7_64Guest"
+ - summary.runtime.powerState == "poweredOn"
+
+
+Here, we have configured two filters:
+
+* ``config.guestId`` is equal to ``rhel7_64Guest``
+* ``summary.runtime.powerState`` is equal to ``poweredOn``
+
+This retrieves all the VMs which satisfy these two conditions and populates them in the inventory.
+Notice that the conditions are combined using an ``and`` operation.
+
+Using ``or`` conditions in filters
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let us assume you want to filter both RHEL7 and Ubuntu VMs. You can combine multiple filters using the ``or`` condition in your inventory file.
+
+A valid filter in the VMware inventory file for this example is:
+
+.. code-block:: yaml
+
+ plugin: community.vmware.vmware_vm_inventory
+ strict: False
+ hostname: 10.65.223.31
+ username: administrator@vsphere.local
+ password: Esxi@123$%
+ validate_certs: False
+ with_tags: False
+ hostnames:
+ - config.name
+ filters:
+ - config.guestId == "rhel7_64Guest" or config.guestId == "ubuntu64Guest"
+
+
+You can check all allowed properties for filters for the given virtual machine at :ref:`vmware_inventory_vm_attributes`.
+
+If you are using the ``properties`` parameter with custom VM properties, make sure that you include all the properties used by filters as well in your VM property list.
+
+For example, to select all RHEL7 and Ubuntu VMs that are poweredOn, you can use the following inventory file:
+
+.. code-block:: yaml
+
+ plugin: community.vmware.vmware_vm_inventory
+ strict: False
+ hostname: 10.65.223.31
+ username: administrator@vsphere.local
+ password: Esxi@123$%
+ validate_certs: False
+ with_tags: False
+ hostnames:
+ - 'config.name'
+ properties:
+ - 'config.name'
+ - 'config.guestId'
+ - 'guest.ipAddress'
+ - 'summary.runtime.powerState'
+ filters:
+ - config.guestId == "rhel7_64Guest" or config.guestId == "ubuntu64Guest"
+ - summary.runtime.powerState == "poweredOn"
+
+Here, we are using a minimal set of VM properties, that is ``config.name``, ``config.guestId``, ``summary.runtime.powerState``, and ``guest.ipAddress``.
+
+* ``config.name`` is used by the ``hostnames`` parameter.
+* ``config.guestId`` and ``summary.runtime.powerState`` are used by the ``filters`` parameter.
+* ``guest.ipAddress`` is used internally by the inventory plugin to set ``ansible_host``.
+
+Using regular expression in filters
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let us assume you want to filter VMs within a specific IP range. You can use regular expressions in ``filters`` in your inventory file.
+
+For example, to select all VMs with an IP address in the ``192.168.*`` range, you can use the following inventory file:
+
+.. code-block:: yaml
+
+ plugin: community.vmware.vmware_vm_inventory
+ strict: False
+ hostname: 10.65.223.31
+ username: administrator@vsphere.local
+ password: Esxi@123$%
+ validate_certs: False
+ with_tags: False
+ hostnames:
+ - 'config.name'
+ properties:
+ - 'config.name'
+ - 'config.guestId'
+ - 'guest.ipAddress'
+ - 'summary.runtime.powerState'
+ filters:
+ - guest.ipAddress is defined and guest.ipAddress is match('192.168.*')
+
+Here, we are using the ``guest.ipAddress`` VM property. This property is optional and depends on VMware Tools being installed on the VM.
+We are using the ``match`` test to check the IP address against the given regular expression.
+
+Executing ``ansible-inventory --list -i <filename>.vmware.yml`` creates a list of the virtual machines that are ready to be configured using Ansible.
+
+What to expect
+--------------
+
+You will notice that the inventory hosts are filtered depending on your ``filters`` section.
+
+
+.. code-block:: yaml
+
+ {
+ "_meta": {
+ "hostvars": {
+ "template_001": {
+ "config.name": "template_001",
+ "config.guestId": "ubuntu64Guest",
+ ...
+ "guest.toolsStatus": "toolsNotInstalled",
+ "summary.runtime.powerState": "poweredOn",
+ },
+ "vm_8046": {
+ "config.name": "vm_8046",
+ "config.guestId": "rhel7_64Guest",
+ ...
+ "guest.toolsStatus": "toolsNotInstalled",
+ "summary.runtime.powerState": "poweredOn",
+ },
+ ...
+ }
+
+Troubleshooting filters
+-----------------------
+
+If the properties specified in ``filters`` fail:
+
+- Check if the values provided for username and password are correct.
+- Make sure it is a valid property, see :ref:`vmware_inventory_vm_attributes`.
+- Use ``strict: True`` to get more information about the error.
+- Make sure that you are using the latest version of the VMware collection.
+
+
+.. seealso::
+
+ `pyVmomi <https://github.com/vmware/pyvmomi>`_
+ The GitHub Page of pyVmomi
+ `pyVmomi Issue Tracker <https://github.com/vmware/pyvmomi/issues>`_
+ The issue tracker for the pyVmomi project
+ `vSphere Automation SDK GitHub Page <https://github.com/vmware/vsphere-automation-sdk-python>`_
+ The GitHub Page of vSphere Automation SDK for Python
+ `vSphere Automation SDK Issue Tracker <https://github.com/vmware/vsphere-automation-sdk-python/issues>`_
+ The issue tracker for vSphere Automation SDK for Python
+ :ref:`vmware_inventory_vm_attributes`
+ Using Virtual machine attributes in VMware dynamic inventory plugin
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_vault`
+ Using Vault in playbooks
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_hostnames.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_hostnames.rst
new file mode 100644
index 00000000..9d284562
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_hostnames.rst
@@ -0,0 +1,128 @@
+.. _vmware_ansible_inventory_using_hostnames:
+
+*************************************************
+Using VMware dynamic inventory plugin - Hostnames
+*************************************************
+
+.. contents::
+ :local:
+
+VMware dynamic inventory plugin - customizing hostnames
+=======================================================
+
+
+The VMware inventory plugin allows you to configure hostnames using the ``hostnames`` configuration parameter.
+
+This scenario guide shows how you can configure custom hostnames for the given VMware guests in the inventory.
+
+Requirements
+------------
+
+To use the VMware dynamic inventory plugins, you must install `pyVmomi <https://github.com/vmware/pyvmomi>`_
+on your control node (the host running Ansible).
+
+To include tag-related information for the virtual machines in your dynamic inventory, you also need the `vSphere Automation SDK <https://code.vmware.com/web/sdk/65/vsphere-automation-python>`_, which supports REST API features such as tagging and content libraries, on your control node.
+You can install the ``vSphere Automation SDK`` following `these instructions <https://github.com/vmware/vsphere-automation-sdk-python#installing-required-python-packages>`_.
+
+.. code-block:: bash
+
+ $ pip install pyvmomi
+
+Starting in Ansible 2.10, the VMware dynamic inventory plugin is available in the ``community.vmware`` collection included in Ansible.
+To install the latest ``community.vmware`` collection:
+
+.. code-block:: bash
+
+ $ ansible-galaxy collection install community.vmware
+
+To use this VMware dynamic inventory plugin:
+
+1. Enable it first by specifying the following in the ``ansible.cfg`` file:
+
+.. code-block:: ini
+
+ [inventory]
+ enable_plugins = community.vmware.vmware_vm_inventory
+
+2. Create a file that ends in ``vmware.yml`` or ``vmware.yaml`` in your working directory.
+
+The ``vmware_vm_inventory`` inventory plugin takes in the same authentication information as any other VMware module does.
+
+Here's an example of a valid inventory file with custom hostname for the given VMware guest:
+
+.. code-block:: yaml
+
+ plugin: community.vmware.vmware_vm_inventory
+ strict: False
+ hostname: 10.65.223.31
+ username: administrator@vsphere.local
+ password: Esxi@123$%
+ validate_certs: False
+ with_tags: False
+ hostnames:
+ - config.name
+
+
+Here, we have configured a custom hostname by setting the ``hostnames`` parameter to ``config.name``. This will retrieve
+the ``config.name`` property from the virtual machine and populate it in the inventory.
+
+You can check all allowed properties for the given virtual machine at :ref:`vmware_inventory_vm_attributes`.
+
+Executing ``ansible-inventory --list -i <filename>.vmware.yml`` creates a list of the virtual machines that are ready to be configured using Ansible.
+
+What to expect
+--------------
+
+You will notice that, instead of the default behavior of representing the hostname as ``config.name + _ + config.uuid``,
+the inventory hosts now show the ``config.name`` value.
+
+
+.. code-block:: yaml
+
+ {
+ "_meta": {
+ "hostvars": {
+ "template_001": {
+ "config.name": "template_001",
+ "guest.toolsRunningStatus": "guestToolsNotRunning",
+ ...
+ "guest.toolsStatus": "toolsNotInstalled",
+ "name": "template_001"
+ },
+ "vm_8046": {
+ "config.name": "vm_8046",
+ "guest.toolsRunningStatus": "guestToolsNotRunning",
+ ...
+ "guest.toolsStatus": "toolsNotInstalled",
+ "name": "vm_8046"
+ },
+ ...
+ }
+
+Troubleshooting
+---------------
+
+If the custom property specified in ``hostnames`` fails:
+
+- Check if the values provided for username and password are correct.
+- Make sure it is a valid property, see :ref:`vmware_inventory_vm_attributes`.
+- Use ``strict: True`` to get more information about the error.
+- Make sure that you are using the latest version of the VMware collection.
+
+
+.. seealso::
+
+ `pyVmomi <https://github.com/vmware/pyvmomi>`_
+ The GitHub Page of pyVmomi
+ `pyVmomi Issue Tracker <https://github.com/vmware/pyvmomi/issues>`_
+ The issue tracker for the pyVmomi project
+ `vSphere Automation SDK GitHub Page <https://github.com/vmware/vsphere-automation-sdk-python>`_
+ The GitHub Page of vSphere Automation SDK for Python
+ `vSphere Automation SDK Issue Tracker <https://github.com/vmware/vsphere-automation-sdk-python/issues>`_
+ The issue tracker for vSphere Automation SDK for Python
+ :ref:`vmware_inventory_vm_attributes`
+ Using Virtual machine attributes in VMware dynamic inventory plugin
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_vault`
+ Using Vault in playbooks
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_vm_attributes.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_vm_attributes.rst
new file mode 100644
index 00000000..089c13d7
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_vm_attributes.rst
@@ -0,0 +1,1183 @@
+.. _vmware_inventory_vm_attributes:
+
+*******************************************************************
+Using Virtual machine attributes in VMware dynamic inventory plugin
+*******************************************************************
+
+.. contents:: Topics
+
+Virtual machine attributes
+==========================
+
+You can use the following virtual machine properties to populate ``hostvars`` for the given
+virtual machine in the VMware dynamic inventory plugin.
+
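+For example, an inventory file sketch like the following (the connection values are placeholders) exposes a small selection of these properties as ``hostvars``:
+
+.. code-block:: yaml
+
+    plugin: vmware_vm_inventory
+    strict: False
+    hostname: 10.65.223.31
+    username: administrator@vsphere.local
+    password: Esxi@123$%
+    validate_certs: False
+    with_tags: False
+    properties:
+      - 'config.name'
+      - 'config.uuid'
+      - 'guest.ipAddress'
+      - 'summary.runtime.powerState'
+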
+capability
+----------
+
+This section describes settings for the runtime capabilities of the virtual machine.
+
+snapshotOperationsSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not a virtual machine supports snapshot operations.
+
+multipleSnapshotsSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not a virtual machine supports multiple snapshots.
+ This value is not set when the virtual machine is unavailable, for instance, when it is being created or deleted.
+
+snapshotConfigSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not a virtual machine supports snapshot config.
+
+poweredOffSnapshotsSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not a virtual machine supports snapshot operations in ``poweredOff`` state.
+
+memorySnapshotsSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not a virtual machine supports memory snapshots.
+
+revertToSnapshotSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not a virtual machine supports reverting to a snapshot.
+
+quiescedSnapshotsSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not a virtual machine supports quiesced snapshots.
+
+disableSnapshotsSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not snapshots can be disabled.
+
+lockSnapshotsSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not the snapshot tree can be locked.
+
+consolePreferencesSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether console preferences can be set for the virtual machine.
+
+cpuFeatureMaskSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether CPU feature requirements masks can be set for the virtual machine.
+
+s1AcpiManagementSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not a virtual machine supports ACPI S1 settings management.
+
+settingScreenResolutionSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not the virtual machine supports setting the screen resolution of the console window.
+
+toolsAutoUpdateSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Supports tools auto-update.
+
+vmNpivWwnSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Supports virtual machine NPIV WWN.
+
+npivWwnOnNonRdmVmSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Supports assigning NPIV WWN to virtual machines that do not have RDM disks.
+
+vmNpivWwnDisableSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether the NPIV disabling operation is supported on the virtual machine.
+
+vmNpivWwnUpdateSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    Indicates whether the update of NPIV WWNs is supported on the virtual machine.
+
+swapPlacementSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    Flag indicating whether the virtual machine has a configurable swapfile placement policy.
+
+toolsSyncTimeSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether asking tools to sync time with the host is supported.
+
+virtualMmuUsageSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not the use of nested page table hardware support can be explicitly set.
+
+diskSharesSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether resource settings for disks can be applied to the virtual machine.
+
+bootOptionsSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether boot options can be configured for the virtual machine.
+
+bootRetryOptionsSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether automatic boot retry can be configured for the virtual machine.
+
+settingVideoRamSizeSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Flag indicating whether the video RAM size of the virtual machine can be configured.
+
+settingDisplayTopologySupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not the virtual machine supports setting the display topology of the console window.
+
+recordReplaySupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether record and replay functionality is supported on the virtual machine.
+
+changeTrackingSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates that change tracking is supported for virtual disks of the virtual machine.
+ However, even if change tracking is supported, it might not be available for all disks of the virtual machine.
+ For example, passthru raw disk mappings or disks backed by any Ver1BackingInfo cannot be tracked.
+
+multipleCoresPerSocketSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    Indicates whether multiple virtual cores per socket are supported on the virtual machine.
+
+hostBasedReplicationSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates that host based replication is supported on the virtual machine.
+ However, even if host based replication is supported, it might not be available for all disk types.
+ For example, passthru raw disk mappings can not be replicated.
+
+guestAutoLockSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not guest autolock is supported on the virtual machine.
+
+memoryReservationLockSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether :ref:`memory_reservation_locked_to_max` may be set to true for the virtual machine.
+
+featureRequirementSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether the featureRequirement feature is supported.
+
+poweredOnMonitorTypeChangeSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether a monitor type change is supported while the virtual machine is in the ``poweredOn`` state.
+
+seSparseDiskSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    Indicates whether the virtual machine supports the Flex-SE (space-efficient, sparse) format for virtual disks.
+
+nestedHVSupported (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether the virtual machine supports nested hardware-assisted virtualization.
+
+vPMCSupported (bool)
+^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether the virtual machine supports virtualized CPU performance counters.
+
+
+config
+------
+
+This section describes the configuration settings of the virtual machine, including the name and UUID.
+This property is set when a virtual machine is created or when the ``reconfigVM`` method is called.
+The virtual machine configuration is not guaranteed to be available.
+For example, the configuration information would be unavailable if the server is unable to access the virtual machine files on disk, and is often also unavailable during the initial phases of virtual machine creation.
+
+changeVersion (str)
+^^^^^^^^^^^^^^^^^^^
+
+ The changeVersion is a unique identifier for a given version of the configuration.
+ Each change to the configuration updates this value. This is typically implemented as an ever increasing count or a time-stamp.
+ However, a client should always treat this as an opaque string.
+
+modified (datetime)
+^^^^^^^^^^^^^^^^^^^
+
+ Last time a virtual machine's configuration was modified.
+
+name (str)
+^^^^^^^^^^
+
+    Display name of the virtual machine. Any / (slash) or \ (backslash) character used in this name element is escaped. Similarly, any % (percent) character used in this name element is escaped, unless it is used to start an escape sequence. A slash is escaped as %2F or %2f. A backslash is escaped as %5C or %5c, and a percent is escaped as %25.
+
+.. _guest_full_name:
+
+guestFullName (str)
+^^^^^^^^^^^^^^^^^^^
+
+ This is the full name of the guest operating system for the virtual machine. For example: Windows 2000 Professional. See :ref:`alternate_guest_name`.
+
+version (str)
+^^^^^^^^^^^^^
+
+ The version string for the virtual machine.
+
+uuid (str)
+^^^^^^^^^^
+
+ 128-bit SMBIOS UUID of a virtual machine represented as a hexadecimal string in "12345678-abcd-1234-cdef-123456789abc" format.
+
+instanceUuid (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    VirtualCenter-specific 128-bit UUID of a virtual machine, represented as a hexadecimal string. This identifier is used by VirtualCenter to uniquely identify all virtual machine instances, including those that may share the same SMBIOS UUID.
+
+npivNodeWorldWideName (long, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ A 64-bit node WWN (World Wide Name).
+
+npivPortWorldWideName (long, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ A 64-bit port WWN (World Wide Name).
+
+npivWorldWideNameType (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The source that provides/generates the assigned WWNs.
+
+npivDesiredNodeWwns (short, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The NPIV node WWNs to be extended from the original list of WWN numbers.
+
+npivDesiredPortWwns (short, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The NPIV port WWNs to be extended from the original list of WWN numbers.
+
+npivTemporaryDisabled (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ This property is used to enable or disable the NPIV capability on a desired virtual machine on a temporary basis.
+
+npivOnNonRdmDisks (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    This property is used to check whether NPIV can be enabled on a virtual machine with non-RDM disks in the configuration, so this potentially does not enable NPIV on VMFS disks.
+    This property is also used to check whether RDM is required to generate WWNs for a virtual machine.
+
+locationId (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Hash incorporating the virtual machine's config file location and the UUID of the host assigned to run the virtual machine.
+
+template (bool)
+^^^^^^^^^^^^^^^
+
+ Flag indicating whether or not a virtual machine is a template.
+
+guestId (str)
+^^^^^^^^^^^^^
+
+ Guest operating system configured on a virtual machine.
+
+.. _alternate_guest_name:
+
+alternateGuestName (str)
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+    Used as display name for the operating system if guestId is ``other`` or ``other-64``. See :ref:`guest_full_name`.
+
+annotation (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Description for the virtual machine.
+
+files (vim.vm.FileInfo)
+^^^^^^^^^^^^^^^^^^^^^^^
+
+ Information about the files associated with a virtual machine.
+ This information does not include files for specific virtual disks or snapshots.
+
+tools (vim.vm.ToolsConfigInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Configuration of VMware Tools running in the guest operating system.
+
+flags (vim.vm.FlagInfo)
+^^^^^^^^^^^^^^^^^^^^^^^
+
+ Additional flags for a virtual machine.
+
+consolePreferences (vim.vm.ConsolePreferences, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Legacy console viewer preferences when doing power operations.
+
+defaultPowerOps (vim.vm.DefaultPowerOpInfo)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Configuration of default power operations.
+
+hardware (vim.vm.VirtualHardware)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Processor, memory, and virtual devices for a virtual machine.
+
+cpuAllocation (vim.ResourceAllocationInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Resource limits for CPU.
+
+memoryAllocation (vim.ResourceAllocationInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Resource limits for memory.
+
+latencySensitivity (vim.LatencySensitivity, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The latency-sensitivity of the virtual machine.
+
+memoryHotAddEnabled (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Whether memory can be added while the virtual machine is running.
+
+cpuHotAddEnabled (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Whether virtual processors can be added while the virtual machine is running.
+
+cpuHotRemoveEnabled (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Whether virtual processors can be removed while the virtual machine is running.
+
+hotPlugMemoryLimit (long, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    The maximum amount of memory, in MB, that can be added to a running virtual machine.
+
+hotPlugMemoryIncrementSize (long, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    Memory, in MB, that can be added to a running virtual machine.
+
+cpuAffinity (vim.vm.AffinityInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Affinity settings for CPU.
+
+memoryAffinity (vim.vm.AffinityInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Affinity settings for memory.
+
+networkShaper (vim.vm.NetworkShaperInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Resource limits for network.
+
+extraConfig (vim.option.OptionValue, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Additional configuration information for the virtual machine.
+
+cpuFeatureMask (vim.host.CpuIdInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Specifies CPU feature compatibility masks that override the defaults from the ``GuestOsDescriptor`` of the virtual machine's guest OS.
+
+datastoreUrl (vim.vm.ConfigInfo.DatastoreUrlPair, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Enumerates the set of datastores that the virtual machine is stored on, as well as the URL identification for each of these.
+
+swapPlacement (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Virtual machine swapfile placement policy.
+
+bootOptions (vim.vm.BootOptions, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Configuration options for the boot behavior of the virtual machine.
+
+ftInfo (vim.vm.FaultToleranceConfigInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Fault tolerance settings for the virtual machine.
+
+vAppConfig (vim.vApp.VmConfigInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ vApp meta-data for the virtual machine.
+
+vAssertsEnabled (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether user-configured virtual asserts will be triggered during virtual machine replay.
+
+changeTrackingEnabled (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether changed block tracking for the virtual machine's disks is active.
+
+firmware (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Information about firmware type for the virtual machine.
+
+maxMksConnections (int, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates the maximum number of active remote display connections that the virtual machine will support.
+
+guestAutoLockEnabled (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    Indicates whether the guest operating system will log out any active sessions whenever there are no remote display connections open to the virtual machine.
+
+managedBy (vim.ext.ManagedByInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Specifies that the virtual machine is managed by a VC Extension.
+
+.. _memory_reservation_locked_to_max:
+
+memoryReservationLockedToMax (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ If set true, memory resource reservation for the virtual machine will always be equal to the virtual machine's memory size; increases in memory size will be rejected when a corresponding reservation increase is not possible.
+
+initialOverhead (vim.vm.ConfigInfo.OverheadInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Set of values to be used only to perform admission control when determining if a host has sufficient resources for the virtual machine to power on.
+
+nestedHVEnabled (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether the virtual machine is configured to use nested hardware-assisted virtualization.
+
+vPMCEnabled (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    Indicates whether the virtual machine has virtual CPU performance counters enabled.
+
+scheduledHardwareUpgradeInfo (vim.vm.ScheduledHardwareUpgradeInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Configuration of scheduled hardware upgrades and result from last attempt to run scheduled hardware upgrade.
+
+vFlashCacheReservation (long, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Specifies the total vFlash resource reservation for the vFlash caches associated with the virtual machine's virtual disks, in bytes.
+
+layout
+------
+
+Detailed information about the files that comprise the virtual machine.
+
+configFile (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ A list of files that makes up the configuration of the virtual machine (excluding the .vmx file, since that file is represented in the FileInfo).
+ These are relative paths from the configuration directory.
+ A slash is always used as a separator.
+ This list will typically include the NVRAM file, but could also include other meta-data files.
+
+logFile (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^
+
+ A list of files stored in the virtual machine's log directory.
+ These are relative paths from the ``logDirectory``.
+ A slash is always used as a separator.
+
+disk (vim.vm.FileLayout.DiskLayout, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Files making up each virtual disk.
+
+snapshot (vim.vm.FileLayout.SnapshotLayout, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Files of each snapshot.
+
+swapFile (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The swapfile specific to the virtual machine, if any. This is a complete datastore path, not a relative path.
+
+
+layoutEx
+--------
+
+Detailed information about the files that comprise the virtual machine.
+
+file (vim.vm.FileLayoutEx.FileInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Information about all the files that constitute the virtual machine including configuration files, disks, swap file, suspend file, log files, core files, memory file and so on.
+
+disk (vim.vm.FileLayoutEx.DiskLayout, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Layout of each virtual disk attached to the virtual machine.
+    For a virtual machine with snapshots, this property gives only those disks that are attached to it at the current point of running.
+
+snapshot (vim.vm.FileLayoutEx.SnapshotLayout, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Layout of each snapshot of the virtual machine.
+
+timestamp (datetime)
+^^^^^^^^^^^^^^^^^^^^
+
+ Time when values in this structure were last updated.
+
+storage (vim.vm.StorageInfo)
+----------------------------
+
+Storage space used by the virtual machine, split by datastore.
+
+perDatastoreUsage (vim.vm.StorageInfo.UsageOnDatastore, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Storage space used by the virtual machine on all datastores that it is located on.
+    Total storage space committed to the virtual machine across all datastores is simply an aggregate of the property ``committed``.
+
+timestamp (datetime)
+^^^^^^^^^^^^^^^^^^^^
+
+ Time when values in this structure were last updated.
+
+environmentBrowser (vim.EnvironmentBrowser)
+-------------------------------------------
+
+The current virtual machine's environment browser object.
+This contains information on all the configurations that can be used on the virtual machine.
+This is identical to the environment browser on the ComputeResource to which the virtual machine belongs.
+
+datastoreBrowser (vim.host.DatastoreBrowser)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ DatastoreBrowser to browse datastores that are available on this entity.
+
+resourcePool (vim.ResourcePool)
+-------------------------------
+
+The current resource pool that specifies resource allocation for the virtual machine.
+This property is set when a virtual machine is created or associated with a different resource pool.
+Returns null if the virtual machine is a template or the session has no access to the resource pool.
+
+summary (vim.ResourcePool.Summary)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Basic information about a resource pool.
+
+runtime (vim.ResourcePool.RuntimeInfo)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Runtime information about a resource pool.
+
+owner (vim.ComputeResource)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The ComputeResource to which this set of one or more nested resource pools belong.
+
+resourcePool (vim.ResourcePool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The set of child resource pools.
+
+vm (vim.VirtualMachine)
+^^^^^^^^^^^^^^^^^^^^^^^
+
+ The set of virtual machines associated with this resource pool.
+
+config (vim.ResourceConfigSpec)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Configuration of this resource pool.
+
+childConfiguration (vim.ResourceConfigSpec)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The resource configuration of all direct children (VirtualMachine and ResourcePool) of this resource group.
+
+parentVApp (vim.ManagedEntity)
+------------------------------
+
+Reference to the parent vApp.
+
+parent (vim.ManagedEntity)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Parent of this entity.
+    This value is null for the root object and for VirtualMachine objects that are part of a VirtualApp.
+
+customValue (vim.CustomFieldsManager.Value)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Custom field values.
+
+overallStatus (vim.ManagedEntity.Status)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ General health of this managed entity.
+
+configStatus (vim.ManagedEntity.Status)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The configStatus indicates whether or not the system has detected a configuration issue involving this entity.
+ For example, it might have detected a duplicate IP address or MAC address, or a host in a cluster might be out of ``compliance.property``.
+
+configIssue (vim.event.Event)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Current configuration issues that have been detected for this entity.
+
+effectiveRole (int)
+^^^^^^^^^^^^^^^^^^^
+
+ Access rights the current session has to this entity.
+
+permission (vim.AuthorizationManager.Permission)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ List of permissions defined for this entity.
+
+name (str)
+^^^^^^^^^^
+
+ Name of this entity, unique relative to its parent.
+    Any / (slash) or \ (backslash) character used in this name element will be escaped.
+ Similarly, any % (percent) character used in this name element will be escaped, unless it is used to start an escape sequence.
+ A slash is escaped as %2F or %2f. A backslash is escaped as %5C or %5c, and a percent is escaped as %25.
+
+disabledMethod (str)
+^^^^^^^^^^^^^^^^^^^^
+
+ List of operations that are disabled, given the current runtime state of the entity.
+ For example, a power-on operation always fails if a virtual machine is already powered on.
+
+recentTask (vim.Task)
+^^^^^^^^^^^^^^^^^^^^^
+
+ The set of recent tasks operating on this managed entity.
+ A task in this list could be in one of the four states: pending, running, success or error.
+
+declaredAlarmState (vim.alarm.AlarmState)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ A set of alarm states for alarms that apply to this managed entity.
+
+triggeredAlarmState (vim.alarm.AlarmState)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ A set of alarm states for alarms triggered by this entity or by its descendants.
+
+alarmActionsEnabled (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Whether alarm actions are enabled for this entity. True if enabled; false otherwise.
+
+tag (vim.Tag)
+^^^^^^^^^^^^^
+
+ The set of tags associated with this managed entity. Experimental. Subject to change.
+
+resourceConfig (vim.ResourceConfigSpec)
+---------------------------------------
+
+ The resource configuration for a virtual machine.
+
+entity (vim.ManagedEntity, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Reference to the entity with this resource specification: either a VirtualMachine or a ResourcePool.
+
+changeVersion (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The changeVersion is a unique identifier for a given version of the configuration. Each change to the configuration will update this value.
+ This is typically implemented as an ever increasing count or a time-stamp.
+
+
+lastModified (datetime, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Timestamp when the resources were last modified. This is ignored when the object is used to update a configuration.
+
+cpuAllocation (vim.ResourceAllocationInfo)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Resource allocation for CPU.
+
+memoryAllocation (vim.ResourceAllocationInfo)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Resource allocation for memory.
+
+runtime (vim.vm.RuntimeInfo)
+----------------------------
+
+Execution state and history for the virtual machine.
+
+device (vim.vm.DeviceRuntimeInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Per-device runtime info. This array will be empty if the host software does not provide runtime info for any of the device types currently in use by the virtual machine.
+
+host (vim.HostSystem, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The host that is responsible for running a virtual machine.
+ This property is null if the virtual machine is not running and is not assigned to run on a particular host.
+
+connectionState (vim.VirtualMachine.ConnectionState)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Indicates whether or not the virtual machine is available for management.
+
+powerState (vim.VirtualMachine.PowerState)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The current power state of the virtual machine.
+
+faultToleranceState (vim.VirtualMachine.FaultToleranceState)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The fault tolerance state of the virtual machine.
+
+dasVmProtection (vim.vm.RuntimeInfo.DasProtectionState, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The vSphere HA protection state for a virtual machine.
+ Property is unset if vSphere HA is not enabled.
+
+toolsInstallerMounted (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Flag to indicate whether or not the VMware Tools installer is mounted as a CD-ROM.
+
+suspendTime (datetime, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The timestamp when the virtual machine was most recently suspended.
+ This property is updated every time the virtual machine is suspended.
+
+bootTime (datetime, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The timestamp when the virtual machine was most recently powered on.
+ This property is updated when the virtual machine is powered on from the poweredOff state, and is cleared when the virtual machine is powered off.
+ This property is not updated when a virtual machine is resumed from a suspended state.
+
+suspendInterval (long, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The total time the virtual machine has been suspended since it was initially powered on.
+ This time excludes the current period, if the virtual machine is currently suspended.
+ This property is updated when the virtual machine resumes, and is reset to zero when the virtual machine is powered off.
+
+question (vim.vm.QuestionInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The current question, if any, that is blocking the virtual machine's execution.
+
+memoryOverhead (long, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The amount of memory resource (in bytes) that will be used by the virtual machine above its guest memory requirements.
+ This value is set if and only if the virtual machine is registered on a host that supports memory resource allocation features.
+ For powered off VMs, this is the minimum overhead required to power on the VM on the registered host.
+ For powered on VMs, this is the current overhead reservation, a value which is almost always larger than the minimum overhead, and which grows with time.
+
+maxCpuUsage (int, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Current upper-bound on CPU usage.
+    The upper-bound is based on the host the virtual machine is currently running on, as well as limits configured on the virtual machine itself or any parent resource pool.
+ Valid while the virtual machine is running.
+
+maxMemoryUsage (int, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Current upper-bound on memory usage.
+ The upper-bound is based on memory configuration of the virtual machine, as well as limits configured on the virtual machine itself or any parent resource pool.
+ Valid while the virtual machine is running.
+
+numMksConnections (int)
+^^^^^^^^^^^^^^^^^^^^^^^
+
+ Number of active MKS connections to the virtual machine.
+
+recordReplayState (vim.VirtualMachine.RecordReplayState)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Record / replay state of the virtual machine.
+
+cleanPowerOff (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ For a powered off virtual machine, indicates whether the virtual machine's last shutdown was an orderly power off or not.
+ Unset if the virtual machine is running or suspended.
+
+needSecondaryReason (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ If set, indicates the reason the virtual machine needs a secondary.
+
+onlineStandby (bool)
+^^^^^^^^^^^^^^^^^^^^
+
+ This property indicates whether the guest has gone into one of the s1, s2 or s3 standby modes. False indicates the guest is awake.
+
+minRequiredEVCModeKey (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ For a powered-on or suspended virtual machine in a cluster with Enhanced VMotion Compatibility (EVC) enabled, this identifies the least-featured EVC mode (among those for the appropriate CPU vendor) that could admit the virtual machine.
+ This property will be unset if the virtual machine is powered off or is not in an EVC cluster.
+ This property may be used as a general indicator of the CPU feature baseline currently in use by the virtual machine.
+ However, the virtual machine may be suppressing some of the features present in the CPU feature baseline of the indicated mode, either explicitly (in the virtual machine's configured ``cpuFeatureMask``) or implicitly (in the default masks for the ``GuestOsDescriptor`` appropriate for the virtual machine's configured guest OS).
+
+consolidationNeeded (bool)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Whether any disk of the virtual machine requires consolidation.
+ This can happen for example when a snapshot is deleted but its associated disk is not committed back to the base disk.
+
+offlineFeatureRequirement (vim.vm.FeatureRequirement, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ These requirements must have equivalent host capabilities ``featureCapability`` in order to power on.
+
+featureRequirement (vim.vm.FeatureRequirement, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ These requirements must have equivalent host capabilities ``featureCapability`` in order to power on, resume, or migrate to the host.
+
+featureMask (vim.host.FeatureMask, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The masks applied to an individual virtual machine as a result of its configuration.
+
+vFlashCacheAllocation (long, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Specifies the total vFlash resource, in bytes, allocated for the vFlash caches associated with the VM's VMDKs while the VM is powered on.
+
+
+guest (vim.vm.GuestInfo)
+------------------------
+
+Information about VMware Tools and about the virtual machine from the perspective of VMware Tools.
+Information about the guest operating system is available in VirtualCenter.
+Guest operating system information reflects the last known state of the virtual machine.
+For powered on machines, this is current information.
+For powered off machines, this is the last recorded state before the virtual machine was powered off.
+
+toolsStatus (vim.vm.GuestInfo.ToolsStatus, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Current status of VMware Tools in the guest operating system, if known.
+
+toolsVersionStatus (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Current version status of VMware Tools in the guest operating system, if known.
+
+toolsVersionStatus2 (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Current version status of VMware Tools in the guest operating system, if known.
+
+toolsRunningStatus (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Current running status of VMware Tools in the guest operating system, if known.
+
+toolsVersion (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Current version of VMware Tools, if known.
+
+guestId (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^
+
+ Guest operating system identifier (short name), if known.
+
+guestFamily (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Guest operating system family, if known.
+
+guestFullName (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ See :ref:`guest_full_name`.
+
+hostName (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Hostname of the guest operating system, if known.
+
+ipAddress (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Primary IP address assigned to the guest operating system, if known.
+
+net (vim.vm.GuestInfo.NicInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Guest information about network adapters, if known.
+
+ipStack (vim.vm.GuestInfo.StackInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Guest information about IP networking stack, if known.
+
+disk (vim.vm.GuestInfo.DiskInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Guest information about disks.
+ You can obtain Linux guest disk information for the following file system types only: Ext2, Ext3, Ext4, ReiserFS, ZFS, NTFS, VFAT, UFS, PCFS, HFS, and MS-DOS.
+
+screen (vim.vm.GuestInfo.ScreenInfo, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Guest screen resolution info, if known.
+
+guestState (str)
+^^^^^^^^^^^^^^^^
+
+ Operation mode of guest operating system.
+
+appHeartbeatStatus (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Application heartbeat status.
+
+appState (str, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Application state.
+ If vSphere HA is enabled, the virtual machine is configured for Application Monitoring, and this field's value is ``appStateNeedReset``, then HA will attempt to reset the virtual machine immediately.
+ Some system conditions may delay the immediate reset; the reset will be performed as soon as allowed by vSphere HA and ESX.
+ If the value changes to ``appStateOk`` while the reset is pending, the reset will be cancelled.
+
+guestOperationsReady (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Guest Operations availability. If true, the virtual machine is ready to process guest operations.
+
+interactiveGuestOperationsReady (bool, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Interactive Guest Operations availability. If true, the virtual machine is ready to process guest operations as the user interacting with the guest desktop.
+
+generationInfo (vim.vm.GuestInfo.NamespaceGenerationInfo, privilege: VirtualMachine.Namespace.EventNotify, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ A list of namespaces and their corresponding generation numbers. Only namespaces with non-zero ``maxSizeEventsFromGuest`` are guaranteed to be present here.
+
+
+summary (vim.vm.Summary)
+------------------------
+
+ Basic information about the virtual machine.
+
+vm (vim.VirtualMachine, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Reference to the virtual machine managed object.
+
+runtime (vim.vm.RuntimeInfo)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Runtime and state information of a running virtual machine.
+ Most of this information is also available when a virtual machine is powered off.
+ In that case, it contains information from the last run, if available.
+
+guest (vim.vm.Summary.GuestSummary, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Guest operating system and VMware Tools information.
+
+config (vim.vm.Summary.ConfigSummary)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Basic configuration information about the virtual machine.
+ This information is not available when the virtual machine is unavailable, for instance, when it is being created or deleted.
+
+storage (vim.vm.Summary.StorageSummary, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Storage information of the virtual machine.
+
+quickStats (vim.vm.Summary.QuickStats)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ A set of statistics that are typically updated with near real-time regularity.
+
+overallStatus (vim.ManagedEntity.Status)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Overall alarm status on this node.
+
+customValue (vim.CustomFieldsManager.Value, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Custom field values.
+
+
+datastore (vim.Datastore)
+-------------------------
+
+ A collection of references to the subset of datastore objects in the datacenter that is used by the virtual machine.
+
+info (vim.Datastore.Info)
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Specific information about the datastore.
+
+summary (vim.Datastore.Summary)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Global properties of the datastore.
+
+host (vim.Datastore.HostMount)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Hosts attached to this datastore.
+
+vm (vim.VirtualMachine)
+^^^^^^^^^^^^^^^^^^^^^^^
+
+ Virtual machines stored on this datastore.
+
+browser (vim.host.DatastoreBrowser)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ DatastoreBrowser used to browse this datastore.
+
+capability (vim.Datastore.Capability)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Capabilities of this datastore.
+
+iormConfiguration (vim.StorageResourceManager.IORMConfigInfo)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Configuration of storage I/O resource management for the datastore.
+ Currently, VMware only supports storage I/O resource management on VMFS volumes of a datastore.
+ This configuration may not be available if the datastore is not accessible from any host, or if the datastore does not have a VMFS volume.
+
+network (vim.Network)
+---------------------
+
+ A collection of references to the subset of network objects in the datacenter that is used by the virtual machine.
+
+name (str)
+^^^^^^^^^^
+
+ Name of this network.
+
+summary (vim.Network.Summary)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Properties of a network.
+
+host (vim.HostSystem)
+^^^^^^^^^^^^^^^^^^^^^
+
+ Hosts attached to this network.
+
+vm (vim.VirtualMachine)
+^^^^^^^^^^^^^^^^^^^^^^^
+
+ Virtual machines using this network.
+
+
+snapshot (vim.vm.SnapshotInfo)
+-------------------------------
+
+Current snapshot and tree.
+The property is valid if snapshots have been created for the virtual machine.
+
+currentSnapshot (vim.vm.Snapshot, optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Current snapshot of the virtual machine. This property is set by calling ``Snapshot.revert`` or ``VirtualMachine.createSnapshot``.
+ This property will be empty when the working snapshot is at the root of the snapshot tree.
+
+rootSnapshotList (vim.vm.SnapshotTree)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Data for the entire set of snapshots for one virtual machine.
+
+rootSnapshot (vim.vm.Snapshot)
+------------------------------
+
+The roots of all snapshot trees for the virtual machine.
+
+config (vim.vm.ConfigInfo)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Information about the configuration of the virtual machine when this snapshot was taken.
+ The datastore paths for the virtual machine disks point to the head of the disk chain that represents the disk at this given snapshot.
+
+childSnapshot (vim.vm.Snapshot)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ All snapshots for which this snapshot is the parent.
+
+guestHeartbeatStatus (vim.ManagedEntity.Status)
+-----------------------------------------------
+
+ The guest heartbeat.
+
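+To consume these attributes from the VMware dynamic inventory plugin, list their dotted paths in the ``properties`` option of your inventory source. The following is a sketch only; the credentials are placeholders and the selected properties are examples taken from the sections above:
+
+.. code-block:: yaml
+
+  # vmware.yml -- minimal vmware_vm_inventory sketch; values are placeholders
+  plugin: vmware_vm_inventory
+  hostname: vcenter-domain.example.com
+  username: administrator@vsphere.local
+  password: StrongPassword
+  validate_certs: true
+  with_tags: false
+  properties:
+    - name
+    - guest.ipAddress
+    - summary.runtime.powerState
+    - config.guestId
+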
+.. seealso::
+
+ `pyVmomi <https://github.com/vmware/pyvmomi>`_
+ The GitHub Page of pyVmomi
+ `pyVmomi Issue Tracker <https://github.com/vmware/pyvmomi/issues>`_
+ The issue tracker for the pyVmomi project
+   `vSphere Automation SDK for Python <https://github.com/vmware/vsphere-automation-sdk-python>`_
+ The GitHub Page of vSphere Automation SDK for Python
+ `vSphere Automation SDK Issue Tracker <https://github.com/vmware/vsphere-automation-sdk-python/issues>`_
+ The issue tracker for vSphere Automation SDK for Python
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_vault`
+ Using Vault in playbooks
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_module_reference.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_module_reference.rst
new file mode 100644
index 00000000..3c7de1dd
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_module_reference.rst
@@ -0,0 +1,9 @@
+:orphan:
+
+.. _vmware_ansible_module_index:
+
+***************************
+Ansible VMware Module Guide
+***************************
+
+This will be a listing similar to the module index in our core docs.
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_requirements.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_requirements.rst
new file mode 100644
index 00000000..45e3ec8f
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_requirements.rst
@@ -0,0 +1,44 @@
+.. _vmware_requirements:
+
+********************
+VMware Prerequisites
+********************
+
+.. contents::
+ :local:
+
+Installing SSL Certificates
+===========================
+
+All vCenter and ESXi servers require SSL encryption on all connections to enforce secure communication. You must enable SSL encryption for Ansible by installing the server's SSL certificates on your Ansible control node or delegate node.
+
+If the SSL certificate of your vCenter or ESXi server is not correctly installed on your Ansible control node, you will see the following warning when using Ansible VMware modules:
+
+``Unable to connect to vCenter or ESXi API at xx.xx.xx.xx on TCP/443: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)``
+
+To install the SSL certificate for your VMware server and run your Ansible VMware modules in encrypted mode, follow the instructions below for the type of VMware server you are running.
+
+Installing vCenter SSL certificates for Ansible
+-----------------------------------------------
+
+* From any web browser, go to the base URL of the vCenter Server without a port number, for example ``https://vcenter-domain.example.com``.
+
+* Click the "Download trusted root CA certificates" link at the bottom of the grey box on the right and download the file.
+
+* Change the extension of the file to .zip. The file is a ZIP file of all root certificates and all CRLs.
+
+* Extract the contents of the zip file. The extracted directory contains a ``.certs`` directory that contains two types of files. Files with a number as the extension (.0, .1, and so on) are root certificates; files with an extension that begins with ``r`` (.r0, .r1, and so on) are the associated CRL files.
+
+* Install the certificate files as trusted certificates using the process that is appropriate for your operating system (an optional automation sketch follows this list).
+
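+The manual steps above can also be automated from a playbook. The following is a minimal sketch only, assuming a Debian/Ubuntu control node; the download URL, the extracted directory layout, and the trust-store paths are assumptions you should adapt to your environment:
+
+.. code-block:: yaml
+
+  # Sketch only: URL, directory layout, and trust store are assumptions.
+  - name: Download the vCenter root CA bundle (copy the real link from the vCenter landing page)
+    ansible.builtin.get_url:
+      url: https://vcenter-domain.example.com/certs/download.zip
+      dest: /tmp/vcenter-certs.zip
+      validate_certs: false  # the CA is not trusted yet, so verification would fail here
+
+  - name: Create an extraction directory
+    ansible.builtin.file:
+      path: /tmp/vcenter-certs
+      state: directory
+
+  - name: Extract the bundle
+    ansible.builtin.unarchive:
+      src: /tmp/vcenter-certs.zip
+      dest: /tmp/vcenter-certs
+      remote_src: true
+
+  - name: Install the root certificates into the trust store (Debian/Ubuntu layout)
+    become: true
+    ansible.builtin.shell: |
+      # the extracted directory may be named "certs" or ".certs" depending on the bundle
+      for f in /tmp/vcenter-certs/certs/*.0; do
+        cp "$f" "/usr/local/share/ca-certificates/$(basename "$f").crt"
+      done
+      update-ca-certificates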
+
+Installing ESXi SSL certificates for Ansible
+--------------------------------------------
+
+* Enable the SSH service on ESXi either by using the Ansible VMware module `vmware_host_service_manager <https://github.com/ansible-collections/vmware/blob/main/plugins/modules/vmware_host_service_manager.py>`_ (a task sketch follows this list) or manually through the vSphere Web interface.
+
+* SSH to the ESXi server using administrative credentials, and navigate to the ``/etc/vmware/ssl`` directory.
+
+* Secure copy (SCP) the ``rui.crt`` file located in the ``/etc/vmware/ssl`` directory to the Ansible control node.
+
+* Install the certificate file as a trusted certificate using the process that is appropriate for your operating system.
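+
+As a sketch, the SSH service in the first step could be enabled with a task like the one below. The inventory variables are placeholders, and ``TSM-SSH`` is the service name ESXi uses for SSH; verify the module options against your installed collection version:
+
+.. code-block:: yaml
+
+  # Sketch only: enables the SSH service on a single ESXi host.
+  - name: Enable SSH on the ESXi host
+    vmware_host_service_manager:
+      hostname: "{{ vcenter_hostname }}"
+      username: "{{ vcenter_username }}"
+      password: "{{ vcenter_password }}"
+      esxi_hostname: "{{ esxi_hostname }}"
+      service_name: TSM-SSH
+      state: present
+    delegate_to: localhost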
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_scenarios.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_scenarios.rst
new file mode 100644
index 00000000..b044740b
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_scenarios.rst
@@ -0,0 +1,16 @@
+.. _vmware_scenarios:
+
+****************************
+Ansible for VMware Scenarios
+****************************
+
+These scenarios teach you how to accomplish common VMware tasks using Ansible. To get started, please select the task you want to accomplish.
+
+.. toctree::
+ :maxdepth: 1
+
+ scenario_clone_template
+ scenario_rename_vm
+ scenario_remove_vm
+ scenario_find_vm_folder
+ scenario_vmware_http
diff --git a/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_troubleshooting.rst b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_troubleshooting.rst
new file mode 100644
index 00000000..3ca5eac2
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_troubleshooting.rst
@@ -0,0 +1,102 @@
+.. _vmware_troubleshooting:
+
+**********************************
+Troubleshooting Ansible for VMware
+**********************************
+
+.. contents:: Topics
+
+This section lists things that can go wrong and possible ways to fix them.
+
+Debugging Ansible for VMware
+============================
+
+When debugging or creating a new issue, you will need information about your VMware infrastructure. You can get this information using
+`govc <https://github.com/vmware/govmomi/tree/master/govc>`_. For example:
+
+
+.. code-block:: bash
+
+ $ export GOVC_USERNAME=ESXI_OR_VCENTER_USERNAME
+ $ export GOVC_PASSWORD=ESXI_OR_VCENTER_PASSWORD
+ $ export GOVC_URL=https://ESXI_OR_VCENTER_HOSTNAME:443
+ $ govc find /
+
+Known issues with Ansible for VMware
+====================================
+
+
+Network settings with vmware_guest in Ubuntu 18.04
+--------------------------------------------------
+
+Setting the network with ``vmware_guest`` in Ubuntu 18.04 is known to be broken, due to missing support for ``netplan`` in the ``open-vm-tools``.
+This issue is tracked via:
+
+* https://github.com/vmware/open-vm-tools/issues/240
+* https://github.com/ansible/ansible/issues/41133
+
+Potential Workarounds
+^^^^^^^^^^^^^^^^^^^^^
+
+There are several workarounds for this issue.
+
+1) Modify the Ubuntu 18.04 images and install ``ifupdown`` in them via ``sudo apt install ifupdown``.
+   In that case you also need to remove ``netplan`` via ``sudo apt remove netplan.io`` and stop ``systemd-networkd`` via ``sudo systemctl disable systemd-networkd``.
+
+2) Generate the ``systemd-networkd`` files with a task in your VMware Ansible role (a sketch of the ``network.j2`` template referenced below follows these tasks):
+
+.. code-block:: yaml
+
+ - name: make sure cache directory exists
+ file: path="{{ inventory_dir }}/cache" state=directory
+ delegate_to: localhost
+
+ - name: generate network templates
+ template: src=network.j2 dest="{{ inventory_dir }}/cache/{{ inventory_hostname }}.network"
+ delegate_to: localhost
+
+ - name: copy generated files to vm
+ vmware_guest_file_operation:
+ hostname: "{{ vmware_general.hostname }}"
+ username: "{{ vmware_username }}"
+ password: "{{ vmware_password }}"
+ datacenter: "{{ vmware_general.datacenter }}"
+ validate_certs: "{{ vmware_general.validate_certs }}"
+ vm_id: "{{ inventory_hostname }}"
+ vm_username: root
+ vm_password: "{{ template_password }}"
+ copy:
+ src: "{{ inventory_dir }}/cache/{{ inventory_hostname }}.network"
+ dest: "/etc/systemd/network/ens160.network"
+ overwrite: False
+ delegate_to: localhost
+
+ - name: restart systemd-networkd
+ vmware_vm_shell:
+ hostname: "{{ vmware_general.hostname }}"
+ username: "{{ vmware_username }}"
+ password: "{{ vmware_password }}"
+ datacenter: "{{ vmware_general.datacenter }}"
+ folder: /vm
+      vm_id: "{{ inventory_hostname }}"
+ vm_username: root
+ vm_password: "{{ template_password }}"
+ vm_shell: /bin/systemctl
+ vm_shell_args: " restart systemd-networkd"
+ delegate_to: localhost
+
+ - name: restart systemd-resolved
+ vmware_vm_shell:
+ hostname: "{{ vmware_general.hostname }}"
+ username: "{{ vmware_username }}"
+ password: "{{ vmware_password }}"
+ datacenter: "{{ vmware_general.datacenter }}"
+ folder: /vm
+      vm_id: "{{ inventory_hostname }}"
+ vm_username: root
+ vm_password: "{{ template_password }}"
+ vm_shell: /bin/systemctl
+ vm_shell_args: " restart systemd-resolved"
+ delegate_to: localhost
+
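+The ``network.j2`` template referenced by the tasks above is not shown in this document. As a minimal sketch for a static address (the interface name and the variable names are assumptions to adapt):
+
+.. code-block:: ini
+
+  # network.j2 -- systemd-networkd unit for the VM's primary interface
+  [Match]
+  Name=ens160
+
+  [Network]
+  Address={{ ip_address }}/{{ cidr_prefix }}
+  Gateway={{ gateway }}
+  DNS={{ dns_server }}
+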
+3) Wait for ``netplan`` support in ``open-vm-tools``.