Diffstat (limited to 'docs/docsite/rst/scenario_guides')
-rw-r--r--  docs/docsite/rst/scenario_guides/cloud_guides.rst  25
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_aci.rst  659
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_alicloud.rst  133
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_aws.rst  6
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_azure.rst  484
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_cloudstack.rst  377
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_docker.rst  6
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_gce.rst  302
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_infoblox.rst  292
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_meraki.rst  193
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_online.rst  41
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_oracle.rst  103
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_packet.rst  311
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_rax.rst  809
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_scaleway.rst  293
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_vagrant.rst  136
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_vmware_rest.rst  20
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_vultr.rst  171
-rw-r--r--  docs/docsite/rst/scenario_guides/guides.rst  44
-rw-r--r--  docs/docsite/rst/scenario_guides/network_guides.rst  16
-rw-r--r--  docs/docsite/rst/scenario_guides/scenario_template.rst  53
-rw-r--r--  docs/docsite/rst/scenario_guides/virt_guides.rst  14
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/authentication.rst  47
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/collect_information.rst  108
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/create_vm.rst  39
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/installation.rst  44
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/run_a_vm.rst  52
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Add_a_floppy_disk_drive.result.json  15
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Add_a_floppy_disk_drive.task.yaml  5
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_a_VM_to_a_dvswitch.result.json  23
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_a_VM_to_a_dvswitch.task.yaml  9
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_an_ISO_image_to_a_guest_VM.result.json  19
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_an_ISO_image_to_a_guest_VM.task.yaml  12
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_clusters.result.json  11
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_clusters.task.yaml  3
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders.result.json  10
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders.task.yaml  3
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders_with_the_type_VIRTUAL_MACHINE_and_called_vm.result.json  10
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders_with_the_type_VIRTUAL_MACHINE_and_called_vm.task.yaml  6
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_MANUAL.result.json  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_MANUAL.task.yaml  5
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_UPGRADE_AT_POWER_CYCLE.result.json  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_UPGRADE_AT_POWER_CYCLE.task.yaml  5
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_information_about_a_specific_VM.result.json  77
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_information_about_a_specific_VM.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_the_hardware_information.result.json  8
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_the_hardware_information.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_SATA_adapter_at_PCI_slot_34.result.json  10
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_SATA_adapter_at_PCI_slot_34.task.yaml  5
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_VM.result.json  77
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_VM.task.yaml  14
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_new_disk.result.json  17
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_new_disk.task.yaml  7
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Dedicate_one_core_to_the_VM.result.json  10
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Dedicate_one_core_to_the_VM.task.yaml  5
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_VM_storage_policy.result.json  6
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_VM_storage_policy.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_filesystem_information.result.json  13
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_filesystem_information.task.yaml  8
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_identity_information.result.json  14
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_identity_information.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_interfaces_information.result.json  23
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_interfaces_information.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_routes_information.result.json  31
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_routes_information.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_networking_information.result.json  17
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_networking_information.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_power_information.result.json  6
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_power_information.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Increase_the_memory_of_a_VM.result.json  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Increase_the_memory_of_a_VM.task.yaml  5
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_SCSI_adapter_of_a_given_VM.result.json  14
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_SCSI_adapter_of_a_given_VM.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_cdrom_devices_on_the_guest.result.json  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_cdrom_devices_on_the_guest.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Look_up_the_VM_called_test_vm1_in_the_inventory.result.json  12
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Look_up_the_VM_called_test_vm1_in_the_inventory.task.yaml  5
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Remove_SATA_adapter_at_PCI_slot_34.result.json  3
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_a_list_of_all_the_datastores.result.json  26
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_a_list_of_all_the_datastores.task.yaml  3
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_details_about_the_first_cluster.result.json  8
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_details_about_the_first_cluster.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_disk_information_from_the_VM.result.json  18
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_disk_information_from_the_VM.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_memory_information_from_the_VM.result.json  7
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_memory_information_from_the_VM.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_NIC's_start_connected_flag_on.result.json  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_NIC's_start_connected_flag_on.task.yaml  5
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_power_of_the_VM_on.result.json  3
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_power_of_the_VM_on.task.yaml  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Upgrade_the_VM_hardware_version.result.json  4
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Upgrade_the_VM_hardware_version.task.yaml  6
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Wait_until_my_VM_is_ready.result.json  13
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Wait_until_my_VM_is_ready.task.yaml  9
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/collect_a_list_of_the_datacenters.result.json  9
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/collect_a_list_of_the_datacenters.task.yaml  3
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_hardware_tuning.rst  150
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_info.rst  129
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_tool_configuration.rst  45
-rw-r--r--  docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_tool_information.rst  90
100 files changed, 5905 insertions(+), 0 deletions(-)
diff --git a/docs/docsite/rst/scenario_guides/cloud_guides.rst b/docs/docsite/rst/scenario_guides/cloud_guides.rst
new file mode 100644
index 0000000..a0e6e8e
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/cloud_guides.rst
@@ -0,0 +1,25 @@
+.. _cloud_guides:
+
+**************************
+Legacy Public Cloud Guides
+**************************
+
+The legacy guides in this section may be out of date. They cover using Ansible with a range of public cloud platforms. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+
+Guides for using public clouds are moving into collections. Please update your links for the following guides:
+
+:ref:`ansible_collections.amazon.aws.docsite.aws_intro`
+
+.. toctree::
+ :maxdepth: 1
+
+ guide_alicloud
+ guide_cloudstack
+ guide_gce
+ guide_azure
+ guide_online
+ guide_oracle
+ guide_packet
+ guide_rax
+ guide_scaleway
+ guide_vultr
diff --git a/docs/docsite/rst/scenario_guides/guide_aci.rst b/docs/docsite/rst/scenario_guides/guide_aci.rst
new file mode 100644
index 0000000..e2e6613
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_aci.rst
@@ -0,0 +1,659 @@
+.. _aci_guide:
+
+Cisco ACI Guide
+===============
+
+
+.. _aci_guide_intro:
+
+What is Cisco ACI?
+-------------------
+
+Application Centric Infrastructure (ACI)
+........................................
+The Cisco Application Centric Infrastructure (ACI) allows application requirements to define the network. This architecture simplifies, optimizes, and accelerates the entire application deployment life cycle.
+
+
+Application Policy Infrastructure Controller (APIC)
+...................................................
+The APIC manages the scalable ACI multi-tenant fabric. The APIC provides a unified point of automation and management, policy programming, application deployment, and health monitoring for the fabric. The APIC, which is implemented as a replicated synchronized clustered controller, optimizes performance, supports any application anywhere, and provides unified operation of the physical and virtual infrastructure.
+
+The APIC enables network administrators to easily define the optimal network for applications. Data center operators can clearly see how applications consume network resources, easily isolate and troubleshoot application and infrastructure problems, and monitor and profile resource usage patterns.
+
+The Cisco Application Policy Infrastructure Controller (APIC) API enables applications to directly connect with a secure, shared, high-performance resource pool that includes network, compute, and storage capabilities.
+
+
+ACI Fabric
+..........
+The Cisco Application Centric Infrastructure (ACI) Fabric includes Cisco Nexus 9000 Series switches with the APIC to run in the leaf/spine ACI fabric mode. These switches form a "fat-tree" network by connecting each leaf node to each spine node; all other devices connect to the leaf nodes. The APIC manages the ACI fabric.
+
+The ACI fabric provides consistent low-latency forwarding across high-bandwidth links (40 Gbps, with a 100-Gbps future capability). Traffic with the source and destination on the same leaf switch is handled locally, and all other traffic travels from the ingress leaf to the egress leaf through a spine switch. Although this architecture appears as two hops from a physical perspective, it is actually a single Layer 3 hop because the fabric operates as a single Layer 3 switch.
+
+The ACI fabric object-oriented operating system (OS) runs on each Cisco Nexus 9000 Series node. It enables programming of objects for each configurable element of the system. The ACI fabric OS renders policies from the APIC into a concrete model that runs in the physical infrastructure. The concrete model is analogous to compiled software; it is the form of the model that the switch operating system can execute.
+
+All the switch nodes contain a complete copy of the concrete model. When an administrator creates a policy in the APIC that represents a configuration, the APIC updates the logical model. The APIC then performs the intermediate step of creating a fully elaborated policy that it pushes into all the switch nodes where the concrete model is updated.
+
+The APIC is responsible for fabric activation, switch firmware management, network policy configuration, and instantiation. While the APIC acts as the centralized policy and network management engine for the fabric, it is completely removed from the data path, including the forwarding topology. Therefore, the fabric can still forward traffic even when communication with the APIC is lost.
+
+
+More information
+................
+Various resources exist to start learning ACI; here is a list of interesting articles from the community.
+
+- `Adam Raffe: Learning ACI <https://adamraffe.com/learning-aci/>`_
+- `Luca Relandini: ACI for dummies <https://lucarelandini.blogspot.be/2015/03/aci-for-dummies.html>`_
+- `Cisco DevNet Learning Labs about ACI <https://learninglabs.cisco.com/labs/tags/ACI>`_
+
+
+.. _aci_guide_modules:
+
+Using the ACI modules
+---------------------
+The Ansible ACI modules provide a user-friendly interface for managing your ACI environment using Ansible playbooks.
+
+For instance, you can ensure that a specific tenant exists using the following Ansible task with the aci_tenant module:
+
+.. code-block:: yaml
+
+ - name: Ensure tenant customer-xyz exists
+ aci_tenant:
+ host: my-apic-1
+ username: admin
+ password: my-password
+
+ tenant: customer-xyz
+ description: Customer XYZ
+ state: present
+
+A complete list of existing ACI modules is available on the content tab of the `ACI collection on Ansible Galaxy <https://galaxy.ansible.com/cisco/aci>`_.
+
+If you want to learn how to write your own ACI modules to contribute, look at the :ref:`Developing Cisco ACI modules <aci_dev_guide>` section.
+
+Querying ACI configuration
+..........................
+
+A module can also be used to query a specific object.
+
+.. code-block:: yaml
+
+ - name: Query tenant customer-xyz
+ aci_tenant:
+ host: my-apic-1
+ username: admin
+ password: my-password
+
+ tenant: customer-xyz
+ state: query
+ register: my_tenant
+
+Or query all objects.
+
+.. code-block:: yaml
+
+ - name: Query all tenants
+ aci_tenant:
+ host: my-apic-1
+ username: admin
+ password: my-password
+
+ state: query
+ register: all_tenants
+
+After registering the return values of the aci_tenant task as shown above, you can access all tenant information from the ``all_tenants`` variable.
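+
+For example, you can loop over the returned objects with a debug task (a minimal sketch; the exact structure of ``current`` is documented for each module):
+
+.. code-block:: yaml
+
+    - name: Show the name of every tenant
+      debug:
+        msg: "{{ item.fvTenant.attributes.name }}"
+      loop: "{{ all_tenants.current }}"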
+
+
+Running on the controller locally
+.................................
+As originally designed, Ansible modules are shipped to, and run on, the remote target(s). However, the ACI modules (like most network-related modules) do not run on the network devices or the controller (in this case the APIC); instead, they talk directly to the APIC's REST interface.
+
+For this very reason, the modules need to run on the local Ansible controller (or be delegated to another system that *can* connect to the APIC).
+
+
+Gathering facts
+```````````````
+Because we run the modules on the Ansible controller, gathering facts will not work. That is why it is mandatory to disable fact gathering when using these ACI modules. You can do this globally in your ``ansible.cfg`` or by adding ``gather_facts: false`` to every play.
+
+.. code-block:: yaml
+ :emphasize-lines: 3
+
+ - name: Another play in my playbook
+ hosts: my-apic-1
+ gather_facts: false
+ tasks:
+ - name: Create a tenant
+ aci_tenant:
+ ...
+
+Delegating to localhost
+```````````````````````
+So let us assume we have our target configured in the inventory, using the FQDN as the ``ansible_host`` value, as shown below.
+
+.. code-block:: yaml
+ :emphasize-lines: 3
+
+ apics:
+ my-apic-1:
+ ansible_host: apic01.fqdn.intra
+ ansible_user: admin
+ ansible_password: my-password
+
+One way to set this up is to add to every task the directive: ``delegate_to: localhost``.
+
+.. code-block:: yaml
+ :emphasize-lines: 8
+
+ - name: Query all tenants
+ aci_tenant:
+ host: '{{ ansible_host }}'
+ username: '{{ ansible_user }}'
+ password: '{{ ansible_password }}'
+
+ state: query
+ delegate_to: localhost
+ register: all_tenants
+
+If you forget to add this directive, Ansible will attempt to connect to the APIC using SSH, copy the module over, and run it remotely. This will fail with a clear error, yet may be confusing to some.
+
+
+Using the local connection method
+`````````````````````````````````
+Another frequently used option is to tie the ``local`` connection method to this target so that every subsequent task for this target uses the local connection method (hence runs locally, rather than over SSH).
+
+In this case the inventory may look like this:
+
+.. code-block:: yaml
+ :emphasize-lines: 6
+
+ apics:
+ my-apic-1:
+ ansible_host: apic01.fqdn.intra
+ ansible_user: admin
+ ansible_password: my-password
+ ansible_connection: local
+
+The tasks themselves then do not need anything special added.
+
+.. code-block:: yaml
+
+ - name: Query all tenants
+ aci_tenant:
+ host: '{{ ansible_host }}'
+ username: '{{ ansible_user }}'
+ password: '{{ ansible_password }}'
+
+ state: query
+ register: all_tenants
+
+.. hint:: For clarity we have added ``delegate_to: localhost`` to all the examples in the module documentation. This helps ensure first-time users can easily copy and paste parts and make them work with a minimum of effort.
+
+
+Common parameters
+.................
+Every Ansible ACI module accepts the following parameters that influence the module's communication with the APIC REST API (a combined example follows the list):
+
+ host
+ Hostname or IP address of the APIC.
+
+ port
+ Port to use for communication. (Defaults to ``443`` for HTTPS, and ``80`` for HTTP)
+
+ username
+ User name used to log on to the APIC. (Defaults to ``admin``)
+
+ password
+ Password for ``username`` to log on to the APIC, using password-based authentication.
+
+ private_key
+        Private key for ``username`` to log on to the APIC, using signature-based authentication.
+        This can either be the raw private key content (including header/footer) or a file that stores the key content.
+ *New in version 2.5*
+
+ certificate_name
+        Name of the certificate in the ACI Web GUI.
+        (Defaults to either the ``username`` value or the ``private_key`` file's basename.)
+ *New in version 2.5*
+
+ timeout
+ Timeout value for socket-level communication.
+
+ use_proxy
+ Use system proxy settings. (Defaults to ``yes``)
+
+ use_ssl
+ Use HTTPS or HTTP for APIC REST communication. (Defaults to ``yes``)
+
+ validate_certs
+ Validate certificate when using HTTPS communication. (Defaults to ``yes``)
+
+ output_level
+ Influence the level of detail ACI modules return to the user. (One of ``normal``, ``info`` or ``debug``) *New in version 2.5*
+
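+Bringing these together, a task that sets several of these connection parameters explicitly might look like this (a minimal sketch):
+
+.. code-block:: yaml
+
+    - name: Query all tenants using explicit connection parameters
+      aci_tenant:
+        host: my-apic-1
+        port: 443
+        username: admin
+        password: my-password
+        use_ssl: true
+        validate_certs: true
+        timeout: 30
+        output_level: info
+        state: query
+      delegate_to: localhost
+      register: all_tenants
+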
+
+Proxy support
+.............
+By default, if an environment variable ``<protocol>_proxy`` is set on the target host, requests will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see :ref:`playbooks_environment`), or by using the ``use_proxy`` module parameter.
+
+HTTP redirects can redirect from HTTP to HTTPS, so ensure that the proxy environment for both protocols is correctly configured.
+
+If proxy support is not needed, but the system may have it configured nevertheless, use the parameter ``use_proxy: false`` to avoid accidental system proxy usage.
+
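+For example, to bypass a configured system proxy for a single task (a minimal sketch):
+
+.. code-block:: yaml
+
+    - name: Query all tenants without using the system proxy
+      aci_tenant:
+        host: my-apic-1
+        username: admin
+        password: my-password
+        use_proxy: false
+        state: query
+      delegate_to: localhost
+      register: all_tenants
+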
+.. hint:: Selective proxy support using the ``no_proxy`` environment variable is also supported.
+
+
+Return values
+.............
+
+.. versionadded:: 2.5
+
+The following values are always returned:
+
+ current
+ The resulting state of the managed object, or results of your query.
+
+The following values are returned when ``output_level: info``:
+
+ previous
+ The original state of the managed object (before any change was made).
+
+ proposed
+ The proposed config payload, based on user-supplied values.
+
+ sent
+ The sent config payload, based on user-supplied values and the existing configuration.
+
+The following values are returned when ``output_level: debug`` or ``ANSIBLE_DEBUG=1``:
+
+ filter_string
+ The filter used for specific APIC queries.
+
+ method
+ The HTTP method used for the sent payload. (Either ``GET`` for queries, ``DELETE`` or ``POST`` for changes)
+
+ response
+ The HTTP response from the APIC.
+
+ status
+ The HTTP status code for the request.
+
+ url
+        The URL used for the request.
+
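+For example, to capture and inspect some of these values (a minimal sketch, reusing the tenant from the earlier examples):
+
+.. code-block:: yaml
+
+    - name: Update the tenant description and record the result
+      aci_tenant:
+        host: my-apic-1
+        username: admin
+        password: my-password
+        tenant: customer-xyz
+        description: Customer XYZ (updated)
+        output_level: info
+        state: present
+      delegate_to: localhost
+      register: tenant_result
+
+    - name: Show the previous state and the sent payload
+      debug:
+        msg: "previous={{ tenant_result.previous }} sent={{ tenant_result.sent }}"
+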
+.. note:: The module return values are documented in detail as part of each module's documentation.
+
+
+More information
+................
+Various resources exist to learn more about ACI programmability; we recommend the following links:
+
+- :ref:`Developing Cisco ACI modules <aci_dev_guide>`
+- `Jacob McGill: Automating Cisco ACI with Ansible <https://blogs.cisco.com/developer/automating-cisco-aci-with-ansible-eliminates-repetitive-day-to-day-tasks>`_
+- `Cisco DevNet Learning Labs about ACI and Ansible <https://learninglabs.cisco.com/labs/tags/ACI,Ansible>`_
+
+
+.. _aci_guide_auth:
+
+ACI authentication
+------------------
+
+Password-based authentication
+.............................
+If you want to log on using a username and password, you can use the following parameters with your ACI modules:
+
+.. code-block:: yaml
+
+ username: admin
+ password: my-password
+
+Password-based authentication is very simple to work with, but it is not the most efficient form of authentication from ACI's point of view, as it requires a separate login request and an open session to work. To avoid having your session time out and requiring another login, you can use the more efficient signature-based authentication.
+
+.. note:: Password-based authentication also may trigger anti-DoS measures in ACI v3.1+ that cause session throttling, resulting in HTTP 503 errors and login failures.
+
+.. warning:: Never store passwords in plain text.
+
+The "Vault" feature of Ansible allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plain text in your playbooks or roles. These vault files can then be distributed or placed in source control. See :ref:`playbooks_vault` for more information.
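+
+For example, you can encrypt just the password value with ``ansible-vault encrypt_string`` and paste the output into a playbook or vars file (a minimal sketch; the variable name is illustrative):
+
+.. code-block:: bash
+
+    $ ansible-vault encrypt_string --name 'aci_password' 'my-password'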
+
+
+Signature-based authentication using certificates
+.................................................
+
+.. versionadded:: 2.5
+
+Using signature-based authentication is more efficient and more reliable than password-based authentication.
+
+Generate certificate and private key
+````````````````````````````````````
+Signature-based authentication requires a (self-signed) X.509 certificate with private key, and a configuration step for your AAA user in ACI. To generate a working X.509 certificate and private key, use the following procedure:
+
+.. code-block:: bash
+
+ $ openssl req -new -newkey rsa:1024 -days 36500 -nodes -x509 -keyout admin.key -out admin.crt -subj '/CN=Admin/O=Your Company/C=US'
+
+Configure your local user
+`````````````````````````
+Perform the following steps:
+
+- Add the X.509 certificate to your ACI AAA local user at :guilabel:`ADMIN` » :guilabel:`AAA`
+- Click :guilabel:`AAA Authentication`
+- Check that in the :guilabel:`Authentication` field the :guilabel:`Realm` field displays :guilabel:`Local`
+- Expand :guilabel:`Security Management` » :guilabel:`Local Users`
+- Click the name of the user you want to add a certificate to, in the :guilabel:`User Certificates` area
+- Click the :guilabel:`+` sign and, in the :guilabel:`Create X509 Certificate` dialog, enter a certificate name in the :guilabel:`Name` field
+
+ * If you use the basename of your private key here, you don't need to enter ``certificate_name`` in Ansible
+
+- Copy and paste your X.509 certificate in the :guilabel:`Data` field.
+
+You can automate this by using the following Ansible task:
+
+.. code-block:: yaml
+
+ - name: Ensure we have a certificate installed
+ aci_aaa_user_certificate:
+ host: my-apic-1
+ username: admin
+ password: my-password
+
+ aaa_user: admin
+ certificate_name: admin
+ certificate: "{{ lookup('file', 'pki/admin.crt') }}" # This will read the certificate data from a local file
+
+.. note:: Signature-based authentication only works with local users.
+
+
+Use signature-based authentication with Ansible
+```````````````````````````````````````````````
+You need the following parameters with your ACI module(s) for it to work:
+
+.. code-block:: yaml
+ :emphasize-lines: 2,3
+
+ username: admin
+ private_key: pki/admin.key
+    certificate_name: admin # This could be left out!
+
+or you can use the private key content:
+
+.. code-block:: yaml
+ :emphasize-lines: 2,3
+
+ username: admin
+ private_key: |
+ -----BEGIN PRIVATE KEY-----
+ <<your private key content>>
+ -----END PRIVATE KEY-----
+    certificate_name: admin # This could be left out!
+
+
+.. hint:: If you use a certificate name in ACI that matches the private key's basename, you can leave out the ``certificate_name`` parameter, as in the example above.
+
+
+Using Ansible Vault to encrypt the private key
+``````````````````````````````````````````````
+.. versionadded:: 2.8
+
+To start, encrypt the private key and give it a strong password.
+
+.. code-block:: bash
+
+ ansible-vault encrypt admin.key
+
+Use a text editor to open the private key. It should now be encrypted:
+
+.. code-block:: text
+
+ $ANSIBLE_VAULT;1.1;AES256
+ 56484318584354658465121889743213151843149454864654151618131547984132165489484654
+ 45641818198456456489479874513215489484843614848456466655432455488484654848489498
+ ....
+
+Copy and paste the new encrypted key into your playbook as a new variable.
+
+.. code-block:: yaml
+
+ private_key: !vault |
+ $ANSIBLE_VAULT;1.1;AES256
+ 56484318584354658465121889743213151843149454864654151618131547984132165489484654
+ 45641818198456456489479874513215489484843614848456466655432455488484654848489498
+ ....
+
+Use the new variable for the ``private_key`` parameter:
+
+.. code-block:: yaml
+
+ username: admin
+ private_key: "{{ private_key }}"
+    certificate_name: admin # This could be left out!
+
+When running the playbook, use ``--ask-vault-pass`` to decrypt the private key.
+
+.. code-block:: bash
+
+ ansible-playbook site.yaml --ask-vault-pass
+
+
+More information
+````````````````
+- Detailed information about Signature-based Authentication is available from `Cisco APIC Signature-Based Transactions <https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_KB_Signature_Based_Transactions.html>`_.
+- More information on Ansible Vault can be found on the :ref:`Ansible Vault <vault>` page.
+
+
+.. _aci_guide_rest:
+
+Using ACI REST with Ansible
+---------------------------
+While a lot of ACI modules already exist in the Ansible distribution, and the most common actions can be performed with these existing modules, there is always something that may not be possible with off-the-shelf modules.
+
+The aci_rest module provides you with direct access to the APIC REST API and enables you to perform any task not already covered by the existing modules. This may seem like a complex undertaking, but you can effortlessly generate the needed REST payload for any action performed in the ACI web interface.
+
+
+Built-in idempotency
+....................
+Because the APIC REST API is intrinsically idempotent and can report whether a change was made, the aci_rest module automatically inherits both capabilities and is a first-class solution for automating your ACI infrastructure. As a result, users that require more powerful low-level access to their ACI infrastructure don't have to give up on idempotency and don't have to guess whether a change was performed when using the aci_rest module.
+
+
+Using the aci_rest module
+.........................
+The aci_rest module accepts native XML and JSON payloads, and additionally accepts an inline YAML payload (structured like JSON). An XML payload requires you to use a path ending in ``.xml``, whereas JSON or YAML payloads require the path to end in ``.json``.
+
+When you are making modifications, you can use the POST or DELETE methods, whereas queries require the GET method.
+
+For instance, if you would like to ensure a specific tenant exists on ACI, the four examples below are functionally identical:
+
+**XML** (Native ACI REST)
+
+.. code-block:: yaml
+
+ - aci_rest:
+ host: my-apic-1
+ private_key: pki/admin.key
+
+ method: post
+ path: /api/mo/uni.xml
+ content: |
+ <fvTenant name="customer-xyz" descr="Customer XYZ"/>
+
+**JSON** (Native ACI REST)
+
+.. code-block:: yaml
+
+ - aci_rest:
+ host: my-apic-1
+ private_key: pki/admin.key
+
+ method: post
+ path: /api/mo/uni.json
+ content:
+ {
+ "fvTenant": {
+ "attributes": {
+ "name": "customer-xyz",
+ "descr": "Customer XYZ"
+ }
+ }
+ }
+
+**YAML** (Ansible-style REST)
+
+.. code-block:: yaml
+
+ - aci_rest:
+ host: my-apic-1
+ private_key: pki/admin.key
+
+ method: post
+ path: /api/mo/uni.json
+ content:
+ fvTenant:
+ attributes:
+ name: customer-xyz
+ descr: Customer XYZ
+
+**Ansible task** (Dedicated module)
+
+.. code-block:: yaml
+
+ - aci_tenant:
+ host: my-apic-1
+ private_key: pki/admin.key
+
+ tenant: customer-xyz
+ description: Customer XYZ
+ state: present
+
+
+.. hint:: The XML format is more practical when there is a need to template the REST payload (inline), but the YAML format is more convenient for maintaining your infrastructure-as-code and feels more naturally integrated with Ansible playbooks. The dedicated modules offer a simpler, more abstracted, but also more limited experience. Use what feels best for your use case.
+
+
+More information
+................
+Plenty of resources exist to learn about ACI's APIC REST interface; we recommend the links below:
+
+- `The ACI collection on Ansible Galaxy <https://galaxy.ansible.com/cisco/aci>`_
+- `APIC REST API Configuration Guide <https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/2-x/rest_cfg/2_1_x/b_Cisco_APIC_REST_API_Configuration_Guide.html>`_ -- Detailed guide on how the APIC REST API is designed and used, including many examples
+- `APIC Management Information Model reference <https://developer.cisco.com/docs/apic-mim-ref/>`_ -- Complete reference of the APIC object model
+- `Cisco DevNet Learning Labs about ACI and REST <https://learninglabs.cisco.com/labs/tags/ACI,REST>`_
+
+
+.. _aci_guide_ops:
+
+Operational examples
+--------------------
+Here is a small overview of useful operational tasks to reuse in your playbooks.
+
+Feel free to contribute more useful snippets.
+
+
+Waiting for all controllers to be ready
+.......................................
+You can use the task below, after you have started building your APICs and configured the cluster, to wait until all the APICs have come online. It will wait until the number of controllers equals the number listed in the ``apic`` inventory group.
+
+.. code-block:: yaml
+
+ - name: Waiting for all controllers to be ready
+ aci_rest:
+ host: my-apic-1
+ private_key: pki/admin.key
+ method: get
+ path: /api/node/class/topSystem.json?query-target-filter=eq(topSystem.role,"controller")
+ register: topsystem
+      until: topsystem is successful and topsystem.totalCount|int >= groups['apic']|count >= 3
+ retries: 20
+ delay: 30
+
+
+Waiting for cluster to be fully-fit
+...................................
+The example below waits until the cluster is fully fit. In this example you know the number of APICs in the cluster, and you verify that each APIC reports a 'fully-fit' status.
+
+.. code-block:: yaml
+
+ - name: Waiting for cluster to be fully-fit
+ aci_rest:
+ host: my-apic-1
+ private_key: pki/admin.key
+ method: get
+ path: /api/node/class/infraWiNode.json?query-target-filter=wcard(infraWiNode.dn,"topology/pod-1/node-1/av")
+ register: infrawinode
+ until: >
+        infrawinode is successful and
+ infrawinode.totalCount|int >= groups['apic']|count >= 3 and
+ infrawinode.imdata[0].infraWiNode.attributes.health == 'fully-fit' and
+ infrawinode.imdata[1].infraWiNode.attributes.health == 'fully-fit' and
+ infrawinode.imdata[2].infraWiNode.attributes.health == 'fully-fit'
+ retries: 30
+ delay: 30
+
+
+.. _aci_guide_errors:
+
+APIC error messages
+-------------------
+The following error messages may occur, and this section can help you understand what exactly is going on and how to fix or avoid them.
+
+    APIC Error 122: unknown managed object class 'polUni'
+        In case you receive this error while you are certain your aci_rest payload and object classes are correct, the issue might be that your payload is not in fact proper JSON (for example, the sent payload uses single quotes rather than double quotes), and as a result the APIC is not correctly parsing your object classes from the payload. One way to avoid this is to use a YAML or an XML formatted payload, which is easier to construct correctly and modify later.
+
+
+    APIC Error 400: invalid data at line '1'. Attributes are missing, tag 'attributes' must be specified first, before any other tag
+        Although the JSON specification allows unordered elements, the APIC REST API requires that the JSON ``attributes`` element precede the ``children`` array or other elements, so you need to ensure that your payload conforms to this requirement. Sorting your dictionary keys will do the trick just fine. If you do not have any attributes, it may be necessary to add ``attributes: {}``, as the APIC does expect the entry to precede any ``children``.
+
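+        For example, a YAML payload that satisfies this ordering requirement could look like the following (a minimal sketch; the child application profile is purely illustrative):
+
+        .. code-block:: yaml
+
+            fvTenant:
+              attributes:
+                name: customer-xyz
+              children:
+                - fvAp:
+                    attributes:
+                      name: my-app
+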
+
+    APIC Error 801: property descr of uni/tn-TENANT/ap-AP failed validation for value 'A "legacy" network'
+        Some values in the APIC have strict format rules to comply with, and the internal APIC validation check for the provided value failed. In the above case, the ``description`` parameter (internally known as ``descr``) only accepts values conforming to the regex ``[a-zA-Z0-9\\!#$%()*,-./:;@ _{|}~?&+]+``; in general, it must not include quotes or square brackets.
+
+
+.. _aci_guide_known_issues:
+
+Known issues
+------------
+The aci_rest module is a wrapper around the APIC REST API. As a result, any issues related to the APIC will be reflected in the use of this module.
+
+All the issues below have been reported to the vendor, and most can simply be avoided.
+
+    Too many consecutive API calls may result in connection throttling
+        Starting with ACI v3.1, the APIC will actively throttle password-based authenticated connection rates over a specific threshold. This is part of an anti-DDoS measure, but it can interfere when you use Ansible with ACI and password-based authentication. Currently, one solution is to increase this threshold within the nginx configuration, but using signature-based authentication is recommended.
+
+        **NOTE:** It is advisable to use signature-based authentication with ACI, as it not only prevents connection throttling, but also improves general performance when using the ACI modules.
+
+
+    Specific requests may not reflect changes correctly (`#35041 <https://github.com/ansible/ansible/issues/35041>`_)
+        There is a known issue where specific requests to the APIC do not properly reflect changes in the resulting output, even when we request those changes explicitly from the APIC. In one instance, using the path ``api/node/mo/uni/infra.xml`` fails, whereas ``api/node/mo/uni/infra/.xml`` works correctly.
+
+        **NOTE:** A workaround is to register the task return values (for example, ``register: this``) and influence when the task should report a change by adding ``changed_when: this.imdata != []``.
+
+
+    Specific requests are known to not be idempotent (`#35050 <https://github.com/ansible/ansible/issues/35050>`_)
+        The behaviour of the APIC is inconsistent with respect to the use of ``status="created"`` and ``status="deleted"``. The result is that when you use ``status="created"`` in your payload, the resulting tasks are not idempotent and creation will fail when the object was already created. However, this is not the case with ``status="deleted"``, where such a call to a non-existing object does not cause any failure whatsoever.
+
+        **NOTE:** A workaround is to avoid using ``status="created"`` and instead use ``status="modified"`` when idempotency is essential to your workflow.
+
+
+    Setting user password is not idempotent (`#35544 <https://github.com/ansible/ansible/issues/35544>`_)
+        Due to an inconsistency in the APIC REST API, a task that sets the password of a locally-authenticated user is not idempotent. The APIC will complain with the message ``Password history check: user dag should not use previous 5 passwords``.
+
+        **NOTE:** There is no workaround for this issue.
+
+
+.. _aci_guide_community:
+
+ACI Ansible community
+---------------------
+If you have specific issues with the ACI modules, have a feature request, or would like to contribute to the ACI project by proposing changes or documentation updates, look at the Ansible Community wiki ACI page at: https://github.com/ansible/community/wiki/Network:-ACI
+
+You will find our roadmap, an overview of open ACI issues and pull-requests, and more information about who we are. If you have an interest in using ACI with Ansible, feel free to join! We occasionally meet online (on the #ansible-network chat channel, using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_) to track progress and prepare for new Ansible releases.
+
+
+.. seealso::
+
+ `ACI collection on Ansible Galaxy <https://galaxy.ansible.com/cisco/aci>`_
+ View the content tab for a complete list of supported ACI modules.
+ :ref:`Developing Cisco ACI modules <aci_dev_guide>`
+ A walkthrough on how to develop new Cisco ACI modules to contribute back.
+ `ACI community <https://github.com/ansible/community/wiki/Network:-ACI>`_
+ The Ansible ACI community wiki page, includes roadmap, ideas and development documentation.
+ :ref:`network_guide`
+ A detailed guide on how to use Ansible for automating network infrastructure.
+ `Network Working Group <https://github.com/ansible/community/tree/main/group-network>`_
+ The Ansible Network community page, includes contact information and meeting information.
+ `User Mailing List <https://groups.google.com/group/ansible-project>`_
+       Have a question? Stop by the Google group!
diff --git a/docs/docsite/rst/scenario_guides/guide_alicloud.rst b/docs/docsite/rst/scenario_guides/guide_alicloud.rst
new file mode 100644
index 0000000..fd78bf1
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_alicloud.rst
@@ -0,0 +1,133 @@
+Alibaba Cloud Compute Services Guide
+====================================
+
+.. _alicloud_intro:
+
+Introduction
+````````````
+
+Ansible contains several modules for controlling and managing Alibaba Cloud Compute Services (Alicloud). This guide
+explains how to use the Alicloud Ansible modules together.
+
+All Alicloud modules require ``footmark`` - install it on your control machine with ``pip install footmark``.
+
+Cloud modules, including Alicloud modules, execute on your local machine (the control machine) with ``connection: local``, rather than on remote machines defined in your hosts.
+
+Normally, you'll use the following pattern for plays that provision Alicloud resources:
+
+.. code-block:: yaml
+
+ - hosts: localhost
+ connection: local
+ vars:
+ - ...
+ tasks:
+ - ...
+
+.. _alicloud_authentication:
+
+Authentication
+``````````````
+
+You can specify your Alicloud authentication credentials (access key and secret key) by passing them as
+environment variables or by storing them in a vars file.
+
+To pass authentication credentials as environment variables:
+
+.. code-block:: shell
+
+ export ALICLOUD_ACCESS_KEY='Alicloud123'
+ export ALICLOUD_SECRET_KEY='AlicloudSecret123'
+
+To store authentication credentials in a vars file, encrypt them with :ref:`Ansible Vault<vault>` to keep them secure, then list them:
+
+.. code-block:: yaml
+
+ ---
+ alicloud_access_key: "--REMOVED--"
+ alicloud_secret_key: "--REMOVED--"
+
+Note that if you store your credentials in a vars file, you need to refer to them in each Alicloud module. For example:
+
+.. code-block:: yaml
+
+ - ali_instance:
+ alicloud_access_key: "{{alicloud_access_key}}"
+ alicloud_secret_key: "{{alicloud_secret_key}}"
+ image_id: "..."
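+
+To load the encrypted vars file in a play, reference it with ``vars_files`` (a minimal sketch; the file name is illustrative):
+
+.. code-block:: yaml
+
+    - hosts: localhost
+      connection: local
+      vars_files:
+        - alicloud_credentials.yml
+      tasks:
+        - ali_instance:
+            alicloud_access_key: "{{ alicloud_access_key }}"
+            alicloud_secret_key: "{{ alicloud_secret_key }}"
+            image_id: "..."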
+
+.. _alicloud_provisioning:
+
+Provisioning
+````````````
+
+Alicloud modules create Alicloud ECS instances, disks, virtual private clouds, virtual switches, security groups and other resources.
+
+You can use the ``count`` parameter to control the number of resources you create or terminate. For example, if you want exactly 5 instances tagged ``NewECS``,
+set the ``count`` of instances to 5 and the ``count_tag`` to ``NewECS``, as shown in the last task of the example playbook below.
+If there are no instances with the tag ``NewECS``, the task creates 5 new instances. If there are 2 instances with that tag, the task
+creates 3 more. If there are 8 instances with that tag, the task terminates 3 of those instances.
+
+If you do not specify a ``count_tag``, the task creates the number of instances you specify in ``count`` with the ``instance_name`` you provide.
+
+.. code-block:: yaml
+
+ # alicloud_setup.yml
+
+ - hosts: localhost
+ connection: local
+
+ tasks:
+
+ - name: Create VPC
+ ali_vpc:
+ cidr_block: '{{ cidr_block }}'
+ vpc_name: new_vpc
+ register: created_vpc
+
+ - name: Create VSwitch
+ ali_vswitch:
+ alicloud_zone: '{{ alicloud_zone }}'
+ cidr_block: '{{ vsw_cidr }}'
+ vswitch_name: new_vswitch
+ vpc_id: '{{ created_vpc.vpc.id }}'
+ register: created_vsw
+
+ - name: Create security group
+ ali_security_group:
+ name: new_group
+ vpc_id: '{{ created_vpc.vpc.id }}'
+ rules:
+ - proto: tcp
+ port_range: 22/22
+ cidr_ip: 0.0.0.0/0
+ priority: 1
+ rules_egress:
+ - proto: tcp
+ port_range: 80/80
+ cidr_ip: 192.168.0.54/32
+ priority: 1
+ register: created_group
+
+ - name: Create a set of instances
+ ali_instance:
+ security_groups: '{{ created_group.group_id }}'
+ instance_type: ecs.n4.small
+ image_id: "{{ ami_id }}"
+ instance_name: "My-new-instance"
+ instance_tags:
+ Name: NewECS
+ Version: 0.0.1
+ count: 5
+ count_tag:
+ Name: NewECS
+ allocate_public_ip: true
+ max_bandwidth_out: 50
+ vswitch_id: '{{ created_vsw.vswitch.id}}'
+ register: create_instance
+
+In the example playbook above, data about the VPC, vswitch, security group, and instances created by this playbook
+is saved in the variables defined by the ``register`` keyword in each task.
+
+Each Alicloud module offers a variety of parameter options. Not all options are demonstrated in the above example.
+See each individual module for further details and examples.
diff --git a/docs/docsite/rst/scenario_guides/guide_aws.rst b/docs/docsite/rst/scenario_guides/guide_aws.rst
new file mode 100644
index 0000000..f293155
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_aws.rst
@@ -0,0 +1,6 @@
+:orphan:
+
+Amazon Web Services Guide
+=========================
+
+The content on this page has moved. Please see the updated :ref:`ansible_collections.amazon.aws.docsite.aws_intro` in the AWS collection.
diff --git a/docs/docsite/rst/scenario_guides/guide_azure.rst b/docs/docsite/rst/scenario_guides/guide_azure.rst
new file mode 100644
index 0000000..41bdab3
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_azure.rst
@@ -0,0 +1,484 @@
+Microsoft Azure Guide
+=====================
+
+.. important::
+
+ Red Hat Ansible Automation Platform will soon be available on Microsoft Azure. `Sign up to preview the experience <https://www.redhat.com/en/engage/ansible-microsoft-azure-e-202110220735>`_.
+
+Ansible includes a suite of modules for interacting with Azure Resource Manager, giving you the tools to easily create
+and orchestrate infrastructure on the Microsoft Azure Cloud.
+
+Requirements
+------------
+
+Using the Azure Resource Manager modules requires having specific Azure SDK modules
+installed on the host running Ansible.
+
+.. code-block:: bash
+
+ $ pip install 'ansible[azure]'
+
+If you are running Ansible from source, you can install the dependencies from the
+root directory of the Ansible repo.
+
+.. code-block:: bash
+
+ $ pip install .[azure]
+
+You can also directly run Ansible in `Azure Cloud Shell <https://shell.azure.com>`_, where Ansible is pre-installed.
+
+Authenticating with Azure
+-------------------------
+
+Using the Azure Resource Manager modules requires authenticating with the Azure API. You can choose from two authentication strategies:
+
+* Active Directory Username/Password
+* Service Principal Credentials
+
+Follow the directions for the strategy you wish to use, then proceed to `Providing Credentials to Azure Modules`_ for
+instructions on how to actually use the modules and authenticate with the Azure API.
+
+
+Using Service Principal
+.......................
+
+There is now a detailed official tutorial describing `how to create a service principal <https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal>`_.
+
+After stepping through the tutorial you will have:
+
+* Your Client ID, which is found in the "client id" box in the "Configure" page of your application in the Azure portal
+* Your Secret key, generated when you created the application. You cannot view the key after creation.
+  If you lose the key, you must create a new one in the "Configure" page of your application.
+* And finally, a tenant ID. It's a UUID (for example, ABCDEFGH-1234-ABCD-1234-ABCDEFGHIJKL) pointing to the AD containing your
+ application. You will find it in the URL from within the Azure portal, or in the "view endpoints" of any given URL.
+
+
+Using Active Directory Username/Password
+........................................
+
+To create an Active Directory username/password:
+
+* Connect to the Azure Classic Portal with your admin account
+* Create a user in your default AAD. You must NOT activate Multi-Factor Authentication
+* Go to Settings - Administrators
+* Click on Add and enter the email of the new user.
+* Check the checkbox of the subscription you want to test with this user.
+* Log in to the Azure Portal with this new user to change the temporary password to a new one. You will not be able to use the
+  temporary password for OAuth login.
+
+Providing Credentials to Azure Modules
+......................................
+
+The modules offer several ways to provide your credentials. For a CI/CD tool such as Ansible AWX or Jenkins, you will
+most likely want to use environment variables. For local development you may wish to store your credentials in a file
+within your home directory. And of course, you can always pass credentials as parameters to a task within a playbook. The
+order of precedence is parameters, then environment variables, and finally a file found in your home directory.
+
+Using Environment Variables
+```````````````````````````
+
+To pass service principal credentials through the environment, define the following variables:
+
+* AZURE_CLIENT_ID
+* AZURE_SECRET
+* AZURE_SUBSCRIPTION_ID
+* AZURE_TENANT
+
+To pass Active Directory username/password through the environment, define the following variables:
+
+* AZURE_AD_USER
+* AZURE_PASSWORD
+* AZURE_SUBSCRIPTION_ID
+
+To pass Active Directory username/password in ADFS through the environment, define the following variables:
+
+* AZURE_AD_USER
+* AZURE_PASSWORD
+* AZURE_CLIENT_ID
+* AZURE_TENANT
+* AZURE_ADFS_AUTHORITY_URL
+
+"AZURE_ADFS_AUTHORITY_URL" is optional. It's necessary only when you have own ADFS authority like https://yourdomain.com/adfs.
+
+Storing in a File
+`````````````````
+
+When working in a development environment, it may be desirable to store credentials in a file. The modules will look
+for credentials in ``$HOME/.azure/credentials``. This file is an INI-style file. It will look as follows:
+
+.. code-block:: ini
+
+ [default]
+ subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ secret=xxxxxxxxxxxxxxxxx
+ tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+
+.. note:: If your secret values contain non-ASCII characters, you must `URL Encode <https://www.w3schools.com/tags/ref_urlencode.asp>`_ them to avoid login errors.
+
+It is possible to store multiple sets of credentials within the credentials file by creating multiple sections. Each
+section is considered a profile. The modules look for the [default] profile automatically. Define AZURE_PROFILE in the
+environment or pass a profile parameter to specify a specific profile.
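+
+For example, to select a ``[staging]`` profile defined in the credentials file (a minimal sketch; the profile name is illustrative):
+
+.. code-block:: bash
+
+    $ export AZURE_PROFILE=staging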
+
+Passing as Parameters
+`````````````````````
+
+If you wish to pass credentials as parameters to a task, use the following parameters for service principal:
+
+* client_id
+* secret
+* subscription_id
+* tenant
+
+Or, pass the following parameters for Active Directory username/password:
+
+* ad_user
+* password
+* subscription_id
+
+Or, pass the following parameters for ADFS username/password:
+
+* ad_user
+* password
+* client_id
+* tenant
+* adfs_authority_url
+
+"adfs_authority_url" is optional. It's necessary only when you have own ADFS authority like https://yourdomain.com/adfs.
+
+
+Other Cloud Environments
+------------------------
+
+To use an Azure Cloud other than the default public cloud (for example, Azure China Cloud, Azure US Government Cloud, Azure Stack),
+pass the "cloud_environment" argument to modules, configure it in a credential profile, or set the "AZURE_CLOUD_ENVIRONMENT"
+environment variable. The value is either a cloud name as defined by the Azure Python SDK (for example, "AzureChinaCloud",
+"AzureUSGovernment"; defaults to "AzureCloud") or an Azure metadata discovery URL (for Azure Stack).
+
+Creating Virtual Machines
+-------------------------
+
+There are two ways to create a virtual machine, both involving the azure_rm_virtualmachine module. We can either create
+a storage account, network interface, security group and public IP address and pass the names of these objects to the
+module as parameters, or we can let the module do the work for us and accept the defaults it chooses.
+
+Creating Individual Components
+..............................
+
+An Azure module is available to help you create a storage account, virtual network, subnet, network interface,
+security group and public IP. Here is a full example of creating each of these and passing the names to the
+``azure.azcollection.azure_rm_virtualmachine`` module at the end:
+
+.. code-block:: yaml
+
+ - name: Create storage account
+ azure.azcollection.azure_rm_storageaccount:
+ resource_group: Testing
+ name: testaccount001
+ account_type: Standard_LRS
+
+ - name: Create virtual network
+ azure.azcollection.azure_rm_virtualnetwork:
+ resource_group: Testing
+ name: testvn001
+ address_prefixes: "10.10.0.0/16"
+
+ - name: Add subnet
+ azure.azcollection.azure_rm_subnet:
+ resource_group: Testing
+ name: subnet001
+ address_prefix: "10.10.0.0/24"
+ virtual_network: testvn001
+
+ - name: Create public ip
+ azure.azcollection.azure_rm_publicipaddress:
+ resource_group: Testing
+ allocation_method: Static
+ name: publicip001
+
+ - name: Create security group that allows SSH
+ azure.azcollection.azure_rm_securitygroup:
+ resource_group: Testing
+ name: secgroup001
+ rules:
+ - name: SSH
+ protocol: Tcp
+ destination_port_range: 22
+ access: Allow
+ priority: 101
+ direction: Inbound
+
+ - name: Create NIC
+ azure.azcollection.azure_rm_networkinterface:
+ resource_group: Testing
+ name: testnic001
+ virtual_network: testvn001
+ subnet: subnet001
+ public_ip_name: publicip001
+ security_group: secgroup001
+
+ - name: Create virtual machine
+ azure.azcollection.azure_rm_virtualmachine:
+ resource_group: Testing
+ name: testvm001
+ vm_size: Standard_D1
+ storage_account: testaccount001
+ storage_container: testvm001
+ storage_blob: testvm001.vhd
+ admin_username: admin
+ admin_password: Password!
+ network_interfaces: testnic001
+ image:
+ offer: CentOS
+ publisher: OpenLogic
+ sku: '7.1'
+ version: latest
+
+Each of the Azure modules offers a variety of parameter options. Not all options are demonstrated in the above example.
+See each individual module for further details and examples.
+
+
+Creating a Virtual Machine with Default Options
+...............................................
+
+If you simply want to create a virtual machine without specifying all the details, you can do that as well. The only
+caveat is that you will need a virtual network with one subnet already in your resource group. Assuming you have a
+virtual network already with an existing subnet, you can run the following to create a VM:
+
+.. code-block:: yaml
+
+ azure.azcollection.azure_rm_virtualmachine:
+ resource_group: Testing
+ name: testvm10
+ vm_size: Standard_D1
+ admin_username: chouseknecht
+ ssh_password_enabled: false
+ ssh_public_keys: "{{ ssh_keys }}"
+ image:
+ offer: CentOS
+ publisher: OpenLogic
+ sku: '7.1'
+ version: latest
+
+
+Creating a Virtual Machine in Availability Zones
+..................................................
+
+If you want to create a VM in an availability zone,
+consider the following:
+
+* Both OS disk and data disk must be a 'managed disk', not an 'unmanaged disk'.
+* When creating a VM with the ``azure.azcollection.azure_rm_virtualmachine`` module,
+ you need to explicitly set the ``managed_disk_type`` parameter
+ to change the OS disk to a managed disk.
+ Otherwise, the OS disk becomes an unmanaged disk.
+* When you create a data disk with the ``azure.azcollection.azure_rm_manageddisk`` module,
+ you need to explicitly specify the ``storage_account_type`` parameter
+ to make it a managed disk.
+ Otherwise, the data disk will be an unmanaged disk.
+* A managed disk does not require a storage account or a storage container,
+ unlike an unmanaged disk.
+ In particular, note that once a VM is created on an unmanaged disk,
+ an unnecessary storage container named "vhds" is automatically created.
+* When you create an IP address with the ``azure.azcollection.azure_rm_publicipaddress`` module,
+ you must set the ``sku`` parameter to ``standard``.
+ Otherwise, the IP address cannot be used in an availability zone.
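+
+Putting these pieces together, a minimal sketch of a zonal VM with a managed OS disk might look like the following (the ``zones`` value and resource names are illustrative assumptions; check each module's documentation for the exact parameters):
+
+.. code-block:: yaml
+
+    - name: Create a public IP that can be used in an availability zone
+      azure.azcollection.azure_rm_publicipaddress:
+        resource_group: Testing
+        name: publicip002
+        allocation_method: Static
+        sku: standard
+
+    - name: Create a VM with a managed OS disk in zone 1
+      azure.azcollection.azure_rm_virtualmachine:
+        resource_group: Testing
+        name: testvm002
+        vm_size: Standard_D1_v2
+        managed_disk_type: Standard_LRS
+        zones: ['1']
+        admin_username: azureuser
+        ssh_password_enabled: false
+        ssh_public_keys: "{{ ssh_keys }}"
+        image:
+          offer: CentOS
+          publisher: OpenLogic
+          sku: '7.1'
+          version: latest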
+
+
+Dynamic Inventory Script
+------------------------
+
+If you are not familiar with Ansible's dynamic inventory scripts, check out :ref:`Intro to Dynamic Inventory <intro_dynamic_inventory>`.
+
+The Azure Resource Manager inventory script is called `azure_rm.py <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py>`_. It authenticates with the Azure API exactly the same as the
+Azure modules, which means you will either define the same environment variables described above in `Using Environment Variables`_,
+create a ``$HOME/.azure/credentials`` file (also described above in `Storing in a File`_), or pass command line parameters. To see available command
+line options execute the following:
+
+.. code-block:: bash
+
+ $ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py
+ $ ./azure_rm.py --help
+
+As with all dynamic inventory scripts, the script can be executed directly, passed as a parameter to the ansible command,
+or passed directly to ansible-playbook using the -i option. No matter how it is executed, the script produces JSON representing
+all of the hosts found in your Azure subscription. You can narrow this down to just hosts found in a specific set of
+Azure resource groups, or even down to a specific host.
+
+For a given host, the inventory script provides the following host variables:
+
+.. code-block:: JSON
+
+ {
+ "ansible_host": "XXX.XXX.XXX.XXX",
+ "computer_name": "computer_name2",
+ "fqdn": null,
+ "id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Compute/virtualMachines/object-name",
+ "image": {
+ "offer": "CentOS",
+ "publisher": "OpenLogic",
+ "sku": "7.1",
+ "version": "latest"
+ },
+ "location": "westus",
+ "mac_address": "00-00-5E-00-53-FE",
+ "name": "object-name",
+ "network_interface": "interface-name",
+ "network_interface_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkInterfaces/object-name1",
+ "network_security_group": null,
+ "network_security_group_id": null,
+ "os_disk": {
+ "name": "object-name",
+ "operating_system_type": "Linux"
+ },
+ "plan": null,
+ "powerstate": "running",
+ "private_ip": "172.26.3.6",
+ "private_ip_alloc_method": "Static",
+ "provisioning_state": "Succeeded",
+ "public_ip": "XXX.XXX.XXX.XXX",
+ "public_ip_alloc_method": "Static",
+ "public_ip_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/publicIPAddresses/object-name",
+ "public_ip_name": "object-name",
+ "resource_group": "galaxy-production",
+ "security_group": "object-name",
+ "security_group_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkSecurityGroups/object-name",
+ "tags": {
+ "db": "mysql"
+ },
+ "type": "Microsoft.Compute/virtualMachines",
+ "virtual_machine_size": "Standard_DS4"
+ }
+
+Host Groups
+...........
+
+By default hosts are grouped by:
+
+* azure (all hosts)
+* location name
+* resource group name
+* security group name
+* tag key
+* tag key_value
+* os_disk operating_system_type (Windows/Linux)
+
+You can control host groupings and host selection by either defining environment variables or creating an
+azure_rm.ini file in your current working directory.
+
+NOTE: An .ini file will take precedence over environment variables.
+
+NOTE: The name of the .ini file is the basename of the inventory script (in other words, 'azure_rm') with a '.ini'
+extension. This allows you to copy, rename and customize the inventory script and have matching .ini files all in
+the same directory.
+
+Control grouping using the following variables defined in the environment:
+
+* AZURE_GROUP_BY_RESOURCE_GROUP=yes
+* AZURE_GROUP_BY_LOCATION=yes
+* AZURE_GROUP_BY_SECURITY_GROUP=yes
+* AZURE_GROUP_BY_TAG=yes
+* AZURE_GROUP_BY_OS_FAMILY=yes
+
+Select hosts within specific resource groups by assigning a comma separated list to:
+
+* AZURE_RESOURCE_GROUPS=resource_group_a,resource_group_b
+
+Select hosts for specific tag key by assigning a comma separated list of tag keys to:
+
+* AZURE_TAGS=key1,key2,key3
+
+Select hosts for specific locations by assigning a comma separated list of locations to:
+
+* AZURE_LOCATIONS=eastus,eastus2,westus
+
+Or, select hosts for specific tag key:value pairs by assigning a comma-separated list of key:value pairs to:
+
+* AZURE_TAGS=key1:value1,key2:value2
+
+If you don't need the powerstate, you can improve performance by turning off powerstate fetching:
+
+* AZURE_INCLUDE_POWERSTATE=no
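+
+For example, to group only by resource group and restrict the inventory to two resource groups, you might export the following before invoking the script (the group names are illustrative):
+
+.. code-block:: bash
+
+    $ export AZURE_GROUP_BY_RESOURCE_GROUP=yes
+    $ export AZURE_RESOURCE_GROUPS=Testing,Production
+    $ ansible -i azure_rm.py all -m ping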
+
+A sample azure_rm.ini file is included alongside the inventory script
+`here <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.ini>`_.
+An .ini file will contain the following:
+
+.. code-block:: ini
+
+ [azure]
+    # Control which resource groups are included. By default all resource groups are included.
+ # Set resource_groups to a comma separated list of resource groups names.
+ #resource_groups=
+
+ # Control which tags are included. Set tags to a comma separated list of keys or key:value pairs
+ #tags=
+
+ # Control which locations are included. Set locations to a comma separated list of locations.
+ #locations=
+
+ # Include powerstate. If you don't need powerstate information, turning it off improves runtime performance.
+ # Valid values: yes, no, true, false, True, False, 0, 1.
+ include_powerstate=yes
+
+ # Control grouping with the following boolean flags. Valid values: yes, no, true, false, True, False, 0, 1.
+ group_by_resource_group=yes
+ group_by_location=yes
+ group_by_security_group=yes
+ group_by_tag=yes
+ group_by_os_family=yes
+
+Examples
+........
+
+Here are some examples using the inventory script:
+
+.. code-block:: bash
+
+ # Download inventory script
+ $ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py
+
+ # Execute /bin/uname on all instances in the Testing resource group
+ $ ansible -i azure_rm.py Testing -m shell -a "/bin/uname -a"
+
+ # Execute win_ping on all Windows instances
+ $ ansible -i azure_rm.py windows -m win_ping
+
+ # Execute ping on all Linux instances
+ $ ansible -i azure_rm.py linux -m ping
+
+ # Use the inventory script to print instance specific information
+ $ ./azure_rm.py --host my_instance_host_name --resource-groups=Testing --pretty
+
+ # Use the inventory script with ansible-playbook
+ $ ansible-playbook -i ./azure_rm.py test_playbook.yml
+
+Here is a simple playbook to exercise the Azure inventory script:
+
+.. code-block:: yaml
+
+ - name: Test the inventory script
+ hosts: azure
+ connection: local
+ gather_facts: false
+ tasks:
+ - debug:
+ msg: "{{ inventory_hostname }} has powerstate {{ powerstate }}"
+
+You can execute the playbook with something like:
+
+.. code-block:: bash
+
+ $ ansible-playbook -i ./azure_rm.py test_azure_inventory.yml
+
+
+Disabling certificate validation on Azure endpoints
+...................................................
+
+When an HTTPS proxy is present, or when using Azure Stack, it may be necessary to disable certificate validation for
+Azure endpoints in the Azure modules. This is not a recommended security practice, but may be necessary when the system
+CA store cannot be altered to include the necessary CA certificate. Certificate validation can be controlled by setting
+the "cert_validation_mode" value in a credential profile, through the "AZURE_CERT_VALIDATION_MODE" environment variable, or
+by passing the "cert_validation_mode" argument to any Azure module. The default value is "validate"; setting the value
+to "ignore" will prevent all certificate validation. The module argument takes precedence over a credential profile value,
+which takes precedence over the environment value.
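+
+For example, when working against an Azure Stack endpoint, the environment variable route might look like this (a sketch; the playbook name is illustrative, and remember that ``ignore`` disables a security check):
+
+.. code-block:: bash
+
+    $ export AZURE_CERT_VALIDATION_MODE=ignore
+    $ ansible-playbook -i ./azure_rm.py azure_stack_playbook.yml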
diff --git a/docs/docsite/rst/scenario_guides/guide_cloudstack.rst b/docs/docsite/rst/scenario_guides/guide_cloudstack.rst
new file mode 100644
index 0000000..6d3f2b4
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_cloudstack.rst
@@ -0,0 +1,377 @@
+CloudStack Cloud Guide
+======================
+
+.. _cloudstack_introduction:
+
+Introduction
+````````````
+The purpose of this section is to explain how to put Ansible modules together to use Ansible in a CloudStack context. You will find more usage examples in the details section of each module.
+
+Ansible contains a number of extra modules for interacting with CloudStack based clouds. All modules support check mode, are designed to be idempotent, have been created and tested, and are maintained by the community.
+
+.. note:: Some of the modules will require domain admin or root admin privileges.
+
+Prerequisites
+`````````````
+Prerequisites for using the CloudStack modules are minimal. In addition to Ansible itself, all of the modules require the Python library ``cs`` (https://pypi.org/project/cs/).
+
+You'll need this Python module installed on the execution host, usually your workstation.
+
+.. code-block:: bash
+
+ $ pip install cs
+
+Or alternatively starting with Debian 9 and Ubuntu 16.04:
+
+.. code-block:: bash
+
+ $ sudo apt install python-cs
+
+.. note:: cs also includes a command line interface for ad hoc interaction with the CloudStack API, for example ``$ cs listVirtualMachines state=Running``.
+
+Limitations and Known Issues
+````````````````````````````
+VPC support has been improved since Ansible 2.3 but is not yet fully implemented. The community is working on the VPC integration.
+
+Credentials File
+````````````````
+You can pass credentials and the endpoint of your cloud as module arguments, however in most cases it is far less work to store your credentials in the cloudstack.ini file.
+
+The python library cs looks for the credentials file in the following order (last one wins):
+
+* A ``.cloudstack.ini`` (note the dot) file in the home directory.
+* A ``CLOUDSTACK_CONFIG`` environment variable pointing to an .ini file.
+* A ``cloudstack.ini`` (without the dot) file in the current working directory, the same directory where your playbooks are located.
+
+The structure of the ini file must look like this:
+
+.. code-block:: bash
+
+ $ cat $HOME/.cloudstack.ini
+ [cloudstack]
+ endpoint = https://cloud.example.com/client/api
+ key = api key
+ secret = api secret
+ timeout = 30
+
+.. Note:: The section ``[cloudstack]`` is the default section. The ``CLOUDSTACK_REGION`` environment variable can be used to define the default section.
+
+.. versionadded:: 2.4
+
+The ``CLOUDSTACK_*`` environment variables documented for the library ``cs``, such as ``CLOUDSTACK_TIMEOUT``, ``CLOUDSTACK_METHOD``, and so on, are also supported by the Ansible modules. It is even possible to have an incomplete config in your cloudstack.ini:
+
+.. code-block:: bash
+
+ $ cat $HOME/.cloudstack.ini
+ [cloudstack]
+ endpoint = https://cloud.example.com/client/api
+ timeout = 30
+
+and provide the missing data by either setting environment variables or task parameters:
+
+.. code-block:: yaml
+
+ ---
+ - name: provision our VMs
+ hosts: cloud-vm
+ tasks:
+ - name: ensure VMs are created and running
+ delegate_to: localhost
+ cs_instance:
+ api_key: your api key
+ api_secret: your api secret
+ ...
+
+Regions
+```````
+If you use more than one CloudStack region, you can define as many sections as you want and name them as you like, for example:
+
+.. code-block:: bash
+
+ $ cat $HOME/.cloudstack.ini
+ [exoscale]
+ endpoint = https://api.exoscale.ch/compute
+ key = api key
+ secret = api secret
+
+ [example_cloud_one]
+ endpoint = https://cloud-one.example.com/client/api
+ key = api key
+ secret = api secret
+
+ [example_cloud_two]
+ endpoint = https://cloud-two.example.com/client/api
+ key = api key
+ secret = api secret
+
+.. Hint:: Sections can also be used to log into the same region using different accounts.
+
+By passing the ``api_region`` argument to the CloudStack modules, the desired region will be selected.
+
+.. code-block:: yaml
+
+ - name: ensure my ssh public key exists on Exoscale
+ cs_sshkeypair:
+ name: my-ssh-key
+ public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
+ api_region: exoscale
+ delegate_to: localhost
+
+Or by looping over a regions list if you want to do the task in every region:
+
+.. code-block:: yaml
+
+ - name: ensure my ssh public key exists in all CloudStack regions
+ local_action: cs_sshkeypair
+ name: my-ssh-key
+ public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
+ api_region: "{{ item }}"
+ loop:
+ - exoscale
+ - example_cloud_one
+ - example_cloud_two
+
+Environment Variables
+`````````````````````
+.. versionadded:: 2.3
+
+Since Ansible 2.3 it is possible to use environment variables for domain (``CLOUDSTACK_DOMAIN``), account (``CLOUDSTACK_ACCOUNT``), project (``CLOUDSTACK_PROJECT``), VPC (``CLOUDSTACK_VPC``) and zone (``CLOUDSTACK_ZONE``). This simplifies the tasks by not repeating the arguments for every task.
+
+Below you see an example of how it can be used in combination with Ansible's block feature:
+
+.. code-block:: yaml
+
+ - hosts: cloud-vm
+ tasks:
+ - block:
+ - name: ensure my ssh public key
+ cs_sshkeypair:
+ name: my-ssh-key
+ public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
+
+            - name: ensure the VM is created and running
+ cs_instance:
+ display_name: "{{ inventory_hostname_short }}"
+ template: Linux Debian 7 64-bit 20GB Disk
+ service_offering: "{{ cs_offering }}"
+ ssh_key: my-ssh-key
+ state: running
+
+ delegate_to: localhost
+ environment:
+ CLOUDSTACK_DOMAIN: root/customers
+ CLOUDSTACK_PROJECT: web-app
+ CLOUDSTACK_ZONE: sf-1
+
+.. Note:: You are still able to override the environment variables using the module arguments, for example ``zone: sf-2``
+
+.. Note:: Unlike ``CLOUDSTACK_REGION`` these additional environment variables are ignored in the CLI ``cs``.
+
+Use Cases
+`````````
+The following should give you some ideas on how to use the modules to provision VMs in the cloud. As always, there isn't only one way to do it, but keeping it simple at the beginning is a good start.
+
+Use Case: Provisioning in an Advanced Networking CloudStack setup
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+Our CloudStack cloud has an advanced networking setup. We would like to provision web servers, which get a static NAT and open firewall ports 80 and 443. Further, we provision database servers, to which we do not give any public access. For accessing the VMs by SSH, we use an SSH jump host.
+
+This is what our inventory looks like:
+
+.. code-block:: none
+
+ [cloud-vm:children]
+ webserver
+ db-server
+ jumphost
+
+ [webserver]
+ web-01.example.com public_ip=198.51.100.20
+ web-02.example.com public_ip=198.51.100.21
+
+ [db-server]
+ db-01.example.com
+ db-02.example.com
+
+ [jumphost]
+ jump.example.com public_ip=198.51.100.22
+
+As you can see, the public IPs for our web servers and jumphost have been assigned as the variable ``public_ip`` directly in the inventory.
+
+To configure the jumphost, web servers and database servers, we use ``group_vars``. The ``group_vars`` directory contains 4 files for configuration of the groups: cloud-vm, jumphost, webserver and db-server. The cloud-vm group is there for specifying the defaults of our cloud infrastructure.
+
+.. code-block:: yaml
+
+ # file: group_vars/cloud-vm
+ ---
+ cs_offering: Small
+ cs_firewall: []
+
+Our database servers should get more CPU and RAM, so we use a ``Large`` offering for them.
+
+.. code-block:: yaml
+
+ # file: group_vars/db-server
+ ---
+ cs_offering: Large
+
+The web servers should get a ``Small`` offering as we would scale them horizontally, which is also our default offering. We also ensure the known web ports are opened for the world.
+
+.. code-block:: yaml
+
+ # file: group_vars/webserver
+ ---
+ cs_firewall:
+ - { port: 80 }
+ - { port: 443 }
+
+Further we provision a jump host which has only port 22 opened for accessing the VMs from our office IPv4 network.
+
+.. code-block:: yaml
+
+ # file: group_vars/jumphost
+ ---
+ cs_firewall:
+ - { port: 22, cidr: "17.17.17.0/24" }
+
+Now to the fun part. We create a playbook, ``infra.yaml``, to create our infrastructure:
+
+.. code-block:: yaml
+
+ # file: infra.yaml
+ ---
+ - name: provision our VMs
+ hosts: cloud-vm
+ tasks:
+ - name: run all enclosed tasks from localhost
+ delegate_to: localhost
+ block:
+ - name: ensure VMs are created and running
+ cs_instance:
+ name: "{{ inventory_hostname_short }}"
+ template: Linux Debian 7 64-bit 20GB Disk
+ service_offering: "{{ cs_offering }}"
+ state: running
+
+ - name: ensure firewall ports opened
+ cs_firewall:
+ ip_address: "{{ public_ip }}"
+ port: "{{ item.port }}"
+ cidr: "{{ item.cidr | default('0.0.0.0/0') }}"
+ loop: "{{ cs_firewall }}"
+ when: public_ip is defined
+
+ - name: ensure static NATs
+ cs_staticnat: vm="{{ inventory_hostname_short }}" ip_address="{{ public_ip }}"
+ when: public_ip is defined
+
+In the above play we defined 3 tasks and use the group ``cloud-vm`` as the target to handle all VMs in the cloud. But instead of SSHing to these VMs, we use ``delegate_to: localhost`` to execute the API calls locally from our workstation.
+
+In the first task, we ensure we have a running VM created with the Debian template. If the VM is already created but stopped, the task would just start it. If you would like to change the offering on an existing VM, you must add ``force: yes`` to the task, which would stop the VM, change the offering and start the VM again.
+
+In the second task we ensure the ports are opened if we give a public IP to the VM.
+
+In the third task we add static NAT to the VMs having a public IP defined.
+
+
+.. Note:: The public IP addresses must have been acquired in advance; see also the ``cs_ip_address`` module.
+
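+If the address still needs to be acquired, a minimal sketch using ``cs_ip_address`` could look like the following (the network name is an illustrative assumption):
+
+.. code-block:: yaml
+
+    - name: ensure a public IP address is associated with the network
+      cs_ip_address:
+        network: my-network
+      delegate_to: localhost
+      register: ip_address
+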
+.. Note:: For some modules, for example ``cs_sshkeypair``, you usually want this to be executed only once, not for every VM. Therefore you would make a separate play for it targeting localhost. You will find an example in the use cases below.
+
+Use Case: Provisioning on a Basic Networking CloudStack setup
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+A basic networking CloudStack setup is slightly different: Every VM gets a public IP directly assigned and security groups are used for access restriction policy.
+
+This is what our inventory looks like:
+
+.. code-block:: none
+
+ [cloud-vm:children]
+ webserver
+
+ [webserver]
+ web-01.example.com
+ web-02.example.com
+
+The default for your VMs looks like this:
+
+.. code-block:: yaml
+
+ # file: group_vars/cloud-vm
+ ---
+ cs_offering: Small
+    cs_securitygroups: [ 'default' ]
+
+Our webserver will also be in security group ``web``:
+
+.. code-block:: yaml
+
+ # file: group_vars/webserver
+ ---
+ cs_securitygroups: [ 'default', 'web' ]
+
+The playbook looks like the following:
+
+.. code-block:: yaml
+
+ # file: infra.yaml
+ ---
+ - name: cloud base setup
+ hosts: localhost
+ tasks:
+ - name: upload ssh public key
+ cs_sshkeypair:
+ name: defaultkey
+ public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
+
+ - name: ensure security groups exist
+ cs_securitygroup:
+ name: "{{ item }}"
+ loop:
+ - default
+ - web
+
+ - name: add inbound SSH to security group default
+ cs_securitygroup_rule:
+ security_group: default
+ start_port: "{{ item }}"
+ end_port: "{{ item }}"
+ loop:
+ - 22
+
+ - name: add inbound TCP rules to security group web
+ cs_securitygroup_rule:
+ security_group: web
+ start_port: "{{ item }}"
+ end_port: "{{ item }}"
+ loop:
+ - 80
+ - 443
+
+ - name: install VMs in the cloud
+ hosts: cloud-vm
+ tasks:
+ - delegate_to: localhost
+ block:
+ - name: create and run VMs on CloudStack
+ cs_instance:
+ name: "{{ inventory_hostname_short }}"
+ template: Linux Debian 7 64-bit 20GB Disk
+ service_offering: "{{ cs_offering }}"
+ security_groups: "{{ cs_securitygroups }}"
+ ssh_key: defaultkey
+        state: running
+ register: vm
+
+ - name: show VM IP
+ debug: msg="VM {{ inventory_hostname }} {{ vm.default_ip }}"
+
+ - name: assign IP to the inventory
+ set_fact: ansible_ssh_host={{ vm.default_ip }}
+
+ - name: waiting for SSH to come up
+ wait_for: port=22 host={{ vm.default_ip }} delay=5
+
+In the first play we set up the security groups; in the second play the VMs will be created and assigned to these groups. Further, you see that we assign the public IP returned from the modules to the host inventory. This is needed as we do not know the IPs we will get in advance. In a next step you would configure the DNS servers with these IPs for accessing the VMs with their DNS name.
+
+In the last task we wait for SSH to be accessible, so any later play would be able to access the VM by SSH without failure.
diff --git a/docs/docsite/rst/scenario_guides/guide_docker.rst b/docs/docsite/rst/scenario_guides/guide_docker.rst
new file mode 100644
index 0000000..8fe8111
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_docker.rst
@@ -0,0 +1,6 @@
+:orphan:
+
+Docker Guide
+============
+
+The content on this page has moved. Please see the updated :ref:`ansible_collections.community.docker.docsite.scenario_guide` in the `community.docker collection <https://galaxy.ansible.com/community/docker>`_.
diff --git a/docs/docsite/rst/scenario_guides/guide_gce.rst b/docs/docsite/rst/scenario_guides/guide_gce.rst
new file mode 100644
index 0000000..5407104
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_gce.rst
@@ -0,0 +1,302 @@
+Google Cloud Platform Guide
+===========================
+
+.. gce_intro:
+
+Introduction
+--------------------------
+
+Ansible + Google have been working together on a set of auto-generated
+Ansible modules designed to consistently and comprehensively cover the entirety
+of the Google Cloud Platform (GCP).
+
+Ansible contains modules for managing Google Cloud Platform resources,
+including creating instances, controlling network access, working with
+persistent disks, managing load balancers, and a lot more.
+
+These new modules can be found under a new, consistent naming scheme "gcp_*"
+(Note: gcp_target_proxy and gcp_url_map are legacy modules, despite the "gcp_*"
+name. Please use gcp_compute_target_proxy and gcp_compute_url_map instead).
+
+Additionally, the gcp_compute inventory plugin can discover all
+Google Compute Engine (GCE) instances
+and make them automatically available in your Ansible inventory.
+
+You may see a collection of other GCP modules that do not conform to this
+naming convention. These are the original modules primarily developed by the
+Ansible community. You will find some overlapping functionality such as with
+the "gce" module and the new "gcp_compute_instance" module. Either can be
+used, but you may experience issues trying to use them together.
+
+While the community GCP modules are not going away, Google is investing effort
+into the new "gcp_*" modules. Google is committed to ensuring the Ansible
+community has a great experience with GCP and therefore recommends adopting
+these new modules if possible.
+
+
+Requirements
+---------------
+The GCP modules require both the ``requests`` and the
+``google-auth`` libraries to be installed.
+
+.. code-block:: bash
+
+ $ pip install requests google-auth
+
+Alternatively for RHEL / CentOS, the ``python-requests`` package is also
+available to satisfy the ``requests`` dependency.
+
+.. code-block:: bash
+
+ $ yum install python-requests
+
+Credentials
+-----------
+It's easy to create a GCP account with credentials for Ansible. You have multiple options to
+get your credentials - here are two of the most common options:
+
+* Service Accounts (Recommended): Use JSON service accounts with specific permissions.
+* Machine Accounts: Use the permissions associated with the GCP Instance you're using Ansible on.
+
+For the following examples, we'll be using service account credentials.
+
+To work with the GCP modules, you'll first need to get some credentials in the
+JSON format:
+
+1. `Create a Service Account <https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount>`_
+2. `Download JSON credentials <https://support.google.com/cloud/answer/6158849?hl=en&ref_topic=6262490#serviceaccounts>`_
+
+Once you have your credentials, there are two different ways to provide them to Ansible:
+
+* by specifying them directly as module parameters
+* by setting environment variables
+
+Providing Credentials as Module Parameters
+``````````````````````````````````````````
+
+For the GCP modules, you can specify the credentials as arguments:
+
+* ``auth_kind``: type of authentication being used (choices: machineaccount, serviceaccount, application)
+* ``service_account_email``: email associated with the project
+* ``service_account_file``: path to the JSON credentials file
+* ``project``: id of the project
+* ``scopes``: The specific scopes that you want the actions to use.
+
+For example, to create a new IP address using the ``gcp_compute_address`` module,
+you can use the following configuration:
+
+.. code-block:: yaml
+
+ - name: Create IP address
+ hosts: localhost
+ gather_facts: false
+
+ vars:
+ service_account_file: /home/my_account.json
+ project: my-project
+ auth_kind: serviceaccount
+ scopes:
+ - https://www.googleapis.com/auth/compute
+
+ tasks:
+
+ - name: Allocate an IP Address
+ gcp_compute_address:
+ state: present
+ name: 'test-address1'
+ region: 'us-west1'
+ project: "{{ project }}"
+ auth_kind: "{{ auth_kind }}"
+ service_account_file: "{{ service_account_file }}"
+ scopes: "{{ scopes }}"
+
+Providing Credentials as Environment Variables
+``````````````````````````````````````````````
+
+Set the following environment variables before running Ansible in order to configure your credentials:
+
+.. code-block:: bash
+
+ GCP_AUTH_KIND
+ GCP_SERVICE_ACCOUNT_EMAIL
+ GCP_SERVICE_ACCOUNT_FILE
+ GCP_SCOPES
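+
+For example (illustrative values, mirroring the module parameters above):
+
+.. code-block:: bash
+
+    $ export GCP_AUTH_KIND=serviceaccount
+    $ export GCP_SERVICE_ACCOUNT_FILE=/home/my_account.json
+    $ export GCP_SCOPES=https://www.googleapis.com/auth/compute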
+
+GCE Dynamic Inventory
+---------------------
+
+The best way to interact with your hosts is to use the gcp_compute inventory plugin, which dynamically queries GCE and tells Ansible what nodes can be managed.
+
+To be able to use this GCE dynamic inventory plugin, you need to enable it first by specifying the following in the ``ansible.cfg`` file:
+
+.. code-block:: ini
+
+ [inventory]
+ enable_plugins = gcp_compute
+
+Then, create a file that ends in ``.gcp.yml`` in your root directory.
+
+The gcp_compute inventory plugin takes in the same authentication information as any module.
+
+Here's an example of a valid inventory file:
+
+.. code-block:: yaml
+
+ plugin: gcp_compute
+ projects:
+ - graphite-playground
+ auth_kind: serviceaccount
+ service_account_file: /home/alexstephen/my_account.json
+
+
+Executing ``ansible-inventory --list -i <filename>.gcp.yml`` will create a list of GCP instances that are ready to be configured using Ansible.
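+
+The plugin also accepts options to filter and group the discovered hosts. A hedged sketch (the zone and the ``keyed_groups`` entry are illustrative; see the ``gcp_compute`` plugin documentation for the full option list):
+
+.. code-block:: yaml
+
+    plugin: gcp_compute
+    projects:
+      - graphite-playground
+    zones:
+      - us-central1-a
+    auth_kind: serviceaccount
+    service_account_file: /home/alexstephen/my_account.json
+    keyed_groups:
+      # create groups such as label_environment_production from instance labels
+      - key: labels
+        prefix: label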
+
+Create an instance
+``````````````````
+
+The full range of GCP modules provides the ability to create a wide variety of
+GCP resources with the full support of the entire GCP API.
+
+The following playbook creates a GCE instance. This instance relies on other GCP
+resources, such as a Disk. By creating those resources separately, we can give as
+much detail as necessary about how we want to configure them, for example the
+formatting of the Disk. By registering each result in a variable, we can simply insert the
+variable into the instance task. The gcp_compute_instance module will figure out the
+rest.
+
+.. code-block:: yaml
+
+ - name: Create an instance
+ hosts: localhost
+ gather_facts: false
+ vars:
+ gcp_project: my-project
+ gcp_cred_kind: serviceaccount
+ gcp_cred_file: /home/my_account.json
+ zone: "us-central1-a"
+ region: "us-central1"
+
+ tasks:
+ - name: create a disk
+ gcp_compute_disk:
+ name: 'disk-instance'
+ size_gb: 50
+ source_image: 'projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts'
+ zone: "{{ zone }}"
+ project: "{{ gcp_project }}"
+ auth_kind: "{{ gcp_cred_kind }}"
+ service_account_file: "{{ gcp_cred_file }}"
+ scopes:
+ - https://www.googleapis.com/auth/compute
+ state: present
+ register: disk
+      - name: create an address
+ gcp_compute_address:
+ name: 'address-instance'
+ region: "{{ region }}"
+ project: "{{ gcp_project }}"
+ auth_kind: "{{ gcp_cred_kind }}"
+ service_account_file: "{{ gcp_cred_file }}"
+ scopes:
+ - https://www.googleapis.com/auth/compute
+ state: present
+ register: address
+      - name: create an instance
+ gcp_compute_instance:
+ state: present
+ name: test-vm
+ machine_type: n1-standard-1
+ disks:
+ - auto_delete: true
+ boot: true
+ source: "{{ disk }}"
+ network_interfaces:
+ - network: null # use default
+ access_configs:
+ - name: 'External NAT'
+ nat_ip: "{{ address }}"
+ type: 'ONE_TO_ONE_NAT'
+ zone: "{{ zone }}"
+ project: "{{ gcp_project }}"
+ auth_kind: "{{ gcp_cred_kind }}"
+ service_account_file: "{{ gcp_cred_file }}"
+ scopes:
+ - https://www.googleapis.com/auth/compute
+ register: instance
+
+ - name: Wait for SSH to come up
+ wait_for: host={{ address.address }} port=22 delay=10 timeout=60
+
+ - name: Add host to groupname
+ add_host: hostname={{ address.address }} groupname=new_instances
+
+
+ - name: Manage new instances
+ hosts: new_instances
+ connection: ssh
+ become: True
+ roles:
+ - base_configuration
+ - production_server
+
+Note that use of the "add_host" module above creates a temporary, in-memory group. This means that a play in the same playbook can then manage machines
+in the 'new_instances' group, if so desired. Any sort of arbitrary configuration is possible at this point.
+
+For more information about Google Cloud, please visit the `Google Cloud website <https://cloud.google.com>`_.
+
+Migration Guides
+----------------
+
+gce.py -> gcp_compute_instance.py
+`````````````````````````````````
+As of Ansible 2.8, we're encouraging everyone to move from the ``gce`` module to the
+``gcp_compute_instance`` module. The ``gcp_compute_instance`` module has better
+support for all of GCP's features, fewer dependencies, more flexibility, and
+better support for GCP's authentication systems.
+
+The ``gcp_compute_instance`` module supports all of the features of the ``gce``
+module (and more!). Below is a mapping of ``gce`` fields over to
+``gcp_compute_instance`` fields.
+
+============================ ========================================== ======================
+ gce.py gcp_compute_instance.py Notes
+============================ ========================================== ======================
+ state state/status State on gce has multiple values: "present", "absent", "stopped", "started", "terminated". State on gcp_compute_instance is used to describe if the instance exists (present) or does not (absent). Status is used to describe if the instance is "started", "stopped" or "terminated".
+ image disks[].initialize_params.source_image You'll need to create a single disk using the disks[] parameter and set it to be the boot disk (disks[].boot = true)
+ image_family disks[].initialize_params.source_image See above.
+ external_projects disks[].initialize_params.source_image The name of the source_image will include the name of the project.
+ instance_names Use a loop or multiple tasks. Using loops is a more Ansible-centric way of creating multiple instances and gives you the most flexibility.
+ service_account_email service_accounts[].email This is the service_account email address that you want the instance to be associated with. It is not the service_account email address that is used for the credentials necessary to create the instance.
+ service_account_permissions service_accounts[].scopes These are the permissions you want to grant to the instance.
+ pem_file Not supported. We recommend using JSON service account credentials instead of PEM files.
+ credentials_file service_account_file
+ project_id project
+ name name This field does not accept an array of names. Use a loop to create multiple instances.
+ num_instances Use a loop For maximum flexibility, we're encouraging users to use Ansible features to create multiple instances, rather than letting the module do it for you.
+ network network_interfaces[].network
+ subnetwork network_interfaces[].subnetwork
+ persistent_boot_disk disks[].type = 'PERSISTENT'
+ disks disks[]
+ ip_forward can_ip_forward
+ external_ip network_interfaces[].access_configs.nat_ip This field takes multiple types of values. You can create an IP address with ``gcp_compute_address`` and place the name/output of the address here. You can also place the string value of the IP address's GCP name or the actual IP address.
+ disks_auto_delete disks[].auto_delete
+ preemptible scheduling.preemptible
+ disk_size disks[].initialize_params.disk_size_gb
+============================ ========================================== ======================
+
+An example playbook is below:
+
+.. code-block:: yaml
+
+ gcp_compute_instance:
+ name: "{{ item }}"
+ machine_type: n1-standard-1
+ ... # any other settings
+ zone: us-central1-a
+ project: "my-project"
+ auth_kind: "service_account_file"
+ service_account_file: "~/my_account.json"
+ state: present
+ loop:
+ - instance-1
+ - instance-2
diff --git a/docs/docsite/rst/scenario_guides/guide_infoblox.rst b/docs/docsite/rst/scenario_guides/guide_infoblox.rst
new file mode 100644
index 0000000..c7227c5
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_infoblox.rst
@@ -0,0 +1,292 @@
+.. _nios_guide:
+
+************************
+ Infoblox Guide
+************************
+
+.. contents:: Topics
+
+This guide describes how to use Ansible with the Infoblox Network Identity Operating System (NIOS). With Ansible integration, you can use Ansible playbooks to automate Infoblox Core Network Services for IP address management (IPAM), DNS, and inventory tracking.
+
+You can review simple example tasks in the documentation for any of the :ref:`NIOS modules <nios_net tools_modules>` or look at the `Use cases with modules`_ section for more elaborate examples. See the `Infoblox <https://www.infoblox.com/>`_ website for more information on the Infoblox product.
+
+.. note:: You can retrieve most of the example playbooks used in this guide from the `network-automation/infoblox_ansible <https://github.com/network-automation/infoblox_ansible>`_ GitHub repository.
+
+Prerequisites
+=============
+Before using Ansible ``nios`` modules with Infoblox, you must install the ``infoblox-client`` on your Ansible control node:
+
+.. code-block:: bash
+
+ $ sudo pip install infoblox-client
+
+.. note::
+ You need an NIOS account with the WAPI feature enabled to use Ansible with Infoblox.
+
+.. _nios_credentials:
+
+Credentials and authenticating
+==============================
+
+To use Infoblox ``nios`` modules in playbooks, you need to configure the credentials to access your Infoblox system. The examples in this guide use credentials stored in ``<playbookdir>/group_vars/nios.yml``. Replace these values with your Infoblox credentials:
+
+.. code-block:: yaml
+
+ ---
+ nios_provider:
+ host: 192.0.0.2
+ username: admin
+ password: ansible
+
+NIOS lookup plugins
+===================
+
+Ansible includes the following lookup plugins for NIOS:
+
+- :ref:`nios <nios_lookup>` - Uses the Infoblox WAPI API to fetch NIOS specified objects, for example network views, DNS views, and host records.
+- :ref:`nios_next_ip <nios_next_ip_lookup>` - Provides the next available IP address from a network. You'll see an example of this in `Creating a host record`_.
+- :ref:`nios_next_network <nios_next_network_lookup>` - Returns the next available network range for a network-container.
+
+You must run the NIOS lookup plugins locally by specifying ``connection: local``. See :ref:`lookup plugins <lookup_plugins>` for more detail.
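+
+For example, a minimal sketch of the ``nios_next_network`` plugin, requesting the next available /24 from a network-container (the container CIDR is an illustrative assumption):
+
+.. code-block:: yaml
+
+    - name: return the next available network range in the network-container
+      set_fact:
+        networkaddr: "{{ lookup('nios_next_network', '10.0.0.0/16', cidr=24, provider=nios_provider) }}"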
+
+
+Retrieving all network views
+----------------------------
+
+To retrieve all network views and save them in a variable, use the :ref:`set_fact <set_fact_module>` module with the :ref:`nios <nios_lookup>` lookup plugin:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: fetch all networkview objects
+ set_fact:
+ networkviews: "{{ lookup('nios', 'networkview', provider=nios_provider) }}"
+
+ - name: check the networkviews
+ debug:
+ var: networkviews
+
+
+Retrieving a host record
+------------------------
+
+To retrieve a set of host records, use the ``set_fact`` module with the ``nios`` lookup plugin and include a filter for the specific hosts you want to retrieve:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: fetch host leaf01
+ set_fact:
+ host: "{{ lookup('nios', 'record:host', filter={'name': 'leaf01.ansible.com'}, provider=nios_provider) }}"
+
+ - name: check the leaf01 return variable
+ debug:
+ var: host
+
+ - name: debug specific variable (ipv4 address)
+ debug:
+ var: host.ipv4addrs[0].ipv4addr
+
+ - name: fetch host leaf02
+ set_fact:
+ host: "{{ lookup('nios', 'record:host', filter={'name': 'leaf02.ansible.com'}, provider=nios_provider) }}"
+
+ - name: check the leaf02 return variable
+ debug:
+ var: host
+
+
+If you run this ``get_host_record.yml`` playbook, you should see results similar to the following:
+
+.. code-block:: none
+
+ $ ansible-playbook get_host_record.yml
+
+ PLAY [localhost] ***************************************************************************************
+
+ TASK [fetch host leaf01] ******************************************************************************
+ ok: [localhost]
+
+ TASK [check the leaf01 return variable] *************************************************************
+ ok: [localhost] => {
+ < ...output shortened...>
+ "host": {
+ "ipv4addrs": [
+ {
+ "configure_for_dhcp": false,
+ "host": "leaf01.ansible.com",
+ }
+ ],
+ "name": "leaf01.ansible.com",
+ "view": "default"
+ }
+ }
+
+ TASK [debug specific variable (ipv4 address)] ******************************************************
+ ok: [localhost] => {
+ "host.ipv4addrs[0].ipv4addr": "192.168.1.11"
+ }
+
+ TASK [fetch host leaf02] ******************************************************************************
+ ok: [localhost]
+
+ TASK [check the leaf02 return variable] *************************************************************
+ ok: [localhost] => {
+ < ...output shortened...>
+ "host": {
+ "ipv4addrs": [
+ {
+ "configure_for_dhcp": false,
+            "host": "leaf02.ansible.com",
+ "ipv4addr": "192.168.1.12"
+ }
+ ],
+ }
+ }
+
+ PLAY RECAP ******************************************************************************************
+ localhost : ok=5 changed=0 unreachable=0 failed=0
+
+The output above shows the host records for ``leaf01.ansible.com`` and ``leaf02.ansible.com`` that were retrieved by the ``nios`` lookup plugin. This playbook saves the information in variables which you can use in other playbooks. This allows you to use Infoblox as a single source of truth to gather and use information that changes dynamically. See :ref:`playbooks_variables` for more information on using Ansible variables. See the :ref:`nios <nios_lookup>` examples for more data options that you can retrieve.
+
+You can access these playbooks at `Infoblox lookup playbooks <https://github.com/network-automation/infoblox_ansible/tree/master/lookup_playbooks>`_.
+
+Use cases with modules
+======================
+
+You can use the ``nios`` modules in tasks to simplify common Infoblox workflows. Be sure to set up your :ref:`NIOS credentials<nios_credentials>` before following these examples.
+
+Configuring an IPv4 network
+---------------------------
+
+To configure an IPv4 network, use the :ref:`nios_network <nios_network_module>` module:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: Create a network on the default network view
+ nios_network:
+ network: 192.168.100.0/24
+ comment: sets the IPv4 network
+ options:
+ - name: domain-name
+ value: ansible.com
+ state: present
+ provider: "{{nios_provider}}"
+
+Notice the last parameter, ``provider``, uses the variable ``nios_provider`` defined in the ``group_vars/`` directory.
+
+Creating a host record
+----------------------
+
+To create a host record named ``leaf03.ansible.com`` on the newly-created IPv4 network:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: configure an IPv4 host record
+ nios_host_record:
+ name: leaf03.ansible.com
+ ipv4addrs:
+ - ipv4addr:
+ "{{ lookup('nios_next_ip', '192.168.100.0/24', provider=nios_provider)[0] }}"
+ state: present
+ provider: "{{nios_provider}}"
+
+Notice the IPv4 address in this example uses the :ref:`nios_next_ip <nios_next_ip_lookup>` lookup plugin to find the next available IPv4 address on the network.
+
+Creating a forward DNS zone
+---------------------------
+
+To configure a forward DNS zone, use the ``nios_zone`` module:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: Create a forward DNS zone called ansible-test.com
+ nios_zone:
+ name: ansible-test.com
+ comment: local DNS zone
+ state: present
+ provider: "{{ nios_provider }}"
+
+Creating a reverse DNS zone
+---------------------------
+
+To configure a reverse DNS zone:
+
+.. code-block:: yaml
+
+ ---
+ - hosts: nios
+ connection: local
+ tasks:
+ - name: configure a reverse mapping zone on the system using IPV6 zone format
+ nios_zone:
+ name: 100::1/128
+ zone_format: IPV6
+ state: present
+ provider: "{{ nios_provider }}"
+
+Dynamic inventory script
+========================
+
+You can use the Infoblox dynamic inventory script to import your network node inventory with Infoblox NIOS. To gather the inventory from Infoblox, you need two files:
+
+- `infoblox.yaml <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/infoblox.yaml>`_ - A file that specifies the NIOS provider arguments and optional filters.
+
+- `infoblox.py <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/infoblox.py>`_ - The python script that retrieves the NIOS inventory.
+
+.. note::
+
+   Please note that the inventory script only works when Ansible 2.9, 2.10 or 3 has been installed. The inventory script will eventually be removed from `community.general <https://galaxy.ansible.com/community/general>`_, and will not work if ``community.general`` is only installed with ``ansible-galaxy collection install``. Please use the inventory plugin from `infoblox.nios_modules <https://galaxy.ansible.com/infoblox/nios_modules>`_ instead.
+
+To use the Infoblox dynamic inventory script:
+
+#. Download the ``infoblox.yaml`` file and save it in the ``/etc/ansible`` directory.
+
+#. Modify the ``infoblox.yaml`` file with your NIOS credentials.
+
+#. Download the ``infoblox.py`` file and save it in the ``/etc/ansible/hosts`` directory.
+
+#. Change the permissions on the ``infoblox.py`` file to make the file an executable:
+
+.. code-block:: bash
+
+ $ sudo chmod +x /etc/ansible/hosts/infoblox.py
+
+You can optionally use ``./infoblox.py --list`` to test the script. After a few minutes, you should see your Infoblox inventory in JSON format. You can explicitly use the Infoblox dynamic inventory script as follows:
+
+.. code-block:: bash
+
+ $ ansible -i infoblox.py all -m ping
+
+You can also implicitly use the Infoblox dynamic inventory script by including it in your inventory directory (``/etc/ansible/hosts`` by default). See :ref:`dynamic_inventory` for more details.
+
+.. seealso::
+
+ `Infoblox website <https://www.infoblox.com//>`_
+ The Infoblox website
+ `Infoblox and Ansible Deployment Guide <https://www.infoblox.com/resources/deployment-guides/infoblox-and-ansible-integration>`_
+ The deployment guide for Ansible integration provided by Infoblox.
+ `Infoblox Integration in Ansible 2.5 <https://www.ansible.com/blog/infoblox-integration-in-ansible-2.5>`_
+ Ansible blog post about Infoblox.
+ :ref:`Ansible NIOS modules <nios_net tools_modules>`
+ The list of supported NIOS modules, with examples.
+ `Infoblox Ansible Examples <https://github.com/network-automation/infoblox_ansible>`_
+ Infoblox example playbooks.
diff --git a/docs/docsite/rst/scenario_guides/guide_meraki.rst b/docs/docsite/rst/scenario_guides/guide_meraki.rst
new file mode 100644
index 0000000..94c5b16
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_meraki.rst
@@ -0,0 +1,193 @@
+.. _meraki_guide:
+
+******************
+Cisco Meraki Guide
+******************
+
+.. contents::
+ :local:
+
+
+.. _meraki_guide_intro:
+
+What is Cisco Meraki?
+=====================
+
+Cisco Meraki is an easy-to-use, cloud-based, network infrastructure platform for enterprise environments. While most network hardware uses command-line interfaces (CLIs) for configuration, Meraki uses an easy-to-use Dashboard hosted in the Meraki cloud. No on-premises management hardware or software is required - only the network infrastructure to run your business.
+
+MS Switches
+-----------
+
+Meraki MS switches come in multiple flavors and form factors. Meraki switches support 10/100/1000/10000 ports, as well as Cisco's mGig technology for 2.5/5/10Gbps copper connectivity. 8, 24, and 48 port flavors are available with PoE (802.3af/802.3at/UPoE) available on many models.
+
+MX Firewalls
+------------
+
+Meraki's MX firewalls support full layer 3-7 deep packet inspection. MX firewalls are compatible with a variety of VPN technologies including IPSec, SSL VPN, and Meraki's easy-to-use AutoVPN.
+
+MR Wireless Access Points
+-------------------------
+
+MR access points are enterprise-class, high-performance wireless access points. MR access points have MIMO technology and integrated beamforming built-in for high-performance applications. BLE allows for advanced location applications to be developed with no on-premises analytics platforms.
+
+Using the Meraki modules
+========================
+
+Meraki modules provide a user-friendly interface to manage your Meraki environment using Ansible. For example, details about SNMP settings for a particular organization can be discovered using the :ref:`meraki_snmp <meraki_snmp_module>` module.
+
+.. code-block:: yaml
+
+ - name: Query SNMP settings
+ meraki_snmp:
+ api_key: abc123
+ org_name: AcmeCorp
+ state: query
+ delegate_to: localhost
+
+Information about a particular object can be queried. For example, the :ref:`meraki_admin <meraki_admin_module>` module supports querying an administrator by email address:
+
+.. code-block:: yaml
+
+ - name: Gather information about Jane Doe
+ meraki_admin:
+ api_key: abc123
+ org_name: AcmeCorp
+ state: query
+ email: janedoe@email.com
+ delegate_to: localhost
+
+Common Parameters
+=================
+
+All Ansible Meraki modules support the following parameters which affect communication with the Meraki Dashboard API. Most of these should only be used by Meraki developers and not the general public.
+
+ host
+ Hostname or IP of Meraki Dashboard.
+
+ use_https
+ Specifies whether communication should be over HTTPS. (Defaults to ``yes``)
+
+ use_proxy
+ Whether to use a proxy for any communication.
+
+ validate_certs
+ Determine whether certificates should be validated or trusted. (Defaults to ``yes``)
+
+These are the common parameters which are used for almost every module.
+
+ org_name
+ Name of organization to perform actions in.
+
+ org_id
+ ID of organization to perform actions in.
+
+ net_name
+ Name of network to perform actions in.
+
+ net_id
+ ID of network to perform actions in.
+
+ state
+ General specification of what action to take. ``query`` does lookups. ``present`` creates or edits. ``absent`` deletes.
+
+.. hint:: Use the ``org_id`` and ``net_id`` parameters when possible. ``org_name`` and ``net_name`` require additional behind-the-scenes API calls to learn the ID values. ``org_id`` and ``net_id`` will perform faster.
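+
+For example, the SNMP query shown earlier could skip the extra name lookup by passing the organization ID directly (the ID value is illustrative):
+
+.. code-block:: yaml
+
+    - name: Query SNMP settings by organization ID
+      meraki_snmp:
+        api_key: abc123
+        org_id: 133277
+        state: query
+      delegate_to: localhost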
+
+Meraki Authentication
+=====================
+
+All API access with the Meraki Dashboard requires an API key. An API key can be generated from the organization's settings page. Each play in a playbook requires the ``api_key`` parameter to be specified.
+
+The "Vault" feature of Ansible allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plain text in your playbooks or roles. These vault files can then be distributed or placed in source control. See :ref:`playbooks_vault` for more information.
+
+Meraki's API returns a 404 error if the API key is not correct. It does not provide any specific error saying the key is incorrect. If you receive a 404 error, check the API key first.
+
+Returned Data Structures
+========================
+
+Meraki and its related Ansible modules return most information in the form of a list. For example, this is the information returned by ``meraki_admin`` when querying administrators. It returns a list even though there's only one administrator.
+
+.. code-block:: json
+
+ [
+ {
+ "orgAccess": "full",
+ "name": "John Doe",
+ "tags": [],
+ "networks": [],
+ "email": "john@doe.com",
+ "id": "12345677890"
+ }
+ ]
+
+Handling Returned Data
+======================
+
+Since Meraki's response data uses lists instead of properly keyed dictionaries for responses, certain strategies should be used when querying data for particular information. For many situations, use the ``selectattr()`` Jinja2 filter.
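+
+For example, to pull a single administrator out of a registered query result (a sketch; the ``admins`` variable and the email address are illustrative):
+
+.. code-block:: yaml
+
+    - set_fact:
+        jane: "{{ admins.data | selectattr('email', 'equalto', 'janedoe@email.com') | list | first }}"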
+
+Merging Existing and New Data
+=============================
+
+Ansible's Meraki modules do not allow for manipulating data in place. For example, you may need to insert a rule in the middle of a firewall ruleset. Ansible and the Meraki modules lack a way to directly merge the new rule into the existing data. However, a playbook can use a few tasks to split the list where you need to insert a rule and then merge the pieces together again with the new rule added. The steps involved are as follows:
+
+1. Create blank "front" and "back" lists.
+ ::
+
+ vars:
+ - front_rules: []
+ - back_rules: []
+2. Get existing firewall rules from Meraki and create a new variable.
+ ::
+
+ - name: Get firewall rules
+ meraki_mx_l3_firewall:
+ auth_key: abc123
+ org_name: YourOrg
+ net_name: YourNet
+ state: query
+ delegate_to: localhost
+ register: rules
+ - set_fact:
+ original_ruleset: '{{rules.data}}'
+3. Write the new rule. The new rule must itself be a list so it can be concatenated with the front and back lists in an upcoming step; the leading ``-`` puts the rule in a list.
+   ::
+
+       - set_fact:
+           new_rule:
+             - comment: Block traffic to server
+               src_cidr: 192.0.1.0/24
+               src_port: any
+               dst_cidr: 192.0.1.2/32
+               dst_port: any
+               protocol: any
+               policy: deny
+4. Split the existing rules into two lists. This assumes the existing ruleset is 2 rules long.
+   ::
+
+       - set_fact:
+           front_rules: '{{ front_rules + original_ruleset[:1] }}'
+       - set_fact:
+           back_rules: '{{ back_rules + original_ruleset[1:] }}'
+5. Merge rules with the new rule in the middle.
+ ::
+
+ - set_fact:
+ new_ruleset: '{{front_rules + new_rule + back_rules}}'
+6. Upload new ruleset to Meraki.
+ ::
+
+ - name: Set two firewall rules
+ meraki_mx_l3_firewall:
+ auth_key: abc123
+ org_name: YourOrg
+ net_name: YourNet
+ state: present
+ rules: '{{ new_ruleset }}'
+ delegate_to: localhost
+
+Error Handling
+==============
+
+Ansible's Meraki modules will often fail if improper or incompatible parameters are specified. However, there will likely be scenarios where the module accepts the information but the Meraki API rejects the data. If this happens, the error will be returned in the ``body`` field along with an HTTP 400 status code.
+
+Meraki's API returns a 404 error if the API key is not correct. It does not provide any specific error saying the key is incorrect. If you receive a 404 error, check the API key first. 404 errors can also occur if improper object IDs (ex. ``org_id``) are specified.
diff --git a/docs/docsite/rst/scenario_guides/guide_online.rst b/docs/docsite/rst/scenario_guides/guide_online.rst
new file mode 100644
index 0000000..2c181a9
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_online.rst
@@ -0,0 +1,41 @@
+****************
+Online.net Guide
+****************
+
+Introduction
+============
+
+Online is a French hosting company mainly known for providing bare-metal servers named Dedibox.
+Check it out: `https://www.online.net/en <https://www.online.net/en>`_
+
+Dynamic inventory for Online resources
+--------------------------------------
+
+Ansible has a dynamic inventory plugin that can list your resources.
+
+1. Create a YAML configuration such as ``online_inventory.yml`` with this content:
+
+.. code-block:: yaml
+
+ plugin: online
+
+2. Set your ``ONLINE_TOKEN`` environment variable with your token.
+ You need to open an account and log into it before you can get a token.
+ You can find your token at the following page: `https://console.online.net/en/api/access <https://console.online.net/en/api/access>`_
+
+3. You can test that your inventory is working by running:
+
+.. code-block:: bash
+
+ $ ansible-inventory -v -i online_inventory.yml --list
+
+
+4. Now you can run your playbook or any other module with this inventory:
+
+.. code-block:: console
+
+ $ ansible all -i online_inventory.yml -m ping
+ sd-96735 | SUCCESS => {
+ "changed": false,
+ "ping": "pong"
+ }
diff --git a/docs/docsite/rst/scenario_guides/guide_oracle.rst b/docs/docsite/rst/scenario_guides/guide_oracle.rst
new file mode 100644
index 0000000..170ea90
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_oracle.rst
@@ -0,0 +1,103 @@
+===================================
+Oracle Cloud Infrastructure Guide
+===================================
+
+************
+Introduction
+************
+
+Oracle provides a number of Ansible modules to interact with Oracle Cloud Infrastructure (OCI). In this guide, we will explain how you can use these modules to orchestrate, provision and configure your infrastructure on OCI.
+
+************
+Requirements
+************
+To use the OCI Ansible modules, you must have the following prerequisites on your control node, the computer from which Ansible playbooks are executed.
+
+1. `An Oracle Cloud Infrastructure account. <https://cloud.oracle.com/en_US/tryit>`_
+
+2. A user created in that account, in a security group with a policy that grants the necessary permissions for working with resources in those compartments. For guidance, see `How Policies Work <https://docs.cloud.oracle.com/iaas/Content/Identity/Concepts/policies.htm>`_.
+
+3. The necessary credentials and OCID information.
+
+************
+Installation
+************
+1. Install the Oracle Cloud Infrastructure Python SDK (`detailed installation instructions <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/installation.html>`_):
+
+.. code-block:: bash
+
+ pip install oci
+
+2. Install the Ansible OCI Modules in one of two ways:
+
+a. From Galaxy:
+
+.. code-block:: bash
+
+ ansible-galaxy install oracle.oci_ansible_modules
+
+b. From GitHub:
+
+.. code-block:: bash
+
+ $ git clone https://github.com/oracle/oci-ansible-modules.git
+
+.. code-block:: bash
+
+ $ cd oci-ansible-modules
+
+
+Run one of the following commands:
+
+- If Ansible is installed only for your user:
+
+.. code-block:: bash
+
+ $ ./install.py
+
+- If Ansible is installed as root:
+
+.. code-block:: bash
+
+ $ sudo ./install.py
+
+*************
+Configuration
+*************
+
+When creating and configuring Oracle Cloud Infrastructure resources, Ansible modules use the authentication information outlined `here <https://docs.cloud.oracle.com/iaas/Content/API/Concepts/sdkconfig.htm>`_.
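+
+A typical configuration file lives at ``~/.oci/config`` and, as a sketch, looks like the following (all values are placeholders):
+
+.. code-block:: ini
+
+    [DEFAULT]
+    user=ocid1.user.oc1..<unique_ID>
+    fingerprint=<your_key_fingerprint>
+    key_file=~/.oci/oci_api_key.pem
+    tenancy=ocid1.tenancy.oc1..<unique_ID>
+    region=us-ashburn-1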
+
+********
+Examples
+********
+Launch a compute instance
+=========================
+This `sample launch playbook <https://github.com/oracle/oci-ansible-modules/tree/master/samples/compute/launch_compute_instance>`_
+launches a public Compute instance and then accesses the instance from an Ansible module over an SSH connection. The sample illustrates how to:
+
+- Generate a temporary, host-specific SSH key pair.
+- Specify the public key from the key pair for connecting to the instance, and then launch the instance.
+- Connect to the newly launched instance using SSH.
+
+Create and manage Autonomous Data Warehouses
+============================================
+This `sample warehouse playbook <https://github.com/oracle/oci-ansible-modules/tree/master/samples/database/autonomous_data_warehouse>`_ creates an Autonomous Data Warehouse and manages its lifecycle. The sample shows how to:
+
+- Set up an Autonomous Data Warehouse.
+- List all of the Autonomous Data Warehouse instances available in a compartment, filtered by the display name.
+- Get the "facts" for a specified Autonomous Data Warehouse.
+- Stop and start an Autonomous Data Warehouse instance.
+- Delete an Autonomous Data Warehouse instance.
+
+Create and manage Autonomous Transaction Processing
+===================================================
+This `sample playbook <https://github.com/oracle/oci-ansible-modules/tree/master/samples/database/autonomous_database>`_
+creates an Autonomous Transaction Processing database and manages its lifecycle. The sample shows how to:
+
+- Set up an Autonomous Transaction Processing database instance.
+- List all of the Autonomous Transaction Processing instances in a compartment, filtered by the display name.
+- Get the "facts" for a specified Autonomous Transaction Processing instance.
+- Delete an Autonomous Transaction Processing database instance.
+
+You can find more examples here: `Sample Ansible Playbooks <https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/ansiblesamples.htm>`_.
diff --git a/docs/docsite/rst/scenario_guides/guide_packet.rst b/docs/docsite/rst/scenario_guides/guide_packet.rst
new file mode 100644
index 0000000..512620c
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_packet.rst
@@ -0,0 +1,311 @@
+**********************************
+Packet.net Guide
+**********************************
+
+Introduction
+============
+
+`Packet.net <https://packet.net>`_ is a bare metal infrastructure host that's supported by Ansible (>=2.3) through a dynamic inventory script and two cloud modules. The two modules are:
+
+- packet_sshkey: adds a public SSH key from file or value to the Packet infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized_keys.
+- packet_device: manages servers on Packet. You can use this module to create, restart and delete devices.
+
+Note: this guide assumes you are familiar with Ansible and how it works. If you're not, have a look at the :ref:`docs <ansible_documentation>` before getting started.
+
+Requirements
+============
+
+The Packet modules and inventory script connect to the Packet API using the packet-python package. You can install it with pip:
+
+.. code-block:: bash
+
+ $ pip install packet-python
+
+In order to check the state of devices created by Ansible on Packet, it's a good idea to install one of the `Packet CLI clients <https://www.packet.net/developers/integrations/>`_. Otherwise you can check them through the `Packet portal <https://app.packet.net/portal>`_.
+
+To use the modules and inventory script you'll need a Packet API token. You can generate an API token through the Packet portal `here <https://app.packet.net/portal#/api-keys>`__. The simplest way to authenticate yourself is to set the Packet API token in an environment variable:
+
+.. code-block:: bash
+
+ $ export PACKET_API_TOKEN=Bfse9F24SFtfs423Gsd3ifGsd43sSdfs
+
+If you're not comfortable exporting your API token, you can pass it as a parameter to the modules.
+
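+For example, a minimal sketch that passes the token as a module argument instead (assuming the token is stored in a variable named ``packet_api_token``, for example through Ansible Vault, and passed through the modules' ``auth_token`` parameter):
+
+.. code-block:: yaml
+
+    - packet_device:
+        auth_token: "{{ packet_api_token }}"
+        project_id: <your_project_id>
+        hostnames: myserver
+        operating_system: ubuntu_16_04
+        plan: baremetal_0
+        facility: sjc1
+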
+On Packet, devices and reserved IP addresses belong to `projects <https://www.packet.com/developers/api/#projects>`_. In order to use the packet_device module, you need to specify the UUID of the project in which you want to create or manage devices. You can find a project's UUID in the Packet portal `here <https://app.packet.net/portal#/projects/list/table/>`_ (it's just under the project table) or through one of the available `CLIs <https://www.packet.net/developers/integrations/>`_.
+
+
+If you want to use a new SSH key pair in this tutorial, you can generate it to ``./id_rsa`` and ``./id_rsa.pub`` as:
+
+.. code-block:: bash
+
+ $ ssh-keygen -t rsa -f ./id_rsa
+
+If you want to use an existing key pair, just copy the private and public key over to the playbook directory.
+
+
+Device Creation
+===============
+
+The following code block is a simple playbook that creates one `Type 0 <https://www.packet.com/cloud/servers/t1-small/>`_ server (the 'plan' parameter). You have to supply 'plan' and 'operating_system'. 'facility' defaults to 'ewr1' (Parsippany, NJ). You can find all the possible values for the parameters through a `CLI client <https://www.packet.net/developers/integrations/>`_.
+
+.. code-block:: yaml
+
+ # playbook_create.yml
+
+ - name: create ubuntu device
+ hosts: localhost
+ tasks:
+
+ - packet_sshkey:
+ key_file: ./id_rsa.pub
+ label: tutorial key
+
+ - packet_device:
+ project_id: <your_project_id>
+ hostnames: myserver
+ operating_system: ubuntu_16_04
+ plan: baremetal_0
+ facility: sjc1
+
+After running ``ansible-playbook playbook_create.yml``, you should have a server provisioned on Packet. You can verify through a CLI or in the `Packet portal <https://app.packet.net/portal#/projects/list/table>`__.
+
+If you get an error with the message "failed to set machine state present, error: Error 404: Not Found", please verify your project UUID.
+
+
+Updating Devices
+================
+
+The two parameters used to uniquely identify Packet devices are: "device_ids" and "hostnames". Both parameters accept either a single string (later converted to a one-element list), or a list of strings.
+
+The 'device_ids' and 'hostnames' parameters are mutually exclusive. The following values are all acceptable:
+
+- device_ids: a27b7a83-fc93-435b-a128-47a5b04f2dcf
+
+- hostnames: mydev1
+
+- device_ids: [a27b7a83-fc93-435b-a128-47a5b04f2dcf, 4887130f-0ccd-49a0-99b0-323c1ceb527b]
+
+- hostnames: [mydev1, mydev2]
+
+In addition, hostnames can contain a special '%d' formatter along with a 'count' parameter that lets you easily expand hostnames that follow a simple name and number pattern; in other words, ``hostnames: "mydev%d", count: 2`` will expand to [mydev1, mydev2].
+
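+For example, the following task (otherwise identical to the creation playbook above) would create two devices named mydev1 and mydev2:
+
+.. code-block:: yaml
+
+    - packet_device:
+        project_id: <your_project_id>
+        hostnames: "mydev%d"
+        count: 2
+        operating_system: ubuntu_16_04
+        plan: baremetal_0
+        facility: sjc1
+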
+If your playbook acts on existing Packet devices, you can only pass the 'hostnames' and 'device_ids' parameters. The following playbook shows how you can reboot a specific Packet device by setting the 'hostnames' parameter:
+
+.. code-block:: yaml
+
+ # playbook_reboot.yml
+
+ - name: reboot myserver
+ hosts: localhost
+ tasks:
+
+ - packet_device:
+ project_id: <your_project_id>
+ hostnames: myserver
+ state: rebooted
+
+You can also identify specific Packet devices with the 'device_ids' parameter. The device's UUID can be found in the `Packet Portal <https://app.packet.net/portal>`_ or by using a `CLI <https://www.packet.net/developers/integrations/>`_. The following playbook removes a Packet device using the 'device_ids' field:
+
+.. code-block:: yaml
+
+ # playbook_remove.yml
+
+ - name: remove a device
+ hosts: localhost
+ tasks:
+
+ - packet_device:
+ project_id: <your_project_id>
+ device_ids: <myserver_device_id>
+ state: absent
+
+
+More Complex Playbooks
+======================
+
+In this example, we'll create a CoreOS cluster with `user data <https://packet.com/developers/docs/servers/key-features/user-data/>`_.
+
+
+The CoreOS cluster will use `etcd <https://etcd.io/>`_ for discovery of other servers in the cluster. Before provisioning your servers, you'll need to generate a discovery token for your cluster:
+
+.. code-block:: bash
+
+ $ curl -w "\n" 'https://discovery.etcd.io/new?size=3'
+
+The following playbook will create an SSH key, 3 Packet servers, and then wait until SSH is ready (or until the 500-second timeout runs out). Make sure to substitute the discovery token URL in 'user_data', and the 'project_id' before running ``ansible-playbook``. Also, feel free to change 'plan' and 'facility'.
+
+.. code-block:: yaml
+
+ # playbook_coreos.yml
+
+ - name: Start 3 CoreOS nodes in Packet and wait until SSH is ready
+ hosts: localhost
+ tasks:
+
+ - packet_sshkey:
+ key_file: ./id_rsa.pub
+ label: new
+
+ - packet_device:
+ hostnames: [coreos-one, coreos-two, coreos-three]
+ operating_system: coreos_beta
+ plan: baremetal_0
+ facility: ewr1
+ project_id: <your_project_id>
+ wait_for_public_IPv: 4
+ user_data: |
+ #cloud-config
+ coreos:
+ etcd2:
+ discovery: https://discovery.etcd.io/<token>
+ advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
+ initial-advertise-peer-urls: http://$private_ipv4:2380
+ listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
+ listen-peer-urls: http://$private_ipv4:2380
+ fleet:
+ public-ip: $private_ipv4
+ units:
+ - name: etcd2.service
+ command: start
+ - name: fleet.service
+ command: start
+ register: newhosts
+
+ - name: wait for ssh
+ wait_for:
+ delay: 1
+ host: "{{ item.public_ipv4 }}"
+ port: 22
+ state: started
+ timeout: 500
+ loop: "{{ newhosts.results[0].devices }}"
+
+
+As with most Ansible modules, the Packet modules are idempotent in their default states, meaning the resources in your project will remain the same after re-runs of a playbook. Thus, we can keep the ``packet_sshkey`` module call in our playbook. If the public key is already in your Packet account, the call will have no effect.
+
+The second module call provisions 3 Packet Type 0 (specified using the 'plan' parameter) servers in the project identified by the 'project_id' parameter. The servers are all provisioned with CoreOS beta (the 'operating_system' parameter) and are customized with cloud-config user data passed to the 'user_data' parameter.
+
+The ``packet_device`` module has a ``wait_for_public_IPv`` parameter that is used to specify the version of the IP address to wait for (valid values are ``4`` or ``6`` for IPv4 or IPv6). If specified, Ansible will wait until the GET API call for a device contains an Internet-routable IP address of the specified version. When referring to an IP address of a created device in subsequent module calls, it's wise to use the ``wait_for_public_IPv`` parameter, or ``state: active`` in the packet_device module call.
+
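+For example, to make a device creation task block until the device is fully provisioned:
+
+.. code-block:: yaml
+
+    - packet_device:
+        project_id: <your_project_id>
+        hostnames: myserver
+        operating_system: ubuntu_16_04
+        plan: baremetal_0
+        facility: sjc1
+        state: active
+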
+Run the playbook:
+
+.. code-block:: bash
+
+ $ ansible-playbook playbook_coreos.yml
+
+Once the playbook quits, your new devices should be reachable through SSH. Try to connect to one and check if etcd has started properly:
+
+.. code-block:: bash
+
+ tomk@work $ ssh -i id_rsa core@$one_of_the_servers_ip
+ core@coreos-one ~ $ etcdctl cluster-health
+
+Once you create a couple of devices, you might appreciate the dynamic inventory script...
+
+
+Dynamic Inventory Script
+========================
+
+The dynamic inventory script queries the Packet API for a list of hosts, and exposes it to Ansible so you can easily identify and act on Packet devices.
+
+You can find it in Ansible Community General Collection's git repo at `scripts/inventory/packet_net.py <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.py>`_.
+
+The inventory script is configurable through an `ini file <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.ini>`_.
+
+If you want to use the inventory script, you must first export your Packet API token to a PACKET_API_TOKEN environment variable.
+
+You can either copy the inventory and ini config out from the cloned git repo, or you can download it to your working directory like so:
+
+.. code-block:: bash
+
+ $ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.py
+ $ chmod +x packet_net.py
+ $ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.ini
+
+In order to understand what the inventory script gives to Ansible you can run:
+
+.. code-block:: bash
+
+ $ ./packet_net.py --list
+
+It should print a JSON document similar to the following trimmed dictionary:
+
+.. code-block:: json
+
+ {
+ "_meta": {
+ "hostvars": {
+ "147.75.64.169": {
+ "packet_billing_cycle": "hourly",
+ "packet_created_at": "2017-02-09T17:11:26Z",
+ "packet_facility": "ewr1",
+ "packet_hostname": "coreos-two",
+ "packet_href": "/devices/d0ab8972-54a8-4bff-832b-28549d1bec96",
+ "packet_id": "d0ab8972-54a8-4bff-832b-28549d1bec96",
+ "packet_locked": false,
+ "packet_operating_system": "coreos_beta",
+ "packet_plan": "baremetal_0",
+ "packet_state": "active",
+ "packet_updated_at": "2017-02-09T17:16:35Z",
+ "packet_user": "core",
+ "packet_userdata": "#cloud-config\ncoreos:\n etcd2:\n discovery: https://discovery.etcd.io/e0c8a4a9b8fe61acd51ec599e2a4f68e\n advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001\n initial-advertise-peer-urls: http://$private_ipv4:2380\n listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001\n listen-peer-urls: http://$private_ipv4:2380\n fleet:\n public-ip: $private_ipv4\n units:\n - name: etcd2.service\n command: start\n - name: fleet.service\n command: start"
+ }
+ }
+ },
+ "baremetal_0": [
+ "147.75.202.255",
+ "147.75.202.251",
+ "147.75.202.249",
+ "147.75.64.129",
+ "147.75.192.51",
+ "147.75.64.169"
+ ],
+ "coreos_beta": [
+ "147.75.202.255",
+ "147.75.202.251",
+ "147.75.202.249",
+ "147.75.64.129",
+ "147.75.192.51",
+ "147.75.64.169"
+ ],
+ "ewr1": [
+ "147.75.64.129",
+ "147.75.192.51",
+ "147.75.64.169"
+ ],
+ "sjc1": [
+ "147.75.202.255",
+ "147.75.202.251",
+ "147.75.202.249"
+ ],
+ "coreos-two": [
+ "147.75.64.169"
+ ],
+ "d0ab8972-54a8-4bff-832b-28549d1bec96": [
+ "147.75.64.169"
+ ]
+ }
+
+In the ``['_meta']['hostvars']`` key, there is a list of devices (uniquely identified by their public IPv4 address) with their parameters. The other top-level keys are lists of devices grouped by some parameter. Here, the groupings are plan (all devices are of type baremetal_0), operating system (coreos_beta), and facility (ewr1 and sjc1).
+
+In addition to the parameter groups, there are also one-item groups with the UUID or hostname of the device.
+
+You can now target groups in playbooks! The following playbook runs a bootstrap role (which prepares the target so Ansible can manage it) on all devices in the "coreos_beta" group:
+
+.. code-block:: yaml
+
+ # playbook_bootstrap.yml
+
+ - hosts: coreos_beta
+ gather_facts: false
+ roles:
+    - defunctzombie.coreos-bootstrap
+
+Don't forget to supply the dynamic inventory in the ``-i`` argument!
+
+.. code-block:: bash
+
+ $ ansible-playbook -u core -i packet_net.py playbook_bootstrap.yml
+
+
+If you have any questions or comments let us know! help@packet.net
diff --git a/docs/docsite/rst/scenario_guides/guide_rax.rst b/docs/docsite/rst/scenario_guides/guide_rax.rst
new file mode 100644
index 0000000..439ba18
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_rax.rst
@@ -0,0 +1,809 @@
+Rackspace Cloud Guide
+=====================
+
+.. _rax_introduction:
+
+Introduction
+````````````
+
+.. note:: Rackspace functionality in Ansible is not maintained and users should consider the `OpenStack collection <https://galaxy.ansible.com/openstack/cloud>`_ instead.
+
+Ansible contains a number of core modules for interacting with Rackspace Cloud.
+
+The purpose of this section is to explain how to put Ansible modules together
+(and use inventory scripts) to use Ansible in a Rackspace Cloud context.
+
+Prerequisites for using the rax modules are minimal. In addition to Ansible itself,
+all of the modules require and are tested against pyrax 1.5 or higher.
+You'll need this Python module installed on the execution host.
+
+``pyrax`` is not currently available in many operating system
+package repositories, so you will likely need to install it through pip:
+
+.. code-block:: bash
+
+ $ pip install pyrax
+
+Ansible creates an implicit localhost that executes in the same context as the ``ansible-playbook`` command and the other CLI tools.
+If for any reason you need or want to have it in your inventory you should do something like the following:
+
+.. code-block:: ini
+
+ [localhost]
+ localhost ansible_connection=local ansible_python_interpreter=/usr/local/bin/python2
+
+For more information, see :ref:`Implicit Localhost <implicit_localhost>`.
+
+In playbook steps, we'll typically be using the following pattern:
+
+.. code-block:: yaml
+
+ - hosts: localhost
+ gather_facts: False
+ tasks:
+
+.. _credentials_file:
+
+Credentials File
+````````````````
+
+The ``rax.py`` inventory script and all ``rax`` modules support a standard ``pyrax`` credentials file that looks like:
+
+.. code-block:: ini
+
+ [rackspace_cloud]
+ username = myraxusername
+ api_key = d41d8cd98f00b204e9800998ecf8427e
+
+Setting the environment variable ``RAX_CREDS_FILE`` to the path of this file tells Ansible where to load
+this information from.
+
+More information about this credentials file can be found at
+https://github.com/pycontribs/pyrax/blob/master/docs/getting_started.md#authenticating
+
+
+.. _virtual_environment:
+
+Running from a Python Virtual Environment (Optional)
+++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to.
+
+There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python; this is done through the interpreter line in modules. When instructed by setting the inventory variable 'ansible_python_interpreter', Ansible will use this specified path instead to find Python. This can be a cause of confusion, as one may assume that modules running on 'localhost', or perhaps running through 'local_action', are using the virtualenv Python interpreter. By setting this variable in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:
+
+.. code-block:: ini
+
+ [localhost]
+ localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python
+
+.. note::
+
+ pyrax may be installed in the global Python package scope or in a virtual environment. There are no special considerations to keep in mind when installing pyrax.
+
+.. _provisioning:
+
+Provisioning
+````````````
+
+Now for the fun parts.
+
+The 'rax' module provides the ability to provision instances within Rackspace Cloud. Typically the provisioning task will be performed from your Ansible control server (in our example, localhost) against the Rackspace cloud API. This is done for several reasons:
+
+ - Avoiding installing the pyrax library on remote nodes
+ - No need to encrypt and distribute credentials to remote nodes
+ - Speed and simplicity
+
+.. note::
+
+ Authentication with the Rackspace-related modules is handled by either
+ specifying your username and API key as environment variables or passing
+ them as module arguments, or by specifying the location of a credentials
+ file.
+
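+For example, a minimal sketch that passes credentials as module arguments (the rax modules accept ``username`` and ``api_key`` directly; outside of quick tests, prefer a credentials file or environment variables):
+
+.. code-block:: yaml
+
+    - name: Provision an instance with inline credentials
+      rax:
+        username: myraxusername
+        api_key: d41d8cd98f00b204e9800998ecf8427e
+        name: awx
+        flavor: 4
+        image: ubuntu-1204-lts-precise-pangolin
+        wait: true
+      delegate_to: localhost
+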
+Here is a basic example of provisioning an instance in ad hoc mode:
+
+.. code-block:: bash
+
+ $ ansible localhost -m rax -a "name=awx flavor=4 image=ubuntu-1204-lts-precise-pangolin wait=yes"
+
+Here's what it would look like in a playbook, assuming the parameters were defined in variables:
+
+.. code-block:: yaml
+
+ tasks:
+ - name: Provision a set of instances
+ rax:
+ name: "{{ rax_name }}"
+ flavor: "{{ rax_flavor }}"
+ image: "{{ rax_image }}"
+ count: "{{ rax_count }}"
+ group: "{{ group }}"
+ wait: true
+ register: rax
+ delegate_to: localhost
+
+The rax module returns data about the nodes it creates, like IP addresses, hostnames, and login passwords. By registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the following example, the servers that were successfully created using the above task are dynamically added to a group called "raxhosts", with each node's hostname, IP address, and root password being added to the inventory.
+
+.. code-block:: yaml
+
+ - name: Add the instances we created (by public IP) to the group 'raxhosts'
+ add_host:
+ hostname: "{{ item.name }}"
+ ansible_host: "{{ item.rax_accessipv4 }}"
+ ansible_password: "{{ item.rax_adminpass }}"
+ groups: raxhosts
+ loop: "{{ rax.success }}"
+ when: rax.action == 'create'
+
+With the host group now created, the next play in this playbook could now configure servers belonging to the raxhosts group.
+
+.. code-block:: yaml
+
+ - name: Configuration play
+ hosts: raxhosts
+ user: root
+ roles:
+ - ntp
+ - webserver
+
+The method above ties the configuration of a host with the provisioning step. This isn't always what you want, and leads us
+to the next section.
+
+.. _host_inventory:
+
+Host Inventory
+``````````````
+
+Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the "rax" inventory script, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up cloud instances through other tools, including the Rackspace Cloud user interface. The inventory script can be used to group resources by metadata, region, OS, and so on. Utilizing metadata is highly recommended with "rax", and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.
+
+In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are executable (``chmod +x``), and the INI-based ones are not.
+
+.. _raxpy:
+
+rax.py
+++++++
+
+To use the Rackspace dynamic inventory script, copy ``rax.py`` into your inventory directory and make it executable. You can specify a credentials file for ``rax.py`` utilizing the ``RAX_CREDS_FILE`` environment variable.
+
+.. note:: Dynamic inventory scripts (like ``rax.py``) are saved in ``/usr/share/ansible/inventory`` if Ansible has been installed globally. If installed to a virtualenv, the inventory scripts are installed to ``$VIRTUALENV/share/inventory``.
+
+.. note:: Users of :ref:`ansible_platform` will note that dynamic inventory is natively supported by the controller in the platform; all you have to do is associate a group with your Rackspace Cloud credentials, and it will synchronize without going through these steps.
+
+For example, to run the ``setup`` module against every host the script discovers:
+
+.. code-block:: bash
+
+    $ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup
+
+``rax.py`` also accepts a ``RAX_REGION`` environment variable, which can contain an individual region, or a comma separated list of regions.
+
+When using ``rax.py``, you will not have a 'localhost' defined in the inventory.
+
+As mentioned previously, you will often be running most of these modules outside of the host loop, and will need 'localhost' defined. The recommended way to do this is to create an ``inventory`` directory, and place both the ``rax.py`` script and a file containing ``localhost`` in it.
+
+Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead
+of an individual file will cause Ansible to evaluate each file in that directory for inventory.
+
+Let's test our inventory script to see if it can talk to Rackspace Cloud.
+
+.. code-block:: bash
+
+ $ RAX_CREDS_FILE=~/.raxpub ansible all -i inventory/ -m setup
+
+Assuming things are properly configured, the ``rax.py`` inventory script will output information similar to the
+following, which will be utilized for inventory and variables.
+
+.. code-block:: json
+
+ {
+ "ORD": [
+ "test"
+ ],
+ "_meta": {
+ "hostvars": {
+ "test": {
+ "ansible_host": "198.51.100.1",
+ "rax_accessipv4": "198.51.100.1",
+ "rax_accessipv6": "2001:DB8::2342",
+ "rax_addresses": {
+ "private": [
+ {
+ "addr": "192.0.2.2",
+ "version": 4
+ }
+ ],
+ "public": [
+ {
+ "addr": "198.51.100.1",
+ "version": 4
+ },
+ {
+ "addr": "2001:DB8::2342",
+ "version": 6
+ }
+ ]
+ },
+ "rax_config_drive": "",
+ "rax_created": "2013-11-14T20:48:22Z",
+ "rax_flavor": {
+ "id": "performance1-1",
+ "links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
+ "rel": "bookmark"
+ }
+ ]
+ },
+ "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
+ "rax_human_id": "test",
+ "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rax_image": {
+ "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
+ "links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
+ "rel": "bookmark"
+ }
+ ]
+ },
+ "rax_key_name": null,
+ "rax_links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rel": "self"
+ },
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rel": "bookmark"
+ }
+ ],
+ "rax_metadata": {
+ "foo": "bar"
+ },
+ "rax_name": "test",
+ "rax_name_attr": "name",
+ "rax_networks": {
+ "private": [
+ "192.0.2.2"
+ ],
+ "public": [
+ "198.51.100.1",
+ "2001:DB8::2342"
+ ]
+ },
+ "rax_os-dcf_diskconfig": "AUTO",
+ "rax_os-ext-sts_power_state": 1,
+ "rax_os-ext-sts_task_state": null,
+ "rax_os-ext-sts_vm_state": "active",
+ "rax_progress": 100,
+ "rax_status": "ACTIVE",
+ "rax_tenant_id": "111111",
+ "rax_updated": "2013-11-14T20:49:27Z",
+ "rax_user_id": "22222"
+ }
+ }
+ }
+ }
+
+.. _standard_inventory:
+
+Standard Inventory
+++++++++++++++++++
+
+When utilizing a standard ini formatted inventory file (as opposed to the inventory plugin), it may still be advantageous to retrieve discoverable hostvar information from the Rackspace API.
+
+This can be achieved with the ``rax_facts`` module and an inventory file similar to the following:
+
+.. code-block:: ini
+
+ [test_servers]
+ hostname1 rax_region=ORD
+ hostname2 rax_region=ORD
+
+.. code-block:: yaml
+
+ - name: Gather info about servers
+ hosts: test_servers
+ gather_facts: False
+ tasks:
+ - name: Get facts about servers
+ rax_facts:
+ credentials: ~/.raxpub
+ name: "{{ inventory_hostname }}"
+ region: "{{ rax_region }}"
+ delegate_to: localhost
+ - name: Map some facts
+ set_fact:
+ ansible_host: "{{ rax_accessipv4 }}"
+
+While you don't need to know how it works, it may be interesting to know what kind of variables are returned.
+
+The ``rax_facts`` module provides the following facts, which match the ``rax.py`` inventory script:
+
+.. code-block:: json
+
+ {
+ "ansible_facts": {
+ "rax_accessipv4": "198.51.100.1",
+ "rax_accessipv6": "2001:DB8::2342",
+ "rax_addresses": {
+ "private": [
+ {
+ "addr": "192.0.2.2",
+ "version": 4
+ }
+ ],
+ "public": [
+ {
+ "addr": "198.51.100.1",
+ "version": 4
+ },
+ {
+ "addr": "2001:DB8::2342",
+ "version": 6
+ }
+ ]
+ },
+ "rax_config_drive": "",
+ "rax_created": "2013-11-14T20:48:22Z",
+ "rax_flavor": {
+ "id": "performance1-1",
+ "links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
+ "rel": "bookmark"
+ }
+ ]
+ },
+ "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
+ "rax_human_id": "test",
+ "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rax_image": {
+ "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
+ "links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
+ "rel": "bookmark"
+ }
+ ]
+ },
+ "rax_key_name": null,
+ "rax_links": [
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rel": "self"
+ },
+ {
+ "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
+ "rel": "bookmark"
+ }
+ ],
+ "rax_metadata": {
+ "foo": "bar"
+ },
+ "rax_name": "test",
+ "rax_name_attr": "name",
+ "rax_networks": {
+ "private": [
+ "192.0.2.2"
+ ],
+ "public": [
+ "198.51.100.1",
+ "2001:DB8::2342"
+ ]
+ },
+ "rax_os-dcf_diskconfig": "AUTO",
+ "rax_os-ext-sts_power_state": 1,
+ "rax_os-ext-sts_task_state": null,
+ "rax_os-ext-sts_vm_state": "active",
+ "rax_progress": 100,
+ "rax_status": "ACTIVE",
+ "rax_tenant_id": "111111",
+ "rax_updated": "2013-11-14T20:49:27Z",
+ "rax_user_id": "22222"
+ },
+ "changed": false
+ }
+
+
+Use Cases
+`````````
+
+This section covers some additional usage examples built around a specific use case.
+
+.. _network_and_server:
+
+Network and Server
+++++++++++++++++++
+
+Create an isolated cloud network and build servers on it:
+
+.. code-block:: yaml
+
+ - name: Build Servers on an Isolated Network
+ hosts: localhost
+ gather_facts: False
+ tasks:
+ - name: Network create request
+ rax_network:
+ credentials: ~/.raxpub
+ label: my-net
+ cidr: 192.168.3.0/24
+ region: IAD
+ state: present
+ delegate_to: localhost
+
+ - name: Server create request
+ rax:
+ credentials: ~/.raxpub
+ name: web%04d.example.org
+ flavor: 2
+ image: ubuntu-1204-lts-precise-pangolin
+ disk_config: manual
+ networks:
+ - public
+ - my-net
+ region: IAD
+ state: present
+ count: 5
+ exact_count: true
+ group: web
+ wait: true
+ wait_timeout: 360
+ register: rax
+ delegate_to: localhost
+
+.. _complete_environment:
+
+Complete Environment
+++++++++++++++++++++
+
+Build a complete webserver environment with servers, custom networks, and load balancers; install nginx; and create a custom index.html:
+
+.. code-block:: yaml
+
+ ---
+ - name: Build environment
+ hosts: localhost
+ gather_facts: False
+ tasks:
+ - name: Load Balancer create request
+ rax_clb:
+ credentials: ~/.raxpub
+ name: my-lb
+ port: 80
+ protocol: HTTP
+ algorithm: ROUND_ROBIN
+ type: PUBLIC
+ timeout: 30
+ region: IAD
+ wait: true
+ state: present
+ meta:
+ app: my-cool-app
+ register: clb
+
+ - name: Network create request
+ rax_network:
+ credentials: ~/.raxpub
+ label: my-net
+ cidr: 192.168.3.0/24
+ state: present
+ region: IAD
+ register: network
+
+ - name: Server create request
+ rax:
+ credentials: ~/.raxpub
+ name: web%04d.example.org
+ flavor: performance1-1
+ image: ubuntu-1204-lts-precise-pangolin
+ disk_config: manual
+ networks:
+ - public
+ - private
+ - my-net
+ region: IAD
+ state: present
+ count: 5
+ exact_count: true
+ group: web
+ wait: true
+ register: rax
+
+ - name: Add servers to web host group
+ add_host:
+ hostname: "{{ item.name }}"
+ ansible_host: "{{ item.rax_accessipv4 }}"
+ ansible_password: "{{ item.rax_adminpass }}"
+ ansible_user: root
+ groups: web
+ loop: "{{ rax.success }}"
+ when: rax.action == 'create'
+
+ - name: Add servers to Load balancer
+ rax_clb_nodes:
+ credentials: ~/.raxpub
+ load_balancer_id: "{{ clb.balancer.id }}"
+ address: "{{ item.rax_networks.private|first }}"
+ port: 80
+ condition: enabled
+ type: primary
+ wait: true
+ region: IAD
+ loop: "{{ rax.success }}"
+ when: rax.action == 'create'
+
+ - name: Configure servers
+ hosts: web
+ handlers:
+ - name: restart nginx
+ service: name=nginx state=restarted
+
+ tasks:
+ - name: Install nginx
+ apt: pkg=nginx state=latest update_cache=yes cache_valid_time=86400
+ notify:
+ - restart nginx
+
+ - name: Ensure nginx starts on boot
+ service: name=nginx state=started enabled=yes
+
+ - name: Create custom index.html
+ copy: content="{{ inventory_hostname }}" dest=/usr/share/nginx/www/index.html
+ owner=root group=root mode=0644
+
+.. _rackconnect_and_manged_cloud:
+
+RackConnect and Managed Cloud
++++++++++++++++++++++++++++++
+
+When using RackConnect version 2 or Rackspace Managed Cloud there are Rackspace automation tasks that are executed on the servers you create after they are successfully built. If your automation executes before the RackConnect or Managed Cloud automation, you can cause failures and unusable servers.
+
+These examples show how to create servers and ensure that the Rackspace automation has completed before Ansible continues.
+
+For simplicity, these examples are joined; however, both portions are only needed when using RackConnect. When only using Managed Cloud, the RackConnect portion can be ignored.
+
+The RackConnect portions only apply to RackConnect version 2.
+
+.. _using_a_control_machine:
+
+Using a Control Machine
+***********************
+
+.. code-block:: yaml
+
+ - name: Create an exact count of servers
+ hosts: localhost
+ gather_facts: False
+ tasks:
+ - name: Server build requests
+ rax:
+ credentials: ~/.raxpub
+ name: web%03d.example.org
+ flavor: performance1-1
+ image: ubuntu-1204-lts-precise-pangolin
+ disk_config: manual
+ region: DFW
+ state: present
+ count: 1
+ exact_count: true
+ group: web
+ wait: true
+ register: rax
+
+ - name: Add servers to in memory groups
+ add_host:
+ hostname: "{{ item.name }}"
+ ansible_host: "{{ item.rax_accessipv4 }}"
+ ansible_password: "{{ item.rax_adminpass }}"
+ ansible_user: root
+ rax_id: "{{ item.rax_id }}"
+ groups: web,new_web
+ loop: "{{ rax.success }}"
+ when: rax.action == 'create'
+
+ - name: Wait for rackconnect and managed cloud automation to complete
+ hosts: new_web
+ gather_facts: false
+ tasks:
+ - name: ensure we run all tasks from localhost
+ delegate_to: localhost
+ block:
+      - name: Wait for rackconnect automation to complete
+ rax_facts:
+ credentials: ~/.raxpub
+ id: "{{ rax_id }}"
+ region: DFW
+ register: rax_facts
+ until: rax_facts.ansible_facts['rax_metadata']['rackconnect_automation_status']|default('') == 'DEPLOYED'
+ retries: 30
+ delay: 10
+
+ - name: Wait for managed cloud automation to complete
+ rax_facts:
+ credentials: ~/.raxpub
+ id: "{{ rax_id }}"
+ region: DFW
+ register: rax_facts
+ until: rax_facts.ansible_facts['rax_metadata']['rax_service_level_automation']|default('') == 'Complete'
+ retries: 30
+ delay: 10
+
+ - name: Update new_web hosts with IP that RackConnect assigns
+ hosts: new_web
+ gather_facts: false
+ tasks:
+ - name: Get facts about servers
+ rax_facts:
+ name: "{{ inventory_hostname }}"
+ region: DFW
+ delegate_to: localhost
+ - name: Map some facts
+ set_fact:
+ ansible_host: "{{ rax_accessipv4 }}"
+
+ - name: Base Configure Servers
+ hosts: web
+ roles:
+ - role: users
+
+ - role: openssh
+ opensshd_PermitRootLogin: "no"
+
+ - role: ntp
+
+.. _using_ansible_pull:
+
+Using Ansible Pull
+******************
+
+.. code-block:: yaml
+
+ ---
+ - name: Ensure Rackconnect and Managed Cloud Automation is complete
+ hosts: all
+ tasks:
+ - name: ensure we run all tasks from localhost
+ delegate_to: localhost
+ block:
+ - name: Check for completed bootstrap
+ stat:
+ path: /etc/bootstrap_complete
+ register: bootstrap
+
+ - name: Get region
+ command: xenstore-read vm-data/provider_data/region
+ register: rax_region
+ when: bootstrap.stat.exists != True
+
+ - name: Wait for rackconnect automation to complete
+ uri:
+ url: "https://{{ rax_region.stdout|trim }}.api.rackconnect.rackspace.com/v1/automation_status?format=json"
+ return_content: true
+ register: automation_status
+ when: bootstrap.stat.exists != True
+ until: automation_status['automation_status']|default('') == 'DEPLOYED'
+ retries: 30
+ delay: 10
+
+ - name: Wait for managed cloud automation to complete
+ wait_for:
+ path: /tmp/rs_managed_cloud_automation_complete
+ delay: 10
+ when: bootstrap.stat.exists != True
+
+ - name: Set bootstrap completed
+ file:
+ path: /etc/bootstrap_complete
+ state: touch
+ owner: root
+ group: root
+ mode: 0400
+
+ - name: Base Configure Servers
+ hosts: all
+ roles:
+ - role: users
+
+ - role: openssh
+ opensshd_PermitRootLogin: "no"
+
+ - role: ntp
+
+.. _using_ansible_pull_with_xenstore:
+
+Using Ansible Pull with XenStore
+********************************
+
+.. code-block:: yaml
+
+ ---
+ - name: Ensure Rackconnect and Managed Cloud Automation is complete
+ hosts: all
+ tasks:
+ - name: Check for completed bootstrap
+ stat:
+ path: /etc/bootstrap_complete
+ register: bootstrap
+
+ - name: Wait for rackconnect_automation_status xenstore key to exist
+ command: xenstore-exists vm-data/user-metadata/rackconnect_automation_status
+ register: rcas_exists
+ when: bootstrap.stat.exists != True
+ failed_when: rcas_exists.rc|int > 1
+ until: rcas_exists.rc|int == 0
+ retries: 30
+ delay: 10
+
+ - name: Wait for rackconnect automation to complete
+ command: xenstore-read vm-data/user-metadata/rackconnect_automation_status
+ register: rcas
+ when: bootstrap.stat.exists != True
+ until: rcas.stdout|replace('"', '') == 'DEPLOYED'
+ retries: 30
+ delay: 10
+
+ - name: Wait for rax_service_level_automation xenstore key to exist
+ command: xenstore-exists vm-data/user-metadata/rax_service_level_automation
+ register: rsla_exists
+ when: bootstrap.stat.exists != True
+ failed_when: rsla_exists.rc|int > 1
+ until: rsla_exists.rc|int == 0
+ retries: 30
+ delay: 10
+
+      - name: Wait for managed cloud automation to complete
+        command: xenstore-read vm-data/user-metadata/rax_service_level_automation
+        register: rsla
+        when: bootstrap.stat.exists != True
+        until: rsla.stdout|replace('"', '') == 'Complete'
+ retries: 30
+ delay: 10
+
+ - name: Set bootstrap completed
+ file:
+ path: /etc/bootstrap_complete
+ state: touch
+ owner: root
+ group: root
+ mode: 0400
+
+ - name: Base Configure Servers
+ hosts: all
+ roles:
+ - role: users
+
+ - role: openssh
+ opensshd_PermitRootLogin: "no"
+
+ - role: ntp
+
+.. _advanced_usage:
+
+Advanced Usage
+``````````````
+
+.. _awx_autoscale:
+
+Autoscaling with AWX or Red Hat Ansible Automation Platform
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+The GUI component of :ref:`Red Hat Ansible Automation Platform <ansible_tower>` also contains a feature for auto-scaling use cases. In this mode, a simple curl script can call
+a defined URL and the server will "dial out" to the requester and configure an instance that is spinning up. This can be a great way
+to reconfigure ephemeral nodes. See `the documentation on provisioning callbacks <https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#provisioning-callbacks>`_ for more details.
+
+A benefit of using the callback approach over pull mode is that job results are still centrally recorded
+and less information has to be shared with remote hosts.
+
+.. _pending_information:
+
+Orchestration in the Rackspace Cloud
+++++++++++++++++++++++++++++++++++++
+
+Ansible is a powerful orchestration tool, and the rax modules let you orchestrate complex tasks, deployments, and configurations. The key here is to automate the provisioning of infrastructure, like any other piece of software in an environment. Complex deployments might have previously required manual manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additional nodes contingent on the current number of running nodes, or the configuration of a clustered application dependent on the number of nodes with common metadata. One could automate the following scenarios, for example:
+
+* Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool (a rolling-update sketch follows this list)
+* Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed
+* A procedure where app log files are uploaded to a central location, like Cloud Files, before a node is decommissioned
+* Servers and load balancers that have DNS records created and destroyed on creation and decommissioning, respectively
+
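+A hedged sketch of the first scenario follows. It assumes the load balancer ID is available in a variable named ``clb_id`` and that, as in the earlier examples, each node was registered in the pool by its first private address on port 80; a real playbook may need to look up node IDs instead, so treat this as an outline rather than a definitive implementation:
+
+.. code-block:: yaml
+
+    - name: Rolling update behind a Cloud Load Balancer
+      hosts: web
+      serial: 1
+      tasks:
+        - name: Drain this node from the load balancer
+          rax_clb_nodes:
+            credentials: ~/.raxpub
+            load_balancer_id: "{{ clb_id }}"
+            address: "{{ rax_networks.private | first }}"
+            port: 80
+            condition: disabled
+            wait: true
+            region: IAD
+          delegate_to: localhost
+
+        # ... update and verify the node here ...
+
+        - name: Return the node to the pool
+          rax_clb_nodes:
+            credentials: ~/.raxpub
+            load_balancer_id: "{{ clb_id }}"
+            address: "{{ rax_networks.private | first }}"
+            port: 80
+            condition: enabled
+            wait: true
+            region: IAD
+          delegate_to: localhost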
+
+
+
diff --git a/docs/docsite/rst/scenario_guides/guide_scaleway.rst b/docs/docsite/rst/scenario_guides/guide_scaleway.rst
new file mode 100644
index 0000000..0baf58a
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_scaleway.rst
@@ -0,0 +1,293 @@
+.. _guide_scaleway:
+
+**************
+Scaleway Guide
+**************
+
+.. _scaleway_introduction:
+
+Introduction
+============
+
+`Scaleway <https://scaleway.com>`_ is a cloud provider supported by Ansible (version 2.6 or higher) through a dynamic inventory plugin and modules.
+Those modules are:
+
+- :ref:`scaleway_sshkey_module`: adds a public SSH key from a file or value to the Scaleway infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized_keys.
+- :ref:`scaleway_compute_module`: manages servers on Scaleway. You can use this module to create, restart and delete servers.
+- :ref:`scaleway_volume_module`: manages volumes on Scaleway.
+
+.. note::
+ This guide assumes you are familiar with Ansible and how it works.
+ If you're not, have a look at :ref:`ansible_documentation` before getting started.
+
+.. _scaleway_requirements:
+
+Requirements
+============
+
+The Scaleway modules and inventory plugin connect to the `Scaleway REST API <https://developer.scaleway.com>`_.
+To use the modules and inventory plugin you'll need a Scaleway API token.
+You can generate an API token through the Scaleway console `here <https://cloud.scaleway.com/#/credentials>`__.
+The simplest way to authenticate yourself is to set the Scaleway API token in an environment variable:
+
+.. code-block:: bash
+
+ $ export SCW_TOKEN=00000000-1111-2222-3333-444444444444
+
+If you're not comfortable exporting your API token, you can pass it as a parameter to the modules using the ``api_token`` argument.
+
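+For example, a minimal sketch that passes the token explicitly (assuming it is stored in a variable named ``scw_api_token``, for example through Ansible Vault):
+
+.. code-block:: yaml
+
+    - name: Add an SSH key, passing the token as a module argument
+      scaleway_sshkey:
+        api_token: "{{ scw_api_token }}"
+        ssh_pub_key: "ssh-rsa AAAA..."
+        state: present
+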
+If you want to use a new SSH key pair in this tutorial, you can generate it to ``./id_rsa`` and ``./id_rsa.pub`` as:
+
+.. code-block:: bash
+
+ $ ssh-keygen -t rsa -f ./id_rsa
+
+If you want to use an existing key pair, just copy the private and public key over to the playbook directory.
+
+.. _scaleway_add_sshkey:
+
+How to add an SSH key?
+======================
+
+Connections to Scaleway Compute nodes use Secure Shell (SSH).
+SSH keys are stored at the account level, which means that you can re-use the same SSH key in multiple nodes.
+The first step to configure Scaleway compute resources is to have at least one SSH key configured.
+
+:ref:`scaleway_sshkey_module` is a module that manages SSH keys on your Scaleway account.
+You can add an SSH key to your account by including the following task in a playbook:
+
+.. code-block:: yaml
+
+ - name: "Add SSH key"
+ scaleway_sshkey:
+ ssh_pub_key: "ssh-rsa AAAA..."
+ state: "present"
+
+The ``ssh_pub_key`` parameter contains your ssh public key as a string. Here is an example inside a playbook:
+
+
+.. code-block:: yaml
+
+ - name: Test SSH key lifecycle on a Scaleway account
+ hosts: localhost
+ gather_facts: false
+ environment:
+ SCW_API_KEY: ""
+
+ tasks:
+
+ - scaleway_sshkey:
+ ssh_pub_key: "ssh-rsa AAAAB...424242 developer@example.com"
+ state: present
+ register: result
+
+ - assert:
+ that:
+ - result is success and result is changed
+
+.. _scaleway_create_instance:
+
+How to create a compute instance?
+=================================
+
+Now that we have an SSH key configured, the next step is to spin up a server!
+:ref:`scaleway_compute_module` is a module that can create, update and delete Scaleway compute instances:
+
+.. code-block:: yaml
+
+ - name: Create a server
+ scaleway_compute:
+ name: foobar
+ state: present
+ image: 00000000-1111-2222-3333-444444444444
+ organization: 00000000-1111-2222-3333-444444444444
+ region: ams1
+ commercial_type: START1-S
+
+Here are the parameter details for the example shown above:
+
+- ``name`` is the name of the instance (the one that will show up in your web console).
+- ``image`` is the UUID of the system image you would like to use.
+ A list of all images is available for each availability zone.
+- ``organization`` represents the organization that your account is attached to.
+- ``region`` represents the Availability Zone in which your instance is located (in this example, ``par1`` or ``ams1``).
+- ``commercial_type`` represents the name of the commercial offers.
+ You can check out the Scaleway pricing page to find which instance is right for you.
+
+Take a look at this short playbook to see a working example using ``scaleway_compute``:
+
+.. code-block:: yaml
+
+ - name: Test compute instance lifecycle on a Scaleway account
+ hosts: localhost
+ gather_facts: false
+ environment:
+ SCW_API_KEY: ""
+
+ tasks:
+
+ - name: Create a server
+ register: server_creation_task
+ scaleway_compute:
+ name: foobar
+ state: present
+ image: 00000000-1111-2222-3333-444444444444
+ organization: 00000000-1111-2222-3333-444444444444
+ region: ams1
+ commercial_type: START1-S
+ wait: true
+
+ - debug: var=server_creation_task
+
+ - assert:
+ that:
+ - server_creation_task is success
+ - server_creation_task is changed
+
+ - name: Run it
+ scaleway_compute:
+ name: foobar
+ state: running
+ image: 00000000-1111-2222-3333-444444444444
+ organization: 00000000-1111-2222-3333-444444444444
+ region: ams1
+ commercial_type: START1-S
+ wait: true
+ tags:
+ - web_server
+ register: server_run_task
+
+ - debug: var=server_run_task
+
+ - assert:
+ that:
+ - server_run_task is success
+ - server_run_task is changed
+
+.. _scaleway_dynamic_inventory_tutorial:
+
+Dynamic Inventory Script
+========================
+
+Ansible ships with :ref:`scaleway_inventory`.
+You can now get a complete inventory of your Scaleway resources through this plugin and filter it on
+different parameters (``regions`` and ``tags`` are currently supported).
+
+Let's create an example!
+Suppose that we want to get all hosts that have the tag web_server.
+Create a file named ``scaleway_inventory.yml`` with the following content:
+
+.. code-block:: yaml
+
+ plugin: scaleway
+ regions:
+ - ams1
+ - par1
+ tags:
+ - web_server
+
+This inventory means that we want all hosts that have the tag ``web_server`` in the zones ``ams1`` and ``par1``.
+Once you have configured this file, you can get the information using the following command:
+
+.. code-block:: bash
+
+ $ ansible-inventory --list -i scaleway_inventory.yml
+
+The output will be:
+
+.. code-block:: json
+
+ {
+ "_meta": {
+ "hostvars": {
+ "dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d": {
+ "ansible_verbosity": 6,
+ "arch": "x86_64",
+ "commercial_type": "START1-S",
+ "hostname": "foobar",
+ "ipv4": "192.0.2.1",
+ "organization": "00000000-1111-2222-3333-444444444444",
+ "state": "running",
+ "tags": [
+ "web_server"
+ ]
+ }
+ }
+ },
+ "all": {
+ "children": [
+ "ams1",
+ "par1",
+ "ungrouped",
+ "web_server"
+ ]
+ },
+ "ams1": {},
+ "par1": {
+ "hosts": [
+ "dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d"
+ ]
+ },
+ "ungrouped": {},
+ "web_server": {
+ "hosts": [
+ "dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d"
+ ]
+ }
+ }
+
+As you can see, we get different groups of hosts.
+``par1`` and ``ams1`` are groups based on location.
+``web_server`` is a group based on a tag.
+
+If a filter parameter is not defined, the plugin assumes all possible values are wanted.
+This means that a group is created for each tag that exists on your Scaleway compute nodes.
+
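+You can then target these groups in a play like any other inventory group. A minimal sketch:
+
+.. code-block:: yaml
+
+    - name: Act on every host tagged web_server
+      hosts: web_server
+      gather_facts: false
+      tasks:
+        - debug:
+            var: inventory_hostname
+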
+Scaleway S3 object storage
+==========================
+
+`Object Storage <https://www.scaleway.com/object-storage>`_ allows you to store any kind of objects (documents, images, videos, and so on).
+As the Scaleway API is S3 compatible, Ansible supports it natively through the modules: :ref:`s3_bucket_module`, :ref:`aws_s3_module`.
+
+You can find many examples in the `scaleway_s3 integration tests <https://github.com/ansible/ansible-legacy-tests/tree/devel/test/legacy/roles/scaleway_s3>`_.
+
+.. code-block:: yaml+jinja
+
+ - hosts: myserver
+ vars:
+ scaleway_region: nl-ams
+ s3_url: https://s3.nl-ams.scw.cloud
+ environment:
+ # AWS_ACCESS_KEY matches your scaleway organization id available at https://cloud.scaleway.com/#/account
+ AWS_ACCESS_KEY: 00000000-1111-2222-3333-444444444444
+ # AWS_SECRET_KEY matches a secret token that you can retrieve at https://cloud.scaleway.com/#/credentials
+ AWS_SECRET_KEY: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
+ module_defaults:
+ group/aws:
+ s3_url: '{{ s3_url }}'
+ region: '{{ scaleway_region }}'
+ tasks:
+      # use a fact instead of a variable, otherwise the template is evaluated each time the variable is used
+ - set_fact:
+ bucket_name: "{{ 99999999 | random | to_uuid }}"
+
+ # "requester_pays:" is mandatory because Scaleway doesn't implement related API
+ # another way is to use aws_s3 and "mode: create" !
+ - s3_bucket:
+ name: '{{ bucket_name }}'
+ requester_pays:
+
+ - name: Another way to create the bucket
+ aws_s3:
+ bucket: '{{ bucket_name }}'
+ mode: create
+ encrypt: false
+ register: bucket_creation_check
+
+ - name: add something in the bucket
+ aws_s3:
+ mode: put
+ bucket: '{{ bucket_name }}'
+        src: /tmp/test.txt # the file must exist beforehand
+ object: test.txt
+ encrypt: false # server side encryption must be disabled
diff --git a/docs/docsite/rst/scenario_guides/guide_vagrant.rst b/docs/docsite/rst/scenario_guides/guide_vagrant.rst
new file mode 100644
index 0000000..f49477b
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_vagrant.rst
@@ -0,0 +1,136 @@
+Vagrant Guide
+=============
+
+.. _vagrant_intro:
+
+Introduction
+````````````
+
+`Vagrant <https://www.vagrantup.com/>`_ is a tool to manage virtual machine
+environments, and allows you to configure and use reproducible work
+environments on top of various virtualization and cloud platforms.
+It also has integration with Ansible as a provisioner for these virtual
+machines, and the two tools work together well.
+
+This guide will describe how to use Vagrant 1.7+ and Ansible together.
+
+If you're not familiar with Vagrant, you should visit `the documentation
+<https://www.vagrantup.com/docs/>`_.
+
+This guide assumes that you already have Ansible installed and working.
+Running from a Git checkout is fine. Follow the :ref:`installation_guide`
+guide for more information.
+
+.. _vagrant_setup:
+
+Vagrant Setup
+`````````````
+
+The first step once you've installed Vagrant is to create a ``Vagrantfile``
+and customize it to suit your needs. This is covered in detail in the Vagrant
+documentation, but here is a quick example that includes a section to use the
+Ansible provisioner to manage a single machine:
+
+.. code-block:: ruby
+
+ # This guide is optimized for Vagrant 1.8 and above.
+ # Older versions of Vagrant put less info in the inventory they generate.
+ Vagrant.require_version ">= 1.8.0"
+
+ Vagrant.configure(2) do |config|
+
+ config.vm.box = "ubuntu/bionic64"
+
+ config.vm.provision "ansible" do |ansible|
+ ansible.verbose = "v"
+ ansible.playbook = "playbook.yml"
+ end
+ end
+
+Notice the ``config.vm.provision`` section that refers to an Ansible playbook
+called ``playbook.yml`` in the same directory as the ``Vagrantfile``. Vagrant
+runs the provisioner once the virtual machine has booted and is ready for SSH
+access.
+
+There are a lot of Ansible options you can configure in your ``Vagrantfile``.
+Visit the `Ansible Provisioner documentation
+<https://www.vagrantup.com/docs/provisioning/ansible.html>`_ for more
+information.
+
+With the ``Vagrantfile`` in place, bring up the machine:
+
+.. code-block:: bash
+
+ $ vagrant up
+
+This will start the VM, and run the provisioning playbook (on the first VM
+startup).
+
+
+To re-run a playbook on an existing VM, just run:
+
+.. code-block:: bash
+
+ $ vagrant provision
+
+This will re-run the playbook against the existing VM.
+
+Note that having the ``ansible.verbose`` option enabled will instruct Vagrant
+to show the full ``ansible-playbook`` command used behind the scenes, as
+illustrated by this example:
+
+.. code-block:: bash
+
+ $ PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --limit="default" --inventory-file=/home/someone/coding-in-a-project/.vagrant/provisioners/ansible/inventory -v playbook.yml
+
+This information can be quite useful to debug integration issues and can also
+be used to manually execute Ansible from a shell, as explained in the next
+section.
+
+.. _running_ansible:
+
+Running Ansible Manually
+````````````````````````
+
+Sometimes you may want to run Ansible manually against the machines. This is
+faster than running ``vagrant provision`` and pretty easy to do.
+
+With our ``Vagrantfile`` example, Vagrant automatically creates an Ansible
+inventory file in ``.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory``.
+This inventory is configured according to the SSH tunnel that Vagrant
+automatically creates. A typical automatically-created inventory file for a
+single machine environment may look something like this:
+
+.. code-block:: none
+
+ # Generated by Vagrant
+
+ default ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='/home/someone/coding-in-a-project/.vagrant/machines/default/virtualbox/private_key'
+
+If you want to run Ansible manually, you will want to make sure to pass
+``ansible`` or ``ansible-playbook`` commands the correct arguments, at least
+for the *inventory*.
+
+.. code-block:: bash
+
+ $ ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml
+
+Advanced Usage
+```````````````
+
+The "Tips and Tricks" chapter of the `Ansible Provisioner documentation
+<https://www.vagrantup.com/docs/provisioning/ansible.html>`_ provides detailed information about more advanced Ansible features like:
+
+ - how to execute a playbook in parallel within a multi-machine environment
+ - how to integrate a local ``ansible.cfg`` configuration file
+
+.. seealso::
+
+ `Vagrant Home <https://www.vagrantup.com/>`_
+ The Vagrant homepage with downloads
+ `Vagrant Documentation <https://www.vagrantup.com/docs/>`_
+ Vagrant Documentation
+ `Ansible Provisioner <https://www.vagrantup.com/docs/provisioning/ansible.html>`_
+ The Vagrant documentation for the Ansible provisioner
+ `Vagrant Issue Tracker <https://github.com/hashicorp/vagrant/issues?q=is%3Aopen+is%3Aissue+label%3Aprovisioners%2Fansible>`_
+ The open issues for the Ansible provisioner in the Vagrant project
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
diff --git a/docs/docsite/rst/scenario_guides/guide_vmware_rest.rst b/docs/docsite/rst/scenario_guides/guide_vmware_rest.rst
new file mode 100644
index 0000000..e93e352
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_vmware_rest.rst
@@ -0,0 +1,20 @@
+.. _vmware_rest_scenarios:
+
+****************************
+VMware REST Scenarios
+****************************
+
+These scenarios teach you how to accomplish common VMware tasks using the REST API and the Ansible ``vmware.vmware_rest`` collection. To get started, please select the task you want to accomplish.
+
+.. toctree::
+ :maxdepth: 1
+
+ vmware_rest_scenarios/installation
+ vmware_rest_scenarios/authentication
+ vmware_rest_scenarios/collect_information
+ vmware_rest_scenarios/create_vm
+ vmware_rest_scenarios/vm_info
+ vmware_rest_scenarios/vm_hardware_tuning
+ vmware_rest_scenarios/run_a_vm
+ vmware_rest_scenarios/vm_tool_information
+ vmware_rest_scenarios/vm_tool_configuration
diff --git a/docs/docsite/rst/scenario_guides/guide_vultr.rst b/docs/docsite/rst/scenario_guides/guide_vultr.rst
new file mode 100644
index 0000000..a946c91
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guide_vultr.rst
@@ -0,0 +1,171 @@
+Vultr Guide
+===========
+
+Ansible offers a set of modules to interact with the `Vultr <https://www.vultr.com>`_ cloud platform.
+
+This set of modules forms a framework that allows you to easily manage and orchestrate your infrastructure on the Vultr cloud platform.
+
+
+Requirements
+------------
+
+There are no technical requirements beyond an already created Vultr account.
+
+
+Configuration
+-------------
+
+The Vultr modules offer a flexible way to handle configuration.
+
+Configuration is read in the following order:
+
+- Environment variables (for example ``VULTR_API_KEY``, ``VULTR_API_TIMEOUT``)
+- File specified by environment variable ``VULTR_API_CONFIG``
+- ``vultr.ini`` file located in current working directory
+- ``$HOME/.vultr.ini``
+
+
+INI files are structured this way:
+
+.. code-block:: ini
+
+ [default]
+ key = MY_API_KEY
+ timeout = 60
+
+ [personal_account]
+ key = MY_PERSONAL_ACCOUNT_API_KEY
+ timeout = 30
+
+
+If ``VULTR_API_ACCOUNT`` environment variable or ``api_account`` module parameter is not specified, modules will look for the section named "default".
+
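+For example, a minimal sketch that selects the ``personal_account`` section through the module parameter:
+
+.. code-block:: yaml
+
+    - name: Create a 10G volume using the personal account
+      vultr_block_storage:
+        api_account: personal_account
+        name: my_disk
+        size: 10
+        region: New Jersey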
+
+Authentication
+--------------
+
+Before using the Ansible modules to interact with Vultr, you need an API key.
+If you don't yet have one, log in to `Vultr <https://www.vultr.com>`_, go to Account, then API, and enable the API; the API key should then show up.
+
+Ensure you allow the usage of the API key from the proper IP addresses.
+
+Refer to the Configuration section to find out where to put this information.
+
+To check that everything is working properly, run the following command:
+
+.. code-block:: console
+
+ #> VULTR_API_KEY=XXX ansible -m vultr_account_info localhost
+ localhost | SUCCESS => {
+ "changed": false,
+ "vultr_account_info": {
+ "balance": -8.9,
+ "last_payment_amount": -10.0,
+ "last_payment_date": "2018-07-21 11:34:46",
+ "pending_charges": 6.0
+ },
+ "vultr_api": {
+ "api_account": "default",
+ "api_endpoint": "https://api.vultr.com",
+ "api_retries": 5,
+ "api_timeout": 60
+ }
+ }
+
+
+If similar output is displayed, everything is set up properly; otherwise, ensure that the proper ``VULTR_API_KEY`` has been specified and that the Access Controls on the Vultr > Account > API page are accurate.
+
+
+Usage
+-----
+
+Since `Vultr <https://www.vultr.com>`_ offers a public API, the modules that manage infrastructure on their platform execute on localhost. This translates to:
+
+.. code-block:: yaml
+
+    ---
+    - hosts: localhost
+      tasks:
+        - name: Create a 10G volume
+          vultr_block_storage:
+            name: my_disk
+            size: 10
+            region: New Jersey
+
+
+From that point on, your creativity is the only limit. Make sure to read the documentation of the `available modules <https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html#vultr>`_.
+
+
+Dynamic Inventory
+-----------------
+
+Ansible provides a dynamic inventory plugin for `Vultr <https://www.vultr.com>`_.
+The configuration process is exactly the same as for the modules.
+
+To use it, you first need to enable it by specifying the following in the ``ansible.cfg`` file:
+
+.. code-block:: ini
+
+ [inventory]
+ enable_plugins=vultr
+
+Then provide a configuration file to be used with the plugin. The minimal configuration file looks like this:
+
+.. code-block:: yaml
+
+ ---
+ plugin: vultr
+
+To list the available hosts, run:
+
+.. code-block:: console
+
+ #> ansible-inventory -i vultr.yml --list
+
+
+For example, this allows you to take action on nodes grouped by location or OS name:
+
+.. code-block:: yaml
+
+    ---
+    - hosts: Amsterdam
+      tasks:
+        - name: Rebooting the machine
+          shell: reboot
+          become: true
+
+
+Integration tests
+-----------------
+
+Ansible includes integration tests for all Vultr modules.
+
+These tests run against the public Vultr API, which is why they require a valid API key.
+
+Prepare the test setup:
+
+.. code-block:: shell
+
+ $ cd ansible # location of the Ansible source checkout
+ $ source ./hacking/env-setup
+
+Set the Vultr API key:
+
+.. code-block:: shell
+
+ $ cd test/integration
+ $ cp cloud-config-vultr.ini.template cloud-config-vultr.ini
+ $ vi cloud-config-vultr.ini
+
+Run all Vultr tests:
+
+.. code-block:: shell
+
+ $ ansible-test integration cloud/vultr/ -v --diff --allow-unsupported
+
+
+To run a specific test, for example ``vultr_account_info``:
+
+.. code-block:: shell
+
+ $ ansible-test integration cloud/vultr/vultr_account_info -v --diff --allow-unsupported
diff --git a/docs/docsite/rst/scenario_guides/guides.rst b/docs/docsite/rst/scenario_guides/guides.rst
new file mode 100644
index 0000000..8b6c54f
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/guides.rst
@@ -0,0 +1,44 @@
+:orphan:
+
+.. unified index page included for backwards compatibility
+
+******************
+Scenario Guides
+******************
+
+The guides in this section are migrating into collections. Remaining guides may be out of date.
+
+These guides cover integrating Ansible with a variety of platforms, products, and technologies. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+
+Please update your links for the following guides, which have already migrated:
+
+:ref:`ansible_collections.amazon.aws.docsite.aws_intro`
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Legacy Public Cloud Guides
+
+ guide_alicloud
+ guide_cloudstack
+ guide_gce
+ guide_azure
+ guide_online
+ guide_oracle
+ guide_packet
+ guide_rax
+ guide_scaleway
+ guide_vultr
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Network Technology Guides
+
+ guide_aci
+ guide_meraki
+ guide_infoblox
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Virtualization & Containerization Guides
+
+ guide_vagrant
diff --git a/docs/docsite/rst/scenario_guides/network_guides.rst b/docs/docsite/rst/scenario_guides/network_guides.rst
new file mode 100644
index 0000000..2b538ff
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/network_guides.rst
@@ -0,0 +1,16 @@
+.. _network_guides:
+
+*************************
+Network Technology Guides
+*************************
+
+The guides in this section cover using Ansible with specific network technologies. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+
+.. toctree::
+ :maxdepth: 1
+
+ guide_aci
+ guide_meraki
+ guide_infoblox
+
+To learn more about Network Automation with Ansible, see :ref:`network_getting_started` and :ref:`network_advanced`.
diff --git a/docs/docsite/rst/scenario_guides/scenario_template.rst b/docs/docsite/rst/scenario_guides/scenario_template.rst
new file mode 100644
index 0000000..14695be
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/scenario_template.rst
@@ -0,0 +1,53 @@
+:orphan:
+
+.. _scenario_template:
+
+*************************************
+Sample scenario for Ansible platforms
+*************************************
+
+*Use this RST file as a starting point to create a scenario guide for your platform. The sections below are suggestions on what should be in a scenario guide.*
+
+Introductory paragraph.
+
+.. contents::
+ :local:
+
+Prerequisites
+=============
+
+Describe the requirements and assumptions for this scenario. This should include applicable subsections for hardware, software, and any other caveats to using the scenarios in this guide.
+
+Credentials and authenticating
+==============================
+
+Describe credential requirements and how to authenticate to this platform.
+
+Using dynamic inventory
+=========================
+
+If applicable, describe how to use a dynamic inventory plugin for this platform.
+
+
+Example description
+===================
+
+Description and code here. Change the section header to something descriptive about this example, such as "Renaming a virtual machine". The goal is that this is the text someone would search for to find your example.
+
+
+Example output
+--------------
+
+What the user should expect to see.
+
+
+Troubleshooting
+---------------
+
+What to look for if it breaks.
+
+
+Conclusion and where to go next
+===============================
+
+Recap of important points. For more information please see: links.
diff --git a/docs/docsite/rst/scenario_guides/virt_guides.rst b/docs/docsite/rst/scenario_guides/virt_guides.rst
new file mode 100644
index 0000000..bc90078
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/virt_guides.rst
@@ -0,0 +1,14 @@
+.. _virtualization_guides:
+
+******************************************
+Virtualization and Containerization Guides
+******************************************
+
+The guides in this section cover integrating Ansible with popular tools for creating virtual machines and containers. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+
+.. toctree::
+ :maxdepth: 1
+
+ guide_docker
+ guide_vagrant
+ guide_vmware_rest
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/authentication.rst b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/authentication.rst
new file mode 100644
index 0000000..4f09cbe
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/authentication.rst
@@ -0,0 +1,47 @@
+.. _vmware_rest_authentication:
+
+*******************************************
+How to configure the vmware_rest collection
+*******************************************
+
+.. contents::
+ :local:
+
+
+Introduction
+============
+
+The modules of the ``vmware.vmware_rest`` collection need to authenticate against vCenter. There are two different ways to provide the credentials: through environment variables or through module parameters.
+
+Environment variables
+=====================
+
+.. note::
+ This solution requires that you call the modules from the local machine.
+
+You need to export some environment variables in your shell before you call the modules.
+
+.. code-block:: shell
+
+ $ export VMWARE_HOST=vcenter.test
+ $ export VMWARE_USER=myvcenter-user
+ $ export VMWARE_PASSWORD=mypassword
+ $ ansible-playbook my-playbook.yaml
+
+Module parameters
+=================
+
+All the modules of the collection accept the following arguments:
+
+- ``vcenter_hostname``
+- ``vcenter_username``
+- ``vcenter_password``
+
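+As a minimal sketch (the hostname and user below are placeholders), a task can pass the credentials explicitly:
+
+.. code-block:: yaml
+
+    - name: Collect the list of datacenters
+      vmware.vmware_rest.vcenter_datacenter_info:
+        vcenter_hostname: vcenter.test
+        vcenter_username: myvcenter-user
+        # read the secret from the environment rather than hardcoding it
+        vcenter_password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
+      register: my_datacenters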
+
+Ignoring SSL certificate errors
+===============================
+
+It's common to run a test environment without a proper SSL certificate configuration.
+
+To ignore SSL errors, you can use the ``vcenter_validate_certs: no`` module argument or set the ``VMWARE_VALIDATE_CERTS=no`` environment variable.
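+
+As an example (a lab-only sketch; credentials still come from the environment):
+
+.. code-block:: yaml
+
+    - name: Collect the list of datacenters, ignoring the SSL certificate
+      vmware.vmware_rest.vcenter_datacenter_info:
+        # acceptable for a test environment, not for production
+        vcenter_validate_certs: false
+      register: my_datacenters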
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/collect_information.rst b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/collect_information.rst
new file mode 100644
index 0000000..d6c2b86
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/collect_information.rst
@@ -0,0 +1,108 @@
+.. _vmware_rest_collect_info:
+
+*************************************************
+How to collect information about your environment
+*************************************************
+
+.. contents::
+ :local:
+
+
+Introduction
+============
+
+This section shows you how to use Ansible to collect information about your environment.
+This information is useful for the other tutorials.
+
+Scenario requirements
+=====================
+
+In this scenario, we have a vCenter server with an ESXi host.
+
+Our environment is pre-initialized with the following elements:
+
+- A datacenter called ``my_dc``
+- A cluster called ``my_cluster``
+- An ESXi host called ``esxi1`` in the cluster
+- Two datastores on the ESXi: ``rw_datastore`` and ``ro_datastore``
+- A dvswitch based guest network
+
+Finally, we use environment variables to authenticate, as explained in :ref:`vmware_rest_authentication`.
+
+How to collect information
+==========================
+
+In these examples, we use the ``vcenter_*_info`` modules to collect information about the associated resources.
+
+All these modules return a ``value`` key. Depending on the context, this ``value`` key will be either a list or a dictionary.
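+
+For instance, a follow-up task could extract the ID of the first element of a list result; ``my_datacenters`` is a hypothetical variable registered by ``vcenter_datacenter_info``:
+
+.. code-block:: yaml
+
+    - name: Keep the ID of the first datacenter
+      ansible.builtin.set_fact:
+        my_datacenter: "{{ my_datacenters.value[0].datacenter }}"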
+
+Datacenter
+----------
+
+Here we use the ``vcenter_datacenter_info`` module to list all the datacenters:
+
+.. literalinclude:: task_outputs/collect_a_list_of_the_datacenters.task.yaml
+
+Result
+______
+
+As expected, the ``value`` key of the output is a list.
+
+.. literalinclude:: task_outputs/collect_a_list_of_the_datacenters.result.json
+
+Cluster
+-------
+
+Here we do the same with ``vcenter_cluster_info``:
+
+.. literalinclude:: task_outputs/Build_a_list_of_all_the_clusters.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Build_a_list_of_all_the_clusters.result.json
+
+We can also fetch the details of a specific cluster with the ``cluster`` parameter:
+
+.. literalinclude:: task_outputs/Retrieve_details_about_the_first_cluster.task.yaml
+
+Result
+______
+
+This time, the ``value`` key of the output is a dictionary.
+
+
+.. literalinclude:: task_outputs/Retrieve_details_about_the_first_cluster.result.json
+
+Datastore
+---------
+
+Here we use ``vcenter_datastore_info`` to get a list of all the datastores:
+
+.. literalinclude:: task_outputs/Retrieve_a_list_of_all_the_datastores.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Retrieve_a_list_of_all_the_datastores.result.json
+
+Folder
+------
+
+Here again, we use the ``vcenter_folder_info`` module to retrieve a list of all the folders:
+
+.. literalinclude:: task_outputs/Build_a_list_of_all_the_folders.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Build_a_list_of_all_the_folders.result.json
+
+Most of the time, you will want just one type of folder. In this case, we can use filters to reduce the amount of data collected. Most of the ``_info`` modules come with similar filters:
+
+.. literalinclude:: task_outputs/Build_a_list_of_all_the_folders_with_the_type_VIRTUAL_MACHINE_and_called_vm.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Build_a_list_of_all_the_folders_with_the_type_VIRTUAL_MACHINE_and_called_vm.result.json
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/create_vm.rst b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/create_vm.rst
new file mode 100644
index 0000000..0e64bd0
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/create_vm.rst
@@ -0,0 +1,39 @@
+.. _vmware_rest_create_vm:
+
+*******************************
+How to create a Virtual Machine
+*******************************
+
+.. contents::
+ :local:
+
+
+Introduction
+============
+
+This section shows you how to use Ansible to create a virtual machine.
+
+Scenario requirements
+=====================
+
+You've already followed :ref:`vmware_rest_collect_info` and you have the following variables defined (one way to define ``my_datastore`` is sketched after the list):
+
+- ``my_cluster_info``
+- ``my_datastore``
+- ``my_virtual_machine_folder``
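+
+As a sketch (assuming ``my_datastores`` was registered in the previous guide with ``vcenter_datastore_info``), ``my_datastore`` could be derived like this:
+
+.. code-block:: yaml
+
+    - name: Select the rw_datastore entry from the datastore list
+      ansible.builtin.set_fact:
+        my_datastore: "{{ my_datastores.value | selectattr('name', 'equalto', 'rw_datastore') | first }}"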
+
+How to create a virtual machine
+===============================
+
+In this example, we will use the ``vcenter_vm`` module to create a new guest.
+
+.. literalinclude:: task_outputs/Create_a_VM.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Create_a_VM.result.json
+
+.. note::
+ ``vcenter_vm`` accepts more parameters; however, you may prefer to start with a simple VM and use the ``vcenter_vm_hardware`` modules to tune it up afterwards. This way, it is easier to identify a potentially problematic step.
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/installation.rst b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/installation.rst
new file mode 100644
index 0000000..7516d0f
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/installation.rst
@@ -0,0 +1,44 @@
+.. _vmware_rest_installation:
+
+*****************************************
+How to install the vmware_rest collection
+*****************************************
+
+.. contents::
+ :local:
+
+
+Requirements
+============
+
+The collection depends on:
+
+- Ansible 2.9.10 or greater
+- Python 3.6 or greater
+
+aiohttp
+=======
+
+aiohttp_ is the only dependency of the collection. You can install it with ``pip`` if you use a virtualenv to run Ansible.
+
+.. code-block:: shell
+
+ $ pip install aiohttp
+
+Or install it as an RPM package:
+
+.. code-block:: shell
+
+ $ sudo dnf install python3-aiohttp
+
+.. _aiohttp: https://docs.aiohttp.org/en/stable/
+
+Installation
+============
+
+The best option to install the collection is to use the ``ansible-galaxy`` command:
+
+.. code-block:: shell
+
+ $ ansible-galaxy collection install vmware.vmware_rest
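+
+You can then confirm that the collection is visible to Ansible; the ``collection list`` subcommand is available in recent Ansible releases:
+
+.. code-block:: shell
+
+ $ ansible-galaxy collection list vmware.vmware_rest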
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/run_a_vm.rst b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/run_a_vm.rst
new file mode 100644
index 0000000..be72386
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/run_a_vm.rst
@@ -0,0 +1,52 @@
+.. _vmware_rest_run_a_vm:
+
+****************************
+How to run a virtual machine
+****************************
+
+.. contents::
+ :local:
+
+
+Introduction
+============
+
+This section covers the power management of your virtual machine.
+
+Power information
+=================
+
+Use ``vcenter_vm_power_info`` to retrieve the power state of the VM:
+
+.. literalinclude:: task_outputs/Get_guest_power_information.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Get_guest_power_information.result.json
+
+
+How to start a virtual machine
+==============================
+
+Use the ``vcenter_vm_power`` module to start your VM:
+
+.. literalinclude:: task_outputs/Turn_the_power_of_the_VM_on.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Turn_the_power_of_the_VM_on.result.json
+
+How to wait until my virtual machine is ready
+=============================================
+
+If your virtual machine runs VMware Tools, you can build a loop
+around the ``vcenter_vm_tools_info`` module:
+
+.. literalinclude:: task_outputs/Wait_until_my_VM_is_ready.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Wait_until_my_VM_is_ready.result.json
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Add_a_floppy_disk_drive.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Add_a_floppy_disk_drive.result.json
new file mode 100644
index 0000000..c4bf5cd
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Add_a_floppy_disk_drive.result.json
@@ -0,0 +1,15 @@
+{
+ "value": {
+ "start_connected": false,
+ "backing": {
+ "auto_detect": true,
+ "type": "HOST_DEVICE",
+ "host_device": ""
+ },
+ "allow_guest_control": true,
+ "label": "Floppy drive 1",
+ "state": "NOT_CONNECTED"
+ },
+ "id": "8000",
+ "changed": true
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Add_a_floppy_disk_drive.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Add_a_floppy_disk_drive.task.yaml
new file mode 100644
index 0000000..d6bcbf8
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Add_a_floppy_disk_drive.task.yaml
@@ -0,0 +1,5 @@
+- name: Add a floppy disk drive
+  vmware.vmware_rest.vcenter_vm_hardware_floppy:
+    vm: '{{ test_vm1_info.id }}'
+    allow_guest_control: true
+  register: my_floppy_drive
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_a_VM_to_a_dvswitch.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_a_VM_to_a_dvswitch.result.json
new file mode 100644
index 0000000..fbb5a6f
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_a_VM_to_a_dvswitch.result.json
@@ -0,0 +1,23 @@
+{
+ "value": {
+ "start_connected": false,
+ "pci_slot_number": 4,
+ "backing": {
+ "connection_cookie": 2145337177,
+ "distributed_switch_uuid": "50 33 88 3a 8c 6e f9 02-7a fd c2 c0 2c cf f2 ac",
+ "distributed_port": "2",
+ "type": "DISTRIBUTED_PORTGROUP",
+ "network": "dvportgroup-1649"
+ },
+ "mac_address": "00:50:56:b3:49:5c",
+ "mac_type": "ASSIGNED",
+ "allow_guest_control": false,
+ "wake_on_lan_enabled": false,
+ "label": "Network adapter 1",
+ "state": "NOT_CONNECTED",
+ "type": "VMXNET3",
+ "upt_compatibility_enabled": false
+ },
+ "id": "4000",
+ "changed": true
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_a_VM_to_a_dvswitch.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_a_VM_to_a_dvswitch.task.yaml
new file mode 100644
index 0000000..48c759f
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_a_VM_to_a_dvswitch.task.yaml
@@ -0,0 +1,9 @@
+- name: Attach a VM to a dvswitch
+  vmware.vmware_rest.vcenter_vm_hardware_ethernet:
+    vm: '{{ test_vm1_info.id }}'
+    pci_slot_number: 4
+    backing:
+      type: DISTRIBUTED_PORTGROUP
+      network: "{{ my_portgroup_info.dvs_portgroup_info.dvswitch1[0].key }}"
+    start_connected: false
+  register: vm_hardware_ethernet_1
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_an_ISO_image_to_a_guest_VM.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_an_ISO_image_to_a_guest_VM.result.json
new file mode 100644
index 0000000..ee456cb
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_an_ISO_image_to_a_guest_VM.result.json
@@ -0,0 +1,19 @@
+{
+ "value": {
+ "start_connected": true,
+ "backing": {
+ "iso_file": "[ro_datastore] fedora.iso",
+ "type": "ISO_FILE"
+ },
+ "allow_guest_control": false,
+ "label": "CD/DVD drive 1",
+ "state": "NOT_CONNECTED",
+ "type": "SATA",
+ "sata": {
+ "bus": 0,
+ "unit": 2
+ }
+ },
+ "id": "16002",
+ "changed": true
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_an_ISO_image_to_a_guest_VM.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_an_ISO_image_to_a_guest_VM.task.yaml
new file mode 100644
index 0000000..29d7488
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Attach_an_ISO_image_to_a_guest_VM.task.yaml
@@ -0,0 +1,12 @@
+- name: Attach an ISO image to a guest VM
+  vmware.vmware_rest.vcenter_vm_hardware_cdrom:
+    vm: '{{ test_vm1_info.id }}'
+    type: SATA
+    sata:
+      bus: 0
+      unit: 2
+    start_connected: true
+    backing:
+      iso_file: '[ro_datastore] fedora.iso'
+      type: ISO_FILE
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_clusters.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_clusters.result.json
new file mode 100644
index 0000000..3415fae
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_clusters.result.json
@@ -0,0 +1,11 @@
+{
+ "value": [
+ {
+ "drs_enabled": false,
+ "cluster": "domain-c1636",
+ "name": "my_cluster",
+ "ha_enabled": false
+ }
+ ],
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_clusters.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_clusters.task.yaml
new file mode 100644
index 0000000..d819fa2
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_clusters.task.yaml
@@ -0,0 +1,3 @@
+- name: Build a list of all the clusters
+  vmware.vmware_rest.vcenter_cluster_info:
+  register: all_the_clusters
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders.result.json
new file mode 100644
index 0000000..516234d
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders.result.json
@@ -0,0 +1,10 @@
+{
+ "value": [
+ {
+ "folder": "group-d1",
+ "name": "Datacenters",
+ "type": "DATACENTER"
+ }
+ ],
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders.task.yaml
new file mode 100644
index 0000000..ea8d5ce
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders.task.yaml
@@ -0,0 +1,3 @@
+- name: Build a list of all the folders
+  vmware.vmware_rest.vcenter_folder_info:
+  register: my_folders
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders_with_the_type_VIRTUAL_MACHINE_and_called_vm.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders_with_the_type_VIRTUAL_MACHINE_and_called_vm.result.json
new file mode 100644
index 0000000..ecf53f7
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders_with_the_type_VIRTUAL_MACHINE_and_called_vm.result.json
@@ -0,0 +1,10 @@
+{
+ "value": [
+ {
+ "folder": "group-v1631",
+ "name": "vm",
+ "type": "VIRTUAL_MACHINE"
+ }
+ ],
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders_with_the_type_VIRTUAL_MACHINE_and_called_vm.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders_with_the_type_VIRTUAL_MACHINE_and_called_vm.task.yaml
new file mode 100644
index 0000000..d7e4746
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Build_a_list_of_all_the_folders_with_the_type_VIRTUAL_MACHINE_and_called_vm.task.yaml
@@ -0,0 +1,6 @@
+- name: Build a list of all the folders with the type VIRTUAL_MACHINE and called vm
+  vmware.vmware_rest.vcenter_folder_info:
+    filter_type: VIRTUAL_MACHINE
+    filter_names:
+      - vm
+  register: my_folders
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_MANUAL.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_MANUAL.result.json
new file mode 100644
index 0000000..e15f41c
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_MANUAL.result.json
@@ -0,0 +1,4 @@
+{
+ "id": null,
+ "changed": true
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_MANUAL.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_MANUAL.task.yaml
new file mode 100644
index 0000000..010dd41
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_MANUAL.task.yaml
@@ -0,0 +1,5 @@
+- name: Change vm-tools upgrade policy to MANUAL
+  vmware.vmware_rest.vcenter_vm_tools:
+    vm: '{{ test_vm1_info.id }}'
+    upgrade_policy: MANUAL
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_UPGRADE_AT_POWER_CYCLE.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_UPGRADE_AT_POWER_CYCLE.result.json
new file mode 100644
index 0000000..e15f41c
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_UPGRADE_AT_POWER_CYCLE.result.json
@@ -0,0 +1,4 @@
+{
+ "id": null,
+ "changed": true
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_UPGRADE_AT_POWER_CYCLE.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_UPGRADE_AT_POWER_CYCLE.task.yaml
new file mode 100644
index 0000000..c4e3891
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Change_vm-tools_upgrade_policy_to_UPGRADE_AT_POWER_CYCLE.task.yaml
@@ -0,0 +1,5 @@
+- name: Change vm-tools upgrade policy to UPGRADE_AT_POWER_CYCLE
+  vmware.vmware_rest.vcenter_vm_tools:
+    vm: '{{ test_vm1_info.id }}'
+    upgrade_policy: UPGRADE_AT_POWER_CYCLE
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_information_about_a_specific_VM.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_information_about_a_specific_VM.result.json
new file mode 100644
index 0000000..d0f17cb
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_information_about_a_specific_VM.result.json
@@ -0,0 +1,77 @@
+{
+ "value": {
+ "instant_clone_frozen": false,
+ "cdroms": [],
+ "memory": {
+ "size_MiB": 1024,
+ "hot_add_enabled": true
+ },
+ "disks": [
+ {
+ "value": {
+ "scsi": {
+ "bus": 0,
+ "unit": 0
+ },
+ "backing": {
+ "vmdk_file": "[local] test_vm1_8/test_vm1.vmdk",
+ "type": "VMDK_FILE"
+ },
+ "label": "Hard disk 1",
+ "type": "SCSI",
+ "capacity": 17179869184
+ },
+ "key": "2000"
+ }
+ ],
+ "parallel_ports": [],
+ "sata_adapters": [],
+ "cpu": {
+ "hot_remove_enabled": false,
+ "count": 1,
+ "hot_add_enabled": false,
+ "cores_per_socket": 1
+ },
+ "scsi_adapters": [
+ {
+ "value": {
+ "scsi": {
+ "bus": 0,
+ "unit": 7
+ },
+ "label": "SCSI controller 0",
+ "sharing": "NONE",
+ "type": "PVSCSI"
+ },
+ "key": "1000"
+ }
+ ],
+ "power_state": "POWERED_OFF",
+ "floppies": [],
+ "identity": {
+ "name": "test_vm1",
+ "instance_uuid": "5033c296-6954-64df-faca-d001de53763d",
+ "bios_uuid": "42330d17-e603-d925-fa4b-18827dbc1409"
+ },
+ "nvme_adapters": [],
+ "name": "test_vm1",
+ "nics": [],
+ "boot": {
+ "delay": 0,
+ "retry_delay": 10000,
+ "enter_setup_mode": false,
+ "type": "BIOS",
+ "retry": false
+ },
+ "serial_ports": [],
+ "boot_devices": [],
+ "guest_OS": "DEBIAN_8_64",
+ "hardware": {
+ "upgrade_policy": "NEVER",
+ "upgrade_status": "NONE",
+ "version": "VMX_11"
+ }
+ },
+ "id": "vm-1650",
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_information_about_a_specific_VM.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_information_about_a_specific_VM.task.yaml
new file mode 100644
index 0000000..ea00f7c
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_information_about_a_specific_VM.task.yaml
@@ -0,0 +1,4 @@
+- name: Collect information about a specific VM
+  vmware.vmware_rest.vcenter_vm_info:
+    vm: '{{ search_result.value[0].vm }}'
+  register: test_vm1_info
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_the_hardware_information.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_the_hardware_information.result.json
new file mode 100644
index 0000000..c2e162c
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_the_hardware_information.result.json
@@ -0,0 +1,8 @@
+{
+ "value": {
+ "upgrade_policy": "NEVER",
+ "upgrade_status": "NONE",
+ "version": "VMX_11"
+ },
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_the_hardware_information.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_the_hardware_information.task.yaml
new file mode 100644
index 0000000..bc9b18b
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Collect_the_hardware_information.task.yaml
@@ -0,0 +1,4 @@
+- name: Collect the hardware information
+  vmware.vmware_rest.vcenter_vm_hardware_info:
+    vm: '{{ search_result.value[0].vm }}'
+  register: my_vm1_hardware_info
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_SATA_adapter_at_PCI_slot_34.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_SATA_adapter_at_PCI_slot_34.result.json
new file mode 100644
index 0000000..62ae281
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_SATA_adapter_at_PCI_slot_34.result.json
@@ -0,0 +1,10 @@
+{
+ "value": {
+ "bus": 0,
+ "pci_slot_number": 34,
+ "label": "SATA controller 0",
+ "type": "AHCI"
+ },
+ "id": "15000",
+ "changed": true
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_SATA_adapter_at_PCI_slot_34.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_SATA_adapter_at_PCI_slot_34.task.yaml
new file mode 100644
index 0000000..70e564f
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_SATA_adapter_at_PCI_slot_34.task.yaml
@@ -0,0 +1,5 @@
+- name: Create a SATA adapter at PCI slot 34
+  vmware.vmware_rest.vcenter_vm_hardware_adapter_sata:
+    vm: '{{ test_vm1_info.id }}'
+    pci_slot_number: 34
+  register: _sata_adapter_result_1
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_VM.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_VM.result.json
new file mode 100644
index 0000000..d309d07
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_VM.result.json
@@ -0,0 +1,77 @@
+{
+ "value": {
+ "instant_clone_frozen": false,
+ "cdroms": [],
+ "memory": {
+ "size_MiB": 1024,
+ "hot_add_enabled": true
+ },
+ "disks": [
+ {
+ "value": {
+ "scsi": {
+ "bus": 0,
+ "unit": 0
+ },
+ "backing": {
+ "vmdk_file": "[local] test_vm1_8/test_vm1.vmdk",
+ "type": "VMDK_FILE"
+ },
+ "label": "Hard disk 1",
+ "type": "SCSI",
+ "capacity": 17179869184
+ },
+ "key": "2000"
+ }
+ ],
+ "parallel_ports": [],
+ "sata_adapters": [],
+ "cpu": {
+ "hot_remove_enabled": false,
+ "count": 1,
+ "hot_add_enabled": false,
+ "cores_per_socket": 1
+ },
+ "scsi_adapters": [
+ {
+ "value": {
+ "scsi": {
+ "bus": 0,
+ "unit": 7
+ },
+ "label": "SCSI controller 0",
+ "sharing": "NONE",
+ "type": "PVSCSI"
+ },
+ "key": "1000"
+ }
+ ],
+ "power_state": "POWERED_OFF",
+ "floppies": [],
+ "identity": {
+ "name": "test_vm1",
+ "instance_uuid": "5033c296-6954-64df-faca-d001de53763d",
+ "bios_uuid": "42330d17-e603-d925-fa4b-18827dbc1409"
+ },
+ "nvme_adapters": [],
+ "name": "test_vm1",
+ "nics": [],
+ "boot": {
+ "delay": 0,
+ "retry_delay": 10000,
+ "enter_setup_mode": false,
+ "type": "BIOS",
+ "retry": false
+ },
+ "serial_ports": [],
+ "boot_devices": [],
+ "guest_OS": "DEBIAN_8_64",
+ "hardware": {
+ "upgrade_policy": "NEVER",
+ "upgrade_status": "NONE",
+ "version": "VMX_11"
+ }
+ },
+ "id": "vm-1650",
+ "changed": true
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_VM.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_VM.task.yaml
new file mode 100644
index 0000000..7880b5b
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_VM.task.yaml
@@ -0,0 +1,14 @@
+- name: Create a VM
+  vmware.vmware_rest.vcenter_vm:
+    placement:
+      cluster: "{{ my_cluster_info.id }}"
+      datastore: "{{ my_datastore.datastore }}"
+      folder: "{{ my_virtual_machine_folder.folder }}"
+      resource_pool: "{{ my_cluster_info.value.resource_pool }}"
+    name: test_vm1
+    guest_OS: DEBIAN_8_64
+    hardware_version: VMX_11
+    memory:
+      hot_add_enabled: true
+      size_MiB: 1024
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_new_disk.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_new_disk.result.json
new file mode 100644
index 0000000..7b4275c
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_new_disk.result.json
@@ -0,0 +1,17 @@
+{
+ "value": {
+ "backing": {
+ "vmdk_file": "[local] test_vm1_8/test_vm1_1.vmdk",
+ "type": "VMDK_FILE"
+ },
+ "label": "Hard disk 2",
+ "type": "SATA",
+ "sata": {
+ "bus": 0,
+ "unit": 0
+ },
+ "capacity": 320000
+ },
+ "id": "16000",
+ "changed": true
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_new_disk.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_new_disk.task.yaml
new file mode 100644
index 0000000..66e330b
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Create_a_new_disk.task.yaml
@@ -0,0 +1,7 @@
+- name: Create a new disk
+  vmware.vmware_rest.vcenter_vm_hardware_disk:
+    vm: '{{ test_vm1_info.id }}'
+    type: SATA
+    new_vmdk:
+      capacity: 320000
+  register: my_new_disk
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Dedicate_one_core_to_the_VM.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Dedicate_one_core_to_the_VM.result.json
new file mode 100644
index 0000000..8d2169b
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Dedicate_one_core_to_the_VM.result.json
@@ -0,0 +1,10 @@
+{
+ "value": {
+ "hot_remove_enabled": false,
+ "count": 1,
+ "hot_add_enabled": false,
+ "cores_per_socket": 1
+ },
+ "id": null,
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Dedicate_one_core_to_the_VM.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Dedicate_one_core_to_the_VM.task.yaml
new file mode 100644
index 0000000..0713ea6
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Dedicate_one_core_to_the_VM.task.yaml
@@ -0,0 +1,5 @@
+- name: Dedicate one core to the VM
+  vmware.vmware_rest.vcenter_vm_hardware_cpu:
+    vm: '{{ test_vm1_info.id }}'
+    count: 1
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_VM_storage_policy.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_VM_storage_policy.result.json
new file mode 100644
index 0000000..204ad5f
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_VM_storage_policy.result.json
@@ -0,0 +1,6 @@
+{
+ "value": {
+ "disks": []
+ },
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_VM_storage_policy.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_VM_storage_policy.task.yaml
new file mode 100644
index 0000000..f9a7e75
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_VM_storage_policy.task.yaml
@@ -0,0 +1,4 @@
+- name: Get VM storage policy
+  vmware.vmware_rest.vcenter_vm_storage_policy_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_filesystem_information.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_filesystem_information.result.json
new file mode 100644
index 0000000..ad87f76
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_filesystem_information.result.json
@@ -0,0 +1,13 @@
+{
+ "value": [
+ {
+ "value": {
+ "mappings": [],
+ "free_space": 774766592,
+ "capacity": 2515173376
+ },
+ "key": "/"
+ }
+ ],
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_filesystem_information.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_filesystem_information.task.yaml
new file mode 100644
index 0000000..836129f
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_filesystem_information.task.yaml
@@ -0,0 +1,8 @@
+- name: Get guest filesystem information
+  vmware.vmware_rest.vcenter_vm_guest_localfilesystem_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
+  until:
+    - _result is not failed
+  retries: 60
+  delay: 5
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_identity_information.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_identity_information.result.json
new file mode 100644
index 0000000..01e8a8f
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_identity_information.result.json
@@ -0,0 +1,14 @@
+{
+ "value": {
+ "full_name": {
+ "args": [],
+ "default_message": "Red Hat Fedora (64-bit)",
+ "id": "vmsg.guestos.fedora64Guest.label"
+ },
+ "name": "FEDORA_64",
+ "ip_address": "192.168.122.242",
+ "family": "LINUX",
+ "host_name": "localhost.localdomain"
+ },
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_identity_information.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_identity_information.task.yaml
new file mode 100644
index 0000000..bef332e
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_identity_information.task.yaml
@@ -0,0 +1,4 @@
+- name: Get guest identity information
+  vmware.vmware_rest.vcenter_vm_guest_identity_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_interfaces_information.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_interfaces_information.result.json
new file mode 100644
index 0000000..2c97337
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_interfaces_information.result.json
@@ -0,0 +1,23 @@
+{
+ "value": [
+ {
+ "mac_address": "00:50:56:b3:49:5c",
+ "ip": {
+ "ip_addresses": [
+ {
+ "ip_address": "192.168.122.242",
+ "prefix_length": 24,
+ "state": "PREFERRED"
+ },
+ {
+ "ip_address": "fe80::b8d0:511b:897f:65a2",
+ "prefix_length": 64,
+ "state": "UNKNOWN"
+ }
+ ]
+ },
+ "nic": "4000"
+ }
+ ],
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_interfaces_information.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_interfaces_information.task.yaml
new file mode 100644
index 0000000..d25c6c6
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_interfaces_information.task.yaml
@@ -0,0 +1,4 @@
+- name: Get guest network interfaces information
+  vmware.vmware_rest.vcenter_vm_guest_networking_interfaces_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_routes_information.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_routes_information.result.json
new file mode 100644
index 0000000..68e2033
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_routes_information.result.json
@@ -0,0 +1,31 @@
+{
+ "value": [
+ {
+ "gateway_address": "192.168.122.1",
+ "interface_index": 0,
+ "prefix_length": 0,
+ "network": "0.0.0.0"
+ },
+ {
+ "interface_index": 0,
+ "prefix_length": 24,
+ "network": "192.168.122.0"
+ },
+ {
+ "interface_index": 0,
+ "prefix_length": 64,
+ "network": "fe80::"
+ },
+ {
+ "interface_index": 0,
+ "prefix_length": 128,
+ "network": "fe80::b8d0:511b:897f:65a2"
+ },
+ {
+ "interface_index": 0,
+ "prefix_length": 8,
+ "network": "ff00::"
+ }
+ ],
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_routes_information.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_routes_information.task.yaml
new file mode 100644
index 0000000..8ef1352
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_network_routes_information.task.yaml
@@ -0,0 +1,4 @@
+- name: Get guest network routes information
+  vmware.vmware_rest.vcenter_vm_guest_networking_routes_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_networking_information.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_networking_information.result.json
new file mode 100644
index 0000000..fe757b6
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_networking_information.result.json
@@ -0,0 +1,17 @@
+{
+ "value": {
+ "dns": {
+ "ip_addresses": [
+ "10.0.2.3"
+ ],
+ "search_domains": [
+ "localdomain"
+ ]
+ },
+ "dns_values": {
+ "domain_name": "localdomain",
+ "host_name": "localhost.localdomain"
+ }
+ },
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_networking_information.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_networking_information.task.yaml
new file mode 100644
index 0000000..70ae776
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_networking_information.task.yaml
@@ -0,0 +1,4 @@
+- name: Get guest networking information
+  vmware.vmware_rest.vcenter_vm_guest_networking_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_power_information.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_power_information.result.json
new file mode 100644
index 0000000..da31778
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_power_information.result.json
@@ -0,0 +1,6 @@
+{
+ "value": {
+ "state": "POWERED_ON"
+ },
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_power_information.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_power_information.task.yaml
new file mode 100644
index 0000000..4064f53
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Get_guest_power_information.task.yaml
@@ -0,0 +1,4 @@
+- name: Get guest power information
+  vmware.vmware_rest.vcenter_vm_power_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Increase_the_memory_of_a_VM.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Increase_the_memory_of_a_VM.result.json
new file mode 100644
index 0000000..e15f41c
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Increase_the_memory_of_a_VM.result.json
@@ -0,0 +1,4 @@
+{
+ "id": null,
+ "changed": true
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Increase_the_memory_of_a_VM.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Increase_the_memory_of_a_VM.task.yaml
new file mode 100644
index 0000000..3710bcc
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Increase_the_memory_of_a_VM.task.yaml
@@ -0,0 +1,5 @@
+- name: Increase the memory of a VM
+  vmware.vmware_rest.vcenter_vm_hardware_memory:
+    vm: '{{ test_vm1_info.id }}'
+    size_MiB: 1080
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_SCSI_adapter_of_a_given_VM.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_SCSI_adapter_of_a_given_VM.result.json
new file mode 100644
index 0000000..3ecaa4b
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_SCSI_adapter_of_a_given_VM.result.json
@@ -0,0 +1,14 @@
+{
+ "value": [
+ {
+ "scsi": {
+ "bus": 0,
+ "unit": 7
+ },
+ "label": "SCSI controller 0",
+ "type": "PVSCSI",
+ "sharing": "NONE"
+ }
+ ],
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_SCSI_adapter_of_a_given_VM.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_SCSI_adapter_of_a_given_VM.task.yaml
new file mode 100644
index 0000000..e2fc3c0
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_SCSI_adapter_of_a_given_VM.task.yaml
@@ -0,0 +1,4 @@
+- name: List the SCSI adapter of a given VM
+  vmware.vmware_rest.vcenter_vm_hardware_adapter_scsi_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_cdrom_devices_on_the_guest.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_cdrom_devices_on_the_guest.result.json
new file mode 100644
index 0000000..a838aa5
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_cdrom_devices_on_the_guest.result.json
@@ -0,0 +1,4 @@
+{
+ "value": [],
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_cdrom_devices_on_the_guest.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_cdrom_devices_on_the_guest.task.yaml
new file mode 100644
index 0000000..9dc6f4d
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/List_the_cdrom_devices_on_the_guest.task.yaml
@@ -0,0 +1,4 @@
+- name: List the cdrom devices on the guest
+  vmware.vmware_rest.vcenter_vm_hardware_cdrom_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Look_up_the_VM_called_test_vm1_in_the_inventory.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Look_up_the_VM_called_test_vm1_in_the_inventory.result.json
new file mode 100644
index 0000000..3b5e197
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Look_up_the_VM_called_test_vm1_in_the_inventory.result.json
@@ -0,0 +1,12 @@
+{
+ "value": [
+ {
+ "memory_size_MiB": 1024,
+ "vm": "vm-1650",
+ "name": "test_vm1",
+ "power_state": "POWERED_OFF",
+ "cpu_count": 1
+ }
+ ],
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Look_up_the_VM_called_test_vm1_in_the_inventory.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Look_up_the_VM_called_test_vm1_in_the_inventory.task.yaml
new file mode 100644
index 0000000..78a4043
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Look_up_the_VM_called_test_vm1_in_the_inventory.task.yaml
@@ -0,0 +1,5 @@
+- name: Look up the VM called test_vm1 in the inventory
+  register: search_result
+  vmware.vmware_rest.vcenter_vm_info:
+    filter_names:
+      - test_vm1
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Remove_SATA_adapter_at_PCI_slot_34.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Remove_SATA_adapter_at_PCI_slot_34.result.json
new file mode 100644
index 0000000..aac751a
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Remove_SATA_adapter_at_PCI_slot_34.result.json
@@ -0,0 +1,3 @@
+{
+ "changed": true
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_a_list_of_all_the_datastores.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_a_list_of_all_the_datastores.result.json
new file mode 100644
index 0000000..48c3cf8
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_a_list_of_all_the_datastores.result.json
@@ -0,0 +1,26 @@
+{
+ "value": [
+ {
+ "datastore": "datastore-1644",
+ "name": "local",
+ "type": "VMFS",
+ "free_space": 13523484672,
+ "capacity": 15032385536
+ },
+ {
+ "datastore": "datastore-1645",
+ "name": "ro_datastore",
+ "type": "NFS",
+ "free_space": 24638349312,
+ "capacity": 26831990784
+ },
+ {
+ "datastore": "datastore-1646",
+ "name": "rw_datastore",
+ "type": "NFS",
+ "free_space": 24638349312,
+ "capacity": 26831990784
+ }
+ ],
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_a_list_of_all_the_datastores.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_a_list_of_all_the_datastores.task.yaml
new file mode 100644
index 0000000..838cabc
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_a_list_of_all_the_datastores.task.yaml
@@ -0,0 +1,3 @@
+- name: Retrieve a list of all the datastores
+  vmware.vmware_rest.vcenter_datastore_info:
+  register: my_datastores
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_details_about_the_first_cluster.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_details_about_the_first_cluster.result.json
new file mode 100644
index 0000000..7c86727
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_details_about_the_first_cluster.result.json
@@ -0,0 +1,8 @@
+{
+ "value": {
+ "name": "my_cluster",
+ "resource_pool": "resgroup-1637"
+ },
+ "id": "domain-c1636",
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_details_about_the_first_cluster.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_details_about_the_first_cluster.task.yaml
new file mode 100644
index 0000000..ec505aa
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_details_about_the_first_cluster.task.yaml
@@ -0,0 +1,4 @@
+- name: Retrieve details about the first cluster
+  vmware.vmware_rest.vcenter_cluster_info:
+    cluster: "{{ all_the_clusters.value[0].cluster }}"
+  register: my_cluster_info
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_disk_information_from_the_VM.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_disk_information_from_the_VM.result.json
new file mode 100644
index 0000000..922250e
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_disk_information_from_the_VM.result.json
@@ -0,0 +1,18 @@
+{
+ "value": [
+ {
+ "scsi": {
+ "bus": 0,
+ "unit": 0
+ },
+ "backing": {
+ "vmdk_file": "[local] test_vm1_8/test_vm1.vmdk",
+ "type": "VMDK_FILE"
+ },
+ "label": "Hard disk 1",
+ "type": "SCSI",
+ "capacity": 17179869184
+ }
+ ],
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_disk_information_from_the_VM.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_disk_information_from_the_VM.task.yaml
new file mode 100644
index 0000000..eef0250
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_disk_information_from_the_VM.task.yaml
@@ -0,0 +1,4 @@
+- name: Retrieve the disk information from the VM
+  vmware.vmware_rest.vcenter_vm_hardware_disk_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_memory_information_from_the_VM.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_memory_information_from_the_VM.result.json
new file mode 100644
index 0000000..88436c1
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_memory_information_from_the_VM.result.json
@@ -0,0 +1,7 @@
+{
+ "value": {
+ "size_MiB": 1024,
+ "hot_add_enabled": true
+ },
+ "changed": false
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_memory_information_from_the_VM.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_memory_information_from_the_VM.task.yaml
new file mode 100644
index 0000000..00fa100
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Retrieve_the_memory_information_from_the_VM.task.yaml
@@ -0,0 +1,4 @@
+- name: Retrieve the memory information from the VM
+  vmware.vmware_rest.vcenter_vm_hardware_memory_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_NIC's_start_connected_flag_on.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_NIC's_start_connected_flag_on.result.json
new file mode 100644
index 0000000..9c0c9c1
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_NIC's_start_connected_flag_on.result.json
@@ -0,0 +1,4 @@
+{
+ "id": "4000",
+ "changed": true
+} \ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_NIC's_start_connected_flag_on.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_NIC's_start_connected_flag_on.task.yaml
new file mode 100644
index 0000000..8f927da
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_NIC's_start_connected_flag_on.task.yaml
@@ -0,0 +1,5 @@
+- name: Turn the NIC's start_connected flag on
+  vmware.vmware_rest.vcenter_vm_hardware_ethernet:
+    nic: '{{ vm_hardware_ethernet_1.id }}'
+    start_connected: true
+    vm: '{{ test_vm1_info.id }}'
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_power_of_the_VM_on.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_power_of_the_VM_on.result.json
new file mode 100644
index 0000000..a661aa0
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_power_of_the_VM_on.result.json
@@ -0,0 +1,3 @@
+{
+    "changed": false
+}
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_power_of_the_VM_on.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_power_of_the_VM_on.task.yaml
new file mode 100644
index 0000000..789a585
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Turn_the_power_of_the_VM_on.task.yaml
@@ -0,0 +1,4 @@
+- name: Turn the power of the VM on
+  vmware.vmware_rest.vcenter_vm_power:
+    state: start
+    vm: '{{ test_vm1_info.id }}'
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Upgrade_the_VM_hardware_version.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Upgrade_the_VM_hardware_version.result.json
new file mode 100644
index 0000000..e15f41c
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Upgrade_the_VM_hardware_version.result.json
@@ -0,0 +1,4 @@
+{
+    "id": null,
+    "changed": true
+}
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Upgrade_the_VM_hardware_version.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Upgrade_the_VM_hardware_version.task.yaml
new file mode 100644
index 0000000..46a95a5
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Upgrade_the_VM_hardware_version.task.yaml
@@ -0,0 +1,6 @@
+- name: Upgrade the VM hardware version
+  vmware.vmware_rest.vcenter_vm_hardware:
+    upgrade_policy: AFTER_CLEAN_SHUTDOWN
+    upgrade_version: VMX_13
+    vm: '{{ test_vm1_info.id }}'
+  register: _result
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Wait_until_my_VM_is_ready.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Wait_until_my_VM_is_ready.result.json
new file mode 100644
index 0000000..849bde4
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Wait_until_my_VM_is_ready.result.json
@@ -0,0 +1,13 @@
+{
+    "value": {
+        "auto_update_supported": false,
+        "upgrade_policy": "MANUAL",
+        "install_attempt_count": 0,
+        "version_status": "UNMANAGED",
+        "version_number": 10346,
+        "run_state": "RUNNING",
+        "version": "10346",
+        "install_type": "OPEN_VM_TOOLS"
+    },
+    "changed": false
+}
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Wait_until_my_VM_is_ready.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Wait_until_my_VM_is_ready.task.yaml
new file mode 100644
index 0000000..1ec04f4
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/Wait_until_my_VM_is_ready.task.yaml
@@ -0,0 +1,9 @@
+- name: Wait until my VM is ready
+  vmware.vmware_rest.vcenter_vm_tools_info:
+    vm: '{{ test_vm1_info.id }}'
+  register: vm_tools_info
+  until:
+    - vm_tools_info is not failed
+    - vm_tools_info.value.run_state == "RUNNING"
+  retries: 60
+  delay: 5
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/collect_a_list_of_the_datacenters.result.json b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/collect_a_list_of_the_datacenters.result.json
new file mode 100644
index 0000000..1225ad7
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/collect_a_list_of_the_datacenters.result.json
@@ -0,0 +1,9 @@
+{
+    "value": [
+        {
+            "name": "my_dc",
+            "datacenter": "datacenter-1630"
+        }
+    ],
+    "changed": false
+}
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/collect_a_list_of_the_datacenters.task.yaml b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/collect_a_list_of_the_datacenters.task.yaml
new file mode 100644
index 0000000..f274a97
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/task_outputs/collect_a_list_of_the_datacenters.task.yaml
@@ -0,0 +1,3 @@
+- name: collect a list of the datacenters
+  vmware.vmware_rest.vcenter_datacenter_info:
+  register: my_datacenters
\ No newline at end of file
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_hardware_tuning.rst b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_hardware_tuning.rst
new file mode 100644
index 0000000..1af1d5b
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_hardware_tuning.rst
@@ -0,0 +1,183 @@
+.. _vmware_rest_vm_hardware_tuning:
+
+*******************************
+How to modify a virtual machine
+*******************************
+
+.. contents::
+   :local:
+
+
+Introduction
+============
+
+This section shows you how to use Ansible to modify an existing virtual machine.
+
+Scenario requirements
+=====================
+
+You've already followed :ref:`vmware_rest_create_vm` and created a VM.
+
+How to add a CDROM drive to a virtual machine
+=============================================
+
+In this example, we use the ``vcenter_vm_hardware_*`` modules to add a new CDROM to an existing VM.
+
+Add a new SATA adapter
+______________________
+
+First, we create a new SATA adapter. We pin it to a specific ``pci_slot_number`` so the task stays idempotent: if an adapter already occupies that slot, running the task again changes nothing.
+
+.. literalinclude:: task_outputs/Create_a_SATA_adapter_at_PCI_slot_34.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Create_a_SATA_adapter_at_PCI_slot_34.result.json
+
+Add a CDROM drive
+_________________
+
+Now we can create the CDROM drive:
+
+.. literalinclude:: task_outputs/Attach_an_ISO_image_to_a_guest_VM.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Attach_an_ISO_image_to_a_guest_VM.result.json
+
+
+.. _vmware_rest_attach_a_network:
+
+How to attach a VM to a network
+===============================
+
+Attach a new NIC
+________________
+
+Here we attach the VM to the network through a portgroup. We again specify a ``pci_slot_number`` to keep the task idempotent.
+
+A second task then adjusts the NIC configuration.
+
+.. literalinclude:: task_outputs/Attach_a_VM_to_a_dvswitch.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Attach_a_VM_to_a_dvswitch.result.json
+
+Adjust the configuration of the NIC
+___________________________________
+
+.. literalinclude:: task_outputs/Turn_the_NIC's_start_connected_flag_on.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Turn_the_NIC's_start_connected_flag_on.result.json
+
+Increase the memory of the VM
+=============================
+
+We can also adjust the amount of memory dedicated to the VM.
+
+.. literalinclude:: task_outputs/Increase_the_memory_of_a_VM.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Increase_the_memory_of_a_VM.result.json
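+
+When the VM is powered on, increasing its memory generally requires
+memory hot-add to be enabled in the guest. A minimal sketch of a
+pre-flight check, reusing the ``vcenter_vm_hardware_memory_info``
+module from :ref:`vmware_rest_vm_info` (the registered variable name
+is an assumption):
+
+.. code-block:: yaml
+
+    - name: Collect the memory information of the VM
+      vmware.vmware_rest.vcenter_vm_hardware_memory_info:
+        vm: '{{ test_vm1_info.id }}'
+      register: _memory_info
+
+    - name: Fail early if memory hot-add is not available
+      ansible.builtin.assert:
+        that:
+          - _memory_info.value.hot_add_enabled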
+
+Upgrade the hardware version of the VM
+======================================
+
+Here we use the ``vcenter_vm_hardware`` module to upgrade the hardware version. With ``upgrade_policy: AFTER_CLEAN_SHUTDOWN``, the upgrade is applied at the next clean shutdown of the guest:
+
+.. literalinclude:: task_outputs/Upgrade_the_VM_hardware_version.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Upgrade_the_VM_hardware_version.result.json
+
+Adjust the number of CPUs of the VM
+===================================
+
+You can use ``vcenter_vm_hardware_cpu`` for that:
+
+.. literalinclude:: task_outputs/Dedicate_one_core_to_the_VM.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Dedicate_one_core_to_the_VM.result.json
+
+Remove a SATA controller
+========================
+
+In this example, we remove the SATA controller at PCI slot 34.
+
+.. literalinclude:: task_outputs/Remove_SATA_adapter_at_PCI_slot_34.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Remove_SATA_adapter_at_PCI_slot_34.result.json
+
+Attach a floppy drive
+=====================
+
+Here we attach a floppy drive to a VM.
+
+.. literalinclude:: task_outputs/Add_a_floppy_disk_drive.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Add_a_floppy_disk_drive.result.json
+
+Attach a new disk
+=================
+
+Here we attach a tiny disk to the VM. The ``capacity`` is in bytes.
+
+.. literalinclude:: task_outputs/Create_a_new_disk.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Create_a_new_disk.result.json
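+
+Byte values are hard to read. You can let Ansible compute them from a
+human friendly notation with the ``human_to_bytes`` filter. A minimal
+sketch, where the ``vcenter_vm_hardware_disk`` module name and its
+``new_vmdk.capacity`` parameter are assumptions; ``'16G'`` converts to
+17179869184 bytes, the disk size reported in :ref:`vmware_rest_vm_info`:
+
+.. code-block:: yaml
+
+    - name: Create a new 16GiB disk (sketch)
+      vmware.vmware_rest.vcenter_vm_hardware_disk:
+        vm: '{{ test_vm1_info.id }}'
+        type: SATA
+        new_vmdk:
+          capacity: "{{ '16G' | human_to_bytes }}"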
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_info.rst b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_info.rst
new file mode 100644
index 0000000..097b69b
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_info.rst
@@ -0,0 +1,149 @@
+.. _vmware_rest_vm_info:
+
+***************************************
+Retrieve information from a specific VM
+***************************************
+
+.. contents::
+   :local:
+
+
+Introduction
+============
+
+This section shows you how to use Ansible to retrieve information about a specific virtual machine.
+
+Scenario requirements
+=====================
+
+You've already followed :ref:`vmware_rest_create_vm` and created a new VM called ``test_vm1``.
+
+How to collect virtual machine information
+==========================================
+
+List the VM
+___________
+
+In this example, we use the ``vcenter_vm_info`` module to collect information about our new VM.
+
+We start by asking for a list of VMs and use a filter to limit the results to the VM called ``test_vm1``. The result is therefore a list whose ``value`` key contains a single entry.
+
+.. literalinclude:: task_outputs/Look_up_the_VM_called_test_vm1_in_the_inventory.task.yaml
+
+Result
+______
+
+As expected, we get a list back, and thanks to our filter it contains a single entry.
+
+.. literalinclude:: task_outputs/Look_up_the_VM_called_test_vm1_in_the_inventory.result.json
+
+Collect the details about a specific VM
+_______________________________________
+
+For the next steps, we pass the ID of the VM through the ``vm`` parameter. This allows us to collect more details about this specific VM.
+
+.. literalinclude:: task_outputs/Collect_information_about_a_specific_VM.task.yaml
+
+Result
+______
+
+The result is a structure with all the details about our VM. You will note it is the same information that was returned when we created the VM.
+
+.. literalinclude:: task_outputs/Collect_information_about_a_specific_VM.result.json
+
+
+Get the hardware version of a specific VM
+_________________________________________
+
+We can also use the more specific ``vcenter_vm_*_info`` modules to retrieve a
+focused subset of the information. Here we use ``vcenter_vm_hardware_info`` to
+get the hardware version of the VM.
+
+.. literalinclude:: task_outputs/Collect_the_hardware_information.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Collect_the_hardware_information.result.json
+
+List the SCSI adapter(s) of a specific VM
+_________________________________________
+
+Here, for instance, we list the SCSI adapter(s) of the VM:
+
+.. literalinclude:: task_outputs/List_the_SCSI_adapter_of_a_given_VM.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/List_the_SCSI_adapter_of_a_given_VM.result.json
+
+You can do the same for the SATA controllers with ``vcenter_vm_adapter_sata_info``.
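+
+A minimal sketch, assuming the module takes the same ``vm`` parameter
+as its SCSI counterpart:
+
+.. code-block:: yaml
+
+    - name: List the SATA adapters of a given VM (sketch)
+      vmware.vmware_rest.vcenter_vm_adapter_sata_info:
+        vm: '{{ test_vm1_info.id }}'
+      register: _sata_adapters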
+
+List the CDROM drive(s) of a specific VM
+________________________________________
+
+Similarly, we can list its CDROM drives.
+
+.. literalinclude:: task_outputs/List_the_cdrom_devices_on_the_guest.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/List_the_cdrom_devices_on_the_guest.result.json
+
+Get the memory information of the VM
+____________________________________
+
+Here we collect the memory information of the VM. The ``size_MiB`` field is expressed in MiB, so ``1024`` means 1 GiB:
+
+.. literalinclude:: task_outputs/Retrieve_the_memory_information_from_the_VM.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Retrieve_the_memory_information_from_the_VM.result.json
+
+Get the storage policy of the VM
+________________________________
+
+We use the ``vcenter_vm_storage_policy_info`` module for that:
+
+.. literalinclude:: task_outputs/Get_VM_storage_policy.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Get_VM_storage_policy.result.json
+
+Get the disk information of the VM
+__________________________________
+
+We use the ``vcenter_vm_hardware_disk_info`` module for this operation:
+
+.. literalinclude:: task_outputs/Retrieve_the_disk_information_from_the_VM.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Retrieve_the_disk_information_from_the_VM.result.json
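+
+The ``capacity`` field is expressed in bytes. To display it in a human
+readable form, you can pipe it through the ``human_readable`` filter; a
+minimal sketch, reusing the ``_result`` variable registered above:
+
+.. code-block:: yaml
+
+    - name: Show the size of the first disk in a human readable form
+      ansible.builtin.debug:
+        msg: "{{ (_result.value | first).capacity | human_readable }}"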
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_tool_configuration.rst b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_tool_configuration.rst
new file mode 100644
index 0000000..b29447a
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_tool_configuration.rst
@@ -0,0 +1,60 @@
+.. _vmware_rest_vm_tool_configuration:
+
+**************************************************************
+How to configure the VMware tools of a running virtual machine
+**************************************************************
+
+.. contents::
+   :local:
+
+
+Introduction
+============
+
+This section shows you how to configure the VMware Tools of a running virtual machine.
+
+Scenario requirements
+=====================
+
+You've already followed :ref:`vmware_rest_run_a_vm` and your virtual machine runs VMware Tools.
+
+How to change the upgrade policy
+================================
+
+Change the upgrade policy to MANUAL
+-----------------------------------
+
+You can adjust the VMware Tools upgrade policy with the ``vcenter_vm_tools`` module.
+
+.. literalinclude:: task_outputs/Change_vm-tools_upgrade_policy_to_MANUAL.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Change_vm-tools_upgrade_policy_to_MANUAL.result.json
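+
+To verify the change, you can read the policy back with the
+``vcenter_vm_tools_info`` module; a minimal sketch:
+
+.. code-block:: yaml
+
+    - name: Check the new upgrade policy
+      vmware.vmware_rest.vcenter_vm_tools_info:
+        vm: '{{ test_vm1_info.id }}'
+      register: vm_tools_info
+
+    - name: Assert the policy is now MANUAL
+      ansible.builtin.assert:
+        that:
+          - vm_tools_info.value.upgrade_policy == "MANUAL"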
+
+
+Change the upgrade policy to UPGRADE_AT_POWER_CYCLE
+---------------------------------------------------
+
+.. literalinclude:: task_outputs/Change_vm-tools_upgrade_policy_to_UPGRADE_AT_POWER_CYCLE.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Change_vm-tools_upgrade_policy_to_UPGRADE_AT_POWER_CYCLE.result.json
diff --git a/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_tool_information.rst b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_tool_information.rst
new file mode 100644
index 0000000..2f92871
--- /dev/null
+++ b/docs/docsite/rst/scenario_guides/vmware_rest_scenarios/vm_tool_information.rst
@@ -0,0 +1,104 @@
+.. _vmware_rest_vm_tool_information:
+
+*****************************************************
+How to get information from a running virtual machine
+*****************************************************
+
+.. contents::
+   :local:
+
+
+Introduction
+============
+
+This section shows you how to collect information from a running virtual machine.
+
+Scenario requirements
+=====================
+
+You've already followed :ref:`vmware_rest_run_a_vm` and your virtual machine runs VMware Tools.
+
+How to collect information
+==========================
+
+In this example, we use the ``vcenter_vm_guest_*`` modules to collect information about the associated resources.
+
+Filesystem
+----------
+
+Here we use ``vcenter_vm_guest_localfilesystem_info`` to retrieve the details
+of the guest's filesystem. We also wrap the task in a ``retries`` loop:
+VMware Tools can take a while to start, and retrying the call gives the VM
+the extra time it needs to become ready.
+
+.. literalinclude:: task_outputs/Get_guest_filesystem_information.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Get_guest_filesystem_information.result.json
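+
+If you need the same protection in your own playbooks, the pattern
+looks like this (a sketch; the retry values mirror the
+``Wait until my VM is ready`` task of the previous scenario):
+
+.. code-block:: yaml
+
+    - name: Get guest filesystem information
+      vmware.vmware_rest.vcenter_vm_guest_localfilesystem_info:
+        vm: '{{ test_vm1_info.id }}'
+      register: _fs_info
+      until: _fs_info is not failed
+      retries: 60
+      delay: 5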
+
+Guest identity
+--------------
+
+You can use ``vcenter_vm_guest_identity_info`` to get details like the OS family or the hostname of the running VM.
+
+.. literalinclude:: task_outputs/Get_guest_identity_information.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Get_guest_identity_information.result.json
+
+Network
+-------
+
+``vcenter_vm_guest_networking_info`` will return the OS network configuration.
+
+.. literalinclude:: task_outputs/Get_guest_networking_information.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Get_guest_networking_information.result.json
+
+Network interfaces
+------------------
+
+``vcenter_vm_guest_networking_interfaces_info`` will return a list of NIC configurations.
+
+See also :ref:`vmware_rest_attach_a_network`.
+
+.. literalinclude:: task_outputs/Get_guest_network_interfaces_information.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Get_guest_network_interfaces_information.result.json
+
+Network routes
+--------------
+
+Use ``vcenter_vm_guest_networking_routes_info`` to explore the route table of your virtual machine.
+
+.. literalinclude:: task_outputs/Get_guest_network_routes_information.task.yaml
+
+Result
+______
+
+.. literalinclude:: task_outputs/Get_guest_network_routes_information.result.json
+