authorDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-28 16:04:21 +0000
committerDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-28 16:04:21 +0000
commit8a754e0858d922e955e71b253c139e071ecec432 (patch)
tree527d16e74bfd1840c85efd675fdecad056c54107 /docs/docsite/rst/dev_guide/testing
parentInitial commit. (diff)
Adding upstream version 2.14.3.upstream/2.14.3upstream
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to '')
-rw-r--r-- docs/docsite/rst/dev_guide/testing.rst | 245
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/action-plugin-docs.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/ansible-doc.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/ansible-requirements.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/ansible-test-future-boilerplate.rst | 8
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/ansible-var-precedence-check.rst | 6
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/azure-requirements.rst | 10
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/boilerplate.rst | 11
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/changelog.rst | 19
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/compile.rst | 32
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/configure-remoting-ps1.rst | 5
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/deprecated-config.rst | 6
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/docs-build.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/empty-init.rst | 10
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/future-import-boilerplate.rst | 51
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/ignores.rst | 105
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/import.rst | 126
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/line-endings.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/metaclass-boilerplate.rst | 23
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/mypy.rst | 14
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-assert.rst | 16
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-basestring.rst | 11
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-dict-iteritems.rst | 16
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-dict-iterkeys.rst | 9
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-dict-itervalues.rst | 16
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-get-exception.rst | 28
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-illegal-filenames.rst | 61
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-main-display.rst | 14
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-tests-as-filters.rst | 12
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-underscore-variable.rst | 30
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-unicode-literals.rst | 16
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-unwanted-files.rst | 13
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/no-wildcard-import.rst | 31
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/obsolete-files.rst | 14
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/package-data.rst | 5
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/pep8.rst | 24
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/pslint.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/pylint-ansible-test.rst | 8
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/pylint.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/replace-urlopen.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/required-and-default-attributes.rst | 5
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/rstcheck.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/runtime-metadata.rst | 7
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/sanity-docs.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/shebang.rst | 16
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/shellcheck.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/symlinks.rst | 6
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/test-constraints.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/update-bundled.rst | 31
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/use-argspec-type-path.rst | 10
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/use-compat-six.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/validate-modules.rst | 140
-rw-r--r-- docs/docsite/rst/dev_guide/testing/sanity/yamllint.rst | 4
-rw-r--r-- docs/docsite/rst/dev_guide/testing_compile.rst | 8
-rw-r--r-- docs/docsite/rst/dev_guide/testing_documentation.rst | 44
-rw-r--r-- docs/docsite/rst/dev_guide/testing_httptester.rst | 28
-rw-r--r-- docs/docsite/rst/dev_guide/testing_integration.rst | 241
-rw-r--r-- docs/docsite/rst/dev_guide/testing_pep8.rst | 8
-rw-r--r-- docs/docsite/rst/dev_guide/testing_running_locally.rst | 382
-rw-r--r-- docs/docsite/rst/dev_guide/testing_sanity.rst | 56
-rw-r--r-- docs/docsite/rst/dev_guide/testing_units.rst | 219
-rw-r--r-- docs/docsite/rst/dev_guide/testing_units_modules.rst | 589
-rw-r--r-- docs/docsite/rst/dev_guide/testing_validate-modules.rst | 7
64 files changed, 2852 insertions, 0 deletions
diff --git a/docs/docsite/rst/dev_guide/testing.rst b/docs/docsite/rst/dev_guide/testing.rst
new file mode 100644
index 0000000..6b4716d
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing.rst
@@ -0,0 +1,245 @@
+.. _developing_testing:
+
+***************
+Testing Ansible
+***************
+
+.. contents::
+ :local:
+
+
+Why test your Ansible contributions?
+====================================
+
+If you're a developer, one of the most valuable things you can do is to look at GitHub issues and help fix bugs, since bug-fixing is almost always prioritized over feature development. Even for non-developers, helping to test pull requests for bug fixes and features is still immensely valuable.
+
+Ansible users who understand how to write playbooks and roles should be able to test their work. GitHub pull requests automatically run a variety of tests (for example, Azure Pipelines) that show bugs in action. However, contributors must also test their work outside of the automated GitHub checks and show evidence of these tests in the PR, which makes it more likely that their work will be reviewed and merged.
+
+Read on to learn how Ansible is tested, how to test your contributions locally, and how to extend testing capabilities.
+
+If you want to learn about testing collections, read :ref:`testing_collections`
+
+
+
+Types of tests
+==============
+
+At a high level we have the following classifications of tests:
+
+:sanity:
+ * :ref:`testing_sanity`
+ * Sanity tests are made up of scripts and tools used to perform static code analysis.
+ * The primary purpose of these tests is to enforce Ansible coding standards and requirements.
+:integration:
+ * :ref:`testing_integration`
+ * Functional tests of modules and Ansible core functionality.
+:units:
+ * :ref:`testing_units`
+ * Tests directly against individual parts of the code base.
+
+
+Testing within GitHub & Azure Pipelines
+=======================================
+
+
+Organization
+------------
+
+When Pull Requests (PRs) are created they are tested using Azure Pipelines, a Continuous Integration (CI) tool. Results are shown at the end of every PR.
+
+When Azure Pipelines detects an error that can be linked back to a file modified in the PR, the relevant lines will be added as a GitHub comment. For example:
+
+.. code-block:: text
+
+ The test `ansible-test sanity --test pep8` failed with the following errors:
+
+ lib/ansible/modules/network/foo/bar.py:509:17: E265 block comment should start with '# '
+
+ The test `ansible-test sanity --test validate-modules` failed with the following error:
+ lib/ansible/modules/network/foo/bar.py:0:0: E307 version_added should be 2.4. Currently 2.3
+
+From the above example we can see that ``--test pep8`` and ``--test validate-modules`` have each identified an issue. The commands given allow you to run the same tests locally to ensure you've fixed all issues without having to push your changes to GitHub and wait for Azure Pipelines.
+
+If you don't already have Ansible available, use the local checkout by running:
+
+.. code-block:: shell-session
+
+ source hacking/env-setup
+
+Then run the tests detailed in the GitHub comment:
+
+.. code-block:: shell-session
+
+ ansible-test sanity --test pep8
+ ansible-test sanity --test validate-modules
+
+If there isn't a GitHub comment stating what's failed you can inspect the results by clicking on the "Details" button under the "checks have failed" message at the end of the PR.
+
+Rerunning a failing CI job
+--------------------------
+
+Occasionally you may find your PR fails due to a reason unrelated to your change. This could happen for several reasons, including:
+
+* a temporary issue accessing an external resource, such as a yum or git repo
+* a timeout creating a virtual machine to run the tests on
+
+If either of these issues appears to be the case, you can rerun the Azure Pipelines test by:
+
+* adding a comment with ``/rebuild`` (full rebuild) or ``/rebuild_failed`` (rebuild only failed CI nodes) to the PR
+* closing and re-opening the PR (full rebuild)
+* making another change to the PR and pushing to GitHub
+
+If the issue persists, please contact us in the ``#ansible-devel`` chat channel (using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_).
+
+
+How to test a PR
+================
+
+Ideally, code should add tests that prove that the code works. That's not always possible and tests are not always comprehensive, especially when a user doesn't have access to a wide variety of platforms, or is using an API or web service. In these cases, live testing against real equipment can be more valuable than automation that runs against simulated interfaces. In any case, things should always be tested manually the first time as well.
+
+Thankfully, helping to test Ansible is pretty straightforward, assuming you are familiar with how Ansible works.
+
+Setup: Checking out a Pull Request
+----------------------------------
+
+You can do this by:
+
+* checking out Ansible
+* fetching the proposed changes into a test branch
+* testing
+* commenting on that particular issue on GitHub
+
+Here's how:
+
+.. warning::
+ Testing source code from GitHub pull requests sent to us does have some inherent risk, as the source code
+ sent may have mistakes or malicious code that could have a negative impact on your system. We recommend
+ doing all testing on a virtual machine, whether a cloud instance, or locally. Some users like Vagrant
+ or Docker for this, but they are optional. It is also useful to have virtual machines of different Linux or
+ other flavors, since some features (for example, package managers such as apt or yum) are specific to those OS versions.
+
+
+Create a fresh area to work:
+
+.. code-block:: shell-session
+
+ git clone https://github.com/ansible/ansible.git ansible-pr-testing
+ cd ansible-pr-testing
+
+Next, find the pull request you'd like to test and make note of its number. It will look something like this:
+
+.. code-block:: text
+
+ Use os.path.sep instead of hardcoding / #65381
+
+.. note:: Only test ``ansible:devel``
+
+ It is important that the pull request target be ``ansible:devel``, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by Ansible staff.
+
+Use the pull request number when you fetch the proposed changes and create your branch for testing:
+
+.. code-block:: shell-session
+
+ git fetch origin refs/pull/XXXX/head:testing_PRXXXX
+ git checkout testing_PRXXXX
+
+The first command fetches the proposed changes from the pull request and creates a new branch named ``testing_PRXXXX``, where the XXXX is the actual number associated with the pull request (for example, 65381). The second command checks out the newly created branch.
+
+.. note::
+ If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding if you are not somewhat familiar with git and coding, as you will have to resolve a merge conflict. This is the responsibility of the original pull request contributor.
+
+.. note::
+ Some users do not create feature branches, which can cause problems when they have multiple, unrelated commits in their version of ``devel``. If the source looks like ``someuser:devel``, make sure there is only one commit listed on the pull request.
+
+The Ansible source includes a script, frequently used by Ansible developers, that allows you to use Ansible
+directly from source without requiring a full installation.
+
+Simply source it (to use the Linux/Unix terminology) to begin using it immediately:
+
+.. code-block:: shell-session
+
+ source ./hacking/env-setup
+
+This script modifies the ``PYTHONPATH`` environment variable (along with a few other things), and the changes
+remain in effect only for as long as your shell session is open.
+
+Testing the Pull Request
+------------------------
+
+At this point, you should be ready to begin testing!
+
+Some ideas of what to test are:
+
+* Create a test playbook with the examples from the PR and check that they function correctly
+* Check whether any Python tracebacks are returned (a traceback is a bug)
+* Test on different operating systems, or against different library versions
+
+Run sanity tests
+^^^^^^^^^^^^^^^^
+
+.. code-block:: shell-session
+
+ ansible-test sanity
+
+More information: :ref:`testing_sanity`
+
+Run unit tests
+^^^^^^^^^^^^^^
+
+.. code-block:: shell-session
+
+ ansible-test units
+
+More information: :ref:`testing_units`
+
+Run integration tests
+^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell-session
+
+ ansible-test integration -v ping
+
+More information: :ref:`testing_integration`
+
+Any potential issues should be added as comments on the pull request (it is also acceptable to comment when the feature works as expected), remembering to include the output of ``ansible --version``.
+
+Example:
+
+.. code-block:: text
+
+ Works for me! Tested on `Ansible 2.3.0`. I verified this on CentOS 6.5 and also Ubuntu 14.04.
+
+If the PR does not resolve the issue, or if you see any failures from the unit/integration tests, just include that output instead:
+
+ | This change causes errors for me.
+ |
+ | When I ran this Ubuntu 16.04 it failed with the following:
+ |
+ | \```
+ | some output
+ | StackTrace
+ | some other output
+ | \```
+
+Code Coverage Online
+^^^^^^^^^^^^^^^^^^^^
+
+`The online code coverage reports <https://codecov.io/gh/ansible/ansible>`_ are a good way
+to identify areas of Ansible that need better test coverage. By following the red highlights you can
+drill down through the reports to find files which have no tests at all. Adding both
+integration and unit tests which clearly show how code should work, verify important
+Ansible functions, and increase testing coverage in areas where there is none is a valuable
+way to help improve Ansible.
+
+The code coverage reports only cover the ``devel`` branch of Ansible, where new feature
+development takes place. Pull requests and new code will be missing from the codecov.io
+coverage reports, so local reporting is needed. Most ``ansible-test`` commands allow you
+to collect code coverage; this is particularly useful for indicating where to extend
+testing. See :ref:`testing_running_locally` for more information.
+
+
+Want to know more about testing?
+================================
+
+If you'd like to know more about the plans for improving Ansible testing, join the
+`Testing Working Group <https://github.com/ansible/community/blob/main/meetings/README.md>`_.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/action-plugin-docs.rst b/docs/docsite/rst/dev_guide/testing/sanity/action-plugin-docs.rst
new file mode 100644
index 0000000..e3a5d8b
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/action-plugin-docs.rst
@@ -0,0 +1,4 @@
+action-plugin-docs
+==================
+
+Each action plugin should have a matching module of the same name to provide documentation.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/ansible-doc.rst b/docs/docsite/rst/dev_guide/testing/sanity/ansible-doc.rst
new file mode 100644
index 0000000..9f2c4f5
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/ansible-doc.rst
@@ -0,0 +1,4 @@
+ansible-doc
+===========
+
+Verifies that ``ansible-doc`` can parse module documentation on all supported Python versions.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/ansible-requirements.rst b/docs/docsite/rst/dev_guide/testing/sanity/ansible-requirements.rst
new file mode 100644
index 0000000..f348b07
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/ansible-requirements.rst
@@ -0,0 +1,4 @@
+ansible-requirements
+====================
+
+``test/lib/ansible_test/_data/requirements/sanity.import-plugins.txt`` must be an identical copy of ``requirements.txt`` found in the project's root.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/ansible-test-future-boilerplate.rst b/docs/docsite/rst/dev_guide/testing/sanity/ansible-test-future-boilerplate.rst
new file mode 100644
index 0000000..43dfe32
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/ansible-test-future-boilerplate.rst
@@ -0,0 +1,8 @@
+ansible-test-future-boilerplate
+===============================
+
+The ``_internal`` code for ``ansible-test`` requires the following ``__future__`` import:
+
+.. code-block:: python
+
+ from __future__ import annotations
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/ansible-var-precedence-check.rst b/docs/docsite/rst/dev_guide/testing/sanity/ansible-var-precedence-check.rst
new file mode 100644
index 0000000..1906886
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/ansible-var-precedence-check.rst
@@ -0,0 +1,6 @@
+:orphan:
+
+ansible-var-precedence-check
+============================
+
+Check the order of precedence for Ansible variables against :ref:`ansible_variable_precedence`.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/azure-requirements.rst b/docs/docsite/rst/dev_guide/testing/sanity/azure-requirements.rst
new file mode 100644
index 0000000..5e0cc04
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/azure-requirements.rst
@@ -0,0 +1,10 @@
+:orphan:
+
+azure-requirements
+==================
+
+Update the Azure integration test requirements file when changes are made to the Azure packaging requirements file:
+
+.. code-block:: bash
+
+ cp packaging/requirements/requirements-azure.txt test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/boilerplate.rst b/docs/docsite/rst/dev_guide/testing/sanity/boilerplate.rst
new file mode 100644
index 0000000..51c0c08
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/boilerplate.rst
@@ -0,0 +1,11 @@
+:orphan:
+
+boilerplate
+===========
+
+Most Python files should include the following boilerplate:
+
+.. code-block:: python
+
+ from __future__ import (absolute_import, division, print_function)
+ __metaclass__ = type
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/changelog.rst b/docs/docsite/rst/dev_guide/testing/sanity/changelog.rst
new file mode 100644
index 0000000..2d557c7
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/changelog.rst
@@ -0,0 +1,19 @@
+changelog
+=========
+
+Basic linting of changelog fragments with `antsibull-changelog lint <https://pypi.org/project/antsibull-changelog/>`_.
+
+One or more of the following sections are required:
+
+- major_changes
+- minor_changes
+- breaking_changes
+- deprecated_features
+- removed_features
+- security_fixes
+- bugfixes
+- known_issues
+
+New modules and plugins must not be included in changelog fragments.
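As a sketch of what a fragment looks like, a bug fix fragment is a small YAML file whose top-level keys are the section names above (the filename and wording here are illustrative, not from the source):

```yaml
# changelogs/fragments/12345-fix-foo-crash.yml  (hypothetical filename)
bugfixes:
  - foo module - fix crash when the ``bar`` option is omitted.
```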
+
+See :ref:`collection_changelogs` for details.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/compile.rst b/docs/docsite/rst/dev_guide/testing/sanity/compile.rst
new file mode 100644
index 0000000..4036721
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/compile.rst
@@ -0,0 +1,32 @@
+.. _testing_compile:
+
+compile
+=======
+
+All Python source files must successfully compile using all supported Python versions.
+
+.. note::
+
+ The list of supported Python versions is dependent on the version of ``ansible-core`` that you are using.
+ Make sure you consult the version of the documentation which matches your ``ansible-core`` version.
+
+Controller code, including plugins in Ansible Collections, must support the following Python versions:
+
+- 3.11
+- 3.10
+- 3.9
+
+Code which runs on targets (``modules`` and ``module_utils``) must support all controller supported Python versions,
+as well as the additional Python versions supported only on targets:
+
+- 3.8
+- 3.7
+- 3.6
+- 3.5
+- 2.7
+
+.. note::
+
+ Ansible Collections can be
+ `configured <https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/config/config.yml>`_
+ to support a subset of the target-only Python versions.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/configure-remoting-ps1.rst b/docs/docsite/rst/dev_guide/testing/sanity/configure-remoting-ps1.rst
new file mode 100644
index 0000000..e83bc78
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/configure-remoting-ps1.rst
@@ -0,0 +1,5 @@
+configure-remoting-ps1
+======================
+
+The file ``examples/scripts/ConfigureRemotingForAnsible.ps1`` is required and must be a regular file.
+It is used by external automated processes and cannot be moved, renamed or replaced with a symbolic link.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/deprecated-config.rst b/docs/docsite/rst/dev_guide/testing/sanity/deprecated-config.rst
new file mode 100644
index 0000000..950805a
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/deprecated-config.rst
@@ -0,0 +1,6 @@
+:orphan:
+
+deprecated-config
+=================
+
+``DOCUMENTATION`` config is scheduled for removal
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/docs-build.rst b/docs/docsite/rst/dev_guide/testing/sanity/docs-build.rst
new file mode 100644
index 0000000..23f3c55
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/docs-build.rst
@@ -0,0 +1,4 @@
+docs-build
+==========
+
+Verifies that ``make singlehtmldocs`` in ``docs/docsite/`` completes without errors.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/empty-init.rst b/docs/docsite/rst/dev_guide/testing/sanity/empty-init.rst
new file mode 100644
index 0000000..e87bb71
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/empty-init.rst
@@ -0,0 +1,10 @@
+empty-init
+==========
+
+The ``__init__.py`` files under the following directories must be empty. For some of these (modules
+and tests), ``__init__.py`` files containing code would not be used anyway. For others (module_utils), we want to
+preserve the possibility of using Python namespace packages, which an empty ``__init__.py`` allows.
+
+- ``lib/ansible/modules/``
+- ``lib/ansible/module_utils/``
+- ``test/units/``
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/future-import-boilerplate.rst b/docs/docsite/rst/dev_guide/testing/sanity/future-import-boilerplate.rst
new file mode 100644
index 0000000..658ef06
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/future-import-boilerplate.rst
@@ -0,0 +1,51 @@
+future-import-boilerplate
+=========================
+
+Most Python files should include the following boilerplate at the top of the file, right after the
+comment header:
+
+.. code-block:: python
+
+ from __future__ import (absolute_import, division, print_function)
+
+This uses Python 3 semantics for absolute versus relative imports, division, and print, allowing
+us to write code which is portable between Python 2 and Python 3 by following the Python 3 semantics.
+
+
+absolute_import
+---------------
+
+When Python 2 encounters an import of a name in a file like ``import copy`` it attempts to load
+``copy.py`` from the same directory as the importing file. This can cause problems if there is a Python
+file of that name in the directory and also a Python module in ``sys.path`` with that same name. In
+that case, Python 2 would load the one in the same directory and there would be no way to load the
+one on ``sys.path``. Python 3 fixes this by making imports absolute by default. ``import copy``
+will find ``copy.py`` from ``sys.path``. If you want to import ``copy.py`` from the same directory,
+the code needs to be changed to perform a relative import: ``from . import copy``.
+
+.. seealso::
+
+ * `Absolute and relative imports <https://www.python.org/dev/peps/pep-0328>`_
+
+division
+--------
+
+In Python 2, the division operator (``/``) returns integer values when used with integers. If there
+is a remainder, the fractional part is discarded (known as `floor division`). In Python 3, the division
+operator (``/``) always returns a floating point number. Code that needs to calculate the integer
+portion of the quotient needs to switch to using the floor division operator (``//``) instead.
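A quick illustration of the two operators under Python 3 semantics:

```python
# With `from __future__ import division` (the default behavior in Python 3),
# `/` is true division and `//` is floor division.
print(7 / 2)    # 3.5
print(7 // 2)   # 3
print(-7 // 2)  # -4 (floors toward negative infinity, not toward zero)
```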
+
+.. seealso::
+
+ * `Changing the division operator <https://www.python.org/dev/peps/pep-0238>`_
+
+print_function
+--------------
+
+In Python 2, :func:`python:print` is a keyword. In Python 3, :func:`python3:print` is a function with different
+parameters. Using this ``__future__`` allows using the Python 3 print semantics everywhere.
+
+.. seealso::
+
+ * `Make print a function <https://www.python.org/dev/peps/pep-3105>`_
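For example, the function form of ``print`` accepts keyword arguments such as ``sep``, ``end``, and ``file``, which the Python 2 print statement could not express:

```python
from __future__ import print_function
import io

# Capture output in a buffer by passing it as the `file` keyword argument.
buf = io.StringIO()
print('host1', 'host2', sep=', ', file=buf)
print(buf.getvalue())  # prints "host1, host2"
```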
+
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/ignores.rst b/docs/docsite/rst/dev_guide/testing/sanity/ignores.rst
new file mode 100644
index 0000000..69190c8
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/ignores.rst
@@ -0,0 +1,105 @@
+ignores
+=======
+
+Sanity tests for individual files can be skipped, and specific errors can be ignored.
+
+When to Ignore Errors
+---------------------
+
+Sanity tests are designed to improve code quality and identify common issues with content.
+When issues are identified during development, those issues should be corrected.
+
+As development of Ansible continues, sanity tests are expanded to detect issues that previous releases could not.
+To allow time for existing content to be updated to pass newer tests, ignore entries can be added.
+New content should not use ignores for existing sanity tests.
+
+When code is fixed to resolve sanity test errors, any relevant ignores must also be removed.
+If the ignores are not removed, this will be reported as an unnecessary ignore error.
+This is intended to prevent future regressions due to the same error recurring after being fixed.
+
+When to Skip Tests
+------------------
+
+Although rare, there are reasons for skipping a sanity test instead of ignoring the errors it reports.
+
+If a sanity test results in a traceback when processing content, that error cannot be ignored.
+If this occurs, open a new `bug report <https://github.com/ansible/ansible/issues/new?template=bug_report.md>`_ for the issue so it can be fixed.
+If the traceback occurs due to an issue with the content, that issue should be fixed.
+If the content is correct, the test will need to be skipped until the bug in the sanity test is fixed.
+
+ Caution should be used when skipping sanity tests instead of ignoring them.
+ Since the test is skipped entirely, resolution of the issue will not be automatically detected.
+ This will prevent regression detection from working once the issue has been resolved.
+ For this reason it is a good idea to periodically review skipped entries manually to verify they are required.
+
+Ignore File Location
+--------------------
+
+The location of the ignore file depends on the type of content being tested.
+
+Ansible Collections
+^^^^^^^^^^^^^^^^^^^
+
+Since sanity tests change between Ansible releases, a separate ignore file is needed for each Ansible major release.
+
+The filename is ``tests/sanity/ignore-X.Y.txt`` where ``X.Y`` is the Ansible release being used to test the collection.
+
+Maintaining a separate file for each Ansible release allows a collection to pass tests for multiple versions of Ansible.
+
+Ansible
+^^^^^^^
+
+When testing Ansible, all ignores are placed in the ``test/sanity/ignore.txt`` file.
+
+Only a single file is needed because ``ansible-test`` is developed and released as a part of Ansible itself.
+
+Ignore File Format
+------------------
+
+The ignore file contains one entry per line.
+Each line consists of two columns, separated by a single space.
+Comments may be added at the end of an entry, starting with a hash (``#``) character, which can be preceded by zero or more spaces.
+Blank lines and comment-only lines are not allowed.
+
+The first column specifies the file path that the entry applies to.
+File paths must be relative to the root of the content being tested.
+This is either the Ansible source or an Ansible collection.
+File paths cannot contain a space or the hash (``#``) character.
+
+The second column specifies the sanity test that the entry applies to.
+This will be the name of the sanity test.
+If the sanity test is specific to a version of Python, the name will include a dash (``-``) and the relevant Python version.
+If the named test uses error codes then the error code to ignore must be appended to the name of the test, separated by a colon (``:``).
+
+Below are some example ignore entries for an Ansible collection:
+
+.. code-block:: text
+
+ roles/my_role/files/my_script.sh shellcheck:SC2154 # ignore undefined variable
+ plugins/modules/my_module.py validate-modules:missing-gplv3-license # ignore license check
+ plugins/modules/my_module.py import-3.8 # needs update to support collections.abc on Python 3.8+
+
+It is also possible to skip a sanity test for a specific file.
+This is done by adding ``!skip`` after the sanity test name in the second column.
+When this is done, no error code is included, even if the sanity test uses error codes.
+
+Below are some example skip entries for an Ansible collection:
+
+.. code-block:: text
+
+ plugins/module_utils/my_util.py validate-modules!skip # waiting for bug fix in module validator
+ plugins/lookup/my_plugin.py compile-2.6!skip # Python 2.6 is not supported on the controller
+
+See the full list of :ref:`sanity tests <all_sanity_tests>`, which describes the various tests and how to fix identified issues.
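The column rules described above can be sketched with a small parser. This is a hypothetical helper for illustration only, not part of ``ansible-test``:

```python
def parse_ignore_entry(line):
    """Split one ignore file entry into (path, test, error_code, skip)."""
    # Strip a trailing comment introduced by a hash character.
    entry = line.split('#', 1)[0].rstrip()
    # The first column is the file path, the second the test specification.
    path, test_spec = entry.split(' ', 1)
    # A trailing !skip means the test is skipped entirely (no error code).
    skip = test_spec.endswith('!skip')
    if skip:
        test_spec = test_spec[:-len('!skip')]
    # An error code, if any, follows the test name after a colon.
    test, _, error_code = test_spec.partition(':')
    return path, test, error_code or None, skip

print(parse_ignore_entry(
    'roles/my_role/files/my_script.sh shellcheck:SC2154 # ignore undefined variable'))
```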
+
+Ignore File Errors
+------------------
+
+There are various errors that can be reported for the ignore file itself:
+
+- syntax errors parsing the ignore file
+- references to a file path that does not exist
+- references to a sanity test that does not exist
+- ignoring an error that does not occur
+- ignoring a file which is skipped
+- duplicate entries
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/import.rst b/docs/docsite/rst/dev_guide/testing/sanity/import.rst
new file mode 100644
index 0000000..6a5d329
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/import.rst
@@ -0,0 +1,126 @@
+import
+======
+
+Ansible :ref:`allows unchecked imports<allowed_unchecked_imports>` of some libraries from specific directories.
+Importing any other Python library requires :ref:`handling import errors<handling_import_errors>`.
+This enables support for sanity tests such as :ref:`testing_validate-modules` and provides better error messages to the user.
+
+.. _handling_import_errors:
+
+Handling import errors
+----------------------
+
+In modules
+^^^^^^^^^^
+
+Instead of using ``import another_library``:
+
+.. code-block:: python
+
+ import traceback
+
+ from ansible.module_utils.basic import missing_required_lib
+
+ try:
+ import another_library
+ except ImportError:
+ HAS_ANOTHER_LIBRARY = False
+ ANOTHER_LIBRARY_IMPORT_ERROR = traceback.format_exc()
+ else:
+ HAS_ANOTHER_LIBRARY = True
+ ANOTHER_LIBRARY_IMPORT_ERROR = None
+
+.. note::
+
+ The ``missing_required_lib`` import above will be used below.
+
+Then in the module code:
+
+.. code-block:: python
+
+ module = AnsibleModule(...)
+
+ if not HAS_ANOTHER_LIBRARY:
+ module.fail_json(
+ msg=missing_required_lib('another_library'),
+ exception=ANOTHER_LIBRARY_IMPORT_ERROR)
+
+In plugins
+^^^^^^^^^^
+
+Instead of using ``import another_library``:
+
+.. code-block:: python
+
+ try:
+ import another_library
+ except ImportError as imp_exc:
+ ANOTHER_LIBRARY_IMPORT_ERROR = imp_exc
+ else:
+ ANOTHER_LIBRARY_IMPORT_ERROR = None
+
+Then in the plugin code, for example in ``__init__`` of the plugin:
+
+.. code-block:: python
+
+ if ANOTHER_LIBRARY_IMPORT_ERROR:
+ raise AnsibleError('another_library must be installed to use this plugin') from ANOTHER_LIBRARY_IMPORT_ERROR
+
+When used as base classes
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. important::
+
+ This solution builds on the previous two examples.
+ Make sure to pick the appropriate one before continuing with this solution.
+
+Sometimes an import is used in a base class, for example:
+
+.. code-block:: python
+
+ from another_library import UsefulThing
+
+ class CustomThing(UsefulThing):
+ pass
+
+One option is to make the entire class definition conditional:
+
+.. code-block:: python
+
+ if not ANOTHER_LIBRARY_IMPORT_ERROR:
+ class CustomThing(UsefulThing):
+ pass
+
+Another option is to define a substitute base class by modifying the exception handler:
+
+.. code-block:: python
+
+ try:
+ from another_library import UsefulThing
+ except ImportError:
+ class UsefulThing:
+ pass
+ ...
+
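Combining the two fragments above into a self-contained sketch (``another_library`` and ``UsefulThing`` are placeholder names, and the import is expected to fail here):

```python
try:
    from another_library import UsefulThing  # placeholder; not installed here
except ImportError as imp_exc:
    ANOTHER_LIBRARY_IMPORT_ERROR = imp_exc

    class UsefulThing:  # substitute base class so the file still imports
        pass
else:
    ANOTHER_LIBRARY_IMPORT_ERROR = None


# The derived class can now be defined unconditionally.
class CustomThing(UsefulThing):
    pass
```

Code that actually needs the real base class should still check ``ANOTHER_LIBRARY_IMPORT_ERROR`` at runtime before using ``CustomThing``.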
+.. _allowed_unchecked_imports:
+
+Allowed unchecked imports
+-------------------------
+
+Ansible allows the following unchecked imports from these specific directories:
+
+* ansible-core:
+
+ * For ``lib/ansible/modules/`` and ``lib/ansible/module_utils/``, unchecked imports are only allowed from the Python standard library;
+ * For ``lib/ansible/plugins/``, unchecked imports are only allowed from the Python standard library, from public dependencies of ansible-core, and from ansible-core itself;
+
+* collections:
+
+ * For ``plugins/modules/`` and ``plugins/module_utils/``, unchecked imports are only allowed from the Python standard library;
+ * For other directories in ``plugins/`` (see `the community collection requirements <https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst#modules-plugins>`_ for a list), unchecked imports are only allowed from the Python standard library, from public dependencies of ansible-core, and from ansible-core itself.
+
+Public dependencies of ansible-core are:
+
+ * Jinja2
+ * PyYAML
+ * MarkupSafe (as a dependency of Jinja2)
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/line-endings.rst b/docs/docsite/rst/dev_guide/testing/sanity/line-endings.rst
new file mode 100644
index 0000000..d56cfc1
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/line-endings.rst
@@ -0,0 +1,4 @@
+line-endings
+============
+
+All files must use ``\n`` for line endings instead of ``\r\n``.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/metaclass-boilerplate.rst b/docs/docsite/rst/dev_guide/testing/sanity/metaclass-boilerplate.rst
new file mode 100644
index 0000000..c7327b3
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/metaclass-boilerplate.rst
@@ -0,0 +1,23 @@
+metaclass-boilerplate
+=====================
+
+Most Python files should include the following boilerplate at the top of the file, right after the
+comment header and ``from __future__ import``:
+
+.. code-block:: python
+
+ __metaclass__ = type
+
+
+Python 2 has "new-style classes" and "old-style classes" whereas Python 3 only has new-style classes.
+Adding the ``__metaclass__ = type`` boilerplate makes every class defined in that file into
+a new-style class as well.
+
+.. code-block:: python
+
+ from __future__ import absolute_import, division, print_function
+ __metaclass__ = type
+
+ class Foo:
+ # This is a new-style class even on Python 2 because of the __metaclass__
+ pass
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/mypy.rst b/docs/docsite/rst/dev_guide/testing/sanity/mypy.rst
new file mode 100644
index 0000000..9eb46ba
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/mypy.rst
@@ -0,0 +1,14 @@
+mypy
+====
+
+The ``mypy`` static type checker is used to check the following code against each Python version supported by the controller:
+
+ * ``lib/ansible/``
+ * ``test/lib/ansible_test/_internal/``
+
+Additionally, the following code is checked against Python versions supported only on managed nodes:
+
+ * ``lib/ansible/modules/``
+ * ``lib/ansible/module_utils/``
+
+See `the mypy documentation <https://mypy.readthedocs.io/en/stable/>`_
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-assert.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-assert.rst
new file mode 100644
index 0000000..489f917
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-assert.rst
@@ -0,0 +1,16 @@
+no-assert
+=========
+
+Do not use ``assert`` in production Ansible Python code. When running Python
+with optimizations enabled, Python removes ``assert`` statements, potentially
+allowing for unexpected behavior throughout the Ansible code base.
+
+Instead of using ``assert`` you should utilize simple ``if`` statements,
+that result in raising an exception. There is a new exception called
+``AnsibleAssertionError`` that inherits from ``AnsibleError`` and
+``AssertionError``. When possible, utilize a more specific exception
+than ``AnsibleAssertionError``.
+
+Modules will not have access to ``AnsibleAssertionError`` and should instead
+raise ``AssertionError``, a more specific exception, or just use
+``module.fail_json`` at the failure point.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-basestring.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-basestring.rst
new file mode 100644
index 0000000..f2fea13
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-basestring.rst
@@ -0,0 +1,11 @@
+no-basestring
+=============
+
+Do not use ``isinstance(s, basestring)`` as ``basestring`` has been removed in
+Python 3. You can import ``string_types``, ``binary_type``, or ``text_type``
+from ``ansible.module_utils.six`` and then use ``isinstance(s, string_types)``
+or ``isinstance(s, (binary_type, text_type))`` instead.
+
+If this is part of code to convert a string to a particular type,
+``ansible.module_utils.common.text.converters`` contains several functions
+that may be even better for you: ``to_text``, ``to_bytes``, and ``to_native``.
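A sketch of the check on Python 3, where ``six.string_types`` resolves to ``(str,)``. The assignments below stand in for the real ``ansible.module_utils.six`` import so the example is self-contained; the ``describe`` helper is illustrative only:

```python
# On Python 3, these are what the six names resolve to.
string_types = (str,)
binary_type = bytes
text_type = str


def describe(value):
    # The six-based replacement for isinstance(value, basestring).
    if isinstance(value, string_types):
        return 'text'
    if isinstance(value, binary_type):
        return 'bytes'
    return 'other'
```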
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-dict-iteritems.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-dict-iteritems.rst
new file mode 100644
index 0000000..e231c79
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-dict-iteritems.rst
@@ -0,0 +1,16 @@
+no-dict-iteritems
+=================
+
+The ``dict.iteritems`` method has been removed in Python 3. There are two recommended alternatives:
+
+.. code-block:: python
+
+ for KEY, VALUE in DICT.items():
+ pass
+
+.. code-block:: python
+
+ from ansible.module_utils.six import iteritems
+
+ for KEY, VALUE in iteritems(DICT):
+ pass
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-dict-iterkeys.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-dict-iterkeys.rst
new file mode 100644
index 0000000..9dc4a97
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-dict-iterkeys.rst
@@ -0,0 +1,9 @@
+no-dict-iterkeys
+================
+
+The ``dict.iterkeys`` method has been removed in Python 3. Use the following instead:
+
+.. code-block:: python
+
+ for KEY in DICT:
+ pass
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-dict-itervalues.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-dict-itervalues.rst
new file mode 100644
index 0000000..979450e
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-dict-itervalues.rst
@@ -0,0 +1,16 @@
+no-dict-itervalues
+==================
+
+The ``dict.itervalues`` method has been removed in Python 3. There are two recommended alternatives:
+
+.. code-block:: python
+
+ for VALUE in DICT.values():
+ pass
+
+.. code-block:: python
+
+ from ansible.module_utils.six import itervalues
+
+ for VALUE in itervalues(DICT):
+ pass
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-get-exception.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-get-exception.rst
new file mode 100644
index 0000000..67f1646
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-get-exception.rst
@@ -0,0 +1,28 @@
+no-get-exception
+================
+
+We created a function, ``ansible.module_utils.pycompat24.get_exception`` to
+help retrieve exceptions in a manner compatible with Python 2.4 through
+Python 3.6. We no longer support Python 2.4 and Python 2.5 so this is
+extraneous and we want to deprecate the function. Porting code should look
+something like this:
+
+.. code-block:: python
+
+ # Unfixed code:
+ try:
+ raise IOError('test')
+ except IOError:
+ e = get_exception()
+ do_something(e)
+ except:
+ e = get_exception()
+ do_something_else(e)
+
+ # After fixing:
+ try:
+ raise IOError('test')
+    except IOError as e:
+ do_something(e)
+ except Exception as e:
+ do_something_else(e)
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-illegal-filenames.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-illegal-filenames.rst
new file mode 100644
index 0000000..6e6f565
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-illegal-filenames.rst
@@ -0,0 +1,61 @@
+no-illegal-filenames
+====================
+
+Files and directories should not contain illegal characters or names so that
+Ansible can be checked out on any operating system.
+
+Illegal Characters
+------------------
+
+The following characters are not allowed in any part of the file or
+directory name:
+
+* ``<``
+* ``>``
+* ``:``
+* ``"``
+* ``/``
+* ``\``
+* ``|``
+* ``?``
+* ``*``
+* Any character whose integer representation is in the range 0 through 31, such as ``\n``
+
+The following characters are not allowed as the last character of a
+file or directory name:
+
+* ``.``
+* ``" "`` (just the space character)
+
+Illegal Names
+-------------
+
+The following names are not allowed as the name of a file or
+directory, excluding the extension:
+
+* ``CON``
+* ``PRN``
+* ``AUX``
+* ``NUL``
+* ``COM1``
+* ``COM2``
+* ``COM3``
+* ``COM4``
+* ``COM5``
+* ``COM6``
+* ``COM7``
+* ``COM8``
+* ``COM9``
+* ``LPT1``
+* ``LPT2``
+* ``LPT3``
+* ``LPT4``
+* ``LPT5``
+* ``LPT6``
+* ``LPT7``
+* ``LPT8``
+* ``LPT9``
+
+For example, the files ``folder/COM1`` and ``folder/COM1.txt`` are illegal, but
+``folder/COM1-file`` and ``folder/COM1-file.txt`` are allowed.
+
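The rules above can be sketched as a small validation function. This is illustrative only; the real implementation lives in ansible-test and the names here are hypothetical:

```python
# Characters forbidden anywhere in a name, per the rules above.
ILLEGAL_CHARS = set('<>:"/\\|?*') | {chr(c) for c in range(32)}
# Characters forbidden as the last character of a name.
ILLEGAL_END_CHARS = {'.', ' '}
# Reserved names, checked with the extension stripped.
ILLEGAL_NAMES = {'CON', 'PRN', 'AUX', 'NUL'}
ILLEGAL_NAMES |= {'COM%d' % i for i in range(1, 10)}
ILLEGAL_NAMES |= {'LPT%d' % i for i in range(1, 10)}


def is_legal_name(name):
    """Check a single file or directory name (no path separators)."""
    if any(ch in ILLEGAL_CHARS for ch in name):
        return False
    if name and name[-1] in ILLEGAL_END_CHARS:
        return False
    # The reserved-name check ignores everything after the first dot.
    return name.split('.', 1)[0].upper() not in ILLEGAL_NAMES
```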
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-main-display.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-main-display.rst
new file mode 100644
index 0000000..271f88f
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-main-display.rst
@@ -0,0 +1,14 @@
+no-main-display
+===============
+
+As of Ansible 2.8, ``Display`` should no longer be imported from ``__main__``.
+
+``Display`` is now a singleton and should be utilized like the following:
+
+.. code-block:: python
+
+ from ansible.utils.display import Display
+ display = Display()
+
+There is no longer a need to attempt ``from __main__ import display`` inside
+a ``try/except`` block.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst
new file mode 100644
index 0000000..50dc7ba
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst
@@ -0,0 +1,4 @@
+no-smart-quotes
+===============
+
+Smart quotes (``”“‘’``) should not be used. Use plain ASCII quotes (``"'``) instead.
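One quick way to normalize offending characters is ``str.translate``. This is a cleanup sketch, not the actual test implementation (which only detects the characters):

```python
# Map each smart quote to its plain ASCII equivalent.
SMART_QUOTE_MAP = {
    ord('\u201c'): '"',   # left double quotation mark
    ord('\u201d'): '"',   # right double quotation mark
    ord('\u2018'): "'",   # left single quotation mark
    ord('\u2019'): "'",   # right single quotation mark
}


def fix_quotes(text):
    return text.translate(SMART_QUOTE_MAP)
```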
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-tests-as-filters.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-tests-as-filters.rst
new file mode 100644
index 0000000..0c1f99a
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-tests-as-filters.rst
@@ -0,0 +1,12 @@
+:orphan:
+
+no-tests-as-filters
+===================
+
+Using Ansible provided Jinja2 tests as filters will be removed in Ansible 2.9.
+
+Prior to Ansible 2.5, Jinja2 tests included within Ansible were most often used as filters. The large difference in use is that filters are referenced as ``variable | filter_name`` while Jinja2 tests are referenced as ``variable is test_name``.
+
+Jinja2 tests are used for comparisons, whereas filters are used for data manipulation, and have different applications in Jinja2. This change is to help differentiate the concepts for a better understanding of Jinja2, and where each can be appropriately used.
+
+As of Ansible 2.5 using an Ansible provided Jinja2 test with filter syntax will display a deprecation error.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-underscore-variable.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-underscore-variable.rst
new file mode 100644
index 0000000..5174a43
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-underscore-variable.rst
@@ -0,0 +1,30 @@
+:orphan:
+
+no-underscore-variable
+======================
+
+In the future, Ansible may use the identifier ``_`` to internationalize its
+message strings. To be ready for that, we need to make sure that there are
+no conflicting identifiers defined in the code base.
+
+In common practice, ``_`` is frequently used as a dummy variable (a variable
+to receive a value from a function where the value is useless and never used).
+In Ansible, we're using the identifier ``dummy`` for this purpose instead.
+
+Example of unfixed code:
+
+.. code-block:: python
+
+ for _ in range(0, retries):
+ success = retry_thing()
+ if success:
+ break
+
+Example of fixed code:
+
+.. code-block:: python
+
+ for dummy in range(0, retries):
+ success = retry_thing()
+ if success:
+ break
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-unicode-literals.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-unicode-literals.rst
new file mode 100644
index 0000000..f8ca1d2
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-unicode-literals.rst
@@ -0,0 +1,16 @@
+no-unicode-literals
+===================
+
+The use of :code:`from __future__ import unicode_literals` has been deemed an anti-pattern. The
+problems with it are:
+
+* One cannot jump into the middle of a file and know whether a bare literal string is
+  a byte string or text string. The programmer has to first check the top of the file to see if the
+  import is there.
+* It removes the ability to define native strings (a string which should be a byte string on Python 2
+  and a text string on Python 3) with a string literal.
+* It makes for more context switching. A programmer could be reading one file which has
+  ``unicode_literals`` and know that bare string literals are text strings, but then switch to another
+  file (perhaps tracing program execution into a third-party library) and have to switch their
+  understanding of what bare string literals are.
+
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-unwanted-files.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-unwanted-files.rst
new file mode 100644
index 0000000..3d76324
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-unwanted-files.rst
@@ -0,0 +1,13 @@
+no-unwanted-files
+=================
+
+Specific file types are allowed in certain directories:
+
+- ``lib`` - All content must reside in the ``lib/ansible`` directory.
+
+- ``lib/ansible`` - Only source code with one of the following extensions is allowed:
+
+ - ``*.cs`` - C#
+ - ``*.ps1`` - PowerShell
+ - ``*.psm1`` - PowerShell
+ - ``*.py`` - Python
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-wildcard-import.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-wildcard-import.rst
new file mode 100644
index 0000000..fdaf07b
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-wildcard-import.rst
@@ -0,0 +1,31 @@
+:orphan:
+
+no-wildcard-import
+==================
+
+Using :code:`import *` is a bad habit which pollutes your namespace, hinders
+debugging, and interferes with static analysis of code. For those reasons, we
+want to limit the use of :code:`import *` in the Ansible code base. Change the
+code to import the specific names that you need instead.
+
+Examples of unfixed code:
+
+.. code-block:: python
+
+ from ansible.module_utils.six import *
+ if isinstance(variable, string_types):
+ do_something(variable)
+
+ from ansible.module_utils.basic import *
+ module = AnsibleModule()
+
+Examples of fixed code:
+
+.. code-block:: python
+
+ from ansible.module_utils import six
+ if isinstance(variable, six.string_types):
+ do_something(variable)
+
+ from ansible.module_utils.basic import AnsibleModule
+ module = AnsibleModule()
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/obsolete-files.rst b/docs/docsite/rst/dev_guide/testing/sanity/obsolete-files.rst
new file mode 100644
index 0000000..cb23746
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/obsolete-files.rst
@@ -0,0 +1,14 @@
+obsolete-files
+==============
+
+Directories in the Ansible source tree are sometimes made obsolete.
+Files should not exist in these directories.
+The new location (if any) is dependent on which directory has been made obsolete.
+
+Below are some of the obsolete directories and their new locations:
+
+- All of ``test/runner/`` is now under ``test/lib/ansible_test/`` instead. The organization of files in the new directory has changed.
+- Most subdirectories of ``test/sanity/`` (with some exceptions) are now under ``test/lib/ansible_test/_util/controller/sanity/`` instead.
+
+This error occurs most frequently for open pull requests which add or modify files in directories which are now obsolete.
+Make sure the branch you are working from is current so that changes can be made in the correct location.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/package-data.rst b/docs/docsite/rst/dev_guide/testing/sanity/package-data.rst
new file mode 100644
index 0000000..220872d
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/package-data.rst
@@ -0,0 +1,5 @@
+package-data
+============
+
+Verifies that the combination of ``MANIFEST.in`` and ``package_data`` from ``setup.py``
+properly installs data files from within ``lib/ansible``.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/pep8.rst b/docs/docsite/rst/dev_guide/testing/sanity/pep8.rst
new file mode 100644
index 0000000..9424bda
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/pep8.rst
@@ -0,0 +1,24 @@
+.. _testing_pep8:
+
+pep8
+====
+
+Python static analysis for PEP 8 style guideline compliance.
+
+`PEP 8`_ style guidelines are enforced by `pycodestyle`_ on all python files in the repository by default.
+
+Running locally
+-----------------
+
+The `PEP 8`_ check can be run locally as follows:
+
+.. code-block:: shell
+
+ ansible-test sanity --test pep8 [file-or-directory-path-to-check] ...
+
+
+
+.. _PEP 8: https://www.python.org/dev/peps/pep-0008/
+.. _pycodestyle: https://pypi.org/project/pycodestyle/
+
+
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/pslint.rst b/docs/docsite/rst/dev_guide/testing/sanity/pslint.rst
new file mode 100644
index 0000000..baa4fa0
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/pslint.rst
@@ -0,0 +1,4 @@
+pslint
+======
+
+PowerShell static analysis for common programming errors using `PSScriptAnalyzer <https://github.com/PowerShell/PSScriptAnalyzer/>`_.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/pylint-ansible-test.rst b/docs/docsite/rst/dev_guide/testing/sanity/pylint-ansible-test.rst
new file mode 100644
index 0000000..a80ddc1
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/pylint-ansible-test.rst
@@ -0,0 +1,8 @@
+:orphan:
+
+pylint-ansible-test
+===================
+
+Python static analysis for common programming errors.
+
+A more strict set of rules applied to ``ansible-test``.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/pylint.rst b/docs/docsite/rst/dev_guide/testing/sanity/pylint.rst
new file mode 100644
index 0000000..2b2ef9e
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/pylint.rst
@@ -0,0 +1,4 @@
+pylint
+======
+
+Python static analysis for common programming errors.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/replace-urlopen.rst b/docs/docsite/rst/dev_guide/testing/sanity/replace-urlopen.rst
new file mode 100644
index 0000000..705195c
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/replace-urlopen.rst
@@ -0,0 +1,4 @@
+replace-urlopen
+===============
+
+Use ``open_url`` from ``module_utils`` instead of ``urlopen``.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/required-and-default-attributes.rst b/docs/docsite/rst/dev_guide/testing/sanity/required-and-default-attributes.rst
new file mode 100644
index 0000000..573c361
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/required-and-default-attributes.rst
@@ -0,0 +1,5 @@
+required-and-default-attributes
+===============================
+
+Use only one of ``default`` or ``required`` with ``FieldAttribute``.
+
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/rstcheck.rst b/docs/docsite/rst/dev_guide/testing/sanity/rstcheck.rst
new file mode 100644
index 0000000..8fcbbce
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/rstcheck.rst
@@ -0,0 +1,4 @@
+rstcheck
+========
+
+Check reStructuredText files for syntax and formatting issues.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/runtime-metadata.rst b/docs/docsite/rst/dev_guide/testing/sanity/runtime-metadata.rst
new file mode 100644
index 0000000..1f3c32a
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/runtime-metadata.rst
@@ -0,0 +1,7 @@
+runtime-metadata.yml
+====================
+
+Validates the schema for:
+
+* ansible-core's ``lib/ansible/config/ansible_builtin_runtime.yml``
+* collection's ``meta/runtime.yml``
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/sanity-docs.rst b/docs/docsite/rst/dev_guide/testing/sanity/sanity-docs.rst
new file mode 100644
index 0000000..34265c3
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/sanity-docs.rst
@@ -0,0 +1,4 @@
+sanity-docs
+===========
+
+Documentation for each ``ansible-test sanity`` test is required.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/shebang.rst b/docs/docsite/rst/dev_guide/testing/sanity/shebang.rst
new file mode 100644
index 0000000..cff2aa0
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/shebang.rst
@@ -0,0 +1,16 @@
+shebang
+=======
+
+Most executable files should only use one of the following shebangs:
+
+- ``#!/bin/sh``
+- ``#!/bin/bash``
+- ``#!/usr/bin/make``
+- ``#!/usr/bin/env python``
+- ``#!/usr/bin/env bash``
+
+NOTE: For ``#!/bin/bash``, any of the options ``eux`` may also be used, such as ``#!/bin/bash -eux``.
+
+This does not apply to Ansible modules, which should not be executable and must always use ``#!/usr/bin/python``.
+
+Some exceptions are permitted. Ask if you have questions.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/shellcheck.rst b/docs/docsite/rst/dev_guide/testing/sanity/shellcheck.rst
new file mode 100644
index 0000000..446ee1e
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/shellcheck.rst
@@ -0,0 +1,4 @@
+shellcheck
+==========
+
+Static code analysis for shell scripts using the excellent `shellcheck <https://www.shellcheck.net/>`_ tool.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/symlinks.rst b/docs/docsite/rst/dev_guide/testing/sanity/symlinks.rst
new file mode 100644
index 0000000..017209b
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/symlinks.rst
@@ -0,0 +1,6 @@
+symlinks
+========
+
+Symbolic links are only permitted when they point to files that exist, to ensure proper tarball generation during a release.
+
+If other types of symlinks are needed for tests they must be created as part of the test.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/test-constraints.rst b/docs/docsite/rst/dev_guide/testing/sanity/test-constraints.rst
new file mode 100644
index 0000000..36ceb36
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/test-constraints.rst
@@ -0,0 +1,4 @@
+test-constraints
+================
+
+Constraints for test requirements should be in ``test/lib/ansible_test/_data/requirements/constraints.txt``.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/update-bundled.rst b/docs/docsite/rst/dev_guide/testing/sanity/update-bundled.rst
new file mode 100644
index 0000000..d8f1938
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/update-bundled.rst
@@ -0,0 +1,31 @@
+:orphan:
+
+update-bundled
+==============
+
+Check whether any of our known bundled code needs to be updated for a new upstream release.
+
+This test can error in the following ways:
+
+* The bundled code is out of date with regard to the latest release on PyPI. Update the code
+  to the new version and update the version in ``_BUNDLED_METADATA`` to solve this.
+
+* The code is lacking a ``_BUNDLED_METADATA`` variable. This typically happens when a bundled version
+  is updated and we forget to add a ``_BUNDLED_METADATA`` variable to the updated file. Once that is
+  added, this error should go away.
+
+* A file has a ``_BUNDLED_METADATA`` variable but the file isn't specified in
+  :file:`test/sanity/code-smell/update-bundled.py`. This typically happens when a new bundled
+  library is added. Add the file to the ``get_bundled_libs()`` function in the ``update-bundled.py``
+  test script to solve this error.
+
+``_BUNDLED_METADATA`` has the following fields:
+
+:pypi_name: Name of the bundled package on PyPI
+
+:version: Version of the package that we are including here
+
+:version_constraints: Optional PEP 440 specifier for the version range that we are bundling.
+                      Currently, the only valid use of this is to follow a version that is
+                      compatible with the Python stdlib when newer versions of the PyPI package
+                      implement a new API.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/use-argspec-type-path.rst b/docs/docsite/rst/dev_guide/testing/sanity/use-argspec-type-path.rst
new file mode 100644
index 0000000..e06d83d
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/use-argspec-type-path.rst
@@ -0,0 +1,10 @@
+use-argspec-type-path
+=====================
+
+The AnsibleModule argument_spec knows of several types beyond the standard python types. One of
+these is ``path``. When used, type ``path`` ensures that an argument is a string and expands any
+shell variables and tilde characters.
+
+This test looks for use of :func:`os.path.expanduser <python:os.path.expanduser>` in modules. When found, it tells the user to
+replace it with ``type='path'`` in the module's argument_spec or list it as a false positive in the
+test.
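Roughly speaking, the conversion performed by ``type='path'`` expands environment variables and then a leading tilde. This is a sketch of that behavior, not the exact ansible-core implementation (the helper name is hypothetical):

```python
import os


def to_path(value):
    # Roughly what type='path' performs: expand shell variables, then '~'.
    return os.path.expanduser(os.path.expandvars(value))
```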
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/use-compat-six.rst b/docs/docsite/rst/dev_guide/testing/sanity/use-compat-six.rst
new file mode 100644
index 0000000..1f41500
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/use-compat-six.rst
@@ -0,0 +1,4 @@
+use-compat-six
+==============
+
+Use ``six`` from ``module_utils`` instead of importing the ``six`` library directly.
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/validate-modules.rst b/docs/docsite/rst/dev_guide/testing/sanity/validate-modules.rst
new file mode 100644
index 0000000..dcb78c0
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/validate-modules.rst
@@ -0,0 +1,140 @@
+.. _testing_validate-modules:
+
+validate-modules
+================
+
+Analyze modules for common issues in code and documentation.
+
+.. contents::
+ :local:
+
+Usage
+------
+
+.. code:: shell
+
+ cd /path/to/ansible/source
+ source hacking/env-setup
+ ansible-test sanity --test validate-modules
+
+Help
+-----
+
+Type ``ansible-test sanity validate-modules -h`` to display help for using this sanity test.
+
+
+
+Extending validate-modules
+---------------------------
+
+The ``validate-modules`` tool has a `schema.py <https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/schema.py>`_ that is used to validate the YAML blocks, such as ``DOCUMENTATION`` and ``RETURN``.
+
+
+Codes
+------
+
+============================================================ ================== ==================== =========================================================================================
+ **Error Code**                                              **Type**           **Level**            **Sample Message**
+------------------------------------------------------------ ------------------ -------------------- -----------------------------------------------------------------------------------------
+ ansible-deprecated-module Documentation Error A module is deprecated and supposed to be removed in the current or an earlier Ansible version
+ collection-deprecated-module Documentation Error A module is deprecated and supposed to be removed in the current or an earlier collection version
+ ansible-deprecated-version Documentation Error A feature is deprecated and supposed to be removed in the current or an earlier Ansible version
+ ansible-module-not-initialized Syntax Error Execution of the module did not result in initialization of AnsibleModule
+ collection-deprecated-version Documentation Error A feature is deprecated and supposed to be removed in the current or an earlier collection version
+ deprecated-date Documentation Error A date before today appears as ``removed_at_date`` or in ``deprecated_aliases``
+ deprecation-mismatch Documentation Error Module marked as deprecated or removed in at least one of the filename, its metadata, or in DOCUMENTATION (setting DOCUMENTATION.deprecated for deprecation or removing all Documentation for removed) but not in all three places.
+ doc-choices-do-not-match-spec Documentation Error Value for "choices" from the argument_spec does not match the documentation
+ doc-choices-incompatible-type Documentation Error Choices value from the documentation is not compatible with type defined in the argument_spec
+ doc-default-does-not-match-spec Documentation Error Value for "default" from the argument_spec does not match the documentation
+ doc-default-incompatible-type Documentation Error Default value from the documentation is not compatible with type defined in the argument_spec
+ doc-elements-invalid Documentation Error Documentation specifies elements for argument, when "type" is not ``list``.
+ doc-elements-mismatch Documentation Error Argument_spec defines elements different than documentation does
+ doc-missing-type Documentation Error Documentation doesn't specify a type but argument in ``argument_spec`` use default type (``str``)
+ doc-required-mismatch Documentation Error argument in argument_spec is required but documentation says it is not, or vice versa
+ doc-type-does-not-match-spec Documentation Error Argument_spec defines type different than documentation does
+ documentation-error Documentation Error Unknown ``DOCUMENTATION`` error
+ documentation-syntax-error Documentation Error Invalid ``DOCUMENTATION`` schema
+ illegal-future-imports Imports Error Only the following ``from __future__`` imports are allowed: ``absolute_import``, ``division``, and ``print_function``.
+ import-before-documentation Imports Error Import found before documentation variables. All imports must appear below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``
+ import-error Documentation Error ``Exception`` attempting to import module for ``argument_spec`` introspection
+ import-placement Locations Warning Imports should be directly below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``
+ imports-improper-location Imports Error Imports should be directly below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``
+ incompatible-choices Documentation Error Choices value from the argument_spec is not compatible with type defined in the argument_spec
+ incompatible-default-type Documentation Error Default value from the argument_spec is not compatible with type defined in the argument_spec
+ invalid-argument-name Documentation Error Argument in argument_spec must not be one of 'message', 'syslog_facility' as it is used internally by Ansible Core Engine
+ invalid-argument-spec Documentation Error Argument in argument_spec must be a dictionary/hash when used
+ invalid-argument-spec-options Documentation Error Suboptions in argument_spec are invalid
+ invalid-documentation Documentation Error ``DOCUMENTATION`` is not valid YAML
+ invalid-documentation-markup Documentation Error ``DOCUMENTATION`` or ``RETURN`` contains invalid markup
+ invalid-documentation-options Documentation Error ``DOCUMENTATION.options`` must be a dictionary/hash when used
+ invalid-examples Documentation Error ``EXAMPLES`` is not valid YAML
+ invalid-extension Naming Error Official Ansible modules must have a ``.py`` extension for Python modules or a ``.ps1`` extension for PowerShell modules
+ invalid-module-schema Documentation Error ``AnsibleModule`` schema validation error
+ invalid-removal-version Documentation Error The version at which a feature is supposed to be removed cannot be parsed (for collections, it must be a `semantic version <https://semver.org/>`_)
+ invalid-requires-extension Naming Error Module ``#AnsibleRequires -CSharpUtil`` should not end in .cs, Module ``#Requires`` should not end in .psm1
+ missing-doc-fragment Documentation Error ``DOCUMENTATION`` fragment missing
+ missing-existing-doc-fragment Documentation Warning Pre-existing ``DOCUMENTATION`` fragment missing
+ missing-documentation Documentation Error No ``DOCUMENTATION`` provided
+ missing-examples Documentation Error No ``EXAMPLES`` provided
+ missing-gplv3-license Documentation Error GPLv3 license header not found
+ missing-module-utils-basic-import Imports Warning Did not find ``ansible.module_utils.basic`` import
+ missing-module-utils-import-csharp-requirements Imports Error No ``Ansible.ModuleUtils`` or C# Ansible util requirements/imports found
+ missing-powershell-interpreter Syntax Error Interpreter line is not ``#!powershell``
+ missing-python-interpreter Syntax Error Interpreter line is not ``#!/usr/bin/python``
+ missing-return Documentation Error No ``RETURN`` documentation provided
+ missing-return-legacy Documentation Warning No ``RETURN`` documentation provided for legacy module
+ missing-suboption-docs Documentation Error Argument in argument_spec has sub-options but documentation does not define sub-options
+ module-incorrect-version-added Documentation Error Module level ``version_added`` is incorrect
+ module-invalid-version-added Documentation Error Module level ``version_added`` is not a valid version number
+ module-utils-specific-import Imports Error ``module_utils`` imports should import specific components, not ``*``
+ multiple-utils-per-requires Imports Error ``Ansible.ModuleUtils`` requirements do not support multiple modules per statement
+ multiple-csharp-utils-per-requires Imports Error Ansible C# util requirements do not support multiple utils per statement
+ no-default-for-required-parameter Documentation Error Option is marked as required but specifies a default. Arguments with a default should not be marked as required
+ no-log-needed Parameters Error Option name suggests that the option contains a secret value, while ``no_log`` is not specified for this option in the argument spec. If this is a false positive, explicitly set ``no_log=False``
+ nonexistent-parameter-documented Documentation Error Argument is listed in DOCUMENTATION.options, but not accepted by the module
+ option-incorrect-version-added Documentation Error ``version_added`` for new option is incorrect
+ option-invalid-version-added Documentation Error ``version_added`` for option is not a valid version number
+ parameter-invalid Documentation Error Argument in argument_spec is not a valid python identifier
+ parameter-invalid-elements Documentation Error Value for "elements" is valid only when value of "type" is ``list``
+ implied-parameter-type-mismatch Documentation Error Argument_spec implies ``type="str"`` but documentation defines it as different data type
+ parameter-type-not-in-doc Documentation Error Type value is defined in ``argument_spec`` but documentation doesn't specify a type
+ parameter-alias-repeated Parameters Error argument in argument_spec has at least one alias specified multiple times in aliases
+ parameter-alias-self Parameters Error argument in argument_spec is specified as its own alias
+ parameter-documented-multiple-times Documentation Error argument in argument_spec with aliases is documented multiple times
+ parameter-list-no-elements Parameters Error argument in argument_spec "type" is specified as ``list`` without defining "elements"
+ parameter-state-invalid-choice Parameters Error Argument ``state`` includes ``get``, ``list`` or ``info`` as a choice. Functionality should be in an ``_info`` or (if further conditions apply) ``_facts`` module.
+ python-syntax-error Syntax Error Python ``SyntaxError`` while parsing module
+ removal-version-must-be-major Documentation Error According to the semantic versioning specification (https://semver.org/), the only versions in which features are allowed to be removed are major versions (x.0.0)
+ return-syntax-error Documentation Error ``RETURN`` is not valid YAML, ``RETURN`` fragments missing or invalid
+ return-invalid-version-added Documentation Error ``version_added`` for return value is not a valid version number
+ subdirectory-missing-init Naming Error Ansible module subdirectories must contain an ``__init__.py``
+ try-except-missing-has Imports Warning Try/Except ``HAS_`` expression missing
+ undocumented-parameter Documentation Error Argument is listed in the argument_spec, but not documented in the module
+ unidiomatic-typecheck Syntax Error Type comparison using ``type()`` found. Use ``isinstance()`` instead
+ unknown-doc-fragment Documentation Warning Unknown pre-existing ``DOCUMENTATION`` error
+ use-boto3 Imports Error ``boto`` import found, new modules should use ``boto3``
+ use-fail-json-not-sys-exit Imports Error ``sys.exit()`` call found. Should be ``exit_json``/``fail_json``
+ use-module-utils-urls Imports Error ``requests`` import found, should use ``ansible.module_utils.urls`` instead
+ use-run-command-not-os-call Imports Error ``os.call`` used instead of ``module.run_command``
+ use-run-command-not-popen Imports Error ``subprocess.Popen`` used instead of ``module.run_command``
+ use-short-gplv3-license Documentation Error GPLv3 license header should be the :ref:`short form <copyright>` for new modules
+ mutually_exclusive-type Documentation Error mutually_exclusive entry contains non-string value
+ mutually_exclusive-collision Documentation Error mutually_exclusive entry has repeated terms
+ mutually_exclusive-unknown Documentation Error mutually_exclusive entry contains option which does not appear in argument_spec (potentially an alias of an option?)
+ required_one_of-type Documentation Error required_one_of entry contains non-string value
+ required_one_of-collision Documentation Error required_one_of entry has repeated terms
+ required_one_of-unknown Documentation Error required_one_of entry contains option which does not appear in argument_spec (potentially an alias of an option?)
+ required_together-type Documentation Error required_together entry contains non-string value
+ required_together-collision Documentation Error required_together entry has repeated terms
+ required_together-unknown Documentation Error required_together entry contains option which does not appear in argument_spec (potentially an alias of an option?)
+ required_if-is_one_of-type Documentation Error required_if entry has a fourth value which is not a bool
+ required_if-requirements-type Documentation Error required_if entry has a third value (requirements) which is not a list or tuple
+ required_if-requirements-collision Documentation Error required_if entry has repeated terms in requirements
+ required_if-requirements-unknown Documentation Error required_if entry's requirements contains option which does not appear in argument_spec (potentially an alias of an option?)
+ required_if-unknown-key Documentation Error required_if entry's key does not appear in argument_spec (potentially an alias of an option?)
+ required_if-key-in-requirements Documentation Error required_if entry contains its key in requirements list/tuple
+ required_if-value-type Documentation Error required_if entry's value is not of the type specified for its key
+ required_by-collision Documentation Error required_by entry has repeated terms
+ required_by-unknown Documentation Error required_by entry contains option which does not appear in argument_spec (potentially an alias of an option?)
+ version-added-must-be-major-or-minor Documentation Error According to the semantic versioning specification (https://semver.org/), the only versions in which features are allowed to be added are major and minor versions (x.y.0)
+============================================================ ================== ==================== =========================================================================================
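Many of the parameter checks listed above are simple heuristics over the module's ``argument_spec``. As a rough sketch of how a check such as ``no-log-needed`` can operate (illustrative only, not the actual validate-modules implementation; the keyword list below is an assumption):

```python
# Simplified sketch of a "no-log-needed" style check over an argument_spec.
# NOTE: illustrative only; the real validate-modules logic differs.

SECRET_HINTS = ("password", "passwd", "secret", "token", "key")  # assumed keyword list

def find_no_log_candidates(argument_spec):
    """Return option names that look secret but do not set no_log explicitly."""
    flagged = []
    for name, spec in argument_spec.items():
        looks_secret = any(hint in name.lower() for hint in SECRET_HINTS)
        # Explicitly setting no_log=False marks a false positive as reviewed.
        if looks_secret and "no_log" not in spec:
            flagged.append(name)
    return flagged

spec = {
    "api_password": {"type": "str"},                    # flagged: secret-looking, no no_log
    "api_token": {"type": "str", "no_log": True},       # ok: no_log set
    "ssh_key_file": {"type": "path", "no_log": False},  # ok: reviewed false positive
    "state": {"type": "str", "choices": ["present", "absent"]},
}
print(find_no_log_candidates(spec))  # → ['api_password']
```

This also shows why the table recommends ``no_log=False`` for false positives: an explicit value tells the checker the option was reviewed.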
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/yamllint.rst b/docs/docsite/rst/dev_guide/testing/sanity/yamllint.rst
new file mode 100644
index 0000000..5822bb7
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing/sanity/yamllint.rst
@@ -0,0 +1,4 @@
+yamllint
+========
+
+Check YAML files for syntax and formatting issues.
diff --git a/docs/docsite/rst/dev_guide/testing_compile.rst b/docs/docsite/rst/dev_guide/testing_compile.rst
new file mode 100644
index 0000000..2f258c8
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing_compile.rst
@@ -0,0 +1,8 @@
+:orphan:
+
+
+*************
+Compile Tests
+*************
+
+This page has moved to :ref:`testing_compile`.
diff --git a/docs/docsite/rst/dev_guide/testing_documentation.rst b/docs/docsite/rst/dev_guide/testing_documentation.rst
new file mode 100644
index 0000000..280e2c0
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing_documentation.rst
@@ -0,0 +1,44 @@
+:orphan:
+
+.. _testing_module_documentation:
+.. _testing_plugin_documentation:
+
+****************************
+Testing plugin documentation
+****************************
+
+A quick test while developing is to use ``ansible-doc -t <plugin_type> <name>`` to see if it renders. You might need to add ``-M /path/to/module`` if the module is not somewhere Ansible expects to find it.
+
+Before you submit a plugin for inclusion in Ansible, you must test your documentation for correct HTML rendering and, for modules, ensure that the argument spec matches the documentation in your Python file.
+The community pages offer more information on :ref:`testing reStructuredText documentation <testing_documentation_locally>`.
+
+For example, to check the HTML output of your module documentation:
+
+#. Ensure you have a working :ref:`development environment <environment_setup>`.
+#. Install the required Python packages (drop ``--user`` if running in a venv/virtualenv):
+
+ .. code-block:: bash
+
+ pip install --user -r requirements.txt
+ pip install --user -r docs/docsite/requirements.txt
+
+#. Ensure your module is in the correct directory: ``lib/ansible/modules/mymodule.py`` or in a configured path.
+#. Build HTML from your module documentation: ``MODULES=mymodule make webdocs``.
+#. To build the HTML documentation for multiple modules, use a comma-separated list of module names: ``MODULES=mymodule,mymodule2 make webdocs``.
+#. View the HTML page at ``file:///path/to/docs/docsite/_build/html/modules/mymodule_module.html``.
+
+To ensure that your module documentation matches your ``argument_spec``:
+
+#. Install the required Python packages (drop ``--user`` if running in a venv/virtualenv):
+
+ .. code-block:: bash
+
+ pip install --user -r test/lib/ansible_test/_data/requirements/sanity.txt
+
+#. Run the ``validate-modules`` test:
+
+ .. code-block:: bash
+
+ ansible-test sanity --test validate-modules mymodule
+
+For other plugin types the steps are similar; just adjust the names and paths for the specific plugin type.
diff --git a/docs/docsite/rst/dev_guide/testing_httptester.rst b/docs/docsite/rst/dev_guide/testing_httptester.rst
new file mode 100644
index 0000000..7c1a9fb
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing_httptester.rst
@@ -0,0 +1,28 @@
+:orphan:
+
+**********
+httptester
+**********
+
+.. contents:: Topics
+
+Overview
+========
+
+``httptester`` is a docker container used to host certain resources required by :ref:`testing_integration`. This is to avoid CI tests requiring external resources (such as git or package repos) which, if temporarily unavailable, would cause tests to fail.
+
+The HTTP testing endpoint provides the following capabilities:
+
+* httpbin
+* nginx
+* SSL
+* SNI
+* Negotiate Authentication
+
+
+Source files can be found in the `http-test-container <https://github.com/ansible/http-test-container>`_ repository.
+
+Extending httptester
+====================
+
+If you have some time to improve ``httptester``, please add a comment on the `Testing Working Group Agenda <https://github.com/ansible/community/blob/main/meetings/README.md>`_ to avoid duplicate effort.
diff --git a/docs/docsite/rst/dev_guide/testing_integration.rst b/docs/docsite/rst/dev_guide/testing_integration.rst
new file mode 100644
index 0000000..915281d
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing_integration.rst
@@ -0,0 +1,241 @@
+:orphan:
+
+.. _testing_integration:
+
+*****************
+Integration tests
+*****************
+
+.. contents:: Topics
+
+The Ansible integration test system: tests for playbooks, by playbooks.
+
+Some tests may require credentials. Credentials may be specified with `credentials.yml`.
+
+Some tests may require root.
+
+.. note::
+ Every new module and plugin should have integration tests, even if the tests cannot be run on Ansible CI infrastructure.
+ In this case, the tests should be marked with the ``unsupported`` alias in `aliases file <https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/integration-aliases.html>`_.
+
+Quick Start
+===========
+
+It is highly recommended that you install and activate the ``argcomplete`` Python package.
+It provides tab completion in ``bash`` for the ``ansible-test`` test runner.
+
+Configuration
+=============
+
+ansible-test command
+--------------------
+
+The example below assumes ``bin/`` is in your ``$PATH``. An easy way to achieve that
+is to initialize your environment with the ``env-setup`` command:
+
+.. code-block:: shell-session
+
+ source hacking/env-setup
+ ansible-test --help
+
+You can also call ``ansible-test`` with the full path:
+
+.. code-block:: shell-session
+
+ bin/ansible-test --help
+
+integration_config.yml
+----------------------
+
+Making your own version of ``integration_config.yml`` can allow for setting some
+tunable parameters to help run the tests better in your environment. Some
+tests (for example, cloud tests) will only run when access credentials are provided. For more
+information about supported credentials, refer to the various ``cloud-config-*.template``
+files in the ``test/integration/`` directory.
+
+Prerequisites
+=============
+
+Some tests assume things like ``hg``, ``svn``, and ``git`` are installed and in the ``PATH``. Some tests
+(such as those for Amazon Web Services) need separate definitions, which will be covered
+later in this document.
+
+(Complete list pending)
+
+Non-destructive Tests
+=====================
+
+These tests will modify files in subdirectories, but will not do things that install or remove packages or things
+outside of those test subdirectories. They will also not reconfigure or bounce system services.
+
+.. note:: Running integration tests within containers
+
+   To protect your system from any potential changes caused by integration tests, and to ensure a sensible set of dependencies are available, we recommend that you always run integration tests with the ``--docker`` option, for example ``--docker ubuntu2004``. See the `list of supported container images <https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_data/completion/docker.txt>`_ for options (the ``default`` image is used for sanity and unit tests, as well as for platform-independent integration tests such as those for cloud modules).
+
+Run as follows for all POSIX platform tests executed by our CI system in a Fedora 34 container:
+
+.. code-block:: shell-session
+
+ ansible-test integration shippable/ --docker fedora34
+
+You can also target specific tests, such as those for individual modules:
+
+.. code-block:: shell-session
+
+ ansible-test integration ping
+
+You can use the ``-v`` option to make the output more verbose:
+
+.. code-block:: shell-session
+
+ ansible-test integration lineinfile -vvv
+
+Use the following command to list all the available targets:
+
+.. code-block:: shell-session
+
+ ansible-test integration --list-targets
+
+.. note:: Bash users
+
+ If you use ``bash`` with ``argcomplete``, obtain a full list by doing: ``ansible-test integration <tab><tab>``
+
+Destructive Tests
+=================
+
+These tests are allowed to install and remove some trivial packages. You will likely want to run these
+in an isolated environment, such as a Docker container. They won't reformat your filesystem:
+
+.. code-block:: shell-session
+
+ ansible-test integration destructive/ --docker fedora34
+
+Windows Tests
+=============
+
+These tests exercise the ``winrm`` connection plugin and Windows modules. You'll
+need to define an inventory with a remote Windows Server to use for testing,
+and enable PowerShell Remoting to continue.
+
+Running these tests may result in changes to your Windows host, so don't run
+them against a production/critical Windows environment.
+
+Enable PowerShell Remoting (run on the Windows host through Remote Desktop):
+
+.. code-block:: shell-session
+
+ Enable-PSRemoting -Force
+
+Define Windows inventory:
+
+.. code-block:: shell-session
+
+ cp inventory.winrm.template inventory.winrm
+ ${EDITOR:-vi} inventory.winrm
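After copying the template, fill in the connection details for your Windows host. A minimal sketch of what ``inventory.winrm`` might contain (the host name, address, and credentials below are placeholders; the variable names are standard ``winrm`` connection plugin options, and your copy of the template may differ):

```ini
[windows]
winserver ansible_host=203.0.113.10

[windows:vars]
ansible_user=Administrator
ansible_password=ExamplePassword123
ansible_port=5986
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
```

Setting ``ansible_winrm_server_cert_validation=ignore`` is common for test hosts with self-signed certificates; do not use it against production systems.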
+
+Run the Windows tests executed by our CI system:
+
+.. code-block:: shell-session
+
+ ansible-test windows-integration -v shippable/
+
+Tests in containers
+===================
+
+If you have a Linux system with Docker or Podman installed, running integration tests using the same containers used by
+the Ansible continuous integration (CI) system is recommended.
+
+.. note:: Podman
+
+ By default, Podman will only be used if the Docker CLI is not installed. If you have Docker installed but want to use
+ Podman, you can change this behavior by setting the environment variable ``ANSIBLE_TEST_PREFER_PODMAN``.
+
+.. note:: Docker on non-Linux
+
+ Using Docker Engine to run Docker on a non-Linux host (such as macOS) is not recommended.
+ Some tests may fail, depending on the image used for testing.
+ Using the ``--docker-privileged`` option when running ``integration`` (not ``network-integration`` or ``windows-integration``) may resolve the issue.
+
+Running Integration Tests
+-------------------------
+
+To run all CI integration test targets for POSIX platforms in an Ubuntu 18.04 container:
+
+.. code-block:: shell-session
+
+ ansible-test integration shippable/ --docker ubuntu1804
+
+You can also run specific tests or select a different Linux distribution.
+For example, to run tests for the ``ping`` module in an Ubuntu 18.04 container:
+
+.. code-block:: shell-session
+
+ ansible-test integration ping --docker ubuntu1804
+
+.. _test_container_images:
+
+Container Images
+----------------
+
+Container images are updated regularly. To see the current list of container images:
+
+.. code-block:: bash
+
+ ansible-test integration --help
+
+The list is under the **target docker images and supported python version** heading.
+
+Other configuration for Cloud Tests
+===================================
+
+In order to run some tests, you must provide access credentials in a file named
+``cloud-config-aws.yml`` or ``cloud-config-cs.ini`` in the ``test/integration``
+directory. Corresponding ``.template`` files are available for syntax help. The newer AWS
+tests now use the file ``test/integration/cloud-config-aws.yml``.
+
+IAM policies for AWS
+====================
+
+Ansible needs fairly wide-ranging permissions to run the tests in an AWS account. These rights can be granted to a dedicated user, and they need to be configured before running the tests.
+
+testing-policies
+----------------
+
+The GitHub repository `mattclay/aws-terminator <https://github.com/mattclay/aws-terminator/>`_
+contains two sets of policies used for all existing AWS module integration tests.
+The `hacking/aws_config/setup_iam.yml` playbook can be used to set up two groups:
+
+ - `ansible-integration-ci` will have the policies applied necessary to run any
+ integration tests not marked as `unsupported` and are designed to mirror those
+ used by Ansible's CI.
+ - `ansible-integration-unsupported` will have the additional policies applied
+ necessary to run the integration tests marked as `unsupported` including tests
+ for managing IAM roles, users and groups.
+
+Once the groups have been created, you'll need to create a user and make the user a member of these
+groups. The policies are designed to minimize the rights of that user. Please note that while this policy does limit
+the user to one region, this does not fully restrict the user (primarily due to the limitations of the Amazon ARN
+notation). The user will still have wide privileges for viewing account definitions, and will also be able to manage
+some resources that are not related to testing (for example, AWS lambdas with different names). Tests should not
+be run in a primary production account in any case.
+
+Other Definitions required
+--------------------------
+
+Apart from installing the policy and attaching it to the user identity running the tests, a
+Lambda role `ansible_integration_tests` has to be created, which has basic Lambda execution
+privileges.
+
+
+Network Tests
+=============
+
+For guidance on writing network tests, see :ref:`testing_resource_modules`.
+
+
+Where to find out more
+======================
+
+If you'd like to know more about the plans for improving testing Ansible, join the `Testing Working Group <https://github.com/ansible/community/blob/main/meetings/README.md>`_.
diff --git a/docs/docsite/rst/dev_guide/testing_pep8.rst b/docs/docsite/rst/dev_guide/testing_pep8.rst
new file mode 100644
index 0000000..c634914
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing_pep8.rst
@@ -0,0 +1,8 @@
+:orphan:
+
+
+*****
+PEP 8
+*****
+
+This page has moved to :ref:`testing_pep8`. \ No newline at end of file
diff --git a/docs/docsite/rst/dev_guide/testing_running_locally.rst b/docs/docsite/rst/dev_guide/testing_running_locally.rst
new file mode 100644
index 0000000..0d03189
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing_running_locally.rst
@@ -0,0 +1,382 @@
+:orphan:
+
+.. _testing_running_locally:
+
+*******************************
+Testing Ansible and Collections
+*******************************
+
+This document describes how to run tests using ``ansible-test``.
+
+.. contents::
+ :local:
+
+Setup
+=====
+
+Before running ``ansible-test``, set up your environment for :ref:`testing_an_ansible_collection` or
+:ref:`testing_ansible_core`, depending on which scenario applies to you.
+
+.. warning::
+
+ If you use ``git`` for version control, make sure the files you are working with are not ignored by ``git``.
+ If they are, ``ansible-test`` will ignore them as well.
+
+.. _testing_an_ansible_collection:
+
+Testing an Ansible Collection
+-----------------------------
+
+If you are testing an Ansible Collection, you need a copy of the collection, preferably a git clone.
+For example, to work with the ``community.windows`` collection, follow these steps:
+
+1. Clone the collection you want to test into a valid collection root:
+
+ .. code-block:: shell
+
+ git clone https://github.com/ansible-collections/community.windows ~/dev/ansible_collections/community/windows
+
+ .. important::
+
+ The path must end with ``/ansible_collections/{collection_namespace}/{collection_name}`` where
+ ``{collection_namespace}`` is the namespace of the collection and ``{collection_name}`` is the collection name.
+
+2. Clone any collections on which the collection depends:
+
+ .. code-block:: shell
+
+ git clone https://github.com/ansible-collections/ansible.windows ~/dev/ansible_collections/ansible/windows
+
+ .. important::
+
+ If your collection has any dependencies on other collections, they must be in the same collection root, since
+ ``ansible-test`` will not use your configured collection roots (or other Ansible configuration).
+
+ .. note::
+
+ See the collection's ``galaxy.yml`` for a list of possible dependencies.
+
+3. Switch to the directory where the collection to test resides:
+
+ .. code-block:: shell
+
+ cd ~/dev/ansible_collections/community/windows
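The layout rule above can be checked mechanically. A small sketch (not part of ``ansible-test``; the helper name is ours) that tests whether a path ends with the required ``ansible_collections/{collection_namespace}/{collection_name}`` pattern:

```python
# Sketch: verify a checkout path ends with ansible_collections/<namespace>/<name>.
# This mirrors the layout rule described above; it is not ansible-test code.
from pathlib import PurePosixPath

def is_collection_root(path):
    parts = PurePosixPath(path).parts
    # The last three components must be: ansible_collections, namespace, name.
    return len(parts) >= 3 and parts[-3] == "ansible_collections"

print(is_collection_root("/home/me/dev/ansible_collections/community/windows"))  # True
print(is_collection_root("/home/me/dev/community.windows"))                      # False
```

If the check fails for your clone, move it so the final three path components match the pattern; ``ansible-test`` will not find the collection otherwise.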
+
+.. _testing_ansible_core:
+
+Testing ``ansible-core``
+------------------------
+
+If you are testing ``ansible-core`` itself, you need a copy of the ``ansible-core`` source code, preferably a git clone.
+Having an installed copy of ``ansible-core`` is neither sufficient nor required.
+For example, to work with the ``ansible-core`` source cloned from GitHub, follow these steps:
+
+1. Clone the ``ansible-core`` repository:
+
+ .. code-block:: shell
+
+ git clone https://github.com/ansible/ansible ~/dev/ansible
+
+2. Switch to the directory where the ``ansible-core`` source resides:
+
+ .. code-block:: shell
+
+ cd ~/dev/ansible
+
+3. Add ``ansible-core`` programs to your ``PATH``:
+
+ .. code-block:: shell
+
+ source hacking/env-setup
+
+ .. note::
+
+ You can skip this step if you only need to run ``ansible-test``, and not other ``ansible-core`` programs.
+ In that case, simply run ``bin/ansible-test`` from the root of the ``ansible-core`` source.
+
+ .. caution::
+
+ If you have an installed version of ``ansible-core`` and are trying to run ``ansible-test`` from your ``PATH``,
+ make sure the program found by your shell is the one from the ``ansible-core`` source:
+
+ .. code-block:: shell
+
+ which ansible-test
+
+Commands
+========
+
+The most commonly used test commands are:
+
+* ``ansible-test sanity`` - Run sanity tests (mostly linters and static analysis).
+* ``ansible-test integration`` - Run integration tests.
+* ``ansible-test units`` - Run unit tests.
+
+Run ``ansible-test --help`` to see a complete list of available commands.
+
+.. note::
+
+ For detailed help on a specific command, add the ``--help`` option after the command.
+
+Environments
+============
+
+Most ``ansible-test`` commands support running in one or more isolated test environments to simplify testing.
+
+Containers
+----------
+
+Containers are recommended for running sanity, unit and integration tests, since they provide consistent environments.
+Unit tests will be run with network isolation, which avoids unintentional dependencies on network resources.
+
+The ``--docker`` option runs tests in a container using either Docker or Podman.
+
+.. note::
+
+ If both Docker and Podman are installed, Docker will be used.
+ To override this, set the environment variable ``ANSIBLE_TEST_PREFER_PODMAN`` to any non-empty value.
+
+Choosing a container
+^^^^^^^^^^^^^^^^^^^^
+
+Without an additional argument, the ``--docker`` option uses the ``default`` container.
+To use another container, specify it immediately after the ``--docker`` option.
+
+.. note::
+
+ The ``default`` container is recommended for all sanity and unit tests.
+
+To see the list of supported containers, use the ``--help`` option with the ``ansible-test`` command you want to use.
+
+.. note::
+
+ The list of available containers is dependent on the ``ansible-test`` command you are using.
+
+You can also specify your own container.
+When doing so, you will need to indicate the Python version in the container with the ``--python`` option.
+
+Custom containers
+"""""""""""""""""
+
+When building custom containers, keep in mind the following requirements:
+
+* The ``USER`` should be ``root``.
+* Use an ``init`` process, such as ``systemd``.
+* Include ``sshd`` and accept connections on the default port of ``22``.
+* Include a POSIX compatible ``sh`` shell which can be found on ``PATH``.
+* Include a ``sleep`` utility which runs as a subprocess.
+* Include a supported version of Python.
+* Avoid using the ``VOLUME`` statement.
+
+Docker and SELinux
+^^^^^^^^^^^^^^^^^^
+
+Using Docker on a host with SELinux may require setting SELinux to permissive mode.
+Consider using Podman instead.
+
+Docker Desktop with WSL2
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+These instructions explain how to use ``ansible-test`` with WSL2 and Docker Desktop *without* ``systemd`` support.
+
+.. note::
+
+ If your WSL2 environment includes ``systemd`` support, these steps are not required.
+
+.. _configuration_requirements:
+
+Configuration requirements
+""""""""""""""""""""""""""
+
+1. Open Docker Desktop and go to the **Settings** screen.
+2. On the **General** tab:
+
+ a. Uncheck the **Start Docker Desktop when you log in** checkbox.
+ b. Check the **Use the WSL 2 based engine** checkbox.
+
+3. On the **Resources** tab under the **WSL Integration** section:
+
+ a. Enable distros you want to use under the **Enable integration with additional distros** section.
+
+4. Click **Apply and restart** if changes were made.
+
+.. _setup_instructions:
+
+Setup instructions
+""""""""""""""""""
+
+.. note::
+
+ If all WSL instances have been stopped, these changes will need to be re-applied.
+
+1. Verify Docker Desktop is properly configured (see :ref:`configuration_requirements`).
+2. Quit Docker Desktop if it is running:
+
+ a. Right click the **Docker Desktop** taskbar icon.
+ b. Click the **Quit Docker Desktop** option.
+
+3. Stop any running WSL instances with the command:
+
+ .. code-block:: shell
+
+ wsl --shutdown
+
+4. Verify all WSL instances have stopped with the command:
+
+ .. code-block:: shell
+
+ wsl -l -v
+
+5. Start a WSL instance and perform the following steps as ``root``:
+
+ a. Verify the ``systemd`` subsystem is not registered:
+
+ a. Check for the ``systemd`` cgroup hierarchy with the following command:
+
+ .. code-block:: shell
+
+ grep systemd /proc/self/cgroup
+
+ b. If any matches are found, re-check the :ref:`configuration_requirements` and follow the
+ :ref:`setup_instructions` again.
+
+ b. Mount the ``systemd`` cgroup hierarchy with the following commands:
+
+ .. code-block:: shell
+
+ mkdir /sys/fs/cgroup/systemd
+ mount cgroup -t cgroup /sys/fs/cgroup/systemd -o none,name=systemd,xattr
+
+6. Start Docker Desktop.
+
+You should now be able to use ``ansible-test`` with the ``--docker`` option.
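The check in step 5a simply looks for a ``systemd`` entry in ``/proc/self/cgroup``, where each line has the form ``hierarchy-id:controller-list:cgroup-path``. The same logic can be sketched in Python, run here against a hard-coded sample of the file's contents (the sample is illustrative; read the real file on a live host):

```python
# Sketch: detect a registered systemd cgroup v1 hierarchy in /proc/self/cgroup.
# Each line has the form "hierarchy-id:controller-list:cgroup-path".
# SAMPLE is illustrative content; the real file varies per host.
SAMPLE = """\
12:pids:/
11:devices:/
1:name=systemd:/
0::/
"""

def has_systemd_hierarchy(cgroup_text):
    for line in cgroup_text.splitlines():
        fields = line.split(":", 2)
        if len(fields) == 3 and "systemd" in fields[1]:
            return True
    return False

print(has_systemd_hierarchy(SAMPLE))  # True
```

If this returns ``True`` on your host before you mount anything, the hierarchy is already registered and the mount commands in step 5b are not needed.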
+
+.. _linux_cgroup_configuration:
+
+Linux cgroup configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. note::
+
+ These changes will need to be re-applied each time the container host is booted.
+
+For certain container hosts and container combinations, additional setup on the container host may be required.
+In these situations ``ansible-test`` will report an error and provide additional instructions to run as ``root``:
+
+.. code-block:: shell
+
+ mkdir /sys/fs/cgroup/systemd
+ mount cgroup -t cgroup /sys/fs/cgroup/systemd -o none,name=systemd,xattr
+
+If you are using rootless Podman, an additional command must be run, also as ``root``.
+Make sure to substitute your user and group for ``{user}`` and ``{group}`` respectively:
+
+.. code-block:: shell
+
+ chown -R {user}:{group} /sys/fs/cgroup/systemd
+
+Podman
+""""""
+
+When using Podman, you may need to stop existing Podman processes after following the :ref:`linux_cgroup_configuration`
+instructions. Otherwise Podman may be unable to see the new mount point.
+
+You can check to see if Podman is running by looking for ``podman`` and ``catatonit`` processes.
+
+Remote virtual machines
+-----------------------
+
+Remote virtual machines are recommended for running integration tests not suitable for execution in containers.
+
+The ``--remote`` option runs tests in a cloud hosted ephemeral virtual machine.
+
+.. note::
+
+ An API key is required to use this feature, unless running under an approved Azure Pipelines organization.
+
+To see the list of supported systems, use the ``--help`` option with the ``ansible-test`` command you want to use.
+
+.. note::
+
+ The list of available systems is dependent on the ``ansible-test`` command you are using.
+
+Python virtual environments
+---------------------------
+
+Python virtual environments provide a simple way to achieve isolation from the system and user Python environments.
+They are recommended for unit and integration tests when the ``--docker`` and ``--remote`` options cannot be used.
+
+The ``--venv`` option runs tests in a virtual environment managed by ``ansible-test``.
+Requirements are automatically installed before tests are run.
+
+Composite environment arguments
+-------------------------------
+
+The environment arguments covered in this document are sufficient for most use cases.
+However, some scenarios may require the additional flexibility offered by composite environment arguments.
+
+The ``--controller`` and ``--target`` options are alternatives to the ``--docker``, ``--remote`` and ``--venv`` options.
+
+.. note::
+
+   When using the ``shell`` command, the ``--target`` option is replaced by three platform-specific options.
+
+Add the ``--help`` option to your ``ansible-test`` command to learn more about the composite environment arguments.
+
+Additional Requirements
+=======================
+
+Some ``ansible-test`` commands have additional requirements.
+You can use the ``--requirements`` option to automatically install them.
+
+.. note::
+
+ When using a test environment managed by ``ansible-test`` the ``--requirements`` option is usually unnecessary.
+
+Environment variables
+=====================
+
+When using environment variables to manipulate tests, there are some limitations to keep in mind. Environment variables are:
+
+* Not propagated from the host to the test environment when using the ``--docker`` or ``--remote`` options.
+* Not exposed to the test environment unless enabled in ``test/lib/ansible_test/_internal/util.py`` in the ``common_environment`` function.
+
+ Example: ``ANSIBLE_KEEP_REMOTE_FILES=1`` can be set when running ``ansible-test integration --venv``. However, using the ``--docker`` option would
+ require running ``ansible-test shell`` to gain access to the Docker environment. Once at the shell prompt, the environment variable could be set
+ and the tests executed. This is useful for debugging tests inside a container by following the
+ :ref:`debugging_modules` instructions.
+
+Interactive shell
+=================
+
+Use the ``ansible-test shell`` command to get an interactive shell in the same environment used to run tests. Examples:
+
+* ``ansible-test shell --docker`` - Open a shell in the default docker container.
+* ``ansible-test shell --venv --python 3.10`` - Open a shell in a Python 3.10 virtual environment.
+
+Code coverage
+=============
+
+Code coverage reports make it easy to identify untested code for which more tests should
+be written. Online reports are available but only cover the ``devel`` branch (see
+:ref:`developing_testing`). For new code local reports are needed.
+
+Add the ``--coverage`` option to any test command to collect code coverage data. If you
+aren't using the ``--venv`` or ``--docker`` options, which create an isolated Python
+environment, then you may need to use the ``--requirements`` option to ensure that the
+correct version of the coverage module is installed:
+
+.. code-block:: shell
+
+ ansible-test coverage erase
+ ansible-test units --coverage apt
+ ansible-test integration --coverage aws_lambda
+ ansible-test coverage html
+
+Reports can be generated in several different formats:
+
+* ``ansible-test coverage report`` - Console report.
+* ``ansible-test coverage html`` - HTML report.
+* ``ansible-test coverage xml`` - XML report.
+
+To clear data between test runs, use the ``ansible-test coverage erase`` command.
diff --git a/docs/docsite/rst/dev_guide/testing_sanity.rst b/docs/docsite/rst/dev_guide/testing_sanity.rst
new file mode 100644
index 0000000..01b167b
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing_sanity.rst
@@ -0,0 +1,56 @@
+:orphan:
+
+.. _testing_sanity:
+
+************
+Sanity Tests
+************
+
+.. contents:: Topics
+
+Sanity tests are made up of scripts and tools used to perform static code analysis.
+The primary purpose of these tests is to enforce Ansible coding standards and requirements.
+
+Tests are run with ``ansible-test sanity``.
+All available tests are run unless the ``--test`` option is used.
+
+
+How to run
+==========
+
+.. note::
+ To run sanity tests using docker, always use the default docker image
+ by passing the ``--docker`` or ``--docker default`` argument.
+
+.. note::
+ When using docker and the ``--base-branch`` argument,
+ also use the ``--keep-git`` argument to avoid git related errors.
+
+.. code:: shell
+
+ source hacking/env-setup
+
+ # Run all sanity tests
+ ansible-test sanity
+
+ # Run all sanity tests including disabled ones
+ ansible-test sanity --allow-disabled
+
+ # Run all sanity tests against certain file(s)
+ ansible-test sanity lib/ansible/modules/files/template.py
+
+ # Run all sanity tests against certain folder(s)
+ ansible-test sanity lib/ansible/modules/files/
+
+ # Run all tests inside docker (good if you don't have dependencies installed)
+ ansible-test sanity --docker default
+
+ # Run validate-modules against a specific file
+ ansible-test sanity --test validate-modules lib/ansible/modules/files/template.py
+
+Available Tests
+===============
+
+Tests can be listed with ``ansible-test sanity --list-tests``.
+
+See the full list of :ref:`sanity tests <all_sanity_tests>`, which describes the various tests and explains how to fix identified issues.
diff --git a/docs/docsite/rst/dev_guide/testing_units.rst b/docs/docsite/rst/dev_guide/testing_units.rst
new file mode 100644
index 0000000..3b87645
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing_units.rst
@@ -0,0 +1,219 @@
+:orphan:
+
+.. _testing_units:
+
+**********
+Unit Tests
+**********
+
+Unit tests are small isolated tests that target a specific library or module. Unit tests
+in Ansible are currently the only way of driving tests from python within Ansible's
+continuous integration process. This means that in some circumstances the tests may be a
+bit wider than just units.
+
+.. contents:: Topics
+
+Available Tests
+===============
+
+Unit tests can be found in `test/units
+<https://github.com/ansible/ansible/tree/devel/test/units>`_. Notice that the directory
+structure of the tests matches that of ``lib/ansible/``.
+
+Running Tests
+=============
+
+.. note::
+ To run unit tests using docker, always use the default docker image
+ by passing the ``--docker`` or ``--docker default`` argument.
+
+The Ansible unit tests can be run across the whole code base by doing:
+
+.. code:: shell
+
+ cd /path/to/ansible/source
+ source hacking/env-setup
+ ansible-test units --docker -v
+
+Against a single file by doing:
+
+.. code:: shell
+
+ ansible-test units --docker -v apt
+
+Or against a specific Python version by doing:
+
+.. code:: shell
+
+ ansible-test units --docker -v --python 2.7 apt
+
+If you are running unit tests against things other than modules, such as module utilities, specify the whole file path:
+
+.. code:: shell
+
+ ansible-test units --docker -v test/units/module_utils/basic/test_imports.py
+
+For advanced usage see the online help:
+
+.. code:: shell
+
+ ansible-test units --help
+
+You can also run tests in Ansible's continuous integration system by opening a pull
+request. This will automatically determine which tests to run based on the changes made
+in your pull request.
+
+
+Installing dependencies
+=======================
+
+If you are running ``ansible-test`` with the ``--docker`` or ``--venv`` option you do not need to install dependencies manually.
+
+Otherwise you can install dependencies using the ``--requirements`` option, which will
+install all the required dependencies needed for unit tests. For example:
+
+.. code:: shell
+
+ ansible-test units --python 2.7 --requirements apache2_module
+
+
+The list of unit test requirements can be found at `test/units/requirements.txt
+<https://github.com/ansible/ansible/tree/devel/test/units/requirements.txt>`_.
+
+This does not include the list of unit test requirements for ``ansible-test`` itself,
+which can be found at `test/lib/ansible_test/_data/requirements/units.txt
+<https://github.com/ansible/ansible/tree/devel/test/lib/ansible_test/_data/requirements/units.txt>`_.
+
+See also the `constraints
+<https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_data/requirements/constraints.txt>`_
+applicable to all test commands.
+
+
+Extending unit tests
+====================
+
+
+.. warning:: What a unit test isn't
+
+ If you start writing a test that requires external services then
+ you may be writing an integration test, rather than a unit test.
+
+
+Structuring Unit Tests
+----------------------
+
+Ansible drives unit tests through `pytest <https://docs.pytest.org/en/latest/>`_. This
+means that tests can be written either as simple functions, included in any file
+named like ``test_<something>.py``, or as classes.
+
+Here is an example of a function:
+
+.. code:: python
+
+   # this function will be called simply because its name matches test_*()
+
+ def test_add():
+ a = 10
+ b = 23
+ c = 33
+ assert a + b == c
+
+Here is an example of a class:
+
+.. code:: python
+
+ import unittest
+
+ class AddTester(unittest.TestCase):
+
+       def setUp(self):
+           self.a = 10
+           self.b = 23
+
+       # this method will be run because its name starts with test_
+       def test_add(self):
+           c = 33
+           assert self.a + self.b == c
+
+       # this method will be run because its name starts with test_
+       def test_subtract(self):
+           c = -13
+           assert self.a - self.b == c
+
+Both methods work fine in most circumstances; the function-based interface is simpler and
+quicker and so that's probably where you should start when you are just trying to add a
+few basic tests for a module. The class-based test allows more tidy set up and tear down
+of pre-requisites, so if you have many test cases for your module you may want to refactor
+to use that.
+
+Assertions using the simple ``assert`` function inside the tests will give full
+information on the cause of the failure with a trace-back of functions called during the
+assertion. This means that plain asserts are recommended over other external assertion
+libraries.
+
+A number of the unit test suites include functions that are shared between several
+modules, especially in the networking arena. In these cases a file is created in the same
+directory, which is then included directly.
+
+
+Module test case common code
+----------------------------
+
+Keep common code as specific as possible within the ``test/units/`` directory structure.
+Don't import common unit test code from directories outside the current or parent directories.
+
+Don't import other unit tests from a unit test. Any common code should be in dedicated
+files that aren't themselves tests.
+
+
+Fixtures files
+--------------
+
+To mock out fetching results from devices, or provide other complex data structures that
+come from external libraries, you can use ``fixtures`` to read in pre-generated data.
+
+You can check how `fixtures <https://github.com/ansible/ansible/tree/devel/test/units/module_utils/facts/fixtures/cpuinfo>`_
+are used in `cpuinfo fact tests <https://github.com/ansible/ansible/blob/9f72ff80e3fe173baac83d74748ad87cb6e20e64/test/units/module_utils/facts/hardware/linux_data.py#L384>`_.
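
Reading fixture data is typically done with a small helper. The ``load_fixture``
function below is an illustrative sketch, not a helper shipped with Ansible:

```python
import json
import os

def load_fixture(fixture_dir, name, parse_json=False):
    """Read pre-generated data from a fixture file.

    Tests conventionally pass ``os.path.join(os.path.dirname(__file__), 'fixtures')``
    as ``fixture_dir`` so the data lives next to the test module.
    """
    with open(os.path.join(fixture_dir, name)) as f:
        data = f.read()
    return json.loads(data) if parse_json else data
```

The helper keeps the test body focused on assertions instead of file handling.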
+
+If you are simulating APIs you may find that Python placebo is useful. See
+:ref:`testing_units_modules` for more information.
+
+
+Code Coverage For New or Updated Unit Tests
+-------------------------------------------
+New code will be missing from the codecov.io coverage reports (see :ref:`developing_testing`), so
+local reporting is needed. Most ``ansible-test`` commands allow you to collect code
+coverage; this is particularly useful for indicating where to extend testing.
+
+To collect coverage data add the ``--coverage`` argument to your ``ansible-test`` command line:
+
+.. code:: shell
+
+ ansible-test units --coverage apt
+ ansible-test coverage html
+
+Results will be written to ``test/results/reports/coverage/index.html``
+
+Reports can be generated in several different formats:
+
+* ``ansible-test coverage report`` - Console report.
+* ``ansible-test coverage html`` - HTML report.
+* ``ansible-test coverage xml`` - XML report.
+
+To clear data between test runs, use the ``ansible-test coverage erase`` command. See
+:ref:`testing_running_locally` for more information about generating coverage
+reports.
+
+
+.. seealso::
+
+ :ref:`testing_units_modules`
+ Special considerations for unit testing modules
+ :ref:`testing_running_locally`
+ Running tests locally including gathering and reporting coverage data
+ `Python 3 documentation - 26.4. unittest — Unit testing framework <https://docs.python.org/3/library/unittest.html>`_
+ The documentation of the unittest framework in python 3
+   `Python 2 documentation - 25.3. unittest — Unit testing framework <https://docs.python.org/2/library/unittest.html>`_
+ The documentation of the earliest supported unittest framework - from Python 2.6
+ `pytest: helps you write better programs <https://docs.pytest.org/en/latest/>`_
+ The documentation of pytest - the framework actually used to run Ansible unit tests
diff --git a/docs/docsite/rst/dev_guide/testing_units_modules.rst b/docs/docsite/rst/dev_guide/testing_units_modules.rst
new file mode 100644
index 0000000..d07dcff
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing_units_modules.rst
@@ -0,0 +1,589 @@
+:orphan:
+
+.. _testing_units_modules:
+
+****************************
+Unit Testing Ansible Modules
+****************************
+
+.. highlight:: python
+
+.. contents:: Topics
+
+Introduction
+============
+
+This document explains why, how and when you should use unit tests for Ansible modules.
+The document doesn't apply to other parts of Ansible for which the recommendations are
+normally closer to the Python standard. There is basic documentation for Ansible unit
+tests in the developer guide :ref:`testing_units`. This document should
+be readable for a new Ansible module author. If you find it incomplete or confusing,
+please open a bug or ask for help on the #ansible-devel chat channel (using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_).
+
+What Are Unit Tests?
+====================
+
+Ansible includes a set of unit tests in the :file:`test/units` directory. These tests primarily cover the
+internals but can also cover Ansible modules. The structure of the unit tests matches
+the structure of the code base, so the tests that reside in the :file:`test/units/modules/` directory
+are organized by module groups.
+
+Integration tests can be used for most modules, but there are situations where
+cases cannot be verified using integration tests. This means that Ansible unit test cases
+may extend beyond testing only minimal units and in some cases will include some
+level of functional testing.
+
+
+Why Use Unit Tests?
+===================
+
+Ansible unit tests have advantages and disadvantages. It is important to understand these.
+Advantages include:
+
+* Most unit tests are much faster than most Ansible integration tests. The complete suite
+ of unit tests can be run regularly by a developer on their local system.
+* Unit tests can be run by developers who don't have access to the system which the module is
+ designed to work on, allowing a level of verification that changes to core functions
+ haven't broken module expectations.
+* Unit tests can easily substitute system functions allowing testing of software that
+ would be impractical. For example, the ``sleep()`` function can be replaced and we check
+ that a ten minute sleep was called without actually waiting ten minutes.
+* Unit tests are run on different Python versions. This allows us to
+ ensure that the code behaves in the same way on different Python versions.
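
The ``sleep()`` substitution mentioned above can be sketched with a mock; the
``wait_ten_minutes`` helper here is hypothetical and exists only for illustration:

```python
from unittest.mock import MagicMock

def wait_ten_minutes(sleep_fn):
    # Hypothetical module helper that backs off for ten minutes
    # before checking a resource again.
    sleep_fn(600)

def test_ten_minute_sleep_requested():
    fake_sleep = MagicMock()
    wait_ten_minutes(fake_sleep)  # returns immediately; no real waiting
    fake_sleep.assert_called_once_with(600)
```

Because the mock records each call, the test verifies that a ten minute sleep
was requested without any wall-clock delay.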
+
+There are also some potential disadvantages of unit tests. Unit tests don't normally
+directly test actual useful features of software, just its internal
+implementation:
+
+* Unit tests that test the internal, non-visible features of software may make
+  refactoring difficult if those internal features have to change (see also
+  `Naming unit tests`_ below)
+* Even if the internal feature is working correctly, it is possible that there will be a
+  problem between the internal code tested and the actual result delivered to the user
+
+Normally the Ansible integration tests (which are written in Ansible YAML) provide better
+testing for most module functionality. If those tests already test a feature and perform
+well there may be little point in providing a unit test covering the same area as well.
+
+When To Use Unit Tests
+======================
+
+There are a number of situations where unit tests are a better choice than integration
+tests. For example, testing things which are impossible, slow or very difficult to test
+with integration tests, such as:
+
+* Forcing rare / strange / random situations that can't be forced in integration tests, such as specific network
+  failures and exceptions
+* Extensive testing of slow configuration APIs
+* Situations where the integration tests cannot be run as part of the main Ansible
+ continuous integration running in Azure Pipelines.
+
+
+
+Providing quick feedback
+------------------------
+
+Example:
+ A single step of the rds_instance test cases can take up to 20
+ minutes (the time to create an RDS instance in Amazon). The entire
+ test run can last for well over an hour. All 16 of the unit tests
+ complete execution in less than 2 seconds.
+
+The time saving provided by being able to run the code in a unit test makes it worth
+creating a unit test when bug fixing a module, even if those tests do not often identify
+problems later. As a basic goal, every module should have at least one unit test which
+will give quick feedback in easy cases without having to wait for the integration tests to
+complete.
+
+Ensuring correct use of external interfaces
+-------------------------------------------
+
+Unit tests can check the way in which external services are run to ensure that they match
+specifications or are as efficient as possible *even when the final output will not be changed*.
+
+Example:
+ Package managers are often far more efficient when installing multiple packages at once
+ rather than each package separately. The final result is the
+ same: the packages are all installed, so the efficiency is difficult to verify through
+ integration tests. By providing a mock package manager and verifying that it is called
+ once, we can build a valuable test for module efficiency.
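
A minimal sketch of such a test, with a hypothetical ``install_packages`` helper
and a mocked package manager (both invented for this example):

```python
from unittest.mock import MagicMock

def install_packages(pkg_mgr, packages):
    # Hypothetical module helper: install all requested packages
    # with a single package manager invocation.
    pkg_mgr.install(packages)

def test_packages_installed_in_single_call():
    pkg_mgr = MagicMock()
    install_packages(pkg_mgr, ['git', 'curl', 'jq'])
    # One call with the full list, rather than one call per package.
    pkg_mgr.install.assert_called_once_with(['git', 'curl', 'jq'])
```

The assertion fails if the helper ever regresses to one install call per package.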
+
+Another related use is in the situation where an API has versions which behave
+differently. A programmer working on a new version may change the module to work with the
+new API version and unintentionally break the old version. A test case
+which checks that the call happens properly for the old version can help avoid the
+problem. In this situation it is very important to include version numbering in the test case
+name (see `Naming unit tests`_ below).
+
+Providing specific design tests
+--------------------------------
+
+By building a requirement for a particular part of the
+code and then coding to that requirement, unit tests *can* sometimes improve the code and
+help future developers understand that code.
+
+Unit tests that test internal implementation details of code, on the other hand, almost
+always do more harm than good. Testing that your packages to install are stored in a list
+would slow down and confuse a future developer who might need to change that list into a
+dictionary for efficiency. This problem can be reduced somewhat with clear test naming so
+that the future developer immediately knows to delete the test case, but it is often
+better to simply leave out the test case altogether and test for a real valuable feature
+of the code, such as installing all of the packages supplied as arguments to the module.
+
+
+How to unit test Ansible modules
+================================
+
+There are a number of techniques for unit testing modules. Beware that most
+modules without unit tests are structured in a way that makes testing quite difficult and
+can lead to very complicated tests which need more work than the code. Effectively using unit
+tests may lead you to restructure your code. This is often a good thing and leads
+to better code overall. Good restructuring can make your code clearer and easier to understand.
+
+
+Naming unit tests
+-----------------
+
+Unit tests should have logical names. If a developer working on the module being tested
+breaks the test case, it should be easy to figure out what the unit test covers from the name.
+If a unit test is designed to verify compatibility with a specific software or API version
+then include the version in the name of the unit test.
+
+As an example, ``test_v2_state_present_should_call_create_server_with_name()`` would be a
+good name, ``test_create_server()`` would not be.
+
+
+Use of Mocks
+------------
+
+Mock objects (from https://docs.python.org/3/library/unittest.mock.html) can be very
+useful in building unit tests for special / difficult cases, but they can also
+lead to complex and confusing coding situations. One good use for mocks would be in
+simulating an API. As with ``six``, the ``mock`` Python package is bundled with Ansible (use
+``import units.compat.mock``).
+
+Ensuring failure cases are visible with mock objects
+----------------------------------------------------
+
+Functions like :meth:`module.fail_json` are normally expected to terminate execution. When you
+run with a mock module object this doesn't happen since the mock always returns another mock
+from a function call. You can set up the mock to raise an exception, as shown in the next section, or you can
+assert that these functions have not been called in each test. For example:
+
+.. code-block:: python
+
+ module = MagicMock()
+ function_to_test(module, argument)
+ module.fail_json.assert_not_called()
+
+This applies not only to calling the main module but almost any other
+function in a module which gets the module object.
+
+
+Mocking of the actual module
+----------------------------
+
+The setup of an actual module is quite complex (see `Passing Arguments`_ below) and often
+isn't needed for most functions which use a module. Instead you can use a mock object as
+the module and create any module attributes needed by the function you are testing. If
+you do this, beware that the module exit functions need special handling as mentioned
+above, either by throwing an exception or ensuring that they haven't been called. For example:
+
+.. code-block:: python
+
+ class AnsibleExitJson(Exception):
+ """Exception class to be raised by module.exit_json and caught by the test case"""
+ pass
+
+    # you may also do the same for fail_json
+    module = MagicMock()
+    module.exit_json.side_effect = AnsibleExitJson
+    with self.assertRaises(AnsibleExitJson):
+        my_module.test_this_function(module, argument)
+    module.fail_json.assert_not_called()
+    # the values passed to exit_json can be inspected through the mock
+    assert module.exit_json.call_args[1]['changed'] is True
+
+API definition with unit test cases
+-----------------------------------
+
+API interaction is usually best tested with the function tests defined in Ansible's
+integration testing section, which run against the actual API. There are several cases
+where the unit tests are likely to work better.
+
+Defining a module against an API specification
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This case is especially important for modules interacting with web services, which provide
+an API that Ansible uses but which are beyond the control of the user.
+
+By writing a custom emulation of the calls that return data from the API, we can ensure
+that only the features which are clearly defined in the specification of the API are
+present in the message. This means that we can check that we use the correct
+parameters and nothing else.
+
+
+*Example: in rds_instance unit tests a simple instance state is defined*:
+
+.. code-block:: python
+
+ def simple_instance_list(status, pending):
+ return {u'DBInstances': [{u'DBInstanceArn': 'arn:aws:rds:us-east-1:1234567890:db:fakedb',
+ u'DBInstanceStatus': status,
+ u'PendingModifiedValues': pending,
+ u'DBInstanceIdentifier': 'fakedb'}]}
+
+This is then used to create a list of states:
+
+.. code-block:: python
+
+ rds_client_double = MagicMock()
+ rds_client_double.describe_db_instances.side_effect = [
+ simple_instance_list('rebooting', {"a": "b", "c": "d"}),
+ simple_instance_list('available', {"c": "d", "e": "f"}),
+ simple_instance_list('rebooting', {"a": "b"}),
+ simple_instance_list('rebooting', {"e": "f", "g": "h"}),
+ simple_instance_list('rebooting', {}),
+ simple_instance_list('available', {"g": "h", "i": "j"}),
+ simple_instance_list('rebooting', {"i": "j", "k": "l"}),
+ simple_instance_list('available', {}),
+ simple_instance_list('available', {}),
+ ]
+
+These states are then used as returns from a mock object to ensure that the ``await`` function
+waits through all of the states that would mean the RDS instance has not yet completed
+configuration:
+
+.. code-block:: python
+
+ rds_i.await_resource(rds_client_double, "some-instance", "available", mod_mock,
+ await_pending=1)
+ assert(len(sleeper_double.mock_calls) > 5), "await_pending didn't wait enough"
+
+By doing this we check that the ``await`` function will keep waiting through
+potentially unusual states that it would be impossible to reliably trigger through the
+integration tests but which happen unpredictably in reality.
+
+Defining a module to work against multiple API versions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This case is especially important for modules interacting with many different versions of
+software; for example, package installation modules that might be expected to work with
+many different operating system versions.
+
+By using previously stored data from various versions of an API we can ensure that the
+code is tested against the actual data which will be sent from that version of the system
+even when the version is very obscure and unlikely to be available during testing.
+
+Ansible special cases for unit testing
+======================================
+
+There are a number of special cases for unit testing the environment of an Ansible module.
+The most common are documented below, and suggestions for others can be found by looking
+at the source code of the existing unit tests or asking on the Ansible chat channel or mailing
+lists. For more information on joining chat channels and subscribing to mailing lists, see :ref:`communication`.
+
+Module argument processing
+--------------------------
+
+There are two problems with running the main function of a module:
+
+* Since the module is supposed to accept arguments on ``STDIN`` it is a bit difficult to
+ set up the arguments correctly so that the module will get them as parameters.
+* All modules should finish by calling either the :meth:`module.fail_json` or
+ :meth:`module.exit_json`, but these won't work correctly in a testing environment.
+
+Passing Arguments
+-----------------
+
+.. This section should be updated once https://github.com/ansible/ansible/pull/31456 is
+ closed since the function below will be provided in a library file.
+
+To pass arguments to a module correctly, use the ``set_module_args`` function, which accepts a dictionary
+as its parameter. Module creation and argument processing are
+handled through the :class:`AnsibleModule` object in the basic section of the utilities. Normally
+this accepts input on ``STDIN``, which is not convenient for unit testing. When the special
+variable ``basic._ANSIBLE_ARGS`` is set, its value is treated as if the input came on ``STDIN`` to the module. Simply call ``set_module_args`` before setting up your module:
+
+.. code-block:: python
+
+ import json
+ from units.modules.utils import set_module_args
+ from ansible.module_utils.common.text.converters import to_bytes
+
+ def test_already_registered(self):
+ set_module_args({
+ 'activationkey': 'key',
+ 'username': 'user',
+ 'password': 'pass',
+ })
+
+Handling exit correctly
+-----------------------
+
+.. This section should be updated once https://github.com/ansible/ansible/pull/31456 is
+ closed since the exit and failure functions below will be provided in a library file.
+
+The :meth:`module.exit_json` function won't work properly in a testing environment since it
+writes its return information to ``STDOUT`` upon exit, where it
+is difficult to examine. This can be mitigated by replacing it (and :meth:`module.fail_json`) with
+a function that raises an exception:
+
+.. code-block:: python
+
+ def exit_json(*args, **kwargs):
+ if 'changed' not in kwargs:
+ kwargs['changed'] = False
+ raise AnsibleExitJson(kwargs)
+
+Now you can ensure that the first function called is the one you expected simply by
+testing for the correct exception:
+
+.. code-block:: python
+
+ def test_returned_value(self):
+ set_module_args({
+ 'activationkey': 'key',
+ 'username': 'user',
+ 'password': 'pass',
+ })
+
+ with self.assertRaises(AnsibleExitJson) as result:
+ my_module.main()
+
+The same technique can be used to replace :meth:`module.fail_json` (which is used for failure
+returns from modules) and for the ``aws_module.fail_json_aws()`` (used in modules for Amazon
+Web Services).
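
For instance, the :meth:`module.fail_json` replacement follows the same pattern,
packaging the failure arguments into an exception that the test can catch:

```python
class AnsibleFailJson(Exception):
    """Exception raised in place of module.fail_json during tests."""

def fail_json(*args, **kwargs):
    # Package the failure return values into the exception so the
    # test can inspect them after catching it.
    kwargs['failed'] = True
    raise AnsibleFailJson(kwargs)
```

A test then asserts ``AnsibleFailJson`` is raised and inspects ``exception.args[0]``.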
+
+Running the main function
+-------------------------
+
+If you do want to run the actual main function of a module you must import the module, set
+the arguments as above, set up the appropriate exit exception and then run the module:
+
+.. code-block:: python
+
+ # This test is based around pytest's features for individual test functions
+ import pytest
+ import ansible.modules.module.group.my_module as my_module
+
+    def test_main_function(monkeypatch):
+        # fake_exit_json raises an exception, as described in the section above
+        monkeypatch.setattr(my_module.AnsibleModule, "exit_json", fake_exit_json)
+ set_module_args({
+ 'activationkey': 'key',
+ 'username': 'user',
+ 'password': 'pass',
+ })
+ my_module.main()
+
+
+Handling calls to external executables
+--------------------------------------
+
+Modules must use :meth:`AnsibleModule.run_command` in order to execute an external command. This
+method needs to be mocked in unit tests:
+
+Here is a simple mock of :meth:`AnsibleModule.run_command` (taken from :file:`test/units/modules/packaging/os/test_rhn_register.py`):
+
+.. code-block:: python
+
+    with patch.object(basic.AnsibleModule, 'run_command') as run_command:
+        run_command.return_value = 0, '', ''  # successful execution, no output
+        with self.assertRaises(AnsibleExitJson) as result:
+            my_module.main()
+        self.assertFalse(result.exception.args[0]['changed'])
+        # Check that run_command has been called exactly once with the expected arguments
+        run_command.assert_called_once_with('/usr/bin/command args')
+
+To assert instead that ``run_command`` was *not* called, use ``run_command.assert_not_called()``.
+
+
+A Complete Example
+------------------
+
+The following example is a complete skeleton that reuses the mocks explained above and adds a new
+mock for :meth:`AnsibleModule.get_bin_path`:
+
+.. code-block:: python
+
+ import json
+
+ from units.compat import unittest
+ from units.compat.mock import patch
+ from ansible.module_utils import basic
+ from ansible.module_utils.common.text.converters import to_bytes
+ from ansible.modules.namespace import my_module
+
+
+ def set_module_args(args):
+ """prepare arguments so that they will be picked up during module creation"""
+ args = json.dumps({'ANSIBLE_MODULE_ARGS': args})
+ basic._ANSIBLE_ARGS = to_bytes(args)
+
+
+ class AnsibleExitJson(Exception):
+ """Exception class to be raised by module.exit_json and caught by the test case"""
+ pass
+
+
+ class AnsibleFailJson(Exception):
+ """Exception class to be raised by module.fail_json and caught by the test case"""
+ pass
+
+
+ def exit_json(*args, **kwargs):
+ """function to patch over exit_json; package return data into an exception"""
+ if 'changed' not in kwargs:
+ kwargs['changed'] = False
+ raise AnsibleExitJson(kwargs)
+
+
+ def fail_json(*args, **kwargs):
+ """function to patch over fail_json; package return data into an exception"""
+ kwargs['failed'] = True
+ raise AnsibleFailJson(kwargs)
+
+
+ def get_bin_path(self, arg, required=False):
+ """Mock AnsibleModule.get_bin_path"""
+ if arg.endswith('my_command'):
+ return '/usr/bin/my_command'
+ else:
+ if required:
+ fail_json(msg='%r not found !' % arg)
+
+
+ class TestMyModule(unittest.TestCase):
+
+ def setUp(self):
+ self.mock_module_helper = patch.multiple(basic.AnsibleModule,
+ exit_json=exit_json,
+ fail_json=fail_json,
+ get_bin_path=get_bin_path)
+ self.mock_module_helper.start()
+ self.addCleanup(self.mock_module_helper.stop)
+
+ def test_module_fail_when_required_args_missing(self):
+ with self.assertRaises(AnsibleFailJson):
+ set_module_args({})
+ my_module.main()
+
+ def test_ensure_command_called(self):
+ set_module_args({
+ 'param1': 10,
+ 'param2': 'test',
+ })
+
+ with patch.object(basic.AnsibleModule, 'run_command') as mock_run_command:
+ stdout = 'configuration updated'
+ stderr = ''
+ rc = 0
+ mock_run_command.return_value = rc, stdout, stderr # successful execution
+
+ with self.assertRaises(AnsibleExitJson) as result:
+ my_module.main()
+                self.assertFalse(result.exception.args[0]['changed'])  # ensure result is not changed
+
+ mock_run_command.assert_called_once_with('/usr/bin/my_command --value 10 --name test')
+
+
+Restructuring modules to enable testing module set up and other processes
+-------------------------------------------------------------------------
+
+Often modules have a ``main()`` function which sets up the module and then performs other
+actions. This can make it difficult to check argument processing. Testing becomes easier if you
+move module configuration and initialization into a separate function. For example:
+
+.. code-block:: python
+
+ argument_spec = dict(
+ # module function variables
+ state=dict(choices=['absent', 'present', 'rebooted', 'restarted'], default='present'),
+ apply_immediately=dict(type='bool', default=False),
+ wait=dict(type='bool', default=False),
+ wait_timeout=dict(type='int', default=600),
+ allocated_storage=dict(type='int', aliases=['size']),
+ db_instance_identifier=dict(aliases=["id"], required=True),
+ )
+
+ def setup_module_object():
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=[['old_instance_id', 'source_db_instance_identifier',
+ 'db_snapshot_identifier']],
+ )
+ return module
+
+ def main():
+ module = setup_module_object()
+ validate_parameters(module)
+ conn = setup_client(module)
+ return_dict = run_task(module, conn)
+ module.exit_json(**return_dict)
+
+This now makes it possible to run tests against the module initialization function:
+
+.. code-block:: python
+
+ def test_rds_module_setup_fails_if_db_instance_identifier_parameter_missing():
+ # db_instance_identifier parameter is missing
+ set_module_args({
+ 'state': 'absent',
+ 'apply_immediately': 'True',
+ })
+
+        with pytest.raises(AnsibleFailJson):
+            my_module.setup_module_object()
+
+See also ``test/units/module_utils/aws/test_rds.py``
+
+Note that the ``argument_spec`` dictionary is exposed in a module-level variable. This has
+advantages, both in allowing explicit testing of the arguments and in allowing the easy
+creation of module objects for testing.
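+Because the spec is just a dictionary at module scope, a test can assert on its contents
+directly, without constructing an ``AnsibleModule`` at all. A small sketch, reusing a few keys
+from the hypothetical RDS spec above:

```python
# A module-level argument_spec, mirroring part of the RDS example above.
argument_spec = dict(
    state=dict(choices=['absent', 'present', 'rebooted', 'restarted'], default='present'),
    wait_timeout=dict(type='int', default=600),
    db_instance_identifier=dict(aliases=['id'], required=True),
)


def test_argument_spec_contract():
    # Defaults, aliases and required flags can be checked explicitly.
    assert argument_spec['state']['default'] == 'present'
    assert argument_spec['db_instance_identifier']['required'] is True
    assert 'id' in argument_spec['db_instance_identifier']['aliases']


test_argument_spec_contract()
```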
+
+The same restructuring technique can be valuable for testing other functionality, such as the part of the module which queries the object that the module configures.
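+For instance, a query step pulled out of ``main()`` into its own function can be exercised with a
+stubbed connection object. All names below are illustrative, not part of any real module:

```python
def get_instance_state(conn, instance_id):
    """Query helper split out of main() so it can be tested in isolation."""
    response = conn.describe_instance(instance_id)
    return response.get('state', 'unknown')


class StubConnection:
    """Stand-in for the real API client the module would create."""

    def describe_instance(self, instance_id):
        return {'id': instance_id, 'state': 'available'}


state = get_instance_state(StubConnection(), 'db-1')
```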
+
+Traps for maintaining Python 2 compatibility
+============================================
+
+If you use an older version of the ``mock`` library (such as the backport used with Python 2.6), a number of the
+assert functions are missing but will silently return as if successful. This means that test cases should take great care *not* to use
+assert functions marked as *new* in the Python 3 documentation, since the tests will likely always
+succeed even if the code is broken when run on older versions of Python.
+
+A helpful development approach is to ensure that all of the tests have been
+run under Python 2.6, and that each assertion in the test cases has been verified to fail by deliberately
+breaking the code in Ansible.
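+The underlying trap is ``Mock``'s auto-creation of attributes: accessing an undefined attribute
+on a ``Mock`` silently returns a new child ``Mock`` rather than raising. On old ``mock``
+releases this also applied to misspelled or not-yet-implemented ``assert_*`` methods, so a
+broken assertion simply "passed". The auto-creation behavior itself can be seen with the modern
+``unittest.mock``:

```python
from unittest import mock

m = mock.Mock()
m.frobnicate(42)  # no error: the attribute is auto-created on first access

# The auto-created child records its calls like any Mock.
assert m.frobnicate.called
assert m.frobnicate.call_args == mock.call(42)

# Calling an auto-created attribute does not verify anything; it just
# returns another Mock -- which is why a typo'd assert method on old
# mock releases "succeeded" without checking a thing.
result = m.check_nothing_at_all()
```

+Modern ``unittest.mock`` raises ``AttributeError`` for misspelled attributes that start with
+``assert``, which closes this particular hole.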
+
+.. warning:: Maintain Python 2.6 compatibility
+
+ Please remember that modules need to maintain compatibility with Python 2.6 so the unittests for
+ modules should also be compatible with Python 2.6.
+
+
+.. seealso::
+
+ :ref:`testing_units`
+ Ansible unit tests documentation
+ :ref:`testing_running_locally`
+ Running tests locally including gathering and reporting coverage data
+ :ref:`developing_modules_general`
+ Get started developing a module
+ `Python 3 documentation - 26.4. unittest — Unit testing framework <https://docs.python.org/3/library/unittest.html>`_
+       The documentation of the unittest framework in Python 3
+   `Python 2 documentation - 25.3. unittest — Unit testing framework <https://docs.python.org/2/library/unittest.html>`_
+ The documentation of the earliest supported unittest framework - from Python 2.6
+ `pytest: helps you write better programs <https://docs.pytest.org/en/latest/>`_
+ The documentation of pytest - the framework actually used to run Ansible unit tests
+ `Development Mailing List <https://groups.google.com/group/ansible-devel>`_
+ Mailing list for development topics
+ `Testing Your Code (from The Hitchhiker's Guide to Python!) <https://docs.python-guide.org/writing/tests/>`_
+ General advice on testing Python code
+ `Uncle Bob's many videos on YouTube <https://www.youtube.com/watch?v=QedpQjxBPMA&list=PLlu0CT-JnSasQzGrGzddSczJQQU7295D2>`_
+       Unit testing is a part of various philosophies of software development, including
+       Extreme Programming (XP) and Clean Coding. Uncle Bob talks through how to benefit from this.
+ `"Why Most Unit Testing is Waste" <https://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf>`_
+ An article warning against the costs of unit testing
+ `'A Response to "Why Most Unit Testing is Waste"' <https://henrikwarne.com/2014/09/04/a-response-to-why-most-unit-testing-is-waste/>`_
+       A response explaining how to maintain the value of unit tests
diff --git a/docs/docsite/rst/dev_guide/testing_validate-modules.rst b/docs/docsite/rst/dev_guide/testing_validate-modules.rst
new file mode 100644
index 0000000..1c74fe3
--- /dev/null
+++ b/docs/docsite/rst/dev_guide/testing_validate-modules.rst
@@ -0,0 +1,7 @@
+:orphan:
+
+****************
+validate-modules
+****************
+
+This page has moved to :ref:`testing_validate-modules`.