author    | Daniel Baumann <daniel.baumann@progress-linux.org> | 2024-04-18 05:52:27 +0000
committer | Daniel Baumann <daniel.baumann@progress-linux.org> | 2024-04-18 05:52:27 +0000
commit    | 3b0807ad7b283c46c21862eb826dcbb4ad04e5e2 (patch)
tree      | 6461ea75f03eca87a5a90c86c3c9a787a6ad037e /ansible_collections/ovirt
parent    | Adding debian version 7.7.0+dfsg-3. (diff)
download  | ansible-3b0807ad7b283c46c21862eb826dcbb4ad04e5e2.tar.xz, ansible-3b0807ad7b283c46c21862eb826dcbb4ad04e5e2.zip
Merging upstream version 9.4.0+dfsg.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'ansible_collections/ovirt')
101 files changed, 4093 insertions, 3817 deletions
diff --git a/ansible_collections/ovirt/ovirt/.config/ansible-lint.yml b/ansible_collections/ovirt/ovirt/.config/ansible-lint.yml
index 77bdaa939..82e3a82c1 100644
--- a/ansible_collections/ovirt/ovirt/.config/ansible-lint.yml
+++ b/ansible_collections/ovirt/ovirt/.config/ansible-lint.yml
@@ -15,3 +15,4 @@ warn_list:
   - fqcn[action]
   - name[missing]
   - deprecated-module
+  - command-instead-of-module
diff --git a/ansible_collections/ovirt/ovirt/CHANGELOG.rst b/ansible_collections/ovirt/ovirt/CHANGELOG.rst
index ff17b4b6d..ad557e886 100644
--- a/ansible_collections/ovirt/ovirt/CHANGELOG.rst
+++ b/ansible_collections/ovirt/ovirt/CHANGELOG.rst
@@ -5,32 +5,85 @@ ovirt.ovirt Release Notes
 .. contents:: Topics
-v2.4.1
+v3.2.0
 ======
+Minor Changes
+-------------
+
+- ovirt_vm - Add tpm_enabled (https://github.com/oVirt/ovirt-ansible-collection/pull/722).
+- storage_error_resume_behaviour - Support VM storage error resume behaviour "auto_resume", "kill", "leave_paused". (https://github.com/oVirt/ovirt-ansible-collection/pull/721)
+- vm_infra - Support boot disk renaming and resizing. (https://github.com/oVirt/ovirt-ansible-collection/pull/705)
+
 Bugfixes
 --------
-- cluster_upgrade - Fix the engine_correlation_id location (https://github.com/oVirt/ovirt-ansible-collection/pull/637).
+- ovirt_role - Fix administrative option when set to False (https://github.com/oVirt/ovirt-ansible-collection/pull/723).
-v2.4.0
+v3.1.3
 ======
 Bugfixes
 --------
-- cluster_upgrade - Add default random uuid to engine_correlation_id (https://github.com/oVirt/ovirt-ansible-collection/pull/624).
-- image_template - Add template_bios_type (https://github.com/oVirt/ovirt-ansible-collection/pull/620).
+- HE - add back dependency on python3-jmespath (https://github.com/oVirt/ovirt-ansible-collection/pull/701)
+- HE - drop remaining filters using netaddr (https://github.com/oVirt/ovirt-ansible-collection/pull/702)
+- HE - drop usage of ipaddr filters and remove dependency on python-netaddr (https://github.com/oVirt/ovirt-ansible-collection/pull/696)
+- HE - fix ipv4 and ipv6 check after dropping netaddr (https://github.com/oVirt/ovirt-ansible-collection/pull/704)
+- hosted_engine_setup - Update README (https://github.com/oVirt/ovirt-ansible-collection/pull/706)
+- ovirt_disk - Fix issue in detaching the direct LUN (https://github.com/oVirt/ovirt-ansible-collection/pull/700)
+- ovirt_quota - Convert storage size to integer (https://github.com/oVirt/ovirt-ansible-collection/pull/712)
+
+v3.1.1
+======
+
+Bugfixes
+--------
+
+- hosted_engine_setup - Vdsm now uses -n flag for all qemu-img convert calls (https://github.com/oVirt/ovirt-ansible-collection/pull/682).
+- ovirt_cluster_info - Fix example patter (https://github.com/oVirt/ovirt-ansible-collection/pull/684).
+- ovirt_host - Fix refreshed state action (https://github.com/oVirt/ovirt-ansible-collection/pull/687).
+
+v3.1.0
+======
-v2.3.1
+Minor Changes
+-------------
+
+- ovirt_host - Add refreshed state (https://github.com/oVirt/ovirt-ansible-collection/pull/673).
+- ovirt_network - Add default_route usage to the ovirt_network module (https://github.com/oVirt/ovirt-ansible-collection/pull/647).
+
+Bugfixes
+--------
+
+- engine_setup - Remove provision_docker from tests (https://github.com/oVirt/ovirt-ansible-collection/pull/677).
+- he-setup - Log the output sent to the serial console of the HostedEngineLocal VM to a file on the host, to allow diagnosing failures in that stage (https://github.com/oVirt/ovirt-ansible-collection/pull/664).
+- he-setup - Run virt-install with options more suitable for debugging (https://github.com/oVirt/ovirt-ansible-collection/pull/664).
+- he-setup - recently `virsh net-destroy default` doesn't delete the `virbr0`, so we need to delete it expicitly (https://github.com/oVirt/ovirt-ansible-collection/pull/661).
+- info modules - Use dynamic collection name instead of ovirt.ovirt for deprecation warning (https://github.com/oVirt/ovirt-ansible-collection/pull/653).
+- module_utils - replace `getargspec` with `getfullargspec` to support newer python 3.y versions (https://github.com/oVirt/ovirt-ansible-collection/pull/663).
+- ovirt_host - Wait for host to be in result state during upgrade (https://github.com/oVirt/ovirt-ansible-collection/pull/621)
+
+v3.0.0
 ======
+Minor Changes
+-------------
+
+- Improving "ovirt_disk" and "disaster_recovery" documentation (https://github.com/oVirt/ovirt-ansible-collection/pull/562).
+
 Bugfixes
 --------
+- Remove the 'warn:' argument (https://github.com/oVirt/ovirt-ansible-collection/pull/627).
+- cluster_upgrade - Add default random uuid to engine_correlation_id (https://github.com/oVirt/ovirt-ansible-collection/pull/624).
+- cluster_upgrade - Fix the engine_correlation_id location (https://github.com/oVirt/ovirt-ansible-collection/pull/637).
 - filters - Fix ovirtvmipsv4 with attribute and network (https://github.com/oVirt/ovirt-ansible-collection/pull/607).
 - filters - Fix ovirtvmipsv4 with filter to list (https://github.com/oVirt/ovirt-ansible-collection/pull/609).
+- image_template - Add template_bios_type (https://github.com/oVirt/ovirt-ansible-collection/pull/620).
+- info modules - Bump the deprecation version of fetch_nested and nested_attributes (https://github.com/oVirt/ovirt-ansible-collection/pull/610).
 - ovirt_host - Fix kernel_params elemets type (https://github.com/oVirt/ovirt-ansible-collection/pull/608).
+- ovirt_nic - Add network_filter_parameters (https://github.com/oVirt/ovirt-ansible-collection/pull/623).
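For orientation, here is a minimal playbook sketch of two of the options referenced in the release notes above: the `tpm_enabled` flag on ovirt_vm (pull/722) and the `refreshed` state on ovirt_host (pull/673). It is not part of this diff; the VM, cluster, and host names are hypothetical placeholders, and it assumes an `ovirt_auth` fact obtained beforehand with the ovirt.ovirt.ovirt_auth module.

```yaml
# Illustrative sketch only; demo_vm, demo_cluster and demo_host are placeholder names.
- name: Enable an emulated TPM device on an existing VM (tpm_enabled, added in 3.2.0)
  ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"   # assumed to have been returned earlier by ovirt.ovirt.ovirt_auth
    name: demo_vm
    cluster: demo_cluster
    tpm_enabled: true
    state: present

- name: Refresh host capabilities (refreshed state, added in 3.1.0)
  ovirt.ovirt.ovirt_host:
    auth: "{{ ovirt_auth }}"
    name: demo_host
    state: refreshed
```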
v2.3.0 ====== diff --git a/ansible_collections/ovirt/ovirt/FILES.json b/ansible_collections/ovirt/ovirt/FILES.json index 7980a48bc..63575a16c 100644 --- a/ansible_collections/ovirt/ovirt/FILES.json +++ b/ansible_collections/ovirt/ovirt/FILES.json @@ -8,906 +8,822 @@ "format": 1 }, { - "name": "automation", + "name": "tests", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "automation/README.md", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "c29749d822aebf8e458fb5dddef632ad990b77ec16543ba0984589ab53064608", - "format": 1 - }, - { - "name": "automation/build.sh", + "name": "tests/.gitignore", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4ef53c1759ac7c884e1429fc03e9eecfecbc74ac8800f5644eb13a3059fc2c02", - "format": 1 - }, - { - "name": "changelogs", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "b5726d3ec9335a09c124469eca039523847a6b0f08a083efaefd002b83326600", "format": 1 }, { - "name": "changelogs/fragments", + "name": "tests/sanity", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "changelogs/fragments/.placeholder", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "6eeaeb3bcb0e4223384351f71d75972cc5977d76a808010a8af20e3a2c67fefc", - "format": 1 - }, - { - "name": "changelogs/README.md", + "name": "tests/sanity/ignore-2.11.txt", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "be61604e7e4d2c3d2c1a6834828bc05589fba2c4b80332a9476c8c2598b3389b", + "chksum_sha256": "4d7cce85a95c81e141430e06e003bfa93c32d81a43d898ce259435c700a33874", "format": 1 }, { - "name": "changelogs/config.yaml", + "name": "tests/sanity/ignore-2.13.txt", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a9855447b14e048a16cd7877ffeab3bfe07496680c55055a3e8de8c0d2fb64bd", + "chksum_sha256": "a876fafed252ef597357519ce5892e849d521f97688f5148f61be2959cbddefe", "format": 1 }, { - "name": "changelogs/changelog.yaml", + "name": "tests/sanity/ignore-2.10.txt", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8e50d3236e8cccb15c89ccca4bdc7c795dfe6edac79dee7dbfa2b2ee62862b68", - "format": 1 - }, - { - "name": "examples", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "27ff7e16f9f191122d72a4f958a43cb908419f69bde8b6c9036bb0d9c768bfb8", "format": 1 }, { - "name": "examples/filters", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, - "format": 1 - }, - { - "name": "examples/filters/ovirtdiff.yml", + "name": "tests/sanity/ignore-2.9.txt", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "66d5ef341d6075c3bf9671fd5b25f41642ef94127ca295e69901b41da9242e2d", + "chksum_sha256": "27ff7e16f9f191122d72a4f958a43cb908419f69bde8b6c9036bb0d9c768bfb8", "format": 1 }, { - "name": "examples/filters/vmips.yml", + "name": "tests/sanity/ignore-2.14.txt", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f3c0ede23c25926f83af778160d4b52ea6c11e1dde1e97233dfff27ab8ab835b", + "chksum_sha256": "31ffc9547ed1721c400015b8e9a88c318cbb55de45190ad69d2f2f18eb6a969f", "format": 1 }, { - "name": "examples/ovirt_ansible_collections.yml", + "name": "tests/sanity/ignore-2.12.txt", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e81805423a3ebec7b37b6d380a6fa76732fb3325f3af170eb498c481ddad1873", + "chksum_sha256": "de99e31740d5ce5e50621e8af761263f5b20e0b1b63f03b9d09e8193adbcd06e", "format": 1 }, { - "name": "licenses", + "name": "exported-artifacts", "ftype": "dir", "chksum_type": 
null, "chksum_sha256": null, "format": 1 }, { - "name": "licenses/Apache-license.txt", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "f6d5b461deb8038ce0e083c9cb7f59859caa04c9b4f72149367393e9b252cf14", - "format": 1 - }, - { - "name": "licenses/GPL-license.txt", + "name": "README-developers.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8ceb4b9ee5adedde47b31e975c1d90c73ad27b6b165a1dcd80c7c545eb65b903", + "chksum_sha256": "81e38bf32f2a201d965eb10891068c1a56cc43e6ffd83c07f3f95442a1ab0e59", "format": 1 }, { - "name": "meta", + "name": ".config", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "meta/execution-environment.yml", + "name": ".config/ansible-lint.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "30270de38aee5490073ea0c04a2202948e0edeb671fc0d5f0d441472c6856592", + "chksum_sha256": "9d9029efea5764fff0ca332bad902998f42db58f32a9b486d4ea04c0a9b6370a", "format": 1 }, { - "name": "meta/requirements.yml", + "name": "build.sh", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "12b1ba483812c1f1012e4379c1fad9039ff728d2be82d2d1cd96118e9ff7b96b", + "chksum_sha256": "88d767d07f0d0da1948e410a56ccb68fbe2fe6c787aa05571dcc0088978b42dc", "format": 1 }, { - "name": "meta/runtime.yml", + "name": "CHANGELOG.rst", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "25354b3afabd2b5a0c3e209aeb30b9002752345651a4dbd6e74adcc0291999c2", + "chksum_sha256": "3ee54fc3c1abca7427386ccadab16b3561fbc4baab71319aa332b36e3b1ff0e9", "format": 1 }, { - "name": "plugins", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "requirements.txt", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "948e536d6fe99ae26067a6e1842a2947aee1094fe8baaa8cf136a578ee99b0bd", "format": 1 }, { - "name": "plugins/callback", + "name": "examples", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "plugins/callback/stdout.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "1945aee0ab3daf085f1ebe4b99f028160aded0bc7de35059cb41ed5fb4761db9", - "format": 1 - }, - { - "name": "plugins/doc_fragments", + "name": "examples/filters", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "plugins/doc_fragments/ovirt.py", + "name": "examples/filters/ovirtdiff.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a0cda744dca79d659df4846df7b0e257ba33f5e92e8f98776a6e20b49a1b285e", + "chksum_sha256": "66d5ef341d6075c3bf9671fd5b25f41642ef94127ca295e69901b41da9242e2d", "format": 1 }, { - "name": "plugins/doc_fragments/ovirt_info.py", + "name": "examples/filters/vmips.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "87131c23c708320037e45ebd46773d2e48fcb38ba79503a5e573dd1037a857d2", - "format": 1 - }, - { - "name": "plugins/filter", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "f3c0ede23c25926f83af778160d4b52ea6c11e1dde1e97233dfff27ab8ab835b", "format": 1 }, { - "name": "plugins/filter/convert_to_bytes.py", + "name": "examples/ovirt_ansible_collections.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8b83892c6a71f5cab3c93dfc93626d901b7cef7bfd704255aa1ac78424ae3426", + "chksum_sha256": "e81805423a3ebec7b37b6d380a6fa76732fb3325f3af170eb498c481ddad1873", "format": 1 }, { - "name": "plugins/filter/convert_to_bytes.yml", + "name": "README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": 
"5cd2b833e5f7de2cca994ddcbf3d2c9a99d4e989ab9cf78c231336fc04c15fa4", + "chksum_sha256": "ef258c94b36aa5ffbacb78c9393229ecdc1e392a97338cbfd690e46a989bb936", "format": 1 }, { - "name": "plugins/filter/filtervalue.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "d26325ae24aa363744d7166bf017dcf53fa74ef74f1d3299a3d99d299db307b9", + "name": "roles", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/filter/get_network_xml_to_dict.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "02f7e3f247c2d4dc5d3c4191aec83a5017338d5004b0008fd585fd16d0523d54", + "name": "roles/repositories", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/filter/get_ovf_disk_size.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "b58e408d4fcd3ed6ac7016d984588c3565b83b3e2139cb71e11ba8aef38c9d18", + "name": "roles/repositories/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/filter/get_ovf_disk_size.yml", + "name": "roles/repositories/tasks/search-pool-id.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8fc13373ae2e97d7e2b90b73c6e0f235eec00d0e8327475c5ccde830ce39a868", + "chksum_sha256": "1982ab263ca68de6cf0067767b11702a2c4ab146432ed1b11b510715d7697e36", "format": 1 }, { - "name": "plugins/filter/json_query.py", + "name": "roles/repositories/tasks/install-satellite-ca.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "1754fc223cf8315816d846798dad5e9a07daef8e1b6adaa282b15afa3ca48983", + "chksum_sha256": "b1a7294752c54db0db59f419718b94b4c0da63dcb06e9725ec5e03c6877ba18c", "format": 1 }, { - "name": "plugins/filter/json_query.yml", + "name": "roles/repositories/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "1a0062d16af7209efcb4a4bb2cd2b5329a54f9d19311323097c86d90e63b082f", + "chksum_sha256": "f22d7a5ac1fd5b801067ac70b05cda17c43bb976957402693362a95908414cc3", "format": 1 }, { - "name": "plugins/filter/ovirtdiff.yml", + "name": "roles/repositories/tasks/rh-subscription.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "dc1bc2085850080372e875c508069dd417262df2c99bef29c47aa457f161aec1", + "chksum_sha256": "3828f467cb646ad4597451a0eac2db1fc400451c2d1d1df78ac3de7819970cd2", "format": 1 }, { - "name": "plugins/filter/ovirtvmip.py", + "name": "roles/repositories/tasks/satellite-subscription.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "95377cac0c2916f8e15eb504f1541327dbfb729ce0e33165c83e3b0bc574e7d6", + "chksum_sha256": "ee4899e118cb8303a854ea519fab4d81bdcf023a9b58e8be7bd4a84caff41a49", "format": 1 }, { - "name": "plugins/filter/ovirtvmip.yml", + "name": "roles/repositories/tasks/rpm.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "db8735e7f4e469300d3505718061031d71d3483616723fad483ed9cd74ee4cb1", + "chksum_sha256": "156084c0809e826021e2f960eaa59eaa81bb783ea7b60475a586172f617148c7", "format": 1 }, { - "name": "plugins/filter/ovirtvmips.yml", + "name": "roles/repositories/tasks/backup-repos.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "bd934b7bcf81c838f0f601446953c82a05887e04769ab6f80ecfb786fe56566d", + "chksum_sha256": "7c9af33497f79b5246552693d4cdf1d67ab172049a77e4712df0e6945ef1ec14", "format": 1 }, { - "name": "plugins/filter/ovirtvmipsv4.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": 
"ce9ef1c18ddfbe6c42ba50095c398321470f7399f5a0fe967095e513b4ba2bcd", + "name": "roles/repositories/examples", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/filter/ovirtvmipsv6.yml", + "name": "roles/repositories/examples/ovirt_repositories_release_rpm.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "5075356bf5fa22a0d2601baf03fc50bdb6c91aad9543f3983d8bf8ee2b3080d4", + "chksum_sha256": "16f013b459194303f4a4b16485a9ded42c866ae244c024ef6bca5e544e1779cd", "format": 1 }, { - "name": "plugins/filter/ovirtvmipv4.yml", + "name": "roles/repositories/examples/ovirt_repositories_subscription_manager.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "838bffeb5436ac25f3888124eab7dc1903d432f75d8e43bd73dba1433317d5d5", + "chksum_sha256": "bb5e84201ed0b91de44f606f4b2a930ce07065de4eb98ce137d41256399e1266", "format": 1 }, { - "name": "plugins/filter/ovirtvmipv6.yml", + "name": "roles/repositories/examples/passwords.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "833ee9ec1c40054ba6349593176630fe3fc305baf026f41bf880a655bbf697a9", + "chksum_sha256": "7baec1da55ec214cdeaf66cb5fbdce88498268997b2a4bb5b6a3fc5a093e4e06", "format": 1 }, { - "name": "plugins/filter/removesensitivevmdata.yml", + "name": "roles/repositories/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "bac378ead2a6a37613460d3852756744706d17d8a892205594363d183ccf7b86", + "chksum_sha256": "e3c85a61e0991f9316f14f9dfcef8169169f4afeaeb6f62903900e15cb2aabb6", "format": 1 }, { - "name": "plugins/inventory", + "name": "roles/repositories/vars", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "plugins/inventory/ovirt.py", + "name": "roles/repositories/vars/engine_4.1.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0e0e0349a91b28f4628726cc43c379b1eff80ee6297687a95217e3b60889c6a4", - "format": 1 - }, - { - "name": "plugins/module_utils", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "f01ec6b4fcfc630b4f8a139b4f30123e7805ed2976cba75dc9a3e3c380fc5db1", "format": 1 }, { - "name": "plugins/module_utils/__init__.py", + "name": "roles/repositories/vars/rhvh_4.1.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", + "chksum_sha256": "cbc95494cc017f3b7ccf608dc59b77394847929474531547fe5a6448d71d8b16", "format": 1 }, { - "name": "plugins/module_utils/cloud.py", + "name": "roles/repositories/vars/host_eus_4.4.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "2fc3ad35c92926ddc389feb93244ed0432321a0ff01861f2a62c96582991298c", + "chksum_sha256": "ddef8e1dda88e910e2a5d83f1fae2c851757de805321235ac166b1458b1a39b6", "format": 1 }, { - "name": "plugins/module_utils/ovirt.py", + "name": "roles/repositories/vars/engine_4.4.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "9664d0a239e76e27c15f1f5a8aa3c78e9ce6db8691d51ce38fb5a874010f5da6", + "chksum_sha256": "18f42ea5ce1dc798ee607357fa74569d8f069e7816f07ae9f79be74a9c1e91d2", "format": 1 }, { - "name": "plugins/module_utils/version.py", + "name": "roles/repositories/vars/engine_4.3.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "b3de6a89533b19b883f7f3319e8de780acc0d53f3f5caed1f3006e384232ce60", - "format": 1 - }, - { - "name": "plugins/modules", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": 
"6936324dbf3686dab7f3a0fd7586a7f1db9d56e1fcc60c8033b94522d393997e", "format": 1 }, { - "name": "plugins/modules/__init__.py", + "name": "roles/repositories/vars/engine_eus_4.4.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", + "chksum_sha256": "ddef8e1dda88e910e2a5d83f1fae2c851757de805321235ac166b1458b1a39b6", "format": 1 }, { - "name": "plugins/modules/ovirt_affinity_group.py", + "name": "roles/repositories/vars/host_4.1.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "094bf12fcae763c9b0d8d662c4a9ac87ca3f0721224c08a5324aeb940154f8d7", + "chksum_sha256": "b3171ba133adc54ba539e763246251b0f833dc8603d5a46243b55d82fbb80490", "format": 1 }, { - "name": "plugins/modules/ovirt_affinity_label.py", + "name": "roles/repositories/vars/host_4.2.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "c612b78a1282cf0ada1e46779879ba79dfcbeec9f6919bfc5c688894513a505b", + "chksum_sha256": "8a97eeb8025db4ed4a5c88bf2a652f41982f48a2ce195e3c47b0990897873cd6", "format": 1 }, { - "name": "plugins/modules/ovirt_api_info.py", + "name": "roles/repositories/vars/rhvh_4.2.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "d27663995af7b31f1dc83dc14de62d977ce6a2f7bec143938caad2bef1fcbb09", + "chksum_sha256": "cbc95494cc017f3b7ccf608dc59b77394847929474531547fe5a6448d71d8b16", "format": 1 }, { - "name": "plugins/modules/ovirt_auth.py", + "name": "roles/repositories/vars/engine_4.2.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "96b43cc0659ec70a9b89b519bbb66c82d6bcb8443e79e56dd4b35fb4124d22c9", + "chksum_sha256": "ced8a355735ce4d3636dc76bc7d63a6a71834064155399f971d9cb37da3237c1", "format": 1 }, { - "name": "plugins/modules/ovirt_cluster.py", + "name": "roles/repositories/vars/host_ppc_eus_4.4.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "5f7fe11427cdda046d1c1aeb39c60c2ca49ca41a8d0d36500e96db961ae74ce4", + "chksum_sha256": "7b2a81f08cee1bbf2fa7431df24eb4c47697227531b05efdd6666eb3af4626b7", "format": 1 }, { - "name": "plugins/modules/ovirt_datacenter.py", + "name": "roles/repositories/vars/rhvh_4.3.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "82728d18290f7e671043c3e51db9b858f03b203be5ab178b7db4d621b2b64356", + "chksum_sha256": "cbc95494cc017f3b7ccf608dc59b77394847929474531547fe5a6448d71d8b16", "format": 1 }, { - "name": "plugins/modules/ovirt_disk_profile.py", + "name": "roles/repositories/vars/rhvh_4.4.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "67d05ef0ad11058ed6a18da8a4079ab8d4d8902a797412646d8e7f2d09464c35", + "chksum_sha256": "fe7220fb776160b30f86fe7f9b70c41ae4d26e774d14a80951bf9b91aaacaffb", "format": 1 }, { - "name": "plugins/modules/ovirt_event.py", + "name": "roles/repositories/vars/host_4.3.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "fd7d258428744e7109fea1b8dfe4250e9e0d07664cf583141bdef23cacab4b1b", + "chksum_sha256": "ec3616b3d9433ef599822a6131e7d3168d5b5bb75712f0b69a1c822459cd6145", "format": 1 }, { - "name": "plugins/modules/ovirt_external_provider.py", + "name": "roles/repositories/vars/default.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "2253c69cd894e5d8b1d13b19e374cfbd8290ee683058ef333b6119c16828df7b", + "chksum_sha256": "8f3190dd83d2c27e2cd4b4cc36b9f574075ac41cd8a62823a7a9c119c0bae624", "format": 1 }, { - "name": "plugins/modules/ovirt_group.py", + "name": "roles/repositories/vars/host_ppc_4.4.yml", "ftype": "file", "chksum_type": "sha256", - 
"chksum_sha256": "7cd6538d4e18d19e816ca078ea597702b596cbea738ca4bce9f2cde20745b9fb", + "chksum_sha256": "89c1925d681185c71e2a249f30d0cc1efc885aa2339c5866b01f2459f1ddad5f", "format": 1 }, { - "name": "plugins/modules/ovirt_host.py", + "name": "roles/repositories/vars/host_4.4.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "c45c02372b6c9a67e57481ff65f6f45a5b340deaa33a3aedac58e47babe0840a", + "chksum_sha256": "dad02834d3f927ce052a2e9180f7a809d905369691d0707342b6557934315fc5", "format": 1 }, { - "name": "plugins/modules/ovirt_host_network.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "f9f283291e0c66aea729c6f18d9905cd2ba7e0c598ae0636c493f3068a610584", + "name": "roles/repositories/defaults", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_host_pm.py", + "name": "roles/repositories/defaults/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "51ec49b40fa65d456a38048f325df7982c7debdeb48bd137318c30240f155b29", + "chksum_sha256": "db4e3a15d6e4a0b7dc391ab7892418487bb2391e453d837ee77770989101cb22", "format": 1 }, { - "name": "plugins/modules/ovirt_instance_type.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "b1856d38726508ff171c3b8d2d5aea7342a2e62c254de87a28cb7ddc1b090752", + "name": "roles/cluster_upgrade", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_job.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "9d92d289cc7309d7304cf5d5601fd0806b9cef7e179fd16bdf4ed45e43912d51", + "name": "roles/cluster_upgrade/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_mac_pool.py", + "name": "roles/cluster_upgrade/tasks/log_progress.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "2c5822a799199e4d7d6e59ec7c0f67a3e56e5728ac0aa5a217b4746f7d4155dc", + "chksum_sha256": "f0f582981dba4b55b3787e86193072002345af82530d37cebf68ea0c8f140244", "format": 1 }, { - "name": "plugins/modules/ovirt_network.py", + "name": "roles/cluster_upgrade/tasks/upgrade.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "74b1b135135714e20a39b82d4ba80b78a146b57bd7f09ecf8baf7f2b4a29eacc", + "chksum_sha256": "1bce42a117b07403420e8f7f541ef8e4607d45b5455f43c32b5f58f2f5aad24d", "format": 1 }, { - "name": "plugins/modules/ovirt_permission.py", + "name": "roles/cluster_upgrade/tasks/pinned_vms.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "da6d49d7868f708a6af6c2439df2e45a4b3df00ac40574e24e6f33ec9569d0ba", + "chksum_sha256": "a66b3cdb5176d43714f92c16dbcd0a5a4393e49a67ecdeed78965c3c69abbeb0", "format": 1 }, { - "name": "plugins/modules/ovirt_qos.py", + "name": "roles/cluster_upgrade/tasks/cluster_policy.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "02c19d26cb46441133503e4b9cd6415d79d3d40bc8ee529a1421b172ace5d9df", + "chksum_sha256": "139ec62ccbbe68bca8da3eac9675ad610253d5a34f7ba192620bae3a2c924242", "format": 1 }, { - "name": "plugins/modules/ovirt_quota.py", + "name": "roles/cluster_upgrade/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7560839553cf19fbb2cdab3784f2c19e7dc2ed6beda2dcf9137b57156ea7d004", + "chksum_sha256": "a9a3b444a6b0c04937f184388b513508942f1c1b21393a869709c58bc2d7dad2", "format": 1 }, { - "name": "plugins/modules/ovirt_role.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": 
"69567225f0e6c4d2d84a84e562b6881d648b8422141b318efd225459eebd99a7", + "name": "roles/cluster_upgrade/examples", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_snapshot.py", + "name": "roles/cluster_upgrade/examples/passwords.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ef2315d7cf9362a56213acbc5d17d8bdf7c9180dd6dcb5cf875d402add95ca5a", + "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", "format": 1 }, { - "name": "plugins/modules/ovirt_storage_connection.py", + "name": "roles/cluster_upgrade/examples/cluster_upgrade.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a7c12a7215094295957cd95b9040220cad09db91492cfd463ffa9fe179d32f82", + "chksum_sha256": "e5328d9e91b8d6e3a6464baae5fc0398f848cfcbcb75398fbdf659ad1b73a403", "format": 1 }, { - "name": "plugins/modules/ovirt_storage_domain.py", + "name": "roles/cluster_upgrade/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a8ad293631ebe87bfdc3e38e6bb39827bee67b3a2c1e66a9a278c60cdcbf926b", + "chksum_sha256": "7b761686d3fa460a4f440601407721aff898e919363fac5a2ee90cede476e5e9", "format": 1 }, { - "name": "plugins/modules/ovirt_tag.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "41961b31845824591a0691d386e9a8f8ac2c1b85415b522109a8f1f4e798e161", + "name": "roles/cluster_upgrade/defaults", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_template.py", + "name": "roles/cluster_upgrade/defaults/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4d92fa178a06c72c049e5bed7c8b20eb88d243c8fb894c94832e07284a853d89", + "chksum_sha256": "d8ca3c1448aef8b92f96382f9b6706333724dffd426c98eb8212f1fb526cdbcd", "format": 1 }, { - "name": "plugins/modules/ovirt_user.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "3002aae2edeff14d001c9bea614ad29c92132ae4175ce678c3269b1387f1dad0", + "name": "roles/shutdown_env", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_vm.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "81b38693a37162b80e534c674d4a93a4d1b0873849cec6ac58a4216f816b25c7", + "name": "roles/shutdown_env/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_vmpool.py", + "name": "roles/shutdown_env/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "badbc107526c539e0a22a127931d80a3621caf3ca244a6e336ea38262e08b20c", + "chksum_sha256": "8ee1707f55bef637da0a715c984a5bcfaa70dca0662b00a4344203d8750fc453", "format": 1 }, { - "name": "plugins/modules/ovirt_vnic_profile.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "165440d6bc6f89eccd5b32b2ea80c4990f10406a241b3338f47577a449872125", + "name": "roles/shutdown_env/examples", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_affinity_label_info.py", + "name": "roles/shutdown_env/examples/shutdown_env.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "9c2ec4c338c4fd133d694afb3fc6cefc38e80b34f97aeb333044dc319e2c803d", + "chksum_sha256": "845381aee5af25a91b98ae136d2b68fe217c686e21caa74b2016405c98194d5f", "format": 1 }, { - "name": "plugins/modules/ovirt_cluster_info.py", + "name": "roles/shutdown_env/examples/passwords.yml", "ftype": "file", 
"chksum_type": "sha256", - "chksum_sha256": "26de35adbb385309a7637f60dfac502c097c36ea16b7cf8abe197d71534a1fd0", + "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", "format": 1 }, { - "name": "plugins/modules/ovirt_datacenter_info.py", + "name": "roles/shutdown_env/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "5032925f5b2d4ab3bc65d2e94d7c52d2611183114521c6e783db73d6acd9d2c7", + "chksum_sha256": "d1bb8523fef9d1dc2ccd7202761c9085edb675f01d3205401117be6311cd1e0e", "format": 1 }, { - "name": "plugins/modules/ovirt_disk.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "605ec42115ffb9ed23e29d1d793510052088604a7a828edebe2d6bc7b08ef299", + "name": "roles/shutdown_env/defaults", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_disk_info.py", + "name": "roles/shutdown_env/defaults/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "88842b5a9a3d78a1bba01a301d9a8f11fb3aa05618b8bf8b1579c3905a5ed6ca", + "chksum_sha256": "23ee730fc457add36b19f314667fcea6891341e5e8ce982cd64f47773b7621fe", "format": 1 }, { - "name": "plugins/modules/ovirt_event_info.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "d31f5bd80058b6ad4bf24b6236e3bea5c28d953ea372d6a0753110fd911a7cc2", + "name": "roles/disaster_recovery", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_external_provider_info.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "2ba0e106dbfb2670e5733db837f4cd167696a63ec94da1bce7f3bed323cdeaa6", + "name": "roles/disaster_recovery/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_group_info.py", + "name": "roles/disaster_recovery/tasks/clean_engine.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "2fd64027bb004558774df1a58f9bbc6c2552e6e62655664dbefbffbdf3a3634c", + "chksum_sha256": "3768e720a7b4a3c88909a2da457148be453ebef0149bd78bb2db97678fd2b94f", "format": 1 }, { - "name": "plugins/modules/ovirt_host_info.py", + "name": "roles/disaster_recovery/tasks/unregister_entities.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8ed53c8a386f84ed19d375a5b8edb36c8ac25b6897a77f82ab5d728fd6bd1811", + "chksum_sha256": "49dfc30fc3f6b55a92a3016a7882ca7514282968de1ae0dfbd8b1054d0d51116", "format": 1 }, { - "name": "plugins/modules/ovirt_host_storage_info.py", + "name": "roles/disaster_recovery/tasks/generate_mapping.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "bb1602bbc1c36a6bb2981cfd1895be2d5eb46fd41997e77e06e0f014630761e2", + "chksum_sha256": "4a48362c1e153dfc5ed6b9d90e6e40d094e85775ebd8855004084c628b1fe149", "format": 1 }, { - "name": "plugins/modules/ovirt_network_info.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "0c6a33e1e9ca127511fea377380aa4b65aecf880ee3940a10cb21ee42065d765", + "name": "roles/disaster_recovery/tasks/recover", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_nic.py", + "name": "roles/disaster_recovery/tasks/recover/run_vms.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "35d865a6a853c47b0ea6b5228ad27f8cc680bf48ecc14520e90e9a5cac4c1dfa", + "chksum_sha256": "ee38d9138d515bba6f285bc7f29aaac3d063be546df6a7fbc72e6049237db449", "format": 1 }, { - "name": "plugins/modules/ovirt_nic_info.py", + "name": 
"roles/disaster_recovery/tasks/recover/register_templates.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "454e122d9326ec4a9912b90212e2beb148992034309d27978f9852e524376a2b", + "chksum_sha256": "8dea06f6718adf565297ded44dfc6dd44a52ea6386b2246814719b034bb531a3", "format": 1 }, { - "name": "plugins/modules/ovirt_permission_info.py", + "name": "roles/disaster_recovery/tasks/recover/print_info.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "3becf549aeeb749f48bf9223d5f8e47e836e67afd34bef0d027000479d223292", + "chksum_sha256": "9fb07229582ba647600d609900e5c3b580f6e881a9c401505a3d757cae19e94f", "format": 1 }, { - "name": "plugins/modules/ovirt_quota_info.py", + "name": "roles/disaster_recovery/tasks/recover/register_template.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "d4c7fae770a7a14f869e937b0f26d351ecd37a18297f10d724f8fb39444c5437", + "chksum_sha256": "39f767a131032417b0576a9e3e0c3c6b703b50d65c67ff4826443e2fda5a1d30", "format": 1 }, { - "name": "plugins/modules/ovirt_scheduling_policy_info.py", + "name": "roles/disaster_recovery/tasks/recover/register_vm.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "b965589a238e6ea02d94742f3555affec4cf3035889b77d1d01d978bdb0b74f8", + "chksum_sha256": "eb43083203fd11a54205d886ed340acda2cffaf7dd5d4f95373cc75f791f5db7", "format": 1 }, { - "name": "plugins/modules/ovirt_snapshot_info.py", + "name": "roles/disaster_recovery/tasks/recover/add_glusterfs_domain.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "61451186d9dbce130a66829a0ca2ed47a5e12125c521036d47ce85710378f1aa", + "chksum_sha256": "f7afa2849e8e51641ef8f10f0a671897367661553e25987712383bb70f3b90fd", "format": 1 }, { - "name": "plugins/modules/ovirt_storage_domain_info.py", + "name": "roles/disaster_recovery/tasks/recover/add_fcp_domain.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "6b74466500305497c2fc95994a0795ebaf574a205cc8726d985ff00eb01c4622", + "chksum_sha256": "a1304953f4f321a466ec3f40e3da6ab077b3555173e19fb2029b566682ceddee", "format": 1 }, { - "name": "plugins/modules/ovirt_storage_template_info.py", + "name": "roles/disaster_recovery/tasks/recover/add_posixfs_domain.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a2a635810433659f9700a6f1d23485c8f33bba924f02a26c26f3fb3e72bff463", + "chksum_sha256": "40cd9b57a069d91e8378aed4f91254ddcd02177cdf13acaab366773c5421543f", "format": 1 }, { - "name": "plugins/modules/ovirt_storage_vm_info.py", + "name": "roles/disaster_recovery/tasks/recover/add_iscsi_domain.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "05d6b2ec183079d340126eb6a7840555770e76099ca321b6314fc5c67f129f2c", + "chksum_sha256": "af856b275b930d302aefcedb84d4b8fc853520f405515d271c4bad3e10d2a2f3", "format": 1 }, { - "name": "plugins/modules/ovirt_system_option_info.py", + "name": "roles/disaster_recovery/tasks/recover/report_log_template.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "cbf2bf24a4fea1906165c0c92c3ddbc1d99aa33665c550eaa7d95778a8f2ab15", + "chksum_sha256": "4a6b48e869863fab445b983c2d4a4fa43bc3cb76620e3548c25ab8f54c89b91e", "format": 1 }, { - "name": "plugins/modules/ovirt_tag_info.py", + "name": "roles/disaster_recovery/tasks/recover/register_vms.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4fc736ee9a8364d6cf75658d0a58ab97dbe6132bf2abb8f8d077f9636130e855", + "chksum_sha256": "04940777097a9af94f44ae5f751154974e89ffa2747fa22317bb5ff7c865710c", "format": 1 }, { - "name": 
"plugins/modules/ovirt_template_info.py", + "name": "roles/disaster_recovery/tasks/recover/add_nfs_domain.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "56347a6b45c2900520b3f72b0d95f3a7041b24547934c0fd3d7a4520612c989f", + "chksum_sha256": "16300fe776b3d147e3fd2cd041854b1c11b9c737e1046516a4fd1b2909e5472e", "format": 1 }, { - "name": "plugins/modules/ovirt_user_info.py", + "name": "roles/disaster_recovery/tasks/recover/add_domain.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "33733fcec5e6fb969de04ae8f454ddd152518d778eb292baf56e12b0c09e1684", + "chksum_sha256": "4c29e7744719cb4e5edd154ce875f2dedaa945f8526907c53569fb29071ba0af", "format": 1 }, { - "name": "plugins/modules/ovirt_vm_info.py", + "name": "roles/disaster_recovery/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "be4ec699b7843b39cc292df76060c7e48ddd3893afeb86c8f3f000b0b4571517", + "chksum_sha256": "cdb9e3dbd3398fee4a37ffb289c9274a9e040fdfdc14c51ec09f7019919b2ca5", "format": 1 }, { - "name": "plugins/modules/ovirt_vm_os_info.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "08d4ac6875974350cde35cc0e15b7fac67ad13e968280276fe6eecceab649269", + "name": "roles/disaster_recovery/tasks/clean", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "plugins/modules/ovirt_vmpool_info.py", + "name": "roles/disaster_recovery/tasks/clean/remove_vms.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "1abc015607272f80a5d708b123ae44507f6868cfc437d16f55d26799e427adef", + "chksum_sha256": "281872dffc6bedb2d5f2eababba225dcb68d97f1296060f7e22ba3c0d0c2cc7d", "format": 1 }, { - "name": "plugins/modules/ovirt_vnic_profile_info.py", + "name": "roles/disaster_recovery/tasks/clean/remove_disks.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "655b66528fec82f009ae4fad3b942c9a97582c612a419594b0fcdefec00626dd", + "chksum_sha256": "4904dba739bbcc872d023e055c9c1a3dd786bfa1199f72ee7e6cb3f99ef474a0", "format": 1 }, { - "name": "plugins/test", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/disaster_recovery/tasks/clean/remove_domain_process.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "6d44f6b334b85c43a6733325ea85433e85e892f878610cb0fd359ddb9e9efcb5", "format": 1 }, { - "name": "plugins/test/ovirt_proxied_check.py", + "name": "roles/disaster_recovery/tasks/clean/shutdown_vms.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4cd7ecf3b7f7f467865eaafd095af27707d839b7de357d0a9120e21cdde19d88", + "chksum_sha256": "512ef199851ec2deeb75ee8230e2c9acde7571e7e88ace12aa41a13e195727b9", "format": 1 }, { - "name": "roles", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/disaster_recovery/tasks/clean/shutdown_vm.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "08bc7c3a54e4b0a06301014814bff4976c72d2d6aa3bafdddb793c7fd1714503", "format": 1 }, { - "name": "roles/cluster_upgrade", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/disaster_recovery/tasks/clean/remove_invalid_filtered_master_domains.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "8e0759c74c512d7c2ada796fa2ebf0c5004be77339525bb61ebb2441ff00b0d4", "format": 1 }, { - "name": "roles/cluster_upgrade/defaults", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/disaster_recovery/tasks/clean/remove_domain.yml", + "ftype": "file", + 
"chksum_type": "sha256", + "chksum_sha256": "1c0126aca79a6f188488fe47e148eb62bf97d553a359fac5e0db30a2cd972449", "format": 1 }, { - "name": "roles/cluster_upgrade/defaults/main.yml", + "name": "roles/disaster_recovery/tasks/clean/update_ovf_store.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "d8ca3c1448aef8b92f96382f9b6706333724dffd426c98eb8212f1fb526cdbcd", + "chksum_sha256": "54d45e1a5d0a357cffc1bd2e64eed83bb1281de575544b60dd049625644c8db2", "format": 1 }, { - "name": "roles/cluster_upgrade/examples", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/disaster_recovery/tasks/clean/remove_valid_filtered_master_domains.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "138db98f76c303349a8d8e07d45cdcf37150d65611f3dae86820c37026aa0527", "format": 1 }, { - "name": "roles/cluster_upgrade/examples/cluster_upgrade.yml", + "name": "roles/disaster_recovery/tasks/run_unregistered_entities.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e5328d9e91b8d6e3a6464baae5fc0398f848cfcbcb75398fbdf659ad1b73a403", + "chksum_sha256": "0a95e9dc5ec64b98a53d972ce1c339e946130ce9b716d94e2f605f62bee20259", "format": 1 }, { - "name": "roles/cluster_upgrade/examples/passwords.yml", + "name": "roles/disaster_recovery/tasks/recover_engine.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", + "chksum_sha256": "269691f765b9610e0cffe4cf718fcefa45bdf04287831d27a7cbc08c6d663a28", "format": 1 }, { - "name": "roles/cluster_upgrade/tasks", + "name": "roles/disaster_recovery/examples", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/cluster_upgrade/tasks/cluster_policy.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "139ec62ccbbe68bca8da3eac9675ad610253d5a34f7ba192620bae3a2c924242", - "format": 1 - }, - { - "name": "roles/cluster_upgrade/tasks/log_progress.yml", + "name": "roles/disaster_recovery/examples/ovirt_passwords.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "474165c83e5ac44adbec3d9a89ea78c46bc01f4e54c667374416ca1743fd0fb7", + "chksum_sha256": "f6368b1291884cbd248e720948fc6a5709f04a1b07bfb78afdb615117a57da0e", "format": 1 }, { - "name": "roles/cluster_upgrade/tasks/pinned_vms.yml", + "name": "roles/disaster_recovery/examples/dr_ovirt_setup.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a66b3cdb5176d43714f92c16dbcd0a5a4393e49a67ecdeed78965c3c69abbeb0", + "chksum_sha256": "7dc8c63e44eb73c8147744ec27bb1ce42038195f2b727baf7751d5d69a518571", "format": 1 }, { - "name": "roles/cluster_upgrade/tasks/upgrade.yml", + "name": "roles/disaster_recovery/examples/dr_play.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ca4dc6a66466fd0751446d790c275a34682bcb30e88c8e0ee3e81a9f18e5a565", + "chksum_sha256": "ff5a6c4767651187cc6d3574997f968322e00451b9cb37d2d7689d2198648c0a", "format": 1 }, { - "name": "roles/cluster_upgrade/tasks/main.yml", + "name": "roles/disaster_recovery/examples/disaster_recovery_vars.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a9a3b444a6b0c04937f184388b513508942f1c1b21393a869709c58bc2d7dad2", + "chksum_sha256": "01ab09aa3035f63fecd327f987a47c6b796bd0065d76ec6f03082a9412515a9b", "format": 1 }, { - "name": "roles/cluster_upgrade/README.md", + "name": "roles/disaster_recovery/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": 
"7b761686d3fa460a4f440601407721aff898e919363fac5a2ee90cede476e5e9", - "format": 1 - }, - { - "name": "roles/disaster_recovery", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "5e5985bf27aafbfab9283cbaca7ccca64668833060bbf1e4f5f49577188e7c10", "format": 1 }, { @@ -925,45 +841,38 @@ "format": 1 }, { - "name": "roles/disaster_recovery/examples", + "name": "roles/disaster_recovery/files", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/disaster_recovery/examples/disaster_recovery_vars.yml", + "name": "roles/disaster_recovery/files/fail_back.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "01ab09aa3035f63fecd327f987a47c6b796bd0065d76ec6f03082a9412515a9b", + "chksum_sha256": "05bca5c0ba307c5c1207b1ada42666cd04a6c0e5ca6da8a648e5232e70557da8", "format": 1 }, { - "name": "roles/disaster_recovery/examples/dr_ovirt_setup.yml", + "name": "roles/disaster_recovery/files/validator.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7dc8c63e44eb73c8147744ec27bb1ce42038195f2b727baf7751d5d69a518571", + "chksum_sha256": "be65237aadb4f613add628535afde01789cf95e588ed84ca6a04af3f48c54a45", "format": 1 }, { - "name": "roles/disaster_recovery/examples/dr_play.yml", + "name": "roles/disaster_recovery/files/dr.conf", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ff5a6c4767651187cc6d3574997f968322e00451b9cb37d2d7689d2198648c0a", + "chksum_sha256": "090e3e3941bbca1da7720e2e5211cb6704ecb800f3e7d0e8d95438e21037dc6b", "format": 1 }, { - "name": "roles/disaster_recovery/examples/ovirt_passwords.yml", + "name": "roles/disaster_recovery/files/generate_vars.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f6368b1291884cbd248e720948fc6a5709f04a1b07bfb78afdb615117a57da0e", - "format": 1 - }, - { - "name": "roles/disaster_recovery/files", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "15e907807973a077c8f7bb97fb764dfcd03722dd37a7c2565777d4ac185ee75f", "format": 1 }, { @@ -974,17 +883,17 @@ "format": 1 }, { - "name": "roles/disaster_recovery/files/dr.conf", + "name": "roles/disaster_recovery/files/vault_secret.sh", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "090e3e3941bbca1da7720e2e5211cb6704ecb800f3e7d0e8d95438e21037dc6b", + "chksum_sha256": "2e5a276f384cac4d78e76145869238a895b959fb123a33ab652b81b830f8378e", "format": 1 }, { - "name": "roles/disaster_recovery/files/fail_back.py", + "name": "roles/disaster_recovery/files/ovirt-dr", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "05bca5c0ba307c5c1207b1ada42666cd04a6c0e5ca6da8a648e5232e70557da8", + "chksum_sha256": "4b17459db89da23f6f5f20d212ddde6e7f3879b8df610dace4e084be7533d3c7", "format": 1 }, { @@ -995,20 +904,6 @@ "format": 1 }, { - "name": "roles/disaster_recovery/files/generate_mapping.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "bdcd605745171f37b3049714acc6b830959e31311710aeae02ccb95b831595bc", - "format": 1 - }, - { - "name": "roles/disaster_recovery/files/generate_vars.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "15e907807973a077c8f7bb97fb764dfcd03722dd37a7c2565777d4ac185ee75f", - "format": 1 - }, - { "name": "roles/disaster_recovery/files/generate_vars_test.py", "ftype": "file", "chksum_type": "sha256", @@ -1016,248 +911,234 @@ "format": 1 }, { - "name": "roles/disaster_recovery/files/ovirt-dr", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": 
"4b17459db89da23f6f5f20d212ddde6e7f3879b8df610dace4e084be7533d3c7", - "format": 1 - }, - { - "name": "roles/disaster_recovery/files/validator.py", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "be65237aadb4f613add628535afde01789cf95e588ed84ca6a04af3f48c54a45", - "format": 1 - }, - { - "name": "roles/disaster_recovery/files/vault_secret.sh", + "name": "roles/disaster_recovery/files/generate_mapping.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "2e5a276f384cac4d78e76145869238a895b959fb123a33ab652b81b830f8378e", + "chksum_sha256": "bdcd605745171f37b3049714acc6b830959e31311710aeae02ccb95b831595bc", "format": 1 }, { - "name": "roles/disaster_recovery/tasks", + "name": "roles/vm_infra", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/disaster_recovery/tasks/clean", + "name": "roles/vm_infra/tasks", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/disaster_recovery/tasks/clean/remove_disks.yml", + "name": "roles/vm_infra/tasks/rename_resize_boot_disk.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4904dba739bbcc872d023e055c9c1a3dd786bfa1199f72ee7e6cb3f99ef474a0", + "chksum_sha256": "c3f096de8f4c18f29b1e4d0630e05ea31f285a0a4eaa21857aadc20c4a7bb023", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/clean/remove_domain.yml", + "name": "roles/vm_infra/tasks/vm_state_absent.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "1c0126aca79a6f188488fe47e148eb62bf97d553a359fac5e0db30a2cd972449", + "chksum_sha256": "2d5d08e9ac19af8523d7e8b0330294810e91d5ad77d4d4b67e1ccd61388ddda4", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/clean/remove_domain_process.yml", + "name": "roles/vm_infra/tasks/create_inventory.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "6d44f6b334b85c43a6733325ea85433e85e892f878610cb0fd359ddb9e9efcb5", + "chksum_sha256": "a3df94becb3593f7dc9979a8ed5e740a60dc617c33b8f19f5f9f23567d9a0114", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/clean/remove_invalid_filtered_master_domains.yml", + "name": "roles/vm_infra/tasks/manage_state.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8e0759c74c512d7c2ada796fa2ebf0c5004be77339525bb61ebb2441ff00b0d4", + "chksum_sha256": "52e5f76a2c2f30da25bed6d6ebd5b355d41bbcc72c1bdcb432c7a82995634d03", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/clean/remove_valid_filtered_master_domains.yml", + "name": "roles/vm_infra/tasks/affinity_labels.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "138db98f76c303349a8d8e07d45cdcf37150d65611f3dae86820c37026aa0527", + "chksum_sha256": "1302469d26a335dab3169677c35ae70d96b84272e586c27786aabf7f06a5468e", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/clean/remove_vms.yml", + "name": "roles/vm_infra/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "281872dffc6bedb2d5f2eababba225dcb68d97f1296060f7e22ba3c0d0c2cc7d", + "chksum_sha256": "f06571e7e9c4ad867ca9a9ae451511b71c27934104b926f75e39b33d4efe148b", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/clean/shutdown_vm.yml", + "name": "roles/vm_infra/tasks/create_vms.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "08bc7c3a54e4b0a06301014814bff4976c72d2d6aa3bafdddb793c7fd1714503", + "chksum_sha256": "0a45f7bf1f9bcaeec819a49e26272d959cd928e674e8a2cc6d78a61ecd572f09", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/clean/shutdown_vms.yml", + 
"name": "roles/vm_infra/tasks/affinity_groups.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "512ef199851ec2deeb75ee8230e2c9acde7571e7e88ace12aa41a13e195727b9", + "chksum_sha256": "99de64bdc087e561bccb1adacf980271a66654f63ce536678cade94a8b6e9ca2", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/clean/update_ovf_store.yml", + "name": "roles/vm_infra/tasks/vm_state_present.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "54d45e1a5d0a357cffc1bd2e64eed83bb1281de575544b60dd049625644c8db2", + "chksum_sha256": "c2adc50a2b5f49ff7f82fca6ebed977331f23f3a6c479a10f8802ba0e47138f6", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover", + "name": "roles/vm_infra/examples", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/add_domain.yml", + "name": "roles/vm_infra/examples/ovirt_vm_infra_inv.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4c29e7744719cb4e5edd154ce875f2dedaa945f8526907c53569fb29071ba0af", + "chksum_sha256": "800510a76705a8b1ac6a8b94f31826c2f303aa74730e151cdfe0b3984eaa6eb7", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/add_fcp_domain.yml", + "name": "roles/vm_infra/examples/passwords.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a1304953f4f321a466ec3f40e3da6ab077b3555173e19fb2029b566682ceddee", + "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/add_glusterfs_domain.yml", + "name": "roles/vm_infra/examples/ovirt_vm_infra.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f7afa2849e8e51641ef8f10f0a671897367661553e25987712383bb70f3b90fd", + "chksum_sha256": "d315a65145a8294f0e007f65d336c607926da12ab2621e4a60c8c627fa9f907a", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/add_iscsi_domain.yml", + "name": "roles/vm_infra/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "af856b275b930d302aefcedb84d4b8fc853520f405515d271c4bad3e10d2a2f3", + "chksum_sha256": "9d94768779dd3c85fd5b5c9f71bbe44652d50812c7ffd1c09b57179b7c468beb", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/add_nfs_domain.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "16300fe776b3d147e3fd2cd041854b1c11b9c737e1046516a4fd1b2909e5472e", + "name": "roles/vm_infra/defaults", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/add_posixfs_domain.yml", + "name": "roles/vm_infra/defaults/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "40cd9b57a069d91e8378aed4f91254ddcd02177cdf13acaab366773c5421543f", + "chksum_sha256": "a0cb0bfdf543b6406754a7524c94b4bded7fd7c7d0bc1648d2571843de4f904c", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/print_info.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "9fb07229582ba647600d609900e5c3b580f6e881a9c401505a3d757cae19e94f", + "name": "roles/image_template", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/register_template.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "39f767a131032417b0576a9e3e0c3c6b703b50d65c67ff4826443e2fda5a1d30", + "name": "roles/image_template/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - 
"name": "roles/disaster_recovery/tasks/recover/register_templates.yml", + "name": "roles/image_template/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8dea06f6718adf565297ded44dfc6dd44a52ea6386b2246814719b034bb531a3", + "chksum_sha256": "5cc8ac6f2a26ea08493e276f5900489f8233faaf9c80aa6faeed9d741393a3ad", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/register_vm.yml", + "name": "roles/image_template/tasks/qcow2_image.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "eb43083203fd11a54205d886ed340acda2cffaf7dd5d4f95373cc75f791f5db7", + "chksum_sha256": "fcfc1af2b33cb594e8c073dc7d99673daf18a2eb9dac03240119aa6e25200d36", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/register_vms.yml", + "name": "roles/image_template/tasks/glance_image.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "04940777097a9af94f44ae5f751154974e89ffa2747fa22317bb5ff7c865710c", + "chksum_sha256": "e8434e458bc32c9f1da37ae5cdac4c6535d133bc55e1033d1b2773436b2bcc3e", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/report_log_template.j2", + "name": "roles/image_template/tasks/empty.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4a6b48e869863fab445b983c2d4a4fa43bc3cb76620e3548c25ab8f54c89b91e", + "chksum_sha256": "a7e2509d3edfdc59f4e18438a84185b831e881a73f62ab0871de3ae1be1bf493", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover/run_vms.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "ee38d9138d515bba6f285bc7f29aaac3d063be546df6a7fbc72e6049237db449", + "name": "roles/image_template/examples", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/disaster_recovery/tasks/clean_engine.yml", + "name": "roles/image_template/examples/ovirt_image_template.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "3768e720a7b4a3c88909a2da457148be453ebef0149bd78bb2db97678fd2b94f", + "chksum_sha256": "ed3ee749c3fe4ea012157bf15c4af22461c9b6707d0fe64ac75e59b98860fbe1", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/generate_mapping.yml", + "name": "roles/image_template/examples/passwords.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4a48362c1e153dfc5ed6b9d90e6e40d094e85775ebd8855004084c628b1fe149", + "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/main.yml", + "name": "roles/image_template/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "cdb9e3dbd3398fee4a37ffb289c9274a9e040fdfdc14c51ec09f7019919b2ca5", + "chksum_sha256": "7d4ba8182ce18c62eba9fff4737dc56a0cd779f7d41f2153bf38a08cf8898b4b", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/recover_engine.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "269691f765b9610e0cffe4cf718fcefa45bdf04287831d27a7cbc08c6d663a28", + "name": "roles/image_template/vars", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/disaster_recovery/tasks/run_unregistered_entities.yml", + "name": "roles/image_template/vars/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0a95e9dc5ec64b98a53d972ce1c339e946130ce9b716d94e2f605f62bee20259", + "chksum_sha256": "f1785c9a8c3028367d2ba75256fa7260c079392212151d682acd55cab7750fbc", "format": 1 }, { - "name": "roles/disaster_recovery/tasks/unregister_entities.yml", - "ftype": "file", - 
"chksum_type": "sha256", - "chksum_sha256": "49dfc30fc3f6b55a92a3016a7882ca7514282968de1ae0dfbd8b1054d0d51116", + "name": "roles/image_template/defaults", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/disaster_recovery/README.md", + "name": "roles/image_template/defaults/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "6646b07417defb12746bb28acf5d4f7dba0d09f5623be42600f8071c82b20a67", + "chksum_sha256": "067ecccb5371a364a0f3addff4870bf1bf8a8985b8bd39dfebc75db057005e77", "format": 1 }, { @@ -1268,136 +1149,129 @@ "format": 1 }, { - "name": "roles/engine_setup/defaults", + "name": "roles/engine_setup/tasks", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/engine_setup/defaults/main.yml", + "name": "roles/engine_setup/tasks/pre_install_checks.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "3c8f572573fd60ddd0d28fc043a0bb457db1315d00dcf3007eaed4c656b055d4", + "chksum_sha256": "7bde6ab43a2d78f5fee146994a18a1815f2ab3c61f437817454311a1f5be8859", "format": 1 }, { - "name": "roles/engine_setup/examples", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/engine_setup/tasks/restore_engine_from_file.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "314c605bae9a6a3619e1c184406a0d0c2aba0c81f2cb34ccea5a019071e2c532", "format": 1 }, { - "name": "roles/engine_setup/examples/engine-deploy.yml", + "name": "roles/engine_setup/tasks/engine_setup.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "da334bb722a37e28d61d5d7e5340b3a1abde08fb7b210dfed413d02d6c253fa8", + "chksum_sha256": "443cd02db6a283dd96919ae01d26c130f9260b6afc40a21db012dab6814a44ac", "format": 1 }, { - "name": "roles/engine_setup/examples/engine-upgrade.yml", + "name": "roles/engine_setup/tasks/install_packages.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4ddca8cd12921dd36298bc8f26c019db5c289ee362f752b077b5373dadfc4a07", + "chksum_sha256": "e57dc32606b071aa2c4fe5327e018c634453e76042f3a1c054b5df24dd6fb49b", "format": 1 }, { - "name": "roles/engine_setup/examples/passwords.yml", + "name": "roles/engine_setup/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e8fed7a17e985ba5217acda961a8a41c98fa41d56f2b7046a82977da7b3ceea6", + "chksum_sha256": "07e406fcafdb599a48f099306c8da522d166b93ee28c453a09a715a9a2e8536e", "format": 1 }, { - "name": "roles/engine_setup/tasks", + "name": "roles/engine_setup/tests", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/engine_setup/tasks/engine_setup.yml", + "name": "roles/engine_setup/tests/inventory", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "862f800ab8c69ee875aa991a6d0a5bf3875d41fe5aeb7fcbabaaf23350b98f02", + "chksum_sha256": "669dea0f087198b19e89c45cb705e8439f9d1139a29d63264be472ef47b33b9e", "format": 1 }, { - "name": "roles/engine_setup/tasks/install_packages.yml", + "name": "roles/engine_setup/tests/containers-deploy.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e57dc32606b071aa2c4fe5327e018c634453e76042f3a1c054b5df24dd6fb49b", + "chksum_sha256": "5cfe4b58d3af1154518a8c99012677e08161aff15ae3ed4ad9bf491d3857ced6", "format": 1 }, { - "name": "roles/engine_setup/tasks/main.yml", + "name": "roles/engine_setup/tests/test-master.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "07e406fcafdb599a48f099306c8da522d166b93ee28c453a09a715a9a2e8536e", + 
"chksum_sha256": "c74b9080255dcb73c81609ae88a5b87b7410d425b664f7ab761962f647075bc5", "format": 1 }, { - "name": "roles/engine_setup/tasks/pre_install_checks.yml", + "name": "roles/engine_setup/tests/test-upgrade-4.2-to-master.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7bde6ab43a2d78f5fee146994a18a1815f2ab3c61f437817454311a1f5be8859", + "chksum_sha256": "a32dd0479bb4cd2c7c7cf5925133a9faa1a62a0e0aa6f13c536cb4936584922c", "format": 1 }, { - "name": "roles/engine_setup/tasks/restore_engine_from_file.yml", + "name": "roles/engine_setup/tests/test-4.2.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "314c605bae9a6a3619e1c184406a0d0c2aba0c81f2cb34ccea5a019071e2c532", - "format": 1 - }, - { - "name": "roles/engine_setup/templates", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "554a2626b5a2c711a91de675f133b6316753f58bf7b58f05e401b865f4453d5a", "format": 1 }, { - "name": "roles/engine_setup/templates/answerfile_4.1_basic.txt.j2", + "name": "roles/engine_setup/tests/engine-deploy.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "3d208930a8c1a973a018c7476a10f3a1857d9562ffa3037af797e62d9fe47aa3", + "chksum_sha256": "6e898c8c0035f4be61a893e691998bf17cace4ddd4628c3d3f73230b1a8663b2", "format": 1 }, { - "name": "roles/engine_setup/templates/answerfile_4.1_upgrade.txt.j2", + "name": "roles/engine_setup/tests/engine-upgrade.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "6295859733ac8997107fe0644a1788ba0b6f88e729aa57e67c9de1d0fb1e2bf4", + "chksum_sha256": "6d5698a32c3605fc3c97282a9564a47679af7a23776203c5ff2a9cb349b28d12", "format": 1 }, { - "name": "roles/engine_setup/templates/answerfile_4.2_basic.txt.j2", + "name": "roles/engine_setup/tests/requirements.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e563077a2da3efc212864c7f23639af0f0271a39e21fb38a27bf9ba53174f86a", + "chksum_sha256": "d97e2493b7c67dacb5a966615f5ad63491f5a7ac68ae00d585cc9a7f2f0965db", "format": 1 }, { - "name": "roles/engine_setup/templates/answerfile_4.2_upgrade.txt.j2", + "name": "roles/engine_setup/tests/passwords.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "674c9ed2cda7dca01bbedd549f17ab1ea4d24db6d7cc4308a8683bdb20d00a55", + "chksum_sha256": "e8fed7a17e985ba5217acda961a8a41c98fa41d56f2b7046a82977da7b3ceea6", "format": 1 }, { - "name": "roles/engine_setup/templates/answerfile_4.3_basic.txt.j2", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "a3e56da1f79927e222e08a8814165104272951dba6e3da64cd50c67280493165", + "name": "roles/engine_setup/templates", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/engine_setup/templates/answerfile_4.3_upgrade.txt.j2", + "name": "roles/engine_setup/templates/answerfile_4.5_upgrade.txt.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4691c06ff6c2c79f7754ed23337bd5c425b1349eef196b58aec360676df57041", + "chksum_sha256": "66168916bf959bdadeb0c2212c938ad10c472e7cefd64470513021465929e60c", "format": 1 }, { @@ -1408,101 +1282,101 @@ "format": 1 }, { - "name": "roles/engine_setup/templates/answerfile_4.4_upgrade.txt.j2", + "name": "roles/engine_setup/templates/answerfile_4.3_basic.txt.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f5b42d55ad4abe75d15aa800b697f6ff7d425bae8dc7bde5e0da36f805f24c45", + "chksum_sha256": "a3e56da1f79927e222e08a8814165104272951dba6e3da64cd50c67280493165", "format": 1 }, { - "name": 
"roles/engine_setup/templates/answerfile_4.5_basic.txt.j2", + "name": "roles/engine_setup/templates/answerfile_4.2_basic.txt.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "82b3514110bc38936ebcff5c80ca1a99926d5a1351fe4aaade8b0f6d2fabb1cd", + "chksum_sha256": "e563077a2da3efc212864c7f23639af0f0271a39e21fb38a27bf9ba53174f86a", "format": 1 }, { - "name": "roles/engine_setup/templates/answerfile_4.5_upgrade.txt.j2", + "name": "roles/engine_setup/templates/answerfile_4.3_upgrade.txt.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "66168916bf959bdadeb0c2212c938ad10c472e7cefd64470513021465929e60c", + "chksum_sha256": "4691c06ff6c2c79f7754ed23337bd5c425b1349eef196b58aec360676df57041", "format": 1 }, { - "name": "roles/engine_setup/templates/basic_answerfile.txt.j2", + "name": "roles/engine_setup/templates/answerfile_4.2_upgrade.txt.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "59936bcb7c900b9779ecfed684759145e85eafac01dc7602440e844a8c55c73f", + "chksum_sha256": "674c9ed2cda7dca01bbedd549f17ab1ea4d24db6d7cc4308a8683bdb20d00a55", "format": 1 }, { - "name": "roles/engine_setup/tests", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/engine_setup/templates/answerfile_4.5_basic.txt.j2", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "82b3514110bc38936ebcff5c80ca1a99926d5a1351fe4aaade8b0f6d2fabb1cd", "format": 1 }, { - "name": "roles/engine_setup/tests/containers-deploy.yml", + "name": "roles/engine_setup/templates/answerfile_4.4_upgrade.txt.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "9c3c2c21cb3d0aa75d7bfeab5a56a1bdbcac587e7d9c09656bb30f3f8f352ece", + "chksum_sha256": "f5b42d55ad4abe75d15aa800b697f6ff7d425bae8dc7bde5e0da36f805f24c45", "format": 1 }, { - "name": "roles/engine_setup/tests/engine-deploy.yml", + "name": "roles/engine_setup/templates/answerfile_4.1_upgrade.txt.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "6e898c8c0035f4be61a893e691998bf17cace4ddd4628c3d3f73230b1a8663b2", + "chksum_sha256": "6295859733ac8997107fe0644a1788ba0b6f88e729aa57e67c9de1d0fb1e2bf4", "format": 1 }, { - "name": "roles/engine_setup/tests/engine-upgrade.yml", + "name": "roles/engine_setup/templates/answerfile_4.1_basic.txt.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "6d5698a32c3605fc3c97282a9564a47679af7a23776203c5ff2a9cb349b28d12", + "chksum_sha256": "3d208930a8c1a973a018c7476a10f3a1857d9562ffa3037af797e62d9fe47aa3", "format": 1 }, { - "name": "roles/engine_setup/tests/inventory", + "name": "roles/engine_setup/templates/basic_answerfile.txt.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "669dea0f087198b19e89c45cb705e8439f9d1139a29d63264be472ef47b33b9e", + "chksum_sha256": "59936bcb7c900b9779ecfed684759145e85eafac01dc7602440e844a8c55c73f", "format": 1 }, { - "name": "roles/engine_setup/tests/passwords.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "e8fed7a17e985ba5217acda961a8a41c98fa41d56f2b7046a82977da7b3ceea6", + "name": "roles/engine_setup/examples", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/engine_setup/tests/requirements.yml", + "name": "roles/engine_setup/examples/engine-deploy.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "cc296c4e43917486b4a713e2f50b075b88fff850a6f0901081368428686ea431", + "chksum_sha256": "da334bb722a37e28d61d5d7e5340b3a1abde08fb7b210dfed413d02d6c253fa8", "format": 1 }, { - "name": 
"roles/engine_setup/tests/test-4.2.yml", + "name": "roles/engine_setup/examples/engine-upgrade.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ddbeb082e3e49d7265a4a6b4844009c2ba7761add8c6bbdd93e2c83cbcbe0b75", + "chksum_sha256": "4ddca8cd12921dd36298bc8f26c019db5c289ee362f752b077b5373dadfc4a07", "format": 1 }, { - "name": "roles/engine_setup/tests/test-master.yml", + "name": "roles/engine_setup/examples/passwords.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0990dd37b7c78b73cb88eacf4fc14725b637545849ef60b15cdd860e81f518ac", + "chksum_sha256": "e8fed7a17e985ba5217acda961a8a41c98fa41d56f2b7046a82977da7b3ceea6", "format": 1 }, { - "name": "roles/engine_setup/tests/test-upgrade-4.2-to-master.yml", + "name": "roles/engine_setup/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0eb240194881bb664e0adbad326c5c51c7dd2ae7dccd26867f36bc3b40719dc1", + "chksum_sha256": "e1602b7bba86c0b8c69ffd91dff866ce75fb8538b99fd0712d5a5b09574cf209", "format": 1 }, { @@ -1520,430 +1394,416 @@ "format": 1 }, { - "name": "roles/engine_setup/README.md", + "name": "roles/engine_setup/defaults", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "roles/engine_setup/defaults/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e1602b7bba86c0b8c69ffd91dff866ce75fb8538b99fd0712d5a5b09574cf209", + "chksum_sha256": "3c8f572573fd60ddd0d28fc043a0bb457db1315d00dcf3007eaed4c656b055d4", "format": 1 }, { - "name": "roles/hosted_engine_setup", + "name": "roles/remove_stale_lun", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/defaults", + "name": "roles/remove_stale_lun/tasks", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/defaults/main.yml", + "name": "roles/remove_stale_lun/tasks/fetch_hosts.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "071e8947f0d2c9308f8820d6aeab02d08ee7991df0228a03a52ebcef12acd3ea", - "format": 1 - }, - { - "name": "roles/hosted_engine_setup/examples", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "5d648cb0fa07d757846d564fe520a5f9fa0a126e33c769481bc9c39d07eee359", "format": 1 }, { - "name": "roles/hosted_engine_setup/examples/hosted_engine_deploy_localhost.yml", + "name": "roles/remove_stale_lun/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "67398b5ebb07bcbc56ba3269c3ab11672bdad44ef34eaf5a54d6d834ff1fb05e", + "chksum_sha256": "dd053c903c438a2c9c8c4b79e4954f6e1474fedd8902f8a4665432454412b0fb", "format": 1 }, { - "name": "roles/hosted_engine_setup/examples/hosted_engine_deploy_remotehost.yml", + "name": "roles/remove_stale_lun/tasks/remove_mpath_device.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e2d43b9deaf49874e8c0b3303a466cea9b2d5848353da5feb5cd2080347863c9", + "chksum_sha256": "8af83eb6cbe20c7fb2f0ef0a9b69f6129c97b665c8b3ad3cffff77e5300116da", "format": 1 }, { - "name": "roles/hosted_engine_setup/examples/iscsi_deployment_remote.json", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "fd684ba2bf2c243ea49fcc3f724f80c11f3bff5aec833c539b363e2d8a664029", + "name": "roles/remove_stale_lun/examples", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/examples/nfs_deployment.json", + "name": "roles/remove_stale_lun/examples/remove_stale_lun.yml", "ftype": 
"file", "chksum_type": "sha256", - "chksum_sha256": "e502f3e646feaf3043835aafafba260ae5dd804901ab11254ef8622a6058689b", + "chksum_sha256": "37bb921142b647852d34092e7ed9c98627656f9d1ba9ca4e19bb7a62e029229c", "format": 1 }, { - "name": "roles/hosted_engine_setup/examples/passwords.yml", + "name": "roles/remove_stale_lun/examples/passwords.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7d743b2921acb5121edb77586f3357245704891451720da05f5f7677230f8a94", + "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", "format": 1 }, { - "name": "roles/hosted_engine_setup/examples/required_networks_fix.yml", + "name": "roles/remove_stale_lun/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ec7b32cfd216364b2fc795df93d753f958f5f570e9940d0a19d786e23dcf8aaa", + "chksum_sha256": "36614071855b57d37060c8fb0db8da56627332613c06fe01b63396095226fbd3", "format": 1 }, { - "name": "roles/hosted_engine_setup/files", + "name": "roles/remove_stale_lun/defaults", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/files/35-allow-ansible-for-vdsm.rules", + "name": "roles/remove_stale_lun/defaults/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "b562418571a4657f858b5feaf149e4f983480b7871955264f185c5b213595adf", + "chksum_sha256": "e0781e2354d1ca7c79e6b6915ab26643344acb2b075a10c900abaa9c717f7aa3", "format": 1 }, { - "name": "roles/hosted_engine_setup/hooks", + "name": "roles/hosted_engine_setup", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/hooks/after_add_host", + "name": "roles/hosted_engine_setup/tasks", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/hooks/after_add_host/README.md", + "name": "roles/hosted_engine_setup/tasks/fetch_host_ip.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "99f37c2644368879fc3f47df1673128068861c0c551488f3dd1d6c0ef930b943", - "format": 1 - }, - { - "name": "roles/hosted_engine_setup/hooks/after_setup", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "62ec1578cbd3f2f4d6b6601253c459dc729b6452d82679818d53c42946a540ae", "format": 1 }, { - "name": "roles/hosted_engine_setup/hooks/after_setup/README.md", + "name": "roles/hosted_engine_setup/tasks/auth_revoke.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "21b644ac87bacf4df09f35510e7e47c43e179e5e4c4c297ac381d892e3c101eb", + "chksum_sha256": "8fac25039bfd3c11600124fa227e6a198a404a9a06847710fd9d040b8507ba70", "format": 1 }, { - "name": "roles/hosted_engine_setup/hooks/after_setup/add_host_storage_domain.yml", + "name": "roles/hosted_engine_setup/tasks/search_available_network_subnet.yaml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "982ac73dbfb8969a1ccdc87af4ae4629b191e148ae9dd61f97e2fec04a83e7cb", - "format": 1 - }, - { - "name": "roles/hosted_engine_setup/hooks/enginevm_after_engine_setup", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "808cda0fb20b45d96a1992ef941996dd18791a789e865a06a9e2e49c82c0cbaf", "format": 1 }, { - "name": "roles/hosted_engine_setup/hooks/enginevm_after_engine_setup/README.md", + "name": "roles/hosted_engine_setup/tasks/filter_unsupported_vlan_devices.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ab33f419e16208c32590d47c90d428f23dad1528151b994d4ce4dd07ba0955d3", + "chksum_sha256": 
"46fde29f53a0556bb726ba3274baece3f06632114a353e5d21c12931357241a0", "format": 1 }, { - "name": "roles/hosted_engine_setup/hooks/enginevm_before_engine_setup", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/hosted_engine_setup/tasks/restore_backup.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "7ff3cdabdd919efa09b25ee7f168f40203497a0be4e549229d931fbd4e84164f", "format": 1 }, { - "name": "roles/hosted_engine_setup/hooks/enginevm_before_engine_setup/README.md", + "name": "roles/hosted_engine_setup/tasks/create_storage_domain.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "34769789fabe1aa2e5637fb5242e429ff8dc5fc9614d816165787937f767ecff", + "chksum_sha256": "822ea2e21f2524bed82b4b304d1a5e06b7b513c1f0dfbe7e3d0e137552c511e0", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/hosted_engine_setup/tasks/install_appliance.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "71fdc499e89c44671510b2c400c7450e8fd49f743db42f1e44120c25494fb0ee", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/bootstrap_local_vm", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/hosted_engine_setup/tasks/add_engine_as_ansible_host.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "0b6f103d3e84fc73d34e82254c7abc76eeb7632984f505ee2860f8d7a9d9411d", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/bootstrap_local_vm/01_prepare_routing_rules.yml", + "name": "roles/hosted_engine_setup/tasks/get_local_vm_disk_path.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "50ecb5b02426bb7b158de1e958bc1af682770225db1af4efa4c5f12f5de1f2b0", + "chksum_sha256": "06ea4b5fd5f94f8173680093e3fac76c8a628c6b646901cce30ac1559d107fb7", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml", + "name": "roles/hosted_engine_setup/tasks/auth_sso.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4fd0dc610e2658e79232182c6dac2ab1c3cd7b425fa3b6b5fa658b19001d415c", + "chksum_sha256": "740a3698d73da71f3fe9325a811b309157c849837167f7e89a6db7327e04f5bf", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml", + "name": "roles/hosted_engine_setup/tasks/alter_libvirt_default_net_configuration.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4f24c438123409883e229b91d8db67013d68b0714b86e7560cf9f435e8223d13", + "chksum_sha256": "c7e83212ffd88cc36be3fdc2c397276b8eb2ec378973e7109ff0c1e76a448d1b", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/bootstrap_local_vm/04_engine_final_tasks.yml", + "name": "roles/hosted_engine_setup/tasks/iscsi_getdevices.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "af77a73d076033c2341210d0167e80f3d29c7d05ef08dacf2e1fad842338e32d", + "chksum_sha256": "10b2609a5041eb0b85fbd38462865d47ed7bad00a6d0dc09462832e00f41b3c0", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml", + "name": "roles/hosted_engine_setup/tasks/validate_vlan_bond_mode.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4fe15c23e8666aad557807a8c16d2a762c9ad0ac85191381bb63ccc5c425cb56", + "chksum_sha256": "7ea0d9aa61a5da96b08fbaa824c0e86f5237e95633fcd079bbb3c8e608e6a32f", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/create_target_vm", + "name": 
"roles/hosted_engine_setup/tasks/bootstrap_local_vm", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml", + "name": "roles/hosted_engine_setup/tasks/bootstrap_local_vm/04_engine_final_tasks.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "fb7c1d5c7fab5dacf7e9d2cbe71578b75b03b1f7f9c7d910833c1217bc56b453", + "chksum_sha256": "06ab45134d645133a7858d624019a88ef90a90ba12608706623e79adcc0a3e41", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/create_target_vm/02_engine_vm_configuration.yml", + "name": "roles/hosted_engine_setup/tasks/bootstrap_local_vm/01_prepare_routing_rules.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "d64591cb198ee1a7ad6625dc188f8b40384e99d2dd45b9d149331ce634923677", + "chksum_sha256": "bd026a1ec849637d46448b4c91cafc1a0a03384c4c48e0a9d88bb7ce91cf5151", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/create_target_vm/03_hosted_engine_final_tasks.yml", + "name": "roles/hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ecccd9ca3929323054d45ec60f407ea97276396474ad8b0e8a7b4a5fbf2025b6", + "chksum_sha256": "59277ab1a12368031b465c16b7520ed500b781d8c8fe61672fab9a4ceb99a6c8", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "3354a5bf07721ebfef129db30c3d7e415682fba97a3ab12f07212ecd59956663", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks/001_validate_network_interfaces.yml", + "name": "roles/hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "88f3fb78661fd4a02ea9c5b39bceeca743aa373a8763f26b371c4343e8412641", + "chksum_sha256": "88eb1572e74d0a0ddc398cd376bced70ad462a57762f0a7a52edda8f952af2a8", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks/002_validate_hostname_tasks.yml", + "name": "roles/hosted_engine_setup/tasks/final_clean.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4f481df891c266e54e9c3ef02b8f3ce0481d6f992cf184dbd7e051d591bb4327", + "chksum_sha256": "9c56f2c4d4cfdaec5c6a25013d50ba5852077139986da7c902c73384acce21ee", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks/define_variables.yml", + "name": "roles/hosted_engine_setup/tasks/sync_on_engine_machine.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "88eddc08e5e5b704f6e18d19b0d12e65bb1bb90341e89ef0f49c1c1904d0d250", + "chksum_sha256": "fc1907519278a8e3867f938f8f4f2bde39a9458670d851855890ae6de344ee3e", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_data_center_name.yml", + "name": "roles/hosted_engine_setup/tasks/install_packages.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f569ed5777d1a09bec53ae1078e9f48081fcb6a3979ee19cc35c17a2d9e89189", + "chksum_sha256": "7c98b24384a2d27657cac75f3414eef7ac37471e7e6d1062166685f2cdfba9d4", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml", + "name": "roles/hosted_engine_setup/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": 
"072eaf9316569ed0de7d73f3ef1713bed0af6bfb4f94d016147ec8cae6f3e825", + "chksum_sha256": "27fc692c3676b70eb4d3151dc56485fa1327697b55da5796510f5383e5103c93", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_gateway.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "8c9bfda85d5f4e801535e2827f9971d63174585afe037109a834c103d6acee6a", + "name": "roles/hosted_engine_setup/tasks/pre_checks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_mac_address.yml", + "name": "roles/hosted_engine_setup/tasks/pre_checks/define_variables.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "c66e99b7b224c0b68e0e6420e20d4dea42a08ce995d00460988259b0570b3c3a", + "chksum_sha256": "54bcdbcbcd658a76e47112254869ed73d44b05ef6d5e0d6876720cd28666f641", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_memory_size.yml", + "name": "roles/hosted_engine_setup/tasks/pre_checks/002_validate_hostname_tasks.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e4bd2ef1a84ab4bafcaae6e266056bbe8efe04728d0f381f80cb5192d95260d5", + "chksum_sha256": "4f481df891c266e54e9c3ef02b8f3ce0481d6f992cf184dbd7e051d591bb4327", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_network_test.yml", + "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_services_status.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "645b12514a5072f220253ad44f2d946f38bf45d043284a4ec809dfc3c28a9db3", + "chksum_sha256": "93fffffec58c3fec8248e0021107c93532726f0cdbb304a626d0b6ac79eb4c3c", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_services_status.yml", + "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_data_center_name.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "93fffffec58c3fec8248e0021107c93532726f0cdbb304a626d0b6ac79eb4c3c", + "chksum_sha256": "fce843e5d9f9df743c52182090c0ff89857043f86d2af64f56ef665362d8bf78", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_vcpus_count.yml", + "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_mac_address.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "b7e7df2877ea9490a462c4edcae383bd73aa275da4e357bfc1d79c154a469a39", + "chksum_sha256": "aef7ed712a3edf62340dd2a369ffc7c65c9774a41d1bee14294cf7ce815b5425", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/add_engine_as_ansible_host.yml", + "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "b256105f4a4f66a6e23e03dc00bea3aa0d5a29772e043d34ba3143d34b363a7e", + "chksum_sha256": "677130d8b76d3ac12f0f92ce02fc8c02ade82a57353fb21ed422ba1ca33be3fb", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/alter_libvirt_default_net_configuration.yml", + "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_memory_size.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "9e3de754f0373d6586bc9cf48c917ca49cc72e264af467366cf10906b30ad5ee", + "chksum_sha256": "438242bf33b5a3c2cccfa08d2a7066bb9dddee1201c2d42150391e57e1cbcd6d", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/apply_openscap_profile.yml", + "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_network_test.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": 
"c77fdd82c2d788092f2213e37187081340c946d4b34a72a6831161113e093125", + "chksum_sha256": "605a1bdafd4a19acb196f78a39f599b3b4b5d58e4e531d77e8a718ef1dc8acad", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/auth_revoke.yml", + "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_vcpus_count.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8fac25039bfd3c11600124fa227e6a198a404a9a06847710fd9d040b8507ba70", + "chksum_sha256": "6fb097bfc6b5db9d552aef11da4a77f3159de4a8132f082b59c62a1f3be8881b", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/auth_sso.yml", + "name": "roles/hosted_engine_setup/tasks/pre_checks/validate_gateway.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "740a3698d73da71f3fe9325a811b309157c849837167f7e89a6db7327e04f5bf", + "chksum_sha256": "10b24fca4ee9de6e913719d25f8d8b65f477a616c79ba1ca45f69c71e630cb74", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/clean_cloud_init_config.yml", + "name": "roles/hosted_engine_setup/tasks/pre_checks/001_validate_network_interfaces.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "fdfa13f3471ed27b2ee93b6e66ab0fc78e9e1f86013fb940aac875441f4aa7cc", + "chksum_sha256": "aae8d14d5e989223e5458225afe8bc4153585aa9773f521413da0292faad9e3f", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml", + "name": "roles/hosted_engine_setup/tasks/apply_openscap_profile.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7e55bcc5b1796b26c86be3bc7f40134086c3bc48fd14c43f68ff30c805311819", + "chksum_sha256": "c77fdd82c2d788092f2213e37187081340c946d4b34a72a6831161113e093125", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/clean_localvm_dir.yml", + "name": "roles/hosted_engine_setup/tasks/iscsi_discover.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "fb034920d818a04ed3674f35880f28a108d5023272b0e801517af83cb514cf95", + "chksum_sha256": "72d761ba4daaedeac8e34dca6bcc0eeebde701128bb7b5fdbe96b08d286c177d", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/create_storage_domain.yml", + "name": "roles/hosted_engine_setup/tasks/partial_execution.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "42e304c1d08e8b79b59440a0766063fc43ff8cddb312df33f463e37726110ea4", + "chksum_sha256": "00ff4a185254d24adc2bf63fe1aa8b99eed49f38ce0f902cd6e9637b3ffb0f21", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/fc_getdevices.yml", + "name": "roles/hosted_engine_setup/tasks/clean_cloud_init_config.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "2608dffb9d636db3337d9d9f55cf3809e8ec3fc27914d619eba32daf92d79330", + "chksum_sha256": "fdfa13f3471ed27b2ee93b6e66ab0fc78e9e1f86013fb940aac875441f4aa7cc", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/fetch_engine_logs.yml", + "name": "roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "9f6e0b81f31f65da82dd872b278e002f8934ef3c69ce4096091f7b2eae5cc86a", + "chksum_sha256": "3e55e39553e9d2c692e16908b471f5cb872d60759dc4625de2338d3c7c231a1b", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/fetch_host_ip.yml", + "name": "roles/hosted_engine_setup/tasks/validate_ip_prefix.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0252b7c4117a704486dab2cc0327693367347d2af1a48438c82ba6423648a33f", + "chksum_sha256": "89fd2e1962afd271a930373641f3636f55105b6399880e10a60659338e9bdf29", "format": 1 }, { - "name": 
"roles/hosted_engine_setup/tasks/filter_team_devices.yml", + "name": "roles/hosted_engine_setup/tasks/pause_execution.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "5de9a207eead341ddf6db0a2a58971e37b9de2c86d345abfcc5e8dcd585abc50", + "chksum_sha256": "d7ad99d745ce1f1580feeb39ce1664a2e5845a8b205b1edd13f5d9284088d1b0", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/filter_unsupported_vlan_devices.yml", + "name": "roles/hosted_engine_setup/tasks/ipv_switch.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "46fde29f53a0556bb726ba3274baece3f06632114a353e5d21c12931357241a0", + "chksum_sha256": "711996872769061a1115daa045865c1ac19abd79102c4df1137a0a8f2271d3fa", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/final_clean.yml", + "name": "roles/hosted_engine_setup/tasks/fc_getdevices.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f34840bd6ad16a16f22687d9deee369eef7f53d72f0d172a0a63d89cb5dcbbf5", + "chksum_sha256": "2608dffb9d636db3337d9d9f55cf3809e8ec3fc27914d619eba32daf92d79330", "format": 1 }, { @@ -1961,157 +1821,164 @@ "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/get_local_vm_disk_path.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "06ea4b5fd5f94f8173680093e3fac76c8a628c6b646901cce30ac1559d107fb7", + "name": "roles/hosted_engine_setup/tasks/create_target_vm", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/install_appliance.yml", + "name": "roles/hosted_engine_setup/tasks/create_target_vm/03_hosted_engine_final_tasks.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "71fdc499e89c44671510b2c400c7450e8fd49f743db42f1e44120c25494fb0ee", + "chksum_sha256": "123b64772d1fa7037072c48516d3748839aeef754f8602804ef59d4af3327dce", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/install_packages.yml", + "name": "roles/hosted_engine_setup/tasks/create_target_vm/02_engine_vm_configuration.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7c98b24384a2d27657cac75f3414eef7ac37471e7e6d1062166685f2cdfba9d4", + "chksum_sha256": "2f8ab8a759e3cc2667bdf33db4289c4fd2913614cada120cd079d3cd2838ab96", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/ipv_switch.yml", + "name": "roles/hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "711996872769061a1115daa045865c1ac19abd79102c4df1137a0a8f2271d3fa", + "chksum_sha256": "206d0c5887fd7d7a69d6cfe8c75af12d6f60dd9c27316133f15c2827f6938e3b", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/iscsi_discover.yml", + "name": "roles/hosted_engine_setup/tasks/filter_team_devices.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "72d761ba4daaedeac8e34dca6bcc0eeebde701128bb7b5fdbe96b08d286c177d", + "chksum_sha256": "5de9a207eead341ddf6db0a2a58971e37b9de2c86d345abfcc5e8dcd585abc50", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/iscsi_getdevices.yml", + "name": "roles/hosted_engine_setup/tasks/initial_clean.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "10b2609a5041eb0b85fbd38462865d47ed7bad00a6d0dc09462832e00f41b3c0", + "chksum_sha256": "d482a83c87e83c09ea4063b1774586fca39f6c682d92e1f907fd3ea8cad01e22", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/main.yml", + "name": "roles/hosted_engine_setup/tasks/fetch_engine_logs.yml", "ftype": "file", "chksum_type": "sha256", - 
"chksum_sha256": "27fc692c3676b70eb4d3151dc56485fa1327697b55da5796510f5383e5103c93", + "chksum_sha256": "9f6e0b81f31f65da82dd872b278e002f8934ef3c69ce4096091f7b2eae5cc86a", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/partial_execution.yml", + "name": "roles/hosted_engine_setup/tasks/restore_host_redeploy.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "00ff4a185254d24adc2bf63fe1aa8b99eed49f38ce0f902cd6e9637b3ffb0f21", + "chksum_sha256": "76d792777a58ed7d04a034b2dd546e12c92a854fb148752b85464db63b90b508", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/pause_execution.yml", + "name": "roles/hosted_engine_setup/tasks/clean_localvm_dir.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "d7ad99d745ce1f1580feeb39ce1664a2e5845a8b205b1edd13f5d9284088d1b0", + "chksum_sha256": "fb034920d818a04ed3674f35880f28a108d5023272b0e801517af83cb514cf95", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/restore_backup.yml", + "name": "roles/hosted_engine_setup/tasks/validate_vlan_name.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7ff3cdabdd919efa09b25ee7f168f40203497a0be4e549229d931fbd4e84164f", + "chksum_sha256": "768f9061d2b4db60d9f1f2bd67c19ce469e3058fddd39d0ca5ec0f9679517173", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/restore_host_redeploy.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "76d792777a58ed7d04a034b2dd546e12c92a854fb148752b85464db63b90b508", + "name": "roles/hosted_engine_setup/hooks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/search_available_network_subnet.yaml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "808cda0fb20b45d96a1992ef941996dd18791a789e865a06a9e2e49c82c0cbaf", + "name": "roles/hosted_engine_setup/hooks/after_setup", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/sync_on_engine_machine.yml", + "name": "roles/hosted_engine_setup/hooks/after_setup/add_host_storage_domain.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "fc1907519278a8e3867f938f8f4f2bde39a9458670d851855890ae6de344ee3e", + "chksum_sha256": "5861e80522e6a68871c514c6469bc923e50e44c7283b1445744c429dc0f8bf83", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/validate_ip_prefix.yml", + "name": "roles/hosted_engine_setup/hooks/after_setup/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "89fd2e1962afd271a930373641f3636f55105b6399880e10a60659338e9bdf29", + "chksum_sha256": "21b644ac87bacf4df09f35510e7e47c43e179e5e4c4c297ac381d892e3c101eb", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/validate_vlan_bond_mode.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "7ea0d9aa61a5da96b08fbaa824c0e86f5237e95633fcd079bbb3c8e608e6a32f", + "name": "roles/hosted_engine_setup/hooks/enginevm_after_engine_setup", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/validate_vlan_name.yml", + "name": "roles/hosted_engine_setup/hooks/enginevm_after_engine_setup/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "768f9061d2b4db60d9f1f2bd67c19ce469e3058fddd39d0ca5ec0f9679517173", + "chksum_sha256": "ab33f419e16208c32590d47c90d428f23dad1528151b994d4ce4dd07ba0955d3", "format": 1 }, { - "name": "roles/hosted_engine_setup/tasks/initial_clean.yml", + "name": 
"roles/hosted_engine_setup/hooks/enginevm_before_engine_setup", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "roles/hosted_engine_setup/hooks/enginevm_before_engine_setup/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "9235657660efb0b3cfb7c4bda6c0ec7c48e21d8c4ef07a949ab9bd0ee9b973db", + "chksum_sha256": "34769789fabe1aa2e5637fb5242e429ff8dc5fc9614d816165787937f767ecff", "format": 1 }, { - "name": "roles/hosted_engine_setup/templates", + "name": "roles/hosted_engine_setup/hooks/after_add_host", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/templates/broker.conf.j2", + "name": "roles/hosted_engine_setup/hooks/after_add_host/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "d5d0691008ba803c77c10312be55b10e47ca2e9d04049c8751c12aac2c63452a", + "chksum_sha256": "99f37c2644368879fc3f47df1673128068861c0c551488f3dd1d6c0ef930b943", "format": 1 }, { - "name": "roles/hosted_engine_setup/templates/fhanswers.conf.j2", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "bd814bc72fb77a283864d2d2bcc729cd1e1424e55d96d3e3c52ac7fe86c4ed6e", + "name": "roles/hosted_engine_setup/templates", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/hosted_engine_setup/templates/hosted-engine.conf.j2", + "name": "roles/hosted_engine_setup/templates/ifcfg-eth0-dhcp.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a3a530d046a7e5b4c7b0691e6b6e8412e21b9480664604df04496a5db16acf91", + "chksum_sha256": "47f188d5d7f0c676a3bb4cdcd10eade0d329f2b22e898ee2865b5a99958f0f28", "format": 1 }, { - "name": "roles/hosted_engine_setup/templates/ifcfg-eth0-dhcp.j2", + "name": "roles/hosted_engine_setup/templates/ifcfg-eth0-static.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "47f188d5d7f0c676a3bb4cdcd10eade0d329f2b22e898ee2865b5a99958f0f28", + "chksum_sha256": "0c5713af9904015a96eb5c85156013c86d6262978329a2ecb098d3f2157632aa", "format": 1 }, { @@ -2122,17 +1989,24 @@ "format": 1 }, { - "name": "roles/hosted_engine_setup/templates/ifcfg-eth0-static.j2", + "name": "roles/hosted_engine_setup/templates/user-data.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0c5713af9904015a96eb5c85156013c86d6262978329a2ecb098d3f2157632aa", + "chksum_sha256": "ee53560828d127e697b6fac3e706af225214050da817ef0ba474619233a18f56", "format": 1 }, { - "name": "roles/hosted_engine_setup/templates/meta-data.j2", + "name": "roles/hosted_engine_setup/templates/network-config.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e4d1e14ea63ecadccab3ad35809d1126103ac83c0b8348af1c7ec1f9eeb5356d", + "chksum_sha256": "1cecf8d5c0c6e8de789058d57e33112c93ee9793b0d47c5e1d9af87009e046cd", + "format": 1 + }, + { + "name": "roles/hosted_engine_setup/templates/hosted-engine.conf.j2", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "a3a530d046a7e5b4c7b0691e6b6e8412e21b9480664604df04496a5db16acf91", "format": 1 }, { @@ -2143,17 +2017,17 @@ "format": 1 }, { - "name": "roles/hosted_engine_setup/templates/network-config.j2", + "name": "roles/hosted_engine_setup/templates/broker.conf.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "1cecf8d5c0c6e8de789058d57e33112c93ee9793b0d47c5e1d9af87009e046cd", + "chksum_sha256": "d5d0691008ba803c77c10312be55b10e47ca2e9d04049c8751c12aac2c63452a", "format": 1 }, { - "name": 
"roles/hosted_engine_setup/templates/user-data.j2", + "name": "roles/hosted_engine_setup/templates/fhanswers.conf.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ee53560828d127e697b6fac3e706af225214050da817ef0ba474619233a18f56", + "chksum_sha256": "bd814bc72fb77a283864d2d2bcc729cd1e1424e55d96d3e3c52ac7fe86c4ed6e", "format": 1 }, { @@ -2164,171 +2038,192 @@ "format": 1 }, { - "name": "roles/hosted_engine_setup/templates/vm.conf.j2", + "name": "roles/hosted_engine_setup/templates/meta-data.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7e12a9773fa641e2d90007baf8275b692f880a568f6f21ec896ba10d89661772", + "chksum_sha256": "e4d1e14ea63ecadccab3ad35809d1126103ac83c0b8348af1c7ec1f9eeb5356d", "format": 1 }, { - "name": "roles/hosted_engine_setup/README.md", + "name": "roles/hosted_engine_setup/templates/vm.conf.j2", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f6fd54dd1ee928aa2d5157715bb6efdc7b149235a1dfe84a35479a409ce014d6", - "format": 1 - }, - { - "name": "roles/infra", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "7e12a9773fa641e2d90007baf8275b692f880a568f6f21ec896ba10d89661772", "format": 1 }, { - "name": "roles/infra/defaults", + "name": "roles/hosted_engine_setup/examples", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/defaults/main.yml", + "name": "roles/hosted_engine_setup/examples/hosted_engine_deploy_remotehost.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "43cbf9c83626b92441ab813a6ed0967521abd594700db7f4a74afb10fb869634", + "chksum_sha256": "e2d43b9deaf49874e8c0b3303a466cea9b2d5848353da5feb5cd2080347863c9", "format": 1 }, { - "name": "roles/infra/examples", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/hosted_engine_setup/examples/hosted_engine_deploy_localhost.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "67398b5ebb07bcbc56ba3269c3ab11672bdad44ef34eaf5a54d6d834ff1fb05e", "format": 1 }, { - "name": "roles/infra/examples/vars", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/hosted_engine_setup/examples/passwords.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "7d743b2921acb5121edb77586f3357245704891451720da05f5f7677230f8a94", "format": 1 }, { - "name": "roles/infra/examples/vars/ovirt_infra_vars.yml", + "name": "roles/hosted_engine_setup/examples/iscsi_deployment_remote.json", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ea7382f07df13bb80695ee19c38c52aaa48138f85a5fd2a9c7a78efaf6f19411", + "chksum_sha256": "fd684ba2bf2c243ea49fcc3f724f80c11f3bff5aec833c539b363e2d8a664029", "format": 1 }, { - "name": "roles/infra/examples/vars/passwords.yml", + "name": "roles/hosted_engine_setup/examples/nfs_deployment.json", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", + "chksum_sha256": "e502f3e646feaf3043835aafafba260ae5dd804901ab11254ef8622a6058689b", "format": 1 }, { - "name": "roles/infra/examples/ovirt_infra.yml", + "name": "roles/hosted_engine_setup/examples/required_networks_fix.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a33e886ee1863693e8596d4e97aa28cdd0f774ab57f766699d3b80dd5ae7bdce", + "chksum_sha256": "ec7b32cfd216364b2fc795df93d753f958f5f570e9940d0a19d786e23dcf8aaa", "format": 1 }, { - "name": "roles/infra/examples/ovirt_infra_destroy.yml", + "name": 
"roles/hosted_engine_setup/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ec9b972a04e5e16cb267b9c492935300023dd2751d530b606c22852d7eb6eaee", + "chksum_sha256": "063243b8afbb8d2ac208930a7c472fc2aa73d08f71df3e48907ff58dfa16f487", "format": 1 }, { - "name": "roles/infra/roles", + "name": "roles/hosted_engine_setup/defaults", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/aaa_jdbc", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/hosted_engine_setup/defaults/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "071e8947f0d2c9308f8820d6aeab02d08ee7991df0228a03a52ebcef12acd3ea", "format": 1 }, { - "name": "roles/infra/roles/aaa_jdbc/defaults", + "name": "roles/hosted_engine_setup/files", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/aaa_jdbc/defaults/main.yml", + "name": "roles/hosted_engine_setup/files/35-allow-ansible-for-vdsm.rules", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "17d0abc72b21f8c4705c7f3a2685e127b5a35fd0fe7b7e8fd1d7fcf70ba00de3", + "chksum_sha256": "b562418571a4657f858b5feaf149e4f983480b7871955264f185c5b213595adf", "format": 1 }, { - "name": "roles/infra/roles/aaa_jdbc/tasks", + "name": "roles/infra", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/aaa_jdbc/tasks/main.yml", + "name": "roles/infra/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "roles/infra/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "1bf4dcaf0397579431601a07b3faf9552e6e064655d544a49a2e019de0efa77d", + "chksum_sha256": "d5e3dd05a90a3062d763151bd5fa57ba9b0d6a0d1e5371cd5117c31199c6f655", "format": 1 }, { - "name": "roles/infra/roles/aaa_jdbc/README.md", + "name": "roles/infra/tasks/create_infra.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4375cfc057fb42b16a8b0fbeb7715712355cac4d03e440b396608b1bf4fa27cc", + "chksum_sha256": "3f8c2c13ca874cd106c5df090fe27194761d5bb8c0ca832fcd2e2636da227fe9", "format": 1 }, { - "name": "roles/infra/roles/clusters", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/infra/tasks/remove_infra.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "910de118f4e477d913ab1480854952c657ad233da673b2842d31cb1c06b653b9", "format": 1 }, { - "name": "roles/infra/roles/clusters/tasks", + "name": "roles/infra/examples", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/clusters/tasks/main.yml", + "name": "roles/infra/examples/ovirt_infra.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a574b7097e21f095e7e46e57e770eed17ce6252eada2415d784a90a1cd3de7db", + "chksum_sha256": "a33e886ee1863693e8596d4e97aa28cdd0f774ab57f766699d3b80dd5ae7bdce", "format": 1 }, { - "name": "roles/infra/roles/clusters/vars", + "name": "roles/infra/examples/vars", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/clusters/vars/main.yml", + "name": "roles/infra/examples/vars/ovirt_infra_vars.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "3dd97b72b356fb9fda8d10d8c5877c5a9ad8db4b42cf17018d983ab56cbee10a", + "chksum_sha256": "ea7382f07df13bb80695ee19c38c52aaa48138f85a5fd2a9c7a78efaf6f19411", "format": 1 }, { - "name": 
"roles/infra/roles/clusters/README.md", + "name": "roles/infra/examples/vars/passwords.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8f7e8ddf2319f57e14f63216f2579e695b3f51fe9f1db47442ca7d9e3fd60846", + "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", + "format": 1 + }, + { + "name": "roles/infra/examples/ovirt_infra_destroy.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "ec9b972a04e5e16cb267b9c492935300023dd2751d530b606c22852d7eb6eaee", + "format": 1 + }, + { + "name": "roles/infra/README.md", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "e1d6fc4824b5074ccdf47dcb7cdf480443b11318eeee2f1f90b33e7b8482a550", + "format": 1 + }, + { + "name": "roles/infra/roles", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { @@ -2339,24 +2234,24 @@ "format": 1 }, { - "name": "roles/infra/roles/datacenter_cleanup/defaults", + "name": "roles/infra/roles/datacenter_cleanup/tasks", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/datacenter_cleanup/defaults/main.yml", + "name": "roles/infra/roles/datacenter_cleanup/tasks/disks.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "30ac2342a8f199951885187e8974c55a3c5a4bc0283746e430a5e34804d4f895", + "chksum_sha256": "356a72a24f1fc2190b8f13ac1a9b51422a444e221f3fe2a7085f91c22ac7f9bb", "format": 1 }, { - "name": "roles/infra/roles/datacenter_cleanup/tasks", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "roles/infra/roles/datacenter_cleanup/tasks/vm_pools.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "869addec6825d3ef8aef24b6ecdd96d759af75ecc99e9ce5f7108b2fe33b69fa", "format": 1 }, { @@ -2367,24 +2262,24 @@ "format": 1 }, { - "name": "roles/infra/roles/datacenter_cleanup/tasks/datacenter.yml", + "name": "roles/infra/roles/datacenter_cleanup/tasks/storages_pre.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0e569a2c53d245cf4fd7dc8a61a5cc6818ef35c9ecc1f00f9347985b1f0862f1", + "chksum_sha256": "570a2b8e93e98cc0bfbd78430959a0db65d500f54a39308db3a4f84394a618c6", "format": 1 }, { - "name": "roles/infra/roles/datacenter_cleanup/tasks/disks.yml", + "name": "roles/infra/roles/datacenter_cleanup/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "356a72a24f1fc2190b8f13ac1a9b51422a444e221f3fe2a7085f91c22ac7f9bb", + "chksum_sha256": "9e04a79449a131fc5fed5c509b2c8a268cef7c167cbb7443ad13f2d402c02a48", "format": 1 }, { - "name": "roles/infra/roles/datacenter_cleanup/tasks/main.yml", + "name": "roles/infra/roles/datacenter_cleanup/tasks/datacenter.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "9e04a79449a131fc5fed5c509b2c8a268cef7c167cbb7443ad13f2d402c02a48", + "chksum_sha256": "0e569a2c53d245cf4fd7dc8a61a5cc6818ef35c9ecc1f00f9347985b1f0862f1", "format": 1 }, { @@ -2395,10 +2290,10 @@ "format": 1 }, { - "name": "roles/infra/roles/datacenter_cleanup/tasks/storages_pre.yml", + "name": "roles/infra/roles/datacenter_cleanup/tasks/vms.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "570a2b8e93e98cc0bfbd78430959a0db65d500f54a39308db3a4f84394a618c6", + "chksum_sha256": "d82a8266fb68387223bf4a821cb6c9abe17a51a2567eb14e0e718d5558fecb12", "format": 1 }, { @@ -2409,94 +2304,122 @@ "format": 1 }, { - "name": "roles/infra/roles/datacenter_cleanup/tasks/vm_pools.yml", + "name": "roles/infra/roles/datacenter_cleanup/README.md", "ftype": "file", 
"chksum_type": "sha256", - "chksum_sha256": "869addec6825d3ef8aef24b6ecdd96d759af75ecc99e9ce5f7108b2fe33b69fa", + "chksum_sha256": "f50289bf733588f37db2429f37781ded326c0d74e18c697d79515022e5f38657", "format": 1 }, { - "name": "roles/infra/roles/datacenter_cleanup/tasks/vms.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "d82a8266fb68387223bf4a821cb6c9abe17a51a2567eb14e0e718d5558fecb12", + "name": "roles/infra/roles/datacenter_cleanup/defaults", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/datacenter_cleanup/README.md", + "name": "roles/infra/roles/datacenter_cleanup/defaults/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f50289bf733588f37db2429f37781ded326c0d74e18c697d79515022e5f38657", + "chksum_sha256": "30ac2342a8f199951885187e8974c55a3c5a4bc0283746e430a5e34804d4f895", "format": 1 }, { - "name": "roles/infra/roles/datacenters", + "name": "roles/infra/roles/permissions", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/datacenters/defaults", + "name": "roles/infra/roles/permissions/tasks", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/datacenters/defaults/main.yml", + "name": "roles/infra/roles/permissions/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ddb8e440f777516ca7dc411535887948bcbe53246b4181b3cb198f80dc472da3", + "chksum_sha256": "54d4344381fee5124b217646ed15a91e0553ef9823f6d9a8bc5dc37702c27703", "format": 1 }, { - "name": "roles/infra/roles/datacenters/tasks", + "name": "roles/infra/roles/permissions/README.md", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "373fe54a49924231191bba8d6f2b1a6eff00a0bcff5b73f73eef3fbc880e1f59", + "format": 1 + }, + { + "name": "roles/infra/roles/clusters", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/datacenters/tasks/main.yml", + "name": "roles/infra/roles/clusters/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "roles/infra/roles/clusters/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "b7196cdd92719bc397af18398173c424b8d131f4ee6dba7c1278f6bad1e612b2", + "chksum_sha256": "a574b7097e21f095e7e46e57e770eed17ce6252eada2415d784a90a1cd3de7db", "format": 1 }, { - "name": "roles/infra/roles/datacenters/README.md", + "name": "roles/infra/roles/clusters/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "214b779ef23478dac5fbca69527d16ef268ef69a0d21ecb90c9299b05b901599", + "chksum_sha256": "8f7e8ddf2319f57e14f63216f2579e695b3f51fe9f1db47442ca7d9e3fd60846", "format": 1 }, { - "name": "roles/infra/roles/external_providers", + "name": "roles/infra/roles/clusters/vars", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/external_providers/tasks", + "name": "roles/infra/roles/clusters/vars/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "3dd97b72b356fb9fda8d10d8c5877c5a9ad8db4b42cf17018d983ab56cbee10a", + "format": 1 + }, + { + "name": "roles/infra/roles/mac_pools", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/external_providers/tasks/main.yml", + "name": "roles/infra/roles/mac_pools/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, 
+ { + "name": "roles/infra/roles/mac_pools/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "5bfafe45e58d4d79d867d3f0b330d824f16b5635dc7a5891846fddc1574f82cb", + "chksum_sha256": "668356454fedd30e240952f68f64a3b591331c225b51c4d22aa361cc976ebbd4", "format": 1 }, { - "name": "roles/infra/roles/external_providers/README.md", + "name": "roles/infra/roles/mac_pools/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a02ecce75eb99b2607ea7fd89cfbc6d3a3078d99d84f3156ea383fd3bc0cc6f2", + "chksum_sha256": "c4e597a6aff75657e0a3d56d5d1624f0a5a19ff7e351f120fc1fb4b7d0210923", "format": 1 }, { @@ -2507,6 +2430,27 @@ "format": 1 }, { + "name": "roles/infra/roles/hosts/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "roles/infra/roles/hosts/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "67ff6d8584476db0f60efd67135f9a7d38e937ba4206f9580b69b8970250bcc1", + "format": 1 + }, + { + "name": "roles/infra/roles/hosts/README.md", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "4b9289ded6f5b48eaa09c6618d84253294f618353ee836d767261afaf0d5ff06", + "format": 1 + }, + { "name": "roles/infra/roles/hosts/defaults", "ftype": "dir", "chksum_type": null, @@ -2521,108 +2465,143 @@ "format": 1 }, { - "name": "roles/infra/roles/hosts/tasks", + "name": "roles/infra/roles/networks", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/hosts/tasks/main.yml", + "name": "roles/infra/roles/networks/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "roles/infra/roles/networks/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "67ff6d8584476db0f60efd67135f9a7d38e937ba4206f9580b69b8970250bcc1", + "chksum_sha256": "a5045933314fcad8a8588ca46b091938a9c8b708b0fdf8951ef6e3e83a4b08b9", "format": 1 }, { - "name": "roles/infra/roles/hosts/README.md", + "name": "roles/infra/roles/networks/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "4b9289ded6f5b48eaa09c6618d84253294f618353ee836d767261afaf0d5ff06", + "chksum_sha256": "69ba323cfd973321a9e768aace91dbbbb5983dd2814598e77fb1d55ccf6b6fd3", "format": 1 }, { - "name": "roles/infra/roles/mac_pools", + "name": "roles/infra/roles/datacenters", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/mac_pools/tasks", + "name": "roles/infra/roles/datacenters/tasks", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/mac_pools/tasks/main.yml", + "name": "roles/infra/roles/datacenters/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "668356454fedd30e240952f68f64a3b591331c225b51c4d22aa361cc976ebbd4", + "chksum_sha256": "b7196cdd92719bc397af18398173c424b8d131f4ee6dba7c1278f6bad1e612b2", "format": 1 }, { - "name": "roles/infra/roles/mac_pools/README.md", + "name": "roles/infra/roles/datacenters/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "c4e597a6aff75657e0a3d56d5d1624f0a5a19ff7e351f120fc1fb4b7d0210923", + "chksum_sha256": "214b779ef23478dac5fbca69527d16ef268ef69a0d21ecb90c9299b05b901599", "format": 1 }, { - "name": "roles/infra/roles/networks", + "name": "roles/infra/roles/datacenters/defaults", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/networks/tasks", + "name": 
"roles/infra/roles/datacenters/defaults/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "ddb8e440f777516ca7dc411535887948bcbe53246b4181b3cb198f80dc472da3", + "format": 1 + }, + { + "name": "roles/infra/roles/aaa_jdbc", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/networks/tasks/main.yml", + "name": "roles/infra/roles/aaa_jdbc/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "roles/infra/roles/aaa_jdbc/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a5045933314fcad8a8588ca46b091938a9c8b708b0fdf8951ef6e3e83a4b08b9", + "chksum_sha256": "1bf4dcaf0397579431601a07b3faf9552e6e064655d544a49a2e019de0efa77d", "format": 1 }, { - "name": "roles/infra/roles/networks/README.md", + "name": "roles/infra/roles/aaa_jdbc/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "69ba323cfd973321a9e768aace91dbbbb5983dd2814598e77fb1d55ccf6b6fd3", + "chksum_sha256": "4375cfc057fb42b16a8b0fbeb7715712355cac4d03e440b396608b1bf4fa27cc", "format": 1 }, { - "name": "roles/infra/roles/permissions", + "name": "roles/infra/roles/aaa_jdbc/defaults", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/permissions/tasks", + "name": "roles/infra/roles/aaa_jdbc/defaults/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "17d0abc72b21f8c4705c7f3a2685e127b5a35fd0fe7b7e8fd1d7fcf70ba00de3", + "format": 1 + }, + { + "name": "roles/infra/roles/external_providers", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/roles/permissions/tasks/main.yml", + "name": "roles/infra/roles/external_providers/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "roles/infra/roles/external_providers/tasks/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "54d4344381fee5124b217646ed15a91e0553ef9823f6d9a8bc5dc37702c27703", + "chksum_sha256": "5bfafe45e58d4d79d867d3f0b330d824f16b5635dc7a5891846fddc1574f82cb", "format": 1 }, { - "name": "roles/infra/roles/permissions/README.md", + "name": "roles/infra/roles/external_providers/README.md", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "373fe54a49924231191bba8d6f2b1a6eff00a0bcff5b73f73eef3fbc880e1f59", + "chksum_sha256": "a02ecce75eb99b2607ea7fd89cfbc6d3a3078d99d84f3156ea383fd3bc0cc6f2", "format": 1 }, { @@ -2654,773 +2633,808 @@ "format": 1 }, { - "name": "roles/infra/tasks", + "name": "roles/infra/defaults", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/infra/tasks/create_infra.yml", + "name": "roles/infra/defaults/main.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "3f8c2c13ca874cd106c5df090fe27194761d5bb8c0ca832fcd2e2636da227fe9", + "chksum_sha256": "43cbf9c83626b92441ab813a6ed0967521abd594700db7f4a74afb10fb869634", "format": 1 }, { - "name": "roles/infra/tasks/main.yml", + "name": "meta", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "meta/runtime.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "d5e3dd05a90a3062d763151bd5fa57ba9b0d6a0d1e5371cd5117c31199c6f655", + "chksum_sha256": "25354b3afabd2b5a0c3e209aeb30b9002752345651a4dbd6e74adcc0291999c2", "format": 1 }, { - "name": "roles/infra/tasks/remove_infra.yml", + "name": "meta/requirements.yml", "ftype": 
"file", "chksum_type": "sha256", - "chksum_sha256": "910de118f4e477d913ab1480854952c657ad233da673b2842d31cb1c06b653b9", + "chksum_sha256": "12b1ba483812c1f1012e4379c1fad9039ff728d2be82d2d1cd96118e9ff7b96b", "format": 1 }, { - "name": "roles/infra/README.md", + "name": "meta/execution-environment.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e1d6fc4824b5074ccdf47dcb7cdf480443b11318eeee2f1f90b33e7b8482a550", + "chksum_sha256": "30270de38aee5490073ea0c04a2202948e0edeb671fc0d5f0d441472c6856592", "format": 1 }, { - "name": "roles/remove_stale_lun", + "name": "changelogs", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/remove_stale_lun/defaults", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "changelogs/README.md", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "be61604e7e4d2c3d2c1a6834828bc05589fba2c4b80332a9476c8c2598b3389b", "format": 1 }, { - "name": "roles/remove_stale_lun/defaults/main.yml", + "name": "changelogs/config.yaml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e0781e2354d1ca7c79e6b6915ab26643344acb2b075a10c900abaa9c717f7aa3", + "chksum_sha256": "a9855447b14e048a16cd7877ffeab3bfe07496680c55055a3e8de8c0d2fb64bd", "format": 1 }, { - "name": "roles/remove_stale_lun/examples", + "name": "changelogs/fragments", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/remove_stale_lun/examples/passwords.yml", + "name": "changelogs/fragments/705-support-boot-disk-resizing-renaming.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", + "chksum_sha256": "9caed0d10527cae84744269de18595b31583e49997b449b3d3b5fc4e642c360f", "format": 1 }, { - "name": "roles/remove_stale_lun/examples/remove_stale_lun.yml", + "name": "changelogs/fragments/721-enhancement-vm-storage-error-resume-behaviour.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "37bb921142b647852d34092e7ed9c98627656f9d1ba9ca4e19bb7a62e029229c", + "chksum_sha256": "0b7c9528b64d99d556c7a87df86c03af52a788a56e94f180b7035d5faa74da63", "format": 1 }, { - "name": "roles/remove_stale_lun/tasks", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "changelogs/fragments/.keep", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "6eeaeb3bcb0e4223384351f71d75972cc5977d76a808010a8af20e3a2c67fefc", "format": 1 }, { - "name": "roles/remove_stale_lun/tasks/fetch_hosts.yml", + "name": "changelogs/fragments/722-add-tpm-enabled.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "5d648cb0fa07d757846d564fe520a5f9fa0a126e33c769481bc9c39d07eee359", + "chksum_sha256": "c6bc9b8159af4ceb30e98dec32ca3ccd2202ea2910eea97cbd3d5ce65258e3bc", "format": 1 }, { - "name": "roles/remove_stale_lun/tasks/main.yml", + "name": "changelogs/fragments/723-ovirt_role-fix-administrative-condition.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "dd053c903c438a2c9c8c4b79e4954f6e1474fedd8902f8a4665432454412b0fb", + "chksum_sha256": "d9f329c587d21a0f14c2b615d826e002f6f7a76f2927f9b33d1978c2d4dc63bf", "format": 1 }, { - "name": "roles/remove_stale_lun/tasks/remove_mpath_device.yml", + "name": "changelogs/changelog.yaml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8af83eb6cbe20c7fb2f0ef0a9b69f6129c97b665c8b3ad3cffff77e5300116da", + "chksum_sha256": "5e4a89da9a9447b48a377945ea39df7949f103808e0eb7e5f0dff28811c03899", "format": 1 }, { 
- "name": "roles/remove_stale_lun/README.md", + "name": "bindep.txt", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "36614071855b57d37060c8fb0db8da56627332613c06fe01b63396095226fbd3", + "chksum_sha256": "5d9a28c4ad9daa2c2012e282902fe45b58f983b4c4fa07587c98799f54ae2189", "format": 1 }, { - "name": "roles/repositories", + "name": "ovirt-ansible-collection.spec", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "a81b5d6e7a29d0ee011ffe5076b54fb3529beda5f7bc4613a83afed75f68c2c1", + "format": 1 + }, + { + "name": "ovirt-ansible-collection.spec.in", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "5a5dd7509d741ebdd37bcc3554eb61ccf799a7aa781226a6a4cf52d95fbad7aa", + "format": 1 + }, + { + "name": "plugins", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/repositories/defaults", + "name": "plugins/inventory", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/repositories/defaults/main.yml", + "name": "plugins/inventory/ovirt.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "db4e3a15d6e4a0b7dc391ab7892418487bb2391e453d837ee77770989101cb22", + "chksum_sha256": "0e0e0349a91b28f4628726cc43c379b1eff80ee6297687a95217e3b60889c6a4", "format": 1 }, { - "name": "roles/repositories/examples", + "name": "plugins/filter", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/repositories/examples/ovirt_repositories_release_rpm.yml", + "name": "plugins/filter/convert_to_bytes.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "16f013b459194303f4a4b16485a9ded42c866ae244c024ef6bca5e544e1779cd", + "chksum_sha256": "8b83892c6a71f5cab3c93dfc93626d901b7cef7bfd704255aa1ac78424ae3426", "format": 1 }, { - "name": "roles/repositories/examples/ovirt_repositories_subscription_manager.yml", + "name": "plugins/filter/ovirtvmipv6.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "bb5e84201ed0b91de44f606f4b2a930ce07065de4eb98ce137d41256399e1266", + "chksum_sha256": "833ee9ec1c40054ba6349593176630fe3fc305baf026f41bf880a655bbf697a9", "format": 1 }, { - "name": "roles/repositories/examples/passwords.yml", + "name": "plugins/filter/json_query.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7baec1da55ec214cdeaf66cb5fbdce88498268997b2a4bb5b6a3fc5a093e4e06", - "format": 1 - }, - { - "name": "roles/repositories/tasks", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "chksum_sha256": "1754fc223cf8315816d846798dad5e9a07daef8e1b6adaa282b15afa3ca48983", "format": 1 }, { - "name": "roles/repositories/tasks/backup-repos.yml", + "name": "plugins/filter/convert_to_bytes.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7c9af33497f79b5246552693d4cdf1d67ab172049a77e4712df0e6945ef1ec14", + "chksum_sha256": "5cd2b833e5f7de2cca994ddcbf3d2c9a99d4e989ab9cf78c231336fc04c15fa4", "format": 1 }, { - "name": "roles/repositories/tasks/install-satellite-ca.yml", + "name": "plugins/filter/json_query.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "b1a7294752c54db0db59f419718b94b4c0da63dcb06e9725ec5e03c6877ba18c", + "chksum_sha256": "1a0062d16af7209efcb4a4bb2cd2b5329a54f9d19311323097c86d90e63b082f", "format": 1 }, { - "name": "roles/repositories/tasks/main.yml", + "name": "plugins/filter/removesensitivevmdata.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f22d7a5ac1fd5b801067ac70b05cda17c43bb976957402693362a95908414cc3", + "chksum_sha256": 
"bac378ead2a6a37613460d3852756744706d17d8a892205594363d183ccf7b86", "format": 1 }, { - "name": "roles/repositories/tasks/search-pool-id.yml", + "name": "plugins/filter/get_ovf_disk_size.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "1982ab263ca68de6cf0067767b11702a2c4ab146432ed1b11b510715d7697e36", + "chksum_sha256": "8fc13373ae2e97d7e2b90b73c6e0f235eec00d0e8327475c5ccde830ce39a868", "format": 1 }, { - "name": "roles/repositories/tasks/rh-subscription.yml", + "name": "plugins/filter/ovirtvmipsv4.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f68c672708d4c32fc653654add85c4942a82e4f0aecd81195d322d5d9a054287", + "chksum_sha256": "ce9ef1c18ddfbe6c42ba50095c398321470f7399f5a0fe967095e513b4ba2bcd", "format": 1 }, { - "name": "roles/repositories/tasks/rpm.yml", + "name": "plugins/filter/filtervalue.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "af0e42025c9423cb9f722d17ae6c2420aca003560e972a000d97328848a8c74b", + "chksum_sha256": "d26325ae24aa363744d7166bf017dcf53fa74ef74f1d3299a3d99d299db307b9", "format": 1 }, { - "name": "roles/repositories/tasks/satellite-subscription.yml", + "name": "plugins/filter/ovirtdiff.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "73b390217602bd01ad7ab6a47e00d1bcdb94c995951f30cc27c01c1641185c6f", + "chksum_sha256": "dc1bc2085850080372e875c508069dd417262df2c99bef29c47aa457f161aec1", "format": 1 }, { - "name": "roles/repositories/vars", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/filter/ovirtvmips.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "bd934b7bcf81c838f0f601446953c82a05887e04769ab6f80ecfb786fe56566d", "format": 1 }, { - "name": "roles/repositories/vars/default.yml", + "name": "plugins/filter/get_network_xml_to_dict.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8f3190dd83d2c27e2cd4b4cc36b9f574075ac41cd8a62823a7a9c119c0bae624", + "chksum_sha256": "02f7e3f247c2d4dc5d3c4191aec83a5017338d5004b0008fd585fd16d0523d54", "format": 1 }, { - "name": "roles/repositories/vars/engine_4.1.yml", + "name": "plugins/filter/get_ovf_disk_size.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f01ec6b4fcfc630b4f8a139b4f30123e7805ed2976cba75dc9a3e3c380fc5db1", + "chksum_sha256": "b58e408d4fcd3ed6ac7016d984588c3565b83b3e2139cb71e11ba8aef38c9d18", "format": 1 }, { - "name": "roles/repositories/vars/engine_4.2.yml", + "name": "plugins/filter/ovirtvmip.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ced8a355735ce4d3636dc76bc7d63a6a71834064155399f971d9cb37da3237c1", + "chksum_sha256": "db8735e7f4e469300d3505718061031d71d3483616723fad483ed9cd74ee4cb1", "format": 1 }, { - "name": "roles/repositories/vars/engine_4.3.yml", + "name": "plugins/filter/ovirtvmipsv6.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "6936324dbf3686dab7f3a0fd7586a7f1db9d56e1fcc60c8033b94522d393997e", + "chksum_sha256": "5075356bf5fa22a0d2601baf03fc50bdb6c91aad9543f3983d8bf8ee2b3080d4", "format": 1 }, { - "name": "roles/repositories/vars/engine_4.4.yml", + "name": "plugins/filter/ovirtvmip.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "18f42ea5ce1dc798ee607357fa74569d8f069e7816f07ae9f79be74a9c1e91d2", + "chksum_sha256": "95377cac0c2916f8e15eb504f1541327dbfb729ce0e33165c83e3b0bc574e7d6", "format": 1 }, { - "name": "roles/repositories/vars/engine_eus_4.4.yml", + "name": "plugins/filter/ovirtvmipv4.yml", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": 
"ddef8e1dda88e910e2a5d83f1fae2c851757de805321235ac166b1458b1a39b6", + "chksum_sha256": "838bffeb5436ac25f3888124eab7dc1903d432f75d8e43bd73dba1433317d5d5", "format": 1 }, { - "name": "roles/repositories/vars/host_4.1.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "b3171ba133adc54ba539e763246251b0f833dc8603d5a46243b55d82fbb80490", + "name": "plugins/test", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/repositories/vars/host_4.2.yml", + "name": "plugins/test/ovirt_proxied_check.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8a97eeb8025db4ed4a5c88bf2a652f41982f48a2ce195e3c47b0990897873cd6", + "chksum_sha256": "1cdf454efa33b668a906980a358f3ea9efc9897511fdb56bc910cf9e548ba539", "format": 1 }, { - "name": "roles/repositories/vars/host_4.3.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "ec3616b3d9433ef599822a6131e7d3168d5b5bb75712f0b69a1c822459cd6145", + "name": "plugins/doc_fragments", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/repositories/vars/host_4.4.yml", + "name": "plugins/doc_fragments/ovirt_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "dad02834d3f927ce052a2e9180f7a809d905369691d0707342b6557934315fc5", + "chksum_sha256": "87131c23c708320037e45ebd46773d2e48fcb38ba79503a5e573dd1037a857d2", "format": 1 }, { - "name": "roles/repositories/vars/host_eus_4.4.yml", + "name": "plugins/doc_fragments/ovirt.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ddef8e1dda88e910e2a5d83f1fae2c851757de805321235ac166b1458b1a39b6", + "chksum_sha256": "a0cda744dca79d659df4846df7b0e257ba33f5e92e8f98776a6e20b49a1b285e", "format": 1 }, { - "name": "roles/repositories/vars/host_ppc_4.4.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": "89c1925d681185c71e2a249f30d0cc1efc885aa2339c5866b01f2459f1ddad5f", + "name": "plugins/module_utils", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/repositories/vars/host_ppc_eus_4.4.yml", + "name": "plugins/module_utils/version.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7b2a81f08cee1bbf2fa7431df24eb4c47697227531b05efdd6666eb3af4626b7", + "chksum_sha256": "b3de6a89533b19b883f7f3319e8de780acc0d53f3f5caed1f3006e384232ce60", "format": 1 }, { - "name": "roles/repositories/vars/rhvh_4.1.yml", + "name": "plugins/module_utils/__init__.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "cbc95494cc017f3b7ccf608dc59b77394847929474531547fe5a6448d71d8b16", + "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", "format": 1 }, { - "name": "roles/repositories/vars/rhvh_4.2.yml", + "name": "plugins/module_utils/ovirt.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "cbc95494cc017f3b7ccf608dc59b77394847929474531547fe5a6448d71d8b16", + "chksum_sha256": "39fed5a2f95e142ec6308281be87b553b61a78261c33e954c4a16d476b787354", "format": 1 }, { - "name": "roles/repositories/vars/rhvh_4.3.yml", + "name": "plugins/module_utils/cloud.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "cbc95494cc017f3b7ccf608dc59b77394847929474531547fe5a6448d71d8b16", + "chksum_sha256": "2fc3ad35c92926ddc389feb93244ed0432321a0ff01861f2a62c96582991298c", "format": 1 }, { - "name": "roles/repositories/vars/rhvh_4.4.yml", - "ftype": "file", - "chksum_type": "sha256", - "chksum_sha256": 
"fe7220fb776160b30f86fe7f9b70c41ae4d26e774d14a80951bf9b91aaacaffb", + "name": "plugins/callback", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, "format": 1 }, { - "name": "roles/repositories/README.md", + "name": "plugins/callback/stdout.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e3c85a61e0991f9316f14f9dfcef8169169f4afeaeb6f62903900e15cb2aabb6", + "chksum_sha256": "1945aee0ab3daf085f1ebe4b99f028160aded0bc7de35059cb41ed5fb4761db9", "format": 1 }, { - "name": "roles/shutdown_env", + "name": "plugins/modules", "ftype": "dir", "chksum_type": null, "chksum_sha256": null, "format": 1 }, { - "name": "roles/shutdown_env/defaults", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_user_info.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "db59b869902e2899f20390f97429f946a260fa21c6d560415e7716dc6c1fd62d", "format": 1 }, { - "name": "roles/shutdown_env/defaults/main.yml", + "name": "plugins/modules/ovirt_vm_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "23ee730fc457add36b19f314667fcea6891341e5e8ce982cd64f47773b7621fe", + "chksum_sha256": "471def32947d2321ba0b443be63ff73307928a1c2d5cfc0ac503ee259cd66826", "format": 1 }, { - "name": "roles/shutdown_env/examples", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_disk.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "d671bf2cd849542f1e007965eb4c6087dd8e62c2ab3c5c512281e02e73e5804a", "format": 1 }, { - "name": "roles/shutdown_env/examples/passwords.yml", + "name": "plugins/modules/ovirt_template_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", + "chksum_sha256": "06fc5b798a885b121e07457928057618fd23a15916bd13c04a40041c3765f968", "format": 1 }, { - "name": "roles/shutdown_env/examples/shutdown_env.yml", + "name": "plugins/modules/ovirt_api_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "845381aee5af25a91b98ae136d2b68fe217c686e21caa74b2016405c98194d5f", + "chksum_sha256": "d27663995af7b31f1dc83dc14de62d977ce6a2f7bec143938caad2bef1fcbb09", "format": 1 }, { - "name": "roles/shutdown_env/tasks", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_nic.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "d6fbd4ca8e568992c7df266004dbb11c23c54fdb84cec28b66115f4b42814cfa", "format": 1 }, { - "name": "roles/shutdown_env/tasks/main.yml", + "name": "plugins/modules/ovirt_snapshot.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "8ee1707f55bef637da0a715c984a5bcfaa70dca0662b00a4344203d8750fc453", + "chksum_sha256": "ef2315d7cf9362a56213acbc5d17d8bdf7c9180dd6dcb5cf875d402add95ca5a", "format": 1 }, { - "name": "roles/shutdown_env/README.md", + "name": "plugins/modules/ovirt_storage_vm_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "d1bb8523fef9d1dc2ccd7202761c9085edb675f01d3205401117be6311cd1e0e", + "chksum_sha256": "9cb9a72bca431bb3c6efb50394620f4d4d737cdda22d4bb1519d7a966b4a756b", "format": 1 }, { - "name": "roles/vm_infra", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_network_info.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "c62297e7674c2ed1957cc3e891ff8d9a42bd7ab72b54303e5fca0055dbd8b3da", "format": 1 }, { - "name": "roles/vm_infra/defaults", - "ftype": "dir", - 
"chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_nic_info.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "a41de252b76f66df35c74485ba88c1a1ec453ce2392e4e11e0dcf4d89cbfec4d", "format": 1 }, { - "name": "roles/vm_infra/defaults/main.yml", + "name": "plugins/modules/ovirt_mac_pool.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a0cb0bfdf543b6406754a7524c94b4bded7fd7c7d0bc1648d2571843de4f904c", + "chksum_sha256": "2c5822a799199e4d7d6e59ec7c0f67a3e56e5728ac0aa5a217b4746f7d4155dc", "format": 1 }, { - "name": "roles/vm_infra/examples", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_instance_type.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "b1856d38726508ff171c3b8d2d5aea7342a2e62c254de87a28cb7ddc1b090752", "format": 1 }, { - "name": "roles/vm_infra/examples/ovirt_vm_infra.yml", + "name": "plugins/modules/ovirt_job.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "d315a65145a8294f0e007f65d336c607926da12ab2621e4a60c8c627fa9f907a", + "chksum_sha256": "9d92d289cc7309d7304cf5d5601fd0806b9cef7e179fd16bdf4ed45e43912d51", "format": 1 }, { - "name": "roles/vm_infra/examples/ovirt_vm_infra_inv.yml", + "name": "plugins/modules/ovirt_host_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "800510a76705a8b1ac6a8b94f31826c2f303aa74730e151cdfe0b3984eaa6eb7", + "chksum_sha256": "5b74f6f2a0f6f88c34ad69faa91dda79ecb904c3717fbc58bff938bf022f0be3", "format": 1 }, { - "name": "roles/vm_infra/examples/passwords.yml", + "name": "plugins/modules/ovirt_template.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", + "chksum_sha256": "4d92fa178a06c72c049e5bed7c8b20eb88d243c8fb894c94832e07284a853d89", "format": 1 }, { - "name": "roles/vm_infra/tasks", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_vnic_profile_info.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "cdf3e86fa6c201c71a57e37b70c0537619dffa1b2a5a140c448b6bd0a9c9e000", "format": 1 }, { - "name": "roles/vm_infra/tasks/affinity_groups.yml", + "name": "plugins/modules/ovirt_storage_connection.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "99de64bdc087e561bccb1adacf980271a66654f63ce536678cade94a8b6e9ca2", + "chksum_sha256": "a7c12a7215094295957cd95b9040220cad09db91492cfd463ffa9fe179d32f82", "format": 1 }, { - "name": "roles/vm_infra/tasks/affinity_labels.yml", + "name": "plugins/modules/ovirt_vmpool_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "1302469d26a335dab3169677c35ae70d96b84272e586c27786aabf7f06a5468e", + "chksum_sha256": "f8b1cba56e5625e5e226ea5fe7c247a39c43e2f51bfa2206c75daf2c61381544", "format": 1 }, { - "name": "roles/vm_infra/tasks/create_inventory.yml", + "name": "plugins/modules/ovirt_quota_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a3df94becb3593f7dc9979a8ed5e740a60dc617c33b8f19f5f9f23567d9a0114", + "chksum_sha256": "d68d57faa87b100e32859c2db99f446f380f6c3248aaa354316be8f7b75eb4de", "format": 1 }, { - "name": "roles/vm_infra/tasks/create_vms.yml", + "name": "plugins/modules/ovirt_vm.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0a45f7bf1f9bcaeec819a49e26272d959cd928e674e8a2cc6d78a61ecd572f09", + "chksum_sha256": "b658dbd833d7adc92ec7746205f1ad61f0821b2e551e243313731ac5ec59b3ce", "format": 1 }, { - "name": 
"roles/vm_infra/tasks/main.yml", + "name": "plugins/modules/ovirt_cluster_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f06571e7e9c4ad867ca9a9ae451511b71c27934104b926f75e39b33d4efe148b", + "chksum_sha256": "deb9584b9c838a64f28414c5922ac93d8a435867882172904d822f2db6f78c3c", "format": 1 }, { - "name": "roles/vm_infra/tasks/manage_state.yml", + "name": "plugins/modules/ovirt_tag_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "52e5f76a2c2f30da25bed6d6ebd5b355d41bbcc72c1bdcb432c7a82995634d03", + "chksum_sha256": "e361bc1d60bf17cd01dae465b172cad3c294663e91b554e364f41ba13800ca6e", "format": 1 }, { - "name": "roles/vm_infra/tasks/vm_state_absent.yml", + "name": "plugins/modules/ovirt_permission.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "2d5d08e9ac19af8523d7e8b0330294810e91d5ad77d4d4b67e1ccd61388ddda4", + "chksum_sha256": "da6d49d7868f708a6af6c2439df2e45a4b3df00ac40574e24e6f33ec9569d0ba", "format": 1 }, { - "name": "roles/vm_infra/tasks/vm_state_present.yml", + "name": "plugins/modules/ovirt_host_network.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7225536dc041606cc96d608053e37faffdf24e2b6bb2f813d819072a7b130d07", + "chksum_sha256": "f9f283291e0c66aea729c6f18d9905cd2ba7e0c598ae0636c493f3068a610584", "format": 1 }, { - "name": "roles/vm_infra/README.md", + "name": "plugins/modules/ovirt_external_provider_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "c8c6eebe73201d32862598f55e691204600b3b3d060e61cc233ff809b19ee3c1", + "chksum_sha256": "16254f26fa4bfb2f6112c0cb94bd0847270982342e5433a214f1dfd922a55492", "format": 1 }, { - "name": "roles/image_template", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_event_info.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "62676c74e45c15f8a7ccc6a19706eb09bb46c133fc1cd2aa105e758c1f5a5691", "format": 1 }, { - "name": "roles/image_template/defaults", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_disk_profile.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "67d05ef0ad11058ed6a18da8a4079ab8d4d8902a797412646d8e7f2d09464c35", "format": 1 }, { - "name": "roles/image_template/defaults/main.yml", + "name": "plugins/modules/ovirt_storage_domain_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "067ecccb5371a364a0f3addff4870bf1bf8a8985b8bd39dfebc75db057005e77", + "chksum_sha256": "fa3de5e4f02ecaada97ed3709579acc92f27053633d3dd0a48c73fa17d8114ed", "format": 1 }, { - "name": "roles/image_template/examples", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_tag.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "41961b31845824591a0691d386e9a8f8ac2c1b85415b522109a8f1f4e798e161", "format": 1 }, { - "name": "roles/image_template/examples/ovirt_image_template.yml", + "name": "plugins/modules/ovirt_vm_os_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "ed3ee749c3fe4ea012157bf15c4af22461c9b6707d0fe64ac75e59b98860fbe1", + "chksum_sha256": "78e392cc8187cb6eb1e506e3afb9036e78cbc9693663e0e6a02ba0e5fdd7c463", "format": 1 }, { - "name": "roles/image_template/examples/passwords.yml", + "name": "plugins/modules/ovirt_quota.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "c135528dad4a7ec75c51b21ebee33d4a41a0ed73088e828e90f0ee34a9dbd003", + "chksum_sha256": 
"1df8bf1789fe4f656ef36515f1ea0fcf2eaa71512f3f58a8a9961b5a6b27ce30", "format": 1 }, { - "name": "roles/image_template/tasks", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_affinity_label.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "c612b78a1282cf0ada1e46779879ba79dfcbeec9f6919bfc5c688894513a505b", "format": 1 }, { - "name": "roles/image_template/tasks/empty.yml", + "name": "plugins/modules/__init__.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a7e2509d3edfdc59f4e18438a84185b831e881a73f62ab0871de3ae1be1bf493", + "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", "format": 1 }, { - "name": "roles/image_template/tasks/main.yml", + "name": "plugins/modules/ovirt_role.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "5cc8ac6f2a26ea08493e276f5900489f8233faaf9c80aa6faeed9d741393a3ad", + "chksum_sha256": "cdeb65a7e9bf21b552ba8a76696ce79d8f2a530e20f4c85b0e90fbf5bbf28de8", "format": 1 }, { - "name": "roles/image_template/tasks/glance_image.yml", + "name": "plugins/modules/ovirt_group_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "e8434e458bc32c9f1da37ae5cdac4c6535d133bc55e1033d1b2773436b2bcc3e", + "chksum_sha256": "a7be1d464ca4134fee58e729c245a0bc0c6a919b4418f23340b6e146cb17b8b1", "format": 1 }, { - "name": "roles/image_template/tasks/qcow2_image.yml", + "name": "plugins/modules/ovirt_host_pm.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "fcfc1af2b33cb594e8c073dc7d99673daf18a2eb9dac03240119aa6e25200d36", + "chksum_sha256": "51ec49b40fa65d456a38048f325df7982c7debdeb48bd137318c30240f155b29", "format": 1 }, { - "name": "roles/image_template/vars", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_external_provider.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "2253c69cd894e5d8b1d13b19e374cfbd8290ee683058ef333b6119c16828df7b", "format": 1 }, { - "name": "roles/image_template/vars/main.yml", + "name": "plugins/modules/ovirt_host_storage_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f1785c9a8c3028367d2ba75256fa7260c079392212151d682acd55cab7750fbc", + "chksum_sha256": "70dac4925d0c068c0e5e9fd9618b4db8875c75c4ab914bf35bc95ec55c7bf40a", "format": 1 }, { - "name": "roles/image_template/README.md", + "name": "plugins/modules/ovirt_snapshot_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7d4ba8182ce18c62eba9fff4737dc56a0cd779f7d41f2153bf38a08cf8898b4b", + "chksum_sha256": "1abf02b79af09391098583400cf2c1ee5b8e32311e0c44522c389ff3145de70a", "format": 1 }, { - "name": "tests", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_qos.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "02c19d26cb46441133503e4b9cd6415d79d3d40bc8ee529a1421b172ace5d9df", "format": 1 }, { - "name": "tests/sanity", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_datacenter.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "82728d18290f7e671043c3e51db9b858f03b203be5ab178b7db4d621b2b64356", "format": 1 }, { - "name": "tests/sanity/ignore-2.10.txt", + "name": "plugins/modules/ovirt_affinity_group.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0a4cb503461461d745b3edc27f0c6dd368e7d3a9a1fe8577d06bc069a3d68adf", + "chksum_sha256": 
"094bf12fcae763c9b0d8d662c4a9ac87ca3f0721224c08a5324aeb940154f8d7", "format": 1 }, { - "name": "tests/sanity/ignore-2.11.txt", + "name": "plugins/modules/ovirt_vnic_profile.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "68d5a192f3464327d2d51966c00ec15fa7a46173f5c6a88e79ba4f8314e5697d", + "chksum_sha256": "165440d6bc6f89eccd5b32b2ea80c4990f10406a241b3338f47577a449872125", "format": 1 }, { - "name": "tests/sanity/ignore-2.12.txt", + "name": "plugins/modules/ovirt_permission_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "9045518911d2c9b9ea386e5cc9c9224fba94629027e3fe92d4dedf9d7a9784a9", + "chksum_sha256": "bf2c8ff00872e2802858746b4a906799fe7c42f22ae73543abc29cd0aa9f388c", "format": 1 }, { - "name": "tests/sanity/ignore-2.13.txt", + "name": "plugins/modules/ovirt_host.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "7d84566f116944d49e6164c91d74c5e16722138e494ce40c7cdcac285d8929ea", + "chksum_sha256": "a3499b500c4cbabc29ac55eb8a83af735cfaf4f4a2a646c13cf2926c85478fc7", "format": 1 }, { - "name": "tests/sanity/ignore-2.9.txt", + "name": "plugins/modules/ovirt_disk_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0a4cb503461461d745b3edc27f0c6dd368e7d3a9a1fe8577d06bc069a3d68adf", + "chksum_sha256": "a10dff836127830f6ab99923b8f2d66f7ed97732917e632f40ef2624cf4361f8", "format": 1 }, { - "name": "tests/.gitignore", + "name": "plugins/modules/ovirt_cluster.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "b5726d3ec9335a09c124469eca039523847a6b0f08a083efaefd002b83326600", + "chksum_sha256": "5f7fe11427cdda046d1c1aeb39c60c2ca49ca41a8d0d36500e96db961ae74ce4", "format": 1 }, { - "name": ".config", - "ftype": "dir", - "chksum_type": null, - "chksum_sha256": null, + "name": "plugins/modules/ovirt_datacenter_info.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "3607cc7564e842d38e837a2fe7b371569951ad736c9574fde8c91dba77613de4", "format": 1 }, { - "name": ".config/ansible-lint.yml", + "name": "plugins/modules/ovirt_user.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0eda8abb8b631980078ef5f512d080ec8e621260f6562b31b2ee145b81af36b8", + "chksum_sha256": "3002aae2edeff14d001c9bea614ad29c92132ae4175ce678c3269b1387f1dad0", "format": 1 }, { - "name": "bindep.txt", + "name": "plugins/modules/ovirt_system_option_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "a3f3d58aa2576f1acb4dec85fb4826bb85ffc7db8eb1660d08b3f6592fa05dc5", + "chksum_sha256": "2be9acdb4afb373028c530d88ee07d4552f9ddd47e5acae5109de0bef9646622", "format": 1 }, { - "name": "build.sh", + "name": "plugins/modules/ovirt_vmpool.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "1a393ff367c0b115d57411ce71dfa0dc682226c65cfb09f06e6b12ce94c7e6d7", + "chksum_sha256": "badbc107526c539e0a22a127931d80a3621caf3ca244a6e336ea38262e08b20c", "format": 1 }, { - "name": "CHANGELOG.rst", + "name": "plugins/modules/ovirt_network.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "765ca9f9c6f16154569a4c220c5a893f57e4bdda19aacf37758158c3e8c231af", + "chksum_sha256": "5d0bea21d612e9efd055fabe3c249f5e84bb15a4fb06c56f352a81db4447d2d8", "format": 1 }, { - "name": "ovirt-ansible-collection.spec.in", + "name": "plugins/modules/ovirt_affinity_label_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "2b7b6cad5d26e952ae9019301d9a8c081f17972f14db523addf2a8dfd7c10875", + "chksum_sha256": "35651efbdbccc01374eb3c795b68800a49c6ad4fc9e1c7dfef9984877e9d894a", "format": 1 }, { - 
"name": "README-developers.md", + "name": "plugins/modules/ovirt_storage_template_info.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "81e38bf32f2a201d965eb10891068c1a56cc43e6ffd83c07f3f95442a1ab0e59", + "chksum_sha256": "76b9b9f83b8b2c5a9807b89a621efa7064153bf2aedf97b0260c0837a36758cf", "format": 1 }, { - "name": "README.md", + "name": "plugins/modules/ovirt_storage_domain.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "29adaabe4887083b2cf4efcb8e18ae195ec5ec4d5c5dc7ebc7e1798827e10a97", + "chksum_sha256": "a8ad293631ebe87bfdc3e38e6bb39827bee67b3a2c1e66a9a278c60cdcbf926b", "format": 1 }, { - "name": "README.md.in", + "name": "plugins/modules/ovirt_event.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "b00a4c9f3df47a7507e7177abc719e6a192f82341cfb6278685e3a45eae3e635", + "chksum_sha256": "fd7d258428744e7109fea1b8dfe4250e9e0d07664cf583141bdef23cacab4b1b", "format": 1 }, { - "name": "requirements.txt", + "name": "plugins/modules/ovirt_auth.py", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "948e536d6fe99ae26067a6e1842a2947aee1094fe8baaa8cf136a578ee99b0bd", + "chksum_sha256": "96b43cc0659ec70a9b89b519bbb66c82d6bcb8443e79e56dd4b35fb4124d22c9", "format": 1 }, { - "name": "ovirt-ansible-collection.spec", + "name": "plugins/modules/ovirt_scheduling_policy_info.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "0c11439ec4bfc9b9677cba673d71db1d419f84abb83fbca01d6219faf0cb5657", + "format": 1 + }, + { + "name": "plugins/modules/ovirt_group.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "7cd6538d4e18d19e816ca078ea597702b596cbea738ca4bce9f2cde20745b9fb", + "format": 1 + }, + { + "name": "licenses", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "licenses/Apache-license.txt", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "911da9e7718284248df18b810e9471a0dcfa1cc585b6a479751b6ff3440fa33e", + "chksum_sha256": "f6d5b461deb8038ce0e083c9cb7f59859caa04c9b4f72149367393e9b252cf14", "format": 1 }, { - "name": "ovirt-ansible-collection-2.4.1.tar.gz", + "name": "licenses/GPL-license.txt", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "f078cace8675eb2d9748176b44b60c6f487930fb5d3cac351c92c312cf4995ce", + "chksum_sha256": "8ceb4b9ee5adedde47b31e975c1d90c73ad27b6b165a1dcd80c7c545eb65b903", "format": 1 } ], diff --git a/ansible_collections/ovirt/ovirt/MANIFEST.json b/ansible_collections/ovirt/ovirt/MANIFEST.json index 03de4327e..f9ae7728a 100644 --- a/ansible_collections/ovirt/ovirt/MANIFEST.json +++ b/ansible_collections/ovirt/ovirt/MANIFEST.json @@ -2,7 +2,7 @@ "collection_info": { "namespace": "ovirt", "name": "ovirt", - "version": "2.4.1", + "version": "3.2.0", "authors": [ "Martin Necas <mnecas@redhat.com>" ], @@ -30,7 +30,7 @@ "name": "FILES.json", "ftype": "file", "chksum_type": "sha256", - "chksum_sha256": "0f934b63081cc9b57b257cf97ac3e56698357a8b724fe2ba426459ddeb7b1620", + "chksum_sha256": "8550f9af7df0ee9f02bd36424a8c4c43804ab428cfdce358f6ef422223ec1dc1", "format": 1 }, "format": 1 diff --git a/ansible_collections/ovirt/ovirt/README.md b/ansible_collections/ovirt/ovirt/README.md index 5cd461fd4..47f5fdd15 100644 --- a/ansible_collections/ovirt/ovirt/README.md +++ b/ansible_collections/ovirt/ovirt/README.md @@ -1,24 +1,84 @@ -[![Copr build 
status](https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/package/ovirt-ansible-collection/status_image/last_build.png)](https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/package/ovirt-ansible-collection/) +[![Build Status](https://jenkins.ovirt.org/job/oVirt_ovirt-ansible-collection_standard-check-pr/badge/icon)](https://jenkins.ovirt.org/job/oVirt_ovirt-ansible-collection_standard-check-pr/) [![Build Status](https://img.shields.io/badge/docs-latest-blue.svg)](https://docs.ansible.com/ansible/2.10/collections/ovirt/ovirt/index.html) oVirt Ansible Collection ==================================== +The `ovirt.ovirt` manages all oVirt Ansible modules. + +The pypi installation is no longer supported if you want +to install all dependencies do it manually or install the +collection from RPM and it will be done automatically. + +Note +---- +Please note that when installing this collection from Ansible Galaxy you are instructed to run following command: + +```bash +$ ansible-galaxy collection install ovirt.ovirt +``` + Requirements ------------ * Ansible core version 2.12.0 or higher * Python SDK version 4.5.0 or higher - * Python netaddr library on the ansible controller node -Upstream oVirt documentation --------------- -https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/index.html +Content of the collection +---------------- + +* modules: + * ovirt_* - Modules to manage objects in ovirt Engine + * ovirt_*_info - Modules to gather information about objects in ovirt Engine +* roles: + * cluster_upgrade + * engine_setup + * hosted_engine_setup + * image_template + * infra + * repositories + * shutdown_env + * vm_infra + * disaster_recovery +* inventory plugin + + +Example Playbook +---------------- + +```yaml +--- +- name: ovirt ansible collection + hosts: localhost + connection: local + vars_files: + # Contains encrypted `engine_password` varibale using ansible-vault + - passwords.yml + tasks: + - block: + # The use of ovirt.ovirt before ovirt_auth is to check if the collection is correctly loaded + - name: Obtain SSO token with using username/password credentials + ovirt.ovirt.ovirt_auth: + url: https://ovirt.example.com/ovirt-engine/api + username: admin@internal + ca_file: ca.pem + password: "{{ ovirt_password }}" -Downstream RHV documentation --------------- -https://cloud.redhat.com/ansible/automation-hub/redhat/rhv + # Previous task generated I(ovirt_auth) fact, which you can later use + # in different modules as follows: + - ovirt_vm: + auth: "{{ ovirt_auth }}" + state: absent + name: myvm + always: + - name: Always revoke the SSO token + ovirt_auth: + state: absent + ovirt_auth: "{{ ovirt_auth }}" + collections: + - ovirt.ovirt +``` Licenses ------- diff --git a/ansible_collections/ovirt/ovirt/README.md.in b/ansible_collections/ovirt/ovirt/README.md.in deleted file mode 100644 index 8270467e6..000000000 --- a/ansible_collections/ovirt/ovirt/README.md.in +++ /dev/null @@ -1,88 +0,0 @@ -[![Build Status](https://jenkins.ovirt.org/job/oVirt_ovirt-ansible-collection_standard-check-pr/badge/icon)](https://jenkins.ovirt.org/job/oVirt_ovirt-ansible-collection_standard-check-pr/) -[![Build Status](https://img.shields.io/badge/docs-latest-blue.svg)](https://docs.ansible.com/ansible/2.10/collections/ovirt/ovirt/index.html) - -oVirt Ansible Collection -==================================== - -The `ovirt.ovirt` manages all oVirt Ansible modules. 
- -The pypi installation is no longer supported if you want -to install all dependencies do it manually or install the -collection from RPM and it will be done automatically. - -Note ----- -Please note that when installing this collection from Ansible Galaxy you are instructed to run following command: - -```bash -$ ansible-galaxy collection install ovirt.ovirt -``` - -Requirements ------------- - - * Ansible core version 2.12.0 or higher - * Python SDK version 4.5.0 or higher - * Python netaddr library on the ansible controller node - -Content of the collection ----------------- - -* modules: - * ovirt_* - Modules to manage objects in ovirt Engine - * ovirt_*_info - Modules to gather information about objects in ovirt Engine -* roles: - * cluster_upgrade - * engine_setup - * hosted_engine_setup - * image_template - * infra - * repositories - * shutdown_env - * vm_infra - * disaster_recovery -* inventory plugin - - -Example Playbook ----------------- - -```yaml ---- -- name: ovirt ansible collection - hosts: localhost - connection: local - vars_files: - # Contains encrypted `engine_password` varibale using ansible-vault - - passwords.yml - tasks: - - block: - # The use of ovirt.ovirt before ovirt_auth is to check if the collection is correctly loaded - - name: Obtain SSO token with using username/password credentials - ovirt.ovirt.ovirt_auth: - url: https://ovirt.example.com/ovirt-engine/api - username: admin@internal - ca_file: ca.pem - password: "{{ ovirt_password }}" - - # Previous task generated I(ovirt_auth) fact, which you can later use - # in different modules as follows: - - ovirt_vm: - auth: "{{ ovirt_auth }}" - state: absent - name: myvm - - always: - - name: Always revoke the SSO token - ovirt_auth: - state: absent - ovirt_auth: "{{ ovirt_auth }}" - collections: - - ovirt.ovirt -``` - -Licenses -------- - -- Apache License 2.0 -- GNU General Public License 3.0 diff --git a/ansible_collections/ovirt/ovirt/automation/README.md b/ansible_collections/ovirt/ovirt/automation/README.md deleted file mode 100644 index 1b6a39975..000000000 --- a/ansible_collections/ovirt/ovirt/automation/README.md +++ /dev/null @@ -1,8 +0,0 @@ -Continuous Integration Scripts -============================== - -This directory contains scripts for Continuous Integration provided by -[oVirt Jenkins](http://jenkins.ovirt.org/) -system and follows the standard defined in -[Build and test standards](http://www.ovirt.org/CI/Build_and_test_standards) -wiki page. 
diff --git a/ansible_collections/ovirt/ovirt/automation/build.sh b/ansible_collections/ovirt/ovirt/automation/build.sh deleted file mode 100755 index cfd3faa48..000000000 --- a/ansible_collections/ovirt/ovirt/automation/build.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/bin/bash -xe - -ROOT_PATH="$PWD" -BUILD_ROOT_PATH="/tmp" - -# Remove any previous artifacts -rm -rf "$BUILD_ROOT_PATH/ansible_collections" -rm -f "$BUILD_ROOT_PATH"/*tar.gz - -# Create exported-artifacts dir -[[ -d exported-artifacts ]] || mkdir "$ROOT_PATH/exported-artifacts/" - -# Create builds -./build.sh build ovirt "$BUILD_ROOT_PATH" -./build.sh build rhv "$BUILD_ROOT_PATH" - -OVIRT_BUILD="$BUILD_ROOT_PATH/ansible_collections/ovirt/ovirt" -RHV_BUILD="$BUILD_ROOT_PATH/ansible_collections/redhat/rhv" - -cd "$OVIRT_BUILD" - -# Create the src.rpm -rpmbuild \ - -D "_srcrpmdir $BUILD_ROOT_PATH/output" \ - -D "_topmdir $BUILD_ROOT_PATH/rpmbuild" \ - -ts ./*.gz - -# Remove the tarball so it will not be included in galaxy build -mv ./*.gz "$ROOT_PATH/exported-artifacts/" - -# Overwrite github README with dynamic -mv ./README.md.in ./README.md - -# Create tar for galaxy -ansible-galaxy collection build - -# Create the rpms -rpmbuild \ - -D "_rpmdir $BUILD_ROOT_PATH/output" \ - -D "_topmdir $BUILD_ROOT_PATH/rpmbuild" \ - --rebuild "$BUILD_ROOT_PATH"/output/*.src.rpm - -cd "$RHV_BUILD" - -# Remove the tarball so it will not be included in automation hub build -rm -rf ./*.gz - -# Overwrite github README with dynamic -mv ./README.md.in ./README.md - -# create tar for automation hub -ansible-galaxy collection build - -# Store any relevant artifacts in exported-artifacts for the ci system to -# archive -find "$BUILD_ROOT_PATH/output" -iname \*rpm -exec mv "{}" "$ROOT_PATH/exported-artifacts/" \; - -# Export build for Ansible Galaxy -mv "$OVIRT_BUILD"/*tar.gz "$ROOT_PATH/exported-artifacts/" -# Export build for Automation Hub -mv "$RHV_BUILD"/*tar.gz "$ROOT_PATH/exported-artifacts/" - -COLLECTION_DIR="/usr/local/share/ansible/collections/ansible_collections/ovirt/ovirt" -export ANSIBLE_LIBRARY="$COLLECTION_DIR/plugins/modules" -mkdir -p $COLLECTION_DIR -cp -r "$OVIRT_BUILD"/* "$OVIRT_BUILD"/.config "$COLLECTION_DIR" -cd "$COLLECTION_DIR" - -antsibull-changelog lint -v -ansible-lint roles/* - -cd "$ROOT_PATH" - -# If PR changed something in ./plugins or ./roles it is required to have changelog -if [[ $(git diff --quiet HEAD^ ./plugins ./roles)$? -eq 1 && $(git diff --quiet HEAD^ ./changelogs)$? 
-eq 0 ]]; then - echo "ERROR: Please add changelog."; - exit 1; -fi diff --git a/ansible_collections/ovirt/ovirt/bindep.txt b/ansible_collections/ovirt/ovirt/bindep.txt index 9b9844665..eba97a6cf 100644 --- a/ansible_collections/ovirt/ovirt/bindep.txt +++ b/ansible_collections/ovirt/ovirt/bindep.txt @@ -1,6 +1,7 @@ -gcc [compile platform:centos-8 platform:rhel-8] -libcurl-devel [compile platform:centos-8 platform:rhel-8] -libxml2-devel [compile platform:centos-8 platform:rhel-8] -openssl-devel [compile platform:centos-8 platform:rhel-8] -python38-devel [compile platform:centos-8 platform:rhel-8] -qemu-img [platform:centos-8 platform:rhel-8] +gcc [compile platform:rpm] +libcurl-devel [compile platform:rpm] +libxml2-devel [compile platform:rpm] +openssl-devel [compile platform:rpm] +python39-devel [compile platform:centos-8 platform:rhel-8] +python3-devel [compile platform:centos-9 platform:rhel-9] +qemu-img [platform:rpm] diff --git a/ansible_collections/ovirt/ovirt/build.sh b/ansible_collections/ovirt/ovirt/build.sh index 8185f0e4a..501cc9a35 100755 --- a/ansible_collections/ovirt/ovirt/build.sh +++ b/ansible_collections/ovirt/ovirt/build.sh @@ -1,7 +1,9 @@ #!/bin/bash -VERSION="2.4.1" +VERSION="3.2.0" +# MILESTONE="master" MILESTONE="" +# RPM_RELEASE="0.1.$MILESTONE.$(date -u +%Y%m%d%H%M%S)" RPM_RELEASE="1" BUILD_TYPE=$2 diff --git a/ansible_collections/ovirt/ovirt/changelogs/changelog.yaml b/ansible_collections/ovirt/ovirt/changelogs/changelog.yaml index e4b508409..ea1b17bc1 100644 --- a/ansible_collections/ovirt/ovirt/changelogs/changelog.yaml +++ b/ansible_collections/ovirt/ovirt/changelogs/changelog.yaml @@ -868,30 +868,105 @@ releases: - 597-ovirt_disk-add-read_only-param.yml - 603-add-filter-docs.yml release_date: '2022-10-13' - 2.3.1: + 3.0.0: changes: bugfixes: + - Remove the 'warn:' argument (https://github.com/oVirt/ovirt-ansible-collection/pull/627). + - cluster_upgrade - Add default random uuid to engine_correlation_id (https://github.com/oVirt/ovirt-ansible-collection/pull/624). + - cluster_upgrade - Fix the engine_correlation_id location (https://github.com/oVirt/ovirt-ansible-collection/pull/637). - filters - Fix ovirtvmipsv4 with attribute and network (https://github.com/oVirt/ovirt-ansible-collection/pull/607). - filters - Fix ovirtvmipsv4 with filter to list (https://github.com/oVirt/ovirt-ansible-collection/pull/609). + - image_template - Add template_bios_type (https://github.com/oVirt/ovirt-ansible-collection/pull/620). + - info modules - Bump the deprecation version of fetch_nested and nested_attributes + (https://github.com/oVirt/ovirt-ansible-collection/pull/610). - ovirt_host - Fix kernel_params elemets type (https://github.com/oVirt/ovirt-ansible-collection/pull/608). + - ovirt_nic - Add network_filter_parameters (https://github.com/oVirt/ovirt-ansible-collection/pull/623). + minor_changes: + - Improving "ovirt_disk" and "disaster_recovery" documentation (https://github.com/oVirt/ovirt-ansible-collection/pull/562). 
fragments: + - 562-improve-documentation-ovirt_disk-and-disaster_recovery.yml - 607-filters-fix-ovirtvmipsv4-with-atribute-and-network.yml - 608-fix-ovirt_host-kernel_params-type.yml - 609-filterip4-fix-filter-list.yml - release_date: '2022-10-27' - 2.4.0: + - 610-bump-info-deprecation.yml + - 620-image_template-add-template_bios_type.yml + - 623-ovirt_nic-add-network_filter_parameters.yml + - 624-cluster_upgrade-add-default-random-uuid-to-engine_correlation_id.yml + - 627-remove-warn-arg.yml + - 637-cluster_upgrade-fix-the-engine_correlation_id-location.yml + release_date: '2022-11-28' + 3.1.0: changes: bugfixes: - - cluster_upgrade - Add default random uuid to engine_correlation_id (https://github.com/oVirt/ovirt-ansible-collection/pull/624). - - image_template - Add template_bios_type (https://github.com/oVirt/ovirt-ansible-collection/pull/620). - fragments: - - 625-cluster_upgrade-add-default-random-uuid-to-engine_correlation_id.yml - - 626-image_template-add-template_bios_type.yml - release_date: '2022-11-15' - 2.4.1: + - engine_setup - Remove provision_docker from tests (https://github.com/oVirt/ovirt-ansible-collection/pull/677). + - he-setup - Log the output sent to the serial console of the HostedEngineLocal + VM to a file on the host, to allow diagnosing failures in that stage (https://github.com/oVirt/ovirt-ansible-collection/pull/664). + - he-setup - Run virt-install with options more suitable for debugging (https://github.com/oVirt/ovirt-ansible-collection/pull/664). + - he-setup - recently `virsh net-destroy default` doesn't delete the `virbr0`, + so we need to delete it expicitly (https://github.com/oVirt/ovirt-ansible-collection/pull/661). + - info modules - Use dynamic collection name instead of ovirt.ovirt for deprecation + warning (https://github.com/oVirt/ovirt-ansible-collection/pull/653). + - module_utils - replace `getargspec` with `getfullargspec` to support newer + python 3.y versions (https://github.com/oVirt/ovirt-ansible-collection/pull/663). + - ovirt_host - Wait for host to be in result state during upgrade (https://github.com/oVirt/ovirt-ansible-collection/pull/621) + minor_changes: + - ovirt_host - Add refreshed state (https://github.com/oVirt/ovirt-ansible-collection/pull/673). + - ovirt_network - Add default_route usage to the ovirt_network module (https://github.com/oVirt/ovirt-ansible-collection/pull/647). + fragments: + - 621-wait-for-host-to-be-in-result-state.yml + - 647-add-default_route-usage-to-the-ovirt_network-module.yml + - 653-info-modules-use-dynamic-name.yml + - 661-delete-also-the-bridge.yml + - 663-replace-getargspec-with-getfullargspec.yml + - 664-debug-virt-install.yml + - 664-log-local-vm-console.yml + - 673-ovirt_host-add-refresh.yml + - 677-remove-provision_docker-dependency.yml + release_date: '2023-02-14' + 3.1.1: changes: bugfixes: - - cluster_upgrade - Fix the engine_correlation_id location (https://github.com/oVirt/ovirt-ansible-collection/pull/637). + - hosted_engine_setup - Vdsm now uses -n flag for all qemu-img convert calls + (https://github.com/oVirt/ovirt-ansible-collection/pull/682). + - ovirt_cluster_info - Fix example patter (https://github.com/oVirt/ovirt-ansible-collection/pull/684). + - ovirt_host - Fix refreshed state action (https://github.com/oVirt/ovirt-ansible-collection/pull/687). 
fragments: - - 637-cluster_upgrade-fix-the-engine_correlation_id-location.yml - release_date: '2022-11-28' + - 682-hosted_engine_setup-fix-disk-copying-command.yml + - 684-ovirt_cluster_info-fix-example-patter.yml + - 687-ovirt_host-fix-refreshed-state-action.yml + release_date: '2023-03-03' + 3.1.3: + changes: + bugfixes: + - HE - add back dependency on python3-jmespath (https://github.com/oVirt/ovirt-ansible-collection/pull/701) + - HE - drop remaining filters using netaddr (https://github.com/oVirt/ovirt-ansible-collection/pull/702) + - HE - drop usage of ipaddr filters and remove dependency on python-netaddr + (https://github.com/oVirt/ovirt-ansible-collection/pull/696) + - HE - fix ipv4 and ipv6 check after dropping netaddr (https://github.com/oVirt/ovirt-ansible-collection/pull/704) + - hosted_engine_setup - Update README (https://github.com/oVirt/ovirt-ansible-collection/pull/706) + - ovirt_disk - Fix issue in detaching the direct LUN (https://github.com/oVirt/ovirt-ansible-collection/pull/700) + - ovirt_quota - Convert storage size to integer (https://github.com/oVirt/ovirt-ansible-collection/pull/712) + fragments: + - 696-drop-netaddr.yml + - 700-fix-directlun.yml + - 701-fix-jmespath.yml + - 702-drop-netaddr2.yml + - 704-drop-netaddr3.yml + - 706-he-update-README.yml + - 712-ovirt_quota-fix-storage-size-typecast.yml + release_date: '2023-08-23' + 3.1.4: + changes: + bugfixes: + - ovirt_role - Fix administrative option when set to False (https://github.com/oVirt/ovirt-ansible-collection/pull/723). + minor_changes: + - ovirt_vm - Add tpm_enabled (https://github.com/oVirt/ovirt-ansible-collection/pull/722). + - storage_error_resume_behaviour - Support VM storage error resume behaviour + "auto_resume", "kill", "leave_paused". (https://github.com/oVirt/ovirt-ansible-collection/pull/721) + - vm_infra - Support boot disk renaming and resizing. (https://github.com/oVirt/ovirt-ansible-collection/pull/705) + fragments: + - 705-support-boot-disk-resizing-renaming.yml + - 721-enhancement-vm-storage-error-resume-behaviour.yml + - 722-add-tpm-enabled.yml + - 723-ovirt_role-fix-administrative-condition.yml + release_date: '2023-10-02' diff --git a/ansible_collections/ovirt/ovirt/changelogs/fragments/.placeholder b/ansible_collections/ovirt/ovirt/changelogs/fragments/.keep index 1a791779a..1a791779a 100644 --- a/ansible_collections/ovirt/ovirt/changelogs/fragments/.placeholder +++ b/ansible_collections/ovirt/ovirt/changelogs/fragments/.keep diff --git a/ansible_collections/ovirt/ovirt/changelogs/fragments/705-support-boot-disk-resizing-renaming.yml b/ansible_collections/ovirt/ovirt/changelogs/fragments/705-support-boot-disk-resizing-renaming.yml new file mode 100644 index 000000000..777a80432 --- /dev/null +++ b/ansible_collections/ovirt/ovirt/changelogs/fragments/705-support-boot-disk-resizing-renaming.yml @@ -0,0 +1,3 @@ +--- +minor_changes: + - vm_infra - Support boot disk renaming and resizing. 
(https://github.com/oVirt/ovirt-ansible-collection/pull/705) diff --git a/ansible_collections/ovirt/ovirt/changelogs/fragments/721-enhancement-vm-storage-error-resume-behaviour.yml b/ansible_collections/ovirt/ovirt/changelogs/fragments/721-enhancement-vm-storage-error-resume-behaviour.yml new file mode 100644 index 000000000..665614c51 --- /dev/null +++ b/ansible_collections/ovirt/ovirt/changelogs/fragments/721-enhancement-vm-storage-error-resume-behaviour.yml @@ -0,0 +1,3 @@ +--- +minor_changes: + - storage_error_resume_behaviour - Support VM storage error resume behaviour "auto_resume", "kill", "leave_paused". (https://github.com/oVirt/ovirt-ansible-collection/pull/721) diff --git a/ansible_collections/ovirt/ovirt/changelogs/fragments/722-add-tpm-enabled.yml b/ansible_collections/ovirt/ovirt/changelogs/fragments/722-add-tpm-enabled.yml new file mode 100644 index 000000000..d18f63202 --- /dev/null +++ b/ansible_collections/ovirt/ovirt/changelogs/fragments/722-add-tpm-enabled.yml @@ -0,0 +1,3 @@ +--- +minor_changes: + - ovirt_vm - Add tpm_enabled (https://github.com/oVirt/ovirt-ansible-collection/pull/722). diff --git a/ansible_collections/ovirt/ovirt/changelogs/fragments/723-ovirt_role-fix-administrative-condition.yml b/ansible_collections/ovirt/ovirt/changelogs/fragments/723-ovirt_role-fix-administrative-condition.yml new file mode 100644 index 000000000..926e40ce5 --- /dev/null +++ b/ansible_collections/ovirt/ovirt/changelogs/fragments/723-ovirt_role-fix-administrative-condition.yml @@ -0,0 +1,3 @@ +--- +bugfixes: + - ovirt_role - Fix administrative option when set to False (https://github.com/oVirt/ovirt-ansible-collection/pull/723). diff --git a/ansible_collections/ovirt/ovirt/ovirt-ansible-collection-2.4.1.tar.gz b/ansible_collections/ovirt/ovirt/ovirt-ansible-collection-2.4.1.tar.gz Binary files differdeleted file mode 100644 index 3f84beee1..000000000 --- a/ansible_collections/ovirt/ovirt/ovirt-ansible-collection-2.4.1.tar.gz +++ /dev/null diff --git a/ansible_collections/ovirt/ovirt/ovirt-ansible-collection.spec b/ansible_collections/ovirt/ovirt/ovirt-ansible-collection.spec index 94d20c5b4..cbd51ba7b 100644 --- a/ansible_collections/ovirt/ovirt/ovirt-ansible-collection.spec +++ b/ansible_collections/ovirt/ovirt/ovirt-ansible-collection.spec @@ -4,9 +4,9 @@ Name: ovirt-ansible-collection Summary: Ansible collection to manage all ovirt modules and inventory -Version: 2.4.1 +Version: 3.2.0 Release: 1%{?release_suffix}%{?dist} -Source0: http://resources.ovirt.org/pub/src/ovirt-ansible-collection/ovirt-ansible-collection-2.4.1.tar.gz +Source0: http://resources.ovirt.org/pub/src/ovirt-ansible-collection/ovirt-ansible-collection-3.2.0.tar.gz License: ASL 2.0 and GPLv3+ BuildArch: noarch Url: http://www.ovirt.org @@ -17,23 +17,25 @@ BuildRequires: ansible-test BuildRequires: glibc-langpack-en %endif -Requires: ansible-core >= 2.12.0 -Requires: ovirt-imageio-client -Requires: python3-ovirt-engine-sdk4 >= 4.5.0 -Requires: python3-netaddr -Requires: python3-jmespath -Requires: python3-passlib +Requires: ansible-core >= 2.13.0 Requires: ansible-collection-ansible-netcommon Requires: ansible-collection-ansible-posix Requires: ansible-collection-ansible-utils Requires: qemu-img +Requires: python3-jmespath + +%if 0%{?rhel} >= 9 +Requires: python3.11-ovirt-imageio-client +Requires: python3.11-ovirt-engine-sdk4 >= 4.5.0 +Requires: python3.11-jmespath +Requires: python3.11-passlib +%endif %if 0%{?rhel} < 9 -Requires: python38-ovirt-imageio-client -Requires: python38-ovirt-engine-sdk4 >= 4.5.0 
-Requires: python38-netaddr -Requires: python38-jmespath -Requires: python38-passlib +Requires: python3.11-ovirt-imageio-client +Requires: python3.11-ovirt-engine-sdk4 >= 4.5.0 +Requires: python3.11-jmespath +Requires: python3.11-passlib %endif Obsoletes: ansible < 2.10.0 @@ -87,14 +89,53 @@ sh build.sh install %{collectionname} %license licenses %changelog -* Tue Nov 15 2022 Martin Necas <mnecas@redhat.com> - 2.4.0-1 -- cluster_upgrade - Add default random uuid to engine_correlation_id -- image_template - Add template_bios_type - -* Thu Oct 27 2022 Martin Necas <mnecas@redhat.com> - 2.3.1-1 -- filters - Fix ovirtvmipsv4 with attribute and network -- filters - Fix ovirtvmipsv4 with filter to list -- ovirt_host - Fix kernel_params elemets type +* Mon Oct 2 2023 Martin Necas <mnecas@redhat.com> - 3.2.0-1 +- ovirt_vm - Add tpm_enabled +- storage_error_resume_behaviour - Support VM storage error resume behaviour "auto_resume", "kill", "leave_paused" +- vm_infra - Support boot disk renaming and resizing +- ovirt_role - Fix administrative option when set to False + +* Wed Aug 23 2023 Martin Necas <mnecas@redhat.com> - 3.1.3-1 +- HE - add back dependency on python3-jmespath +- HE - drop remaining filters using netaddr +- HE - drop usage of ipaddr filters and remove dependency on python-netaddr +- HE - fix ipv4 and ipv6 check after dropping netaddr +- hosted_engine_setup - Update README +- ovirt_disk - Fix issue in detaching the direct LUN +- ovirt_quota - Convert storage size to integer + +* Thu Mar 23 2023 Martin Necas <mnecas@redhat.com> - 3.1.2-1 +- Add Python 3.11 subpackage to be usable in ansible-core 2.14 for el8 + +* Fri Mar 3 2023 Martin Necas <mnecas@redhat.com> - 3.1.1-1 +- hosted_engine_setup - Vdsm now uses -n flag for all qemu-img convert calls. +- ovirt_cluster_info - Fix example patter. +- ovirt_host - Fix refreshed state action. +- Add Python 3.11 subpackage to be usable in ansible-core 2.14 + +* Tue Feb 14 2023 Martin Necas <mnecas@redhat.com> - 3.1.0-1 +- ovirt_host - Add refreshed state. +- ovirt_host - Wait for host to be in result state during upgrade. +- ovirt_network - Add default_route usage to the ovirt_network module. +- engine_setup - Remove provision_docker from tests. +- he-setup - Log the output sent to the serial console of the HostedEngineLocal VM to a file on the host, to allow diagnosing failures in that stage. +- he-setup - Run virt-install with options more suitable for debugging. +- he-setup - recently `virsh net-destroy default` doesn't delete the `virbr0`, so we need to delete it expicitly. +- info modules - Use dynamic collection name instead of ovirt.ovirt for deprecation warning. +- module_utils - replace `getargspec` with `getfullargspec` to support newer python 3.y versions. 
+ +* Mon Nov 28 2022 Martin Perina <mperina@redhat.com> - 3.0.0-1 +- filters: Fix ovirtvmipsv4 with attribute and network +- ovirt_host: Fix kernel_params elemets +- ovirtvmipsv4: Fix filter list +- cluster_upgrade: Add default random uuid to engine_correlation_id +- image_template: Add template_bios_type +- Support ansible 2.14 +- ovirt_nic: Add network_filter_parameters +- Support ansible-core-2.13 on EL8 +- Use Python 3.9 on CS8 and CS9 builds +- Improving "ovirt_disk" (mostly documentation) +- cluster_upgrade: Fix the engine_correlation_id location * Thu Oct 13 2022 Martin Necas <mnecas@redhat.com> - 2.3.0-1 - ovirt_host - Honor activate and reboot_after_installation when they are set to false with reinstalled host state diff --git a/ansible_collections/ovirt/ovirt/ovirt-ansible-collection.spec.in b/ansible_collections/ovirt/ovirt/ovirt-ansible-collection.spec.in index 0d8d5849e..12bd3d58f 100644 --- a/ansible_collections/ovirt/ovirt/ovirt-ansible-collection.spec.in +++ b/ansible_collections/ovirt/ovirt/ovirt-ansible-collection.spec.in @@ -17,23 +17,25 @@ BuildRequires: ansible-test BuildRequires: glibc-langpack-en %endif -Requires: ansible-core >= 2.12.0 -Requires: ovirt-imageio-client -Requires: python3-ovirt-engine-sdk4 >= 4.5.0 -Requires: python3-netaddr -Requires: python3-jmespath -Requires: python3-passlib +Requires: ansible-core >= 2.13.0 Requires: ansible-collection-ansible-netcommon Requires: ansible-collection-ansible-posix Requires: ansible-collection-ansible-utils Requires: qemu-img +Requires: python3-jmespath + +%if 0%{?rhel} >= 9 +Requires: python3.11-ovirt-imageio-client +Requires: python3.11-ovirt-engine-sdk4 >= 4.5.0 +Requires: python3.11-jmespath +Requires: python3.11-passlib +%endif %if 0%{?rhel} < 9 -Requires: python38-ovirt-imageio-client -Requires: python38-ovirt-engine-sdk4 >= 4.5.0 -Requires: python38-netaddr -Requires: python38-jmespath -Requires: python38-passlib +Requires: python3.11-ovirt-imageio-client +Requires: python3.11-ovirt-engine-sdk4 >= 4.5.0 +Requires: python3.11-jmespath +Requires: python3.11-passlib %endif Obsoletes: ansible < 2.10.0 @@ -87,14 +89,53 @@ sh build.sh install %{collectionname} %license licenses %changelog -* Tue Nov 15 2022 Martin Necas <mnecas@redhat.com> - 2.4.0-1 -- cluster_upgrade - Add default random uuid to engine_correlation_id -- image_template - Add template_bios_type - -* Thu Oct 27 2022 Martin Necas <mnecas@redhat.com> - 2.3.1-1 -- filters - Fix ovirtvmipsv4 with attribute and network -- filters - Fix ovirtvmipsv4 with filter to list -- ovirt_host - Fix kernel_params elemets type +* Mon Oct 2 2023 Martin Necas <mnecas@redhat.com> - 3.2.0-1 +- ovirt_vm - Add tpm_enabled +- storage_error_resume_behaviour - Support VM storage error resume behaviour "auto_resume", "kill", "leave_paused" +- vm_infra - Support boot disk renaming and resizing +- ovirt_role - Fix administrative option when set to False + +* Wed Aug 23 2023 Martin Necas <mnecas@redhat.com> - 3.1.3-1 +- HE - add back dependency on python3-jmespath +- HE - drop remaining filters using netaddr +- HE - drop usage of ipaddr filters and remove dependency on python-netaddr +- HE - fix ipv4 and ipv6 check after dropping netaddr +- hosted_engine_setup - Update README +- ovirt_disk - Fix issue in detaching the direct LUN +- ovirt_quota - Convert storage size to integer + +* Thu Mar 23 2023 Martin Necas <mnecas@redhat.com> - 3.1.2-1 +- Add Python 3.11 subpackage to be usable in ansible-core 2.14 for el8 + +* Fri Mar 3 2023 Martin Necas <mnecas@redhat.com> - 3.1.1-1 +- 
hosted_engine_setup - Vdsm now uses -n flag for all qemu-img convert calls. +- ovirt_cluster_info - Fix example patter. +- ovirt_host - Fix refreshed state action. +- Add Python 3.11 subpackage to be usable in ansible-core 2.14 + +* Tue Feb 14 2023 Martin Necas <mnecas@redhat.com> - 3.1.0-1 +- ovirt_host - Add refreshed state. +- ovirt_host - Wait for host to be in result state during upgrade. +- ovirt_network - Add default_route usage to the ovirt_network module. +- engine_setup - Remove provision_docker from tests. +- he-setup - Log the output sent to the serial console of the HostedEngineLocal VM to a file on the host, to allow diagnosing failures in that stage. +- he-setup - Run virt-install with options more suitable for debugging. +- he-setup - recently `virsh net-destroy default` doesn't delete the `virbr0`, so we need to delete it expicitly. +- info modules - Use dynamic collection name instead of ovirt.ovirt for deprecation warning. +- module_utils - replace `getargspec` with `getfullargspec` to support newer python 3.y versions. + +* Mon Nov 28 2022 Martin Perina <mperina@redhat.com> - 3.0.0-1 +- filters: Fix ovirtvmipsv4 with attribute and network +- ovirt_host: Fix kernel_params elemets +- ovirtvmipsv4: Fix filter list +- cluster_upgrade: Add default random uuid to engine_correlation_id +- image_template: Add template_bios_type +- Support ansible 2.14 +- ovirt_nic: Add network_filter_parameters +- Support ansible-core-2.13 on EL8 +- Use Python 3.9 on CS8 and CS9 builds +- Improving "ovirt_disk" (mostly documentation) +- cluster_upgrade: Fix the engine_correlation_id location * Thu Oct 13 2022 Martin Necas <mnecas@redhat.com> - 2.3.0-1 - ovirt_host - Honor activate and reboot_after_installation when they are set to false with reinstalled host state diff --git a/ansible_collections/ovirt/ovirt/plugins/module_utils/ovirt.py b/ansible_collections/ovirt/ovirt/plugins/module_utils/ovirt.py index 6699d0532..68a8f52e0 100644 --- a/ansible_collections/ovirt/ovirt/plugins/module_utils/ovirt.py +++ b/ansible_collections/ovirt/ovirt/plugins/module_utils/ovirt.py @@ -280,7 +280,7 @@ def search_by_attributes(service, list_params=None, **kwargs): """ list_params = list_params or {} # Check if 'list' method support search(look for search parameter): - if 'search' in inspect.getargspec(service.list)[0]: + if 'search' in inspect.getfullargspec(service.list)[0]: res = service.list( # There must be double quotes around name, because some oVirt resources it's possible to create then with space in name. search=' and '.join('{0}="{1}"'.format(k, v) for k, v in kwargs.items()), @@ -308,7 +308,7 @@ def search_by_name(service, name, **kwargs): :return: Entity object returned by Python SDK """ # Check if 'list' method support search(look for search parameter): - if 'search' in inspect.getargspec(service.list)[0]: + if 'search' in inspect.getfullargspec(service.list)[0]: res = service.list( # There must be double quotes around name, because some oVirt resources it's possible to create then with space in name. 
search='name="{name}"'.format(name=name) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_affinity_label_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_affinity_label_info.py index 45b242143..1bbcbc360 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_affinity_label_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_affinity_label_info.py @@ -133,7 +133,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_cluster_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_cluster_info.py index c949aa0cf..8e14e6d75 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_cluster_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_cluster_info.py @@ -63,8 +63,7 @@ EXAMPLES = ''' # Gather information about all clusters which names start with C<production>: - ovirt.ovirt.ovirt_cluster_info: - pattern: - name: 'production*' + pattern: "name=production*" register: result - ansible.builtin.debug: msg: "{{ result.ovirt_clusters }}" @@ -101,7 +100,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_datacenter_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_datacenter_info.py index bfc17e305..ace54d0bb 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_datacenter_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_datacenter_info.py @@ -85,7 +85,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_disk.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_disk.py index 5d83d21d0..821139467 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_disk.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_disk.py @@ -21,7 +21,7 @@ description: options: id: description: - - "ID of the disk to manage. Either C(id) or C(name) is required." + - "ID of the disk to manage. Either C(id) or C(name)/C(alias) is required." type: str name: description: @@ -65,7 +65,7 @@ options: if you want to upload the disk even if the disk with C(id) or C(name) exists, then please use C(force) I(true). If you will use C(force) I(false), which is default, then the disk image won't be uploaded." - - "Note that to upload iso the C(format) should be 'raw'" + - "Note that in order to upload iso the C(format) should be 'raw'." type: str aliases: ['image_path'] size: @@ -211,7 +211,7 @@ options: activate: description: - I(True) if the disk should be activated. - - When creating disk of virtual machine it is set to I(True). + - When creating disk of Virtual Machine it is set to I(True). 
type: bool backup: description: @@ -229,13 +229,13 @@ options: version_added: 1.2.0 propagate_errors: description: - - Indicates if disk errors should cause virtual machine to be paused or if disk errors should be + - Indicates if disk errors should cause Virtual Machine to be paused or if disk errors should be - propagated to the the guest operating system instead. type: bool version_added: 1.2.0 pass_discard: description: - - Defines whether the virtual machine passes discard commands to the storage. + - Defines whether the Virtual Machine passes discard commands to the storage. type: bool version_added: 1.2.0 uses_scsi_reservation: @@ -373,7 +373,6 @@ disk_attachment: import json import os -import ssl import subprocess import time import traceback @@ -423,7 +422,7 @@ def create_transfer_connection(module, transfer, context, connect_timeout=10, re try: connection.connect() except Exception as e: - # Typically ConnectionRefusedError or socket.gaierror. + # Typically, "ConnectionRefusedError" or "socket.gaierror". module.warn("Cannot connect to %s, trying %s: %s" % (transfer.transfer_url, transfer.proxy_url, e)) url = urlparse(transfer.proxy_url) @@ -505,6 +504,7 @@ def cancel_transfer(connection, transfer_id): def finalize_transfer(connection, module, transfer_id): + transfer = None transfer_service = (connection.system_service() .image_transfers_service() .image_transfer_service(transfer_id)) @@ -552,8 +552,6 @@ def finalize_transfer(connection, module, transfer_id): def download_disk_image(connection, module): - transfers_service = connection.system_service().image_transfers_service() - hosts_service = connection.system_service().hosts_service() transfer = start_transfer(connection, module, otypes.ImageTransferDirection.DOWNLOAD) try: extra_args = {} @@ -579,8 +577,6 @@ def download_disk_image(connection, module): def upload_disk_image(connection, module): - transfers_service = connection.system_service().image_transfers_service() - hosts_service = connection.system_service().hosts_service() transfer = start_transfer(connection, module, otypes.ImageTransferDirection.UPLOAD) try: extra_args = {} @@ -715,9 +711,9 @@ class DisksModule(BaseModule): action='copy', entity=disk, action_condition=( - lambda disk: new_disk_storage.id not in [sd.id for sd in disk.storage_domains] + lambda d: new_disk_storage.id not in [sd.id for sd in d.storage_domains] ), - wait_condition=lambda disk: disk.status == otypes.DiskStatus.OK, + wait_condition=lambda d: d.status == otypes.DiskStatus.OK, storage_domain=otypes.StorageDomain( id=new_disk_storage.id, ), @@ -735,7 +731,7 @@ class DisksModule(BaseModule): equal(self.param('propagate_errors'), entity.propagate_errors) and equal(otypes.ScsiGenericIO(self.param('scsi_passthrough')) if self.param('scsi_passthrough') else None, entity.sgio) and equal(self.param('wipe_after_delete'), entity.wipe_after_delete) and - equal(self.param('profile'), follow_link(self._connection, entity.disk_profile).name) + equal(self.param('profile'), getattr(follow_link(self._connection, entity.disk_profile), 'name', None)) ) @@ -789,7 +785,7 @@ def get_vm_service(connection, module): if vm_id is None: module.fail_json( - msg="VM don't exists, please create it first." + msg="VM doesn't exist, please create it first." ) return vms_service.vm_service(vm_id) @@ -845,8 +841,8 @@ def main(): lun = module.params.get('logical_unit') host = module.params['host'] - # Fail when host is specified with the LUN id. 
Lun id is needed to identify - # an existing disk if already available inthe environment. + # Fail when host is specified with the LUN id. LUN id is needed to identify + # an existing disk if already available in the environment. if (host and lun is None) or (host and lun.get("id") is None): module.fail_json( msg="Can not use parameter host ({0!s}) without " @@ -857,7 +853,6 @@ def main(): check_params(module) try: - disk = None state = module.params['state'] auth = module.params.get('auth') connection = create_connection(auth) @@ -875,14 +870,14 @@ def main(): else: disk = disks_module.search_entity(search_params=searchable_attributes(module)) if vm_service and disk and state != 'attached': - # If the VM don't exist in VMs disks, but still it's found it means it was found + # If the VM doesn't exist in VMs disks, but still it's found it means it was found # for template with same name as VM, so we should force create the VM disk. force_create = disk.id not in [a.disk.id for a in vm_service.disk_attachments_service().list() if a.disk] ret = None # First take care of creating the VM, if needed: if state in ('present', 'detached', 'attached'): - # Always activate disk when its being created + # Always activate disk when it is created. if vm_service is not None and disk is None: module.params['activate'] = module.params['activate'] is None or module.params['activate'] ret = disks_module.create( @@ -895,17 +890,17 @@ def main(): ) is_new_disk = ret['changed'] ret['changed'] = ret['changed'] or disks_module.update_storage_domains(ret['id']) - # We need to pass ID to the module, so in case we want detach/attach disk + # We need to pass ID to the module, so in case we want to detach/attach disk # we have this ID specified to attach/detach method: module.params['id'] = ret['id'] - # Upload disk image in case it's new disk or force parameter is passed: + # Upload disk image in case it is a new disk or force parameter is passed: if module.params['upload_image_path'] and (is_new_disk or module.params['force']): if module.params['format'] == 'cow' and module.params['content_type'] == 'iso': module.warn("To upload an ISO image 'format' parameter needs to be set to 'raw'.") uploaded = upload_disk_image(connection, module) ret['changed'] = ret['changed'] or uploaded - # Download disk image in case it's file don't exist or force parameter is passed: + # Download disk image in case the file doesn't exist or force parameter is passed: if ( module.params['download_image_path'] and (not os.path.isfile(module.params['download_image_path']) or module.params['force']) ): @@ -986,7 +981,7 @@ def main(): module.params.get('bootable'), module.params.get('uses_scsi_reservation'), module.params.get('pass_discard'), ]): - module.warn("Cannot use 'interface', 'activate', 'bootable', 'uses_scsi_reservation' or 'pass_discard' without specifing VM.") + module.warn("Cannot use 'interface', 'activate', 'bootable', 'uses_scsi_reservation' or 'pass_discard' without specifying VM.") # When the host parameter is specified and the disk is not being # removed, refresh the information about the LUN. 
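The ovirt_disk documentation changes above clarify that a disk can be addressed by C(id) or C(name)/C(alias) and that uploading an ISO requires the C(raw) format. As an illustrative sketch of that documented behaviour only (the disk name, image path and storage domain below are placeholders, not values taken from this patch), a minimal upload task could look like:

```yaml
# Illustrative sketch only: names and paths are placeholders.
- name: Upload an ISO image to a data domain
  ovirt.ovirt.ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: my_install_iso           # the disk could also be matched by id
    content_type: iso
    format: raw                    # per the updated docs, ISO uploads need 'raw'
    upload_image_path: /tmp/installer.iso
    storage_domain: data
    size: 2GiB
    wait: true
```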
diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_disk_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_disk_info.py index b3237fcd0..f9181bdf7 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_disk_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_disk_info.py @@ -98,7 +98,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_event_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_event_info.py index 7427b19c1..64f657ea4 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_event_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_event_info.py @@ -131,7 +131,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_external_provider_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_external_provider_info.py index d47639196..3ab35bf0a 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_external_provider_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_external_provider_info.py @@ -138,7 +138,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_group_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_group_info.py index 2385a5a3a..37c296c5f 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_group_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_group_info.py @@ -97,7 +97,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host.py index dbf9e7d20..14a0499d9 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host.py @@ -31,9 +31,11 @@ options: description: - "State which should a host to be in after successful completion." - "I(iscsilogin) and I(iscsidiscover) are supported since version 2.4." 
+ - "I(refreshed) is supported since 3.1.0" choices: [ 'present', 'absent', 'maintenance', 'upgraded', 'started', - 'restarted', 'stopped', 'reinstalled', 'iscsidiscover', 'iscsilogin' + 'restarted', 'stopped', 'reinstalled', 'iscsidiscover', 'iscsilogin', + 'refreshed' ] default: present type: str @@ -493,7 +495,8 @@ def main(): state=dict( choices=[ 'present', 'absent', 'maintenance', 'upgraded', 'started', - 'restarted', 'stopped', 'reinstalled', 'iscsidiscover', 'iscsilogin' + 'restarted', 'stopped', 'reinstalled', 'iscsidiscover', 'iscsilogin', + 'refreshed' ], default='present', ), @@ -610,10 +613,12 @@ def main(): ) # Set to False, because upgrade_check isn't 'changing' action: hosts_module._changed = False + + updates_available = host.update_available ret = hosts_module.action( action='upgrade', action_condition=lambda h: h.update_available, - wait_condition=lambda h: not h.update_available or h.status == result_state and ( + wait_condition=lambda h: h.status == result_state and (( len([ event for event in events_service.list( @@ -625,7 +630,7 @@ def main(): search='type=842 or type=841 or type=888', ) if host.name in event.description ]) > 0 - ), + ) or not updates_available), post_action=lambda h: time.sleep(module.params['poll_interval']), fail_condition=lambda h: hosts_module.failed_state_after_reinstall(h) or ( len([ @@ -709,6 +714,10 @@ def main(): fail_condition=hosts_module.failed_state_after_reinstall, fence_type='restart', ) + elif state == 'refreshed': + ret = hosts_module.action( + action='refresh', + ) elif state == 'reinstalled': # Deactivate host if not in maintanence: hosts_module.action( diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host_info.py index 230a923bc..19b73ebaa 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host_info.py @@ -116,7 +116,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host_storage_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host_storage_info.py index b1500e599..2f7c0072b 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host_storage_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host_storage_info.py @@ -149,7 +149,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_network.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_network.py index 889914ae1..7631cdf7e 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_network.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_network.py @@ -109,6 +109,10 @@ options: description: - I(true) if the network should marked as gluster network. type: bool + default_route: + description: + - I(true) if the default gateway and the DNS resolver configuration of the host will be taken from this network. 
+ type: bool label: description: - "Name of the label to assign to the network." @@ -262,13 +266,14 @@ class ClusterNetworksModule(BaseModule): display=self._cluster_network.get('display'), usages=list(set([ otypes.NetworkUsage(usage) - for usage in ['display', 'gluster', 'migration'] + for usage in ['display', 'gluster', 'migration', 'default_route'] if self._cluster_network.get(usage, False) ] + self._old_usages)) if ( self._cluster_network.get('display') is not None or self._cluster_network.get('gluster') is not None or - self._cluster_network.get('migration') is not None + self._cluster_network.get('migration') is not None or + self._cluster_network.get('default_route') is not None ) else None, ) @@ -285,7 +290,7 @@ class ClusterNetworksModule(BaseModule): ] for x in [ usage - for usage in ['display', 'gluster', 'migration'] + for usage in ['display', 'gluster', 'migration', 'default_route'] if self._cluster_network.get(usage, False) ] ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_network_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_network_info.py index b65e03643..a4e772995 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_network_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_network_info.py @@ -101,7 +101,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_nic.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_nic.py index dc1c1801f..5ee0fb4f7 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_nic.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_nic.py @@ -71,6 +71,19 @@ options: description: - Defines if the NIC is linked to the virtual machine. type: bool + network_filter_parameters: + description: + - "The list of network filter parameters." + elements: dict + type: list + version_added: 3.0.0 + suboptions: + name: + description: + - "Name of the network filter parameter." + value: + description: + - "Value of the network filter parameter." 
extends_documentation_fragment: ovirt.ovirt.ovirt ''' @@ -122,6 +135,19 @@ EXAMPLES = ''' id: 00000000-0000-0000-0000-000000000000 name: "new_nic_name" vm: myvm + +# Add NIC network filter parameters +- ovirt.ovirt.ovirt_nic: + state: present + name: mynic + vm: myvm + network_filter_parameters: + - name: GATEWAY_MAC + value: 01:02:03:ab:cd:ef + - name: GATEWAY_MAC + value: 01:02:03:ab:cd:eg + - name: GATEWAY_MAC + value: 01:02:03:ab:cd:eh ''' RETURN = ''' @@ -170,6 +196,24 @@ class EntityNicsModule(BaseModule): def vnic_id(self, vnic_id): self._vnic_id = vnic_id + def post_create(self, entity): + self._set_network_filter_parameters(entity.id) + + def post_update(self, entity): + self._set_network_filter_parameters(entity.id) + + def _set_network_filter_parameters(self, entity_id): + if self._module.params['network_filter_parameters'] is not None: + nfps_service = self._service.service(entity_id).network_filter_parameters_service() + nfp_list = nfps_service.list() + # Remove all previous network filter parameters + for nfp in nfp_list: + nfps_service.service(nfp.id).remove() + + # Create all specified netwokr filters by user + for nfp in self._network_filter_parameters(): + nfps_service.add(nfp) + def build_entity(self): return otypes.Nic( id=self._module.params.get('id'), @@ -193,7 +237,8 @@ class EntityNicsModule(BaseModule): equal(self._module.params.get('linked'), entity.linked) and equal(self._module.params.get('name'), str(entity.name)) and equal(self._module.params.get('profile'), get_link_name(self._connection, entity.vnic_profile)) and - equal(self._module.params.get('mac_address'), entity.mac.address) + equal(self._module.params.get('mac_address'), entity.mac.address) and + equal(self._network_filter_parameters(), self._connection.follow_link(entity.network_filter_parameters)) ) elif self._module.params.get('template'): return ( @@ -203,6 +248,19 @@ class EntityNicsModule(BaseModule): equal(self._module.params.get('profile'), get_link_name(self._connection, entity.vnic_profile)) ) + def _network_filter_parameters(self): + if self._module.params['network_filter_parameters'] is None: + return [] + networkFilterParameters = list() + for networkFilterParameter in self._module.params['network_filter_parameters']: + networkFilterParameters.append( + otypes.NetworkFilterParameter( + name=networkFilterParameter.get("name"), + value=networkFilterParameter.get("value") + ) + ) + return networkFilterParameters + def get_vnics(networks_service, network, connection): resp = [] @@ -226,6 +284,7 @@ def main(): network=dict(type='str'), mac_address=dict(type='str'), linked=dict(type='bool'), + network_filter_parameters=dict(type='list', default=None, elements='dict'), ) module = AnsibleModule( argument_spec=argument_spec, diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_nic_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_nic_info.py index c1daede60..70bded2af 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_nic_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_nic_info.py @@ -115,7 +115,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_permission_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_permission_info.py index 
e8fabc669..c83d4cdd2 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_permission_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_permission_info.py @@ -146,7 +146,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_quota.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_quota.py index 92a7f2eb5..921a86bb5 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_quota.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_quota.py @@ -309,7 +309,7 @@ def main(): for storage in module.params.get('storages'): sd_limit_service.add( limit=otypes.QuotaStorageLimit( - limit=storage.get('size'), + limit=int(storage.get('size')), storage_domain=search_by_name( connection.system_service().storage_domains_service(), storage.get('name') diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_quota_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_quota_info.py index 078641cce..c00089a38 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_quota_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_quota_info.py @@ -105,7 +105,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_role.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_role.py index 600840546..6770cbee2 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_role.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_role.py @@ -112,8 +112,7 @@ class RoleModule(BaseModule): return otypes.Role( id=self.param('id'), name=self.param('name'), - administrative=self.param('administrative') if self.param( - 'administrative') else None, + administrative=self.param('administrative') if self.param('administrative') is not None else None, permits=[ otypes.Permit(id=all_permits.get(new_permit)) for new_permit in self.param('permits') ] if self.param('permits') else None, diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_scheduling_policy_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_scheduling_policy_info.py index cc1434efa..f547d7974 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_scheduling_policy_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_scheduling_policy_info.py @@ -105,7 +105,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_snapshot_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_snapshot_info.py index ae493cc26..6d94e6a4a 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_snapshot_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_snapshot_info.py @@ -98,7 +98,7 @@ def main(): if module.params['fetch_nested'] or 
module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain_info.py index 3a2b5c328..3c40d9d45 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain_info.py @@ -101,7 +101,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_template_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_template_info.py index 7b422871a..3cc92f1c7 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_template_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_template_info.py @@ -112,7 +112,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_vm_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_vm_info.py index 81d164aa7..c402500cc 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_vm_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_vm_info.py @@ -112,7 +112,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_system_option_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_system_option_info.py index 53ac0ddaf..a5f9a646e 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_system_option_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_system_option_info.py @@ -99,7 +99,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_tag_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_tag_info.py index 01dd47163..d7103797c 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_tag_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_tag_info.py @@ -124,7 +124,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_template_info.py 
b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_template_info.py index 8ae011382..27f719f98 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_template_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_template_info.py @@ -101,7 +101,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_user_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_user_info.py index 5f76c008f..b27712e23 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_user_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_user_info.py @@ -97,7 +97,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm.py index e1374fcc2..c4feb88c1 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm.py @@ -750,6 +750,13 @@ options: - "If no value is passed, default value is set by oVirt/RHV engine." choices: ['interleave', 'preferred', 'strict'] type: str + storage_error_resume_behaviour: + description: + - "If the storage, on which this virtual machine has some disks gets unresponsive, the virtual machine gets paused." + - "These are the possible options, what should happen with the virtual machine at the moment the storage becomes available again." + choices: ['auto_resume', 'kill', 'leave_paused'] + type: str + version_added: "3.2.0" numa_nodes: description: - "List of vNUMA Nodes to set for this VM and pin them to assigned host's physical NUMA node." @@ -925,6 +932,11 @@ options: >0 - Number of Virtio SCSI queues to use by virtual machine." type: int version_added: 1.7.0 + tpm_enabled: + description: + - "If `true`, a TPM device is added to the virtual machine." + type: bool + version_added: 3.2.0 wait_after_lease: description: - "Number of seconds which should the module wait after the lease is changed." 
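The two options documented above, C(tpm_enabled) and C(storage_error_resume_behaviour), are wired into build_entity() and update_check() in the hunks that follow. A minimal task exercising both might look like the sketch below; the VM and cluster names are placeholders, not values taken from this patch:

```yaml
# Illustrative sketch only: "myvm" and "mycluster" are placeholder names.
- name: Enable a TPM device and keep the VM paused after storage errors
  ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: myvm
    cluster: mycluster
    state: present
    tpm_enabled: true
    storage_error_resume_behaviour: leave_paused   # other choices: auto_resume, kill
```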
@@ -1603,6 +1615,7 @@ class VmsModule(BaseModule): ) if self.param('virtio_scsi_enabled') is not None else None, multi_queues_enabled=self.param('multi_queues_enabled'), virtio_scsi_multi_queues=self.param('virtio_scsi_multi_queues'), + tpm_enabled=self.param('tpm_enabled'), os=otypes.OperatingSystem( type=self.param('operating_system'), boot=otypes.Boot( @@ -1676,6 +1689,9 @@ class VmsModule(BaseModule): numa_tune_mode=otypes.NumaTuneMode( self.param('numa_tune_mode') ) if self.param('numa_tune_mode') else None, + storage_error_resume_behaviour=otypes.VmStorageErrorResumeBehaviour( + self.param('storage_error_resume_behaviour') + ) if self.param('storage_error_resume_behaviour') else None, rng_device=otypes.RngDevice( source=otypes.RngSource(self.param('rng_device')), ) if self.param('rng_device') else None, @@ -1800,9 +1816,11 @@ class VmsModule(BaseModule): equal(self.param('serial_policy'), str(getattr(entity.serial_number, 'policy', None))) and equal(self.param('serial_policy_value'), getattr(entity.serial_number, 'value', None)) and equal(self.param('numa_tune_mode'), str(entity.numa_tune_mode)) and + equal(self.param('storage_error_resume_behaviour'), str(entity.storage_error_resume_behaviour)) and equal(self.param('virtio_scsi_enabled'), getattr(entity.virtio_scsi, 'enabled', None)) and equal(self.param('multi_queues_enabled'), entity.multi_queues_enabled) and equal(self.param('virtio_scsi_multi_queues'), entity.virtio_scsi_multi_queues) and + equal(self.param('tpm_enabled'), entity.tpm_enabled) and equal(self.param('rng_device'), str(entity.rng_device.source) if entity.rng_device else None) and equal(provided_vm_display.get('monitors'), getattr(vm_display, 'monitors', None)) and equal(provided_vm_display.get('copy_paste_enabled'), getattr(vm_display, 'copy_paste_enabled', None)) and @@ -2624,6 +2642,7 @@ def main(): ballooning_enabled=dict(type='bool', default=None), rng_device=dict(type='str'), numa_tune_mode=dict(type='str', choices=['interleave', 'preferred', 'strict']), + storage_error_resume_behaviour=dict(type='str', choices=['auto_resume', 'leave_paused', 'kill']), numa_nodes=dict(type='list', default=[], elements='dict'), custom_properties=dict(type='list', elements='dict'), watchdog=dict(type='dict'), @@ -2652,6 +2671,7 @@ def main(): virtio_scsi_multi_queues=dict(type='int'), snapshot_name=dict(type='str'), snapshot_vm=dict(type='str'), + tpm_enabled=dict(type='bool'), ) module = AnsibleModule( argument_spec=argument_spec, diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm_info.py index 037634585..4b4f3e35f 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm_info.py @@ -149,7 +149,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm_os_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm_os_info.py index 2ddb9defb..8ba1feff4 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm_os_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm_os_info.py @@ -109,7 +109,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 
'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vmpool_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vmpool_info.py index ac0d2f866..e4a94b276 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vmpool_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vmpool_info.py @@ -99,7 +99,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vnic_profile_info.py b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vnic_profile_info.py index 69a451d38..dda321312 100644 --- a/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vnic_profile_info.py +++ b/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vnic_profile_info.py @@ -99,7 +99,7 @@ def main(): if module.params['fetch_nested'] or module.params['nested_attributes']: module.deprecate( "The 'fetch_nested' and 'nested_attributes' are deprecated please use 'follow' parameter", - version='3.0.0', + version='4.0.0', collection_name='ovirt.ovirt' ) diff --git a/ansible_collections/ovirt/ovirt/plugins/test/ovirt_proxied_check.py b/ansible_collections/ovirt/ovirt/plugins/test/ovirt_proxied_check.py index f65ea2b51..8a708e045 100644 --- a/ansible_collections/ovirt/ovirt/plugins/test/ovirt_proxied_check.py +++ b/ansible_collections/ovirt/ovirt/plugins/test/ovirt_proxied_check.py @@ -34,7 +34,7 @@ except ImportError: def proxied(value): netloc = urlparse(value).netloc proxied = bool(getproxies_environment()) and not proxy_bypass(netloc) - return(proxied) + return (proxied) class TestModule(object): diff --git a/ansible_collections/ovirt/ovirt/roles/cluster_upgrade/tasks/log_progress.yml b/ansible_collections/ovirt/ovirt/roles/cluster_upgrade/tasks/log_progress.yml index 088a8f5ee..4d100545b 100644 --- a/ansible_collections/ovirt/ovirt/roles/cluster_upgrade/tasks/log_progress.yml +++ b/ansible_collections/ovirt/ovirt/roles/cluster_upgrade/tasks/log_progress.yml @@ -7,35 +7,35 @@ - name: Log process block block: - - name: Log progress as an event - vars: - message: - - "Cluster upgrade progress: {{ progress }}%" - - "{{ ', Cluster: ' + cluster_name if (cluster_name is defined and cluster_name) else '' }}" - - "{{ ', Host: ' + host_name if (host_name is defined and host_name) else '' }}" - - " [{{ description }}]" - ovirt_event: - auth: "{{ ovirt_auth }}" - state: present - severity: normal - custom_id: "{{ 2147483647 | random | int }}" - origin: "cluster_upgrade" - description: "{{ message | join('') }}" - cluster: "{{ cluster_id | default(omit) }}" + - name: Log progress as an event + vars: + message: + - "Cluster upgrade progress: {{ progress }}%" + - "{{ ', Cluster: ' + cluster_name if (cluster_name is defined and cluster_name) else '' }}" + - "{{ ', Host: ' + host_name if (host_name is defined and host_name) else '' }}" + - " [{{ description }}]" + ovirt_event: + auth: "{{ ovirt_auth }}" + state: present + severity: normal + custom_id: "{{ 2147483647 | random | int }}" + origin: "cluster_upgrade" + description: "{{ message | join('') }}" + cluster: "{{ cluster_id | default(omit) }}" - - name: Update the upgrade progress on the cluster - no_log: 
false - ansible.builtin.uri: - url: "{{ ovirt_auth.url }}/clusters/{{ cluster_id }}/upgrade" - method: POST - body_format: json - validate_certs: false - headers: - Authorization: "Bearer {{ ovirt_auth.token }}" - Correlation-Id: "{{ engine_correlation_id | default(omit) }}" - body: - upgrade_action: update_progress - upgrade_percent_complete: "{{ progress }}" - when: - - api_gt45 is defined and api_gt45 - - upgrade_set is defined + - name: Update the upgrade progress on the cluster + no_log: false + ansible.builtin.uri: + url: "{{ ovirt_auth.url }}/clusters/{{ cluster_id }}/upgrade" + method: POST + body_format: json + validate_certs: false + headers: + Authorization: "Bearer {{ ovirt_auth.token }}" + Correlation-Id: "{{ engine_correlation_id | default(omit) }}" + body: + upgrade_action: update_progress + upgrade_percent_complete: "{{ progress }}" + when: + - api_gt45 is defined and api_gt45 + - upgrade_set is defined diff --git a/ansible_collections/ovirt/ovirt/roles/cluster_upgrade/tasks/upgrade.yml b/ansible_collections/ovirt/ovirt/roles/cluster_upgrade/tasks/upgrade.yml index 67f30384a..9bad4408b 100644 --- a/ansible_collections/ovirt/ovirt/roles/cluster_upgrade/tasks/upgrade.yml +++ b/ansible_collections/ovirt/ovirt/roles/cluster_upgrade/tasks/upgrade.yml @@ -2,151 +2,150 @@ # Upgrade uses a block to keep the variables local to the block. Vars are defined after tasks. - name: Upgrade block block: - - - name: Start prep host ovirt job step - ovirt_job: - auth: "{{ ovirt_auth }}" - description: "Upgrading hosts in {{ cluster_name }}" - steps: - - description: "Preparing host for upgrade: {{ host_name }}" - - - name: progress - prepare host for upgrade (upgrade can't start until no VMs are running on the host) - include_tasks: log_progress.yml - vars: - progress: "{{ progress_host_start | int }}" - description: "preparing host for upgrade" - - - name: Get list of VMs in host - ovirt_vm_info: - auth: "{{ ovirt_auth }}" - pattern: "cluster={{ cluster_name }} and host={{ host_name }} and status=up" - register: vms_in_host - check_mode: "no" - - - name: Move user migratable vms - ovirt_vm: - auth: "{{ ovirt_auth }}" - force_migrate: true - migrate: true - state: running - name: "{{ item.name }}" - register: resp - when: - - "item['placement_policy']['affinity'] == 'user_migratable'" - with_items: - - "{{ vms_in_host.ovirt_vms }}" - loop_control: - label: "{{ item.name }}" - - - name: progress - done migrating VMs (host 10% complete) - include_tasks: log_progress.yml - vars: - progress: "{{ (progress_host_start | int + (progress_host_step_size | int * 0.10)) | int }}" - description: "status=up VMs migrated off host" - - - name: Shutdown non-migratable VMs - ovirt_vm: - auth: "{{ ovirt_auth }}" - state: stopped - force: true - name: "{{ item.name }}" - with_items: - - "{{ vms_in_host.ovirt_vms }}" - when: - - "item['placement_policy']['affinity'] == 'pinned'" - loop_control: - label: "{{ item.name }}" - register: pinned_to_host_vms - - - name: Create list of VM names which have been shut down - ansible.builtin.set_fact: - pinned_vms_names: "{{ pinned_vms_names + pinned_to_host_vms.results | selectattr('changed') | map(attribute='vm.name') | list }}" - - - name: progress - done shutting down pinned VMs (host 20% complete) - include_tasks: log_progress.yml - vars: - progress: "{{ (progress_host_start | int + (progress_host_step_size | int * 0.20)) | int }}" - description: "pinned VMs shutdown" - - - name: Gather self-heal facts about all gluster hosts in the cluster - ansible.builtin.command: 
gluster volume heal {{ volume_item.name }} info - register: self_heal_status - retries: "{{ healing_in_progress_checks }}" - delay: "{{ healing_in_progress_check_delay }}" - until: > - self_heal_status.stdout_lines is defined and - self_heal_status.stdout_lines | select('match','^(Number of entries: )[0-9]+') | map('last') | map('int') | sum == 0 - delegate_to: "{{ host_info.ovirt_hosts[0].address }}" - connection: ssh - with_items: - - "{{ cluster_info.ovirt_clusters[0].gluster_volumes }}" - loop_control: - loop_var: volume_item - when: cluster_info.ovirt_clusters[0].gluster_service | bool - - - name: Refresh gluster heal info entries to database - ansible.builtin.uri: - url: "{{ ovirt_auth.url }}/clusters/{{ cluster_id }}/refreshglusterhealstatus" - method: POST - body_format: json - validate_certs: false - headers: - Authorization: "Bearer {{ ovirt_auth.token }}" - body: "{}" - when: - - cluster_info.ovirt_clusters[0].gluster_service | bool - - api_info.ovirt_api.product_info.version.major >= 4 and api_info.ovirt_api.product_info.version.minor >= 4 - - - name: progress - host is ready for upgrade (host 30% complete) - include_tasks: log_progress.yml - vars: - progress: "{{ (progress_host_start | int + (progress_host_step_size | int * 0.30)) | int }}" - description: "host is ready for upgrade" - - - name: Finish prep host ovirt job step - ovirt_job: - auth: "{{ ovirt_auth }}" - description: "Upgrading hosts in {{ cluster_name }}" - steps: - - description: "Preparing host for upgrade: {{ host_name }}" - state: finished - - - name: Start upgrade host ovirt job step - ovirt_job: - auth: "{{ ovirt_auth }}" - description: "Upgrading hosts in {{ cluster_name }}" - steps: - - description: "Upgrading host: {{ host_name }}" - - - name: Upgrade host - ovirt_host: - auth: "{{ ovirt_auth }}" - name: "{{ host_name }}" - state: upgraded - check_upgrade: "{{ check_upgrade }}" - reboot_after_upgrade: "{{ reboot_after_upgrade }}" - timeout: "{{ upgrade_timeout }}" - - - name: Delay in minutes to wait to finish gluster healing process after successful host upgrade - ansible.builtin.pause: - minutes: "{{ wait_to_finish_healing }}" - when: - - cluster_info.ovirt_clusters[0].gluster_service | bool - - host_info.ovirt_hosts | length > 1 - - - name: progress - host upgrade complete (host 100% complete) - include_tasks: log_progress.yml - vars: - progress: "{{ (progress_host_start | int + (progress_host_step_size | int * 1.00)) | int }}" - description: "host upgrade complete" - - - name: Finish upgrade host ovirt job step - ovirt_job: - auth: "{{ ovirt_auth }}" - description: "Upgrading hosts in {{ cluster_name }}" - steps: - - description: "Upgrading host: {{ host_name }}" - state: finished + - name: Start prep host ovirt job step + ovirt_job: + auth: "{{ ovirt_auth }}" + description: "Upgrading hosts in {{ cluster_name }}" + steps: + - description: "Preparing host for upgrade: {{ host_name }}" + + - name: progress - prepare host for upgrade (upgrade can't start until no VMs are running on the host) + include_tasks: log_progress.yml + vars: + progress: "{{ progress_host_start | int }}" + description: "preparing host for upgrade" + + - name: Get list of VMs in host + ovirt_vm_info: + auth: "{{ ovirt_auth }}" + pattern: "cluster={{ cluster_name }} and host={{ host_name }} and status=up" + register: vms_in_host + check_mode: "no" + + - name: Move user migratable vms + ovirt_vm: + auth: "{{ ovirt_auth }}" + force_migrate: true + migrate: true + state: running + name: "{{ item.name }}" + register: resp + when: + - 
"item['placement_policy']['affinity'] == 'user_migratable'" + with_items: + - "{{ vms_in_host.ovirt_vms }}" + loop_control: + label: "{{ item.name }}" + + - name: progress - done migrating VMs (host 10% complete) + include_tasks: log_progress.yml + vars: + progress: "{{ (progress_host_start | int + (progress_host_step_size | int * 0.10)) | int }}" + description: "status=up VMs migrated off host" + + - name: Shutdown non-migratable VMs + ovirt_vm: + auth: "{{ ovirt_auth }}" + state: stopped + force: true + name: "{{ item.name }}" + with_items: + - "{{ vms_in_host.ovirt_vms }}" + when: + - "item['placement_policy']['affinity'] == 'pinned'" + loop_control: + label: "{{ item.name }}" + register: pinned_to_host_vms + + - name: Create list of VM names which have been shut down + ansible.builtin.set_fact: + pinned_vms_names: "{{ pinned_vms_names + pinned_to_host_vms.results | selectattr('changed') | map(attribute='vm.name') | list }}" + + - name: progress - done shutting down pinned VMs (host 20% complete) + include_tasks: log_progress.yml + vars: + progress: "{{ (progress_host_start | int + (progress_host_step_size | int * 0.20)) | int }}" + description: "pinned VMs shutdown" + + - name: Gather self-heal facts about all gluster hosts in the cluster + ansible.builtin.command: gluster volume heal {{ volume_item.name }} info + register: self_heal_status + retries: "{{ healing_in_progress_checks }}" + delay: "{{ healing_in_progress_check_delay }}" + until: > + self_heal_status.stdout_lines is defined and + self_heal_status.stdout_lines | select('match','^(Number of entries: )[0-9]+') | map('last') | map('int') | sum == 0 + delegate_to: "{{ host_info.ovirt_hosts[0].address }}" + connection: ssh + with_items: + - "{{ cluster_info.ovirt_clusters[0].gluster_volumes }}" + loop_control: + loop_var: volume_item + when: cluster_info.ovirt_clusters[0].gluster_service | bool + + - name: Refresh gluster heal info entries to database + ansible.builtin.uri: + url: "{{ ovirt_auth.url }}/clusters/{{ cluster_id }}/refreshglusterhealstatus" + method: POST + body_format: json + validate_certs: false + headers: + Authorization: "Bearer {{ ovirt_auth.token }}" + body: "{}" + when: + - cluster_info.ovirt_clusters[0].gluster_service | bool + - api_info.ovirt_api.product_info.version.major >= 4 and api_info.ovirt_api.product_info.version.minor >= 4 + + - name: progress - host is ready for upgrade (host 30% complete) + include_tasks: log_progress.yml + vars: + progress: "{{ (progress_host_start | int + (progress_host_step_size | int * 0.30)) | int }}" + description: "host is ready for upgrade" + + - name: Finish prep host ovirt job step + ovirt_job: + auth: "{{ ovirt_auth }}" + description: "Upgrading hosts in {{ cluster_name }}" + steps: + - description: "Preparing host for upgrade: {{ host_name }}" + state: finished + + - name: Start upgrade host ovirt job step + ovirt_job: + auth: "{{ ovirt_auth }}" + description: "Upgrading hosts in {{ cluster_name }}" + steps: + - description: "Upgrading host: {{ host_name }}" + + - name: Upgrade host + ovirt_host: + auth: "{{ ovirt_auth }}" + name: "{{ host_name }}" + state: upgraded + check_upgrade: "{{ check_upgrade }}" + reboot_after_upgrade: "{{ reboot_after_upgrade }}" + timeout: "{{ upgrade_timeout }}" + + - name: Delay in minutes to wait to finish gluster healing process after successful host upgrade + ansible.builtin.pause: + minutes: "{{ wait_to_finish_healing }}" + when: + - cluster_info.ovirt_clusters[0].gluster_service | bool + - host_info.ovirt_hosts | length > 1 + + - 
name: progress - host upgrade complete (host 100% complete) + include_tasks: log_progress.yml + vars: + progress: "{{ (progress_host_start | int + (progress_host_step_size | int * 1.00)) | int }}" + description: "host upgrade complete" + + - name: Finish upgrade host ovirt job step + ovirt_job: + auth: "{{ ovirt_auth }}" + description: "Upgrading hosts in {{ cluster_name }}" + steps: + - description: "Upgrading host: {{ host_name }}" + state: finished vars: host_name: "{{ host.name }}" diff --git a/ansible_collections/ovirt/ovirt/roles/disaster_recovery/README.md b/ansible_collections/ovirt/ovirt/roles/disaster_recovery/README.md index 56886ba4f..2e1fbe766 100644 --- a/ansible_collections/ovirt/ovirt/roles/disaster_recovery/README.md +++ b/ansible_collections/ovirt/ovirt/roles/disaster_recovery/README.md @@ -6,20 +6,19 @@ The `disaster_recovery` role responsible to manage the disaster recovery scenari Role Variables -------------- -| Name | Default value | | -|-------------------------|-----------------------|-----------------------------------------------------| -| dr_ignore_error_clean | False | Specify whether to ignore errors on clean engine setup.<br/>This is mainly being used to avoid failures when trying to move a storage domain to maintenance/detach it. | -| dr_ignore_error_recover | True | Specify whether to ignore errors on recover. | -| dr_partial_import | True | Specify whether to use the partial import flag on VM/Template register.<br/>If True, VMs and Templates will be registered without any missing disks, if false VMs/Templates will fail to be registered in case some of their disks will be missing from any of the storage domains. | -| dr_target_host | secondary | Specify the default target host to be used in the ansible play.<br/> This host indicates the target site which the recover process will be done. | -| dr_source_map | primary | Specify the default source map to be used in the play.<br/> The source map indicates the key which is used to get the target value for each attribute which we want to register with the VM/Template. | -| dr_reset_mac_pool | True | If True, then once a VM will be registered, it will automatically reset the mac pool, if configured in the VM. | -| dr_cleanup_retries_maintenance | 3 | Specify the number of retries of moving a storage domain to maintenance VM as part of a fail back scenario. | -| dr_cleanup_delay_maintenance | 120 | Specify the number of seconds between each retry as part of a fail back scenario. | -| dr_clean_orphaned_vms | True | Specify whether to remove any VMs which have no disks from the setup as part of cleanup. | -| dr_clean_orphaned_disks | True | Specify whether to remove lun disks from the setup as part of engine setup. | -| dr_running_vms | /tmp/ovirt_dr_running_vm_list | Specify the file path which is used to contain the data of the running VMs in the secondary setup before the failback process run on the primary setup after the secondary site cleanup was finished. Note that the /tmp folder is being used as default so the file will not be available after system reboot. 
- +| Name | Default value | | +|--------------------------------|----------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| dr_ignore_error_clean | `False` | Specify whether to ignore errors during the clean engine setup.<br/>This is mainly used to avoid failures when moving a storage domain to maintenance or detaching it. | +| dr_ignore_error_recover | `True` | Specify whether to ignore errors during recovery. | +| dr_partial_import | `True` | Specify whether to use the partial import flag when registering VMs/Templates.<br/>If `True`, VMs and Templates are registered even if some of their disks are missing; if `False`, VMs/Templates whose disks are missing from any of the storage domains fail to register. | +| dr_target_host | `secondary` | Specify the default target host to be used in the Ansible play.<br/>This host indicates the target site on which the recovery process will run. | +| dr_source_map | `primary` | Specify the default source map to be used in the play.<br/>The source map indicates the key used to look up the target value for each attribute registered with the VM/Template. | +| dr_reset_mac_pool | `True` | If `True`, the MAC pool is automatically reset for each VM once it is registered, if a MAC pool is configured in the VM. | +| dr_cleanup_retries_maintenance | `3` | Specify the number of retries for moving a storage domain to maintenance as part of a failback scenario. | +| dr_cleanup_delay_maintenance | `120` | Specify the number of seconds between retries as part of a failback scenario. | +| dr_clean_orphaned_vms | `True` | Specify whether to remove VMs which have no disks from the setup as part of cleanup. | +| dr_clean_orphaned_disks | `True` | Specify whether to remove LUN disks from the setup as part of engine setup. | +| dr_running_vms | `/tmp/ovirt_dr_running_vm_list` | Specify the file path used to store the list of running VMs in the secondary setup before the failback process runs on the primary setup, after the secondary site cleanup has finished. Note that `/tmp` is the default location, so the file will not be available after a system reboot. 
| Example Playbook ---------------- diff --git a/ansible_collections/ovirt/ovirt/roles/engine_setup/tasks/engine_setup.yml b/ansible_collections/ovirt/ovirt/roles/engine_setup/tasks/engine_setup.yml index e85926eff..b58a677bb 100644 --- a/ansible_collections/ovirt/ovirt/roles/engine_setup/tasks/engine_setup.yml +++ b/ansible_collections/ovirt/ovirt/roles/engine_setup/tasks/engine_setup.yml @@ -1,118 +1,118 @@ --- - name: Engine setup block block: - - name: Set answer file path - ansible.builtin.set_fact: - answer_file_path: "/tmp/answerfile-{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}.txt" - - - name: Use the default answerfile - ansible.builtin.template: - src: answerfile_{{ ovirt_engine_setup_version }}_basic.txt.j2 - dest: "{{ answer_file_path }}" - mode: 0600 - owner: root - group: root - when: ovirt_engine_setup_answer_file_path is undefined - no_log: true - - - name: Copy custom answer file - ansible.builtin.template: - src: "{{ ovirt_engine_setup_answer_file_path }}" - dest: "{{ answer_file_path }}" - mode: 0600 - owner: root - group: root - when: ovirt_engine_setup_answer_file_path is defined and ( - ovirt_engine_setup_use_remote_answer_file is not defined or not - ovirt_engine_setup_use_remote_answer_file) - no_log: true - - - name: Use remote's answer file - ansible.builtin.set_fact: - answer_file_path: "{{ ovirt_engine_setup_answer_file_path }}" - when: ovirt_engine_setup_use_remote_answer_file | bool - - - name: Update setup packages - ansible.builtin.yum: - name: "ovirt*setup*" - update_only: true - state: latest - when: ovirt_engine_setup_update_setup_packages or ovirt_engine_setup_perform_upgrade - tags: - - "skip_ansible_lint" # ANSIBLE0006 - - - name: Copy yum configuration file - ansible.builtin.copy: - src: "/etc/yum.conf" - dest: "/tmp/yum.conf" - owner: root - group: root - mode: 0644 - remote_src: true - - - name: Set 'best' to false - ansible.builtin.replace: - path: "/tmp/yum.conf" - regexp: '^best=True' - replace: 'best=False' - owner: root - group: root - mode: 0644 - - - name: Update all packages - ansible.builtin.yum: - name: '*' - state: latest - conf_file: /tmp/yum.conf - when: not ovirt_engine_setup_offline | bool - tags: - - "skip_ansible_lint" # ANSIBLE0010 - - - name: Remove temporary yum configuration file - ansible.builtin.file: - path: "/tmp/yum.conf" - state: absent - ignore_errors: true - - - name: Set offline parameter if variable is set - ansible.builtin.set_fact: - offline: "{{ '--offline' if ovirt_engine_setup_offline | bool else '' }}" - - - name: Restore engine from file - include_tasks: restore_engine_from_file.yml - when: ovirt_engine_setup_restore_file is defined - - - name: Run engine-setup with answerfile - command: "engine-setup --accept-defaults --config-append={{ answer_file_path }} {{ offline }}" - tags: - - skip_ansible_lint - - - name: Make sure `ovirt-engine` service is running - ansible.builtin.service: - name: ovirt-engine - state: started - - - name: Run engine-config - ansible.builtin.command: "engine-config -s {{ item.key }}='{{ item.value }}' {% if item.version is defined %} --cver={{ item.version }} {% endif %}" - loop: "{{ ovirt_engine_setup_engine_configs | default([]) }}" - tags: - - skip_ansible_lint - - - name: Restart engine after engine-config - ansible.builtin.service: - name: ovirt-engine - state: restarted - when: ovirt_engine_setup_engine_configs is defined - - - name: Check if Engine health page is up - ansible.builtin.uri: - validate_certs: "{{ ovirt_engine_setup_validate_certs | default(omit) }}" - url: 
"http://localhost/ovirt-engine/services/health" - status_code: 200 - register: health_page - retries: 30 - delay: 10 - until: health_page is success + - name: Set answer file path + ansible.builtin.set_fact: + answer_file_path: "/tmp/answerfile-{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}.txt" + + - name: Use the default answerfile + ansible.builtin.template: + src: answerfile_{{ ovirt_engine_setup_version }}_basic.txt.j2 + dest: "{{ answer_file_path }}" + mode: 0600 + owner: root + group: root + when: ovirt_engine_setup_answer_file_path is undefined + no_log: true + + - name: Copy custom answer file + ansible.builtin.template: + src: "{{ ovirt_engine_setup_answer_file_path }}" + dest: "{{ answer_file_path }}" + mode: 0600 + owner: root + group: root + when: ovirt_engine_setup_answer_file_path is defined and ( + ovirt_engine_setup_use_remote_answer_file is not defined or not + ovirt_engine_setup_use_remote_answer_file) + no_log: true + + - name: Use remote's answer file + ansible.builtin.set_fact: + answer_file_path: "{{ ovirt_engine_setup_answer_file_path }}" + when: ovirt_engine_setup_use_remote_answer_file | bool + + - name: Update setup packages + ansible.builtin.yum: + name: "ovirt*setup*" + update_only: true + state: latest + when: ovirt_engine_setup_update_setup_packages or ovirt_engine_setup_perform_upgrade + tags: + - "skip_ansible_lint" # ANSIBLE0006 + + - name: Copy yum configuration file + ansible.builtin.copy: + src: "/etc/yum.conf" + dest: "/tmp/yum.conf" + owner: root + group: root + mode: 0644 + remote_src: true + + - name: Set 'best' to false + ansible.builtin.replace: + path: "/tmp/yum.conf" + regexp: '^best=True' + replace: 'best=False' + owner: root + group: root + mode: 0644 + + - name: Update all packages + ansible.builtin.yum: + name: '*' + state: latest + conf_file: /tmp/yum.conf + when: not ovirt_engine_setup_offline | bool + tags: + - "skip_ansible_lint" # ANSIBLE0010 + + - name: Remove temporary yum configuration file + ansible.builtin.file: + path: "/tmp/yum.conf" + state: absent + ignore_errors: true + + - name: Set offline parameter if variable is set + ansible.builtin.set_fact: + offline: "{{ '--offline' if ovirt_engine_setup_offline | bool else '' }}" + + - name: Restore engine from file + include_tasks: restore_engine_from_file.yml + when: ovirt_engine_setup_restore_file is defined + + - name: Run engine-setup with answerfile + command: "engine-setup --accept-defaults --config-append={{ answer_file_path }} {{ offline }}" + tags: + - skip_ansible_lint + + - name: Make sure `ovirt-engine` service is running + ansible.builtin.service: + name: ovirt-engine + state: started + + - name: Run engine-config + ansible.builtin.command: "engine-config -s {{ item.key }}='{{ item.value }}' {% if item.version is defined %} --cver={{ item.version }} {% endif %}" + loop: "{{ ovirt_engine_setup_engine_configs | default([]) }}" + tags: + - skip_ansible_lint + + - name: Restart engine after engine-config + ansible.builtin.service: + name: ovirt-engine + state: restarted + when: ovirt_engine_setup_engine_configs is defined + + - name: Check if Engine health page is up + ansible.builtin.uri: + validate_certs: "{{ ovirt_engine_setup_validate_certs | default(omit) }}" + url: "http://localhost/ovirt-engine/services/health" + status_code: 200 + register: health_page + retries: 30 + delay: 10 + until: health_page is success always: - name: Clean temporary files diff --git a/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/containers-deploy.yml 
b/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/containers-deploy.yml index f40955a8d..db9b7dd60 100644 --- a/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/containers-deploy.yml +++ b/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/containers-deploy.yml @@ -1,11 +1,4 @@ --- -- name: Bring up docker containers - hosts: localhost - gather_facts: false - roles: - - role: provision_docker - provision_docker_inventory_group: "{{ groups['engine'] }}" - - name: "Update python because of ovirt-imageio-proxy" hosts: engine tasks: diff --git a/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/requirements.yml b/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/requirements.yml index 159e73f9c..3f359ef12 100644 --- a/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/requirements.yml +++ b/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/requirements.yml @@ -1,4 +1,2 @@ --- -- src: chrismeyersfsu.provision_docker - name: provision_docker - src: oVirt.repositories diff --git a/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-4.2.yml b/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-4.2.yml index 206346346..5416eea9a 100644 --- a/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-4.2.yml +++ b/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-4.2.yml @@ -1,6 +1,8 @@ --- -- import_playbook: containers-deploy.yml -- import_playbook: engine-deploy.yml +- name: Import containers-deploy.yml + import_playbook: containers-deploy.yml +- name: Import engine-deploy.yml + import_playbook: engine-deploy.yml vars: ovirt_engine_setup_version: "4.2" ovirt_release_rpm: "http://plain.resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm" diff --git a/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-master.yml b/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-master.yml index 886abf0e8..ebe715e56 100644 --- a/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-master.yml +++ b/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-master.yml @@ -1,6 +1,8 @@ --- -- import_playbook: containers-deploy.yml -- import_playbook: engine-deploy.yml +- name: Import containers-deploy.yml + import_playbook: containers-deploy.yml +- name: Import engine-deploy.yml + import_playbook: engine-deploy.yml vars: ovirt_engine_setup_version: "4.5" ovirt_release_rpm: "http://plain.resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm" diff --git a/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-upgrade-4.2-to-master.yml b/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-upgrade-4.2-to-master.yml index 32516116b..d655db19a 100644 --- a/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-upgrade-4.2-to-master.yml +++ b/ansible_collections/ovirt/ovirt/roles/engine_setup/tests/test-upgrade-4.2-to-master.yml @@ -1,10 +1,13 @@ --- -- import_playbook: containers-deploy.yml -- import_playbook: engine-deploy.yml +- name: Import containers-deploy.yml + import_playbook: containers-deploy.yml +- name: Import engine-deploy.yml 4.2 + import_playbook: engine-deploy.yml vars: ovirt_engine_setup_version: "4.2" ovirt_release_rpm: "http://plain.resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm" -- import_playbook: engine-upgrade.yml +- name: Import engine-upgrade.yml 4.3 + import_playbook: engine-upgrade.yml vars: ovirt_engine_setup_version: "4.3" ovirt_release_rpm: "http://plain.resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm" diff --git 
a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/README.md b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/README.md index af991eb4c..ea50a6f11 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/README.md +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/README.md @@ -318,7 +318,7 @@ $ ansible-playbook hosted_engine_deploy.yml --extra-vars='@he_deployment.json' - Deployment over a remote host: ```sh -ansible-playbook -i host123.localdomain, hosted_engine_deploy.yml --extra-vars='@he_deployment.json' --extra-vars='@passwords.yml' --ask-vault-pass +$ ansible-playbook -i host123.localdomain, hosted_engine_deploy.yml --extra-vars='@he_deployment.json' --extra-vars='@passwords.yml' --ask-vault-pass ``` Deploy over a remote host from Ansible AWX/Tower diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/hooks/after_setup/add_host_storage_domain.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/hooks/after_setup/add_host_storage_domain.yml index ad7df40e6..59e9cabf2 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/hooks/after_setup/add_host_storage_domain.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/hooks/after_setup/add_host_storage_domain.yml @@ -8,7 +8,7 @@ - name: Set Engine public key as authorized key without validating the TLS/SSL certificates connection: ssh - authorized_key: + ansible.posix.authorized_key: user: root state: present key: https://{{ he_fqdn }}/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/add_engine_as_ansible_host.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/add_engine_as_ansible_host.yml index 8f13b593e..f1320ba56 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/add_engine_as_ansible_host.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/add_engine_as_ansible_host.yml @@ -1,25 +1,25 @@ --- - name: Add the engine VM as an ansible host block: - - name: Fetch the value of HOST_KEY_CHECKING - ansible.builtin.set_fact: host_key_checking="{{ lookup('config', 'HOST_KEY_CHECKING') }}" - - name: Get the username running the deploy - become: false - ansible.builtin.command: whoami - register: username_on_host - changed_when: false - - name: Register the engine FQDN as a host - ansible.builtin.add_host: - name: "{{ he_fqdn }}" - groups: engine - ansible_connection: smart - ansible_ssh_extra_args: >- - -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {% if he_ansible_host_name != "localhost" %} - -o ProxyCommand="ssh -W %h:%p -q - {% if not host_key_checking %} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {% endif %} - {{ username_on_host.stdout }}@{{ he_ansible_host_name }}" {% endif %} - ansible_ssh_pass: "{{ he_appliance_password }}" - ansible_host: "{{ he_fqdn_ansible_host }}" - ansible_user: root - no_log: true - ignore_errors: true + - name: Fetch the value of HOST_KEY_CHECKING + ansible.builtin.set_fact: host_key_checking="{{ lookup('config', 'HOST_KEY_CHECKING') }}" + - name: Get the username running the deploy + become: false + ansible.builtin.command: whoami + register: username_on_host + changed_when: false + - name: Register the engine FQDN as a host + ansible.builtin.add_host: + name: "{{ he_fqdn }}" + groups: engine + ansible_connection: smart + ansible_ssh_extra_args: >- + -o StrictHostKeyChecking=no -o 
UserKnownHostsFile=/dev/null {% if he_ansible_host_name != "localhost" %} + -o ProxyCommand="ssh -W %h:%p -q + {% if not host_key_checking %} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {% endif %} + {{ username_on_host.stdout }}@{{ he_ansible_host_name }}" {% endif %} + ansible_ssh_pass: "{{ he_appliance_password }}" + ansible_host: "{{ he_fqdn_ansible_host }}" + ansible_user: root + no_log: true + ignore_errors: true diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/alter_libvirt_default_net_configuration.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/alter_libvirt_default_net_configuration.yml index bcc5913ec..92a87da77 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/alter_libvirt_default_net_configuration.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/alter_libvirt_default_net_configuration.yml @@ -19,6 +19,11 @@ ignore_errors: true changed_when: false +- name: Update libvirt default network configuration, delete the bridge + ansible.builtin.command: ip link delete {{ network_dict['bridge']['name'] }} + ignore_errors: true + changed_when: false + - name: Update libvirt default network configuration, undefine ansible.builtin.command: virsh net-undefine default ignore_errors: true diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/01_prepare_routing_rules.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/01_prepare_routing_rules.yml index 4bd99efe9..c0bd6c10f 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/01_prepare_routing_rules.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/01_prepare_routing_rules.yml @@ -1,100 +1,100 @@ --- - name: Prepare routing rules block: - - name: Check IPv6 - ansible.builtin.set_fact: - ipv6_deployment: >- - {{ true if he_host_ip not in target_address_v4.stdout_lines and - he_host_ip in target_address_v6.stdout_lines - else false }} - - include_tasks: ../validate_ip_prefix.yml - - include_tasks: ../alter_libvirt_default_net_configuration.yml - # all of the next is a workaround for a network issue: - # vdsm installation breaks the routing by defining separate - # routing table for ovirtmgmt. 
But we need to enable communication - # between virbr0 and ovirtmgmt - - name: Start libvirt - ansible.builtin.service: - name: libvirtd - state: started - enabled: true - - name: Activate default libvirt network - ansible.builtin.command: virsh net-autostart default - ignore_errors: true - changed_when: false - - name: Get routing rules, IPv4 - ansible.builtin.command: ip -j rule - environment: "{{ he_cmd_lang }}" - register: route_rules_ipv4 - changed_when: true - - name: Get routing rules, IPv6 - ansible.builtin.command: ip -6 rule - environment: "{{ he_cmd_lang }}" - register: route_rules_ipv6 - changed_when: true - when: ipv6_deployment|bool - - name: Save bridge name - ansible.builtin.set_fact: - virbr_default: "{{ network_dict['bridge']['name'] }}" - - name: Wait for the bridge to appear on the host - ansible.builtin.command: ip link show {{ virbr_default }} - environment: "{{ he_cmd_lang }}" - changed_when: true - register: ip_link_show_bridge - until: ip_link_show_bridge.rc == 0 - retries: 30 - delay: 3 - - name: Accept IPv6 Router Advertisements for {{ virbr_default }} - ansible.builtin.shell: echo 2 > /proc/sys/net/ipv6/conf/{{ virbr_default }}/accept_ra - when: ipv6_deployment|bool - - name: Refresh network facts - ansible.builtin.setup: - tags: ['skip_ansible_lint'] - - name: Fetch IPv4 CIDR for {{ virbr_default }} - ansible.builtin.set_fact: - virbr_cidr_ipv4: >- - {{ (hostvars[inventory_hostname]['ansible_'+virbr_default]['ipv4']['address']+'/' - +hostvars[inventory_hostname]['ansible_'+virbr_default]['ipv4']['netmask']) |ipv4('host/prefix') }} - when: not ipv6_deployment|bool - - name: Fetch IPv6 CIDR for {{ virbr_default }} - ansible.builtin.set_fact: - virbr_cidr_ipv6: >- - {{ (hostvars[inventory_hostname]['ansible_'+virbr_default]['ipv6'][0]['address']+'/'+ - hostvars[inventory_hostname]['ansible_'+virbr_default]['ipv6'][0]['prefix']) | - ipv6('host/prefix') if 'ipv6' in hostvars[inventory_hostname]['ansible_'+virbr_default] else None }} - when: ipv6_deployment|bool - - name: Add IPv4 outbound route rules - ansible.builtin.command: ip rule add from {{ virbr_cidr_ipv4 }} priority 101 table main - environment: "{{ he_cmd_lang }}" - register: result - when: >- - not ipv6_deployment|bool and - route_rules_ipv4.stdout | from_json | - selectattr('priority', 'equalto', 101) | - selectattr('src', 'equalto', virbr_cidr_ipv4 | ipaddr('address') ) | - list | length == 0 - changed_when: true - - name: Add IPv4 inbound route rules - ansible.builtin.command: ip rule add from all to {{ virbr_cidr_ipv4 }} priority 100 table main - environment: "{{ he_cmd_lang }}" - register: result - changed_when: true - when: >- - not ipv6_deployment|bool and - route_rules_ipv4.stdout | from_json | - selectattr('priority', 'equalto', 100) | - selectattr('dst', 'equalto', virbr_cidr_ipv4 | ipaddr('address') ) | - list | length == 0 - - name: Add IPv6 outbound route rules - ansible.builtin.command: ip -6 rule add from {{ virbr_cidr_ipv6 }} priority 101 table main - environment: "{{ he_cmd_lang }}" - register: result - when: ipv6_deployment|bool and "\"101:\tfrom \"+virbr_cidr_ipv6+\" lookup main\" not in route_rules_ipv6.stdout" - changed_when: true - - name: Add IPv6 inbound route rules - ansible.builtin.command: ip -6 rule add from all to {{ virbr_cidr_ipv6 }} priority 100 table main - environment: "{{ he_cmd_lang }}" - register: result - changed_when: true - when: >- - ipv6_deployment|bool and "\"100:\tfrom all to \"+virbr_cidr_ipv6+\" lookup main\" not in route_rules_ipv6.stdout" + - name: Check IPv6 
+ ansible.builtin.set_fact: + ipv6_deployment: >- + {{ true if he_host_ip not in target_address_v4.stdout_lines and + he_host_ip in target_address_v6.stdout_lines + else false }} + - include_tasks: ../validate_ip_prefix.yml + - include_tasks: ../alter_libvirt_default_net_configuration.yml + # all of the next is a workaround for a network issue: + # vdsm installation breaks the routing by defining separate + # routing table for ovirtmgmt. But we need to enable communication + # between virbr0 and ovirtmgmt + - name: Start libvirt + ansible.builtin.service: + name: libvirtd + state: started + enabled: true + - name: Activate default libvirt network + ansible.builtin.command: virsh net-autostart default + ignore_errors: true + changed_when: false + - name: Get routing rules, IPv4 + ansible.builtin.command: ip -j rule + environment: "{{ he_cmd_lang }}" + register: route_rules_ipv4 + changed_when: true + - name: Get routing rules, IPv6 + ansible.builtin.command: ip -6 rule + environment: "{{ he_cmd_lang }}" + register: route_rules_ipv6 + changed_when: true + when: ipv6_deployment|bool + - name: Save bridge name + ansible.builtin.set_fact: + virbr_default: "{{ network_dict['bridge']['name'] }}" + - name: Wait for the bridge to appear on the host + ansible.builtin.command: ip link show {{ virbr_default }} + environment: "{{ he_cmd_lang }}" + changed_when: true + register: ip_link_show_bridge + until: ip_link_show_bridge.rc == 0 + retries: 30 + delay: 3 + - name: Accept IPv6 Router Advertisements for {{ virbr_default }} + ansible.builtin.shell: echo 2 > /proc/sys/net/ipv6/conf/{{ virbr_default }}/accept_ra + when: ipv6_deployment|bool + - name: Refresh network facts + ansible.builtin.setup: + tags: ['skip_ansible_lint'] + - name: Fetch IPv4 CIDR for {{ virbr_default }} + ansible.builtin.set_fact: + virbr_cidr_ipv4: >- + {{ (hostvars[inventory_hostname]['ansible_'+virbr_default]['ipv4']['address']+'/' + +hostvars[inventory_hostname]['ansible_'+virbr_default]['ipv4']['netmask']) }} + when: not ipv6_deployment|bool + - name: Fetch IPv6 CIDR for {{ virbr_default }} + ansible.builtin.set_fact: + virbr_cidr_ipv6: >- + {{ (hostvars[inventory_hostname]['ansible_'+virbr_default]['ipv6'][0]['address']+'/'+ + hostvars[inventory_hostname]['ansible_'+virbr_default]['ipv6'][0]['prefix']) + if 'ipv6' in hostvars[inventory_hostname]['ansible_'+virbr_default] else None }} + when: ipv6_deployment|bool + - name: Add IPv4 outbound route rules + ansible.builtin.command: ip rule add from {{ virbr_cidr_ipv4 }} priority 101 table main + environment: "{{ he_cmd_lang }}" + register: result + when: >- + not ipv6_deployment|bool and + route_rules_ipv4.stdout | from_json | + selectattr('priority', 'equalto', 101) | + selectattr('src', 'equalto', virbr_cidr_ipv4) | + list | length == 0 + changed_when: true + - name: Add IPv4 inbound route rules + ansible.builtin.command: ip rule add from all to {{ virbr_cidr_ipv4 }} priority 100 table main + environment: "{{ he_cmd_lang }}" + register: result + changed_when: true + when: >- + not ipv6_deployment|bool and + route_rules_ipv4.stdout | from_json | + selectattr('priority', 'equalto', 100) | + selectattr('dst', 'equalto', virbr_cidr_ipv4) | + list | length == 0 + - name: Add IPv6 outbound route rules + ansible.builtin.command: ip -6 rule add from {{ virbr_cidr_ipv6 }} priority 101 table main + environment: "{{ he_cmd_lang }}" + register: result + when: ipv6_deployment|bool and "\"101:\tfrom \"+virbr_cidr_ipv6+\" lookup main\" not in route_rules_ipv6.stdout" + changed_when: true + - 
name: Add IPv6 inbound route rules + ansible.builtin.command: ip -6 rule add from all to {{ virbr_cidr_ipv6 }} priority 100 table main + environment: "{{ he_cmd_lang }}" + register: result + changed_when: true + when: >- + ipv6_deployment|bool and "\"100:\tfrom all to \"+virbr_cidr_ipv6+\" lookup main\" not in route_rules_ipv6.stdout" diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml index 3958eca15..0fe107973 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml @@ -1,159 +1,160 @@ --- - name: Create hosted engine local vm block: - - name: Initial tasks - block: - - name: Get host unique id - ansible.builtin.shell: | - if [ -e /etc/vdsm/vdsm.id ]; - then cat /etc/vdsm/vdsm.id; - elif [ -e /proc/device-tree/system-id ]; - then cat /proc/device-tree/system-id; #ppc64le - else dmidecode -s system-uuid; - fi; - environment: "{{ he_cmd_lang }}" - changed_when: true - register: unique_id_out - - name: Create directory for local VM - ansible.builtin.tempfile: - state: directory - path: "{{ he_local_vm_dir_path }}" - prefix: "{{ he_local_vm_dir_prefix }}" - register: otopi_localvm_dir - - name: Set local vm dir path - ansible.builtin.set_fact: - he_local_vm_dir: "{{ otopi_localvm_dir.path }}" - - include_tasks: ../install_appliance.yml - when: he_appliance_ova is none or he_appliance_ova|length == 0 - - name: Register appliance PATH - ansible.builtin.set_fact: - he_appliance_ova_path: "{{ he_appliance_ova }}" - when: he_appliance_ova is not none and he_appliance_ova|length > 0 - - name: Debug var he_appliance_ova_path - ansible.builtin.debug: - var: he_appliance_ova_path - - name: Check available space on local VM directory - ansible.builtin.shell: df -k --output=avail "{{ he_local_vm_dir_path }}" | grep -v Avail | cat - environment: "{{ he_cmd_lang }}" - changed_when: true - register: local_vm_dir_space_out - - name: Check appliance size - ansible.builtin.shell: zcat "{{ he_appliance_ova_path }}" | wc --bytes - environment: "{{ he_cmd_lang }}" - changed_when: true - register: appliance_size - - name: Ensure we have enough space to extract the appliance - ansible.builtin.assert: - that: - - "local_vm_dir_space_out.stdout_lines[0]|int * 1024 > appliance_size.stdout_lines[0]|int * 1.1" - msg: > - {{ he_local_vm_dir_path }} doesn't provide enough free space to extract the - engine appliance: {{ local_vm_dir_space_out.stdout_lines[0]|int / 1024 | int }} Mb - are available while {{ appliance_size.stdout_lines[0]|int / 1024 / 1024 * 1.1 | int }} Mb - are required. 
- - name: Extract appliance to local VM directory - ansible.builtin.unarchive: - remote_src: true - src: "{{ he_appliance_ova_path }}" - dest: "{{ he_local_vm_dir }}" - extra_opts: ['--sparse'] - - include_tasks: ../get_local_vm_disk_path.yml - - name: Get appliance disk size - ansible.builtin.command: qemu-img info --output=json {{ local_vm_disk_path }} - environment: "{{ he_cmd_lang }}" - changed_when: true - register: qemu_img_out - - name: Debug var qemu_img_out - ansible.builtin.debug: - var: qemu_img_out - - name: Parse qemu-img output - ansible.builtin.set_fact: - virtual_size={{ qemu_img_out.stdout|from_json|ovirt.ovirt.json_query('"virtual-size"') }} - register: otopi_appliance_disk_size - - name: Debug var virtual_size - ansible.builtin.debug: - var: virtual_size - - name: Hash the appliance root password - ansible.builtin.set_fact: - he_hashed_appliance_password: "{{ he_appliance_password | string | password_hash('sha512') }}" - no_log: true - - name: Create cloud init user-data and meta-data files - ansible.builtin.template: - src: "{{ item.src }}" - dest: "{{ item.dest }}" - mode: 0640 - with_items: - - {src: templates/user-data.j2, dest: "{{ he_local_vm_dir }}/user-data"} - - {src: templates/meta-data.j2, dest: "{{ he_local_vm_dir }}/meta-data"} - - {src: templates/network-config-dhcp.j2, dest: "{{ he_local_vm_dir }}/network-config"} - - name: Create ISO disk - ansible.builtin.command: >- - mkisofs -output {{ he_local_vm_dir }}/seed.iso -volid cidata -joliet -rock -input-charset utf-8 - {{ he_local_vm_dir }}/meta-data {{ he_local_vm_dir }}/user-data - {{ he_local_vm_dir }}/network-config - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Fix local VM directory permission - ansible.builtin.file: - state: directory - path: "{{ he_local_vm_dir }}" - owner: vdsm - group: qemu - recurse: true - mode: u=rwX,g=rwX,o= - - name: Create local VM - ansible.builtin.command: >- - virt-install -n {{ he_vm_name }}Local --os-variant rhel8.0 --virt-type kvm --memory {{ he_mem_size_MB }} - --vcpus {{ he_vcpus }} --network network=default,mac={{ he_vm_mac_addr }},model=virtio - --disk {{ local_vm_disk_path }} --import --disk path={{ he_local_vm_dir }}/seed.iso,device=cdrom - --noautoconsole --rng /dev/random --graphics vnc --video vga --sound none --controller usb,model=none - --memballoon none --boot hd,menu=off --clock kvmclock_present=yes - environment: "{{ he_cmd_lang }}" - register: create_local_vm - changed_when: true - - name: Debug var create_local_vm - ansible.builtin.debug: - var: create_local_vm - - name: Get local VM IP - ansible.builtin.shell: virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{ print $5 }' | cut -f1 -d'/' - environment: "{{ he_cmd_lang }}" - register: local_vm_ip - until: local_vm_ip.stdout_lines|length >= 1 - retries: 90 - delay: 10 - changed_when: true - - name: Debug var local_vm_ip - ansible.builtin.debug: - var: local_vm_ip - - name: Remove leftover entries in /etc/hosts for the local VM - ansible.builtin.lineinfile: - dest: /etc/hosts - regexp: "# temporary entry added by hosted-engine-setup for the bootstrap VM$" - state: absent - - name: Create an entry in /etc/hosts for the local VM - ansible.builtin.lineinfile: - dest: /etc/hosts - line: - "{{ local_vm_ip.stdout_lines[0] }} \ - {{ he_fqdn }} # temporary entry added by hosted-engine-setup for the bootstrap VM" - insertbefore: BOF - backup: true - - name: Wait for SSH to restart on the local VM - ansible.builtin.wait_for: - host='{{ he_fqdn }}' - port=22 - delay=30 - 
timeout=300 - - name: Set the name for add_host - ansible.builtin.set_fact: - he_fqdn_ansible_host: "{{ local_vm_ip.stdout_lines[0] }}" - - import_tasks: ../add_engine_as_ansible_host.yml - rescue: - - include_tasks: ../clean_localvm_dir.yml - - include_tasks: ../clean_local_storage_pools.yml - - name: Notify the user about a failure - ansible.builtin.fail: - msg: > - The system may not be provisioned according to the playbook - results: please check the logs for the issue, - fix accordingly or re-deploy from scratch. + - name: Initial tasks + block: + - name: Get host unique id + ansible.builtin.shell: | + if [ -e /etc/vdsm/vdsm.id ]; + then cat /etc/vdsm/vdsm.id; + elif [ -e /proc/device-tree/system-id ]; + then cat /proc/device-tree/system-id; #ppc64le + else dmidecode -s system-uuid; + fi; + environment: "{{ he_cmd_lang }}" + changed_when: true + register: unique_id_out + - name: Create directory for local VM + ansible.builtin.tempfile: + state: directory + path: "{{ he_local_vm_dir_path }}" + prefix: "{{ he_local_vm_dir_prefix }}" + register: otopi_localvm_dir + - name: Set local vm dir path + ansible.builtin.set_fact: + he_local_vm_dir: "{{ otopi_localvm_dir.path }}" + - include_tasks: ../install_appliance.yml + when: he_appliance_ova is none or he_appliance_ova|length == 0 + - name: Register appliance PATH + ansible.builtin.set_fact: + he_appliance_ova_path: "{{ he_appliance_ova }}" + when: he_appliance_ova is not none and he_appliance_ova|length > 0 + - name: Debug var he_appliance_ova_path + ansible.builtin.debug: + var: he_appliance_ova_path + - name: Check available space on local VM directory + ansible.builtin.shell: df -k --output=avail "{{ he_local_vm_dir_path }}" | grep -v Avail | cat + environment: "{{ he_cmd_lang }}" + changed_when: true + register: local_vm_dir_space_out + - name: Check appliance size + ansible.builtin.shell: zcat "{{ he_appliance_ova_path }}" | wc --bytes + environment: "{{ he_cmd_lang }}" + changed_when: true + register: appliance_size + - name: Ensure we have enough space to extract the appliance + ansible.builtin.assert: + that: + - "local_vm_dir_space_out.stdout_lines[0]|int * 1024 > appliance_size.stdout_lines[0]|int * 1.1" + msg: > + {{ he_local_vm_dir_path }} doesn't provide enough free space to extract the + engine appliance: {{ local_vm_dir_space_out.stdout_lines[0]|int / 1024 | int }} Mb + are available while {{ appliance_size.stdout_lines[0]|int / 1024 / 1024 * 1.1 | int }} Mb + are required. 
+ - name: Extract appliance to local VM directory + ansible.builtin.unarchive: + remote_src: true + src: "{{ he_appliance_ova_path }}" + dest: "{{ he_local_vm_dir }}" + extra_opts: ['--sparse'] + - include_tasks: ../get_local_vm_disk_path.yml + - name: Get appliance disk size + ansible.builtin.command: qemu-img info --output=json {{ local_vm_disk_path }} + environment: "{{ he_cmd_lang }}" + changed_when: true + register: qemu_img_out + - name: Debug var qemu_img_out + ansible.builtin.debug: + var: qemu_img_out + - name: Parse qemu-img output + ansible.builtin.set_fact: + virtual_size={{ qemu_img_out.stdout|from_json|ovirt.ovirt.json_query('"virtual-size"') }} + register: otopi_appliance_disk_size + - name: Debug var virtual_size + ansible.builtin.debug: + var: virtual_size + - name: Hash the appliance root password + ansible.builtin.set_fact: + he_hashed_appliance_password: "{{ he_appliance_password | string | password_hash('sha512') }}" + no_log: true + - name: Create cloud init user-data and meta-data files + ansible.builtin.template: + src: "{{ item.src }}" + dest: "{{ item.dest }}" + mode: 0640 + with_items: + - {src: templates/user-data.j2, dest: "{{ he_local_vm_dir }}/user-data"} + - {src: templates/meta-data.j2, dest: "{{ he_local_vm_dir }}/meta-data"} + - {src: templates/network-config-dhcp.j2, dest: "{{ he_local_vm_dir }}/network-config"} + - name: Create ISO disk + ansible.builtin.command: >- + mkisofs -output {{ he_local_vm_dir }}/seed.iso -volid cidata -joliet -rock -input-charset utf-8 + {{ he_local_vm_dir }}/meta-data {{ he_local_vm_dir }}/user-data + {{ he_local_vm_dir }}/network-config + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Fix local VM directory permission + ansible.builtin.file: + state: directory + path: "{{ he_local_vm_dir }}" + owner: vdsm + group: qemu + recurse: true + mode: u=rwX,g=rwX,o= + - name: Create local VM + ansible.builtin.command: >- + virt-install -n {{ he_vm_name }}Local --os-variant rhel8.0 --virt-type kvm --memory {{ he_mem_size_MB }} + --vcpus {{ he_vcpus }} --network network=default,mac={{ he_vm_mac_addr }},model=virtio + --disk {{ local_vm_disk_path }} --import --disk path={{ he_local_vm_dir }}/seed.iso,device=cdrom + --autoconsole text --rng /dev/random --graphics none --sound none --controller usb,model=none + --memballoon none --boot hd,bootmenu.enable=on,bios.useserial=on --clock kvmclock_present=yes + --serial=pty,log.file=/var/log/libvirt/qemu/HostedEngineLocal-console.log + environment: "{{ he_cmd_lang }}" + register: create_local_vm + changed_when: true + - name: Debug var create_local_vm + ansible.builtin.debug: + var: create_local_vm + - name: Get local VM IP + ansible.builtin.shell: virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{ print $5 }' | cut -f1 -d'/' + environment: "{{ he_cmd_lang }}" + register: local_vm_ip + until: local_vm_ip.stdout_lines|length >= 1 + retries: 90 + delay: 10 + changed_when: true + - name: Debug var local_vm_ip + ansible.builtin.debug: + var: local_vm_ip + - name: Remove leftover entries in /etc/hosts for the local VM + ansible.builtin.lineinfile: + dest: /etc/hosts + regexp: "# temporary entry added by hosted-engine-setup for the bootstrap VM$" + state: absent + - name: Create an entry in /etc/hosts for the local VM + ansible.builtin.lineinfile: + dest: /etc/hosts + line: + "{{ local_vm_ip.stdout_lines[0] }} \ + {{ he_fqdn }} # temporary entry added by hosted-engine-setup for the bootstrap VM" + insertbefore: BOF + backup: true + - name: Wait for SSH to 
restart on the local VM + ansible.builtin.wait_for: + host='{{ he_fqdn }}' + port=22 + delay=30 + timeout=300 + - name: Set the name for add_host + ansible.builtin.set_fact: + he_fqdn_ansible_host: "{{ local_vm_ip.stdout_lines[0] }}" + - import_tasks: ../add_engine_as_ansible_host.yml + rescue: + - include_tasks: ../clean_localvm_dir.yml + - include_tasks: ../clean_local_storage_pools.yml + - name: Notify the user about a failure + ansible.builtin.fail: + msg: > + The system may not be provisioned according to the playbook + results: please check the logs for the issue, + fix accordingly or re-deploy from scratch. diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml index 775acb1d4..2fdd86fe2 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml @@ -1,111 +1,111 @@ --- - name: Initial engine tasks block: - - name: Wait for the local VM - ansible.builtin.wait_for_connection: - delay: 5 - timeout: 3600 - - name: Add an entry for this host on /etc/hosts on the local VM - ansible.builtin.lineinfile: - dest: /etc/hosts - line: >- - {{ hostvars[he_ansible_host_name]['he_host_ip'] }} {{ hostvars[he_ansible_host_name]['he_host_address'] }} - - name: Set FQDN - ansible.builtin.command: hostnamectl set-hostname {{ he_fqdn }} - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Force the local VM FQDN to temporary resolve on the natted network address - ansible.builtin.lineinfile: - path: /etc/hosts - line: - "{{ hostvars[he_ansible_host_name]['local_vm_ip']['stdout_lines'][0] }} {{ he_fqdn }} # hosted-engine-setup-{{ \ - hostvars[he_ansible_host_name]['he_local_vm_dir'] }}" - - name: Reconfigure IPv6 default gateway - ansible.builtin.command: ip -6 route add default via "{{ he_ipv6_subnet_prefix + '::1' }}" - environment: "{{ he_cmd_lang }}" - changed_when: true - when: hostvars[he_ansible_host_name]['ipv6_deployment']|bool - - name: Restore sshd reverse DNS lookups - ansible.builtin.lineinfile: - path: /etc/ssh/sshd_config - regexp: '^UseDNS' - line: "UseDNS yes" - - name: Add lines to answerfile - ansible.builtin.lineinfile: - path: /root/ovirt-engine-answers - line: "{{ item }}" - no_log: true - with_items: - - "OVESETUP_CONFIG/adminPassword=str:{{ he_admin_password }}" - - name: Add lines to answerfile - ansible.builtin.lineinfile: - path: /root/ovirt-engine-answers - line: "{{ item }}" - no_log: true - with_items: - - "OVESETUP_DB/password=str:{{ he_db_password }}" - when: he_db_password is defined - - name: Add lines to answerfile - ansible.builtin.lineinfile: - path: /root/ovirt-engine-answers - line: "{{ item }}" - no_log: true - with_items: - - "OVESETUP_DWH_DB/password=str:{{ he_dwh_db_password }}" - when: he_dwh_db_password is defined - - name: Add keycloak line to answerfile - ansible.builtin.lineinfile: - path: /root/ovirt-engine-answers - line: "{{ item }}" - with_items: - - "OVESETUP_CONFIG/keycloakEnable=bool:{{ he_enable_keycloak }}" - - name: Enable security policy - block: - - import_tasks: ../get_appliance_dist.yml - - name: Apply Security profile - block: - - name: Import OpenSCAP task - import_tasks: ../apply_openscap_profile.yml - when: he_apply_openscap_profile|bool - - name: Enable FIPS on the 
engine VM - ansible.builtin.command: >- - fips-mode-setup --enable - changed_when: true - when: he_enable_fips|bool - - name: Reboot the engine VM to apply security rules - ansible.builtin.reboot: - reboot_timeout: 1200 - - name: Check if FIPS mode is enabled - block: - - name: Check if FIPS mode is enabled - ansible.builtin.command: sysctl -n crypto.fips_enabled - changed_when: true - register: he_fips_enabled - - name: Enforce FIPS mode - ansible.builtin.fail: - msg: "FIPS mode is not enabled as required" - when: he_fips_enabled.stdout != "1" - when: he_enable_fips|bool - when: he_apply_openscap_profile|bool or he_enable_fips|bool - - name: Include before engine-setup custom tasks files for the engine VM - include_tasks: "{{ before_engine_setup_item }}" - with_fileglob: "hooks/enginevm_before_engine_setup/*.yml" - loop_control: - loop_var: before_engine_setup_item - register: include_before_engine_setup_results - - name: Pause the execution to allow the user to configure the bootstrap engine VM - block: - - name: Allow the user to connect to the bootstrap engine VM and change configuration - ansible.builtin.debug: - msg: >- - You can now connect from this host to the bootstrap engine VM using ssh as root - and the temporary IP address - {{ hostvars[he_ansible_host_name]['local_vm_ip']['stdout_lines'][0] }} - - include_tasks: ../pause_execution.yml - when: he_pause_before_engine_setup|bool - - name: Restore a backup - block: - - include_tasks: ../restore_backup.yml - when: he_restore_from_file is defined and he_restore_from_file + - name: Wait for the local VM + ansible.builtin.wait_for_connection: + delay: 5 + timeout: 3600 + - name: Add an entry for this host on /etc/hosts on the local VM + ansible.builtin.lineinfile: + dest: /etc/hosts + line: >- + {{ hostvars[he_ansible_host_name]['he_host_ip'] }} {{ hostvars[he_ansible_host_name]['he_host_address'] }} + - name: Set FQDN + ansible.builtin.command: hostnamectl set-hostname {{ he_fqdn }} + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Force the local VM FQDN to temporary resolve on the natted network address + ansible.builtin.lineinfile: + path: /etc/hosts + line: + "{{ hostvars[he_ansible_host_name]['local_vm_ip']['stdout_lines'][0] }} {{ he_fqdn }} # hosted-engine-setup-{{ \ + hostvars[he_ansible_host_name]['he_local_vm_dir'] }}" + - name: Reconfigure IPv6 default gateway + ansible.builtin.command: ip -6 route add default via "{{ he_ipv6_subnet_prefix + '::1' }}" + environment: "{{ he_cmd_lang }}" + changed_when: true + when: hostvars[he_ansible_host_name]['ipv6_deployment']|bool + - name: Restore sshd reverse DNS lookups + ansible.builtin.lineinfile: + path: /etc/ssh/sshd_config + regexp: '^UseDNS' + line: "UseDNS yes" + - name: Add lines to answerfile + ansible.builtin.lineinfile: + path: /root/ovirt-engine-answers + line: "{{ item }}" + no_log: true + with_items: + - "OVESETUP_CONFIG/adminPassword=str:{{ he_admin_password }}" + - name: Add lines to answerfile + ansible.builtin.lineinfile: + path: /root/ovirt-engine-answers + line: "{{ item }}" + no_log: true + with_items: + - "OVESETUP_DB/password=str:{{ he_db_password }}" + when: he_db_password is defined + - name: Add lines to answerfile + ansible.builtin.lineinfile: + path: /root/ovirt-engine-answers + line: "{{ item }}" + no_log: true + with_items: + - "OVESETUP_DWH_DB/password=str:{{ he_dwh_db_password }}" + when: he_dwh_db_password is defined + - name: Add keycloak line to answerfile + ansible.builtin.lineinfile: + path: /root/ovirt-engine-answers + line: 
"{{ item }}" + with_items: + - "OVESETUP_CONFIG/keycloakEnable=bool:{{ he_enable_keycloak }}" + - name: Enable security policy + block: + - import_tasks: ../get_appliance_dist.yml + - name: Apply Security profile + block: + - name: Import OpenSCAP task + import_tasks: ../apply_openscap_profile.yml + when: he_apply_openscap_profile|bool + - name: Enable FIPS on the engine VM + ansible.builtin.command: >- + fips-mode-setup --enable + changed_when: true + when: he_enable_fips|bool + - name: Reboot the engine VM to apply security rules + ansible.builtin.reboot: + reboot_timeout: 1200 + - name: Check if FIPS mode is enabled + block: + - name: Check if FIPS mode is enabled + ansible.builtin.command: sysctl -n crypto.fips_enabled + changed_when: true + register: he_fips_enabled + - name: Enforce FIPS mode + ansible.builtin.fail: + msg: "FIPS mode is not enabled as required" + when: he_fips_enabled.stdout != "1" + when: he_enable_fips|bool + when: he_apply_openscap_profile|bool or he_enable_fips|bool + - name: Include before engine-setup custom tasks files for the engine VM + include_tasks: "{{ before_engine_setup_item }}" + with_fileglob: "hooks/enginevm_before_engine_setup/*.yml" + loop_control: + loop_var: before_engine_setup_item + register: include_before_engine_setup_results + - name: Pause the execution to allow the user to configure the bootstrap engine VM + block: + - name: Allow the user to connect to the bootstrap engine VM and change configuration + ansible.builtin.debug: + msg: >- + You can now connect from this host to the bootstrap engine VM using ssh as root + and the temporary IP address - {{ hostvars[he_ansible_host_name]['local_vm_ip']['stdout_lines'][0] }} + - include_tasks: ../pause_execution.yml + when: he_pause_before_engine_setup|bool + - name: Restore a backup + block: + - include_tasks: ../restore_backup.yml + when: he_restore_from_file is defined and he_restore_from_file rescue: - name: Sync on engine machine ansible.builtin.command: sync diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/04_engine_final_tasks.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/04_engine_final_tasks.yml index 882d1db7c..e447905e0 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/04_engine_final_tasks.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/04_engine_final_tasks.yml @@ -1,68 +1,68 @@ --- - name: Final engine tasks block: - - name: Include after engine-setup custom tasks files for the engine VM - include_tasks: "{{ after_engine_setup_item }}" - with_fileglob: "hooks/enginevm_after_engine_setup/*.yml" - loop_control: - loop_var: after_engine_setup_item - register: include_after_engine_setup_results - # After a restart the engine has a 5 minute grace time, - # other actions like electing a new SPM host or reconstructing - # the master storage domain could require more time - - name: Wait for the engine to reach a stable condition - ansible.builtin.wait_for: timeout=600 - when: he_restore_from_file is defined and he_restore_from_file - - name: Configure LibgfApi support - ansible.builtin.command: engine-config -s LibgfApiSupported=true --cver=4.2 - environment: "{{ he_cmd_lang }}" - register: libgfapi_support_out - changed_when: true - when: he_enable_libgfapi|bool - - name: Save original OvfUpdateIntervalInMinutes - ansible.builtin.shell: "engine-config -g OvfUpdateIntervalInMinutes | cut -d' ' -f2 > 
/root/OvfUpdateIntervalInMinutes.txt" - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Set OVF update interval to 1 minute - ansible.builtin.command: engine-config -s OvfUpdateIntervalInMinutes=1 - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Allow the webadmin UI to be accessed over the first host - block: - - name: Saving original value - ansible.builtin.replace: - path: /etc/ovirt-engine/engine.conf.d/11-setup-sso.conf - regexp: '^(SSO_ALTERNATE_ENGINE_FQDNS=.*)' - replace: '#\1 # pre hosted-engine-setup' - - name: Adding new SSO_ALTERNATE_ENGINE_FQDNS line - ansible.builtin.lineinfile: - path: /etc/ovirt-engine/engine.conf.d/11-setup-sso.conf - line: 'SSO_ALTERNATE_ENGINE_FQDNS="{{ he_host_address }}" # hosted-engine-setup' - - name: Restart ovirt-engine service for changed OVF Update configuration and LibgfApi support - ansible.builtin.systemd: - state: restarted - name: ovirt-engine - register: restart_out - - name: Mask cloud-init services to speed up future boot - ansible.builtin.systemd: - masked: true - name: "{{ item }}" - with_items: - - cloud-init-local - - cloud-init - - name: Check if keycloak is configured - ansible.builtin.command: otopi-config-query query -k OVESETUP_CONFIG/keycloakEnable -f /etc/ovirt-engine-setup.conf - register: keycloak_configured_out - ignore_errors: true - changed_when: false - - name: Set admin username - ansible.builtin.set_fact: - he_admin_username: >- - {{ 'admin@ovirt@internalsso' - if keycloak_configured_out.rc == 0 and keycloak_configured_out.stdout_lines[0] == 'True' - else 'admin@internal' - }} - register: otopi_he_admin_username + - name: Include after engine-setup custom tasks files for the engine VM + include_tasks: "{{ after_engine_setup_item }}" + with_fileglob: "hooks/enginevm_after_engine_setup/*.yml" + loop_control: + loop_var: after_engine_setup_item + register: include_after_engine_setup_results + # After a restart the engine has a 5 minute grace time, + # other actions like electing a new SPM host or reconstructing + # the master storage domain could require more time + - name: Wait for the engine to reach a stable condition + ansible.builtin.wait_for: timeout=600 + when: he_restore_from_file is defined and he_restore_from_file + - name: Configure LibgfApi support + ansible.builtin.command: engine-config -s LibgfApiSupported=true --cver=4.2 + environment: "{{ he_cmd_lang }}" + register: libgfapi_support_out + changed_when: true + when: he_enable_libgfapi|bool + - name: Save original OvfUpdateIntervalInMinutes + ansible.builtin.shell: "engine-config -g OvfUpdateIntervalInMinutes | cut -d' ' -f2 > /root/OvfUpdateIntervalInMinutes.txt" + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Set OVF update interval to 1 minute + ansible.builtin.command: engine-config -s OvfUpdateIntervalInMinutes=1 + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Allow the webadmin UI to be accessed over the first host + block: + - name: Saving original value + ansible.builtin.replace: + path: /etc/ovirt-engine/engine.conf.d/11-setup-sso.conf + regexp: '^(SSO_ALTERNATE_ENGINE_FQDNS=.*)' + replace: '#\1 # pre hosted-engine-setup' + - name: Adding new SSO_ALTERNATE_ENGINE_FQDNS line + ansible.builtin.lineinfile: + path: /etc/ovirt-engine/engine.conf.d/11-setup-sso.conf + line: 'SSO_ALTERNATE_ENGINE_FQDNS="{{ he_host_address }}" # hosted-engine-setup' + - name: Restart ovirt-engine service for changed OVF Update configuration and LibgfApi support + ansible.builtin.systemd: + state: restarted 
+ name: ovirt-engine + register: restart_out + - name: Mask cloud-init services to speed up future boot + ansible.builtin.systemd: + masked: true + name: "{{ item }}" + with_items: + - cloud-init-local + - cloud-init + - name: Check if keycloak is configured + ansible.builtin.command: otopi-config-query query -k OVESETUP_CONFIG/keycloakEnable -f /etc/ovirt-engine-setup.conf + register: keycloak_configured_out + ignore_errors: true + changed_when: false + - name: Set admin username + ansible.builtin.set_fact: + he_admin_username: >- + {{ 'admin@ovirt@internalsso' + if keycloak_configured_out.rc == 0 and keycloak_configured_out.stdout_lines[0] == 'True' + else 'admin@internal' + }} + register: otopi_he_admin_username rescue: - name: Sync on engine machine ansible.builtin.command: sync diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml index 21b0ef03e..67a3038f9 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml @@ -1,263 +1,263 @@ --- - name: Add host block: - - name: Wait for ovirt-engine service to start - ansible.builtin.uri: - url: http://{{ he_fqdn }}/ovirt-engine/services/health - return_content: true - register: engine_status - until: "'DB Up!Welcome to Health Status!' in engine_status.content" - retries: 30 - delay: 20 - - name: Open a port on firewalld - ansible.builtin.command: firewall-cmd --zone=public --add-port {{ he_webui_forward_port }}/tcp - changed_when: true - - name: Expose engine VM webui over a local port via ssh port forwarding - ansible.builtin.command: >- - sshpass -e ssh -tt -o ServerAliveInterval=5 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -g -L - {{ he_webui_forward_port }}:{{ he_fqdn }}:443 {{ he_fqdn }} - environment: - "{{ he_cmd_lang | combine( { 'SSHPASS': he_appliance_password } ) }}" - changed_when: true - async: 86400 - poll: 0 - register: sshpf - - name: Evaluate temporary bootstrap engine VM URL - ansible.builtin.set_fact: bootstrap_engine_url="https://{{ he_host_address }}:{{ he_webui_forward_port }}/ovirt-engine/" - - name: Display the temporary bootstrap engine VM URL - ansible.builtin.debug: - msg: >- - The bootstrap engine is temporarily accessible via {{ bootstrap_engine_url }} - - name: Detect VLAN ID - ansible.builtin.shell: ip -d link show {{ he_bridge_if }} | grep 'vlan ' | grep -Po 'id \K[\d]+' | cat - environment: "{{ he_cmd_lang }}" - register: vlan_id_out - changed_when: true - - name: Set Engine public key as authorized key without validating the TLS/SSL certificates - authorized_key: - user: root - state: present - key: https://{{ he_fqdn }}/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY - validate_certs: false - - include_tasks: ../auth_sso.yml - - name: Ensure that the target datacenter is present - ovirt_datacenter: - state: present - name: "{{ he_data_center }}" - compatibility_version: "{{ he_data_center_comp_version | default(omit) }}" - wait: true - local: false - auth: "{{ ovirt_auth }}" - register: dc_result_presence - - name: Ensure that the target cluster is present in the target datacenter - ovirt_cluster: - state: present - name: "{{ he_cluster }}" - compatibility_version: "{{ he_cluster_comp_version | default(omit) }}" - data_center: "{{ 
he_data_center }}" - cpu_type: "{{ he_cluster_cpu_type | default(omit) }}" - wait: true - auth: "{{ ovirt_auth }}" - register: cluster_result_presence - - name: Check actual cluster location - ansible.builtin.fail: - msg: >- - A cluster named '{{ he_cluster }}' has been created earlier in a different - datacenter and cluster moving is still not supported. - You can avoid this specifying a different cluster name; - please fix accordingly and try again. - when: cluster_result_presence.cluster.data_center.id != dc_result_presence.datacenter.id - - name: Enable GlusterFS at cluster level - ovirt_cluster: - data_center: "{{ he_data_center }}" - name: "{{ he_cluster }}" - compatibility_version: "{{ he_cluster_comp_version | default(omit) }}" - auth: "{{ ovirt_auth }}" - virt: true - gluster: true - fence_skip_if_gluster_bricks_up: true - fence_skip_if_gluster_quorum_not_met: true - when: he_enable_hc_gluster_service is defined and he_enable_hc_gluster_service - - name: Set VLAN ID at datacenter level - ovirt_network: - data_center: "{{ he_data_center }}" - name: "{{ he_mgmt_network }}" - vlan_tag: "{{ vlan_id_out.stdout }}" - auth: "{{ ovirt_auth }}" - when: vlan_id_out.stdout|length > 0 - - name: Get active list of active firewalld zones - ansible.builtin.shell: set -euo pipefail && firewall-cmd --get-active-zones | grep -v "^\s*interfaces" - environment: "{{ he_cmd_lang }}" - register: active_f_zone - changed_when: true - - name: Configure libvirt firewalld zone - ansible.builtin.command: firewall-cmd --zone=libvirt --permanent --add-service={{ service_item }} - with_items: - - vdsm - - libvirt-tls - - ovirt-imageio - - ovirt-vmconsole - - ssh - loop_control: - loop_var: service_item - when: "'libvirt' in active_f_zone.stdout_lines" - - name: Reload firewall-cmd - ansible.builtin.command: firewall-cmd --reload - changed_when: true - - name: Add host - ovirt_host: - cluster: "{{ he_cluster }}" - name: "{{ he_host_name }}" - state: present - public_key: true - address: "{{ he_host_address }}" - auth: "{{ ovirt_auth }}" - reboot_after_installation: false - async: 1 - poll: 0 - - name: Include after_add_host tasks files - include_tasks: "{{ after_add_host_item }}" - with_fileglob: "hooks/after_add_host/*.yml" - loop_control: - loop_var: after_add_host_item - register: include_after_add_host_results - - name: Pause the execution to let the user interactively reconfigure the host - block: - - name: Let the user connect to the bootstrap engine VM to manually fix host configuration - ansible.builtin.debug: - msg: >- - You can now connect to {{ bootstrap_engine_url }} and check the status of this host and - eventually remediate it, please continue only when the host is listed as 'up' - - include_tasks: ../pause_execution.yml - when: he_pause_host|bool - # refresh the auth token after a long operation to avoid having it expired - - include_tasks: ../auth_revoke.yml - - include_tasks: ../auth_sso.yml - - name: Wait for the host to be up - ovirt_host_info: - pattern: name={{ he_host_name }} - auth: "{{ ovirt_auth }}" - register: host_result_up_check - until: >- - host_result_up_check is succeeded and - host_result_up_check.ovirt_hosts|length >= 1 and - ( - host_result_up_check.ovirt_hosts[0].status == 'up' or - host_result_up_check.ovirt_hosts[0].status == 'non_operational' - ) - retries: 120 - delay: 10 - ignore_errors: true - - name: Notify the user about a failure - ansible.builtin.fail: - msg: >- - Host is not up, please check logs, perhaps also on the engine machine - when: host_result_up_check is 
failed + - name: Wait for ovirt-engine service to start + ansible.builtin.uri: + url: http://{{ he_fqdn }}/ovirt-engine/services/health + return_content: true + register: engine_status + until: "'DB Up!Welcome to Health Status!' in engine_status.content" + retries: 30 + delay: 20 + - name: Open a port on firewalld + ansible.builtin.command: firewall-cmd --zone=public --add-port {{ he_webui_forward_port }}/tcp + changed_when: true + - name: Expose engine VM webui over a local port via ssh port forwarding + ansible.builtin.command: >- + sshpass -e ssh -tt -o ServerAliveInterval=5 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -g -L + {{ he_webui_forward_port }}:{{ he_fqdn }}:443 {{ he_fqdn }} + environment: + "{{ he_cmd_lang | combine( { 'SSHPASS': he_appliance_password } ) }}" + changed_when: true + async: 86400 + poll: 0 + register: sshpf + - name: Evaluate temporary bootstrap engine VM URL + ansible.builtin.set_fact: bootstrap_engine_url="https://{{ he_host_address }}:{{ he_webui_forward_port }}/ovirt-engine/" + - name: Display the temporary bootstrap engine VM URL + ansible.builtin.debug: + msg: >- + The bootstrap engine is temporarily accessible via {{ bootstrap_engine_url }} + - name: Detect VLAN ID + ansible.builtin.shell: ip -d link show {{ he_bridge_if }} | grep 'vlan ' | grep -Po 'id \K[\d]+' | cat + environment: "{{ he_cmd_lang }}" + register: vlan_id_out + changed_when: true + - name: Set Engine public key as authorized key without validating the TLS/SSL certificates + ansible.posix.authorized_key: + user: root + state: present + key: https://{{ he_fqdn }}/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY + validate_certs: false + - include_tasks: ../auth_sso.yml + - name: Ensure that the target datacenter is present + ovirt_datacenter: + state: present + name: "{{ he_data_center }}" + compatibility_version: "{{ he_data_center_comp_version | default(omit) }}" + wait: true + local: false + auth: "{{ ovirt_auth }}" + register: dc_result_presence + - name: Ensure that the target cluster is present in the target datacenter + ovirt_cluster: + state: present + name: "{{ he_cluster }}" + compatibility_version: "{{ he_cluster_comp_version | default(omit) }}" + data_center: "{{ he_data_center }}" + cpu_type: "{{ he_cluster_cpu_type | default(omit) }}" + wait: true + auth: "{{ ovirt_auth }}" + register: cluster_result_presence + - name: Check actual cluster location + ansible.builtin.fail: + msg: >- + A cluster named '{{ he_cluster }}' has been created earlier in a different + datacenter and cluster moving is still not supported. + You can avoid this specifying a different cluster name; + please fix accordingly and try again. 
+ when: cluster_result_presence.cluster.data_center.id != dc_result_presence.datacenter.id + - name: Enable GlusterFS at cluster level + ovirt_cluster: + data_center: "{{ he_data_center }}" + name: "{{ he_cluster }}" + compatibility_version: "{{ he_cluster_comp_version | default(omit) }}" + auth: "{{ ovirt_auth }}" + virt: true + gluster: true + fence_skip_if_gluster_bricks_up: true + fence_skip_if_gluster_quorum_not_met: true + when: he_enable_hc_gluster_service is defined and he_enable_hc_gluster_service + - name: Set VLAN ID at datacenter level + ovirt_network: + data_center: "{{ he_data_center }}" + name: "{{ he_mgmt_network }}" + vlan_tag: "{{ vlan_id_out.stdout }}" + auth: "{{ ovirt_auth }}" + when: vlan_id_out.stdout|length > 0 + - name: Get active list of active firewalld zones + ansible.builtin.shell: set -euo pipefail && firewall-cmd --get-active-zones | grep -v "^\s*interfaces" + environment: "{{ he_cmd_lang }}" + register: active_f_zone + changed_when: true + - name: Configure libvirt firewalld zone + ansible.builtin.command: firewall-cmd --zone=libvirt --permanent --add-service={{ service_item }} + with_items: + - vdsm + - libvirt-tls + - ovirt-imageio + - ovirt-vmconsole + - ssh + loop_control: + loop_var: service_item + when: "'libvirt' in active_f_zone.stdout_lines" + - name: Reload firewall-cmd + ansible.builtin.command: firewall-cmd --reload + changed_when: true + - name: Add host + ovirt_host: + cluster: "{{ he_cluster }}" + name: "{{ he_host_name }}" + state: present + public_key: true + address: "{{ he_host_address }}" + auth: "{{ ovirt_auth }}" + reboot_after_installation: false + async: 1 + poll: 0 + - name: Include after_add_host tasks files + include_tasks: "{{ after_add_host_item }}" + with_fileglob: "hooks/after_add_host/*.yml" + loop_control: + loop_var: after_add_host_item + register: include_after_add_host_results + - name: Pause the execution to let the user interactively reconfigure the host + block: + - name: Let the user connect to the bootstrap engine VM to manually fix host configuration + ansible.builtin.debug: + msg: >- + You can now connect to {{ bootstrap_engine_url }} and check the status of this host and + eventually remediate it, please continue only when the host is listed as 'up' + - include_tasks: ../pause_execution.yml + when: he_pause_host|bool + # refresh the auth token after a long operation to avoid having it expired + - include_tasks: ../auth_revoke.yml + - include_tasks: ../auth_sso.yml + - name: Wait for the host to be up + ovirt_host_info: + pattern: name={{ he_host_name }} + auth: "{{ ovirt_auth }}" + register: host_result_up_check + until: >- + host_result_up_check is succeeded and + host_result_up_check.ovirt_hosts|length >= 1 and + ( + host_result_up_check.ovirt_hosts[0].status == 'up' or + host_result_up_check.ovirt_hosts[0].status == 'non_operational' + ) + retries: 120 + delay: 10 + ignore_errors: true + - name: Notify the user about a failure + ansible.builtin.fail: + msg: >- + Host is not up, please check logs, perhaps also on the engine machine + when: host_result_up_check is failed - - name: Emit error messages about the failure - block: - - name: Set host_id - ansible.builtin.set_fact: host_id={{ host_result_up_check.ovirt_hosts[0].id }} - - name: Collect error events from the Engine - ovirt_event_info: - auth: "{{ ovirt_auth }}" - search: "severity>=warning" - register: error_events + - name: Emit error messages about the failure + block: + - name: Set host_id + ansible.builtin.set_fact: host_id={{ 
host_result_up_check.ovirt_hosts[0].id }} + - name: Collect error events from the Engine + ovirt_event_info: + auth: "{{ ovirt_auth }}" + search: "severity>=warning" + register: error_events - - name: Generate the error message from the engine events - ansible.builtin.set_fact: - error_description: >- - {% for event in error_events.ovirt_events | groupby('code') %} - {% if 'host' in event[1][0] and 'id' in event[1][0].host and event[1][0].host.id == host_id %} - code {{ event[0] }}: {{ event[1][0].description }}, - {% endif %} - {% endfor %} - ignore_errors: true + - name: Generate the error message from the engine events + ansible.builtin.set_fact: + error_description: >- + {% for event in error_events.ovirt_events | groupby('code') %} + {% if 'host' in event[1][0] and 'id' in event[1][0].host and event[1][0].host.id == host_id %} + code {{ event[0] }}: {{ event[1][0].description }}, + {% endif %} + {% endfor %} + ignore_errors: true - - name: Notify with error description - ansible.builtin.debug: - msg: >- - The host has been set in non_operational status, - deployment errors: {{ error_description }} - when: error_description is defined + - name: Notify with error description + ansible.builtin.debug: + msg: >- + The host has been set in non_operational status, + deployment errors: {{ error_description }} + when: error_description is defined - - name: Notify with generic error - ansible.builtin.debug: - msg: >- - The host has been set in non_operational status, - please check engine logs, - more info can be found in the engine logs. - when: error_description is not defined - when: >- - host_result_up_check is succeeded and - host_result_up_check.ovirt_hosts|length >= 1 and - host_result_up_check.ovirt_hosts[0].status == 'non_operational' + - name: Notify with generic error + ansible.builtin.debug: + msg: >- + The host has been set in non_operational status, + please check engine logs, + more info can be found in the engine logs. 
+ when: error_description is not defined + when: >- + host_result_up_check is succeeded and + host_result_up_check.ovirt_hosts|length >= 1 and + host_result_up_check.ovirt_hosts[0].status == 'non_operational' - - name: Pause the execution to let the user interactively reconfigure the host - block: - - name: Let the user connect to the bootstrap engine to manually fix host configuration - ansible.builtin.debug: - msg: >- - You can now connect to {{ bootstrap_engine_url }} and check the status of this host and - eventually remediate it, please continue only when the host is listed as 'up' - - include_tasks: ../pause_execution.yml - when: >- - he_pause_after_failed_add_host|bool and - host_result_up_check is succeeded and - host_result_up_check.ovirt_hosts|length >= 1 and - host_result_up_check.ovirt_hosts[0].status == 'non_operational' + - name: Pause the execution to let the user interactively reconfigure the host + block: + - name: Let the user connect to the bootstrap engine to manually fix host configuration + ansible.builtin.debug: + msg: >- + You can now connect to {{ bootstrap_engine_url }} and check the status of this host and + eventually remediate it, please continue only when the host is listed as 'up' + - include_tasks: ../pause_execution.yml + when: >- + he_pause_after_failed_add_host|bool and + host_result_up_check is succeeded and + host_result_up_check.ovirt_hosts|length >= 1 and + host_result_up_check.ovirt_hosts[0].status == 'non_operational' - # refresh the auth token after a long operation to avoid having it expired - - include_tasks: ../auth_revoke.yml - - include_tasks: ../auth_sso.yml - - name: Check if the host is up - ovirt_host_info: - pattern: name={{ he_host_name }} - auth: "{{ ovirt_auth }}" - register: host_result_up_check - ignore_errors: true + # refresh the auth token after a long operation to avoid having it expired + - include_tasks: ../auth_revoke.yml + - include_tasks: ../auth_sso.yml + - name: Check if the host is up + ovirt_host_info: + pattern: name={{ he_host_name }} + auth: "{{ ovirt_auth }}" + register: host_result_up_check + ignore_errors: true - - name: Handle deployment failure - block: - - name: Set host_id - ansible.builtin.set_fact: host_id={{ host_result_up_check.ovirt_hosts[0].id }} - - name: Collect error events from the Engine - ovirt_event_info: - auth: "{{ ovirt_auth }}" - search: "severity>=warning" - register: error_events + - name: Handle deployment failure + block: + - name: Set host_id + ansible.builtin.set_fact: host_id={{ host_result_up_check.ovirt_hosts[0].id }} + - name: Collect error events from the Engine + ovirt_event_info: + auth: "{{ ovirt_auth }}" + search: "severity>=warning" + register: error_events - - name: Generate the error message from the engine events - ansible.builtin.set_fact: - error_description: >- - {% for event in error_events.ovirt_events | groupby('code') %} - {% if event[1][0].host.id == host_id %} - code {{ event[0] }}: {{ event[1][0].description }}, - {% endif %} - {% endfor %} - ignore_errors: true + - name: Generate the error message from the engine events + ansible.builtin.set_fact: + error_description: >- + {% for event in error_events.ovirt_events | groupby('code') %} + {% if event[1][0].host.id == host_id %} + code {{ event[0] }}: {{ event[1][0].description }}, + {% endif %} + {% endfor %} + ignore_errors: true - - name: Fail with error description - ansible.builtin.fail: - msg: >- - The host has been set in non_operational status, - deployment errors: {{ error_description }} - fix accordingly and 
re-deploy. - when: error_description is defined + - name: Fail with error description + ansible.builtin.fail: + msg: >- + The host has been set in non_operational status, + deployment errors: {{ error_description }} + fix accordingly and re-deploy. + when: error_description is defined - - name: Fail with generic error - ansible.builtin.fail: - msg: >- - The host has been set in non_operational status, - please check engine logs, - more info can be found in the engine logs, - fix accordingly and re-deploy. - when: error_description is not defined + - name: Fail with generic error + ansible.builtin.fail: + msg: >- + The host has been set in non_operational status, + please check engine logs, + more info can be found in the engine logs, + fix accordingly and re-deploy. + when: error_description is not defined - when: >- - host_result_up_check is succeeded and - host_result_up_check.ovirt_hosts|length >= 1 and - host_result_up_check.ovirt_hosts[0].status == 'non_operational' + when: >- + host_result_up_check is succeeded and + host_result_up_check.ovirt_hosts|length >= 1 and + host_result_up_check.ovirt_hosts[0].status == 'non_operational' rescue: - name: Sync on engine machine ansible.builtin.command: sync diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml index adf6d4196..8588fdae6 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml @@ -1,28 +1,28 @@ --- - name: Clean storage-pool block: - - name: Destroy local storage-pool {{ he_local_vm_dir | basename }} - ansible.builtin.command: >- - virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} - pool-destroy {{ he_local_vm_dir | basename }} - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Undefine local storage-pool {{ he_local_vm_dir | basename }} - ansible.builtin.command: >- - virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} - pool-undefine {{ he_local_vm_dir | basename }} - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }} - ansible.builtin.command: >- - virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} - pool-destroy {{ local_vm_disk_path.split('/')[5] }} - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }} - ansible.builtin.command: >- - virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} - pool-undefine {{ local_vm_disk_path.split('/')[5] }} - environment: "{{ he_cmd_lang }}" - changed_when: true + - name: Destroy local storage-pool {{ he_local_vm_dir | basename }} + ansible.builtin.command: >- + virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} + pool-destroy {{ he_local_vm_dir | basename }} + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Undefine local storage-pool {{ he_local_vm_dir | basename }} + ansible.builtin.command: >- + virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} + pool-undefine {{ he_local_vm_dir | basename }} + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }} + ansible.builtin.command: >- + virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} + pool-destroy {{ local_vm_disk_path.split('/')[5] 
}} + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }} + ansible.builtin.command: >- + virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} + pool-undefine {{ local_vm_disk_path.split('/')[5] }} + environment: "{{ he_cmd_lang }}" + changed_when: true ignore_errors: true diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_storage_domain.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_storage_domain.yml index 5e7c510f0..23c067f95 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_storage_domain.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_storage_domain.yml @@ -1,191 +1,191 @@ --- - name: Create hosted engine local vm block: - - name: Wait for the storage interface to be up - ansible.builtin.command: ip -j link show '{{ he_storage_if }}' - register: storage_if_result_up_check - until: >- - storage_if_result_up_check.stdout|from_json|map(attribute='operstate')|join('') == 'UP' - retries: 120 - delay: 5 - delegate_to: "{{ he_ansible_host_name }}" - when: (he_domain_type == "glusterfs" or he_domain_type == "nfs") and he_storage_if is not none - - name: Check local VM dir stat - ansible.builtin.stat: - path: "{{ he_local_vm_dir }}" - register: local_vm_dir_stat - - name: Enforce local VM dir existence - ansible.builtin.fail: - msg: "Local VM dir '{{ he_local_vm_dir }}' doesn't exist" - when: not local_vm_dir_stat.stat.exists - - include_tasks: auth_sso.yml - - name: Fetch host facts - ovirt_host_info: - pattern: name={{ he_host_name }} - auth: "{{ ovirt_auth }}" - register: host_result - until: >- - host_result and 'ovirt_hosts' in host_result - and host_result.ovirt_hosts|length >= 1 and - 'up' in host_result.ovirt_hosts[0].status - retries: 50 - delay: 10 - - name: Fetch cluster ID - ansible.builtin.set_fact: cluster_id="{{ host_result.ovirt_hosts[0].cluster.id }}" - - name: Fetch cluster facts - ovirt_cluster_info: - auth: "{{ ovirt_auth }}" - register: cluster_facts - - name: Fetch Datacenter facts - ovirt_datacenter_info: - auth: "{{ ovirt_auth }}" - register: datacenter_facts - - name: Fetch Datacenter ID - ansible.builtin.set_fact: >- - datacenter_id={{ cluster_facts.ovirt_clusters|ovirt.ovirt.json_query("[?id=='" + cluster_id + "'].data_center.id")|first }} - - name: Fetch Datacenter name - ansible.builtin.set_fact: >- - datacenter_name={{ datacenter_facts.ovirt_datacenters|ovirt.ovirt.json_query("[?id=='" + datacenter_id + "'].name")|first }} - - name: Fetch cluster name - ansible.builtin.set_fact: >- - cluster_name={{ cluster_facts.ovirt_clusters|ovirt.ovirt.json_query("[?id=='" + cluster_id + "'].name")|first }} - - name: Fetch cluster version - ansible.builtin.set_fact: >- - cluster_version={{ cluster_facts.ovirt_clusters|ovirt.ovirt.json_query("[?id=='" + cluster_id + "'].version")|first }} - - name: Enforce cluster major version - ansible.builtin.fail: - msg: "Cluster {{ cluster_name }} major version is {{ cluster_version.major }}, needs to be at least 4" - when: cluster_version.major < 4 - - name: Enforce cluster minor version - ansible.builtin.fail: - msg: "Cluster {{ cluster_name }} minor version is {{ cluster_version.minor }}, needs to be at least 2" - when: cluster_version.minor < 2 - - name: Set storage_format - ansible.builtin.set_fact: >- - storage_format={{ 'v4' if cluster_version.minor == 2 else 'v5' }} - - name: Add NFS storage domain - ovirt_storage_domain: 
- state: unattached - name: "{{ he_storage_domain_name }}" - host: "{{ he_host_name }}" - data_center: "{{ datacenter_name }}" - storage_format: "{{ storage_format }}" - wait: true - nfs: - address: "{{ he_storage_domain_addr }}" - path: "{{ he_storage_domain_path }}" - mount_options: "{{ he_mount_options }}" - version: "{{ he_nfs_version }}" - auth: "{{ ovirt_auth }}" - when: he_domain_type == "nfs" - register: otopi_storage_domain_details_nfs - - name: Add glusterfs storage domain - ovirt_storage_domain: - state: unattached - name: "{{ he_storage_domain_name }}" - host: "{{ he_host_name }}" - data_center: "{{ datacenter_name }}" - storage_format: "{{ storage_format }}" - wait: true - glusterfs: - address: "{{ he_storage_domain_addr }}" - path: "{{ he_storage_domain_path }}" - mount_options: "{{ he_mount_options }}" - auth: "{{ ovirt_auth }}" - when: he_domain_type == "glusterfs" - register: otopi_storage_domain_details_gluster - - name: Add iSCSI storage domain - ovirt_storage_domain: - state: unattached - name: "{{ he_storage_domain_name }}" - host: "{{ he_host_name }}" - data_center: "{{ datacenter_name }}" - storage_format: "{{ storage_format }}" - wait: true - discard_after_delete: "{{ he_discard }}" - # we are sending a single iSCSI path but, not sure if intended or if - # it's bug, the engine is implicitly creating the storage domain - # consuming all the path that are already connected on the host (we - # cannot logout since there is not logout command in the rest API, see - # https://bugzilla.redhat.com/show_bug.cgi?id=1535951 ). - iscsi: - address: "{{ he_storage_domain_addr.split(',')|first }}" - port: "{{ he_iscsi_portal_port.split(',')|first if he_iscsi_portal_port is string else he_iscsi_portal_port }}" - target: "{{ he_iscsi_target }}" - lun_id: "{{ he_lun_id }}" - username: "{{ he_iscsi_username }}" - password: "{{ he_iscsi_password }}" - auth: "{{ ovirt_auth }}" - when: he_domain_type == "iscsi" - register: otopi_storage_domain_details_iscsi - - name: Add Fibre Channel storage domain - ovirt_storage_domain: - state: unattached - name: "{{ he_storage_domain_name }}" - host: "{{ he_host_name }}" - data_center: "{{ datacenter_name }}" - storage_format: "{{ storage_format }}" - wait: true - discard_after_delete: "{{ he_discard }}" - fcp: - lun_id: "{{ he_lun_id }}" - auth: "{{ ovirt_auth }}" - register: otopi_storage_domain_details_fc - when: he_domain_type == "fc" - - name: Get storage domain details - ovirt_storage_domain_info: - pattern: name={{ he_storage_domain_name }} - auth: "{{ ovirt_auth }}" - register: storage_domain_details - - name: Find the appliance OVF - ansible.builtin.find: - paths: "{{ he_local_vm_dir }}/master" - recurse: true - patterns: ^.*.(?<!meta).ovf$ - use_regex: true - register: app_ovf - - name: Get ovf data - ansible.builtin.command: cat "{{ app_ovf.files[0].path }}" - register: ovf_data - changed_when: false - - name: Get disk size from ovf data - ansible.builtin.set_fact: - disk_size: "{{ ovf_data['stdout'] | ovirt.ovirt.get_ovf_disk_size }}" - - name: Get required size - ansible.builtin.set_fact: - required_size: >- - {{ disk_size|int * 1024 * 1024 * 1024 + - storage_domain_details.ovirt_storage_domains[0].critical_space_action_blocker|int * - 1024 * 1024 * 1024 + 5 * 1024 * 1024 * 1024 }} - # +5G: 2xOVF_STORE, lockspace, metadata, configuration - - name: Remove unsuitable storage domain - ovirt_storage_domain: - host: "{{ he_host_name }}" - data_center: "{{ datacenter_name }}" - name: "{{ he_storage_domain_name }}" - wait: true - state: absent 
- destroy: true - auth: "{{ ovirt_auth }}" - when: storage_domain_details.ovirt_storage_domains[0].available|int < required_size|int - register: remove_storage_domain_details - - name: Check storage domain free space - ansible.builtin.fail: - msg: >- - Error: the target storage domain contains only - {{ storage_domain_details.ovirt_storage_domains[0].available|int / 1024 / 1024 / 1024 }}GiB of - available space while a minimum of {{ required_size|int / 1024 / 1024 / 1024 }}GiB is required - If you wish to use the current target storage domain by extending it, make sure it contains nothing - before adding it. - when: storage_domain_details.ovirt_storage_domains[0].available|int < required_size|int - - name: Activate storage domain - ovirt_storage_domain: - host: "{{ he_host_name }}" - data_center: "{{ datacenter_name }}" - name: "{{ he_storage_domain_name }}" - wait: true - state: present - auth: "{{ ovirt_auth }}" - when: storage_domain_details.ovirt_storage_domains[0].available|int >= required_size|int - register: otopi_storage_domain_details + - name: Wait for the storage interface to be up + ansible.builtin.command: ip -j link show '{{ he_storage_if }}' + register: storage_if_result_up_check + until: >- + storage_if_result_up_check.stdout|from_json|map(attribute='operstate')|join('') == 'UP' + retries: 120 + delay: 5 + delegate_to: "{{ he_ansible_host_name }}" + when: (he_domain_type == "glusterfs" or he_domain_type == "nfs") and he_storage_if is not none + - name: Check local VM dir stat + ansible.builtin.stat: + path: "{{ he_local_vm_dir }}" + register: local_vm_dir_stat + - name: Enforce local VM dir existence + ansible.builtin.fail: + msg: "Local VM dir '{{ he_local_vm_dir }}' doesn't exist" + when: not local_vm_dir_stat.stat.exists + - include_tasks: auth_sso.yml + - name: Fetch host facts + ovirt_host_info: + pattern: name={{ he_host_name }} + auth: "{{ ovirt_auth }}" + register: host_result + until: >- + host_result and 'ovirt_hosts' in host_result + and host_result.ovirt_hosts|length >= 1 and + 'up' in host_result.ovirt_hosts[0].status + retries: 50 + delay: 10 + - name: Fetch cluster ID + ansible.builtin.set_fact: cluster_id="{{ host_result.ovirt_hosts[0].cluster.id }}" + - name: Fetch cluster facts + ovirt_cluster_info: + auth: "{{ ovirt_auth }}" + register: cluster_facts + - name: Fetch Datacenter facts + ovirt_datacenter_info: + auth: "{{ ovirt_auth }}" + register: datacenter_facts + - name: Fetch Datacenter ID + ansible.builtin.set_fact: >- + datacenter_id={{ cluster_facts.ovirt_clusters|ovirt.ovirt.json_query("[?id=='" + cluster_id + "'].data_center.id")|first }} + - name: Fetch Datacenter name + ansible.builtin.set_fact: >- + datacenter_name={{ datacenter_facts.ovirt_datacenters|ovirt.ovirt.json_query("[?id=='" + datacenter_id + "'].name")|first }} + - name: Fetch cluster name + ansible.builtin.set_fact: >- + cluster_name={{ cluster_facts.ovirt_clusters|ovirt.ovirt.json_query("[?id=='" + cluster_id + "'].name")|first }} + - name: Fetch cluster version + ansible.builtin.set_fact: >- + cluster_version={{ cluster_facts.ovirt_clusters|ovirt.ovirt.json_query("[?id=='" + cluster_id + "'].version")|first }} + - name: Enforce cluster major version + ansible.builtin.fail: + msg: "Cluster {{ cluster_name }} major version is {{ cluster_version.major }}, needs to be at least 4" + when: cluster_version.major < 4 + - name: Enforce cluster minor version + ansible.builtin.fail: + msg: "Cluster {{ cluster_name }} minor version is {{ cluster_version.minor }}, needs to be at least 2" + 
when: cluster_version.minor < 2 + - name: Set storage_format + ansible.builtin.set_fact: >- + storage_format={{ 'v4' if cluster_version.minor == 2 else 'v5' }} + - name: Add NFS storage domain + ovirt_storage_domain: + state: unattached + name: "{{ he_storage_domain_name }}" + host: "{{ he_host_name }}" + data_center: "{{ datacenter_name }}" + storage_format: "{{ storage_format }}" + wait: true + nfs: + address: "{{ he_storage_domain_addr }}" + path: "{{ he_storage_domain_path }}" + mount_options: "{{ he_mount_options }}" + version: "{{ he_nfs_version }}" + auth: "{{ ovirt_auth }}" + when: he_domain_type == "nfs" + register: otopi_storage_domain_details_nfs + - name: Add glusterfs storage domain + ovirt_storage_domain: + state: unattached + name: "{{ he_storage_domain_name }}" + host: "{{ he_host_name }}" + data_center: "{{ datacenter_name }}" + storage_format: "{{ storage_format }}" + wait: true + glusterfs: + address: "{{ he_storage_domain_addr }}" + path: "{{ he_storage_domain_path }}" + mount_options: "{{ he_mount_options }}" + auth: "{{ ovirt_auth }}" + when: he_domain_type == "glusterfs" + register: otopi_storage_domain_details_gluster + - name: Add iSCSI storage domain + ovirt_storage_domain: + state: unattached + name: "{{ he_storage_domain_name }}" + host: "{{ he_host_name }}" + data_center: "{{ datacenter_name }}" + storage_format: "{{ storage_format }}" + wait: true + discard_after_delete: "{{ he_discard }}" + # we are sending a single iSCSI path but, not sure if intended or if + # it's a bug, the engine is implicitly creating the storage domain + # consuming all the paths that are already connected on the host (we + # cannot log out since there is no logout command in the REST API, see + # https://bugzilla.redhat.com/show_bug.cgi?id=1535951 ).
+ iscsi: + address: "{{ he_storage_domain_addr.split(',')|first }}" + port: "{{ he_iscsi_portal_port.split(',')|first if he_iscsi_portal_port is string else he_iscsi_portal_port }}" + target: "{{ he_iscsi_target }}" + lun_id: "{{ he_lun_id }}" + username: "{{ he_iscsi_username }}" + password: "{{ he_iscsi_password }}" + auth: "{{ ovirt_auth }}" + when: he_domain_type == "iscsi" + register: otopi_storage_domain_details_iscsi + - name: Add Fibre Channel storage domain + ovirt_storage_domain: + state: unattached + name: "{{ he_storage_domain_name }}" + host: "{{ he_host_name }}" + data_center: "{{ datacenter_name }}" + storage_format: "{{ storage_format }}" + wait: true + discard_after_delete: "{{ he_discard }}" + fcp: + lun_id: "{{ he_lun_id }}" + auth: "{{ ovirt_auth }}" + register: otopi_storage_domain_details_fc + when: he_domain_type == "fc" + - name: Get storage domain details + ovirt_storage_domain_info: + pattern: name={{ he_storage_domain_name }} + auth: "{{ ovirt_auth }}" + register: storage_domain_details + - name: Find the appliance OVF + ansible.builtin.find: + paths: "{{ he_local_vm_dir }}/master" + recurse: true + patterns: ^.*.(?<!meta).ovf$ + use_regex: true + register: app_ovf + - name: Get ovf data + ansible.builtin.command: cat "{{ app_ovf.files[0].path }}" + register: ovf_data + changed_when: false + - name: Get disk size from ovf data + ansible.builtin.set_fact: + disk_size: "{{ ovf_data['stdout'] | ovirt.ovirt.get_ovf_disk_size }}" + - name: Get required size + ansible.builtin.set_fact: + required_size: >- + {{ disk_size|int * 1024 * 1024 * 1024 + + storage_domain_details.ovirt_storage_domains[0].critical_space_action_blocker|int * + 1024 * 1024 * 1024 + 5 * 1024 * 1024 * 1024 }} + # +5G: 2xOVF_STORE, lockspace, metadata, configuration + - name: Remove unsuitable storage domain + ovirt_storage_domain: + host: "{{ he_host_name }}" + data_center: "{{ datacenter_name }}" + name: "{{ he_storage_domain_name }}" + wait: true + state: absent + destroy: true + auth: "{{ ovirt_auth }}" + when: storage_domain_details.ovirt_storage_domains[0].available|int < required_size|int + register: remove_storage_domain_details + - name: Check storage domain free space + ansible.builtin.fail: + msg: >- + Error: the target storage domain contains only + {{ storage_domain_details.ovirt_storage_domains[0].available|int / 1024 / 1024 / 1024 }}GiB of + available space while a minimum of {{ required_size|int / 1024 / 1024 / 1024 }}GiB is required + If you wish to use the current target storage domain by extending it, make sure it contains nothing + before adding it. + when: storage_domain_details.ovirt_storage_domains[0].available|int < required_size|int + - name: Activate storage domain + ovirt_storage_domain: + host: "{{ he_host_name }}" + data_center: "{{ datacenter_name }}" + name: "{{ he_storage_domain_name }}" + wait: true + state: present + auth: "{{ ovirt_auth }}" + when: storage_domain_details.ovirt_storage_domains[0].available|int >= required_size|int + register: otopi_storage_domain_details ... 
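Note on the free-space check in create_storage_domain.yml above: the role derives the required size from the appliance disk size read out of the OVF, plus the storage domain's critical_space_action_blocker, plus a fixed 5 GiB overhead for the two OVF_STORE volumes, the sanlock lockspace, the metadata volume and the configuration image; a domain with less available space is removed again and the deployment stops with the error shown. A minimal, self-contained sketch of the same arithmetic follows; the values are hypothetical and are not taken from the role:

- name: Illustrate the hosted-engine storage domain size check
  hosts: localhost
  gather_facts: false
  vars:
    disk_size_gib: 51          # appliance disk size from the OVF (hypothetical value)
    critical_space_gib: 5      # critical_space_action_blocker of the domain (hypothetical value)
    overhead_gib: 5            # 2x OVF_STORE, lockspace, metadata, configuration
    available_gib: 58          # free space reported by the storage domain (hypothetical value)
  tasks:
    - name: Fail when the storage domain is too small
      ansible.builtin.fail:
        msg: >-
          {{ disk_size_gib + critical_space_gib + overhead_gib }} GiB are required,
          only {{ available_gib }} GiB are available
      when: available_gib < (disk_size_gib + critical_space_gib + overhead_gib)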
diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml index 9f916e9ce..1db90ad4e 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml @@ -1,173 +1,173 @@ --- - name: Create target Hosted Engine VM block: - - include_tasks: ../auth_sso.yml - - name: Get local VM IP - ansible.builtin.shell: virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{ print $5 }' | cut -f1 -d'/' - environment: "{{ he_cmd_lang }}" - register: local_vm_ip - changed_when: true - - name: Set the name for add_host - ansible.builtin.set_fact: - he_fqdn_ansible_host: "{{ local_vm_ip.stdout_lines[0] }}" - - import_tasks: ../add_engine_as_ansible_host.yml - - name: Fetch host facts - ovirt_host_info: - pattern: name={{ he_host_name }} status=up - auth: "{{ ovirt_auth }}" - register: host_result - until: host_result is succeeded and host_result.ovirt_hosts|length >= 1 - retries: 50 - delay: 10 - - name: Fetch Cluster ID - ansible.builtin.set_fact: cluster_id="{{ host_result.ovirt_hosts[0].cluster.id }}" - - name: Fetch Cluster facts - ovirt_cluster_info: - auth: "{{ ovirt_auth }}" - register: cluster_facts - - name: Fetch Datacenter facts - ovirt_datacenter_info: - auth: "{{ ovirt_auth }}" - register: datacenter_facts - - name: Fetch Cluster name - ansible.builtin.set_fact: cluster_name={{ cluster_facts.ovirt_clusters|ovirt.ovirt.json_query("[?id=='" + cluster_id + "'].name")|first }} - - name: Fetch Datacenter ID - ansible.builtin.set_fact: >- - datacenter_id={{ cluster_facts.ovirt_clusters|ovirt.ovirt.json_query("[?id=='" + cluster_id + "'].data_center.id")|first }} - - name: Fetch Datacenter name - ansible.builtin.set_fact: >- - datacenter_name={{ datacenter_facts.ovirt_datacenters|ovirt.ovirt.json_query("[?id=='" + datacenter_id + "'].name")|first }} - - name: Parse Cluster details - ansible.builtin.set_fact: - cluster_cpu: >- - {{ cluster_facts.ovirt_clusters|selectattr('id', 'match', '^'+cluster_id+'$')|map(attribute='cpu')|list|first }} - cluster_version: >- - {{ cluster_facts.ovirt_clusters|selectattr('id', 'match', '^'+cluster_id+'$')| - map(attribute='version')|list|first }} - - name: Get server CPU list - ovirt.ovirt.ovirt_system_option_info: - auth: "{{ ovirt_auth }}" - name: ServerCPUList - version: "{{ cluster_version.major }}.{{ cluster_version.minor }}" - register: server_cpu_list - - name: Get cluster emulated machine list - ovirt.ovirt.ovirt_system_option_info: - name: ClusterEmulatedMachines - auth: "{{ ovirt_auth }}" - version: "{{ cluster_version.major }}.{{ cluster_version.minor }}" - register: emulated_machine_list - - name: Prepare for parsing server CPU list - ansible.builtin.set_fact: - server_cpu_dict: {} - - name: Parse server CPU list - ansible.builtin.set_fact: - server_cpu_dict: "{{ server_cpu_dict | combine({item.split(':')[1]: item.split(':')[3]}) }}" - with_items: >- - {{ server_cpu_list['ovirt_system_option']['values'][0]['value'].split('; ')|list|difference(['']) }} - - name: Convert CPU model name - ansible.builtin.set_fact: - cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}" - - name: Parse emulated_machine - ansible.builtin.set_fact: - emulated_machine: >- - {{ 
emulated_machine_list['ovirt_system_option']['values'][0]['value'].replace( - '[','').replace(']','').split(', ')|first }} - - name: Get storage domain details - ovirt_storage_domain_info: - pattern: name={{ he_storage_domain_name }} and datacenter={{ datacenter_name }} - auth: "{{ ovirt_auth }}" - register: storage_domain_details - - name: Add HE disks - ovirt_disk: - name: "{{ item.name }}" - size: "{{ item.size }}" - format: "{{ item.format }}" - sparse: "{{ item.sparse }}" - description: "{{ item.description }}" - content_type: "{{ item.content }}" - interface: virtio - storage_domain: "{{ he_storage_domain_name }}" - wait: true - timeout: 600 - auth: "{{ ovirt_auth }}" - with_items: - - { - name: 'he_virtio_disk', - description: 'Hosted-Engine disk', - size: "{{ he_disk_size_GB }}GiB", - format: 'raw', - sparse: "{{ false if he_domain_type == 'fc' or he_domain_type == 'iscsi' else true }}", - content: 'hosted_engine' - } - - { - name: 'he_sanlock', - description: 'Hosted-Engine sanlock disk', - size: '1GiB', - format: 'raw', - sparse: false, - content: 'hosted_engine_sanlock' - } - - { - name: 'HostedEngineConfigurationImage', - description: 'Hosted-Engine configuration disk', - size: '1GiB', - format: 'raw', - sparse: false, - content: 'hosted_engine_configuration' - } - - { - name: 'he_metadata', - description: 'Hosted-Engine metadata disk', - size: '128MiB', - format: 'raw', - sparse: false, - content: 'hosted_engine_metadata' - } - register: add_disks - - name: Register disk details - ansible.builtin.set_fact: - he_virtio_disk_details: "{{ add_disks.results[0] }}" - he_sanlock_disk_details: "{{ add_disks.results[1] }}" - he_conf_disk_details: "{{ add_disks.results[2] }}" - he_metadata_disk_details: "{{ add_disks.results[3] }}" - - name: Set VNC graphic protocol - ansible.builtin.set_fact: - he_graphic_protocols: [vnc] - - name: Check if FIPS is enabled - ansible.builtin.command: sysctl -n crypto.fips_enabled - register: he_fips_enabled - changed_when: false - - name: Add VM - ovirt_vm: - state: stopped - cluster: "{{ cluster_name }}" - name: "{{ he_vm_name }}" - description: 'Hosted Engine Virtual Machine' - memory: "{{ he_mem_size_MB }}Mib" - cpu_cores: "{{ he_vcpus }}" - cpu_sockets: 1 - graphical_console: - headless_mode: false - protocol: "{{ he_graphic_protocols }}" - serial_console: false - operating_system: rhel_8x64 - bios_type: q35_sea_bios - type: server - high_availability_priority: 1 - high_availability: false - delete_protected: true - # timezone: "{{ he_time_zone }}" # TODO: fix with the right parameter syntax - disks: - - id: "{{ he_virtio_disk_details.disk.id }}" - nics: - - name: vnet0 - profile_name: "{{ he_mgmt_network }}" - interface: virtio - mac_address: "{{ he_vm_mac_addr }}" - auth: "{{ ovirt_auth }}" - register: he_vm_details - - name: Register external local VM uuid - ansible.builtin.shell: virsh -r domuuid {{ he_vm_name }}Local | head -1 - environment: "{{ he_cmd_lang }}" - register: external_local_vm_uuid - changed_when: true + - include_tasks: ../auth_sso.yml + - name: Get local VM IP + ansible.builtin.shell: virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{ print $5 }' | cut -f1 -d'/' + environment: "{{ he_cmd_lang }}" + register: local_vm_ip + changed_when: true + - name: Set the name for add_host + ansible.builtin.set_fact: + he_fqdn_ansible_host: "{{ local_vm_ip.stdout_lines[0] }}" + - import_tasks: ../add_engine_as_ansible_host.yml + - name: Fetch host facts + ovirt_host_info: + pattern: name={{ he_host_name }} status=up + 
auth: "{{ ovirt_auth }}" + register: host_result + until: host_result is succeeded and host_result.ovirt_hosts|length >= 1 + retries: 50 + delay: 10 + - name: Fetch Cluster ID + ansible.builtin.set_fact: cluster_id="{{ host_result.ovirt_hosts[0].cluster.id }}" + - name: Fetch Cluster facts + ovirt_cluster_info: + auth: "{{ ovirt_auth }}" + register: cluster_facts + - name: Fetch Datacenter facts + ovirt_datacenter_info: + auth: "{{ ovirt_auth }}" + register: datacenter_facts + - name: Fetch Cluster name + ansible.builtin.set_fact: cluster_name={{ cluster_facts.ovirt_clusters|ovirt.ovirt.json_query("[?id=='" + cluster_id + "'].name")|first }} + - name: Fetch Datacenter ID + ansible.builtin.set_fact: >- + datacenter_id={{ cluster_facts.ovirt_clusters|ovirt.ovirt.json_query("[?id=='" + cluster_id + "'].data_center.id")|first }} + - name: Fetch Datacenter name + ansible.builtin.set_fact: >- + datacenter_name={{ datacenter_facts.ovirt_datacenters|ovirt.ovirt.json_query("[?id=='" + datacenter_id + "'].name")|first }} + - name: Parse Cluster details + ansible.builtin.set_fact: + cluster_cpu: >- + {{ cluster_facts.ovirt_clusters|selectattr('id', 'match', '^'+cluster_id+'$')|map(attribute='cpu')|list|first }} + cluster_version: >- + {{ cluster_facts.ovirt_clusters|selectattr('id', 'match', '^'+cluster_id+'$')| + map(attribute='version')|list|first }} + - name: Get server CPU list + ovirt.ovirt.ovirt_system_option_info: + auth: "{{ ovirt_auth }}" + name: ServerCPUList + version: "{{ cluster_version.major }}.{{ cluster_version.minor }}" + register: server_cpu_list + - name: Get cluster emulated machine list + ovirt.ovirt.ovirt_system_option_info: + name: ClusterEmulatedMachines + auth: "{{ ovirt_auth }}" + version: "{{ cluster_version.major }}.{{ cluster_version.minor }}" + register: emulated_machine_list + - name: Prepare for parsing server CPU list + ansible.builtin.set_fact: + server_cpu_dict: {} + - name: Parse server CPU list + ansible.builtin.set_fact: + server_cpu_dict: "{{ server_cpu_dict | combine({item.split(':')[1]: item.split(':')[3]}) }}" + with_items: >- + {{ server_cpu_list['ovirt_system_option']['values'][0]['value'].split('; ')|list|difference(['']) }} + - name: Convert CPU model name + ansible.builtin.set_fact: + cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}" + - name: Parse emulated_machine + ansible.builtin.set_fact: + emulated_machine: >- + {{ emulated_machine_list['ovirt_system_option']['values'][0]['value'].replace( + '[','').replace(']','').split(', ')|first }} + - name: Get storage domain details + ovirt_storage_domain_info: + pattern: name={{ he_storage_domain_name }} and datacenter={{ datacenter_name }} + auth: "{{ ovirt_auth }}" + register: storage_domain_details + - name: Add HE disks + ovirt_disk: + name: "{{ item.name }}" + size: "{{ item.size }}" + format: "{{ item.format }}" + sparse: "{{ item.sparse }}" + description: "{{ item.description }}" + content_type: "{{ item.content }}" + interface: virtio + storage_domain: "{{ he_storage_domain_name }}" + wait: true + timeout: 600 + auth: "{{ ovirt_auth }}" + with_items: + - { + name: 'he_virtio_disk', + description: 'Hosted-Engine disk', + size: "{{ he_disk_size_GB }}GiB", + format: 'raw', + sparse: "{{ false if he_domain_type == 'fc' or he_domain_type == 'iscsi' else true }}", + content: 'hosted_engine' + } + - { + name: 'he_sanlock', + description: 'Hosted-Engine sanlock disk', + size: '1GiB', + format: 'raw', + sparse: false, + content: 'hosted_engine_sanlock' + } + - { + name: 
'HostedEngineConfigurationImage', + description: 'Hosted-Engine configuration disk', + size: '1GiB', + format: 'raw', + sparse: false, + content: 'hosted_engine_configuration' + } + - { + name: 'he_metadata', + description: 'Hosted-Engine metadata disk', + size: '128MiB', + format: 'raw', + sparse: false, + content: 'hosted_engine_metadata' + } + register: add_disks + - name: Register disk details + ansible.builtin.set_fact: + he_virtio_disk_details: "{{ add_disks.results[0] }}" + he_sanlock_disk_details: "{{ add_disks.results[1] }}" + he_conf_disk_details: "{{ add_disks.results[2] }}" + he_metadata_disk_details: "{{ add_disks.results[3] }}" + - name: Set VNC graphic protocol + ansible.builtin.set_fact: + he_graphic_protocols: [vnc] + - name: Check if FIPS is enabled + ansible.builtin.command: sysctl -n crypto.fips_enabled + register: he_fips_enabled + changed_when: false + - name: Add VM + ovirt_vm: + state: stopped + cluster: "{{ cluster_name }}" + name: "{{ he_vm_name }}" + description: 'Hosted Engine Virtual Machine' + memory: "{{ he_mem_size_MB }}Mib" + cpu_cores: "{{ he_vcpus }}" + cpu_sockets: 1 + graphical_console: + headless_mode: false + protocol: "{{ he_graphic_protocols }}" + serial_console: false + operating_system: rhel_8x64 + bios_type: q35_sea_bios + type: server + high_availability_priority: 1 + high_availability: false + delete_protected: true + # timezone: "{{ he_time_zone }}" # TODO: fix with the right parameter syntax + disks: + - id: "{{ he_virtio_disk_details.disk.id }}" + nics: + - name: vnet0 + profile_name: "{{ he_mgmt_network }}" + interface: virtio + mac_address: "{{ he_vm_mac_addr }}" + auth: "{{ ovirt_auth }}" + register: he_vm_details + - name: Register external local VM uuid + ansible.builtin.shell: virsh -r domuuid {{ he_vm_name }}Local | head -1 + environment: "{{ he_cmd_lang }}" + register: external_local_vm_uuid + changed_when: true diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/02_engine_vm_configuration.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/02_engine_vm_configuration.yml index 849cba789..bd2a99dcc 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/02_engine_vm_configuration.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/02_engine_vm_configuration.yml @@ -1,81 +1,81 @@ --- - name: Engine VM configuration tasks block: - - name: Create a temporary directory for ansible as postgres user - ansible.builtin.file: - path: /var/lib/pgsql/.ansible/tmp - state: directory - owner: postgres - group: postgres - mode: 0700 - - name: Update target VM details at DB level - ansible.builtin.command: >- - "{{ engine_psql }}" -c - "UPDATE vm_static SET {{ item.field }}={{ item.value }} WHERE - vm_guid='{{ hostvars[he_ansible_host_name]['he_vm_details']['vm']['id'] }}'" - environment: "{{ he_cmd_lang }}" - changed_when: true - register: db_vm_update - with_items: - - {field: 'origin', value: 6} - - name: Insert Hosted Engine configuration disk uuid into Engine database - ansible.builtin.command: >- - "{{ engine_psql }}" -c - "UPDATE vdc_options SET option_value= - '{{ hostvars[he_ansible_host_name]['he_conf_disk_details']['disk']['id'] }}' - WHERE option_name='HostedEngineConfigurationImageGuid' AND version='general'" - environment: "{{ he_cmd_lang }}" - changed_when: true - register: db_conf_update - - name: Fetch host SPM_ID - ansible.builtin.command: >- - "{{ engine_psql }}" -t -c - "SELECT 
vds_spm_id FROM vds WHERE vds_name='{{ hostvars[he_ansible_host_name]['he_host_name'] }}'" - environment: "{{ he_cmd_lang }}" - changed_when: true - register: host_spm_id_out - - name: Parse host SPM_ID - ansible.builtin.set_fact: host_spm_id="{{ host_spm_id_out.stdout_lines|first|trim }}" - - name: Restore original DisableFenceAtStartupInSec - ansible.builtin.shell: "engine-config -s DisableFenceAtStartupInSec=$(cat /root/DisableFenceAtStartupInSec.txt)" - environment: "{{ he_cmd_lang }}" - changed_when: true - when: he_restore_from_file is defined and he_restore_from_file - - name: Remove DisableFenceAtStartupInSec temporary file - ansible.builtin.file: - path: /root/DisableFenceAtStartupInSec.txt - state: absent - when: he_restore_from_file is defined and he_restore_from_file - - name: Restore original OvfUpdateIntervalInMinutes - ansible.builtin.shell: "engine-config -s OvfUpdateIntervalInMinutes=$(cat /root/OvfUpdateIntervalInMinutes.txt)" - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Remove OvfUpdateIntervalInMinutes temporary file - ansible.builtin.file: - path: /root/OvfUpdateIntervalInMinutes.txt - state: absent - changed_when: true - - name: Restore original SSO_ALTERNATE_ENGINE_FQDNS - block: - - name: Removing temporary value - ansible.builtin.lineinfile: - path: /etc/ovirt-engine/engine.conf.d/11-setup-sso.conf - regexp: '^SSO_ALTERNATE_ENGINE_FQDNS=.* # hosted-engine-setup' - state: absent - - name: Restoring original value - ansible.builtin.replace: - path: /etc/ovirt-engine/engine.conf.d/11-setup-sso.conf - regexp: '^#(SSO_ALTERNATE_ENGINE_FQDNS=.*) # pre hosted-engine-setup' - replace: '\1' - - name: Remove temporary directory for ansible as postgres user - ansible.builtin.file: - path: /var/lib/pgsql/.ansible - state: absent - - name: Configure PermitRootLogin for sshd to its final value - ansible.builtin.lineinfile: - dest: /etc/ssh/sshd_config - regexp: "^\\s*PermitRootLogin" - line: "PermitRootLogin {{ he_root_ssh_access }}" - state: present - - name: Clean cloud-init configuration - include_tasks: ../clean_cloud_init_config.yml + - name: Create a temporary directory for ansible as postgres user + ansible.builtin.file: + path: /var/lib/pgsql/.ansible/tmp + state: directory + owner: postgres + group: postgres + mode: 0700 + - name: Update target VM details at DB level + ansible.builtin.command: >- + "{{ engine_psql }}" -c + "UPDATE vm_static SET {{ item.field }}={{ item.value }} WHERE + vm_guid='{{ hostvars[he_ansible_host_name]['he_vm_details']['vm']['id'] }}'" + environment: "{{ he_cmd_lang }}" + changed_when: true + register: db_vm_update + with_items: + - {field: 'origin', value: 6} + - name: Insert Hosted Engine configuration disk uuid into Engine database + ansible.builtin.command: >- + "{{ engine_psql }}" -c + "UPDATE vdc_options SET option_value= + '{{ hostvars[he_ansible_host_name]['he_conf_disk_details']['disk']['id'] }}' + WHERE option_name='HostedEngineConfigurationImageGuid' AND version='general'" + environment: "{{ he_cmd_lang }}" + changed_when: true + register: db_conf_update + - name: Fetch host SPM_ID + ansible.builtin.command: >- + "{{ engine_psql }}" -t -c + "SELECT vds_spm_id FROM vds WHERE vds_name='{{ hostvars[he_ansible_host_name]['he_host_name'] }}'" + environment: "{{ he_cmd_lang }}" + changed_when: true + register: host_spm_id_out + - name: Parse host SPM_ID + ansible.builtin.set_fact: host_spm_id="{{ host_spm_id_out.stdout_lines|first|trim }}" + - name: Restore original DisableFenceAtStartupInSec + ansible.builtin.shell: 
"engine-config -s DisableFenceAtStartupInSec=$(cat /root/DisableFenceAtStartupInSec.txt)" + environment: "{{ he_cmd_lang }}" + changed_when: true + when: he_restore_from_file is defined and he_restore_from_file + - name: Remove DisableFenceAtStartupInSec temporary file + ansible.builtin.file: + path: /root/DisableFenceAtStartupInSec.txt + state: absent + when: he_restore_from_file is defined and he_restore_from_file + - name: Restore original OvfUpdateIntervalInMinutes + ansible.builtin.shell: "engine-config -s OvfUpdateIntervalInMinutes=$(cat /root/OvfUpdateIntervalInMinutes.txt)" + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Remove OvfUpdateIntervalInMinutes temporary file + ansible.builtin.file: + path: /root/OvfUpdateIntervalInMinutes.txt + state: absent + changed_when: true + - name: Restore original SSO_ALTERNATE_ENGINE_FQDNS + block: + - name: Removing temporary value + ansible.builtin.lineinfile: + path: /etc/ovirt-engine/engine.conf.d/11-setup-sso.conf + regexp: '^SSO_ALTERNATE_ENGINE_FQDNS=.* # hosted-engine-setup' + state: absent + - name: Restoring original value + ansible.builtin.replace: + path: /etc/ovirt-engine/engine.conf.d/11-setup-sso.conf + regexp: '^#(SSO_ALTERNATE_ENGINE_FQDNS=.*) # pre hosted-engine-setup' + replace: '\1' + - name: Remove temporary directory for ansible as postgres user + ansible.builtin.file: + path: /var/lib/pgsql/.ansible + state: absent + - name: Configure PermitRootLogin for sshd to its final value + ansible.builtin.lineinfile: + dest: /etc/ssh/sshd_config + regexp: "^\\s*PermitRootLogin" + line: "PermitRootLogin {{ he_root_ssh_access }}" + state: present + - name: Clean cloud-init configuration + include_tasks: ../clean_cloud_init_config.yml diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/03_hosted_engine_final_tasks.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/03_hosted_engine_final_tasks.yml index b7641d1d1..81b74ee5d 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/03_hosted_engine_final_tasks.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/03_hosted_engine_final_tasks.yml @@ -1,494 +1,489 @@ --- - name: Hosted-Engine final tasks block: - - name: Choose IPv4, IPv6 or auto - import_tasks: ../ipv_switch.yml - - name: Trigger hosted engine OVF update and enable the serial console - ovirt_vm: - id: "{{ he_vm_details.vm.id }}" - description: "Hosted engine VM" - serial_console: true - auth: "{{ ovirt_auth }}" - - name: Wait until OVF update finishes - ovirt_storage_domain_info: - auth: "{{ ovirt_auth }}" - fetch_nested: true - nested_attributes: - - name - - image_id - - id - pattern: "name={{ he_storage_domain_name }}" - retries: 12 - delay: 10 - register: storage_domain_details - until: "storage_domain_details.ovirt_storage_domains[0].disks | selectattr('name', 'match', '^OVF_STORE$') | list" - - name: Parse OVF_STORE disk list - ansible.builtin.set_fact: - ovf_store_disks: >- - {{ storage_domain_details.ovirt_storage_domains[0].disks | - selectattr('name', 'match', '^OVF_STORE$') | list }} - - name: Check OVF_STORE volume status - ansible.builtin.command: >- - vdsm-client Volume getInfo storagepoolID={{ datacenter_id }} - storagedomainID={{ storage_domain_details.ovirt_storage_domains[0].id }} - imageID={{ item.id }} volumeID={{ item.image_id }} - environment: "{{ he_cmd_lang }}" - changed_when: true - register: ovf_store_status - retries: 12 - delay: 
10 - until: >- - ovf_store_status.rc == 0 and ovf_store_status.stdout|from_json|ovirt.ovirt.json_query('status') == 'OK' and - ovf_store_status.stdout|from_json|ovirt.ovirt.json_query('description')|from_json|ovirt.ovirt.json_query('Updated') - with_items: "{{ ovf_store_disks }}" - - name: Wait for OVF_STORE disk content - ansible.builtin.shell: >- - vdsm-client Image prepare storagepoolID={{ datacenter_id }} - storagedomainID={{ storage_domain_details.ovirt_storage_domains[0].id }} imageID={{ item.id }} - volumeID={{ item.image_id }} | grep path | awk '{ print $2 }' | - xargs -I{} sudo -u vdsm dd if={} | tar -tvf - {{ he_vm_details.vm.id }}.ovf - environment: "{{ he_cmd_lang }}" - changed_when: true - register: ovf_store_content - retries: 12 - delay: 10 - until: ovf_store_content.rc == 0 - with_items: "{{ ovf_store_disks }}" - args: - warn: false - - name: Prepare images - ansible.builtin.command: >- - vdsm-client Image prepare storagepoolID={{ datacenter_id }} - storagedomainID={{ storage_domain_details.ovirt_storage_domains[0].id }} - imageID={{ item.disk.id }} volumeID={{ item.disk.image_id }} - environment: "{{ he_cmd_lang }}" - with_items: - - "{{ he_virtio_disk_details }}" - - "{{ he_conf_disk_details }}" - - "{{ he_metadata_disk_details }}" - - "{{ he_sanlock_disk_details }}" - register: prepareimage_results - changed_when: true - - name: Fetch Hosted Engine configuration disk path - ansible.builtin.set_fact: - he_conf_disk_path: >- - {{ (prepareimage_results.results|ovirt.ovirt.json_query("[?item.id=='" + - he_conf_disk_details.id + "'].stdout")|first|from_json).path }} - - name: Fetch Hosted Engine virtio disk path - ansible.builtin.set_fact: - he_virtio_disk_path: >- - {{ (prepareimage_results.results|ovirt.ovirt.json_query("[?item.id=='" + - he_virtio_disk_details.id + "'].stdout")|first|from_json).path }} - - name: Fetch Hosted Engine virtio metadata path - ansible.builtin.set_fact: - he_metadata_disk_path: >- - {{ (prepareimage_results.results|ovirt.ovirt.json_query("[?item.id=='" + - he_metadata_disk_details.id + "'].stdout")|first|from_json).path }} - - name: Shutdown local VM - ansible.builtin.command: "virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} shutdown {{ he_vm_name }}Local" - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Wait for local VM shutdown - ansible.builtin.command: virsh -r domstate "{{ he_vm_name }}Local" - environment: "{{ he_cmd_lang }}" - changed_when: true - register: dominfo_out - until: dominfo_out.rc == 0 and 'shut off' in dominfo_out.stdout - retries: 120 - delay: 5 - - name: Undefine local VM - ansible.builtin.command: "virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} undefine {{ he_vm_name }}Local" - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Update libvirt default network configuration, destroy - ansible.builtin.command: "virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} net-destroy default" - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Update libvirt default network configuration, undefine - ansible.builtin.command: "virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} net-undefine default" - environment: "{{ he_cmd_lang }}" - ignore_errors: true - changed_when: true - - name: Detect ovirt-hosted-engine-ha version - ansible.builtin.command: >- - /usr/libexec/platform-python -c - 'from ovirt_hosted_engine_ha.agent import constants as agentconst; print(agentconst.PACKAGE_VERSION)' - environment: "{{ he_cmd_lang }}" - register: ha_version_out - 
changed_when: true - - name: Set ha_version - ansible.builtin.set_fact: ha_version="{{ ha_version_out.stdout_lines|first }}" - - name: Create configuration templates - ansible.builtin.template: - src: "{{ item.src }}" - dest: "{{ item.dest }}" - mode: 0644 - with_items: - - {src: templates/vm.conf.j2, dest: "{{ he_local_vm_dir }}/vm.conf"} - - {src: templates/broker.conf.j2, dest: "{{ he_local_vm_dir }}/broker.conf"} - - {src: templates/version.j2, dest: "{{ he_local_vm_dir }}/version"} - - {src: templates/fhanswers.conf.j2, dest: "{{ he_local_vm_dir }}/fhanswers.conf"} - - {src: templates/hosted-engine.conf.j2, dest: "{{ he_local_vm_dir }}/hosted-engine.conf"} - - name: Create configuration archive - ansible.builtin.command: >- - tar --record-size=20480 -cvf {{ he_conf_disk_details.disk.image_id }} - vm.conf broker.conf version fhanswers.conf hosted-engine.conf - environment: "{{ he_cmd_lang }}" - args: - chdir: "{{ he_local_vm_dir }}" - warn: false - become: true - become_user: vdsm - become_method: sudo - changed_when: true - tags: ['skip_ansible_lint'] - - name: Create ovirt-hosted-engine-ha run directory - ansible.builtin.file: - path: /var/run/ovirt-hosted-engine-ha - state: directory - mode: 0755 - - name: Copy vm.conf to the right location on host - ansible.builtin.copy: - remote_src: true - src: "{{ he_local_vm_dir }}/vm.conf" - dest: "/var/run/ovirt-hosted-engine-ha" - owner: 'vdsm' - group: 'kvm' - mode: 0640 - - name: Copy hosted-engine.conf to the right location on host - ansible.builtin.copy: - remote_src: true - src: "{{ he_local_vm_dir }}/hosted-engine.conf" - dest: "/etc/ovirt-hosted-engine/" - owner: 'vdsm' - group: 'kvm' - mode: 0440 - - name: Check fapolicyd status - ansible.builtin.systemd: - name: fapolicyd - register: fapolicyd_s - - name: Set fapolicyd rules path - ansible.builtin.set_fact: - fapolicyd_rules_dir: /etc/fapolicyd/rules.d - - name: Verify fapolicyd/rules.d directory - ansible.builtin.stat: - path: "{{ fapolicyd_rules_dir }}" - register: fapolicy_rules - - name: Add rule to fapolicy - block: - - name: Add rule to /etc/fapolicyd/rules.d - ansible.builtin.copy: - src: 35-allow-ansible-for-vdsm.rules - dest: "{{ fapolicyd_rules_dir }}" - mode: 0644 - - name: Restart fapolicyd service - ansible.builtin.service: - name: fapolicyd - state: restarted - when: fapolicyd_s.status.SubState == 'running' and fapolicy_rules.stat.exists - - name: Copy configuration archive to storage - ansible.builtin.command: >- - dd bs=20480 count=1 oflag=direct if="{{ he_local_vm_dir }}/{{ he_conf_disk_details.disk.image_id }}" - of="{{ he_conf_disk_path }}" - environment: "{{ he_cmd_lang }}" - become: true - become_user: vdsm - become_method: sudo - changed_when: true - args: - warn: false - - name: Initialize metadata volume - # Data is written at offset 4KiB*host_id and since ovirt supports 250 hosts per dc, - # we can have maximum 250*4KiB = ~ 1MiB - ansible.builtin.command: dd conv=notrunc bs=1M count=1 oflag=direct if=/dev/zero of="{{ he_metadata_disk_path }}" - environment: "{{ he_cmd_lang }}" - become: true - become_user: vdsm - become_method: sudo - changed_when: true - - include_tasks: ../get_local_vm_disk_path.yml - - name: Generate DHCP network configuration for the engine VM - ansible.builtin.template: - src: templates/ifcfg-eth0-dhcp.j2 - dest: "{{ he_local_vm_dir }}/ifcfg-eth0" - owner: root - group: root - mode: 0644 - when: he_vm_ip_addr is none - - name: Generate static network configuration for the engine VM, IPv4 - ansible.builtin.template: - src: 
templates/ifcfg-eth0-static.j2 - dest: "{{ he_local_vm_dir }}/ifcfg-eth0" - owner: root - group: root - mode: 0644 - when: he_vm_ip_addr is not none and he_vm_ip_addr | ipv4 - - name: Generate static network configuration for the engine VM, IPv6 - ansible.builtin.template: - src: templates/ifcfg-eth0-static-ipv6.j2 - dest: "{{ he_local_vm_dir }}/ifcfg-eth0" - owner: root - group: root - mode: 0644 - when: he_vm_ip_addr is not none and he_vm_ip_addr | ipv6 - - name: Inject network configuration with guestfish - ansible.builtin.command: >- - guestfish -a {{ local_vm_disk_path }} --rw -i copy-in "{{ he_local_vm_dir }}/ifcfg-eth0" - /etc/sysconfig/network-scripts {{ ":" }} selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts - /etc/sysconfig/network-scripts/ifcfg-eth0 force{{ ":" }}true - environment: - LIBGUESTFS_BACKEND: direct - LANG: en_US.UTF-8 - LC_MESSAGES: en_US.UTF-8 - LC_ALL: en_US.UTF-8 - changed_when: true - - name: Extract /etc/hosts from the Hosted Engine VM - ansible.builtin.command: virt-copy-out -a {{ local_vm_disk_path }} /etc/hosts "{{ he_local_vm_dir }}" - environment: - LIBGUESTFS_BACKEND: direct - LANG: en_US.UTF-8 - LC_MESSAGES: en_US.UTF-8 - LC_ALL: en_US.UTF-8 - changed_when: true - - name: Clean /etc/hosts for the Hosted Engine VM for Engine VM FQDN - ansible.builtin.lineinfile: - dest: "{{ he_local_vm_dir }}/hosts" - regexp: "# hosted-engine-setup-{{ hostvars[he_ansible_host_name]['he_local_vm_dir'] }}$" - state: absent - - name: Add an entry on /etc/hosts for the Hosted Engine VM for the VM itself - ansible.builtin.lineinfile: - dest: "{{ he_local_vm_dir }}/hosts" - line: "{{ he_vm_ip_addr }} {{ he_fqdn }}" - state: present - when: he_vm_etc_hosts and he_vm_ip_addr is not none - - name: Clean /etc/hosts for the Hosted Engine VM for host address - ansible.builtin.lineinfile: - dest: "{{ he_local_vm_dir }}/hosts" - line: "{{ he_host_ip }} {{ he_host_address }}" - state: absent - when: not he_vm_etc_hosts - - name: Inject /etc/hosts with guestfish - ansible.builtin.command: >- - guestfish -a {{ local_vm_disk_path }} --rw -i copy-in "{{ he_local_vm_dir }}/hosts" - /etc {{ ":" }} selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts - /etc/hosts force{{ ":" }}true - environment: - LIBGUESTFS_BACKEND: direct - LANG: en_US.UTF-8 - LC_MESSAGES: en_US.UTF-8 - LC_ALL: en_US.UTF-8 - changed_when: true - - name: Copy local VM disk to shared storage - ansible.builtin.command: >- - qemu-img convert -f qcow2 -O raw -t none -T none {{ local_vm_disk_path }} {{ he_virtio_disk_path }} - environment: "{{ he_cmd_lang }}" - become: true - become_user: vdsm - become_method: sudo - changed_when: true - - name: Verify copy of VM disk - ansible.builtin.command: qemu-img compare {{ local_vm_disk_path }} {{ he_virtio_disk_path }} - environment: "{{ he_cmd_lang }}" - become: true - become_user: vdsm - become_method: sudo - changed_when: true - when: he_debug_mode|bool - - name: Remove rule from fapolicy - block: - - name: Remove rule from /etc/fapolicyd/rules.d - ansible.builtin.file: - path: "{{ fapolicyd_rules_dir }}/35-allow-ansible-for-vdsm.rules" - state: absent - - name: Restart fapolicyd service - ansible.builtin.service: - name: fapolicyd - state: restarted - when: fapolicyd_s.status.SubState == 'running' and fapolicy_rules.stat.exists - - name: Remove temporary entry in /etc/hosts for the local VM - ansible.builtin.lineinfile: - dest: /etc/hosts - regexp: "# temporary entry added by hosted-engine-setup for the bootstrap VM$" - state: absent - - name: Set the 
name for add_host - ansible.builtin.set_fact: - he_fqdn_ansible_host: "{{ he_fqdn }}" - - import_tasks: ../add_engine_as_ansible_host.yml - - name: Start ovirt-ha-broker service on the host - ansible.builtin.service: - name: ovirt-ha-broker - state: started - enabled: true - - name: Initialize lockspace volume - ansible.builtin.command: hosted-engine --reinitialize-lockspace --force - environment: "{{ he_cmd_lang }}" - register: result - until: result.rc == 0 - ignore_errors: true - retries: 5 - delay: 10 - changed_when: true - - name: Initialize lockspace volume block - block: - - name: Workaround for ovirt-ha-broker start failures - # Ugly workaround for https://bugzilla.redhat.com/1768511 - # fix it on ovirt-ha-broker side and remove ASAP - ansible.builtin.systemd: - state: restarted - enabled: true - name: ovirt-ha-broker - - name: Initialize lockspace volume - ansible.builtin.command: hosted-engine --reinitialize-lockspace --force - environment: "{{ he_cmd_lang }}" - register: result2 - until: result2.rc == 0 - retries: 5 - delay: 10 - changed_when: true - - name: Debug var result2 - ansible.builtin.debug: - var: result2 - when: result.rc != 0 - - name: Start ovirt-ha-agent service on the host - ansible.builtin.service: - name: ovirt-ha-agent - state: started - enabled: true - - name: Exit HE maintenance mode - ansible.builtin.command: hosted-engine --set-maintenance --mode=none - environment: "{{ he_cmd_lang }}" - register: mresult - until: mresult.rc == 0 - retries: 3 - delay: 10 - changed_when: true - - name: Wait for the engine to come up on the target VM - block: - - name: Check engine VM health - ansible.builtin.command: hosted-engine --vm-status --json - environment: "{{ he_cmd_lang }}" - register: health_result - until: >- - health_result.rc == 0 and 'health' in health_result.stdout and - health_result.stdout|from_json|ovirt.ovirt.json_query('*."engine-status"."health"')|first=="good" and - health_result.stdout|from_json|ovirt.ovirt.json_query('*."engine-status"."detail"')|first=="Up" - retries: 180 - delay: 5 - changed_when: true - - name: Debug var health_result - ansible.builtin.debug: - var: health_result - rescue: - - name: Check VM status at virt level - ansible.builtin.shell: virsh -r list | grep {{ he_vm_name }} | grep running - environment: "{{ he_cmd_lang }}" - ignore_errors: true - changed_when: true - register: vm_status_virsh - - name: Debug var vm_status_virsh - ansible.builtin.debug: - var: vm_status_virsh - - name: Fail if engine VM is not running - ansible.builtin.fail: - msg: "Engine VM is not running, please check vdsm logs" - when: vm_status_virsh.rc != 0 - - name: Get target engine VM IP address - ansible.builtin.shell: getent {{ ip_key }} {{ he_fqdn }} | cut -d' ' -f1 | uniq - environment: "{{ he_cmd_lang }}" - register: engine_vm_ip - changed_when: true - - name: Get VDSM's target engine VM stats - ansible.builtin.command: vdsm-client VM getStats vmID={{ he_vm_details.vm.id }} - environment: "{{ he_cmd_lang }}" - register: engine_vdsm_stats - changed_when: true - - name: Convert stats to JSON format - ansible.builtin.set_fact: json_stats={{ engine_vdsm_stats.stdout|from_json }} - - name: Get target engine VM IP address from VDSM stats - ansible.builtin.set_fact: engine_vm_ip_vdsm={{ json_stats[0].guestIPs }} - - name: Debug var engine_vm_ip_vdsm - ansible.builtin.debug: - var: engine_vm_ip_vdsm - - name: Fail if Engine IP is different from engine's he_fqdn resolved IP - ansible.builtin.fail: - msg: >- - Engine VM IP address is {{ engine_vm_ip_vdsm }} 
while the engine's he_fqdn {{ he_fqdn }} resolves to - {{ engine_vm_ip.stdout_lines[0] }}. If you are using DHCP, check your DHCP reservation configuration - when: engine_vm_ip_vdsm != engine_vm_ip.stdout_lines[0] - - name: Fail is for any other reason the engine didn't started - ansible.builtin.fail: - msg: The engine failed to start inside the engine VM; please check engine.log. - - name: Get target engine VM address - ansible.builtin.shell: getent {{ ip_key }} {{ he_fqdn }} | cut -d ' ' -f1 | uniq - environment: "{{ he_cmd_lang }}" - register: engine_vm_ip - when: engine_vm_ip is not defined - changed_when: true - # Workaround for ovn-central being configured with the address of the bootstrap engine VM. - # Keep this aligned with: - # https://github.com/oVirt/ovirt-engine/blob/master/packaging/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/main.yml - - name: Reconfigure OVN central address - ansible.builtin.command: vdsm-tool ovn-config {{ engine_vm_ip.stdout_lines[0] }} {{ he_mgmt_network }} {{ he_host_address }} - environment: "{{ he_cmd_lang }}" - changed_when: true - # Workaround for https://bugzilla.redhat.com/1540107 - # the engine fails deleting a VM if its status in the engine DB - # is not up to date. - - include_tasks: ../auth_sso.yml - - name: Check for the local bootstrap engine VM - ovirt_vm_info: - pattern: id="{{ external_local_vm_uuid.stdout_lines|first }}" - auth: "{{ ovirt_auth }}" - register: local_vm_f - - name: Remove the bootstrap local VM - block: - - name: Make the engine aware that the external VM is stopped - ignore_errors: true - ovirt_vm: - state: stopped - id: "{{ external_local_vm_uuid.stdout_lines|first }}" - auth: "{{ ovirt_auth }}" - register: vmstop_result - - name: Debug var vmstop_result - ansible.builtin.debug: - var: vmstop_result - - name: Wait for the local bootstrap engine VM to be down at engine eyes - ovirt_vm_info: - pattern: id="{{ external_local_vm_uuid.stdout_lines|first }}" - auth: "{{ ovirt_auth }}" - register: local_vm_status - until: local_vm_status.ovirt_vms[0].status == "down" - retries: 24 - delay: 5 - - name: Debug var local_vm_status - ansible.builtin.debug: - var: local_vm_status - - name: Remove bootstrap external VM from the engine - ovirt_vm: - state: absent - id: "{{ external_local_vm_uuid.stdout_lines|first }}" - auth: "{{ ovirt_auth }}" - register: vmremove_result - - name: Debug var vmremove_result - ansible.builtin.debug: - var: vmremove_result - when: local_vm_f.ovirt_vms|length > 0 - - name: Remove ovirt-engine-appliance rpm - ansible.builtin.yum: - name: ovirt-engine-appliance - state: absent - register: yum_result - until: yum_result is success - retries: 10 - delay: 5 - when: he_remove_appliance_rpm|bool + - name: Choose IPv4, IPv6 or auto + import_tasks: ../ipv_switch.yml + - name: Trigger hosted engine OVF update and enable the serial console + ovirt_vm: + id: "{{ he_vm_details.vm.id }}" + description: "Hosted engine VM" + serial_console: true + auth: "{{ ovirt_auth }}" + - name: Wait until OVF update finishes + ovirt_storage_domain_info: + auth: "{{ ovirt_auth }}" + fetch_nested: true + nested_attributes: + - name + - image_id + - id + pattern: "name={{ he_storage_domain_name }}" + retries: 12 + delay: 10 + register: storage_domain_details + until: "storage_domain_details.ovirt_storage_domains[0].disks | selectattr('name', 'match', '^OVF_STORE$') | list" + - name: Parse OVF_STORE disk list + ansible.builtin.set_fact: + ovf_store_disks: >- + {{ 
storage_domain_details.ovirt_storage_domains[0].disks | + selectattr('name', 'match', '^OVF_STORE$') | list }} + - name: Check OVF_STORE volume status + ansible.builtin.command: >- + vdsm-client Volume getInfo storagepoolID={{ datacenter_id }} + storagedomainID={{ storage_domain_details.ovirt_storage_domains[0].id }} + imageID={{ item.id }} volumeID={{ item.image_id }} + environment: "{{ he_cmd_lang }}" + changed_when: true + register: ovf_store_status + retries: 12 + delay: 10 + until: >- + ovf_store_status.rc == 0 and ovf_store_status.stdout|from_json|ovirt.ovirt.json_query('status') == 'OK' and + ovf_store_status.stdout|from_json|ovirt.ovirt.json_query('description')|from_json|ovirt.ovirt.json_query('Updated') + with_items: "{{ ovf_store_disks }}" + - name: Wait for OVF_STORE disk content + ansible.builtin.shell: >- + vdsm-client Image prepare storagepoolID={{ datacenter_id }} + storagedomainID={{ storage_domain_details.ovirt_storage_domains[0].id }} imageID={{ item.id }} + volumeID={{ item.image_id }} | grep path | awk '{ print $2 }' | + xargs -I{} sudo -u vdsm dd if={} | tar -tvf - {{ he_vm_details.vm.id }}.ovf + environment: "{{ he_cmd_lang }}" + changed_when: true + register: ovf_store_content + retries: 12 + delay: 10 + until: ovf_store_content.rc == 0 + with_items: "{{ ovf_store_disks }}" + - name: Prepare images + ansible.builtin.command: >- + vdsm-client Image prepare storagepoolID={{ datacenter_id }} + storagedomainID={{ storage_domain_details.ovirt_storage_domains[0].id }} + imageID={{ item.disk.id }} volumeID={{ item.disk.image_id }} + environment: "{{ he_cmd_lang }}" + with_items: + - "{{ he_virtio_disk_details }}" + - "{{ he_conf_disk_details }}" + - "{{ he_metadata_disk_details }}" + - "{{ he_sanlock_disk_details }}" + register: prepareimage_results + changed_when: true + - name: Fetch Hosted Engine configuration disk path + ansible.builtin.set_fact: + he_conf_disk_path: >- + {{ (prepareimage_results.results|ovirt.ovirt.json_query("[?item.id=='" + + he_conf_disk_details.id + "'].stdout")|first|from_json).path }} + - name: Fetch Hosted Engine virtio disk path + ansible.builtin.set_fact: + he_virtio_disk_path: >- + {{ (prepareimage_results.results|ovirt.ovirt.json_query("[?item.id=='" + + he_virtio_disk_details.id + "'].stdout")|first|from_json).path }} + - name: Fetch Hosted Engine virtio metadata path + ansible.builtin.set_fact: + he_metadata_disk_path: >- + {{ (prepareimage_results.results|ovirt.ovirt.json_query("[?item.id=='" + + he_metadata_disk_details.id + "'].stdout")|first|from_json).path }} + - name: Shutdown local VM + ansible.builtin.command: "virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} shutdown {{ he_vm_name }}Local" + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Wait for local VM shutdown + ansible.builtin.command: virsh -r domstate "{{ he_vm_name }}Local" + environment: "{{ he_cmd_lang }}" + changed_when: true + register: dominfo_out + until: dominfo_out.rc == 0 and 'shut off' in dominfo_out.stdout + retries: 120 + delay: 5 + - name: Undefine local VM + ansible.builtin.command: "virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} undefine {{ he_vm_name }}Local" + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Update libvirt default network configuration, destroy + ansible.builtin.command: "virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} net-destroy default" + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Update libvirt default network configuration, undefine + 
ansible.builtin.command: "virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} net-undefine default" + environment: "{{ he_cmd_lang }}" + ignore_errors: true + changed_when: true + - name: Detect ovirt-hosted-engine-ha version + ansible.builtin.command: >- + /usr/libexec/platform-python -c + 'from ovirt_hosted_engine_ha.agent import constants as agentconst; print(agentconst.PACKAGE_VERSION)' + environment: "{{ he_cmd_lang }}" + register: ha_version_out + changed_when: true + - name: Set ha_version + ansible.builtin.set_fact: ha_version="{{ ha_version_out.stdout_lines|first }}" + - name: Create configuration templates + ansible.builtin.template: + src: "{{ item.src }}" + dest: "{{ item.dest }}" + mode: 0644 + with_items: + - {src: templates/vm.conf.j2, dest: "{{ he_local_vm_dir }}/vm.conf"} + - {src: templates/broker.conf.j2, dest: "{{ he_local_vm_dir }}/broker.conf"} + - {src: templates/version.j2, dest: "{{ he_local_vm_dir }}/version"} + - {src: templates/fhanswers.conf.j2, dest: "{{ he_local_vm_dir }}/fhanswers.conf"} + - {src: templates/hosted-engine.conf.j2, dest: "{{ he_local_vm_dir }}/hosted-engine.conf"} + - name: Create configuration archive + ansible.builtin.command: >- + tar --record-size=20480 -cvf {{ he_conf_disk_details.disk.image_id }} + vm.conf broker.conf version fhanswers.conf hosted-engine.conf + environment: "{{ he_cmd_lang }}" + args: + chdir: "{{ he_local_vm_dir }}" + become: true + become_user: vdsm + become_method: ansible.builtin.sudo + changed_when: true + tags: ['skip_ansible_lint'] + - name: Create ovirt-hosted-engine-ha run directory + ansible.builtin.file: + path: /var/run/ovirt-hosted-engine-ha + state: directory + mode: 0755 + - name: Copy vm.conf to the right location on host + ansible.builtin.copy: + remote_src: true + src: "{{ he_local_vm_dir }}/vm.conf" + dest: "/var/run/ovirt-hosted-engine-ha" + owner: 'vdsm' + group: 'kvm' + mode: 0640 + - name: Copy hosted-engine.conf to the right location on host + ansible.builtin.copy: + remote_src: true + src: "{{ he_local_vm_dir }}/hosted-engine.conf" + dest: "/etc/ovirt-hosted-engine/" + owner: 'vdsm' + group: 'kvm' + mode: 0440 + - name: Check fapolicyd status + ansible.builtin.systemd: + name: fapolicyd + register: fapolicyd_s + - name: Set fapolicyd rules path + ansible.builtin.set_fact: + fapolicyd_rules_dir: /etc/fapolicyd/rules.d + - name: Verify fapolicyd/rules.d directory + ansible.builtin.stat: + path: "{{ fapolicyd_rules_dir }}" + register: fapolicy_rules + - name: Add rule to fapolicy + block: + - name: Add rule to /etc/fapolicyd/rules.d + ansible.builtin.copy: + src: 35-allow-ansible-for-vdsm.rules + dest: "{{ fapolicyd_rules_dir }}" + mode: 0644 + - name: Restart fapolicyd service + ansible.builtin.service: + name: fapolicyd + state: restarted + when: fapolicyd_s.status.SubState == 'running' and fapolicy_rules.stat.exists + - name: Copy configuration archive to storage + ansible.builtin.command: >- + dd bs=20480 count=1 oflag=direct if="{{ he_local_vm_dir }}/{{ he_conf_disk_details.disk.image_id }}" + of="{{ he_conf_disk_path }}" + environment: "{{ he_cmd_lang }}" + become: true + become_user: vdsm + become_method: ansible.builtin.sudo + changed_when: true + - name: Initialize metadata volume + # Data is written at offset 4KiB*host_id and since ovirt supports 250 hosts per dc, + # we can have maximum 250*4KiB = ~ 1MiB + ansible.builtin.command: dd conv=notrunc bs=1M count=1 oflag=direct if=/dev/zero of="{{ he_metadata_disk_path }}" + environment: "{{ he_cmd_lang }}" + become: true + become_user: 
vdsm + become_method: ansible.builtin.sudo + changed_when: true + - include_tasks: ../get_local_vm_disk_path.yml + - name: Generate DHCP network configuration for the engine VM + ansible.builtin.template: + src: templates/ifcfg-eth0-dhcp.j2 + dest: "{{ he_local_vm_dir }}/ifcfg-eth0" + owner: root + group: root + mode: 0644 + when: he_vm_ip_addr is none + - name: Generate static network configuration for the engine VM, IPv4 + ansible.builtin.template: + src: templates/ifcfg-eth0-static.j2 + dest: "{{ he_local_vm_dir }}/ifcfg-eth0" + owner: root + group: root + mode: 0644 + when: he_vm_ip_addr is not none and not he_vm_ip_addr is search(":") + - name: Generate static network configuration for the engine VM, IPv6 + ansible.builtin.template: + src: templates/ifcfg-eth0-static-ipv6.j2 + dest: "{{ he_local_vm_dir }}/ifcfg-eth0" + owner: root + group: root + mode: 0644 + when: he_vm_ip_addr is not none and he_vm_ip_addr is search(":") + - name: Inject network configuration with guestfish + ansible.builtin.command: >- + guestfish -a {{ local_vm_disk_path }} --rw -i copy-in "{{ he_local_vm_dir }}/ifcfg-eth0" + /etc/sysconfig/network-scripts {{ ":" }} selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts + /etc/sysconfig/network-scripts/ifcfg-eth0 force{{ ":" }}true + environment: + LIBGUESTFS_BACKEND: direct + LANG: en_US.UTF-8 + LC_MESSAGES: en_US.UTF-8 + LC_ALL: en_US.UTF-8 + changed_when: true + - name: Extract /etc/hosts from the Hosted Engine VM + ansible.builtin.command: virt-copy-out -a {{ local_vm_disk_path }} /etc/hosts "{{ he_local_vm_dir }}" + environment: + LIBGUESTFS_BACKEND: direct + LANG: en_US.UTF-8 + LC_MESSAGES: en_US.UTF-8 + LC_ALL: en_US.UTF-8 + changed_when: true + - name: Clean /etc/hosts for the Hosted Engine VM for Engine VM FQDN + ansible.builtin.lineinfile: + dest: "{{ he_local_vm_dir }}/hosts" + regexp: "# hosted-engine-setup-{{ hostvars[he_ansible_host_name]['he_local_vm_dir'] }}$" + state: absent + - name: Add an entry on /etc/hosts for the Hosted Engine VM for the VM itself + ansible.builtin.lineinfile: + dest: "{{ he_local_vm_dir }}/hosts" + line: "{{ he_vm_ip_addr }} {{ he_fqdn }}" + state: present + when: he_vm_etc_hosts and he_vm_ip_addr is not none + - name: Clean /etc/hosts for the Hosted Engine VM for host address + ansible.builtin.lineinfile: + dest: "{{ he_local_vm_dir }}/hosts" + line: "{{ he_host_ip }} {{ he_host_address }}" + state: absent + when: not he_vm_etc_hosts + - name: Inject /etc/hosts with guestfish + ansible.builtin.command: >- + guestfish -a {{ local_vm_disk_path }} --rw -i copy-in "{{ he_local_vm_dir }}/hosts" + /etc {{ ":" }} selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts + /etc/hosts force{{ ":" }}true + environment: + LIBGUESTFS_BACKEND: direct + LANG: en_US.UTF-8 + LC_MESSAGES: en_US.UTF-8 + LC_ALL: en_US.UTF-8 + changed_when: true + - name: Copy local VM disk to shared storage + ansible.builtin.command: >- + qemu-img convert -n -f qcow2 -O raw -t none -T none {{ local_vm_disk_path }} {{ he_virtio_disk_path }} + environment: "{{ he_cmd_lang }}" + become: true + become_user: vdsm + become_method: ansible.builtin.sudo + changed_when: true + - name: Verify copy of VM disk + ansible.builtin.command: qemu-img compare {{ local_vm_disk_path }} {{ he_virtio_disk_path }} + environment: "{{ he_cmd_lang }}" + become: true + become_user: vdsm + become_method: ansible.builtin.sudo + changed_when: true + when: he_debug_mode|bool + - name: Remove rule from fapolicy + block: + - name: Remove rule from /etc/fapolicyd/rules.d 
+ ansible.builtin.file: + path: "{{ fapolicyd_rules_dir }}/35-allow-ansible-for-vdsm.rules" + state: absent + - name: Restart fapolicyd service + ansible.builtin.service: + name: fapolicyd + state: restarted + when: fapolicyd_s.status.SubState == 'running' and fapolicy_rules.stat.exists + - name: Remove temporary entry in /etc/hosts for the local VM + ansible.builtin.lineinfile: + dest: /etc/hosts + regexp: "# temporary entry added by hosted-engine-setup for the bootstrap VM$" + state: absent + - name: Set the name for add_host + ansible.builtin.set_fact: + he_fqdn_ansible_host: "{{ he_fqdn }}" + - import_tasks: ../add_engine_as_ansible_host.yml + - name: Start ovirt-ha-broker service on the host + ansible.builtin.service: + name: ovirt-ha-broker + state: started + enabled: true + - name: Initialize lockspace volume + ansible.builtin.command: hosted-engine --reinitialize-lockspace --force + environment: "{{ he_cmd_lang }}" + register: result + until: result.rc == 0 + ignore_errors: true + retries: 5 + delay: 10 + changed_when: true + - name: Initialize lockspace volume block + block: + - name: Workaround for ovirt-ha-broker start failures + # Ugly workaround for https://bugzilla.redhat.com/1768511 + # fix it on ovirt-ha-broker side and remove ASAP + ansible.builtin.systemd: + state: restarted + enabled: true + name: ovirt-ha-broker + - name: Initialize lockspace volume + ansible.builtin.command: hosted-engine --reinitialize-lockspace --force + environment: "{{ he_cmd_lang }}" + register: result2 + until: result2.rc == 0 + retries: 5 + delay: 10 + changed_when: true + - name: Debug var result2 + ansible.builtin.debug: + var: result2 + when: result.rc != 0 + - name: Start ovirt-ha-agent service on the host + ansible.builtin.service: + name: ovirt-ha-agent + state: started + enabled: true + - name: Exit HE maintenance mode + ansible.builtin.command: hosted-engine --set-maintenance --mode=none + environment: "{{ he_cmd_lang }}" + register: mresult + until: mresult.rc == 0 + retries: 3 + delay: 10 + changed_when: true + - name: Wait for the engine to come up on the target VM + block: + - name: Check engine VM health + ansible.builtin.command: hosted-engine --vm-status --json + environment: "{{ he_cmd_lang }}" + register: health_result + until: >- + health_result.rc == 0 and 'health' in health_result.stdout and + health_result.stdout|from_json|ovirt.ovirt.json_query('*."engine-status"."health"')|first=="good" and + health_result.stdout|from_json|ovirt.ovirt.json_query('*."engine-status"."detail"')|first=="Up" + retries: 180 + delay: 5 + changed_when: true + - name: Debug var health_result + ansible.builtin.debug: + var: health_result + rescue: + - name: Check VM status at virt level + ansible.builtin.shell: virsh -r list | grep {{ he_vm_name }} | grep running + environment: "{{ he_cmd_lang }}" + ignore_errors: true + changed_when: true + register: vm_status_virsh + - name: Debug var vm_status_virsh + ansible.builtin.debug: + var: vm_status_virsh + - name: Fail if engine VM is not running + ansible.builtin.fail: + msg: "Engine VM is not running, please check vdsm logs" + when: vm_status_virsh.rc != 0 + - name: Get target engine VM IP address + ansible.builtin.shell: getent {{ ip_key }} {{ he_fqdn }} | cut -d' ' -f1 | uniq + environment: "{{ he_cmd_lang }}" + register: engine_vm_ip + changed_when: true + - name: Get VDSM's target engine VM stats + ansible.builtin.command: vdsm-client VM getStats vmID={{ he_vm_details.vm.id }} + environment: "{{ he_cmd_lang }}" + register: engine_vdsm_stats + 
changed_when: true + - name: Convert stats to JSON format + ansible.builtin.set_fact: json_stats={{ engine_vdsm_stats.stdout|from_json }} + - name: Get target engine VM IP address from VDSM stats + ansible.builtin.set_fact: engine_vm_ip_vdsm={{ json_stats[0].guestIPs }} + - name: Debug var engine_vm_ip_vdsm + ansible.builtin.debug: + var: engine_vm_ip_vdsm + - name: Fail if Engine IP is different from engine's he_fqdn resolved IP + ansible.builtin.fail: + msg: >- + Engine VM IP address is {{ engine_vm_ip_vdsm }} while the engine's he_fqdn {{ he_fqdn }} resolves to + {{ engine_vm_ip.stdout_lines[0] }}. If you are using DHCP, check your DHCP reservation configuration + when: engine_vm_ip_vdsm != engine_vm_ip.stdout_lines[0] + - name: Fail is for any other reason the engine didn't started + ansible.builtin.fail: + msg: The engine failed to start inside the engine VM; please check engine.log. + - name: Get target engine VM address + ansible.builtin.shell: getent {{ ip_key }} {{ he_fqdn }} | cut -d ' ' -f1 | uniq + environment: "{{ he_cmd_lang }}" + register: engine_vm_ip + when: engine_vm_ip is not defined + changed_when: true + # Workaround for ovn-central being configured with the address of the bootstrap engine VM. + # Keep this aligned with: + # https://github.com/oVirt/ovirt-engine/blob/master/packaging/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/main.yml + - name: Reconfigure OVN central address + ansible.builtin.command: vdsm-tool ovn-config {{ engine_vm_ip.stdout_lines[0] }} {{ he_mgmt_network }} {{ he_host_address }} + environment: "{{ he_cmd_lang }}" + changed_when: true + # Workaround for https://bugzilla.redhat.com/1540107 + # the engine fails deleting a VM if its status in the engine DB + # is not up to date. 
+ - include_tasks: ../auth_sso.yml + - name: Check for the local bootstrap engine VM + ovirt_vm_info: + pattern: id="{{ external_local_vm_uuid.stdout_lines|first }}" + auth: "{{ ovirt_auth }}" + register: local_vm_f + - name: Remove the bootstrap local VM + block: + - name: Make the engine aware that the external VM is stopped + ignore_errors: true + ovirt_vm: + state: stopped + id: "{{ external_local_vm_uuid.stdout_lines|first }}" + auth: "{{ ovirt_auth }}" + register: vmstop_result + - name: Debug var vmstop_result + ansible.builtin.debug: + var: vmstop_result + - name: Wait for the local bootstrap engine VM to be down at engine eyes + ovirt_vm_info: + pattern: id="{{ external_local_vm_uuid.stdout_lines|first }}" + auth: "{{ ovirt_auth }}" + register: local_vm_status + until: local_vm_status.ovirt_vms[0].status == "down" + retries: 24 + delay: 5 + - name: Debug var local_vm_status + ansible.builtin.debug: + var: local_vm_status + - name: Remove bootstrap external VM from the engine + ovirt_vm: + state: absent + id: "{{ external_local_vm_uuid.stdout_lines|first }}" + auth: "{{ ovirt_auth }}" + register: vmremove_result + - name: Debug var vmremove_result + ansible.builtin.debug: + var: vmremove_result + when: local_vm_f.ovirt_vms|length > 0 + - name: Remove ovirt-engine-appliance rpm + ansible.builtin.yum: + name: ovirt-engine-appliance + state: absent + register: yum_result + until: yum_result is success + retries: 10 + delay: 5 + when: he_remove_appliance_rpm|bool - - name: Include custom tasks for after setup customization - include_tasks: "{{ after_setup_item }}" - with_fileglob: "hooks/after_setup/*.yml" - loop_control: - loop_var: after_setup_item - register: after_setup_results + - name: Include custom tasks for after setup customization + include_tasks: "{{ after_setup_item }}" + with_fileglob: "hooks/after_setup/*.yml" + loop_control: + loop_var: after_setup_item + register: after_setup_results rescue: - name: Fetch logs from the engine VM include_tasks: ../fetch_engine_logs.yml diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/fetch_host_ip.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/fetch_host_ip.yml index 53907369c..f1d149b94 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/fetch_host_ip.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/fetch_host_ip.yml @@ -16,10 +16,14 @@ - name: Choose IPv4, IPv6 or auto import_tasks: ipv_switch.yml - name: Get host address resolution - ansible.builtin.shell: getent {{ ip_key }} {{ he_host_address }} | grep STREAM + ansible.builtin.shell: getent {{ ip_key }} {{ he_host_address }} | grep STREAM | cut -d ' ' -f1 register: hostname_resolution_output changed_when: true ignore_errors: true + - name: Get host IP addresses + ansible.builtin.command: hostname -I + register: hostname_addresses_output + changed_when: true - name: Check address resolution ansible.builtin.fail: msg: > @@ -29,10 +33,8 @@ ansible.builtin.set_fact: he_host_ip: "{{ ( - hostname_resolution_output.stdout.split() | ipaddr | - difference(hostname_resolution_output.stdout.split() | - ipaddr('link-local') - ) + hostname_resolution_output.stdout.split() | + intersect(hostname_addresses_output.stdout.split()) )[0] }}" - name: Fail if host's ip is empty diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/final_clean.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/final_clean.yml index 1000d2ed2..ce2db921e 100644 --- 
a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/final_clean.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/final_clean.yml @@ -1,11 +1,11 @@ --- - name: Clean temporary resources block: - - name: Fetch logs from the engine VM - include_tasks: fetch_engine_logs.yml - ignore_errors: true - - include_tasks: clean_localvm_dir.yml - - name: Clean local storage pools - include_tasks: clean_local_storage_pools.yml - ignore_errors: true + - name: Fetch logs from the engine VM + include_tasks: fetch_engine_logs.yml + ignore_errors: true + - include_tasks: clean_localvm_dir.yml + - name: Clean local storage pools + include_tasks: clean_local_storage_pools.yml + ignore_errors: true ... diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/initial_clean.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/initial_clean.yml index b95aa68d5..04dc3e359 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/initial_clean.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/initial_clean.yml @@ -2,149 +2,143 @@ - name: initial clean tags: he_initial_clean block: - - name: Stop libvirt service - ansible.builtin.service: - name: libvirtd - state: stopped - enabled: true - - name: Drop vdsm config statements - ansible.builtin.shell: >- - [ -r {{ item }} ] && sed -i - '/## beginning of configuration section by - vdsm-4.[0-9]\+.[0-9]\+/,/## end of configuration section by vdsm-4.[0-9]\+.[0-9]\+/d' {{ item }} || : - environment: "{{ he_cmd_lang }}" - args: - warn: false - with_items: - - /etc/libvirt/libvirtd.conf - - /etc/libvirt/qemu.conf - - /etc/libvirt/qemu-sanlock.conf - - /etc/sysconfig/libvirtd - tags: ['skip_ansible_lint'] - - name: Drop VNC encryption config statements - ansible.builtin.command: >- - sed -i - '/## beginning of configuration section for VNC encryption/,/## - end of configuration section for VNC encryption\+/d' /etc/libvirt/qemu.conf - args: - warn: false - environment: "{{ he_cmd_lang }}" - changed_when: true - - name: Check if vdsm's abrt-action-save-package-data config exists - ansible.builtin.stat: - path: /etc/abrt/abrt-action-save-package-data.conf - register: abrt_vdsm_config - - name: Check if abrt is installed - ansible.builtin.stat: - path: /usr/share/abrt/conf.d/abrt-action-save-package-data.conf - register: abrt_installed_config - - name: Restore initial abrt config files - ansible.builtin.copy: - remote_src: true - src: "{{ item.src }}" - dest: "{{ item.dest }}" - mode: preserve - with_items: - - { - src: /usr/share/abrt/conf.d/abrt-action-save-package-data.conf, - dest: /etc/abrt/abrt-action-save-package-data.conf - } - - { - src: /usr/share/abrt/conf.d/abrt.conf, - dest: /etc/abrt/abrt.conf - } - - { - src: /usr/share/abrt/conf.d/plugins/CCpp.conf, - dest: /etc/abrt/plugins/CCpp.conf - } - - { - src: /usr/share/abrt/conf.d/plugins/vmcore.conf, - dest: /etc/abrt/plugins/vmcore.conf - } - when: - - abrt_vdsm_config.stat.exists - - abrt_installed_config.stat.exists - - name: Restart abrtd service - ansible.builtin.service: - name: abrtd - state: restarted - when: - - abrt_vdsm_config.stat.exists - - abrt_installed_config.stat.exists - - name: Remove vdsm's abrt config files - ansible.builtin.file: - state: absent - path: "{{ item }}" - with_items: - - /etc/abrt/abrt-action-save-package-data.conf - - /etc/abrt/abrt.conf - - /etc/abrt/plugins/CCpp.conf - - /etc/abrt/plugins/vmcore.conf - when: - - abrt_vdsm_config.stat.exists - - not 
abrt_installed_config.stat.exists - - name: Drop libvirt sasl2 configuration by vdsm - ansible.builtin.command: >- - sed -i '/## start vdsm-4.[0-9]\+.[0-9]\+ configuration/,/## end vdsm configuration/d' /etc/sasl2/libvirt.conf - environment: "{{ he_cmd_lang }}" - args: - warn: false - tags: ['skip_ansible_lint'] - - name: Stop and disable services - ansible.builtin.service: - name: "{{ item }}" - state: stopped - enabled: false - with_items: - - ovirt-ha-agent - - ovirt-ha-broker - - vdsmd - - libvirtd-tls.socket - - name: Restore initial libvirt default network configuration - ansible.builtin.copy: - remote_src: true - src: /usr/share/libvirt/networks/default.xml - dest: /etc/libvirt/qemu/networks/default.xml - mode: preserve - - name: Start libvirt - ansible.builtin.service: - name: libvirtd - state: started - enabled: true - - name: Check for leftover local Hosted Engine VM - ansible.builtin.shell: virsh list | grep {{ he_vm_name }}Local | cat - environment: "{{ he_cmd_lang }}" - changed_when: true - register: local_vm_list - - name: Destroy leftover local Hosted Engine VM - ansible.builtin.command: virsh destroy {{ he_vm_name }}Local - environment: "{{ he_cmd_lang }}" - ignore_errors: true - when: local_vm_list.stdout_lines|length >= 1 - - name: Check for leftover defined local Hosted Engine VM - ansible.builtin.shell: virsh list --all | grep {{ he_vm_name }}Local | cat - environment: "{{ he_cmd_lang }}" - changed_when: true - register: local_vm_list_all - - name: Undefine leftover local engine VM - ansible.builtin.command: virsh undefine --managed-save {{ he_vm_name }}Local - environment: "{{ he_cmd_lang }}" - when: local_vm_list_all.stdout_lines|length >= 1 - changed_when: true - - name: Check for leftover defined Hosted Engine VM - ansible.builtin.shell: virsh list --all | grep {{ he_vm_name }} | cat - environment: "{{ he_cmd_lang }}" - changed_when: true - register: target_vm_list_all - - name: Undefine leftover engine VM - ansible.builtin.command: virsh undefine --managed-save {{ he_vm_name }} - environment: "{{ he_cmd_lang }}" - when: target_vm_list_all.stdout_lines|length >= 1 - changed_when: true - - name: Remove eventually entries for the local VM from known_hosts file - ansible.builtin.known_hosts: - name: "{{ he_fqdn }}" - state: absent - delegate_to: localhost - become: false + - name: Stop libvirt service + ansible.builtin.service: + name: libvirtd + state: stopped + enabled: true + - name: Drop vdsm config statements + ansible.builtin.shell: >- + [ -r {{ item }} ] && sed -i + '/## beginning of configuration section by + vdsm-4.[0-9]\+.[0-9]\+/,/## end of configuration section by vdsm-4.[0-9]\+.[0-9]\+/d' {{ item }} || : + environment: "{{ he_cmd_lang }}" + with_items: + - /etc/libvirt/libvirtd.conf + - /etc/libvirt/qemu.conf + - /etc/libvirt/qemu-sanlock.conf + - /etc/sysconfig/libvirtd + tags: ['skip_ansible_lint'] + - name: Drop VNC encryption config statements + ansible.builtin.command: >- + sed -i + '/## beginning of configuration section for VNC encryption/,/## + end of configuration section for VNC encryption\+/d' /etc/libvirt/qemu.conf + environment: "{{ he_cmd_lang }}" + changed_when: true + - name: Check if vdsm's abrt-action-save-package-data config exists + ansible.builtin.stat: + path: /etc/abrt/abrt-action-save-package-data.conf + register: abrt_vdsm_config + - name: Check if abrt is installed + ansible.builtin.stat: + path: /usr/share/abrt/conf.d/abrt-action-save-package-data.conf + register: abrt_installed_config + - name: Restore initial abrt config files 
+ ansible.builtin.copy: + remote_src: true + src: "{{ item.src }}" + dest: "{{ item.dest }}" + mode: preserve + with_items: + - { + src: /usr/share/abrt/conf.d/abrt-action-save-package-data.conf, + dest: /etc/abrt/abrt-action-save-package-data.conf + } + - { + src: /usr/share/abrt/conf.d/abrt.conf, + dest: /etc/abrt/abrt.conf + } + - { + src: /usr/share/abrt/conf.d/plugins/CCpp.conf, + dest: /etc/abrt/plugins/CCpp.conf + } + - { + src: /usr/share/abrt/conf.d/plugins/vmcore.conf, + dest: /etc/abrt/plugins/vmcore.conf + } + when: + - abrt_vdsm_config.stat.exists + - abrt_installed_config.stat.exists + - name: Restart abrtd service + ansible.builtin.service: + name: abrtd + state: restarted + when: + - abrt_vdsm_config.stat.exists + - abrt_installed_config.stat.exists + - name: Remove vdsm's abrt config files + ansible.builtin.file: + state: absent + path: "{{ item }}" + with_items: + - /etc/abrt/abrt-action-save-package-data.conf + - /etc/abrt/abrt.conf + - /etc/abrt/plugins/CCpp.conf + - /etc/abrt/plugins/vmcore.conf + when: + - abrt_vdsm_config.stat.exists + - not abrt_installed_config.stat.exists + - name: Drop libvirt sasl2 configuration by vdsm + ansible.builtin.command: >- + sed -i '/## start vdsm-4.[0-9]\+.[0-9]\+ configuration/,/## end vdsm configuration/d' /etc/sasl2/libvirt.conf + environment: "{{ he_cmd_lang }}" + tags: ['skip_ansible_lint'] + - name: Stop and disable services + ansible.builtin.service: + name: "{{ item }}" + state: stopped + enabled: false + with_items: + - ovirt-ha-agent + - ovirt-ha-broker + - vdsmd + - libvirtd-tls.socket + - name: Restore initial libvirt default network configuration + ansible.builtin.copy: + remote_src: true + src: /usr/share/libvirt/networks/default.xml + dest: /etc/libvirt/qemu/networks/default.xml + mode: preserve + - name: Start libvirt + ansible.builtin.service: + name: libvirtd + state: started + enabled: true + - name: Check for leftover local Hosted Engine VM + ansible.builtin.shell: virsh list | grep {{ he_vm_name }}Local | cat + environment: "{{ he_cmd_lang }}" + changed_when: true + register: local_vm_list + - name: Destroy leftover local Hosted Engine VM + ansible.builtin.command: virsh destroy {{ he_vm_name }}Local + environment: "{{ he_cmd_lang }}" + ignore_errors: true + when: local_vm_list.stdout_lines|length >= 1 + - name: Check for leftover defined local Hosted Engine VM + ansible.builtin.shell: virsh list --all | grep {{ he_vm_name }}Local | cat + environment: "{{ he_cmd_lang }}" + changed_when: true + register: local_vm_list_all + - name: Undefine leftover local engine VM + ansible.builtin.command: virsh undefine --managed-save {{ he_vm_name }}Local + environment: "{{ he_cmd_lang }}" + when: local_vm_list_all.stdout_lines|length >= 1 + changed_when: true + - name: Check for leftover defined Hosted Engine VM + ansible.builtin.shell: virsh list --all | grep {{ he_vm_name }} | cat + environment: "{{ he_cmd_lang }}" + changed_when: true + register: target_vm_list_all + - name: Undefine leftover engine VM + ansible.builtin.command: virsh undefine --managed-save {{ he_vm_name }} + environment: "{{ he_cmd_lang }}" + when: target_vm_list_all.stdout_lines|length >= 1 + changed_when: true + - name: Remove eventually entries for the local VM from known_hosts file + ansible.builtin.known_hosts: + name: "{{ he_fqdn }}" + state: absent + delegate_to: localhost + become: false ... 
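Beyond the re-indentation of the task list, the initial_clean.yml hunk above drops the "args: warn: false" lines from the sed-based cleanup commands; the warn parameter was removed from ansible.builtin.command and ansible.builtin.shell in recent ansible-core releases, so keeping it would break these tasks. The following is a simplified, single-file sketch of the resulting pattern (not part of the patch; it reuses he_cmd_lang and the libvirtd.conf path from the role purely for illustration):

- name: Drop a vdsm-managed configuration section (illustrative sketch)
  ansible.builtin.shell: >-
    [ -r /etc/libvirt/libvirtd.conf ] && sed -i
    '/## beginning of configuration section by vdsm-4.[0-9]\+.[0-9]\+/,/## end of configuration section by vdsm-4.[0-9]\+.[0-9]\+/d'
    /etc/libvirt/libvirtd.conf || :
  environment: "{{ he_cmd_lang }}"
  changed_when: true
  # Note: no "args: warn: false" here; that option no longer exists in current ansible-core.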
diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/001_validate_network_interfaces.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/001_validate_network_interfaces.yml index d71306300..c19475d5d 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/001_validate_network_interfaces.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/001_validate_network_interfaces.yml @@ -1,92 +1,92 @@ --- - name: Network interfaces block: - - name: Detecting interface on existing management bridge - ansible.builtin.set_fact: - bridge_interface="{{ hostvars[inventory_hostname]['ansible_' + bridge_name ]['interfaces']|first }}" - when: "'ansible_' + bridge_name in hostvars[inventory_hostname]" - with_items: - - 'ovirtmgmt' - - 'rhevm' - loop_control: - loop_var: bridge_name - - name: Set variable for supported bond modes - ansible.builtin.set_fact: - acceptable_bond_modes: ['active-backup', 'balance-xor', 'broadcast', '802.3ad'] - - name: Get all active network interfaces - ansible.builtin.set_fact: - otopi_net_host="{{ hostvars[inventory_hostname]['ansible_' + iface_item]['device'] }}" - type="{{ hostvars[inventory_hostname]['ansible_' + iface_item]['type'] }}" - bond_valid_name="{{ iface_item | regex_search('(^bond[0-9]+)') }}" - when: ( - ( - iface_item != 'lo' - ) and ( - bridge_interface is not defined - ) and ( - 'active' in hostvars[inventory_hostname]['ansible_' + iface_item] and - hostvars[inventory_hostname]['ansible_' + iface_item]['active'] - ) and ( - hostvars[inventory_hostname]['ansible_' + iface_item]['type'] != 'bridge' - ) and ( - hostvars[inventory_hostname]['ansible_' + iface_item]['ipv4'] is defined or - hostvars[inventory_hostname]['ansible_' + iface_item]['ipv6'] is defined - ) and ( + - name: Detecting interface on existing management bridge + ansible.builtin.set_fact: + bridge_interface="{{ hostvars[inventory_hostname]['ansible_' + bridge_name ]['interfaces']|first }}" + when: "'ansible_' + bridge_name in hostvars[inventory_hostname]" + with_items: + - 'ovirtmgmt' + - 'rhevm' + loop_control: + loop_var: bridge_name + - name: Set variable for supported bond modes + ansible.builtin.set_fact: + acceptable_bond_modes: ['active-backup', 'balance-xor', 'broadcast', '802.3ad'] + - name: Get all active network interfaces + ansible.builtin.set_fact: + otopi_net_host="{{ hostvars[inventory_hostname]['ansible_' + iface_item]['device'] }}" + type="{{ hostvars[inventory_hostname]['ansible_' + iface_item]['type'] }}" + bond_valid_name="{{ iface_item | regex_search('(^bond[0-9]+)') }}" + when: ( ( - hostvars[inventory_hostname]['ansible_' + iface_item]['type'] != 'bonding' - ) or ( + iface_item != 'lo' + ) and ( + bridge_interface is not defined + ) and ( + 'active' in hostvars[inventory_hostname]['ansible_' + iface_item] and + hostvars[inventory_hostname]['ansible_' + iface_item]['active'] + ) and ( + hostvars[inventory_hostname]['ansible_' + iface_item]['type'] != 'bridge' + ) and ( + hostvars[inventory_hostname]['ansible_' + iface_item]['ipv4'] is defined or + hostvars[inventory_hostname]['ansible_' + iface_item]['ipv6'] is defined + ) and ( ( - hostvars[inventory_hostname]['ansible_' + iface_item]['type'] == 'bonding' - ) and ( - hostvars[inventory_hostname]['ansible_' + iface_item]['slaves'][0] is defined - ) and ( - hostvars[inventory_hostname]['ansible_' + iface_item]['mode'] in acceptable_bond_modes + hostvars[inventory_hostname]['ansible_' + 
iface_item]['type'] != 'bonding' + ) or ( + ( + hostvars[inventory_hostname]['ansible_' + iface_item]['type'] == 'bonding' + ) and ( + hostvars[inventory_hostname]['ansible_' + iface_item]['slaves'][0] is defined + ) and ( + hostvars[inventory_hostname]['ansible_' + iface_item]['mode'] in acceptable_bond_modes + ) ) ) ) - ) - with_items: - - "{{ ansible_interfaces | map('replace', '-','_') | list }}" - loop_control: - loop_var: iface_item - register: valid_network_interfaces - - name: Filter bonds with bad naming - ansible.builtin.set_fact: - net_iface="{{ bond_item }}" - when: >- - not 'skipped' in bond_item and ((bond_item['ansible_facts']['type'] == 'ether') or - ( (bond_item['ansible_facts']['type'] == 'bonding') and bond_item['ansible_facts']['bond_valid_name'] )) - with_items: - - "{{ valid_network_interfaces['results'] }}" - loop_control: - loop_var: bond_item - register: bb_filtered_list - - name: Generate output list - ansible.builtin.set_fact: - host_net: >- - {{ [bridge_interface] if bridge_interface is defined else bb_filtered_list.results | - reject('skipped') | map(attribute='bond_item.ansible_facts.otopi_net_host') | list }} - - import_tasks: ../filter_team_devices.yml - - import_tasks: ../filter_unsupported_vlan_devices.yml - - name: Generate list of all unsupported network devices - ansible.builtin.set_fact: - invalid_net_if: "{{ invalid_vlan_if + team_if }}" - - name: Filter unsupported interface types - ansible.builtin.set_fact: - otopi_host_net: "{{ host_net | difference(invalid_net_if) }}" - register: otopi_host_net - - name: Failed if only unsupported devices are available - ansible.builtin.fail: - msg: >- - Only unsupported devices {{ invalid_net_if | join(', ') }} are present. - Teaming and bond modes: Round Robin, TLB, ALB are unsupported. 
- Supported VLAN naming convention is: VLAN_PARENT.VLAN_ID - The following bond modes are supported: {{ acceptable_bond_modes }} - when: (otopi_host_net.ansible_facts.otopi_host_net | length == 0) - - name: Validate selected bridge interface if management bridge does not exist - ansible.builtin.fail: - msg: The selected network interface is not valid - when: - he_bridge_if not in otopi_host_net.ansible_facts.otopi_host_net and bridge_interface is not defined and - not he_just_collect_network_interfaces + with_items: + - "{{ ansible_interfaces | map('replace', '-','_') | list }}" + loop_control: + loop_var: iface_item + register: valid_network_interfaces + - name: Filter bonds with bad naming + ansible.builtin.set_fact: + net_iface="{{ bond_item }}" + when: >- + not 'skipped' in bond_item and ((bond_item['ansible_facts']['type'] == 'ether') or + ( (bond_item['ansible_facts']['type'] == 'bonding') and bond_item['ansible_facts']['bond_valid_name'] )) + with_items: + - "{{ valid_network_interfaces['results'] }}" + loop_control: + loop_var: bond_item + register: bb_filtered_list + - name: Generate output list + ansible.builtin.set_fact: + host_net: >- + {{ [bridge_interface] if bridge_interface is defined else bb_filtered_list.results | + reject('skipped') | map(attribute='bond_item.ansible_facts.otopi_net_host') | list }} + - import_tasks: ../filter_team_devices.yml + - import_tasks: ../filter_unsupported_vlan_devices.yml + - name: Generate list of all unsupported network devices + ansible.builtin.set_fact: + invalid_net_if: "{{ invalid_vlan_if + team_if }}" + - name: Filter unsupported interface types + ansible.builtin.set_fact: + otopi_host_net: "{{ host_net | difference(invalid_net_if) }}" + register: otopi_host_net + - name: Failed if only unsupported devices are available + ansible.builtin.fail: + msg: >- + Only unsupported devices {{ invalid_net_if | join(', ') }} are present. + Teaming and bond modes: Round Robin, TLB, ALB are unsupported. + Supported VLAN naming convention is: VLAN_PARENT.VLAN_ID + The following bond modes are supported: {{ acceptable_bond_modes }} + when: (otopi_host_net.ansible_facts.otopi_host_net | length == 0) + - name: Validate selected bridge interface if management bridge does not exist + ansible.builtin.fail: + msg: The selected network interface is not valid + when: + he_bridge_if not in otopi_host_net.ansible_facts.otopi_host_net and bridge_interface is not defined and + not he_just_collect_network_interfaces ... 
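The interface validation above narrows the candidate list in stages: it keeps only active interfaces with an address, accepts bonds only in the modes listed in acceptable_bond_modes, then strips teaming devices and badly named VLANs with the difference filter before failing if nothing usable remains. A small standalone sketch of that last filtering step, using made-up interface names (the real host_net and invalid_net_if values come from the earlier tasks in this file):

- name: Filter unsupported interface types (illustrative sketch with made-up data)
  vars:
    host_net: ['eth0', 'bond0', 'team0', 'eth0.10.20']
    invalid_net_if: ['team0', 'eth0.10.20']
  ansible.builtin.set_fact:
    # Result here would be ['eth0', 'bond0']: team devices and
    # VLAN names outside the VLAN_PARENT.VLAN_ID convention are dropped.
    otopi_host_net: "{{ host_net | difference(invalid_net_if) }}"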
diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/define_variables.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/define_variables.yml index 4467b1b64..584321fab 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/define_variables.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/define_variables.yml @@ -1,52 +1,52 @@ --- - name: Define Variables block: - - name: Define he_cloud_init_domain_name - block: - - name: Get domain name - ansible.builtin.command: hostname -d - changed_when: true - register: host_domain_name - - name: Set he_cloud_init_domain_name - ansible.builtin.set_fact: - he_cloud_init_domain_name: "{{ host_domain_name.stdout_lines[0] if host_domain_name.stdout_lines else '' }}" - when: he_cloud_init_domain_name is not defined - - name: Define he_cloud_init_host_name - ansible.builtin.set_fact: - he_cloud_init_host_name: "{{ he_fqdn }}" - - name: Define he_vm_uuid - block: - - name: Get uuid - ansible.builtin.command: uuidgen - changed_when: true - register: uuid - - name: Set he_vm_uuid - ansible.builtin.set_fact: - he_vm_uuid: "{{ uuid.stdout }}" - - name: Define he_nic_uuid - block: - - name: Get uuid - ansible.builtin.command: uuidgen - changed_when: true - register: uuid - - name: Set he_nic_uuid - ansible.builtin.set_fact: - he_nic_uuid: "{{ uuid.stdout }}" - - name: Define he_cdrom_uuid - block: - - name: Get uuid - ansible.builtin.command: uuidgen - changed_when: true - register: uuid - - name: Set he_cdrom_uuid - ansible.builtin.set_fact: - he_cdrom_uuid: "{{ uuid.stdout }}" - - name: Define Timezone - block: - - name: get timezone - ansible.builtin.shell: timedatectl | grep "Time zone" | awk '{print $3}' - changed_when: true - register: timezone - - name: Set he_time_zone - ansible.builtin.set_fact: - he_time_zone: "{{ timezone.stdout }}" + - name: Define he_cloud_init_domain_name + block: + - name: Get domain name + ansible.builtin.command: hostname -d + changed_when: true + register: host_domain_name + - name: Set he_cloud_init_domain_name + ansible.builtin.set_fact: + he_cloud_init_domain_name: "{{ host_domain_name.stdout_lines[0] if host_domain_name.stdout_lines else '' }}" + when: he_cloud_init_domain_name is not defined + - name: Define he_cloud_init_host_name + ansible.builtin.set_fact: + he_cloud_init_host_name: "{{ he_fqdn }}" + - name: Define he_vm_uuid + block: + - name: Get uuid + ansible.builtin.command: uuidgen + changed_when: true + register: uuid + - name: Set he_vm_uuid + ansible.builtin.set_fact: + he_vm_uuid: "{{ uuid.stdout }}" + - name: Define he_nic_uuid + block: + - name: Get uuid + ansible.builtin.command: uuidgen + changed_when: true + register: uuid + - name: Set he_nic_uuid + ansible.builtin.set_fact: + he_nic_uuid: "{{ uuid.stdout }}" + - name: Define he_cdrom_uuid + block: + - name: Get uuid + ansible.builtin.command: uuidgen + changed_when: true + register: uuid + - name: Set he_cdrom_uuid + ansible.builtin.set_fact: + he_cdrom_uuid: "{{ uuid.stdout }}" + - name: Define Timezone + block: + - name: get timezone + ansible.builtin.shell: timedatectl | grep "Time zone" | awk '{print $3}' + changed_when: true + register: timezone + - name: Set he_time_zone + ansible.builtin.set_fact: + he_time_zone: "{{ timezone.stdout }}" diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_data_center_name.yml 
b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_data_center_name.yml index 1c242b845..1203f0b8b 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_data_center_name.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_data_center_name.yml @@ -1,15 +1,15 @@ --- - name: Validate Data Center name format block: - - name: Fail if Data Center name format is incorrect - ansible.builtin.fail: - msg: >- - "Invalid Data Center name format. Data Center name may only contain letters, numbers, '-', or '_'." - " Got {{ he_data_center }}" - when: not he_data_center | regex_search( "^[a-zA-Z0-9_-]+$" ) - - name: Validate Cluster name - ansible.builtin.fail: - msg: >- - "Cluster name cannot be 'Default'. This is a reserved name for the default DataCenter. Please choose" - " another name for the cluster" - when: he_data_center != "Default" and he_cluster == "Default" + - name: Fail if Data Center name format is incorrect + ansible.builtin.fail: + msg: >- + "Invalid Data Center name format. Data Center name may only contain letters, numbers, '-', or '_'." + " Got {{ he_data_center }}" + when: not he_data_center | regex_search( "^[a-zA-Z0-9_-]+$" ) + - name: Validate Cluster name + ansible.builtin.fail: + msg: >- + "Cluster name cannot be 'Default'. This is a reserved name for the default DataCenter. Please choose" + " another name for the cluster" + when: he_data_center != "Default" and he_cluster == "Default" diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml index 3480b2afe..9652f4286 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml @@ -1,14 +1,14 @@ --- - name: Check firewalld status block: - - name: Check firewalld status - ansible.builtin.systemd: - name: firewalld - register: firewalld_s - - name: Enforce firewalld status - ansible.builtin.fail: - msg: > - firewalld is required to be enabled and active in order - to correctly deploy hosted-engine. - Please check, fix accordingly and re-deploy. - when: firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 'masked' + - name: Check firewalld status + ansible.builtin.systemd: + name: firewalld + register: firewalld_s + - name: Enforce firewalld status + ansible.builtin.fail: + msg: > + firewalld is required to be enabled and active in order + to correctly deploy hosted-engine. + Please check, fix accordingly and re-deploy. 
+ when: firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 'masked' diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_gateway.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_gateway.yml index 3083db2a0..4712fc7ff 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_gateway.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_gateway.yml @@ -1,24 +1,24 @@ --- - name: Define default gateway block: - - name: Get default gateway IPv4 - ansible.builtin.shell: ip r | grep default | awk '{print $3}' - changed_when: true - register: get_gateway_4 - when: he_default_gateway_4 is not defined or he_default_gateway_4 is none or not he_default_gateway_4 - - name: Get default gateway IPv6 - ansible.builtin.shell: ip -6 r | grep default | awk '{print $3}' - changed_when: true - register: get_gateway_6 - when: he_default_gateway_6 is not defined or he_default_gateway_6 is none or not he_default_gateway_6 - - name: Set he_gateway - ansible.builtin.set_fact: - he_gateway: >- - {{ get_gateway_4.stdout_lines[0] if get_gateway_4.stdout_lines else - get_gateway_6.stdout_lines[0] if get_gateway_6.stdout_lines else - '' - }} - when: he_gateway is not defined or he_gateway is none or not he_gateway|trim + - name: Get default gateway IPv4 + ansible.builtin.shell: ip r | grep default | awk '{print $3}' + changed_when: true + register: get_gateway_4 + when: he_default_gateway_4 is not defined or he_default_gateway_4 is none or not he_default_gateway_4 + - name: Get default gateway IPv6 + ansible.builtin.shell: ip -6 r | grep default | awk '{print $3}' + changed_when: true + register: get_gateway_6 + when: he_default_gateway_6 is not defined or he_default_gateway_6 is none or not he_default_gateway_6 + - name: Set he_gateway + ansible.builtin.set_fact: + he_gateway: >- + {{ get_gateway_4.stdout_lines[0] if get_gateway_4.stdout_lines else + get_gateway_6.stdout_lines[0] if get_gateway_6.stdout_lines else + '' + }} + when: he_gateway is not defined or he_gateway is none or not he_gateway|trim - name: Fail if there is no gateway ansible.builtin.fail: msg: "No default gateway is defined" diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_mac_address.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_mac_address.yml index e5b7d77e0..572715eeb 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_mac_address.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_mac_address.yml @@ -1,15 +1,15 @@ --- - name: Define Engine VM MAC address block: - - name: Generate unicast MAC address - ansible.builtin.shell: od -An -N6 -tx1 /dev/urandom | sed -e 's/^ *//' -e 's/ */:/g' -e 's/:$//' -e 's/^\(.\)[13579bdf]/\10/' - changed_when: true - register: mac_address - - name: Set he_vm_mac_addr - ansible.builtin.set_fact: - he_vm_mac_addr: >- - {{ mac_address.stdout if he_vm_mac_addr is not defined or he_vm_mac_addr is none else he_vm_mac_addr }} - - name: Fail if MAC address structure is incorrect - ansible.builtin.fail: - msg: "Invalid unicast MAC address format. 
Got {{ he_vm_mac_addr }}" - when: not he_vm_mac_addr | regex_search( "^[a-fA-F0-9][02468aAcCeE](:[a-fA-F0-9]{2}){5}$" ) + - name: Generate unicast MAC address + ansible.builtin.shell: od -An -N6 -tx1 /dev/urandom | sed -e 's/^ *//' -e 's/ */:/g' -e 's/:$//' -e 's/^\(.\)[13579bdf]/\10/' + changed_when: true + register: mac_address + - name: Set he_vm_mac_addr + ansible.builtin.set_fact: + he_vm_mac_addr: >- + {{ mac_address.stdout if he_vm_mac_addr is not defined or he_vm_mac_addr is none else he_vm_mac_addr }} + - name: Fail if MAC address structure is incorrect + ansible.builtin.fail: + msg: "Invalid unicast MAC address format. Got {{ he_vm_mac_addr }}" + when: not he_vm_mac_addr | regex_search( "^[a-fA-F0-9][02468aAcCeE](:[a-fA-F0-9]{2}){5}$" ) diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_memory_size.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_memory_size.yml index 0b867deb8..ff4971228 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_memory_size.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_memory_size.yml @@ -1,17 +1,17 @@ --- - name: Get available memory amount block: - - name: Get free memory - ansible.builtin.shell: free -m | grep Mem | awk '{print $4}' - changed_when: true - register: free_mem - - name: Get cached memory - ansible.builtin.shell: free -m | grep Mem | awk '{print $6}' - changed_when: true - register: cached_mem - - name: Set Max memory - ansible.builtin.set_fact: - max_mem: "{{ free_mem.stdout|int + cached_mem.stdout|int - he_reserved_memory_MB + he_avail_memory_grace_MB }}" + - name: Get free memory + ansible.builtin.shell: free -m | grep Mem | awk '{print $4}' + changed_when: true + register: free_mem + - name: Get cached memory + ansible.builtin.shell: free -m | grep Mem | awk '{print $6}' + changed_when: true + register: cached_mem + - name: Set Max memory + ansible.builtin.set_fact: + max_mem: "{{ free_mem.stdout|int + cached_mem.stdout|int - he_reserved_memory_MB + he_avail_memory_grace_MB }}" - name: set he_mem_size_MB to max available if not defined ansible.builtin.set_fact: he_mem_size_MB: "{{ he_mem_size_MB if he_mem_size_MB != 'max' else max_mem }}" diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_network_test.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_network_test.yml index 2bc7b9a94..0ccff4735 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_network_test.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_network_test.yml @@ -1,37 +1,37 @@ --- - name: Validate network connectivity check configuration block: - - name: Fail if he_network_test is not valid - ansible.builtin.fail: - msg: "Invalid he_network_test defined" - changed_when: true - when: he_network_test not in ['dns', 'ping', 'tcp', 'none'] - - name: Validate TCP network connectivity check parameters - block: - - name: Debug var he_tcp_t_address - ansible.builtin.debug: - var: he_tcp_t_address - - name: Fail if he_tcp_t_address is not defined - ansible.builtin.fail: - msg: "No he_tcp_t_address is defined" - changed_when: true - when: - ( he_tcp_t_address is undefined ) or - ( he_tcp_t_address is none ) or - ( he_tcp_t_address|trim|length == 0 ) - - name: Debug var he_tcp_t_port - ansible.builtin.debug: - var: he_tcp_t_port - 
- name: Fail if he_tcp_t_port is not defined - ansible.builtin.fail: - msg: "No he_tcp_t_port is defined" - changed_when: true - when: - ( he_tcp_t_port is undefined ) or - ( he_tcp_t_port is none ) - - name: Fail if he_tcp_t_port is no integer - ansible.builtin.fail: - msg: "he_tcp_t_port has to be integer" - changed_when: true - when: not he_tcp_t_port|int - when: he_network_test == 'tcp' + - name: Fail if he_network_test is not valid + ansible.builtin.fail: + msg: "Invalid he_network_test defined" + changed_when: true + when: he_network_test not in ['dns', 'ping', 'tcp', 'none'] + - name: Validate TCP network connectivity check parameters + block: + - name: Debug var he_tcp_t_address + ansible.builtin.debug: + var: he_tcp_t_address + - name: Fail if he_tcp_t_address is not defined + ansible.builtin.fail: + msg: "No he_tcp_t_address is defined" + changed_when: true + when: + ( he_tcp_t_address is undefined ) or + ( he_tcp_t_address is none ) or + ( he_tcp_t_address|trim|length == 0 ) + - name: Debug var he_tcp_t_port + ansible.builtin.debug: + var: he_tcp_t_port + - name: Fail if he_tcp_t_port is not defined + ansible.builtin.fail: + msg: "No he_tcp_t_port is defined" + changed_when: true + when: + ( he_tcp_t_port is undefined ) or + ( he_tcp_t_port is none ) + - name: Fail if he_tcp_t_port is no integer + ansible.builtin.fail: + msg: "he_tcp_t_port has to be integer" + changed_when: true + when: not he_tcp_t_port|int + when: he_network_test == 'tcp' diff --git a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_vcpus_count.yml b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_vcpus_count.yml index e84ec0a16..7c113aee4 100644 --- a/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_vcpus_count.yml +++ b/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_vcpus_count.yml @@ -1,13 +1,13 @@ --- - name: Define he_maxvcpus block: - - name: get max cpus - ansible.builtin.command: grep -c ^processor /proc/cpuinfo - changed_when: true - register: max_cpus - - name: Set he_maxvcpus - ansible.builtin.set_fact: - he_maxvcpus: "{{ max_cpus.stdout }}" + - name: get max cpus + ansible.builtin.command: grep -c ^processor /proc/cpuinfo + changed_when: true + register: max_cpus + - name: Set he_maxvcpus + ansible.builtin.set_fact: + he_maxvcpus: "{{ max_cpus.stdout }}" - name: Set he_vcpus to maximum amount if not defined ansible.builtin.set_fact: he_vcpus: "{{ he_vcpus if he_vcpus != 'max' else he_maxvcpus }}" diff --git a/ansible_collections/ovirt/ovirt/roles/repositories/tasks/rh-subscription.yml b/ansible_collections/ovirt/ovirt/roles/repositories/tasks/rh-subscription.yml index e7545b4bc..fe44a4412 100644 --- a/ansible_collections/ovirt/ovirt/roles/repositories/tasks/rh-subscription.yml +++ b/ansible_collections/ovirt/ovirt/roles/repositories/tasks/rh-subscription.yml @@ -73,8 +73,6 @@ - name: Enable dnf modules ansible.builtin.command: "dnf module enable -y {{ ovirt_repositories_rh_dnf_modules | join(' ') }}" - args: - warn: false when: - ovirt_repositories_ovirt_version|string >= '4.4' - ovirt_repositories_target_host == 'engine' diff --git a/ansible_collections/ovirt/ovirt/roles/repositories/tasks/rpm.yml b/ansible_collections/ovirt/ovirt/roles/repositories/tasks/rpm.yml index 25e0e4040..343e4c18e 100644 --- a/ansible_collections/ovirt/ovirt/roles/repositories/tasks/rpm.yml +++ b/ansible_collections/ovirt/ovirt/roles/repositories/tasks/rpm.yml @@ -13,8 +13,6 @@ - 
name: Enable dnf modules ansible.builtin.command: "dnf module enable -y {{ ovirt_repositories_ovirt_dnf_modules | join(' ') }}" - args: - warn: false when: - ovirt_repositories_ovirt_version|string >= '4.4' - ovirt_repositories_target_host == 'engine' diff --git a/ansible_collections/ovirt/ovirt/roles/repositories/tasks/satellite-subscription.yml b/ansible_collections/ovirt/ovirt/roles/repositories/tasks/satellite-subscription.yml index bf1adc823..3f941aae9 100644 --- a/ansible_collections/ovirt/ovirt/roles/repositories/tasks/satellite-subscription.yml +++ b/ansible_collections/ovirt/ovirt/roles/repositories/tasks/satellite-subscription.yml @@ -20,8 +20,6 @@ - name: Enable dnf modules ansible.builtin.command: "dnf module enable -y {{ ovirt_repositories_rh_dnf_modules | join(' ') }}" - args: - warn: false when: - ovirt_repositories_ovirt_version|string >= '4.4' - ovirt_repositories_target_host == 'engine' diff --git a/ansible_collections/ovirt/ovirt/roles/vm_infra/README.md b/ansible_collections/ovirt/ovirt/roles/vm_infra/README.md index 807cb232d..ecd7b1c08 100644 --- a/ansible_collections/ovirt/ovirt/roles/vm_infra/README.md +++ b/ansible_collections/ovirt/ovirt/roles/vm_infra/README.md @@ -66,6 +66,9 @@ The `vms` and `profile` variables can contain following attributes, note that if | clone | No | If yes then the disks of the created virtual machine will be cloned and independent of the template. This parameter is used only when state is running or present and VM didn't exist before. | | template | Blank | Name of template that the virtual machine should be based on. | | template_version | UNDEF | Version number of the template to be used for VM. By default the latest available version of the template is used. | +| boot_disk_name | UNDEF | Renames the bootable disk after the VM is created. Useful when creating VMs from templates | +| boot_disk_use_vm_name_prefix | true | Use the name of the virtual machine as prefix when renaming the VM boot disk. The resulting boot disk name would be <i>{{vm_name}}_{{boot_disk_name}}</i>| +| boot_disk_size | UNDEF | Resizes the bootable disk after the VM is created. A suffix that complies to the IEC 60027-2 standard (for example 10GiB, 1024MiB) can be used. | | memory | UNDEF | Amount of virtual machine memory. | | memory_max | UNDEF | Upper bound of virtual machine memory up to which memory hot-plug can be performed. | | memory_guaranteed | UNDEF | Amount of minimal guaranteed memory of the Virtual Machine. Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB). <i>memory_guaranteed</i> parameter can't be lower than <i>memory</i> parameter. 
| diff --git a/ansible_collections/ovirt/ovirt/roles/vm_infra/tasks/rename_resize_boot_disk.yml b/ansible_collections/ovirt/ovirt/roles/vm_infra/tasks/rename_resize_boot_disk.yml new file mode 100644 index 000000000..1c0929f90 --- /dev/null +++ b/ansible_collections/ovirt/ovirt/roles/vm_infra/tasks/rename_resize_boot_disk.yml @@ -0,0 +1,59 @@ +--- +- name: "Get the '{{ current_vm.name }}' VM info" + ovirt_vm_info: + auth: "{{ ovirt_auth }}" + pattern: "name={{ current_vm.name }}" + fetch_nested: true + nested_attributes: + - "bootable" + all_content: true + register: current_vm_info + +- name: "Get the '{{ current_vm.name }}' VM boot disk ID" + ansible.builtin.set_fact: + current_vm_boot_disk_id: "{{ item.id }}" + loop: "{{ current_vm_info.ovirt_vms[0].disk_attachments }} " + when: item.bootable is defined and item.bootable + +- name: "Get the '{{ current_vm.name }}' VM boot disk info" + ovirt_disk_info: + auth: "{{ ovirt_auth }}" + pattern: "id={{ current_vm_boot_disk_id }}" + fetch_nested: true + register: current_vm_boot_disk_info + when: current_vm_boot_disk_id is defined + +- name: "Get the Storage Domain info from the '{{ current_vm.name }}' VM boot disk" + ovirt_storage_domain: + auth: "{{ ovirt_auth }}" + id: "{{ current_vm_boot_disk_info.ovirt_disks[0].storage_domains[0].id }}" + register: current_vm_boot_disk_sd_info + when: current_vm_boot_disk_id is defined + +- name: "Get the '{{ current_vm.name }}' VM boot disk name" + ansible.builtin.set_fact: + current_vm_boot_disk_name: "{{ current_vm.boot_disk_name | default(current_vm.profile.boot_disk_name) }}" + when: current_vm.boot_disk_name is defined or current_vm.profile.boot_disk_name is defined + +- name: "Use '{{ current_vm.name }}' as prefix for the VM boot disk name" + ansible.builtin.set_fact: + current_vm_boot_disk_use_vm_name_prefix: > + "{{ current_vm.boot_disk_use_vm_name_prefix | default(current_vm.profile.boot_disk_use_vm_name_prefix) | default(true) }}" + +- name: "Rename the '{{ current_vm.name }}' VM boot disk" + ovirt_disk: + auth: "{{ ovirt_auth }}" + id: "{{ current_vm_boot_disk_id }}" + name: "{% if current_vm_boot_disk_use_vm_name_prefix %}{{ current_vm.name }}_{% endif %}{{ current_vm_boot_disk_name }}" + vm_name: "{{ current_vm.name }}" + storage_domain: "{{ current_vm_boot_disk_sd_info.storagedomain.name }}" + when: current_vm_boot_disk_id is defined and current_vm_boot_disk_name is defined + +- name: "Resize the '{{ current_vm.name }}' VM boot disk" + ovirt_disk: + auth: "{{ ovirt_auth }}" + id: "{{ current_vm_boot_disk_id }}" + size: "{{ current_vm.boot_disk_size | default(current_vm.profile.boot_disk_size) }}" + vm_name: "{{ current_vm.name }}" + when: ( current_vm.boot_disk_size is defined or current_vm.profile.boot_disk_size is defined ) and current_vm_boot_disk_id is defined +... 
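The table rows and the new rename_resize_boot_disk.yml task file above introduce the vm_infra boot disk handling added in this release (boot_disk_name, boot_disk_use_vm_name_prefix, boot_disk_size). A minimal usage sketch follows; the engine connection values, cluster, and template names are placeholders for illustration only and are not taken from this commit. The boot_disk_* attributes may be set either directly on a VM entry or inside its profile, as the new task file resolves both locations.

```yaml
---
# Sketch only: engine_url, credentials, cluster and template names are assumed values.
- name: Create a VM and rename/resize its boot disk via ovirt.ovirt.vm_infra
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    engine_url: https://engine.example.com/ovirt-engine/api   # placeholder
    engine_user: admin@internal                                # placeholder
    engine_password: "{{ vault_engine_password }}"             # placeholder, e.g. from vault
    vms:
      - name: web01
        profile:
          cluster: Default
          template: rhel9-template            # assumed template name
          boot_disk_name: os_disk             # resulting disk name: web01_os_disk
          boot_disk_use_vm_name_prefix: true  # default is true per the table above
          boot_disk_size: 20GiB               # IEC 60027-2 suffix, as documented above
          memory: 4GiB
  roles:
    - ovirt.ovirt.vm_infra
```

With boot_disk_use_vm_name_prefix left at its default of true, the renamed disk takes the {{ vm_name }}_{{ boot_disk_name }} form described in the table; setting it to false would keep boot_disk_name as-is.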
diff --git a/ansible_collections/ovirt/ovirt/roles/vm_infra/tasks/vm_state_present.yml b/ansible_collections/ovirt/ovirt/roles/vm_infra/tasks/vm_state_present.yml index b34fba9d9..867579c67 100644 --- a/ansible_collections/ovirt/ovirt/roles/vm_infra/tasks/vm_state_present.yml +++ b/ansible_collections/ovirt/ovirt/roles/vm_infra/tasks/vm_state_present.yml @@ -20,6 +20,12 @@ - name: Apply any Affinity Labels import_tasks: affinity_labels.yml +- name: Rename and resize boot disk + include_tasks: rename_resize_boot_disk.yml + with_items: "{{ create_vms }}" + loop_control: + loop_var: "current_vm" + - name: Manage profile disks ovirt_disk: auth: "{{ ovirt_auth }}" diff --git a/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.10.txt b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.10.txt index 9d12b04e8..0e52cf6b1 100644 --- a/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.10.txt +++ b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.10.txt @@ -1,2 +1 @@ -automation/build.sh shebang!skip build.sh shebang!skip diff --git a/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.11.txt b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.11.txt index cd443df48..13736f5f5 100644 --- a/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.11.txt +++ b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.11.txt @@ -1,6 +1,5 @@ -automation/build.sh shebang!skip build.sh shebang!skip -changelogs/fragments/.placeholder changelog!skip +changelogs/fragments/.keep changelog!skip plugins/callback/stdout.py shebang!skip roles/disaster_recovery/files/fail_back.py shebang!skip roles/disaster_recovery/files/bcolors.py shebang!skip @@ -10,4 +9,4 @@ roles/disaster_recovery/files/generate_vars.py shebang!skip roles/disaster_recovery/files/generate_vars_test.py shebang!skip roles/disaster_recovery/files/validator.py shebang!skip roles/disaster_recovery/files/vault_secret.sh shellcheck!skip -roles/disaster_recovery/files/ovirt-dr shebang!skip
\ No newline at end of file +roles/disaster_recovery/files/ovirt-dr shebang!skip diff --git a/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.12.txt b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.12.txt index cd66c06fc..88f8b325a 100644 --- a/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.12.txt +++ b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.12.txt @@ -1,6 +1,5 @@ -automation/build.sh shebang!skip build.sh shebang!skip -changelogs/fragments/.placeholder changelog!skip +changelogs/fragments/.keep changelog!skip plugins/callback/stdout.py shebang!skip roles/disaster_recovery/files/fail_back.py shebang!skip roles/disaster_recovery/files/bcolors.py shebang!skip diff --git a/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.13.txt b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.13.txt index 9428c0440..dd75d91eb 100644 --- a/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.13.txt +++ b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.13.txt @@ -1,6 +1,5 @@ -automation/build.sh shebang!skip build.sh shebang!skip -changelogs/fragments/.placeholder changelog!skip +changelogs/fragments/.keep changelog!skip plugins/callback/stdout.py shebang!skip plugins/callback/stdout.py validate-modules!skip plugins/inventory/ovirt.py validate-modules!skip diff --git a/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.14.txt b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.14.txt new file mode 100644 index 000000000..4d666b868 --- /dev/null +++ b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.14.txt @@ -0,0 +1,14 @@ +build.sh shebang!skip +plugins/callback/stdout.py shebang!skip +plugins/callback/stdout.py validate-modules!skip +plugins/inventory/ovirt.py validate-modules!skip +roles/disaster_recovery/files/fail_back.py shebang!skip +roles/disaster_recovery/files/bcolors.py shebang!skip +roles/disaster_recovery/files/fail_over.py shebang!skip +roles/disaster_recovery/files/generate_mapping.py shebang!skip +roles/disaster_recovery/files/generate_vars.py shebang!skip +roles/disaster_recovery/files/generate_vars_test.py shebang!skip +roles/disaster_recovery/files/validator.py shebang!skip +roles/disaster_recovery/files/vault_secret.sh shellcheck!skip +roles/disaster_recovery/files/ovirt-dr shebang!skip +plugins/module_utils/cloud.py pylint!skip diff --git a/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.9.txt b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.9.txt index 9d12b04e8..0e52cf6b1 100644 --- a/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.9.txt +++ b/ansible_collections/ovirt/ovirt/tests/sanity/ignore-2.9.txt @@ -1,2 +1 @@ -automation/build.sh shebang!skip build.sh shebang!skip |