author:    Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-28 16:03:42 +0000
committer: Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-28 16:03:42 +0000
commit:    66cec45960ce1d9c794e9399de15c138acb18aed (patch)
tree:      59cd19d69e9d56b7989b080da7c20ef1a3fe2a5a /ansible_collections/awx
parent:    Initial commit. (diff)
download:  ansible-upstream.tar.xz, ansible-upstream.zip

    Adding upstream version 7.3.0+dfsg.

    Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>

Diffstat (limited to 'ansible_collections/awx'):
133 files changed, 24535 insertions, 0 deletions
diff --git a/ansible_collections/awx/awx/COPYING b/ansible_collections/awx/awx/COPYING
new file mode 100644
index 00000000..b743e04e
--- /dev/null
+++ b/ansible_collections/awx/awx/COPYING
@@ -0,0 +1,674 @@

GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for software and other kinds of works.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.

Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and modification follow.

TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based on the Program.

To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.

1. Source Code.

The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.

A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.

The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.

The Corresponding Source for a work in source code form is that same work.

2. Basic Permissions.

All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.

When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.

4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

a) The work must carry prominent notices stating that you modified it, and giving a relevant date.

b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".

c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.

d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.

A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.

6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:

a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.

b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.

c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.

d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.

e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.

A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.

"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.

If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).

The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.

7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:

a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or

b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or

c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or

d) Limiting the use for publicity purposes of names of licensors or authors of the material; or

e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or

f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.

All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.

8. Termination.

You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.

10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.

11. Patents.

A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom.

If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.

13. Use with the GNU Affero General Public License.

Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.

If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.

Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.

15. Disclaimer of Warranty.

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

17. Interpretation of Sections 15 and 16.

If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box".

You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>.

The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library.
If this is what you want to do, use the GNU Lesser General +Public License instead of this License. But first, please read +<http://www.gnu.org/philosophy/why-not-lgpl.html>. diff --git a/ansible_collections/awx/awx/FILES.json b/ansible_collections/awx/awx/FILES.json new file mode 100644 index 00000000..53b20986 --- /dev/null +++ b/ansible_collections/awx/awx/FILES.json @@ -0,0 +1,1587 @@ +{ + "files": [ + { + "name": ".", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "images", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "images/completeness_test_output.png", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "6367684c4b5edd3e1e8fdcb9270d68ca54040d5d17108734f3d3a2b9df5878ba", + "format": 1 + }, + { + "name": "plugins", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "plugins/doc_fragments", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "plugins/doc_fragments/auth_legacy.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "c2f10b81ecb89088c7c295430d4a71de26e3700b26e8344cdc7950908a738fd3", + "format": 1 + }, + { + "name": "plugins/doc_fragments/auth.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "08510309125b9276dca6553a3c77436c0a225c250eea33d54be356a68a06a5f3", + "format": 1 + }, + { + "name": "plugins/doc_fragments/auth_plugin.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "80afe672d9386df036747cda82e54091e9717cdecfeab47b8567502b2ac3fbd1", + "format": 1 + }, + { + "name": "plugins/modules", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "plugins/modules/job_list.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "2ea8024bfc9612c005745a13a508c40d320b4c204bf18fcd495f72789d9adb40", + "format": 1 + }, + { + "name": 
"plugins/modules/credential_input_source.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "d90dd76b3b2a42ceaf423d05755c4c61bc565370f7905aecf9a516172761b60b", + "format": 1 + }, + { + "name": "plugins/modules/instance_group.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "62bfe82b93ddeafcc72bf9c8a7a12fec6df00bf42d7b5e2c55de17053de276da", + "format": 1 + }, + { + "name": "plugins/modules/workflow_job_template_node.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "37f6e42c3ba5ab5c6df1c9c7d336e0377346cad811aee6a9ed6004f29770adb8", + "format": 1 + }, + { + "name": "plugins/modules/settings.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "6a382df72aa10c2d5402a2a66b430c789b0b3588c3f1dca226f9ad09b01c9bdb", + "format": 1 + }, + { + "name": "plugins/modules/job_cancel.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "d64f698909919b05c9c47a65f24c861c3cabe33c039944f6120d49a2ac7d40da", + "format": 1 + }, + { + "name": "plugins/modules/export.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "478b9c1a9808f40a284d733e9dd9739767bdfa5ddf6c14720bc8f325a5433195", + "format": 1 + }, + { + "name": "plugins/modules/ad_hoc_command_wait.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "67bc716ec04dfc77cb751cda6013ee54fa0cd3ed3afabc5ba0d146cc9712c996", + "format": 1 + }, + { + "name": "plugins/modules/controller_meta.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "c66ebbe3a0eab6a9d28d517824ebf8478afdf14981c6c931f08592503c243cdd", + "format": 1 + }, + { + "name": "plugins/modules/notification_template.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "473f8d494ba4356c93b76ccc3b25e95ea5afd6f413ee30d244070e2e7ffd66bd", + "format": 1 + }, + { + "name": "plugins/modules/credential_type.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": 
"c56a3cf4eddc284b0c83e55a3f58d19a9d315a68abf513d232c3fe5b81ec85f3", + "format": 1 + }, + { + "name": "plugins/modules/ad_hoc_command_cancel.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "3338e10af9ccd0e4178b8e1ec1e7064b00ab90e64665f846a2123f10d9d151f4", + "format": 1 + }, + { + "name": "plugins/modules/team.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "d2ace1a41e1456f7d187ad7a1d3becdddd27cd945dcee863a048add0dbfac9f6", + "format": 1 + }, + { + "name": "plugins/modules/job_wait.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "7e7459abf351f6c172401eec4ba579dc8566f8a55fd022cc8eec9fa5a3399067", + "format": 1 + }, + { + "name": "plugins/modules/user.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "5bca255128e1f376a15622a9dfbf6a469c23f6d9528a5df6e318a503402214e6", + "format": 1 + }, + { + "name": "plugins/modules/application.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "a932d66c23f578fc62a733ac466f5732d0ed2d2192252b108de21c4da219880c", + "format": 1 + }, + { + "name": "plugins/modules/execution_environment.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "219ffa0fca58cbfa76c1dba8dba9d3060b1e6816c84a201241645b90b59a75c0", + "format": 1 + }, + { + "name": "plugins/modules/credential.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "6a1882a9893a096ae4f0a89028e4447719daa33051eaf54919e88d07a3df8ef5", + "format": 1 + }, + { + "name": "plugins/modules/ad_hoc_command.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "4dcafb33a0487b4200a6abcf3283dd55335de9102a2740c93e24b0e9e7ef224d", + "format": 1 + }, + { + "name": "plugins/modules/group.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "5aaa56a12e55ed92aba4a591ef493d013df47dcd29371664837f5405ff52631f", + "format": 1 + }, + { + "name": "plugins/modules/job_template.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": 
"8111cffd44f026c9997ab2315ea3a2fd984754caa5953a89ed6da9d9a257bcd1", + "format": 1 + }, + { + "name": "plugins/modules/label.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "65e6ecc50888ebfae7498ba414325a715363676520af41f65aa8a0cecc19ea9d", + "format": 1 + }, + { + "name": "plugins/modules/instance.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "ae8cbc633720f99c822f3cd5fe459605b241f0db6152fb4293f238864c0b7513", + "format": 1 + }, + { + "name": "plugins/modules/workflow_job_template.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "5f3c097d5b750d32e21e7ab3d598052ff6268926e4463663d95c6ec29d252433", + "format": 1 + }, + { + "name": "plugins/modules/__init__.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", + "format": 1 + }, + { + "name": "plugins/modules/workflow_launch.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "91eabbcdfed14efb72a6b02db83cd4f92c811a77e55119e9b0fefb6453eee953", + "format": 1 + }, + { + "name": "plugins/modules/schedule.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "42f1c845cb65fc43b5ccc1a08d98ea4cc4b4d0aefbba3c88a454e3497a711e19", + "format": 1 + }, + { + "name": "plugins/modules/import.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "a7a03186251ef644ba03c49e7e23a799f8046abddb9ea20fff68dd09fe759680", + "format": 1 + }, + { + "name": "plugins/modules/token.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "df79756cfc32e63b15e46c6bed1502c780db0257f54fcecf617960285c0f3286", + "format": 1 + }, + { + "name": "plugins/modules/job_launch.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "adb57a4499f754bce74741f5a15ab5d00bf318199796180a627549c6699693e2", + "format": 1 + }, + { + "name": "plugins/modules/host.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": 
"e42e558d5f91555f6f4f32b186d32d2920aad6126b164448c6258a1ee9f847ef", + "format": 1 + }, + { + "name": "plugins/modules/license.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "d285b03abaf448db184ec0304d95206115e7d3f0cf28adba009c0c84084f5f52", + "format": 1 + }, + { + "name": "plugins/modules/workflow_approval.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "9d94dcf99852cd8d85bc90580fe55fcb5c28c9ecc09e4b188c69ea95a14b4af9", + "format": 1 + }, + { + "name": "plugins/modules/inventory.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "54620e4006a83c9641baefdbe4a8b953ee124dc1654a3ccae487461db2b853b4", + "format": 1 + }, + { + "name": "plugins/modules/inventory_source.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "160601506e4b6d8e62f05dbff6a54251b29204dc6fa2b101f07e2c24e5f2bc95", + "format": 1 + }, + { + "name": "plugins/modules/project_update.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "2b06aacf9e51faa8b51fe770d3663e4e6e6d9e382769edf6883cd04414d3cd8c", + "format": 1 + }, + { + "name": "plugins/modules/subscriptions.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "f497ab9ada8f89650422bf85deef386e32b774dfff9e1de07b387fba32d890a8", + "format": 1 + }, + { + "name": "plugins/modules/inventory_source_update.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "8a942beea88f174b16ee262e2332e26a1de2744d88fccfe79aa7b11a11fbf9dc", + "format": 1 + }, + { + "name": "plugins/modules/organization.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "020576aef74ec4574dbe35eab8323fcffa7bd93d08a092310949e7bcec0eb196", + "format": 1 + }, + { + "name": "plugins/modules/project.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "4d6b532d422b72b4274218ec0119035b5871dff14c89d7ae84ea87db8a475150", + "format": 1 + }, + { + "name": "plugins/modules/role.py", + "ftype": "file", + "chksum_type": "sha256", + 
"chksum_sha256": "26f6dfe334c409b0ead538ff1c9a1c20c88d673db374fabdd5b3cfaeeb30e70e", + "format": 1 + }, + { + "name": "plugins/modules/workflow_node_wait.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "5b19778b005fbaa3e0a3abc645a6d6452bc0ad52e89fe04141d051f6ddafbb73", + "format": 1 + }, + { + "name": "plugins/lookup", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "plugins/lookup/schedule_rrule.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "c3ec7b8f134eca3a9f04156213b584792fc4e3397e3b9f82b5044e9ec662c7a2", + "format": 1 + }, + { + "name": "plugins/lookup/controller_api.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "5e79f19c9dee4fa0c3a88126a630fa6163249c332d73a44370f64836e22d4b27", + "format": 1 + }, + { + "name": "plugins/lookup/schedule_rruleset.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "55753e82ed1caddd0fddc88043d5ea2bd90e0836ff952962a2482278acefc49c", + "format": 1 + }, + { + "name": "plugins/module_utils", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "plugins/module_utils/awxkit.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "8b2398e4e7893f203b26f6c85d510cc4c41a79c53e1937710807233e62e35f58", + "format": 1 + }, + { + "name": "plugins/module_utils/tower_legacy.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "442535992f6564ac689645ff6e880848762eafc0d93a3255cbe5bedec5eefd58", + "format": 1 + }, + { + "name": "plugins/module_utils/controller_api.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "fe84a604f28674c8f44d3241c7353984cc59fa622bafaede53729d7ae58de9c3", + "format": 1 + }, + { + "name": "plugins/inventory", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "plugins/inventory/controller.py", + "ftype": "file", + "chksum_type": "sha256", + 
"chksum_sha256": "29e7cd36a2b18ee616e31cbbec6a6e103f3f18ebe13f2bb87167b915163ca4bf", + "format": 1 + }, + { + "name": "meta", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "meta/runtime.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "3f5286a184070df4d15903ecf3d9204b03411b9e060900cf73902d8e7e4ee5ba", + "format": 1 + }, + { + "name": "bindep.txt", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "7205dda85d2cd5501b3344e9f18e4acd09583056aab5e8a05554ba29a3b8fad8", + "format": 1 + }, + { + "name": "TESTING.md", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "4691e79c8038d8e985610fb613cd2f4799d4740b0a6ca1b72d3266528088a272", + "format": 1 + }, + { + "name": "test", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "test/awx", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "test/awx/test_credential_input_source.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "9637a418c0b0e59261ec0d1c206ff2d3574a41a8a169068bbf74588e3a4214b2", + "format": 1 + }, + { + "name": "test/awx/test_schedule.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "dadfd1c19c4c828dd84128ca484b837c6a904a09e92bcee12cb7cda408562c81", + "format": 1 + }, + { + "name": "test/awx/test_workflow_job_template_node.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "0806356bfd91b28153baa63ca8cbf8f7da1125dd5150e38e73aa37c65e236f6b", + "format": 1 + }, + { + "name": "test/awx/test_user.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "f9520b058e16e4e4800d3a5f70cd28650a365fa357afa1d41a8c63bf3354027e", + "format": 1 + }, + { + "name": "test/awx/test_project.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "6190e3bfc79fde24618b2e9a93468928efb64078bf178ff10bb9ccad9f59b366", + "format": 1 + }, + { + "name": 
"test/awx/test_instance_group.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "9ce22bf5e6baa63ab096c9377478f8a3af33624def33e52753342e435924e573", + "format": 1 + }, + { + "name": "test/awx/test_settings.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "436c13933936e7b80dd26c61ea1dbf492c13974f2922f1543c4fe6e6b0fab0dd", + "format": 1 + }, + { + "name": "test/awx/test_job.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "76ba45e14438425f7511d196613928d64253e1912a45b71ea842b1cb2c3ca335", + "format": 1 + }, + { + "name": "test/awx/test_completeness.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "cb2846a380c9149ceafac4cc24bf1c0af874452e501613c9baacca4e0a0fa60f", + "format": 1 + }, + { + "name": "test/awx/test_notification_template.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "f40d5b65fbc78d12570f37799c8e240cfb90d9948421d3db82af6427fd14854f", + "format": 1 + }, + { + "name": "test/awx/test_inventory.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "70eac0cf78806e37406137fcfb97e5a249fd6b091b1f18e812278573049a4111", + "format": 1 + }, + { + "name": "test/awx/test_token.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "118145cdd5f6a03df7a7a608d5f9e510236b2a54f9bcd456f4294ba69f0f4fad", + "format": 1 + }, + { + "name": "test/awx/test_label.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "cd957d0b0cab6dd51539baf3fb27b659b91a8e57b20aae4c5cce7eaec9cec494", + "format": 1 + }, + { + "name": "test/awx/test_workflow_job_template.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "16c8ebc74606940f6ee1f51a191f22b497c176a46e770e886bbf94bdf0c25842", + "format": 1 + }, + { + "name": "test/awx/test_team.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "cbbdbdb3be0b0d80dcfcf337ed0095774cf73ef0e937d3e8dc5abab21739db5d", + "format": 1 + }, + { + "name": 
"test/awx/test_ad_hoc_wait.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "daed2a74d3f64fd0300255050dc8c732158db401323f44da66ccb4bf84b59633", + "format": 1 + }, + { + "name": "test/awx/test_credential_type.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "1fe388a0c19f08006c7718766d5faa79540dd3b14547ced43b5a237a2c2fd877", + "format": 1 + }, + { + "name": "test/awx/test_module_utils.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "da19747889f28cba3f49836ef64363a010c6cb78650456183efba297d71f0def", + "format": 1 + }, + { + "name": "test/awx/test_role.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "8f0340f77fd1465cf6e267b301e44ae86c5238b05aa89bd7fff145726a83ebb4", + "format": 1 + }, + { + "name": "test/awx/test_inventory_source.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "a0198aefe81dc46bf639e76448a8c9265a62cde2cbbae7c4ad33d831fd86f594", + "format": 1 + }, + { + "name": "test/awx/test_job_template.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "75528184cbc1e92aafc05360990d0280cf64f3bb7049120090ef25a3feb114ac", + "format": 1 + }, + { + "name": "test/awx/conftest.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "09e6eaaa58debfcd70ebd6fd92baafd2fea879b59c60f4cfa8c866ae04977a4d", + "format": 1 + }, + { + "name": "test/awx/test_organization.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "90dacaf268600864f01bdfdb0eb34f0225a605320b5af73754cbc229610e5d24", + "format": 1 + }, + { + "name": "test/awx/test_credential.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "1f8348d6f37932997c7971beb8c5f92cf649523e3d3c5d5e859846460d7d1e8d", + "format": 1 + }, + { + "name": "test/awx/test_group.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "1ecf188e82d4c848de64c8f7fd7af2d4adb6887c6a448771ff51bb43c4fa8128", + "format": 1 + }, + { + "name": 
"test/awx/test_application.py", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "a106d5fbffbe1eaec36d8247979ca637ee733a29abf94d955c48be8d2fd16842", + "format": 1 + }, + { + "name": "COPYING", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "7c50cd9b85e2b7eebaea2b5618b402862b01d5a66befff8e41401ef3f14e471a", + "format": 1 + }, + { + "name": "requirements.txt", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "2eb11923e1347afc5075a7871e206a8f15a68471c90012f7386e9db0875e70bf", + "format": 1 + }, + { + "name": "README.md", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "490a000bf64790d90206607e3dd74c77e7ae940e58346a319a590053afe72149", + "format": 1 + }, + { + "name": "tests", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/config.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "4cb8bf065737689916cda6a2856fcfb8bc27f49224a4b2c2fde842e3b0e76fbb", + "format": 1 + }, + { + "name": "tests/sanity", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/sanity/ignore-2.15.txt", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "05b621f6ff40c091ab1c07947c43d817ed37af7acfc0f8bef7b1453eb03b3aa7", + "format": 1 + }, + { + "name": "tests/sanity/ignore-2.14.txt", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "05b621f6ff40c091ab1c07947c43d817ed37af7acfc0f8bef7b1453eb03b3aa7", + "format": 1 + }, + { + "name": "tests/integration", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/ad_hoc_command_cancel", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/ad_hoc_command_cancel/tasks", 
+ "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/ad_hoc_command_cancel/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "3f698a655089b977ee89dde6532823b4e496b190a0203b52e75e0a19b0321e3f", + "format": 1 + }, + { + "name": "tests/integration/targets/project_update", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/project_update/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/project_update/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "cfe9600b0f76c887c924cc051dbd978559f58ca45ad88c760fc44a1a4f1c5a08", + "format": 1 + }, + { + "name": "tests/integration/targets/execution_environment", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/execution_environment/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/execution_environment/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "f51f07de38999eff7fbacbd72929a622afc5b9fb01f2acd44cae288978948c64", + "format": 1 + }, + { + "name": "tests/integration/targets/inventory_source", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/inventory_source/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/inventory_source/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "55ef33f725875c00e6e31a928aff85257aacf5689c7f64a0924963c44b35c5af", + "format": 1 + }, + { + "name": "tests/integration/targets/schedule", + "ftype": "dir", + "chksum_type": null, + 
"chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/schedule/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/schedule/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "617b00f1a2de9e876ce63a471be371ac0e1622e8ab45062e9de8f5ea9ad8d4b2", + "format": 1 + }, + { + "name": "tests/integration/targets/instance", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/instance/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/instance/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "e9588dae873c72034ee98d463c80cb48c5a236b7e4d182786f3dee240ed89456", + "format": 1 + }, + { + "name": "tests/integration/targets/job_launch", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/job_launch/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/job_launch/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "f4c564c9b92788fec1a7bdbe7ca53b2106ac7fee649912c64e366575ed3eb72a", + "format": 1 + }, + { + "name": "tests/integration/targets/token", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/token/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/token/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "606e16dcd72ab4a0a6c26aedf8830e1de844266e7fa54254c93ed7e307c950d7", + "format": 1 + }, + { + "name": "tests/integration/targets/export", + "ftype": "dir", + "chksum_type": null, + 
"chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/export/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/export/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "e66e796b995b7c9ae612e00b393ccd75d9747d2d94ea3fbbaf90832e5b3e9e3f", + "format": 1 + }, + { + "name": "tests/integration/targets/export/aliases", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "52e1315ef042495cdf2b0ce22d8ba47f726dce15b968e301a795be1f69045f20", + "format": 1 + }, + { + "name": "tests/integration/targets/user", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/user/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/user/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "88f742f66ccac2fed93222a997e02129c43e9dd863ce8b9a2fd8e07dd6973916", + "format": 1 + }, + { + "name": "tests/integration/targets/credential_input_source", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/credential_input_source/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/credential_input_source/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "ccc6e4527f9019c28f32cdb7b223d1a4445f2f505ef448e8a2e255c8981bd927", + "format": 1 + }, + { + "name": "tests/integration/targets/team", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/team/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/team/tasks/main.yml", + "ftype": "file", + 
"chksum_type": "sha256", + "chksum_sha256": "ea5b32cc64a4b1553bc5b254b09ef61187911e6364bb7a4d3c3159233182bdc7", + "format": 1 + }, + { + "name": "tests/integration/targets/group", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/group/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/group/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "7444fbbb58dc4c93953187b843cbdc3198427571a1fd49f36bc8d71d23b70479", + "format": 1 + }, + { + "name": "tests/integration/targets/organization", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/organization/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/organization/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "73853c4fd1dde833e599df55d8636e3cabac2e1139eac0785c75a9b7b6fac00f", + "format": 1 + }, + { + "name": "tests/integration/targets/job_wait", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/job_wait/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/job_wait/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "d7dd7fb1b9b81268a3d35a9df424fc977df5463bdef0c00b34bda6fab98682c9", + "format": 1 + }, + { + "name": "tests/integration/targets/schedule_rrule", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/schedule_rrule/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/schedule_rrule/tasks/main.yml", + 
"ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "2d3b202620a305fcf477836f22ce7e52a195b0f43b3c854ffec5763d6411b26b", + "format": 1 + }, + { + "name": "tests/integration/targets/workflow_job_template", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/workflow_job_template/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/workflow_job_template/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "aad79860bc8a30cb1eb3c96891924903d988856267901b29349dfe3cca66677b", + "format": 1 + }, + { + "name": "tests/integration/targets/instance_group", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/instance_group/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/instance_group/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "2498af3df7c31531b3d029afea14945fc211c6cc8a535a855330fb699e7a7d32", + "format": 1 + }, + { + "name": "tests/integration/targets/lookup_rruleset", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/lookup_rruleset/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/lookup_rruleset/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "dd1d0b2f9bab1e977dc9e10be443b8e59da7ea5d328eb04939ada9be334c9811", + "format": 1 + }, + { + "name": "tests/integration/targets/host", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/host/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + 
{ + "name": "tests/integration/targets/host/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "9490898a74c074d553d8ac98dbc1d59bebc1d2a1f1899f28fd3165125ddfd44a", + "format": 1 + }, + { + "name": "tests/integration/targets/notification_template", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/notification_template/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/notification_template/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "fe7f0d86e9bde61f8b708757f0b0627f5ace6b7cbdaf7103f8b247289ae5a295", + "format": 1 + }, + { + "name": "tests/integration/targets/ad_hoc_command_wait", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/ad_hoc_command_wait/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/ad_hoc_command_wait/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "ce9d9c82599c3673f4a0d7da0b7af4437bb32689b5ff607266f0c875a7b7f2b7", + "format": 1 + }, + { + "name": "tests/integration/targets/job_cancel", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/job_cancel/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/job_cancel/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "72e23ca4c467f6b23f23577597ad891613ec780c25fc00bb73bd3cd438783b2a", + "format": 1 + }, + { + "name": "tests/integration/targets/workflow_launch", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/workflow_launch/tasks", + 
"ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/workflow_launch/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "dd5c0022ff17e2eebb52303e1f5132eff0cdf35e737f122b0573f928cdd7ad03", + "format": 1 + }, + { + "name": "tests/integration/targets/role", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/role/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/role/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "b5f1f2624634486869b5e587c44fb44d0849b926364bd14766a8921cbcfe3674", + "format": 1 + }, + { + "name": "tests/integration/targets/settings", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/settings/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/settings/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "1d02d2e1a163b170fa15d54b37ec7da22509d45f4cc194583ec1a1c5d5682b16", + "format": 1 + }, + { + "name": "tests/integration/targets/inventory_source_update", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/inventory_source_update/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/inventory_source_update/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "c9da57401129b24c4c6b2a54acd923a5e8f82a884ad94b23dbf1cf4dfad847cb", + "format": 1 + }, + { + "name": "tests/integration/targets/ad_hoc_command", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": 
"tests/integration/targets/ad_hoc_command/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/ad_hoc_command/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "e77cbce44dc257461c1edb3602690be28a7d8bf5e11384c9f5f401b6e1cb3149", + "format": 1 + }, + { + "name": "tests/integration/targets/import", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/import/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/import/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "59c0ace95e680d9874fe15c76889c1b4beb38d2d3c66a11499581b0f328ec25a", + "format": 1 + }, + { + "name": "tests/integration/targets/import/aliases", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "52e1315ef042495cdf2b0ce22d8ba47f726dce15b968e301a795be1f69045f20", + "format": 1 + }, + { + "name": "tests/integration/targets/credential", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/credential/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/credential/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "df1406f279fb76a35c21f2e178b20f6268f30b15305200d7bff4f70d051d3284", + "format": 1 + }, + { + "name": "tests/integration/targets/demo_data", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/demo_data/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/demo_data/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": 
"617098185b6890b6c01d85830deb32bda385ca2499ea0c6f5f8bf44f1bedae28", + "format": 1 + }, + { + "name": "tests/integration/targets/job_template", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/job_template/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/job_template/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "121573eb6c0556945387c8c8eb533dda8da637c3470c6b53a4cdd7d85f1b58d6", + "format": 1 + }, + { + "name": "tests/integration/targets/application", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/application/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/application/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "d95ffd0fd9a79fd5c9bd95841d6e88f0ad9a8d2f4376a1d66a3432a48cc8e445", + "format": 1 + }, + { + "name": "tests/integration/targets/credential_type", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/credential_type/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/credential_type/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "b354b1b26b90216a8460fc9d478422c825f87d0dc1d59074acdc4650e8a0fb34", + "format": 1 + }, + { + "name": "tests/integration/targets/project_manual", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/project_manual/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": 
"tests/integration/targets/project_manual/tasks/create_project_dir.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "562d2c4b88bbb2a3aa9ac76dbcb59e3cdf490e58f88c9971ff7e8b40bd4b3aca", + "format": 1 + }, + { + "name": "tests/integration/targets/project_manual/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "a1773c1365a408448e4facc5efdcc1b1e9f1f4a05e2eed59197dac00e9fa5105", + "format": 1 + }, + { + "name": "tests/integration/targets/label", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/label/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/label/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "a54774f0ce66904eb1ab7c1389add53c60672d2b2d29b670034712f59e99d27d", + "format": 1 + }, + { + "name": "tests/integration/targets/job_list", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/job_list/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/job_list/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "0f71862d199385973c479a2ce6a2c9eb060c80cfca19b82026a19ec60308f1b3", + "format": 1 + }, + { + "name": "tests/integration/targets/lookup_api_plugin", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/lookup_api_plugin/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/lookup_api_plugin/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "e47eaa102b38a0fc93f5a0fa3bf478a9f8b5fffce02737ff7099ae3dee1958ea", + "format": 1 + }, + { + "name": 
"tests/integration/targets/inventory", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/inventory/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/inventory/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "40d4e282aabbabbf5aff3ea6f966d3782852aa3204f33d65e216ede5e0fd66fe", + "format": 1 + }, + { + "name": "tests/integration/targets/project", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/project/tasks", + "ftype": "dir", + "chksum_type": null, + "chksum_sha256": null, + "format": 1 + }, + { + "name": "tests/integration/targets/project/tasks/main.yml", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "6267e4d55fe0c8ac80f174bd90deb37ba3dea9825eef2ec9ce28f6863cfed562", + "format": 1 + } + ], + "format": 1 +}
\ No newline at end of file diff --git a/ansible_collections/awx/awx/MANIFEST.json b/ansible_collections/awx/awx/MANIFEST.json new file mode 100644 index 00000000..f2a7ab58 --- /dev/null +++ b/ansible_collections/awx/awx/MANIFEST.json @@ -0,0 +1,36 @@ +{ + "collection_info": { + "namespace": "awx", + "name": "awx", + "version": "21.12.0", + "authors": [ + "AWX Project Contributors <awx-project@googlegroups.com>" + ], + "readme": "README.md", + "tags": [ + "cloud", + "infrastructure", + "awx", + "ansible", + "automation" + ], + "description": "Ansible content that interacts with the AWX or Automation Platform Controller API.", + "license": [ + "GPL-3.0-only" + ], + "license_file": null, + "dependencies": {}, + "repository": "https://github.com/ansible/awx", + "documentation": "https://github.com/ansible/awx/blob/devel/awx_collection/README.md", + "homepage": "https://www.ansible.com/", + "issues": "https://github.com/ansible/awx/issues?q=is%3Aissue+label%3Acomponent%3Aawx_collection" + }, + "file_manifest_file": { + "name": "FILES.json", + "ftype": "file", + "chksum_type": "sha256", + "chksum_sha256": "ab8151f383a869a206d778422cc13d54e6c62a52fa76503a69c30b076aed5a08", + "format": 1 + }, + "format": 1 +}
\ No newline at end of file diff --git a/ansible_collections/awx/awx/README.md b/ansible_collections/awx/awx/README.md new file mode 100644 index 00000000..ea9e85e1 --- /dev/null +++ b/ansible_collections/awx/awx/README.md @@ -0,0 +1,142 @@ +# AWX Ansible Collection + +[comment]: # (*******************************************************) +[comment]: # (* *) +[comment]: # (* WARNING *) +[comment]: # (* *) +[comment]: # (* This file is templated and not to be *) +[comment]: # (* edited directly! Instead modify: *) +[comment]: # (* tools/roles/template_galaxy/templates/README.md.j2 *) +[comment]: # (* *) +[comment]: # (* Changes to the base README.md file are refreshed *) +[comment]: # (* upon build of the collection *) +[comment]: # (*******************************************************) + +This Ansible collection allows for easy interaction with an AWX server via Ansible playbooks. + +This source for this collection lives in the `awx_collection` folder inside of the +AWX GitHub repository. +The previous home for this collection was inside the folder [lib/ansible/modules/web_infrastructure/ansible_tower](https://github.com/ansible/ansible/tree/stable-2.9/lib/ansible/modules/web_infrastructure/ansible_tower) in the Ansible repo, +as well as other places for the inventory plugin, module utils, and +doc fragment. + +## Building and Installing + +This collection templates the `galaxy.yml` file it uses. +Run `make build_collection` from the root folder of the AWX source tree. +This will create the `tar.gz` file inside the `awx_collection` folder +with the current AWX version, for example: `awx_collection/awx-awx-9.2.0.tar.gz`. + +Installing the `tar.gz` involves no special instructions. + +## Running + +Non-deprecated modules in this collection have no Python requirements, but +may require the official [AWX CLI](https://docs.ansible.com/ansible-tower/latest/html/towercli/index.html) +in the future. The `DOCUMENTATION` for each module will report this. 
+ +You can specify authentication by a combination of either: + + - host, username, password + - host, OAuth2 token + +The OAuth2 token is the preferred method. You can obtain a token via the +AWX CLI [login](https://docs.ansible.com/ansible-tower/latest/html/towercli/reference.html#awx-login) +command. + +These can be specified via (from highest to lowest precedence): + + - direct module parameters + - environment variables (most useful when running against localhost) + - a config file path specified by the `tower_config_file` parameter + - a config file at `~/.tower_cli.cfg` + - a config file at `/etc/tower/tower_cli.cfg` + +Config file syntax looks like this: + +``` +[general] +host = https://localhost:8043 +verify_ssl = true +oauth_token = LEdCpKVKc4znzffcpQL5vLG8oyeku6 +``` + +## Release and Upgrade Notes + +Notable releases of the `awx.awx` collection: + + - 7.0.0 is intended to be identical to the content prior to the migration, aside from changes necessary to function as a collection. + - 11.0.0 has no non-deprecated modules that depend on the deprecated `tower-cli` [PyPI](https://pypi.org/project/ansible-tower-cli/). + - 19.2.1 is a large renaming that purged "tower" names (such as option and module names), adding redirects for the old names. + - 21.11.0 deprecated the "tower" modules and removed their symlinks. + - 0.0.1-devel is the version you should see if installing from source, which is intended for development and expected to be unstable.
+ +The following notes describe changes that may require updates to playbooks: + + - The `credential` module no longer allows `kind` as a parameter; additionally, `inputs` must now be used with a variety of key/value parameters to go with it (e.g., `become_method`) + - The `job_wait` module no longer allows `min_interval`/`max_interval` parameters; use `interval` instead + - The `notification_template` module requires various notification configuration information to be listed as a dictionary under the `notification_configuration` parameter (e.g., `use_ssl`) + - In the `inventory_source` module, the `source_project` (when provided) lookup defaults to the specified organization in the same way the inventory is looked up + - The module `tower_notification` was renamed to `tower_notification_template`. In `ansible >= 2.10` there is a seamless redirect; Ansible 2.9 does not respect the redirect. + - When a project is created, it will wait for the update/sync to finish by default; this can be turned off with the `wait` parameter, if desired. + - Creating a "scan" type job template is no longer supported. + - Specifying a custom certificate via the `TOWER_CERTIFICATE` environment variable no longer works.
+ - Type changes of variable fields: + + - `extra_vars` in the `tower_job_launch` module worked with a `list` previously, but now only works with a `dict` type + - `extra_vars` in the `tower_workflow_job_template` module worked with a `string` previously but now expects a `dict` + - When the `extra_vars` parameter is used with the `tower_job_launch` module, the launch will fail unless `ask_extra_vars` or `survey_enabled` is explicitly set to `True` on the Job Template + - The `variables` parameter in the `tower_group`, `tower_host` and `tower_inventory` modules now expects a `dict` type and no longer supports the use of `@` syntax for a file + + + - Type changes of other fields: + + - `inputs` or `injectors` in the `tower_credential_type` module worked with a `string` previously but now expects a `dict` + - `schema` in the `tower_workflow_job_template` module worked with a `string` previously but now expects a `list` of `dict`s + + - `tower_group` used to also manage inventory sources, but this functionality has been removed from this module; use `tower_inventory_source` instead. + - The specified `tower_config` file used to handle `k=v` pairs on a single line; this is no longer supported. Please use a file formatted as `yaml`, `json` or `ini` only. + - Some return values (e.g., `credential_type`) have been removed. Use of `id` is recommended. + - `tower_job_template` no longer supports the deprecated `extra_vars_path` parameter; please use `extra_vars` with the lookup plugin to replace this functionality. + - The `notification_configuration` parameter of `tower_notification_template` has changed from a string to a dict. Please use the `lookup` plugin to read an existing file into a dict. + - `tower_credential` no longer supports passing a file name to `ssh_key_data`. + - The HipChat `notification_type` has been removed and can no longer be created using the `tower_notification_template` module.
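
To illustrate the lookup-based replacements mentioned above, here is a minimal sketch. The module parameters shown are real collection parameters, but the object names and file paths are placeholders to adapt to your environment:

```
---
- name: Sketch of lookup-based replacements for removed parameters
  hosts: localhost
  connection: local
  gather_facts: false
  collections:
    - awx.awx

  tasks:
    # Instead of the removed extra_vars_path parameter, read the file
    # contents into a dict and pass it to extra_vars.
    - name: Create a job template with extra_vars loaded from a file
      job_template:
        name: Example Job Template
        organization: Default
        job_type: run
        project: Demo Project
        playbook: hello_world.yml
        inventory: Demo Inventory
        extra_vars: "{{ lookup('file', 'vars/extra_vars.json') | from_json }}"

    # notification_configuration now expects a dict rather than a string;
    # an existing config file can likewise be read in with a lookup.
    - name: Create a notification template from an existing config file
      notification_template:
        name: Example Notification
        organization: Default
        notification_type: email
        notification_configuration: "{{ lookup('file', 'notification_config.json') | from_json }}"
```

The same `lookup(...) | from_yaml` pattern applies if the file is YAML rather than JSON.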
+ +## Running Unit Tests + +Tests to verify compatibility with the most recent AWX code are in `awx_collection/test/awx`. +These can be run via the `make test_collection` command in the development container. + +To run tests outside of the development container, or to run against +Ansible source, set up a dedicated virtual environment: + +``` +mkvirtualenv my_new_venv +# may need to replace psycopg2 with psycopg2-binary in requirements/requirements.txt +pip install -r requirements/requirements.txt -r requirements/requirements_dev.txt -r requirements/requirements_git.txt +make clean-api +pip install -e <path to your Ansible> +pip install -e . +pip install -e awxkit +py.test awx_collection/test/awx/ +``` + +## Running Integration Tests + +The integration tests require a virtualenv with `ansible >= 2.9` and `awxkit`. +The collection must first be installed, which can be done using `make install_collection`. +You also need a configuration file, as described in the [Running](https://github.com/ansible/awx/blob/devel/awx_collection/README.md#running) section. + +How to run the tests: + +``` +# ansible-test must be run from the directory in which the collection is installed +cd ~/.ansible/collections/ansible_collections/awx/awx/ +ansible-test integration +``` + +## Licensing + +All content in this folder is licensed under the same license as Ansible, +which is the same as the license that applied before the split into an +independent collection. diff --git a/ansible_collections/awx/awx/TESTING.md b/ansible_collections/awx/awx/TESTING.md new file mode 100644 index 00000000..97ada687 --- /dev/null +++ b/ansible_collections/awx/awx/TESTING.md @@ -0,0 +1,261 @@ +# Testing the AWX Collection + +We strive to have test coverage for all modules in the AWX Collection. The `/test` and `/tests` directories contain unit and integration tests, respectively. Tests ensure that any changes to the AWX Collection do not adversely affect expected behavior and functionality.
+ +When trying to fix a bug, it is best to replicate its behavior within a test with an assertion for the desired behavior. After that, edit the code and run the test to ensure your changes corrected the problem and did not affect anything else. + + +## Unit Tests + +The unit tests are stored in the `test/awx` directory and, where possible, test interactions between the collection's modules and the AWX database. This is achieved by using a Python testing suite and having a mocked layer which emulates interactions with the API. You do not need a server to run these unit tests. The depth of testing is not fixed and can change from module to module. + +Let's take a closer look at the `test_token.py` file (which tests the `token` module): + +``` +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import OAuth2AccessToken + + +@pytest.mark.django_db +def test_create_token(run_module, admin_user): + +    module_args = { +        'description': 'barfoo', +        'state': 'present', +        'scope': 'read', +        'controller_host': None, +        'controller_username': None, +        'controller_password': None, +        'validate_certs': None, +        'controller_oauthtoken': None, +        'controller_config_file': None, +    } + +    result = run_module('token', module_args, admin_user) +    assert result.get('changed'), result + +    tokens = OAuth2AccessToken.objects.filter(description='barfoo') +    assert len(tokens) == 1, 'Tokens with description of barfoo != 0: {0}'.format(len(tokens)) +    assert tokens[0].scope == 'read', 'Token was not given read access' +``` + +This file contains a single test called `test_create_token`. It creates a `module_args` section which is what will be passed into our module. We then call `run_module`, asking it to run the `token` module with the `module_args` we created and give us back the results. After that, we run an assertion to validate that our module did in fact report a change to the system.
We will then use Python objects to look up the token that has a description of `barfoo` (which was in our arguments to the module). We want to validate that we only got back one token (the one we created) and that the scope of the token we created was `read`. + + +### Completeness Test + +The completeness check is run from the unit tests and can be found in `awx_collection/test/awx/test_completeness.py`. It compares the CRUD modules to the API endpoints and looks for discrepancies in options between the two. + +For example, when creating a new module for an endpoint where the module has parameters A, B and C, and the endpoint supports options A, B and D, the check would flag parameter C and option D as mismatched. + +A completeness failure will generate a large ASCII table in the Zuul log indicating what is going on: + +![Completeness Test Output](images/completeness_test_output.png) + +To find the error, look at the last column and search for the term "failure". There will most likely be some failures which have been deemed acceptable and will typically say "non-blocking" next to them. Those errors can be safely ignored. + + +## Integration Tests + +Integration tests are stored in the `/tests` directory and will be familiar to Ansible developers, as these tests are executed with the `ansible-test` command-line program. + +Inside the `/tests` directory, there are two folders: + +- `/integration` +- `/sanity` + +In the `/sanity` folder are file directives for specific Ansible versions which contain information about which tests to skip for specific files. There are a number of reasons you may need to skip a sanity test. See the [`ansible-test` documentation](https://docs.ansible.com/ansible/latest/dev_guide/testing_running_locally.html) for more details about how and why you might want to skip a test. + +In the `integration/targets` folder you will see directories (which act as roles) for all of the different modules and plugins.
When the collection is tested, an instance of Automation Platform Controller (or AWX) will be spun up and these roles will be applied to the target server to validate the functionality of the modules. Since these are really roles, each directory will contain a tasks folder under it with a `main.yml` file as an entry point. + +While not strictly followed, the general flow of a test should be: + +- **Generate a test ID** + +``` +- name: Generate test id + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" +``` + +- **Generate names for any objects this test will create** + +``` +- name: Generate names + set_fact: + group_name1: "AWX-Collection-tests-instance_group-group1-{{ test_id }}" + group_name2: "AWX-Collection-tests-instance_group-group2-{{ test_id }}" + cred_name1: "AWX-Collection-tests-instance_group-cred1-{{ test_id }}" +``` + +- **Non-creating tests (i.e. test for specific error conditions, etc), with assertion** + +``` +- name: Try to use a token as a dict which is missing the token parameter + job_list: + controller_oauthtoken: + not_token: "This has no token entry" + register: results + ignore_errors: true + +- assert: + that: + - results is failed + - '"The provided dict in controller_oauthtoken did not properly contain the token entry" == results.msg' +``` + +- **`Block:`** + - Run test which creates/modifies/deletes object(s) +``` + - name: Create a container group + instance_group: + name: "{{ group_name2 }}" + credential: "{{ cred_result.id }}" + register: result +``` + - Assert proper results were returned +``` + - assert: + that: + - "result is changed" +``` + +- **`Always:`** + - Cleanup created objects +``` + - name: Delete the credential + credential: + name: "{{ cred_name1 }}" + organization: "Default" + credential_type: "OpenShift or Kubernetes API Bearer Token" +``` + - Assert cleanup worked properly (if needed) + +When writing an integration test, a test of asset type A does not need to make 
assertions for asset type B. For example, if you are writing an integration test for a credential and you create a custom credential type, you do not need to assert that the `credential_type` call properly worked, you can assume it will. In addition, when cleaning up and deleting the `credential_type`, you do not need to assert that it properly deleted the credential type. + + +## Running Unit Tests + +You can use the `Makefile` to run unit tests. In addition to the `make` command, you need a virtual environment with several requirements installed. These requirements are outlined in the [`awx_collection/README.md`](https://github.com/ansible/awx/blob/devel/awx_collection/README.md) file. + +> **Note:** The process for the installation will differ depending on OS and version. + +Once your environment is completely established, you can run all of the unit tests with the command (your results may vary): + +``` +$ make test_collection +rm -f /home/student1/virtuelenvs//awx/lib/python3.6/no-global-site-packages.txt +if [ "/home/student1/virtuelenvs/" ]; then \ + . /home/student1/virtuelenvs//awx/bin/activate; \ +fi; \ +py.test awx_collection/test/awx -v +==================================== test session starts ==================================== +platform linux -- Python 3.6.8, pytest-6.1.0, py-1.9.0, pluggy-0.13.1 -- /home/student1/virtuelenvs/awx/bin/python +cachedir: .pytest_cache +django: settings: awx.settings.development (from ini) +rootdir: /home/student1/awx, configfile: pytest.ini +plugins: cov-2.10.1, django-3.10.0, pythonpath-0.7.3, mock-1.11.1, timeout-1.4.2, forked-1.3.0, xdist-1.34.0 +collected 116 items + +awx_collection/test/awx/test_application.py::test_create_application PASSED [ 0%] +awx_collection/test/awx/test_completeness.py::test_completeness PASSED [ 1%] + +... 
+ +==================================== short test summary info ==================================== +FAILED awx_collection/test/awx/test_job_template.py::test_create_job_template - AssertionError: assert {'changed': T...'name': 'foo'} == {'changed': T... +FAILED awx_collection/test/awx/test_job_template.py::test_job_template_with_new_credentials - assert 16 == 14 +FAILED awx_collection/test/awx/test_job_template.py::test_job_template_with_survey_spec - assert 11 == 9 +FAILED awx_collection/test/awx/test_module_utils.py::test_version_warning - SystemExit: 1 +FAILED awx_collection/test/awx/test_module_utils.py::test_type_warning - SystemExit: 1 +====================== 5 failed, 106 passed, 5 skipped, 56 warnings in 48.53s =================== +make: *** [Makefile:382: test_collection] Error 1 +``` + +In addition to running all of the tests, you can also specify specific tests to run. This is useful when developing a single module. In this example, we will run the tests for the `token` module: + +``` +$ pytest awx_collection/test/awx/test_token.py +============================ test session starts ============================ +platform darwin -- Python 3.7.0, pytest-3.6.0, py-1.8.1, pluggy-0.6.0 +django: settings: awx.settings.development (from ini) +rootdir: /Users/jowestco/junk/awx, inifile: pytest.ini +plugins: xdist-1.27.0, timeout-1.3.4, pythonpath-0.7.3, mock-1.11.1, forked-1.1.3, django-3.9.0, cov-2.8.1 +collected 1 item + +awx_collection/test/awx/test_token.py . [100%] + +========================= 1 passed in 1.72 seconds ========================= +``` + + +## Running Integration Tests + +For integration tests, you will need an existing AWX or Automation Platform Controller instance to run the test playbooks against. 
You can write a simple `run_it.yml` playbook to invoke the main method: + +``` +--- +- name: Run Integration Test + hosts: localhost + connection: local + gather_facts: False + environment: + TOWER_HOST: <URL> + TOWER_USERNAME: <username> + TOWER_PASSWORD: <password> + TOWER_VERIFY_SSL: False + collections: + - awx.awx + + tasks: + - include_tasks: main.yml +``` + +Place this file in the `/tasks` directory of the test playbook you'd like to run (i.e., `awx/awx_collection/tests/integration/targets/ad_hoc_command_cancel/tasks/`; a test playbook named `main.yml` must be in the same directory). + +The `run_it.yml` playbook will set up your connection parameters via environment variables and then invoke the `main.yml` play of the role. + +The output below is what should ideally be seen when running an integration test: + +``` +$ ansible-playbook run_it.yml + +PLAY [Run Integration Test] ******************************************************************************* + +TASK [include_tasks] ************************************************************************************** +included: /home/student1/awx/awx_collection/tests/integration/targets/demo_data/tasks/main.yml for localhost + +TASK [Assure that default organization exists] ************************************************************* +[WARNING]: You are using the awx version of this collection but connecting to Red Hat Ansible Tower +ok: [localhost] + +TASK [HACK - delete orphaned projects from preload data where organization deleted] ************************ + +TASK [Assure that demo project exists] ********************************************************************* +changed: [localhost] + +TASK [Assure that demo inventory exists] ******************************************************************* +changed: [localhost] + +TASK [Create a Host] *************************************************************************************** +changed: [localhost] + +TASK [Assure that demo job template exists] 
***************************************************************** +changed: [localhost] + +PLAY RECAP ************************************************************************************** +localhost: ok=6 changed=4 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 +``` + +You should see the tasks from the integration test run as expected. + +Keep in mind that the integration tests run against the _installed_ version of the collection, not against the files in `~/awx/awx_collection/plugins/modules`. Because of this, build your development version of the collection with `make install_collection` before running a test against it, or your results may not reflect your latest changes. + +To avoid rebuilding after every change, you can run `make symlink_collection`, which symlinks your development directory into the Ansible-installed collection location. + +> **Note:** Collections and symlinks can be unstable. diff --git a/ansible_collections/awx/awx/bindep.txt b/ansible_collections/awx/awx/bindep.txt new file mode 100644 index 00000000..4fc72158 --- /dev/null +++ b/ansible_collections/awx/awx/bindep.txt @@ -0,0 +1,8 @@ +# This is a cross-platform list tracking distribution packages needed by tests; +# see https://docs.openstack.org/infra/bindep/ for additional information.
+ +python38-pytz [platform:centos-8 platform:rhel-8] + +# awxkit +python38-requests [platform:centos-8 platform:rhel-8] +python38-pyyaml [platform:centos-8 platform:rhel-8] diff --git a/ansible_collections/awx/awx/images/completeness_test_output.png b/ansible_collections/awx/awx/images/completeness_test_output.png Binary files differnew file mode 100644 index 00000000..712391a0 --- /dev/null +++ b/ansible_collections/awx/awx/images/completeness_test_output.png diff --git a/ansible_collections/awx/awx/meta/runtime.yml b/ansible_collections/awx/awx/meta/runtime.yml new file mode 100644 index 00000000..e59becd4 --- /dev/null +++ b/ansible_collections/awx/awx/meta/runtime.yml @@ -0,0 +1,261 @@ +--- +requires_ansible: '>=2.9.10' +action_groups: + controller: + - ad_hoc_command + - ad_hoc_command_cancel + - ad_hoc_command_wait + - application + - controller_meta + - credential_input_source + - credential + - credential_type + - execution_environment + - export + - group + - host + - import + - instance + - instance_group + - inventory + - inventory_source + - inventory_source_update + - job_cancel + - job_launch + - job_list + - job_template + - job_wait + - label + - license + - notification_template + - organization + - project + - project_update + - role + - schedule + - settings + - subscriptions + - team + - token + - user + - workflow_approval + - workflow_job_template_node + - workflow_job_template + - workflow_launch + - workflow_node_wait +plugin_routing: + inventory: + tower: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* plugins have been deprecated, use awx.awx.controller instead. + redirect: awx.awx.controller + lookup: + tower_api: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* plugins have been deprecated, use awx.awx.controller_api instead. 
+ redirect: awx.awx.controller_api + tower_schedule_rrule: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* plugins have been deprecated, use awx.awx.schedule_rrule instead. + redirect: awx.awx.schedule_rrule + modules: + tower_ad_hoc_command_cancel: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.ad_hoc_command_cancel instead. + redirect: awx.awx.ad_hoc_command_cancel + tower_ad_hoc_command_wait: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.ad_hoc_command_wait instead. + redirect: awx.awx.ad_hoc_command_wait + tower_ad_hoc_command: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.ad_hoc_command instead. + redirect: awx.awx.ad_hoc_command + tower_application: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.application instead. + redirect: awx.awx.application + tower_meta: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.controller_meta instead. + redirect: awx.awx.controller_meta + tower_credential_input_source: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.credential_input_source instead. + redirect: awx.awx.credential_input_source + tower_credential_type: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.credential_type instead. + redirect: awx.awx.credential_type + tower_credential: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.credential instead. 
+ redirect: awx.awx.credential + tower_execution_environment: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.execution_environment instead. + redirect: awx.awx.execution_environment + tower_export: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.export instead. + redirect: awx.awx.export + tower_group: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.group instead. + redirect: awx.awx.group + tower_host: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.host instead. + redirect: awx.awx.host + tower_import: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.import instead. + redirect: awx.awx.import + tower_instance_group: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.instance_group instead. + redirect: awx.awx.instance_group + tower_inventory_source_update: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.inventory_source_update instead. + redirect: awx.awx.inventory_source_update + tower_inventory_source: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.inventory_source instead. + redirect: awx.awx.inventory_source + tower_inventory: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.inventory instead. + redirect: awx.awx.inventory + tower_job_cancel: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.job_cancel instead. 
+ redirect: awx.awx.job_cancel + tower_job_launch: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.job_launch instead. + redirect: awx.awx.job_launch + tower_job_list: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.job_list instead. + redirect: awx.awx.job_list + tower_job_template: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.job_template instead. + redirect: awx.awx.job_template + tower_job_wait: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.job_wait instead. + redirect: awx.awx.job_wait + tower_label: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.label instead. + redirect: awx.awx.label + tower_license: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.license instead. + redirect: awx.awx.license + tower_notification_template: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.notification_template instead. + redirect: awx.awx.notification_template + tower_notification: + redirect: awx.awx.notification_template + tower_organization: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.organization instead. + redirect: awx.awx.organization + tower_project_update: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.project_update instead. + redirect: awx.awx.project_update + tower_project: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.project instead. 
+ redirect: awx.awx.project + tower_role: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.role instead. + redirect: awx.awx.role + tower_schedule: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.schedule instead. + redirect: awx.awx.schedule + tower_settings: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.settings instead. + redirect: awx.awx.settings + tower_team: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.team instead. + redirect: awx.awx.team + tower_token: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.token instead. + redirect: awx.awx.token + tower_user: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.user instead. + redirect: awx.awx.user + tower_workflow_approval: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.workflow_approval instead. + redirect: awx.awx.workflow_approval + tower_workflow_job_template_node: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.workflow_job_template_node instead. + redirect: awx.awx.workflow_job_template_node + tower_workflow_job_template: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.workflow_job_template instead. + redirect: awx.awx.workflow_job_template + tower_workflow_launch: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.workflow_launch instead. 
+ redirect: awx.awx.workflow_launch + tower_workflow_node_wait: + deprecation: + removal_date: '2022-01-23' + warning_text: The tower_* modules have been deprecated, use awx.awx.workflow_node_wait instead. + redirect: awx.awx.workflow_node_wait diff --git a/ansible_collections/awx/awx/plugins/doc_fragments/auth.py b/ansible_collections/awx/awx/plugins/doc_fragments/auth.py new file mode 100644 index 00000000..3cab718a --- /dev/null +++ b/ansible_collections/awx/awx/plugins/doc_fragments/auth.py @@ -0,0 +1,67 @@ +# -*- coding: utf-8 -*- + +# Copyright: (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +class ModuleDocFragment(object): + + # Automation Platform Controller documentation fragment + DOCUMENTATION = r''' +options: + controller_host: + description: + - URL to your Automation Platform Controller instance. + - If value not set, will try environment variable C(CONTROLLER_HOST) and then config files + - If value not specified by any means, the value of C(127.0.0.1) will be used + type: str + aliases: [ tower_host ] + controller_username: + description: + - Username for your controller instance. + - If value not set, will try environment variable C(CONTROLLER_USERNAME) and then config files + type: str + aliases: [ tower_username ] + controller_password: + description: + - Password for your controller instance. + - If value not set, will try environment variable C(CONTROLLER_PASSWORD) and then config files + type: str + aliases: [ tower_password ] + controller_oauthtoken: + description: + - The OAuth token to use. + - This value can be in one of two formats. + - A string which is the token itself. (e.g. bqV5txm97wqJqtkxlMkhQz0pKhRMMX) + - A dictionary structure as returned by the token module.
+ - If value not set, will try environment variable C(CONTROLLER_OAUTH_TOKEN) and then config files + type: raw + version_added: "3.7.0" + aliases: [ tower_oauthtoken ] + validate_certs: + description: + - Whether to allow insecure connections to AWX. + - If C(no), SSL certificates will not be validated. + - This should only be used on personally controlled sites using self-signed certificates. + - If value not set, will try environment variable C(CONTROLLER_VERIFY_SSL) and then config files + type: bool + aliases: [ tower_verify_ssl ] + controller_config_file: + description: + - Path to the controller config file. + - If provided, the other locations for config files will not be considered. + type: path + aliases: [tower_config_file] + +notes: +- If no I(config_file) is provided we will attempt to use the tower-cli library + defaults to find your host information. +- I(config_file) should be in the following format + host=hostname + username=username + password=password +''' diff --git a/ansible_collections/awx/awx/plugins/doc_fragments/auth_legacy.py b/ansible_collections/awx/awx/plugins/doc_fragments/auth_legacy.py new file mode 100644 index 00000000..29c91507 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/doc_fragments/auth_legacy.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- + +# Copyright: (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +class ModuleDocFragment(object): + + # Ansible Tower documentation fragment + DOCUMENTATION = r''' +options: + tower_host: + description: + - URL to your Tower or AWX instance. 
+ - If value not set, will try environment variable C(TOWER_HOST) and then config files + - If value not specified by any means, the value of C(127.0.0.1) will be used + type: str + tower_username: + description: + - Username for your Tower or AWX instance. + - If value not set, will try environment variable C(TOWER_USERNAME) and then config files + type: str + tower_password: + description: + - Password for your Tower or AWX instance. + - If value not set, will try environment variable C(TOWER_PASSWORD) and then config files + type: str + validate_certs: + description: + - Whether to allow insecure connections to Tower or AWX. + - If C(no), SSL certificates will not be validated. + - This should only be used on personally controlled sites using self-signed certificates. + - If value not set, will try environment variable C(TOWER_VERIFY_SSL) and then config files + type: bool + aliases: [ tower_verify_ssl ] + tower_config_file: + description: + - Path to the Tower or AWX config file. + - If provided, the other locations for config files will not be considered. + type: path + +notes: +- If no I(config_file) is provided we will attempt to use the tower-cli library + defaults to find your Tower host information. 
+- I(config_file) should contain Tower configuration in the following format + host=hostname + username=username + password=password +''' diff --git a/ansible_collections/awx/awx/plugins/doc_fragments/auth_plugin.py b/ansible_collections/awx/awx/plugins/doc_fragments/auth_plugin.py new file mode 100644 index 00000000..5a3a12b0 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/doc_fragments/auth_plugin.py @@ -0,0 +1,79 @@ +# -*- coding: utf-8 -*- + +# Copyright: (c) 2020, Ansible by Red Hat, Inc +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +class ModuleDocFragment(object): + + # Automation Platform Controller documentation fragment + DOCUMENTATION = r''' +options: + host: + description: The network address of your Automation Platform Controller host. + env: + - name: CONTROLLER_HOST + - name: TOWER_HOST + deprecated: + collection_name: 'awx.awx' + version: '4.0.0' + why: Collection name change + alternatives: 'CONTROLLER_HOST' + username: + description: The user that you plan to use to access inventories on the controller. + env: + - name: CONTROLLER_USERNAME + - name: TOWER_USERNAME + deprecated: + collection_name: 'awx.awx' + version: '4.0.0' + why: Collection name change + alternatives: 'CONTROLLER_USERNAME' + password: + description: The password for your controller user. + env: + - name: CONTROLLER_PASSWORD + - name: TOWER_PASSWORD + deprecated: + collection_name: 'awx.awx' + version: '4.0.0' + why: Collection name change + alternatives: 'CONTROLLER_PASSWORD' + oauth_token: + description: + - The OAuth token to use. 
+ env: + - name: CONTROLLER_OAUTH_TOKEN + - name: TOWER_OAUTH_TOKEN + deprecated: + collection_name: 'awx.awx' + version: '4.0.0' + why: Collection name change + alternatives: 'CONTROLLER_OAUTH_TOKEN' + verify_ssl: + description: + - Specify whether Ansible should verify the SSL certificate of the controller host. + - Defaults to True, but this is handled by the shared module_utils code + type: bool + env: + - name: CONTROLLER_VERIFY_SSL + - name: TOWER_VERIFY_SSL + deprecated: + collection_name: 'awx.awx' + version: '4.0.0' + why: Collection name change + alternatives: 'CONTROLLER_VERIFY_SSL' + aliases: [ validate_certs ] + +notes: +- If no I(config_file) is provided we will attempt to use the tower-cli library + defaults to find your host information. +- I(config_file) should be in the following format + host=hostname + username=username + password=password +''' diff --git a/ansible_collections/awx/awx/plugins/inventory/controller.py b/ansible_collections/awx/awx/plugins/inventory/controller.py new file mode 100644 index 00000000..ee049c80 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/inventory/controller.py @@ -0,0 +1,182 @@ +# Copyright (c) 2018 Ansible Project +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +DOCUMENTATION = ''' +name: controller +author: + - Matthew Jones (@matburt) + - Yunfan Zhang (@YunfanZhang42) +short_description: Ansible dynamic inventory plugin for the Automation Platform Controller. +description: + - Reads inventories from the Automation Platform Controller. + - Supports reading configuration from both YAML config file and environment variables. + - If reading from the YAML file, the file name must end with controller.(yml|yaml) or controller_inventory.(yml|yaml), + the path in the command would be /path/to/controller_inventory.(yml|yaml). 
If some arguments in the config file + are missing, this plugin will try to fill in missing arguments by reading from environment variables. + - If reading configurations from environment variables, the path in the command must be @controller_inventory. +options: + inventory_id: + description: + - The ID of the inventory that you wish to import. + - This is allowed to be either the inventory primary key or its named URL slug. + - Primary key values will be accepted as strings or integers, and URL slugs must be strings. + - Named URL slugs follow the syntax of "inventory_name++organization_name". + type: raw + env: + - name: CONTROLLER_INVENTORY + required: True + include_metadata: + description: Make extra requests to provide all group vars with metadata about the source host. + type: bool + default: False +extends_documentation_fragment: awx.awx.auth_plugin +''' + +EXAMPLES = ''' +# Before you execute the following commands, you should make sure this file is in your plugin path, +# and you enabled this plugin. + +# Example for using controller_inventory.yml file + +plugin: awx.awx.controller +host: your_automation_controller_server_network_address +username: your_automation_controller_username +password: your_automation_controller_password +inventory_id: the_ID_of_targeted_automation_controller_inventory +# Then you can run the following command. +# If some of the arguments are missing, Ansible will attempt to read them from environment variables. +# ansible-inventory -i /path/to/controller_inventory.yml --list + +# Example for reading from environment variables: + +# Set environment variables: +# export CONTROLLER_HOST=YOUR_AUTOMATION_PLATFORM_CONTROLLER_HOST_ADDRESS +# export CONTROLLER_USERNAME=YOUR_CONTROLLER_USERNAME +# export CONTROLLER_PASSWORD=YOUR_CONTROLLER_PASSWORD +# export CONTROLLER_INVENTORY=THE_ID_OF_TARGETED_INVENTORY +# Read the inventory specified in CONTROLLER_INVENTORY from the controller, and list them. 
+# The inventory path must always be @controller_inventory if you are reading all settings from environment variables. +# ansible-inventory -i @controller_inventory --list +''' + +import os + +from ansible.module_utils import six +from ansible.module_utils._text import to_text, to_native +from ansible.errors import AnsibleParserError, AnsibleOptionsError +from ansible.plugins.inventory import BaseInventoryPlugin +from ansible.config.manager import ensure_type + +from ansible.module_utils.six import raise_from +from ..module_utils.controller_api import ControllerAPIModule + + +def handle_error(**kwargs): + raise AnsibleParserError(to_native(kwargs.get('msg'))) + + +class InventoryModule(BaseInventoryPlugin): + NAME = 'awx.awx.controller' # REPLACE + # Stays backward compatible with the inventory script. + # If the user supplies '@controller_inventory' as path, the plugin will read from environment variables. + no_config_file_supplied = False + + def verify_file(self, path): + if path.endswith('@controller_inventory') or path.endswith('@tower_inventory'): + self.no_config_file_supplied = True + return True + elif super().verify_file(path): + return path.endswith( + ( + 'controller_inventory.yml', + 'controller_inventory.yaml', + 'controller.yml', + 'controller.yaml', + 'tower_inventory.yml', + 'tower_inventory.yaml', + 'tower.yml', + 'tower.yaml', + ) + ) + else: + return False + + def warn_callback(self, warning): + self.display.warning(warning) + + def parse(self, inventory, loader, path, cache=True): + super().parse(inventory, loader, path) + if not self.no_config_file_supplied and os.path.isfile(path): + self._read_config_data(path) + + # Defer processing of params to logic shared with the modules + module_params = {} + for plugin_param, module_param in ControllerAPIModule.short_params.items(): + opt_val = self.get_option(plugin_param) + if opt_val is not None: + module_params[module_param] = opt_val + + module = ControllerAPIModule(argument_spec={}, 
direct_params=module_params, error_callback=handle_error, warn_callback=self.warn_callback) + + # validate type of inventory_id because we allow two types as special case + inventory_id = self.get_option('inventory_id') + if isinstance(inventory_id, int): + inventory_id = to_text(inventory_id, nonstring='simplerepr') + else: + try: + inventory_id = ensure_type(inventory_id, 'str') + except ValueError as e: + raise_from(AnsibleOptionsError( + 'Invalid type for configuration option inventory_id, ' 'not integer, and cannot convert to string: {err}'.format(err=to_native(e)) + ), e) + inventory_id = inventory_id.replace('/', '') + inventory_url = '/api/v2/inventories/{inv_id}/script/'.format(inv_id=inventory_id) + + inventory = module.get_endpoint(inventory_url, data={'hostvars': '1', 'towervars': '1', 'all': '1'})['json'] + + # To start with, create all the groups. + for group_name in inventory: + if group_name != '_meta': + self.inventory.add_group(group_name) + + # Then, create all hosts and add the host vars. + all_hosts = inventory['_meta']['hostvars'] + for host_name, host_vars in six.iteritems(all_hosts): + self.inventory.add_host(host_name) + for var_name, var_value in six.iteritems(host_vars): + self.inventory.set_variable(host_name, var_name, var_value) + + # Lastly, create the group-host and group-group relationships, and set group vars. + for group_name, group_content in six.iteritems(inventory): + if group_name != 'all' and group_name != '_meta': + # First add hosts to groups + for host_name in group_content.get('hosts', []): + self.inventory.add_host(host_name, group_name) + # Then add the parent-children group relationships. + for child_group_name in group_content.get('children', []): + # add the child group to groups; if it's already there it will just throw a warning + self.inventory.add_group(child_group_name) + self.inventory.add_child(group_name, child_group_name) + # Set the group vars. Note we should set group var for 'all', but not '_meta'.
+ if group_name != '_meta': + for var_name, var_value in six.iteritems(group_content.get('vars', {})): + self.inventory.set_variable(group_name, var_name, var_value) + + # Fetch extra variables if told to do so + if self.get_option('include_metadata'): + + config_data = module.get_endpoint('/api/v2/config/')['json'] + + server_data = {} + server_data['license_type'] = config_data.get('license_info', {}).get('license_type', 'unknown') + for key in ('version', 'ansible_version'): + server_data[key] = config_data.get(key, 'unknown') + self.inventory.set_variable('all', 'tower_metadata', server_data) + self.inventory.set_variable('all', 'controller_metadata', server_data) + + # Clean up the inventory. + self.inventory.reconcile_inventory() diff --git a/ansible_collections/awx/awx/plugins/lookup/controller_api.py b/ansible_collections/awx/awx/plugins/lookup/controller_api.py new file mode 100644 index 00000000..6f0d07a4 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/lookup/controller_api.py @@ -0,0 +1,192 @@ +# (c) 2020 Ansible Project +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +DOCUMENTATION = """ +name: controller_api +author: John Westcott IV (@john-westcott-iv) +short_description: Search the API for objects +requirements: + - None +description: + - Returns GET requests from the Automation Platform Controller API. See + U(https://docs.ansible.com/ansible-tower/latest/html/towerapi/index.html) for API usage. + - For use that is cross-compatible between the awx.awx and ansible.controller collection + see the controller_meta module +options: + _terms: + description: + - The endpoint to query, i.e. teams, users, tokens, job_templates, etc. + required: True + query_params: + description: + - The query parameters to search for in the form of key/value pairs. 
+ type: dict + required: False + aliases: [query, data, filter, params] + expect_objects: + description: + - Error if the response does not contain either a detail view or a list view. + type: boolean + default: False + aliases: [expect_object] + expect_one: + description: + - Error if the response contains more than one object. + type: boolean + default: False + return_objects: + description: + - If a list view is returned, promote the list of results to the top-level of the list returned. + - Allows using this lookup plugin to loop over objects without additional work. + type: boolean + default: True + return_all: + description: + - If the response is paginated, return all pages. + type: boolean + default: False + return_ids: + description: + - If the response contains objects, promote the id key to the top-level entries in the list. + - Allows looking up a related object and passing it as a parameter to another module. + - This will convert the return to a string or list of strings depending on the number of selected items. + type: boolean + aliases: [return_id] + default: False + max_objects: + description: + - If C(return_all) is true, this is the maximum number of objects to return from the list. + - If a list view returns more than max_objects an exception will be raised + type: integer + default: 1000 +extends_documentation_fragment: awx.awx.auth_plugin +notes: + - If the query is not filtered properly this can cause a performance impact.
+""" + +EXAMPLES = """ +- name: Load the UI settings + set_fact: + controller_settings: "{{ lookup('awx.awx.controller_api', 'settings/ui') }}" + +- name: Load the UI settings specifying the connection info + set_fact: + controller_settings: "{{ lookup('awx.awx.controller_api', 'settings/ui', host='controller.example.com', + username='admin', password=my_pass_var, verify_ssl=False) }}" + +- name: Report the usernames of all users with admin privs + debug: + msg: "Admin users: {{ query('awx.awx.controller_api', 'users', query_params={ 'is_superuser': true }) | map(attribute='username') | join(', ') }}" + +- name: debug all organizations in a loop # use query to return a list + debug: + msg: "Organization description={{ item['description'] }} id={{ item['id'] }}" + loop: "{{ query('awx.awx.controller_api', 'organizations') }}" + loop_control: + label: "{{ item['name'] }}" + +- name: Make sure user 'john' is an org admin of the default org if the user exists + role: + organization: Default + role: admin + user: john + when: "lookup('awx.awx.controller_api', 'users', query_params={ 'username': 'john' }) | length == 1" + +- name: Create an inventory group with all 'foo' hosts + group: + name: "Foo Group" + inventory: "Demo Inventory" + hosts: >- + {{ query( + 'awx.awx.controller_api', + 'hosts', + query_params={ 'name__startswith' : 'foo', }, + ) | map(attribute='name') | list }} + register: group_creation +""" + +RETURN = """ +_raw: + description: + - Response from the API + type: dict + returned: on successful request +""" + +from ansible.plugins.lookup import LookupBase +from ansible.errors import AnsibleError +from ansible.module_utils._text import to_native +from ansible.utils.display import Display +from ..module_utils.controller_api import ControllerAPIModule + + +class LookupModule(LookupBase): + display = Display() + + def handle_error(self, **kwargs): + raise AnsibleError(to_native(kwargs.get('msg'))) + + def warn_callback(self, warning): + 
self.display.warning(warning) + + def run(self, terms, variables=None, **kwargs): + if len(terms) != 1: + raise AnsibleError('You must pass exactly one endpoint to query') + + self.set_options(direct=kwargs) + + # Defer processing of params to logic shared with the modules + module_params = {} + for plugin_param, module_param in ControllerAPIModule.short_params.items(): + opt_val = self.get_option(plugin_param) + if opt_val is not None: + module_params[module_param] = opt_val + + # Create our module + module = ControllerAPIModule(argument_spec={}, direct_params=module_params, error_callback=self.handle_error, warn_callback=self.warn_callback) + + response = module.get_endpoint(terms[0], data=self.get_option('query_params', {})) + + if 'status_code' not in response: + raise AnsibleError("Unclear response from API: {0}".format(response)) + + if response['status_code'] != 200: + raise AnsibleError("Failed to query the API: {0}".format(response['json'].get('detail', response['json']))) + + return_data = response['json'] + + if self.get_option('expect_objects') or self.get_option('expect_one'): + if ('id' not in return_data) and ('results' not in return_data): + raise AnsibleError('Did not obtain a list or detail view at {0}, and ' 'expect_objects or expect_one is set to True'.format(terms[0])) + + if self.get_option('expect_one'): + if 'results' in return_data and len(return_data['results']) != 1: + raise AnsibleError('Expected one object from endpoint {0}, ' 'but obtained {1} from API'.format(terms[0], len(return_data['results']))) + + if self.get_option('return_all') and 'results' in return_data: + if return_data['count'] > self.get_option('max_objects'): + raise AnsibleError( + 'List view at {0} returned {1} objects, which is more than the maximum allowed ' + 'by max_objects, {2}'.format(terms[0], return_data['count'], self.get_option('max_objects')) + ) + + next_page = return_data['next'] + while next_page is not None: + next_response = 
module.get_endpoint(next_page) + return_data['results'] += next_response['json']['results'] + next_page = next_response['json']['next'] + return_data['next'] = None + + if self.get_option('return_ids'): + if 'results' in return_data: + return_data['results'] = [str(item['id']) for item in return_data['results']] + elif 'id' in return_data: + return_data = str(return_data['id']) + + if self.get_option('return_objects') and 'results' in return_data: + return return_data['results'] + else: + return [return_data] diff --git a/ansible_collections/awx/awx/plugins/lookup/schedule_rrule.py b/ansible_collections/awx/awx/plugins/lookup/schedule_rrule.py new file mode 100644 index 00000000..5f1d34c0 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/lookup/schedule_rrule.py @@ -0,0 +1,240 @@ +# (c) 2020 Ansible Project +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +DOCUMENTATION = """ + name: schedule_rrule + author: John Westcott IV (@john-westcott-iv) + short_description: Generate an rrule string which can be used for Schedules + requirements: + - pytz + - python-dateutil >= 2.7.0 + description: + - Returns a string based on criteria which represents an rrule + options: + _terms: + description: + - The frequency of the schedule + - none - Run this schedule once + - minute - Run this schedule every x minutes + - hour - Run this schedule every x hours + - day - Run this schedule every x days + - week - Run this schedule weekly + - month - Run this schedule monthly + required: True + choices: ['none', 'minute', 'hour', 'day', 'week', 'month'] + start_date: + description: + - The date to start the rule + - Used for all frequencies + - Format should be YYYY-MM-DD [HH:MM:SS] + type: str + timezone: + description: + - The timezone to use for this rule + - Used for all frequencies + - Format should be as US/Eastern + - Defaults to 
America/New_York + type: str + every: + description: + - The repetition in months, weeks, days, hours or minutes + - Used for all types except none + type: int + end_on: + description: + - How to end this schedule + - If this is not defined, this schedule will never end + - If this is a positive integer, this schedule will end after this number of occurrences + - If this is a date in the format YYYY-MM-DD [HH:MM:SS], this schedule ends after this date + - Used for all types except none + type: str + on_days: + description: + - The days to run this schedule on + - A comma-separated list which can contain values sunday, monday, tuesday, wednesday, thursday, friday, saturday + - Used for week type schedules + month_day_number: + description: + - The day of the month this schedule will run on (1-31) + - Used for month type schedules + - Cannot be used with on_the parameter + type: int + on_the: + description: + - A description of when this schedule will run + - Two strings separated by a space + - First string is one of first, second, third, fourth, last + - Second string is one of sunday, monday, tuesday, wednesday, thursday, friday, saturday + - Used for month type schedules + - Cannot be used with the month_day_number parameter +""" + +EXAMPLES = """ + - name: Create a string for a schedule + debug: + msg: "{{ query('awx.awx.schedule_rrule', 'none', start_date='1979-09-13 03:45:07') }}" +""" + +RETURN = """ +_raw: + description: + - String in the rrule format + type: string +""" +import re + +from ansible.module_utils.six import raise_from +from ansible.plugins.lookup import LookupBase +from ansible.errors import AnsibleError +from datetime import datetime + +try: + import pytz + from dateutil import rrule +except ImportError as imp_exc: + LIBRARY_IMPORT_ERROR = imp_exc +else: + LIBRARY_IMPORT_ERROR = None + + +class LookupModule(LookupBase): + # plugin constructor + def __init__(self, *args, **kwargs): + if LIBRARY_IMPORT_ERROR: + 
raise_from(AnsibleError('{0}'.format(LIBRARY_IMPORT_ERROR)), LIBRARY_IMPORT_ERROR) + super().__init__(*args, **kwargs) + + self.frequencies = { + 'none': rrule.DAILY, + 'minute': rrule.MINUTELY, + 'hour': rrule.HOURLY, + 'day': rrule.DAILY, + 'week': rrule.WEEKLY, + 'month': rrule.MONTHLY, + } + + self.weekdays = { + 'monday': rrule.MO, + 'tuesday': rrule.TU, + 'wednesday': rrule.WE, + 'thursday': rrule.TH, + 'friday': rrule.FR, + 'saturday': rrule.SA, + 'sunday': rrule.SU, + } + + self.set_positions = { + 'first': 1, + 'second': 2, + 'third': 3, + 'fourth': 4, + 'last': -1, + } + + @staticmethod + def parse_date_time(date_string): + try: + return datetime.strptime(date_string, '%Y-%m-%d %H:%M:%S') + except ValueError: + return datetime.strptime(date_string, '%Y-%m-%d') + + def run(self, terms, variables=None, **kwargs): + if len(terms) != 1: + raise AnsibleError('You may only pass one schedule type in at a time') + + frequency = terms[0].lower() + + return self.get_rrule(frequency, kwargs) + + def get_rrule(self, frequency, kwargs): + + if frequency not in self.frequencies: + raise AnsibleError('Frequency of {0} is invalid'.format(frequency)) + + rrule_kwargs = { + 'freq': self.frequencies[frequency], + 'interval': kwargs.get('every', 1), + } + + # All frequencies can use a start date + if 'start_date' in kwargs: + try: + rrule_kwargs['dtstart'] = LookupModule.parse_date_time(kwargs['start_date']) + except Exception as e: + raise_from(AnsibleError('Parameter start_date must be in the format YYYY-MM-DD [HH:MM:SS]'), e) + + # If we are a none frequency we don't need anything else + if frequency == 'none': + rrule_kwargs['count'] = 1 + else: + # All non-none frequencies can have an end_on option + if 'end_on' in kwargs: + end_on = kwargs['end_on'] + if re.match(r'^\d+$', end_on): + rrule_kwargs['count'] = end_on + else: + try: + rrule_kwargs['until'] = LookupModule.parse_date_time(end_on) + except Exception as e: + raise_from(AnsibleError('Parameter end_on must 
either be an integer or in the format YYYY-MM-DD [HH:MM:SS]'), e) + + # A week-based frequency can also take the on_days parameter + if frequency == 'week' and 'on_days' in kwargs: + days = [] + for day in kwargs['on_days'].split(','): + day = day.strip() + if day not in self.weekdays: + raise AnsibleError('Parameter on_days must only contain values {0}'.format(', '.join(self.weekdays.keys()))) + days.append(self.weekdays[day]) + + rrule_kwargs['byweekday'] = days + + # A month-based frequency can also deal with month_day_number and on_the options + if frequency == 'month': + if 'month_day_number' in kwargs and 'on_the' in kwargs: + raise AnsibleError('Month based frequencies can have month_day_number or on_the but not both') + + if 'month_day_number' in kwargs: + try: + my_month_day = int(kwargs['month_day_number']) + if my_month_day < 1 or my_month_day > 31: + raise Exception() + except Exception as e: + raise_from(AnsibleError('month_day_number must be between 1 and 31'), e) + + rrule_kwargs['bymonthday'] = my_month_day + + if 'on_the' in kwargs: + try: + (occurance, weekday) = kwargs['on_the'].split(' ') + except Exception as e: + raise_from(AnsibleError('on_the parameter must be two words separated by a space'), e) + + if weekday not in self.weekdays: + raise AnsibleError('Weekday portion of on_the parameter is not valid') + if occurance not in self.set_positions: + raise AnsibleError('The first string of the on_the parameter is not valid') + + rrule_kwargs['byweekday'] = self.weekdays[weekday] + rrule_kwargs['bysetpos'] = self.set_positions[occurance] + + my_rule = rrule.rrule(**rrule_kwargs) + + # All frequencies can use a timezone but rrule can't support the format that AWX uses. 
+ # So we will do a string manipulation here if we need to + timezone = 'America/New_York' + if 'timezone' in kwargs: + if kwargs['timezone'] not in pytz.all_timezones: + raise AnsibleError('Timezone parameter is not valid') + timezone = kwargs['timezone'] + + # rrule puts a \n in the rule instead of a space and can't handle timezones + return_rrule = str(my_rule).replace('\n', ' ').replace('DTSTART:', 'DTSTART;TZID={0}:'.format(timezone)) + # AWX requires an interval. rrule will not add interval if it's set to 1 + if kwargs.get('every', 1) == 1: + return_rrule = "{0};INTERVAL=1".format(return_rrule) + + return return_rrule diff --git a/ansible_collections/awx/awx/plugins/lookup/schedule_rruleset.py b/ansible_collections/awx/awx/plugins/lookup/schedule_rruleset.py new file mode 100644 index 00000000..6aefde48 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/lookup/schedule_rruleset.py @@ -0,0 +1,354 @@ +# (c) 2020 Ansible Project +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +DOCUMENTATION = """ + name: schedule_rruleset + author: John Westcott IV (@john-westcott-iv) + short_description: Generate an rruleset string + requirements: + - pytz + - python-dateutil >= 2.7.0 + description: + - Returns a string based on criteria which represents an rruleset + options: + _terms: + description: + - The start date of the ruleset + - Used for all frequencies + - Format should be YYYY-MM-DD [HH:MM:SS] + required: True + type: str + timezone: + description: + - The timezone to use for this rule + - Used for all frequencies + - Format should be as US/Eastern + - Defaults to America/New_York + type: str + rules: + description: + - Array of rules in the rruleset + type: list + elements: dict + required: True + suboptions: + frequency: + description: + - The frequency of the schedule + - none - Run this schedule once + - minute - Run this schedule 
every x minutes + - hour - Run this schedule every x hours + - day - Run this schedule every x days + - week - Run this schedule weekly + - month - Run this schedule monthly + required: True + choices: ['none', 'minute', 'hour', 'day', 'week', 'month'] + interval: + description: + - The repetition in months, weeks, days, hours or minutes + - Used for all types except none + type: int + end_on: + description: + - How to end this schedule + - If this is not defined, this schedule will never end + - If this is a positive integer, this schedule will end after this number of occurrences + - If this is a date in the format YYYY-MM-DD [HH:MM:SS], this schedule ends after this date + - Used for all types except none + type: str + bysetpos: + description: + - Specify an occurrence number, corresponding to the nth occurrence of the rule inside the frequency period. + - A comma-separated list of positions (first, second, third, fourth or last) + type: string + bymonth: + description: + - The months this schedule will run on + - A comma-separated list which can contain values 1-12 + type: string + bymonthday: + description: + - The day of the month this schedule will run on + - A comma-separated list which can contain values 1-31 + type: string + byyearday: + description: + - The year day numbers to run this schedule on + - A comma-separated list which can contain values 1-366 + type: string + byweekno: + description: + - The week numbers to run this schedule on + - A comma-separated list which can contain values as described in ISO8601 + type: string + byweekday: + description: + - The days to run this schedule on + - A comma-separated list which can contain values sunday, monday, tuesday, wednesday, thursday, friday, saturday + type: string + byhour: + description: + - The hours to run this schedule on + - A comma-separated list which can contain values 0-23 + type: string + byminute: + description: + - The minutes to run this schedule on + - A comma-separated list which can contain 
values 0-59 + type: string + include: + description: + - If this rule should be included (RRULE) or excluded (EXRULE) + type: bool + default: True +""" + +EXAMPLES = """ + - name: Create a ruleset for every day except Sundays + set_fact: + complex_rule: "{{ query('awx.awx.schedule_rruleset', '2022-04-30 10:30:45', rules=rrules, timezone='UTC' ) }}" + vars: + rrules: + - frequency: 'day' + interval: 1 + - frequency: 'day' + interval: 1 + byweekday: 'sunday' + include: False +""" + +RETURN = """ +_raw: + description: + - String in the rrule format + type: string +""" +import re + +from ansible.module_utils.six import raise_from +from ansible.plugins.lookup import LookupBase +from ansible.errors import AnsibleError +from datetime import datetime + +try: + import pytz + from dateutil import rrule +except ImportError as imp_exc: + LIBRARY_IMPORT_ERROR = imp_exc +else: + LIBRARY_IMPORT_ERROR = None + + +class LookupModule(LookupBase): + # plugin constructor + def __init__(self, *args, **kwargs): + if LIBRARY_IMPORT_ERROR: + raise_from(AnsibleError('{0}'.format(LIBRARY_IMPORT_ERROR)), LIBRARY_IMPORT_ERROR) + super().__init__(*args, **kwargs) + + self.frequencies = { + 'none': rrule.DAILY, + 'minute': rrule.MINUTELY, + 'hour': rrule.HOURLY, + 'day': rrule.DAILY, + 'week': rrule.WEEKLY, + 'month': rrule.MONTHLY, + } + + self.weekdays = { + 'monday': rrule.MO, + 'tuesday': rrule.TU, + 'wednesday': rrule.WE, + 'thursday': rrule.TH, + 'friday': rrule.FR, + 'saturday': rrule.SA, + 'sunday': rrule.SU, + } + + self.set_positions = { + 'first': 1, + 'second': 2, + 'third': 3, + 'fourth': 4, + 'last': -1, + } + + @staticmethod + def parse_date_time(date_string): + try: + return datetime.strptime(date_string, '%Y-%m-%d %H:%M:%S') + except ValueError: + return datetime.strptime(date_string, '%Y-%m-%d') + + def process_integer(self, field_name, rule, min_value, max_value, rule_number): + # We are going to tolerate multiple types of input here: + # something: 1 - A single integer + # 
something: "1" - A single str + # something: "1,2,3" - A comma separated string of ints + # something: "1, 2,3" - A comma separated string of ints (with spaces) + # something: ["1", "2", "3"] - A list of strings + # something: [1,2,3] - A list of ints + return_values = [] + # If they give us a single int, let's make it a list of ints + if isinstance(rule[field_name], int): + rule[field_name] = [rule[field_name]] + # If it's not a list, we need to split it into a list + if not isinstance(rule[field_name], list): + rule[field_name] = rule[field_name].split(',') + for value in rule[field_name]: + # If they have a list of strs we want to strip the str in case it's space delimited + if isinstance(value, str): + value = value.strip() + # If value happens to be an int (from a list of ints) we need to coerce it into a str for the re.match + if not re.match(r"^\d+$", str(value)) or int(value) < min_value or int(value) > max_value: + raise AnsibleError('In rule {0} {1} must be between {2} and {3}'.format(rule_number, field_name, min_value, max_value)) + return_values.append(int(value)) + return return_values + + def process_list(self, field_name, rule, valid_list, rule_number): + return_values = [] + if not isinstance(rule[field_name], list): + rule[field_name] = rule[field_name].split(',') + for value in rule[field_name]: + value = value.strip() + if value not in valid_list: + raise AnsibleError('In rule {0} {1} must only contain values in {2}'.format(rule_number, field_name, ', '.join(valid_list.keys()))) + return_values.append(valid_list[value]) + return return_values + + def run(self, terms, variables=None, **kwargs): + if len(terms) != 1: + raise AnsibleError('You may only pass one schedule type in at a time') + + # Validate the start date + try: + start_date = LookupModule.parse_date_time(terms[0]) + except Exception as e: + raise_from(AnsibleError('The start date must be in the format YYYY-MM-DD [HH:MM:SS]'), e) + + if not kwargs.get('rules', None): + raise AnsibleError('You 
must include rules to be in the ruleset via the rules parameter') + + # All frequencies can use a timezone but rrule can't support the format that AWX uses. + # So we will do a string manip here if we need to + timezone = 'America/New_York' + if 'timezone' in kwargs: + if kwargs['timezone'] not in pytz.all_timezones: + raise AnsibleError('Timezone parameter is not valid') + timezone = kwargs['timezone'] + + rules = [] + got_at_least_one_rule = False + for rule_index in range(0, len(kwargs['rules'])): + rule = kwargs['rules'][rule_index] + rule_number = rule_index + 1 + valid_options = [ + "frequency", + "interval", + "end_on", + "bysetpos", + "bymonth", + "bymonthday", + "byyearday", + "byweekno", + "byweekday", + "byhour", + "byminute", + "include", + ] + invalid_options = list(set(rule.keys()) - set(valid_options)) + if invalid_options: + raise AnsibleError('Rule {0} has invalid options: {1}'.format(rule_number, ', '.join(invalid_options))) + frequency = rule.get('frequency', None) + if not frequency: + raise AnsibleError("Rule {0} is missing a frequency".format(rule_number)) + if frequency not in self.frequencies: + raise AnsibleError('Frequency of rule {0} is invalid {1}'.format(rule_number, frequency)) + + rrule_kwargs = { + 'freq': self.frequencies[frequency], + 'interval': rule.get('interval', 1), + 'dtstart': start_date, + } + + # If we are a none frequency we don't need anything else + if frequency == 'none': + rrule_kwargs['count'] = 1 + else: + # All non-none frequencies can have an end_on option + if 'end_on' in rule: + end_on = rule['end_on'] + if re.match(r'^\d+$', end_on): + rrule_kwargs['count'] = end_on + else: + try: + rrule_kwargs['until'] = LookupModule.parse_date_time(end_on) + except Exception as e: + raise_from( + AnsibleError('In rule {0} end_on must either be an integer or in the format YYYY-MM-DD [HH:MM:SS]'.format(rule_number)), e + ) + + if 'bysetpos' in rule: + rrule_kwargs['bysetpos'] = self.process_list('bysetpos', rule, 
self.set_positions, rule_number) + + if 'bymonth' in rule: + rrule_kwargs['bymonth'] = self.process_integer('bymonth', rule, 1, 12, rule_number) + + if 'bymonthday' in rule: + rrule_kwargs['bymonthday'] = self.process_integer('bymonthday', rule, 1, 31, rule_number) + + if 'byyearday' in rule: + rrule_kwargs['byyearday'] = self.process_integer('byyearday', rule, 1, 366, rule_number) # 366 for leap years + + if 'byweekno' in rule: + rrule_kwargs['byweekno'] = self.process_integer('byweekno', rule, 1, 52, rule_number) + + if 'byweekday' in rule: + rrule_kwargs['byweekday'] = self.process_list('byweekday', rule, self.weekdays, rule_number) + + if 'byhour' in rule: + rrule_kwargs['byhour'] = self.process_integer('byhour', rule, 0, 23, rule_number) + + if 'byminute' in rule: + rrule_kwargs['byminute'] = self.process_integer('byminute', rule, 0, 59, rule_number) + + try: + generated_rule = str(rrule.rrule(**rrule_kwargs)) + except Exception as e: + raise_from(AnsibleError('Failed to parse rrule for rule {0} {1}: {2}'.format(rule_number, str(rrule_kwargs), e)), e) + + # AWX requires an interval. 
rrule will not add interval if it's set to 1 + if rule.get('interval', 1) == 1: + generated_rule = "{0};INTERVAL=1".format(generated_rule) + + if rule_index == 0: + # rrule puts a \n in the rule instead of a space and can't handle timezones + generated_rule = generated_rule.replace('\n', ' ').replace('DTSTART:', 'DTSTART;TZID={0}:'.format(timezone)) + else: + # Only the first rule needs the dtstart in a ruleset so remaining rules we can split at \n + generated_rule = generated_rule.split('\n')[1] + + # If we are an exclude rule we need to flip from an rrule to an ex rule + if not rule.get('include', True): + generated_rule = generated_rule.replace('RRULE', 'EXRULE') + else: + got_at_least_one_rule = True + + rules.append(generated_rule) + + if not got_at_least_one_rule: + raise AnsibleError("A ruleset must contain at least one RRULE") + + rruleset_str = ' '.join(rules) + + # For a sanity check lets make sure our rule can parse. Not sure how we can test this though + try: + rules = rrule.rrulestr(rruleset_str) + except Exception as e: + raise_from(AnsibleError("Failed to parse generated rule set via rruleset {0}".format(e)), e) + + # return self.get_rrule(frequency, kwargs) + return rruleset_str diff --git a/ansible_collections/awx/awx/plugins/module_utils/awxkit.py b/ansible_collections/awx/awx/plugins/module_utils/awxkit.py new file mode 100644 index 00000000..770b0c7a --- /dev/null +++ b/ansible_collections/awx/awx/plugins/module_utils/awxkit.py @@ -0,0 +1,55 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +from .controller_api import ControllerModule +from ansible.module_utils.basic import missing_required_lib + +try: + from awxkit.api.client import Connection + from awxkit.api.pages.api import ApiV2 + from awxkit.api import get_registered_page + + HAS_AWX_KIT = True +except ImportError: + HAS_AWX_KIT = False + + +class ControllerAWXKitModule(ControllerModule): + connection = None + apiV2Ref = None + + def 
__init__(self, argument_spec, **kwargs): + kwargs['supports_check_mode'] = False + + super().__init__(argument_spec=argument_spec, **kwargs) + + # Die if we don't have AWX_KIT installed + if not HAS_AWX_KIT: + self.fail_json(msg=missing_required_lib('awxkit')) + + # Establish our connection object + self.connection = Connection(self.host, verify=self.verify_ssl) + + def authenticate(self): + try: + if self.oauth_token: + self.connection.login(None, None, token=self.oauth_token) + self.authenticated = True + elif self.username: + self.connection.login(username=self.username, password=self.password) + self.authenticated = True + except Exception: + self.fail_json(msg="Failed to authenticate") + + def get_api_v2_object(self): + if not self.apiV2Ref: + if not self.authenticated: + self.authenticate() + v2_index = get_registered_page('/api/v2/')(self.connection).get() + self.apiV2Ref = ApiV2(connection=self.connection, **{'json': v2_index}) + return self.apiV2Ref + + def logout(self): + if self.authenticated: + self.connection.logout() diff --git a/ansible_collections/awx/awx/plugins/module_utils/controller_api.py b/ansible_collections/awx/awx/plugins/module_utils/controller_api.py new file mode 100644 index 00000000..3fb97148 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/module_utils/controller_api.py @@ -0,0 +1,1084 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +from ansible.module_utils.basic import AnsibleModule, env_fallback +from ansible.module_utils.urls import Request, SSLValidationError, ConnectionError +from ansible.module_utils.parsing.convert_bool import boolean as strtobool +from ansible.module_utils.six import PY2 +from ansible.module_utils.six import raise_from, string_types +from ansible.module_utils.six.moves import StringIO +from ansible.module_utils.six.moves.urllib.error import HTTPError +from ansible.module_utils.six.moves.http_cookiejar import CookieJar +from 
ansible.module_utils.six.moves.urllib.parse import urlparse, urlencode +from ansible.module_utils.six.moves.configparser import ConfigParser, NoOptionError +from socket import getaddrinfo, IPPROTO_TCP +import time +import re +from json import loads, dumps +from os.path import isfile, expanduser, split, join, exists, isdir +from os import access, R_OK, getcwd + + +try: + from ansible.module_utils.compat.version import LooseVersion as Version +except ImportError: + try: + from distutils.version import LooseVersion as Version + except ImportError: + raise AssertionError('To use this plugin or module with ansible-core 2.11, you need to use Python < 3.12 with distutils.version present') + +try: + import yaml + + HAS_YAML = True +except ImportError: + HAS_YAML = False + + +class ConfigFileException(Exception): + pass + + +class ItemNotDefined(Exception): + pass + + +class ControllerModule(AnsibleModule): + url = None + AUTH_ARGSPEC = dict( + controller_host=dict( + required=False, + aliases=['tower_host'], + fallback=(env_fallback, ['CONTROLLER_HOST', 'TOWER_HOST'])), + controller_username=dict( + required=False, + aliases=['tower_username'], + fallback=(env_fallback, ['CONTROLLER_USERNAME', 'TOWER_USERNAME'])), + controller_password=dict( + no_log=True, + aliases=['tower_password'], + required=False, + fallback=(env_fallback, ['CONTROLLER_PASSWORD', 'TOWER_PASSWORD'])), + validate_certs=dict( + type='bool', + aliases=['tower_verify_ssl'], + required=False, + fallback=(env_fallback, ['CONTROLLER_VERIFY_SSL', 'TOWER_VERIFY_SSL'])), + controller_oauthtoken=dict( + type='raw', + no_log=True, + aliases=['tower_oauthtoken'], + required=False, + fallback=(env_fallback, ['CONTROLLER_OAUTH_TOKEN', 'TOWER_OAUTH_TOKEN'])), + controller_config_file=dict( + type='path', + aliases=['tower_config_file'], + required=False, + default=None), + ) + short_params = { + 'host': 'controller_host', + 'username': 'controller_username', + 'password': 'controller_password', + 'verify_ssl': 
'validate_certs', + 'oauth_token': 'controller_oauthtoken', + } + host = '127.0.0.1' + username = None + password = None + verify_ssl = True + oauth_token = None + oauth_token_id = None + authenticated = False + config_name = 'tower_cli.cfg' + version_checked = False + error_callback = None + warn_callback = None + + def __init__(self, argument_spec=None, direct_params=None, error_callback=None, warn_callback=None, **kwargs): + full_argspec = {} + full_argspec.update(ControllerModule.AUTH_ARGSPEC) + full_argspec.update(argument_spec) + kwargs['supports_check_mode'] = True + + self.error_callback = error_callback + self.warn_callback = warn_callback + + self.json_output = {'changed': False} + + if direct_params is not None: + self.params = direct_params + else: + super().__init__(argument_spec=full_argspec, **kwargs) + + self.load_config_files() + + # Parameters specified on command line will override settings in any config + for short_param, long_param in self.short_params.items(): + direct_value = self.params.get(long_param) + if direct_value is not None: + setattr(self, short_param, direct_value) + + # Perform magic depending on whether controller_oauthtoken is a string or a dict + if self.params.get('controller_oauthtoken'): + token_param = self.params.get('controller_oauthtoken') + if type(token_param) is dict: + if 'token' in token_param: + self.oauth_token = self.params.get('controller_oauthtoken')['token'] + else: + self.fail_json(msg="The provided dict in controller_oauthtoken did not properly contain the token entry") + elif isinstance(token_param, string_types): + self.oauth_token = self.params.get('controller_oauthtoken') + else: + error_msg = "The provided controller_oauthtoken type was not valid ({0}). 
Valid options are str or dict.".format(type(token_param).__name__) + self.fail_json(msg=error_msg) + + # Perform some basic validation + if not re.match('^https{0,1}://', self.host): + self.host = "https://{0}".format(self.host) + + # Try to parse the hostname as a url + try: + self.url = urlparse(self.host) + # Store URL prefix for later use in build_url + self.url_prefix = self.url.path + except Exception as e: + self.fail_json(msg="Unable to parse controller_host as a URL ({1}): {0}".format(self.host, e)) + + # Remove ipv6 square brackets + remove_target = '[]' + for char in remove_target: + self.url.hostname.replace(char, "") + # Try to resolve the hostname + try: + addrinfolist = getaddrinfo(self.url.hostname, self.url.port, proto=IPPROTO_TCP) + for family, kind, proto, canonical, sockaddr in addrinfolist: + sockaddr[0] + except Exception as e: + self.fail_json(msg="Unable to resolve controller_host ({1}): {0}".format(self.url.hostname, e)) + + def build_url(self, endpoint, query_params=None): + # Make sure we start with /api/vX + if not endpoint.startswith("/"): + endpoint = "/{0}".format(endpoint) + prefix = self.url_prefix.rstrip("/") + if not endpoint.startswith(prefix + "/api/"): + endpoint = prefix + "/api/v2{0}".format(endpoint) + if not endpoint.endswith('/') and '?' 
not in endpoint: + endpoint = "{0}/".format(endpoint) + + # Update the URL path with the endpoint + url = self.url._replace(path=endpoint) + + if query_params: + url = url._replace(query=urlencode(query_params)) + + return url + + def load_config_files(self): + # Load configs like TowerCLI would have, from least important to most + config_files = ['/etc/tower/tower_cli.cfg', join(expanduser("~"), ".{0}".format(self.config_name))] + local_dir = getcwd() + config_files.append(join(local_dir, self.config_name)) + while split(local_dir)[1]: + local_dir = split(local_dir)[0] + config_files.insert(2, join(local_dir, ".{0}".format(self.config_name))) + + # If we have a specified tower config, load it + if self.params.get('controller_config_file'): + duplicated_params = [fn for fn in self.AUTH_ARGSPEC if fn != 'controller_config_file' and self.params.get(fn) is not None] + if duplicated_params: + self.warn( + ( + 'The parameter(s) {0} were provided at the same time as controller_config_file. ' + 'Precedence may be unstable, we suggest either using config file or params.' 
+ ).format(', '.join(duplicated_params)) + ) + try: + # TODO: warn if there are conflicts with other params + self.load_config(self.params.get('controller_config_file')) + except ConfigFileException as cfe: + # Since we were told specifically to load this we want it to fail if we have an error + self.fail_json(msg=cfe) + else: + for config_file in config_files: + if exists(config_file) and not isdir(config_file): + # Only throw a formatting error if the file exists and is not a directory + try: + self.load_config(config_file) + except ConfigFileException: + self.fail_json(msg='The config file {0} is not properly formatted'.format(config_file)) + + def load_config(self, config_path): + # Validate the config file is an actual file + if not isfile(config_path): + raise ConfigFileException('The specified config file does not exist') + + if not access(config_path, R_OK): + raise ConfigFileException("The specified config file cannot be read") + + # Read in the file contents: + with open(config_path, 'r') as f: + config_string = f.read() + + # First try to yaml load the content (which will also load json) + try: + try_config_parsing = True + if HAS_YAML: + try: + config_data = yaml.load(config_string, Loader=yaml.SafeLoader) + # If this is an actual ini file, yaml will return the whole thing as a string instead of a dict + if type(config_data) is not dict: + raise AssertionError("The yaml config file is not properly formatted as a dict.") + try_config_parsing = False + + except (AttributeError, yaml.YAMLError, AssertionError): + try_config_parsing = True + + if try_config_parsing: + # TowerCLI used to support a config file with a missing [general] section by prepending it if missing + if '[general]' not in config_string: + config_string = '[general]\n{0}'.format(config_string) + + config = ConfigParser() + + try: + placeholder_file = StringIO(config_string) + # py2 ConfigParser has readfp, that has been deprecated in favor of read_file in py3 + # This "if" removes the 
deprecation warning + if hasattr(config, 'read_file'): + config.read_file(placeholder_file) + else: + config.readfp(placeholder_file) + + # If we made it here then we have values from reading the ini file, so let's pull them out into a dict + config_data = {} + for honored_setting in self.short_params: + try: + config_data[honored_setting] = config.get('general', honored_setting) + except NoOptionError: + pass + + except Exception as e: + raise_from(ConfigFileException("An unknown exception occurred trying to ini load config file: {0}".format(e)), e) + + except Exception as e: + raise_from(ConfigFileException("An unknown exception occurred trying to load config file: {0}".format(e)), e) + + # If we made it here, we have a dict which has values in it from our config, any final settings logic can be performed here + for honored_setting in self.short_params: + if honored_setting in config_data: + # verify_ssl must be a boolean + if honored_setting == 'verify_ssl': + if type(config_data[honored_setting]) is str: + setattr(self, honored_setting, strtobool(config_data[honored_setting])) + else: + setattr(self, honored_setting, bool(config_data[honored_setting])) + else: + setattr(self, honored_setting, config_data[honored_setting]) + + def logout(self): + # This method is intended to be overridden + pass + + def fail_json(self, **kwargs): + # Try to log out if we are authenticated + self.logout() + if self.error_callback: + self.error_callback(**kwargs) + else: + super().fail_json(**kwargs) + + def exit_json(self, **kwargs): + # Try to log out if we are authenticated + self.logout() + super().exit_json(**kwargs) + + def warn(self, warning): + if self.warn_callback is not None: + self.warn_callback(warning) + else: + super().warn(warning) + + +class ControllerAPIModule(ControllerModule): + # TODO: Move the collection version check into controller_module.py + # This gets set by the make process so whatever is in here is irrelevant + _COLLECTION_VERSION = 
"21.12.0" + _COLLECTION_TYPE = "awx" + # This maps the collections type (awx/tower) to the values returned by the API + # Those values can be found in awx/api/generics.py line 204 + collection_to_version = { + 'awx': 'AWX', + 'controller': 'Red Hat Ansible Automation Platform', + } + session = None + IDENTITY_FIELDS = {'users': 'username', 'workflow_job_template_nodes': 'identifier', 'instances': 'hostname'} + ENCRYPTED_STRING = "$encrypted$" + + def __init__(self, argument_spec, direct_params=None, error_callback=None, warn_callback=None, **kwargs): + kwargs['supports_check_mode'] = True + + super().__init__( + argument_spec=argument_spec, direct_params=direct_params, error_callback=error_callback, warn_callback=warn_callback, **kwargs + ) + self.session = Request(cookies=CookieJar(), validate_certs=self.verify_ssl) + + if 'update_secrets' in self.params: + self.update_secrets = self.params.pop('update_secrets') + else: + self.update_secrets = True + + @staticmethod + def param_to_endpoint(name): + exceptions = {'inventory': 'inventories', 'target_team': 'teams', 'workflow': 'workflow_job_templates'} + return exceptions.get(name, '{0}s'.format(name)) + + @staticmethod + def get_name_field_from_endpoint(endpoint): + return ControllerAPIModule.IDENTITY_FIELDS.get(endpoint, 'name') + + def get_item_name(self, item, allow_unknown=False): + if item: + if 'name' in item: + return item['name'] + + for field_name in ControllerAPIModule.IDENTITY_FIELDS.values(): + if field_name in item: + return item[field_name] + + if item.get('type', None) in ('o_auth2_access_token', 'credential_input_source'): + return item['id'] + + if allow_unknown: + return 'unknown' + + if item: + self.fail_json(msg='Cannot determine identity field for {0} object.'.format(item.get('type', 'unknown'))) + else: + self.fail_json(msg='Cannot determine identity field for Undefined object.') + + def head_endpoint(self, endpoint, *args, **kwargs): + return self.make_request('HEAD', endpoint, **kwargs) + 
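As a standalone sketch of the endpoint-name helpers defined above: the dicts are copied verbatim from `ControllerAPIModule`, while the free functions are illustrative stand-ins for the static methods `param_to_endpoint` and `get_name_field_from_endpoint` (not the collection's actual import path).

```python
# Standalone sketch of the endpoint-name mapping used by ControllerAPIModule.
IDENTITY_FIELDS = {'users': 'username', 'workflow_job_template_nodes': 'identifier', 'instances': 'hostname'}


def param_to_endpoint(name):
    # Most module params pluralize with a trailing 's'; a few are irregular.
    exceptions = {'inventory': 'inventories', 'target_team': 'teams', 'workflow': 'workflow_job_templates'}
    return exceptions.get(name, '{0}s'.format(name))


def get_name_field_from_endpoint(endpoint):
    # Objects are looked up by 'name' unless the endpoint has a special identity field.
    return IDENTITY_FIELDS.get(endpoint, 'name')


print(param_to_endpoint('inventory'))            # -> inventories
print(param_to_endpoint('job_template'))         # -> job_templates
print(get_name_field_from_endpoint('users'))     # -> username
print(get_name_field_from_endpoint('projects'))  # -> name
```

This is why `get_one('users', name_or_id='admin')` later searches on `username` rather than `name`.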
def get_endpoint(self, endpoint, *args, **kwargs): + return self.make_request('GET', endpoint, **kwargs) + + def patch_endpoint(self, endpoint, *args, **kwargs): + # Handle check mode + if self.check_mode: + self.json_output['changed'] = True + self.exit_json(**self.json_output) + + return self.make_request('PATCH', endpoint, **kwargs) + + def post_endpoint(self, endpoint, *args, **kwargs): + # Handle check mode + if self.check_mode: + self.json_output['changed'] = True + self.exit_json(**self.json_output) + + return self.make_request('POST', endpoint, **kwargs) + + def delete_endpoint(self, endpoint, *args, **kwargs): + # Handle check mode + if self.check_mode: + self.json_output['changed'] = True + self.exit_json(**self.json_output) + + return self.make_request('DELETE', endpoint, **kwargs) + + def get_all_endpoint(self, endpoint, *args, **kwargs): + response = self.get_endpoint(endpoint, *args, **kwargs) + if 'next' not in response['json']: + raise RuntimeError('Expected list from API at {0}, got: {1}'.format(endpoint, response)) + next_page = response['json']['next'] + + if response['json']['count'] > 10000: + self.fail_json(msg='The number of items being queried for is higher than 10,000.') + + while next_page is not None: + next_response = self.get_endpoint(next_page) + response['json']['results'] = response['json']['results'] + next_response['json']['results'] + next_page = next_response['json']['next'] + response['json']['next'] = next_page + return response + + def get_one(self, endpoint, name_or_id=None, allow_none=True, **kwargs): + new_kwargs = kwargs.copy() + if name_or_id: + name_field = self.get_name_field_from_endpoint(endpoint) + new_data = kwargs.get('data', {}).copy() + if name_field in new_data: + self.fail_json(msg="You can't specify the field {0} in your search data if using the name_or_id field".format(name_field)) + + try: + new_data['or__id'] = int(name_or_id) + new_data['or__{0}'.format(name_field)] = name_or_id + except ValueError: + # If 
we got a ValueError, name_or_id was not an integer, so fall back to searching by the name field only + new_data[name_field] = name_or_id + new_kwargs['data'] = new_data + + response = self.get_endpoint(endpoint, **new_kwargs) + if response['status_code'] != 200: + fail_msg = "Got a {0} response when trying to get one from {1}".format(response['status_code'], endpoint) + if 'detail' in response.get('json', {}): + fail_msg += ', detail: {0}'.format(response['json']['detail']) + self.fail_json(msg=fail_msg) + + if 'count' not in response['json'] or 'results' not in response['json']: + self.fail_json(msg="The endpoint did not provide count and results") + + if response['json']['count'] == 0: + if allow_none: + return None + else: + self.fail_wanted_one(response, endpoint, new_kwargs.get('data')) + elif response['json']['count'] > 1: + if name_or_id: + # Since we did a name or ID search and got > 1, return the item whose id matches + for asset in response['json']['results']: + if str(asset['id']) == name_or_id: + return asset + # We got > 1 and either didn't find something by ID (which means multiple names) + # Or we weren't running with an OR search and just got back too many to begin with. 
+ self.fail_wanted_one(response, endpoint, new_kwargs.get('data')) + + return response['json']['results'][0] + + def fail_wanted_one(self, response, endpoint, query_params): + sample = response.copy() + if len(sample['json']['results']) > 1: + sample['json']['results'] = sample['json']['results'][:2] + ['...more results snipped...'] + url = self.build_url(endpoint, query_params) + display_endpoint = url.geturl()[len(self.host):] # truncate to not include the base URL + self.fail_json( + msg="Request to {0} returned {1} items, expected 1".format(display_endpoint, response['json']['count']), + query=query_params, + response=sample, + total_results=response['json']['count'], + ) + + def get_exactly_one(self, endpoint, name_or_id=None, **kwargs): + return self.get_one(endpoint, name_or_id=name_or_id, allow_none=False, **kwargs) + + def resolve_name_to_id(self, endpoint, name_or_id): + return self.get_exactly_one(endpoint, name_or_id)['id'] + + def make_request(self, method, endpoint, *args, **kwargs): + # In case someone is calling us directly; make sure we were given a method, let's not just assume a GET + if not method: + raise Exception("The HTTP method must be defined") + + if method in ['POST', 'PUT', 'PATCH']: + url = self.build_url(endpoint) + else: + url = self.build_url(endpoint, query_params=kwargs.get('data')) + + # Extract the headers, this will be used in a couple of places + headers = kwargs.get('headers', {}) + + # Authenticate to AWX (if we don't have a token and if not already done so) + if not self.oauth_token and not self.authenticated: + # This method will set a cookie in the cookie jar for us and also an oauth_token + self.authenticate(**kwargs) + if self.oauth_token: + # If we have a oauth token, we just use a bearer header + headers['Authorization'] = 'Bearer {0}'.format(self.oauth_token) + + if method in ['POST', 'PUT', 'PATCH']: + headers.setdefault('Content-Type', 'application/json') + kwargs['headers'] = headers + + data = None # Important, 
if content type is not JSON, this should not be dict type + if headers.get('Content-Type', '') == 'application/json': + data = dumps(kwargs.get('data', {})) + + try: + response = self.session.open(method, url.geturl(), headers=headers, validate_certs=self.verify_ssl, follow_redirects=True, data=data) + except (SSLValidationError) as ssl_err: + self.fail_json(msg="Could not establish a secure connection to your host ({0}): {1}.".format(url.netloc, ssl_err)) + except (ConnectionError) as con_err: + self.fail_json(msg="There was a network error of some kind trying to connect to your host ({0}): {1}.".format(url.netloc, con_err)) + except (HTTPError) as he: + # Sanity check: Did the server send back some kind of internal error? + if he.code >= 500: + self.fail_json(msg='The host sent back a server error ({0}): {1}. Please check the logs and try again later'.format(url.path, he)) + # Sanity check: Did we fail to authenticate properly? If so, fail out now; this is always a failure. + elif he.code == 401: + self.fail_json(msg='Invalid authentication credentials for {0} (HTTP 401).'.format(url.path)) + # Sanity check: Did we get a forbidden response, which means that the user isn't allowed to do this? Report that. + elif he.code == 403: + self.fail_json(msg="You don't have permission to {1} to {0} (HTTP 403).".format(url.path, method)) + # Sanity check: Did we get a 404 response? + # Requests with primary keys will return a 404 if there is no response, and we want to consistently trap these. + elif he.code == 404: + if kwargs.get('return_none_on_404', False): + return None + self.fail_json(msg='The requested object could not be found at {0}.'.format(url.path)) + # Sanity check: Did we get a 405 response? + # A 405 means we used a method that isn't allowed. Usually this is a bad request, but it requires special treatment because the + # API sends it as a logic error in a few situations (e.g. trying to cancel a job that isn't running). 
+ elif he.code == 405: + self.fail_json(msg="Cannot make a request with the {0} method to this endpoint {1}".format(method, url.path)) + # Sanity check: Did we get some other kind of error? If so, write an appropriate error message. + elif he.code >= 400: + # We are going to return the 4xx response so the module can decide what to do with it + page_data = he.read() + try: + return {'status_code': he.code, 'json': loads(page_data)} + # JSONDecodeError only available on Python 3.5+ + except ValueError: + return {'status_code': he.code, 'text': page_data} + elif he.code == 204 and method == 'DELETE': + # A 204 is a normal response for a delete function + pass + else: + self.fail_json(msg="Unexpected return code when calling {0}: {1}".format(url.geturl(), he)) + except (Exception) as e: + self.fail_json(msg="There was an unknown error when trying to connect to {2}: {0} {1}".format(type(e).__name__, e, url.geturl())) + + if not self.version_checked: + # In PY3 we get back an HTTPResponse object but PY2 returns an addinfourl + # First try to get the headers in PY3 format and then drop down to PY2. 
+ try: + controller_type = response.getheader('X-API-Product-Name', None) + controller_version = response.getheader('X-API-Product-Version', None) + except Exception: + controller_type = response.info().getheader('X-API-Product-Name', None) + controller_version = response.info().getheader('X-API-Product-Version', None) + + parsed_collection_version = Version(self._COLLECTION_VERSION).version + if not controller_version: + self.warn( + "You are using the {0} version of this collection but connecting to a controller that did not return a version".format( + self._COLLECTION_VERSION + ) + ) + else: + parsed_controller_version = Version(controller_version).version + if controller_type == 'AWX': + collection_compare_ver = parsed_collection_version[0] + controller_compare_ver = parsed_controller_version[0] + else: + collection_compare_ver = "{0}.{1}".format(parsed_collection_version[0], parsed_collection_version[1]) + controller_compare_ver = '{0}.{1}'.format(parsed_controller_version[0], parsed_controller_version[1]) + + if self._COLLECTION_TYPE not in self.collection_to_version or self.collection_to_version[self._COLLECTION_TYPE] != controller_type: + self.warn("You are using the {0} version of this collection but connecting to {1}".format(self._COLLECTION_TYPE, controller_type)) + elif collection_compare_ver != controller_compare_ver: + self.warn( + "You are running collection version {0} but connecting to {2} version {1}".format( + self._COLLECTION_VERSION, controller_version, controller_type + ) + ) + + self.version_checked = True + + response_body = '' + try: + response_body = response.read() + except (Exception) as e: + self.fail_json(msg="Failed to read response body: {0}".format(e)) + + response_json = {} + if response_body and response_body != '': + try: + response_json = loads(response_body) + except (Exception) as e: + self.fail_json(msg="Failed to parse the response json: {0}".format(e)) + + if PY2: + status_code = response.getcode() + else: + status_code = 
response.status + return {'status_code': status_code, 'json': response_json} + + def authenticate(self, **kwargs): + if self.username and self.password: + # Attempt to get a token from /api/v2/tokens/ by giving it our username/password combo + # If we have a username and password, we can use them to request a token + login_data = { + "description": "Automation Platform Controller Module Token", + "application": None, + "scope": "write", + } + # Preserve URL prefix + endpoint = self.url_prefix.rstrip('/') + '/api/v2/tokens/' + # Post to the tokens endpoint with basic auth to try and get a token + api_token_url = (self.url._replace(path=endpoint)).geturl() + + try: + response = self.session.open( + 'POST', + api_token_url, + validate_certs=self.verify_ssl, + follow_redirects=True, + force_basic_auth=True, + url_username=self.username, + url_password=self.password, + data=dumps(login_data), + headers={'Content-Type': 'application/json'}, + ) + except HTTPError as he: + try: + resp = he.read() + except Exception as e: + resp = 'unknown {0}'.format(e) + self.fail_json(msg='Failed to get token: {0}'.format(he), response=resp) + except (Exception) as e: + # Sanity check: Did the server send back some kind of internal error? + self.fail_json(msg='Failed to get token: {0}'.format(e)) + + token_response = None + try: + token_response = response.read() + response_json = loads(token_response) + self.oauth_token_id = response_json['id'] + self.oauth_token = response_json['token'] + except (Exception) as e: + self.fail_json(msg="Failed to extract token information from login response: {0}".format(e), **{'response': token_response}) + + # If we have neither of these, then we can try un-authenticated access + self.authenticated = True + + def delete_if_needed(self, existing_item, on_delete=None, auto_exit=True): + # This will exit from the module on its own. 
+ # If the method successfully deletes an item and on_delete param is defined, + # the on_delete parameter will be called as a method passing in this object and the json from the response + # This will return one of two things: + # 1. None if the existing_item is not defined (so no delete needs to happen) + # 2. The response from AWX from calling the delete on the endpoint. It's up to you to process the response and exit from the module + # Note: common error codes from the AWX API can cause the module to fail + if existing_item: + # If we have an item, we can try to delete it + try: + item_url = existing_item['url'] + item_type = existing_item['type'] + item_id = existing_item['id'] + item_name = self.get_item_name(existing_item, allow_unknown=True) + except KeyError as ke: + self.fail_json(msg="Unable to process delete of item due to missing data {0}".format(ke)) + + response = self.delete_endpoint(item_url) + + if response['status_code'] in [202, 204]: + if on_delete: + on_delete(self, response['json']) + self.json_output['changed'] = True + self.json_output['id'] = item_id + self.exit_json(**self.json_output) + if auto_exit: + self.exit_json(**self.json_output) + else: + return self.json_output + else: + if 'json' in response and '__all__' in response['json']: + self.fail_json(msg="Unable to delete {0} {1}: {2}".format(item_type, item_name, response['json']['__all__'][0])) + elif 'json' in response: + # This is from a project delete (if there is an active job against it) + if 'error' in response['json']: + self.fail_json(msg="Unable to delete {0} {1}: {2}".format(item_type, item_name, response['json']['error'])) + else: + self.fail_json(msg="Unable to delete {0} {1}: {2}".format(item_type, item_name, response['json'])) + else: + self.fail_json(msg="Unable to delete {0} {1}: {2}".format(item_type, item_name, response['status_code'])) + else: + if auto_exit: + self.exit_json(**self.json_output) + else: + return self.json_output + + def modify_associations(self, 
association_endpoint, new_association_list): + # if we got None instead of [] we are not modifying the association_list + if new_association_list is None: + return + + # First get the existing associations + response = self.get_all_endpoint(association_endpoint) + existing_associated_ids = [association['id'] for association in response['json']['results']] + + # Disassociate anything that is in existing_associated_ids but not in new_association_list + ids_to_remove = list(set(existing_associated_ids) - set(new_association_list)) + for an_id in ids_to_remove: + response = self.post_endpoint(association_endpoint, **{'data': {'id': int(an_id), 'disassociate': True}}) + if response['status_code'] == 204: + self.json_output['changed'] = True + else: + self.fail_json(msg="Failed to disassociate item {0}".format(response['json'].get('detail', response['json']))) + + # Associate anything that is in new_association_list but not in existing_associated_ids + for an_id in list(set(new_association_list) - set(existing_associated_ids)): + response = self.post_endpoint(association_endpoint, **{'data': {'id': int(an_id)}}) + if response['status_code'] == 204: + self.json_output['changed'] = True + else: + self.fail_json(msg="Failed to associate item {0}".format(response['json'].get('detail', response['json']))) + + def copy_item(self, existing_item, copy_from_name_or_id, new_item_name, endpoint=None, item_type='unknown', copy_lookup_data=None): + + if existing_item is not None: + self.warn("A {0} with the name {1} already exists.".format(item_type, new_item_name)) + self.json_output['changed'] = False + self.json_output['copied'] = False + return existing_item + + # Lookup existing item to copy from + copy_from_lookup = self.get_one(endpoint, name_or_id=copy_from_name_or_id, **{'data': copy_lookup_data}) + + # Fail if the copy_from_lookup is empty + if copy_from_lookup is None: + self.fail_json(msg="A {0} with the name {1} could not be found.".format(item_type, copy_from_name_or_id)) + 
+ # Do checks for copy permissions if warranted + if item_type == 'workflow_job_template': + copy_get_check = self.get_endpoint(copy_from_lookup['related']['copy']) + if copy_get_check['status_code'] in [200]: + if ( + copy_get_check['json']['can_copy'] + and copy_get_check['json']['can_copy_without_user_input'] + and not copy_get_check['json']['templates_unable_to_copy'] + and not copy_get_check['json']['credentials_unable_to_copy'] + and not copy_get_check['json']['inventories_unable_to_copy'] + ): + # All copy checks passed + self.json_output['copy_checks'] = 'passed' + else: + self.fail_json(msg="Unable to copy {0} {1} error: {2}".format(item_type, copy_from_name_or_id, copy_get_check)) + else: + self.fail_json(msg="Error accessing {0} {1}: {2}".format(item_type, copy_from_name_or_id, copy_get_check)) + + response = self.post_endpoint(copy_from_lookup['related']['copy'], **{'data': {'name': new_item_name}}) + + if response['status_code'] in [201]: + self.json_output['id'] = response['json']['id'] + self.json_output['changed'] = True + self.json_output['copied'] = True + new_existing_item = response['json'] + else: + if 'json' in response and '__all__' in response['json']: + self.fail_json(msg="Unable to create {0} {1}: {2}".format(item_type, new_item_name, response['json']['__all__'][0])) + elif 'json' in response: + self.fail_json(msg="Unable to create {0} {1}: {2}".format(item_type, new_item_name, response['json'])) + else: + self.fail_json(msg="Unable to create {0} {1}: {2}".format(item_type, new_item_name, response['status_code'])) + return new_existing_item + + def create_if_needed(self, existing_item, new_item, endpoint, on_create=None, auto_exit=True, item_type='unknown', associations=None): + + # This will exit from the module on its own + # If the method successfully creates an item and on_create param is defined, + # the on_create parameter will be called as a method passing in this object and the json from the response + # This will return 
one of two things: + # 1. None if the existing_item is already defined (so no create needs to happen) + # 2. The response from AWX from calling the post on the endpoint. It's up to you to process the response and exit from the module + # Note: common error codes from the AWX API can cause the module to fail + response = None + if not endpoint: + self.fail_json(msg="Unable to create new {0} due to missing endpoint".format(item_type)) + + item_url = None + if existing_item: + try: + item_url = existing_item['url'] + except KeyError as ke: + self.fail_json(msg="Unable to process create of item due to missing data {0}".format(ke)) + else: + # If we don't have an existing_item, we can try to create it + + # We have to rely on item_type being passed in since we don't have an existing item that declares its type + # We will pull the item_name out from the new_item, if it exists + item_name = self.get_item_name(new_item, allow_unknown=True) + + response = self.post_endpoint(endpoint, **{'data': new_item}) + + # 200 is the response from approval node creation on tower 3.7.3 or awx 15.0.0 or earlier. 
+ if response['status_code'] in [200, 201]: + self.json_output['name'] = 'unknown' + for key in ('name', 'username', 'identifier', 'hostname'): + if key in response['json']: + self.json_output['name'] = response['json'][key] + self.json_output['id'] = response['json']['id'] + self.json_output['changed'] = True + item_url = response['json']['url'] + else: + if 'json' in response and '__all__' in response['json']: + self.fail_json(msg="Unable to create {0} {1}: {2}".format(item_type, item_name, response['json']['__all__'][0])) + elif 'json' in response: + self.fail_json(msg="Unable to create {0} {1}: {2}".format(item_type, item_name, response['json'])) + else: + self.fail_json(msg="Unable to create {0} {1}: {2}".format(item_type, item_name, response['status_code'])) + + # Process any associations with this item + if associations is not None: + for association_type in associations: + sub_endpoint = '{0}{1}/'.format(item_url, association_type) + self.modify_associations(sub_endpoint, associations[association_type]) + + # If we have an on_create method and we actually changed something we can call on_create + if on_create is not None and self.json_output['changed']: + on_create(self, response['json']) + elif auto_exit: + self.exit_json(**self.json_output) + else: + if response is not None: + last_data = response['json'] + return last_data + else: + return + + def _encrypted_changed_warning(self, field, old, warning=False): + if not warning: + return + self.warn( + 'The field {0} of {1} {2} has encrypted data and may inaccurately report task is changed.'.format( + field, old.get('type', 'unknown'), old.get('id', 'unknown') + ) + ) + + @staticmethod + def has_encrypted_values(obj): + """Returns True if JSON-like python content in obj has $encrypted$ + anywhere in the data as a value + """ + if isinstance(obj, dict): + for val in obj.values(): + if ControllerAPIModule.has_encrypted_values(val): + return True + elif isinstance(obj, list): + for val in obj: + if 
ControllerAPIModule.has_encrypted_values(val): + return True + elif obj == ControllerAPIModule.ENCRYPTED_STRING: + return True + return False + + @staticmethod + def fields_could_be_same(old_field, new_field): + """Treating $encrypted$ as a wild card, + return False if the two values are KNOWN to be different + return True if the two values are the same, or could potentially be the same, + depending on the unknown $encrypted$ value or sub-values + """ + if isinstance(old_field, dict) and isinstance(new_field, dict): + if set(old_field.keys()) != set(new_field.keys()): + return False + for key in new_field.keys(): + if not ControllerAPIModule.fields_could_be_same(old_field[key], new_field[key]): + return False + return True # all sub-fields are either equal or could be equal + else: + if old_field == ControllerAPIModule.ENCRYPTED_STRING: + return True + return bool(new_field == old_field) + + def objects_could_be_different(self, old, new, field_set=None, warning=False): + if field_set is None: + field_set = set(fd for fd in new.keys() if fd not in ('modified', 'related', 'summary_fields')) + for field in field_set: + new_field = new.get(field, None) + old_field = old.get(field, None) + if old_field != new_field: + if self.update_secrets or (not self.fields_could_be_same(old_field, new_field)): + return True # Something doesn't match, or something might not match + elif self.has_encrypted_values(new_field) or field not in new: + if self.update_secrets or (not self.fields_could_be_same(old_field, new_field)): + # case of 'field not in new' - user password write-only field that API will not display + self._encrypted_changed_warning(field, old, warning=warning) + return True + return False + + def update_if_needed(self, existing_item, new_item, on_update=None, auto_exit=True, associations=None): + # This will exit from the module on its own + # If the method successfully updates an item and on_update param is defined, + # the on_update parameter will be called as a 
method passing in this object and the json from the response + # This will return one of two things: + # 1. None if the existing_item does not need to be updated + # 2. The response from AWX from patching to the endpoint. It's up to you to process the response and exit from the module. + # Note: common error codes from the AWX API can cause the module to fail + response = None + if existing_item: + + # If we have an item, we can see if it needs an update + try: + item_url = existing_item['url'] + item_type = existing_item['type'] + if item_type == 'user': + item_name = existing_item['username'] + elif item_type == 'workflow_job_template_node': + item_name = existing_item['identifier'] + elif item_type == 'credential_input_source': + item_name = existing_item['id'] + elif item_type == 'instance': + item_name = existing_item['hostname'] + else: + item_name = existing_item['name'] + item_id = existing_item['id'] + except KeyError as ke: + self.fail_json(msg="Unable to process update of item due to missing data {0}".format(ke)) + + # Check to see if anything within the item requires the item to be updated + needs_patch = self.objects_could_be_different(existing_item, new_item) + + # If we decided the item needs to be updated, update it + self.json_output['id'] = item_id + if needs_patch: + response = self.patch_endpoint(item_url, **{'data': new_item}) + if response['status_code'] == 200: + # compare apples-to-apples, old API data to new API data + # but do so considering the fields given in parameters + self.json_output['changed'] |= self.objects_could_be_different(existing_item, response['json'], field_set=new_item.keys(), warning=True) + elif 'json' in response and '__all__' in response['json']: + self.fail_json(msg=response['json']['__all__']) + else: + self.fail_json(**{'msg': "Unable to update {0} {1}, see response".format(item_type, item_name), 'response': response}) + + else: + raise RuntimeError('update_if_needed called incorrectly without existing_item') + + # 
Process any associations with this item + if associations is not None: + for association_type, id_list in associations.items(): + endpoint = '{0}{1}/'.format(item_url, association_type) + self.modify_associations(endpoint, id_list) + + # If we changed something and have an on_update, call it + if on_update is not None and self.json_output['changed']: + if response is None: + last_data = existing_item + else: + last_data = response['json'] + on_update(self, last_data) + elif auto_exit: + self.exit_json(**self.json_output) + else: + if response is None: + last_data = existing_item + else: + last_data = response['json'] + return last_data + + def create_or_update_if_needed( + self, existing_item, new_item, endpoint=None, item_type='unknown', on_create=None, on_update=None, auto_exit=True, associations=None + ): + if existing_item: + return self.update_if_needed(existing_item, new_item, on_update=on_update, auto_exit=auto_exit, associations=associations) + else: + return self.create_if_needed( + existing_item, new_item, endpoint, on_create=on_create, item_type=item_type, auto_exit=auto_exit, associations=associations + ) + + def logout(self): + if self.authenticated and self.oauth_token_id: + # Attempt to delete our current token from /api/v2/tokens/ + # Post to the tokens endpoint with basic auth to delete the token + endpoint = self.url_prefix.rstrip('/') + '/api/v2/tokens/{0}/'.format(self.oauth_token_id) + api_token_url = ( + self.url._replace( + path=endpoint, query=None # in error cases, fail_json exits before exception handling + ) + ).geturl() + + try: + self.session.open( + 'DELETE', + api_token_url, + validate_certs=self.verify_ssl, + follow_redirects=True, + force_basic_auth=True, + url_username=self.username, + url_password=self.password, + ) + self.oauth_token_id = None + self.authenticated = False + except HTTPError as he: + try: + resp = he.read() + except Exception as e: + resp = 'unknown {0}'.format(e) + self.warn('Failed to release token: {0}, 
response: {1}'.format(he, resp)) + except (Exception) as e: + # Sanity check: Did the server send back some kind of internal error? + self.warn('Failed to release token {0}: {1}'.format(self.oauth_token_id, e)) + + def is_job_done(self, job_status): + if job_status in ['new', 'pending', 'waiting', 'running']: + return False + else: + return True + + def wait_on_url(self, url, object_name, object_type, timeout=30, interval=2): + # Grab our start time to compare against for the timeout + start = time.time() + result = self.get_endpoint(url) + while not result['json']['finished']: + # If we are past our time out fail with a message + if timeout and timeout < time.time() - start: + # Account for Legacy messages + if object_type == 'legacy_job_wait': + self.json_output['msg'] = 'Monitoring of Job - {0} aborted due to timeout'.format(object_name) + else: + self.json_output['msg'] = 'Monitoring of {0} - {1} aborted due to timeout'.format(object_type, object_name) + self.wait_output(result) + self.fail_json(**self.json_output) + + # Put the process to sleep for our interval + time.sleep(interval) + + result = self.get_endpoint(url) + self.json_output['status'] = result['json']['status'] + + # If the job has failed, we want to raise a task failure for that so we get a non-zero response. 
+ if result['json']['failed']: + # Account for Legacy messages + if object_type == 'legacy_job_wait': + self.json_output['msg'] = 'Job with id {0} failed'.format(object_name) + else: + self.json_output['msg'] = 'The {0} - {1} failed'.format(object_type, object_name) + self.json_output["job_data"] = result["json"] + self.wait_output(result) + self.fail_json(**self.json_output) + + self.wait_output(result) + + return result + + def wait_output(self, response): + for k in ('id', 'status', 'elapsed', 'started', 'finished'): + self.json_output[k] = response['json'].get(k) + + def wait_on_workflow_node_url(self, url, object_name, object_type, timeout=30, interval=2, **kwargs): + # Grab our start time to compare against for the timeout + start = time.time() + result = self.get_endpoint(url, **kwargs) + + while result["json"]["count"] == 0: + # If we are past our timeout, fail with a message + if timeout and timeout < time.time() - start: + # Account for Legacy messages + self.json_output["msg"] = "Monitoring of {0} - {1} aborted due to timeout, {2}".format(object_type, object_name, url) + self.wait_output(result) + self.fail_json(**self.json_output) + + # Put the process to sleep for our interval + time.sleep(interval) + result = self.get_endpoint(url, **kwargs) + + if object_type == "Workflow Approval": + # Approval jobs have no elapsed time so return + return result["json"]["results"][0] + else: + # Subtract the time elapsed so far from the timeout. 
+ revised_timeout = timeout - (time.time() - start) + # Now that Job has been found, wait for it to finish + result = self.wait_on_url( + url=result["json"]["results"][0]["related"]["job"], + object_name=object_name, + object_type=object_type, + timeout=revised_timeout, + interval=interval, + ) + self.json_output["job_data"] = result["json"] + return result diff --git a/ansible_collections/awx/awx/plugins/module_utils/tower_legacy.py b/ansible_collections/awx/awx/plugins/module_utils/tower_legacy.py new file mode 100644 index 00000000..84205c0f --- /dev/null +++ b/ansible_collections/awx/awx/plugins/module_utils/tower_legacy.py @@ -0,0 +1,119 @@ +# This code is part of Ansible, but is an independent component. +# This particular file snippet, and this file snippet only, is BSD licensed. +# Modules you write using this snippet, which is embedded dynamically by Ansible +# still belong to the author of the module, and may assign their own license +# to the complete work. +# +# Copyright (c), Wayne Witzel III <wayne@riotousliving.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without modification, +# are permitted provided that the following conditions are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright notice, +# this list of conditions and the following disclaimer in the documentation +# and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
+# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE +# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import os +import traceback + +TOWER_CLI_IMP_ERR = None +try: + import tower_cli.utils.exceptions as exc + from tower_cli.utils import parser + from tower_cli.api import client + + HAS_TOWER_CLI = True +except ImportError: + TOWER_CLI_IMP_ERR = traceback.format_exc() + HAS_TOWER_CLI = False + +from ansible.module_utils.basic import AnsibleModule, missing_required_lib + + +def tower_auth_config(module): + """ + `tower_auth_config` attempts to load the tower-cli.cfg file + specified from the `tower_config_file` parameter. If found, + it returns the contents of the file as a dictionary, else + it will attempt to fetch values from the module params and + only pass those values that have been set.
+ """ + config_file = module.params.pop('tower_config_file', None) + if config_file: + if not os.path.exists(config_file): + module.fail_json(msg='file not found: %s' % config_file) + if os.path.isdir(config_file): + module.fail_json(msg='directory can not be used as config file: %s' % config_file) + + with open(config_file, 'r') as f: + return parser.string_to_dict(f.read()) + else: + auth_config = {} + host = module.params.pop('tower_host', None) + if host: + auth_config['host'] = host + username = module.params.pop('tower_username', None) + if username: + auth_config['username'] = username + password = module.params.pop('tower_password', None) + if password: + auth_config['password'] = password + module.params.pop('tower_verify_ssl', None) # pop alias if used + verify_ssl = module.params.pop('validate_certs', None) + if verify_ssl is not None: + auth_config['verify_ssl'] = verify_ssl + return auth_config + + +def tower_check_mode(module): + '''Execute check mode logic for Ansible Tower modules''' + if module.check_mode: + try: + result = client.get('/ping').json() + module.exit_json(changed=True, tower_version='{0}'.format(result['version'])) + except (exc.ServerError, exc.ConnectionError, exc.BadRequest) as excinfo: + module.fail_json(changed=False, msg='Failed check mode: {0}'.format(excinfo)) + + +class TowerLegacyModule(AnsibleModule): + def __init__(self, argument_spec, **kwargs): + args = dict( + tower_host=dict(), + tower_username=dict(), + tower_password=dict(no_log=True), + validate_certs=dict(type='bool', aliases=['tower_verify_ssl']), + tower_config_file=dict(type='path'), + ) + args.update(argument_spec) + + kwargs.setdefault('mutually_exclusive', []) + kwargs['mutually_exclusive'].extend( + ( + ('tower_config_file', 'tower_host'), + ('tower_config_file', 'tower_username'), + ('tower_config_file', 'tower_password'), + ('tower_config_file', 'validate_certs'), + ) + ) + + super().__init__(argument_spec=args, **kwargs) + + if not HAS_TOWER_CLI: + 
self.fail_json(msg=missing_required_lib('ansible-tower-cli'), exception=TOWER_CLI_IMP_ERR) diff --git a/ansible_collections/awx/awx/plugins/modules/__init__.py b/ansible_collections/awx/awx/plugins/modules/__init__.py new file mode 100644 index 00000000..e69de29b --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/__init__.py diff --git a/ansible_collections/awx/awx/plugins/modules/ad_hoc_command.py b/ansible_collections/awx/awx/plugins/modules/ad_hoc_command.py new file mode 100644 index 00000000..bfe22eb8 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/ad_hoc_command.py @@ -0,0 +1,190 @@ +#!/usr/bin/python +# coding: utf-8 -*- + + +# (c) 2020, John Westcott IV <john.westcott.iv@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + +DOCUMENTATION = ''' +--- +module: ad_hoc_command +author: "John Westcott IV (@john-westcott-iv)" +version_added: "4.0.0" +short_description: launch an Automation Platform Controller ad hoc command. +description: + - Launch an Automation Platform Controller ad hoc command. See + U(https://www.ansible.com/tower) for an overview. +options: + job_type: + description: + - Job type to use for the ad hoc command. + type: str + choices: [ 'run', 'check' ] + execution_environment: + description: + - Execution Environment to use for the ad hoc command. + required: False + type: str + inventory: + description: + - Inventory to use for the ad hoc command. + required: True + type: str + limit: + description: + - Limit to use for the ad hoc command. + type: str + credential: + description: + - Credential to use for the ad hoc command. + required: True + type: str + module_name: + description: + - The Ansible module to execute.
+ required: True + type: str + module_args: + description: + - The arguments to pass to the module. + type: str + forks: + description: + - The number of forks to use for this ad hoc execution. + type: int + verbosity: + description: + - Verbosity level for this ad hoc command run. + type: int + choices: [ 0, 1, 2, 3, 4, 5 ] + extra_vars: + description: + - Extra variables to use for the ad hoc command. + type: dict + become_enabled: + description: + - If the become flag should be set. + type: bool + diff_mode: + description: + - Show the changes made by Ansible tasks where supported. + type: bool + wait: + description: + - Wait for the command to complete. + default: False + type: bool + interval: + description: + - The interval to request an update from the controller. + default: 2 + type: float + timeout: + description: + - If waiting for the command to complete, this will abort after this + number of seconds. + type: int +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +''' + +RETURN = ''' +id: + description: id of the newly launched command + returned: success + type: int + sample: 86 +status: + description: status of newly launched command + returned: success + type: str + sample: pending +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + job_type=dict(choices=['run', 'check']), + inventory=dict(required=True), + limit=dict(), + credential=dict(required=True), + module_name=dict(required=True), + module_args=dict(), + forks=dict(type='int'), + verbosity=dict(type='int', choices=[0, 1, 2, 3, 4, 5]), + extra_vars=dict(type='dict'), + become_enabled=dict(type='bool'), + diff_mode=dict(type='bool'), + wait=dict(default=False, type='bool'), + interval=dict(default=2.0, type='float'), + timeout=dict(type='int'), + execution_environment=dict(), + ) + + # Create a module for ourselves + module =
ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + inventory = module.params.get('inventory') + credential = module.params.get('credential') + module_name = module.params.get('module_name') + module_args = module.params.get('module_args') + + wait = module.params.get('wait') + interval = module.params.get('interval') + timeout = module.params.get('timeout') + + # Create a datastructure to pass into our command launch + post_data = { + 'module_name': module_name, + 'module_args': module_args, + } + for arg in ['job_type', 'limit', 'forks', 'verbosity', 'extra_vars', 'become_enabled', 'diff_mode']: + if module.params.get(arg): + post_data[arg] = module.params.get(arg) + + # Attempt to look up the related items the user specified (these will fail the module if not found) + post_data['inventory'] = module.resolve_name_to_id('inventories', inventory) + post_data['credential'] = module.resolve_name_to_id('credentials', credential) + + # Launch the ad hoc command + results = module.post_endpoint('ad_hoc_commands', **{'data': post_data}) + + if results['status_code'] != 201: + module.fail_json(msg="Failed to launch command, see response for details", **{'response': results}) + + if not wait: + module.exit_json( + **{ + 'changed': True, + 'id': results['json']['id'], + 'status': results['json']['status'], + } + ) + + # Invoke wait function + results = module.wait_on_url(url=results['json']['url'], object_name=module_name, object_type='Ad Hoc Command', timeout=timeout, interval=interval) + + module.exit_json( + **{ + 'changed': True, + 'id': results['json']['id'], + 'status': results['json']['status'], + } + ) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/ad_hoc_command_cancel.py b/ansible_collections/awx/awx/plugins/modules/ad_hoc_command_cancel.py new file mode 100644 index 00000000..12cdaeaa --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/ad_hoc_command_cancel.py @@ -0,0 
+1,133 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: ad_hoc_command_cancel +author: "John Westcott IV (@john-westcott-iv)" +short_description: Cancel an Ad Hoc Command. +description: + - Cancel an ad hoc command. See + U(https://www.ansible.com/tower) for an overview. +options: + command_id: + description: + - ID of the command to cancel + required: True + type: int + fail_if_not_running: + description: + - Fail loudly if the I(command_id) cannot be canceled + default: False + type: bool + interval: + description: + - The interval in seconds, to request an update from the controller. + required: False + default: 1 + type: float + timeout: + description: + - Maximum time in seconds to wait for a job to finish. + - Not specifying means the task will wait until the controller cancels the command.
+ type: int + default: 0 +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Cancel command + ad_hoc_command_cancel: + command_id: command.id +''' + +RETURN = ''' +id: + description: command id requesting to cancel + returned: success + type: int + sample: 94 +''' + + +import time + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + command_id=dict(type='int', required=True), + fail_if_not_running=dict(type='bool', default=False), + interval=dict(type='float', default=1.0), + timeout=dict(type='int', default=0), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + command_id = module.params.get('command_id') + fail_if_not_running = module.params.get('fail_if_not_running') + interval = module.params.get('interval') + timeout = module.params.get('timeout') + + # Attempt to look up the command based on the provided name + command = module.get_one( + 'ad_hoc_commands', + **{ + 'data': { + 'id': command_id, + } + } + ) + + if command is None: + module.fail_json(msg="Unable to find command with id {0}".format(command_id)) + + cancel_page = module.get_endpoint(command['related']['cancel']) + if 'json' not in cancel_page or 'can_cancel' not in cancel_page['json']: + module.fail_json(msg="Failed to cancel command, got unexpected response", **{'response': cancel_page}) + + if not cancel_page['json']['can_cancel']: + if fail_if_not_running: + module.fail_json(msg="Ad Hoc Command is not running") + else: + module.exit_json(**{'changed': False}) + + results = module.post_endpoint(command['related']['cancel'], **{'data': {}}) + + if results['status_code'] != 202: + module.fail_json(msg="Failed to cancel command, see response for details", **{'response': results}) + + result = module.get_endpoint(command['related']['cancel']) + start = 
time.time() + while result['json']['can_cancel']: + # If we are past our timeout, fail with a message + if timeout and timeout < time.time() - start: + # Account for Legacy messages + module.json_output['msg'] = 'Monitoring of ad hoc command aborted due to timeout' + module.fail_json(**module.json_output) + + # Put the process to sleep for our interval + time.sleep(interval) + + result = module.get_endpoint(command['related']['cancel']) + + module.exit_json(**{'changed': True}) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/ad_hoc_command_wait.py b/ansible_collections/awx/awx/plugins/modules/ad_hoc_command_wait.py new file mode 100644 index 00000000..cedc4aa9 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/ad_hoc_command_wait.py @@ -0,0 +1,124 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: ad_hoc_command_wait +author: "John Westcott IV (@john-westcott-iv)" +short_description: Wait for Automation Platform Controller Ad Hoc Command to finish. +description: + - Wait for Automation Platform Controller ad hoc command to finish and report success or failure. See + U(https://www.ansible.com/tower) for an overview. +options: + command_id: + description: + - ID of the ad hoc command to monitor. + required: True + type: int + interval: + description: + - The interval in seconds, to request an update from the controller. + required: False + default: 2 + type: float + timeout: + description: + - Maximum time in seconds to wait for an ad hoc command to finish.
+ type: int +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Launch an ad hoc command + ad_hoc_command: + inventory: "Demo Inventory" + credential: "Demo Credential" + wait: false + register: command + +- name: Wait for ad hoc command, max 120s + ad_hoc_command_wait: + command_id: "{{ command.id }}" + timeout: 120 +''' + +RETURN = ''' +id: + description: Ad hoc command id that is being waited on + returned: success + type: int + sample: 99 +elapsed: + description: total time in seconds the command took to run + returned: success + type: float + sample: 10.879 +started: + description: timestamp of when the command started running + returned: success + type: str + sample: "2017-03-01T17:03:53.200234Z" +finished: + description: timestamp of when the command finished running + returned: success + type: str + sample: "2017-03-01T17:04:04.078782Z" +status: + description: current status of command + returned: success + type: str + sample: successful +''' + + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + command_id=dict(type='int', required=True), + timeout=dict(type='int'), + interval=dict(type='float', default=2), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + command_id = module.params.get('command_id') + timeout = module.params.get('timeout') + interval = module.params.get('interval') + + # Attempt to look up command based on the provided id + command = module.get_one( + 'ad_hoc_commands', + **{ + 'data': { + 'id': command_id, + } + } + ) + + if command is None: + module.fail_json(msg='Unable to wait on ad hoc command {0}; that ID does not exist.'.format(command_id)) + + # Invoke wait function + module.wait_on_url(url=command['url'], object_name=command_id, object_type='ad hoc command', timeout=timeout, interval=interval)
+ + module.exit_json(**module.json_output) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/application.py b/ansible_collections/awx/awx/plugins/modules/application.py new file mode 100644 index 00000000..e35e99df --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/application.py @@ -0,0 +1,155 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2020, Geoffrey Bachelot <bachelotg@gmail.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: application
author: "Geoffrey Bachelot (@jffz)" +short_description: create, update, or destroy Automation Platform Controller applications +description: + - Create, update, or destroy Automation Platform Controller applications. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - Name of the application. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name (looked up via the name field). + type: str + description: + description: + - Description of the application. + type: str + authorization_grant_type: + description: + - The grant type the user must use to acquire tokens for this application. + choices: ["password", "authorization-code"] + type: str + required: False + client_type: + description: + - Set to public or confidential depending on how secure the client device is. + choices: ["public", "confidential"] + type: str + required: False + organization: + description: + - Name of organization for application. + type: str + required: True + redirect_uris: + description: + - Allowed URLs list, space separated.
Required when authorization-grant-type=authorization-code + type: list + elements: str + state: + description: + - Desired state of the resource. + default: "present" + choices: ["present", "absent"] + type: str + skip_authorization: + description: + - Set True to skip authorization step for completely trusted applications. + type: bool + +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Add Foo application + application: + name: "Foo" + description: "Foo bar application" + organization: "test" + state: present + authorization_grant_type: password + client_type: public + +- name: Add Foo application + application: + name: "Foo" + description: "Foo bar application" + organization: "test" + state: present + authorization_grant_type: authorization-code + client_type: confidential + redirect_uris: + - http://tower.com/api/v2/ +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + description=dict(), + authorization_grant_type=dict(choices=["password", "authorization-code"]), + client_type=dict(choices=['public', 'confidential']), + organization=dict(required=True), + redirect_uris=dict(type="list", elements='str'), + state=dict(choices=['present', 'absent'], default='present'), + skip_authorization=dict(type='bool'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get("new_name") + description = module.params.get('description') + authorization_grant_type = module.params.get('authorization_grant_type') + client_type = module.params.get('client_type') + organization = module.params.get('organization') + redirect_uris = module.params.get('redirect_uris') + state = module.params.get('state') + + # Attempt to look up the 
related items the user specified (these will fail the module if not found) + org_id = module.resolve_name_to_id('organizations', organization) + + # Attempt to look up application based on the provided name and org ID + application = module.get_one('applications', name_or_id=name, **{'data': {'organization': org_id}}) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(application) + + # Create the data that gets sent for create and update + application_fields = { + 'name': new_name if new_name else (module.get_item_name(application) if application else name), + 'organization': org_id, + } + if authorization_grant_type is not None: + application_fields['authorization_grant_type'] = authorization_grant_type + if client_type is not None: + application_fields['client_type'] = client_type + if description is not None: + application_fields['description'] = description + if redirect_uris is not None: + application_fields['redirect_uris'] = ' '.join(redirect_uris) + + # If the state was present and we can let the module build or update the existing application, this will return on its own + module.create_or_update_if_needed(application, application_fields, endpoint='applications', item_type='application') + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/controller_meta.py b/ansible_collections/awx/awx/plugins/modules/controller_meta.py new file mode 100644 index 00000000..d0812c97 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/controller_meta.py @@ -0,0 +1,75 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2020, Ansible by Red Hat, Inc +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 
'supported_by': 'community'} + +DOCUMENTATION = ''' +--- +module: controller_meta +author: "Alan Rominger (@alancoding)" +short_description: Returns metadata about the collection this module lives in. +description: + - Allows a user to find out what collection this module exists in. + - This takes common module parameters, but does nothing with them. +options: {} +extends_documentation_fragment: awx.awx.auth +''' + + +RETURN = ''' +prefix: + description: Collection namespace and name in the namespace.name format + returned: success + sample: awx.awx + type: str +name: + description: Collection name + returned: success + sample: awx + type: str +namespace: + description: Collection namespace + returned: success + sample: awx + type: str +version: + description: Version of the collection + returned: success + sample: 0.0.1-devel + type: str +''' + + +EXAMPLES = ''' +- controller_meta: + register: result + +- name: Show details about the collection + debug: var=result + +- name: Load the UI setting without hard-coding the collection name + debug: + msg: "{{ lookup(result.prefix + '.controller_api', 'settings/ui') }}" +''' + + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + module = ControllerAPIModule(argument_spec={}) + namespace = {'awx': 'awx', 'controller': 'ansible'}.get(module._COLLECTION_TYPE, 'unknown') + namespace_name = '{0}.{1}'.format(namespace, module._COLLECTION_TYPE) + module.exit_json(prefix=namespace_name, name=module._COLLECTION_TYPE, namespace=namespace, version=module._COLLECTION_VERSION) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/credential.py b/ansible_collections/awx/awx/plugins/modules/credential.py new file mode 100644 index 00000000..4e7f02e5 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/credential.py @@ -0,0 +1,301 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# Copyright: (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU 
General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: credential +author: "Wayne Witzel III (@wwitzel3)" +short_description: create, update, or destroy Automation Platform Controller credentials. +description: + - Create, update, or destroy Automation Platform Controller credentials. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - The name to use for the credential. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name (looked up via the name field). + required: False + type: str + copy_from: + description: + - Name or id to copy the credential from. + - This will copy an existing credential and change any parameters supplied. + - The new credential name will be the one provided in the name parameter. + - The organization parameter is not used in this, to facilitate copying from one organization to another. + - Provide the id or use the lookup plugin to provide the id if multiple credentials share the same name. + type: str + description: + description: + - The description to use for the credential. + type: str + organization: + description: + - Organization that should own the credential. + type: str + credential_type: + description: + - The credential type being created.
+ - Can be a built-in credential type such as "Machine", or a custom credential type such as "My Credential Type" + - Choices include Amazon Web Services, Ansible Galaxy/Automation Hub API Token, Centrify Vault Credential Provider Lookup, + Container Registry, CyberArk Central Credential Provider Lookup, CyberArk Conjur Secret Lookup, Google Compute Engine, + GitHub Personal Access Token, GitLab Personal Access Token, GPG Public Key, HashiCorp Vault Secret Lookup, HashiCorp Vault Signed SSH, + Insights, Machine, Microsoft Azure Key Vault, Microsoft Azure Resource Manager, Network, OpenShift or Kubernetes API + Bearer Token, OpenStack, Red Hat Ansible Automation Platform, Red Hat Satellite 6, Red Hat Virtualization, Source Control, + Thycotic DevOps Secrets Vault, Thycotic Secret Server, Vault, VMware vCenter, or a custom credential type + type: str + inputs: + description: + - >- + Credential inputs where the keys are var names used in templating. + Refer to the Automation Platform Controller documentation for example syntax. 
+ - authorize (use this for net type) + - authorize_password (password for net credentials that require authorize) + - client (client or application ID for azure_rm type) + - security_token (STS token for aws type) + - secret (secret token for azure_rm type) + - tenant (tenant ID for azure_rm type) + - subscription (subscription ID for azure_rm type) + - domain (domain for openstack type) + - become_method (become method to use for privilege escalation; some examples are "None", "sudo", "su", "pbrun") + - become_username (become username; use "ASK" and launch job to be prompted) + - become_password (become password; use "ASK" and launch job to be prompted) + - vault_password (the vault password; use "ASK" and launch job to be prompted) + - project (project that should use this credential for GCP) + - host (the host for this credential) + - username (the username for this credential; ``access_key`` for AWS) + - password (the password for this credential; ``secret_key`` for AWS, ``api_key`` for RAX) + - ssh_key_data (SSH private key content; to extract the content from a file path, use the lookup function (see examples)) + - vault_id (the vault identifier; this parameter is only valid if C(kind) is specified as C(vault).) + - ssh_key_unlock (unlock password for ssh_key; use "ASK" and launch job to be prompted) + - gpg_public_key (GPG Public Key used for signature validation) + type: dict + update_secrets: + description: + - C(true) will always update encrypted values. + - C(false) will only updated encrypted values if a change is absolutely known to be needed. + type: bool + default: true + user: + description: + - User that should own this credential. + type: str + team: + description: + - Team that should own this credential. + type: str + state: + description: + - Desired state of the resource. 
+ choices: ["present", "absent"] + default: "present" + type: str + +extends_documentation_fragment: awx.awx.auth + +notes: + - Values `inputs` and the other deprecated fields (such as `tenant`) are replacements of existing values. + See the last 4 examples for details. +''' + + +EXAMPLES = ''' +- name: Add machine credential + credential: + name: Team Name + description: Team Description + organization: test-org + credential_type: Machine + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Create a valid SCM credential from a private_key file + credential: + name: SCM Credential + organization: Default + state: present + credential_type: Source Control + inputs: + username: joe + password: secret + ssh_key_data: "{{ lookup('file', '/tmp/id_rsa') }}" + ssh_key_unlock: "passphrase" + +- name: Fetch private key + slurp: + src: '$HOME/.ssh/aws-private.pem' + register: aws_ssh_key + +- name: Add Credential + credential: + name: Workshop Credential + credential_type: Machine + organization: Default + inputs: + ssh_key_data: "{{ aws_ssh_key['content'] | b64decode }}" + run_once: true + delegate_to: localhost + +- name: Add Credential with Custom Credential Type + credential: + name: Workshop Credential + credential_type: MyCloudCredential + organization: Default + controller_username: admin + controller_password: ansible + controller_host: https://localhost + +- name: Create a Vault credential (example for notes) + credential: + name: Example password + credential_type: Vault + organization: Default + inputs: + vault_password: 'hello' + vault_id: 'My ID' + +- name: Bad password update (will replace vault_id) + credential: + name: Example password + credential_type: Vault + organization: Default + inputs: + vault_password: 'new_password' + +- name: Another bad password update (will replace vault_id) + credential: + name: Example password + credential_type: Vault + organization: Default + vault_password: 'new_password' + +- name: A safe way to update a 
password and keep vault_id + credential: + name: Example password + credential_type: Vault + organization: Default + inputs: + vault_password: 'new_password' + vault_id: 'My ID' + +- name: Copy Credential + credential: + name: Copy password + copy_from: Example password + credential_type: Vault + organization: Foo +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + copy_from=dict(), + description=dict(), + organization=dict(), + credential_type=dict(), + inputs=dict(type='dict', no_log=True), + update_secrets=dict(type='bool', default=True, no_log=False), + user=dict(), + team=dict(), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get('new_name') + copy_from = module.params.get('copy_from') + description = module.params.get('description') + organization = module.params.get('organization') + credential_type = module.params.get('credential_type') + inputs = module.params.get('inputs') + user = module.params.get('user') + team = module.params.get('team') + state = module.params.get('state') + + cred_type_id = module.resolve_name_to_id('credential_types', credential_type) + if organization: + org_id = module.resolve_name_to_id('organizations', organization) + + # Attempt to look up the object based on the provided name, credential type and optional organization + lookup_data = { + 'credential_type': cred_type_id, + } + # Create a copy of lookup data for copying without org. 
+ copy_lookup_data = dict(lookup_data) + if organization: + lookup_data['organization'] = org_id + + credential = module.get_one('credentials', name_or_id=name, **{'data': lookup_data}) + + # Attempt to look up credential to copy based on the provided name + if copy_from: + # a new item is created when copying and is returned. + credential = module.copy_item( + credential, + copy_from, + name, + endpoint='credentials', + item_type='credential', + copy_lookup_data=copy_lookup_data, + ) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(credential) + + # Attempt to look up the related items the user specified (these will fail the module if not found) + if user: + user_id = module.resolve_name_to_id('users', user) + if team: + team_id = module.resolve_name_to_id('teams', team) + + # Create the data that gets sent for create and update + credential_fields = { + 'name': new_name if new_name else (module.get_item_name(credential) if credential else name), + 'credential_type': cred_type_id, + } + + if inputs: + credential_fields['inputs'] = inputs + if description: + credential_fields['description'] = description + if organization: + credential_fields['organization'] = org_id + + # If we don't already have a credential (and we are creating one) we can add user/team + # The API does not appear to do anything with these after creation anyway + # NOTE: We can't just add these on a modification because they are never returned from a GET so it would always cause a changed=True + if not credential: + if user: + credential_fields['user'] = user_id + if team: + credential_fields['team'] = team_id + + # If the state was present we can let the module build or update the existing credential, this will return on its own + module.create_or_update_if_needed(credential, credential_fields, endpoint='credentials', item_type='credential') + + +if __name__ == '__main__': + main()
diff --git a/ansible_collections/awx/awx/plugins/modules/credential_input_source.py b/ansible_collections/awx/awx/plugins/modules/credential_input_source.py new file mode 100644 index 00000000..c7ace4f7 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/credential_input_source.py @@ -0,0 +1,128 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# Copyright: (c) 2020, Tom Page <tpage@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: credential_input_source +author: "Tom Page (@Tompage1994)" +version_added: "2.3.0" +short_description: create, update, or destroy Automation Platform Controller credential input sources. +description: + - Create, update, or destroy Automation Platform Controller credential input sources. See + U(https://www.ansible.com/tower) for an overview. +options: + description: + description: + - The description to use for the credential input source. + type: str + input_field_name: + description: + - The input field the credential source will be used for + required: True + type: str + metadata: + description: + - A JSON or YAML string + required: False + type: dict + target_credential: + description: + - The credential which will have its input defined by this source + required: true + type: str + source_credential: + description: + - The credential which is the source of the credential lookup + type: str + state: + description: + - Desired state of the resource. 
+ choices: ["present", "absent"] + default: "present" + type: str + +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Use CyberArk Lookup credential as password source + credential_input_source: + input_field_name: password + target_credential: new_cred + source_credential: cyberark_lookup + metadata: + object_query: "Safe=MY_SAFE;Object=awxuser" + object_query_format: "Exact" + state: present + +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + description=dict(), + input_field_name=dict(required=True), + target_credential=dict(required=True), + source_credential=dict(), + metadata=dict(type="dict"), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + description = module.params.get('description') + input_field_name = module.params.get('input_field_name') + target_credential = module.params.get('target_credential') + source_credential = module.params.get('source_credential') + metadata = module.params.get('metadata') + state = module.params.get('state') + + target_credential_id = module.resolve_name_to_id('credentials', target_credential) + + # Attempt to look up the object based on the target credential and input field + lookup_data = { + 'target_credential': target_credential_id, + 'input_field_name': input_field_name, + } + credential_input_source = module.get_one('credential_input_sources', **{'data': lookup_data}) + + if state == 'absent': + module.delete_if_needed(credential_input_source) + + # Create the data that gets sent for create and update + credential_input_source_fields = { + 'target_credential': target_credential_id, + 'input_field_name': input_field_name, + } + if source_credential: + credential_input_source_fields['source_credential'] = 
module.resolve_name_to_id('credentials', source_credential) + if metadata: + credential_input_source_fields['metadata'] = metadata + if description: + credential_input_source_fields['description'] = description + + # If the state was present we can let the module build or update the existing credential input source, this will return on its own + module.create_or_update_if_needed( + credential_input_source, credential_input_source_fields, endpoint='credential_input_sources', item_type='credential_input_source' + ) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/credential_type.py b/ansible_collections/awx/awx/plugins/modules/credential_type.py new file mode 100644 index 00000000..f6b56d0e --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/credential_type.py @@ -0,0 +1,140 @@ +#!/usr/bin/python +# coding: utf-8 -*- +# +# (c) 2018, Adrien Fleury <fleu42@gmail.com> +# GNU General Public License v3.0+ +# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'status': ['preview'], 'supported_by': 'community', 'metadata_version': '1.1'} + + +DOCUMENTATION = ''' +--- +module: credential_type +author: "Adrien Fleury (@fleu42)" +short_description: Create, update, or destroy custom Automation Platform Controller credential type. +description: + - Create, update, or destroy Automation Platform Controller credential type. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - The name of the credential type. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name (looked up via the name field). + type: str + description: + description: + - The description of the credential type to give more detail about it. + type: str + kind: + description: + - >- + The type of credential type being added.
Note that only cloud and + net can be used for creating credential types. Refer to the Ansible + documentation for more information. + choices: [ 'ssh', 'vault', 'net', 'scm', 'cloud', 'insights' ] + type: str + inputs: + description: + - >- + Enter inputs using either JSON or YAML syntax. Refer to the + Automation Platform Controller documentation for example syntax. + type: dict + injectors: + description: + - >- + Enter injectors using either JSON or YAML syntax. Refer to the + Automation Platform Controller documentation for example syntax. + type: dict + state: + description: + - Desired state of the resource. + default: "present" + choices: ["present", "absent"] + type: str +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- credential_type: + name: Nexus + description: Credentials type for Nexus + kind: cloud + inputs: "{{ lookup('file', 'credential_inputs_nexus.json') }}" + injectors: {'extra_vars': {'nexus_credential': 'test' }} + state: present + validate_certs: false + +- credential_type: + name: Nexus + state: absent +''' + + +RETURN = ''' # ''' + + +from ..module_utils.controller_api import ControllerAPIModule + +KIND_CHOICES = {'ssh': 'Machine', 'vault': 'Ansible Vault', 'net': 'Network', 'scm': 'Source Control', 'cloud': 'Lots of others', 'insights': 'Insights'} + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + description=dict(), + kind=dict(choices=list(KIND_CHOICES.keys())), + inputs=dict(type='dict'), + injectors=dict(type='dict'), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get("new_name") + kind = module.params.get('kind') + state = module.params.get('state') + + # These will be passed into the create/updates +
credential_type_params = { + 'managed': False, + } + if kind: + credential_type_params['kind'] = kind + if module.params.get('description'): + credential_type_params['description'] = module.params.get('description') + if module.params.get('inputs'): + credential_type_params['inputs'] = module.params.get('inputs') + if module.params.get('injectors'): + credential_type_params['injectors'] = module.params.get('injectors') + + # Attempt to look up credential_type based on the provided name + credential_type = module.get_one('credential_types', name_or_id=name) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(credential_type) + + credential_type_params['name'] = new_name if new_name else (module.get_item_name(credential_type) if credential_type else name) + + # If the state was present we can let the module build or update the existing credential type, this will return on its own + module.create_or_update_if_needed(credential_type, credential_type_params, endpoint='credential_types', item_type='credential type') + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/execution_environment.py b/ansible_collections/awx/awx/plugins/modules/execution_environment.py new file mode 100644 index 00000000..552c8057 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/execution_environment.py @@ -0,0 +1,130 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2020, Shane McDonald <shanemcd@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: execution_environment +author: "Shane McDonald (@shanemcd)" +short_description: create,
update, or destroy Execution Environments in Automation Platform Controller. +description: + - Create, update, or destroy Execution Environments in Automation Platform Controller. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - Name to use for the execution environment. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name (looked up via the name field). + type: str + image: + description: + - The fully qualified URL of the container image. + required: True + type: str + description: + description: + - Description to use for the execution environment. + type: str + organization: + description: + - The organization the execution environment belongs to. + type: str + credential: + description: + - Name of the credential to use for the execution environment. + type: str + state: + description: + - Desired state of the resource. + choices: ["present", "absent"] + default: "present" + type: str + pull: + description: + - Determine image pull behavior. + choices: ["always", "missing", "never"] + default: 'missing' + type: str +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Add EE to the controller instance + execution_environment: + name: "My EE" + image: quay.io/ansible/awx-ee +''' + + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + image=dict(required=True), + description=dict(), + organization=dict(), + credential=dict(), + state=dict(choices=['present', 'absent'], default='present'), + # NOTE: Default for pull differs from API (which is blank by default) + pull=dict(choices=['always', 'missing', 'never'], default='missing'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name
= module.params.get('name') + new_name = module.params.get("new_name") + image = module.params.get('image') + description = module.params.get('description') + state = module.params.get('state') + pull = module.params.get('pull') + + existing_item = module.get_one('execution_environments', name_or_id=name) + + if state == 'absent': + module.delete_if_needed(existing_item) + + new_fields = { + 'name': new_name if new_name else (module.get_item_name(existing_item) if existing_item else name), + 'image': image, + } + if description: + new_fields['description'] = description + + if pull: + new_fields['pull'] = pull + + # Attempt to look up the related items the user specified (these will fail the module if not found) + organization = module.params.get('organization') + if organization: + new_fields['organization'] = module.resolve_name_to_id('organizations', organization) + + credential = module.params.get('credential') + if credential: + new_fields['credential'] = module.resolve_name_to_id('credentials', credential) + + module.create_or_update_if_needed(existing_item, new_fields, endpoint='execution_environments', item_type='execution_environment') + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/export.py b/ansible_collections/awx/awx/plugins/modules/export.py new file mode 100644 index 00000000..d04e9960 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/export.py @@ -0,0 +1,197 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, John Westcott IV <john.westcott.iv@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: export +author: "John Westcott IV (@john-westcott-iv)" +version_added: "3.7.0" +short_description: 
export resources from Automation Platform Controller. +description: + - Export assets from Automation Platform Controller. +options: + all: + description: + - Export all assets + type: bool + default: 'False' + organizations: + description: + - organization names to export + type: list + elements: str + users: + description: + - user names to export + type: list + elements: str + teams: + description: + - team names to export + type: list + elements: str + credential_types: + description: + - credential type names to export + type: list + elements: str + credentials: + description: + - credential names to export + type: list + elements: str + execution_environments: + description: + - execution environment names to export + type: list + elements: str + notification_templates: + description: + - notification template names to export + type: list + elements: str + inventory_sources: + description: + - inventory sources to export + type: list + elements: str + inventory: + description: + - inventory names to export + type: list + elements: str + projects: + description: + - project names to export + type: list + elements: str + job_templates: + description: + - job template names to export + type: list + elements: str + workflow_job_templates: + description: + - workflow names to export + type: list + elements: str + applications: + description: + - OAuth2 application names to export + type: list + elements: str + schedules: + description: + - schedule names to export + type: list + elements: str +requirements: + - "awxkit >= 9.3.0" +notes: + - Specifying a name of "all" for any asset type will export all items of that asset type.
+extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Export all assets + export: + all: True + +- name: Export all inventories + export: + inventory: 'all' + +- name: Export a job template named "My Template" and all Credentials + export: + job_templates: "My Template" + credentials: 'all' + +- name: Export a list of inventories + export: + inventory: ['My Inventory 1', 'My Inventory 2'] +''' + +import logging +from ansible.module_utils.six.moves import StringIO +from ..module_utils.awxkit import ControllerAWXKitModule + +try: + from awxkit.api.pages.api import EXPORTABLE_RESOURCES + + HAS_EXPORTABLE_RESOURCES = True +except ImportError: + HAS_EXPORTABLE_RESOURCES = False + + +def main(): + argument_spec = dict( + all=dict(type='bool', default=False), + ) + + # We are not going to raise an error here because the __init__ method of ControllerAWXKitModule will do that for us + if HAS_EXPORTABLE_RESOURCES: + for resource in EXPORTABLE_RESOURCES: + argument_spec[resource] = dict(type='list', elements='str') + + module = ControllerAWXKitModule(argument_spec=argument_spec) + + if not HAS_EXPORTABLE_RESOURCES: + module.fail_json(msg="Your version of awxkit does not have import/export") + + # The export process will never change the AWX system + module.json_output['changed'] = False + + # The exporter code currently works like the following: + # Empty string == all assets of that type + # Non-Empty string = just one asset of that type (by name or ID) + # Asset type not present or None = skip asset type (unless everything is None, then export all) + # Here we are going to set up a dict of values to export + export_args = {} + for resource in EXPORTABLE_RESOURCES: + if module.params.get('all') or module.params.get(resource) == 'all': + # If we are exporting everything or we got the keyword "all" we pass in an empty string for this asset type + export_args[resource] = '' + else: + # Otherwise we take either the string or None (if the parameter was not
passed) to get one or no items + export_args[resource] = module.params.get(resource) + + # Currently the export process does not return anything on error + # It simply just logs to Python's logger + # Set up a log gobbler to get error messages from export_assets + log_capture_string = StringIO() + ch = logging.StreamHandler(log_capture_string) + for logger_name in ['awxkit.api.pages.api', 'awxkit.api.pages.page']: + logger = logging.getLogger(logger_name) + logger.setLevel(logging.ERROR) + ch.setLevel(logging.ERROR) + + logger.addHandler(ch) + log_contents = '' + + # Run the export process + try: + module.json_output['assets'] = module.get_api_v2_object().export_assets(**export_args) + module.exit_json(**module.json_output) + except Exception as e: + module.fail_json(msg="Failed to export assets {0}".format(e)) + finally: + # Finally, consume the logs in case there were any errors and die if there were + log_contents = log_capture_string.getvalue() + log_capture_string.close() + if log_contents != '': + module.fail_json(msg=log_contents) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/group.py b/ansible_collections/awx/awx/plugins/modules/group.py new file mode 100644 index 00000000..c91bf164 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/group.py @@ -0,0 +1,184 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: group +author: "Wayne Witzel III (@wwitzel3)" +short_description: create, update, or destroy Automation Platform Controller group. +description: + - Create, update, or destroy Automation Platform Controller groups. 
See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - The name to use for the group. + required: True + type: str + description: + description: + - The description to use for the group. + type: str + inventory: + description: + - Inventory the group should be made a member of. + required: True + type: str + variables: + description: + - Variables to use for the group. + type: dict + hosts: + description: + - List of hosts that should be put in this group. + type: list + elements: str + children: + description: + - List of groups that should be nested inside this group. + type: list + elements: str + aliases: + - groups + preserve_existing_hosts: + description: + - Provide option (False by default) to preserve existing hosts in an existing group. + default: False + type: bool + preserve_existing_children: + description: + - Provide option (False by default) to preserve existing children in an existing group. + default: False + type: bool + aliases: + - preserve_existing_groups + state: + description: + - Desired state of the resource.
+ default: "present" + choices: ["present", "absent"] + type: str + new_name: + description: + - A new name for this group (for renaming) + type: str +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Add group + group: + name: localhost + description: "Local Host Group" + inventory: "Local Inventory" + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Add group + group: + name: Cities + description: "Local Host Group" + inventory: Default Inventory + hosts: + - fda + children: + - NewYork + preserve_existing_hosts: True + preserve_existing_children: True +''' + +from ..module_utils.controller_api import ControllerAPIModule +import json + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + description=dict(), + inventory=dict(required=True), + variables=dict(type='dict'), + hosts=dict(type='list', elements='str'), + children=dict(type='list', elements='str', aliases=['groups']), + preserve_existing_hosts=dict(type='bool', default=False), + preserve_existing_children=dict(type='bool', default=False, aliases=['preserve_existing_groups']), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get('new_name') + inventory = module.params.get('inventory') + description = module.params.get('description') + state = module.params.pop('state') + preserve_existing_hosts = module.params.get('preserve_existing_hosts') + preserve_existing_children = module.params.get('preserve_existing_children') + variables = module.params.get('variables') + + # Attempt to look up the related items the user specified (these will fail the module if not found) + inventory_id = module.resolve_name_to_id('inventories', inventory) + + # 
Attempt to look up the object based on the provided name and inventory ID + group = module.get_one('groups', name_or_id=name, **{'data': {'inventory': inventory_id}}) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(group) + + # Create the data that gets sent for create and update + group_fields = { + 'name': new_name if new_name else (module.get_item_name(group) if group else name), + 'inventory': inventory_id, + } + if description is not None: + group_fields['description'] = description + if variables is not None: + group_fields['variables'] = json.dumps(variables) + + association_fields = {} + for resource, relationship in (('hosts', 'hosts'), ('groups', 'children')): + name_list = module.params.get(relationship) + if name_list is None: + continue + id_list = [] + for sub_name in name_list: + sub_obj = module.get_one( + resource, + name_or_id=sub_name, + **{ + 'data': {'inventory': inventory_id}, + } + ) + if sub_obj is None: + module.fail_json(msg='Could not find {0} with name {1}'.format(resource, sub_name)) + id_list.append(sub_obj['id']) + # Preserve existing objects + if (preserve_existing_hosts and relationship == 'hosts') or (preserve_existing_children and relationship == 'children'): + preserve_existing_check = module.get_all_endpoint(group['related'][relationship]) + for sub_obj in preserve_existing_check['json']['results']: + id_list.append(sub_obj['id']) + if id_list: + association_fields[relationship] = id_list + + # If the state was present we can let the module build or update the existing group, this will return on its own + module.create_or_update_if_needed(group, group_fields, endpoint='groups', item_type='group', associations=association_fields) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/host.py b/ansible_collections/awx/awx/plugins/modules/host.py new file mode 100644 index 
00000000..21d063f3 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/host.py @@ -0,0 +1,127 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: host +author: "Wayne Witzel III (@wwitzel3)" +short_description: create, update, or destroy Automation Platform Controller host. +description: + - Create, update, or destroy Automation Platform Controller hosts. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - The name to use for the host. + required: True + type: str + new_name: + description: + - To use when changing a host's name. + type: str + description: + description: + - The description to use for the host. + type: str + inventory: + description: + - Inventory the host should be made a member of. + required: True + type: str + enabled: + description: + - If the host should be enabled. + type: bool + variables: + description: + - Variables to use for the host. + type: dict + state: + description: + - Desired state of the resource.
+ choices: ["present", "absent"] + default: "present" + type: str +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Add host + host: + name: localhost + description: "Local Host Group" + inventory: "Local Inventory" + state: present + controller_config_file: "~/tower_cli.cfg" + variables: + example_var: 123 +''' + + +from ..module_utils.controller_api import ControllerAPIModule +import json + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + description=dict(), + inventory=dict(required=True), + enabled=dict(type='bool'), + variables=dict(type='dict'), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get('new_name') + description = module.params.get('description') + inventory = module.params.get('inventory') + enabled = module.params.get('enabled') + state = module.params.get('state') + variables = module.params.get('variables') + + # Attempt to look up the related items the user specified (these will fail the module if not found) + inventory_id = module.resolve_name_to_id('inventories', inventory) + + # Attempt to look up host based on the provided name and inventory ID + host = module.get_one('hosts', name_or_id=name, **{'data': {'inventory': inventory_id}}) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(host) + + # Create the data that gets sent for create and update + host_fields = { + 'name': new_name if new_name else (module.get_item_name(host) if host else name), + 'inventory': inventory_id, + 'enabled': enabled, + } + if description is not None: + host_fields['description'] = description + if 
variables is not None: + host_fields['variables'] = json.dumps(variables) + + # If the state was present we can let the module build or update the existing host, this will return on its own + module.create_or_update_if_needed(host, host_fields, endpoint='hosts', item_type='host') + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/import.py b/ansible_collections/awx/awx/plugins/modules/import.py new file mode 100644 index 00000000..fe66b2a7 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/import.py @@ -0,0 +1,105 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, John Westcott IV <john.westcott.iv@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: import +author: "John Westcott (@john-westcott-iv)" +version_added: "3.7.0" +short_description: import resources into Automation Platform Controller. +description: + - Import assets into Automation Platform Controller. See + U(https://www.ansible.com/tower) for an overview. +options: + assets: + description: + - The assets to import.
+ - This can be the output of the export module or loaded from a file + required: True + type: dict +requirements: + - "awxkit >= 9.3.0" +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Export all assets + export: + all: True + register: export_output + +- name: Import all assets from our export + import: + assets: "{{ export_output.assets }}" + +- name: Load data from a json file created by a command like awx export --organization Default + import: + assets: "{{ lookup('file', 'org.json') | from_json() }}" +''' + +from ..module_utils.awxkit import ControllerAWXKitModule + +# These two lines are not needed if awxkit changes to do programmatic notifications on issues +from ansible.module_utils.six.moves import StringIO +import logging + +# In this module we don't use EXPORTABLE_RESOURCES, we just want to validate that our installed awxkit has import/export +try: + from awxkit.api.pages.api import EXPORTABLE_RESOURCES # noqa + + HAS_EXPORTABLE_RESOURCES = True +except ImportError: + HAS_EXPORTABLE_RESOURCES = False + + +def main(): + argument_spec = dict(assets=dict(type='dict', required=True)) + + module = ControllerAWXKitModule(argument_spec=argument_spec, supports_check_mode=False) + + assets = module.params.get('assets') + + if not HAS_EXPORTABLE_RESOURCES: + module.fail_json(msg="Your version of awxkit does not appear to have import/export") + + # Currently the import process does not return anything on error + # It simply just logs to Python's logger + # Set up a log gobbler to get error messages from import_assets + logger = logging.getLogger('awxkit.api.pages.api') + logger.setLevel(logging.ERROR) + + log_capture_string = StringIO() + ch = logging.StreamHandler(log_capture_string) + ch.setLevel(logging.ERROR) + + logger.addHandler(ch) + log_contents = '' + + # Run the import process + try: + module.json_output['changed'] = module.get_api_v2_object().import_assets(assets) + except Exception as e: + module.fail_json(msg="Failed to
import assets {0}".format(e)) + finally: + # Finally, consume the logs in case there were any errors and die if there were + log_contents = log_capture_string.getvalue() + log_capture_string.close() + if log_contents != '': + module.fail_json(msg=log_contents) + + module.exit_json(**module.json_output) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/instance.py b/ansible_collections/awx/awx/plugins/modules/instance.py new file mode 100644 index 00000000..049dd976 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/instance.py @@ -0,0 +1,135 @@ +#!/usr/bin/python +# coding: utf-8 -*- + + +# (c) 2022 Red Hat, Inc. +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + +DOCUMENTATION = ''' +--- +module: instance +author: "Rick Elrod (@relrod)" +version_added: "4.3.0" +short_description: create, update, or destroy Automation Platform Controller instances. +description: + - Create, update, or destroy Automation Platform Controller instances. See + U(https://www.ansible.com/tower) for an overview. +options: + hostname: + description: + - Hostname of this instance. + required: True + type: str + capacity_adjustment: + description: + - Capacity adjustment (0 <= capacity_adjustment <= 1) + required: False + type: float + enabled: + description: + - If true, the instance will be enabled and used. + required: False + type: bool + managed_by_policy: + description: + - Managed by policy + required: False + type: bool + node_type: + description: + - Role that this node plays in the mesh. + choices: + - execution + required: False + type: str + node_state: + description: + - Indicates the current life cycle stage of this instance. 
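The import module above surfaces awxkit's errors by attaching a StreamHandler backed by a StringIO buffer to the `awxkit.api.pages.api` logger, then failing the module if anything was captured. A minimal standalone sketch of that log-gobbler pattern (the `demo.importer` logger name and `worker` function are hypothetical stand-ins, not part of the collection):

```python
import logging
from io import StringIO

def run_and_capture_errors(func, logger_name):
    """Run func() and return any ERROR-level text the named logger emitted."""
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.ERROR)
    buf = StringIO()
    handler = logging.StreamHandler(buf)
    handler.setLevel(logging.ERROR)
    logger.addHandler(handler)
    try:
        func()
    finally:
        logger.removeHandler(handler)  # avoid stacking handlers across calls
    return buf.getvalue()

# Hypothetical worker that logs failures instead of raising, the way
# awxkit's import_assets does.
def worker():
    logging.getLogger('demo.importer').error('asset 3 failed validation')

errors = run_and_capture_errors(worker, 'demo.importer')
```

The sketch removes the handler when done; the real module instead reads and closes the buffer in its `finally` clause and calls `fail_json` when the captured text is non-empty.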
+ choices: + - deprovisioning + - installed + required: False + type: str + listener_port: + description: + - Port that Receptor will listen for incoming connections on. + required: False + type: int +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Create an instance + awx.awx.instance: + hostname: my-instance.prod.example.com + capacity_adjustment: 0.4 + listener_port: 31337 + +- name: Deprovision the instance + awx.awx.instance: + hostname: my-instance.prod.example.com + node_state: deprovisioning +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + hostname=dict(required=True), + capacity_adjustment=dict(type='float'), + enabled=dict(type='bool'), + managed_by_policy=dict(type='bool'), + node_type=dict(type='str', choices=['execution']), + node_state=dict(type='str', choices=['deprovisioning', 'installed']), + listener_port=dict(type='int'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + hostname = module.params.get('hostname') + capacity_adjustment = module.params.get('capacity_adjustment') + enabled = module.params.get('enabled') + managed_by_policy = module.params.get('managed_by_policy') + node_type = module.params.get('node_type') + node_state = module.params.get('node_state') + listener_port = module.params.get('listener_port') + + # Attempt to look up an existing item based on the provided data + existing_item = module.get_one('instances', name_or_id=hostname) + + # Create the data that gets sent for create and update + new_fields = {'hostname': hostname} + if capacity_adjustment is not None: + new_fields['capacity_adjustment'] = capacity_adjustment + if enabled is not None: + new_fields['enabled'] = enabled + if managed_by_policy is not None: + new_fields['managed_by_policy'] = managed_by_policy + if 
node_type is not None: + new_fields['node_type'] = node_type + if node_state is not None: + new_fields['node_state'] = node_state + if listener_port is not None: + new_fields['listener_port'] = listener_port + + module.create_or_update_if_needed( + existing_item, + new_fields, + endpoint='instances', + item_type='instance', + ) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/instance_group.py b/ansible_collections/awx/awx/plugins/modules/instance_group.py new file mode 100644 index 00000000..dc993f8b --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/instance_group.py @@ -0,0 +1,180 @@ +#!/usr/bin/python +# coding: utf-8 -*- + + +# (c) 2020, John Westcott IV <john.westcott.iv@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + +DOCUMENTATION = ''' +--- +module: instance_group +author: "John Westcott IV (@john-westcott-iv)" +version_added: "4.0.0" +short_description: create, update, or destroy Automation Platform Controller instance groups. +description: + - Create, update, or destroy Automation Platform Controller instance groups. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - Name of this instance group. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name (looked up via the name field). + type: str + credential: + description: + - Credential to authenticate with Kubernetes or OpenShift. Must be of type "OpenShift or Kubernetes API Bearer Token". + required: False + type: str + is_container_group: + description: + - Signifies that this InstanceGroup should act as a ContainerGroup. 
If no credential is specified, the underlying Pod's ServiceAccount will be used. + required: False + type: bool + policy_instance_percentage: + description: + - Minimum percentage of all instances that will be automatically assigned to this group when new instances come online. + required: False + type: int + policy_instance_minimum: + description: + - Static minimum number of Instances that will be automatically assigned to this group when new instances come online. + required: False + type: int + max_concurrent_jobs: + description: + - Maximum number of concurrent jobs to run on this group. Zero means no limit. + required: False + type: int + max_forks: + description: + - Max forks to execute on this group. Zero means no limit. + required: False + type: int + policy_instance_list: + description: + - List of exact-match Instances that will be assigned to this group. + required: False + type: list + elements: str + pod_spec_override: + description: + - A custom Kubernetes or OpenShift Pod specification. + required: False + type: str + instances: + description: + - The instances associated with this instance_group. + required: False + type: list + elements: str + state: + description: + - Desired state of the resource. 
+ choices: ["present", "absent"] + default: "present" + type: str +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + credential=dict(), + is_container_group=dict(type='bool'), + policy_instance_percentage=dict(type='int'), + policy_instance_minimum=dict(type='int'), + max_concurrent_jobs=dict(type='int'), + max_forks=dict(type='int'), + policy_instance_list=dict(type='list', elements='str'), + pod_spec_override=dict(), + instances=dict(required=False, type="list", elements='str'), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get("new_name") + credential = module.params.get('credential') + is_container_group = module.params.get('is_container_group') + policy_instance_percentage = module.params.get('policy_instance_percentage') + policy_instance_minimum = module.params.get('policy_instance_minimum') + max_concurrent_jobs = module.params.get('max_concurrent_jobs') + max_forks = module.params.get('max_forks') + policy_instance_list = module.params.get('policy_instance_list') + pod_spec_override = module.params.get('pod_spec_override') + instances = module.params.get('instances') + state = module.params.get('state') + + # Attempt to look up an existing item based on the provided data + existing_item = module.get_one('instance_groups', name_or_id=name) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(existing_item) + + # Attempt to look up the related items the user specified (these will fail the 
module if not found) + credential_id = None + if credential: + credential_id = module.resolve_name_to_id('credentials', credential) + instances_ids = None + if instances is not None: + instances_ids = [] + for item in instances: + instances_ids.append(module.resolve_name_to_id('instances', item)) + + # Create the data that gets sent for create and update + new_fields = {} + new_fields['name'] = new_name if new_name else (module.get_item_name(existing_item) if existing_item else name) + if credential is not None: + new_fields['credential'] = credential_id + if is_container_group is not None: + new_fields['is_container_group'] = is_container_group + if policy_instance_percentage is not None: + new_fields['policy_instance_percentage'] = policy_instance_percentage + if policy_instance_minimum is not None: + new_fields['policy_instance_minimum'] = policy_instance_minimum + if max_concurrent_jobs is not None: + new_fields['max_concurrent_jobs'] = max_concurrent_jobs + if max_forks is not None: + new_fields['max_forks'] = max_forks + if policy_instance_list is not None: + new_fields['policy_instance_list'] = policy_instance_list + if pod_spec_override is not None: + new_fields['pod_spec_override'] = pod_spec_override + + # If the state was present and we can let the module build or update the existing item, this will return on its own + module.create_or_update_if_needed( + existing_item, + new_fields, + endpoint='instance_groups', + item_type='instance_group', + associations={ + 'instances': instances_ids, + }, + ) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/inventory.py b/ansible_collections/awx/awx/plugins/modules/inventory.py new file mode 100644 index 00000000..8e739b22 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/inventory.py @@ -0,0 +1,195 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or 
https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: inventory +author: "Wayne Witzel III (@wwitzel3)" +short_description: create, update, or destroy Automation Platform Controller inventory. +description: + - Create, update, or destroy Automation Platform Controller inventories. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - The name to use for the inventory. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name (looked up via the name field). + type: str + copy_from: + description: + - Name or id to copy the inventory from. + - This will copy an existing inventory and change any parameters supplied. + - The new inventory name will be the one provided in the name parameter. + - The organization parameter is not used in this, to facilitate copy from one organization to another. + - Provide the id or use the lookup plugin to provide the id if multiple inventories share the same name. + type: str + description: + description: + - The description to use for the inventory. + type: str + organization: + description: + - Organization the inventory belongs to. + required: True + type: str + variables: + description: + - Inventory variables. + type: dict + kind: + description: + - The kind field. Cannot be modified after creation. + choices: ["", "smart"] + type: str + host_filter: + description: + - The host_filter field. Only useful when C(kind=smart). + type: str + instance_groups: + description: + - List of instance groups for this inventory to run on. 
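These modules all assemble their create/update payload the same way: start from the required fields, then layer in only the parameters the user actually set (checked with `is not None`), JSON-encoding dict-valued fields such as `variables` before sending. A small standalone sketch of that pattern (`build_fields` and the sample data are illustrative names, not helpers from the collection):

```python
import json

def build_fields(params, passthrough, json_fields=()):
    """Collect only the parameters the user actually set, JSON-encoding
    any dict-valued fields, mirroring how these modules build payloads."""
    fields = {}
    for key in passthrough:
        value = params.get(key)
        if value is not None:  # unset options are omitted, not sent as null
            fields[key] = json.dumps(value) if key in json_fields else value
    return fields

params = {'description': 'Our Foo Cloud Servers',
          'variables': {'env': 'prod'},
          'host_filter': None}
fields = build_fields(params,
                      ('description', 'variables', 'host_filter'),
                      json_fields=('variables',))
```

Omitting unset keys matters: sending an explicit `null` would overwrite a value already stored on the controller, while leaving the key out keeps it untouched.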
+ type: list + elements: str + prevent_instance_group_fallback: + description: + - Prevent falling back to instance groups set on the organization + type: bool + state: + description: + - Desired state of the resource. + default: "present" + choices: ["present", "absent"] + type: str +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Add inventory + inventory: + name: "Foo Inventory" + description: "Our Foo Cloud Servers" + organization: "Bar Org" + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Copy inventory + inventory: + name: Copy Foo Inventory + copy_from: Default Inventory + description: "Our Foo Cloud Servers" + organization: Foo + state: present +''' + + +from ..module_utils.controller_api import ControllerAPIModule +import json + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + copy_from=dict(), + description=dict(), + organization=dict(required=True), + variables=dict(type='dict'), + kind=dict(choices=['', 'smart']), + host_filter=dict(), + instance_groups=dict(type="list", elements='str'), + prevent_instance_group_fallback=dict(type='bool'), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get("new_name") + copy_from = module.params.get('copy_from') + description = module.params.get('description') + organization = module.params.get('organization') + variables = module.params.get('variables') + state = module.params.get('state') + kind = module.params.get('kind') + host_filter = module.params.get('host_filter') + prevent_instance_group_fallback = module.params.get('prevent_instance_group_fallback') + + # Attempt to look up the related items the user specified (these will fail the 
module if not found) + org_id = module.resolve_name_to_id('organizations', organization) + + # Attempt to look up inventory based on the provided name and org ID + inventory = module.get_one('inventories', name_or_id=name, **{'data': {'organization': org_id}}) + + # Attempt to look up the inventory to copy based on the provided name + if copy_from: + # a new item is formed when copying and is returned. + inventory = module.copy_item( + inventory, + copy_from, + name, + endpoint='inventories', + item_type='inventory', + copy_lookup_data={}, + ) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(inventory) + + # Create the data that gets sent for create and update + inventory_fields = { + 'name': new_name if new_name else (module.get_item_name(inventory) if inventory else name), + 'organization': org_id, + 'kind': kind, + 'host_filter': host_filter, + } + if prevent_instance_group_fallback is not None: + inventory_fields['prevent_instance_group_fallback'] = prevent_instance_group_fallback + if description is not None: + inventory_fields['description'] = description + if variables is not None: + inventory_fields['variables'] = json.dumps(variables) + + association_fields = {} + + instance_group_names = module.params.get('instance_groups') + if instance_group_names is not None: + association_fields['instance_groups'] = [] + for item in instance_group_names: + association_fields['instance_groups'].append(module.resolve_name_to_id('instance_groups', item)) + + # We need to perform a check to make sure you are not trying to convert a regular inventory into a smart one. 
+ if inventory and inventory['kind'] == '' and inventory_fields['kind'] == 'smart': + module.fail_json(msg='You cannot turn a regular inventory into a "smart" inventory.') + + # If the state was present and we can let the module build or update the existing inventory, this will return on its own + module.create_or_update_if_needed( + inventory, + inventory_fields, + endpoint='inventories', + item_type='inventory', + associations=association_fields, + ) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/inventory_source.py b/ansible_collections/awx/awx/plugins/modules/inventory_source.py new file mode 100644 index 00000000..f3000ee9 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/inventory_source.py @@ -0,0 +1,298 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# Copyright: (c) 2018, Adrien Fleury <fleu42@gmail.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: inventory_source +author: "Adrien Fleury (@fleu42)" +short_description: create, update, or destroy Automation Platform Controller inventory source. +description: + - Create, update, or destroy Automation Platform Controller inventory sources. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - The name to use for the inventory source. + required: True + type: str + new_name: + description: + - A new name for this asset (will rename the asset). + type: str + description: + description: + - The description to use for the inventory source. + type: str + inventory: + description: + - Inventory this source should be made a member of. + required: True + type: str + source: + description: + - The source to use for this inventory source. 
+ choices: [ "scm", "ec2", "gce", "azure_rm", "vmware", "satellite6", "openstack", "rhv", "controller", "insights" ] + type: str + source_path: + description: + - For an SCM based inventory source, the source path points to the file within the repo to use as an inventory. + type: str + source_vars: + description: + - The variables or environment fields to apply to this source type. + type: dict + enabled_var: + description: + - The variable to use to determine enabled state e.g., "status.power_state" + type: str + enabled_value: + description: + - Value when the host is considered enabled, e.g., "powered_on" + type: str + host_filter: + description: + - If specified, AWX will only import hosts that match this regular expression. + type: str + credential: + description: + - Credential to use for the source. + type: str + execution_environment: + description: + - Execution Environment to use for the source. + type: str + custom_virtualenv: + description: + - Local absolute file path containing a custom Python virtualenv to use. + - Only compatible with older versions of AWX/Controller + - Deprecated, will be removed in the future + type: str + overwrite: + description: + - Delete child groups and hosts not found in source. + type: bool + overwrite_vars: + description: + - Override vars in child groups and hosts with those from external source. + type: bool + timeout: + description: The amount of time (in seconds) to run before the task is canceled. + type: int + verbosity: + description: The verbosity level to run this inventory source under. + type: int + choices: [ 0, 1, 2 ] + update_on_launch: + description: + - Refresh inventory data from its source each time a job is run. + type: bool + update_cache_timeout: + description: + - Time in seconds to consider an inventory sync to be current. + type: int + source_project: + description: + - Project to use as source with scm option + type: str + state: + description: + - Desired state of the resource. 
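The notification template options above are association lists: the module resolves each name to an id via `resolve_name_to_id` before posting, and a `None` value means leave the existing association alone, while an empty list clears it. A hedged sketch of that resolution step, with a plain dictionary standing in for the controller lookup (`resolve_association` and `registry` are illustrative names, not part of the collection):

```python
def resolve_association(resolver, names):
    """Map a list of resource names to ids for an association field.

    Preserves the three-way distinction the modules rely on:
    None -> leave associations untouched, [] -> clear them,
    a list of names -> the resolved ids."""
    if names is None:
        return None
    return [resolver(name) for name in names]

# Stand-in for module.resolve_name_to_id('notification_templates', ...),
# which would fail the module on an unknown name.
registry = {'notify-on-start': 1, 'notify-on-error': 7}
ids = resolve_association(registry.__getitem__, ['notify-on-start', 'notify-on-error'])
```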
+ default: "present" + choices: ["present", "absent"] + type: str + notification_templates_started: + description: + - list of notifications to send on start + type: list + elements: str + notification_templates_success: + description: + - list of notifications to send on success + type: list + elements: str + notification_templates_error: + description: + - list of notifications to send on error + type: list + elements: str + organization: + description: + - Name of the inventory source's inventory's organization. + type: str +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Add an inventory source + inventory_source: + name: "source-inventory" + description: Source for inventory + inventory: previously-created-inventory + credential: previously-created-credential + overwrite: True + update_on_launch: True + organization: Default + source_vars: + private: false +''' + +from ..module_utils.controller_api import ControllerAPIModule +from json import dumps + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + description=dict(), + inventory=dict(required=True), + # + # How do we handle manual and file? 
The controller does not seem to be able to activate them + # + source=dict(choices=["scm", "ec2", "gce", "azure_rm", "vmware", "satellite6", "openstack", "rhv", "controller", "insights"]), + source_path=dict(), + source_vars=dict(type='dict'), + enabled_var=dict(), + enabled_value=dict(), + host_filter=dict(), + credential=dict(), + execution_environment=dict(), + custom_virtualenv=dict(), + organization=dict(), + overwrite=dict(type='bool'), + overwrite_vars=dict(type='bool'), + timeout=dict(type='int'), + verbosity=dict(type='int', choices=[0, 1, 2]), + update_on_launch=dict(type='bool'), + update_cache_timeout=dict(type='int'), + source_project=dict(), + notification_templates_started=dict(type="list", elements='str'), + notification_templates_success=dict(type="list", elements='str'), + notification_templates_error=dict(type="list", elements='str'), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get('new_name') + inventory = module.params.get('inventory') + organization = module.params.get('organization') + credential = module.params.get('credential') + ee = module.params.get('execution_environment') + source_project = module.params.get('source_project') + state = module.params.get('state') + + lookup_data = {} + if organization: + lookup_data['organization'] = module.resolve_name_to_id('organizations', organization) + inventory_object = module.get_one('inventories', name_or_id=inventory, data=lookup_data) + + if not inventory_object: + module.fail_json(msg='The specified inventory, {0}, was not found.'.format(lookup_data)) + + inventory_source_object = module.get_one( + 'inventory_sources', + name_or_id=name, + **{ + 'data': { + 'inventory': inventory_object['id'], + } + } + ) + + if state == 'absent': + # If the state was absent we can let the module 
delete it if needed, the module will handle exiting from this + module.delete_if_needed(inventory_source_object) + + # Attempt to look up associated field items the user specified. + association_fields = {} + + notifications_start = module.params.get('notification_templates_started') + if notifications_start is not None: + association_fields['notification_templates_started'] = [] + for item in notifications_start: + association_fields['notification_templates_started'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_success = module.params.get('notification_templates_success') + if notifications_success is not None: + association_fields['notification_templates_success'] = [] + for item in notifications_success: + association_fields['notification_templates_success'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_error = module.params.get('notification_templates_error') + if notifications_error is not None: + association_fields['notification_templates_error'] = [] + for item in notifications_error: + association_fields['notification_templates_error'].append(module.resolve_name_to_id('notification_templates', item)) + + # Create the data that gets sent for create and update + inventory_source_fields = { + 'name': new_name if new_name else name, + 'inventory': inventory_object['id'], + } + + # Attempt to look up the related items the user specified (these will fail the module if not found) + if credential is not None: + inventory_source_fields['credential'] = module.resolve_name_to_id('credentials', credential) + if ee is not None: + inventory_source_fields['execution_environment'] = module.resolve_name_to_id('execution_environments', ee) + if source_project is not None: + source_project_object = module.get_one('projects', name_or_id=source_project, data=lookup_data) + if not source_project_object: + module.fail_json(msg='The specified source project, {0}, was not found.'.format(lookup_data)) + 
inventory_source_fields['source_project'] = source_project_object['id'] + + OPTIONAL_VARS = ( + 'description', + 'source', + 'source_path', + 'source_vars', + 'overwrite', + 'overwrite_vars', + 'custom_virtualenv', + 'timeout', + 'verbosity', + 'update_on_launch', + 'update_cache_timeout', + 'enabled_var', + 'enabled_value', + 'host_filter', + ) + + # Layer in all remaining optional information + for field_name in OPTIONAL_VARS: + field_val = module.params.get(field_name) + if field_val is not None: + inventory_source_fields[field_name] = field_val + + # Attempt to JSON encode source vars + if inventory_source_fields.get('source_vars', None): + inventory_source_fields['source_vars'] = dumps(inventory_source_fields['source_vars']) + + # Sanity check on arguments + if state == 'present' and not inventory_source_object and not inventory_source_fields['source']: + module.fail_json(msg="If creating a new inventory source, the source param must be present") + + # If the state was present we can let the module build or update the existing inventory_source_object, this will return on its own + module.create_or_update_if_needed( + inventory_source_object, inventory_source_fields, endpoint='inventory_sources', item_type='inventory source', associations=association_fields + ) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/inventory_source_update.py b/ansible_collections/awx/awx/plugins/modules/inventory_source_update.py new file mode 100644 index 00000000..5bd6cdfe --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/inventory_source_update.py @@ -0,0 +1,146 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2020, Bianca Henderson <bianca@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 
'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: inventory_source_update +author: "Bianca Henderson (@beeankha)" +short_description: Update inventory source(s). +description: + - Update Automation Platform Controller inventory source(s). See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - The name or id of the inventory source to update. + required: True + type: str + aliases: + - inventory_source + inventory: + description: + - Name or id of the inventory that contains the inventory source(s) to update. + required: True + type: str + organization: + description: + - Name of the inventory source's inventory's organization. + type: str + wait: + description: + - Wait for the job to complete. + default: False + type: bool + interval: + description: + - The interval to request an update from the controller. + required: False + default: 2 + type: float + timeout: + description: + - If waiting for the job to complete this will abort after this + amount of seconds + type: int +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Update a single inventory source + inventory_source_update: + name: "Example Inventory Source" + inventory: "My Inventory" + organization: Default + +- name: Update all inventory sources + inventory_source_update: + name: "{{ item }}" + inventory: "My Other Inventory" + loop: "{{ query('awx.awx.controller_api', 'inventory_sources', query_params={ 'inventory': 30 }, return_ids=True ) }}" +''' + +RETURN = ''' +id: + description: id of the inventory update + returned: success + type: int + sample: 86 +status: + description: status of the inventory update + returned: success + type: str + sample: pending +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True, aliases=['inventory_source']), + 
inventory=dict(required=True), + organization=dict(), + wait=dict(default=False, type='bool'), + interval=dict(default=2.0, type='float'), + timeout=dict(type='int'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + inventory = module.params.get('inventory') + organization = module.params.get('organization') + wait = module.params.get('wait') + interval = module.params.get('interval') + timeout = module.params.get('timeout') + + lookup_data = {} + if organization: + lookup_data['organization'] = module.resolve_name_to_id('organizations', organization) + inventory_object = module.get_one('inventories', name_or_id=inventory, data=lookup_data) + + if not inventory_object: + module.fail_json(msg='The specified inventory, {0}, was not found.'.format(lookup_data)) + + inventory_source_object = module.get_one('inventory_sources', name_or_id=name, data={'inventory': inventory_object['id']}) + + if not inventory_source_object: + module.fail_json(msg='The specified inventory source was not found.') + + # Sync the inventory source(s) + inventory_source_update_results = module.post_endpoint(inventory_source_object['related']['update']) + + if inventory_source_update_results['status_code'] != 202: + module.fail_json(msg="Failed to update inventory source, see response for details", response=inventory_source_update_results) + + module.json_output['changed'] = True + module.json_output['id'] = inventory_source_update_results['json']['id'] + module.json_output['status'] = inventory_source_update_results['json']['status'] + + if not wait: + module.exit_json(**module.json_output) + + # Invoke wait function + module.wait_on_url( + url=inventory_source_update_results['json']['url'], object_name=inventory_object, object_type='inventory_update', timeout=timeout, interval=interval + ) + + module.exit_json(**module.json_output) + + +if __name__ == '__main__': + main() diff 
--git a/ansible_collections/awx/awx/plugins/modules/job_cancel.py b/ansible_collections/awx/awx/plugins/modules/job_cancel.py new file mode 100644 index 00000000..a987b7be --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/job_cancel.py @@ -0,0 +1,101 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: job_cancel +author: "Wayne Witzel III (@wwitzel3)" +short_description: Cancel an Automation Platform Controller Job. +description: + - Cancel Automation Platform Controller jobs. See + U(https://www.ansible.com/tower) for an overview. +options: + job_id: + description: + - ID of the job to cancel + required: True + type: int + fail_if_not_running: + description: + - Fail loudly if the I(job_id) can not be canceled + default: False + type: bool +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Cancel job + job_cancel: + job_id: job.id +''' + +RETURN = ''' +id: + description: job id requesting to cancel + returned: success + type: int + sample: 94 +''' + + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + job_id=dict(type='int', required=True), + fail_if_not_running=dict(type='bool', default=False), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + job_id = module.params.get('job_id') + fail_if_not_running = module.params.get('fail_if_not_running') + + # Attempt to look up the job based on the provided name + job = module.get_one( + 'jobs', + **{ 
+ 'data': { + 'id': job_id, + } + } + ) + + if job is None: + module.fail_json(msg="Unable to find job with id {0}".format(job_id)) + + cancel_page = module.get_endpoint(job['related']['cancel']) + if 'json' not in cancel_page or 'can_cancel' not in cancel_page['json']: + module.fail_json(msg="Failed to cancel job, got unexpected response from the controller", **{'response': cancel_page}) + + if not cancel_page['json']['can_cancel']: + if fail_if_not_running: + module.fail_json(msg="Job is not running") + else: + module.exit_json(**{'changed': False}) + + results = module.post_endpoint(job['related']['cancel'], **{'data': {}}) + + if results['status_code'] != 202: + module.fail_json(msg="Failed to cancel job, see response for details", **{'response': results}) + + module.exit_json(**{'changed': True}) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/job_launch.py b/ansible_collections/awx/awx/plugins/modules/job_launch.py new file mode 100644 index 00000000..9a76f3a8 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/job_launch.py @@ -0,0 +1,337 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: job_launch +author: "Wayne Witzel III (@wwitzel3)" +short_description: Launch an Ansible Job. +description: + - Launch an Automation Platform Controller job. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - Name of the job template to use. + required: True + type: str + aliases: ['job_template'] + job_type: + description: + - Job type to use for the job, only used if prompt for job_type is set.
+ choices: ["run", "check"] + type: str + inventory: + description: + - Inventory to use for the job, only used if prompt for inventory is set. + type: str + organization: + description: + - Organization the job template exists in. + - Used to help lookup the object, cannot be modified using this module. + - If not provided, will lookup by name only, which does not work with duplicates. + type: str + credentials: + description: + - Credential to use for job, only used if prompt for credential is set. + type: list + aliases: ['credential'] + elements: str + extra_vars: + description: + - extra_vars to use for the Job Template. + - ask_extra_vars needs to be set to True via job_template module + when creating the Job Template. + type: dict + limit: + description: + - Limit to use for the I(job_template). + type: str + tags: + description: + - Specific tags to use from the playbook. + type: list + elements: str + scm_branch: + description: + - A specific branch of the SCM project to run the template on. + - This is only applicable if your project allows for branch override. + type: str + skip_tags: + description: + - Specific tags to skip from the playbook. + type: list + elements: str + verbosity: + description: + - Verbosity level for this job run + type: int + choices: [ 0, 1, 2, 3, 4, 5 ] + diff_mode: + description: + - Show the changes made by Ansible tasks where supported + type: bool + credential_passwords: + description: + - Passwords for credentials which are set to prompt on launch + type: dict + execution_environment: + description: + - Execution environment to use for the job, only used if prompt for execution environment is set. + type: str + forks: + description: + - Forks to use for the job, only used if prompt for forks is set. + type: int + instance_groups: + description: + - Instance groups to use for the job, only used if prompt for instance groups is set.
+ type: list + elements: str + job_slice_count: + description: + - Job slice count to use for the job, only used if prompt for job slice count is set. + type: int + labels: + description: + - Labels to use for the job, only used if prompt for labels is set. + type: list + elements: str + job_timeout: + description: + - Timeout to use for the job, only used if prompt for timeout is set. + - This parameter is sent through the API to the job. + type: int + wait: + description: + - Wait for the job to complete. + default: False + type: bool + interval: + description: + - The interval to request an update from the controller. + required: False + default: 2 + type: float + timeout: + description: + - If waiting for the job to complete this will abort after this + amount of seconds. This happens on the module side. + type: int +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Launch a job + job_launch: + job_template: "My Job Template" + register: job + +- name: Launch a job template with extra_vars on remote controller instance + job_launch: + job_template: "My Job Template" + extra_vars: + var1: "My First Variable" + var2: "My Second Variable" + var3: "My Third Variable" + job_type: run + +- name: Launch a job with inventory and credential + job_launch: + job_template: "My Job Template" + inventory: "My Inventory" + credential: "My Credential" + register: job +- name: Wait for job max 120s + job_wait: + job_id: "{{ job.id }}" + timeout: 120 +''' + +RETURN = ''' +id: + description: job id of the newly launched job + returned: success + type: int + sample: 86 +status: + description: status of newly launched job + returned: success + type: str + sample: pending +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True, aliases=['job_template']), + job_type=dict(choices=['run', 'check']), + 
inventory=dict(), + organization=dict(), + # Credentials will be a str instead of a list for backwards compatibility + credentials=dict(type='list', aliases=['credential'], elements='str'), + limit=dict(), + tags=dict(type='list', elements='str'), + extra_vars=dict(type='dict'), + scm_branch=dict(), + skip_tags=dict(type='list', elements='str'), + verbosity=dict(type='int', choices=[0, 1, 2, 3, 4, 5]), + diff_mode=dict(type='bool'), + credential_passwords=dict(type='dict', no_log=False), + execution_environment=dict(), + forks=dict(type='int'), + instance_groups=dict(type='list', elements='str'), + job_slice_count=dict(type='int'), + labels=dict(type='list', elements='str'), + job_timeout=dict(type='int'), + wait=dict(default=False, type='bool'), + interval=dict(default=2.0, type='float'), + timeout=dict(type='int'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + optional_args = {} + # Extract our parameters + name = module.params.get('name') + inventory = module.params.get('inventory') + organization = module.params.get('organization') + credentials = module.params.get('credentials') + execution_environment = module.params.get('execution_environment') + instance_groups = module.params.get('instance_groups') + labels = module.params.get('labels') + wait = module.params.get('wait') + interval = module.params.get('interval') + timeout = module.params.get('timeout') + + for field_name in ( + 'job_type', + 'limit', + 'extra_vars', + 'scm_branch', + 'verbosity', + 'diff_mode', + 'credential_passwords', + 'forks', + 'job_slice_count', + 'job_timeout', + ): + field_val = module.params.get(field_name) + if field_val is not None: + optional_args[field_name] = field_val + + # Special treatment of tags parameters + job_tags = module.params.get('tags') + if job_tags is not None: + optional_args['job_tags'] = ",".join(job_tags) + skip_tags = module.params.get('skip_tags') + if skip_tags is not None: +
optional_args['skip_tags'] = ",".join(skip_tags) + + # job_timeout is special because it's actually timeout but we already had a timeout variable + job_timeout = module.params.get('job_timeout') + if job_timeout is not None: + optional_args['timeout'] = job_timeout + + # Create a datastructure to pass into our job launch + post_data = {} + for arg_name, arg_value in optional_args.items(): + if arg_value: + post_data[arg_name] = arg_value + + # Attempt to look up the related items the user specified (these will fail the module if not found) + if inventory: + post_data['inventory'] = module.resolve_name_to_id('inventories', inventory) + if execution_environment: + post_data['execution_environment'] = module.resolve_name_to_id('execution_environments', execution_environment) + + if credentials: + post_data['credentials'] = [] + for credential in credentials: + post_data['credentials'].append(module.resolve_name_to_id('credentials', credential)) + if labels: + post_data['labels'] = [] + for label in labels: + post_data['labels'].append(module.resolve_name_to_id('labels', label)) + if instance_groups: + post_data['instance_groups'] = [] + for instance_group in instance_groups: + post_data['instance_groups'].append(module.resolve_name_to_id('instance_groups', instance_group)) + + # Attempt to look up job_template based on the provided name + lookup_data = {} + if organization: + lookup_data['organization'] = module.resolve_name_to_id('organizations', organization) + job_template = module.get_one('job_templates', name_or_id=name, data=lookup_data) + + if job_template is None: + module.fail_json(msg="Unable to find job template by name {0}".format(name)) + + # The API will allow you to submit values to a job launch that are not prompt on launch. + # Therefore, we will test to see if anything is set which is not prompt on launch and fail.
+ check_vars_to_prompts = { + 'scm_branch': 'ask_scm_branch_on_launch', + 'diff_mode': 'ask_diff_mode_on_launch', + 'limit': 'ask_limit_on_launch', + 'tags': 'ask_tags_on_launch', + 'skip_tags': 'ask_skip_tags_on_launch', + 'job_type': 'ask_job_type_on_launch', + 'verbosity': 'ask_verbosity_on_launch', + 'inventory': 'ask_inventory_on_launch', + 'credentials': 'ask_credential_on_launch', + } + + param_errors = [] + for variable_name, prompt in check_vars_to_prompts.items(): + if module.params.get(variable_name) and not job_template[prompt]: + param_errors.append("The field {0} was specified but the job template does not allow for it to be overridden".format(variable_name)) + # Check if Either ask_variables_on_launch, or survey_enabled is enabled for use of extra vars. + if module.params.get('extra_vars') and not (job_template['ask_variables_on_launch'] or job_template['survey_enabled']): + param_errors.append("The field extra_vars was specified but the job template does not allow for it to be overridden") + if len(param_errors) > 0: + module.fail_json(msg="Parameters specified which can not be passed into job template, see errors for details", **{'errors': param_errors}) + + # Launch the job + results = module.post_endpoint(job_template['related']['launch'], **{'data': post_data}) + + if results['status_code'] != 201: + module.fail_json(msg="Failed to launch job, see response for details", **{'response': results}) + + if not wait: + module.exit_json( + **{ + 'changed': True, + 'id': results['json']['id'], + 'status': results['json']['status'], + } + ) + + # Invoke wait function + results = module.wait_on_url(url=results['json']['url'], object_name=name, object_type='Job', timeout=timeout, interval=interval) + + module.exit_json( + **{ + 'changed': True, + 'id': results['json']['id'], + 'status': results['json']['status'], + } + ) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/job_list.py 
b/ansible_collections/awx/awx/plugins/modules/job_list.py new file mode 100644 index 00000000..95f9ea6f --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/job_list.py @@ -0,0 +1,125 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: job_list +author: "Wayne Witzel III (@wwitzel3)" +short_description: List Automation Platform Controller jobs. +description: + - List Automation Platform Controller jobs. See + U(https://www.ansible.com/tower) for an overview. +options: + status: + description: + - Only list jobs with this status. + choices: ['pending', 'waiting', 'running', 'error', 'failed', 'canceled', 'successful'] + type: str + page: + description: + - Page number of the results to fetch. + type: int + all_pages: + description: + - Fetch all the pages and return a single result. + type: bool + default: 'no' + query: + description: + - Query used to further filter the list of jobs. 
C({"foo":"bar"}) will be passed as C(?foo=bar) + type: dict +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: List running jobs for the testing.yml playbook + job_list: + status: running + query: {"playbook": "testing.yml"} + controller_config_file: "~/tower_cli.cfg" + register: testing_jobs +''' + +RETURN = ''' +count: + description: Total count of objects returned + returned: success + type: int + sample: 51 +next: + description: next page available for the listing + returned: success + type: int + sample: 3 +previous: + description: previous page available for the listing + returned: success + type: int + sample: 1 +results: + description: a list of job objects represented as dictionaries + returned: success + type: list + sample: [{"allow_simultaneous": false, "artifacts": {}, "ask_credential_on_launch": false, + "ask_inventory_on_launch": false, "ask_job_type_on_launch": false, "failed": false, + "finished": "2017-02-22T15:09:05.633942Z", "force_handlers": false, "forks": 0, "id": 2, + "inventory": 1, "job_explanation": "", "job_tags": "", "job_template": 5, "job_type": "run"}, ...]
+''' + + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + status=dict(choices=['pending', 'waiting', 'running', 'error', 'failed', 'canceled', 'successful']), + page=dict(type='int'), + all_pages=dict(type='bool', default=False), + query=dict(type='dict'), + ) + + # Create a module for ourselves + module = ControllerAPIModule( + argument_spec=argument_spec, + mutually_exclusive=[ + ('page', 'all_pages'), + ], + ) + + # Extract our parameters + query = module.params.get('query') + status = module.params.get('status') + page = module.params.get('page') + all_pages = module.params.get('all_pages') + + job_search_data = {} + if page: + job_search_data['page'] = page + if status: + job_search_data['status'] = status + if query: + job_search_data.update(query) + if all_pages: + job_list = module.get_all_endpoint('jobs', **{'data': job_search_data}) + else: + job_list = module.get_endpoint('jobs', **{'data': job_search_data}) + + # Attempt to look up jobs based on the status + module.exit_json(**job_list['json']) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/job_template.py b/ansible_collections/awx/awx/plugins/modules/job_template.py new file mode 100644 index 00000000..4508bc18 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/job_template.py @@ -0,0 +1,648 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: job_template +author: "Wayne Witzel III (@wwitzel3)" +short_description: create, update, or 
destroy Automation Platform Controller job templates. +description: + - Create, update, or destroy Automation Platform Controller job templates. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - Name to use for the job template. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name (looked up via the name field). + type: str + copy_from: + description: + - Name or id to copy the job template from. + - This will copy an existing job template and change any parameters supplied. + - The new job template name will be the one provided in the name parameter. + - The organization parameter is not used in this, to facilitate copy from one organization to another. + - Provide the id or use the lookup plugin to provide the id if multiple job templates share the same name. + type: str + description: + description: + - Description to use for the job template. + type: str + job_type: + description: + - The job type to use for the job template. + choices: ["run", "check"] + type: str + inventory: + description: + - Name of the inventory to use for the job template. + type: str + organization: + description: + - Organization the job template exists in. + - Used to help lookup the object, cannot be modified using this module. + - The Organization is inferred from the associated project. + - If not provided, will lookup by name only, which does not work with duplicates. + - Requires Automation Platform Version 3.7.0 or AWX 10.0.0; IS NOT backwards compatible with earlier versions. + type: str + project: + description: + - Name of the project to use for the job template. + type: str + playbook: + description: + - Path to the playbook to use for the job template within the project provided. + type: str + credential: + description: + - Name of the credential to use for the job template. + - Deprecated, use 'credentials'.
+ type: str + credentials: + description: + - List of credentials to use for the job template. + type: list + elements: str + vault_credential: + description: + - Name of the vault credential to use for the job template. + - Deprecated, use 'credentials'. + type: str + execution_environment: + description: + - Execution Environment to use for the job template. + type: str + custom_virtualenv: + description: + - Local absolute file path containing a custom Python virtualenv to use. + - Only compatible with older versions of AWX/Tower + - Deprecated, will be removed in the future + type: str + instance_groups: + description: + - list of Instance Groups for this Organization to run on. + type: list + elements: str + forks: + description: + - The number of parallel or simultaneous processes to use while executing the playbook. + type: int + limit: + description: + - A host pattern to further constrain the list of hosts managed or affected by the playbook + type: str + verbosity: + description: + - Control the output level Ansible produces as the playbook runs. 0 - Normal, 1 - Verbose, 2 - More Verbose, 3 - Debug, 4 - Connection Debug. + choices: [0, 1, 2, 3, 4] + type: int + extra_vars: + description: + - Specify C(extra_vars) for the template. + type: dict + job_tags: + description: + - Comma separated list of the tags to use for the job template. + type: str + force_handlers: + description: + - Enable forcing playbook handlers to run even if a task fails. + type: bool + aliases: + - force_handlers_enabled + skip_tags: + description: + - Comma separated list of the tags to skip for the job template. + type: str + start_at_task: + description: + - Start the playbook at the task matching this name. + type: str + diff_mode: + description: + - Enable diff mode for the job template. + type: bool + aliases: + - diff_mode_enabled + use_fact_cache: + description: + - Enable use of fact caching for the job template. 
+ type: bool + aliases: + - fact_caching_enabled + host_config_key: + description: + - Allow provisioning callbacks using this host config key. + type: str + ask_scm_branch_on_launch: + description: + - Prompt user for (scm branch) on launch. + type: bool + ask_diff_mode_on_launch: + description: + - Prompt user to enable diff mode (show changes) to files when supported by modules. + type: bool + aliases: + - ask_diff_mode + ask_variables_on_launch: + description: + - Prompt user for (extra_vars) on launch. + type: bool + aliases: + - ask_extra_vars + ask_limit_on_launch: + description: + - Prompt user for a limit on launch. + type: bool + aliases: + - ask_limit + ask_tags_on_launch: + description: + - Prompt user for job tags on launch. + type: bool + aliases: + - ask_tags + ask_skip_tags_on_launch: + description: + - Prompt user for job tags to skip on launch. + type: bool + aliases: + - ask_skip_tags + ask_job_type_on_launch: + description: + - Prompt user for job type on launch. + type: bool + aliases: + - ask_job_type + ask_verbosity_on_launch: + description: + - Prompt user to choose a verbosity level on launch. + type: bool + aliases: + - ask_verbosity + ask_inventory_on_launch: + description: + - Prompt user for inventory on launch. + type: bool + aliases: + - ask_inventory + ask_credential_on_launch: + description: + - Prompt user for credential on launch. + type: bool + aliases: + - ask_credential + ask_execution_environment_on_launch: + description: + - Prompt user for execution environment on launch. + type: bool + aliases: + - ask_execution_environment + ask_forks_on_launch: + description: + - Prompt user for forks on launch. + type: bool + aliases: + - ask_forks + ask_instance_groups_on_launch: + description: + - Prompt user for instance groups on launch. + type: bool + aliases: + - ask_instance_groups + ask_job_slice_count_on_launch: + description: + - Prompt user for job slice count on launch. 
+ type: bool + aliases: + - ask_job_slice_count + ask_labels_on_launch: + description: + - Prompt user for labels on launch. + type: bool + aliases: + - ask_labels + ask_timeout_on_launch: + description: + - Prompt user for timeout on launch. + type: bool + aliases: + - ask_timeout + survey_enabled: + description: + - Enable a survey on the job template. + type: bool + survey_spec: + description: + - JSON/YAML dict formatted survey definition. + type: dict + become_enabled: + description: + - Activate privilege escalation. + type: bool + allow_simultaneous: + description: + - Allow simultaneous runs of the job template. + type: bool + aliases: + - concurrent_jobs_enabled + timeout: + description: + - Maximum time in seconds to wait for a job to finish (server-side). + type: int + default: 0 + job_slice_count: + description: + - The number of jobs to slice into at runtime. Will cause the Job Template to launch a workflow if value is greater than 1. + type: int + webhook_service: + description: + - Service that webhook requests will be accepted from + type: str + choices: + - '' + - 'github' + - 'gitlab' + webhook_credential: + description: + - Personal Access Token for posting back the status to the service API + type: str + scm_branch: + description: + - Branch to use in job run. Project default used if blank. Only allowed if project allow_override field is set to true. + type: str + labels: + description: + - The labels applied to this job template + - Must be created with the labels module first. This will error if the label has not been created. + type: list + elements: str + state: + description: + - Desired state of the resource. 
+ default: "present" + choices: ["present", "absent"] + type: str + notification_templates_started: + description: + - list of notifications to send on start + type: list + elements: str + notification_templates_success: + description: + - list of notifications to send on success + type: list + elements: str + notification_templates_error: + description: + - list of notifications to send on error + type: list + elements: str + prevent_instance_group_fallback: + description: + - Prevent falling back to instance groups set on the associated inventory or organization + type: bool + +extends_documentation_fragment: awx.awx.auth + +notes: + - JSON for survey_spec can be found in the API Documentation. See + U(https://docs.ansible.com/ansible-tower/latest/html/towerapi/api_ref.html#/Job_Templates/Job_Templates_job_templates_survey_spec_create) + for POST operation payload example. +''' + + +EXAMPLES = ''' +- name: Create Ping job template + job_template: + name: "Ping" + job_type: "run" + organization: "Default" + inventory: "Local" + project: "Demo" + playbook: "ping.yml" + credentials: + - "Local" + state: "present" + controller_config_file: "~/tower_cli.cfg" + survey_enabled: yes + survey_spec: "{{ lookup('file', 'my_survey.json') }}" + +- name: Add start notification to Job Template + job_template: + name: "Ping" + notification_templates_started: + - Notification1 + - Notification2 + +- name: Remove Notification1 start notification from Job Template + job_template: + name: "Ping" + notification_templates_started: + - Notification2 + +- name: Copy Job Template + job_template: + name: copy job template + copy_from: test job template + job_type: "run" + inventory: Copy Foo Inventory + project: test + playbook: hello_world.yml + state: "present" +''' + +from ..module_utils.controller_api import ControllerAPIModule +import json + + +def update_survey(module, last_request): + spec_endpoint = last_request.get('related', {}).get('survey_spec') + if 
module.params.get('survey_spec') == {}: + response = module.delete_endpoint(spec_endpoint) + if response['status_code'] != 200: + # Not sure how to make this actually return a non-200 to test what to dump in the response + module.fail_json(msg="Failed to delete survey: {0}".format(response['json'])) + else: + response = module.post_endpoint(spec_endpoint, **{'data': module.params.get('survey_spec')}) + if response['status_code'] != 200: + module.fail_json(msg="Failed to update survey: {0}".format(response['json']['error'])) + module.exit_json(**module.json_output) + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + copy_from=dict(), + description=dict(), + organization=dict(), + job_type=dict(choices=['run', 'check']), + inventory=dict(), + project=dict(), + playbook=dict(), + credential=dict(), + vault_credential=dict(), + credentials=dict(type='list', elements='str'), + execution_environment=dict(), + custom_virtualenv=dict(), + instance_groups=dict(type="list", elements='str'), + forks=dict(type='int'), + limit=dict(), + verbosity=dict(type='int', choices=[0, 1, 2, 3, 4]), + extra_vars=dict(type='dict'), + job_tags=dict(), + force_handlers=dict(type='bool', aliases=['force_handlers_enabled']), + skip_tags=dict(), + start_at_task=dict(), + timeout=dict(type='int'), + use_fact_cache=dict(type='bool', aliases=['fact_caching_enabled']), + host_config_key=dict(no_log=False), + ask_diff_mode_on_launch=dict(type='bool', aliases=['ask_diff_mode']), + ask_variables_on_launch=dict(type='bool', aliases=['ask_extra_vars']), + ask_limit_on_launch=dict(type='bool', aliases=['ask_limit']), + ask_tags_on_launch=dict(type='bool', aliases=['ask_tags']), + ask_skip_tags_on_launch=dict(type='bool', aliases=['ask_skip_tags']), + ask_job_type_on_launch=dict(type='bool', aliases=['ask_job_type']), + ask_verbosity_on_launch=dict(type='bool', aliases=['ask_verbosity']), +
ask_inventory_on_launch=dict(type='bool', aliases=['ask_inventory']), + ask_credential_on_launch=dict(type='bool', aliases=['ask_credential']), + ask_execution_environment_on_launch=dict(type='bool', aliases=['ask_execution_environment']), + ask_forks_on_launch=dict(type='bool', aliases=['ask_forks']), + ask_instance_groups_on_launch=dict(type='bool', aliases=['ask_instance_groups']), + ask_job_slice_count_on_launch=dict(type='bool', aliases=['ask_job_slice_count']), + ask_labels_on_launch=dict(type='bool', aliases=['ask_labels']), + ask_timeout_on_launch=dict(type='bool', aliases=['ask_timeout']), + survey_enabled=dict(type='bool'), + survey_spec=dict(type="dict"), + become_enabled=dict(type='bool'), + diff_mode=dict(type='bool', aliases=['diff_mode_enabled']), + allow_simultaneous=dict(type='bool', aliases=['concurrent_jobs_enabled']), + scm_branch=dict(), + ask_scm_branch_on_launch=dict(type='bool'), + job_slice_count=dict(type='int'), + webhook_service=dict(choices=['github', 'gitlab', '']), + webhook_credential=dict(), + labels=dict(type="list", elements='str'), + notification_templates_started=dict(type="list", elements='str'), + notification_templates_success=dict(type="list", elements='str'), + notification_templates_error=dict(type="list", elements='str'), + prevent_instance_group_fallback=dict(type="bool"), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get("new_name") + copy_from = module.params.get('copy_from') + state = module.params.get('state') + + # Deal with legacy credential and vault_credential + credential = module.params.get('credential') + vault_credential = module.params.get('vault_credential') + credentials = module.params.get('credentials') + if vault_credential: + if credentials is None: + credentials = [] + 
credentials.append(vault_credential) + if credential: + if credentials is None: + credentials = [] + credentials.append(credential) + + new_fields = {} + search_fields = {} + + # Attempt to look up the related items the user specified (these will fail the module if not found) + organization_id = None + organization = module.params.get('organization') + if organization: + organization_id = module.resolve_name_to_id('organizations', organization) + search_fields['organization'] = new_fields['organization'] = organization_id + + ee = module.params.get('execution_environment') + if ee: + new_fields['execution_environment'] = module.resolve_name_to_id('execution_environments', ee) + + # Attempt to look up an existing item based on the provided data + existing_item = module.get_one('job_templates', name_or_id=name, **{'data': search_fields}) + + # Attempt to look up credential to copy based on the provided name + if copy_from: + # a new existing item is formed when copying and is returned. + existing_item = module.copy_item( + existing_item, + copy_from, + name, + endpoint='job_templates', + item_type='job_template', + copy_lookup_data={}, + ) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(existing_item) + + # Create the data that gets sent for create and update + new_fields['name'] = new_name if new_name else (module.get_item_name(existing_item) if existing_item else name) + for field_name in ( + 'description', + 'job_type', + 'playbook', + 'scm_branch', + 'forks', + 'limit', + 'verbosity', + 'job_tags', + 'force_handlers', + 'skip_tags', + 'start_at_task', + 'timeout', + 'use_fact_cache', + 'host_config_key', + 'ask_scm_branch_on_launch', + 'ask_diff_mode_on_launch', + 'ask_variables_on_launch', + 'ask_limit_on_launch', + 'ask_tags_on_launch', + 'ask_skip_tags_on_launch', + 'ask_job_type_on_launch', + 'ask_verbosity_on_launch', + 
'ask_inventory_on_launch', + 'ask_credential_on_launch', + 'ask_execution_environment_on_launch', + 'ask_forks_on_launch', + 'ask_instance_groups_on_launch', + 'ask_job_slice_count_on_launch', + 'ask_labels_on_launch', + 'ask_timeout_on_launch', + 'survey_enabled', + 'become_enabled', + 'diff_mode', + 'allow_simultaneous', + 'custom_virtualenv', + 'job_slice_count', + 'webhook_service', + 'prevent_instance_group_fallback', + ): + field_val = module.params.get(field_name) + if field_val is not None: + new_fields[field_name] = field_val + + # Special treatment of extra_vars parameter + extra_vars = module.params.get('extra_vars') + if extra_vars is not None: + new_fields['extra_vars'] = json.dumps(extra_vars) + + # Attempt to look up the related items the user specified (these will fail the module if not found) + inventory = module.params.get('inventory') + project = module.params.get('project') + webhook_credential = module.params.get('webhook_credential') + + if inventory is not None: + new_fields['inventory'] = module.resolve_name_to_id('inventories', inventory) + if project is not None: + if organization_id is not None: + project_data = module.get_one( + 'projects', + name_or_id=project, + **{ + 'data': { + 'organization': organization_id, + } + } + ) + if project_data is None: + module.fail_json(msg="The project {0} in organization {1} was not found on the controller instance server".format(project, organization)) + new_fields['project'] = project_data['id'] + else: + new_fields['project'] = module.resolve_name_to_id('projects', project) + if webhook_credential is not None: + new_fields['webhook_credential'] = module.resolve_name_to_id('credentials', webhook_credential) + + association_fields = {} + + if credentials is not None: + association_fields['credentials'] = [] + for item in credentials: + association_fields['credentials'].append(module.resolve_name_to_id('credentials', item)) + + labels = module.params.get('labels') + if labels is not None: + 
association_fields['labels'] = [] + for item in labels: + label_id = module.get_one('labels', name_or_id=item, **{'data': search_fields}) + if label_id is None: + module.fail_json(msg='Could not find label entry with name {0}'.format(item)) + else: + association_fields['labels'].append(label_id['id']) + + notifications_start = module.params.get('notification_templates_started') + if notifications_start is not None: + association_fields['notification_templates_started'] = [] + for item in notifications_start: + association_fields['notification_templates_started'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_success = module.params.get('notification_templates_success') + if notifications_success is not None: + association_fields['notification_templates_success'] = [] + for item in notifications_success: + association_fields['notification_templates_success'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_error = module.params.get('notification_templates_error') + if notifications_error is not None: + association_fields['notification_templates_error'] = [] + for item in notifications_error: + association_fields['notification_templates_error'].append(module.resolve_name_to_id('notification_templates', item)) + + instance_group_names = module.params.get('instance_groups') + if instance_group_names is not None: + association_fields['instance_groups'] = [] + for item in instance_group_names: + association_fields['instance_groups'].append(module.resolve_name_to_id('instance_groups', item)) + + on_change = None + new_spec = module.params.get('survey_spec') + if new_spec is not None: + existing_spec = None + if existing_item: + spec_endpoint = existing_item.get('related', {}).get('survey_spec') + existing_spec = module.get_endpoint(spec_endpoint)['json'] + if new_spec != existing_spec: + module.json_output['changed'] = True + if existing_item and module.has_encrypted_values(existing_spec): + 
module._encrypted_changed_warning('survey_spec', existing_item, warning=True) + on_change = update_survey + + # If the state was present and we can let the module build or update the existing item, this will return on its own + module.create_or_update_if_needed( + existing_item, + new_fields, + endpoint='job_templates', + item_type='job_template', + associations=association_fields, + on_create=on_change, + on_update=on_change, + ) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/job_wait.py b/ansible_collections/awx/awx/plugins/modules/job_wait.py new file mode 100644 index 00000000..b7f71eed --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/job_wait.py @@ -0,0 +1,131 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: job_wait +author: "Wayne Witzel III (@wwitzel3)" +short_description: Wait for Automation Platform Controller job to finish. +description: + - Wait for Automation Platform Controller job to finish and report success or failure. See + U(https://www.ansible.com/tower) for an overview. +options: + job_id: + description: + - ID of the job to monitor. + required: True + type: int + interval: + description: + - The interval, in seconds, at which to request an update from the controller. + - For backwards compatibility, if unset this will be set to the average of the min and max intervals. + required: False + default: 2 + type: float + timeout: + description: + - Maximum time in seconds to wait for a job to finish.
+ type: int + job_type: + description: + - Job type to wait for + choices: ['project_updates', 'jobs', 'inventory_updates', 'workflow_jobs'] + default: 'jobs' + type: str +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Launch a job + job_launch: + job_template: "My Job Template" + register: job + +- name: Wait for job max 120s + job_wait: + job_id: "{{ job.id }}" + timeout: 120 +''' + +RETURN = ''' +id: + description: job id that is being waited on + returned: success + type: int + sample: 99 +elapsed: + description: total time in seconds the job took to run + returned: success + type: float + sample: 10.879 +started: + description: timestamp of when the job started running + returned: success + type: str + sample: "2017-03-01T17:03:53.200234Z" +finished: + description: timestamp of when the job finished running + returned: success + type: str + sample: "2017-03-01T17:04:04.078782Z" +status: + description: current status of job + returned: success + type: str + sample: successful +''' + + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + job_id=dict(type='int', required=True), + job_type=dict(choices=['project_updates', 'jobs', 'inventory_updates', 'workflow_jobs'], default='jobs'), + timeout=dict(type='int'), + interval=dict(type='float', default=2), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + job_id = module.params.get('job_id') + job_type = module.params.get('job_type') + timeout = module.params.get('timeout') + interval = module.params.get('interval') + + # Attempt to look up job based on the provided id + job = module.get_one( + job_type, + **{ + 'data': { + 'id': job_id, + } + } + ) + + if job is None: + module.fail_json(msg='Unable to wait on ' + job_type.rstrip("s") + ' {0}; that ID does not 
exist.'.format(job_id)) + + # Invoke wait function + module.wait_on_url(url=job['url'], object_name=job_id, object_type='legacy_job_wait', timeout=timeout, interval=interval) + + module.exit_json(**module.json_output) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/label.py b/ansible_collections/awx/awx/plugins/modules/label.py new file mode 100644 index 00000000..b17d58be --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/label.py @@ -0,0 +1,102 @@ +#!/usr/bin/python +# coding: utf-8 -*- + + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + +DOCUMENTATION = ''' +--- +module: label +author: "Wayne Witzel III (@wwitzel3)" +short_description: create, update, or destroy Automation Platform Controller labels. +description: + - Create, update, or destroy Automation Platform Controller labels. See + U(https://www.ansible.com/tower) for an overview. + - Note that labels can only be created via the API; they cannot be deleted. + Once they are fully disassociated the API will clean them up on its own. +options: + name: + description: + - Name of this label. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name (looked up via the name field). + type: str + organization: + description: + - Organization this label belongs to. + required: True + type: str + state: + description: + - Desired state of the resource.
+ default: "present" + choices: ["present"] + type: str +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Add label to organization + label: + name: Custom Label + organization: My Organization +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + organization=dict(required=True), + state=dict(choices=['present'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get("new_name") + organization = module.params.get('organization') + + # Attempt to look up the related items the user specified (these will fail the module if not found) + organization_id = None + if organization: + organization_id = module.resolve_name_to_id('organizations', organization) + + # Attempt to look up an existing item based on the provided data + existing_item = module.get_one( + 'labels', + name_or_id=name, + **{ + 'data': { + 'organization': organization_id, + } + } + ) + + # Create the data that gets sent for create and update + new_fields = {} + new_fields['name'] = new_name if new_name else (module.get_item_name(existing_item) if existing_item else name) + if organization: + new_fields['organization'] = organization_id + + module.create_or_update_if_needed(existing_item, new_fields, endpoint='labels', item_type='label', associations={}) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/license.py b/ansible_collections/awx/awx/plugins/modules/license.py new file mode 100644 index 00000000..ed9b9372 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/license.py @@ -0,0 +1,128 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2019, John Westcott IV 
<john.westcott.iv@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: license +author: "John Westcott IV (@john-westcott-iv)" +short_description: Set the license for Automation Platform Controller +description: + - Get or Set Automation Platform Controller license. See + U(https://www.ansible.com/tower) for an overview. +options: + manifest: + description: + - file path to a Red Hat subscription manifest (a .zip file) + required: False + type: str + force: + description: + - By default, the license manifest will only be applied if Tower is currently + unlicensed or trial licensed. When force=true, the license is always applied. + type: bool + default: 'False' + pool_id: + description: + - Red Hat or Red Hat Satellite pool_id to attach to + required: False + type: str + state: + description: + - Desired state of the resource. 
+ default: "present" + choices: ["present", "absent"] + type: str +extends_documentation_fragment: awx.awx.auth +''' + +RETURN = ''' # ''' + +EXAMPLES = ''' +- name: Set the license using a file + license: + manifest: "/tmp/my_manifest.zip" + +- name: Attach to a pool + license: + pool_id: 123456 + +- name: Remove license + license: + state: absent +''' + +import base64 +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + + module = ControllerAPIModule( + argument_spec=dict( + manifest=dict(type='str', required=False), + pool_id=dict(type='str', required=False), + force=dict(type='bool', default=False), + state=dict(choices=['present', 'absent'], default='present'), + ), + required_if=[ + ['state', 'present', ['manifest', 'pool_id'], True], + ], + mutually_exclusive=[("manifest", "pool_id")], + ) + + json_output = {'changed': False} + + # If the state was absent we can delete the endpoint and exit. + state = module.params.get('state') + if state == 'absent': + module.delete_endpoint('config') + module.exit_json(**json_output) + + if module.params.get('manifest', None): + try: + with open(module.params.get('manifest'), 'rb') as fid: + manifest = base64.b64encode(fid.read()) + except OSError as e: + module.fail_json(msg=str(e)) + + # Check if Tower is already licensed + config = module.get_endpoint('config')['json'] + already_licensed = ( + 'license_info' in config + and 'instance_count' in config['license_info'] + and config['license_info']['instance_count'] > 0 + and 'trial' in config['license_info'] + and not config['license_info']['trial'] + ) + + # Determine if we will install the license + perform_install = bool((not already_licensed) or module.params.get('force')) + + # Handle check mode + if module.check_mode: + json_output['changed'] = perform_install + module.exit_json(**json_output) + + # Do the actual install, if we need to + if perform_install: + json_output['changed'] = True + if module.params.get('manifest', None): + 
module.post_endpoint('config', data={'manifest': manifest.decode()}) + else: + module.post_endpoint('config/attach', data={'pool_id': module.params.get('pool_id')}) + + module.exit_json(**json_output) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/notification_template.py b/ansible_collections/awx/awx/plugins/modules/notification_template.py new file mode 100644 index 00000000..6da77c65 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/notification_template.py @@ -0,0 +1,297 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2018, Samuel Carpentier <samuelcarpentier0@gmail.ca> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: notification_template +author: "Samuel Carpentier (@samcarpentier)" +short_description: create, update, or destroy Automation Platform Controller notifications. +description: + - Create, update, or destroy Automation Platform Controller notifications. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - The name of the notification. + type: str + required: True + new_name: + description: + - Setting this option will change the existing name (looked up via the name field). + type: str + copy_from: + description: + - Name or id to copy the notification from. + - This will copy an existing notification and change any parameters supplied. + - The new notification name will be the one provided in the name parameter. + - The organization parameter is not used in this, to facilitate copy from one organization to another. + - Provide the id or use the lookup plugin to provide the id if multiple notifications share the same name.
+ type: str + description: + description: + - The description of the notification. + type: str + organization: + description: + - The organization the notification belongs to. + type: str + notification_type: + description: + - The type of notification to be sent. + choices: + - 'email' + - 'grafana' + - 'irc' + - 'mattermost' + - 'pagerduty' + - 'rocketchat' + - 'slack' + - 'twilio' + - 'webhook' + type: str + notification_configuration: + description: + - The notification configuration. Note that providing this field disables all other notification-configuration-related fields. + - username (the mail server username) + - sender (the sender email address) + - recipients (the recipient email addresses) + - use_tls (the TLS trigger) + - host (the mail server host) + - use_ssl (the SSL trigger) + - password (the mail server password) + - port (the mail server port) + - channels (the destination Slack channels) + - token (the access token) + - account_token (the Twilio account token) + - from_number (the source phone number) + - to_numbers (the destination phone numbers) + - account_sid (the Twilio account SID) + - subdomain (the PagerDuty subdomain) + - service_key (the PagerDuty service/integration API key) + - client_name (the PagerDuty client identifier) + - message_from (the label to be shown with the notification) + - color (the notification color) + - notify (the notify channel trigger) + - url (the target URL) + - headers (the HTTP headers as JSON string) + - server (the IRC server address) + - nickname (the IRC nickname) + - targets (the destination channels or users) + type: dict + messages: + description: + - Optional custom messages for notification template. + type: dict + state: + description: + - Desired state of the resource.
+ default: "present" + choices: ["present", "absent"] + type: str +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Add Slack notification with custom messages + notification_template: + name: slack notification + organization: Default + notification_type: slack + notification_configuration: + channels: + - general + token: cefda9e2be1f21d11cdd9452f5b7f97fda977f42 + messages: + started: + message: "{{ '{{ job_friendly_name }}{{ job.id }} started' }}" + success: + message: "{{ '{{ job_friendly_name }} completed in {{ job.elapsed }} seconds' }}" + error: + message: "{{ '{{ job_friendly_name }} FAILED! Please look at {{ job.url }}' }}" + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Add webhook notification + notification_template: + name: webhook notification + notification_type: webhook + notification_configuration: + url: http://www.example.com/hook + headers: + X-Custom-Header: value123 + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Add email notification + notification_template: + name: email notification + notification_type: email + notification_configuration: + username: user + password: s3cr3t + sender: controller@example.com + recipients: + - user1@example.com + host: smtp.example.com + port: 25 + use_tls: no + use_ssl: no + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Add twilio notification + notification_template: + name: twilio notification + notification_type: twilio + notification_configuration: + account_token: a_token + account_sid: a_sid + from_number: '+15551112222' + to_numbers: + - '+15553334444' + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Add PagerDuty notification + notification_template: + name: pagerduty notification + notification_type: pagerduty + notification_configuration: + token: a_token + subdomain: sub + client_name: client + service_key: a_key + state: present + controller_config_file: "~/tower_cli.cfg" + 
+- name: Add IRC notification + notification_template: + name: irc notification + notification_type: irc + notification_configuration: + nickname: controller + password: s3cr3t + targets: + - user1 + port: 8080 + server: irc.example.com + use_ssl: no + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Delete notification + notification_template: + name: old notification + state: absent + controller_config_file: "~/tower_cli.cfg" + +- name: Copy webhook notification + notification_template: + name: foo notification + copy_from: email notification + organization: Foo +''' + + +RETURN = ''' # ''' + + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + copy_from=dict(), + description=dict(), + organization=dict(), + notification_type=dict(choices=['email', 'grafana', 'irc', 'mattermost', 'pagerduty', 'rocketchat', 'slack', 'twilio', 'webhook']), + notification_configuration=dict(type='dict'), + messages=dict(type='dict'), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get('new_name') + copy_from = module.params.get('copy_from') + description = module.params.get('description') + organization = module.params.get('organization') + notification_type = module.params.get('notification_type') + notification_configuration = module.params.get('notification_configuration') + messages = module.params.get('messages') + state = module.params.get('state') + + # Attempt to look up the related items the user specified (these will fail the module if not found) + organization_id = None + if organization: + organization_id = module.resolve_name_to_id('organizations', organization) + + # 
Attempt to look up an existing item based on the provided data + existing_item = module.get_one( + 'notification_templates', + name_or_id=name, + **{ + 'data': { + 'organization': organization_id, + } + } + ) + + # Attempt to look up the notification template to copy based on the provided name + if copy_from: + # the copied item becomes the new existing item and is returned. + existing_item = module.copy_item( + existing_item, + copy_from, + name, + endpoint='notification_templates', + item_type='notification_template', + copy_lookup_data={}, + ) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed; the module will handle exiting from this + module.delete_if_needed(existing_item) + + final_notification_configuration = {} + if notification_configuration is not None: + final_notification_configuration.update(notification_configuration) + + # Create the data that gets sent for create and update + new_fields = {} + if final_notification_configuration: + new_fields['notification_configuration'] = final_notification_configuration + new_fields['name'] = new_name if new_name else (module.get_item_name(existing_item) if existing_item else name) + if description is not None: + new_fields['description'] = description + if organization is not None: + new_fields['organization'] = organization_id + if notification_type is not None: + new_fields['notification_type'] = notification_type + if messages is not None: + new_fields['messages'] = messages + + # If the state was present and we can let the module build or update the existing item, this will return on its own + module.create_or_update_if_needed(existing_item, new_fields, endpoint='notification_templates', item_type='notification_template', associations={}) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/organization.py b/ansible_collections/awx/awx/plugins/modules/organization.py new file mode 100644 index 00000000..de78eb22 --- /dev/null +++ 
b/ansible_collections/awx/awx/plugins/modules/organization.py @@ -0,0 +1,217 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: organization +author: "Wayne Witzel III (@wwitzel3)" +short_description: create, update, or destroy Automation Platform Controller organizations +description: + - Create, update, or destroy Automation Platform Controller organizations. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - Name to use for the organization. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name (looked up via the name field). + type: str + description: + description: + - The description to use for the organization. + type: str + default_environment: + description: + - Default Execution Environment to use for jobs owned by the Organization. + type: str + custom_virtualenv: + description: + - Local absolute file path containing a custom Python virtualenv to use. + - Only compatible with older versions of AWX/Tower + - Deprecated, will be removed in the future + type: str + max_hosts: + description: + - The maximum number of hosts allowed in this organization. + type: int + state: + description: + - Desired state of the resource. + default: "present" + choices: ["present", "absent"] + type: str + instance_groups: + description: + - list of Instance Groups for this Organization to run on.
+ type: list + elements: str + notification_templates_started: + description: + - list of notifications to send on start + type: list + elements: str + notification_templates_success: + description: + - list of notifications to send on success + type: list + elements: str + notification_templates_error: + description: + - list of notifications to send on error + type: list + elements: str + notification_templates_approvals: + description: + - list of notifications to send on approval + type: list + elements: str + galaxy_credentials: + description: + - list of Ansible Galaxy credentials to associate to the organization + type: list + elements: str +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Create organization + organization: + name: "Foo" + description: "Foo bar organization" + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Create organization using 'foo-venv' as default Python virtualenv + organization: + name: "Foo" + description: "Foo bar organization using foo-venv" + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Create organization that pulls content from galaxy.ansible.com + organization: + name: "Foo" + state: present + galaxy_credentials: + - Ansible Galaxy + controller_config_file: "~/tower_cli.cfg" +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + description=dict(), + default_environment=dict(), + custom_virtualenv=dict(), + max_hosts=dict(type='int'), + instance_groups=dict(type="list", elements='str'), + notification_templates_started=dict(type="list", elements='str'), + notification_templates_success=dict(type="list", elements='str'), + notification_templates_error=dict(type="list", elements='str'), + notification_templates_approvals=dict(type="list", elements='str'), + 
galaxy_credentials=dict(type="list", elements='str'), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get("new_name") + description = module.params.get('description') + default_ee = module.params.get('default_environment') + custom_virtualenv = module.params.get('custom_virtualenv') + max_hosts = module.params.get('max_hosts') + state = module.params.get('state') + + # Attempt to look up organization based on the provided name + organization = module.get_one('organizations', name_or_id=name) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(organization) + # Attempt to look up associated field items the user specified. + association_fields = {} + + instance_group_names = module.params.get('instance_groups') + if instance_group_names is not None: + association_fields['instance_groups'] = [] + for item in instance_group_names: + association_fields['instance_groups'].append(module.resolve_name_to_id('instance_groups', item)) + + notifications_start = module.params.get('notification_templates_started') + if notifications_start is not None: + association_fields['notification_templates_started'] = [] + for item in notifications_start: + association_fields['notification_templates_started'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_success = module.params.get('notification_templates_success') + if notifications_success is not None: + association_fields['notification_templates_success'] = [] + for item in notifications_success: + association_fields['notification_templates_success'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_error = 
module.params.get('notification_templates_error') + if notifications_error is not None: + association_fields['notification_templates_error'] = [] + for item in notifications_error: + association_fields['notification_templates_error'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_approval = module.params.get('notification_templates_approvals') + if notifications_approval is not None: + association_fields['notification_templates_approvals'] = [] + for item in notifications_approval: + association_fields['notification_templates_approvals'].append(module.resolve_name_to_id('notification_templates', item)) + + galaxy_credentials = module.params.get('galaxy_credentials') + if galaxy_credentials is not None: + association_fields['galaxy_credentials'] = [] + for item in galaxy_credentials: + association_fields['galaxy_credentials'].append(module.resolve_name_to_id('credentials', item)) + + # Create the data that gets sent for create and update + org_fields = { + 'name': new_name if new_name else (module.get_item_name(organization) if organization else name), + } + if description is not None: + org_fields['description'] = description + if default_ee is not None: + org_fields['default_environment'] = module.resolve_name_to_id('execution_environments', default_ee) + if custom_virtualenv is not None: + org_fields['custom_virtualenv'] = custom_virtualenv + if max_hosts is not None: + org_fields['max_hosts'] = max_hosts + + # If the state was present and we can let the module build or update the existing organization, this will return on its own + module.create_or_update_if_needed( + organization, + org_fields, + endpoint='organizations', + item_type='organization', + associations=association_fields, + ) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/project.py b/ansible_collections/awx/awx/plugins/modules/project.py new file mode 100644 index 00000000..97713a26 --- /dev/null +++ 
b/ansible_collections/awx/awx/plugins/modules/project.py @@ -0,0 +1,420 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: project +author: "Wayne Witzel III (@wwitzel3)" +short_description: create, update, or destroy Automation Platform Controller projects +description: + - Create, update, or destroy Automation Platform Controller projects. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - Name to use for the project. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name (looked up via the name field). + type: str + copy_from: + description: + - Name or id to copy the project from. + - This will copy an existing project and change any parameters supplied. + - The new project name will be the one provided in the name parameter. + - The organization parameter is not used in this, to facilitate copy from one organization to another. + - Provide the id or use the lookup plugin to provide the id if multiple projects share the same name. + type: str + description: + description: + - Description to use for the project. + type: str + scm_type: + description: + - Type of SCM resource. + choices: ["manual", "git", "svn", "insights", "archive"] + type: str + scm_url: + description: + - URL of SCM resource. + type: str + local_path: + description: + - The server playbook directory for manual projects. + type: str + scm_branch: + description: + - The branch to use for the SCM resource. + type: str + scm_refspec: + description: + - The refspec to use for the SCM resource.
+ type: str + credential: + description: + - Name of the credential to use with this SCM resource. + type: str + aliases: + - scm_credential + scm_clean: + description: + - Remove local modifications before updating. + type: bool + scm_delete_on_update: + description: + - Remove the repository completely before updating. + type: bool + scm_track_submodules: + description: + - Track submodules' latest commit on specified branch. + type: bool + scm_update_on_launch: + description: + - Perform an update to the local repository before launching a job with this project. + type: bool + scm_update_cache_timeout: + description: + - Cache Timeout to cache prior project syncs for a certain number of seconds. + Only valid if scm_update_on_launch is set to True, otherwise ignored. + type: int + allow_override: + description: + - Allow changing the SCM branch or revision in a job template that uses this project. + type: bool + aliases: + - scm_allow_override + timeout: + description: + - The amount of time (in seconds) to run before the SCM Update is canceled. A value of 0 means no timeout. + - If waiting for the project to update, this will abort after this + number of seconds. + type: int + aliases: + - job_timeout + default_environment: + description: + - Default Execution Environment to use for jobs relating to the project. + type: str + custom_virtualenv: + description: + - Local absolute file path containing a custom Python virtualenv to use. + - Only compatible with older versions of AWX/Tower. + - Deprecated, will be removed in the future. + type: str + organization: + description: + - Name of organization for project. + type: str + state: + description: + - Desired state of the resource.
+ default: "present" + choices: ["present", "absent"] + type: str + wait: + description: + - Provides option (True by default) to wait for completed project sync + before returning + - Can assure playbook files are populated so that job templates that rely + on the project may be successfully created + type: bool + default: True + notification_templates_started: + description: + - list of notifications to send on start + type: list + elements: str + notification_templates_success: + description: + - list of notifications to send on success + type: list + elements: str + notification_templates_error: + description: + - list of notifications to send on error + type: list + elements: str + update_project: + description: + - Force project to update after changes. + - Used in conjunction with wait, interval, and timeout. + default: False + type: bool + interval: + description: + - The interval to request an update from the controller. + - Requires wait. + required: False + default: 2 + type: float + signature_validation_credential: + description: + - Name of the credential to use for signature validation. + - If signature validation credential is provided, signature validation will be enabled. 
+ type: str + +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Add project + project: + name: "Foo" + description: "Foo bar project" + organization: "test" + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Add Project with cache timeout + project: + name: "Foo" + description: "Foo bar project" + organization: "test" + scm_update_on_launch: True + scm_update_cache_timeout: 60 + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Copy project + project: + name: copy + copy_from: test + description: Foo copy project + organization: Foo + state: present +''' + +import time + +from ..module_utils.controller_api import ControllerAPIModule + + +def wait_for_project_update(module, last_request): + # The current running job for the update is in last_request['summary_fields']['current_update']['id'] + + # Get parameters that were not passed in + update_project = module.params.get('update_project') + wait = module.params.get('wait') + timeout = module.params.get('timeout') + interval = module.params.get('interval') + scm_revision_original = last_request['scm_revision'] + + if 'current_update' in last_request['summary_fields']: + running = True + while running: + result = module.get_endpoint('/project_updates/{0}/'.format(last_request['summary_fields']['current_update']['id']))['json'] + + if module.is_job_done(result['status']): + time.sleep(1) + running = False + + if result['status'] != 'successful': + module.fail_json(msg="Project update failed") + elif update_project: + result = module.post_endpoint(last_request['related']['update']) + + if result['status_code'] != 202: + module.fail_json(msg="Failed to update project, see response for details", response=result) + + if not wait: + module.exit_json(**module.json_output) + + # Invoke wait function + result_final = module.wait_on_url( + url=result['json']['url'], object_name=module.get_item_name(last_request), object_type='Project Update', 
timeout=timeout, interval=interval + ) + + # Set changed to the correct value depending on whether the scm_revision hash changed (this also reflects the refspec comparison) + module.json_output['changed'] = True + if result_final['json']['scm_revision'] == scm_revision_original: + module.json_output['changed'] = False + + module.exit_json(**module.json_output) + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + copy_from=dict(), + description=dict(), + scm_type=dict(choices=['manual', 'git', 'svn', 'insights', 'archive']), + scm_url=dict(), + local_path=dict(), + scm_branch=dict(), + scm_refspec=dict(), + credential=dict(aliases=['scm_credential']), + scm_clean=dict(type='bool'), + scm_delete_on_update=dict(type='bool'), + scm_track_submodules=dict(type='bool'), + scm_update_on_launch=dict(type='bool'), + scm_update_cache_timeout=dict(type='int'), + allow_override=dict(type='bool', aliases=['scm_allow_override']), + timeout=dict(type='int', aliases=['job_timeout']), + default_environment=dict(), + custom_virtualenv=dict(), + organization=dict(), + notification_templates_started=dict(type="list", elements='str'), + notification_templates_success=dict(type="list", elements='str'), + notification_templates_error=dict(type="list", elements='str'), + state=dict(choices=['present', 'absent'], default='present'), + wait=dict(type='bool', default=True), + update_project=dict(default=False, type='bool'), + interval=dict(default=2.0, type='float'), + signature_validation_credential=dict(type='str'), + ) + + # Create a module for ourselves + module = ControllerAPIModule( + argument_spec=argument_spec, + ) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get("new_name") + copy_from = module.params.get('copy_from') + scm_type = module.params.get('scm_type') + if scm_type == "manual": + scm_type = "" + local_path = module.params.get('local_path') + credential
= module.params.get('credential') + scm_update_on_launch = module.params.get('scm_update_on_launch') + scm_update_cache_timeout = module.params.get('scm_update_cache_timeout') + default_ee = module.params.get('default_environment') + organization = module.params.get('organization') + state = module.params.get('state') + wait = module.params.get('wait') + update_project = module.params.get('update_project') + + signature_validation_credential = module.params.get('signature_validation_credential') + + # Attempt to look up the related items the user specified (these will fail the module if not found) + lookup_data = {} + org_id = None + if organization: + org_id = module.resolve_name_to_id('organizations', organization) + lookup_data['organization'] = org_id + + # Attempt to look up project based on the provided name and org ID + project = module.get_one('projects', name_or_id=name, data=lookup_data) + + # Attempt to look up the project to copy based on the provided name + if copy_from: + # copying creates a new item, which is returned and used from here on. + project = module.copy_item( + project, + copy_from, + name, + endpoint='projects', + item_type='project', + copy_lookup_data={}, + ) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(project) + + # Attempt to look up associated field items the user specified.
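The association handling that follows uses one repeated pattern: for each list-valued parameter that was supplied, resolve every name to its API id before sending. A minimal standalone sketch of that pattern is below; `resolve_name_to_id` is stubbed here with a dict lookup, whereas the real `ControllerAPIModule` method performs an API query and fails the module when a name cannot be found.

```python
# Sketch of the name -> id association pattern used by these modules.
# KNOWN stands in for the controller's API; it is a hypothetical fixture.
KNOWN = {
    ('notification_templates', 'on-start'): 1,
    ('notification_templates', 'on-error'): 2,
}


def resolve_name_to_id(endpoint, name):
    # The real method queries the endpoint; here we just look it up.
    try:
        return KNOWN[(endpoint, name)]
    except KeyError:
        raise ValueError('Could not find {0} named {1}'.format(endpoint, name))


def build_association_fields(params):
    # Only parameters the user actually passed (not None) are resolved,
    # so omitted lists leave the existing associations untouched.
    association_fields = {}
    for field in ('notification_templates_started', 'notification_templates_error'):
        values = params.get(field)
        if values is not None:
            association_fields[field] = [
                resolve_name_to_id('notification_templates', item) for item in values
            ]
    return association_fields
```

An empty parameter dict yields an empty association dict, which is what lets the module distinguish "leave associations alone" from "set associations to an empty list".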
+ association_fields = {} + + notifications_start = module.params.get('notification_templates_started') + if notifications_start is not None: + association_fields['notification_templates_started'] = [] + for item in notifications_start: + association_fields['notification_templates_started'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_success = module.params.get('notification_templates_success') + if notifications_success is not None: + association_fields['notification_templates_success'] = [] + for item in notifications_success: + association_fields['notification_templates_success'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_error = module.params.get('notification_templates_error') + if notifications_error is not None: + association_fields['notification_templates_error'] = [] + for item in notifications_error: + association_fields['notification_templates_error'].append(module.resolve_name_to_id('notification_templates', item)) + + # Create the data that gets sent for create and update + project_fields = { + 'name': new_name if new_name else (module.get_item_name(project) if project else name), + } + + for field_name in ( + 'scm_type', + 'scm_url', + 'scm_branch', + 'scm_refspec', + 'scm_clean', + 'scm_delete_on_update', + 'scm_track_submodules', + 'scm_update_on_launch', + 'scm_update_cache_timeout', + 'timeout', + 'custom_virtualenv', + 'description', + 'allow_override', + ): + field_val = module.params.get(field_name) + if field_val is not None: + project_fields[field_name] = field_val + + for variable, field, endpoint in ( + (default_ee, 'default_environment', 'execution_environments'), + (credential, 'credential', 'credentials'), + (signature_validation_credential, 'signature_validation_credential', 'credentials'), + ): + if variable is not None: + project_fields[field] = module.resolve_name_to_id(endpoint, variable) + + if org_id is not None: + # this
is resolved earlier, so save an API call and don't do it again in the loop above + project_fields['organization'] = org_id + + if scm_type == '' and local_path is not None: + project_fields['local_path'] = local_path + + if scm_update_cache_timeout not in (0, None) and scm_update_on_launch is not True: + module.warn('scm_update_cache_timeout will be ignored since scm_update_on_launch was not set to true') + + # If the project is not a manual project, register our on_change method + # An on_change function, if registered, will fire after a post_endpoint or update_if_needed completes successfully + on_change = None + if (wait or update_project) and scm_type != '': + on_change = wait_for_project_update + + # If the state was present and we can let the module build or update the existing project, this will return on its own + response = module.create_or_update_if_needed( + project, + project_fields, + endpoint='projects', + item_type='project', + associations=association_fields, + on_create=on_change, + on_update=on_change, + auto_exit=not update_project, + ) + + if update_project: + wait_for_project_update(module, response) + module.exit_json(**module.json_output) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/project_update.py b/ansible_collections/awx/awx/plugins/modules/project_update.py new file mode 100644 index 00000000..6cbcd39b --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/project_update.py @@ -0,0 +1,144 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +ANSIBLE_METADATA = {'metadata_version': '1.0', 'status': ['preview'], 'supported_by': 'community'} + +DOCUMENTATION = ''' +--- +module: project_update +author: "Sean Sullivan (@sean-m-sullivan)" +short_description: Update a Project in Automation
Platform Controller +description: + - Update an Automation Platform Controller Project. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - The name or id of the project to update. + required: True + type: str + aliases: + - project + organization: + description: + - Organization the project exists in. + - Used to help lookup the object, cannot be modified using this module. + - If not provided, will lookup by name only, which does not work with duplicates. + type: str + wait: + description: + - Wait for the project to update. + - If the scm revision has not changed, the module will return not changed. + default: True + type: bool + interval: + description: + - The interval to request an update from the controller. + required: False + default: 2 + type: float + timeout: + description: + - If waiting for the project to update, this will abort after this + number of seconds. + type: int +extends_documentation_fragment: awx.awx.auth +''' + +RETURN = ''' +id: + description: project id of the updated project + returned: success + type: int + sample: 86 +status: + description: status of the updated project + returned: success + type: str + sample: pending +''' + + +EXAMPLES = ''' +- name: Launch a project with a timeout of 10 seconds + project_update: + project: "Networking Project" + timeout: 10 + +- name: Launch a Project without waiting + project_update: + project: "Networking Project" + wait: False +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True, aliases=['project']), + organization=dict(), + wait=dict(default=True, type='bool'), + interval=dict(default=2.0, type='float'), + timeout=dict(type='int'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name =
module.params.get('name') + organization = module.params.get('organization') + wait = module.params.get('wait') + interval = module.params.get('interval') + timeout = module.params.get('timeout') + + # Attempt to look up project based on the provided name or id + lookup_data = {} + if organization: + lookup_data['organization'] = module.resolve_name_to_id('organizations', organization) + project = module.get_one('projects', name_or_id=name, data=lookup_data) + if project is None: + module.fail_json(msg="Unable to find project") + + if wait: + scm_revision_original = project['scm_revision'] + + # Update the project + result = module.post_endpoint(project['related']['update']) + + if result['status_code'] == 405: + module.fail_json( + msg="Unable to trigger a project update because the project scm_type ({0}) does not support it.".format(project['scm_type']), + response=result + ) + elif result['status_code'] != 202: + module.fail_json(msg="Failed to update project, see response for details", response=result) + + module.json_output['changed'] = True + module.json_output['id'] = result['json']['id'] + module.json_output['status'] = result['json']['status'] + + if not wait: + module.exit_json(**module.json_output) + + # Invoke wait function + result = module.wait_on_url( + url=result['json']['url'], object_name=module.get_item_name(project), object_type='Project Update', timeout=timeout, interval=interval + ) + scm_revision_new = result['json']['scm_revision'] + if scm_revision_new == scm_revision_original: + module.json_output['changed'] = False + + module.exit_json(**module.json_output) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/role.py b/ansible_collections/awx/awx/plugins/modules/role.py new file mode 100644 index 00000000..51ed5643 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/role.py @@ -0,0 +1,319 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III 
<wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: role +author: "Wayne Witzel III (@wwitzel3)" +short_description: grant or revoke an Automation Platform Controller role. +description: + - Roles are used for access control, this module is for managing user access to server resources. + - Grant or revoke Automation Platform Controller roles to users. See U(https://www.ansible.com/tower) for an overview. +options: + user: + description: + - User that receives the permissions specified by the role. + type: str + team: + description: + - Team that receives the permissions specified by the role. + type: str + role: + description: + - The role type to grant/revoke. + required: True + choices: ["admin", "read", "member", "execute", "adhoc", "update", "use", "approval", "auditor", "project_admin", "inventory_admin", "credential_admin", + "workflow_admin", "notification_admin", "job_template_admin", "execution_environment_admin"] + type: str + target_team: + description: + - Team that the role acts on. + - For example, make someone a member or an admin of a team. + - Members of a team implicitly receive the permissions that the team has. + - Deprecated, use 'target_teams'. + type: str + target_teams: + description: + - Team that the role acts on. + - For example, make someone a member or an admin of a team. + - Members of a team implicitly receive the permissions that the team has. + type: list + elements: str + inventory: + description: + - Inventory the role acts on. + - Deprecated, use 'inventories'. + type: str + inventories: + description: + - Inventory the role acts on. + type: list + elements: str + job_template: + description: + - The job template the role acts on. 
+ - Deprecated, use 'job_templates'. + type: str + job_templates: + description: + - The job template the role acts on. + type: list + elements: str + workflow: + description: + - The workflow job template the role acts on. + - Deprecated, use 'workflows'. + type: str + workflows: + description: + - The workflow job template the role acts on. + type: list + elements: str + credential: + description: + - Credential the role acts on. + - Deprecated, use 'credentials'. + type: str + credentials: + description: + - Credential the role acts on. + type: list + elements: str + organization: + description: + - Organization the role acts on. + - Deprecated, use 'organizations'. + type: str + organizations: + description: + - Organization the role acts on. + type: list + elements: str + lookup_organization: + description: + - Organization the inventories, job templates, projects, or workflows the item exists in. + - Used to help lookup the object, for organization roles see organization. + - If not provided, will lookup by name only, which does not work with duplicates. + type: str + project: + description: + - Project the role acts on. + - Deprecated, use 'projects'. + type: str + projects: + description: + - Project the role acts on. + type: list + elements: str + state: + description: + - Desired state. + - State of present indicates the user should have the role. + - State of absent indicates the user should have the role taken away, if they have it.
+ default: "present" + choices: ["present", "absent"] + type: str + +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Add jdoe to the member role of My Team + role: + user: jdoe + target_team: "My Team" + role: member + state: present + +- name: Add Joe to multiple job templates and a workflow + role: + user: joe + role: execute + workflows: + - test-role-workflow + job_templates: + - jt1 + - jt2 + state: present +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + + argument_spec = dict( + user=dict(), + team=dict(), + role=dict( + choices=[ + "admin", + "read", + "member", + "execute", + "adhoc", + "update", + "use", + "approval", + "auditor", + "project_admin", + "inventory_admin", + "credential_admin", + "workflow_admin", + "notification_admin", + "job_template_admin", + "execution_environment_admin", + ], + required=True, + ), + target_team=dict(), + target_teams=dict(type='list', elements='str'), + inventory=dict(), + inventories=dict(type='list', elements='str'), + job_template=dict(), + job_templates=dict(type='list', elements='str'), + workflow=dict(), + workflows=dict(type='list', elements='str'), + credential=dict(), + credentials=dict(type='list', elements='str'), + organization=dict(), + organizations=dict(type='list', elements='str'), + lookup_organization=dict(), + project=dict(), + projects=dict(type='list', elements='str'), + state=dict(choices=['present', 'absent'], default='present'), + ) + + module = ControllerAPIModule(argument_spec=argument_spec) + + role_type = module.params.pop('role') + role_field = role_type + '_role' + state = module.params.pop('state') + + module.json_output['role'] = role_type + + # Deal with legacy parameters + resource_list_param_keys = { + 'credentials': 'credential', + 'inventories': 'inventory', + 'job_templates': 'job_template', + 'organizations': 'organization', + 'projects': 'project', + 'target_teams': 'target_team', + 'workflows': 'workflow', + 
} + # Singular parameters + resource_param_keys = ('user', 'team', 'lookup_organization') + + resources = {} + for resource_group, old_name in resource_list_param_keys.items(): + if module.params.get(resource_group) is not None: + resources.setdefault(resource_group, []).extend(module.params.get(resource_group)) + if module.params.get(old_name) is not None: + resources.setdefault(resource_group, []).append(module.params.get(old_name)) + for resource_group in resource_param_keys: + if module.params.get(resource_group) is not None: + resources[resource_group] = module.params.get(resource_group) + # Change workflows and target_teams key to its endpoint name. + if 'workflows' in resources: + resources['workflow_job_templates'] = resources.pop('workflows') + if 'target_teams' in resources: + resources['teams'] = resources.pop('target_teams') + + # Set lookup data to use + lookup_data = {} + if 'lookup_organization' in resources: + lookup_data['organization'] = module.resolve_name_to_id('organizations', resources['lookup_organization']) + resources.pop('lookup_organization') + + # Lookup actor data + # separate actors from resources + actor_data = {} + missing_items = [] + for key in ('user', 'team'): + if key in resources: + if key == 'user': + lookup_data_populated = {} + else: + lookup_data_populated = lookup_data + # Attempt to look up project based on the provided name or ID and lookup data + data = module.get_one('{0}s'.format(key), name_or_id=resources[key], data=lookup_data_populated) + if data is None: + module.fail_json( + msg='Unable to find {0} with name: {1}'.format(key, resources[key]), changed=False + ) + else: + actor_data[key] = module.get_one('{0}s'.format(key), name_or_id=resources[key], data=lookup_data_populated) + resources.pop(key) + # Lookup Resources + resource_data = {} + for key, value in resources.items(): + for resource in value: + # Attempt to look up project based on the provided name or ID and lookup data + if key in resources: + if key == 
'organizations': + lookup_data_populated = {} + else: + lookup_data_populated = lookup_data + data = module.get_one(key, name_or_id=resource, data=lookup_data_populated) + if data is None: + missing_items.append(resource) + else: + resource_data.setdefault(key, []).append(data) + if len(missing_items) > 0: + module.fail_json( + msg='There were {0} missing items, missing items: {1}'.format(len(missing_items), missing_items), changed=False + ) + # build association agenda + associations = {} + for actor_type, actor in actor_data.items(): + for key, value in resource_data.items(): + for resource in value: + resource_roles = resource['summary_fields']['object_roles'] + if role_field not in resource_roles: + available_roles = ', '.join(list(resource_roles.keys())) + module.fail_json( + msg='Resource {0} has no role {1}, available roles: {2}'.format(resource['url'], role_field, available_roles), changed=False + ) + role_data = resource_roles[role_field] + endpoint = '/roles/{0}/{1}/'.format(role_data['id'], module.param_to_endpoint(actor_type)) + associations.setdefault(endpoint, []) + associations[endpoint].append(actor['id']) + + # perform associations + for association_endpoint, new_association_list in associations.items(): + response = module.get_all_endpoint(association_endpoint) + existing_associated_ids = [association['id'] for association in response['json']['results']] + + if state == 'present': + for an_id in list(set(new_association_list) - set(existing_associated_ids)): + response = module.post_endpoint(association_endpoint, **{'data': {'id': int(an_id)}}) + if response['status_code'] == 204: + module.json_output['changed'] = True + else: + module.fail_json(msg="Failed to grant role. 
{0}".format(response['json'].get('detail', response['json'].get('msg', 'unknown')))) + else: + for an_id in list(set(existing_associated_ids) & set(new_association_list)): + response = module.post_endpoint(association_endpoint, **{'data': {'id': int(an_id), 'disassociate': True}}) + if response['status_code'] == 204: + module.json_output['changed'] = True + else: + module.fail_json(msg="Failed to revoke role. {0}".format(response['json'].get('detail', response['json'].get('msg', 'unknown')))) + + module.exit_json(**module.json_output) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/schedule.py b/ansible_collections/awx/awx/plugins/modules/schedule.py new file mode 100644 index 00000000..5bf82102 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/schedule.py @@ -0,0 +1,361 @@ +#!/usr/bin/python +# coding: utf-8 -*- + + +# (c) 2020, John Westcott IV <john.westcott.iv@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + +DOCUMENTATION = ''' +--- +module: schedule +author: "John Westcott IV (@john-westcott-iv)" +short_description: create, update, or destroy Automation Platform Controller schedules. +description: + - Create, update, or destroy Automation Platform Controller schedules. See + U(https://www.ansible.com/tower) for an overview. +options: + rrule: + description: + - A value representing the schedule's iCal recurrence rule. + - See the rrule plugin for help constructing this value. + required: False + type: str + name: + description: + - Name of this schedule. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name (looked up via the name field).
+ required: False + type: str + description: + description: + - Optional description of this schedule. + required: False + type: str + execution_environment: + description: + - Execution Environment applied as a prompt, assuming job template prompts for execution environment + type: str + extra_data: + description: + - Specify C(extra_vars) for the template. + required: False + type: dict + forks: + description: + - Forks applied as a prompt, assuming job template prompts for forks + type: int + instance_groups: + description: + - List of Instance Groups applied as a prompt, assuming job template prompts for instance groups + type: list + elements: str + inventory: + description: + - Inventory applied as a prompt, assuming job template prompts for inventory + required: False + type: str + job_slice_count: + description: + - Job Slice Count applied as a prompt, assuming job template prompts for job slice count + type: int + labels: + description: + - List of labels applied as a prompt, assuming job template prompts for labels + type: list + elements: str + credentials: + description: + - List of credentials applied as a prompt, assuming job template prompts for credentials + type: list + elements: str + scm_branch: + description: + - Branch to use in job run. Project default used if blank. Only allowed if project allow_override field is set to true. + required: False + type: str + timeout: + description: + - Timeout applied as a prompt, assuming job template prompts for timeout + type: int + job_type: + description: + - The job type to use for the job template. + required: False + type: str + choices: + - 'run' + - 'check' + job_tags: + description: + - Comma separated list of the tags to use for the job template. + required: False + type: str + skip_tags: + description: + - Comma separated list of the tags to skip for the job template.
+ required: False + type: str + limit: + description: + - A host pattern to further constrain the list of hosts managed or affected by the playbook + required: False + type: str + diff_mode: + description: + - Enable diff mode for the job template. + required: False + type: bool + verbosity: + description: + - Control the output level Ansible produces as the playbook runs. 0 - Normal, 1 - Verbose, 2 - More Verbose, 3 - Debug, 4 - Connection Debug, 5 - WinRM Debug. + required: False + type: int + choices: + - 0 + - 1 + - 2 + - 3 + - 4 + - 5 + unified_job_template: + description: + - Name of unified job template to schedule. Used to look up an already existing schedule. + required: False + type: str + organization: + description: + - The organization the unified job template exists in. + - Used for looking up the unified job template, not a direct model field. + type: str + enabled: + description: + - Enables processing of this schedule. + required: False + type: bool + state: + description: + - Desired state of the resource.
+ choices: ["present", "absent"] + default: "present" + type: str +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Build a schedule for Demo Job Template + schedule: + name: "{{ sched1 }}" + state: present + unified_job_template: "Demo Job Template" + rrule: "DTSTART:20191219T130551Z RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1" + register: result + +- name: Build the same schedule using the rrule plugin + schedule: + name: "{{ sched1 }}" + state: present + unified_job_template: "Demo Job Template" + rrule: "{{ query('awx.awx.schedule_rrule', 'week', start_date='2019-12-19 13:05:51') }}" + register: result + +- name: Build a complex schedule for every day except sunday using the rruleset plugin + schedule: + name: "{{ sched1 }}" + state: present + unified_job_template: "Demo Job Template" + rrule: "{{ query(awx.awx.schedule_rruleset, '2022-04-30 10:30:45', rules=rrules, timezone='UTC' ) }}" + vars: + rrules: + - frequency: 'day' + every: 1 + - frequency: 'day' + every: 1 + on_days: 'sunday' + include: False + +- name: Delete 'my_schedule' schedule for my_workflow + schedule: + name: "my_schedule" + state: absent + unified_job_template: my_workflow +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + rrule=dict(), + name=dict(required=True), + new_name=dict(), + description=dict(), + execution_environment=dict(type='str'), + extra_data=dict(type='dict'), + forks=dict(type='int'), + instance_groups=dict(type='list', elements='str'), + inventory=dict(), + job_slice_count=dict(type='int'), + labels=dict(type='list', elements='str'), + timeout=dict(type='int'), + credentials=dict(type='list', elements='str'), + scm_branch=dict(), + job_type=dict(choices=['run', 'check']), + job_tags=dict(), + skip_tags=dict(), + limit=dict(), + diff_mode=dict(type='bool'), + verbosity=dict(type='int', choices=[0, 1, 2, 3, 4, 5]), 
+ unified_job_template=dict(), + organization=dict(), + enabled=dict(type='bool'), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + rrule = module.params.get('rrule') + name = module.params.get('name') + new_name = module.params.get("new_name") + description = module.params.get('description') + execution_environment = module.params.get('execution_environment') + extra_data = module.params.get('extra_data') + forks = module.params.get('forks') + instance_groups = module.params.get('instance_groups') + inventory = module.params.get('inventory') + job_slice_count = module.params.get('job_slice_count') + labels = module.params.get('labels') + timeout = module.params.get('timeout') + credentials = module.params.get('credentials') + scm_branch = module.params.get('scm_branch') + job_type = module.params.get('job_type') + job_tags = module.params.get('job_tags') + skip_tags = module.params.get('skip_tags') + limit = module.params.get('limit') + diff_mode = module.params.get('diff_mode') + verbosity = module.params.get('verbosity') + unified_job_template = module.params.get('unified_job_template') + organization = module.params.get('organization') + enabled = module.params.get('enabled') + state = module.params.get('state') + + # Attempt to look up the related items the user specified (these will fail the module if not found) + inventory_id = None + if inventory: + inventory_id = module.resolve_name_to_id('inventories', inventory) + search_fields = {} + sched_search_fields = {} + if organization: + search_fields['organization'] = module.resolve_name_to_id('organizations', organization) + unified_job_template_id = None + if unified_job_template: + search_fields['name'] = unified_job_template + unified_job_template_id = module.get_one('unified_job_templates', **{'data': search_fields})['id'] + 
sched_search_fields['unified_job_template'] = unified_job_template_id + # Attempt to look up an existing item based on the provided data + existing_item = module.get_one('schedules', name_or_id=name, **{'data': sched_search_fields}) + + association_fields = {} + + if credentials is not None: + association_fields['credentials'] = [] + for item in credentials: + association_fields['credentials'].append(module.resolve_name_to_id('credentials', item)) + + # We need to clear out the name from the search fields so we can use name_or_id in the following searches + if 'name' in search_fields: + del search_fields['name'] + + if labels is not None: + association_fields['labels'] = [] + for item in labels: + label_id = module.get_one('labels', name_or_id=item, **{'data': search_fields}) + if label_id is None: + module.fail_json(msg='Could not find label entry with name {0}'.format(item)) + else: + association_fields['labels'].append(label_id['id']) + + if instance_groups is not None: + association_fields['instance_groups'] = [] + for item in instance_groups: + instance_group_id = module.get_one('instance_groups', name_or_id=item, **{'data': search_fields}) + if instance_group_id is None: + module.fail_json(msg='Could not find instance_group entry with name {0}'.format(item)) + else: + association_fields['instance_groups'].append(instance_group_id['id']) + + # Create the data that gets sent for create and update + new_fields = {} + if rrule is not None: + new_fields['rrule'] = rrule + new_fields['name'] = new_name if new_name else (module.get_item_name(existing_item) if existing_item else name) + if description is not None: + new_fields['description'] = description + if extra_data is not None: + new_fields['extra_data'] = extra_data + if inventory is not None: + new_fields['inventory'] = inventory_id + if scm_branch is not None: + new_fields['scm_branch'] = scm_branch + if job_type is not None: + new_fields['job_type'] = job_type + if job_tags is not None: + 
new_fields['job_tags'] = job_tags + if skip_tags is not None: + new_fields['skip_tags'] = skip_tags + if limit is not None: + new_fields['limit'] = limit + if diff_mode is not None: + new_fields['diff_mode'] = diff_mode + if verbosity is not None: + new_fields['verbosity'] = verbosity + if unified_job_template is not None: + new_fields['unified_job_template'] = unified_job_template_id + if enabled is not None: + new_fields['enabled'] = enabled + if forks is not None: + new_fields['forks'] = forks + if job_slice_count is not None: + new_fields['job_slice_count'] = job_slice_count + if timeout is not None: + new_fields['timeout'] = timeout + + if execution_environment is not None: + if execution_environment == '': + new_fields['execution_environment'] = '' + else: + ee = module.get_one('execution_environments', name_or_id=execution_environment, **{'data': search_fields}) + if ee is None: + module.fail_json(msg='could not find execution_environment entry with name {0}'.format(execution_environment)) + else: + new_fields['execution_environment'] = ee['id'] + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(existing_item) + elif state == 'present': + # If the state was present and we can let the module build or update the existing item, this will return on its own + module.create_or_update_if_needed( + existing_item, + new_fields, + endpoint='schedules', + item_type='schedule', + associations=association_fields, + ) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/settings.py b/ansible_collections/awx/awx/plugins/modules/settings.py new file mode 100644 index 00000000..56c0b94e --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/settings.py @@ -0,0 +1,180 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2018, Nikhil Jain <nikjain@redhat.com> +# GNU General Public License v3.0+ (see COPYING or 
https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: settings +author: "Nikhil Jain (@jainnikhil30)" +short_description: Modify Automation Platform Controller settings. +description: + - Modify Automation Platform Controller settings. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - Name of setting to modify + type: str + value: + description: + - Value to be modified for given setting. + - If given a non-string type, will make best effort to cast it to type API expects. + - For better control over types, use the C(settings) param instead. + type: str + settings: + description: + - A data structure to be sent into the settings endpoint + type: dict +requirements: + - pyyaml +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Set the value of AWX_ISOLATION_BASE_PATH + settings: + name: AWX_ISOLATION_BASE_PATH + value: "/tmp" + register: testing_settings + +- name: Set the value of AWX_ISOLATION_SHOW_PATHS + settings: + name: "AWX_ISOLATION_SHOW_PATHS" + value: "'/var/lib/awx/projects/', '/tmp'" + register: testing_settings + +- name: Set the LDAP Auth Bind Password + settings: + name: "AUTH_LDAP_BIND_PASSWORD" + value: "Password" + no_log: true + +- name: Set all the LDAP Auth Bind Params + settings: + settings: + AUTH_LDAP_BIND_PASSWORD: "password" + AUTH_LDAP_USER_ATTR_MAP: + email: "mail" + first_name: "givenName" + last_name: "surname" +''' + +from ..module_utils.controller_api import ControllerAPIModule + +try: + import yaml + + HAS_YAML = True +except ImportError: + HAS_YAML = False + + +def coerce_type(module, value): + # If our value is already None we can just return directly + if value is None: + return value + + yaml_ish = bool((value.startswith('{') and 
value.endswith('}')) or (value.startswith('[') and value.endswith(']'))) + if yaml_ish: + if not HAS_YAML: + module.fail_json(msg="yaml is not installed, try 'pip install pyyaml'") + return yaml.safe_load(value) + elif value.lower() in ('true', 'false', 't', 'f'): + return {'t': True, 'f': False}[value[0].lower()] + try: + return int(value) + except ValueError: + pass + return value + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(), + value=dict(), + settings=dict(type='dict'), + ) + + # Create a module for ourselves + module = ControllerAPIModule( + argument_spec=argument_spec, + required_one_of=[['name', 'settings']], + mutually_exclusive=[['name', 'settings']], + required_if=[['name', 'present', ['value']]], + ) + + # Extract our parameters + name = module.params.get('name') + value = module.params.get('value') + new_settings = module.params.get('settings') + + # If we were given a name/value pair we will just make settings out of that and proceed normally + if new_settings is None: + new_value = coerce_type(module, value) + + new_settings = {name: new_value} + + # Load the existing settings + existing_settings = module.get_endpoint('settings/all')['json'] + + # Begin a json response + json_output = {'changed': False, 'old_values': {}, 'new_values': {}} + + # Check any of the settings to see if anything needs to be updated + needs_update = False + for a_setting in new_settings: + if a_setting not in existing_settings or existing_settings[a_setting] != new_settings[a_setting]: + # At least one thing is different so we need to patch + needs_update = True + json_output['old_values'][a_setting] = existing_settings[a_setting] + json_output['new_values'][a_setting] = new_settings[a_setting] + + if module._diff: + json_output['diff'] = {'before': json_output['old_values'], 'after': json_output['new_values']} + + # If nothing needs an update we can simply exit with the response (as not
changed) + if not needs_update: + module.exit_json(**json_output) + + if module.check_mode and module._diff: + json_output['changed'] = True + module.exit_json(**json_output) + + # Make the call to update the settings + response = module.patch_endpoint('settings/all', **{'data': new_settings}) + + if response['status_code'] == 200: + # Set the changed response to True + json_output['changed'] = True + + # To deal with the old style values we need to return 'value' in the response + new_values = {} + for a_setting in new_settings: + new_values[a_setting] = response['json'][a_setting] + + # If we were using a name we will just add a value of a string, otherwise we will return an array in values + if name is not None: + json_output['value'] = new_values[name] + else: + json_output['values'] = new_values + + module.exit_json(**json_output) + elif 'json' in response and '__all__' in response['json']: + module.fail_json(msg=response['json']['__all__']) + else: + module.fail_json(**{'msg': "Unable to update settings, see response", 'response': response}) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/subscriptions.py b/ansible_collections/awx/awx/plugins/modules/subscriptions.py new file mode 100644 index 00000000..0f89e71d --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/subscriptions.py @@ -0,0 +1,102 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2019, John Westcott IV <john.westcott.iv@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: subscriptions +author: "John Westcott IV (@john-westcott-iv)" +short_description: Get subscription list +description: + - Get subscriptions available to Automation Platform 
Controller. See + U(https://www.ansible.com/tower) for an overview. +options: + username: + description: + - Red Hat or Red Hat Satellite username to get available subscriptions. + - The credentials you use will be stored for future use in retrieving renewal or expanded subscriptions. + required: True + type: str + password: + description: + - Red Hat or Red Hat Satellite password to get available subscriptions. + - The credentials you use will be stored for future use in retrieving renewal or expanded subscriptions. + required: True + type: str + filters: + description: + - Client-side filters to apply to the subscriptions. + - For any entries in this dict, if there is a corresponding entry in the subscription, it must contain the value from this dict. + - Note that this is a client-side search, not an API-side search. + required: False + type: dict + default: {} +extends_documentation_fragment: awx.awx.auth +''' + +RETURN = ''' +subscriptions: + description: dictionary containing information about the subscriptions + returned: If login succeeded + type: dict +''' + +EXAMPLES = ''' +- name: Get subscriptions + subscriptions: + username: "my_username" + password: "My Password" + +- name: Get subscriptions with a filter + subscriptions: + username: "my_username" + password: "My Password" + filters: + product_name: "Red Hat Ansible Automation Platform" + support_level: "Self-Support" +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + + module = ControllerAPIModule( + argument_spec=dict( + username=dict(type='str', required=True), + password=dict(type='str', no_log=True, required=True), + filters=dict(type='dict', required=False, default={}), + ), + ) + + json_output = {'changed': False} + + # Retrieve the available subscriptions using the provided credentials + post_data = { + 'subscriptions_password': module.params.get('password'), + 'subscriptions_username': module.params.get('username'), + } + all_subscriptions = module.post_endpoint('config/subscriptions',
data=post_data)['json'] + json_output['subscriptions'] = [] + for subscription in all_subscriptions: + add = True + for key in module.params.get('filters').keys(): + if subscription.get(key, None) and module.params.get('filters')[key] not in subscription.get(key): + add = False + if add: + json_output['subscriptions'].append(subscription) + + module.exit_json(**json_output) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/team.py b/ansible_collections/awx/awx/plugins/modules/team.py new file mode 100644 index 00000000..5482b4c8 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/team.py @@ -0,0 +1,105 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2017, Wayne Witzel III <wayne@riotousliving.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: team +author: "Wayne Witzel III (@wwitzel3)" +short_description: create, update, or destroy Automation Platform Controller team. +description: + - Create, update, or destroy Automation Platform Controller teams. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - Name to use for the team. + required: True + type: str + new_name: + description: + - To use when changing a team's name. + type: str + description: + description: + - The description to use for the team. + type: str + organization: + description: + - Organization the team should be made a member of. + required: True + type: str + state: + description: + - Desired state of the resource. 
+ choices: ["present", "absent"] + default: "present" + type: str +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Create team + team: + name: Team Name + description: Team Description + organization: test-org + state: present + controller_config_file: "~/tower_cli.cfg" +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + description=dict(), + organization=dict(required=True), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = module.params.get('new_name') + description = module.params.get('description') + organization = module.params.get('organization') + state = module.params.get('state') + + # Attempt to look up the related items the user specified (these will fail the module if not found) + org_id = module.resolve_name_to_id('organizations', organization) + + # Attempt to look up team based on the provided name and org ID + team = module.get_one('teams', name_or_id=name, **{'data': {'organization': org_id}}) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(team) + + # Create the data that gets sent for create and update + team_fields = {'name': new_name if new_name else (module.get_item_name(team) if team else name), 'organization': org_id} + if description is not None: + team_fields['description'] = description + + # If the state was present and we can let the module build or update the existing team, this will return on its own + module.create_or_update_if_needed(team, team_fields, endpoint='teams', item_type='team') + + +if __name__ == 
'__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/token.py b/ansible_collections/awx/awx/plugins/modules/token.py new file mode 100644 index 00000000..c9ed84f6 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/token.py @@ -0,0 +1,208 @@ +#!/usr/bin/python +# coding: utf-8 -*- + + +# (c) 2020, John Westcott IV <john.westcott.iv@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + +DOCUMENTATION = ''' +--- +module: token +author: "John Westcott IV (@john-westcott-iv)" +version_added: "2.3.0" +short_description: create, update, or destroy Automation Platform Controller tokens. +description: + - Create or destroy Automation Platform Controller tokens. See + U(https://www.ansible.com/tower) for an overview. + - In addition, the module sets an Ansible fact which can be passed into other + controller modules as the parameter controller_oauthtoken. See examples for usage. + - Because of the sensitive nature of tokens, the created token value is only available once + through the Ansible fact. (See RETURN for details) + - Due to the nature of tokens, this module is not idempotent. A second run + with the same parameters will create a new token. + - If you are creating a temporary token for use with modules, you should delete the token + when you are done with it. See the example for how to do it. +options: + description: + description: + - Optional description of this access token. + required: False + type: str + application: + description: + - The application tied to this token. + required: False + type: str + scope: + description: + - Allowed scopes, further restricting the user's permissions. Must be a simple space-separated string with allowed scopes ['read', 'write'].
+ required: False + type: str + choices: ["read", "write"] + existing_token: + description: The data structure produced from token in create mode to be used with state absent. + type: dict + existing_token_id: + description: A token ID (number) which can be used to delete an arbitrary token with state absent. + type: str + state: + description: + - Desired state of the resource. + choices: ["present", "absent"] + default: "present" + type: str +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- block: + - name: Create a new token using an existing token + token: + description: '{{ token_description }}' + scope: "write" + state: present + controller_oauthtoken: "{{ my_existing_token }}" + + - name: Delete this token + token: + existing_token: "{{ controller_token }}" + state: absent + + - name: Create a new token using username/password + token: + description: '{{ token_description }}' + scope: "write" + state: present + controller_username: "{{ my_username }}" + controller_password: "{{ my_password }}" + + - name: Use our new token to make another call + job_list: + controller_oauthtoken: "{{ controller_token }}" + + always: + - name: Delete our Token with the token we created + token: + existing_token: "{{ controller_token }}" + state: absent + when: token is defined + +- name: Delete a token by its id + token: + existing_token_id: 4 + state: absent +''' + +RETURN = ''' +controller_token: + type: dict + description: An Ansible Fact variable representing a token object which can be used for auth in subsequent modules. See examples for usage. + contains: + token: + description: The token that was generated. This token can never be accessed again, make sure this value is noted before it is lost. 
+ type: str + id: + description: The numeric ID of the token created + type: str + returned: on successful create +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def return_token(module, last_response): + # A token is special because you can never get the actual token ID back from the API. + # So the default module return would give you an ID but then the token would forever be masked on you. + # This method will return the entire token object we got back so that a user has access to the token + + module.json_output['ansible_facts'] = { + 'controller_token': last_response, + 'tower_token': last_response, + } + module.exit_json(**module.json_output) + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + description=dict(), + application=dict(), + scope=dict(choices=['read', 'write']), + existing_token=dict(type='dict', no_log=False), + existing_token_id=dict(), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule( + argument_spec=argument_spec, + mutually_exclusive=[ + ('existing_token', 'existing_token_id'), + ], + # If we are state absent make sure one of existing_token or existing_token_id are present + required_if=[ + [ + 'state', + 'absent', + ('existing_token', 'existing_token_id'), + True, + ], + ], + ) + + # Extract our parameters + description = module.params.get('description') + application = module.params.get('application') + scope = module.params.get('scope') + existing_token = module.params.get('existing_token') + existing_token_id = module.params.get('existing_token_id') + state = module.params.get('state') + + if state == 'absent': + if not existing_token: + existing_token = module.get_one( + 'tokens', + **{ + 'data': { + 'id': existing_token_id, + } + } + ) + + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + 
module.delete_if_needed(existing_token) + + # Attempt to look up the related items the user specified (these will fail the module if not found) + application_id = None + if application: + application_id = module.resolve_name_to_id('applications', application) + + # Create the data that gets sent for create and update + new_fields = {} + if description is not None: + new_fields['description'] = description + if application is not None: + new_fields['application'] = application_id + if scope is not None: + new_fields['scope'] = scope + + # If the state was present and we can let the module build or update the existing item, this will return on its own + module.create_or_update_if_needed( + None, + new_fields, + endpoint='tokens', + item_type='token', + associations={}, + on_create=return_token, + ) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/user.py b/ansible_collections/awx/awx/plugins/modules/user.py new file mode 100644 index 00000000..49a6f216 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/user.py @@ -0,0 +1,194 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2020, John Westcott IV <john.westcott.iv@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + + +DOCUMENTATION = ''' +--- +module: user +author: "John Westcott IV (@john-westcott-iv)" +short_description: create, update, or destroy Automation Platform Controller users. +description: + - Create, update, or destroy Automation Platform Controller users. See + U(https://www.ansible.com/tower) for an overview. +options: + username: + description: + - Required. 150 characters or fewer. Letters, digits and @/./+/-/_ only. 
+ required: True + type: str + new_username: + description: + - Setting this option will change the existing username (looked up via the username field). + type: str + first_name: + description: + - First name of the user. + type: str + last_name: + description: + - Last name of the user. + type: str + email: + description: + - Email address of the user. + type: str + organization: + description: + - The user will be created as a member of that organization (needed for organization admins to create new organization users). + type: str + is_superuser: + description: + - Designates that this user has all permissions without explicitly assigning them. + type: bool + aliases: ['superuser'] + is_system_auditor: + description: + - User is a system-wide auditor. + type: bool + aliases: ['auditor'] + password: + description: + - Write-only field used to change the password. + type: str + update_secrets: + description: + - C(true) will always change the password if the user specifies a password, even if the API gives $encrypted$ for password. + - C(false) will only set the password if other values change too. + type: bool + default: true + state: + description: + - Desired state of the resource.
+ choices: ["present", "absent"] + default: "present" + type: str +extends_documentation_fragment: awx.awx.auth +''' + + +EXAMPLES = ''' +- name: Add user + user: + username: jdoe + password: foobarbaz + email: jdoe@example.org + first_name: John + last_name: Doe + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Add user as a system administrator + user: + username: jdoe + password: foobarbaz + email: jdoe@example.org + superuser: yes + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Add user as a system auditor + user: + username: jdoe + password: foobarbaz + email: jdoe@example.org + auditor: yes + state: present + controller_config_file: "~/tower_cli.cfg" + +- name: Add user as a member of an organization (permissions on the organization are required) + user: + username: jdoe + password: foobarbaz + email: jdoe@example.org + organization: devopsorg + state: present + +- name: Delete user + user: + username: jdoe + email: jdoe@example.org + state: absent + controller_config_file: "~/tower_cli.cfg" +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + username=dict(required=True), + new_username=dict(), + first_name=dict(), + last_name=dict(), + email=dict(), + is_superuser=dict(type='bool', aliases=['superuser']), + is_system_auditor=dict(type='bool', aliases=['auditor']), + password=dict(no_log=True), + update_secrets=dict(type='bool', default=True, no_log=False), + organization=dict(), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + username = module.params.get('username') + new_username = module.params.get("new_username") + first_name = module.params.get('first_name') + last_name = module.params.get('last_name') + email = 
module.params.get('email') + is_superuser = module.params.get('is_superuser') + is_system_auditor = module.params.get('is_system_auditor') + password = module.params.get('password') + organization = module.params.get('organization') + state = module.params.get('state') + + # Attempt to look up the related items the user specified (these will fail the module if not found) + + # Attempt to look up an existing item based on the provided data + existing_item = module.get_one('users', name_or_id=username) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(existing_item) + + # Create the data that gets sent for create and update + new_fields = {} + if username is not None: + new_fields['username'] = new_username if new_username else (module.get_item_name(existing_item) if existing_item else username) + if first_name is not None: + new_fields['first_name'] = first_name + if last_name is not None: + new_fields['last_name'] = last_name + if email is not None: + new_fields['email'] = email + if is_superuser is not None: + new_fields['is_superuser'] = is_superuser + if is_system_auditor is not None: + new_fields['is_system_auditor'] = is_system_auditor + if password is not None: + new_fields['password'] = password + + if organization: + org_id = module.resolve_name_to_id('organizations', organization) + # If the state was present and we can let the module build or update the existing item, this will return on its own + module.create_or_update_if_needed(existing_item, new_fields, endpoint='organizations/{0}/users'.format(org_id), item_type='user') + else: + # If the state was present and we can let the module build or update the existing item, this will return on its own + module.create_or_update_if_needed(existing_item, new_fields, endpoint='users', item_type='user') + + +if __name__ == '__main__': + main() diff --git 
a/ansible_collections/awx/awx/plugins/modules/workflow_approval.py b/ansible_collections/awx/awx/plugins/modules/workflow_approval.py new file mode 100644 index 00000000..bb8c24df --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/workflow_approval.py @@ -0,0 +1,125 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2021, Sean Sullivan +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = { + "metadata_version": "1.1", + "status": ["preview"], + "supported_by": "community", +} + + +DOCUMENTATION = """ +--- +module: workflow_approval +author: "Sean Sullivan (@sean-m-sullivan)" +short_description: Approve an approval node in a workflow job. +description: + - Approve an approval node in a workflow job. See + U(https://www.ansible.com/tower) for an overview. +options: + workflow_job_id: + description: + - ID of the workflow job to monitor for approval. + required: True + type: int + name: + description: + - Name of the Approval node to approve or deny. + required: True + type: str + action: + description: + - Type of action to take. + choices: ["approve", "deny"] + default: "approve" + type: str + interval: + description: + - The interval, in seconds, at which to request an update from the controller. + required: False + default: 1 + type: float + timeout: + description: + - Maximum time in seconds to wait for a workflow job to reach the approval node.
+ default: 10 + type: int +extends_documentation_fragment: awx.awx.auth +""" + + +EXAMPLES = """ +- name: Launch a workflow with a timeout of 10 seconds + workflow_launch: + workflow_template: "Test Workflow" + wait: False + register: workflow + +- name: Wait for approval node to activate and approve + workflow_approval: + workflow_job_id: "{{ workflow.id }}" + name: Approve Me + interval: 10 + timeout: 20 + action: deny +""" + +RETURN = """ + +""" + + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + workflow_job_id=dict(type="int", required=True), + name=dict(required=True), + action=dict(choices=["approve", "deny"], default="approve"), + timeout=dict(type="int", default=10), + interval=dict(type="float", default=1), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + workflow_job_id = module.params.get("workflow_job_id") + name = module.params.get("name") + action = module.params.get("action") + timeout = module.params.get("timeout") + interval = module.params.get("interval") + + # Attempt to look up workflow job based on the provided id + approval_job = module.wait_on_workflow_node_url( + url="workflow_jobs/{0}/workflow_nodes/".format(workflow_job_id), + object_name=name, + object_type="Workflow Approval", + timeout=timeout, + interval=interval, + **{ + "data": { + "job__name": name, + } + } + ) + response = module.post_endpoint("{0}{1}".format(approval_job["related"]["job"], action)) + if response["status_code"] == 204: + module.json_output["changed"] = True + + # Attempt to look up jobs based on the status + module.exit_json(**module.json_output) + + +if __name__ == "__main__": + main() diff --git a/ansible_collections/awx/awx/plugins/modules/workflow_job_template.py b/ansible_collections/awx/awx/plugins/modules/workflow_job_template.py new file 
mode 100644 index 00000000..41c5b665 --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/workflow_job_template.py @@ -0,0 +1,980 @@ +#!/usr/bin/python +# coding: utf-8 -*- + + +# (c) 2020, John Westcott IV <john.westcott.iv@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + +DOCUMENTATION = ''' +--- +module: workflow_job_template +author: "John Westcott IV (@john-westcott-iv)" +short_description: create, update, or destroy Automation Platform Controller workflow job templates. +description: + - Create, update, or destroy Automation Platform Controller workflow job templates. + - Use workflow_job_template_node after this, or use the workflow_nodes parameter to build the workflow's graph. +options: + name: + description: + - Name of this workflow job template. + required: True + type: str + new_name: + description: + - Setting this option will change the existing name. + type: str + copy_from: + description: + - Name or id to copy the workflow job template from. + - This will copy an existing workflow job template and change any parameters supplied. + - The new workflow job template name will be the one provided in the name parameter. + - The organization parameter is not used in this, to facilitate copy from one organization to another. + - Provide the id or use the lookup plugin to provide the id if multiple workflow job templates share the same name. + type: str + description: + description: + - Optional description of this workflow job template. + type: str + extra_vars: + description: + - Variables which will be made available to jobs run inside the workflow. + type: dict + job_tags: + description: + - Comma separated list of the tags to use for the job template.
+ type: str + ask_tags_on_launch: + description: + - Prompt user for job tags on launch. + type: bool + aliases: + - ask_tags + organization: + description: + - Organization the workflow job template exists in. + - Used to help lookup the object, cannot be modified using this module. + - If not provided, will lookup by name only, which does not work with duplicates. + type: str + allow_simultaneous: + description: + - Allow simultaneous runs of the workflow job template. + type: bool + ask_variables_on_launch: + description: + - Prompt user for C(extra_vars) on launch. + type: bool + inventory: + description: + - Inventory applied as a prompt, assuming job template prompts for inventory + type: str + limit: + description: + - Limit applied as a prompt, assuming job template prompts for limit + type: str + scm_branch: + description: + - SCM branch applied as a prompt, assuming job template prompts for SCM branch + type: str + ask_inventory_on_launch: + description: + - Prompt user for inventory on launch of this workflow job template + type: bool + ask_scm_branch_on_launch: + description: + - Prompt user for SCM branch on launch of this workflow job template + type: bool + ask_limit_on_launch: + description: + - Prompt user for limit on launch of this workflow job template + type: bool + ask_labels_on_launch: + description: + - Prompt user for labels on launch. + type: bool + aliases: + - ask_labels + ask_skip_tags_on_launch: + description: + - Prompt user for job tags to skip on launch. + type: bool + aliases: + - ask_skip_tags + skip_tags: + description: + - Comma separated list of the tags to skip for the job template. 
+ type: str + webhook_service: + description: + - Service that webhook requests will be accepted from + type: str + choices: + - github + - gitlab + webhook_credential: + description: + - Personal Access Token for posting back the status to the service API + type: str + survey_enabled: + description: + - Setting this variable will enable the survey, prompting the user for its + answers on workflow launch. + type: bool + survey_spec: + description: + - The definition of the survey associated with the workflow. + type: dict + aliases: + - survey + labels: + description: + - The labels applied to this job template + - Must be created with the labels module first. This will error if the label has not been created. + type: list + elements: str + state: + description: + - Desired state of the resource. + choices: + - present + - absent + default: "present" + type: str + notification_templates_started: + description: + - list of notifications to send on start + type: list + elements: str + notification_templates_success: + description: + - list of notifications to send on success + type: list + elements: str + notification_templates_error: + description: + - list of notifications to send on error + type: list + elements: str + notification_templates_approvals: + description: + - list of notifications to send on approval + type: list + elements: str + workflow_nodes: + description: + - A JSON list of nodes and their corresponding options. The following suboptions describe a single node. + type: list + elements: dict + aliases: + - schema + suboptions: + extra_data: + description: + - Variables to apply at launch time. + - Will only be accepted if job template prompts for vars or has a survey asking for those vars.
+ type: dict + default: {} + inventory: + description: + - Inventory applied as a prompt, if job template prompts for inventory + type: str + scm_branch: + description: + - SCM branch applied as a prompt, if job template prompts for SCM branch + type: str + job_type: + description: + - Job type applied as a prompt, if job template prompts for job type + type: str + choices: + - 'run' + - 'check' + job_tags: + description: + - Job tags applied as a prompt, if job template prompts for job tags + type: str + skip_tags: + description: + - Tags to skip, applied as a prompt, if job template prompts for job tags + type: str + limit: + description: + - Limit to act on, applied as a prompt, if job template prompts for limit + type: str + forks: + description: + - The number of parallel or simultaneous processes to use while executing the playbook, if job template prompts for forks + type: int + job_slice_count: + description: + - The number of jobs to slice into at runtime, if job template prompts for job slices. + - Will cause the Job Template to launch a workflow if value is greater than 1. + type: int + default: '1' + timeout: + description: + - Maximum time in seconds to wait for a job to finish (server-side), if job template prompts for timeout. + type: int + execution_environment: + description: + - Name of Execution Environment to be applied to job as launch-time prompts. + type: dict + suboptions: + name: + description: + - Name of Execution Environment to be applied to job as launch-time prompts. + - Uniqueness is not handled rigorously.
+ type: str + diff_mode: + description: + - Run diff mode, applied as a prompt, if job template prompts for diff mode + type: bool + verbosity: + description: + - Verbosity applied as a prompt, if job template prompts for verbosity + type: str + choices: + - '0' + - '1' + - '2' + - '3' + - '4' + - '5' + all_parents_must_converge: + description: + - If enabled then the node will only run if all of the parent nodes have met the criteria to reach this node + type: bool + identifier: + description: + - An identifier for this node that is unique within its workflow. + - It is copied to workflow job nodes corresponding to this node. + required: True + type: str + state: + description: + - Desired state of the resource. + choices: ["present", "absent"] + default: "present" + type: str + unified_job_template: + description: + - Name of unified job template to run in the workflow. + - Can be a job template, project sync, inventory source sync, etc. + - Omit if creating an approval node (not yet implemented). + type: dict + suboptions: + organization: + description: + - Name of key for use in model for organizational reference + - Only Valid and used if referencing a job template or project sync + - This parameter is mutually exclusive with suboption C(inventory). + type: dict + suboptions: + name: + description: + - The organization of the job template or project sync the node exists in. + - Used for looking up the job template or project sync, not a direct model field. + type: str + inventory: + description: + - Name of key for use in model for organizational reference + - Only Valid and used if referencing an inventory sync + - This parameter is mutually exclusive with suboption C(organization). + type: dict + suboptions: + organization: + description: + - Name of key for use in model for organizational reference + type: dict + suboptions: + name: + description: + - The organization of the inventory the node exists in. 
+ - Used for looking up the job template or project, not a direct model field. + type: str + name: + description: + - Name of unified job template to run in the workflow. + - Can be a job template, project, inventory source, etc. + type: str + description: + description: + - Optional description of this workflow approval template. + type: str + type: + description: + - Name of unified job template type to run in the workflow. + - Can be a job_template, project, inventory_source, system_job_template, workflow_approval, workflow_job_template. + type: str + timeout: + description: + - The amount of time (in seconds) to wait before Approval is canceled. A value of 0 means no timeout. + - Only Valid and used if referencing an Approval Node + default: 0 + type: int + related: + description: + - Related items to this workflow node. + type: dict + suboptions: + always_nodes: + description: + - Nodes that will run after this node completes. + - List of node identifiers. + type: list + elements: dict + suboptions: + identifier: + description: + - Identifier of Node that will run after this node completes given this option. + type: str + success_nodes: + description: + - Nodes that will run after this node on success. + - List of node identifiers. + type: list + elements: dict + suboptions: + identifier: + description: + - Identifier of Node that will run after this node completes given this option. + type: str + failure_nodes: + description: + - Nodes that will run after this node on failure. + - List of node identifiers. + type: list + elements: dict + suboptions: + identifier: + description: + - Identifier of Node that will run after this node completes given this option. + type: str + credentials: + description: + - Credentials to be applied to job as launch-time prompts. + - List of credential names. + - Uniqueness is not handled rigorously. 
+ type: list + elements: dict + suboptions: + name: + description: + - Name of Credentials to be applied to job as launch-time prompts. + type: str + organization: + description: + - Name of key for use in model for organizational reference + type: dict + suboptions: + name: + description: + - The organization the credentials exist in. + type: str + labels: + description: + - Labels to be applied to job as launch-time prompts. + - List of Label names. + - Uniqueness is not handled rigorously. + type: list + elements: dict + suboptions: + name: + description: + - Name of Labels to be applied to job as launch-time prompts. + type: str + organization: + description: + - Name of key for use in model for organizational reference + type: dict + suboptions: + name: + description: + - The organization the label exists in. + type: str + instance_groups: + description: + - Instance groups to be applied to job as launch-time prompts. + - List of Instance group names. + - Uniqueness is not handled rigorously. + type: list + elements: dict + suboptions: + name: + description: + - Name of Instance groups to be applied to job as launch-time prompts. + type: str + destroy_current_nodes: + description: + - Set in order to destroy current workflow_nodes on the workflow. + - This option is used for a full workflow update; if not used, nodes not described in the workflow will persist and keep their current associations and links.
+ type: bool + default: False + aliases: + - destroy_current_schema + +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Create a workflow job template + workflow_job_template: + name: example-workflow + description: created by Ansible Playbook + organization: Default + +- name: Create a workflow job template with workflow nodes in template + awx.awx.workflow_job_template: + name: example-workflow + inventory: Demo Inventory + extra_vars: {'foo': 'bar', 'another-foo': {'barz': 'bar2'}} + workflow_nodes: + - identifier: node101 + unified_job_template: + name: example-project + inventory: + organization: + name: Default + type: inventory_source + related: + success_nodes: [] + failure_nodes: + - identifier: node201 + always_nodes: [] + credentials: [] + - identifier: node201 + unified_job_template: + organization: + name: Default + name: job template 1 + type: job_template + credentials: [] + related: + success_nodes: + - identifier: node301 + failure_nodes: [] + always_nodes: [] + credentials: [] + - identifier: node202 + unified_job_template: + organization: + name: Default + name: example-project + type: project + related: + success_nodes: [] + failure_nodes: [] + always_nodes: [] + credentials: [] + - identifier: node301 + all_parents_must_converge: false + unified_job_template: + organization: + name: Default + name: job template 2 + type: job_template + related: + success_nodes: [] + failure_nodes: [] + always_nodes: [] + credentials: [] + register: result + +- name: Copy a workflow job template + workflow_job_template: + name: copy-workflow + copy_from: example-workflow + organization: Foo + +- name: Create a workflow job template with workflow nodes in template + awx.awx.workflow_job_template: + name: example-workflow + inventory: Demo Inventory + extra_vars: {'foo': 'bar', 'another-foo': {'barz': 'bar2'}} + workflow_nodes: + - identifier: node101 + unified_job_template: + name: example-project + inventory: + organization: + name: 
Default + type: inventory_source + related: + success_nodes: [] + failure_nodes: + - identifier: node201 + always_nodes: [] + credentials: [] + - identifier: node201 + unified_job_template: + organization: + name: Default + name: job template 1 + type: job_template + credentials: [] + related: + success_nodes: + - identifier: node301 + failure_nodes: [] + always_nodes: [] + credentials: [] + - identifier: node202 + unified_job_template: + organization: + name: Default + name: example-project + type: project + related: + success_nodes: [] + failure_nodes: [] + always_nodes: [] + credentials: [] + - identifier: node301 + all_parents_must_converge: false + unified_job_template: + organization: + name: Default + name: job template 2 + type: job_template + execution_environment: + name: My EE + related: + credentials: + - name: cyberark + organization: + name: Default + instance_groups: + - name: SunCavanaugh Cloud + - name: default + labels: + - name: Custom Label + - name: Another Custom Label + organization: + name: Default + register: result + +''' + +from ..module_utils.controller_api import ControllerAPIModule + +import json + +response = [] + + +def update_survey(module, last_request): + spec_endpoint = last_request.get('related', {}).get('survey_spec') + if module.params.get('survey_spec') == {}: + response = module.delete_endpoint(spec_endpoint) + if response['status_code'] != 200: + # Not sure how to make this actually return a non-200 to test what to dump in the response + module.fail_json(msg="Failed to delete survey: {0}".format(response['json'])) + else: + response = module.post_endpoint(spec_endpoint, **{'data': module.params.get('survey_spec')}) + if response['status_code'] != 200: + module.fail_json(msg="Failed to update survey: {0}".format(response['json']['error'])) + + +def create_workflow_nodes(module, response, workflow_nodes, workflow_id): + for workflow_node in workflow_nodes: + workflow_node_fields = {} + search_fields = {}
association_fields = {} + + # Lookup Job Template ID + if workflow_node['unified_job_template']['name']: + if workflow_node['unified_job_template']['type'] is None: + module.fail_json(msg='Could not find unified job template type in workflow_nodes {0}'.format(workflow_node)) + search_fields['type'] = workflow_node['unified_job_template']['type'] + if workflow_node['unified_job_template']['type'] == 'inventory_source': + if 'inventory' in workflow_node['unified_job_template']: + if 'organization' in workflow_node['unified_job_template']['inventory']: + organization_id = module.resolve_name_to_id('organizations', workflow_node['unified_job_template']['inventory']['organization']['name']) + search_fields['organization'] = organization_id + else: + pass + elif 'organization' in workflow_node['unified_job_template']: + organization_id = module.resolve_name_to_id('organizations', workflow_node['unified_job_template']['organization']['name']) + search_fields['organization'] = organization_id + else: + pass + unified_job_template = module.get_one('unified_job_templates', name_or_id=workflow_node['unified_job_template']['name'], **{'data': search_fields}) + if unified_job_template: + workflow_node_fields['unified_job_template'] = unified_job_template['id'] + else: + if workflow_node['unified_job_template']['type'] != 'workflow_approval': + module.fail_json(msg="Unable to Find unified_job_template: {0}".format(search_fields)) + + inventory = workflow_node.get('inventory') + if inventory: + workflow_node_fields['inventory'] = module.resolve_name_to_id('inventories', inventory) + + # Lookup Values for other fields + + for field_name in ( + 'identifier', + 'extra_data', + 'scm_branch', + 'job_type', + 'job_tags', + 'skip_tags', + 'limit', + 'diff_mode', + 'verbosity', + 'forks', + 'job_slice_count', + 'timeout', + 'all_parents_must_converge', + 'state', + ): + field_val = workflow_node.get(field_name) + if field_val: + workflow_node_fields[field_name] = field_val + if 
workflow_node['identifier']: + search_fields = {'identifier': workflow_node['identifier']} + if 'execution_environment' in workflow_node: + workflow_node_fields['execution_environment'] = module.get_one( + 'execution_environments', name_or_id=workflow_node['execution_environment']['name'] + )['id'] + + # Set Search fields + search_fields['workflow_job_template'] = workflow_node_fields['workflow_job_template'] = workflow_id + + # Attempt to look up an existing item based on the provided data + existing_item = module.get_one('workflow_job_template_nodes', **{'data': search_fields}) + + # Determine if state is present or absent. + state = True + if 'state' in workflow_node: + if workflow_node['state'] == 'absent': + state = False + if state: + response.append( + module.create_or_update_if_needed( + existing_item, + workflow_node_fields, + endpoint='workflow_job_template_nodes', + item_type='workflow_job_template_node', + auto_exit=False, + ) + ) + else: + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + response.append( + module.delete_if_needed( + existing_item, + auto_exit=False, + ) + ) + + # Start Approval Node creation process + if workflow_node['unified_job_template']['type'] == 'workflow_approval': + for field_name in ( + 'name', + 'description', + 'timeout', + ): + field_val = workflow_node['unified_job_template'].get(field_name) + if field_val: + workflow_node_fields[field_name] = field_val + + # Attempt to look up an existing item just created + workflow_job_template_node = module.get_one('workflow_job_template_nodes', **{'data': search_fields}) + workflow_job_template_node_id = workflow_job_template_node['id'] + existing_item = None + # Since workflow_approval_templates cannot be looked up directly, find the existing item in another place + if workflow_job_template_node['related'].get('unified_job_template') is not None: + existing_item =
module.get_endpoint(workflow_job_template_node['related']['unified_job_template'])['json'] + approval_endpoint = 'workflow_job_template_nodes/{0}/create_approval_template/'.format(workflow_job_template_node_id) + + module.create_or_update_if_needed( + existing_item, + workflow_node_fields, + endpoint=approval_endpoint, + item_type='workflow_job_template_approval_node', + associations=association_fields, + auto_exit=False, + ) + + +def create_workflow_nodes_association(module, response, workflow_nodes, workflow_id): + for workflow_node in workflow_nodes: + workflow_node_fields = {} + search_fields = {} + association_fields = {} + + # Set Search fields + search_fields['workflow_job_template'] = workflow_node_fields['workflow_job_template'] = workflow_id + + # Lookup Values for other fields + if workflow_node['identifier']: + workflow_node_fields['identifier'] = workflow_node['identifier'] + search_fields['identifier'] = workflow_node['identifier'] + + # Attempt to look up an existing item based on the provided data + existing_item = module.get_one('workflow_job_template_nodes', **{'data': search_fields}) + + if 'state' in workflow_node: + if workflow_node['state'] == 'absent': + continue + + if 'related' in workflow_node: + # Get id's for association fields + association_fields = {} + + for association in ( + 'always_nodes', + 'success_nodes', + 'failure_nodes', + 'credentials', + 'labels', + 'instance_groups', + ): + # Extract out information if it exists + # Test if it is defined, else move to next association. 
+ prompt_lookup = ['credentials', 'labels', 'instance_groups'] + if association in workflow_node['related']: + id_list = [] + lookup_data = {} + for sub_name in workflow_node['related'][association]: + if association in prompt_lookup: + endpoint = association + if 'organization' in sub_name: + lookup_data['organization'] = module.resolve_name_to_id('organizations', sub_name['organization']['name']) + lookup_data['name'] = sub_name['name'] + else: + endpoint = 'workflow_job_template_nodes' + lookup_data = {'identifier': sub_name['identifier']} + lookup_data['workflow_job_template'] = workflow_id + sub_obj = module.get_one(endpoint, **{'data': lookup_data}) + if sub_obj is None: + module.fail_json(msg='Could not find {0} entry with name {1}'.format(association, sub_name)) + id_list.append(sub_obj['id']) + if id_list: + association_fields[association] = id_list + + module.create_or_update_if_needed( + existing_item, + workflow_node_fields, + endpoint='workflow_job_template_nodes', + item_type='workflow_job_template_node', + auto_exit=False, + associations=association_fields, + ) + + +def destroy_workflow_nodes(module, response, workflow_id): + search_fields = {} + + # Search for existing nodes. 
+ search_fields['workflow_job_template'] = workflow_id + existing_items = module.get_all_endpoint('workflow_job_template_nodes', **{'data': search_fields}) + + # Loop through found fields + for workflow_node in existing_items['json']['results']: + response.append(module.delete_endpoint(workflow_node['url'])) + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True), + new_name=dict(), + copy_from=dict(), + description=dict(), + extra_vars=dict(type='dict'), + job_tags=dict(), + skip_tags=dict(), + organization=dict(), + survey_spec=dict(type='dict', aliases=['survey']), + survey_enabled=dict(type='bool'), + allow_simultaneous=dict(type='bool'), + ask_variables_on_launch=dict(type='bool'), + ask_labels_on_launch=dict(type='bool', aliases=['ask_labels']), + ask_tags_on_launch=dict(type='bool', aliases=['ask_tags']), + ask_skip_tags_on_launch=dict(type='bool', aliases=['ask_skip_tags']), + inventory=dict(), + limit=dict(), + scm_branch=dict(), + ask_inventory_on_launch=dict(type='bool'), + ask_scm_branch_on_launch=dict(type='bool'), + ask_limit_on_launch=dict(type='bool'), + webhook_service=dict(choices=['github', 'gitlab']), + webhook_credential=dict(), + labels=dict(type="list", elements='str'), + notification_templates_started=dict(type="list", elements='str'), + notification_templates_success=dict(type="list", elements='str'), + notification_templates_error=dict(type="list", elements='str'), + notification_templates_approvals=dict(type="list", elements='str'), + workflow_nodes=dict(type='list', elements='dict', aliases=['schema']), + destroy_current_nodes=dict(type='bool', default=False, aliases=['destroy_current_schema']), + state=dict(choices=['present', 'absent'], default='present'), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + name = module.params.get('name') + new_name = 
module.params.get("new_name") + copy_from = module.params.get('copy_from') + state = module.params.get('state') + + # Extract schema parameters + workflow_nodes = None + if module.params.get('workflow_nodes'): + workflow_nodes = module.params.get('workflow_nodes') + destroy_current_nodes = module.params.get('destroy_current_nodes') + + new_fields = {} + search_fields = {} + + # Attempt to look up the related items the user specified (these will fail the module if not found) + organization = module.params.get('organization') + if organization: + organization_id = module.resolve_name_to_id('organizations', organization) + search_fields['organization'] = new_fields['organization'] = organization_id + + # Attempt to look up an existing item based on the provided data + existing_item = module.get_one('workflow_job_templates', name_or_id=name, **{'data': search_fields}) + + # Attempt to look up credential to copy based on the provided name + if copy_from: + # a new existing item is formed when copying and is returned. 
+ existing_item = module.copy_item( + existing_item, + copy_from, + name, + endpoint='workflow_job_templates', + item_type='workflow_job_template', + copy_lookup_data={}, + ) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(existing_item) + + inventory = module.params.get('inventory') + if inventory: + new_fields['inventory'] = module.resolve_name_to_id('inventories', inventory) + + webhook_credential = module.params.get('webhook_credential') + if webhook_credential: + new_fields['webhook_credential'] = module.resolve_name_to_id('credentials', webhook_credential) + + # Create the data that gets sent for create and update + new_fields['name'] = new_name if new_name else (module.get_item_name(existing_item) if existing_item else name) + for field_name in ( + 'description', + 'survey_enabled', + 'allow_simultaneous', + 'limit', + 'scm_branch', + 'extra_vars', + 'ask_inventory_on_launch', + 'ask_scm_branch_on_launch', + 'ask_limit_on_launch', + 'ask_variables_on_launch', + 'ask_labels_on_launch', + 'ask_tags_on_launch', + 'ask_skip_tags_on_launch', + 'webhook_service', + 'job_tags', + 'skip_tags', + ): + field_val = module.params.get(field_name) + if field_val is not None: + new_fields[field_name] = field_val + + if 'extra_vars' in new_fields: + new_fields['extra_vars'] = json.dumps(new_fields['extra_vars']) + + association_fields = {} + + notifications_start = module.params.get('notification_templates_started') + if notifications_start is not None: + association_fields['notification_templates_started'] = [] + for item in notifications_start: + association_fields['notification_templates_started'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_success = module.params.get('notification_templates_success') + if notifications_success is not None: + association_fields['notification_templates_success'] = [] + for item in 
notifications_success: + association_fields['notification_templates_success'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_error = module.params.get('notification_templates_error') + if notifications_error is not None: + association_fields['notification_templates_error'] = [] + for item in notifications_error: + association_fields['notification_templates_error'].append(module.resolve_name_to_id('notification_templates', item)) + + notifications_approval = module.params.get('notification_templates_approvals') + if notifications_approval is not None: + association_fields['notification_templates_approvals'] = [] + for item in notifications_approval: + association_fields['notification_templates_approvals'].append(module.resolve_name_to_id('notification_templates', item)) + + labels = module.params.get('labels') + if labels is not None: + association_fields['labels'] = [] + for item in labels: + label_id = module.get_one('labels', name_or_id=item, **{'data': search_fields}) + if label_id is None: + module.fail_json(msg='Could not find label entry with name {0}'.format(item)) + else: + association_fields['labels'].append(label_id['id']) + + on_change = None + new_spec = module.params.get('survey_spec') + if new_spec: + existing_spec = None + if existing_item: + spec_endpoint = existing_item.get('related', {}).get('survey_spec') + existing_spec = module.get_endpoint(spec_endpoint)['json'] + if new_spec != existing_spec: + module.json_output['changed'] = True + if existing_item and module.has_encrypted_values(existing_spec): + module._encrypted_changed_warning('survey_spec', existing_item, warning=True) + on_change = update_survey + + # If the state was present and we can let the module build or update the existing item, this will return on its own + module.create_or_update_if_needed( + existing_item, + new_fields, + endpoint='workflow_job_templates', + item_type='workflow_job_template', + associations=association_fields, + 
on_create=on_change, + on_update=on_change, + auto_exit=False, + ) + + # Get Workflow information in case one was just created. + existing_item = module.get_one('workflow_job_templates', name_or_id=new_name if new_name else name, **{'data': search_fields}) + workflow_job_template_id = existing_item['id'] + # Destroy current nodes if selected. + if destroy_current_nodes: + destroy_workflow_nodes(module, response, workflow_job_template_id) + + # Work through and look up values for schema fields + if workflow_nodes: + # Create Schema Nodes + create_workflow_nodes(module, response, workflow_nodes, workflow_job_template_id) + # Create Schema Associations + create_workflow_nodes_association(module, response, workflow_nodes, workflow_job_template_id) + module.json_output['node_creation_data'] = response + + module.exit_json(**module.json_output) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/workflow_job_template_node.py b/ansible_collections/awx/awx/plugins/modules/workflow_job_template_node.py new file mode 100644 index 00000000..a59cd33f --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/workflow_job_template_node.py @@ -0,0 +1,445 @@ +#!/usr/bin/python +# coding: utf-8 -*- + + +# (c) 2020, John Westcott IV <john.westcott.iv@redhat.com> +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + +DOCUMENTATION = ''' +--- +module: workflow_job_template_node +author: "John Westcott IV (@john-westcott-iv)" +short_description: create, update, or destroy Automation Platform Controller workflow job template nodes. +description: + - Create, update, or destroy Automation Platform Controller workflow job template nodes.
+ - Use this to build a graph for a workflow, which dictates what the workflow runs. + - You can create nodes first, and link them afterwards, and not worry about ordering. + For failsafe referencing of a node, specify identifier, WFJT, and organization. + With those specified, you can choose to modify or not modify any other parameter. +options: + extra_data: + description: + - Variables to apply at launch time. + - Will only be accepted if job template prompts for vars or has a survey asking for those vars. + type: dict + inventory: + description: + - Inventory applied as a prompt, if job template prompts for inventory + type: str + scm_branch: + description: + - SCM branch applied as a prompt, if job template prompts for SCM branch + type: str + job_type: + description: + - Job type applied as a prompt, if job template prompts for job type + type: str + choices: + - 'run' + - 'check' + job_tags: + description: + - Job tags applied as a prompt, if job template prompts for job tags + type: str + skip_tags: + description: + - Tags to skip, applied as a prompt, if job template prompts for job tags + type: str + limit: + description: + - Limit to act on, applied as a prompt, if job template prompts for limit + type: str + diff_mode: + description: + - Run diff mode, applied as a prompt, if job template prompts for diff mode + type: bool + verbosity: + description: + - Verbosity applied as a prompt, if job template prompts for verbosity + type: str + choices: + - '0' + - '1' + - '2' + - '3' + - '4' + - '5' + workflow_job_template: + description: + - The workflow job template the node exists in. + - Used for looking up the node, cannot be modified after creation. + required: True + type: str + aliases: + - workflow + organization: + description: + - The organization of the workflow job template the node exists in. + - Used for looking up the workflow, not a direct model field.
+ type: str + unified_job_template: + description: + - Name of unified job template to run in the workflow. + - Can be a job template, project, inventory source, etc. + - Omit if creating an approval node. + - This parameter is mutually exclusive with C(approval_node). + type: str + lookup_organization: + description: + - Organization of the inventory, job template, project, or inventory source that the unified_job_template exists in. + - If not provided, will look up by name only, which does not work with duplicates. + type: str + approval_node: + description: + - A dictionary of name, description, and timeout values for the approval node. + - This parameter is mutually exclusive with C(unified_job_template). + type: dict + suboptions: + name: + description: + - Name of this workflow approval template. + type: str + required: True + description: + description: + - Optional description of this workflow approval template. + type: str + timeout: + description: + - The amount of time (in seconds) before the approval node expires and fails. + type: int + all_parents_must_converge: + description: + - If enabled, the node will only run if all of the parent nodes have met the criteria to reach this node + type: bool + identifier: + description: + - An identifier for this node that is unique within its workflow. + - It is copied to workflow job nodes corresponding to this node. + required: True + type: str + always_nodes: + description: + - Nodes that will run after this node completes. + - List of node identifiers. + type: list + elements: str + success_nodes: + description: + - Nodes that will run after this node on success. + - List of node identifiers. + type: list + elements: str + failure_nodes: + description: + - Nodes that will run after this node on failure. + - List of node identifiers. + type: list + elements: str + credentials: + description: + - Credentials to be applied to job as launch-time prompts. + - List of credential names.
+ - Uniqueness is not handled rigorously. + type: list + elements: str + execution_environment: + description: + - Execution Environment applied as a prompt, assuming job template prompts for execution environment + type: str + forks: + description: + - Forks applied as a prompt, assuming job template prompts for forks + type: int + instance_groups: + description: + - List of Instance Groups applied as a prompt, assuming job template prompts for instance groups + type: list + elements: str + job_slice_count: + description: + - Job Slice Count applied as a prompt, assuming job template prompts for job slice count + type: int + labels: + description: + - List of labels applied as a prompt, assuming job template prompts for labels + type: list + elements: str + timeout: + description: + - Timeout applied as a prompt, assuming job template prompts for timeout + type: int + state: + description: + - Desired state of the resource. + choices: ["present", "absent"] + default: "present" + type: str +extends_documentation_fragment: awx.awx.auth +''' + +EXAMPLES = ''' +- name: Create a node, follows workflow_job_template example + workflow_job_template_node: + identifier: my-first-node + workflow: example-workflow + unified_job_template: jt-for-node-use + organization: Default # organization of workflow job template + extra_data: + foo_key: bar_value + +- name: Create parent node for prior node + workflow_job_template_node: + identifier: my-root-node + workflow: example-workflow + unified_job_template: jt-for-node-use + organization: Default + success_nodes: + - my-first-node + +- name: Create workflow with 2 Job Templates and an approval node in between + block: + - name: Create a workflow job template + tower_workflow_job_template: + name: my-workflow-job-template + ask_scm_branch_on_launch: true + organization: Default + + - name: Create 1st node + tower_workflow_job_template_node: + identifier: my-first-node + workflow_job_template: my-workflow-job-template + 
unified_job_template: some_job_template + organization: Default + + - name: Create 2nd approval node + tower_workflow_job_template_node: + identifier: my-second-approval-node + workflow_job_template: my-workflow-job-template + organization: Default + approval_node: + description: "Do this?" + name: my-second-approval-node + timeout: 3600 + + - name: Create 3rd node + tower_workflow_job_template_node: + identifier: my-third-node + workflow_job_template: my-workflow-job-template + unified_job_template: some_other_job_template + organization: Default + + - name: Link 1st node to 2nd Approval node + tower_workflow_job_template_node: + identifier: my-first-node + workflow_job_template: my-workflow-job-template + organization: Default + success_nodes: + - my-second-approval-node + + - name: Link 2nd Approval node to 3rd node + tower_workflow_job_template_node: + identifier: my-second-approval-node + workflow_job_template: my-workflow-job-template + organization: Default + success_nodes: + - my-third-node +''' + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + identifier=dict(required=True), + workflow_job_template=dict(required=True, aliases=['workflow']), + organization=dict(), + extra_data=dict(type='dict'), + inventory=dict(), + scm_branch=dict(), + job_type=dict(choices=['run', 'check']), + job_tags=dict(), + skip_tags=dict(), + limit=dict(), + diff_mode=dict(type='bool'), + verbosity=dict(choices=['0', '1', '2', '3', '4', '5']), + unified_job_template=dict(), + lookup_organization=dict(), + approval_node=dict(type='dict'), + all_parents_must_converge=dict(type='bool'), + success_nodes=dict(type='list', elements='str'), + always_nodes=dict(type='list', elements='str'), + failure_nodes=dict(type='list', elements='str'), + credentials=dict(type='list', elements='str'), + execution_environment=dict(type='str'), + forks=dict(type='int'), + 
instance_groups=dict(type='list', elements='str'), + job_slice_count=dict(type='int'), + labels=dict(type='list', elements='str'), + timeout=dict(type='int'), + state=dict(choices=['present', 'absent'], default='present'), + ) + mutually_exclusive = [("unified_job_template", "approval_node")] + required_if = [ + ['state', 'absent', ['identifier']], + ['state', 'present', ['identifier']], + ['state', 'present', ['unified_job_template', 'approval_node', 'success_nodes', 'always_nodes', 'failure_nodes'], True], + ] + + # Create a module for ourselves + module = ControllerAPIModule( + argument_spec=argument_spec, + mutually_exclusive=mutually_exclusive, + required_if=required_if, + ) + + # Extract our parameters + identifier = module.params.get('identifier') + state = module.params.get('state') + approval_node = module.params.get('approval_node') + new_fields = {} + lookup_organization = module.params.get('lookup_organization') + search_fields = {'identifier': identifier} + + # Attempt to look up the related items the user specified (these will fail the module if not found) + workflow_job_template = module.params.get('workflow_job_template') + workflow_job_template_id = None + if workflow_job_template: + wfjt_search_fields = {} + organization = module.params.get('organization') + if organization: + organization_id = module.resolve_name_to_id('organizations', organization) + wfjt_search_fields['organization'] = organization_id + wfjt_data = module.get_one('workflow_job_templates', name_or_id=workflow_job_template, **{'data': wfjt_search_fields}) + if wfjt_data is None: + module.fail_json( + msg="The workflow {0} in organization {1} was not found on the controller instance server".format(workflow_job_template, organization) + ) + workflow_job_template_id = wfjt_data['id'] + search_fields['workflow_job_template'] = new_fields['workflow_job_template'] = workflow_job_template_id + + # Attempt to look up an existing item based on the provided data + existing_item = 
module.get_one('workflow_job_template_nodes', **{'data': search_fields}) + + if state == 'absent': + # If the state was absent we can let the module delete it if needed, the module will handle exiting from this + module.delete_if_needed(existing_item) + + # Set lookup data to use + search_fields = {} + if lookup_organization: + search_fields['organization'] = module.resolve_name_to_id('organizations', lookup_organization) + + unified_job_template = module.params.get('unified_job_template') + if unified_job_template: + new_fields['unified_job_template'] = module.get_one('unified_job_templates', name_or_id=unified_job_template, **{'data': search_fields})['id'] + inventory = module.params.get('inventory') + if inventory: + new_fields['inventory'] = module.resolve_name_to_id('inventories', inventory) + + # Create the data that gets sent for create and update + for field_name in ( + 'identifier', + 'extra_data', + 'scm_branch', + 'job_type', + 'job_tags', + 'skip_tags', + 'limit', + 'diff_mode', + 'verbosity', + 'all_parents_must_converge', + 'forks', + 'job_slice_count', + 'timeout', + ): + field_val = module.params.get(field_name) + if field_val: + new_fields[field_name] = field_val + + association_fields = {} + for association in ('always_nodes', 'success_nodes', 'failure_nodes', 'credentials', 'instance_groups', 'labels'): + name_list = module.params.get(association) + if name_list is None: + continue + id_list = [] + for sub_name in name_list: + if association in ['credentials', 'instance_groups', 'labels']: + sub_obj = module.get_one(association, name_or_id=sub_name) + else: + endpoint = 'workflow_job_template_nodes' + lookup_data = {'identifier': sub_name} + if workflow_job_template_id: + lookup_data['workflow_job_template'] = workflow_job_template_id + sub_obj = module.get_one(endpoint, **{'data': lookup_data}) + if sub_obj is None: + module.fail_json(msg='Could not find {0} entry with name {1}'.format(association, sub_name)) + id_list.append(sub_obj['id']) + 
association_fields[association] = id_list + + execution_environment = module.params.get('execution_environment') + if execution_environment is not None: + if execution_environment == '': + new_fields['execution_environment'] = '' + else: + ee = module.get_one('execution_environments', name_or_id=execution_environment) + if ee is None: + module.fail_json(msg='could not find execution_environment entry with name {0}'.format(execution_environment)) + else: + new_fields['execution_environment'] = ee['id'] + + # In the case of a new object, the utils need to know it is a node + new_fields['type'] = 'workflow_job_template_node' + + # If the state was present, we let the module build or update the existing item; this will return on its own + module.create_or_update_if_needed( + existing_item, + new_fields, + endpoint='workflow_job_template_nodes', + item_type='workflow_job_template_node', + auto_exit=not approval_node, + associations=association_fields, + ) + + # Create approval node unified template or update existing + if approval_node: + # Set Approval Fields + new_fields = {} + + # Extract Parameters + if approval_node.get('name') is None: + module.fail_json(msg="Approval node name is required to create approval node.") + if approval_node.get('name') is not None: + new_fields['name'] = approval_node['name'] + if approval_node.get('description') is not None: + new_fields['description'] = approval_node['description'] + if approval_node.get('timeout') is not None: + new_fields['timeout'] = approval_node['timeout'] + + # Find created workflow node ID + search_fields = {'identifier': identifier} + search_fields['workflow_job_template'] = workflow_job_template_id + workflow_job_template_node = module.get_one('workflow_job_template_nodes', **{'data': search_fields}) + workflow_job_template_node_id = workflow_job_template_node['id'] + module.json_output['workflow_node_id'] = workflow_job_template_node_id + existing_item = None + # Since we cannot look up 
workflow_approval_templates, find the existing item in another place + if workflow_job_template_node['related'].get('unified_job_template') is not None: + existing_item = module.get_endpoint(workflow_job_template_node['related']['unified_job_template'])['json'] + approval_endpoint = 'workflow_job_template_nodes/{0}/create_approval_template/'.format(workflow_job_template_node_id) + module.create_or_update_if_needed( + existing_item, new_fields, endpoint=approval_endpoint, item_type='workflow_job_template_approval_node', associations=association_fields + ) + module.exit_json(**module.json_output) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/workflow_launch.py b/ansible_collections/awx/awx/plugins/modules/workflow_launch.py new file mode 100644 index 00000000..1613e4fa --- /dev/null +++ b/ansible_collections/awx/awx/plugins/modules/workflow_launch.py @@ -0,0 +1,188 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} + +DOCUMENTATION = ''' +--- +module: workflow_launch +author: "John Westcott IV (@john-westcott-iv)" +short_description: Run a workflow in Automation Platform Controller +description: + - Launch an Automation Platform Controller workflow. See + U(https://www.ansible.com/tower) for an overview. +options: + name: + description: + - The name of the workflow template to run. + required: True + type: str + aliases: + - workflow_template + organization: + description: + - Organization the workflow job template exists in. + - Used to help look up the object, cannot be modified using this module. + - If not provided, will look up by name only, which does not work with duplicates.
+ type: str + inventory: + description: + - Inventory to use for the job run with this workflow, only used if prompt for inventory is set. + type: str + limit: + description: + - Limit to use for the I(job_template). + type: str + scm_branch: + description: + - A specific branch of the SCM project to run the template on. + - This is only applicable if your project allows for branch override. + type: str + extra_vars: + description: + - Any extra vars required to launch the job. + type: dict + wait: + description: + - Wait for the workflow to complete. + default: True + type: bool + interval: + description: + - The interval to request an update from the controller. + required: False + default: 2 + type: float + timeout: + description: + - If waiting for the workflow to complete, this will abort after this + number of seconds + type: int +extends_documentation_fragment: awx.awx.auth +''' + +RETURN = ''' +job_info: + description: dictionary containing information about the workflow executed + returned: If workflow launched + type: dict +''' + + +EXAMPLES = ''' +- name: Launch a workflow with a timeout of 10 seconds + workflow_launch: + workflow_template: "Test Workflow" + timeout: 10 + +- name: Launch a Workflow with extra_vars without waiting + workflow_launch: + workflow_template: "Test workflow" + extra_vars: + var1: My First Variable + var2: My Second Variable + wait: False +''' + +from ..module_utils.controller_api import ControllerAPIModule +import json + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + name=dict(required=True, aliases=['workflow_template']), + organization=dict(), + inventory=dict(), + limit=dict(), + scm_branch=dict(), + extra_vars=dict(type='dict'), + wait=dict(required=False, default=True, type='bool'), + interval=dict(required=False, default=2.0, type='float'), + timeout=dict(required=False, type='int'), + ) + + # Create a module for ourselves + module = 
ControllerAPIModule(argument_spec=argument_spec) + + optional_args = {} + # Extract our parameters + name = module.params.get('name') + organization = module.params.get('organization') + inventory = module.params.get('inventory') + optional_args['limit'] = module.params.get('limit') + wait = module.params.get('wait') + interval = module.params.get('interval') + timeout = module.params.get('timeout') + + # Special treatment of extra_vars parameter + extra_vars = module.params.get('extra_vars') + if extra_vars is not None: + optional_args['extra_vars'] = json.dumps(extra_vars) + + # Create a datastructure to pass into our job launch + post_data = {} + for arg_name, arg_value in optional_args.items(): + if arg_value: + post_data[arg_name] = arg_value + + # Attempt to look up the related items the user specified (these will fail the module if not found) + if inventory: + post_data['inventory'] = module.resolve_name_to_id('inventories', inventory) + + # Attempt to look up job_template based on the provided name + lookup_data = {} + if organization: + lookup_data['organization'] = module.resolve_name_to_id('organizations', organization) + workflow_job_template = module.get_one('workflow_job_templates', name_or_id=name, data=lookup_data) + + if workflow_job_template is None: + module.fail_json(msg="Unable to find workflow job template") + + # The API will allow you to submit values to a job launch that are not prompt on launch. + # Therefore, we will test to see if anything is set which is not prompt on launch and fail.
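As an aside, the prompt-on-launch guard described in the comment above can be illustrated in isolation. The helper below is a hypothetical standalone sketch, not part of the module; the field names and `ask_*_on_launch` flags mirror the module's mapping, and the error wording mirrors the string the module appends:

```python
def find_disallowed_prompts(post_data, template):
    """Return an error string for each field in post_data that the
    workflow job template does not prompt for at launch."""
    check_vars_to_prompts = {
        'inventory': 'ask_inventory_on_launch',
        'limit': 'ask_limit_on_launch',
        'scm_branch': 'ask_scm_branch_on_launch',
    }
    return [
        "The field {0} was specified but the workflow job template "
        "does not allow for it to be overridden".format(field)
        for field, prompt_flag in check_vars_to_prompts.items()
        if field in post_data and not template.get(prompt_flag, False)
    ]


# A template that only prompts for limit rejects an inventory override:
template = {'ask_inventory_on_launch': False, 'ask_limit_on_launch': True, 'ask_scm_branch_on_launch': False}
errors = find_disallowed_prompts({'inventory': 42, 'limit': 'webservers'}, template)
print(errors)
```

Checking locally like this lets the module fail with all offending fields listed at once, instead of letting the API silently ignore values that were never prompt-enabled.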
+ check_vars_to_prompts = { + 'inventory': 'ask_inventory_on_launch', + 'limit': 'ask_limit_on_launch', + 'scm_branch': 'ask_scm_branch_on_launch', + } + + param_errors = [] + for variable_name, prompt in check_vars_to_prompts.items(): + if variable_name in post_data and not workflow_job_template[prompt]: + param_errors.append("The field {0} was specified but the workflow job template does not allow for it to be overridden".format(variable_name)) + # Check if either ask_variables_on_launch or survey_enabled is enabled for use of extra vars. + if module.params.get('extra_vars') and not (workflow_job_template['ask_variables_on_launch'] or workflow_job_template['survey_enabled']): + param_errors.append("The field extra_vars was specified but the workflow job template does not allow for it to be overridden") + if len(param_errors) > 0: + module.fail_json(msg="Parameters specified which cannot be passed into workflow job template, see errors for details", errors=param_errors) + + # Launch the job + result = module.post_endpoint(workflow_job_template['related']['launch'], data=post_data) + + if result['status_code'] != 201: + module.fail_json(msg="Failed to launch workflow, see response for details", response=result) + + module.json_output['changed'] = True + module.json_output['id'] = result['json']['id'] + module.json_output['status'] = result['json']['status'] + # This is for backwards compatibility + module.json_output['job_info'] = {'id': result['json']['id']} + + if not wait: + module.exit_json(**module.json_output) + + # Invoke wait function + module.wait_on_url(url=result['json']['url'], object_name=name, object_type='Workflow Job', timeout=timeout, interval=interval) + + module.exit_json(**module.json_output) + + +if __name__ == '__main__': + main() diff --git a/ansible_collections/awx/awx/plugins/modules/workflow_node_wait.py b/ansible_collections/awx/awx/plugins/modules/workflow_node_wait.py new file mode 100644 index 00000000..e8260290 --- /dev/null +++ 
b/ansible_collections/awx/awx/plugins/modules/workflow_node_wait.py @@ -0,0 +1,111 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2021, Sean Sullivan +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) + +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + + +ANSIBLE_METADATA = { + "metadata_version": "1.1", + "status": ["preview"], + "supported_by": "community", +} + + +DOCUMENTATION = """ +--- +module: workflow_node_wait +author: "Sean Sullivan (@sean-m-sullivan)" +short_description: Wait for a workflow node to finish. +description: + - Wait for a node in a workflow job to finish. See + U(https://www.ansible.com/tower) for an overview. +options: + workflow_job_id: + description: + - ID of the workflow job to monitor for the node. + required: True + type: int + name: + description: + - Name of the workflow node to wait on. + required: True + type: str + interval: + description: + - The interval, in seconds, to request an update from the controller. + required: False + default: 1 + type: float + timeout: + description: + - Maximum time in seconds to wait for a workflow job to reach approval node.
+ default: 10 + type: int +extends_documentation_fragment: awx.awx.auth +""" + + +EXAMPLES = """ +- name: Launch a workflow with a timeout of 10 seconds + workflow_launch: + workflow_template: "Test Workflow" + wait: False + register: workflow + +- name: Wait for a workflow node to finish + workflow_node_wait: + workflow_job_id: "{{ workflow.id }}" + name: Approval Data Step + timeout: 120 +""" + +RETURN = """ + +""" + + +from ..module_utils.controller_api import ControllerAPIModule + + +def main(): + # Any additional arguments that are not fields of the item can be added here + argument_spec = dict( + workflow_job_id=dict(type="int", required=True), + name=dict(required=True), + timeout=dict(type="int", default=10), + interval=dict(type="float", default=1), + ) + + # Create a module for ourselves + module = ControllerAPIModule(argument_spec=argument_spec) + + # Extract our parameters + workflow_job_id = module.params.get("workflow_job_id") + name = module.params.get("name") + timeout = module.params.get("timeout") + interval = module.params.get("interval") + + module.wait_on_workflow_node_url( + url="workflow_jobs/{0}/workflow_nodes/".format(workflow_job_id), + object_name=name, + object_type="Workflow Node", + timeout=timeout, + interval=interval, + **{ + "data": { + "job__name": name, + } + } + ) + + # Attempt to look up jobs based on the status + module.exit_json(**module.json_output) + + +if __name__ == "__main__": + main() diff --git a/ansible_collections/awx/awx/requirements.txt b/ansible_collections/awx/awx/requirements.txt new file mode 100644 index 00000000..4aa22000 --- /dev/null +++ b/ansible_collections/awx/awx/requirements.txt @@ -0,0 +1,3 @@ +pytz # for schedule_rrule lookup plugin +python-dateutil>=2.7.0 # schedule_rrule +awxkit # For import and export modules
\ No newline at end of file diff --git a/ansible_collections/awx/awx/test/awx/conftest.py b/ansible_collections/awx/awx/test/awx/conftest.py new file mode 100644 index 00000000..626f8593 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/conftest.py @@ -0,0 +1,290 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import io +import os +import json +import datetime +import importlib +from contextlib import redirect_stdout, suppress +from unittest import mock +import logging + +from requests.models import Response, PreparedRequest + +import pytest + +from ansible.module_utils.six import raise_from + +from awx.main.tests.functional.conftest import _request +from awx.main.models import Organization, Project, Inventory, JobTemplate, Credential, CredentialType, ExecutionEnvironment, UnifiedJob + +from django.db import transaction + +try: + import tower_cli # noqa + + HAS_TOWER_CLI = True +except ImportError: + HAS_TOWER_CLI = False + +try: + # Because awxkit will be a directory at the root of this makefile and we are using python3, import awxkit will work even if it's not installed. + # However, awxkit will not contain api, which causes a stack failure down on line 170 when we try to mock it. + # So here we are importing awxkit.api to prevent that. Then you only get an error on tests for awxkit functionality.
+ import awxkit.api # noqa + + HAS_AWX_KIT = True +except ImportError: + HAS_AWX_KIT = False + +logger = logging.getLogger('awx.main.tests') + + +def sanitize_dict(din): + """Sanitize Django response data to purge it of internal types + so it may be used to cast a requests response object + """ + if isinstance(din, (int, str, type(None), bool)): + return din # native JSON types, no problem + elif isinstance(din, datetime.datetime): + return din.isoformat() + elif isinstance(din, list): + for i in range(len(din)): + din[i] = sanitize_dict(din[i]) + return din + elif isinstance(din, dict): + for k in din.copy().keys(): + din[k] = sanitize_dict(din[k]) + return din + else: + return str(din) # translation proxies often not string but stringlike + + +@pytest.fixture(autouse=True) +def collection_path_set(monkeypatch): + """Monkey patch sys.path, insert the root of the collection folder + so that content can be imported without being fully packaged + """ + base_folder = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) + monkeypatch.syspath_prepend(base_folder) + + +@pytest.fixture +def collection_import(): + """These tests run assuming that the awx_collection folder is inserted + into the PATH before-hand by collection_path_set. + But all imports internally to the collection + go through this fixture so that can be changed if needed. + For instance, we could switch to fully-qualified import paths. 
+ """ + + def rf(path): + return importlib.import_module(path) + + return rf + + +@pytest.fixture +def run_module(request, collection_import): + def rf(module_name, module_params, request_user): + def new_request(self, method, url, **kwargs): + kwargs_copy = kwargs.copy() + if 'data' in kwargs: + if isinstance(kwargs['data'], dict): + kwargs_copy['data'] = kwargs['data'] + elif kwargs['data'] is None: + pass + elif isinstance(kwargs['data'], str): + kwargs_copy['data'] = json.loads(kwargs['data']) + else: + raise RuntimeError('Expected data to be dict or str, got {0}, data: {1}'.format(type(kwargs['data']), kwargs['data'])) + if 'params' in kwargs and method == 'GET': + # query params for GET are handled a bit differently by + # tower-cli and python requests as opposed to REST framework APIRequestFactory + if not kwargs_copy.get('data'): + kwargs_copy['data'] = {} + if isinstance(kwargs['params'], dict): + kwargs_copy['data'].update(kwargs['params']) + elif isinstance(kwargs['params'], list): + for k, v in kwargs['params']: + kwargs_copy['data'][k] = v + + # make request + with transaction.atomic(): + rf = _request(method.lower()) + django_response = rf(url, user=request_user, expect=None, **kwargs_copy) + + # requests library response object is different from the Django response, but they are the same concept + # this converts the Django response object into a requests response object for consumption + resp = Response() + py_data = django_response.data + sanitize_dict(py_data) + resp._content = bytes(json.dumps(django_response.data), encoding='utf8') + resp.status_code = django_response.status_code + resp.headers = {'X-API-Product-Name': 'AWX', 'X-API-Product-Version': '0.0.1-devel'} + + if request.config.getoption('verbose') > 0: + logger.info('%s %s by %s, code:%s', method, '/api/' + url.split('/api/')[1], request_user.username, resp.status_code) + + resp.request = PreparedRequest() + resp.request.prepare(method=method, url=url) + return resp + + def 
new_open(self, method, url, **kwargs): + r = new_request(self, method, url, **kwargs) + m = mock.MagicMock(read=mock.MagicMock(return_value=r._content), status=r.status_code, getheader=mock.MagicMock(side_effect=r.headers.get)) + return m + + stdout_buffer = io.StringIO() + # Requires a specific PYTHONPATH, see docs + # Note that a proper Ansiballz explosion of the modules will have an import path like: + # ansible_collections.awx.awx.plugins.modules.{} + # We should consider supporting that in the future + resource_module = collection_import('plugins.modules.{0}'.format(module_name)) + + if not isinstance(module_params, dict): + raise RuntimeError('Module params must be dict, got {0}'.format(type(module_params))) + + # Ansible params can be passed as an invocation argument or over stdin + # this short circuits within the AnsibleModule interface + def mock_load_params(self): + self.params = module_params + + if getattr(resource_module, 'ControllerAWXKitModule', None): + resource_class = resource_module.ControllerAWXKitModule + elif getattr(resource_module, 'ControllerAPIModule', None): + resource_class = resource_module.ControllerAPIModule + elif getattr(resource_module, 'TowerLegacyModule', None): + resource_class = resource_module.TowerLegacyModule + else: + raise RuntimeError("The module has neither a TowerLegacyModule, ControllerAWXKitModule, nor a ControllerAPIModule") + + with mock.patch.object(resource_class, '_load_params', new=mock_load_params): + # Call the test utility (like a mock server) instead of issuing HTTP requests + with mock.patch('ansible.module_utils.urls.Request.open', new=new_open): + if HAS_TOWER_CLI: + tower_cli_mgr = mock.patch('tower_cli.api.Session.request', new=new_request) + elif HAS_AWX_KIT: + tower_cli_mgr = mock.patch('awxkit.api.client.requests.Session.request', new=new_request) + else: + tower_cli_mgr = suppress() + with tower_cli_mgr: + try: + # Ansible modules return data to the mothership over stdout + with 
redirect_stdout(stdout_buffer): + resource_module.main() + except SystemExit: + pass # A system exit indicates successful execution + except Exception: + # dump the stdout back to console for debugging + print(stdout_buffer.getvalue()) + raise + + module_stdout = stdout_buffer.getvalue().strip() + try: + result = json.loads(module_stdout) + except Exception as e: + raise_from(Exception('Module did not write valid JSON, error: {0}, stdout:\n{1}'.format(str(e), module_stdout)), e) + # A module exception should never be a test expectation + if 'exception' in result: + if "ModuleNotFoundError: No module named 'tower_cli'" in result['exception']: + pytest.skip('The tower-cli library is needed to run this test, module no longer supported.') + raise Exception('Module encountered error:\n{0}'.format(result['exception'])) + return result + + return rf + + +@pytest.fixture +def survey_spec(): + return { + "spec": [{"index": 0, "question_name": "my question?", "default": "mydef", "variable": "myvar", "type": "text", "required": False}], + "description": "test", + "name": "test", + } + + +@pytest.fixture +def organization(): + return Organization.objects.create(name='Default') + + +@pytest.fixture +def project(organization): + return Project.objects.create( + name="test-proj", + description="test-proj-desc", + organization=organization, + playbook_files=['helloworld.yml'], + local_path='_92__test_proj', + scm_revision='1234567890123456789012345678901234567890', + scm_url='localhost', + scm_type='git', + ) + + +@pytest.fixture +def inventory(organization): + return Inventory.objects.create(name='test-inv', organization=organization) + + +@pytest.fixture +def job_template(project, inventory): + return JobTemplate.objects.create(name='test-jt', project=project, inventory=inventory, playbook='helloworld.yml') + + +@pytest.fixture +def machine_credential(organization): + ssh_type = CredentialType.defaults['ssh']() + ssh_type.save() + return 
Credential.objects.create(credential_type=ssh_type, name='machine-cred', inputs={'username': 'test_user', 'password': 'pas4word'}) + + +@pytest.fixture +def vault_credential(organization): + ct = CredentialType.defaults['vault']() + ct.save() + return Credential.objects.create(credential_type=ct, name='vault-cred', inputs={'vault_id': 'foo', 'vault_password': 'pas4word'}) + + +@pytest.fixture +def kube_credential(): + ct = CredentialType.defaults['kubernetes_bearer_token']() + ct.save() + return Credential.objects.create( + credential_type=ct, name='kube-cred', inputs={'host': 'my.cluster', 'bearer_token': 'my-token', 'verify_ssl': False} + ) + + +@pytest.fixture +def silence_deprecation(): + """The deprecation warnings are stored in a global variable + they will create cross-test interference. Use this to turn them off. + """ + with mock.patch('ansible.module_utils.basic.AnsibleModule.deprecate') as this_mock: + yield this_mock + + +@pytest.fixture(autouse=True) +def silence_warning(): + """Warnings use global variable, same as deprecations.""" + with mock.patch('ansible.module_utils.basic.AnsibleModule.warn') as this_mock: + yield this_mock + + +@pytest.fixture +def execution_environment(): + return ExecutionEnvironment.objects.create(name="test-ee", description="test-ee", managed=False) + + +@pytest.fixture(scope='session', autouse=True) +def mock_has_unpartitioned_events(): + # has_unpartitioned_events determines if there are any events still + # left in the old, unpartitioned job events table. In order to work, + # this method looks up when the partition migration occurred. When + # Django's unit tests run, however, there will be no record of the migration. + # We mock this out to circumvent the migration query. 
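The class-attribute patching technique that the fixture comment above relies on can be sketched on its own. This is an illustrative standalone example; `UnifiedJobStub` and `read_flag` are invented stand-ins for the real `UnifiedJob` model and its callers:

```python
from unittest import mock


class UnifiedJobStub:
    # Stand-in for the real model class; illustrative only.
    has_unpartitioned_events = True


def read_flag():
    # Any code that reads the class attribute sees the patched value.
    return UnifiedJobStub.has_unpartitioned_events


# Inside the patch block the attribute is replaced wholesale, so none of
# the logic behind the real attribute (e.g. a migration lookup) ever runs.
with mock.patch.object(UnifiedJobStub, 'has_unpartitioned_events', new=False):
    assert read_flag() is False

# On exiting the block the original value is restored automatically.
assert read_flag() is True
```

Using `new=False` rather than a `MagicMock` keeps the attribute a plain boolean, which is what code performing truthiness checks on it expects.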
+ with mock.patch.object(UnifiedJob, 'has_unpartitioned_events', new=False) as _fixture: + yield _fixture diff --git a/ansible_collections/awx/awx/test/awx/test_ad_hoc_wait.py b/ansible_collections/awx/awx/test/awx/test_ad_hoc_wait.py new file mode 100644 index 00000000..a0ff55fe --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_ad_hoc_wait.py @@ -0,0 +1,39 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest +from django.utils.timezone import now + +from awx.main.models.ad_hoc_commands import AdHocCommand + + +@pytest.mark.django_db +def test_ad_hoc_command_wait_successful(run_module, admin_user): + command = AdHocCommand.objects.create(status='successful', started=now(), finished=now()) + result = run_module('ad_hoc_command_wait', dict(command_id=command.id), admin_user) + result.pop('invocation', None) + result['elapsed'] = float(result['elapsed']) + assert result.pop('finished', '')[:10] == str(command.finished)[:10] + assert result.pop('started', '')[:10] == str(command.started)[:10] + assert result.pop('status', "successful"), result + assert result.get('changed') is False + + +@pytest.mark.django_db +def test_ad_hoc_command_wait_failed(run_module, admin_user): + command = AdHocCommand.objects.create(status='failed', started=now(), finished=now()) + result = run_module('ad_hoc_command_wait', dict(command_id=command.id), admin_user) + result.pop('invocation', None) + result['elapsed'] = float(result['elapsed']) + assert result.pop('finished', '')[:10] == str(command.finished)[:10] + assert result.pop('started', '')[:10] == str(command.started)[:10] + assert result.get('changed') is False + assert result.pop('status', "failed"), result + + +@pytest.mark.django_db +def test_ad_hoc_command_wait_not_found(run_module, admin_user): + result = run_module('ad_hoc_command_wait', dict(command_id=42), admin_user) + result.pop('invocation', None) + assert result == {"failed": True, "msg": "Unable to wait 
on ad hoc command 42; that ID does not exist."} diff --git a/ansible_collections/awx/awx/test/awx/test_application.py b/ansible_collections/awx/awx/test/awx/test_application.py new file mode 100644 index 00000000..c93e2f33 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_application.py @@ -0,0 +1,29 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import Organization +from awx.main.models.oauth import OAuth2Application + + +@pytest.mark.django_db +def test_create_application(run_module, admin_user): + org = Organization.objects.create(name='foo') + + module_args = { + 'name': 'foo_app', + 'description': 'barfoo', + 'state': 'present', + 'authorization_grant_type': 'password', + 'client_type': 'public', + 'organization': 'foo', + } + + result = run_module('application', module_args, admin_user) + assert result.get('changed'), result + + application = OAuth2Application.objects.get(name='foo_app') + assert application.description == 'barfoo' + assert application.organization_id == org.id diff --git a/ansible_collections/awx/awx/test/awx/test_completeness.py b/ansible_collections/awx/awx/test/awx/test_completeness.py new file mode 100644 index 00000000..43e225e4 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_completeness.py @@ -0,0 +1,361 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +from awx.main.tests.functional.conftest import _request +from ansible.module_utils.six import string_types +import yaml +import os +import re +import glob + +# Analysis variables +# ----------------------------------------------------------------------------------------------------------- + +# Read-only endpoints are dynamically created by an options page with no POST section. +# Normally a read-only endpoint should not have a module (i.e. 
/api/v2/me) but sometimes we reuse a name +# For example, we have a role module but /api/v2/roles is a read-only endpoint. +# This list indicates which read-only endpoints have associated modules with them. +read_only_endpoints_with_modules = ['settings', 'role', 'project_update'] + +# If a module should not be created for an endpoint and the endpoint is not read-only, add it here +# THINK HARD ABOUT DOING THIS +no_module_for_endpoint = [] + +# Some modules work on the related fields of an endpoint. These modules will not have an auto-associated endpoint +no_endpoint_for_module = [ + 'import', + 'controller_meta', + 'export', + 'inventory_source_update', + 'job_launch', + 'job_wait', + 'job_list', + 'license', + 'ping', + 'receive', + 'send', + 'workflow_launch', + 'workflow_node_wait', + 'job_cancel', + 'workflow_template', + 'ad_hoc_command_wait', + 'ad_hoc_command_cancel', + 'subscriptions', # Subscription deals with config/subscriptions +] + +# Global module parameters we can ignore +ignore_parameters = ['state', 'new_name', 'update_secrets', 'copy_from'] + +# Some modules take additional parameters that do not appear in the API +# Add the module name as the key with the value being the list of params to ignore +no_api_parameter_ok = { + # The wait is for whether or not to wait for a project update on change + 'project': ['wait', 'interval', 'update_project'], + # existing_token and existing_token_id are for working with an existing token + 'token': ['existing_token', 'existing_token_id'], + # /survey spec is now how we handle associations + # We take an organization here to help with the lookups only + 'job_template': ['survey_spec', 'organization'], + 'inventory_source': ['organization'], + # Organization is how we are looking up job templates, Approval node is for workflow_approval_templates, + # lookup_organization is for specifying the organization for the unified job template lookup + 'workflow_job_template_node': ['organization', 'approval_node',
'lookup_organization'], + # Survey is how we handle associations + 'workflow_job_template': ['survey_spec', 'destroy_current_nodes'], + # organization is how we look up unified job templates + 'schedule': ['organization'], + # ad hoc commands support interval and timeout since it's more like job_launch + 'ad_hoc_command': ['interval', 'timeout', 'wait'], + # group parameters to preserve hosts and children. + 'group': ['preserve_existing_children', 'preserve_existing_hosts'], + # new_username parameter to rename a user and organization allows for org admin user creation + 'user': ['new_username', 'organization'], + # workflow_approval parameters that do not apply when approving an approval node. + 'workflow_approval': ['action', 'interval', 'timeout', 'workflow_job_id'], +} + +# When this tool was created, we were not feature complete. Adding something in here indicates a module +# that needs to be developed. If the module is found on the file system it will auto-detect that the +# work is being done and will bypass this check. At some point the module should be removed from this list.
+needs_development = ['inventory_script', 'instance'] +needs_param_development = { + 'host': ['instance_id'], + 'workflow_approval': ['description', 'execution_environment'], +} +# ----------------------------------------------------------------------------------------------------------- + +return_value = 0 +read_only_endpoint = [] + + +def cause_error(msg): + global return_value + return_value = 255 + return msg + + +def test_meta_runtime(): + base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) + meta_filename = 'meta/runtime.yml' + module_dir = 'plugins/modules' + + print("\nMeta check:") + + with open('{0}/{1}'.format(base_dir, meta_filename), 'r') as f: + meta_data_string = f.read() + + meta_data = yaml.load(meta_data_string, Loader=yaml.Loader) + + needs_grouping = [] + for file_name in glob.glob('{0}/{1}/*'.format(base_dir, module_dir)): + if not os.path.isfile(file_name) or os.path.islink(file_name): + continue + with open(file_name, 'r') as f: + if 'extends_documentation_fragment: awx.awx.auth' in f.read(): + needs_grouping.append(os.path.splitext(os.path.basename(file_name))[0]) + + needs_to_be_removed = list(set(meta_data['action_groups']['controller']) - set(needs_grouping)) + needs_to_be_added = list(set(needs_grouping) - set(meta_data['action_groups']['controller'])) + + needs_to_be_removed.sort() + needs_to_be_added.sort() + + group = 'action-groups.controller' + if needs_to_be_removed: + print(cause_error("The following items should be removed from the {0} {1}:\n {2}".format(meta_filename, group, '\n '.join(needs_to_be_removed)))) + + if needs_to_be_added: + print(cause_error("The following items should be added to the {0} {1}:\n {2}".format(meta_filename, group, '\n '.join(needs_to_be_added)))) + + +def determine_state(module_id, endpoint, module, parameter, api_option, module_option): + # This is a hierarchical list of things that are ok/failures based on conditions + + # If we know this module needs 
development this is a non-blocking failure + if module_id in needs_development and module == 'N/A': + return "Failed (non-blocking), module needs development" + + # If the endpoint is read-only: + # If it has no module on disk that is ok. + # If it has a module on disk but it's listed in read_only_endpoints_with_modules that is ok + # Else we have a module for a read-only endpoint that should not exist + if module_id in read_only_endpoint: + if module == 'N/A': + # There may be some cases where a read-only endpoint has a module + return "OK, this endpoint is read-only and should not have a module" + elif module_id in read_only_endpoints_with_modules: + return "OK, module params cannot be checked for a read-only endpoint" + else: + return cause_error("Failed, read-only endpoint should not have an associated module") + + # If the endpoint is listed as not needing a module and we don't have one we are ok + if module_id in no_module_for_endpoint and module == 'N/A': + return "OK, this endpoint should not have a module" + + # If module is listed as not needing an endpoint and we don't have one we are ok + if module_id in no_endpoint_for_module and endpoint == 'N/A': + return "OK, this module does not require an endpoint" + + # All of the endpoint/module conditionals are done so if we don't have a module or endpoint we have a problem + if module == 'N/A': + return cause_error('Failed, missing module') + if endpoint == 'N/A': + return cause_error('Failed, why does this module have no endpoint') + + # Now perform parameter checks + + # First, if the parameter is in the ignore_parameters list we are ok + if parameter in ignore_parameters: + return "OK, globally ignored parameter" + + # If exactly one of the api option and the module option is None + if (api_option is None) ^ (module_option is None): + # If the API option is None and the parameter is in the no_api_parameter_ok list we are ok + if api_option is None and parameter in no_api_parameter_ok.get(module,
{}): + return 'OK, no api parameter is ok' + # If we know this parameter needs development and we don't have a module option we are non-blocking + if module_option is None and parameter in needs_param_development.get(module_id, {}): + return "Failed (non-blocking), parameter needs development" + # Check for 'deprecated' in the module option's description; if it's deprecated and has no api option we are ok, otherwise we have a problem + if module_option and module_option.get('description'): + description = '' + if isinstance(module_option.get('description'), string_types): + description = module_option.get('description') + else: + description = " ".join(module_option.get('description')) + + if 'deprecated' in description.lower(): + if api_option is None: + return 'OK, deprecated module option' + else: + return cause_error('Failed, module marks option as deprecated but option still exists in API') + # If we don't have a corresponding API option but we are a list then we are likely a relation + if not api_option and module_option and module_option.get('type', 'str') == 'list': + return "OK, Field appears to be relation" + # TODO, at some point try and check the object model to confirm it's actually a relation + + return cause_error('Failed, option mismatch') + + # We made it through all of the checks so we are ok + return 'OK' + + +def test_completeness(collection_import, request, admin_user, job_template, execution_environment): + option_comparison = {} + # Load a list of existing module files from disk + base_folder = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) + module_directory = os.path.join(base_folder, 'plugins', 'modules') + for root, dirs, files in os.walk(module_directory): + if root == module_directory: + for filename in files: + if os.path.islink(os.path.join(root, filename)): + continue + # must begin with a letter a-z, and end in .py + if re.match(r'^[a-z].*\.py$', filename): + module_name = filename[:-3] + option_comparison[module_name] = { +
'endpoint': 'N/A', + 'api_options': {}, + 'module_options': {}, + 'module_name': module_name, + } + resource_module = collection_import('plugins.modules.{0}'.format(module_name)) + option_comparison[module_name]['module_options'] = yaml.load(resource_module.DOCUMENTATION, Loader=yaml.SafeLoader)['options'] + + endpoint_response = _request('get')( + url='/api/v2/', + user=admin_user, + expect=None, + ) + for endpoint in endpoint_response.data.keys(): + # Module names are singular and endpoints are plural so we need to convert to singular + singular_endpoint = '{0}'.format(endpoint) + if singular_endpoint.endswith('ies'): + singular_endpoint = singular_endpoint[:-3] + if singular_endpoint != 'settings' and singular_endpoint.endswith('s'): + singular_endpoint = singular_endpoint[:-1] + module_name = '{0}'.format(singular_endpoint) + + endpoint_url = endpoint_response.data.get(endpoint) + + # If we don't have a module for this endpoint then we can create an empty one + if module_name not in option_comparison: + option_comparison[module_name] = {} + option_comparison[module_name]['module_name'] = 'N/A' + option_comparison[module_name]['module_options'] = {} + + # Add in our endpoint and an empty api_options + option_comparison[module_name]['endpoint'] = endpoint_url + option_comparison[module_name]['api_options'] = {} + + # Get out the endpoint, load and parse its options page + options_response = _request('options')( + url=endpoint_url, + user=admin_user, + expect=None, + ) + if 'POST' in options_response.data.get('actions', {}): + option_comparison[module_name]['api_options'] = options_response.data.get('actions').get('POST') + else: + read_only_endpoint.append(module_name) + + # Parse through our data to get string lengths to make a pretty report + longest_module_name = 0 + longest_option_name = 0 + longest_endpoint = 0 + for module, module_value in option_comparison.items(): + if len(module_value['module_name']) > longest_module_name: + longest_module_name = 
len(module_value['module_name']) + if len(module_value['endpoint']) > longest_endpoint: + longest_endpoint = len(module_value['endpoint']) + for option in module_value['api_options'], module_value['module_options']: + if len(option) > longest_option_name: + longest_option_name = len(option) + + # Print out some headers + print( + "".join( + [ + "End Point", + " " * (longest_endpoint - len("End Point")), + " | Module Name", + " " * (longest_module_name - len("Module Name")), + " | Option", + " " * (longest_option_name - len("Option")), + " | API | Module | State", + ] + ) + ) + print( + "-|-".join( + [ + "-" * longest_endpoint, + "-" * longest_module_name, + "-" * longest_option_name, + "---", + "------", + "---------------------------------------------", + ] + ) + ) + + # Print out all of our data + for module in sorted(option_comparison): + module_data = option_comparison[module] + all_param_names = list(set(module_data['api_options']) | set(module_data['module_options'])) + for parameter in sorted(all_param_names): + print( + "".join( + [ + module_data['endpoint'], + " " * (longest_endpoint - len(module_data['endpoint'])), + " | ", + module_data['module_name'], + " " * (longest_module_name - len(module_data['module_name'])), + " | ", + parameter, + " " * (longest_option_name - len(parameter)), + " | ", + " X " if (parameter in module_data['api_options']) else ' ', + " | ", + ' X ' if (parameter in module_data['module_options']) else ' ', + " | ", + determine_state( + module, + module_data['endpoint'], + module_data['module_name'], + parameter, + module_data['api_options'][parameter] if (parameter in module_data['api_options']) else None, + module_data['module_options'][parameter] if (parameter in module_data['module_options']) else None, + ), + ] + ) + ) + # This handles cases where we got no params from the options page or the modules + if len(all_param_names) == 0: + print( + "".join( + [ + module_data['endpoint'], + " " * (longest_endpoint -
len(module_data['endpoint'])), + " | ", + module_data['module_name'], + " " * (longest_module_name - len(module_data['module_name'])), + " | ", + "N/A", + " " * (longest_option_name - len("N/A")), + " | ", + ' ', + " | ", + ' ', + " | ", + determine_state(module, module_data['endpoint'], module_data['module_name'], 'N/A', None, None), + ] + ) + ) + + test_meta_runtime() + + if return_value != 0: + raise Exception("One or more failures caused issues") diff --git a/ansible_collections/awx/awx/test/awx/test_credential.py b/ansible_collections/awx/awx/test/awx/test_credential.py new file mode 100644 index 00000000..f12594be --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_credential.py @@ -0,0 +1,134 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import Credential, CredentialType, Organization + + +@pytest.fixture +def cred_type(): + # Make a credential type which will be used by the credential + ct = CredentialType.objects.create( + name='Ansible Galaxy Token', + inputs={"fields": [{"id": "token", "type": "string", "secret": True, "label": "Ansible Galaxy Secret Token Value"}], "required": ["token"]}, + injectors={ + "extra_vars": { + "galaxy_token": "{{token}}", + } + }, + ) + return ct + + +@pytest.mark.django_db +def test_create_machine_credential(run_module, admin_user, organization): + Organization.objects.create(name='test-org') + # create the ssh credential type + ct = CredentialType.defaults['ssh']() + ct.save() + # Example from docs + result = run_module( + 'credential', + dict(name='Test Machine Credential', + organization=organization.name, + credential_type='Machine', + state='present'), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + cred = Credential.objects.get(name='Test Machine Credential') + assert cred.credential_type == ct + + assert result['name'] == "Test Machine 
Credential" + assert result['id'] == cred.pk + + +@pytest.mark.django_db +def test_create_vault_credential(run_module, admin_user, organization): + # https://github.com/ansible/ansible/issues/61324 + Organization.objects.create(name='test-org') + ct = CredentialType.defaults['vault']() + ct.save() + + result = run_module( + 'credential', + dict(name='Test Vault Credential', + organization=organization.name, + credential_type='Vault', + inputs={'vault_id': 'bar', 'vault_password': 'foobar'}, + state='present'), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + cred = Credential.objects.get(name='Test Vault Credential') + assert cred.credential_type == ct + assert 'vault_id' in cred.inputs + assert 'vault_password' in cred.inputs + + assert result['name'] == "Test Vault Credential" + assert result['id'] == cred.pk + + +@pytest.mark.django_db +def test_missing_credential_type(run_module, admin_user, organization): + Organization.objects.create(name='test-org') + result = run_module('credential', dict(name='A credential', organization=organization.name, credential_type='foobar', state='present'), admin_user) + assert result.get('failed', False), result + assert 'credential_type' in result['msg'] + assert 'foobar' in result['msg'] + assert 'returned 0 items, expected 1' in result['msg'] + + +@pytest.mark.django_db +def test_make_use_of_custom_credential_type(run_module, organization, admin_user, cred_type): + result = run_module( + 'credential', + dict(name='Galaxy Token for Steve', organization=organization.name, credential_type=cred_type.name, inputs={'token': '7rEZK38DJl58A7RxA6EC7lLvUHbBQ1'}), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', False), result + + cred = Credential.objects.get(name='Galaxy Token for Steve') + assert cred.credential_type_id == cred_type.id + assert list(cred.inputs.keys()) == ['token'] + 
assert cred.inputs['token'].startswith('$encrypted$') + assert len(cred.inputs['token']) >= len('$encrypted$') + len('7rEZK38DJl58A7RxA6EC7lLvUHbBQ1') + + assert result['name'] == "Galaxy Token for Steve" + assert result['id'] == cred.pk + + +@pytest.mark.django_db +@pytest.mark.parametrize('update_secrets', [True, False]) +def test_secret_field_write_twice(run_module, organization, admin_user, cred_type, update_secrets): + val1 = '7rEZK38DJl58A7RxA6EC7lLvUHbBQ1' + val2 = '7rEZ238DJl5837rxA6xxxlLvUHbBQ1' + for val in (val1, val2): + result = run_module( + 'credential', + dict( + name='Galaxy Token for Steve', + organization=organization.name, + credential_type=cred_type.name, + inputs={'token': val}, + update_secrets=update_secrets, + ), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + + if update_secrets: + assert Credential.objects.get(id=result['id']).get_input('token') == val + + if update_secrets: + assert result.get('changed'), result + else: + assert result.get('changed') is False, result + assert Credential.objects.get(id=result['id']).get_input('token') == val1 diff --git a/ansible_collections/awx/awx/test/awx/test_credential_input_source.py b/ansible_collections/awx/awx/test/awx/test_credential_input_source.py new file mode 100644 index 00000000..5978939a --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_credential_input_source.py @@ -0,0 +1,360 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import CredentialInputSource, Credential, CredentialType + + +@pytest.fixture +def aim_cred_type(): + ct = CredentialType.defaults['aim']() + ct.save() + return ct + + +# Test CyberArk AIM credential source +@pytest.fixture +def source_cred_aim(aim_cred_type): + return Credential.objects.create( + name='CyberArk AIM Cred', credential_type=aim_cred_type, inputs={"url": "https://cyberark.example.com", "app_id": "myAppID", "verify": 
"false"} + ) + + +@pytest.mark.django_db +def test_aim_credential_source(run_module, admin_user, organization, source_cred_aim, silence_deprecation): + ct = CredentialType.defaults['ssh']() + ct.save() + tgt_cred = Credential.objects.create(name='Test Machine Credential', organization=organization, credential_type=ct, inputs={'username': 'bob'}) + + result = run_module( + 'credential_input_source', + dict( + source_credential=source_cred_aim.name, + target_credential=tgt_cred.name, + input_field_name='password', + metadata={"object_query": "Safe=SUPERSAFE;Object=MyAccount"}, + state='present', + ), + admin_user, + ) + + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + assert CredentialInputSource.objects.count() == 1 + cis = CredentialInputSource.objects.first() + + assert cis.metadata['object_query'] == "Safe=SUPERSAFE;Object=MyAccount" + assert cis.source_credential.name == source_cred_aim.name + assert cis.target_credential.name == tgt_cred.name + assert cis.input_field_name == 'password' + assert result['id'] == cis.pk + + +# Test CyberArk Conjur credential source +@pytest.fixture +def source_cred_conjur(organization): + # Make a credential type which will be used by the credential + ct = CredentialType.defaults['conjur']() + ct.save() + return Credential.objects.create( + name='CyberArk CONJUR Cred', + credential_type=ct, + inputs={"url": "https://cyberark.example.com", "api_key": "myApiKey", "account": "account", "username": "username"}, + ) + + +@pytest.mark.django_db +def test_conjur_credential_source(run_module, admin_user, organization, source_cred_conjur, silence_deprecation): + ct = CredentialType.defaults['ssh']() + ct.save() + tgt_cred = Credential.objects.create(name='Test Machine Credential', organization=organization, credential_type=ct, inputs={'username': 'bob'}) + + result = run_module( + 'credential_input_source', + dict( + source_credential=source_cred_conjur.name, + 
target_credential=tgt_cred.name, + input_field_name='password', + metadata={"secret_path": "/path/to/secret"}, + state='present', + ), + admin_user, + ) + + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + assert CredentialInputSource.objects.count() == 1 + cis = CredentialInputSource.objects.first() + + assert cis.metadata['secret_path'] == "/path/to/secret" + assert cis.source_credential.name == source_cred_conjur.name + assert cis.target_credential.name == tgt_cred.name + assert cis.input_field_name == 'password' + assert result['id'] == cis.pk + + +# Test Hashicorp Vault secret credential source +@pytest.fixture +def source_cred_hashi_secret(organization): + # Make a credential type which will be used by the credential + ct = CredentialType.defaults['hashivault_kv']() + ct.save() + return Credential.objects.create( + name='HashiCorp secret Cred', + credential_type=ct, + inputs={ + "url": "https://secret.hash.example.com", + "token": "myApiKey", + "role_id": "role", + "secret_id": "secret", + "default_auth_path": "path-to-approle", + }, + ) + + +@pytest.mark.django_db +def test_hashi_secret_credential_source(run_module, admin_user, organization, source_cred_hashi_secret, silence_deprecation): + ct = CredentialType.defaults['ssh']() + ct.save() + tgt_cred = Credential.objects.create(name='Test Machine Credential', organization=organization, credential_type=ct, inputs={'username': 'bob'}) + + result = run_module( + 'credential_input_source', + dict( + source_credential=source_cred_hashi_secret.name, + target_credential=tgt_cred.name, + input_field_name='password', + metadata={"secret_path": "/path/to/secret", "auth_path": "/path/to/auth", "secret_backend": "backend", "secret_key": "a_key"}, + state='present', + ), + admin_user, + ) + + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + assert CredentialInputSource.objects.count() == 1 + cis = 
CredentialInputSource.objects.first() + + assert cis.metadata['secret_path'] == "/path/to/secret" + assert cis.metadata['auth_path'] == "/path/to/auth" + assert cis.metadata['secret_backend'] == "backend" + assert cis.metadata['secret_key'] == "a_key" + assert cis.source_credential.name == source_cred_hashi_secret.name + assert cis.target_credential.name == tgt_cred.name + assert cis.input_field_name == 'password' + assert result['id'] == cis.pk + + +# Test Hashicorp Vault signed ssh credential source +@pytest.fixture +def source_cred_hashi_ssh(organization): + # Make a credential type which will be used by the credential + ct = CredentialType.defaults['hashivault_ssh']() + ct.save() + return Credential.objects.create( + name='HashiCorp ssh Cred', + credential_type=ct, + inputs={"url": "https://ssh.hash.example.com", "token": "myApiKey", "role_id": "role", "secret_id": "secret"}, + ) + + +@pytest.mark.django_db +def test_hashi_ssh_credential_source(run_module, admin_user, organization, source_cred_hashi_ssh, silence_deprecation): + ct = CredentialType.defaults['ssh']() + ct.save() + tgt_cred = Credential.objects.create(name='Test Machine Credential', organization=organization, credential_type=ct, inputs={'username': 'bob'}) + + result = run_module( + 'credential_input_source', + dict( + source_credential=source_cred_hashi_ssh.name, + target_credential=tgt_cred.name, + input_field_name='password', + metadata={"secret_path": "/path/to/secret", "auth_path": "/path/to/auth", "role": "role", "public_key": "a_key", "valid_principals": "some_value"}, + state='present', + ), + admin_user, + ) + + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + assert CredentialInputSource.objects.count() == 1 + cis = CredentialInputSource.objects.first() + + assert cis.metadata['secret_path'] == "/path/to/secret" + assert cis.metadata['auth_path'] == "/path/to/auth" + assert cis.metadata['role'] == "role" + assert 
cis.metadata['public_key'] == "a_key" + assert cis.metadata['valid_principals'] == "some_value" + assert cis.source_credential.name == source_cred_hashi_ssh.name + assert cis.target_credential.name == tgt_cred.name + assert cis.input_field_name == 'password' + assert result['id'] == cis.pk + + +# Test Azure Key Vault credential source +@pytest.fixture +def source_cred_azure_kv(organization): + # Make a credential type which will be used by the credential + ct = CredentialType.defaults['azure_kv']() + ct.save() + return Credential.objects.create( + name='Azure KV Cred', + credential_type=ct, + inputs={ + "url": "https://key.azure.example.com", + "client": "client", + "secret": "secret", + "tenant": "tenant", + "cloud_name": "the_cloud", + }, + ) + + +@pytest.mark.django_db +def test_azure_kv_credential_source(run_module, admin_user, organization, source_cred_azure_kv, silence_deprecation): + ct = CredentialType.defaults['ssh']() + ct.save() + tgt_cred = Credential.objects.create(name='Test Machine Credential', organization=organization, credential_type=ct, inputs={'username': 'bob'}) + + result = run_module( + 'credential_input_source', + dict( + source_credential=source_cred_azure_kv.name, + target_credential=tgt_cred.name, + input_field_name='password', + metadata={"secret_field": "my_pass"}, + state='present', + ), + admin_user, + ) + + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + assert CredentialInputSource.objects.count() == 1 + cis = CredentialInputSource.objects.first() + + assert cis.metadata['secret_field'] == "my_pass" + assert cis.source_credential.name == source_cred_azure_kv.name + assert cis.target_credential.name == tgt_cred.name + assert cis.input_field_name == 'password' + assert result['id'] == cis.pk + + +# Test Changing Credential Source +@pytest.fixture +def source_cred_aim_alt(aim_cred_type): + return Credential.objects.create( + name='Alternate CyberArk AIM Cred', + 
credential_type=aim_cred_type, + inputs={"url": "https://cyberark-alt.example.com", "app_id": "myAltID", "verify": "false"}, + ) + + +@pytest.mark.django_db +def test_aim_credential_source_change_source(run_module, admin_user, organization, source_cred_aim, source_cred_aim_alt, silence_deprecation): + ct = CredentialType.defaults['ssh']() + ct.save() + tgt_cred = Credential.objects.create(name='Test Machine Credential', organization=organization, credential_type=ct, inputs={'username': 'bob'}) + + result = run_module( + 'credential_input_source', + dict( + source_credential=source_cred_aim.name, + target_credential=tgt_cred.name, + input_field_name='password', + metadata={"object_query": "Safe=SUPERSAFE;Object=MyAccount"}, + state='present', + ), + admin_user, + ) + + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + unchangedResult = run_module( + 'credential_input_source', + dict( + source_credential=source_cred_aim.name, + target_credential=tgt_cred.name, + input_field_name='password', + metadata={"object_query": "Safe=SUPERSAFE;Object=MyAccount"}, + state='present', + ), + admin_user, + ) + + assert not unchangedResult.get('failed', False), result.get('msg', result) + assert not unchangedResult.get('changed'), result + + changedResult = run_module( + 'credential_input_source', + dict(source_credential=source_cred_aim_alt.name, target_credential=tgt_cred.name, input_field_name='password', state='present'), + admin_user, + ) + + assert not changedResult.get('failed', False), changedResult.get('msg', result) + assert changedResult.get('changed'), result + + assert CredentialInputSource.objects.count() == 1 + cis = CredentialInputSource.objects.first() + + assert cis.metadata['object_query'] == "Safe=SUPERSAFE;Object=MyAccount" + assert cis.source_credential.name == source_cred_aim_alt.name + assert cis.target_credential.name == tgt_cred.name + assert cis.input_field_name == 'password' + + +# Test Centrify 
Vault secret credential source +@pytest.fixture +def source_cred_centrify_secret(organization): + # Make a credential type which will be used by the credential + ct = CredentialType.defaults['centrify_vault_kv']() + ct.save() + return Credential.objects.create( + name='Centrify vault secret Cred', + credential_type=ct, + inputs={ + "url": "https://tenant_id.my.centrify-dev.net", + "client_id": "secretuser@tenant", + "client_password": "secretuserpassword", + }, + ) + + +@pytest.mark.django_db +def test_centrify_vault_credential_source(run_module, admin_user, organization, source_cred_centrify_secret, silence_deprecation): + ct = CredentialType.defaults['ssh']() + ct.save() + tgt_cred = Credential.objects.create(name='Test Machine Credential', organization=organization, credential_type=ct, inputs={'username': 'bob'}) + + result = run_module( + 'credential_input_source', + dict( + source_credential=source_cred_centrify_secret.name, + target_credential=tgt_cred.name, + input_field_name='password', + metadata={"system-name": "systemname", "account-name": "accountname"}, + state='present', + ), + admin_user, + ) + + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + assert CredentialInputSource.objects.count() == 1 + cis = CredentialInputSource.objects.first() + + assert cis.metadata['system-name'] == "systemname" + assert cis.metadata['account-name'] == "accountname" + assert cis.source_credential.name == source_cred_centrify_secret.name + assert cis.target_credential.name == tgt_cred.name + assert cis.input_field_name == 'password' + assert result['id'] == cis.pk diff --git a/ansible_collections/awx/awx/test/awx/test_credential_type.py b/ansible_collections/awx/awx/test/awx/test_credential_type.py new file mode 100644 index 00000000..4a0f25aa --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_credential_type.py @@ -0,0 +1,62 @@ +from __future__ import absolute_import, division, print_function + 
+__metaclass__ = type + +import pytest + +from awx.main.models import CredentialType + + +@pytest.mark.django_db +def test_create_custom_credential_type(run_module, admin_user, silence_deprecation): + # Example from docs + result = run_module( + 'credential_type', + dict( + name='Nexus', + description='Credentials type for Nexus', + kind='cloud', + inputs={"fields": [{"id": "server", "type": "string", "default": "", "label": ""}], "required": []}, + injectors={'extra_vars': {'nexus_credential': 'test'}}, + state='present', + ), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + ct = CredentialType.objects.get(name='Nexus') + + assert result['name'] == 'Nexus' + assert result['id'] == ct.pk + + assert ct.inputs == {"fields": [{"id": "server", "type": "string", "default": "", "label": ""}], "required": []} + assert ct.injectors == {'extra_vars': {'nexus_credential': 'test'}} + + +@pytest.mark.django_db +def test_changed_false_with_api_changes(run_module, admin_user): + result = run_module( + 'credential_type', + dict( + name='foo', + kind='cloud', + inputs={"fields": [{"id": "env_value", "label": "foo", "default": "foo"}]}, + injectors={'env': {'TEST_ENV_VAR': '{{ env_value }}'}}, + ), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + result = run_module( + 'credential_type', + dict( + name='foo', + inputs={"fields": [{"id": "env_value", "label": "foo", "default": "foo"}]}, + injectors={'env': {'TEST_ENV_VAR': '{{ env_value }}'}}, + ), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert not result.get('changed'), result diff --git a/ansible_collections/awx/awx/test/awx/test_group.py b/ansible_collections/awx/awx/test/awx/test_group.py new file mode 100644 index 00000000..aae7af57 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_group.py @@ -0,0 +1,96 @@ 
+from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import Organization, Inventory, Group, Host + + +@pytest.mark.django_db +def test_create_group(run_module, admin_user): + org = Organization.objects.create(name='test-org') + inv = Inventory.objects.create(name='test-inv', organization=org) + variables = {"ansible_network_os": "iosxr"} + + result = run_module('group', dict(name='Test Group', inventory='test-inv', variables=variables, state='present'), admin_user) + assert result.get('changed'), result + + group = Group.objects.get(name='Test Group') + assert group.inventory == inv + assert group.variables == '{"ansible_network_os": "iosxr"}' + + result.pop('invocation') + assert result == { + 'id': group.id, + 'name': 'Test Group', + 'changed': True, + } + + +@pytest.mark.django_db +def test_associate_hosts_and_children(run_module, admin_user, organization): + inv = Inventory.objects.create(name='test-inv', organization=organization) + group = Group.objects.create(name='Test Group', inventory=inv) + + inv_hosts = [Host.objects.create(inventory=inv, name='foo{0}'.format(i)) for i in range(3)] + group.hosts.add(inv_hosts[0], inv_hosts[1]) + + child = Group.objects.create(inventory=inv, name='child_group') + + result = run_module( + 'group', + dict(name='Test Group', inventory='test-inv', hosts=[inv_hosts[1].name, inv_hosts[2].name], children=[child.name], state='present'), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result['changed'] is True + + assert set(group.hosts.all()) == set([inv_hosts[1], inv_hosts[2]]) + assert set(group.children.all()) == set([child]) + + +@pytest.mark.django_db +def test_associate_on_create(run_module, admin_user, organization): + inv = Inventory.objects.create(name='test-inv', organization=organization) + child = Group.objects.create(name='test-child', inventory=inv) + host = 
Host.objects.create(name='test-host', inventory=inv) + + result = run_module('group', dict(name='Test Group', inventory='test-inv', hosts=[host.name], groups=[child.name], state='present'), admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result['changed'] is True + + group = Group.objects.get(pk=result['id']) + assert set(group.hosts.all()) == set([host]) + assert set(group.children.all()) == set([child]) + + +@pytest.mark.django_db +def test_children_alias_of_groups(run_module, admin_user, organization): + inv = Inventory.objects.create(name='test-inv', organization=organization) + group = Group.objects.create(name='Test Group', inventory=inv) + child = Group.objects.create(inventory=inv, name='child_group') + result = run_module('group', dict(name='Test Group', inventory='test-inv', groups=[child.name], state='present'), admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result['changed'] is True + + assert set(group.children.all()) == set([child]) + + +@pytest.mark.django_db +def test_group_idempotent(run_module, admin_user): + # https://github.com/ansible/ansible/issues/46803 + org = Organization.objects.create(name='test-org') + inv = Inventory.objects.create(name='test-inv', organization=org) + group = Group.objects.create( + name='Test Group', + inventory=inv, + ) + + result = run_module('group', dict(name='Test Group', inventory='test-inv', state='present'), admin_user) + + result.pop('invocation') + assert result == { + 'id': group.id, + 'changed': False, # idempotency assertion + } diff --git a/ansible_collections/awx/awx/test/awx/test_instance_group.py b/ansible_collections/awx/awx/test/awx/test_instance_group.py new file mode 100644 index 00000000..91dc174c --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_instance_group.py @@ -0,0 +1,59 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from 
awx.main.models import InstanceGroup, Instance + + +@pytest.mark.django_db +def test_instance_group_create(run_module, admin_user): + result = run_module( + 'instance_group', {'name': 'foo-group', 'policy_instance_percentage': 34, 'policy_instance_minimum': 12, 'state': 'present'}, admin_user + ) + assert not result.get('failed', False), result + assert result['changed'] + + ig = InstanceGroup.objects.get(name='foo-group') + assert ig.policy_instance_percentage == 34 + assert ig.policy_instance_minimum == 12 + + # Create a new instance in the DB + new_instance = Instance.objects.create(hostname='foo.example.com') + + # Set the instance group to contain only the one new instance + result = run_module('instance_group', {'name': 'foo-group', 'instances': [new_instance.hostname], 'state': 'present'}, admin_user) + assert not result.get('failed', False), result + assert result['changed'] + + ig = InstanceGroup.objects.get(name='foo-group') + all_instance_names = [] + for instance in ig.instances.all(): + all_instance_names.append(instance.hostname) + + assert new_instance.hostname in all_instance_names, 'Failed to add instance to group' + assert len(all_instance_names) == 1, 'Too many instances in group {0}'.format(','.join(all_instance_names)) + + +@pytest.mark.django_db +def test_container_group_create(run_module, admin_user, kube_credential): + pod_spec = "{ 'Nothing': True }" + + result = run_module('instance_group', {'name': 'foo-c-group', 'credential': kube_credential.id, 'is_container_group': True, 'state': 'present'}, admin_user) + assert not result.get('failed', False), result['msg'] + assert result['changed'] + + ig = InstanceGroup.objects.get(name='foo-c-group') + assert ig.pod_spec_override == '' + + result = run_module( + 'instance_group', + {'name': 'foo-c-group', 'credential': kube_credential.id, 'is_container_group': True, 'pod_spec_override': pod_spec, 'state': 'present'}, + admin_user, + ) + assert not result.get('failed', False), result['msg'] + assert 
result['changed'] + + ig = InstanceGroup.objects.get(name='foo-c-group') + assert ig.pod_spec_override == pod_spec diff --git a/ansible_collections/awx/awx/test/awx/test_inventory.py b/ansible_collections/awx/awx/test/awx/test_inventory.py new file mode 100644 index 00000000..37ec99b7 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_inventory.py @@ -0,0 +1,60 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import Inventory + + +@pytest.mark.django_db +def test_inventory_create(run_module, admin_user, organization): + # Create an insights credential + + result = run_module( + 'inventory', + { + 'name': 'foo-inventory', + 'organization': organization.name, + 'variables': {'foo': 'bar', 'another-foo': {'barz': 'bar2'}}, + 'state': 'present', + }, + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + + inv = Inventory.objects.get(name='foo-inventory') + assert inv.variables == '{"foo": "bar", "another-foo": {"barz": "bar2"}}' + + result.pop('module_args', None) + result.pop('invocation', None) + assert result == {"name": "foo-inventory", "id": inv.id, "changed": True} + + assert inv.organization_id == organization.id + + +@pytest.mark.django_db +def test_invalid_smart_inventory_create(run_module, admin_user, organization): + result = run_module( + 'inventory', + {'name': 'foo-inventory', 'organization': organization.name, 'kind': 'smart', 'host_filter': 'ansible', 'state': 'present'}, + admin_user, + ) + assert result.get('failed', False), result + + assert 'Invalid query ansible' in result['msg'] + + +@pytest.mark.django_db +def test_valid_smart_inventory_create(run_module, admin_user, organization): + result = run_module( + 'inventory', + {'name': 'foo-inventory', 'organization': organization.name, 'kind': 'smart', 'host_filter': 'name=my_host', 'state': 'present'}, + admin_user, + ) + assert not result.get('failed', False), result + + 
inv = Inventory.objects.get(name='foo-inventory') + assert inv.host_filter == 'name=my_host' + assert inv.kind == 'smart' + assert inv.organization_id == organization.id diff --git a/ansible_collections/awx/awx/test/awx/test_inventory_source.py b/ansible_collections/awx/awx/test/awx/test_inventory_source.py new file mode 100644 index 00000000..bebd3fc0 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_inventory_source.py @@ -0,0 +1,171 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import Organization, Inventory, InventorySource, Project + + +@pytest.fixture +def base_inventory(): + org = Organization.objects.create(name='test-org') + inv = Inventory.objects.create(name='test-inv', organization=org) + return inv + + +@pytest.fixture +def project(base_inventory): + return Project.objects.create( + name='test-proj', + organization=base_inventory.organization, + scm_type='git', + scm_url='https://github.com/ansible/test-playbooks.git', + ) + + +@pytest.mark.django_db +def test_inventory_source_create(run_module, admin_user, base_inventory, project): + source_path = '/var/lib/awx/example_source_path/' + result = run_module( + 'inventory_source', + dict(name='foo', inventory=base_inventory.name, state='present', source='scm', source_path=source_path, source_project=project.name), + admin_user, + ) + assert result.pop('changed', None), result + + inv_src = InventorySource.objects.get(name='foo') + assert inv_src.inventory == base_inventory + result.pop('invocation') + assert result == { + 'id': inv_src.id, + 'name': 'foo', + } + + +@pytest.mark.django_db +def test_create_inventory_source_implied_org(run_module, admin_user): + org = Organization.objects.create(name='test-org') + inv = Inventory.objects.create(name='test-inv', organization=org) + + # Credential is not required for ec2 source, because of IAM roles + result = run_module('inventory_source', dict(name='Test 
Inventory Source', inventory='test-inv', source='ec2', state='present'), admin_user) + assert result.pop('changed', None), result + + inv_src = InventorySource.objects.get(name='Test Inventory Source') + assert inv_src.inventory == inv + + result.pop('invocation') + assert result == { + "name": "Test Inventory Source", + "id": inv_src.id, + } + + +@pytest.mark.django_db +def test_create_inventory_source_multiple_orgs(run_module, admin_user): + org = Organization.objects.create(name='test-org') + Inventory.objects.create(name='test-inv', organization=org) + + # make another inventory by same name in another org + org2 = Organization.objects.create(name='test-org-number-two') + inv2 = Inventory.objects.create(name='test-inv', organization=org2) + + result = run_module( + 'inventory_source', + dict(name='Test Inventory Source', inventory=inv2.name, organization='test-org-number-two', source='ec2', state='present'), + admin_user, + ) + assert result.pop('changed', None), result + + inv_src = InventorySource.objects.get(name='Test Inventory Source') + assert inv_src.inventory == inv2 + + result.pop('invocation') + assert result == { + "name": "Test Inventory Source", + "id": inv_src.id, + } + + +@pytest.mark.django_db +def test_falsy_value(run_module, admin_user, base_inventory): + result = run_module('inventory_source', dict(name='falsy-test', inventory=base_inventory.name, source='ec2', update_on_launch=True), admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', None), result + + inv_src = InventorySource.objects.get(name='falsy-test') + assert inv_src.update_on_launch is True + + result = run_module('inventory_source', dict(name='falsy-test', inventory=base_inventory.name, source='ec2', update_on_launch=False), admin_user) + + inv_src.refresh_from_db() + assert inv_src.update_on_launch is False + + +# Tests related to source-specific parameters +# +# We want to let the API return issues with "this doesn't 
support that", etc.
+#
+# GUI OPTIONS:
+# - - - - - - - - - -  manual: file: scm: ec2: gce azure_rm vmware sat openstack rhv tower custom
+# credential             ?  ?  o  o  r  r  r  r  r  r  r  o
+# source_project         ?  ?  r  -  -  -  -  -  -  -  -  -
+# source_path            ?  ?  r  -  -  -  -  -  -  -  -  -
+# verbosity              ?  ?  o  o  o  o  o  o  o  o  o  o
+# overwrite              ?  ?  o  o  o  o  o  o  o  o  o  o
+# overwrite_vars         ?  ?  o  o  o  o  o  o  o  o  o  o
+# update_on_launch       ?  ?  o  o  o  o  o  o  o  o  o  o
+# UoPL                   ?  ?  o  -  -  -  -  -  -  -  -  -
+# source_vars*           ?  ?  -  o  -  o  o  o  o  -  -  -
+# environment vars*      ?  ?  o  -  -  -  -  -  -  -  -  o
+# source_script          ?  ?  -  -  -  -  -  -  -  -  -  r
+#
+# UoPL - update_on_project_launch
+# * - source_vars are labeled environment_vars on project and custom sources + + +@pytest.mark.django_db +def test_missing_required_credential(run_module, admin_user, base_inventory): + result = run_module('inventory_source', dict(name='Test Azure Source', inventory=base_inventory.name, source='azure_rm', state='present'), admin_user) + assert result.pop('failed', None) is True, result + + assert 'Credential is required for a cloud source' in result.get('msg', '') + + +@pytest.mark.django_db +def test_source_project_not_for_cloud(run_module, admin_user, base_inventory, project): + result = run_module( + 'inventory_source', + dict(name='Test ec2 Inventory Source', inventory=base_inventory.name, source='ec2', state='present', source_project=project.name), + admin_user, + ) + assert result.pop('failed', None) is True, result + + assert 'Cannot set source_project if not SCM type' in result.get('msg', '') + + +@pytest.mark.django_db +def test_source_path_not_for_cloud(run_module, admin_user, base_inventory): + result = run_module( + 'inventory_source', + dict(name='Test ec2 Inventory Source', inventory=base_inventory.name, source='ec2', state='present', source_path='where/am/I'), + admin_user, + ) + assert result.pop('failed', None) is True, result + + assert 'Cannot set source_path if not SCM type' in result.get('msg', '') + + +@pytest.mark.django_db +def 
test_scm_source_needs_project(run_module, admin_user, base_inventory): + result = run_module( + 'inventory_source', + dict( + name='SCM inventory without project', inventory=base_inventory.name, state='present', source='scm', source_path='/var/lib/awx/example_source_path/' + ), + admin_user, + ) + assert result.pop('failed', None), result + + assert 'Project required for scm type sources' in result.get('msg', '') diff --git a/ansible_collections/awx/awx/test/awx/test_job.py b/ansible_collections/awx/awx/test/awx/test_job.py new file mode 100644 index 00000000..1731ef35 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_job.py @@ -0,0 +1,37 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest +from django.utils.timezone import now + +from awx.main.models import Job + + +@pytest.mark.django_db +def test_job_wait_successful(run_module, admin_user): + job = Job.objects.create(status='successful', started=now(), finished=now()) + result = run_module('job_wait', dict(job_id=job.id), admin_user) + result.pop('invocation', None) + result['elapsed'] = float(result['elapsed']) + assert result.pop('finished', '')[:10] == str(job.finished)[:10] + assert result.pop('started', '')[:10] == str(job.started)[:10] + assert result == {"status": "successful", "changed": False, "elapsed": job.elapsed, "id": job.id} + + +@pytest.mark.django_db +def test_job_wait_failed(run_module, admin_user): + job = Job.objects.create(status='failed', started=now(), finished=now()) + result = run_module('job_wait', dict(job_id=job.id), admin_user) + result.pop('invocation', None) + result['elapsed'] = float(result['elapsed']) + assert result.pop('finished', '')[:10] == str(job.finished)[:10] + assert result.pop('started', '')[:10] == str(job.started)[:10] + assert result == {"status": "failed", "failed": True, "changed": False, "elapsed": job.elapsed, "id": job.id, "msg": "Job with id 1 failed"} + + +@pytest.mark.django_db +def 
test_job_wait_not_found(run_module, admin_user): + result = run_module('job_wait', dict(job_id=42), admin_user) + result.pop('invocation', None) + assert result == {"failed": True, "msg": "Unable to wait on job 42; that ID does not exist."} diff --git a/ansible_collections/awx/awx/test/awx/test_job_template.py b/ansible_collections/awx/awx/test/awx/test_job_template.py new file mode 100644 index 00000000..e785a63a --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_job_template.py @@ -0,0 +1,278 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import ActivityStream, JobTemplate, Job, NotificationTemplate + + +@pytest.mark.django_db +def test_create_job_template(run_module, admin_user, project, inventory): + + module_args = { + 'name': 'foo', + 'playbook': 'helloworld.yml', + 'project': project.name, + 'inventory': inventory.name, + 'extra_vars': {'foo': 'bar'}, + 'job_type': 'run', + 'state': 'present', + } + + result = run_module('job_template', module_args, admin_user) + + jt = JobTemplate.objects.get(name='foo') + assert jt.extra_vars == '{"foo": "bar"}' + + assert result == {"name": "foo", "id": jt.id, "changed": True, "invocation": {"module_args": module_args}} + + assert jt.project_id == project.id + assert jt.inventory_id == inventory.id + + +@pytest.mark.django_db +def test_resets_job_template_values(run_module, admin_user, project, inventory): + + module_args = { + 'name': 'foo', + 'playbook': 'helloworld.yml', + 'project': project.name, + 'inventory': inventory.name, + 'extra_vars': {'foo': 'bar'}, + 'job_type': 'run', + 'state': 'present', + 'forks': 20, + 'timeout': 50, + 'allow_simultaneous': True, + 'ask_limit_on_launch': True, + 'ask_execution_environment_on_launch': True, + 'ask_forks_on_launch': True, + 'ask_instance_groups_on_launch': True, + 'ask_job_slice_count_on_launch': True, + 'ask_labels_on_launch': True, + 'ask_timeout_on_launch': True, + } + + 
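The second half of this test deliberately sends falsy values (forks=0, timeout=0, allow_simultaneous=False) to verify they really reset the job template. As an aside, here is a minimal sketch of the bug class this guards against; `fields_to_update` is a hypothetical helper, not the collection's actual code:

```python
# Hypothetical sketch: building the field dict to send to the API.
# A naive truthiness check (`if params.get(name):`) would silently drop
# forks=0 or allow_simultaneous=False, so a template value could never
# be reset back to zero/False. Checking `is not None` keeps falsy values
# while still skipping parameters the caller never supplied.
def fields_to_update(params, field_names):
    return {name: params[name] for name in field_names if params.get(name) is not None}

params = {'forks': 0, 'timeout': 0, 'allow_simultaneous': False, 'limit': None}
update = fields_to_update(params, ['forks', 'timeout', 'allow_simultaneous', 'limit'])
# 'limit' was never supplied (None) and is skipped; 0 and False survive.
```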
result = run_module('job_template', module_args, admin_user) + + jt = JobTemplate.objects.get(name='foo') + assert jt.forks == 20 + assert jt.timeout == 50 + assert jt.allow_simultaneous + assert jt.ask_limit_on_launch + assert jt.ask_execution_environment_on_launch + assert jt.ask_forks_on_launch + assert jt.ask_instance_groups_on_launch + assert jt.ask_job_slice_count_on_launch + assert jt.ask_labels_on_launch + assert jt.ask_timeout_on_launch + + module_args = { + 'name': 'foo', + 'playbook': 'helloworld.yml', + 'project': project.name, + 'inventory': inventory.name, + 'extra_vars': {'foo': 'bar'}, + 'job_type': 'run', + 'state': 'present', + 'forks': 0, + 'timeout': 0, + 'allow_simultaneous': False, + 'ask_limit_on_launch': False, + 'ask_execution_environment_on_launch': False, + 'ask_forks_on_launch': False, + 'ask_instance_groups_on_launch': False, + 'ask_job_slice_count_on_launch': False, + 'ask_labels_on_launch': False, + 'ask_timeout_on_launch': False, + } + + result = run_module('job_template', module_args, admin_user) + assert result['changed'] + + jt = JobTemplate.objects.get(name='foo') + assert jt.forks == 0 + assert jt.timeout == 0 + assert not jt.allow_simultaneous + assert not jt.ask_limit_on_launch + assert not jt.ask_execution_environment_on_launch + assert not jt.ask_forks_on_launch + assert not jt.ask_instance_groups_on_launch + assert not jt.ask_job_slice_count_on_launch + assert not jt.ask_labels_on_launch + assert not jt.ask_timeout_on_launch + + +@pytest.mark.django_db +def test_job_launch_with_prompting(run_module, admin_user, project, organization, inventory, machine_credential): + JobTemplate.objects.create( + name='foo', + project=project, + organization=organization, + playbook='helloworld.yml', + ask_variables_on_launch=True, + ask_inventory_on_launch=True, + ask_credential_on_launch=True, + ) + result = run_module( + 'job_launch', + dict( + job_template='foo', + inventory=inventory.name, + credential=machine_credential.name, + 
extra_vars={"var1": "My First Variable", "var2": "My Second Variable", "var3": "My Third Variable"}, + ), + admin_user, + ) + assert result.pop('changed', None), result + + job = Job.objects.get(id=result['id']) + assert job.extra_vars == '{"var1": "My First Variable", "var2": "My Second Variable", "var3": "My Third Variable"}' + assert job.inventory == inventory + assert [cred.id for cred in job.credentials.all()] == [machine_credential.id] + + +@pytest.mark.django_db +def test_job_template_with_new_credentials(run_module, admin_user, project, inventory, machine_credential, vault_credential): + result = run_module( + 'job_template', + dict( + name='foo', playbook='helloworld.yml', project=project.name, inventory=inventory.name, credentials=[machine_credential.name, vault_credential.name] + ), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', False), result + jt = JobTemplate.objects.get(pk=result['id']) + + assert set([machine_credential.id, vault_credential.id]) == set([cred.pk for cred in jt.credentials.all()]) + + prior_ct = ActivityStream.objects.count() + result = run_module( + 'job_template', + dict( + name='foo', playbook='helloworld.yml', project=project.name, inventory=inventory.name, credentials=[machine_credential.name, vault_credential.name] + ), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert not result.get('changed', True), result + jt.refresh_from_db() + assert result['id'] == jt.id + + assert set([machine_credential.id, vault_credential.id]) == set([cred.pk for cred in jt.credentials.all()]) + assert ActivityStream.objects.count() == prior_ct + + +@pytest.mark.django_db +def test_job_template_with_survey_spec(run_module, admin_user, project, inventory, survey_spec): + result = run_module( + 'job_template', + dict(name='foo', playbook='helloworld.yml', project=project.name, inventory=inventory.name, survey_spec=survey_spec, 
survey_enabled=True), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', False), result + jt = JobTemplate.objects.get(pk=result['id']) + + assert jt.survey_spec == survey_spec + + prior_ct = ActivityStream.objects.count() + result = run_module( + 'job_template', + dict(name='foo', playbook='helloworld.yml', project=project.name, inventory=inventory.name, survey_spec=survey_spec, survey_enabled=True), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert not result.get('changed', True), result + jt.refresh_from_db() + assert result['id'] == jt.id + + assert jt.survey_spec == survey_spec + assert ActivityStream.objects.count() == prior_ct + + +@pytest.mark.django_db +def test_job_template_with_wrong_survey_spec(run_module, admin_user, project, inventory, survey_spec): + result = run_module( + 'job_template', + dict(name='foo', playbook='helloworld.yml', project=project.name, inventory=inventory.name, survey_spec=survey_spec, survey_enabled=True), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', False), result + jt = JobTemplate.objects.get(pk=result['id']) + + assert jt.survey_spec == survey_spec + + prior_ct = ActivityStream.objects.count() + + del survey_spec['description'] + + result = run_module( + 'job_template', + dict(name='foo', playbook='helloworld.yml', project=project.name, inventory=inventory.name, survey_spec=survey_spec, survey_enabled=True), + admin_user, + ) + assert result.get('failed', True) + assert result.get('msg') == "Failed to update survey: Field 'description' is missing from survey spec." 
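The test above removes the `description` key and expects the server-side survey validation to reject the spec. A toy sketch of that top-level check (the real validation lives in the AWX API, not in this collection; `validate_survey_spec` is hypothetical):

```python
# Hypothetical mirror of AWX's top-level survey spec validation: a survey
# spec must carry 'name', 'description', and 'spec' keys. The error string
# matches the message substring asserted in the test above.
def validate_survey_spec(spec):
    for field in ('name', 'description', 'spec'):
        if field not in spec:
            return "Field '{0}' is missing from survey spec.".format(field)
    return None

bad_spec = {"name": "test", "spec": []}  # 'description' deliberately absent
error = validate_survey_spec(bad_spec)
```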
+ + assert ActivityStream.objects.count() == prior_ct + + +@pytest.mark.django_db +def test_job_template_with_survey_encrypted_default(run_module, admin_user, project, inventory, silence_warning): + spec = { + "spec": [{"index": 0, "question_name": "my question?", "default": "very_secret_value", "variable": "myvar", "type": "password", "required": False}], + "description": "test", + "name": "test", + } + for i in range(2): + result = run_module( + 'job_template', + dict(name='foo', playbook='helloworld.yml', project=project.name, inventory=inventory.name, survey_spec=spec, survey_enabled=True), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + + assert result.get('changed', False), result # not actually desired, but assert for sanity + + silence_warning.assert_called_once_with( + "The field survey_spec of job_template {0} has encrypted data and " "may inaccurately report task is changed.".format(result['id']) + ) + + +@pytest.mark.django_db +def test_associate_only_on_success(run_module, admin_user, organization, project): + jt = JobTemplate.objects.create( + name='foo', + project=project, + playbook='helloworld.yml', + ask_inventory_on_launch=True, + ) + create_kwargs = dict( + notification_configuration={'url': 'http://www.example.com/hook', 'headers': {'X-Custom-Header': 'value123'}, 'password': 'bar'}, + notification_type='webhook', + organization=organization, + ) + nt1 = NotificationTemplate.objects.create(name='nt1', **create_kwargs) + nt2 = NotificationTemplate.objects.create(name='nt2', **create_kwargs) + + jt.notification_templates_error.add(nt1) + + # test preservation of error NTs when success NTs are added + result = run_module('job_template', dict(name='foo', playbook='helloworld.yml', project=project.name, notification_templates_success=['nt2']), admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', True), result + + assert 
list(jt.notification_templates_success.values_list('id', flat=True)) == [nt2.id] + assert list(jt.notification_templates_error.values_list('id', flat=True)) == [nt1.id] + + # test removal to empty list + result = run_module('job_template', dict(name='foo', playbook='helloworld.yml', project=project.name, notification_templates_success=[]), admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', True), result + + assert list(jt.notification_templates_success.values_list('id', flat=True)) == [] + assert list(jt.notification_templates_error.values_list('id', flat=True)) == [nt1.id] diff --git a/ansible_collections/awx/awx/test/awx/test_label.py b/ansible_collections/awx/awx/test/awx/test_label.py new file mode 100644 index 00000000..2a34ceb9 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_label.py @@ -0,0 +1,38 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import Label + + +@pytest.mark.django_db +def test_create_label(run_module, admin_user, organization): + result = run_module('label', dict(name='test-label', organization=organization.name), admin_user) + assert not result.get('failed'), result.get('msg', result) + assert result.get('changed', False) + + assert Label.objects.get(name='test-label').organization == organization + + +@pytest.mark.django_db +def test_create_label_using_org_id(run_module, admin_user, organization): + result = run_module('label', dict(name='test-label', organization=organization.id), admin_user) + assert not result.get('failed'), result.get('msg', result) + assert result.get('changed', False) + + assert Label.objects.get(name='test-label').organization == organization + + +@pytest.mark.django_db +def test_modify_label(run_module, admin_user, organization): + label = Label.objects.create(name='test-label', organization=organization) + + result = run_module('label', 
dict(name='test-label', new_name='renamed-label', organization=organization.name), admin_user) + assert not result.get('failed'), result.get('msg', result) + assert result.get('changed', False) + + label.refresh_from_db() + assert label.organization == organization + assert label.name == 'renamed-label' diff --git a/ansible_collections/awx/awx/test/awx/test_module_utils.py b/ansible_collections/awx/awx/test/awx/test_module_utils.py new file mode 100644 index 00000000..088b5368 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_module_utils.py @@ -0,0 +1,231 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import json +import sys + +from awx.main.models import Organization, Team, Project, Inventory +from requests.models import Response +from unittest import mock + +awx_name = 'AWX' +controller_name = 'Red Hat Ansible Automation Platform' +ping_version = '1.2.3' + + +def getTowerheader(self, header_name, default): + mock_headers = {'X-API-Product-Name': controller_name, 'X-API-Product-Version': ping_version} + return mock_headers.get(header_name, default) + + +def getAWXheader(self, header_name, default): + mock_headers = {'X-API-Product-Name': awx_name, 'X-API-Product-Version': ping_version} + return mock_headers.get(header_name, default) + + +def getNoheader(self, header_name, default): + mock_headers = {} + return mock_headers.get(header_name, default) + + +def read(self): + return json.dumps({}) + + +def status(self): + return 200 + + +def mock_controller_ping_response(self, method, url, **kwargs): + r = Response() + r.getheader = getTowerheader.__get__(r) + r.read = read.__get__(r) + r.status = status.__get__(r) + return r + + +def mock_awx_ping_response(self, method, url, **kwargs): + r = Response() + r.getheader = getAWXheader.__get__(r) + r.read = read.__get__(r) + r.status = status.__get__(r) + return r + + +def mock_no_ping_response(self, method, url, **kwargs): + r = Response() + r.getheader = 
getNoheader.__get__(r) + r.read = read.__get__(r) + r.status = status.__get__(r) + return r + + +def test_version_warning(collection_import, silence_warning): + ControllerAPIModule = collection_import('plugins.module_utils.controller_api').ControllerAPIModule + cli_data = {'ANSIBLE_MODULE_ARGS': {}} + testargs = ['module_file2.py', json.dumps(cli_data)] + with mock.patch.object(sys, 'argv', testargs): + with mock.patch('ansible.module_utils.urls.Request.open', new=mock_awx_ping_response): + my_module = ControllerAPIModule(argument_spec=dict()) + my_module._COLLECTION_VERSION = "2.0.0" + my_module._COLLECTION_TYPE = "awx" + my_module.get_endpoint('ping') + silence_warning.assert_called_once_with( + 'You are running collection version {0} but connecting to {1} version {2}'.format(my_module._COLLECTION_VERSION, awx_name, ping_version) + ) + + +def test_no_version_warning(collection_import, silence_warning): + ControllerAPIModule = collection_import('plugins.module_utils.controller_api').ControllerAPIModule + cli_data = {'ANSIBLE_MODULE_ARGS': {}} + testargs = ['module_file2.py', json.dumps(cli_data)] + with mock.patch.object(sys, 'argv', testargs): + with mock.patch('ansible.module_utils.urls.Request.open', new=mock_no_ping_response): + my_module = ControllerAPIModule(argument_spec=dict()) + my_module._COLLECTION_VERSION = "2.0.0" + my_module._COLLECTION_TYPE = "awx" + my_module.get_endpoint('ping') + silence_warning.assert_called_once_with( + 'You are using the {0} version of this collection but connecting to a controller that did not return a version'.format(my_module._COLLECTION_VERSION) + ) + + +def test_version_warning_strictness_awx(collection_import, silence_warning): + ControllerAPIModule = collection_import('plugins.module_utils.controller_api').ControllerAPIModule + cli_data = {'ANSIBLE_MODULE_ARGS': {}} + testargs = ['module_file2.py', json.dumps(cli_data)] + # Compare 1.0.0 to 1.2.3 (major matches) + with mock.patch.object(sys, 'argv', testargs): + with 
mock.patch('ansible.module_utils.urls.Request.open', new=mock_awx_ping_response): + my_module = ControllerAPIModule(argument_spec=dict()) + my_module._COLLECTION_VERSION = "1.0.0" + my_module._COLLECTION_TYPE = "awx" + my_module.get_endpoint('ping') + silence_warning.assert_not_called() + + # Compare 1.2.0 to 1.2.3 (major matches, minor does not count) + with mock.patch.object(sys, 'argv', testargs): + with mock.patch('ansible.module_utils.urls.Request.open', new=mock_awx_ping_response): + my_module = ControllerAPIModule(argument_spec=dict()) + my_module._COLLECTION_VERSION = "1.2.0" + my_module._COLLECTION_TYPE = "awx" + my_module.get_endpoint('ping') + silence_warning.assert_not_called() + + +def test_version_warning_strictness_controller(collection_import, silence_warning): + ControllerAPIModule = collection_import('plugins.module_utils.controller_api').ControllerAPIModule + cli_data = {'ANSIBLE_MODULE_ARGS': {}} + testargs = ['module_file2.py', json.dumps(cli_data)] + # Compare 1.2.0 to 1.2.3 (major/minor match) + with mock.patch.object(sys, 'argv', testargs): + with mock.patch('ansible.module_utils.urls.Request.open', new=mock_controller_ping_response): + my_module = ControllerAPIModule(argument_spec=dict()) + my_module._COLLECTION_VERSION = "1.2.0" + my_module._COLLECTION_TYPE = "controller" + my_module.get_endpoint('ping') + silence_warning.assert_not_called() + + # Compare 1.0.0 to 1.2.3 (major/minor fail to match) + with mock.patch.object(sys, 'argv', testargs): + with mock.patch('ansible.module_utils.urls.Request.open', new=mock_controller_ping_response): + my_module = ControllerAPIModule(argument_spec=dict()) + my_module._COLLECTION_VERSION = "1.0.0" + my_module._COLLECTION_TYPE = "controller" + my_module.get_endpoint('ping') + silence_warning.assert_called_once_with( + 'You are running collection version {0} but connecting to {1} version {2}'.format(my_module._COLLECTION_VERSION, controller_name, ping_version) + ) + + +def
test_type_warning(collection_import, silence_warning): + ControllerAPIModule = collection_import('plugins.module_utils.controller_api').ControllerAPIModule + cli_data = {'ANSIBLE_MODULE_ARGS': {}} + testargs = ['module_file2.py', json.dumps(cli_data)] + with mock.patch.object(sys, 'argv', testargs): + with mock.patch('ansible.module_utils.urls.Request.open', new=mock_awx_ping_response): + my_module = ControllerAPIModule(argument_spec={}) + my_module._COLLECTION_VERSION = ping_version + my_module._COLLECTION_TYPE = "controller" + my_module.get_endpoint('ping') + silence_warning.assert_called_once_with( + 'You are using the {0} version of this collection but connecting to {1}'.format(my_module._COLLECTION_TYPE, awx_name) + ) + + +def test_duplicate_config(collection_import, silence_warning): + # imports done here because of PATH issues unique to this test suite + ControllerAPIModule = collection_import('plugins.module_utils.controller_api').ControllerAPIModule + data = {'name': 'zigzoom', 'zig': 'zoom', 'controller_username': 'bob', 'controller_config_file': 'my_config'} + + with mock.patch.object(ControllerAPIModule, 'load_config') as mock_load: + argument_spec = dict( + name=dict(required=True), + zig=dict(type='str'), + ) + ControllerAPIModule(argument_spec=argument_spec, direct_params=data) + assert mock_load.mock_calls[-1] == mock.call('my_config') + + silence_warning.assert_called_once_with( + 'The parameter(s) controller_username were provided at the same time as ' + 'controller_config_file. Precedence may be unstable, ' + 'we suggest either using config file or params.' + ) + + +def test_no_templated_values(collection_import): + """This test corresponds to replacements done by + awx_collection/tools/roles/template_galaxy/tasks/main.yml + Those replacements should happen at build time, so they should not be + checked into source. 
+ """ + ControllerAPIModule = collection_import('plugins.module_utils.controller_api').ControllerAPIModule + assert ControllerAPIModule._COLLECTION_VERSION == "0.0.1-devel", ( + 'The collection version is templated when the collection is built ' 'and the code should retain the placeholder of "0.0.1-devel".' + ) + InventoryModule = collection_import('plugins.inventory.controller').InventoryModule + assert InventoryModule.NAME == 'awx.awx.controller', ( + 'The inventory plugin FQCN is templated when the collection is built ' 'and the code should retain the default of awx.awx.' + ) + + +def test_conflicting_name_and_id(run_module, admin_user): + """In the event that 2 related items match our search criteria in this way: + one item has an id that matches input + one item has a name that matches input + We should preference the id over the name. + Otherwise, the universality of the controller_api lookup plugin is compromised. + """ + org_by_id = Organization.objects.create(name='foo') + slug = str(org_by_id.id) + Organization.objects.create(name=slug) + result = run_module('team', {'name': 'foo_team', 'description': 'fooin around', 'organization': slug}, admin_user) + assert not result.get('failed', False), result.get('msg', result) + team = Team.objects.filter(name='foo_team').first() + assert str(team.organization_id) == slug, 'Lookup by id should be preferenced over name in cases of conflict.' 
+ assert team.organization.name == 'foo' + + +def test_multiple_lookup(run_module, admin_user): + org1 = Organization.objects.create(name='foo') + org2 = Organization.objects.create(name='bar') + inv = Inventory.objects.create(name='Foo Inv') + proj1 = Project.objects.create( + name='foo', + organization=org1, + scm_type='git', + scm_url="https://github.com/ansible/ansible-tower-samples", + ) + Project.objects.create( + name='foo', + organization=org2, + scm_type='git', + scm_url="https://github.com/ansible/ansible-tower-samples", + ) + result = run_module('job_template', {'name': 'Demo Job Template', 'project': proj1.name, 'inventory': inv.id, 'playbook': 'hello_world.yml'}, admin_user) + assert result.get('failed', False) + assert 'projects' in result['msg'] + assert 'foo' in result['msg'] + assert 'returned 2 items, expected 1' in result['msg'] + assert 'query' in result diff --git a/ansible_collections/awx/awx/test/awx/test_notification_template.py b/ansible_collections/awx/awx/test/awx/test_notification_template.py new file mode 100644 index 00000000..8bd2647b --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_notification_template.py @@ -0,0 +1,158 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import NotificationTemplate, Job + + +def compare_with_encrypted(model_config, param_config): + """Given a model_config from the database, assure that this is consistent + with the config given in the notification_configuration parameter + this requires handling of password fields + """ + for key, model_val in model_config.items(): + param_val = param_config.get(key, 'missing') + if isinstance(model_val, str) and (model_val.startswith('$encrypted$') or param_val.startswith('$encrypted$')): + assert model_val.startswith('$encrypted$') # must be saved as encrypted + assert len(model_val) > len('$encrypted$') + else: + assert model_val == param_val, 'Config key {0} did not 
match, (model: {1}, input: {2})'.format(key, model_val, param_val) + + +@pytest.mark.django_db +def test_create_modify_notification_template(run_module, admin_user, organization): + nt_config = { + 'username': 'user', + 'password': 'password', + 'sender': 'foo@invalid.com', + 'recipients': ['foo2@invalid.com'], + 'host': 'smtp.example.com', + 'port': 25, + 'use_tls': False, + 'use_ssl': False, + 'timeout': 4, + } + result = run_module( + 'notification_template', + dict( + name='foo-notification-template', + organization=organization.name, + notification_type='email', + notification_configuration=nt_config, + ), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.pop('changed', None), result + + nt = NotificationTemplate.objects.get(id=result['id']) + compare_with_encrypted(nt.notification_configuration, nt_config) + assert nt.organization == organization + + # Test no-op, this is impossible if the notification_configuration is given + # because we cannot determine if password fields changed + result = run_module( + 'notification_template', + dict( + name='foo-notification-template', + organization=organization.name, + notification_type='email', + ), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert not result.pop('changed', None), result + + # Test a change in the configuration + nt_config['timeout'] = 12 + result = run_module( + 'notification_template', + dict( + name='foo-notification-template', + organization=organization.name, + notification_type='email', + notification_configuration=nt_config, + ), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.pop('changed', None), result + + nt.refresh_from_db() + compare_with_encrypted(nt.notification_configuration, nt_config) + + +@pytest.mark.django_db +def test_invalid_notification_configuration(run_module, admin_user, organization): + result = run_module( + 
'notification_template', + dict( + name='foo-notification-template', + organization=organization.name, + notification_type='email', + notification_configuration={}, + ), + admin_user, + ) + assert result.get('failed', False), result.get('msg', result) + assert 'Missing required fields for Notification Configuration' in result['msg'] + + +@pytest.mark.django_db +def test_deprecated_to_modern_no_op(run_module, admin_user, organization): + nt_config = {'url': 'http://www.example.com/hook', 'headers': {'X-Custom-Header': 'value123'}} + result = run_module( + 'notification_template', + dict( + name='foo-notification-template', + organization=organization.name, + notification_type='webhook', + notification_configuration=nt_config, + ), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.pop('changed', None), result + + result = run_module( + 'notification_template', + dict( + name='foo-notification-template', + organization=organization.name, + notification_type='webhook', + notification_configuration=nt_config, + ), + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert not result.pop('changed', None), result + + +@pytest.mark.django_db +def test_build_notification_message_undefined(run_module, admin_user, organization): + """Job notification templates may encounter undefined values in the context when they are + rendered. Make sure that accessing attributes or items of an undefined value returns another + instance of Undefined, rather than raising an UndefinedError. 
This enables the use of expressions + like "{{ job.created_by.first_name | default('unknown') }}".""" + job = Job.objects.create(name='foobar') + + nt_config = {'url': 'http://www.example.com/hook', 'headers': {'X-Custom-Header': 'value123'}} + custom_start_template = {'body': '{"started_by": "{{ job.summary_fields.created_by.username | default(\'My Placeholder\') }}"}'} + messages = {'started': custom_start_template, 'success': None, 'error': None, 'workflow_approval': None} + result = run_module( + 'notification_template', + dict( + name='foo-notification-template', + organization=organization.name, + notification_type='webhook', + notification_configuration=nt_config, + messages=messages, + ), + admin_user, + ) + nt = NotificationTemplate.objects.get(id=result['id']) + + body = job.build_notification_message(nt, 'running') + assert '{"started_by": "My Placeholder"}' in body[1] diff --git a/ansible_collections/awx/awx/test/awx/test_organization.py b/ansible_collections/awx/awx/test/awx/test_organization.py new file mode 100644 index 00000000..08d35ed6 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_organization.py @@ -0,0 +1,32 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import Organization + + +@pytest.mark.django_db +def test_create_organization(run_module, admin_user): + + module_args = { + 'name': 'foo', + 'description': 'barfoo', + 'state': 'present', + 'max_hosts': '0', + 'controller_host': None, + 'controller_username': None, + 'controller_password': None, + 'validate_certs': None, + 'controller_oauthtoken': None, + 'controller_config_file': None, + } + + result = run_module('organization', module_args, admin_user) + assert result.get('changed'), result + + org = Organization.objects.get(name='foo') + assert result == {"name": "foo", "changed": True, "id": org.id, "invocation": {"module_args": module_args}} + + assert org.description == 'barfoo' diff 
--git a/ansible_collections/awx/awx/test/awx/test_project.py b/ansible_collections/awx/awx/test/awx/test_project.py new file mode 100644 index 00000000..2f872456 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_project.py @@ -0,0 +1,50 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import Project + + +@pytest.mark.django_db +def test_create_project(run_module, admin_user, organization, silence_warning): + result = run_module( + 'project', + dict(name='foo', organization=organization.name, scm_type='git', scm_url='https://foo.invalid', wait=False, scm_update_cache_timeout=5), + admin_user, + ) + silence_warning.assert_called_once_with('scm_update_cache_timeout will be ignored since scm_update_on_launch was not set to true') + + assert result.pop('changed', None), result + + proj = Project.objects.get(name='foo') + assert proj.scm_url == 'https://foo.invalid' + assert proj.organization == organization + + result.pop('invocation') + assert result == {'name': 'foo', 'id': proj.id} + + +@pytest.mark.django_db +def test_create_project_copy_from(run_module, admin_user, organization, silence_warning): + '''Test the copy_from functionality''' + result = run_module( + 'project', + dict(name='foo', organization=organization.name, scm_type='git', scm_url='https://foo.invalid', wait=False, scm_update_cache_timeout=5), + admin_user, + ) + assert result.pop('changed', None), result + proj_name = 'bar' + result = run_module( + 'project', + dict(name=proj_name, copy_from='foo', scm_type='git', wait=False), + admin_user, + ) + assert result.pop('changed', None), result + result = run_module( + 'project', + dict(name=proj_name, copy_from='foo', scm_type='git', wait=False), + admin_user, + ) + silence_warning.assert_called_with("A project with the name {0} already exists.".format(proj_name)) diff --git a/ansible_collections/awx/awx/test/awx/test_role.py 
b/ansible_collections/awx/awx/test/awx/test_role.py new file mode 100644 index 00000000..f5cc5cee --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_role.py @@ -0,0 +1,88 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import WorkflowJobTemplate, User + + +@pytest.mark.django_db +@pytest.mark.parametrize('state', ('present', 'absent')) +def test_grant_organization_permission(run_module, admin_user, organization, state): + rando = User.objects.create(username='rando') + if state == 'absent': + organization.admin_role.members.add(rando) + + result = run_module('role', {'user': rando.username, 'organization': organization.name, 'role': 'admin', 'state': state}, admin_user) + assert not result.get('failed', False), result.get('msg', result) + + if state == 'present': + assert rando in organization.execute_role + else: + assert rando not in organization.execute_role + + +@pytest.mark.django_db +@pytest.mark.parametrize('state', ('present', 'absent')) +def test_grant_workflow_permission(run_module, admin_user, organization, state): + wfjt = WorkflowJobTemplate.objects.create(organization=organization, name='foo-workflow') + rando = User.objects.create(username='rando') + if state == 'absent': + wfjt.execute_role.members.add(rando) + + result = run_module('role', {'user': rando.username, 'workflow': wfjt.name, 'role': 'execute', 'state': state}, admin_user) + assert not result.get('failed', False), result.get('msg', result) + + if state == 'present': + assert rando in wfjt.execute_role + else: + assert rando not in wfjt.execute_role + + +@pytest.mark.django_db +@pytest.mark.parametrize('state', ('present', 'absent')) +def test_grant_workflow_list_permission(run_module, admin_user, organization, state): + wfjt = WorkflowJobTemplate.objects.create(organization=organization, name='foo-workflow') + rando = User.objects.create(username='rando') + if state == 'absent': + 
wfjt.execute_role.members.add(rando) + + result = run_module( + 'role', + {'user': rando.username, 'lookup_organization': wfjt.organization.name, 'workflows': [wfjt.name], 'role': 'execute', 'state': state}, + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + + if state == 'present': + assert rando in wfjt.execute_role + else: + assert rando not in wfjt.execute_role + + +@pytest.mark.django_db +@pytest.mark.parametrize('state', ('present', 'absent')) +def test_grant_workflow_approval_permission(run_module, admin_user, organization, state): + wfjt = WorkflowJobTemplate.objects.create(organization=organization, name='foo-workflow') + rando = User.objects.create(username='rando') + if state == 'absent': + wfjt.execute_role.members.add(rando) + + result = run_module('role', {'user': rando.username, 'workflow': wfjt.name, 'role': 'approval', 'state': state}, admin_user) + assert not result.get('failed', False), result.get('msg', result) + + if state == 'present': + assert rando in wfjt.approval_role + else: + assert rando not in wfjt.approval_role + + +@pytest.mark.django_db +def test_invalid_role(run_module, admin_user, project): + rando = User.objects.create(username='rando') + result = run_module('role', {'user': rando.username, 'project': project.name, 'role': 'adhoc', 'state': 'present'}, admin_user) + assert result.get('failed', False) + msg = result.get('msg') + assert 'has no role adhoc_role' in msg + assert 'available roles: admin_role, use_role, update_role, read_role' in msg diff --git a/ansible_collections/awx/awx/test/awx/test_schedule.py b/ansible_collections/awx/awx/test/awx/test_schedule.py new file mode 100644 index 00000000..13f14c81 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_schedule.py @@ -0,0 +1,142 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from ansible.errors import AnsibleError + +from awx.main.models import JobTemplate, 
Schedule +from awx.api.serializers import SchedulePreviewSerializer + + +@pytest.mark.django_db +def test_create_schedule(run_module, job_template, admin_user): + my_rrule = 'DTSTART;TZID=Zulu:20200416T034507 RRULE:FREQ=MONTHLY;INTERVAL=1' + result = run_module('schedule', {'name': 'foo_schedule', 'unified_job_template': job_template.name, 'rrule': my_rrule}, admin_user) + assert not result.get('failed', False), result.get('msg', result) + + schedule = Schedule.objects.filter(name='foo_schedule').first() + + assert result['id'] == schedule.id + assert result['changed'] + + assert schedule.rrule == my_rrule + + +@pytest.mark.django_db +def test_delete_same_named_schedule(run_module, project, inventory, admin_user): + jt1 = JobTemplate.objects.create(name='jt1', project=project, inventory=inventory, playbook='helloworld.yml') + jt2 = JobTemplate.objects.create(name='jt2', project=project, inventory=inventory, playbook='helloworld2.yml') + Schedule.objects.create(name='Some Schedule', rrule='DTSTART:20300112T210000Z RRULE:FREQ=DAILY;INTERVAL=1', unified_job_template=jt1) + Schedule.objects.create(name='Some Schedule', rrule='DTSTART:20300112T210000Z RRULE:FREQ=DAILY;INTERVAL=1', unified_job_template=jt2) + + result = run_module('schedule', {'name': 'Some Schedule', 'unified_job_template': 'jt1', 'state': 'absent'}, admin_user) + assert not result.get('failed', False), result.get('msg', result) + + assert Schedule.objects.filter(name='Some Schedule').count() == 1 + + +@pytest.mark.parametrize( + "freq, kwargs, expect", + [ + # Test with a valid start date (no time) (also tests none frequency and count) + ('none', {'start_date': '2020-04-16'}, 'DTSTART;TZID=America/New_York:20200416T000000 RRULE:FREQ=DAILY;COUNT=1;INTERVAL=1'), + # Test with a valid start date and time + ('none', {'start_date': '2020-04-16 03:45:07'}, 'DTSTART;TZID=America/New_York:20200416T034507 RRULE:FREQ=DAILY;COUNT=1;INTERVAL=1'), + # Test end_on as count (also integration test) + ('minute', 
{'start_date': '2020-4-16 03:45:07', 'end_on': '2'}, 'DTSTART;TZID=America/New_York:20200416T034507 RRULE:FREQ=MINUTELY;COUNT=2;INTERVAL=1'), + # Test end_on as date + ( + 'minute', + {'start_date': '2020-4-16 03:45:07', 'end_on': '2020-4-17 03:45:07'}, + 'DTSTART;TZID=America/New_York:20200416T034507 RRULE:FREQ=MINUTELY;UNTIL=20200417T034507;INTERVAL=1', + ), + # Test on_days as a single day + ( + 'week', + {'start_date': '2020-4-16 03:45:07', 'on_days': 'saturday'}, + 'DTSTART;TZID=America/New_York:20200416T034507 RRULE:FREQ=WEEKLY;BYDAY=SA;INTERVAL=1', + ), + # Test on_days as multiple days (with some whitespace) + ( + 'week', + {'start_date': '2020-4-16 03:45:07', 'on_days': 'saturday,monday , friday'}, + 'DTSTART;TZID=America/New_York:20200416T034507 RRULE:FREQ=WEEKLY;BYDAY=MO,FR,SA;INTERVAL=1', + ), + # Test valid month_day_number + ( + 'month', + {'start_date': '2020-4-16 03:45:07', 'month_day_number': '18'}, + 'DTSTART;TZID=America/New_York:20200416T034507 RRULE:FREQ=MONTHLY;BYMONTHDAY=18;INTERVAL=1', + ), + # Test a valid on_the + ( + 'month', + {'start_date': '2020-4-16 03:45:07', 'on_the': 'second sunday'}, + 'DTSTART;TZID=America/New_York:20200416T034507 RRULE:FREQ=MONTHLY;BYSETPOS=2;BYDAY=SU;INTERVAL=1', + ), + # Test a valid timezone + ('month', {'start_date': '2020-4-16 03:45:07', 'timezone': 'Zulu'}, 'DTSTART;TZID=Zulu:20200416T034507 RRULE:FREQ=MONTHLY;INTERVAL=1'), + ], +) +def test_rrule_lookup_plugin(collection_import, freq, kwargs, expect): + LookupModule = collection_import('plugins.lookup.schedule_rrule').LookupModule() + generated_rule = LookupModule.get_rrule(freq, kwargs) + assert generated_rule == expect + rrule_checker = SchedulePreviewSerializer() + # Try to run our generated rrule through the awx validator + # This will raise its own exception on failure + rrule_checker.validate_rrule(generated_rule) + + +@pytest.mark.parametrize("freq", ('none', 'minute', 'hour', 'day', 'week', 'month')) +def
test_empty_schedule_rrule(collection_import, freq): + LookupModule = collection_import('plugins.lookup.schedule_rrule').LookupModule() + if freq == 'day': + pfreq = 'DAILY' + elif freq == 'none': + pfreq = 'DAILY;COUNT=1' + else: + pfreq = freq.upper() + 'LY' + assert LookupModule.get_rrule(freq, {}).endswith(' RRULE:FREQ={0};INTERVAL=1'.format(pfreq)) + + +@pytest.mark.parametrize( + "freq, kwargs, msg", + [ + # Test end_on as junk + ('minute', {'start_date': '2020-4-16 03:45:07', 'end_on': 'junk'}, 'Parameter end_on must either be an integer or in the format YYYY-MM-DD'), + # Test on_days as junk + ( + 'week', + {'start_date': '2020-4-16 03:45:07', 'on_days': 'junk'}, + 'Parameter on_days must only contain values monday, tuesday, wednesday, thursday, friday, saturday, sunday', + ), + # Test combo of both month_day_number and on_the + ( + 'month', + dict(start_date='2020-4-16 03:45:07', on_the='something', month_day_number='else'), + "Month based frequencies can have month_day_number or on_the but not both", + ), + # Test month_day_number as not an integer + ('month', dict(start_date='2020-4-16 03:45:07', month_day_number='junk'), "month_day_number must be between 1 and 31"), + # Test month_day_number < 1 + ('month', dict(start_date='2020-4-16 03:45:07', month_day_number='0'), "month_day_number must be between 1 and 31"), + # Test month_day_number > 31 + ('month', dict(start_date='2020-4-16 03:45:07', month_day_number='32'), "month_day_number must be between 1 and 31"), + # Test on_the as junk + ('month', dict(start_date='2020-4-16 03:45:07', on_the='junk'), "on_the parameter must be two words separated by a space"), + # Test on_the with invalid occurrence + ('month', dict(start_date='2020-4-16 03:45:07', on_the='junk wednesday'), "The first string of the on_the parameter is not valid"), + # Test on_the with invalid weekday + ('month', dict(start_date='2020-4-16 03:45:07', on_the='second junk'), "Weekday portion of on_the parameter is not valid"), + # Test an 
invalid timezone + ('month', dict(start_date='2020-4-16 03:45:07', timezone='junk'), 'Timezone parameter is not valid'), + ], +) +def test_rrule_lookup_plugin_failure(collection_import, freq, kwargs, msg): + LookupModule = collection_import('plugins.lookup.schedule_rrule').LookupModule() + with pytest.raises(AnsibleError) as e: + assert LookupModule.get_rrule(freq, kwargs) + assert msg in str(e.value) diff --git a/ansible_collections/awx/awx/test/awx/test_settings.py b/ansible_collections/awx/awx/test/awx/test_settings.py new file mode 100644 index 00000000..69e823b3 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_settings.py @@ -0,0 +1,47 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.conf.models import Setting + + +@pytest.mark.django_db +def test_setting_flat_value(run_module, admin_user): + the_value = 'CN=service_account,OU=ServiceAccounts,DC=domain,DC=company,DC=org' + result = run_module('settings', dict(name='AUTH_LDAP_BIND_DN', value=the_value), admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + assert Setting.objects.get(key='AUTH_LDAP_BIND_DN').value == the_value + + +@pytest.mark.django_db +def test_setting_dict_value(run_module, admin_user): + the_value = {'email': 'mail', 'first_name': 'givenName', 'last_name': 'surname'} + result = run_module('settings', dict(name='AUTH_LDAP_USER_ATTR_MAP', value=the_value), admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + assert Setting.objects.get(key='AUTH_LDAP_USER_ATTR_MAP').value == the_value + + +@pytest.mark.django_db +def test_setting_nested_type(run_module, admin_user): + the_value = {'email': 'mail', 'first_name': 'givenName', 'last_name': 'surname'} + result = run_module('settings', dict(settings={'AUTH_LDAP_USER_ATTR_MAP': the_value}), admin_user) + assert not 
result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + assert Setting.objects.get(key='AUTH_LDAP_USER_ATTR_MAP').value == the_value + + +@pytest.mark.django_db +def test_setting_bool_value(run_module, admin_user): + for the_value in (True, False): + result = run_module('settings', dict(name='ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC', value=the_value), admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + assert Setting.objects.get(key='ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC').value is the_value diff --git a/ansible_collections/awx/awx/test/awx/test_team.py b/ansible_collections/awx/awx/test/awx/test_team.py new file mode 100644 index 00000000..ddbed70f --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_team.py @@ -0,0 +1,47 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import Organization, Team + + +@pytest.mark.django_db +def test_create_team(run_module, admin_user): + org = Organization.objects.create(name='foo') + + result = run_module('team', {'name': 'foo_team', 'description': 'fooin around', 'state': 'present', 'organization': 'foo'}, admin_user) + + team = Team.objects.filter(name='foo_team').first() + + result.pop('invocation') + assert result == { + "changed": True, + "name": "foo_team", + "id": team.id if team else None, + } + team = Team.objects.get(name='foo_team') + assert team.description == 'fooin around' + assert team.organization_id == org.id + + +@pytest.mark.django_db +def test_modify_team(run_module, admin_user): + org = Organization.objects.create(name='foo') + team = Team.objects.create(name='foo_team', organization=org, description='flat foo') + assert team.description == 'flat foo' + + result = run_module('team', {'name': 'foo_team', 'description': 'fooin around', 'organization': 'foo'}, admin_user) + team.refresh_from_db() + 
result.pop('invocation') + assert result == { + "changed": True, + "id": team.id, + } + assert team.description == 'fooin around' + + # 2nd modification, should cause no change + result = run_module('team', {'name': 'foo_team', 'description': 'fooin around', 'organization': 'foo'}, admin_user) + result.pop('invocation') + assert result == {"id": team.id, "changed": False} diff --git a/ansible_collections/awx/awx/test/awx/test_token.py b/ansible_collections/awx/awx/test/awx/test_token.py new file mode 100644 index 00000000..d49dd01d --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_token.py @@ -0,0 +1,30 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import OAuth2AccessToken + + +@pytest.mark.django_db +def test_create_token(run_module, admin_user): + + module_args = { + 'description': 'barfoo', + 'state': 'present', + 'scope': 'read', + 'controller_host': None, + 'controller_username': None, + 'controller_password': None, + 'validate_certs': None, + 'controller_oauthtoken': None, + 'controller_config_file': None, + } + + result = run_module('token', module_args, admin_user) + assert result.get('changed'), result + + tokens = OAuth2AccessToken.objects.filter(description='barfoo') + assert len(tokens) == 1, 'Tokens with description of barfoo != 1: {0}'.format(len(tokens)) + assert tokens[0].scope == 'read', 'Token was not given read access' diff --git a/ansible_collections/awx/awx/test/awx/test_user.py b/ansible_collections/awx/awx/test/awx/test_user.py new file mode 100644 index 00000000..1513b05a --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_user.py @@ -0,0 +1,63 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from unittest import mock + +from awx.main.models import User + + +@pytest.fixture +def mock_auth_stuff(): + """Some really specific session-related stuff is done for changing or 
setting + passwords, so we will just avoid that here. + """ + with mock.patch('awx.api.serializers.update_session_auth_hash'): + yield + + +@pytest.mark.django_db +def test_create_user(run_module, admin_user, mock_auth_stuff): + result = run_module('user', dict(username='Bob', password='pass4word'), admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + user = User.objects.get(id=result['id']) + assert user.username == 'Bob' + + +@pytest.mark.django_db +def test_password_no_op_warning(run_module, admin_user, mock_auth_stuff, silence_warning): + for i in range(2): + result = run_module('user', dict(username='Bob', password='pass4word'), admin_user) + assert not result.get('failed', False), result.get('msg', result) + + assert result.get('changed') # not actually desired, but assert for sanity + + silence_warning.assert_called_once_with( + "The field password of user {0} has encrypted data and " "may inaccurately report task is changed.".format(result['id']) + ) + + +@pytest.mark.django_db +def test_update_password_on_create(run_module, admin_user, mock_auth_stuff): + for i in range(2): + result = run_module('user', dict(username='Bob', password='pass4word', update_secrets=False), admin_user) + assert not result.get('failed', False), result.get('msg', result) + + assert not result.get('changed') + + +@pytest.mark.django_db +def test_update_user(run_module, admin_user, mock_auth_stuff): + result = run_module('user', dict(username='Bob', password='pass4word', is_system_auditor=True), admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed'), result + + update_result = run_module('user', dict(username='Bob', is_system_auditor=False), admin_user) + + assert update_result.get('changed') + user = User.objects.get(id=result['id']) + assert not user.is_system_auditor diff --git a/ansible_collections/awx/awx/test/awx/test_workflow_job_template.py 
b/ansible_collections/awx/awx/test/awx/test_workflow_job_template.py new file mode 100644 index 00000000..60a4fff7 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_workflow_job_template.py @@ -0,0 +1,145 @@ +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import WorkflowJobTemplate, NotificationTemplate + + +@pytest.mark.django_db +def test_create_workflow_job_template(run_module, admin_user, organization, survey_spec): + result = run_module( + 'workflow_job_template', + { + 'name': 'foo-workflow', + 'organization': organization.name, + 'extra_vars': {'foo': 'bar', 'another-foo': {'barz': 'bar2'}}, + 'survey_spec': survey_spec, + 'survey_enabled': True, + 'state': 'present', + 'job_tags': '', + 'skip_tags': '', + }, + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + + wfjt = WorkflowJobTemplate.objects.get(name='foo-workflow') + assert wfjt.extra_vars == '{"foo": "bar", "another-foo": {"barz": "bar2"}}' + + result.pop('invocation', None) + assert result == {"name": "foo-workflow", "id": wfjt.id, "changed": True} + + assert wfjt.organization_id == organization.id + assert wfjt.survey_spec == survey_spec + + +@pytest.mark.django_db +def test_create_modify_no_survey(run_module, admin_user, organization, survey_spec): + result = run_module( + 'workflow_job_template', + { + 'name': 'foo-workflow', + 'organization': organization.name, + 'job_tags': '', + 'skip_tags': '', + }, + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', False), result + + wfjt = WorkflowJobTemplate.objects.get(name='foo-workflow') + assert wfjt.organization_id == organization.id + assert wfjt.survey_spec == {} + result.pop('invocation', None) + assert result == {"name": "foo-workflow", "id": wfjt.id, "changed": True} + + result = run_module('workflow_job_template', {'name': 'foo-workflow', 
'organization': organization.name}, admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert not result.get('changed', True), result + + +@pytest.mark.django_db +def test_survey_spec_only_changed(run_module, admin_user, organization, survey_spec): + wfjt = WorkflowJobTemplate.objects.create(organization=organization, name='foo-workflow', survey_enabled=True, survey_spec=survey_spec) + result = run_module('workflow_job_template', {'name': 'foo-workflow', 'organization': organization.name, 'state': 'present'}, admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert not result.get('changed', True), result + wfjt.refresh_from_db() + assert wfjt.survey_spec == survey_spec + + survey_spec['description'] = 'changed description' + + result = run_module( + 'workflow_job_template', {'name': 'foo-workflow', 'organization': organization.name, 'survey_spec': survey_spec, 'state': 'present'}, admin_user + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', True), result + wfjt.refresh_from_db() + assert wfjt.survey_spec == survey_spec + + +@pytest.mark.django_db +def test_survey_spec_missing_field(run_module, admin_user, organization, survey_spec): + wfjt = WorkflowJobTemplate.objects.create(organization=organization, name='foo-workflow', survey_enabled=True, survey_spec=survey_spec) + result = run_module('workflow_job_template', {'name': 'foo-workflow', 'organization': organization.name, 'state': 'present'}, admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert not result.get('changed', True), result + wfjt.refresh_from_db() + assert wfjt.survey_spec == survey_spec + + del survey_spec['description'] + + result = run_module( + 'workflow_job_template', {'name': 'foo-workflow', 'organization': organization.name, 'survey_spec': survey_spec, 'state': 'present'}, admin_user + ) + assert result.get('failed', True) + assert 
result.get('msg') == "Failed to update survey: Field 'description' is missing from survey spec." + + +@pytest.mark.django_db +def test_associate_only_on_success(run_module, admin_user, organization, project): + wfjt = WorkflowJobTemplate.objects.create( + organization=organization, + name='foo-workflow', + # survey_enabled=True, survey_spec=survey_spec + ) + create_kwargs = dict( + notification_configuration={'url': 'http://www.example.com/hook', 'headers': {'X-Custom-Header': 'value123'}, 'password': 'bar'}, + notification_type='webhook', + organization=organization, + ) + nt1 = NotificationTemplate.objects.create(name='nt1', **create_kwargs) + nt2 = NotificationTemplate.objects.create(name='nt2', **create_kwargs) + + wfjt.notification_templates_error.add(nt1) + + # test preservation of error NTs when success NTs are added + result = run_module( + 'workflow_job_template', {'name': 'foo-workflow', 'organization': organization.name, 'notification_templates_success': ['nt2']}, admin_user + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', True), result + + assert list(wfjt.notification_templates_success.values_list('id', flat=True)) == [nt2.id] + assert list(wfjt.notification_templates_error.values_list('id', flat=True)) == [nt1.id] + + # test removal to empty list + result = run_module('workflow_job_template', {'name': 'foo-workflow', 'organization': organization.name, 'notification_templates_success': []}, admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', True), result + + assert list(wfjt.notification_templates_success.values_list('id', flat=True)) == [] + assert list(wfjt.notification_templates_error.values_list('id', flat=True)) == [nt1.id] + + +@pytest.mark.django_db +def test_delete_with_spec(run_module, admin_user, organization, survey_spec): + WorkflowJobTemplate.objects.create(organization=organization, name='foo-workflow', survey_enabled=True, 
survey_spec=survey_spec) + result = run_module('workflow_job_template', {'name': 'foo-workflow', 'organization': organization.name, 'state': 'absent'}, admin_user) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', True), result + + assert WorkflowJobTemplate.objects.filter(name='foo-workflow', organization=organization).count() == 0 diff --git a/ansible_collections/awx/awx/test/awx/test_workflow_job_template_node.py b/ansible_collections/awx/awx/test/awx/test_workflow_job_template_node.py new file mode 100644 index 00000000..a4bc56a8 --- /dev/null +++ b/ansible_collections/awx/awx/test/awx/test_workflow_job_template_node.py @@ -0,0 +1,136 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function + +__metaclass__ = type + +import pytest + +from awx.main.models import WorkflowJobTemplateNode, WorkflowJobTemplate, JobTemplate, UnifiedJobTemplate + + +@pytest.fixture +def job_template(project, inventory): + return JobTemplate.objects.create( + project=project, + inventory=inventory, + playbook='helloworld.yml', + name='foo-jt', + ask_variables_on_launch=True, + ask_credential_on_launch=True, + ask_limit_on_launch=True, + ) + + +@pytest.fixture +def wfjt(organization): + WorkflowJobTemplate.objects.create(organization=None, name='foo-workflow') # to test org scoping + return WorkflowJobTemplate.objects.create(organization=organization, name='foo-workflow') + + +@pytest.mark.django_db +def test_create_workflow_job_template_node(run_module, admin_user, wfjt, job_template): + this_identifier = '42🐉' + result = run_module( + 'workflow_job_template_node', + { + 'identifier': this_identifier, + 'workflow_job_template': 'foo-workflow', + 'organization': wfjt.organization.name, + 'unified_job_template': 'foo-jt', + 'state': 'present', + }, + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + + node = 
WorkflowJobTemplateNode.objects.get(identifier=this_identifier) + + result.pop('invocation', None) + assert result == {"name": this_identifier, "id": node.id, "changed": True} # FIXME: should this be identifier instead + + assert node.identifier == this_identifier + assert node.workflow_job_template_id == wfjt.id + assert node.unified_job_template_id == job_template.id + + +@pytest.mark.django_db +def test_create_workflow_job_template_node_approval_node(run_module, admin_user, wfjt, job_template): + """This is a part of the API contract for creating approval nodes""" + this_identifier = '42🐉' + result = run_module( + 'workflow_job_template_node', + { + 'identifier': this_identifier, + 'workflow_job_template': wfjt.name, + 'organization': wfjt.organization.name, + 'approval_node': {'name': 'foo-jt-approval'}, + }, + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', False), result + + node = WorkflowJobTemplateNode.objects.get(identifier=this_identifier) + approval_node = UnifiedJobTemplate.objects.get(name='foo-jt-approval') + + assert result['id'] == approval_node.id + + assert node.identifier == this_identifier + assert node.workflow_job_template_id == wfjt.id + # Compare by equality, not identity: `is` on integer IDs only passes by accident of small-int caching + assert node.unified_job_template_id == approval_node.id + + +@pytest.mark.django_db +def test_make_use_of_prompts(run_module, admin_user, wfjt, job_template, machine_credential, vault_credential): + result = run_module( + 'workflow_job_template_node', + { + 'identifier': '42', + 'workflow_job_template': 'foo-workflow', + 'organization': wfjt.organization.name, + 'unified_job_template': 'foo-jt', + 'extra_data': {'foo': 'bar', 'another-foo': {'barz': 'bar2'}}, + 'limit': 'foo_hosts', + 'credentials': [machine_credential.name, vault_credential.name], + 'state': 'present', + }, + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', False) + + node = 
WorkflowJobTemplateNode.objects.get(identifier='42') + + assert node.limit == 'foo_hosts' + assert node.extra_data == {'foo': 'bar', 'another-foo': {'barz': 'bar2'}} + assert set(node.credentials.all()) == set([machine_credential, vault_credential]) + + +@pytest.mark.django_db +def test_create_with_edges(run_module, admin_user, wfjt, job_template): + next_nodes = [ + WorkflowJobTemplateNode.objects.create(identifier='foo{0}'.format(i), workflow_job_template=wfjt, unified_job_template=job_template) for i in range(3) + ] + + result = run_module( + 'workflow_job_template_node', + { + 'identifier': '42', + 'workflow_job_template': 'foo-workflow', + 'organization': wfjt.organization.name, + 'unified_job_template': 'foo-jt', + 'success_nodes': ['foo0'], + 'always_nodes': ['foo1'], + 'failure_nodes': ['foo2'], + 'state': 'present', + }, + admin_user, + ) + assert not result.get('failed', False), result.get('msg', result) + assert result.get('changed', False) + + node = WorkflowJobTemplateNode.objects.get(identifier='42') + + assert list(node.success_nodes.all()) == [next_nodes[0]] + assert list(node.always_nodes.all()) == [next_nodes[1]] + assert list(node.failure_nodes.all()) == [next_nodes[2]] diff --git a/ansible_collections/awx/awx/tests/config.yml b/ansible_collections/awx/awx/tests/config.yml new file mode 100644 index 00000000..fdb7c505 --- /dev/null +++ b/ansible_collections/awx/awx/tests/config.yml @@ -0,0 +1,3 @@ +--- +modules: + python_requires: '>3' diff --git a/ansible_collections/awx/awx/tests/integration/targets/ad_hoc_command/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/ad_hoc_command/tasks/main.yml new file mode 100644 index 00000000..316315df --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/ad_hoc_command/tasks/main.yml @@ -0,0 +1,94 @@ +--- +- name: Generate a random string for test + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + when: test_id is not 
defined + +- name: Generate names + set_fact: + inv_name: "AWX-Collection-tests-ad_hoc_command-inventory-{{ test_id }}" + ssh_cred_name: "AWX-Collection-tests-ad_hoc_command-ssh-cred-{{ test_id }}" + org_name: "AWX-Collection-tests-ad_hoc_command-org-{{ test_id }}" + +- name: Create a New Organization + organization: + name: "{{ org_name }}" + +- name: Create an Inventory + inventory: + name: "{{ inv_name }}" + organization: "{{ org_name }}" + state: present + +- name: Add localhost to the Inventory + host: + name: localhost + inventory: "{{ inv_name }}" + variables: + ansible_connection: local + +- name: Create a Credential + credential: + name: "{{ ssh_cred_name }}" + organization: "{{ org_name }}" + credential_type: 'Machine' + state: present + +- name: Launch an Ad Hoc Command waiting for it to finish + ad_hoc_command: + inventory: "{{ inv_name }}" + credential: "{{ ssh_cred_name }}" + module_name: "command" + module_args: "echo I <3 Ansible" + wait: true + register: result + +- assert: + that: + - "result is changed" + - "result.status == 'successful'" + +- name: Launch an Ad Hoc Command without module argument + ad_hoc_command: + inventory: "Demo Inventory" + credential: "{{ ssh_cred_name }}" + module_name: "ping" + wait: true + register: result + +- assert: + that: + - "result is changed" + - "result.status == 'successful'" + +- name: Check module fails with correct msg + ad_hoc_command: + inventory: "{{ inv_name }}" + credential: "{{ ssh_cred_name }}" + module_name: "Does not exist" + register: result + ignore_errors: true + +- assert: + that: + - "result is failed" + - "result is not changed" + - "'Does not exist' in result.response['json']['module_name'][0]" + +- name: Delete the Credential + credential: + name: "{{ ssh_cred_name }}" + organization: "{{ org_name }}" + credential_type: 'Machine' + state: absent + +- name: Delete the Inventory + inventory: + name: "{{ inv_name }}" + organization: "{{ org_name }}" + state: absent + +- name: Remove the 
Organization + organization: + name: "{{ org_name }}" + state: absent diff --git a/ansible_collections/awx/awx/tests/integration/targets/ad_hoc_command_cancel/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/ad_hoc_command_cancel/tasks/main.yml new file mode 100644 index 00000000..f7ffe9bc --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/ad_hoc_command_cancel/tasks/main.yml @@ -0,0 +1,119 @@ +--- +- name: Generate a random string for test + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + when: test_id is not defined + +- name: Generate names + set_fact: + inv_name: "AWX-Collection-tests-ad_hoc_command_cancel-inventory-{{ test_id }}" + ssh_cred_name: "AWX-Collection-tests-ad_hoc_command_cancel-ssh-cred-{{ test_id }}" + org_name: "AWX-Collection-tests-ad_hoc_command_cancel-org-{{ test_id }}" + +- name: Create a New Organization + organization: + name: "{{ org_name }}" + +- name: Create an Inventory + inventory: + name: "{{ inv_name }}" + organization: "{{ org_name }}" + state: present + +- name: Add localhost to the Inventory + host: + name: localhost + inventory: "{{ inv_name }}" + variables: + ansible_connection: local + +- name: Create a Credential + credential: + name: "{{ ssh_cred_name }}" + organization: "{{ org_name }}" + credential_type: 'Machine' + state: present + +- name: Launch an Ad Hoc Command + ad_hoc_command: + inventory: "{{ inv_name }}" + credential: "{{ ssh_cred_name }}" + module_name: "command" + module_args: "sleep 100" + register: command + +- assert: + that: + - "command is changed" + +- name: Timeout waiting for the command to cancel + ad_hoc_command_cancel: + command_id: "{{ command.id }}" + timeout: -1 + register: results + ignore_errors: true + +- assert: + that: + - results is failed + - "results['msg'] == 'Monitoring of ad hoc command aborted due to timeout'" + +- block: + - name: "Wait for up to a minute until the job enters the can_cancel: False 
state" + debug: + msg: "The job can_cancel status has transitioned into False, we can proceed with testing" + until: not job_status + retries: 6 + delay: 10 + vars: + job_status: "{{ lookup('awx.awx.controller_api', 'ad_hoc_commands/'+ command.id | string +'/cancel')['can_cancel'] }}" + +- name: Cancel the command with hard error if it's not running + ad_hoc_command_cancel: + command_id: "{{ command.id }}" + fail_if_not_running: true + register: results + ignore_errors: true + +- assert: + that: + - results is failed + +- name: Cancel an already canceled command (assert failure) + ad_hoc_command_cancel: + command_id: "{{ command.id }}" + fail_if_not_running: true + register: results + ignore_errors: true + +- assert: + that: + - results is failed + +- name: Check module fails with correct msg + ad_hoc_command_cancel: + command_id: 9999999999 + register: result + ignore_errors: true + +- assert: + that: + - "result.msg == 'Unable to find command with id 9999999999'" + +- name: Delete the Credential + credential: + name: "{{ ssh_cred_name }}" + organization: "{{ org_name }}" + credential_type: 'Machine' + state: absent + +- name: Delete the Inventory + inventory: + name: "{{ inv_name }}" + organization: "{{ org_name }}" + state: absent + +- name: Remove the Organization + organization: + name: "{{ org_name }}" + state: absent diff --git a/ansible_collections/awx/awx/tests/integration/targets/ad_hoc_command_wait/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/ad_hoc_command_wait/tasks/main.yml new file mode 100644 index 00000000..94177487 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/ad_hoc_command_wait/tasks/main.yml @@ -0,0 +1,131 @@ +--- +- name: Generate a random string for test + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + when: test_id is not defined + +- name: Generate names + set_fact: + inv_name: "AWX-Collection-tests-ad_hoc_command_wait-inventory-{{ test_id 
}}" + ssh_cred_name: "AWX-Collection-tests-ad_hoc_command_wait-ssh-cred-{{ test_id }}" + org_name: "AWX-Collection-tests-ad_hoc_command_wait-org-{{ test_id }}" + +- name: Create a New Organization + organization: + name: "{{ org_name }}" + +- name: Create an Inventory + inventory: + name: "{{ inv_name }}" + organization: "{{ org_name }}" + state: present + +- name: Add localhost to the Inventory + host: + name: localhost + inventory: "{{ inv_name }}" + variables: + ansible_connection: local + +- name: Create a Credential + credential: + name: "{{ ssh_cred_name }}" + organization: "{{ org_name }}" + credential_type: 'Machine' + state: present + +- name: Check module fails with correct msg + ad_hoc_command_wait: + command_id: "99999999" + register: result + ignore_errors: true + +- assert: + that: + - result is failed + - "result.msg == 'Unable to wait on ad hoc command 99999999; that ID does not exist.'" + +- name: Launch command module with sleep 10 + ad_hoc_command: + inventory: "{{ inv_name }}" + credential: "{{ ssh_cred_name }}" + module_name: "command" + module_args: "sleep 5" + register: command + +- assert: + that: + - command is changed + +- name: Wait for the Job to finish + ad_hoc_command_wait: + command_id: "{{ command.id }}" + register: wait_results + +# Make sure it worked and that we have some data in our results +- assert: + that: + - wait_results is successful + - "'elapsed' in wait_results" + - "'id' in wait_results" + +- name: Launch a long running command + ad_hoc_command: + inventory: "{{ inv_name }}" + credential: "{{ ssh_cred_name }}" + module_name: "command" + module_args: "sleep 10000" + register: command + +- assert: + that: + - command is changed + +- name: Timeout waiting for the command to complete + ad_hoc_command_wait: + command_id: "{{ command.id }}" + timeout: 1 + ignore_errors: true + register: wait_results + +# Make sure that we failed and that we have some data in our results +- assert: + that: + - "'Monitoring aborted due to 
timeout' in wait_results.msg or 'Timeout waiting for command to finish.' in wait_results.msg" + - "'id' in wait_results" + +- name: Async cancel the long-running command + ad_hoc_command_cancel: + command_id: "{{ command.id }}" + async: 3600 + poll: 0 + +- name: Wait for the command to exit on cancel + ad_hoc_command_wait: + command_id: "{{ command.id }}" + register: wait_results + ignore_errors: true + +- assert: + that: + - wait_results is failed + - 'wait_results.status == "canceled"' + - "wait_results.msg == 'The ad hoc command - {{ command.id }}, failed'" + +- name: Delete the Credential + credential: + name: "{{ ssh_cred_name }}" + organization: "{{ org_name }}" + credential_type: 'Machine' + state: absent + +- name: Delete the Inventory + inventory: + name: "{{ inv_name }}" + organization: "{{ org_name }}" + state: absent + +- name: Remove the Organization + organization: + name: "{{ org_name }}" + state: absent diff --git a/ansible_collections/awx/awx/tests/integration/targets/application/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/application/tasks/main.yml new file mode 100644 index 00000000..ba76763a --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/application/tasks/main.yml @@ -0,0 +1,92 @@ +--- +- name: Generate a test id + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Generate names + set_fact: + app1_name: "AWX-Collection-tests-application-app1-{{ test_id }}" + app2_name: "AWX-Collection-tests-application-app2-{{ test_id }}" + app3_name: "AWX-Collection-tests-application-app3-{{ test_id }}" + +- block: + - name: Create an application + application: + name: "{{ app1_name }}" + authorization_grant_type: "password" + client_type: "public" + organization: "Default" + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Delete our application + application: + name: "{{ app1_name }}" + organization: "Default" + state: absent + 
register: result + + - assert: + that: + - "result is changed" + + - name: Create a second application + application: + name: "{{ app2_name }}" + authorization_grant_type: "authorization-code" + client_type: "confidential" + organization: "Default" + description: "Another application" + redirect_uris: + - http://tower.com/api/v2/ + - http://tower.com/api/v2/teams + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Create an all trusting application + application: + name: "{{ app3_name }}" + organization: "Default" + description: "All Trusting Application" + skip_authorization: true + authorization_grant_type: "password" + client_type: "confidential" + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Rename an inventory + application: + name: "{{ app3_name }}" + new_name: "{{ app3_name }}a" + organization: Default + state: present + register: result + + - assert: + that: + - result.changed + + always: + - name: Delete our application + application: + name: "{{ item }}" + organization: "Default" + state: absent + register: result + loop: + - "{{ app1_name }}" + - "{{ app2_name }}" + - "{{ app3_name }}" + - "{{ app3_name }}a" diff --git a/ansible_collections/awx/awx/tests/integration/targets/credential/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/credential/tasks/main.yml new file mode 100644 index 00000000..57b2168f --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/credential/tasks/main.yml @@ -0,0 +1,637 @@ +--- +- name: Generate a random string for test + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + when: test_id is not defined + +- name: Generate names + set_fact: + ssh_cred_name1: "AWX-Collection-tests-credential-ssh-cred1-{{ test_id }}" + ssh_cred_name2: "AWX-Collection-tests-credential-ssh-cred2-{{ test_id }}" + ssh_cred_name3: 
"AWX-Collection-tests-credential-ssh-cred-lookup-source-{{ test_id }}" + ssh_cred_name4: "AWX-Collection-tests-credential-ssh-cred-file-source-{{ test_id }}" + vault_cred_name1: "AWX-Collection-tests-credential-vault-cred1-{{ test_id }}" + vault_cred_name2: "AWX-Collection-tests-credential-vault-ssh-cred1-{{ test_id }}" + net_cred_name1: "AWX-Collection-tests-credential-net-cred1-{{ test_id }}" + scm_cred_name1: "AWX-Collection-tests-credential-scm-cred1-{{ test_id }}" + aws_cred_name1: "AWX-Collection-tests-credential-aws-cred1-{{ test_id }}" + vmware_cred_name1: "AWX-Collection-tests-credential-vmware-cred1-{{ test_id }}" + sat6_cred_name1: "AWX-Collection-tests-credential-sat6-cred1-{{ test_id }}" + gce_cred_name1: "AWX-Collection-tests-credential-gce-cred1-{{ test_id }}" + azurerm_cred_name1: "AWX-Collection-tests-credential-azurerm-cred1-{{ test_id }}" + openstack_cred_name1: "AWX-Collection-tests-credential-openstack-cred1-{{ test_id }}" + rhv_cred_name1: "AWX-Collection-tests-credential-rhv-cred1-{{ test_id }}" + insights_cred_name1: "AWX-Collection-tests-credential-insights-cred1-{{ test_id }}" + tower_cred_name1: "AWX-Collection-tests-credential-tower-cred1-{{ test_id }}" + +- name: create a tempdir for an SSH key + local_action: shell mktemp -d + register: tempdir + +- name: Generate a local SSH key + local_action: "shell ssh-keygen -b 2048 -t rsa -f {{ tempdir.stdout }}/id_rsa -q -N 'passphrase'" + +- name: Read the generated key + set_fact: + ssh_key_data: "{{ lookup('file', tempdir.stdout + '/id_rsa') }}" + +- name: Create an Org-specific credential with an ID + credential: + name: "{{ ssh_cred_name1 }}" + organization: Default + credential_type: Machine + state: present + register: result + +- assert: + that: + - "result is changed" + +- name: Delete a Org-specific credential + credential: + name: "{{ ssh_cred_name1 }}" + organization: Default + state: absent + credential_type: Machine + register: result + +- assert: + that: + - "result is changed" + 
+- name: Create the User-specific credential + credential: + name: "{{ ssh_cred_name1 }}" + user: admin + credential_type: 'Machine' + state: present + register: result + +- assert: + that: + - "result is changed" + +- name: Delete a User-specific credential + credential: + name: "{{ ssh_cred_name1 }}" + user: admin + state: absent + credential_type: 'Machine' + register: result + +- assert: + that: + - "result is changed" + +- name: Create a valid SSH credential + credential: + name: "{{ ssh_cred_name2 }}" + organization: Default + state: present + credential_type: Machine + description: An example SSH credential + inputs: + username: joe + password: secret + become_method: sudo + become_username: superuser + become_password: supersecret + ssh_key_data: "{{ ssh_key_data }}" + ssh_key_unlock: "passphrase" + register: result + +- assert: + that: + - result is changed + +- name: Create a valid SSH credential + credential: + name: "{{ ssh_cred_name2 }}" + organization: Default + state: present + credential_type: Machine + description: An example SSH credential + inputs: + username: joe + become_method: sudo + become_username: superuser + register: result + +- assert: + that: + - result is changed + +- name: Check for inputs idempotency (when "inputs" is blank) + credential: + name: "{{ ssh_cred_name2 }}" + organization: Default + state: present + credential_type: Machine + description: An example SSH credential + register: result + +- assert: + that: + - result is not changed + +- name: Copy ssh Credential + credential: + name: "copy_{{ ssh_cred_name2 }}" + copy_from: "{{ ssh_cred_name2 }}" + credential_type: Machine + register: result + +- assert: + that: + - result.copied + +- name: Delete an SSH credential + credential: + name: "copy_{{ ssh_cred_name2 }}" + organization: Default + state: absent + credential_type: Machine + register: result + +- assert: + that: + - "result is changed" + +- name: Create a valid SSH credential from lookup source + credential: + name: 
"{{ ssh_cred_name3 }}" + organization: Default + state: present + credential_type: Machine + description: An example SSH credential from lookup source + inputs: + username: joe + password: secret + become_method: sudo + become_username: superuser + become_password: supersecret + ssh_key_data: "{{ lookup('file', tempdir.stdout + '/id_rsa') }}" + ssh_key_unlock: "passphrase" + register: result + +- assert: + that: + - result is changed + +- name: Create an invalid SSH credential (passphrase required) + credential: + name: SSH Credential + organization: Default + state: present + credential_type: Machine + inputs: + username: joe + ssh_key_data: "{{ ssh_key_data }}" + ignore_errors: true + register: result + +- assert: + that: + - "result is failed" + - "'must be set when SSH key is encrypted' in result.msg" + +- name: Create an invalid SSH credential (Organization not found) + credential: + name: SSH Credential + organization: Missing_Organization + state: present + credential_type: Machine + inputs: + username: joe + ignore_errors: true + register: result + +- assert: + that: + - "result is failed" + - "result is not changed" + - "'Missing_Organization' in result.msg" + - "result.total_results == 0" + +- name: Delete an SSH credential + credential: + name: "{{ ssh_cred_name2 }}" + organization: Default + state: absent + credential_type: Machine + register: result + +- assert: + that: + - "result is changed" + +- name: Delete an SSH credential + credential: + name: "{{ ssh_cred_name3 }}" + organization: Default + state: absent + credential_type: Machine + register: result + +- assert: + that: + - "result is changed" + +- name: Delete an SSH credential + credential: + name: "{{ ssh_cred_name4 }}" + organization: Default + state: absent + credential_type: Machine + register: result + +# This one was never really created so it shouldn't be deleted +- assert: + that: + - "result is not changed" + +- name: Create a valid Vault credential + credential: + name: "{{ 
vault_cred_name1 }}" + organization: Default + state: present + credential_type: Vault + description: An example Vault credential + inputs: + vault_id: bar + vault_password: secret-vault + register: result + +- assert: + that: + - "result is changed" + +- name: Delete a Vault credential + credential: + name: "{{ vault_cred_name1 }}" + organization: Default + state: absent + credential_type: Vault + register: result + +- assert: + that: + - "result is changed" + +- name: Delete a Vault credential + credential: + name: "{{ vault_cred_name2 }}" + organization: Default + state: absent + credential_type: Vault + register: result + +# The creation of vault_cred_name2 never worked so we shouldn't actually need to delete it +- assert: + that: + - "result is not changed" + +- name: Create a valid Network credential + credential: + name: "{{ net_cred_name1 }}" + organization: Default + state: present + credential_type: Network + inputs: + username: joe + password: secret + authorize: true + authorize_password: authorize-me + register: result + +- assert: + that: + - "result is changed" + +- name: Delete a Network credential + credential: + name: "{{ net_cred_name1 }}" + organization: Default + state: absent + credential_type: Network + register: result + +- assert: + that: + - "result is changed" + +- name: Create a valid SCM credential + credential: + name: "{{ scm_cred_name1 }}" + organization: Default + state: present + credential_type: Source Control + inputs: + username: joe + password: secret + ssh_key_data: "{{ ssh_key_data }}" + ssh_key_unlock: "passphrase" + register: result + +- assert: + that: + - "result is changed" + +- name: Delete an SCM credential + credential: + name: "{{ scm_cred_name1 }}" + organization: Default + state: absent + credential_type: Source Control + register: result + +- assert: + that: + - "result is changed" + +- name: Create a valid AWS credential + credential: + name: "{{ aws_cred_name1 }}" + organization: Default + state: present + 
+    credential_type: Amazon Web Services
+    inputs:
+      username: joe
+      password: secret
+      security_token: aws-token
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete an AWS credential
+  credential:
+    name: "{{ aws_cred_name1 }}"
+    organization: Default
+    state: absent
+    credential_type: Amazon Web Services
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Create a valid VMware credential
+  credential:
+    name: "{{ vmware_cred_name1 }}"
+    organization: Default
+    state: present
+    credential_type: VMware vCenter
+    inputs:
+      host: https://example.org
+      username: joe
+      password: secret
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete a VMware credential
+  credential:
+    name: "{{ vmware_cred_name1 }}"
+    organization: Default
+    state: absent
+    credential_type: VMware vCenter
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Create a valid Satellite6 credential
+  credential:
+    name: "{{ sat6_cred_name1 }}"
+    organization: Default
+    state: present
+    credential_type: Red Hat Satellite 6
+    inputs:
+      host: https://example.org
+      username: joe
+      password: secret
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete a Satellite6 credential
+  credential:
+    name: "{{ sat6_cred_name1 }}"
+    organization: Default
+    state: absent
+    credential_type: Red Hat Satellite 6
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Create a valid GCE credential
+  credential:
+    name: "{{ gce_cred_name1 }}"
+    organization: Default
+    state: present
+    credential_type: Google Compute Engine
+    inputs:
+      username: joe
+      project: ABC123
+      ssh_key_data: "{{ ssh_key_data }}"
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete a GCE credential
+  credential:
+    name: "{{ gce_cred_name1 }}"
+    organization: Default
+    state: absent
+    credential_type: Google Compute Engine
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Create a valid AzureRM credential
+  credential:
+    name: "{{ azurerm_cred_name1 }}"
+    organization: Default
+    state: present
+    credential_type: Microsoft Azure Resource Manager
+    inputs:
+      username: joe
+      password: secret
+      subscription: some-subscription
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Create a valid AzureRM credential with a tenant
+  credential:
+    name: "{{ azurerm_cred_name1 }}"
+    organization: Default
+    state: present
+    credential_type: Microsoft Azure Resource Manager
+    inputs:
+      client: some-client
+      secret: some-secret
+      tenant: some-tenant
+      subscription: some-subscription
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete an AzureRM credential
+  credential:
+    name: "{{ azurerm_cred_name1 }}"
+    organization: Default
+    state: absent
+    credential_type: Microsoft Azure Resource Manager
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Create a valid OpenStack credential
+  credential:
+    name: "{{ openstack_cred_name1 }}"
+    organization: Default
+    state: present
+    credential_type: OpenStack
+    inputs:
+      host: https://keystone.example.org
+      username: joe
+      password: secret
+      project: tenant123
+      domain: some-domain
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete an OpenStack credential
+  credential:
+    name: "{{ openstack_cred_name1 }}"
+    organization: Default
+    state: absent
+    credential_type: OpenStack
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Create a valid RHV credential
+  credential:
+    name: "{{ rhv_cred_name1 }}"
+    organization: Default
+    state: present
+    credential_type: Red Hat Virtualization
+    inputs:
+      host: https://example.org
+      username: joe
+      password: secret
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete an RHV credential
+  credential:
+    name: "{{ rhv_cred_name1 }}"
+    organization: Default
+    state: absent
+    credential_type: Red Hat Virtualization
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Create a valid Insights credential
+  credential:
+    name: "{{ insights_cred_name1 }}"
+    organization: Default
+    state: present
+    credential_type: Insights
+    inputs:
+      username: joe
+      password: secret
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete an Insights credential
+  credential:
+    name: "{{ insights_cred_name1 }}"
+    organization: Default
+    state: absent
+    credential_type: Insights
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Create a valid Tower-to-Tower credential
+  credential:
+    name: "{{ tower_cred_name1 }}"
+    organization: Default
+    state: present
+    credential_type: Red Hat Ansible Automation Platform
+    inputs:
+      host: https://controller.example.org
+      username: joe
+      password: secret
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete a Tower-to-Tower credential
+  credential:
+    name: "{{ tower_cred_name1 }}"
+    organization: Default
+    state: absent
+    credential_type: Red Hat Ansible Automation Platform
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Check module fails with correct msg
+  credential:
+    name: test-credential
+    description: Credential Description
+    credential_type: Machine
+    organization: test-non-existing-org
+    state: present
+  register: result
+  ignore_errors: true
+
+- assert:
+    that:
+      - "result is failed"
+      - "result is not changed"
+      - "'test-non-existing-org' in result.msg"
+      - "result.total_results == 0"
diff --git a/ansible_collections/awx/awx/tests/integration/targets/credential_input_source/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/credential_input_source/tasks/main.yml
new file mode 100644
index 00000000..e9066a0f
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/credential_input_source/tasks/main.yml
@@ -0,0 +1,113 @@
+---
+- name: Generate a random string for test
+  set_fact:
+    test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+  when: test_id is not defined
+
+- name: Generate names
+  set_fact:
+    src_cred_name: "AWX-Collection-tests-credential_input_source-src_cred-{{ test_id }}"
+    target_cred_name: "AWX-Collection-tests-credential_input_source-target_cred-{{ test_id }}"
+
+- block:
+    - name: Add credential Lookup
+      credential:
+        description: Credential for Testing Source
+        name: "{{ src_cred_name }}"
+        credential_type: CyberArk Central Credential Provider Lookup
+        inputs:
+          url: "https://cyberark.example.com"
+          app_id: "My-App-ID"
+        organization: Default
+      register: src_cred_result
+
+    - assert:
+        that:
+          - "src_cred_result is changed"
+
+    - name: Add credential Target
+      credential:
+        description: Credential for Testing Target
+        name: "{{ target_cred_name }}"
+        credential_type: Machine
+        inputs:
+          username: user
+        organization: Default
+      register: target_cred_result
+
+    - assert:
+        that:
+          - "target_cred_result is changed"
+
+    - name: Add credential Input Source
+      credential_input_source:
+        input_field_name: password
+        target_credential: "{{ target_cred_result.id }}"
+        source_credential: "{{ src_cred_result.id }}"
+        metadata:
+          object_query: "Safe=MY_SAFE;Object=AWX-user"
+          object_query_format: "Exact"
+        state: present
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Add Second credential Lookup
+      credential:
+        description: Credential for Testing Source Change
+        name: "{{ src_cred_name }}-2"
+        credential_type: CyberArk Central Credential Provider Lookup
+        inputs:
+          url: "https://cyberark-prod.example.com"
+          app_id: "My-App-ID"
+        organization: Default
+      register: result
+
+    - name: Change credential Input Source
+      credential_input_source:
+        input_field_name: password
+        target_credential: "{{ target_cred_name }}"
+        source_credential: "{{ src_cred_name }}-2"
+        state: present
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+  always:
+    - name: Remove a credential source
+      credential_input_source:
+        input_field_name: password
+        target_credential: "{{ target_cred_name }}"
+        state: absent
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Remove credential Lookup
+      credential:
+        name: "{{ src_cred_name }}"
+        organization: Default
+        credential_type: CyberArk Central Credential Provider Lookup
+        state: absent
+      register: result
+
+    - name: Remove Alt credential Lookup
+      credential:
+        name: "{{ src_cred_name }}-2"
+        organization: Default
+        credential_type: CyberArk Central Credential Provider Lookup
+        state: absent
+      register: result
+
+    - name: Remove credential
+      credential:
+        name: "{{ target_cred_name }}"
+        organization: Default
+        credential_type: Machine
+        state: absent
+      register: result
diff --git a/ansible_collections/awx/awx/tests/integration/targets/credential_type/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/credential_type/tasks/main.yml
new file mode 100644
index 00000000..e3b75f76
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/credential_type/tasks/main.yml
@@ -0,0 +1,44 @@
+---
+- name: Generate names
+  set_fact:
+    cred_type_name: "AWX-Collection-tests-credential_type-cred-type-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+
+- block:
+    - name: Add Tower credential type
+      credential_type:
+        description: Credential type for Test
+        name: "{{ cred_type_name }}"
+        kind: cloud
+        inputs: {"fields": [{"type": "string", "id": "username", "label": "Username"}, {"secret": true, "type": "string", "id": "password", "label": "Password"}], "required": ["username", "password"]}
+        injectors: {"extra_vars": {"test": "foo"}}
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Rename Tower credential type
+      credential_type:
+        name: "{{ cred_type_name }}"
+        new_name: "{{ cred_type_name }}a"
+        kind: cloud
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+  always:
+    - name: Remove a Tower credential type
+      credential_type:
+        name: "{{ item }}"
+        state: absent
+      register: result
+      loop:
+        - "{{ cred_type_name }}"
+        - "{{ cred_type_name }}a"
+
+    - assert:
+        that:
+          - "result is changed"
diff --git a/ansible_collections/awx/awx/tests/integration/targets/demo_data/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/demo_data/tasks/main.yml
new file mode 100644
index 00000000..db152671
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/demo_data/tasks/main.yml
@@ -0,0 +1,45 @@
+---
+- name: Assure that default organization exists
+  organization:
+    name: Default
+
+- name: HACK - delete orphaned projects from preload data where organization deleted
+  project:
+    name: "{{ item['id'] }}"
+    scm_type: git
+    state: absent
+  loop: >
+    {{ query('awx.awx.controller_api', 'projects',
+       query_params={'organization__isnull': true, 'name': 'Demo Project'})
+    }}
+  loop_control:
+    label: "Deleting Demo Project with null organization id={{ item['id'] }}"
+
+- name: Assure that demo project exists
+  project:
+    name: "Demo Project"
+    scm_type: 'git'
+    scm_url: 'https://github.com/ansible/ansible-tower-samples'
+    scm_update_on_launch: true
+    organization: Default
+
+- name: Assure that demo inventory exists
+  inventory:
+    name: "Demo Inventory"
+    organization: Default
+
+- name: Create a Host
+  host:
+    name: "localhost"
+    inventory: "Demo Inventory"
+    state: present
+    variables:
+      ansible_connection: local
+  register: result
+
+- name: Assure that demo job template exists
+  job_template:
+    name: "Demo Job Template"
+    project: "Demo Project"
+    inventory: "Demo Inventory"
+    playbook: "hello_world.yml"
diff --git a/ansible_collections/awx/awx/tests/integration/targets/execution_environment/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/execution_environment/tasks/main.yml
new file mode 100644
index 00000000..0cb2fd90
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/execution_environment/tasks/main.yml
@@ -0,0 +1,61 @@
+---
+- name: Generate a random string for test
+  set_fact:
+    test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+  when: test_id is not defined
+
+- name: Generate names
+  set_fact:
+    ee_name: "AWX-Collection-tests-ee-{{ test_id }}"
+
+- block:
+    - name: Add an EE
+      execution_environment:
+        name: "{{ ee_name }}"
+        description: "EE for Testing"
+        image: quay.io/ansible/awx-ee
+        pull: always
+        organization: Default
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Associate the Test EE with another Org (this should fail)
+      execution_environment:
+        name: "{{ ee_name }}"
+        organization: Some Org
+        image: quay.io/ansible/awx-ee
+      register: result
+      ignore_errors: true
+
+    - assert:
+        that:
+          - "result is failed"
+
+    - name: Rename the Test EEs
+      execution_environment:
+        name: "{{ ee_name }}"
+        new_name: "{{ ee_name }}a"
+        image: quay.io/ansible/awx-ee
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+  always:
+    - name: Delete the Test EEs
+      execution_environment:
+        name: "{{ item }}"
+        state: absent
+        image: quay.io/ansible/awx-ee
+      register: result
+      loop:
+        - "{{ ee_name }}"
+        - "{{ ee_name }}a"
+
+    - assert:
+        that:
+          - "result is changed"
diff --git a/ansible_collections/awx/awx/tests/integration/targets/export/aliases b/ansible_collections/awx/awx/tests/integration/targets/export/aliases
new file mode 100644
index 00000000..527d07c3
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/export/aliases
@@ -0,0 +1 @@
+skip/python2
diff --git a/ansible_collections/awx/awx/tests/integration/targets/export/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/export/tasks/main.yml
new file mode 100644
index 00000000..baa78ce6
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/export/tasks/main.yml
@@ -0,0 +1,111 @@
+---
+- name: Generate a random string for test
+  set_fact:
+    test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+  when: test_id is not defined
+
+- name: Generate names
+  set_fact:
+    org_name1: "AWX-Collection-tests-export-organization-{{ test_id }}"
+    org_name2: "AWX-Collection-tests-export-organization2-{{ test_id }}"
+    inventory_name1: "AWX-Collection-tests-export-inv1-{{ test_id }}"
+
+- block:
+    - name: Create some organizations
+      organization:
+        name: "{{ item }}"
+      loop:
+        - "{{ org_name1 }}"
+        - "{{ org_name2 }}"
+
+    - name: Create an inventory
+      inventory:
+        name: "{{ inventory_name1 }}"
+        organization: "{{ org_name1 }}"
+
+    - name: Export all assets
+      export:
+        all: true
+      register: all_assets
+
+    - assert:
+        that:
+          - all_assets is not changed
+          - all_assets is successful
+          - all_assets['assets']['organizations'] | length() >= 2
+
+    - name: Export all inventories
+      export:
+        inventory: 'all'
+      register: inventory_export
+
+    - assert:
+        that:
+          - inventory_export is successful
+          - inventory_export is not changed
+          - inventory_export['assets']['inventory'] | length() >= 1
+          - "'organizations' not in inventory_export['assets']"
+
+    # This mimics the example in the module
+    - name: Export all inventories and one specific organization
+      export:
+        inventory: 'all'
+        organizations: "{{ org_name1 }}"
+      register: mixed_export
+
+    - assert:
+        that:
+          - mixed_export is successful
+          - mixed_export is not changed
+          - mixed_export['assets']['inventory'] | length() >= 1
+          - mixed_export['assets']['organizations'] | length() == 1
+          - "'workflow_job_templates' not in mixed_export['assets']"
+
+    - name: Export list of organizations
+      export:
+        organizations: "{{ [org_name1, org_name2] }}"
+      register: list_asserts
+
+    - assert:
+        that:
+          - list_asserts is not changed
+          - list_asserts is successful
+          - list_asserts['assets']['organizations'] | length() >= 2
+
+    - name: Export list with one organization
+      export:
+        organizations: "{{ [org_name1] }}"
+      register: list_asserts
+
+    - assert:
+        that:
+          - list_asserts is not changed
+          - list_asserts is successful
+          - list_asserts['assets']['organizations'] | length() >= 1
+          - "org_name1 in (list_asserts['assets']['organizations'] | map(attribute='name'))"
+
+    - name: Export one organization as string
+      export:
+        organizations: "{{ org_name2 }}"
+      register: string_asserts
+
+    - assert:
+        that:
+          - string_asserts is not changed
+          - string_asserts is successful
+          - string_asserts['assets']['organizations'] | length() >= 1
+          - "org_name2 in (string_asserts['assets']['organizations'] | map(attribute='name'))"
+  always:
+    - name: Remove our inventory
+      inventory:
+        name: "{{ inventory_name1 }}"
+        organization: "{{ org_name1 }}"
+        state: absent
+
+    - name: Remove test organizations
+      organization:
+        name: "{{ item }}"
+        state: absent
+      loop:
+        - "{{ org_name1 }}"
+        - "{{ org_name2 }}"
diff --git a/ansible_collections/awx/awx/tests/integration/targets/group/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/group/tasks/main.yml
new file mode 100644
index 00000000..ac58826b
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/group/tasks/main.yml
@@ -0,0 +1,193 @@
+---
+- name: Generate names
+  set_fact:
+    group_name1: "AWX-Collection-tests-group-group-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+    group_name2: "AWX-Collection-tests-group-group-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+    group_name3: "AWX-Collection-tests-group-group-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+    inv_name: "AWX-Collection-tests-group-inv-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+    host_name1: "AWX-Collection-tests-group-host-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+    host_name2: "AWX-Collection-tests-group-host-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+    host_name3: "AWX-Collection-tests-group-host-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+
+- name: Create an Inventory
+  inventory:
+    name: "{{ inv_name }}"
+    organization: Default
+    state: present
+  register: result
+
+- name: Create a Group
+  group:
+    name: "{{ group_name1 }}"
+    inventory: "{{ result.id }}"
+    state: present
+    variables:
+      foo: bar
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Create a Group
+  group:
+    name: "{{ group_name2 }}"
+    inventory: "{{ inv_name }}"
+    state: present
+    variables:
+      foo: bar
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Create a Group
+  group:
+    name: "{{ group_name3 }}"
+    inventory: "{{ inv_name }}"
+    state: present
+    variables:
+      foo: bar
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: add hosts
+  host:
+    name: "{{ item }}"
+    inventory: "{{ inv_name }}"
+  loop:
+    - "{{ host_name1 }}"
+    - "{{ host_name2 }}"
+    - "{{ host_name3 }}"
+
+- name: Create a Group with hosts and sub group
+  group:
+    name: "{{ group_name1 }}"
+    inventory: "{{ inv_name }}"
+    hosts:
+      - "{{ host_name1 }}"
+      - "{{ host_name2 }}"
+    children:
+      - "{{ group_name2 }}"
+    state: present
+    variables:
+      foo: bar
+  register: result
+
+- name: Create a Group with hosts and sub group
+  group:
+    name: "{{ group_name1 }}"
+    inventory: "{{ inv_name }}"
+    hosts:
+      - "{{ host_name3 }}"
+    children:
+      - "{{ group_name3 }}"
+    state: present
+    preserve_existing_hosts: true
+    preserve_existing_children: true
+  register: result
+
+- name: "Find number of hosts in {{ group_name1 }}"
+  set_fact:
+    group1_host_count: "{{ lookup('awx.awx.controller_api', 'groups/{{ result.id }}/all_hosts/') | length }}"
+
+- assert:
+    that:
+      - group1_host_count == "3"
+
+- name: Delete a Group
+  group:
+    name: "{{ group_name1 }}"
+    inventory: "{{ inv_name }}"
+    state: absent
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete a Group
+  group:
+    name: "{{ group_name2 }}"
+    inventory: "{{ inv_name }}"
+    state: absent
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete a Group
+  group:
+    name: "{{ group_name3 }}"
+    inventory: "{{ inv_name }}"
+    state: absent
+  register: result
+
+- assert:
+    that:
+      - "result is not changed"
+
+- name: Check module fails with correct msg
+  group:
+    name: test-group
+    description: Group Description
+    inventory: test-non-existing-inventory
+    state: present
+  register: result
+  ignore_errors: true
+
+- assert:
+    that:
+      - "result is failed"
+      - "result is not changed"
+      - "'test-non-existing-inventory' in result.msg"
+      - "result.total_results == 0"
+
+- name: add hosts
+  host:
+    name: "{{ item }}"
+    inventory: "{{ inv_name }}"
+  loop:
+    - "{{ host_name1 }}"
+    - "{{ host_name2 }}"
+    - "{{ host_name3 }}"
+
+- name: add mid level group
+  group:
+    name: "{{ group_name2 }}"
+    inventory: "{{ inv_name }}"
+    hosts:
+      - "{{ host_name3 }}"
+
+- name: add top group
+  group:
+    name: "{{ group_name3 }}"
+    inventory: "{{ inv_name }}"
+    hosts:
+      - "{{ host_name1 }}"
+      - "{{ host_name2 }}"
+    children:
+      - "{{ group_name2 }}"
+
+- name: Delete the parent group
+  group:
+    name: "{{ group_name3 }}"
+    inventory: "{{ inv_name }}"
+    state: absent
+
+- name: Delete the child group
+  group:
+    name: "{{ group_name2 }}"
+    inventory: "{{ inv_name }}"
+    state: absent
+
+- name: Delete an Inventory
+  inventory:
+    name: "{{ inv_name }}"
+    organization: Default
+    state: absent
diff --git a/ansible_collections/awx/awx/tests/integration/targets/host/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/host/tasks/main.yml
new file mode 100644
index 00000000..a0321b09
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/host/tasks/main.yml
@@ -0,0 +1,51 @@
+---
+- name: Generate names
+  set_fact:
+    host_name: "AWX-Collection-tests-host-host-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+    inv_name: "AWX-Collection-tests-host-inv-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+
+- name: Create an Inventory
+  inventory:
+    name: "{{ inv_name }}"
+    organization: Default
+    state: present
+  register: result
+
+- name: Create a Host
+  host:
+    name: "{{ host_name }}"
+    inventory: "{{ result.id }}"
+    state: present
+    variables:
+      foo: bar
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Delete a Host
+  host:
+    name: "{{ result.id }}"
+    inventory: "{{ inv_name }}"
+    state: absent
+  register: result
+
+- assert:
+    that:
+      - "result is changed"
+
+- name: Check module fails with correct msg
+  host:
+    name: test-host
+    description: Host Description
+    inventory: test-non-existing-inventory
+    state: present
+  register: result
+  ignore_errors: true
+
+- assert:
+    that:
+      - "result is failed"
+      - "'test-non-existing-inventory' in result.msg"
+      - "result.total_results == 0"
diff --git a/ansible_collections/awx/awx/tests/integration/targets/import/aliases b/ansible_collections/awx/awx/tests/integration/targets/import/aliases
new file mode 100644
index 00000000..527d07c3
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/import/aliases
@@ -0,0 +1 @@
+skip/python2
diff --git a/ansible_collections/awx/awx/tests/integration/targets/import/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/import/tasks/main.yml
new file mode 100644
index 00000000..9e0ac0fd
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/import/tasks/main.yml
@@ -0,0 +1,105 @@
+---
+- name: Generate a random string for test
+  set_fact:
+    test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+  when: test_id is not defined
+
+- name: Generate names
+  set_fact:
+    org_name1: "AWX-Collection-tests-import-organization-{{ test_id }}"
+    org_name2: "AWX-Collection-tests-import-organization2-{{ test_id }}"
+
+- block:
+    - name: "Import something"
+      import:
+        assets:
+          organizations:
+            - name: "{{ org_name1 }}"
+              description: ""
+              max_hosts: 0
+              related:
+                notification_templates: []
+                notification_templates_started: []
+                notification_templates_success: []
+                notification_templates_error: []
+                notification_templates_approvals: []
+              natural_key:
+                name: "{{ org_name1 }}"
+                type: "organization"
+      register: import_output
+
+    - assert:
+        that:
+          - import_output is changed
+
+    - name: "Import the same thing again"
+      import:
+        assets:
+          organizations:
+            - name: "{{ org_name1 }}"
+              description: ""
+              max_hosts: 0
+              related:
+                notification_templates: []
+                notification_templates_started: []
+                notification_templates_success: []
+                notification_templates_error: []
+                notification_templates_approvals: []
+              natural_key:
+                name: "{{ org_name1 }}"
+                type: "organization"
+      register: import_output
+      ignore_errors: true
+
+    - assert:
+        that:
+          - import_output is not failed
+          # - import_output is not changed  # FIXME: module not idempotent
+
+    - name: "Write out a json file"
+      copy:
+        content: |
+          {
+            "organizations": [
+              {
+                "name": "{{ org_name2 }}",
+                "description": "",
+                "max_hosts": 0,
+                "related": {
+                  "notification_templates": [],
+                  "notification_templates_started": [],
+                  "notification_templates_success": [],
+                  "notification_templates_error": [],
+                  "notification_templates_approvals": []
+                },
+                "natural_key": {
+                  "name": "{{ org_name2 }}",
+                  "type": "organization"
+                }
+              }
+            ]
+          }
+        dest: ./org.json
+
+    - name: "Load assets from a file"
+      import:
+        assets: "{{ lookup('file', 'org.json') | from_json() }}"
+      register: import_output
+
+    - assert:
+        that:
+          - import_output is changed
+
+  always:
+    - name: Remove organizations
+      organization:
+        name: "{{ item }}"
+        state: absent
+      loop:
+        - "{{ org_name1 }}"
+        - "{{ org_name2 }}"
+
+    - name: Delete org.json
+      file:
+        path: ./org.json
+        state: absent
diff --git a/ansible_collections/awx/awx/tests/integration/targets/instance/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/instance/tasks/main.yml
new file mode 100644
index 00000000..e312c5fa
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/instance/tasks/main.yml
@@ -0,0 +1,59 @@
+---
+- name: Generate hostnames
+  set_fact:
+    hostname1: "AWX-Collection-tests-instance1.{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}.example.com"
+    hostname2: "AWX-Collection-tests-instance2.{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}.example.com"
+    hostname3: "AWX-Collection-tests-instance3.{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}.example.com"
+  register: facts
+
+- name: Show hostnames
+  debug:
+    var: facts
+
+- block:
+    - name: Create an instance
+      awx.awx.instance:
+        hostname: "{{ item }}"
+        node_type: execution
+        node_state: installed
+      with_items:
+        - "{{ hostname1 }}"
+        - "{{ hostname2 }}"
+      register: result
+
+    - assert:
+        that:
+          - result is changed
+
+    - name: Create an instance with non-default config
+      awx.awx.instance:
+        hostname: "{{ hostname3 }}"
+        node_type: execution
+        node_state: installed
+        capacity_adjustment: 0.4
+        listener_port: 31337
+      register: result
+
+    - assert:
+        that:
+          - result is changed
+
+    - name: Update an instance
+      awx.awx.instance:
+        hostname: "{{ hostname1 }}"
+        capacity_adjustment: 0.7
+      register: result
+
+    - assert:
+        that:
+          - result is changed
+
+  always:
+    - name: Deprovision the instances
+      awx.awx.instance:
+        hostname: "{{ item }}"
+        node_state: deprovisioning
+      with_items:
+        - "{{ hostname1 }}"
+        - "{{ hostname2 }}"
+        - "{{ hostname3 }}"
diff --git a/ansible_collections/awx/awx/tests/integration/targets/instance_group/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/instance_group/tasks/main.yml
new file mode 100644
index 00000000..701137f2
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/instance_group/tasks/main.yml
@@ -0,0 +1,76 @@
+---
+- name: Generate test id
+  set_fact:
+    test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+
+- name: Generate names
+  set_fact:
group_name1: "AWX-Collection-tests-instance_group-group1-{{ test_id }}" + group_name2: "AWX-Collection-tests-instance_group-group2-{{ test_id }}" + cred_name1: "AWX-Collection-tests-instance_group-cred1-{{ test_id }}" + +- block: + - name: Create an OpenShift Credential + credential: + name: "{{ cred_name1 }}" + organization: "Default" + credential_type: "OpenShift or Kubernetes API Bearer Token" + inputs: + host: "https://openshift.org" + bearer_token: "asdf1234" + verify_ssl: false + register: cred_result + + - assert: + that: + - "cred_result is changed" + + - name: Create an Instance Group + instance_group: + name: "{{ group_name1 }}" + policy_instance_percentage: 34 + policy_instance_minimum: 12 + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Update an Instance Group + instance_group: + name: "{{ result.id }}" + policy_instance_percentage: 34 + policy_instance_minimum: 24 + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Create a container group + instance_group: + name: "{{ group_name2 }}" + credential: "{{ cred_result.id }}" + is_container_group: true + register: result + + - assert: + that: + - "result is changed" + + always: + - name: Delete the instance groups + instance_group: + name: "{{ item }}" + state: absent + loop: + - "{{ group_name1 }}" + - "{{ group_name2 }}" + + - name: Delete the credential + credential: + name: "{{ cred_name1 }}" + organization: "Default" + credential_type: "OpenShift or Kubernetes API Bearer Token" diff --git a/ansible_collections/awx/awx/tests/integration/targets/inventory/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/inventory/tasks/main.yml new file mode 100644 index 00000000..abbe4f65 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/inventory/tasks/main.yml @@ -0,0 +1,196 @@ +--- +- name: Generate a test ID + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters 
length=16') }}" + +- name: Generate names + set_fact: + inv_name1: "AWX-Collection-tests-inventory-inv1-{{ test_id }}" + inv_name2: "AWX-Collection-tests-inventory-inv2-{{ test_id }}" + cred_name1: "AWX-Collection-tests-inventory-cred1-{{ test_id }}" + group_name1: "AWX-Collection-tests-instance_group-group1-{{ test_id }}" + +- block: + + - name: Create an Instance Group + instance_group: + name: "{{ group_name1 }}" + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Create an Insights Credential + credential: + name: "{{ cred_name1 }}" + organization: Default + credential_type: Insights + inputs: + username: joe + password: secret + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Create an Inventory + inventory: + name: "{{ inv_name1 }}" + organization: Default + instance_groups: + - "{{ group_name1 }}" + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Test Inventory module idempotency + inventory: + name: "{{ result.id }}" + organization: Default + state: present + register: result + + - assert: + that: + - "result is not changed" + + - name: Copy an inventory + inventory: + name: "copy_{{ inv_name1 }}" + copy_from: "{{ inv_name1 }}" + organization: Default + description: "Our Foo Cloud Servers" + state: present + register: result + + - assert: + that: + - result.copied + + - name: Rename an inventory + inventory: + name: "copy_{{ inv_name1 }}" + new_name: "copy_{{ inv_name1 }}a" + organization: Default + state: present + register: result + + - assert: + that: + - result.changed + + - name: Delete an Inventory + inventory: + name: "copy_{{ inv_name1 }}a" + organization: Default + state: absent + register: result + + - assert: + that: + - "result is changed" + + - name: Fail Change Regular to Smart + inventory: + name: "{{ inv_name1 }}" + organization: Default + kind: smart + register: result + ignore_errors: true + + - assert: + that: 
+ - "result is failed" + + - name: Create a smart inventory + inventory: + name: "{{ inv_name2 }}" + organization: Default + kind: smart + host_filter: name=foo + register: result + + - assert: + that: + - "result is changed" + + - name: Delete a smart inventory + inventory: + name: "{{ inv_name2 }}" + organization: Default + kind: smart + host_filter: name=foo + state: absent + register: result + + - assert: + that: + - "result is changed" + + - name: Delete an Inventory + inventory: + name: "{{ inv_name1 }}" + organization: Default + state: absent + register: result + + - assert: + that: + - "result is changed" + + - name: Delete a Non-Existent Inventory + inventory: + name: "{{ inv_name1 }}" + organization: Default + state: absent + register: result + + - assert: + that: + - "result is not changed" + + - name: Check module fails with correct msg + inventory: + name: test-inventory + description: Inventory Description + organization: test-non-existing-org + state: present + register: result + ignore_errors: true + + - assert: + that: + - "result is failed" + - "result is not changed" + - "'test-non-existing-org' in result.msg" + - "result.total_results == 0" + + always: + - name: Delete Inventories + inventory: + name: "{{ item }}" + organization: Default + state: absent + loop: + - "{{ inv_name1 }}" + - "{{ inv_name2 }}" + - "copy_{{ inv_name1 }}" + + - name: Delete the instance groups + instance_group: + name: "{{ group_name1 }}" + state: absent + + - name: Delete Insights Credential + credential: + name: "{{ cred_name1 }}" + organization: "Default" + credential_type: Insights + state: absent diff --git a/ansible_collections/awx/awx/tests/integration/targets/inventory_source/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/inventory_source/tasks/main.yml new file mode 100644 index 00000000..d905d03a --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/inventory_source/tasks/main.yml @@ -0,0 +1,84 @@ +--- +- name: 
Generate names + set_fact: + openstack_cred: "AWX-Collection-tests-inventory_source-cred-openstack-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + openstack_inv: "AWX-Collection-tests-inventory_source-inv-openstack-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + openstack_inv_source: "AWX-Collection-tests-inventory_source-inv-source-openstack-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Add a credential + credential: + description: Credentials for Openstack Test project + name: "{{ openstack_cred }}" + credential_type: OpenStack + organization: Default + inputs: + project: Test + username: admin + host: https://example.org:5000 + password: passw0rd + domain: test + register: credential_result + +- name: Add an inventory + inventory: + description: Test inventory + organization: Default + name: "{{ openstack_inv }}" + +- name: Create a source inventory + inventory_source: + name: "{{ openstack_inv_source }}" + description: Source for Test inventory + inventory: "{{ openstack_inv }}" + credential: "{{ credential_result.id }}" + overwrite: true + update_on_launch: true + source_vars: + private: false + source: openstack + register: result + +- assert: + that: + - "result is changed" + +- name: Delete the inventory source with an invalid cred and source_project specified + inventory_source: + name: "{{ result.id }}" + inventory: "{{ openstack_inv }}" + credential: "Does Not Exist" + source_project: "Does Not Exist" + state: absent + register: result + +- assert: + that: + - "result is changed" + +- name: Delete the credential + credential: + description: Credentials for Openstack Test project + name: "{{ openstack_cred }}" + credential_type: OpenStack + organization: Default + inputs: + project: Test + username: admin + host: https://example.org:5000 + password: passw0rd + domain: test + state: absent + register: result + +- assert: + that: + - "result is changed" + +- name: Delete the inventory + inventory: + description: Test
inventory + organization: Default + name: "{{ openstack_inv }}" + state: absent + register: result + +- assert: + that: + - "result is changed" diff --git a/ansible_collections/awx/awx/tests/integration/targets/inventory_source_update/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/inventory_source_update/tasks/main.yml new file mode 100644 index 00000000..bc9182bb --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/inventory_source_update/tasks/main.yml @@ -0,0 +1,139 @@ +--- +- name: Generate a test ID + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Generate names + set_fact: + project_name: "AWX-Collection-tests-inventory_source_update-project-{{ test_id }}" + inv_name: "AWX-Collection-tests-inventory_source_update-inv-{{ test_id }}" + inv_source1: "AWX-Collection-tests-inventory_source_update-source1-{{ test_id }}" + inv_source2: "AWX-Collection-tests-inventory_source_update-source2-{{ test_id }}" + inv_source3: "AWX-Collection-tests-inventory_source_update-source3-{{ test_id }}" + org_name: "AWX-Collection-tests-inventory_source_update-org-{{ test_id }}" + + +- block: + + - name: "Create a new organization" + organization: + name: "{{ org_name }}" + register: created_org + + - name: Create a git project without credentials + project: + name: "{{ project_name }}" + organization: "{{ org_name }}" + scm_type: git + scm_url: https://github.com/ansible/test-playbooks + wait: true + + - name: Create a git project with same name, different org + project: + name: "{{ project_name }}" + organization: Default + scm_type: git + scm_url: https://github.com/ansible/test-playbooks + wait: true + + - name: Create an Inventory + inventory: + name: "{{ inv_name }}" + organization: "{{ org_name }}" + state: present + + - name: Create another inventory w/ same name, different org + inventory: + name: "{{ inv_name }}" + organization: Default + state: present + register: created_inventory + + - name:
Create an Inventory Source (specifically connected to the randomly generated org) + inventory_source: + name: "{{ inv_source1 }}" + source: scm + source_project: "{{ project_name }}" + source_path: inventories/inventory.ini + description: Source for Test inventory + organization: "{{ created_org.id }}" + inventory: "{{ inv_name }}" + + - name: Create Another Inventory Source + inventory_source: + name: "{{ inv_source2 }}" + source: scm + source_project: "{{ project_name }}" + source_path: inventories/create_10_hosts.ini + description: Source for Test inventory + organization: Default + inventory: "{{ inv_name }}" + + - name: Create Yet Another Inventory Source (to make lookup plugin find multiple inv sources) + inventory_source: + name: "{{ inv_source3 }}" + source: scm + source_project: "{{ project_name }}" + source_path: inventories/create_100_hosts.ini + description: Source for Test inventory + organization: Default + inventory: "{{ inv_name }}" + + - name: Test Inventory Source Update + inventory_source_update: + name: "{{ inv_source2 }}" + inventory: "{{ inv_name }}" + organization: Default + register: result + + - assert: + that: + - "result is changed" + + - name: Test Inventory Source Update for All Sources + inventory_source_update: + name: "{{ item.name }}" + inventory: "{{ inv_name }}" + organization: Default + wait: true + loop: "{{ query('awx.awx.controller_api', 'inventory_sources', query_params={ 'inventory': created_inventory.id }, expect_objects=True, return_objects=True) }}" + loop_control: + label: "{{ item.name }}" + register: result + + - assert: + that: + - "result is changed" + + - name: Test Inventory Source Update for All Sources (using inventory_source as alias for name) + inventory_source_update: + inventory_source: "{{ item.name }}" + inventory: "{{ inv_name }}" + organization: Default + wait: true + loop: "{{ query('awx.awx.controller_api', 'inventory_sources', query_params={ 'inventory': created_inventory.id }, expect_objects=True, 
return_objects=True) }}" + loop_control: + label: "{{ item.name }}" + register: result + + - assert: + that: + - "result is changed" + + always: + - name: Delete Inventory + inventory: + name: "{{ inv_name }}" + organization: Default + state: absent + + - name: Delete Project + project: + name: "{{ project_name }}" + organization: Default + state: absent + + - name: "Remove the organization" + organization: + name: "{{ org_name }}" + state: absent diff --git a/ansible_collections/awx/awx/tests/integration/targets/job_cancel/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/job_cancel/tasks/main.yml new file mode 100644 index 00000000..254ea89c --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/job_cancel/tasks/main.yml @@ -0,0 +1,40 @@ +--- +- name: Launch a Job Template + job_launch: + job_template: "Demo Job Template" + register: job + +- assert: + that: + - "job is changed" + +- name: Cancel the job + job_cancel: + job_id: "{{ job.id }}" + register: results + +- assert: + that: + - results is changed + +- name: Cancel an already canceled job (assert failure) + job_cancel: + job_id: "{{ job.id }}" + fail_if_not_running: true + register: results + ignore_errors: true + +- assert: + that: + - results is failed + +- name: Check module fails with correct msg + job_cancel: + job_id: 9999999999 + register: result + ignore_errors: true + +- assert: + that: + - "result.msg =='Unable to cancel job_id/9999999999: The requested object could not be found.' 
+ or result.msg =='Unable to find job with id 9999999999'" diff --git a/ansible_collections/awx/awx/tests/integration/targets/job_launch/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/job_launch/tasks/main.yml new file mode 100644 index 00000000..843c5c96 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/job_launch/tasks/main.yml @@ -0,0 +1,227 @@ +--- +- name: Generate names + set_fact: + jt_name1: "AWX-Collection-tests-job_launch-jt1-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + jt_name2: "AWX-Collection-tests-job_launch-jt2-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + proj_name: "AWX-Collection-tests-job_launch-project-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Launch a Job Template + job_launch: + job_template: "Demo Job Template" + register: result + +- assert: + that: + - "result is changed" + - "result.status == 'pending'" + +- name: Wait for a job template to complete + job_wait: + job_id: "{{ result.id }}" + interval: 10 + timeout: 120 + register: result + +- assert: + that: + - "result is not changed" + - "result.status == 'successful'" + +- name: Check module fails with correct msg + job_launch: + job_template: "Non_Existing_Job_Template" + inventory: "Demo Inventory" + register: result + ignore_errors: true + +- assert: + that: + - "result is failed" + - "result is not changed" + - "'Non_Existing_Job_Template' in result.msg" + +- name: Create a Job Template for testing prompt on launch + job_template: + name: "{{ jt_name1 }}" + project: Demo Project + playbook: hello_world.yml + job_type: run + ask_credential: true + ask_inventory: true + ask_tags_on_launch: true + ask_skip_tags_on_launch: true + state: present + register: result + +- name: Launch job template with inventory and credential for prompt on launch + job_launch: + job_template: "{{ jt_name1 }}" + inventory: "Demo Inventory" + credential: "Demo Credential" + 
tags: + - sometimes + skip_tags: + - always + register: result + +- assert: + that: + - "result is changed" + - "result.status == 'pending'" + +- name: Create a project for testing extra_vars + project: + name: "{{ proj_name }}" + organization: Default + scm_type: git + scm_url: https://github.com/ansible/test-playbooks + +- name: Create the job template with survey + job_template: + name: "{{ jt_name2 }}" + project: "{{ proj_name }}" + playbook: debug.yml + job_type: run + state: present + inventory: "Demo Inventory" + survey_enabled: true + ask_variables_on_launch: false + survey_spec: + name: '' + description: '' + spec: + - question_name: Basic Name + question_description: Name + required: true + type: text + variable: basic_name + min: 0 + max: 1024 + default: '' + choices: '' + new_question: true + - question_name: Choose yes or no? + question_description: Choosing yes or no. + required: false + type: multiplechoice + variable: option_true_false + min: + max: + default: 'yes' + choices: |- + yes + no + new_question: true + +- name: Kick off a job template with survey + job_launch: + job_template: "{{ jt_name2 }}" + extra_vars: + basic_name: My First Variable + option_true_false: 'no' + ignore_errors: true + register: result + +- assert: + that: + - result is not failed + +- name: Prompt for the job template's extra_vars on launch + job_template: + name: "{{ jt_name2 }}" + state: present + ask_variables_on_launch: true + + +- name: Kick off a job template with extra_vars + job_launch: + job_template: "{{ jt_name2 }}" + extra_vars: + basic_name: My First Variable + var1: My First Variable + var2: My Second Variable + ignore_errors: true + register: result + +- assert: + that: + - result is not failed + +- name: Create a Job Template for testing extra_vars + job_template: + name: "{{ jt_name2 }}" + project: "{{ proj_name }}" + playbook: debug.yml + job_type: run + survey_enabled: false + state: present + inventory: "Demo Inventory" + extra_vars: + foo: bar +
register: result + +- name: Launch job template with inventory and credential for prompt on launch + job_launch: + job_template: "{{ jt_name2 }}" + organization: Default + register: result + +- assert: + that: + - "result is changed" + +- name: Wait for a job template to complete + job_wait: + job_id: "{{ result.id }}" + interval: 10 + timeout: 120 + register: result + +- assert: + that: + - "result is not changed" + - "result.status == 'successful'" + +- name: Get the job + job_list: + query: {"id": "{{result.id}}"} + register: result + +- assert: + that: + - '{"foo": "bar"} | to_json in result.results[0].extra_vars' + +- name: Delete the first jt + job_template: + name: "{{ jt_name1 }}" + project: Demo Project + playbook: hello_world.yml + state: absent + register: result + +- assert: + that: + - "result is changed" + +- name: Delete the second jt + job_template: + name: "{{ jt_name2 }}" + project: "{{ proj_name }}" + playbook: debug.yml + state: absent + register: result + +- assert: + that: + - "result is changed" + +- name: Delete the extra_vars project + project: + name: "{{ proj_name }}" + organization: Default + state: absent + register: result + +- assert: + that: + - "result is changed" diff --git a/ansible_collections/awx/awx/tests/integration/targets/job_list/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/job_list/tasks/main.yml new file mode 100644 index 00000000..04495bfc --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/job_list/tasks/main.yml @@ -0,0 +1,38 @@ +--- +- name: Launch a Job Template + job_launch: + job_template: "Demo Job Template" + register: job + +- assert: + that: + - "job is changed" + - "job.status == 'pending'" + +- name: List jobs w/ a matching primary key + job_list: + query: {"id": "{{ job.id }}"} + register: matching_jobs + +- assert: + that: + - "{{ matching_jobs.count }} == 1" + +- name: List failed jobs (which don't exist) + job_list: + status: failed + query: {"id": "{{ 
job.id }}"} + register: failed_jobs + +- assert: + that: + - "{{ failed_jobs.count }} == 0" + +- name: Get ALL result pages! + job_list: + all_pages: true + register: all_page_query + +- assert: + that: + - 'not all_page_query.next' diff --git a/ansible_collections/awx/awx/tests/integration/targets/job_template/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/job_template/tasks/main.yml new file mode 100644 index 00000000..951fe27f --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/job_template/tasks/main.yml @@ -0,0 +1,450 @@ +--- +- name: Generate a random string for test + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Generate names for test objects + set_fact: + org_name: "AWX-Collection-tests-organization-org-{{ test_id }}" + cred1: "AWX-Collection-tests-job_template-cred1-{{ test_id }}" + cred2: "AWX-Collection-tests-job_template-cred2-{{ test_id }}" + cred3: "AWX-Collection-tests-job_template-cred3-{{ test_id }}" + proj1: "AWX-Collection-tests-job_template-proj-{{ test_id }}" + jt1: "AWX-Collection-tests-job_template-jt1-{{ test_id }}" + jt2: "AWX-Collection-tests-job_template-jt2-{{ test_id }}" + lab1: "AWX-Collection-tests-job_template-lab1-{{ test_id }}" + email_not: "AWX-Collection-tests-job_template-email-not-{{ test_id }}" + webhook_not: "AWX-Collection-tests-notification_template-webhook-not-{{ test_id }}" + group_name1: "AWX-Collection-tests-instance_group-group1-{{ test_id }}" + +- name: "Create a new organization" + organization: + name: "{{ org_name }}" + galaxy_credentials: + - Ansible Galaxy + register: result + +- name: Create a Demo Project + project: + name: "{{ proj1 }}" + organization: Default + state: present + scm_type: git + scm_url: https://github.com/ansible/ansible-tower-samples.git + register: proj_result + +- name: Create Credential1 + credential: + name: "{{ cred1 }}" + organization: Default + credential_type: Red Hat
Ansible Automation Platform + register: cred1_result + +- name: Create Credential2 + credential: + name: "{{ cred2 }}" + organization: Default + credential_type: Machine + +- name: Create Credential3 + credential: + name: "{{ cred3 }}" + organization: Default + credential_type: Machine + +- name: Create Labels + label: + name: "{{ lab1 }}" + organization: "{{ item }}" + loop: + - Default + - "{{ org_name }}" + +- name: Create an Instance Group + instance_group: + name: "{{ group_name1 }}" + state: present + register: result + +- assert: + that: + - "result is changed" + +- name: Add email notification + notification_template: + name: "{{ email_not }}" + organization: Default + notification_type: email + notification_configuration: + username: user + password: s3cr3t + sender: tower@example.com + recipients: + - user1@example.com + host: smtp.example.com + port: 25 + use_tls: false + use_ssl: false + state: present + +- name: Add webhook notification + notification_template: + name: "{{ webhook_not }}" + organization: Default + notification_type: webhook + notification_configuration: + url: http://www.example.com/hook + headers: + X-Custom-Header: value123 + state: present + register: result + +- name: Create Job Template 1 + job_template: + name: "{{ jt1 }}" + project: "{{ proj1 }}" + inventory: Demo Inventory + playbook: hello_world.yml + credentials: + - "{{ cred1 }}" + - "{{ cred2 }}" + instance_groups: + - "{{ group_name1 }}" + job_type: run + state: present + register: jt1_result + +- assert: + that: + - "jt1_result is changed" + +- name: Add a credential to this JT + job_template: + name: "{{ jt1 }}" + project: "{{ proj_result.id }}" + playbook: hello_world.yml + credentials: + - "{{ cred1_result.id }}" + register: result + +- assert: + that: + - "result is changed" + +- name: Try to add the same credential to this JT + job_template: + name: "{{ jt1_result.id }}" + project: "{{ proj1 }}" + playbook: hello_world.yml + credentials: + - "{{ cred1 }}" + register: 
result + +- assert: + that: + - "result is not changed" + +- name: Add another credential to this JT + job_template: + name: "{{ jt1 }}" + project: "{{ proj1 }}" + playbook: hello_world.yml + credentials: + - "{{ cred1 }}" + - "{{ cred2 }}" + register: result + +- assert: + that: + - "result is changed" + +- name: Remove a credential for this JT + job_template: + name: "{{ jt1 }}" + project: "{{ proj1 }}" + playbook: hello_world.yml + credentials: + - "{{ cred1 }}" + register: result + +- assert: + that: + - "result is changed" + +- name: Remove all credentials from this JT + job_template: + name: "{{ jt1 }}" + project: "{{ proj1 }}" + playbook: hello_world.yml + credentials: [] + register: result + +- assert: + that: + - "result is changed" + +- name: Copy Job Template + job_template: + name: "copy_{{ jt1 }}" + copy_from: "{{ jt1 }}" + state: "present" + +- name: Delete copied Job Template + job_template: + name: "copy_{{ jt1 }}" + job_type: run + state: absent + register: result + +# This doesn't work if you include the credentials parameter +- name: Delete Job Template 1 + job_template: + name: "{{ jt1 }}" + playbook: hello_world.yml + job_type: run + project: "Does Not Exist" + inventory: "Does Not Exist" + webhook_credential: "Does Not Exist" + state: absent + register: result + +- assert: + that: + - "result is changed" + +- name: Create Job Template 2 + job_template: + name: "{{ jt2 }}" + organization: Default + project: "{{ proj1 }}" + inventory: Demo Inventory + playbook: hello_world.yml + credential: "{{ cred3 }}" + job_type: run + labels: + - "{{ lab1 }}" + state: present + register: result + +- assert: + that: + - "result is changed" + +- name: Add bad label to Job Template 2 + job_template: + name: "{{ jt2 }}" + organization: Default + project: "{{ proj1 }}" + inventory: Demo Inventory + playbook: hello_world.yml + credential: "{{ cred3 }}" + job_type: run + labels: + - label_bad + state: present + register: bad_label_results + ignore_errors: true + +-
assert: + that: + - "bad_label_results.msg == 'Could not find label entry with name label_bad'" + +- name: Add survey to Job Template 2 + job_template: + name: "{{ jt2 }}" + survey_enabled: true + survey_spec: + name: "" + description: "" + spec: + - question_name: "Q1" + question_description: "The first question" + required: true + type: "text" + variable: "q1" + min: 5 + max: 15 + default: "hello" + register: result + +- assert: + that: + - "result is changed" + +- name: Re Add survey to Job Template 2 + job_template: + name: "{{ jt2 }}" + survey_enabled: true + survey_spec: + name: "" + description: "" + spec: + - question_name: "Q1" + question_description: "The first question" + required: true + type: "text" + variable: "q1" + min: 5 + max: 15 + default: "hello" + register: result + +- assert: + that: + - "result is not changed" + +- name: Add question to survey to Job Template 2 + job_template: + name: "{{ jt2 }}" + survey_enabled: true + survey_spec: + name: "" + description: "" + spec: + - question_name: "Q1" + question_description: "The first question" + required: true + type: "text" + variable: "q1" + min: 5 + max: 15 + default: "hello" + choices: "" + - question_name: "Q2" + type: "text" + variable: "q2" + required: false + register: result + +- assert: + that: + - "result is changed" + +- name: Remove survey from Job Template 2 + job_template: + name: "{{ jt2 }}" + survey_enabled: false + survey_spec: {} + register: result + +- assert: + that: + - "result is changed" + +- name: Add started notifications to Job Template 2 + job_template: + name: "{{ jt2 }}" + notification_templates_started: + - "{{ email_not }}" + - "{{ webhook_not }}" + register: result + +- assert: + that: + - "result is changed" + +- name: Re Add started notifications to Job Template 2 + job_template: + name: "{{ jt2 }}" + notification_templates_started: + - "{{ email_not }}" + - "{{ webhook_not }}" + register: result + +- assert: + that: + - "result is not changed" + +- name: Add 
success notifications to Job Template 2 + job_template: + name: "{{ jt2 }}" + notification_templates_success: + - "{{ email_not }}" + - "{{ webhook_not }}" + register: result + +- assert: + that: + - "result is changed" + +- name: Remove "on start" webhook notification from Job Template 2 + job_template: + name: "{{ jt2 }}" + notification_templates_started: + - "{{ email_not }}" + register: result + +- assert: + that: + - "result is changed" + + +- name: Delete Job Template 2 + job_template: + name: "{{ jt2 }}" + project: "{{ proj1 }}" + inventory: Demo Inventory + playbook: hello_world.yml + credential: "{{ cred3 }}" + job_type: run + state: absent + register: result + +- assert: + that: + - "result is changed" + +- name: Delete the Demo Project + project: + name: "{{ proj1 }}" + organization: Default + state: absent + scm_type: git + scm_url: https://github.com/ansible/ansible-tower-samples.git + register: result + +- name: Delete Credential1 + credential: + name: "{{ cred1 }}" + organization: Default + credential_type: Red Hat Ansible Automation Platform + state: absent + +- name: Delete Credential2 + credential: + name: "{{ cred2 }}" + organization: Default + credential_type: Machine + state: absent + +- name: Delete Credential3 + credential: + name: "{{ cred3 }}" + organization: Default + credential_type: Machine + state: absent + +# You can't delete a label directly so no cleanup needed + +- name: Delete email notification + notification_template: + name: "{{ email_not }}" + organization: Default + state: absent + +- name: Delete the instance groups + instance_group: + name: "{{ group_name1 }}" + state: absent + +- name: Delete webhook notification + notification_template: + name: "{{ webhook_not }}" + organization: Default + state: absent + +- name: "Remove the organization" + organization: + name: "{{ org_name }}" + state: absent + register: result diff --git a/ansible_collections/awx/awx/tests/integration/targets/job_wait/tasks/main.yml 
b/ansible_collections/awx/awx/tests/integration/targets/job_wait/tasks/main.yml new file mode 100644 index 00000000..0aac7f31 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/job_wait/tasks/main.yml @@ -0,0 +1,171 @@ +--- +- name: Generate random string for template and project + set_fact: + jt_name: "AWX-Collection-tests-job_wait-long_running-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + proj_name: "AWX-Collection-tests-job_wait-long_running-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Ensure that the test project exists + project: + name: "{{ proj_name }}" + scm_type: 'git' + scm_url: 'https://github.com/ansible/test-playbooks.git' + scm_update_on_launch: true + organization: Default + +- name: Create a job template + job_template: + name: "{{ jt_name }}" + playbook: "sleep.yml" + job_type: run + project: "{{ proj_name }}" + inventory: "Demo Inventory" + extra_vars: + sleep_interval: 300 + +- name: Validate that interval supersedes min/max + job_wait: + min_interval: 10 + max_interval: 20 + interval: 12 + job_id: "99999999" + register: result + ignore_errors: true + +- assert: + that: + - "result.msg == 'Unable to wait on job 99999999; that ID does not exist.' or result.msg == 'min and max interval have been depricated, please use interval instead, interval will be set to 12'" + +- name: Check module fails with correct msg + job_wait: + job_id: "99999999" + register: result + ignore_errors: true + +- assert: + that: + - result is failed + - "result.msg == 'Unable to wait, no job_id 99999999 found: The requested object could not be found.' or result.msg == 'Unable to wait on job 99999999; that ID does not exist.'" + +- name: Launch Demo Job Template (take happy path) + job_launch: + job_template: "Demo Job Template" + register: job + +- assert: + that: + - job is changed + +- name: Wait for the Job to finish + job_wait: + job_id: "{{ job.id }}" + register: wait_results + +# Make sure it worked and that we have some data in our results +- assert: + that: + - wait_results is successful + - "'elapsed' in wait_results" + - "'id' in wait_results" + +- name: Launch a long running job + job_launch: + job_template: "{{ jt_name }}" + register: job + +- assert: + that: + - job is changed + +- name: Timeout waiting for the job to complete + job_wait: + job_id: "{{ job.id }}" + timeout: 5 + ignore_errors: true + register: wait_results + +# Make sure that we failed and that we have some data in our results +- assert: + that: + - "wait_results.msg == 'Monitoring aborted due to timeout' or wait_results.msg == 'Timeout waiting for job to finish.'" + - "'id' in wait_results" + +- name: Async cancel the long running job + job_cancel: + job_id: "{{ job.id }}" + async: 3600 + poll: 0 + +- name: Wait for the job to exit on cancel + job_wait: + job_id: "{{ job.id }}" + register: wait_results + ignore_errors: true + +- assert: + that: + - wait_results is failed + - 'wait_results.status == "canceled"' + - "wait_results.msg == 'Job with id {{ job.id }} failed' or wait_results.msg == 'Job with id={{ job.id }} failed, error: Job failed.'" + +- name: Delete the job template + job_template: + name: "{{ jt_name }}" + playbook: "sleep.yml" + job_type: run + project: "{{ proj_name }}" + inventory: "Demo Inventory" + state: absent + +- name: Delete the project + project: + name: "{{ proj_name }}" + organization: Default + state: absent + +# workflow wait test +- name: Generate a random string for test + set_fact: + test_id1: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + when: test_id1 is not defined + +- name: Generate names + set_fact: + wfjt_name2:
"AWX-Collection-tests-workflow_launch--wfjt1-{{ test_id1 }}" + +- name: Create our workflow + workflow_job_template: + name: "{{ wfjt_name2 }}" + state: present + +- name: Add a node + workflow_job_template_node: + workflow_job_template: "{{ wfjt_name2 }}" + unified_job_template: "Demo Job Template" + identifier: leaf + register: new_node + +- name: Kick off a workflow + workflow_launch: + workflow_template: "{{ wfjt_name2 }}" + ignore_errors: true + register: workflow + +- name: Wait for the Workflow Job to finish + job_wait: + job_id: "{{ workflow.job_info.id }}" + job_type: "workflow_jobs" + register: wait_workflow_results + +# Make sure it worked and that we have some data in our results +- assert: + that: + - wait_workflow_results is successful + - "'elapsed' in wait_workflow_results" + - "'id' in wait_workflow_results" + +- name: Clean up test workflow + workflow_job_template: + name: "{{ wfjt_name2 }}" + state: absent diff --git a/ansible_collections/awx/awx/tests/integration/targets/label/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/label/tasks/main.yml new file mode 100644 index 00000000..0ac077f8 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/label/tasks/main.yml @@ -0,0 +1,27 @@ +--- +- name: Generate names + set_fact: + label_name: "AWX-Collection-tests-label-label-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Create a Label + label: + name: "{{ label_name }}" + organization: Default + state: present + +- name: Check module fails with correct msg + label: + name: "Test Label" + organization: "Non_existing_org" + state: present + register: result + ignore_errors: true + +- assert: + that: + - "result is failed" + - "result is not changed" + - "'Non_existing_org' in result.msg" + - "result.total_results == 0" + +# You can't delete a label directly so no cleanup is necessary diff --git 
a/ansible_collections/awx/awx/tests/integration/targets/lookup_api_plugin/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/lookup_api_plugin/tasks/main.yml new file mode 100644 index 00000000..5abed9dc --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/lookup_api_plugin/tasks/main.yml @@ -0,0 +1,248 @@ +--- +- name: Generate a random string for test + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + when: test_id is not defined + +- name: Generate usernames + set_fact: + usernames: + - "AWX-Collection-tests-api_lookup-user1-{{ test_id }}" + - "AWX-Collection-tests-api_lookup-user2-{{ test_id }}" + - "AWX-Collection-tests-api_lookup-user3-{{ test_id }}" + hosts: + - "AWX-Collection-tests-api_lookup-host1-{{ test_id }}" + - "AWX-Collection-tests-api_lookup-host2-{{ test_id }}" + group_name: "AWX-Collection-tests-api_lookup-group1-{{ test_id }}" + +- name: Get our collection package + controller_meta: + register: controller_meta + +- name: Generate the name of our plugin + set_fact: + plugin_name: "{{ controller_meta.prefix }}.controller_api" + +- name: Create all of our users + user: + username: "{{ item }}" + is_superuser: true + password: "{{ test_id }}" + loop: "{{ usernames }}" + register: user_creation_results + +- block: + - name: Specify the connection params + debug: + msg: "{{ query(plugin_name, 'ping', host='DNE://junk.com', username='john', password='not_legit', verify_ssl=True) }}" + register: results + ignore_errors: true + + - assert: + that: + - "'dne' in (results.msg | lower)" + + - name: Create our hosts + host: + name: "{{ item }}" + inventory: "Demo Inventory" + loop: "{{ hosts }}" + + - name: Test too many params (failure from validation of terms) + set_fact: + junk: "{{ query(plugin_name, 'users', 'teams', query_params={}, ) }}" + ignore_errors: true + register: result + + - assert: + that: + - result is failed + - "'You must pass exactly one endpoint to 
query' in result.msg" + + - name: Try to load invalid endpoint + set_fact: + junk: "{{ query(plugin_name, 'john', query_params={}, ) }}" + ignore_errors: true + register: result + + - assert: + that: + - result is failed + - "'The requested object could not be found at' in result.msg" + + - name: Load user of a specific name without promoting objects + set_fact: + users_list: "{{ lookup(plugin_name, 'users', query_params={ 'username' : user_creation_results['results'][0]['item'] }, return_objects=False) }}" + + - assert: + that: + - users_list['results'] | length() == 1 + - users_list['count'] == 1 + - users_list['results'][0]['id'] == user_creation_results['results'][0]['id'] + + - name: Load user of a specific name with promoting objects + set_fact: + user_objects: "{{ query(plugin_name, 'users', query_params={ 'username' : user_creation_results['results'][0]['item'] }, return_objects=True ) }}" + + - assert: + that: + - user_objects | length() == 1 + - users_list['results'][0]['id'] == user_objects[0]['id'] + + - name: Loop over one user with the loop syntax + assert: + that: + - item['id'] == user_creation_results['results'][0]['id'] + loop: "{{ query(plugin_name, 'users', query_params={ 'username' : user_creation_results['results'][0]['item'] } ) }}" + loop_control: + label: "{{ item.id }}" + + - name: Get a page of users as just ids + set_fact: + users: "{{ query(plugin_name, 'users', query_params={ 'username__endswith': test_id, 'page_size': 2 }, return_ids=True ) }}" + + - name: Assert that user list has 2 ids only and that they are strings, not ints + assert: + that: + - users | length() == 2 + - user_creation_results['results'][0]['id'] not in users + - user_creation_results['results'][0]['id'] | string in users + + - name: Get all users of a system through next attribute + set_fact: + users: "{{ query(plugin_name, 'users', query_params={ 'username__endswith': test_id, 'page_size': 1 }, return_all=true ) }}" + + - assert: + that: + - users | length() >= 3 
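The tasks above alternate between `query()` and `lookup()` when calling the `controller_api` plugin. The list-oriented assertions rely on an Ansible distinction worth noting: `query()` always returns a list, while `lookup()` may join multiple results into a comma-separated string unless `wantlist=True` is passed. The following task is an editor-added illustrative sketch, not part of the upstream test suite; it reuses the `plugin_name` fact set earlier in this file.

```yaml
# Editor-added sketch, not part of the upstream test suite.
# Demonstrates why the list-oriented assertions above prefer query():
# query() is guaranteed to return a list, plain lookup() is not.
- name: Contrast query() and lookup() return types (illustrative only)
  debug:
    msg:
      - "query() returns a {{ query(plugin_name, 'ping') | type_debug }}"
      - "lookup() with wantlist=True returns a {{ lookup(plugin_name, 'ping', wantlist=True) | type_debug }}"
```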
+ + - name: Get all of the users created with a max_objects of 1 + set_fact: + users: "{{ lookup(plugin_name, 'users', query_params={ 'username__endswith': test_id, 'page_size': 1 }, return_all=true, max_objects=1 ) }}" + ignore_errors: true + register: max_user_errors + + - assert: + that: + - max_user_errors is failed + - "'List view at users returned 3 objects, which is more than the maximum allowed by max_objects' in max_user_errors.msg" + + - name: Get the ID of the first user created and verify that it is correct + assert: + that: "{{ query(plugin_name, 'users', query_params={ 'username' : user_creation_results['results'][0]['item'] }, return_ids=True)[0] }} == {{ user_creation_results['results'][0]['id'] }}" + + - name: Try to get an ID of someone who does not exist + set_fact: + failed_user_id: "{{ query(plugin_name, 'users', query_params={ 'username': 'john jacob jingleheimer schmidt' }, expect_one=True) }}" + register: result + ignore_errors: true + + - assert: + that: + - result is failed + - "'Expected one object from endpoint users' in result['msg']" + + - name: Lookup too many users + set_fact: + too_many_user_ids: " {{ query(plugin_name, 'users', query_params={ 'username__endswith': test_id }, expect_one=True) }}" + register: results + ignore_errors: true + + - assert: + that: + - results is failed + - "'Expected one object from endpoint users, but obtained 3' in results['msg']" + + - name: Get the ping page + set_fact: + ping_data: "{{ lookup(plugin_name, 'ping' ) }}" + register: results + + - assert: + that: + - results is succeeded + - "'active_node' in ping_data" + + - name: "Make sure that expect_objects fails on an API page" + set_fact: + my_var: "{{ lookup(plugin_name, 'settings/ui', expect_objects=True) }}" + ignore_errors: true + register: results + + - assert: + that: + - results is failed + - "'Did not obtain a list or detail view at settings/ui, and expect_objects or expect_one is set to True' in results.msg" + + # DOCS Example Tests + - 
name: Load the UI settings + set_fact: + controller_settings: "{{ lookup('awx.awx.controller_api', 'settings/ui') }}" + + - assert: + that: + - "'CUSTOM_LOGO' in controller_settings" + + - name: Display the usernames of all admin users + debug: + msg: "Admin users: {{ query('awx.awx.controller_api', 'users', query_params={ 'is_superuser': true }) | map(attribute='username') | join(', ') }}" + register: results + + - assert: + that: + - "'admin' in results.msg" + + - name: debug all organizations in a loop # use query to return a list + debug: + msg: "Organization description={{ item['description'] }} id={{ item['id'] }}" + loop: "{{ query('awx.awx.controller_api', 'organizations') }}" + loop_control: + label: "{{ item['name'] }}" + + - name: Make sure user 'john' is an org admin of the default org if the user exists + role: + organization: Default + role: admin + user: "{{ usernames[0] }}" + state: absent + register: role_revoke + when: "query('awx.awx.controller_api', 'users', query_params={ 'username': 'DNE_TESTING' }) | length == 1" + + - assert: + that: + - role_revoke is skipped + + - name: Create an inventory group with all 'foo' hosts + group: + name: "{{ group_name }}" + inventory: "Demo Inventory" + hosts: >- + {{ query( + 'awx.awx.controller_api', + 'hosts', + query_params={ 'name__endswith' : test_id, }, + ) | map(attribute='name') | list }} + register: group_creation + + - assert: + that: group_creation is changed + + always: + - name: Cleanup group + group: + name: "{{ group_name }}" + inventory: "Demo Inventory" + state: absent + + - name: Cleanup hosts + host: + name: "{{ item }}" + inventory: "Demo Inventory" + state: absent + loop: "{{ hosts }}" + + - name: Cleanup users + user: + username: "{{ item }}" + state: absent + loop: "{{ usernames }}" diff --git a/ansible_collections/awx/awx/tests/integration/targets/lookup_rruleset/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/lookup_rruleset/tasks/main.yml new file mode 100644 
index 00000000..fe5b3f75 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/lookup_rruleset/tasks/main.yml @@ -0,0 +1,342 @@ +--- +- name: Get our collection package + controller_meta: + register: controller_meta + +- name: Generate the name of our plugin + set_fact: + ruleset_plugin_name: "{{ controller_meta.prefix }}.schedule_rruleset" + rule_plugin_name: "{{ controller_meta.prefix }}.schedule_rrule" + + +- name: Call ruleset with no rules + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45') }}" + ignore_errors: True + register: results + +- assert: + that: + - results is failed + - "'You must include rules to be in the ruleset via the rules parameter' in results.msg" + + +- name: call ruleset with a missing frequency + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - interval: 1 + byweekday: 'sunday' + +- assert: + that: + - results is failed + - "'Rule 2 is missing a frequency' in results.msg" + + +- name: call ruleset with a missing frequency + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - interval: 1 + byweekday: 'sunday' + +- assert: + that: + - results is failed + - "'Rule 2 is missing a frequency' in results.msg" + + +- name: call rruleset with an invalid frequency + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - frequency: 'asdf' + interval: 1 + byweekday: 'sunday' + +- assert: + that: + - results is failed + - "'Frequency of rule 2 is invalid asdf' in results.msg" + + +- name: call rruleset with an invalid end_on + set_fact: + complex_rule: "{{ 
query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - frequency: 'day' + interval: 1 + byweekday: 'sunday' + end_on: 'a' + +- assert: + that: + - results is failed + - "'In rule 2 end_on must either be an integer or in the format YYYY-MM-DD [HH:MM:SS]' in results.msg" + + +- name: call rruleset with an invalid byweekday + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - frequency: 'day' + interval: 1 + byweekday: 'junk' + +- assert: + that: + - results is failed + - "'In rule 2 byweekday must only contain values' in results.msg" + + +- name: call rruleset with a monthly rule with invalid bymonthday (a) + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - frequency: 'month' + interval: 1 + bymonthday: 'a' + +- assert: + that: + - results is failed + - "'In rule 2 bymonthday must be between 1 and 31' in results.msg" + + +- name: call rruleset with a monthly rule with invalid bymonthday (-1) + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - frequency: 'month' + interval: 1 + bymonthday: '-1' + +- assert: + that: + - results is failed + - "'In rule 2 bymonthday must be between 1 and 31' in results.msg" + + +- name: call rruleset with a monthly rule with invalid bymonthday (32) + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - frequency: 'month' + interval: 1 + bymonthday: 
32 + +- assert: + that: + - results is failed + - "'In rule 2 bymonthday must be between 1 and 31' in results.msg" + + +- name: call rruleset with a monthly rule with invalid bysetpos (junk) + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - frequency: 'month' + interval: 1 + bysetpos: 'junk' + +- assert: + that: + - results is failed + - "'In rule 2 bysetpos must only contain values in first, second, third, fourth, last' in results.msg" + + +- name: call rruleset with an invalid timezone + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules, timezone='junk' ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - frequency: 'day' + interval: 1 + byweekday: 'sunday' + +- assert: + that: + - results is failed + - "'Timezone parameter is not valid' in results.msg" + + +- name: call rruleset with only exclude rules + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + include: False + - frequency: 'day' + interval: 1 + byweekday: 'sunday' + include: False + +- assert: + that: + - results is failed + - "'A ruleset must contain at least one RRULE' in results.msg" + + +- name: Every day except for Sundays + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules, timezone='UTC' ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - frequency: 'day' + interval: 1 + byweekday: 'sunday' + include: False + +- assert: + that: + - results is success + - "'DTSTART;TZID=UTC:20220430T103045 RRULE:FREQ=DAILY;INTERVAL=1 EXRULE:FREQ=DAILY;BYDAY=SU;INTERVAL=1' == complex_rule" + + +- name: Every day except for April 30th + 
set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2023-04-28 17:00:00', rules=rrules, timezone='UTC' ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'day' + interval: 1 + - frequency: 'day' + interval: 1 + bymonth: '4' + bymonthday: 30 + include: False + +- assert: + that: + - results is success + - "'DTSTART;TZID=UTC:20230428T170000 RRULE:FREQ=DAILY;INTERVAL=1 EXRULE:FREQ=DAILY;BYMONTH=4;BYMONTHDAY=30;INTERVAL=1' == complex_rule" + + +- name: Every 5 minutes but not on Mondays from 5-7pm + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules, timezone='UTC' ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'minute' + interval: 5 + - frequency: 'minute' + interval: 5 + byweekday: 'monday' + byhour: + - 17 + - 18 + include: False + +- assert: + that: + - results is success + - "'DTSTART;TZID=UTC:20220430T103045 RRULE:FREQ=MINUTELY;INTERVAL=5 EXRULE:FREQ=MINUTELY;INTERVAL=5;BYDAY=MO;BYHOUR=17,18' == complex_rule" + + +- name: Every 15 minutes Monday to Friday from 10:01am to 6:02pm (inclusive) + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules, timezone='UTC' ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'minute' + byweekday: + - monday + - tuesday + - wednesday + - thursday + - friday + interval: 15 + byhour: [10, 11, 12, 13, 14, 15, 16, 17, 18] + - frequency: 'minute' + interval: 1 + byweekday: "monday,tuesday,wednesday, thursday,friday" + byhour: 18 + byminute: "{{ range(3, 60) | list }}" + include: False + +- assert: + that: + - results is success + - "'DTSTART;TZID=UTC:20220430T103045 RRULE:FREQ=MINUTELY;INTERVAL=15;BYDAY=MO,TU,WE,TH,FR;BYHOUR=10,11,12,13,14,15,16,17,18 
EXRULE:FREQ=MINUTELY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=18;BYMINUTE=3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59;INTERVAL=1' == complex_rule" + + +- name: Any Saturday whose month day is between 12 and 18 + set_fact: + complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules, timezone='UTC' ) }}" + ignore_errors: True + register: results + vars: + rrules: + - frequency: 'month' + interval: 1 + byweekday: 'saturday' + bymonthday: "{{ range(12,19) | list }}" + +- assert: + that: + - results is success + - "'DTSTART;TZID=UTC:20220430T103045 RRULE:FREQ=MONTHLY;BYMONTHDAY=12,13,14,15,16,17,18;BYDAY=SA;INTERVAL=1' == complex_rule" diff --git a/ansible_collections/awx/awx/tests/integration/targets/notification_template/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/notification_template/tasks/main.yml new file mode 100644 index 00000000..27884569 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/notification_template/tasks/main.yml @@ -0,0 +1,213 @@ +--- +- name: Generate names + set_fact: + slack_not: "AWX-Collection-tests-notification_template-slack-not-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + webhook_not: "AWX-Collection-tests-notification_template-wehbook-not-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + email_not: "AWX-Collection-tests-notification_template-email-not-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + twillo_not: "AWX-Collection-tests-notification_template-twillo-not-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + pd_not: "AWX-Collection-tests-notification_template-pd-not-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + irc_not: "AWX-Collection-tests-notification_template-irc-not-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- 
name: Create Slack notification with custom messages + notification_template: + name: "{{ slack_not }}" + organization: Default + notification_type: slack + notification_configuration: + token: a_token + channels: + - general + messages: + started: + message: "{{ '{{' }} job_friendly_name {{' }}' }} {{ '{{' }} job.id {{' }}' }} started" + success: + message: "{{ '{{' }} job_friendly_name {{ '}}' }} completed in {{ '{{' }} job.elapsed {{ '}}' }} seconds" + error: + message: "{{ '{{' }} job_friendly_name {{ '}}' }} FAILED! Please look at {{ '{{' }} job.url {{ '}}' }}" + state: present + register: result + +- assert: + that: + - result is changed + +- name: Delete Slack notification + notification_template: + name: "{{ slack_not }}" + organization: Default + state: absent + register: result + +- assert: + that: + - result is changed + +- name: Add webhook notification + notification_template: + name: "{{ webhook_not }}" + organization: Default + notification_type: webhook + notification_configuration: + url: http://www.example.com/hook + headers: + X-Custom-Header: value123 + state: present + register: result + +- assert: + that: + - result is changed + +- name: Delete webhook notification + notification_template: + name: "{{ webhook_not }}" + organization: Default + state: absent + register: result + +- assert: + that: + - result is changed + +- name: Add email notification + notification_template: + name: "{{ email_not }}" + organization: Default + notification_type: email + notification_configuration: + username: user + password: s3cr3t + sender: tower@example.com + recipients: + - user1@example.com + host: smtp.example.com + port: 25 + use_tls: false + use_ssl: false + state: present + register: result + +- assert: + that: + - result is changed + +- name: Copy email notification + notification_template: + name: "copy_{{ email_not }}" + copy_from: "{{ email_not }}" + organization: Default + register: result + +- assert: + that: + - result.copied + +- name: Delete 
copied email notification + notification_template: + name: "copy_{{ email_not }}" + organization: Default + state: absent + register: result + +- assert: + that: + - result is changed + +- name: Delete email notification + notification_template: + name: "{{ email_not }}" + organization: Default + state: absent + register: result + +- assert: + that: + - result is changed + +- name: Add twilio notification + notification_template: + name: "{{ twillo_not }}" + organization: Default + notification_type: twilio + notification_configuration: + account_token: a_token + account_sid: a_sid + from_number: '+15551112222' + to_numbers: + - '+15553334444' + state: present + register: result + +- assert: + that: + - result is changed + +- name: Delete twilio notification + notification_template: + name: "{{ twillo_not }}" + organization: Default + state: absent + register: result + +- assert: + that: + - result is changed + +- name: Add PagerDuty notification + notification_template: + name: "{{ pd_not }}" + organization: Default + notification_type: pagerduty + notification_configuration: + token: a_token + subdomain: sub + client_name: client + service_key: a_key + state: present + register: result + +- assert: + that: + - result is changed + +- name: Delete PagerDuty notification + notification_template: + name: "{{ pd_not }}" + organization: Default + state: absent + register: result + +- assert: + that: + - result is changed + +- name: Add IRC notification + notification_template: + name: "{{ irc_not }}" + organization: Default + notification_type: irc + notification_configuration: + nickname: tower + password: s3cr3t + targets: + - user1 + port: 8080 + server: irc.example.com + use_ssl: false + state: present + register: result + +- assert: + that: + - result is changed + +- name: Delete IRC notification + notification_template: + name: "{{ irc_not }}" + organization: Default + state: absent + register: result + +- assert: + that: + - result is changed diff --git 
a/ansible_collections/awx/awx/tests/integration/targets/organization/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/organization/tasks/main.yml new file mode 100644 index 00000000..fcf34e47 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/organization/tasks/main.yml @@ -0,0 +1,119 @@ +--- +- name: Generate a test ID + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Generate an org name + set_fact: + org_name: "AWX-Collection-tests-organization-org-{{ test_id }}" + group_name1: "AWX-Collection-tests-instance_group-group1-{{ test_id }}" + +- name: Make sure {{ org_name }} is not there + organization: + name: "{{ org_name }}" + state: absent + register: result + +- name: "Create a new organization" + organization: + name: "{{ org_name }}" + galaxy_credentials: + - Ansible Galaxy + register: result + +- assert: + that: "result is changed" + +- name: "Make sure making the same org is not a change" + organization: + name: "{{ org_name }}" + register: result + +- assert: + that: + - "result is not changed" + +- name: Create an Instance Group + instance_group: + name: "{{ group_name1 }}" + state: present + register: result + +- assert: + that: + - "result is changed" + +- name: "Pass in all parameters" + organization: + name: "{{ org_name }}" + description: "A description" + instance_groups: + - "{{ group_name1 }}" + register: result + +- assert: + that: + - "result is changed" + +- name: "Change the description" + organization: + name: "{{ org_name }}" + description: "A new description" + register: result + +- assert: + that: + - "result is changed" + +- name: Delete the instance groups + instance_group: + name: "{{ group_name1 }}" + state: absent + +- name: "Rename the organization" + organization: + name: "{{ org_name }}" + new_name: "{{ org_name }}a" + register: result + +- assert: + that: + - "result is changed" + +- name: "Remove the organization" + organization: + 
name: "{{ org_name }}a" + state: absent + register: result + +- assert: + that: + - "result is changed" + +- name: "Remove a missing organization" + organization: + name: "{{ org_name }}" + state: absent + register: result + +- assert: + that: + - "result is not changed" + +# Test behaviour common to all controller modules +- name: Check that SSL is available and verify_ssl is enabled (task must fail) + organization: + name: Default + validate_certs: true + ignore_errors: true + register: check_ssl_is_used + +- name: Check that connection failed + assert: + that: + - "'CERTIFICATE_VERIFY_FAILED' in check_ssl_is_used['msg']" + +- name: Check that verify_ssl is disabled (task must not fail) + organization: + name: Default + validate_certs: false diff --git a/ansible_collections/awx/awx/tests/integration/targets/project/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/project/tasks/main.yml new file mode 100644 index 00000000..3a8889ee --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/project/tasks/main.yml @@ -0,0 +1,281 @@ +--- +- name: Generate names + set_fact: + project_name1: "AWX-Collection-tests-project-project1-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + project_name2: "AWX-Collection-tests-project-project2-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + project_name3: "AWX-Collection-tests-project-project3-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + jt1: "AWX-Collection-tests-project-jt1-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + scm_cred_name: "AWX-Collection-tests-project-scm-cred-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + org_name: "AWX-Collection-tests-project-org-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + cred_name: "AWX-Collection-tests-project-cred-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- block: + - name: Create an 
SCM Credential + credential: + name: "{{ scm_cred_name }}" + organization: Default + credential_type: Source Control + register: result + + - assert: + that: + - result is changed + + - name: Create a git project without credentials and wait + project: + name: "{{ project_name1 }}" + organization: Default + scm_type: git + scm_url: https://github.com/ansible/test-playbooks + wait: true + register: result + + - assert: + that: + - result is changed + + - name: Recreate the project to validate not changed + project: + name: "{{ project_name1 }}" + organization: Default + scm_type: git + scm_url: https://github.com/ansible/test-playbooks + wait: false + register: result + ignore_errors: true + + - assert: + that: + - result is not changed + + - name: Create organizations + organization: + name: "{{ org_name }}" + register: result + + - assert: + that: + - result is changed + + - name: Create credential + credential: + credential_type: Source Control + name: "{{ cred_name }}" + organization: "{{ org_name }}" + register: result + + - assert: + that: + - result is changed + + - name: Create a new test project in check_mode + project: + name: "{{ project_name2 }}" + organization: "{{ org_name }}" + scm_type: git + scm_url: https://github.com/ansible/test-playbooks + scm_credential: "{{ cred_name }}" + check_mode: true + + - name: "Copy project from {{ project_name1 }}" + project: + name: "{{ project_name2 }}" + copy_from: "{{ project_name1 }}" + organization: "{{ org_name }}" + scm_type: git + scm_credential: "{{ cred_name }}" + state: present + register: result + + # If this fails it may be because the check_mode task actually already created + # the project, or it could be because the module actually failed somehow + - assert: + that: + - result.copied + + - name: Check module fails with correct msg when given non-existing org as param + project: + name: "{{ project_name2 }}" + organization: Non_Existing_Org + scm_type: git + scm_url: 
https://github.com/ansible/test-playbooks + scm_credential: "{{ cred_name }}" + register: result + ignore_errors: true + + - assert: + that: + - "result is failed" + - "result is not changed" + - "'Non_Existing_Org' in result.msg" + - "result.total_results == 0" + + - name: Check module fails with correct msg when given non-existing credential as param + project: + name: "{{ project_name2 }}" + organization: "{{ org_name }}" + scm_type: git + scm_url: https://github.com/ansible/test-playbooks + scm_credential: Non_Existing_Credential + register: result + ignore_errors: true + + - assert: + that: + - "result is failed" + - "result is not changed" + - "'Non_Existing_Credential' in result.msg" + - "result.total_results == 0" + + - name: Create a git project without credentials without waiting + project: + name: "{{ project_name3 }}" + organization: Default + scm_type: git + scm_branch: empty_branch + scm_url: https://github.com/ansible/test-playbooks + allow_override: true + register: result + + - assert: + that: + - result is changed + + - name: Update the project and wait. 
Verify not changed as no change made to repo and refspec not changed + project: + name: "{{ project_name3 }}" + organization: Default + scm_type: git + scm_branch: empty_branch + scm_url: https://github.com/ansible/test-playbooks + allow_override: true + wait: true + update_project: true + register: result + + - assert: + that: + - result is not changed + + - name: Create a job template that overrides the project scm_branch + job_template: + name: "{{ jt1 }}" + project: "{{ project_name3 }}" + inventory: "Demo Inventory" + scm_branch: master + playbook: debug.yml + + - name: Launch "{{ jt1 }}" + job_launch: + job_template: "{{ jt1 }}" + register: result + + - assert: + that: + - result is changed + + - name: "wait for job {{ result.id }}" + job_wait: + job_id: "{{ result.id }}" + register: job + + - assert: + that: + - job is successful + + - name: Rename the project + project: + name: "{{ project_name3 }}" + new_name: "{{ project_name3 }}a" + organization: Default + state: present + register: result + + - assert: + that: + - result.changed + + - name: Set project to remote archive and test that it updates correctly. 
+ project: + name: "{{ project_name3 }}" + organization: Default + scm_type: archive + scm_url: https://github.com/ansible/test-playbooks/archive/refs/tags/1.0.0.tar.gz + wait: true + update_project: true + register: result + + - assert: + that: + - result is changed + + always: + - name: Delete the test job_template + job_template: + name: "{{ jt1 }}" + project: "{{ project_name3 }}" + inventory: "Demo Inventory" + state: absent + + - name: Delete the test project 3 + project: + name: "{{ project_name3 }}" + organization: Default + state: absent + + - name: Delete the test project 3a + project: + name: "{{ project_name3 }}a" + organization: Default + state: absent + + - name: Delete the test project 2 + project: + name: "{{ project_name2 }}" + organization: "{{ org_name }}" + state: absent + + - name: Delete the SCM Credential + credential: + name: "{{ scm_cred_name }}" + organization: Default + credential_type: Source Control + state: absent + register: result + + - assert: + that: + - result is changed + + - name: Delete the test project 1 + project: + name: "{{ project_name1 }}" + organization: Default + state: absent + register: result + + - assert: + that: + - result is changed + + - name: Delete credential + credential: + credential_type: Source Control + name: "{{ cred_name }}" + organization: "{{ org_name }}" + state: absent + register: result + + - assert: + that: + - result is changed + + - name: Delete the organization + organization: + name: "{{ org_name }}" + state: absent + register: result + + - assert: + that: + - result is changed diff --git a/ansible_collections/awx/awx/tests/integration/targets/project_manual/tasks/create_project_dir.yml b/ansible_collections/awx/awx/tests/integration/targets/project_manual/tasks/create_project_dir.yml new file mode 100644 index 00000000..9fb96072 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/project_manual/tasks/create_project_dir.yml @@ -0,0 +1,56 @@ +--- +- name: Load the UI 
settings + set_fact: + project_base_dir: "{{ controller_settings.project_base_dir }}" + vars: + controller_settings: "{{ lookup('awx.awx.controller_api', 'config/') }}" + +- inventory: + name: localhost + organization: Default + +- host: + name: localhost + inventory: localhost + variables: + ansible_connection: local + +- name: Create an unused SSH / Machine credential + credential: + name: dummy + credential_type: Machine + inputs: + ssh_key_data: | + -----BEGIN EC PRIVATE KEY----- + MHcCAQEEIIUl6R1xgzR6siIUArz4XBPtGZ09aetma2eWf1v3uYymoAoGCCqGSM49 + AwEHoUQDQgAENJNjgeZDAh/+BY860s0yqrLDprXJflY0GvHIr7lX3ieCtrzOMCVU + QWzw35pc5tvuP34SSi0ZE1E+7cVMDDOF3w== + -----END EC PRIVATE KEY----- + organization: Default + +- block: + - name: Add a path to a setting + settings: + name: AWX_ISOLATION_SHOW_PATHS + value: "[{{ project_base_dir }}]" + + - name: Create a directory for manual project + ad_hoc_command: + credential: dummy + inventory: localhost + job_type: run + module_args: "mkdir -p {{ project_base_dir }}/{{ project_dir_name }}" + module_name: command + wait: true + + always: + - name: Delete path from setting + settings: + name: AWX_ISOLATION_SHOW_PATHS + value: [] + + - name: Delete dummy credential + credential: + name: dummy + credential_type: Machine + state: absent diff --git a/ansible_collections/awx/awx/tests/integration/targets/project_manual/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/project_manual/tasks/main.yml new file mode 100644 index 00000000..3cd328b4 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/project_manual/tasks/main.yml @@ -0,0 +1,38 @@ +--- +- name: Generate random string for project + set_fact: + rand_string: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Generate manual project name + set_fact: + project_name: "Manual_Project_{{ rand_string }}" + +- name: Generate manual project dir name + set_fact: + project_dir_name: "proj_{{ rand_string }}" + +- 
name: Create a project directory for manual project + import_tasks: create_project_dir.yml + +- name: Create a manual project + project: + name: "{{ project_name }}" + organization: Default + scm_type: manual + local_path: "{{ project_dir_name }}" + register: result + +- assert: + that: + - "result is changed" + +- name: Delete a manual project + project: + name: "{{ project_name }}" + organization: Default + state: absent + register: result + +- assert: + that: + - "result is changed" diff --git a/ansible_collections/awx/awx/tests/integration/targets/project_update/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/project_update/tasks/main.yml new file mode 100644 index 00000000..4b08e685 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/project_update/tasks/main.yml @@ -0,0 +1,67 @@ +--- +- name: Generate a random string for test + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + when: test_id is not defined + +- name: Generate names + set_fact: + project_name1: "AWX-Collection-tests-project_update-project-{{ test_id }}" + +- name: Create a git project without credentials without waiting + project: + name: "{{ project_name1 }}" + organization: Default + scm_type: git + scm_url: https://github.com/ansible/test-playbooks + wait: false + register: project_create_result + +- assert: + that: + - project_create_result is changed + +- name: Update a project without waiting + project_update: + name: "{{ project_name1 }}" + organization: Default + wait: false + register: result + +- assert: + that: + - result is changed + +- name: Update a project and wait + project_update: + name: "{{ project_name1 }}" + organization: Default + wait: true + register: result + +- assert: + that: + - result is successful + +- name: Update a project by ID + project_update: + name: "{{ project_create_result.id }}" + organization: Default + wait: true + register: result + +- assert: + that: + - result is 
successful + - result is not changed + +- name: Delete the test project 1 + project: + name: "{{ project_name1 }}" + organization: Default + state: absent + register: result + +- assert: + that: + - result is changed diff --git a/ansible_collections/awx/awx/tests/integration/targets/role/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/role/tasks/main.yml new file mode 100644 index 00000000..a94f5341 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/role/tasks/main.yml @@ -0,0 +1,189 @@ +--- +- name: Generate a test id + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Generate names + set_fact: + username: "AWX-Collection-tests-role-user-{{ test_id }}" + project_name: "AWX-Collection-tests-role-project-1-{{ test_id }}" + jt1: "AWX-Collection-tests-role-jt1-{{ test_id }}" + jt2: "AWX-Collection-tests-role-jt2-{{ test_id }}" + wfjt_name: "AWX-Collection-tests-role-project-wfjt-{{ test_id }}" + +- block: + - name: Create a User + user: + first_name: Joe + last_name: User + username: "{{ username }}" + password: "{{ 65535 | random | to_uuid }}" + email: joe@example.org + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Create a project + project: + name: "{{ project_name }}" + organization: Default + scm_type: git + scm_url: https://github.com/ansible/test-playbooks + wait: true + register: project_info + + - assert: + that: + - project_info is changed + + - name: Create job templates + job_template: + name: "{{ item }}" + project: "{{ project_name }}" + inventory: "Demo Inventory" + playbook: become.yml + with_items: + - jt1 + - jt2 + register: result + + - assert: + that: + - "result is changed" + + - name: Add Joe to the update role of the default Project with lookup Organization + role: + user: "{{ username }}" + role: update + lookup_organization: Default + project: "Demo Project" + state: "{{ item }}" + register: result + 
with_items: + - "present" + - "absent" + + - assert: + that: + - "result is changed" + + - name: Add Joe to the new project by ID + role: + user: "{{ username }}" + role: update + project: "{{ project_info['id'] }}" + state: "{{ item }}" + register: result + with_items: + - "present" + - "absent" + + - assert: + that: + - "result is changed" + + - name: Add Joe as execution admin to Default Org. + role: + user: "{{ username }}" + role: execution_environment_admin + organizations: Default + state: "{{ item }}" + register: result + with_items: + - "present" + - "absent" + + - assert: + that: + - "result is changed" + + - name: Create a workflow + workflow_job_template: + name: test-role-workflow + organization: Default + state: present + + - name: Add Joe to workflow execute role + role: + user: "{{ username }}" + role: execute + workflow: test-role-workflow + job_templates: + - jt1 + - jt2 + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Add Joe to nonexistent job template execute role + role: + user: "{{ username }}" + role: execute + workflow: test-role-workflow + job_templates: + - non existent temp + state: present + register: result + ignore_errors: true + + - assert: + that: + - "'There were 1 missing items, missing items' in result.msg" + - "'non existent temp' in result.msg" + + - name: Add Joe to workflow execute role, no-op + role: + user: "{{ username }}" + role: execute + workflow: test-role-workflow + state: present + register: result + + - assert: + that: + - "result is not changed" + + - name: Add Joe to workflow approve role + role: + user: "{{ username }}" + role: approval + workflow: test-role-workflow + state: present + register: result + + - assert: + that: + - "result is changed" + + always: + - name: Delete a User + user: + username: "{{ username }}" + email: joe@example.org + state: absent + register: result + + - name: Delete job templates + job_template: + name: "{{ item }}" + project: "{{ 
project_name }}" + inventory: "Demo Inventory" + playbook: debug.yml + state: absent + with_items: + - jt1 + - jt2 + register: result + + - name: Delete the project + project: + name: "{{ project_name }}" + organization: Default + state: absent + register: result diff --git a/ansible_collections/awx/awx/tests/integration/targets/schedule/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/schedule/tasks/main.yml new file mode 100644 index 00000000..73343faf --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/schedule/tasks/main.yml @@ -0,0 +1,382 @@ +--- +- name: Generate a random string for test + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: generate random string for schedule + set_fact: + org_name: "AWX-Collection-tests-organization-org-{{ test_id }}" + sched1: "AWX-Collection-tests-schedule-sched1-{{ test_id }}" + sched2: "AWX-Collection-tests-schedule-sched2-{{ test_id }}" + cred1: "AWX-Collection-tests-schedule-cred1-{{ test_id }}" + proj1: "AWX-Collection-tests-schedule-proj1-{{ test_id }}" + proj2: "AWX-Collection-tests-schedule-proj2-{{ test_id }}" + jt1: "AWX-Collection-tests-schedule-jt1-{{ test_id }}" + jt2: "AWX-Collection-tests-schedule-jt1-{{ test_id }}" + ee1: "AWX-Collection-tests-schedule-ee1-{{ test_id }}" + label1: "AWX-Collection-tests-schedule-l1-{{ test_id }}" + label2: "AWX-Collection-tests-schedule-l2-{{ test_id }}" + ig1: "AWX-Collection-tests-schedule-ig1-{{ test_id }}" + ig2: "AWX-Collection-tests-schedule-ig2-{{ test_id }}" + slice_inventory: "AWX-Collection-tests-schedule-slice-inv-{{ test_id }}" + host_name: "AWX-Collection-tests-schedule-host-{{ test_id }}" + slice_num: 10 + +- block: + - name: Try to create without an rrule + schedule: + name: "{{ sched1 }}" + state: present + unified_job_template: "Demo Job Template" + enabled: true + register: result + ignore_errors: true + + - assert: + that: + - result is failed + - "'Unable to 
create schedule {{ sched1 }}' in result.msg" + + - name: Create with options that the JT does not support + schedule: + name: "{{ sched1 }}" + state: present + unified_job_template: "Demo Job Template" + rrule: "DTSTART:20191219T130551Z RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1" + description: "This hopefully will not work" + extra_data: + some: var + inventory: Demo Inventory + scm_branch: asdf1234 + job_type: run + job_tags: other_tags + skip_tags: some_tags + limit: node1 + diff_mode: true + verbosity: 4 + enabled: true + register: result + ignore_errors: true + + - assert: + that: + - result is failed + - "'Unable to create schedule {{ sched1 }}' in result.msg" + + - name: Build a real schedule + schedule: + name: "{{ sched1 }}" + state: present + unified_job_template: "Demo Job Template" + rrule: "DTSTART:20191219T130551Z RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1" + register: result + + - assert: + that: + - result is changed + + - name: Rebuild the same schedule + schedule: + name: "{{ sched1 }}" + state: present + unified_job_template: "Demo Job Template" + rrule: "DTSTART:20191219T130551Z RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1" + register: result + + - assert: + that: + - result is not changed + + - name: Create a Demo Project + project: + name: "{{ proj1 }}" + organization: Default + allow_override: true + state: present + scm_type: git + scm_url: https://github.com/ansible/ansible-tower-samples.git + + - name: "Create a new organization" + organization: + name: "{{ org_name }}" + + - name: Create a Demo Project in another org + project: + name: "{{ proj2 }}" + organization: "{{ org_name }}" + allow_override: true + state: present + scm_type: git + scm_url: https://github.com/ansible/ansible-tower-samples.git + + - name: Create Credential1 + credential: + name: "{{ cred1 }}" + organization: Default + credential_type: Red Hat Ansible Automation Platform + register: cred1_result + + - name: Create Job Template with all prompts + job_template: + name: "{{ jt1 }}" + 
organization: Default + project: "{{ proj1 }}" + inventory: Demo Inventory + playbook: hello_world.yml + ask_variables_on_launch: true + ask_inventory_on_launch: true + ask_scm_branch_on_launch: true + ask_credential_on_launch: true + ask_job_type_on_launch: true + ask_tags_on_launch: true + ask_skip_tags_on_launch: true + ask_limit_on_launch: true + ask_diff_mode_on_launch: true + ask_verbosity_on_launch: true + ask_execution_environment_on_launch: true + ask_forks_on_launch: true + ask_instance_groups_on_launch: true + ask_job_slice_count_on_launch: true + ask_labels_on_launch: true + ask_timeout_on_launch: true + job_type: run + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Create labels + label: + name: "{{ item }}" + organization: "{{ org_name }}" + loop: + - "{{ label1 }}" + - "{{ label2 }}" + + - name: Create an execution environment + execution_environment: + name: "{{ ee1 }}" + image: "junk" + + - name: Create instance groups + instance_group: + name: "{{ item }}" + loop: + - "{{ ig1 }}" + - "{{ ig2 }}" + + - name: Create proper inventory for slice count + inventory: + name: "{{ slice_inventory }}" + organization: "{{ org_name }}" + state: present + register: result + + - name: Create a Host + host: + name: "{{ host_name }}-{{ item }}" + inventory: "{{ slice_inventory }}" + state: present + variables: + ansible_connection: local + loop: "{{ range(slice_num)|list }}" + register: result + + - name: Create with options that the JT does support + schedule: + name: "{{ sched2 }}" + state: present + unified_job_template: "{{ jt1 }}" + rrule: "DTSTART:20191219T130551Z RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1" + description: "This hopefully will work" + extra_data: + some: var + inventory: "{{ slice_inventory }}" + scm_branch: asdf1234 + credentials: + - "{{ cred1 }}" + job_type: run + job_tags: other_tags + skip_tags: some_tags + limit: node1 + diff_mode: true + verbosity: 4 + enabled: true + execution_environment: "{{ 
ee1 }}" + forks: 10 + instance_groups: + - "{{ ig1 }}" + - "{{ ig2 }}" + job_slice_count: "{{ slice_num }}" + labels: + - "{{ label1 }}" + - "{{ label2 }}" + timeout: 10 + register: result + ignore_errors: true + + - assert: + that: + - "result is changed" + + - name: Reset some options + schedule: + name: "{{ sched2 }}" + state: present + execution_environment: "" + forks: 1 + instance_groups: [] + job_slice_count: 1 + labels: [] + timeout: 60 + register: result + ignore_errors: true + + - assert: + that: + - "result is changed" + + - name: Disable a schedule + schedule: + name: "{{ sched1 }}" + unified_job_template: "Demo Job Template" + state: present + enabled: "false" + register: result + + - assert: + that: + - result is changed + + - name: Create a second Job Template in new org + job_template: + name: "{{ jt2 }}" + project: "{{ proj2 }}" + inventory: Demo Inventory + playbook: hello_world.yml + job_type: run + state: present + + - name: Build a schedule with a job template's name in two orgs + schedule: + name: "{{ sched1 }}" + state: present + unified_job_template: "{{ jt2 }}" + rrule: "DTSTART:20191219T130551Z RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1" + register: result + + - name: Verify we can't find the schedule without the UJT lookup + schedule: + name: "{{ sched1 }}" + state: present + rrule: "DTSTART:20201219T130551Z RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1" + register: result + ignore_errors: true + + - assert: + that: + - result is failed + + - name: Verify we can find the schedule with the UJT lookup and delete it + schedule: + name: "{{ sched1 }}" + state: absent + unified_job_template: "{{ jt2 }}" + register: result + + - assert: + that: + - result is changed + + always: + - name: Delete the schedules + schedule: + name: "{{ item }}" + state: absent + loop: + - "{{ sched1 }}" + - "{{ sched2 }}" + ignore_errors: True + + - name: Delete the jt1 + job_template: + name: "{{ jt1 }}" + project: "{{ proj1 }}" + playbook: hello_world.yml + state: absent + 
ignore_errors: True + + - name: Delete the jt2 + job_template: + name: "{{ jt2 }}" + project: "{{ proj2 }}" + playbook: hello_world.yml + state: absent + ignore_errors: True + + - name: Delete the Project2 + project: + name: "{{ proj2 }}" + organization: "{{ org_name }}" + state: absent + scm_type: git + scm_url: https://github.com/ansible/ansible-tower-samples.git + ignore_errors: True + + - name: Delete the Project1 + project: + name: "{{ proj1 }}" + organization: Default + state: absent + scm_type: git + scm_url: https://github.com/ansible/ansible-tower-samples.git + ignore_errors: True + + - name: Delete Credential1 + credential: + name: "{{ cred1 }}" + organization: Default + credential_type: Red Hat Ansible Automation Platform + state: absent + ignore_errors: True + + # Labels can not be deleted + + - name: Delete an execution environment + execution_environment: + name: "{{ ee1 }}" + image: "junk" + state: absent + ignore_errors: True + + - name: Delete instance groups + instance_group: + name: "{{ item }}" + state: absent + loop: + - "{{ ig1 }}" + - "{{ ig2 }}" + ignore_errors: True + + - name: "Remove the organization" + organization: + name: "{{ org_name }}" + state: absent + ignore_errors: True + + - name: "Delete slice inventory" + inventory: + name: "{{ slice_inventory }}" + organization: "{{ org_name }}" + state: absent + ignore_errors: True + + - name: Delete slice hosts + host: + name: "{{ host_name }}-{{ item }}" + inventory: "{{ slice_inventory }}" + state: absent + loop: "{{ range(slice_num)|list }}" + ignore_errors: True diff --git a/ansible_collections/awx/awx/tests/integration/targets/schedule_rrule/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/schedule_rrule/tasks/main.yml new file mode 100644 index 00000000..bf416b81 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/schedule_rrule/tasks/main.yml @@ -0,0 +1,50 @@ +--- +- name: Get our collection package + controller_meta: + register: 
controller_meta + +- name: Generate the name of our plugin + set_fact: + plugin_name: "{{ controller_meta.prefix }}.schedule_rrule" + +- name: Test too many params (failure from validation of terms) + debug: + msg: "{{ query(plugin_name, 'none', 'weekly', start_date='2020-4-16 03:45:07') }}" + ignore_errors: true + register: result + +- assert: + that: + - result is failed + - "'You may only pass one schedule type in at a time' in result.msg" + +- name: Test invalid frequency (failure from validation of term) + debug: + msg: "{{ query(plugin_name, 'john', start_date='2020-4-16 03:45:07') }}" + ignore_errors: true + register: result + +- assert: + that: + - result is failed + - "'Frequency of john is invalid' in result.msg" + +- name: Test an invalid start date (generic failure case from get_rrule) + debug: + msg: "{{ query(plugin_name, 'none', start_date='invalid') }}" + ignore_errors: true + register: result + +- assert: + that: + - result is failed + - "'Parameter start_date must be in the format YYYY-MM-DD' in result.msg" + +- name: Test end_on as count (generic success case) + debug: + msg: "{{ query(plugin_name, 'minute', start_date='2020-4-16 03:45:07', end_on='2') }}" + register: result + +- assert: + that: + - result.msg == 'DTSTART;TZID=America/New_York:20200416T034507 RRULE:FREQ=MINUTELY;COUNT=2;INTERVAL=1' diff --git a/ansible_collections/awx/awx/tests/integration/targets/settings/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/settings/tasks/main.yml new file mode 100644 index 00000000..65ae45c6 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/settings/tasks/main.yml @@ -0,0 +1,87 @@ +--- +- name: Set the value of AWX_ISOLATION_SHOW_PATHS to a baseline + settings: + name: AWX_ISOLATION_SHOW_PATHS + value: '["/var/lib/awx/projects/"]' + +- name: Set the value of AWX_ISOLATION_SHOW_PATHS to get an error back from the controller + settings: + settings: + AWX_ISOLATION_SHOW_PATHS: + 'not': 'a valid' + 'tower': 
'setting' + register: result + ignore_errors: true + +- assert: + that: + - "result is failed" + +- name: Set the value of AWX_ISOLATION_SHOW_PATHS + settings: + name: AWX_ISOLATION_SHOW_PATHS + value: '["/var/lib/awx/projects/", "/tmp"]' + register: result + +- assert: + that: + - "result is changed" + +- name: Attempt to set the value of AWX_ISOLATION_BASE_PATH to what it already is + settings: + name: AWX_ISOLATION_BASE_PATH + value: /tmp + register: result + +- debug: + msg: "{{ result }}" + +- assert: + that: + - "result is not changed" + +- name: Apply a single setting via settings + settings: + name: AWX_ISOLATION_SHOW_PATHS + value: '["/var/lib/awx/projects/", "/var/tmp"]' + register: result + +- assert: + that: + - "result is changed" + +- name: Apply multiple setting via settings with no change + settings: + settings: + AWX_ISOLATION_BASE_PATH: /tmp + AWX_ISOLATION_SHOW_PATHS: ["/var/lib/awx/projects/", "/var/tmp"] + register: result + +- debug: + msg: "{{ result }}" + +- assert: + that: + - "result is not changed" + +- name: Apply multiple setting via settings with change + settings: + settings: + AWX_ISOLATION_BASE_PATH: /tmp + AWX_ISOLATION_SHOW_PATHS: [] + register: result + +- assert: + that: + - "result is changed" + +- name: Handle an omit value + settings: + name: AWX_ISOLATION_BASE_PATH + value: '{{ junk_var | default(omit) }}' + register: result + ignore_errors: true + +- assert: + that: + - "'Unable to update settings' in result.msg" diff --git a/ansible_collections/awx/awx/tests/integration/targets/team/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/team/tasks/main.yml new file mode 100644 index 00000000..f220d919 --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/team/tasks/main.yml @@ -0,0 +1,57 @@ +--- +- name: Generate names + set_fact: + team_name: "AWX-Collection-tests-team-team-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Attempt to add a team to a 
non-existent Organization + team: + name: Test Team + organization: Missing_Organization + state: present + register: result + ignore_errors: true + +- name: Assert a meaningful error was provided for the failed team creation + assert: + that: + - "result is failed" + - "result is not changed" + - "'Missing_Organization' in result.msg" + - "result.total_results == 0" + +- name: Create a team + team: + name: "{{ team_name }}" + organization: Default + register: result + +- assert: + that: + - "result is changed" + +- name: Delete a team + team: + name: "{{ team_name }}" + organization: Default + state: absent + register: result + +- assert: + that: + - "result is changed" + +- name: Check module fails with correct msg + team: + name: "{{ team_name }}" + organization: Non_Existing_Org + state: present + register: result + ignore_errors: true + +- name: Lookup of the related organization should cause a failure + assert: + that: + - "result is failed" + - "result is not changed" + - "'Non_Existing_Org' in result.msg" + - "result.total_results == 0" diff --git a/ansible_collections/awx/awx/tests/integration/targets/token/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/token/tasks/main.yml new file mode 100644 index 00000000..f13bc6bc --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/token/tasks/main.yml @@ -0,0 +1,110 @@ +--- +- name: Generate names + set_fact: + token_description: "AWX-Collection-tests-token-description-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Try to use a token as a dict which is missing the token parameter + job_list: + controller_oauthtoken: + not_token: "This has no token entry" + register: results + ignore_errors: true + +- assert: + that: + - results is failed + - '"The provided dict in controller_oauthtoken did not properly contain the token entry" == results.msg' + +- name: Try to use a token as a list + job_list: + controller_oauthtoken: + - dummy_token + 
register: results + ignore_errors: true + +- assert: + that: + - results is failed + - '"The provided controller_oauthtoken type was not valid (list). Valid options are str or dict." == results.msg' + +- name: Try to delete a token with no existing_token or existing_token_id + token: + state: absent + register: results + ignore_errors: true + +- assert: + that: + - results is failed + # We don't assert a message here because it's handled by ansible + +- name: Try to delete a token with both existing_token and existing_token_id + token: + existing_token: + id: 1234 + existing_token_id: 1234 + state: absent + register: results + ignore_errors: true + +- assert: + that: + - results is failed + # We don't assert a message here because it's handled by ansible + + +- block: + - name: Create a Token + token: + description: '{{ token_description }}' + scope: "write" + state: present + register: new_token + + - name: Validate our token works by token + job_list: + controller_oauthtoken: "{{ controller_token.token }}" + register: job_list + + - name: Validate our token works by object + job_list: + controller_oauthtoken: "{{ controller_token }}" + register: job_list + + always: + - name: Delete our Token with our own token + token: + existing_token: "{{ controller_token }}" + controller_oauthtoken: "{{ controller_token }}" + state: absent + when: controller_token is defined + register: results + + - assert: + that: + - results is changed or results is skipped + +- block: + - name: Create a second token + token: + description: '{{ token_description }}' + scope: "write" + state: present + register: results + + - assert: + that: + - results is changed + + always: + - name: Delete the second Token with our own token + token: + existing_token_id: "{{ controller_token['id'] }}" + controller_oauthtoken: "{{ controller_token }}" + state: absent + when: controller_token is defined + register: results + + - assert: + that: + - results is changed or results is skipped diff --git 
a/ansible_collections/awx/awx/tests/integration/targets/user/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/user/tasks/main.yml new file mode 100644 index 00000000..1d5cc5de --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/user/tasks/main.yml @@ -0,0 +1,301 @@ +--- +- name: Generate names + set_fact: + username: "AWX-Collection-tests-user-user-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Create a User + user: + username: "{{ username }}" + first_name: Joe + password: "{{ 65535 | random | to_uuid }}" + state: present + register: result + +- assert: + that: + - "result is changed" + +- name: Change a User by ID + user: + username: "{{ result.id }}" + last_name: User + email: joe@example.org + state: present + register: result + +- assert: + that: + - "result is changed" + +- name: Check idempotency + user: + username: "{{ username }}" + first_name: Joe + last_name: User + register: result + +- assert: + that: + - "result is not changed" + +- name: Rename a User + user: + username: "{{ username }}" + new_username: "{{ username }}-renamed" + email: joe@example.org + register: result + +- assert: + that: + - "result is changed" + +- name: Delete a User + user: + username: "{{ username }}-renamed" + email: joe@example.org + state: absent + register: result + +- assert: + that: + - "result is changed" + +- name: Create an Auditor + user: + first_name: Joe + last_name: Auditor + username: "{{ username }}" + password: "{{ 65535 | random | to_uuid }}" + email: joe@example.org + state: present + auditor: true + register: result + +- assert: + that: + - "result is changed" + +- name: Delete an Auditor + user: + username: "{{ username }}" + email: joe@example.org + state: absent + register: result + +- assert: + that: + - "result is changed" + +- name: Create a Superuser + user: + first_name: Joe + last_name: Super + username: "{{ username }}" + password: "{{ 65535 | random | to_uuid }}" + 
email: joe@example.org + state: present + superuser: true + register: result + +- assert: + that: + - "result is changed" + +- name: Delete a Superuser + user: + username: "{{ username }}" + email: joe@example.org + state: absent + register: result + +- assert: + that: + - "result is changed" + +- name: Test SSL parameter + user: + first_name: Joe + last_name: User + username: "{{ username }}" + password: "{{ 65535 | random | to_uuid }}" + email: joe@example.org + state: present + validate_certs: true + controller_host: http://foo.invalid + ignore_errors: true + register: result + +- assert: + that: + - "'Unable to resolve controller_host' in result.msg or + 'Can not verify ssl with non-https protocol' in result.exception" + +- block: + - name: Generate a test ID + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + + - name: Generate an org name + set_fact: + org_name: "AWX-Collection-tests-organization-org-{{ test_id }}" + + - name: Make sure {{ org_name }} is not there + organization: + name: "{{ org_name }}" + state: absent + register: result + + - name: Create a new Organization + organization: + name: "{{ org_name }}" + galaxy_credentials: + - Ansible Galaxy + register: result + + - assert: + that: "result is changed" + + - name: Create a User to become admin of an organization {{ org_name }} + user: + username: "{{ username }}-orgadmin" + password: "{{ username }}-orgadmin" + state: present + organization: "{{ org_name }}" + register: result + + - assert: + that: + - "result is changed" + + - name: Add the user {{ username }}-orgadmin as an admin of the organization {{ org_name }} + role: + user: "{{ username }}-orgadmin" + role: admin + organization: "{{ org_name }}" + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Create a User as {{ username }}-orgadmin without using an organization (must fail) + user: + controller_username: "{{ username }}-orgadmin" + 
controller_password: "{{ username }}-orgadmin" + username: "{{ username }}" + first_name: Joe + password: "{{ 65535 | random | to_uuid }}" + state: present + register: result + ignore_errors: true + + - assert: + that: + - "result is failed" + + - name: Create a User as {{ username }}-orgadmin using an organization + user: + controller_username: "{{ username }}-orgadmin" + controller_password: "{{ username }}-orgadmin" + username: "{{ username }}" + first_name: Joe + password: "{{ 65535 | random | to_uuid }}" + state: present + organization: "{{ org_name }}" + register: result + + - assert: + that: + - "result is changed" + + - name: Change a User as {{ username }}-orgadmin by ID using an organization + user: + controller_username: "{{ username }}-orgadmin" + controller_password: "{{ username }}-orgadmin" + username: "{{ result.id }}" + last_name: User + email: joe@example.org + state: present + organization: "{{ org_name }}" + register: result + + - assert: + that: + - "result is changed" + + - name: Check idempotency as {{ username }}-orgadmin using an organization + user: + controller_username: "{{ username }}-orgadmin" + controller_password: "{{ username }}-orgadmin" + username: "{{ username }}" + first_name: Joe + last_name: User + organization: "{{ org_name }}" + register: result + + - assert: + that: + - "result is not changed" + + - name: Rename a User as {{ username }}-orgadmin using an organization + user: + controller_username: "{{ username }}-orgadmin" + controller_password: "{{ username }}-orgadmin" + username: "{{ username }}" + new_username: "{{ username }}-renamed" + email: joe@example.org + organization: "{{ org_name }}" + register: result + + - assert: + that: + - "result is changed" + + - name: Delete a User as {{ username }}-orgadmin using an organization + user: + controller_username: "{{ username }}-orgadmin" + controller_password: "{{ username }}-orgadmin" + username: "{{ username }}-renamed" + email: joe@example.org + state: absent + 
organization: "{{ org_name }}" + register: result + + - assert: + that: + - "result is changed" + + - name: Remove the user {{ username }}-orgadmin as an admin of the organization {{ org_name }} + role: + user: "{{ username }}-orgadmin" + role: admin + organization: "{{ org_name }}" + state: absent + register: result + + - assert: + that: + - "result is changed" + + - name: Delete the User {{ username }}-orgadmin + user: + username: "{{ username }}-orgadmin" + password: "{{ username }}-orgadmin" + state: absent + organization: "{{ org_name }}" + register: result + + - assert: + that: + - "result is changed" + + - name: Delete the Organization {{ org_name }} + organization: + name: "{{ org_name }}" + state: absent + register: result + + - assert: + that: "result is changed" +... diff --git a/ansible_collections/awx/awx/tests/integration/targets/workflow_job_template/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/workflow_job_template/tasks/main.yml new file mode 100644 index 00000000..1477193e --- /dev/null +++ b/ansible_collections/awx/awx/tests/integration/targets/workflow_job_template/tasks/main.yml @@ -0,0 +1,951 @@ +--- +- name: Generate a random string for names + set_fact: + test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + +- name: Generate random names for test objects + set_fact: + org_name: "AWX-Collection-tests-organization-org-{{ test_id }}" + scm_cred_name: "AWX-Collection-tests-workflow_job_template-scm-cred-{{ test_id }}" + demo_project_name: "AWX-Collection-tests-workflow_job_template-proj-{{ test_id }}" + demo_project_name_2: "AWX-Collection-tests-workflow_job_template-proj-{{ test_id }}_2" + jt1_name: "AWX-Collection-tests-workflow_job_template-jt1-{{ test_id }}" + jt2_name: "AWX-Collection-tests-workflow_job_template-jt2-{{ test_id }}" + approval_node_name: "AWX-Collection-tests-workflow_approval_node-{{ test_id }}" + lab1: "AWX-Collection-tests-job_template-lab1-{{ test_id }}" + wfjt_name: 
"AWX-Collection-tests-workflow_job_template-wfjt-{{ test_id }}" + webhook_wfjt_name: "AWX-Collection-tests-workflow_job_template-webhook-wfjt-{{ test_id }}" + email_not: "AWX-Collection-tests-job_template-email-not-{{ test_id }}" + webhook_notification: "AWX-Collection-tests-notification_template-webhook-not-{{ test_id }}" + project_inv: "AWX-Collection-tests-inventory_source-inv-project-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + project_inv_source: "AWX-Collection-tests-inventory_source-inv-source-project-{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}" + github_webhook_credential_name: "AWX-Collection-tests-credential-webhook-{{ test_id }}_github" + ee1: "AWX-Collection-tests-workflow_job_template-ee1-{{ test_id }}" + label1: "AWX-Collection-tests-workflow_job_template-l1-{{ test_id }}" + label2: "AWX-Collection-tests-workflow_job_template-l2-{{ test_id }}" + ig1: "AWX-Collection-tests-workflow_job_template-ig1-{{ test_id }}" + ig2: "AWX-Collection-tests-workflow_job_template-ig2-{{ test_id }}" + host1: "AWX-Collection-tests-workflow_job_template-h1-{{ test_id }}" + +- block: + - name: "Create a new organization" + organization: + name: "{{ org_name }}" + galaxy_credentials: + - Ansible Galaxy + register: result + + - name: Create Credentials + credential: + name: "{{ item.name }}" + organization: Default + credential_type: "{{ item.type }}" + register: result + loop: + - name: "{{ scm_cred_name }}" + type: Source Control + - name: "{{ github_webhook_credential_name }}" + type: GitHub Personal Access Token + + - assert: + that: + - "result is changed" + + - name: Add email notification + notification_template: + name: "{{ email_not }}" + organization: Default + notification_type: email + notification_configuration: + username: user + password: s3cr3t + sender: tower@example.com + recipients: + - user1@example.com + host: smtp.example.com + port: 25 + use_tls: false + use_ssl: false + state: present + + - name: Add 
webhook notification + notification_template: + name: "{{ webhook_notification }}" + organization: Default + notification_type: webhook + notification_configuration: + url: http://www.example.com/hook + headers: + X-Custom-Header: value123 + state: present + register: result + + - name: Create Labels + label: + name: "{{ lab1 }}" + organization: "{{ item }}" + loop: + - Default + - "{{ org_name }}" + + - name: Create a Demo Project + project: + name: "{{ demo_project_name }}" + organization: Default + state: present + scm_type: git + scm_url: https://github.com/ansible/ansible-tower-samples.git + scm_credential: "{{ scm_cred_name }}" + register: result + + - assert: + that: + - "result is changed" + + - name: Create a 2nd Demo Project in another org + project: + name: "{{ demo_project_name_2 }}" + organization: "{{ org_name }}" + state: present + scm_type: git + scm_url: https://github.com/ansible/ansible-tower-samples.git + scm_credential: "{{ scm_cred_name }}" + register: result + + - assert: + that: + - "result is changed" + + - name: Create a 3rd Demo Project in another org with inventory source name + project: + name: "{{ project_inv_source }}" + organization: "{{ org_name }}" + state: present + scm_type: git + scm_url: https://github.com/ansible/ansible-tower-samples.git + scm_credential: "{{ scm_cred_name }}" + register: result + + - assert: + that: + - "result is changed" + + - name: Add an inventory + inventory: + description: Test inventory + organization: Default + name: "{{ project_inv }}" + + - name: Create a source inventory + inventory_source: + name: "{{ project_inv_source }}" + description: Source for Test inventory + inventory: "{{ project_inv }}" + source_project: "{{ demo_project_name }}" + source_path: "/inventories/inventory.ini" + overwrite: true + source: scm + register: project_inv_source_result + + - assert: + that: + - "project_inv_source_result is changed" + + - name: Add a node to demo inventory so we can use a slice count properly + 
host: + name: "{{ host1 }}" + inventory: Demo Inventory + variables: + ansible_connection: local + register: result + + - assert: + that: + - "result is changed" + + - name: Create a Job Template + job_template: + name: "{{ jt1_name }}" + project: "{{ demo_project_name }}" + inventory: Demo Inventory + playbook: hello_world.yml + job_type: run + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Create a second Job Template + job_template: + name: "{{ jt2_name }}" + project: "{{ demo_project_name }}" + inventory: Demo Inventory + playbook: hello_world.yml + job_type: run + state: present + register: result + + - assert: + that: + - "result is changed" + + - name: Create a second Job Template in new org + job_template: + name: "{{ jt2_name }}" + project: "{{ demo_project_name_2 }}" + inventory: Demo Inventory + playbook: hello_world.yml + job_type: run + state: present + ask_execution_environment_on_launch: true + ask_forks_on_launch: true + ask_instance_groups_on_launch: true + ask_timeout_on_launch: true + ask_job_slice_count_on_launch: true + ask_labels_on_launch: true + register: jt2_name_result + + - assert: + that: + - "jt2_name_result is changed" + + - name: Add a Survey to second Job Template + job_template: + name: "{{ jt2_name }}" + organization: Default + project: "{{ demo_project_name }}" + inventory: Demo Inventory + playbook: hello_world.yml + job_type: run + state: present + survey_enabled: true + survey_spec: '{"spec": [{"index": 0, "question_name": "my question?", "default": "mydef", "variable": "myvar", "type": "text", "required": false}], "description": "test", "name": "test"}' + ask_execution_environment_on_launch: true + ask_forks_on_launch: true + ask_instance_groups_on_launch: true + ask_timeout_on_launch: true + ask_job_slice_count_on_launch: true + ask_labels_on_launch: true + register: result + + - assert: + that: + - "result is changed" + + - name: Create a workflow job template + 
+      workflow_job_template:
+        name: "{{ wfjt_name }}"
+        organization: Default
+        inventory: Demo Inventory
+        extra_vars: {'foo': 'bar', 'another-foo': {'barz': 'bar2'}}
+        labels:
+          - "{{ lab1 }}"
+        ask_inventory_on_launch: true
+        ask_scm_branch_on_launch: true
+        ask_limit_on_launch: true
+        ask_tags_on_launch: true
+        ask_variables_on_launch: true
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Create a workflow job template with bad label
+      workflow_job_template:
+        name: "{{ wfjt_name }}"
+        organization: Default
+        inventory: Demo Inventory
+        extra_vars: {'foo': 'bar', 'another-foo': {'barz': 'bar2'}}
+        labels:
+          - label_bad
+        ask_inventory_on_launch: true
+        ask_scm_branch_on_launch: true
+        ask_limit_on_launch: true
+        ask_tags_on_launch: true
+        ask_variables_on_launch: true
+      register: bad_label_results
+      ignore_errors: true
+
+    - assert:
+        that:
+          - "bad_label_results.msg == 'Could not find label entry with name label_bad'"
+
+    # Turn off ask_* settings to test that the issue/10057 has been fixed
+    - name: Turn ask_* settings OFF
+      tower_workflow_job_template:
+        name: "{{ wfjt_name }}"
+        ask_inventory_on_launch: false
+        ask_scm_branch_on_launch: false
+        ask_limit_on_launch: false
+        ask_tags_on_launch: false
+        ask_variables_on_launch: false
+        state: present
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Create labels
+      label:
+        name: "{{ item }}"
+        organization: "{{ org_name }}"
+      loop:
+        - "{{ label1 }}"
+        - "{{ label2 }}"
+
+    - name: Create an execution environment
+      execution_environment:
+        name: "{{ ee1 }}"
+        image: "junk"
+
+    - name: Create instance groups
+      instance_group:
+        name: "{{ item }}"
+      loop:
+        - "{{ ig1 }}"
+        - "{{ ig2 }}"
+
+    # Node actions do what the schema command used to do
+    - name: Create leaf node
+      workflow_job_template_node:
+        identifier: leaf
+        unified_job_template: "{{ jt2_name }}"
+        lookup_organization: "{{ org_name }}"
+        workflow: "{{ wfjt_name }}"
+        execution_environment: "{{ ee1 }}"
+        forks: 12
+        instance_groups:
+          - "{{ ig1 }}"
+          - "{{ ig2 }}"
+        job_slice_count: 2
+        labels:
+          - "{{ label1 }}"
+          - "{{ label2 }}"
+        timeout: 23
+      register: results
+
+    - assert:
+        that:
+          - "results is changed"
+
+    - name: Update prompts on leaf node
+      workflow_job_template_node:
+        identifier: leaf
+        unified_job_template: "{{ jt2_name }}"
+        lookup_organization: "{{ org_name }}"
+        workflow: "{{ wfjt_name }}"
+        execution_environment: ""
+        forks: 1
+        instance_groups: []
+        job_slice_count: 1
+        labels: []
+        timeout: 10
+      register: results
+
+    - assert:
+        that:
+          - "results is changed"
+
+    - name: Create root node
+      workflow_job_template_node:
+        identifier: root
+        unified_job_template: "{{ jt1_name }}"
+        workflow: "{{ wfjt_name }}"
+
+    - name: Fail if no name is set for approval
+      workflow_job_template_node:
+        identifier: approval_test
+        approval_node:
+          description: "{{ approval_node_name }}"
+        workflow: "{{ wfjt_name }}"
+      register: no_name_results
+      ignore_errors: true
+
+    - assert:
+        that:
+          - "no_name_results.msg == 'Approval node name is required to create approval node.'"
+
+    - name: Fail if absent and no identifier set
+      workflow_job_template_node:
+        approval_node:
+          description: "{{ approval_node_name }}"
+        workflow: "{{ wfjt_name }}"
+        state: absent
+      register: no_identifier_results
+      ignore_errors: true
+
+    - assert:
+        that:
+          - "no_identifier_results.msg == 'missing required arguments: identifier'"
+
+    - name: Fail if present and no unified job template set
+      workflow_job_template_node:
+        identifier: approval_test
+        workflow: "{{ wfjt_name }}"
+      register: no_unified_results
+      ignore_errors: true
+
+    - assert:
+        that:
+          - "no_unified_results.msg == 'state is present but any of the following are missing: unified_job_template, approval_node, success_nodes, always_nodes, failure_nodes'"
+
+    - name: Create approval node
+      workflow_job_template_node:
+        identifier: approval_test
+        approval_node:
+          name: "{{ approval_node_name }}"
+          timeout: 900
+        workflow: "{{ wfjt_name }}"
+
+    - name: Create link for root node
+      workflow_job_template_node:
+        identifier: root
+        workflow: "{{ wfjt_name }}"
+        success_nodes:
+          - approval_test
+        always_nodes:
+          - leaf
+
+    - name: Delete approval node
+      workflow_job_template_node:
+        identifier: approval_test
+        approval_node:
+          name: "{{ approval_node_name }}"
+        state: absent
+        workflow: "{{ wfjt_name }}"
+
+    - name: Add started notifications to workflow job template
+      workflow_job_template:
+        name: "{{ wfjt_name }}"
+        notification_templates_started:
+          - "{{ email_not }}"
+          - "{{ webhook_notification }}"
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Re-add started notifications to workflow job template
+      workflow_job_template:
+        name: "{{ wfjt_name }}"
+        notification_templates_started:
+          - "{{ email_not }}"
+          - "{{ webhook_notification }}"
+      register: result
+
+    - assert:
+        that:
+          - "result is not changed"
+
+    - name: Add success notifications to workflow job template
+      workflow_job_template:
+        name: "{{ wfjt_name }}"
+        notification_templates_success:
+          - "{{ email_not }}"
+          - "{{ webhook_notification }}"
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Copy a workflow job template
+      workflow_job_template:
+        name: "copy_{{ wfjt_name }}"
+        copy_from: "{{ wfjt_name }}"
+        organization: Default
+      register: result
+
+    - assert:
+        that:
+          - result.copied
+
+    - name: Fail to remove "on start" webhook notification from copied workflow job template
+      workflow_job_template:
+        name: "copy_{{ wfjt_name }}"
+        notification_templates_started:
+          - "{{ email_not }}123"
+      register: remove_copied_workflow_node
+      ignore_errors: true
+
+    - assert:
+        that:
+          - "remove_copied_workflow_node is failed"
+          - "remove_copied_workflow_node is not changed"
+          - "'returned 0 items' in remove_copied_workflow_node.msg"
+
+    - name: Remove "on start" webhook notification from copied workflow job template
+      workflow_job_template:
+        name: "copy_{{ wfjt_name }}"
+        notification_templates_started:
+          - "{{ email_not }}"
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Add Survey to Copied workflow job template
+      workflow_job_template:
+        name: "copy_{{ wfjt_name }}"
+        organization: Default
+        survey_spec:
+          name: Basic Survey
+          description: Basic Survey
+          spec:
+            - question_description: Name
+              min: 0
+              default: ''
+              max: 128
+              required: true
+              choices: ''
+              new_question: true
+              variable: basic_name
+              question_name: Basic Name
+              type: text
+            - question_description: Choosing yes or no.
+              min: 0
+              default: 'yes'
+              max: 0
+              required: true
+              choices: |-
+                yes
+                no
+              new_question: true
+              variable: option_true_false
+              question_name: Choose yes or no?
+              type: multiplechoice
+            - question_description: ''
+              min: 0
+              default: ''
+              max: 0
+              required: true
+              choices: |-
+                group1
+                group2
+                group3
+              new_question: true
+              variable: target_groups
+              question_name: 'Select Group:'
+              type: multiselect
+            - question_name: password
+              question_description: ''
+              required: true
+              type: password
+              variable: password
+              min: 0
+              max: 1024
+              default: ''
+              choices: ''
+              new_question: true
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Re-add survey to workflow job template (expected not changed)
+      workflow_job_template:
+        name: "copy_{{ wfjt_name }}"
+        organization: Default
+        survey_spec:
+          name: Basic Survey
+          description: Basic Survey
+          spec:
+            - question_description: Name
+              min: 0
+              default: ''
+              max: 128
+              required: true
+              choices: ''
+              new_question: true
+              variable: basic_name
+              question_name: Basic Name
+              type: text
+            - question_description: Choosing yes or no.
+              min: 0
+              default: 'yes'
+              max: 0
+              required: true
+              choices: |-
+                yes
+                no
+              new_question: true
+              variable: option_true_false
+              question_name: Choose yes or no?
+              type: multiplechoice
+            - question_description: ''
+              min: 0
+              default: ''
+              max: 0
+              required: true
+              choices: |-
+                group1
+                group2
+                group3
+              new_question: true
+              variable: target_groups
+              question_name: 'Select Group:'
+              type: multiselect
+            - question_name: password
+              question_description: ''
+              required: true
+              type: password
+              variable: password
+              min: 0
+              max: 1024
+              default: ''
+              choices: ''
+              new_question: true
+      register: result
+
+    - assert:
+        that:
+          - "result is not changed"
+
+    - name: Remove "on start" webhook notification from workflow job template
+      workflow_job_template:
+        name: "{{ wfjt_name }}"
+        notification_templates_started:
+          - "{{ email_not }}"
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Delete a workflow job template with an invalid inventory and webhook_credential
+      workflow_job_template:
+        name: "{{ wfjt_name }}"
+        inventory: "Does Not Exist"
+        webhook_credential: "Does Not Exist"
+        state: absent
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Check module fails with correct msg
+      workflow_job_template:
+        name: "{{ wfjt_name }}"
+        organization: Non_Existing_Organization
+      register: result
+      ignore_errors: true
+
+    - assert:
+        that:
+          - "result is failed"
+          - "result is not changed"
+          - "'Non_Existing_Organization' in result.msg"
+          - "result.total_results == 0"
+
+    - name: Create a workflow job template with workflow nodes in template
+      awx.awx.workflow_job_template:
+        name: "{{ wfjt_name }}"
+        inventory: Demo Inventory
+        extra_vars: {'foo': 'bar', 'another-foo': {'barz': 'bar2'}}
+        schema:
+          - identifier: node101
+            unified_job_template:
+              name: "{{ project_inv_source_result.id }}"
+              inventory:
+                organization:
+                  name: Default
+              type: inventory_source
+            related:
+              failure_nodes:
+                - identifier: node201
+          - identifier: node201
+            unified_job_template:
+              organization:
+                name: Default
+              name: "{{ jt1_name }}"
+              type: job_template
+            credentials: []
+            related:
+              success_nodes:
+                - identifier: node301
+          - identifier: node202
+            unified_job_template:
+              organization:
+                name: "{{ org_name }}"
+              name: "{{ project_inv_source }}"
+              type: project
+          - all_parents_must_converge: false
+            identifier: node301
+            unified_job_template:
+              organization:
+                name: Default
+              name: "{{ jt2_name }}"
+              type: job_template
+          - identifier: Cleanup Job
+            unified_job_template:
+              name: Cleanup Activity Stream
+              type: system_job_template
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Kick off a workflow and wait for it
+      workflow_launch:
+        workflow_template: "{{ wfjt_name }}"
+      ignore_errors: true
+      register: result
+
+    - assert:
+        that:
+          - result is not failed
+          - "'id' in result['job_info']"
+
+    - name: Destroy previous workflow nodes for one that fails
+      awx.awx.workflow_job_template:
+        name: "{{ wfjt_name }}"
+        destroy_current_nodes: true
+        workflow_nodes:
+          - identifier: node101
+            unified_job_template:
+              organization:
+                name: Default
+              name: "{{ jt1_name }}"
+              type: job_template
+            credentials: []
+            related:
+              success_nodes:
+                - identifier: node201
+          - identifier: node201
+            unified_job_template:
+              name: "{{ project_inv_source }}"
+              inventory:
+                organization:
+                  name: Default
+              type: inventory_source
+          - identifier: Workflow inception
+            unified_job_template:
+              name: "copy_{{ wfjt_name }}"
+              organization:
+                name: Default
+              type: workflow_job_template
+            forks: 12
+            job_slice_count: 2
+            timeout: 23
+            execution_environment:
+              name: "{{ ee1 }}"
+            related:
+              credentials:
+                - name: "{{ scm_cred_name }}"
+                  organization:
+                    name: Default
+              instance_groups:
+                - name: "{{ ig1 }}"
+                - name: "{{ ig2 }}"
+              labels:
+                - name: "{{ label1 }}"
+                - name: "{{ label2 }}"
+                  organization:
+                    name: "{{ org_name }}"
+      register: result
+
+    - name: Delete copied workflow job template
+      workflow_job_template:
+        name: "copy_{{ wfjt_name }}"
+        state: absent
+      register: result
+
+    - assert:
+        that:
+          - "result is changed"
+
+    - name: Kick off a workflow and wait for it
+      workflow_launch:
+        workflow_template: "{{ wfjt_name }}"
+      ignore_errors: true
+      register: result
+
+    - assert:
+        that:
+          - result is failed
+
+    - name: Create a workflow job template with a GitLab webhook but a GitHub credential
+      workflow_job_template:
+        name: "{{ webhook_wfjt_name }}"
+        organization: Default
+        inventory: Demo Inventory
+        webhook_service: gitlab
+        webhook_credential: "{{ github_webhook_credential_name }}"
+      ignore_errors: true
+      register: result
+
+    - assert:
+        that:
+          - result is failed
+          - "'Must match the selected webhook service' in result['msg']"
+
+    - name: Create a workflow job template with a GitHub webhook and a GitHub credential
+      workflow_job_template:
+        name: "{{ webhook_wfjt_name }}"
+        organization: Default
+        inventory: Demo Inventory
+        webhook_service: github
+        webhook_credential: "{{ github_webhook_credential_name }}"
+      register: result
+
+    - assert:
+        that:
+          - result is not failed
+
+  always:
+    - name: Delete the workflow job template
+      awx.awx.workflow_job_template:
+        name: "{{ item }}"
+        state: absent
+      ignore_errors: True
+      loop:
+        - "copy_{{ wfjt_name }}"
+        - "{{ wfjt_name }}"
+        - "{{ webhook_wfjt_name }}"
+
+    - name: Delete the Job Template
+      job_template:
+        name: "{{ jt1_name }}"
+        project: "{{ demo_project_name }}"
+        inventory: Demo Inventory
+        playbook: hello_world.yml
+        job_type: run
+        state: absent
+      ignore_errors: True
+
+    - name: Delete the second Job Template
+      job_template:
+        name: "{{ jt2_name }}"
+        project: "{{ demo_project_name }}"
+        organization: Default
+        inventory: Demo Inventory
+        playbook: hello_world.yml
+        job_type: run
+        state: absent
+      ignore_errors: True
+
+    - name: Delete the second Job Template
+      job_template:
+        name: "{{ jt2_name }}"
+        project: "{{ demo_project_name_2 }}"
+        organization: "{{ org_name }}"
+        inventory: Demo Inventory
+        playbook: hello_world.yml
+        job_type: run
+        state: absent
+      ignore_errors: True
+
+    - name: Delete the inventory source
+      inventory_source:
+        name: "{{ project_inv_source }}"
+        inventory: "{{ project_inv }}"
+        source: scm
+        state: absent
+      ignore_errors: True
+
+    - name: Delete the inventory
+      inventory:
+        description: Test inventory
+        organization: Default
+        name: "{{ project_inv }}"
+        state: absent
+      ignore_errors: True
+
+    - name: Delete the Demo Project
+      project:
+        name: "{{ demo_project_name }}"
+        organization: Default
+        scm_type: git
+        scm_url: https://github.com/ansible/ansible-tower-samples.git
+        scm_credential: "{{ scm_cred_name }}"
+        state: absent
+      ignore_errors: True
+
+    - name: Delete the 2nd Demo Project
+      project:
+        name: "{{ demo_project_name_2 }}"
+        organization: "{{ org_name }}"
+        scm_type: git
+        scm_url: https://github.com/ansible/ansible-tower-samples.git
+        scm_credential: "{{ scm_cred_name }}"
+        state: absent
+      ignore_errors: True
+
+    - name: Delete the 3rd Demo Project
+      project:
+        name: "{{ project_inv_source }}"
+        organization: "{{ org_name }}"
+        scm_type: git
+        scm_url: https://github.com/ansible/ansible-tower-samples.git
+        scm_credential: "{{ scm_cred_name }}"
+        state: absent
+      ignore_errors: True
+
+    - name: Delete the SCM Credential
+      credential:
+        name: "{{ scm_cred_name }}"
+        organization: Default
+        credential_type: Source Control
+        state: absent
+      ignore_errors: True
+
+    - name: Delete the GitHub Webhook Credential
+      credential:
+        name: "{{ github_webhook_credential_name }}"
+        organization: Default
+        credential_type: GitHub Personal Access Token
+        state: absent
+      ignore_errors: True
+
+    - name: Delete email notification
+      notification_template:
+        name: "{{ email_not }}"
+        organization: Default
+        state: absent
+      ignore_errors: True
+
+    - name: Delete webhook notification
+      notification_template:
+        name: "{{ webhook_notification }}"
+        organization: Default
+        state: absent
+      ignore_errors: True
+
+    # Labels can not be deleted
+
+    - name: Delete an execution environment
+      execution_environment:
+        name: "{{ ee1 }}"
+        image: "junk"
+        state: absent
+      ignore_errors: True
+
+    - name: Delete instance groups
+      instance_group:
+        name: "{{ item }}"
+        state: absent
+      loop:
+        - "{{ ig1 }}"
+        - "{{ ig2 }}"
+      ignore_errors: True
+
+    - name: "Remove the organization"
+      organization:
+        name: "{{ org_name }}"
+        state: absent
+      ignore_errors: True
+
+    - name: Remove node
+      host:
+        name: "{{ host1 }}"
+        inventory: Demo Inventory
+        state: absent
+      ignore_errors: True
diff --git a/ansible_collections/awx/awx/tests/integration/targets/workflow_launch/tasks/main.yml b/ansible_collections/awx/awx/tests/integration/targets/workflow_launch/tasks/main.yml
new file mode 100644
index 00000000..a328b838
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/integration/targets/workflow_launch/tasks/main.yml
@@ -0,0 +1,243 @@
+---
+- name: Generate a random string for test
+  set_fact:
+    test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
+  when: test_id is not defined
+
+- name: Generate names
+  set_fact:
+    wfjt_name1: "AWX-Collection-tests-workflow_launch--wfjt1-{{ test_id }}"
+    wfjt_name2: "AWX-Collection-tests-workflow_launch--wfjt1-{{ test_id }}-2"
+    approval_node_name: "AWX-Collection-tests-workflow_launch_approval_node-{{ test_id }}"
+
+- block:
+
+    - name: Create our workflow
+      workflow_job_template:
+        name: "{{ wfjt_name1 }}"
+        state: present
+
+    - name: Add a node
+      workflow_job_template_node:
+        workflow_job_template: "{{ wfjt_name1 }}"
+        unified_job_template: "Demo Job Template"
+        identifier: leaf
+      register: new_node
+
+    - name: Connect to controller server but request an invalid workflow
+      workflow_launch:
+        workflow_template: "Does Not Exist"
+      ignore_errors: true
+      register: result
+
+    - assert:
+        that:
+          - result is failed
+          - "'Unable to find workflow job template' in result.msg"
+
+    - name: Run the workflow without waiting (this should just give us back a job ID)
+      workflow_launch:
+        workflow_template: "{{ wfjt_name1 }}"
+        wait: false
+      ignore_errors: true
+      register: result
+
+    - assert:
+        that:
+          - result is not failed
+          - "'id' in result['job_info']"
+
+    - name: Kick off a workflow and wait for it, but only for a second
+      workflow_launch:
+        workflow_template: "{{ wfjt_name1 }}"
+        timeout: 1
+      ignore_errors: true
+      register: result
+
+    - assert:
+        that:
+          - result is failed
+          - "'Monitoring of Workflow Job - {{ wfjt_name1 }} aborted due to timeout' in result.msg"
+
+    - name: Kick off a workflow and wait for it
+      workflow_launch:
+        workflow_template: "{{ wfjt_name1 }}"
+      ignore_errors: true
+      register: result
+
+    - assert:
+        that:
+          - result is not failed
+          - "'id' in result['job_info']"
+
+    - name: Kick off a workflow with extra_vars but not enabled
+      workflow_launch:
+        workflow_template: "{{ wfjt_name1 }}"
+        extra_vars:
+          var1: My First Variable
+          var2: My Second Variable
+      ignore_errors: true
+      register: result
+
+    - assert:
+        that:
+          - result is failed
+          - "'The field extra_vars was specified but the workflow job template does not allow for it to be overridden' in result.errors"
+
+    - name: Prompt the workflow with a survey
+      workflow_job_template:
+        name: "{{ wfjt_name1 }}"
+        state: present
+        survey_enabled: true
+        ask_variables_on_launch: false
+        survey:
+          name: ''
+          description: ''
+          spec:
+            - question_name: Basic Name
+              question_description: Name
+              required: true
+              type: text
+              variable: basic_name
+              min: 0
+              max: 1024
+              default: ''
+              choices: ''
+              new_question: true
+            - question_name: Choose yes or no?
+              question_description: Choosing yes or no.
+              required: false
+              type: multiplechoice
+              variable: option_true_false
+              min:
+              max:
+              default: 'yes'
+              choices: |-
+                yes
+                no
+              new_question: true
+
+    - name: Kick off a workflow with survey
+      workflow_launch:
+        workflow_template: "{{ wfjt_name1 }}"
+        extra_vars:
+          basic_name: My First Variable
+          option_true_false: 'no'
+      ignore_errors: true
+      register: result
+
+    - assert:
+        that:
+          - result is not failed
+
+    - name: Prompt the workflow's extra_vars on launch
+      workflow_job_template:
+        name: "{{ wfjt_name1 }}"
+        state: present
+        ask_variables_on_launch: true
+
+    - name: Kick off a workflow with extra_vars
+      workflow_launch:
+        workflow_template: "{{ wfjt_name1 }}"
+        extra_vars:
+          basic_name: My First Variable
+          var1: My First Variable
+          var2: My Second Variable
+      ignore_errors: true
+      register: result
+
+    - assert:
+        that:
+          - result is not failed
+
+    - name: Test waiting for an approval node that doesn't exist on the last workflow for failure.
+      workflow_approval:
+        workflow_job_id: "{{ result.id }}"
+        name: Test workflow approval
+        interval: 1
+        timeout: 2
+        action: deny
+      register: result
+      ignore_errors: true
+
+    - assert:
+        that:
+          - result is failed
+          - "'Monitoring of Workflow Approval - Test workflow approval aborted due to timeout' in result.msg"
+
+    - name: Create new Workflow
+      workflow_job_template:
+        name: "{{ wfjt_name2 }}"
+        state: present
+
+    - name: Add a job node
+      workflow_job_template_node:
+        workflow_job_template: "{{ wfjt_name2 }}"
+        unified_job_template: "Demo Job Template"
+        identifier: leaf
+
+    # Test workflow_approval and workflow_node_wait
+    - name: Create approval node
+      workflow_job_template_node:
+        identifier: approval_test
+        approval_node:
+          name: "{{ approval_node_name }}"
+          timeout: 900
+        workflow: "{{ wfjt_name2 }}"
+
+    - name: Create link for approval node
+      workflow_job_template_node:
+        identifier: approval_test
+        workflow: "{{ wfjt_name2 }}"
+        always_nodes:
+          - leaf
+
+    - name: Run the workflow without waiting (this should pause waiting for approval)
+      workflow_launch:
+        workflow_template: "{{ wfjt_name2 }}"
+        wait: false
+      ignore_errors: true
+      register: wfjt_info
+
+    - name: Wait for Job node wait to fail as it is waiting on approval
+      awx.awx.workflow_node_wait:
+        workflow_job_id: "{{ wfjt_info.id }}"
+        name: Demo Job Template
+        interval: 1
+        timeout: 5
+      register: result
+      ignore_errors: true
+
+    - assert:
+        that:
+          - result is failed
+          - "'Monitoring of Workflow Node - Demo Job Template aborted due to timeout' in result.msg"
+
+    - name: Wait for approval node to activate and deny
+      awx.awx.workflow_approval:
+        workflow_job_id: "{{ wfjt_info.id }}"
+        name: "{{ approval_node_name }}"
+        interval: 1
+        timeout: 10
+        action: deny
+      register: result
+
+    - assert:
+        that:
+          - result is not failed
+          - result is changed
+
+    - name: Wait for workflow job to finish max 120s
+      job_wait:
+        job_id: "{{ wfjt_info.id }}"
+        timeout: 120
+        job_type: "workflow_jobs"
+
+  always:
+    - name: Clean up test workflow
+      workflow_job_template:
+        name: "{{ item }}"
+        state: absent
+      with_items:
+        - "{{ wfjt_name1 }}"
+        - "{{ wfjt_name2 }}"
diff --git a/ansible_collections/awx/awx/tests/sanity/ignore-2.14.txt b/ansible_collections/awx/awx/tests/sanity/ignore-2.14.txt
new file mode 100644
index 00000000..19512ea0
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/sanity/ignore-2.14.txt
@@ -0,0 +1 @@
+plugins/modules/export.py validate-modules:nonexistent-parameter-documented # needs awxkit to construct argspec
diff --git a/ansible_collections/awx/awx/tests/sanity/ignore-2.15.txt b/ansible_collections/awx/awx/tests/sanity/ignore-2.15.txt
new file mode 100644
index 00000000..19512ea0
--- /dev/null
+++ b/ansible_collections/awx/awx/tests/sanity/ignore-2.15.txt
@@ -0,0 +1 @@
+plugins/modules/export.py validate-modules:nonexistent-parameter-documented # needs awxkit to construct argspec