diff --git a/COPYING b/COPYING
new file mode 100644
index 0000000..f288702
--- /dev/null
+++ b/COPYING
@@ -0,0 +1,674 @@
+ GNU GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+ The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users. We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors. You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+ To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights. Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received. You must make sure that they, too, receive
+or can get the source code. And you must show them these terms so they
+know their rights.
+
+ Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+ For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software. For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+ Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so. This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software. The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable. Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products. If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+ Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary. To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ TERMS AND CONDITIONS
+
+ 0. Definitions.
+
+ "This License" refers to version 3 of the GNU General Public License.
+
+ "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+ "The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+
+ To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy. The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+ A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+ To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+ To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+ An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+ 1. Source Code.
+
+ The "source code" for a work means the preferred form of the work
+for making modifications to it. "Object code" means any non-source
+form of a work.
+
+ A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+ The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+ The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+ The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+ The Corresponding Source for a work in source code form is that
+same work.
+
+ 2. Basic Permissions.
+
+ All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+ You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force. You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright. Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+ Conveying under any other circumstances is permitted solely under
+the conditions stated below. Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+ No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+ When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+ 4. Conveying Verbatim Copies.
+
+ You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+ You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+ 5. Conveying Modified Source Versions.
+
+ You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+ a) The work must carry prominent notices stating that you modified
+ it, and giving a relevant date.
+
+ b) The work must carry prominent notices stating that it is
+ released under this License and any conditions added under section
+ 7. This requirement modifies the requirement in section 4 to
+ "keep intact all notices".
+
+ c) You must license the entire work, as a whole, under this
+ License to anyone who comes into possession of a copy. This
+ License will therefore apply, along with any applicable section 7
+ additional terms, to the whole of the work, and all its parts,
+ regardless of how they are packaged. This License gives no
+ permission to license the work in any other way, but it does not
+ invalidate such permission if you have separately received it.
+
+ d) If the work has interactive user interfaces, each must display
+ Appropriate Legal Notices; however, if the Program has interactive
+ interfaces that do not display Appropriate Legal Notices, your
+ work need not make them do so.
+
+ A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+ 6. Conveying Non-Source Forms.
+
+ You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+ a) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by the
+ Corresponding Source fixed on a durable physical medium
+ customarily used for software interchange.
+
+ b) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by a
+ written offer, valid for at least three years and valid for as
+ long as you offer spare parts or customer support for that product
+ model, to give anyone who possesses the object code either (1) a
+ copy of the Corresponding Source for all the software in the
+ product that is covered by this License, on a durable physical
+ medium customarily used for software interchange, for a price no
+ more than your reasonable cost of physically performing this
+ conveying of source, or (2) access to copy the
+ Corresponding Source from a network server at no charge.
+
+ c) Convey individual copies of the object code with a copy of the
+ written offer to provide the Corresponding Source. This
+ alternative is allowed only occasionally and noncommercially, and
+ only if you received the object code with such an offer, in accord
+ with subsection 6b.
+
+ d) Convey the object code by offering access from a designated
+ place (gratis or for a charge), and offer equivalent access to the
+ Corresponding Source in the same way through the same place at no
+ further charge. You need not require recipients to copy the
+ Corresponding Source along with the object code. If the place to
+ copy the object code is a network server, the Corresponding Source
+ may be on a different server (operated by you or a third party)
+ that supports equivalent copying facilities, provided you maintain
+ clear directions next to the object code saying where to find the
+ Corresponding Source. Regardless of what server hosts the
+ Corresponding Source, you remain obligated to ensure that it is
+ available for as long as needed to satisfy these requirements.
+
+ e) Convey the object code using peer-to-peer transmission, provided
+ you inform other peers where the object code and Corresponding
+ Source of the work are being offered to the general public at no
+ charge under subsection 6d.
+
+ A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+ A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling. In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage. For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product. A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+ "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source. The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+ If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information. But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+ The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed. Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+ Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+ 7. Additional Terms.
+
+ "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law. If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+ When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it. (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.) You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+ Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+ a) Disclaiming warranty or limiting liability differently from the
+ terms of sections 15 and 16 of this License; or
+
+ b) Requiring preservation of specified reasonable legal notices or
+ author attributions in that material or in the Appropriate Legal
+ Notices displayed by works containing it; or
+
+ c) Prohibiting misrepresentation of the origin of that material, or
+ requiring that modified versions of such material be marked in
+ reasonable ways as different from the original version; or
+
+ d) Limiting the use for publicity purposes of names of licensors or
+ authors of the material; or
+
+ e) Declining to grant rights under trademark law for use of some
+ trade names, trademarks, or service marks; or
+
+ f) Requiring indemnification of licensors and authors of that
+ material by anyone who conveys the material (or modified versions of
+ it) with contractual assumptions of liability to the recipient, for
+ any liability that these contractual assumptions directly impose on
+ those licensors and authors.
+
+ All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10. If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term. If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+ If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+ Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+ 8. Termination.
+
+ You may not propagate or modify a covered work except as expressly
+provided under this License. Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+ However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+ Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+ 9. Acceptance Not Required for Having Copies.
+
+ You are not required to accept this License in order to receive or
+run a copy of the Program. Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance. However,
+nothing other than this License grants you permission to propagate or
+modify any covered work. These actions infringe copyright if you do
+not accept this License. Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+ 10. Automatic Licensing of Downstream Recipients.
+
+ Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License. You are not responsible
+for enforcing compliance by third parties with this License.
+
+ An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations. If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+ You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License. For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+ 11. Patents.
+
+ A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based. The
+work thus licensed is called the contributor's "contributor version".
+
+ A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version. For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+ In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement). To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+ If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients. "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+ If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+ A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License. You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+ Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+ 12. No Surrender of Others' Freedom.
+
+ If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all. For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+ 13. Use with the GNU Affero General Public License.
+
+ Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work. The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+ 14. Revised Versions of this License.
+
+ The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation. If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+ If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+ Later license versions may give you additional or different
+permissions. However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+ 15. Disclaimer of Warranty.
+
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. Limitation of Liability.
+
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+ 17. Interpretation of Sections 15 and 16.
+
+ If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+ If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+ This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+ You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+ The GNU General Public License does not permit incorporating your program
+into proprietary programs. If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License. But first, please read
+<https://www.gnu.org/philosophy/why-not-lgpl.html>.
diff --git a/Doxyfile.in b/Doxyfile.in
new file mode 100644
index 0000000..51f724b
--- /dev/null
+++ b/Doxyfile.in
@@ -0,0 +1,2618 @@
+# Doxyfile 1.9.1
+
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project.
+#
+# All text after a double hash (##) is considered a comment and is placed in
+# front of the TAG it is preceding.
+#
+# All text after a single hash (#) is considered a comment and will be ignored.
+# The format is:
+# TAG = value [value, ...]
+# For lists, items can also be appended using:
+# TAG += value [value, ...]
+# Values that contain spaces should be placed between quotes (\" \").
+
+#---------------------------------------------------------------------------
+# Project related configuration options
+#---------------------------------------------------------------------------
+
+# This tag specifies the encoding used for all characters in the configuration
+# file that follow. The default is UTF-8 which is also the encoding used for all
+# text before the first occurrence of this tag. Doxygen uses libiconv (or the
+# iconv built into libc) for the transcoding. See
+# https://www.gnu.org/software/libiconv/ for the list of possible encodings.
+# The default value is: UTF-8.
+
+DOXYFILE_ENCODING = UTF-8
+
+# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
+# double-quotes, unless you are using Doxywizard) that should identify the
+# project for which the documentation is generated. This name is used in the
+# title of most generated pages and in a few other places.
+# The default value is: My Project.
+
+PROJECT_NAME = "Knot DNS"
+
+# The PROJECT_NUMBER tag can be used to enter a project or revision number. This
+# could be handy for archiving the generated documentation or if some version
+# control system is used.
+
+PROJECT_NUMBER = @VERSION@
+
+# Using the PROJECT_BRIEF tag one can provide an optional one line description
+# for a project that appears at the top of each page and should give viewer a
+# quick idea about the purpose of the project. Keep the description short.
+
+PROJECT_BRIEF =
+
+# With the PROJECT_LOGO tag one can specify a logo or an icon that is included
+# in the documentation. The maximum height of the logo should not exceed 55
+# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
+# the logo to the output directory.
+
+# PROJECT_LOGO = doc/logo.svg
+
+# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
+# into which the generated documentation will be written. If a relative path is
+# entered, it will be relative to the location where doxygen was started. If
+# left blank the current directory will be used.
+
+OUTPUT_DIRECTORY = doc
+
+# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
+# directories (in 2 levels) under the output directory of each output format and
+# will distribute the generated files over these directories. Enabling this
+# option can be useful when feeding doxygen a huge amount of source files, where
+# putting all generated files in the same directory would otherwise cause
+# performance problems for the file system.
+# The default value is: NO.
+
+CREATE_SUBDIRS = NO
+
+# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII
+# characters to appear in the names of generated files. If set to NO, non-ASCII
+# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode
+# U+3044.
+# The default value is: NO.
+
+ALLOW_UNICODE_NAMES = NO
+
+# The OUTPUT_LANGUAGE tag is used to specify the language in which all
+# documentation generated by doxygen is written. Doxygen will use this
+# information to generate all constant output in the proper language.
+# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
+# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
+# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
+# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
+# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
+# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
+# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
+# Ukrainian and Vietnamese.
+# The default value is: English.
+
+OUTPUT_LANGUAGE = English
+
+# The OUTPUT_TEXT_DIRECTION tag is used to specify the direction in which all
+# documentation generated by doxygen is written. Doxygen will use this
+# information to generate all generated output in the proper direction.
+# Possible values are: None, LTR, RTL and Context.
+# The default value is: None.
+
+OUTPUT_TEXT_DIRECTION = None
+
+# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
+# descriptions after the members that are listed in the file and class
+# documentation (similar to Javadoc). Set to NO to disable this.
+# The default value is: YES.
+
+BRIEF_MEMBER_DESC = YES
+
+# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief
+# description of a member or function before the detailed description
+#
+# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
+# brief descriptions will be completely suppressed.
+# The default value is: YES.
+
+REPEAT_BRIEF = YES
+
+# This tag implements a quasi-intelligent brief description abbreviator that is
+# used to form the text in various listings. Each string in this list, if found
+# as the leading text of the brief description, will be stripped from the text
+# and the result, after processing the whole list, is used as the annotated
+# text. Otherwise, the brief description is used as-is. If left blank, the
+# following values are used ($name is automatically replaced with the name of
+# the entity):The $name class, The $name widget, The $name file, is, provides,
+# specifies, contains, represents, a, an and the.
+
+ABBREVIATE_BRIEF = "The $name class" \
+ "The $name widget" \
+ "The $name file" \
+ is \
+ provides \
+ specifies \
+ contains \
+ represents \
+ a \
+ an \
+ the
+
+# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
+# doxygen will generate a detailed section even if there is only a brief
+# description.
+# The default value is: NO.
+
+ALWAYS_DETAILED_SEC = YES
+
+# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
+# inherited members of a class in the documentation of that class as if those
+# members were ordinary class members. Constructors, destructors and assignment
+# operators of the base classes will not be shown.
+# The default value is: NO.
+
+INLINE_INHERITED_MEMB = NO
+
+# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
+# before file names in the file list and in the header files. If set to NO the
+# shortest path that makes the file name unique will be used
+# The default value is: YES.
+
+FULL_PATH_NAMES = YES
+
+# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
+# Stripping is only done if one of the specified strings matches the left-hand
+# part of the path. The tag can be used to show relative paths in the file list.
+# If left blank the directory from which doxygen is run is used as the path to
+# strip.
+#
+# Note that you can specify absolute paths here, but also relative paths, which
+# will be relative from the directory where doxygen is started.
+# This tag requires that the tag FULL_PATH_NAMES is set to YES.
+
+STRIP_FROM_PATH = src/
+
+# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
+# path mentioned in the documentation of a class, which tells the reader which
+# header file to include in order to use a class. If left blank only the name of
+# the header file containing the class definition is used. Otherwise one should
+# specify the list of include paths that are normally passed to the compiler
+# using the -I flag.
+
+STRIP_FROM_INC_PATH =
+
+# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
+# less readable) file names. This can be useful if your file system doesn't
+# support long names like on DOS, Mac, or CD-ROM.
+# The default value is: NO.
+
+SHORT_NAMES = NO
+
+# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
+# first line (until the first dot) of a Javadoc-style comment as the brief
+# description. If set to NO, the Javadoc-style will behave just like regular Qt-
+# style comments (thus requiring an explicit @brief command for a brief
+# description.)
+# The default value is: NO.
+
+JAVADOC_AUTOBRIEF = NO
+
+# If the JAVADOC_BANNER tag is set to YES then doxygen will interpret a line
+# such as
+# /***************
+# as being the beginning of a Javadoc-style comment "banner". If set to NO, the
+# Javadoc-style will behave just like regular comments and it will not be
+# interpreted by doxygen.
+# The default value is: NO.
+
+JAVADOC_BANNER = NO
+
+# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
+# line (until the first dot) of a Qt-style comment as the brief description. If
+# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
+# requiring an explicit \brief command for a brief description.)
+# The default value is: NO.
+
+QT_AUTOBRIEF = NO
+
+# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
+# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
+# a brief description. This used to be the default behavior. The new default is
+# to treat a multi-line C++ comment block as a detailed description. Set this
+# tag to YES if you prefer the old behavior instead.
+#
+# Note that setting this tag to YES also means that rational rose comments are
+# not recognized any more.
+# The default value is: NO.
+
+MULTILINE_CPP_IS_BRIEF = NO
+
+# By default Python docstrings are displayed as preformatted text and doxygen's
+# special commands cannot be used. By setting PYTHON_DOCSTRING to NO,
+# doxygen's special commands can be used and the contents of the docstring
+# documentation blocks are shown as doxygen documentation.
+# The default value is: YES.
+
+PYTHON_DOCSTRING = YES
+
+# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
+# documentation from any documented member that it re-implements.
+# The default value is: YES.
+
+INHERIT_DOCS = YES
+
+# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
+# page for each member. If set to NO, the documentation of a member will be part
+# of the file/class/namespace that contains it.
+# The default value is: NO.
+
+SEPARATE_MEMBER_PAGES = NO
+
+# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
+# uses this value to replace tabs by spaces in code fragments.
+# Minimum value: 1, maximum value: 16, default value: 4.
+
+TAB_SIZE = 8
+
+# This tag can be used to specify a number of aliases that act as commands in
+# the documentation. An alias has the form:
+# name=value
+# For example adding
+# "sideeffect=@par Side Effects:\n"
+# will allow you to put the command \sideeffect (or @sideeffect) in the
+# documentation, which will result in a user-defined paragraph with heading
+# "Side Effects:". You can put \n's in the value part of an alias to insert
+# newlines (in the resulting output). You can put ^^ in the value part of an
+# alias to insert a newline as if a physical newline was in the original file.
+# When you need a literal { or } or , in the value part of an alias you have to
+# escape them by means of a backslash (\), this can lead to conflicts with the
+# commands \{ and \} for these it is advised to use the version @{ and @} or use
+# a double escape (\\{ and \\})
+
+ALIASES =
+
+# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
+# only. Doxygen will then generate output that is more tailored for C. For
+# instance, some of the names that are used will be different. The list of all
+# members will be omitted, etc.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_FOR_C = YES
+
+# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
+# Python sources only. Doxygen will then generate output that is more tailored
+# for that language. For instance, namespaces will be presented as packages,
+# qualified scopes will look different, etc.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_JAVA = NO
+
+# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
+# sources. Doxygen will then generate output that is tailored for Fortran.
+# The default value is: NO.
+
+OPTIMIZE_FOR_FORTRAN = NO
+
+# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
+# sources. Doxygen will then generate output that is tailored for VHDL.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_VHDL = NO
+
+# Set the OPTIMIZE_OUTPUT_SLICE tag to YES if your project consists of Slice
+# sources only. Doxygen will then generate output that is more tailored for that
+# language. For instance, namespaces will be presented as modules, types will be
+# separated into more groups, etc.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_SLICE = NO
+
+# Doxygen selects the parser to use depending on the extension of the files it
+# parses. With this tag you can assign which parser to use for a given
+# extension. Doxygen has a built-in mapping, but you can override or extend it
+# using this tag. The format is ext=language, where ext is a file extension, and
+# language is one of the parsers supported by doxygen: IDL, Java, JavaScript,
+# Csharp (C#), C, C++, D, PHP, md (Markdown), Objective-C, Python, Slice, VHDL,
+# Fortran (fixed format Fortran: FortranFixed, free formatted Fortran:
+# FortranFree, unknown formatted Fortran: Fortran. In the latter case the parser
+# tries to guess whether the code is fixed or free formatted code, this is the
+# default for Fortran type files). For instance to make doxygen treat .inc files
+# as Fortran files (default is PHP), and .f files as C (default is Fortran),
+# use: inc=Fortran f=C.
+#
+# Note: For files without extension you can use no_extension as a placeholder.
+#
+# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
+# the files are not read by doxygen. When specifying no_extension you should add
+# * to the FILE_PATTERNS.
+#
+# Note see also the list of default file extension mappings.
+
+EXTENSION_MAPPING =
+
+# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
+# according to the Markdown format, which allows for more readable
+# documentation. See https://daringfireball.net/projects/markdown/ for details.
+# The output of markdown processing is further processed by doxygen, so you can
+# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
+# case of backward compatibility issues.
+# The default value is: YES.
+
+MARKDOWN_SUPPORT = YES
+
+# When the TOC_INCLUDE_HEADINGS tag is set to a non-zero value, all headings up
+# to that level are automatically included in the table of contents, even if
+# they do not have an id attribute.
+# Note: This feature currently applies only to Markdown headings.
+# Minimum value: 0, maximum value: 99, default value: 5.
+# This tag requires that the tag MARKDOWN_SUPPORT is set to YES.
+
+TOC_INCLUDE_HEADINGS = 5
+
+# When enabled doxygen tries to link words that correspond to documented
+# classes, or namespaces to their corresponding documentation. Such a link can
+# be prevented in individual cases by putting a % sign in front of the word or
+# globally by setting AUTOLINK_SUPPORT to NO.
+# The default value is: YES.
+
+AUTOLINK_SUPPORT = YES
+
+# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
+# to include (a tag file for) the STL sources as input, then you should set this
+# tag to YES in order to let doxygen match function declarations and
+# definitions whose arguments contain STL classes (e.g. func(std::string);
+# versus func(std::string) {}). This also makes the inheritance and collaboration
+# diagrams that involve STL classes more complete and accurate.
+# The default value is: NO.
+
+BUILTIN_STL_SUPPORT = NO
+
+# If you use Microsoft's C++/CLI language, you should set this option to YES to
+# enable parsing support.
+# The default value is: NO.
+
+CPP_CLI_SUPPORT = NO
+
+# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
+# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen
+# will parse them like normal C++ but will assume all classes use public instead
+# of private inheritance when no explicit protection keyword is present.
+# The default value is: NO.
+
+SIP_SUPPORT = NO
+
+# For Microsoft's IDL there are propget and propput attributes to indicate
+# getter and setter methods for a property. Setting this option to YES will make
+# doxygen replace the get and set methods by a property in the documentation.
+# This will only work if the methods are indeed getting or setting a simple
+# type. If this is not the case, or you want to show the methods anyway, you
+# should set this option to NO.
+# The default value is: YES.
+
+IDL_PROPERTY_SUPPORT = YES
+
+# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
+# tag is set to YES then doxygen will reuse the documentation of the first
+# member in the group (if any) for the other members of the group. By default
+# all members of a group must be documented explicitly.
+# The default value is: NO.
+
+DISTRIBUTE_GROUP_DOC = NO
+
+# If one adds a struct or class to a group and this option is enabled, then also
+# any nested class or struct is added to the same group. By default this option
+# is disabled and one has to add nested compounds explicitly via \ingroup.
+# The default value is: NO.
+
+GROUP_NESTED_COMPOUNDS = NO
+
+# Set the SUBGROUPING tag to YES to allow class member groups of the same type
+# (for instance a group of public functions) to be put as a subgroup of that
+# type (e.g. under the Public Functions section). Set it to NO to prevent
+# subgrouping. Alternatively, this can be done per class using the
+# \nosubgrouping command.
+# The default value is: YES.
+
+SUBGROUPING = YES
+
+# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
+# are shown inside the group in which they are included (e.g. using \ingroup)
+# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
+# and RTF).
+#
+# Note that this feature does not work in combination with
+# SEPARATE_MEMBER_PAGES.
+# The default value is: NO.
+
+INLINE_GROUPED_CLASSES = NO
+
+# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
+# with only public data fields or simple typedef fields will be shown inline in
+# the documentation of the scope in which they are defined (i.e. file,
+# namespace, or group documentation), provided this scope is documented. If set
+# to NO, structs, classes, and unions are shown on a separate page (for HTML and
+# Man pages) or section (for LaTeX and RTF).
+# The default value is: NO.
+
+INLINE_SIMPLE_STRUCTS = YES
+
+# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
+# enum is documented as struct, union, or enum with the name of the typedef. So
+# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
+# with name TypeT. When disabled the typedef will appear as a member of a file,
+# namespace, or class. And the struct will be named TypeS. This can typically be
+# useful for C code in case the coding convention dictates that all compound
+# types are typedef'ed and only the typedef is referenced, never the tag name.
+# The default value is: NO.
+
+TYPEDEF_HIDES_STRUCT = YES
+
+# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
+# cache is used to resolve symbols given their name and scope. Since this can be
+# an expensive process and often the same symbol appears multiple times in the
+# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
+# doxygen will become slower. If the cache is too large, memory is wasted. The
+# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
+# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
+# symbols. At the end of a run doxygen will report the cache usage and suggest
+# the optimal cache size from a speed point of view.
+# Minimum value: 0, maximum value: 9, default value: 0.
+
+LOOKUP_CACHE_SIZE = 0
+
+# The NUM_PROC_THREADS specifies the number of threads doxygen is allowed to use
+# during processing. When set to 0 doxygen will base this on the number of
+# cores available in the system. You can set it explicitly to a value larger
+# than 0 to get more control over the balance between CPU load and processing
+# speed. At this moment only the input processing can be done using multiple
+# threads. Since this is still an experimental feature the default is set to 1,
+# which effectively disables parallel processing. Please report any issues you
+# encounter. Generating dot graphs in parallel is controlled by the
+# DOT_NUM_THREADS setting.
+# Minimum value: 0, maximum value: 32, default value: 1.
+
+NUM_PROC_THREADS = 1
+
+#---------------------------------------------------------------------------
+# Build related configuration options
+#---------------------------------------------------------------------------
+
+# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
+# documentation are documented, even if no documentation was available. Private
+# class members and static file members will be hidden unless the
+# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
+# Note: This will also disable the warnings about undocumented members that are
+# normally produced when WARNINGS is set to YES.
+# The default value is: NO.
+
+EXTRACT_ALL = YES
+
+# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
+# be included in the documentation.
+# The default value is: NO.
+
+EXTRACT_PRIVATE = NO
+
+# If the EXTRACT_PRIV_VIRTUAL tag is set to YES, documented private virtual
+# methods of a class will be included in the documentation.
+# The default value is: NO.
+
+EXTRACT_PRIV_VIRTUAL = NO
+
+# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
+# scope will be included in the documentation.
+# The default value is: NO.
+
+EXTRACT_PACKAGE = NO
+
+# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
+# included in the documentation.
+# The default value is: NO.
+
+EXTRACT_STATIC = YES
+
+# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
+# locally in source files will be included in the documentation. If set to NO,
+# only classes defined in header files are included. Does not have any effect
+# for Java sources.
+# The default value is: YES.
+
+EXTRACT_LOCAL_CLASSES = YES
+
+# This flag is only useful for Objective-C code. If set to YES, local methods,
+# which are defined in the implementation section but not in the interface are
+# included in the documentation. If set to NO, only methods in the interface are
+# included.
+# The default value is: NO.
+
+EXTRACT_LOCAL_METHODS = NO
+
+# If this flag is set to YES, the members of anonymous namespaces will be
+# extracted and appear in the documentation as a namespace called
+# 'anonymous_namespace{file}', where file will be replaced with the base name of
+# the file that contains the anonymous namespace. By default anonymous
+# namespaces are hidden.
+# The default value is: NO.
+
+EXTRACT_ANON_NSPACES = NO
+
+# If this flag is set to YES, the name of an unnamed parameter in a declaration
+# will be determined by the corresponding definition. By default unnamed
+# parameters remain unnamed in the output.
+# The default value is: YES.
+
+RESOLVE_UNNAMED_PARAMS = YES
+
+# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
+# undocumented members inside documented classes or files. If set to NO these
+# members will be included in the various overviews, but no documentation
+# section is generated. This option has no effect if EXTRACT_ALL is enabled.
+# The default value is: NO.
+
+HIDE_UNDOC_MEMBERS = NO
+
+# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
+# undocumented classes that are normally visible in the class hierarchy. If set
+# to NO, these classes will be included in the various overviews. This option
+# has no effect if EXTRACT_ALL is enabled.
+# The default value is: NO.
+
+HIDE_UNDOC_CLASSES = NO
+
+# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
+# declarations. If set to NO, these declarations will be included in the
+# documentation.
+# The default value is: NO.
+
+HIDE_FRIEND_COMPOUNDS = NO
+
+# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
+# documentation blocks found inside the body of a function. If set to NO, these
+# blocks will be appended to the function's detailed documentation block.
+# The default value is: NO.
+
+HIDE_IN_BODY_DOCS = NO
+
+# The INTERNAL_DOCS tag determines if documentation that is typed after a
+# \internal command is included. If the tag is set to NO then the documentation
+# will be excluded. Set it to YES to include the internal documentation.
+# The default value is: NO.
+
+INTERNAL_DOCS = NO
+
+# With the correct setting of option CASE_SENSE_NAMES doxygen will better be
+# able to match the capabilities of the underlying filesystem. In case the
+# filesystem is case sensitive (i.e. it supports files in the same directory
+# whose names only differ in casing), the option must be set to YES to properly
+# deal with such files in case they appear in the input. For filesystems that
+# are not case sensitive the option should be set to NO to properly deal with
+# output files written for symbols that only differ in casing, such as for two
+# classes, one named CLASS and the other named Class, and to also support
+# references to files without having to specify the exact matching casing. On
+# Windows (including Cygwin) and MacOS, users should typically set this option
+# to NO, whereas on Linux or other Unix flavors it should typically be set to
+# YES.
+# The default value is: system dependent.
+
+CASE_SENSE_NAMES = NO
+
+# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
+# their full class and namespace scopes in the documentation. If set to YES, the
+# scope will be hidden.
+# The default value is: NO.
+
+HIDE_SCOPE_NAMES = YES
+
+# If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will
+# append additional text to a page's title, such as Class Reference. If set to
+# YES the compound reference will be hidden.
+# The default value is: NO.
+
+HIDE_COMPOUND_REFERENCE= NO
+
+# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
+# the files that are included by a file in the documentation of that file.
+# The default value is: YES.
+
+SHOW_INCLUDE_FILES = YES
+
+# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
+# grouped member an include statement to the documentation, telling the reader
+# which file to include in order to use the member.
+# The default value is: NO.
+
+SHOW_GROUPED_MEMB_INC = YES
+
+# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
+# files with double quotes in the documentation rather than with sharp brackets.
+# The default value is: NO.
+
+FORCE_LOCAL_INCLUDES = NO
+
+# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
+# documentation for inline members.
+# The default value is: YES.
+
+INLINE_INFO = YES
+
+# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
+# (detailed) documentation of file and class members alphabetically by member
+# name. If set to NO, the members will appear in declaration order.
+# The default value is: YES.
+
+SORT_MEMBER_DOCS = YES
+
+# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
+# descriptions of file, namespace and class members alphabetically by member
+# name. If set to NO, the members will appear in declaration order. Note that
+# this will also influence the order of the classes in the class list.
+# The default value is: NO.
+
+SORT_BRIEF_DOCS = NO
+
+# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
+# (brief and detailed) documentation of class members so that constructors and
+# destructors are listed first. If set to NO the constructors will appear in the
+# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
+# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
+# member documentation.
+# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
+# detailed member documentation.
+# The default value is: NO.
+
+SORT_MEMBERS_CTORS_1ST = NO
+
+# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
+# of group names into alphabetical order. If set to NO the group names will
+# appear in their defined order.
+# The default value is: NO.
+
+SORT_GROUP_NAMES = NO
+
+# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
+# fully-qualified names, including namespaces. If set to NO, the class list will
+# be sorted only by class name, not including the namespace part.
+# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
+# Note: This option applies only to the class list, not to the alphabetical
+# list.
+# The default value is: NO.
+
+SORT_BY_SCOPE_NAME = NO
+
+# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
+# type resolution of all parameters of a function it will reject a match between
+# the prototype and the implementation of a member function even if there is
+# only one candidate or it is obvious which candidate to choose by doing a
+# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
+# accept a match between prototype and implementation in such cases.
+# The default value is: NO.
+
+STRICT_PROTO_MATCHING = NO
+
+# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
+# list. This list is created by putting \todo commands in the documentation.
+# The default value is: YES.
+
+GENERATE_TODOLIST = NO
+
+# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
+# list. This list is created by putting \test commands in the documentation.
+# The default value is: YES.
+
+GENERATE_TESTLIST = NO
+
+# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
+# list. This list is created by putting \bug commands in the documentation.
+# The default value is: YES.
+
+GENERATE_BUGLIST = YES
+
+# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
+# the deprecated list. This list is created by putting \deprecated commands in
+# the documentation.
+# The default value is: YES.
+
+GENERATE_DEPRECATEDLIST= YES
+
+# The ENABLED_SECTIONS tag can be used to enable conditional documentation
+# sections, marked by \if <section_label> ... \endif and
+# \cond <section_label> ... \endcond blocks.
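+# For illustration only (the label "internal_docs" below is hypothetical):
+# setting ENABLED_SECTIONS = internal_docs would enable documentation wrapped in
+# \if internal_docs ... \endif blocks.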
+
+ENABLED_SECTIONS =
+
+# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
+# initial value of a variable or macro / define can have for it to appear in the
+# documentation. If the initializer consists of more lines than specified here
+# it will be hidden. Use a value of 0 to hide initializers completely. The
+# appearance of the value of individual variables and macros / defines can be
+# controlled using \showinitializer or \hideinitializer command in the
+# documentation regardless of this setting.
+# Minimum value: 0, maximum value: 10000, default value: 30.
+
+MAX_INITIALIZER_LINES = 50
+
+# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
+# the bottom of the documentation of classes and structs. If set to YES, the
+# list will mention the files that were used to generate the documentation.
+# The default value is: YES.
+
+SHOW_USED_FILES = YES
+
+# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
+# will remove the Files entry from the Quick Index and from the Folder Tree View
+# (if specified).
+# The default value is: YES.
+
+SHOW_FILES = YES
+
+# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
+# page. This will remove the Namespaces entry from the Quick Index and from the
+# Folder Tree View (if specified).
+# The default value is: YES.
+
+SHOW_NAMESPACES = YES
+
+# The FILE_VERSION_FILTER tag can be used to specify a program or script that
+# doxygen should invoke to get the current version for each file (typically from
+# the version control system). Doxygen will invoke the program by executing (via
+# popen()) the command <command> <input-file>, where <command> is the value of
+# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
+# provided by doxygen. Whatever the program writes to standard output is used as
+# the file version. For an example see the documentation.
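+# A hypothetical example (not enabled here), assuming the sources live in a git
+# checkout: FILE_VERSION_FILTER = "git log -1 --format=%h --" would make doxygen
+# run "git log -1 --format=%h -- <input-file>" and use the abbreviated hash of
+# the last commit touching the file as its version.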
+
+FILE_VERSION_FILTER =
+
+# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
+# by doxygen. The layout file controls the global structure of the generated
+# output files in an output format independent way. To create the layout file
+# that represents doxygen's defaults, run doxygen with the -l option. You can
+# optionally specify a file name after the option, if omitted DoxygenLayout.xml
+# will be used as the name of the layout file.
+#
+# Note that if you run doxygen from a directory containing a file called
+# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
+# tag is left empty.
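+# For reference, the layout file used below can be (re)generated from doxygen's
+# defaults with: doxygen -l doc/doxygen/DoxygenLayout.xml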
+
+LAYOUT_FILE = doc/doxygen/DoxygenLayout.xml
+
+# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
+# the reference definitions. This must be a list of .bib files. The .bib
+# extension is automatically appended if omitted. This requires the bibtex tool
+# to be installed. See also https://en.wikipedia.org/wiki/BibTeX for more info.
+# For LaTeX the style of the bibliography can be controlled using
+# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
+# search path. See also \cite for info how to create references.
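+# A hypothetical setup (not used here): CITE_BIB_FILES = doc/references.bib
+# together with \cite{knuth79} in a comment would render the knuth79 entry from
+# that .bib file in the generated bibliography.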
+
+CITE_BIB_FILES =
+
+#---------------------------------------------------------------------------
+# Configuration options related to warning and progress messages
+#---------------------------------------------------------------------------
+
+# The QUIET tag can be used to turn on/off the messages that are generated to
+# standard output by doxygen. If QUIET is set to YES this implies that the
+# messages are off.
+# The default value is: NO.
+
+QUIET = NO
+
+# The WARNINGS tag can be used to turn on/off the warning messages that are
+# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
+# this implies that the warnings are on.
+#
+# Tip: Turn warnings on while writing the documentation.
+# The default value is: YES.
+
+WARNINGS = YES
+
+# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
+# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
+# will automatically be disabled.
+# The default value is: YES.
+
+WARN_IF_UNDOCUMENTED = YES
+
+# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
+# potential errors in the documentation, such as not documenting some parameters
+# in a documented function, or documenting parameters that don't exist or using
+# markup commands wrongly.
+# The default value is: YES.
+
+WARN_IF_DOC_ERROR = YES
+
+# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
+# are documented, but have no documentation for their parameters or return
+# value. If set to NO, doxygen will only warn about wrong or incomplete
+# parameter documentation, but not about the absence of documentation. If
+# EXTRACT_ALL is set to YES then this flag will automatically be disabled.
+# The default value is: NO.
+
+WARN_NO_PARAMDOC = NO
+
+# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when
+# a warning is encountered. If the WARN_AS_ERROR tag is set to FAIL_ON_WARNINGS
+# then doxygen will continue running as if the WARN_AS_ERROR tag is set to NO,
+# but
+# at the end of the doxygen process doxygen will return with a non-zero status.
+# Possible values are: NO, YES and FAIL_ON_WARNINGS.
+# The default value is: NO.
+
+WARN_AS_ERROR = NO
+
+# The WARN_FORMAT tag determines the format of the warning messages that doxygen
+# can produce. The string should contain the $file, $line, and $text tags, which
+# will be replaced by the file and line number from which the warning originated
+# and the warning text. Optionally the format may contain $version, which will
+# be replaced by the version of the file (if it could be obtained via
+# FILE_VERSION_FILTER)
+# The default value is: $file:$line: $text.
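+# A hypothetical alternative using the same substitutions (not enabled here):
+# WARN_FORMAT = "$file($line): $text" emits warnings in a "file(line): text"
+# style that some IDEs and editors parse more readily.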
+
+WARN_FORMAT = "$file:$line: $text"
+
+# The WARN_LOGFILE tag can be used to specify a file to which warning and error
+# messages should be written. If left blank the output is written to standard
+# error (stderr).
+
+WARN_LOGFILE = doc/html/doxygen.warn
+
+#---------------------------------------------------------------------------
+# Configuration options related to the input files
+#---------------------------------------------------------------------------
+
+# The INPUT tag is used to specify the files and/or directories that contain
+# documented source files. You may enter file names like myfile.cpp or
+# directories like /usr/src/myproject. Separate the files or directories with
+# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
+# Note: If this tag is empty the current directory is searched.
+
+INPUT = doc/doxygen/Doxy.page.h \
+ src/libdnssec \
+ src/libknot \
+ src/libzscanner \
+ src/knot/include
+
+# This tag can be used to specify the character encoding of the source files
+# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
+# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
+# documentation (see:
+# https://www.gnu.org/software/libiconv/) for the list of possible encodings.
+# The default value is: UTF-8.
+
+INPUT_ENCODING = UTF-8
+
+# If the value of the INPUT tag contains directories, you can use the
+# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
+# *.h) to filter out the source-files in the directories.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# read by doxygen.
+#
+# Note the list of default checked file patterns might differ from the list of
+# default file extension mappings.
+#
+# If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp,
+# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
+# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
+# *.m, *.markdown, *.md, *.mm, *.dox (to be provided as doxygen C comment),
+# *.py, *.pyw, *.f90, *.f95, *.f03, *.f08, *.f18, *.f, *.for, *.vhd, *.vhdl,
+# *.ucf, *.qsf and *.ice.
+
+FILE_PATTERNS = *.h
+
+# The RECURSIVE tag can be used to specify whether or not subdirectories should
+# be searched for input files as well.
+# The default value is: NO.
+
+RECURSIVE = YES
+
+# The EXCLUDE tag can be used to specify files and/or directories that should be
+# excluded from the INPUT source files. This way you can easily exclude a
+# subdirectory from a directory tree whose root is specified with the INPUT tag.
+#
+# Note that relative paths are relative to the directory from which doxygen is
+# run.
+
+EXCLUDE =
+
+# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
+# directories that are symbolic links (a Unix file system feature) are excluded
+# from the input.
+# The default value is: NO.
+
+EXCLUDE_SYMLINKS = NO
+
+# If the value of the INPUT tag contains directories, you can use the
+# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
+# certain files from those directories.
+#
+# Note that the wildcards are matched against the file with absolute path, so to
+# exclude all test directories for example use the pattern */test/*
+
+EXCLUDE_PATTERNS =
+
+# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
+# (namespaces, classes, functions, etc.) that should be excluded from the
+# output. The symbol name can be a fully qualified name, a word, or if the
+# wildcard * is used, a substring. Examples: ANamespace, AClass,
+# AClass::ANamespace, ANamespace::*Test
+#
+# Note that the wildcards are matched against the file with absolute path, so to
+# exclude all test directories use the pattern */test/*
+
+EXCLUDE_SYMBOLS =
+
+# The EXAMPLE_PATH tag can be used to specify one or more files or directories
+# that contain example code fragments that are included (see the \include
+# command).
+
+EXAMPLE_PATH =
+
+# If the value of the EXAMPLE_PATH tag contains directories, you can use the
+# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
+# *.h) to filter out the source-files in the directories. If left blank all
+# files are included.
+
+EXAMPLE_PATTERNS = *
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
+# searched for input files to be used with the \include or \dontinclude commands
+# irrespective of the value of the RECURSIVE tag.
+# The default value is: NO.
+
+EXAMPLE_RECURSIVE = NO
+
+# The IMAGE_PATH tag can be used to specify one or more files or directories
+# that contain images that are to be included in the documentation (see the
+# \image command).
+
+IMAGE_PATH =
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should
+# invoke to filter for each input file. Doxygen will invoke the filter program
+# by executing (via popen()) the command:
+#
+#   <filter> <input-file>
+#
+# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
+# name of an input file. Doxygen will then use the output that the filter
+# program writes to standard output. If FILTER_PATTERNS is specified, this tag
+# will be ignored.
+#
+# Note that the filter must not add or remove lines; it is applied before the
+# code is scanned, but not when the output code is generated. If lines are added
+# or removed, the anchors will not be placed correctly.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# properly processed by doxygen.
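+# A hypothetical example (not enabled here): INPUT_FILTER = "sed -e 's/\r$//'"
+# would make doxygen run "sed -e 's/\r$//' <input-file>" and parse the filter's
+# stdout, i.e. the file with Windows line endings stripped.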
+
+INPUT_FILTER =
+
+# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
+# basis. Doxygen will compare the file name with each pattern and apply the
+# filter if there is a match. The filters are a list of the form: pattern=filter
+# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
+# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
+# patterns match the file name, INPUT_FILTER is applied.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# properly processed by doxygen.
+
+FILTER_PATTERNS =
+
+# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
+# INPUT_FILTER) will also be used to filter the input files that are used for
+# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
+# The default value is: NO.
+
+FILTER_SOURCE_FILES = NO
+
+# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
+# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and
+# it is also possible to disable source filtering for a specific pattern using
+# *.ext= (so without naming a filter).
+# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.
+
+FILTER_SOURCE_PATTERNS =
+
+# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
+# is part of the input, its contents will be placed on the main page
+# (index.html). This can be useful if you have a project on for instance GitHub
+# and want to reuse the introduction page also for the doxygen output.
+
+USE_MDFILE_AS_MAINPAGE =
+
+#---------------------------------------------------------------------------
+# Configuration options related to source browsing
+#---------------------------------------------------------------------------
+
+# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
+# generated. Documented entities will be cross-referenced with these sources.
+#
+# Note: To get rid of all source code in the generated output, make sure that
+# also VERBATIM_HEADERS is set to NO.
+# The default value is: NO.
+
+SOURCE_BROWSER = YES
+
+# Setting the INLINE_SOURCES tag to YES will include the body of functions,
+# classes and enums directly into the documentation.
+# The default value is: NO.
+
+INLINE_SOURCES = NO
+
+# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
+# special comment blocks from generated source code fragments. Normal C, C++ and
+# Fortran comments will always remain visible.
+# The default value is: YES.
+
+STRIP_CODE_COMMENTS = YES
+
+# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
+# entity all documented functions referencing it will be listed.
+# The default value is: NO.
+
+REFERENCED_BY_RELATION = NO
+
+# If the REFERENCES_RELATION tag is set to YES then for each documented function
+# all documented entities called/used by that function will be listed.
+# The default value is: NO.
+
+REFERENCES_RELATION = NO
+
+# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
+# to YES then the hyperlinks from functions in REFERENCES_RELATION and
+# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
+# link to the documentation.
+# The default value is: YES.
+
+REFERENCES_LINK_SOURCE = YES
+
+# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
+# source code will show a tooltip with additional information such as prototype,
+# brief description and links to the definition and documentation. Since this
+# will make the HTML file larger and loading of large files a bit slower, you
+# can opt to disable this feature.
+# The default value is: YES.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+SOURCE_TOOLTIPS = YES
+
+# If the USE_HTAGS tag is set to YES then the references to source code will
+# point to the HTML generated by the htags(1) tool instead of doxygen built-in
+# source browser. The htags tool is part of GNU's global source tagging system
+# (see https://www.gnu.org/software/global/global.html). You will need version
+# 4.8.6 or higher.
+#
+# To use it do the following:
+# - Install the latest version of global
+# - Enable SOURCE_BROWSER and USE_HTAGS in the configuration file
+# - Make sure the INPUT points to the root of the source tree
+# - Run doxygen as normal
+#
+# Doxygen will invoke htags (and that will in turn invoke gtags), so these
+# tools must be available from the command line (i.e. in the search path).
+#
+# The result: instead of the source browser generated by doxygen, the links to
+# source code will now point to the output of htags.
+# The default value is: NO.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+USE_HTAGS = NO
+
+# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
+# verbatim copy of the header file for each class for which an include is
+# specified. Set to NO to disable this.
+# See also: Section \class.
+# The default value is: YES.
+
+VERBATIM_HEADERS = YES
+
+# If the CLANG_ASSISTED_PARSING tag is set to YES then doxygen will use the
+# clang parser (see:
+# http://clang.llvm.org/) for more accurate parsing at the cost of reduced
+# performance. This can be particularly helpful with template rich C++ code for
+# which doxygen's built-in parser lacks the necessary type information.
+# Note: The availability of this option depends on whether or not doxygen was
+# generated with the -Duse_libclang=ON option for CMake.
+# The default value is: NO.
+
+CLANG_ASSISTED_PARSING = NO
+
+# If clang assisted parsing is enabled and the CLANG_ADD_INC_PATHS tag is set to
+# YES then doxygen will add the directory of each input to the include path.
+# The default value is: YES.
+
+CLANG_ADD_INC_PATHS = YES
+
+# If clang assisted parsing is enabled you can provide the compiler with command
+# line options that you would normally use when invoking the compiler. Note that
+# the include paths will already be set by doxygen for the files and directories
+# specified with INPUT and INCLUDE_PATH.
+# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.
+
+CLANG_OPTIONS =
+
+# If clang assisted parsing is enabled you can provide the clang parser with the
+# path to the directory containing a file called compile_commands.json. This
+# file is the compilation database (see:
+# http://clang.llvm.org/docs/HowToSetupToolingForLLVM.html) containing the
+# options used when the source files were built. This is equivalent to
+# specifying the -p option to a clang tool, such as clang-check. These options
+# will then be passed to the parser. Any options specified with CLANG_OPTIONS
+# will be added as well.
+# Note: The availability of this option depends on whether or not doxygen was
+# generated with the -Duse_libclang=ON option for CMake.
+
+CLANG_DATABASE_PATH =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the alphabetical class index
+#---------------------------------------------------------------------------
+
+# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
+# compounds will be generated. Enable this if the project contains a lot of
+# classes, structs, unions or interfaces.
+# The default value is: YES.
+
+ALPHABETICAL_INDEX = YES
+
+# In case all classes in a project start with a common prefix, all classes will
+# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
+# can be used to specify a prefix (or a list of prefixes) that should be ignored
+# while generating the index headers.
+# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
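+# A hypothetical example (not enabled here): IGNORE_PREFIX = knot_ dnssec_ would
+# index a struct such as knot_pkt_t under "P" instead of grouping every symbol
+# under "K".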
+
+IGNORE_PREFIX =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the HTML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
+# The default value is: YES.
+
+GENERATE_HTML = YES
+
+# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: html.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_OUTPUT = api
+
+# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
+# generated HTML page (for example: .htm, .php, .asp).
+# The default value is: .html.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_FILE_EXTENSION = .html
+
+# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
+# each generated HTML page. If the tag is left blank doxygen will generate a
+# standard header.
+#
+# To get valid HTML, the header file must include any scripts and style sheets
+# that doxygen needs, which depend on the configuration options used (e.g.
+# the setting GENERATE_TREEVIEW). It is highly recommended to start with a
+# default header using
+# doxygen -w html new_header.html new_footer.html new_stylesheet.css
+# YourConfigFile
+# and then modify the file new_header.html. See also section "Doxygen usage"
+# for information on how to generate the default header that doxygen normally
+# uses.
+# Note: The header is subject to change so you typically have to regenerate the
+# default header when upgrading to a newer version of doxygen. For a description
+# of the possible markers and block names see the documentation.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_HEADER =
+
+# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
+# generated HTML page. If the tag is left blank doxygen will generate a standard
+# footer. See HTML_HEADER for more information on how to generate a default
+# footer and what special commands can be used inside the footer. See also
+# section "Doxygen usage" for information on how to generate the default footer
+# that doxygen normally uses.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_FOOTER =
+
+# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
+# sheet that is used by each HTML page. It can be used to fine-tune the look of
+# the HTML output. If left blank doxygen will generate a default style sheet.
+# See also section "Doxygen usage" for information on how to generate the style
+# sheet that doxygen normally uses.
+# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
+# it is more robust and this tag (HTML_STYLESHEET) will in the future become
+# obsolete.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_STYLESHEET =
+
+# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
+# cascading style sheets that are included after the standard style sheets
+# created by doxygen. Using this option one can overrule certain style aspects.
+# This is preferred over using HTML_STYLESHEET since it does not replace the
+# standard style sheet and is therefore more robust against future updates.
+# Doxygen will copy the style sheet files to the output directory.
+# Note: The order of the extra style sheet files is of importance (e.g. the last
+# style sheet in the list overrules the setting of the previous ones in the
+# list). For an example see the documentation.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_EXTRA_STYLESHEET =
+
+# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
+# other source files which should be copied to the HTML output directory. Note
+# that these files will be copied to the base HTML output directory. Use the
+# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
+# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
+# files will be copied as-is; there are no commands or markers available.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_EXTRA_FILES =
+
+# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
+# will adjust the colors in the style sheet and background images according to
+# this color. Hue is specified as an angle on a colorwheel, see
+# https://en.wikipedia.org/wiki/Hue for more information. For instance the value
+# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
+# purple, and 360 is red again.
+# Minimum value: 0, maximum value: 359, default value: 220.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_HUE = 220
+
+# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
+# in the HTML output. For a value of 0 the output will use grayscales only. A
+# value of 255 will produce the most vivid colors.
+# Minimum value: 0, maximum value: 255, default value: 100.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_SAT = 100
+
+# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
+# luminance component of the colors in the HTML output. Values below 100
+# gradually make the output lighter, whereas values above 100 make the output
+# darker. The value divided by 100 is the actual gamma applied, so 80 represents
+# a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not
+# change the gamma.
+# Minimum value: 40, maximum value: 240, default value: 80.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_GAMMA = 80
+
+# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
+# page will contain the date and time when the page was generated. Setting this
+# to YES can help to show when doxygen was last run and thus if the
+# documentation is up to date.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_TIMESTAMP = YES
+
+# If the HTML_DYNAMIC_MENUS tag is set to YES then the generated HTML
+# documentation will contain a main index with vertical navigation menus that
+# are dynamically created via JavaScript. If disabled, the navigation index will
+# consists of multiple levels of tabs that are statically embedded in every HTML
+# page. Disable this option to support browsers that do not have JavaScript,
+# like the Qt help browser.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_DYNAMIC_MENUS = YES
+
+# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
+# documentation will contain sections that can be hidden and shown after the
+# page has loaded.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_DYNAMIC_SECTIONS = YES
+
+# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
+# shown in the various tree structured indices initially; the user can expand
+# and collapse entries dynamically later on. Doxygen will expand the tree to
+# such a level that at most the specified number of entries are visible (unless
+# a fully collapsed tree already exceeds this amount). So setting the number of
+# entries to 1 will produce a fully collapsed tree by default. 0 is a special
+# value representing an infinite number of entries and will result in a fully
+# expanded tree by default.
+# Minimum value: 0, maximum value: 9999, default value: 100.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_INDEX_NUM_ENTRIES = 100
+
+# If the GENERATE_DOCSET tag is set to YES, additional index files will be
+# generated that can be used as input for Apple's Xcode 3 integrated development
+# environment (see:
+# https://developer.apple.com/xcode/), introduced with OSX 10.5 (Leopard). To
+# create a documentation set, doxygen will generate a Makefile in the HTML
+# output directory. Running make will produce the docset in that directory and
+# running make install will install the docset in
+# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
+# startup. See https://developer.apple.com/library/archive/featuredarticles/Doxy
+# genXcode/_index.html for more information.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_DOCSET = NO
+
+# This tag determines the name of the docset feed. A documentation feed provides
+# an umbrella under which multiple documentation sets from a single provider
+# (such as a company or product suite) can be grouped.
+# The default value is: Doxygen generated docs.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_FEEDNAME = "Doxygen generated docs"
+
+# This tag specifies a string that should uniquely identify the documentation
+# set bundle. This should be a reverse domain-name style string, e.g.
+# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_BUNDLE_ID = org.doxygen.Project
+
+# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
+# the documentation publisher. This should be a reverse domain-name style
+# string, e.g. com.mycompany.MyDocSet.documentation.
+# The default value is: org.doxygen.Publisher.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_PUBLISHER_ID = org.doxygen.Publisher
+
+# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
+# The default value is: Publisher.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_PUBLISHER_NAME = Publisher
+
+# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
+# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
+# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
+# (see:
+# https://www.microsoft.com/en-us/download/details.aspx?id=21138) on Windows.
+#
+# The HTML Help Workshop contains a compiler that can convert all HTML output
+# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
+# files are now used as the Windows 98 help format, and will replace the old
+# Windows help format (.hlp) on all Windows platforms in the future. Compressed
+# HTML files also contain an index, a table of contents, and you can search for
+# words in the documentation. The HTML workshop also contains a viewer for
+# compressed HTML files.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_HTMLHELP = NO
+
+# The CHM_FILE tag can be used to specify the file name of the resulting .chm
+# file. You can add a path in front of the file if the result should not be
+# written to the html output directory.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+CHM_FILE =
+
+# The HHC_LOCATION tag can be used to specify the location (absolute path
+# including file name) of the HTML help compiler (hhc.exe). If non-empty,
+# doxygen will try to run the HTML help compiler on the generated index.hhp.
+# The file has to be specified with full path.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+HHC_LOCATION =
+
+# The GENERATE_CHI flag controls if a separate .chi index file is generated
+# (YES) or that it should be included in the main .chm file (NO).
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+GENERATE_CHI = NO
+
+# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
+# and project file content.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+CHM_INDEX_ENCODING =
+
+# The BINARY_TOC flag controls whether a binary table of contents is generated
+# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
+# enables the Previous and Next buttons.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+BINARY_TOC = NO
+
+# The TOC_EXPAND flag can be set to YES to add extra items for group members to
+# the table of contents of the HTML help documentation and to the tree view.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+TOC_EXPAND = NO
+
+# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
+# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
+# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
+# (.qch) of the generated HTML documentation.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_QHP = NO
+
+# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
+# the file name of the resulting .qch file. The path specified is relative to
+# the HTML output folder.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QCH_FILE =
+
+# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
+# Project output. For more information please see Qt Help Project / Namespace
+# (see:
+# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#namespace).
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_NAMESPACE = org.doxygen.Project
+
+# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
+# Help Project output. For more information please see Qt Help Project / Virtual
+# Folders (see:
+# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#virtual-folders).
+# The default value is: doc.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_VIRTUAL_FOLDER = doc
+
+# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
+# filter to add. For more information please see Qt Help Project / Custom
+# Filters (see:
+# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-filters).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_CUST_FILTER_NAME =
+
+# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
+# custom filter to add. For more information please see Qt Help Project / Custom
+# Filters (see:
+# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-filters).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_CUST_FILTER_ATTRS =
+
+# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
+# project's filter section matches. Qt Help Project / Filter Attributes (see:
+# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#filter-attributes).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_SECT_FILTER_ATTRS =
+
+# The QHG_LOCATION tag can be used to specify the location (absolute path
+# including file name) of Qt's qhelpgenerator. If non-empty doxygen will try to
+# run qhelpgenerator on the generated .qhp file.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHG_LOCATION =
+
+# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
+# generated; together with the HTML files, they form an Eclipse help plugin. To
+# install this plugin and make it available under the help contents menu in
+# Eclipse, the contents of the directory containing the HTML and XML files needs
+# to be copied into the plugins directory of eclipse. The name of the directory
+# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
+# After copying Eclipse needs to be restarted before the help appears.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_ECLIPSEHELP = NO
+
+# A unique identifier for the Eclipse help plugin. When installing the plugin
+# the directory name containing the HTML and XML files should also have this
+# name. Each documentation set should have its own identifier.
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.
+
+ECLIPSE_DOC_ID = org.doxygen.Project
+
+# If you want full control over the layout of the generated HTML pages it might
+# be necessary to disable the index and replace it with your own. The
+# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
+# of each HTML page. A value of NO enables the index and the value YES disables
+# it. Since the tabs in the index contain the same information as the navigation
+# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+DISABLE_INDEX = YES
+
+# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
+# structure should be generated to display hierarchical information. If the tag
+# value is set to YES, a side panel will be generated containing a tree-like
+# index structure (just like the one that is generated for HTML Help). For this
+# to work a browser that supports JavaScript, DHTML, CSS and frames is required
+# (i.e. any modern browser). Windows users are probably better off using the
+# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
+# further fine-tune the look of the index. As an example, the default style
+# sheet generated by doxygen has an example that shows how to put an image at
+# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
+# the same information as the tab index, you could consider setting
+# DISABLE_INDEX to YES when enabling this option.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_TREEVIEW = YES
+
+# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
+# doxygen will group on one line in the generated HTML documentation.
+#
+# Note that a value of 0 will completely suppress the enum values from appearing
+# in the overview section.
+# Minimum value: 0, maximum value: 20, default value: 4.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+ENUM_VALUES_PER_LINE = 4
+
+# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
+# to set the initial width (in pixels) of the frame in which the tree is shown.
+# Minimum value: 0, maximum value: 1500, default value: 250.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+TREEVIEW_WIDTH = 250
+
+# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
+# external symbols imported via tag files in a separate window.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+EXT_LINKS_IN_WINDOW = NO
+
+# If the HTML_FORMULA_FORMAT option is set to svg, doxygen will use the pdf2svg
+# tool (see https://github.com/dawbarton/pdf2svg) or inkscape (see
+# https://inkscape.org) to generate formulas as SVG images instead of PNGs for
+# the HTML output. These images will generally look nicer at scaled resolutions.
+# Possible values are: png (the default) and svg (looks nicer but requires the
+# pdf2svg or inkscape tool).
+# The default value is: png.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_FORMULA_FORMAT = png
+
+# Use this tag to change the font size of LaTeX formulas included as images in
+# the HTML documentation. When you change the font size after a successful
+# doxygen run you need to manually remove any form_*.png images from the HTML
+# output directory to force them to be regenerated.
+# Minimum value: 8, maximum value: 50, default value: 10.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+FORMULA_FONTSIZE = 10
+
+# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
+# generated for formulas are transparent PNGs. Transparent PNGs are not
+# supported properly for IE 6.0, but are supported on all modern browsers.
+#
+# Note that when changing this option you need to delete any form_*.png files in
+# the HTML output directory before the changes have effect.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+FORMULA_TRANSPARENT = YES
+
+# The FORMULA_MACROFILE can contain LaTeX \newcommand and \renewcommand commands
+# to create new LaTeX commands to be used in formulas as building blocks. See
+# the section "Including formulas" for details.
+
+FORMULA_MACROFILE =
+
+# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
+# https://www.mathjax.org) which uses client side JavaScript for the rendering
+# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
+# installed or if you want the formulas to look prettier in the HTML output.
+# When
+# enabled you may also need to install MathJax separately and configure the path
+# to it using the MATHJAX_RELPATH option.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+USE_MATHJAX = NO
+
+# When MathJax is enabled you can set the default output format to be used for
+# the MathJax output. See the MathJax site (see:
+# http://docs.mathjax.org/en/v2.7-latest/output.html) for more details.
+# Possible values are: HTML-CSS (which is slower, but has the best
+# compatibility), NativeMML (i.e. MathML) and SVG.
+# The default value is: HTML-CSS.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_FORMAT = HTML-CSS
+
+# When MathJax is enabled you need to specify the location relative to the HTML
+# output directory using the MATHJAX_RELPATH option. The destination directory
+# should contain the MathJax.js script. For instance, if the mathjax directory
+# is located at the same level as the HTML output directory, then
+# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
+# Content Delivery Network so you can quickly see the result without installing
+# MathJax. However, it is strongly recommended to install a local copy of
+# MathJax from https://www.mathjax.org before deployment.
+# The default value is: https://cdn.jsdelivr.net/npm/mathjax@2.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_RELPATH = http://cdn.mathjax.org/mathjax/latest
+
+# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
+# extension names that should be enabled during MathJax rendering. For example
+# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_EXTENSIONS =
+
+# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
+# of code that will be used on startup of the MathJax code. See the MathJax site
+# (see:
+# http://docs.mathjax.org/en/v2.7-latest/output.html) for more details. For an
+# example see the documentation.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_CODEFILE =
+
+# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
+# the HTML output. The underlying search engine uses javascript and DHTML and
+# should work on any modern browser. Note that when using HTML help
+# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
+# there is already a search function so this one should typically be disabled.
+# For large projects the javascript based search engine can be slow; in that
+# case enabling SERVER_BASED_SEARCH may provide a better solution. It is
+# possible to search using the keyboard; to jump to the search box use
+# <access key> + S (what the <access key> is depends on the OS and browser, but
+# it is typically <CTRL>, <ALT>/<option>, or both). Inside the search box use
+# the <cursor down key> to jump into the search results window, the results can
+# be navigated using the <cursor keys>. Press <Enter> to select an item or
+# <escape> to cancel the search. The filter options can be selected when the
+# cursor is inside the search box by pressing <Shift>+<cursor down>. Also here
+# use the <cursor keys> to select a filter and <Enter> or <escape> to activate
+# or cancel the filter option.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+SEARCHENGINE = YES
+
+# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
+# implemented using a web server instead of a web client using JavaScript. There
+# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
+# setting. When disabled, doxygen will generate a PHP script for searching and
+# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
+# and searching needs to be provided by external tools. See the section
+# "External Indexing and Searching" for details.
+# The default value is: NO.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SERVER_BASED_SEARCH = NO
+
+# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP
+# script for searching. Instead the search results are written to an XML file
+# which needs to be processed by an external indexer. Doxygen will invoke an
+# external search engine pointed to by the SEARCHENGINE_URL option to obtain the
+# search results.
+#
+# Doxygen ships with an example indexer (doxyindexer) and search engine
+# (doxysearch.cgi) which are based on the open source search engine library
+# Xapian (see:
+# https://xapian.org/).
+#
+# See the section "External Indexing and Searching" for details.
+# The default value is: NO.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTERNAL_SEARCH = NO
+
+# The SEARCHENGINE_URL should point to a search engine hosted by a web server
+# which will return the search results when EXTERNAL_SEARCH is enabled.
+#
+# Doxygen ships with an example indexer (doxyindexer) and search engine
+# (doxysearch.cgi) which are based on the open source search engine library
+# Xapian (see:
+# https://xapian.org/). See the section "External Indexing and Searching" for
+# details.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SEARCHENGINE_URL =
+
+# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
+# search data is written to a file for indexing by an external tool. With the
+# SEARCHDATA_FILE tag the name of this file can be specified.
+# The default file is: searchdata.xml.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SEARCHDATA_FILE = searchdata.xml
+
+# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
+# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
+# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
+# projects and redirect the results back to the right project.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTERNAL_SEARCH_ID =
+
+# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
+# projects other than the one defined by this configuration file, but that are
+# all added to the same external search index. Each project needs to have a
+# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of
+# to a relative location where the documentation can be found. The format is:
+# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTRA_SEARCH_MAPPINGS =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the LaTeX output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
+# The default value is: YES.
+
+GENERATE_LATEX = NO
+
+# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: latex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_OUTPUT = latex
+
+# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
+# invoked.
+#
+# Note that when USE_PDFLATEX is disabled the default is latex; when
+# USE_PDFLATEX is enabled the default is pdflatex, and if latex is chosen in
+# the latter case it is overridden by pdflatex. For specific output languages
+# the default may have been set differently; this depends on the implementation
+# of the output language.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_CMD_NAME = latex
+
+# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
+# index for LaTeX.
+# Note: This tag is used in the Makefile / make.bat.
+# See also: LATEX_MAKEINDEX_CMD for the part in the generated output file
+# (.tex).
+# The default file is: makeindex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+MAKEINDEX_CMD_NAME = makeindex
+
+# The LATEX_MAKEINDEX_CMD tag can be used to specify the command name to
+# generate index for LaTeX. In case there is no backslash (\) as first character
+# it will be automatically added in the LaTeX code.
+# Note: This tag is used in the generated output file (.tex).
+# See also: MAKEINDEX_CMD_NAME for the part in the Makefile / make.bat.
+# The default value is: makeindex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_MAKEINDEX_CMD = makeindex
+
+# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
+# documents. This may be useful for small projects and may help to save some
+# trees in general.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+COMPACT_LATEX = NO
+
+# The PAPER_TYPE tag can be used to set the paper type that is used by the
+# printer.
+# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
+# 14 inches) and executive (7.25 x 10.5 inches).
+# The default value is: a4.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+PAPER_TYPE = a4
+
+# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
+# that should be included in the LaTeX output. The package can be specified just
+# by its name or with the correct syntax as to be used with the LaTeX
+# \usepackage command. To get the times font, for instance, you can specify:
+# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}
+# To use the option intlimits with the amsmath package you can specify:
+# EXTRA_PACKAGES=[intlimits]{amsmath}
+# If left blank no extra packages will be included.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+EXTRA_PACKAGES =
+
+# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
+# generated LaTeX document. The header should contain everything until the first
+# chapter. If it is left blank doxygen will generate a standard header. See
+# section "Doxygen usage" for information on how to let doxygen write the
+# default header to a separate file.
+#
+# Note: Only use a user-defined header if you know what you are doing! The
+# following commands have a special meaning inside the header: $title,
+# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
+# $projectbrief, $projectlogo. Doxygen will replace $title with the empty
+# string; for the replacement values of the other commands the user is referred
+# to HTML_HEADER.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_HEADER =
+
+# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
+# generated LaTeX document. The footer should contain everything after the last
+# chapter. If it is left blank doxygen will generate a standard footer. See
+# LATEX_HEADER for more information on how to generate a default footer and what
+# special commands can be used inside the footer.
+#
+# Note: Only use a user-defined footer if you know what you are doing!
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_FOOTER =
+
+# The LATEX_EXTRA_STYLESHEET tag can be used to specify additional user-defined
+# LaTeX style sheets that are included after the standard style sheets created
+# by doxygen. Using this option one can overrule certain style aspects. Doxygen
+# will copy the style sheet files to the output directory.
+# Note: The order of the extra style sheet files is of importance (e.g. the last
+# style sheet in the list overrules the setting of the previous ones in the
+# list).
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_EXTRA_STYLESHEET =
+
+# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
+# other source files which should be copied to the LATEX_OUTPUT output
+# directory. Note that the files will be copied as-is; there are no commands or
+# markers available.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_EXTRA_FILES =
+
+# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
+# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
+# contain links (just like the HTML output) instead of page references. This
+# makes the output suitable for online browsing using a PDF viewer.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+PDF_HYPERLINKS = YES
+
+# If the USE_PDFLATEX tag is set to YES, doxygen will use the engine as
+# specified with LATEX_CMD_NAME to generate the PDF file directly from the LaTeX
+# files. Set this option to YES to get higher quality PDF documentation.
+#
+# See also section LATEX_CMD_NAME for selecting the engine.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+USE_PDFLATEX = YES
+
+# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
+# command to the generated LaTeX files. This will instruct LaTeX to keep running
+# if errors occur, instead of asking the user for help. This option is also used
+# when generating formulas in HTML.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_BATCHMODE = NO
+
+# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
+# index chapters (such as File Index, Compound Index, etc.) in the output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_HIDE_INDICES = NO
+
+# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source
+# code with syntax highlighting in the LaTeX output.
+#
+# Note that which sources are shown also depends on other settings such as
+# SOURCE_BROWSER.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_SOURCE_CODE = NO
+
+# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
+# bibliography, e.g. plainnat, or ieeetr. See
+# https://en.wikipedia.org/wiki/BibTeX and \cite for more info.
+# The default value is: plain.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_BIB_STYLE = plain
+
+# If the LATEX_TIMESTAMP tag is set to YES then the footer of each generated
+# page will contain the date and time when the page was generated. Setting this
+# to NO can help when comparing the output of multiple runs.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_TIMESTAMP = NO
+
+# The LATEX_EMOJI_DIRECTORY tag is used to specify the (relative or absolute)
+# path from which the emoji images will be read. If a relative path is entered,
+# it will be relative to the LATEX_OUTPUT directory. If left blank the
+# LATEX_OUTPUT directory will be used.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_EMOJI_DIRECTORY =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the RTF output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The
+# RTF output is optimized for Word 97 and may not look too pretty with other RTF
+# readers/editors.
+# The default value is: NO.
+
+GENERATE_RTF = NO
+
+# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: rtf.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_OUTPUT = rtf
+
+# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF
+# documents. This may be useful for small projects and may help to save some
+# trees in general.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+COMPACT_RTF = NO
+
+# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
+# contain hyperlink fields. The RTF file will contain links (just like the HTML
+# output) instead of page references. This makes the output suitable for online
+# browsing using Word or some other Word compatible readers that support those
+# fields.
+#
+# Note: WordPad (write) and others do not support links.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_HYPERLINKS = NO
+
+# Load stylesheet definitions from file. Syntax is similar to doxygen's
+# configuration file, i.e. a series of assignments. You only have to provide
+# replacements; missing definitions are set to their default value.
+#
+# See also section "Doxygen usage" for information on how to generate the
+# default style sheet that doxygen normally uses.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_STYLESHEET_FILE =
+
+# Set optional variables used in the generation of an RTF document. Syntax is
+# similar to doxygen's configuration file. A template extensions file can be
+# generated using doxygen -e rtf extensionFile.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_EXTENSIONS_FILE =
+
+# If the RTF_SOURCE_CODE tag is set to YES then doxygen will include source code
+# with syntax highlighting in the RTF output.
+#
+# Note that which sources are shown also depends on other settings such as
+# SOURCE_BROWSER.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_SOURCE_CODE = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the man page output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
+# classes and files.
+# The default value is: NO.
+
+GENERATE_MAN = NO
+
+# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it. A directory man3 will be created inside the directory specified by
+# MAN_OUTPUT.
+# The default directory is: man.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_OUTPUT = man
+
+# The MAN_EXTENSION tag determines the extension that is added to the generated
+# man pages. In case the manual section does not start with a number, the number
+# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
+# optional.
+# The default value is: .3.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_EXTENSION = .3
+
+# The MAN_SUBDIR tag determines the name of the directory created within
+# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by
+# MAN_EXTENSION with the initial . removed.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_SUBDIR =
+
+# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
+# will generate one additional man file for each entity documented in the real
+# man page(s). These additional files only source the real man page, but without
+# them the man command would be unable to find the correct page.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_LINKS = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the XML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that
+# captures the structure of the code including all documentation.
+# The default value is: NO.
+
+GENERATE_XML = NO
+
+# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: xml.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_OUTPUT = xml
+
+# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
+# listings (including syntax highlighting and cross-referencing information) to
+# the XML output. Note that enabling this will significantly increase the size
+# of the XML output.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_PROGRAMLISTING = YES
+
+# If the XML_NS_MEMB_FILE_SCOPE tag is set to YES, doxygen will include
+# namespace members in file scope as well, matching the HTML output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_NS_MEMB_FILE_SCOPE = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the DOCBOOK output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files
+# that can be used to generate PDF.
+# The default value is: NO.
+
+GENERATE_DOCBOOK = NO
+
+# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
+# front of it.
+# The default directory is: docbook.
+# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
+
+DOCBOOK_OUTPUT = docbook
+
+# If the DOCBOOK_PROGRAMLISTING tag is set to YES, doxygen will include the
+# program listings (including syntax highlighting and cross-referencing
+# information) in the DOCBOOK output. Note that enabling this will significantly
+# increase the size of the DOCBOOK output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
+
+DOCBOOK_PROGRAMLISTING = NO
+
+#---------------------------------------------------------------------------
+# Configuration options for the AutoGen Definitions output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
+# AutoGen Definitions (see http://autogen.sourceforge.net/) file that captures
+# the structure of the code including all documentation. Note that this feature
+# is still experimental and incomplete at the moment.
+# The default value is: NO.
+
+GENERATE_AUTOGEN_DEF = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the Perl module output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module
+# file that captures the structure of the code including all documentation.
+#
+# Note that this feature is still experimental and incomplete at the moment.
+# The default value is: NO.
+
+GENERATE_PERLMOD = NO
+
+# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary
+# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
+# output from the Perl module output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_LATEX = NO
+
+# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely
+# formatted so it can be parsed by a human reader. This is useful if you want to
+# understand what is going on. On the other hand, if this tag is set to NO, the
+# size of the Perl module output will be much smaller and Perl will parse it
+# just the same.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_PRETTY = YES
+
+# The names of the make variables in the generated doxyrules.make file are
+# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
+# so different doxyrules.make files included by the same Makefile don't
+# overwrite each other's variables.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_MAKEVAR_PREFIX =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the preprocessor
+#---------------------------------------------------------------------------
+
+# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all
+# C-preprocessor directives found in the sources and include files.
+# The default value is: YES.
+
+ENABLE_PREPROCESSING = YES
+
+# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names
+# in the source code. If set to NO, only conditional compilation will be
+# performed. Macro expansion can be done in a controlled way by setting
+# EXPAND_ONLY_PREDEF to YES.
+# The default value is: NO.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+MACRO_EXPANSION = NO
+
+# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
+# the macro expansion is limited to the macros specified with the PREDEFINED and
+# EXPAND_AS_DEFINED tags.
+# The default value is: NO.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+EXPAND_ONLY_PREDEF = NO
+
+# If the SEARCH_INCLUDES tag is set to YES, the include files in the
+# INCLUDE_PATH will be searched if a #include is found.
+# The default value is: YES.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+SEARCH_INCLUDES = YES
+
+# The INCLUDE_PATH tag can be used to specify one or more directories that
+# contain include files that are not input files but should be processed by the
+# preprocessor.
+# This tag requires that the tag SEARCH_INCLUDES is set to YES.
+
+INCLUDE_PATH =
+
+# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
+# patterns (like *.h and *.hpp) to filter out the header-files in the
+# directories. If left blank, the patterns specified with FILE_PATTERNS will be
+# used.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+INCLUDE_FILE_PATTERNS =
+
+# The PREDEFINED tag can be used to specify one or more macro names that are
+# defined before the preprocessor is started (similar to the -D option of e.g.
+# gcc). The argument of the tag is a list of macros of the form: name or
+# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
+# is assumed. To prevent a macro definition from being undefined via #undef or
+# recursively expanded use the := operator instead of the = operator.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+PREDEFINED =
+
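+# An illustrative (hypothetical) example that defines one macro to 1 and gives
+# another an empty expansion:
+# PREDEFINED = ENABLE_FEATURE_X=1 MY_EXPORT=
+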
+# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
+# tag can be used to specify a list of macro names that should be expanded. The
+# macro definition that is found in the sources will be used. Use the PREDEFINED
+# tag if you want to use a different macro definition that overrules the
+# definition found in the source code.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+EXPAND_AS_DEFINED =
+
+# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
+# remove all references to function-like macros that are alone on a line, have
+# an all uppercase name, and do not end with a semicolon. Such function macros
+# are typically used for boiler-plate code, and will confuse the parser if not
+# removed.
+# The default value is: YES.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+SKIP_FUNCTION_MACROS = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to external references
+#---------------------------------------------------------------------------
+
+# The TAGFILES tag can be used to specify one or more tag files. For each tag
+# file the location of the external documentation should be added. The format of
+# a tag file without this location is as follows:
+# TAGFILES = file1 file2 ...
+# Adding location for the tag files is done as follows:
+# TAGFILES = file1=loc1 "file2 = loc2" ...
+# where loc1 and loc2 can be relative or absolute paths or URLs. See the
+# section "Linking to external documentation" for more information about the use
+# of tag files.
+# Note: Each tag file must have a unique name (where the name does NOT include
+# the path). If a tag file is not located in the directory in which doxygen is
+# run, you must also specify the path to the tagfile here.
+
+TAGFILES =
+
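+# A hypothetical example combining both forms described above (tag file names
+# and locations are illustrative only):
+# TAGFILES = libfoo.tag=https://example.org/libfoo/html libbar.tag
+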
+# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
+# tag file that is based on the input files it reads. See section "Linking to
+# external documentation" for more information about the usage of tag files.
+
+GENERATE_TAGFILE =
+
+# If the ALLEXTERNALS tag is set to YES, all external classes will be listed in
+# the class index. If set to NO, only the inherited external classes will be
+# listed.
+# The default value is: NO.
+
+ALLEXTERNALS = NO
+
+# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
+# in the modules index. If set to NO, only the current project's groups will be
+# listed.
+# The default value is: YES.
+
+EXTERNAL_GROUPS = YES
+
+# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
+# the related pages index. If set to NO, only the current project's pages will
+# be listed.
+# The default value is: YES.
+
+EXTERNAL_PAGES = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to the dot tool
+#---------------------------------------------------------------------------
+
+# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram
+# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to
+# NO turns the diagrams off. Note that this option also works with HAVE_DOT
+# disabled, but it is recommended to install and use dot, since it yields more
+# powerful graphs.
+# The default value is: YES.
+
+CLASS_DIAGRAMS = YES
+
+# You can include diagrams made with dia in doxygen documentation. Doxygen will
+# then run dia to produce the diagram and insert it in the documentation. The
+# DIA_PATH tag allows you to specify the directory where the dia binary resides.
+# If left empty dia is assumed to be found in the default search path.
+
+DIA_PATH =
+
+# If set to YES the inheritance and collaboration graphs will hide inheritance
+# and usage relations if the target is undocumented or is not a class.
+# The default value is: YES.
+
+HIDE_UNDOC_RELATIONS = YES
+
+# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
+# available from the path. This tool is part of Graphviz (see:
+# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent
+# Bell Labs. The other options in this section have no effect if this option is
+# set to NO.
+# The default value is: YES.
+
+HAVE_DOT = NO
+
+# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed
+# to run in parallel. When set to 0 doxygen will base this on the number of
+# processors available in the system. You can set it explicitly to a value
+# larger than 0 to get control over the balance between CPU load and processing
+# speed.
+# Minimum value: 0, maximum value: 32, default value: 0.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_NUM_THREADS = 0
+
+# When you want a different-looking font in the dot files that doxygen
+# generates you can specify the font name using DOT_FONTNAME. You need to make
+# sure dot is able to find the font, which can be done by putting it in a
+# standard location or by setting the DOTFONTPATH environment variable or by
+# setting DOT_FONTPATH to the directory containing the font.
+# The default value is: Helvetica.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTNAME =
+
+# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of
+# dot graphs.
+# Minimum value: 4, maximum value: 24, default value: 10.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTSIZE = 10
+
+# By default doxygen will tell dot to use the default font as specified with
+# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set
+# the path where dot can find it using this tag.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTPATH =
+
+# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for
+# each documented class showing the direct and indirect inheritance relations.
+# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CLASS_GRAPH = YES
+
+# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a
+# graph for each documented class showing the direct and indirect implementation
+# dependencies (inheritance, containment, and class reference variables) of the
+# class with other documented classes.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+COLLABORATION_GRAPH = YES
+
+# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for
+# groups, showing the direct groups dependencies.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GROUP_GRAPHS = YES
+
+# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and
+# collaboration diagrams in a style similar to the OMG's Unified Modeling
+# Language.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+UML_LOOK = NO
+
+# If the UML_LOOK tag is enabled, the fields and methods are shown inside the
+# class node. If there are many fields or methods and many nodes the graph may
+# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the
+# number of items for each type to make the size more manageable. Set this to 0
+# for no limit. Note that the threshold may be exceeded by 50% before the limit
+# is enforced. So when you set the threshold to 10, up to 15 fields may appear,
+# but if the number exceeds 15, the total amount of fields shown is limited to
+# 10.
+# Minimum value: 0, maximum value: 100, default value: 10.
+# This tag requires that the tag UML_LOOK is set to YES.
+
+UML_LIMIT_NUM_FIELDS = 10
+
+# If the DOT_UML_DETAILS tag is set to NO, doxygen will show attributes and
+# methods without types and arguments in the UML graphs. If the DOT_UML_DETAILS
+# tag is set to YES, doxygen will add type and arguments for attributes and
+# methods in the UML graphs. If the DOT_UML_DETAILS tag is set to NONE, doxygen
+# will not generate fields with class member information in the UML graphs. The
+# class diagrams will look similar to the default class diagrams but using UML
+# notation for the relationships.
+# Possible values are: NO, YES and NONE.
+# The default value is: NO.
+# This tag requires that the tag UML_LOOK is set to YES.
+
+DOT_UML_DETAILS = NO
+
+# The DOT_WRAP_THRESHOLD tag can be used to set the maximum number of characters
+# to display on a single line. If the actual line length exceeds this threshold
+# significantly it will be wrapped across multiple lines. Some heuristics are
+# applied to avoid ugly line breaks.
+# Minimum value: 0, maximum value: 1000, default value: 17.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_WRAP_THRESHOLD = 17
+
+# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
+# collaboration graphs will show the relations between templates and their
+# instances.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+TEMPLATE_RELATIONS = NO
+
+# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to
+# YES then doxygen will generate a graph for each documented file showing the
+# direct and indirect include dependencies of the file with other documented
+# files.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INCLUDE_GRAPH = YES
+
+# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are
+# set to YES then doxygen will generate a graph for each documented file showing
+# which other documented files directly and indirectly include this file.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INCLUDED_BY_GRAPH = YES
+
+# If the CALL_GRAPH tag is set to YES then doxygen will generate a call
+# dependency graph for every global function or class method.
+#
+# Note that enabling this option will significantly increase the time of a run.
+# So in most cases it will be better to enable call graphs for selected
+# functions only using the \callgraph command. Disabling a call graph can be
+# accomplished by means of the command \hidecallgraph.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CALL_GRAPH = NO
+
+# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller
+# dependency graph for every global function or class method.
+#
+# Note that enabling this option will significantly increase the time of a run.
+# So in most cases it will be better to enable caller graphs for selected
+# functions only using the \callergraph command. Disabling a caller graph can be
+# accomplished by means of the command \hidecallergraph.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CALLER_GRAPH = NO
+
+# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a
+# graphical hierarchy of all classes instead of a textual one.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GRAPHICAL_HIERARCHY = YES
+
+# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
+# dependencies a directory has on other directories in a graphical way. The
+# dependency relations are determined by the #include relations between the
+# files in the directories.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DIRECTORY_GRAPH = YES
+
+# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
+# generated by dot. For an explanation of the image formats see the section
+# output formats in the documentation of the dot tool (Graphviz (see:
+# http://www.graphviz.org/)).
+# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
+# to make the SVG files visible in IE 9+ (other browsers do not have this
+# requirement).
+# Possible values are: png, png:cairo, png:cairo:cairo, png:cairo:gd, png:gd,
+# png:gd:gd, jpg, jpg:cairo, jpg:cairo:gd, jpg:gd, jpg:gd:gd, gif, gif:cairo,
+# gif:cairo:gd, gif:gd, gif:gd:gd, svg, png:cairo:gdiplus, png:gdiplus and
+# png:gdiplus:gdiplus.
+# The default value is: png.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_IMAGE_FORMAT = png
+
+# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
+# enable generation of interactive SVG images that allow zooming and panning.
+#
+# Note that this requires a modern browser other than Internet Explorer. Tested
+# and working are Firefox, Chrome, Safari, and Opera.
+# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
+# the SVG files visible. Older versions of IE do not have SVG support.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INTERACTIVE_SVG = NO
+
+# The DOT_PATH tag can be used to specify the path where the dot tool can be
+# found. If left blank, it is assumed the dot tool can be found in the path.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_PATH =
+
+# The DOTFILE_DIRS tag can be used to specify one or more directories that
+# contain dot files that are included in the documentation (see the \dotfile
+# command).
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOTFILE_DIRS =
+
+# The MSCFILE_DIRS tag can be used to specify one or more directories that
+# contain msc files that are included in the documentation (see the \mscfile
+# command).
+
+MSCFILE_DIRS =
+
+# The DIAFILE_DIRS tag can be used to specify one or more directories that
+# contain dia files that are included in the documentation (see the \diafile
+# command).
+
+DIAFILE_DIRS =
+
+# When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the
+# path where java can find the plantuml.jar file. If left blank, it is assumed
+# PlantUML is not used or called during a preprocessing step. Doxygen will
+# generate a warning when it encounters a \startuml command in this case and
+# will not generate output for the diagram.
+
+PLANTUML_JAR_PATH =
+
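+# For illustration only (the path is hypothetical and depends on how PlantUML
+# was installed):
+# PLANTUML_JAR_PATH = /usr/share/plantuml/plantuml.jar
+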
+# When using plantuml, the PLANTUML_CFG_FILE tag can be used to specify a
+# configuration file for plantuml.
+
+PLANTUML_CFG_FILE =
+
+# When using plantuml, the specified paths are searched for files specified by
+# the !include statement in a plantuml block.
+
+PLANTUML_INCLUDE_PATH =
+
+# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
+# that will be shown in the graph. If the number of nodes in a graph becomes
+# larger than this value, doxygen will truncate the graph, which is visualized
+# by representing a node as a red box. Note that if the number of direct
+# children of the root node in a graph is already larger than
+# DOT_GRAPH_MAX_NODES, the graph will not be shown at all. Also note that
+# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
+# Minimum value: 0, maximum value: 10000, default value: 50.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_GRAPH_MAX_NODES = 50
+
+# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
+# generated by dot. A depth value of 3 means that only nodes reachable from the
+# root by following a path via at most 3 edges will be shown. Nodes that lie
+# further from the root node will be omitted. Note that setting this option to 1
+# or 2 may greatly reduce the computation time needed for large code bases. Also
+# note that the size of a graph can be further restricted by
+# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
+# Minimum value: 0, maximum value: 1000, default value: 0.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+MAX_DOT_GRAPH_DEPTH = 0
+
+# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
+# background. This is disabled by default, because dot on Windows does not seem
+# to support this out of the box.
+#
+# Warning: Depending on the platform used, enabling this option may lead to
+# badly anti-aliased labels on the edges of a graph (i.e. they become hard to
+# read).
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_TRANSPARENT = NO
+
+# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
+# files in one run (i.e. multiple -o and -T options on the command line). This
+# makes dot run faster, but since only newer versions of dot (>1.8.10) support
+# this, this feature is disabled by default.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_MULTI_TARGETS = NO
+
+# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
+# explaining the meaning of the various boxes and arrows in the dot generated
+# graphs.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GENERATE_LEGEND = YES
+
+# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate
+# files that are used to generate the various graphs.
+#
+# Note: This setting is not only used for dot files but also for msc and
+# plantuml temporary files.
+# The default value is: YES.
+
+DOT_CLEANUP = YES
diff --git a/Makefile.am b/Makefile.am
new file mode 100644
index 0000000..f6cab44
--- /dev/null
+++ b/Makefile.am
@@ -0,0 +1,99 @@
+ACLOCAL_AMFLAGS = -I m4
+SUBDIRS = src tests tests-fuzz python samples distro doc
+
+EXTRA_DIST = README.md
+
+.PHONY: singlehtml epub install-singlehtml install-epub
+singlehtml install-singlehtml epub install-epub:
+ $(MAKE) -C doc $@
+
+.PHONY: check-compile
+check-compile:
+ $(MAKE) $(AM_MAKEFLAGS) -C tests $@
+ $(MAKE) $(AM_MAKEFLAGS) -C tests-fuzz $@
+
+AM_DISTCHECK_CONFIGURE_FLAGS =
+
+CODE_COVERAGE_INFO = coverage.info
+CODE_COVERAGE_HTML = coverage.html
+CODE_COVERAGE_DIRS = \
+ src/contrib \
+ src/knot \
+ src/libdnssec \
+ src/libknot \
+ src/libzscanner
+
+code_coverage_quiet = --quiet
+
+check-code-coverage:
+if CODE_COVERAGE_ENABLED
+ $(MAKE) $(AM_MAKEFLAGS) code-coverage-initial
+ -$(MAKE) $(AM_MAKEFLAGS) -k check
+ $(MAKE) $(AM_MAKEFLAGS) code-coverage-capture
+ $(MAKE) $(AM_MAKEFLAGS) code-coverage-html
+ $(MAKE) $(AM_MAKEFLAGS) code-coverage-summary
+else
+ @echo "You need to run configure with --enable-code-coverage to enable code coverage"
+endif
+
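+# Illustrative usage (assumes an already checked-out source tree; only the
+# --enable-code-coverage flag is taken from the targets in this file):
+#   ./configure --enable-code-coverage
+#   make check-code-coverage   # runs the tests, then writes an HTML report
+#                              # under coverage.html
+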
+code-coverage-initial:
+if CODE_COVERAGE_ENABLED
+ $(LCOV) $(code_coverage_quiet) \
+ --no-external \
+ $(foreach dir, $(CODE_COVERAGE_DIRS), --directory $(top_builddir)/$(dir)) \
+ --capture --initial \
+ --ignore-errors source \
+ --no-checksum \
+ --compat-libtool \
+ --output-file $(CODE_COVERAGE_INFO)
+else
+ @echo "You need to run configure with --enable-code-coverage to enable code coverage"
+endif
+
+code-coverage-capture:
+if CODE_COVERAGE_ENABLED
+ $(LCOV) $(code_coverage_quiet) \
+ --no-external \
+ $(foreach dir, $(CODE_COVERAGE_DIRS), --directory $(builddir)/$(dir)) \
+ --capture \
+ --ignore-errors source \
+ --no-checksum \
+ --compat-libtool \
+ --output-file $(CODE_COVERAGE_INFO)
+else
+ @echo "You need to run configure with --enable-code-coverage to enable code coverage"
+endif
+
+code-coverage-html:
+if CODE_COVERAGE_ENABLED
+ @echo "Generating code coverage HTML report (this might take a while)"
+ LANG=C $(GENHTML) $(code_coverage_quiet) \
+ --output-directory $(CODE_COVERAGE_HTML) \
+ --title "Knot DNS $(PACKAGE_VERSION) Code Coverage" \
+ --legend --show-details \
+ --ignore-errors source \
+ $(CODE_COVERAGE_INFO)
+else
+ @echo "You need to run configure with --enable-code-coverage to enable code coverage"
+endif
+
+code-coverage-summary:
+if CODE_COVERAGE_ENABLED
+ $(LCOV) \
+ --summary $(CODE_COVERAGE_INFO)
+else
+ @echo "You need to run configure with --enable-code-coverage to enable code coverage"
+endif
+
+if CODE_COVERAGE_ENABLED
+clean-local: code-coverage-clean
+ -find . -name "*.gcno" -delete
+code-coverage-clean:
+ -$(LCOV) --directory $(top_builddir) -z
+ -rm -rf $(CODE_COVERAGE_INFO) $(CODE_COVERAGE_HTML)
+	-find . \( -name "*.gcda" -o -name "*.gcov" \) -delete
+endif
+
+
+.PHONY: check-code-coverage code-coverage-initial code-coverage-capture code-coverage-html code-coverage-summary code-coverage-clean
+.NOTPARALLEL: clean
diff --git a/Makefile.in b/Makefile.in
new file mode 100644
index 0000000..d7ac091
--- /dev/null
+++ b/Makefile.in
@@ -0,0 +1,983 @@
+# Makefile.in generated by automake 1.16.5 from Makefile.am.
+# @configure_input@
+
+# Copyright (C) 1994-2021 Free Software Foundation, Inc.
+
+# This Makefile.in is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+@SET_MAKE@
+VPATH = @srcdir@
+am__is_gnu_make = { \
+ if test -z '$(MAKELEVEL)'; then \
+ false; \
+ elif test -n '$(MAKE_HOST)'; then \
+ true; \
+ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \
+ true; \
+ else \
+ false; \
+ fi; \
+}
+am__make_running_with_option = \
+ case $${target_option-} in \
+ ?) ;; \
+ *) echo "am__make_running_with_option: internal error: invalid" \
+ "target option '$${target_option-}' specified" >&2; \
+ exit 1;; \
+ esac; \
+ has_opt=no; \
+ sane_makeflags=$$MAKEFLAGS; \
+ if $(am__is_gnu_make); then \
+ sane_makeflags=$$MFLAGS; \
+ else \
+ case $$MAKEFLAGS in \
+ *\\[\ \ ]*) \
+ bs=\\; \
+ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \
+ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \
+ esac; \
+ fi; \
+ skip_next=no; \
+ strip_trailopt () \
+ { \
+ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \
+ }; \
+ for flg in $$sane_makeflags; do \
+ test $$skip_next = yes && { skip_next=no; continue; }; \
+ case $$flg in \
+ *=*|--*) continue;; \
+ -*I) strip_trailopt 'I'; skip_next=yes;; \
+ -*I?*) strip_trailopt 'I';; \
+ -*O) strip_trailopt 'O'; skip_next=yes;; \
+ -*O?*) strip_trailopt 'O';; \
+ -*l) strip_trailopt 'l'; skip_next=yes;; \
+ -*l?*) strip_trailopt 'l';; \
+ -[dEDm]) skip_next=yes;; \
+ -[JT]) skip_next=yes;; \
+ esac; \
+ case $$flg in \
+ *$$target_option*) has_opt=yes; break;; \
+ esac; \
+ done; \
+ test $$has_opt = yes
+am__make_dryrun = (target_option=n; $(am__make_running_with_option))
+am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
+pkgdatadir = $(datadir)/@PACKAGE@
+pkgincludedir = $(includedir)/@PACKAGE@
+pkglibdir = $(libdir)/@PACKAGE@
+pkglibexecdir = $(libexecdir)/@PACKAGE@
+am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
+install_sh_DATA = $(install_sh) -c -m 644
+install_sh_PROGRAM = $(install_sh) -c
+install_sh_SCRIPT = $(install_sh) -c
+INSTALL_HEADER = $(INSTALL_DATA)
+transform = $(program_transform_name)
+NORMAL_INSTALL = :
+PRE_INSTALL = :
+POST_INSTALL = :
+NORMAL_UNINSTALL = :
+PRE_UNINSTALL = :
+POST_UNINSTALL = :
+build_triplet = @build@
+host_triplet = @host@
+subdir = .
+ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
+am__aclocal_m4_deps = $(top_srcdir)/m4/ax_check_compile_flag.m4 \
+ $(top_srcdir)/m4/ax_check_link_flag.m4 \
+ $(top_srcdir)/m4/code-coverage.m4 \
+ $(top_srcdir)/m4/knot-lib-version.m4 \
+ $(top_srcdir)/m4/knot-module.m4 $(top_srcdir)/m4/libtool.m4 \
+ $(top_srcdir)/m4/ltoptions.m4 $(top_srcdir)/m4/ltsugar.m4 \
+ $(top_srcdir)/m4/ltversion.m4 $(top_srcdir)/m4/lt~obsolete.m4 \
+ $(top_srcdir)/m4/sanitizer.m4 $(top_srcdir)/m4/visibility.m4 \
+ $(top_srcdir)/m4/knot-version.m4 $(top_srcdir)/configure.ac
+am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
+ $(ACLOCAL_M4)
+DIST_COMMON = $(srcdir)/Makefile.am $(top_srcdir)/configure \
+ $(am__configure_deps) $(am__DIST_COMMON)
+am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \
+ configure.lineno config.status.lineno
+mkinstalldirs = $(install_sh) -d
+CONFIG_HEADER = $(top_builddir)/src/config.h
+CONFIG_CLEAN_FILES = src/libknot/version.h src/libdnssec/version.h \
+ src/libzscanner/version.h Doxyfile \
+ python/libknot/libknot/__init__.py \
+ src/knot/modules/static_modules.h
+CONFIG_CLEAN_VPATH_FILES =
+AM_V_P = $(am__v_P_@AM_V@)
+am__v_P_ = $(am__v_P_@AM_DEFAULT_V@)
+am__v_P_0 = false
+am__v_P_1 = :
+AM_V_GEN = $(am__v_GEN_@AM_V@)
+am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
+am__v_GEN_0 = @echo " GEN " $@;
+am__v_GEN_1 =
+AM_V_at = $(am__v_at_@AM_V@)
+am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
+am__v_at_0 = @
+am__v_at_1 =
+SOURCES =
+DIST_SOURCES =
+RECURSIVE_TARGETS = all-recursive check-recursive cscopelist-recursive \
+ ctags-recursive dvi-recursive html-recursive info-recursive \
+ install-data-recursive install-dvi-recursive \
+ install-exec-recursive install-html-recursive \
+ install-info-recursive install-pdf-recursive \
+ install-ps-recursive install-recursive installcheck-recursive \
+ installdirs-recursive pdf-recursive ps-recursive \
+ tags-recursive uninstall-recursive
+am__can_run_installinfo = \
+ case $$AM_UPDATE_INFO_DIR in \
+ n|no|NO) false;; \
+ *) (install-info --version) >/dev/null 2>&1;; \
+ esac
+RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \
+ distclean-recursive maintainer-clean-recursive
+am__recursive_targets = \
+ $(RECURSIVE_TARGETS) \
+ $(RECURSIVE_CLEAN_TARGETS) \
+ $(am__extra_recursive_targets)
+AM_RECURSIVE_TARGETS = $(am__recursive_targets:-recursive=) TAGS CTAGS \
+ cscope distdir distdir-am dist dist-all distcheck
+am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP)
+# Read a list of newline-separated strings from the standard input,
+# and print each of them once, without duplicates. Input order is
+# *not* preserved.
+am__uniquify_input = $(AWK) '\
+ BEGIN { nonempty = 0; } \
+ { items[$$0] = 1; nonempty = 1; } \
+ END { if (nonempty) { for (i in items) print i; }; } \
+'
+# Make sure the list of sources is unique. This is necessary because,
+# e.g., the same source file might be shared among _SOURCES variables
+# for different programs/libraries.
+am__define_uniq_tagged_files = \
+ list='$(am__tagged_files)'; \
+ unique=`for i in $$list; do \
+ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
+ done | $(am__uniquify_input)`
+DIST_SUBDIRS = $(SUBDIRS)
+am__DIST_COMMON = $(srcdir)/Doxyfile.in $(srcdir)/Makefile.in \
+ $(top_srcdir)/python/libknot/libknot/__init__.py.in \
+ $(top_srcdir)/src/knot/modules/static_modules.h.in \
+ $(top_srcdir)/src/libdnssec/version.h.in \
+ $(top_srcdir)/src/libknot/version.h.in \
+ $(top_srcdir)/src/libzscanner/version.h.in COPYING NEWS \
+ README.md ar-lib compile config.guess config.sub install-sh \
+ ltmain.sh missing
+DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
+distdir = $(PACKAGE)-$(VERSION)
+top_distdir = $(distdir)
+am__remove_distdir = \
+ if test -d "$(distdir)"; then \
+ find "$(distdir)" -type d ! -perm -200 -exec chmod u+w {} ';' \
+ && rm -rf "$(distdir)" \
+ || { sleep 5 && rm -rf "$(distdir)"; }; \
+ else :; fi
+am__post_remove_distdir = $(am__remove_distdir)
+am__relativize = \
+ dir0=`pwd`; \
+ sed_first='s,^\([^/]*\)/.*$$,\1,'; \
+ sed_rest='s,^[^/]*/*,,'; \
+ sed_last='s,^.*/\([^/]*\)$$,\1,'; \
+ sed_butlast='s,/*[^/]*$$,,'; \
+ while test -n "$$dir1"; do \
+ first=`echo "$$dir1" | sed -e "$$sed_first"`; \
+ if test "$$first" != "."; then \
+ if test "$$first" = ".."; then \
+ dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \
+ dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \
+ else \
+ first2=`echo "$$dir2" | sed -e "$$sed_first"`; \
+ if test "$$first2" = "$$first"; then \
+ dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \
+ else \
+ dir2="../$$dir2"; \
+ fi; \
+ dir0="$$dir0"/"$$first"; \
+ fi; \
+ fi; \
+ dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \
+ done; \
+ reldir="$$dir2"
+GZIP_ENV = --best
+DIST_ARCHIVES = $(distdir).tar.xz
+DIST_TARGETS = dist-xz
+# Exists only to be overridden by the user if desired.
+AM_DISTCHECK_DVI_TARGET = dvi
+distuninstallcheck_listfiles = find . -type f -print
+am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \
+ | sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$'
+distcleancheck_listfiles = find . -type f -print
+ACLOCAL = @ACLOCAL@
+AMTAR = @AMTAR@
+AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
+AR = @AR@
+AUTOCONF = @AUTOCONF@
+AUTOHEADER = @AUTOHEADER@
+AUTOMAKE = @AUTOMAKE@
+AWK = @AWK@
+CC = @CC@
+CCDEPMODE = @CCDEPMODE@
+CFLAGS = @CFLAGS@
+CFLAG_VISIBILITY = @CFLAG_VISIBILITY@
+CODE_COVERAGE_ENABLED = @CODE_COVERAGE_ENABLED@
+CPP = @CPP@
+CPPFLAGS = @CPPFLAGS@
+CSCOPE = @CSCOPE@
+CTAGS = @CTAGS@
+CYGPATH_W = @CYGPATH_W@
+DEFS = @DEFS@
+DEPDIR = @DEPDIR@
+DLLTOOL = @DLLTOOL@
+DNSTAP_CFLAGS = @DNSTAP_CFLAGS@
+DNSTAP_LIBS = @DNSTAP_LIBS@
+DSYMUTIL = @DSYMUTIL@
+DUMPBIN = @DUMPBIN@
+ECHO_C = @ECHO_C@
+ECHO_N = @ECHO_N@
+ECHO_T = @ECHO_T@
+EGREP = @EGREP@
+ETAGS = @ETAGS@
+EXEEXT = @EXEEXT@
+FGREP = @FGREP@
+FILECMD = @FILECMD@
+GENHTML = @GENHTML@
+GREP = @GREP@
+HAVE_VISIBILITY = @HAVE_VISIBILITY@
+INSTALL = @INSTALL@
+INSTALL_DATA = @INSTALL_DATA@
+INSTALL_PROGRAM = @INSTALL_PROGRAM@
+INSTALL_SCRIPT = @INSTALL_SCRIPT@
+INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
+KNOT_VERSION_MAJOR = @KNOT_VERSION_MAJOR@
+KNOT_VERSION_MINOR = @KNOT_VERSION_MINOR@
+KNOT_VERSION_PATCH = @KNOT_VERSION_PATCH@
+LCOV = @LCOV@
+LD = @LD@
+LDFLAGS = @LDFLAGS@
+LDFLAG_EXCLUDE_LIBS = @LDFLAG_EXCLUDE_LIBS@
+LIBOBJS = @LIBOBJS@
+LIBS = @LIBS@
+LIBTOOL = @LIBTOOL@
+LIPO = @LIPO@
+LN_S = @LN_S@
+LTLIBOBJS = @LTLIBOBJS@
+LT_NO_UNDEFINED = @LT_NO_UNDEFINED@
+LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@
+MAKEINFO = @MAKEINFO@
+MANIFEST_TOOL = @MANIFEST_TOOL@
+MKDIR_P = @MKDIR_P@
+NM = @NM@
+NMEDIT = @NMEDIT@
+OBJDUMP = @OBJDUMP@
+OBJEXT = @OBJEXT@
+OTOOL = @OTOOL@
+OTOOL64 = @OTOOL64@
+PACKAGE = @PACKAGE@
+PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
+PACKAGE_NAME = @PACKAGE_NAME@
+PACKAGE_STRING = @PACKAGE_STRING@
+PACKAGE_TARNAME = @PACKAGE_TARNAME@
+PACKAGE_URL = @PACKAGE_URL@
+PACKAGE_VERSION = @PACKAGE_VERSION@
+PATH_SEPARATOR = @PATH_SEPARATOR@
+PDFLATEX = @PDFLATEX@
+PKG_CONFIG = @PKG_CONFIG@
+PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
+PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
+PROTOC_C = @PROTOC_C@
+RANLIB = @RANLIB@
+RELEASE_DATE = @RELEASE_DATE@
+SED = @SED@
+SET_MAKE = @SET_MAKE@
+SHELL = @SHELL@
+SPHINXBUILD = @SPHINXBUILD@
+STRIP = @STRIP@
+VERSION = @VERSION@
+abs_builddir = @abs_builddir@
+abs_srcdir = @abs_srcdir@
+abs_top_builddir = @abs_top_builddir@
+abs_top_srcdir = @abs_top_srcdir@
+ac_ct_AR = @ac_ct_AR@
+ac_ct_CC = @ac_ct_CC@
+ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+am__include = @am__include@
+am__leading_dot = @am__leading_dot@
+am__quote = @am__quote@
+am__tar = @am__tar@
+am__untar = @am__untar@
+bindir = @bindir@
+build = @build@
+build_alias = @build_alias@
+build_cpu = @build_cpu@
+build_os = @build_os@
+build_vendor = @build_vendor@
+builddir = @builddir@
+cap_ng_CFLAGS = @cap_ng_CFLAGS@
+cap_ng_LIBS = @cap_ng_LIBS@
+conf_mapsize = @conf_mapsize@
+config_dir = @config_dir@
+datadir = @datadir@
+datarootdir = @datarootdir@
+dlopen_LIBS = @dlopen_LIBS@
+docdir = @docdir@
+dvidir = @dvidir@
+embedded_libngtcp2_CFLAGS = @embedded_libngtcp2_CFLAGS@
+embedded_libngtcp2_LIBS = @embedded_libngtcp2_LIBS@
+exec_prefix = @exec_prefix@
+fuzzer_CFLAGS = @fuzzer_CFLAGS@
+fuzzer_LDFLAGS = @fuzzer_LDFLAGS@
+gnutls_CFLAGS = @gnutls_CFLAGS@
+gnutls_LIBS = @gnutls_LIBS@
+host = @host@
+host_alias = @host_alias@
+host_cpu = @host_cpu@
+host_os = @host_os@
+host_vendor = @host_vendor@
+htmldir = @htmldir@
+includedir = @includedir@
+infodir = @infodir@
+install_sh = @install_sh@
+libbpf_CFLAGS = @libbpf_CFLAGS@
+libbpf_LIBS = @libbpf_LIBS@
+libdbus_CFLAGS = @libdbus_CFLAGS@
+libdbus_LIBS = @libdbus_LIBS@
+libdir = @libdir@
+libdnssec_SONAME = @libdnssec_SONAME@
+libdnssec_SOVERSION = @libdnssec_SOVERSION@
+libdnssec_VERSION_INFO = @libdnssec_VERSION_INFO@
+libedit_CFLAGS = @libedit_CFLAGS@
+libedit_LIBS = @libedit_LIBS@
+libexecdir = @libexecdir@
+libfstrm_CFLAGS = @libfstrm_CFLAGS@
+libfstrm_LIBS = @libfstrm_LIBS@
+libidn2_CFLAGS = @libidn2_CFLAGS@
+libidn2_LIBS = @libidn2_LIBS@
+libknot_SONAME = @libknot_SONAME@
+libknot_SOVERSION = @libknot_SOVERSION@
+libknot_VERSION_INFO = @libknot_VERSION_INFO@
+libkqueue_CFLAGS = @libkqueue_CFLAGS@
+libkqueue_LIBS = @libkqueue_LIBS@
+libmaxminddb_CFLAGS = @libmaxminddb_CFLAGS@
+libmaxminddb_LIBS = @libmaxminddb_LIBS@
+libmnl_CFLAGS = @libmnl_CFLAGS@
+libmnl_LIBS = @libmnl_LIBS@
+libnghttp2_CFLAGS = @libnghttp2_CFLAGS@
+libnghttp2_LIBS = @libnghttp2_LIBS@
+libngtcp2_CFLAGS = @libngtcp2_CFLAGS@
+libngtcp2_LIBS = @libngtcp2_LIBS@
+libprotobuf_c_CFLAGS = @libprotobuf_c_CFLAGS@
+libprotobuf_c_LIBS = @libprotobuf_c_LIBS@
+liburcu_CFLAGS = @liburcu_CFLAGS@
+liburcu_LIBS = @liburcu_LIBS@
+libxdp_CFLAGS = @libxdp_CFLAGS@
+libxdp_LIBS = @libxdp_LIBS@
+libzscanner_SONAME = @libzscanner_SONAME@
+libzscanner_SOVERSION = @libzscanner_SOVERSION@
+libzscanner_VERSION_INFO = @libzscanner_VERSION_INFO@
+lmdb_CFLAGS = @lmdb_CFLAGS@
+lmdb_LIBS = @lmdb_LIBS@
+localedir = @localedir@
+localstatedir = @localstatedir@
+malloc_LIBS = @malloc_LIBS@
+mandir = @mandir@
+math_LIBS = @math_LIBS@
+mkdir_p = @mkdir_p@
+module_dir = @module_dir@
+module_instdir = @module_instdir@
+oldincludedir = @oldincludedir@
+pdfdir = @pdfdir@
+pkgconfigdir = @pkgconfigdir@
+prefix = @prefix@
+program_transform_name = @program_transform_name@
+psdir = @psdir@
+pthread_LIBS = @pthread_LIBS@
+run_dir = @run_dir@
+runstatedir = @runstatedir@
+sbindir = @sbindir@
+sharedstatedir = @sharedstatedir@
+srcdir = @srcdir@
+storage_dir = @storage_dir@
+sysconfdir = @sysconfdir@
+systemd_CFLAGS = @systemd_CFLAGS@
+systemd_LIBS = @systemd_LIBS@
+target_alias = @target_alias@
+top_build_prefix = @top_build_prefix@
+top_builddir = @top_builddir@
+top_srcdir = @top_srcdir@
+ACLOCAL_AMFLAGS = -I m4
+SUBDIRS = src tests tests-fuzz python samples distro doc
+EXTRA_DIST = README.md
+AM_DISTCHECK_CONFIGURE_FLAGS =
+CODE_COVERAGE_INFO = coverage.info
+CODE_COVERAGE_HTML = coverage.html
+CODE_COVERAGE_DIRS = \
+ src/contrib \
+ src/knot \
+ src/libdnssec \
+ src/libknot \
+ src/libzscanner
+
+code_coverage_quiet = --quiet
+all: all-recursive
+
+.SUFFIXES:
+am--refresh: Makefile
+ @:
+$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
+ @for dep in $?; do \
+ case '$(am__configure_deps)' in \
+ *$$dep*) \
+ echo ' cd $(srcdir) && $(AUTOMAKE) --foreign'; \
+ $(am__cd) $(srcdir) && $(AUTOMAKE) --foreign \
+ && exit 0; \
+ exit 1;; \
+ esac; \
+ done; \
+ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign Makefile'; \
+ $(am__cd) $(top_srcdir) && \
+ $(AUTOMAKE) --foreign Makefile
+Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
+ @case '$?' in \
+ *config.status*) \
+ echo ' $(SHELL) ./config.status'; \
+ $(SHELL) ./config.status;; \
+ *) \
+ echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles)'; \
+ cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles);; \
+ esac;
+
+$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
+ $(SHELL) ./config.status --recheck
+
+$(top_srcdir)/configure: $(am__configure_deps)
+ $(am__cd) $(srcdir) && $(AUTOCONF)
+$(ACLOCAL_M4): $(am__aclocal_m4_deps)
+ $(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS)
+$(am__aclocal_m4_deps):
+src/libknot/version.h: $(top_builddir)/config.status $(top_srcdir)/src/libknot/version.h.in
+ cd $(top_builddir) && $(SHELL) ./config.status $@
+src/libdnssec/version.h: $(top_builddir)/config.status $(top_srcdir)/src/libdnssec/version.h.in
+ cd $(top_builddir) && $(SHELL) ./config.status $@
+src/libzscanner/version.h: $(top_builddir)/config.status $(top_srcdir)/src/libzscanner/version.h.in
+ cd $(top_builddir) && $(SHELL) ./config.status $@
+Doxyfile: $(top_builddir)/config.status $(srcdir)/Doxyfile.in
+ cd $(top_builddir) && $(SHELL) ./config.status $@
+python/libknot/libknot/__init__.py: $(top_builddir)/config.status $(top_srcdir)/python/libknot/libknot/__init__.py.in
+ cd $(top_builddir) && $(SHELL) ./config.status $@
+src/knot/modules/static_modules.h: $(top_builddir)/config.status $(top_srcdir)/src/knot/modules/static_modules.h.in
+ cd $(top_builddir) && $(SHELL) ./config.status $@
+
+mostlyclean-libtool:
+ -rm -f *.lo
+
+clean-libtool:
+ -rm -rf .libs _libs
+
+distclean-libtool:
+ -rm -f libtool config.lt
+
+# This directory's subdirectories are mostly independent; you can cd
+# into them and run 'make' without going through this Makefile.
+# To change the values of 'make' variables: instead of editing Makefiles,
+# (1) if the variable is set in 'config.status', edit 'config.status'
+# (which will cause the Makefiles to be regenerated when you run 'make');
+# (2) otherwise, pass the desired values on the 'make' command line.
+$(am__recursive_targets):
+ @fail=; \
+ if $(am__make_keepgoing); then \
+ failcom='fail=yes'; \
+ else \
+ failcom='exit 1'; \
+ fi; \
+ dot_seen=no; \
+ target=`echo $@ | sed s/-recursive//`; \
+ case "$@" in \
+ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \
+ *) list='$(SUBDIRS)' ;; \
+ esac; \
+ for subdir in $$list; do \
+ echo "Making $$target in $$subdir"; \
+ if test "$$subdir" = "."; then \
+ dot_seen=yes; \
+ local_target="$$target-am"; \
+ else \
+ local_target="$$target"; \
+ fi; \
+ ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
+ || eval $$failcom; \
+ done; \
+ if test "$$dot_seen" = "no"; then \
+ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \
+ fi; test -z "$$fail"
+
+ID: $(am__tagged_files)
+ $(am__define_uniq_tagged_files); mkid -fID $$unique
+tags: tags-recursive
+TAGS: tags
+
+tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
+ set x; \
+ here=`pwd`; \
+ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \
+ include_option=--etags-include; \
+ empty_fix=.; \
+ else \
+ include_option=--include; \
+ empty_fix=; \
+ fi; \
+ list='$(SUBDIRS)'; for subdir in $$list; do \
+ if test "$$subdir" = .; then :; else \
+ test ! -f $$subdir/TAGS || \
+ set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \
+ fi; \
+ done; \
+ $(am__define_uniq_tagged_files); \
+ shift; \
+ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \
+ test -n "$$unique" || unique=$$empty_fix; \
+ if test $$# -gt 0; then \
+ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
+ "$$@" $$unique; \
+ else \
+ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
+ $$unique; \
+ fi; \
+ fi
+ctags: ctags-recursive
+
+CTAGS: ctags
+ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
+ $(am__define_uniq_tagged_files); \
+ test -z "$(CTAGS_ARGS)$$unique" \
+ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
+ $$unique
+
+GTAGS:
+ here=`$(am__cd) $(top_builddir) && pwd` \
+ && $(am__cd) $(top_srcdir) \
+ && gtags -i $(GTAGS_ARGS) "$$here"
+cscope: cscope.files
+ test ! -s cscope.files \
+ || $(CSCOPE) -b -q $(AM_CSCOPEFLAGS) $(CSCOPEFLAGS) -i cscope.files $(CSCOPE_ARGS)
+clean-cscope:
+ -rm -f cscope.files
+cscope.files: clean-cscope cscopelist
+cscopelist: cscopelist-recursive
+
+cscopelist-am: $(am__tagged_files)
+ list='$(am__tagged_files)'; \
+ case "$(srcdir)" in \
+ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \
+ *) sdir=$(subdir)/$(srcdir) ;; \
+ esac; \
+ for i in $$list; do \
+ if test -f "$$i"; then \
+ echo "$(subdir)/$$i"; \
+ else \
+ echo "$$sdir/$$i"; \
+ fi; \
+ done >> $(top_builddir)/cscope.files
+
+distclean-tags:
+ -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
+ -rm -f cscope.out cscope.in.out cscope.po.out cscope.files
+distdir: $(BUILT_SOURCES)
+ $(MAKE) $(AM_MAKEFLAGS) distdir-am
+
+distdir-am: $(DISTFILES)
+ $(am__remove_distdir)
+ test -d "$(distdir)" || mkdir "$(distdir)"
+ @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ list='$(DISTFILES)'; \
+ dist_files=`for file in $$list; do echo $$file; done | \
+ sed -e "s|^$$srcdirstrip/||;t" \
+ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
+ case $$dist_files in \
+ */*) $(MKDIR_P) `echo "$$dist_files" | \
+ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
+ sort -u` ;; \
+ esac; \
+ for file in $$dist_files; do \
+ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
+ if test -d $$d/$$file; then \
+ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
+ if test -d "$(distdir)/$$file"; then \
+ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
+ fi; \
+ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
+ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
+ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
+ fi; \
+ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
+ else \
+ test -f "$(distdir)/$$file" \
+ || cp -p $$d/$$file "$(distdir)/$$file" \
+ || exit 1; \
+ fi; \
+ done
+ @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \
+ if test "$$subdir" = .; then :; else \
+ $(am__make_dryrun) \
+ || test -d "$(distdir)/$$subdir" \
+ || $(MKDIR_P) "$(distdir)/$$subdir" \
+ || exit 1; \
+ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \
+ $(am__relativize); \
+ new_distdir=$$reldir; \
+ dir1=$$subdir; dir2="$(top_distdir)"; \
+ $(am__relativize); \
+ new_top_distdir=$$reldir; \
+ echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \
+ echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \
+ ($(am__cd) $$subdir && \
+ $(MAKE) $(AM_MAKEFLAGS) \
+ top_distdir="$$new_top_distdir" \
+ distdir="$$new_distdir" \
+ am__remove_distdir=: \
+ am__skip_length_check=: \
+ am__skip_mode_fix=: \
+ distdir) \
+ || exit 1; \
+ fi; \
+ done
+ -test -n "$(am__skip_mode_fix)" \
+ || find "$(distdir)" -type d ! -perm -755 \
+ -exec chmod u+rwx,go+rx {} \; -o \
+ ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
+ ! -type d ! -perm -400 -exec chmod a+r {} \; -o \
+ ! -type d ! -perm -444 -exec $(install_sh) -c -m a+r {} {} \; \
+ || chmod -R a+r "$(distdir)"
+dist-gzip: distdir
+ tardir=$(distdir) && $(am__tar) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).tar.gz
+ $(am__post_remove_distdir)
+
+dist-bzip2: distdir
+ tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2
+ $(am__post_remove_distdir)
+
+dist-lzip: distdir
+ tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz
+ $(am__post_remove_distdir)
+dist-xz: distdir
+ tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz
+ $(am__post_remove_distdir)
+
+dist-zstd: distdir
+ tardir=$(distdir) && $(am__tar) | zstd -c $${ZSTD_CLEVEL-$${ZSTD_OPT--19}} >$(distdir).tar.zst
+ $(am__post_remove_distdir)
+
+dist-tarZ: distdir
+ @echo WARNING: "Support for distribution archives compressed with" \
+ "legacy program 'compress' is deprecated." >&2
+ @echo WARNING: "It will be removed altogether in Automake 2.0" >&2
+ tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z
+ $(am__post_remove_distdir)
+
+dist-shar: distdir
+ @echo WARNING: "Support for shar distribution archives is" \
+ "deprecated." >&2
+ @echo WARNING: "It will be removed altogether in Automake 2.0" >&2
+ shar $(distdir) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).shar.gz
+ $(am__post_remove_distdir)
+
+dist-zip: distdir
+ -rm -f $(distdir).zip
+ zip -rq $(distdir).zip $(distdir)
+ $(am__post_remove_distdir)
+
+dist dist-all:
+ $(MAKE) $(AM_MAKEFLAGS) $(DIST_TARGETS) am__post_remove_distdir='@:'
+ $(am__post_remove_distdir)
+
+# This target untars the dist file and tries a VPATH configuration. Then
+# it guarantees that the distribution is self-contained by making another
+# tarfile.
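+# Illustrative usage: run `make distcheck` from the top build directory; extra
+# ./configure arguments for the inner VPATH build can be supplied via the
+# DISTCHECK_CONFIGURE_FLAGS variable used below.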
+distcheck: dist
+ case '$(DIST_ARCHIVES)' in \
+ *.tar.gz*) \
+ eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).tar.gz | $(am__untar) ;;\
+ *.tar.bz2*) \
+ bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\
+ *.tar.lz*) \
+ lzip -dc $(distdir).tar.lz | $(am__untar) ;;\
+ *.tar.xz*) \
+ xz -dc $(distdir).tar.xz | $(am__untar) ;;\
+ *.tar.Z*) \
+ uncompress -c $(distdir).tar.Z | $(am__untar) ;;\
+ *.shar.gz*) \
+ eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).shar.gz | unshar ;;\
+ *.zip*) \
+ unzip $(distdir).zip ;;\
+ *.tar.zst*) \
+ zstd -dc $(distdir).tar.zst | $(am__untar) ;;\
+ esac
+ chmod -R a-w $(distdir)
+ chmod u+w $(distdir)
+ mkdir $(distdir)/_build $(distdir)/_build/sub $(distdir)/_inst
+ chmod a-w $(distdir)
+ test -d $(distdir)/_build || exit 0; \
+ dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \
+ && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \
+ && am__cwd=`pwd` \
+ && $(am__cd) $(distdir)/_build/sub \
+ && ../../configure \
+ $(AM_DISTCHECK_CONFIGURE_FLAGS) \
+ $(DISTCHECK_CONFIGURE_FLAGS) \
+ --srcdir=../.. --prefix="$$dc_install_base" \
+ && $(MAKE) $(AM_MAKEFLAGS) \
+ && $(MAKE) $(AM_MAKEFLAGS) $(AM_DISTCHECK_DVI_TARGET) \
+ && $(MAKE) $(AM_MAKEFLAGS) check \
+ && $(MAKE) $(AM_MAKEFLAGS) install \
+ && $(MAKE) $(AM_MAKEFLAGS) installcheck \
+ && $(MAKE) $(AM_MAKEFLAGS) uninstall \
+ && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \
+ distuninstallcheck \
+ && chmod -R a-w "$$dc_install_base" \
+ && ({ \
+ (cd ../.. && umask 077 && mkdir "$$dc_destdir") \
+ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \
+ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \
+ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \
+ distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \
+ } || { rm -rf "$$dc_destdir"; exit 1; }) \
+ && rm -rf "$$dc_destdir" \
+ && $(MAKE) $(AM_MAKEFLAGS) dist \
+ && rm -rf $(DIST_ARCHIVES) \
+ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck \
+ && cd "$$am__cwd" \
+ || exit 1
+ $(am__post_remove_distdir)
+ @(echo "$(distdir) archives ready for distribution: "; \
+ list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \
+ sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x'
+distuninstallcheck:
+ @test -n '$(distuninstallcheck_dir)' || { \
+ echo 'ERROR: trying to run $@ with an empty' \
+ '$$(distuninstallcheck_dir)' >&2; \
+ exit 1; \
+ }; \
+ $(am__cd) '$(distuninstallcheck_dir)' || { \
+ echo 'ERROR: cannot chdir into $(distuninstallcheck_dir)' >&2; \
+ exit 1; \
+ }; \
+ test `$(am__distuninstallcheck_listfiles) | wc -l` -eq 0 \
+ || { echo "ERROR: files left after uninstall:" ; \
+ if test -n "$(DESTDIR)"; then \
+ echo " (check DESTDIR support)"; \
+ fi ; \
+ $(distuninstallcheck_listfiles) ; \
+ exit 1; } >&2
+distcleancheck: distclean
+ @if test '$(srcdir)' = . ; then \
+ echo "ERROR: distcleancheck can only run from a VPATH build" ; \
+ exit 1 ; \
+ fi
+ @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \
+ || { echo "ERROR: files left in build directory after distclean:" ; \
+ $(distcleancheck_listfiles) ; \
+ exit 1; } >&2
+check-am: all-am
+check: check-recursive
+all-am: Makefile
+installdirs: installdirs-recursive
+installdirs-am:
+install: install-recursive
+install-exec: install-exec-recursive
+install-data: install-data-recursive
+uninstall: uninstall-recursive
+
+install-am: all-am
+ @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
+
+installcheck: installcheck-recursive
+install-strip:
+ if test -z '$(STRIP)'; then \
+ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
+ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
+ install; \
+ else \
+ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
+ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
+ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
+ fi
+mostlyclean-generic:
+
+clean-generic:
+
+distclean-generic:
+ -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
+ -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
+
+maintainer-clean-generic:
+ @echo "This command is intended for maintainers to use"
+ @echo "it deletes files that may require special tools to rebuild."
+@CODE_COVERAGE_ENABLED_FALSE@clean-local:
+clean: clean-recursive
+
+clean-am: clean-generic clean-libtool clean-local mostlyclean-am
+
+distclean: distclean-recursive
+ -rm -f $(am__CONFIG_DISTCLEAN_FILES)
+ -rm -f Makefile
+distclean-am: clean-am distclean-generic distclean-libtool \
+ distclean-tags
+
+dvi: dvi-recursive
+
+dvi-am:
+
+html: html-recursive
+
+html-am:
+
+info: info-recursive
+
+info-am:
+
+install-data-am:
+
+install-dvi: install-dvi-recursive
+
+install-dvi-am:
+
+install-exec-am:
+
+install-html: install-html-recursive
+
+install-html-am:
+
+install-info: install-info-recursive
+
+install-info-am:
+
+install-man:
+
+install-pdf: install-pdf-recursive
+
+install-pdf-am:
+
+install-ps: install-ps-recursive
+
+install-ps-am:
+
+installcheck-am:
+
+maintainer-clean: maintainer-clean-recursive
+ -rm -f $(am__CONFIG_DISTCLEAN_FILES)
+ -rm -rf $(top_srcdir)/autom4te.cache
+ -rm -f Makefile
+maintainer-clean-am: distclean-am maintainer-clean-generic
+
+mostlyclean: mostlyclean-recursive
+
+mostlyclean-am: mostlyclean-generic mostlyclean-libtool
+
+pdf: pdf-recursive
+
+pdf-am:
+
+ps: ps-recursive
+
+ps-am:
+
+uninstall-am:
+
+.MAKE: $(am__recursive_targets) install-am install-strip
+
+.PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am \
+ am--refresh check check-am clean clean-cscope clean-generic \
+ clean-libtool clean-local cscope cscopelist-am ctags ctags-am \
+ dist dist-all dist-bzip2 dist-gzip dist-lzip dist-shar \
+ dist-tarZ dist-xz dist-zip dist-zstd distcheck distclean \
+ distclean-generic distclean-libtool distclean-tags \
+ distcleancheck distdir distuninstallcheck dvi dvi-am html \
+ html-am info info-am install install-am install-data \
+ install-data-am install-dvi install-dvi-am install-exec \
+ install-exec-am install-html install-html-am install-info \
+ install-info-am install-man install-pdf install-pdf-am \
+ install-ps install-ps-am install-strip installcheck \
+ installcheck-am installdirs installdirs-am maintainer-clean \
+ maintainer-clean-generic mostlyclean mostlyclean-generic \
+ mostlyclean-libtool pdf pdf-am ps ps-am tags tags-am uninstall \
+ uninstall-am
+
+.PRECIOUS: Makefile
+
+
+.PHONY: singlehtml epub install-singlehtml install-epub
+singlehtml install-singlehtml epub install-epub:
+ $(MAKE) -C doc $@
+
+.PHONY: check-compile
+check-compile:
+ $(MAKE) $(AM_MAKEFLAGS) -C tests $@
+ $(MAKE) $(AM_MAKEFLAGS) -C tests-fuzz $@
+
+check-code-coverage:
+@CODE_COVERAGE_ENABLED_TRUE@ $(MAKE) $(AM_MAKEFLAGS) code-coverage-initial
+@CODE_COVERAGE_ENABLED_TRUE@ -$(MAKE) $(AM_MAKEFLAGS) -k check
+@CODE_COVERAGE_ENABLED_TRUE@ $(MAKE) $(AM_MAKEFLAGS) code-coverage-capture
+@CODE_COVERAGE_ENABLED_TRUE@ $(MAKE) $(AM_MAKEFLAGS) code-coverage-html
+@CODE_COVERAGE_ENABLED_TRUE@ $(MAKE) $(AM_MAKEFLAGS) code-coverage-summary
+@CODE_COVERAGE_ENABLED_FALSE@ @echo "You need to run configure with --enable-code-coverage to enable code coverage"
+
+code-coverage-initial:
+@CODE_COVERAGE_ENABLED_TRUE@ $(LCOV) $(code_coverage_quiet) \
+@CODE_COVERAGE_ENABLED_TRUE@ --no-external \
+@CODE_COVERAGE_ENABLED_TRUE@ $(foreach dir, $(CODE_COVERAGE_DIRS), --directory $(top_builddir)/$(dir)) \
+@CODE_COVERAGE_ENABLED_TRUE@ --capture --initial \
+@CODE_COVERAGE_ENABLED_TRUE@ --ignore-errors source \
+@CODE_COVERAGE_ENABLED_TRUE@ --no-checksum \
+@CODE_COVERAGE_ENABLED_TRUE@ --compat-libtool \
+@CODE_COVERAGE_ENABLED_TRUE@ --output-file $(CODE_COVERAGE_INFO)
+@CODE_COVERAGE_ENABLED_FALSE@ @echo "You need to run configure with --enable-code-coverage to enable code coverage"
+
+code-coverage-capture:
+@CODE_COVERAGE_ENABLED_TRUE@ $(LCOV) $(code_coverage_quiet) \
+@CODE_COVERAGE_ENABLED_TRUE@ --no-external \
+@CODE_COVERAGE_ENABLED_TRUE@ $(foreach dir, $(CODE_COVERAGE_DIRS), --directory $(builddir)/$(dir)) \
+@CODE_COVERAGE_ENABLED_TRUE@ --capture \
+@CODE_COVERAGE_ENABLED_TRUE@ --ignore-errors source \
+@CODE_COVERAGE_ENABLED_TRUE@ --no-checksum \
+@CODE_COVERAGE_ENABLED_TRUE@ --compat-libtool \
+@CODE_COVERAGE_ENABLED_TRUE@ --output-file $(CODE_COVERAGE_INFO)
+@CODE_COVERAGE_ENABLED_FALSE@ @echo "You need to run configure with --enable-code-coverage to enable code coverage"
+
+code-coverage-html:
+@CODE_COVERAGE_ENABLED_TRUE@ @echo "Generating code coverage HTML report (this might take a while)"
+@CODE_COVERAGE_ENABLED_TRUE@ LANG=C $(GENHTML) $(code_coverage_quiet) \
+@CODE_COVERAGE_ENABLED_TRUE@ --output-directory $(CODE_COVERAGE_HTML) \
+@CODE_COVERAGE_ENABLED_TRUE@ --title "Knot DNS $(PACKAGE_VERSION) Code Coverage" \
+@CODE_COVERAGE_ENABLED_TRUE@ --legend --show-details \
+@CODE_COVERAGE_ENABLED_TRUE@ --ignore-errors source \
+@CODE_COVERAGE_ENABLED_TRUE@ $(CODE_COVERAGE_INFO)
+@CODE_COVERAGE_ENABLED_FALSE@ @echo "You need to run configure with --enable-code-coverage to enable code coverage"
+
+code-coverage-summary:
+@CODE_COVERAGE_ENABLED_TRUE@ $(LCOV) \
+@CODE_COVERAGE_ENABLED_TRUE@ --summary $(CODE_COVERAGE_INFO)
+@CODE_COVERAGE_ENABLED_FALSE@ @echo "You need to run configure with --enable-code-coverage to enable code coverage"
+
+@CODE_COVERAGE_ENABLED_TRUE@clean-local: code-coverage-clean
+@CODE_COVERAGE_ENABLED_TRUE@ -find . -name "*.gcno" -delete
+@CODE_COVERAGE_ENABLED_TRUE@code-coverage-clean:
+@CODE_COVERAGE_ENABLED_TRUE@ -$(LCOV) --directory $(top_builddir) -z
+@CODE_COVERAGE_ENABLED_TRUE@ -rm -rf $(CODE_COVERAGE_INFO) $(CODE_COVERAGE_HTML)
+@CODE_COVERAGE_ENABLED_TRUE@ -find . -name "*.gcda" -o -name "*.gcov" -delete
+
+.PHONY: check-code-coverage code-coverage-initial code-coverage-capture code-coverage-html code-coverage-summary code-coverage-clean
+.NOTPARALLEL: clean
+
+# Tell versions [3.59,3.63) of GNU make to not export all variables.
+# Otherwise a system limit (for SysV at least) may be exceeded.
+.NOEXPORT:
diff --git a/NEWS b/NEWS
new file mode 100644
index 0000000..45cd2f6
--- /dev/null
+++ b/NEWS
@@ -0,0 +1,3537 @@
+Knot DNS 3.4.6 (2025-04-10)
+===========================
+
+Improvements:
+-------------
+ - knotd: default TSIG algorithm is now 'hmac-sha256'
+ - knotd: added zone expiration info to the failed zone refresh log
+ - knotd: reverse record generation now accepts multiple forward zones to be reversed
+ - keymgr: underscores are now tolerated instead of dashes in command names
+ - keymgr: correct mnemonic 'rsasha1-nsec3-sha1' is used instead of 'rsasha1nsec3sha1'
+ - kdig: new '+[no]doflag' alias for '+[no]dnssec' #952
+ - kdig: documented default option values #951
+ - kxdpgun: extended JSON output with some packet statistics
+ - doc: various updates and improvements
+
+Bugfixes:
+---------
+ - knotd: failed to stop the server if 'dbus-event: running' is set
+ - knotd: TLS 0-RTT not working if compiled with the QUIC support
+ - knotd: TLS handshake fails on FreeBSD
+ - knotd: outbound QUIC communication fails on FreeBSD
+ - knotd: KSK submission not ignored in the manual key management mode
+ - knotd: failed to bind to a UNIX socket on recent Linux kernels
+ - kzonecheck: failed to check non-trivial zones through standard input
+
+Knot DNS 3.4.5 (2025-03-18)
+===========================
+
+Features:
+---------
+ - knotd: support for SOA serial shift (see 'serial-modulo')
+ - knotd: new server statistics (see 'tcp-io-timeout' and 'tcp-idle-timeout')
+
+Improvements:
+-------------
+ - knotd: better signing performance of many zones in parallel by
+ moving 'last_signed_serial' from KASP database to timer database
+ - knotd: the 'terminated inactive client' TCP log moved to debug level
+ - knotd: allowed initial DDNS to an empty zone
+ - knotd: extended backup and flush argument checks
+ - knotd: new debug logs for zone events suspension
+ - libs: upgraded embedded libngtcp2 to 1.11.0
+ - doc: new section Multi-primary, updates
+
+Bugfixes:
+---------
+ - libdnssec: inappropriate DNSKEY flags evaluation
+ - libknot: incorrect VLAN map size calculation for XDP
+
+Knot DNS 3.4.4 (2025-01-22)
+===========================
+
+Features:
+---------
+ - knotd: added support for EDNS ZONEVERSION
+ - kdig: added support for EDNS ZONEVERSION (see '+zoneversion')
+
+Improvements:
+-------------
+ - knotd: improved control error detection and reporting
+ - kdig: proper section names for exported DDNS messages
+ - libs: upgraded embedded libngtcp2 to 1.10.0
+ - python: expanded documentation for the libknot control API
+ - doc: updated XDP prerequisites
+
+Bugfixes:
+---------
+ - knotd: a DNAME record at the zone apex with active NSEC3 not accepted via XFR
+ - knotd: configuration abort times out if no active transaction
+ - knotd: defective serial modulo result if it overflows
+ - knotd: TLS connections not properly terminated
+ - knotd: maximum zone TTL not correctly recomputed after RRSIG TTL change
+ - knotd: zone hangs if zone reload fails (Thanks to solidcc2)
+ - knotd: statistics dump generates invalid YAML output if XDP is enabled #947
+ - knotd: insufficient check for incomplete control message
+ - mod-dnstap: used incorrect type for DDNS messages
+ - knot-exporter: failed to run with Python 3.11 or older
+ - tests: test_atomic and test_spinlock require building with the daemon enabled #946
+
+Knot DNS 3.4.3 (2024-12-06)
+===========================
+
+Improvements:
+-------------
+ - knotd: improved processing of QNAMEs containing zero bytes
+ - knotd: zone expiration now aborts possible zone control transaction #929
+ - knotd: generated catalog member metadata is stored when the zone is loaded
+ - knotd: new configuration check for using default NSEC3 salt length, which will change
+ - mod-rrl: added QNAME (if possible) and transport protocol to log messages
+ - mod-rrl: increased defaults for 'log-period' to 30 secs, 'rate-limit' to 50,
+ 'instant-rate-limit' to 125, and 'time-rate-limit' to 5 ms
+ - kxdpgun: added space separators to some printed values for better readability
+ - libs: upgraded embedded libngtcp2 to 1.9.1
+ - knot-exporter: zone timers metric is now disabled by default (see '--zone-timers')
+ - packaging: added build dependency softhsm for PKCS #11 testing on RPM distributions
+ - doc: updated description of DNSSEC key management and module RRL
+
+Bugfixes:
+---------
+ - knotd: more active ZSKs cause cumulative ZSK rollovers
+ - knotd: zone purge clears active generated catalog member metadata
+ - mod-rrl: authorized requests are rate limited #943
+ - kdig: misleading warning about timeout during QUIC connection
+ - keymgr: public-only keys are marked as missing in the list output
+
+Knot DNS 3.4.2 (2024-10-31)
+===========================
+
+Improvements:
+-------------
+ - knotd: new warning log upon every incremental update if previous zone signing failed
+ - mod-cookies: support for two secret values specification
+ - keymgr: key pregenerate works even when a KSK exists
+ - libs: upgraded embedded libngtcp2 to 1.8.1
+
+Bugfixes:
+---------
+ - knotd: server can crash when processing just a terminal label as QNAME
+ - knotd: failed to compile if no atomic operations available
+ - kjournalprint: failed to merge zone-in-journal if followed by a non-first changeset
+ - knot-exporter: faulty escape sequence in time value parsing
+ - knot-exporter: failed to parse zone-status output
+ - kxdpgun: periodic statistics don't work correctly for longer time periods
+
+Knot DNS 3.4.1 (2024-10-14)
+===========================
+
+Features:
+---------
+ - knotd: ACL configuration allows protocol specification (see 'acl.protocol')
+ - knotc: support for benevolent zone updates (see zone-begin with '+benevolent')
+ - knotd: implemented TLS session resumption
+ - kjournalprint: added print merged changesets mode (see '-M')
+ - libknot: added NXNAME meta type (Thanks to Jan Včelák)
+
+Improvements:
+-------------
+ - knotd: DNSKEY synchronization event logs removed/added *DNSKEYs
+ - knotd: control command log message contains filters and flags in the debug mode
+ - knotc: zone status prints running, pending, and frozen duration
+ - knotd,knotc: unification of control flags and filters
+ - keymgr: key listing reports configured keys that are inaccessible
+ - libs: upgraded embedded libngtcp2 to 1.8.0
+ - doc: various fixes and updates
+
+Bugfixes:
+---------
+ - knotd: missing support for IPv6 link local address configuration
+ - knotd: zone reload occasionally causes a core dump #939 (Thanks to solidcc2)
+ - knotd: race condition in DDNS over QUIC processing
+ - knotd: imperfect signal handling on some auxiliary threads
+ - knotd: EDNS EXPIRE not updated when zone signing results in up-to-date
+ - knotd: failed to reload autogenerated QUIC/TLS key after process ownership change
+ - knotc: zone backup filter +keysonly doesn't disable other defaults
+ - kxdpgun: failed to receive more data over QUIC until 1-RTT handshake is done
+ - knsupdate: memory leak if rdata parsing fails
+ - doc: failed to install manual pages from a tarball
+ - Dockerfile: TCP port 853 not exposed for DoT
+
+Knot DNS 3.4.0 (2024-09-02)
+===========================
+
+Features:
+---------
+ - knotd: full DNS over TLS (DoT, RFC 7858) implementation (see 'DNS over TLS')
+ - knotd: bidirectional XFR over TLS (XoT) support with opportunistic, strict,
+ and mutual authentication profiles
+ - knotd: support for DDNS over QUIC and TLS
+ - knotd: DNSSEC validation requires that the remaining RRSIG validity is longer than 'rrsig-refresh'
+ - knotd: new event for automatic DNSSEC revalidation
+ - knotd: if DNSSEC signing is enabled, EDNS expire is adjusted to the earliest RRSIG expiration
+ - knotd: added support for libdbus as an alternative to systemd dbus
+ (see '--enable-dbus=libdbus' configure parameter)
+ - knotd: new XDP-related configuration options
+ (see 'xdp.ring-size', 'xdp.busypoll-budget', and 'xdp.busypoll-timeout')
+ - knotc: new command for explicitly triggering DNSSEC validation (see 'zone-validate' command)
+ - keymgr: SKR verification requires that the end of DNSKEY RRSIG validity covers the next DNSKEY snapshot
+ - kdig: +nocrypto applies also to CERT, DS, SSHFP, DHCID, TLSA, ZONEMD, and TSIG
+ - knsupdate: added support for DDNS over QUIC and TLS (see '-Q' and '-S' parameters)
+ - kxdpgun: support for reading a binary input file (see '-B' parameter)
+ - kxdpgun: support for output in JSON (see '-j' parameter)
+ - kxdpgun: support for periodical output (see '-S' parameter)
+ - mod-rrl: module offers limiting of non-UDP protocols based on consumed time
+ (see 'mod-rrl.time-rate-limit' and 'mod-rrl.time-instant-limit')
+ - utils: -VV option for listing compile time configuration summary
+
+Improvements:
+-------------
+ - knotd: up to eight DDNS queries can be queued per zone when frozen
+ - knotd: the number of created/validated RRSIGs is logged
+ - knotd: overhaul of atomic operations usage
+ - knotd: unified DNAME semantic errors with the CNAME ones
+ (see 'Handling CNAME and DNAME-related updates')
+ - knotd: better DDNS pre-check to prevent dropping a bulk of updates
+ - knotd: extended SOA presence semantic checks
+ - knotd: disallowed concurrent control zone and config transactions to avoid deadlock
+ - knotd: disallowed opening zone transaction when blocking command is running to avoid deadlock
+ - knotd: new XDP statistic counters
+ - knotd: remote zone serial is logged upon received incoming transfer
+ - knotd: zone backup stores and zone restore checks the CPU architecture compatibility
+ - knotd: time configuration options support 'w', 'M', and 'y' units
+ - knotd: some control commands can be processed asynchronously
+ - knotc: zone backup overwrites already existing backupdir in the force mode
+ - kdig: EDNS is enabled by default
+ - kdig: the default EDNS payload size was lowered to 1232
+ - mod-rrl: completely reimplemented UDP rate limiting using an efficient
+ query-counting mechanism on several address prefix lengths
+ - mod-rrl: module no longer requires explicit configuration
+ - libknot: various XDP improvements and new configuration parameters
+ - docker: increased -D_FORTIFY_SOURCE to 3
+
+Bugfixes:
+---------
+ - knotd: deadlock during zone-ksk-submitted processing of a frozen zone
+ - kxdpgun: race condition in SIGUSR1 signal processing
+ - doc: parallel build is unreliable #928
+
+Compatibility:
+--------------
+ - configure: increase minimal GnuTLS version to 3.6.10
+ - configure: removed deprecated libidn 1 support
+ - configure: removed liburcu search fallback
+ - configure: required GCC or LLVM Clang compiler with C11 support
+ - knotd: removed already ignored obsolete configuration options
+ - keymgr: removed legacy parameter '--brief'
+ - kjournalprint: removed legacy parameter '--no-color'
+ - kjournalprint: removed legacy database specification without '--dir'
+ - kcatalogprint: removed legacy database specification without '--dir'
+ - packaging: CentOS 7, Debian 10, and Ubuntu 18.04 no longer supported
+ - doc: removed info pages
+
+Knot DNS 3.3.9 (2024-08-26)
+===========================
+
+Improvements:
+-------------
+ - libknot: added EDE code 30
+ - libknot: improved performance of knot_rrset_to_wire_extra()
+ - libs: upgraded embedded libngtcp2 to 1.7.0
+ - doc: various fixes and updates
+
+Bugfixes:
+---------
+ - keymgr: pregenerate clears future timestamps of old keys and creates new keys
+ - mod-dnsproxy: defective TSIG processing
+ - mod-dnsproxy: TCP not detected in the XDP mode
+ - kxdpgun: unsuccessful interface initialization leaks memory
+ - packaging: libknot not installed with python3-libknot
+
+Knot DNS 3.3.8 (2024-07-22)
+===========================
+
+Features:
+---------
+ - libzscanner,libknot: added support for 'dohpath' and 'ohttp' SVCB parameters
+ - libzscanner,libknot: added support for WALLET rrtype
+ - keymgr: new commands for keystore testing (see 'keystore-test' and 'keystore-bench')
+ - knotd: new configuration option for setting default TTL (see 'zone.default-ttl')
+
+Improvements:
+-------------
+ - libknot: added error codes to better describe some failures
+
+Bugfixes:
+---------
+ - knotd: DNSSEC signing doesn't remove NSEC records for non-authoritative nodes
+ - knotd: DNSSEC signing not scheduled on secondary if nothing to be reloaded
+ - libknot: TCP over XDP doesn't ignore SYN+ACK packets on the server side
+
+Knot DNS 3.3.7 (2024-06-25)
+===========================
+
+Improvements:
+-------------
+ - libs: upgraded embedded libngtcp2 to 1.6.0
+
+Bugfixes:
+---------
+ - knotd: insufficient metadata check can cause journal corruption
+ - knotd: missing zone timers initialization upon purge
+ - knotd: missing RCU lock in zone flush and refresh
+ - knotd: defective assert in zone refresh
+
+Knot DNS 3.3.6 (2024-06-12)
+===========================
+
+Features:
+---------
+ - knotd: configurable control socket backlog size (see 'control.backlog')
+ - knotd: optional configuration of congruency of generated keytags (see 'policy.keytag-modulo')
+ - knotc: support for exporting configuration schema in JSON (see 'conf-export') #912
+ - mod-dnstap: configuration of sink allows TCP address specification
+
+Improvements:
+-------------
+ - knotd: last-signed serial is stored to KASP even if not a secondary zone
+ - knotd: allowed catalog role member in a catalog template configuration
+ - knotd: some references in a zone configuration can be set empty to override a template
+ - knotd: allowed zone backup during a zone transaction
+ - knotd: add remote TSIG key name to outgoing event logs
+ - knotc: zone backup with '+keysonly' silently uses all defaults as 'off'
+ - kxdpgun: host name can be used for target specification
+ - libs: upgraded embedded libngtcp2 to 1.5.0
+ - doc: various fixes and updates
+
+Bugfixes:
+---------
+ - knotd: reset TCP connection not removed from a connection pool
+ - knotd: server wrongly tries to remove removed ZONEMD
+ - knotd: failed to parse empty list from a textual configuration
+ - knotd: blocking zone signing in combination with an open transaction causes a deadlock
+ - knotd: missing RCU lock when sending NOTIFY
+ - kdig: QNAME letter case isn't preserved if IDN is enabled
+ - kdig: failed to parse empty QNAME (do not fill question section)
+ - kxdpgun: floating point exception on SIGUSR1 #927
+ - libknot: incorrect handling of regular QUIC tokens in incoming initials
+ - python: failed to set an empty configuration value
+
+Knot DNS 3.3.5 (2024-03-06)
+===========================
+
+Features:
+---------
+ - knotd: new module mod-authsignal for automatic authenticated DNSSEC
+ bootstrapping records synthesis (Thanks to Peter Thomassen)
+ - kzonecheck: new optional ZONEMD verification (see option '-z')
+
+Improvements:
+-------------
+ - knotd: new DNSSEC key rollover log informs about next planned key action
+ - knotd, kzonecheck: added limit on non-matching keys with a duplicate keytag
+ - knot-exporter: added counter-type variant for each metric (Thanks to Marcel Koch)
+ - libs: upgraded embedded libngtcp2 to 1.3.0
+ - doc: various fixes and updates
+
+Bugfixes:
+---------
+ - knotd, kzonecheck: failed to validate RRSIG if there are more keys with the same keytag
+ - knotd, kzonecheck: failed to validate zone with more CSK keys
+ - libknot: insufficient check for malformed TCP header options over XDP
+ - libzscanner: incorrect alpn processing #923
+
+Knot DNS 3.3.4 (2024-01-24)
+===========================
+
+Features:
+---------
+ - knotd: new configuration item for clearing configuration sections (see 'clear')
+ - knotc: configuration import can preserve database contents (see '+nopurge' flag)
+ - kxdpgun: new parameter for setting UDP payload size in EDNS (see '--edns-size') #915
+
+Improvements:
+-------------
+ - knotd: extended configuration check for 'zonefile-load' and 'journal-content'
+ - knotd: lowered check limit for additional NSEC3 iterations to 0
+ - knotd: lowered severity level of an informational backup log
+ - knotd: better log message when flushing the journal
+ - knotd: zone restore checks if requested contents are in the provided backup
+ - knotc: '+quic' is default for zone backup, '+noquic' is default for zone restore
+ - kdig: better processing of timeouts and reduced sent datagrams over QUIC
+ - kdig: no retries are attempted over QUIC
+ - keymgr: improved compatibility with bind9-generated keys
+ - libs: some improvements in XDP buffer allocation
+ - libs: upgraded embedded libngtcp2 to 1.2.0
+ - doc: various fixes and updates
+
+Bugfixes:
+---------
+ - knotd: failed to build on macOS #909
+ - knotd: 'nsec3-salt-lifetime: -1' doesn't work if 'ixfr-from-axfr' is enabled
+ - knotd: unnecessarily updated RRSIGs if 'ixfr-from-axfr' and signing are enabled
+ - knotc: zone check complains about missing zone file #913
+ - kdig: failed to try another target address over QUIC
+ - libknot: infinite loop in knot_rrset_to_wire_extra() #916
+
+Knot DNS 3.3.3 (2023-12-13)
+===========================
+
+Features:
+---------
+ - knotd: new 'pattern' mode of ACL update owner matching (see 'acl.update-owner-match')
+ - knotc: new '+keysonly' filter for zone backup/restore
+
+Improvements:
+-------------
+ - knotd: zone purging waits for finished zone expiration for better reliability
+ - knotd: remote configuration considers more 'via' with the same address family
+ - knotd: refresh doesn't fall back from IXFR to AXFR upon a network error
+ - knotd: increased default for 'policy.rrsig-refresh' by (0.1 * 'rrsig-lifetime')
+ - knotd: new control flag 'u' for unix time output format from zone status
+ - knotd: extended check for inconsistent acl settings
+ - knotd/libknot: simplified TCP/QUIC sweep logging
+ - mod-dnsproxy: all configured remote addresses are used for fallback operation
+ - mod-dnsproxy: module responds locally if forwarding fails instead of SERVFAIL
+ - libs: upgraded embedded libngtcp2 to 1.1.0
+ - doc: various fixes and extensions
+
+Bugfixes:
+---------
+ - knotd: zone backup fails due to improper backup context deinitialization #891
+ - knotd: failed to sign the zone if maximum zone's TTL is too high
+ - knotd: malformed TCP header if used with QUIC in the generic XDP mode
+ - knotd: server can crash when processing new TCP connections over XDP
+ - knotd: incorrect initialization of TCP limits
+ - knotd: orphaned PEM file not deleted when key generation fails
+ - knotd/libknot: connection timeouts over QUIC due to incomplete retransfer handling #894
+ - kdig: crashed when querying DNS over TLS if TLS handshake times out #896
+ - kzonecheck: failed to check DS with SHA-1 or GOST if not supported by local policy
+ - libdnssec: failed to compile with GnuTLS if PKCS #11 support is disabled
+
+Knot DNS 3.3.2 (2023-10-20)
+===========================
+
+Features:
+---------
+ - knotd: support for IXFR from AXFR computation (see 'zone.ixfr-from-axfr')
+ - knotd: support benevolent IXFR (see 'zone.ixfr-benevolent')
+ - knot-exporter: new configuration option '--no-zone-serial' #880
+
+Improvements:
+-------------
+ - libs: upgraded embedded libngtcp2 to 1.0.0
+ - knotd: added logging of new SOA serial when signing is finished
+ - knotd: unified some XDP-related logging
+ - keymgr: improved error message if a key file is not accessible
+ - keymgr: added offline RRSIGs validation at the end of their validity intervals
+ - kdig: upgraded EDNS presentation format to draft version -02
+ - kdig: simplified QUIC connection without extra PING frames
+ - kzonecheck: removed requirement that DS is at delegation point
+ - doc: various fixes and improvements
+
+Bugfixes:
+---------
+ - knotd: logged incorrect new SOA serial if 'zonefile-load: difference' is set #875
+ - knotd: more signing threads with a PKCS #11 keystore has no effect #876
+ - knotd: DNAME record returned with query domain name instead of actual name #873
+ - knotd: failed to import configuration file if mod-geoip is in use #881
+ - knotd: failed to sign RRSet that fits to 64k only if compressed
+ - knotd: broken zone update context upon failed operation over control interface
+ - keymgr: offline RRSIGs not refreshed if 'rrsig-refresh' is not set
+ - knsupdate: incorrect processing of @ in the delete operation #879
+ - knot-exporter: failed to parse knotd PIDs on FreeBSD
+
+Packaging:
+----------
+ - docker: added support for (inter-container) D-Bus signaling
+
+Knot DNS 3.3.1 (2023-09-11)
+===========================
+
+Improvements:
+-------------
+ - knotd: multiple catalog groups per member are tolerated, but only one is used
+ - modules: added const qualifier to various function parameters #877 (Thanks to Robert Edmonds)
+ - libs: upgraded embedded libngtcp2 to 0.19.1
+
+Bugfixes:
+---------
+ - knotd: TCP over XDP fails to respond
+ - knotd: server can crash when adjusting a wildcard glue
+ - knotd: failed to forward DDNS if 'zone.master' points to 'remotes'
+ - knotd: broken YAML statistics if more modules are configured #874
+ - knotd: DDNS forwarding isn't RFC 8945 compliant
+
+Knot DNS 3.3.0 (2023-08-28)
+===========================
+
+Features:
+---------
+ - knotd: full DNS over QUIC (DoQ, RFC 9250) implementation, also without XDP
+ - knotd: bidirectional XFR over QUIC (XoQ) support with opportunistic, strict,
+ and mutual authentication profiles
+ - knotd: automatic reverse PTR records pre-generation (see 'zone.reverse-generate')
+ - knotd: new per zone statistic counters 'zone.size' and 'zone.max-ttl'
+ - knotd: new primary server pinning (see 'zone.master-pin-tolerance')
+ - knotd: new SOA serial modulo policy (see 'zone.serial-modulo')
+ - knotd: new multi-signer operation mode (see 'policy.dnskey-sync' and 'DNSSEC multi-signer')
+ - kdig: support for EDNS presentation format, also in JSON mode (see '+optpresent')
+ - kxdpgun: new TCP/QUIC debug mode 'R' for connection reuse
+ - kxdpgun: new XDP mode parameter '--mode' (Thanks to Jan Včelák)
+ - kxdpgun: new parameter '--qlog' for qlog destination specification
+ - kzonecheck: new '--print' parameter for dumping the zone on stdout
+
+Improvements:
+-------------
+ - knotd: secondary can be configured not to forward DDNS (see 'zone.ddns-master')
+ - knotd: extended support for UNIX socket configuration (remote, acl)
+ - knotd: stats no longer dump empty or zero counters
+ - knotd: new 'keys-updated' D-Bus event
+ - knotd: added transport protocol information to outgoing event and nameserver logs
+ - knotd: server cleans up stale LMDB readers when opening a RW transaction
+ - knotd,kzonecheck: semantic check allows DS only at delegation point
+ - knotc: new zone backup filters '+quic' and '+noquic' for QUIC key backup
+ - mod-dnstap: DNS over QUIC traffic is marked as QUIC
+ - kxdpgun: QUIC connections are closed by default
+ - libs: upgraded embedded libngtcp2 to 0.18.0
+ - kdig: QUIC, TLS, or HTTPS protocol is printed in the final statistics
+ - doc: new sections 'DNS over QUIC' and 'DNSSEC multi-signer'
+ - doc: various improvements
+
+Bugfixes:
+---------
+ - knotd: server can crash if a shared module is loaded and dynamic configuration used
+ - knotd: inaccurate transfer size is logged if EDNS EXPIRE, PADDING, or TSIG is present
+ - knotd: subsequent addition and removal to catalog zone isn't handled properly
+ - knotc: configuration import fails if an explicit shared module is configured
+ - utils: database transactions not properly closed when terminated prematurely
+ - kdig: double-free on some malformed responses over QUIC #869
+ - kdig: some TLS parameters override QUIC parameters
+ - libs: NULL record with empty RDATA isn't allowed
+ - tests: dthreads destructor test sometimes fails
+
+Compatibility:
+--------------
+ - knotd: responses to forwarded DDNS requests are signed with local TSIG key
+ - knotd: NOTIFY-initiated refresh tries all configured addresses of the remote
+ - knotd: configuration option 'xdp.quic-log' was replaced with 'log.quic'
+ - libs: removed embedded libbpf, an external one is necessary for XDP
+ - libs: DNS over QUIC implementation only supports 'doq' ALPN
+ - ctl: removed 'Version: ' prefix from 'status version' output
+ - modules: reduced parameters of 'knotd_qdata_local_addr()'
+
+Packaging:
+----------
+ - knot-exporter: Prometheus exporter imported from GitHub
+ - knot-exporter: packages for Debian, Ubuntu, and PyPI
+ - debian,ubuntu: new self-hosted repository (see https://pkg.labs.nic.cz/doc/)
+ - docker: upgraded to Debian bookworm-slim
+
+Knot DNS 3.2.13 (2024-06-25)
+============================
+
+Bugfixes:
+---------
+ - knotd: insufficient metadata check can cause journal corruption
+ - knotd: failed to build on macOS #909
+ - knotd: early NSEC3 salt replanning if 'nsec3-salt-lifetime: -1'
+ - knotc: zone check complains about missing zone file #913
+ - kdig: failed to parse empty QNAME (do not fill question section)
+ - python: failed to set an empty configuration value
+ - libzscanner: incorrect alpn processing #923
+ - libknot: insufficient check for malformed TCP header options over XDP
+ - libknot: infinite loop in knot_rrset_to_wire_extra() #916
+
+Knot DNS 3.2.12 (2023-12-19)
+============================
+
+Improvements:
+-------------
+ - knotd: zone purging waits for finished zone expiration for better reliability
+ - doc: various fixes and extensions
+
+Bugfixes:
+---------
+ - knotd: zone backup fails due to improper backup context deinitialization #891
+ - knotd: failed to sign the zone if maximum zone's TTL is too high
+ - knotd: malformed TCP header if used with QUIC in the generic XDP mode
+ - knotd: incorrect initialization of TCP limits
+ - knotd: orphaned PEM file not deleted when key generation fails
+ - knotd: server can crash when processing new TCP connections over XDP
+ - kdig: crashed when querying DNS over TLS if TLS handshake times out #896
+ - kzonecheck: failed to check DS with SHA-1 or GOST if not supported by local policy
+
+Knot DNS 3.2.11 (2023-10-30)
+============================
+
+Improvements:
+-------------
+ - keymgr: improved error message if a key file is not accessible
+ - keymgr: added offline RRSIGs validation at the end of their validity intervals
+ - doc: fixed some typos
+
+Bugfixes:
+---------
+ - knotd: DNAME record returned with query domain name instead of actual name #873
+ - knotd: failed to import configuration file if mod-geoip is in use #881
+ - knotd: failed to sign RRSet that fits to 64k only if compressed
+ - keymgr: offline RRSIGs not refreshed if 'rrsig-refresh' is not set
+ - knsupdate: incorrect processing of @ in the delete operation #879
+
+Knot DNS 3.2.10 (2023-09-11)
+============================
+
+Improvements:
+-------------
+ - knotd: multiple catalog groups per member are tolerated, but only one is used
+ - knotd: server cleans up stale LMDB readers when opening a RW transaction
+
+Bugfixes:
+---------
+ - knotd: server can crash when adjusting a wildcard glue
+ - knotd: failed to forward DDNS if 'zone.master' points to 'remotes'
+ - knotd: subsequent addition and removal to catalog zone isn't handled properly
+ - knotd: server can crash if a shared module is loaded and dynamic configuration used
+ - knotc: configuration import fails if an explicit shared module is configured
+ - kdig: double-free on some malformed responses over QUIC #869
+ - kdig: some TLS parameters override QUIC parameters
+ - libs: NULL record with empty RDATA isn't allowed
+
+Knot DNS 3.2.9 (2023-07-27)
+===========================
+
+Improvements:
+-------------
+ - keymgr: 'import-pkcs11' not allowed if no PKCS #11 keystore backend is configured
+ - keymgr: more verbose key import errors
+ - doc: extended migration notes
+ - doc: various improvements
+
+Bugfixes:
+---------
+ - knotd: server may crash when storing changeset of a big zone migrating to/from NSEC3
+ - knotd: zone refresh loop when all masters are outdated and timers cleared
+ - knotd: failed to activate D-Bus notifications if not started as a systemd service
+ - kjournalprint: database transaction not properly closed when terminated prematurely
+
+Knot DNS 3.2.8 (2023-06-26)
+===========================
+
+Improvements:
+-------------
+ - kdig: malformed messages are parsed and printed using a best-effort approach
+ - python: new dname from wire initialization
+
+Bugfixes:
+---------
+ - knotd: missing outgoing NOTIFY upon refresh if one of multiple primaries is up-to-date
+ - knotd: journal loop detection can prevent zone from loading
+ - knotd: cryptic error message when journal is full #842
+ - knotd: failed to query catalog zone over UDP
+ - configure: libngtcp2 check wrongly requires version 0.13.0 instead of 0.13.1
+
+Knot DNS 3.2.7 (2023-06-06)
+===========================
+
+Features:
+---------
+ - knotd: new configuration option for preserving incoming IXFR changeset history
+ (see 'zone.ixfr-by-one')
+
+Improvements:
+-------------
+ - knotd: journal ensures the stored changeset's SOA serials are strictly increasing
+ - knotd: more effective handling of zero KNOT_ZONE_LOAD_TIMEOUT_SEC environment value
+ - knotd, kdig: incoming transfer fails if a message has the TC bit set
+ - knotd, kjournalprint: store or print the timestamp of changeset creation
+ - kxdpgun: load only necessary number of queries (Thanks to Petr Špaček)
+ - kxdpgun: print ratio of sent vs. requested queries (Thanks to Petr Špaček)
+ - kxdpgun: print percentages as floats (Thanks to Petr Špaček)
+ - kjournalprint: ability to print a changeset loop
+ - kjournalprint: added changeset serials information to '-z -d' output
+ - packaging: RHEL9 requires libxdp like Fedora since RHEL 9.2 #844
+ - doc: various improvements
+
+Bugfixes:
+---------
+ - knotd: journal loading can get stuck in a multi-changeset loop
+ - knotd: missing RCU lock when reading zone through the control interface
+ - knotd: server start D-Bus signaling doesn't work well if the zone file is
+ missing, catalog zones are used, or in the async-start mode
+ - knotd: test suite fails on 32bit architectures on musl 1.2 and newer #843
+ - knotd: failed to process zero-length messages over QUIC
+ - libs: compilation with embedded ngtcp2 fails if there is another ngtcp2 in the path
+
+Knot DNS 3.2.6 (2023-04-04)
+===========================
+
+Improvements:
+-------------
+ - libs: upgraded embedded libngtcp2 to 0.13.1
+ - libs: added support for building on Cygwin and MSYS (Thanks to Christopher Ng)
+ - mod-dnstap: improved precision of stored time values
+ - kdig: added option for EDNS EXPIRE (see '+expire') #836
+ - kdig: extended description of SOA timers in the multiline mode
+ - kdig: reduced latency of TLS communication
+ - libknot: added EDE codes 28 and 29
+ - doc: various improvements
+
+Bugfixes:
+---------
+ - knotd: generated catalog zone not updated upon server reload #834
+ - knotd: failed to check shared module configuration
+ - knotd: missing RCU registration of the statistics thread (Thanks to Qin Longfei)
+ - knotd: server logs failed to send QUIC packets in the XDP mode
+ - libs: inconsistent transformation of IPv4-Compatible IPv6 Addresses
+ - utils: failed to load configuration if dnstap module is enabled #831
+ - libknot: missing include string.h
+
+Knot DNS 3.2.5 (2023-02-02)
+===========================
+
+Features:
+---------
+ - knotd: new configuration option for enforcing IXFR fallback (see 'zone.provide-ixfr')
+
+Improvements:
+-------------
+ - knotd: changed UNIX socket file mode to 0222 for answering and 0220 for control
+ - mod-probe: new support for communication over a UNIX socket
+ - kdig: new support for communication over a UNIX socket
+ - libs: upgraded embedded libngtcp2 to 0.13.0
+ - doc: various improvements
+
+Bugfixes:
+---------
+ - knotd: failed to get catalog member configuration if catalog template is in a template
+ - knotd: failed to respond over a UNIX socket with EDNS
+ - knotd: unexpected zone update upon restart or zone reload if ZONEMD generation is enabled
+ - knotd: redundant zone flush of unchanged zone if zone file load is 'difference-no-serial'
+ - knotd/kxdpgun: failed to receive messages over XDP with drivers tap or ena
+ - knotc: zone check doesn't report missing zone file #829
+ - kxdpgun: program crashes when remote closes QUIC connection instead of resumption
+ - mod-geoip: configuration check leaks memory in the geodb mode
+ - utils: unwanted color reset sequences in non-color output
+
+Knot DNS 3.2.4 (2022-12-12)
+===========================
+
+Improvements:
+-------------
+ - knotd: significant speed-up of catalog zone update processing
+ - knotd: new runtime check if RRSIG lifetime is lower than RRSIG refresh
+ - knotd: reworked zone re-bootstrap scheduling to be less progressive
+ - mod-synthrecord: module can work with CIDR-style reverse zones #826
+ - python: new libknot wrappers for some dname transformation functions
+ - doc: a few fixes and improvements
+
+Bugfixes:
+---------
+ - knotd: incomplete zone is received when IXFR falls back to AXFR due to
+ connection timeout if primary puts initial SOA only to the first message
+ - knotd: first zone re-bootstrap is planned after 24 hours
+ - knotd: EDNS EXPIRE option is present in outgoing transfer of a catalog zone
+ - knotd: catalog zone can expire upon EDNS EXPIRE processing
+ - knotd: DNSSEC signing doesn't fail if no offline KSK records available
+
+Knot DNS 3.2.3 (2022-11-20)
+===========================
+
+Improvements:
+-------------
+ - knotd: new per-zone DS push configuration option (see 'zone.ds-push')
+ - libs: upgraded embedded libngtcp2 to 0.11.0
+
+Bugfixes:
+---------
+ - knsupdate: program crashes when sending an update
+ - knotd: server drops more responses over UDP under higher load
+ - knotd: missing EDNS padding in responses over QUIC
+ - knotd: some memory issues when handling unusual QUIC traffic
+ - kxdpgun: broken IPv4 source subnet processing
+ - kdig: incorrect handling of unsent data over QUIC
+
+Knot DNS 3.2.2 (2022-11-01)
+===========================
+
+Features:
+---------
+ - knotd,kxdpgun: support for VLAN (802.1Q) traffic in the XDP mode
+ - knotd: added configurable delay upon D-Bus initialization (see 'server.dbus-init-delay')
+ - kdig: support for JSON (RFC 8427) output format (see '+json')
+ - kdig: support for PROXYv2 (see '+proxy') (Gift for Peter van Dijk)
+
+Improvements:
+-------------
+ - mod-geoip: module respects the server configuration of answer rotation
+ - libs: upgraded embedded libngtcp2 to 0.10.0
+ - tests: improved robustness of some unit tests
+ - doc: added description of zone bootstrap re-planning
+
+Bugfixes:
+---------
+ - knotd: catalog confusion when a member is added and immediately deleted #818
+ - knotd: defective handling of short messages with PROXYv2 header #816
+ - knotd: inconsistent processing of malformed messages with PROXYv2 header #817
+ - kxdpgun: incorrect XDP mode is logged
+ - packaging: outdated dependency check in RPM packages
+
+Knot DNS 3.2.1 (2022-09-09)
+===========================
+
+Improvements:
+-------------
+ - libknot: added compatibility with libbpf 1.0 and libxdp
+ - libknot: removed some trailing white space characters from textual RR format
+ - libs: upgraded embedded libngtcp2 to 0.8.1
+
+Bugfixes:
+---------
+ - knotd: some non-DNS packets not passed to OS if XDP mode enabled
+ - knotd: inappropriate log about QUIC port change if QUIC not enabled
+ - knotd/kxdpgun: various memory leaks related to QUIC and TCP
+ - kxdpgun: can crash at high rates in emulated XDP mode
+ - tests: broken XDP-TCP test on 32-bit platforms
+ - kdig: failed to build with enabled QUIC on OpenBSD
+ - systemd: failed to start server due to TemporaryFileSystem setting
+ - packaging: missing knot-dnssecutils package on CentOS 7
+
+Knot DNS 3.2.0 (2022-08-22)
+===========================
+
+Features:
+---------
+ - knotd: finalized TCP over XDP implementation
+ - knotd: initial implementation of DNS over QUIC in the XDP mode (see 'xdp.quic')
+ - knotd: new incremental DNSKEY management for multi-signer deployment (see 'policy.dnskey-management')
+ - knotd: support for remote grouping in configuration (see 'groups' section)
+ - knotd: implemented EDNS Expire option (RFC 7314)
+ - knotd: NSEC3 salt is changed with every ZSK rollover if lifetime is set to -1
+ - knotd: support for PROXY v2 protocol over UDP (Thanks to Robert Edmonds) #762
+ - knotd: support for key labels with PKCS #11 keystore (see 'keystore.key-label')
+ - knotd: SVCB/HTTPS treatment according to draft-ietf-dnsop-svcb-https
+ - keymgr: new JSON output format (see '-j' parameter) for listing keys or zones (Thanks to JP Mens)
+ - kxdpgun: support for DNS over QUIC with some testing modes (see '-U' parameter)
+ - kdig: new DNS over QUIC support (see '+quic')
+
+Improvements:
+-------------
+ - knotd: reduced memory consumption when processing IXFR, DNSSEC, catalog, or DDNS
+ - knotd: RRSIG refresh values don't have to match in the mode Offline KSK
+ - knotd: better decision whether AXFR fallback is needed upon a refresh error
+ - knotd: NSEC3 resalt event was merged with the DNSSEC event
+ - knotd: server logs when the connection to remote was taken from the pool
+ - knotd: server logs zone expiration time when the zone is loaded
+ - knotd: DS check verifies removal of old DS during algorithm rollover
+ - knotd: DNSSEC-related records can be updated via DDNS
+ - knotd: new 'xdp.udp' configuration option for disabling UDP over XDP
+ - knotd: outgoing NOTIFY is replanned if failed
+ - knotd: configuration checks if zone MIN interval values are lower or equal to MAX ones
+ - knotd: DNSSEC-related zone semantic checks use DNSSEC validation
+ - knotd: new configuration value 'query' for setting ACL action
+ - knotd: new check on near end of imported Offline KSK records
+ - knotd/knotc: implemented zone catalog purge, including orphaned member zones
+ - knotc: interactive mode supports catalog zone completion, value completion, and more
+ - knotc: new default brief and colorized output from zone status
+ - knotc: unified empty values in zone status output
+ - keymgr: DNSKEY TTL is taken from KSR in the Offline KSK mode
+ - kjournalprint: path to journal DB is automatically taken from the configuration,
+ which can be specified using '-c', '-C' (or '-D')
+ - kcatalogprint: path to catalog DB is automatically taken from the configuration,
+ which can be specified using '-c', '-C' (or '-D')
+ - kzonesign: added automatic configuration file detection and '-C' parameter
+ for configuration DB specification
+ - kzonesign: all CPU threads are used for DNSSEC validation
+ - libknot: dname pointer cannot point to another dname pointer when encoding RRsets #765
+ - libknot: QNAME case is preserved in knot_pkt_t 'wire' field (Thanks to Robert Edmonds) #780
+ - libknot: reduced memory consumption of the XDP mode
+ - libknot: XDP filter supports up to 256 NIC queues
+ - kxdpgun: new options for specifying source and remote MAC addresses
+ - utils: extended logging of LMDB-related errors
+ - utils: improved error outputs
+ - kdig: query has AD bit set by default
+ - doc: various improvements
+
+Bugfixes:
+---------
+ - knotd: zone changeset is stored to journal even if disabled
+ - knotd: journal not applied to zone file if zone file changed during reload
+ - knotd: possible out-of-order processing of zone events or their postponement to the far future
+ - knotd: incorrect TTL is used if updated RRSet is empty over control interface
+ - knotd/libs: serial arithmetics not used for RRSIG expiration processing
+ - knsupdate: incorrect RRTYPE in the question section
+
+Compatibility:
+--------------
+ - knotd: default value for 'zone.journal-max-depth' was lowered to 20
+ - knotd: default value for 'policy.nsec3-iterations' was lowered to 0
+ - knotd: default value for 'policy.rrsig-refresh' is propagation delay + zone maximum TTL
+ - knotd: server fails to load configuration if 'policy.rrsig-refresh' is too low
+ - knotd: configuration option 'server.listen-xdp' has no effect
+ - knotd: new configuration check on deprecated DNSSEC algorithm
+ - knotc: new '-e' parameter for full zone status output
+ - keymgr: new '-e' parameter for full key list output
+ - keymgr: brief key listing mode is enabled by default
+ - keymgr: renamed parameter '-d' to '-D'
+ - knsupdate: default TTL is set to 3600
+ - knsupdate: default zone is empty
+ - kjournalprint: renamed parameter '-c' to '-H'
+ - python/libknot: removed compatibility with Python 2
+
+Packaging:
+----------
+ - systemd: removed knot.tmpfile
+ - systemd: added some hardening options
+ - distro: Debian 9 and Ubuntu 16.04 no longer supported
+ - distro: packages for CentOS 7 are built in a separate COPR repository
+ - kzonecheck/kzonesign/knsec3hash: moved to new package knot-dnssecutils
+
+Knot DNS 3.1.9 (2022-08-10)
+===========================
+
+Improvements:
+-------------
+ - knotd: new configuration checks on unsupported catalog settings
+ - knotd: semantic check issues have notice log level in the soft mode
+ - keymgr: command generate-ksr automatically sets 'from' parameter to last
+ offline KSK records' timestamp if it's not specified
+ - keymgr: command show-offline starts from the first offline KSK record set
+ if 'from' parameter isn't specified
+ - kcatalogprint: new parameters for filtering catalog or member zone
+ - mod-probe: default rate limit was increased to 100000
+ - libknot: default control timeout was increased to 30 seconds
+ - python/libknot: various exceptions are raised from class KnotCtl
+ - doc: some improvements
+
+Bugfixes:
+---------
+ - knotd: incomplete outgoing IXFR is responded if journal history is inconsistent
+ - knotd: manually triggered zone flush is suppressed if zone synchronization is disabled
+ - knotd: failed to configure XDP listen interface without port specification
+ - knotd: de-cataloged member zone's file isn't deleted #805
+ - knotd: member zone leaks memory when reloading catalog during dynamic configuration change
+ - knotd: server can crash when reloading modules with DNSSEC signing (Thanks to iqinlongfei)
+ - knotd: server crashes during shutdown if PKCS #11 keystore is used
+ - keymgr: command del-all-old isn't applied to all keys in the removed state
+ - kxdpgun: user specified network interface isn't used
+ - libs: fixed compilation on illumos derivatives (Thanks to Nick Ewins)
+
+Knot DNS 3.1.8 (2022-04-28)
+===========================
+
+Features:
+---------
+ - knotd: optional automatic ACL for XFR and NOTIFY (see 'remote.automatic-acl' and the sketch below)
+ - knotd: new soft zone semantic check mode for allowing defective zone loading
+ - knotc: added zone transfer freeze state to the zone status output
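+
+ A minimal knot.conf sketch of the automatic ACL feature above; the remote and
+ zone names, the address, and the 'on' value are illustrative assumptions, not
+ taken from the release notes:
+
+   remote:
+     - id: secondary-ns
+       address: 192.0.2.2
+       automatic-acl: on
+
+   zone:
+     - domain: example.com
+       notify: secondary-ns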
+
+Improvements:
+-------------
+ - knotd: added configuration check for serial policy of generated catalogs
+
+Bugfixes:
+---------
+ - knotd/libknot: the server can crash when validating a malformed TSIG record
+ - knotd: outgoing zone transfer freeze not preserved during server reload
+ - knotd: catalog UPDATE not processed if previous UPDATE processing not finished #790
+ - knotd: zone refresh not started if planned during server reload
+ - knotd: generated catalogs can be queried over UDP
+ - knotd/utils: failed to open LMDB database if too many stale slots occupy the lock table
+
+Knot DNS 3.1.7 (2022-03-30)
+===========================
+
+Features:
+---------
+ - knotd: new configuration items for restricting minimum and maximum zone expire
+ and retry intervals (see 'zone.expire-min-interval', 'zone.expire-max-interval',
+   'zone.retry-min-interval', 'zone.retry-max-interval', and the sketch below) #785
+ - knotc: added catalog information to zone status
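+
+ An illustrative knot.conf sketch of the new interval bounds referenced above;
+ the zone name and the interval values are arbitrary placeholders:
+
+   zone:
+     - domain: example.com
+       expire-min-interval: 1h
+       expire-max-interval: 14d
+       retry-min-interval: 10m
+       retry-max-interval: 1d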
+
+Improvements:
+-------------
+ - knotd: better warning message if SOA serial comparison failed when loading from zone file
+ - knotc: zone status shows all zone events when frozen
+ - keymgr: better error message is returned when importing SKR with insufficient permissions
+ - kdig: transfer status is also printed if failed
+
+Bugfixes:
+---------
+ - knotd: incomplete implementation of the Offline KSK mode in the IXFR and DDNS processing
+ - knotd: catalog zone accepts duplicate members via UPDATE #786
+ - knotd: server crashes if catalog database contains orphaned member zones
+ - knotd: old journal is scraped when restoring just the zone file
+ - knotd: some planned zone events can be lost during server reload
+ - knotd: frozen zone gets thawed during server reload
+ - knsupdate: missing section names in the show output
+ - knsupdate: inappropriate log message if called from a script
+
+Knot DNS 3.1.6 (2022-02-08)
+===========================
+
+Features:
+---------
+ - knotd: optional D-Bus notifications for significant server and zone events
+ (see 'server.dbus-event')
+ - knotd: new submission configuration option for delayed KSK post-activation
+   (see 'submission.parent-delay' and the sketch after this list)
+ - knotc: new commands for outgoing XFR freeze (see 'zone-xfr-freeze' and 'zone-xfr-thaw')
+ - kzonesign: added multithreaded DNSSEC validation mode (see '--verify')
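+
+ A tentative knot.conf fragment for 'submission.parent-delay' (see above); the
+ submission identifier, the parent remote name, and the delay value are
+ hypothetical, and the surrounding submission options follow the documented
+ section layout only by assumption:
+
+   submission:
+     - id: parent-check
+       parent: parent-ns
+       parent-delay: 1h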
+
+Improvements:
+-------------
+ - kdig: trailing data in reply packet is accepted with a warning
+ - kdig: XFR responses are checked if SOA owners match
+ - knotd: failed remote operations are logged as info instead of debug
+ - knsec3hash: added alternative and more natural parameter semantics
+ - knsupdate: interactive mode is newly based on library Editline
+ - Dockerfile: added UID argument to facilitate the use of unprivileged container #783
+ - doc: various fixes and improvements
+
+Bugfixes:
+---------
+ - libknot: inaccurate KNOT_DNAME_TXT_MAXLEN constant value #781
+ - knotd: propagation delay not considered before DS push
+ - knotd: excessive refresh retry delay when a few early attempts fail
+ - knotd: duplicate KSK submission log message during a KSK rollover
+ - kdig: dname letter case not preserved in XFR and Dnstap outputs
+ - mod-cookies: missing server cookie in responses over TCP
+
+Knot DNS 3.1.5 (2021-12-20)
+===========================
+
+Features:
+---------
+ - knotd: optional outgoing TCP connection pool for faster communication with remotes
+   (see 'server.remote-pool-limit' and 'server.remote-pool-timeout'; sketch after this list)
+ - knotd: optional unreachable remote tracking to avoid zone events clogging
+ (see 'server.remote-retry-delay')
+ - knotd: new ZONEMD generation mode that removes the record from the zone apex #760
+ (see 'zone.zonemd-generate: remove')
+ - mod-dnsproxy: new source address match option (see 'mod-dnsproxy.address')
+ - scripts/probe_dump: simple mod-probe client
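+
+ A rough knot.conf sketch combining the connection pool, retry delay, and
+ ZONEMD removal options mentioned above; the numeric values and the zone name
+ are illustrative only, not recommended defaults:
+
+   server:
+       remote-pool-limit: 10
+       remote-pool-timeout: 10
+       remote-retry-delay: 5
+
+   zone:
+     - domain: example.com
+       zonemd-generate: remove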
+
+Improvements:
+-------------
+ - knotd: DS push sets DS TTL equal to DNSKEY TTL
+ - knotd: extended zone purge error logging
+ - knotd: zone file parsing error message was extended by the file name
+ - knotd: improved debug log message when TCP timeout is reached
+ - knotd: new configuration check for using the default number of NSEC3 iterations
+ - knotd: new configuration check for insufficient RRSIG refresh time
+ - mod-geoip: configuration check newly verifies the module configuration file #778
+ - kdig: option +notimeout or +timeout=0 is interpreted as infinity
+ - kdig: option +noretry is interpreted as zero retries
+ - python/probe: more detailed default output format
+ - doc: many spelling fixes (Thanks to Josh Soref)
+ - doc: various fixes and improvements
+
+Bugfixes:
+---------
+ - knotd: imperfect TCP connection closing in the XDP mode
+ - knotd: TCP reset packets are wrongly checked for ackno in the XDP mode
+ - knotd: only first zone name is logged for multi-zone control operations #776
+ - knotd: minor memory leak when full zone update fails to write to journal
+ - knotc: configuration check doesn't check a configuration database
+ - mod-dnstap: incorrect QNAME case restore in some corner cases (Thanks to Robert Edmonds) #777
+
+Knot DNS 3.1.4 (2021-11-04)
+===========================
+
+Features:
+---------
+ - mod-dnstap: added 'responses-with-queries' configuration option (Thanks to Robert Edmonds) #764
+
+Improvements:
+-------------
+ - knotd: DNSSEC keys are logged in sorted order by timestamp
+ - mod-cookies: added statistics counter for dropped queries due to the slip limit
+ - mod-dnstap: restored the original query QNAME case #773 (Thanks to Robert Edmonds)
+ - configure: improved compatibility of some scripts on macOS and BSDs
+ - doc: updates on DNSSEC signing
+
+Bugfixes:
+---------
+ - knotd: server can crash when receiving queries with NSID EDNS flag #774 (Thanks to Romain Labolle)
+ - knotd: server crashes on reload when no interfaces configured #770
+ - knotd: ZONEMD without DNSSEC not handled correctly
+ - knotd: generated catalog zone not updated on config reload #772
+ - knotd: zone catalog not verified before its interpretation
+ - knotd: ds-push fails to update the parent zone if a CNAME exists for a non-terminal node
+
+Knot DNS 3.1.3 (2021-10-18)
+===========================
+
+Improvements:
+-------------
+ - knotd: added simple error logging to orphaned zone purge
+ - knotd: allow manual public-only keys for unused algorithm
+ - kdig: send ALPN when using DoT or XoT #769
+ - doc: various fixes and improvements #767
+
+Bugfixes:
+---------
+ - knotd: catalog backup doesn't preserve version of the catalog implementation
+ - knotd: NOTIFY is scheduled even when DNSSEC signing is up-to-date
+ - knotd: server can crash when zone difference is inconsistent upon cold start
+ - knotd: zone not bootstrapped when zone file load failed due to an error
+ - knotd: broken AXFR with knot as slave and dnsmasq as master (Thanks to Daniel Gröber)
+ - knotd: journal not able to free up space when zone-in-journal present and zonefile written
+ - mod-stats: missing protocol counters for TCP over XDP
+ - kzonesign: input zone name not lower-cased
+
+Knot DNS 3.1.2 (2021-09-08)
+===========================
+
+Features:
+---------
+ - knotd: new policy configuration for postponing complete deletion of previous keys
+ - keymgr: new optional pretty mode (-b) of listing keys
+ - kdig: added support for TCP keepopen #503
+
+Improvements:
+-------------
+ - knotd: configuration item values can contain UTF-8 characters
+ - knotd: added configuration check for database storage writability
+ - knotd: better error reporting if zone is empty
+ - knotd: smaller journal database chunks in order to mitigate LMDB fragmentation
+ - knotd/kxdpgun: CAP_SYS_RESOURCE capability no longer needed for XDP on Linux >= 5.11
+
+Bugfixes:
+---------
+ - knotd: incomplete NSEC3 proof in response to opt-outed empty non-terminal
+ - knotd: wrong SOA serial handling when enabling signing on already existing secondary zone
+ - knotd: defective ZONEMD verification error reporting when loading zone #759
+ - knotd: server can crash when reloading catalog zone #761
+ - knotd: DNSSEC validation doesn't work when only NSEC3 chain changes
+ - knotd: DNSSEC validation doesn't check if empty non-terminal over non-opt-outed
+ delegation isn't opt-outed too
+ - knotd: ZONEMD generation doesn't cause flushing zone to disk #758
+ - knotd: incorrect evaluation of ACL deny rule in combination with TSIG
+ - knotd: failed DS-check is replanned even if no key is ready
+ - kdig: abort when query times out #763
+ - libzscanner: missing output overflow check in the SVCB parsing
+
+Compatibility:
+--------------
+ - keymgr: parameter -d is marked deprecated in favor of new parameter -D
+ - kjournalprint: parameter -n is marked deprecated in favor of new parameter -x
+
+Knot DNS 3.1.1 (2021-08-10)
+===========================
+
+Improvements:
+-------------
+ - keymgr: import-bind sets publish and active timers to now if missing timers #747
+ - mod-rrl: added QNAME, which triggered an action, to log messages #757
+ - systemd: added environment variable for setting maximum configuration DB size
+
+Bugfixes:
+---------
+ - knotd: adding RRSIGs to a signed zone can lead to redundant RRSIGs for some NSEC(3)s
+ - knotd: code not compiled correctly for ARM on Fedora >= 33
+ - knotd: server can crash when opening catalog DB on startup
+ - knotd: incorrect catalog update counts in logs
+ - knotd: journal discontinuity and zone-in-journal result in incorrectly calculated journal occupation
+ - kdig: +noall does not filter out AUTHORITY comment #749
+ - tests: journal unit test not passing if memory page size is different from 4096
+
+Reverts:
+--------
+ - libzscanner: reverted "omitted TTL value is correctly set to the last explicitly stated value (RFC 1035)" #751
+
+Knot DNS 3.1.0 (2021-08-02)
+===========================
+
+Features:
+---------
+ - knotd: automatic zone catalog generation based on actual configuration
+ - knotd: zone catalog supports configuration groups
+ - knotd: support for ZONEMD validation and generation
+ - knotd: basic support for TCP over XDP processing
+ - knotd: configuration option for enabling IP route check in the XDP mode
+ - knotd: support for epoll (Linux) and kqueue (*BSD, macOS) socket polling
+ - knotd: extended EDNS error (EDE) is added to the response if appropriate
+ - knotd: DNSSEC operation with extra ready public-only KSK is newly allowed
+ - knotd: new zone backup/restore filters for more variable component specification
+ - knotd: adaptive systemd service start timeout and new zone loading status #733
+ - knotd: configuration option for enabling TCP Fast Open on outbound communication
+ - knotd: when the server starts, zone NOTIFY is sent only if not sent already
+ - knotc: zone reload with the force flag triggers reload of the zone and its modules
+ - libs: support for parsing and dumping SVCB and HTTPS resource records
+ - kdig: support for TCP Fast Open along with DoT/DoH #549
+ - kxdpgun: basic support for DNS over TCP processing
+ - kxdpgun: current traffic statistics can be printed using a USR1 signal
+ - python: new libknot/probe API wrapper
+
+Improvements:
+-------------
+ - knotd: PID file is created even in the foreground mode
+ - knotd: more robust and enhanced zone data backup and restore operations
+ - knotd: maximum length of an XFR message is limited to 16 KiB for better compression
+ - knotd: maximum CNAME/DNAME chain depth per reply was decreased from 20 to 5
+ - knotd: improved performance of processing domain names with many short labels
+ - knotd: adaptive limit on the number of LMDB readers to avoid problems with many workers
+ - knotd: TTL of generated NSEC(3) records is set to min(SOA TTL, SOA minimum)
+ - knotd: TTL of generated NSEC3PARAM is equal to TTL of NSEC3 records
+ - knotd: maximum TCP segment size is restricted to 1220 octets on Linux #468
+ - knotc: various improvements in error reporting
+ - knotc: default control timeout is infinity in the blocking mode
+ - dnssec: dnskey generator tries to return a key with a unique keytag
+ - kxdpgun: RLIMIT_MEMLOCK is increased only if not high enough
+ - kxdpgun: RTNETLINK is used for getting network information instead of the ip command
+
+Bugfixes:
+---------
+ - knotd: DNAME not applied more than once to resolve the query #714
+ - knotd: root zone not correctly purged from the journal
+ - kzonecheck: incorrect check for opt-outed empty non-terminal nodes
+ - libzscanner: wrong error line number
+ - libzscanner: broken multiline rdata processing if an error occurs
+ - mod-geoip: NXDOMAIN is responded instead of NODATA #745
+ - make: build fails with undefined references if building using slibtool #722
+
+Packaging:
+----------
+ - knotd: systemd service reload uses 'kill -HUP' instead of 'knotc reload'
+ - kxdpgun: new library dependency libmnl
+ - mod-dnstap: new package separate from the knot package
+ - mod-geoip: new package separate from the knot package
+
+Compatibility:
+--------------
+ - configure: option '--enable-xdp=yes' means use an external libbpf if available
+ or use the embedded one
+ - libzscanner: omitted TTL value is correctly set to the last explicitly stated value (RFC 1035)
+ - knotc: zone restore from an old backup (3.0.x) requires forced operation
+ - knotd: configuration option 'server.listen-xdp' is replaced with 'xdp.listen' (sketch after this list)
+ - knotd: zone file loading with automatic SOA serial incrementation newly
+ requires having full zone in the journal
+ - knotd: obsolete configuration options 'zone.disable-any', 'server.tcp-handshake-timeout'
+ are silently ignored
+ - knotd: obsolete configuration options 'zone.max-zone-size', 'zone.max-journal-depth',
+   'zone.max-journal-usage', 'zone.max-refresh-interval', 'zone.min-refresh-interval',
+ 'server.max-ipv4-udp-payload', 'server.max-ipv6-udp-payload', 'server.max-udp-payload',
+ 'server.tcp-reply-timeout', 'server.max-tcp-clients' are ignored
+ - knotd: obsolete default template options 'template.journal-db',
+ 'template.kasp-db', 'template.timer-db', 'template.max-journal-db-size',
+ 'template.journal-db-mode', 'template.max-timer-db-size',
+ 'template.max-kasp-db-size' are ignored
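+
+ For the 'server.listen-xdp' to 'xdp.listen' change above, a minimal sketch of
+ the new form (the interface name is a placeholder):
+
+   xdp:
+       listen: eth0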
+
+Knot DNS 3.0.11 (2022-04-28)
+============================
+
+Improvements:
+-------------
+ - doc: various fixes and improvements
+
+Bugfixes:
+---------
+ - knotd/libknot: the server can crash when validating a malformed TSIG record
+ - knotd: public-only key makes DNSSEC signing fail
+ - knotd: frozen zone gets thawed during server reload
+ - knotd: zone refresh not started if planned during server reload
+ - knotd: some planned zone events can be lost during server reload
+ - knotd: propagation delay not considered before DS push
+ - knotd: duplicate KSK submission log message during a KSK rollover
+ - mod-cookies: missing server cookie in responses over TCP
+ - knsupdate: missing section names in the show output
+
+Knot DNS 3.0.10 (2021-11-04)
+============================
+
+Improvements:
+-------------
+ - doc: various fixes and improvements
+
+Bugfixes:
+---------
+ - knotd: server can crash when receiving queries with NSID EDNS flag #774 (Thanks to Romain Labolle)
+ - knotd: ds-push fails to update the parent zone if a CNAME exists for a non-terminal node
+ - knotd: server crashes on reload when no interfaces configured #770
+ - knotd: journal not able to free up space when zone-in-journal present and zonefile written
+ - knotd: broken AXFR with knot as slave and dnsmasq as master (Thanks to Daniel Gröber)
+ - knotd: server can crash when zone difference is inconsistent upon cold start
+ - mod-stats: missing protocol counters for TCP over XDP
+ - kzonesign: input zone name not lower-cased
+
+Knot DNS 3.0.9 (2021-09-09)
+===========================
+
+Improvements:
+-------------
+ - keymgr: import-bind sets publish and active timers to now if missing timers #747
+
+Bugfixes:
+---------
+ - knotd: incomplete NSEC3 proof in response to opt-outed empty non-terminal
+ - knotd: journal discontinuity and zone-in-journal result in incorrectly calculated journal occupation
+ - knotd: incorrect evaluation of ACL deny rule in combination with TSIG
+ - knotd: failed DS-check is replanned even if no key is ready
+ - knotd: root zone not correctly purged from the journal
+ - kdig: +noall does not filter out AUTHORITY comment #749
+
+Knot DNS 3.0.8 (2021-07-16)
+===========================
+
+Features:
+---------
+ - knotc: new command for loading DNSSEC keys without dropping all RRSIGs when re-signing
+ - knotd: new policy configuration option for disabling some DNSSEC safety features #741
+ - mod-geoip: new dnssec and policy configuration options
+
+Bugfixes:
+---------
+ - knotd: early KSK removal during a KSK rollover if automatic KSK submission check
+ is enabled and DNSKEY TTL is lower than the corresponding DS TTL
+ - knotd: failed to generate a new DNSKEY if previously generated shared key not available
+ - knotd: periodical error logging when a PKCS #11 keystore failed to initialize #742
+ - knotd: zone commit doesn't check for missing SOA record
+
+Knot DNS 3.0.7 (2021-06-16)
+===========================
+
+Features:
+---------
+ - knotd: new configuration policy option for CDS digest algorithm setting #738
+ - keymgr: new command for primary SOA serial manipulation in on-secondary signing mode
+
+Improvements:
+-------------
+ - knotd: improved algorithm rollover to shorten the last step of old RRSIG publication
+
+Bugfixes:
+---------
+ - knotd: zone is flushed upon server start even though DNSSEC signing is up-to-date
+ - knotd: wildcard nonexistence is proved on empty-non-terminal query
+ - knotd: redundant wildcard proof for non-authoritative data in a reply
+ - knotd: missing wildcard proofs in a wildcard-cname loop reply
+ - knotd: incorrectly synthesized CNAME owner from a wildcard record #715
+ - knotd: zone-in-journal changeset ignores journal-max-usage limit #736
+ - knotd: incorrect processing of zone-in-journal changeset with SOA serial 0
+ - knotd: broken initialization of processing workers if SO_REUSEPORT(_LB) not available
+ - kjournalprint: reported journal usage is incorrect #736
+ - keymgr: cannot parse algorithm name ed448 #739
+ - keymgr: default key size not set properly
+ - kdig: failed to process huge DoH responses
+ - libknot/probe: some corner-case bugs
+
+Knot DNS 3.0.6 (2021-05-12)
+===========================
+
+Features:
+---------
+ - mod-probe: new module for simple traffic logging (Python API not yet included)
+
+Improvements:
+-------------
+ - keymgr: new mode for listing zones with at least one key stored
+ - keymgr: the pregenerate command accepts optional timestamp-from parameter
+ - kzonecheck: accept '-' as substitution for standard input #727
+ - knotd: print an error when unable to change owner of a logging file
+ - knotd: new warning log if no interface is configured
+ - knotd: new signing policy check for NSEC3 iterations higher than 20
+ - knotd: don't allow backup to/restore from the DB storage directory
+ - Various code (mostly zone backup/restore), tests, and documentation improvements
+
+Bugfixes:
+---------
+ - knotd: secondary fails to load zone file if HTTPS or SVCB record is present #725
+ - knotd: (KSK roll-over) new KSK is not signing DNSKEY long enough before DS submission
+ - knotd: (KSK roll-over) old KSK uselessly published after roll-over finished
+ - knotd: malformed address in TCP-related logs when listening on a UNIX socket
+ - knotd: server responds FORMERR instead of BADTIME if TSIG signed time is zero #730
+ - modules: incorrect local and remote addresses in the XDP mode
+ - modules: failed to read configuration from a section without identifiers
+ - mod-synthrecord: queries on synthesized empty-non-terminals not answered with NODATA
+ - keymgr: confusing error if del-all-old command fails
+
+Knot DNS 3.0.5 (2021-03-25)
+===========================
+
+Improvements:
+-------------
+ - kdig: added support for TCP Fast Open on FreeBSD
+ - keymgr: the SEP flag can be changed on already generated keys
+ - Some documentation improvements
+
+Bugfixes:
+---------
+ - knotd: journal contents can be considered malformed after changeset merge
+ - knotd: broken detection of TCP Fast Open availability
+ - knotd: zone restore can get stuck in an infinite loop if zone configuration changed
+ - knotd: failed zone backup makes control socket unavailable
+ - knotd: zone not stored to journal after reload if difference-no-serial is enabled
+ - knotd: old key is being used after an algorithm rollover with a shared policy #721
+ - keymgr: keytag not recomputed upon key flag change
+ - kdig: TCP not used if +fastopen is set
+ - mod-dnstap: the local address is empty
+ - kzonecheck: missing letter lower-casing of the origin parameter
+ - XDP mode wrongly detected on NetBSD
+ - Failed to build knotd_stdio fuzzing utility
+
+Knot DNS 3.0.4 (2021-01-20)
+===========================
+
+Improvements:
+-------------
+ - Sockets to CPUs binding is no longer enabled by default but can be enabled
+ via new configuration option 'server.socket-affinity'
+ - Some documentation improvements
+
+Bugfixes:
+---------
+ - DNS queries without EDNS to the root zone apex are dropped in the XDP mode
+ - Deterministic ECDSA signing leaks memory
+ - Zone not stored to journal if zonefile-load isn't ZONEFILE_LOAD_WHOLE
+ - Server crashes if the catalog zone isn't configured for registered member zones
+ - Server crashes when loading conflicting catalog member zones
+ - CNAME and DNAME records below delegation are not ignored #713
+ - Not all udp/tcp workers are used if the number of NIC queues is lower than
+ the number of udp/tcp workers
+ - Failed to load statistics and geoip modules if built as shared
+
+Knot DNS 3.0.3 (2020-12-15)
+===========================
+
+Features:
+---------
+ - Kjournalprint can display changesets starting from specific SOA serial
+
+Improvements:
+-------------
+ - New configuration check on ambiguous 'storage' specification #706
+ - New configuration check on problematic 'zonefile-load' with 'journal-contents' combination
+ - Server logs positive ACL check in debug severity level (Thanks to Andreas Schrägle)
+ - More verbose logging of failed zone backup
+ - Extended documentation for catalog zones
+
+Bugfixes:
+---------
+ - On-slave signing produces broken NSEC(3) chain if glue node becomes (un-)orphaned #705
+ - Server responds CNAME query with NXDOMAIN for CNAME synthesized from DNAME
+ - Kdig crashes if source address and dnstap logging are specified together #702
+ - Knotc fails to display error returned from zone freeze or zone thaw
+ - Dynamically reconfigured zone isn't loaded upon configuration commit
+ - Keymgr is unable to import BIND-style private key if it contains empty lines
+ - Zone backup fails to backup keys if any of them is public-only
+ - Failed to build with XDP support on Debian testing
+
+Knot DNS 3.0.2 (2020-11-11)
+===========================
+
+Features:
+---------
+ - kdig prints Extended DNS Error (Gift for Marek Vavruša)
+ - kxdpgun allows source IP address/subnet specification
+
+Improvements:
+-------------
+ - Server doesn't start if any of listen addresses fails to bind
+ - knotc no longer stores empty and adjacent identical commands to interactive history
+ - Depth of interactive history of knotc was increased to 1000 commands
+ - keymgr prints error messages to stderr instead of stdout
+ - keymgr checks for proper offline-ksk configuration before processing KSR or SKR
+ - keymgr imports Revoked timer from BIND keys
+ - Additional XDP support detection in server
+ - Lots of spelling and grammar fixes in documentation (Thanks to Paul Dee)
+ - Some documentation improvements
+
+Bugfixes:
+---------
+ - If more masters configured, zone retransfer triggers AXFR from all masters
+ - Server can fail to bind address during restart due to missing SO_REUSEADDR
+ - KSK imported from BIND doesn't roll over automatically
+ - libdnssec respects local GnuTLS policy — affects DNSSEC operations and Knot Resolver
+ - kdig can get stuck in an infinite loop when solving BADCOOKIE responses
+ - Zone names received over control interface are not lower-cased
+ - Zone attributes not secured with multi-threaded changes
+ - kzonecheck ignores forced dnssec checks if zone not signed
+ - kzonecheck fails on case-sensitivity of owner names in NSEC records #699
+ - kdig fails to establish TLS connection #700
+ - Server responds NOTIMPL to queries with QDCOUNT 0 and known OPCODE
+
+Knot DNS 3.0.1 (2020-10-10)
+===========================
+
+Features:
+---------
+ - New command in keymgr for validation of RRSIGs in SKR
+ - Keymgr validates RRSIGs in SKR during import
+ - New option in kzonecheck to skip DNSSEC-related checks
+
+Improvements:
+-------------
+ - Module noudp has new configuration option for UDP truncation rate
+ - Better detection of reproducible signing availability
+ - Kxdpgun allows setting of network interface
+ - Default control timeout in knotc was increased to 60 seconds
+ - DNSSEC validation searches for invalid redundant RRSIGs
+ - Configuration source detection no longer considers empty confdb directory as active configuration
+ - Zone backup preserves original zone file if zone file synchronization is disabled
+
+Bugfixes:
+---------
+ - NSEC3 re-salt can cause server crash due to possible zone inconsistencies
+ - Zone reload logs 'invalid parameter' if zone file not changed
+ - Outgoing multi-message transfer can contain invalid compression pointers under specific conditions
+ - Improper handling of file descriptors in libdnssec
+ - Server crashes if no policy is configured with DNSSEC validation
+ - Server crashes if DNSSEC validation is enabled for unsigned zone
+ - Failed to build with libnghttp2 (Thanks to Robert Edmonds)
+ - Various bugs in zone data backup/restore
+
+Knot DNS 3.0.0 (2020-09-09)
+===========================
+
+Features:
+---------
+ - High-performance networking mode using XDP sockets (requires Linux 4.18+)
+ - Support for Catalog zones including kcatalogprint utility
+ - New DNSSEC validation mode
+ - New kzonesign utility — an interface for manual DNSSEC signing
+ - New kxdpgun utility — high-performance DNS over UDP traffic generator for Linux
+ - DoH support in kdig using GnuTLS and libnghttp2
+ - New KSK revoked state (RFC 5011) in manual DNSSEC key management mode
+ - Deterministic signing with ECDSA algorithms (requires GnuTLS 3.6.10+)
+ - Module synthrecord supports reverse pointer shortening
+ - Safe persistent zone data backup and restore
+
+Improvements:
+-------------
+ - Processing depth of CNAME and DNAME chains is limited to 20
+ - Non-FQDN is allowed as 'update-owner-name' configuration option value
+ - Kdig prints detailed algorithm identifier for PRIVATEDNS and PRIVATEOID
+ in multiline mode #334
+ - Queries with QTYPE ANY or RRSIG are always responded with at most one random RRSet
+ - The statistics module has negligible performance overhead on modern CPUs
+ - If multithreaded zone signing is enabled, some additional zone maintenance
+ steps are newly parallelized
+ - ACL can be configured by reference to a remote
+ - Better CPU cache locality for higher query processing performance
+ - Logging to non-syslog streams contains timestamps with the timezone
+ - Keeping initial DNSKEY TTL and zone maximum TTL in KASP database to ensure
+ proper rollover timing in case of TTL changes during the rollover
+ - Responding FORMERR to queries with more OPT or TSIG records
+
+Bugfixes:
+---------
+ - Module onlinesign responds NXDOMAIN instead of NOERROR (NODATA) if DNSSEC not requested
+ - Outgoing multi-message transfer can contain invalid compression pointers under specific conditions
+
+Knot DNS 2.9.9 (2021-04-01)
+===========================
+
+Improvements:
+-------------
+ - keymgr: the SEP flag can be changed on already generated keys
+ - Some documentation improvements
+
+Bugfixes:
+---------
+ - knotd: journal contents can be considered malformed after changeset merge
+ - knotd: old key is being used after an algorithm rollover with a shared policy #721
+ - keymgr: keytag not recomputed upon key flag change
+ - kzonecheck: missing letter lower-casing of the origin parameter
+
+Knot DNS 2.9.8 (2020-12-15)
+===========================
+
+Bugfixes:
+---------
+ - On-slave signing produces broken NSEC(3) chain if glue node becomes (un-)orphaned #705
+ - If more masters configured, zone retransfer triggers AXFR from all masters
+ - KSK imported from BIND doesn't roll over automatically
+ - kzonecheck fails on case-sensitivity of owner names in NSEC records #699
+ - Server responds NOTIMPL to queries with QDCOUNT 0 and known OPCODE
+ - Kdig crashes if source address and dnstap logging are specified together #702
+ - Keymgr is unable to import BIND-style private key if it contains empty lines
+ - Knotc fails to display error returned from zone freeze or zone thaw
+
+Knot DNS 2.9.7 (2020-10-09)
+===========================
+
+Bugfixes:
+---------
+ - NSEC3 re-salt can cause server crash due to possible zone inconsistencies
+ - Zone reload logs 'invalid parameter' if zone file not changed
+ - Outgoing multi-message transfer can contain invalid compression pointers under specific conditions
+ - Improper handling of file descriptors in libdnssec
+
+Improvements:
+-------------
+ - Module noudp has new configuration option for UDP truncation rate
+
+Knot DNS 2.9.6 (2020-08-31)
+===========================
+
+Features:
+---------
+ - New kdig option '+[no]opttext' to print unknown EDNS options as text if possible (Thanks to Robert Edmonds)
+
+Improvements:
+-------------
+ - Better error message if no key is ready for submission
+ - Improved logging when master is not usable
+ - Improved control logging of zone-flush errors if output directory is specified
+ - More precise system error messages when a zone transfer fails
+ - Some documentation improvements (especially Offline KSK)
+
+Bugfixes:
+---------
+ - In the case of many zones, control operations over all zones take lots of memory
+ - Misleading error message on keymgr import-bind #683
+ - DS push is triggered upon every zone change even though CDS wasn't changed
+ - Kzonecheck performance penalty with passive keys #688
+ - CSK->KSK+ZSK scheme rollover can end too early
+
+Knot DNS 2.9.5 (2020-05-25)
+===========================
+
+Bugfixes:
+---------
+ - Old ZSK can be withdrawn too early during a ZSK rollover if maximum zone TTL
+ is computed automatically
+ - Server responds SERVFAIL to ANY queries on empty non-terminal nodes
+
+Improvements:
+-------------
+ - Also module onlinesign returns minimized responses to ANY queries
+ - Linking against libcap-ng can be disabled via a configure option
+
+Knot DNS 2.9.4 (2020-05-05)
+===========================
+
+Improvements:
+-------------
+ - ANY query over UDP is always answered with one RRSet + possible RRSIG instead
+ of truncated reply
+ - Server tries to resolve CNAME record generated by geoip module (Thanks to Conrad Hoffmann)
+ - Earlier OCSP validity check in kdig certificate verification (Thanks to Alexander Schultz)
+ - Module onlinesign allows KSK + ZSK mode
+ - Server control listen backlog limit was increased to 5
+ - Zone signing event is always re-scheduled even after a signing error
+ - Extended error checks and tiny enhancements in kjournalprint
+ - kdig logs a more detailed error message when failed to acquire a remote address
+ - Some documentation improvements
+
+Bugfixes:
+---------
+ - Server can crash when zone update fails due to exceeded zone size limit
+ - keymgr 'share' command doesn't work
+ - Shared KSK doesn't work with an initial key
+ - Self-created RRSIGs are still cryptographically verified in some unnecessary cases
+ - Changed NSEC3PARAM not correctly detected during zone update
+ - NSEC(3) chain not fixed if affected by zone update
+ - knotc orphan purge doesn't work on journal
+ - Online signing configured along with DNSSEC signing can cause MDB_BAD_RSLOT
+ error during server reload
+ - Zone journal access can get stuck if the zone serial is mismanaged
+ - Records concurrently added and removed within one DDNS message are not properly handled
+ - Zone check logs error instead of warning after the first error occurred
+
+Knot DNS 2.9.3 (2020-03-03)
+===========================
+
+Features:
+---------
+ - New configuration option 'remote.block-notify-after-transfer' to suppress
+ sending NOTIFY messages
+ - Enabled testing support for Ed448 DNSSEC algorithm (requires GnuTLS 3.6.12+
+ and not-yet-released Nettle 3.6+)
+ - New keymgr parameter 'local-serial' for getting/setting signed zone SOA serial
+ in the KASP database
+ - keymgr can import Ed25519 and Ed448 keys in the BIND format (Thanks to Conrad Hoffmann)
+
+Improvements:
+-------------
+ - kdig returns error if the query name is invalid
+ - Increased 'server.tcp-io-timeout' default value to 500 ms
+ - Decreased 'database.journal-db-max-size' default value to 512 MiB on 32-bit systems
+ - Server no longer falls back to AXFR if master is outdated during zone refresh
+ - Some documentation improvements (including new EPUB format and compatibility
+ with Ultra Electronics CIS Keyper Plus HSM)
+ - Some packaging improvements (including new python3-libknot deb package)
+
+Bugfixes:
+---------
+ - Outgoing IXFR can be malformed if the message has a specific size
+ - Server can crash if the zone contains solo NSEC3 record
+ - Improved compatibility with older journal format
+ - Incorrect SOA TTL in negative answers — SOA minimum not considered
+ - Cannot unset uppercase nodes via control interface #668
+ - Module RRL doesn't set AA flag and NOERROR rcode in slipped responses
+ - Server returns FORMERR instead of NOTIMP if empty QUESTION and unknown OPCODE
+
+Knot DNS 2.9.2 (2019-12-12)
+===========================
+
+Improvements:
+-------------
+ - Tiny ds-check log message rewording
+ - Some unnecessary code cleanup
+
+Bugfixes:
+---------
+ - ds-push doesn't replace the DS RRset on the parent #661
+ - Server gets stuck in a never-ending logging loop when changing SOA TTL
+ - Server can crash when the journal database size limit is reached
+ - Server can create a bogus changeset with equal serials from and to
+ - Unreasonable re-signing of the NSEC3PARAM record when reloading the zone
+ and 'zonefile-load: difference-no-serial' is configured
+ - SOA RRSIG not updated if the only changed record is SOA
+ - Failed to remove NSEC3 records through the control interface #666
+ - Failed to stop the server if a zone transaction is active
+
+Knot DNS 2.9.1 (2019-11-11)
+===========================
+
+Features:
+---------
+ - New option for OCSP stapling '+[no]tls-ocsp-stapling[=H]' in kdig (Thanks to Alexander Schultz)
+
+Improvements:
+-------------
+ - Kdig always randomizes source TCP port on recent Linux #575
+ - Server no longer warns about disabled zone file synchronization during shutdown
+ - Zone loading stops if failed to load zone from the journal
+ - Speed-up of insertion to big RRSets
+ - Various code and documentation improvements
+
+Bugfixes:
+---------
+ - Failed to apply journal changes after upgrade #659
+ - Failed to finish zone loading if journal changeset serials from and to are equal
+ - Incorrect handling of 0 value for 'tcp-io-timeout' and 'tcp-remote-io-timeout' configuration
+ - Server can crash if zone transaction is open during zone update
+ - NSEC3 chain not fully updated if NSEC3 salt changes during zone update
+ - Server can crash when flushing zone to a specified directory
+ - Server can respond incorrect NSEC3 records after NSEC3 salt change
+ - Delegation glue records not updated after specific zone change
+
+Knot DNS 2.9.0 (2019-10-10)
+===========================
+
+Features:
+---------
+ - Full support for different master/slave serial arithmetic when on-slave signing
+ - Module geoip newly supports wildcard records #650
+ - New DNSSEC policy configuration option 'rrsig-pre-refresh' for reducing
+ frequency of the zone signing event
+ - New server configuration option 'tcp-reuseport' for setting SO_REUSEPORT(_LB)
+ mode on TCP sockets
+ - New server configuration option 'tcp-io-timeout' [ms] for restricting inbound
+ IO operations over TCP #474
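+
+ An illustrative knot.conf fragment for the new options above; the values are
+ placeholders only ('tcp-io-timeout' is in milliseconds, as noted):
+
+   server:
+       tcp-reuseport: on
+       tcp-io-timeout: 500
+
+   policy:
+     - id: default
+       rrsig-pre-refresh: 1h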
+
+Improvements:
+-------------
+ - Significant speed-up of zone contents modifications
+ - Avoided double zone signing during CSK rollovers
+ - Self-created RRSIGs are not cryptographically verified if not necessary
+ - Zone journal can store two changesets if zone file difference computing
+   and DNSSEC signing are enabled: the first one contains the difference of
+   zone history needed by slave servers, the second one contains the difference
+   between the zone file and the zone, needed for server restart
+ - Universal and more robust memory clearing
+ - More precise socket timeout handling
+ - New notice log message for configuration changes requiring server restart
+ - Module RRL logs both trigger source address and affected subnet
+ - Various code (especially zone and TCP processing) and documentation improvements
+
+Bugfixes:
+---------
+ - RRSIGs are wrongly checked for inconsistent RRSet TTLs during zone update
+ - DS check/push warnings after disabled DNSSEC signing
+ - NSEC3 records not accessible through control interface
+ - Module geoip doesn't accept underscore character in dname specification #655
+
+Compatibility:
+--------------
+ - Removed runtime reconfiguration of network workers and interfaces since
+ it was imperfect and also couldn't work after dropped process privileges
+ - Removed inaccurate and misleading knotc command 'zone-memstats' because
+ memory consumption varies during zone modifications or transfers
+ - Removed useless 'zone.request-edns-option' configuration option
+ - Reimplemented DNS Cookies to be interoperable (based on draft-ietf-dnsop-server-cookies
+ and work by Witold Kręcicki)
+ - Default limit on TCP clients is auto-configured to one half of the file
+ descriptor limit for the server process
+ - Number of open files limit is set to 1048576 in upstream packages
+ - Default number of TCP workers is equal to the number of online CPUs or at least 10
+ - Default EDNS buffer size is 1232 for both IPv4 and IPv6
+ - Removed 'tcp-handshake-timeout' server configuration option
+ - Some configuration options were renamed and possibly moved. Old names will
+   be supported at least until the next major release (renaming sketch after this list):
+ - 'server.tcp-reply-timeout' [s] to 'server.tcp-remote-io-timeout' [ms]
+ - 'server.max-tcp-clients' to 'server.tcp-max-clients'
+ - 'server.max-udp-payload' to 'server.udp-max-payload'
+ - 'server.max-ipv4-udp-payload' to 'server.udp-max-payload-ipv4'
+ - 'server.max-ipv6-udp-payload' to 'server.udp-max-payload-ipv6'
+ - 'template.journal-db' to 'database.journal-db'
+ - 'template.journal-db-mode' to 'database.journal-db-mode'
+ - 'template.max-journal-db-size' to 'database.journal-db-max-size'
+ - 'template.kasp-db' to 'database.kasp-db'
+ - 'template.max-kasp-db-size' to 'database.kasp-db-max-size'
+ - 'template.timer-db' to 'database.timer-db'
+ - 'template.max-timer-db-size' to 'database.timer-db-max-size'
+ - 'zone.max-journal-usage' to 'zone.journal-max-usage'
+ - 'zone.max-journal-depth' to 'zone.journal-max-depth'
+ - 'zone.max-zone-size' to 'zone.zone-max-size'
+ - 'zone.max-refresh-interval' to 'zone.refresh-max-interval'
+ - 'zone.min-refresh-interval' to 'zone.refresh-min-interval'
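+
+ A sketch of the renaming applied to one server option from the list above;
+ note the unit change from seconds to milliseconds, the value 10 s being only
+ an example:
+
+   # 2.8.x name, still accepted for now:
+   server:
+       tcp-reply-timeout: 10            # seconds
+
+   # 2.9.0 name:
+   server:
+       tcp-remote-io-timeout: 10000     # milliseconds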
+
+Knot DNS 2.8.5 (2020-01-01)
+===========================
+
+Improvements:
+-------------
+ - Tiny ds-check log message rewording
+ - Various code and documentation improvements
+
+Bugfixes:
+---------
+ - RRSIGs are wrongly checked for inconsistent RRSet TTLs during zone update
+ - Server can crash when flushing zone to a specified directory
+ - ds-push doesn't replace the DS RRset on the parent #661
+ - Server gets stuck in a never-ending logging loop when changing SOA TTL
+ - Server can crash when the journal database size limit is reached
+ - Server can create a bogus changeset with equal serials from and to
+ - Server returns FORMERR instead of NOTIMP if empty QUESTION and unknown OPCODE
+
+Knot DNS 2.8.4 (2019-09-24)
+===========================
+
+Features:
+---------
+ - Automatic uploading of DS records to parent zone using DDNS,
+ see 'policy.ds-push' configuration option
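+
+ A tentative knot.conf sketch of the DS push feature above, assuming that
+ 'policy.ds-push' references a remote which accepts the DDNS updates; the
+ names and the address are placeholders:
+
+   remote:
+     - id: parent-primary
+       address: 192.0.2.1
+
+   policy:
+     - id: default
+       ds-push: parent-primary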
+
+Improvements:
+-------------
+ - Incoming IXFR no longer falls back to AXFR if connection error #642
+ - More accurate semantic checks for missing glue records
+ - Various code and documentation improvements
+
+Bugfixes:
+---------
+ - Failed to read/export configuration if 'acl.update-type' is set #651
+ - Failed to generate initial zero-length salt
+ - Missing error log for invalid rrtype input to dynamic configuration #652
+ - Missing error log when AXFR processing fails to store zone data
+ - Redundant notice log about unavailable persistent configuration DB
+ - Zone not flushed after retransfer if SOA serial not changed
+ - Zone contents not properly fixed during zone transfers
+ - No changeset created for updated rrset's TTL if changed by RR addition
+
+Knot DNS 2.8.3 (2019-07-16)
+===========================
+
+Features:
+---------
+ - Added cert/key file configuration for TLS in kdig (Thanks to Alexander Schultz)
+
+Improvements:
+-------------
+ - More verbose log message for offline-KSK signing
+ - Module RRL logs affected source address subnet instead of only one source address
+ - Extended DNSSEC policy configuration checks
+ - Various improvements in the documentation
+
+Bugfixes:
+---------
+ - Excessive server load when maximum TCP clients limit is reached
+ - Incorrect reply after zone update with a node changed from non-authoritative to delegation
+ - Wrong error line number in a config file if it contains leading tab character
+ - Config file error message contains unrelated parsing context
+ - NSEC3 salt not updated when reconfigured to zero length
+ - Kjournalprint sometimes prints a random value for per-zone occupation
+ - Missing debug log for failed zone refresh triggered by zone notification
+ - DS check not scheduled when reconfigured
+ - Broken unit test on NetBSD 8.x
+
+Knot DNS 2.8.2 (2019-06-05)
+===========================
+
+Features:
+---------
+ - New blocking mode for zone event triggers in knotc
+ - New weighted records mode in the module geoip (Thanks to Conrad Hoffmann)
+ - Module noudp allows UDP allow rate configuration
+
+Improvements:
+-------------
+ - NSEC3 salt lifetime can be set to infinity
+ - New 'running' zone event status in the knotc output
+ - Knotc in the forced mode returns failure also if zone check emits any warning
+ - Ignoring PMTU information for IPv4/UDP via IP_PMTUDISC_OMIT (Thanks to Daisuke Higashi)
+ - Various improvements in the documentation
+
+Bugfixes:
+---------
+ - Broken setting of CPU affinity for UDP workers
+ - Unexpected results with the geoip subnet mode
+ - Sometimes insufficient zone adjusting
+ - Incoherent DNSKEY RRSIG lifetimes in SKR
+ - Confusing output from keymgr if an error occurs during KSR generation
+ - Non-functional changeset history depth limitation in kjournalprint
+ - Wrong processing of multiple $INCLUDE directives #646
+
+Knot DNS 2.8.1 (2019-04-09)
+===========================
+
+Improvements:
+-------------
+ - Possible zone transaction is aborted by zone events to avoid inconsistency
+ - Added log message if no persistent config DB is available during 'conf-begin'
+ - New environment setting 'KNOT_VERSION_FORMAT=release' for extended version suppression
+ - Various improvements in the documentation
+
+Bugfixes:
+---------
+ - Broken NSEC3-wildcard-nonexistence proof after NSEC3 re-salt
+ - Glue records under delegation are sometimes signed
+ - RRL doesn't work correctly on big-endian architectures
+ - NSEC3 not re-salted during AXFR refresh
+ - Failed to sign new zone contents if added dynamically #641
+ - NSEC3 opt-out signing doesn't work in some cases
+ - Broken NSEC3 chain after adding new sub-delegations
+ - Redundant SOA RRSIG on slave if RRSIG TTL changed on master
+ - Sometimes confusing log error message for NOTIFY event
+ - Improper include for LMDB #638
+
+Knot DNS 2.8.0 (2019-03-05)
+===========================
+
+Features:
+---------
+ - New offline-KSK mode of operation
+ - Configurable multithreaded DNSSEC signing for large zones
+ - Extended ACL configuration for dynamic updates
+ - New knotc trigger 'zone-key-rollover' for immediate DNSKEY rollover
+ - Added support for OPENPGPKEY, CSYNC, SMIMEA, and ZONEMD RR types
+ - New 'double-ds' option for CDS/CDNSKEY publication
+
+Improvements:
+-------------
+ - Significant speed-up of zone updates
+ - Knotc supports force option in the interactive mode
+ - Copy-on-write support for QP-trie (Thanks to Tony Finch)
+ - Unified and more efficient LMDB layer for journal, timer, and KASP databases
+ - DS check event is re-planned according to KASP even when timers were purged
+ - Module DNS Cookies supports explicit Server Secret configuration
+ - Zone mtime is verified against full-precision timestamp (Thanks to Daniel Kahn Gillmor)
+ - Extended logging (loaded SOA serials, refresh duration, tiny cleanup)
+ - Relaxed fixed-length condition for DNSSEC key ID
+ - Extended semantic checks for DNAME and NS RR types
+ - Added support for FreeBSD's SO_REUSEPORT_LB
+ - Improved performance of geoip module
+ - Various improvements in the documentation
+
+Compatibility:
+--------------
+ - Changed configuration default for 'cds-cdnskey-publish' to 'rollover'
+ - Journal DB format changes are not downgrade-compatible
+ - Keymgr no longer prints DS for algorithm SHA-1
+
+Knot DNS 2.7.8 (2019-07-16)
+===========================
+
+Improvements:
+-------------
+ - Various improvements in the documentation
+
+Bugfixes:
+---------
+ - Excessive server load when maximum TCP clients limit is reached
+ - Incorrect reply after zone update with a node changed from non-authoritative to delegation
+ - Missing debug log for failed zone refresh triggered by zone notification
+ - Wrong processing of multiple $INCLUDE directives #646
+ - Broken unit test on NetBSD 8.x
+
+Knot DNS 2.7.7 (2019-04-15)
+===========================
+
+Improvements:
+-------------
+ - Possible zone transaction is aborted by zone events to avoid inconsistency
+ - Added log message if no persistent config DB is available during 'conf-begin'
+ - Tiny building improvements
+
+Bugfixes:
+---------
+ - Glue records under delegation are sometimes signed
+ - NSEC3 not re-salted during AXFR refresh
+ - Broken NSEC3 chain after adding new sub-delegations
+ - Failed to sign new zone contents if added dynamically #641
+ - NSEC3 opt-out signing doesn't work in some cases
+ - Redundant SOA RRSIG on slave if RRSIG TTL changed on master
+ - Sometimes confusing log error message for NOTIFY event
+ - Failed to explicitly set value 0 for submission timeout
+
+Knot DNS 2.7.6 (2019-01-23)
+===========================
+
+Improvements:
+-------------
+ - Zone status also shows when the zone load is scheduled
+ - Server workers status also shows background workers utilization
+ - Default control timeout for knotc was increased to 10 seconds
+ - Pkg-config files contain auxiliary variable with library filename
+
+Bugfixes:
+---------
+ - Configuration commit or server reload can drop some pending zone events
+ - Nonempty zone journal is created even though it's disabled #635
+ - Zone is completely re-signed during empty dynamic update processing
+ - Server can crash when storing a big zone difference to the journal
+ - Failed to link on FreeBSD 12 with Clang
+
+Knot DNS 2.7.5 (2019-01-07)
+===========================
+
+Features:
+---------
+ - Keymgr supports NSEC3 salt handling
+
+Improvements:
+-------------
+ - Zone history in journal is dropped upon AXFR-like zone update
+ - Libdnssec is no longer linked against libm #628
+ - Libdnssec is explicitly linked against libpthread if PKCS #11 enabled #629
+ - Better support for libknot packaging in Python
+ - Manually generated KSK is 'ready' by default
+ - Kdig supports '+timeout' as an alias for '+time'
+ - Kdig supports '+nocomments' option
+ - Kdig no longer prints empty lines between retries
+ - Kdig returns failure if operations not successfully resolved #632
+ - Fixed repeating of the 'KSK submission, waiting for confirmation' log
+ - Various improvements in documentation, Dockerfile, and tests
+
+Bugfixes:
+---------
+ - Knotc fails to unset huge configuration section
+ - Kjournalprint sometimes fails to display zone journal content
+ - Improper timing of ZSK removal during ZSK rollover
+ - Missing UTC time zone indication in the 'iso' keymgr list output
+ - A race condition in the online signing module
+
+Knot DNS 2.7.4 (2018-11-13)
+===========================
+
+Features:
+---------
+ - Added SNI configuration for TLS in kdig (Thanks to Alexander Schultz)
+
+Improvements:
+-------------
+ - Added warning log when DNSSEC events not successfully scheduled
+ - New semantic check on timer values in keymgr
+ - DS query no longer asks other addresses if got a negative answer
+ - Reintroduced 'rollover' configuration option for CDS/CDNSKEY publication
+ - Extended logging for zone loading
+ - Various documentation improvements
+
+Bugfixes:
+---------
+ - Failed to import module configuration #613
+ - Improper Cflags value in libknot.pc if built with embedded LMDB #615
+ - IXFR doesn't fall back to AXFR if malformed reply
+ - DNSSEC events not correctly scheduled for empty zone updates
+ - During algorithm rollover old keys get removed before DS TTL expires #617
+ - Zone's maximum RRSIG TTL not considered during algorithm rollover #620
+
+Knot DNS 2.7.3 (2018-10-11)
+===========================
+
+Features:
+---------
+ - New queryacl module for query access control
+ - Configurable answer rrset rotation #612
+ - Configurable NSEC bitmap in online signing
+
+Improvements:
+-------------
+ - Better error logging for KASP DB operations #601
+ - Some documentation improvements
+
+Bugfixes:
+---------
+ - Keymgr "list" output doesn't show key size for ECDSA algorithms #602
+ - Failed to link statically with embedded LMDB
+ - Configuration commit causes zone reload for all zones
+ - The statistics module overlooks TSIG record in a request
+ - Improper processing of an AXFR-style-IXFR response consisting of one-record messages
+ - Race condition in online signing during key rollover #600
+ - Server can crash if geoip module is enabled in the geo mode
+
+Knot DNS 2.7.2 (2018-08-29)
+===========================
+
+Improvements:
+-------------
+ - Keymgr list command displays also key size
+ - Kjournalprint displays total occupied size in the debug mode
+ - Server doesn't stop if failed to load a shared module from the module directory
+ - Libraries libcap-ng, pthread, and dl are linked selectively if needed
+
+Bugfixes:
+---------
+ - Sometimes incorrect result from dnssec_nsec_bitmap_contains (libdnssec)
+ - Server can crash when loading zone file difference and zone-in-journal is set
+ - Incorrect treatment of specific queries in the module RRL
+ - Failed to link module Cookies as a shared library
+
+Knot DNS 2.7.1 (2018-08-14)
+===========================
+
+Improvements:
+-------------
+ - Added zone wire size information to zone loading log message
+ - Added debug log message for each unsuccessful remote address operation
+ - Various improvements for packaging
+
+Bugfixes:
+---------
+ - Incompatible handling of RRSIG TTL value when creating a DNS message
+ - Incorrect RRSIG TTL value in zone differences and knotc zone operation outputs
+ - Default configure prefix is ignored
+
+Knot DNS 2.7.0 (2018-08-03)
+===========================
+
+Features:
+---------
+ - New DNS Cookies module and related '+cookie' kdig option
+ - New module for response tailoring according to client's subnet or geographic location
+ - General EDNS Client Subnet support in the server
+ - OSS-Fuzz integration (Thanks to Jonathan Foote)
+ - New '+ednsopt' kdig option (Thanks to Jan Včelák)
+ - Online Signing support for automatic key rollover
+ - Non-normal file (e.g. pipe) loading support in zscanner #542
+ - Automatic SOA serial incrementation if non-empty zone difference
+ - New zone file load option for ignoring zone file's SOA serial
+ - New build-time option for alternative malloc specification
+ - Structured logging for DNSSEC key submission event
+ - Empty QNAME support in kdig
+
+Improvements:
+-------------
+ - Various library and server optimizations
+ - Reduced memory consumption of outgoing IXFR processing
+ - Linux capabilities use overhaul #546 (Thanks to Robert Edmonds)
+ - Online Signing properly signs delegations and CNAME records
+ - CDS/CDNSKEY rrset is signed with KSK instead of ZSK
+ - DNSSEC-related records are ignored when loading zone difference with signing enabled
+ - Minimum allowed RSA key length was increased to 1024
+ - Removed explicit dependency on Nettle
+
+Bugfixes:
+---------
+ - Possible uninitialized address buffer use in zscanner
+ - Possible index overflow during multiline record parsing in zscanner
+ - kdig +tls sometimes consumes 100 % CPU #561
+ - Single-Type Signing doesn't work with single ZSK key #566
+ - Zone not flushed after re-signing during zone load #594
+ - Server crashes when committing empty zone transaction
+ - Incoming IXFR with on-slave signing sometimes leads to memory corruption #595
+
+Compatibility:
+--------------
+ - Removed obsolete RRL configuration
+ - Removed obsolete module names 'mod-online-sign' and 'mod-synth-record'
+ - Removed obsolete 'ixfr-from-differences' configuration option
+ - Removed old journal migration
+ - Removed module rosedb
+
+Knot DNS 2.6.9 (2018-08-14)
+===========================
+
+Improvements:
+-------------
+ - Added zone wire size to zone loading log message
+ - Added debug log message for each unsuccessful remote address operation
+
+Bugfixes:
+---------
+ - Zone not flushed after re-signing during zone load #594
+ - Server crashes when committing empty zone transaction
+ - Incoming IXFR with on-slave signing sometimes leads to memory corruption #595
+
+Knot DNS 2.6.8 (2018-07-10)
+===========================
+
+Features:
+---------
+ - New 'import-pkcs11' command in keymgr
+
+Improvements:
+-------------
+ - Unixtime serial policy mimics BIND: increment if lower #593
+
+Bugfixes:
+---------
+ - Creeping memory consumption upon server reload #584
+ - Kdig incorrectly detects QNAME if 'notify' is a prefix
+ - Server crashes when zone sign fails #587
+ - CSK->KSK+ZSK rollover retires CSK early #588
+ - Server crashes when zone expires during outgoing multi-message transfer
+ - Kjournalprint doesn't convert zone name argument to lower-case
+ - Cannot switch to a previously used ksk-shared dnssec policy #589
+
+Knot DNS 2.6.7 (2018-05-17)
+===========================
+
+Features:
+---------
+ - Added 'dateserial' (YYYYMMDDnn) serial policy configuration (Thanks to Wolfgang Jung)
+
+Improvements:
+-------------
+ - Trailing data indication from the packet parser (libknot)
+ - Better configuration check for a problematic option combination
+
+Bugfixes:
+---------
+ - Incomplete configuration option item name check
+ - Possible buffer overflow in 'knot_dname_to_str' (libknot)
+ - Module dnsproxy doesn't preserve letter case of QNAME
+ - Module dnsproxy duplicates OPT and TSIG in the non-fallback mode
+
+Knot DNS 2.6.6 (2018-04-11)
+===========================
+
+Features:
+---------
+ - New EDNS option counters in the statistics module
+ - New '+orphan' filter for the 'zone-purge' operation
+
+Improvements:
+-------------
+ - Reduced memory consumption of disabled statistics metrics
+ - Some spelling fixes (Thanks to Daniel Kahn Gillmor)
+ - Server no longer fails to start if MODULE_DIR doesn't exist
+ - Configuration include doesn't fail on an empty wildcard match
+ - Added a configuration check for a problematic option combination
+
+Bugfixes:
+---------
+ - NSEC3 chain not re-created when SOA minimum TTL changed
+ - Failed to start server if no template is configured
+ - Possibly incorrect SOA serial upon changed zone reload with DNSSEC signing
+ - Inaccurate outgoing zone transfer size in the log message
+ - Invalid dname compression if empty question section
+ - Missing EDNS in EMALF responses
+
+Knot DNS 2.6.5 (2018-02-12)
+===========================
+
+Features:
+---------
+ - New 'zone-notify' command in knotc
+ - Kdig uses '@server' as a hostname for TLS authentication if '+tls-ca' is set
+
+Improvements:
+-------------
+ - Better heap memory trimming for zone operations
+ - Added proper polling for TLS operations in kdig
+ - Configuration export uses stdout as a default output
+ - Simplified detection of atomic operations
+ - Added '--disable-modules' configure option
+ - Small documentation updates
+
+Bugfixes:
+---------
+ - Zone retransfer doesn't work well if multiple masters are configured
+ - Kdig can leak or double-free memory in corner cases
+ - Inconsistent error outputs from dynamic configuration operations
+ - Failed to generate documentation on OpenBSD
+
+Knot DNS 2.6.4 (2018-01-02)
+===========================
+
+Features:
+---------
+ - Module synthrecord allows multiple 'network' specification
+ - New CSK handling support in keymgr
+
+Improvements:
+-------------
+ - Allowed configuration for infinite zsk lifetime
+ - Increased performance and security of the module synthrecord
+ - Signing changeset is stored into journal even if 'zonefile-load' is whole
+
+Bugfixes:
+---------
+ - Unintentional zone re-sign during reload if empty NSEC3 salt
+ - Inconsistent zone names in journald structured logs
+ - Malformed outgoing transfer for big zone with TSIG
+ - Some minor DNSSEC-related issues
+
+Knot DNS 2.6.3 (2017-11-24)
+===========================
+
+Bugfixes:
+---------
+ - Wrong detection of signing scheme rollover
+
+Knot DNS 2.6.2 (2017-11-23)
+===========================
+
+Features:
+---------
+ - CSK algorithm rollover and (KSK, ZSK) <-> CSK rollover support
+
+Improvements:
+-------------
+ - Allowed explicit configuration for infinite ksk lifetime
+ - Proper error messages instead of unclear error codes in server log
+ - Better support for old compilers
+
+Bugfixes:
+---------
+ - Unexpected reply for DS query with an owner below a delegation point
+ - Old dependencies in the pkg-config file
+
+Knot DNS 2.6.1 (2017-11-02)
+===========================
+
+Features:
+---------
+ - NSEC3 Opt-Out support in the DNSSEC signing
+ - New CDS/CDNSKEY publish configuration option
+
+Improvements:
+-------------
+ - Simplified DNSSEC log message with DNSKEY details
+ - +tls-hostname in kdig implies +tls-ca if neither +tls-ca nor +tls-pin is given
+ - New documentation sections for DNSSEC key rollovers and shared keys
+ - Keymgr no longer prints useless algorithm number for generated key
+ - Kdig prints unknown RCODE in a numeric format
+ - Better support for LLVM libFuzzer
+
+Bugfixes:
+---------
+ - Faulty DNAME semantic check if present in the zone apex and NSEC3 is used
+ - Immediate zone flush not scheduled during the zone load event
+ - Server crashes upon dynamic zone addition if a query module is loaded
+ - Kdig fails to connect over TLS because SNI is set to the server IP address
+ - Possible out-of-bounds memory access at the end of the input
+ - TCP Fast Open enabled by default in kdig breaks TLS connection
+
+Knot DNS 2.6.0 (2017-09-29)
+===========================
+
+Features:
+---------
+ - On-slave (inline) signing support
+ - Automatic DNSSEC key algorithm rollover
+ - Ed25519 algorithm support in DNSSEC (requires GnuTLS 3.6.0)
+ - New 'journal-content' and 'zonefile-load' configuration options
+ - keymgr tries to run as user/group set in the configuration
+ - Public-only DNSSEC key import into KASP DB via keymgr
+ - NSEC3 resalt and parent DS query events are persistent in timer DB
+ - New processing state for a response suppression within a query module
+ - Enabled server side TCP Fast Open if supported
+ - TCP Fast Open support in kdig
+
+Improvements:
+-------------
+ - Better record owner compression if related to the previous rdata dname
+ - NSEC(3) chain is no longer recomputed whole on every update
+ - Remove inconsistent and unnecessary quoting in log files
+ - Avoid overlapping key rollovers running at the same time
+ - More DNSSEC-related semantic checks
+ - Extended timestamp format in keymgr
+
+Bugfixes:
+---------
+ - Incorrect journal free space computation causing inefficient space handling
+ - Interface-automatic broken on Linux in the presence of asymmetric routing
+
+Knot DNS 2.5.7 (2018-01-02)
+===========================
+
+Bugfixes:
+---------
+ - Unintentional zone re-sign during reload if empty NSEC3 salt
+ - Inconsistent zone names in journald structured logs
+ - Malformed outgoing transfer for big zone with TSIG
+ - Unexpected reply for DS query with an owner below a delegation point
+ - Old dependencies in the pkg-config file
+
+Knot DNS 2.5.6 (2017-11-02)
+===========================
+
+Improvements:
+-------------
+ - Keymgr no longer prints useless algorithm number for generated key
+
+Bugfixes:
+---------
+ - Faulty DNAME semantic check if present in the zone apex and NSEC3 is used
+ - Immediate zone flush not scheduled during the zone load event
+ - Server crashes upon dynamic zone addition if a query module is loaded
+ - Kdig fails to connect over TLS because SNI is set to the server IP address
+
+Knot DNS 2.5.5 (2017-09-29)
+===========================
+
+Improvements:
+-------------
+ - Constant time memory comparison in the TSIG processing
+ - Proper use of the ctype functions
+ - Generated RRSIG records have inception time 90 minutes in the past
+
+Bugfixes:
+---------
+ - Incorrect online signature for NSEC in the case of a CNAME record
+ - Incorrect timestamps in dnstap records
+ - EDNS Client Subnet validation rejects valid payloads
+ - Module configuration semantic checks are not executed
+ - Kzonecheck segfaults with unusual inputs
+
+Knot DNS 2.5.4 (2017-08-31)
+===========================
+
+Improvements:
+-------------
+ - New minimum and maximum refresh interval config options (Thanks to Manabu Sonoda)
+ - New warning when unforced flush with disabled zone file synchronization
+ - New 'dnskey' keymgr command
+ - Linking with libatomic on architectures that require it (Thanks to Pierre-Olivier Mercier)
+ - Removed 'OK' from listing keymgr command outputs
+ - Extended journal and keymgr documentation and logging
+
+Bugfixes:
+---------
+ - Incorrect handling of specific corner-cases with zone-in-journal
+ - The 'share' keymgr command doesn't work
+ - Server crashes if configured with query-size and reply-size statistics options
+ - Malformed big integer configuration values on some 32-bit platforms
+ - Keymgr uses local time when parsing date inputs
+ - Memory leak in kdig upon IXFR query
+
+Knot DNS 2.5.3 (2017-07-14)
+===========================
+
+Features:
+---------
+ - CSK rollover support for Single-Type Signing Scheme
+
+Improvements:
+-------------
+ - Allowed binding to non-local addresses for TCP (Thanks to Julian Brost!)
+ - New documentation section for manual DNSSEC key algorithm rollover
+ - Initial KSK also generated in the submission state
+ - The 'ds' keymgr command with no parameter uses all KSK keys
+ - New debug mode in kjournalprint
+ - Updated keymgr documentation
+
+Bugfixes:
+---------
+ - Sometimes missing RRSIG by KSK in submission state.
+ - Minor DNSSEC-related issues
+
+Knot DNS 2.5.2 (2017-06-23)
+===========================
+
+Security:
+---------
+ - CVE-2017-11104: Improper TSIG validity period check can allow TSIG forgery (Thanks to Synacktiv!)
+
+Improvements:
+-------------
+ - Extended debug logging for TSIG errors
+ - Better error message for unknown module section in the configuration
+ - Module documentation compilation no longer depends on module configuration
+ - Extended policy section configuration semantic checks
+ - Improved python version compatibility in pykeymgr
+ - Extended migration section in the documentation
+ - Improved DNSSEC event timing on 32-bit systems
+ - New KSK rollover start log info message
+ - NULL qtype support in kdig
+
+Bugfixes:
+---------
+ - Failed to process included configuration
+ - dnskey_ttl policy option in the configuration has no effect on DNSKEY TTL
+ - Corner case journal fixes (huge changesets, OpenWRT operation)
+ - Confusing event timestamps in knotc zone-status output
+ - NSEC/NSEC3 bitmap not updated for CDS/CDNSKEY
+ - CDS/CDNSKEY RRSIG not updated
+
+Knot DNS 2.5.1 (2017-06-07)
+===========================
+
+Bugfixes:
+---------
+ - pykeymgr no longer crashes on empty JSON files in the KASP DB directory
+ - pykeymgr no longer imports keys in the "removed" state
+ - Imported keys in the "removed" state no longer make knotd crash
+ - Including an empty configuration directory no longer makes knotd crash
+ - pykeymgr is now included in the distribution tarball and installed
+
+Knot DNS 2.5.0 (2017-06-05)
+===========================
+
+Features:
+---------
+ - KASP database switched from JSON files to LMDB database
+ - KSK rollover support using CDNSKEY and CDS in the automatic DNSSEC signing
+ - Dynamic module loading support with proper module API
+ - Journal can store full zone contents (not only differences)
+ - Zone freeze/thaw support
+ - Updated knotc zone-status output with optional column filters
+ - New '[no]crypto' option in kdig
+ - New keymgr implementation reflecting KASP database changes
+ - New pykeymgr for JSON-based KASP database migration
+ - Removed obsolete knot1to2 utility
+
+Improvements:
+-------------
+ - Added libidn2 support to kdig (with libidn fallback)
+ - Maximum timer database setting moved from configure time to the server configuration
+
+Knot DNS 2.4.4 (2017-06-05)
+===========================
+
+Improvements:
+-------------
+ - Improved error handling in kjournalprint
+
+Bugfixes:
+---------
+ - Zone flush not replanned upon unsuccessful flush
+ - Journal inconsistency after deleting deleted zone
+ - Zone events not rescheduled upon server reload (Thanks to Mark Warren)
+ - Unreliable LMDB mapsize detection in kjournalprint
+ - Some minor issues found by AddressSanitizer
+
+Knot DNS 2.4.3 (2017-04-11)
+===========================
+
+Improvements:
+-------------
+ - New 'journal-db-mode' optimization configuration option
+ - The default TSIG algorithm for utilities input is HMAC-SHA256
+ - Implemented sensible default EDNS(0) padding policy (Thanks to D. K. Gillmor)
+ - Added some more semantic checks on the knotc configuration operations
+
+Bugfixes:
+---------
+ - Missing 'zone' keyword in the YAML output
+ - Missing trailing dot in the keymgr DS owner output
+ - Journal logs 'invalid parameter' in several cases
+ - Some minor journal-related problems
+
+Knot DNS 2.4.2 (2017-03-23)
+===========================
+
+Features:
+---------
+ - Zscanner can store record comments placed on the same line
+ - Knotc status extension with version, configure, and workers parameters
+
+Improvements:
+-------------
+ - Significant incoming XFR speed-up in the case of many zones
+
+Bugfixes:
+---------
+ - Double OPT RR insertion when a global module returns KNOT_STATE_FAIL
+ - User-driven zscanner parsing logic inconsistency
+ - Lower serial at master doesn't trigger any errors
+ - Queries with too long DNAME substitution do not return YXDOMAIN response
+ - Incorrect elapsed time in the DDNS log
+ - Failed to process forwarded DDNS request with TSIG
+
+Knot DNS 2.4.1 (2017-02-10)
+===========================
+
+Improvements:
+-------------
+ - Speed-up of rdata addition into a huge rrset
+ - Introduce check of minimum timeout for next refresh
+ - Dnsproxy module can forward all queries without local resolving
+
+Bugfixes:
+---------
+ - Transfer of a huge rrset goes into an infinite loop
+ - Huge response over TCP contains useless TC bit instead of SERVFAIL
+ - Failed to build utilities with disabled daemon
+ - Memory leaks during keys removal
+ - Rough TSIG packet reservation causes early truncation
+ - Minor out-of-bounds string termination write in rrset dump
+ - Server crash during stop if failed to open timers DB
+ - Failed to compile on OS X older than Sierra
+ - Poor minimum UDP-max-size configuration check
+ - Failed to receive one-record-per-message IXFR-style AXFR
+ - Kdig timeouts when receiving RCODE != NOERROR on subsequent transfer message
+
+Knot DNS 2.4.0 (2017-01-18)
+===========================
+
+Bugfixes:
+---------
+ - False positive semantic-check warning about invalid bitmap in NSEC
+ - Unnecessary SOA queries upon notify with up to date serial
+ - Timers for expired zones are reset on reload
+ - Zone doesn't expire when the server is down
+ - Failed to handle keys with duplicate keytags
+ - Per zone module and global module inconsistency
+ - Obsolete online signing module configuration
+ - Malformed output from kjournalprint
+ - Redundant SO_REUSEPORT activation on the TCP socket
+ - Failed to use higher number of background workers
+
+Improvements:
+-------------
+ - Lower memory consumption with qp-trie
+ - Zone events and zone timers improvements
+ - Print all zone names in the FQDN format
+ - Simplified query module interface
+ - Shared TCP connection between SOA query and transfer
+ - Response Rate Limiting as a module with statistics support
+ - Key filters in keymgr
+
+Features:
+---------
+ - New unified LMDB-based zone journal
+ - Server statistics support
+ - New statistics module for traffic measuring
+ - Automatic deletion of retired DNSSEC keys
+ - New control logging category
+
+Knot DNS 2.3.4 (2017-11-20)
+===========================
+
+Security:
+---------
+ - CVE-2017-11104: Improper TSIG validity period check can allow TSIG forgery (Thanks to Synacktiv!)
+
+Bugfixes:
+---------
+ - Unexpected response for DS query below delegation point
+ - Zone events not rescheduled upon server reload (Thanks to Mark Warren)
+ - Missing trailing dot in the keymgr DS owner output
+ - Malformed output from kjournalprint
+ - Redundant SO_REUSEPORT activation on the TCP socket
+
+Knot DNS 2.3.3 (2016-12-08)
+===========================
+
+Bugfixes:
+---------
+ - Double free when failed to apply zone journal
+ - Zone bootstrap retry interval not preserved upon zone reload
+ - DNSSEC related records not flushed if not signed
+ - False semantic checks warning about incorrect type in NSEC bitmap
+ - Memory leak in kzonecheck
+
+Improvements:
+-------------
+ - All zone names are fully-qualified in log
+
+Features:
+---------
+ - New kjournalprint utility
+
+Knot DNS 2.3.2 (2016-11-04)
+===========================
+
+Bugfixes:
+---------
+ - Incorrect %s expansion for the root zone
+ - Failed to refresh a non-existent slave zone after restart
+ - Immediate zone refresh upon restart if refresh already scheduled
+ - Early zone transfer after restart if transfer already scheduled
+ - Not ignoring empty non-terminal parents during delegation lookup
+ - CD bit preservation in responses
+ - Compilation error on GNU/kFreeBSD
+ - Server crash after double zone-commit if journal error
+
+Improvements:
+-------------
+ - Speed-up of knotc if control operation and known socket
+ - Zone purge operation purges also zone timers
+
+Features:
+---------
+ - Simple modules don't require empty configuration section
+ - New zone journal path configuration option
+ - New timeout configuration option for module dnsproxy
+
+Knot DNS 2.3.1 (2016-10-07)
+===========================
+
+Bugfixes:
+---------
+ - Missing glue records in some responses
+ - Knsupdate prompt printing on non-terminal
+ - Mismatch between configuration policy item names and documentation
+ - Segfault on OS X (Sierra)
+
+Improvements:
+-------------
+ - Significant speed-up of conf-commit and conf-diff operations (in most cases)
+ - New EDNS Client Subnet libknot API
+ - Better semantic-checks error messages
+
+Features:
+---------
+ - Print TLS certificate hierarchy in kdig verbose mode
+ - New +subnet alias for +client
+ - New mod-whoami and mod-noudp modules
+ - New zone-purge control command
+ - New log-queries and log-responses options for mod-dnstap
+
+Knot DNS 2.3.0 (2016-08-09)
+===========================
+
+Bugfixes:
+---------
+ - No wildcard expansion below empty non-terminal for NSEC signed zone
+ - Avoid multiple loads of the same PKCS #11 module
+ - Fix kdig IXFR response processing if the transfer content is empty
+ - Don't ignore non-existing records to be removed in IXFR
+
+Improvements:
+-------------
+ - Refactored semantic checks and improved error messages
+ - Set TC flag in delegation only if mandatory glue doesn't fit the response
+ - Separate EDNS(0) payload size configuration for IPv4 and IPv6
+
+Features:
+---------
+ - DNSSEC policy can be defined in server configuration
+ - Automatic NSEC3 resalt according to DNSSEC policy
+ - Zone content editing using control interface
+ - Zone size limit restriction for DDNS, AXFR, and IXFR (CVE-2016-6171)
+ - DNS-over-TLS support in kdig (RFC 7858)
+ - EDNS(0) padding and alignment support in kdig (RFC 7830)
+
+Knot DNS 2.2.1 (2016-05-24)
+===========================
+
+Bugfixes:
+---------
+ - Fix separate logging of server and zone events
+ - Fix concurrent zone file flushing with many zones
+ - Fix possible server crash with empty hostname on OpenWRT
+ - Fix control timeout parsing in knotc
+ - Fix "Environment maxreaders limit reached" error in knotc
+ - Don't apply journal changes on modified zone file
+ - Remove broken LTO option from configure script
+ - Enable multiple zone names completion in interactive knotc
+ - Set the TC flag in a response if a glue doesn't fit the response
+ - Disallow server reload when there is an active configuration transaction
+
+Improvements:
+-------------
+ - Distinguish unavailable zones from zones with zero serial in log messages
+ - Log warning and error messages to standard error output in all utilities
+ - Document tested PKCS #11 devices
+ - Extended Python configuration interface
+
+Knot DNS 2.2.0 (2016-04-26)
+===========================
+
+Bugfixes:
+---------
+ - Fix build dependencies on FreeBSD
+ - Fix query/response message type setting in dnstap module
+ - Fix remote address retrieval from dnstap capture in kdig
+ - Fix global modules execution for queries hitting existing zones
+ - Fix execution of semantic checks after an IXFR transfer
+ - Fix PKCS#11 support detection at build time
+ - Fix kdig failure when the first AXFR message contains just the SOA record
+ - Exclude non-authoritative types from NSEC/NSEC3 bitmap at a delegation
+ - Mark PKCS#11 generated keys as sensitive (required by Luna SA)
+ - Fix error when removing the only zone from the server
+ - Don't abort knotc transaction when some check fails
+
+Features:
+---------
+ - URI and CAA resource record types support
+ - RRL client address based white list
+ - knotc interactive mode
+
+Improvements:
+-------------
+ - Consistent IXFR error messages
+ - Various fixes for better compatibility with PKCS#11 devices
+ - Various keymgr user interface improvements
+ - Better zone event scheduler performance with many zones
+ - New server control interface
+ - kdig uses local resolver if resolv.conf is empty
+
+Knot DNS 2.1.1 (2016-02-10)
+===========================
+
+Bugfixes:
+---------
+ - DNSSEC: Allow import of duplicate private key into the KASP
+ - DNSSEC: Avoid duplicate NSEC for Wildcard No Data answer
+ - Fix server crash when an incoming transfer is in progress and reload is issued
+ - Fix socket polling when configured with many interfaces and threads
+ - Fix compilation against Nettle 3.2
+
+Improvements:
+-------------
+ - Select correct source address for UDP messages received on ANY address
+ - Extend documentation of knotc commands
+
+Knot DNS 2.1.0 (2016-01-14)
+===========================
+
+Features:
+---------
+ - Per-thread UDP socket binding using SO_REUSEPORT on Linux
+ - Support for dynamic configuration database
+ - DNSSEC: Support for cryptographic tokens via PKCS #11 interface
+ - DNSSEC: Experimental support for online signing
+
+Improvements:
+-------------
+ - Support for zone file name patterns
+ - Configurable location of zone timer database
+ - Non-blocking network operations and better timeout handling
+ - Caching of critical configuration values for better performance
+ - Logging of ACL failures
+ - RRL: Add rate-limit-slip zero support to drop all responses
+ - RRL: Document behavior for different rate-limit-slip options
+ - kdig: Warning instead of error on TSIG validation failure
+ - Cleanup of support libraries interfaces (libknot, libzscanner, libdnssec)
+ - Remove possibly insecure server control over a network socket
+ - Remove implementation limit for the number of network interfaces
+
+Bugfixes:
+---------
+ - synth-record module: Fix application of default configuration options
+ - TSIG: Allow compressed TSIG name when forwarding DDNS updates
+ - Schedule zone bootstrap after slave zone fails to load from disk
+
+Knot DNS 2.0.2 (2015-11-24)
+===========================
+
+Bugfixes:
+---------
+ - Out-of-bound read in packet parser for malformed NAPTR records (LibFuzzer)
+
+Knot DNS 2.0.1 (2015-09-02)
+===========================
+
+Bugfixes:
+---------
+ - Do not reload expired zones on 'knotc reload' and server startup
+ - Fix rare race-condition in event scheduling causing delayed event execution
+ - Fix skipping of non-authoritative nodes in NSEC proofs
+ - Fix TC flag setting in RRL slipped answers
+ - Disable domain name compression for root label
+ - Log via journald only when running under systemd
+ - Fix CNAME following when querying for NSEC RR type
+ - Fix refreshing of DNSSEC signatures for zone keys
+ - Fix binding an unavailable IPv6 address on Linux (IP_FREEBIND)
+ - Fix infinite loop in knotc zonestatus and memstats
+ - Fix memory leak in configuration on server shutdown
+ - Fix broken dnsproxy module
+ - Fix DNSSEC KASP timestamps parsing in strict POSIX environment
+ - Fix multi value parsing on big-endian
+ - Adapt to Nettle 3 API break causing base64 decoding failures on big-endian
+
+Features:
+---------
+ - Add 'keymgr zone key ds' to show key's DS record
+ - Add 'keymgr tsig generate' to generate TSIG keys
+ - Add query module scoping to process either all queries or zone queries only
+ - Add support for file name globbing in config file includes
+ - Add 'request-edns-option' config option to add custom EDNS0 option into
+ server initiated queries
+
+Improvements:
+-------------
+ - Send minimal responses (remove NS from Authority section for NOERROR)
+ - Update persistent timers only on shutdown for better performance
+ - Allow change of RR TTL over DDNS
+ - Documentation fixes, updates, and improvements in formatting
+ - Install yparser and zscanner header files
+ - Improve lookup of libsystemd build dependencies
+ - Fix compilation warnings in endian conversion functions on OpenBSD
+
+Knot DNS 2.0.0 (2015-06-26)
+===========================
+
+Bugfixes:
+---------
+ - Fix lost NOTIFY message if received during zone transfer
+ - Disable fast zone parser when compiled in Clang (workaround for Clang bug)
+ - kdig: Record correct dnstap SocketProtocol when retrying over TCP
+ - kdig: Hide TSIG section with +noall
+ - Do not set AA flag for AXFR/IXFR queries
+
+Features:
+---------
+ - DNSSEC: separate library, switch to GnuTLS, new utilities
+ - DNSSEC: basic KASP support (generate initial keys, ZSK rollover)
+ - Configuration: New text format in YAML, binary store in LMDB
+ - Zone parser: Split long TXT/SPF strings into multiple strings
+ - kdig: Add generic dump style option (+generic)
+ - Try all master servers in multi-master environment
+ - Improved remotes and ACLs (multiple addresses, multiple keys)
+ - Basic support for zone file patterns (%s to substitute zone name)
+ - Disable zone file synchronization by setting 'zonefile_sync' to '-1'
+ - knsupdate: Add input prompt in interactive mode and 'quit' command
+ - knsupdate: Allow TSIG algorithm specification in interactive prompt
+
+Improvements:
+-------------
+ - Zone dump: Do not write class for SOA record (unified with other RR types)
+ - Zone dump: Do not write master server address into the zone file
+ - Documentation: Manual pages are included in HTML and PDF
+
+Knot DNS 1.6.3 (2015-04-08)
+===========================
+
+Bugfixes:
+---------
+ - Performance drop for NSEC-signed zones
+ - Proper handling of TCP short-writes
+ - Out-of-bound read in zone parser for long domain names in origin (AFL fuzzer)
+ - Out-of-bound read in packet parser for TSIG RR without RDATA (AFL fuzzer)
+ - Out-of-bound read in packet parser for malformed NAPTR RR (AFL fuzzer)
+
+Features:
+---------
+ - CDS and CDNSKEY support in zone parser
+
+Improvements:
+-------------
+ - Add defaults for TCP config options into documentation
+ - Detailed error message if zone reload fails
+
+Knot DNS 1.6.2 (2015-02-19)
+===========================
+
+Features:
+---------
+ - Limiting number of parallel TCP clients (max-tcp-clients config option)
+
+Bugfixes:
+---------
+ - Ignore refresh and transfer events on non-slave zones
+ - Compilation with Dnstap support on FreeBSD
+ - Possible file descriptor leak when terminating inactive TCP clients
+
+Knot DNS 1.6.1 (2014-12-13)
+===========================
+
+Bugfixes:
+---------
+ - Journal file would sometimes outgrow its limit (ixfr-fslimit in configuration)
+ - Fixed incompatibility with OpenSSL 0.9.8
+ - Proper handling when hostname cannot be retrieved (for NSID and CH)
+
+Features:
+---------
+ - DNSSEC Single Type Signing Scheme is now supported
+
+Knot DNS 1.6.0 (2014-10-23)
+===========================
+
+Bugfixes:
+---------
+ - Fix zone expiration when AXFR/IXFR is being refused by master
+ - Fix forced zone refresh on slave (knotc refresh -f)
+
+Knot DNS 1.6.0-rc2 (2014-10-17)
+===============================
+
+Improvements:
+-------------
+ - Maximal size of persistent timers database increased from 10 MB to 100 MB
+ - Added logging of persistent timers database errors
+
+Bugfixes:
+---------
+ - Persistent timers database opening after privileges have been dropped
+
+Knot DNS 1.6.0-rc1 (2014-10-13)
+===============================
+
+Features:
+---------
+ - Persistent timers for slave zones (expire, refresh, and flush)
+
+Bugfixes:
+---------
+ - DNSSEC: RFC compliant processing of letter case in RDATA domain names
+ - EDNS: Return minimal error response for queries with unsupported version
+ - EDNS: Fix interpretation of Extended RCODE
+
+Knot DNS 1.5.3 (2014-09-15)
+===========================
+
+Bugfixes:
+---------
+ - Some specific incoming IXFRs were causing the server to crash
+ - Rare synchronization error during reload caused read-after-free
+ - Response synthetization module did not work properly with DNSSEC-enabled zones
+ - When Knot sent AXFR when IXFR was requested, message ID and opcode were wrong
+ - Knot failed to send large messages to remote control (present since 1.5.1)
+
+Knot DNS 1.5.2 (2014-09-08)
+===========================
+
+Bugfixes:
+---------
+ - Some RR parsing corner cases were not handled properly
+ - AXFR-style IXFR was refused and had to be retransferred
+ - Hash character (#) was not properly escaped when storing text zone file
+
+Knot DNS 1.5.1 (2014-08-19)
+===========================
+
+Features:
+---------
+ - Basic support for logging using systemd journal
+ - DDNS: Ability to process updates in bulk
+
+Improvements:
+-------------
+ - Unified logging messages structure
+ - DNSSEC: More strict controls for signing keys
+
+Bugfixes:
+---------
+ - DNSSEC: DNAMEs in RDATA were not lowercased before signing
+ - EDNS: OPT RR was not put into responses for some errors
+ - TSIG: DDNS responses were not signed with TSIG
+ - DDNS: Prerequisite checks failed for some inputs
+ - knsupdate: Zone origin was not used for deletions
+
+Knot DNS 1.5.0 (2014-07-08)
+===========================
+
+Features:
+---------
+ - DDNS forwarding reimplemented
+
+Improvements:
+-------------
+ - Transfer sizes logged in bytes if needed
+ - Logging outgoing NOTIFY messages
+ - Logging unauthorized incoming NOTIFYs
+
+Bugfixes:
+---------
+ - Zone flush planning after bootstrap
+ - Incorrect incoming AXFR message sizes
+ - DDNS signing changes were freed too soon, possibility of stale data
+ - knotc remote control key handling
+
+Knot DNS 1.5.0-rc2 (2014-06-18)
+===============================
+
+Features:
+---------
+ - edns-client-subnet support in kdig
+ - Optional asynchronous startup (config "asynchronous-start")
+
+Improvements:
+-------------
+ - Preempt task queue for faster reload
+ - Lazy zone file write after zone transfer (governed by
+ "zonefile-sync")
+
+Bugfixes:
+---------
+ - Close zone transfer after SERVFAIL response
+ - Incremental to full zone transfer fallback, wrong log message
+ - Zone events corner cases, reload replanning
+
+Knot DNS 1.5.0-rc1 (2014-06-03)
+===============================
+
+Features:
+---------
+ - Pluggable query processing modules
+ - Synthetic IPv4/IPv6 reverse/forward records (optional module)
+ - dnstap support in both utilities & server (optional module)
+ - NOTIFY message support and new TSIG section in kdig
+ - Zone transfer master failover
+
+Improvements:
+-------------
+ - Query processing and core functionality overhaul
+ - Performance and reduced memory footprint
+ - Faster zone events scheduling
+ - RFC compliant queries/responses in some corner cases
+ - Log messages
+ - New documentation (Sphinx)
+
+Knot DNS 1.4.2 (2014-01-27)
+===========================
+
+Bugfixes:
+---------
+ - AXFR/IXFR compatibility issues with tinydns/axfrdns
+ - Journal file is created only when needed
+ - Zone-related log messages are logged into correct category
+ - DNSSEC: Refresh signatures earlier (3 days before their expiration
+ with the default signature lifetime)
+ - Fixed RCU synchronization causing deadlock on 'knotc signzone'
+ - RRSIG not fitting in the additional records doesn't cause
+ truncation
+
+Knot DNS 1.4.1 (2014-01-13)
+===========================
+
+Bugfixes:
+---------
+ - Empty APL record support
+ - 'zonestatus' when using immediate zone syncing
+ - Immediate zone syncing after reload
+ - Race condition writing time values to zone file
+
+Knot DNS 1.4.0 (2014-01-06)
+===========================
+
+Features:
+---------
+ - Zone SERIAL policies (INCREMENT, UNIXTIME)
+ - IDN support in Knot utilities
+ - DNSSEC: support for GOST algorithm
+ - Better logging of automatic DNSSEC events
+ - Support for DNSSEC key pre-publication
+ - Experimental automatic DNSSEC signing
+ - Reduced memory usage
+
+Improvements:
+-------------
+ - ./configure prints build configuration summary
+ - Pretty zone file output (DNSSEC-related data separately)
+ - Lower memory consumption
+ - config: option 'dnssec-keydir' can be set per zone
+ - config: option 'storage' can be set per zone
+
+Bugfixes:
+---------
+ - AXFR crash with specific packet
+ - QNAME case-sensitive since 1.4.0-rc0
+ - DNSSEC records over DDNS
+ - Semantic check fail in AXFR is only soft-error
+ - Journal race condition
+ - Notifies are sent immediately
+ - Crash in particular additionals processing
+ - Race condition in event cancellation
+ - Journal corruption after failed transactions
+ - DNSSEC: fixed detection of ECDSA support
+ - Refactored zone loading
+ - Improved journal locking and fixed some race conditions
+ - Various fixes in client utilities
+ - Fixed memory errors in automatic DNSSEC signing
+ - 'dnssec-keydir' doesn't auto-enable signing
+ - Fixed rescheduling of zone resigns
+
+Knot DNS 1.3.3 (2013-10-28)
+===========================
+
+Bugfixes:
+---------
+ - Improved zone loading error messages
+ - Correct control socket permissions
+ - Improved log syntax documentation
+ - Fixed wrong assertions in DDNS prerequisites checking
+ - Fixed processing of some malformed DNS packets
+ - Fixed notify messages being ignored in some cases
+
+Knot DNS 1.3.2 (2013-09-30)
+===========================
+
+Bugfixes:
+---------
+ - Configuration option for EDNS0 max UDP payload.
+ - Max UDP payload from EDNS0 affected TCP responses.
+ - Fixed build on SLE 10.
+ - knotc reload did not close files included from config.
+
+Knot DNS 1.3.1 (2013-08-26)
+===========================
+
+Bugfixes:
+---------
+ - Response with NSID contained extra bytes after reload
+ - List of remotes is scanned for longest prefix match
+ - Multipacket TSIG signatures for transfers
+ - Wrongly parsed TSIG key secret without quotes
+ - Removed autoconf checks for extended instruction sets
+
+Knot DNS 1.3.0 (2013-08-05)
+===========================
+
+Features:
+---------
+ - Defaults for CH TXT id.server, version.server (see doc)
+ - Much faster bootstrap of many zones
+ - --with-configdir option for default config path
+ - Reintroduced 'pidfile' config option
+ - Utility to estimate memory consumption (see 'knotc memstats')
+ - PID file is not created when running in the foreground
+ - UNIX sockets support for knotc
+ - Configurable 'rundir' and 'storage'
+ - Faster zone parser
+ - Full support for EUI and ILNP resource records
+ - Lower memory footprint for large zones
+ - No compilation of zones
+ - Improved scheduling of zone transfers
+ - Logging of serials and timing information for zone transfers
+ - Config: 'groups' keyword for creating groups of remotes
+ - Config: 'include' keyword allowing other file includes
+ - Client utilities: kdig, khost, knsupdate
+ - Server identification using TXT/CH queries (RFC 4892)
+ - Improved build scripts
+ - Improved dname compression and performance
+
+Bugfixes:
+---------
+ - Progressive interval for bootstrap retry
+ - Transfers randomly cancelled
+ - Disabling RRL on reload
+ - Secondary groups not initialized when dropping privileges
+ - Responding to DS queries for names at or below delegation points
+ - Removed deprecated 'knotc -w' option
+ - Slave ignores out-of-zone records in zone
+ - Support for obsolete types in zone transfers
+ - Slave zone file names fixes
+ - Long transfers being randomly dropped
+ - AXFR/IXFR subsystem performance improvements
+ - Rescheduling of AXFR in some cases
+ - RRSIGs not in the same section for DS records
+ - Log messages leaking to syslog
+ - 'knotc restart' option removed due to several limitations
+ - IXFR with an arbitrary number of diffs
+ - Processing of knotc TSIG keyfile
+ - Atomic PID file writing, removed deprecated 'knotc start'
+ - Performance regression when RRSIGs came before covered RRs in AXFR
+ - Label compression related bug
+ - Proper resolution of some CNAME chains
+ - Unstable response rate in rare cases
+ - Several log messages
+ - Fixed creating of PID file when dropping privileges
+
+Knot DNS 1.2.0 (2013-03-29)
+===========================
+
+Features:
+---------
+ - knotc 'zonestatus' command
+ - Response rate limiting (see documentation)
+ - Dynamic updates, including forwarding (limited on signed zones)
+ - Updated remote control utility
+ - Configurable TCP timeouts
+ - LOC RR support
+
+Bugfixes:
+---------
+ - Memory leaks
+ - Check for broken recvmmsg() implementation
+ - Changing logfile ownership before dropping privileges
+ - knotc respects 'control' section from configuration
+ - RRL: resolved bucket collisions
+ - RRL: updated bucket mapping to conform to the RRL technical memo
+ - Fixed OpenBSD build
+ - Responses to ANY should contain RRSIGs
+ - Fixed processing of some non-standard dnames.
+ - Correct checking of label length bounds in some cases.
+ - More compliant rcodes in case of DDNS/TSIG failures.
+ - Correct processing of malformed DDNS prereq section.
+
+Knot DNS 1.1.3 (2012-12-19)
+===========================
+
+Bugfixes:
+---------
+ - Updated manpage.
+ - Fixed answering DS queries (RRSIGs not together with DS, AA bit
+ missing).
+ - Fixed setting ARCOUNT in some error responses with EDNS enabled.
+ - Fixed crash when compiling a zone with NSEC3PARAM but no NSEC3
+   and semantic checks enabled.
+
+Knot DNS 1.1.2 (2012-11-21)
+===========================
+
+Bugfixes:
+---------
+ - Fixed debug message.
+ - Fixed crash on reload when config contained duplicate zones.
+ - Fixed scheduling of transfers.
+
+Knot DNS 1.1.1 (2012-10-31)
+===========================
+
+Features:
+---------
+ - Improved compression of packets. Out-of-zone dnames present in
+ RDATA were not compressed.
+ - Slave zones are now automatically refreshed after startup.
+ - Proper response to IXFR/UDP query (returns SOA in Authority
+ section).
+
+Bugfixes:
+---------
+ - Fixed assertion failing when asking directly for a wildcard name.
+ - Crash after IXFR in certain cases when adding RRSIG in an IXFR.
+ - Fixed behaviour when incoming IXFR removes a zone cut. Previously
+   occluded names now become properly visible. This previously led to a
+   crash when the server was asked for the previously occluded name.
+ - Fixed handling of zero-length strings in text zone dump, which caused
+   the compilation to fail.
+ - Fixed TSIG algorithm name comparison - the names should be in
+ canonical form.
+ - Fixed handling unknown RR types with type less than 251.
+
+Knot DNS 1.1.0 (2012-08-31)
+===========================
+
+Features:
+---------
+ - Signing SOA queries with TSIG when checking zone version with the
+   master.
+ - Optionally disable ANY queries for authoritative answers.
+ - Dropping identical records in zone and incoming transfers.
+ - Support for '/' in zone names.
+ - Generating journal from reloaded zone (EXPERIMENTAL).
+ - Outgoing-only interfaces in configuration file.
+ - Following DNAME if the synthesized name is in the same zone.
+
+Bugfixes:
+---------
+ - Syncing journal to zone was not updating the compiled zone
+ database.
+ - Fixed ixfr-from-differences journal generation in case of IPSECKEY
+ and APL records.
+ - Fixed possible leak on server shutdown with a pending transfer.
+ - Crash when zone contained RRSIG signing a CNAME, but did not
+ contain the CNAME.
+ - Malformed packets parsing.
+ - Failed IXFR caused memory leaks.
+ - Failed IXFR might have resulted in inconsistent zone structures.
+ - Fixed answering to +dnssec queries when NSEC3 chain is corrupted.
+ - Fixed answering when transitioning from NSEC3 to NSEC.
+ - Fixed answering when zone contains multiple NSEC3 chains.
+ - Handling RRSets with different TTLs - TTL from the first RR is
+ used.
+ - Synchronization of zone reload and zone transfers.
+ - Fixed build on NetBSD 5 and FreeBSD.
+ - Fixed binding to both IPv4 and IPv6 at the same time on special
+ interfaces.
+ - Fixed access rights of created files.
+ - Semantic checks corrupted RDATA domain names which are covered by
+ wildcard in the same zone.
+
+Improvements:
+-------------
+ - Improved user manual.
+ - Better checks of corrupted zone database.
+ - IXFR-in optimized.
+ - Many zones loading optimized.
+ - More detailed log messages (mostly transfer-related).
+ - Copying Question section to error responses.
+ - Using zone name from config file as default origin in zone file.
+ - Additional records are now added to response also from
+ wildcard-covered names.
+
+Knot DNS 1.0.6 (2012-06-13)
+===========================
+
+Bugfixes:
+---------
+ - Fixed potential problems with RCU synchronization.
+ - Adding NSEC/NSEC3 for all wildcard CNAMEs in the response.
+
+Knot DNS 1.0.5 (2012-05-17)
+===========================
+
+Bugfixes:
+---------
+ - Fixed bug with creating journal files.
+
+Knot DNS 1.0.4 (2012-05-16)
+===========================
+
+Features:
+---------
+ - Parallel loading of zones to the server.
+ - RFC 3339-compliant format of log time.
+ - Support for TLSA (RR type 52).
+ - knotc checkzone (as a dry-run of zone compile).
+ - knotc refresh for forcing Knot to update all zones from master
+ servers.
+ - Reopening log files upon start (used to truncate them).
+
+Improvements:
+-------------
+ - Significantly sped up IXFR-in and reduced its memory requirements.
+
+Bugfixes:
+---------
+ - Copying OPCODE and RD bit from query to NOTIMPL responses.
+ - Corrected response to CNAME queries if the canonical name was also
+ an alias (was adding the whole CNAME chain to the response).
+ - Fixed crash when NS or MX points to an alias.
+ - Fixed problem with early closing of file descriptors (led to a crash
+   when compiling and loading or bootstrapping and restarting the
+   server with a lot of zones).
+
+Knot DNS 1.0.3 (2012-04-17)
+===========================
+
+Bugfixes:
+---------
+ - Corrected handling of EDNS0 when TCP is used (was applying the UDP
+ size limit).
+ - Fixed slow compilation of zones.
+ - Fixed potential crash with many concurrent transfers.
+ - Fixed missing include for FreeBSD.
+
+Knot DNS 1.0.2 (2012-04-13)
+===========================
+
+Features:
+---------
+ - Configuration checker (invoked via knotc).
+ - Specifying source interface for transfers and NOTIFY requests
+ directly.
+
+Bugfixes:
+---------
+ - Fixed leak when querying non-existing name and zone SOA TTL >
+ minimal.
+ - Fixed some minor bugs in transfers.
+
+Improvements:
+-------------
+ - Improved log messages (added date and time, better specification of
+ XFR remote).
+ - Improved saving incoming IXFR to journal (memory optimized).
+ - Now using system scheduler (better for Linux).
+ - Decreased thread stack size.
+
+Knot DNS 1.0.1 (2012-03-09)
+===========================
+
+Features:
+---------
+ - Implemented jitter to REFRESH/RETRY timers.
+ - Implemented magic bytes for journal.
+ - Improved error messages.
+
+Bugfixes:
+---------
+ - Problem with creating IXFR journal for bootstrapped zone.
+ - Race condition in processing NOTIFY/SOA queries.
+ - Leak when reloading zone with NSEC3.
+ - Processing of APL RR.
+ - TSIG improper assignment of algorithm type.
+
+Knot DNS 1.0.0 (2012-02-29)
+===========================
+
+Features:
+---------
+ - Support for subnets in ACL.
+ - Debug messages enabling in configure.
+ - Optimized memory consumption of zone structures.
+ - NSID support (RFC5001).
+ - Root zone support.
+ - Automatic zone compiling on server start.
+ - Setting user to run Knot under in config file.
+ - Dropping privileges after binding to port 53.
+ + Support for Linux capabilities(7).
+ - Setting source address of outgoing transfers in config file.
+ - Custom PID file.
+ - CNAME loop detection.
+ - Timeout on TCP connections.
+ - Basic defense against DoS attacks.
+
+Bugfixes:
+---------
+ - Memory errors and leaks.
+ - Fixed improper handling of failed IXFR/IN.
+ - Several other minor bugfixes.
+ - Fixed IXFR processing.
+ - Patched URCU so that it compiles on architectures without TLS in
+ compiler (NetBSD, OpenBSD).
+ - Fixed response to DS query at parent zone.
+ - A lot of other bugfixes.
+
+Knot DNS 0.9.1 (2012-01-20)
+===========================
+
+Features:
+---------
+ - RRSet rotation
+
+Improvements:
+-------------
+ - Replaced pseudo-random number generator by one with MIT/BSD
+ license.
+
+Bugfixes:
+---------
+ - Fixed build on BSD.
+ - Fixes in parsing and dumping of zone RR types IPSECKEY, WKS, DLV,
+ APL, NSAP
+
+Knot DNS 0.9.0 (2012-01-13)
+===========================
+
+Features:
+---------
+ - TSIG support in both client and server.
+ - Use of sendmmsg() on Linux 3.0+ (improves performance).
+
+Bugfixes:
+---------
+ - Knot was not accepting AXFR-style IXFR with the first SOA in a separate
+   packet (e.g. from PowerDNS).
+ - Wrong SOA TTL in negative answers.
+ - Wrong max packet size for outgoing transfers (was causing the
+ packets to be malformed).
+ - Wrong handling of WKS record in zone compiler.
+ - Problems with zone bootstrapping.
+
+Knot DNS 0.8.1 (2011-12-01)
+===========================
+
+Bugfixes:
+---------
+ - Handling SPF record.
+ - Wrong text dump of unknown records.
+
+Knot DNS 0.8.0 (2011-11-03)
+===========================
+
+Features:
+---------
+ - First Public Release
+ - AXFR-in/-out
+ - IXFR-in/-out
+ - EDNS0
+ - DNSSEC
+ - NSEC3
+ - IPv6
+ - Runtime reconfiguration
+
+Known issues:
+-------------
+ - Missing support for TSIG
+ - Root zone support
+ - NSID support
+ - DNS classes other than IN
+ - RRSet rotation not implemented
+ - Dynamic update support
+ - IXFR code might be flaky sometimes
+ - IXFR may be slow when too many (10 000+) RRSets are transferred at
+   once
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..5f5f3a4
--- /dev/null
+++ b/README.md
@@ -0,0 +1,121 @@
+[Coverity Scan](https://scan.coverity.com/projects/knot-dns)
+[OSS-Fuzz](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:knot-dns)
+
+# Requirements
+
+[doc/requirements.rst](doc/requirements.rst)
+
+# Installation
+
+[doc/installation.rst](doc/installation.rst)
+
+## 1. Install prerequisites
+
+### Debian based distributions
+
+#### Update the system:
+```bash
+sudo apt-get update
+sudo apt-get upgrade
+```
+
+#### Install prerequisites:
+```bash
+sudo apt-get install \
+ libtool autoconf automake make pkg-config liburcu-dev libgnutls28-dev libedit-dev liblmdb-dev
+```
+
+#### Install optional packages:
+```bash
+sudo apt-get install \
+ libcap-ng-dev libsystemd-dev libidn2-dev libprotobuf-c-dev protobuf-c-compiler libfstrm-dev libmaxminddb-dev libnghttp2-dev libbpf-dev libxdp-dev libmnl-dev python3-sphinx python3-sphinx-panels softhsm2
+```
+
+### Fedora like distributions
+
+#### Update the system:
+```bash
+dnf upgrade
+```
+
+#### Install basic development tools:
+```bash
+dnf install @buildsys-build
+```
+
+#### Install prerequisites:
+```bash
+dnf install \
+ libtool autoconf automake pkgconfig userspace-rcu-devel gnutls-devel libedit-devel lmdb-devel
+```
+
+#### Install optional packages:
+```bash
+dnf install \
+ libcap-ng-devel systemd-devel libidn2-devel protobuf-c-devel fstrm-devel libmaxminddb-devel libnghttp2-devel libbpf-devel libxdp-devel libmnl-devel python-sphinx python-sphinx-panels softhsm
+```
+
+When compiling on a RHEL-based system, the Fedora EPEL repository has to be
+enabled first.
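+
+On CentOS-like rebuilds this usually just means installing the `epel-release`
+package; on RHEL itself, enable EPEL as described in the EPEL documentation.
+A minimal sketch, assuming a dnf-based system:
+```bash
+# Enable the Fedora EPEL repository (package name may differ per distribution).
+dnf install epel-release
+```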
+
+## 2. Install Knot DNS
+
+Get the source code:
+```bash
+git clone https://gitlab.nic.cz/knot/knot-dns.git
+```
+Alternatively, extract the source package into a `knot-dns` directory.
+
+Compile the source code:
+```bash
+cd knot-dns
+autoreconf -if
+./configure
+make
+```
+
+Install Knot DNS into the system:
+```bash
+sudo make install
+sudo ldconfig
+```
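+
+To verify the installation, the installed binaries should be able to report
+their version (this assumes the default install prefix puts them on your PATH):
+```bash
+# Both commands print the installed Knot DNS version.
+knotd --version
+knotc --version
+```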
+
+# Running
+
+### 1. Prepare the configuration
+
+[doc/configuration.rst](doc/configuration.rst)
+
+Please see [samples/knot.sample.conf](samples/knot.sample.conf),
+[project documentation](https://www.knot-dns.cz/documentation/),
+or `man 5 knot.conf` for more details. At a minimum, the configuration should
+specify (a minimal example is sketched at the end of this step):
+- network interfaces
+- served zones
+
+For example, use the provided sample configuration file:
+```bash
+cd /etc/knot
+mv knot.sample.conf knot.conf
+```
+Modify the configuration file:
+```bash
+editor knot.conf
+```
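+
+For illustration only, a minimal configuration covering both points might look
+like the following sketch; the listen addresses, zone name, and file paths are
+placeholders that must be adjusted to your environment:
+```bash
+# Write a minimal illustrative knot.conf (adjust addresses and zone data).
+cat > /etc/knot/knot.conf << 'EOF'
+server:
+    listen: 0.0.0.0@53
+    listen: ::@53
+
+zone:
+  - domain: example.com
+    storage: /var/lib/knot
+    file: example.com.zone
+EOF
+```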
+
+### 2. Prepare working directory
+
+```bash
+mv example.com.zone /var/lib/knot/
+```
+
+### 3. Start the server
+
+[doc/operation.rst](doc/operation.rst)
+
+This can be done by running the `knotd` command. Alternatively, if you installed
+Knot DNS from a binary package, your distribution should provide an init script.
+
+Start the server in the foreground to see whether it runs:
+```bash
+knotd -c /etc/knot/knot.conf
+```
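+
+While the server is running, you can check from another terminal that it
+answers queries, for example with the bundled `kdig` utility; the zone name
+below is a placeholder for one of your configured zones:
+```bash
+# Query the local server for the SOA record of a configured zone.
+kdig @127.0.0.1 example.com SOA
+
+# Ask the running daemon for its status over the control socket.
+knotc status
+```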
diff --git a/aclocal.m4 b/aclocal.m4
new file mode 100644
index 0000000..fad2309
--- /dev/null
+++ b/aclocal.m4
@@ -0,0 +1,1566 @@
+# generated automatically by aclocal 1.16.5 -*- Autoconf -*-
+
+# Copyright (C) 1996-2021 Free Software Foundation, Inc.
+
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun([AC_CONFIG_MACRO_DIRS], [_AM_CONFIG_MACRO_DIRS($@)])])
+m4_ifndef([AC_AUTOCONF_VERSION],
+ [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl
+m4_if(m4_defn([AC_AUTOCONF_VERSION]), [2.71],,
+[m4_warning([this file was generated for autoconf 2.71.
+You have another version of autoconf. It may work, but is not guaranteed to.
+If you have problems, you may need to regenerate the build system entirely.
+To do so, use the procedure documented by the package, typically 'autoreconf'.])])
+
+# pkg.m4 - Macros to locate and use pkg-config. -*- Autoconf -*-
+# serial 12 (pkg-config-0.29.2)
+
+dnl Copyright © 2004 Scott James Remnant .
+dnl Copyright © 2012-2015 Dan Nicholson
+dnl
+dnl This program is free software; you can redistribute it and/or modify
+dnl it under the terms of the GNU General Public License as published by
+dnl the Free Software Foundation; either version 2 of the License, or
+dnl (at your option) any later version.
+dnl
+dnl This program is distributed in the hope that it will be useful, but
+dnl WITHOUT ANY WARRANTY; without even the implied warranty of
+dnl MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+dnl General Public License for more details.
+dnl
+dnl You should have received a copy of the GNU General Public License
+dnl along with this program; if not, write to the Free Software
+dnl Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
+dnl 02111-1307, USA.
+dnl
+dnl As a special exception to the GNU General Public License, if you
+dnl distribute this file as part of a program that contains a
+dnl configuration script generated by Autoconf, you may include it under
+dnl the same distribution terms that you use for the rest of that
+dnl program.
+
+dnl PKG_PREREQ(MIN-VERSION)
+dnl -----------------------
+dnl Since: 0.29
+dnl
+dnl Verify that the version of the pkg-config macros are at least
+dnl MIN-VERSION. Unlike PKG_PROG_PKG_CONFIG, which checks the user's
+dnl installed version of pkg-config, this checks the developer's version
+dnl of pkg.m4 when generating configure.
+dnl
+dnl To ensure that this macro is defined, also add:
+dnl m4_ifndef([PKG_PREREQ],
+dnl [m4_fatal([must install pkg-config 0.29 or later before running autoconf/autogen])])
+dnl
+dnl See the "Since" comment for each macro you use to see what version
+dnl of the macros you require.
+m4_defun([PKG_PREREQ],
+[m4_define([PKG_MACROS_VERSION], [0.29.2])
+m4_if(m4_version_compare(PKG_MACROS_VERSION, [$1]), -1,
+ [m4_fatal([pkg.m4 version $1 or higher is required but ]PKG_MACROS_VERSION[ found])])
+])dnl PKG_PREREQ
+
+dnl PKG_PROG_PKG_CONFIG([MIN-VERSION])
+dnl ----------------------------------
+dnl Since: 0.16
+dnl
+dnl Search for the pkg-config tool and set the PKG_CONFIG variable to
+dnl first found in the path. Checks that the version of pkg-config found
+dnl is at least MIN-VERSION. If MIN-VERSION is not specified, 0.9.0 is
+dnl used since that's the first version where most current features of
+dnl pkg-config existed.
+AC_DEFUN([PKG_PROG_PKG_CONFIG],
+[m4_pattern_forbid([^_?PKG_[A-Z_]+$])
+m4_pattern_allow([^PKG_CONFIG(_(PATH|LIBDIR|SYSROOT_DIR|ALLOW_SYSTEM_(CFLAGS|LIBS)))?$])
+m4_pattern_allow([^PKG_CONFIG_(DISABLE_UNINSTALLED|TOP_BUILD_DIR|DEBUG_SPEW)$])
+AC_ARG_VAR([PKG_CONFIG], [path to pkg-config utility])
+AC_ARG_VAR([PKG_CONFIG_PATH], [directories to add to pkg-config's search path])
+AC_ARG_VAR([PKG_CONFIG_LIBDIR], [path overriding pkg-config's built-in search path])
+
+if test "x$ac_cv_env_PKG_CONFIG_set" != "xset"; then
+ AC_PATH_TOOL([PKG_CONFIG], [pkg-config])
+fi
+if test -n "$PKG_CONFIG"; then
+ _pkg_min_version=m4_default([$1], [0.9.0])
+ AC_MSG_CHECKING([pkg-config is at least version $_pkg_min_version])
+ if $PKG_CONFIG --atleast-pkgconfig-version $_pkg_min_version; then
+ AC_MSG_RESULT([yes])
+ else
+ AC_MSG_RESULT([no])
+ PKG_CONFIG=""
+ fi
+fi[]dnl
+])dnl PKG_PROG_PKG_CONFIG
+
+dnl PKG_CHECK_EXISTS(MODULES, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
+dnl -------------------------------------------------------------------
+dnl Since: 0.18
+dnl
+dnl Check to see whether a particular set of modules exists. Similar to
+dnl PKG_CHECK_MODULES(), but does not set variables or print errors.
+dnl
+dnl Please remember that m4 expands AC_REQUIRE([PKG_PROG_PKG_CONFIG])
+dnl only at the first occurrence in configure.ac, so if the first place
+dnl it's called might be skipped (such as if it is within an "if", you
+dnl have to call PKG_CHECK_EXISTS manually
+AC_DEFUN([PKG_CHECK_EXISTS],
+[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
+if test -n "$PKG_CONFIG" && \
+ AC_RUN_LOG([$PKG_CONFIG --exists --print-errors "$1"]); then
+ m4_default([$2], [:])
+m4_ifvaln([$3], [else
+ $3])dnl
+fi])
+
+dnl _PKG_CONFIG([VARIABLE], [COMMAND], [MODULES])
+dnl ---------------------------------------------
+dnl Internal wrapper calling pkg-config via PKG_CONFIG and setting
+dnl pkg_failed based on the result.
+m4_define([_PKG_CONFIG],
+[if test -n "$$1"; then
+ pkg_cv_[]$1="$$1"
+ elif test -n "$PKG_CONFIG"; then
+ PKG_CHECK_EXISTS([$3],
+ [pkg_cv_[]$1=`$PKG_CONFIG --[]$2 "$3" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes ],
+ [pkg_failed=yes])
+ else
+ pkg_failed=untried
+fi[]dnl
+])dnl _PKG_CONFIG
+
+dnl _PKG_SHORT_ERRORS_SUPPORTED
+dnl ---------------------------
+dnl Internal check to see if pkg-config supports short errors.
+AC_DEFUN([_PKG_SHORT_ERRORS_SUPPORTED],
+[AC_REQUIRE([PKG_PROG_PKG_CONFIG])
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi[]dnl
+])dnl _PKG_SHORT_ERRORS_SUPPORTED
+
+
+dnl PKG_CHECK_MODULES(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
+dnl [ACTION-IF-NOT-FOUND])
+dnl --------------------------------------------------------------
+dnl Since: 0.4.0
+dnl
+dnl Note that if there is a possibility the first call to
+dnl PKG_CHECK_MODULES might not happen, you should be sure to include an
+dnl explicit call to PKG_PROG_PKG_CONFIG in your configure.ac
+AC_DEFUN([PKG_CHECK_MODULES],
+[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
+AC_ARG_VAR([$1][_CFLAGS], [C compiler flags for $1, overriding pkg-config])dnl
+AC_ARG_VAR([$1][_LIBS], [linker flags for $1, overriding pkg-config])dnl
+
+pkg_failed=no
+AC_MSG_CHECKING([for $2])
+
+_PKG_CONFIG([$1][_CFLAGS], [cflags], [$2])
+_PKG_CONFIG([$1][_LIBS], [libs], [$2])
+
+m4_define([_PKG_TEXT], [Alternatively, you may set the environment variables $1[]_CFLAGS
+and $1[]_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.])
+
+if test $pkg_failed = yes; then
+ AC_MSG_RESULT([no])
+ _PKG_SHORT_ERRORS_SUPPORTED
+ if test $_pkg_short_errors_supported = yes; then
+ $1[]_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "$2" 2>&1`
+ else
+ $1[]_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "$2" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$$1[]_PKG_ERRORS" >&AS_MESSAGE_LOG_FD
+
+ m4_default([$4], [AC_MSG_ERROR(
+[Package requirements ($2) were not met:
+
+$$1_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+_PKG_TEXT])[]dnl
+ ])
+elif test $pkg_failed = untried; then
+ AC_MSG_RESULT([no])
+ m4_default([$4], [AC_MSG_FAILURE(
+[The pkg-config script could not be found or is too old. Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+_PKG_TEXT
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.])[]dnl
+ ])
+else
+ $1[]_CFLAGS=$pkg_cv_[]$1[]_CFLAGS
+ $1[]_LIBS=$pkg_cv_[]$1[]_LIBS
+ AC_MSG_RESULT([yes])
+ $3
+fi[]dnl
+])dnl PKG_CHECK_MODULES
+
+
+dnl PKG_CHECK_MODULES_STATIC(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
+dnl [ACTION-IF-NOT-FOUND])
+dnl ---------------------------------------------------------------------
+dnl Since: 0.29
+dnl
+dnl Checks for existence of MODULES and gathers its build flags with
+dnl static libraries enabled. Sets VARIABLE-PREFIX_CFLAGS from --cflags
+dnl and VARIABLE-PREFIX_LIBS from --libs.
+dnl
+dnl Note that if there is a possibility the first call to
+dnl PKG_CHECK_MODULES_STATIC might not happen, you should be sure to
+dnl include an explicit call to PKG_PROG_PKG_CONFIG in your
+dnl configure.ac.
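+dnl
+dnl Example (illustrative sketch): same calling convention as
+dnl PKG_CHECK_MODULES, but pkg-config runs with --static so private
+dnl dependencies are included in the link flags; "libfoo" is an assumption.
+dnl   PKG_CHECK_MODULES_STATIC([FOO_STATIC], [libfoo >= 1.0])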
+AC_DEFUN([PKG_CHECK_MODULES_STATIC],
+[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
+_save_PKG_CONFIG=$PKG_CONFIG
+PKG_CONFIG="$PKG_CONFIG --static"
+PKG_CHECK_MODULES($@)
+PKG_CONFIG=$_save_PKG_CONFIG[]dnl
+])dnl PKG_CHECK_MODULES_STATIC
+
+
+dnl PKG_INSTALLDIR([DIRECTORY])
+dnl -------------------------
+dnl Since: 0.27
+dnl
+dnl Substitutes the variable pkgconfigdir as the location where a module
+dnl should install pkg-config .pc files. By default the directory is
+dnl $libdir/pkgconfig, but the default can be changed by passing
+dnl DIRECTORY. The user can override through the --with-pkgconfigdir
+dnl parameter.
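+dnl
+dnl Example (illustrative sketch): pairing this macro with a Makefile.am
+dnl rule that installs a generated .pc file; "libfoo.pc" is an assumption.
+dnl   configure.ac:  PKG_INSTALLDIR
+dnl   Makefile.am:   pkgconfig_DATA = libfoo.pc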
+AC_DEFUN([PKG_INSTALLDIR],
+[m4_pushdef([pkg_default], [m4_default([$1], ['${libdir}/pkgconfig'])])
+m4_pushdef([pkg_description],
+ [pkg-config installation directory @<:@]pkg_default[@:>@])
+AC_ARG_WITH([pkgconfigdir],
+ [AS_HELP_STRING([--with-pkgconfigdir], pkg_description)],,
+ [with_pkgconfigdir=]pkg_default)
+AC_SUBST([pkgconfigdir], [$with_pkgconfigdir])
+m4_popdef([pkg_default])
+m4_popdef([pkg_description])
+])dnl PKG_INSTALLDIR
+
+
+dnl PKG_NOARCH_INSTALLDIR([DIRECTORY])
+dnl --------------------------------
+dnl Since: 0.27
+dnl
+dnl Substitutes the variable noarch_pkgconfigdir as the location where a
+dnl module should install arch-independent pkg-config .pc files. By
+dnl default the directory is $datadir/pkgconfig, but the default can be
+dnl changed by passing DIRECTORY. The user can override through the
+dnl --with-noarch-pkgconfigdir parameter.
+AC_DEFUN([PKG_NOARCH_INSTALLDIR],
+[m4_pushdef([pkg_default], [m4_default([$1], ['${datadir}/pkgconfig'])])
+m4_pushdef([pkg_description],
+ [pkg-config arch-independent installation directory @<:@]pkg_default[@:>@])
+AC_ARG_WITH([noarch-pkgconfigdir],
+ [AS_HELP_STRING([--with-noarch-pkgconfigdir], pkg_description)],,
+ [with_noarch_pkgconfigdir=]pkg_default)
+AC_SUBST([noarch_pkgconfigdir], [$with_noarch_pkgconfigdir])
+m4_popdef([pkg_default])
+m4_popdef([pkg_description])
+])dnl PKG_NOARCH_INSTALLDIR
+
+
+dnl PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE,
+dnl [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
+dnl -------------------------------------------
+dnl Since: 0.28
+dnl
+dnl Retrieves the value of the pkg-config variable for the given module.
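+dnl
+dnl Example (illustrative sketch): reading a variable declared in a
+dnl module's .pc file, with a fallback when it is empty; the
+dnl "bash-completion" module and "completionsdir" variable are assumptions.
+dnl   PKG_CHECK_VAR([COMPLETIONSDIR], [bash-completion], [completionsdir],
+dnl     [], [COMPLETIONSDIR="$datadir/bash-completion/completions"])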
+AC_DEFUN([PKG_CHECK_VAR],
+[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
+AC_ARG_VAR([$1], [value of $3 for $2, overriding pkg-config])dnl
+
+_PKG_CONFIG([$1], [variable="][$3]["], [$2])
+AS_VAR_COPY([$1], [pkg_cv_][$1])
+
+AS_VAR_IF([$1], [""], [$5], [$4])dnl
+])dnl PKG_CHECK_VAR
+
+dnl PKG_WITH_MODULES(VARIABLE-PREFIX, MODULES,
+dnl [ACTION-IF-FOUND],[ACTION-IF-NOT-FOUND],
+dnl [DESCRIPTION], [DEFAULT])
+dnl ------------------------------------------
+dnl
+dnl Prepare a "--with-" configure option using the lowercase
+dnl [VARIABLE-PREFIX] name, merging the behaviour of AC_ARG_WITH and
+dnl PKG_CHECK_MODULES in a single macro.
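+dnl
+dnl Example (illustrative sketch): creates --with-foo/--without-foo and
+dnl runs PKG_CHECK_MODULES([FOO], ...) when the option is "yes" or "auto";
+dnl "foo"/"libfoo" and the description are assumptions.
+dnl   PKG_WITH_MODULES([FOO], [libfoo >= 1.0],
+dnl     [AC_MSG_NOTICE([building with libfoo])],
+dnl     [AC_MSG_NOTICE([building without libfoo])],
+dnl     [enable libfoo support], [auto])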
+AC_DEFUN([PKG_WITH_MODULES],
+[
+m4_pushdef([with_arg], m4_tolower([$1]))
+
+m4_pushdef([description],
+ [m4_default([$5], [build with ]with_arg[ support])])
+
+m4_pushdef([def_arg], [m4_default([$6], [auto])])
+m4_pushdef([def_action_if_found], [AS_TR_SH([with_]with_arg)=yes])
+m4_pushdef([def_action_if_not_found], [AS_TR_SH([with_]with_arg)=no])
+
+m4_case(def_arg,
+ [yes],[m4_pushdef([with_without], [--without-]with_arg)],
+ [m4_pushdef([with_without],[--with-]with_arg)])
+
+AC_ARG_WITH(with_arg,
+ AS_HELP_STRING(with_without, description[ @<:@default=]def_arg[@:>@]),,
+ [AS_TR_SH([with_]with_arg)=def_arg])
+
+AS_CASE([$AS_TR_SH([with_]with_arg)],
+ [yes],[PKG_CHECK_MODULES([$1],[$2],$3,$4)],
+ [auto],[PKG_CHECK_MODULES([$1],[$2],
+ [m4_n([def_action_if_found]) $3],
+ [m4_n([def_action_if_not_found]) $4])])
+
+m4_popdef([with_arg])
+m4_popdef([description])
+m4_popdef([def_arg])
+
+])dnl PKG_WITH_MODULES
+
+dnl PKG_HAVE_WITH_MODULES(VARIABLE-PREFIX, MODULES,
+dnl [DESCRIPTION], [DEFAULT])
+dnl -----------------------------------------------
+dnl
+dnl Convenience macro to trigger AM_CONDITIONAL after PKG_WITH_MODULES
+dnl check. HAVE_[VARIABLE-PREFIX] is exported as a make variable.
+AC_DEFUN([PKG_HAVE_WITH_MODULES],
+[
+PKG_WITH_MODULES([$1],[$2],,,[$3],[$4])
+
+AM_CONDITIONAL([HAVE_][$1],
+ [test "$AS_TR_SH([with_]m4_tolower([$1]))" = "yes"])
+])dnl PKG_HAVE_WITH_MODULES
+
+dnl PKG_HAVE_DEFINE_WITH_MODULES(VARIABLE-PREFIX, MODULES,
+dnl [DESCRIPTION], [DEFAULT])
+dnl ------------------------------------------------------
+dnl
+dnl Convenience macro to run AM_CONDITIONAL and AC_DEFINE after
+dnl PKG_WITH_MODULES check. HAVE_[VARIABLE-PREFIX] is exported as a make
+dnl and preprocessor variable.
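+dnl
+dnl Example (illustrative sketch): one call that adds the --with option,
+dnl an AM_CONDITIONAL([HAVE_FOO]) for Makefile.am, and AC_DEFINE of
+dnl HAVE_FOO for config.h; "foo"/"libfoo" are assumptions.
+dnl   PKG_HAVE_DEFINE_WITH_MODULES([FOO], [libfoo >= 1.0],
+dnl     [enable libfoo support], [auto])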
+AC_DEFUN([PKG_HAVE_DEFINE_WITH_MODULES],
+[
+PKG_HAVE_WITH_MODULES([$1],[$2],[$3],[$4])
+
+AS_IF([test "$AS_TR_SH([with_]m4_tolower([$1]))" = "yes"],
+ [AC_DEFINE([HAVE_][$1], 1, [Enable ]m4_tolower([$1])[ support])])
+])dnl PKG_HAVE_DEFINE_WITH_MODULES
+
+# Copyright (C) 2002-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_AUTOMAKE_VERSION(VERSION)
+# ----------------------------
+# Automake X.Y traces this macro to ensure aclocal.m4 has been
+# generated from the m4 files accompanying Automake X.Y.
+# (This private macro should not be called outside this file.)
+AC_DEFUN([AM_AUTOMAKE_VERSION],
+[am__api_version='1.16'
+dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to
+dnl require some minimum version. Point them to the right macro.
+m4_if([$1], [1.16.5], [],
+ [AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl
+])
+
+# _AM_AUTOCONF_VERSION(VERSION)
+# -----------------------------
+# aclocal traces this macro to find the Autoconf version.
+# This is a private macro too. Using m4_define simplifies
+# the logic in aclocal, which can simply ignore this definition.
+m4_define([_AM_AUTOCONF_VERSION], [])
+
+# AM_SET_CURRENT_AUTOMAKE_VERSION
+# -------------------------------
+# Call AM_AUTOMAKE_VERSION and _AM_AUTOCONF_VERSION so they can be traced.
+# This function is AC_REQUIREd by AM_INIT_AUTOMAKE.
+AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION],
+[AM_AUTOMAKE_VERSION([1.16.5])dnl
+m4_ifndef([AC_AUTOCONF_VERSION],
+ [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl
+_AM_AUTOCONF_VERSION(m4_defn([AC_AUTOCONF_VERSION]))])
+
+# Copyright (C) 2011-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_PROG_AR([ACT-IF-FAIL])
+# -------------------------
+# Try to determine the archiver interface, and trigger the ar-lib wrapper
+# if it is needed. If the detection of archiver interface fails, run
+# ACT-IF-FAIL (default is to abort configure with a proper error message).
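+#
+# Example (illustrative sketch, not part of upstream Automake): a typical
+# ordering in a libtool-based configure.ac, since AM_PROG_AR must appear
+# before LT_INIT; macro arguments are omitted for brevity.
+#   AM_INIT_AUTOMAKE([foreign])
+#   AM_PROG_AR
+#   LT_INIT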
+AC_DEFUN([AM_PROG_AR],
+[AC_BEFORE([$0], [LT_INIT])dnl
+AC_BEFORE([$0], [AC_PROG_LIBTOOL])dnl
+AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
+AC_REQUIRE_AUX_FILE([ar-lib])dnl
+AC_CHECK_TOOLS([AR], [ar lib "link -lib"], [false])
+: ${AR=ar}
+
+AC_CACHE_CHECK([the archiver ($AR) interface], [am_cv_ar_interface],
+ [AC_LANG_PUSH([C])
+ am_cv_ar_interface=ar
+ AC_COMPILE_IFELSE([AC_LANG_SOURCE([[int some_variable = 0;]])],
+ [am_ar_try='$AR cru libconftest.a conftest.$ac_objext >&AS_MESSAGE_LOG_FD'
+ AC_TRY_EVAL([am_ar_try])
+ if test "$ac_status" -eq 0; then
+ am_cv_ar_interface=ar
+ else
+ am_ar_try='$AR -NOLOGO -OUT:conftest.lib conftest.$ac_objext >&AS_MESSAGE_LOG_FD'
+ AC_TRY_EVAL([am_ar_try])
+ if test "$ac_status" -eq 0; then
+ am_cv_ar_interface=lib
+ else
+ am_cv_ar_interface=unknown
+ fi
+ fi
+ rm -f conftest.lib libconftest.a
+ ])
+ AC_LANG_POP([C])])
+
+case $am_cv_ar_interface in
+ar)
+ ;;
+lib)
+ # Microsoft lib, so override with the ar-lib wrapper script.
+ # FIXME: It is wrong to rewrite AR.
+ # But if we don't then we get into trouble of one sort or another.
+ # A longer-term fix would be to have automake use am__AR in this case,
+ # and then we could set am__AR="$am_aux_dir/ar-lib \$(AR)" or something
+ # similar.
+ AR="$am_aux_dir/ar-lib $AR"
+ ;;
+unknown)
+ m4_default([$1],
+ [AC_MSG_ERROR([could not determine $AR interface])])
+ ;;
+esac
+AC_SUBST([AR])dnl
+])
+
+# AM_AUX_DIR_EXPAND -*- Autoconf -*-
+
+# Copyright (C) 2001-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets
+# $ac_aux_dir to '$srcdir/foo'. In other projects, it is set to
+# '$srcdir', '$srcdir/..', or '$srcdir/../..'.
+#
+# Of course, Automake must honor this variable whenever it calls a
+# tool from the auxiliary directory. The problem is that $srcdir (and
+# therefore $ac_aux_dir as well) can be either absolute or relative,
+# depending on how configure is run. This is pretty annoying, since
+# it makes $ac_aux_dir quite unusable in subdirectories: in the top
+# source directory, any form will work fine, but in subdirectories a
+# relative path needs to be adjusted first.
+#
+# $ac_aux_dir/missing
+# fails when called from a subdirectory if $ac_aux_dir is relative
+# $top_srcdir/$ac_aux_dir/missing
+# fails if $ac_aux_dir is absolute,
+# fails when called from a subdirectory in a VPATH build with
+# a relative $ac_aux_dir
+#
+# The reason of the latter failure is that $top_srcdir and $ac_aux_dir
+# are both prefixed by $srcdir. In an in-source build this is usually
+# harmless because $srcdir is '.', but things will break when you
+# start a VPATH build or use an absolute $srcdir.
+#
+# So we could use something similar to $top_srcdir/$ac_aux_dir/missing,
+# iff we strip the leading $srcdir from $ac_aux_dir. That would be:
+# am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"`
+# and then we would define $MISSING as
+# MISSING="\${SHELL} $am_aux_dir/missing"
+# This will work as long as MISSING is not called from configure, because
+# unfortunately $(top_srcdir) has no meaning in configure.
+# However there are other variables, like CC, which are often used in
+# configure, and could therefore not use this "fixed" $ac_aux_dir.
+#
+# Another solution, used here, is to always expand $ac_aux_dir to an
+# absolute PATH. The drawback is that using absolute paths prevents a
+# configured tree from being moved without reconfiguration.
+
+AC_DEFUN([AM_AUX_DIR_EXPAND],
+[AC_REQUIRE([AC_CONFIG_AUX_DIR_DEFAULT])dnl
+# Expand $ac_aux_dir to an absolute path.
+am_aux_dir=`cd "$ac_aux_dir" && pwd`
+])
+
+# AM_CONDITIONAL -*- Autoconf -*-
+
+# Copyright (C) 1997-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_CONDITIONAL(NAME, SHELL-CONDITION)
+# -------------------------------------
+# Define a conditional.
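+#
+# Example (illustrative sketch): defining a conditional in configure.ac
+# and using it in Makefile.am; WITH_DOCS and enable_docs are assumptions.
+#   configure.ac:  AM_CONDITIONAL([WITH_DOCS], [test "x$enable_docs" = xyes])
+#   Makefile.am:   if WITH_DOCS
+#                  SUBDIRS = doc
+#                  endif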
+AC_DEFUN([AM_CONDITIONAL],
+[AC_PREREQ([2.52])dnl
+ m4_if([$1], [TRUE], [AC_FATAL([$0: invalid condition: $1])],
+ [$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl
+AC_SUBST([$1_TRUE])dnl
+AC_SUBST([$1_FALSE])dnl
+_AM_SUBST_NOTMAKE([$1_TRUE])dnl
+_AM_SUBST_NOTMAKE([$1_FALSE])dnl
+m4_define([_AM_COND_VALUE_$1], [$2])dnl
+if $2; then
+ $1_TRUE=
+ $1_FALSE='#'
+else
+ $1_TRUE='#'
+ $1_FALSE=
+fi
+AC_CONFIG_COMMANDS_PRE(
+[if test -z "${$1_TRUE}" && test -z "${$1_FALSE}"; then
+ AC_MSG_ERROR([[conditional "$1" was never defined.
+Usually this means the macro was only invoked conditionally.]])
+fi])])
+
+# Copyright (C) 1999-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+
+# There are a few dirty hacks below to avoid letting 'AC_PROG_CC' be
+# written in clear, in which case automake, when reading aclocal.m4,
+# will think it sees a *use*, and therefore will trigger all its
+# C support machinery. Also note that it means that autoscan, seeing
+# CC etc. in the Makefile, will ask for an AC_PROG_CC use...
+
+
+# _AM_DEPENDENCIES(NAME)
+# ----------------------
+# See how the compiler implements dependency checking.
+# NAME is "CC", "CXX", "OBJC", "OBJCXX", "UPC", or "GJC".
+# We try a few techniques and use that to set a single cache variable.
+#
+# We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was
+# modified to invoke _AM_DEPENDENCIES(CC); we would have a circular
+# dependency, and given that the user is not expected to run this macro,
+# just rely on AC_PROG_CC.
+AC_DEFUN([_AM_DEPENDENCIES],
+[AC_REQUIRE([AM_SET_DEPDIR])dnl
+AC_REQUIRE([AM_OUTPUT_DEPENDENCY_COMMANDS])dnl
+AC_REQUIRE([AM_MAKE_INCLUDE])dnl
+AC_REQUIRE([AM_DEP_TRACK])dnl
+
+m4_if([$1], [CC], [depcc="$CC" am_compiler_list=],
+ [$1], [CXX], [depcc="$CXX" am_compiler_list=],
+ [$1], [OBJC], [depcc="$OBJC" am_compiler_list='gcc3 gcc'],
+ [$1], [OBJCXX], [depcc="$OBJCXX" am_compiler_list='gcc3 gcc'],
+ [$1], [UPC], [depcc="$UPC" am_compiler_list=],
+ [$1], [GCJ], [depcc="$GCJ" am_compiler_list='gcc3 gcc'],
+ [depcc="$$1" am_compiler_list=])
+
+AC_CACHE_CHECK([dependency style of $depcc],
+ [am_cv_$1_dependencies_compiler_type],
+[if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then
+ # We make a subdir and do the tests there. Otherwise we can end up
+ # making bogus files that we don't know about and never remove. For
+ # instance it was reported that on HP-UX the gcc test will end up
+ # making a dummy file named 'D' -- because '-MD' means "put the output
+ # in D".
+ rm -rf conftest.dir
+ mkdir conftest.dir
+ # Copy depcomp to subdir because otherwise we won't find it if we're
+ # using a relative directory.
+ cp "$am_depcomp" conftest.dir
+ cd conftest.dir
+ # We will build objects and dependencies in a subdirectory because
+ # it helps to detect inapplicable dependency modes. For instance
+ # both Tru64's cc and ICC support -MD to output dependencies as a
+ # side effect of compilation, but ICC will put the dependencies in
+ # the current directory while Tru64 will put them in the object
+ # directory.
+ mkdir sub
+
+ am_cv_$1_dependencies_compiler_type=none
+ if test "$am_compiler_list" = ""; then
+ am_compiler_list=`sed -n ['s/^#*\([a-zA-Z0-9]*\))$/\1/p'] < ./depcomp`
+ fi
+ am__universal=false
+ m4_case([$1], [CC],
+ [case " $depcc " in #(
+ *\ -arch\ *\ -arch\ *) am__universal=true ;;
+ esac],
+ [CXX],
+ [case " $depcc " in #(
+ *\ -arch\ *\ -arch\ *) am__universal=true ;;
+ esac])
+
+ for depmode in $am_compiler_list; do
+ # Setup a source with many dependencies, because some compilers
+ # like to wrap large dependency lists on column 80 (with \), and
+ # we should not choose a depcomp mode which is confused by this.
+ #
+ # We need to recreate these files for each test, as the compiler may
+ # overwrite some of them when testing with obscure command lines.
+ # This happens at least with the AIX C compiler.
+ : > sub/conftest.c
+ for i in 1 2 3 4 5 6; do
+ echo '#include "conftst'$i'.h"' >> sub/conftest.c
+ # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with
+ # Solaris 10 /bin/sh.
+ echo '/* dummy */' > sub/conftst$i.h
+ done
+ echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf
+
+ # We check with '-c' and '-o' for the sake of the "dashmstdout"
+ # mode. It turns out that the SunPro C++ compiler does not properly
+ # handle '-M -o', and we need to detect this. Also, some Intel
+ # versions had trouble with output in subdirs.
+ am__obj=sub/conftest.${OBJEXT-o}
+ am__minus_obj="-o $am__obj"
+ case $depmode in
+ gcc)
+ # This depmode causes a compiler race in universal mode.
+ test "$am__universal" = false || continue
+ ;;
+ nosideeffect)
+ # After this tag, mechanisms are not by side-effect, so they'll
+ # only be used when explicitly requested.
+ if test "x$enable_dependency_tracking" = xyes; then
+ continue
+ else
+ break
+ fi
+ ;;
+ msvc7 | msvc7msys | msvisualcpp | msvcmsys)
+ # This compiler won't grok '-c -o', but also, the minuso test has
+ # not run yet. These depmodes are late enough in the game, and
+ # so weak that their functioning should not be impacted.
+ am__obj=conftest.${OBJEXT-o}
+ am__minus_obj=
+ ;;
+ none) break ;;
+ esac
+ if depmode=$depmode \
+ source=sub/conftest.c object=$am__obj \
+ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \
+ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \
+ >/dev/null 2>conftest.err &&
+ grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 &&
+ grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 &&
+ grep $am__obj sub/conftest.Po > /dev/null 2>&1 &&
+ ${MAKE-make} -s -f confmf > /dev/null 2>&1; then
+ # icc doesn't choke on unknown options, it will just issue warnings
+ # or remarks (even with -Werror). So we grep stderr for any message
+ # that says an option was ignored or not supported.
+ # When given -MP, icc 7.0 and 7.1 complain thusly:
+ # icc: Command line warning: ignoring option '-M'; no argument required
+ # The diagnosis changed in icc 8.0:
+ # icc: Command line remark: option '-MP' not supported
+ if (grep 'ignoring option' conftest.err ||
+ grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else
+ am_cv_$1_dependencies_compiler_type=$depmode
+ break
+ fi
+ fi
+ done
+
+ cd ..
+ rm -rf conftest.dir
+else
+ am_cv_$1_dependencies_compiler_type=none
+fi
+])
+AC_SUBST([$1DEPMODE], [depmode=$am_cv_$1_dependencies_compiler_type])
+AM_CONDITIONAL([am__fastdep$1], [
+ test "x$enable_dependency_tracking" != xno \
+ && test "$am_cv_$1_dependencies_compiler_type" = gcc3])
+])
+
+
+# AM_SET_DEPDIR
+# -------------
+# Choose a directory name for dependency files.
+# This macro is AC_REQUIREd in _AM_DEPENDENCIES.
+AC_DEFUN([AM_SET_DEPDIR],
+[AC_REQUIRE([AM_SET_LEADING_DOT])dnl
+AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl
+])
+
+
+# AM_DEP_TRACK
+# ------------
+AC_DEFUN([AM_DEP_TRACK],
+[AC_ARG_ENABLE([dependency-tracking], [dnl
+AS_HELP_STRING(
+ [--enable-dependency-tracking],
+ [do not reject slow dependency extractors])
+AS_HELP_STRING(
+ [--disable-dependency-tracking],
+ [speeds up one-time build])])
+if test "x$enable_dependency_tracking" != xno; then
+ am_depcomp="$ac_aux_dir/depcomp"
+ AMDEPBACKSLASH='\'
+ am__nodep='_no'
+fi
+AM_CONDITIONAL([AMDEP], [test "x$enable_dependency_tracking" != xno])
+AC_SUBST([AMDEPBACKSLASH])dnl
+_AM_SUBST_NOTMAKE([AMDEPBACKSLASH])dnl
+AC_SUBST([am__nodep])dnl
+_AM_SUBST_NOTMAKE([am__nodep])dnl
+])
+
+# Generate code to set up dependency tracking. -*- Autoconf -*-
+
+# Copyright (C) 1999-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# _AM_OUTPUT_DEPENDENCY_COMMANDS
+# ------------------------------
+AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS],
+[{
+ # Older Autoconf quotes --file arguments for eval, but not when files
+ # are listed without --file. Let's play safe and only enable the eval
+ # if we detect the quoting.
+ # TODO: see whether this extra hack can be removed once we start
+ # requiring Autoconf 2.70 or later.
+ AS_CASE([$CONFIG_FILES],
+ [*\'*], [eval set x "$CONFIG_FILES"],
+ [*], [set x $CONFIG_FILES])
+ shift
+ # Used to flag and report bootstrapping failures.
+ am_rc=0
+ for am_mf
+ do
+ # Strip MF so we end up with the name of the file.
+ am_mf=`AS_ECHO(["$am_mf"]) | sed -e 's/:.*$//'`
+ # Check whether this is an Automake generated Makefile which includes
+ # dependency-tracking related rules and includes.
+ # Grep'ing the whole file directly is not great: AIX grep has a line
+ # limit of 2048, but all seds we know of understand at least 4000.
+ sed -n 's,^am--depfiles:.*,X,p' "$am_mf" | grep X >/dev/null 2>&1 \
+ || continue
+ am_dirpart=`AS_DIRNAME(["$am_mf"])`
+ am_filepart=`AS_BASENAME(["$am_mf"])`
+ AM_RUN_LOG([cd "$am_dirpart" \
+ && sed -e '/# am--include-marker/d' "$am_filepart" \
+ | $MAKE -f - am--depfiles]) || am_rc=$?
+ done
+ if test $am_rc -ne 0; then
+ AC_MSG_FAILURE([Something went wrong bootstrapping makefile fragments
+ for automatic dependency tracking. If GNU make was not used, consider
+ re-running the configure script with MAKE="gmake" (or whatever is
+ necessary). You can also try re-running configure with the
+ '--disable-dependency-tracking' option to at least be able to build
+ the package (albeit without support for automatic dependency tracking).])
+ fi
+ AS_UNSET([am_dirpart])
+ AS_UNSET([am_filepart])
+ AS_UNSET([am_mf])
+ AS_UNSET([am_rc])
+ rm -f conftest-deps.mk
+}
+])# _AM_OUTPUT_DEPENDENCY_COMMANDS
+
+
+# AM_OUTPUT_DEPENDENCY_COMMANDS
+# -----------------------------
+# This macro should only be invoked once -- use via AC_REQUIRE.
+#
+# This code is only required when automatic dependency tracking is enabled.
+# This creates each '.Po' and '.Plo' makefile fragment that we'll need in
+# order to bootstrap the dependency handling code.
+AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS],
+[AC_CONFIG_COMMANDS([depfiles],
+ [test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS],
+ [AMDEP_TRUE="$AMDEP_TRUE" MAKE="${MAKE-make}"])])
+
+# Do all the work for Automake. -*- Autoconf -*-
+
+# Copyright (C) 1996-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This macro actually does too much. Some checks are only needed if
+# your package does certain things. But this isn't really a big deal.
+
+dnl Redefine AC_PROG_CC to automatically invoke _AM_PROG_CC_C_O.
+m4_define([AC_PROG_CC],
+m4_defn([AC_PROG_CC])
+[_AM_PROG_CC_C_O
+])
+
+# AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE])
+# AM_INIT_AUTOMAKE([OPTIONS])
+# -----------------------------------------------
+# The call with PACKAGE and VERSION arguments is the old style
+# call (pre autoconf-2.50), which is being phased out. PACKAGE
+# and VERSION should now be passed to AC_INIT and removed from
+# the call to AM_INIT_AUTOMAKE.
+# We support both call styles for the transition. After
+# the next Automake release, Autoconf can make the AC_INIT
+# arguments mandatory, and then we can depend on a new Autoconf
+# release and drop the old call support.
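+#
+# Example (illustrative sketch): the modern calling convention, with the
+# package name and version passed to AC_INIT; all names are placeholders.
+#   AC_INIT([mypackage], [1.0], [bugs@example.org])
+#   AM_INIT_AUTOMAKE([foreign subdir-objects -Wall])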
+AC_DEFUN([AM_INIT_AUTOMAKE],
+[AC_PREREQ([2.65])dnl
+m4_ifdef([_$0_ALREADY_INIT],
+ [m4_fatal([$0 expanded multiple times
+]m4_defn([_$0_ALREADY_INIT]))],
+ [m4_define([_$0_ALREADY_INIT], m4_expansion_stack)])dnl
+dnl Autoconf wants to disallow AM_ names. We explicitly allow
+dnl the ones we care about.
+m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl
+AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl
+AC_REQUIRE([AC_PROG_INSTALL])dnl
+if test "`cd $srcdir && pwd`" != "`pwd`"; then
+ # Use -I$(srcdir) only when $(srcdir) != ., so that make's output
+ # is not polluted with repeated "-I."
+ AC_SUBST([am__isrc], [' -I$(srcdir)'])_AM_SUBST_NOTMAKE([am__isrc])dnl
+ # test to see if srcdir already configured
+ if test -f $srcdir/config.status; then
+ AC_MSG_ERROR([source directory already configured; run "make distclean" there first])
+ fi
+fi
+
+# test whether we have cygpath
+if test -z "$CYGPATH_W"; then
+ if (cygpath --version) >/dev/null 2>/dev/null; then
+ CYGPATH_W='cygpath -w'
+ else
+ CYGPATH_W=echo
+ fi
+fi
+AC_SUBST([CYGPATH_W])
+
+# Define the identity of the package.
+dnl Distinguish between old-style and new-style calls.
+m4_ifval([$2],
+[AC_DIAGNOSE([obsolete],
+ [$0: two- and three-arguments forms are deprecated.])
+m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl
+ AC_SUBST([PACKAGE], [$1])dnl
+ AC_SUBST([VERSION], [$2])],
+[_AM_SET_OPTIONS([$1])dnl
+dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT.
+m4_if(
+ m4_ifset([AC_PACKAGE_NAME], [ok]):m4_ifset([AC_PACKAGE_VERSION], [ok]),
+ [ok:ok],,
+ [m4_fatal([AC_INIT should be called with package and version arguments])])dnl
+ AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl
+ AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl
+
+_AM_IF_OPTION([no-define],,
+[AC_DEFINE_UNQUOTED([PACKAGE], ["$PACKAGE"], [Name of package])
+ AC_DEFINE_UNQUOTED([VERSION], ["$VERSION"], [Version number of package])])dnl
+
+# Some tools Automake needs.
+AC_REQUIRE([AM_SANITY_CHECK])dnl
+AC_REQUIRE([AC_ARG_PROGRAM])dnl
+AM_MISSING_PROG([ACLOCAL], [aclocal-${am__api_version}])
+AM_MISSING_PROG([AUTOCONF], [autoconf])
+AM_MISSING_PROG([AUTOMAKE], [automake-${am__api_version}])
+AM_MISSING_PROG([AUTOHEADER], [autoheader])
+AM_MISSING_PROG([MAKEINFO], [makeinfo])
+AC_REQUIRE([AM_PROG_INSTALL_SH])dnl
+AC_REQUIRE([AM_PROG_INSTALL_STRIP])dnl
+AC_REQUIRE([AC_PROG_MKDIR_P])dnl
+# For better backward compatibility. To be removed once Automake 1.9.x
+# dies out for good. For more background, see:
+#
+#
+AC_SUBST([mkdir_p], ['$(MKDIR_P)'])
+# We need awk for the "check" target (and possibly the TAP driver). The
+# system "awk" is bad on some platforms.
+AC_REQUIRE([AC_PROG_AWK])dnl
+AC_REQUIRE([AC_PROG_MAKE_SET])dnl
+AC_REQUIRE([AM_SET_LEADING_DOT])dnl
+_AM_IF_OPTION([tar-ustar], [_AM_PROG_TAR([ustar])],
+ [_AM_IF_OPTION([tar-pax], [_AM_PROG_TAR([pax])],
+ [_AM_PROG_TAR([v7])])])
+_AM_IF_OPTION([no-dependencies],,
+[AC_PROVIDE_IFELSE([AC_PROG_CC],
+ [_AM_DEPENDENCIES([CC])],
+ [m4_define([AC_PROG_CC],
+ m4_defn([AC_PROG_CC])[_AM_DEPENDENCIES([CC])])])dnl
+AC_PROVIDE_IFELSE([AC_PROG_CXX],
+ [_AM_DEPENDENCIES([CXX])],
+ [m4_define([AC_PROG_CXX],
+ m4_defn([AC_PROG_CXX])[_AM_DEPENDENCIES([CXX])])])dnl
+AC_PROVIDE_IFELSE([AC_PROG_OBJC],
+ [_AM_DEPENDENCIES([OBJC])],
+ [m4_define([AC_PROG_OBJC],
+ m4_defn([AC_PROG_OBJC])[_AM_DEPENDENCIES([OBJC])])])dnl
+AC_PROVIDE_IFELSE([AC_PROG_OBJCXX],
+ [_AM_DEPENDENCIES([OBJCXX])],
+ [m4_define([AC_PROG_OBJCXX],
+ m4_defn([AC_PROG_OBJCXX])[_AM_DEPENDENCIES([OBJCXX])])])dnl
+])
+# Variables for tags utilities; see am/tags.am
+if test -z "$CTAGS"; then
+ CTAGS=ctags
+fi
+AC_SUBST([CTAGS])
+if test -z "$ETAGS"; then
+ ETAGS=etags
+fi
+AC_SUBST([ETAGS])
+if test -z "$CSCOPE"; then
+ CSCOPE=cscope
+fi
+AC_SUBST([CSCOPE])
+
+AC_REQUIRE([AM_SILENT_RULES])dnl
+dnl The testsuite driver may need to know about EXEEXT, so add the
+dnl 'am__EXEEXT' conditional if _AM_COMPILER_EXEEXT was seen. This
+dnl macro is hooked onto _AC_COMPILER_EXEEXT early, see below.
+AC_CONFIG_COMMANDS_PRE(dnl
+[m4_provide_if([_AM_COMPILER_EXEEXT],
+ [AM_CONDITIONAL([am__EXEEXT], [test -n "$EXEEXT"])])])dnl
+
+# POSIX will say in a future version that running "rm -f" with no argument
+# is OK; and we want to be able to make that assumption in our Makefile
+# recipes. So use an aggressive probe to check that the usage we want is
+# actually supported "in the wild" to an acceptable degree.
+# See automake bug#10828.
+# To make any issue more visible, cause the running configure to be aborted
+# by default if the 'rm' program in use doesn't match our expectations; the
+# user can still override this though.
+if rm -f && rm -fr && rm -rf; then : OK; else
+ cat >&2 <<'END'
+Oops!
+
+Your 'rm' program seems unable to run without file operands specified
+on the command line, even when the '-f' option is present. This is contrary
+to the behaviour of most rm programs out there, and does not conform to
+the upcoming POSIX standard:
+
+Please tell bug-automake@gnu.org about your system, including the value
+of your $PATH and any error possibly output before this message. This
+can help us improve future automake versions.
+
+END
+ if test x"$ACCEPT_INFERIOR_RM_PROGRAM" = x"yes"; then
+ echo 'Configuration will proceed anyway, since you have set the' >&2
+ echo 'ACCEPT_INFERIOR_RM_PROGRAM variable to "yes"' >&2
+ echo >&2
+ else
+ cat >&2 <<'END'
+Aborting the configuration process, to ensure you take notice of the issue.
+
+You can download and install GNU coreutils to get an 'rm' implementation
+that behaves properly: <https://www.gnu.org/software/coreutils/>.
+
+If you want to complete the configuration process using your problematic
+'rm' anyway, export the environment variable ACCEPT_INFERIOR_RM_PROGRAM
+to "yes", and re-run configure.
+
+END
+ AC_MSG_ERROR([Your 'rm' program is bad, sorry.])
+ fi
+fi
+dnl The trailing newline in this macro's definition is deliberate, for
+dnl backward compatibility and to allow trailing 'dnl'-style comments
+dnl after the AM_INIT_AUTOMAKE invocation. See automake bug#16841.
+])
+
+dnl Hook into '_AC_COMPILER_EXEEXT' early to learn its expansion. Do not
+dnl add the conditional right here, as _AC_COMPILER_EXEEXT may be further
+dnl mangled by Autoconf and run in a shell conditional statement.
+m4_define([_AC_COMPILER_EXEEXT],
+m4_defn([_AC_COMPILER_EXEEXT])[m4_provide([_AM_COMPILER_EXEEXT])])
+
+# When config.status generates a header, we must update the stamp-h file.
+# This file resides in the same directory as the config header
+# that is generated. The stamp files are numbered to have different names.
+
+# Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the
+# loop where config.status creates the headers, so we can generate
+# our stamp files there.
+AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK],
+[# Compute $1's index in $config_headers.
+_am_arg=$1
+_am_stamp_count=1
+for _am_header in $config_headers :; do
+ case $_am_header in
+ $_am_arg | $_am_arg:* )
+ break ;;
+ * )
+ _am_stamp_count=`expr $_am_stamp_count + 1` ;;
+ esac
+done
+echo "timestamp for $_am_arg" >`AS_DIRNAME(["$_am_arg"])`/stamp-h[]$_am_stamp_count])
+
+# Copyright (C) 2001-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_PROG_INSTALL_SH
+# ------------------
+# Define $install_sh.
+AC_DEFUN([AM_PROG_INSTALL_SH],
+[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
+if test x"${install_sh+set}" != xset; then
+ case $am_aux_dir in
+ *\ * | *\ *)
+ install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;;
+ *)
+ install_sh="\${SHELL} $am_aux_dir/install-sh"
+ esac
+fi
+AC_SUBST([install_sh])])
+
+# Copyright (C) 2003-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# Check whether the underlying file-system supports filenames
+# with a leading dot. For instance MS-DOS doesn't.
+AC_DEFUN([AM_SET_LEADING_DOT],
+[rm -rf .tst 2>/dev/null
+mkdir .tst 2>/dev/null
+if test -d .tst; then
+ am__leading_dot=.
+else
+ am__leading_dot=_
+fi
+rmdir .tst 2>/dev/null
+AC_SUBST([am__leading_dot])])
+
+# Check to see how 'make' treats includes. -*- Autoconf -*-
+
+# Copyright (C) 2001-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_MAKE_INCLUDE()
+# -----------------
+# Check whether make has an 'include' directive that can support all
+# the idioms we need for our automatic dependency tracking code.
+AC_DEFUN([AM_MAKE_INCLUDE],
+[AC_MSG_CHECKING([whether ${MAKE-make} supports the include directive])
+cat > confinc.mk << 'END'
+am__doit:
+ @echo this is the am__doit target >confinc.out
+.PHONY: am__doit
+END
+am__include="#"
+am__quote=
+# BSD make does it like this.
+echo '.include "confinc.mk" # ignored' > confmf.BSD
+# Other make implementations (GNU, Solaris 10, AIX) do it like this.
+echo 'include confinc.mk # ignored' > confmf.GNU
+_am_result=no
+for s in GNU BSD; do
+ AM_RUN_LOG([${MAKE-make} -f confmf.$s && cat confinc.out])
+ AS_CASE([$?:`cat confinc.out 2>/dev/null`],
+ ['0:this is the am__doit target'],
+ [AS_CASE([$s],
+ [BSD], [am__include='.include' am__quote='"'],
+ [am__include='include' am__quote=''])])
+ if test "$am__include" != "#"; then
+ _am_result="yes ($s style)"
+ break
+ fi
+done
+rm -f confinc.* confmf.*
+AC_MSG_RESULT([${_am_result}])
+AC_SUBST([am__include])])
+AC_SUBST([am__quote])])
+
+# Fake the existence of programs that GNU maintainers use. -*- Autoconf -*-
+
+# Copyright (C) 1997-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_MISSING_PROG(NAME, PROGRAM)
+# ------------------------------
+AC_DEFUN([AM_MISSING_PROG],
+[AC_REQUIRE([AM_MISSING_HAS_RUN])
+$1=${$1-"${am_missing_run}$2"}
+AC_SUBST($1)])
+
+# AM_MISSING_HAS_RUN
+# ------------------
+# Define MISSING if not defined so far and test if it is modern enough.
+# If it is, set am_missing_run to use it, otherwise, to nothing.
+AC_DEFUN([AM_MISSING_HAS_RUN],
+[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
+AC_REQUIRE_AUX_FILE([missing])dnl
+if test x"${MISSING+set}" != xset; then
+ MISSING="\${SHELL} '$am_aux_dir/missing'"
+fi
+# Use eval to expand $SHELL
+if eval "$MISSING --is-lightweight"; then
+ am_missing_run="$MISSING "
+else
+ am_missing_run=
+ AC_MSG_WARN(['missing' script is too old or missing])
+fi
+])
+
+# Helper functions for option handling. -*- Autoconf -*-
+
+# Copyright (C) 2001-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# _AM_MANGLE_OPTION(NAME)
+# -----------------------
+AC_DEFUN([_AM_MANGLE_OPTION],
+[[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])])
+
+# _AM_SET_OPTION(NAME)
+# --------------------
+# Set option NAME. Presently that only means defining a flag for this option.
+AC_DEFUN([_AM_SET_OPTION],
+[m4_define(_AM_MANGLE_OPTION([$1]), [1])])
+
+# _AM_SET_OPTIONS(OPTIONS)
+# ------------------------
+# OPTIONS is a space-separated list of Automake options.
+AC_DEFUN([_AM_SET_OPTIONS],
+[m4_foreach_w([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])])
+
+# _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET])
+# -------------------------------------------
+# Execute IF-SET if OPTION is set, IF-NOT-SET otherwise.
+AC_DEFUN([_AM_IF_OPTION],
+[m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])])
+
+# Copyright (C) 1999-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# _AM_PROG_CC_C_O
+# ---------------
+# Like AC_PROG_CC_C_O, but changed for automake. We rewrite AC_PROG_CC
+# to automatically call this.
+AC_DEFUN([_AM_PROG_CC_C_O],
+[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
+AC_REQUIRE_AUX_FILE([compile])dnl
+AC_LANG_PUSH([C])dnl
+AC_CACHE_CHECK(
+ [whether $CC understands -c and -o together],
+ [am_cv_prog_cc_c_o],
+ [AC_LANG_CONFTEST([AC_LANG_PROGRAM([])])
+ # Make sure it works both with $CC and with simple cc.
+ # Following AC_PROG_CC_C_O, we do the test twice because some
+ # compilers refuse to overwrite an existing .o file with -o,
+ # though they will create one.
+ am_cv_prog_cc_c_o=yes
+ for am_i in 1 2; do
+ if AM_RUN_LOG([$CC -c conftest.$ac_ext -o conftest2.$ac_objext]) \
+ && test -f conftest2.$ac_objext; then
+ : OK
+ else
+ am_cv_prog_cc_c_o=no
+ break
+ fi
+ done
+ rm -f core conftest*
+ unset am_i])
+if test "$am_cv_prog_cc_c_o" != yes; then
+ # Losing compiler, so override with the script.
+ # FIXME: It is wrong to rewrite CC.
+ # But if we don't then we get into trouble of one sort or another.
+ # A longer-term fix would be to have automake use am__CC in this case,
+ # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)"
+ CC="$am_aux_dir/compile $CC"
+fi
+AC_LANG_POP([C])])
+
+# For backward compatibility.
+AC_DEFUN_ONCE([AM_PROG_CC_C_O], [AC_REQUIRE([AC_PROG_CC])])
+
+# Copyright (C) 2001-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_RUN_LOG(COMMAND)
+# -------------------
+# Run COMMAND, save the exit status in ac_status, and log it.
+# (This has been adapted from Autoconf's _AC_RUN_LOG macro.)
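+#
+# Example (illustrative sketch): log a probe command and branch on its
+# exit status; the pkg-config probe shown is an assumption.
+#   AM_RUN_LOG([$PKG_CONFIG --version]) && am_have_pkg_config=yes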
+AC_DEFUN([AM_RUN_LOG],
+[{ echo "$as_me:$LINENO: $1" >&AS_MESSAGE_LOG_FD
+ ($1) >&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
+ (exit $ac_status); }])
+
+# Check to make sure that the build environment is sane. -*- Autoconf -*-
+
+# Copyright (C) 1996-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_SANITY_CHECK
+# ---------------
+AC_DEFUN([AM_SANITY_CHECK],
+[AC_MSG_CHECKING([whether build environment is sane])
+# Reject unsafe characters in $srcdir or the absolute working directory
+# name. Accept space and tab only in the latter.
+am_lf='
+'
+case `pwd` in
+ *[[\\\"\#\$\&\'\`$am_lf]]*)
+ AC_MSG_ERROR([unsafe absolute working directory name]);;
+esac
+case $srcdir in
+ *[[\\\"\#\$\&\'\`$am_lf\ \ ]]*)
+ AC_MSG_ERROR([unsafe srcdir value: '$srcdir']);;
+esac
+
+# Do 'set' in a subshell so we don't clobber the current shell's
+# arguments. Must try -L first in case configure is actually a
+# symlink; some systems play weird games with the mod time of symlinks
+# (eg FreeBSD returns the mod time of the symlink's containing
+# directory).
+if (
+ am_has_slept=no
+ for am_try in 1 2; do
+ echo "timestamp, slept: $am_has_slept" > conftest.file
+ set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null`
+ if test "$[*]" = "X"; then
+ # -L didn't work.
+ set X `ls -t "$srcdir/configure" conftest.file`
+ fi
+ if test "$[*]" != "X $srcdir/configure conftest.file" \
+ && test "$[*]" != "X conftest.file $srcdir/configure"; then
+
+ # If neither matched, then we have a broken ls. This can happen
+ # if, for instance, CONFIG_SHELL is bash and it inherits a
+ # broken ls alias from the environment. This has actually
+ # happened. Such a system could not be considered "sane".
+ AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken
+ alias in your environment])
+ fi
+ if test "$[2]" = conftest.file || test $am_try -eq 2; then
+ break
+ fi
+ # Just in case.
+ sleep 1
+ am_has_slept=yes
+ done
+ test "$[2]" = conftest.file
+ )
+then
+ # Ok.
+ :
+else
+ AC_MSG_ERROR([newly created file is older than distributed files!
+Check your system clock])
+fi
+AC_MSG_RESULT([yes])
+# If we didn't sleep, we still need to ensure time stamps of config.status and
+# generated files are strictly newer.
+am_sleep_pid=
+if grep 'slept: no' conftest.file >/dev/null 2>&1; then
+ ( sleep 1 ) &
+ am_sleep_pid=$!
+fi
+AC_CONFIG_COMMANDS_PRE(
+ [AC_MSG_CHECKING([that generated files are newer than configure])
+ if test -n "$am_sleep_pid"; then
+ # Hide warnings about reused PIDs.
+ wait $am_sleep_pid 2>/dev/null
+ fi
+ AC_MSG_RESULT([done])])
+rm -f conftest.file
+])
+
+# Copyright (C) 2009-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_SILENT_RULES([DEFAULT])
+# --------------------------
+# Enable less verbose build rules; with the default set to DEFAULT
+# ("yes" being less verbose, "no" or empty being verbose).
+AC_DEFUN([AM_SILENT_RULES],
+[AC_ARG_ENABLE([silent-rules], [dnl
+AS_HELP_STRING(
+ [--enable-silent-rules],
+ [less verbose build output (undo: "make V=1")])
+AS_HELP_STRING(
+ [--disable-silent-rules],
+ [verbose build output (undo: "make V=0")])dnl
+])
+case $enable_silent_rules in @%:@ (((
+ yes) AM_DEFAULT_VERBOSITY=0;;
+ no) AM_DEFAULT_VERBOSITY=1;;
+ *) AM_DEFAULT_VERBOSITY=m4_if([$1], [yes], [0], [1]);;
+esac
+dnl
+dnl A few 'make' implementations (e.g., NonStop OS and NextStep)
+dnl do not support nested variable expansions.
+dnl See automake bug#9928 and bug#10237.
+am_make=${MAKE-make}
+AC_CACHE_CHECK([whether $am_make supports nested variables],
+ [am_cv_make_support_nested_variables],
+ [if AS_ECHO([['TRUE=$(BAR$(V))
+BAR0=false
+BAR1=true
+V=1
+am__doit:
+ @$(TRUE)
+.PHONY: am__doit']]) | $am_make -f - >/dev/null 2>&1; then
+ am_cv_make_support_nested_variables=yes
+else
+ am_cv_make_support_nested_variables=no
+fi])
+if test $am_cv_make_support_nested_variables = yes; then
+ dnl Using '$V' instead of '$(V)' breaks IRIX make.
+ AM_V='$(V)'
+ AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)'
+else
+ AM_V=$AM_DEFAULT_VERBOSITY
+ AM_DEFAULT_V=$AM_DEFAULT_VERBOSITY
+fi
+AC_SUBST([AM_V])dnl
+AM_SUBST_NOTMAKE([AM_V])dnl
+AC_SUBST([AM_DEFAULT_V])dnl
+AM_SUBST_NOTMAKE([AM_DEFAULT_V])dnl
+AC_SUBST([AM_DEFAULT_VERBOSITY])dnl
+AM_BACKSLASH='\'
+AC_SUBST([AM_BACKSLASH])dnl
+_AM_SUBST_NOTMAKE([AM_BACKSLASH])dnl
+])
+
+# Copyright (C) 2001-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_PROG_INSTALL_STRIP
+# ---------------------
+# One issue with vendor 'install' (even GNU) is that you can't
+# specify the program used to strip binaries. This is especially
+# annoying in cross-compiling environments, where the build's strip
+# is unlikely to handle the host's binaries.
+# Fortunately install-sh will honor a STRIPPROG variable, so we
+# always use install-sh in "make install-strip", and initialize
+# STRIPPROG with the value of the STRIP variable (set by the user).
+AC_DEFUN([AM_PROG_INSTALL_STRIP],
+[AC_REQUIRE([AM_PROG_INSTALL_SH])dnl
+# Installed binaries are usually stripped using 'strip' when the user
+# run "make install-strip". However 'strip' might not be the right
+# tool to use in cross-compilation environments, therefore Automake
+# will honor the 'STRIP' environment variable to overrule this program.
+dnl Don't test for $cross_compiling = yes, because it might be 'maybe'.
+if test "$cross_compiling" != no; then
+ AC_CHECK_TOOL([STRIP], [strip], :)
+fi
+INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s"
+AC_SUBST([INSTALL_STRIP_PROGRAM])])
+
+# Copyright (C) 2006-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# _AM_SUBST_NOTMAKE(VARIABLE)
+# ---------------------------
+# Prevent Automake from outputting VARIABLE = @VARIABLE@ in Makefile.in.
+# This macro is traced by Automake.
+AC_DEFUN([_AM_SUBST_NOTMAKE])
+
+# AM_SUBST_NOTMAKE(VARIABLE)
+# --------------------------
+# Public sister of _AM_SUBST_NOTMAKE.
+AC_DEFUN([AM_SUBST_NOTMAKE], [_AM_SUBST_NOTMAKE($@)])
+
+# Check how to create a tarball. -*- Autoconf -*-
+
+# Copyright (C) 2004-2021 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# _AM_PROG_TAR(FORMAT)
+# --------------------
+# Check how to create a tarball in format FORMAT.
+# FORMAT should be one of 'v7', 'ustar', or 'pax'.
+#
+# Substitute a variable $(am__tar) that is a command
+# writing to stdout a FORMAT-tarball containing the directory
+# $tardir.
+# tardir=directory && $(am__tar) > result.tar
+#
+# Substitute a variable $(am__untar) that extract such
+# a tarball read from stdin.
+# $(am__untar) < result.tar
+#
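+# Example (illustrative sketch): this internal macro is normally reached
+# through an Automake option rather than called directly, e.g.
+#   AM_INIT_AUTOMAKE([1.16 tar-ustar])
+# expands _AM_PROG_TAR([ustar]) and substitutes $(am__tar)/$(am__untar).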
+AC_DEFUN([_AM_PROG_TAR],
+[# Always define AMTAR for backward compatibility. Yes, it's still used
+# in the wild :-( We should find a proper way to deprecate it ...
+AC_SUBST([AMTAR], ['$${TAR-tar}'])
+
+# We'll loop over all known methods to create a tar archive until one works.
+_am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none'
+
+m4_if([$1], [v7],
+ [am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -'],
+
+ [m4_case([$1],
+ [ustar],
+ [# The POSIX 1988 'ustar' format is defined with fixed-size fields.
+ # There is notably a 21-bit limit for the UID and the GID. In fact,
+ # the 'pax' utility can hang on bigger UID/GID (see automake bug#8343
+ # and bug#13588).
+ am_max_uid=2097151 # 2^21 - 1
+ am_max_gid=$am_max_uid
+ # The $UID and $GID variables are not portable, so we need to resort
+ # to the POSIX-mandated id(1) utility. Errors in the 'id' calls
+ # below are definitely unexpected, so allow the users to see them
+ # (that is, avoid stderr redirection).
+ am_uid=`id -u || echo unknown`
+ am_gid=`id -g || echo unknown`
+ AC_MSG_CHECKING([whether UID '$am_uid' is supported by ustar format])
+ if test $am_uid -le $am_max_uid; then
+ AC_MSG_RESULT([yes])
+ else
+ AC_MSG_RESULT([no])
+ _am_tools=none
+ fi
+ AC_MSG_CHECKING([whether GID '$am_gid' is supported by ustar format])
+ if test $am_gid -le $am_max_gid; then
+ AC_MSG_RESULT([yes])
+ else
+ AC_MSG_RESULT([no])
+ _am_tools=none
+ fi],
+
+ [pax],
+ [],
+
+ [m4_fatal([Unknown tar format])])
+
+ AC_MSG_CHECKING([how to create a $1 tar archive])
+
+ # Go ahead even if we have the value already cached. We do so because we
+ # need to set the values for the 'am__tar' and 'am__untar' variables.
+ _am_tools=${am_cv_prog_tar_$1-$_am_tools}
+
+ for _am_tool in $_am_tools; do
+ case $_am_tool in
+ gnutar)
+ for _am_tar in tar gnutar gtar; do
+ AM_RUN_LOG([$_am_tar --version]) && break
+ done
+ am__tar="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$$tardir"'
+ am__tar_="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$tardir"'
+ am__untar="$_am_tar -xf -"
+ ;;
+ plaintar)
+ # Must skip GNU tar: if it does not support --format= it doesn't create
+ # ustar tarball either.
+ (tar --version) >/dev/null 2>&1 && continue
+ am__tar='tar chf - "$$tardir"'
+ am__tar_='tar chf - "$tardir"'
+ am__untar='tar xf -'
+ ;;
+ pax)
+ am__tar='pax -L -x $1 -w "$$tardir"'
+ am__tar_='pax -L -x $1 -w "$tardir"'
+ am__untar='pax -r'
+ ;;
+ cpio)
+ am__tar='find "$$tardir" -print | cpio -o -H $1 -L'
+ am__tar_='find "$tardir" -print | cpio -o -H $1 -L'
+ am__untar='cpio -i -H $1 -d'
+ ;;
+ none)
+ am__tar=false
+ am__tar_=false
+ am__untar=false
+ ;;
+ esac
+
+ # If the value was cached, stop now. We just wanted to have am__tar
+ # and am__untar set.
+ test -n "${am_cv_prog_tar_$1}" && break
+
+ # tar/untar a dummy directory, and stop if the command works.
+ rm -rf conftest.dir
+ mkdir conftest.dir
+ echo GrepMe > conftest.dir/file
+ AM_RUN_LOG([tardir=conftest.dir && eval $am__tar_ >conftest.tar])
+ rm -rf conftest.dir
+ if test -s conftest.tar; then
+ AM_RUN_LOG([$am__untar <conftest.tar])
+ AM_RUN_LOG([cat conftest.dir/file])
+ grep GrepMe conftest.dir/file >/dev/null 2>&1 && break
+ fi
+ done
+ rm -rf conftest.dir
+
+ AC_CACHE_VAL([am_cv_prog_tar_$1], [am_cv_prog_tar_$1=$_am_tool])
+ AC_MSG_RESULT([$am_cv_prog_tar_$1])])
+
+AC_SUBST([am__tar])
+AC_SUBST([am__untar])
+]) # _AM_PROG_TAR
+
+m4_include([m4/ax_check_compile_flag.m4])
+m4_include([m4/ax_check_link_flag.m4])
+m4_include([m4/code-coverage.m4])
+m4_include([m4/knot-lib-version.m4])
+m4_include([m4/knot-module.m4])
+m4_include([m4/libtool.m4])
+m4_include([m4/ltoptions.m4])
+m4_include([m4/ltsugar.m4])
+m4_include([m4/ltversion.m4])
+m4_include([m4/lt~obsolete.m4])
+m4_include([m4/sanitizer.m4])
+m4_include([m4/visibility.m4])
diff --git a/ar-lib b/ar-lib
new file mode 100755
index 0000000..c349042
--- /dev/null
+++ b/ar-lib
@@ -0,0 +1,271 @@
+#! /bin/sh
+# Wrapper for Microsoft lib.exe
+
+me=ar-lib
+scriptversion=2019-07-04.01; # UTC
+
+# Copyright (C) 2010-2021 Free Software Foundation, Inc.
+# Written by Peter Rosin .
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see .
+
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
+
+# This file is maintained in Automake, please report
+# bugs to or send patches to
+# .
+
+
+# func_error message
+func_error ()
+{
+ echo "$me: $1" 1>&2
+ exit 1
+}
+
+file_conv=
+
+# func_file_conv build_file
+# Convert a $build file to $host form and store it in $file
+# Currently only supports Windows hosts.
+func_file_conv ()
+{
+ file=$1
+ case $file in
+ / | /[!/]*) # absolute file, and not a UNC file
+ if test -z "$file_conv"; then
+ # lazily determine how to convert abs files
+ case `uname -s` in
+ MINGW*)
+ file_conv=mingw
+ ;;
+ CYGWIN* | MSYS*)
+ file_conv=cygwin
+ ;;
+ *)
+ file_conv=wine
+ ;;
+ esac
+ fi
+ case $file_conv in
+ mingw)
+ file=`cmd //C echo "$file " | sed -e 's/"\(.*\) " *$/\1/'`
+ ;;
+ cygwin | msys)
+ file=`cygpath -m "$file" || echo "$file"`
+ ;;
+ wine)
+ file=`winepath -w "$file" || echo "$file"`
+ ;;
+ esac
+ ;;
+ esac
+}
+
+# func_at_file at_file operation archive
+# Iterate over all members in AT_FILE performing OPERATION on ARCHIVE
+# for each of them.
+# When interpreting the content of the @FILE, do NOT use func_file_conv,
+# since the user would need to supply preconverted file names to
+# binutils ar, at least for MinGW.
+func_at_file ()
+{
+ operation=$2
+ archive=$3
+ at_file_contents=`cat "$1"`
+ eval set x "$at_file_contents"
+ shift
+
+ for member
+ do
+ $AR -NOLOGO $operation:"$member" "$archive" || exit $?
+ done
+}
+
+case $1 in
+ '')
+ func_error "no command. Try '$0 --help' for more information."
+ ;;
+ -h | --h*)
+ cat <.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see .
+
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
+
+# This file is maintained in Automake, please report
+# bugs to or send patches to
+# .
+
+nl='
+'
+
+# We need space, tab and new line, in precisely that order. Quoting is
+# there to prevent tools from complaining about whitespace usage.
+IFS=" "" $nl"
+
+file_conv=
+
+# func_file_conv build_file lazy
+# Convert a $build file to $host form and store it in $file
+# Currently only supports Windows hosts. If the determined conversion
+# type is listed in (the comma separated) LAZY, no conversion will
+# take place.
+func_file_conv ()
+{
+ file=$1
+ case $file in
+ / | /[!/]*) # absolute file, and not a UNC file
+ if test -z "$file_conv"; then
+ # lazily determine how to convert abs files
+ case `uname -s` in
+ MINGW*)
+ file_conv=mingw
+ ;;
+ CYGWIN* | MSYS*)
+ file_conv=cygwin
+ ;;
+ *)
+ file_conv=wine
+ ;;
+ esac
+ fi
+ case $file_conv/,$2, in
+ *,$file_conv,*)
+ ;;
+ mingw/*)
+ file=`cmd //C echo "$file " | sed -e 's/"\(.*\) " *$/\1/'`
+ ;;
+ cygwin/* | msys/*)
+ file=`cygpath -m "$file" || echo "$file"`
+ ;;
+ wine/*)
+ file=`winepath -w "$file" || echo "$file"`
+ ;;
+ esac
+ ;;
+ esac
+}
+
+# func_cl_dashL linkdir
+# Make cl look for libraries in LINKDIR
+func_cl_dashL ()
+{
+ func_file_conv "$1"
+ if test -z "$lib_path"; then
+ lib_path=$file
+ else
+ lib_path="$lib_path;$file"
+ fi
+ linker_opts="$linker_opts -LIBPATH:$file"
+}
+
+# func_cl_dashl library
+# Do a library search-path lookup for cl
+func_cl_dashl ()
+{
+ lib=$1
+ found=no
+ save_IFS=$IFS
+ IFS=';'
+ for dir in $lib_path $LIB
+ do
+ IFS=$save_IFS
+ if $shared && test -f "$dir/$lib.dll.lib"; then
+ found=yes
+ lib=$dir/$lib.dll.lib
+ break
+ fi
+ if test -f "$dir/$lib.lib"; then
+ found=yes
+ lib=$dir/$lib.lib
+ break
+ fi
+ if test -f "$dir/lib$lib.a"; then
+ found=yes
+ lib=$dir/lib$lib.a
+ break
+ fi
+ done
+ IFS=$save_IFS
+
+ if test "$found" != yes; then
+ lib=$lib.lib
+ fi
+}
+
+# func_cl_wrapper cl arg...
+# Adjust compile command to suit cl
+func_cl_wrapper ()
+{
+ # Assume a capable shell
+ lib_path=
+ shared=:
+ linker_opts=
+ for arg
+ do
+ if test -n "$eat"; then
+ eat=
+ else
+ case $1 in
+ -o)
+ # configure might choose to run compile as 'compile cc -o foo foo.c'.
+ eat=1
+ case $2 in
+ *.o | *.[oO][bB][jJ])
+ func_file_conv "$2"
+ set x "$@" -Fo"$file"
+ shift
+ ;;
+ *)
+ func_file_conv "$2"
+ set x "$@" -Fe"$file"
+ shift
+ ;;
+ esac
+ ;;
+ -I)
+ eat=1
+ func_file_conv "$2" mingw
+ set x "$@" -I"$file"
+ shift
+ ;;
+ -I*)
+ func_file_conv "${1#-I}" mingw
+ set x "$@" -I"$file"
+ shift
+ ;;
+ -l)
+ eat=1
+ func_cl_dashl "$2"
+ set x "$@" "$lib"
+ shift
+ ;;
+ -l*)
+ func_cl_dashl "${1#-l}"
+ set x "$@" "$lib"
+ shift
+ ;;
+ -L)
+ eat=1
+ func_cl_dashL "$2"
+ ;;
+ -L*)
+ func_cl_dashL "${1#-L}"
+ ;;
+ -static)
+ shared=false
+ ;;
+ -Wl,*)
+ arg=${1#-Wl,}
+ save_ifs="$IFS"; IFS=','
+ for flag in $arg; do
+ IFS="$save_ifs"
+ linker_opts="$linker_opts $flag"
+ done
+ IFS="$save_ifs"
+ ;;
+ -Xlinker)
+ eat=1
+ linker_opts="$linker_opts $2"
+ ;;
+ -*)
+ set x "$@" "$1"
+ shift
+ ;;
+ *.cc | *.CC | *.cxx | *.CXX | *.[cC]++)
+ func_file_conv "$1"
+ set x "$@" -Tp"$file"
+ shift
+ ;;
+ *.c | *.cpp | *.CPP | *.lib | *.LIB | *.Lib | *.OBJ | *.obj | *.[oO])
+ func_file_conv "$1" mingw
+ set x "$@" "$file"
+ shift
+ ;;
+ *)
+ set x "$@" "$1"
+ shift
+ ;;
+ esac
+ fi
+ shift
+ done
+ if test -n "$linker_opts"; then
+ linker_opts="-link$linker_opts"
+ fi
+ exec "$@" $linker_opts
+ exit 1
+}
+
+eat=
+
+case $1 in
+ '')
+ echo "$0: No command. Try '$0 --help' for more information." 1>&2
+ exit 1;
+ ;;
+ -h | --h*)
+ cat <<\EOF
+Usage: compile [--help] [--version] PROGRAM [ARGS]
+
+Wrapper for compilers which do not understand '-c -o'.
+Remove '-o dest.o' from ARGS, run PROGRAM with the remaining
+arguments, and rename the output as expected.
+
+If you are trying to build a whole package this is not the
+right script to run: please start by reading the file 'INSTALL'.
+
+Report bugs to .
+EOF
+ exit $?
+ ;;
+ -v | --v*)
+ echo "compile $scriptversion"
+ exit $?
+ ;;
+ cl | *[/\\]cl | cl.exe | *[/\\]cl.exe | \
+ icl | *[/\\]icl | icl.exe | *[/\\]icl.exe )
+ func_cl_wrapper "$@" # Doesn't return...
+ ;;
+esac
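+
+# Example (illustrative, not part of the upstream script): a compiler that
+# cannot handle '-c -o' together can be wrapped explicitly at configure
+# time; the "hpcc" compiler name is a placeholder.
+#   CC="/path/to/compile hpcc" ./configure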
+
+ofile=
+cfile=
+
+for arg
+do
+ if test -n "$eat"; then
+ eat=
+ else
+ case $1 in
+ -o)
+ # configure might choose to run compile as 'compile cc -o foo foo.c'.
+ # So we strip '-o arg' only if arg is an object.
+ eat=1
+ case $2 in
+ *.o | *.obj)
+ ofile=$2
+ ;;
+ *)
+ set x "$@" -o "$2"
+ shift
+ ;;
+ esac
+ ;;
+ *.c)
+ cfile=$1
+ set x "$@" "$1"
+ shift
+ ;;
+ *)
+ set x "$@" "$1"
+ shift
+ ;;
+ esac
+ fi
+ shift
+done
+
+if test -z "$ofile" || test -z "$cfile"; then
+ # If no '-o' option was seen then we might have been invoked from a
+ # pattern rule where we don't need one. That is ok -- this is a
+ # normal compilation that the losing compiler can handle. If no
+ # '.c' file was seen then we are probably linking. That is also
+ # ok.
+ exec "$@"
+fi
+
+# Name of file we expect compiler to create.
+cofile=`echo "$cfile" | sed 's|^.*[\\/]||; s|^[a-zA-Z]:||; s/\.c$/.o/'`
+
+# Create the lock directory.
+# Note: use '[/\\:.-]' here to ensure that we don't use the same name
+# that we are using for the .o file. Also, base the name on the expected
+# object file name, since that is what matters with a parallel build.
+lockdir=`echo "$cofile" | sed -e 's|[/\\:.-]|_|g'`.d
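+# For example, the lock directory used while building 'foo.o' is 'foo_o.d'.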
+while true; do
+ if mkdir "$lockdir" >/dev/null 2>&1; then
+ break
+ fi
+ sleep 1
+done
+# FIXME: race condition here if user kills between mkdir and trap.
+trap "rmdir '$lockdir'; exit 1" 1 2 15
+
+# Run the compile.
+"$@"
+ret=$?
+
+if test -f "$cofile"; then
+ test "$cofile" = "$ofile" || mv "$cofile" "$ofile"
+elif test -f "${cofile}bj"; then
+ test "${cofile}bj" = "$ofile" || mv "${cofile}bj" "$ofile"
+fi
+
+rmdir "$lockdir"
+exit $ret
+
+# Local Variables:
+# mode: shell-script
+# sh-indentation: 2
+# eval: (add-hook 'before-save-hook 'time-stamp)
+# time-stamp-start: "scriptversion="
+# time-stamp-format: "%:y-%02m-%02d.%02H"
+# time-stamp-time-zone: "UTC0"
+# time-stamp-end: "; # UTC"
+# End:
diff --git a/config.guess b/config.guess
new file mode 100755
index 0000000..7f76b62
--- /dev/null
+++ b/config.guess
@@ -0,0 +1,1754 @@
+#! /bin/sh
+# Attempt to guess a canonical system name.
+# Copyright 1992-2022 Free Software Foundation, Inc.
+
+# shellcheck disable=SC2006,SC2268 # see below for rationale
+
+timestamp='2022-01-09'
+
+# This file is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <https://www.gnu.org/licenses/>.
+#
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that
+# program. This Exception is an additional permission under section 7
+# of the GNU General Public License, version 3 ("GPLv3").
+#
+# Originally written by Per Bothner; maintained since 2000 by Ben Elliston.
+#
+# You can get the latest version of this script from:
+# https://git.savannah.gnu.org/cgit/config.git/plain/config.guess
+#
+# Please send patches to <config-patches@gnu.org>.
+
+
+# The "shellcheck disable" line above the timestamp inhibits complaints
+# about features and limitations of the classic Bourne shell that were
+# superseded or lifted in POSIX. However, this script identifies a wide
+# variety of pre-POSIX systems that do not have POSIX shells at all, and
+# even some reasonably current systems (Solaris 10 as case-in-point) still
+# have a pre-POSIX /bin/sh.
+
+
+me=`echo "$0" | sed -e 's,.*/,,'`
+
+usage="\
+Usage: $0 [OPTION]
+
+Output the configuration name of the system \`$me' is run on.
+
+Options:
+ -h, --help print this help, then exit
+ -t, --time-stamp print date of last modification, then exit
+ -v, --version print version number, then exit
+
+Report bugs and patches to <config-patches@gnu.org>."
+
+version="\
+GNU config.guess ($timestamp)
+
+Originally written by Per Bothner.
+Copyright 1992-2022 Free Software Foundation, Inc.
+
+This is free software; see the source for copying conditions. There is NO
+warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
+
+help="
+Try \`$me --help' for more information."
+
+# Parse command line
+while test $# -gt 0 ; do
+ case $1 in
+ --time-stamp | --time* | -t )
+ echo "$timestamp" ; exit ;;
+ --version | -v )
+ echo "$version" ; exit ;;
+ --help | --h* | -h )
+ echo "$usage"; exit ;;
+ -- ) # Stop option processing
+ shift; break ;;
+ - ) # Use stdin as input.
+ break ;;
+ -* )
+ echo "$me: invalid option $1$help" >&2
+ exit 1 ;;
+ * )
+ break ;;
+ esac
+done
+
+if test $# != 0; then
+ echo "$me: too many arguments$help" >&2
+ exit 1
+fi
+
+# Just in case it came from the environment.
+GUESS=
+
+# CC_FOR_BUILD -- compiler used by this script. Note that the use of a
+# compiler to aid in system detection is discouraged as it requires
+# temporary files to be created and, as you can see below, it is a
+# headache to deal with in a portable fashion.
+
+# Historically, `CC_FOR_BUILD' used to be named `HOST_CC'. We still
+# use `HOST_CC' if defined, but it is deprecated.
+
+# Portable tmp directory creation inspired by the Autoconf team.
+
+tmp=
+# shellcheck disable=SC2172
+trap 'test -z "$tmp" || rm -fr "$tmp"' 0 1 2 13 15
+
+set_cc_for_build() {
+ # prevent multiple calls if $tmp is already set
+ test "$tmp" && return 0
+ : "${TMPDIR=/tmp}"
+ # shellcheck disable=SC2039,SC3028
+ { tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } ||
+ { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir "$tmp" 2>/dev/null) ; } ||
+ { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir "$tmp" 2>/dev/null) && echo "Warning: creating insecure temp directory" >&2 ; } ||
+ { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; }
+ dummy=$tmp/dummy
+ case ${CC_FOR_BUILD-},${HOST_CC-},${CC-} in
+ ,,) echo "int x;" > "$dummy.c"
+ for driver in cc gcc c89 c99 ; do
+ if ($driver -c -o "$dummy.o" "$dummy.c") >/dev/null 2>&1 ; then
+ CC_FOR_BUILD=$driver
+ break
+ fi
+ done
+ if test x"$CC_FOR_BUILD" = x ; then
+ CC_FOR_BUILD=no_compiler_found
+ fi
+ ;;
+ ,,*) CC_FOR_BUILD=$CC ;;
+ ,*,*) CC_FOR_BUILD=$HOST_CC ;;
+ esac
+}
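+# Note (illustrative, not upstream documentation): the probe compiler can be
+# pinned from the environment, e.g. 'CC_FOR_BUILD=cc ./config.guess'; if none
+# of cc, gcc, c89 or c99 is found, CC_FOR_BUILD is set to no_compiler_found
+# and the compiler-based heuristics below are skipped.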
+
+# This is needed to find uname on a Pyramid OSx when run in the BSD universe.
+# (ghazi@noc.rutgers.edu 1994-08-24)
+if test -f /.attbin/uname ; then
+ PATH=$PATH:/.attbin ; export PATH
+fi
+
+UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown
+UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown
+UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown
+UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown
+
+case $UNAME_SYSTEM in
+Linux|GNU|GNU/*)
+ LIBC=unknown
+
+ set_cc_for_build
+ cat <<-EOF > "$dummy.c"
+ #include <features.h>
+ #if defined(__UCLIBC__)
+ LIBC=uclibc
+ #elif defined(__dietlibc__)
+ LIBC=dietlibc
+ #elif defined(__GLIBC__)
+ LIBC=gnu
+ #else
+ #include <stdarg.h>
+ /* First heuristic to detect musl libc. */
+ #ifdef __DEFINED_va_list
+ LIBC=musl
+ #endif
+ #endif
+ EOF
+ cc_set_libc=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^LIBC' | sed 's, ,,g'`
+ eval "$cc_set_libc"
+
+ # Second heuristic to detect musl libc.
+ if [ "$LIBC" = unknown ] &&
+ command -v ldd >/dev/null &&
+ ldd --version 2>&1 | grep -q ^musl; then
+ LIBC=musl
+ fi
+
+ # If the system lacks a compiler, then just pick glibc.
+ # We could probably try harder.
+ if [ "$LIBC" = unknown ]; then
+ LIBC=gnu
+ fi
+ ;;
+esac
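+# Illustrative example: on a glibc-based system the block above sets LIBC=gnu,
+# so an x86_64 Linux host is reported further below as x86_64-pc-linux-gnu.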
+
+# Note: order is significant - the case branches are not exclusive.
+
+case $UNAME_MACHINE:$UNAME_SYSTEM:$UNAME_RELEASE:$UNAME_VERSION in
+ *:NetBSD:*:*)
+ # NetBSD (nbsd) targets should (where applicable) match one or
+ # more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*,
+ # *-*-netbsdecoff* and *-*-netbsd*. For targets that recently
+ # switched to ELF, *-*-netbsd* would select the old
+ # object file format. This provides both forward
+ # compatibility and a consistent mechanism for selecting the
+ # object file format.
+ #
+ # Note: NetBSD doesn't particularly care about the vendor
+ # portion of the name. We always set it to "unknown".
+ UNAME_MACHINE_ARCH=`(uname -p 2>/dev/null || \
+ /sbin/sysctl -n hw.machine_arch 2>/dev/null || \
+ /usr/sbin/sysctl -n hw.machine_arch 2>/dev/null || \
+ echo unknown)`
+ case $UNAME_MACHINE_ARCH in
+ aarch64eb) machine=aarch64_be-unknown ;;
+ armeb) machine=armeb-unknown ;;
+ arm*) machine=arm-unknown ;;
+ sh3el) machine=shl-unknown ;;
+ sh3eb) machine=sh-unknown ;;
+ sh5el) machine=sh5le-unknown ;;
+ earmv*)
+ arch=`echo "$UNAME_MACHINE_ARCH" | sed -e 's,^e\(armv[0-9]\).*$,\1,'`
+ endian=`echo "$UNAME_MACHINE_ARCH" | sed -ne 's,^.*\(eb\)$,\1,p'`
+ machine=${arch}${endian}-unknown
+ ;;
+ *) machine=$UNAME_MACHINE_ARCH-unknown ;;
+ esac
+ # The Operating System including object format, if it has switched
+ # to ELF recently (or will in the future) and ABI.
+ case $UNAME_MACHINE_ARCH in
+ earm*)
+ os=netbsdelf
+ ;;
+ arm*|i386|m68k|ns32k|sh3*|sparc|vax)
+ set_cc_for_build
+ if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \
+ | grep -q __ELF__
+ then
+ # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout).
+ # Return netbsd for either. FIX?
+ os=netbsd
+ else
+ os=netbsdelf
+ fi
+ ;;
+ *)
+ os=netbsd
+ ;;
+ esac
+ # Determine ABI tags.
+ case $UNAME_MACHINE_ARCH in
+ earm*)
+ expr='s/^earmv[0-9]/-eabi/;s/eb$//'
+ abi=`echo "$UNAME_MACHINE_ARCH" | sed -e "$expr"`
+ ;;
+ esac
+ # The OS release
+ # Debian GNU/NetBSD machines have a different userland, and
+ # thus, need a distinct triplet. However, they do not need
+ # kernel version information, so it can be replaced with a
+ # suitable tag, in the style of linux-gnu.
+ case $UNAME_VERSION in
+ Debian*)
+ release='-gnu'
+ ;;
+ *)
+ release=`echo "$UNAME_RELEASE" | sed -e 's/[-_].*//' | cut -d. -f1,2`
+ ;;
+ esac
+ # Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM:
+ # contains redundant information, the shorter form:
+ # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used.
+ GUESS=$machine-${os}${release}${abi-}
+ ;;
+ *:Bitrig:*:*)
+ UNAME_MACHINE_ARCH=`arch | sed 's/Bitrig.//'`
+ GUESS=$UNAME_MACHINE_ARCH-unknown-bitrig$UNAME_RELEASE
+ ;;
+ *:OpenBSD:*:*)
+ UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'`
+ GUESS=$UNAME_MACHINE_ARCH-unknown-openbsd$UNAME_RELEASE
+ ;;
+ *:SecBSD:*:*)
+ UNAME_MACHINE_ARCH=`arch | sed 's/SecBSD.//'`
+ GUESS=$UNAME_MACHINE_ARCH-unknown-secbsd$UNAME_RELEASE
+ ;;
+ *:LibertyBSD:*:*)
+ UNAME_MACHINE_ARCH=`arch | sed 's/^.*BSD\.//'`
+ GUESS=$UNAME_MACHINE_ARCH-unknown-libertybsd$UNAME_RELEASE
+ ;;
+ *:MidnightBSD:*:*)
+ GUESS=$UNAME_MACHINE-unknown-midnightbsd$UNAME_RELEASE
+ ;;
+ *:ekkoBSD:*:*)
+ GUESS=$UNAME_MACHINE-unknown-ekkobsd$UNAME_RELEASE
+ ;;
+ *:SolidBSD:*:*)
+ GUESS=$UNAME_MACHINE-unknown-solidbsd$UNAME_RELEASE
+ ;;
+ *:OS108:*:*)
+ GUESS=$UNAME_MACHINE-unknown-os108_$UNAME_RELEASE
+ ;;
+ macppc:MirBSD:*:*)
+ GUESS=powerpc-unknown-mirbsd$UNAME_RELEASE
+ ;;
+ *:MirBSD:*:*)
+ GUESS=$UNAME_MACHINE-unknown-mirbsd$UNAME_RELEASE
+ ;;
+ *:Sortix:*:*)
+ GUESS=$UNAME_MACHINE-unknown-sortix
+ ;;
+ *:Twizzler:*:*)
+ GUESS=$UNAME_MACHINE-unknown-twizzler
+ ;;
+ *:Redox:*:*)
+ GUESS=$UNAME_MACHINE-unknown-redox
+ ;;
+ mips:OSF1:*.*)
+ GUESS=mips-dec-osf1
+ ;;
+ alpha:OSF1:*:*)
+ # Reset EXIT trap before exiting to avoid spurious non-zero exit code.
+ trap '' 0
+ case $UNAME_RELEASE in
+ *4.0)
+ UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'`
+ ;;
+ *5.*)
+ UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $4}'`
+ ;;
+ esac
+ # According to Compaq, /usr/sbin/psrinfo has been available on
+ # OSF/1 and Tru64 systems produced since 1995. I hope that
+ # covers most systems running today. This code pipes the CPU
+ # types through head -n 1, so we only detect the type of CPU 0.
+ ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^ The alpha \(.*\) processor.*$/\1/p' | head -n 1`
+ case $ALPHA_CPU_TYPE in
+ "EV4 (21064)")
+ UNAME_MACHINE=alpha ;;
+ "EV4.5 (21064)")
+ UNAME_MACHINE=alpha ;;
+ "LCA4 (21066/21068)")
+ UNAME_MACHINE=alpha ;;
+ "EV5 (21164)")
+ UNAME_MACHINE=alphaev5 ;;
+ "EV5.6 (21164A)")
+ UNAME_MACHINE=alphaev56 ;;
+ "EV5.6 (21164PC)")
+ UNAME_MACHINE=alphapca56 ;;
+ "EV5.7 (21164PC)")
+ UNAME_MACHINE=alphapca57 ;;
+ "EV6 (21264)")
+ UNAME_MACHINE=alphaev6 ;;
+ "EV6.7 (21264A)")
+ UNAME_MACHINE=alphaev67 ;;
+ "EV6.8CB (21264C)")
+ UNAME_MACHINE=alphaev68 ;;
+ "EV6.8AL (21264B)")
+ UNAME_MACHINE=alphaev68 ;;
+ "EV6.8CX (21264D)")
+ UNAME_MACHINE=alphaev68 ;;
+ "EV6.9A (21264/EV69A)")
+ UNAME_MACHINE=alphaev69 ;;
+ "EV7 (21364)")
+ UNAME_MACHINE=alphaev7 ;;
+ "EV7.9 (21364A)")
+ UNAME_MACHINE=alphaev79 ;;
+ esac
+ # A Pn.n version is a patched version.
+ # A Vn.n version is a released version.
+ # A Tn.n version is a released field test version.
+ # A Xn.n version is an unreleased experimental baselevel.
+ # 1.2 uses "1.2" for uname -r.
+ OSF_REL=`echo "$UNAME_RELEASE" | sed -e 's/^[PVTX]//' | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz`
+ GUESS=$UNAME_MACHINE-dec-osf$OSF_REL
+ ;;
+ Amiga*:UNIX_System_V:4.0:*)
+ GUESS=m68k-unknown-sysv4
+ ;;
+ *:[Aa]miga[Oo][Ss]:*:*)
+ GUESS=$UNAME_MACHINE-unknown-amigaos
+ ;;
+ *:[Mm]orph[Oo][Ss]:*:*)
+ GUESS=$UNAME_MACHINE-unknown-morphos
+ ;;
+ *:OS/390:*:*)
+ GUESS=i370-ibm-openedition
+ ;;
+ *:z/VM:*:*)
+ GUESS=s390-ibm-zvmoe
+ ;;
+ *:OS400:*:*)
+ GUESS=powerpc-ibm-os400
+ ;;
+ arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*)
+ GUESS=arm-acorn-riscix$UNAME_RELEASE
+ ;;
+ arm*:riscos:*:*|arm*:RISCOS:*:*)
+ GUESS=arm-unknown-riscos
+ ;;
+ SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*)
+ GUESS=hppa1.1-hitachi-hiuxmpp
+ ;;
+ Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*)
+ # akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE.
+ case `(/bin/universe) 2>/dev/null` in
+ att) GUESS=pyramid-pyramid-sysv3 ;;
+ *) GUESS=pyramid-pyramid-bsd ;;
+ esac
+ ;;
+ NILE*:*:*:dcosx)
+ GUESS=pyramid-pyramid-svr4
+ ;;
+ DRS?6000:unix:4.0:6*)
+ GUESS=sparc-icl-nx6
+ ;;
+ DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*)
+ case `/usr/bin/uname -p` in
+ sparc) GUESS=sparc-icl-nx7 ;;
+ esac
+ ;;
+ s390x:SunOS:*:*)
+ SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'`
+ GUESS=$UNAME_MACHINE-ibm-solaris2$SUN_REL
+ ;;
+ sun4H:SunOS:5.*:*)
+ SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'`
+ GUESS=sparc-hal-solaris2$SUN_REL
+ ;;
+ sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*)
+ SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'`
+ GUESS=sparc-sun-solaris2$SUN_REL
+ ;;
+ i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*)
+ GUESS=i386-pc-auroraux$UNAME_RELEASE
+ ;;
+ i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*)
+ set_cc_for_build
+ SUN_ARCH=i386
+ # If there is a compiler, see if it is configured for 64-bit objects.
+ # Note that the Sun cc does not turn __LP64__ into 1 like gcc does.
+ # This test works for both compilers.
+ if test "$CC_FOR_BUILD" != no_compiler_found; then
+ if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \
+ (CCOPTS="" $CC_FOR_BUILD -m64 -E - 2>/dev/null) | \
+ grep IS_64BIT_ARCH >/dev/null
+ then
+ SUN_ARCH=x86_64
+ fi
+ fi
+ SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'`
+ GUESS=$SUN_ARCH-pc-solaris2$SUN_REL
+ ;;
+ sun4*:SunOS:6*:*)
+ # According to config.sub, this is the proper way to canonicalize
+ # SunOS6. Hard to guess exactly what SunOS6 will be like, but
+ # it's likely to be more like Solaris than SunOS4.
+ SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'`
+ GUESS=sparc-sun-solaris3$SUN_REL
+ ;;
+ sun4*:SunOS:*:*)
+ case `/usr/bin/arch -k` in
+ Series*|S4*)
+ UNAME_RELEASE=`uname -v`
+ ;;
+ esac
+ # Japanese Language versions have a version number like `4.1.3-JL'.
+ SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/-/_/'`
+ GUESS=sparc-sun-sunos$SUN_REL
+ ;;
+ sun3*:SunOS:*:*)
+ GUESS=m68k-sun-sunos$UNAME_RELEASE
+ ;;
+ sun*:*:4.2BSD:*)
+ UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null`
+ test "x$UNAME_RELEASE" = x && UNAME_RELEASE=3
+ case `/bin/arch` in
+ sun3)
+ GUESS=m68k-sun-sunos$UNAME_RELEASE
+ ;;
+ sun4)
+ GUESS=sparc-sun-sunos$UNAME_RELEASE
+ ;;
+ esac
+ ;;
+ aushp:SunOS:*:*)
+ GUESS=sparc-auspex-sunos$UNAME_RELEASE
+ ;;
+ # The situation for MiNT is a little confusing. The machine name
+ # can be virtually everything (everything which is not
+ # "atarist" or "atariste" at least should have a processor
+ # > m68000). The system name ranges from "MiNT" over "FreeMiNT"
+ # to the lowercase version "mint" (or "freemint"). Finally
+ # the system name "TOS" denotes a system which is actually not
+ # MiNT. But MiNT is downward compatible to TOS, so this should
+ # be no problem.
+ atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*)
+ GUESS=m68k-atari-mint$UNAME_RELEASE
+ ;;
+ atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*)
+ GUESS=m68k-atari-mint$UNAME_RELEASE
+ ;;
+ *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*)
+ GUESS=m68k-atari-mint$UNAME_RELEASE
+ ;;
+ milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*)
+ GUESS=m68k-milan-mint$UNAME_RELEASE
+ ;;
+ hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*)
+ GUESS=m68k-hades-mint$UNAME_RELEASE
+ ;;
+ *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*)
+ GUESS=m68k-unknown-mint$UNAME_RELEASE
+ ;;
+ m68k:machten:*:*)
+ GUESS=m68k-apple-machten$UNAME_RELEASE
+ ;;
+ powerpc:machten:*:*)
+ GUESS=powerpc-apple-machten$UNAME_RELEASE
+ ;;
+ RISC*:Mach:*:*)
+ GUESS=mips-dec-mach_bsd4.3
+ ;;
+ RISC*:ULTRIX:*:*)
+ GUESS=mips-dec-ultrix$UNAME_RELEASE
+ ;;
+ VAX*:ULTRIX*:*:*)
+ GUESS=vax-dec-ultrix$UNAME_RELEASE
+ ;;
+ 2020:CLIX:*:* | 2430:CLIX:*:*)
+ GUESS=clipper-intergraph-clix$UNAME_RELEASE
+ ;;
+ mips:*:*:UMIPS | mips:*:*:RISCos)
+ set_cc_for_build
+ sed 's/^ //' << EOF > "$dummy.c"
+#ifdef __cplusplus
+#include <stdio.h> /* for printf() prototype */
+ int main (int argc, char *argv[]) {
+#else
+ int main (argc, argv) int argc; char *argv[]; {
+#endif
+ #if defined (host_mips) && defined (MIPSEB)
+ #if defined (SYSTYPE_SYSV)
+ printf ("mips-mips-riscos%ssysv\\n", argv[1]); exit (0);
+ #endif
+ #if defined (SYSTYPE_SVR4)
+ printf ("mips-mips-riscos%ssvr4\\n", argv[1]); exit (0);
+ #endif
+ #if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD)
+ printf ("mips-mips-riscos%sbsd\\n", argv[1]); exit (0);
+ #endif
+ #endif
+ exit (-1);
+ }
+EOF
+ $CC_FOR_BUILD -o "$dummy" "$dummy.c" &&
+ dummyarg=`echo "$UNAME_RELEASE" | sed -n 's/\([0-9]*\).*/\1/p'` &&
+ SYSTEM_NAME=`"$dummy" "$dummyarg"` &&
+ { echo "$SYSTEM_NAME"; exit; }
+ GUESS=mips-mips-riscos$UNAME_RELEASE
+ ;;
+ Motorola:PowerMAX_OS:*:*)
+ GUESS=powerpc-motorola-powermax
+ ;;
+ Motorola:*:4.3:PL8-*)
+ GUESS=powerpc-harris-powermax
+ ;;
+ Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*)
+ GUESS=powerpc-harris-powermax
+ ;;
+ Night_Hawk:Power_UNIX:*:*)
+ GUESS=powerpc-harris-powerunix
+ ;;
+ m88k:CX/UX:7*:*)
+ GUESS=m88k-harris-cxux7
+ ;;
+ m88k:*:4*:R4*)
+ GUESS=m88k-motorola-sysv4
+ ;;
+ m88k:*:3*:R3*)
+ GUESS=m88k-motorola-sysv3
+ ;;
+ AViiON:dgux:*:*)
+ # DG/UX returns AViiON for all architectures
+ UNAME_PROCESSOR=`/usr/bin/uname -p`
+ if test "$UNAME_PROCESSOR" = mc88100 || test "$UNAME_PROCESSOR" = mc88110
+ then
+ if test "$TARGET_BINARY_INTERFACE"x = m88kdguxelfx || \
+ test "$TARGET_BINARY_INTERFACE"x = x
+ then
+ GUESS=m88k-dg-dgux$UNAME_RELEASE
+ else
+ GUESS=m88k-dg-dguxbcs$UNAME_RELEASE
+ fi
+ else
+ GUESS=i586-dg-dgux$UNAME_RELEASE
+ fi
+ ;;
+ M88*:DolphinOS:*:*) # DolphinOS (SVR3)
+ GUESS=m88k-dolphin-sysv3
+ ;;
+ M88*:*:R3*:*)
+ # Delta 88k system running SVR3
+ GUESS=m88k-motorola-sysv3
+ ;;
+ XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3)
+ GUESS=m88k-tektronix-sysv3
+ ;;
+ Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD)
+ GUESS=m68k-tektronix-bsd
+ ;;
+ *:IRIX*:*:*)
+ IRIX_REL=`echo "$UNAME_RELEASE" | sed -e 's/-/_/g'`
+ GUESS=mips-sgi-irix$IRIX_REL
+ ;;
+ ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX.
+ GUESS=romp-ibm-aix # uname -m gives an 8 hex-code CPU id
+ ;; # Note that: echo "'`uname -s`'" gives 'AIX '
+ i*86:AIX:*:*)
+ GUESS=i386-ibm-aix
+ ;;
+ ia64:AIX:*:*)
+ if test -x /usr/bin/oslevel ; then
+ IBM_REV=`/usr/bin/oslevel`
+ else
+ IBM_REV=$UNAME_VERSION.$UNAME_RELEASE
+ fi
+ GUESS=$UNAME_MACHINE-ibm-aix$IBM_REV
+ ;;
+ *:AIX:2:3)
+ if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then
+ set_cc_for_build
+ sed 's/^ //' << EOF > "$dummy.c"
+ #include <sys/systemcfg.h>
+
+ main()
+ {
+ if (!__power_pc())
+ exit(1);
+ puts("powerpc-ibm-aix3.2.5");
+ exit(0);
+ }
+EOF
+ if $CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"`
+ then
+ GUESS=$SYSTEM_NAME
+ else
+ GUESS=rs6000-ibm-aix3.2.5
+ fi
+ elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then
+ GUESS=rs6000-ibm-aix3.2.4
+ else
+ GUESS=rs6000-ibm-aix3.2
+ fi
+ ;;
+ *:AIX:*:[4567])
+ IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'`
+ if /usr/sbin/lsattr -El "$IBM_CPU_ID" | grep ' POWER' >/dev/null 2>&1; then
+ IBM_ARCH=rs6000
+ else
+ IBM_ARCH=powerpc
+ fi
+ if test -x /usr/bin/lslpp ; then
+ IBM_REV=`/usr/bin/lslpp -Lqc bos.rte.libc | \
+ awk -F: '{ print $3 }' | sed s/[0-9]*$/0/`
+ else
+ IBM_REV=$UNAME_VERSION.$UNAME_RELEASE
+ fi
+ GUESS=$IBM_ARCH-ibm-aix$IBM_REV
+ ;;
+ *:AIX:*:*)
+ GUESS=rs6000-ibm-aix
+ ;;
+ ibmrt:4.4BSD:*|romp-ibm:4.4BSD:*)
+ GUESS=romp-ibm-bsd4.4
+ ;;
+ ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC BSD and
+ GUESS=romp-ibm-bsd$UNAME_RELEASE # 4.3 with uname added to
+ ;; # report: romp-ibm BSD 4.3
+ *:BOSX:*:*)
+ GUESS=rs6000-bull-bosx
+ ;;
+ DPX/2?00:B.O.S.:*:*)
+ GUESS=m68k-bull-sysv3
+ ;;
+ 9000/[34]??:4.3bsd:1.*:*)
+ GUESS=m68k-hp-bsd
+ ;;
+ hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*)
+ GUESS=m68k-hp-bsd4.4
+ ;;
+ 9000/[34678]??:HP-UX:*:*)
+ HPUX_REV=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*.[0B]*//'`
+ case $UNAME_MACHINE in
+ 9000/31?) HP_ARCH=m68000 ;;
+ 9000/[34]??) HP_ARCH=m68k ;;
+ 9000/[678][0-9][0-9])
+ if test -x /usr/bin/getconf; then
+ sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null`
+ sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null`
+ case $sc_cpu_version in
+ 523) HP_ARCH=hppa1.0 ;; # CPU_PA_RISC1_0
+ 528) HP_ARCH=hppa1.1 ;; # CPU_PA_RISC1_1
+ 532) # CPU_PA_RISC2_0
+ case $sc_kernel_bits in
+ 32) HP_ARCH=hppa2.0n ;;
+ 64) HP_ARCH=hppa2.0w ;;
+ '') HP_ARCH=hppa2.0 ;; # HP-UX 10.20
+ esac ;;
+ esac
+ fi
+ if test "$HP_ARCH" = ""; then
+ set_cc_for_build
+ sed 's/^ //' << EOF > "$dummy.c"
+
+ #define _HPUX_SOURCE
+ #include <stdlib.h>
+ #include <unistd.h>
+
+ int main ()
+ {
+ #if defined(_SC_KERNEL_BITS)
+ long bits = sysconf(_SC_KERNEL_BITS);
+ #endif
+ long cpu = sysconf (_SC_CPU_VERSION);
+
+ switch (cpu)
+ {
+ case CPU_PA_RISC1_0: puts ("hppa1.0"); break;
+ case CPU_PA_RISC1_1: puts ("hppa1.1"); break;
+ case CPU_PA_RISC2_0:
+ #if defined(_SC_KERNEL_BITS)
+ switch (bits)
+ {
+ case 64: puts ("hppa2.0w"); break;
+ case 32: puts ("hppa2.0n"); break;
+ default: puts ("hppa2.0"); break;
+ } break;
+ #else /* !defined(_SC_KERNEL_BITS) */
+ puts ("hppa2.0"); break;
+ #endif
+ default: puts ("hppa1.0"); break;
+ }
+ exit (0);
+ }
+EOF
+ (CCOPTS="" $CC_FOR_BUILD -o "$dummy" "$dummy.c" 2>/dev/null) && HP_ARCH=`"$dummy"`
+ test -z "$HP_ARCH" && HP_ARCH=hppa
+ fi ;;
+ esac
+ if test "$HP_ARCH" = hppa2.0w
+ then
+ set_cc_for_build
+
+ # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating
+ # 32-bit code. hppa64-hp-hpux* has the same kernel and a compiler
+ # generating 64-bit code. GNU and HP use different nomenclature:
+ #
+ # $ CC_FOR_BUILD=cc ./config.guess
+ # => hppa2.0w-hp-hpux11.23
+ # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess
+ # => hppa64-hp-hpux11.23
+
+ if echo __LP64__ | (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) |
+ grep -q __LP64__
+ then
+ HP_ARCH=hppa2.0w
+ else
+ HP_ARCH=hppa64
+ fi
+ fi
+ GUESS=$HP_ARCH-hp-hpux$HPUX_REV
+ ;;
+ ia64:HP-UX:*:*)
+ HPUX_REV=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*.[0B]*//'`
+ GUESS=ia64-hp-hpux$HPUX_REV
+ ;;
+ 3050*:HI-UX:*:*)
+ set_cc_for_build
+ sed 's/^ //' << EOF > "$dummy.c"
+ #include <unistd.h>
+ int
+ main ()
+ {
+ long cpu = sysconf (_SC_CPU_VERSION);
+ /* The order matters, because CPU_IS_HP_MC68K erroneously returns
+ true for CPU_PA_RISC1_0. CPU_IS_PA_RISC returns correct
+ results, however. */
+ if (CPU_IS_PA_RISC (cpu))
+ {
+ switch (cpu)
+ {
+ case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break;
+ case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break;
+ case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break;
+ default: puts ("hppa-hitachi-hiuxwe2"); break;
+ }
+ }
+ else if (CPU_IS_HP_MC68K (cpu))
+ puts ("m68k-hitachi-hiuxwe2");
+ else puts ("unknown-hitachi-hiuxwe2");
+ exit (0);
+ }
+EOF
+ $CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"` &&
+ { echo "$SYSTEM_NAME"; exit; }
+ GUESS=unknown-hitachi-hiuxwe2
+ ;;
+ 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:*)
+ GUESS=hppa1.1-hp-bsd
+ ;;
+ 9000/8??:4.3bsd:*:*)
+ GUESS=hppa1.0-hp-bsd
+ ;;
+ *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*)
+ GUESS=hppa1.0-hp-mpeix
+ ;;
+ hp7??:OSF1:*:* | hp8?[79]:OSF1:*:*)
+ GUESS=hppa1.1-hp-osf
+ ;;
+ hp8??:OSF1:*:*)
+ GUESS=hppa1.0-hp-osf
+ ;;
+ i*86:OSF1:*:*)
+ if test -x /usr/sbin/sysversion ; then
+ GUESS=$UNAME_MACHINE-unknown-osf1mk
+ else
+ GUESS=$UNAME_MACHINE-unknown-osf1
+ fi
+ ;;
+ parisc*:Lites*:*:*)
+ GUESS=hppa1.1-hp-lites
+ ;;
+ C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*)
+ GUESS=c1-convex-bsd
+ ;;
+ C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*)
+ if getsysinfo -f scalar_acc
+ then echo c32-convex-bsd
+ else echo c2-convex-bsd
+ fi
+ exit ;;
+ C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*)
+ GUESS=c34-convex-bsd
+ ;;
+ C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*)
+ GUESS=c38-convex-bsd
+ ;;
+ C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*)
+ GUESS=c4-convex-bsd
+ ;;
+ CRAY*Y-MP:*:*:*)
+ CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'`
+ GUESS=ymp-cray-unicos$CRAY_REL
+ ;;
+ CRAY*[A-Z]90:*:*:*)
+ echo "$UNAME_MACHINE"-cray-unicos"$UNAME_RELEASE" \
+ | sed -e 's/CRAY.*\([A-Z]90\)/\1/' \
+ -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \
+ -e 's/\.[^.]*$/.X/'
+ exit ;;
+ CRAY*TS:*:*:*)
+ CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'`
+ GUESS=t90-cray-unicos$CRAY_REL
+ ;;
+ CRAY*T3E:*:*:*)
+ CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'`
+ GUESS=alphaev5-cray-unicosmk$CRAY_REL
+ ;;
+ CRAY*SV1:*:*:*)
+ CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'`
+ GUESS=sv1-cray-unicos$CRAY_REL
+ ;;
+ *:UNICOS/mp:*:*)
+ CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'`
+ GUESS=craynv-cray-unicosmp$CRAY_REL
+ ;;
+ F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*)
+ FUJITSU_PROC=`uname -m | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz`
+ FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'`
+ FUJITSU_REL=`echo "$UNAME_RELEASE" | sed -e 's/ /_/'`
+ GUESS=${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}
+ ;;
+ 5000:UNIX_System_V:4.*:*)
+ FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'`
+ FUJITSU_REL=`echo "$UNAME_RELEASE" | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/ /_/'`
+ GUESS=sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}
+ ;;
+ i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*)
+ GUESS=$UNAME_MACHINE-pc-bsdi$UNAME_RELEASE
+ ;;
+ sparc*:BSD/OS:*:*)
+ GUESS=sparc-unknown-bsdi$UNAME_RELEASE
+ ;;
+ *:BSD/OS:*:*)
+ GUESS=$UNAME_MACHINE-unknown-bsdi$UNAME_RELEASE
+ ;;
+ arm:FreeBSD:*:*)
+ UNAME_PROCESSOR=`uname -p`
+ set_cc_for_build
+ if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \
+ | grep -q __ARM_PCS_VFP
+ then
+ FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'`
+ GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL-gnueabi
+ else
+ FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'`
+ GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL-gnueabihf
+ fi
+ ;;
+ *:FreeBSD:*:*)
+ UNAME_PROCESSOR=`/usr/bin/uname -p`
+ case $UNAME_PROCESSOR in
+ amd64)
+ UNAME_PROCESSOR=x86_64 ;;
+ i386)
+ UNAME_PROCESSOR=i586 ;;
+ esac
+ FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'`
+ GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL
+ ;;
+ i*:CYGWIN*:*)
+ GUESS=$UNAME_MACHINE-pc-cygwin
+ ;;
+ *:MINGW64*:*)
+ GUESS=$UNAME_MACHINE-pc-mingw64
+ ;;
+ *:MINGW*:*)
+ GUESS=$UNAME_MACHINE-pc-mingw32
+ ;;
+ *:MSYS*:*)
+ GUESS=$UNAME_MACHINE-pc-msys
+ ;;
+ i*:PW*:*)
+ GUESS=$UNAME_MACHINE-pc-pw32
+ ;;
+ *:SerenityOS:*:*)
+ GUESS=$UNAME_MACHINE-pc-serenity
+ ;;
+ *:Interix*:*)
+ case $UNAME_MACHINE in
+ x86)
+ GUESS=i586-pc-interix$UNAME_RELEASE
+ ;;
+ authenticamd | genuineintel | EM64T)
+ GUESS=x86_64-unknown-interix$UNAME_RELEASE
+ ;;
+ IA64)
+ GUESS=ia64-unknown-interix$UNAME_RELEASE
+ ;;
+ esac ;;
+ i*:UWIN*:*)
+ GUESS=$UNAME_MACHINE-pc-uwin
+ ;;
+ amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*)
+ GUESS=x86_64-pc-cygwin
+ ;;
+ prep*:SunOS:5.*:*)
+ SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'`
+ GUESS=powerpcle-unknown-solaris2$SUN_REL
+ ;;
+ *:GNU:*:*)
+ # the GNU system
+ GNU_ARCH=`echo "$UNAME_MACHINE" | sed -e 's,[-/].*$,,'`
+ GNU_REL=`echo "$UNAME_RELEASE" | sed -e 's,/.*$,,'`
+ GUESS=$GNU_ARCH-unknown-$LIBC$GNU_REL
+ ;;
+ *:GNU/*:*:*)
+ # other systems with GNU libc and userland
+ GNU_SYS=`echo "$UNAME_SYSTEM" | sed 's,^[^/]*/,,' | tr "[:upper:]" "[:lower:]"`
+ GNU_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'`
+ GUESS=$UNAME_MACHINE-unknown-$GNU_SYS$GNU_REL-$LIBC
+ ;;
+ *:Minix:*:*)
+ GUESS=$UNAME_MACHINE-unknown-minix
+ ;;
+ aarch64:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ aarch64_be:Linux:*:*)
+ UNAME_MACHINE=aarch64_be
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ alpha:Linux:*:*)
+ case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' /proc/cpuinfo 2>/dev/null` in
+ EV5) UNAME_MACHINE=alphaev5 ;;
+ EV56) UNAME_MACHINE=alphaev56 ;;
+ PCA56) UNAME_MACHINE=alphapca56 ;;
+ PCA57) UNAME_MACHINE=alphapca56 ;;
+ EV6) UNAME_MACHINE=alphaev6 ;;
+ EV67) UNAME_MACHINE=alphaev67 ;;
+ EV68*) UNAME_MACHINE=alphaev68 ;;
+ esac
+ objdump --private-headers /bin/sh | grep -q ld.so.1
+ if test "$?" = 0 ; then LIBC=gnulibc1 ; fi
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ arc:Linux:*:* | arceb:Linux:*:* | arc32:Linux:*:* | arc64:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ arm*:Linux:*:*)
+ set_cc_for_build
+ if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \
+ | grep -q __ARM_EABI__
+ then
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ else
+ if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \
+ | grep -q __ARM_PCS_VFP
+ then
+ GUESS=$UNAME_MACHINE-unknown-linux-${LIBC}eabi
+ else
+ GUESS=$UNAME_MACHINE-unknown-linux-${LIBC}eabihf
+ fi
+ fi
+ ;;
+ avr32*:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ cris:Linux:*:*)
+ GUESS=$UNAME_MACHINE-axis-linux-$LIBC
+ ;;
+ crisv32:Linux:*:*)
+ GUESS=$UNAME_MACHINE-axis-linux-$LIBC
+ ;;
+ e2k:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ frv:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ hexagon:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ i*86:Linux:*:*)
+ GUESS=$UNAME_MACHINE-pc-linux-$LIBC
+ ;;
+ ia64:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ k1om:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ loongarch32:Linux:*:* | loongarch64:Linux:*:* | loongarchx32:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ m32r*:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ m68*:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ mips:Linux:*:* | mips64:Linux:*:*)
+ set_cc_for_build
+ IS_GLIBC=0
+ test x"${LIBC}" = xgnu && IS_GLIBC=1
+ sed 's/^ //' << EOF > "$dummy.c"
+ #undef CPU
+ #undef mips
+ #undef mipsel
+ #undef mips64
+ #undef mips64el
+ #if ${IS_GLIBC} && defined(_ABI64)
+ LIBCABI=gnuabi64
+ #else
+ #if ${IS_GLIBC} && defined(_ABIN32)
+ LIBCABI=gnuabin32
+ #else
+ LIBCABI=${LIBC}
+ #endif
+ #endif
+
+ #if ${IS_GLIBC} && defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6
+ CPU=mipsisa64r6
+ #else
+ #if ${IS_GLIBC} && !defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6
+ CPU=mipsisa32r6
+ #else
+ #if defined(__mips64)
+ CPU=mips64
+ #else
+ CPU=mips
+ #endif
+ #endif
+ #endif
+
+ #if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL)
+ MIPS_ENDIAN=el
+ #else
+ #if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB)
+ MIPS_ENDIAN=
+ #else
+ MIPS_ENDIAN=
+ #endif
+ #endif
+EOF
+ cc_set_vars=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^CPU\|^MIPS_ENDIAN\|^LIBCABI'`
+ eval "$cc_set_vars"
+ test "x$CPU" != x && { echo "$CPU${MIPS_ENDIAN}-unknown-linux-$LIBCABI"; exit; }
+ ;;
+ mips64el:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ openrisc*:Linux:*:*)
+ GUESS=or1k-unknown-linux-$LIBC
+ ;;
+ or32:Linux:*:* | or1k*:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ padre:Linux:*:*)
+ GUESS=sparc-unknown-linux-$LIBC
+ ;;
+ parisc64:Linux:*:* | hppa64:Linux:*:*)
+ GUESS=hppa64-unknown-linux-$LIBC
+ ;;
+ parisc:Linux:*:* | hppa:Linux:*:*)
+ # Look for CPU level
+ case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in
+ PA7*) GUESS=hppa1.1-unknown-linux-$LIBC ;;
+ PA8*) GUESS=hppa2.0-unknown-linux-$LIBC ;;
+ *) GUESS=hppa-unknown-linux-$LIBC ;;
+ esac
+ ;;
+ ppc64:Linux:*:*)
+ GUESS=powerpc64-unknown-linux-$LIBC
+ ;;
+ ppc:Linux:*:*)
+ GUESS=powerpc-unknown-linux-$LIBC
+ ;;
+ ppc64le:Linux:*:*)
+ GUESS=powerpc64le-unknown-linux-$LIBC
+ ;;
+ ppcle:Linux:*:*)
+ GUESS=powerpcle-unknown-linux-$LIBC
+ ;;
+ riscv32:Linux:*:* | riscv32be:Linux:*:* | riscv64:Linux:*:* | riscv64be:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ s390:Linux:*:* | s390x:Linux:*:*)
+ GUESS=$UNAME_MACHINE-ibm-linux-$LIBC
+ ;;
+ sh64*:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ sh*:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ sparc:Linux:*:* | sparc64:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ tile*:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ vax:Linux:*:*)
+ GUESS=$UNAME_MACHINE-dec-linux-$LIBC
+ ;;
+ x86_64:Linux:*:*)
+ set_cc_for_build
+ LIBCABI=$LIBC
+ if test "$CC_FOR_BUILD" != no_compiler_found; then
+ if (echo '#ifdef __ILP32__'; echo IS_X32; echo '#endif') | \
+ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \
+ grep IS_X32 >/dev/null
+ then
+ LIBCABI=${LIBC}x32
+ fi
+ fi
+ GUESS=$UNAME_MACHINE-pc-linux-$LIBCABI
+ ;;
+ xtensa*:Linux:*:*)
+ GUESS=$UNAME_MACHINE-unknown-linux-$LIBC
+ ;;
+ i*86:DYNIX/ptx:4*:*)
+ # ptx 4.0 does uname -s correctly, with DYNIX/ptx in there.
+ # earlier versions are messed up and put the nodename in both
+ # sysname and nodename.
+ GUESS=i386-sequent-sysv4
+ ;;
+ i*86:UNIX_SV:4.2MP:2.*)
+ # Unixware is an offshoot of SVR4, but it has its own version
+ # number series starting with 2...
+ # I am not positive that other SVR4 systems won't match this,
+ # I just have to hope. -- rms.
+ # Use sysv4.2uw... so that sysv4* matches it.
+ GUESS=$UNAME_MACHINE-pc-sysv4.2uw$UNAME_VERSION
+ ;;
+ i*86:OS/2:*:*)
+ # If we were able to find `uname', then EMX Unix compatibility
+ # is probably installed.
+ GUESS=$UNAME_MACHINE-pc-os2-emx
+ ;;
+ i*86:XTS-300:*:STOP)
+ GUESS=$UNAME_MACHINE-unknown-stop
+ ;;
+ i*86:atheos:*:*)
+ GUESS=$UNAME_MACHINE-unknown-atheos
+ ;;
+ i*86:syllable:*:*)
+ GUESS=$UNAME_MACHINE-pc-syllable
+ ;;
+ i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*)
+ GUESS=i386-unknown-lynxos$UNAME_RELEASE
+ ;;
+ i*86:*DOS:*:*)
+ GUESS=$UNAME_MACHINE-pc-msdosdjgpp
+ ;;
+ i*86:*:4.*:*)
+ UNAME_REL=`echo "$UNAME_RELEASE" | sed 's/\/MP$//'`
+ if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then
+ GUESS=$UNAME_MACHINE-univel-sysv$UNAME_REL
+ else
+ GUESS=$UNAME_MACHINE-pc-sysv$UNAME_REL
+ fi
+ ;;
+ i*86:*:5:[678]*)
+ # UnixWare 7.x, OpenUNIX and OpenServer 6.
+ case `/bin/uname -X | grep "^Machine"` in
+ *486*) UNAME_MACHINE=i486 ;;
+ *Pentium) UNAME_MACHINE=i586 ;;
+ *Pent*|*Celeron) UNAME_MACHINE=i686 ;;
+ esac
+ GUESS=$UNAME_MACHINE-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION}
+ ;;
+ i*86:*:3.2:*)
+ if test -f /usr/options/cb.name; then
+ UNAME_REL=`sed -n 's/.*Version //p' </usr/options/cb.name`
+ GUESS=$UNAME_MACHINE-pc-isc$UNAME_REL
+ elif /bin/uname -X 2>/dev/null >/dev/null ; then
+ UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')`
+ (/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486
+ (/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \
+ && UNAME_MACHINE=i586
+ (/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \
+ && UNAME_MACHINE=i686
+ (/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \
+ && UNAME_MACHINE=i686
+ GUESS=$UNAME_MACHINE-pc-sco$UNAME_REL
+ else
+ GUESS=$UNAME_MACHINE-pc-sysv32
+ fi
+ ;;
+ pc:*:*:*)
+ # Left here for compatibility:
+ # uname -m prints for DJGPP always 'pc', but it prints nothing about
+ # the processor, so we play safe by assuming i586.
+ # Note: whatever this is, it MUST be the same as what config.sub
+ # prints for the "djgpp" host, or else GDB configure will decide that
+ # this is a cross-build.
+ GUESS=i586-pc-msdosdjgpp
+ ;;
+ Intel:Mach:3*:*)
+ GUESS=i386-pc-mach3
+ ;;
+ paragon:*:*:*)
+ GUESS=i860-intel-osf1
+ ;;
+ i860:*:4.*:*) # i860-SVR4
+ if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then
+ GUESS=i860-stardent-sysv$UNAME_RELEASE # Stardent Vistra i860-SVR4
+ else # Add other i860-SVR4 vendors below as they are discovered.
+ GUESS=i860-unknown-sysv$UNAME_RELEASE # Unknown i860-SVR4
+ fi
+ ;;
+ mini*:CTIX:SYS*5:*)
+ # "miniframe"
+ GUESS=m68010-convergent-sysv
+ ;;
+ mc68k:UNIX:SYSTEM5:3.51m)
+ GUESS=m68k-convergent-sysv
+ ;;
+ M680?0:D-NIX:5.3:*)
+ GUESS=m68k-diab-dnix
+ ;;
+ M68*:*:R3V[5678]*:*)
+ test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;;
+ 3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0)
+ OS_REL=''
+ test -r /etc/.relid \
+ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid`
+ /bin/uname -p 2>/dev/null | grep 86 >/dev/null \
+ && { echo i486-ncr-sysv4.3"$OS_REL"; exit; }
+ /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \
+ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;;
+ 3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*)
+ /bin/uname -p 2>/dev/null | grep 86 >/dev/null \
+ && { echo i486-ncr-sysv4; exit; } ;;
+ NCR*:*:4.2:* | MPRAS*:*:4.2:*)
+ OS_REL='.3'
+ test -r /etc/.relid \
+ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid`
+ /bin/uname -p 2>/dev/null | grep 86 >/dev/null \
+ && { echo i486-ncr-sysv4.3"$OS_REL"; exit; }
+ /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \
+ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; }
+ /bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \
+ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;;
+ m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*)
+ GUESS=m68k-unknown-lynxos$UNAME_RELEASE
+ ;;
+ mc68030:UNIX_System_V:4.*:*)
+ GUESS=m68k-atari-sysv4
+ ;;
+ TSUNAMI:LynxOS:2.*:*)
+ GUESS=sparc-unknown-lynxos$UNAME_RELEASE
+ ;;
+ rs6000:LynxOS:2.*:*)
+ GUESS=rs6000-unknown-lynxos$UNAME_RELEASE
+ ;;
+ PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*)
+ GUESS=powerpc-unknown-lynxos$UNAME_RELEASE
+ ;;
+ SM[BE]S:UNIX_SV:*:*)
+ GUESS=mips-dde-sysv$UNAME_RELEASE
+ ;;
+ RM*:ReliantUNIX-*:*:*)
+ GUESS=mips-sni-sysv4
+ ;;
+ RM*:SINIX-*:*:*)
+ GUESS=mips-sni-sysv4
+ ;;
+ *:SINIX-*:*:*)
+ if uname -p 2>/dev/null >/dev/null ; then
+ UNAME_MACHINE=`(uname -p) 2>/dev/null`
+ GUESS=$UNAME_MACHINE-sni-sysv4
+ else
+ GUESS=ns32k-sni-sysv
+ fi
+ ;;
+ PENTIUM:*:4.0*:*) # Unisys `ClearPath HMP IX 4000' SVR4/MP effort
+ # says <Richard.M.Bartel@ccMail.Census.GOV>
+ GUESS=i586-unisys-sysv4
+ ;;
+ *:UNIX_System_V:4*:FTX*)
+ # From Gerald Hewes <hewes@openmarket.com>.
+ # How about differentiating between stratus architectures? -djm
+ GUESS=hppa1.1-stratus-sysv4
+ ;;
+ *:*:*:FTX*)
+ # From seanf@swdc.stratus.com.
+ GUESS=i860-stratus-sysv4
+ ;;
+ i*86:VOS:*:*)
+ # From Paul.Green@stratus.com.
+ GUESS=$UNAME_MACHINE-stratus-vos
+ ;;
+ *:VOS:*:*)
+ # From Paul.Green@stratus.com.
+ GUESS=hppa1.1-stratus-vos
+ ;;
+ mc68*:A/UX:*:*)
+ GUESS=m68k-apple-aux$UNAME_RELEASE
+ ;;
+ news*:NEWS-OS:6*:*)
+ GUESS=mips-sony-newsos6
+ ;;
+ R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*)
+ if test -d /usr/nec; then
+ GUESS=mips-nec-sysv$UNAME_RELEASE
+ else
+ GUESS=mips-unknown-sysv$UNAME_RELEASE
+ fi
+ ;;
+ BeBox:BeOS:*:*) # BeOS running on hardware made by Be, PPC only.
+ GUESS=powerpc-be-beos
+ ;;
+ BeMac:BeOS:*:*) # BeOS running on Mac or Mac clone, PPC only.
+ GUESS=powerpc-apple-beos
+ ;;
+ BePC:BeOS:*:*) # BeOS running on Intel PC compatible.
+ GUESS=i586-pc-beos
+ ;;
+ BePC:Haiku:*:*) # Haiku running on Intel PC compatible.
+ GUESS=i586-pc-haiku
+ ;;
+ x86_64:Haiku:*:*)
+ GUESS=x86_64-unknown-haiku
+ ;;
+ SX-4:SUPER-UX:*:*)
+ GUESS=sx4-nec-superux$UNAME_RELEASE
+ ;;
+ SX-5:SUPER-UX:*:*)
+ GUESS=sx5-nec-superux$UNAME_RELEASE
+ ;;
+ SX-6:SUPER-UX:*:*)
+ GUESS=sx6-nec-superux$UNAME_RELEASE
+ ;;
+ SX-7:SUPER-UX:*:*)
+ GUESS=sx7-nec-superux$UNAME_RELEASE
+ ;;
+ SX-8:SUPER-UX:*:*)
+ GUESS=sx8-nec-superux$UNAME_RELEASE
+ ;;
+ SX-8R:SUPER-UX:*:*)
+ GUESS=sx8r-nec-superux$UNAME_RELEASE
+ ;;
+ SX-ACE:SUPER-UX:*:*)
+ GUESS=sxace-nec-superux$UNAME_RELEASE
+ ;;
+ Power*:Rhapsody:*:*)
+ GUESS=powerpc-apple-rhapsody$UNAME_RELEASE
+ ;;
+ *:Rhapsody:*:*)
+ GUESS=$UNAME_MACHINE-apple-rhapsody$UNAME_RELEASE
+ ;;
+ arm64:Darwin:*:*)
+ GUESS=aarch64-apple-darwin$UNAME_RELEASE
+ ;;
+ *:Darwin:*:*)
+ UNAME_PROCESSOR=`uname -p`
+ case $UNAME_PROCESSOR in
+ unknown) UNAME_PROCESSOR=powerpc ;;
+ esac
+ if command -v xcode-select > /dev/null 2> /dev/null && \
+ ! xcode-select --print-path > /dev/null 2> /dev/null ; then
+ # Avoid executing cc if there is no toolchain installed as
+ # cc will be a stub that puts up a graphical alert
+ # prompting the user to install developer tools.
+ CC_FOR_BUILD=no_compiler_found
+ else
+ set_cc_for_build
+ fi
+ if test "$CC_FOR_BUILD" != no_compiler_found; then
+ if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \
+ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \
+ grep IS_64BIT_ARCH >/dev/null
+ then
+ case $UNAME_PROCESSOR in
+ i386) UNAME_PROCESSOR=x86_64 ;;
+ powerpc) UNAME_PROCESSOR=powerpc64 ;;
+ esac
+ fi
+ # On 10.4-10.6 one might compile for PowerPC via gcc -arch ppc
+ if (echo '#ifdef __POWERPC__'; echo IS_PPC; echo '#endif') | \
+ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \
+ grep IS_PPC >/dev/null
+ then
+ UNAME_PROCESSOR=powerpc
+ fi
+ elif test "$UNAME_PROCESSOR" = i386 ; then
+ # uname -m returns i386 or x86_64
+ UNAME_PROCESSOR=$UNAME_MACHINE
+ fi
+ GUESS=$UNAME_PROCESSOR-apple-darwin$UNAME_RELEASE
+ ;;
+ *:procnto*:*:* | *:QNX:[0123456789]*:*)
+ UNAME_PROCESSOR=`uname -p`
+ if test "$UNAME_PROCESSOR" = x86; then
+ UNAME_PROCESSOR=i386
+ UNAME_MACHINE=pc
+ fi
+ GUESS=$UNAME_PROCESSOR-$UNAME_MACHINE-nto-qnx$UNAME_RELEASE
+ ;;
+ *:QNX:*:4*)
+ GUESS=i386-pc-qnx
+ ;;
+ NEO-*:NONSTOP_KERNEL:*:*)
+ GUESS=neo-tandem-nsk$UNAME_RELEASE
+ ;;
+ NSE-*:NONSTOP_KERNEL:*:*)
+ GUESS=nse-tandem-nsk$UNAME_RELEASE
+ ;;
+ NSR-*:NONSTOP_KERNEL:*:*)
+ GUESS=nsr-tandem-nsk$UNAME_RELEASE
+ ;;
+ NSV-*:NONSTOP_KERNEL:*:*)
+ GUESS=nsv-tandem-nsk$UNAME_RELEASE
+ ;;
+ NSX-*:NONSTOP_KERNEL:*:*)
+ GUESS=nsx-tandem-nsk$UNAME_RELEASE
+ ;;
+ *:NonStop-UX:*:*)
+ GUESS=mips-compaq-nonstopux
+ ;;
+ BS2000:POSIX*:*:*)
+ GUESS=bs2000-siemens-sysv
+ ;;
+ DS/*:UNIX_System_V:*:*)
+ GUESS=$UNAME_MACHINE-$UNAME_SYSTEM-$UNAME_RELEASE
+ ;;
+ *:Plan9:*:*)
+ # "uname -m" is not consistent, so use $cputype instead. 386
+ # is converted to i386 for consistency with other x86
+ # operating systems.
+ if test "${cputype-}" = 386; then
+ UNAME_MACHINE=i386
+ elif test "x${cputype-}" != x; then
+ UNAME_MACHINE=$cputype
+ fi
+ GUESS=$UNAME_MACHINE-unknown-plan9
+ ;;
+ *:TOPS-10:*:*)
+ GUESS=pdp10-unknown-tops10
+ ;;
+ *:TENEX:*:*)
+ GUESS=pdp10-unknown-tenex
+ ;;
+ KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*)
+ GUESS=pdp10-dec-tops20
+ ;;
+ XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*)
+ GUESS=pdp10-xkl-tops20
+ ;;
+ *:TOPS-20:*:*)
+ GUESS=pdp10-unknown-tops20
+ ;;
+ *:ITS:*:*)
+ GUESS=pdp10-unknown-its
+ ;;
+ SEI:*:*:SEIUX)
+ GUESS=mips-sei-seiux$UNAME_RELEASE
+ ;;
+ *:DragonFly:*:*)
+ DRAGONFLY_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'`
+ GUESS=$UNAME_MACHINE-unknown-dragonfly$DRAGONFLY_REL
+ ;;
+ *:*VMS:*:*)
+ UNAME_MACHINE=`(uname -p) 2>/dev/null`
+ case $UNAME_MACHINE in
+ A*) GUESS=alpha-dec-vms ;;
+ I*) GUESS=ia64-dec-vms ;;
+ V*) GUESS=vax-dec-vms ;;
+ esac ;;
+ *:XENIX:*:SysV)
+ GUESS=i386-pc-xenix
+ ;;
+ i*86:skyos:*:*)
+ SKYOS_REL=`echo "$UNAME_RELEASE" | sed -e 's/ .*$//'`
+ GUESS=$UNAME_MACHINE-pc-skyos$SKYOS_REL
+ ;;
+ i*86:rdos:*:*)
+ GUESS=$UNAME_MACHINE-pc-rdos
+ ;;
+ i*86:Fiwix:*:*)
+ GUESS=$UNAME_MACHINE-pc-fiwix
+ ;;
+ *:AROS:*:*)
+ GUESS=$UNAME_MACHINE-unknown-aros
+ ;;
+ x86_64:VMkernel:*:*)
+ GUESS=$UNAME_MACHINE-unknown-esx
+ ;;
+ amd64:Isilon\ OneFS:*:*)
+ GUESS=x86_64-unknown-onefs
+ ;;
+ *:Unleashed:*:*)
+ GUESS=$UNAME_MACHINE-unknown-unleashed$UNAME_RELEASE
+ ;;
+esac
+
+# Do we have a guess based on uname results?
+if test "x$GUESS" != x; then
+ echo "$GUESS"
+ exit
+fi
+
+# No uname command or uname output not recognized.
+set_cc_for_build
+cat > "$dummy.c" <<EOF
+#ifdef _SEQUENT_
+#include <sys/types.h>
+#include <sys/utsname.h>
+#endif
+#if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__)
+#if defined (vax) || defined (__vax) || defined (__vax__) || defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__)
+#include <signal.h>
+#if defined(_SIZE_T_) || defined(SIGLOST)
+#include <sys/utsname.h>
+#endif
+#endif
+#endif
+main ()
+{
+#if defined (sony)
+#if defined (MIPSEB)
+ /* BFD wants "bsd" instead of "newsos". Perhaps BFD should be changed,
+ I don't know.... */
+ printf ("mips-sony-bsd\n"); exit (0);
+#else
+#include <sys/param.h>
+ printf ("m68k-sony-newsos%s\n",
+#ifdef NEWSOS4
+ "4"
+#else
+ ""
+#endif
+ ); exit (0);
+#endif
+#endif
+
+#if defined (NeXT)
+#if !defined (__ARCHITECTURE__)
+#define __ARCHITECTURE__ "m68k"
+#endif
+ int version;
+ version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`;
+ if (version < 4)
+ printf ("%s-next-nextstep%d\n", __ARCHITECTURE__, version);
+ else
+ printf ("%s-next-openstep%d\n", __ARCHITECTURE__, version);
+ exit (0);
+#endif
+
+#if defined (MULTIMAX) || defined (n16)
+#if defined (UMAXV)
+ printf ("ns32k-encore-sysv\n"); exit (0);
+#else
+#if defined (CMU)
+ printf ("ns32k-encore-mach\n"); exit (0);
+#else
+ printf ("ns32k-encore-bsd\n"); exit (0);
+#endif
+#endif
+#endif
+
+#if defined (__386BSD__)
+ printf ("i386-pc-bsd\n"); exit (0);
+#endif
+
+#if defined (sequent)
+#if defined (i386)
+ printf ("i386-sequent-dynix\n"); exit (0);
+#endif
+#if defined (ns32000)
+ printf ("ns32k-sequent-dynix\n"); exit (0);
+#endif
+#endif
+
+#if defined (_SEQUENT_)
+ struct utsname un;
+
+ uname(&un);
+ if (strncmp(un.version, "V2", 2) == 0) {
+ printf ("i386-sequent-ptx2\n"); exit (0);
+ }
+ if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct? */
+ printf ("i386-sequent-ptx1\n"); exit (0);
+ }
+ printf ("i386-sequent-ptx\n"); exit (0);
+#endif
+
+#if defined (vax)
+#if !defined (ultrix)
+#include <sys/param.h>
+#if defined (BSD)
+#if BSD == 43
+ printf ("vax-dec-bsd4.3\n"); exit (0);
+#else
+#if BSD == 199006
+ printf ("vax-dec-bsd4.3reno\n"); exit (0);
+#else
+ printf ("vax-dec-bsd\n"); exit (0);
+#endif
+#endif
+#else
+ printf ("vax-dec-bsd\n"); exit (0);
+#endif
+#else
+#if defined(_SIZE_T_) || defined(SIGLOST)
+ struct utsname un;
+ uname (&un);
+ printf ("vax-dec-ultrix%s\n", un.release); exit (0);
+#else
+ printf ("vax-dec-ultrix\n"); exit (0);
+#endif
+#endif
+#endif
+#if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__)
+#if defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__)
+#if defined(_SIZE_T_) || defined(SIGLOST)
+ struct utsname *un;
+ uname (&un);
+ printf ("mips-dec-ultrix%s\n", un.release); exit (0);
+#else
+ printf ("mips-dec-ultrix\n"); exit (0);
+#endif
+#endif
+#endif
+
+#if defined (alliant) && defined (i860)
+ printf ("i860-alliant-bsd\n"); exit (0);
+#endif
+
+ exit (1);
+}
+EOF
+
+$CC_FOR_BUILD -o "$dummy" "$dummy.c" 2>/dev/null && SYSTEM_NAME=`"$dummy"` &&
+ { echo "$SYSTEM_NAME"; exit; }
+
+# Apollos put the system type in the environment.
+test -d /usr/apollo && { echo "$ISP-apollo-$SYSTYPE"; exit; }
+
+echo "$0: unable to guess system type" >&2
+
+case $UNAME_MACHINE:$UNAME_SYSTEM in
+ mips:Linux | mips64:Linux)
+ # If we got here on MIPS GNU/Linux, output extra information.
+ cat >&2 <<EOF
+
+NOTE: MIPS GNU/Linux systems require a C compiler to fully recognize
+the system type. Please install a C compiler and try again.
+EOF
+ ;;
+esac
+
+cat >&2 <<EOF
+
+This script (version $timestamp) has failed to recognize the
+operating system you are using. If your script is old, overwrite *all*
+copies of config.guess and config.sub with the latest versions from:
+
+  https://git.savannah.gnu.org/cgit/config.git/plain/config.guess
+and
+  https://git.savannah.gnu.org/cgit/config.git/plain/config.sub
+EOF
+
+our_year=`echo $timestamp | sed 's,-.*,,'`
+thisyear=`date +%Y`
+# shellcheck disable=SC2003
+script_age=`expr "$thisyear" - "$our_year"`
+if test "$script_age" -lt 3 ; then
+   cat >&2 <<EOF
+
+If $0 has already been updated, send the following data and any
+information you think might be pertinent to config-patches@gnu.org to
+provide the necessary information to handle your system.
+
+config.guess timestamp = $timestamp
+
+uname -m = `(uname -m) 2>/dev/null || echo unknown`
+uname -r = `(uname -r) 2>/dev/null || echo unknown`
+uname -s = `(uname -s) 2>/dev/null || echo unknown`
+uname -v = `(uname -v) 2>/dev/null || echo unknown`
+
+/usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null`
+/bin/uname -X = `(/bin/uname -X) 2>/dev/null`
+
+hostinfo = `(hostinfo) 2>/dev/null`
+/bin/universe = `(/bin/universe) 2>/dev/null`
+/usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null`
+/bin/arch = `(/bin/arch) 2>/dev/null`
+/usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null`
+/usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null`
+
+UNAME_MACHINE = "$UNAME_MACHINE"
+UNAME_RELEASE = "$UNAME_RELEASE"
+UNAME_SYSTEM = "$UNAME_SYSTEM"
+UNAME_VERSION = "$UNAME_VERSION"
+EOF
+fi
+
+exit 1
+
+# Local variables:
+# eval: (add-hook 'before-save-hook 'time-stamp)
+# time-stamp-start: "timestamp='"
+# time-stamp-format: "%:y-%02m-%02d"
+# time-stamp-end: "'"
+# End:
diff --git a/config.sub b/config.sub
new file mode 100755
index 0000000..dba16e8
--- /dev/null
+++ b/config.sub
@@ -0,0 +1,1890 @@
+#! /bin/sh
+# Configuration validation subroutine script.
+# Copyright 1992-2022 Free Software Foundation, Inc.
+
+# shellcheck disable=SC2006,SC2268 # see below for rationale
+
+timestamp='2022-01-03'
+
+# This file is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <https://www.gnu.org/licenses/>.
+#
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that
+# program. This Exception is an additional permission under section 7
+# of the GNU General Public License, version 3 ("GPLv3").
+
+
+# Please send patches to <config-patches@gnu.org>.
+#
+# Configuration subroutine to validate and canonicalize a configuration type.
+# Supply the specified configuration type as an argument.
+# If it is invalid, we print an error message on stderr and exit with code 1.
+# Otherwise, we print the canonical config type on stdout and succeed.
+
+# You can get the latest version of this script from:
+# https://git.savannah.gnu.org/cgit/config.git/plain/config.sub
+
+# This file is supposed to be the same for all GNU packages
+# and recognize all the CPU types, system types and aliases
+# that are meaningful with *any* GNU software.
+# Each package is responsible for reporting which valid configurations
+# it does not support. The user should be able to distinguish
+# a failure to support a valid configuration from a meaningless
+# configuration.
+
+# The goal of this file is to map all the various variations of a given
+# machine specification into a single specification in the form:
+# CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM
+# or in some cases, the newer four-part form:
+# CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM
+# It is wrong to echo any other type of specification.
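+# Illustrative example (not exhaustive): the alias 'amd64-linux' is
+# canonicalized to 'x86_64-pc-linux-gnu'.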
+
+# The "shellcheck disable" line above the timestamp inhibits complaints
+# about features and limitations of the classic Bourne shell that were
+# superseded or lifted in POSIX. However, this script identifies a wide
+# variety of pre-POSIX systems that do not have POSIX shells at all, and
+# even some reasonably current systems (Solaris 10 as case-in-point) still
+# have a pre-POSIX /bin/sh.
+
+me=`echo "$0" | sed -e 's,.*/,,'`
+
+usage="\
+Usage: $0 [OPTION] CPU-MFR-OPSYS or ALIAS
+
+Canonicalize a configuration name.
+
+Options:
+ -h, --help print this help, then exit
+ -t, --time-stamp print date of last modification, then exit
+ -v, --version print version number, then exit
+
+Report bugs and patches to <config-patches@gnu.org>."
+
+version="\
+GNU config.sub ($timestamp)
+
+Copyright 1992-2022 Free Software Foundation, Inc.
+
+This is free software; see the source for copying conditions. There is NO
+warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
+
+help="
+Try \`$me --help' for more information."
+
+# Parse command line
+while test $# -gt 0 ; do
+ case $1 in
+ --time-stamp | --time* | -t )
+ echo "$timestamp" ; exit ;;
+ --version | -v )
+ echo "$version" ; exit ;;
+ --help | --h* | -h )
+ echo "$usage"; exit ;;
+ -- ) # Stop option processing
+ shift; break ;;
+ - ) # Use stdin as input.
+ break ;;
+ -* )
+ echo "$me: invalid option $1$help" >&2
+ exit 1 ;;
+
+ *local*)
+ # First pass through any local machine types.
+ echo "$1"
+ exit ;;
+
+ * )
+ break ;;
+ esac
+done
+
+case $# in
+ 0) echo "$me: missing argument$help" >&2
+ exit 1;;
+ 1) ;;
+ *) echo "$me: too many arguments$help" >&2
+ exit 1;;
+esac
+
+# Split fields of configuration type
+# shellcheck disable=SC2162
+saved_IFS=$IFS
+IFS="-" read field1 field2 field3 field4 <<EOF
+$1
+EOF
+IFS=$saved_IFS
+
+# Separate into logical components for further validation
+case $1 in
+ *-*-*-*-*)
+ echo Invalid configuration \`"$1"\': more than four components >&2
+ exit 1
+ ;;
+ *-*-*-*)
+ basic_machine=$field1-$field2
+ basic_os=$field3-$field4
+ ;;
+ *-*-*)
+ # Ambiguous whether COMPANY is present, or skipped and KERNEL-OS is two
+ # parts
+ maybe_os=$field2-$field3
+ case $maybe_os in
+ nto-qnx* | linux-* | uclinux-uclibc* \
+ | uclinux-gnu* | kfreebsd*-gnu* | knetbsd*-gnu* | netbsd*-gnu* \
+ | netbsd*-eabi* | kopensolaris*-gnu* | cloudabi*-eabi* \
+ | storm-chaos* | os2-emx* | rtmk-nova*)
+ basic_machine=$field1
+ basic_os=$maybe_os
+ ;;
+ android-linux)
+ basic_machine=$field1-unknown
+ basic_os=linux-android
+ ;;
+ *)
+ basic_machine=$field1-$field2
+ basic_os=$field3
+ ;;
+ esac
+ ;;
+ *-*)
+ # A lone config we happen to match not fitting any pattern
+ case $field1-$field2 in
+ decstation-3100)
+ basic_machine=mips-dec
+ basic_os=
+ ;;
+ *-*)
+ # Second component is usually, but not always the OS
+ case $field2 in
+ # Prevent following clause from handling this valid os
+ sun*os*)
+ basic_machine=$field1
+ basic_os=$field2
+ ;;
+ zephyr*)
+ basic_machine=$field1-unknown
+ basic_os=$field2
+ ;;
+ # Manufacturers
+ dec* | mips* | sequent* | encore* | pc533* | sgi* | sony* \
+ | att* | 7300* | 3300* | delta* | motorola* | sun[234]* \
+ | unicom* | ibm* | next | hp | isi* | apollo | altos* \
+ | convergent* | ncr* | news | 32* | 3600* | 3100* \
+ | hitachi* | c[123]* | convex* | sun | crds | omron* | dg \
+ | ultra | tti* | harris | dolphin | highlevel | gould \
+ | cbm | ns | masscomp | apple | axis | knuth | cray \
+ | microblaze* | sim | cisco \
+ | oki | wec | wrs | winbond)
+ basic_machine=$field1-$field2
+ basic_os=
+ ;;
+ *)
+ basic_machine=$field1
+ basic_os=$field2
+ ;;
+ esac
+ ;;
+ esac
+ ;;
+ *)
+ # Convert single-component short-hands not valid as part of
+ # multi-component configurations.
+ case $field1 in
+ 386bsd)
+ basic_machine=i386-pc
+ basic_os=bsd
+ ;;
+ a29khif)
+ basic_machine=a29k-amd
+ basic_os=udi
+ ;;
+ adobe68k)
+ basic_machine=m68010-adobe
+ basic_os=scout
+ ;;
+ alliant)
+ basic_machine=fx80-alliant
+ basic_os=
+ ;;
+ altos | altos3068)
+ basic_machine=m68k-altos
+ basic_os=
+ ;;
+ am29k)
+ basic_machine=a29k-none
+ basic_os=bsd
+ ;;
+ amdahl)
+ basic_machine=580-amdahl
+ basic_os=sysv
+ ;;
+ amiga)
+ basic_machine=m68k-unknown
+ basic_os=
+ ;;
+ amigaos | amigados)
+ basic_machine=m68k-unknown
+ basic_os=amigaos
+ ;;
+ amigaunix | amix)
+ basic_machine=m68k-unknown
+ basic_os=sysv4
+ ;;
+ apollo68)
+ basic_machine=m68k-apollo
+ basic_os=sysv
+ ;;
+ apollo68bsd)
+ basic_machine=m68k-apollo
+ basic_os=bsd
+ ;;
+ aros)
+ basic_machine=i386-pc
+ basic_os=aros
+ ;;
+ aux)
+ basic_machine=m68k-apple
+ basic_os=aux
+ ;;
+ balance)
+ basic_machine=ns32k-sequent
+ basic_os=dynix
+ ;;
+ blackfin)
+ basic_machine=bfin-unknown
+ basic_os=linux
+ ;;
+ cegcc)
+ basic_machine=arm-unknown
+ basic_os=cegcc
+ ;;
+ convex-c1)
+ basic_machine=c1-convex
+ basic_os=bsd
+ ;;
+ convex-c2)
+ basic_machine=c2-convex
+ basic_os=bsd
+ ;;
+ convex-c32)
+ basic_machine=c32-convex
+ basic_os=bsd
+ ;;
+ convex-c34)
+ basic_machine=c34-convex
+ basic_os=bsd
+ ;;
+ convex-c38)
+ basic_machine=c38-convex
+ basic_os=bsd
+ ;;
+ cray)
+ basic_machine=j90-cray
+ basic_os=unicos
+ ;;
+ crds | unos)
+ basic_machine=m68k-crds
+ basic_os=
+ ;;
+ da30)
+ basic_machine=m68k-da30
+ basic_os=
+ ;;
+ decstation | pmax | pmin | dec3100 | decstatn)
+ basic_machine=mips-dec
+ basic_os=
+ ;;
+ delta88)
+ basic_machine=m88k-motorola
+ basic_os=sysv3
+ ;;
+ dicos)
+ basic_machine=i686-pc
+ basic_os=dicos
+ ;;
+ djgpp)
+ basic_machine=i586-pc
+ basic_os=msdosdjgpp
+ ;;
+ ebmon29k)
+ basic_machine=a29k-amd
+ basic_os=ebmon
+ ;;
+ es1800 | OSE68k | ose68k | ose | OSE)
+ basic_machine=m68k-ericsson
+ basic_os=ose
+ ;;
+ gmicro)
+ basic_machine=tron-gmicro
+ basic_os=sysv
+ ;;
+ go32)
+ basic_machine=i386-pc
+ basic_os=go32
+ ;;
+ h8300hms)
+ basic_machine=h8300-hitachi
+ basic_os=hms
+ ;;
+ h8300xray)
+ basic_machine=h8300-hitachi
+ basic_os=xray
+ ;;
+ h8500hms)
+ basic_machine=h8500-hitachi
+ basic_os=hms
+ ;;
+ harris)
+ basic_machine=m88k-harris
+ basic_os=sysv3
+ ;;
+ hp300 | hp300hpux)
+ basic_machine=m68k-hp
+ basic_os=hpux
+ ;;
+ hp300bsd)
+ basic_machine=m68k-hp
+ basic_os=bsd
+ ;;
+ hppaosf)
+ basic_machine=hppa1.1-hp
+ basic_os=osf
+ ;;
+ hppro)
+ basic_machine=hppa1.1-hp
+ basic_os=proelf
+ ;;
+ i386mach)
+ basic_machine=i386-mach
+ basic_os=mach
+ ;;
+ isi68 | isi)
+ basic_machine=m68k-isi
+ basic_os=sysv
+ ;;
+ m68knommu)
+ basic_machine=m68k-unknown
+ basic_os=linux
+ ;;
+ magnum | m3230)
+ basic_machine=mips-mips
+ basic_os=sysv
+ ;;
+ merlin)
+ basic_machine=ns32k-utek
+ basic_os=sysv
+ ;;
+ mingw64)
+ basic_machine=x86_64-pc
+ basic_os=mingw64
+ ;;
+ mingw32)
+ basic_machine=i686-pc
+ basic_os=mingw32
+ ;;
+ mingw32ce)
+ basic_machine=arm-unknown
+ basic_os=mingw32ce
+ ;;
+ monitor)
+ basic_machine=m68k-rom68k
+ basic_os=coff
+ ;;
+ morphos)
+ basic_machine=powerpc-unknown
+ basic_os=morphos
+ ;;
+ moxiebox)
+ basic_machine=moxie-unknown
+ basic_os=moxiebox
+ ;;
+ msdos)
+ basic_machine=i386-pc
+ basic_os=msdos
+ ;;
+ msys)
+ basic_machine=i686-pc
+ basic_os=msys
+ ;;
+ mvs)
+ basic_machine=i370-ibm
+ basic_os=mvs
+ ;;
+ nacl)
+ basic_machine=le32-unknown
+ basic_os=nacl
+ ;;
+ ncr3000)
+ basic_machine=i486-ncr
+ basic_os=sysv4
+ ;;
+ netbsd386)
+ basic_machine=i386-pc
+ basic_os=netbsd
+ ;;
+ netwinder)
+ basic_machine=armv4l-rebel
+ basic_os=linux
+ ;;
+ news | news700 | news800 | news900)
+ basic_machine=m68k-sony
+ basic_os=newsos
+ ;;
+ news1000)
+ basic_machine=m68030-sony
+ basic_os=newsos
+ ;;
+ necv70)
+ basic_machine=v70-nec
+ basic_os=sysv
+ ;;
+ nh3000)
+ basic_machine=m68k-harris
+ basic_os=cxux
+ ;;
+ nh[45]000)
+ basic_machine=m88k-harris
+ basic_os=cxux
+ ;;
+ nindy960)
+ basic_machine=i960-intel
+ basic_os=nindy
+ ;;
+ mon960)
+ basic_machine=i960-intel
+ basic_os=mon960
+ ;;
+ nonstopux)
+ basic_machine=mips-compaq
+ basic_os=nonstopux
+ ;;
+ os400)
+ basic_machine=powerpc-ibm
+ basic_os=os400
+ ;;
+ OSE68000 | ose68000)
+ basic_machine=m68000-ericsson
+ basic_os=ose
+ ;;
+ os68k)
+ basic_machine=m68k-none
+ basic_os=os68k
+ ;;
+ paragon)
+ basic_machine=i860-intel
+ basic_os=osf
+ ;;
+ parisc)
+ basic_machine=hppa-unknown
+ basic_os=linux
+ ;;
+ psp)
+ basic_machine=mipsallegrexel-sony
+ basic_os=psp
+ ;;
+ pw32)
+ basic_machine=i586-unknown
+ basic_os=pw32
+ ;;
+ rdos | rdos64)
+ basic_machine=x86_64-pc
+ basic_os=rdos
+ ;;
+ rdos32)
+ basic_machine=i386-pc
+ basic_os=rdos
+ ;;
+ rom68k)
+ basic_machine=m68k-rom68k
+ basic_os=coff
+ ;;
+ sa29200)
+ basic_machine=a29k-amd
+ basic_os=udi
+ ;;
+ sei)
+ basic_machine=mips-sei
+ basic_os=seiux
+ ;;
+ sequent)
+ basic_machine=i386-sequent
+ basic_os=
+ ;;
+ sps7)
+ basic_machine=m68k-bull
+ basic_os=sysv2
+ ;;
+ st2000)
+ basic_machine=m68k-tandem
+ basic_os=
+ ;;
+ stratus)
+ basic_machine=i860-stratus
+ basic_os=sysv4
+ ;;
+ sun2)
+ basic_machine=m68000-sun
+ basic_os=
+ ;;
+ sun2os3)
+ basic_machine=m68000-sun
+ basic_os=sunos3
+ ;;
+ sun2os4)
+ basic_machine=m68000-sun
+ basic_os=sunos4
+ ;;
+ sun3)
+ basic_machine=m68k-sun
+ basic_os=
+ ;;
+ sun3os3)
+ basic_machine=m68k-sun
+ basic_os=sunos3
+ ;;
+ sun3os4)
+ basic_machine=m68k-sun
+ basic_os=sunos4
+ ;;
+ sun4)
+ basic_machine=sparc-sun
+ basic_os=
+ ;;
+ sun4os3)
+ basic_machine=sparc-sun
+ basic_os=sunos3
+ ;;
+ sun4os4)
+ basic_machine=sparc-sun
+ basic_os=sunos4
+ ;;
+ sun4sol2)
+ basic_machine=sparc-sun
+ basic_os=solaris2
+ ;;
+ sun386 | sun386i | roadrunner)
+ basic_machine=i386-sun
+ basic_os=
+ ;;
+ sv1)
+ basic_machine=sv1-cray
+ basic_os=unicos
+ ;;
+ symmetry)
+ basic_machine=i386-sequent
+ basic_os=dynix
+ ;;
+ t3e)
+ basic_machine=alphaev5-cray
+ basic_os=unicos
+ ;;
+ t90)
+ basic_machine=t90-cray
+ basic_os=unicos
+ ;;
+ toad1)
+ basic_machine=pdp10-xkl
+ basic_os=tops20
+ ;;
+ tpf)
+ basic_machine=s390x-ibm
+ basic_os=tpf
+ ;;
+ udi29k)
+ basic_machine=a29k-amd
+ basic_os=udi
+ ;;
+ ultra3)
+ basic_machine=a29k-nyu
+ basic_os=sym1
+ ;;
+ v810 | necv810)
+ basic_machine=v810-nec
+ basic_os=none
+ ;;
+ vaxv)
+ basic_machine=vax-dec
+ basic_os=sysv
+ ;;
+ vms)
+ basic_machine=vax-dec
+ basic_os=vms
+ ;;
+ vsta)
+ basic_machine=i386-pc
+ basic_os=vsta
+ ;;
+ vxworks960)
+ basic_machine=i960-wrs
+ basic_os=vxworks
+ ;;
+ vxworks68)
+ basic_machine=m68k-wrs
+ basic_os=vxworks
+ ;;
+ vxworks29k)
+ basic_machine=a29k-wrs
+ basic_os=vxworks
+ ;;
+ xbox)
+ basic_machine=i686-pc
+ basic_os=mingw32
+ ;;
+ ymp)
+ basic_machine=ymp-cray
+ basic_os=unicos
+ ;;
+ *)
+ basic_machine=$1
+ basic_os=
+ ;;
+ esac
+ ;;
+esac
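+
+# Rough sketch of what the field splitting above produces (illustrative only):
+#   x86_64-linux-gnu   -> basic_machine=x86_64         basic_os=linux-gnu
+#   arm-android-linux  -> basic_machine=arm-unknown    basic_os=linux-android
+#   decstation-3100    -> basic_machine=mips-dec       basic_os=
+#   amiga              -> basic_machine=m68k-unknown   basic_os=
+#   vms                -> basic_machine=vax-dec        basic_os=vms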
+
+# Decode 1-component or ad-hoc basic machines
+case $basic_machine in
+ # Here we handle the default manufacturer of certain CPU types. It is in
+ # some cases the only manufacturer, in others, it is the most popular.
+ w89k)
+ cpu=hppa1.1
+ vendor=winbond
+ ;;
+ op50n)
+ cpu=hppa1.1
+ vendor=oki
+ ;;
+ op60c)
+ cpu=hppa1.1
+ vendor=oki
+ ;;
+ ibm*)
+ cpu=i370
+ vendor=ibm
+ ;;
+ orion105)
+ cpu=clipper
+ vendor=highlevel
+ ;;
+ mac | mpw | mac-mpw)
+ cpu=m68k
+ vendor=apple
+ ;;
+ pmac | pmac-mpw)
+ cpu=powerpc
+ vendor=apple
+ ;;
+
+ # Recognize the various machine names and aliases which stand
+ # for a CPU type and a company and sometimes even an OS.
+ 3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc)
+ cpu=m68000
+ vendor=att
+ ;;
+ 3b*)
+ cpu=we32k
+ vendor=att
+ ;;
+ bluegene*)
+ cpu=powerpc
+ vendor=ibm
+ basic_os=cnk
+ ;;
+ decsystem10* | dec10*)
+ cpu=pdp10
+ vendor=dec
+ basic_os=tops10
+ ;;
+ decsystem20* | dec20*)
+ cpu=pdp10
+ vendor=dec
+ basic_os=tops20
+ ;;
+ delta | 3300 | motorola-3300 | motorola-delta \
+ | 3300-motorola | delta-motorola)
+ cpu=m68k
+ vendor=motorola
+ ;;
+ dpx2*)
+ cpu=m68k
+ vendor=bull
+ basic_os=sysv3
+ ;;
+ encore | umax | mmax)
+ cpu=ns32k
+ vendor=encore
+ ;;
+ elxsi)
+ cpu=elxsi
+ vendor=elxsi
+ basic_os=${basic_os:-bsd}
+ ;;
+ fx2800)
+ cpu=i860
+ vendor=alliant
+ ;;
+ genix)
+ cpu=ns32k
+ vendor=ns
+ ;;
+ h3050r* | hiux*)
+ cpu=hppa1.1
+ vendor=hitachi
+ basic_os=hiuxwe2
+ ;;
+ hp3k9[0-9][0-9] | hp9[0-9][0-9])
+ cpu=hppa1.0
+ vendor=hp
+ ;;
+ hp9k2[0-9][0-9] | hp9k31[0-9])
+ cpu=m68000
+ vendor=hp
+ ;;
+ hp9k3[2-9][0-9])
+ cpu=m68k
+ vendor=hp
+ ;;
+ hp9k6[0-9][0-9] | hp6[0-9][0-9])
+ cpu=hppa1.0
+ vendor=hp
+ ;;
+ hp9k7[0-79][0-9] | hp7[0-79][0-9])
+ cpu=hppa1.1
+ vendor=hp
+ ;;
+ hp9k78[0-9] | hp78[0-9])
+ # FIXME: really hppa2.0-hp
+ cpu=hppa1.1
+ vendor=hp
+ ;;
+ hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893)
+ # FIXME: really hppa2.0-hp
+ cpu=hppa1.1
+ vendor=hp
+ ;;
+ hp9k8[0-9][13679] | hp8[0-9][13679])
+ cpu=hppa1.1
+ vendor=hp
+ ;;
+ hp9k8[0-9][0-9] | hp8[0-9][0-9])
+ cpu=hppa1.0
+ vendor=hp
+ ;;
+ i*86v32)
+ cpu=`echo "$1" | sed -e 's/86.*/86/'`
+ vendor=pc
+ basic_os=sysv32
+ ;;
+ i*86v4*)
+ cpu=`echo "$1" | sed -e 's/86.*/86/'`
+ vendor=pc
+ basic_os=sysv4
+ ;;
+ i*86v)
+ cpu=`echo "$1" | sed -e 's/86.*/86/'`
+ vendor=pc
+ basic_os=sysv
+ ;;
+ i*86sol2)
+ cpu=`echo "$1" | sed -e 's/86.*/86/'`
+ vendor=pc
+ basic_os=solaris2
+ ;;
+ j90 | j90-cray)
+ cpu=j90
+ vendor=cray
+ basic_os=${basic_os:-unicos}
+ ;;
+ iris | iris4d)
+ cpu=mips
+ vendor=sgi
+ case $basic_os in
+ irix*)
+ ;;
+ *)
+ basic_os=irix4
+ ;;
+ esac
+ ;;
+ miniframe)
+ cpu=m68000
+ vendor=convergent
+ ;;
+ *mint | mint[0-9]* | *MiNT | *MiNT[0-9]*)
+ cpu=m68k
+ vendor=atari
+ basic_os=mint
+ ;;
+ news-3600 | risc-news)
+ cpu=mips
+ vendor=sony
+ basic_os=newsos
+ ;;
+ next | m*-next)
+ cpu=m68k
+ vendor=next
+ case $basic_os in
+ openstep*)
+ ;;
+ nextstep*)
+ ;;
+ ns2*)
+ basic_os=nextstep2
+ ;;
+ *)
+ basic_os=nextstep3
+ ;;
+ esac
+ ;;
+ np1)
+ cpu=np1
+ vendor=gould
+ ;;
+ op50n-* | op60c-*)
+ cpu=hppa1.1
+ vendor=oki
+ basic_os=proelf
+ ;;
+ pa-hitachi)
+ cpu=hppa1.1
+ vendor=hitachi
+ basic_os=hiuxwe2
+ ;;
+ pbd)
+ cpu=sparc
+ vendor=tti
+ ;;
+ pbb)
+ cpu=m68k
+ vendor=tti
+ ;;
+ pc532)
+ cpu=ns32k
+ vendor=pc532
+ ;;
+ pn)
+ cpu=pn
+ vendor=gould
+ ;;
+ power)
+ cpu=power
+ vendor=ibm
+ ;;
+ ps2)
+ cpu=i386
+ vendor=ibm
+ ;;
+ rm[46]00)
+ cpu=mips
+ vendor=siemens
+ ;;
+ rtpc | rtpc-*)
+ cpu=romp
+ vendor=ibm
+ ;;
+ sde)
+ cpu=mipsisa32
+ vendor=sde
+ basic_os=${basic_os:-elf}
+ ;;
+ simso-wrs)
+ cpu=sparclite
+ vendor=wrs
+ basic_os=vxworks
+ ;;
+ tower | tower-32)
+ cpu=m68k
+ vendor=ncr
+ ;;
+ vpp*|vx|vx-*)
+ cpu=f301
+ vendor=fujitsu
+ ;;
+ w65)
+ cpu=w65
+ vendor=wdc
+ ;;
+ w89k-*)
+ cpu=hppa1.1
+ vendor=winbond
+ basic_os=proelf
+ ;;
+ none)
+ cpu=none
+ vendor=none
+ ;;
+ leon|leon[3-9])
+ cpu=sparc
+ vendor=$basic_machine
+ ;;
+ leon-*|leon[3-9]-*)
+ cpu=sparc
+ vendor=`echo "$basic_machine" | sed 's/-.*//'`
+ ;;
+
+	*-*)
+		# shellcheck disable=SC2162
+		saved_IFS=$IFS
+		IFS="-" read cpu vendor <<EOF
+$basic_machine
+EOF
+		IFS=$saved_IFS
+		;;
+	# Recognize the basic CPU types without company name.
+	*)
+		cpu=$basic_machine
+		vendor=unknown
+		;;
+esac
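+
+# Sketch of the cpu/vendor decoding above (not exhaustive):
+#   basic_machine=w89k       -> cpu=hppa1.1  vendor=winbond   (explicit alias)
+#   basic_machine=sparc-sun  -> cpu=sparc    vendor=sun       (split on the dash)
+#   basic_machine=pdp10      -> cpu=pdp10    vendor=unknown   (bare CPU name)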
+
+# Here we canonicalize certain aliases for manufacturers.
+case $vendor in
+ digital*)
+ vendor=dec
+ ;;
+ commodore*)
+ vendor=cbm
+ ;;
+ *)
+ ;;
+esac
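+# e.g. vax-digital-vms ends up with vendor=dec, and m68k-commodore-amigaos
+# with vendor=cbm (sketch based on the map above).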
+
+# Decode manufacturer-specific aliases for certain operating systems.
+
+if test x$basic_os != x
+then
+
+# First recognize some ad-hoc cases, or perhaps split kernel-os, or else just
+# set os.
+case $basic_os in
+ gnu/linux*)
+ kernel=linux
+ os=`echo "$basic_os" | sed -e 's|gnu/linux|gnu|'`
+ ;;
+ os2-emx)
+ kernel=os2
+ os=`echo "$basic_os" | sed -e 's|os2-emx|emx|'`
+ ;;
+ nto-qnx*)
+ kernel=nto
+ os=`echo "$basic_os" | sed -e 's|nto-qnx|qnx|'`
+ ;;
+	*-*)
+		# shellcheck disable=SC2162
+		saved_IFS=$IFS
+		IFS="-" read kernel os <<EOF
+$basic_os
+EOF
+		IFS=$saved_IFS
+		;;
+	*)
+		kernel=
+		os=$basic_os
+		;;
+esac
+
+fi
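+
+# Sketch of the kernel/OS split above:
+#   basic_os=gnu/linux    -> kernel=linux  os=gnu
+#   basic_os=linux-musl   -> kernel=linux  os=musl     (generic KERNEL-OS split)
+#   basic_os=solaris2.11  -> kernel=       os=solaris2.11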
+
+# As a final step for OS-related things, validate the OS-kernel combination
+# (given a valid OS), if there is a kernel.
+case $kernel-$os in
+ linux-gnu* | linux-dietlibc* | linux-android* | linux-newlib* \
+ | linux-musl* | linux-relibc* | linux-uclibc* )
+ ;;
+ uclinux-uclibc* )
+ ;;
+ -dietlibc* | -newlib* | -musl* | -relibc* | -uclibc* )
+ # These are just libc implementations, not actual OSes, and thus
+ # require a kernel.
+ echo "Invalid configuration \`$1': libc \`$os' needs explicit kernel." 1>&2
+ exit 1
+ ;;
+ kfreebsd*-gnu* | kopensolaris*-gnu*)
+ ;;
+ vxworks-simlinux | vxworks-simwindows | vxworks-spe)
+ ;;
+ nto-qnx*)
+ ;;
+ os2-emx)
+ ;;
+ *-eabi* | *-gnueabi*)
+ ;;
+ -*)
+ # Blank kernel with real OS is always fine.
+ ;;
+ *-*)
+ echo "Invalid configuration \`$1': Kernel \`$kernel' not known to work with OS \`$os'." 1>&2
+ exit 1
+ ;;
+esac
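+# For instance, linux-musl and kfreebsd*-gnu pass the check above, whereas a
+# bare libc such as "-musl" (no kernel) is rejected with an explicit error.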
+
+# Here we handle the case where we know the os, and the CPU type, but not the
+# manufacturer. We pick the logical manufacturer.
+case $vendor in
+ unknown)
+ case $cpu-$os in
+ *-riscix*)
+ vendor=acorn
+ ;;
+ *-sunos*)
+ vendor=sun
+ ;;
+ *-cnk* | *-aix*)
+ vendor=ibm
+ ;;
+ *-beos*)
+ vendor=be
+ ;;
+ *-hpux*)
+ vendor=hp
+ ;;
+ *-mpeix*)
+ vendor=hp
+ ;;
+ *-hiux*)
+ vendor=hitachi
+ ;;
+ *-unos*)
+ vendor=crds
+ ;;
+ *-dgux*)
+ vendor=dg
+ ;;
+ *-luna*)
+ vendor=omron
+ ;;
+ *-genix*)
+ vendor=ns
+ ;;
+ *-clix*)
+ vendor=intergraph
+ ;;
+ *-mvs* | *-opened*)
+ vendor=ibm
+ ;;
+ *-os400*)
+ vendor=ibm
+ ;;
+ s390-* | s390x-*)
+ vendor=ibm
+ ;;
+ *-ptx*)
+ vendor=sequent
+ ;;
+ *-tpf*)
+ vendor=ibm
+ ;;
+ *-vxsim* | *-vxworks* | *-windiss*)
+ vendor=wrs
+ ;;
+ *-aux*)
+ vendor=apple
+ ;;
+ *-hms*)
+ vendor=hitachi
+ ;;
+ *-mpw* | *-macos*)
+ vendor=apple
+ ;;
+ *-*mint | *-mint[0-9]* | *-*MiNT | *-MiNT[0-9]*)
+ vendor=atari
+ ;;
+ *-vos*)
+ vendor=stratus
+ ;;
+ esac
+ ;;
+esac
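+# e.g. cpu=sparc vendor=unknown os=sunos4.1 gets vendor=sun here, and anything
+# running aix or mvs defaults to vendor=ibm (sketch).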
+
+echo "$cpu-$vendor-${kernel:+$kernel-}$os"
+exit
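+
+# The canonical form printed above is CPU-VENDOR[-KERNEL]-OS, for example
+# x86_64-pc-linux-gnu (kernel present) or sparc-sun-solaris2 (no kernel part).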
+
+# Local variables:
+# eval: (add-hook 'before-save-hook 'time-stamp)
+# time-stamp-start: "timestamp='"
+# time-stamp-format: "%:y-%02m-%02d"
+# time-stamp-end: "'"
+# End:
diff --git a/configure b/configure
new file mode 100755
index 0000000..10bc2a1
--- /dev/null
+++ b/configure
@@ -0,0 +1,23220 @@
+#! /bin/sh
+# Guess values for system-dependent variables and create Makefiles.
+# Generated by GNU Autoconf 2.71 for knot 3.4.6.
+#
+# Report bugs to <knot-dns@labs.nic.cz>.
+#
+#
+# Copyright (C) 1992-1996, 1998-2017, 2020-2021 Free Software Foundation,
+# Inc.
+#
+#
+# This configure script is free software; the Free Software Foundation
+# gives unlimited permission to copy, distribute and modify it.
+## -------------------- ##
+## M4sh Initialization. ##
+## -------------------- ##
+
+# Be more Bourne compatible
+DUALCASE=1; export DUALCASE # for MKS sh
+as_nop=:
+if test ${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1
+then :
+ emulate sh
+ NULLCMD=:
+ # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which
+ # is contrary to our usage. Disable this feature.
+ alias -g '${1+"$@"}'='"$@"'
+ setopt NO_GLOB_SUBST
+else $as_nop
+ case `(set -o) 2>/dev/null` in #(
+ *posix*) :
+ set -o posix ;; #(
+ *) :
+ ;;
+esac
+fi
+
+
+
+# Reset variables that may have inherited troublesome values from
+# the environment.
+
+# IFS needs to be set, to space, tab, and newline, in precisely that order.
+# (If _AS_PATH_WALK were called with IFS unset, it would have the
+# side effect of setting IFS to empty, thus disabling word splitting.)
+# Quoting is to prevent editors from complaining about space-tab.
+as_nl='
+'
+export as_nl
+IFS=" "" $as_nl"
+
+PS1='$ '
+PS2='> '
+PS4='+ '
+
+# Ensure predictable behavior from utilities with locale-dependent output.
+LC_ALL=C
+export LC_ALL
+LANGUAGE=C
+export LANGUAGE
+
+# We cannot yet rely on "unset" to work, but we need these variables
+# to be unset--not just set to an empty or harmless value--now, to
+# avoid bugs in old shells (e.g. pre-3.0 UWIN ksh). This construct
+# also avoids known problems related to "unset" and subshell syntax
+# in other old shells (e.g. bash 2.01 and pdksh 5.2.14).
+for as_var in BASH_ENV ENV MAIL MAILPATH CDPATH
+do eval test \${$as_var+y} \
+ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || :
+done
+
+# Ensure that fds 0, 1, and 2 are open.
+if (exec 3>&0) 2>/dev/null; then :; else exec 0</dev/null; fi
+if (exec 3>&1) 2>/dev/null; then :; else exec 1>/dev/null; fi
+if (exec 3>&2) ; then :; else exec 2>/dev/null; fi
+
+# The user is always right.
+if ${PATH_SEPARATOR+false} :; then
+ PATH_SEPARATOR=:
+ (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && {
+ (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 ||
+ PATH_SEPARATOR=';'
+ }
+fi
+
+
+# Find who we are. Look in the path if we contain no directory separator.
+as_myself=
+case $0 in #((
+ *[\\/]* ) as_myself=$0 ;;
+ *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ test -r "$as_dir$0" && as_myself=$as_dir$0 && break
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+# We did not find ourselves, most probably we were run as `sh COMMAND'
+# in which case we are not to be found in the path.
+if test "x$as_myself" = x; then
+ as_myself=$0
+fi
+if test ! -f "$as_myself"; then
+ printf "%s\n" "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2
+ exit 1
+fi
+
+
+# Use a proper internal environment variable to ensure we don't fall
+ # into an infinite loop, continuously re-executing ourselves.
+ if test x"${_as_can_reexec}" != xno && test "x$CONFIG_SHELL" != x; then
+ _as_can_reexec=no; export _as_can_reexec;
+ # We cannot yet assume a decent shell, so we have to provide a
+# neutralization value for shells without unset; and this also
+# works around shells that cannot unset nonexistent variables.
+# Preserve -v and -x to the replacement shell.
+BASH_ENV=/dev/null
+ENV=/dev/null
+(unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV
+case $- in # ((((
+ *v*x* | *x*v* ) as_opts=-vx ;;
+ *v* ) as_opts=-v ;;
+ *x* ) as_opts=-x ;;
+ * ) as_opts= ;;
+esac
+exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"}
+# Admittedly, this is quite paranoid, since all the known shells bail
+# out after a failed `exec'.
+printf "%s\n" "$0: could not re-execute with $CONFIG_SHELL" >&2
+exit 255
+ fi
+ # We don't want this to propagate to other subprocesses.
+ { _as_can_reexec=; unset _as_can_reexec;}
+if test "x$CONFIG_SHELL" = x; then
+ as_bourne_compatible="as_nop=:
+if test \${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1
+then :
+ emulate sh
+ NULLCMD=:
+ # Pre-4.2 versions of Zsh do word splitting on \${1+\"\$@\"}, which
+ # is contrary to our usage. Disable this feature.
+ alias -g '\${1+\"\$@\"}'='\"\$@\"'
+ setopt NO_GLOB_SUBST
+else \$as_nop
+ case \`(set -o) 2>/dev/null\` in #(
+ *posix*) :
+ set -o posix ;; #(
+ *) :
+ ;;
+esac
+fi
+"
+ as_required="as_fn_return () { (exit \$1); }
+as_fn_success () { as_fn_return 0; }
+as_fn_failure () { as_fn_return 1; }
+as_fn_ret_success () { return 0; }
+as_fn_ret_failure () { return 1; }
+
+exitcode=0
+as_fn_success || { exitcode=1; echo as_fn_success failed.; }
+as_fn_failure && { exitcode=1; echo as_fn_failure succeeded.; }
+as_fn_ret_success || { exitcode=1; echo as_fn_ret_success failed.; }
+as_fn_ret_failure && { exitcode=1; echo as_fn_ret_failure succeeded.; }
+if ( set x; as_fn_ret_success y && test x = \"\$1\" )
+then :
+
+else \$as_nop
+ exitcode=1; echo positional parameters were not saved.
+fi
+test x\$exitcode = x0 || exit 1
+blah=\$(echo \$(echo blah))
+test x\"\$blah\" = xblah || exit 1
+test -x / || exit 1"
+ as_suggested=" as_lineno_1=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_1a=\$LINENO
+ as_lineno_2=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_2a=\$LINENO
+ eval 'test \"x\$as_lineno_1'\$as_run'\" != \"x\$as_lineno_2'\$as_run'\" &&
+ test \"x\`expr \$as_lineno_1'\$as_run' + 1\`\" = \"x\$as_lineno_2'\$as_run'\"' || exit 1
+test \$(( 1 + 1 )) = 2 || exit 1
+
+ test -n \"\${ZSH_VERSION+set}\${BASH_VERSION+set}\" || (
+ ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'
+ ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO
+ ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO
+ PATH=/empty FPATH=/empty; export PATH FPATH
+ test \"X\`printf %s \$ECHO\`\" = \"X\$ECHO\" \\
+ || test \"X\`print -r -- \$ECHO\`\" = \"X\$ECHO\" ) || exit 1"
+ if (eval "$as_required") 2>/dev/null
+then :
+ as_have_required=yes
+else $as_nop
+ as_have_required=no
+fi
+ if test x$as_have_required = xyes && (eval "$as_suggested") 2>/dev/null
+then :
+
+else $as_nop
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+as_found=false
+for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ as_found=:
+ case $as_dir in #(
+ /*)
+ for as_base in sh bash ksh sh5; do
+ # Try only shells that exist, to save several forks.
+ as_shell=$as_dir$as_base
+ if { test -f "$as_shell" || test -f "$as_shell.exe"; } &&
+ as_run=a "$as_shell" -c "$as_bourne_compatible""$as_required" 2>/dev/null
+then :
+ CONFIG_SHELL=$as_shell as_have_required=yes
+ if as_run=a "$as_shell" -c "$as_bourne_compatible""$as_suggested" 2>/dev/null
+then :
+ break 2
+fi
+fi
+ done;;
+ esac
+ as_found=false
+done
+IFS=$as_save_IFS
+if $as_found
+then :
+
+else $as_nop
+ if { test -f "$SHELL" || test -f "$SHELL.exe"; } &&
+ as_run=a "$SHELL" -c "$as_bourne_compatible""$as_required" 2>/dev/null
+then :
+ CONFIG_SHELL=$SHELL as_have_required=yes
+fi
+fi
+
+
+ if test "x$CONFIG_SHELL" != x
+then :
+ export CONFIG_SHELL
+ # We cannot yet assume a decent shell, so we have to provide a
+# neutralization value for shells without unset; and this also
+# works around shells that cannot unset nonexistent variables.
+# Preserve -v and -x to the replacement shell.
+BASH_ENV=/dev/null
+ENV=/dev/null
+(unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV
+case $- in # ((((
+ *v*x* | *x*v* ) as_opts=-vx ;;
+ *v* ) as_opts=-v ;;
+ *x* ) as_opts=-x ;;
+ * ) as_opts= ;;
+esac
+exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"}
+# Admittedly, this is quite paranoid, since all the known shells bail
+# out after a failed `exec'.
+printf "%s\n" "$0: could not re-execute with $CONFIG_SHELL" >&2
+exit 255
+fi
+
+ if test x$as_have_required = xno
+then :
+ printf "%s\n" "$0: This script requires a shell more modern than all"
+ printf "%s\n" "$0: the shells that I found on your system."
+ if test ${ZSH_VERSION+y} ; then
+ printf "%s\n" "$0: In particular, zsh $ZSH_VERSION has bugs and should"
+ printf "%s\n" "$0: be upgraded to zsh 4.3.4 or later."
+ else
+ printf "%s\n" "$0: Please tell bug-autoconf@gnu.org and
+$0: knot-dns@labs.nic.cz about your system, including any
+$0: error possibly output before this message. Then install
+$0: a modern shell, or manually run the script under such a
+$0: shell if you do have one."
+ fi
+ exit 1
+fi
+fi
+fi
+SHELL=${CONFIG_SHELL-/bin/sh}
+export SHELL
+# Unset more variables known to interfere with behavior of common tools.
+CLICOLOR_FORCE= GREP_OPTIONS=
+unset CLICOLOR_FORCE GREP_OPTIONS
+
+## --------------------- ##
+## M4sh Shell Functions. ##
+## --------------------- ##
+# as_fn_unset VAR
+# ---------------
+# Portably unset VAR.
+as_fn_unset ()
+{
+ { eval $1=; unset $1;}
+}
+as_unset=as_fn_unset
+
+
+# as_fn_set_status STATUS
+# -----------------------
+# Set $? to STATUS, without forking.
+as_fn_set_status ()
+{
+ return $1
+} # as_fn_set_status
+
+# as_fn_exit STATUS
+# -----------------
+# Exit the shell with STATUS, even in a "trap 0" or "set -e" context.
+as_fn_exit ()
+{
+ set +e
+ as_fn_set_status $1
+ exit $1
+} # as_fn_exit
+# as_fn_nop
+# ---------
+# Do nothing but, unlike ":", preserve the value of $?.
+as_fn_nop ()
+{
+ return $?
+}
+as_nop=as_fn_nop
+
+# as_fn_mkdir_p
+# -------------
+# Create "$as_dir" as a directory, including parents if necessary.
+as_fn_mkdir_p ()
+{
+
+ case $as_dir in #(
+ -*) as_dir=./$as_dir;;
+ esac
+ test -d "$as_dir" || eval $as_mkdir_p || {
+ as_dirs=
+ while :; do
+ case $as_dir in #(
+ *\'*) as_qdir=`printf "%s\n" "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'(
+ *) as_qdir=$as_dir;;
+ esac
+ as_dirs="'$as_qdir' $as_dirs"
+ as_dir=`$as_dirname -- "$as_dir" ||
+$as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$as_dir" : 'X\(//\)[^/]' \| \
+ X"$as_dir" : 'X\(//\)$' \| \
+ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null ||
+printf "%s\n" X"$as_dir" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+ test -d "$as_dir" && break
+ done
+ test -z "$as_dirs" || eval "mkdir $as_dirs"
+ } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir"
+
+
+} # as_fn_mkdir_p
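+# Typically invoked with the target path in the global $as_dir (sketch):
+#   as_dir=some/nested/path; as_fn_mkdir_p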
+
+# as_fn_executable_p FILE
+# -----------------------
+# Test if FILE is an executable regular file.
+as_fn_executable_p ()
+{
+ test -f "$1" && test -x "$1"
+} # as_fn_executable_p
+# as_fn_append VAR VALUE
+# ----------------------
+# Append the text in VALUE to the end of the definition contained in VAR. Take
+# advantage of any shell optimizations that allow amortized linear growth over
+# repeated appends, instead of the typical quadratic growth present in naive
+# implementations.
+if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null
+then :
+ eval 'as_fn_append ()
+ {
+ eval $1+=\$2
+ }'
+else $as_nop
+ as_fn_append ()
+ {
+ eval $1=\$$1\$2
+ }
+fi # as_fn_append
+
+# as_fn_arith ARG...
+# ------------------
+# Perform arithmetic evaluation on the ARGs, and store the result in the
+# global $as_val. Take advantage of shells that can avoid forks. The arguments
+# must be portable across $(()) and expr.
+if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null
+then :
+ eval 'as_fn_arith ()
+ {
+ as_val=$(( $* ))
+ }'
+else $as_nop
+ as_fn_arith ()
+ {
+ as_val=`expr "$@" || test $? -eq 1`
+ }
+fi # as_fn_arith
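+# Illustrative use of the two helpers above (variable names are examples only):
+#   as_fn_append saved_flags " -O2"   # appends " -O2" to $saved_flags
+#   as_fn_arith 41 + 1                # stores 42 in $as_val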
+
+# as_fn_nop
+# ---------
+# Do nothing but, unlike ":", preserve the value of $?.
+as_fn_nop ()
+{
+ return $?
+}
+as_nop=as_fn_nop
+
+# as_fn_error STATUS ERROR [LINENO LOG_FD]
+# ----------------------------------------
+# Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are
+# provided, also output the error to LOG_FD, referencing LINENO. Then exit the
+# script with STATUS, using 1 if that was 0.
+as_fn_error ()
+{
+ as_status=$1; test $as_status -eq 0 && as_status=1
+ if test "$4"; then
+ as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: $2" >&$4
+ fi
+ printf "%s\n" "$as_me: error: $2" >&2
+ as_fn_exit $as_status
+} # as_fn_error
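+# Used later in this script, e.g.:
+#   as_fn_error $? "working directory cannot be determined"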
+
+if expr a : '\(a\)' >/dev/null 2>&1 &&
+ test "X`expr 00001 : '.*\(...\)'`" = X001; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then
+ as_basename=basename
+else
+ as_basename=false
+fi
+
+if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then
+ as_dirname=dirname
+else
+ as_dirname=false
+fi
+
+as_me=`$as_basename -- "$0" ||
+$as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \
+ X"$0" : 'X\(//\)$' \| \
+ X"$0" : 'X\(/\)' \| . 2>/dev/null ||
+printf "%s\n" X/"$0" |
+ sed '/^.*\/\([^/][^/]*\)\/*$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+
+# Avoid depending upon Character Ranges.
+as_cr_letters='abcdefghijklmnopqrstuvwxyz'
+as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ'
+as_cr_Letters=$as_cr_letters$as_cr_LETTERS
+as_cr_digits='0123456789'
+as_cr_alnum=$as_cr_Letters$as_cr_digits
+
+
+ as_lineno_1=$LINENO as_lineno_1a=$LINENO
+ as_lineno_2=$LINENO as_lineno_2a=$LINENO
+ eval 'test "x$as_lineno_1'$as_run'" != "x$as_lineno_2'$as_run'" &&
+ test "x`expr $as_lineno_1'$as_run' + 1`" = "x$as_lineno_2'$as_run'"' || {
+ # Blame Lee E. McMahon (1931-1989) for sed's syntax. :-)
+ sed -n '
+ p
+ /[$]LINENO/=
+ ' <$as_myself |
+ sed '
+ s/[$]LINENO.*/&-/
+ t lineno
+ b
+ :lineno
+ N
+ :loop
+ s/[$]LINENO\([^'$as_cr_alnum'_].*\n\)\(.*\)/\2\1\2/
+ t loop
+ s/-\n.*//
+ ' >$as_me.lineno &&
+ chmod +x "$as_me.lineno" ||
+ { printf "%s\n" "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2; as_fn_exit 1; }
+
+ # If we had to re-execute with $CONFIG_SHELL, we're ensured to have
+ # already done that, so ensure we don't try to do so again and fall
+ # in an infinite loop. This has already happened in practice.
+ _as_can_reexec=no; export _as_can_reexec
+ # Don't try to exec as it changes $[0], causing all sort of problems
+ # (the dirname of $[0] is not the place where we might find the
+ # original and so on. Autoconf is especially sensitive to this).
+ . "./$as_me.lineno"
+ # Exit status is that of the last command.
+ exit
+}
+
+
+# Determine whether it's possible to make 'echo' print without a newline.
+# These variables are no longer used directly by Autoconf, but are AC_SUBSTed
+# for compatibility with existing Makefiles.
+ECHO_C= ECHO_N= ECHO_T=
+case `echo -n x` in #(((((
+-n*)
+ case `echo 'xy\c'` in
+ *c*) ECHO_T=' ';; # ECHO_T is single tab character.
+ xy) ECHO_C='\c';;
+ *) echo `echo ksh88 bug on AIX 6.1` > /dev/null
+ ECHO_T=' ';;
+ esac;;
+*)
+ ECHO_N='-n';;
+esac
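+# Old-style usage that these variables keep working (sketch):
+#   echo $ECHO_N "checking for lmdb... $ECHO_C"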
+
+# For backward compatibility with old third-party macros, we provide
+# the shell variables $as_echo and $as_echo_n. New code should use
+# AS_ECHO(["message"]) and AS_ECHO_N(["message"]), respectively.
+as_echo='printf %s\n'
+as_echo_n='printf %s'
+
+
+rm -f conf$$ conf$$.exe conf$$.file
+if test -d conf$$.dir; then
+ rm -f conf$$.dir/conf$$.file
+else
+ rm -f conf$$.dir
+ mkdir conf$$.dir 2>/dev/null
+fi
+if (echo >conf$$.file) 2>/dev/null; then
+ if ln -s conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s='ln -s'
+ # ... but there are two gotchas:
+ # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail.
+ # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable.
+ # In both cases, we have to default to `cp -pR'.
+ ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe ||
+ as_ln_s='cp -pR'
+ elif ln conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s=ln
+ else
+ as_ln_s='cp -pR'
+ fi
+else
+ as_ln_s='cp -pR'
+fi
+rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file
+rmdir conf$$.dir 2>/dev/null
+
+if mkdir -p . 2>/dev/null; then
+ as_mkdir_p='mkdir -p "$as_dir"'
+else
+ test -d ./-p && rmdir ./-p
+ as_mkdir_p=false
+fi
+
+as_test_x='test -x'
+as_executable_p=as_fn_executable_p
+
+# Sed expression to map a string onto a valid CPP name.
+as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'"
+
+# Sed expression to map a string onto a valid variable name.
+as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'"
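+# Example (sketch): "sys/socket.h" becomes "SYS_SOCKET_H" via $as_tr_cpp
+# (used for names like HAVE_SYS_SOCKET_H) and "sys_socket_h" via $as_tr_sh.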
+
+SHELL=${CONFIG_SHELL-/bin/sh}
+
+
+test -n "$DJDIR" || exec 7<&0 </dev/null
+exec 6>&1
+
+# Name of the host.
+# hostname on some systems (SVR3.2, old GNU/Linux) returns a bogus exit status,
+# so uname gets run too.
+ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q`
+
+#
+# Initializations.
+#
+ac_default_prefix=/usr/local
+ac_clean_files=
+ac_config_libobj_dir=.
+LIBOBJS=
+cross_compiling=no
+subdirs=
+MFLAGS=
+MAKEFLAGS=
+
+# Identity of this package.
+PACKAGE_NAME='knot'
+PACKAGE_TARNAME='knot'
+PACKAGE_VERSION='3.4.6'
+PACKAGE_STRING='knot 3.4.6'
+PACKAGE_BUGREPORT='knot-dns@labs.nic.cz'
+PACKAGE_URL=''
+
+ac_unique_file="src/knot"
+# Factoring default headers for most tests.
+ac_includes_default="\
+#include <stddef.h>
+#ifdef HAVE_STDIO_H
+# include <stdio.h>
+#endif
+#ifdef HAVE_STDLIB_H
+# include <stdlib.h>
+#endif
+#ifdef HAVE_STRING_H
+# include <string.h>
+#endif
+#ifdef HAVE_INTTYPES_H
+# include <inttypes.h>
+#endif
+#ifdef HAVE_STDINT_H
+# include <stdint.h>
+#endif
+#ifdef HAVE_STRINGS_H
+# include <strings.h>
+#endif
+#ifdef HAVE_SYS_TYPES_H
+# include <sys/types.h>
+#endif
+#ifdef HAVE_SYS_STAT_H
+# include <sys/stat.h>
+#endif
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif"
+
+ac_header_c_list=
+ac_subst_vars='am__EXEEXT_FALSE
+am__EXEEXT_TRUE
+LTLIBOBJS
+LIBOBJS
+OSS_FUZZ_FALSE
+OSS_FUZZ_TRUE
+FUZZER_FALSE
+FUZZER_TRUE
+fuzzer_LDFLAGS
+fuzzer_CFLAGS
+GENHTML
+LCOV
+CODE_COVERAGE_ENABLED
+CODE_COVERAGE_ENABLED_FALSE
+CODE_COVERAGE_ENABLED_TRUE
+HAVE_VISIBILITY
+CFLAG_VISIBILITY
+USE_GNUTLS_MEMSET_FALSE
+USE_GNUTLS_MEMSET_TRUE
+math_LIBS
+dlopen_LIBS
+pthread_LIBS
+cap_ng_LIBS
+cap_ng_CFLAGS
+libmnl_LIBS
+libmnl_CFLAGS
+libnghttp2_LIBS
+libnghttp2_CFLAGS
+libidn2_LIBS
+libidn2_CFLAGS
+embedded_libngtcp2_LIBS
+embedded_libngtcp2_CFLAGS
+ENABLE_QUIC_FALSE
+ENABLE_QUIC_TRUE
+EMBEDDED_LIBNGTCP2_FALSE
+EMBEDDED_LIBNGTCP2_TRUE
+libngtcp2_LIBS
+libngtcp2_CFLAGS
+libedit_LIBS
+libedit_CFLAGS
+conf_mapsize
+lmdb_LIBS
+lmdb_CFLAGS
+HAVE_MAXMINDDB_FALSE
+HAVE_MAXMINDDB_TRUE
+libmaxminddb_LIBS
+libmaxminddb_CFLAGS
+HAVE_LIBDNSTAP_FALSE
+HAVE_LIBDNSTAP_TRUE
+HAVE_DNSTAP_FALSE
+HAVE_DNSTAP_TRUE
+DNSTAP_LIBS
+DNSTAP_CFLAGS
+libprotobuf_c_LIBS
+libprotobuf_c_CFLAGS
+libfstrm_LIBS
+libfstrm_CFLAGS
+PROTOC_C
+DOC_MODULES
+STATIC_MODULES_INIT
+STATIC_MODULES_DECLARS
+SHARED_MODULE_whoami_FALSE
+SHARED_MODULE_whoami_TRUE
+STATIC_MODULE_whoami_FALSE
+STATIC_MODULE_whoami_TRUE
+SHARED_MODULE_synthrecord_FALSE
+SHARED_MODULE_synthrecord_TRUE
+STATIC_MODULE_synthrecord_FALSE
+STATIC_MODULE_synthrecord_TRUE
+SHARED_MODULE_stats_FALSE
+SHARED_MODULE_stats_TRUE
+STATIC_MODULE_stats_FALSE
+STATIC_MODULE_stats_TRUE
+SHARED_MODULE_rrl_FALSE
+SHARED_MODULE_rrl_TRUE
+STATIC_MODULE_rrl_FALSE
+STATIC_MODULE_rrl_TRUE
+SHARED_MODULE_queryacl_FALSE
+SHARED_MODULE_queryacl_TRUE
+STATIC_MODULE_queryacl_FALSE
+STATIC_MODULE_queryacl_TRUE
+SHARED_MODULE_probe_FALSE
+SHARED_MODULE_probe_TRUE
+STATIC_MODULE_probe_FALSE
+STATIC_MODULE_probe_TRUE
+SHARED_MODULE_onlinesign_FALSE
+SHARED_MODULE_onlinesign_TRUE
+STATIC_MODULE_onlinesign_FALSE
+STATIC_MODULE_onlinesign_TRUE
+SHARED_MODULE_noudp_FALSE
+SHARED_MODULE_noudp_TRUE
+STATIC_MODULE_noudp_FALSE
+STATIC_MODULE_noudp_TRUE
+SHARED_MODULE_geoip_FALSE
+SHARED_MODULE_geoip_TRUE
+STATIC_MODULE_geoip_FALSE
+STATIC_MODULE_geoip_TRUE
+SHARED_MODULE_dnstap_FALSE
+SHARED_MODULE_dnstap_TRUE
+STATIC_MODULE_dnstap_FALSE
+STATIC_MODULE_dnstap_TRUE
+SHARED_MODULE_dnsproxy_FALSE
+SHARED_MODULE_dnsproxy_TRUE
+STATIC_MODULE_dnsproxy_FALSE
+STATIC_MODULE_dnsproxy_TRUE
+SHARED_MODULE_cookies_FALSE
+SHARED_MODULE_cookies_TRUE
+STATIC_MODULE_cookies_FALSE
+STATIC_MODULE_cookies_TRUE
+SHARED_MODULE_authsignal_FALSE
+SHARED_MODULE_authsignal_TRUE
+STATIC_MODULE_authsignal_FALSE
+STATIC_MODULE_authsignal_TRUE
+liburcu_LIBS
+liburcu_CFLAGS
+malloc_LIBS
+libkqueue_LIBS
+libkqueue_CFLAGS
+libdbus_LIBS
+libdbus_CFLAGS
+systemd_LIBS
+systemd_CFLAGS
+libxdp_LIBS
+libxdp_CFLAGS
+ENABLE_XDP_FALSE
+ENABLE_XDP_TRUE
+libbpf_LIBS
+libbpf_CFLAGS
+ENABLE_PKCS11_FALSE
+ENABLE_PKCS11_TRUE
+gnutls_LIBS
+gnutls_CFLAGS
+FAST_PARSER_FALSE
+FAST_PARSER_TRUE
+HAVE_PDFLATEX_FALSE
+HAVE_PDFLATEX_TRUE
+HAVE_SPHINX_FALSE
+HAVE_SPHINX_TRUE
+HAVE_DOCS_FALSE
+HAVE_DOCS_TRUE
+PDFLATEX
+SPHINXBUILD
+HAVE_LIBUTILS_FALSE
+HAVE_LIBUTILS_TRUE
+HAVE_UTILS_FALSE
+HAVE_UTILS_TRUE
+HAVE_DAEMON_FALSE
+HAVE_DAEMON_TRUE
+module_dir
+module_instdir
+config_dir
+storage_dir
+run_dir
+pkgconfigdir
+PKG_CONFIG_LIBDIR
+PKG_CONFIG_PATH
+PKG_CONFIG
+LT_NO_UNDEFINED
+LT_SYS_LIBRARY_PATH
+OTOOL64
+OTOOL
+LIPO
+NMEDIT
+DSYMUTIL
+MANIFEST_TOOL
+RANLIB
+DLLTOOL
+OBJDUMP
+FILECMD
+LN_S
+NM
+ac_ct_DUMPBIN
+DUMPBIN
+LD
+FGREP
+EGREP
+GREP
+LIBTOOL
+ac_ct_AR
+AR
+LDFLAG_EXCLUDE_LIBS
+CPP
+RELEASE_DATE
+SED
+KNOT_VERSION_PATCH
+KNOT_VERSION_MINOR
+KNOT_VERSION_MAJOR
+libzscanner_SONAME
+libzscanner_SOVERSION
+libzscanner_VERSION_INFO
+libdnssec_SONAME
+libdnssec_SOVERSION
+libdnssec_VERSION_INFO
+libknot_SONAME
+libknot_SOVERSION
+libknot_VERSION_INFO
+host_os
+host_vendor
+host_cpu
+host
+build_os
+build_vendor
+build_cpu
+build
+am__fastdepCC_FALSE
+am__fastdepCC_TRUE
+CCDEPMODE
+am__nodep
+AMDEPBACKSLASH
+AMDEP_FALSE
+AMDEP_TRUE
+am__include
+DEPDIR
+OBJEXT
+EXEEXT
+ac_ct_CC
+CPPFLAGS
+LDFLAGS
+CFLAGS
+CC
+AM_BACKSLASH
+AM_DEFAULT_VERBOSITY
+AM_DEFAULT_V
+AM_V
+CSCOPE
+ETAGS
+CTAGS
+am__untar
+am__tar
+AMTAR
+am__leading_dot
+SET_MAKE
+AWK
+mkdir_p
+MKDIR_P
+INSTALL_STRIP_PROGRAM
+STRIP
+install_sh
+MAKEINFO
+AUTOHEADER
+AUTOMAKE
+AUTOCONF
+ACLOCAL
+VERSION
+PACKAGE
+CYGPATH_W
+am__isrc
+INSTALL_DATA
+INSTALL_SCRIPT
+INSTALL_PROGRAM
+target_alias
+host_alias
+build_alias
+LIBS
+ECHO_T
+ECHO_N
+ECHO_C
+DEFS
+mandir
+localedir
+libdir
+psdir
+pdfdir
+dvidir
+htmldir
+infodir
+docdir
+oldincludedir
+includedir
+runstatedir
+localstatedir
+sharedstatedir
+sysconfdir
+datadir
+datarootdir
+libexecdir
+sbindir
+bindir
+program_transform_name
+prefix
+exec_prefix
+PACKAGE_URL
+PACKAGE_BUGREPORT
+PACKAGE_STRING
+PACKAGE_VERSION
+PACKAGE_TARNAME
+PACKAGE_NAME
+PATH_SEPARATOR
+SHELL
+am__quote'
+ac_subst_files=''
+ac_user_opts='
+enable_option_checking
+enable_silent_rules
+enable_dependency_tracking
+enable_shared
+enable_static
+with_pic
+enable_fast_install
+with_aix_soname
+with_gnu_ld
+with_sysroot
+enable_libtool_lock
+with_pkgconfigdir
+with_rundir
+with_storage
+with_configdir
+with_moduledir
+enable_daemon
+enable_modules
+enable_utilities
+enable_documentation
+enable_fastparser
+enable_recvmmsg
+enable_xdp
+enable_reuseport
+enable_systemd
+enable_dbus
+with_socket_polling
+with_memory_allocator
+with_module_authsignal
+with_module_cookies
+with_module_dnsproxy
+with_module_dnstap
+with_module_geoip
+with_module_noudp
+with_module_onlinesign
+with_module_probe
+with_module_queryacl
+with_module_rrl
+with_module_stats
+with_module_synthrecord
+with_module_whoami
+enable_dnstap
+enable_maxminddb
+with_lmdb
+with_conf_mapsize
+enable_quic
+with_libidn
+with_libnghttp2
+enable_cap_ng
+enable_code_coverage
+with_sanitizer
+with_fuzzer
+with_oss_fuzz
+'
+ ac_precious_vars='build_alias
+host_alias
+target_alias
+CC
+CFLAGS
+LDFLAGS
+LIBS
+CPPFLAGS
+CPP
+LT_SYS_LIBRARY_PATH
+PKG_CONFIG
+PKG_CONFIG_PATH
+PKG_CONFIG_LIBDIR
+gnutls_CFLAGS
+gnutls_LIBS
+libbpf_CFLAGS
+libbpf_LIBS
+libxdp_CFLAGS
+libxdp_LIBS
+systemd_CFLAGS
+systemd_LIBS
+libdbus_CFLAGS
+libdbus_LIBS
+libkqueue_CFLAGS
+libkqueue_LIBS
+liburcu_CFLAGS
+liburcu_LIBS
+libfstrm_CFLAGS
+libfstrm_LIBS
+libprotobuf_c_CFLAGS
+libprotobuf_c_LIBS
+libmaxminddb_CFLAGS
+libmaxminddb_LIBS
+lmdb_CFLAGS
+lmdb_LIBS
+libedit_CFLAGS
+libedit_LIBS
+libngtcp2_CFLAGS
+libngtcp2_LIBS
+libidn2_CFLAGS
+libidn2_LIBS
+libnghttp2_CFLAGS
+libnghttp2_LIBS
+libmnl_CFLAGS
+libmnl_LIBS
+cap_ng_CFLAGS
+cap_ng_LIBS'
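+
+# These may be set directly on the configure command line, e.g. (illustrative):
+#   ./configure CC=clang CFLAGS="-O2 -g" PKG_CONFIG_PATH=/opt/lib/pkgconfig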
+
+
+# Initialize some variables set by options.
+ac_init_help=
+ac_init_version=false
+ac_unrecognized_opts=
+ac_unrecognized_sep=
+# The variables have the same names as the options, with
+# dashes changed to underlines.
+cache_file=/dev/null
+exec_prefix=NONE
+no_create=
+no_recursion=
+prefix=NONE
+program_prefix=NONE
+program_suffix=NONE
+program_transform_name=s,x,x,
+silent=
+site=
+srcdir=
+verbose=
+x_includes=NONE
+x_libraries=NONE
+
+# Installation directory options.
+# These are left unexpanded so users can "make install exec_prefix=/foo"
+# and all the variables that are supposed to be based on exec_prefix
+# by default will actually change.
+# Use braces instead of parens because sh, perl, etc. also accept them.
+# (The list follows the same order as the GNU Coding Standards.)
+bindir='${exec_prefix}/bin'
+sbindir='${exec_prefix}/sbin'
+libexecdir='${exec_prefix}/libexec'
+datarootdir='${prefix}/share'
+datadir='${datarootdir}'
+sysconfdir='${prefix}/etc'
+sharedstatedir='${prefix}/com'
+localstatedir='${prefix}/var'
+runstatedir='${localstatedir}/run'
+includedir='${prefix}/include'
+oldincludedir='/usr/include'
+docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
+infodir='${datarootdir}/info'
+htmldir='${docdir}'
+dvidir='${docdir}'
+pdfdir='${docdir}'
+psdir='${docdir}'
+libdir='${exec_prefix}/lib'
+localedir='${datarootdir}/locale'
+mandir='${datarootdir}/man'
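+
+# Because these stay unexpanded, e.g. --prefix=/opt/knot gives a default bindir
+# of /opt/knot/bin at make time, and "make install exec_prefix=/staging"
+# relocates everything that hangs off ${exec_prefix} (sketch).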
+
+ac_prev=
+ac_dashdash=
+for ac_option
+do
+ # If the previous option needs an argument, assign it.
+ if test -n "$ac_prev"; then
+ eval $ac_prev=\$ac_option
+ ac_prev=
+ continue
+ fi
+
+ case $ac_option in
+ *=?*) ac_optarg=`expr "X$ac_option" : '[^=]*=\(.*\)'` ;;
+ *=) ac_optarg= ;;
+ *) ac_optarg=yes ;;
+ esac
+
+ case $ac_dashdash$ac_option in
+ --)
+ ac_dashdash=yes ;;
+
+ -bindir | --bindir | --bindi | --bind | --bin | --bi)
+ ac_prev=bindir ;;
+ -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*)
+ bindir=$ac_optarg ;;
+
+ -build | --build | --buil | --bui | --bu)
+ ac_prev=build_alias ;;
+ -build=* | --build=* | --buil=* | --bui=* | --bu=*)
+ build_alias=$ac_optarg ;;
+
+ -cache-file | --cache-file | --cache-fil | --cache-fi \
+ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c)
+ ac_prev=cache_file ;;
+ -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \
+ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*)
+ cache_file=$ac_optarg ;;
+
+ --config-cache | -C)
+ cache_file=config.cache ;;
+
+ -datadir | --datadir | --datadi | --datad)
+ ac_prev=datadir ;;
+ -datadir=* | --datadir=* | --datadi=* | --datad=*)
+ datadir=$ac_optarg ;;
+
+ -datarootdir | --datarootdir | --datarootdi | --datarootd | --dataroot \
+ | --dataroo | --dataro | --datar)
+ ac_prev=datarootdir ;;
+ -datarootdir=* | --datarootdir=* | --datarootdi=* | --datarootd=* \
+ | --dataroot=* | --dataroo=* | --dataro=* | --datar=*)
+ datarootdir=$ac_optarg ;;
+
+ -disable-* | --disable-*)
+ ac_useropt=`expr "x$ac_option" : 'x-*disable-\(.*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null &&
+ as_fn_error $? "invalid feature name: \`$ac_useropt'"
+ ac_useropt_orig=$ac_useropt
+ ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'`
+ case $ac_user_opts in
+ *"
+"enable_$ac_useropt"
+"*) ;;
+ *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--disable-$ac_useropt_orig"
+ ac_unrecognized_sep=', ';;
+ esac
+ eval enable_$ac_useropt=no ;;
+
+ -docdir | --docdir | --docdi | --doc | --do)
+ ac_prev=docdir ;;
+ -docdir=* | --docdir=* | --docdi=* | --doc=* | --do=*)
+ docdir=$ac_optarg ;;
+
+ -dvidir | --dvidir | --dvidi | --dvid | --dvi | --dv)
+ ac_prev=dvidir ;;
+ -dvidir=* | --dvidir=* | --dvidi=* | --dvid=* | --dvi=* | --dv=*)
+ dvidir=$ac_optarg ;;
+
+ -enable-* | --enable-*)
+ ac_useropt=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null &&
+ as_fn_error $? "invalid feature name: \`$ac_useropt'"
+ ac_useropt_orig=$ac_useropt
+ ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'`
+ case $ac_user_opts in
+ *"
+"enable_$ac_useropt"
+"*) ;;
+ *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--enable-$ac_useropt_orig"
+ ac_unrecognized_sep=', ';;
+ esac
+ eval enable_$ac_useropt=\$ac_optarg ;;
+
+ -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \
+ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \
+ | --exec | --exe | --ex)
+ ac_prev=exec_prefix ;;
+ -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \
+ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \
+ | --exec=* | --exe=* | --ex=*)
+ exec_prefix=$ac_optarg ;;
+
+ -gas | --gas | --ga | --g)
+ # Obsolete; use --with-gas.
+ with_gas=yes ;;
+
+ -help | --help | --hel | --he | -h)
+ ac_init_help=long ;;
+ -help=r* | --help=r* | --hel=r* | --he=r* | -hr*)
+ ac_init_help=recursive ;;
+ -help=s* | --help=s* | --hel=s* | --he=s* | -hs*)
+ ac_init_help=short ;;
+
+ -host | --host | --hos | --ho)
+ ac_prev=host_alias ;;
+ -host=* | --host=* | --hos=* | --ho=*)
+ host_alias=$ac_optarg ;;
+
+ -htmldir | --htmldir | --htmldi | --htmld | --html | --htm | --ht)
+ ac_prev=htmldir ;;
+ -htmldir=* | --htmldir=* | --htmldi=* | --htmld=* | --html=* | --htm=* \
+ | --ht=*)
+ htmldir=$ac_optarg ;;
+
+ -includedir | --includedir | --includedi | --included | --include \
+ | --includ | --inclu | --incl | --inc)
+ ac_prev=includedir ;;
+ -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \
+ | --includ=* | --inclu=* | --incl=* | --inc=*)
+ includedir=$ac_optarg ;;
+
+ -infodir | --infodir | --infodi | --infod | --info | --inf)
+ ac_prev=infodir ;;
+ -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*)
+ infodir=$ac_optarg ;;
+
+ -libdir | --libdir | --libdi | --libd)
+ ac_prev=libdir ;;
+ -libdir=* | --libdir=* | --libdi=* | --libd=*)
+ libdir=$ac_optarg ;;
+
+ -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \
+ | --libexe | --libex | --libe)
+ ac_prev=libexecdir ;;
+ -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \
+ | --libexe=* | --libex=* | --libe=*)
+ libexecdir=$ac_optarg ;;
+
+ -localedir | --localedir | --localedi | --localed | --locale)
+ ac_prev=localedir ;;
+ -localedir=* | --localedir=* | --localedi=* | --localed=* | --locale=*)
+ localedir=$ac_optarg ;;
+
+ -localstatedir | --localstatedir | --localstatedi | --localstated \
+ | --localstate | --localstat | --localsta | --localst | --locals)
+ ac_prev=localstatedir ;;
+ -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \
+ | --localstate=* | --localstat=* | --localsta=* | --localst=* | --locals=*)
+ localstatedir=$ac_optarg ;;
+
+ -mandir | --mandir | --mandi | --mand | --man | --ma | --m)
+ ac_prev=mandir ;;
+ -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*)
+ mandir=$ac_optarg ;;
+
+ -nfp | --nfp | --nf)
+ # Obsolete; use --without-fp.
+ with_fp=no ;;
+
+ -no-create | --no-create | --no-creat | --no-crea | --no-cre \
+ | --no-cr | --no-c | -n)
+ no_create=yes ;;
+
+ -no-recursion | --no-recursion | --no-recursio | --no-recursi \
+ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r)
+ no_recursion=yes ;;
+
+ -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \
+ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \
+ | --oldin | --oldi | --old | --ol | --o)
+ ac_prev=oldincludedir ;;
+ -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \
+ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \
+ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*)
+ oldincludedir=$ac_optarg ;;
+
+ -prefix | --prefix | --prefi | --pref | --pre | --pr | --p)
+ ac_prev=prefix ;;
+ -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*)
+ prefix=$ac_optarg ;;
+
+ -program-prefix | --program-prefix | --program-prefi | --program-pref \
+ | --program-pre | --program-pr | --program-p)
+ ac_prev=program_prefix ;;
+ -program-prefix=* | --program-prefix=* | --program-prefi=* \
+ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*)
+ program_prefix=$ac_optarg ;;
+
+ -program-suffix | --program-suffix | --program-suffi | --program-suff \
+ | --program-suf | --program-su | --program-s)
+ ac_prev=program_suffix ;;
+ -program-suffix=* | --program-suffix=* | --program-suffi=* \
+ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*)
+ program_suffix=$ac_optarg ;;
+
+ -program-transform-name | --program-transform-name \
+ | --program-transform-nam | --program-transform-na \
+ | --program-transform-n | --program-transform- \
+ | --program-transform | --program-transfor \
+ | --program-transfo | --program-transf \
+ | --program-trans | --program-tran \
+ | --progr-tra | --program-tr | --program-t)
+ ac_prev=program_transform_name ;;
+ -program-transform-name=* | --program-transform-name=* \
+ | --program-transform-nam=* | --program-transform-na=* \
+ | --program-transform-n=* | --program-transform-=* \
+ | --program-transform=* | --program-transfor=* \
+ | --program-transfo=* | --program-transf=* \
+ | --program-trans=* | --program-tran=* \
+ | --progr-tra=* | --program-tr=* | --program-t=*)
+ program_transform_name=$ac_optarg ;;
+
+ -pdfdir | --pdfdir | --pdfdi | --pdfd | --pdf | --pd)
+ ac_prev=pdfdir ;;
+ -pdfdir=* | --pdfdir=* | --pdfdi=* | --pdfd=* | --pdf=* | --pd=*)
+ pdfdir=$ac_optarg ;;
+
+ -psdir | --psdir | --psdi | --psd | --ps)
+ ac_prev=psdir ;;
+ -psdir=* | --psdir=* | --psdi=* | --psd=* | --ps=*)
+ psdir=$ac_optarg ;;
+
+ -q | -quiet | --quiet | --quie | --qui | --qu | --q \
+ | -silent | --silent | --silen | --sile | --sil)
+ silent=yes ;;
+
+ -runstatedir | --runstatedir | --runstatedi | --runstated \
+ | --runstate | --runstat | --runsta | --runst | --runs \
+ | --run | --ru | --r)
+ ac_prev=runstatedir ;;
+ -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \
+ | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \
+ | --run=* | --ru=* | --r=*)
+ runstatedir=$ac_optarg ;;
+
+ -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
+ ac_prev=sbindir ;;
+ -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
+ | --sbi=* | --sb=*)
+ sbindir=$ac_optarg ;;
+
+ -sharedstatedir | --sharedstatedir | --sharedstatedi \
+ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \
+ | --sharedst | --shareds | --shared | --share | --shar \
+ | --sha | --sh)
+ ac_prev=sharedstatedir ;;
+ -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \
+ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \
+ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \
+ | --sha=* | --sh=*)
+ sharedstatedir=$ac_optarg ;;
+
+ -site | --site | --sit)
+ ac_prev=site ;;
+ -site=* | --site=* | --sit=*)
+ site=$ac_optarg ;;
+
+ -srcdir | --srcdir | --srcdi | --srcd | --src | --sr)
+ ac_prev=srcdir ;;
+ -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*)
+ srcdir=$ac_optarg ;;
+
+ -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \
+ | --syscon | --sysco | --sysc | --sys | --sy)
+ ac_prev=sysconfdir ;;
+ -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \
+ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*)
+ sysconfdir=$ac_optarg ;;
+
+ -target | --target | --targe | --targ | --tar | --ta | --t)
+ ac_prev=target_alias ;;
+ -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*)
+ target_alias=$ac_optarg ;;
+
+ -v | -verbose | --verbose | --verbos | --verbo | --verb)
+ verbose=yes ;;
+
+ -version | --version | --versio | --versi | --vers | -V)
+ ac_init_version=: ;;
+
+ -with-* | --with-*)
+ ac_useropt=`expr "x$ac_option" : 'x-*with-\([^=]*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null &&
+ as_fn_error $? "invalid package name: \`$ac_useropt'"
+ ac_useropt_orig=$ac_useropt
+ ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'`
+ case $ac_user_opts in
+ *"
+"with_$ac_useropt"
+"*) ;;
+ *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--with-$ac_useropt_orig"
+ ac_unrecognized_sep=', ';;
+ esac
+ eval with_$ac_useropt=\$ac_optarg ;;
+
+ -without-* | --without-*)
+ ac_useropt=`expr "x$ac_option" : 'x-*without-\(.*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null &&
+ as_fn_error $? "invalid package name: \`$ac_useropt'"
+ ac_useropt_orig=$ac_useropt
+ ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'`
+ case $ac_user_opts in
+ *"
+"with_$ac_useropt"
+"*) ;;
+ *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--without-$ac_useropt_orig"
+ ac_unrecognized_sep=', ';;
+ esac
+ eval with_$ac_useropt=no ;;
+
+ --x)
+ # Obsolete; use --with-x.
+ with_x=yes ;;
+
+ -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \
+ | --x-incl | --x-inc | --x-in | --x-i)
+ ac_prev=x_includes ;;
+ -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \
+ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*)
+ x_includes=$ac_optarg ;;
+
+ -x-libraries | --x-libraries | --x-librarie | --x-librari \
+ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l)
+ ac_prev=x_libraries ;;
+ -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \
+ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*)
+ x_libraries=$ac_optarg ;;
+
+ -*) as_fn_error $? "unrecognized option: \`$ac_option'
+Try \`$0 --help' for more information"
+ ;;
+
+ *=*)
+ ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='`
+ # Reject names that are not valid shell variable names.
+ case $ac_envvar in #(
+ '' | [0-9]* | *[!_$as_cr_alnum]* )
+ as_fn_error $? "invalid variable name: \`$ac_envvar'" ;;
+ esac
+ eval $ac_envvar=\$ac_optarg
+ export $ac_envvar ;;
+
+ *)
+ # FIXME: should be removed in autoconf 3.0.
+ printf "%s\n" "$as_me: WARNING: you should use --build, --host, --target" >&2
+ expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null &&
+ printf "%s\n" "$as_me: WARNING: invalid host type: $ac_option" >&2
+ : "${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option}"
+ ;;
+
+ esac
+done
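+
+# Sketch of how options map to shell variables in the loop above:
+#   --enable-quic=embedded   -> enable_quic=embedded
+#   --disable-documentation  -> enable_documentation=no
+#   --without-libidn         -> with_libidn=no
+#   CC=clang                 -> CC=clang exported to the environment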
+
+if test -n "$ac_prev"; then
+ ac_option=--`echo $ac_prev | sed 's/_/-/g'`
+ as_fn_error $? "missing argument to $ac_option"
+fi
+
+if test -n "$ac_unrecognized_opts"; then
+ case $enable_option_checking in
+ no) ;;
+ fatal) as_fn_error $? "unrecognized options: $ac_unrecognized_opts" ;;
+ *) printf "%s\n" "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2 ;;
+ esac
+fi
+
+# Check all directory arguments for consistency.
+for ac_var in exec_prefix prefix bindir sbindir libexecdir datarootdir \
+ datadir sysconfdir sharedstatedir localstatedir includedir \
+ oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
+ libdir localedir mandir runstatedir
+do
+ eval ac_val=\$$ac_var
+ # Remove trailing slashes.
+ case $ac_val in
+ */ )
+ ac_val=`expr "X$ac_val" : 'X\(.*[^/]\)' \| "X$ac_val" : 'X\(.*\)'`
+ eval $ac_var=\$ac_val;;
+ esac
+ # Be sure to have absolute directory names.
+ case $ac_val in
+ [\\/$]* | ?:[\\/]* ) continue;;
+ NONE | '' ) case $ac_var in *prefix ) continue;; esac;;
+ esac
+ as_fn_error $? "expected an absolute directory name for --$ac_var: $ac_val"
+done
+
+# There might be people who depend on the old broken behavior: `$host'
+# used to hold the argument of --host etc.
+# FIXME: To remove some day.
+build=$build_alias
+host=$host_alias
+target=$target_alias
+
+# FIXME: To remove some day.
+if test "x$host_alias" != x; then
+ if test "x$build_alias" = x; then
+ cross_compiling=maybe
+ elif test "x$build_alias" != "x$host_alias"; then
+ cross_compiling=yes
+ fi
+fi
+
+ac_tool_prefix=
+test -n "$host_alias" && ac_tool_prefix=$host_alias-
+
+test "$silent" = yes && exec 6>/dev/null
+
+
+ac_pwd=`pwd` && test -n "$ac_pwd" &&
+ac_ls_di=`ls -di .` &&
+ac_pwd_ls_di=`cd "$ac_pwd" && ls -di .` ||
+ as_fn_error $? "working directory cannot be determined"
+test "X$ac_ls_di" = "X$ac_pwd_ls_di" ||
+ as_fn_error $? "pwd does not report name of working directory"
+
+
+# Find the source files, if location was not specified.
+if test -z "$srcdir"; then
+ ac_srcdir_defaulted=yes
+ # Try the directory containing this script, then the parent directory.
+ ac_confdir=`$as_dirname -- "$as_myself" ||
+$as_expr X"$as_myself" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$as_myself" : 'X\(//\)[^/]' \| \
+ X"$as_myself" : 'X\(//\)$' \| \
+ X"$as_myself" : 'X\(/\)' \| . 2>/dev/null ||
+printf "%s\n" X"$as_myself" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+ srcdir=$ac_confdir
+ if test ! -r "$srcdir/$ac_unique_file"; then
+ srcdir=..
+ fi
+else
+ ac_srcdir_defaulted=no
+fi
+if test ! -r "$srcdir/$ac_unique_file"; then
+ test "$ac_srcdir_defaulted" = yes && srcdir="$ac_confdir or .."
+ as_fn_error $? "cannot find sources ($ac_unique_file) in $srcdir"
+fi
+ac_msg="sources are in $srcdir, but \`cd $srcdir' does not work"
+ac_abs_confdir=`(
+ cd "$srcdir" && test -r "./$ac_unique_file" || as_fn_error $? "$ac_msg"
+ pwd)`
+# When building in place, set srcdir=.
+if test "$ac_abs_confdir" = "$ac_pwd"; then
+ srcdir=.
+fi
+# Remove unnecessary trailing slashes from srcdir.
+# Double slashes in file names in object file debugging info
+# mess up M-x gdb in Emacs.
+case $srcdir in
+*/) srcdir=`expr "X$srcdir" : 'X\(.*[^/]\)' \| "X$srcdir" : 'X\(.*\)'`;;
+esac
+for ac_var in $ac_precious_vars; do
+ eval ac_env_${ac_var}_set=\${${ac_var}+set}
+ eval ac_env_${ac_var}_value=\$${ac_var}
+ eval ac_cv_env_${ac_var}_set=\${${ac_var}+set}
+ eval ac_cv_env_${ac_var}_value=\$${ac_var}
+done
+
+#
+# Report the --help message.
+#
+if test "$ac_init_help" = "long"; then
+ # Omit some internal or obsolete options to make the list less imposing.
+ # This message is too long to be a string in the A/UX 3.1 sh.
+ cat <<_ACEOF
+\`configure' configures knot 3.4.6 to adapt to many kinds of systems.
+
+Usage: $0 [OPTION]... [VAR=VALUE]...
+
+To assign environment variables (e.g., CC, CFLAGS...), specify them as
+VAR=VALUE. See below for descriptions of some of the useful variables.
+
+Defaults for the options are specified in brackets.
+
+Configuration:
+ -h, --help display this help and exit
+ --help=short display options specific to this package
+ --help=recursive display the short help of all the included packages
+ -V, --version display version information and exit
+ -q, --quiet, --silent do not print \`checking ...' messages
+ --cache-file=FILE cache test results in FILE [disabled]
+ -C, --config-cache alias for \`--cache-file=config.cache'
+ -n, --no-create do not create output files
+ --srcdir=DIR find the sources in DIR [configure dir or \`..']
+
+Installation directories:
+ --prefix=PREFIX install architecture-independent files in PREFIX
+ [$ac_default_prefix]
+ --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX
+ [PREFIX]
+
+By default, \`make install' will install all the files in
+\`$ac_default_prefix/bin', \`$ac_default_prefix/lib' etc. You can specify
+an installation prefix other than \`$ac_default_prefix' using \`--prefix',
+for instance \`--prefix=\$HOME'.
+
+For better control, use the options below.
+
+Fine tuning of the installation directories:
+ --bindir=DIR user executables [EPREFIX/bin]
+ --sbindir=DIR system admin executables [EPREFIX/sbin]
+ --libexecdir=DIR program executables [EPREFIX/libexec]
+ --sysconfdir=DIR read-only single-machine data [PREFIX/etc]
+ --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com]
+ --localstatedir=DIR modifiable single-machine data [PREFIX/var]
+ --runstatedir=DIR modifiable per-process data [LOCALSTATEDIR/run]
+ --libdir=DIR object code libraries [EPREFIX/lib]
+ --includedir=DIR C header files [PREFIX/include]
+ --oldincludedir=DIR C header files for non-gcc [/usr/include]
+ --datarootdir=DIR read-only arch.-independent data root [PREFIX/share]
+ --datadir=DIR read-only architecture-independent data [DATAROOTDIR]
+ --infodir=DIR info documentation [DATAROOTDIR/info]
+ --localedir=DIR locale-dependent data [DATAROOTDIR/locale]
+ --mandir=DIR man documentation [DATAROOTDIR/man]
+ --docdir=DIR documentation root [DATAROOTDIR/doc/knot]
+ --htmldir=DIR html documentation [DOCDIR]
+ --dvidir=DIR dvi documentation [DOCDIR]
+ --pdfdir=DIR pdf documentation [DOCDIR]
+ --psdir=DIR ps documentation [DOCDIR]
+_ACEOF
+
+ cat <<\_ACEOF
+
+Program names:
+ --program-prefix=PREFIX prepend PREFIX to installed program names
+ --program-suffix=SUFFIX append SUFFIX to installed program names
+ --program-transform-name=PROGRAM run sed PROGRAM on installed program names
+
+System types:
+ --build=BUILD configure for building on BUILD [guessed]
+ --host=HOST cross-compile to build programs to run on HOST [BUILD]
+_ACEOF
+fi
+
+if test -n "$ac_init_help"; then
+ case $ac_init_help in
+ short | recursive ) echo "Configuration of knot 3.4.6:";;
+ esac
+ cat <<\_ACEOF
+
+Optional Features:
+ --disable-option-checking ignore unrecognized --enable/--with options
+ --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no)
+ --enable-FEATURE[=ARG] include FEATURE [ARG=yes]
+ --enable-silent-rules less verbose build output (undo: "make V=1")
+ --disable-silent-rules verbose build output (undo: "make V=0")
+ --enable-dependency-tracking
+ do not reject slow dependency extractors
+ --disable-dependency-tracking
+ speeds up one-time build
+ --enable-shared[=PKGS] build shared libraries [default=yes]
+ --enable-static[=PKGS] build static libraries [default=yes]
+ --enable-fast-install[=PKGS]
+ optimize for fast installation [default=yes]
+ --disable-libtool-lock avoid locking (might break parallel builds)
+ --disable-daemon Don't build Knot DNS main daemon
+ --disable-modules Don't build Knot DNS modules
+ --disable-utilities Don't build Knot DNS utilities
+ --disable-documentation Don't build Knot DNS documentation
+ --disable-fastparser Disable use of fast zone parser
+ --enable-recvmmsg=auto|yes|no
+ enable recvmmsg() network API [default=auto]
+ --enable-xdp=auto|yes|no
+ enable eXpress Data Path [default=auto]
+ --enable-reuseport=auto|yes|no
+ enable SO_REUSEPORT(_LB) support [default=auto]
+ --enable-systemd=auto|yes|no
+ enable systemd integration [default=auto]
+ --enable-dbus=auto|systemd|libdbus|no
+ enable D-bus support [default=auto]
+ --enable-dnstap Enable dnstap support for kdig (requires fstrm,
+ protobuf-c)
+ --enable-maxminddb=auto|yes|no
+ enable MaxMind DB [default=auto]
+ --enable-quic=auto|yes|no|embedded
+ Support DoQ (needs libngtcp2 >= 0.17.0, gnutls >=
+ 3.7.3) [default=auto]
+ --enable-cap-ng=auto|no enable POSIX capabilities [default=auto]
+ --enable-code-coverage enable code coverage testing with gcov
+
+Optional Packages:
+ --with-PACKAGE[=ARG] use PACKAGE [ARG=yes]
+ --without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no)
+ --with-pic[=PKGS] try to use only PIC/non-PIC objects [default=use
+ both]
+ --with-aix-soname=aix|svr4|both
+ shared library versioning (aka "SONAME") variant to
+ provide on AIX, [default=aix].
+ --with-gnu-ld assume the C compiler uses GNU ld [default=no]
+ --with-sysroot[=DIR] Search for dependent libraries within DIR (or the
+ compiler's sysroot if not specified).
+ --with-pkgconfigdir pkg-config installation directory
+ ['${libdir}/pkgconfig']
+ --with-rundir=path Path to run-time variable data (pid, sockets...).
+ [default=LOCALSTATEDIR/run/knot]
+ --with-storage=path Default storage directory (slave zones, persistent
+ data). [default=LOCALSTATEDIR/lib/knot]
+ --with-configdir=path Default directory for configuration.
+ [default=SYSCONFDIR/knot]
+ --with-moduledir=path Path to auto-loaded dynamic modules. [default not
+ set]
+ --with-socket-polling=auto|poll|epoll|kqueue|libkqueue
+ Use specific socket polling method [default=auto]
+ --with-memory-allocator=auto|LIBRARY
+ Use specific memory allocator for the server (e.g.
+ jemalloc) [default=auto]
+ --with-module-authsignal=yes|shared|no
+ Build 'authsignal' module [default="yes"]
+ --with-module-cookies=yes|shared|no
+ Build 'cookies' module [default="yes"]
+ --with-module-dnsproxy=yes|shared|no
+ Build 'dnsproxy' module [default="yes"]
+ --with-module-dnstap=yes|shared|no
+ Build 'dnstap' module [default="no"]
+ --with-module-geoip=yes|shared|no
+ Build 'geoip' module [default="yes"]
+ --with-module-noudp=yes|shared|no
+ Build 'noudp' module [default="yes"]
+ --with-module-onlinesign=yes|shared|no
+ Build 'onlinesign' module [default="yes"]
+ --with-module-probe=yes|shared|no
+ Build 'probe' module [default="yes"]
+ --with-module-queryacl=yes|shared|no
+ Build 'queryacl' module [default="yes"]
+ --with-module-rrl=yes|shared|no
+ Build 'rrl' module [default="yes"]
+ --with-module-stats=yes|shared|no
+ Build 'stats' module [default="yes"]
+ --with-module-synthrecord=yes|shared|no
+ Build 'synthrecord' module [default="yes"]
+ --with-module-whoami=yes|shared|no
+ Build 'whoami' module [default="yes"]
+ --with-lmdb=DIR explicit location where to find LMDB
+
+ --with-conf-mapsize=NUM Configuration DB mapsize in MiB
+ [default=$conf_mapsize_default]
+ --with-libidn=DIR Support IDN (needs GNU libidn2)
+ --with-libnghttp2=DIR Support DoH (needs libnghttp2)
+ --with-sanitizer Compile with sanitizer [default=no]
+ --with-fuzzer Compile with libfuzzer [default=no]
+ --with-oss-fuzz Link for oss-fuzz environment [default=no]
+
+Some influential environment variables:
+ CC C compiler command
+ CFLAGS C compiler flags
+ LDFLAGS linker flags, e.g. -L<lib dir> if you have libraries in a
+ nonstandard directory <lib dir>
+ LIBS libraries to pass to the linker, e.g. -l<library>
+ CPPFLAGS (Objective) C/C++ preprocessor flags, e.g. -I<include dir> if
+ you have headers in a nonstandard directory <include dir>
+ CPP C preprocessor
+ LT_SYS_LIBRARY_PATH
+ User-defined run-time library search path.
+ PKG_CONFIG path to pkg-config utility
+ PKG_CONFIG_PATH
+ directories to add to pkg-config's search path
+ PKG_CONFIG_LIBDIR
+ path overriding pkg-config's built-in search path
+ gnutls_CFLAGS
+ C compiler flags for gnutls, overriding pkg-config
+ gnutls_LIBS linker flags for gnutls, overriding pkg-config
+ libbpf_CFLAGS
+ C compiler flags for libbpf, overriding pkg-config
+ libbpf_LIBS linker flags for libbpf, overriding pkg-config
+ libxdp_CFLAGS
+ C compiler flags for libxdp, overriding pkg-config
+ libxdp_LIBS linker flags for libxdp, overriding pkg-config
+ systemd_CFLAGS
+ C compiler flags for systemd, overriding pkg-config
+ systemd_LIBS
+ linker flags for systemd, overriding pkg-config
+ libdbus_CFLAGS
+ C compiler flags for libdbus, overriding pkg-config
+ libdbus_LIBS
+ linker flags for libdbus, overriding pkg-config
+ libkqueue_CFLAGS
+ C compiler flags for libkqueue, overriding pkg-config
+ libkqueue_LIBS
+ linker flags for libkqueue, overriding pkg-config
+ liburcu_CFLAGS
+ C compiler flags for liburcu, overriding pkg-config
+ liburcu_LIBS
+ linker flags for liburcu, overriding pkg-config
+ libfstrm_CFLAGS
+ C compiler flags for libfstrm, overriding pkg-config
+ libfstrm_LIBS
+ linker flags for libfstrm, overriding pkg-config
+ libprotobuf_c_CFLAGS
+ C compiler flags for libprotobuf_c, overriding pkg-config
+ libprotobuf_c_LIBS
+ linker flags for libprotobuf_c, overriding pkg-config
+ libmaxminddb_CFLAGS
+ C compiler flags for libmaxminddb, overriding pkg-config
+ libmaxminddb_LIBS
+ linker flags for libmaxminddb, overriding pkg-config
+ lmdb_CFLAGS C compiler flags for lmdb, overriding pkg-config
+ lmdb_LIBS linker flags for lmdb, overriding pkg-config
+ libedit_CFLAGS
+ C compiler flags for libedit, overriding pkg-config
+ libedit_LIBS
+ linker flags for libedit, overriding pkg-config
+ libngtcp2_CFLAGS
+ C compiler flags for libngtcp2, overriding pkg-config
+ libngtcp2_LIBS
+ linker flags for libngtcp2, overriding pkg-config
+ libidn2_CFLAGS
+ C compiler flags for libidn2, overriding pkg-config
+ libidn2_LIBS
+ linker flags for libidn2, overriding pkg-config
+ libnghttp2_CFLAGS
+ C compiler flags for libnghttp2, overriding pkg-config
+ libnghttp2_LIBS
+ linker flags for libnghttp2, overriding pkg-config
+ libmnl_CFLAGS
+ C compiler flags for libmnl, overriding pkg-config
+ libmnl_LIBS linker flags for libmnl, overriding pkg-config
+ cap_ng_CFLAGS
+ C compiler flags for cap_ng, overriding pkg-config
+ cap_ng_LIBS linker flags for cap_ng, overriding pkg-config
+
+Use these variables to override the choices made by `configure' or to help
+it to find libraries and programs with nonstandard names/locations.
+
+Report bugs to <knot-dns@labs.nic.cz>.
+_ACEOF
+ac_status=$?
+fi
+
+if test "$ac_init_help" = "recursive"; then
+ # If there are subdirs, report their specific --help.
+ for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue
+ test -d "$ac_dir" ||
+ { cd "$srcdir" && ac_pwd=`pwd` && srcdir=. && test -d "$ac_dir"; } ||
+ continue
+ ac_builddir=.
+
+case "$ac_dir" in
+.) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;;
+*)
+ ac_dir_suffix=/`printf "%s\n" "$ac_dir" | sed 's|^\.[\\/]||'`
+ # A ".." for each directory in $ac_dir_suffix.
+ ac_top_builddir_sub=`printf "%s\n" "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'`
+ case $ac_top_builddir_sub in
+ "") ac_top_builddir_sub=. ac_top_build_prefix= ;;
+ *) ac_top_build_prefix=$ac_top_builddir_sub/ ;;
+ esac ;;
+esac
+ac_abs_top_builddir=$ac_pwd
+ac_abs_builddir=$ac_pwd$ac_dir_suffix
+# for backward compatibility:
+ac_top_builddir=$ac_top_build_prefix
+
+case $srcdir in
+ .) # We are building in place.
+ ac_srcdir=.
+ ac_top_srcdir=$ac_top_builddir_sub
+ ac_abs_top_srcdir=$ac_pwd ;;
+ [\\/]* | ?:[\\/]* ) # Absolute name.
+ ac_srcdir=$srcdir$ac_dir_suffix;
+ ac_top_srcdir=$srcdir
+ ac_abs_top_srcdir=$srcdir ;;
+ *) # Relative name.
+ ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix
+ ac_top_srcdir=$ac_top_build_prefix$srcdir
+ ac_abs_top_srcdir=$ac_pwd/$srcdir ;;
+esac
+ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix
+
+ cd "$ac_dir" || { ac_status=$?; continue; }
+ # Check for configure.gnu first; this name is used for a wrapper for
+ # Metaconfig's "Configure" on case-insensitive file systems.
+ if test -f "$ac_srcdir/configure.gnu"; then
+ echo &&
+ $SHELL "$ac_srcdir/configure.gnu" --help=recursive
+ elif test -f "$ac_srcdir/configure"; then
+ echo &&
+ $SHELL "$ac_srcdir/configure" --help=recursive
+ else
+ printf "%s\n" "$as_me: WARNING: no configuration information is in $ac_dir" >&2
+ fi || ac_status=$?
+ cd "$ac_pwd" || { ac_status=$?; break; }
+ done
+fi
+
+test -n "$ac_init_help" && exit $ac_status
+if $ac_init_version; then
+ cat <<\_ACEOF
+knot configure 3.4.6
+generated by GNU Autoconf 2.71
+
+Copyright (C) 2021 Free Software Foundation, Inc.
+This configure script is free software; the Free Software Foundation
+gives unlimited permission to copy, distribute and modify it.
+_ACEOF
+ exit
+fi
+
+## ------------------------ ##
+## Autoconf initialization. ##
+## ------------------------ ##
+
+# ac_fn_c_try_compile LINENO
+# --------------------------
+# Try to compile conftest.$ac_ext, and return whether this succeeded.
+ac_fn_c_try_compile ()
+{
+ as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
+ rm -f conftest.$ac_objext conftest.beam
+ if { { ac_try="$ac_compile"
+case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_compile") 2>conftest.err
+ ac_status=$?
+ if test -s conftest.err; then
+ grep -v '^ *+' conftest.err >conftest.er1
+ cat conftest.er1 >&5
+ mv -f conftest.er1 conftest.err
+ fi
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; } && {
+ test -z "$ac_c_werror_flag" ||
+ test ! -s conftest.err
+ } && test -s conftest.$ac_objext
+then :
+ ac_retval=0
+else $as_nop
+ printf "%s\n" "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ ac_retval=1
+fi
+ eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno
+ as_fn_set_status $ac_retval
+
+} # ac_fn_c_try_compile
+
+# ac_fn_c_check_header_compile LINENO HEADER VAR INCLUDES
+# -------------------------------------------------------
+# Tests whether HEADER exists and can be compiled using the include files in
+# INCLUDES, setting the cache variable VAR accordingly.
+ac_fn_c_check_header_compile ()
+{
+ as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $2" >&5
+printf %s "checking for $2... " >&6; }
+if eval test \${$3+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+$4
+#include <$2>
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ eval "$3=yes"
+else $as_nop
+ eval "$3=no"
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+eval ac_res=\$$3
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5
+printf "%s\n" "$ac_res" >&6; }
+ eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno
+
+} # ac_fn_c_check_header_compile
+
+# ac_fn_c_try_cpp LINENO
+# ----------------------
+# Try to preprocess conftest.$ac_ext, and return whether this succeeded.
+ac_fn_c_try_cpp ()
+{
+ as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
+ if { { ac_try="$ac_cpp conftest.$ac_ext"
+case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_cpp conftest.$ac_ext") 2>conftest.err
+ ac_status=$?
+ if test -s conftest.err; then
+ grep -v '^ *+' conftest.err >conftest.er1
+ cat conftest.er1 >&5
+ mv -f conftest.er1 conftest.err
+ fi
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; } > conftest.i && {
+ test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" ||
+ test ! -s conftest.err
+ }
+then :
+ ac_retval=0
+else $as_nop
+ printf "%s\n" "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ ac_retval=1
+fi
+ eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno
+ as_fn_set_status $ac_retval
+
+} # ac_fn_c_try_cpp
+
+# ac_fn_c_try_link LINENO
+# -----------------------
+# Try to link conftest.$ac_ext, and return whether this succeeded.
+ac_fn_c_try_link ()
+{
+ as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
+ rm -f conftest.$ac_objext conftest.beam conftest$ac_exeext
+ if { { ac_try="$ac_link"
+case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_link") 2>conftest.err
+ ac_status=$?
+ if test -s conftest.err; then
+ grep -v '^ *+' conftest.err >conftest.er1
+ cat conftest.er1 >&5
+ mv -f conftest.er1 conftest.err
+ fi
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; } && {
+ test -z "$ac_c_werror_flag" ||
+ test ! -s conftest.err
+ } && test -s conftest$ac_exeext && {
+ test "$cross_compiling" = yes ||
+ test -x conftest$ac_exeext
+ }
+then :
+ ac_retval=0
+else $as_nop
+ printf "%s\n" "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ ac_retval=1
+fi
+ # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information
+ # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would
+ # interfere with the next link command; also delete a directory that is
+ # left behind by Apple's compiler. We do this before executing the actions.
+ rm -rf conftest.dSYM conftest_ipa8_conftest.oo
+ eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno
+ as_fn_set_status $ac_retval
+
+} # ac_fn_c_try_link
+
+# ac_fn_c_try_run LINENO
+# ----------------------
+# Try to run conftest.$ac_ext, and return whether this succeeded. Assumes that
+# executables *can* be run.
+ac_fn_c_try_run ()
+{
+ as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
+ if { { ac_try="$ac_link"
+case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_link") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; } && { ac_try='./conftest$ac_exeext'
+ { { case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_try") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; }
+then :
+ ac_retval=0
+else $as_nop
+ printf "%s\n" "$as_me: program exited with status $ac_status" >&5
+ printf "%s\n" "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ ac_retval=$ac_status
+fi
+ rm -rf conftest.dSYM conftest_ipa8_conftest.oo
+ eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno
+ as_fn_set_status $ac_retval
+
+} # ac_fn_c_try_run
+
+# ac_fn_c_check_func LINENO FUNC VAR
+# ----------------------------------
+# Tests whether FUNC exists, setting the cache variable VAR accordingly
+ac_fn_c_check_func ()
+{
+ as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $2" >&5
+printf %s "checking for $2... " >&6; }
+if eval test \${$3+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+/* Define $2 to an innocuous variant, in case <limits.h> declares $2.
+ For example, HP-UX 11i <limits.h> declares gettimeofday. */
+#define $2 innocuous_$2
+
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char $2 (); below. */
+
+#include <limits.h>
+#undef $2
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+#ifdef __cplusplus
+extern "C"
+#endif
+char $2 ();
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined __stub_$2 || defined __stub___$2
+choke me
+#endif
+
+int
+main (void)
+{
+return $2 ();
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ eval "$3=yes"
+else $as_nop
+ eval "$3=no"
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+fi
+eval ac_res=\$$3
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5
+printf "%s\n" "$ac_res" >&6; }
+ eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno
+
+} # ac_fn_c_check_func
+
+# ac_fn_check_decl LINENO SYMBOL VAR INCLUDES EXTRA-OPTIONS FLAG-VAR
+# ------------------------------------------------------------------
+# Tests whether SYMBOL is declared in INCLUDES, setting cache variable VAR
+# accordingly. Pass EXTRA-OPTIONS to the compiler, using FLAG-VAR.
+ac_fn_check_decl ()
+{
+ as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
+ as_decl_name=`echo $2|sed 's/ *(.*//'`
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $as_decl_name is declared" >&5
+printf %s "checking whether $as_decl_name is declared... " >&6; }
+if eval test \${$3+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
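+ # Rewrite a symbol given as "name (type1, type2)" into
+ # "name ((type1) 0, (type2) 0)" so that, under C++, the declaration check
+ # can select a specific overload; the expression is compiled, never run.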
+ as_decl_use=`echo $2|sed -e 's/(/((/' -e 's/)/) 0&/' -e 's/,/) 0& (/g'`
+ eval ac_save_FLAGS=\$$6
+ as_fn_append $6 " $5"
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+$4
+int
+main (void)
+{
+#ifndef $as_decl_name
+#ifdef __cplusplus
+ (void) $as_decl_use;
+#else
+ (void) $as_decl_name;
+#endif
+#endif
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ eval "$3=yes"
+else $as_nop
+ eval "$3=no"
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ eval $6=\$ac_save_FLAGS
+
+fi
+eval ac_res=\$$3
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5
+printf "%s\n" "$ac_res" >&6; }
+ eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno
+
+} # ac_fn_check_decl
+ac_configure_args_raw=
+for ac_arg
+do
+ case $ac_arg in
+ *\'*)
+ ac_arg=`printf "%s\n" "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;;
+ esac
+ as_fn_append ac_configure_args_raw " '$ac_arg'"
+done
+
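+# For readability of the command line recorded in config.log, strip the
+# quoting again from any argument that contains no shell metacharacters.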
+case $ac_configure_args_raw in
+ *$as_nl*)
+ ac_safe_unquote= ;;
+ *)
+ ac_unsafe_z='|&;<>()$`\\"*?[ '' ' # This string ends in space, tab.
+ ac_unsafe_a="$ac_unsafe_z#~"
+ ac_safe_unquote="s/ '\\([^$ac_unsafe_a][^$ac_unsafe_z]*\\)'/ \\1/g"
+ ac_configure_args_raw=` printf "%s\n" "$ac_configure_args_raw" | sed "$ac_safe_unquote"`;;
+esac
+
+cat >config.log <<_ACEOF
+This file contains any messages produced by compilers while
+running configure, to aid debugging if configure makes a mistake.
+
+It was created by knot $as_me 3.4.6, which was
+generated by GNU Autoconf 2.71. Invocation command line was
+
+ $ $0$ac_configure_args_raw
+
+_ACEOF
+exec 5>>config.log
+{
+cat <<_ASUNAME
+## --------- ##
+## Platform. ##
+## --------- ##
+
+hostname = `(hostname || uname -n) 2>/dev/null | sed 1q`
+uname -m = `(uname -m) 2>/dev/null || echo unknown`
+uname -r = `(uname -r) 2>/dev/null || echo unknown`
+uname -s = `(uname -s) 2>/dev/null || echo unknown`
+uname -v = `(uname -v) 2>/dev/null || echo unknown`
+
+/usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown`
+/bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown`
+
+/bin/arch = `(/bin/arch) 2>/dev/null || echo unknown`
+/usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown`
+/usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown`
+/usr/bin/hostinfo = `(/usr/bin/hostinfo) 2>/dev/null || echo unknown`
+/bin/machine = `(/bin/machine) 2>/dev/null || echo unknown`
+/usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown`
+/bin/universe = `(/bin/universe) 2>/dev/null || echo unknown`
+
+_ASUNAME
+
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ printf "%s\n" "PATH: $as_dir"
+ done
+IFS=$as_save_IFS
+
+} >&5
+
+cat >&5 <<_ACEOF
+
+
+## ----------- ##
+## Core tests. ##
+## ----------- ##
+
+_ACEOF
+
+
+# Keep a trace of the command line.
+# Strip out --no-create and --no-recursion so they do not pile up.
+# Strip out --silent because we don't want to record it for future runs.
+# Also quote any args containing shell meta-characters.
+# Make two passes to allow for proper duplicate-argument suppression.
+ac_configure_args=
+ac_configure_args0=
+ac_configure_args1=
+ac_must_keep_next=false
+for ac_pass in 1 2
+do
+ for ac_arg
+ do
+ case $ac_arg in
+ -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;;
+ -q | -quiet | --quiet | --quie | --qui | --qu | --q \
+ | -silent | --silent | --silen | --sile | --sil)
+ continue ;;
+ *\'*)
+ ac_arg=`printf "%s\n" "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;;
+ esac
+ case $ac_pass in
+ 1) as_fn_append ac_configure_args0 " '$ac_arg'" ;;
+ 2)
+ as_fn_append ac_configure_args1 " '$ac_arg'"
+ if test $ac_must_keep_next = true; then
+ ac_must_keep_next=false # Got value, back to normal.
+ else
+ case $ac_arg in
+ *=* | --config-cache | -C | -disable-* | --disable-* \
+ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \
+ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \
+ | -with-* | --with-* | -without-* | --without-* | --x)
+ case "$ac_configure_args0 " in
+ "$ac_configure_args1"*" '$ac_arg' "* ) continue ;;
+ esac
+ ;;
+ -* ) ac_must_keep_next=true ;;
+ esac
+ fi
+ as_fn_append ac_configure_args " '$ac_arg'"
+ ;;
+ esac
+ done
+done
+{ ac_configure_args0=; unset ac_configure_args0;}
+{ ac_configure_args1=; unset ac_configure_args1;}
+
+# When interrupted or exit'd, cleanup temporary files, and complete
+# config.log. We remove comments because anyway the quotes in there
+# would cause problems or look ugly.
+# WARNING: Use '\'' to represent an apostrophe within the trap.
+# WARNING: Do not start the trap code with a newline, due to a FreeBSD 4.0 bug.
+trap 'exit_status=$?
+ # Sanitize IFS.
+ IFS=" "" $as_nl"
+ # Save into config.log some information that might help in debugging.
+ {
+ echo
+
+ printf "%s\n" "## ---------------- ##
+## Cache variables. ##
+## ---------------- ##"
+ echo
+ # The following way of writing the cache mishandles newlines in values,
+(
+ for ac_var in `(set) 2>&1 | sed -n '\''s/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'\''`; do
+ eval ac_val=\$$ac_var
+ case $ac_val in #(
+ *${as_nl}*)
+ case $ac_var in #(
+ *_cv_*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5
+printf "%s\n" "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;;
+ esac
+ case $ac_var in #(
+ _ | IFS | as_nl) ;; #(
+ BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #(
+ *) { eval $ac_var=; unset $ac_var;} ;;
+ esac ;;
+ esac
+ done
+ (set) 2>&1 |
+ case $as_nl`(ac_space='\'' '\''; set) 2>&1` in #(
+ *${as_nl}ac_space=\ *)
+ sed -n \
+ "s/'\''/'\''\\\\'\'''\''/g;
+ s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\''\\2'\''/p"
+ ;; #(
+ *)
+ sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p"
+ ;;
+ esac |
+ sort
+)
+ echo
+
+ printf "%s\n" "## ----------------- ##
+## Output variables. ##
+## ----------------- ##"
+ echo
+ for ac_var in $ac_subst_vars
+ do
+ eval ac_val=\$$ac_var
+ case $ac_val in
+ *\'\''*) ac_val=`printf "%s\n" "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;;
+ esac
+ printf "%s\n" "$ac_var='\''$ac_val'\''"
+ done | sort
+ echo
+
+ if test -n "$ac_subst_files"; then
+ printf "%s\n" "## ------------------- ##
+## File substitutions. ##
+## ------------------- ##"
+ echo
+ for ac_var in $ac_subst_files
+ do
+ eval ac_val=\$$ac_var
+ case $ac_val in
+ *\'\''*) ac_val=`printf "%s\n" "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;;
+ esac
+ printf "%s\n" "$ac_var='\''$ac_val'\''"
+ done | sort
+ echo
+ fi
+
+ if test -s confdefs.h; then
+ printf "%s\n" "## ----------- ##
+## confdefs.h. ##
+## ----------- ##"
+ echo
+ cat confdefs.h
+ echo
+ fi
+ test "$ac_signal" != 0 &&
+ printf "%s\n" "$as_me: caught signal $ac_signal"
+ printf "%s\n" "$as_me: exit $exit_status"
+ } >&5
+ rm -f core *.core core.conftest.* &&
+ rm -f -r conftest* confdefs* conf$$* $ac_clean_files &&
+ exit $exit_status
+' 0
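+# Also run the cleanup above on HUP (1), INT (2), PIPE (13) and TERM (15).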
+for ac_signal in 1 2 13 15; do
+ trap 'ac_signal='$ac_signal'; as_fn_exit 1' $ac_signal
+done
+ac_signal=0
+
+# confdefs.h avoids OS command line length limits that DEFS can exceed.
+rm -f -r conftest* confdefs.h
+
+printf "%s\n" "/* confdefs.h */" > confdefs.h
+
+# Predefined preprocessor variables.
+
+printf "%s\n" "#define PACKAGE_NAME \"$PACKAGE_NAME\"" >>confdefs.h
+
+printf "%s\n" "#define PACKAGE_TARNAME \"$PACKAGE_TARNAME\"" >>confdefs.h
+
+printf "%s\n" "#define PACKAGE_VERSION \"$PACKAGE_VERSION\"" >>confdefs.h
+
+printf "%s\n" "#define PACKAGE_STRING \"$PACKAGE_STRING\"" >>confdefs.h
+
+printf "%s\n" "#define PACKAGE_BUGREPORT \"$PACKAGE_BUGREPORT\"" >>confdefs.h
+
+printf "%s\n" "#define PACKAGE_URL \"$PACKAGE_URL\"" >>confdefs.h
+
+
+# Let the site file select an alternate cache file if it wants to.
+# Prefer an explicitly selected file to automatically selected ones.
+if test -n "$CONFIG_SITE"; then
+ ac_site_files="$CONFIG_SITE"
+elif test "x$prefix" != xNONE; then
+ ac_site_files="$prefix/share/config.site $prefix/etc/config.site"
+else
+ ac_site_files="$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site"
+fi
+
+for ac_site_file in $ac_site_files
+do
+ case $ac_site_file in #(
+ */*) :
+ ;; #(
+ *) :
+ ac_site_file=./$ac_site_file ;;
+esac
+ if test -f "$ac_site_file" && test -r "$ac_site_file"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: loading site script $ac_site_file" >&5
+printf "%s\n" "$as_me: loading site script $ac_site_file" >&6;}
+ sed 's/^/| /' "$ac_site_file" >&5
+ . "$ac_site_file" \
+ || { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "failed to load site script $ac_site_file
+See \`config.log' for more details" "$LINENO" 5; }
+ fi
+done
+
+if test -r "$cache_file"; then
+ # Some versions of bash will fail to source /dev/null (special files
+ # actually), so we avoid doing that. DJGPP emulates it as a regular file.
+ if test /dev/null != "$cache_file" && test -f "$cache_file"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: loading cache $cache_file" >&5
+printf "%s\n" "$as_me: loading cache $cache_file" >&6;}
+ case $cache_file in
+ [\\/]* | ?:[\\/]* ) . "$cache_file";;
+ *) . "./$cache_file";;
+ esac
+ fi
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: creating cache $cache_file" >&5
+printf "%s\n" "$as_me: creating cache $cache_file" >&6;}
+ >$cache_file
+fi
+
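+# Queue the default header checks as triples of header name, shell variable
+# stem and preprocessor macro; they are all tested later in a single pass
+# once the C compiler is set up.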
+as_fn_append ac_header_c_list " stdio.h stdio_h HAVE_STDIO_H"
+# Test code for whether the C compiler supports C89 (global declarations)
+ac_c_conftest_c89_globals='
+/* Does the compiler advertise C89 conformance?
+ Do not test the value of __STDC__, because some compilers set it to 0
+ while being otherwise adequately conformant. */
+#if !defined __STDC__
+# error "Compiler does not advertise C89 conformance"
+#endif
+
+#include <stddef.h>
+#include <stdarg.h>
+struct stat;
+/* Most of the following tests are stolen from RCS 5.7 src/conf.sh. */
+struct buf { int x; };
+struct buf * (*rcsopen) (struct buf *, struct stat *, int);
+static char *e (p, i)
+ char **p;
+ int i;
+{
+ return p[i];
+}
+static char *f (char * (*g) (char **, int), char **p, ...)
+{
+ char *s;
+ va_list v;
+ va_start (v,p);
+ s = g (p, va_arg (v,int));
+ va_end (v);
+ return s;
+}
+
+/* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has
+ function prototypes and stuff, but not \xHH hex character constants.
+ These do not provoke an error unfortunately, instead are silently treated
+ as an "x". The following induces an error, until -std is added to get
+ proper ANSI mode. Curiously \x00 != x always comes out true, for an
+ array size at least. It is necessary to write \x00 == 0 to get something
+ that is true only with -std. */
+int osf4_cc_array ['\''\x00'\'' == 0 ? 1 : -1];
+
+/* IBM C 6 for AIX is almost-ANSI by default, but it replaces macro parameters
+ inside strings and character constants. */
+#define FOO(x) '\''x'\''
+int xlc6_cc_array[FOO(a) == '\''x'\'' ? 1 : -1];
+
+int test (int i, double x);
+struct s1 {int (*f) (int a);};
+struct s2 {int (*f) (double a);};
+int pairnames (int, char **, int *(*)(struct buf *, struct stat *, int),
+ int, int);'
+
+# Test code for whether the C compiler supports C89 (body of main).
+ac_c_conftest_c89_main='
+ok |= (argc == 0 || f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1]);
+'
+
+# Test code for whether the C compiler supports C99 (global declarations)
+ac_c_conftest_c99_globals='
+// Does the compiler advertise C99 conformance?
+#if !defined __STDC_VERSION__ || __STDC_VERSION__ < 199901L
+# error "Compiler does not advertise C99 conformance"
+#endif
+
+#include <stdbool.h>
+extern int puts (const char *);
+extern int printf (const char *, ...);
+extern int dprintf (int, const char *, ...);
+extern void *malloc (size_t);
+
+// Check varargs macros. These examples are taken from C99 6.10.3.5.
+// dprintf is used instead of fprintf to avoid needing to declare
+// FILE and stderr.
+#define debug(...) dprintf (2, __VA_ARGS__)
+#define showlist(...) puts (#__VA_ARGS__)
+#define report(test,...) ((test) ? puts (#test) : printf (__VA_ARGS__))
+static void
+test_varargs_macros (void)
+{
+ int x = 1234;
+ int y = 5678;
+ debug ("Flag");
+ debug ("X = %d\n", x);
+ showlist (The first, second, and third items.);
+ report (x>y, "x is %d but y is %d", x, y);
+}
+
+// Check long long types.
+#define BIG64 18446744073709551615ull
+#define BIG32 4294967295ul
+#define BIG_OK (BIG64 / BIG32 == 4294967297ull && BIG64 % BIG32 == 0)
+#if !BIG_OK
+ #error "your preprocessor is broken"
+#endif
+#if BIG_OK
+#else
+ #error "your preprocessor is broken"
+#endif
+static long long int bignum = -9223372036854775807LL;
+static unsigned long long int ubignum = BIG64;
+
+struct incomplete_array
+{
+ int datasize;
+ double data[];
+};
+
+struct named_init {
+ int number;
+ const wchar_t *name;
+ double average;
+};
+
+typedef const char *ccp;
+
+static inline int
+test_restrict (ccp restrict text)
+{
+ // See if C++-style comments work.
+ // Iterate through items via the restricted pointer.
+ // Also check for declarations in for loops.
+ for (unsigned int i = 0; *(text+i) != '\''\0'\''; ++i)
+ continue;
+ return 0;
+}
+
+// Check varargs and va_copy.
+static bool
+test_varargs (const char *format, ...)
+{
+ va_list args;
+ va_start (args, format);
+ va_list args_copy;
+ va_copy (args_copy, args);
+
+ const char *str = "";
+ int number = 0;
+ float fnumber = 0;
+
+ while (*format)
+ {
+ switch (*format++)
+ {
+ case '\''s'\'': // string
+ str = va_arg (args_copy, const char *);
+ break;
+ case '\''d'\'': // int
+ number = va_arg (args_copy, int);
+ break;
+ case '\''f'\'': // float
+ fnumber = va_arg (args_copy, double);
+ break;
+ default:
+ break;
+ }
+ }
+ va_end (args_copy);
+ va_end (args);
+
+ return *str && number && fnumber;
+}
+'
+
+# Test code for whether the C compiler supports C99 (body of main).
+ac_c_conftest_c99_main='
+ // Check bool.
+ _Bool success = false;
+ success |= (argc != 0);
+
+ // Check restrict.
+ if (test_restrict ("String literal") == 0)
+ success = true;
+ char *restrict newvar = "Another string";
+
+ // Check varargs.
+ success &= test_varargs ("s, d'\'' f .", "string", 65, 34.234);
+ test_varargs_macros ();
+
+ // Check flexible array members.
+ struct incomplete_array *ia =
+ malloc (sizeof (struct incomplete_array) + (sizeof (double) * 10));
+ ia->datasize = 10;
+ for (int i = 0; i < ia->datasize; ++i)
+ ia->data[i] = i * 1.234;
+
+ // Check named initializers.
+ struct named_init ni = {
+ .number = 34,
+ .name = L"Test wide string",
+ .average = 543.34343,
+ };
+
+ ni.number = 58;
+
+ int dynamic_array[ni.number];
+ dynamic_array[0] = argv[0][0];
+ dynamic_array[ni.number - 1] = 543;
+
+ // work around unused variable warnings
+ ok |= (!success || bignum == 0LL || ubignum == 0uLL || newvar[0] == '\''x'\''
+ || dynamic_array[ni.number - 1] != 543);
+'
+
+# Test code for whether the C compiler supports C11 (global declarations)
+ac_c_conftest_c11_globals='
+// Does the compiler advertise C11 conformance?
+#if !defined __STDC_VERSION__ || __STDC_VERSION__ < 201112L
+# error "Compiler does not advertise C11 conformance"
+#endif
+
+// Check _Alignas.
+char _Alignas (double) aligned_as_double;
+char _Alignas (0) no_special_alignment;
+extern char aligned_as_int;
+char _Alignas (0) _Alignas (int) aligned_as_int;
+
+// Check _Alignof.
+enum
+{
+ int_alignment = _Alignof (int),
+ int_array_alignment = _Alignof (int[100]),
+ char_alignment = _Alignof (char)
+};
+_Static_assert (0 < -_Alignof (int), "_Alignof is signed");
+
+// Check _Noreturn.
+int _Noreturn does_not_return (void) { for (;;) continue; }
+
+// Check _Static_assert.
+struct test_static_assert
+{
+ int x;
+ _Static_assert (sizeof (int) <= sizeof (long int),
+ "_Static_assert does not work in struct");
+ long int y;
+};
+
+// Check UTF-8 literals.
+#define u8 syntax error!
+char const utf8_literal[] = u8"happens to be ASCII" "another string";
+
+// Check duplicate typedefs.
+typedef long *long_ptr;
+typedef long int *long_ptr;
+typedef long_ptr long_ptr;
+
+// Anonymous structures and unions -- taken from C11 6.7.2.1 Example 1.
+struct anonymous
+{
+ union {
+ struct { int i; int j; };
+ struct { int k; long int l; } w;
+ };
+ int m;
+} v1;
+'
+
+# Test code for whether the C compiler supports C11 (body of main).
+ac_c_conftest_c11_main='
+ _Static_assert ((offsetof (struct anonymous, i)
+ == offsetof (struct anonymous, w.k)),
+ "Anonymous union alignment botch");
+ v1.i = 2;
+ v1.w.k = 5;
+ ok |= v1.i != 5;
+'
+
+# Test code for whether the C compiler supports C11 (complete).
+ac_c_conftest_c11_program="${ac_c_conftest_c89_globals}
+${ac_c_conftest_c99_globals}
+${ac_c_conftest_c11_globals}
+
+int
+main (int argc, char **argv)
+{
+ int ok = 0;
+ ${ac_c_conftest_c89_main}
+ ${ac_c_conftest_c99_main}
+ ${ac_c_conftest_c11_main}
+ return ok;
+}
+"
+
+# Test code for whether the C compiler supports C99 (complete).
+ac_c_conftest_c99_program="${ac_c_conftest_c89_globals}
+${ac_c_conftest_c99_globals}
+
+int
+main (int argc, char **argv)
+{
+ int ok = 0;
+ ${ac_c_conftest_c89_main}
+ ${ac_c_conftest_c99_main}
+ return ok;
+}
+"
+
+# Test code for whether the C compiler supports C89 (complete).
+ac_c_conftest_c89_program="${ac_c_conftest_c89_globals}
+
+int
+main (int argc, char **argv)
+{
+ int ok = 0;
+ ${ac_c_conftest_c89_main}
+ return ok;
+}
+"
+
+as_fn_append ac_header_c_list " stdlib.h stdlib_h HAVE_STDLIB_H"
+as_fn_append ac_header_c_list " string.h string_h HAVE_STRING_H"
+as_fn_append ac_header_c_list " inttypes.h inttypes_h HAVE_INTTYPES_H"
+as_fn_append ac_header_c_list " stdint.h stdint_h HAVE_STDINT_H"
+as_fn_append ac_header_c_list " strings.h strings_h HAVE_STRINGS_H"
+as_fn_append ac_header_c_list " sys/stat.h sys_stat_h HAVE_SYS_STAT_H"
+as_fn_append ac_header_c_list " sys/types.h sys_types_h HAVE_SYS_TYPES_H"
+as_fn_append ac_header_c_list " unistd.h unistd_h HAVE_UNISTD_H"
+as_fn_append ac_header_c_list " wchar.h wchar_h HAVE_WCHAR_H"
+as_fn_append ac_header_c_list " minix/config.h minix_config_h HAVE_MINIX_CONFIG_H"
+as_fn_append ac_header_c_list " pthread_np.h pthread_np_h HAVE_PTHREAD_NP_H"
+as_fn_append ac_header_c_list " sys/uio.h sys_uio_h HAVE_SYS_UIO_H"
+as_fn_append ac_header_c_list " bsd/string.h bsd_string_h HAVE_BSD_STRING_H"
+
+# Auxiliary files required by this configure script.
+ac_aux_files="ltmain.sh ar-lib config.guess config.sub compile missing install-sh"
+
+# Locations in which to look for auxiliary files.
+ac_aux_dir_candidates="${srcdir}${PATH_SEPARATOR}${srcdir}/..${PATH_SEPARATOR}${srcdir}/../.."
+
+# Search for a directory containing all of the required auxiliary files,
+# $ac_aux_files, from the $PATH-style list $ac_aux_dir_candidates.
+# If we don't find one directory that contains all the files we need,
+# we report the set of missing files from the *first* directory in
+# $ac_aux_dir_candidates and give up.
+ac_missing_aux_files=""
+ac_first_candidate=:
+printf "%s\n" "$as_me:${as_lineno-$LINENO}: looking for aux files: $ac_aux_files" >&5
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+as_found=false
+for as_dir in $ac_aux_dir_candidates
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ as_found=:
+
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: trying $as_dir" >&5
+ ac_aux_dir_found=yes
+ ac_install_sh=
+ for ac_aux in $ac_aux_files
+ do
+ # As a special case, if "install-sh" is required, that requirement
+ # can be satisfied by any of "install-sh", "install.sh", or "shtool",
+ # and $ac_install_sh is set appropriately for whichever one is found.
+ if test x"$ac_aux" = x"install-sh"
+ then
+ if test -f "${as_dir}install-sh"; then
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: ${as_dir}install-sh found" >&5
+ ac_install_sh="${as_dir}install-sh -c"
+ elif test -f "${as_dir}install.sh"; then
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: ${as_dir}install.sh found" >&5
+ ac_install_sh="${as_dir}install.sh -c"
+ elif test -f "${as_dir}shtool"; then
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: ${as_dir}shtool found" >&5
+ ac_install_sh="${as_dir}shtool install -c"
+ else
+ ac_aux_dir_found=no
+ if $ac_first_candidate; then
+ ac_missing_aux_files="${ac_missing_aux_files} install-sh"
+ else
+ break
+ fi
+ fi
+ else
+ if test -f "${as_dir}${ac_aux}"; then
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: ${as_dir}${ac_aux} found" >&5
+ else
+ ac_aux_dir_found=no
+ if $ac_first_candidate; then
+ ac_missing_aux_files="${ac_missing_aux_files} ${ac_aux}"
+ else
+ break
+ fi
+ fi
+ fi
+ done
+ if test "$ac_aux_dir_found" = yes; then
+ ac_aux_dir="$as_dir"
+ break
+ fi
+ ac_first_candidate=false
+
+ as_found=false
+done
+IFS=$as_save_IFS
+if $as_found
+then :
+
+else $as_nop
+ as_fn_error $? "cannot find required auxiliary files:$ac_missing_aux_files" "$LINENO" 5
+fi
+
+
+# These three variables are undocumented and unsupported,
+# and are intended to be withdrawn in a future Autoconf release.
+# They can cause serious problems if a builder's source tree is in a directory
+# whose full name contains unusual characters.
+if test -f "${ac_aux_dir}config.guess"; then
+ ac_config_guess="$SHELL ${ac_aux_dir}config.guess"
+fi
+if test -f "${ac_aux_dir}config.sub"; then
+ ac_config_sub="$SHELL ${ac_aux_dir}config.sub"
+fi
+if test -f "$ac_aux_dir/configure"; then
+ ac_configure="$SHELL ${ac_aux_dir}configure"
+fi
+
+# Check that the precious variables saved in the cache have kept the same
+# value.
+ac_cache_corrupted=false
+for ac_var in $ac_precious_vars; do
+ eval ac_old_set=\$ac_cv_env_${ac_var}_set
+ eval ac_new_set=\$ac_env_${ac_var}_set
+ eval ac_old_val=\$ac_cv_env_${ac_var}_value
+ eval ac_new_val=\$ac_env_${ac_var}_value
+ case $ac_old_set,$ac_new_set in
+ set,)
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&5
+printf "%s\n" "$as_me: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&2;}
+ ac_cache_corrupted=: ;;
+ ,set)
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was not set in the previous run" >&5
+printf "%s\n" "$as_me: error: \`$ac_var' was not set in the previous run" >&2;}
+ ac_cache_corrupted=: ;;
+ ,);;
+ *)
+ if test "x$ac_old_val" != "x$ac_new_val"; then
+ # differences in whitespace do not lead to failure.
+ ac_old_val_w=`echo x $ac_old_val`
+ ac_new_val_w=`echo x $ac_new_val`
+ if test "$ac_old_val_w" != "$ac_new_val_w"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' has changed since the previous run:" >&5
+printf "%s\n" "$as_me: error: \`$ac_var' has changed since the previous run:" >&2;}
+ ac_cache_corrupted=:
+ else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&5
+printf "%s\n" "$as_me: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&2;}
+ eval $ac_var=\$ac_old_val
+ fi
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: former value: \`$ac_old_val'" >&5
+printf "%s\n" "$as_me: former value: \`$ac_old_val'" >&2;}
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: current value: \`$ac_new_val'" >&5
+printf "%s\n" "$as_me: current value: \`$ac_new_val'" >&2;}
+ fi;;
+ esac
+ # Pass precious variables to config.status.
+ if test "$ac_new_set" = set; then
+ case $ac_new_val in
+ *\'*) ac_arg=$ac_var=`printf "%s\n" "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;;
+ *) ac_arg=$ac_var=$ac_new_val ;;
+ esac
+ case " $ac_configure_args " in
+ *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy.
+ *) as_fn_append ac_configure_args " '$ac_arg'" ;;
+ esac
+ fi
+done
+if $ac_cache_corrupted; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: changes in the environment can compromise the build" >&5
+printf "%s\n" "$as_me: error: changes in the environment can compromise the build" >&2;}
+ as_fn_error $? "run \`${MAKE-make} distclean' and/or \`rm $cache_file'
+ and start over" "$LINENO" 5
+fi
+## -------------------- ##
+## Main body of script. ##
+## -------------------- ##
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+
+am__api_version='1.16'
+
+
+
+ # Find a good install program. We prefer a C program (faster),
+# so one script is as good as another. But avoid the broken or
+# incompatible versions:
+# SysV /etc/install, /usr/sbin/install
+# SunOS /usr/etc/install
+# IRIX /sbin/install
+# AIX /bin/install
+# AmigaOS /C/install, which installs bootblocks on floppy discs
+# AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag
+# AFS /usr/afsws/bin/install, which mishandles nonexistent args
+# SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff"
+# OS/2's system install, which has a completely different semantic
+# ./install, which can be erroneously created by make from ./install.sh.
+# Reject install programs that cannot install multiple files.
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for a BSD-compatible install" >&5
+printf %s "checking for a BSD-compatible install... " >&6; }
+if test -z "$INSTALL"; then
+if test ${ac_cv_path_install+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ # Account for fact that we put trailing slashes in our PATH walk.
+case $as_dir in #((
+ ./ | /[cC]/* | \
+ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \
+ ?:[\\/]os2[\\/]install[\\/]* | ?:[\\/]OS2[\\/]INSTALL[\\/]* | \
+ /usr/ucb/* ) ;;
+ *)
+ # OSF1 and SCO ODT 3.0 have their own names for install.
+ # Don't use installbsd from OSF since it installs stuff as root
+ # by default.
+ for ac_prog in ginstall scoinst install; do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_prog$ac_exec_ext"; then
+ if test $ac_prog = install &&
+ grep dspmsg "$as_dir$ac_prog$ac_exec_ext" >/dev/null 2>&1; then
+ # AIX install. It has an incompatible calling convention.
+ :
+ elif test $ac_prog = install &&
+ grep pwplus "$as_dir$ac_prog$ac_exec_ext" >/dev/null 2>&1; then
+ # program-specific install script used by HP pwplus--don't use.
+ :
+ else
+ rm -rf conftest.one conftest.two conftest.dir
+ echo one > conftest.one
+ echo two > conftest.two
+ mkdir conftest.dir
+ if "$as_dir$ac_prog$ac_exec_ext" -c conftest.one conftest.two "`pwd`/conftest.dir/" &&
+ test -s conftest.one && test -s conftest.two &&
+ test -s conftest.dir/conftest.one &&
+ test -s conftest.dir/conftest.two
+ then
+ ac_cv_path_install="$as_dir$ac_prog$ac_exec_ext -c"
+ break 3
+ fi
+ fi
+ fi
+ done
+ done
+ ;;
+esac
+
+ done
+IFS=$as_save_IFS
+
+rm -rf conftest.one conftest.two conftest.dir
+
+fi
+ if test ${ac_cv_path_install+y}; then
+ INSTALL=$ac_cv_path_install
+ else
+ # As a last resort, use the slow shell script. Don't cache a
+ # value for INSTALL within a source directory, because that will
+ # break other packages using the cache if that directory is
+ # removed, or if the value is a relative name.
+ INSTALL=$ac_install_sh
+ fi
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $INSTALL" >&5
+printf "%s\n" "$INSTALL" >&6; }
+
+# Use test -z because SunOS4 sh mishandles braces in ${var-val}.
+# It thinks the first close brace ends the variable substitution.
+test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}'
+
+test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}'
+
+test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644'
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether build environment is sane" >&5
+printf %s "checking whether build environment is sane... " >&6; }
+# Reject unsafe characters in $srcdir or the absolute working directory
+# name. Accept space and tab only in the latter.
+am_lf='
+'
+case `pwd` in
+ *[\\\"\#\$\&\'\`$am_lf]*)
+ as_fn_error $? "unsafe absolute working directory name" "$LINENO" 5;;
+esac
+case $srcdir in
+ *[\\\"\#\$\&\'\`$am_lf\ \ ]*)
+ as_fn_error $? "unsafe srcdir value: '$srcdir'" "$LINENO" 5;;
+esac
+
+# Do 'set' in a subshell so we don't clobber the current shell's
+# arguments. Must try -L first in case configure is actually a
+# symlink; some systems play weird games with the mod time of symlinks
+# (eg FreeBSD returns the mod time of the symlink's containing
+# directory).
+if (
+ am_has_slept=no
+ for am_try in 1 2; do
+ echo "timestamp, slept: $am_has_slept" > conftest.file
+ set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null`
+ if test "$*" = "X"; then
+ # -L didn't work.
+ set X `ls -t "$srcdir/configure" conftest.file`
+ fi
+ if test "$*" != "X $srcdir/configure conftest.file" \
+ && test "$*" != "X conftest.file $srcdir/configure"; then
+
+ # If neither matched, then we have a broken ls. This can happen
+ # if, for instance, CONFIG_SHELL is bash and it inherits a
+ # broken ls alias from the environment. This has actually
+ # happened. Such a system could not be considered "sane".
+ as_fn_error $? "ls -t appears to fail. Make sure there is not a broken
+ alias in your environment" "$LINENO" 5
+ fi
+ if test "$2" = conftest.file || test $am_try -eq 2; then
+ break
+ fi
+ # Just in case.
+ sleep 1
+ am_has_slept=yes
+ done
+ test "$2" = conftest.file
+ )
+then
+ # Ok.
+ :
+else
+ as_fn_error $? "newly created file is older than distributed files!
+Check your system clock" "$LINENO" 5
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+# If we didn't sleep, we still need to ensure time stamps of config.status and
+# generated files are strictly newer.
+am_sleep_pid=
+if grep 'slept: no' conftest.file >/dev/null 2>&1; then
+ ( sleep 1 ) &
+ am_sleep_pid=$!
+fi
+
+rm -f conftest.file
+
+test "$program_prefix" != NONE &&
+ program_transform_name="s&^&$program_prefix&;$program_transform_name"
+# Use a double $ so make ignores it.
+test "$program_suffix" != NONE &&
+ program_transform_name="s&\$&$program_suffix&;$program_transform_name"
+# Double any \ or $.
+# By default was `s,x,x', remove it if useless.
+ac_script='s/[\\$]/&&/g;s/;s,x,x,$//'
+program_transform_name=`printf "%s\n" "$program_transform_name" | sed "$ac_script"`
+
+
+# Expand $ac_aux_dir to an absolute path.
+am_aux_dir=`cd "$ac_aux_dir" && pwd`
+
+
+ if test x"${MISSING+set}" != xset; then
+ MISSING="\${SHELL} '$am_aux_dir/missing'"
+fi
+# Use eval to expand $SHELL
+if eval "$MISSING --is-lightweight"; then
+ am_missing_run="$MISSING "
+else
+ am_missing_run=
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: 'missing' script is too old or missing" >&5
+printf "%s\n" "$as_me: WARNING: 'missing' script is too old or missing" >&2;}
+fi
+
+if test x"${install_sh+set}" != xset; then
+ case $am_aux_dir in
+ *\ * | *\ *)
+ install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;;
+ *)
+ install_sh="\${SHELL} $am_aux_dir/install-sh"
+ esac
+fi
+
+# Installed binaries are usually stripped using 'strip' when the user
+# run "make install-strip". However 'strip' might not be the right
+# tool to use in cross-compilation environments, therefore Automake
+# will honor the 'STRIP' environment variable to overrule this program.
+if test "$cross_compiling" != no; then
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+set dummy ${ac_tool_prefix}strip; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_STRIP+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$STRIP"; then
+ ac_cv_prog_STRIP="$STRIP" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_STRIP="${ac_tool_prefix}strip"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+STRIP=$ac_cv_prog_STRIP
+if test -n "$STRIP"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5
+printf "%s\n" "$STRIP" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_STRIP"; then
+ ac_ct_STRIP=$STRIP
+ # Extract the first word of "strip", so it can be a program name with args.
+set dummy strip; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_STRIP+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_STRIP"; then
+ ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_STRIP="strip"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP
+if test -n "$ac_ct_STRIP"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5
+printf "%s\n" "$ac_ct_STRIP" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_STRIP" = x; then
+ STRIP=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ STRIP=$ac_ct_STRIP
+ fi
+else
+ STRIP="$ac_cv_prog_STRIP"
+fi
+
+fi
+INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s"
+
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for a race-free mkdir -p" >&5
+printf %s "checking for a race-free mkdir -p... " >&6; }
+if test -z "$MKDIR_P"; then
+ if test ${ac_cv_path_mkdir+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH$PATH_SEPARATOR/opt/sfw/bin
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_prog in mkdir gmkdir; do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ as_fn_executable_p "$as_dir$ac_prog$ac_exec_ext" || continue
+ case `"$as_dir$ac_prog$ac_exec_ext" --version 2>&1` in #(
+ 'mkdir ('*'coreutils) '* | \
+ 'BusyBox '* | \
+ 'mkdir (fileutils) '4.1*)
+ ac_cv_path_mkdir=$as_dir$ac_prog$ac_exec_ext
+ break 3;;
+ esac
+ done
+ done
+ done
+IFS=$as_save_IFS
+
+fi
+
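+ # A non-conforming "mkdir --version" above may have created a directory
+ # literally named "--version"; remove it if so.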
+ test -d ./--version && rmdir ./--version
+ if test ${ac_cv_path_mkdir+y}; then
+ MKDIR_P="$ac_cv_path_mkdir -p"
+ else
+ # As a last resort, use the slow shell script. Don't cache a
+ # value for MKDIR_P within a source directory, because that will
+ # break other packages using the cache if that directory is
+ # removed, or if the value is a relative name.
+ MKDIR_P="$ac_install_sh -d"
+ fi
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $MKDIR_P" >&5
+printf "%s\n" "$MKDIR_P" >&6; }
+
+for ac_prog in gawk mawk nawk awk
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_AWK+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$AWK"; then
+ ac_cv_prog_AWK="$AWK" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_AWK="$ac_prog"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+AWK=$ac_cv_prog_AWK
+if test -n "$AWK"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $AWK" >&5
+printf "%s\n" "$AWK" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ test -n "$AWK" && break
+done
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} sets \$(MAKE)" >&5
+printf %s "checking whether ${MAKE-make} sets \$(MAKE)... " >&6; }
+set x ${MAKE-make}
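+# Derive from the make program name a string that is safe inside a shell
+# variable name: "+" becomes "p", any other non-alphanumeric becomes "_".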
+ac_make=`printf "%s\n" "$2" | sed 's/+/p/g; s/[^a-zA-Z0-9_]/_/g'`
+if eval test \${ac_cv_prog_make_${ac_make}_set+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ cat >conftest.make <<\_ACEOF
+SHELL = /bin/sh
+all:
+ @echo '@@@%%%=$(MAKE)=@@@%%%'
+_ACEOF
+# GNU make sometimes prints "make[1]: Entering ...", which would confuse us.
+case `${MAKE-make} -f conftest.make 2>/dev/null` in
+ *@@@%%%=?*=@@@%%%*)
+ eval ac_cv_prog_make_${ac_make}_set=yes;;
+ *)
+ eval ac_cv_prog_make_${ac_make}_set=no;;
+esac
+rm -f conftest.make
+fi
+if eval test \$ac_cv_prog_make_${ac_make}_set = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ SET_MAKE=
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ SET_MAKE="MAKE=${MAKE-make}"
+fi
+
+rm -rf .tst 2>/dev/null
+mkdir .tst 2>/dev/null
+if test -d .tst; then
+ am__leading_dot=.
+else
+ am__leading_dot=_
+fi
+rmdir .tst 2>/dev/null
+
+# Check whether --enable-silent-rules was given.
+if test ${enable_silent_rules+y}
+then :
+ enableval=$enable_silent_rules;
+fi
+
+case $enable_silent_rules in # (((
+ yes) AM_DEFAULT_VERBOSITY=0;;
+ no) AM_DEFAULT_VERBOSITY=1;;
+ *) AM_DEFAULT_VERBOSITY=1;;
+esac
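+# A descriptive note (not part of the upstream logic): the probe below feeds a
+# tiny makefile to $am_make in which, with V=1, $(BAR$(V)) should expand to
+# $(BAR1) = "true"; a make without nested-variable expansion fails the pipeline
+# and we fall back to plain AM_V/AM_DEFAULT_V values.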
+am_make=${MAKE-make}
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $am_make supports nested variables" >&5
+printf %s "checking whether $am_make supports nested variables... " >&6; }
+if test ${am_cv_make_support_nested_variables+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if printf "%s\n" 'TRUE=$(BAR$(V))
+BAR0=false
+BAR1=true
+V=1
+am__doit:
+ @$(TRUE)
+.PHONY: am__doit' | $am_make -f - >/dev/null 2>&1; then
+ am_cv_make_support_nested_variables=yes
+else
+ am_cv_make_support_nested_variables=no
+fi
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_make_support_nested_variables" >&5
+printf "%s\n" "$am_cv_make_support_nested_variables" >&6; }
+if test $am_cv_make_support_nested_variables = yes; then
+ AM_V='$(V)'
+ AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)'
+else
+ AM_V=$AM_DEFAULT_VERBOSITY
+ AM_DEFAULT_V=$AM_DEFAULT_VERBOSITY
+fi
+AM_BACKSLASH='\'
+
+if test "`cd $srcdir && pwd`" != "`pwd`"; then
+ # Use -I$(srcdir) only when $(srcdir) != ., so that make's output
+ # is not polluted with repeated "-I."
+ am__isrc=' -I$(srcdir)'
+ # test to see if srcdir already configured
+ if test -f $srcdir/config.status; then
+ as_fn_error $? "source directory already configured; run \"make distclean\" there first" "$LINENO" 5
+ fi
+fi
+
+# test whether we have cygpath
+if test -z "$CYGPATH_W"; then
+ if (cygpath --version) >/dev/null 2>/dev/null; then
+ CYGPATH_W='cygpath -w'
+ else
+ CYGPATH_W=echo
+ fi
+fi
+
+
+# Define the identity of the package.
+ PACKAGE='knot'
+ VERSION='3.4.6'
+
+
+printf "%s\n" "#define PACKAGE \"$PACKAGE\"" >>confdefs.h
+
+
+printf "%s\n" "#define VERSION \"$VERSION\"" >>confdefs.h
+
+# Some tools Automake needs.
+
+ACLOCAL=${ACLOCAL-"${am_missing_run}aclocal-${am__api_version}"}
+
+
+AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"}
+
+
+AUTOMAKE=${AUTOMAKE-"${am_missing_run}automake-${am__api_version}"}
+
+
+AUTOHEADER=${AUTOHEADER-"${am_missing_run}autoheader"}
+
+
+MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"}
+
+# For better backward compatibility. To be removed once Automake 1.9.x
+# dies out for good. For more background, see:
+# <https://lists.gnu.org/archive/html/automake/2012-07/msg00001.html>
+# <https://lists.gnu.org/archive/html/automake/2012-07/msg00014.html>
+mkdir_p='$(MKDIR_P)'
+
+# We need awk for the "check" target (and possibly the TAP driver). The
+# system "awk" is bad on some platforms.
+# Always define AMTAR for backward compatibility. Yes, it's still used
+# in the wild :-( We should find a proper way to deprecate it ...
+AMTAR='$${TAR-tar}'
+
+
+# We'll loop over all known methods to create a tar archive until one works.
+_am_tools='gnutar pax cpio none'
+
+am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -'
+
+
+
+
+
+# Variables for tags utilities; see am/tags.am
+if test -z "$CTAGS"; then
+ CTAGS=ctags
+fi
+
+if test -z "$ETAGS"; then
+ ETAGS=etags
+fi
+
+if test -z "$CSCOPE"; then
+ CSCOPE=cscope
+fi
+
+
+
+# POSIX will say in a future version that running "rm -f" with no argument
+# is OK; and we want to be able to make that assumption in our Makefile
+# recipes. So use an aggressive probe to check that the usage we want is
+# actually supported "in the wild" to an acceptable degree.
+# See automake bug#10828.
+# To make any issue more visible, cause the running configure to be aborted
+# by default if the 'rm' program in use doesn't match our expectations; the
+# user can still override this though.
+if rm -f && rm -fr && rm -rf; then : OK; else
+ cat >&2 <<'END'
+Oops!
+
+Your 'rm' program seems unable to run without file operands specified
+on the command line, even when the '-f' option is present. This is contrary
+to the behaviour of most rm programs out there, and not conforming with
+the upcoming POSIX standard: <http://austingroupbugs.net/view.php?id=542>
+
+Please tell bug-automake@gnu.org about your system, including the value
+of your $PATH and any error possibly output before this message. This
+can help us improve future automake versions.
+
+END
+ if test x"$ACCEPT_INFERIOR_RM_PROGRAM" = x"yes"; then
+ echo 'Configuration will proceed anyway, since you have set the' >&2
+ echo 'ACCEPT_INFERIOR_RM_PROGRAM variable to "yes"' >&2
+ echo >&2
+ else
+ cat >&2 <<'END'
+Aborting the configuration process, to ensure you take notice of the issue.
+
+You can download and install GNU coreutils to get an 'rm' implementation
+that behaves properly: <https://www.gnu.org/software/coreutils/>.
+
+If you want to complete the configuration process using your problematic
+'rm' anyway, export the environment variable ACCEPT_INFERIOR_RM_PROGRAM
+to "yes", and re-run configure.
+
+END
+ as_fn_error $? "Your 'rm' program is bad, sorry." "$LINENO" 5
+ fi
+fi
+
+# Check whether --enable-silent-rules was given.
+if test ${enable_silent_rules+y}
+then :
+ enableval=$enable_silent_rules;
+fi
+
+case $enable_silent_rules in # (((
+ yes) AM_DEFAULT_VERBOSITY=0;;
+ no) AM_DEFAULT_VERBOSITY=1;;
+ *) AM_DEFAULT_VERBOSITY=0;;
+esac
+am_make=${MAKE-make}
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $am_make supports nested variables" >&5
+printf %s "checking whether $am_make supports nested variables... " >&6; }
+if test ${am_cv_make_support_nested_variables+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if printf "%s\n" 'TRUE=$(BAR$(V))
+BAR0=false
+BAR1=true
+V=1
+am__doit:
+ @$(TRUE)
+.PHONY: am__doit' | $am_make -f - >/dev/null 2>&1; then
+ am_cv_make_support_nested_variables=yes
+else
+ am_cv_make_support_nested_variables=no
+fi
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_make_support_nested_variables" >&5
+printf "%s\n" "$am_cv_make_support_nested_variables" >&6; }
+if test $am_cv_make_support_nested_variables = yes; then
+ AM_V='$(V)'
+ AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)'
+else
+ AM_V=$AM_DEFAULT_VERBOSITY
+ AM_DEFAULT_V=$AM_DEFAULT_VERBOSITY
+fi
+AM_BACKSLASH='\'
+
+
+ac_config_headers="$ac_config_headers src/config.h"
+
+
+
+
+
+
+
+
+
+
+
+DEPDIR="${am__leading_dot}deps"
+
+ac_config_commands="$ac_config_commands depfiles"
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} supports the include directive" >&5
+printf %s "checking whether ${MAKE-make} supports the include directive... " >&6; }
+cat > confinc.mk << 'END'
+am__doit:
+ @echo this is the am__doit target >confinc.out
+.PHONY: am__doit
+END
+am__include="#"
+am__quote=
+# BSD make does it like this.
+echo '.include "confinc.mk" # ignored' > confmf.BSD
+# Other make implementations (GNU, Solaris 10, AIX) do it like this.
+echo 'include confinc.mk # ignored' > confmf.GNU
+_am_result=no
+for s in GNU BSD; do
+ { echo "$as_me:$LINENO: ${MAKE-make} -f confmf.$s && cat confinc.out" >&5
+ (${MAKE-make} -f confmf.$s && cat confinc.out) >&5 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+ case $?:`cat confinc.out 2>/dev/null` in #(
+ '0:this is the am__doit target') :
+ case $s in #(
+ BSD) :
+ am__include='.include' am__quote='"' ;; #(
+ *) :
+ am__include='include' am__quote='' ;;
+esac ;; #(
+ *) :
+ ;;
+esac
+ if test "$am__include" != "#"; then
+ _am_result="yes ($s style)"
+ break
+ fi
+done
+rm -f confinc.* confmf.*
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: ${_am_result}" >&5
+printf "%s\n" "${_am_result}" >&6; }
+
+# Check whether --enable-dependency-tracking was given.
+if test ${enable_dependency_tracking+y}
+then :
+ enableval=$enable_dependency_tracking;
+fi
+
+if test "x$enable_dependency_tracking" != xno; then
+ am_depcomp="$ac_aux_dir/depcomp"
+ AMDEPBACKSLASH='\'
+ am__nodep='_no'
+fi
+ if test "x$enable_dependency_tracking" != xno; then
+ AMDEP_TRUE=
+ AMDEP_FALSE='#'
+else
+ AMDEP_TRUE='#'
+ AMDEP_FALSE=
+fi
+
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args.
+set dummy ${ac_tool_prefix}gcc; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="${ac_tool_prefix}gcc"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5
+printf "%s\n" "$CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_CC"; then
+ ac_ct_CC=$CC
+ # Extract the first word of "gcc", so it can be a program name with args.
+set dummy gcc; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="gcc"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5
+printf "%s\n" "$ac_ct_CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_CC" = x; then
+ CC=""
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ CC=$ac_ct_CC
+ fi
+else
+ CC="$ac_cv_prog_CC"
+fi
+
+if test -z "$CC"; then
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args.
+set dummy ${ac_tool_prefix}cc; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="${ac_tool_prefix}cc"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5
+printf "%s\n" "$CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ fi
+fi
+if test -z "$CC"; then
+ # Extract the first word of "cc", so it can be a program name with args.
+set dummy cc; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+ ac_prog_rejected=no
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ if test "$as_dir$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then
+ ac_prog_rejected=yes
+ continue
+ fi
+ ac_cv_prog_CC="cc"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+if test $ac_prog_rejected = yes; then
+ # We found a bogon in the path, so make sure we never use it.
+ set dummy $ac_cv_prog_CC
+ shift
+ if test $# != 0; then
+ # We chose a different compiler from the bogus one.
+ # However, it has the same basename, so the bogon will be chosen
+ # first if we set CC to just the basename; use the full file name.
+ shift
+ ac_cv_prog_CC="$as_dir$ac_word${1+' '}$@"
+ fi
+fi
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5
+printf "%s\n" "$CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$CC"; then
+ if test -n "$ac_tool_prefix"; then
+ for ac_prog in cl.exe
+ do
+ # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="$ac_tool_prefix$ac_prog"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5
+printf "%s\n" "$CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ test -n "$CC" && break
+ done
+fi
+if test -z "$CC"; then
+ ac_ct_CC=$CC
+ for ac_prog in cl.exe
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="$ac_prog"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5
+printf "%s\n" "$ac_ct_CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ test -n "$ac_ct_CC" && break
+done
+
+ if test "x$ac_ct_CC" = x; then
+ CC=""
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ CC=$ac_ct_CC
+ fi
+fi
+
+fi
+if test -z "$CC"; then
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}clang", so it can be a program name with args.
+set dummy ${ac_tool_prefix}clang; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="${ac_tool_prefix}clang"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5
+printf "%s\n" "$CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_CC"; then
+ ac_ct_CC=$CC
+ # Extract the first word of "clang", so it can be a program name with args.
+set dummy clang; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="clang"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5
+printf "%s\n" "$ac_ct_CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_CC" = x; then
+ CC=""
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ CC=$ac_ct_CC
+ fi
+else
+ CC="$ac_cv_prog_CC"
+fi
+
+fi
+
+
+test -z "$CC" && { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "no acceptable C compiler found in \$PATH
+See \`config.log' for more details" "$LINENO" 5; }
+
+# Provide some information about the compiler.
+printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5
+set X $ac_compile
+ac_compiler=$2
+for ac_option in --version -v -V -qversion -version; do
+ { { ac_try="$ac_compiler $ac_option >&5"
+case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_compiler $ac_option >&5") 2>conftest.err
+ ac_status=$?
+ if test -s conftest.err; then
+ sed '10a\
+... rest of stderr output deleted ...
+ 10q' conftest.err >conftest.er1
+ cat conftest.er1 >&5
+ fi
+ rm -f conftest.er1 conftest.err
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }
+done
+
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+ac_clean_files_save=$ac_clean_files
+ac_clean_files="$ac_clean_files a.out a.out.dSYM a.exe b.out"
+# Try to create an executable without -o first, disregard a.out.
+# It will help us diagnose broken compilers, and get an intuition of the
+# executable extension (exeext).
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the C compiler works" >&5
+printf %s "checking whether the C compiler works... " >&6; }
+ac_link_default=`printf "%s\n" "$ac_link" | sed 's/ -o *conftest[^ ]*//'`
+
+# The possible output files:
+ac_files="a.out conftest.exe conftest a.exe a_out.exe b.out conftest.*"
+
+ac_rmfiles=
+for ac_file in $ac_files
+do
+ case $ac_file in
+ *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;;
+ * ) ac_rmfiles="$ac_rmfiles $ac_file";;
+ esac
+done
+rm -f $ac_rmfiles
+
+if { { ac_try="$ac_link_default"
+case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_link_default") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }
+then :
+ # Autoconf-2.13 could set the ac_cv_exeext variable to `no'.
+# So ignore a value of `no', otherwise this would lead to `EXEEXT = no'
+# in a Makefile. We should not override ac_cv_exeext if it was cached,
+# so that the user can short-circuit this test for compilers unknown to
+# Autoconf.
+for ac_file in $ac_files ''
+do
+ test -f "$ac_file" || continue
+ case $ac_file in
+ *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj )
+ ;;
+ [ab].out )
+ # We found the default executable, but exeext='' is most
+ # certainly right.
+ break;;
+ *.* )
+ if test ${ac_cv_exeext+y} && test "$ac_cv_exeext" != no;
+ then :; else
+ ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'`
+ fi
+ # We set ac_cv_exeext here because the later test for it is not
+ # safe: cross compilers may not add the suffix if given an `-o'
+ # argument, so we may need to know it at that point already.
+ # Even if this section looks crufty: it has the advantage of
+ # actually working.
+ break;;
+ * )
+ break;;
+ esac
+done
+test "$ac_cv_exeext" = no && ac_cv_exeext=
+
+else $as_nop
+ ac_file=''
+fi
+if test -z "$ac_file"
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+printf "%s\n" "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+{ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error 77 "C compiler cannot create executables
+See \`config.log' for more details" "$LINENO" 5; }
+else $as_nop
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for C compiler default output file name" >&5
+printf %s "checking for C compiler default output file name... " >&6; }
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_file" >&5
+printf "%s\n" "$ac_file" >&6; }
+ac_exeext=$ac_cv_exeext
+
+rm -f -r a.out a.out.dSYM a.exe conftest$ac_cv_exeext b.out
+ac_clean_files=$ac_clean_files_save
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for suffix of executables" >&5
+printf %s "checking for suffix of executables... " >&6; }
+if { { ac_try="$ac_link"
+case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_link") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }
+then :
+ # If both `conftest.exe' and `conftest' are `present' (well, observable)
+# catch `conftest.exe'. For instance with Cygwin, `ls conftest' will
+# work properly (i.e., refer to `conftest.exe'), while it won't with
+# `rm'.
+for ac_file in conftest.exe conftest conftest.*; do
+ test -f "$ac_file" || continue
+ case $ac_file in
+ *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;;
+ *.* ) ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'`
+ break;;
+ * ) break;;
+ esac
+done
+else $as_nop
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "cannot compute suffix of executables: cannot compile and link
+See \`config.log' for more details" "$LINENO" 5; }
+fi
+rm -f conftest conftest$ac_cv_exeext
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_exeext" >&5
+printf "%s\n" "$ac_cv_exeext" >&6; }
+
+rm -f conftest.$ac_ext
+EXEEXT=$ac_cv_exeext
+ac_exeext=$EXEEXT
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <stdio.h>
+int
+main (void)
+{
+FILE *f = fopen ("conftest.out", "w");
+ return ferror (f) || fclose (f) != 0;
+
+ ;
+ return 0;
+}
+_ACEOF
+ac_clean_files="$ac_clean_files conftest.out"
+# Check that the compiler produces executables we can run. If not, either
+# the compiler is broken, or we cross compile.
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether we are cross compiling" >&5
+printf %s "checking whether we are cross compiling... " >&6; }
+if test "$cross_compiling" != yes; then
+ { { ac_try="$ac_link"
+case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_link") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }
+ if { ac_try='./conftest$ac_cv_exeext'
+ { { case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_try") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; }; then
+ cross_compiling=no
+ else
+ if test "$cross_compiling" = maybe; then
+ cross_compiling=yes
+ else
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error 77 "cannot run C compiled programs.
+If you meant to cross compile, use \`--host'.
+See \`config.log' for more details" "$LINENO" 5; }
+ fi
+ fi
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $cross_compiling" >&5
+printf "%s\n" "$cross_compiling" >&6; }
+
+rm -f conftest.$ac_ext conftest$ac_cv_exeext conftest.out
+ac_clean_files=$ac_clean_files_save
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for suffix of object files" >&5
+printf %s "checking for suffix of object files... " >&6; }
+if test ${ac_cv_objext+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.o conftest.obj
+if { { ac_try="$ac_compile"
+case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_compile") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }
+then :
+ for ac_file in conftest.o conftest.obj conftest.*; do
+ test -f "$ac_file" || continue;
+ case $ac_file in
+ *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM ) ;;
+ *) ac_cv_objext=`expr "$ac_file" : '.*\.\(.*\)'`
+ break;;
+ esac
+done
+else $as_nop
+ printf "%s\n" "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+{ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "cannot compute suffix of object files: cannot compile
+See \`config.log' for more details" "$LINENO" 5; }
+fi
+rm -f conftest.$ac_cv_objext conftest.$ac_ext
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_objext" >&5
+printf "%s\n" "$ac_cv_objext" >&6; }
+OBJEXT=$ac_cv_objext
+ac_objext=$OBJEXT
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the compiler supports GNU C" >&5
+printf %s "checking whether the compiler supports GNU C... " >&6; }
+if test ${ac_cv_c_compiler_gnu+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+#ifndef __GNUC__
+ choke me
+#endif
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_compiler_gnu=yes
+else $as_nop
+ ac_compiler_gnu=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ac_cv_c_compiler_gnu=$ac_compiler_gnu
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu" >&5
+printf "%s\n" "$ac_cv_c_compiler_gnu" >&6; }
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+if test $ac_compiler_gnu = yes; then
+ GCC=yes
+else
+ GCC=
+fi
+ac_test_CFLAGS=${CFLAGS+y}
+ac_save_CFLAGS=$CFLAGS
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g" >&5
+printf %s "checking whether $CC accepts -g... " >&6; }
+if test ${ac_cv_prog_cc_g+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_save_c_werror_flag=$ac_c_werror_flag
+ ac_c_werror_flag=yes
+ ac_cv_prog_cc_g=no
+ CFLAGS="-g"
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_prog_cc_g=yes
+else $as_nop
+ CFLAGS=""
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+
+else $as_nop
+ ac_c_werror_flag=$ac_save_c_werror_flag
+ CFLAGS="-g"
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_prog_cc_g=yes
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ ac_c_werror_flag=$ac_save_c_werror_flag
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g" >&5
+printf "%s\n" "$ac_cv_prog_cc_g" >&6; }
+if test $ac_test_CFLAGS; then
+ CFLAGS=$ac_save_CFLAGS
+elif test $ac_cv_prog_cc_g = yes; then
+ if test "$GCC" = yes; then
+ CFLAGS="-g -O2"
+ else
+ CFLAGS="-g"
+ fi
+else
+ if test "$GCC" = yes; then
+ CFLAGS="-O2"
+ else
+ CFLAGS=
+ fi
+fi
+ac_prog_cc_stdc=no
+if test x$ac_prog_cc_stdc = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C11 features" >&5
+printf %s "checking for $CC option to enable C11 features... " >&6; }
+if test ${ac_cv_prog_cc_c11+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_cv_prog_cc_c11=no
+ac_save_CC=$CC
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+$ac_c_conftest_c11_program
+_ACEOF
+for ac_arg in '' -std=gnu11
+do
+ CC="$ac_save_CC $ac_arg"
+ if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_prog_cc_c11=$ac_arg
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam
+ test "x$ac_cv_prog_cc_c11" != "xno" && break
+done
+rm -f conftest.$ac_ext
+CC=$ac_save_CC
+fi
+
+if test "x$ac_cv_prog_cc_c11" = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5
+printf "%s\n" "unsupported" >&6; }
+else $as_nop
+ if test "x$ac_cv_prog_cc_c11" = x
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5
+printf "%s\n" "none needed" >&6; }
+else $as_nop
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c11" >&5
+printf "%s\n" "$ac_cv_prog_cc_c11" >&6; }
+ CC="$CC $ac_cv_prog_cc_c11"
+fi
+ ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c11
+ ac_prog_cc_stdc=c11
+fi
+fi
+if test x$ac_prog_cc_stdc = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C99 features" >&5
+printf %s "checking for $CC option to enable C99 features... " >&6; }
+if test ${ac_cv_prog_cc_c99+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_cv_prog_cc_c99=no
+ac_save_CC=$CC
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+$ac_c_conftest_c99_program
+_ACEOF
+for ac_arg in '' -std=gnu99 -std=c99 -c99 -qlanglvl=extc1x -qlanglvl=extc99 -AC99 -D_STDC_C99=
+do
+ CC="$ac_save_CC $ac_arg"
+ if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_prog_cc_c99=$ac_arg
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam
+ test "x$ac_cv_prog_cc_c99" != "xno" && break
+done
+rm -f conftest.$ac_ext
+CC=$ac_save_CC
+fi
+
+if test "x$ac_cv_prog_cc_c99" = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5
+printf "%s\n" "unsupported" >&6; }
+else $as_nop
+ if test "x$ac_cv_prog_cc_c99" = x
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5
+printf "%s\n" "none needed" >&6; }
+else $as_nop
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c99" >&5
+printf "%s\n" "$ac_cv_prog_cc_c99" >&6; }
+ CC="$CC $ac_cv_prog_cc_c99"
+fi
+ ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c99
+ ac_prog_cc_stdc=c99
+fi
+fi
+if test x$ac_prog_cc_stdc = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C89 features" >&5
+printf %s "checking for $CC option to enable C89 features... " >&6; }
+if test ${ac_cv_prog_cc_c89+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_cv_prog_cc_c89=no
+ac_save_CC=$CC
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+$ac_c_conftest_c89_program
+_ACEOF
+for ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__"
+do
+ CC="$ac_save_CC $ac_arg"
+ if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_prog_cc_c89=$ac_arg
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam
+ test "x$ac_cv_prog_cc_c89" != "xno" && break
+done
+rm -f conftest.$ac_ext
+CC=$ac_save_CC
+fi
+
+if test "x$ac_cv_prog_cc_c89" = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5
+printf "%s\n" "unsupported" >&6; }
+else $as_nop
+ if test "x$ac_cv_prog_cc_c89" = x
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5
+printf "%s\n" "none needed" >&6; }
+else $as_nop
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89" >&5
+printf "%s\n" "$ac_cv_prog_cc_c89" >&6; }
+ CC="$CC $ac_cv_prog_cc_c89"
+fi
+ ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c89
+ ac_prog_cc_stdc=c89
+fi
+fi
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+
+ ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CC understands -c and -o together" >&5
+printf %s "checking whether $CC understands -c and -o together... " >&6; }
+if test ${am_cv_prog_cc_c_o+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+ # Make sure it works both with $CC and with simple cc.
+ # Following AC_PROG_CC_C_O, we do the test twice because some
+ # compilers refuse to overwrite an existing .o file with -o,
+ # though they will create one.
+ am_cv_prog_cc_c_o=yes
+ for am_i in 1 2; do
+ if { echo "$as_me:$LINENO: $CC -c conftest.$ac_ext -o conftest2.$ac_objext" >&5
+ ($CC -c conftest.$ac_ext -o conftest2.$ac_objext) >&5 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } \
+ && test -f conftest2.$ac_objext; then
+ : OK
+ else
+ am_cv_prog_cc_c_o=no
+ break
+ fi
+ done
+ rm -f core conftest*
+ unset am_i
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_prog_cc_c_o" >&5
+printf "%s\n" "$am_cv_prog_cc_c_o" >&6; }
+if test "$am_cv_prog_cc_c_o" != yes; then
+ # Losing compiler, so override with the script.
+ # FIXME: It is wrong to rewrite CC.
+ # But if we don't then we get into trouble of one sort or another.
+ # A longer-term fix would be to have automake use am__CC in this case,
+ # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)"
+ CC="$am_aux_dir/compile $CC"
+fi
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+
+depcc="$CC" am_compiler_list=
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5
+printf %s "checking dependency style of $depcc... " >&6; }
+if test ${am_cv_CC_dependencies_compiler_type+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then
+ # We make a subdir and do the tests there. Otherwise we can end up
+ # making bogus files that we don't know about and never remove. For
+ # instance it was reported that on HP-UX the gcc test will end up
+ # making a dummy file named 'D' -- because '-MD' means "put the output
+ # in D".
+ rm -rf conftest.dir
+ mkdir conftest.dir
+ # Copy depcomp to subdir because otherwise we won't find it if we're
+ # using a relative directory.
+ cp "$am_depcomp" conftest.dir
+ cd conftest.dir
+ # We will build objects and dependencies in a subdirectory because
+ # it helps to detect inapplicable dependency modes. For instance
+ # both Tru64's cc and ICC support -MD to output dependencies as a
+ # side effect of compilation, but ICC will put the dependencies in
+ # the current directory while Tru64 will put them in the object
+ # directory.
+ mkdir sub
+
+ am_cv_CC_dependencies_compiler_type=none
+ if test "$am_compiler_list" = ""; then
+ am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp`
+ fi
+ am__universal=false
+ case " $depcc " in #(
+ *\ -arch\ *\ -arch\ *) am__universal=true ;;
+ esac
+
+ for depmode in $am_compiler_list; do
+ # Setup a source with many dependencies, because some compilers
+ # like to wrap large dependency lists on column 80 (with \), and
+ # we should not choose a depcomp mode which is confused by this.
+ #
+ # We need to recreate these files for each test, as the compiler may
+ # overwrite some of them when testing with obscure command lines.
+ # This happens at least with the AIX C compiler.
+ : > sub/conftest.c
+ for i in 1 2 3 4 5 6; do
+ echo '#include "conftst'$i'.h"' >> sub/conftest.c
+ # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with
+ # Solaris 10 /bin/sh.
+ echo '/* dummy */' > sub/conftst$i.h
+ done
+ echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf
+
+ # We check with '-c' and '-o' for the sake of the "dashmstdout"
+ # mode. It turns out that the SunPro C++ compiler does not properly
+ # handle '-M -o', and we need to detect this. Also, some Intel
+ # versions had trouble with output in subdirs.
+ am__obj=sub/conftest.${OBJEXT-o}
+ am__minus_obj="-o $am__obj"
+ case $depmode in
+ gcc)
+ # This depmode causes a compiler race in universal mode.
+ test "$am__universal" = false || continue
+ ;;
+ nosideeffect)
+ # After this tag, mechanisms are not by side-effect, so they'll
+ # only be used when explicitly requested.
+ if test "x$enable_dependency_tracking" = xyes; then
+ continue
+ else
+ break
+ fi
+ ;;
+ msvc7 | msvc7msys | msvisualcpp | msvcmsys)
+ # This compiler won't grok '-c -o', but also, the minuso test has
+ # not run yet. These depmodes are late enough in the game, and
+ # so weak that their functioning should not be impacted.
+ am__obj=conftest.${OBJEXT-o}
+ am__minus_obj=
+ ;;
+ none) break ;;
+ esac
+ if depmode=$depmode \
+ source=sub/conftest.c object=$am__obj \
+ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \
+ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \
+ >/dev/null 2>conftest.err &&
+ grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 &&
+ grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 &&
+ grep $am__obj sub/conftest.Po > /dev/null 2>&1 &&
+ ${MAKE-make} -s -f confmf > /dev/null 2>&1; then
+ # icc doesn't choke on unknown options, it will just issue warnings
+ # or remarks (even with -Werror). So we grep stderr for any message
+ # that says an option was ignored or not supported.
+ # When given -MP, icc 7.0 and 7.1 complain thusly:
+ # icc: Command line warning: ignoring option '-M'; no argument required
+ # The diagnosis changed in icc 8.0:
+ # icc: Command line remark: option '-MP' not supported
+ if (grep 'ignoring option' conftest.err ||
+ grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else
+ am_cv_CC_dependencies_compiler_type=$depmode
+ break
+ fi
+ fi
+ done
+
+ cd ..
+ rm -rf conftest.dir
+else
+ am_cv_CC_dependencies_compiler_type=none
+fi
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_CC_dependencies_compiler_type" >&5
+printf "%s\n" "$am_cv_CC_dependencies_compiler_type" >&6; }
+CCDEPMODE=depmode=$am_cv_CC_dependencies_compiler_type
+
+ if
+ test "x$enable_dependency_tracking" != xno \
+ && test "$am_cv_CC_dependencies_compiler_type" = gcc3; then
+ am__fastdepCC_TRUE=
+ am__fastdepCC_FALSE='#'
+else
+ am__fastdepCC_TRUE='#'
+ am__fastdepCC_FALSE=
+fi
+
+
+
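+# Descriptive note only: $ac_header_c_list is consumed three items at a time --
+# header name, cache-variable suffix, then the HAVE_* macro to define when the
+# header compiles.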
+ac_header= ac_cache=
+for ac_item in $ac_header_c_list
+do
+ if test $ac_cache; then
+ ac_fn_c_check_header_compile "$LINENO" $ac_header ac_cv_header_$ac_cache "$ac_includes_default"
+ if eval test \"x\$ac_cv_header_$ac_cache\" = xyes; then
+ printf "%s\n" "#define $ac_item 1" >> confdefs.h
+ fi
+ ac_header= ac_cache=
+ elif test $ac_header; then
+ ac_cache=$ac_item
+ else
+ ac_header=$ac_item
+ fi
+done
+
+
+
+
+
+
+
+
+if test $ac_cv_header_stdlib_h = yes && test $ac_cv_header_string_h = yes
+then :
+
+printf "%s\n" "#define STDC_HEADERS 1" >>confdefs.h
+
+fi
+
+
+
+
+
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether it is safe to define __EXTENSIONS__" >&5
+printf %s "checking whether it is safe to define __EXTENSIONS__... " >&6; }
+if test ${ac_cv_safe_to_define___extensions__+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+# define __EXTENSIONS__ 1
+ $ac_includes_default
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_safe_to_define___extensions__=yes
+else $as_nop
+ ac_cv_safe_to_define___extensions__=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_safe_to_define___extensions__" >&5
+printf "%s\n" "$ac_cv_safe_to_define___extensions__" >&6; }
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether _XOPEN_SOURCE should be defined" >&5
+printf %s "checking whether _XOPEN_SOURCE should be defined... " >&6; }
+if test ${ac_cv_should_define__xopen_source+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_cv_should_define__xopen_source=no
+ if test $ac_cv_header_wchar_h = yes
+then :
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+ #include <wchar.h>
+ mbstate_t x;
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+ #define _XOPEN_SOURCE 500
+ #include <wchar.h>
+ mbstate_t x;
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_should_define__xopen_source=yes
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_should_define__xopen_source" >&5
+printf "%s\n" "$ac_cv_should_define__xopen_source" >&6; }
+
+ printf "%s\n" "#define _ALL_SOURCE 1" >>confdefs.h
+
+ printf "%s\n" "#define _DARWIN_C_SOURCE 1" >>confdefs.h
+
+ printf "%s\n" "#define _GNU_SOURCE 1" >>confdefs.h
+
+ printf "%s\n" "#define _HPUX_ALT_XOPEN_SOCKET_API 1" >>confdefs.h
+
+ printf "%s\n" "#define _NETBSD_SOURCE 1" >>confdefs.h
+
+ printf "%s\n" "#define _OPENBSD_SOURCE 1" >>confdefs.h
+
+ printf "%s\n" "#define _POSIX_PTHREAD_SEMANTICS 1" >>confdefs.h
+
+ printf "%s\n" "#define __STDC_WANT_IEC_60559_ATTRIBS_EXT__ 1" >>confdefs.h
+
+ printf "%s\n" "#define __STDC_WANT_IEC_60559_BFP_EXT__ 1" >>confdefs.h
+
+ printf "%s\n" "#define __STDC_WANT_IEC_60559_DFP_EXT__ 1" >>confdefs.h
+
+ printf "%s\n" "#define __STDC_WANT_IEC_60559_FUNCS_EXT__ 1" >>confdefs.h
+
+ printf "%s\n" "#define __STDC_WANT_IEC_60559_TYPES_EXT__ 1" >>confdefs.h
+
+ printf "%s\n" "#define __STDC_WANT_LIB_EXT2__ 1" >>confdefs.h
+
+ printf "%s\n" "#define __STDC_WANT_MATH_SPEC_FUNCS__ 1" >>confdefs.h
+
+ printf "%s\n" "#define _TANDEM_SOURCE 1" >>confdefs.h
+
+ if test $ac_cv_header_minix_config_h = yes
+then :
+ MINIX=yes
+ printf "%s\n" "#define _MINIX 1" >>confdefs.h
+
+ printf "%s\n" "#define _POSIX_SOURCE 1" >>confdefs.h
+
+ printf "%s\n" "#define _POSIX_1_SOURCE 2" >>confdefs.h
+
+else $as_nop
+ MINIX=
+fi
+ if test $ac_cv_safe_to_define___extensions__ = yes
+then :
+ printf "%s\n" "#define __EXTENSIONS__ 1" >>confdefs.h
+
+fi
+ if test $ac_cv_should_define__xopen_source = yes
+then :
+ printf "%s\n" "#define _XOPEN_SOURCE 500" >>confdefs.h
+
+fi
+
+
+
+ # Make sure we can run config.sub.
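+# (config.sub canonicalizes an alias such as "sun4" into a full cpu-vendor-os
+# triplet; only its exit status matters for this sanity check.)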
+$SHELL "${ac_aux_dir}config.sub" sun4 >/dev/null 2>&1 ||
+ as_fn_error $? "cannot run $SHELL ${ac_aux_dir}config.sub" "$LINENO" 5
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking build system type" >&5
+printf %s "checking build system type... " >&6; }
+if test ${ac_cv_build+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_build_alias=$build_alias
+test "x$ac_build_alias" = x &&
+ ac_build_alias=`$SHELL "${ac_aux_dir}config.guess"`
+test "x$ac_build_alias" = x &&
+ as_fn_error $? "cannot guess build type; you must specify one" "$LINENO" 5
+ac_cv_build=`$SHELL "${ac_aux_dir}config.sub" $ac_build_alias` ||
+ as_fn_error $? "$SHELL ${ac_aux_dir}config.sub $ac_build_alias failed" "$LINENO" 5
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_build" >&5
+printf "%s\n" "$ac_cv_build" >&6; }
+case $ac_cv_build in
+*-*-*) ;;
+*) as_fn_error $? "invalid value of canonical build" "$LINENO" 5;;
+esac
+build=$ac_cv_build
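+# Illustrative only: a canonical triplet such as "x86_64-pc-linux-gnu" splits
+# below into build_cpu=x86_64, build_vendor=pc, build_os=linux-gnu.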
+ac_save_IFS=$IFS; IFS='-'
+set x $ac_cv_build
+shift
+build_cpu=$1
+build_vendor=$2
+shift; shift
+# Remember, the first character of IFS is used to create $*,
+# except with old shells:
+build_os=$*
+IFS=$ac_save_IFS
+case $build_os in *\ *) build_os=`echo "$build_os" | sed 's/ /-/g'`;; esac
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking host system type" >&5
+printf %s "checking host system type... " >&6; }
+if test ${ac_cv_host+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test "x$host_alias" = x; then
+ ac_cv_host=$ac_cv_build
+else
+ ac_cv_host=`$SHELL "${ac_aux_dir}config.sub" $host_alias` ||
+ as_fn_error $? "$SHELL ${ac_aux_dir}config.sub $host_alias failed" "$LINENO" 5
+fi
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_host" >&5
+printf "%s\n" "$ac_cv_host" >&6; }
+case $ac_cv_host in
+*-*-*) ;;
+*) as_fn_error $? "invalid value of canonical host" "$LINENO" 5;;
+esac
+host=$ac_cv_host
+ac_save_IFS=$IFS; IFS='-'
+set x $ac_cv_host
+shift
+host_cpu=$1
+host_vendor=$2
+shift; shift
+# Remember, the first character of IFS is used to create $*,
+# except with old shells:
+host_os=$*
+IFS=$ac_save_IFS
+case $host_os in *\ *) host_os=`echo "$host_os" | sed 's/ /-/g'`;; esac
+
+
+
+# Update library versions
+# https://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html
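+# Orientation sketch: libtool's -version-info CURRENT:REVISION:AGE maps to an
+# ELF major of CURRENT-AGE, so 15:0:0 below is consistent with libknot.so.15.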
+
+ libknot_VERSION_INFO="-version-info 15:0:0"
+
+ libknot_SOVERSION="15"
+
+ case $host_os in #(
+ darwin*) :
+ libknot_SONAME="libknot.15.dylib"
+ ;; #(
+ cygwin*) :
+ libknot_SONAME="cygknot-15.dll"
+ ;; #(
+ msys*) :
+ libknot_SONAME="msys-knot-15.dll"
+ ;; #(
+ *) :
+ libknot_SONAME="libknot.so.15"
+
+ ;; #(
+ *) :
+ ;;
+esac
+
+
+ libdnssec_VERSION_INFO="-version-info 9:0:0"
+
+ libdnssec_SOVERSION="9"
+
+ case $host_os in #(
+ darwin*) :
+ libdnssec_SONAME="libdnssec.9.dylib"
+ ;; #(
+ cygwin*) :
+ libdnssec_SONAME="cygdnssec-9.dll"
+ ;; #(
+ msys*) :
+ libdnssec_SONAME="msys-dnssec-9.dll"
+ ;; #(
+ *) :
+ libdnssec_SONAME="libdnssec.so.9"
+
+ ;; #(
+ *) :
+ ;;
+esac
+
+
+ libzscanner_VERSION_INFO="-version-info 4:0:0"
+
+ libzscanner_SOVERSION="4"
+
+ case $host_os in #(
+ darwin*) :
+ libzscanner_SONAME="libzscanner.4.dylib"
+ ;; #(
+ cygwin*) :
+ libzscanner_SONAME="cygzscanner-4.dll"
+ ;; #(
+ msys*) :
+ libzscanner_SONAME="msys-zscanner-4.dll"
+ ;; #(
+ *) :
+ libzscanner_SONAME="libzscanner.so.4"
+
+ ;; #(
+ *) :
+ ;;
+esac
+
+
+KNOT_VERSION_MAJOR=3
+
+KNOT_VERSION_MINOR=4
+
+KNOT_VERSION_PATCH=6
+
+
+ac_config_files="$ac_config_files src/libknot/version.h src/libdnssec/version.h src/libzscanner/version.h"
+
+
+# Automatically update release date based on NEWS
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for a sed that does not truncate output" >&5
+printf %s "checking for a sed that does not truncate output... " >&6; }
+if test ${ac_cv_path_SED+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_script=s/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb/
+ for ac_i in 1 2 3 4 5 6 7; do
+ ac_script="$ac_script$as_nl$ac_script"
+ done
+ echo "$ac_script" 2>/dev/null | sed 99q >conftest.sed
+ { ac_script=; unset ac_script;}
+ if test -z "$SED"; then
+ ac_path_SED_found=false
+ # Loop through the user's path and test for each of PROGNAME-LIST
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_prog in sed gsed
+ do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ ac_path_SED="$as_dir$ac_prog$ac_exec_ext"
+ as_fn_executable_p "$ac_path_SED" || continue
+# Check for GNU ac_path_SED and select it if it is found.
+ # Check for GNU $ac_path_SED
+case `"$ac_path_SED" --version 2>&1` in
+*GNU*)
+ ac_cv_path_SED="$ac_path_SED" ac_path_SED_found=:;;
+*)
+ ac_count=0
+ printf %s 0123456789 >"conftest.in"
+ while :
+ do
+ cat "conftest.in" "conftest.in" >"conftest.tmp"
+ mv "conftest.tmp" "conftest.in"
+ cp "conftest.in" "conftest.nl"
+ printf "%s\n" '' >> "conftest.nl"
+ "$ac_path_SED" -f conftest.sed < "conftest.nl" >"conftest.out" 2>/dev/null || break
+ diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break
+ as_fn_arith $ac_count + 1 && ac_count=$as_val
+ if test $ac_count -gt ${ac_path_SED_max-0}; then
+ # Best one so far, save it but keep looking for a better one
+ ac_cv_path_SED="$ac_path_SED"
+ ac_path_SED_max=$ac_count
+ fi
+ # 10*(2^10) chars as input seems more than enough
+ test $ac_count -gt 10 && break
+ done
+ rm -f conftest.in conftest.tmp conftest.nl conftest.out;;
+esac
+
+ $ac_path_SED_found && break 3
+ done
+ done
+ done
+IFS=$as_save_IFS
+ if test -z "$ac_cv_path_SED"; then
+ as_fn_error $? "no acceptable sed could be found in \$PATH" "$LINENO" 5
+ fi
+else
+ ac_cv_path_SED=$SED
+fi
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_SED" >&5
+printf "%s\n" "$ac_cv_path_SED" >&6; }
+ SED="$ac_cv_path_SED"
+ rm -f conftest.sed
+
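+# The sed expression below extracts the parenthesized date from the first line
+# of NEWS; e.g. a hypothetical heading "Knot DNS 3.4.6 (2025-06-03)" would set
+# release_date=2025-06-03.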
+release_date=$($SED -n 's/^Knot DNS .* (\(.*\))/\1/p;q;' ${srcdir}/NEWS)
+RELEASE_DATE=$release_date
+
+
+# Set compiler compatibility flags
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args.
+set dummy ${ac_tool_prefix}gcc; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="${ac_tool_prefix}gcc"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5
+printf "%s\n" "$CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_CC"; then
+ ac_ct_CC=$CC
+ # Extract the first word of "gcc", so it can be a program name with args.
+set dummy gcc; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="gcc"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5
+printf "%s\n" "$ac_ct_CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_CC" = x; then
+ CC=""
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ CC=$ac_ct_CC
+ fi
+else
+ CC="$ac_cv_prog_CC"
+fi
+
+if test -z "$CC"; then
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args.
+set dummy ${ac_tool_prefix}cc; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="${ac_tool_prefix}cc"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5
+printf "%s\n" "$CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ fi
+fi
+if test -z "$CC"; then
+ # Extract the first word of "cc", so it can be a program name with args.
+set dummy cc; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+ ac_prog_rejected=no
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ if test "$as_dir$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then
+ ac_prog_rejected=yes
+ continue
+ fi
+ ac_cv_prog_CC="cc"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+if test $ac_prog_rejected = yes; then
+ # We found a bogon in the path, so make sure we never use it.
+ set dummy $ac_cv_prog_CC
+ shift
+ if test $# != 0; then
+ # We chose a different compiler from the bogus one.
+ # However, it has the same basename, so the bogon will be chosen
+ # first if we set CC to just the basename; use the full file name.
+ shift
+ ac_cv_prog_CC="$as_dir$ac_word${1+' '}$@"
+ fi
+fi
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5
+printf "%s\n" "$CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$CC"; then
+ if test -n "$ac_tool_prefix"; then
+ for ac_prog in cl.exe
+ do
+ # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="$ac_tool_prefix$ac_prog"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5
+printf "%s\n" "$CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ test -n "$CC" && break
+ done
+fi
+if test -z "$CC"; then
+ ac_ct_CC=$CC
+ for ac_prog in cl.exe
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="$ac_prog"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5
+printf "%s\n" "$ac_ct_CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ test -n "$ac_ct_CC" && break
+done
+
+ if test "x$ac_ct_CC" = x; then
+ CC=""
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ CC=$ac_ct_CC
+ fi
+fi
+
+fi
+if test -z "$CC"; then
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}clang", so it can be a program name with args.
+set dummy ${ac_tool_prefix}clang; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="${ac_tool_prefix}clang"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5
+printf "%s\n" "$CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_CC"; then
+ ac_ct_CC=$CC
+ # Extract the first word of "clang", so it can be a program name with args.
+set dummy clang; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_CC+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="clang"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5
+printf "%s\n" "$ac_ct_CC" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_CC" = x; then
+ CC=""
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ CC=$ac_ct_CC
+ fi
+else
+ CC="$ac_cv_prog_CC"
+fi
+
+fi
+
+
+test -z "$CC" && { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "no acceptable C compiler found in \$PATH
+See \`config.log' for more details" "$LINENO" 5; }
+
+# Provide some information about the compiler.
+printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5
+set X $ac_compile
+ac_compiler=$2
+for ac_option in --version -v -V -qversion -version; do
+ { { ac_try="$ac_compiler $ac_option >&5"
+case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+esac
+eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
+printf "%s\n" "$ac_try_echo"; } >&5
+ (eval "$ac_compiler $ac_option >&5") 2>conftest.err
+ ac_status=$?
+ if test -s conftest.err; then
+ sed '10a\
+... rest of stderr output deleted ...
+ 10q' conftest.err >conftest.er1
+ cat conftest.er1 >&5
+ fi
+ rm -f conftest.er1 conftest.err
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }
+done
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the compiler supports GNU C" >&5
+printf %s "checking whether the compiler supports GNU C... " >&6; }
+if test ${ac_cv_c_compiler_gnu+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+#ifndef __GNUC__
+ choke me
+#endif
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_compiler_gnu=yes
+else $as_nop
+ ac_compiler_gnu=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ac_cv_c_compiler_gnu=$ac_compiler_gnu
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu" >&5
+printf "%s\n" "$ac_cv_c_compiler_gnu" >&6; }
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+if test $ac_compiler_gnu = yes; then
+ GCC=yes
+else
+ GCC=
+fi
+ac_test_CFLAGS=${CFLAGS+y}
+ac_save_CFLAGS=$CFLAGS
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g" >&5
+printf %s "checking whether $CC accepts -g... " >&6; }
+if test ${ac_cv_prog_cc_g+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_save_c_werror_flag=$ac_c_werror_flag
+ ac_c_werror_flag=yes
+ ac_cv_prog_cc_g=no
+ CFLAGS="-g"
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_prog_cc_g=yes
+else $as_nop
+ CFLAGS=""
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+
+else $as_nop
+ ac_c_werror_flag=$ac_save_c_werror_flag
+ CFLAGS="-g"
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_prog_cc_g=yes
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ ac_c_werror_flag=$ac_save_c_werror_flag
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g" >&5
+printf "%s\n" "$ac_cv_prog_cc_g" >&6; }
+if test $ac_test_CFLAGS; then
+ CFLAGS=$ac_save_CFLAGS
+elif test $ac_cv_prog_cc_g = yes; then
+ if test "$GCC" = yes; then
+ CFLAGS="-g -O2"
+ else
+ CFLAGS="-g"
+ fi
+else
+ if test "$GCC" = yes; then
+ CFLAGS="-O2"
+ else
+ CFLAGS=
+ fi
+fi
+ac_prog_cc_stdc=no
+if test x$ac_prog_cc_stdc = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C11 features" >&5
+printf %s "checking for $CC option to enable C11 features... " >&6; }
+if test ${ac_cv_prog_cc_c11+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_cv_prog_cc_c11=no
+ac_save_CC=$CC
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+$ac_c_conftest_c11_program
+_ACEOF
+for ac_arg in '' -std=gnu11
+do
+ CC="$ac_save_CC $ac_arg"
+ if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_prog_cc_c11=$ac_arg
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam
+ test "x$ac_cv_prog_cc_c11" != "xno" && break
+done
+rm -f conftest.$ac_ext
+CC=$ac_save_CC
+fi
+
+if test "x$ac_cv_prog_cc_c11" = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5
+printf "%s\n" "unsupported" >&6; }
+else $as_nop
+ if test "x$ac_cv_prog_cc_c11" = x
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5
+printf "%s\n" "none needed" >&6; }
+else $as_nop
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c11" >&5
+printf "%s\n" "$ac_cv_prog_cc_c11" >&6; }
+ CC="$CC $ac_cv_prog_cc_c11"
+fi
+ ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c11
+ ac_prog_cc_stdc=c11
+fi
+fi
+if test x$ac_prog_cc_stdc = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C99 features" >&5
+printf %s "checking for $CC option to enable C99 features... " >&6; }
+if test ${ac_cv_prog_cc_c99+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_cv_prog_cc_c99=no
+ac_save_CC=$CC
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+$ac_c_conftest_c99_program
+_ACEOF
+for ac_arg in '' -std=gnu99 -std=c99 -c99 -qlanglvl=extc1x -qlanglvl=extc99 -AC99 -D_STDC_C99=
+do
+ CC="$ac_save_CC $ac_arg"
+ if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_prog_cc_c99=$ac_arg
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam
+ test "x$ac_cv_prog_cc_c99" != "xno" && break
+done
+rm -f conftest.$ac_ext
+CC=$ac_save_CC
+fi
+
+if test "x$ac_cv_prog_cc_c99" = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5
+printf "%s\n" "unsupported" >&6; }
+else $as_nop
+ if test "x$ac_cv_prog_cc_c99" = x
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5
+printf "%s\n" "none needed" >&6; }
+else $as_nop
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c99" >&5
+printf "%s\n" "$ac_cv_prog_cc_c99" >&6; }
+ CC="$CC $ac_cv_prog_cc_c99"
+fi
+ ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c99
+ ac_prog_cc_stdc=c99
+fi
+fi
+if test x$ac_prog_cc_stdc = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C89 features" >&5
+printf %s "checking for $CC option to enable C89 features... " >&6; }
+if test ${ac_cv_prog_cc_c89+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_cv_prog_cc_c89=no
+ac_save_CC=$CC
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+$ac_c_conftest_c89_program
+_ACEOF
+for ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__"
+do
+ CC="$ac_save_CC $ac_arg"
+ if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_prog_cc_c89=$ac_arg
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam
+ test "x$ac_cv_prog_cc_c89" != "xno" && break
+done
+rm -f conftest.$ac_ext
+CC=$ac_save_CC
+fi
+
+if test "x$ac_cv_prog_cc_c89" = xno
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5
+printf "%s\n" "unsupported" >&6; }
+else $as_nop
+ if test "x$ac_cv_prog_cc_c89" = x
+then :
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5
+printf "%s\n" "none needed" >&6; }
+else $as_nop
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89" >&5
+printf "%s\n" "$ac_cv_prog_cc_c89" >&6; }
+ CC="$CC $ac_cv_prog_cc_c89"
+fi
+ ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c89
+ ac_prog_cc_stdc=c89
+fi
+fi
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+
+ ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CC understands -c and -o together" >&5
+printf %s "checking whether $CC understands -c and -o together... " >&6; }
+if test ${am_cv_prog_cc_c_o+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+ # Make sure it works both with $CC and with simple cc.
+ # Following AC_PROG_CC_C_O, we do the test twice because some
+ # compilers refuse to overwrite an existing .o file with -o,
+ # though they will create one.
+ am_cv_prog_cc_c_o=yes
+ for am_i in 1 2; do
+ if { echo "$as_me:$LINENO: $CC -c conftest.$ac_ext -o conftest2.$ac_objext" >&5
+ ($CC -c conftest.$ac_ext -o conftest2.$ac_objext) >&5 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } \
+ && test -f conftest2.$ac_objext; then
+ : OK
+ else
+ am_cv_prog_cc_c_o=no
+ break
+ fi
+ done
+ rm -f core conftest*
+ unset am_i
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_prog_cc_c_o" >&5
+printf "%s\n" "$am_cv_prog_cc_c_o" >&6; }
+if test "$am_cv_prog_cc_c_o" != yes; then
+ # Losing compiler, so override with the script.
+ # FIXME: It is wrong to rewrite CC.
+ # But if we don't then we get into trouble of one sort or another.
+ # A longer-term fix would be to have automake use am__CC in this case,
+ # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)"
+ CC="$am_aux_dir/compile $CC"
+fi
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+
+depcc="$CC" am_compiler_list=
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5
+printf %s "checking dependency style of $depcc... " >&6; }
+if test ${am_cv_CC_dependencies_compiler_type+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then
+ # We make a subdir and do the tests there. Otherwise we can end up
+ # making bogus files that we don't know about and never remove. For
+ # instance it was reported that on HP-UX the gcc test will end up
+ # making a dummy file named 'D' -- because '-MD' means "put the output
+ # in D".
+ rm -rf conftest.dir
+ mkdir conftest.dir
+ # Copy depcomp to subdir because otherwise we won't find it if we're
+ # using a relative directory.
+ cp "$am_depcomp" conftest.dir
+ cd conftest.dir
+ # We will build objects and dependencies in a subdirectory because
+ # it helps to detect inapplicable dependency modes. For instance
+ # both Tru64's cc and ICC support -MD to output dependencies as a
+ # side effect of compilation, but ICC will put the dependencies in
+ # the current directory while Tru64 will put them in the object
+ # directory.
+ mkdir sub
+
+ am_cv_CC_dependencies_compiler_type=none
+ if test "$am_compiler_list" = ""; then
+ am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp`
+ fi
+ am__universal=false
+ case " $depcc " in #(
+ *\ -arch\ *\ -arch\ *) am__universal=true ;;
+ esac
+
+ for depmode in $am_compiler_list; do
+ # Setup a source with many dependencies, because some compilers
+ # like to wrap large dependency lists on column 80 (with \), and
+ # we should not choose a depcomp mode which is confused by this.
+ #
+ # We need to recreate these files for each test, as the compiler may
+ # overwrite some of them when testing with obscure command lines.
+ # This happens at least with the AIX C compiler.
+ : > sub/conftest.c
+ for i in 1 2 3 4 5 6; do
+ echo '#include "conftst'$i'.h"' >> sub/conftest.c
+ # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with
+ # Solaris 10 /bin/sh.
+ echo '/* dummy */' > sub/conftst$i.h
+ done
+ echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf
+
+ # We check with '-c' and '-o' for the sake of the "dashmstdout"
+ # mode. It turns out that the SunPro C++ compiler does not properly
+ # handle '-M -o', and we need to detect this. Also, some Intel
+ # versions had trouble with output in subdirs.
+ am__obj=sub/conftest.${OBJEXT-o}
+ am__minus_obj="-o $am__obj"
+ case $depmode in
+ gcc)
+ # This depmode causes a compiler race in universal mode.
+ test "$am__universal" = false || continue
+ ;;
+ nosideeffect)
+ # After this tag, mechanisms are not by side-effect, so they'll
+ # only be used when explicitly requested.
+ if test "x$enable_dependency_tracking" = xyes; then
+ continue
+ else
+ break
+ fi
+ ;;
+ msvc7 | msvc7msys | msvisualcpp | msvcmsys)
+ # This compiler won't grok '-c -o', but also, the minuso test has
+ # not run yet. These depmodes are late enough in the game, and
+ # so weak that their functioning should not be impacted.
+ am__obj=conftest.${OBJEXT-o}
+ am__minus_obj=
+ ;;
+ none) break ;;
+ esac
+ if depmode=$depmode \
+ source=sub/conftest.c object=$am__obj \
+ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \
+ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \
+ >/dev/null 2>conftest.err &&
+ grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 &&
+ grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 &&
+ grep $am__obj sub/conftest.Po > /dev/null 2>&1 &&
+ ${MAKE-make} -s -f confmf > /dev/null 2>&1; then
+ # icc doesn't choke on unknown options, it will just issue warnings
+ # or remarks (even with -Werror). So we grep stderr for any message
+ # that says an option was ignored or not supported.
+ # When given -MP, icc 7.0 and 7.1 complain thusly:
+ # icc: Command line warning: ignoring option '-M'; no argument required
+ # The diagnosis changed in icc 8.0:
+ # icc: Command line remark: option '-MP' not supported
+ if (grep 'ignoring option' conftest.err ||
+ grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else
+ am_cv_CC_dependencies_compiler_type=$depmode
+ break
+ fi
+ fi
+ done
+
+ cd ..
+ rm -rf conftest.dir
+else
+ am_cv_CC_dependencies_compiler_type=none
+fi
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_CC_dependencies_compiler_type" >&5
+printf "%s\n" "$am_cv_CC_dependencies_compiler_type" >&6; }
+CCDEPMODE=depmode=$am_cv_CC_dependencies_compiler_type
+
+ if
+ test "x$enable_dependency_tracking" != xno \
+ && test "$am_cv_CC_dependencies_compiler_type" = gcc3; then
+ am__fastdepCC_TRUE=
+ am__fastdepCC_FALSE='#'
+else
+ am__fastdepCC_TRUE='#'
+ am__fastdepCC_FALSE=
+fi
+
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to run the C preprocessor" >&5
+printf %s "checking how to run the C preprocessor... " >&6; }
+# On Suns, sometimes $CPP names a directory.
+if test -n "$CPP" && test -d "$CPP"; then
+ CPP=
+fi
+if test -z "$CPP"; then
+ if test ${ac_cv_prog_CPP+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ # Double quotes because $CC needs to be expanded
+ for CPP in "$CC -E" "$CC -E -traditional-cpp" cpp /lib/cpp
+ do
+ ac_preproc_ok=false
+for ac_c_preproc_warn_flag in '' yes
+do
+ # Use a header file that comes with gcc, so configuring glibc
+ # with a fresh cross-compiler works.
+ # On the NeXT, cc -E runs the code through the compiler's parser,
+ # not just through cpp. "Syntax error" is here to catch this case.
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <limits.h>
+ Syntax error
+_ACEOF
+if ac_fn_c_try_cpp "$LINENO"
+then :
+
+else $as_nop
+ # Broken: fails on valid input.
+continue
+fi
+rm -f conftest.err conftest.i conftest.$ac_ext
+
+ # OK, works on sane cases. Now check whether nonexistent headers
+ # can be detected and how.
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <ac_nonexistent.h>
+_ACEOF
+if ac_fn_c_try_cpp "$LINENO"
+then :
+ # Broken: success on invalid input.
+continue
+else $as_nop
+ # Passes both tests.
+ac_preproc_ok=:
+break
+fi
+rm -f conftest.err conftest.i conftest.$ac_ext
+
+done
+# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.
+rm -f conftest.i conftest.err conftest.$ac_ext
+if $ac_preproc_ok
+then :
+ break
+fi
+
+ done
+ ac_cv_prog_CPP=$CPP
+
+fi
+ CPP=$ac_cv_prog_CPP
+else
+ ac_cv_prog_CPP=$CPP
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CPP" >&5
+printf "%s\n" "$CPP" >&6; }
+ac_preproc_ok=false
+for ac_c_preproc_warn_flag in '' yes
+do
+ # Use a header file that comes with gcc, so configuring glibc
+ # with a fresh cross-compiler works.
+ # On the NeXT, cc -E runs the code through the compiler's parser,
+ # not just through cpp. "Syntax error" is here to catch this case.
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <limits.h>
+ Syntax error
+_ACEOF
+if ac_fn_c_try_cpp "$LINENO"
+then :
+
+else $as_nop
+ # Broken: fails on valid input.
+continue
+fi
+rm -f conftest.err conftest.i conftest.$ac_ext
+
+ # OK, works on sane cases. Now check whether nonexistent headers
+ # can be detected and how.
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <ac_nonexistent.h>
+_ACEOF
+if ac_fn_c_try_cpp "$LINENO"
+then :
+ # Broken: success on invalid input.
+continue
+else $as_nop
+ # Passes both tests.
+ac_preproc_ok=:
+break
+fi
+rm -f conftest.err conftest.i conftest.$ac_ext
+
+done
+# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.
+rm -f conftest.i conftest.err conftest.$ac_ext
+if $ac_preproc_ok
+then :
+
+else $as_nop
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "C preprocessor \"$CPP\" fails sanity check
+See \`config.log' for more details" "$LINENO" 5; }
+fi
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+ac_c_preproc_warn_flag=yes
+
+# Set default CFLAGS
+CFLAGS="$CFLAGS -Wall -Wshadow -Werror=format-security -Werror=implicit -Werror=attributes -Wstrict-prototypes"
+
+as_CACHEVAR=`printf "%s\n" "ax_cv_check_cflags_"-Werror"_"-fpredictive-commoning"" | $as_tr_sh`
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts \"-fpredictive-commoning\"" >&5
+printf %s "checking whether C compiler accepts \"-fpredictive-commoning\"... " >&6; }
+if eval test \${$as_CACHEVAR+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+
+ ax_check_save_flags=$CFLAGS
+ CFLAGS="$CFLAGS "-Werror" "-fpredictive-commoning""
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ eval "$as_CACHEVAR=yes"
+else $as_nop
+ eval "$as_CACHEVAR=no"
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ CFLAGS=$ax_check_save_flags
+fi
+eval ac_res=\$$as_CACHEVAR
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5
+printf "%s\n" "$ac_res" >&6; }
+if test x"`eval 'as_val=${'$as_CACHEVAR'};printf "%s\n" "$as_val"'`" = xyes
+then :
+ CFLAGS="$CFLAGS -fpredictive-commoning"
+else $as_nop
+ :
+fi
+
+as_CACHEVAR=`printf "%s\n" "ax_cv_check_ldflags_""_"-Wl,--exclude-libs,ALL"" | $as_tr_sh`
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the linker accepts \"-Wl,--exclude-libs,ALL\"" >&5
+printf %s "checking whether the linker accepts \"-Wl,--exclude-libs,ALL\"... " >&6; }
+if eval test \${$as_CACHEVAR+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+
+ ax_check_save_flags=$LDFLAGS
+ LDFLAGS="$LDFLAGS "" "-Wl,--exclude-libs,ALL""
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ eval "$as_CACHEVAR=yes"
+else $as_nop
+ eval "$as_CACHEVAR=no"
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+ LDFLAGS=$ax_check_save_flags
+fi
+eval ac_res=\$$as_CACHEVAR
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5
+printf "%s\n" "$ac_res" >&6; }
+if test x"`eval 'as_val=${'$as_CACHEVAR'};printf "%s\n" "$as_val"'`" = xyes
+then :
+ ldflag_exclude_libs="-Wl,--exclude-libs,ALL"
+else $as_nop
+ ldflag_exclude_libs=""
+fi
+
+LDFLAG_EXCLUDE_LIBS=$ldflag_exclude_libs
+
+
+# Get processor byte ordering
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether byte ordering is bigendian" >&5
+printf %s "checking whether byte ordering is bigendian... " >&6; }
+if test ${ac_cv_c_bigendian+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_cv_c_bigendian=unknown
+ # See if we're dealing with a universal compiler.
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#ifndef __APPLE_CC__
+ not a universal capable compiler
+ #endif
+ typedef int dummy;
+
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+
+ # Check for potential -arch flags. It is not universal unless
+ # there are at least two -arch flags with different values.
+ ac_arch=
+ ac_prev=
+ for ac_word in $CC $CFLAGS $CPPFLAGS $LDFLAGS; do
+ if test -n "$ac_prev"; then
+ case $ac_word in
+ i?86 | x86_64 | ppc | ppc64)
+ if test -z "$ac_arch" || test "$ac_arch" = "$ac_word"; then
+ ac_arch=$ac_word
+ else
+ ac_cv_c_bigendian=universal
+ break
+ fi
+ ;;
+ esac
+ ac_prev=
+ elif test "x$ac_word" = "x-arch"; then
+ ac_prev=arch
+ fi
+ done
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ if test $ac_cv_c_bigendian = unknown; then
+ # See if sys/param.h defines the BYTE_ORDER macro.
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <sys/types.h>
+             #include <sys/param.h>
+
+int
+main (void)
+{
+#if ! (defined BYTE_ORDER && defined BIG_ENDIAN \
+ && defined LITTLE_ENDIAN && BYTE_ORDER && BIG_ENDIAN \
+ && LITTLE_ENDIAN)
+ bogus endian macros
+ #endif
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ # It does; now see whether it defined to BIG_ENDIAN or not.
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <sys/types.h>
+             #include <sys/param.h>
+
+int
+main (void)
+{
+#if BYTE_ORDER != BIG_ENDIAN
+ not big endian
+ #endif
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_c_bigendian=yes
+else $as_nop
+ ac_cv_c_bigendian=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ fi
+ if test $ac_cv_c_bigendian = unknown; then
+    # See if <limits.h> defines _LITTLE_ENDIAN or _BIG_ENDIAN (e.g., Solaris).
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <limits.h>
+
+int
+main (void)
+{
+#if ! (defined _LITTLE_ENDIAN || defined _BIG_ENDIAN)
+ bogus endian macros
+ #endif
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ # It does; now see whether it defined to _BIG_ENDIAN or not.
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <limits.h>
+
+int
+main (void)
+{
+#ifndef _BIG_ENDIAN
+ not big endian
+ #endif
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ ac_cv_c_bigendian=yes
+else $as_nop
+ ac_cv_c_bigendian=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ fi
+ if test $ac_cv_c_bigendian = unknown; then
+ # Compile a test program.
+ if test "$cross_compiling" = yes
+then :
+ # Try to guess by grepping values from an object file.
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+unsigned short int ascii_mm[] =
+ { 0x4249, 0x4765, 0x6E44, 0x6961, 0x6E53, 0x7953, 0 };
+ unsigned short int ascii_ii[] =
+ { 0x694C, 0x5454, 0x656C, 0x6E45, 0x6944, 0x6E61, 0 };
+ int use_ascii (int i) {
+ return ascii_mm[i] + ascii_ii[i];
+ }
+ unsigned short int ebcdic_ii[] =
+ { 0x89D3, 0xE3E3, 0x8593, 0x95C5, 0x89C4, 0x9581, 0 };
+ unsigned short int ebcdic_mm[] =
+ { 0xC2C9, 0xC785, 0x95C4, 0x8981, 0x95E2, 0xA8E2, 0 };
+ int use_ebcdic (int i) {
+ return ebcdic_mm[i] + ebcdic_ii[i];
+ }
+ extern int foo;
+
+int
+main (void)
+{
+return use_ascii (foo) == use_ebcdic (foo);
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ if grep BIGenDianSyS conftest.$ac_objext >/dev/null; then
+ ac_cv_c_bigendian=yes
+ fi
+ if grep LiTTleEnDian conftest.$ac_objext >/dev/null ; then
+ if test "$ac_cv_c_bigendian" = unknown; then
+ ac_cv_c_bigendian=no
+ else
+ # finding both strings is unlikely to happen, but who knows?
+ ac_cv_c_bigendian=unknown
+ fi
+ fi
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+$ac_includes_default
+int
+main (void)
+{
+
+ /* Are we little or big endian? From Harbison&Steele. */
+ union
+ {
+ long int l;
+ char c[sizeof (long int)];
+ } u;
+ u.l = 1;
+ return u.c[sizeof (long int) - 1] == 1;
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"
+then :
+ ac_cv_c_bigendian=no
+else $as_nop
+ ac_cv_c_bigendian=yes
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+ conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+ fi
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_bigendian" >&5
+printf "%s\n" "$ac_cv_c_bigendian" >&6; }
+ case $ac_cv_c_bigendian in #(
+ yes)
+ endianity=big-endian;; #(
+ no)
+ endianity=little-endian ;; #(
+ universal)
+
+printf "%s\n" "#define AC_APPLE_UNIVERSAL_BUILD 1" >>confdefs.h
+
+ ;; #(
+ *)
+ as_fn_error $? "unknown endianness
+ presetting ac_cv_c_bigendian=no (or yes) will help" "$LINENO" 5 ;;
+ esac
+
+if test "$endianity" = "little-endian"
+then :
+
+
+printf "%s\n" "#define ENDIANITY_LITTLE 1" >>confdefs.h
+
+fi
+
+# Check if an archiver is available
+
+ if test -n "$ac_tool_prefix"; then
+ for ac_prog in ar lib "link -lib"
+ do
+ # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_AR+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$AR"; then
+ ac_cv_prog_AR="$AR" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+AR=$ac_cv_prog_AR
+if test -n "$AR"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $AR" >&5
+printf "%s\n" "$AR" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ test -n "$AR" && break
+ done
+fi
+if test -z "$AR"; then
+ ac_ct_AR=$AR
+ for ac_prog in ar lib "link -lib"
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_AR+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_AR"; then
+ ac_cv_prog_ac_ct_AR="$ac_ct_AR" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_AR="$ac_prog"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_AR=$ac_cv_prog_ac_ct_AR
+if test -n "$ac_ct_AR"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_AR" >&5
+printf "%s\n" "$ac_ct_AR" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ test -n "$ac_ct_AR" && break
+done
+
+ if test "x$ac_ct_AR" = x; then
+ AR="false"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ AR=$ac_ct_AR
+ fi
+fi
+
+: ${AR=ar}
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking the archiver ($AR) interface" >&5
+printf %s "checking the archiver ($AR) interface... " >&6; }
+if test ${am_cv_ar_interface+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+ am_cv_ar_interface=ar
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+int some_variable = 0;
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ am_ar_try='$AR cru libconftest.a conftest.$ac_objext >&5'
+ { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$am_ar_try\""; } >&5
+ (eval $am_ar_try) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }
+ if test "$ac_status" -eq 0; then
+ am_cv_ar_interface=ar
+ else
+ am_ar_try='$AR -NOLOGO -OUT:conftest.lib conftest.$ac_objext >&5'
+ { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$am_ar_try\""; } >&5
+ (eval $am_ar_try) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }
+ if test "$ac_status" -eq 0; then
+ am_cv_ar_interface=lib
+ else
+ am_cv_ar_interface=unknown
+ fi
+ fi
+ rm -f conftest.lib libconftest.a
+
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_ar_interface" >&5
+printf "%s\n" "$am_cv_ar_interface" >&6; }
+
+case $am_cv_ar_interface in
+ar)
+ ;;
+lib)
+ # Microsoft lib, so override with the ar-lib wrapper script.
+ # FIXME: It is wrong to rewrite AR.
+ # But if we don't then we get into trouble of one sort or another.
+ # A longer-term fix would be to have automake use am__AR in this case,
+ # and then we could set am__AR="$am_aux_dir/ar-lib \$(AR)" or something
+ # similar.
+ AR="$am_aux_dir/ar-lib $AR"
+ ;;
+unknown)
+ as_fn_error $? "could not determine $AR interface" "$LINENO" 5
+ ;;
+esac
+
+
+
+
+# Initialize libtool
+case `pwd` in
+ *\ * | *\ *)
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&5
+printf "%s\n" "$as_me: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&2;} ;;
+esac
+
+
+
+macro_version='2.4.7'
+macro_revision='2.4.7'
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ltmain=$ac_aux_dir/ltmain.sh
+
+# Backslashify metacharacters that are still active within
+# double-quoted strings.
+sed_quote_subst='s/\(["`$\\]\)/\\\1/g'
+
+# Same as above, but do not quote variable references.
+double_quote_subst='s/\(["`\\]\)/\\\1/g'
+
+# Sed substitution to delay expansion of an escaped shell variable in a
+# double_quote_subst'ed string.
+delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g'
+
+# Sed substitution to delay expansion of an escaped single quote.
+delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g'
+
+# Sed substitution to avoid accidental globbing in evaled expressions
+no_glob_subst='s/\*/\\\*/g'
+
+ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'
+ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO
+ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+printf %s "checking how to print strings... " >&6; }
+# Test print first, because it will be a builtin if present.
+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+ test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+ ECHO='print -r --'
+elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+ ECHO='printf %s\n'
+else
+ # Use this function as a fallback that always works.
+ func_fallback_echo ()
+ {
+ eval 'cat <<_LTECHO_EOF
+$1
+_LTECHO_EOF'
+ }
+ ECHO='func_fallback_echo'
+fi
+
+# func_echo_all arg...
+# Invoke $ECHO with all args, space-separated.
+func_echo_all ()
+{
+ $ECHO ""
+}
+
+case $ECHO in
+ printf*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: printf" >&5
+printf "%s\n" "printf" >&6; } ;;
+ print*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: print -r" >&5
+printf "%s\n" "print -r" >&6; } ;;
+ *) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: cat" >&5
+printf "%s\n" "cat" >&6; } ;;
+esac
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for a sed that does not truncate output" >&5
+printf %s "checking for a sed that does not truncate output... " >&6; }
+if test ${ac_cv_path_SED+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_script=s/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb/
+ for ac_i in 1 2 3 4 5 6 7; do
+ ac_script="$ac_script$as_nl$ac_script"
+ done
+ echo "$ac_script" 2>/dev/null | sed 99q >conftest.sed
+ { ac_script=; unset ac_script;}
+ if test -z "$SED"; then
+ ac_path_SED_found=false
+ # Loop through the user's path and test for each of PROGNAME-LIST
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_prog in sed gsed
+ do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ ac_path_SED="$as_dir$ac_prog$ac_exec_ext"
+ as_fn_executable_p "$ac_path_SED" || continue
+# Check for GNU ac_path_SED and select it if it is found.
+ # Check for GNU $ac_path_SED
+case `"$ac_path_SED" --version 2>&1` in
+*GNU*)
+ ac_cv_path_SED="$ac_path_SED" ac_path_SED_found=:;;
+*)
+ ac_count=0
+ printf %s 0123456789 >"conftest.in"
+ while :
+ do
+ cat "conftest.in" "conftest.in" >"conftest.tmp"
+ mv "conftest.tmp" "conftest.in"
+ cp "conftest.in" "conftest.nl"
+ printf "%s\n" '' >> "conftest.nl"
+ "$ac_path_SED" -f conftest.sed < "conftest.nl" >"conftest.out" 2>/dev/null || break
+ diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break
+ as_fn_arith $ac_count + 1 && ac_count=$as_val
+ if test $ac_count -gt ${ac_path_SED_max-0}; then
+ # Best one so far, save it but keep looking for a better one
+ ac_cv_path_SED="$ac_path_SED"
+ ac_path_SED_max=$ac_count
+ fi
+ # 10*(2^10) chars as input seems more than enough
+ test $ac_count -gt 10 && break
+ done
+ rm -f conftest.in conftest.tmp conftest.nl conftest.out;;
+esac
+
+ $ac_path_SED_found && break 3
+ done
+ done
+ done
+IFS=$as_save_IFS
+ if test -z "$ac_cv_path_SED"; then
+ as_fn_error $? "no acceptable sed could be found in \$PATH" "$LINENO" 5
+ fi
+else
+ ac_cv_path_SED=$SED
+fi
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_SED" >&5
+printf "%s\n" "$ac_cv_path_SED" >&6; }
+ SED="$ac_cv_path_SED"
+ rm -f conftest.sed
+
+test -z "$SED" && SED=sed
+Xsed="$SED -e 1s/^X//"
+
+
+
+
+
+
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for grep that handles long lines and -e" >&5
+printf %s "checking for grep that handles long lines and -e... " >&6; }
+if test ${ac_cv_path_GREP+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -z "$GREP"; then
+ ac_path_GREP_found=false
+ # Loop through the user's path and test for each of PROGNAME-LIST
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_prog in grep ggrep
+ do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ ac_path_GREP="$as_dir$ac_prog$ac_exec_ext"
+ as_fn_executable_p "$ac_path_GREP" || continue
+# Check for GNU ac_path_GREP and select it if it is found.
+ # Check for GNU $ac_path_GREP
+case `"$ac_path_GREP" --version 2>&1` in
+*GNU*)
+ ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_found=:;;
+*)
+ ac_count=0
+ printf %s 0123456789 >"conftest.in"
+ while :
+ do
+ cat "conftest.in" "conftest.in" >"conftest.tmp"
+ mv "conftest.tmp" "conftest.in"
+ cp "conftest.in" "conftest.nl"
+ printf "%s\n" 'GREP' >> "conftest.nl"
+ "$ac_path_GREP" -e 'GREP$' -e '-(cannot match)-' < "conftest.nl" >"conftest.out" 2>/dev/null || break
+ diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break
+ as_fn_arith $ac_count + 1 && ac_count=$as_val
+ if test $ac_count -gt ${ac_path_GREP_max-0}; then
+ # Best one so far, save it but keep looking for a better one
+ ac_cv_path_GREP="$ac_path_GREP"
+ ac_path_GREP_max=$ac_count
+ fi
+ # 10*(2^10) chars as input seems more than enough
+ test $ac_count -gt 10 && break
+ done
+ rm -f conftest.in conftest.tmp conftest.nl conftest.out;;
+esac
+
+ $ac_path_GREP_found && break 3
+ done
+ done
+ done
+IFS=$as_save_IFS
+ if test -z "$ac_cv_path_GREP"; then
+ as_fn_error $? "no acceptable grep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5
+ fi
+else
+ ac_cv_path_GREP=$GREP
+fi
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_GREP" >&5
+printf "%s\n" "$ac_cv_path_GREP" >&6; }
+ GREP="$ac_cv_path_GREP"
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for egrep" >&5
+printf %s "checking for egrep... " >&6; }
+if test ${ac_cv_path_EGREP+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if echo a | $GREP -E '(a|b)' >/dev/null 2>&1
+ then ac_cv_path_EGREP="$GREP -E"
+ else
+ if test -z "$EGREP"; then
+ ac_path_EGREP_found=false
+ # Loop through the user's path and test for each of PROGNAME-LIST
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_prog in egrep
+ do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ ac_path_EGREP="$as_dir$ac_prog$ac_exec_ext"
+ as_fn_executable_p "$ac_path_EGREP" || continue
+# Check for GNU ac_path_EGREP and select it if it is found.
+ # Check for GNU $ac_path_EGREP
+case `"$ac_path_EGREP" --version 2>&1` in
+*GNU*)
+ ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_found=:;;
+*)
+ ac_count=0
+ printf %s 0123456789 >"conftest.in"
+ while :
+ do
+ cat "conftest.in" "conftest.in" >"conftest.tmp"
+ mv "conftest.tmp" "conftest.in"
+ cp "conftest.in" "conftest.nl"
+ printf "%s\n" 'EGREP' >> "conftest.nl"
+ "$ac_path_EGREP" 'EGREP$' < "conftest.nl" >"conftest.out" 2>/dev/null || break
+ diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break
+ as_fn_arith $ac_count + 1 && ac_count=$as_val
+ if test $ac_count -gt ${ac_path_EGREP_max-0}; then
+ # Best one so far, save it but keep looking for a better one
+ ac_cv_path_EGREP="$ac_path_EGREP"
+ ac_path_EGREP_max=$ac_count
+ fi
+ # 10*(2^10) chars as input seems more than enough
+ test $ac_count -gt 10 && break
+ done
+ rm -f conftest.in conftest.tmp conftest.nl conftest.out;;
+esac
+
+ $ac_path_EGREP_found && break 3
+ done
+ done
+ done
+IFS=$as_save_IFS
+ if test -z "$ac_cv_path_EGREP"; then
+ as_fn_error $? "no acceptable egrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5
+ fi
+else
+ ac_cv_path_EGREP=$EGREP
+fi
+
+ fi
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_EGREP" >&5
+printf "%s\n" "$ac_cv_path_EGREP" >&6; }
+ EGREP="$ac_cv_path_EGREP"
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for fgrep" >&5
+printf %s "checking for fgrep... " >&6; }
+if test ${ac_cv_path_FGREP+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if echo 'ab*c' | $GREP -F 'ab*c' >/dev/null 2>&1
+ then ac_cv_path_FGREP="$GREP -F"
+ else
+ if test -z "$FGREP"; then
+ ac_path_FGREP_found=false
+ # Loop through the user's path and test for each of PROGNAME-LIST
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_prog in fgrep
+ do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ ac_path_FGREP="$as_dir$ac_prog$ac_exec_ext"
+ as_fn_executable_p "$ac_path_FGREP" || continue
+# Check for GNU ac_path_FGREP and select it if it is found.
+ # Check for GNU $ac_path_FGREP
+case `"$ac_path_FGREP" --version 2>&1` in
+*GNU*)
+ ac_cv_path_FGREP="$ac_path_FGREP" ac_path_FGREP_found=:;;
+*)
+ ac_count=0
+ printf %s 0123456789 >"conftest.in"
+ while :
+ do
+ cat "conftest.in" "conftest.in" >"conftest.tmp"
+ mv "conftest.tmp" "conftest.in"
+ cp "conftest.in" "conftest.nl"
+ printf "%s\n" 'FGREP' >> "conftest.nl"
+ "$ac_path_FGREP" FGREP < "conftest.nl" >"conftest.out" 2>/dev/null || break
+ diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break
+ as_fn_arith $ac_count + 1 && ac_count=$as_val
+ if test $ac_count -gt ${ac_path_FGREP_max-0}; then
+ # Best one so far, save it but keep looking for a better one
+ ac_cv_path_FGREP="$ac_path_FGREP"
+ ac_path_FGREP_max=$ac_count
+ fi
+ # 10*(2^10) chars as input seems more than enough
+ test $ac_count -gt 10 && break
+ done
+ rm -f conftest.in conftest.tmp conftest.nl conftest.out;;
+esac
+
+ $ac_path_FGREP_found && break 3
+ done
+ done
+ done
+IFS=$as_save_IFS
+ if test -z "$ac_cv_path_FGREP"; then
+ as_fn_error $? "no acceptable fgrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5
+ fi
+else
+ ac_cv_path_FGREP=$FGREP
+fi
+
+ fi
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_FGREP" >&5
+printf "%s\n" "$ac_cv_path_FGREP" >&6; }
+ FGREP="$ac_cv_path_FGREP"
+
+
+test -z "$GREP" && GREP=grep
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+# Check whether --with-gnu-ld was given.
+if test ${with_gnu_ld+y}
+then :
+ withval=$with_gnu_ld; test no = "$withval" || with_gnu_ld=yes
+else $as_nop
+ with_gnu_ld=no
+fi
+
+ac_prog=ld
+if test yes = "$GCC"; then
+ # Check if gcc -print-prog-name=ld gives a path.
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for ld used by $CC" >&5
+printf %s "checking for ld used by $CC... " >&6; }
+ case $host in
+ *-*-mingw*)
+ # gcc leaves a trailing carriage return, which upsets mingw
+ ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;;
+ *)
+ ac_prog=`($CC -print-prog-name=ld) 2>&5` ;;
+ esac
+ case $ac_prog in
+ # Accept absolute paths.
+ [\\/]* | ?:[\\/]*)
+ re_direlt='/[^/][^/]*/\.\./'
+ # Canonicalize the pathname of ld
+ ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'`
+ while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do
+ ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"`
+ done
+ test -z "$LD" && LD=$ac_prog
+ ;;
+ "")
+ # If it fails, then pretend we aren't using GCC.
+ ac_prog=ld
+ ;;
+ *)
+ # If it is relative, then search for the first ld in PATH.
+ with_gnu_ld=unknown
+ ;;
+ esac
+elif test yes = "$with_gnu_ld"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for GNU ld" >&5
+printf %s "checking for GNU ld... " >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for non-GNU ld" >&5
+printf %s "checking for non-GNU ld... " >&6; }
+fi
+if test ${lt_cv_path_LD+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -z "$LD"; then
+ lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR
+ for ac_dir in $PATH; do
+ IFS=$lt_save_ifs
+ test -z "$ac_dir" && ac_dir=.
+ if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then
+ lt_cv_path_LD=$ac_dir/$ac_prog
+ # Check to see if the program is GNU ld. I'd rather use --version,
+ # but apparently some variants of GNU ld only accept -v.
+ # Break only if it was the GNU/non-GNU ld that we prefer.
+ case `"$lt_cv_path_LD" -v 2>&1 &5
+printf "%s\n" "$LD" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+test -z "$LD" && as_fn_error $? "no acceptable ld found in \$PATH" "$LINENO" 5
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if the linker ($LD) is GNU ld" >&5
+printf %s "checking if the linker ($LD) is GNU ld... " >&6; }
+if test ${lt_cv_prog_gnu_ld+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ # I'd rather use --version here, but apparently some GNU lds only accept -v.
+case `$LD -v 2>&1 </dev/null` in
+*GNU* | *'with BFD'*)
+  lt_cv_prog_gnu_ld=yes
+  ;;
+*)
+  lt_cv_prog_gnu_ld=no
+  ;;
+esac
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_gnu_ld" >&5
+printf "%s\n" "$lt_cv_prog_gnu_ld" >&6; }
+with_gnu_ld=$lt_cv_prog_gnu_ld
+
+
+
+
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for BSD- or MS-compatible name lister (nm)" >&5
+printf %s "checking for BSD- or MS-compatible name lister (nm)... " >&6; }
+if test ${lt_cv_path_NM+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$NM"; then
+ # Let the user override the test.
+ lt_cv_path_NM=$NM
+else
+ lt_nm_to_check=${ac_tool_prefix}nm
+ if test -n "$ac_tool_prefix" && test "$build" = "$host"; then
+ lt_nm_to_check="$lt_nm_to_check nm"
+ fi
+ for lt_tmp_nm in $lt_nm_to_check; do
+ lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR
+ for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do
+ IFS=$lt_save_ifs
+ test -z "$ac_dir" && ac_dir=.
+ tmp_nm=$ac_dir/$lt_tmp_nm
+ if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext"; then
+ # Check to see if the nm accepts a BSD-compat flag.
+ # Adding the 'sed 1q' prevents false positives on HP-UX, which says:
+ # nm: unknown option "B" ignored
+ # Tru64's nm complains that /dev/null is an invalid object file
+ # MSYS converts /dev/null to NUL, MinGW nm treats NUL as empty
+ case $build_os in
+ mingw*) lt_bad_file=conftest.nm/nofile ;;
+ *) lt_bad_file=/dev/null ;;
+ esac
+ case `"$tmp_nm" -B $lt_bad_file 2>&1 | $SED '1q'` in
+ *$lt_bad_file* | *'Invalid file or object type'*)
+ lt_cv_path_NM="$tmp_nm -B"
+ break 2
+ ;;
+ *)
+ case `"$tmp_nm" -p /dev/null 2>&1 | $SED '1q'` in
+ */dev/null*)
+ lt_cv_path_NM="$tmp_nm -p"
+ break 2
+ ;;
+ *)
+ lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but
+ continue # so that we can try to find one that supports BSD flags
+ ;;
+ esac
+ ;;
+ esac
+ fi
+ done
+ IFS=$lt_save_ifs
+ done
+ : ${lt_cv_path_NM=no}
+fi
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_NM" >&5
+printf "%s\n" "$lt_cv_path_NM" >&6; }
+if test no != "$lt_cv_path_NM"; then
+ NM=$lt_cv_path_NM
+else
+ # Didn't find any BSD compatible name lister, look for dumpbin.
+ if test -n "$DUMPBIN"; then :
+ # Let the user override the test.
+ else
+ if test -n "$ac_tool_prefix"; then
+ for ac_prog in dumpbin "link -dump"
+ do
+ # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_DUMPBIN+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$DUMPBIN"; then
+ ac_cv_prog_DUMPBIN="$DUMPBIN" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_DUMPBIN="$ac_tool_prefix$ac_prog"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+DUMPBIN=$ac_cv_prog_DUMPBIN
+if test -n "$DUMPBIN"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $DUMPBIN" >&5
+printf "%s\n" "$DUMPBIN" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ test -n "$DUMPBIN" && break
+ done
+fi
+if test -z "$DUMPBIN"; then
+ ac_ct_DUMPBIN=$DUMPBIN
+ for ac_prog in dumpbin "link -dump"
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_DUMPBIN+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_DUMPBIN"; then
+ ac_cv_prog_ac_ct_DUMPBIN="$ac_ct_DUMPBIN" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_DUMPBIN="$ac_prog"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_DUMPBIN=$ac_cv_prog_ac_ct_DUMPBIN
+if test -n "$ac_ct_DUMPBIN"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DUMPBIN" >&5
+printf "%s\n" "$ac_ct_DUMPBIN" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ test -n "$ac_ct_DUMPBIN" && break
+done
+
+ if test "x$ac_ct_DUMPBIN" = x; then
+ DUMPBIN=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ DUMPBIN=$ac_ct_DUMPBIN
+ fi
+fi
+
+ case `$DUMPBIN -symbols -headers /dev/null 2>&1 | $SED '1q'` in
+ *COFF*)
+ DUMPBIN="$DUMPBIN -symbols -headers"
+ ;;
+ *)
+ DUMPBIN=:
+ ;;
+ esac
+ fi
+
+ if test : != "$DUMPBIN"; then
+ NM=$DUMPBIN
+ fi
+fi
+test -z "$NM" && NM=nm
+
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking the name lister ($NM) interface" >&5
+printf %s "checking the name lister ($NM) interface... " >&6; }
+if test ${lt_cv_nm_interface+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_nm_interface="BSD nm"
+ echo "int some_variable = 0;" > conftest.$ac_ext
+ (eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&5)
+ (eval "$ac_compile" 2>conftest.err)
+ cat conftest.err >&5
+ (eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&5)
+ (eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out)
+ cat conftest.err >&5
+ (eval echo "\"\$as_me:$LINENO: output\"" >&5)
+ cat conftest.out >&5
+ if $GREP 'External.*some_variable' conftest.out > /dev/null; then
+ lt_cv_nm_interface="MS dumpbin"
+ fi
+ rm -f conftest*
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_nm_interface" >&5
+printf "%s\n" "$lt_cv_nm_interface" >&6; }
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether ln -s works" >&5
+printf %s "checking whether ln -s works... " >&6; }
+LN_S=$as_ln_s
+if test "$LN_S" = "ln -s"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no, using $LN_S" >&5
+printf "%s\n" "no, using $LN_S" >&6; }
+fi
+
+# find the maximum length of command line arguments
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking the maximum length of command line arguments" >&5
+printf %s "checking the maximum length of command line arguments... " >&6; }
+if test ${lt_cv_sys_max_cmd_len+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ i=0
+ teststring=ABCD
+
+ case $build_os in
+ msdosdjgpp*)
+ # On DJGPP, this test can blow up pretty badly due to problems in libc
+ # (any single argument exceeding 2000 bytes causes a buffer overrun
+ # during glob expansion). Even if it were fixed, the result of this
+ # check would be larger than it should be.
+ lt_cv_sys_max_cmd_len=12288; # 12K is about right
+ ;;
+
+ gnu*)
+ # Under GNU Hurd, this test is not required because there is
+ # no limit to the length of command line arguments.
+ # Libtool will interpret -1 as no limit whatsoever
+ lt_cv_sys_max_cmd_len=-1;
+ ;;
+
+ cygwin* | mingw* | cegcc*)
+ # On Win9x/ME, this test blows up -- it succeeds, but takes
+ # about 5 minutes as the teststring grows exponentially.
+ # Worse, since 9x/ME are not pre-emptively multitasking,
+ # you end up with a "frozen" computer, even though with patience
+ # the test eventually succeeds (with a max line length of 256k).
+    # Instead, let's just punt: use the minimum line length reported by
+ # all of the supported platforms: 8192 (on NT/2K/XP).
+ lt_cv_sys_max_cmd_len=8192;
+ ;;
+
+ mint*)
+ # On MiNT this can take a long time and run out of memory.
+ lt_cv_sys_max_cmd_len=8192;
+ ;;
+
+ amigaos*)
+ # On AmigaOS with pdksh, this test takes hours, literally.
+ # So we just punt and use a minimum line length of 8192.
+ lt_cv_sys_max_cmd_len=8192;
+ ;;
+
+ bitrig* | darwin* | dragonfly* | freebsd* | midnightbsd* | netbsd* | openbsd*)
+    # This has been around since 386BSD, at least, and likely even earlier.
+ if test -x /sbin/sysctl; then
+ lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax`
+ elif test -x /usr/sbin/sysctl; then
+ lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax`
+ else
+ lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs
+ fi
+ # And add a safety zone
+ lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4`
+ lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3`
+ ;;
+
+ interix*)
+ # We know the value 262144 and hardcode it with a safety zone (like BSD)
+ lt_cv_sys_max_cmd_len=196608
+ ;;
+
+ os2*)
+ # The test takes a long time on OS/2.
+ lt_cv_sys_max_cmd_len=8192
+ ;;
+
+ osf*)
+ # Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure
+ # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not
+    # nice to cause kernel panics, so let's avoid the loop below.
+ # First set a reasonable default.
+ lt_cv_sys_max_cmd_len=16384
+ #
+ if test -x /sbin/sysconfig; then
+ case `/sbin/sysconfig -q proc exec_disable_arg_limit` in
+ *1*) lt_cv_sys_max_cmd_len=-1 ;;
+ esac
+ fi
+ ;;
+ sco3.2v5*)
+ lt_cv_sys_max_cmd_len=102400
+ ;;
+ sysv5* | sco5v6* | sysv4.2uw2*)
+ kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null`
+ if test -n "$kargmax"; then
+ lt_cv_sys_max_cmd_len=`echo $kargmax | $SED 's/.*[ ]//'`
+ else
+ lt_cv_sys_max_cmd_len=32768
+ fi
+ ;;
+ *)
+ lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null`
+ if test -n "$lt_cv_sys_max_cmd_len" && \
+ test undefined != "$lt_cv_sys_max_cmd_len"; then
+ lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4`
+ lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3`
+ else
+ # Make teststring a little bigger before we do anything with it.
+ # a 1K string should be a reasonable start.
+ for i in 1 2 3 4 5 6 7 8; do
+ teststring=$teststring$teststring
+ done
+ SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}}
+ # If test is not a shell built-in, we'll probably end up computing a
+ # maximum length that is only half of the actual maximum length, but
+ # we can't tell.
+ while { test X`env echo "$teststring$teststring" 2>/dev/null` \
+ = "X$teststring$teststring"; } >/dev/null 2>&1 &&
+ test 17 != "$i" # 1/2 MB should be enough
+ do
+ i=`expr $i + 1`
+ teststring=$teststring$teststring
+ done
+ # Only check the string length outside the loop.
+ lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1`
+ teststring=
+ # Add a significant safety factor because C++ compilers can tack on
+ # massive amounts of additional arguments before passing them to the
+ # linker. It appears as though 1/2 is a usable value.
+ lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2`
+ fi
+ ;;
+ esac
+
+fi
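+# Worked example for the default branch above (figures are illustrative):
+# teststring starts at 4 bytes and the initial 8 doublings make it 1 KiB;
+# the while loop can double it at most 9 more times (i stops at 17), i.e.
+# about 512 KiB.  The measured length is then halved, so a system that
+# survives every doubling ends up with lt_cv_sys_max_cmd_len around 262144.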
+
+if test -n "$lt_cv_sys_max_cmd_len"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sys_max_cmd_len" >&5
+printf "%s\n" "$lt_cv_sys_max_cmd_len" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none" >&5
+printf "%s\n" "none" >&6; }
+fi
+max_cmd_len=$lt_cv_sys_max_cmd_len
+
+
+
+
+
+
+: ${CP="cp -f"}
+: ${MV="mv -f"}
+: ${RM="rm -f"}
+
+if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then
+ lt_unset=unset
+else
+ lt_unset=false
+fi
+
+
+
+
+
+# Test whether this is an EBCDIC- or an ASCII-based system.
+case `echo X|tr X '\101'` in
+ A) # ASCII based system
+ # \n is not interpreted correctly by Solaris 8 /usr/ucb/tr
+ lt_SP2NL='tr \040 \012'
+ lt_NL2SP='tr \015\012 \040\040'
+ ;;
+ *) # EBCDIC based system
+ lt_SP2NL='tr \100 \n'
+ lt_NL2SP='tr \r\n \100\100'
+ ;;
+esac
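+# Illustration (ASCII case, hypothetical input): $lt_SP2NL turns blanks into
+# newlines and $lt_NL2SP folds CR/LF back into blanks, e.g.
+#   printf 'a b c' | tr \040 \012       gives a, b and c on separate lines
+#   printf 'a\r\nb' | tr \015\012 \040\040   gives 'a  b'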
+
+
+
+
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
+printf %s "checking how to convert $build file names to $host format... " >&6; }
+if test ${lt_cv_to_host_file_cmd+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ case $host in
+ *-*-mingw* )
+ case $build in
+ *-*-mingw* ) # actually msys
+ lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
+ ;;
+ *-*-cygwin* )
+ lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
+ ;;
+ * ) # otherwise, assume *nix
+ lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
+ ;;
+ esac
+ ;;
+ *-*-cygwin* )
+ case $build in
+ *-*-mingw* ) # actually msys
+ lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
+ ;;
+ *-*-cygwin* )
+ lt_cv_to_host_file_cmd=func_convert_file_noop
+ ;;
+ * ) # otherwise, assume *nix
+ lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
+ ;;
+ esac
+ ;;
+ * ) # unhandled hosts (and "normal" native builds)
+ lt_cv_to_host_file_cmd=func_convert_file_noop
+ ;;
+esac
+
+fi
+
+to_host_file_cmd=$lt_cv_to_host_file_cmd
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
+printf "%s\n" "$lt_cv_to_host_file_cmd" >&6; }
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
+printf %s "checking how to convert $build file names to toolchain format... " >&6; }
+if test ${lt_cv_to_tool_file_cmd+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+  # Assume ordinary cross tools, or native build.
+lt_cv_to_tool_file_cmd=func_convert_file_noop
+case $host in
+ *-*-mingw* )
+ case $build in
+ *-*-mingw* ) # actually msys
+ lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
+ ;;
+ esac
+ ;;
+esac
+
+fi
+
+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
+printf "%s\n" "$lt_cv_to_tool_file_cmd" >&6; }
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+printf %s "checking for $LD option to reload object files... " >&6; }
+if test ${lt_cv_ld_reload_flag+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_ld_reload_flag='-r'
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_reload_flag" >&5
+printf "%s\n" "$lt_cv_ld_reload_flag" >&6; }
+reload_flag=$lt_cv_ld_reload_flag
+case $reload_flag in
+"" | " "*) ;;
+*) reload_flag=" $reload_flag" ;;
+esac
+reload_cmds='$LD$reload_flag -o $output$reload_objs'
+case $host_os in
+ cygwin* | mingw* | pw32* | cegcc*)
+ if test yes != "$GCC"; then
+ reload_cmds=false
+ fi
+ ;;
+ darwin*)
+ if test yes = "$GCC"; then
+ reload_cmds='$LTCC $LTCFLAGS -nostdlib $wl-r -o $output$reload_objs'
+ else
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ fi
+ ;;
+esac
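+# Illustration: reload_cmds performs an incremental (partial) link that merges
+# several objects into one; with a GNU-style linker it expands to roughly
+#   ld -r -o combined.o a.o b.o
+# (file names are examples).  On Darwin with GCC the compiler driver is used
+# instead, passing -nostdlib and -Wl,-r as set in the case statement above.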
+
+
+
+
+
+
+
+
+
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}file", so it can be a program name with args.
+set dummy ${ac_tool_prefix}file; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_FILECMD+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$FILECMD"; then
+ ac_cv_prog_FILECMD="$FILECMD" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_FILECMD="${ac_tool_prefix}file"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+FILECMD=$ac_cv_prog_FILECMD
+if test -n "$FILECMD"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $FILECMD" >&5
+printf "%s\n" "$FILECMD" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_FILECMD"; then
+ ac_ct_FILECMD=$FILECMD
+ # Extract the first word of "file", so it can be a program name with args.
+set dummy file; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_FILECMD+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_FILECMD"; then
+ ac_cv_prog_ac_ct_FILECMD="$ac_ct_FILECMD" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_FILECMD="file"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_FILECMD=$ac_cv_prog_ac_ct_FILECMD
+if test -n "$ac_ct_FILECMD"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_FILECMD" >&5
+printf "%s\n" "$ac_ct_FILECMD" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_FILECMD" = x; then
+ FILECMD=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ FILECMD=$ac_ct_FILECMD
+ fi
+else
+ FILECMD="$ac_cv_prog_FILECMD"
+fi
+
+
+
+
+
+
+
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}objdump", so it can be a program name with args.
+set dummy ${ac_tool_prefix}objdump; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_OBJDUMP+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$OBJDUMP"; then
+ ac_cv_prog_OBJDUMP="$OBJDUMP" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_OBJDUMP="${ac_tool_prefix}objdump"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+OBJDUMP=$ac_cv_prog_OBJDUMP
+if test -n "$OBJDUMP"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $OBJDUMP" >&5
+printf "%s\n" "$OBJDUMP" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_OBJDUMP"; then
+ ac_ct_OBJDUMP=$OBJDUMP
+ # Extract the first word of "objdump", so it can be a program name with args.
+set dummy objdump; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_OBJDUMP+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_OBJDUMP"; then
+ ac_cv_prog_ac_ct_OBJDUMP="$ac_ct_OBJDUMP" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_OBJDUMP="objdump"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_OBJDUMP=$ac_cv_prog_ac_ct_OBJDUMP
+if test -n "$ac_ct_OBJDUMP"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OBJDUMP" >&5
+printf "%s\n" "$ac_ct_OBJDUMP" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_OBJDUMP" = x; then
+ OBJDUMP="false"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ OBJDUMP=$ac_ct_OBJDUMP
+ fi
+else
+ OBJDUMP="$ac_cv_prog_OBJDUMP"
+fi
+
+test -z "$OBJDUMP" && OBJDUMP=objdump
+
+
+
+
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to recognize dependent libraries" >&5
+printf %s "checking how to recognize dependent libraries... " >&6; }
+if test ${lt_cv_deplibs_check_method+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_file_magic_cmd='$MAGIC_CMD'
+lt_cv_file_magic_test_file=
+lt_cv_deplibs_check_method='unknown'
+# Need to set the preceding variable on all platforms that support
+# interlibrary dependencies.
+# 'none' -- dependencies not supported.
+# 'unknown' -- same as none, but documents that we really don't know.
+# 'pass_all' -- all dependencies passed with no checks.
+# 'test_compile' -- check by making test program.
+# 'file_magic [[regex]]' -- check by looking for files in the library path
+#   that respond to the $file_magic_cmd with a given extended regex.
+# If you have 'file' or equivalent on your system and you're not sure
+# whether 'pass_all' will *always* work, you probably want this one.
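+# Illustration: a 'file_magic' entry such as the bsdi one below runs
+#   $FILECMD -L /shlib/libc.so
+# and accepts a dependency only if the output matches the extended regex
+# that follows 'file_magic', whereas 'pass_all' entries (e.g. GNU/Linux)
+# skip the inspection entirely.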
+
+case $host_os in
+aix[4-9]*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+beos*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+bsdi[45]*)
+ lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib)'
+ lt_cv_file_magic_cmd='$FILECMD -L'
+ lt_cv_file_magic_test_file=/shlib/libc.so
+ ;;
+
+cygwin*)
+ # func_win32_libid is a shell function defined in ltmain.sh
+ lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+ lt_cv_file_magic_cmd='func_win32_libid'
+ ;;
+
+mingw* | pw32*)
+  # Base MSYS/MinGW do not provide the 'file' command needed by the
+ # func_win32_libid shell function, so use a weaker test based on 'objdump',
+ # unless we find 'file', for example because we are cross-compiling.
+ if ( file / ) >/dev/null 2>&1; then
+ lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+ lt_cv_file_magic_cmd='func_win32_libid'
+ else
+ # Keep this pattern in sync with the one in func_win32_libid.
+ lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+ lt_cv_file_magic_cmd='$OBJDUMP -f'
+ fi
+ ;;
+
+cegcc*)
+ # use the weaker test based on 'objdump'. See mingw*.
+ lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?'
+ lt_cv_file_magic_cmd='$OBJDUMP -f'
+ ;;
+
+darwin* | rhapsody*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+freebsd* | dragonfly* | midnightbsd*)
+ if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then
+ case $host_cpu in
+ i*86 )
+ # Not sure whether the presence of OpenBSD here was a mistake.
+ # Let's accept both of them until this is cleared up.
+ lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[3-9]86 (compact )?demand paged shared library'
+ lt_cv_file_magic_cmd=$FILECMD
+ lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*`
+ ;;
+ esac
+ else
+ lt_cv_deplibs_check_method=pass_all
+ fi
+ ;;
+
+haiku*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+hpux10.20* | hpux11*)
+ lt_cv_file_magic_cmd=$FILECMD
+ case $host_cpu in
+ ia64*)
+ lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - IA64'
+ lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so
+ ;;
+ hppa*64*)
+ lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]'
+ lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl
+ ;;
+ *)
+ lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|PA-RISC[0-9]\.[0-9]) shared library'
+ lt_cv_file_magic_test_file=/usr/lib/libc.sl
+ ;;
+ esac
+ ;;
+
+interix[3-9]*)
+  # PIC code is broken on Interix 3.x; that's why we match |\.a rather than |_pic\.a here.
+ lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|\.a)$'
+ ;;
+
+irix5* | irix6* | nonstopux*)
+ case $LD in
+ *-32|*"-32 ") libmagic=32-bit;;
+ *-n32|*"-n32 ") libmagic=N32;;
+ *-64|*"-64 ") libmagic=64-bit;;
+ *) libmagic=never-match;;
+ esac
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+# This must be glibc/ELF.
+linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+netbsd* | netbsdelf*-gnu)
+ if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then
+ lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$'
+ else
+ lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|_pic\.a)$'
+ fi
+ ;;
+
+newos6*)
+ lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (executable|dynamic lib)'
+ lt_cv_file_magic_cmd=$FILECMD
+ lt_cv_file_magic_test_file=/usr/lib/libnls.so
+ ;;
+
+*nto* | *qnx*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+openbsd* | bitrig*)
+ if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then
+ lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|\.so|_pic\.a)$'
+ else
+ lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$'
+ fi
+ ;;
+
+osf3* | osf4* | osf5*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+rdos*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+solaris*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+sysv4 | sysv4.3*)
+ case $host_vendor in
+ motorola)
+ lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib) M[0-9][0-9]* Version [0-9]'
+ lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*`
+ ;;
+ ncr)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+ sequent)
+ lt_cv_file_magic_cmd='/bin/file'
+ lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB (shared object|dynamic lib )'
+ ;;
+ sni)
+ lt_cv_file_magic_cmd='/bin/file'
+ lt_cv_deplibs_check_method="file_magic ELF [0-9][0-9]*-bit [LM]SB dynamic lib"
+ lt_cv_file_magic_test_file=/lib/libc.so
+ ;;
+ siemens)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+ pc)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+ esac
+ ;;
+
+tpf*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+os2*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+esac
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+printf "%s\n" "$lt_cv_deplibs_check_method" >&6; }
+
+file_magic_glob=
+want_nocaseglob=no
+if test "$build" = "$host"; then
+ case $host_os in
+ mingw* | pw32*)
+ if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
+ want_nocaseglob=yes
+ else
+ file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
+ fi
+ ;;
+ esac
+fi
+
+file_magic_cmd=$lt_cv_file_magic_cmd
+deplibs_check_method=$lt_cv_deplibs_check_method
+test -z "$deplibs_check_method" && deplibs_check_method=unknown
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_DLLTOOL+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$DLLTOOL"; then
+ ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+DLLTOOL=$ac_cv_prog_DLLTOOL
+if test -n "$DLLTOOL"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
+printf "%s\n" "$DLLTOOL" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_DLLTOOL"; then
+ ac_ct_DLLTOOL=$DLLTOOL
+ # Extract the first word of "dlltool", so it can be a program name with args.
+set dummy dlltool; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_DLLTOOL+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_DLLTOOL"; then
+ ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_DLLTOOL="dlltool"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
+if test -n "$ac_ct_DLLTOOL"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
+printf "%s\n" "$ac_ct_DLLTOOL" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_DLLTOOL" = x; then
+ DLLTOOL="false"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ DLLTOOL=$ac_ct_DLLTOOL
+ fi
+else
+ DLLTOOL="$ac_cv_prog_DLLTOOL"
+fi
+
+test -z "$DLLTOOL" && DLLTOOL=dlltool
+
+
+
+
+
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
+printf %s "checking how to associate runtime and link libraries... " >&6; }
+if test ${lt_cv_sharedlib_from_linklib_cmd+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_sharedlib_from_linklib_cmd='unknown'
+
+case $host_os in
+cygwin* | mingw* | pw32* | cegcc*)
+ # two different shell functions defined in ltmain.sh;
+ # decide which one to use based on capabilities of $DLLTOOL
+ case `$DLLTOOL --help 2>&1` in
+ *--identify-strict*)
+ lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
+ ;;
+ *)
+ lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
+ ;;
+ esac
+ ;;
+*)
+ # fallback: assume linklib IS sharedlib
+ lt_cv_sharedlib_from_linklib_cmd=$ECHO
+ ;;
+esac
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
+printf "%s\n" "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
+
+
+
+
+
+
+
+if test -n "$ac_tool_prefix"; then
+ for ac_prog in ar
+ do
+ # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_AR+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$AR"; then
+ ac_cv_prog_AR="$AR" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+AR=$ac_cv_prog_AR
+if test -n "$AR"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $AR" >&5
+printf "%s\n" "$AR" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ test -n "$AR" && break
+ done
+fi
+if test -z "$AR"; then
+ ac_ct_AR=$AR
+ for ac_prog in ar
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_AR+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_AR"; then
+ ac_cv_prog_ac_ct_AR="$ac_ct_AR" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_AR="$ac_prog"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_AR=$ac_cv_prog_ac_ct_AR
+if test -n "$ac_ct_AR"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_AR" >&5
+printf "%s\n" "$ac_ct_AR" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ test -n "$ac_ct_AR" && break
+done
+
+ if test "x$ac_ct_AR" = x; then
+ AR="false"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ AR=$ac_ct_AR
+ fi
+fi
+
+: ${AR=ar}
+
+
+
+
+
+
+# Use the ARFLAGS variable as AR's operation code to sync the variable naming
+# with Automake. If both AR_FLAGS and ARFLAGS are specified, AR_FLAGS takes
+# higher priority because that's what people were doing historically (setting
+# ARFLAGS for Automake and AR_FLAGS for libtool). FIXME: obsolete and
+# eventually remove the AR_FLAGS variable.
+
+test ${AR_FLAGS+y} || AR_FLAGS=${ARFLAGS-cr}
+lt_ar_flags=$AR_FLAGS
+
+
+
+
+
+
+# Make AR_FLAGS overridable by 'make ARFLAGS='. Don't try to override it at
+# run time via AR_FLAGS, because that never worked and AR_FLAGS is about to die.
+
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
+printf %s "checking for archiver @FILE support... " >&6; }
+if test ${lt_cv_ar_at_file+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_ar_at_file=no
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ echo conftest.$ac_objext > conftest.lst
+ lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
+ { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
+ (eval $lt_ar_try) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }
+ if test 0 -eq "$ac_status"; then
+ # Ensure the archiver fails upon bogus file names.
+ rm -f conftest.$ac_objext libconftest.a
+ { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
+ (eval $lt_ar_try) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }
+ if test 0 -ne "$ac_status"; then
+ lt_cv_ar_at_file=@
+ fi
+ fi
+ rm -f conftest.* libconftest.a
+
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
+printf "%s\n" "$lt_cv_ar_at_file" >&6; }
+
+if test no = "$lt_cv_ar_at_file"; then
+ archiver_list_spec=
+else
+ archiver_list_spec=$lt_cv_ar_at_file
+fi
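+# Illustration of the probe above: with @FILE support the archiver reads its
+# member list from a response file, e.g. (GNU ar syntax, names hypothetical)
+#   echo foo.o > objs.lst
+#   ar cr libfoo.a @objs.lst
+# Other archivers may not accept this, hence the runtime test.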
+
+
+
+
+
+
+
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+set dummy ${ac_tool_prefix}strip; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_STRIP+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$STRIP"; then
+ ac_cv_prog_STRIP="$STRIP" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_STRIP="${ac_tool_prefix}strip"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+STRIP=$ac_cv_prog_STRIP
+if test -n "$STRIP"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5
+printf "%s\n" "$STRIP" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_STRIP"; then
+ ac_ct_STRIP=$STRIP
+ # Extract the first word of "strip", so it can be a program name with args.
+set dummy strip; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_STRIP+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_STRIP"; then
+ ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_STRIP="strip"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP
+if test -n "$ac_ct_STRIP"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5
+printf "%s\n" "$ac_ct_STRIP" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_STRIP" = x; then
+ STRIP=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ STRIP=$ac_ct_STRIP
+ fi
+else
+ STRIP="$ac_cv_prog_STRIP"
+fi
+
+test -z "$STRIP" && STRIP=:
+
+
+
+
+
+
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}ranlib", so it can be a program name with args.
+set dummy ${ac_tool_prefix}ranlib; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_RANLIB+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$RANLIB"; then
+ ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_RANLIB="${ac_tool_prefix}ranlib"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+RANLIB=$ac_cv_prog_RANLIB
+if test -n "$RANLIB"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $RANLIB" >&5
+printf "%s\n" "$RANLIB" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_RANLIB"; then
+ ac_ct_RANLIB=$RANLIB
+ # Extract the first word of "ranlib", so it can be a program name with args.
+set dummy ranlib; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_RANLIB+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_RANLIB"; then
+ ac_cv_prog_ac_ct_RANLIB="$ac_ct_RANLIB" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_RANLIB="ranlib"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_RANLIB=$ac_cv_prog_ac_ct_RANLIB
+if test -n "$ac_ct_RANLIB"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_RANLIB" >&5
+printf "%s\n" "$ac_ct_RANLIB" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_RANLIB" = x; then
+ RANLIB=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ RANLIB=$ac_ct_RANLIB
+ fi
+else
+ RANLIB="$ac_cv_prog_RANLIB"
+fi
+
+test -z "$RANLIB" && RANLIB=:
+
+
+
+
+
+
+# Determine commands to create old-style static archives.
+old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs'
+old_postinstall_cmds='chmod 644 $oldlib'
+old_postuninstall_cmds=
+
+if test -n "$RANLIB"; then
+ case $host_os in
+ bitrig* | openbsd*)
+ old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib"
+ ;;
+ *)
+ old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib"
+ ;;
+ esac
+ old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib"
+fi
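+# Illustration (assuming AR_FLAGS=cr and a real ranlib; names hypothetical):
+# the '~' separates sequential commands, so old_archive_cmds expands to
+# roughly
+#   ar cr libfoo.a foo.o bar.o
+#   ranlib libfoo.a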
+
+case $host_os in
+ darwin*)
+ lock_old_archive_extraction=yes ;;
+ *)
+ lock_old_archive_extraction=no ;;
+esac
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+# If no C compiler was specified, use CC.
+LTCC=${LTCC-"$CC"}
+
+# If no C compiler flags were specified, use CFLAGS.
+LTCFLAGS=${LTCFLAGS-"$CFLAGS"}
+
+# Allow CC to be a program name with arguments.
+compiler=$CC
+
+
+# Check for command to grab the raw symbol name followed by C symbol from nm.
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking command to parse $NM output from $compiler object" >&5
+printf %s "checking command to parse $NM output from $compiler object... " >&6; }
+if test ${lt_cv_sys_global_symbol_pipe+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+
+# These are sane defaults that work on at least a few old systems.
+# [They come from Ultrix. What could be older than Ultrix?!! ;)]
+
+# Character class describing NM global symbol codes.
+symcode='[BCDEGRST]'
+
+# Regexp to match symbols that can be accessed directly from C.
+sympat='\([_A-Za-z][_A-Za-z0-9]*\)'
+
+# Define system-specific variables.
+case $host_os in
+aix*)
+ symcode='[BCDT]'
+ ;;
+cygwin* | mingw* | pw32* | cegcc*)
+ symcode='[ABCDGISTW]'
+ ;;
+hpux*)
+ if test ia64 = "$host_cpu"; then
+ symcode='[ABCDEGRST]'
+ fi
+ ;;
+irix* | nonstopux*)
+ symcode='[BCDEGRST]'
+ ;;
+osf*)
+ symcode='[BCDEGQRST]'
+ ;;
+solaris*)
+ symcode='[BDRT]'
+ ;;
+sco3.2v5*)
+ symcode='[DT]'
+ ;;
+sysv4.2uw2*)
+ symcode='[DT]'
+ ;;
+sysv5* | sco5v6* | unixware* | OpenUNIX*)
+ symcode='[ABDT]'
+ ;;
+sysv4)
+ symcode='[DFNSTU]'
+ ;;
+esac
+
+# If we're using GNU nm, then use its standard symbol codes.
+case `$NM -V 2>&1` in
+*GNU* | *'with BFD'*)
+ symcode='[ABCDGIRSTW]' ;;
+esac
+
+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
+  # Get the list of data symbols to import.
+ lt_cv_sys_global_symbol_to_import="$SED -n -e 's/^I .* \(.*\)$/\1/p'"
+  # Adjust the global symbol transforms below to fix up imported variables.
+ lt_cdecl_hook=" -e 's/^I .* \(.*\)$/extern __declspec(dllimport) char \1;/p'"
+ lt_c_name_hook=" -e 's/^I .* \(.*\)$/ {\"\1\", (void *) 0},/p'"
+ lt_c_name_lib_hook="\
+ -e 's/^I .* \(lib.*\)$/ {\"\1\", (void *) 0},/p'\
+ -e 's/^I .* \(.*\)$/ {\"lib\1\", (void *) 0},/p'"
+else
+ # Disable hooks by default.
+ lt_cv_sys_global_symbol_to_import=
+ lt_cdecl_hook=
+ lt_c_name_hook=
+ lt_c_name_lib_hook=
+fi
+
+# Transform an extracted symbol line into a proper C declaration.
+# Some systems (esp. on ia64) link data and code symbols differently,
+# so use this general approach.
+lt_cv_sys_global_symbol_to_cdecl="$SED -n"\
+$lt_cdecl_hook\
+" -e 's/^T .* \(.*\)$/extern int \1();/p'"\
+" -e 's/^$symcode$symcode* .* \(.*\)$/extern char \1;/p'"
+
+# Transform an extracted symbol line into symbol name and symbol address
+lt_cv_sys_global_symbol_to_c_name_address="$SED -n"\
+$lt_c_name_hook\
+" -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\
+" -e 's/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/p'"
+
+# Transform an extracted symbol line into symbol name with lib prefix and
+# symbol address.
+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="$SED -n"\
+$lt_c_name_lib_hook\
+" -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\
+" -e 's/^$symcode$symcode* .* \(lib.*\)$/ {\"\1\", (void *) \&\1},/p'"\
+" -e 's/^$symcode$symcode* .* \(.*\)$/ {\"lib\1\", (void *) \&\1},/p'"
+
+# Handle CRLF in mingw tool chain
+opt_cr=
+case $build_os in
+mingw*)
+ opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp
+ ;;
+esac
+
+# Try without a prefix underscore, then with it.
+for ac_symprfx in "" "_"; do
+
+ # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol.
+ symxfrm="\\1 $ac_symprfx\\2 \\2"
+
+ # Write the raw and C identifiers.
+ if test "$lt_cv_nm_interface" = "MS dumpbin"; then
+ # Fake it for dumpbin and say T for any non-static function,
+ # D for any global variable and I for any imported variable.
+ # Also find C++ and __fastcall symbols from MSVC++ or ICC,
+ # which start with @ or ?.
+ lt_cv_sys_global_symbol_pipe="$AWK '"\
+" {last_section=section; section=\$ 3};"\
+" /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\
+" /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\
+" /^ *Symbol name *: /{split(\$ 0,sn,\":\"); si=substr(sn[2],2)};"\
+" /^ *Type *: code/{print \"T\",si,substr(si,length(prfx))};"\
+" /^ *Type *: data/{print \"I\",si,substr(si,length(prfx))};"\
+" \$ 0!~/External *\|/{next};"\
+" / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\
+" {if(hide[section]) next};"\
+" {f=\"D\"}; \$ 0~/\(\).*\|/{f=\"T\"};"\
+" {split(\$ 0,a,/\||\r/); split(a[2],s)};"\
+" s[1]~/^[@?]/{print f,s[1],s[1]; next};"\
+" s[1]~prfx {split(s[1],t,\"@\"); print f,t[1],substr(t[1],length(prfx))}"\
+" ' prfx=^$ac_symprfx"
+ else
+ lt_cv_sys_global_symbol_pipe="$SED -n -e 's/^.*[ ]\($symcode$symcode*\)[ ][ ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+ fi
+ lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | $SED '/ __gnu_lto/d'"
+
+ # Check to see that the pipe works correctly.
+ pipe_works=no
+
+ rm -f conftest*
+ cat > conftest.$ac_ext <<_LT_EOF
+#ifdef __cplusplus
+extern "C" {
+#endif
+char nm_test_var;
+void nm_test_func(void);
+void nm_test_func(void){}
+#ifdef __cplusplus
+}
+#endif
+int main(){nm_test_var='a';nm_test_func();return(0);}
+_LT_EOF
+
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ # Now try to grab the symbols.
+ nlist=conftest.nm
+ $ECHO "$as_me:$LINENO: $NM conftest.$ac_objext | $lt_cv_sys_global_symbol_pipe > $nlist" >&5
+ if eval "$NM" conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist 2>&5 && test -s "$nlist"; then
+ # Try sorting and uniquifying the output.
+ if sort "$nlist" | uniq > "$nlist"T; then
+ mv -f "$nlist"T "$nlist"
+ else
+ rm -f "$nlist"T
+ fi
+
+ # Make sure that we snagged all the symbols we need.
+ if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ cat <<_LT_EOF > conftest.$ac_ext
+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */
+#if defined _WIN32 || defined __CYGWIN__ || defined _WIN32_WCE
+/* DATA imports from DLLs on WIN32 can't be const, because runtime
+ relocations are performed -- see ld's documentation on pseudo-relocs. */
+# define LT_DLSYM_CONST
+#elif defined __osf__
+/* This system does not cope well with relocations in const data. */
+# define LT_DLSYM_CONST
+#else
+# define LT_DLSYM_CONST const
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+_LT_EOF
+ # Now generate the symbol file.
+ eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext'
+
+ cat <<_LT_EOF >> conftest.$ac_ext
+
+/* The mapping between symbol names and symbols. */
+LT_DLSYM_CONST struct {
+ const char *name;
+ void *address;
+}
+lt__PROGRAM__LTX_preloaded_symbols[] =
+{
+ { "@PROGRAM@", (void *) 0 },
+_LT_EOF
+ $SED "s/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext
+ cat <<\_LT_EOF >> conftest.$ac_ext
+ {0, (void *) 0}
+};
+
+/* This works around a problem in the FreeBSD linker. */
+#ifdef FREEBSD_WORKAROUND
+static const void *lt_preloaded_setup() {
+ return lt__PROGRAM__LTX_preloaded_symbols;
+}
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+_LT_EOF
+ # Now try linking the two files.
+ mv conftest.$ac_objext conftstm.$ac_objext
+ lt_globsym_save_LIBS=$LIBS
+ lt_globsym_save_CFLAGS=$CFLAGS
+ LIBS=conftstm.$ac_objext
+ CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; } && test -s conftest$ac_exeext; then
+ pipe_works=yes
+ fi
+ LIBS=$lt_globsym_save_LIBS
+ CFLAGS=$lt_globsym_save_CFLAGS
+ else
+ echo "cannot find nm_test_func in $nlist" >&5
+ fi
+ else
+ echo "cannot find nm_test_var in $nlist" >&5
+ fi
+ else
+ echo "cannot run $lt_cv_sys_global_symbol_pipe" >&5
+ fi
+ else
+ echo "$progname: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ fi
+ rm -rf conftest* conftst*
+
+ # Do not use the global_symbol_pipe unless it works.
+ if test yes = "$pipe_works"; then
+ break
+ else
+ lt_cv_sys_global_symbol_pipe=
+ fi
+done
+
+fi
+
+if test -z "$lt_cv_sys_global_symbol_pipe"; then
+ lt_cv_sys_global_symbol_to_cdecl=
+fi
+if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: failed" >&5
+printf "%s\n" "failed" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: ok" >&5
+printf "%s\n" "ok" >&6; }
+fi
+
+# Response file support.
+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
+ nm_file_list_spec='@'
+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
+ nm_file_list_spec='@'
+fi
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
+printf %s "checking for sysroot... " >&6; }
+
+# Check whether --with-sysroot was given.
+if test ${with_sysroot+y}
+then :
+ withval=$with_sysroot;
+else $as_nop
+ with_sysroot=no
+fi
+
+
+lt_sysroot=
+case $with_sysroot in #(
+ yes)
+ if test yes = "$GCC"; then
+ lt_sysroot=`$CC --print-sysroot 2>/dev/null`
+ fi
+ ;; #(
+ /*)
+ lt_sysroot=`echo "$with_sysroot" | $SED -e "$sed_quote_subst"`
+ ;; #(
+ no|'')
+ ;; #(
+ *)
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $with_sysroot" >&5
+printf "%s\n" "$with_sysroot" >&6; }
+ as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
+ ;;
+esac
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
+printf "%s\n" "${lt_sysroot:-no}" >&6; }
+
+
+
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for a working dd" >&5
+printf %s "checking for a working dd... " >&6; }
+if test ${ac_cv_path_lt_DD+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ printf 0123456789abcdef0123456789abcdef >conftest.i
+cat conftest.i conftest.i >conftest2.i
+: ${lt_DD:=$DD}
+if test -z "$lt_DD"; then
+ ac_path_lt_DD_found=false
+  # Loop through the user's path and test each candidate program (here, dd).
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_prog in dd
+ do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ ac_path_lt_DD="$as_dir$ac_prog$ac_exec_ext"
+ as_fn_executable_p "$ac_path_lt_DD" || continue
+if "$ac_path_lt_DD" bs=32 count=1 conftest.out 2>/dev/null; then
+ cmp -s conftest.i conftest.out \
+ && ac_cv_path_lt_DD="$ac_path_lt_DD" ac_path_lt_DD_found=:
+fi
+ $ac_path_lt_DD_found && break 3
+ done
+ done
+ done
+IFS=$as_save_IFS
+ if test -z "$ac_cv_path_lt_DD"; then
+ :
+ fi
+else
+ ac_cv_path_lt_DD=$lt_DD
+fi
+
+rm -f conftest.i conftest2.i conftest.out
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_lt_DD" >&5
+printf "%s\n" "$ac_cv_path_lt_DD" >&6; }
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to truncate binary pipes" >&5
+printf %s "checking how to truncate binary pipes... " >&6; }
+if test ${lt_cv_truncate_bin+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ printf 0123456789abcdef0123456789abcdef >conftest.i
+cat conftest.i conftest.i >conftest2.i
+lt_cv_truncate_bin=
+if "$ac_cv_path_lt_DD" bs=32 count=1 conftest.out 2>/dev/null; then
+ cmp -s conftest.i conftest.out \
+ && lt_cv_truncate_bin="$ac_cv_path_lt_DD bs=4096 count=1"
+fi
+rm -f conftest.i conftest2.i conftest.out
+test -z "$lt_cv_truncate_bin" && lt_cv_truncate_bin="$SED -e 4q"
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_truncate_bin" >&5
+printf "%s\n" "$lt_cv_truncate_bin" >&6; }
+
+
+
+
+
+
+
+# Calculate cc_basename. Skip known compiler wrappers and cross-prefix.
+func_cc_basename ()
+{
+ for cc_temp in $*""; do
+ case $cc_temp in
+ compile | *[\\/]compile | ccache | *[\\/]ccache ) ;;
+ distcc | *[\\/]distcc | purify | *[\\/]purify ) ;;
+ \-*) ;;
+ *) break;;
+ esac
+ done
+ func_cc_basename_result=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"`
+}
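+# Illustration (hypothetical arguments, assuming host_alias=x86_64-linux-gnu):
+#   func_cc_basename "ccache x86_64-linux-gnu-gcc -O2"
+# skips the 'ccache' wrapper, picks the real compiler word and leaves
+# func_cc_basename_result=gcc.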
+
+# Check whether --enable-libtool-lock was given.
+if test ${enable_libtool_lock+y}
+then :
+ enableval=$enable_libtool_lock;
+fi
+
+test no = "$enable_libtool_lock" || enable_libtool_lock=yes
+
+# Some flags need to be propagated to the compiler or linker for good
+# libtool support.
+case $host in
+ia64-*-hpux*)
+ # Find out what ABI is being produced by ac_compile, and set mode
+ # options accordingly.
+ echo 'int i;' > conftest.$ac_ext
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ case `$FILECMD conftest.$ac_objext` in
+ *ELF-32*)
+ HPUX_IA64_MODE=32
+ ;;
+ *ELF-64*)
+ HPUX_IA64_MODE=64
+ ;;
+ esac
+ fi
+ rm -rf conftest*
+ ;;
+*-*-irix6*)
+ # Find out what ABI is being produced by ac_compile, and set linker
+ # options accordingly.
+ echo '#line '$LINENO' "configure"' > conftest.$ac_ext
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ if test yes = "$lt_cv_prog_gnu_ld"; then
+ case `$FILECMD conftest.$ac_objext` in
+ *32-bit*)
+ LD="${LD-ld} -melf32bsmip"
+ ;;
+ *N32*)
+ LD="${LD-ld} -melf32bmipn32"
+ ;;
+ *64-bit*)
+ LD="${LD-ld} -melf64bmip"
+ ;;
+ esac
+ else
+ case `$FILECMD conftest.$ac_objext` in
+ *32-bit*)
+ LD="${LD-ld} -32"
+ ;;
+ *N32*)
+ LD="${LD-ld} -n32"
+ ;;
+ *64-bit*)
+ LD="${LD-ld} -64"
+ ;;
+ esac
+ fi
+ fi
+ rm -rf conftest*
+ ;;
+
+mips64*-*linux*)
+ # Find out what ABI is being produced by ac_compile, and set linker
+ # options accordingly.
+ echo '#line '$LINENO' "configure"' > conftest.$ac_ext
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ emul=elf
+ case `$FILECMD conftest.$ac_objext` in
+ *32-bit*)
+ emul="${emul}32"
+ ;;
+ *64-bit*)
+ emul="${emul}64"
+ ;;
+ esac
+ case `$FILECMD conftest.$ac_objext` in
+ *MSB*)
+ emul="${emul}btsmip"
+ ;;
+ *LSB*)
+ emul="${emul}ltsmip"
+ ;;
+ esac
+ case `$FILECMD conftest.$ac_objext` in
+ *N32*)
+ emul="${emul}n32"
+ ;;
+ esac
+ LD="${LD-ld} -m $emul"
+ fi
+ rm -rf conftest*
+ ;;
+
+x86_64-*kfreebsd*-gnu|x86_64-*linux*|powerpc*-*linux*| \
+s390*-*linux*|s390*-*tpf*|sparc*-*linux*)
+ # Find out what ABI is being produced by ac_compile, and set linker
+ # options accordingly. Note that the listed cases only cover the
+ # situations where additional linker options are needed (such as when
+ # doing 32-bit compilation for a host where ld defaults to 64-bit, or
+ # vice versa); the common cases where no linker options are needed do
+ # not appear in the list.
+ echo 'int i;' > conftest.$ac_ext
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ case `$FILECMD conftest.o` in
+ *32-bit*)
+ case $host in
+ x86_64-*kfreebsd*-gnu)
+ LD="${LD-ld} -m elf_i386_fbsd"
+ ;;
+ x86_64-*linux*)
+ case `$FILECMD conftest.o` in
+ *x86-64*)
+ LD="${LD-ld} -m elf32_x86_64"
+ ;;
+ *)
+ LD="${LD-ld} -m elf_i386"
+ ;;
+ esac
+ ;;
+ powerpc64le-*linux*)
+ LD="${LD-ld} -m elf32lppclinux"
+ ;;
+ powerpc64-*linux*)
+ LD="${LD-ld} -m elf32ppclinux"
+ ;;
+ s390x-*linux*)
+ LD="${LD-ld} -m elf_s390"
+ ;;
+ sparc64-*linux*)
+ LD="${LD-ld} -m elf32_sparc"
+ ;;
+ esac
+ ;;
+ *64-bit*)
+ case $host in
+ x86_64-*kfreebsd*-gnu)
+ LD="${LD-ld} -m elf_x86_64_fbsd"
+ ;;
+ x86_64-*linux*)
+ LD="${LD-ld} -m elf_x86_64"
+ ;;
+ powerpcle-*linux*)
+ LD="${LD-ld} -m elf64lppc"
+ ;;
+ powerpc-*linux*)
+ LD="${LD-ld} -m elf64ppc"
+ ;;
+ s390*-*linux*|s390*-*tpf*)
+ LD="${LD-ld} -m elf64_s390"
+ ;;
+ sparc*-*linux*)
+ LD="${LD-ld} -m elf64_sparc"
+ ;;
+ esac
+ ;;
+ esac
+ fi
+ rm -rf conftest*
+ ;;
+
+*-*-sco3.2v5*)
+ # On SCO OpenServer 5, we need -belf to get full-featured binaries.
+ SAVE_CFLAGS=$CFLAGS
+ CFLAGS="$CFLAGS -belf"
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the C compiler needs -belf" >&5
+printf %s "checking whether the C compiler needs -belf... " >&6; }
+if test ${lt_cv_cc_needs_belf+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ lt_cv_cc_needs_belf=yes
+else $as_nop
+ lt_cv_cc_needs_belf=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+ ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_cc_needs_belf" >&5
+printf "%s\n" "$lt_cv_cc_needs_belf" >&6; }
+ if test yes != "$lt_cv_cc_needs_belf"; then
+ # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf
+ CFLAGS=$SAVE_CFLAGS
+ fi
+ ;;
+*-*solaris*)
+ # Find out what ABI is being produced by ac_compile, and set linker
+ # options accordingly.
+ echo 'int i;' > conftest.$ac_ext
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ case `$FILECMD conftest.o` in
+ *64-bit*)
+ case $lt_cv_prog_gnu_ld in
+ yes*)
+ case $host in
+ i?86-*-solaris*|x86_64-*-solaris*)
+ LD="${LD-ld} -m elf_x86_64"
+ ;;
+ sparc*-*-solaris*)
+ LD="${LD-ld} -m elf64_sparc"
+ ;;
+ esac
+ # GNU ld 2.21 introduced _sol2 emulations. Use them if available.
+ if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then
+ LD=${LD-ld}_sol2
+ fi
+ ;;
+ *)
+ if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then
+ LD="${LD-ld} -64"
+ fi
+ ;;
+ esac
+ ;;
+ esac
+ fi
+ rm -rf conftest*
+ ;;
+esac
+
+need_locks=$enable_libtool_lock
+
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
+set dummy ${ac_tool_prefix}mt; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_MANIFEST_TOOL+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$MANIFEST_TOOL"; then
+ ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
+if test -n "$MANIFEST_TOOL"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
+printf "%s\n" "$MANIFEST_TOOL" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
+ ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
+ # Extract the first word of "mt", so it can be a program name with args.
+set dummy mt; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_MANIFEST_TOOL+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_MANIFEST_TOOL"; then
+ ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
+if test -n "$ac_ct_MANIFEST_TOOL"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
+printf "%s\n" "$ac_ct_MANIFEST_TOOL" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_MANIFEST_TOOL" = x; then
+ MANIFEST_TOOL=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
+ fi
+else
+ MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
+fi
+
+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
+printf %s "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
+if test ${lt_cv_path_mainfest_tool+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_path_mainfest_tool=no
+ echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
+ $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
+ cat conftest.err >&5
+ if $GREP 'Manifest Tool' conftest.out > /dev/null; then
+ lt_cv_path_mainfest_tool=yes
+ fi
+ rm -f conftest*
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
+printf "%s\n" "$lt_cv_path_mainfest_tool" >&6; }
+if test yes != "$lt_cv_path_mainfest_tool"; then
+ MANIFEST_TOOL=:
+fi
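+
+# NOTE (illustrative; the banner text is an assumption): 'mt' is also the name
+# of the Unix magnetic-tape control utility, so the probe above accepts
+# $MANIFEST_TOOL only if "$MANIFEST_TOOL '-?'" prints output containing
+# 'Manifest Tool' (as Microsoft's mt.exe does); otherwise MANIFEST_TOOL is
+# reset to ':' and later manifest steps become no-ops.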
+
+
+
+
+
+
+ case $host_os in
+ rhapsody* | darwin*)
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}dsymutil", so it can be a program name with args.
+set dummy ${ac_tool_prefix}dsymutil; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_DSYMUTIL+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$DSYMUTIL"; then
+ ac_cv_prog_DSYMUTIL="$DSYMUTIL" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_DSYMUTIL="${ac_tool_prefix}dsymutil"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+DSYMUTIL=$ac_cv_prog_DSYMUTIL
+if test -n "$DSYMUTIL"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $DSYMUTIL" >&5
+printf "%s\n" "$DSYMUTIL" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_DSYMUTIL"; then
+ ac_ct_DSYMUTIL=$DSYMUTIL
+ # Extract the first word of "dsymutil", so it can be a program name with args.
+set dummy dsymutil; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_DSYMUTIL+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_DSYMUTIL"; then
+ ac_cv_prog_ac_ct_DSYMUTIL="$ac_ct_DSYMUTIL" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_DSYMUTIL="dsymutil"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_DSYMUTIL=$ac_cv_prog_ac_ct_DSYMUTIL
+if test -n "$ac_ct_DSYMUTIL"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DSYMUTIL" >&5
+printf "%s\n" "$ac_ct_DSYMUTIL" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_DSYMUTIL" = x; then
+ DSYMUTIL=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ DSYMUTIL=$ac_ct_DSYMUTIL
+ fi
+else
+ DSYMUTIL="$ac_cv_prog_DSYMUTIL"
+fi
+
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}nmedit", so it can be a program name with args.
+set dummy ${ac_tool_prefix}nmedit; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_NMEDIT+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$NMEDIT"; then
+ ac_cv_prog_NMEDIT="$NMEDIT" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_NMEDIT="${ac_tool_prefix}nmedit"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+NMEDIT=$ac_cv_prog_NMEDIT
+if test -n "$NMEDIT"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $NMEDIT" >&5
+printf "%s\n" "$NMEDIT" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_NMEDIT"; then
+ ac_ct_NMEDIT=$NMEDIT
+ # Extract the first word of "nmedit", so it can be a program name with args.
+set dummy nmedit; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_NMEDIT+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_NMEDIT"; then
+ ac_cv_prog_ac_ct_NMEDIT="$ac_ct_NMEDIT" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_NMEDIT="nmedit"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_NMEDIT=$ac_cv_prog_ac_ct_NMEDIT
+if test -n "$ac_ct_NMEDIT"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_NMEDIT" >&5
+printf "%s\n" "$ac_ct_NMEDIT" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_NMEDIT" = x; then
+ NMEDIT=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ NMEDIT=$ac_ct_NMEDIT
+ fi
+else
+ NMEDIT="$ac_cv_prog_NMEDIT"
+fi
+
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}lipo", so it can be a program name with args.
+set dummy ${ac_tool_prefix}lipo; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_LIPO+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$LIPO"; then
+ ac_cv_prog_LIPO="$LIPO" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_LIPO="${ac_tool_prefix}lipo"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+LIPO=$ac_cv_prog_LIPO
+if test -n "$LIPO"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $LIPO" >&5
+printf "%s\n" "$LIPO" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_LIPO"; then
+ ac_ct_LIPO=$LIPO
+ # Extract the first word of "lipo", so it can be a program name with args.
+set dummy lipo; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_LIPO+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_LIPO"; then
+ ac_cv_prog_ac_ct_LIPO="$ac_ct_LIPO" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_LIPO="lipo"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_LIPO=$ac_cv_prog_ac_ct_LIPO
+if test -n "$ac_ct_LIPO"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_LIPO" >&5
+printf "%s\n" "$ac_ct_LIPO" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_LIPO" = x; then
+ LIPO=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ LIPO=$ac_ct_LIPO
+ fi
+else
+ LIPO="$ac_cv_prog_LIPO"
+fi
+
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}otool", so it can be a program name with args.
+set dummy ${ac_tool_prefix}otool; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_OTOOL+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$OTOOL"; then
+ ac_cv_prog_OTOOL="$OTOOL" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_OTOOL="${ac_tool_prefix}otool"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+OTOOL=$ac_cv_prog_OTOOL
+if test -n "$OTOOL"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $OTOOL" >&5
+printf "%s\n" "$OTOOL" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_OTOOL"; then
+ ac_ct_OTOOL=$OTOOL
+ # Extract the first word of "otool", so it can be a program name with args.
+set dummy otool; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_OTOOL+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_OTOOL"; then
+ ac_cv_prog_ac_ct_OTOOL="$ac_ct_OTOOL" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_OTOOL="otool"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_OTOOL=$ac_cv_prog_ac_ct_OTOOL
+if test -n "$ac_ct_OTOOL"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL" >&5
+printf "%s\n" "$ac_ct_OTOOL" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_OTOOL" = x; then
+ OTOOL=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ OTOOL=$ac_ct_OTOOL
+ fi
+else
+ OTOOL="$ac_cv_prog_OTOOL"
+fi
+
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}otool64", so it can be a program name with args.
+set dummy ${ac_tool_prefix}otool64; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_OTOOL64+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$OTOOL64"; then
+ ac_cv_prog_OTOOL64="$OTOOL64" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_OTOOL64="${ac_tool_prefix}otool64"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+OTOOL64=$ac_cv_prog_OTOOL64
+if test -n "$OTOOL64"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $OTOOL64" >&5
+printf "%s\n" "$OTOOL64" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_OTOOL64"; then
+ ac_ct_OTOOL64=$OTOOL64
+ # Extract the first word of "otool64", so it can be a program name with args.
+set dummy otool64; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_ac_ct_OTOOL64+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$ac_ct_OTOOL64"; then
+ ac_cv_prog_ac_ct_OTOOL64="$ac_ct_OTOOL64" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_OTOOL64="otool64"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_OTOOL64=$ac_cv_prog_ac_ct_OTOOL64
+if test -n "$ac_ct_OTOOL64"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL64" >&5
+printf "%s\n" "$ac_ct_OTOOL64" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_ct_OTOOL64" = x; then
+ OTOOL64=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ OTOOL64=$ac_ct_OTOOL64
+ fi
+else
+ OTOOL64="$ac_cv_prog_OTOOL64"
+fi
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for -single_module linker flag" >&5
+printf %s "checking for -single_module linker flag... " >&6; }
+if test ${lt_cv_apple_cc_single_mod+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_apple_cc_single_mod=no
+ if test -z "$LT_MULTI_MODULE"; then
+ # By default we will add the -single_module flag. You can override
+ # by either setting the environment variable LT_MULTI_MODULE
+ # non-empty at configure time, or by adding -multi_module to the
+ # link flags.
+ rm -rf libconftest.dylib*
+ echo "int foo(void){return 1;}" > conftest.c
+ echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \
+-dynamiclib -Wl,-single_module conftest.c" >&5
+ $LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \
+ -dynamiclib -Wl,-single_module conftest.c 2>conftest.err
+ _lt_result=$?
+ # If there is a non-empty error log, and "single_module"
+ # appears in it, assume the flag caused a linker warning
+ if test -s conftest.err && $GREP single_module conftest.err; then
+ cat conftest.err >&5
+ # Otherwise, if the output was created with a 0 exit code from
+ # the compiler, it worked.
+ elif test -f libconftest.dylib && test 0 = "$_lt_result"; then
+ lt_cv_apple_cc_single_mod=yes
+ else
+ cat conftest.err >&5
+ fi
+ rm -rf libconftest.dylib*
+ rm -f conftest.*
+ fi
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_apple_cc_single_mod" >&5
+printf "%s\n" "$lt_cv_apple_cc_single_mod" >&6; }
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for -exported_symbols_list linker flag" >&5
+printf %s "checking for -exported_symbols_list linker flag... " >&6; }
+if test ${lt_cv_ld_exported_symbols_list+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_ld_exported_symbols_list=no
+ save_LDFLAGS=$LDFLAGS
+ echo "_main" > conftest.sym
+ LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym"
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ lt_cv_ld_exported_symbols_list=yes
+else $as_nop
+ lt_cv_ld_exported_symbols_list=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+ LDFLAGS=$save_LDFLAGS
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_exported_symbols_list" >&5
+printf "%s\n" "$lt_cv_ld_exported_symbols_list" >&6; }
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for -force_load linker flag" >&5
+printf %s "checking for -force_load linker flag... " >&6; }
+if test ${lt_cv_ld_force_load+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_ld_force_load=no
+ cat > conftest.c << _LT_EOF
+int forced_loaded() { return 2;}
+_LT_EOF
+ echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&5
+ $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+ echo "$AR $AR_FLAGS libconftest.a conftest.o" >&5
+ $AR $AR_FLAGS libconftest.a conftest.o 2>&5
+ echo "$RANLIB libconftest.a" >&5
+ $RANLIB libconftest.a 2>&5
+ cat > conftest.c << _LT_EOF
+int main() { return 0;}
+_LT_EOF
+ echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&5
+ $LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a 2>conftest.err
+ _lt_result=$?
+ if test -s conftest.err && $GREP force_load conftest.err; then
+ cat conftest.err >&5
+ elif test -f conftest && test 0 = "$_lt_result" && $GREP forced_load conftest >/dev/null 2>&1; then
+ lt_cv_ld_force_load=yes
+ else
+ cat conftest.err >&5
+ fi
+ rm -f conftest.err libconftest.a conftest conftest.c
+ rm -rf conftest.dSYM
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_force_load" >&5
+printf "%s\n" "$lt_cv_ld_force_load" >&6; }
+ case $host_os in
+ rhapsody* | darwin1.[012])
+ _lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;;
+ darwin1.*)
+ _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
+ darwin*)
+ case $MACOSX_DEPLOYMENT_TARGET,$host in
+ 10.[012],*|,*powerpc*-darwin[5-8]*)
+ _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
+ *)
+ _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
+ esac
+ ;;
+ esac
+ if test yes = "$lt_cv_apple_cc_single_mod"; then
+ _lt_dar_single_mod='$single_module'
+ fi
+ if test yes = "$lt_cv_ld_exported_symbols_list"; then
+ _lt_dar_export_syms=' $wl-exported_symbols_list,$output_objdir/$libname-symbols.expsym'
+ else
+ _lt_dar_export_syms='~$NMEDIT -s $output_objdir/$libname-symbols.expsym $lib'
+ fi
+ if test : != "$DSYMUTIL" && test no = "$lt_cv_ld_force_load"; then
+ _lt_dsymutil='~$DSYMUTIL $lib || :'
+ else
+ _lt_dsymutil=
+ fi
+ ;;
+ esac
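+
+# NOTE (illustrative summary; the flag spellings below are a sketch): the
+# Darwin probes above decide which options libtool may pass to the Apple
+# linker.  When all three checks answer 'yes', a shared-library link can use
+#   -Wl,-single_module -Wl,-exported_symbols_list,<symfile>
+# and convenience archives can be pulled in whole via -Wl,-force_load; when
+# -exported_symbols_list is unusable, symbols are hidden after the link with
+# $NMEDIT -s <symfile> instead.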
+
+# func_munge_path_list VARIABLE PATH
+# -----------------------------------
+# VARIABLE is the name of a variable containing a _space_-separated list of
+# directories to be munged by the contents of PATH, which is a string with
+# the format:
+# "DIR[:DIR]:"
+# string "DIR[ DIR]" will be prepended to VARIABLE
+# ":DIR[:DIR]"
+# string "DIR[ DIR]" will be appended to VARIABLE
+# "DIRP[:DIRP]::[DIRA:]DIRA"
+# string "DIRP[ DIRP]" will be prepended to VARIABLE and string
+# "DIRA[ DIRA]" will be appended to VARIABLE
+# "DIR[:DIR]"
+# VARIABLE will be replaced by "DIR[ DIR]"
+func_munge_path_list ()
+{
+ case x$2 in
+ x)
+ ;;
+ *:)
+ eval $1=\"`$ECHO $2 | $SED 's/:/ /g'` \$$1\"
+ ;;
+ x:*)
+ eval $1=\"\$$1 `$ECHO $2 | $SED 's/:/ /g'`\"
+ ;;
+ *::*)
+ eval $1=\"\$$1\ `$ECHO $2 | $SED -e 's/.*:://' -e 's/:/ /g'`\"
+ eval $1=\"`$ECHO $2 | $SED -e 's/::.*//' -e 's/:/ /g'`\ \$$1\"
+ ;;
+ *)
+ eval $1=\"`$ECHO $2 | $SED 's/:/ /g'`\"
+ ;;
+ esac
+}
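+
+# NOTE (illustrative usage; 'lt_dirs' is a hypothetical variable): given
+#   lt_dirs="/lib /usr/lib"
+# then
+#   func_munge_path_list lt_dirs "/opt/lib:"   # prepends: "/opt/lib /lib /usr/lib"
+#   func_munge_path_list lt_dirs ":/opt/lib"   # appends:  "/lib /usr/lib /opt/lib"
+#   func_munge_path_list lt_dirs "/a::/b"      # prepends /a and appends /b
+#   func_munge_path_list lt_dirs "/a:/b"       # replaces: "/a /b"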
+
+ac_fn_c_check_header_compile "$LINENO" "dlfcn.h" "ac_cv_header_dlfcn_h" "$ac_includes_default
+"
+if test "x$ac_cv_header_dlfcn_h" = xyes
+then :
+ printf "%s\n" "#define HAVE_DLFCN_H 1" >>confdefs.h
+
+fi
+
+
+
+
+
+# Set options
+
+
+
+ enable_dlopen=no
+
+
+ enable_win32_dll=no
+
+
+ # Check whether --enable-shared was given.
+if test ${enable_shared+y}
+then :
+ enableval=$enable_shared; p=${PACKAGE-default}
+ case $enableval in
+ yes) enable_shared=yes ;;
+ no) enable_shared=no ;;
+ *)
+ enable_shared=no
+ # Look at the argument we got. We use all the common list separators.
+ lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR,
+ for pkg in $enableval; do
+ IFS=$lt_save_ifs
+ if test "X$pkg" = "X$p"; then
+ enable_shared=yes
+ fi
+ done
+ IFS=$lt_save_ifs
+ ;;
+ esac
+else $as_nop
+ enable_shared=yes
+fi
+
+
+
+
+
+
+
+
+
+ # Check whether --enable-static was given.
+if test ${enable_static+y}
+then :
+ enableval=$enable_static; p=${PACKAGE-default}
+ case $enableval in
+ yes) enable_static=yes ;;
+ no) enable_static=no ;;
+ *)
+ enable_static=no
+ # Look at the argument we got. We use all the common list separators.
+ lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR,
+ for pkg in $enableval; do
+ IFS=$lt_save_ifs
+ if test "X$pkg" = "X$p"; then
+ enable_static=yes
+ fi
+ done
+ IFS=$lt_save_ifs
+ ;;
+ esac
+else $as_nop
+ enable_static=yes
+fi
+
+
+
+
+
+
+
+
+
+
+# Check whether --with-pic was given.
+if test ${with_pic+y}
+then :
+ withval=$with_pic; lt_p=${PACKAGE-default}
+ case $withval in
+ yes|no) pic_mode=$withval ;;
+ *)
+ pic_mode=default
+ # Look at the argument we got. We use all the common list separators.
+ lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR,
+ for lt_pkg in $withval; do
+ IFS=$lt_save_ifs
+ if test "X$lt_pkg" = "X$lt_p"; then
+ pic_mode=yes
+ fi
+ done
+ IFS=$lt_save_ifs
+ ;;
+ esac
+else $as_nop
+ pic_mode=default
+fi
+
+
+
+
+
+
+
+
+ # Check whether --enable-fast-install was given.
+if test ${enable_fast_install+y}
+then :
+ enableval=$enable_fast_install; p=${PACKAGE-default}
+ case $enableval in
+ yes) enable_fast_install=yes ;;
+ no) enable_fast_install=no ;;
+ *)
+ enable_fast_install=no
+ # Look at the argument we got. We use all the common list separators.
+ lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR,
+ for pkg in $enableval; do
+ IFS=$lt_save_ifs
+ if test "X$pkg" = "X$p"; then
+ enable_fast_install=yes
+ fi
+ done
+ IFS=$lt_save_ifs
+ ;;
+ esac
+else $as_nop
+ enable_fast_install=yes
+fi
+
+
+
+
+
+
+
+
+ shared_archive_member_spec=
+case $host,$enable_shared in
+power*-*-aix[5-9]*,yes)
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking which variant of shared library versioning to provide" >&5
+printf %s "checking which variant of shared library versioning to provide... " >&6; }
+
+# Check whether --with-aix-soname was given.
+if test ${with_aix_soname+y}
+then :
+ withval=$with_aix_soname; case $withval in
+ aix|svr4|both)
+ ;;
+ *)
+ as_fn_error $? "Unknown argument to --with-aix-soname" "$LINENO" 5
+ ;;
+ esac
+ lt_cv_with_aix_soname=$with_aix_soname
+else $as_nop
+ if test ${lt_cv_with_aix_soname+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_with_aix_soname=aix
+fi
+
+ with_aix_soname=$lt_cv_with_aix_soname
+fi
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $with_aix_soname" >&5
+printf "%s\n" "$with_aix_soname" >&6; }
+ if test aix != "$with_aix_soname"; then
+ # For the AIX way of multilib, we name the shared archive member
+ # based on the bitwidth used, traditionally 'shr.o' or 'shr_64.o',
+ # and 'shr.imp' or 'shr_64.imp', respectively, for the Import File.
+    # Even though GNU compilers ignore OBJECT_MODE and need the '-maix64'
+    # flag instead, the AIX toolchain works better with OBJECT_MODE set
+    # (default 32).
+ if test 64 = "${OBJECT_MODE-32}"; then
+ shared_archive_member_spec=shr_64
+ else
+ shared_archive_member_spec=shr
+ fi
+ fi
+ ;;
+*)
+ with_aix_soname=aix
+ ;;
+esac
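+
+# NOTE (illustrative; a typical invocation, hedged): configuring with
+# --with-aix-soname=svr4 (or both) while OBJECT_MODE=64 is exported makes the
+# shared archive member 'shr_64' (shr_64.o, import file shr_64.imp); with the
+# default 32-bit OBJECT_MODE it stays the traditional 'shr'.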
+
+
+
+
+
+
+
+
+
+
+# This can be used to rebuild libtool when needed
+LIBTOOL_DEPS=$ltmain
+
+# Always use our own libtool.
+LIBTOOL='$(SHELL) $(top_builddir)/libtool'
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+test -z "$LN_S" && LN_S="ln -s"
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+if test -n "${ZSH_VERSION+set}"; then
+ setopt NO_GLOB_SUBST
+fi
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for objdir" >&5
+printf %s "checking for objdir... " >&6; }
+if test ${lt_cv_objdir+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ rm -f .libs 2>/dev/null
+mkdir .libs 2>/dev/null
+if test -d .libs; then
+ lt_cv_objdir=.libs
+else
+ # MS-DOS does not allow filenames that begin with a dot.
+ lt_cv_objdir=_libs
+fi
+rmdir .libs 2>/dev/null
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_objdir" >&5
+printf "%s\n" "$lt_cv_objdir" >&6; }
+objdir=$lt_cv_objdir
+
+
+
+
+
+printf "%s\n" "#define LT_OBJDIR \"$lt_cv_objdir/\"" >>confdefs.h
+
+
+
+
+case $host_os in
+aix3*)
+ # AIX sometimes has problems with the GCC collect2 program. For some
+ # reason, if we set the COLLECT_NAMES environment variable, the problems
+ # vanish in a puff of smoke.
+ if test set != "${COLLECT_NAMES+set}"; then
+ COLLECT_NAMES=
+ export COLLECT_NAMES
+ fi
+ ;;
+esac
+
+# Global variables:
+ofile=libtool
+can_build_shared=yes
+
+# All known linkers require a '.a' archive for static linking (except MSVC and
+# ICC, which need '.lib').
+libext=a
+
+with_gnu_ld=$lt_cv_prog_gnu_ld
+
+old_CC=$CC
+old_CFLAGS=$CFLAGS
+
+# Set sane defaults for various variables
+test -z "$CC" && CC=cc
+test -z "$LTCC" && LTCC=$CC
+test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS
+test -z "$LD" && LD=ld
+test -z "$ac_objext" && ac_objext=o
+
+func_cc_basename $compiler
+cc_basename=$func_cc_basename_result
+
+
+# Only perform the check for file, if the check method requires it
+test -z "$MAGIC_CMD" && MAGIC_CMD=file
+case $deplibs_check_method in
+file_magic*)
+ if test "$file_magic_cmd" = '$MAGIC_CMD'; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for ${ac_tool_prefix}file" >&5
+printf %s "checking for ${ac_tool_prefix}file... " >&6; }
+if test ${lt_cv_path_MAGIC_CMD+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ case $MAGIC_CMD in
+[\\/*] | ?:[\\/]*)
+ lt_cv_path_MAGIC_CMD=$MAGIC_CMD # Let the user override the test with a path.
+ ;;
+*)
+ lt_save_MAGIC_CMD=$MAGIC_CMD
+ lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR
+ ac_dummy="/usr/bin$PATH_SEPARATOR$PATH"
+ for ac_dir in $ac_dummy; do
+ IFS=$lt_save_ifs
+ test -z "$ac_dir" && ac_dir=.
+ if test -f "$ac_dir/${ac_tool_prefix}file"; then
+ lt_cv_path_MAGIC_CMD=$ac_dir/"${ac_tool_prefix}file"
+ if test -n "$file_magic_test_file"; then
+ case $deplibs_check_method in
+ "file_magic "*)
+ file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"`
+ MAGIC_CMD=$lt_cv_path_MAGIC_CMD
+ if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null |
+ $EGREP "$file_magic_regex" > /dev/null; then
+ :
+ else
+ cat <<_LT_EOF 1>&2
+
+*** Warning: the command libtool uses to detect shared libraries,
+*** $file_magic_cmd, produces output that libtool cannot recognize.
+*** The result is that libtool may fail to recognize shared libraries
+*** as such. This will affect the creation of libtool libraries that
+*** depend on shared libraries, but programs linked with such libtool
+*** libraries will work regardless of this problem. Nevertheless, you
+*** may want to report the problem to your system manager and/or to
+*** bug-libtool@gnu.org
+
+_LT_EOF
+ fi ;;
+ esac
+ fi
+ break
+ fi
+ done
+ IFS=$lt_save_ifs
+ MAGIC_CMD=$lt_save_MAGIC_CMD
+ ;;
+esac
+fi
+
+MAGIC_CMD=$lt_cv_path_MAGIC_CMD
+if test -n "$MAGIC_CMD"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5
+printf "%s\n" "$MAGIC_CMD" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+
+
+
+if test -z "$lt_cv_path_MAGIC_CMD"; then
+ if test -n "$ac_tool_prefix"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for file" >&5
+printf %s "checking for file... " >&6; }
+if test ${lt_cv_path_MAGIC_CMD+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ case $MAGIC_CMD in
+[\\/*] | ?:[\\/]*)
+ lt_cv_path_MAGIC_CMD=$MAGIC_CMD # Let the user override the test with a path.
+ ;;
+*)
+ lt_save_MAGIC_CMD=$MAGIC_CMD
+ lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR
+ ac_dummy="/usr/bin$PATH_SEPARATOR$PATH"
+ for ac_dir in $ac_dummy; do
+ IFS=$lt_save_ifs
+ test -z "$ac_dir" && ac_dir=.
+ if test -f "$ac_dir/file"; then
+ lt_cv_path_MAGIC_CMD=$ac_dir/"file"
+ if test -n "$file_magic_test_file"; then
+ case $deplibs_check_method in
+ "file_magic "*)
+ file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"`
+ MAGIC_CMD=$lt_cv_path_MAGIC_CMD
+ if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null |
+ $EGREP "$file_magic_regex" > /dev/null; then
+ :
+ else
+ cat <<_LT_EOF 1>&2
+
+*** Warning: the command libtool uses to detect shared libraries,
+*** $file_magic_cmd, produces output that libtool cannot recognize.
+*** The result is that libtool may fail to recognize shared libraries
+*** as such. This will affect the creation of libtool libraries that
+*** depend on shared libraries, but programs linked with such libtool
+*** libraries will work regardless of this problem. Nevertheless, you
+*** may want to report the problem to your system manager and/or to
+*** bug-libtool@gnu.org
+
+_LT_EOF
+ fi ;;
+ esac
+ fi
+ break
+ fi
+ done
+ IFS=$lt_save_ifs
+ MAGIC_CMD=$lt_save_MAGIC_CMD
+ ;;
+esac
+fi
+
+MAGIC_CMD=$lt_cv_path_MAGIC_CMD
+if test -n "$MAGIC_CMD"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5
+printf "%s\n" "$MAGIC_CMD" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ else
+ MAGIC_CMD=:
+ fi
+fi
+
+ fi
+ ;;
+esac
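+
+# NOTE (illustrative sketch; the example value is an assumption): a file_magic
+# deplibs_check_method typically looks like
+#   deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB (shared object|dynamic lib)'
+# The expr call above strips the 'file_magic ' prefix to recover the regex,
+# and a candidate 'file' program is kept only if running it on
+# $file_magic_test_file produces output matching that regex.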
+
+# Use C for the default configuration in the libtool script
+
+lt_save_CC=$CC
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+
+# Source file extension for C test sources.
+ac_ext=c
+
+# Object file extension for compiled C test sources.
+objext=o
+objext=$objext
+
+# Code to be used in simple compile tests
+lt_simple_compile_test_code="int some_variable = 0;"
+
+# Code to be used in simple link tests
+lt_simple_link_test_code='int main(){return(0);}'
+
+
+
+
+
+
+
+# If no C compiler was specified, use CC.
+LTCC=${LTCC-"$CC"}
+
+# If no C compiler flags were specified, use CFLAGS.
+LTCFLAGS=${LTCFLAGS-"$CFLAGS"}
+
+# Allow CC to be a program name with arguments.
+compiler=$CC
+
+# Save the default compiler, since it gets overwritten when the other
+# tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP.
+compiler_DEFAULT=$CC
+
+# save warnings/boilerplate of simple test code
+ac_outfile=conftest.$ac_objext
+echo "$lt_simple_compile_test_code" >conftest.$ac_ext
+eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err
+_lt_compiler_boilerplate=`cat conftest.err`
+$RM conftest*
+
+ac_outfile=conftest.$ac_objext
+echo "$lt_simple_link_test_code" >conftest.$ac_ext
+eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err
+_lt_linker_boilerplate=`cat conftest.err`
+$RM -r conftest*
+
+
+## CAVEAT EMPTOR:
+## There is no encapsulation within the following macros, do not change
+## the running order or otherwise move them around unless you know exactly
+## what you are doing...
+if test -n "$compiler"; then
+
+lt_prog_compiler_no_builtin_flag=
+
+if test yes = "$GCC"; then
+ case $cc_basename in
+ nvcc*)
+ lt_prog_compiler_no_builtin_flag=' -Xcompiler -fno-builtin' ;;
+ *)
+ lt_prog_compiler_no_builtin_flag=' -fno-builtin' ;;
+ esac
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -fno-rtti -fno-exceptions" >&5
+printf %s "checking if $compiler supports -fno-rtti -fno-exceptions... " >&6; }
+if test ${lt_cv_prog_compiler_rtti_exceptions+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_prog_compiler_rtti_exceptions=no
+ ac_outfile=conftest.$ac_objext
+ echo "$lt_simple_compile_test_code" > conftest.$ac_ext
+ lt_compiler_flag="-fno-rtti -fno-exceptions" ## exclude from sc_useless_quotes_in_assignment
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ # The option is referenced via a variable to avoid confusing sed.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>conftest.err)
+ ac_status=$?
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s "$ac_outfile"; then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings other than the usual output.
+ $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp
+ $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2
+ if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then
+ lt_cv_prog_compiler_rtti_exceptions=yes
+ fi
+ fi
+ $RM conftest*
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_rtti_exceptions" >&5
+printf "%s\n" "$lt_cv_prog_compiler_rtti_exceptions" >&6; }
+
+if test yes = "$lt_cv_prog_compiler_rtti_exceptions"; then
+ lt_prog_compiler_no_builtin_flag="$lt_prog_compiler_no_builtin_flag -fno-rtti -fno-exceptions"
+else
+ :
+fi
+
+fi
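+
+# NOTE (illustrative sketch of the flag-probe mechanics used above): with
+#   ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+# and lt_compiler_flag='-fno-rtti -fno-exceptions', the first sed rule turns
+# the command into
+#   $CC -c $CFLAGS $CPPFLAGS $lt_compiler_flag conftest.$ac_ext >&5
+# i.e. the probe flag is spliced in after the last *FLAGS word and is only
+# expanded when the command is eval'd; the PIC and '-c -o' probes below reuse
+# the same trick.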
+
+
+
+
+
+
+ lt_prog_compiler_wl=
+lt_prog_compiler_pic=
+lt_prog_compiler_static=
+
+
+ if test yes = "$GCC"; then
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_static='-static'
+
+ case $host_os in
+ aix*)
+ # All AIX code is PIC.
+ if test ia64 = "$host_cpu"; then
+ # AIX 5 now supports IA64 processor
+ lt_prog_compiler_static='-Bstatic'
+ fi
+ lt_prog_compiler_pic='-fPIC'
+ ;;
+
+ amigaos*)
+ case $host_cpu in
+ powerpc)
+ # see comment about AmigaOS4 .so support
+ lt_prog_compiler_pic='-fPIC'
+ ;;
+ m68k)
+ # FIXME: we need at least 68020 code to build shared libraries, but
+ # adding the '-m68020' flag to GCC prevents building anything better,
+ # like '-m68040'.
+ lt_prog_compiler_pic='-m68020 -resident32 -malways-restore-a4'
+ ;;
+ esac
+ ;;
+
+ beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
+ # PIC is the default for these OSes.
+ ;;
+
+ mingw* | cygwin* | pw32* | os2* | cegcc*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+      # Although the cygwin gcc ignores -fPIC, we still need this for
+      # old-style (--disable-auto-import) libraries
+ lt_prog_compiler_pic='-DDLL_EXPORT'
+ case $host_os in
+ os2*)
+ lt_prog_compiler_static='$wl-static'
+ ;;
+ esac
+ ;;
+
+ darwin* | rhapsody*)
+ # PIC is the default on this platform
+ # Common symbols not allowed in MH_DYLIB files
+ lt_prog_compiler_pic='-fno-common'
+ ;;
+
+ haiku*)
+ # PIC is the default for Haiku.
+ # The "-static" flag exists, but is broken.
+ lt_prog_compiler_static=
+ ;;
+
+ hpux*)
+ # PIC is the default for 64-bit PA HP-UX, but not for 32-bit
+ # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag
+ # sets the default TLS model and affects inlining.
+ case $host_cpu in
+ hppa*64*)
+ # +Z the default
+ ;;
+ *)
+ lt_prog_compiler_pic='-fPIC'
+ ;;
+ esac
+ ;;
+
+ interix[3-9]*)
+ # Interix 3.x gcc -fpic/-fPIC options generate broken code.
+ # Instead, we relocate shared libraries at runtime.
+ ;;
+
+ msdosdjgpp*)
+ # Just because we use GCC doesn't mean we suddenly get shared libraries
+ # on systems that don't support them.
+ lt_prog_compiler_can_build_shared=no
+ enable_shared=no
+ ;;
+
+ *nto* | *qnx*)
+      # QNX uses GNU C++, but we need to define the -shared option too,
+      # otherwise it will core dump.
+ lt_prog_compiler_pic='-fPIC -shared'
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ lt_prog_compiler_pic=-Kconform_pic
+ fi
+ ;;
+
+ *)
+ lt_prog_compiler_pic='-fPIC'
+ ;;
+ esac
+
+ case $cc_basename in
+ nvcc*) # Cuda Compiler Driver 2.2
+ lt_prog_compiler_wl='-Xlinker '
+ if test -n "$lt_prog_compiler_pic"; then
+ lt_prog_compiler_pic="-Xcompiler $lt_prog_compiler_pic"
+ fi
+ ;;
+ esac
+ else
+ # PORTME Check for flag to pass linker flags through the system compiler.
+ case $host_os in
+ aix*)
+ lt_prog_compiler_wl='-Wl,'
+ if test ia64 = "$host_cpu"; then
+ # AIX 5 now supports IA64 processor
+ lt_prog_compiler_static='-Bstatic'
+ else
+ lt_prog_compiler_static='-bnso -bI:/lib/syscalls.exp'
+ fi
+ ;;
+
+ darwin* | rhapsody*)
+ # PIC is the default on this platform
+ # Common symbols not allowed in MH_DYLIB files
+ lt_prog_compiler_pic='-fno-common'
+ case $cc_basename in
+ nagfor*)
+ # NAG Fortran compiler
+ lt_prog_compiler_wl='-Wl,-Wl,,'
+ lt_prog_compiler_pic='-PIC'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+ esac
+ ;;
+
+ mingw* | cygwin* | pw32* | os2* | cegcc*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+ lt_prog_compiler_pic='-DDLL_EXPORT'
+ case $host_os in
+ os2*)
+ lt_prog_compiler_static='$wl-static'
+ ;;
+ esac
+ ;;
+
+ hpux9* | hpux10* | hpux11*)
+ lt_prog_compiler_wl='-Wl,'
+ # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
+ # not for PA HP-UX.
+ case $host_cpu in
+ hppa*64*|ia64*)
+ # +Z the default
+ ;;
+ *)
+ lt_prog_compiler_pic='+Z'
+ ;;
+ esac
+ # Is there a better lt_prog_compiler_static that works with the bundled CC?
+ lt_prog_compiler_static='$wl-a ${wl}archive'
+ ;;
+
+ irix5* | irix6* | nonstopux*)
+ lt_prog_compiler_wl='-Wl,'
+ # PIC (with -KPIC) is the default.
+ lt_prog_compiler_static='-non_shared'
+ ;;
+
+ linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
+ case $cc_basename in
+ # old Intel for x86_64, which still supported -KPIC.
+ ecc*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-static'
+ ;;
+      # flang / f18.  f95 is an alias for gfortran or flang on Debian
+ flang* | f18* | f95*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-fPIC'
+ lt_prog_compiler_static='-static'
+ ;;
+ # icc used to be incompatible with GCC.
+ # ICC 10 doesn't accept -KPIC any more.
+ icc* | ifort*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-fPIC'
+ lt_prog_compiler_static='-static'
+ ;;
+ # Lahey Fortran 8.1.
+ lf95*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='--shared'
+ lt_prog_compiler_static='--static'
+ ;;
+ nagfor*)
+ # NAG Fortran compiler
+ lt_prog_compiler_wl='-Wl,-Wl,,'
+ lt_prog_compiler_pic='-PIC'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+ tcc*)
+ # Fabrice Bellard et al's Tiny C Compiler
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-fPIC'
+ lt_prog_compiler_static='-static'
+ ;;
+ pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+ # Portland Group compilers (*not* the Pentium gcc compiler,
+ # which looks to be a dead project)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-fpic'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+ ccc*)
+ lt_prog_compiler_wl='-Wl,'
+ # All Alpha code is PIC.
+ lt_prog_compiler_static='-non_shared'
+ ;;
+ xl* | bgxl* | bgf* | mpixl*)
+ # IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-qpic'
+ lt_prog_compiler_static='-qstaticlink'
+ ;;
+ *)
+ case `$CC -V 2>&1 | $SED 5q` in
+ *Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [1-7].* | *Sun*Fortran*\ 8.[0-3]*)
+ # Sun Fortran 8.3 passes all unrecognized flags to the linker
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-Bstatic'
+ lt_prog_compiler_wl=''
+ ;;
+ *Sun\ F* | *Sun*Fortran*)
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-Bstatic'
+ lt_prog_compiler_wl='-Qoption ld '
+ ;;
+ *Sun\ C*)
+ # Sun C 5.9
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-Bstatic'
+ lt_prog_compiler_wl='-Wl,'
+ ;;
+ *Intel*\ [CF]*Compiler*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-fPIC'
+ lt_prog_compiler_static='-static'
+ ;;
+ *Portland\ Group*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-fpic'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+ esac
+ ;;
+ esac
+ ;;
+
+ newsos6)
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+
+ *nto* | *qnx*)
+      # QNX uses GNU C++, but we need to define the -shared option too,
+      # otherwise it will core dump.
+ lt_prog_compiler_pic='-fPIC -shared'
+ ;;
+
+ osf3* | osf4* | osf5*)
+ lt_prog_compiler_wl='-Wl,'
+ # All OSF/1 code is PIC.
+ lt_prog_compiler_static='-non_shared'
+ ;;
+
+ rdos*)
+ lt_prog_compiler_static='-non_shared'
+ ;;
+
+ solaris*)
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-Bstatic'
+ case $cc_basename in
+ f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ lt_prog_compiler_wl='-Qoption ld ';;
+ *)
+ lt_prog_compiler_wl='-Wl,';;
+ esac
+ ;;
+
+ sunos4*)
+ lt_prog_compiler_wl='-Qoption ld '
+ lt_prog_compiler_pic='-PIC'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+
+ sysv4 | sysv4.2uw2* | sysv4.3*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ lt_prog_compiler_pic='-Kconform_pic'
+ lt_prog_compiler_static='-Bstatic'
+ fi
+ ;;
+
+ sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+
+ unicos*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_can_build_shared=no
+ ;;
+
+ uts4*)
+ lt_prog_compiler_pic='-pic'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+
+ *)
+ lt_prog_compiler_can_build_shared=no
+ ;;
+ esac
+ fi
+
+case $host_os in
+ # For platforms that do not support PIC, -DPIC is meaningless:
+ *djgpp*)
+ lt_prog_compiler_pic=
+ ;;
+ *)
+ lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+ ;;
+esac
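+
+# NOTE (illustrative; hedged): with GCC on a typical linux* host the case
+# statements above leave lt_prog_compiler_pic='-fPIC', so the flag probed
+# below is '-fPIC -DPIC'; on DJGPP, where PIC is meaningless, the flag stays
+# empty and -DPIC is deliberately not added.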
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+printf %s "checking for $compiler option to produce PIC... " >&6; }
+if test ${lt_cv_prog_compiler_pic+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
+printf "%s\n" "$lt_cv_prog_compiler_pic" >&6; }
+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+
+#
+# Check to make sure the PIC flag actually works.
+#
+if test -n "$lt_prog_compiler_pic"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler PIC flag $lt_prog_compiler_pic works" >&5
+printf %s "checking if $compiler PIC flag $lt_prog_compiler_pic works... " >&6; }
+if test ${lt_cv_prog_compiler_pic_works+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_prog_compiler_pic_works=no
+ ac_outfile=conftest.$ac_objext
+ echo "$lt_simple_compile_test_code" > conftest.$ac_ext
+ lt_compiler_flag="$lt_prog_compiler_pic -DPIC" ## exclude from sc_useless_quotes_in_assignment
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ # The option is referenced via a variable to avoid confusing sed.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>conftest.err)
+ ac_status=$?
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s "$ac_outfile"; then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings other than the usual output.
+ $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp
+ $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2
+ if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then
+ lt_cv_prog_compiler_pic_works=yes
+ fi
+ fi
+ $RM conftest*
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_works" >&5
+printf "%s\n" "$lt_cv_prog_compiler_pic_works" >&6; }
+
+if test yes = "$lt_cv_prog_compiler_pic_works"; then
+ case $lt_prog_compiler_pic in
+ "" | " "*) ;;
+ *) lt_prog_compiler_pic=" $lt_prog_compiler_pic" ;;
+ esac
+else
+ lt_prog_compiler_pic=
+ lt_prog_compiler_can_build_shared=no
+fi
+
+fi
+
+
+
+
+
+
+
+
+
+
+
+#
+# Check to make sure the static flag actually works.
+#
+wl=$lt_prog_compiler_wl eval lt_tmp_static_flag=\"$lt_prog_compiler_static\"
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler static flag $lt_tmp_static_flag works" >&5
+printf %s "checking if $compiler static flag $lt_tmp_static_flag works... " >&6; }
+if test ${lt_cv_prog_compiler_static_works+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_prog_compiler_static_works=no
+ save_LDFLAGS=$LDFLAGS
+ LDFLAGS="$LDFLAGS $lt_tmp_static_flag"
+ echo "$lt_simple_link_test_code" > conftest.$ac_ext
+ if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then
+ # The linker can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test -s conftest.err; then
+ # Append any errors to the config.log.
+ cat conftest.err 1>&5
+ $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp
+ $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2
+ if diff conftest.exp conftest.er2 >/dev/null; then
+ lt_cv_prog_compiler_static_works=yes
+ fi
+ else
+ lt_cv_prog_compiler_static_works=yes
+ fi
+ fi
+ $RM -r conftest*
+ LDFLAGS=$save_LDFLAGS
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_static_works" >&5
+printf "%s\n" "$lt_cv_prog_compiler_static_works" >&6; }
+
+if test yes = "$lt_cv_prog_compiler_static_works"; then
+ :
+else
+ lt_prog_compiler_static=
+fi
+
+
+
+
+
+
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5
+printf %s "checking if $compiler supports -c -o file.$ac_objext... " >&6; }
+if test ${lt_cv_prog_compiler_c_o+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_prog_compiler_c_o=no
+ $RM -r conftest 2>/dev/null
+ mkdir conftest
+ cd conftest
+ mkdir out
+ echo "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ lt_compiler_flag="-o out/conftest2.$ac_objext"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>out/conftest.err)
+ ac_status=$?
+ cat out/conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s out/conftest2.$ac_objext
+ then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp
+ $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2
+ if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then
+ lt_cv_prog_compiler_c_o=yes
+ fi
+ fi
+ chmod u+w . 2>&5
+ $RM conftest*
+ # SGI C++ compiler will create directory out/ii_files/ for
+ # template instantiation
+ test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files
+ $RM out/* && rmdir out
+ cd ..
+ $RM -r conftest
+ $RM conftest*
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o" >&5
+printf "%s\n" "$lt_cv_prog_compiler_c_o" >&6; }
+
+
+
+
+
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5
+printf %s "checking if $compiler supports -c -o file.$ac_objext... " >&6; }
+if test ${lt_cv_prog_compiler_c_o+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_prog_compiler_c_o=no
+ $RM -r conftest 2>/dev/null
+ mkdir conftest
+ cd conftest
+ mkdir out
+ echo "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ lt_compiler_flag="-o out/conftest2.$ac_objext"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>out/conftest.err)
+ ac_status=$?
+ cat out/conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s out/conftest2.$ac_objext
+ then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp
+ $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2
+ if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then
+ lt_cv_prog_compiler_c_o=yes
+ fi
+ fi
+ chmod u+w . 2>&5
+ $RM conftest*
+ # SGI C++ compiler will create directory out/ii_files/ for
+ # template instantiation
+ test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files
+ $RM out/* && rmdir out
+ cd ..
+ $RM -r conftest
+ $RM conftest*
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o" >&5
+printf "%s\n" "$lt_cv_prog_compiler_c_o" >&6; }
+
+
+
+
+hard_links=nottested
+if test no = "$lt_cv_prog_compiler_c_o" && test no != "$need_locks"; then
+ # do not overwrite the value of need_locks provided by the user
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if we can lock with hard links" >&5
+printf %s "checking if we can lock with hard links... " >&6; }
+ hard_links=yes
+ $RM conftest*
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ touch conftest.a
+ ln conftest.a conftest.b 2>&5 || hard_links=no
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $hard_links" >&5
+printf "%s\n" "$hard_links" >&6; }
+ if test no = "$hard_links"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: '$CC' does not support '-c -o', so 'make -j' may be unsafe" >&5
+printf "%s\n" "$as_me: WARNING: '$CC' does not support '-c -o', so 'make -j' may be unsafe" >&2;}
+ need_locks=warn
+ fi
+else
+ need_locks=no
+fi
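+
+# NOTE (illustrative; hedged): when the compiler cannot honour '-c -o' it must
+# write to a fixed object name, so concurrent compiles need to be serialized
+# with hard-link based lock files; the probe above checks whether
+# 'ln conftest.a conftest.b' works, and if it does not, libtool merely warns
+# that 'make -j' may be unsafe.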
+
+
+
+
+
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5
+printf %s "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; }
+
+ runpath_var=
+ allow_undefined_flag=
+ always_export_symbols=no
+ archive_cmds=
+ archive_expsym_cmds=
+ compiler_needs_object=no
+ enable_shared_with_static_runtimes=no
+ export_dynamic_flag_spec=
+ export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+ hardcode_automatic=no
+ hardcode_direct=no
+ hardcode_direct_absolute=no
+ hardcode_libdir_flag_spec=
+ hardcode_libdir_separator=
+ hardcode_minus_L=no
+ hardcode_shlibpath_var=unsupported
+ inherit_rpath=no
+ link_all_deplibs=unknown
+ module_cmds=
+ module_expsym_cmds=
+ old_archive_from_new_cmds=
+ old_archive_from_expsyms_cmds=
+ thread_safe_flag_spec=
+ whole_archive_flag_spec=
+ # include_expsyms should be a list of space-separated symbols to be *always*
+ # included in the symbol list
+ include_expsyms=
+ # exclude_expsyms can be an extended regexp of symbols to exclude
+ # it will be wrapped by ' (' and ')$', so one must not match beginning or
+ # end of line. Example: 'a|bc|.*d.*' will exclude the symbols 'a' and 'bc',
+ # as well as any symbol that contains 'd'.
+ exclude_expsyms='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
+  # Although _GLOBAL_OFFSET_TABLE_ is a valid C symbol name, most a.out
+ # platforms (ab)use it in PIC code, but their linkers get confused if
+ # the symbol is explicitly referenced. Since portable code cannot
+ # rely on this symbol name, it's probably fine to never include it in
+ # preloaded symbol tables.
+ # Exclude shared library initialization/finalization symbols.
+ extract_expsyms_cmds=
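+
+  # NOTE (illustrative sketch; the sample listing is an assumption): for
+  # objects exporting the functions foo and bar, the export_symbols_cmds
+  # pipeline above ($NM | $global_symbol_pipe | sed | sort | uniq) leaves a
+  # file $export_symbols with one name per line:
+  #   bar
+  #   foo
+  # while names matching exclude_expsyms (e.g. _GLOBAL_OFFSET_TABLE_) are
+  # meant to be kept out of preloaded symbol lists.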
+
+ case $host_os in
+ cygwin* | mingw* | pw32* | cegcc*)
+ # FIXME: the MSVC++ and ICC port hasn't been tested in a loooong time
+ # When not using gcc, we currently assume that we are using
+ # Microsoft Visual C++ or Intel C++ Compiler.
+ if test yes != "$GCC"; then
+ with_gnu_ld=no
+ fi
+ ;;
+ interix*)
+ # we just hope/assume this is gcc and not c89 (= MSVC++ or ICC)
+ with_gnu_ld=yes
+ ;;
+ openbsd* | bitrig*)
+ with_gnu_ld=no
+ ;;
+ linux* | k*bsd*-gnu | gnu*)
+ link_all_deplibs=no
+ ;;
+ esac
+
+ ld_shlibs=yes
+
+ # On some targets, GNU ld is compatible enough with the native linker
+ # that we're better off using the native interface for both.
+ lt_use_gnu_ld_interface=no
+ if test yes = "$with_gnu_ld"; then
+ case $host_os in
+ aix*)
+ # The AIX port of GNU ld has always aspired to compatibility
+ # with the native linker. However, as the warning in the GNU ld
+ # block says, versions before 2.19.5* couldn't really create working
+ # shared libraries, regardless of the interface used.
+ case `$LD -v 2>&1` in
+ *\ \(GNU\ Binutils\)\ 2.19.5*) ;;
+ *\ \(GNU\ Binutils\)\ 2.[2-9]*) ;;
+ *\ \(GNU\ Binutils\)\ [3-9]*) ;;
+ *)
+ lt_use_gnu_ld_interface=yes
+ ;;
+ esac
+ ;;
+ *)
+ lt_use_gnu_ld_interface=yes
+ ;;
+ esac
+ fi
+
+ if test yes = "$lt_use_gnu_ld_interface"; then
+ # If archive_cmds runs LD, not CC, wlarc should be empty
+ wlarc='$wl'
+
+ # Set some defaults for GNU ld with shared library support. These
+ # are reset later if shared libraries are not supported. Putting them
+ # here allows them to be overridden if necessary.
+ runpath_var=LD_RUN_PATH
+ hardcode_libdir_flag_spec='$wl-rpath $wl$libdir'
+ export_dynamic_flag_spec='$wl--export-dynamic'
+ # ancient GNU ld didn't support --whole-archive et al.
+ if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then
+ whole_archive_flag_spec=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive'
+ else
+ whole_archive_flag_spec=
+ fi
+ supports_anon_versioning=no
+ case `$LD -v | $SED -e 's/([^)]\+)\s\+//' 2>&1` in
+ *GNU\ gold*) supports_anon_versioning=yes ;;
+ *\ [01].* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11
+ *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ...
+ *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ...
+ *\ 2.11.*) ;; # other 2.11 versions
+ *) supports_anon_versioning=yes ;;
+ esac
+
+ # See if GNU ld supports shared libraries.
+ case $host_os in
+ aix[3-9]*)
+ # On AIX/PPC, the GNU linker is very broken
+ if test ia64 != "$host_cpu"; then
+ ld_shlibs=no
+ cat <<_LT_EOF 1>&2
+
+*** Warning: the GNU linker, at least up to release 2.19, is reported
+*** to be unable to reliably create shared libraries on AIX.
+*** Therefore, libtool is disabling shared libraries support. If you
+*** really care for shared libraries, you may want to install binutils
+*** 2.20 or above, or modify your PATH so that a non-GNU linker is found.
+*** You will then need to restart the configuration process.
+
+_LT_EOF
+ fi
+ ;;
+
+ amigaos*)
+ case $host_cpu in
+ powerpc)
+ # see comment about AmigaOS4 .so support
+ archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+ archive_expsym_cmds=''
+ ;;
+ m68k)
+ archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_minus_L=yes
+ ;;
+ esac
+ ;;
+
+ beos*)
+ if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+ allow_undefined_flag=unsupported
+ # Joseph Beckenbach says some releases of gcc
+ # support --undefined. This deserves some investigation. FIXME
+ archive_cmds='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+ else
+ ld_shlibs=no
+ fi
+ ;;
+
+ cygwin* | mingw* | pw32* | cegcc*)
+ # hardcode_libdir_flag_spec is actually meaningless,
+ # as there is no search path for DLLs.
+ hardcode_libdir_flag_spec='-L$libdir'
+ export_dynamic_flag_spec='$wl--export-all-symbols'
+ allow_undefined_flag=unsupported
+ always_export_symbols=no
+ enable_shared_with_static_runtimes=yes
+ export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
+ exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+
+ if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+ archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+ # If the export-symbols file already is a .def file, use it as
+ # is; otherwise, prepend EXPORTS...
+ archive_expsym_cmds='if test DEF = "`$SED -n -e '\''s/^[ ]*//'\'' -e '\''/^\(;.*\)*$/d'\'' -e '\''s/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p'\'' -e q $export_symbols`" ; then
+ cp $export_symbols $output_objdir/$soname.def;
+ else
+ echo EXPORTS > $output_objdir/$soname.def;
+ cat $export_symbols >> $output_objdir/$soname.def;
+ fi~
+ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+ else
+ ld_shlibs=no
+ fi
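+ # Illustration only (my_function/my_variable are made-up names): a module
+ # definition file accepted as-is by the test above starts with an EXPORTS
+ # or LIBRARY statement, e.g.
+ #   EXPORTS
+ #   my_function
+ #   my_variable DATA
+ # Anything else is treated as a bare symbol list and gets an EXPORTS
+ # header prepended before being passed to $CC.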
+ ;;
+
+ haiku*)
+ archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+ link_all_deplibs=yes
+ ;;
+
+ os2*)
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_minus_L=yes
+ allow_undefined_flag=unsupported
+ shrext_cmds=.dll
+ archive_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
+ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
+ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
+ $ECHO EXPORTS >> $output_objdir/$libname.def~
+ emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~
+ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
+ emximp -o $lib $output_objdir/$libname.def'
+ archive_expsym_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
+ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
+ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
+ $ECHO EXPORTS >> $output_objdir/$libname.def~
+ prefix_cmds="$SED"~
+ if test EXPORTS = "`$SED 1q $export_symbols`"; then
+ prefix_cmds="$prefix_cmds -e 1d";
+ fi~
+ prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~
+ cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~
+ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
+ emximp -o $lib $output_objdir/$libname.def'
+ old_archive_from_new_cmds='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def'
+ enable_shared_with_static_runtimes=yes
+ file_list_spec='@'
+ ;;
+
+ interix[3-9]*)
+ hardcode_direct=no
+ hardcode_shlibpath_var=no
+ hardcode_libdir_flag_spec='$wl-rpath,$libdir'
+ export_dynamic_flag_spec='$wl-E'
+ # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc.
+ # Instead, shared libraries are loaded at an image base (0x10000000 by
+ # default) and relocated if they conflict, which is a slow, very memory-
+ # consuming and fragmenting process. To avoid this, we pick a random,
+ # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link
+ # time. Moving up from 0x10000000 also allows more sbrk(2) space.
+ archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
+ archive_expsym_cmds='$SED "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
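+ # The image-base expression above, spelled out (illustration only):
+ #   ${RANDOM-$$} % 4096 / 2   ->  an integer in 0..2047
+ #   * 262144                  ->  a 256 KiB-aligned offset up to 0x1FFC0000
+ #   + 1342177280              ->  shifted into 0x50000000..0x6FFC0000,
+ # i.e. the range described in the comment above.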
+ ;;
+
+ gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu)
+ tmp_diet=no
+ if test linux-dietlibc = "$host_os"; then
+ case $cc_basename in
+ diet\ *) tmp_diet=yes;; # linux-dietlibc with static linking (!diet-dyn)
+ esac
+ fi
+ if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \
+ && test no = "$tmp_diet"
+ then
+ tmp_addflag=' $pic_flag'
+ tmp_sharedflag='-shared'
+ case $cc_basename,$host_cpu in
+ pgcc*) # Portland Group C compiler
+ whole_archive_flag_spec='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
+ tmp_addflag=' $pic_flag'
+ ;;
+ pgf77* | pgf90* | pgf95* | pgfortran*)
+ # Portland Group f77 and f90 compilers
+ whole_archive_flag_spec='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
+ tmp_addflag=' $pic_flag -Mnomain' ;;
+ ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64
+ tmp_addflag=' -i_dynamic' ;;
+ efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64
+ tmp_addflag=' -i_dynamic -nofor_main' ;;
+ ifc* | ifort*) # Intel Fortran compiler
+ tmp_addflag=' -nofor_main' ;;
+ lf95*) # Lahey Fortran 8.1
+ whole_archive_flag_spec=
+ tmp_sharedflag='--shared' ;;
+ nagfor*) # NAGFOR 5.3
+ tmp_sharedflag='-Wl,-shared' ;;
+ xl[cC]* | bgxl[cC]* | mpixl[cC]*) # IBM XL C 8.0 on PPC (deal with xlf below)
+ tmp_sharedflag='-qmkshrobj'
+ tmp_addflag= ;;
+ nvcc*) # Cuda Compiler Driver 2.2
+ whole_archive_flag_spec='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
+ compiler_needs_object=yes
+ ;;
+ esac
+ case `$CC -V 2>&1 | $SED 5q` in
+ *Sun\ C*) # Sun C 5.9
+ whole_archive_flag_spec='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
+ compiler_needs_object=yes
+ tmp_sharedflag='-G' ;;
+ *Sun\ F*) # Sun Fortran 8.3
+ tmp_sharedflag='-G' ;;
+ esac
+ archive_cmds='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+
+ if test yes = "$supports_anon_versioning"; then
+ archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ echo "local: *; };" >> $output_objdir/$libname.ver~
+ $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib'
+ fi
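+ # Illustration only ('foo' and 'bar' are made-up symbols): for an export
+ # list containing foo and bar, the version script written to
+ # $output_objdir/$libname.ver above reads
+ #   { global:
+ #   foo;
+ #   bar;
+ #   local: *; };
+ # which exports exactly the listed symbols and hides everything else.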
+
+ case $cc_basename in
+ tcc*)
+ hardcode_libdir_flag_spec='$wl-rpath $wl$libdir'
+ export_dynamic_flag_spec='-rdynamic'
+ ;;
+ xlf* | bgf* | bgxlf* | mpixlf*)
+ # IBM XL Fortran 10.1 on PPC cannot create shared libs itself
+ whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ hardcode_libdir_flag_spec='$wl-rpath $wl$libdir'
+ archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ if test yes = "$supports_anon_versioning"; then
+ archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ echo "local: *; };" >> $output_objdir/$libname.ver~
+ $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ fi
+ ;;
+ esac
+ else
+ ld_shlibs=no
+ fi
+ ;;
+
+ netbsd* | netbsdelf*-gnu)
+ if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
+ archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ wlarc=
+ else
+ archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+ archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
+ fi
+ ;;
+
+ solaris*)
+ if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then
+ ld_shlibs=no
+ cat <<_LT_EOF 1>&2
+
+*** Warning: The releases 2.8.* of the GNU linker cannot reliably
+*** create shared libraries on Solaris systems. Therefore, libtool
+*** is disabling shared libraries support. We urge you to upgrade GNU
+*** binutils to release 2.9.1 or newer. Another option is to modify
+*** your PATH or compiler configuration so that the native linker is
+*** used, and then restart.
+
+_LT_EOF
+ elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+ archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+ archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
+ else
+ ld_shlibs=no
+ fi
+ ;;
+
+ sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*)
+ case `$LD -v 2>&1` in
+ *\ [01].* | *\ 2.[0-9].* | *\ 2.1[0-5].*)
+ ld_shlibs=no
+ cat <<_LT_EOF 1>&2
+
+*** Warning: Releases of the GNU linker prior to 2.16.91.0.3 cannot
+*** reliably create shared libraries on SCO systems. Therefore, libtool
+*** is disabling shared libraries support. We urge you to upgrade GNU
+*** binutils to release 2.16.91.0.3 or newer. Another option is to modify
+*** your PATH or compiler configuration so that the native linker is
+*** used, and then restart.
+
+_LT_EOF
+ ;;
+ *)
+ # For security reasons, it is highly recommended that you always
+ # use absolute paths for naming shared libraries, and exclude the
+ # DT_RUNPATH tag from executables and libraries. But doing so
+ # requires that you compile everything twice, which is a pain.
+ if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+ hardcode_libdir_flag_spec='$wl-rpath $wl$libdir'
+ archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+ archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
+ else
+ ld_shlibs=no
+ fi
+ ;;
+ esac
+ ;;
+
+ sunos4*)
+ archive_cmds='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ wlarc=
+ hardcode_direct=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+ *)
+ if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+ archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+ archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
+ else
+ ld_shlibs=no
+ fi
+ ;;
+ esac
+
+ if test no = "$ld_shlibs"; then
+ runpath_var=
+ hardcode_libdir_flag_spec=
+ export_dynamic_flag_spec=
+ whole_archive_flag_spec=
+ fi
+ else
+ # PORTME fill in a description of your system's linker (not GNU ld)
+ case $host_os in
+ aix3*)
+ allow_undefined_flag=unsupported
+ always_export_symbols=yes
+ archive_expsym_cmds='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname'
+ # Note: this linker hardcodes the directories in LIBPATH if there
+ # are no directories specified by -L.
+ hardcode_minus_L=yes
+ if test yes = "$GCC" && test -z "$lt_prog_compiler_static"; then
+ # Neither direct hardcoding nor static linking is supported with a
+ # broken collect2.
+ hardcode_direct=unsupported
+ fi
+ ;;
+
+ aix[4-9]*)
+ if test ia64 = "$host_cpu"; then
+ # On IA64, the linker does run time linking by default, so we don't
+ # have to do anything special.
+ aix_use_runtimelinking=no
+ exp_sym_flag='-Bexport'
+ no_entry_flag=
+ else
+ # If we're using GNU nm, then we don't want the "-C" option.
+ # To GNU nm, -C means demangle; to AIX nm, it means do not demangle.
+ # Without the "-l" option, or with the "-B" option, AIX nm treats
+ # weak defined symbols like other global defined symbols, whereas
+ # GNU nm marks them as "W".
+ # While the 'weak' keyword is ignored in the Export File, we need
+ # it in the Import File for the 'aix-soname' feature, so we have
+ # to replace the "-B" option with "-P" for AIX nm.
+ if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then
+ export_symbols_cmds='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && (substr(\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols'
+ else
+ export_symbols_cmds='`func_echo_all $NM | $SED -e '\''s/B\([^B]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "L") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && (substr(\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols'
+ fi
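+ # Illustration of the first (GNU nm) pipeline above: for nm output such as
+ #   00000120 T main
+ #   00000200 W weak_helper
+ # the awk script emits 'main' and 'weak_helper weak'; symbols whose names
+ # begin with '.' are skipped.  (The addresses and names are made up.)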
+ aix_use_runtimelinking=no
+
+ # Test if we are trying to use run time linking or normal
+ # AIX style linking. If -brtl is somewhere in LDFLAGS, we
+ # have runtime linking enabled, and use it for executables.
+ # For shared libraries, we enable/disable runtime linking
+ # depending on the kind of the shared library created -
+ # when "with_aix_soname,aix_use_runtimelinking" is:
+ # "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables
+ # "aix,yes" lib.so shared, rtl:yes, for executables
+ # lib.a static archive
+ # "both,no" lib.so.V(shr.o) shared, rtl:yes
+ # lib.a(lib.so.V) shared, rtl:no, for executables
+ # "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables
+ # lib.a(lib.so.V) shared, rtl:no
+ # "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables
+ # lib.a static archive
+ case $host_os in aix4.[23]|aix4.[23].*|aix[5-9]*)
+ for ld_flag in $LDFLAGS; do
+ if (test x-brtl = "x$ld_flag" || test x-Wl,-brtl = "x$ld_flag"); then
+ aix_use_runtimelinking=yes
+ break
+ fi
+ done
+ if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then
+ # With aix-soname=svr4, we create the lib.so.V shared archives only,
+ # so we don't have lib.a shared libs to link our executables.
+ # We have to force runtime linking in this case.
+ aix_use_runtimelinking=yes
+ LDFLAGS="$LDFLAGS -Wl,-brtl"
+ fi
+ ;;
+ esac
+
+ exp_sym_flag='-bexport'
+ no_entry_flag='-bnoentry'
+ fi
+
+ # When large executables or shared objects are built, AIX ld can
+ # have problems creating the table of contents. If linking a library
+ # or program results in "error TOC overflow" add -mminimal-toc to
+ # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not
+ # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
+
+ archive_cmds=''
+ hardcode_direct=yes
+ hardcode_direct_absolute=yes
+ hardcode_libdir_separator=':'
+ link_all_deplibs=yes
+ file_list_spec='$wl-f,'
+ case $with_aix_soname,$aix_use_runtimelinking in
+ aix,*) ;; # traditional, no import file
+ svr4,* | *,yes) # use import file
+ # The Import File defines what to hardcode.
+ hardcode_direct=no
+ hardcode_direct_absolute=no
+ ;;
+ esac
+
+ if test yes = "$GCC"; then
+ case $host_os in aix4.[012]|aix4.[012].*)
+ # We only want to do this on AIX 4.2 and lower, the check
+ # below for broken collect2 doesn't work under 4.3+
+ collect2name=`$CC -print-prog-name=collect2`
+ if test -f "$collect2name" &&
+ strings "$collect2name" | $GREP resolve_lib_name >/dev/null
+ then
+ # We have reworked collect2
+ :
+ else
+ # We have old collect2
+ hardcode_direct=unsupported
+ # It fails to find uninstalled libraries when the uninstalled
+ # path is not listed in the libpath. Setting hardcode_minus_L
+ # to unsupported forces relinking
+ hardcode_minus_L=yes
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_libdir_separator=
+ fi
+ ;;
+ esac
+ shared_flag='-shared'
+ if test yes = "$aix_use_runtimelinking"; then
+ shared_flag="$shared_flag "'$wl-G'
+ fi
+ # Need to ensure runtime linking is disabled for the traditional
+ # shared library, or the linker may eventually find shared libraries
+ # /with/ Import File - we do not want to mix them.
+ shared_flag_aix='-shared'
+ shared_flag_svr4='-shared $wl-G'
+ else
+ # not using gcc
+ if test ia64 = "$host_cpu"; then
+ # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
+ # chokes on -Wl,-G. The following line is correct:
+ shared_flag='-G'
+ else
+ if test yes = "$aix_use_runtimelinking"; then
+ shared_flag='$wl-G'
+ else
+ shared_flag='$wl-bM:SRE'
+ fi
+ shared_flag_aix='$wl-bM:SRE'
+ shared_flag_svr4='$wl-G'
+ fi
+ fi
+
+ export_dynamic_flag_spec='$wl-bexpall'
+ # It seems that -bexpall does not export symbols beginning with
+ # underscore (_), so it is better to generate a list of symbols to export.
+ always_export_symbols=yes
+ if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then
+ # Warning - without using the other runtime loading flags (-brtl),
+ # -berok will link without error, but may produce a broken library.
+ allow_undefined_flag='-berok'
+ # Determine the default libpath from the value encoded in an
+ # empty executable.
+ if test set = "${lt_cv_aix_libpath+set}"; then
+ aix_libpath=$lt_cv_aix_libpath
+else
+ if test ${lt_cv_aix_libpath_+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+
+ lt_aix_libpath_sed='
+ /Import File Strings/,/^$/ {
+ /^0/ {
+ s/^0 *\([^ ]*\) *$/\1/
+ p
+ }
+ }'
+ lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+ # Check for a 64-bit object if we didn't find anything.
+ if test -z "$lt_cv_aix_libpath_"; then
+ lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+ fi
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+ if test -z "$lt_cv_aix_libpath_"; then
+ lt_cv_aix_libpath_=/usr/lib:/lib
+ fi
+
+fi
+
+ aix_libpath=$lt_cv_aix_libpath_
+fi
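+# Illustration only: the 'dump -H' output parsed above contains an
+# "Import File Strings" section that looks roughly like
+#   INDEX  PATH              BASE     MEMBER
+#   0      /usr/lib:/lib
+#   1      /usr/lib          libc.a   shr.o
+# and the sed script keeps just the PATH field of entry 0, i.e. the default
+# libpath the linker recorded in the empty test executable.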
+
+ hardcode_libdir_flag_spec='$wl-blibpath:$libdir:'"$aix_libpath"
+ archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag
+ else
+ if test ia64 = "$host_cpu"; then
+ hardcode_libdir_flag_spec='$wl-R $libdir:/usr/lib:/lib'
+ allow_undefined_flag="-z nodefs"
+ archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols"
+ else
+ # Determine the default libpath from the value encoded in an
+ # empty executable.
+ if test set = "${lt_cv_aix_libpath+set}"; then
+ aix_libpath=$lt_cv_aix_libpath
+else
+ if test ${lt_cv_aix_libpath_+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+
+ lt_aix_libpath_sed='
+ /Import File Strings/,/^$/ {
+ /^0/ {
+ s/^0 *\([^ ]*\) *$/\1/
+ p
+ }
+ }'
+ lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+ # Check for a 64-bit object if we didn't find anything.
+ if test -z "$lt_cv_aix_libpath_"; then
+ lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+ fi
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+ if test -z "$lt_cv_aix_libpath_"; then
+ lt_cv_aix_libpath_=/usr/lib:/lib
+ fi
+
+fi
+
+ aix_libpath=$lt_cv_aix_libpath_
+fi
+
+ hardcode_libdir_flag_spec='$wl-blibpath:$libdir:'"$aix_libpath"
+ # Warning - without using the other run time loading flags,
+ # -berok will link without error, but may produce a broken library.
+ no_undefined_flag=' $wl-bernotok'
+ allow_undefined_flag=' $wl-berok'
+ if test yes = "$with_gnu_ld"; then
+ # We only use this code for GNU lds that support --whole-archive.
+ whole_archive_flag_spec='$wl--whole-archive$convenience $wl--no-whole-archive'
+ else
+ # Exported symbols can be pulled into shared objects from archives
+ whole_archive_flag_spec='$convenience'
+ fi
+ archive_cmds_need_lc=yes
+ archive_expsym_cmds='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d'
+ # -brtl affects multiple linker settings, -berok does not and is overridden later
+ compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([, ]\\)%-berok\\1%g"`'
+ if test svr4 != "$with_aix_soname"; then
+ # This is similar to how AIX traditionally builds its shared libraries.
+ archive_expsym_cmds="$archive_expsym_cmds"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname'
+ fi
+ if test aix != "$with_aix_soname"; then
+ archive_expsym_cmds="$archive_expsym_cmds"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! $soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp'
+ else
+ # used by -dlpreopen to get the symbols
+ archive_expsym_cmds="$archive_expsym_cmds"'~$MV $output_objdir/$realname.d/$soname $output_objdir'
+ fi
+ archive_expsym_cmds="$archive_expsym_cmds"'~$RM -r $output_objdir/$realname.d'
+ fi
+ fi
+ ;;
+
+ amigaos*)
+ case $host_cpu in
+ powerpc)
+ # see comment about AmigaOS4 .so support
+ archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+ archive_expsym_cmds=''
+ ;;
+ m68k)
+ archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_minus_L=yes
+ ;;
+ esac
+ ;;
+
+ bsdi[45]*)
+ export_dynamic_flag_spec=-rdynamic
+ ;;
+
+ cygwin* | mingw* | pw32* | cegcc*)
+ # When not using gcc, we currently assume that we are using
+ # Microsoft Visual C++ or Intel C++ Compiler.
+ # hardcode_libdir_flag_spec is actually meaningless, as there is
+ # no search path for DLLs.
+ case $cc_basename in
+ cl* | icl*)
+ # Native MSVC or ICC
+ hardcode_libdir_flag_spec=' '
+ allow_undefined_flag=unsupported
+ always_export_symbols=yes
+ file_list_spec='@'
+ # Tell ltmain to make .lib files, not .a files.
+ libext=lib
+ # Tell ltmain to make .dll files, not .so files.
+ shrext_cmds=.dll
+ # FIXME: Setting linknames here is a bad hack.
+ archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames='
+ archive_expsym_cmds='if test DEF = "`$SED -n -e '\''s/^[ ]*//'\'' -e '\''/^\(;.*\)*$/d'\'' -e '\''s/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p'\'' -e q $export_symbols`" ; then
+ cp "$export_symbols" "$output_objdir/$soname.def";
+ echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp";
+ else
+ $SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols > $output_objdir/$soname.exp;
+ fi~
+ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
+ linknames='
+ # The linker will not automatically build a static lib if we build a DLL.
+ # _LT_TAGVAR(old_archive_from_new_cmds, )='true'
+ enable_shared_with_static_runtimes=yes
+ exclude_expsyms='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*'
+ export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
+ # Don't use ranlib
+ old_postinstall_cmds='chmod 644 $oldlib'
+ postlink_cmds='lt_outputfile="@OUTPUT@"~
+ lt_tool_outputfile="@TOOL_OUTPUT@"~
+ case $lt_outputfile in
+ *.exe|*.EXE) ;;
+ *)
+ lt_outputfile=$lt_outputfile.exe
+ lt_tool_outputfile=$lt_tool_outputfile.exe
+ ;;
+ esac~
+ if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then
+ $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
+ $RM "$lt_outputfile.manifest";
+ fi'
+ ;;
+ *)
+ # Assume MSVC and ICC wrapper
+ hardcode_libdir_flag_spec=' '
+ allow_undefined_flag=unsupported
+ # Tell ltmain to make .lib files, not .a files.
+ libext=lib
+ # Tell ltmain to make .dll files, not .so files.
+ shrext_cmds=.dll
+ # FIXME: Setting linknames here is a bad hack.
+ archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+ # The linker will automatically build a .lib file if we build a DLL.
+ old_archive_from_new_cmds='true'
+ # FIXME: Should let the user specify the lib program.
+ old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+ enable_shared_with_static_runtimes=yes
+ ;;
+ esac
+ ;;
+
+ darwin* | rhapsody*)
+
+
+ archive_cmds_need_lc=no
+ hardcode_direct=no
+ hardcode_automatic=yes
+ hardcode_shlibpath_var=unsupported
+ if test yes = "$lt_cv_ld_force_load"; then
+ whole_archive_flag_spec='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience $wl-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`'
+
+ else
+ whole_archive_flag_spec=''
+ fi
+ link_all_deplibs=yes
+ allow_undefined_flag=$_lt_dar_allow_undefined
+ case $cc_basename in
+ ifort*|nagfor*) _lt_dar_can_shared=yes ;;
+ *) _lt_dar_can_shared=$GCC ;;
+ esac
+ if test yes = "$_lt_dar_can_shared"; then
+ output_verbose_link_cmd=func_echo_all
+ archive_cmds="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dsymutil"
+ module_cmds="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dsymutil"
+ archive_expsym_cmds="$SED 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dar_export_syms$_lt_dsymutil"
+ module_expsym_cmds="$SED -e 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dar_export_syms$_lt_dsymutil"
+
+ else
+ ld_shlibs=no
+ fi
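+ # Illustration only: Mach-O prepends an underscore to C symbol names, so
+ # the $SED commands above turn an export list entry such as 'main' into
+ # '_main' in the $libname-symbols.expsym file that the linker is then
+ # pointed at.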
+
+ ;;
+
+ dgux*)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_shlibpath_var=no
+ ;;
+
+ # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor
+ # support. Future versions do this automatically, but an explicit c++rt0.o
+ # does not break anything, and helps significantly (at the cost of a little
+ # extra space).
+ freebsd2.2*)
+ archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o'
+ hardcode_libdir_flag_spec='-R$libdir'
+ hardcode_direct=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+ # Unfortunately, older versions of FreeBSD 2 do not have this feature.
+ freebsd2.*)
+ archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct=yes
+ hardcode_minus_L=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+ # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+ freebsd* | dragonfly* | midnightbsd*)
+ archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec='-R$libdir'
+ hardcode_direct=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+ hpux9*)
+ if test yes = "$GCC"; then
+ archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib'
+ else
+ archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib'
+ fi
+ hardcode_libdir_flag_spec='$wl+b $wl$libdir'
+ hardcode_libdir_separator=:
+ hardcode_direct=yes
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L=yes
+ export_dynamic_flag_spec='$wl-E'
+ ;;
+
+ hpux10*)
+ if test yes,no = "$GCC,$with_gnu_ld"; then
+ archive_cmds='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+ fi
+ if test no = "$with_gnu_ld"; then
+ hardcode_libdir_flag_spec='$wl+b $wl$libdir'
+ hardcode_libdir_separator=:
+ hardcode_direct=yes
+ hardcode_direct_absolute=yes
+ export_dynamic_flag_spec='$wl-E'
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L=yes
+ fi
+ ;;
+
+ hpux11*)
+ if test yes,no = "$GCC,$with_gnu_ld"; then
+ case $host_cpu in
+ hppa*64*)
+ archive_cmds='$CC -shared $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ ia64*)
+ archive_cmds='$CC -shared $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ *)
+ archive_cmds='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ esac
+ else
+ case $host_cpu in
+ hppa*64*)
+ archive_cmds='$CC -b $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ ia64*)
+ archive_cmds='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ *)
+
+ # Older versions of the 11.00 compiler do not understand -b yet
+ # (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does)
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $CC understands -b" >&5
+printf %s "checking if $CC understands -b... " >&6; }
+if test ${lt_cv_prog_compiler__b+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_prog_compiler__b=no
+ save_LDFLAGS=$LDFLAGS
+ LDFLAGS="$LDFLAGS -b"
+ echo "$lt_simple_link_test_code" > conftest.$ac_ext
+ if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then
+ # The linker can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test -s conftest.err; then
+ # Append any errors to the config.log.
+ cat conftest.err 1>&5
+ $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp
+ $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2
+ if diff conftest.exp conftest.er2 >/dev/null; then
+ lt_cv_prog_compiler__b=yes
+ fi
+ else
+ lt_cv_prog_compiler__b=yes
+ fi
+ fi
+ $RM -r conftest*
+ LDFLAGS=$save_LDFLAGS
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler__b" >&5
+printf "%s\n" "$lt_cv_prog_compiler__b" >&6; }
+
+if test yes = "$lt_cv_prog_compiler__b"; then
+ archive_cmds='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+else
+ archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+fi
+
+ ;;
+ esac
+ fi
+ if test no = "$with_gnu_ld"; then
+ hardcode_libdir_flag_spec='$wl+b $wl$libdir'
+ hardcode_libdir_separator=:
+
+ case $host_cpu in
+ hppa*64*|ia64*)
+ hardcode_direct=no
+ hardcode_shlibpath_var=no
+ ;;
+ *)
+ hardcode_direct=yes
+ hardcode_direct_absolute=yes
+ export_dynamic_flag_spec='$wl-E'
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L=yes
+ ;;
+ esac
+ fi
+ ;;
+
+ irix5* | irix6* | nonstopux*)
+ if test yes = "$GCC"; then
+ archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
+ # Try to use the -exported_symbol ld option; if it does not
+ # work, assume that -exports_file does not work either and
+ # implicitly export all symbols.
+ # This should be the same for all languages, so no per-tag cache variable.
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
+printf %s "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
+if test ${lt_cv_irix_exported_symbol+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ save_LDFLAGS=$LDFLAGS
+ LDFLAGS="$LDFLAGS -shared $wl-exported_symbol ${wl}foo $wl-update_registry $wl/dev/null"
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+int foo (void) { return 0; }
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ lt_cv_irix_exported_symbol=yes
+else $as_nop
+ lt_cv_irix_exported_symbol=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+ LDFLAGS=$save_LDFLAGS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
+printf "%s\n" "$lt_cv_irix_exported_symbol" >&6; }
+ if test yes = "$lt_cv_irix_exported_symbol"; then
+ archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations $wl-exports_file $wl$export_symbols -o $lib'
+ fi
+ link_all_deplibs=no
+ else
+ archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
+ archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -exports_file $export_symbols -o $lib'
+ fi
+ archive_cmds_need_lc='no'
+ hardcode_libdir_flag_spec='$wl-rpath $wl$libdir'
+ hardcode_libdir_separator=:
+ inherit_rpath=yes
+ link_all_deplibs=yes
+ ;;
+
+ linux*)
+ case $cc_basename in
+ tcc*)
+ # Fabrice Bellard et al.'s Tiny C Compiler
+ ld_shlibs=yes
+ archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec='$wl-rpath $wl$libdir'
+ ;;
+ esac
+ ;;
+
+ netbsd* | netbsdelf*-gnu)
+ if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
+ archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out
+ else
+ archive_cmds='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF
+ fi
+ hardcode_libdir_flag_spec='-R$libdir'
+ hardcode_direct=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+ newsos6)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct=yes
+ hardcode_libdir_flag_spec='$wl-rpath $wl$libdir'
+ hardcode_libdir_separator=:
+ hardcode_shlibpath_var=no
+ ;;
+
+ *nto* | *qnx*)
+ ;;
+
+ openbsd* | bitrig*)
+ if test -f /usr/libexec/ld.so; then
+ hardcode_direct=yes
+ hardcode_shlibpath_var=no
+ hardcode_direct_absolute=yes
+ if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then
+ archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ archive_expsym_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags $wl-retain-symbols-file,$export_symbols'
+ hardcode_libdir_flag_spec='$wl-rpath,$libdir'
+ export_dynamic_flag_spec='$wl-E'
+ else
+ archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec='$wl-rpath,$libdir'
+ fi
+ else
+ ld_shlibs=no
+ fi
+ ;;
+
+ os2*)
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_minus_L=yes
+ allow_undefined_flag=unsupported
+ shrext_cmds=.dll
+ archive_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
+ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
+ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
+ $ECHO EXPORTS >> $output_objdir/$libname.def~
+ emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~
+ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
+ emximp -o $lib $output_objdir/$libname.def'
+ archive_expsym_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
+ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
+ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
+ $ECHO EXPORTS >> $output_objdir/$libname.def~
+ prefix_cmds="$SED"~
+ if test EXPORTS = "`$SED 1q $export_symbols`"; then
+ prefix_cmds="$prefix_cmds -e 1d";
+ fi~
+ prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~
+ cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~
+ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
+ emximp -o $lib $output_objdir/$libname.def'
+ old_archive_from_new_cmds='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def'
+ enable_shared_with_static_runtimes=yes
+ file_list_spec='@'
+ ;;
+
+ osf3*)
+ if test yes = "$GCC"; then
+ allow_undefined_flag=' $wl-expect_unresolved $wl\*'
+ archive_cmds='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
+ else
+ allow_undefined_flag=' -expect_unresolved \*'
+ archive_cmds='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
+ fi
+ archive_cmds_need_lc='no'
+ hardcode_libdir_flag_spec='$wl-rpath $wl$libdir'
+ hardcode_libdir_separator=:
+ ;;
+
+ osf4* | osf5*) # as osf3* with the addition of -msym flag
+ if test yes = "$GCC"; then
+ allow_undefined_flag=' $wl-expect_unresolved $wl\*'
+ archive_cmds='$CC -shared$allow_undefined_flag $pic_flag $libobjs $deplibs $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
+ hardcode_libdir_flag_spec='$wl-rpath $wl$libdir'
+ else
+ allow_undefined_flag=' -expect_unresolved \*'
+ archive_cmds='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
+ archive_expsym_cmds='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~
+ $CC -shared$allow_undefined_flag $wl-input $wl$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~$RM $lib.exp'
+
+ # Both the C and C++ compilers support -rpath directly
+ hardcode_libdir_flag_spec='-rpath $libdir'
+ fi
+ archive_cmds_need_lc='no'
+ hardcode_libdir_separator=:
+ ;;
+
+ solaris*)
+ no_undefined_flag=' -z defs'
+ if test yes = "$GCC"; then
+ wlarc='$wl'
+ archive_cmds='$CC -shared $pic_flag $wl-z ${wl}text $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags'
+ archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+ $CC -shared $pic_flag $wl-z ${wl}text $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+ else
+ case `$CC -V 2>&1` in
+ *"Compilers 5.0"*)
+ wlarc=''
+ archive_cmds='$LD -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+ $LD -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp'
+ ;;
+ *)
+ wlarc='$wl'
+ archive_cmds='$CC -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $compiler_flags'
+ archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+ $CC -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+ ;;
+ esac
+ fi
+ hardcode_libdir_flag_spec='-R$libdir'
+ hardcode_shlibpath_var=no
+ case $host_os in
+ solaris2.[0-5] | solaris2.[0-5].*) ;;
+ *)
+ # The compiler driver will combine and reorder linker options,
+ # but understands '-z linker_flag'. GCC discards it without '$wl',
+ # but is careful enough not to reorder.
+ # Supported since Solaris 2.6 (maybe 2.5.1?)
+ if test yes = "$GCC"; then
+ whole_archive_flag_spec='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract'
+ else
+ whole_archive_flag_spec='-z allextract$convenience -z defaultextract'
+ fi
+ ;;
+ esac
+ link_all_deplibs=yes
+ ;;
+
+ sunos4*)
+ if test sequent = "$host_vendor"; then
+ # Use $CC to link under sequent, because it throws in some extra .o
+ # files that make .init and .fini sections work.
+ archive_cmds='$CC -G $wl-h $soname -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ archive_cmds='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags'
+ fi
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_direct=yes
+ hardcode_minus_L=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+ sysv4)
+ case $host_vendor in
+ sni)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct=yes # is this really true???
+ ;;
+ siemens)
+ ## LD is ld; it makes a PLAMLIB
+ ## CC just makes a GrossModule.
+ archive_cmds='$LD -G -o $lib $libobjs $deplibs $linker_flags'
+ reload_cmds='$CC -r -o $output$reload_objs'
+ hardcode_direct=no
+ ;;
+ motorola)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct=no # Motorola manual says yes, but my tests say they lie
+ ;;
+ esac
+ runpath_var='LD_RUN_PATH'
+ hardcode_shlibpath_var=no
+ ;;
+
+ sysv4.3*)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_shlibpath_var=no
+ export_dynamic_flag_spec='-Bexport'
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_shlibpath_var=no
+ runpath_var=LD_RUN_PATH
+ hardcode_runpath_var=yes
+ ld_shlibs=yes
+ fi
+ ;;
+
+ sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*)
+ no_undefined_flag='$wl-z,text'
+ archive_cmds_need_lc=no
+ hardcode_shlibpath_var=no
+ runpath_var='LD_RUN_PATH'
+
+ if test yes = "$GCC"; then
+ archive_cmds='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+ archive_expsym_cmds='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ archive_cmds='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+ archive_expsym_cmds='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+ fi
+ ;;
+
+ sysv5* | sco3.2v5* | sco5v6*)
+ # Note: We CANNOT use -z defs as we might desire, because we do not
+ # link with -lc, and that would cause any symbols used from libc to
+ # always be unresolved, which means just about no library would
+ # ever link correctly. If we're not using GNU ld we use -z text
+ # though, which does catch some bad symbols but isn't as heavy-handed
+ # as -z defs.
+ no_undefined_flag='$wl-z,text'
+ allow_undefined_flag='$wl-z,nodefs'
+ archive_cmds_need_lc=no
+ hardcode_shlibpath_var=no
+ hardcode_libdir_flag_spec='$wl-R,$libdir'
+ hardcode_libdir_separator=':'
+ link_all_deplibs=yes
+ export_dynamic_flag_spec='$wl-Bexport'
+ runpath_var='LD_RUN_PATH'
+
+ if test yes = "$GCC"; then
+ archive_cmds='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+ archive_expsym_cmds='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ archive_cmds='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+ archive_expsym_cmds='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+ fi
+ ;;
+
+ uts4*)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_shlibpath_var=no
+ ;;
+
+ *)
+ ld_shlibs=no
+ ;;
+ esac
+
+ if test sni = "$host_vendor"; then
+ case $host in
+ sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ export_dynamic_flag_spec='$wl-Blargedynsym'
+ ;;
+ esac
+ fi
+ fi
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs" >&5
+printf "%s\n" "$ld_shlibs" >&6; }
+test no = "$ld_shlibs" && can_build_shared=no
+
+with_gnu_ld=$with_gnu_ld
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+#
+# Do we need to explicitly link libc?
+#
+case "x$archive_cmds_need_lc" in
+x|xyes)
+ # Assume -lc should be added
+ archive_cmds_need_lc=yes
+
+ if test yes,yes = "$GCC,$enable_shared"; then
+ case $archive_cmds in
+ *'~'*)
+ # FIXME: we may have to deal with multi-command sequences.
+ ;;
+ '$CC '*)
+ # Test whether the compiler implicitly links with -lc since on some
+ # systems, -lgcc has to come before -lc. If gcc already passes -lc
+ # to ld, don't add -lc before -lgcc.
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether -lc should be explicitly linked in" >&5
+printf %s "checking whether -lc should be explicitly linked in... " >&6; }
+if test ${lt_cv_archive_cmds_need_lc+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ $RM conftest*
+ echo "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; } 2>conftest.err; then
+ soname=conftest
+ lib=conftest
+ libobjs=conftest.$ac_objext
+ deplibs=
+ wl=$lt_prog_compiler_wl
+ pic_flag=$lt_prog_compiler_pic
+ compiler_flags=-v
+ linker_flags=-v
+ verstring=
+ output_objdir=.
+ libname=conftest
+ lt_save_allow_undefined_flag=$allow_undefined_flag
+ allow_undefined_flag=
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1\""; } >&5
+ (eval $archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }
+ then
+ lt_cv_archive_cmds_need_lc=no
+ else
+ lt_cv_archive_cmds_need_lc=yes
+ fi
+ allow_undefined_flag=$lt_save_allow_undefined_flag
+ else
+ cat conftest.err 1>&5
+ fi
+ $RM conftest*
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_archive_cmds_need_lc" >&5
+printf "%s\n" "$lt_cv_archive_cmds_need_lc" >&6; }
+ archive_cmds_need_lc=$lt_cv_archive_cmds_need_lc
+ ;;
+ esac
+ fi
+ ;;
+esac
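+# Sketch of the probe above (illustration only): a trivial object is
+# compiled, then $archive_cmds is evaluated with compiler_flags=-v and
+# linker_flags=-v and its verbose output is searched for ' -lc ':
+#   eval "$archive_cmds" 2>&1 | $GREP ' -lc ' && lt_cv_archive_cmds_need_lc=no
+# i.e. if the compiler driver already passes -lc to the linker itself,
+# libtool does not need to add it explicitly.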
+
+
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking dynamic linker characteristics" >&5
+printf %s "checking dynamic linker characteristics... " >&6; }
+
+if test yes = "$GCC"; then
+ case $host_os in
+ darwin*) lt_awk_arg='/^libraries:/,/LR/' ;;
+ *) lt_awk_arg='/^libraries:/' ;;
+ esac
+ case $host_os in
+ mingw* | cegcc*) lt_sed_strip_eq='s|=\([A-Za-z]:\)|\1|g' ;;
+ *) lt_sed_strip_eq='s|=/|/|g' ;;
+ esac
+ lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq`
+ case $lt_search_path_spec in
+ *\;*)
+ # if the path contains ";", then we assume it to be the separator;
+ # otherwise default to the standard path separator (i.e. ":").  It is
+ # assumed that no part of a normal pathname contains ";", but that should
+ # be okay in the real world, where ";" in dirpaths is itself problematic.
+ lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'`
+ ;;
+ *)
+ lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"`
+ ;;
+ esac
+ # OK, now that we have the path separated by spaces, we can step through
+ # it and add the multilib dir if necessary...
+ lt_tmp_lt_search_path_spec=
+ lt_multi_os_dir=/`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null`
+ # ...but if some path component already ends with the multilib dir we assume
+ # that all is fine and trust -print-search-dirs as is (GCC 4.2? or newer).
+ case "$lt_multi_os_dir; $lt_search_path_spec " in
+ "/; "* | "/.; "* | "/./; "* | *"$lt_multi_os_dir "* | *"$lt_multi_os_dir/ "*)
+ lt_multi_os_dir=
+ ;;
+ esac
+ for lt_sys_path in $lt_search_path_spec; do
+ if test -d "$lt_sys_path$lt_multi_os_dir"; then
+ lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path$lt_multi_os_dir"
+ elif test -n "$lt_multi_os_dir"; then
+ test -d "$lt_sys_path" && \
+ lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path"
+ fi
+ done
+ lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk '
+BEGIN {RS = " "; FS = "/|\n";} {
+ lt_foo = "";
+ lt_count = 0;
+ for (lt_i = NF; lt_i > 0; lt_i--) {
+ if ($lt_i != "" && $lt_i != ".") {
+ if ($lt_i == "..") {
+ lt_count++;
+ } else {
+ if (lt_count == 0) {
+ lt_foo = "/" $lt_i lt_foo;
+ } else {
+ lt_count--;
+ }
+ }
+ }
+ }
+ if (lt_foo != "") { lt_freq[lt_foo]++; }
+ if (lt_freq[lt_foo] == 1) { print lt_foo; }
+}'`
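+ # Illustration only: given a search path such as
+ #   "/usr/lib/gcc/../../lib /usr/lib"
+ # the awk program above rewrites each entry, letting every '..' cancel the
+ # directory component before it and printing each normalized path only
+ # once, so the result here is a single '/usr/lib'.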
+ # AWK program above erroneously prepends '/' to C:/dos/paths
+ # for these hosts.
+ case $host_os in
+ mingw* | cegcc*) lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\
+ $SED 's|/\([A-Za-z]:\)|\1|g'` ;;
+ esac
+ sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP`
+else
+ sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
+fi
+library_names_spec=
+libname_spec='lib$name'
+soname_spec=
+shrext_cmds=.so
+postinstall_cmds=
+postuninstall_cmds=
+finish_cmds=
+finish_eval=
+shlibpath_var=
+shlibpath_overrides_runpath=unknown
+version_type=none
+dynamic_linker="$host_os ld.so"
+sys_lib_dlsearch_path_spec="/lib /usr/lib"
+need_lib_prefix=unknown
+hardcode_into_libs=no
+
+# when you set need_version to no, make sure it does not cause -set_version
+# flags to be left without arguments
+need_version=unknown
+
+
+
+case $host_os in
+aix3*)
+ version_type=linux # correct to gnu/linux during the next big refactor
+ library_names_spec='$libname$release$shared_ext$versuffix $libname.a'
+ shlibpath_var=LIBPATH
+
+ # AIX 3 has no versioning support, so we append a major version to the name.
+ soname_spec='$libname$release$shared_ext$major'
+ ;;
+
+aix[4-9]*)
+ version_type=linux # correct to gnu/linux during the next big refactor
+ need_lib_prefix=no
+ need_version=no
+ hardcode_into_libs=yes
+ if test ia64 = "$host_cpu"; then
+ # AIX 5 supports IA64
+ library_names_spec='$libname$release$shared_ext$major $libname$release$shared_ext$versuffix $libname$shared_ext'
+ shlibpath_var=LD_LIBRARY_PATH
+ else
+ # With GCC up to 2.95.x, collect2 would create an import file
+ # for dependent libraries. The import file would start with
+ # the line '#! .'. This would cause the generated library to
+ # depend on '.', always an invalid library. This was fixed in
+ # development snapshots of GCC prior to 3.0.
+ case $host_os in
+ aix4 | aix4.[01] | aix4.[01].*)
+ if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
+ echo ' yes '
+ echo '#endif'; } | $CC -E - | $GREP yes > /dev/null; then
+ :
+ else
+ can_build_shared=no
+ fi
+ ;;
+ esac
+ # Using Import Files as archive members, it is possible to support
+ # filename-based versioning of shared library archives on AIX. While
+ # this would work for both with and without runtime linking, it will
+ # prevent static linking of such archives. So we do filename-based
+ # shared library versioning with .so extension only, which is used
+ # when both runtime linking and shared linking is enabled.
+ # Unfortunately, runtime linking may impact performance, so we do
+ # not want this to be the default eventually. Also, we use the
+ # versioned .so libs for executables only if there is the -brtl
+ # linker flag in LDFLAGS as well, or --with-aix-soname=svr4 only.
+ # To allow for filename-based versioning support, we need to create
+ # libNAME.so.V as an archive file, containing:
+ # *) an Import File, referring to the versioned filename of the
+ # archive as well as the shared archive member, telling the
+ # bitwidth (32 or 64) of that shared object, and providing the
+ # list of exported symbols of that shared object, eventually
+ # decorated with the 'weak' keyword
+ # *) the shared object with the F_LOADONLY flag set, to really avoid
+ # it being seen by the linker.
+ # At run time we better use the real file rather than another symlink,
+ # but for link time we create the symlink libNAME.so -> libNAME.so.V
+
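+    # Rough sketch of the resulting layout (illustrative only; the member
+    # name comes from $shared_archive_member_spec):
+    #   libNAME.so.V          ar archive containing
+    #     - an Import File naming libNAME.so.V, its bitwidth and its exports
+    #     - $shared_archive_member_spec.o, the shared object (F_LOADONLY)
+    #   libNAME.so -> libNAME.so.V   symlink used at link time only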
+ case $with_aix_soname,$aix_use_runtimelinking in
+ # AIX (on Power*) has no versioning support, so currently we cannot hardcode correct
+ # soname into executable. Probably we can add versioning support to
+ # collect2, so additional links can be useful in future.
+ aix,yes) # traditional libtool
+ dynamic_linker='AIX unversionable lib.so'
+ # If using run time linking (on AIX 4.2 or later) use lib.so
+ # instead of lib.a to let people know that these are not
+ # typical AIX shared libraries.
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ ;;
+ aix,no) # traditional AIX only
+ dynamic_linker='AIX lib.a(lib.so.V)'
+ # We preserve .a as extension for shared libraries through AIX4.2
+ # and later when we are not doing run time linking.
+ library_names_spec='$libname$release.a $libname.a'
+ soname_spec='$libname$release$shared_ext$major'
+ ;;
+ svr4,*) # full svr4 only
+ dynamic_linker="AIX lib.so.V($shared_archive_member_spec.o)"
+ library_names_spec='$libname$release$shared_ext$major $libname$shared_ext'
+ # We do not specify a path in Import Files, so LIBPATH fires.
+ shlibpath_overrides_runpath=yes
+ ;;
+ *,yes) # both, prefer svr4
+ dynamic_linker="AIX lib.so.V($shared_archive_member_spec.o), lib.a(lib.so.V)"
+ library_names_spec='$libname$release$shared_ext$major $libname$shared_ext'
+ # unpreferred sharedlib libNAME.a needs extra handling
+ postinstall_cmds='test -n "$linkname" || linkname="$realname"~func_stripname "" ".so" "$linkname"~$install_shared_prog "$dir/$func_stripname_result.$libext" "$destdir/$func_stripname_result.$libext"~test -z "$tstripme" || test -z "$striplib" || $striplib "$destdir/$func_stripname_result.$libext"'
+ postuninstall_cmds='for n in $library_names $old_library; do :; done~func_stripname "" ".so" "$n"~test "$func_stripname_result" = "$n" || func_append rmfiles " $odir/$func_stripname_result.$libext"'
+ # We do not specify a path in Import Files, so LIBPATH fires.
+ shlibpath_overrides_runpath=yes
+ ;;
+ *,no) # both, prefer aix
+ dynamic_linker="AIX lib.a(lib.so.V), lib.so.V($shared_archive_member_spec.o)"
+ library_names_spec='$libname$release.a $libname.a'
+ soname_spec='$libname$release$shared_ext$major'
+ # unpreferred sharedlib libNAME.so.V and symlink libNAME.so need extra handling
+ postinstall_cmds='test -z "$dlname" || $install_shared_prog $dir/$dlname $destdir/$dlname~test -z "$tstripme" || test -z "$striplib" || $striplib $destdir/$dlname~test -n "$linkname" || linkname=$realname~func_stripname "" ".a" "$linkname"~(cd "$destdir" && $LN_S -f $dlname $func_stripname_result.so)'
+ postuninstall_cmds='test -z "$dlname" || func_append rmfiles " $odir/$dlname"~for n in $old_library $library_names; do :; done~func_stripname "" ".a" "$n"~func_append rmfiles " $odir/$func_stripname_result.so"'
+ ;;
+ esac
+ shlibpath_var=LIBPATH
+ fi
+ ;;
+
+amigaos*)
+ case $host_cpu in
+ powerpc)
+ # Since July 2007 AmigaOS4 officially supports .so libraries.
+ # When compiling the executable, add -use-dynld -Lsobjs: to the compileline.
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ ;;
+ m68k)
+ library_names_spec='$libname.ixlibrary $libname.a'
+ # Create ${libname}_ixlibrary.a entries in /sys/libs.
+ finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done'
+ ;;
+ esac
+ ;;
+
+beos*)
+ library_names_spec='$libname$shared_ext'
+ dynamic_linker="$host_os ld.so"
+ shlibpath_var=LIBRARY_PATH
+ ;;
+
+bsdi[45]*)
+ version_type=linux # correct to gnu/linux during the next big refactor
+ need_version=no
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib"
+ sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib"
+ # the default ld.so.conf also contains /usr/contrib/lib and
+ # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow
+ # libtool to hard-code these into programs
+ ;;
+
+cygwin* | mingw* | pw32* | cegcc*)
+ version_type=windows
+ shrext_cmds=.dll
+ need_version=no
+ need_lib_prefix=no
+
+ case $GCC,$cc_basename in
+ yes,*)
+ # gcc
+ library_names_spec='$libname.dll.a'
+ # DLL is installed to $(libdir)/../bin by postinstall_cmds
+ postinstall_cmds='base_file=`basename \$file`~
+ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~
+ dldir=$destdir/`dirname \$dlpath`~
+ test -d \$dldir || mkdir -p \$dldir~
+ $install_prog $dir/$dlname \$dldir/$dlname~
+ chmod a+x \$dldir/$dlname~
+ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then
+ eval '\''$striplib \$dldir/$dlname'\'' || exit \$?;
+ fi'
+ postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
+ dlpath=$dir/\$dldll~
+ $RM \$dlpath'
+ shlibpath_overrides_runpath=yes
+
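+    # Illustrative only: for a hypothetical library 'foo' (libname 'libfoo'),
+    # the case below names the DLL cygfoo*.dll on Cygwin, libfoo*.dll on
+    # MinGW and pwfoo*.dll on pw32, with '*' standing in for the mangled
+    # $release$versuffix (dots in $release become dashes).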
+ case $host_os in
+ cygwin*)
+ # Cygwin DLLs use 'cyg' prefix rather than 'lib'
+ soname_spec='`echo $libname | $SED -e 's/^lib/cyg/'``echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext'
+
+ sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api"
+ ;;
+ mingw* | cegcc*)
+ # MinGW DLLs use traditional 'lib' prefix
+ soname_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext'
+ ;;
+ pw32*)
+ # pw32 DLLs use 'pw' prefix rather than 'lib'
+ library_names_spec='`echo $libname | $SED -e 's/^lib/pw/'``echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext'
+ ;;
+ esac
+ dynamic_linker='Win32 ld.exe'
+ ;;
+
+ *,cl* | *,icl*)
+ # Native MSVC or ICC
+ libname_spec='$name'
+ soname_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext'
+ library_names_spec='$libname.dll.lib'
+
+ case $build_os in
+ mingw*)
+ sys_lib_search_path_spec=
+ lt_save_ifs=$IFS
+ IFS=';'
+ for lt_path in $LIB
+ do
+ IFS=$lt_save_ifs
+ # Let DOS variable expansion print the short 8.3 style file name.
+ lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
+ sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
+ done
+ IFS=$lt_save_ifs
+ # Convert to MSYS style.
+ sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
+ ;;
+ cygwin*)
+ # Convert to unix form, then to dos form, then back to unix form
+ # but this time dos style (no spaces!) so that the unix form looks
+ # like /cygdrive/c/PROGRA~1:/cygdr...
+ sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
+ sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
+ sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ ;;
+ *)
+ sys_lib_search_path_spec=$LIB
+ if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
+ # It is most probably a Windows format PATH.
+ sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
+ else
+ sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ fi
+ # FIXME: find the short name or the path components, as spaces are
+ # common. (e.g. "Program Files" -> "PROGRA~1")
+ ;;
+ esac
+
+ # DLL is installed to $(libdir)/../bin by postinstall_cmds
+ postinstall_cmds='base_file=`basename \$file`~
+ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~
+ dldir=$destdir/`dirname \$dlpath`~
+ test -d \$dldir || mkdir -p \$dldir~
+ $install_prog $dir/$dlname \$dldir/$dlname'
+ postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
+ dlpath=$dir/\$dldll~
+ $RM \$dlpath'
+ shlibpath_overrides_runpath=yes
+ dynamic_linker='Win32 link.exe'
+ ;;
+
+ *)
+ # Assume MSVC and ICC wrapper
+ library_names_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext $libname.lib'
+ dynamic_linker='Win32 ld.exe'
+ ;;
+ esac
+ # FIXME: first we should search . and the directory the executable is in
+ shlibpath_var=PATH
+ ;;
+
+darwin* | rhapsody*)
+ dynamic_linker="$host_os dyld"
+ version_type=darwin
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='$libname$release$major$shared_ext $libname$shared_ext'
+ soname_spec='$libname$release$major$shared_ext'
+ shlibpath_overrides_runpath=yes
+ shlibpath_var=DYLD_LIBRARY_PATH
+ shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`'
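+  # The backquoted test above gives loadable modules (-module) a '.so'
+  # extension, while ordinary shared libraries get '.dylib'.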
+
+ sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib"
+ sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib'
+ ;;
+
+dgux*)
+ version_type=linux # correct to gnu/linux during the next big refactor
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+freebsd* | dragonfly* | midnightbsd*)
+ # DragonFly does not have aout. When/if they implement a new
+ # versioning mechanism, adjust this.
+ if test -x /usr/bin/objformat; then
+ objformat=`/usr/bin/objformat`
+ else
+ case $host_os in
+ freebsd[23].*) objformat=aout ;;
+ *) objformat=elf ;;
+ esac
+ fi
+ version_type=freebsd-$objformat
+ case $version_type in
+ freebsd-elf*)
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ need_version=no
+ need_lib_prefix=no
+ ;;
+ freebsd-*)
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
+ need_version=yes
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_os in
+ freebsd2.*)
+ shlibpath_overrides_runpath=yes
+ ;;
+ freebsd3.[01]* | freebsdelf3.[01]*)
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+ freebsd3.[2-9]* | freebsdelf3.[2-9]* | \
+ freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1)
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ ;;
+ *) # from 4.6 on, and DragonFly
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+ esac
+ ;;
+
+haiku*)
+ version_type=linux # correct to gnu/linux during the next big refactor
+ need_lib_prefix=no
+ need_version=no
+ dynamic_linker="$host_os runtime_loader"
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ shlibpath_var=LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
+ hardcode_into_libs=yes
+ ;;
+
+hpux9* | hpux10* | hpux11*)
+ # Give a soname corresponding to the major version so that dld.sl refuses to
+ # link against other versions.
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ case $host_cpu in
+ ia64*)
+ shrext_cmds='.so'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.so"
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ if test 32 = "$HPUX_IA64_MODE"; then
+ sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib"
+ sys_lib_dlsearch_path_spec=/usr/lib/hpux32
+ else
+ sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64"
+ sys_lib_dlsearch_path_spec=/usr/lib/hpux64
+ fi
+ ;;
+ hppa*64*)
+ shrext_cmds='.sl'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64"
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+ *)
+ shrext_cmds='.sl'
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=SHLIB_PATH
+ shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ ;;
+ esac
+ # HP-UX runs *really* slowly unless shared libraries are mode 555, ...
+ postinstall_cmds='chmod 555 $lib'
+ # or fails outright, so override atomically:
+ install_override_mode=555
+ ;;
+
+interix[3-9]*)
+ version_type=linux # correct to gnu/linux during the next big refactor
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ ;;
+
+irix5* | irix6* | nonstopux*)
+ case $host_os in
+ nonstopux*) version_type=nonstopux ;;
+ *)
+ if test yes = "$lt_cv_prog_gnu_ld"; then
+ version_type=linux # correct to gnu/linux during the next big refactor
+ else
+ version_type=irix
+ fi ;;
+ esac
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='$libname$release$shared_ext$major'
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$release$shared_ext $libname$shared_ext'
+ case $host_os in
+ irix5* | nonstopux*)
+ libsuff= shlibsuff=
+ ;;
+ *)
+ case $LD in # libtool.m4 will add one of these switches to LD
+ *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ")
+ libsuff= shlibsuff= libmagic=32-bit;;
+ *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ")
+ libsuff=32 shlibsuff=N32 libmagic=N32;;
+ *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ")
+ libsuff=64 shlibsuff=64 libmagic=64-bit;;
+ *) libsuff= shlibsuff= libmagic=never-match;;
+ esac
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY${shlibsuff}_PATH
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec="/usr/lib$libsuff /lib$libsuff /usr/local/lib$libsuff"
+ sys_lib_dlsearch_path_spec="/usr/lib$libsuff /lib$libsuff"
+ hardcode_into_libs=yes
+ ;;
+
+# No shared lib support for Linux oldld, aout, or coff.
+linux*oldld* | linux*aout* | linux*coff*)
+ dynamic_linker=no
+ ;;
+
+linux*android*)
+ version_type=none # Android doesn't support versioned libraries.
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='$libname$release$shared_ext'
+ soname_spec='$libname$release$shared_ext'
+ finish_cmds=
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+
+ # This implies no fast_install, which is unacceptable.
+ # Some rework will be needed to allow for fast_install
+ # before this can be enabled.
+ hardcode_into_libs=yes
+
+ dynamic_linker='Android linker'
+ # Don't embed -rpath directories since the linker doesn't support them.
+ hardcode_libdir_flag_spec='-L$libdir'
+ ;;
+
+# This must be glibc/ELF.
+linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
+ version_type=linux # correct to gnu/linux during the next big refactor
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+
+ # Some binutils ld are patched to set DT_RUNPATH
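+  # The cached check below links a trivial program with
+  # $hardcode_libdir_flag_spec pointing at the dummy directory /foo and then
+  # greps "$OBJDUMP -p" output for a RUNPATH entry naming that directory.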
+ if test ${lt_cv_shlibpath_overrides_runpath+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ lt_cv_shlibpath_overrides_runpath=no
+ save_LDFLAGS=$LDFLAGS
+ save_libdir=$libdir
+ eval "libdir=/foo; wl=\"$lt_prog_compiler_wl\"; \
+ LDFLAGS=\"\$LDFLAGS $hardcode_libdir_flag_spec\""
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ if ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null
+then :
+ lt_cv_shlibpath_overrides_runpath=yes
+fi
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+ LDFLAGS=$save_LDFLAGS
+ libdir=$save_libdir
+
+fi
+
+ shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath
+
+ # This implies no fast_install, which is unacceptable.
+ # Some rework will be needed to allow for fast_install
+ # before this can be enabled.
+ hardcode_into_libs=yes
+
+  # Ideally, we could use ldconfig to report *all* directories which are
+ # searched for libraries, however this is still not possible. Aside from not
+ # being certain /sbin/ldconfig is available, command
+ # 'ldconfig -N -X -v | grep ^/' on 64bit Fedora does not report /usr/lib64,
+ # even though it is searched at run-time. Try to do the best guess by
+ # appending ld.so.conf contents (and includes) to the search path.
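+  # The awk program below inlines any 'include' directives (relative to
+  # /etc), while the sed strips comments, hwcap lines, '=TYPE' qualifiers and
+  # quotes before tr joins everything into one space-separated line.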
+ if test -f /etc/ld.so.conf; then
+ lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '`
+ sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra"
+ fi
+
+ # We used to test for /lib/ld.so.1 and disable shared libraries on
+ # powerpc, because MkLinux only supported shared libraries with the
+ # GNU dynamic linker. Since this was broken with cross compilers,
+ # most powerpc-linux boxes support dynamic linking these days and
+ # people can always --disable-shared, the test was removed, and we
+ # assume the GNU/Linux dynamic linker is in use.
+ dynamic_linker='GNU/Linux ld.so'
+ ;;
+
+netbsdelf*-gnu)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='NetBSD ld.elf_so'
+ ;;
+
+netbsd*)
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ dynamic_linker='NetBSD (a.out) ld.so'
+ else
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ dynamic_linker='NetBSD ld.elf_so'
+ fi
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+
+newsos6)
+ version_type=linux # correct to gnu/linux during the next big refactor
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+*nto* | *qnx*)
+ version_type=qnx
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='ldqnx.so'
+ ;;
+
+openbsd* | bitrig*)
+ version_type=sunos
+ sys_lib_dlsearch_path_spec=/usr/lib
+ need_lib_prefix=no
+ if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then
+ need_version=no
+ else
+ need_version=yes
+ fi
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+os2*)
+ libname_spec='$name'
+ version_type=windows
+ shrext_cmds=.dll
+ need_version=no
+ need_lib_prefix=no
+ # OS/2 can only load a DLL with a base name of 8 characters or less.
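+  # Illustrative example with hypothetical values: libname 'knotd' and
+  # release+versuffix '1.2' give v='12' (dots and dashes stripped), the
+  # libname is cut to 8-2=6 bytes, and the DLL base name becomes 'knotd12'.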
+ soname_spec='`test -n "$os2dllname" && libname="$os2dllname";
+ v=$($ECHO $release$versuffix | tr -d .-);
+ n=$($ECHO $libname | cut -b -$((8 - ${#v})) | tr . _);
+ $ECHO $n$v`$shared_ext'
+ library_names_spec='${libname}_dll.$libext'
+ dynamic_linker='OS/2 ld.exe'
+ shlibpath_var=BEGINLIBPATH
+ sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ postinstall_cmds='base_file=`basename \$file`~
+ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; $ECHO \$dlname'\''`~
+ dldir=$destdir/`dirname \$dlpath`~
+ test -d \$dldir || mkdir -p \$dldir~
+ $install_prog $dir/$dlname \$dldir/$dlname~
+ chmod a+x \$dldir/$dlname~
+ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then
+ eval '\''$striplib \$dldir/$dlname'\'' || exit \$?;
+ fi'
+ postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; $ECHO \$dlname'\''`~
+ dlpath=$dir/\$dldll~
+ $RM \$dlpath'
+ ;;
+
+osf3* | osf4* | osf5*)
+ version_type=osf
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='$libname$release$shared_ext$major'
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib"
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+
+rdos*)
+ dynamic_linker=no
+ ;;
+
+solaris*)
+ version_type=linux # correct to gnu/linux during the next big refactor
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ # ldd complains unless libraries are executable
+ postinstall_cmds='chmod +x $lib'
+ ;;
+
+sunos4*)
+ version_type=sunos
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
+ finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ if test yes = "$with_gnu_ld"; then
+ need_lib_prefix=no
+ fi
+ need_version=yes
+ ;;
+
+sysv4 | sysv4.3*)
+ version_type=linux # correct to gnu/linux during the next big refactor
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_vendor in
+ sni)
+ shlibpath_overrides_runpath=no
+ need_lib_prefix=no
+ runpath_var=LD_RUN_PATH
+ ;;
+ siemens)
+ need_lib_prefix=no
+ ;;
+ motorola)
+ need_lib_prefix=no
+ need_version=no
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib'
+ ;;
+ esac
+ ;;
+
+sysv4*MP*)
+ if test -d /usr/nec; then
+ version_type=linux # correct to gnu/linux during the next big refactor
+ library_names_spec='$libname$shared_ext.$versuffix $libname$shared_ext.$major $libname$shared_ext'
+ soname_spec='$libname$shared_ext.$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ fi
+ ;;
+
+sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*)
+ version_type=sco
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ if test yes = "$with_gnu_ld"; then
+ sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib'
+ else
+ sys_lib_search_path_spec='/usr/ccs/lib /usr/lib'
+ case $host_os in
+ sco3.2v5*)
+ sys_lib_search_path_spec="$sys_lib_search_path_spec /lib"
+ ;;
+ esac
+ fi
+ sys_lib_dlsearch_path_spec='/usr/lib'
+ ;;
+
+tpf*)
+ # TPF is a cross-target only. Preferred cross-host = GNU/Linux.
+ version_type=linux # correct to gnu/linux during the next big refactor
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ ;;
+
+uts4*)
+ version_type=linux # correct to gnu/linux during the next big refactor
+ library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
+ soname_spec='$libname$release$shared_ext$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+*)
+ dynamic_linker=no
+ ;;
+esac
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $dynamic_linker" >&5
+printf "%s\n" "$dynamic_linker" >&6; }
+test no = "$dynamic_linker" && can_build_shared=no
+
+variables_saved_for_relink="PATH $shlibpath_var $runpath_var"
+if test yes = "$GCC"; then
+ variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH"
+fi
+
+if test set = "${lt_cv_sys_lib_search_path_spec+set}"; then
+ sys_lib_search_path_spec=$lt_cv_sys_lib_search_path_spec
+fi
+
+if test set = "${lt_cv_sys_lib_dlsearch_path_spec+set}"; then
+ sys_lib_dlsearch_path_spec=$lt_cv_sys_lib_dlsearch_path_spec
+fi
+
+# remember unaugmented sys_lib_dlsearch_path content for libtool script decls...
+configure_time_dlsearch_path=$sys_lib_dlsearch_path_spec
+
+# ... but it needs LT_SYS_LIBRARY_PATH munging for other configure-time code
+func_munge_path_list sys_lib_dlsearch_path_spec "$LT_SYS_LIBRARY_PATH"
+
+# to be used as default LT_SYS_LIBRARY_PATH value in generated libtool
+configure_time_lt_sys_library_path=$LT_SYS_LIBRARY_PATH
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to hardcode library paths into programs" >&5
+printf %s "checking how to hardcode library paths into programs... " >&6; }
+hardcode_action=
+if test -n "$hardcode_libdir_flag_spec" ||
+ test -n "$runpath_var" ||
+ test yes = "$hardcode_automatic"; then
+
+ # We can hardcode non-existent directories.
+ if test no != "$hardcode_direct" &&
+ # If the only mechanism to avoid hardcoding is shlibpath_var, we
+ # have to relink, otherwise we might link with an installed library
+ # when we should be linking with a yet-to-be-installed one
+ ## test no != "$_LT_TAGVAR(hardcode_shlibpath_var, )" &&
+ test no != "$hardcode_minus_L"; then
+ # Linking always hardcodes the temporary library directory.
+ hardcode_action=relink
+ else
+ # We can link without hardcoding, and we can hardcode nonexisting dirs.
+ hardcode_action=immediate
+ fi
+else
+ # We cannot hardcode anything, or else we can only hardcode existing
+ # directories.
+ hardcode_action=unsupported
+fi
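+# In libtool terms, 'relink' means executables have to be relinked at install
+# time (otherwise the build-tree path would stay hardcoded), 'immediate' means
+# they can be linked correctly right away, and 'unsupported' means library
+# paths cannot be hardcoded on this host at all.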
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $hardcode_action" >&5
+printf "%s\n" "$hardcode_action" >&6; }
+
+if test relink = "$hardcode_action" ||
+ test yes = "$inherit_rpath"; then
+ # Fast installation is not supported
+ enable_fast_install=no
+elif test yes = "$shlibpath_overrides_runpath" ||
+ test no = "$enable_shared"; then
+ # Fast installation is not necessary
+ enable_fast_install=needless
+fi
+
+
+
+
+
+
+ if test yes != "$enable_dlopen"; then
+ enable_dlopen=unknown
+ enable_dlopen_self=unknown
+ enable_dlopen_self_static=unknown
+else
+ lt_cv_dlopen=no
+ lt_cv_dlopen_libs=
+
+ case $host_os in
+ beos*)
+ lt_cv_dlopen=load_add_on
+ lt_cv_dlopen_libs=
+ lt_cv_dlopen_self=yes
+ ;;
+
+ mingw* | pw32* | cegcc*)
+ lt_cv_dlopen=LoadLibrary
+ lt_cv_dlopen_libs=
+ ;;
+
+ cygwin*)
+ lt_cv_dlopen=dlopen
+ lt_cv_dlopen_libs=
+ ;;
+
+ darwin*)
+ # if libdl is installed we need to link against it
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5
+printf %s "checking for dlopen in -ldl... " >&6; }
+if test ${ac_cv_lib_dl_dlopen+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldl $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main (void)
+{
+return dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_lib_dl_dlopen=yes
+else $as_nop
+ ac_cv_lib_dl_dlopen=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5
+printf "%s\n" "$ac_cv_lib_dl_dlopen" >&6; }
+if test "x$ac_cv_lib_dl_dlopen" = xyes
+then :
+ lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl
+else $as_nop
+
+ lt_cv_dlopen=dyld
+ lt_cv_dlopen_libs=
+ lt_cv_dlopen_self=yes
+
+fi
+
+ ;;
+
+ tpf*)
+ # Don't try to run any link tests for TPF. We know it's impossible
+ # because TPF is a cross-compiler, and we know how we open DSOs.
+ lt_cv_dlopen=dlopen
+ lt_cv_dlopen_libs=
+ lt_cv_dlopen_self=no
+ ;;
+
+ *)
+ ac_fn_c_check_func "$LINENO" "shl_load" "ac_cv_func_shl_load"
+if test "x$ac_cv_func_shl_load" = xyes
+then :
+ lt_cv_dlopen=shl_load
+else $as_nop
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for shl_load in -ldld" >&5
+printf %s "checking for shl_load in -ldld... " >&6; }
+if test ${ac_cv_lib_dld_shl_load+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldld $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char shl_load ();
+int
+main (void)
+{
+return shl_load ();
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_lib_dld_shl_load=yes
+else $as_nop
+ ac_cv_lib_dld_shl_load=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_shl_load" >&5
+printf "%s\n" "$ac_cv_lib_dld_shl_load" >&6; }
+if test "x$ac_cv_lib_dld_shl_load" = xyes
+then :
+ lt_cv_dlopen=shl_load lt_cv_dlopen_libs=-ldld
+else $as_nop
+ ac_fn_c_check_func "$LINENO" "dlopen" "ac_cv_func_dlopen"
+if test "x$ac_cv_func_dlopen" = xyes
+then :
+ lt_cv_dlopen=dlopen
+else $as_nop
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5
+printf %s "checking for dlopen in -ldl... " >&6; }
+if test ${ac_cv_lib_dl_dlopen+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldl $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main (void)
+{
+return dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_lib_dl_dlopen=yes
+else $as_nop
+ ac_cv_lib_dl_dlopen=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5
+printf "%s\n" "$ac_cv_lib_dl_dlopen" >&6; }
+if test "x$ac_cv_lib_dl_dlopen" = xyes
+then :
+ lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl
+else $as_nop
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for dlopen in -lsvld" >&5
+printf %s "checking for dlopen in -lsvld... " >&6; }
+if test ${ac_cv_lib_svld_dlopen+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-lsvld $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main (void)
+{
+return dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_lib_svld_dlopen=yes
+else $as_nop
+ ac_cv_lib_svld_dlopen=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_svld_dlopen" >&5
+printf "%s\n" "$ac_cv_lib_svld_dlopen" >&6; }
+if test "x$ac_cv_lib_svld_dlopen" = xyes
+then :
+ lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-lsvld
+else $as_nop
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for dld_link in -ldld" >&5
+printf %s "checking for dld_link in -ldld... " >&6; }
+if test ${ac_cv_lib_dld_dld_link+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldld $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char dld_link ();
+int
+main (void)
+{
+return dld_link ();
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_lib_dld_dld_link=yes
+else $as_nop
+ ac_cv_lib_dld_dld_link=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_dld_link" >&5
+printf "%s\n" "$ac_cv_lib_dld_dld_link" >&6; }
+if test "x$ac_cv_lib_dld_dld_link" = xyes
+then :
+ lt_cv_dlopen=dld_link lt_cv_dlopen_libs=-ldld
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+ ;;
+ esac
+
+ if test no = "$lt_cv_dlopen"; then
+ enable_dlopen=no
+ else
+ enable_dlopen=yes
+ fi
+
+ case $lt_cv_dlopen in
+ dlopen)
+ save_CPPFLAGS=$CPPFLAGS
+ test yes = "$ac_cv_header_dlfcn_h" && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H"
+
+ save_LDFLAGS=$LDFLAGS
+ wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\"
+
+ save_LIBS=$LIBS
+ LIBS="$lt_cv_dlopen_libs $LIBS"
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether a program can dlopen itself" >&5
+printf %s "checking whether a program can dlopen itself... " >&6; }
+if test ${lt_cv_dlopen_self+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test yes = "$cross_compiling"; then :
+ lt_cv_dlopen_self=cross
+else
+ lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+ lt_status=$lt_dlunknown
+ cat > conftest.$ac_ext <<_LT_EOF
+#line $LINENO "configure"
+#include "confdefs.h"
+
+#if HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+
+#include <stdio.h>
+
+#ifdef RTLD_GLOBAL
+# define LT_DLGLOBAL RTLD_GLOBAL
+#else
+# ifdef DL_GLOBAL
+# define LT_DLGLOBAL DL_GLOBAL
+# else
+# define LT_DLGLOBAL 0
+# endif
+#endif
+
+/* We may have to define LT_DLLAZY_OR_NOW in the command line if we
+ find out it does not work in some platform. */
+#ifndef LT_DLLAZY_OR_NOW
+# ifdef RTLD_LAZY
+# define LT_DLLAZY_OR_NOW RTLD_LAZY
+# else
+# ifdef DL_LAZY
+# define LT_DLLAZY_OR_NOW DL_LAZY
+# else
+# ifdef RTLD_NOW
+# define LT_DLLAZY_OR_NOW RTLD_NOW
+# else
+# ifdef DL_NOW
+# define LT_DLLAZY_OR_NOW DL_NOW
+# else
+# define LT_DLLAZY_OR_NOW 0
+# endif
+# endif
+# endif
+# endif
+#endif
+
+/* When -fvisibility=hidden is used, assume the code has been annotated
+ correspondingly for the symbols needed. */
+#if defined __GNUC__ && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+int fnord () __attribute__((visibility("default")));
+#endif
+
+int fnord () { return 42; }
+int main ()
+{
+ void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+ int status = $lt_dlunknown;
+
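+  /* The program dlopens itself: if "fnord" resolves, plain symbol names
+     work; if only "_fnord" resolves, this platform needs a leading
+     underscore; anything else leaves the result unknown. */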
+ if (self)
+ {
+ if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
+ else
+ {
+ if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
+ else puts (dlerror ());
+ }
+ /* dlclose (self); */
+ }
+ else
+ puts (dlerror ());
+
+ return status;
+}
+_LT_EOF
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; } && test -s "conftest$ac_exeext" 2>/dev/null; then
+ (./conftest; exit; ) >&5 2>/dev/null
+ lt_status=$?
+ case x$lt_status in
+ x$lt_dlno_uscore) lt_cv_dlopen_self=yes ;;
+ x$lt_dlneed_uscore) lt_cv_dlopen_self=yes ;;
+ x$lt_dlunknown|x*) lt_cv_dlopen_self=no ;;
+ esac
+ else :
+ # compilation failed
+ lt_cv_dlopen_self=no
+ fi
+fi
+rm -fr conftest*
+
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self" >&5
+printf "%s\n" "$lt_cv_dlopen_self" >&6; }
+
+ if test yes = "$lt_cv_dlopen_self"; then
+ wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\"
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether a statically linked program can dlopen itself" >&5
+printf %s "checking whether a statically linked program can dlopen itself... " >&6; }
+if test ${lt_cv_dlopen_self_static+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test yes = "$cross_compiling"; then :
+ lt_cv_dlopen_self_static=cross
+else
+ lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+ lt_status=$lt_dlunknown
+ cat > conftest.$ac_ext <<_LT_EOF
+#line $LINENO "configure"
+#include "confdefs.h"
+
+#if HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+
+#include <stdio.h>
+
+#ifdef RTLD_GLOBAL
+# define LT_DLGLOBAL RTLD_GLOBAL
+#else
+# ifdef DL_GLOBAL
+# define LT_DLGLOBAL DL_GLOBAL
+# else
+# define LT_DLGLOBAL 0
+# endif
+#endif
+
+/* We may have to define LT_DLLAZY_OR_NOW in the command line if we
+ find out it does not work in some platform. */
+#ifndef LT_DLLAZY_OR_NOW
+# ifdef RTLD_LAZY
+# define LT_DLLAZY_OR_NOW RTLD_LAZY
+# else
+# ifdef DL_LAZY
+# define LT_DLLAZY_OR_NOW DL_LAZY
+# else
+# ifdef RTLD_NOW
+# define LT_DLLAZY_OR_NOW RTLD_NOW
+# else
+# ifdef DL_NOW
+# define LT_DLLAZY_OR_NOW DL_NOW
+# else
+# define LT_DLLAZY_OR_NOW 0
+# endif
+# endif
+# endif
+# endif
+#endif
+
+/* When -fvisibility=hidden is used, assume the code has been annotated
+ correspondingly for the symbols needed. */
+#if defined __GNUC__ && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+int fnord () __attribute__((visibility("default")));
+#endif
+
+int fnord () { return 42; }
+int main ()
+{
+ void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+ int status = $lt_dlunknown;
+
+ if (self)
+ {
+ if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
+ else
+ {
+ if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
+ else puts (dlerror ());
+ }
+ /* dlclose (self); */
+ }
+ else
+ puts (dlerror ());
+
+ return status;
+}
+_LT_EOF
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; } && test -s "conftest$ac_exeext" 2>/dev/null; then
+ (./conftest; exit; ) >&5 2>/dev/null
+ lt_status=$?
+ case x$lt_status in
+ x$lt_dlno_uscore) lt_cv_dlopen_self_static=yes ;;
+ x$lt_dlneed_uscore) lt_cv_dlopen_self_static=yes ;;
+ x$lt_dlunknown|x*) lt_cv_dlopen_self_static=no ;;
+ esac
+ else :
+ # compilation failed
+ lt_cv_dlopen_self_static=no
+ fi
+fi
+rm -fr conftest*
+
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self_static" >&5
+printf "%s\n" "$lt_cv_dlopen_self_static" >&6; }
+ fi
+
+ CPPFLAGS=$save_CPPFLAGS
+ LDFLAGS=$save_LDFLAGS
+ LIBS=$save_LIBS
+ ;;
+ esac
+
+ case $lt_cv_dlopen_self in
+ yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;;
+ *) enable_dlopen_self=unknown ;;
+ esac
+
+ case $lt_cv_dlopen_self_static in
+ yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;;
+ *) enable_dlopen_self_static=unknown ;;
+ esac
+fi
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+striplib=
+old_striplib=
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether stripping libraries is possible" >&5
+printf %s "checking whether stripping libraries is possible... " >&6; }
+if test -z "$STRIP"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+else
+ if $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then
+ old_striplib="$STRIP --strip-debug"
+ striplib="$STRIP --strip-unneeded"
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ else
+ case $host_os in
+ darwin*)
+ # FIXME - insert some real tests, host_os isn't really good enough
+ striplib="$STRIP -x"
+ old_striplib="$STRIP -S"
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ ;;
+ freebsd*)
+ if $STRIP -V 2>&1 | $GREP "elftoolchain" >/dev/null; then
+ old_striplib="$STRIP --strip-debug"
+ striplib="$STRIP --strip-unneeded"
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ fi
+ ;;
+ *)
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ ;;
+ esac
+ fi
+fi
+
+
+
+
+
+
+
+
+
+
+
+
+ # Report what library types will actually be built
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if libtool supports shared libraries" >&5
+printf %s "checking if libtool supports shared libraries... " >&6; }
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $can_build_shared" >&5
+printf "%s\n" "$can_build_shared" >&6; }
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether to build shared libraries" >&5
+printf %s "checking whether to build shared libraries... " >&6; }
+ test no = "$can_build_shared" && enable_shared=no
+
+ # On AIX, shared libraries and static libraries use the same namespace, and
+ # are all built from PIC.
+ case $host_os in
+ aix3*)
+ test yes = "$enable_shared" && enable_static=no
+ if test -n "$RANLIB"; then
+ archive_cmds="$archive_cmds~\$RANLIB \$lib"
+ postinstall_cmds='$RANLIB $lib'
+ fi
+ ;;
+
+ aix[4-9]*)
+ if test ia64 != "$host_cpu"; then
+ case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in
+ yes,aix,yes) ;; # shared object as lib.so file only
+ yes,svr4,*) ;; # shared object as lib.so archive member only
+ yes,*) enable_static=no ;; # shared object in lib.a archive as well
+ esac
+ fi
+ ;;
+ esac
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $enable_shared" >&5
+printf "%s\n" "$enable_shared" >&6; }
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether to build static libraries" >&5
+printf %s "checking whether to build static libraries... " >&6; }
+ # Make sure either enable_shared or enable_static is yes.
+ test yes = "$enable_shared" || enable_static=yes
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $enable_static" >&5
+printf "%s\n" "$enable_static" >&6; }
+
+
+
+
+fi
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+CC=$lt_save_CC
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ ac_config_commands="$ac_config_commands libtool"
+
+
+
+
+# Only expand once:
+
+
+
+# Build DLLs on Cygwin/MSYS2
+case $host_os in #(
+ cygwin* | msys*) :
+ LT_NO_UNDEFINED="-no-undefined"
+ ;; #(
+ *) :
+ ;;
+esac
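+# libtool's -no-undefined flag asserts that the library has no unresolved
+# symbols at link time, which Windows requires before a DLL can be built.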
+
+
+# Use pkg-config
+
+
+
+
+
+
+
+if test "x$ac_cv_env_PKG_CONFIG_set" != "xset"; then
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}pkg-config", so it can be a program name with args.
+set dummy ${ac_tool_prefix}pkg-config; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_path_PKG_CONFIG+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ case $PKG_CONFIG in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PKG_CONFIG="$PKG_CONFIG" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_path_PKG_CONFIG="$as_dir$ac_word$ac_exec_ext"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PKG_CONFIG=$ac_cv_path_PKG_CONFIG
+if test -n "$PKG_CONFIG"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $PKG_CONFIG" >&5
+printf "%s\n" "$PKG_CONFIG" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_path_PKG_CONFIG"; then
+ ac_pt_PKG_CONFIG=$PKG_CONFIG
+ # Extract the first word of "pkg-config", so it can be a program name with args.
+set dummy pkg-config; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_path_ac_pt_PKG_CONFIG+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ case $ac_pt_PKG_CONFIG in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_ac_pt_PKG_CONFIG="$ac_pt_PKG_CONFIG" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_path_ac_pt_PKG_CONFIG="$as_dir$ac_word$ac_exec_ext"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+ac_pt_PKG_CONFIG=$ac_cv_path_ac_pt_PKG_CONFIG
+if test -n "$ac_pt_PKG_CONFIG"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_pt_PKG_CONFIG" >&5
+printf "%s\n" "$ac_pt_PKG_CONFIG" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+ if test "x$ac_pt_PKG_CONFIG" = x; then
+ PKG_CONFIG=""
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
+printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
+ac_tool_warned=yes ;;
+esac
+ PKG_CONFIG=$ac_pt_PKG_CONFIG
+ fi
+else
+ PKG_CONFIG="$ac_cv_path_PKG_CONFIG"
+fi
+
+fi
+if test -n "$PKG_CONFIG"; then
+ _pkg_min_version=0.9.0
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking pkg-config is at least version $_pkg_min_version" >&5
+printf %s "checking pkg-config is at least version $_pkg_min_version... " >&6; }
+ if $PKG_CONFIG --atleast-pkgconfig-version $_pkg_min_version; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ PKG_CONFIG=""
+ fi
+fi
+
+
+
+# Check whether --with-pkgconfigdir was given.
+if test ${with_pkgconfigdir+y}
+then :
+ withval=$with_pkgconfigdir;
+else $as_nop
+ with_pkgconfigdir='${libdir}/pkgconfig'
+fi
+
+pkgconfigdir=$with_pkgconfigdir
+
+
+
+
+ac_config_files="$ac_config_files src/knotd.pc src/libknot.pc src/libdnssec.pc src/libzscanner.pc"
+
+
+# Default directories
+knot_prefix=$ac_default_prefix
+if test "$prefix" != NONE
+then :
+ knot_prefix=$prefix
+fi
+
+run_dir="${localstatedir}/run/knot"
+
+# Check whether --with-rundir was given.
+if test ${with_rundir+y}
+then :
+ withval=$with_rundir; run_dir=$withval
+fi
+
+
+
+storage_dir="${localstatedir}/lib/knot"
+
+# Check whether --with-storage was given.
+if test ${with_storage+y}
+then :
+ withval=$with_storage; storage_dir=$withval
+fi
+
+
+
+config_dir="${sysconfdir}/knot"
+
+# Check whether --with-configdir was given.
+if test ${with_configdir+y}
+then :
+ withval=$with_configdir; config_dir=$withval
+fi
+
+
+
+module_dir=
+module_instdir="${libdir}/knot/modules-${KNOT_VERSION_MAJOR}.${KNOT_VERSION_MINOR}"
+
+# Check whether --with-moduledir was given.
+if test ${with_moduledir+y}
+then :
+ withval=$with_moduledir; module_dir=$withval module_instdir=$module_dir
+fi
+
+
+
+
+# Build Knot DNS daemon
+# Check whether --enable-daemon was given.
+if test ${enable_daemon+y}
+then :
+ enableval=$enable_daemon;
+else $as_nop
+ enable_daemon=yes
+fi
+
+ if test "$enable_daemon" = "yes"; then
+ HAVE_DAEMON_TRUE=
+ HAVE_DAEMON_FALSE='#'
+else
+ HAVE_DAEMON_TRUE='#'
+ HAVE_DAEMON_FALSE=
+fi
+
+
+# Build Knot DNS modules
+# Check whether --enable-modules was given.
+if test ${enable_modules+y}
+then :
+ enableval=$enable_modules;
+else $as_nop
+ enable_modules=yes
+fi
+
+
+# Build Knot DNS utilities
+# Check whether --enable-utilities was given.
+if test ${enable_utilities+y}
+then :
+ enableval=$enable_utilities;
+else $as_nop
+ enable_utilities=yes
+fi
+
+ if test "$enable_utilities" = "yes"; then
+ HAVE_UTILS_TRUE=
+ HAVE_UTILS_FALSE='#'
+else
+ HAVE_UTILS_TRUE='#'
+ HAVE_UTILS_FALSE=
+fi
+
+ if test "$enable_utilities" != "no" -o \
+ "$enable_daemon" != "no"; then
+ HAVE_LIBUTILS_TRUE=
+ HAVE_LIBUTILS_FALSE='#'
+else
+ HAVE_LIBUTILS_TRUE='#'
+ HAVE_LIBUTILS_FALSE=
+fi
+
+
+# Build Knot DNS documentation
+# Check whether --enable-documentation was given.
+if test ${enable_documentation+y}
+then :
+ enableval=$enable_documentation;
+else $as_nop
+ enable_documentation=yes
+fi
+
+if test "$enable_documentation" != "no"
+then :
+
+ # Extract the first word of "sphinx-build", so it can be a program name with args.
+set dummy sphinx-build; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_path_SPHINXBUILD+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ case $SPHINXBUILD in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_SPHINXBUILD="$SPHINXBUILD" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_path_SPHINXBUILD="$as_dir$ac_word$ac_exec_ext"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ test -z "$ac_cv_path_SPHINXBUILD" && ac_cv_path_SPHINXBUILD="false"
+ ;;
+esac
+fi
+SPHINXBUILD=$ac_cv_path_SPHINXBUILD
+if test -n "$SPHINXBUILD"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $SPHINXBUILD" >&5
+printf "%s\n" "$SPHINXBUILD" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ if test "$SPHINXBUILD" != "false"
+then :
+
+ enable_documentation="man html epub"
+ # Extract the first word of "pdflatex", so it can be a program name with args.
+set dummy pdflatex; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_path_PDFLATEX+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ case $PDFLATEX in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PDFLATEX="$PDFLATEX" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_path_PDFLATEX="$as_dir$ac_word$ac_exec_ext"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ test -z "$ac_cv_path_PDFLATEX" && ac_cv_path_PDFLATEX="false"
+ ;;
+esac
+fi
+PDFLATEX=$ac_cv_path_PDFLATEX
+if test -n "$PDFLATEX"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $PDFLATEX" >&5
+printf "%s\n" "$PDFLATEX" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ if test "$PDFLATEX" != "false"
+then :
+
+ enable_documentation="$enable_documentation pdf"
+
+fi
+
+fi
+
+fi
+ if test "$enable_documentation" != "no"; then
+ HAVE_DOCS_TRUE=
+ HAVE_DOCS_FALSE='#'
+else
+ HAVE_DOCS_TRUE='#'
+ HAVE_DOCS_FALSE=
+fi
+
+ if test "$SPHINXBUILD" != "false"; then
+ HAVE_SPHINX_TRUE=
+ HAVE_SPHINX_FALSE='#'
+else
+ HAVE_SPHINX_TRUE='#'
+ HAVE_SPHINX_FALSE=
+fi
+
+ if test "$PDFLATEX" != "false"; then
+ HAVE_PDFLATEX_TRUE=
+ HAVE_PDFLATEX_FALSE='#'
+else
+ HAVE_PDFLATEX_TRUE='#'
+ HAVE_PDFLATEX_FALSE=
+fi
+
+
+######################
+# Generic dependencies
+######################
+
+# Check whether --enable-fastparser was given.
+if test ${enable_fastparser+y}
+then :
+ enableval=$enable_fastparser;
+else $as_nop
+ if test -d ".git"
+then :
+ enable_fastparser=no
+else $as_nop
+ enable_fastparser=yes
+fi
+
+fi
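+# Default heuristic: a .git directory marks a development checkout, where the
+# generated fast zone parser (presumably much slower to compile) stays off by
+# default, while release tarballs enable it.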
+
+ if test "$enable_fastparser" = "yes"; then
+ FAST_PARSER_TRUE=
+ FAST_PARSER_FALSE='#'
+else
+ FAST_PARSER_TRUE='#'
+ FAST_PARSER_FALSE=
+fi
+
+
+# GnuTLS crypto backend
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC options needed to detect all undeclared functions" >&5
+printf %s "checking for $CC options needed to detect all undeclared functions... " >&6; }
+if test ${ac_cv_c_undeclared_builtin_options+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_save_CFLAGS=$CFLAGS
+ ac_cv_c_undeclared_builtin_options='cannot detect'
+ for ac_arg in '' -fno-builtin; do
+ CFLAGS="$ac_save_CFLAGS $ac_arg"
+ # This test program should *not* compile successfully.
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+(void) strchr;
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+
+else $as_nop
+ # This test program should compile successfully.
+ # No library function is consistently available on
+ # freestanding implementations, so test against a dummy
+ # declaration. Include always-available headers on the
+ # off chance that they somehow elicit warnings.
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <float.h>
+#include <limits.h>
+#include <stdarg.h>
+#include <stddef.h>
+extern void ac_decl (int, char *);
+
+int
+main (void)
+{
+(void) ac_decl (0, (char *) 0);
+ (void) ac_decl;
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ if test x"$ac_arg" = x
+then :
+ ac_cv_c_undeclared_builtin_options='none needed'
+else $as_nop
+ ac_cv_c_undeclared_builtin_options=$ac_arg
+fi
+ break
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ done
+ CFLAGS=$ac_save_CFLAGS
+
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_undeclared_builtin_options" >&5
+printf "%s\n" "$ac_cv_c_undeclared_builtin_options" >&6; }
+ case $ac_cv_c_undeclared_builtin_options in #(
+ 'cannot detect') :
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "cannot make $CC report undeclared builtins
+See \`config.log' for more details" "$LINENO" 5; } ;; #(
+ 'none needed') :
+ ac_c_undeclared_builtin_options='' ;; #(
+ *) :
+ ac_c_undeclared_builtin_options=$ac_cv_c_undeclared_builtin_options ;;
+esac
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for gnutls >= 3.6.10" >&5
+printf %s "checking for gnutls >= 3.6.10... " >&6; }
+
+if test -n "$gnutls_CFLAGS"; then
+ pkg_cv_gnutls_CFLAGS="$gnutls_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"gnutls >= 3.6.10\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "gnutls >= 3.6.10") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_gnutls_CFLAGS=`$PKG_CONFIG --cflags "gnutls >= 3.6.10" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$gnutls_LIBS"; then
+ pkg_cv_gnutls_LIBS="$gnutls_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"gnutls >= 3.6.10\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "gnutls >= 3.6.10") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_gnutls_LIBS=`$PKG_CONFIG --libs "gnutls >= 3.6.10" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ gnutls_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "gnutls >= 3.6.10" 2>&1`
+ else
+ gnutls_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "gnutls >= 3.6.10" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$gnutls_PKG_ERRORS" >&5
+
+ as_fn_error $? "Package requirements (gnutls >= 3.6.10) were not met:
+
+$gnutls_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables gnutls_CFLAGS
+and gnutls_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables gnutls_CFLAGS
+and gnutls_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+ gnutls_CFLAGS=$pkg_cv_gnutls_CFLAGS
+ gnutls_LIBS=$pkg_cv_gnutls_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+ save_CFLAGS=$CFLAGS
+ save_LIBS=$LIBS
+ CFLAGS="$CFLAGS $gnutls_CFLAGS"
+ LIBS="$LIBS $gnutls_LIBS"
+
+ ac_fn_c_check_func "$LINENO" "gnutls_pkcs11_copy_pubkey" "ac_cv_func_gnutls_pkcs11_copy_pubkey"
+if test "x$ac_cv_func_gnutls_pkcs11_copy_pubkey" = xyes
+then :
+ enable_pkcs11=yes
+else $as_nop
+ enable_pkcs11=no
+fi
+
+ if test "$enable_pkcs11" = yes
+then :
+
+printf "%s\n" "#define ENABLE_PKCS11 1" >>confdefs.h
+
+fi
+
+ ac_fn_check_decl "$LINENO" "GNUTLS_SIGN_EDDSA_ED448" "ac_cv_have_decl_GNUTLS_SIGN_EDDSA_ED448" "#include
+" "$ac_c_undeclared_builtin_options" "CFLAGS"
+if test "x$ac_cv_have_decl_GNUTLS_SIGN_EDDSA_ED448" = xyes
+then :
+
+printf "%s\n" "#define HAVE_ED448 1" >>confdefs.h
+
+ enable_ed448=yes
+else $as_nop
+ enable_ed448=no
+fi
+
+ ac_fn_c_check_func "$LINENO" "gnutls_early_cipher_get" "ac_cv_func_gnutls_early_cipher_get"
+if test "x$ac_cv_func_gnutls_early_cipher_get" = xyes
+then :
+
+printf "%s\n" "#define HAVE_GNUTLS_QUIC 1" >>confdefs.h
+
+ gnutls_quic=yes
+else $as_nop
+ gnutls_quic=no
+fi
+
+
+ CFLAGS=$save_CFLAGS
+ LIBS=$save_LIBS
+
+fi
+ if test "$enable_pkcs11" = "yes"; then
+ ENABLE_PKCS11_TRUE=
+ ENABLE_PKCS11_FALSE='#'
+else
+ ENABLE_PKCS11_TRUE='#'
+ ENABLE_PKCS11_FALSE=
+fi
+
+
+# Check whether --enable-recvmmsg was given.
+if test ${enable_recvmmsg+y}
+then :
+ enableval=$enable_recvmmsg;
+else $as_nop
+ enable_recvmmsg=auto
+fi
+
+
+case $enable_recvmmsg in #(
+ auto|yes) :
+
+ ac_fn_c_check_func "$LINENO" "recvmmsg" "ac_cv_func_recvmmsg"
+if test "x$ac_cv_func_recvmmsg" = xyes
+then :
+ ac_fn_c_check_func "$LINENO" "sendmmsg" "ac_cv_func_sendmmsg"
+if test "x$ac_cv_func_sendmmsg" = xyes
+then :
+ enable_recvmmsg=yes
+else $as_nop
+ enable_recvmmsg=no
+fi
+
+else $as_nop
+ enable_recvmmsg=no
+fi
+ ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value of --enable-recvmmsg.
+ " "$LINENO" 5 ;; #(
+ *) :
+ ;;
+esac
+
+if test "$enable_recvmmsg" = yes
+then :
+
+
+printf "%s\n" "#define ENABLE_RECVMMSG 1" >>confdefs.h
+
+fi
+
+# XDP support
+# Check whether --enable-xdp was given.
+if test ${enable_xdp+y}
+then :
+ enableval=$enable_xdp;
+else $as_nop
+ enable_xdp=auto
+fi
+
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libbpf" >&5
+printf %s "checking for libbpf... " >&6; }
+
+if test -n "$libbpf_CFLAGS"; then
+ pkg_cv_libbpf_CFLAGS="$libbpf_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libbpf\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libbpf") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libbpf_CFLAGS=`$PKG_CONFIG --cflags "libbpf" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libbpf_LIBS"; then
+ pkg_cv_libbpf_LIBS="$libbpf_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libbpf\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libbpf") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libbpf_LIBS=`$PKG_CONFIG --libs "libbpf" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libbpf_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libbpf" 2>&1`
+ else
+ libbpf_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libbpf" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libbpf_PKG_ERRORS" >&5
+
+ have_libbpf=no
+
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ have_libbpf=no
+
+else
+ libbpf_CFLAGS=$pkg_cv_libbpf_CFLAGS
+ libbpf_LIBS=$pkg_cv_libbpf_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ save_LIBS=$LIBS
+ LIBS="$LIBS $libbpf_LIBS"
+ ac_fn_c_check_func "$LINENO" "bpf_object__find_map_by_offset" "ac_cv_func_bpf_object__find_map_by_offset"
+if test "x$ac_cv_func_bpf_object__find_map_by_offset" = xyes
+then :
+ libbpf1=no
+else $as_nop
+ libbpf1=yes
+fi
+
+ LIBS=$save_LIBS
+ have_libbpf=yes
+fi
+
+case $enable_xdp in #(
+ auto) :
+ if test "$have_libbpf" = "yes"
+then :
+ enable_xdp=yes
+else $as_nop
+ enable_xdp=no
+fi ;; #(
+ yes) :
+ if test "$have_libbpf" = "yes"
+then :
+ enable_xdp=yes
+else $as_nop
+ as_fn_error $? "libbpf not available" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value of --enable-xdp." "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$enable_xdp" != "no"; then
+ ENABLE_XDP_TRUE=
+ ENABLE_XDP_FALSE='#'
+else
+ ENABLE_XDP_TRUE='#'
+ ENABLE_XDP_FALSE=
+fi
+
+
+if test "$enable_xdp" = "yes"
+then :
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libxdp" >&5
+printf %s "checking for libxdp... " >&6; }
+
+if test -n "$libxdp_CFLAGS"; then
+ pkg_cv_libxdp_CFLAGS="$libxdp_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libxdp\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libxdp") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libxdp_CFLAGS=`$PKG_CONFIG --cflags "libxdp" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libxdp_LIBS"; then
+ pkg_cv_libxdp_LIBS="$libxdp_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libxdp\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libxdp") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libxdp_LIBS=`$PKG_CONFIG --libs "libxdp" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libxdp_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libxdp" 2>&1`
+ else
+ libxdp_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libxdp" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libxdp_PKG_ERRORS" >&5
+
+ enable_xdp=yes
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ enable_xdp=yes
+else
+ libxdp_CFLAGS=$pkg_cv_libxdp_CFLAGS
+ libxdp_LIBS=$pkg_cv_libxdp_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ enable_xdp=libxdp
+fi
+ if test "$enable_xdp" = "libxdp"
+then :
+
+printf "%s\n" "#define USE_LIBXDP 1" >>confdefs.h
+
+ libbpf_CFLAGS="$libbpf_CFLAGS $libxdp_CFLAGS"
+ libbpf_LIBS="$libbpf_LIBS $libxdp_LIBS"
+else $as_nop
+ if test "$libbpf1" = "yes"
+then :
+ as_fn_error $? "libxdp not available" "$LINENO" 5
+fi
+
+fi
+
+fi
+
+
+
+if test "$enable_xdp" != "no"
+then :
+
+
+printf "%s\n" "#define ENABLE_XDP 1" >>confdefs.h
+
+fi
+
+# Reuseport support
+case $host_os in #(
+ freebsd*) :
+ reuseport_opt=SO_REUSEPORT_LB ;; #(
+ *) :
+ reuseport_opt=SO_REUSEPORT ;; #(
+ *) :
+ ;;
+esac
+
+# Check whether --enable-reuseport was given.
+if test ${enable_reuseport+y}
+then :
+ enableval=$enable_reuseport;
+else $as_nop
+ enable_reuseport=auto
+
+fi
+
+
+case $enable_reuseport in #(
+ auto) :
+
+ case $host_os in #(
+ freebsd*|linux*) :
+ as_ac_Symbol=`printf "%s\n" "ac_cv_have_decl_$reuseport_opt" | $as_tr_sh`
+ac_fn_check_decl "$LINENO" "$reuseport_opt" "$as_ac_Symbol" "#include
+
+" "$ac_c_undeclared_builtin_options" "CFLAGS"
+if eval test \"x\$"$as_ac_Symbol"\" = x"yes"
+then :
+ enable_reuseport=yes
+else $as_nop
+ enable_reuseport=no
+fi ;; #(
+ *) :
+ enable_reuseport=no
+ ;; #(
+ *) :
+ ;;
+esac ;; #(
+ yes) :
+ as_ac_Symbol=`printf "%s\n" "ac_cv_have_decl_$reuseport_opt" | $as_tr_sh`
+ac_fn_check_decl "$LINENO" "$reuseport_opt" "$as_ac_Symbol" "#include
+
+" "$ac_c_undeclared_builtin_options" "CFLAGS"
+if eval test \"x\$"$as_ac_Symbol"\" = x"yes"
+then :
+
+else $as_nop
+ as_fn_error $? "SO_REUSEPORT(_LB) not supported." "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value of --enable-reuseport." "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+
+if test "$enable_reuseport" = yes
+then :
+
+
+printf "%s\n" "#define ENABLE_REUSEPORT 1" >>confdefs.h
+
+fi
+
+#########################################
+# Dependencies needed for Knot DNS daemon
+#########################################
+
+# Systemd integration
+# Check whether --enable-systemd was given.
+if test ${enable_systemd+y}
+then :
+ enableval=$enable_systemd; enable_systemd="$enableval"
+else $as_nop
+ enable_systemd=auto
+fi
+
+
+# Check whether --enable-dbus was given.
+if test ${enable_dbus+y}
+then :
+ enableval=$enable_dbus; enable_dbus="$enableval"
+else $as_nop
+ enable_dbus=auto
+fi
+
+
+if test "$enable_daemon" = "yes"
+then :
+
+
+if test "$enable_systemd" != "no"
+then :
+
+ case $enable_systemd in #(
+ auto) :
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libsystemd" >&5
+printf %s "checking for libsystemd... " >&6; }
+
+if test -n "$systemd_CFLAGS"; then
+ pkg_cv_systemd_CFLAGS="$systemd_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_CFLAGS=`$PKG_CONFIG --cflags "libsystemd" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$systemd_LIBS"; then
+ pkg_cv_systemd_LIBS="$systemd_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_LIBS=`$PKG_CONFIG --libs "libsystemd" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ systemd_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libsystemd" 2>&1`
+ else
+ systemd_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libsystemd" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$systemd_PKG_ERRORS" >&5
+
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libsystemd-daemon libsystemd-journal" >&5
+printf %s "checking for libsystemd-daemon libsystemd-journal... " >&6; }
+
+if test -n "$systemd_CFLAGS"; then
+ pkg_cv_systemd_CFLAGS="$systemd_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd-daemon libsystemd-journal\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd-daemon libsystemd-journal") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_CFLAGS=`$PKG_CONFIG --cflags "libsystemd-daemon libsystemd-journal" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$systemd_LIBS"; then
+ pkg_cv_systemd_LIBS="$systemd_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd-daemon libsystemd-journal\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd-daemon libsystemd-journal") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_LIBS=`$PKG_CONFIG --libs "libsystemd-daemon libsystemd-journal" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ systemd_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libsystemd-daemon libsystemd-journal" 2>&1`
+ else
+ systemd_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libsystemd-daemon libsystemd-journal" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$systemd_PKG_ERRORS" >&5
+
+ enable_systemd=no
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ enable_systemd=no
+else
+ systemd_CFLAGS=$pkg_cv_systemd_CFLAGS
+ systemd_LIBS=$pkg_cv_systemd_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ enable_systemd=yes
+fi
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libsystemd-daemon libsystemd-journal" >&5
+printf %s "checking for libsystemd-daemon libsystemd-journal... " >&6; }
+
+if test -n "$systemd_CFLAGS"; then
+ pkg_cv_systemd_CFLAGS="$systemd_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd-daemon libsystemd-journal\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd-daemon libsystemd-journal") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_CFLAGS=`$PKG_CONFIG --cflags "libsystemd-daemon libsystemd-journal" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$systemd_LIBS"; then
+ pkg_cv_systemd_LIBS="$systemd_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd-daemon libsystemd-journal\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd-daemon libsystemd-journal") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_LIBS=`$PKG_CONFIG --libs "libsystemd-daemon libsystemd-journal" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ systemd_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libsystemd-daemon libsystemd-journal" 2>&1`
+ else
+ systemd_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libsystemd-daemon libsystemd-journal" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$systemd_PKG_ERRORS" >&5
+
+ enable_systemd=no
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ enable_systemd=no
+else
+ systemd_CFLAGS=$pkg_cv_systemd_CFLAGS
+ systemd_LIBS=$pkg_cv_systemd_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ enable_systemd=yes
+fi
+else
+ systemd_CFLAGS=$pkg_cv_systemd_CFLAGS
+ systemd_LIBS=$pkg_cv_systemd_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ enable_systemd=yes
+fi ;; #(
+ yes) :
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libsystemd" >&5
+printf %s "checking for libsystemd... " >&6; }
+
+if test -n "$systemd_CFLAGS"; then
+ pkg_cv_systemd_CFLAGS="$systemd_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_CFLAGS=`$PKG_CONFIG --cflags "libsystemd" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$systemd_LIBS"; then
+ pkg_cv_systemd_LIBS="$systemd_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_LIBS=`$PKG_CONFIG --libs "libsystemd" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ systemd_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libsystemd" 2>&1`
+ else
+ systemd_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libsystemd" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$systemd_PKG_ERRORS" >&5
+
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libsystemd-daemon libsystemd-journal" >&5
+printf %s "checking for libsystemd-daemon libsystemd-journal... " >&6; }
+
+if test -n "$systemd_CFLAGS"; then
+ pkg_cv_systemd_CFLAGS="$systemd_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd-daemon libsystemd-journal\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd-daemon libsystemd-journal") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_CFLAGS=`$PKG_CONFIG --cflags "libsystemd-daemon libsystemd-journal" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$systemd_LIBS"; then
+ pkg_cv_systemd_LIBS="$systemd_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd-daemon libsystemd-journal\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd-daemon libsystemd-journal") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_LIBS=`$PKG_CONFIG --libs "libsystemd-daemon libsystemd-journal" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ systemd_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libsystemd-daemon libsystemd-journal" 2>&1`
+ else
+ systemd_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libsystemd-daemon libsystemd-journal" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$systemd_PKG_ERRORS" >&5
+
+ as_fn_error $? "Package requirements (libsystemd-daemon libsystemd-journal) were not met:
+
+$systemd_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables systemd_CFLAGS
+and systemd_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables systemd_CFLAGS
+and systemd_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+ systemd_CFLAGS=$pkg_cv_systemd_CFLAGS
+ systemd_LIBS=$pkg_cv_systemd_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+fi
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libsystemd-daemon libsystemd-journal" >&5
+printf %s "checking for libsystemd-daemon libsystemd-journal... " >&6; }
+
+if test -n "$systemd_CFLAGS"; then
+ pkg_cv_systemd_CFLAGS="$systemd_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd-daemon libsystemd-journal\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd-daemon libsystemd-journal") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_CFLAGS=`$PKG_CONFIG --cflags "libsystemd-daemon libsystemd-journal" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$systemd_LIBS"; then
+ pkg_cv_systemd_LIBS="$systemd_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libsystemd-daemon libsystemd-journal\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libsystemd-daemon libsystemd-journal") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_systemd_LIBS=`$PKG_CONFIG --libs "libsystemd-daemon libsystemd-journal" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ systemd_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libsystemd-daemon libsystemd-journal" 2>&1`
+ else
+ systemd_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libsystemd-daemon libsystemd-journal" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$systemd_PKG_ERRORS" >&5
+
+ as_fn_error $? "Package requirements (libsystemd-daemon libsystemd-journal) were not met:
+
+$systemd_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables systemd_CFLAGS
+and systemd_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables systemd_CFLAGS
+and systemd_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+ systemd_CFLAGS=$pkg_cv_systemd_CFLAGS
+ systemd_LIBS=$pkg_cv_systemd_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+fi
+else
+ systemd_CFLAGS=$pkg_cv_systemd_CFLAGS
+ systemd_LIBS=$pkg_cv_systemd_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+fi ;; #(
+ *) :
+ as_fn_error $? "Invalid value of --enable-systemd." "$LINENO" 5 ;; #(
+ *) :
+ ;;
+esac
+
+fi
+
+if test "$enable_systemd" = "yes"
+then :
+
+
+printf "%s\n" "#define ENABLE_SYSTEMD 1" >>confdefs.h
+
+fi
+
+if test "$enable_dbus" != "no"
+then :
+
+ case $enable_dbus in #(
+ auto) :
+ if test "$enable_systemd" = "yes"
+then :
+
+printf "%s\n" "#define ENABLE_DBUS_SYSTEMD 1" >>confdefs.h
+
+ enable_dbus=systemd
+else $as_nop
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for dbus-1" >&5
+printf %s "checking for dbus-1... " >&6; }
+
+if test -n "$libdbus_CFLAGS"; then
+ pkg_cv_libdbus_CFLAGS="$libdbus_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"dbus-1\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "dbus-1") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libdbus_CFLAGS=`$PKG_CONFIG --cflags "dbus-1" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libdbus_LIBS"; then
+ pkg_cv_libdbus_LIBS="$libdbus_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"dbus-1\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "dbus-1") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libdbus_LIBS=`$PKG_CONFIG --libs "dbus-1" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libdbus_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "dbus-1" 2>&1`
+ else
+ libdbus_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "dbus-1" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libdbus_PKG_ERRORS" >&5
+
+ enable_dbus=no
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ enable_dbus=no
+else
+ libdbus_CFLAGS=$pkg_cv_libdbus_CFLAGS
+ libdbus_LIBS=$pkg_cv_libdbus_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+printf "%s\n" "#define ENABLE_DBUS_LIBDBUS 1" >>confdefs.h
+
+ enable_dbus=libdbus
+fi
+fi ;; #(
+ systemd) :
+ if test "$enable_systemd" = "yes"
+then :
+
+printf "%s\n" "#define ENABLE_DBUS_SYSTEMD 1" >>confdefs.h
+
+ enable_dbus=systemd
+else $as_nop
+ as_fn_error $? "systemd >= 221 not available." "$LINENO" 5
+fi ;; #(
+ libdbus) :
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for dbus-1" >&5
+printf %s "checking for dbus-1... " >&6; }
+
+if test -n "$libdbus_CFLAGS"; then
+ pkg_cv_libdbus_CFLAGS="$libdbus_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"dbus-1\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "dbus-1") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libdbus_CFLAGS=`$PKG_CONFIG --cflags "dbus-1" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libdbus_LIBS"; then
+ pkg_cv_libdbus_LIBS="$libdbus_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"dbus-1\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "dbus-1") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libdbus_LIBS=`$PKG_CONFIG --libs "dbus-1" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libdbus_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "dbus-1" 2>&1`
+ else
+ libdbus_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "dbus-1" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libdbus_PKG_ERRORS" >&5
+
+ as_fn_error $? "libdbus not available." "$LINENO" 5
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ as_fn_error $? "libdbus not available." "$LINENO" 5
+else
+ libdbus_CFLAGS=$pkg_cv_libdbus_CFLAGS
+ libdbus_LIBS=$pkg_cv_libdbus_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+printf "%s\n" "#define ENABLE_DBUS_LIBDBUS 1" >>confdefs.h
+
+ enable_dbus=libdbus
+fi ;; #(
+ no) :
+ enable_dbus=no ;; #(
+ *) :
+ as_fn_error $? "Invalid value of --enable-dbus." "$LINENO" 5 ;; #(
+ *) :
+ ;;
+esac
+
+fi
+
+
+fi
+# Socket polling method
+socket_polling=
+
+# Check whether --with-socket-polling was given.
+if test ${with_socket_polling+y}
+then :
+ withval=$with_socket_polling; socket_polling=$withval
+else $as_nop
+ socket_polling=auto
+
+fi
+
+
+case $socket_polling in #(
+ auto) :
+
+ for ac_func in kqueue
+do :
+ ac_fn_c_check_func "$LINENO" "kqueue" "ac_cv_func_kqueue"
+if test "x$ac_cv_func_kqueue" = xyes
+then :
+ printf "%s\n" "#define HAVE_KQUEUE 1" >>confdefs.h
+
+printf "%s\n" "#define HAVE_KQUEUE 1" >>confdefs.h
+
+ socket_polling=kqueue
+else $as_nop
+
+ for ac_func in epoll_create
+do :
+ ac_fn_c_check_func "$LINENO" "epoll_create" "ac_cv_func_epoll_create"
+if test "x$ac_cv_func_epoll_create" = xyes
+then :
+ printf "%s\n" "#define HAVE_EPOLL_CREATE 1" >>confdefs.h
+
+printf "%s\n" "#define HAVE_EPOLL 1" >>confdefs.h
+
+ socket_polling=epoll
+else $as_nop
+ socket_polling=poll
+fi
+
+done
+fi
+
+done ;; #(
+ poll) :
+ socket_polling=poll ;; #(
+ epoll) :
+
+ for ac_func in epoll_create
+do :
+ ac_fn_c_check_func "$LINENO" "epoll_create" "ac_cv_func_epoll_create"
+if test "x$ac_cv_func_epoll_create" = xyes
+then :
+ printf "%s\n" "#define HAVE_EPOLL_CREATE 1" >>confdefs.h
+
+printf "%s\n" "#define HAVE_EPOLL 1" >>confdefs.h
+
+ socket_polling=epoll
+else $as_nop
+ as_fn_error $? "epoll not available." "$LINENO" 5
+fi
+
+done ;; #(
+ kqueue) :
+
+ for ac_func in kqueue
+do :
+ ac_fn_c_check_func "$LINENO" "kqueue" "ac_cv_func_kqueue"
+if test "x$ac_cv_func_kqueue" = xyes
+then :
+ printf "%s\n" "#define HAVE_KQUEUE 1" >>confdefs.h
+
+printf "%s\n" "#define HAVE_KQUEUE 1" >>confdefs.h
+
+ socket_polling=kqueue
+else $as_nop
+ as_fn_error $? "kqueue not available." "$LINENO" 5
+fi
+
+done ;; #(
+ libkqueue) :
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libkqueue" >&5
+printf %s "checking for libkqueue... " >&6; }
+
+if test -n "$libkqueue_CFLAGS"; then
+ pkg_cv_libkqueue_CFLAGS="$libkqueue_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libkqueue\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libkqueue") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libkqueue_CFLAGS=`$PKG_CONFIG --cflags "libkqueue" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libkqueue_LIBS"; then
+ pkg_cv_libkqueue_LIBS="$libkqueue_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libkqueue\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libkqueue") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libkqueue_LIBS=`$PKG_CONFIG --libs "libkqueue" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libkqueue_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libkqueue" 2>&1`
+ else
+ libkqueue_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libkqueue" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libkqueue_PKG_ERRORS" >&5
+
+ as_fn_error $? "libkqueue not available." "$LINENO" 5
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ as_fn_error $? "libkqueue not available." "$LINENO" 5
+else
+ libkqueue_CFLAGS=$pkg_cv_libkqueue_CFLAGS
+ libkqueue_LIBS=$pkg_cv_libkqueue_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+printf "%s\n" "#define HAVE_KQUEUE 1" >>confdefs.h
+
+ socket_polling=libkqueue
+fi ;; #(
+ *) :
+ as_fn_error $? "Invalid value of --socket-polling." "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+
+# Alternative memory allocator
+malloc_LIBS=
+
+# Check whether --with-memory-allocator was given.
+if test ${with_memory_allocator+y}
+then :
+ withval=$with_memory_allocator; case $withval in #(
+ auto) :
+ ;; #(
+ *) :
+ malloc_LIBS="-l$withval"
+ ;; #(
+ *) :
+ ;;
+esac
+ with_memory_allocator=$withval
+
+fi
+
+if test "$with_memory_allocator" = ""
+then :
+ with_memory_allocator="auto"
+fi
+
+
+if test "$enable_daemon" = "yes"
+then :
+
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for liburcu" >&5
+printf %s "checking for liburcu... " >&6; }
+
+if test -n "$liburcu_CFLAGS"; then
+ pkg_cv_liburcu_CFLAGS="$liburcu_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"liburcu\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "liburcu") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_liburcu_CFLAGS=`$PKG_CONFIG --cflags "liburcu" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$liburcu_LIBS"; then
+ pkg_cv_liburcu_LIBS="$liburcu_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"liburcu\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "liburcu") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_liburcu_LIBS=`$PKG_CONFIG --libs "liburcu" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ liburcu_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "liburcu" 2>&1`
+ else
+ liburcu_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "liburcu" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$liburcu_PKG_ERRORS" >&5
+
+
+ as_fn_error $? "liburcu not found" "$LINENO" 5
+
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+ as_fn_error $? "liburcu not found" "$LINENO" 5
+
+else
+ liburcu_CFLAGS=$pkg_cv_liburcu_CFLAGS
+ liburcu_LIBS=$pkg_cv_liburcu_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+fi
+
+
+fi
+
+static_modules=""
+shared_modules=""
+static_modules_declars=""
+static_modules_init=""
+doc_modules=""
+
+
+
+# Check whether --with-module-authsignal was given.
+if test ${with_module_authsignal+y}
+then :
+ withval=$with_module_authsignal; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/authsignal/authsignal.rst\n"
+
+ STATIC_MODULE_authsignal=no
+ SHARED_MODULE_authsignal=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_authsignal=yes
+ static_modules="${static_modules}authsignal "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_authsignal;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_authsignal }," ;; #(
+ shared) :
+ SHARED_MODULE_authsignal=yes
+ shared_modules="${shared_modules}authsignal "
+ if test "" = "non-shareable"
+then :
+ as_fn_error $? "Module authsignal cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module authsignal requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-authsignal" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_authsignal" = "yes"; then
+ STATIC_MODULE_authsignal_TRUE=
+ STATIC_MODULE_authsignal_FALSE='#'
+else
+ STATIC_MODULE_authsignal_TRUE='#'
+ STATIC_MODULE_authsignal_FALSE=
+fi
+
+ if test "$SHARED_MODULE_authsignal" = "yes"; then
+ SHARED_MODULE_authsignal_TRUE=
+ SHARED_MODULE_authsignal_FALSE='#'
+else
+ SHARED_MODULE_authsignal_TRUE='#'
+ SHARED_MODULE_authsignal_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-cookies was given.
+if test ${with_module_cookies+y}
+then :
+ withval=$with_module_cookies; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/cookies/cookies.rst\n"
+
+ STATIC_MODULE_cookies=no
+ SHARED_MODULE_cookies=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_cookies=yes
+ static_modules="${static_modules}cookies "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_cookies;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_cookies }," ;; #(
+ shared) :
+ SHARED_MODULE_cookies=yes
+ shared_modules="${shared_modules}cookies "
+ if test "" = "non-shareable"
+then :
+ as_fn_error $? "Module cookies cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module cookies requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-cookies" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_cookies" = "yes"; then
+ STATIC_MODULE_cookies_TRUE=
+ STATIC_MODULE_cookies_FALSE='#'
+else
+ STATIC_MODULE_cookies_TRUE='#'
+ STATIC_MODULE_cookies_FALSE=
+fi
+
+ if test "$SHARED_MODULE_cookies" = "yes"; then
+ SHARED_MODULE_cookies_TRUE=
+ SHARED_MODULE_cookies_FALSE='#'
+else
+ SHARED_MODULE_cookies_TRUE='#'
+ SHARED_MODULE_cookies_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-dnsproxy was given.
+if test ${with_module_dnsproxy+y}
+then :
+ withval=$with_module_dnsproxy; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/dnsproxy/dnsproxy.rst\n"
+
+ STATIC_MODULE_dnsproxy=no
+ SHARED_MODULE_dnsproxy=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_dnsproxy=yes
+ static_modules="${static_modules}dnsproxy "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_dnsproxy;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_dnsproxy }," ;; #(
+ shared) :
+ SHARED_MODULE_dnsproxy=yes
+ shared_modules="${shared_modules}dnsproxy "
+ if test ""non-shareable"" = "non-shareable"
+then :
+ as_fn_error $? "Module dnsproxy cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module dnsproxy requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-dnsproxy" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_dnsproxy" = "yes"; then
+ STATIC_MODULE_dnsproxy_TRUE=
+ STATIC_MODULE_dnsproxy_FALSE='#'
+else
+ STATIC_MODULE_dnsproxy_TRUE='#'
+ STATIC_MODULE_dnsproxy_FALSE=
+fi
+
+ if test "$SHARED_MODULE_dnsproxy" = "yes"; then
+ SHARED_MODULE_dnsproxy_TRUE=
+ SHARED_MODULE_dnsproxy_FALSE='#'
+else
+ SHARED_MODULE_dnsproxy_TRUE='#'
+ SHARED_MODULE_dnsproxy_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-dnstap was given.
+if test ${with_module_dnstap+y}
+then :
+ withval=$with_module_dnstap; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="no"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/dnstap/dnstap.rst\n"
+
+ STATIC_MODULE_dnstap=no
+ SHARED_MODULE_dnstap=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_dnstap=yes
+ static_modules="${static_modules}dnstap "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_dnstap;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_dnstap }," ;; #(
+ shared) :
+ SHARED_MODULE_dnstap=yes
+ shared_modules="${shared_modules}dnstap "
+ if test "" = "non-shareable"
+then :
+ as_fn_error $? "Module dnstap cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module dnstap requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-dnstap" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_dnstap" = "yes"; then
+ STATIC_MODULE_dnstap_TRUE=
+ STATIC_MODULE_dnstap_FALSE='#'
+else
+ STATIC_MODULE_dnstap_TRUE='#'
+ STATIC_MODULE_dnstap_FALSE=
+fi
+
+ if test "$SHARED_MODULE_dnstap" = "yes"; then
+ SHARED_MODULE_dnstap_TRUE=
+ SHARED_MODULE_dnstap_FALSE='#'
+else
+ SHARED_MODULE_dnstap_TRUE='#'
+ SHARED_MODULE_dnstap_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-geoip was given.
+if test ${with_module_geoip+y}
+then :
+ withval=$with_module_geoip; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/geoip/geoip.rst\n"
+
+ STATIC_MODULE_geoip=no
+ SHARED_MODULE_geoip=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_geoip=yes
+ static_modules="${static_modules}geoip "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_geoip;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_geoip }," ;; #(
+ shared) :
+ SHARED_MODULE_geoip=yes
+ shared_modules="${shared_modules}geoip "
+ if test "" = "non-shareable"
+then :
+ as_fn_error $? "Module geoip cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module geoip requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-geoip" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_geoip" = "yes"; then
+ STATIC_MODULE_geoip_TRUE=
+ STATIC_MODULE_geoip_FALSE='#'
+else
+ STATIC_MODULE_geoip_TRUE='#'
+ STATIC_MODULE_geoip_FALSE=
+fi
+
+ if test "$SHARED_MODULE_geoip" = "yes"; then
+ SHARED_MODULE_geoip_TRUE=
+ SHARED_MODULE_geoip_FALSE='#'
+else
+ SHARED_MODULE_geoip_TRUE='#'
+ SHARED_MODULE_geoip_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-noudp was given.
+if test ${with_module_noudp+y}
+then :
+ withval=$with_module_noudp; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/noudp/noudp.rst\n"
+
+ STATIC_MODULE_noudp=no
+ SHARED_MODULE_noudp=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_noudp=yes
+ static_modules="${static_modules}noudp "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_noudp;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_noudp }," ;; #(
+ shared) :
+ SHARED_MODULE_noudp=yes
+ shared_modules="${shared_modules}noudp "
+ if test "" = "non-shareable"
+then :
+ as_fn_error $? "Module noudp cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module noudp requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-noudp" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_noudp" = "yes"; then
+ STATIC_MODULE_noudp_TRUE=
+ STATIC_MODULE_noudp_FALSE='#'
+else
+ STATIC_MODULE_noudp_TRUE='#'
+ STATIC_MODULE_noudp_FALSE=
+fi
+
+ if test "$SHARED_MODULE_noudp" = "yes"; then
+ SHARED_MODULE_noudp_TRUE=
+ SHARED_MODULE_noudp_FALSE='#'
+else
+ SHARED_MODULE_noudp_TRUE='#'
+ SHARED_MODULE_noudp_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-onlinesign was given.
+if test ${with_module_onlinesign+y}
+then :
+ withval=$with_module_onlinesign; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/onlinesign/onlinesign.rst\n"
+
+ STATIC_MODULE_onlinesign=no
+ SHARED_MODULE_onlinesign=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_onlinesign=yes
+ static_modules="${static_modules}onlinesign "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_onlinesign;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_onlinesign }," ;; #(
+ shared) :
+ SHARED_MODULE_onlinesign=yes
+ shared_modules="${shared_modules}onlinesign "
+ if test ""non-shareable"" = "non-shareable"
+then :
+ as_fn_error $? "Module onlinesign cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module onlinesign requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-onlinesign" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_onlinesign" = "yes"; then
+ STATIC_MODULE_onlinesign_TRUE=
+ STATIC_MODULE_onlinesign_FALSE='#'
+else
+ STATIC_MODULE_onlinesign_TRUE='#'
+ STATIC_MODULE_onlinesign_FALSE=
+fi
+
+ if test "$SHARED_MODULE_onlinesign" = "yes"; then
+ SHARED_MODULE_onlinesign_TRUE=
+ SHARED_MODULE_onlinesign_FALSE='#'
+else
+ SHARED_MODULE_onlinesign_TRUE='#'
+ SHARED_MODULE_onlinesign_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-probe was given.
+if test ${with_module_probe+y}
+then :
+ withval=$with_module_probe; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/probe/probe.rst\n"
+
+ STATIC_MODULE_probe=no
+ SHARED_MODULE_probe=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_probe=yes
+ static_modules="${static_modules}probe "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_probe;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_probe }," ;; #(
+ shared) :
+ SHARED_MODULE_probe=yes
+ shared_modules="${shared_modules}probe "
+ if test "" = "non-shareable"
+then :
+ as_fn_error $? "Module probe cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module probe requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-probe" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_probe" = "yes"; then
+ STATIC_MODULE_probe_TRUE=
+ STATIC_MODULE_probe_FALSE='#'
+else
+ STATIC_MODULE_probe_TRUE='#'
+ STATIC_MODULE_probe_FALSE=
+fi
+
+ if test "$SHARED_MODULE_probe" = "yes"; then
+ SHARED_MODULE_probe_TRUE=
+ SHARED_MODULE_probe_FALSE='#'
+else
+ SHARED_MODULE_probe_TRUE='#'
+ SHARED_MODULE_probe_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-queryacl was given.
+if test ${with_module_queryacl+y}
+then :
+ withval=$with_module_queryacl; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/queryacl/queryacl.rst\n"
+
+ STATIC_MODULE_queryacl=no
+ SHARED_MODULE_queryacl=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_queryacl=yes
+ static_modules="${static_modules}queryacl "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_queryacl;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_queryacl }," ;; #(
+ shared) :
+ SHARED_MODULE_queryacl=yes
+ shared_modules="${shared_modules}queryacl "
+ if test "" = "non-shareable"
+then :
+ as_fn_error $? "Module queryacl cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module queryacl requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-queryacl" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_queryacl" = "yes"; then
+ STATIC_MODULE_queryacl_TRUE=
+ STATIC_MODULE_queryacl_FALSE='#'
+else
+ STATIC_MODULE_queryacl_TRUE='#'
+ STATIC_MODULE_queryacl_FALSE=
+fi
+
+ if test "$SHARED_MODULE_queryacl" = "yes"; then
+ SHARED_MODULE_queryacl_TRUE=
+ SHARED_MODULE_queryacl_FALSE='#'
+else
+ SHARED_MODULE_queryacl_TRUE='#'
+ SHARED_MODULE_queryacl_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-rrl was given.
+if test ${with_module_rrl+y}
+then :
+ withval=$with_module_rrl; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/rrl/rrl.rst\n"
+
+ STATIC_MODULE_rrl=no
+ SHARED_MODULE_rrl=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_rrl=yes
+ static_modules="${static_modules}rrl "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_rrl;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_rrl }," ;; #(
+ shared) :
+ SHARED_MODULE_rrl=yes
+ shared_modules="${shared_modules}rrl "
+ if test "" = "non-shareable"
+then :
+ as_fn_error $? "Module rrl cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module rrl requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-rrl" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_rrl" = "yes"; then
+ STATIC_MODULE_rrl_TRUE=
+ STATIC_MODULE_rrl_FALSE='#'
+else
+ STATIC_MODULE_rrl_TRUE='#'
+ STATIC_MODULE_rrl_FALSE=
+fi
+
+ if test "$SHARED_MODULE_rrl" = "yes"; then
+ SHARED_MODULE_rrl_TRUE=
+ SHARED_MODULE_rrl_FALSE='#'
+else
+ SHARED_MODULE_rrl_TRUE='#'
+ SHARED_MODULE_rrl_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-stats was given.
+if test ${with_module_stats+y}
+then :
+ withval=$with_module_stats; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/stats/stats.rst\n"
+
+ STATIC_MODULE_stats=no
+ SHARED_MODULE_stats=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_stats=yes
+ static_modules="${static_modules}stats "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_stats;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_stats }," ;; #(
+ shared) :
+ SHARED_MODULE_stats=yes
+ shared_modules="${shared_modules}stats "
+ if test "" = "non-shareable"
+then :
+ as_fn_error $? "Module stats cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module stats requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-stats" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_stats" = "yes"; then
+ STATIC_MODULE_stats_TRUE=
+ STATIC_MODULE_stats_FALSE='#'
+else
+ STATIC_MODULE_stats_TRUE='#'
+ STATIC_MODULE_stats_FALSE=
+fi
+
+ if test "$SHARED_MODULE_stats" = "yes"; then
+ SHARED_MODULE_stats_TRUE=
+ SHARED_MODULE_stats_FALSE='#'
+else
+ SHARED_MODULE_stats_TRUE='#'
+ SHARED_MODULE_stats_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-synthrecord was given.
+if test ${with_module_synthrecord+y}
+then :
+ withval=$with_module_synthrecord; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/synthrecord/synthrecord.rst\n"
+
+ STATIC_MODULE_synthrecord=no
+ SHARED_MODULE_synthrecord=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_synthrecord=yes
+ static_modules="${static_modules}synthrecord "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_synthrecord;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_synthrecord }," ;; #(
+ shared) :
+ SHARED_MODULE_synthrecord=yes
+ shared_modules="${shared_modules}synthrecord "
+ if test "" = "non-shareable"
+then :
+ as_fn_error $? "Module synthrecord cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module synthrecord requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-synthrecord" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_synthrecord" = "yes"; then
+ STATIC_MODULE_synthrecord_TRUE=
+ STATIC_MODULE_synthrecord_FALSE='#'
+else
+ STATIC_MODULE_synthrecord_TRUE='#'
+ STATIC_MODULE_synthrecord_FALSE=
+fi
+
+ if test "$SHARED_MODULE_synthrecord" = "yes"; then
+ SHARED_MODULE_synthrecord_TRUE=
+ SHARED_MODULE_synthrecord_FALSE='#'
+else
+ SHARED_MODULE_synthrecord_TRUE='#'
+ SHARED_MODULE_synthrecord_FALSE=
+fi
+
+
+
+
+# Check whether --with-module-whoami was given.
+if test ${with_module_whoami+y}
+then :
+ withval=$with_module_whoami; module=$withval
+else $as_nop
+ if test "$enable_modules" = "no"
+then :
+ module=no
+else $as_nop
+ module="yes"
+
+fi
+
+fi
+
+
+ doc_modules="${doc_modules}.. include:: modules/whoami/whoami.rst\n"
+
+ STATIC_MODULE_whoami=no
+ SHARED_MODULE_whoami=no
+ case $module in #(
+ yes) :
+ STATIC_MODULE_whoami=yes
+ static_modules="${static_modules}whoami "
+ static_modules_declars="${static_modules_declars}extern const knotd_mod_api_t knotd_mod_api_whoami;\n"
+ static_modules_init="${static_modules_init}\\\\\n\t{ &knotd_mod_api_whoami }," ;; #(
+ shared) :
+ SHARED_MODULE_whoami=yes
+ shared_modules="${shared_modules}whoami "
+ if test "" = "non-shareable"
+then :
+ as_fn_error $? "Module whoami cannot be shared" "$LINENO" 5
+fi
+ if test "$enable_shared" != "yes"
+then :
+ as_fn_error $? "Shared module whoami requires shared libraries" "$LINENO" 5
+fi ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value '$module' for --with-module-whoami" "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$STATIC_MODULE_whoami" = "yes"; then
+ STATIC_MODULE_whoami_TRUE=
+ STATIC_MODULE_whoami_FALSE='#'
+else
+ STATIC_MODULE_whoami_TRUE='#'
+ STATIC_MODULE_whoami_FALSE=
+fi
+
+ if test "$SHARED_MODULE_whoami" = "yes"; then
+ SHARED_MODULE_whoami_TRUE=
+ SHARED_MODULE_whoami_FALSE='#'
+else
+ SHARED_MODULE_whoami_TRUE='#'
+ SHARED_MODULE_whoami_FALSE=
+fi
+
+
+
+STATIC_MODULES_DECLARS=$(printf "$static_modules_declars")
+
+
+STATIC_MODULES_INIT=$(printf "$static_modules_init")
+
+
+DOC_MODULES=$(printf "$doc_modules")
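+
+# The three printf calls above expand the "\n"/"\t" escapes accumulated while
+# looping over the modules, producing the final multi-line values: the extern
+# knotd_mod_api_* declarations, their init table entries, and the list of
+# documentation includes.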
+
+
+
+# Check for Dnstap
+# Check whether --enable-dnstap was given.
+if test ${enable_dnstap+y}
+then :
+ enableval=$enable_dnstap;
+else $as_nop
+ enable_dnstap=no
+fi
+
+
+if test "$enable_dnstap" != "no" -o "$STATIC_MODULE_dnstap" != "no" -o "$SHARED_MODULE_dnstap" != "no"
+then :
+
+ # Extract the first word of "protoc-c", so it can be a program name with args.
+set dummy protoc-c; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_path_PROTOC_C+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ case $PROTOC_C in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PROTOC_C="$PROTOC_C" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_path_PROTOC_C="$as_dir$ac_word$ac_exec_ext"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PROTOC_C=$ac_cv_path_PROTOC_C
+if test -n "$PROTOC_C"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $PROTOC_C" >&5
+printf "%s\n" "$PROTOC_C" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ if test -z "$PROTOC_C"
+then :
+
+ as_fn_error $? "The protoc-c program was not found. Please install protobuf-c!" "$LINENO" 5
+
+fi
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libfstrm" >&5
+printf %s "checking for libfstrm... " >&6; }
+
+if test -n "$libfstrm_CFLAGS"; then
+ pkg_cv_libfstrm_CFLAGS="$libfstrm_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libfstrm\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libfstrm") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libfstrm_CFLAGS=`$PKG_CONFIG --cflags "libfstrm" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libfstrm_LIBS"; then
+ pkg_cv_libfstrm_LIBS="$libfstrm_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libfstrm\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libfstrm") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libfstrm_LIBS=`$PKG_CONFIG --libs "libfstrm" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libfstrm_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libfstrm" 2>&1`
+ else
+ libfstrm_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libfstrm" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libfstrm_PKG_ERRORS" >&5
+
+ as_fn_error $? "Package requirements (libfstrm) were not met:
+
+$libfstrm_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables libfstrm_CFLAGS
+and libfstrm_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables libfstrm_CFLAGS
+and libfstrm_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+ libfstrm_CFLAGS=$pkg_cv_libfstrm_CFLAGS
+ libfstrm_LIBS=$pkg_cv_libfstrm_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+fi
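+
+# The block above is the standard PKG_CHECK_MODULES expansion: pkg_failed=untried
+# means pkg-config itself was unavailable, pkg_failed=yes means the libfstrm query
+# failed, and only the final branch records the discovered CFLAGS/LIBS. The same
+# expansion recurs below for every pkg-config-based dependency.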
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libprotobuf-c >= 1.0.0" >&5
+printf %s "checking for libprotobuf-c >= 1.0.0... " >&6; }
+
+if test -n "$libprotobuf_c_CFLAGS"; then
+ pkg_cv_libprotobuf_c_CFLAGS="$libprotobuf_c_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libprotobuf-c >= 1.0.0\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libprotobuf-c >= 1.0.0") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libprotobuf_c_CFLAGS=`$PKG_CONFIG --cflags "libprotobuf-c >= 1.0.0" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libprotobuf_c_LIBS"; then
+ pkg_cv_libprotobuf_c_LIBS="$libprotobuf_c_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libprotobuf-c >= 1.0.0\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libprotobuf-c >= 1.0.0") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libprotobuf_c_LIBS=`$PKG_CONFIG --libs "libprotobuf-c >= 1.0.0" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libprotobuf_c_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libprotobuf-c >= 1.0.0" 2>&1`
+ else
+ libprotobuf_c_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libprotobuf-c >= 1.0.0" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libprotobuf_c_PKG_ERRORS" >&5
+
+ as_fn_error $? "Package requirements (libprotobuf-c >= 1.0.0) were not met:
+
+$libprotobuf_c_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables libprotobuf_c_CFLAGS
+and libprotobuf_c_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables libprotobuf_c_CFLAGS
+and libprotobuf_c_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+ libprotobuf_c_CFLAGS=$pkg_cv_libprotobuf_c_CFLAGS
+ libprotobuf_c_LIBS=$pkg_cv_libprotobuf_c_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+fi
+ DNSTAP_CFLAGS="$libfstrm_CFLAGS $libprotobuf_c_CFLAGS"
+
+ DNSTAP_LIBS="$libfstrm_LIBS $libprotobuf_c_LIBS"
+
+
+fi
+
+if test "$enable_dnstap" != "no"
+then :
+
+
+printf "%s\n" "#define USE_DNSTAP 1" >>confdefs.h
+
+
+fi
+ if test "$enable_dnstap" != "no"; then
+ HAVE_DNSTAP_TRUE=
+ HAVE_DNSTAP_FALSE='#'
+else
+ HAVE_DNSTAP_TRUE='#'
+ HAVE_DNSTAP_FALSE=
+fi
+
+
+ if test "$enable_dnstap" != "no" -o \
+ "$STATIC_MODULE_dnstap" != "no" -o \
+ "$SHARED_MODULE_dnstap" != "no"; then
+ HAVE_LIBDNSTAP_TRUE=
+ HAVE_LIBDNSTAP_FALSE='#'
+else
+ HAVE_LIBDNSTAP_TRUE='#'
+ HAVE_LIBDNSTAP_FALSE=
+fi
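+
+# Illustrative usage for the dnstap support configured above: the dependency
+# checks run when dnstap is enabled or when the dnstap module is selected, and
+# they require protoc-c plus the libfstrm and libprotobuf-c (>= 1.0.0) packages,
+# whose flags are collected into DNSTAP_CFLAGS/DNSTAP_LIBS:
+#   ./configure --enable-dnstap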
+
+# MaxMind DB for the GeoIP module
+# Check whether --enable-maxminddb was given.
+if test ${enable_maxminddb+y}
+then :
+ enableval=$enable_maxminddb; enable_maxminddb="$enableval"
+else $as_nop
+ enable_maxminddb=auto
+fi
+
+
+if test "$enable_daemon" = "no"
+then :
+ enable_maxminddb=no
+fi
+case $enable_maxminddb in #(
+ no) :
+ ;; #(
+ auto) :
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libmaxminddb" >&5
+printf %s "checking for libmaxminddb... " >&6; }
+
+if test -n "$libmaxminddb_CFLAGS"; then
+ pkg_cv_libmaxminddb_CFLAGS="$libmaxminddb_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libmaxminddb\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libmaxminddb") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libmaxminddb_CFLAGS=`$PKG_CONFIG --cflags "libmaxminddb" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libmaxminddb_LIBS"; then
+ pkg_cv_libmaxminddb_LIBS="$libmaxminddb_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libmaxminddb\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libmaxminddb") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libmaxminddb_LIBS=`$PKG_CONFIG --libs "libmaxminddb" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libmaxminddb_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libmaxminddb" 2>&1`
+ else
+ libmaxminddb_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libmaxminddb" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libmaxminddb_PKG_ERRORS" >&5
+
+ enable_maxminddb=no
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ enable_maxminddb=no
+else
+ libmaxminddb_CFLAGS=$pkg_cv_libmaxminddb_CFLAGS
+ libmaxminddb_LIBS=$pkg_cv_libmaxminddb_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ enable_maxminddb=yes
+fi ;; #(
+ yes) :
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libmaxminddb" >&5
+printf %s "checking for libmaxminddb... " >&6; }
+
+if test -n "$libmaxminddb_CFLAGS"; then
+ pkg_cv_libmaxminddb_CFLAGS="$libmaxminddb_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libmaxminddb\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libmaxminddb") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libmaxminddb_CFLAGS=`$PKG_CONFIG --cflags "libmaxminddb" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libmaxminddb_LIBS"; then
+ pkg_cv_libmaxminddb_LIBS="$libmaxminddb_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libmaxminddb\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libmaxminddb") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libmaxminddb_LIBS=`$PKG_CONFIG --libs "libmaxminddb" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libmaxminddb_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libmaxminddb" 2>&1`
+ else
+ libmaxminddb_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libmaxminddb" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libmaxminddb_PKG_ERRORS" >&5
+
+ as_fn_error $? "Package requirements (libmaxminddb) were not met:
+
+$libmaxminddb_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables libmaxminddb_CFLAGS
+and libmaxminddb_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables libmaxminddb_CFLAGS
+and libmaxminddb_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+ libmaxminddb_CFLAGS=$pkg_cv_libmaxminddb_CFLAGS
+ libmaxminddb_LIBS=$pkg_cv_libmaxminddb_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+fi ;; #(
+ *) :
+
+ save_CFLAGS="$CFLAGS"
+ save_LIBS="$LIBS"
+ if test "$enable_maxminddb" != ""
+then :
+
+ LIBS="$LIBS -L$enable_maxminddb"
+ CFLAGS="$CFLAGS -I$enable_maxminddb/include"
+
+fi
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing MMDB_open" >&5
+printf %s "checking for library containing MMDB_open... " >&6; }
+if test ${ac_cv_search_MMDB_open+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_func_search_save_LIBS=$LIBS
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char MMDB_open ();
+int
+main (void)
+{
+return MMDB_open ();
+ ;
+ return 0;
+}
+_ACEOF
+for ac_lib in '' maxminddb
+do
+ if test -z "$ac_lib"; then
+ ac_res="none required"
+ else
+ ac_res=-l$ac_lib
+ LIBS="-l$ac_lib $ac_func_search_save_LIBS"
+ fi
+ if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_search_MMDB_open=$ac_res
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext
+ if test ${ac_cv_search_MMDB_open+y}
+then :
+ break
+fi
+done
+if test ${ac_cv_search_MMDB_open+y}
+then :
+
+else $as_nop
+ ac_cv_search_MMDB_open=no
+fi
+rm conftest.$ac_ext
+LIBS=$ac_func_search_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_MMDB_open" >&5
+printf "%s\n" "$ac_cv_search_MMDB_open" >&6; }
+ac_res=$ac_cv_search_MMDB_open
+if test "$ac_res" != no
+then :
+ test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
+
+ if test "$enable_maxminddb" != ""
+then :
+
+ libmaxminddb_CFLAGS="-I$enable_maxminddb/include"
+ libmaxminddb_LIBS="-L$enable_maxminddb -lmaxminddb"
+
+else $as_nop
+
+ libmaxminddb_CFLAGS=""
+ libmaxminddb_LIBS="$ac_cv_search_MMDB_open"
+
+fi
+
+else $as_nop
+ as_fn_error $? "\"not found in \`$enable_maxminddb'\"" "$LINENO" 5
+fi
+
+ CFLAGS="$save_CFLAGS"
+ LIBS="$save_LIBS"
+
+
+ enable_maxminddb=yes
+ ;; #(
+ *) :
+ ;;
+esac
+
+if test "$enable_maxminddb" = yes
+then :
+
+printf "%s\n" "#define HAVE_MAXMINDDB 1" >>confdefs.h
+
+fi
+ if test "$enable_maxminddb" = yes; then
+ HAVE_MAXMINDDB_TRUE=
+ HAVE_MAXMINDDB_FALSE='#'
+else
+ HAVE_MAXMINDDB_TRUE='#'
+ HAVE_MAXMINDDB_FALSE=
+fi
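+
+# Illustrative usage for the libmaxminddb (GeoIP) check above, based on its
+# case branches: the default "auto" probes via pkg-config, "yes" makes the
+# library mandatory, and any other value is treated as an installation prefix:
+#   ./configure --enable-maxminddb             # fail if libmaxminddb is absent
+#   ./configure --enable-maxminddb=/opt/mmdb   # hypothetical prefix; adds -I/-L paths
+#   ./configure --disable-maxminddb            # build without GeoIP support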
+
+
+
+# Check whether --with-lmdb was given.
+if test ${with_lmdb+y}
+then :
+ withval=$with_lmdb;
+fi
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for lmdb >= 0.9.15" >&5
+printf %s "checking for lmdb >= 0.9.15... " >&6; }
+
+if test -n "$lmdb_CFLAGS"; then
+ pkg_cv_lmdb_CFLAGS="$lmdb_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"lmdb >= 0.9.15\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "lmdb >= 0.9.15") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_lmdb_CFLAGS=`$PKG_CONFIG --cflags "lmdb >= 0.9.15" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$lmdb_LIBS"; then
+ pkg_cv_lmdb_LIBS="$lmdb_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"lmdb >= 0.9.15\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "lmdb >= 0.9.15") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_lmdb_LIBS=`$PKG_CONFIG --libs "lmdb >= 0.9.15" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ lmdb_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "lmdb >= 0.9.15" 2>&1`
+ else
+ lmdb_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "lmdb >= 0.9.15" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$lmdb_PKG_ERRORS" >&5
+
+
+ save_CPPFLAGS=$CPPFLAGS
+ save_LIBS=$LIBS
+
+ have_lmdb=no
+
+ for try_lmdb in "$with_lmdb" "" "/usr/local" "/usr/pkg"; do
+ if test -d "$try_lmdb"
+then :
+
+ lmdb_CFLAGS="-I$try_lmdb/include"
+ lmdb_LIBS="-L$try_lmdb/lib"
+
+else $as_nop
+
+ lmdb_CFLAGS=""
+ lmdb_LIBS=""
+
+fi
+
+ CPPFLAGS="$save_CPPFLAGS $lmdb_CFLAGS"
+ LIBS="$save_LIBS $lmdb_LIBS"
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing mdb_txn_id" >&5
+printf %s "checking for library containing mdb_txn_id... " >&6; }
+if test ${ac_cv_search_mdb_txn_id+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_func_search_save_LIBS=$LIBS
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char mdb_txn_id ();
+int
+main (void)
+{
+return mdb_txn_id ();
+ ;
+ return 0;
+}
+_ACEOF
+for ac_lib in '' lmdb
+do
+ if test -z "$ac_lib"; then
+ ac_res="none required"
+ else
+ ac_res=-l$ac_lib
+ LIBS="-l$ac_lib $ac_func_search_save_LIBS"
+ fi
+ if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_search_mdb_txn_id=$ac_res
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext
+ if test ${ac_cv_search_mdb_txn_id+y}
+then :
+ break
+fi
+done
+if test ${ac_cv_search_mdb_txn_id+y}
+then :
+
+else $as_nop
+ ac_cv_search_mdb_txn_id=no
+fi
+rm conftest.$ac_ext
+LIBS=$ac_func_search_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_mdb_txn_id" >&5
+printf "%s\n" "$ac_cv_search_mdb_txn_id" >&6; }
+ac_res=$ac_cv_search_mdb_txn_id
+if test "$ac_res" != no
+then :
+ test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
+
+ have_lmdb=yes
+ lmdb_LIBS="$lmdb_LIBS -llmdb"
+
+
+ break
+
+fi
+
+
+ # do not cache result of AC_SEARCH_LIBS test
+ unset ac_cv_search_mdb_txn_id
+ done
+
+ CPPFLAGS="$save_CPPFLAGS"
+ LIBS="$save_LIBS"
+
+ if test "$have_lmdb" = "no"
+then :
+
+ as_fn_error $? "lmdb library not found" "$LINENO" 5
+
+fi
+
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+ save_CPPFLAGS=$CPPFLAGS
+ save_LIBS=$LIBS
+
+ have_lmdb=no
+
+ for try_lmdb in "$with_lmdb" "" "/usr/local" "/usr/pkg"; do
+ if test -d "$try_lmdb"
+then :
+
+ lmdb_CFLAGS="-I$try_lmdb/include"
+ lmdb_LIBS="-L$try_lmdb/lib"
+
+else $as_nop
+
+ lmdb_CFLAGS=""
+ lmdb_LIBS=""
+
+fi
+
+ CPPFLAGS="$save_CPPFLAGS $lmdb_CFLAGS"
+ LIBS="$save_LIBS $lmdb_LIBS"
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing mdb_txn_id" >&5
+printf %s "checking for library containing mdb_txn_id... " >&6; }
+if test ${ac_cv_search_mdb_txn_id+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_func_search_save_LIBS=$LIBS
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char mdb_txn_id ();
+int
+main (void)
+{
+return mdb_txn_id ();
+ ;
+ return 0;
+}
+_ACEOF
+for ac_lib in '' lmdb
+do
+ if test -z "$ac_lib"; then
+ ac_res="none required"
+ else
+ ac_res=-l$ac_lib
+ LIBS="-l$ac_lib $ac_func_search_save_LIBS"
+ fi
+ if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_search_mdb_txn_id=$ac_res
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext
+ if test ${ac_cv_search_mdb_txn_id+y}
+then :
+ break
+fi
+done
+if test ${ac_cv_search_mdb_txn_id+y}
+then :
+
+else $as_nop
+ ac_cv_search_mdb_txn_id=no
+fi
+rm conftest.$ac_ext
+LIBS=$ac_func_search_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_mdb_txn_id" >&5
+printf "%s\n" "$ac_cv_search_mdb_txn_id" >&6; }
+ac_res=$ac_cv_search_mdb_txn_id
+if test "$ac_res" != no
+then :
+ test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
+
+ have_lmdb=yes
+ lmdb_LIBS="$lmdb_LIBS -llmdb"
+
+
+ break
+
+fi
+
+
+ # do not cache result of AC_SEARCH_LIBS test
+ unset ac_cv_search_mdb_txn_id
+ done
+
+ CPPFLAGS="$save_CPPFLAGS"
+ LIBS="$save_LIBS"
+
+ if test "$have_lmdb" = "no"
+then :
+
+ as_fn_error $? "lmdb library not found" "$LINENO" 5
+
+fi
+
+else
+ lmdb_CFLAGS=$pkg_cv_lmdb_CFLAGS
+ lmdb_LIBS=$pkg_cv_lmdb_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+fi
+
+# LMDB mapping sizes
+conf_mapsize_default=500
+
+# Check whether --with-conf_mapsize was given.
+if test ${with_conf_mapsize+y}
+then :
+ withval=$with_conf_mapsize; conf_mapsize=$withval
+else $as_nop
+ conf_mapsize=$conf_mapsize_default
+fi
+
+
+case $conf_mapsize in #(
+ yes) :
+ conf_mapsize=$conf_mapsize_default ;; #(
+ no) :
+ as_fn_error $? "conf_mapsize must be a number" "$LINENO" 5 ;; #(
+ *) :
+ if test $conf_mapsize != $(( $conf_mapsize + 0 ))
+then :
+ as_fn_error $? "conf_mapsize must be an integer number" "$LINENO" 5
+fi ;; #(
+ *) :
+ ;;
+esac
+
+printf "%s\n" "#define CONF_MAPSIZE $conf_mapsize" >>confdefs.h
+
+
+
+# libedit
+if test "$enable_daemon" = "yes" -o "$enable_utilities" = "yes"
+then :
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libedit" >&5
+printf %s "checking for libedit... " >&6; }
+
+if test -n "$libedit_CFLAGS"; then
+ pkg_cv_libedit_CFLAGS="$libedit_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libedit\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libedit") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libedit_CFLAGS=`$PKG_CONFIG --cflags "libedit" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libedit_LIBS"; then
+ pkg_cv_libedit_LIBS="$libedit_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libedit\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libedit") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libedit_LIBS=`$PKG_CONFIG --libs "libedit" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libedit_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libedit" 2>&1`
+ else
+ libedit_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libedit" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libedit_PKG_ERRORS" >&5
+
+
+ with_libedit=no
+ ac_fn_c_check_header_compile "$LINENO" "histedit.h" "ac_cv_header_histedit_h" "$ac_includes_default"
+if test "x$ac_cv_header_histedit_h" = xyes
+then :
+
+ # workaround for OpenBSD
+ case $host_os in #(
+ openbsd*) :
+ libedit_deps=-lcurses ;; #(
+ *) :
+ libedit_deps=
+ ;;
+esac
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for el_init in -ledit" >&5
+printf %s "checking for el_init in -ledit... " >&6; }
+if test ${ac_cv_lib_edit_el_init+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ledit $libedit_deps
+ $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char el_init ();
+int
+main (void)
+{
+return el_init ();
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_lib_edit_el_init=yes
+else $as_nop
+ ac_cv_lib_edit_el_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_edit_el_init" >&5
+printf "%s\n" "$ac_cv_lib_edit_el_init" >&6; }
+if test "x$ac_cv_lib_edit_el_init" = xyes
+then :
+
+ with_libedit=yes
+ libedit_CFLAGS=
+ libedit_LIBS="-ledit $libedit_deps"
+
+fi
+
+
+fi
+
+
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+ with_libedit=no
+ ac_fn_c_check_header_compile "$LINENO" "histedit.h" "ac_cv_header_histedit_h" "$ac_includes_default"
+if test "x$ac_cv_header_histedit_h" = xyes
+then :
+
+ # workaround for OpenBSD
+ case $host_os in #(
+ openbsd*) :
+ libedit_deps=-lcurses ;; #(
+ *) :
+ libedit_deps=
+ ;;
+esac
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for el_init in -ledit" >&5
+printf %s "checking for el_init in -ledit... " >&6; }
+if test ${ac_cv_lib_edit_el_init+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ledit $libedit_deps
+ $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char el_init ();
+int
+main (void)
+{
+return el_init ();
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_lib_edit_el_init=yes
+else $as_nop
+ ac_cv_lib_edit_el_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_edit_el_init" >&5
+printf "%s\n" "$ac_cv_lib_edit_el_init" >&6; }
+if test "x$ac_cv_lib_edit_el_init" = xyes
+then :
+
+ with_libedit=yes
+ libedit_CFLAGS=
+ libedit_LIBS="-ledit $libedit_deps"
+
+fi
+
+
+fi
+
+
+else
+ libedit_CFLAGS=$pkg_cv_libedit_CFLAGS
+ libedit_LIBS=$pkg_cv_libedit_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ with_libedit=yes
+fi
+ if test "$with_libedit" != "yes"
+then :
+
+ as_fn_error $? "libedit not found" "$LINENO" 5
+
+fi
+
+else $as_nop
+
+ with_libedit=no
+ libedit_CFLAGS=
+ libedit_LIBS=
+
+fi
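+
+# Note on the libedit check above: when the daemon or the utilities are built,
+# libedit is mandatory and configure aborts if it is missing. As with the other
+# pkg-config checks, presetting the variables bypasses pkg-config, e.g. (paths
+# hypothetical):
+#   libedit_CFLAGS="-I/usr/local/include" libedit_LIBS="-ledit" ./configure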
+
+# QUIC support
+# Check whether --enable-quic was given.
+if test ${enable_quic+y}
+then :
+ enableval=$enable_quic;
+else $as_nop
+ enable_quic=auto
+fi
+
+
+case $enable_quic in #(
+ auto) :
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls" >&5
+printf %s "checking for libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls... " >&6; }
+
+if test -n "$libngtcp2_CFLAGS"; then
+ pkg_cv_libngtcp2_CFLAGS="$libngtcp2_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libngtcp2_CFLAGS=`$PKG_CONFIG --cflags "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libngtcp2_LIBS"; then
+ pkg_cv_libngtcp2_LIBS="$libngtcp2_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libngtcp2_LIBS=`$PKG_CONFIG --libs "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libngtcp2_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls" 2>&1`
+ else
+ libngtcp2_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libngtcp2_PKG_ERRORS" >&5
+
+ enable_quic=no
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ enable_quic=no
+else
+ libngtcp2_CFLAGS=$pkg_cv_libngtcp2_CFLAGS
+ libngtcp2_LIBS=$pkg_cv_libngtcp2_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ enable_quic=yes
+fi ;; #(
+ yes) :
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls" >&5
+printf %s "checking for libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls... " >&6; }
+
+if test -n "$libngtcp2_CFLAGS"; then
+ pkg_cv_libngtcp2_CFLAGS="$libngtcp2_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libngtcp2_CFLAGS=`$PKG_CONFIG --cflags "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libngtcp2_LIBS"; then
+ pkg_cv_libngtcp2_LIBS="$libngtcp2_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libngtcp2_LIBS=`$PKG_CONFIG --libs "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libngtcp2_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls" 2>&1`
+ else
+ libngtcp2_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libngtcp2_PKG_ERRORS" >&5
+
+ if test "$gnutls_quic" = "yes"
+then :
+ enable_quic=embedded
+ embedded_libngtcp2_CFLAGS="-I\$(top_srcdir)/src/contrib/libngtcp2 -I\$(top_srcdir)/src/contrib/libngtcp2/ngtcp2/lib"
+ embedded_libngtcp2_LIBS=$libelf_LIBS
+ libngtcp2_CFLAGS="-I\$(top_srcdir)/src/contrib/libngtcp2"
+else $as_nop
+ enable_quic=no
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: gnutls >= 3.7.3 is required for QUIC" >&5
+printf "%s\n" "$as_me: WARNING: gnutls >= 3.7.3 is required for QUIC" >&2;}
+fi
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ if test "$gnutls_quic" = "yes"
+then :
+ enable_quic=embedded
+ embedded_libngtcp2_CFLAGS="-I\$(top_srcdir)/src/contrib/libngtcp2 -I\$(top_srcdir)/src/contrib/libngtcp2/ngtcp2/lib"
+ embedded_libngtcp2_LIBS=$libelf_LIBS
+ libngtcp2_CFLAGS="-I\$(top_srcdir)/src/contrib/libngtcp2"
+else $as_nop
+ enable_quic=no
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: gnutls >= 3.7.3 is required for QUIC" >&5
+printf "%s\n" "$as_me: WARNING: gnutls >= 3.7.3 is required for QUIC" >&2;}
+fi
+else
+ libngtcp2_CFLAGS=$pkg_cv_libngtcp2_CFLAGS
+ libngtcp2_LIBS=$pkg_cv_libngtcp2_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ enable_quic=yes
+fi ;; #(
+ embedded) :
+ enable_quic=embedded
+ embedded_libngtcp2_CFLAGS="-I\$(top_srcdir)/src/contrib/libngtcp2 -I\$(top_srcdir)/src/contrib/libngtcp2/ngtcp2/lib"
+ embedded_libngtcp2_LIBS=$libelf_LIBS
+ libngtcp2_CFLAGS="-I\$(top_srcdir)/src/contrib/libngtcp2" ;; #(
+ no) :
+ ;; #(
+ *) :
+ as_fn_error $? "Invalid value of --enable-quic." "$LINENO" 5
+ ;; #(
+ *) :
+ ;;
+esac
+ if test "$enable_quic" = "embedded"; then
+ EMBEDDED_LIBNGTCP2_TRUE=
+ EMBEDDED_LIBNGTCP2_FALSE='#'
+else
+ EMBEDDED_LIBNGTCP2_TRUE='#'
+ EMBEDDED_LIBNGTCP2_FALSE=
+fi
+
+ if test "$enable_quic" != "no"; then
+ ENABLE_QUIC_TRUE=
+ ENABLE_QUIC_FALSE='#'
+else
+ ENABLE_QUIC_TRUE='#'
+ ENABLE_QUIC_FALSE=
+fi
+
+
+
+
+
+
+if test "$enable_quic" != "no"
+then :
+
+
+printf "%s\n" "#define ENABLE_QUIC 1" >>confdefs.h
+
+fi
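+
+# Illustrative usage for the QUIC support above (values taken from the case
+# statement: auto/yes/embedded/no):
+#   ./configure --enable-quic            # prefer system libngtcp2 >= 0.17.0, falling back
+#                                        # to the bundled copy if GnuTLS has QUIC support
+#   ./configure --enable-quic=embedded   # always use src/contrib/libngtcp2
+#   ./configure --disable-quic           # build without QUIC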
+
+############################################
+# Dependencies needed for Knot DNS utilities
+############################################
+
+
+# Check whether --with-libidn was given.
+if test ${with_libidn+y}
+then :
+ withval=$with_libidn; with_libidn=$withval
+else $as_nop
+ with_libidn=yes
+
+fi
+
+
+
+# Check whether --with-libnghttp2 was given.
+if test ${with_libnghttp2+y}
+then :
+ withval=$with_libnghttp2; with_libnghttp2=$withval
+else $as_nop
+ with_libnghttp2=yes
+
+fi
+
+
+if test "$enable_utilities" = "yes"
+then :
+
+ if test "$with_libidn" != "no"
+then :
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libidn2 >= 2.0.0" >&5
+printf %s "checking for libidn2 >= 2.0.0... " >&6; }
+
+if test -n "$libidn2_CFLAGS"; then
+ pkg_cv_libidn2_CFLAGS="$libidn2_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libidn2 >= 2.0.0\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libidn2 >= 2.0.0") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libidn2_CFLAGS=`$PKG_CONFIG --cflags "libidn2 >= 2.0.0" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libidn2_LIBS"; then
+ pkg_cv_libidn2_LIBS="$libidn2_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libidn2 >= 2.0.0\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libidn2 >= 2.0.0") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libidn2_LIBS=`$PKG_CONFIG --libs "libidn2 >= 2.0.0" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libidn2_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libidn2 >= 2.0.0" 2>&1`
+ else
+ libidn2_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libidn2 >= 2.0.0" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libidn2_PKG_ERRORS" >&5
+
+
+ with_libidn=no
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: libidn2 not found" >&5
+printf "%s\n" "$as_me: WARNING: libidn2 not found" >&2;}
+
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+ with_libidn=no
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: libidn2 not found" >&5
+printf "%s\n" "$as_me: WARNING: libidn2 not found" >&2;}
+
+else
+ libidn2_CFLAGS=$pkg_cv_libidn2_CFLAGS
+ libidn2_LIBS=$pkg_cv_libidn2_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+ with_libidn=libidn2
+
+printf "%s\n" "#define LIBIDN 1" >>confdefs.h
+
+
+fi
+
+fi
+
+ if test "$with_libnghttp2" != "no"
+then :
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libnghttp2" >&5
+printf %s "checking for libnghttp2... " >&6; }
+
+if test -n "$libnghttp2_CFLAGS"; then
+ pkg_cv_libnghttp2_CFLAGS="$libnghttp2_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libnghttp2\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libnghttp2") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libnghttp2_CFLAGS=`$PKG_CONFIG --cflags "libnghttp2" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libnghttp2_LIBS"; then
+ pkg_cv_libnghttp2_LIBS="$libnghttp2_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libnghttp2\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libnghttp2") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libnghttp2_LIBS=`$PKG_CONFIG --libs "libnghttp2" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libnghttp2_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libnghttp2" 2>&1`
+ else
+ libnghttp2_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libnghttp2" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libnghttp2_PKG_ERRORS" >&5
+
+
+ with_libnghttp2=no
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: libnghttp2 not found" >&5
+printf "%s\n" "$as_me: WARNING: libnghttp2 not found" >&2;}
+
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+ with_libnghttp2=no
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: libnghttp2 not found" >&5
+printf "%s\n" "$as_me: WARNING: libnghttp2 not found" >&2;}
+
+else
+ libnghttp2_CFLAGS=$pkg_cv_libnghttp2_CFLAGS
+ libnghttp2_LIBS=$pkg_cv_libnghttp2_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+ with_libnghttp2=libnghttp2
+
+printf "%s\n" "#define LIBNGHTTP2 1" >>confdefs.h
+
+
+fi
+
+fi
+
+ if test "$enable_xdp" != "no"
+then :
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for libmnl" >&5
+printf %s "checking for libmnl... " >&6; }
+
+if test -n "$libmnl_CFLAGS"; then
+ pkg_cv_libmnl_CFLAGS="$libmnl_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libmnl\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libmnl") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libmnl_CFLAGS=`$PKG_CONFIG --cflags "libmnl" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$libmnl_LIBS"; then
+ pkg_cv_libmnl_LIBS="$libmnl_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libmnl\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "libmnl") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_libmnl_LIBS=`$PKG_CONFIG --libs "libmnl" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ libmnl_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libmnl" 2>&1`
+ else
+ libmnl_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libmnl" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$libmnl_PKG_ERRORS" >&5
+
+
+ as_fn_error $? "libmnl not found" "$LINENO" 5
+
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+ as_fn_error $? "libmnl not found" "$LINENO" 5
+
+else
+ libmnl_CFLAGS=$pkg_cv_libmnl_CFLAGS
+ libmnl_LIBS=$pkg_cv_libmnl_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+fi
+
+fi
+
+fi # Knot DNS utilities dependencies
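+
+# Illustrative usage for the utility dependencies above: libidn2 and libnghttp2
+# are optional (a missing package only emits a warning), whereas libmnl becomes
+# mandatory once XDP support is enabled:
+#   ./configure --with-libidn=no --with-libnghttp2=no   # skip IDN and HTTP/2 support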
+
+# Check whether --enable-cap-ng was given.
+if test ${enable_cap_ng+y}
+then :
+ enableval=$enable_cap_ng; enable_cap_ng="$enableval"
+else $as_nop
+ enable_cap_ng=auto
+fi
+
+
+if test "$enable_daemon" = "yes"
+then :
+
+
+if test "$enable_cap_ng" != "no"
+then :
+
+
+pkg_failed=no
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for cap-ng" >&5
+printf %s "checking for cap-ng... " >&6; }
+
+if test -n "$cap_ng_CFLAGS"; then
+ pkg_cv_cap_ng_CFLAGS="$cap_ng_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"cap-ng\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "cap-ng") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_cap_ng_CFLAGS=`$PKG_CONFIG --cflags "cap-ng" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+if test -n "$cap_ng_LIBS"; then
+ pkg_cv_cap_ng_LIBS="$cap_ng_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+ if test -n "$PKG_CONFIG" && \
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"cap-ng\""; } >&5
+ ($PKG_CONFIG --exists --print-errors "cap-ng") 2>&5
+ ac_status=$?
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+ test $ac_status = 0; }; then
+ pkg_cv_cap_ng_LIBS=`$PKG_CONFIG --libs "cap-ng" 2>/dev/null`
+ test "x$?" != "x0" && pkg_failed=yes
+else
+ pkg_failed=yes
+fi
+ else
+ pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+ _pkg_short_errors_supported=yes
+else
+ _pkg_short_errors_supported=no
+fi
+ if test $_pkg_short_errors_supported = yes; then
+ cap_ng_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "cap-ng" 2>&1`
+ else
+ cap_ng_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "cap-ng" 2>&1`
+ fi
+ # Put the nasty error message in config.log where it belongs
+ echo "$cap_ng_PKG_ERRORS" >&5
+
+
+ enable_cap_ng=no
+ ac_fn_c_check_header_compile "$LINENO" "cap-ng.h" "ac_cv_header_cap_ng_h" "$ac_includes_default"
+if test "x$ac_cv_header_cap_ng_h" = xyes
+then :
+
+ save_LIBS="$LIBS"
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing capng_apply" >&5
+printf %s "checking for library containing capng_apply... " >&6; }
+if test ${ac_cv_search_capng_apply+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_func_search_save_LIBS=$LIBS
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char capng_apply ();
+int
+main (void)
+{
+return capng_apply ();
+ ;
+ return 0;
+}
+_ACEOF
+for ac_lib in '' cap-ng
+do
+ if test -z "$ac_lib"; then
+ ac_res="none required"
+ else
+ ac_res=-l$ac_lib
+ LIBS="-l$ac_lib $ac_func_search_save_LIBS"
+ fi
+ if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_search_capng_apply=$ac_res
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext
+ if test ${ac_cv_search_capng_apply+y}
+then :
+ break
+fi
+done
+if test ${ac_cv_search_capng_apply+y}
+then :
+
+else $as_nop
+ ac_cv_search_capng_apply=no
+fi
+rm conftest.$ac_ext
+LIBS=$ac_func_search_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_capng_apply" >&5
+printf "%s\n" "$ac_cv_search_capng_apply" >&6; }
+ac_res=$ac_cv_search_capng_apply
+if test "$ac_res" != no
+then :
+ test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
+
+ if test "$ac_cv_search_capng_apply" != "none required"
+then :
+ cap_ng_LIBS="$ac_cv_search_capng_apply"
+else $as_nop
+ cap_ng_LIBS=
+fi
+
+ enable_cap_ng=yes
+
+fi
+
+ LIBS="$save_LIBS"
+
+fi
+
+
+elif test $pkg_failed = untried; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+
+ enable_cap_ng=no
+ ac_fn_c_check_header_compile "$LINENO" "cap-ng.h" "ac_cv_header_cap_ng_h" "$ac_includes_default"
+if test "x$ac_cv_header_cap_ng_h" = xyes
+then :
+
+ save_LIBS="$LIBS"
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing capng_apply" >&5
+printf %s "checking for library containing capng_apply... " >&6; }
+if test ${ac_cv_search_capng_apply+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_func_search_save_LIBS=$LIBS
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char capng_apply ();
+int
+main (void)
+{
+return capng_apply ();
+ ;
+ return 0;
+}
+_ACEOF
+for ac_lib in '' cap-ng
+do
+ if test -z "$ac_lib"; then
+ ac_res="none required"
+ else
+ ac_res=-l$ac_lib
+ LIBS="-l$ac_lib $ac_func_search_save_LIBS"
+ fi
+ if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_search_capng_apply=$ac_res
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext
+ if test ${ac_cv_search_capng_apply+y}
+then :
+ break
+fi
+done
+if test ${ac_cv_search_capng_apply+y}
+then :
+
+else $as_nop
+ ac_cv_search_capng_apply=no
+fi
+rm conftest.$ac_ext
+LIBS=$ac_func_search_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_capng_apply" >&5
+printf "%s\n" "$ac_cv_search_capng_apply" >&6; }
+ac_res=$ac_cv_search_capng_apply
+if test "$ac_res" != no
+then :
+ test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
+
+ if test "$ac_cv_search_capng_apply" != "none required"
+then :
+ cap_ng_LIBS="$ac_cv_search_capng_apply"
+else $as_nop
+ cap_ng_LIBS=
+fi
+
+ enable_cap_ng=yes
+
+fi
+
+ LIBS="$save_LIBS"
+
+fi
+
+
+else
+ cap_ng_CFLAGS=$pkg_cv_cap_ng_CFLAGS
+ cap_ng_LIBS=$pkg_cv_cap_ng_LIBS
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+ enable_cap_ng=yes
+fi
+
+else $as_nop
+
+ enable_cap_ng=no
+ cap_ng_LIBS=
+
+fi
+fi
+
+if test "$enable_cap_ng" = yes
+then :
+
+printf "%s\n" "#define ENABLE_CAP_NG 1" >>confdefs.h
+
+
+fi
+
+save_LIBS="$LIBS"
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing pthread_create" >&5
+printf %s "checking for library containing pthread_create... " >&6; }
+if test ${ac_cv_search_pthread_create+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_func_search_save_LIBS=$LIBS
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char pthread_create ();
+int
+main (void)
+{
+return pthread_create ();
+ ;
+ return 0;
+}
+_ACEOF
+for ac_lib in '' pthread
+do
+ if test -z "$ac_lib"; then
+ ac_res="none required"
+ else
+ ac_res=-l$ac_lib
+ LIBS="-l$ac_lib $ac_func_search_save_LIBS"
+ fi
+ if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_search_pthread_create=$ac_res
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext
+ if test ${ac_cv_search_pthread_create+y}
+then :
+ break
+fi
+done
+if test ${ac_cv_search_pthread_create+y}
+then :
+
+else $as_nop
+ ac_cv_search_pthread_create=no
+fi
+rm conftest.$ac_ext
+LIBS=$ac_func_search_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_pthread_create" >&5
+printf "%s\n" "$ac_cv_search_pthread_create" >&6; }
+ac_res=$ac_cv_search_pthread_create
+if test "$ac_res" != no
+then :
+ test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
+
+ if test "$ac_cv_search_pthread_create" != "none required"
+then :
+ pthread_LIBS="$ac_cv_search_pthread_create"
+else $as_nop
+ pthread_LIBS=
+fi
+
+
+else $as_nop
+
+ as_fn_error $? "pthreads not found" "$LINENO" 5
+
+fi
+
+LIBS="$save_LIBS"
+
+save_LIBS="$LIBS"
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing dlopen" >&5
+printf %s "checking for library containing dlopen... " >&6; }
+if test ${ac_cv_search_dlopen+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_func_search_save_LIBS=$LIBS
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main (void)
+{
+return dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+for ac_lib in '' dl
+do
+ if test -z "$ac_lib"; then
+ ac_res="none required"
+ else
+ ac_res=-l$ac_lib
+ LIBS="-l$ac_lib $ac_func_search_save_LIBS"
+ fi
+ if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_search_dlopen=$ac_res
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext
+ if test ${ac_cv_search_dlopen+y}
+then :
+ break
+fi
+done
+if test ${ac_cv_search_dlopen+y}
+then :
+
+else $as_nop
+ ac_cv_search_dlopen=no
+fi
+rm conftest.$ac_ext
+LIBS=$ac_func_search_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_dlopen" >&5
+printf "%s\n" "$ac_cv_search_dlopen" >&6; }
+ac_res=$ac_cv_search_dlopen
+if test "$ac_res" != no
+then :
+ test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
+
+ if test "$ac_cv_search_dlopen" != "none required"
+then :
+ dlopen_LIBS="$ac_cv_search_dlopen"
+else $as_nop
+ dlopen_LIBS=
+fi
+
+
+else $as_nop
+
+ as_fn_error $? "dlopen not found" "$LINENO" 5
+
+fi
+
+LIBS="$save_LIBS"
+
+save_LIBS="$LIBS"
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing pow" >&5
+printf %s "checking for library containing pow... " >&6; }
+if test ${ac_cv_search_pow+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_func_search_save_LIBS=$LIBS
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char pow ();
+int
+main (void)
+{
+return pow ();
+ ;
+ return 0;
+}
+_ACEOF
+for ac_lib in '' m
+do
+ if test -z "$ac_lib"; then
+ ac_res="none required"
+ else
+ ac_res=-l$ac_lib
+ LIBS="-l$ac_lib $ac_func_search_save_LIBS"
+ fi
+ if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_search_pow=$ac_res
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext
+ if test ${ac_cv_search_pow+y}
+then :
+ break
+fi
+done
+if test ${ac_cv_search_pow+y}
+then :
+
+else $as_nop
+ ac_cv_search_pow=no
+fi
+rm conftest.$ac_ext
+LIBS=$ac_func_search_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_pow" >&5
+printf "%s\n" "$ac_cv_search_pow" >&6; }
+ac_res=$ac_cv_search_pow
+if test "$ac_res" != no
+then :
+ test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
+
+ if test "$ac_cv_search_pow" != "none required"
+then :
+ math_LIBS="$ac_cv_search_pow"
+else $as_nop
+ math_LIBS=
+fi
+
+
+else $as_nop
+
+ as_fn_error $? "math not found" "$LINENO" 5
+
+fi
+
+LIBS="$save_LIBS"
+
+save_LIBS="$LIBS"
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing pthread_setaffinity_np" >&5
+printf %s "checking for library containing pthread_setaffinity_np... " >&6; }
+if test ${ac_cv_search_pthread_setaffinity_np+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ ac_func_search_save_LIBS=$LIBS
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+/* Override any GCC internal prototype to avoid an error.
+ Use char because int might match the return type of a GCC
+ builtin and then its argument prototype would still apply. */
+char pthread_setaffinity_np ();
+int
+main (void)
+{
+return pthread_setaffinity_np ();
+ ;
+ return 0;
+}
+_ACEOF
+for ac_lib in '' pthread
+do
+ if test -z "$ac_lib"; then
+ ac_res="none required"
+ else
+ ac_res=-l$ac_lib
+ LIBS="-l$ac_lib $ac_func_search_save_LIBS"
+ fi
+ if ac_fn_c_try_link "$LINENO"
+then :
+ ac_cv_search_pthread_setaffinity_np=$ac_res
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext
+ if test ${ac_cv_search_pthread_setaffinity_np+y}
+then :
+ break
+fi
+done
+if test ${ac_cv_search_pthread_setaffinity_np+y}
+then :
+
+else $as_nop
+ ac_cv_search_pthread_setaffinity_np=no
+fi
+rm conftest.$ac_ext
+LIBS=$ac_func_search_save_LIBS
+fi
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_pthread_setaffinity_np" >&5
+printf "%s\n" "$ac_cv_search_pthread_setaffinity_np" >&6; }
+ac_res=$ac_cv_search_pthread_setaffinity_np
+if test "$ac_res" != no
+then :
+ test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
+
+
+printf "%s\n" "#define HAVE_PTHREAD_SETAFFINITY_NP 1" >>confdefs.h
+
+
+fi
+
+LIBS="$save_LIBS"
+
+# Checks for header files.
+ac_fn_c_check_header_compile "$LINENO" "sys/types.h" "ac_cv_header_sys_types_h" "#ifdef HAVE_SYS_TYPES_H
+# include <sys/types.h>
+#endif
+#ifdef HAVE_NETINET_IN_H
+# include <netinet/in.h> /* inet_ functions / structs */
+#endif
+#ifdef HAVE_ARPA_NAMESER_H
+# include <arpa/nameser.h> /* DNS HEADER struct */
+#endif
+#ifdef HAVE_NETDB_H
+# include <netdb.h>
+#endif
+"
+if test "x$ac_cv_header_sys_types_h" = xyes
+then :
+ printf "%s\n" "#define HAVE_SYS_TYPES_H 1" >>confdefs.h
+
+fi
+ac_fn_c_check_header_compile "$LINENO" "netinet/in.h" "ac_cv_header_netinet_in_h" "#ifdef HAVE_SYS_TYPES_H
+# include <sys/types.h>
+#endif
+#ifdef HAVE_NETINET_IN_H
+# include <netinet/in.h> /* inet_ functions / structs */
+#endif
+#ifdef HAVE_ARPA_NAMESER_H
+# include <arpa/nameser.h> /* DNS HEADER struct */
+#endif
+#ifdef HAVE_NETDB_H
+# include <netdb.h>
+#endif
+"
+if test "x$ac_cv_header_netinet_in_h" = xyes
+then :
+ printf "%s\n" "#define HAVE_NETINET_IN_H 1" >>confdefs.h
+
+fi
+ac_fn_c_check_header_compile "$LINENO" "arpa/nameser.h" "ac_cv_header_arpa_nameser_h" "#ifdef HAVE_SYS_TYPES_H
+# include <sys/types.h>
+#endif
+#ifdef HAVE_NETINET_IN_H
+# include <netinet/in.h> /* inet_ functions / structs */
+#endif
+#ifdef HAVE_ARPA_NAMESER_H
+# include <arpa/nameser.h> /* DNS HEADER struct */
+#endif
+#ifdef HAVE_NETDB_H
+# include <netdb.h>
+#endif
+"
+if test "x$ac_cv_header_arpa_nameser_h" = xyes
+then :
+ printf "%s\n" "#define HAVE_ARPA_NAMESER_H 1" >>confdefs.h
+
+fi
+ac_fn_c_check_header_compile "$LINENO" "netdb.h" "ac_cv_header_netdb_h" "#ifdef HAVE_SYS_TYPES_H
+# include <sys/types.h>
+#endif
+#ifdef HAVE_NETINET_IN_H
+# include <netinet/in.h> /* inet_ functions / structs */
+#endif
+#ifdef HAVE_ARPA_NAMESER_H
+# include <arpa/nameser.h> /* DNS HEADER struct */
+#endif
+#ifdef HAVE_NETDB_H
+# include <netdb.h>
+#endif
+"
+if test "x$ac_cv_header_netdb_h" = xyes
+then :
+ printf "%s\n" "#define HAVE_NETDB_H 1" >>confdefs.h
+
+fi
+ac_fn_c_check_header_compile "$LINENO" "resolv.h" "ac_cv_header_resolv_h" "#ifdef HAVE_SYS_TYPES_H
+# include <sys/types.h>
+#endif
+#ifdef HAVE_NETINET_IN_H
+# include <netinet/in.h> /* inet_ functions / structs */
+#endif
+#ifdef HAVE_ARPA_NAMESER_H
+# include <arpa/nameser.h> /* DNS HEADER struct */
+#endif
+#ifdef HAVE_NETDB_H
+# include <netdb.h>
+#endif
+"
+if test "x$ac_cv_header_resolv_h" = xyes
+then :
+ printf "%s\n" "#define HAVE_RESOLV_H 1" >>confdefs.h
+
+fi
+
+
+
+
+
+
+
+# Checks for optional library functions.
+ac_fn_c_check_func "$LINENO" "accept4" "ac_cv_func_accept4"
+if test "x$ac_cv_func_accept4" = xyes
+then :
+ printf "%s\n" "#define HAVE_ACCEPT4 1" >>confdefs.h
+
+fi
+ac_fn_c_check_func "$LINENO" "fgetln" "ac_cv_func_fgetln"
+if test "x$ac_cv_func_fgetln" = xyes
+then :
+ printf "%s\n" "#define HAVE_FGETLN 1" >>confdefs.h
+
+fi
+ac_fn_c_check_func "$LINENO" "getline" "ac_cv_func_getline"
+if test "x$ac_cv_func_getline" = xyes
+then :
+ printf "%s\n" "#define HAVE_GETLINE 1" >>confdefs.h
+
+fi
+ac_fn_c_check_func "$LINENO" "initgroups" "ac_cv_func_initgroups"
+if test "x$ac_cv_func_initgroups" = xyes
+then :
+ printf "%s\n" "#define HAVE_INITGROUPS 1" >>confdefs.h
+
+fi
+ac_fn_c_check_func "$LINENO" "malloc_trim" "ac_cv_func_malloc_trim"
+if test "x$ac_cv_func_malloc_trim" = xyes
+then :
+ printf "%s\n" "#define HAVE_MALLOC_TRIM 1" >>confdefs.h
+
+fi
+ac_fn_c_check_func "$LINENO" "setgroups" "ac_cv_func_setgroups"
+if test "x$ac_cv_func_setgroups" = xyes
+then :
+ printf "%s\n" "#define HAVE_SETGROUPS 1" >>confdefs.h
+
+fi
+ac_fn_c_check_func "$LINENO" "strlcat" "ac_cv_func_strlcat"
+if test "x$ac_cv_func_strlcat" = xyes
+then :
+ printf "%s\n" "#define HAVE_STRLCAT 1" >>confdefs.h
+
+fi
+ac_fn_c_check_func "$LINENO" "strlcpy" "ac_cv_func_strlcpy"
+if test "x$ac_cv_func_strlcpy" = xyes
+then :
+ printf "%s\n" "#define HAVE_STRLCPY 1" >>confdefs.h
+
+fi
+ac_fn_c_check_func "$LINENO" "sysctlbyname" "ac_cv_func_sysctlbyname"
+if test "x$ac_cv_func_sysctlbyname" = xyes
+then :
+ printf "%s\n" "#define HAVE_SYSCTLBYNAME 1" >>confdefs.h
+
+fi
+
+
+# Check for robust memory cleanup implementations.
+ac_fn_c_check_func "$LINENO" "explicit_bzero" "ac_cv_func_explicit_bzero"
+if test "x$ac_cv_func_explicit_bzero" = xyes
+then :
+
+
+printf "%s\n" "#define HAVE_EXPLICIT_BZERO 1" >>confdefs.h
+
+ explicit_bzero=yes
+else $as_nop
+ explicit_bzero=no
+
+fi
+
+ac_fn_c_check_func "$LINENO" "explicit_memset" "ac_cv_func_explicit_memset"
+if test "x$ac_cv_func_explicit_memset" = xyes
+then :
+
+
+printf "%s\n" "#define HAVE_EXPLICIT_MEMSET 1" >>confdefs.h
+
+ explicit_memset=yes
+else $as_nop
+ explicit_memset=no
+
+fi
+
+ if test "$explicit_bzero" = "no" -a "$explicit_memset" = "no"; then
+ USE_GNUTLS_MEMSET_TRUE=
+ USE_GNUTLS_MEMSET_FALSE='#'
+else
+ USE_GNUTLS_MEMSET_TRUE='#'
+ USE_GNUTLS_MEMSET_FALSE=
+fi
+
+
+# Check for mandatory library functions.
+ac_fn_c_check_func "$LINENO" "vasprintf" "ac_cv_func_vasprintf"
+if test "x$ac_cv_func_vasprintf" = xyes
+then :
+
+else $as_nop
+
+ as_fn_error $? "vasprintf support in the libc is required" "$LINENO" 5
+fi
+
+
+# Check for cpu_set_t/cpuset_t compatibility
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <sched.h>
+int
+main (void)
+{
+cpu_set_t set; CPU_ZERO(&set);
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+
+printf "%s\n" "#define HAVE_CPUSET_LINUX 1" >>confdefs.h
+
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <pthread_np.h>
+int
+main (void)
+{
+cpuset_t set; CPU_ZERO(&set);
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+
+printf "%s\n" "#define HAVE_CPUSET_BSD 1" >>confdefs.h
+
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <sched.h>
+int
+main (void)
+{
+cpuset_t* set = cpuset_create(); cpuset_destroy(set);
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+
+printf "%s\n" "#define HAVE_CPUSET_NETBSD 1" >>confdefs.h
+
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+
+# Check for C11 or GCC-style '__atomic' compiler builtin atomic functions.
+atomic_type="none"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#if (__STDC_VERSION__ < 201112L) || defined(__STDC_NO_ATOMICS__)
+ #error "No C11 atomics"
+ #endif
+ #include <stdatomic.h>
+int
+main (void)
+{
+atomic_uint_fast64_t val = 0;
+ atomic_fetch_add_explicit(&val, 1, memory_order_relaxed);
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+
+printf "%s\n" "#define HAVE_C11_ATOMIC 1" >>confdefs.h
+
+ atomic_type="C11"
+else $as_nop
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+#include <stdint.h>
+int
+main (void)
+{
+uint64_t val = 0; __atomic_add_fetch(&val, 1, __ATOMIC_RELAXED);
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"
+then :
+
+printf "%s\n" "#define HAVE_GCC_ATOMIC 1" >>confdefs.h
+
+ atomic_type="__atomic"
+
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam \
+ conftest$ac_exeext conftest.$ac_ext
+
+# Prepare CFLAG_VISIBILITY to be used where needed
+
+
+ CFLAG_VISIBILITY=
+ HAVE_VISIBILITY=0
+ if test -n "$GCC"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the -Werror option is usable" >&5
+printf %s "checking whether the -Werror option is usable... " >&6; }
+ if test ${gl_cv_cc_vis_werror+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+
+ gl_save_CFLAGS="$CFLAGS"
+ CFLAGS="$CFLAGS -Werror"
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ gl_cv_cc_vis_werror=yes
+else $as_nop
+ gl_cv_cc_vis_werror=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ CFLAGS="$gl_save_CFLAGS"
+fi
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $gl_cv_cc_vis_werror" >&5
+printf "%s\n" "$gl_cv_cc_vis_werror" >&6; }
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for simple visibility declarations" >&5
+printf %s "checking for simple visibility declarations... " >&6; }
+ if test ${gl_cv_cc_visibility+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+
+ gl_save_CFLAGS="$CFLAGS"
+ CFLAGS="$CFLAGS -fvisibility=hidden"
+ if test $gl_cv_cc_vis_werror = yes; then
+ CFLAGS="$CFLAGS -Werror"
+ fi
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+extern __attribute__((__visibility__("hidden"))) int hiddenvar;
+ extern __attribute__((__visibility__("default"))) int exportedvar;
+ extern __attribute__((__visibility__("hidden"))) int hiddenfunc (void);
+ extern __attribute__((__visibility__("default"))) int exportedfunc (void);
+ void dummyfunc (void) {}
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+ gl_cv_cc_visibility=yes
+else $as_nop
+ gl_cv_cc_visibility=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ CFLAGS="$gl_save_CFLAGS"
+fi
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $gl_cv_cc_visibility" >&5
+printf "%s\n" "$gl_cv_cc_visibility" >&6; }
+ if test $gl_cv_cc_visibility = yes; then
+ CFLAG_VISIBILITY="-fvisibility=hidden"
+ HAVE_VISIBILITY=1
+ fi
+ fi
+
+
+
+printf "%s\n" "#define HAVE_VISIBILITY $HAVE_VISIBILITY" >>confdefs.h
+
+
+
+# Add code coverage macro
+
+ # Check whether --enable-code-coverage was given.
+if test ${enable_code_coverage+y}
+then :
+ enableval=$enable_code_coverage; enable_code_coverage=$enableval
+else $as_nop
+ enable_code_coverage=no
+
+fi
+
+ if test "$enable_code_coverage" = "yes"; then
+ CODE_COVERAGE_ENABLED_TRUE=
+ CODE_COVERAGE_ENABLED_FALSE='#'
+else
+ CODE_COVERAGE_ENABLED_TRUE='#'
+ CODE_COVERAGE_ENABLED_FALSE=
+fi
+
+ CODE_COVERAGE_ENABLED=$enable_code_coverage
+
+
+ if test "$enable_code_coverage" = "yes"
+then :
+
+ # Extract the first word of "lcov", so it can be a program name with args.
+set dummy lcov; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_LCOV+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$LCOV"; then
+ ac_cv_prog_LCOV="$LCOV" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_LCOV="lcov"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+LCOV=$ac_cv_prog_LCOV
+if test -n "$LCOV"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $LCOV" >&5
+printf "%s\n" "$LCOV" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+ # Extract the first word of "genhtml", so it can be a program name with args.
+set dummy genhtml; ac_word=$2
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+printf %s "checking for $ac_word... " >&6; }
+if test ${ac_cv_prog_GENHTML+y}
+then :
+ printf %s "(cached) " >&6
+else $as_nop
+ if test -n "$GENHTML"; then
+ ac_cv_prog_GENHTML="$GENHTML" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then
+ ac_cv_prog_GENHTML="genhtml"
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+fi
+fi
+GENHTML=$ac_cv_prog_GENHTML
+if test -n "$GENHTML"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $GENHTML" >&5
+printf "%s\n" "$GENHTML" >&6; }
+else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+fi
+
+
+
+ if test -z "$LCOV"
+then :
+
+ as_fn_error $? "Could not find lcov" "$LINENO" 5
+
+fi
+ if test -z "$GENHTML"
+then :
+
+ as_fn_error $? "Could not find genhtml from the lcov package" "$LINENO" 5
+
+fi
+
+
+ CFLAGS=`echo "$CFLAGS" | $SED -e 's/-O[0-9]*//g'`
+
+
+ CFLAGS="$CFLAGS --coverage"
+ LDFLAGS="$LDFLAGS --coverage"
+
+fi
+
+
+
+
+ # Configure options
+
+# Check whether --with-sanitizer was given.
+if test ${with_sanitizer+y}
+then :
+ withval=$with_sanitizer;
+else $as_nop
+ with_sanitizer=no
+
+fi
+
+
+# Check whether --with-fuzzer was given.
+if test ${with_fuzzer+y}
+then :
+ withval=$with_fuzzer;
+else $as_nop
+ with_fuzzer=no
+
+fi
+
+
+# Check whether --with-oss-fuzz was given.
+if test ${with_oss_fuzz+y}
+then :
+ withval=$with_oss_fuzz;
+else $as_nop
+ with_oss_fuzz=no
+
+fi
+
+
+ # Default values
+ if test "$with_sanitizer" = "yes"
+then :
+ with_sanitizer=address,undefined
+fi
+ if test "$with_fuzzer" = "yes"
+then :
+ with_fuzzer=fuzzer
+fi
+
+ # Construct output variables
+ sanitizer_CFLAGS=
+ fuzzer_CFLAGS=
+ fuzzer_LDFLAGS=
+ if test "$with_sanitizer" != "no"
+then :
+
+ sanitizer_CFLAGS="-fsanitize=${with_sanitizer}"
+
+fi
+ if test "$with_fuzzer" != "no"
+then :
+
+ fuzzer_CFLAGS="-fsanitize=${with_fuzzer}"
+ fuzzer_LDFLAGS="-fsanitize=${with_fuzzer}"
+
+fi
+
+
+
+ # Test compiler support
+ if test -n "$sanitizer_CFLAGS" -o -n "$fuzzer_CFLAGS"
+then :
+
+ save_CFLAGS="$CFLAGS"
+ CFLAGS="$CFLAGS $sanitizer_CFLAGS $fuzzer_CFLAGS"
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether compiler accepts '${sanitizer_CFLAGS} ${fuzzer_CFLAGS}'" >&5
+printf %s "checking whether compiler accepts '${sanitizer_CFLAGS} ${fuzzer_CFLAGS}'... " >&6; }
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+int
+main (void)
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"
+then :
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+printf "%s\n" "yes" >&6; }
+
+else $as_nop
+
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5
+printf "%s\n" "no" >&6; }
+ as_fn_error $? "Options are not supported." "$LINENO" 5
+
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
+ CFLAGS="$save_CFLAGS"
+
+fi
+
+
+if test -n "$sanitizer_CFLAGS"
+then :
+ CFLAGS="$CFLAGS $sanitizer_CFLAGS"
+fi
+ if test "$with_fuzzer" != "no"; then
+ FUZZER_TRUE=
+ FUZZER_FALSE='#'
+else
+ FUZZER_TRUE='#'
+ FUZZER_FALSE=
+fi
+
+ if test "$with_oss_fuzz" != "no"; then
+ OSS_FUZZ_TRUE=
+ OSS_FUZZ_FALSE='#'
+else
+ OSS_FUZZ_TRUE='#'
+ OSS_FUZZ_FALSE=
+fi
+
+
+# Strip -fdebug-prefix-map= parameters from flags for better reproducibility of binaries.
+filtered_cflags=$(echo -n "$CFLAGS" | \
+ sed 's/[^[:alnum:]]-f[^[:space:]]*-prefix-map=[^[:space:]]*//g')
+filtered_cppflags=$(echo -n "$CPPFLAGS" | \
+ sed 's/[^[:alnum:]]-f[^[:space:]]*-prefix-map=[^[:space:]]*//g')
+filtered_config_params=$(echo -n "$ac_configure_args" | \
+ sed 's/[^[:alnum:]]-f[^[:space:]]*-prefix-map=[^[:space:]]*//g')
+
+result_msg_base="
+ Target: $host_os $host_cpu $endianity
+ Compiler: ${CC}
+ CFLAGS: ${filtered_cflags} ${filtered_cppflags}
+ LIBS: ${LIBS} ${LDFLAGS}
+ LibURCU: ${liburcu_LIBS} ${liburcu_CFLAGS}
+ GnuTLS: ${gnutls_LIBS} ${gnutls_CFLAGS}
+ Libedit: ${libedit_LIBS} ${libedit_CFLAGS}
+ LMDB: ${lmdb_LIBS} ${lmdb_CFLAGS}
+ Config: ${conf_mapsize} MiB default mapsize
+
+ Prefix: ${knot_prefix}
+ Run dir: ${run_dir}
+ Storage dir: ${storage_dir}
+ Config dir: ${config_dir}
+ Module dir: ${module_dir}
+
+ Static modules: ${static_modules}
+ Shared modules: ${shared_modules}
+
+ Knot DNS libraries: yes
+ Knot DNS daemon: ${enable_daemon}
+ Knot DNS utilities: ${enable_utilities}
+ Knot DNS documentation: ${enable_documentation}
+
+ Use recvmmsg: ${enable_recvmmsg}
+ Use SO_REUSEPORT(_LB): ${enable_reuseport}
+ XDP support: ${enable_xdp}
+ DoQ support: ${enable_quic}
+ Socket polling: ${socket_polling}
+ Atomic support: ${atomic_type}
+ Memory allocator: ${with_memory_allocator}
+ Fast zone parser: ${enable_fastparser}
+ Utilities with IDN: ${with_libidn}
+ Utilities with DoH: ${with_libnghttp2}
+ Utilities with Dnstap: ${enable_dnstap}
+ MaxMind DB support: ${enable_maxminddb}
+ Systemd integration: ${enable_systemd}
+ D-Bus support: ${enable_dbus}
+ POSIX capabilities: ${enable_cap_ng}
+ PKCS #11 support: ${enable_pkcs11}
+ Ed448 support: ${enable_ed448}
+
+ Code coverage: ${enable_code_coverage}
+ Sanitizer: ${with_sanitizer}
+ LibFuzzer: ${with_fuzzer}
+ OSS-Fuzz: ${with_oss_fuzz}"
+
+result_msg_esc=$(echo -n " Configure:$filtered_config_params\n$result_msg_base" | sed '$!s/$/\\n/' | tr -d '\n')
+
+
+printf "%s\n" "#define CONFIGURE_SUMMARY \"$result_msg_esc\"" >>confdefs.h
+
+
+ac_config_files="$ac_config_files Makefile Doxyfile doc/Makefile tests/Makefile tests-fuzz/Makefile samples/Makefile distro/Makefile python/Makefile python/knot_exporter/Makefile python/knot_exporter/pyproject.toml python/knot_exporter/setup.py python/libknot/Makefile python/libknot/pyproject.toml python/libknot/setup.py python/libknot/libknot/__init__.py src/Makefile src/libknot/xdp/Makefile src/knot/modules/static_modules.h"
+
+
+ac_config_files="$ac_config_files doc/modules.rst"
+
+
+cat >confcache <<\_ACEOF
+# This file is a shell script that caches the results of configure
+# tests run on this system so they can be shared between configure
+# scripts and configure runs, see configure's option --config-cache.
+# It is not useful on other systems. If it contains results you don't
+# want to keep, you may remove or edit it.
+#
+# config.status only pays attention to the cache file if you give it
+# the --recheck option to rerun configure.
+#
+# `ac_cv_env_foo' variables (set or unset) will be overridden when
+# loading this file, other *unset* `ac_cv_foo' will be assigned the
+# following values.
+
+_ACEOF
+
+# The following way of writing the cache mishandles newlines in values,
+# but we know of no workaround that is simple, portable, and efficient.
+# So, we kill variables containing newlines.
+# Ultrix sh set writes to stderr and can't be redirected directly,
+# and sets the high bit in the cache file unless we assign to the vars.
+(
+ for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do
+ eval ac_val=\$$ac_var
+ case $ac_val in #(
+ *${as_nl}*)
+ case $ac_var in #(
+ *_cv_*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5
+printf "%s\n" "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;;
+ esac
+ case $ac_var in #(
+ _ | IFS | as_nl) ;; #(
+ BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #(
+ *) { eval $ac_var=; unset $ac_var;} ;;
+ esac ;;
+ esac
+ done
+
+ (set) 2>&1 |
+ case $as_nl`(ac_space=' '; set) 2>&1` in #(
+ *${as_nl}ac_space=\ *)
+ # `set' does not quote correctly, so add quotes: double-quote
+ # substitution turns \\\\ into \\, and sed turns \\ into \.
+ sed -n \
+ "s/'/'\\\\''/g;
+ s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p"
+ ;; #(
+ *)
+ # `set' quotes correctly as required by POSIX, so do not add quotes.
+ sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p"
+ ;;
+ esac |
+ sort
+) |
+ sed '
+ /^ac_cv_env_/b end
+ t clear
+ :clear
+ s/^\([^=]*\)=\(.*[{}].*\)$/test ${\1+y} || &/
+ t end
+ s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/
+ :end' >>confcache
+if diff "$cache_file" confcache >/dev/null 2>&1; then :; else
+ if test -w "$cache_file"; then
+ if test "x$cache_file" != "x/dev/null"; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: updating cache $cache_file" >&5
+printf "%s\n" "$as_me: updating cache $cache_file" >&6;}
+ if test ! -f "$cache_file" || test -h "$cache_file"; then
+ cat confcache >"$cache_file"
+ else
+ case $cache_file in #(
+ */* | ?:*)
+ mv -f confcache "$cache_file"$$ &&
+ mv -f "$cache_file"$$ "$cache_file" ;; #(
+ *)
+ mv -f confcache "$cache_file" ;;
+ esac
+ fi
+ fi
+ else
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: not updating unwritable cache $cache_file" >&5
+printf "%s\n" "$as_me: not updating unwritable cache $cache_file" >&6;}
+ fi
+fi
+rm -f confcache
+
+test "x$prefix" = xNONE && prefix=$ac_default_prefix
+# Let make expand exec_prefix.
+test "x$exec_prefix" = xNONE && exec_prefix='${prefix}'
+
+DEFS=-DHAVE_CONFIG_H
+
+ac_libobjs=
+ac_ltlibobjs=
+U=
+for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue
+ # 1. Remove the extension, and $U if already installed.
+ ac_script='s/\$U\././;s/\.o$//;s/\.obj$//'
+ ac_i=`printf "%s\n" "$ac_i" | sed "$ac_script"`
+ # 2. Prepend LIBOBJDIR. When used with automake>=1.10 LIBOBJDIR
+ # will be set to the directory where LIBOBJS objects are built.
+ as_fn_append ac_libobjs " \${LIBOBJDIR}$ac_i\$U.$ac_objext"
+ as_fn_append ac_ltlibobjs " \${LIBOBJDIR}$ac_i"'$U.lo'
+done
+LIBOBJS=$ac_libobjs
+
+LTLIBOBJS=$ac_ltlibobjs
+
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking that generated files are newer than configure" >&5
+printf %s "checking that generated files are newer than configure... " >&6; }
+ if test -n "$am_sleep_pid"; then
+ # Hide warnings about reused PIDs.
+ wait $am_sleep_pid 2>/dev/null
+ fi
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: done" >&5
+printf "%s\n" "done" >&6; }
+ if test -n "$EXEEXT"; then
+ am__EXEEXT_TRUE=
+ am__EXEEXT_FALSE='#'
+else
+ am__EXEEXT_TRUE='#'
+ am__EXEEXT_FALSE=
+fi
+
+if test -z "${AMDEP_TRUE}" && test -z "${AMDEP_FALSE}"; then
+ as_fn_error $? "conditional \"AMDEP\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${am__fastdepCC_TRUE}" && test -z "${am__fastdepCC_FALSE}"; then
+ as_fn_error $? "conditional \"am__fastdepCC\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${am__fastdepCC_TRUE}" && test -z "${am__fastdepCC_FALSE}"; then
+ as_fn_error $? "conditional \"am__fastdepCC\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+
+if test -z "${HAVE_DAEMON_TRUE}" && test -z "${HAVE_DAEMON_FALSE}"; then
+ as_fn_error $? "conditional \"HAVE_DAEMON\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${HAVE_UTILS_TRUE}" && test -z "${HAVE_UTILS_FALSE}"; then
+ as_fn_error $? "conditional \"HAVE_UTILS\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${HAVE_LIBUTILS_TRUE}" && test -z "${HAVE_LIBUTILS_FALSE}"; then
+ as_fn_error $? "conditional \"HAVE_LIBUTILS\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${HAVE_DOCS_TRUE}" && test -z "${HAVE_DOCS_FALSE}"; then
+ as_fn_error $? "conditional \"HAVE_DOCS\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${HAVE_SPHINX_TRUE}" && test -z "${HAVE_SPHINX_FALSE}"; then
+ as_fn_error $? "conditional \"HAVE_SPHINX\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${HAVE_PDFLATEX_TRUE}" && test -z "${HAVE_PDFLATEX_FALSE}"; then
+ as_fn_error $? "conditional \"HAVE_PDFLATEX\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${FAST_PARSER_TRUE}" && test -z "${FAST_PARSER_FALSE}"; then
+ as_fn_error $? "conditional \"FAST_PARSER\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${ENABLE_PKCS11_TRUE}" && test -z "${ENABLE_PKCS11_FALSE}"; then
+ as_fn_error $? "conditional \"ENABLE_PKCS11\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${ENABLE_XDP_TRUE}" && test -z "${ENABLE_XDP_FALSE}"; then
+ as_fn_error $? "conditional \"ENABLE_XDP\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_authsignal_TRUE}" && test -z "${STATIC_MODULE_authsignal_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_authsignal\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_authsignal_TRUE}" && test -z "${SHARED_MODULE_authsignal_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_authsignal\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_cookies_TRUE}" && test -z "${STATIC_MODULE_cookies_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_cookies\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_cookies_TRUE}" && test -z "${SHARED_MODULE_cookies_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_cookies\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_dnsproxy_TRUE}" && test -z "${STATIC_MODULE_dnsproxy_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_dnsproxy\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_dnsproxy_TRUE}" && test -z "${SHARED_MODULE_dnsproxy_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_dnsproxy\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_dnstap_TRUE}" && test -z "${STATIC_MODULE_dnstap_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_dnstap\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_dnstap_TRUE}" && test -z "${SHARED_MODULE_dnstap_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_dnstap\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_geoip_TRUE}" && test -z "${STATIC_MODULE_geoip_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_geoip\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_geoip_TRUE}" && test -z "${SHARED_MODULE_geoip_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_geoip\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_noudp_TRUE}" && test -z "${STATIC_MODULE_noudp_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_noudp\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_noudp_TRUE}" && test -z "${SHARED_MODULE_noudp_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_noudp\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_onlinesign_TRUE}" && test -z "${STATIC_MODULE_onlinesign_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_onlinesign\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_onlinesign_TRUE}" && test -z "${SHARED_MODULE_onlinesign_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_onlinesign\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_probe_TRUE}" && test -z "${STATIC_MODULE_probe_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_probe\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_probe_TRUE}" && test -z "${SHARED_MODULE_probe_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_probe\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_queryacl_TRUE}" && test -z "${STATIC_MODULE_queryacl_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_queryacl\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_queryacl_TRUE}" && test -z "${SHARED_MODULE_queryacl_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_queryacl\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_rrl_TRUE}" && test -z "${STATIC_MODULE_rrl_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_rrl\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_rrl_TRUE}" && test -z "${SHARED_MODULE_rrl_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_rrl\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_stats_TRUE}" && test -z "${STATIC_MODULE_stats_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_stats\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_stats_TRUE}" && test -z "${SHARED_MODULE_stats_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_stats\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_synthrecord_TRUE}" && test -z "${STATIC_MODULE_synthrecord_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_synthrecord\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_synthrecord_TRUE}" && test -z "${SHARED_MODULE_synthrecord_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_synthrecord\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${STATIC_MODULE_whoami_TRUE}" && test -z "${STATIC_MODULE_whoami_FALSE}"; then
+ as_fn_error $? "conditional \"STATIC_MODULE_whoami\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${SHARED_MODULE_whoami_TRUE}" && test -z "${SHARED_MODULE_whoami_FALSE}"; then
+ as_fn_error $? "conditional \"SHARED_MODULE_whoami\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${HAVE_DNSTAP_TRUE}" && test -z "${HAVE_DNSTAP_FALSE}"; then
+ as_fn_error $? "conditional \"HAVE_DNSTAP\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${HAVE_LIBDNSTAP_TRUE}" && test -z "${HAVE_LIBDNSTAP_FALSE}"; then
+ as_fn_error $? "conditional \"HAVE_LIBDNSTAP\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${HAVE_MAXMINDDB_TRUE}" && test -z "${HAVE_MAXMINDDB_FALSE}"; then
+ as_fn_error $? "conditional \"HAVE_MAXMINDDB\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${EMBEDDED_LIBNGTCP2_TRUE}" && test -z "${EMBEDDED_LIBNGTCP2_FALSE}"; then
+ as_fn_error $? "conditional \"EMBEDDED_LIBNGTCP2\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${ENABLE_QUIC_TRUE}" && test -z "${ENABLE_QUIC_FALSE}"; then
+ as_fn_error $? "conditional \"ENABLE_QUIC\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${USE_GNUTLS_MEMSET_TRUE}" && test -z "${USE_GNUTLS_MEMSET_FALSE}"; then
+ as_fn_error $? "conditional \"USE_GNUTLS_MEMSET\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${CODE_COVERAGE_ENABLED_TRUE}" && test -z "${CODE_COVERAGE_ENABLED_FALSE}"; then
+ as_fn_error $? "conditional \"CODE_COVERAGE_ENABLED\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${FUZZER_TRUE}" && test -z "${FUZZER_FALSE}"; then
+ as_fn_error $? "conditional \"FUZZER\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+if test -z "${OSS_FUZZ_TRUE}" && test -z "${OSS_FUZZ_FALSE}"; then
+ as_fn_error $? "conditional \"OSS_FUZZ\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
+
+: "${CONFIG_STATUS=./config.status}"
+ac_write_fail=0
+ac_clean_files_save=$ac_clean_files
+ac_clean_files="$ac_clean_files $CONFIG_STATUS"
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: creating $CONFIG_STATUS" >&5
+printf "%s\n" "$as_me: creating $CONFIG_STATUS" >&6;}
+as_write_fail=0
+cat >$CONFIG_STATUS <<_ASEOF || as_write_fail=1
+#! $SHELL
+# Generated by $as_me.
+# Run this file to recreate the current configuration.
+# Compiler output produced by configure, useful for debugging
+# configure, is in config.log if it exists.
+
+debug=false
+ac_cs_recheck=false
+ac_cs_silent=false
+
+SHELL=\${CONFIG_SHELL-$SHELL}
+export SHELL
+_ASEOF
+cat >>$CONFIG_STATUS <<\_ASEOF || as_write_fail=1
+## -------------------- ##
+## M4sh Initialization. ##
+## -------------------- ##
+
+# Be more Bourne compatible
+DUALCASE=1; export DUALCASE # for MKS sh
+as_nop=:
+if test ${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1
+then :
+ emulate sh
+ NULLCMD=:
+ # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which
+ # is contrary to our usage. Disable this feature.
+ alias -g '${1+"$@"}'='"$@"'
+ setopt NO_GLOB_SUBST
+else $as_nop
+ case `(set -o) 2>/dev/null` in #(
+ *posix*) :
+ set -o posix ;; #(
+ *) :
+ ;;
+esac
+fi
+
+
+
+# Reset variables that may have inherited troublesome values from
+# the environment.
+
+# IFS needs to be set, to space, tab, and newline, in precisely that order.
+# (If _AS_PATH_WALK were called with IFS unset, it would have the
+# side effect of setting IFS to empty, thus disabling word splitting.)
+# Quoting is to prevent editors from complaining about space-tab.
+as_nl='
+'
+export as_nl
+IFS=" "" $as_nl"
+
+PS1='$ '
+PS2='> '
+PS4='+ '
+
+# Ensure predictable behavior from utilities with locale-dependent output.
+LC_ALL=C
+export LC_ALL
+LANGUAGE=C
+export LANGUAGE
+
+# We cannot yet rely on "unset" to work, but we need these variables
+# to be unset--not just set to an empty or harmless value--now, to
+# avoid bugs in old shells (e.g. pre-3.0 UWIN ksh). This construct
+# also avoids known problems related to "unset" and subshell syntax
+# in other old shells (e.g. bash 2.01 and pdksh 5.2.14).
+for as_var in BASH_ENV ENV MAIL MAILPATH CDPATH
+do eval test \${$as_var+y} \
+ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || :
+done
+
+# Ensure that fds 0, 1, and 2 are open.
+if (exec 3>&0) 2>/dev/null; then :; else exec 0</dev/null; fi
+if (exec 3>&1) 2>/dev/null; then :; else exec 1>/dev/null; fi
+if (exec 3>&2) ; then :; else exec 2>/dev/null; fi
+
+# The user is always right.
+if ${PATH_SEPARATOR+false} :; then
+ PATH_SEPARATOR=:
+ (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && {
+ (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 ||
+ PATH_SEPARATOR=';'
+ }
+fi
+
+
+# Find who we are. Look in the path if we contain no directory separator.
+as_myself=
+case $0 in #((
+ *[\\/]* ) as_myself=$0 ;;
+ *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ case $as_dir in #(((
+ '') as_dir=./ ;;
+ */) ;;
+ *) as_dir=$as_dir/ ;;
+ esac
+ test -r "$as_dir$0" && as_myself=$as_dir$0 && break
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+# We did not find ourselves, most probably we were run as `sh COMMAND'
+# in which case we are not to be found in the path.
+if test "x$as_myself" = x; then
+ as_myself=$0
+fi
+if test ! -f "$as_myself"; then
+ printf "%s\n" "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2
+ exit 1
+fi
+
+
+
+# as_fn_error STATUS ERROR [LINENO LOG_FD]
+# ----------------------------------------
+# Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are
+# provided, also output the error to LOG_FD, referencing LINENO. Then exit the
+# script with STATUS, using 1 if that was 0.
+as_fn_error ()
+{
+ as_status=$1; test $as_status -eq 0 && as_status=1
+ if test "$4"; then
+ as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
+ printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: $2" >&$4
+ fi
+ printf "%s\n" "$as_me: error: $2" >&2
+ as_fn_exit $as_status
+} # as_fn_error
+
+
+
+# as_fn_set_status STATUS
+# -----------------------
+# Set $? to STATUS, without forking.
+as_fn_set_status ()
+{
+ return $1
+} # as_fn_set_status
+
+# as_fn_exit STATUS
+# -----------------
+# Exit the shell with STATUS, even in a "trap 0" or "set -e" context.
+as_fn_exit ()
+{
+ set +e
+ as_fn_set_status $1
+ exit $1
+} # as_fn_exit
+
+# as_fn_unset VAR
+# ---------------
+# Portably unset VAR.
+as_fn_unset ()
+{
+ { eval $1=; unset $1;}
+}
+as_unset=as_fn_unset
+
+# as_fn_append VAR VALUE
+# ----------------------
+# Append the text in VALUE to the end of the definition contained in VAR. Take
+# advantage of any shell optimizations that allow amortized linear growth over
+# repeated appends, instead of the typical quadratic growth present in naive
+# implementations.
+if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null
+then :
+ eval 'as_fn_append ()
+ {
+ eval $1+=\$2
+ }'
+else $as_nop
+ as_fn_append ()
+ {
+ eval $1=\$$1\$2
+ }
+fi # as_fn_append
+
+# as_fn_arith ARG...
+# ------------------
+# Perform arithmetic evaluation on the ARGs, and store the result in the
+# global $as_val. Take advantage of shells that can avoid forks. The arguments
+# must be portable across $(()) and expr.
+if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null
+then :
+ eval 'as_fn_arith ()
+ {
+ as_val=$(( $* ))
+ }'
+else $as_nop
+ as_fn_arith ()
+ {
+ as_val=`expr "$@" || test $? -eq 1`
+ }
+fi # as_fn_arith
+
+
+if expr a : '\(a\)' >/dev/null 2>&1 &&
+ test "X`expr 00001 : '.*\(...\)'`" = X001; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then
+ as_basename=basename
+else
+ as_basename=false
+fi
+
+if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then
+ as_dirname=dirname
+else
+ as_dirname=false
+fi
+
+as_me=`$as_basename -- "$0" ||
+$as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \
+ X"$0" : 'X\(//\)$' \| \
+ X"$0" : 'X\(/\)' \| . 2>/dev/null ||
+printf "%s\n" X/"$0" |
+ sed '/^.*\/\([^/][^/]*\)\/*$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+
+# Avoid depending upon Character Ranges.
+as_cr_letters='abcdefghijklmnopqrstuvwxyz'
+as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ'
+as_cr_Letters=$as_cr_letters$as_cr_LETTERS
+as_cr_digits='0123456789'
+as_cr_alnum=$as_cr_Letters$as_cr_digits
+
+
+# Determine whether it's possible to make 'echo' print without a newline.
+# These variables are no longer used directly by Autoconf, but are AC_SUBSTed
+# for compatibility with existing Makefiles.
+ECHO_C= ECHO_N= ECHO_T=
+case `echo -n x` in #(((((
+-n*)
+ case `echo 'xy\c'` in
+ *c*) ECHO_T=' ';; # ECHO_T is single tab character.
+ xy) ECHO_C='\c';;
+ *) echo `echo ksh88 bug on AIX 6.1` > /dev/null
+ ECHO_T=' ';;
+ esac;;
+*)
+ ECHO_N='-n';;
+esac
+
+# For backward compatibility with old third-party macros, we provide
+# the shell variables $as_echo and $as_echo_n. New code should use
+# AS_ECHO(["message"]) and AS_ECHO_N(["message"]), respectively.
+as_echo='printf %s\n'
+as_echo_n='printf %s'
+
+rm -f conf$$ conf$$.exe conf$$.file
+if test -d conf$$.dir; then
+ rm -f conf$$.dir/conf$$.file
+else
+ rm -f conf$$.dir
+ mkdir conf$$.dir 2>/dev/null
+fi
+if (echo >conf$$.file) 2>/dev/null; then
+ if ln -s conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s='ln -s'
+ # ... but there are two gotchas:
+ # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail.
+ # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable.
+ # In both cases, we have to default to `cp -pR'.
+ ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe ||
+ as_ln_s='cp -pR'
+ elif ln conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s=ln
+ else
+ as_ln_s='cp -pR'
+ fi
+else
+ as_ln_s='cp -pR'
+fi
+rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file
+rmdir conf$$.dir 2>/dev/null
+
+
+# as_fn_mkdir_p
+# -------------
+# Create "$as_dir" as a directory, including parents if necessary.
+as_fn_mkdir_p ()
+{
+
+ case $as_dir in #(
+ -*) as_dir=./$as_dir;;
+ esac
+ test -d "$as_dir" || eval $as_mkdir_p || {
+ as_dirs=
+ while :; do
+ case $as_dir in #(
+ *\'*) as_qdir=`printf "%s\n" "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'(
+ *) as_qdir=$as_dir;;
+ esac
+ as_dirs="'$as_qdir' $as_dirs"
+ as_dir=`$as_dirname -- "$as_dir" ||
+$as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$as_dir" : 'X\(//\)[^/]' \| \
+ X"$as_dir" : 'X\(//\)$' \| \
+ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null ||
+printf "%s\n" X"$as_dir" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+ test -d "$as_dir" && break
+ done
+ test -z "$as_dirs" || eval "mkdir $as_dirs"
+ } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir"
+
+
+} # as_fn_mkdir_p
+if mkdir -p . 2>/dev/null; then
+ as_mkdir_p='mkdir -p "$as_dir"'
+else
+ test -d ./-p && rmdir ./-p
+ as_mkdir_p=false
+fi
+
+
+# as_fn_executable_p FILE
+# -----------------------
+# Test if FILE is an executable regular file.
+as_fn_executable_p ()
+{
+ test -f "$1" && test -x "$1"
+} # as_fn_executable_p
+as_test_x='test -x'
+as_executable_p=as_fn_executable_p
+
+# Sed expression to map a string onto a valid CPP name.
+as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'"
+
+# Sed expression to map a string onto a valid variable name.
+as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'"
+
+
+exec 6>&1
+## ----------------------------------- ##
+## Main body of $CONFIG_STATUS script. ##
+## ----------------------------------- ##
+_ASEOF
+test $as_write_fail = 0 && chmod +x $CONFIG_STATUS || ac_write_fail=1
+
+cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
+# Save the log message, to keep $0 and so on meaningful, and to
+# report actual input values of CONFIG_FILES etc. instead of their
+# values after options handling.
+ac_log="
+This file was extended by knot $as_me 3.4.6, which was
+generated by GNU Autoconf 2.71. Invocation command line was
+
+ CONFIG_FILES = $CONFIG_FILES
+ CONFIG_HEADERS = $CONFIG_HEADERS
+ CONFIG_LINKS = $CONFIG_LINKS
+ CONFIG_COMMANDS = $CONFIG_COMMANDS
+ $ $0 $@
+
+on `(hostname || uname -n) 2>/dev/null | sed 1q`
+"
+
+_ACEOF
+
+case $ac_config_files in *"
+"*) set x $ac_config_files; shift; ac_config_files=$*;;
+esac
+
+case $ac_config_headers in *"
+"*) set x $ac_config_headers; shift; ac_config_headers=$*;;
+esac
+
+
+cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
+# Files that config.status was made for.
+config_files="$ac_config_files"
+config_headers="$ac_config_headers"
+config_commands="$ac_config_commands"
+
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
+ac_cs_usage="\
+\`$as_me' instantiates files and other configuration actions
+from templates according to the current configuration. Unless the files
+and actions are specified as TAGs, all are instantiated by default.
+
+Usage: $0 [OPTION]... [TAG]...
+
+ -h, --help print this help, then exit
+ -V, --version print version number and configuration settings, then exit
+ --config print configuration, then exit
+ -q, --quiet, --silent
+ do not print progress messages
+ -d, --debug don't remove temporary files
+ --recheck update $as_me by reconfiguring in the same conditions
+ --file=FILE[:TEMPLATE]
+ instantiate the configuration file FILE
+ --header=FILE[:TEMPLATE]
+ instantiate the configuration header FILE
+
+Configuration files:
+$config_files
+
+Configuration headers:
+$config_headers
+
+Configuration commands:
+$config_commands
+
+Report bugs to <knot-dns@labs.nic.cz>."
+
+_ACEOF
+ac_cs_config=`printf "%s\n" "$ac_configure_args" | sed "$ac_safe_unquote"`
+ac_cs_config_escaped=`printf "%s\n" "$ac_cs_config" | sed "s/^ //; s/'/'\\\\\\\\''/g"`
+cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
+ac_cs_config='$ac_cs_config_escaped'
+ac_cs_version="\\
+knot config.status 3.4.6
+configured by $0, generated by GNU Autoconf 2.71,
+ with options \\"\$ac_cs_config\\"
+
+Copyright (C) 2021 Free Software Foundation, Inc.
+This config.status script is free software; the Free Software Foundation
+gives unlimited permission to copy, distribute and modify it."
+
+ac_pwd='$ac_pwd'
+srcdir='$srcdir'
+INSTALL='$INSTALL'
+MKDIR_P='$MKDIR_P'
+AWK='$AWK'
+test -n "\$AWK" || AWK=awk
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
+# The default lists apply if the user does not specify any file.
+ac_need_defaults=:
+while test $# != 0
+do
+ case $1 in
+ --*=?*)
+ ac_option=`expr "X$1" : 'X\([^=]*\)='`
+ ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)'`
+ ac_shift=:
+ ;;
+ --*=)
+ ac_option=`expr "X$1" : 'X\([^=]*\)='`
+ ac_optarg=
+ ac_shift=:
+ ;;
+ *)
+ ac_option=$1
+ ac_optarg=$2
+ ac_shift=shift
+ ;;
+ esac
+
+ case $ac_option in
+ # Handling of the options.
+ -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r)
+ ac_cs_recheck=: ;;
+ --version | --versio | --versi | --vers | --ver | --ve | --v | -V )
+ printf "%s\n" "$ac_cs_version"; exit ;;
+ --config | --confi | --conf | --con | --co | --c )
+ printf "%s\n" "$ac_cs_config"; exit ;;
+ --debug | --debu | --deb | --de | --d | -d )
+ debug=: ;;
+ --file | --fil | --fi | --f )
+ $ac_shift
+ case $ac_optarg in
+ *\'*) ac_optarg=`printf "%s\n" "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;;
+ '') as_fn_error $? "missing file argument" ;;
+ esac
+ as_fn_append CONFIG_FILES " '$ac_optarg'"
+ ac_need_defaults=false;;
+ --header | --heade | --head | --hea )
+ $ac_shift
+ case $ac_optarg in
+ *\'*) ac_optarg=`printf "%s\n" "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;;
+ esac
+ as_fn_append CONFIG_HEADERS " '$ac_optarg'"
+ ac_need_defaults=false;;
+ --he | --h)
+ # Conflict between --help and --header
+ as_fn_error $? "ambiguous option: \`$1'
+Try \`$0 --help' for more information.";;
+ --help | --hel | -h )
+ printf "%s\n" "$ac_cs_usage"; exit ;;
+ -q | -quiet | --quiet | --quie | --qui | --qu | --q \
+ | -silent | --silent | --silen | --sile | --sil | --si | --s)
+ ac_cs_silent=: ;;
+
+ # This is an error.
+ -*) as_fn_error $? "unrecognized option: \`$1'
+Try \`$0 --help' for more information." ;;
+
+ *) as_fn_append ac_config_targets " $1"
+ ac_need_defaults=false ;;
+
+ esac
+ shift
+done
+
+ac_configure_extra_args=
+
+if $ac_cs_silent; then
+ exec 6>/dev/null
+ ac_configure_extra_args="$ac_configure_extra_args --silent"
+fi
+
+_ACEOF
+cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
+if \$ac_cs_recheck; then
+ set X $SHELL '$0' $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion
+ shift
+ \printf "%s\n" "running CONFIG_SHELL=$SHELL \$*" >&6
+ CONFIG_SHELL='$SHELL'
+ export CONFIG_SHELL
+ exec "\$@"
+fi
+
+_ACEOF
+cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
+exec 5>>config.log
+{
+ echo
+ sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX
+## Running $as_me. ##
+_ASBOX
+ printf "%s\n" "$ac_log"
+} >&5
+
+_ACEOF
+cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
+#
+# INIT-COMMANDS
+#
+AMDEP_TRUE="$AMDEP_TRUE" MAKE="${MAKE-make}"
+
+
+# The HP-UX ksh and POSIX shell print the target directory to stdout
+# if CDPATH is set.
+(unset CDPATH) >/dev/null 2>&1 && unset CDPATH
+
+sed_quote_subst='$sed_quote_subst'
+double_quote_subst='$double_quote_subst'
+delay_variable_subst='$delay_variable_subst'
+macro_version='`$ECHO "$macro_version" | $SED "$delay_single_quote_subst"`'
+macro_revision='`$ECHO "$macro_revision" | $SED "$delay_single_quote_subst"`'
+enable_shared='`$ECHO "$enable_shared" | $SED "$delay_single_quote_subst"`'
+enable_static='`$ECHO "$enable_static" | $SED "$delay_single_quote_subst"`'
+pic_mode='`$ECHO "$pic_mode" | $SED "$delay_single_quote_subst"`'
+enable_fast_install='`$ECHO "$enable_fast_install" | $SED "$delay_single_quote_subst"`'
+shared_archive_member_spec='`$ECHO "$shared_archive_member_spec" | $SED "$delay_single_quote_subst"`'
+SHELL='`$ECHO "$SHELL" | $SED "$delay_single_quote_subst"`'
+ECHO='`$ECHO "$ECHO" | $SED "$delay_single_quote_subst"`'
+PATH_SEPARATOR='`$ECHO "$PATH_SEPARATOR" | $SED "$delay_single_quote_subst"`'
+host_alias='`$ECHO "$host_alias" | $SED "$delay_single_quote_subst"`'
+host='`$ECHO "$host" | $SED "$delay_single_quote_subst"`'
+host_os='`$ECHO "$host_os" | $SED "$delay_single_quote_subst"`'
+build_alias='`$ECHO "$build_alias" | $SED "$delay_single_quote_subst"`'
+build='`$ECHO "$build" | $SED "$delay_single_quote_subst"`'
+build_os='`$ECHO "$build_os" | $SED "$delay_single_quote_subst"`'
+SED='`$ECHO "$SED" | $SED "$delay_single_quote_subst"`'
+Xsed='`$ECHO "$Xsed" | $SED "$delay_single_quote_subst"`'
+GREP='`$ECHO "$GREP" | $SED "$delay_single_quote_subst"`'
+EGREP='`$ECHO "$EGREP" | $SED "$delay_single_quote_subst"`'
+FGREP='`$ECHO "$FGREP" | $SED "$delay_single_quote_subst"`'
+LD='`$ECHO "$LD" | $SED "$delay_single_quote_subst"`'
+NM='`$ECHO "$NM" | $SED "$delay_single_quote_subst"`'
+LN_S='`$ECHO "$LN_S" | $SED "$delay_single_quote_subst"`'
+max_cmd_len='`$ECHO "$max_cmd_len" | $SED "$delay_single_quote_subst"`'
+ac_objext='`$ECHO "$ac_objext" | $SED "$delay_single_quote_subst"`'
+exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+FILECMD='`$ECHO "$FILECMD" | $SED "$delay_single_quote_subst"`'
+OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+lt_ar_flags='`$ECHO "$lt_ar_flags" | $SED "$delay_single_quote_subst"`'
+AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+old_postuninstall_cmds='`$ECHO "$old_postuninstall_cmds" | $SED "$delay_single_quote_subst"`'
+old_archive_cmds='`$ECHO "$old_archive_cmds" | $SED "$delay_single_quote_subst"`'
+lock_old_archive_extraction='`$ECHO "$lock_old_archive_extraction" | $SED "$delay_single_quote_subst"`'
+CC='`$ECHO "$CC" | $SED "$delay_single_quote_subst"`'
+CFLAGS='`$ECHO "$CFLAGS" | $SED "$delay_single_quote_subst"`'
+compiler='`$ECHO "$compiler" | $SED "$delay_single_quote_subst"`'
+GCC='`$ECHO "$GCC" | $SED "$delay_single_quote_subst"`'
+lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$delay_single_quote_subst"`'
+lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+lt_cv_sys_global_symbol_to_import='`$ECHO "$lt_cv_sys_global_symbol_to_import" | $SED "$delay_single_quote_subst"`'
+lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
+lt_cv_nm_interface='`$ECHO "$lt_cv_nm_interface" | $SED "$delay_single_quote_subst"`'
+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+lt_cv_truncate_bin='`$ECHO "$lt_cv_truncate_bin" | $SED "$delay_single_quote_subst"`'
+objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+OTOOL='`$ECHO "$OTOOL" | $SED "$delay_single_quote_subst"`'
+OTOOL64='`$ECHO "$OTOOL64" | $SED "$delay_single_quote_subst"`'
+libext='`$ECHO "$libext" | $SED "$delay_single_quote_subst"`'
+shrext_cmds='`$ECHO "$shrext_cmds" | $SED "$delay_single_quote_subst"`'
+extract_expsyms_cmds='`$ECHO "$extract_expsyms_cmds" | $SED "$delay_single_quote_subst"`'
+archive_cmds_need_lc='`$ECHO "$archive_cmds_need_lc" | $SED "$delay_single_quote_subst"`'
+enable_shared_with_static_runtimes='`$ECHO "$enable_shared_with_static_runtimes" | $SED "$delay_single_quote_subst"`'
+export_dynamic_flag_spec='`$ECHO "$export_dynamic_flag_spec" | $SED "$delay_single_quote_subst"`'
+whole_archive_flag_spec='`$ECHO "$whole_archive_flag_spec" | $SED "$delay_single_quote_subst"`'
+compiler_needs_object='`$ECHO "$compiler_needs_object" | $SED "$delay_single_quote_subst"`'
+old_archive_from_new_cmds='`$ECHO "$old_archive_from_new_cmds" | $SED "$delay_single_quote_subst"`'
+old_archive_from_expsyms_cmds='`$ECHO "$old_archive_from_expsyms_cmds" | $SED "$delay_single_quote_subst"`'
+archive_cmds='`$ECHO "$archive_cmds" | $SED "$delay_single_quote_subst"`'
+archive_expsym_cmds='`$ECHO "$archive_expsym_cmds" | $SED "$delay_single_quote_subst"`'
+module_cmds='`$ECHO "$module_cmds" | $SED "$delay_single_quote_subst"`'
+module_expsym_cmds='`$ECHO "$module_expsym_cmds" | $SED "$delay_single_quote_subst"`'
+with_gnu_ld='`$ECHO "$with_gnu_ld" | $SED "$delay_single_quote_subst"`'
+allow_undefined_flag='`$ECHO "$allow_undefined_flag" | $SED "$delay_single_quote_subst"`'
+no_undefined_flag='`$ECHO "$no_undefined_flag" | $SED "$delay_single_quote_subst"`'
+hardcode_libdir_flag_spec='`$ECHO "$hardcode_libdir_flag_spec" | $SED "$delay_single_quote_subst"`'
+hardcode_libdir_separator='`$ECHO "$hardcode_libdir_separator" | $SED "$delay_single_quote_subst"`'
+hardcode_direct='`$ECHO "$hardcode_direct" | $SED "$delay_single_quote_subst"`'
+hardcode_direct_absolute='`$ECHO "$hardcode_direct_absolute" | $SED "$delay_single_quote_subst"`'
+hardcode_minus_L='`$ECHO "$hardcode_minus_L" | $SED "$delay_single_quote_subst"`'
+hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_quote_subst"`'
+hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+need_version='`$ECHO "$need_version" | $SED "$delay_single_quote_subst"`'
+version_type='`$ECHO "$version_type" | $SED "$delay_single_quote_subst"`'
+runpath_var='`$ECHO "$runpath_var" | $SED "$delay_single_quote_subst"`'
+shlibpath_var='`$ECHO "$shlibpath_var" | $SED "$delay_single_quote_subst"`'
+shlibpath_overrides_runpath='`$ECHO "$shlibpath_overrides_runpath" | $SED "$delay_single_quote_subst"`'
+libname_spec='`$ECHO "$libname_spec" | $SED "$delay_single_quote_subst"`'
+library_names_spec='`$ECHO "$library_names_spec" | $SED "$delay_single_quote_subst"`'
+soname_spec='`$ECHO "$soname_spec" | $SED "$delay_single_quote_subst"`'
+install_override_mode='`$ECHO "$install_override_mode" | $SED "$delay_single_quote_subst"`'
+postinstall_cmds='`$ECHO "$postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+postuninstall_cmds='`$ECHO "$postuninstall_cmds" | $SED "$delay_single_quote_subst"`'
+finish_cmds='`$ECHO "$finish_cmds" | $SED "$delay_single_quote_subst"`'
+finish_eval='`$ECHO "$finish_eval" | $SED "$delay_single_quote_subst"`'
+hardcode_into_libs='`$ECHO "$hardcode_into_libs" | $SED "$delay_single_quote_subst"`'
+sys_lib_search_path_spec='`$ECHO "$sys_lib_search_path_spec" | $SED "$delay_single_quote_subst"`'
+configure_time_dlsearch_path='`$ECHO "$configure_time_dlsearch_path" | $SED "$delay_single_quote_subst"`'
+configure_time_lt_sys_library_path='`$ECHO "$configure_time_lt_sys_library_path" | $SED "$delay_single_quote_subst"`'
+hardcode_action='`$ECHO "$hardcode_action" | $SED "$delay_single_quote_subst"`'
+enable_dlopen='`$ECHO "$enable_dlopen" | $SED "$delay_single_quote_subst"`'
+enable_dlopen_self='`$ECHO "$enable_dlopen_self" | $SED "$delay_single_quote_subst"`'
+enable_dlopen_self_static='`$ECHO "$enable_dlopen_self_static" | $SED "$delay_single_quote_subst"`'
+old_striplib='`$ECHO "$old_striplib" | $SED "$delay_single_quote_subst"`'
+striplib='`$ECHO "$striplib" | $SED "$delay_single_quote_subst"`'
+
+LTCC='$LTCC'
+LTCFLAGS='$LTCFLAGS'
+compiler='$compiler_DEFAULT'
+
+# A function that is used when there is no print builtin or printf.
+func_fallback_echo ()
+{
+ eval 'cat <<_LTECHO_EOF
+\$1
+_LTECHO_EOF'
+}
+
+# Quote evaled strings.
+for var in SHELL \
+ECHO \
+PATH_SEPARATOR \
+SED \
+GREP \
+EGREP \
+FGREP \
+LD \
+NM \
+LN_S \
+lt_SP2NL \
+lt_NL2SP \
+reload_flag \
+FILECMD \
+OBJDUMP \
+deplibs_check_method \
+file_magic_cmd \
+file_magic_glob \
+want_nocaseglob \
+DLLTOOL \
+sharedlib_from_linklib_cmd \
+AR \
+archiver_list_spec \
+STRIP \
+RANLIB \
+CC \
+CFLAGS \
+compiler \
+lt_cv_sys_global_symbol_pipe \
+lt_cv_sys_global_symbol_to_cdecl \
+lt_cv_sys_global_symbol_to_import \
+lt_cv_sys_global_symbol_to_c_name_address \
+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
+lt_cv_nm_interface \
+nm_file_list_spec \
+lt_cv_truncate_bin \
+lt_prog_compiler_no_builtin_flag \
+lt_prog_compiler_pic \
+lt_prog_compiler_wl \
+lt_prog_compiler_static \
+lt_cv_prog_compiler_c_o \
+need_locks \
+MANIFEST_TOOL \
+DSYMUTIL \
+NMEDIT \
+LIPO \
+OTOOL \
+OTOOL64 \
+shrext_cmds \
+export_dynamic_flag_spec \
+whole_archive_flag_spec \
+compiler_needs_object \
+with_gnu_ld \
+allow_undefined_flag \
+no_undefined_flag \
+hardcode_libdir_flag_spec \
+hardcode_libdir_separator \
+exclude_expsyms \
+include_expsyms \
+file_list_spec \
+variables_saved_for_relink \
+libname_spec \
+library_names_spec \
+soname_spec \
+install_override_mode \
+finish_eval \
+old_striplib \
+striplib; do
+ case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
+ *[\\\\\\\`\\"\\\$]*)
+ eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes
+ ;;
+ *)
+ eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\""
+ ;;
+ esac
+done
+
+# Double-quote double-evaled strings.
+for var in reload_cmds \
+old_postinstall_cmds \
+old_postuninstall_cmds \
+old_archive_cmds \
+extract_expsyms_cmds \
+old_archive_from_new_cmds \
+old_archive_from_expsyms_cmds \
+archive_cmds \
+archive_expsym_cmds \
+module_cmds \
+module_expsym_cmds \
+export_symbols_cmds \
+prelink_cmds \
+postlink_cmds \
+postinstall_cmds \
+postuninstall_cmds \
+finish_cmds \
+sys_lib_search_path_spec \
+configure_time_dlsearch_path \
+configure_time_lt_sys_library_path; do
+ case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
+ *[\\\\\\\`\\"\\\$]*)
+ eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes
+ ;;
+ *)
+ eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\""
+ ;;
+ esac
+done
+
+ac_aux_dir='$ac_aux_dir'
+
+# See if we are running on zsh, and set the options that allow our
+# commands through without removal of \ escapes.
+if test -n "\${ZSH_VERSION+set}"; then
+ setopt NO_GLOB_SUBST
+fi
+
+
+ PACKAGE='$PACKAGE'
+ VERSION='$VERSION'
+ RM='$RM'
+ ofile='$ofile'
+
+
+
+
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
+
+# Handling of arguments.
+for ac_config_target in $ac_config_targets
+do
+ case $ac_config_target in
+ "src/config.h") CONFIG_HEADERS="$CONFIG_HEADERS src/config.h" ;;
+ "depfiles") CONFIG_COMMANDS="$CONFIG_COMMANDS depfiles" ;;
+ "src/libknot/version.h") CONFIG_FILES="$CONFIG_FILES src/libknot/version.h" ;;
+ "src/libdnssec/version.h") CONFIG_FILES="$CONFIG_FILES src/libdnssec/version.h" ;;
+ "src/libzscanner/version.h") CONFIG_FILES="$CONFIG_FILES src/libzscanner/version.h" ;;
+ "libtool") CONFIG_COMMANDS="$CONFIG_COMMANDS libtool" ;;
+ "src/knotd.pc") CONFIG_FILES="$CONFIG_FILES src/knotd.pc" ;;
+ "src/libknot.pc") CONFIG_FILES="$CONFIG_FILES src/libknot.pc" ;;
+ "src/libdnssec.pc") CONFIG_FILES="$CONFIG_FILES src/libdnssec.pc" ;;
+ "src/libzscanner.pc") CONFIG_FILES="$CONFIG_FILES src/libzscanner.pc" ;;
+ "Makefile") CONFIG_FILES="$CONFIG_FILES Makefile" ;;
+ "Doxyfile") CONFIG_FILES="$CONFIG_FILES Doxyfile" ;;
+ "doc/Makefile") CONFIG_FILES="$CONFIG_FILES doc/Makefile" ;;
+ "tests/Makefile") CONFIG_FILES="$CONFIG_FILES tests/Makefile" ;;
+ "tests-fuzz/Makefile") CONFIG_FILES="$CONFIG_FILES tests-fuzz/Makefile" ;;
+ "samples/Makefile") CONFIG_FILES="$CONFIG_FILES samples/Makefile" ;;
+ "distro/Makefile") CONFIG_FILES="$CONFIG_FILES distro/Makefile" ;;
+ "python/Makefile") CONFIG_FILES="$CONFIG_FILES python/Makefile" ;;
+ "python/knot_exporter/Makefile") CONFIG_FILES="$CONFIG_FILES python/knot_exporter/Makefile" ;;
+ "python/knot_exporter/pyproject.toml") CONFIG_FILES="$CONFIG_FILES python/knot_exporter/pyproject.toml" ;;
+ "python/knot_exporter/setup.py") CONFIG_FILES="$CONFIG_FILES python/knot_exporter/setup.py" ;;
+ "python/libknot/Makefile") CONFIG_FILES="$CONFIG_FILES python/libknot/Makefile" ;;
+ "python/libknot/pyproject.toml") CONFIG_FILES="$CONFIG_FILES python/libknot/pyproject.toml" ;;
+ "python/libknot/setup.py") CONFIG_FILES="$CONFIG_FILES python/libknot/setup.py" ;;
+ "python/libknot/libknot/__init__.py") CONFIG_FILES="$CONFIG_FILES python/libknot/libknot/__init__.py" ;;
+ "src/Makefile") CONFIG_FILES="$CONFIG_FILES src/Makefile" ;;
+ "src/libknot/xdp/Makefile") CONFIG_FILES="$CONFIG_FILES src/libknot/xdp/Makefile" ;;
+ "src/knot/modules/static_modules.h") CONFIG_FILES="$CONFIG_FILES src/knot/modules/static_modules.h" ;;
+ "doc/modules.rst") CONFIG_FILES="$CONFIG_FILES doc/modules.rst" ;;
+
+ *) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;;
+ esac
+done
+
+
+# If the user did not use the arguments to specify the items to instantiate,
+# then the envvar interface is used. Set only those that are not.
+# We use the long form for the default assignment because of an extremely
+# bizarre bug on SunOS 4.1.3.
+if $ac_need_defaults; then
+ test ${CONFIG_FILES+y} || CONFIG_FILES=$config_files
+ test ${CONFIG_HEADERS+y} || CONFIG_HEADERS=$config_headers
+ test ${CONFIG_COMMANDS+y} || CONFIG_COMMANDS=$config_commands
+fi
+
+# Have a temporary directory for convenience. Make it in the build tree
+# simply because there is no reason against having it here, and in addition,
+# creating and moving files from /tmp can sometimes cause problems.
+# Hook for its removal unless debugging.
+# Note that there is a small window in which the directory will not be cleaned:
+# after its creation but before its name has been assigned to `$tmp'.
+$debug ||
+{
+ tmp= ac_tmp=
+ trap 'exit_status=$?
+ : "${ac_tmp:=$tmp}"
+ { test ! -d "$ac_tmp" || rm -fr "$ac_tmp"; } && exit $exit_status
+' 0
+ trap 'as_fn_exit 1' 1 2 13 15
+}
+# Create a (secure) tmp directory for tmp files.
+
+{
+ tmp=`(umask 077 && mktemp -d "./confXXXXXX") 2>/dev/null` &&
+ test -d "$tmp"
+} ||
+{
+ tmp=./conf$$-$RANDOM
+ (umask 077 && mkdir "$tmp")
+} || as_fn_error $? "cannot create a temporary directory in ." "$LINENO" 5
+ac_tmp=$tmp
+
+# Set up the scripts for CONFIG_FILES section.
+# No need to generate them if there are no CONFIG_FILES.
+# This happens for instance with `./config.status config.h'.
+if test -n "$CONFIG_FILES"; then
+
+
+ac_cr=`echo X | tr X '\015'`
+# On cygwin, bash can eat \r inside `` if the user requested igncr.
+# But we know of no other shell where ac_cr would be empty at this
+# point, so we can use a bashism as a fallback.
+if test "x$ac_cr" = x; then
+ eval ac_cr=\$\'\\r\'
+fi
+ac_cs_awk_cr=`$AWK 'BEGIN { print "a\rb" }' /dev/null`
+if test "$ac_cs_awk_cr" = "a${ac_cr}b"; then
+ ac_cs_awk_cr='\\r'
+else
+ ac_cs_awk_cr=$ac_cr
+fi
+
+echo 'BEGIN {' >"$ac_tmp/subs1.awk" &&
+_ACEOF
+
+
+{
+ echo "cat >conf$$subs.awk <<_ACEOF" &&
+ echo "$ac_subst_vars" | sed 's/.*/&!$&$ac_delim/' &&
+ echo "_ACEOF"
+} >conf$$subs.sh ||
+ as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5
+ac_delim_num=`echo "$ac_subst_vars" | grep -c '^'`
+ac_delim='%!_!# '
+for ac_last_try in false false false false false :; do
+ . ./conf$$subs.sh ||
+ as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5
+
+ ac_delim_n=`sed -n "s/.*$ac_delim\$/X/p" conf$$subs.awk | grep -c X`
+ if test $ac_delim_n = $ac_delim_num; then
+ break
+ elif $ac_last_try; then
+ as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5
+ else
+ ac_delim="$ac_delim!$ac_delim _$ac_delim!! "
+ fi
+done
+rm -f conf$$subs.sh
+
+cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
+cat >>"\$ac_tmp/subs1.awk" <<\\_ACAWK &&
+_ACEOF
+sed -n '
+h
+s/^/S["/; s/!.*/"]=/
+p
+g
+s/^[^!]*!//
+:repl
+t repl
+s/'"$ac_delim"'$//
+t delim
+:nl
+h
+s/\(.\{148\}\)..*/\1/
+t more1
+s/["\\]/\\&/g; s/^/"/; s/$/\\n"\\/
+p
+n
+b repl
+:more1
+s/["\\]/\\&/g; s/^/"/; s/$/"\\/
+p
+g
+s/.\{148\}//
+t nl
+:delim
+h
+s/\(.\{148\}\)..*/\1/
+t more2
+s/["\\]/\\&/g; s/^/"/; s/$/"/
+p
+b
+:more2
+s/["\\]/\\&/g; s/^/"/; s/$/"\\/
+p
+g
+s/.\{148\}//
+t delim
+' >$CONFIG_STATUS || ac_write_fail=1
+rm -f conf$$subs.awk
+cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
+_ACAWK
+cat >>"\$ac_tmp/subs1.awk" <<_ACAWK &&
+ for (key in S) S_is_set[key] = 1
+ FS = ""
+
+}
+{
+ line = $ 0
+ nfields = split(line, field, "@")
+ substed = 0
+ len = length(field[1])
+ for (i = 2; i < nfields; i++) {
+ key = field[i]
+ keylen = length(key)
+ if (S_is_set[key]) {
+ value = S[key]
+ line = substr(line, 1, len) "" value "" substr(line, len + keylen + 3)
+ len += length(value) + length(field[++i])
+ substed = 1
+ } else
+ len += 1 + keylen
+ }
+
+ print line
+}
+
+_ACAWK
+_ACEOF
+cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
+if sed "s/$ac_cr//" < /dev/null > /dev/null 2>&1; then
+ sed "s/$ac_cr\$//; s/$ac_cr/$ac_cs_awk_cr/g"
+else
+ cat
+fi < "$ac_tmp/subs1.awk" > "$ac_tmp/subs.awk" \
+ || as_fn_error $? "could not setup config files machinery" "$LINENO" 5
+_ACEOF
+
+# VPATH may cause trouble with some makes, so we remove sole $(srcdir),
+# ${srcdir} and @srcdir@ entries from VPATH if srcdir is ".", strip leading and
+# trailing colons and then remove the whole line if VPATH becomes empty
+# (actually we leave an empty line to preserve line numbers).
+if test "x$srcdir" = x.; then
+ ac_vpsub='/^[ ]*VPATH[ ]*=[ ]*/{
+h
+s///
+s/^/:/
+s/[ ]*$/:/
+s/:\$(srcdir):/:/g
+s/:\${srcdir}:/:/g
+s/:@srcdir@:/:/g
+s/^:*//
+s/:*$//
+x
+s/\(=[ ]*\).*/\1/
+G
+s/\n//
+s/^[^=]*=[ ]*$//
+}'
+fi
+
+cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
+fi # test -n "$CONFIG_FILES"
+
+# Set up the scripts for CONFIG_HEADERS section.
+# No need to generate them if there are no CONFIG_HEADERS.
+# This happens for instance with `./config.status Makefile'.
+if test -n "$CONFIG_HEADERS"; then
+cat >"$ac_tmp/defines.awk" <<\_ACAWK ||
+BEGIN {
+_ACEOF
+
+# Transform confdefs.h into an awk script `defines.awk', embedded as
+# here-document in config.status, that substitutes the proper values into
+# config.h.in to produce config.h.
+
+# Create a delimiter string that does not exist in confdefs.h, to ease
+# handling of long lines.
+ac_delim='%!_!# '
+for ac_last_try in false false :; do
+ ac_tt=`sed -n "/$ac_delim/p" confdefs.h`
+ if test -z "$ac_tt"; then
+ break
+ elif $ac_last_try; then
+ as_fn_error $? "could not make $CONFIG_HEADERS" "$LINENO" 5
+ else
+ ac_delim="$ac_delim!$ac_delim _$ac_delim!! "
+ fi
+done
+
+# For the awk script, D is an array of macro values keyed by name,
+# likewise P contains macro parameters if any. Preserve backslash
+# newline sequences.
+
+ac_word_re=[_$as_cr_Letters][_$as_cr_alnum]*
+sed -n '
+s/.\{148\}/&'"$ac_delim"'/g
+t rset
+:rset
+s/^[ ]*#[ ]*define[ ][ ]*/ /
+t def
+d
+:def
+s/\\$//
+t bsnl
+s/["\\]/\\&/g
+s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\
+D["\1"]=" \3"/p
+s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2"/p
+d
+:bsnl
+s/["\\]/\\&/g
+s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\
+D["\1"]=" \3\\\\\\n"\\/p
+t cont
+s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2\\\\\\n"\\/p
+t cont
+d
+:cont
+n
+s/.\{148\}/&'"$ac_delim"'/g
+t clear
+:clear
+s/\\$//
+t bsnlc
+s/["\\]/\\&/g; s/^/"/; s/$/"/p
+d
+:bsnlc
+s/["\\]/\\&/g; s/^/"/; s/$/\\\\\\n"\\/p
+b cont
+' >$CONFIG_STATUS || ac_write_fail=1
+
+cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
+ for (key in D) D_is_set[key] = 1
+ FS = ""
+}
+/^[\t ]*#[\t ]*(define|undef)[\t ]+$ac_word_re([\t (]|\$)/ {
+ line = \$ 0
+ split(line, arg, " ")
+ if (arg[1] == "#") {
+ defundef = arg[2]
+ mac1 = arg[3]
+ } else {
+ defundef = substr(arg[1], 2)
+ mac1 = arg[2]
+ }
+ split(mac1, mac2, "(") #)
+ macro = mac2[1]
+ prefix = substr(line, 1, index(line, defundef) - 1)
+ if (D_is_set[macro]) {
+ # Preserve the white space surrounding the "#".
+ print prefix "define", macro P[macro] D[macro]
+ next
+ } else {
+ # Replace #undef with comments. This is necessary, for example,
+ # in the case of _POSIX_SOURCE, which is predefined and required
+ # on some systems where configure will not decide to define it.
+ if (defundef == "undef") {
+ print "/*", prefix defundef, macro, "*/"
+ next
+ }
+ }
+}
+{ print }
+_ACAWK
+_ACEOF
+cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
+ as_fn_error $? "could not setup config headers machinery" "$LINENO" 5
+fi # test -n "$CONFIG_HEADERS"
+
+
+eval set X " :F $CONFIG_FILES :H $CONFIG_HEADERS :C $CONFIG_COMMANDS"
+shift
+for ac_tag
+do
+ case $ac_tag in
+ :[FHLC]) ac_mode=$ac_tag; continue;;
+ esac
+ case $ac_mode$ac_tag in
+ :[FHL]*:*);;
+ :L* | :C*:*) as_fn_error $? "invalid tag \`$ac_tag'" "$LINENO" 5;;
+ :[FH]-) ac_tag=-:-;;
+ :[FH]*) ac_tag=$ac_tag:$ac_tag.in;;
+ esac
+ ac_save_IFS=$IFS
+ IFS=:
+ set x $ac_tag
+ IFS=$ac_save_IFS
+ shift
+ ac_file=$1
+ shift
+
+ case $ac_mode in
+ :L) ac_source=$1;;
+ :[FH])
+ ac_file_inputs=
+ for ac_f
+ do
+ case $ac_f in
+ -) ac_f="$ac_tmp/stdin";;
+ *) # Look for the file first in the build tree, then in the source tree
+ # (if the path is not absolute). The absolute path cannot be DOS-style,
+ # because $ac_f cannot contain `:'.
+ test -f "$ac_f" ||
+ case $ac_f in
+ [\\/$]*) false;;
+ *) test -f "$srcdir/$ac_f" && ac_f="$srcdir/$ac_f";;
+ esac ||
+ as_fn_error 1 "cannot find input file: \`$ac_f'" "$LINENO" 5;;
+ esac
+ case $ac_f in *\'*) ac_f=`printf "%s\n" "$ac_f" | sed "s/'/'\\\\\\\\''/g"`;; esac
+ as_fn_append ac_file_inputs " '$ac_f'"
+ done
+
+ # Let's still pretend it is `configure' which instantiates (i.e., don't
+ # use $as_me), people would be surprised to read:
+ # /* config.h. Generated by config.status. */
+ configure_input='Generated from '`
+ printf "%s\n" "$*" | sed 's|^[^:]*/||;s|:[^:]*/|, |g'
+ `' by configure.'
+ if test x"$ac_file" != x-; then
+ configure_input="$ac_file. $configure_input"
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: creating $ac_file" >&5
+printf "%s\n" "$as_me: creating $ac_file" >&6;}
+ fi
+ # Neutralize special characters interpreted by sed in replacement strings.
+ case $configure_input in #(
+ *\&* | *\|* | *\\* )
+ ac_sed_conf_input=`printf "%s\n" "$configure_input" |
+ sed 's/[\\\\&|]/\\\\&/g'`;; #(
+ *) ac_sed_conf_input=$configure_input;;
+ esac
+
+ case $ac_tag in
+ *:-:* | *:-) cat >"$ac_tmp/stdin" \
+ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;;
+ esac
+ ;;
+ esac
+
+ ac_dir=`$as_dirname -- "$ac_file" ||
+$as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$ac_file" : 'X\(//\)[^/]' \| \
+ X"$ac_file" : 'X\(//\)$' \| \
+ X"$ac_file" : 'X\(/\)' \| . 2>/dev/null ||
+printf "%s\n" X"$ac_file" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+ as_dir="$ac_dir"; as_fn_mkdir_p
+ ac_builddir=.
+
+case "$ac_dir" in
+.) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;;
+*)
+ ac_dir_suffix=/`printf "%s\n" "$ac_dir" | sed 's|^\.[\\/]||'`
+ # A ".." for each directory in $ac_dir_suffix.
+ ac_top_builddir_sub=`printf "%s\n" "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'`
+ case $ac_top_builddir_sub in
+ "") ac_top_builddir_sub=. ac_top_build_prefix= ;;
+ *) ac_top_build_prefix=$ac_top_builddir_sub/ ;;
+ esac ;;
+esac
+ac_abs_top_builddir=$ac_pwd
+ac_abs_builddir=$ac_pwd$ac_dir_suffix
+# for backward compatibility:
+ac_top_builddir=$ac_top_build_prefix
+
+case $srcdir in
+ .) # We are building in place.
+ ac_srcdir=.
+ ac_top_srcdir=$ac_top_builddir_sub
+ ac_abs_top_srcdir=$ac_pwd ;;
+ [\\/]* | ?:[\\/]* ) # Absolute name.
+ ac_srcdir=$srcdir$ac_dir_suffix;
+ ac_top_srcdir=$srcdir
+ ac_abs_top_srcdir=$srcdir ;;
+ *) # Relative name.
+ ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix
+ ac_top_srcdir=$ac_top_build_prefix$srcdir
+ ac_abs_top_srcdir=$ac_pwd/$srcdir ;;
+esac
+ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix
+
+
+ case $ac_mode in
+ :F)
+ #
+ # CONFIG_FILE
+ #
+
+ case $INSTALL in
+ [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;;
+ *) ac_INSTALL=$ac_top_build_prefix$INSTALL ;;
+ esac
+ ac_MKDIR_P=$MKDIR_P
+ case $MKDIR_P in
+ [\\/$]* | ?:[\\/]* ) ;;
+ */*) ac_MKDIR_P=$ac_top_build_prefix$MKDIR_P ;;
+ esac
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
+# If the template does not know about datarootdir, expand it.
+# FIXME: This hack should be removed a few years after 2.60.
+ac_datarootdir_hack=; ac_datarootdir_seen=
+ac_sed_dataroot='
+/datarootdir/ {
+ p
+ q
+}
+/@datadir@/p
+/@docdir@/p
+/@infodir@/p
+/@localedir@/p
+/@mandir@/p'
+case `eval "sed -n \"\$ac_sed_dataroot\" $ac_file_inputs"` in
+*datarootdir*) ac_datarootdir_seen=yes;;
+*@datadir@*|*@docdir@*|*@infodir@*|*@localedir@*|*@mandir@*)
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&5
+printf "%s\n" "$as_me: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&2;}
+_ACEOF
+cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
+ ac_datarootdir_hack='
+ s&@datadir@&$datadir&g
+ s&@docdir@&$docdir&g
+ s&@infodir@&$infodir&g
+ s&@localedir@&$localedir&g
+ s&@mandir@&$mandir&g
+ s&\\\${datarootdir}&$datarootdir&g' ;;
+esac
+_ACEOF
+
+# Neutralize VPATH when `$srcdir' = `.'.
+# Shell code in configure.ac might set extrasub.
+# FIXME: do we really want to maintain this feature?
+cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
+ac_sed_extra="$ac_vpsub
+$extrasub
+_ACEOF
+cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
+:t
+/@[a-zA-Z_][a-zA-Z_0-9]*@/!b
+s|@configure_input@|$ac_sed_conf_input|;t t
+s&@top_builddir@&$ac_top_builddir_sub&;t t
+s&@top_build_prefix@&$ac_top_build_prefix&;t t
+s&@srcdir@&$ac_srcdir&;t t
+s&@abs_srcdir@&$ac_abs_srcdir&;t t
+s&@top_srcdir@&$ac_top_srcdir&;t t
+s&@abs_top_srcdir@&$ac_abs_top_srcdir&;t t
+s&@builddir@&$ac_builddir&;t t
+s&@abs_builddir@&$ac_abs_builddir&;t t
+s&@abs_top_builddir@&$ac_abs_top_builddir&;t t
+s&@INSTALL@&$ac_INSTALL&;t t
+s&@MKDIR_P@&$ac_MKDIR_P&;t t
+$ac_datarootdir_hack
+"
+eval sed \"\$ac_sed_extra\" "$ac_file_inputs" | $AWK -f "$ac_tmp/subs.awk" \
+ >$ac_tmp/out || as_fn_error $? "could not create $ac_file" "$LINENO" 5
+
+test -z "$ac_datarootdir_hack$ac_datarootdir_seen" &&
+ { ac_out=`sed -n '/\${datarootdir}/p' "$ac_tmp/out"`; test -n "$ac_out"; } &&
+ { ac_out=`sed -n '/^[ ]*datarootdir[ ]*:*=/p' \
+ "$ac_tmp/out"`; test -z "$ac_out"; } &&
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file contains a reference to the variable \`datarootdir'
+which seems to be undefined. Please make sure it is defined" >&5
+printf "%s\n" "$as_me: WARNING: $ac_file contains a reference to the variable \`datarootdir'
+which seems to be undefined. Please make sure it is defined" >&2;}
+
+ rm -f "$ac_tmp/stdin"
+ case $ac_file in
+ -) cat "$ac_tmp/out" && rm -f "$ac_tmp/out";;
+ *) rm -f "$ac_file" && mv "$ac_tmp/out" "$ac_file";;
+ esac \
+ || as_fn_error $? "could not create $ac_file" "$LINENO" 5
+ ;;
+ :H)
+ #
+ # CONFIG_HEADER
+ #
+ if test x"$ac_file" != x-; then
+ {
+ printf "%s\n" "/* $configure_input */" >&1 \
+ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs"
+ } >"$ac_tmp/config.h" \
+ || as_fn_error $? "could not create $ac_file" "$LINENO" 5
+ if diff "$ac_file" "$ac_tmp/config.h" >/dev/null 2>&1; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: $ac_file is unchanged" >&5
+printf "%s\n" "$as_me: $ac_file is unchanged" >&6;}
+ else
+ rm -f "$ac_file"
+ mv "$ac_tmp/config.h" "$ac_file" \
+ || as_fn_error $? "could not create $ac_file" "$LINENO" 5
+ fi
+ else
+ printf "%s\n" "/* $configure_input */" >&1 \
+ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" \
+ || as_fn_error $? "could not create -" "$LINENO" 5
+ fi
+# Compute "$ac_file"'s index in $config_headers.
+_am_arg="$ac_file"
+_am_stamp_count=1
+for _am_header in $config_headers :; do
+ case $_am_header in
+ $_am_arg | $_am_arg:* )
+ break ;;
+ * )
+ _am_stamp_count=`expr $_am_stamp_count + 1` ;;
+ esac
+done
+echo "timestamp for $_am_arg" >`$as_dirname -- "$_am_arg" ||
+$as_expr X"$_am_arg" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$_am_arg" : 'X\(//\)[^/]' \| \
+ X"$_am_arg" : 'X\(//\)$' \| \
+ X"$_am_arg" : 'X\(/\)' \| . 2>/dev/null ||
+printf "%s\n" X"$_am_arg" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`/stamp-h$_am_stamp_count
+ ;;
+
+ :C) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: executing $ac_file commands" >&5
+printf "%s\n" "$as_me: executing $ac_file commands" >&6;}
+ ;;
+ esac
+
+
+ case $ac_file$ac_mode in
+ "depfiles":C) test x"$AMDEP_TRUE" != x"" || {
+ # Older Autoconf quotes --file arguments for eval, but not when files
+ # are listed without --file. Let's play safe and only enable the eval
+ # if we detect the quoting.
+ # TODO: see whether this extra hack can be removed once we start
+ # requiring Autoconf 2.70 or later.
+ case $CONFIG_FILES in #(
+ *\'*) :
+ eval set x "$CONFIG_FILES" ;; #(
+ *) :
+ set x $CONFIG_FILES ;; #(
+ *) :
+ ;;
+esac
+ shift
+ # Used to flag and report bootstrapping failures.
+ am_rc=0
+ for am_mf
+ do
+ # Strip MF so we end up with the name of the file.
+ am_mf=`printf "%s\n" "$am_mf" | sed -e 's/:.*$//'`
+ # Check whether this is an Automake generated Makefile which includes
+ # dependency-tracking related rules and includes.
+ # Grep'ing the whole file directly is not great: AIX grep has a line
+ # limit of 2048, but all seds we know understand at least 4000.
+ sed -n 's,^am--depfiles:.*,X,p' "$am_mf" | grep X >/dev/null 2>&1 \
+ || continue
+ am_dirpart=`$as_dirname -- "$am_mf" ||
+$as_expr X"$am_mf" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$am_mf" : 'X\(//\)[^/]' \| \
+ X"$am_mf" : 'X\(//\)$' \| \
+ X"$am_mf" : 'X\(/\)' \| . 2>/dev/null ||
+printf "%s\n" X"$am_mf" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+ am_filepart=`$as_basename -- "$am_mf" ||
+$as_expr X/"$am_mf" : '.*/\([^/][^/]*\)/*$' \| \
+ X"$am_mf" : 'X\(//\)$' \| \
+ X"$am_mf" : 'X\(/\)' \| . 2>/dev/null ||
+printf "%s\n" X/"$am_mf" |
+ sed '/^.*\/\([^/][^/]*\)\/*$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+ { echo "$as_me:$LINENO: cd "$am_dirpart" \
+ && sed -e '/# am--include-marker/d' "$am_filepart" \
+ | $MAKE -f - am--depfiles" >&5
+ (cd "$am_dirpart" \
+ && sed -e '/# am--include-marker/d' "$am_filepart" \
+ | $MAKE -f - am--depfiles) >&5 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } || am_rc=$?
+ done
+ if test $am_rc -ne 0; then
+ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "Something went wrong bootstrapping makefile fragments
+ for automatic dependency tracking. If GNU make was not used, consider
+ re-running the configure script with MAKE=\"gmake\" (or whatever is
+ necessary). You can also try re-running configure with the
+ '--disable-dependency-tracking' option to at least be able to build
+ the package (albeit without support for automatic dependency tracking).
+See \`config.log' for more details" "$LINENO" 5; }
+ fi
+ { am_dirpart=; unset am_dirpart;}
+ { am_filepart=; unset am_filepart;}
+ { am_mf=; unset am_mf;}
+ { am_rc=; unset am_rc;}
+ rm -f conftest-deps.mk
+}
+ ;;
+ "libtool":C)
+
+ # See if we are running on zsh, and set the options that allow our
+ # commands through without removal of \ escapes.
+ if test -n "${ZSH_VERSION+set}"; then
+ setopt NO_GLOB_SUBST
+ fi
+
+ cfgfile=${ofile}T
+ trap "$RM \"$cfgfile\"; exit 1" 1 2 15
+ $RM "$cfgfile"
+
+ cat <<_LT_EOF >> "$cfgfile"
+#! $SHELL
+# Generated automatically by $as_me ($PACKAGE) $VERSION
+# NOTE: Changes made to this file will be lost: look at ltmain.sh.
+
+# Provide generalized library-building support services.
+# Written by Gordon Matzigkeit, 1996
+
+# Copyright (C) 2014 Free Software Foundation, Inc.
+# This is free software; see the source for copying conditions. There is NO
+# warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+
+# GNU Libtool is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program or library that is built
+# using GNU Libtool, you may include this file under the same
+# distribution terms that you use for the rest of that program.
+#
+# GNU Libtool is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+
+# The names of the tagged configurations supported by this script.
+available_tags=''
+
+# Configured defaults for sys_lib_dlsearch_path munging.
+: \${LT_SYS_LIBRARY_PATH="$configure_time_lt_sys_library_path"}
+
+# ### BEGIN LIBTOOL CONFIG
+
+# Which release of libtool.m4 was used?
+macro_version=$macro_version
+macro_revision=$macro_revision
+
+# Whether or not to build shared libraries.
+build_libtool_libs=$enable_shared
+
+# Whether or not to build static libraries.
+build_old_libs=$enable_static
+
+# What type of objects to build.
+pic_mode=$pic_mode
+
+# Whether or not to optimize for fast installation.
+fast_install=$enable_fast_install
+
+# Shared archive member basename, for filename-based shared library versioning on AIX.
+shared_archive_member_spec=$shared_archive_member_spec
+
+# Shell to use when invoking shell scripts.
+SHELL=$lt_SHELL
+
+# An echo program that protects backslashes.
+ECHO=$lt_ECHO
+
+# The PATH separator for the build system.
+PATH_SEPARATOR=$lt_PATH_SEPARATOR
+
+# The host system.
+host_alias=$host_alias
+host=$host
+host_os=$host_os
+
+# The build system.
+build_alias=$build_alias
+build=$build
+build_os=$build_os
+
+# A sed program that does not truncate output.
+SED=$lt_SED
+
+# Sed that helps us avoid accidentally triggering echo(1) options like -n.
+Xsed="\$SED -e 1s/^X//"
+
+# A grep program that handles long lines.
+GREP=$lt_GREP
+
+# An ERE matcher.
+EGREP=$lt_EGREP
+
+# A literal string matcher.
+FGREP=$lt_FGREP
+
+# A BSD- or MS-compatible name lister.
+NM=$lt_NM
+
+# Whether we need soft or hard links.
+LN_S=$lt_LN_S
+
+# What is the maximum length of a command?
+max_cmd_len=$max_cmd_len
+
+# Object file suffix (normally "o").
+objext=$ac_objext
+
+# Executable file suffix (normally "").
+exeext=$exeext
+
+# whether the shell understands "unset".
+lt_unset=$lt_unset
+
+# turn spaces into newlines.
+SP2NL=$lt_lt_SP2NL
+
+# turn newlines into spaces.
+NL2SP=$lt_lt_NL2SP
+
+# convert \$build file names to \$host format.
+to_host_file_cmd=$lt_cv_to_host_file_cmd
+
+# convert \$build files to toolchain format.
+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
+
+# A file(cmd) program that detects file types.
+FILECMD=$lt_FILECMD
+
+# An object symbol dumper.
+OBJDUMP=$lt_OBJDUMP
+
+# Method to check whether dependent libraries are shared objects.
+deplibs_check_method=$lt_deplibs_check_method
+
+# Command to use when deplibs_check_method = "file_magic".
+file_magic_cmd=$lt_file_magic_cmd
+
+# How to find potential files when deplibs_check_method = "file_magic".
+file_magic_glob=$lt_file_magic_glob
+
+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
+want_nocaseglob=$lt_want_nocaseglob
+
+# DLL creation program.
+DLLTOOL=$lt_DLLTOOL
+
+# Command to associate shared and link libraries.
+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
+
+# The archiver.
+AR=$lt_AR
+
+# Flags to create an archive (by configure).
+lt_ar_flags=$lt_ar_flags
+
+# Flags to create an archive.
+AR_FLAGS=\${ARFLAGS-"\$lt_ar_flags"}
+
+# How to feed a file listing to the archiver.
+archiver_list_spec=$lt_archiver_list_spec
+
+# A symbol stripping program.
+STRIP=$lt_STRIP
+
+# Commands used to install an old-style archive.
+RANLIB=$lt_RANLIB
+old_postinstall_cmds=$lt_old_postinstall_cmds
+old_postuninstall_cmds=$lt_old_postuninstall_cmds
+
+# Whether to use a lock for old archive extraction.
+lock_old_archive_extraction=$lock_old_archive_extraction
+
+# A C compiler.
+LTCC=$lt_CC
+
+# LTCC compiler flags.
+LTCFLAGS=$lt_CFLAGS
+
+# Take the output of nm and produce a listing of raw symbols and C names.
+global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe
+
+# Transform the output of nm in a proper C declaration.
+global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl
+
+# Transform the output of nm into a list of symbols to manually relocate.
+global_symbol_to_import=$lt_lt_cv_sys_global_symbol_to_import
+
+# Transform the output of nm in a C name address pair.
+global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+
+# Transform the output of nm in a C name address pair when lib prefix is needed.
+global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+
+# The name lister interface.
+nm_interface=$lt_lt_cv_nm_interface
+
+# Specify filename containing input files for \$NM.
+nm_file_list_spec=$lt_nm_file_list_spec
+
+# The root where to search for dependent libraries, and where our libraries should be installed.
+lt_sysroot=$lt_sysroot
+
+# Command to truncate a binary pipe.
+lt_truncate_bin=$lt_lt_cv_truncate_bin
+
+# The name of the directory that contains temporary libtool files.
+objdir=$objdir
+
+# Used to examine libraries when file_magic_cmd begins with "file".
+MAGIC_CMD=$MAGIC_CMD
+
+# Must we lock files when doing compilation?
+need_locks=$lt_need_locks
+
+# Manifest tool.
+MANIFEST_TOOL=$lt_MANIFEST_TOOL
+
+# Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+DSYMUTIL=$lt_DSYMUTIL
+
+# Tool to change global to local symbols on Mac OS X.
+NMEDIT=$lt_NMEDIT
+
+# Tool to manipulate fat objects and archives on Mac OS X.
+LIPO=$lt_LIPO
+
+# ldd/readelf like tool for Mach-O binaries on Mac OS X.
+OTOOL=$lt_OTOOL
+
+# ldd/readelf like tool for 64 bit Mach-O binaries on Mac OS X 10.4.
+OTOOL64=$lt_OTOOL64
+
+# Old archive suffix (normally "a").
+libext=$libext
+
+# Shared library suffix (normally ".so").
+shrext_cmds=$lt_shrext_cmds
+
+# The commands to extract the exported symbol list from a shared archive.
+extract_expsyms_cmds=$lt_extract_expsyms_cmds
+
+# Variables whose values should be saved in libtool wrapper scripts and
+# restored at link time.
+variables_saved_for_relink=$lt_variables_saved_for_relink
+
+# Do we need the "lib" prefix for modules?
+need_lib_prefix=$need_lib_prefix
+
+# Do we need a version for libraries?
+need_version=$need_version
+
+# Library versioning type.
+version_type=$version_type
+
+# Shared library runtime path variable.
+runpath_var=$runpath_var
+
+# Shared library path variable.
+shlibpath_var=$shlibpath_var
+
+# Is shlibpath searched before the hard-coded library search path?
+shlibpath_overrides_runpath=$shlibpath_overrides_runpath
+
+# Format of library name prefix.
+libname_spec=$lt_libname_spec
+
+# List of archive names. First name is the real one, the rest are links.
+# The last name is the one that the linker finds with -lNAME
+library_names_spec=$lt_library_names_spec
+
+# The coded name of the library, if different from the real name.
+soname_spec=$lt_soname_spec
+
+# Permission mode override for installation of shared libraries.
+install_override_mode=$lt_install_override_mode
+
+# Command to use after installation of a shared archive.
+postinstall_cmds=$lt_postinstall_cmds
+
+# Command to use after uninstallation of a shared archive.
+postuninstall_cmds=$lt_postuninstall_cmds
+
+# Commands used to finish a libtool library installation in a directory.
+finish_cmds=$lt_finish_cmds
+
+# As "finish_cmds", except a single script fragment to be evaled but
+# not shown.
+finish_eval=$lt_finish_eval
+
+# Whether we should hardcode library paths into libraries.
+hardcode_into_libs=$hardcode_into_libs
+
+# Compile-time system search path for libraries.
+sys_lib_search_path_spec=$lt_sys_lib_search_path_spec
+
+# Detected run-time system search path for libraries.
+sys_lib_dlsearch_path_spec=$lt_configure_time_dlsearch_path
+
+# Explicit LT_SYS_LIBRARY_PATH set during ./configure time.
+configure_time_lt_sys_library_path=$lt_configure_time_lt_sys_library_path
+
+# Whether dlopen is supported.
+dlopen_support=$enable_dlopen
+
+# Whether dlopen of programs is supported.
+dlopen_self=$enable_dlopen_self
+
+# Whether dlopen of statically linked programs is supported.
+dlopen_self_static=$enable_dlopen_self_static
+
+# Commands to strip libraries.
+old_striplib=$lt_old_striplib
+striplib=$lt_striplib
+
+
+# The linker used to build libraries.
+LD=$lt_LD
+
+# How to create reloadable object files.
+reload_flag=$lt_reload_flag
+reload_cmds=$lt_reload_cmds
+
+# Commands used to build an old-style archive.
+old_archive_cmds=$lt_old_archive_cmds
+
+# A language specific compiler.
+CC=$lt_compiler
+
+# Is the compiler the GNU compiler?
+with_gcc=$GCC
+
+# Compiler flag to turn off builtin functions.
+no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+
+# Additional compiler flags for building library objects.
+pic_flag=$lt_lt_prog_compiler_pic
+
+# How to pass a linker flag through the compiler.
+wl=$lt_lt_prog_compiler_wl
+
+# Compiler flag to prevent dynamic linking.
+link_static_flag=$lt_lt_prog_compiler_static
+
+# Does compiler simultaneously support -c and -o options?
+compiler_c_o=$lt_lt_cv_prog_compiler_c_o
+
+# Whether or not to add -lc for building shared libraries.
+build_libtool_need_lc=$archive_cmds_need_lc
+
+# Whether or not to disallow shared libs when runtime libs are static.
+allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes
+
+# Compiler flag to allow reflexive dlopens.
+export_dynamic_flag_spec=$lt_export_dynamic_flag_spec
+
+# Compiler flag to generate shared objects directly from archives.
+whole_archive_flag_spec=$lt_whole_archive_flag_spec
+
+# Whether the compiler copes with passing no objects directly.
+compiler_needs_object=$lt_compiler_needs_object
+
+# Create an old-style archive from a shared archive.
+old_archive_from_new_cmds=$lt_old_archive_from_new_cmds
+
+# Create a temporary old-style archive to link instead of a shared archive.
+old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds
+
+# Commands used to build a shared archive.
+archive_cmds=$lt_archive_cmds
+archive_expsym_cmds=$lt_archive_expsym_cmds
+
+# Commands used to build a loadable module if different from building
+# a shared archive.
+module_cmds=$lt_module_cmds
+module_expsym_cmds=$lt_module_expsym_cmds
+
+# Whether we are building with GNU ld or not.
+with_gnu_ld=$lt_with_gnu_ld
+
+# Flag that allows shared libraries with undefined symbols to be built.
+allow_undefined_flag=$lt_allow_undefined_flag
+
+# Flag that enforces no undefined symbols.
+no_undefined_flag=$lt_no_undefined_flag
+
+# Flag to hardcode \$libdir into a binary during linking.
+# This must work even if \$libdir does not exist
+hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec
+
+# Whether we need a single "-rpath" flag with a separated argument.
+hardcode_libdir_separator=$lt_hardcode_libdir_separator
+
+# Set to "yes" if using DIR/libNAME\$shared_ext during linking hardcodes
+# DIR into the resulting binary.
+hardcode_direct=$hardcode_direct
+
+# Set to "yes" if using DIR/libNAME\$shared_ext during linking hardcodes
+# DIR into the resulting binary and the resulting library dependency is
+# "absolute", i.e. impossible to change by setting \$shlibpath_var if the
+# library is relocated.
+hardcode_direct_absolute=$hardcode_direct_absolute
+
+# Set to "yes" if using the -LDIR flag during linking hardcodes DIR
+# into the resulting binary.
+hardcode_minus_L=$hardcode_minus_L
+
+# Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR
+# into the resulting binary.
+hardcode_shlibpath_var=$hardcode_shlibpath_var
+
+# Set to "yes" if building a shared library automatically hardcodes DIR
+# into the library and all subsequent libraries and executables linked
+# against it.
+hardcode_automatic=$hardcode_automatic
+
+# Set to yes if linker adds runtime paths of dependent libraries
+# to runtime path list.
+inherit_rpath=$inherit_rpath
+
+# Whether libtool must link a program against all its dependency libraries.
+link_all_deplibs=$link_all_deplibs
+
+# Set to "yes" if exported symbols are required.
+always_export_symbols=$always_export_symbols
+
+# The commands to list exported symbols.
+export_symbols_cmds=$lt_export_symbols_cmds
+
+# Symbols that should not be listed in the preloaded symbols.
+exclude_expsyms=$lt_exclude_expsyms
+
+# Symbols that must always be exported.
+include_expsyms=$lt_include_expsyms
+
+# Commands necessary for linking programs (against libraries) with templates.
+prelink_cmds=$lt_prelink_cmds
+
+# Commands necessary for finishing linking programs.
+postlink_cmds=$lt_postlink_cmds
+
+# Specify filename containing input files.
+file_list_spec=$lt_file_list_spec
+
+# How to hardcode a shared library path into an executable.
+hardcode_action=$hardcode_action
+
+# ### END LIBTOOL CONFIG
+
+_LT_EOF
+
+ cat <<'_LT_EOF' >> "$cfgfile"
+
+# ### BEGIN FUNCTIONS SHARED WITH CONFIGURE
+
+# func_munge_path_list VARIABLE PATH
+# -----------------------------------
+# VARIABLE is the name of a variable containing a _space_-separated list of
+# directories to be munged by the contents of PATH, which is a string
+# having the format:
+# "DIR[:DIR]:"
+# string "DIR[ DIR]" will be prepended to VARIABLE
+# ":DIR[:DIR]"
+# string "DIR[ DIR]" will be appended to VARIABLE
+# "DIRP[:DIRP]::[DIRA:]DIRA"
+# string "DIRP[ DIRP]" will be prepended to VARIABLE and string
+# "DIRA[ DIRA]" will be appended to VARIABLE
+# "DIR[:DIR]"
+# VARIABLE will be replaced by "DIR[ DIR]"
+func_munge_path_list ()
+{
+ case x$2 in
+ x)
+ ;;
+ *:)
+ eval $1=\"`$ECHO $2 | $SED 's/:/ /g'` \$$1\"
+ ;;
+ x:*)
+ eval $1=\"\$$1 `$ECHO $2 | $SED 's/:/ /g'`\"
+ ;;
+ *::*)
+ eval $1=\"\$$1\ `$ECHO $2 | $SED -e 's/.*:://' -e 's/:/ /g'`\"
+ eval $1=\"`$ECHO $2 | $SED -e 's/::.*//' -e 's/:/ /g'`\ \$$1\"
+ ;;
+ *)
+ eval $1=\"`$ECHO $2 | $SED 's/:/ /g'`\"
+ ;;
+ esac
+}
+
+
+# Calculate cc_basename. Skip known compiler wrappers and cross-prefix.
+func_cc_basename ()
+{
+ for cc_temp in $*""; do
+ case $cc_temp in
+ compile | *[\\/]compile | ccache | *[\\/]ccache ) ;;
+ distcc | *[\\/]distcc | purify | *[\\/]purify ) ;;
+ \-*) ;;
+ *) break;;
+ esac
+ done
+ func_cc_basename_result=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"`
+}
+
+
+# ### END FUNCTIONS SHARED WITH CONFIGURE
+
+_LT_EOF
+
+ case $host_os in
+ aix3*)
+ cat <<\_LT_EOF >> "$cfgfile"
+# AIX sometimes has problems with the GCC collect2 program. For some
+# reason, if we set the COLLECT_NAMES environment variable, the problems
+# vanish in a puff of smoke.
+if test set != "${COLLECT_NAMES+set}"; then
+ COLLECT_NAMES=
+ export COLLECT_NAMES
+fi
+_LT_EOF
+ ;;
+ esac
+
+
+
+ltmain=$ac_aux_dir/ltmain.sh
+
+
+ # We use sed instead of cat because bash on DJGPP gets confused if
+ # it finds mixed CR/LF and LF-only lines. Since sed operates in
+ # text mode, it properly converts lines to CR/LF. This bash problem
+ # is reportedly fixed, but why not run on old versions too?
+ $SED '$q' "$ltmain" >> "$cfgfile" \
+ || (rm -f "$cfgfile"; exit 1)
+
+ mv -f "$cfgfile" "$ofile" ||
+ (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+ chmod +x "$ofile"
+
+ ;;
+ "doc/modules.rst":F) cp doc/modules.rst "${srcdir}"/doc/modules.rst 2>/dev/null
+ abs_srcdir=$(cd "${srcdir}" && pwd)
+ ln -s -f "${abs_srcdir}"/src/knot/modules "${srcdir}"/doc 2>/dev/null ;;
+
+ esac
+done # for ac_tag
+
+
+as_fn_exit 0
+_ACEOF
+ac_clean_files=$ac_clean_files_save
+
+test $ac_write_fail = 0 ||
+ as_fn_error $? "write failure creating $CONFIG_STATUS" "$LINENO" 5
+
+
+# configure is writing to config.log, and then calls config.status.
+# config.status does its own redirection, appending to config.log.
+# Unfortunately, on DOS this fails, as config.log is still kept open
+# by configure, so config.status won't be able to write to it; its
+# output is simply discarded. So we exec the FD to /dev/null,
+# effectively closing config.log, so it can be properly (re)opened and
+# appended to by config.status. When coming back to configure, we
+# need to make the FD available again.
+if test "$no_create" != yes; then
+ ac_cs_success=:
+ ac_config_status_args=
+ test "$silent" = yes &&
+ ac_config_status_args="$ac_config_status_args --quiet"
+ exec 5>/dev/null
+ $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false
+ exec 5>>config.log
+ # Use ||, not &&, to avoid exiting from the if with $? = 1, which
+ # would make configure fail if this is the last instruction.
+ $ac_cs_success || as_fn_exit 1
+fi
+if test -n "$ac_unrecognized_opts" && test "$enable_option_checking" != no; then
+ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: unrecognized options: $ac_unrecognized_opts" >&5
+printf "%s\n" "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2;}
+fi
+
+{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result:
+ Knot DNS $VERSION
+$result_msg_base
+" >&5
+printf "%s\n" "
+ Knot DNS $VERSION
+$result_msg_base
+" >&6; }
+
diff --git a/configure.ac b/configure.ac
new file mode 100644
index 0000000..9f095bb
--- /dev/null
+++ b/configure.ac
@@ -0,0 +1,827 @@
+AC_PREREQ([2.69])
+
+m4_define([knot_VERSION_MAJOR], 3)dnl
+m4_define([knot_VERSION_MINOR], 4)dnl
+m4_define([knot_VERSION_PATCH], 6)dnl Leave empty if the master branch!
+m4_include([m4/knot-version.m4])
+
+AC_INIT([knot], [knot_PKG_VERSION], [knot-dns@labs.nic.cz])
+AM_INIT_AUTOMAKE([foreign std-options subdir-objects no-dist-gzip dist-xz -Wall -Werror])
+AM_SILENT_RULES([yes])
+AC_CONFIG_SRCDIR([src/knot])
+AC_CONFIG_HEADERS([src/config.h])
+AC_CONFIG_MACRO_DIR([m4])
+AC_USE_SYSTEM_EXTENSIONS
+AC_CANONICAL_HOST
+
+# Update library versions
+# https://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html
+KNOT_LIB_VERSION([libknot], 15, 0, 0)
+KNOT_LIB_VERSION([libdnssec], 9, 0, 0)
+KNOT_LIB_VERSION([libzscanner], 4, 0, 0)
+
+AC_SUBST([KNOT_VERSION_MAJOR], knot_VERSION_MAJOR)
+AC_SUBST([KNOT_VERSION_MINOR], knot_VERSION_MINOR)
+AC_SUBST([KNOT_VERSION_PATCH], knot_VERSION_PATCH)
+
+AC_CONFIG_FILES([src/libknot/version.h
+ src/libdnssec/version.h
+ src/libzscanner/version.h])
+
+# Automatically update release date based on NEWS
+AC_PROG_SED
+release_date=$($SED -n 's/^Knot DNS .* (\(.*\))/\1/p;q;' ${srcdir}/NEWS)
+AC_SUBST([RELEASE_DATE], $release_date)
+
+# Set compiler compatibility flags
+AC_PROG_CC
+AC_PROG_CPP_WERROR
+
+# Set default CFLAGS
+CFLAGS="$CFLAGS -Wall -Wshadow -Werror=format-security -Werror=implicit -Werror=attributes -Wstrict-prototypes"
+
+AX_CHECK_COMPILE_FLAG("-fpredictive-commoning", [CFLAGS="$CFLAGS -fpredictive-commoning"], [], "-Werror")
+AX_CHECK_LINK_FLAG(["-Wl,--exclude-libs,ALL"], [ldflag_exclude_libs="-Wl,--exclude-libs,ALL"], [ldflag_exclude_libs=""], "")
+AC_SUBST([LDFLAG_EXCLUDE_LIBS], $ldflag_exclude_libs)
+
+# Get processor byte ordering
+AC_C_BIGENDIAN([endianity=big-endian], [endianity=little-endian])
+AS_IF([test "$endianity" = "little-endian"],[
+ AC_DEFINE([ENDIANITY_LITTLE], [1], [System is little-endian.])])
+
+# Check if an archiver is available
+m4_ifdef([AM_PROG_AR], [AM_PROG_AR])
+
+AC_PROG_INSTALL
+
+# Initialize libtool
+LT_INIT
+
+# Build DLLs on Cygwin/MSYS2
+AS_CASE([$host_os],
+ [cygwin* | msys*], [LT_NO_UNDEFINED="-no-undefined"]
+)
+AC_SUBST(LT_NO_UNDEFINED)
+
+# Use pkg-config
+PKG_PROG_PKG_CONFIG
+m4_ifdef([PKG_INSTALLDIR], [PKG_INSTALLDIR], [AC_SUBST([pkgconfigdir], ['${libdir}/pkgconfig'])])
+AC_CONFIG_FILES([src/knotd.pc
+ src/libknot.pc
+ src/libdnssec.pc
+ src/libzscanner.pc
+ ])
+
+# Default directories
+knot_prefix=$ac_default_prefix
+AS_IF([test "$prefix" != NONE], [knot_prefix=$prefix])
+
+run_dir="${localstatedir}/run/knot"
+AC_ARG_WITH([rundir],
+ AS_HELP_STRING([--with-rundir=path], [Path to run-time variable data (pid, sockets...). [default=LOCALSTATEDIR/run/knot]]),
+ [run_dir=$withval])
+AC_SUBST(run_dir)
+
+storage_dir="${localstatedir}/lib/knot"
+AC_ARG_WITH([storage],
+ AS_HELP_STRING([--with-storage=path], [Default storage directory (slave zones, persistent data). [default=LOCALSTATEDIR/lib/knot]]),
+ [storage_dir=$withval])
+AC_SUBST(storage_dir)
+
+config_dir="${sysconfdir}/knot"
+AC_ARG_WITH([configdir],
+ AS_HELP_STRING([--with-configdir=path], [Default directory for configuration. [default=SYSCONFDIR/knot]]),
+ [config_dir=$withval])
+AC_SUBST(config_dir)
+
+module_dir=
+module_instdir="${libdir}/knot/modules-${KNOT_VERSION_MAJOR}.${KNOT_VERSION_MINOR}"
+AC_ARG_WITH([moduledir],
+ AS_HELP_STRING([--with-moduledir=path], [Path to auto-loaded dynamic modules. [default not set]]),
+ [module_dir=$withval module_instdir=$module_dir])
+AC_SUBST(module_instdir)
+AC_SUBST(module_dir)
+
+# Build Knot DNS daemon
+AC_ARG_ENABLE([daemon],
+ AS_HELP_STRING([--disable-daemon], [Don't build Knot DNS main daemon]), [], [enable_daemon=yes])
+AM_CONDITIONAL([HAVE_DAEMON], [test "$enable_daemon" = "yes"])
+
+# Build Knot DNS modules
+AC_ARG_ENABLE([modules],
+ AS_HELP_STRING([--disable-modules], [Don't build Knot DNS modules]), [], [enable_modules=yes])
+
+# Build Knot DNS utilities
+AC_ARG_ENABLE([utilities],
+ AS_HELP_STRING([--disable-utilities], [Don't build Knot DNS utilities]), [], [enable_utilities=yes])
+AM_CONDITIONAL([HAVE_UTILS], [test "$enable_utilities" = "yes"])
+AM_CONDITIONAL([HAVE_LIBUTILS], test "$enable_utilities" != "no" -o \
+ "$enable_daemon" != "no")
+
+# Build Knot DNS documentation
+AC_ARG_ENABLE([documentation],
+ AS_HELP_STRING([--disable-documentation], [Don't build Knot DNS documentation]), [], [enable_documentation=yes])
+AS_IF([test "$enable_documentation" != "no"], [
+ AC_PATH_PROG([SPHINXBUILD], [sphinx-build], [false])
+ AS_IF([test "$SPHINXBUILD" != "false"], [
+ enable_documentation="man html epub"
+ AC_PATH_PROG([PDFLATEX], [pdflatex], [false])
+ AS_IF([test "$PDFLATEX" != "false"], [
+ enable_documentation="$enable_documentation pdf"
+ ])
+ ])
+])
+AM_CONDITIONAL([HAVE_DOCS], [test "$enable_documentation" != "no"])
+AM_CONDITIONAL([HAVE_SPHINX], [test "$SPHINXBUILD" != "false"])
+AM_CONDITIONAL([HAVE_PDFLATEX], test "$PDFLATEX" != "false")
+
+######################
+# Generic dependencies
+######################
+
+AC_ARG_ENABLE([fastparser],
+ AS_HELP_STRING([--disable-fastparser], [Disable use of fast zone parser]),
+ [], [AS_IF([test -d ".git"], [enable_fastparser=no], [enable_fastparser=yes])]
+)
+AM_CONDITIONAL([FAST_PARSER], [test "$enable_fastparser" = "yes"])
+
+# GnuTLS crypto backend
+PKG_CHECK_MODULES([gnutls], [gnutls >= 3.6.10], [
+ save_CFLAGS=$CFLAGS
+ save_LIBS=$LIBS
+ CFLAGS="$CFLAGS $gnutls_CFLAGS"
+ LIBS="$LIBS $gnutls_LIBS"
+
+ AC_CHECK_FUNC([gnutls_pkcs11_copy_pubkey], [enable_pkcs11=yes], [enable_pkcs11=no])
+ AS_IF([test "$enable_pkcs11" = yes],
+ [AC_DEFINE([ENABLE_PKCS11], [1], [PKCS #11 support available])])
+
+ AC_CHECK_DECL([GNUTLS_SIGN_EDDSA_ED448],
+ [AC_DEFINE([HAVE_ED448], [1], [GnuTLS ED448 support available])
+ enable_ed448=yes],
+ [enable_ed448=no],
+ [#include <gnutls/gnutls.h>])
+
+ AC_CHECK_FUNC([gnutls_early_cipher_get],
+ [AC_DEFINE([HAVE_GNUTLS_QUIC], [1], [gnutls_early_cipher_get available])
+ gnutls_quic=yes], [gnutls_quic=no])
+
+ CFLAGS=$save_CFLAGS
+ LIBS=$save_LIBS
+])
+AM_CONDITIONAL([ENABLE_PKCS11], [test "$enable_pkcs11" = "yes"])
+
+AC_ARG_ENABLE([recvmmsg],
+ AS_HELP_STRING([--enable-recvmmsg=auto|yes|no], [enable recvmmsg() network API [default=auto]]),
+ [], [enable_recvmmsg=auto])
+
+AS_CASE([$enable_recvmmsg],
+ [auto|yes],[
+ AC_CHECK_FUNC([recvmmsg],
+ [AC_CHECK_FUNC([sendmmsg],[enable_recvmmsg=yes],[enable_recvmmsg=no])],
+ [enable_recvmmsg=no])],
+ [no],[],
+ [*], [AC_MSG_ERROR([Invalid value of --enable-recvmmsg.]
+ )])
+
+AS_IF([test "$enable_recvmmsg" = yes],[
+ AC_DEFINE([ENABLE_RECVMMSG], [1], [Use recvmmsg().])])
+
+# XDP support
+AC_ARG_ENABLE([xdp],
+ AS_HELP_STRING([--enable-xdp=auto|yes|no], [enable eXpress Data Path [default=auto]]),
+ [], [enable_xdp=auto])
+
+PKG_CHECK_MODULES([libbpf], [libbpf],
+ [save_LIBS=$LIBS
+ LIBS="$LIBS $libbpf_LIBS"
+ AC_CHECK_FUNC([bpf_object__find_map_by_offset], [libbpf1=no], [libbpf1=yes])
+ LIBS=$save_LIBS
+ have_libbpf=yes],
+ [have_libbpf=no]
+)
+
+AS_CASE([$enable_xdp],
+ [auto], [AS_IF([test "$have_libbpf" = "yes"], [enable_xdp=yes], [enable_xdp=no])],
+ [yes], [AS_IF([test "$have_libbpf" = "yes"], [enable_xdp=yes], [AC_MSG_ERROR([libbpf not available])])],
+ [no], [],
+ [*], [AC_MSG_ERROR([Invalid value of --enable-xdp.])]
+)
+AM_CONDITIONAL([ENABLE_XDP], [test "$enable_xdp" != "no"])
+
+AS_IF([test "$enable_xdp" = "yes"], [
+ PKG_CHECK_MODULES([libxdp], [libxdp], [enable_xdp=libxdp], [enable_xdp=yes])
+ AS_IF([test "$enable_xdp" = "libxdp"],
+ [AC_DEFINE([USE_LIBXDP], [1], [Use libxdp.])
+ libbpf_CFLAGS="$libbpf_CFLAGS $libxdp_CFLAGS"
+ libbpf_LIBS="$libbpf_LIBS $libxdp_LIBS"],
+ [AS_IF([test "$libbpf1" = "yes"], [AC_MSG_ERROR([libxdp not available])])]
+ )]
+)
+AC_SUBST([libbpf_CFLAGS])
+AC_SUBST([libbpf_LIBS])
+
+AS_IF([test "$enable_xdp" != "no"],[
+ AC_DEFINE([ENABLE_XDP], [1], [Use eXpress Data Path.])])
+
+# Reuseport support
+AS_CASE([$host_os],
+ [freebsd*], [reuseport_opt=SO_REUSEPORT_LB],
+ [*], [reuseport_opt=SO_REUSEPORT],
+)
+
+AC_ARG_ENABLE([reuseport],
+ AS_HELP_STRING([--enable-reuseport=auto|yes|no],
+ [enable SO_REUSEPORT(_LB) support [default=auto]]),
+ [], [enable_reuseport=auto]
+)
+
+AS_CASE([$enable_reuseport],
+ [auto], [
+ AS_CASE([$host_os],
+ [freebsd*|linux*], [AC_CHECK_DECL([$reuseport_opt],
+ [enable_reuseport=yes],
+ [enable_reuseport=no],
+ [#include <sys/socket.h>
+ ])],
+ [*], [enable_reuseport=no]
+ )],
+ [yes], [AC_CHECK_DECL([$reuseport_opt], [],
+ [AC_MSG_ERROR([SO_REUSEPORT(_LB) not supported.])],
+ [#include <sys/socket.h>
+ ])],
+ [no], [],
+ [*], [AC_MSG_ERROR([Invalid value of --enable-reuseport.])]
+)
+
+AS_IF([test "$enable_reuseport" = yes],[
+ AC_DEFINE([ENABLE_REUSEPORT], [1], [Use SO_REUSEPORT(_LB).])])
+
+#########################################
+# Dependencies needed for Knot DNS daemon
+#########################################
+
+# Systemd integration
+AC_ARG_ENABLE([systemd],
+ AS_HELP_STRING([--enable-systemd=auto|yes|no], [enable systemd integration [default=auto]]),
+ [enable_systemd="$enableval"], [enable_systemd=auto])
+
+AC_ARG_ENABLE([dbus],
+ AS_HELP_STRING([--enable-dbus=auto|systemd|libdbus|no], [enable D-bus support [default=auto]]),
+ [enable_dbus="$enableval"], [enable_dbus=auto])
+
+AS_IF([test "$enable_daemon" = "yes"],[
+
+AS_IF([test "$enable_systemd" != "no"],[
+ AS_CASE([$enable_systemd],
+ [auto],[PKG_CHECK_MODULES([systemd], [libsystemd], [enable_systemd=yes], [
+ PKG_CHECK_MODULES([systemd], [libsystemd-daemon libsystemd-journal], [enable_systemd=yes], [enable_systemd=no])])],
+ [yes],[PKG_CHECK_MODULES([systemd], [libsystemd], [], [
+ PKG_CHECK_MODULES([systemd], [libsystemd-daemon libsystemd-journal])])],
+ [*],[AC_MSG_ERROR([Invalid value of --enable-systemd.])])
+ ])
+
+AS_IF([test "$enable_systemd" = "yes"],[
+ AC_DEFINE([ENABLE_SYSTEMD], [1], [Use systemd integration.])])
+
+AS_IF([test "$enable_dbus" != "no"],[
+ AS_CASE([$enable_dbus],
+ [auto],[AS_IF([test "$enable_systemd" = "yes"],
+ [AC_DEFINE([ENABLE_DBUS_SYSTEMD], [1], [systemd D-Bus available])
+ enable_dbus=systemd],
+ [PKG_CHECK_MODULES([libdbus], [dbus-1],
+ [AC_DEFINE([ENABLE_DBUS_LIBDBUS], [1], [libdbus D-Bus available])
+ enable_dbus=libdbus],
+ [enable_dbus=no])])],
+ [systemd],[AS_IF([test "$enable_systemd" = "yes"],
+ [AC_DEFINE([ENABLE_DBUS_SYSTEMD], [1], [systemd D-Bus available])
+ enable_dbus=systemd],
+ [AC_MSG_ERROR([systemd >= 221 not available.])])],
+ [libdbus],[PKG_CHECK_MODULES([libdbus], [dbus-1],
+ [AC_DEFINE([ENABLE_DBUS_LIBDBUS], [1], [libdbus D-Bus available])
+ enable_dbus=libdbus],
+ [AC_MSG_ERROR([libdbus not available.])])],
+ [no],[enable_dbus=no],
+ [*],[AC_MSG_ERROR([Invalid value of --enable-dbus.])])
+ ])
+
+]) dnl enable_daemon
+
+# Socket polling method
+socket_polling=
+AC_ARG_WITH([socket-polling],
+ AS_HELP_STRING([--with-socket-polling=auto|poll|epoll|kqueue|libkqueue],
+ [Use specific socket polling method [default=auto]]),
+ [socket_polling=$withval], [socket_polling=auto]
+)
+
+AS_CASE([$socket_polling],
+ [auto], [AC_CHECK_FUNCS([kqueue],
+ [AC_DEFINE([HAVE_KQUEUE], [1], [kqueue available])
+ socket_polling=kqueue],
+ [AC_CHECK_FUNCS([epoll_create],
+ [AC_DEFINE([HAVE_EPOLL], [1], [epoll available])
+ socket_polling=epoll],
+ [socket_polling=poll])])],
+ [poll], [socket_polling=poll],
+ [epoll], [AC_CHECK_FUNCS([epoll_create],
+ [AC_DEFINE([HAVE_EPOLL], [1], [epoll available])
+ socket_polling=epoll],
+ [AC_MSG_ERROR([epoll not available.])])],
+ [kqueue], [AC_CHECK_FUNCS([kqueue],
+ [AC_DEFINE([HAVE_KQUEUE], [1], [kqueue available])
+ socket_polling=kqueue],
+ [AC_MSG_ERROR([kqueue not available.])])],
+ [libkqueue], [PKG_CHECK_MODULES([libkqueue], [libkqueue],
+ [AC_DEFINE([HAVE_KQUEUE], [1], [libkqueue available])
+ socket_polling=libkqueue],
+ [AC_MSG_ERROR([libkqueue not available.])])],
+ [*], [AC_MSG_ERROR([Invalid value of --with-socket-polling.])]
+)
+
+# Alternative memory allocator
+malloc_LIBS=
+AC_ARG_WITH([memory-allocator],
+ AS_HELP_STRING([--with-memory-allocator=auto|LIBRARY],
+ [Use specific memory allocator for the server (e.g. jemalloc) [default=auto]]),
+ AS_CASE([$withval],
+ [auto], [],
+ [*], [malloc_LIBS="-l$withval"]
+ )
+ with_memory_allocator=[$withval]
+)
+AS_IF([test "$with_memory_allocator" = ""], [with_memory_allocator="auto"])
+AC_SUBST([malloc_LIBS])
+
+AS_IF([test "$enable_daemon" = "yes"],[
+
+PKG_CHECK_MODULES([liburcu], [liburcu], [], [
+ AC_MSG_ERROR([liburcu not found])
+])
+
+])
+
+static_modules=""
+shared_modules=""
+static_modules_declars=""
+static_modules_init=""
+doc_modules=""
+
+KNOT_MODULE([authsignal], "yes")
+KNOT_MODULE([cookies], "yes")
+KNOT_MODULE([dnsproxy], "yes", "non-shareable")
+KNOT_MODULE([dnstap], "no")
+KNOT_MODULE([geoip], "yes")
+KNOT_MODULE([noudp], "yes")
+KNOT_MODULE([onlinesign], "yes", "non-shareable")
+KNOT_MODULE([probe], "yes")
+KNOT_MODULE([queryacl], "yes")
+KNOT_MODULE([rrl], "yes")
+KNOT_MODULE([stats], "yes")
+KNOT_MODULE([synthrecord], "yes")
+KNOT_MODULE([whoami], "yes")
+
+AC_SUBST([STATIC_MODULES_DECLARS], [$(printf "$static_modules_declars")])
+AM_SUBST_NOTMAKE([STATIC_MODULES_DECLARS])
+AC_SUBST([STATIC_MODULES_INIT], [$(printf "$static_modules_init")])
+AM_SUBST_NOTMAKE([STATIC_MODULES_INIT])
+AC_SUBST([DOC_MODULES], [$(printf "$doc_modules")])
+AM_SUBST_NOTMAKE([DOC_MODULES])
+
+# Check for Dnstap
+AC_ARG_ENABLE([dnstap],
+ AS_HELP_STRING([--enable-dnstap], [Enable dnstap support for kdig (requires fstrm, protobuf-c)]),
+ [], [enable_dnstap=no])
+
+AS_IF([test "$enable_dnstap" != "no" -o "$STATIC_MODULE_dnstap" != "no" -o "$SHARED_MODULE_dnstap" != "no"],[
+ AC_PATH_PROG([PROTOC_C], [protoc-c])
+ AS_IF([test -z "$PROTOC_C"],[
+ AC_MSG_ERROR([The protoc-c program was not found. Please install protobuf-c!])
+ ])
+ PKG_CHECK_MODULES([libfstrm], [libfstrm])
+ PKG_CHECK_MODULES([libprotobuf_c], [libprotobuf-c >= 1.0.0])
+ AC_SUBST([DNSTAP_CFLAGS], ["$libfstrm_CFLAGS $libprotobuf_c_CFLAGS"])
+ AC_SUBST([DNSTAP_LIBS], ["$libfstrm_LIBS $libprotobuf_c_LIBS"])
+])
+
+AS_IF([test "$enable_dnstap" != "no"],[
+ AC_DEFINE([USE_DNSTAP], [1], [Define to 1 to enable dnstap support for kdig])
+])
+AM_CONDITIONAL([HAVE_DNSTAP], test "$enable_dnstap" != "no")
+
+AM_CONDITIONAL([HAVE_LIBDNSTAP], test "$enable_dnstap" != "no" -o \
+ "$STATIC_MODULE_dnstap" != "no" -o \
+ "$SHARED_MODULE_dnstap" != "no")
+# MaxMind DB for the GeoIP module
+AC_ARG_ENABLE([maxminddb],
+ AS_HELP_STRING([--enable-maxminddb=auto|yes|no], [enable MaxMind DB [default=auto]]),
+ [enable_maxminddb="$enableval"], [enable_maxminddb=auto])
+
+AS_IF([test "$enable_daemon" = "no"],[enable_maxminddb=no])
+AS_CASE([$enable_maxminddb],
+ [no],[],
+ [auto],[PKG_CHECK_MODULES([libmaxminddb], [libmaxminddb], [enable_maxminddb=yes], [enable_maxminddb=no])],
+ [yes], [PKG_CHECK_MODULES([libmaxminddb], [libmaxminddb])],
+ [*],[
+ save_CFLAGS="$CFLAGS"
+ save_LIBS="$LIBS"
+ AS_IF([test "$enable_maxminddb" != ""],[
+ LIBS="$LIBS -L$enable_maxminddb"
+ CFLAGS="$CFLAGS -I$enable_maxminddb/include"
+ ])
+ AC_SEARCH_LIBS([MMDB_open], [maxminddb], [
+ AS_IF([test "$enable_maxminddb" != ""], [
+ libmaxminddb_CFLAGS="-I$enable_maxminddb/include"
+ libmaxminddb_LIBS="-L$enable_maxminddb -lmaxminddb"
+ ],[
+ libmaxminddb_CFLAGS=""
+ libmaxminddb_LIBS="$ac_cv_search_MMDB_open"
+ ])
+ ],[AC_MSG_ERROR("not found in `$enable_maxminddb'")])
+ CFLAGS="$save_CFLAGS"
+ LIBS="$save_LIBS"
+ AC_SUBST([libmaxminddb_CFLAGS])
+ AC_SUBST([libmaxminddb_LIBS])
+ enable_maxminddb=yes
+ ])
+
+AS_IF([test "$enable_maxminddb" = yes], [AC_DEFINE([HAVE_MAXMINDDB], [1], [Define to 1 to enable MaxMind DB.])])
+AM_CONDITIONAL([HAVE_MAXMINDDB], [test "$enable_maxminddb" = yes])
+
+AC_ARG_WITH([lmdb],
+ [AS_HELP_STRING([--with-lmdb=DIR], [explicit location where to find LMDB])]
+)
+PKG_CHECK_MODULES([lmdb], [lmdb >= 0.9.15], [], [
+ save_CPPFLAGS=$CPPFLAGS
+ save_LIBS=$LIBS
+
+ have_lmdb=no
+
+ for try_lmdb in "$with_lmdb" "" "/usr/local" "/usr/pkg"; do
+ AS_IF([test -d "$try_lmdb"], [
+ lmdb_CFLAGS="-I$try_lmdb/include"
+ lmdb_LIBS="-L$try_lmdb/lib"
+ ],[
+ lmdb_CFLAGS=""
+ lmdb_LIBS=""
+ ])
+
+ CPPFLAGS="$save_CPPFLAGS $lmdb_CFLAGS"
+ LIBS="$save_LIBS $lmdb_LIBS"
+
+ AC_SEARCH_LIBS([mdb_txn_id], [lmdb], [
+ have_lmdb=yes
+ lmdb_LIBS="$lmdb_LIBS -llmdb"
+ AC_SUBST([lmdb_CFLAGS])
+ AC_SUBST([lmdb_LIBS])
+ break
+ ])
+
+ # do not cache result of AC_SEARCH_LIBS test
+ unset ac_cv_search_mdb_txn_id
+ done
+
+ CPPFLAGS="$save_CPPFLAGS"
+ LIBS="$save_LIBS"
+
+ AS_IF([test "$have_lmdb" = "no"], [
+ AC_MSG_ERROR([lmdb library not found])
+ ])
+])
+
+# LMDB mapping sizes
+conf_mapsize_default=500
+AC_ARG_WITH([conf_mapsize],
+ AS_HELP_STRING([--with-conf-mapsize=NUM], [Configuration DB mapsize in MiB [default=$conf_mapsize_default]]),
+ [conf_mapsize=$withval],[conf_mapsize=$conf_mapsize_default])
+
+AS_CASE([$conf_mapsize],
+ [yes],[conf_mapsize=$conf_mapsize_default],
+ [no], [AC_MSG_ERROR([conf_mapsize must be a number])],
+ [*], [AS_IF([test $conf_mapsize != $(( $conf_mapsize + 0 ))],
+ [AC_MSG_ERROR(conf_mapsize must be an integer number)])])
+AC_DEFINE_UNQUOTED([CONF_MAPSIZE], [$conf_mapsize], [Configuration DB mapsize.])
+AC_SUBST(conf_mapsize)
+
+# libedit
+AS_IF([test "$enable_daemon" = "yes" -o "$enable_utilities" = "yes"], [
+ PKG_CHECK_MODULES([libedit], [libedit], [with_libedit=yes], [
+ with_libedit=no
+ AC_CHECK_HEADER([histedit.h], [
+ # workaround for OpenBSD
+ AS_CASE([$host_os],
+ [openbsd*], [libedit_deps=-lcurses],
+ [libedit_deps=]
+ )
+ AC_CHECK_LIB([edit], [el_init], [
+ with_libedit=yes
+ libedit_CFLAGS=
+ libedit_LIBS="-ledit $libedit_deps"
+ ], [], [$libedit_deps]
+ )
+ ])
+ ])
+ AS_IF([test "$with_libedit" != "yes"], [
+ AC_MSG_ERROR([libedit not found])
+ ])
+], [
+ with_libedit=no
+ libedit_CFLAGS=
+ libedit_LIBS=
+])
+
+# QUIC support
+AC_ARG_ENABLE([quic],
+ AS_HELP_STRING([--enable-quic=auto|yes|no|embedded], [Support DoQ (needs libngtcp2 >= 0.17.0, gnutls >= 3.7.3) [default=auto]]),
+ [], [enable_quic=auto])
+
+AS_CASE([$enable_quic],
+ [auto], [PKG_CHECK_MODULES([libngtcp2], [libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls], [enable_quic=yes], [enable_quic=no])],
+ [yes], [PKG_CHECK_MODULES([libngtcp2], [libngtcp2 >= 0.17.0 libngtcp2_crypto_gnutls], [enable_quic=yes],
+ AS_IF([test "$gnutls_quic" = "yes"],
+ [enable_quic=embedded
+ embedded_libngtcp2_CFLAGS="-I\$(top_srcdir)/src/contrib/libngtcp2 -I\$(top_srcdir)/src/contrib/libngtcp2/ngtcp2/lib"
+ embedded_libngtcp2_LIBS=$libelf_LIBS
+ libngtcp2_CFLAGS="-I\$(top_srcdir)/src/contrib/libngtcp2"],
+ [enable_quic=no
+ AC_MSG_WARN([gnutls >= 3.7.3 is required for QUIC])]))],
+ [embedded], [enable_quic=embedded
+ embedded_libngtcp2_CFLAGS="-I\$(top_srcdir)/src/contrib/libngtcp2 -I\$(top_srcdir)/src/contrib/libngtcp2/ngtcp2/lib"
+ embedded_libngtcp2_LIBS=$libelf_LIBS
+ libngtcp2_CFLAGS="-I\$(top_srcdir)/src/contrib/libngtcp2"],
+ [no], [],
+ [*], [AC_MSG_ERROR([Invalid value of --enable-quic.])]
+)
+AM_CONDITIONAL([EMBEDDED_LIBNGTCP2], [test "$enable_quic" = "embedded"])
+AM_CONDITIONAL([ENABLE_QUIC], [test "$enable_quic" != "no"])
+AC_SUBST([embedded_libngtcp2_CFLAGS])
+AC_SUBST([embedded_libngtcp2_LIBS])
+AC_SUBST([libngtcp2_CFLAGS])
+AC_SUBST([libngtcp2_LIBS])
+
+AS_IF([test "$enable_quic" != "no"], [
+ AC_DEFINE([ENABLE_QUIC], [1], [Define to 1 to enable DoQ support using libngtcp2 and GnuTLS])])
+
+############################################
+# Dependencies needed for Knot DNS utilities
+############################################
+
+dnl Check for libidn2.
+AC_ARG_WITH(libidn,
+ AS_HELP_STRING([--with-libidn=[DIR]], [Support IDN (needs GNU libidn2)]),
+ with_libidn=$withval,
+ with_libidn=yes
+)
+
+dnl Check for libnghttp2.
+AC_ARG_WITH(libnghttp2,
+ AS_HELP_STRING([--with-libnghttp2=[DIR]], [Support DoH (needs libnghttp2)]),
+ with_libnghttp2=$withval,
+ with_libnghttp2=yes
+)
+
+AS_IF([test "$enable_utilities" = "yes"], [
+ AS_IF([test "$with_libidn" != "no"], [
+ PKG_CHECK_MODULES([libidn2], [libidn2 >= 2.0.0], [
+ with_libidn=libidn2
+ AC_DEFINE([LIBIDN], [1], [Define to 1 to enable IDN support])
+ ], [
+ with_libidn=no
+ AC_MSG_WARN([libidn2 not found])
+ ])
+ ])
+
+ AS_IF([test "$with_libnghttp2" != "no"], [
+ PKG_CHECK_MODULES([libnghttp2], [libnghttp2], [
+ with_libnghttp2=libnghttp2
+ AC_DEFINE([LIBNGHTTP2], [1], [Define to 1 to enable DoH support])
+ ], [
+ with_libnghttp2=no
+ AC_MSG_WARN([libnghttp2 not found])
+ ])
+ ])
+
+ AS_IF([test "$enable_xdp" != "no"], [
+ PKG_CHECK_MODULES([libmnl], [libmnl], [], [
+ AC_MSG_ERROR([libmnl not found])
+ ])
+ ])
+]) # Knot DNS utilities dependencies
+
+AC_ARG_ENABLE([cap-ng],
+ AS_HELP_STRING([--enable-cap-ng=auto|no], [enable POSIX capabilities [default=auto]]),
+ [enable_cap_ng="$enableval"], [enable_cap_ng=auto])
+
+AS_IF([test "$enable_daemon" = "yes"], [
+
+AS_IF([test "$enable_cap_ng" != "no"],[
+ PKG_CHECK_MODULES([cap_ng], [cap-ng], [enable_cap_ng=yes], [
+ enable_cap_ng=no
+ AC_CHECK_HEADER([cap-ng.h], [
+ save_LIBS="$LIBS"
+ AC_SEARCH_LIBS([capng_apply], [cap-ng], [
+ AS_IF([test "$ac_cv_search_capng_apply" != "none required"],
+ [cap_ng_LIBS="$ac_cv_search_capng_apply"], [cap_ng_LIBS=])
+ AC_SUBST([cap_ng_LIBS])
+ enable_cap_ng=yes
+ ])
+ LIBS="$save_LIBS"
+ ])
+ ])
+], [
+ enable_cap_ng=no
+ cap_ng_LIBS=
+])])
+
+AS_IF([test "$enable_cap_ng" = yes],
+ [AC_DEFINE([ENABLE_CAP_NG], [1], [POSIX capabilities available])]
+)
+
+save_LIBS="$LIBS"
+AC_SEARCH_LIBS([pthread_create], [pthread], [
+ AS_IF([test "$ac_cv_search_pthread_create" != "none required"],
+ [pthread_LIBS="$ac_cv_search_pthread_create"], [pthread_LIBS=])
+ AC_SUBST([pthread_LIBS])
+],[
+ AC_MSG_ERROR([pthreads not found])
+])
+LIBS="$save_LIBS"
+
+save_LIBS="$LIBS"
+AC_SEARCH_LIBS([dlopen], [dl], [
+ AS_IF([test "$ac_cv_search_dlopen" != "none required"],
+ [dlopen_LIBS="$ac_cv_search_dlopen"], [dlopen_LIBS=])
+ AC_SUBST([dlopen_LIBS])
+],[
+ AC_MSG_ERROR([dlopen not found])
+])
+LIBS="$save_LIBS"
+
+save_LIBS="$LIBS"
+AC_SEARCH_LIBS([pow], [m], [
+ AS_IF([test "$ac_cv_search_pow" != "none required"],
+ [math_LIBS="$ac_cv_search_pow"], [math_LIBS=])
+ AC_SUBST([math_LIBS])
+],[
+ AC_MSG_ERROR([math not found])
+])
+LIBS="$save_LIBS"
+
+save_LIBS="$LIBS"
+AC_SEARCH_LIBS([pthread_setaffinity_np], [pthread], [
+ AC_DEFINE([HAVE_PTHREAD_SETAFFINITY_NP], [1],
+ [Define to 1 if you have the pthread_setaffinity_np function.])
+])
+LIBS="$save_LIBS"
+
+# Checks for header files.
+AC_HEADER_RESOLV
+AC_CHECK_HEADERS_ONCE([pthread_np.h sys/uio.h bsd/string.h])
+
+# Checks for optional library functions.
+AC_CHECK_FUNCS([accept4 fgetln getline initgroups malloc_trim \
+ setgroups strlcat strlcpy sysctlbyname])
+
+# Check for robust memory cleanup implementations.
+AC_CHECK_FUNC([explicit_bzero], [
+ AC_DEFINE([HAVE_EXPLICIT_BZERO], [1], [explicit_bzero available])
+ explicit_bzero=yes], [explicit_bzero=no]
+)
+AC_CHECK_FUNC([explicit_memset], [
+ AC_DEFINE([HAVE_EXPLICIT_MEMSET], [1], [explicit_memset available])
+ explicit_memset=yes], [explicit_memset=no]
+)
+AM_CONDITIONAL([USE_GNUTLS_MEMSET], [test "$explicit_bzero" = "no" -a "$explicit_memset" = "no"])
+
+# Check for mandatory library functions.
+AC_CHECK_FUNC([vasprintf], [], [
+ AC_MSG_ERROR([vasprintf support in the libc is required])])
+
+# Check for cpu_set_t/cpuset_t compatibility
+AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include <sched.h>]], [[cpu_set_t set; CPU_ZERO(&set);]])],
+[AC_DEFINE(HAVE_CPUSET_LINUX, 1, [Define if Linux-like cpu_set_t exists.])])
+AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include <pthread_np.h>]], [[cpuset_t set; CPU_ZERO(&set);]])],
+[AC_DEFINE(HAVE_CPUSET_BSD, 1, [Define if FreeBSD-like cpuset_t exists.])])
+AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include <sched.h>]], [[cpuset_t* set = cpuset_create(); cpuset_destroy(set);]])],
+[AC_DEFINE(HAVE_CPUSET_NETBSD, 1, [Define if cpuset_t and cpuset(3) exists.])])
+
+# Check for a C11 or GCC-style '__atomic' compiler builtin atomic functions.
+atomic_type="none"
+AC_LINK_IFELSE(
+ [AC_LANG_PROGRAM([[#if (__STDC_VERSION__ < 201112L) || defined(__STDC_NO_ATOMICS__)
+ #error "No C11 atomics"
+ #endif
+ #include <stdatomic.h>]],
+ [[atomic_uint_fast64_t val = 0;]]
+ [[atomic_fetch_add_explicit(&val, 1, memory_order_relaxed);]])],
+ [AC_DEFINE(HAVE_C11_ATOMIC, 1, [Define to 1 if you have C11 'atomic' functions.])
+ atomic_type="C11"],
+ AC_LINK_IFELSE(
+ [AC_LANG_PROGRAM([[#include <stdint.h>]],
+ [[uint64_t val = 0; __atomic_add_fetch(&val, 1, __ATOMIC_RELAXED);]])],
+ [AC_DEFINE(HAVE_GCC_ATOMIC, 1, [Define to 1 if you have GCC-style '__atomic' functions.])
+ atomic_type="__atomic"]
+ )
+)
+
+# Prepare CFLAG_VISIBILITY to be used where needed
+gl_VISIBILITY()
+
+# Add code coverage macro
+AX_CODE_COVERAGE
+
+AX_SANITIZER
+AS_IF([test -n "$sanitizer_CFLAGS"], [CFLAGS="$CFLAGS $sanitizer_CFLAGS"])
+AM_CONDITIONAL([FUZZER], [test "$with_fuzzer" != "no"])
+AM_CONDITIONAL([OSS_FUZZ], [test "$with_oss_fuzz" != "no"])
+
+# Strip -fdebug-prefix-map= parameters from flags for better reproducibility of binaries.
+filtered_cflags=$(echo -n "$CFLAGS" | \
+ sed 's/[[^[:alnum:]]]-f[[^[:space:]]]*-prefix-map=[[^[:space:]]]*//g')
+filtered_cppflags=$(echo -n "$CPPFLAGS" | \
+ sed 's/[[^[:alnum:]]]-f[[^[:space:]]]*-prefix-map=[[^[:space:]]]*//g')
+filtered_config_params=$(echo -n "$ac_configure_args" | \
+ sed 's/[[^[:alnum:]]]-f[[^[:space:]]]*-prefix-map=[[^[:space:]]]*//g')
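+# Example (illustrative, not from the original file): a flag such as
+# "-fdebug-prefix-map=/build/knot-dns=." would be dropped from the filtered
+# strings used in the summary below, while CFLAGS itself stays untouched.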
+
+result_msg_base="
+ Target: $host_os $host_cpu $endianity
+ Compiler: ${CC}
+ CFLAGS: ${filtered_cflags} ${filtered_cppflags}
+ LIBS: ${LIBS} ${LDFLAGS}
+ LibURCU: ${liburcu_LIBS} ${liburcu_CFLAGS}
+ GnuTLS: ${gnutls_LIBS} ${gnutls_CFLAGS}
+ Libedit: ${libedit_LIBS} ${libedit_CFLAGS}
+ LMDB: ${lmdb_LIBS} ${lmdb_CFLAGS}
+ Config: ${conf_mapsize} MiB default mapsize
+
+ Prefix: ${knot_prefix}
+ Run dir: ${run_dir}
+ Storage dir: ${storage_dir}
+ Config dir: ${config_dir}
+ Module dir: ${module_dir}
+
+ Static modules: ${static_modules}
+ Shared modules: ${shared_modules}
+
+ Knot DNS libraries: yes
+ Knot DNS daemon: ${enable_daemon}
+ Knot DNS utilities: ${enable_utilities}
+ Knot DNS documentation: ${enable_documentation}
+
+ Use recvmmsg: ${enable_recvmmsg}
+ Use SO_REUSEPORT(_LB): ${enable_reuseport}
+ XDP support: ${enable_xdp}
+ DoQ support: ${enable_quic}
+ Socket polling: ${socket_polling}
+ Atomic support: ${atomic_type}
+ Memory allocator: ${with_memory_allocator}
+ Fast zone parser: ${enable_fastparser}
+ Utilities with IDN: ${with_libidn}
+ Utilities with DoH: ${with_libnghttp2}
+ Utilities with Dnstap: ${enable_dnstap}
+ MaxMind DB support: ${enable_maxminddb}
+ Systemd integration: ${enable_systemd}
+ D-Bus support: ${enable_dbus}
+ POSIX capabilities: ${enable_cap_ng}
+ PKCS #11 support: ${enable_pkcs11}
+ Ed448 support: ${enable_ed448}
+
+ Code coverage: ${enable_code_coverage}
+ Sanitizer: ${with_sanitizer}
+ LibFuzzer: ${with_fuzzer}
+ OSS-Fuzz: ${with_oss_fuzz}"
+
+result_msg_esc=$(echo -n " Configure:$filtered_config_params\n$result_msg_base" | sed '$!s/$/\\n/' | tr -d '\n')
+
+AC_DEFINE_UNQUOTED([CONFIGURE_SUMMARY],["$result_msg_esc"],[Configure summary])
+
+AC_CONFIG_FILES([Makefile
+ Doxyfile
+ doc/Makefile
+ tests/Makefile
+ tests-fuzz/Makefile
+ samples/Makefile
+ distro/Makefile
+ python/Makefile
+ python/knot_exporter/Makefile
+ python/knot_exporter/pyproject.toml
+ python/knot_exporter/setup.py
+ python/libknot/Makefile
+ python/libknot/pyproject.toml
+ python/libknot/setup.py
+ python/libknot/libknot/__init__.py
+ src/Makefile
+ src/libknot/xdp/Makefile
+ src/knot/modules/static_modules.h
+ ])
+
+AC_CONFIG_FILES([doc/modules.rst],
+ [cp doc/modules.rst "${srcdir}"/doc/modules.rst 2>/dev/null
+ abs_srcdir=$(cd "${srcdir}" && pwd)
+ ln -s -f "${abs_srcdir}"/src/knot/modules "${srcdir}"/doc 2>/dev/null])
+
+AC_OUTPUT
+AC_MSG_RESULT([
+ Knot DNS $VERSION
+$result_msg_base
+])
diff --git a/depcomp b/depcomp
new file mode 100755
index 0000000..715e343
--- /dev/null
+++ b/depcomp
@@ -0,0 +1,791 @@
+#! /bin/sh
+# depcomp - compile a program generating dependencies as side-effects
+
+scriptversion=2018-03-07.03; # UTC
+
+# Copyright (C) 1999-2021 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
+
+# Originally written by Alexandre Oliva <oliva@dcc.unicamp.br>.
+
+case $1 in
+ '')
+ echo "$0: No command. Try '$0 --help' for more information." 1>&2
+ exit 1;
+ ;;
+ -h | --h*)
+ cat <<\EOF
+Usage: depcomp [--help] [--version] PROGRAM [ARGS]
+
+Run PROGRAM ARGS to compile a file, generating dependencies
+as side-effects.
+
+Environment variables:
+ depmode Dependency tracking mode.
+ source Source file read by 'PROGRAM ARGS'.
+ object Object file output by 'PROGRAM ARGS'.
+ DEPDIR directory where to store dependencies.
+ depfile Dependency file to output.
+ tmpdepfile Temporary file to use when outputting dependencies.
+ libtool Whether libtool is used (yes/no).
+
+Report bugs to <bug-automake@gnu.org>.
+EOF
+ exit $?
+ ;;
+ -v | --v*)
+ echo "depcomp $scriptversion"
+ exit $?
+ ;;
+esac
+
+# Get the directory component of the given path, and save it in the
+# global variable '$dir'. Note that this directory component will
+# be either empty or ending with a '/' character. This is deliberate.
+set_dir_from ()
+{
+ case $1 in
+ */*) dir=`echo "$1" | sed -e 's|/[^/]*$|/|'`;;
+ *) dir=;;
+ esac
+}
+
+# Get the suffix-stripped basename of the given path, and save it in the
+# global variable '$base'.
+set_base_from ()
+{
+ base=`echo "$1" | sed -e 's|^.*/||' -e 's/\.[^.]*$//'`
+}
+
+# If no dependency file was actually created by the compiler invocation,
+# we still have to create a dummy depfile, to avoid errors with the
+# Makefile "include basename.Plo" scheme.
+make_dummy_depfile ()
+{
+ echo "#dummy" > "$depfile"
+}
+
+# Factor out some common post-processing of the generated depfile.
+# Requires the auxiliary global variable '$tmpdepfile' to be set.
+aix_post_process_depfile ()
+{
+ # If the compiler actually managed to produce a dependency file,
+ # post-process it.
+ if test -f "$tmpdepfile"; then
+ # Each line is of the form 'foo.o: dependency.h'.
+ # Do two passes, one to just change these to
+ # $object: dependency.h
+ # and one to simply output
+ # dependency.h:
+ # which is needed to avoid the deleted-header problem.
+ { sed -e "s,^.*\.[$lower]*:,$object:," < "$tmpdepfile"
+ sed -e "s,^.*\.[$lower]*:[$tab ]*,," -e 's,$,:,' < "$tmpdepfile"
+ } > "$depfile"
+ rm -f "$tmpdepfile"
+ else
+ make_dummy_depfile
+ fi
+}
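+# Illustrative example (not part of the original script): with object
+# 'sub/foo.o' and a tmpdepfile line 'foo.o: bar.h', the two passes above
+# produce in "$depfile":
+#   sub/foo.o: bar.h
+#   bar.h:
+# The dummy 'bar.h:' rule keeps make from failing if bar.h is later deleted.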
+
+# A tabulation character.
+tab=' '
+# A newline character.
+nl='
+'
+# Character ranges might be problematic outside the C locale.
+# These definitions help.
+upper=ABCDEFGHIJKLMNOPQRSTUVWXYZ
+lower=abcdefghijklmnopqrstuvwxyz
+digits=0123456789
+alpha=${upper}${lower}
+
+if test -z "$depmode" || test -z "$source" || test -z "$object"; then
+ echo "depcomp: Variables source, object and depmode must be set" 1>&2
+ exit 1
+fi
+
+# Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po.
+depfile=${depfile-`echo "$object" |
+ sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`}
+tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`}
+
+rm -f "$tmpdepfile"
+
+# Avoid interferences from the environment.
+gccflag= dashmflag=
+
+# Some modes work just like other modes, but use different flags. We
+# parameterize here, but still list the modes in the big case below,
+# to make depend.m4 easier to write. Note that we *cannot* use a case
+# here, because this file can only contain one case statement.
+if test "$depmode" = hp; then
+ # HP compiler uses -M and no extra arg.
+ gccflag=-M
+ depmode=gcc
+fi
+
+if test "$depmode" = dashXmstdout; then
+ # This is just like dashmstdout with a different argument.
+ dashmflag=-xM
+ depmode=dashmstdout
+fi
+
+cygpath_u="cygpath -u -f -"
+if test "$depmode" = msvcmsys; then
+ # This is just like msvisualcpp but w/o cygpath translation.
+ # Just convert the backslash-escaped backslashes to single forward
+ # slashes to satisfy depend.m4
+ cygpath_u='sed s,\\\\,/,g'
+ depmode=msvisualcpp
+fi
+
+if test "$depmode" = msvc7msys; then
+ # This is just like msvc7 but w/o cygpath translation.
+ # Just convert the backslash-escaped backslashes to single forward
+ # slashes to satisfy depend.m4
+ cygpath_u='sed s,\\\\,/,g'
+ depmode=msvc7
+fi
+
+if test "$depmode" = xlc; then
+ # IBM C/C++ Compilers xlc/xlC can output gcc-like dependency information.
+ gccflag=-qmakedep=gcc,-MF
+ depmode=gcc
+fi
+
+case "$depmode" in
+gcc3)
+## gcc 3 implements dependency tracking that does exactly what
+## we want. Yay! Note: for some reason libtool 1.4 doesn't like
+## it if -MD -MP comes after the -MF stuff. Hmm.
+## Unfortunately, FreeBSD c89 acceptance of flags depends upon
+## the command line argument order; so add the flags where they
+## appear in depend2.am. Note that the slowdown incurred here
+## affects only configure: in makefiles, %FASTDEP% shortcuts this.
+ for arg
+ do
+ case $arg in
+ -c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;;
+ *) set fnord "$@" "$arg" ;;
+ esac
+ shift # fnord
+ shift # $arg
+ done
+ "$@"
+ stat=$?
+ if test $stat -ne 0; then
+ rm -f "$tmpdepfile"
+ exit $stat
+ fi
+ mv "$tmpdepfile" "$depfile"
+ ;;
+
+gcc)
+## Note that this doesn't just cater to obsolete pre-3.x GCC compilers,
+## but also to in-use compilers like IBM xlc/xlC and the HP C compiler.
+## (see the conditional assignment to $gccflag above).
+## There are various ways to get dependency output from gcc. Here's
+## why we pick this rather obscure method:
+## - Don't want to use -MD because we'd like the dependencies to end
+## up in a subdir. Having to rename by hand is ugly.
+## (We might end up doing this anyway to support other compilers.)
+## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like
+## -MM, not -M (despite what the docs say). Also, it might not be
+## supported by the other compilers which use the 'gcc' depmode.
+## - Using -M directly means running the compiler twice (even worse
+## than renaming).
+ if test -z "$gccflag"; then
+ gccflag=-MD,
+ fi
+ "$@" -Wp,"$gccflag$tmpdepfile"
+ stat=$?
+ if test $stat -ne 0; then
+ rm -f "$tmpdepfile"
+ exit $stat
+ fi
+ rm -f "$depfile"
+ echo "$object : \\" > "$depfile"
+ # The second -e expression handles DOS-style file names with drive
+ # letters.
+ sed -e 's/^[^:]*: / /' \
+ -e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile"
+## This next piece of magic avoids the "deleted header file" problem.
+## The problem is that when a header file which appears in a .P file
+## is deleted, the dependency causes make to die (because there is
+## typically no way to rebuild the header). We avoid this by adding
+## dummy dependencies for each header file. Too bad gcc doesn't do
+## this for us directly.
+## Some versions of gcc put a space before the ':'. On the theory
+## that the space means something, we add a space to the output as
+## well. hp depmode also adds that space, but also prefixes the VPATH
+## to the object. Take care to not repeat it in the output.
+## Some versions of the HPUX 10.20 sed can't process this invocation
+## correctly. Breaking it into two sed invocations is a workaround.
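+## Illustrative example (not in the original comments): if "$tmpdepfile"
+## contains 'foo.o: foo.c foo.h', the pipeline below appends
+##   foo.c :
+##   foo.h :
+## to "$depfile", so deleting foo.h later cannot break the build.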
+ tr ' ' "$nl" < "$tmpdepfile" \
+ | sed -e 's/^\\$//' -e '/^$/d' -e "s|.*$object$||" -e '/:$/d' \
+ | sed -e 's/$/ :/' >> "$depfile"
+ rm -f "$tmpdepfile"
+ ;;
+
+hp)
+ # This case exists only to let depend.m4 do its work. It works by
+ # looking at the text of this script. This case will never be run,
+ # since it is checked for above.
+ exit 1
+ ;;
+
+sgi)
+ if test "$libtool" = yes; then
+ "$@" "-Wp,-MDupdate,$tmpdepfile"
+ else
+ "$@" -MDupdate "$tmpdepfile"
+ fi
+ stat=$?
+ if test $stat -ne 0; then
+ rm -f "$tmpdepfile"
+ exit $stat
+ fi
+ rm -f "$depfile"
+
+ if test -f "$tmpdepfile"; then # yes, the sourcefile depend on other files
+ echo "$object : \\" > "$depfile"
+ # Clip off the initial element (the dependent). Don't try to be
+ # clever and replace this with sed code, as IRIX sed won't handle
+ # lines with more than a fixed number of characters (4096 in
+ # IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines;
+ # the IRIX cc adds comments like '#:fec' to the end of the
+ # dependency line.
+ tr ' ' "$nl" < "$tmpdepfile" \
+ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' \
+ | tr "$nl" ' ' >> "$depfile"
+ echo >> "$depfile"
+ # The second pass generates a dummy entry for each header file.
+ tr ' ' "$nl" < "$tmpdepfile" \
+ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \
+ >> "$depfile"
+ else
+ make_dummy_depfile
+ fi
+ rm -f "$tmpdepfile"
+ ;;
+
+xlc)
+ # This case exists only to let depend.m4 do its work. It works by
+ # looking at the text of this script. This case will never be run,
+ # since it is checked for above.
+ exit 1
+ ;;
+
+aix)
+ # The C for AIX Compiler uses -M and outputs the dependencies
+ # in a .u file. In older versions, this file always lives in the
+ # current directory. Also, the AIX compiler puts '$object:' at the
+ # start of each line; $object doesn't have directory information.
+ # Version 6 uses the directory in both cases.
+ set_dir_from "$object"
+ set_base_from "$object"
+ if test "$libtool" = yes; then
+ tmpdepfile1=$dir$base.u
+ tmpdepfile2=$base.u
+ tmpdepfile3=$dir.libs/$base.u
+ "$@" -Wc,-M
+ else
+ tmpdepfile1=$dir$base.u
+ tmpdepfile2=$dir$base.u
+ tmpdepfile3=$dir$base.u
+ "$@" -M
+ fi
+ stat=$?
+ if test $stat -ne 0; then
+ rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
+ exit $stat
+ fi
+
+ for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
+ do
+ test -f "$tmpdepfile" && break
+ done
+ aix_post_process_depfile
+ ;;
+
+tcc)
+ # tcc (Tiny C Compiler) understands '-MD -MF file' since version 0.9.26
+ # FIXME: That version was still under development at the time of writing.
+ # Make sure that this statement remains true also for stable, released
+ # versions.
+ # It will wrap lines (doesn't matter whether long or short) with a
+ # trailing '\', as in:
+ #
+ # foo.o : \
+ # foo.c \
+ # foo.h \
+ #
+ # It will put a trailing '\' even on the last line, and will use leading
+ # spaces rather than leading tabs (at least since its commit 0394caf7
+ # "Emit spaces for -MD").
+ "$@" -MD -MF "$tmpdepfile"
+ stat=$?
+ if test $stat -ne 0; then
+ rm -f "$tmpdepfile"
+ exit $stat
+ fi
+ rm -f "$depfile"
+ # Each non-empty line is of the form 'foo.o : \' or ' dep.h \'.
+ # We have to change lines of the first kind to '$object: \'.
+ sed -e "s|.*:|$object :|" < "$tmpdepfile" > "$depfile"
+ # And for each line of the second kind, we have to emit a 'dep.h:'
+ # dummy dependency, to avoid the deleted-header problem.
+ sed -n -e 's|^ *\(.*\) *\\$|\1:|p' < "$tmpdepfile" >> "$depfile"
+ rm -f "$tmpdepfile"
+ ;;
+
+## The order of this option in the case statement is important, since the
+## shell code in configure will try each of these formats in the order
+## listed in this file. A plain '-MD' option would be understood by many
+## compilers, so we must ensure this comes after the gcc and icc options.
+pgcc)
+ # Portland's C compiler understands '-MD'.
+ # Will always output deps to 'file.d' where file is the root name of the
+ # source file under compilation, even if file resides in a subdirectory.
+ # The object file name does not affect the name of the '.d' file.
+ # pgcc 10.2 will output
+ # foo.o: sub/foo.c sub/foo.h
+ # and will wrap long lines using '\' :
+ # foo.o: sub/foo.c ... \
+ # sub/foo.h ... \
+ # ...
+ set_dir_from "$object"
+ # Use the source, not the object, to determine the base name, since
+ # that's sadly what pgcc will do too.
+ set_base_from "$source"
+ tmpdepfile=$base.d
+
+ # For projects that build the same source file twice into different object
+ # files, the pgcc approach of using the *source* file root name can cause
+ # problems in parallel builds. Use a locking strategy to avoid stomping on
+ # the same $tmpdepfile.
+ lockdir=$base.d-lock
+ trap "
+ echo '$0: caught signal, cleaning up...' >&2
+ rmdir '$lockdir'
+ exit 1
+ " 1 2 13 15
+ numtries=100
+ i=$numtries
+ while test $i -gt 0; do
+ # mkdir is a portable test-and-set.
+ if mkdir "$lockdir" 2>/dev/null; then
+ # This process acquired the lock.
+ "$@" -MD
+ stat=$?
+ # Release the lock.
+ rmdir "$lockdir"
+ break
+ else
+ # If the lock is being held by a different process, wait
+ # until the winning process is done or we timeout.
+ while test -d "$lockdir" && test $i -gt 0; do
+ sleep 1
+ i=`expr $i - 1`
+ done
+ fi
+ i=`expr $i - 1`
+ done
+ trap - 1 2 13 15
+ if test $i -le 0; then
+ echo "$0: failed to acquire lock after $numtries attempts" >&2
+ echo "$0: check lockdir '$lockdir'" >&2
+ exit 1
+ fi
+
+ if test $stat -ne 0; then
+ rm -f "$tmpdepfile"
+ exit $stat
+ fi
+ rm -f "$depfile"
+ # Each line is of the form `foo.o: dependent.h',
+ # or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'.
+ # Do two passes, one to just change these to
+ # `$object: dependent.h' and one to simply `dependent.h:'.
+ sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile"
+ # Some versions of the HPUX 10.20 sed can't process this invocation
+ # correctly. Breaking it into two sed invocations is a workaround.
+ sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" \
+ | sed -e 's/$/ :/' >> "$depfile"
+ rm -f "$tmpdepfile"
+ ;;
+
+hp2)
+ # The "hp" stanza above does not work with aCC (C++) and HP's ia64
+ # compilers, which have integrated preprocessors. The correct option
+ # to use with these is +Maked; it writes dependencies to a file named
+ # 'foo.d', which lands next to the object file, wherever that
+ # happens to be.
+ # Much of this is similar to the tru64 case; see comments there.
+ set_dir_from "$object"
+ set_base_from "$object"
+ if test "$libtool" = yes; then
+ tmpdepfile1=$dir$base.d
+ tmpdepfile2=$dir.libs/$base.d
+ "$@" -Wc,+Maked
+ else
+ tmpdepfile1=$dir$base.d
+ tmpdepfile2=$dir$base.d
+ "$@" +Maked
+ fi
+ stat=$?
+ if test $stat -ne 0; then
+ rm -f "$tmpdepfile1" "$tmpdepfile2"
+ exit $stat
+ fi
+
+ for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2"
+ do
+ test -f "$tmpdepfile" && break
+ done
+ if test -f "$tmpdepfile"; then
+ sed -e "s,^.*\.[$lower]*:,$object:," "$tmpdepfile" > "$depfile"
+ # Add 'dependent.h:' lines.
+ sed -ne '2,${
+ s/^ *//
+ s/ \\*$//
+ s/$/:/
+ p
+ }' "$tmpdepfile" >> "$depfile"
+ else
+ make_dummy_depfile
+ fi
+ rm -f "$tmpdepfile" "$tmpdepfile2"
+ ;;
+
+tru64)
+ # The Tru64 compiler uses -MD to generate dependencies as a side
+ # effect. 'cc -MD -o foo.o ...' puts the dependencies into 'foo.o.d'.
+ # At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put
+ # dependencies in 'foo.d' instead, so we check for that too.
+ # Subdirectories are respected.
+ set_dir_from "$object"
+ set_base_from "$object"
+
+ if test "$libtool" = yes; then
+ # Libtool generates 2 separate objects for the 2 libraries. These
+ # two compilations output dependencies in $dir.libs/$base.o.d and
+ # in $dir$base.o.d. We have to check for both files, because
+ # one of the two compilations can be disabled. We should prefer
+ # $dir$base.o.d over $dir.libs/$base.o.d because the latter is
+ # automatically cleaned when .libs/ is deleted, while ignoring
+ # the former would cause a distcleancheck panic.
+ tmpdepfile1=$dir$base.o.d # libtool 1.5
+ tmpdepfile2=$dir.libs/$base.o.d # Likewise.
+ tmpdepfile3=$dir.libs/$base.d # Compaq CCC V6.2-504
+ "$@" -Wc,-MD
+ else
+ tmpdepfile1=$dir$base.d
+ tmpdepfile2=$dir$base.d
+ tmpdepfile3=$dir$base.d
+ "$@" -MD
+ fi
+
+ stat=$?
+ if test $stat -ne 0; then
+ rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
+ exit $stat
+ fi
+
+ for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
+ do
+ test -f "$tmpdepfile" && break
+ done
+ # Same post-processing that is required for AIX mode.
+ aix_post_process_depfile
+ ;;
+
+msvc7)
+ if test "$libtool" = yes; then
+ showIncludes=-Wc,-showIncludes
+ else
+ showIncludes=-showIncludes
+ fi
+ "$@" $showIncludes > "$tmpdepfile"
+ stat=$?
+ grep -v '^Note: including file: ' "$tmpdepfile"
+ if test $stat -ne 0; then
+ rm -f "$tmpdepfile"
+ exit $stat
+ fi
+ rm -f "$depfile"
+ echo "$object : \\" > "$depfile"
+ # The first sed program below extracts the file names and escapes
+ # backslashes for cygpath. The second sed program outputs the file
+ # name when reading, but also accumulates all include files in the
+ # hold buffer in order to output them again at the end. This only
+ # works with sed implementations that can handle large buffers.
+ sed < "$tmpdepfile" -n '
+/^Note: including file: *\(.*\)/ {
+ s//\1/
+ s/\\/\\\\/g
+ p
+}' | $cygpath_u | sort -u | sed -n '
+s/ /\\ /g
+s/\(.*\)/'"$tab"'\1 \\/p
+s/.\(.*\) \\/\1:/
+H
+$ {
+ s/.*/'"$tab"'/
+ G
+ p
+}' >> "$depfile"
+ echo >> "$depfile" # make sure the fragment doesn't end with a backslash
+ rm -f "$tmpdepfile"
+ ;;
+
+msvc7msys)
+ # This case exists only to let depend.m4 do its work. It works by
+ # looking at the text of this script. This case will never be run,
+ # since it is checked for above.
+ exit 1
+ ;;
+
+#nosideeffect)
+ # This comment above is used by automake to tell side-effect
+ # dependency tracking mechanisms from slower ones.
+
+dashmstdout)
+ # Important note: in order to support this mode, a compiler *must*
+ # always write the preprocessed file to stdout, regardless of -o.
+ "$@" || exit $?
+
+ # Remove the call to Libtool.
+ if test "$libtool" = yes; then
+ while test "X$1" != 'X--mode=compile'; do
+ shift
+ done
+ shift
+ fi
+
+ # Remove '-o $object'.
+ IFS=" "
+ for arg
+ do
+ case $arg in
+ -o)
+ shift
+ ;;
+ $object)
+ shift
+ ;;
+ *)
+ set fnord "$@" "$arg"
+ shift # fnord
+ shift # $arg
+ ;;
+ esac
+ done
+
+ test -z "$dashmflag" && dashmflag=-M
+ # Require at least two characters before searching for ':'
+ # in the target name. This is to cope with DOS-style filenames:
+ # a dependency such as 'c:/foo/bar' could be seen as target 'c' otherwise.
+ "$@" $dashmflag |
+ sed "s|^[$tab ]*[^:$tab ][^:][^:]*:[$tab ]*|$object: |" > "$tmpdepfile"
+ rm -f "$depfile"
+ cat < "$tmpdepfile" > "$depfile"
+ # Some versions of the HPUX 10.20 sed can't process this sed invocation
+ # correctly. Breaking it into two sed invocations is a workaround.
+ tr ' ' "$nl" < "$tmpdepfile" \
+ | sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
+ | sed -e 's/$/ :/' >> "$depfile"
+ rm -f "$tmpdepfile"
+ ;;
+
+dashXmstdout)
+ # This case only exists to satisfy depend.m4. It is never actually
+ # run, as this mode is specially recognized in the preamble.
+ exit 1
+ ;;
+
+makedepend)
+ "$@" || exit $?
+ # Remove any Libtool call
+ if test "$libtool" = yes; then
+ while test "X$1" != 'X--mode=compile'; do
+ shift
+ done
+ shift
+ fi
+ # X makedepend
+ shift
+ cleared=no eat=no
+ for arg
+ do
+ case $cleared in
+ no)
+ set ""; shift
+ cleared=yes ;;
+ esac
+ if test $eat = yes; then
+ eat=no
+ continue
+ fi
+ case "$arg" in
+ -D*|-I*)
+ set fnord "$@" "$arg"; shift ;;
+ # Strip any option that makedepend may not understand. Remove
+ # the object too, otherwise makedepend will parse it as a source file.
+ -arch)
+ eat=yes ;;
+ -*|$object)
+ ;;
+ *)
+ set fnord "$@" "$arg"; shift ;;
+ esac
+ done
+ obj_suffix=`echo "$object" | sed 's/^.*\././'`
+ touch "$tmpdepfile"
+ ${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@"
+ rm -f "$depfile"
+ # makedepend may prepend the VPATH from the source file name to the object.
+ # No need to regex-escape $object, excess matching of '.' is harmless.
+ sed "s|^.*\($object *:\)|\1|" "$tmpdepfile" > "$depfile"
+ # Some versions of the HPUX 10.20 sed can't process the last invocation
+ # correctly. Breaking it into two sed invocations is a workaround.
+ sed '1,2d' "$tmpdepfile" \
+ | tr ' ' "$nl" \
+ | sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
+ | sed -e 's/$/ :/' >> "$depfile"
+ rm -f "$tmpdepfile" "$tmpdepfile".bak
+ ;;
+
+cpp)
+ # Important note: in order to support this mode, a compiler *must*
+ # always write the preprocessed file to stdout.
+ "$@" || exit $?
+
+ # Remove the call to Libtool.
+ if test "$libtool" = yes; then
+ while test "X$1" != 'X--mode=compile'; do
+ shift
+ done
+ shift
+ fi
+
+ # Remove '-o $object'.
+ IFS=" "
+ for arg
+ do
+ case $arg in
+ -o)
+ shift
+ ;;
+ $object)
+ shift
+ ;;
+ *)
+ set fnord "$@" "$arg"
+ shift # fnord
+ shift # $arg
+ ;;
+ esac
+ done
+
+ "$@" -E \
+ | sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
+ -e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
+ | sed '$ s: \\$::' > "$tmpdepfile"
+ rm -f "$depfile"
+ echo "$object : \\" > "$depfile"
+ cat < "$tmpdepfile" >> "$depfile"
+ sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile"
+ rm -f "$tmpdepfile"
+ ;;
+
+msvisualcpp)
+ # Important note: in order to support this mode, a compiler *must*
+ # always write the preprocessed file to stdout.
+ "$@" || exit $?
+
+ # Remove the call to Libtool.
+ if test "$libtool" = yes; then
+ while test "X$1" != 'X--mode=compile'; do
+ shift
+ done
+ shift
+ fi
+
+ IFS=" "
+ for arg
+ do
+ case "$arg" in
+ -o)
+ shift
+ ;;
+ $object)
+ shift
+ ;;
+ "-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI")
+ set fnord "$@"
+ shift
+ shift
+ ;;
+ *)
+ set fnord "$@" "$arg"
+ shift
+ shift
+ ;;
+ esac
+ done
+ "$@" -E 2>/dev/null |
+ sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::\1:p' | $cygpath_u | sort -u > "$tmpdepfile"
+ rm -f "$depfile"
+ echo "$object : \\" > "$depfile"
+ sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::'"$tab"'\1 \\:p' >> "$depfile"
+ echo "$tab" >> "$depfile"
+ sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::\1\::p' >> "$depfile"
+ rm -f "$tmpdepfile"
+ ;;
+
+msvcmsys)
+ # This case exists only to let depend.m4 do its work. It works by
+ # looking at the text of this script. This case will never be run,
+ # since it is checked for above.
+ exit 1
+ ;;
+
+none)
+ exec "$@"
+ ;;
+
+*)
+ echo "Unknown depmode $depmode" 1>&2
+ exit 1
+ ;;
+esac
+
+exit 0
+
+# Local Variables:
+# mode: shell-script
+# sh-indentation: 2
+# eval: (add-hook 'before-save-hook 'time-stamp)
+# time-stamp-start: "scriptversion="
+# time-stamp-format: "%:y-%02m-%02d.%02H"
+# time-stamp-time-zone: "UTC0"
+# time-stamp-end: "; # UTC"
+# End:
diff --git a/distro/Makefile.am b/distro/Makefile.am
new file mode 100644
index 0000000..7e55ad7
--- /dev/null
+++ b/distro/Makefile.am
@@ -0,0 +1,5 @@
+EXTRA_DIST = \
+ common \
+ config \
+ pkg \
+ tests
diff --git a/distro/Makefile.in b/distro/Makefile.in
new file mode 100644
index 0000000..a492ba7
--- /dev/null
+++ b/distro/Makefile.in
@@ -0,0 +1,529 @@
+# Makefile.in generated by automake 1.16.5 from Makefile.am.
+# @configure_input@
+
+# Copyright (C) 1994-2021 Free Software Foundation, Inc.
+
+# This Makefile.in is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+@SET_MAKE@
+VPATH = @srcdir@
+am__is_gnu_make = { \
+ if test -z '$(MAKELEVEL)'; then \
+ false; \
+ elif test -n '$(MAKE_HOST)'; then \
+ true; \
+ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \
+ true; \
+ else \
+ false; \
+ fi; \
+}
+am__make_running_with_option = \
+ case $${target_option-} in \
+ ?) ;; \
+ *) echo "am__make_running_with_option: internal error: invalid" \
+ "target option '$${target_option-}' specified" >&2; \
+ exit 1;; \
+ esac; \
+ has_opt=no; \
+ sane_makeflags=$$MAKEFLAGS; \
+ if $(am__is_gnu_make); then \
+ sane_makeflags=$$MFLAGS; \
+ else \
+ case $$MAKEFLAGS in \
+ *\\[\ \ ]*) \
+ bs=\\; \
+ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \
+ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \
+ esac; \
+ fi; \
+ skip_next=no; \
+ strip_trailopt () \
+ { \
+ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \
+ }; \
+ for flg in $$sane_makeflags; do \
+ test $$skip_next = yes && { skip_next=no; continue; }; \
+ case $$flg in \
+ *=*|--*) continue;; \
+ -*I) strip_trailopt 'I'; skip_next=yes;; \
+ -*I?*) strip_trailopt 'I';; \
+ -*O) strip_trailopt 'O'; skip_next=yes;; \
+ -*O?*) strip_trailopt 'O';; \
+ -*l) strip_trailopt 'l'; skip_next=yes;; \
+ -*l?*) strip_trailopt 'l';; \
+ -[dEDm]) skip_next=yes;; \
+ -[JT]) skip_next=yes;; \
+ esac; \
+ case $$flg in \
+ *$$target_option*) has_opt=yes; break;; \
+ esac; \
+ done; \
+ test $$has_opt = yes
+am__make_dryrun = (target_option=n; $(am__make_running_with_option))
+am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
+pkgdatadir = $(datadir)/@PACKAGE@
+pkgincludedir = $(includedir)/@PACKAGE@
+pkglibdir = $(libdir)/@PACKAGE@
+pkglibexecdir = $(libexecdir)/@PACKAGE@
+am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
+install_sh_DATA = $(install_sh) -c -m 644
+install_sh_PROGRAM = $(install_sh) -c
+install_sh_SCRIPT = $(install_sh) -c
+INSTALL_HEADER = $(INSTALL_DATA)
+transform = $(program_transform_name)
+NORMAL_INSTALL = :
+PRE_INSTALL = :
+POST_INSTALL = :
+NORMAL_UNINSTALL = :
+PRE_UNINSTALL = :
+POST_UNINSTALL = :
+build_triplet = @build@
+host_triplet = @host@
+subdir = distro
+ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
+am__aclocal_m4_deps = $(top_srcdir)/m4/ax_check_compile_flag.m4 \
+ $(top_srcdir)/m4/ax_check_link_flag.m4 \
+ $(top_srcdir)/m4/code-coverage.m4 \
+ $(top_srcdir)/m4/knot-lib-version.m4 \
+ $(top_srcdir)/m4/knot-module.m4 $(top_srcdir)/m4/libtool.m4 \
+ $(top_srcdir)/m4/ltoptions.m4 $(top_srcdir)/m4/ltsugar.m4 \
+ $(top_srcdir)/m4/ltversion.m4 $(top_srcdir)/m4/lt~obsolete.m4 \
+ $(top_srcdir)/m4/sanitizer.m4 $(top_srcdir)/m4/visibility.m4 \
+ $(top_srcdir)/m4/knot-version.m4 $(top_srcdir)/configure.ac
+am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
+ $(ACLOCAL_M4)
+DIST_COMMON = $(srcdir)/Makefile.am $(am__DIST_COMMON)
+mkinstalldirs = $(install_sh) -d
+CONFIG_HEADER = $(top_builddir)/src/config.h
+CONFIG_CLEAN_FILES =
+CONFIG_CLEAN_VPATH_FILES =
+AM_V_P = $(am__v_P_@AM_V@)
+am__v_P_ = $(am__v_P_@AM_DEFAULT_V@)
+am__v_P_0 = false
+am__v_P_1 = :
+AM_V_GEN = $(am__v_GEN_@AM_V@)
+am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
+am__v_GEN_0 = @echo " GEN " $@;
+am__v_GEN_1 =
+AM_V_at = $(am__v_at_@AM_V@)
+am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
+am__v_at_0 = @
+am__v_at_1 =
+SOURCES =
+DIST_SOURCES =
+am__can_run_installinfo = \
+ case $$AM_UPDATE_INFO_DIR in \
+ n|no|NO) false;; \
+ *) (install-info --version) >/dev/null 2>&1;; \
+ esac
+am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP)
+am__DIST_COMMON = $(srcdir)/Makefile.in
+DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
+ACLOCAL = @ACLOCAL@
+AMTAR = @AMTAR@
+AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
+AR = @AR@
+AUTOCONF = @AUTOCONF@
+AUTOHEADER = @AUTOHEADER@
+AUTOMAKE = @AUTOMAKE@
+AWK = @AWK@
+CC = @CC@
+CCDEPMODE = @CCDEPMODE@
+CFLAGS = @CFLAGS@
+CFLAG_VISIBILITY = @CFLAG_VISIBILITY@
+CODE_COVERAGE_ENABLED = @CODE_COVERAGE_ENABLED@
+CPP = @CPP@
+CPPFLAGS = @CPPFLAGS@
+CSCOPE = @CSCOPE@
+CTAGS = @CTAGS@
+CYGPATH_W = @CYGPATH_W@
+DEFS = @DEFS@
+DEPDIR = @DEPDIR@
+DLLTOOL = @DLLTOOL@
+DNSTAP_CFLAGS = @DNSTAP_CFLAGS@
+DNSTAP_LIBS = @DNSTAP_LIBS@
+DSYMUTIL = @DSYMUTIL@
+DUMPBIN = @DUMPBIN@
+ECHO_C = @ECHO_C@
+ECHO_N = @ECHO_N@
+ECHO_T = @ECHO_T@
+EGREP = @EGREP@
+ETAGS = @ETAGS@
+EXEEXT = @EXEEXT@
+FGREP = @FGREP@
+FILECMD = @FILECMD@
+GENHTML = @GENHTML@
+GREP = @GREP@
+HAVE_VISIBILITY = @HAVE_VISIBILITY@
+INSTALL = @INSTALL@
+INSTALL_DATA = @INSTALL_DATA@
+INSTALL_PROGRAM = @INSTALL_PROGRAM@
+INSTALL_SCRIPT = @INSTALL_SCRIPT@
+INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
+KNOT_VERSION_MAJOR = @KNOT_VERSION_MAJOR@
+KNOT_VERSION_MINOR = @KNOT_VERSION_MINOR@
+KNOT_VERSION_PATCH = @KNOT_VERSION_PATCH@
+LCOV = @LCOV@
+LD = @LD@
+LDFLAGS = @LDFLAGS@
+LDFLAG_EXCLUDE_LIBS = @LDFLAG_EXCLUDE_LIBS@
+LIBOBJS = @LIBOBJS@
+LIBS = @LIBS@
+LIBTOOL = @LIBTOOL@
+LIPO = @LIPO@
+LN_S = @LN_S@
+LTLIBOBJS = @LTLIBOBJS@
+LT_NO_UNDEFINED = @LT_NO_UNDEFINED@
+LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@
+MAKEINFO = @MAKEINFO@
+MANIFEST_TOOL = @MANIFEST_TOOL@
+MKDIR_P = @MKDIR_P@
+NM = @NM@
+NMEDIT = @NMEDIT@
+OBJDUMP = @OBJDUMP@
+OBJEXT = @OBJEXT@
+OTOOL = @OTOOL@
+OTOOL64 = @OTOOL64@
+PACKAGE = @PACKAGE@
+PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
+PACKAGE_NAME = @PACKAGE_NAME@
+PACKAGE_STRING = @PACKAGE_STRING@
+PACKAGE_TARNAME = @PACKAGE_TARNAME@
+PACKAGE_URL = @PACKAGE_URL@
+PACKAGE_VERSION = @PACKAGE_VERSION@
+PATH_SEPARATOR = @PATH_SEPARATOR@
+PDFLATEX = @PDFLATEX@
+PKG_CONFIG = @PKG_CONFIG@
+PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
+PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
+PROTOC_C = @PROTOC_C@
+RANLIB = @RANLIB@
+RELEASE_DATE = @RELEASE_DATE@
+SED = @SED@
+SET_MAKE = @SET_MAKE@
+SHELL = @SHELL@
+SPHINXBUILD = @SPHINXBUILD@
+STRIP = @STRIP@
+VERSION = @VERSION@
+abs_builddir = @abs_builddir@
+abs_srcdir = @abs_srcdir@
+abs_top_builddir = @abs_top_builddir@
+abs_top_srcdir = @abs_top_srcdir@
+ac_ct_AR = @ac_ct_AR@
+ac_ct_CC = @ac_ct_CC@
+ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+am__include = @am__include@
+am__leading_dot = @am__leading_dot@
+am__quote = @am__quote@
+am__tar = @am__tar@
+am__untar = @am__untar@
+bindir = @bindir@
+build = @build@
+build_alias = @build_alias@
+build_cpu = @build_cpu@
+build_os = @build_os@
+build_vendor = @build_vendor@
+builddir = @builddir@
+cap_ng_CFLAGS = @cap_ng_CFLAGS@
+cap_ng_LIBS = @cap_ng_LIBS@
+conf_mapsize = @conf_mapsize@
+config_dir = @config_dir@
+datadir = @datadir@
+datarootdir = @datarootdir@
+dlopen_LIBS = @dlopen_LIBS@
+docdir = @docdir@
+dvidir = @dvidir@
+embedded_libngtcp2_CFLAGS = @embedded_libngtcp2_CFLAGS@
+embedded_libngtcp2_LIBS = @embedded_libngtcp2_LIBS@
+exec_prefix = @exec_prefix@
+fuzzer_CFLAGS = @fuzzer_CFLAGS@
+fuzzer_LDFLAGS = @fuzzer_LDFLAGS@
+gnutls_CFLAGS = @gnutls_CFLAGS@
+gnutls_LIBS = @gnutls_LIBS@
+host = @host@
+host_alias = @host_alias@
+host_cpu = @host_cpu@
+host_os = @host_os@
+host_vendor = @host_vendor@
+htmldir = @htmldir@
+includedir = @includedir@
+infodir = @infodir@
+install_sh = @install_sh@
+libbpf_CFLAGS = @libbpf_CFLAGS@
+libbpf_LIBS = @libbpf_LIBS@
+libdbus_CFLAGS = @libdbus_CFLAGS@
+libdbus_LIBS = @libdbus_LIBS@
+libdir = @libdir@
+libdnssec_SONAME = @libdnssec_SONAME@
+libdnssec_SOVERSION = @libdnssec_SOVERSION@
+libdnssec_VERSION_INFO = @libdnssec_VERSION_INFO@
+libedit_CFLAGS = @libedit_CFLAGS@
+libedit_LIBS = @libedit_LIBS@
+libexecdir = @libexecdir@
+libfstrm_CFLAGS = @libfstrm_CFLAGS@
+libfstrm_LIBS = @libfstrm_LIBS@
+libidn2_CFLAGS = @libidn2_CFLAGS@
+libidn2_LIBS = @libidn2_LIBS@
+libknot_SONAME = @libknot_SONAME@
+libknot_SOVERSION = @libknot_SOVERSION@
+libknot_VERSION_INFO = @libknot_VERSION_INFO@
+libkqueue_CFLAGS = @libkqueue_CFLAGS@
+libkqueue_LIBS = @libkqueue_LIBS@
+libmaxminddb_CFLAGS = @libmaxminddb_CFLAGS@
+libmaxminddb_LIBS = @libmaxminddb_LIBS@
+libmnl_CFLAGS = @libmnl_CFLAGS@
+libmnl_LIBS = @libmnl_LIBS@
+libnghttp2_CFLAGS = @libnghttp2_CFLAGS@
+libnghttp2_LIBS = @libnghttp2_LIBS@
+libngtcp2_CFLAGS = @libngtcp2_CFLAGS@
+libngtcp2_LIBS = @libngtcp2_LIBS@
+libprotobuf_c_CFLAGS = @libprotobuf_c_CFLAGS@
+libprotobuf_c_LIBS = @libprotobuf_c_LIBS@
+liburcu_CFLAGS = @liburcu_CFLAGS@
+liburcu_LIBS = @liburcu_LIBS@
+libxdp_CFLAGS = @libxdp_CFLAGS@
+libxdp_LIBS = @libxdp_LIBS@
+libzscanner_SONAME = @libzscanner_SONAME@
+libzscanner_SOVERSION = @libzscanner_SOVERSION@
+libzscanner_VERSION_INFO = @libzscanner_VERSION_INFO@
+lmdb_CFLAGS = @lmdb_CFLAGS@
+lmdb_LIBS = @lmdb_LIBS@
+localedir = @localedir@
+localstatedir = @localstatedir@
+malloc_LIBS = @malloc_LIBS@
+mandir = @mandir@
+math_LIBS = @math_LIBS@
+mkdir_p = @mkdir_p@
+module_dir = @module_dir@
+module_instdir = @module_instdir@
+oldincludedir = @oldincludedir@
+pdfdir = @pdfdir@
+pkgconfigdir = @pkgconfigdir@
+prefix = @prefix@
+program_transform_name = @program_transform_name@
+psdir = @psdir@
+pthread_LIBS = @pthread_LIBS@
+run_dir = @run_dir@
+runstatedir = @runstatedir@
+sbindir = @sbindir@
+sharedstatedir = @sharedstatedir@
+srcdir = @srcdir@
+storage_dir = @storage_dir@
+sysconfdir = @sysconfdir@
+systemd_CFLAGS = @systemd_CFLAGS@
+systemd_LIBS = @systemd_LIBS@
+target_alias = @target_alias@
+top_build_prefix = @top_build_prefix@
+top_builddir = @top_builddir@
+top_srcdir = @top_srcdir@
+EXTRA_DIST = \
+ common \
+ config \
+ pkg \
+ tests
+
+all: all-am
+
+.SUFFIXES:
+$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
+ @for dep in $?; do \
+ case '$(am__configure_deps)' in \
+ *$$dep*) \
+ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \
+ && { if test -f $@; then exit 0; else break; fi; }; \
+ exit 1;; \
+ esac; \
+ done; \
+ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign distro/Makefile'; \
+ $(am__cd) $(top_srcdir) && \
+ $(AUTOMAKE) --foreign distro/Makefile
+Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
+ @case '$?' in \
+ *config.status*) \
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
+ *) \
+ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles)'; \
+ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles);; \
+ esac;
+
+$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+
+$(top_srcdir)/configure: $(am__configure_deps)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+$(ACLOCAL_M4): $(am__aclocal_m4_deps)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+$(am__aclocal_m4_deps):
+
+mostlyclean-libtool:
+ -rm -f *.lo
+
+clean-libtool:
+ -rm -rf .libs _libs
+tags TAGS:
+
+ctags CTAGS:
+
+cscope cscopelist:
+
+distdir: $(BUILT_SOURCES)
+ $(MAKE) $(AM_MAKEFLAGS) distdir-am
+
+distdir-am: $(DISTFILES)
+ @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ list='$(DISTFILES)'; \
+ dist_files=`for file in $$list; do echo $$file; done | \
+ sed -e "s|^$$srcdirstrip/||;t" \
+ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
+ case $$dist_files in \
+ */*) $(MKDIR_P) `echo "$$dist_files" | \
+ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
+ sort -u` ;; \
+ esac; \
+ for file in $$dist_files; do \
+ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
+ if test -d $$d/$$file; then \
+ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
+ if test -d "$(distdir)/$$file"; then \
+ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
+ fi; \
+ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
+ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
+ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
+ fi; \
+ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
+ else \
+ test -f "$(distdir)/$$file" \
+ || cp -p $$d/$$file "$(distdir)/$$file" \
+ || exit 1; \
+ fi; \
+ done
+check-am: all-am
+check: check-am
+all-am: Makefile
+installdirs:
+install: install-am
+install-exec: install-exec-am
+install-data: install-data-am
+uninstall: uninstall-am
+
+install-am: all-am
+ @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
+
+installcheck: installcheck-am
+install-strip:
+ if test -z '$(STRIP)'; then \
+ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
+ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
+ install; \
+ else \
+ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
+ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
+ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
+ fi
+mostlyclean-generic:
+
+clean-generic:
+
+distclean-generic:
+ -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
+ -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
+
+maintainer-clean-generic:
+ @echo "This command is intended for maintainers to use"
+ @echo "it deletes files that may require special tools to rebuild."
+clean: clean-am
+
+clean-am: clean-generic clean-libtool mostlyclean-am
+
+distclean: distclean-am
+ -rm -f Makefile
+distclean-am: clean-am distclean-generic
+
+dvi: dvi-am
+
+dvi-am:
+
+html: html-am
+
+html-am:
+
+info: info-am
+
+info-am:
+
+install-data-am:
+
+install-dvi: install-dvi-am
+
+install-dvi-am:
+
+install-exec-am:
+
+install-html: install-html-am
+
+install-html-am:
+
+install-info: install-info-am
+
+install-info-am:
+
+install-man:
+
+install-pdf: install-pdf-am
+
+install-pdf-am:
+
+install-ps: install-ps-am
+
+install-ps-am:
+
+installcheck-am:
+
+maintainer-clean: maintainer-clean-am
+ -rm -f Makefile
+maintainer-clean-am: distclean-am maintainer-clean-generic
+
+mostlyclean: mostlyclean-am
+
+mostlyclean-am: mostlyclean-generic mostlyclean-libtool
+
+pdf: pdf-am
+
+pdf-am:
+
+ps: ps-am
+
+ps-am:
+
+uninstall-am:
+
+.MAKE: install-am install-strip
+
+.PHONY: all all-am check check-am clean clean-generic clean-libtool \
+ cscopelist-am ctags-am distclean distclean-generic \
+ distclean-libtool distdir dvi dvi-am html html-am info info-am \
+ install install-am install-data install-data-am install-dvi \
+ install-dvi-am install-exec install-exec-am install-html \
+ install-html-am install-info install-info-am install-man \
+ install-pdf install-pdf-am install-ps install-ps-am \
+ install-strip installcheck installcheck-am installdirs \
+ maintainer-clean maintainer-clean-generic mostlyclean \
+ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \
+ tags-am uninstall uninstall-am
+
+.PRECIOUS: Makefile
+
+
+# Tell versions [3.59,3.63) of GNU make to not export all variables.
+# Otherwise a system limit (for SysV at least) may be exceeded.
+.NOEXPORT:
diff --git a/distro/common/cz.nic.knotd.conf b/distro/common/cz.nic.knotd.conf
new file mode 100644
index 0000000..50af87a
--- /dev/null
+++ b/distro/common/cz.nic.knotd.conf
@@ -0,0 +1,9 @@
+
+
+
+
+
+
+
+
+
diff --git a/distro/common/knot.service b/distro/common/knot.service
new file mode 100644
index 0000000..e6c13ed
--- /dev/null
+++ b/distro/common/knot.service
@@ -0,0 +1,30 @@
+[Unit]
+Description=Knot DNS server
+Wants=network-online.target
+After=network-online.target
+Documentation=man:knotd(8) man:knot.conf(5) man:knotc(8)
+
+[Service]
+Type=notify
+User=knot
+Group=knot
+CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETPCAP
+AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETPCAP
+ExecStartPre=/usr/sbin/knotc conf-check
+ExecStart=/usr/sbin/knotd -m "$KNOT_CONF_MAX_SIZE"
+ExecReload=/bin/kill -HUP $MAINPID
+Restart=on-abort
+LimitNOFILE=1048576
+TimeoutStopSec=300
+# Extend the systemd startup timeout by this value (seconds) for each zone
+Environment="KNOT_ZONE_LOAD_TIMEOUT_SEC=180"
+# Maximum size (MiB) of a configuration database
+Environment="KNOT_CONF_MAX_SIZE=512"
+
+# Expected systemd >= v239
+RuntimeDirectory=knot
+StateDirectory=knot
+NoNewPrivileges=yes
+
+[Install]
+WantedBy=multi-user.target
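
Illustration (not part of the patch): the unit above reads KNOT_CONF_MAX_SIZE and
KNOT_ZONE_LOAD_TIMEOUT_SEC from its own Environment= lines, so an administrator
would normally raise them with a systemd drop-in instead of editing the packaged
unit. A minimal sketch, assuming root and systemd >= 239 as the unit itself
expects; the values 1024 and 300 are arbitrary examples:

    $ systemctl edit knot.service
      # add to the generated override.conf:
      [Service]
      Environment="KNOT_CONF_MAX_SIZE=1024"
      Environment="KNOT_ZONE_LOAD_TIMEOUT_SEC=300"
    $ systemctl restart knot.service
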
diff --git a/distro/common/system-local.conf b/distro/common/system-local.conf
new file mode 100644
index 0000000..8df0a2f
--- /dev/null
+++ b/distro/common/system-local.conf
@@ -0,0 +1,5 @@
+
+
+ unix:path=/rundir/dbus.sock
+
diff --git a/distro/config/apkg.toml b/distro/config/apkg.toml
new file mode 100644
index 0000000..3483ac2
--- /dev/null
+++ b/distro/config/apkg.toml
@@ -0,0 +1,16 @@
+[project]
+name = "knot-dns"
+# needed for make-archive
+make_archive_script = "scripts/make-dev-archive.sh"
+
+[upstream]
+# needed for get-archive
+archive_url = "https://secure.nic.cz/files/knot-dns/knot-{{ version }}.tar.xz"
+signature_url = "https://secure.nic.cz/files/knot-dns/knot-{{ version }}.tar.xz.asc"
+
+[apkg]
+compat = 3
+
+[[distro.aliases]]
+name = "deb-nolibxdp"
+distro = ["debian == 11", "ubuntu == 20.04", "ubuntu == 22.04"]
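
Illustration (not part of the patch): this apkg.toml drives CZ.NIC's apkg tool.
make_archive_script builds a tarball from the working tree, archive_url and
signature_url let apkg fetch a released tarball instead, and the deb-nolibxdp
alias maps the listed Debian/Ubuntu releases onto the packaging directory of the
same name. A rough sketch of the usual commands, run from the repository root
(exact apkg options may differ between versions):

    $ apkg make-archive    # runs scripts/make-dev-archive.sh for a dev tarball
    $ apkg get-archive     # or download the released archive_url + signature_url
    $ apkg srcpkg          # render distro/pkg/* templates into a source package
    $ apkg build           # build binary packages for the host distribution
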
diff --git a/distro/pkg/arch/PKGBUILD b/distro/pkg/arch/PKGBUILD
new file mode 100644
index 0000000..16f1259
--- /dev/null
+++ b/distro/pkg/arch/PKGBUILD
@@ -0,0 +1,66 @@
+# Maintainer: Tomas Krizek
+# Maintainer: Bruno Pagani
+# Contributor: Ondřej Surý
+# Contributor: Julian Brost
+# Contributor: Oleander Reis
+# Contributor: Otto Sabart
+
+pkgname=knot
+pkgver={{ version }}
+pkgrel=1
+pkgdesc="High-performance authoritative-only DNS server"
+arch=('x86_64')
+url="https://www.knot-dns.cz/"
+license=('GPL3')
+depends=('fstrm'
+ 'gnutls'
+ 'libcap-ng'
+ 'libedit'
+ 'libidn2'
+ 'libmaxminddb'
+ 'liburcu'
+ 'lmdb'
+ 'protobuf-c'
+ 'systemd')
+backup=('etc/knot/knot.conf')
+source=("${pkgname}-${pkgver}.tar.xz")
+sha256sums=('SKIP')
+validpgpkeys=('742FA4E95829B6C5EAC6B85710BB7AF6FEBBD6AB') # Daniel Salzman
+
+build() {
+ cd ${pkgname}-${pkgver}
+
+ ./configure \
+ --prefix=/usr \
+ --sbindir=/usr/bin \
+ --sysconfdir=/etc \
+ --localstatedir=/var/lib \
+ --libexecdir=/usr/lib/knot \
+ --with-rundir=/run/knot \
+ --with-storage=/var/lib/knot \
+ --enable-recvmmsg \
+ --enable-dnstap \
+ --enable-systemd \
+ --enable-reuseport \
+ --disable-silent-rules \
+ --disable-static
+
+ make
+}
+
+check() {
+ cd ${pkgname}-${pkgver}
+ make check
+}
+
+package() {
+ cd ${pkgname}-${pkgver}
+
+ make DESTDIR="${pkgdir}" install
+
+ rm "${pkgdir}"/etc/knot/example.com.zone
+ mv "${pkgdir}"/etc/knot/{knot.sample.conf,knot.conf}
+
+ install -Dm644 distro/common/${pkgname}.service -t "${pkgdir}"/usr/lib/systemd/system/
+ install -Dm644 distro/pkg/arch/${pkgname}.sysusers "${pkgdir}"/usr/lib/sysusers.d/${pkgname}.conf
+}
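
Illustration (not part of the patch): the PKGBUILD is an apkg template; pkgver is
filled in when {{ version }} is rendered, after which it follows the ordinary
makepkg flow. A possible local build, assuming the template has been rendered and
the knot tarball named in source= sits next to the PKGBUILD:

    $ cd distro/pkg/arch
    $ makepkg --syncdeps        # install depends, run build(), check(), package()
    $ sudo pacman -U knot-*.pkg.tar.zst
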
diff --git a/distro/pkg/arch/knot.sysusers b/distro/pkg/arch/knot.sysusers
new file mode 100644
index 0000000..735db76
--- /dev/null
+++ b/distro/pkg/arch/knot.sysusers
@@ -0,0 +1 @@
+u knot - "Knot DNS Daemon User"
diff --git a/distro/pkg/arch/knot.tmpfiles.arch b/distro/pkg/arch/knot.tmpfiles.arch
new file mode 100644
index 0000000..b20df6a
--- /dev/null
+++ b/distro/pkg/arch/knot.tmpfiles.arch
@@ -0,0 +1,2 @@
+d /run/knot 0755 knot knot - -
+d /var/lib/knot 0700 knot knot - -
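
Illustration (not part of the patch): these tmpfiles.d entries create /run/knot
and /var/lib/knot with the listed owner and mode at boot. Once the file is
installed they can also be applied by hand; the installed path below is an
assumption:

    $ sudo systemd-tmpfiles --create /usr/lib/tmpfiles.d/knot.conf
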
diff --git a/distro/pkg/deb-nolibxdp/changelog b/distro/pkg/deb-nolibxdp/changelog
new file mode 100644
index 0000000..123f92b
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/changelog
@@ -0,0 +1,6 @@
+knot ({{ version }}-cznic.{{ release }}) unstable; urgency=medium
+
+ * upstream package
+ * see https://www.knot-dns.cz
+
+ -- Knot DNS {{ now }}
diff --git a/distro/pkg/deb-nolibxdp/clean b/distro/pkg/deb-nolibxdp/clean
new file mode 100644
index 0000000..b2a9f3f
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/clean
@@ -0,0 +1,2 @@
+doc/modules
+.pybuild/
diff --git a/distro/pkg/deb-nolibxdp/compat b/distro/pkg/deb-nolibxdp/compat
new file mode 100644
index 0000000..b4de394
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/compat
@@ -0,0 +1 @@
+11
diff --git a/distro/pkg/deb-nolibxdp/control b/distro/pkg/deb-nolibxdp/control
new file mode 100644
index 0000000..495c080
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/control
@@ -0,0 +1,283 @@
+Source: knot
+Section: net
+Priority: optional
+Maintainer: Knot DNS
+Uploaders:
+ Jakub Ružička ,
+ Daniel Salzman ,
+Build-Depends-Indep:
+ python3-setuptools,
+ python3-sphinx,
+Build-Depends:
+ autoconf,
+ automake,
+ debhelper (>= 11),
+ dh-python,
+ libbpf-dev,
+ libcap-ng-dev,
+ libedit-dev,
+ libfstrm-dev,
+ libgnutls28-dev,
+ libidn2-dev,
+ liblmdb-dev,
+ libmaxminddb-dev,
+ libmnl-dev,
+ libnghttp2-dev,
+ libprotobuf-c-dev,
+ libsystemd-dev [linux-any] | libsystemd-daemon-dev [linux-any],
+ libsystemd-dev [linux-any] | libsystemd-journal-dev [linux-any],
+ libtool,
+ liburcu-dev,
+ pkgconf,
+ protobuf-c-compiler,
+ python3-all,
+ softhsm2 ,
+Standards-Version: 4.5.0
+Homepage: https://www.knot-dns.cz/
+Vcs-Browser: https://gitlab.nic.cz/knot/knot-dns
+Vcs-Git: https://gitlab.nic.cz/knot/knot-dns.git
+Rules-Requires-Root: no
+
+Package: knot
+Architecture: any
+Depends:
+ adduser,
+ libdnssec9 (= ${binary:Version}),
+ libknot15 (= ${binary:Version}),
+ libzscanner4 (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Pre-Depends:
+ ${misc:Pre-Depends},
+Suggests:
+ systemd,
+Description: Authoritative domain name server
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+
+Package: libknot15
+Architecture: any
+Depends:
+ ${misc:Depends},
+ ${shlibs:Depends},
+Section: libs
+Description: DNS shared library from Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package provides a DNS shared library used by Knot DNS and
+ Knot Resolver.
+
+Package: libzscanner4
+Architecture: any
+Depends:
+ ${misc:Depends},
+ ${shlibs:Depends},
+Section: libs
+Description: DNS zone-parsing shared library from Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package provides a fast zone parser shared library used by Knot
+ DNS and Knot Resolver.
+
+Package: libdnssec9
+Architecture: any
+Depends:
+ ${misc:Depends},
+ ${shlibs:Depends},
+Section: libs
+Description: DNSSEC shared library from Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package provides common DNSSEC shared library used by Knot DNS
+ and Knot Resolver.
+
+Package: libknot-dev
+Architecture: any
+Depends:
+ libdnssec9 (= ${binary:Version}),
+ libgnutls28-dev,
+ libknot15 (= ${binary:Version}),
+ libzscanner4 (= ${binary:Version}),
+ ${misc:Depends},
+Section: libdevel
+Description: Knot DNS shared library development files
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package provides development files for shared libraries from Knot DNS.
+
+Package: knot-dnsutils
+Architecture: any
+Depends:
+ libdnssec9 (= ${binary:Version}),
+ libknot15 (= ${binary:Version}),
+ libzscanner4 (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Description: DNS clients provided with Knot DNS (kdig, knsupdate)
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package delivers various DNS client programs from Knot DNS.
+ .
+ - kdig - query a DNS server in various ways
+ - knsupdate - perform dynamic updates (See RFC2136)
+ - kxdpgun - send a DNS query stream over UDP to a DNS server
+ .
+ Those clients were designed to be almost 1:1 compatible with BIND dnsutils,
+ but they provide some enhancements, which are documented.
+ .
+ WARNING: knslookup is not provided as it is considered obsolete.
+
+Package: knot-dnssecutils
+Architecture: any
+Depends:
+ libdnssec9 (= ${binary:Version}),
+ libknot15 (= ${binary:Version}),
+ libzscanner4 (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Description: DNSSEC tools provided with Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package delivers various DNSSEC tools from Knot DNS.
+ .
+ - kzonecheck
+ - kzonesign
+ - knsec3hash
+
+Package: knot-host
+Architecture: any
+Depends:
+ libdnssec9 (= ${binary:Version}),
+ libknot15 (= ${binary:Version}),
+ libzscanner4 (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Description: Version of 'host' bundled with Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package provides the 'host' program from Knot DNS. This program is
+ designed to be almost 1:1 compatible with BIND 9.x 'host' program.
+
+Package: knot-module-dnstap
+Architecture: any
+Depends:
+ knot (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Description: dnstap module for Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package contains dnstap module for logging DNS traffic.
+
+Package: knot-module-geoip
+Architecture: any
+Depends:
+ knot (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Description: geoip module for Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package contains geoip module for geography-based responses.
+
+Package: knot-doc
+Architecture: all
+Multi-Arch: foreign
+Depends:
+ libjs-jquery,
+ libjs-sphinxdoc,
+ libjs-underscore,
+ ${misc:Depends},
+Section: doc
+Description: Documentation for Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package provides various documents that are useful for
+ maintaining a working Knot DNS installation.
+
+Package: knot-exporter
+Architecture: all
+Depends:
+ ${misc:Depends},
+ ${python3:Depends},
+Section: python
+Description: Prometheus exporter for Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package provides Python Prometheus exporter for Knot DNS.
+
+Package: python3-libknot
+Architecture: all
+Depends:
+ libknot15 (= ${binary:Version}),
+ ${misc:Depends},
+ ${python3:Depends},
+Section: python
+Description: Python bindings for libknot
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich,
+ open-source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and hence is well suited to run anything from the root zone,
+ through top-level domains, down to many smaller standard domain names.
+ .
+ This package provides Python bindings for the libknot shared library.
diff --git a/distro/pkg/deb-nolibxdp/copyright b/distro/pkg/deb-nolibxdp/copyright
new file mode 100644
index 0000000..feabb26
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/copyright
@@ -0,0 +1,200 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Upstream-Name: Knot DNS
+Upstream-Contact: knot-dns@labs.nic.cz
+Source: https://secure.nic.cz/files/knot-dns/
+
+Files: *
+Copyright: 2011-2023 CZ.NIC, z.s.p.o.
+License: GPL-3+
+
+Files: m4/*
+Copyright: 2011-2022 CZ.NIC, z.s.p.o.
+ 1996-2001, 2003-2015 Free Software Foundation, Inc.
+License: GPL-3+
+
+Files: install-sh
+Copyright: 1994 X Consortium
+License: MIT
+
+Files: debian/* distro/pkg/deb/*
+Copyright: 2011-2023 CZ.NIC, z.s.p.o.
+ 2011 Ondřej Surý
+License: GPL-3+
+
+Files: tests/tap/*
+Copyright: 2000-2001, 2004, 2006-2012 Russ Allbery
+ 2006, 2007, 2008, 2013 The Board of Trustees of the Leland Stanford Junior University
+License: MIT
+
+Files: tests/tap/files.*
+Copyright: 2011-2019 CZ.NIC, z.s.p.o.
+License: GPL-3+
+
+Files: src/contrib/dnstap/*
+Copyright: 2014, Farsight Security, Inc.
+ 2011-2022 CZ.NIC, z.s.p.o.
+License: GPL-3+
+
+Files: src/contrib/libngtcp2/*
+Copyright: 2016-2023 ngtcp2 contributors
+ 2012-2017 nghttp2 contributors
+License: MIT
+
+Files: src/contrib/musl/*
+Copyright: 2005-2020 Rich Felker, et al.
+License: MIT
+
+Files: src/contrib/openbsd/siphash.*
+Copyright: 2013 Andre Oppermann
+License: BSD-3-Clause
+
+Files: src/contrib/openbsd/strl*
+Copyright: 1998 Todd C. Miller
+License: ISC
+
+Files: src/contrib/proxyv2/*
+Copyright: 2022 CZ.NIC, z.s.p.o.
+ 2021 Fastly, Inc.
+License: GPL-3+
+
+Files: src/contrib/qp-trie/*
+Copyright: 2011-2019 CZ.NIC, z.s.p.o.
+ 2018 Tony Finch
+License: GPL-3+
+
+Files: src/contrib/ucw/*
+Copyright: 2011-2022 CZ.NIC, z.s.p.o.
+ 1997-2017 Martin Mares
+ 2007 Pavel Charvat
+ 2012 Ondrej Filip
+License: LGPL-2+
+
+Files: src/contrib/ucw/lists.*
+Copyright: 2011-2022 CZ.NIC, z.s.p.o.
+ 1998 Martin Mares
+ 1998 Pavel Machek
+ 1998 Ondrej Zajicek
+License: GPL-2+
+
+Files: src/contrib/url-parser/*
+Copyright: 2020 Igor Sysoev
+ 2020 Nginx, Inc.
+ 2020 Joyent, Inc.
+License: MIT
+
+Files: src/contrib/vpool/*
+Copyright: 2006, 2008 Alexey Vatchenko
+License: ISC
+
+Files: tests-fuzz/main.c
+Copyright: 2011-2019 CZ.NIC, z.s.p.o.
+ 2017 Tim Ruehsen
+License: MIT
+
+License: GPL-3+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+ .
+ This program is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <https://www.gnu.org/licenses/>.
+ .
+ On Debian systems, the full text of the GNU General Public License
+ version 3 can be found in the file `/usr/share/common-licenses/GPL-3'.
+
+License: GPL-2+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+ .
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <https://www.gnu.org/licenses/>.
+ .
+ On Debian GNU/Linux systems, the complete text of the GNU General
+ Public License can be found in `/usr/share/common-licenses/GPL-2'.
+
+License: LGPL-2+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Library General Public
+ License as published by the Free Software Foundation; either
+ version 2 of the License, or (at your option) any later version.
+ .
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Library General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <https://www.gnu.org/licenses/>.
+ .
+ On Debian GNU/Linux systems, the complete text of the GNU Lesser
+ General Public License can be found in `/usr/share/common-licenses/LGPL-2'.
+
+License: ISC
+ Permission to use, copy, modify, and distribute this software for any
+ purpose with or without fee is hereby granted, provided that the above
+ copyright notice and this permission notice appear in all copies.
+ .
+ THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+License: BSD-3-Clause
+ Redistribution and use in source and binary forms, with or without modification,
+ are permitted provided that the following conditions are met:
+ 1. Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+ 3. Neither the name of the copyright holder nor the names of its contributors
+ may be used to endorse or promote products derived from this software without
+ specific prior written permission.
+ .
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ OF THE POSSIBILITY OF SUCH DAMAGE.
+
+License: MIT
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+ .
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+ .
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
diff --git a/distro/pkg/deb-nolibxdp/cz.nic.knotd.conf b/distro/pkg/deb-nolibxdp/cz.nic.knotd.conf
new file mode 100644
index 0000000..50af87a
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/cz.nic.knotd.conf
@@ -0,0 +1,9 @@
+
+
+
+
+
+
+
+
+
diff --git a/distro/pkg/deb-nolibxdp/docs b/distro/pkg/deb-nolibxdp/docs
new file mode 100644
index 0000000..b43bf86
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/docs
@@ -0,0 +1 @@
+README.md
diff --git a/distro/pkg/deb-nolibxdp/knot-dnssecutils.install b/distro/pkg/deb-nolibxdp/knot-dnssecutils.install
new file mode 100644
index 0000000..20009e8
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot-dnssecutils.install
@@ -0,0 +1,3 @@
+usr/bin/knsec3hash
+usr/bin/kzonecheck
+usr/bin/kzonesign
diff --git a/distro/pkg/deb-nolibxdp/knot-dnssecutils.manpages b/distro/pkg/deb-nolibxdp/knot-dnssecutils.manpages
new file mode 100644
index 0000000..913c4cb
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot-dnssecutils.manpages
@@ -0,0 +1,3 @@
+usr/share/man/man1/knsec3hash.1
+usr/share/man/man1/kzonecheck.1
+usr/share/man/man1/kzonesign.1
diff --git a/distro/pkg/deb-nolibxdp/knot-dnsutils.install b/distro/pkg/deb-nolibxdp/knot-dnsutils.install
new file mode 100644
index 0000000..e2f2a8a
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot-dnsutils.install
@@ -0,0 +1,3 @@
+usr/bin/kdig
+usr/bin/knsupdate
+usr/sbin/kxdpgun
diff --git a/distro/pkg/deb-nolibxdp/knot-dnsutils.manpages b/distro/pkg/deb-nolibxdp/knot-dnsutils.manpages
new file mode 100644
index 0000000..67254d9
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot-dnsutils.manpages
@@ -0,0 +1,3 @@
+usr/share/man/man1/kdig.1
+usr/share/man/man1/knsupdate.1
+usr/share/man/man8/kxdpgun.8
diff --git a/distro/pkg/deb-nolibxdp/knot-doc.install b/distro/pkg/deb-nolibxdp/knot-doc.install
new file mode 100644
index 0000000..c2a345d
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot-doc.install
@@ -0,0 +1 @@
+usr/share/doc/knot/* /usr/share/doc/knot-doc/
diff --git a/distro/pkg/deb-nolibxdp/knot-doc.links b/distro/pkg/deb-nolibxdp/knot-doc.links
new file mode 100644
index 0000000..1376b3a
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot-doc.links
@@ -0,0 +1,5 @@
+usr/share/javascript/jquery/jquery.min.js usr/share/doc/knot-doc/_static/jquery.js
+usr/share/javascript/sphinxdoc/1.0/doctools.js usr/share/doc/knot-doc/_static/doctools.js
+usr/share/javascript/sphinxdoc/1.0/language_data.js usr/share/doc/knot-doc/_static/language_data.js
+usr/share/javascript/sphinxdoc/1.0/searchtools.js usr/share/doc/knot-doc/_static/searchtools.js
+usr/share/javascript/underscore/underscore.min.js usr/share/doc/knot-doc/_static/underscore.js
diff --git a/distro/pkg/deb-nolibxdp/knot-exporter.install b/distro/pkg/deb-nolibxdp/knot-exporter.install
new file mode 100644
index 0000000..9b2bf4f
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot-exporter.install
@@ -0,0 +1,3 @@
+usr/lib/python3*/dist-packages/knot_exporter-*.egg-info
+usr/lib/python3*/dist-packages/knot_exporter/*.py
+usr/bin/knot-exporter /usr/sbin/
diff --git a/distro/pkg/deb-nolibxdp/knot-host.install b/distro/pkg/deb-nolibxdp/knot-host.install
new file mode 100644
index 0000000..51bacf0
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot-host.install
@@ -0,0 +1 @@
+usr/bin/khost
diff --git a/distro/pkg/deb-nolibxdp/knot-host.manpages b/distro/pkg/deb-nolibxdp/knot-host.manpages
new file mode 100644
index 0000000..4891e2c
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot-host.manpages
@@ -0,0 +1 @@
+usr/share/man/man1/khost.1
diff --git a/distro/pkg/deb-nolibxdp/knot-module-dnstap.install b/distro/pkg/deb-nolibxdp/knot-module-dnstap.install
new file mode 100644
index 0000000..983455e
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot-module-dnstap.install
@@ -0,0 +1 @@
+usr/lib/*/knot/modules-*/dnstap.so
diff --git a/distro/pkg/deb-nolibxdp/knot-module-geoip.install b/distro/pkg/deb-nolibxdp/knot-module-geoip.install
new file mode 100644
index 0000000..16d87c3
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot-module-geoip.install
@@ -0,0 +1 @@
+usr/lib/*/knot/modules-*/geoip.so
diff --git a/distro/pkg/deb-nolibxdp/knot.dirs b/distro/pkg/deb-nolibxdp/knot.dirs
new file mode 100644
index 0000000..6e937aa
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot.dirs
@@ -0,0 +1 @@
+var/lib/knot
diff --git a/distro/pkg/deb-nolibxdp/knot.init b/distro/pkg/deb-nolibxdp/knot.init
new file mode 100644
index 0000000..3f8fcae
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot.init
@@ -0,0 +1,149 @@
+#!/bin/sh
+### BEGIN INIT INFO
+# Provides: knot
+# Required-Start: $network $local_fs $remote_fs $syslog
+# Required-Stop: $remote_fs $syslog
+# Default-Start: 2 3 4 5
+# Default-Stop: 0 1 6
+# Short-Description: authoritative domain name server
+# Description: Knot DNS is an authoritative-only domain name server
+### END INIT INFO
+
+# Author: Ondřej Surý
+
+# PATH should only include /usr/* if it runs after the mountnfs.sh script
+PATH=/sbin:/usr/sbin:/bin:/usr/bin
+DESC="Knot DNS server"             # Short description of the service
+NAME=knotd                         # Name of the server daemon
+DAEMON=/usr/sbin/$NAME             # Full path to the daemon binary
+PIDFILE=/run/knot/knot.pid
+SCRIPTNAME=/etc/init.d/knot
+KNOTC=/usr/sbin/knotc
+RUNDIR=/run/knot
+
+# Exit if the package is not installed
+[ -x $DAEMON ] || exit 0
+
+KNOTD_ARGS=""
+
+# Read configuration variable file if it is present
+[ -r /etc/default/knot ] && . /etc/default/knot
+
+DAEMON_ARGS="-d $KNOTD_ARGS"
+
+# Define LSB log_* functions.
+# Depend on sysvinit-utils (>= 2.96) to ensure that this file is present.
+. /lib/lsb/init-functions
+
+#
+# Function that starts the daemon/service
+#
+do_start()
+{
+ # Return
+ # 0 if daemon has been started
+ # 1 if daemon was already running
+ # 2 if daemon could not be started
+
+ $KNOTC status >/dev/null 2>/dev/null \
+ && return 1
+
+ start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
+ || return 1
+ start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
+ $DAEMON_ARGS \
+ || return 2
+}
+
+#
+# Function that stops the daemon/service
+#
+do_stop()
+{
+ # Return
+ # 0 if daemon has been stopped
+ # 1 if daemon was already stopped
+ # 2 if daemon could not be stopped
+ # other if a failure occurred
+
+ $KNOTC status >/dev/null 2>/dev/null \
+ || return 1
+
+ $KNOTC stop >/dev/null
+ RETVAL="$?"
+	[ "$RETVAL" != 0 ] && return 2
+
+ # Many daemons don't delete their pidfiles when they exit.
+ rm -f $PIDFILE
+ return 0
+}
+
+do_reload() {
+ $KNOTC reload >/dev/null
+ return $?
+}
+
+do_mkrundir() {
+ mkdir -p $RUNDIR
+ chmod 0755 $RUNDIR
+ chown knot:knot $RUNDIR
+}
+
+case "$1" in
+ start)
+ do_mkrundir
+ log_daemon_msg "Starting $DESC " "$NAME"
+ do_start
+ case "$?" in
+ 0|1) log_end_msg 0 ;;
+ 2) log_end_msg 1 ;;
+ esac
+ ;;
+ stop)
+ log_daemon_msg "Stopping $DESC" "$NAME"
+ do_stop
+ case "$?" in
+ 0|1) log_end_msg 0 ;;
+ 2) log_end_msg 1 ;;
+ esac
+ ;;
+ status)
+ STATUS=$($KNOTC status 2>&1 >/dev/null)
+ RETVAL=$?
+ if [ $RETVAL = 0 ]; then
+ log_success_msg "$NAME is running"
+ else
+ log_failure_msg "$NAME is not running ($STATUS)"
+ fi
+ exit $RETVAL
+ ;;
+ reload|force-reload)
+ log_daemon_msg "Reloading $DESC" "$NAME"
+ do_reload
+ log_end_msg $?
+ ;;
+ restart)
+ log_daemon_msg "Restarting $DESC" "$NAME"
+ do_stop
+ case "$?" in
+ 0|1)
+ do_start
+ case "$?" in
+ 0) log_end_msg 0 ;;
+ 1) log_end_msg 1 ;; # Old process is still running
+ *) log_end_msg 1 ;; # Failed to start
+ esac
+ ;;
+ *)
+ # Failed to stop
+ log_end_msg 1
+ ;;
+ esac
+ ;;
+ *)
+ echo "Usage: $SCRIPTNAME {start|stop|status|restart|reload|force-reload}" >&2
+ exit 3
+ ;;
+esac
+
+:
diff --git a/distro/pkg/deb-nolibxdp/knot.install b/distro/pkg/deb-nolibxdp/knot.install
new file mode 100644
index 0000000..a31224f
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot.install
@@ -0,0 +1,7 @@
+debian/cz.nic.knotd.conf usr/share/dbus-1/system.d/
+etc/knot/knot.conf
+usr/sbin/kcatalogprint
+usr/sbin/keymgr
+usr/sbin/kjournalprint
+usr/sbin/knotc
+usr/sbin/knotd
diff --git a/distro/pkg/deb-nolibxdp/knot.manpages b/distro/pkg/deb-nolibxdp/knot.manpages
new file mode 100644
index 0000000..5d23e9f
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot.manpages
@@ -0,0 +1,6 @@
+usr/share/man/man5/knot.conf.5
+usr/share/man/man8/kcatalogprint.8
+usr/share/man/man8/keymgr.8
+usr/share/man/man8/kjournalprint.8
+usr/share/man/man8/knotc.8
+usr/share/man/man8/knotd.8
diff --git a/distro/pkg/deb-nolibxdp/knot.postinst b/distro/pkg/deb-nolibxdp/knot.postinst
new file mode 100644
index 0000000..da747c8
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot.postinst
@@ -0,0 +1,16 @@
+#!/bin/sh
+set -e
+
+if [ "$1" = "configure" ]; then
+ if ! getent passwd knot > /dev/null; then
+ adduser --quiet --system --group --no-create-home --home /var/lib/knot knot
+ fi
+
+ dpkg-statoverride --list /var/lib/knot >/dev/null 2>&1 || dpkg-statoverride --update --add root knot 0770 /var/lib/knot
+ dpkg-statoverride --list /etc/knot/knot.conf >/dev/null 2>&1 || dpkg-statoverride --update --add root knot 0640 /etc/knot/knot.conf
+ dpkg-statoverride --list /etc/knot >/dev/null 2>&1 || dpkg-statoverride --update --add root knot 0750 /etc/knot
+fi
+
+#DEBHELPER#
+
+exit 0
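
Illustration (not part of the patch): the dpkg-statoverride calls above register
non-default ownership and permissions so that package upgrades preserve them.
They can be reviewed later with the same tool, for example:

    $ dpkg-statoverride --list | grep knot
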
diff --git a/distro/pkg/deb-nolibxdp/knot.postrm b/distro/pkg/deb-nolibxdp/knot.postrm
new file mode 100644
index 0000000..14b3d69
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot.postrm
@@ -0,0 +1,21 @@
+#!/bin/sh
+set -e
+
+if test "$1" = "purge"; then
+ state_dir=/var/lib/knot
+ for db_name in "catalog" "confdb" "journal" "keys" "timers"; do
+ rm -rf $state_dir/$db_name >/dev/null 2>&1 || true
+ done
+ rmdir $state_dir >/dev/null 2>&1 || true
+	[ -n "$(ls -A "$state_dir" 2>/dev/null)" ] && echo "Notice: there is still data in ${state_dir}, please check."
+
+ dpkg-statoverride --remove /var/lib/knot >/dev/null 2>&1 || true
+ dpkg-statoverride --remove /etc/knot/knot.conf >/dev/null 2>&1 || true
+ dpkg-statoverride --remove /etc/knot >/dev/null 2>&1 || true
+
+ deluser --quiet knot >/dev/null 2>&1 || true
+fi
+
+#DEBHELPER#
+
+exit 0
diff --git a/distro/pkg/deb-nolibxdp/knot.service b/distro/pkg/deb-nolibxdp/knot.service
new file mode 100644
index 0000000..e6c13ed
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/knot.service
@@ -0,0 +1,30 @@
+[Unit]
+Description=Knot DNS server
+Wants=network-online.target
+After=network-online.target
+Documentation=man:knotd(8) man:knot.conf(5) man:knotc(8)
+
+[Service]
+Type=notify
+User=knot
+Group=knot
+CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETPCAP
+AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETPCAP
+ExecStartPre=/usr/sbin/knotc conf-check
+ExecStart=/usr/sbin/knotd -m "$KNOT_CONF_MAX_SIZE"
+ExecReload=/bin/kill -HUP $MAINPID
+Restart=on-abort
+LimitNOFILE=1048576
+TimeoutStopSec=300
+# Extend the systemd startup timeout by this value (seconds) for each zone
+Environment="KNOT_ZONE_LOAD_TIMEOUT_SEC=180"
+# Maximum size (MiB) of a configuration database
+Environment="KNOT_CONF_MAX_SIZE=512"
+
+# Expected systemd >= v239
+RuntimeDirectory=knot
+StateDirectory=knot
+NoNewPrivileges=yes
+
+[Install]
+WantedBy=multi-user.target
diff --git a/distro/pkg/deb-nolibxdp/libdnssec9.install b/distro/pkg/deb-nolibxdp/libdnssec9.install
new file mode 100644
index 0000000..17a9fe6
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/libdnssec9.install
@@ -0,0 +1 @@
+usr/lib/*/libdnssec.so.*
diff --git a/distro/pkg/deb-nolibxdp/libdnssec9.symbols b/distro/pkg/deb-nolibxdp/libdnssec9.symbols
new file mode 100644
index 0000000..c3ab2ed
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/libdnssec9.symbols
@@ -0,0 +1,96 @@
+libdnssec.so.9 libdnssec9 #MINVER#
+* Build-Depends-Package: libknot-dev
+ dnssec_algorithm_digest_support@Base 3.2.0
+ dnssec_algorithm_key_size_check@Base 3.2.0
+ dnssec_algorithm_key_size_default@Base 3.2.0
+ dnssec_algorithm_key_size_range@Base 3.2.0
+ dnssec_algorithm_key_support@Base 3.2.0
+ dnssec_algorithm_reproducible@Base 3.2.0
+ dnssec_binary_alloc@Base 3.2.0
+ dnssec_binary_cmp@Base 3.2.0
+ dnssec_binary_dup@Base 3.2.0
+ dnssec_binary_free@Base 3.2.0
+ dnssec_binary_from_base64@Base 3.2.0
+ dnssec_binary_resize@Base 3.2.0
+ dnssec_binary_to_base64@Base 3.2.0
+ dnssec_crypto_cleanup@Base 3.2.0
+ dnssec_crypto_init@Base 3.2.0
+ dnssec_crypto_reinit@Base 3.2.0
+ dnssec_digest@Base 3.2.0
+ dnssec_digest_finish@Base 3.2.0
+ dnssec_digest_init@Base 3.2.0
+ dnssec_key_can_sign@Base 3.2.0
+ dnssec_key_can_verify@Base 3.2.0
+ dnssec_key_clear@Base 3.2.0
+ dnssec_key_create_ds@Base 3.2.0
+ dnssec_key_dup@Base 3.2.0
+ dnssec_key_free@Base 3.2.0
+ dnssec_key_get_algorithm@Base 3.2.0
+ dnssec_key_get_dname@Base 3.2.0
+ dnssec_key_get_flags@Base 3.2.0
+ dnssec_key_get_keyid@Base 3.2.0
+ dnssec_key_get_keytag@Base 3.2.0
+ dnssec_key_get_protocol@Base 3.2.0
+ dnssec_key_get_pubkey@Base 3.2.0
+ dnssec_key_get_rdata@Base 3.2.0
+ dnssec_key_get_size@Base 3.2.0
+ dnssec_key_load_pkcs8@Base 3.2.0
+ dnssec_key_new@Base 3.2.0
+ dnssec_key_set_algorithm@Base 3.2.0
+ dnssec_key_set_dname@Base 3.2.0
+ dnssec_key_set_flags@Base 3.2.0
+ dnssec_key_set_protocol@Base 3.2.0
+ dnssec_key_set_pubkey@Base 3.2.0
+ dnssec_key_set_rdata@Base 3.2.0
+ dnssec_keyid_copy@Base 3.2.0
+ dnssec_keyid_equal@Base 3.2.0
+ dnssec_keyid_is_valid@Base 3.2.0
+ dnssec_keyid_normalize@Base 3.2.0
+ dnssec_keystore_close@Base 3.2.0
+ dnssec_keystore_deinit@Base 3.2.0
+ dnssec_keystore_generate@Base 3.2.0
+ dnssec_keystore_get_private@Base 3.2.0
+ dnssec_keystore_import@Base 3.2.0
+ dnssec_keystore_init@Base 3.2.0
+ dnssec_keystore_init_pkcs11@Base 3.2.0
+ dnssec_keystore_init_pkcs8@Base 3.2.0
+ dnssec_keystore_open@Base 3.2.0
+ dnssec_keystore_remove@Base 3.2.0
+ dnssec_keystore_set_private@Base 3.2.0
+ dnssec_keytag@Base 3.2.0
+ dnssec_nsec3_hash@Base 3.2.0
+ dnssec_nsec3_hash_length@Base 3.2.0
+ dnssec_nsec3_params_free@Base 3.2.0
+ dnssec_nsec3_params_from_rdata@Base 3.2.0
+ dnssec_nsec3_params_match@Base 3.2.0
+ dnssec_nsec_bitmap_add@Base 3.2.0
+ dnssec_nsec_bitmap_clear@Base 3.2.0
+ dnssec_nsec_bitmap_contains@Base 3.2.0
+ dnssec_nsec_bitmap_free@Base 3.2.0
+ dnssec_nsec_bitmap_new@Base 3.2.0
+ dnssec_nsec_bitmap_size@Base 3.2.0
+ dnssec_nsec_bitmap_write@Base 3.2.0
+ dnssec_pem_from_privkey@Base 3.2.0
+ dnssec_pem_from_x509@Base 3.2.0
+ dnssec_pem_to_privkey@Base 3.2.0
+ dnssec_pem_to_x509@Base 3.2.0
+ dnssec_random_binary@Base 3.2.0
+ dnssec_random_buffer@Base 3.2.0
+ dnssec_sign_add@Base 3.2.0
+ dnssec_sign_free@Base 3.2.0
+ dnssec_sign_init@Base 3.2.0
+ dnssec_sign_new@Base 3.2.0
+ dnssec_sign_verify@Base 3.2.0
+ dnssec_sign_write@Base 3.2.0
+ dnssec_strerror@Base 3.2.0
+ dnssec_tsig_add@Base 3.2.0
+ dnssec_tsig_algorithm_from_dname@Base 3.2.0
+ dnssec_tsig_algorithm_from_name@Base 3.2.0
+ dnssec_tsig_algorithm_size@Base 3.2.0
+ dnssec_tsig_algorithm_to_dname@Base 3.2.0
+ dnssec_tsig_algorithm_to_name@Base 3.2.0
+ dnssec_tsig_free@Base 3.2.0
+ dnssec_tsig_new@Base 3.2.0
+ dnssec_tsig_optimal_key_size@Base 3.2.0
+ dnssec_tsig_size@Base 3.2.0
+ dnssec_tsig_write@Base 3.2.0
diff --git a/distro/pkg/deb-nolibxdp/libknot-dev.install b/distro/pkg/deb-nolibxdp/libknot-dev.install
new file mode 100644
index 0000000..cb60d88
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/libknot-dev.install
@@ -0,0 +1,3 @@
+usr/include/
+usr/lib/*/*.so
+usr/lib/*/pkgconfig/*
diff --git a/distro/pkg/deb-nolibxdp/libknot15.install b/distro/pkg/deb-nolibxdp/libknot15.install
new file mode 100644
index 0000000..f9b9f93
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/libknot15.install
@@ -0,0 +1 @@
+usr/lib/*/libknot.so.*
diff --git a/distro/pkg/deb-nolibxdp/libknot15.symbols b/distro/pkg/deb-nolibxdp/libknot15.symbols
new file mode 100644
index 0000000..dea14ec
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/libknot15.symbols
@@ -0,0 +1,296 @@
+libknot.so.15 libknot15 #MINVER#
+* Build-Depends-Package: libknot-dev
+ KNOT_DB_LMDB_DUPSORT@Base 3.4.0
+ KNOT_DB_LMDB_INTEGERKEY@Base 3.4.0
+ KNOT_DB_LMDB_MAPASYNC@Base 3.4.0
+ KNOT_DB_LMDB_NOSYNC@Base 3.4.0
+ KNOT_DB_LMDB_NOTLS@Base 3.4.0
+ KNOT_DB_LMDB_RDONLY@Base 3.4.0
+ KNOT_DB_LMDB_WRITEMAP@Base 3.4.0
+ KNOT_DUMP_STYLE_DEFAULT@Base 3.4.0
+ knot_creds_cert@Base 3.4.0
+ knot_creds_free@Base 3.4.0
+ knot_creds_init@Base 3.4.0
+ knot_creds_init_peer@Base 3.4.0
+ knot_creds_update@Base 3.4.0
+ knot_ctl_accept@Base 3.4.0
+ knot_ctl_alloc@Base 3.4.0
+ knot_ctl_bind@Base 3.4.0
+ knot_ctl_clone@Base 3.4.0
+ knot_ctl_close@Base 3.4.0
+ knot_ctl_connect@Base 3.4.0
+ knot_ctl_free@Base 3.4.0
+ knot_ctl_receive@Base 3.4.0
+ knot_ctl_send@Base 3.4.0
+ knot_ctl_set_timeout@Base 3.4.0
+ knot_ctl_unbind@Base 3.4.0
+ knot_db_lmdb_api@Base 3.4.0
+ knot_db_lmdb_del_exact@Base 3.4.0
+ knot_db_lmdb_get_mapsize@Base 3.4.0
+ knot_db_lmdb_get_path@Base 3.4.0
+ knot_db_lmdb_get_usage@Base 3.4.0
+ knot_db_lmdb_iter_del@Base 3.4.0
+ knot_db_lmdb_txn_begin@Base 3.4.0
+ knot_db_trie_api@Base 3.4.0
+ knot_dname_cmp@Base 3.4.0
+ knot_dname_copy@Base 3.4.0
+ knot_dname_copy_lower@Base 3.4.0
+ knot_dname_free@Base 3.4.0
+ knot_dname_from_str@Base 3.4.0
+ knot_dname_in_bailiwick@Base 3.4.0
+ knot_dname_is_case_equal@Base 3.4.0
+ knot_dname_is_equal@Base 3.4.0
+ knot_dname_labels@Base 3.4.0
+ knot_dname_lf@Base 3.4.0
+ knot_dname_matched_labels@Base 3.4.0
+ knot_dname_prefixlen@Base 3.4.0
+ knot_dname_realsize@Base 3.4.0
+ knot_dname_replace_suffix@Base 3.4.0
+ knot_dname_size@Base 3.4.0
+ knot_dname_store@Base 3.4.0
+ knot_dname_to_lower@Base 3.4.0
+ knot_dname_to_str@Base 3.4.0
+ knot_dname_to_wire@Base 3.4.0
+ knot_dname_unpack@Base 3.4.0
+ knot_dname_wire_check@Base 3.4.0
+ knot_dname_with_null@Base 3.4.3
+ knot_dnssec_alg_names@Base 3.4.0
+ knot_edns_add_option@Base 3.4.0
+ knot_edns_alignment_size@Base 3.4.0
+ knot_edns_chain_parse@Base 3.4.0
+ knot_edns_chain_size@Base 3.4.0
+ knot_edns_chain_write@Base 3.4.0
+ knot_edns_client_subnet_get_addr@Base 3.4.0
+ knot_edns_client_subnet_parse@Base 3.4.0
+ knot_edns_client_subnet_set_addr@Base 3.4.0
+ knot_edns_client_subnet_size@Base 3.4.0
+ knot_edns_client_subnet_write@Base 3.4.0
+ knot_edns_cookie_client_check@Base 3.4.0
+ knot_edns_cookie_client_generate@Base 3.4.0
+ knot_edns_cookie_parse@Base 3.4.0
+ knot_edns_cookie_server_check@Base 3.4.0
+ knot_edns_cookie_server_generate@Base 3.4.0
+ knot_edns_cookie_size@Base 3.4.0
+ knot_edns_cookie_write@Base 3.4.0
+ knot_edns_ede_names@Base 3.4.0
+ knot_edns_get_ext_rcode@Base 3.4.0
+ knot_edns_get_option@Base 3.4.0
+ knot_edns_get_options@Base 3.4.0
+ knot_edns_get_version@Base 3.4.0
+ knot_edns_init@Base 3.4.0
+ knot_edns_keepalive_parse@Base 3.4.0
+ knot_edns_keepalive_size@Base 3.4.0
+ knot_edns_keepalive_write@Base 3.4.0
+ knot_edns_opt_names@Base 3.4.0
+ knot_edns_reserve_option@Base 3.4.0
+ knot_edns_set_ext_rcode@Base 3.4.0
+ knot_edns_set_version@Base 3.4.0
+ knot_edns_zoneversion_parse@Base 3.4.4
+ knot_edns_zoneversion_write@Base 3.4.4
+ knot_error_from_libdnssec@Base 3.4.0
+ knot_eth_mtu@Base 3.4.0
+ knot_eth_name_from_addr@Base 3.4.0
+ knot_eth_queues@Base 3.4.0
+ knot_eth_rss@Base 3.4.0
+ knot_eth_vlans@Base 3.4.0
+ knot_eth_xdp_mode@Base 3.4.0
+ knot_get_obsolete_rdata_descriptor@Base 3.4.0
+ knot_get_rdata_descriptor@Base 3.4.0
+ knot_naptr_header_size@Base 3.4.0
+ knot_opcode_names@Base 3.4.0
+ knot_opt_code_to_string@Base 3.4.0
+ knot_pkt_begin@Base 3.4.0
+ knot_pkt_clear@Base 3.4.0
+ knot_pkt_copy@Base 3.4.0
+ knot_pkt_ext_rcode@Base 3.4.0
+ knot_pkt_ext_rcode_name@Base 3.4.0
+ knot_pkt_free@Base 3.4.0
+ knot_pkt_init_response@Base 3.4.0
+ knot_pkt_new@Base 3.4.0
+ knot_pkt_parse@Base 3.4.0
+ knot_pkt_parse_question@Base 3.4.0
+ knot_pkt_put_question@Base 3.4.0
+ knot_pkt_put_rotate@Base 3.4.0
+ knot_pkt_reclaim@Base 3.4.0
+ knot_pkt_reserve@Base 3.4.0
+ knot_probe_alloc@Base 3.4.0
+ knot_probe_consume@Base 3.4.0
+ knot_probe_data_set@Base 3.4.0
+ knot_probe_fd@Base 3.4.0
+ knot_probe_free@Base 3.4.0
+ knot_probe_produce@Base 3.4.0
+ knot_probe_set_consumer@Base 3.4.0
+ knot_probe_set_producer@Base 3.4.0
+ knot_probe_tcp_rtt@Base 3.4.0
+ knot_quic_cleanup@Base 3.4.0
+ knot_quic_client@Base 3.4.0
+ knot_quic_conn_block@Base 3.4.0
+ knot_quic_conn_get_stream@Base 3.4.0
+ knot_quic_conn_local_port@Base 3.4.0
+ knot_quic_conn_new_stream@Base 3.4.0
+ knot_quic_conn_next_timeout@Base 3.4.0
+ knot_quic_conn_rtt@Base 3.4.0
+ knot_quic_conn_stream_free@Base 3.4.0
+ knot_quic_handle@Base 3.4.0
+ knot_quic_hanle_expiry@Base 3.4.0
+ knot_quic_send@Base 3.4.0
+ knot_quic_session_available@Base 3.4.0
+ knot_quic_session_load@Base 3.4.0
+ knot_quic_session_save@Base 3.4.0
+ knot_quic_stream_add_data@Base 3.4.0
+ knot_quic_stream_get_process@Base 3.4.0
+ knot_quic_table_free@Base 3.4.0
+ knot_quic_table_new@Base 3.4.0
+ knot_quic_table_rem@Base 3.4.0
+ knot_quic_table_sweep@Base 3.4.0
+ knot_rcode_names@Base 3.4.0
+ knot_rdataset_add@Base 3.4.0
+ knot_rdataset_at@Base 3.4.0
+ knot_rdataset_clear@Base 3.4.0
+ knot_rdataset_copy@Base 3.4.0
+ knot_rdataset_eq@Base 3.4.0
+ knot_rdataset_intersect@Base 3.4.0
+ knot_rdataset_intersect2@Base 3.4.0
+ knot_rdataset_member@Base 3.4.0
+ knot_rdataset_merge@Base 3.4.0
+ knot_rdataset_subset@Base 3.4.0
+ knot_rdataset_subtract@Base 3.4.0
+ knot_rrclass_from_string@Base 3.4.0
+ knot_rrclass_to_string@Base 3.4.0
+ knot_rrset_add_rdata@Base 3.4.0
+ knot_rrset_clear@Base 3.4.0
+ knot_rrset_copy@Base 3.4.0
+ knot_rrset_equal@Base 3.4.0
+ knot_rrset_free@Base 3.4.0
+ knot_rrset_is_nsec3rel@Base 3.4.0
+ knot_rrset_new@Base 3.4.0
+ knot_rrset_rr_from_wire@Base 3.4.0
+ knot_rrset_rr_to_canonical@Base 3.4.0
+ knot_rrset_size@Base 3.4.0
+ knot_rrset_to_wire_extra@Base 3.4.0
+ knot_rrset_txt_dump@Base 3.4.0
+ knot_rrset_txt_dump_data@Base 3.4.0
+ knot_rrset_txt_dump_edns@Base 3.4.0
+ knot_rrset_txt_dump_header@Base 3.4.0
+ knot_rrtype_additional_needed@Base 3.4.0
+ knot_rrtype_from_string@Base 3.4.0
+ knot_rrtype_is_dnssec@Base 3.4.0
+ knot_rrtype_is_metatype@Base 3.4.0
+ knot_rrtype_should_be_lowercased@Base 3.4.0
+ knot_rrtype_to_string@Base 3.4.0
+ knot_strerror@Base 3.4.0
+ knot_svcb_param_names@Base 3.4.0
+ knot_tcp_cleanup@Base 3.4.0
+ knot_tcp_inbufs_upd@Base 3.4.0
+ knot_tcp_outbufs_ack@Base 3.4.0
+ knot_tcp_outbufs_add@Base 3.4.0
+ knot_tcp_outbufs_can_send@Base 3.4.0
+ knot_tcp_outbufs_usage@Base 3.4.0
+ knot_tcp_recv@Base 3.4.0
+ knot_tcp_reply_data@Base 3.4.0
+ knot_tcp_send@Base 3.4.0
+ knot_tcp_sweep@Base 3.4.0
+ knot_tcp_table_free@Base 3.4.0
+ knot_tcp_table_new@Base 3.4.0
+ knot_tls_conn_block@Base 3.4.0
+ knot_tls_conn_del@Base 3.4.0
+ knot_tls_conn_new@Base 3.4.0
+ knot_tls_ctx_free@Base 3.4.0
+ knot_tls_ctx_new@Base 3.4.0
+ knot_tls_handshake@Base 3.4.0
+ knot_tls_pin@Base 3.4.0
+ knot_tls_pin_check@Base 3.4.0
+ knot_tls_recv_dns@Base 3.4.0
+ knot_tls_send_dns@Base 3.4.0
+ knot_tls_session@Base 3.4.0
+ knot_tls_session_available@Base 3.4.1
+ knot_tls_session_load@Base 3.4.1
+ knot_tls_session_save@Base 3.4.1
+ knot_tsig_add@Base 3.4.0
+ knot_tsig_append@Base 3.4.0
+ knot_tsig_client_check@Base 3.4.0
+ knot_tsig_client_check_next@Base 3.4.0
+ knot_tsig_create_rdata@Base 3.4.0
+ knot_tsig_key_copy@Base 3.4.0
+ knot_tsig_key_deinit@Base 3.4.0
+ knot_tsig_key_init@Base 3.4.0
+ knot_tsig_key_init_file@Base 3.4.0
+ knot_tsig_key_init_str@Base 3.4.0
+ knot_tsig_rcode_names@Base 3.4.0
+ knot_tsig_rdata_alg@Base 3.4.0
+ knot_tsig_rdata_alg_name@Base 3.4.0
+ knot_tsig_rdata_error@Base 3.4.0
+ knot_tsig_rdata_fudge@Base 3.4.0
+ knot_tsig_rdata_is_ok@Base 3.4.0
+ knot_tsig_rdata_mac@Base 3.4.0
+ knot_tsig_rdata_mac_length@Base 3.4.0
+ knot_tsig_rdata_orig_id@Base 3.4.0
+ knot_tsig_rdata_other_data@Base 3.4.0
+ knot_tsig_rdata_other_data_length@Base 3.4.0
+ knot_tsig_rdata_set_fudge@Base 3.4.0
+ knot_tsig_rdata_set_mac@Base 3.4.0
+ knot_tsig_rdata_set_orig_id@Base 3.4.0
+ knot_tsig_rdata_set_other_data@Base 3.4.0
+ knot_tsig_rdata_set_time_signed@Base 3.4.0
+ knot_tsig_rdata_time_signed@Base 3.4.0
+ knot_tsig_rdata_tsig_timers_length@Base 3.4.0
+ knot_tsig_rdata_tsig_variables_length@Base 3.4.0
+ knot_tsig_server_check@Base 3.4.0
+ knot_tsig_sign@Base 3.4.0
+ knot_tsig_sign_next@Base 3.4.0
+ knot_tsig_wire_maxsize@Base 3.4.0
+ knot_tsig_wire_size@Base 3.4.0
+ knot_xdp_deinit@Base 3.4.0
+ knot_xdp_init@Base 3.4.0
+ knot_xdp_recv@Base 3.4.0
+ knot_xdp_recv_finish@Base 3.4.0
+ knot_xdp_reply_alloc@Base 3.4.0
+ knot_xdp_send@Base 3.4.0
+ knot_xdp_send_alloc@Base 3.4.0
+ knot_xdp_send_finish@Base 3.4.0
+ knot_xdp_send_free@Base 3.4.0
+ knot_xdp_send_prepare@Base 3.4.0
+ knot_xdp_socket_info@Base 3.4.0
+ knot_xdp_socket_stats@Base 3.4.0
+ knot_xdp_socket_fd@Base 3.4.0
+ yp_addr@Base 3.4.0
+ yp_addr_noport@Base 3.4.0
+ yp_addr_noport_to_bin@Base 3.4.0
+ yp_addr_noport_to_txt@Base 3.4.0
+ yp_addr_range_to_bin@Base 3.4.0
+ yp_addr_range_to_txt@Base 3.4.0
+ yp_addr_to_bin@Base 3.4.0
+ yp_addr_to_txt@Base 3.4.0
+ yp_base64_to_bin@Base 3.4.0
+ yp_base64_to_txt@Base 3.4.0
+ yp_bool_to_bin@Base 3.4.0
+ yp_bool_to_txt@Base 3.4.0
+ yp_deinit@Base 3.4.0
+ yp_dname_to_bin@Base 3.4.0
+ yp_dname_to_txt@Base 3.4.0
+ yp_format_id@Base 3.4.0
+ yp_format_key0@Base 3.4.0
+ yp_format_key1@Base 3.4.0
+ yp_hex_to_bin@Base 3.4.0
+ yp_hex_to_txt@Base 3.4.0
+ yp_init@Base 3.4.0
+ yp_int_to_bin@Base 3.4.0
+ yp_int_to_txt@Base 3.4.0
+ yp_item_to_bin@Base 3.4.0
+ yp_item_to_txt@Base 3.4.0
+ yp_option_to_bin@Base 3.4.0
+ yp_option_to_txt@Base 3.4.0
+ yp_parse@Base 3.4.0
+ yp_schema_check_deinit@Base 3.4.0
+ yp_schema_check_init@Base 3.4.0
+ yp_schema_check_parser@Base 3.4.0
+ yp_schema_check_str@Base 3.4.0
+ yp_schema_copy@Base 3.4.0
+ yp_schema_find@Base 3.4.0
+ yp_schema_free@Base 3.4.0
+ yp_schema_merge@Base 3.4.0
+ yp_schema_purge_dynamic@Base 3.4.0
+ yp_set_input_file@Base 3.4.0
+ yp_set_input_string@Base 3.4.0
+ yp_str_to_bin@Base 3.4.0
+ yp_str_to_txt@Base 3.4.0
diff --git a/distro/pkg/deb-nolibxdp/libzscanner4.install b/distro/pkg/deb-nolibxdp/libzscanner4.install
new file mode 100644
index 0000000..a8dc226
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/libzscanner4.install
@@ -0,0 +1 @@
+usr/lib/*/libzscanner.so.*
diff --git a/distro/pkg/deb-nolibxdp/libzscanner4.symbols b/distro/pkg/deb-nolibxdp/libzscanner4.symbols
new file mode 100644
index 0000000..99ac3b7
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/libzscanner4.symbols
@@ -0,0 +1,12 @@
+libzscanner.so.4 libzscanner4 #MINVER#
+* Build-Depends-Package: libknot-dev
+ zs_deinit@Base 3.1.0
+ zs_errorname@Base 3.1.0
+ zs_init@Base 3.1.0
+ zs_parse_all@Base 3.1.0
+ zs_parse_record@Base 3.1.0
+ zs_set_input_file@Base 3.1.0
+ zs_set_input_string@Base 3.1.0
+ zs_set_processing@Base 3.1.0
+ zs_set_processing_comment@Base 3.1.0
+ zs_strerror@Base 3.1.0
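
Illustration (not part of the patch): the .symbols files pin every exported
symbol to the first upstream version that shipped it; dh_makeshlibs feeds them to
dpkg-gensymbols during the build, and debian/rules (further below) raises
DPKG_GENSYMBOLS_CHECK_LEVEL to 4 so that any mismatch fails the build. A manual
spot-check might look like this; the build-tree path and multiarch triplet are
assumptions:

    $ DPKG_GENSYMBOLS_CHECK_LEVEL=4 dpkg-gensymbols -plibzscanner4 \
          -edebian/libzscanner4/usr/lib/x86_64-linux-gnu/libzscanner.so.4
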
diff --git a/distro/pkg/deb-nolibxdp/not-installed b/distro/pkg/deb-nolibxdp/not-installed
new file mode 100644
index 0000000..c928be1
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/not-installed
@@ -0,0 +1 @@
+etc/knot/example.com.zone
diff --git a/distro/pkg/deb-nolibxdp/prepare-environment b/distro/pkg/deb-nolibxdp/prepare-environment
new file mode 100755
index 0000000..7176f5e
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/prepare-environment
@@ -0,0 +1,38 @@
+#!/bin/sh
+
+set -eu
+
+CONFFILE=${1:-/etc/knot/knot.conf}
+
+if [ ! -r $CONFFILE ]; then
+ echo "$CONFFILE doesn't exist or has wrong permissions."
+ exit 1;
+fi
+
+KNOT_RUNDIR=$(sed -ne "s/#.*$//;s/.*rundir: \"*\([^\";]*\\).*/\\1/p;" $CONFFILE)
+[ -z "$KNOT_RUNDIR" ] && KNOT_RUNDIR=/run/knot
+
+mkdir --parents "$KNOT_RUNDIR";
+
+KNOT_USER=$(sed -ne "s/#.*$//;s/.*user:[ \"]*\\([^\\:\"]*\\)[ \"]*/\\1/p;" $CONFFILE)
+
+if [ -n "$KNOT_USER" ]; then
+ if ! getent passwd $KNOT_USER >/dev/null; then
+ echo "Configured user '$KNOT_USER' doesn't exist."
+ exit 1
+ fi
+
+ KNOT_GROUP=$(sed -ne "s/#.*$//;s/.*user:[ \"]*[^\\:\"]*\\:\\([^\"]*\\)[ \"]*/\\1/p;" $CONFFILE)
+ if [ -z "$KNOT_GROUP" ]; then
+ KNOT_GROUP=$(getent group $(getent passwd "$KNOT_USER" | cut -f 4 -d :) | cut -f 1 -d :)
+ fi
+
+ if ! getent group $KNOT_GROUP >/dev/null; then
+ echo "Configured group '$KNOT_GROUP' doesn't exist."
+ exit 1
+ fi
+ chown --silent "$KNOT_USER:$KNOT_GROUP" "$KNOT_RUNDIR"
+ chmod 775 "$KNOT_RUNDIR"
+fi
+
+:
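
Illustration (not part of the patch): the script above scrapes rundir: and user:
out of knot.conf with sed, creates the run directory, and hands it over to the
configured user and group. A sketch of a typical invocation; the configuration
snippet is only an example:

    # given a knot.conf containing, e.g.:
    #   server:
    #       rundir: "/run/knot"
    #       user: knot:knot
    $ sudo ./prepare-environment /etc/knot/knot.conf
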
diff --git a/distro/pkg/deb-nolibxdp/python3-libknot.install b/distro/pkg/deb-nolibxdp/python3-libknot.install
new file mode 100644
index 0000000..ce92dec
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/python3-libknot.install
@@ -0,0 +1,2 @@
+usr/lib/python3*/dist-packages/libknot-*.egg-info
+usr/lib/python3*/dist-packages/libknot/*.py
diff --git a/distro/pkg/deb-nolibxdp/rules b/distro/pkg/deb-nolibxdp/rules
new file mode 100755
index 0000000..9f859a7
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/rules
@@ -0,0 +1,89 @@
+#!/usr/bin/make -f
+
+export DEB_BUILD_MAINT_OPTIONS = hardening=+all
+export DEB_CFLAGS_MAINT_APPEND = -Wall -DNDEBUG
+export DEB_LDFLAGS_MAINT_APPEND = -Wl,--as-needed
+
+export DPKG_GENSYMBOLS_CHECK_LEVEL := 4
+export KNOT_SOFTHSM2_DSO = /usr/lib/softhsm/libsofthsm2.so
+
+include /usr/share/dpkg/default.mk
+
+# Disable fastparser if requested
+ifeq (maint,$(filter maint,$(DEB_BUILD_OPTIONS)))
+ FASTPARSER := --disable-fastparser
+else
+ FASTPARSER := --enable-fastparser
+endif
+
+LIBKNOT_SYMBOLS := $(wildcard $(CURDIR)/debian/libknot*.symbols)
+
+# MAJOR.MINOR version part
+BASE_VERSION := $(shell echo $(DEB_VERSION) | sed 's/^\([^.]\+\.[^.]\+\).*/\1/')
+
+# pyproject is supported by knot but fails on second `pybuild --build`
+# invocation due to bug in dh-python's plugin_pyproject.py wheel unpack
+export PYBUILD_SYSTEM = distutils
+
+%:
+ dh $@ \
+ --with python3
+
+override_dh_auto_configure:
+ @echo 'architecture:' $(DEB_HOST_ARCH)
+ dh_auto_configure -- \
+ --sysconfdir=/etc \
+ --localstatedir=/var/lib \
+ --libexecdir=/usr/lib/knot \
+ --with-rundir=/run/knot \
+ --with-moduledir=/usr/lib/$(DEB_HOST_MULTIARCH)/knot/modules-$(BASE_VERSION) \
+ --with-storage=/var/lib/knot \
+ --enable-systemd=auto \
+ --enable-dnstap \
+ --with-module-dnstap=shared \
+ --with-module-geoip=shared \
+ --enable-recvmmsg=yes \
+ --disable-silent-rules \
+ --enable-xdp=yes \
+ --enable-quic=yes \
+ --disable-static \
+ $(FASTPARSER)
+
+override_dh_auto_configure-indep:
+ pybuild --dir python/libknot --configure
+ pybuild --dir python/knot_exporter --configure
+
+override_dh_auto_build-indep:
+ dh_auto_build -- html
+ pybuild --dir python/libknot --build
+ pybuild --dir python/knot_exporter --build
+
+override_dh_auto_install-arch:
+ dh_auto_install -- install
+ # rename knot.sample.conf to knot.conf
+ mv $(CURDIR)/debian/tmp/etc/knot/knot.sample.conf $(CURDIR)/debian/tmp/etc/knot/knot.conf
+ @if grep -E -q "DoQ support: +no" "$(CURDIR)/debian/tmp/usr/sbin/knotd"; then \
+ echo "Stripping the QUIC symbols"; \
+ sed -i '/knot_quic_/d' $(LIBKNOT_SYMBOLS); \
+ fi
+
+override_dh_auto_install-indep:
+ dh_auto_install -- install-html
+ # rename knot.sample.conf to knot.conf
+ mv $(CURDIR)/debian/tmp/etc/knot/knot.sample.conf $(CURDIR)/debian/tmp/etc/knot/knot.conf
+ pybuild --dir python/libknot --install
+ pybuild --dir python/knot_exporter --install
+ rm -rf $(CURDIR)/debian/tmp/usr/lib/python*/dist-packages/libknot/__pycache__
+ rm -rf $(CURDIR)/debian/tmp/usr/lib/python*/dist-packages/knot_exporter/__pycache__
+
+override_dh_auto_test-indep:
+override_dh_auto_test-arch:
+ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
+ dh_auto_test
+endif
+
+override_dh_missing:
+ dh_missing --exclude=.la --fail-missing
+
+override_dh_installchangelogs:
+ dh_installchangelogs NEWS
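The BASE_VERSION variable computed above keeps only the MAJOR.MINOR part of DEB_VERSION, so the module directory passed to --with-moduledir is versioned per feature release. A quick sketch of what that sed expression yields (the version string here is purely illustrative):

    # everything after the second dot, including the Debian revision, is dropped
    echo "3.4.6-1" | sed 's/^\([^.]\+\.[^.]\+\).*/\1/'    # prints: 3.4
    # the module path configured above then becomes
    #   /usr/lib/<multiarch-triplet>/knot/modules-3.4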
diff --git a/distro/pkg/deb-nolibxdp/source/format b/distro/pkg/deb-nolibxdp/source/format
new file mode 100644
index 0000000..163aaf8
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/source/format
@@ -0,0 +1 @@
+3.0 (quilt)
diff --git a/distro/pkg/deb-nolibxdp/tests/authoritative-server b/distro/pkg/deb-nolibxdp/tests/authoritative-server
new file mode 100755
index 0000000..028dfbf
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/tests/authoritative-server
@@ -0,0 +1,150 @@
+#!/bin/bash
+
+# Author: Daniel Kahn Gillmor
+# 2018-11-02
+# License: GPLv3+
+
+# exit on error
+set -e
+# for handling jobspecs:
+set -m
+
+if [ -z "$AUTOPKGTEST_ARTIFACTS" ]; then
+ d="$(mktemp -d)"
+ remove="$d"
+else
+ d="$AUTOPKGTEST_ARTIFACTS"
+fi
+ip="${TESTIP:-127.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).$(( $RANDOM % 256 ))}"
+port="${PORT:-8123}"
+knotc="${KNOTC:-/usr/sbin/knotc}"
+knotd="${KNOTD:-/usr/sbin/knotd}"
+keymgr="${KEYMGR:-/usr/sbin/keymgr}"
+kdig="${KDIG:-$(command -v kdig)}"
+kzonecheck="${KZONECHECK:-$(command -v kzonecheck)}"
+test_address="${TEST_ADDRESS:-192.0.2.199}"
+
+declare -a knot_conf="--config=$d/knot.conf"
+declare -a knot_args=("$knot_conf" --verbose)
+
+printf "%s + %s roundtrip tests\n------------\n workdir: %s\n IP addr: %s\n knot args: %s\n" "$knotd" "$kdig" "$d" "$ip" "${knot_args[*]}"
+
+section() {
+ printf "\n%s\n" "$1"
+ sed 's/./-/g' <<<"$1"
+}
+
+cleanup () {
+ section "cleaning up"
+ find "$d" -ls
+ "${knotc}" "${knot_args[@]}" stop
+ wait %1
+ tail -n +1 -v "$d"/*.err
+ if [ "$remove" ]; then
+ printf "\ncleaning up working directory %s\n" "$remove"
+ rm -rf "$remove"
+ fi
+}
+trap cleanup EXIT
+
+section "set up config file and zonefile"
+
+user=$(id -nu)
+group=$(id -ng)
+cat > "$d/knot.conf" < "$d/example.net.zone" < "$d/knotd.err" &
+
+# FIXME: this is an annoying poll -- would be better if we could be
+# alerted when the daemon is done setting up the socket, but i don't
+# want to "--daemonize" if i can avoid it because i want the shell to
+# remain in direct supervision of all its processes
+tried=0
+while [ $tried -lt 10 ] ; do
+ if "${knotc}" "${knot_args[@]}" status 2>&1; then
+ break;
+ fi
+ sleep 0.5
+ tried=$(( $tried + 1 ))
+done
+if [ $tried -ge 10 ]; then
+ printf "failed to use %s\n" "${knotc}" >&2
+ exit 1
+fi
+
+section "querying knot"
+"${kdig}" -p "${port}" @"${ip}" -t A test.example.net test2.example.net
+answer="$("${kdig}" +short -p "${port}" @"${ip}" -t A test.example.net)"
+if ! [ "$answer" = "$test_address" ]; then
+ printf "test.example.net mismatch!\nexpected: %s\n got: %s\n" "$test_address" "$answer" >&2
+ exit 1
+fi
+answer2="$("${kdig}" +short -p "${port}" @"${ip}" -t A test2.example.net)"
+if ! [ "$answer2" = "" ]; then
+ printf "test2.example.net gave unexpected answer!\n got: %s\n" "$answer2" >&2
+ exit 1
+fi
+
+section "modifying zone"
+printf "test2 1D IN A $test_address\n" >>"$d/example.net.zone"
+sed -i 's/^@ 1D IN SOA.*/@ 1D IN SOA a.ns hostmaster 2018110100 3h 15m 1w 1d/' "$d/example.net.zone"
+"${knotc}" "${knot_args[@]}" reload
+sleep 1
+
+section "querying again"
+"${kdig}" -p "${port}" @"${ip}" -t A test.example.net test2.example.net
+answer="$("${kdig}" +short -p "${port}" @"${ip}" -t A test.example.net)"
+if ! [ "$answer" = "$test_address" ]; then
+ printf "test.example.net mismatch!\nexpected: %s\n got: %s\n" "$test_address" "$answer" >&2
+ exit 1
+fi
+answer2="$("${kdig}" +short -p "${port}" @"${ip}" -t A test2.example.net)"
+if ! [ "$answer2" = "$test_address" ]; then
+ printf "test2.example.net mismatch!\nexpected: %s\n got: %s\n" "$test_address" "$answer2" >&2
+ exit 1
+fi
+
+section "querying DNSSEC"
+"${kdig}" -p "${port}" @"${ip}" -t DNSKEY example.net. +dnssec
+if ! "${kdig}" -p "${port}" @"${ip}" -t DNSKEY example.net. +dnssec 2>&1 | grep -q "RRSIG[[:space:]]*DNSKEY"; then
+ printf "DNSSEC query not successful" >&2
+ exit 1
+fi
+
+section "listing keys with keymgr"
+"${keymgr}" "$knot_conf" -e example.net. list
+if ! "${keymgr}" "$knot_conf" -e example.net. list 2>&1 | grep -q "ksk=yes"; then
+ printf "keymgr did not list KSK as expected" >&2
+ exit 1
+fi
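Because every tool path and network parameter above falls back to an environment override, the script can also be run outside the autopkgtest harness, provided the knot, knot-dnsutils and knot-dnssecutils packages are installed. A minimal sketch (the values shown are just the script's own defaults made explicit, and the artifacts directory is an arbitrary choice):

    # keep the work directory around for inspection instead of a throw-away mktemp dir
    mkdir -p /tmp/knot-test-artifacts
    AUTOPKGTEST_ARTIFACTS=/tmp/knot-test-artifacts \
    TESTIP=127.0.0.1 PORT=8123 \
    KNOTD=/usr/sbin/knotd KNOTC=/usr/sbin/knotc KEYMGR=/usr/sbin/keymgr \
        ./distro/pkg/deb-nolibxdp/tests/authoritative-server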
diff --git a/distro/pkg/deb-nolibxdp/tests/control b/distro/pkg/deb-nolibxdp/tests/control
new file mode 100644
index 0000000..e8b3dcb
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/tests/control
@@ -0,0 +1,13 @@
+Tests: kdig
+Restrictions: skippable
+Depends:
+ ca-certificates,
+ iputils-ping,
+ knot-dnsutils,
+
+Tests: authoritative-server
+Depends:
+ findutils,
+ knot,
+ knot-dnsutils,
+ knot-dnssecutils,
diff --git a/distro/pkg/deb-nolibxdp/tests/kdig b/distro/pkg/deb-nolibxdp/tests/kdig
new file mode 100755
index 0000000..f1dbe5a
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/tests/kdig
@@ -0,0 +1,14 @@
+#!/bin/bash
+
+set -e
+
+# Skip the test if no internet access
+ping -c1 1.1.1.1 2>&1 || exit 77
+
+expected=198.41.0.4
+answer=$(kdig +short +tls-ca @1.1.1.1 -q a.root-servers.net. -t A 2>&1 || true)
+
+if [ "$answer" != "$expected" ]; then
+ printf "expected: %s\ngot: %s\n" "$expected" "$answer" >&2
+ kdig -d +tls-ca @1.1.1.1 -q a.root-servers.net. -t A
+fi
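The exit code 77 matters here: together with the `Restrictions: skippable` entry in tests/control, autopkgtest treats that status as "test skipped" rather than "test failed" when there is no network. Run by hand, the check performed above is simply (addresses and expected answer as in the script):

    # query a.root-servers.net over DNS-over-TLS against 1.1.1.1,
    # validating the server certificate against the system CA store
    kdig +short +tls-ca @1.1.1.1 -q a.root-servers.net. -t A
    # expected answer: 198.41.0.4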
diff --git a/distro/pkg/deb-nolibxdp/watch b/distro/pkg/deb-nolibxdp/watch
new file mode 100644
index 0000000..7cf9ea1
--- /dev/null
+++ b/distro/pkg/deb-nolibxdp/watch
@@ -0,0 +1,4 @@
+version=4
+opts=uversionmangle=s/-((alpha|beta|rc)\d*)$/~$1/,pgpsigurlmangle=s/$/.asc/,dversionmangle=s/\+hotfix// \
+https://secure.nic.cz/files/knot-dns/ \
+(?:|.*/)knot(?:[_\-]v?|)(\d\S*)\.(?:tar\.xz|txz|tar\.bz2|tbz2|tar\.gz|tgz)
diff --git a/distro/pkg/deb/changelog b/distro/pkg/deb/changelog
new file mode 100644
index 0000000..123f92b
--- /dev/null
+++ b/distro/pkg/deb/changelog
@@ -0,0 +1,6 @@
+knot ({{ version }}-cznic.{{ release }}) unstable; urgency=medium
+
+ * upstream package
+ * see https://www.knot-dns.cz
+
+ -- Knot DNS {{ now }}
diff --git a/distro/pkg/deb/clean b/distro/pkg/deb/clean
new file mode 100644
index 0000000..b2a9f3f
--- /dev/null
+++ b/distro/pkg/deb/clean
@@ -0,0 +1,2 @@
+doc/modules
+.pybuild/
diff --git a/distro/pkg/deb/compat b/distro/pkg/deb/compat
new file mode 100644
index 0000000..b4de394
--- /dev/null
+++ b/distro/pkg/deb/compat
@@ -0,0 +1 @@
+11
diff --git a/distro/pkg/deb/control b/distro/pkg/deb/control
new file mode 100644
index 0000000..1d499d5
--- /dev/null
+++ b/distro/pkg/deb/control
@@ -0,0 +1,285 @@
+Source: knot
+Section: net
+Priority: optional
+Maintainer: Knot DNS
+Uploaders:
+ Jakub Ružička ,
+ Daniel Salzman ,
+Build-Depends-Indep:
+ python3-setuptools,
+ python3-sphinx,
+ python3-sphinx-panels,
+Build-Depends:
+ autoconf,
+ automake,
+ debhelper (>= 11),
+ dh-python,
+ libbpf-dev,
+ libcap-ng-dev,
+ libedit-dev,
+ libfstrm-dev,
+ libgnutls28-dev,
+ libidn2-dev,
+ liblmdb-dev,
+ libmaxminddb-dev,
+ libmnl-dev,
+ libnghttp2-dev,
+ libprotobuf-c-dev,
+ libsystemd-dev [linux-any] | libsystemd-daemon-dev [linux-any],
+ libsystemd-dev [linux-any] | libsystemd-journal-dev [linux-any],
+ libtool,
+ liburcu-dev,
+ libxdp-dev,
+ pkgconf,
+ protobuf-c-compiler,
+ python3-all,
+ softhsm2 ,
+Standards-Version: 4.5.0
+Homepage: https://www.knot-dns.cz/
+Vcs-Browser: https://gitlab.nic.cz/knot/knot-dns
+Vcs-Git: https://gitlab.nic.cz/knot/knot-dns.git
+Rules-Requires-Root: no
+
+Package: knot
+Architecture: any
+Depends:
+ adduser,
+ libdnssec9 (= ${binary:Version}),
+ libknot15 (= ${binary:Version}),
+ libzscanner4 (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Pre-Depends:
+ ${misc:Pre-Depends},
+Suggests:
+ systemd,
+Description: Authoritative domain name server
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+
+Package: libknot15
+Architecture: any
+Depends:
+ ${misc:Depends},
+ ${shlibs:Depends},
+Section: libs
+Description: DNS shared library from Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package provides a DNS shared library used by Knot DNS and
+ Knot Resolver.
+
+Package: libzscanner4
+Architecture: any
+Depends:
+ ${misc:Depends},
+ ${shlibs:Depends},
+Section: libs
+Description: DNS zone-parsing shared library from Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package provides a fast zone parser shared library used by Knot
+ DNS and Knot Resolver.
+
+Package: libdnssec9
+Architecture: any
+Depends:
+ ${misc:Depends},
+ ${shlibs:Depends},
+Section: libs
+Description: DNSSEC shared library from Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package provides the common DNSSEC shared library used by Knot DNS
+ and Knot Resolver.
+
+Package: libknot-dev
+Architecture: any
+Depends:
+ libdnssec9 (= ${binary:Version}),
+ libgnutls28-dev,
+ libknot15 (= ${binary:Version}),
+ libzscanner4 (= ${binary:Version}),
+ ${misc:Depends},
+Section: libdevel
+Description: Knot DNS shared library development files
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package provides development files for shared libraries from Knot DNS.
+
+Package: knot-dnsutils
+Architecture: any
+Depends:
+ libdnssec9 (= ${binary:Version}),
+ libknot15 (= ${binary:Version}),
+ libzscanner4 (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Description: DNS clients provided with Knot DNS (kdig, knsupdate)
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package delivers various DNS client programs from Knot DNS.
+ .
+ - kdig - query a DNS server in various ways
+ - knsupdate - perform dynamic updates (See RFC2136)
+ - kxdpgun - send a DNS query stream over UDP to a DNS server
+ .
+ These clients are designed to be almost 1:1 compatible with BIND dnsutils,
+ but they provide some enhancements, which are documented.
+ .
+ WARNING: knslookup is not provided as it is considered obsolete.
+
+Package: knot-dnssecutils
+Architecture: any
+Depends:
+ libdnssec9 (= ${binary:Version}),
+ libknot15 (= ${binary:Version}),
+ libzscanner4 (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Description: DNSSEC tools provided with Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package delivers various DNSSEC tools from Knot DNS.
+ .
+ - kzonecheck
+ - kzonesign
+ - knsec3hash
+
+Package: knot-host
+Architecture: any
+Depends:
+ libdnssec9 (= ${binary:Version}),
+ libknot15 (= ${binary:Version}),
+ libzscanner4 (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Description: Version of 'host' bundled with Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package provides the 'host' program from Knot DNS. This program is
+ designed to be almost 1:1 compatible with the BIND 9.x 'host' program.
+
+Package: knot-module-dnstap
+Architecture: any
+Depends:
+ knot (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Description: dnstap module for Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package contains the dnstap module for logging DNS traffic.
+
+Package: knot-module-geoip
+Architecture: any
+Depends:
+ knot (= ${binary:Version}),
+ ${misc:Depends},
+ ${shlibs:Depends},
+Description: geoip module for Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package contains the geoip module for geography-based responses.
+
+Package: knot-doc
+Architecture: all
+Multi-Arch: foreign
+Depends:
+ libjs-jquery,
+ libjs-sphinxdoc,
+ libjs-underscore,
+ ${misc:Depends},
+Section: doc
+Description: Documentation for Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package provides various documents that are useful for
+ maintaining a working Knot DNS installation.
+
+Package: knot-exporter
+Architecture: all
+Depends:
+ ${misc:Depends},
+ ${python3:Depends},
+Section: python
+Description: Prometheus exporter for Knot DNS
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package provides a Python Prometheus exporter for Knot DNS.
+
+Package: python3-libknot
+Architecture: all
+Depends:
+ libknot15 (= ${binary:Version}),
+ ${misc:Depends},
+ ${python3:Depends},
+Section: python
+Description: Python bindings for libknot
+ Knot DNS is a fast, authoritative-only, high-performance, feature-rich
+ and open source name server.
+ .
+ Knot DNS is developed by CZ.NIC Labs, the R&D department of the .CZ
+ registry, and is therefore well suited to run anything from the root
+ zone or a top-level domain to many smaller standard domain names.
+ .
+ This package provides Python bindings for the libknot shared library.
diff --git a/distro/pkg/deb/copyright b/distro/pkg/deb/copyright
new file mode 100644
index 0000000..feabb26
--- /dev/null
+++ b/distro/pkg/deb/copyright
@@ -0,0 +1,200 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Upstream-Name: Knot DNS
+Upstream-Contact: knot-dns@labs.nic.cz
+Source: https://secure.nic.cz/files/knot-dns/
+
+Files: *
+Copyright: 2011-2023 CZ.NIC, z.s.p.o.
+License: GPL-3+
+
+Files: m4/*
+Copyright: 2011-2022 CZ.NIC, z.s.p.o.
+ 1996-2001, 2003-2015 Free Software Foundation, Inc.
+License: GPL-3+
+
+Files: install-sh
+Copyright: 1994 X Consortium
+License: MIT
+
+Files: debian/* distro/pkg/deb/*
+Copyright: 2011-2023 CZ.NIC, z.s.p.o.
+ 2011 Ondřej Surý
+License: GPL-3+
+
+Files: tests/tap/*
+Copyright: 2000-2001, 2004, 2006-2012 Russ Allbery
+ 2006, 2007, 2008, 2013 The Board of Trustees of the Leland Stanford Junior University
+License: MIT
+
+Files: tests/tap/files.*
+Copyright: 2011-2019 CZ.NIC, z.s.p.o.
+License: GPL-3+
+
+Files: src/contrib/dnstap/*
+Copyright: 2014, Farsight Security, Inc.
+ 2011-2022 CZ.NIC, z.s.p.o.
+License: GPL-3+
+
+Files: src/contrib/libngtcp2/*
+Copyright: 2016-2023 ngtcp2 contributors
+ 2012-2017 nghttp2 contributors
+License: MIT
+
+Files: src/contrib/musl/*
+Copyright: 2005-2020 Rich Felker, et al.
+License: MIT
+
+Files: src/contrib/openbsd/siphash.*
+Copyright: 2013 Andre Oppermann
+License: BSD-3-Clause
+
+Files: src/contrib/openbsd/strl*
+Copyright: 1998 Todd C. Miller
+License: ISC
+
+Files: src/contrib/proxyv2/*
+Copyright: 2022 CZ.NIC, z.s.p.o.
+ 2021 Fastly, Inc.
+License: GPL-3+
+
+Files: src/contrib/qp-trie/*
+Copyright: 2011-2019 CZ.NIC, z.s.p.o.
+ 2018 Tony Finch
+License: GPL-3+
+
+Files: src/contrib/ucw/*
+Copyright: 2011-2022 CZ.NIC, z.s.p.o.
+ 1997-2017 Martin Mares
+ 2007 Pavel Charvat
+ 2012 Ondrej Filip
+License: LGPL-2+
+
+Files: src/contrib/ucw/lists.*
+Copyright: 2011-2022 CZ.NIC, z.s.p.o.
+ 1998 Martin Mares
+ 1998 Pavel Machek
+ 1998 Ondrej Zajicek
+License: GPL-2+
+
+Files: src/contrib/url-parser/*
+Copyright: 2020 Igor Sysoev
+ 2020 Nginx, Inc.
+ 2020 Joyent, Inc.
+License: MIT
+
+Files: src/contrib/vpool/*
+Copyright: 2006, 2008 Alexey Vatchenko
+License: ISC
+
+Files: tests-fuzz/main.c
+Copyright: 2011-2019 CZ.NIC, z.s.p.o.
+ 2017 Tim Ruehsen
+License: MIT
+
+License: GPL-3+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+ .
+ This program is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see .
+ .
+ On Debian systems, the full text of the GNU General Public License
+ version 3 can be found in the file `/usr/share/common-licenses/GPL-3'.
+
+License: GPL-2+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+ .
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see .
+ .
+ On Debian GNU/Linux systems, the complete text of the GNU General
+ Public License can be found in `/usr/share/common-licenses/GPL-2'.
+
+License: LGPL-2+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Library General Public
+ License as published by the Free Software Foundation; either
+ version 2 of the License, or (at your option) any later version.
+ .
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Library General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see .
+ .
+ On Debian GNU/Linux systems, the complete text of the GNU Lesser
+ General Public License can be found in `/usr/share/common-licenses/LGPL-2'.
+
+License: ISC
+ Permission to use, copy, modify, and distribute this software for any
+ purpose with or without fee is hereby granted, provided that the above
+ copyright notice and this permission notice appear in all copies.
+ .
+ THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+License: BSD-3-Clause
+ Redistribution and use in source and binary forms, with or without modification,
+ are permitted provided that the following conditions are met:
+ 1. Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+ 3. Neither the name of the copyright holder nor the names of its contributors
+ may be used to endorse or promote products derived from this software without
+ specific prior written permission.
+ .
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ OF THE POSSIBILITY OF SUCH DAMAGE.
+
+License: MIT
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+ .
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+ .
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
diff --git a/distro/pkg/deb/cz.nic.knotd.conf b/distro/pkg/deb/cz.nic.knotd.conf
new file mode 100644
index 0000000..50af87a
--- /dev/null
+++ b/distro/pkg/deb/cz.nic.knotd.conf
@@ -0,0 +1,9 @@
+
+
+
+
+
+
+
+
+
diff --git a/distro/pkg/deb/docs b/distro/pkg/deb/docs
new file mode 100644
index 0000000..b43bf86
--- /dev/null
+++ b/distro/pkg/deb/docs
@@ -0,0 +1 @@
+README.md
diff --git a/distro/pkg/deb/knot-dnssecutils.install b/distro/pkg/deb/knot-dnssecutils.install
new file mode 100644
index 0000000..20009e8
--- /dev/null
+++ b/distro/pkg/deb/knot-dnssecutils.install
@@ -0,0 +1,3 @@
+usr/bin/knsec3hash
+usr/bin/kzonecheck
+usr/bin/kzonesign
diff --git a/distro/pkg/deb/knot-dnssecutils.manpages b/distro/pkg/deb/knot-dnssecutils.manpages
new file mode 100644
index 0000000..913c4cb
--- /dev/null
+++ b/distro/pkg/deb/knot-dnssecutils.manpages
@@ -0,0 +1,3 @@
+usr/share/man/man1/knsec3hash.1
+usr/share/man/man1/kzonecheck.1
+usr/share/man/man1/kzonesign.1
diff --git a/distro/pkg/deb/knot-dnsutils.install b/distro/pkg/deb/knot-dnsutils.install
new file mode 100644
index 0000000..e2f2a8a
--- /dev/null
+++ b/distro/pkg/deb/knot-dnsutils.install
@@ -0,0 +1,3 @@
+usr/bin/kdig
+usr/bin/knsupdate
+usr/sbin/kxdpgun
diff --git a/distro/pkg/deb/knot-dnsutils.manpages b/distro/pkg/deb/knot-dnsutils.manpages
new file mode 100644
index 0000000..67254d9
--- /dev/null
+++ b/distro/pkg/deb/knot-dnsutils.manpages
@@ -0,0 +1,3 @@
+usr/share/man/man1/kdig.1
+usr/share/man/man1/knsupdate.1
+usr/share/man/man8/kxdpgun.8
diff --git a/distro/pkg/deb/knot-doc.install b/distro/pkg/deb/knot-doc.install
new file mode 100644
index 0000000..c2a345d
--- /dev/null
+++ b/distro/pkg/deb/knot-doc.install
@@ -0,0 +1 @@
+usr/share/doc/knot/* /usr/share/doc/knot-doc/
diff --git a/distro/pkg/deb/knot-doc.links b/distro/pkg/deb/knot-doc.links
new file mode 100644
index 0000000..1376b3a
--- /dev/null
+++ b/distro/pkg/deb/knot-doc.links
@@ -0,0 +1,5 @@
+usr/share/javascript/jquery/jquery.min.js usr/share/doc/knot-doc/_static/jquery.js
+usr/share/javascript/sphinxdoc/1.0/doctools.js usr/share/doc/knot-doc/_static/doctools.js
+usr/share/javascript/sphinxdoc/1.0/language_data.js usr/share/doc/knot-doc/_static/language_data.js
+usr/share/javascript/sphinxdoc/1.0/searchtools.js usr/share/doc/knot-doc/_static/searchtools.js
+usr/share/javascript/underscore/underscore.min.js usr/share/doc/knot-doc/_static/underscore.js
diff --git a/distro/pkg/deb/knot-exporter.install b/distro/pkg/deb/knot-exporter.install
new file mode 100644
index 0000000..9b2bf4f
--- /dev/null
+++ b/distro/pkg/deb/knot-exporter.install
@@ -0,0 +1,3 @@
+usr/lib/python3*/dist-packages/knot_exporter-*.egg-info
+usr/lib/python3*/dist-packages/knot_exporter/*.py
+usr/bin/knot-exporter /usr/sbin/
diff --git a/distro/pkg/deb/knot-host.install b/distro/pkg/deb/knot-host.install
new file mode 100644
index 0000000..51bacf0
--- /dev/null
+++ b/distro/pkg/deb/knot-host.install
@@ -0,0 +1 @@
+usr/bin/khost
diff --git a/distro/pkg/deb/knot-host.manpages b/distro/pkg/deb/knot-host.manpages
new file mode 100644
index 0000000..4891e2c
--- /dev/null
+++ b/distro/pkg/deb/knot-host.manpages
@@ -0,0 +1 @@
+usr/share/man/man1/khost.1
diff --git a/distro/pkg/deb/knot-module-dnstap.install b/distro/pkg/deb/knot-module-dnstap.install
new file mode 100644
index 0000000..983455e
--- /dev/null
+++ b/distro/pkg/deb/knot-module-dnstap.install
@@ -0,0 +1 @@
+usr/lib/*/knot/modules-*/dnstap.so
diff --git a/distro/pkg/deb/knot-module-geoip.install b/distro/pkg/deb/knot-module-geoip.install
new file mode 100644
index 0000000..16d87c3
--- /dev/null
+++ b/distro/pkg/deb/knot-module-geoip.install
@@ -0,0 +1 @@
+usr/lib/*/knot/modules-*/geoip.so
diff --git a/distro/pkg/deb/knot.dirs b/distro/pkg/deb/knot.dirs
new file mode 100644
index 0000000..6e937aa
--- /dev/null
+++ b/distro/pkg/deb/knot.dirs
@@ -0,0 +1 @@
+var/lib/knot
diff --git a/distro/pkg/deb/knot.init b/distro/pkg/deb/knot.init
new file mode 100644
index 0000000..3f8fcae
--- /dev/null
+++ b/distro/pkg/deb/knot.init
@@ -0,0 +1,149 @@
+#!/bin/sh
+### BEGIN INIT INFO
+# Provides: knot
+# Required-Start: $network $local_fs $remote_fs $syslog
+# Required-Stop: $remote_fs $syslog
+# Default-Start: 2 3 4 5
+# Default-Stop: 0 1 6
+# Short-Description: authoritative domain name server
+# Description: Knot DNS is an authoritative-only domain name server
+### END INIT INFO
+
+# Author: Ondřej Surý
+
+# PATH should only include /usr/* if it runs after the mountnfs.sh script
+PATH=/sbin:/usr/sbin:/bin:/usr/bin
+DESC="Knot DNS server" # Introduce a short description here
+NAME=knotd # Introduce the short server's name here
+DAEMON=/usr/sbin/$NAME # Introduce the server's location here
+PIDFILE=/run/knot/knot.pid
+SCRIPTNAME=/etc/init.d/knot
+KNOTC=/usr/sbin/knotc
+RUNDIR=/run/knot
+
+# Exit if the package is not installed
+[ -x $DAEMON ] || exit 0
+
+KNOTD_ARGS=""
+
+# Read configuration variable file if it is present
+[ -r /etc/default/knot ] && . /etc/default/knot
+
+DAEMON_ARGS="-d $KNOTD_ARGS"
+
+# Define LSB log_* functions.
+# Depend on sysvinit-utils (>= 2.96) to ensure that this file is present.
+. /lib/lsb/init-functions
+
+#
+# Function that starts the daemon/service
+#
+do_start()
+{
+ # Return
+ # 0 if daemon has been started
+ # 1 if daemon was already running
+ # 2 if daemon could not be started
+
+ $KNOTC status >/dev/null 2>/dev/null \
+ && return 1
+
+ start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
+ || return 1
+ start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
+ $DAEMON_ARGS \
+ || return 2
+}
+
+#
+# Function that stops the daemon/service
+#
+do_stop()
+{
+ # Return
+ # 0 if daemon has been stopped
+ # 1 if daemon was already stopped
+ # 2 if daemon could not be stopped
+ # other if a failure occurred
+
+ $KNOTC status >/dev/null 2>/dev/null \
+ || return 1
+
+ $KNOTC stop >/dev/null
+ RETVAL="$?"
+ [ "$RETVAL" = 1 ] && return 2
+
+ # Many daemons don't delete their pidfiles when they exit.
+ rm -f $PIDFILE
+ return 0
+}
+
+do_reload() {
+ $KNOTC reload >/dev/null
+ return $?
+}
+
+do_mkrundir() {
+ mkdir -p $RUNDIR
+ chmod 0755 $RUNDIR
+ chown knot:knot $RUNDIR
+}
+
+case "$1" in
+ start)
+ do_mkrundir
+ log_daemon_msg "Starting $DESC " "$NAME"
+ do_start
+ case "$?" in
+ 0|1) log_end_msg 0 ;;
+ 2) log_end_msg 1 ;;
+ esac
+ ;;
+ stop)
+ log_daemon_msg "Stopping $DESC" "$NAME"
+ do_stop
+ case "$?" in
+ 0|1) log_end_msg 0 ;;
+ 2) log_end_msg 1 ;;
+ esac
+ ;;
+ status)
+ STATUS=$($KNOTC status 2>&1 >/dev/null)
+ RETVAL=$?
+ if [ $RETVAL = 0 ]; then
+ log_success_msg "$NAME is running"
+ else
+ log_failure_msg "$NAME is not running ($STATUS)"
+ fi
+ exit $RETVAL
+ ;;
+ reload|force-reload)
+ log_daemon_msg "Reloading $DESC" "$NAME"
+ do_reload
+ log_end_msg $?
+ ;;
+ restart)
+ log_daemon_msg "Restarting $DESC" "$NAME"
+ do_stop
+ case "$?" in
+ 0|1)
+ do_start
+ case "$?" in
+ 0) log_end_msg 0 ;;
+ 1) log_end_msg 1 ;; # Old process is still running
+ *) log_end_msg 1 ;; # Failed to start
+ esac
+ ;;
+ *)
+ # Failed to stop
+ log_end_msg 1
+ ;;
+ esac
+ ;;
+ *)
+ echo "Usage: $SCRIPTNAME {start|stop|status|restart|reload|force-reload}" >&2
+ exit 3
+ ;;
+esac
+
+:
diff --git a/distro/pkg/deb/knot.install b/distro/pkg/deb/knot.install
new file mode 100644
index 0000000..a31224f
--- /dev/null
+++ b/distro/pkg/deb/knot.install
@@ -0,0 +1,7 @@
+debian/cz.nic.knotd.conf usr/share/dbus-1/system.d/
+etc/knot/knot.conf
+usr/sbin/kcatalogprint
+usr/sbin/keymgr
+usr/sbin/kjournalprint
+usr/sbin/knotc
+usr/sbin/knotd
diff --git a/distro/pkg/deb/knot.manpages b/distro/pkg/deb/knot.manpages
new file mode 100644
index 0000000..5d23e9f
--- /dev/null
+++ b/distro/pkg/deb/knot.manpages
@@ -0,0 +1,6 @@
+usr/share/man/man5/knot.conf.5
+usr/share/man/man8/kcatalogprint.8
+usr/share/man/man8/keymgr.8
+usr/share/man/man8/kjournalprint.8
+usr/share/man/man8/knotc.8
+usr/share/man/man8/knotd.8
diff --git a/distro/pkg/deb/knot.postinst b/distro/pkg/deb/knot.postinst
new file mode 100644
index 0000000..da747c8
--- /dev/null
+++ b/distro/pkg/deb/knot.postinst
@@ -0,0 +1,16 @@
+#!/bin/sh
+set -e
+
+if [ "$1" = "configure" ]; then
+ if ! getent passwd knot > /dev/null; then
+ adduser --quiet --system --group --no-create-home --home /var/lib/knot knot
+ fi
+
+ dpkg-statoverride --list /var/lib/knot >/dev/null 2>&1 || dpkg-statoverride --update --add root knot 0770 /var/lib/knot
+ dpkg-statoverride --list /etc/knot/knot.conf >/dev/null 2>&1 || dpkg-statoverride --update --add root knot 0640 /etc/knot/knot.conf
+ dpkg-statoverride --list /etc/knot >/dev/null 2>&1 || dpkg-statoverride --update --add root knot 0750 /etc/knot
+fi
+
+#DEBHELPER#
+
+exit 0
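The three dpkg-statoverride calls above register the non-default ownership and permissions so that dpkg itself preserves them across upgrades. Whether an override is already in place can be checked with the same tool; a quick illustration (not part of the maintainer script):

    # list any recorded override for the configuration file; after postinst
    # has run this prints something like "root knot 640 /etc/knot/knot.conf"
    dpkg-statoverride --list /etc/knot/knot.conf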
diff --git a/distro/pkg/deb/knot.postrm b/distro/pkg/deb/knot.postrm
new file mode 100644
index 0000000..14b3d69
--- /dev/null
+++ b/distro/pkg/deb/knot.postrm
@@ -0,0 +1,21 @@
+#!/bin/sh
+set -e
+
+if test "$1" = "purge"; then
+ state_dir=/var/lib/knot
+ for db_name in "catalog" "confdb" "journal" "keys" "timers"; do
+ rm -rf $state_dir/$db_name >/dev/null 2>&1 || true
+ done
+ rmdir $state_dir >/dev/null 2>&1 || true
+ [ -d $state_dir ] && echo "Notice: there is still data in ${state_dir}, please check." || true
+
+ dpkg-statoverride --remove /var/lib/knot >/dev/null 2>&1 || true
+ dpkg-statoverride --remove /etc/knot/knot.conf >/dev/null 2>&1 || true
+ dpkg-statoverride --remove /etc/knot >/dev/null 2>&1 || true
+
+ deluser --quiet knot >/dev/null 2>&1 || true
+fi
+
+#DEBHELPER#
+
+exit 0
diff --git a/distro/pkg/deb/knot.service b/distro/pkg/deb/knot.service
new file mode 100644
index 0000000..e6c13ed
--- /dev/null
+++ b/distro/pkg/deb/knot.service
@@ -0,0 +1,30 @@
+[Unit]
+Description=Knot DNS server
+Wants=network-online.target
+After=network-online.target
+Documentation=man:knotd(8) man:knot.conf(5) man:knotc(8)
+
+[Service]
+Type=notify
+User=knot
+Group=knot
+CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETPCAP
+AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETPCAP
+ExecStartPre=/usr/sbin/knotc conf-check
+ExecStart=/usr/sbin/knotd -m "$KNOT_CONF_MAX_SIZE"
+ExecReload=/bin/kill -HUP $MAINPID
+Restart=on-abort
+LimitNOFILE=1048576
+TimeoutStopSec=300
+# Extend the systemd startup timeout by this value (seconds) for each zone
+Environment="KNOT_ZONE_LOAD_TIMEOUT_SEC=180"
+# Maximum size (MiB) of a configuration database
+Environment="KNOT_CONF_MAX_SIZE=512"
+
+# Expected systemd >= v239
+RuntimeDirectory=knot
+StateDirectory=knot
+NoNewPrivileges=yes
+
+[Install]
+WantedBy=multi-user.target
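The two Environment defaults above are intended to be tuned per deployment. Since the unit is shipped as a packaged file, the usual way to change them is a drop-in rather than editing the unit itself; a minimal sketch, with the limits chosen here being arbitrary example values:

    # raise the config database size and per-zone load timeout via a drop-in
    mkdir -p /etc/systemd/system/knot.service.d
    printf '[Service]\nEnvironment="KNOT_CONF_MAX_SIZE=1024"\nEnvironment="KNOT_ZONE_LOAD_TIMEOUT_SEC=600"\n' \
        > /etc/systemd/system/knot.service.d/limits.conf
    systemctl daemon-reload
    systemctl restart knot.service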
diff --git a/distro/pkg/deb/libdnssec9.install b/distro/pkg/deb/libdnssec9.install
new file mode 100644
index 0000000..17a9fe6
--- /dev/null
+++ b/distro/pkg/deb/libdnssec9.install
@@ -0,0 +1 @@
+usr/lib/*/libdnssec.so.*
diff --git a/distro/pkg/deb/libdnssec9.symbols b/distro/pkg/deb/libdnssec9.symbols
new file mode 100644
index 0000000..c3ab2ed
--- /dev/null
+++ b/distro/pkg/deb/libdnssec9.symbols
@@ -0,0 +1,96 @@
+libdnssec.so.9 libdnssec9 #MINVER#
+* Build-Depends-Package: libknot-dev
+ dnssec_algorithm_digest_support@Base 3.2.0
+ dnssec_algorithm_key_size_check@Base 3.2.0
+ dnssec_algorithm_key_size_default@Base 3.2.0
+ dnssec_algorithm_key_size_range@Base 3.2.0
+ dnssec_algorithm_key_support@Base 3.2.0
+ dnssec_algorithm_reproducible@Base 3.2.0
+ dnssec_binary_alloc@Base 3.2.0
+ dnssec_binary_cmp@Base 3.2.0
+ dnssec_binary_dup@Base 3.2.0
+ dnssec_binary_free@Base 3.2.0
+ dnssec_binary_from_base64@Base 3.2.0
+ dnssec_binary_resize@Base 3.2.0
+ dnssec_binary_to_base64@Base 3.2.0
+ dnssec_crypto_cleanup@Base 3.2.0
+ dnssec_crypto_init@Base 3.2.0
+ dnssec_crypto_reinit@Base 3.2.0
+ dnssec_digest@Base 3.2.0
+ dnssec_digest_finish@Base 3.2.0
+ dnssec_digest_init@Base 3.2.0
+ dnssec_key_can_sign@Base 3.2.0
+ dnssec_key_can_verify@Base 3.2.0
+ dnssec_key_clear@Base 3.2.0
+ dnssec_key_create_ds@Base 3.2.0
+ dnssec_key_dup@Base 3.2.0
+ dnssec_key_free@Base 3.2.0
+ dnssec_key_get_algorithm@Base 3.2.0
+ dnssec_key_get_dname@Base 3.2.0
+ dnssec_key_get_flags@Base 3.2.0
+ dnssec_key_get_keyid@Base 3.2.0
+ dnssec_key_get_keytag@Base 3.2.0
+ dnssec_key_get_protocol@Base 3.2.0
+ dnssec_key_get_pubkey@Base 3.2.0
+ dnssec_key_get_rdata@Base 3.2.0
+ dnssec_key_get_size@Base 3.2.0
+ dnssec_key_load_pkcs8@Base 3.2.0
+ dnssec_key_new@Base 3.2.0
+ dnssec_key_set_algorithm@Base 3.2.0
+ dnssec_key_set_dname@Base 3.2.0
+ dnssec_key_set_flags@Base 3.2.0
+ dnssec_key_set_protocol@Base 3.2.0
+ dnssec_key_set_pubkey@Base 3.2.0
+ dnssec_key_set_rdata@Base 3.2.0
+ dnssec_keyid_copy@Base 3.2.0
+ dnssec_keyid_equal@Base 3.2.0
+ dnssec_keyid_is_valid@Base 3.2.0
+ dnssec_keyid_normalize@Base 3.2.0
+ dnssec_keystore_close@Base 3.2.0
+ dnssec_keystore_deinit@Base 3.2.0
+ dnssec_keystore_generate@Base 3.2.0
+ dnssec_keystore_get_private@Base 3.2.0
+ dnssec_keystore_import@Base 3.2.0
+ dnssec_keystore_init@Base 3.2.0
+ dnssec_keystore_init_pkcs11@Base 3.2.0
+ dnssec_keystore_init_pkcs8@Base 3.2.0
+ dnssec_keystore_open@Base 3.2.0
+ dnssec_keystore_remove@Base 3.2.0
+ dnssec_keystore_set_private@Base 3.2.0
+ dnssec_keytag@Base 3.2.0
+ dnssec_nsec3_hash@Base 3.2.0
+ dnssec_nsec3_hash_length@Base 3.2.0
+ dnssec_nsec3_params_free@Base 3.2.0
+ dnssec_nsec3_params_from_rdata@Base 3.2.0
+ dnssec_nsec3_params_match@Base 3.2.0
+ dnssec_nsec_bitmap_add@Base 3.2.0
+ dnssec_nsec_bitmap_clear@Base 3.2.0
+ dnssec_nsec_bitmap_contains@Base 3.2.0
+ dnssec_nsec_bitmap_free@Base 3.2.0
+ dnssec_nsec_bitmap_new@Base 3.2.0
+ dnssec_nsec_bitmap_size@Base 3.2.0
+ dnssec_nsec_bitmap_write@Base 3.2.0
+ dnssec_pem_from_privkey@Base 3.2.0
+ dnssec_pem_from_x509@Base 3.2.0
+ dnssec_pem_to_privkey@Base 3.2.0
+ dnssec_pem_to_x509@Base 3.2.0
+ dnssec_random_binary@Base 3.2.0
+ dnssec_random_buffer@Base 3.2.0
+ dnssec_sign_add@Base 3.2.0
+ dnssec_sign_free@Base 3.2.0
+ dnssec_sign_init@Base 3.2.0
+ dnssec_sign_new@Base 3.2.0
+ dnssec_sign_verify@Base 3.2.0
+ dnssec_sign_write@Base 3.2.0
+ dnssec_strerror@Base 3.2.0
+ dnssec_tsig_add@Base 3.2.0
+ dnssec_tsig_algorithm_from_dname@Base 3.2.0
+ dnssec_tsig_algorithm_from_name@Base 3.2.0
+ dnssec_tsig_algorithm_size@Base 3.2.0
+ dnssec_tsig_algorithm_to_dname@Base 3.2.0
+ dnssec_tsig_algorithm_to_name@Base 3.2.0
+ dnssec_tsig_free@Base 3.2.0
+ dnssec_tsig_new@Base 3.2.0
+ dnssec_tsig_optimal_key_size@Base 3.2.0
+ dnssec_tsig_size@Base 3.2.0
+ dnssec_tsig_write@Base 3.2.0
diff --git a/distro/pkg/deb/libknot-dev.install b/distro/pkg/deb/libknot-dev.install
new file mode 100644
index 0000000..cb60d88
--- /dev/null
+++ b/distro/pkg/deb/libknot-dev.install
@@ -0,0 +1,3 @@
+usr/include/
+usr/lib/*/*.so
+usr/lib/*/pkgconfig/*
diff --git a/distro/pkg/deb/libknot15.install b/distro/pkg/deb/libknot15.install
new file mode 100644
index 0000000..f9b9f93
--- /dev/null
+++ b/distro/pkg/deb/libknot15.install
@@ -0,0 +1 @@
+usr/lib/*/libknot.so.*
diff --git a/distro/pkg/deb/libknot15.symbols b/distro/pkg/deb/libknot15.symbols
new file mode 100644
index 0000000..dea14ec
--- /dev/null
+++ b/distro/pkg/deb/libknot15.symbols
@@ -0,0 +1,296 @@
+libknot.so.15 libknot15 #MINVER#
+* Build-Depends-Package: libknot-dev
+ KNOT_DB_LMDB_DUPSORT@Base 3.4.0
+ KNOT_DB_LMDB_INTEGERKEY@Base 3.4.0
+ KNOT_DB_LMDB_MAPASYNC@Base 3.4.0
+ KNOT_DB_LMDB_NOSYNC@Base 3.4.0
+ KNOT_DB_LMDB_NOTLS@Base 3.4.0
+ KNOT_DB_LMDB_RDONLY@Base 3.4.0
+ KNOT_DB_LMDB_WRITEMAP@Base 3.4.0
+ KNOT_DUMP_STYLE_DEFAULT@Base 3.4.0
+ knot_creds_cert@Base 3.4.0
+ knot_creds_free@Base 3.4.0
+ knot_creds_init@Base 3.4.0
+ knot_creds_init_peer@Base 3.4.0
+ knot_creds_update@Base 3.4.0
+ knot_ctl_accept@Base 3.4.0
+ knot_ctl_alloc@Base 3.4.0
+ knot_ctl_bind@Base 3.4.0
+ knot_ctl_clone@Base 3.4.0
+ knot_ctl_close@Base 3.4.0
+ knot_ctl_connect@Base 3.4.0
+ knot_ctl_free@Base 3.4.0
+ knot_ctl_receive@Base 3.4.0
+ knot_ctl_send@Base 3.4.0
+ knot_ctl_set_timeout@Base 3.4.0
+ knot_ctl_unbind@Base 3.4.0
+ knot_db_lmdb_api@Base 3.4.0
+ knot_db_lmdb_del_exact@Base 3.4.0
+ knot_db_lmdb_get_mapsize@Base 3.4.0
+ knot_db_lmdb_get_path@Base 3.4.0
+ knot_db_lmdb_get_usage@Base 3.4.0
+ knot_db_lmdb_iter_del@Base 3.4.0
+ knot_db_lmdb_txn_begin@Base 3.4.0
+ knot_db_trie_api@Base 3.4.0
+ knot_dname_cmp@Base 3.4.0
+ knot_dname_copy@Base 3.4.0
+ knot_dname_copy_lower@Base 3.4.0
+ knot_dname_free@Base 3.4.0
+ knot_dname_from_str@Base 3.4.0
+ knot_dname_in_bailiwick@Base 3.4.0
+ knot_dname_is_case_equal@Base 3.4.0
+ knot_dname_is_equal@Base 3.4.0
+ knot_dname_labels@Base 3.4.0
+ knot_dname_lf@Base 3.4.0
+ knot_dname_matched_labels@Base 3.4.0
+ knot_dname_prefixlen@Base 3.4.0
+ knot_dname_realsize@Base 3.4.0
+ knot_dname_replace_suffix@Base 3.4.0
+ knot_dname_size@Base 3.4.0
+ knot_dname_store@Base 3.4.0
+ knot_dname_to_lower@Base 3.4.0
+ knot_dname_to_str@Base 3.4.0
+ knot_dname_to_wire@Base 3.4.0
+ knot_dname_unpack@Base 3.4.0
+ knot_dname_wire_check@Base 3.4.0
+ knot_dname_with_null@Base 3.4.3
+ knot_dnssec_alg_names@Base 3.4.0
+ knot_edns_add_option@Base 3.4.0
+ knot_edns_alignment_size@Base 3.4.0
+ knot_edns_chain_parse@Base 3.4.0
+ knot_edns_chain_size@Base 3.4.0
+ knot_edns_chain_write@Base 3.4.0
+ knot_edns_client_subnet_get_addr@Base 3.4.0
+ knot_edns_client_subnet_parse@Base 3.4.0
+ knot_edns_client_subnet_set_addr@Base 3.4.0
+ knot_edns_client_subnet_size@Base 3.4.0
+ knot_edns_client_subnet_write@Base 3.4.0
+ knot_edns_cookie_client_check@Base 3.4.0
+ knot_edns_cookie_client_generate@Base 3.4.0
+ knot_edns_cookie_parse@Base 3.4.0
+ knot_edns_cookie_server_check@Base 3.4.0
+ knot_edns_cookie_server_generate@Base 3.4.0
+ knot_edns_cookie_size@Base 3.4.0
+ knot_edns_cookie_write@Base 3.4.0
+ knot_edns_ede_names@Base 3.4.0
+ knot_edns_get_ext_rcode@Base 3.4.0
+ knot_edns_get_option@Base 3.4.0
+ knot_edns_get_options@Base 3.4.0
+ knot_edns_get_version@Base 3.4.0
+ knot_edns_init@Base 3.4.0
+ knot_edns_keepalive_parse@Base 3.4.0
+ knot_edns_keepalive_size@Base 3.4.0
+ knot_edns_keepalive_write@Base 3.4.0
+ knot_edns_opt_names@Base 3.4.0
+ knot_edns_reserve_option@Base 3.4.0
+ knot_edns_set_ext_rcode@Base 3.4.0
+ knot_edns_set_version@Base 3.4.0
+ knot_edns_zoneversion_parse@Base 3.4.4
+ knot_edns_zoneversion_write@Base 3.4.4
+ knot_error_from_libdnssec@Base 3.4.0
+ knot_eth_mtu@Base 3.4.0
+ knot_eth_name_from_addr@Base 3.4.0
+ knot_eth_queues@Base 3.4.0
+ knot_eth_rss@Base 3.4.0
+ knot_eth_vlans@Base 3.4.0
+ knot_eth_xdp_mode@Base 3.4.0
+ knot_get_obsolete_rdata_descriptor@Base 3.4.0
+ knot_get_rdata_descriptor@Base 3.4.0
+ knot_naptr_header_size@Base 3.4.0
+ knot_opcode_names@Base 3.4.0
+ knot_opt_code_to_string@Base 3.4.0
+ knot_pkt_begin@Base 3.4.0
+ knot_pkt_clear@Base 3.4.0
+ knot_pkt_copy@Base 3.4.0
+ knot_pkt_ext_rcode@Base 3.4.0
+ knot_pkt_ext_rcode_name@Base 3.4.0
+ knot_pkt_free@Base 3.4.0
+ knot_pkt_init_response@Base 3.4.0
+ knot_pkt_new@Base 3.4.0
+ knot_pkt_parse@Base 3.4.0
+ knot_pkt_parse_question@Base 3.4.0
+ knot_pkt_put_question@Base 3.4.0
+ knot_pkt_put_rotate@Base 3.4.0
+ knot_pkt_reclaim@Base 3.4.0
+ knot_pkt_reserve@Base 3.4.0
+ knot_probe_alloc@Base 3.4.0
+ knot_probe_consume@Base 3.4.0
+ knot_probe_data_set@Base 3.4.0
+ knot_probe_fd@Base 3.4.0
+ knot_probe_free@Base 3.4.0
+ knot_probe_produce@Base 3.4.0
+ knot_probe_set_consumer@Base 3.4.0
+ knot_probe_set_producer@Base 3.4.0
+ knot_probe_tcp_rtt@Base 3.4.0
+ knot_quic_cleanup@Base 3.4.0
+ knot_quic_client@Base 3.4.0
+ knot_quic_conn_block@Base 3.4.0
+ knot_quic_conn_get_stream@Base 3.4.0
+ knot_quic_conn_local_port@Base 3.4.0
+ knot_quic_conn_new_stream@Base 3.4.0
+ knot_quic_conn_next_timeout@Base 3.4.0
+ knot_quic_conn_rtt@Base 3.4.0
+ knot_quic_conn_stream_free@Base 3.4.0
+ knot_quic_handle@Base 3.4.0
+ knot_quic_hanle_expiry@Base 3.4.0
+ knot_quic_send@Base 3.4.0
+ knot_quic_session_available@Base 3.4.0
+ knot_quic_session_load@Base 3.4.0
+ knot_quic_session_save@Base 3.4.0
+ knot_quic_stream_add_data@Base 3.4.0
+ knot_quic_stream_get_process@Base 3.4.0
+ knot_quic_table_free@Base 3.4.0
+ knot_quic_table_new@Base 3.4.0
+ knot_quic_table_rem@Base 3.4.0
+ knot_quic_table_sweep@Base 3.4.0
+ knot_rcode_names@Base 3.4.0
+ knot_rdataset_add@Base 3.4.0
+ knot_rdataset_at@Base 3.4.0
+ knot_rdataset_clear@Base 3.4.0
+ knot_rdataset_copy@Base 3.4.0
+ knot_rdataset_eq@Base 3.4.0
+ knot_rdataset_intersect@Base 3.4.0
+ knot_rdataset_intersect2@Base 3.4.0
+ knot_rdataset_member@Base 3.4.0
+ knot_rdataset_merge@Base 3.4.0
+ knot_rdataset_subset@Base 3.4.0
+ knot_rdataset_subtract@Base 3.4.0
+ knot_rrclass_from_string@Base 3.4.0
+ knot_rrclass_to_string@Base 3.4.0
+ knot_rrset_add_rdata@Base 3.4.0
+ knot_rrset_clear@Base 3.4.0
+ knot_rrset_copy@Base 3.4.0
+ knot_rrset_equal@Base 3.4.0
+ knot_rrset_free@Base 3.4.0
+ knot_rrset_is_nsec3rel@Base 3.4.0
+ knot_rrset_new@Base 3.4.0
+ knot_rrset_rr_from_wire@Base 3.4.0
+ knot_rrset_rr_to_canonical@Base 3.4.0
+ knot_rrset_size@Base 3.4.0
+ knot_rrset_to_wire_extra@Base 3.4.0
+ knot_rrset_txt_dump@Base 3.4.0
+ knot_rrset_txt_dump_data@Base 3.4.0
+ knot_rrset_txt_dump_edns@Base 3.4.0
+ knot_rrset_txt_dump_header@Base 3.4.0
+ knot_rrtype_additional_needed@Base 3.4.0
+ knot_rrtype_from_string@Base 3.4.0
+ knot_rrtype_is_dnssec@Base 3.4.0
+ knot_rrtype_is_metatype@Base 3.4.0
+ knot_rrtype_should_be_lowercased@Base 3.4.0
+ knot_rrtype_to_string@Base 3.4.0
+ knot_strerror@Base 3.4.0
+ knot_svcb_param_names@Base 3.4.0
+ knot_tcp_cleanup@Base 3.4.0
+ knot_tcp_inbufs_upd@Base 3.4.0
+ knot_tcp_outbufs_ack@Base 3.4.0
+ knot_tcp_outbufs_add@Base 3.4.0
+ knot_tcp_outbufs_can_send@Base 3.4.0
+ knot_tcp_outbufs_usage@Base 3.4.0
+ knot_tcp_recv@Base 3.4.0
+ knot_tcp_reply_data@Base 3.4.0
+ knot_tcp_send@Base 3.4.0
+ knot_tcp_sweep@Base 3.4.0
+ knot_tcp_table_free@Base 3.4.0
+ knot_tcp_table_new@Base 3.4.0
+ knot_tls_conn_block@Base 3.4.0
+ knot_tls_conn_del@Base 3.4.0
+ knot_tls_conn_new@Base 3.4.0
+ knot_tls_ctx_free@Base 3.4.0
+ knot_tls_ctx_new@Base 3.4.0
+ knot_tls_handshake@Base 3.4.0
+ knot_tls_pin@Base 3.4.0
+ knot_tls_pin_check@Base 3.4.0
+ knot_tls_recv_dns@Base 3.4.0
+ knot_tls_send_dns@Base 3.4.0
+ knot_tls_session@Base 3.4.0
+ knot_tls_session_available@Base 3.4.1
+ knot_tls_session_load@Base 3.4.1
+ knot_tls_session_save@Base 3.4.1
+ knot_tsig_add@Base 3.4.0
+ knot_tsig_append@Base 3.4.0
+ knot_tsig_client_check@Base 3.4.0
+ knot_tsig_client_check_next@Base 3.4.0
+ knot_tsig_create_rdata@Base 3.4.0
+ knot_tsig_key_copy@Base 3.4.0
+ knot_tsig_key_deinit@Base 3.4.0
+ knot_tsig_key_init@Base 3.4.0
+ knot_tsig_key_init_file@Base 3.4.0
+ knot_tsig_key_init_str@Base 3.4.0
+ knot_tsig_rcode_names@Base 3.4.0
+ knot_tsig_rdata_alg@Base 3.4.0
+ knot_tsig_rdata_alg_name@Base 3.4.0
+ knot_tsig_rdata_error@Base 3.4.0
+ knot_tsig_rdata_fudge@Base 3.4.0
+ knot_tsig_rdata_is_ok@Base 3.4.0
+ knot_tsig_rdata_mac@Base 3.4.0
+ knot_tsig_rdata_mac_length@Base 3.4.0
+ knot_tsig_rdata_orig_id@Base 3.4.0
+ knot_tsig_rdata_other_data@Base 3.4.0
+ knot_tsig_rdata_other_data_length@Base 3.4.0
+ knot_tsig_rdata_set_fudge@Base 3.4.0
+ knot_tsig_rdata_set_mac@Base 3.4.0
+ knot_tsig_rdata_set_orig_id@Base 3.4.0
+ knot_tsig_rdata_set_other_data@Base 3.4.0
+ knot_tsig_rdata_set_time_signed@Base 3.4.0
+ knot_tsig_rdata_time_signed@Base 3.4.0
+ knot_tsig_rdata_tsig_timers_length@Base 3.4.0
+ knot_tsig_rdata_tsig_variables_length@Base 3.4.0
+ knot_tsig_server_check@Base 3.4.0
+ knot_tsig_sign@Base 3.4.0
+ knot_tsig_sign_next@Base 3.4.0
+ knot_tsig_wire_maxsize@Base 3.4.0
+ knot_tsig_wire_size@Base 3.4.0
+ knot_xdp_deinit@Base 3.4.0
+ knot_xdp_init@Base 3.4.0
+ knot_xdp_recv@Base 3.4.0
+ knot_xdp_recv_finish@Base 3.4.0
+ knot_xdp_reply_alloc@Base 3.4.0
+ knot_xdp_send@Base 3.4.0
+ knot_xdp_send_alloc@Base 3.4.0
+ knot_xdp_send_finish@Base 3.4.0
+ knot_xdp_send_free@Base 3.4.0
+ knot_xdp_send_prepare@Base 3.4.0
+ knot_xdp_socket_info@Base 3.4.0
+ knot_xdp_socket_stats@Base 3.4.0
+ knot_xdp_socket_fd@Base 3.4.0
+ yp_addr@Base 3.4.0
+ yp_addr_noport@Base 3.4.0
+ yp_addr_noport_to_bin@Base 3.4.0
+ yp_addr_noport_to_txt@Base 3.4.0
+ yp_addr_range_to_bin@Base 3.4.0
+ yp_addr_range_to_txt@Base 3.4.0
+ yp_addr_to_bin@Base 3.4.0
+ yp_addr_to_txt@Base 3.4.0
+ yp_base64_to_bin@Base 3.4.0
+ yp_base64_to_txt@Base 3.4.0
+ yp_bool_to_bin@Base 3.4.0
+ yp_bool_to_txt@Base 3.4.0
+ yp_deinit@Base 3.4.0
+ yp_dname_to_bin@Base 3.4.0
+ yp_dname_to_txt@Base 3.4.0
+ yp_format_id@Base 3.4.0
+ yp_format_key0@Base 3.4.0
+ yp_format_key1@Base 3.4.0
+ yp_hex_to_bin@Base 3.4.0
+ yp_hex_to_txt@Base 3.4.0
+ yp_init@Base 3.4.0
+ yp_int_to_bin@Base 3.4.0
+ yp_int_to_txt@Base 3.4.0
+ yp_item_to_bin@Base 3.4.0
+ yp_item_to_txt@Base 3.4.0
+ yp_option_to_bin@Base 3.4.0
+ yp_option_to_txt@Base 3.4.0
+ yp_parse@Base 3.4.0
+ yp_schema_check_deinit@Base 3.4.0
+ yp_schema_check_init@Base 3.4.0
+ yp_schema_check_parser@Base 3.4.0
+ yp_schema_check_str@Base 3.4.0
+ yp_schema_copy@Base 3.4.0
+ yp_schema_find@Base 3.4.0
+ yp_schema_free@Base 3.4.0
+ yp_schema_merge@Base 3.4.0
+ yp_schema_purge_dynamic@Base 3.4.0
+ yp_set_input_file@Base 3.4.0
+ yp_set_input_string@Base 3.4.0
+ yp_str_to_bin@Base 3.4.0
+ yp_str_to_txt@Base 3.4.0
diff --git a/distro/pkg/deb/libzscanner4.install b/distro/pkg/deb/libzscanner4.install
new file mode 100644
index 0000000..a8dc226
--- /dev/null
+++ b/distro/pkg/deb/libzscanner4.install
@@ -0,0 +1 @@
+usr/lib/*/libzscanner.so.*
diff --git a/distro/pkg/deb/libzscanner4.symbols b/distro/pkg/deb/libzscanner4.symbols
new file mode 100644
index 0000000..99ac3b7
--- /dev/null
+++ b/distro/pkg/deb/libzscanner4.symbols
@@ -0,0 +1,12 @@
+libzscanner.so.4 libzscanner4 #MINVER#
+* Build-Depends-Package: libknot-dev
+ zs_deinit@Base 3.1.0
+ zs_errorname@Base 3.1.0
+ zs_init@Base 3.1.0
+ zs_parse_all@Base 3.1.0
+ zs_parse_record@Base 3.1.0
+ zs_set_input_file@Base 3.1.0
+ zs_set_input_string@Base 3.1.0
+ zs_set_processing@Base 3.1.0
+ zs_set_processing_comment@Base 3.1.0
+ zs_strerror@Base 3.1.0
diff --git a/distro/pkg/deb/not-installed b/distro/pkg/deb/not-installed
new file mode 100644
index 0000000..c928be1
--- /dev/null
+++ b/distro/pkg/deb/not-installed
@@ -0,0 +1 @@
+etc/knot/example.com.zone
diff --git a/distro/pkg/deb/prepare-environment b/distro/pkg/deb/prepare-environment
new file mode 100755
index 0000000..7176f5e
--- /dev/null
+++ b/distro/pkg/deb/prepare-environment
@@ -0,0 +1,38 @@
+#!/bin/sh
+
+set -eu
+
+CONFFILE=${1:-/etc/knot/knot.conf}
+
+if [ ! -r $CONFFILE ]; then
+ echo "$CONFFILE doesn't exist or has wrong permissions."
+ exit 1;
+fi
+
+KNOT_RUNDIR=$(sed -ne "s/#.*$//;s/.*rundir: \"*\([^\";]*\\).*/\\1/p;" $CONFFILE)
+[ -z "$KNOT_RUNDIR" ] && KNOT_RUNDIR=/run/knot
+
+mkdir --parents "$KNOT_RUNDIR";
+
+KNOT_USER=$(sed -ne "s/#.*$//;s/.*user:[ \"]*\\([^\\:\"]*\\)[ \"]*/\\1/p;" $CONFFILE)
+
+if [ -n "$KNOT_USER" ]; then
+ if ! getent passwd $KNOT_USER >/dev/null; then
+ echo "Configured user '$KNOT_USER' doesn't exist."
+ exit 1
+ fi
+
+ KNOT_GROUP=$(sed -ne "s/#.*$//;s/.*user:[ \"]*[^\\:\"]*\\:\\([^\"]*\\)[ \"]*/\\1/p;" $CONFFILE)
+ if [ -z "$KNOT_GROUP" ]; then
+ KNOT_GROUP=$(getent group $(getent passwd "$KNOT_USER" | cut -f 4 -d :) | cut -f 1 -d :)
+ fi
+
+ if ! getent group $KNOT_GROUP >/dev/null; then
+ echo "Configured group '$KNOT_GROUP' doesn't exist."
+ exit 1
+ fi
+ chown --silent "$KNOT_USER:$KNOT_GROUP" "$KNOT_RUNDIR"
+ chmod 775 "$KNOT_RUNDIR"
+fi
+
+:
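The two sed expressions above only look at the rundir: and user: lines of knot.conf, stripping comments first; KNOT_RUNDIR falls back to /run/knot when unset, and KNOT_GROUP falls back to the user's primary group when no group is configured. A small sketch reproducing the extraction on a sample server block (the config content here is illustrative, not the packaged default):

    conf='server:
        rundir: "/run/knot"
        user: knot'
    printf '%s\n' "$conf" | sed -ne 's/#.*$//;s/.*rundir: "*\([^";]*\).*/\1/p;'       # -> /run/knot
    printf '%s\n' "$conf" | sed -ne 's/#.*$//;s/.*user:[ "]*\([^\:"]*\)[ "]*/\1/p;'   # -> knot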
diff --git a/distro/pkg/deb/python3-libknot.install b/distro/pkg/deb/python3-libknot.install
new file mode 100644
index 0000000..ce92dec
--- /dev/null
+++ b/distro/pkg/deb/python3-libknot.install
@@ -0,0 +1,2 @@
+usr/lib/python3*/dist-packages/libknot-*.egg-info
+usr/lib/python3*/dist-packages/libknot/*.py
diff --git a/distro/pkg/deb/rules b/distro/pkg/deb/rules
new file mode 100755
index 0000000..9f859a7
--- /dev/null
+++ b/distro/pkg/deb/rules
@@ -0,0 +1,89 @@
+#!/usr/bin/make -f
+
+export DEB_BUILD_MAINT_OPTIONS = hardening=+all
+export DEB_CFLAGS_MAINT_APPEND = -Wall -DNDEBUG
+export DEB_LDFLAGS_MAINT_APPEND = -Wl,--as-needed
+
+export DPKG_GENSYMBOLS_CHECK_LEVEL := 4
+export KNOT_SOFTHSM2_DSO = /usr/lib/softhsm/libsofthsm2.so
+
+include /usr/share/dpkg/default.mk
+
+# Disable fastparser if requested
+ifeq (maint,$(filter $(DEB_BUILD_OPTIONS),maint))
+ FASTPARSER := --disable-fastparser
+else
+ FASTPARSER := --enable-fastparser
+endif
+
+LIBKNOT_SYMBOLS := $(wildcard $(CURDIR)/debian/libknot*.symbols)
+
+# MAJOR.MINOR version part
+BASE_VERSION := $(shell echo $(DEB_VERSION) | sed 's/^\([^.]\+\.[^.]\+\).*/\1/')
+
+# pyproject is supported by knot but fails on second `pybuild --build`
+# invocation due to a bug in dh-python's plugin_pyproject.py wheel unpack
+export PYBUILD_SYSTEM = distutils
+
+%:
+ dh $@ \
+ --with python3
+
+override_dh_auto_configure:
+ @echo 'architecture:' $(DEB_HOST_ARCH)
+ dh_auto_configure -- \
+ --sysconfdir=/etc \
+ --localstatedir=/var/lib \
+ --libexecdir=/usr/lib/knot \
+ --with-rundir=/run/knot \
+ --with-moduledir=/usr/lib/$(DEB_HOST_MULTIARCH)/knot/modules-$(BASE_VERSION) \
+ --with-storage=/var/lib/knot \
+ --enable-systemd=auto \
+ --enable-dnstap \
+ --with-module-dnstap=shared \
+ --with-module-geoip=shared \
+ --enable-recvmmsg=yes \
+ --disable-silent-rules \
+ --enable-xdp=yes \
+ --enable-quic=yes \
+ --disable-static \
+ $(FASTPARSER)
+
+override_dh_auto_configure-indep:
+ pybuild --dir python/libknot --configure
+ pybuild --dir python/knot_exporter --configure
+
+override_dh_auto_build-indep:
+ dh_auto_build -- html
+ pybuild --dir python/libknot --build
+ pybuild --dir python/knot_exporter --build
+
+override_dh_auto_install-arch:
+ dh_auto_install -- install
+ # rename knot.sample.conf to knot.conf
+ mv $(CURDIR)/debian/tmp/etc/knot/knot.sample.conf $(CURDIR)/debian/tmp/etc/knot/knot.conf
+ @if grep -E -q "DoQ support: +no" "$(CURDIR)/debian/tmp/usr/sbin/knotd"; then \
+ echo "Stripping the QUIC symbols"; \
+ sed -i '/knot_quic_/d' $(LIBKNOT_SYMBOLS); \
+ fi
+
+override_dh_auto_install-indep:
+ dh_auto_install -- install-html
+ # rename knot.sample.conf to knot.conf
+ mv $(CURDIR)/debian/tmp/etc/knot/knot.sample.conf $(CURDIR)/debian/tmp/etc/knot/knot.conf
+ pybuild --dir python/libknot --install
+ pybuild --dir python/knot_exporter --install
+ rm -rf $(CURDIR)/debian/tmp/usr/lib/python*/dist-packages/libknot/__pycache__
+ rm -rf $(CURDIR)/debian/tmp/usr/lib/python*/dist-packages/knot_exporter/__pycache__
+
+override_dh_auto_test-indep:
+override_dh_auto_test-arch:
+ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
+ dh_auto_test
+endif
+
+override_dh_missing:
+ dh_missing --exclude=.la --fail-missing
+
+override_dh_installchangelogs:
+ dh_installchangelogs NEWS
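Two DEB_BUILD_OPTIONS values are interpreted by this rules file: "maint" switches the configure step to --disable-fastparser, and "nocheck" skips dh_auto_test. A local build exercising both could look like the following (standard dpkg-buildpackage flags, nothing Knot-specific):

    # build binary packages without signing, with the fast zone parser
    # disabled and the test suite skipped
    DEB_BUILD_OPTIONS="maint nocheck" dpkg-buildpackage -b -us -uc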
diff --git a/distro/pkg/deb/source/format b/distro/pkg/deb/source/format
new file mode 100644
index 0000000..163aaf8
--- /dev/null
+++ b/distro/pkg/deb/source/format
@@ -0,0 +1 @@
+3.0 (quilt)
diff --git a/distro/pkg/deb/tests/authoritative-server b/distro/pkg/deb/tests/authoritative-server
new file mode 100755
index 0000000..028dfbf
--- /dev/null
+++ b/distro/pkg/deb/tests/authoritative-server
@@ -0,0 +1,150 @@
+#!/bin/bash
+
+# Author: Daniel Kahn Gillmor
+# 2018-11-02
+# License: GPLv3+
+
+# exit on error
+set -e
+# for handling jobspecs:
+set -m
+
+if [ -z "$AUTOPKGTEST_ARTIFACTS" ]; then
+ d="$(mktemp -d)"
+ remove="$d"
+else
+ d="$AUTOPKGTEST_ARTIFACTS"
+fi
+ip="${TESTIP:-127.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).$(( $RANDOM % 256 ))}"
+port="${PORT:-8123}"
+knotc="${KNOTC:-/usr/sbin/knotc}"
+knotd="${KNOTD:-/usr/sbin/knotd}"
+keymgr="${KEYMGR:-/usr/sbin/keymgr}"
+kdig="${KDIG:-$(command -v kdig)}"
+kzonecheck="${KZONECHECK:-$(command -v kzonecheck)}"
+test_address="${TEST_ADDRESS:-192.0.2.199}"
+
+declare -a knot_conf="--config=$d/knot.conf"
+declare -a knot_args=("$knot_conf" --verbose)
+
+printf "%s + %s roundtrip tests\n------------\n workdir: %s\n IP addr: %s\n knot args: %s\n" "$knotd" "$kdig" "$d" "$ip" "${knot_args[*]}"
+
+section() {
+ printf "\n%s\n" "$1"
+ sed 's/./-/g' <<<"$1"
+}
+
+cleanup () {
+ section "cleaning up"
+ find "$d" -ls
+ "${knotc}" "${knot_args[@]}" stop
+ wait %1
+ tail -n +1 -v "$d"/*.err
+ if [ "$remove" ]; then
+ printf "\ncleaning up working directory %s\n" "$remove"
+ rm -rf "$remove"
+ fi
+}
+trap cleanup EXIT
+
+section "set up config file and zonefile"
+
+user=$(id -nu)
+group=$(id -ng)
+cat > "$d/knot.conf" < "$d/example.net.zone" < "$d/knotd.err" &
+
+# FIXME: this is an annoying poll -- would be better if we could be
+# alerted when the daemon is done setting up the socket, but i don't
+# want to "--daemonize" if i can avoid it because i want the shell to
+# remain in direct supervision of all its processes
+tried=0
+while [ $tried -lt 10 ] ; do
+ if "${knotc}" "${knot_args[@]}" status 2>&1; then
+ break;
+ fi
+ sleep 0.5
+ tried=$(( $tried + 1 ))
+done
+if [ $tried -ge 10 ]; then
+ printf "failed to use %s\n" "${knotc}" >&2
+ exit 1
+fi
+
+section "querying knot"
+"${kdig}" -p "${port}" @"${ip}" -t A test.example.net test2.example.net
+answer="$("${kdig}" +short -p "${port}" @"${ip}" -t A test.example.net)"
+if ! [ "$answer" = "$test_address" ]; then
+ printf "test.example.net mismatch!\nexpected: %s\n got: %s\n" "$test_address" "$answer" >&2
+ exit 1
+fi
+answer2="$("${kdig}" +short -p "${port}" @"${ip}" -t A test2.example.net)"
+if ! [ "$answer2" = "" ]; then
+ printf "test2.example.net gave unexpected answer!\n got: %s\n" "$answer2" >&2
+ exit 1
+fi
+
+section "modifying zone"
+printf "test2 1D IN A $test_address\n" >>"$d/example.net.zone"
+sed -i 's/^@ 1D IN SOA.*/@ 1D IN SOA a.ns hostmaster 2018110100 3h 15m 1w 1d/' "$d/example.net.zone"
+"${knotc}" "${knot_args[@]}" reload
+sleep 1
+
+section "querying again"
+"${kdig}" -p "${port}" @"${ip}" -t A test.example.net test2.example.net
+answer="$("${kdig}" +short -p "${port}" @"${ip}" -t A test.example.net)"
+if ! [ "$answer" = "$test_address" ]; then
+ printf "test.example.net mismatch!\nexpected: %s\n got: %s\n" "$test_address" "$answer" >&2
+ exit 1
+fi
+answer2="$("${kdig}" +short -p "${port}" @"${ip}" -t A test2.example.net)"
+if ! [ "$answer2" = "$test_address" ]; then
+ printf "test2.example.net mismatch!\nexpected: %s\n got: %s\n" "$test_address" "$answer2" >&2
+ exit 1
+fi
+
+section "querying DNSSEC"
+"${kdig}" -p "${port}" @"${ip}" -t DNSKEY example.net. +dnssec
+if ! "${kdig}" -p "${port}" @"${ip}" -t DNSKEY example.net. +dnssec 2>&1 | grep -q "RRSIG[[:space:]]*DNSKEY"; then
+ printf "DNSSEC query not successful" >&2
+ exit 1
+fi
+
+section "listing keys with keymgr"
+"${keymgr}" "$knot_conf" -e example.net. list
+if ! "${keymgr}" "$knot_conf" -e example.net. list 2>&1 | grep -q "ksk=yes"; then
+ printf "keymgr did not list KSK as expected" >&2
+ exit 1
+fi
diff --git a/distro/pkg/deb/tests/control b/distro/pkg/deb/tests/control
new file mode 100644
index 0000000..e8b3dcb
--- /dev/null
+++ b/distro/pkg/deb/tests/control
@@ -0,0 +1,13 @@
+Tests: kdig
+Restrictions: skippable
+Depends:
+ ca-certificates,
+ iputils-ping,
+ knot-dnsutils,
+
+Tests: authoritative-server
+Depends:
+ findutils,
+ knot,
+ knot-dnsutils,
+ knot-dnssecutils,
diff --git a/distro/pkg/deb/tests/kdig b/distro/pkg/deb/tests/kdig
new file mode 100755
index 0000000..f1dbe5a
--- /dev/null
+++ b/distro/pkg/deb/tests/kdig
@@ -0,0 +1,14 @@
+#!/bin/bash
+
+set -e
+
+# Skip the test if no internet access
+ping -c1 1.1.1.1 2>&1 || exit 77
+
+expected=198.41.0.4
+answer=$(kdig +short +tls-ca @1.1.1.1 -q a.root-servers.net. -t A 2>&1 || true)
+
+if [ "$answer" != "$expected" ]; then
+ printf "expected: %s\ngot: %s\n" "$expected" "$answer" >&2
+ kdig -d +tls-ca @1.1.1.1 -q a.root-servers.net. -t A
+fi
diff --git a/distro/pkg/deb/watch b/distro/pkg/deb/watch
new file mode 100644
index 0000000..7cf9ea1
--- /dev/null
+++ b/distro/pkg/deb/watch
@@ -0,0 +1,4 @@
+version=4
+opts=uversionmangle=s/-((alpha|beta|rc)\d*)$/~$1/,pgpsigurlmangle=s/$/.asc/,dversionmangle=s/\+hotfix// \
+https://secure.nic.cz/files/knot-dns/ \
+(?:|.*/)knot(?:[_\-]v?|)(\d\S*)\.(?:tar\.xz|txz|tar\.bz2|tbz2|tar\.gz|tgz)
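The watch file above is consumed by uscan, so a quick sanity check of the URL
pattern and the version/signature mangling is a dry run from a rendered source
package (this assumes the deb packaging has been materialised into a debian/
directory, e.g. by apkg):

    # report candidate upstream tarballs without downloading anything
    uscan --no-download --verbose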
diff --git a/distro/pkg/nix/default.nix b/distro/pkg/nix/default.nix
new file mode 100644
index 0000000..eca1698
--- /dev/null
+++ b/distro/pkg/nix/default.nix
@@ -0,0 +1,86 @@
+{ lib, stdenv, fetchurl, pkg-config, gnutls, liburcu, lmdb, libcap_ng, libidn2, libunistring
+, systemd, nettle, libedit, zlib, libiconv, libintl, libmaxminddb, libbpf, nghttp2, libmnl
+, ngtcp2-gnutls, xdp-tools
+, autoreconfHook
+, nixosTests, knot-resolver, knot-dns, runCommandLocal
+}:
+
+stdenv.mkDerivation rec {
+ pname = "knot-dns";
+ version = "{{ version }}";
+
+ src = fetchurl {
+ url = "https://secure.nic.cz/files/knot-dns/knot-${version}.tar.xz";
+ sha256 = "{{ src_hash }}";
+ };
+
+ outputs = [ "bin" "out" "dev" ];
+
+ configureFlags = [
+ "--with-configdir=/etc/knot"
+ "--with-rundir=/run/knot"
+ "--with-storage=/var/lib/knot"
+ ];
+
+ patches = [
+ # Don't try to create directories like /var/lib/knot at build time.
+ # They are later created from NixOS itself.
+ ./dont-create-run-time-dirs.patch
+ ./runtime-deps.patch
+ ];
+
+ nativeBuildInputs = [ pkg-config autoreconfHook ];
+ buildInputs = [
+ gnutls liburcu libidn2 libunistring
+ nettle libedit
+ libiconv lmdb libintl
+ nghttp2 # DoH support in kdig
+ ngtcp2-gnutls # DoQ support in kdig (and elsewhere but not much use there yet)
+ libmaxminddb # optional for geoip module (it's tiny)
+ # without sphinx &al. for developer documentation
+ # TODO: add dnstap support?
+ ] ++ lib.optionals stdenv.isLinux [
+ libcap_ng systemd
+ xdp-tools libbpf libmnl # XDP support (it's Linux kernel API)
+ ] ++ lib.optional stdenv.isDarwin zlib; # perhaps due to gnutls
+
+ enableParallelBuilding = true;
+
+ CFLAGS = [ "-O2" "-DNDEBUG" ];
+
+ doCheck = true;
+ checkFlags = [ "V=1" ]; # verbose output in case some test fails
+ doInstallCheck = true;
+
+ postInstall = ''
+ rm -r "$out"/lib/*.la
+ '';
+
+ passthru.tests = {
+ inherit knot-resolver;
+ } // lib.optionalAttrs stdenv.isLinux {
+ inherit (nixosTests) knot kea;
+ # Some dependencies are very version-sensitive, so they might get dropped
+ # or embedded after some update, even if the nixpkgs packagers didn't intend to.
+ # For non-Linux I don't know of a good replacement for `ldd`.
+ deps = runCommandLocal "knot-deps-test"
+ { nativeBuildInputs = [ (lib.getBin stdenv.cc.libc) ]; }
+ ''
+ for libname in libngtcp2 libxdp libbpf; do
+ echo "Checking for $libname:"
+ ldd '${knot-dns.bin}/bin/knotd' | grep -F "$libname"
+ echo "OK"
+ done
+ touch "$out"
+ '';
+ };
+
+ meta = with lib; {
+ description = "Authoritative-only DNS server from .cz domain registry";
+ homepage = "https://knot-dns.cz";
+ license = licenses.gpl3Plus;
+ platforms = platforms.unix;
+ maintainers = [ maintainers.vcunat ];
+ mainProgram = "knotd";
+ };
+}
diff --git a/distro/pkg/nix/dont-create-run-time-dirs.patch b/distro/pkg/nix/dont-create-run-time-dirs.patch
new file mode 100644
index 0000000..9fe165e
--- /dev/null
+++ b/distro/pkg/nix/dont-create-run-time-dirs.patch
@@ -0,0 +1,32 @@
+diff --git a/samples/Makefile.am b/samples/Makefile.am
+index c253c91..107401d 100644
+--- a/samples/Makefile.am
++++ b/samples/Makefile.am
+@@ -19,11 +19,6 @@ EXTRA_DIST = knot.sample.conf.in example.com.zone
+
+ if HAVE_DAEMON
+
+-install-data-local: knot.sample.conf
+- if [ \! -f $(DESTDIR)/$(config_dir)/knot.sample.conf ]; then \
+- $(INSTALL) -d $(DESTDIR)/$(config_dir); \
+- $(INSTALL_DATA) knot.sample.conf $(srcdir)/example.com.zone $(DESTDIR)/$(config_dir); \
+- fi
+ uninstall-local:
+ -rm -rf $(DESTDIR)/$(config_dir)/knot.sample.conf \
+ $(DESTDIR)/$(config_dir)/example.com.zone
+diff --git a/src/utils/Makefile.inc b/src/utils/Makefile.inc
+index e6765d9..d859d23 100644
+--- a/src/utils/Makefile.inc
++++ b/src/utils/Makefile.inc
+@@ -79,11 +79,6 @@ endif HAVE_DNSTAP
+ endif HAVE_UTILS
+
+ if HAVE_DAEMON
+-# Create storage and run-time directories
+-install-data-hook:
+- $(INSTALL) -d $(DESTDIR)/@config_dir@
+- $(INSTALL) -d $(DESTDIR)/@run_dir@
+- $(INSTALL) -d $(DESTDIR)/@storage_dir@
+
+ sbin_PROGRAMS = knotc knotd
+
diff --git a/distro/pkg/nix/runtime-deps.patch b/distro/pkg/nix/runtime-deps.patch
new file mode 100644
index 0000000..19fc9cd
--- /dev/null
+++ b/distro/pkg/nix/runtime-deps.patch
@@ -0,0 +1,14 @@
+Remove unnecessary runtime dependencies.
+
+`knotc status configure` shows summary from the configure script,
+but that contains also references like include paths.
+Filter these at least in a crude way (whole lines).
+--- a/configure.ac
++++ b/configure.ac
+@@ -766,5 +766,5 @@ result_msg_base=" Knot DNS $VERSION
+
+-result_msg_esc=$(echo -n "$result_msg_base" | sed '$!s/$/\\n/' | tr -d '\n')
++result_msg_esc=$(echo -n "$result_msg_base" | grep -Fv "$NIX_STORE" | sed '$!s/$/\\n/' | tr -d '\n')
+
+ AC_DEFINE_UNQUOTED([CONFIGURE_SUMMARY],["$result_msg_esc"],[Configure summary])
+
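To illustrate what the grep filter added by this patch does: every whole line
of the configure summary that mentions the Nix store is dropped before the
summary is baked into CONFIGURE_SUMMARY. A self-contained sketch with made-up
summary lines:

    # made-up excerpt of a configure summary; only the store-path line is removed
    NIX_STORE=/nix/store
    printf '%s\n' \
        '    Target:  knotd' \
        '    GnuTLS:  -I/nix/store/abc123-gnutls-dev/include' \
        '    Storage: /var/lib/knot' \
        | grep -Fv "$NIX_STORE"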
diff --git a/distro/pkg/nix/top-level.nix b/distro/pkg/nix/top-level.nix
new file mode 100644
index 0000000..303923c
--- /dev/null
+++ b/distro/pkg/nix/top-level.nix
@@ -0,0 +1,8 @@
+
+with import <nixpkgs> {};
+
+(callPackage ./. {
+}).overrideAttrs (attrs: {
+ src = ./knot-{{ version }}.tar.xz;
+})
+
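A rough sketch of how this entry point might be driven, assuming the
{{ version }} placeholder has been substituted and the matching tarball placed
next to the file (which is what apkg does when it renders the template):

    # build the derivation defined in ./default.nix with the local tarball as src
    nix-build top-level.nix
    # the result symlink points at the default output; the exact layout depends
    # on the bin/out/dev output split declared in default.nix
    ls -l result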
diff --git a/distro/pkg/rpm/knot.spec b/distro/pkg/rpm/knot.spec
new file mode 100644
index 0000000..f3c0d36
--- /dev/null
+++ b/distro/pkg/rpm/knot.spec
@@ -0,0 +1,331 @@
+%global _hardened_build 1
+%{!?_pkgdocdir: %global _pkgdocdir %{_docdir}/%{name}}
+
+%define GPG_CHECK 0
+%define BASE_VERSION %(echo "%{version}" | sed 's/^\\([^.]\\+\\.[^.]\\+\\).*/\\1/')
+%define repodir %{_builddir}/%{name}-%{version}
+
+Summary: High-performance authoritative DNS server
+Name: knot
+Version: {{ version }}
+Release: cznic.{{ release }}%{?dist}
+License: GPL-3.0-or-later
+URL: https://www.knot-dns.cz
+Source0: %{name}-%{version}.tar.xz
+
+%if 0%{?GPG_CHECK}
+Source1: https://secure.nic.cz/files/knot-dns/%{name}-%{version}.tar.xz.asc
+# PGP keys used to sign upstream releases
+# Export with --armor using command from https://fedoraproject.org/wiki/PackagingDrafts:GPGSignatures
+# Don't forget to update %%prep section when adding/removing keys
+Source100: gpgkey-742FA4E95829B6C5EAC6B85710BB7AF6FEBBD6AB.gpg.asc
+BuildRequires: gnupg2
+%endif
+
+# Required dependencies
+BuildRequires: autoconf
+BuildRequires: automake
+BuildRequires: libtool
+BuildRequires: make
+BuildRequires: gcc
+BuildRequires: pkgconfig(liburcu)
+BuildRequires: pkgconfig(gnutls)
+BuildRequires: pkgconfig(libedit)
+
+# Optional dependencies
+BuildRequires: pkgconfig(libcap-ng)
+BuildRequires: pkgconfig(libidn2)
+BuildRequires: pkgconfig(libmnl)
+BuildRequires: pkgconfig(libnghttp2)
+BuildRequires: pkgconfig(libsystemd)
+BuildRequires: pkgconfig(systemd)
+%if 0%{?fedora} || 0%{?rhel}
+BuildRequires: softhsm
+%endif
+# dnstap dependencies
+BuildRequires: pkgconfig(libfstrm)
+BuildRequires: pkgconfig(libprotobuf-c)
+# geoip dependencies
+BuildRequires: pkgconfig(libmaxminddb)
+# XDP dependencies
+BuildRequires: pkgconfig(libbpf)
+
+# Distro-dependent dependencies
+%if 0%{?suse_version}
+BuildRequires: python3-Sphinx
+BuildRequires: lmdb-devel
+BuildRequires: protobuf-c
+Requires(pre): pwdutils
+%if 0%{?sle_version} != 150400
+BuildRequires: pkgconfig(libxdp)
+%endif
+%endif
+%if 0%{?fedora} || 0%{?rhel}
+BuildRequires: python3-sphinx
+BuildRequires: pkgconfig(lmdb)
+%if 0%{?fedora} || 0%{?rhel} >= 9
+BuildRequires: pkgconfig(libxdp)
+%endif
+%endif
+
+%if 0%{?rhel} >= 9 || 0%{?suse_version} || 0%{?fedora}
+%define configure_quic --enable-quic=yes
+%endif
+
+Requires(post): systemd %{_sbindir}/runuser
+Requires(preun): systemd
+Requires(postun): systemd
+
+Requires: %{name}-libs%{?_isa} = %{version}-%{release}
+
+%if 0%{?suse_version}
+Provides: group(knot)
+%endif
+
+%description
+Knot DNS is a high-performance authoritative DNS server implementation.
+
+%package libs
+Summary: Libraries used by the Knot DNS server and client applications
+Conflicts: knot-resolver < 5.7.3
+
+%description libs
+The package contains shared libraries used by the Knot DNS server and
+utilities.
+
+%package devel
+Summary: Development header files for the Knot DNS libraries
+Requires: %{name}-libs%{?_isa} = %{version}-%{release}
+
+%description devel
+The package contains development header files for the Knot DNS libraries
+included in knot-libs package.
+
+%package utils
+Summary: DNS client utilities shipped with the Knot DNS server
+Requires: %{name}-libs%{?_isa} = %{version}-%{release}
+# Debian package compat
+Provides: %{name}-dnsutils = %{version}-%{release}
+
+%description utils
+The package contains DNS client utilities shipped with the Knot DNS server.
+
+%package dnssecutils
+Summary: DNSSEC tools shipped with the Knot DNS server
+Requires: %{name}-libs%{?_isa} = %{version}-%{release}
+
+%description dnssecutils
+The package contains DNSSEC tools shipped with the Knot DNS server.
+
+%package module-dnstap
+Summary: dnstap module for Knot DNS
+Requires: %{name} = %{version}-%{release}
+
+%description module-dnstap
+The package contains dnstap Knot DNS module for logging DNS traffic.
+
+%package module-geoip
+Summary: geoip module for Knot DNS
+Requires: %{name} = %{version}-%{release}
+
+%description module-geoip
+The package contains geoip Knot DNS module for geography-based responses.
+
+%package doc
+Summary: Documentation for the Knot DNS server
+BuildArch: noarch
+Provides: bundled(jquery)
+
+%description doc
+The package contains documentation for the Knot DNS server.
+An on-line version is available at https://www.knot-dns.cz/documentation/
+
+%prep
+%if 0%{?GPG_CHECK}
+export GNUPGHOME=./gpg-keyring
+[ -d ${GNUPGHOME} ] && rm -r ${GNUPGHOME}
+mkdir --mode=700 ${GNUPGHOME}
+gpg2 --import %{SOURCE100}
+gpg2 --verify %{SOURCE1} %{SOURCE0}
+%endif
+%autosetup -p1
+
+%build
+# disable debug code (causes unused warnings)
+CFLAGS="%{optflags} -DNDEBUG -Wno-unused"
+
+%ifarch armv7hl i686
+# 32-bit architectures sometimes do not have a sufficient amount of
+# contiguous address space to handle the default values
+%define configure_db_sizes --with-conf-mapsize=64
+%endif
+
+%configure \
+ --sysconfdir=/etc \
+ --localstatedir=/var/lib \
+ --libexecdir=/usr/lib/knot \
+ --with-rundir=/run/knot \
+ --with-moduledir=%{_libdir}/knot/modules-%{BASE_VERSION} \
+ --with-storage=/var/lib/knot \
+ %{?configure_db_sizes} \
+ %{?configure_quic} \
+ --disable-static \
+ --enable-dnstap=yes \
+ --with-module-dnstap=shared \
+ --with-module-geoip=shared
+make %{?_smp_mflags}
+make html
+
+%install
+make install DESTDIR=%{buildroot}
+
+# install documentation
+install -d -m 0755 %{buildroot}%{_pkgdocdir}/samples
+install -p -m 0644 -t %{buildroot}%{_pkgdocdir}/samples samples/*.zone*
+install -p -m 0644 NEWS README.md %{buildroot}%{_pkgdocdir}
+cp -av doc/_build/html %{buildroot}%{_pkgdocdir}
+[ -r %{buildroot}%{_pkgdocdir}/html/index.html ] || exit 1
+rm -f %{buildroot}%{_pkgdocdir}/html/.buildinfo
+
+# install daemon and dbus configuration files
+rm %{buildroot}%{_sysconfdir}/%{name}/*
+install -p -m 0644 -D %{repodir}/samples/%{name}.sample.conf %{buildroot}%{_sysconfdir}/%{name}/%{name}.conf
+%if 0%{?fedora} || 0%{?rhel} > 7
+install -p -m 0644 -D %{repodir}/distro/common/cz.nic.knotd.conf %{buildroot}%{_datadir}/dbus-1/system.d/cz.nic.knotd.conf
+%endif
+
+# install systemd files
+install -p -m 0644 -D %{repodir}/distro/common/%{name}.service %{buildroot}%{_unitdir}/%{name}.service
+%if 0%{?suse_version}
+ln -s service %{buildroot}/%{_sbindir}/rcknot
+%endif
+
+# create storage dir
+install -d %{buildroot}%{_sharedstatedir}
+install -d -m 0770 -D %{buildroot}%{_sharedstatedir}/knot
+
+# remove libtool archive (.la) files
+find %{buildroot} -type f -name "*.la" -delete -print
+
+%check
+V=1 make check
+
+%pre
+getent group knot >/dev/null || groupadd -r knot
+getent passwd knot >/dev/null || \
+ useradd -r -g knot -d %{_sharedstatedir}/knot -s /sbin/nologin \
+ -c "Knot DNS server" knot
+%if 0%{?suse_version}
+%service_add_pre knot.service
+%endif
+
+%post
+%if 0%{?suse_version}
+%service_add_post knot.service
+%else
+%systemd_post knot.service
+%endif
+
+%preun
+%if 0%{?suse_version}
+%service_del_preun knot.service
+%else
+%systemd_preun knot.service
+%endif
+
+%postun
+%if 0%{?suse_version}
+%service_del_postun knot.service
+%else
+%systemd_postun_with_restart knot.service
+%endif
+
+%if 0%{?fedora} || 0%{?rhel} > 7
+# https://fedoraproject.org/wiki/Changes/Removing_ldconfig_scriptlets
+%else
+%post libs -p /sbin/ldconfig
+%postun libs -p /sbin/ldconfig
+%endif
+
+%files
+%license COPYING
+%doc %{_pkgdocdir}
+%exclude %{_pkgdocdir}/html
+%attr(750,root,knot) %dir %{_sysconfdir}/knot
+%config(noreplace) %attr(640,root,knot) %{_sysconfdir}/knot/knot.conf
+%if 0%{?fedora} || 0%{?rhel} > 7
+%config(noreplace) %attr(644,root,root) %{_datadir}/dbus-1/system.d/cz.nic.knotd.conf
+%endif
+%attr(770,root,knot) %dir %{_sharedstatedir}/knot
+%dir %{_libdir}/knot
+%dir %{_libdir}/knot/modules-*
+%{_unitdir}/knot.service
+%{_sbindir}/kcatalogprint
+%{_sbindir}/kjournalprint
+%{_sbindir}/keymgr
+%{_sbindir}/knotc
+%{_sbindir}/knotd
+%if 0%{?suse_version}
+%{_sbindir}/rcknot
+%endif
+%{_mandir}/man5/knot.conf.*
+%{_mandir}/man8/kcatalogprint.*
+%{_mandir}/man8/kjournalprint.*
+%{_mandir}/man8/keymgr.*
+%{_mandir}/man8/knotc.*
+%{_mandir}/man8/knotd.*
+%ghost %attr(770,root,knot) %dir %{_rundir}/knot
+
+%files utils
+%{_bindir}/kdig
+%{_bindir}/khost
+%{_bindir}/knsupdate
+%{_sbindir}/kxdpgun
+%{_mandir}/man8/kxdpgun.*
+%{_mandir}/man1/kdig.*
+%{_mandir}/man1/khost.*
+%{_mandir}/man1/knsupdate.*
+
+%files dnssecutils
+%{_bindir}/knsec3hash
+%{_bindir}/kzonecheck
+%{_bindir}/kzonesign
+%{_mandir}/man1/knsec3hash.*
+%{_mandir}/man1/kzonecheck.*
+%{_mandir}/man1/kzonesign.*
+
+%files module-dnstap
+%{_libdir}/knot/modules-*/dnstap.so
+
+%files module-geoip
+%{_libdir}/knot/modules-*/geoip.so
+
+%files libs
+%license COPYING
+%doc NEWS
+%doc README.md
+%{_libdir}/libdnssec.so.*
+%{_libdir}/libknot.so.*
+%{_libdir}/libzscanner.so.*
+
+%files devel
+%{_includedir}/libdnssec
+%{_includedir}/knot
+%{_includedir}/libknot
+%{_includedir}/libzscanner
+%{_libdir}/libdnssec.so
+%{_libdir}/libknot.so
+%{_libdir}/libzscanner.so
+%{_libdir}/pkgconfig/knotd.pc
+%{_libdir}/pkgconfig/libdnssec.pc
+%{_libdir}/pkgconfig/libknot.pc
+%{_libdir}/pkgconfig/libzscanner.pc
+
+%files doc
+%dir %{_pkgdocdir}
+%doc %{_pkgdocdir}/html
+
+%changelog
+* {{ now }} Knot DNS - {{ version }}-{{ release }}
+- upstream package
+- see https://www.knot-dns.cz
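A minimal sketch of building this spec locally, assuming the {{ version }} and
{{ release }} placeholders have been rendered and the upstream tarball is at
hand (the tarball name below is a placeholder):

    rpmdev-setuptree                            # from rpmdevtools; creates ~/rpmbuild/...
    cp knot-X.Y.Z.tar.xz ~/rpmbuild/SOURCES/    # placeholder version
    cp knot.spec ~/rpmbuild/SPECS/
    rpmbuild -ba ~/rpmbuild/SPECS/knot.spec     # build source and binary packages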
diff --git a/distro/tests/README.md b/distro/tests/README.md
new file mode 100644
index 0000000..d356db5
--- /dev/null
+++ b/distro/tests/README.md
@@ -0,0 +1,16 @@
+# packaging tests
+
+Debian autopkgtests from `distro/pkg/deb/tests` are reused here
+using `apkg test` and symlinks.
+
+To run tests (from project root):
+
+ apkg test
+
+See the templated tests control file: distro/tests/extra/all/control
+
+To see rendered control (from project root):
+
+ apkg test --show-control [--distro debian-11]
+
+See [apkg test docs](https://pkg.labs.nic.cz/pages/apkg/test/).
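The test scripts can also be exercised by hand, outside apkg and autopkgtest;
a minimal sketch (the artifacts directory is just a scratch path honoured by
the scripts, and the knot packages must already be installed):

    # from the project root, run one test directly and keep its artifacts
    AUTOPKGTEST_ARTIFACTS=$(mktemp -d) ./distro/tests/authoritative-server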
diff --git a/distro/tests/authoritative-server b/distro/tests/authoritative-server
new file mode 100755
index 0000000..028dfbf
--- /dev/null
+++ b/distro/tests/authoritative-server
@@ -0,0 +1,150 @@
+#!/bin/bash
+
+# Author: Daniel Kahn Gillmor
+# 2018-11-02
+# License: GPLv3+
+
+# exit on error
+set -e
+# for handling jobspecs:
+set -m
+
+if [ -z "$AUTOPKGTEST_ARTIFACTS" ]; then
+ d="$(mktemp -d)"
+ remove="$d"
+else
+ d="$AUTOPKGTEST_ARTIFACTS"
+fi
+ip="${TESTIP:-127.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).$(( $RANDOM % 256 ))}"
+port="${PORT:-8123}"
+knotc="${KNOTC:-/usr/sbin/knotc}"
+knotd="${KNOTD:-/usr/sbin/knotd}"
+keymgr="${KEYMGR:-/usr/sbin/keymgr}"
+kdig="${KDIG:-$(command -v kdig)}"
+kzonecheck="${KZONECHECK:-$(command -v kzonecheck)}"
+test_address="${TEST_ADDRESS:-192.0.2.199}"
+
+declare -a knot_conf="--config=$d/knot.conf"
+declare -a knot_args=("$knot_conf" --verbose)
+
+printf "%s + %s roundtrip tests\n------------\n workdir: %s\n IP addr: %s\n knot args: %s\n" "$knotd" "$kdig" "$d" "$ip" "${knot_args[*]}"
+
+section() {
+ printf "\n%s\n" "$1"
+ sed 's/./-/g' <<<"$1"
+}
+
+cleanup () {
+ section "cleaning up"
+ find "$d" -ls
+ "${knotc}" "${knot_args[@]}" stop
+ wait %1
+ tail -n +1 -v "$d"/*.err
+ if [ "$remove" ]; then
+ printf "\ncleaning up working directory %s\n" "$remove"
+ rm -rf "$remove"
+ fi
+}
+trap cleanup EXIT
+
+section "set up config file and zonefile"
+
+user=$(id -nu)
+group=$(id -ng)
+cat > "$d/knot.conf" < "$d/example.net.zone" < "$d/knotd.err" &
+
+# FIXME: this is an annoying poll -- would be better if we could be
+# alerted when the daemon is done setting up the socket, but i don't
+# want to "--daemonize" if i can avoid it because i want the shell to
+# remain in direct supervision of all its processes
+tried=0
+while [ $tried -lt 10 ] ; do
+ if "${knotc}" "${knot_args[@]}" status 2>&1; then
+ break;
+ fi
+ sleep 0.5
+ tried=$(( $tried + 1 ))
+done
+if [ $tried -ge 10 ]; then
+ printf "failed to use %s\n" "${knotc}" >&2
+ exit 1
+fi
+
+section "querying knot"
+"${kdig}" -p "${port}" @"${ip}" -t A test.example.net test2.example.net
+answer="$("${kdig}" +short -p "${port}" @"${ip}" -t A test.example.net)"
+if ! [ "$answer" = "$test_address" ]; then
+ printf "test.example.net mismatch!\nexpected: %s\n got: %s\n" "$test_address" "$answer" >&2
+ exit 1
+fi
+answer2="$("${kdig}" +short -p "${port}" @"${ip}" -t A test2.example.net)"
+if ! [ "$answer2" = "" ]; then
+ printf "test2.example.net gave unexpected answer!\n got: %s\n" "$answer2" >&2
+ exit 1
+fi
+
+section "modifying zone"
+printf "test2 1D IN A $test_address\n" >>"$d/example.net.zone"
+sed -i 's/^@ 1D IN SOA.*/@ 1D IN SOA a.ns hostmaster 2018110100 3h 15m 1w 1d/' "$d/example.net.zone"
+"${knotc}" "${knot_args[@]}" reload
+sleep 1
+
+section "querying again"
+"${kdig}" -p "${port}" @"${ip}" -t A test.example.net test2.example.net
+answer="$("${kdig}" +short -p "${port}" @"${ip}" -t A test.example.net)"
+if ! [ "$answer" = "$test_address" ]; then
+ printf "test.example.net mismatch!\nexpected: %s\n got: %s\n" "$test_address" "$answer" >&2
+ exit 1
+fi
+answer2="$("${kdig}" +short -p "${port}" @"${ip}" -t A test2.example.net)"
+if ! [ "$answer2" = "$test_address" ]; then
+ printf "test2.example.net mismatch!\nexpected: %s\n got: %s\n" "$test_address" "$answer2" >&2
+ exit 1
+fi
+
+section "querying DNSSEC"
+"${kdig}" -p "${port}" @"${ip}" -t DNSKEY example.net. +dnssec
+if ! "${kdig}" -p "${port}" @"${ip}" -t DNSKEY example.net. +dnssec 2>&1 | grep -q "RRSIG[[:space:]]*DNSKEY"; then
+ printf "DNSSEC query not successful" >&2
+ exit 1
+fi
+
+section "listing keys with keymgr"
+"${keymgr}" "$knot_conf" -e example.net. list
+if ! "${keymgr}" "$knot_conf" -e example.net. list 2>&1 | grep -q "ksk=yes"; then
+ printf "keymgr did not list KSK as expected" >&2
+ exit 1
+fi
diff --git a/distro/tests/extra/all/control b/distro/tests/extra/all/control
new file mode 100644
index 0000000..6f9da6b
--- /dev/null
+++ b/distro/tests/extra/all/control
@@ -0,0 +1,10 @@
+Tests: kdig
+Restrictions: skippable
+{%- if distro.match('deb') %}
+Depends: iputils-ping, ca-certificates
+{%- elif distro.match('rpm') %}
+Depends: iputils
+{%- endif %}
+
+Tests: authoritative-server
+Depends: findutils
diff --git a/distro/tests/kdig b/distro/tests/kdig
new file mode 100755
index 0000000..f1dbe5a
--- /dev/null
+++ b/distro/tests/kdig
@@ -0,0 +1,14 @@
+#!/bin/bash
+
+set -e
+
+# Skip the test if no internet access
+ping -c1 1.1.1.1 2>&1 || exit 77
+
+expected=198.41.0.4
+answer=$(kdig +short +tls-ca @1.1.1.1 -q a.root-servers.net. -t A 2>&1 || true)
+
+if [ "$answer" != "$expected" ]; then
+ printf "expected: %s\ngot: %s\n" "$expected" "$answer" >&2
+ kdig -d +tls-ca @1.1.1.1 -q a.root-servers.net. -t A
+fi
diff --git a/doc/Makefile.am b/doc/Makefile.am
new file mode 100644
index 0000000..b38b211
--- /dev/null
+++ b/doc/Makefile.am
@@ -0,0 +1,173 @@
+MANPAGES_RST = \
+ reference.rst \
+ man_knotc.rst \
+ man_knotd.rst \
+ man_kcatalogprint.rst \
+ man_keymgr.rst \
+ man_kjournalprint.rst \
+ man_kdig.rst \
+ man_khost.rst \
+ man_knsupdate.rst \
+ man_knsec3hash.rst \
+ man_kzonecheck.rst \
+ man_kzonesign.rst \
+ man_kxdpgun.rst
+
+EXTRA_DIST = \
+ conf.py \
+ \
+ appendices.rst \
+ configuration.rst \
+ index.rst \
+ installation.rst \
+ introduction.rst \
+ migration.rst \
+ modules.rst.in \
+ operation.rst \
+ reference.rst \
+ requirements.rst \
+ troubleshooting.rst \
+ utilities.rst \
+ \
+ $(MANPAGES_RST) \
+ \
+ logo.pdf \
+ logo.svg \
+ \
+ ext/ignore_panels.py \
+ theme_epub \
+ theme_html
+
+SPHINX_V = $(SPHINX_V_@AM_V@)
+SPHINX_V_ = $(SPHINX_V_@AM_DEFAULT_V@)
+SPHINX_V_0 = -q
+SPHINX_V_1 = -n
+
+AM_V_SPHINX = $(AM_V_SPHINX_@AM_V@)
+AM_V_SPHINX_ = $(AM_V_SPHINX_@AM_DEFAULT_V@)
+AM_V_SPHINX_0 = @echo " SPHINX $@";
+
+SPHINXBUILDDIR = $(builddir)/_build
+
+_SPHINXOPTS = -c $(srcdir) \
+ -a \
+ $(SPHINX_V) \
+ -D version="$(VERSION)" \
+ -D today="$(RELEASE_DATE)" \
+ -D release="$(VERSION)"
+
+ALLSPHINXOPTS = $(_SPHINXOPTS) \
+ $(SPHINXOPTS) \
+ $(srcdir)
+
+man_SPHINXOPTS = $(_SPHINXOPTS) \
+ -D extensions="ignore_panels" \
+ $(SPHINXOPTS) \
+ $(srcdir)
+
+.PHONY: html-local singlehtml pdf-local epub man install-html-local install-singlehtml install-pdf-local install-epub
+
+man_MANS =
+
+if HAVE_DOCS
+
+if HAVE_DAEMON
+man_MANS += \
+ man/knot.conf.5 \
+ man/knotc.8 \
+ man/knotd.8
+endif # HAVE_DAEMON
+
+if HAVE_UTILS
+if HAVE_DAEMON
+man_MANS += \
+ man/kcatalogprint.8 \
+ man/keymgr.8 \
+ man/kjournalprint.8 \
+ man/kzonecheck.1 \
+ man/kzonesign.1
+endif # HAVE_DAEMON
+
+man_MANS += \
+ man/kdig.1 \
+ man/khost.1 \
+ man/knsupdate.1 \
+ man/knsec3hash.1
+
+if ENABLE_XDP
+man_MANS += man/kxdpgun.8
+endif # ENABLE_XDP
+endif # HAVE_UTILS
+
+if HAVE_SPHINX
+html-local:
+ $(AM_V_SPHINX)$(SPHINXBUILD) -b html -d $(SPHINXBUILDDIR)/doctrees/html $(ALLSPHINXOPTS) $(SPHINXBUILDDIR)/html
+ @echo "The HTML documentation has been built in $(SPHINXBUILDDIR)/html/"
+
+install-html-local:
+ $(INSTALL) -d $(DESTDIR)/$(docdir) $(DESTDIR)/$(docdir)/_static $(DESTDIR)/$(docdir)/_sources
+ $(INSTALL) -D $(SPHINXBUILDDIR)/html/*.html $(DESTDIR)/$(docdir)/
+ $(INSTALL_DATA) $(SPHINXBUILDDIR)/html/_sources/* $(DESTDIR)/$(docdir)/_sources/
+ $(INSTALL_DATA) $(SPHINXBUILDDIR)/html/_static/* $(DESTDIR)/$(docdir)/_static/
+
+singlehtml:
+ $(AM_V_SPHINX)$(SPHINXBUILD) -b singlehtml -d $(SPHINXBUILDDIR)/doctrees/singlehtml $(ALLSPHINXOPTS) $(SPHINXBUILDDIR)/singlehtml
+ @echo "The single HTML documentation has been built in $(SPHINXBUILDDIR)/singlehtml/"
+
+install-singlehtml: singlehtml
+ $(INSTALL) -d $(DESTDIR)/$(docdir) $(DESTDIR)/$(docdir)/_static
+ $(INSTALL_DATA) $(SPHINXBUILDDIR)/singlehtml/*.html $(DESTDIR)/$(docdir)/
+ $(INSTALL_DATA) $(SPHINXBUILDDIR)/singlehtml/_static/* $(DESTDIR)/$(docdir)/_static/
+
+epub:
+ $(AM_V_SPHINX)$(SPHINXBUILD) -b epub -A today=$(RELEASE_DATE) -d $(SPHINXBUILDDIR)/doctrees/epub $(ALLSPHINXOPTS) $(SPHINXBUILDDIR)/epub
+ @echo "The EPUB documentation has been built in $(SPHINXBUILDDIR)/epub/"
+
+install-epub:
+ $(INSTALL) -d $(DESTDIR)/$(docdir)
+ $(INSTALL_DATA) $(SPHINXBUILDDIR)/epub/KnotDNS.epub $(DESTDIR)/$(docdir)/
+
+if HAVE_PDFLATEX
+pdf-local:
+ $(AM_V_SPHINX)$(SPHINXBUILD) -b latex -d $(SPHINXBUILDDIR)/doctrees/latex $(ALLSPHINXOPTS) $(SPHINXBUILDDIR)/latex
+ $(MAKE) -C $(SPHINXBUILDDIR)/latex all-pdf
+ @echo "The PDF documentation has been built in $(SPHINXBUILDDIR)/latex/"
+
+install-pdf-local:
+ $(INSTALL) -d $(DESTDIR)/$(docdir)
+ $(INSTALL_DATA) $(SPHINXBUILDDIR)/latex/KnotDNS.pdf $(DESTDIR)/$(docdir)/
+
+else
+pdf-local install-pdf-local:
+ @echo "Install 'pdflatex' and re-run configure to be able to generate PDF documentation!"
+endif # HAVE_PDFLATEX
+
+man: $(man_MANS)
+$(man_MANS)&: $(MANPAGES_RST)
+ $(AM_V_SPHINX)$(SPHINXBUILD) -b man -d $(SPHINXBUILDDIR)/doctrees/man $(man_SPHINXOPTS) $(SPHINXBUILDDIR)/man
+ @mkdir -p man
+ @for f in $(SPHINXBUILDDIR)/man/*; do \
+ sed -e 's,[@]config_dir@,$(config_dir),' \
+ -e 's,[@]storage_dir@,$(storage_dir),' \
+ -e 's,[@]run_dir@,$(run_dir),' \
+ -e 's,[@]conf_mapsize@,$(conf_mapsize),' "$$f" > "man/$$(basename $$f)"; \
+ done
+
+else
+html-local singlehtml pdf-local epub man install-html-local install-singlehtml install-pdf-local install-epub:
+ @echo "Install 'sphinx-build' and re-run configure to be able to generate documentation!"
+
+$(man_MANS)&:
+ @if [ ! -f "$@" ]; then \
+ echo "Install 'sphinx-build' or disable documentation and re-run configure to generate man pages!"; \
+ fi
+endif # HAVE_SPHINX
+
+endif # HAVE_DOCS
+
+EXTRA_DIST += \
+ $(man_MANS)
+
+clean-local:
+ -rm -rf $(SPHINXBUILDDIR)
+ -rm -rf man
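A short sketch of how these targets are typically driven from the build tree,
assuming ./configure has already been run and sphinx-build (plus pdflatex for
the PDF target) was detected:

    make -C doc html    # HTML into doc/_build/html
    make -C doc man     # man pages into doc/man, with @config_dir@ etc. substituted
    make -C doc epub    # EPUB into doc/_build/epub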
diff --git a/doc/Makefile.in b/doc/Makefile.in
new file mode 100644
index 0000000..b206b85
--- /dev/null
+++ b/doc/Makefile.in
@@ -0,0 +1,836 @@
+# Makefile.in generated by automake 1.16.5 from Makefile.am.
+# @configure_input@
+
+# Copyright (C) 1994-2021 Free Software Foundation, Inc.
+
+# This Makefile.in is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+@SET_MAKE@
+VPATH = @srcdir@
+am__is_gnu_make = { \
+ if test -z '$(MAKELEVEL)'; then \
+ false; \
+ elif test -n '$(MAKE_HOST)'; then \
+ true; \
+ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \
+ true; \
+ else \
+ false; \
+ fi; \
+}
+am__make_running_with_option = \
+ case $${target_option-} in \
+ ?) ;; \
+ *) echo "am__make_running_with_option: internal error: invalid" \
+ "target option '$${target_option-}' specified" >&2; \
+ exit 1;; \
+ esac; \
+ has_opt=no; \
+ sane_makeflags=$$MAKEFLAGS; \
+ if $(am__is_gnu_make); then \
+ sane_makeflags=$$MFLAGS; \
+ else \
+ case $$MAKEFLAGS in \
+ *\\[\ \ ]*) \
+ bs=\\; \
+ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \
+ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \
+ esac; \
+ fi; \
+ skip_next=no; \
+ strip_trailopt () \
+ { \
+ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \
+ }; \
+ for flg in $$sane_makeflags; do \
+ test $$skip_next = yes && { skip_next=no; continue; }; \
+ case $$flg in \
+ *=*|--*) continue;; \
+ -*I) strip_trailopt 'I'; skip_next=yes;; \
+ -*I?*) strip_trailopt 'I';; \
+ -*O) strip_trailopt 'O'; skip_next=yes;; \
+ -*O?*) strip_trailopt 'O';; \
+ -*l) strip_trailopt 'l'; skip_next=yes;; \
+ -*l?*) strip_trailopt 'l';; \
+ -[dEDm]) skip_next=yes;; \
+ -[JT]) skip_next=yes;; \
+ esac; \
+ case $$flg in \
+ *$$target_option*) has_opt=yes; break;; \
+ esac; \
+ done; \
+ test $$has_opt = yes
+am__make_dryrun = (target_option=n; $(am__make_running_with_option))
+am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
+pkgdatadir = $(datadir)/@PACKAGE@
+pkgincludedir = $(includedir)/@PACKAGE@
+pkglibdir = $(libdir)/@PACKAGE@
+pkglibexecdir = $(libexecdir)/@PACKAGE@
+am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
+install_sh_DATA = $(install_sh) -c -m 644
+install_sh_PROGRAM = $(install_sh) -c
+install_sh_SCRIPT = $(install_sh) -c
+INSTALL_HEADER = $(INSTALL_DATA)
+transform = $(program_transform_name)
+NORMAL_INSTALL = :
+PRE_INSTALL = :
+POST_INSTALL = :
+NORMAL_UNINSTALL = :
+PRE_UNINSTALL = :
+POST_UNINSTALL = :
+build_triplet = @build@
+host_triplet = @host@
+@HAVE_DAEMON_TRUE@@HAVE_DOCS_TRUE@am__append_1 = \
+@HAVE_DAEMON_TRUE@@HAVE_DOCS_TRUE@ man/knot.conf.5 \
+@HAVE_DAEMON_TRUE@@HAVE_DOCS_TRUE@ man/knotc.8 \
+@HAVE_DAEMON_TRUE@@HAVE_DOCS_TRUE@ man/knotd.8
+
+@HAVE_DAEMON_TRUE@@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@am__append_2 = \
+@HAVE_DAEMON_TRUE@@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@ man/kcatalogprint.8 \
+@HAVE_DAEMON_TRUE@@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@ man/keymgr.8 \
+@HAVE_DAEMON_TRUE@@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@ man/kjournalprint.8 \
+@HAVE_DAEMON_TRUE@@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@ man/kzonecheck.1 \
+@HAVE_DAEMON_TRUE@@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@ man/kzonesign.1
+
+@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@am__append_3 = \
+@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@ man/kdig.1 \
+@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@ man/khost.1 \
+@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@ man/knsupdate.1 \
+@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@ man/knsec3hash.1
+
+@ENABLE_XDP_TRUE@@HAVE_DOCS_TRUE@@HAVE_UTILS_TRUE@am__append_4 = man/kxdpgun.8
+subdir = doc
+ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
+am__aclocal_m4_deps = $(top_srcdir)/m4/ax_check_compile_flag.m4 \
+ $(top_srcdir)/m4/ax_check_link_flag.m4 \
+ $(top_srcdir)/m4/code-coverage.m4 \
+ $(top_srcdir)/m4/knot-lib-version.m4 \
+ $(top_srcdir)/m4/knot-module.m4 $(top_srcdir)/m4/libtool.m4 \
+ $(top_srcdir)/m4/ltoptions.m4 $(top_srcdir)/m4/ltsugar.m4 \
+ $(top_srcdir)/m4/ltversion.m4 $(top_srcdir)/m4/lt~obsolete.m4 \
+ $(top_srcdir)/m4/sanitizer.m4 $(top_srcdir)/m4/visibility.m4 \
+ $(top_srcdir)/m4/knot-version.m4 $(top_srcdir)/configure.ac
+am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
+ $(ACLOCAL_M4)
+DIST_COMMON = $(srcdir)/Makefile.am $(am__DIST_COMMON)
+mkinstalldirs = $(install_sh) -d
+CONFIG_HEADER = $(top_builddir)/src/config.h
+CONFIG_CLEAN_FILES = modules.rst
+CONFIG_CLEAN_VPATH_FILES =
+AM_V_P = $(am__v_P_@AM_V@)
+am__v_P_ = $(am__v_P_@AM_DEFAULT_V@)
+am__v_P_0 = false
+am__v_P_1 = :
+AM_V_GEN = $(am__v_GEN_@AM_V@)
+am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
+am__v_GEN_0 = @echo " GEN " $@;
+am__v_GEN_1 =
+AM_V_at = $(am__v_at_@AM_V@)
+am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
+am__v_at_0 = @
+am__v_at_1 =
+SOURCES =
+DIST_SOURCES =
+am__can_run_installinfo = \
+ case $$AM_UPDATE_INFO_DIR in \
+ n|no|NO) false;; \
+ *) (install-info --version) >/dev/null 2>&1;; \
+ esac
+am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
+am__vpath_adj = case $$p in \
+ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
+ *) f=$$p;; \
+ esac;
+am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`;
+am__install_max = 40
+am__nobase_strip_setup = \
+ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'`
+am__nobase_strip = \
+ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||"
+am__nobase_list = $(am__nobase_strip_setup); \
+ for p in $$list; do echo "$$p $$p"; done | \
+ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \
+ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \
+ if (++n[$$2] == $(am__install_max)) \
+ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \
+ END { for (dir in files) print dir, files[dir] }'
+am__base_list = \
+ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \
+ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g'
+am__uninstall_files_from_dir = { \
+ test -z "$$files" \
+ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
+ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \
+ $(am__cd) "$$dir" && rm -f $$files; }; \
+ }
+man1dir = $(mandir)/man1
+am__installdirs = "$(DESTDIR)$(man1dir)" "$(DESTDIR)$(man5dir)" \
+ "$(DESTDIR)$(man8dir)"
+man5dir = $(mandir)/man5
+man8dir = $(mandir)/man8
+NROFF = nroff
+MANS = $(man_MANS)
+am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP)
+am__DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/modules.rst.in
+DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
+ACLOCAL = @ACLOCAL@
+AMTAR = @AMTAR@
+AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
+AR = @AR@
+AUTOCONF = @AUTOCONF@
+AUTOHEADER = @AUTOHEADER@
+AUTOMAKE = @AUTOMAKE@
+AWK = @AWK@
+CC = @CC@
+CCDEPMODE = @CCDEPMODE@
+CFLAGS = @CFLAGS@
+CFLAG_VISIBILITY = @CFLAG_VISIBILITY@
+CODE_COVERAGE_ENABLED = @CODE_COVERAGE_ENABLED@
+CPP = @CPP@
+CPPFLAGS = @CPPFLAGS@
+CSCOPE = @CSCOPE@
+CTAGS = @CTAGS@
+CYGPATH_W = @CYGPATH_W@
+DEFS = @DEFS@
+DEPDIR = @DEPDIR@
+DLLTOOL = @DLLTOOL@
+DNSTAP_CFLAGS = @DNSTAP_CFLAGS@
+DNSTAP_LIBS = @DNSTAP_LIBS@
+DSYMUTIL = @DSYMUTIL@
+DUMPBIN = @DUMPBIN@
+ECHO_C = @ECHO_C@
+ECHO_N = @ECHO_N@
+ECHO_T = @ECHO_T@
+EGREP = @EGREP@
+ETAGS = @ETAGS@
+EXEEXT = @EXEEXT@
+FGREP = @FGREP@
+FILECMD = @FILECMD@
+GENHTML = @GENHTML@
+GREP = @GREP@
+HAVE_VISIBILITY = @HAVE_VISIBILITY@
+INSTALL = @INSTALL@
+INSTALL_DATA = @INSTALL_DATA@
+INSTALL_PROGRAM = @INSTALL_PROGRAM@
+INSTALL_SCRIPT = @INSTALL_SCRIPT@
+INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
+KNOT_VERSION_MAJOR = @KNOT_VERSION_MAJOR@
+KNOT_VERSION_MINOR = @KNOT_VERSION_MINOR@
+KNOT_VERSION_PATCH = @KNOT_VERSION_PATCH@
+LCOV = @LCOV@
+LD = @LD@
+LDFLAGS = @LDFLAGS@
+LDFLAG_EXCLUDE_LIBS = @LDFLAG_EXCLUDE_LIBS@
+LIBOBJS = @LIBOBJS@
+LIBS = @LIBS@
+LIBTOOL = @LIBTOOL@
+LIPO = @LIPO@
+LN_S = @LN_S@
+LTLIBOBJS = @LTLIBOBJS@
+LT_NO_UNDEFINED = @LT_NO_UNDEFINED@
+LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@
+MAKEINFO = @MAKEINFO@
+MANIFEST_TOOL = @MANIFEST_TOOL@
+MKDIR_P = @MKDIR_P@
+NM = @NM@
+NMEDIT = @NMEDIT@
+OBJDUMP = @OBJDUMP@
+OBJEXT = @OBJEXT@
+OTOOL = @OTOOL@
+OTOOL64 = @OTOOL64@
+PACKAGE = @PACKAGE@
+PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
+PACKAGE_NAME = @PACKAGE_NAME@
+PACKAGE_STRING = @PACKAGE_STRING@
+PACKAGE_TARNAME = @PACKAGE_TARNAME@
+PACKAGE_URL = @PACKAGE_URL@
+PACKAGE_VERSION = @PACKAGE_VERSION@
+PATH_SEPARATOR = @PATH_SEPARATOR@
+PDFLATEX = @PDFLATEX@
+PKG_CONFIG = @PKG_CONFIG@
+PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
+PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
+PROTOC_C = @PROTOC_C@
+RANLIB = @RANLIB@
+RELEASE_DATE = @RELEASE_DATE@
+SED = @SED@
+SET_MAKE = @SET_MAKE@
+SHELL = @SHELL@
+SPHINXBUILD = @SPHINXBUILD@
+STRIP = @STRIP@
+VERSION = @VERSION@
+abs_builddir = @abs_builddir@
+abs_srcdir = @abs_srcdir@
+abs_top_builddir = @abs_top_builddir@
+abs_top_srcdir = @abs_top_srcdir@
+ac_ct_AR = @ac_ct_AR@
+ac_ct_CC = @ac_ct_CC@
+ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+am__include = @am__include@
+am__leading_dot = @am__leading_dot@
+am__quote = @am__quote@
+am__tar = @am__tar@
+am__untar = @am__untar@
+bindir = @bindir@
+build = @build@
+build_alias = @build_alias@
+build_cpu = @build_cpu@
+build_os = @build_os@
+build_vendor = @build_vendor@
+builddir = @builddir@
+cap_ng_CFLAGS = @cap_ng_CFLAGS@
+cap_ng_LIBS = @cap_ng_LIBS@
+conf_mapsize = @conf_mapsize@
+config_dir = @config_dir@
+datadir = @datadir@
+datarootdir = @datarootdir@
+dlopen_LIBS = @dlopen_LIBS@
+docdir = @docdir@
+dvidir = @dvidir@
+embedded_libngtcp2_CFLAGS = @embedded_libngtcp2_CFLAGS@
+embedded_libngtcp2_LIBS = @embedded_libngtcp2_LIBS@
+exec_prefix = @exec_prefix@
+fuzzer_CFLAGS = @fuzzer_CFLAGS@
+fuzzer_LDFLAGS = @fuzzer_LDFLAGS@
+gnutls_CFLAGS = @gnutls_CFLAGS@
+gnutls_LIBS = @gnutls_LIBS@
+host = @host@
+host_alias = @host_alias@
+host_cpu = @host_cpu@
+host_os = @host_os@
+host_vendor = @host_vendor@
+htmldir = @htmldir@
+includedir = @includedir@
+infodir = @infodir@
+install_sh = @install_sh@
+libbpf_CFLAGS = @libbpf_CFLAGS@
+libbpf_LIBS = @libbpf_LIBS@
+libdbus_CFLAGS = @libdbus_CFLAGS@
+libdbus_LIBS = @libdbus_LIBS@
+libdir = @libdir@
+libdnssec_SONAME = @libdnssec_SONAME@
+libdnssec_SOVERSION = @libdnssec_SOVERSION@
+libdnssec_VERSION_INFO = @libdnssec_VERSION_INFO@
+libedit_CFLAGS = @libedit_CFLAGS@
+libedit_LIBS = @libedit_LIBS@
+libexecdir = @libexecdir@
+libfstrm_CFLAGS = @libfstrm_CFLAGS@
+libfstrm_LIBS = @libfstrm_LIBS@
+libidn2_CFLAGS = @libidn2_CFLAGS@
+libidn2_LIBS = @libidn2_LIBS@
+libknot_SONAME = @libknot_SONAME@
+libknot_SOVERSION = @libknot_SOVERSION@
+libknot_VERSION_INFO = @libknot_VERSION_INFO@
+libkqueue_CFLAGS = @libkqueue_CFLAGS@
+libkqueue_LIBS = @libkqueue_LIBS@
+libmaxminddb_CFLAGS = @libmaxminddb_CFLAGS@
+libmaxminddb_LIBS = @libmaxminddb_LIBS@
+libmnl_CFLAGS = @libmnl_CFLAGS@
+libmnl_LIBS = @libmnl_LIBS@
+libnghttp2_CFLAGS = @libnghttp2_CFLAGS@
+libnghttp2_LIBS = @libnghttp2_LIBS@
+libngtcp2_CFLAGS = @libngtcp2_CFLAGS@
+libngtcp2_LIBS = @libngtcp2_LIBS@
+libprotobuf_c_CFLAGS = @libprotobuf_c_CFLAGS@
+libprotobuf_c_LIBS = @libprotobuf_c_LIBS@
+liburcu_CFLAGS = @liburcu_CFLAGS@
+liburcu_LIBS = @liburcu_LIBS@
+libxdp_CFLAGS = @libxdp_CFLAGS@
+libxdp_LIBS = @libxdp_LIBS@
+libzscanner_SONAME = @libzscanner_SONAME@
+libzscanner_SOVERSION = @libzscanner_SOVERSION@
+libzscanner_VERSION_INFO = @libzscanner_VERSION_INFO@
+lmdb_CFLAGS = @lmdb_CFLAGS@
+lmdb_LIBS = @lmdb_LIBS@
+localedir = @localedir@
+localstatedir = @localstatedir@
+malloc_LIBS = @malloc_LIBS@
+mandir = @mandir@
+math_LIBS = @math_LIBS@
+mkdir_p = @mkdir_p@
+module_dir = @module_dir@
+module_instdir = @module_instdir@
+oldincludedir = @oldincludedir@
+pdfdir = @pdfdir@
+pkgconfigdir = @pkgconfigdir@
+prefix = @prefix@
+program_transform_name = @program_transform_name@
+psdir = @psdir@
+pthread_LIBS = @pthread_LIBS@
+run_dir = @run_dir@
+runstatedir = @runstatedir@
+sbindir = @sbindir@
+sharedstatedir = @sharedstatedir@
+srcdir = @srcdir@
+storage_dir = @storage_dir@
+sysconfdir = @sysconfdir@
+systemd_CFLAGS = @systemd_CFLAGS@
+systemd_LIBS = @systemd_LIBS@
+target_alias = @target_alias@
+top_build_prefix = @top_build_prefix@
+top_builddir = @top_builddir@
+top_srcdir = @top_srcdir@
+MANPAGES_RST = \
+ reference.rst \
+ man_knotc.rst \
+ man_knotd.rst \
+ man_kcatalogprint.rst \
+ man_keymgr.rst \
+ man_kjournalprint.rst \
+ man_kdig.rst \
+ man_khost.rst \
+ man_knsupdate.rst \
+ man_knsec3hash.rst \
+ man_kzonecheck.rst \
+ man_kzonesign.rst \
+ man_kxdpgun.rst
+
+EXTRA_DIST = conf.py appendices.rst configuration.rst index.rst \
+ installation.rst introduction.rst migration.rst modules.rst.in \
+ operation.rst reference.rst requirements.rst \
+ troubleshooting.rst utilities.rst $(MANPAGES_RST) logo.pdf \
+ logo.svg ext/ignore_panels.py theme_epub theme_html \
+ $(man_MANS)
+SPHINX_V = $(SPHINX_V_@AM_V@)
+SPHINX_V_ = $(SPHINX_V_@AM_DEFAULT_V@)
+SPHINX_V_0 = -q
+SPHINX_V_1 = -n
+AM_V_SPHINX = $(AM_V_SPHINX_@AM_V@)
+AM_V_SPHINX_ = $(AM_V_SPHINX_@AM_DEFAULT_V@)
+AM_V_SPHINX_0 = @echo " SPHINX $@";
+SPHINXBUILDDIR = $(builddir)/_build
+_SPHINXOPTS = -c $(srcdir) \
+ -a \
+ $(SPHINX_V) \
+ -D version="$(VERSION)" \
+ -D today="$(RELEASE_DATE)" \
+ -D release="$(VERSION)"
+
+ALLSPHINXOPTS = $(_SPHINXOPTS) \
+ $(SPHINXOPTS) \
+ $(srcdir)
+
+man_SPHINXOPTS = $(_SPHINXOPTS) \
+ -D extensions="ignore_panels" \
+ $(SPHINXOPTS) \
+ $(srcdir)
+
+man_MANS = $(am__append_1) $(am__append_2) $(am__append_3) \
+ $(am__append_4)
+all: all-am
+
+.SUFFIXES:
+$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
+ @for dep in $?; do \
+ case '$(am__configure_deps)' in \
+ *$$dep*) \
+ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \
+ && { if test -f $@; then exit 0; else break; fi; }; \
+ exit 1;; \
+ esac; \
+ done; \
+ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign doc/Makefile'; \
+ $(am__cd) $(top_srcdir) && \
+ $(AUTOMAKE) --foreign doc/Makefile
+Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
+ @case '$?' in \
+ *config.status*) \
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
+ *) \
+ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles)'; \
+ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles);; \
+ esac;
+
+$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+
+$(top_srcdir)/configure: $(am__configure_deps)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+$(ACLOCAL_M4): $(am__aclocal_m4_deps)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+$(am__aclocal_m4_deps):
+modules.rst: $(top_builddir)/config.status $(srcdir)/modules.rst.in
+ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
+
+mostlyclean-libtool:
+ -rm -f *.lo
+
+clean-libtool:
+ -rm -rf .libs _libs
+install-man1: $(man_MANS)
+ @$(NORMAL_INSTALL)
+ @list1=''; \
+ list2='$(man_MANS)'; \
+ test -n "$(man1dir)" \
+ && test -n "`echo $$list1$$list2`" \
+ || exit 0; \
+ echo " $(MKDIR_P) '$(DESTDIR)$(man1dir)'"; \
+ $(MKDIR_P) "$(DESTDIR)$(man1dir)" || exit 1; \
+ { for i in $$list1; do echo "$$i"; done; \
+ if test -n "$$list2"; then \
+ for i in $$list2; do echo "$$i"; done \
+ | sed -n '/\.1[a-z]*$$/p'; \
+ fi; \
+ } | while read p; do \
+ if test -f $$p; then d=; else d="$(srcdir)/"; fi; \
+ echo "$$d$$p"; echo "$$p"; \
+ done | \
+ sed -e 'n;s,.*/,,;p;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \
+ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,' | \
+ sed 'N;N;s,\n, ,g' | { \
+ list=; while read file base inst; do \
+ if test "$$base" = "$$inst"; then list="$$list $$file"; else \
+ echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man1dir)/$$inst'"; \
+ $(INSTALL_DATA) "$$file" "$(DESTDIR)$(man1dir)/$$inst" || exit $$?; \
+ fi; \
+ done; \
+ for i in $$list; do echo "$$i"; done | $(am__base_list) | \
+ while read files; do \
+ test -z "$$files" || { \
+ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(man1dir)'"; \
+ $(INSTALL_DATA) $$files "$(DESTDIR)$(man1dir)" || exit $$?; }; \
+ done; }
+
+uninstall-man1:
+ @$(NORMAL_UNINSTALL)
+ @list=''; test -n "$(man1dir)" || exit 0; \
+ files=`{ for i in $$list; do echo "$$i"; done; \
+ l2='$(man_MANS)'; for i in $$l2; do echo "$$i"; done | \
+ sed -n '/\.1[a-z]*$$/p'; \
+ } | sed -e 's,.*/,,;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \
+ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,'`; \
+ dir='$(DESTDIR)$(man1dir)'; $(am__uninstall_files_from_dir)
+install-man5: $(man_MANS)
+ @$(NORMAL_INSTALL)
+ @list1=''; \
+ list2='$(man_MANS)'; \
+ test -n "$(man5dir)" \
+ && test -n "`echo $$list1$$list2`" \
+ || exit 0; \
+ echo " $(MKDIR_P) '$(DESTDIR)$(man5dir)'"; \
+ $(MKDIR_P) "$(DESTDIR)$(man5dir)" || exit 1; \
+ { for i in $$list1; do echo "$$i"; done; \
+ if test -n "$$list2"; then \
+ for i in $$list2; do echo "$$i"; done \
+ | sed -n '/\.5[a-z]*$$/p'; \
+ fi; \
+ } | while read p; do \
+ if test -f $$p; then d=; else d="$(srcdir)/"; fi; \
+ echo "$$d$$p"; echo "$$p"; \
+ done | \
+ sed -e 'n;s,.*/,,;p;h;s,.*\.,,;s,^[^5][0-9a-z]*$$,5,;x' \
+ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,' | \
+ sed 'N;N;s,\n, ,g' | { \
+ list=; while read file base inst; do \
+ if test "$$base" = "$$inst"; then list="$$list $$file"; else \
+ echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man5dir)/$$inst'"; \
+ $(INSTALL_DATA) "$$file" "$(DESTDIR)$(man5dir)/$$inst" || exit $$?; \
+ fi; \
+ done; \
+ for i in $$list; do echo "$$i"; done | $(am__base_list) | \
+ while read files; do \
+ test -z "$$files" || { \
+ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(man5dir)'"; \
+ $(INSTALL_DATA) $$files "$(DESTDIR)$(man5dir)" || exit $$?; }; \
+ done; }
+
+uninstall-man5:
+ @$(NORMAL_UNINSTALL)
+ @list=''; test -n "$(man5dir)" || exit 0; \
+ files=`{ for i in $$list; do echo "$$i"; done; \
+ l2='$(man_MANS)'; for i in $$l2; do echo "$$i"; done | \
+ sed -n '/\.5[a-z]*$$/p'; \
+ } | sed -e 's,.*/,,;h;s,.*\.,,;s,^[^5][0-9a-z]*$$,5,;x' \
+ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,'`; \
+ dir='$(DESTDIR)$(man5dir)'; $(am__uninstall_files_from_dir)
+install-man8: $(man_MANS)
+ @$(NORMAL_INSTALL)
+ @list1=''; \
+ list2='$(man_MANS)'; \
+ test -n "$(man8dir)" \
+ && test -n "`echo $$list1$$list2`" \
+ || exit 0; \
+ echo " $(MKDIR_P) '$(DESTDIR)$(man8dir)'"; \
+ $(MKDIR_P) "$(DESTDIR)$(man8dir)" || exit 1; \
+ { for i in $$list1; do echo "$$i"; done; \
+ if test -n "$$list2"; then \
+ for i in $$list2; do echo "$$i"; done \
+ | sed -n '/\.8[a-z]*$$/p'; \
+ fi; \
+ } | while read p; do \
+ if test -f $$p; then d=; else d="$(srcdir)/"; fi; \
+ echo "$$d$$p"; echo "$$p"; \
+ done | \
+ sed -e 'n;s,.*/,,;p;h;s,.*\.,,;s,^[^8][0-9a-z]*$$,8,;x' \
+ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,' | \
+ sed 'N;N;s,\n, ,g' | { \
+ list=; while read file base inst; do \
+ if test "$$base" = "$$inst"; then list="$$list $$file"; else \
+ echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man8dir)/$$inst'"; \
+ $(INSTALL_DATA) "$$file" "$(DESTDIR)$(man8dir)/$$inst" || exit $$?; \
+ fi; \
+ done; \
+ for i in $$list; do echo "$$i"; done | $(am__base_list) | \
+ while read files; do \
+ test -z "$$files" || { \
+ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(man8dir)'"; \
+ $(INSTALL_DATA) $$files "$(DESTDIR)$(man8dir)" || exit $$?; }; \
+ done; }
+
+uninstall-man8:
+ @$(NORMAL_UNINSTALL)
+ @list=''; test -n "$(man8dir)" || exit 0; \
+ files=`{ for i in $$list; do echo "$$i"; done; \
+ l2='$(man_MANS)'; for i in $$l2; do echo "$$i"; done | \
+ sed -n '/\.8[a-z]*$$/p'; \
+ } | sed -e 's,.*/,,;h;s,.*\.,,;s,^[^8][0-9a-z]*$$,8,;x' \
+ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,'`; \
+ dir='$(DESTDIR)$(man8dir)'; $(am__uninstall_files_from_dir)
+tags TAGS:
+
+ctags CTAGS:
+
+cscope cscopelist:
+
+distdir: $(BUILT_SOURCES)
+ $(MAKE) $(AM_MAKEFLAGS) distdir-am
+
+distdir-am: $(DISTFILES)
+ @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ list='$(DISTFILES)'; \
+ dist_files=`for file in $$list; do echo $$file; done | \
+ sed -e "s|^$$srcdirstrip/||;t" \
+ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
+ case $$dist_files in \
+ */*) $(MKDIR_P) `echo "$$dist_files" | \
+ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
+ sort -u` ;; \
+ esac; \
+ for file in $$dist_files; do \
+ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
+ if test -d $$d/$$file; then \
+ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
+ if test -d "$(distdir)/$$file"; then \
+ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
+ fi; \
+ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
+ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
+ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
+ fi; \
+ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
+ else \
+ test -f "$(distdir)/$$file" \
+ || cp -p $$d/$$file "$(distdir)/$$file" \
+ || exit 1; \
+ fi; \
+ done
+check-am: all-am
+check: check-am
+all-am: Makefile $(MANS)
+installdirs:
+ for dir in "$(DESTDIR)$(man1dir)" "$(DESTDIR)$(man5dir)" "$(DESTDIR)$(man8dir)"; do \
+ test -z "$$dir" || $(MKDIR_P) "$$dir"; \
+ done
+install: install-am
+install-exec: install-exec-am
+install-data: install-data-am
+uninstall: uninstall-am
+
+install-am: all-am
+ @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
+
+installcheck: installcheck-am
+install-strip:
+ if test -z '$(STRIP)'; then \
+ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
+ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
+ install; \
+ else \
+ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
+ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
+ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
+ fi
+mostlyclean-generic:
+
+clean-generic:
+
+distclean-generic:
+ -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
+ -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
+
+maintainer-clean-generic:
+ @echo "This command is intended for maintainers to use"
+ @echo "it deletes files that may require special tools to rebuild."
+@HAVE_DOCS_FALSE@html-local:
+@HAVE_SPHINX_FALSE@html-local:
+@HAVE_DOCS_FALSE@install-html-local:
+@HAVE_SPHINX_FALSE@install-html-local:
+@HAVE_DOCS_FALSE@install-pdf-local:
+@HAVE_PDFLATEX_FALSE@install-pdf-local:
+@HAVE_SPHINX_FALSE@install-pdf-local:
+@HAVE_DOCS_FALSE@pdf-local:
+@HAVE_PDFLATEX_FALSE@pdf-local:
+@HAVE_SPHINX_FALSE@pdf-local:
+clean: clean-am
+
+clean-am: clean-generic clean-libtool clean-local mostlyclean-am
+
+distclean: distclean-am
+ -rm -f Makefile
+distclean-am: clean-am distclean-generic
+
+dvi: dvi-am
+
+dvi-am:
+
+html: html-am
+
+html-am: html-local
+
+info: info-am
+
+info-am:
+
+install-data-am: install-man
+
+install-dvi: install-dvi-am
+
+install-dvi-am:
+
+install-exec-am:
+
+install-html: install-html-am
+
+install-html-am: install-html-local
+
+install-info: install-info-am
+
+install-info-am:
+
+install-man: install-man1 install-man5 install-man8
+
+install-pdf: install-pdf-am
+
+install-pdf-am: install-pdf-local
+
+install-ps: install-ps-am
+
+install-ps-am:
+
+installcheck-am:
+
+maintainer-clean: maintainer-clean-am
+ -rm -f Makefile
+maintainer-clean-am: distclean-am maintainer-clean-generic
+
+mostlyclean: mostlyclean-am
+
+mostlyclean-am: mostlyclean-generic mostlyclean-libtool
+
+pdf: pdf-am
+
+pdf-am: pdf-local
+
+ps: ps-am
+
+ps-am:
+
+uninstall-am: uninstall-man
+
+uninstall-man: uninstall-man1 uninstall-man5 uninstall-man8
+
+.MAKE: install-am install-strip
+
+.PHONY: all all-am check check-am clean clean-generic clean-libtool \
+ clean-local cscopelist-am ctags-am distclean distclean-generic \
+ distclean-libtool distdir dvi dvi-am html html-am html-local \
+ info info-am install install-am install-data install-data-am \
+ install-dvi install-dvi-am install-exec install-exec-am \
+ install-html install-html-am install-html-local install-info \
+ install-info-am install-man install-man1 install-man5 \
+ install-man8 install-pdf install-pdf-am install-pdf-local \
+ install-ps install-ps-am install-strip installcheck \
+ installcheck-am installdirs maintainer-clean \
+ maintainer-clean-generic mostlyclean mostlyclean-generic \
+ mostlyclean-libtool pdf pdf-am pdf-local ps ps-am tags-am \
+ uninstall uninstall-am uninstall-man uninstall-man1 \
+ uninstall-man5 uninstall-man8
+
+.PRECIOUS: Makefile
+
+
+.PHONY: html-local singlehtml pdf-local epub man install-html-local install-singlehtml install-pdf-local install-epub
+
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@html-local:
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(AM_V_SPHINX)$(SPHINXBUILD) -b html -d $(SPHINXBUILDDIR)/doctrees/html $(ALLSPHINXOPTS) $(SPHINXBUILDDIR)/html
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ @echo "The HTML documentation has been built in $(SPHINXBUILDDIR)/html/"
+
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@install-html-local:
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(INSTALL) -d $(DESTDIR)/$(docdir) $(DESTDIR)/$(docdir)/_static $(DESTDIR)/$(docdir)/_sources
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(INSTALL) -D $(SPHINXBUILDDIR)/html/*.html $(DESTDIR)/$(docdir)/
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(INSTALL_DATA) $(SPHINXBUILDDIR)/html/_sources/* $(DESTDIR)/$(docdir)/_sources/
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(INSTALL_DATA) $(SPHINXBUILDDIR)/html/_static/* $(DESTDIR)/$(docdir)/_static/
+
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@singlehtml:
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(AM_V_SPHINX)$(SPHINXBUILD) -b singlehtml -d $(SPHINXBUILDDIR)/doctrees/singlehtml $(ALLSPHINXOPTS) $(SPHINXBUILDDIR)/singlehtml
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ @echo "The single HTML documentation has been built in $(SPHINXBUILDDIR)/singlehtml/"
+
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@install-singlehtml: singlehtml
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(INSTALL) -d $(DESTDIR)/$(docdir) $(DESTDIR)/$(docdir)/_static
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(INSTALL_DATA) $(SPHINXBUILDDIR)/singlehtml/*.html $(DESTDIR)/$(docdir)/
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(INSTALL_DATA) $(SPHINXBUILDDIR)/singlehtml/_static/* $(DESTDIR)/$(docdir)/_static/
+
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@epub:
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(AM_V_SPHINX)$(SPHINXBUILD) -b epub -A today=$(RELEASE_DATE) -d $(SPHINXBUILDDIR)/doctrees/epub $(ALLSPHINXOPTS) $(SPHINXBUILDDIR)/epub
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ @echo "The EPUB documentation has been built in $(SPHINXBUILDDIR)/epub/"
+
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@install-epub:
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(INSTALL) -d $(DESTDIR)/$(docdir)
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(INSTALL_DATA) $(SPHINXBUILDDIR)/epub/KnotDNS.epub $(DESTDIR)/$(docdir)/
+
+@HAVE_DOCS_TRUE@@HAVE_PDFLATEX_TRUE@@HAVE_SPHINX_TRUE@pdf-local:
+@HAVE_DOCS_TRUE@@HAVE_PDFLATEX_TRUE@@HAVE_SPHINX_TRUE@ $(AM_V_SPHINX)$(SPHINXBUILD) -b latex -d $(SPHINXBUILDDIR)/doctrees/latex $(ALLSPHINXOPTS) $(SPHINXBUILDDIR)/latex
+@HAVE_DOCS_TRUE@@HAVE_PDFLATEX_TRUE@@HAVE_SPHINX_TRUE@ $(MAKE) -C $(SPHINXBUILDDIR)/latex all-pdf
+@HAVE_DOCS_TRUE@@HAVE_PDFLATEX_TRUE@@HAVE_SPHINX_TRUE@ @echo "The PDF documentation has been built in $(SPHINXBUILDDIR)/latex/"
+
+@HAVE_DOCS_TRUE@@HAVE_PDFLATEX_TRUE@@HAVE_SPHINX_TRUE@install-pdf-local:
+@HAVE_DOCS_TRUE@@HAVE_PDFLATEX_TRUE@@HAVE_SPHINX_TRUE@ $(INSTALL) -d $(DESTDIR)/$(docdir)
+@HAVE_DOCS_TRUE@@HAVE_PDFLATEX_TRUE@@HAVE_SPHINX_TRUE@ $(INSTALL_DATA) $(SPHINXBUILDDIR)/latex/KnotDNS.pdf $(DESTDIR)/$(docdir)/
+
+@HAVE_DOCS_TRUE@@HAVE_PDFLATEX_FALSE@@HAVE_SPHINX_TRUE@pdf-local install-pdf-local:
+@HAVE_DOCS_TRUE@@HAVE_PDFLATEX_FALSE@@HAVE_SPHINX_TRUE@ @echo "Install 'pdflatex' and re-run configure to be able to generate PDF documentation!"
+
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@man: $(man_MANS)
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@$(man_MANS)&: $(MANPAGES_RST)
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ $(AM_V_SPHINX)$(SPHINXBUILD) -b man -d $(SPHINXBUILDDIR)/doctrees/man $(man_SPHINXOPTS) $(SPHINXBUILDDIR)/man
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ @mkdir -p man
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ @for f in $(SPHINXBUILDDIR)/man/*; do \
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ sed -e 's,[@]config_dir@,$(config_dir),' \
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ -e 's,[@]storage_dir@,$(storage_dir),' \
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ -e 's,[@]run_dir@,$(run_dir),' \
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ -e 's,[@]conf_mapsize@,$(conf_mapsize),' "$$f" > "man/$$(basename $$f)"; \
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_TRUE@ done
+
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_FALSE@html-local singlehtml pdf-local epub man install-html-local install-singlehtml install-pdf-local install-epub:
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_FALSE@ @echo "Install 'sphinx-build' and re-run configure to be able to generate documentation!"
+
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_FALSE@$(man_MANS)&:
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_FALSE@ @if [ ! -f "$@" ]; then \
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_FALSE@ echo "Install 'sphinx-build' or disable documentation and re-run configure to generate man pages!"; \
+@HAVE_DOCS_TRUE@@HAVE_SPHINX_FALSE@ fi
+
+clean-local:
+ -rm -rf $(SPHINXBUILDDIR)
+ -rm -rf man
+
+# Tell versions [3.59,3.63) of GNU make to not export all variables.
+# Otherwise a system limit (for SysV at least) may be exceeded.
+.NOEXPORT:
diff --git a/doc/appendices.rst b/doc/appendices.rst
new file mode 100644
index 0000000..1012623
--- /dev/null
+++ b/doc/appendices.rst
@@ -0,0 +1,105 @@
+.. highlight:: none
+.. _Appendices:
+
+**********
+Appendices
+**********
+
+.. _compatible_pkcs11_devices:
+
+Compatible PKCS #11 Devices
+===========================
+
+This section is informative. Knot DNS has been tested with several devices
+which claim to support the PKCS #11 interface. The following table indicates
+which algorithms and operations have been observed to work. Please note the
+minimal GnuTLS library version required for particular algorithm support.
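+
+For reference, such a device is typically plugged into DNSSEC signing through a
+PKCS #11 keystore assigned to a signing policy. A minimal sketch only; the
+token name, PIN, and module path below are illustrative::
+
+   keystore:
+     - id: hsm            # Illustrative identifier
+       backend: pkcs11
+       config: "pkcs11:token=knot;pin-value=1234 /usr/lib64/pkcs11/libsofthsm2.so"
+
+   policy:
+     - id: hsm_policy
+       keystore: hsm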
+
+.. |yes| replace:: **yes**
+.. |no| replace:: no
+.. |unknown| replace:: ?
+
+.. list-table::
+ :header-rows: 1
+ :stub-columns: 1
+
+ * -
+ - Key generate
+ - Key import
+ - ED25519 256-bit
+ - ECDSA 256-bit
+ - ECDSA 384-bit
+ - RSA 1024-bit
+ - RSA 2048-bit
+ - RSA 4096-bit
+ * - `Feitian ePass 2003 `_
+ - |yes|
+ - |no|
+ - |no|
+ - |no|
+ - |no|
+ - |yes|
+ - |yes|
+ - |no|
+ * - `SafeNet Network HSM (Luna SA 4) `_
+ - |yes|
+ - |no|
+ - |no|
+ - |no|
+ - |no|
+ - |yes|
+ - |yes|
+ - |yes|
+ * - `SoftHSM 2.0 `_ [#fn-softhsm]_
+ - |yes|
+ - |yes|
+ - |yes|
+ - |yes|
+ - |yes|
+ - |yes|
+ - |yes|
+ - |yes|
+ * - `Trustway Proteccio NetHSM `_
+ - |yes|
+ - ECDSA only
+ - |no|
+ - |yes|
+ - |yes|
+ - |yes|
+ - |yes|
+ - |yes|
+ * - `Ultra Electronics CIS Keyper Plus (Model 9860-2) `_
+ - |yes|
+ - RSA only
+ - |no|
+ - |yes|
+ - |yes|
+ - |yes|
+ - |yes|
+ - |yes|
+ * - `Utimaco SecurityServer (V4) `_ [#fn-utimaco]_
+ - |yes|
+ - |yes|
+ - |no|
+ - |yes|
+ - |yes|
+ - |yes|
+ - |yes|
+ - |yes|
+
+.. in progress: key ID checks have to be disabled in code
+ * - `Yubikey NEO `_
+ - |no|
+ - |no|
+ - |no|
+ - |yes|
+ - |no|
+ - |yes|
+ - |yes|
+ - |no|
+
+.. [#fn-softhsm] The supported algorithms depend on the OpenSSL library on which SoftHSM relies.
+ A command similar to the following may be used to verify which algorithms are supported:
+ ``$ pkcs11-tool --module /usr/lib64/pkcs11/libsofthsm2.so -M``.
+.. [#fn-utimaco] Requires setting the number of background workers to 1!
diff --git a/doc/conf.py b/doc/conf.py
new file mode 100644
index 0000000..a104552
--- /dev/null
+++ b/doc/conf.py
@@ -0,0 +1,271 @@
+# -*- coding: utf-8 -*-
+#
+# Knot DNS documentation build configuration file, created by
+# sphinx-quickstart on Tue Apr 15 13:48:28 2014.
+#
+# This file is execfile()d with the current directory set to its containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys, os, time, logging
+
+sys.setrecursionlimit(1500)
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+sys.path.insert(0, os.path.abspath('ext'))
+
+# -- General configuration -----------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be extensions
+# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
+import importlib.util
+if importlib.util.find_spec("sphinx_panels"):
+ extensions = [ 'sphinx_panels' ]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix of source filenames.
+source_suffix = '.rst'
+
+# The encoding of source files.
+#source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = 'Knot DNS'
+copyright_year = 2025
+current_year = time.localtime().tm_year
+if current_year > copyright_year:
+ logging.warning('Copyright year is %d, but current year is %d.'%(copyright_year, current_year))
+ logging.warning('Maybe you should update copyright_year in doc/conf.py?')
+copyright = u'Copyright 2010–%d, CZ.NIC, z.s.p.o.' % copyright_year
+author = 'CZ.NIC Labs '
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+#version = ''
+# The full version, including alpha/beta/rc tags.
+#release = ''
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = False
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+exclude_patterns = ['_build', 'modules']
+
+# The reST default role (used for this markup: `text`) to use for all documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# A list of ignored prefixes for module index sorting.
+#modindex_common_prefix = []
+
+# -- Options for HTML output ---------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+html_theme = 'theme_html'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+html_theme_path = ['.']
+
+# The name for this set of Sphinx documents. If None, it defaults to
+# " v documentation".
+#html_title = None
+
+# A shorter title for the navigation bar. Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+html_logo = 'logo.svg'
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+#html_static_path = ['_static']
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+#html_last_updated_fmt = '%b %d, %Y'
+
+# If true, smartquotes will be used to convert quotes and dashes to
+# typographically correct entities.
+smartquotes = False
+
+# Custom sidebar templates, maps document names to template names.
+#html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+html_domain_indices = False
+
+# If false, no index is generated.
+html_use_index = False
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+#html_show_sourcelink = True
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a tag referring to it. The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = None
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'KnotDNSdoc'
+
+# -- Options for LaTeX output --------------------------------------------------
+
+latex_elements = {
+# The paper size ('letterpaper' or 'a4paper').
+'papersize': 'a4paper',
+
+# The font size ('10pt', '11pt' or '12pt').
+#'pointsize': '10pt',
+
+# Additional stuff for the LaTeX preamble.
+#'preamble': '',
+
+# No empty pages between chapters
+'classoptions': ',openany,oneside',
+
+# Language preferences
+'babel': '\\usepackage[english]{babel}',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title, author, documentclass [howto/manual]).
+latex_documents = [
+ ('index', 'KnotDNS.tex', 'Knot DNS Documentation', copyright, 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+latex_logo = 'logo.pdf'
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# If true, show page references after internal links.
+#latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+latex_domain_indices = False
+
+# -- Options for manual page output --------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ ('reference', 'knot.conf', 'Knot DNS configuration file', author, 5),
+ ('man_knotc', 'knotc', 'Knot DNS control utility', author, 8),
+ ('man_knotd', 'knotd', 'Knot DNS server daemon', author, 8),
+ ('man_kcatalogprint', 'kcatalogprint', 'Knot DNS catalog print utility', author, 8),
+ ('man_keymgr', 'keymgr', 'Knot DNS key management utility', author, 8),
+ ('man_kjournalprint', 'kjournalprint', 'Knot DNS journal print utility', author, 8),
+ ('man_kdig', 'kdig', 'Advanced DNS lookup utility', author, 1),
+ ('man_khost', 'khost', 'Simple DNS lookup utility', author, 1),
+ ('man_knsec3hash', 'knsec3hash', 'Simple utility to compute NSEC3 hash', author, 1),
+ ('man_knsupdate', 'knsupdate', 'Dynamic DNS update utility', author, 1),
+ ('man_kzonecheck', 'kzonecheck', 'Knot DNS zone check tool', author, 1),
+ ('man_kzonesign', 'kzonesign', 'DNSSEC signing utility', author, 1),
+ ('man_kxdpgun', 'kxdpgun', 'XDP-powered DNS benchmarking tool', author, 8),
+]
+
+# If true, show URL addresses after external links.
+#man_show_urls = False
+
+# -- Options for Texinfo output ------------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+#texinfo_documents = []
+
+# Documents to append as an appendix to all manuals.
+#texinfo_appendices = []
+
+# If false, no module index is generated.
+#texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#texinfo_show_urls = 'footnote'
+
+# -- Options for Epub output ----------------------------------------------
+
+epub_title = project
+epub_author = author
+epub_publisher = author
+epub_copyright = copyright
+
+epub_theme = 'theme_epub'
+epub_cover = ('_static/logo.svg', 'epub-cover.html')
+epub_exclude_files = ['epub-cover.xhtml']
diff --git a/doc/configuration.rst b/doc/configuration.rst
new file mode 100644
index 0000000..24d8d1e
--- /dev/null
+++ b/doc/configuration.rst
@@ -0,0 +1,1271 @@
+.. highlight:: none
+.. _Configuration:
+
+*************
+Configuration
+*************
+
+Simple configuration
+====================
+
+The following example presents a simple configuration file
+which can be used as a base for your Knot DNS setup::
+
+ # Example of a very simple Knot DNS configuration.
+
+ server:
+ listen: 0.0.0.0@53
+ listen: ::@53
+
+ zone:
+ - domain: example.com
+ storage: /var/lib/knot/zones/
+ file: example.com.zone
+
+ log:
+ - target: syslog
+ any: info
+
+Now let's walk through this configuration step by step:
+
+- The :ref:`server_listen` statement in the :ref:`server section`
+ defines where the server will listen for incoming connections.
+ We have defined the server to listen on all available IPv4 and IPv6 addresses,
+ all on port 53.
+- The :ref:`zone section` defines the zones that the server will
+ serve. In this case, we defined one zone named *example.com* which is stored
+ in the zone file :file:`/var/lib/knot/zones/example.com.zone`.
+- The :ref:`log section` defines the log facilities for
+ the server. In this example, we told Knot DNS to send its log messages with
+ the severity ``info`` or more serious to the syslog (or systemd journal).
+
+For a detailed description of all configuration items, see
+:ref:`Configuration Reference`.
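+
+Before (re)starting the server, the configuration file can be checked for
+syntax and semantic errors, for example (the file path is illustrative):
+
+.. code-block:: console
+
+   $ knotc -c /etc/knot/knot.conf conf-check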
+
+Zone templates
+==============
+
+A zone template allows a single zone configuration to be shared among several
+zones. There is no inheritance between templates; they are exclusive. The
+``default`` template identifier is reserved for the default template::
+
+ template:
+ - id: default
+ storage: /var/lib/knot/master
+ semantic-checks: on
+
+ - id: signed
+ storage: /var/lib/knot/signed
+ dnssec-signing: on
+ semantic-checks: on
+ master: [master1, master2]
+
+ - id: slave
+ storage: /var/lib/knot/slave
+
+ zone:
+ - domain: example1.com # Uses default template
+
+ - domain: example2.com # Uses default template
+ semantic-checks: off # Override default settings
+
+ - domain: example.cz
+ template: signed
+ master: master3 # Override masters to just master3
+
+ - domain: example1.eu
+ template: slave
+ master: master1
+
+ - domain: example2.eu
+ template: slave
+ master: master2
+
+.. NOTE::
+ Each template option can be explicitly overridden in zone-specific configuration.
+
+.. _ACL:
+
+Access control list (ACL)
+=========================
+
+Normal DNS queries are always allowed. All other DNS requests must be
+authorized before they can be processed by the server. A zone can have an
+:ref:`ACL ` configured, which is a sequence of rules describing
+which requests are authorized. An :ref:`automatic ACL `
+feature can be used to simplify ACL management.
+
+Every ACL rule can allow or deny one or more request types (:ref:`actions `)
+based on the source IP address, network subnet, address range, protocol,
+remote certificate key PIN, and/or
+whether the request is secured by a given TSIG key. See :doc:`keymgr -t`
+on how to generate a TSIG key.
+
+If there are multiple ACL rules assigned to a zone, they are applied in the
+specified order of the :ref:`zone_acl` configuration. The first rule that matches
+the given request is applied and the remaining rules are ignored. Some examples::
+
+ acl:
+ - id: address_rule
+ address: [2001:db8::1, 192.168.2.0/24]
+ action: transfer
+
+ - id: deny_rule
+ address: 192.168.2.100
+ action: transfer
+ deny: on
+
+ zone:
+ - domain: acl1.example.com
+ acl: [deny_rule, address_rule] # Allow some addresses with an exception
+
+::
+
+ key:
+ - id: key1 # The real TSIG key name
+ algorithm: hmac-sha256
+ secret: 4Tc0K1QkcMCs7cOW2LuSWnxQY0qysdvsZlSb4yTN9pA=
+
+ acl:
+ - id: deny_all
+ address: 192.168.3.0/24
+ deny: on # No action specified and deny on implies denial of all actions
+
+ - id: key_rule
+ key: key1 # Access based just on TSIG key
+ action: [transfer, notify]
+
+ zone:
+ - domain: acl2.example.com
+ acl: [deny_all, key_rule] # Allow with the TSIG except for the subnet
+
+In the case of dynamic DNS updates, some additional conditions may be specified
+for more granular filtering. See more in the section :ref:`Restricting dynamic updates`.
+
+.. NOTE::
+ If more conditions (address ranges and/or a key)
+ are given in a single ACL rule, all of them have to be satisfied for the rule to match.
+
+.. TIP::
+ In order to restrict regular DNS queries, use module :ref:`queryacl`.
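+
+A minimal sketch of such a restriction (the module identifier and the subnet
+are illustrative)::
+
+   mod-queryacl:
+     - id: internal_only        # Illustrative identifier
+       address: 192.168.1.0/24  # Illustrative subnet
+
+   zone:
+     - domain: example.com
+       module: mod-queryacl/internal_only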
+
+Secondary (slave) zone
+======================
+
+Knot DNS doesn't strictly differentiate between primary (formerly known as master)
+and secondary (formerly known as slave) zones. The only requirement for a secondary
+zone is to have a :ref:`zone_master` statement set. For effective zone synchronization,
+incoming zone change notifications (NOTIFY), which require authorization, can be
+enabled using :ref:`automatic ACL ` or :ref:`explicit ACL `
+configuration. Optional transaction authentication (TSIG) is supported for both
+zone transfers and zone notifications::
+
+ server:
+ automatic-acl: on # Enabled automatic ACL
+
+ key:
+ - id: xfr_notify_key # Common TSIG key for XFR and NOTIFY
+ algorithm: hmac-sha256
+ secret: VFRejzw8h4M7mb0xZKRFiZAfhhd1eDGybjqHr2FV3vc=
+
+ remote:
+ - id: primary
+ address: [2001:DB8:1::1, 192.168.1.1] # Primary server IP addresses
+ # via: [2001:DB8:2::1, 10.0.0.1] # Local source addresses (optional)
+ key: xfr_notify_key # TSIG key (optional)
+
+ zone:
+ - domain: example.com
+ master: primary # Primary remote(s)
+
+An example of explicit ACL with different TSIG keys for zone transfers
+and notifications::
+
+ key:
+ - id: notify_key # TSIG key for NOTIFY
+ algorithm: hmac-sha256
+ secret: uBbhV4aeSS4fPd+wF2ZIn5pxOMF35xEtdq2ibi2hHEQ=
+
+ - id: xfr_key # TSIG key for XFR
+ algorithm: hmac-sha256
+ secret: VFRejzw8h4M7mb0xZKRFiZAfhhd1eDGybjqHr2FV3vc=
+
+ remote:
+ - id: primary
+ address: [2001:DB8:1::1, 192.168.1.1] # Primary server IP addresses
+ # via: [2001:DB8:2::1, 10.0.0.1] # Local source addresses if needed
+ key: xfr_key # Optional TSIG key
+
+ acl:
+ - id: notify_from_primary # ACL rule for NOTIFY from primary
+ address: [2001:DB8:1::1, 192.168.1.1] # Primary addresses (optional)
+ key: notify_key # TSIG key (optional)
+ action: notify
+
+ zone:
+ - domain: example.com
+ master: primary # Primary remote(s)
+ acl: notify_from_primary # Explicit ACL(s)
+
+Note that the :ref:`zone_master` option accepts a list of remotes, which are
+queried for a zone refresh sequentially in the specified order. When the server
+receives a zone change notification from a listed remote, only that remote is
+used for a subsequent zone transfer.
+
+.. NOTE::
+ When transferring a lot of zones, the server may easily get into a state
+ where all available ports are in the TIME_WAIT state, thus transfers
+ cease until the operating system closes the ports for good. There are
+ several ways to work around this (see the example below):
+
+ * Allow reuse of ports in TIME_WAIT (sysctl -w net.ipv4.tcp_tw_reuse=1)
+ * Shorten TIME_WAIT timeout (tcp_fin_timeout)
+ * Increase available local port count
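+
+On Linux, for example, these adjustments might be applied as follows (the
+values are illustrative):
+
+.. code-block:: console
+
+   $ sudo sysctl -w net.ipv4.tcp_tw_reuse=1                  # Reuse TIME_WAIT ports
+   $ sudo sysctl -w net.ipv4.tcp_fin_timeout=30              # Shorten TIME_WAIT timeout
+   $ sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535" # More local ports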
+
+Primary (master) zone
+=====================
+
+A zone is considered primary if it doesn't have :ref:`zone_master` set. As
+outgoing zone transfers (XFR) require authorization, it must be enabled
+using :ref:`automatic ACL ` or :ref:`explicit ACL `
+configuration. Outgoing zone change notifications (NOTIFY) to remotes can be
+set by configuring :ref:`zone_notify`. Transaction authentication
+(TSIG) is supported for both zone transfers and zone notifications::
+
+ server:
+ automatic-acl: on # Enabled automatic ACL
+
+ key:
+ - id: xfr_notify_key # Common TSIG key for XFR and NOTIFY
+ algorithm: hmac-sha256
+ secret: VFRejzw8h4M7mb0xZKRFiZAfhhd1eDGybjqHr2FV3vc=
+
+ remote:
+ - id: secondary
+ address: [2001:DB8:1::1, 192.168.1.1] # Secondary server IP addresses
+ # via: [2001:DB8:2::1, 10.0.0.1] # Local source addresses (optional)
+ key: xfr_notify_key # TSIG key (optional)
+
+ acl:
+ - id: local_xfr # Allow XFR to localhost without TSIG
+ address: [::1, 127.0.0.1]
+ action: transfer
+
+ zone:
+ - domain: example.com
+ notify: secondary # Secondary remote(s)
+ acl: local_xfr # Explicit ACL for local XFR
+
+Note that the :ref:`zone_notify` option accepts a list of remotes, which are
+all notified sequentially in the specified order.
+
+A secondary zone may serve as a primary zone for a different set of remotes
+at the same time.
+
+.. _dynamic updates:
+
+Dynamic updates
+===============
+
+Dynamic updates for the zone are allowed via a proper ACL rule with the
+``update`` action. If the zone is configured as a secondary and a DNS update
+message is accepted, the server forwards the message to its first primary
+:ref:`zone_master` or :ref:`zone_ddns-master` if configured.
+The primary master's response is then forwarded back to the originator.
+
+However, if the zone is configured as a primary, the update is accepted and
+processed::
+
+ acl:
+ - id: update_acl
+ address: 192.168.3.0/24
+ action: update
+
+ zone:
+ - domain: example.com.
+ acl: update_acl
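+
+With such a rule in place, a client from the allowed subnet can submit an
+update, for instance with the ``knsupdate`` utility (the addresses, names, and
+file name are illustrative):
+
+.. code-block:: console
+
+   $ cat update.txt
+   server 192.168.3.1
+   zone example.com.
+   add test.example.com. 3600 A 192.168.3.10
+   send
+
+   $ knsupdate update.txt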
+
+.. NOTE::
+ To forward DDNS requests signed with a locally unknown key, an ACL rule for
+ the action ``update`` without a key must be configured for the zone. E.g.::
+
+ acl:
+ - id: fwd_foreign_key
+ action: update
+ # possible non-key options
+
+ zone:
+ - domain: example.com.
+ acl: fwd_foreign_key
+
+.. _Restricting dynamic updates:
+
+Restricting dynamic updates
+---------------------------
+
+There are several additional ACL options for dynamic DNS updates which affect
+the request classification based on the update contents.
+
+Updates can be restricted to specific resource record types::
+
+ acl:
+ - id: type_rule
+ action: update
+ update-type: [A, AAAA, MX] # Updated records must match one of the specified types
+
+Another possibility is to restrict the owner names of updated records. The option
+:ref:`acl_update-owner` selects the source of the domain names used for the
+comparison, and the option :ref:`acl_update-owner-match` specifies the required
+relation between the record owner and the reference domain names. Example::
+
+ acl:
+ - id: owner_rule1
+ action: update
+ update-owner: name # Updated record owners are restricted by the next conditions
+ update-owner-match: equal # The record owner must exactly match one name from the next list
+ update-owner-name: [foo, bar.] # Reference domain names
+
+.. NOTE::
+ If the specified owner name is not an FQDN (e.g. ``foo``), it's interpreted relative
+ to the effective zone name, so it can apply to multiple zones
+ (e.g. ``foo.example.com.`` or ``foo.example.net.``). Alternatively, if the
+ name is an FQDN (e.g. ``bar.``), the rule only applies to that exact name.
+
+If the reference domain name is the zone name, the following variant can be used::
+
+ acl:
+ - id: owner_rule2
+ action: update
+ update-owner: zone # The reference name is the zone name
+ update-owner-match: sub # Any record owner matches except for the zone name itself
+
+ template:
+ - id: default
+ acl: owner_rule2
+
+ zone:
+ - domain: example.com.
+ - domain: example.net.
+
+The last variant covers the cases where the reference domain name is a TSIG key name,
+which must then be used to secure the transaction::
+
+ key:
+ - id: example.com # Key names are always considered FQDN
+ ...
+ - id: steve.example.net
+ ...
+ - id: jane.example.net
+ ...
+
+ acl:
+ - id: owner_rule3_com
+ action: update
+ update-owner: key # The reference name is the TSIG key name
+ update-owner-match: sub # The record owner must be a subdomain of the key name
+ key: [example.com] # One common key for updating all non-apex records
+
+ - id: owner_rule3_net
+ action: update
+ update-owner: key # The reference name is the TSIG key name
+ update-owner-match: equal # The record owner must exactly match the used key name
+ key: [steve.example.net, jane.example.net] # Keys for updating specific zone nodes
+
+ zone:
+ - domain: example.com.
+ acl: owner_rule3_com
+ - domain: example.net.
+ acl: owner_rule3_net
+
+.. _Handling CNAME and DNAME-related updates:
+
+Handling CNAME and DNAME-related updates
+----------------------------------------
+
+In general, no other RR may exist beside a CNAME or below a DNAME. Whenever
+such a CNAME or DNAME-related semantic rule is violated by an RR addition
+in DDNS (that is, the addition of a CNAME beside an existing record, of another
+record beside a CNAME, of a DNAME above an existing record, or of another
+record below a DNAME), that RR addition is silently ignored.
+However, other RRs from the same DDNS update are processed normally. This is slightly
+non-compliant with RFC 6672 (in particular, no RR occlusion takes place).
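+
+For illustration, assume ``www.example.com.`` already has an A record (all
+names are illustrative). A single DDNS update adding the following two records
+results in only the MX record being applied, while the CNAME addition is
+silently ignored::
+
+   www.example.com.   3600 IN CNAME host.example.com.     ; ignored, conflicts with the existing A
+   mail.example.com.  3600 IN MX    10 host.example.com.  ; processed normally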
+
+.. _dnssec:
+
+Automatic DNSSEC signing
+========================
+
+Knot DNS supports automatic DNSSEC signing of zones. The signing
+can operate in two modes:
+
+1. :ref:`Manual key management `:
+ In this mode, the server maintains zone signatures (RRSIGs) only. The
+ signatures are kept up-to-date and signing keys are rolled according to
+ the timing parameters assigned to the keys. The keys must be generated and
+ timing parameters must be assigned by the zone operator.
+
+2. :ref:`Automatic key management `:
+ In this mode, the server maintains signing keys. New keys are generated
+ according to the assigned policy and are rolled automatically in a safe manner.
+ No intervention from the zone operator is necessary.
+
+For automatic DNSSEC signing, a :ref:`policy` must
+be configured and assigned to the zone. The policy specifies how the zone
+is signed (i.e. signing algorithm, key size, key lifetime, signature lifetime,
+etc.). If no policy is specified, the default signing parameters are used.
+
+The DNSSEC signing process maintains some metadata which is stored in the
+:abbr:`KASP (Key And Signature Policy)` database. This database is backed
+by LMDB.
+
+.. WARNING::
+ Make sure to set the KASP database permissions correctly. For manual key
+ management, the database must be *readable* by the server process. For
+ automatic key management, it must be *writable*. If no HSM is used,
+ the database also contains private key material – take care not to make
+ the permissions too permissive.
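+
+For example, assuming the KASP database resides in the default ``keys``
+subdirectory of the zone storage and the server runs under the ``knot`` user
+(both illustrative), the permissions might be tightened like this:
+
+.. code-block:: console
+
+   $ sudo chown -R knot:knot /var/lib/knot/keys
+   $ sudo chmod -R go-rwx /var/lib/knot/keys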
+
+.. _dnssec-manual-key-management:
+
+Manual key management
+---------------------
+
+For automatic DNSSEC signing with manual key management, the
+:ref:`policy_manual` has to be enabled in the policy::
+
+ policy:
+ - id: manual
+ manual: on
+
+ zone:
+ - domain: myzone.test
+ dnssec-signing: on
+ dnssec-policy: manual
+
+To generate signing keys, use the :doc:`keymgr` utility.
+For example, we can use Single-Type Signing:
+
+.. code-block:: console
+
+ $ keymgr myzone.test. generate algorithm=ECDSAP256SHA256 ksk=yes zsk=yes
+
+And reload the server. The zone will be signed.
+
+To perform a manual rollover of a key, the timing parameters of the key need
+to be set. Let's roll the key. Generate a new key, but do not activate
+it yet:
+
+.. code-block:: console
+
+ $ keymgr myzone.test. generate algorithm=ECDSAP256SHA256 ksk=yes zsk=yes active=+1d
+
+Take the key ID (or key tag) of the old key and disable it at the same time
+the new key gets activated:
+
+.. code-block:: console
+
+ $ keymgr myzone.test. set <old_key_id> retire=+2d remove=+3d
+
+Reload the server again. The new key will be published (i.e. the DNSKEY record
+will be added to the zone). Remember to update the DS record in the parent
+zone to include a reference to the new key. This must happen within one day
+(in this case), including the delay required to propagate the new DS to
+caches.
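+
+The DS record(s) for the new key can be displayed with the same utility, for
+example:
+
+.. code-block:: console
+
+   $ keymgr myzone.test. ds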
+
+.. _dnssec-automatic-zsk-management:
+
+Automatic ZSK management
+------------------------
+
+With :ref:`policy_manual` disabled in the assigned policy (the default),
+the DNSSEC keys are generated automatically (if they do not already exist)
+and are also automatically rolled over according to their configured lifetimes.
+The default :ref:`policy_zsk-lifetime` is finite, whereas :ref:`policy_ksk-lifetime`
+is infinite, so no KSK rollovers occur in the following example::
+
+ policy:
+ - id: custom_policy
+ signing-threads: 4
+ algorithm: ECDSAP256SHA256
+ zsk-lifetime: 60d
+
+ zone:
+ - domain: myzone.test
+ dnssec-signing: on
+ dnssec-policy: custom_policy
+
+After configuring the server, reload the changes:
+
+.. code-block:: console
+
+ $ knotc reload
+
+Check the server logs (regularly) to see whether everything went well.
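+
+The generated keys and their timing parameters can be reviewed at any time,
+for example:
+
+.. code-block:: console
+
+   $ keymgr myzone.test. list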
+
+.. NOTE::
+ Enabling automatic key management with already existing keys requires attention:
+
+ - Any key timers set to future timestamps are automatically cleared,
+ preventing interference with automatic operation procedures.
+ - If the keys are in an inconsistent state (e.g. an unexpected number of keys
+ or active keys), it might lead to undefined behavior or, at the very least,
+ a halt in key management.
+
+.. _dnssec-automatic-ksk-management:
+
+Automatic KSK management
+------------------------
+
+For automatic KSK management, first configure ZSK management as described above,
+and use :ref:`submission section