Diffstat (limited to 'Documentation/dev-tools/kunit')
-rw-r--r-- | Documentation/dev-tools/kunit/api/index.rst  |  16
-rw-r--r-- | Documentation/dev-tools/kunit/api/test.rst   |  11
-rw-r--r-- | Documentation/dev-tools/kunit/faq.rst        | 103
-rw-r--r-- | Documentation/dev-tools/kunit/index.rst      |  94
-rw-r--r-- | Documentation/dev-tools/kunit/kunit-tool.rst |  57
-rw-r--r-- | Documentation/dev-tools/kunit/start.rst      | 237
-rw-r--r-- | Documentation/dev-tools/kunit/style.rst      | 205
-rw-r--r-- | Documentation/dev-tools/kunit/usage.rst      | 617
8 files changed, 1340 insertions, 0 deletions
diff --git a/Documentation/dev-tools/kunit/api/index.rst b/Documentation/dev-tools/kunit/api/index.rst new file mode 100644 index 000000000..9b9bffe5d --- /dev/null +++ b/Documentation/dev-tools/kunit/api/index.rst @@ -0,0 +1,16 @@ +.. SPDX-License-Identifier: GPL-2.0 + +============= +API Reference +============= +.. toctree:: + + test + +This section documents the KUnit kernel testing API. It is divided into the +following sections: + +================================= ============================================== +:doc:`test` documents all of the standard testing API + excluding mocking or mocking related features. +================================= ============================================== diff --git a/Documentation/dev-tools/kunit/api/test.rst b/Documentation/dev-tools/kunit/api/test.rst new file mode 100644 index 000000000..aaa97f17e --- /dev/null +++ b/Documentation/dev-tools/kunit/api/test.rst @@ -0,0 +1,11 @@ +.. SPDX-License-Identifier: GPL-2.0 + +======== +Test API +======== + +This file documents all of the standard testing API excluding mocking or mocking +related features. + +.. kernel-doc:: include/kunit/test.h + :internal: diff --git a/Documentation/dev-tools/kunit/faq.rst b/Documentation/dev-tools/kunit/faq.rst new file mode 100644 index 000000000..8d5029ad2 --- /dev/null +++ b/Documentation/dev-tools/kunit/faq.rst @@ -0,0 +1,103 @@ +.. SPDX-License-Identifier: GPL-2.0 + +========================== +Frequently Asked Questions +========================== + +How is this different from Autotest, kselftest, etc? +==================================================== +KUnit is a unit testing framework. Autotest, kselftest (and some others) are +not. + +A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to +test a single unit of code in isolation, hence the name. A unit test should be +the finest granularity of testing and as such should allow all possible code +paths to be tested in the code under test; this is only possible if the code +under test is very small and does not have any external dependencies outside of +the test's control like hardware. + +There are no testing frameworks currently available for the kernel that do not +require installing the kernel on a test machine or in a VM and all require +tests to be written in userspace and run on the kernel under test; this is true +for Autotest, kselftest, and some others, disqualifying any of them from being +considered unit testing frameworks. + +Does KUnit support running on architectures other than UML? +=========================================================== + +Yes, well, mostly. + +For the most part, the KUnit core framework (what you use to write the tests) +can compile to any architecture; it compiles like just another part of the +kernel and runs when the kernel boots, or when built as a module, when the +module is loaded. However, there is some infrastructure, +like the KUnit Wrapper (``tools/testing/kunit/kunit.py``) that does not support +other architectures. + +In short, this means that, yes, you can run KUnit on other architectures, but +it might require more work than using KUnit on UML. + +For more information, see :ref:`kunit-on-non-uml`. + +What is the difference between a unit test and these other kinds of tests? +========================================================================== +Most existing tests for the Linux kernel would be categorized as an integration +test, or an end-to-end test. 
+ +- A unit test is supposed to test a single unit of code in isolation, hence the + name. A unit test should be the finest granularity of testing and as such + should allow all possible code paths to be tested in the code under test; this + is only possible if the code under test is very small and does not have any + external dependencies outside of the test's control like hardware. +- An integration test tests the interaction between a minimal set of components, + usually just two or three. For example, someone might write an integration + test to test the interaction between a driver and a piece of hardware, or to + test the interaction between the userspace libraries the kernel provides and + the kernel itself; however, one of these tests would probably not test the + entire kernel along with hardware interactions and interactions with the + userspace. +- An end-to-end test usually tests the entire system from the perspective of the + code under test. For example, someone might write an end-to-end test for the + kernel by installing a production configuration of the kernel on production + hardware with a production userspace and then trying to exercise some behavior + that depends on interactions between the hardware, the kernel, and userspace. + +KUnit isn't working, what should I do? +====================================== + +Unfortunately, there are a number of things which can break, but here are some +things to try. + +1. Try running ``./tools/testing/kunit/kunit.py run`` with the ``--raw_output`` + parameter. This might show details or error messages hidden by the kunit_tool + parser. +2. Instead of running ``kunit.py run``, try running ``kunit.py config``, + ``kunit.py build``, and ``kunit.py exec`` independently. This can help track + down where an issue is occurring. (If you think the parser is at fault, you + can run it manually against stdin or a file with ``kunit.py parse``.) +3. Running the UML kernel directly can often reveal issues or error messages + kunit_tool ignores. This should be as simple as running ``./vmlinux`` after + building the UML kernel (e.g., by using ``kunit.py build``). Note that UML + has some unusual requirements (such as the host having a tmpfs filesystem + mounted), and has had issues in the past when built statically and the host + has KASLR enabled. (On older host kernels, you may need to run ``setarch + `uname -m` -R ./vmlinux`` to disable KASLR.) +4. Make sure the kernel .config has ``CONFIG_KUNIT=y`` and at least one test + (e.g. ``CONFIG_KUNIT_EXAMPLE_TEST=y``). kunit_tool will keep its .config + around, so you can see what config was used after running ``kunit.py run``. + It also preserves any config changes you might make, so you can + enable/disable things with ``make ARCH=um menuconfig`` or similar, and then + re-run kunit_tool. +5. Try to run ``make ARCH=um defconfig`` before running ``kunit.py run``. This + may help clean up any residual config items which could be causing problems. +6. Finally, try running KUnit outside UML. KUnit and KUnit tests can be + built into any kernel, or can be built as a module and loaded at runtime. + Doing so should allow you to determine if UML is causing the issue you're + seeing. When tests are built-in, they will execute when the kernel boots, and + modules will automatically execute associated tests when loaded. Test results + can be collected from ``/sys/kernel/debug/kunit/<test suite>/results``, and + can be parsed with ``kunit.py parse``. 
For more details, see "KUnit on + non-UML architectures" in :doc:`usage`. + +If none of the above tricks help, you are always welcome to email any issues to +kunit-dev@googlegroups.com. diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst new file mode 100644 index 000000000..c234a3ab3 --- /dev/null +++ b/Documentation/dev-tools/kunit/index.rst @@ -0,0 +1,94 @@ +.. SPDX-License-Identifier: GPL-2.0 + +========================================= +KUnit - Unit Testing for the Linux Kernel +========================================= + +.. toctree:: + :maxdepth: 2 + + start + usage + kunit-tool + api/index + style + faq + +What is KUnit? +============== + +KUnit is a lightweight unit testing and mocking framework for the Linux kernel. + +KUnit is heavily inspired by JUnit, Python's unittest.mock, and +Googletest/Googlemock for C++. KUnit provides facilities for defining unit test +cases, grouping related test cases into test suites, providing common +infrastructure for running tests, and much more. + +KUnit consists of a kernel component, which provides a set of macros for easily +writing unit tests. Tests written against KUnit will run on kernel boot if +built-in, or when loaded if built as a module. These tests write out results to +the kernel log in `TAP <https://testanything.org/>`_ format. + +To make running these tests (and reading the results) easier, KUnit offers +:doc:`kunit_tool <kunit-tool>`, which builds a `User Mode Linux +<http://user-mode-linux.sourceforge.net>`_ kernel, runs it, and parses the test +results. This provides a quick way of running KUnit tests during development, +without requiring a virtual machine or separate hardware. + +Get started now: :doc:`start` + +Why KUnit? +========== + +A unit test is supposed to test a single unit of code in isolation, hence the +name. A unit test should be the finest granularity of testing and as such should +allow all possible code paths to be tested in the code under test; this is only +possible if the code under test is very small and does not have any external +dependencies outside of the test's control like hardware. + +KUnit provides a common framework for unit tests within the kernel. + +KUnit tests can be run on most architectures, and most tests are architecture +independent. All built-in KUnit tests run on kernel startup. Alternatively, +KUnit and KUnit tests can be built as modules and tests will run when the test +module is loaded. + +.. note:: + + KUnit can also run tests without needing a virtual machine or actual + hardware under User Mode Linux. User Mode Linux is a Linux architecture, + like ARM or x86, which compiles the kernel as a Linux executable. KUnit + can be used with UML either by building with ``ARCH=um`` (like any other + architecture), or by using :doc:`kunit_tool <kunit-tool>`. + +KUnit is fast. Excluding build time, from invocation to completion KUnit can run +several dozen tests in only 10 to 20 seconds; this might not sound like a big +deal to some people, but having such fast and easy to run tests fundamentally +changes the way you go about testing and even writing code in the first place. +Linus himself said in his `git talk at Google +<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_: + + "... a lot of people seem to think that performance is about doing the + same thing, just doing it faster, and that is not true. That is not what + performance is all about. 
If you can do something really fast, really + well, people will start using it differently." + +In this context Linus was talking about branching and merging, +but this point also applies to testing. If your tests are slow, unreliable, are +difficult to write, and require a special setup or special hardware to run, +then you wait a lot longer to write tests, and you wait a lot longer to run +tests; this means that tests are likely to break, unlikely to test a lot of +things, and are unlikely to be rerun once they pass. If your tests are really +fast, you run them all the time, every time you make a change, and every time +someone sends you some code. Why trust that someone ran all their tests +correctly on every change when you can just run them yourself in less time than +it takes to read their test log? + +How do I use it? +================ + +* :doc:`start` - for new users of KUnit +* :doc:`usage` - for a more detailed explanation of KUnit features +* :doc:`api/index` - for the list of KUnit APIs used for testing +* :doc:`kunit-tool` - for more information on the kunit_tool helper script +* :doc:`faq` - for answers to some common questions about KUnit diff --git a/Documentation/dev-tools/kunit/kunit-tool.rst b/Documentation/dev-tools/kunit/kunit-tool.rst new file mode 100644 index 000000000..29ae2fee8 --- /dev/null +++ b/Documentation/dev-tools/kunit/kunit-tool.rst @@ -0,0 +1,57 @@ +.. SPDX-License-Identifier: GPL-2.0 + +================= +kunit_tool How-To +================= + +What is kunit_tool? +=================== + +kunit_tool is a script (``tools/testing/kunit/kunit.py``) that aids in building +the Linux kernel as UML (`User Mode Linux +<http://user-mode-linux.sourceforge.net/>`_), running KUnit tests, parsing +the test results and displaying them in a user friendly manner. + +kunit_tool addresses the problem of being able to run tests without needing a +virtual machine or actual hardware with User Mode Linux. User Mode Linux is a +Linux architecture, like ARM or x86; however, unlike other architectures it +compiles the kernel as a standalone Linux executable that can be run like any +other program directly inside of a host operating system. To be clear, it does +not require any virtualization support: it is just a regular program. + +What is a .kunitconfig? +======================= + +It's just a defconfig that kunit_tool looks for in the base directory. +kunit_tool uses it to generate a .config as you might expect. In addition, it +verifies that the generated .config contains the CONFIG options in the +.kunitconfig; the reason it does this is so that it is easy to be sure that a +CONFIG that enables a test actually ends up in the .config. + +How do I use kunit_tool? +======================== + +If a kunitconfig is present at the root directory, all you have to do is: + +.. code-block:: bash + + ./tools/testing/kunit/kunit.py run + +However, you most likely want to use it with the following options: + +.. code-block:: bash + + ./tools/testing/kunit/kunit.py run --timeout=30 --jobs=`nproc --all` + +- ``--timeout`` sets a maximum amount of time to allow tests to run. +- ``--jobs`` sets the number of threads to use to build the kernel. + +.. note:: + This command will work even without a .kunitconfig file: if no + .kunitconfig is present, a default one will be used instead. + +For a list of all the flags supported by kunit_tool, you can run: + +.. 
code-block:: bash + + ./tools/testing/kunit/kunit.py run --help diff --git a/Documentation/dev-tools/kunit/start.rst b/Documentation/dev-tools/kunit/start.rst new file mode 100644 index 000000000..454f30781 --- /dev/null +++ b/Documentation/dev-tools/kunit/start.rst @@ -0,0 +1,237 @@ +.. SPDX-License-Identifier: GPL-2.0 + +=============== +Getting Started +=============== + +Installing dependencies +======================= +KUnit has the same dependencies as the Linux kernel. As long as you can build +the kernel, you can run KUnit. + +Running tests with the KUnit Wrapper +==================================== +Included with KUnit is a simple Python wrapper which runs tests under User Mode +Linux, and formats the test results. + +The wrapper can be run with: + +.. code-block:: bash + + ./tools/testing/kunit/kunit.py run + +For more information on this wrapper (also called kunit_tool) check out the +:doc:`kunit-tool` page. + +Creating a .kunitconfig +----------------------- +If you want to run a specific set of tests (rather than those listed in the +KUnit defconfig), you can provide Kconfig options in the ``.kunitconfig`` file. +This file essentially contains the regular Kernel config, with the specific +test targets as well. The ``.kunitconfig`` should also contain any other config +options required by the tests. + +A good starting point for a ``.kunitconfig`` is the KUnit defconfig: + +.. code-block:: bash + + cd $PATH_TO_LINUX_REPO + cp arch/um/configs/kunit_defconfig .kunitconfig + +You can then add any other Kconfig options you wish, e.g.: + +.. code-block:: none + + CONFIG_LIST_KUNIT_TEST=y + +:doc:`kunit_tool <kunit-tool>` will ensure that all config options set in +``.kunitconfig`` are set in the kernel ``.config`` before running the tests. +It'll warn you if you haven't included the dependencies of the options you're +using. + +.. note:: + Note that removing something from the ``.kunitconfig`` will not trigger a + rebuild of the ``.config`` file: the configuration is only updated if the + ``.kunitconfig`` is not a subset of ``.config``. This means that you can use + other tools (such as make menuconfig) to adjust other config options. + + +Running the tests (KUnit Wrapper) +--------------------------------- + +To make sure that everything is set up correctly, simply invoke the Python +wrapper from your kernel repo: + +.. code-block:: bash + + ./tools/testing/kunit/kunit.py run + +.. note:: + You may want to run ``make mrproper`` first. + +If everything worked correctly, you should see the following: + +.. code-block:: bash + + Generating .config ... + Building KUnit Kernel ... + Starting KUnit Kernel ... + +followed by a list of tests that are run. All of them should be passing. + +.. note:: + Because it is building a lot of sources for the first time, the + ``Building KUnit kernel`` step may take a while. + +Running tests without the KUnit Wrapper +======================================= + +If you'd rather not use the KUnit Wrapper (if, for example, you need to +integrate with other systems, or use an architecture other than UML), KUnit can +be included in any kernel, and the results read out and parsed manually. + +.. note:: + KUnit is not designed for use in a production system, and it's possible that + tests may reduce the stability or security of the system. 
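For example, a minimal sketch of that manual workflow might look something like this (a UML build is used here purely for illustration, and ``kunit.py parse`` can read a saved log file, as noted in :doc:`faq`):

.. code-block:: bash

	# Build a kernel with CONFIG_KUNIT and the desired tests enabled.
	make ARCH=um olddefconfig
	make ARCH=um

	# Boot the kernel and capture the console output, which contains the
	# TAP-formatted test results.
	./vmlinux | tee kunit_log.txt

	# Optionally, parse the captured log into a readable summary.
	./tools/testing/kunit/kunit.py parse kunit_log.txt

The following sections describe the configuration and run steps in more detail.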
+ + + +Configuring the kernel +---------------------- + +In order to enable KUnit itself, you simply need to enable the ``CONFIG_KUNIT`` +Kconfig option (it's under Kernel Hacking/Kernel Testing and Coverage in +menuconfig). From there, you can enable any KUnit tests you want: they usually +have config options ending in ``_KUNIT_TEST``. + +KUnit and KUnit tests can be compiled as modules: in this case the tests in a +module will be run when the module is loaded. + + +Running the tests (w/o KUnit Wrapper) +------------------------------------- + +Build and run your kernel as usual. Test output will be written to the kernel +log in `TAP <https://testanything.org/>`_ format. + +.. note:: + It's possible that there will be other lines and/or data interspersed in the + TAP output. + + +Writing your first test +======================= + +In your kernel repo let's add some code that we can test. Create a file +``drivers/misc/example.h`` with the contents: + +.. code-block:: c + + int misc_example_add(int left, int right); + +create a file ``drivers/misc/example.c``: + +.. code-block:: c + + #include <linux/errno.h> + + #include "example.h" + + int misc_example_add(int left, int right) + { + return left + right; + } + +Now add the following lines to ``drivers/misc/Kconfig``: + +.. code-block:: kconfig + + config MISC_EXAMPLE + bool "My example" + +and the following lines to ``drivers/misc/Makefile``: + +.. code-block:: make + + obj-$(CONFIG_MISC_EXAMPLE) += example.o + +Now we are ready to write the test. The test will be in +``drivers/misc/example-test.c``: + +.. code-block:: c + + #include <kunit/test.h> + #include "example.h" + + /* Define the test cases. */ + + static void misc_example_add_test_basic(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0)); + KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1)); + KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1)); + KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX)); + KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN)); + } + + static void misc_example_test_failure(struct kunit *test) + { + KUNIT_FAIL(test, "This test never passes."); + } + + static struct kunit_case misc_example_test_cases[] = { + KUNIT_CASE(misc_example_add_test_basic), + KUNIT_CASE(misc_example_test_failure), + {} + }; + + static struct kunit_suite misc_example_test_suite = { + .name = "misc-example", + .test_cases = misc_example_test_cases, + }; + kunit_test_suite(misc_example_test_suite); + +Now add the following to ``drivers/misc/Kconfig``: + +.. code-block:: kconfig + + config MISC_EXAMPLE_TEST + bool "Test for my example" + depends on MISC_EXAMPLE && KUNIT=y + +and the following to ``drivers/misc/Makefile``: + +.. code-block:: make + + obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o + +Now add it to your ``.kunitconfig``: + +.. code-block:: none + + CONFIG_MISC_EXAMPLE=y + CONFIG_MISC_EXAMPLE_TEST=y + +Now you can run the test: + +.. code-block:: bash + + ./tools/testing/kunit/kunit.py run + +You should see the following failure: + +.. code-block:: none + + ... + [16:08:57] [PASSED] misc-example:misc_example_add_test_basic + [16:08:57] [FAILED] misc-example:misc_example_test_failure + [16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17 + [16:08:57] This test never passes. + ... + +Congrats! You just wrote your first KUnit test! + +Next Steps +========== +* Check out the :doc:`usage` page for a more + in-depth explanation of KUnit. 
diff --git a/Documentation/dev-tools/kunit/style.rst b/Documentation/dev-tools/kunit/style.rst new file mode 100644 index 000000000..8dbcdc552 --- /dev/null +++ b/Documentation/dev-tools/kunit/style.rst @@ -0,0 +1,205 @@ +.. SPDX-License-Identifier: GPL-2.0 + +=========================== +Test Style and Nomenclature +=========================== + +To make finding, writing, and using KUnit tests as simple as possible, it's +strongly encouraged that they are named and written according to the guidelines +below. While it's possible to write KUnit tests which do not follow these rules, +they may break some tooling, may conflict with other tests, and may not be run +automatically by testing systems. + +It's recommended that you only deviate from these guidelines when: + +1. Porting tests to KUnit which are already known with an existing name, or +2. Writing tests which would cause serious problems if automatically run (e.g., + non-deterministically producing false positives or negatives, or taking an + extremely long time to run). + +Subsystems, Suites, and Tests +============================= + +In order to make tests as easy to find as possible, they're grouped into suites +and subsystems. A test suite is a group of tests which test a related area of +the kernel, and a subsystem is a set of test suites which test different parts +of the same kernel subsystem or driver. + +Subsystems +---------- + +Every test suite must belong to a subsystem. A subsystem is a collection of one +or more KUnit test suites which test the same driver or part of the kernel. A +rule of thumb is that a test subsystem should match a single kernel module. If +the code being tested can't be compiled as a module, in many cases the subsystem +should correspond to a directory in the source tree or an entry in the +MAINTAINERS file. If unsure, follow the conventions set by tests in similar +areas. + +Test subsystems should be named after the code being tested, either after the +module (wherever possible), or after the directory or files being tested. Test +subsystems should be named to avoid ambiguity where necessary. + +If a test subsystem name has multiple components, they should be separated by +underscores. *Do not* include "test" or "kunit" directly in the subsystem name +unless you are actually testing other tests or the kunit framework itself. + +Example subsystems could be: + +``ext4`` + Matches the module and filesystem name. +``apparmor`` + Matches the module name and LSM name. +``kasan`` + Common name for the tool, prominent part of the path ``mm/kasan`` +``snd_hda_codec_hdmi`` + Has several components (``snd``, ``hda``, ``codec``, ``hdmi``) separated by + underscores. Matches the module name. + +Avoid names like these: + +``linear-ranges`` + Names should use underscores, not dashes, to separate words. Prefer + ``linear_ranges``. +``qos-kunit-test`` + As well as using underscores, this name should not have "kunit-test" as a + suffix, and ``qos`` is ambiguous as a subsystem name. ``power_qos`` would be a + better name. +``pc_parallel_port`` + The corresponding module name is ``parport_pc``, so this subsystem should also + be named ``parport_pc``. + +.. note:: + The KUnit API and tools do not explicitly know about subsystems. They're + simply a way of categorising test suites and naming modules which + provides a simple, consistent way for humans to find and run tests. This + may change in the future, though. 
+ +Suites +------ + +KUnit tests are grouped into test suites, which cover a specific area of +functionality being tested. Test suites can have shared initialisation and +shutdown code which is run for all tests in the suite. +Not all subsystems will need to be split into multiple test suites (e.g. simple drivers). + +Test suites are named after the subsystem they are part of. If a subsystem +contains several suites, the specific area under test should be appended to the +subsystem name, separated by an underscore. + +In the event that there are multiple types of test using KUnit within a +subsystem (e.g., both unit tests and integration tests), they should be put into +separate suites, with the type of test as the last element in the suite name. +Unless these tests are actually present, avoid using ``_test``, ``_unittest`` or +similar in the suite name. + +The full test suite name (including the subsystem name) should be specified as +the ``.name`` member of the ``kunit_suite`` struct, and forms the base for the +module name (see below). + +Example test suites could include: + +``ext4_inode`` + Part of the ``ext4`` subsystem, testing the ``inode`` area. +``kunit_try_catch`` + Part of the ``kunit`` implementation itself, testing the ``try_catch`` area. +``apparmor_property_entry`` + Part of the ``apparmor`` subsystem, testing the ``property_entry`` area. +``kasan`` + The ``kasan`` subsystem has only one suite, so the suite name is the same as + the subsystem name. + +Avoid names like: + +``ext4_ext4_inode`` + There's no reason to state the subsystem twice. +``property_entry`` + The suite name is ambiguous without the subsystem name. +``kasan_integration_test`` + Because there is only one suite in the ``kasan`` subsystem, the suite should + just be called ``kasan``. There's no need to redundantly add + ``integration_test``. Should a separate test suite with, for example, unit + tests be added, then that suite could be named ``kasan_unittest`` or similar. + +Test Cases +---------- + +Individual tests consist of a single function which tests a constrained +codepath, property, or function. In the test output, individual tests' results +will show up as subtests of the suite's results. + +Tests should be named after what they're testing. This is often the name of the +function being tested, with a description of the input or codepath being tested. +As tests are C functions, they should be named and written in accordance with +the kernel coding style. + +.. note:: + As tests are themselves functions, their names cannot conflict with + other C identifiers in the kernel. This may require some creative + naming. It's a good idea to make your test functions `static` to avoid + polluting the global namespace. + +Example test names include: + +``unpack_u32_with_null_name`` + Tests the ``unpack_u32`` function when a NULL name is passed in. +``test_list_splice`` + Tests the ``list_splice`` macro. It has the prefix ``test_`` to avoid a + name conflict with the macro itself. + + +Should it be necessary to refer to a test outside the context of its test suite, +the *fully-qualified* name of a test should be the suite name followed by the +test name, separated by a colon (i.e. ``suite:test``). + +Test Kconfig Entries +==================== + +Every test suite should be tied to a Kconfig entry. + +This Kconfig entry must: + +* be named ``CONFIG_<name>_KUNIT_TEST``: where <name> is the name of the test + suite. 
+* be listed either alongside the config entries for the driver/subsystem being + tested, or be under [Kernel Hacking]→[Kernel Testing and Coverage] +* depend on ``CONFIG_KUNIT`` +* be visible only if ``CONFIG_KUNIT_ALL_TESTS`` is not enabled. +* have a default value of ``CONFIG_KUNIT_ALL_TESTS``. +* have a brief description of KUnit in the help text + +Unless there's a specific reason not to (e.g. the test is unable to be built as +a module), Kconfig entries for tests should be tristate. + +An example Kconfig entry: + +.. code-block:: none + + config FOO_KUNIT_TEST + tristate "KUnit test for foo" if !KUNIT_ALL_TESTS + depends on KUNIT + default KUNIT_ALL_TESTS + help + This builds unit tests for foo. + + For more information on KUnit and unit tests in general, please refer + to the KUnit documentation in Documentation/dev-tools/kunit/. + + If unsure, say N. + + +Test File and Module Names +========================== + +KUnit tests can often be compiled as a module. These modules should be named +after the test suite, followed by ``_test``. If this is likely to conflict with +non-KUnit tests, the suffix ``_kunit`` can also be used. + +The easiest way of achieving this is to name the file containing the test suite +``<suite>_test.c`` (or, as above, ``<suite>_kunit.c``). This file should be +placed next to the code under test. + +If the suite name contains some or all of the name of the test's parent +directory, it may make sense to modify the source filename to reduce redundancy. +For example, a ``foo_firmware`` suite could be in the ``foo/firmware_test.c`` +file. diff --git a/Documentation/dev-tools/kunit/usage.rst b/Documentation/dev-tools/kunit/usage.rst new file mode 100644 index 000000000..9c28c518e --- /dev/null +++ b/Documentation/dev-tools/kunit/usage.rst @@ -0,0 +1,617 @@ +.. SPDX-License-Identifier: GPL-2.0 + +=========== +Using KUnit +=========== + +The purpose of this document is to describe what KUnit is, how it works, how it +is intended to be used, and all the concepts and terminology that are needed to +understand it. This guide assumes a working knowledge of the Linux kernel and +some basic knowledge of testing. + +For a high level introduction to KUnit, including setting up KUnit for your +project, see :doc:`start`. + +Organization of this document +============================= + +This document is organized into two main sections: Testing and Isolating +Behavior. The first covers what unit tests are and how to use KUnit to write +them. The second covers how to use KUnit to isolate code and make it possible +to unit test code that was otherwise un-unit-testable. + +Testing +======= + +What is KUnit? +-------------- + +"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing +Framework." KUnit is intended first and foremost for writing unit tests; it is +general enough that it can be used to write integration tests; however, this is +a secondary goal. KUnit has no ambition of being the only testing framework for +the kernel; for example, it does not intend to be an end-to-end testing +framework. + +What is Unit Testing? +--------------------- + +A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that +tests code at the smallest possible scope, a *unit* of code. In the C +programming language that's a function. + +Unit tests should be written for all the publicly exposed functions in a +compilation unit; so that is all the functions that are exported in either a +*class* (defined below) or all functions which are **not** static. 
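For example, in a hypothetical ``foo.c`` (all names here are purely illustrative), the non-static function is the unit a KUnit test would target, while the static helper is exercised indirectly through it:

.. code-block:: c

	/* Internal helper: not a separate unit for testing purposes. */
	static int foo_clamp(int value, int max)
	{
		return value > max ? max : value;
	}

	/* Publicly exposed function: the unit a KUnit test would target. */
	int foo_process(int value)
	{
		return foo_clamp(value, 100);
	}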
+ +Writing Tests +------------- + +Test Cases +~~~~~~~~~~ + +The fundamental unit in KUnit is the test case. A test case is a function with +the signature ``void (*)(struct kunit *test)``. It calls a function to be tested +and then sets *expectations* for what should happen. For example: + +.. code-block:: c + + void example_test_success(struct kunit *test) + { + } + + void example_test_failure(struct kunit *test) + { + KUNIT_FAIL(test, "This test never passes."); + } + +In the above example ``example_test_success`` always passes because it does +nothing; no expectations are set, so all expectations pass. On the other hand +``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is +a special expectation that logs a message and causes the test case to fail. + +Expectations +~~~~~~~~~~~~ +An *expectation* is a way to specify that you expect a piece of code to do +something in a test. An expectation is called like a function. A test is made +by setting expectations about the behavior of a piece of code under test; when +one or more of the expectations fail, the test case fails and information about +the failure is logged. For example: + +.. code-block:: c + + void add_test_basic(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, 1, add(1, 0)); + KUNIT_EXPECT_EQ(test, 2, add(1, 1)); + } + +In the above example ``add_test_basic`` makes a number of assertions about the +behavior of a function called ``add``; the first parameter is always of type +``struct kunit *``, which contains information about the current test context; +the second parameter, in this case, is what the value is expected to be; the +last value is what the value actually is. If ``add`` passes all of these +expectations, the test case, ``add_test_basic`` will pass; if any one of these +expectations fails, the test case will fail. + +It is important to understand that a test case *fails* when any expectation is +violated; however, the test will continue running, potentially trying other +expectations until the test case ends or is otherwise terminated. This is as +opposed to *assertions* which are discussed later. + +To learn about more expectations supported by KUnit, see :doc:`api/test`. + +.. note:: + A single test case should be pretty short, pretty easy to understand, + focused on a single behavior. + +For example, if we wanted to properly test the add function above, we would +create additional tests cases which would each test a different property that an +add function should have like this: + +.. code-block:: c + + void add_test_basic(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, 1, add(1, 0)); + KUNIT_EXPECT_EQ(test, 2, add(1, 1)); + } + + void add_test_negative(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, 0, add(-1, 1)); + } + + void add_test_max(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX)); + KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN)); + } + + void add_test_overflow(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1)); + } + +Notice how it is immediately obvious what all the properties that we are testing +for are. + +Assertions +~~~~~~~~~~ + +KUnit also has the concept of an *assertion*. An assertion is just like an +expectation except the assertion immediately terminates the test case if it is +not satisfied. + +For example: + +.. 
code-block:: c + + static void mock_test_do_expect_default_return(struct kunit *test) + { + struct mock_test_context *ctx = test->priv; + struct mock *mock = ctx->mock; + int param0 = 5, param1 = -5; + const char *two_param_types[] = {"int", "int"}; + const void *two_params[] = {¶m0, ¶m1}; + const void *ret; + + ret = mock->do_expect(mock, + "test_printk", test_printk, + two_param_types, two_params, + ARRAY_SIZE(two_params)); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret); + KUNIT_EXPECT_EQ(test, -4, *((int *) ret)); + } + +In this example, the method under test should return a pointer to a value, so +if the pointer returned by the method is null or an errno, we don't want to +bother continuing the test since the following expectation could crash the test +case. `ASSERT_NOT_ERR_OR_NULL(...)` allows us to bail out of the test case if +the appropriate conditions have not been satisfied to complete the test. + +Test Suites +~~~~~~~~~~~ + +Now obviously one unit test isn't very helpful; the power comes from having +many test cases covering all of a unit's behaviors. Consequently it is common +to have many *similar* tests; in order to reduce duplication in these closely +related tests most unit testing frameworks - including KUnit - provide the +concept of a *test suite*. A *test suite* is just a collection of test cases +for a unit of code with a set up function that gets invoked before every test +case and then a tear down function that gets invoked after every test case +completes. + +Example: + +.. code-block:: c + + static struct kunit_case example_test_cases[] = { + KUNIT_CASE(example_test_foo), + KUNIT_CASE(example_test_bar), + KUNIT_CASE(example_test_baz), + {} + }; + + static struct kunit_suite example_test_suite = { + .name = "example", + .init = example_test_init, + .exit = example_test_exit, + .test_cases = example_test_cases, + }; + kunit_test_suite(example_test_suite); + +In the above example the test suite, ``example_test_suite``, would run the test +cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``; +each would have ``example_test_init`` called immediately before it and would +have ``example_test_exit`` called immediately after it. +``kunit_test_suite(example_test_suite)`` registers the test suite with the +KUnit test framework. + +.. note:: + A test case will only be run if it is associated with a test suite. + +``kunit_test_suite(...)`` is a macro which tells the linker to put the specified +test suite in a special linker section so that it can be run by KUnit either +after late_init, or when the test module is loaded (depending on whether the +test was built in or not). + +For more information on these types of things see the :doc:`api/test`. + +Isolating Behavior +================== + +The most important aspect of unit testing that other forms of testing do not +provide is the ability to limit the amount of code under test to a single unit. +In practice, this is only possible by being able to control what code gets run +when the unit under test calls a function and this is usually accomplished +through some sort of indirection where a function is exposed as part of an API +such that the definition of that function can be changed without affecting the +rest of the code base. In the kernel this primarily comes from two constructs, +classes, structs that contain function pointers that are provided by the +implementer, and architecture-specific functions which have definitions selected +at compile time. 
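Classes are covered in detail below. For the second construct, a sketch of what compile-time selection of a definition can look like (all names here are invented for illustration):

.. code-block:: c

	#ifdef CONFIG_MY_ARCH_HAS_POPCOUNT
	/* Architecture-specific implementation using a compiler builtin. */
	static inline unsigned int count_bits(unsigned long word)
	{
		return __builtin_popcountl(word);
	}
	#else
	/* Portable fallback used when the arch-specific path is unavailable. */
	static inline unsigned int count_bits(unsigned long word)
	{
		unsigned int n = 0;

		while (word) {
			n += word & 1;
			word >>= 1;
		}
		return n;
	}
	#endif

Because the definition is chosen at compile time, a test built for a given configuration only ever exercises one of the two implementations.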
+ +Classes +------- + +Classes are not a construct that is built into the C programming language; +however, it is an easily derived concept. Accordingly, pretty much every project +that does not use a standardized object oriented library (like GNOME's GObject) +has their own slightly different way of doing object oriented programming; the +Linux kernel is no exception. + +The central concept in kernel object oriented programming is the class. In the +kernel, a *class* is a struct that contains function pointers. This creates a +contract between *implementers* and *users* since it forces them to use the +same function signature without having to call the function directly. In order +for it to truly be a class, the function pointers must specify that a pointer +to the class, known as a *class handle*, be one of the parameters; this makes +it possible for the member functions (also known as *methods*) to have access +to member variables (more commonly known as *fields*) allowing the same +implementation to have multiple *instances*. + +Typically a class can be *overridden* by *child classes* by embedding the +*parent class* in the child class. Then when a method provided by the child +class is called, the child implementation knows that the pointer passed to it is +of a parent contained within the child; because of this, the child can compute +the pointer to itself because the pointer to the parent is always a fixed offset +from the pointer to the child; this offset is the offset of the parent contained +in the child struct. For example: + +.. code-block:: c + + struct shape { + int (*area)(struct shape *this); + }; + + struct rectangle { + struct shape parent; + int length; + int width; + }; + + int rectangle_area(struct shape *this) + { + struct rectangle *self = container_of(this, struct shape, parent); + + return self->length * self->width; + }; + + void rectangle_new(struct rectangle *self, int length, int width) + { + self->parent.area = rectangle_area; + self->length = length; + self->width = width; + } + +In this example (as in most kernel code) the operation of computing the pointer +to the child from the pointer to the parent is done by ``container_of``. + +Faking Classes +~~~~~~~~~~~~~~ + +In order to unit test a piece of code that calls a method in a class, the +behavior of the method must be controllable, otherwise the test ceases to be a +unit test and becomes an integration test. + +A fake just provides an implementation of a piece of code that is different than +what runs in a production instance, but behaves identically from the standpoint +of the callers; this is usually done to replace a dependency that is hard to +deal with, or is slow. + +A good example for this might be implementing a fake EEPROM that just stores the +"contents" in an internal buffer. For example, let's assume we have a class that +represents an EEPROM: + +.. code-block:: c + + struct eeprom { + ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count); + ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count); + }; + +And we want to test some code that buffers writes to the EEPROM: + +.. code-block:: c + + struct eeprom_buffer { + ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count); + int flush(struct eeprom_buffer *this); + size_t flush_count; /* Flushes when buffer exceeds flush_count. 
*/ + }; + + struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom); + void destroy_eeprom_buffer(struct eeprom *eeprom); + +We can easily test this code by *faking out* the underlying EEPROM: + +.. code-block:: c + + struct fake_eeprom { + struct eeprom parent; + char contents[FAKE_EEPROM_CONTENTS_SIZE]; + }; + + ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count) + { + struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent); + + count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset); + memcpy(buffer, this->contents + offset, count); + + return count; + } + + ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count) + { + struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent); + + count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset); + memcpy(this->contents + offset, buffer, count); + + return count; + } + + void fake_eeprom_init(struct fake_eeprom *this) + { + this->parent.read = fake_eeprom_read; + this->parent.write = fake_eeprom_write; + memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE); + } + +We can now use it to test ``struct eeprom_buffer``: + +.. code-block:: c + + struct eeprom_buffer_test { + struct fake_eeprom *fake_eeprom; + struct eeprom_buffer *eeprom_buffer; + }; + + static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer; + struct fake_eeprom *fake_eeprom = ctx->fake_eeprom; + char buffer[] = {0xff}; + + eeprom_buffer->flush_count = SIZE_MAX; + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0); + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0); + + eeprom_buffer->flush(eeprom_buffer); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff); + } + + static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer; + struct fake_eeprom *fake_eeprom = ctx->fake_eeprom; + char buffer[] = {0xff}; + + eeprom_buffer->flush_count = 2; + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0); + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff); + } + + static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer; + struct fake_eeprom *fake_eeprom = ctx->fake_eeprom; + char buffer[] = {0xff, 0xff}; + + eeprom_buffer->flush_count = 2; + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0); + + eeprom_buffer->write(eeprom_buffer, buffer, 2); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff); + /* Should have only flushed the first two bytes. 
*/ + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0); + } + + static int eeprom_buffer_test_init(struct kunit *test) + { + struct eeprom_buffer_test *ctx; + + ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); + + ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom); + fake_eeprom_init(ctx->fake_eeprom); + + ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer); + + test->priv = ctx; + + return 0; + } + + static void eeprom_buffer_test_exit(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + + destroy_eeprom_buffer(ctx->eeprom_buffer); + } + +.. _kunit-on-non-uml: + +KUnit on non-UML architectures +============================== + +By default KUnit uses UML as a way to provide dependencies for code under test. +Under most circumstances KUnit's usage of UML should be treated as an +implementation detail of how KUnit works under the hood. Nevertheless, there +are instances where being able to run architecture-specific code or test +against real hardware is desirable. For these reasons KUnit supports running on +other architectures. + +Running existing KUnit tests on non-UML architectures +----------------------------------------------------- + +There are some special considerations when running existing KUnit tests on +non-UML architectures: + +* Hardware may not be deterministic, so a test that always passes or fails + when run under UML may not always do so on real hardware. +* Hardware and VM environments may not be hermetic. KUnit tries its best to + provide a hermetic environment to run tests; however, it cannot manage state + that it doesn't know about outside of the kernel. Consequently, tests that + may be hermetic on UML may not be hermetic on other architectures. +* Some features and tooling may not be supported outside of UML. +* Hardware and VMs are slower than UML. + +None of these are reasons not to run your KUnit tests on real hardware; they are +only things to be aware of when doing so. + +The biggest impediment will likely be that certain KUnit features and +infrastructure may not support your target environment. For example, at this +time the KUnit Wrapper (``tools/testing/kunit/kunit.py``) does not work outside +of UML. Unfortunately, there is no way around this. Using UML (or even just a +particular architecture) allows us to make a lot of assumptions that make it +possible to do things which might otherwise be impossible. + +Nevertheless, all core KUnit framework features are fully supported on all +architectures, and using them is straightforward: all you need to do is to take +your kunitconfig, your Kconfig options for the tests you would like to run, and +merge them into whatever config your are using for your platform. That's it! + +For example, let's say you have the following kunitconfig: + +.. code-block:: none + + CONFIG_KUNIT=y + CONFIG_KUNIT_EXAMPLE_TEST=y + +If you wanted to run this test on an x86 VM, you might add the following config +options to your ``.config``: + +.. code-block:: none + + CONFIG_KUNIT=y + CONFIG_KUNIT_EXAMPLE_TEST=y + CONFIG_SERIAL_8250=y + CONFIG_SERIAL_8250_CONSOLE=y + +All these new options do is enable support for a common serial console needed +for logging. + +Next, you could build a kernel with these tests as follows: + + +.. 
code-block:: bash + + make ARCH=x86 olddefconfig + make ARCH=x86 + +Once you have built a kernel, you could run it on QEMU as follows: + +.. code-block:: bash + + qemu-system-x86_64 -enable-kvm \ + -m 1024 \ + -kernel arch/x86_64/boot/bzImage \ + -append 'console=ttyS0' \ + --nographic + +Interspersed in the kernel logs you might see the following: + +.. code-block:: none + + TAP version 14 + # Subtest: example + 1..1 + # example_simple_test: initializing + ok 1 - example_simple_test + ok 1 - example + +Congratulations, you just ran a KUnit test on the x86 architecture! + +In a similar manner, kunit and kunit tests can also be built as modules, +so if you wanted to run tests in this way you might add the following config +options to your ``.config``: + +.. code-block:: none + + CONFIG_KUNIT=m + CONFIG_KUNIT_EXAMPLE_TEST=m + +Once the kernel is built and installed, a simple + +.. code-block:: bash + + modprobe example-test + +...will run the tests. + +.. note:: + Note that you should make sure your test depends on ``KUNIT=y`` in Kconfig + if the test does not support module build. Otherwise, it will trigger + compile errors if ``CONFIG_KUNIT`` is ``m``. + +Writing new tests for other architectures +----------------------------------------- + +The first thing you must do is ask yourself whether it is necessary to write a +KUnit test for a specific architecture, and then whether it is necessary to +write that test for a particular piece of hardware. In general, writing a test +that depends on having access to a particular piece of hardware or software (not +included in the Linux source repo) should be avoided at all costs. + +Even if you only ever plan on running your KUnit test on your hardware +configuration, other people may want to run your tests and may not have access +to your hardware. If you write your test to run on UML, then anyone can run your +tests without knowing anything about your particular setup, and you can still +run your tests on your hardware setup just by compiling for your architecture. + +.. important:: + Always prefer tests that run on UML to tests that only run under a particular + architecture, and always prefer tests that run under QEMU or another easy + (and monetarily free) to obtain software environment to a specific piece of + hardware. + +Nevertheless, there are still valid reasons to write an architecture or hardware +specific test: for example, you might want to test some code that really belongs +in ``arch/some-arch/*``. Even so, try your best to write the test so that it +does not depend on physical hardware: if some of your test cases don't need the +hardware, only require the hardware for tests that actually need it. + +Now that you have narrowed down exactly what bits are hardware specific, the +actual procedure for writing and running the tests is pretty much the same as +writing normal KUnit tests. One special caveat is that you have to reset +hardware state in between test cases; if this is not possible, you may only be +able to run one test case per invocation. + +.. TODO(brendanhiggins@google.com): Add an actual example of an architecture- + dependent KUnit test. + +KUnit debugfs representation +============================ +When kunit test suites are initialized, they create an associated directory +in ``/sys/kernel/debug/kunit/<test-suite>``. The directory contains one file + +- results: "cat results" displays results of each test case and the results + of the entire suite for the last test run. 
+ +The debugfs representation is primarily of use when kunit test suites are +run in a native environment, either as modules or builtin. Having a way +to display results like this is valuable as otherwise results can be +intermixed with other events in dmesg output. The maximum size of each +results file is KUNIT_LOG_SIZE bytes (defined in ``include/kunit/test.h``).
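For example, assuming the example test suite from :doc:`start` is built into the kernel, its results for the last run could be read back with something like:

.. code-block:: bash

	cat /sys/kernel/debug/kunit/example/results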