author | Daniel Baumann <daniel.baumann@progress-linux.org> | 2024-04-28 14:29:10 +0000
committer | Daniel Baumann <daniel.baumann@progress-linux.org> | 2024-04-28 14:29:10 +0000
commit | 2aa4a82499d4becd2284cdb482213d541b8804dd
tree | b80bf8bf13c3766139fbacc530efd0dd9d54394c /security/nss/gtests/google_test/gtest/docs
parent | Initial commit.
Adding upstream version 86.0.1.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'security/nss/gtests/google_test/gtest/docs')
-rw-r--r-- | security/nss/gtests/google_test/gtest/docs/Pkgconfig.md | 141
-rw-r--r-- | security/nss/gtests/google_test/gtest/docs/advanced.md | 2567
-rw-r--r-- | security/nss/gtests/google_test/gtest/docs/faq.md | 753
-rw-r--r-- | security/nss/gtests/google_test/gtest/docs/primer.md | 567
-rw-r--r-- | security/nss/gtests/google_test/gtest/docs/pump_manual.md | 190
-rw-r--r-- | security/nss/gtests/google_test/gtest/docs/samples.md | 22
6 files changed, 4240 insertions, 0 deletions
diff --git a/security/nss/gtests/google_test/gtest/docs/Pkgconfig.md b/security/nss/gtests/google_test/gtest/docs/Pkgconfig.md new file mode 100644 index 0000000000..6dc0673889 --- /dev/null +++ b/security/nss/gtests/google_test/gtest/docs/Pkgconfig.md @@ -0,0 +1,141 @@ +## Using GoogleTest from various build systems + +GoogleTest comes with pkg-config files that can be used to determine all +necessary flags for compiling and linking to GoogleTest (and GoogleMock). +Pkg-config is a standardised plain-text format containing + +* the includedir (-I) path +* necessary macro (-D) definitions +* further required flags (-pthread) +* the library (-L) path +* the library (-l) to link to + +All current build systems support pkg-config in one way or another. For all +examples here we assume you want to compile the sample +`samples/sample3_unittest.cc`. + +### CMake + +Using `pkg-config` in CMake is fairly easy: + +```cmake +cmake_minimum_required(VERSION 3.0) + +cmake_policy(SET CMP0048 NEW) +project(my_gtest_pkgconfig VERSION 0.0.1 LANGUAGES CXX) + +find_package(PkgConfig) +pkg_search_module(GTEST REQUIRED gtest_main) + +add_executable(testapp samples/sample3_unittest.cc) +target_link_libraries(testapp ${GTEST_LDFLAGS}) +target_compile_options(testapp PUBLIC ${GTEST_CFLAGS}) + +include(CTest) +add_test(first_and_only_test testapp) +``` + +It is generally recommended that you use `target_compile_options` + `_CFLAGS` +over `target_include_directories` + `_INCLUDE_DIRS` as the former includes not +just -I flags (GoogleTest might require a macro indicating to internal headers +that all libraries have been compiled with threading enabled. In addition, +GoogleTest might also require `-pthread` in the compiling step, and as such +splitting the pkg-config `Cflags` variable into include dirs and macros for +`target_compile_definitions()` might still miss this). The same recommendation +goes for using `_LDFLAGS` over the more commonplace `_LIBRARIES`, which happens +to discard `-L` flags and `-pthread`. + +### Autotools + +Finding GoogleTest in Autoconf and using it from Automake is also fairly easy: + +In your `configure.ac`: + +``` +AC_PREREQ([2.69]) +AC_INIT([my_gtest_pkgconfig], [0.0.1]) +AC_CONFIG_SRCDIR([samples/sample3_unittest.cc]) +AC_PROG_CXX + +PKG_CHECK_MODULES([GTEST], [gtest_main]) + +AM_INIT_AUTOMAKE([foreign subdir-objects]) +AC_CONFIG_FILES([Makefile]) +AC_OUTPUT +``` + +and in your `Makefile.am`: + +``` +check_PROGRAMS = testapp +TESTS = $(check_PROGRAMS) + +testapp_SOURCES = samples/sample3_unittest.cc +testapp_CXXFLAGS = $(GTEST_CFLAGS) +testapp_LDADD = $(GTEST_LIBS) +``` + +### Meson + +Meson natively uses pkgconfig to query dependencies: + +``` +project('my_gtest_pkgconfig', 'cpp', version : '0.0.1') + +gtest_dep = dependency('gtest_main') + +testapp = executable( + 'testapp', + files(['samples/sample3_unittest.cc']), + dependencies : gtest_dep, + install : false) + +test('first_and_only_test', testapp) +``` + +### Plain Makefiles + +Since `pkg-config` is a small Unix command-line utility, it can be used in +handwritten `Makefile`s too: + +```makefile +GTEST_CFLAGS = `pkg-config --cflags gtest_main` +GTEST_LIBS = `pkg-config --libs gtest_main` + +.PHONY: tests all + +tests: all + ./testapp + +all: testapp + +testapp: testapp.o + $(CXX) $(CXXFLAGS) $(LDFLAGS) $< -o $@ $(GTEST_LIBS) + +testapp.o: samples/sample3_unittest.cc + $(CXX) $(CPPFLAGS) $(CXXFLAGS) $< -c -o $@ $(GTEST_CFLAGS) +``` + +### Help! pkg-config can't find GoogleTest! 
+ +Let's say you have a `CMakeLists.txt` along the lines of the one in this +tutorial and you try to run `cmake`. It is very possible that you get a failure +along the lines of: + +``` +-- Checking for one of the modules 'gtest_main' +CMake Error at /usr/share/cmake/Modules/FindPkgConfig.cmake:640 (message): + None of the required 'gtest_main' found +``` + +These failures are common if you installed GoogleTest yourself and have not +sourced it from a distro or other package manager. If so, you need to tell +pkg-config where it can find the `.pc` files containing the information. Say you +installed GoogleTest to `/usr/local`, then it might be that the `.pc` files are +installed under `/usr/local/lib64/pkgconfig`. If you set + +``` +export PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig +``` + +pkg-config will also try to look in `PKG_CONFIG_PATH` to find `gtest_main.pc`. diff --git a/security/nss/gtests/google_test/gtest/docs/advanced.md b/security/nss/gtests/google_test/gtest/docs/advanced.md new file mode 100644 index 0000000000..3e5f779d0a --- /dev/null +++ b/security/nss/gtests/google_test/gtest/docs/advanced.md @@ -0,0 +1,2567 @@ +# Advanced googletest Topics + +<!-- GOOGLETEST_CM0016 DO NOT DELETE --> + +## Introduction + +Now that you have read the [googletest Primer](primer.md) and learned how to +write tests using googletest, it's time to learn some new tricks. This document +will show you more assertions as well as how to construct complex failure +messages, propagate fatal failures, reuse and speed up your test fixtures, and +use various flags with your tests. + +## More Assertions + +This section covers some less frequently used, but still significant, +assertions. + +### Explicit Success and Failure + +These three assertions do not actually test a value or expression. Instead, they +generate a success or failure directly. Like the macros that actually perform a +test, you may stream a custom failure message into them. + +```c++ +SUCCEED(); +``` + +Generates a success. This does **NOT** make the overall test succeed. A test is +considered successful only if none of its assertions fail during its execution. + +NOTE: `SUCCEED()` is purely documentary and currently doesn't generate any +user-visible output. However, we may add `SUCCEED()` messages to googletest's +output in the future. + +```c++ +FAIL(); +ADD_FAILURE(); +ADD_FAILURE_AT("file_path", line_number); +``` + +`FAIL()` generates a fatal failure, while `ADD_FAILURE()` and `ADD_FAILURE_AT()` +generate a nonfatal failure. These are useful when control flow, rather than a +Boolean expression, determines the test's success or failure. For example, you +might want to write something like: + +```c++ +switch(expression) { + case 1: + ... some checks ... + case 2: + ... some other checks ... + default: + FAIL() << "We shouldn't get here."; +} +``` + +NOTE: you can only use `FAIL()` in functions that return `void`. See the +[Assertion Placement section](#assertion-placement) for more information. 
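To make the difference concrete, here is a minimal sketch (the `ExpectAllNonNegative` helper and its rule are hypothetical, not part of googletest): `ADD_FAILURE()` records a non-fatal failure for every bad element and keeps going, whereas `FAIL()` is fatal and would return from the helper at the first one.

```c++
#include <vector>

#include "gtest/gtest.h"

// Hypothetical helper: flags every negative entry, not just the first.
// ADD_FAILURE() is non-fatal, so the loop keeps running; a FAIL() here
// would return from the helper immediately (and requires a void return type).
void ExpectAllNonNegative(const std::vector<int>& values) {
  for (size_t i = 0; i < values.size(); ++i) {
    if (values[i] < 0) {
      ADD_FAILURE() << "values[" << i << "] is negative: " << values[i];
    }
  }
}

TEST(ValidationTest, ReportsEveryBadElement) {
  ExpectAllNonNegative({1, -2, 3, -4});  // records two non-fatal failures
}
```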
+ +### Exception Assertions + +These are for verifying that a piece of code throws (or does not throw) an +exception of the given type: + +Fatal assertion | Nonfatal assertion | Verifies +------------------------------------------ | ------------------------------------------ | -------- +`ASSERT_THROW(statement, exception_type);` | `EXPECT_THROW(statement, exception_type);` | `statement` throws an exception of the given type +`ASSERT_ANY_THROW(statement);` | `EXPECT_ANY_THROW(statement);` | `statement` throws an exception of any type +`ASSERT_NO_THROW(statement);` | `EXPECT_NO_THROW(statement);` | `statement` doesn't throw any exception + +Examples: + +```c++ +ASSERT_THROW(Foo(5), bar_exception); + +EXPECT_NO_THROW({ + int n = 5; + Bar(&n); +}); +``` + +**Availability**: requires exceptions to be enabled in the build environment. + +### Predicate Assertions for Better Error Messages + +Even though googletest has a rich set of assertions, they can never be complete, +as it's neither possible nor a good idea to anticipate all scenarios a user might +run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check a +complex expression, for lack of a better macro. This has the problem of not +showing you the values of the parts of the expression, making it hard to +understand what went wrong. As a workaround, some users choose to construct the +failure message by themselves, streaming it into `EXPECT_TRUE()`. However, this +is awkward, especially when the expression has side-effects or is expensive to +evaluate. + +googletest gives you three different options to solve this problem: + +#### Using an Existing Boolean Function + +If you already have a function or functor that returns `bool` (or a type that +can be implicitly converted to `bool`), you can use it in a *predicate +assertion* to get the function arguments printed for free: + +<!-- mdformat off(github rendering does not support multiline tables) --> + +| Fatal assertion | Nonfatal assertion | Verifies | +| --------------------------------- | --------------------------------- | --------------------------- | +| `ASSERT_PRED1(pred1, val1)` | `EXPECT_PRED1(pred1, val1)` | `pred1(val1)` is true | +| `ASSERT_PRED2(pred2, val1, val2)` | `EXPECT_PRED2(pred2, val1, val2)` | `pred2(val1, val2)` is true | +| `...` | `...` | `...` | + +<!-- mdformat on--> +In the above, `predn` is an `n`-ary predicate function or functor, where `val1`, +`val2`, ..., and `valn` are its arguments. The assertion succeeds if the +predicate returns `true` when applied to the given arguments, and fails +otherwise. When the assertion fails, it prints the value of each argument. In +either case, the arguments are evaluated exactly once. + +Here's an example. Given + +```c++ +// Returns true if m and n have no common divisors except 1. +bool MutuallyPrime(int m, int n) { ... } + +const int a = 3; +const int b = 4; +const int c = 10; +``` + +the assertion + +```c++ + EXPECT_PRED2(MutuallyPrime, a, b); +``` + +will succeed, while the assertion + +```c++ + EXPECT_PRED2(MutuallyPrime, b, c); +``` + +will fail with the message + +```none +MutuallyPrime(b, c) is false, where +b is 4 +c is 10 +``` + +> NOTE: +> +> 1. If you see a compiler error "no matching function to call" when using +> `ASSERT_PRED*` or `EXPECT_PRED*`, please see +> [this](faq.md#the-compiler-complains-no-matching-function-to-call-when-i-use-assert-pred-how-do-i-fix-it) +> for how to resolve it.
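Since the examples above only show free functions, here is a hedged sketch of the functor case (`IsDivisibleBy` is a made-up example type, not part of googletest): a stateful functor works just as well as a function, and the failing argument is still printed.

```c++
#include "gtest/gtest.h"

// Hypothetical functor predicate: true if n is divisible by `divisor`.
struct IsDivisibleBy {
  explicit IsDivisibleBy(int d) : divisor(d) {}
  bool operator()(int n) const { return n % divisor == 0; }
  int divisor;
};

TEST(PredicateAssertionTest, AcceptsFunctors) {
  EXPECT_PRED1(IsDivisibleBy(3), 9);   // passes
  EXPECT_PRED1(IsDivisibleBy(3), 10);  // fails and prints the value 10
}
```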
+ +#### Using a Function That Returns an AssertionResult + +While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not +satisfactory: you have to use different macros for different arities, and it +feels more like Lisp than C++. The `::testing::AssertionResult` class solves +this problem. + +An `AssertionResult` object represents the result of an assertion (whether it's +a success or a failure, and an associated message). You can create an +`AssertionResult` using one of these factory functions: + +```c++ +namespace testing { + +// Returns an AssertionResult object to indicate that an assertion has +// succeeded. +AssertionResult AssertionSuccess(); + +// Returns an AssertionResult object to indicate that an assertion has +// failed. +AssertionResult AssertionFailure(); + +} +``` + +You can then use the `<<` operator to stream messages to the `AssertionResult` +object. + +To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`), +write a predicate function that returns `AssertionResult` instead of `bool`. For +example, if you define `IsEven()` as: + +```c++ +::testing::AssertionResult IsEven(int n) { + if ((n % 2) == 0) + return ::testing::AssertionSuccess(); + else + return ::testing::AssertionFailure() << n << " is odd"; +} +``` + +instead of: + +```c++ +bool IsEven(int n) { + return (n % 2) == 0; +} +``` + +the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print: + +```none +Value of: IsEven(Fib(4)) + Actual: false (3 is odd) +Expected: true +``` + +instead of a more opaque + +```none +Value of: IsEven(Fib(4)) + Actual: false +Expected: true +``` + +If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well +(one third of Boolean assertions in the Google code base are negative ones), and +are fine with making the predicate slower in the success case, you can supply a +success message: + +```c++ +::testing::AssertionResult IsEven(int n) { + if ((n % 2) == 0) + return ::testing::AssertionSuccess() << n << " is even"; + else + return ::testing::AssertionFailure() << n << " is odd"; +} +``` + +Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print + +```none + Value of: IsEven(Fib(6)) + Actual: true (8 is even) + Expected: false +``` + +#### Using a Predicate-Formatter + +If you find the default message generated by `(ASSERT|EXPECT)_PRED*` and +`(ASSERT|EXPECT)_(TRUE|FALSE)` unsatisfactory, or some arguments to your +predicate do not support streaming to `ostream`, you can instead use the +following *predicate-formatter assertions* to *fully* customize how the message +is formatted: + +Fatal assertion | Nonfatal assertion | Verifies +------------------------------------------------ | ------------------------------------------------ | -------- +`ASSERT_PRED_FORMAT1(pred_format1, val1);` | `EXPECT_PRED_FORMAT1(pred_format1, val1);` | `pred_format1(val1)` is successful +`ASSERT_PRED_FORMAT2(pred_format2, val1, val2);` | `EXPECT_PRED_FORMAT2(pred_format2, val1, val2);` | `pred_format2(val1, val2)` is successful +`...` | `...` | ... + +The difference between this and the previous group of macros is that instead of +a predicate, `(ASSERT|EXPECT)_PRED_FORMAT*` take a *predicate-formatter* +(`pred_formatn`), which is a function or functor with the signature: + +```c++ +::testing::AssertionResult PredicateFormattern(const char* expr1, + const char* expr2, + ... + const char* exprn, + T1 val1, + T2 val2, + ... 
+ Tn valn); +``` + +where `val1`, `val2`, ..., and `valn` are the values of the predicate arguments, +and `expr1`, `expr2`, ..., and `exprn` are the corresponding expressions as they +appear in the source code. The types `T1`, `T2`, ..., and `Tn` can be either +value types or reference types. For example, if an argument has type `Foo`, you +can declare it as either `Foo` or `const Foo&`, whichever is appropriate. + +As an example, let's improve the failure message in `MutuallyPrime()`, which was +used with `EXPECT_PRED2()`: + +```c++ +// Returns the smallest prime common divisor of m and n, +// or 1 when m and n are mutually prime. +int SmallestPrimeCommonDivisor(int m, int n) { ... } + +// A predicate-formatter for asserting that two integers are mutually prime. +::testing::AssertionResult AssertMutuallyPrime(const char* m_expr, + const char* n_expr, + int m, + int n) { + if (MutuallyPrime(m, n)) return ::testing::AssertionSuccess(); + + return ::testing::AssertionFailure() << m_expr << " and " << n_expr + << " (" << m << " and " << n << ") are not mutually prime, " + << "as they have a common divisor " << SmallestPrimeCommonDivisor(m, n); +} +``` + +With this predicate-formatter, we can use + +```c++ + EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c); +``` + +to generate the message + +```none +b and c (4 and 10) are not mutually prime, as they have a common divisor 2. +``` + +As you may have realized, many of the built-in assertions we introduced earlier +are special cases of `(EXPECT|ASSERT)_PRED_FORMAT*`. In fact, most of them are +indeed defined using `(EXPECT|ASSERT)_PRED_FORMAT*`. + +### Floating-Point Comparison + +Comparing floating-point numbers is tricky. Due to round-off errors, it is very +unlikely that two floating-points will match exactly. Therefore, `ASSERT_EQ` 's +naive comparison usually doesn't work. And since floating-points can have a wide +value range, no single fixed error bound works. It's better to compare by a +fixed relative error bound, except for values close to 0 due to the loss of +precision there. + +In general, for floating-point comparison to make sense, the user needs to +carefully choose the error bound. If they don't want or care to, comparing in +terms of Units in the Last Place (ULPs) is a good default, and googletest +provides assertions to do this. Full details about ULPs are quite long; if you +want to learn more, see +[here](https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/). + +#### Floating-Point Macros + +<!-- mdformat off(github rendering does not support multiline tables) --> + +| Fatal assertion | Nonfatal assertion | Verifies | +| ------------------------------- | ------------------------------- | ---------------------------------------- | +| `ASSERT_FLOAT_EQ(val1, val2);` | `EXPECT_FLOAT_EQ(val1, val2);` | the two `float` values are almost equal | +| `ASSERT_DOUBLE_EQ(val1, val2);` | `EXPECT_DOUBLE_EQ(val1, val2);` | the two `double` values are almost equal | + +<!-- mdformat on--> + +By "almost equal" we mean the values are within 4 ULP's from each other. 
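For instance (a quick sketch; the literals are arbitrary), the classic `0.1 + 0.2` sum differs from `0.3` by exactly one ULP in IEEE-754 `double` arithmetic, so an exact comparison fails while the ULP-based macro passes:

```c++
#include "gtest/gtest.h"

TEST(FloatingPointTest, UlpComparison) {
  double sum = 0.1 + 0.2;      // actually 0.30000000000000004...
  // EXPECT_EQ(sum, 0.3);      // would fail: the doubles differ by 1 ULP
  EXPECT_DOUBLE_EQ(sum, 0.3);  // passes: within the 4-ULP tolerance
}
```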
+ +The following assertions allow you to choose the acceptable error bound: + +<!-- mdformat off(github rendering does not support multiline tables) --> + +| Fatal assertion | Nonfatal assertion | Verifies | +| ------------------------------------- | ------------------------------------- | -------------------------------------------------------------------------------- | +| `ASSERT_NEAR(val1, val2, abs_error);` | `EXPECT_NEAR(val1, val2, abs_error);` | the difference between `val1` and `val2` doesn't exceed the given absolute error | + +<!-- mdformat on--> + +#### Floating-Point Predicate-Format Functions + +Some floating-point operations are useful, but not that often used. In order to +avoid an explosion of new macros, we provide them as predicate-format functions +that can be used in predicate assertion macros (e.g. `EXPECT_PRED_FORMAT2`, +etc). + +```c++ +EXPECT_PRED_FORMAT2(::testing::FloatLE, val1, val2); +EXPECT_PRED_FORMAT2(::testing::DoubleLE, val1, val2); +``` + +Verifies that `val1` is less than, or almost equal to, `val2`. You can replace +`EXPECT_PRED_FORMAT2` in the above table with `ASSERT_PRED_FORMAT2`. + +### Asserting Using gMock Matchers + +[gMock](../../googlemock) comes with a library of matchers for validating +arguments passed to mock objects. A gMock *matcher* is basically a predicate +that knows how to describe itself. It can be used in these assertion macros: + +<!-- mdformat off(github rendering does not support multiline tables) --> + +| Fatal assertion | Nonfatal assertion | Verifies | +| ------------------------------ | ------------------------------ | --------------------- | +| `ASSERT_THAT(value, matcher);` | `EXPECT_THAT(value, matcher);` | value matches matcher | + +<!-- mdformat on--> + +For example, `StartsWith(prefix)` is a matcher that matches a string starting +with `prefix`, and you can write: + +```c++ +using ::testing::StartsWith; +... + // Verifies that Foo() returns a string starting with "Hello". + EXPECT_THAT(Foo(), StartsWith("Hello")); +``` + +Read this +[recipe](../../googlemock/docs/cook_book.md#using-matchers-in-googletest-assertions) +in the gMock Cookbook for more details. + +gMock has a rich set of matchers. You can do many things googletest cannot do +alone with them. For a list of matchers gMock provides, read +[this](../../googlemock/docs/cook_book.md##using-matchers). It's easy to write +your [own matchers](../../googlemock/docs/cook_book.md#NewMatchers) too. + +gMock is bundled with googletest, so you don't need to add any build dependency +in order to take advantage of this. Just include `"testing/base/public/gmock.h"` +and you're ready to go. + +### More String Assertions + +(Please read the [previous](#asserting-using-gmock-matchers) section first if +you haven't.) + +You can use the gMock +[string matchers](../../googlemock/docs/cheat_sheet.md#string-matchers) with +`EXPECT_THAT()` or `ASSERT_THAT()` to do more string comparison tricks +(sub-string, prefix, suffix, regular expression, and etc). For example, + +```c++ +using ::testing::HasSubstr; +using ::testing::MatchesRegex; +... 
+ ASSERT_THAT(foo_string, HasSubstr("needle")); + EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+")); +``` + +If the string contains a well-formed HTML or XML document, you can check whether +its DOM tree matches an +[XPath expression](http://www.w3.org/TR/xpath/#contents): + +```c++ +// Currently still in //template/prototemplate/testing:xpath_matcher +#include "template/prototemplate/testing/xpath_matcher.h" +using prototemplate::testing::MatchesXPath; +EXPECT_THAT(html_string, MatchesXPath("//a[text()='click here']")); +``` + +### Windows HRESULT assertions + +These assertions test for `HRESULT` success or failure. + +Fatal assertion | Nonfatal assertion | Verifies +-------------------------------------- | -------------------------------------- | -------- +`ASSERT_HRESULT_SUCCEEDED(expression)` | `EXPECT_HRESULT_SUCCEEDED(expression)` | `expression` is a success `HRESULT` +`ASSERT_HRESULT_FAILED(expression)` | `EXPECT_HRESULT_FAILED(expression)` | `expression` is a failure `HRESULT` + +The generated output contains the human-readable error message associated with +the `HRESULT` code returned by `expression`. + +You might use them like this: + +```c++ +CComPtr<IShellDispatch2> shell; +ASSERT_HRESULT_SUCCEEDED(shell.CoCreateInstance(L"Shell.Application")); +CComVariant empty; +ASSERT_HRESULT_SUCCEEDED(shell->ShellExecute(CComBSTR(url), empty, empty, empty, empty)); +``` + +### Type Assertions + +You can call the function + +```c++ +::testing::StaticAssertTypeEq<T1, T2>(); +``` + +to assert that types `T1` and `T2` are the same. The function does nothing if +the assertion is satisfied. If the types are different, the function call will +fail to compile, the compiler error message will say that +`type1 and type2 are not the same type` and most likely (depending on the compiler) +show you the actual values of `T1` and `T2`. This is mainly useful inside +template code. + +**Caveat**: When used inside a member function of a class template or a function +template, `StaticAssertTypeEq<T1, T2>()` is effective only if the function is +instantiated. For example, given: + +```c++ +template <typename T> class Foo { + public: + void Bar() { ::testing::StaticAssertTypeEq<int, T>(); } +}; +``` + +the code: + +```c++ +void Test1() { Foo<bool> foo; } +``` + +will not generate a compiler error, as `Foo<bool>::Bar()` is never actually +instantiated. Instead, you need: + +```c++ +void Test2() { Foo<bool> foo; foo.Bar(); } +``` + +to cause a compiler error. + +### Assertion Placement + +You can use assertions in any C++ function. In particular, it doesn't have to be +a method of the test fixture class. The one constraint is that assertions that +generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in +void-returning functions. This is a consequence of Google's not using +exceptions. By placing it in a non-void function you'll get a confusing compile +error like `"error: void value not ignored as it ought to be"` or `"cannot +initialize return object of type 'bool' with an rvalue of type 'void'"` or +`"error: no viable conversion from 'void' to 'string'"`. + +If you need to use fatal assertions in a function that returns non-void, one +option is to make the function return the value in an out parameter instead. For +example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You +need to make sure that `*result` contains some sensible value even when the +function returns prematurely. As the function now returns `void`, you can use +any assertion inside of it. 
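For instance, here is a sketch of that rewrite with hypothetical names (`ParseInt` is not a googletest API): the result travels through an out parameter, the function becomes void-returning, and fatal assertions become legal inside it.

```c++
#include <string>

#include "gtest/gtest.h"

// Before: int ParseInt(const std::string& s);  // ASSERT_* not allowed here
// After: void-returning, so fatal assertions may be used freely.
void ParseInt(const std::string& s, int* result) {
  *result = 0;  // keep *result sensible even if we return early
  ASSERT_FALSE(s.empty()) << "cannot parse an empty string";
  *result = std::stoi(s);
}

TEST(ParseIntTest, ParsesDecimal) {
  int value = 0;
  ParseInt("42", &value);
  EXPECT_EQ(value, 42);
}
```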
+ +If changing the function's type is not an option, you should just use assertions +that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`. + +NOTE: Constructors and destructors are not considered void-returning functions, +according to the C++ language specification, and so you may not use fatal +assertions in them; you'll get a compilation error if you try. Instead, either +call `abort` and crash the entire test executable, or put the fatal assertion in +a `SetUp`/`TearDown` function; see +[constructor/destructor vs. `SetUp`/`TearDown`](faq.md#CtorVsSetUp). + +WARNING: A fatal assertion in a helper function (private void-returning method) +called from a constructor or destructor does not terminate the current +test, as your intuition might suggest: it merely returns from the constructor or +destructor early, possibly leaving your object in a partially-constructed or +partially-destructed state! You almost certainly want to `abort` or use +`SetUp`/`TearDown` instead. + +## Teaching googletest How to Print Your Values + +When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument +values to help you debug. It does this using a user-extensible value printer. + +This printer knows how to print built-in C++ types, native arrays, STL +containers, and any type that supports the `<<` operator. For other types, it +prints the raw bytes in the value and hopes that you, the user, can figure it out. + +As mentioned earlier, the printer is *extensible*. That means you can teach it +to do a better job at printing your particular type than to dump the bytes. To +do that, define `<<` for your type: + +```c++ +#include <ostream> + +namespace foo { + +class Bar { // We want googletest to be able to print instances of this. +... + // Create a free inline friend function. + friend std::ostream& operator<<(std::ostream& os, const Bar& bar) { + return os << bar.DebugString(); // whatever needed to print bar to os + } +}; + +// If you can't declare the function in the class, it's important that the +// << operator is defined in the SAME namespace that defines Bar. C++'s look-up +// rules rely on that. +std::ostream& operator<<(std::ostream& os, const Bar& bar) { + return os << bar.DebugString(); // whatever needed to print bar to os +} + +} // namespace foo +``` + +Sometimes, this might not be an option: your team may consider it bad style to +have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that +doesn't do what you want (and you cannot change it). If so, you can instead +define a `PrintTo()` function like this: + +```c++ +#include <ostream> + +namespace foo { + +class Bar { + ... + friend void PrintTo(const Bar& bar, std::ostream* os) { + *os << bar.DebugString(); // whatever needed to print bar to os + } +}; + +// If you can't declare the function in the class, it's important that PrintTo() +// is defined in the SAME namespace that defines Bar. C++'s look-up rules rely +// on that. +void PrintTo(const Bar& bar, std::ostream* os) { + *os << bar.DebugString(); // whatever needed to print bar to os +} + +} // namespace foo +``` + +If you have defined both `<<` and `PrintTo()`, the latter will be used where +googletest is concerned. This allows you to customize how the value appears in +googletest's output without affecting code that relies on the behavior of its +`<<` operator.
+ +If you want to print a value `x` using googletest's value printer yourself, just +call `::testing::PrintToString(x)`, which returns an `std::string`: + +```c++ +vector<pair<Bar, int> > bar_ints = GetBarIntVector(); + +EXPECT_TRUE(IsCorrectBarIntVector(bar_ints)) + << "bar_ints = " << ::testing::PrintToString(bar_ints); +``` + +## Death Tests + +In many applications, there are assertions that can cause application failure if +a condition is not met. These sanity checks, which ensure that the program is in +a known good state, are there to fail at the earliest possible time after some +program state is corrupted. If the assertion checks the wrong condition, then +the program may proceed in an erroneous state, which could lead to memory +corruption, security holes, or worse. Hence it is vitally important to test that +such assertion statements work as expected. + +Since these precondition checks cause the processes to die, we call such tests +_death tests_. More generally, any test that checks that a program terminates +(except by throwing an exception) in an expected fashion is also a death test. + +Note that if a piece of code throws an exception, we don't consider it "death" +for the purpose of death tests, as the caller of the code could catch the +exception and avoid the crash. If you want to verify exceptions thrown by your +code, see [Exception Assertions](#ExceptionAssertions). + +If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see +Catching Failures + +### How to Write a Death Test + +googletest has the following macros to support death tests: + +Fatal assertion | Nonfatal assertion | Verifies +------------------------------------------------ | ------------------------------------------------ | -------- +`ASSERT_DEATH(statement, matcher);` | `EXPECT_DEATH(statement, matcher);` | `statement` crashes with the given error +`ASSERT_DEATH_IF_SUPPORTED(statement, matcher);` | `EXPECT_DEATH_IF_SUPPORTED(statement, matcher);` | if death tests are supported, verifies that `statement` crashes with the given error; otherwise verifies nothing +`ASSERT_EXIT(statement, predicate, matcher);` | `EXPECT_EXIT(statement, predicate, matcher);` | `statement` exits with the given error, and its exit code matches `predicate` + +where `statement` is a statement that is expected to cause the process to die, +`predicate` is a function or function object that evaluates an integer exit +status, and `matcher` is either a GMock matcher matching a `const std::string&` +or a (Perl) regular expression - either of which is matched against the stderr +output of `statement`. For legacy reasons, a bare string (i.e. with no matcher) +is interpreted as `ContainsRegex(str)`, **not** `Eq(str)`. Note that `statement` +can be *any valid statement* (including *compound statement*) and doesn't have +to be an expression. + +As usual, the `ASSERT` variants abort the current test function, while the +`EXPECT` variants do not. + +> NOTE: We use the word "crash" here to mean that the process terminates with a +> *non-zero* exit status code. There are two possibilities: either the process +> has called `exit()` or `_exit()` with a non-zero value, or it may be killed by +> a signal. +> +> This means that if `*statement*` terminates the process with a 0 exit code, it +> is *not* considered a crash by `EXPECT_DEATH`. Use `EXPECT_EXIT` instead if +> this is the case, or if you want to restrict the exit code more precisely. + +A predicate here must accept an `int` and return a `bool`. 
The death test +succeeds only if the predicate returns `true`. googletest defines a few +predicates that handle the most common cases: + +```c++ +::testing::ExitedWithCode(exit_code) +``` + +This expression is `true` if the program exited normally with the given exit +code. + +```c++ +::testing::KilledBySignal(signal_number) // Not available on Windows. +``` + +This expression is `true` if the program was killed by the given signal. + +The `*_DEATH` macros are convenient wrappers for `*_EXIT` that use a predicate +that verifies the process' exit code is non-zero. + +Note that a death test only cares about three things: + +1. does `statement` abort or exit the process? +2. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status + satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`) + is the exit status non-zero? And +3. does the stderr output match `regex`? + +In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it +will **not** cause the death test to fail, as googletest assertions don't abort +the process. + +To write a death test, simply use one of the above macros inside your test +function. For example, + +```c++ +TEST(MyDeathTest, Foo) { + // This death test uses a compound statement. + ASSERT_DEATH({ + int n = 5; + Foo(&n); + }, "Error on line .* of Foo()"); +} + +TEST(MyDeathTest, NormalExit) { + EXPECT_EXIT(NormalExit(), ::testing::ExitedWithCode(0), "Success"); +} + +TEST(MyDeathTest, KillMyself) { + EXPECT_EXIT(KillMyself(), ::testing::KilledBySignal(SIGKILL), + "Sending myself unblockable signal"); +} +``` + +verifies that: + +* calling `Foo(5)` causes the process to die with the given error message, +* calling `NormalExit()` causes the process to print `"Success"` to stderr and + exit with exit code 0, and +* calling `KillMyself()` kills the process with signal `SIGKILL`. + +The test function body may contain other assertions and statements as well, if +necessary. + +### Death Test Naming + +IMPORTANT: We strongly recommend you to follow the convention of naming your +**test suite** (not test) `*DeathTest` when it contains a death test, as +demonstrated in the above example. The +[Death Tests And Threads](#death-tests-and-threads) section below explains why. + +If a test fixture class is shared by normal tests and death tests, you can use +`using` or `typedef` to introduce an alias for the fixture class and avoid +duplicating its code: + +```c++ +class FooTest : public ::testing::Test { ... }; + +using FooDeathTest = FooTest; + +TEST_F(FooTest, DoesThis) { + // normal test +} + +TEST_F(FooDeathTest, DoesThat) { + // death test +} +``` + +### Regular Expression Syntax + +On POSIX systems (e.g. Linux, Cygwin, and Mac), googletest uses the +[POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04) +syntax. To learn about this syntax, you may want to read this +[Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions). + +On Windows, googletest uses its own simple regular expression implementation. It +lacks many features. For example, we don't support union (`"x|y"`), grouping +(`"(xy)"`), brackets (`"[xy]"`), and repetition count (`"x{5,7}"`), among +others. 
Below is what we do support (`A` denotes a literal character, period +(`.`), or a single `\\` escape sequence; `x` and `y` denote regular +expressions): + +Expression | Meaning +---------- | -------------------------------------------------------------- +`c` | matches any literal character `c` +`\\d` | matches any decimal digit +`\\D` | matches any character that's not a decimal digit +`\\f` | matches `\f` +`\\n` | matches `\n` +`\\r` | matches `\r` +`\\s` | matches any ASCII whitespace, including `\n` +`\\S` | matches any character that's not a whitespace +`\\t` | matches `\t` +`\\v` | matches `\v` +`\\w` | matches any letter, `_`, or decimal digit +`\\W` | matches any character that `\\w` doesn't match +`\\c` | matches any literal character `c`, which must be a punctuation +`.` | matches any single character except `\n` +`A?` | matches 0 or 1 occurrences of `A` +`A*` | matches 0 or many occurrences of `A` +`A+` | matches 1 or many occurrences of `A` +`^` | matches the beginning of a string (not that of each line) +`$` | matches the end of a string (not that of each line) +`xy` | matches `x` followed by `y` + +To help you determine which capability is available on your system, googletest +defines macros to govern which regular expression it is using. The macros are: +`GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death +tests to work in all cases, you can either `#if` on these macros or use the more +limited syntax only. + +### How It Works + +Under the hood, `ASSERT_EXIT()` spawns a new process and executes the death test +statement in that process. The details of how precisely that happens depend on +the platform and the variable `::testing::GTEST_FLAG(death_test_style)` (which is +initialized from the command-line flag `--gtest_death_test_style`). + +* On POSIX systems, `fork()` (or `clone()` on Linux) is used to spawn the + child, after which: + * If the variable's value is `"fast"`, the death test statement is + immediately executed. + * If the variable's value is `"threadsafe"`, the child process re-executes + the unit test binary just as it was originally invoked, but with some + extra flags to cause just the single death test under consideration to + be run. +* On Windows, the child is spawned using the `CreateProcess()` API, and + re-executes the binary to cause just the single death test under + consideration to be run - much like the `threadsafe` mode on POSIX. + +Other values for the variable are illegal and will cause the death test to fail. +Currently, the flag's default value is **"fast"**. + +In either case, the parent process waits for the child process to finish, and checks that + +1. the child's exit status satisfies the predicate, and +2. the child's stderr matches the regular expression. + +If the death test statement runs to completion without dying, the child process +will nonetheless terminate, and the assertion fails. + +### Death Tests And Threads + +The reason for the two death test styles has to do with thread safety. Due to +well-known problems with forking in the presence of threads, death tests should +be run in a single-threaded context. Sometimes, however, it isn't feasible to +arrange that kind of environment. For example, statically-initialized modules +may start threads before main is ever reached. Once threads have been created, +it may be difficult or impossible to clean them up. + +googletest has three features intended to raise awareness of threading issues. + +1. A warning is emitted if multiple threads are running when a death test is + encountered. +2.
Test suites with a name ending in "DeathTest" are run before all other + tests. +3. It uses `clone()` instead of `fork()` to spawn the child process on Linux + (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely + to cause the child to hang when the parent process has multiple threads. + +It's perfectly fine to create threads inside a death test statement; they are +executed in a separate process and cannot affect the parent. + +### Death Test Styles + +The "threadsafe" death test style was introduced in order to help mitigate the +risks of testing in a possibly multithreaded environment. It trades increased +test execution time (potentially dramatically so) for improved thread safety. + +The automated testing framework does not set the style flag. You can choose a +particular style of death tests by setting the flag programmatically: + +```c++ +testing::FLAGS_gtest_death_test_style="threadsafe" +``` + +You can do this in `main()` to set the style for all death tests in the binary, +or in individual tests. Recall that flags are saved before running each test and +restored afterwards, so you need not do that yourself. For example: + +```c++ +int main(int argc, char** argv) { + InitGoogle(argv[0], &argc, &argv, true); + ::testing::FLAGS_gtest_death_test_style = "fast"; + return RUN_ALL_TESTS(); +} + +TEST(MyDeathTest, TestOne) { + ::testing::FLAGS_gtest_death_test_style = "threadsafe"; + // This test is run in the "threadsafe" style: + ASSERT_DEATH(ThisShouldDie(), ""); +} + +TEST(MyDeathTest, TestTwo) { + // This test is run in the "fast" style: + ASSERT_DEATH(ThisShouldDie(), ""); +} +``` + +### Caveats + +The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If +it leaves the current function via a `return` statement or by throwing an +exception, the death test is considered to have failed. Some googletest macros +may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid +them in `statement`. + +Since `statement` runs in the child process, any in-memory side effect (e.g. +modifying a variable, releasing memory, etc) it causes will *not* be observable +in the parent process. In particular, if you release memory in a death test, +your program will fail the heap check as the parent process will never see the +memory reclaimed. To solve this problem, you can + +1. try not to free memory in a death test; +2. free the memory again in the parent process; or +3. do not use the heap checker in your program. + +Due to an implementation detail, you cannot place multiple death test assertions +on the same line; otherwise, compilation will fail with an unobvious error +message. + +Despite the improved thread safety afforded by the "threadsafe" style of death +test, thread problems such as deadlock are still possible in the presence of +handlers registered with `pthread_atfork(3)`. + + +## Using Assertions in Sub-routines + +### Adding Traces to Assertions + +If a test sub-routine is called from several places, when an assertion inside it +fails, it can be hard to tell which invocation of the sub-routine the failure is +from. You can alleviate this problem using extra logging or custom failure +messages, but that usually clutters up your tests. A better solution is to use +the `SCOPED_TRACE` macro or the `ScopedTrace` utility: + +```c++ +SCOPED_TRACE(message); +ScopedTrace trace("file_path", line_number, message); +``` + +where `message` can be anything streamable to `std::ostream`. 
The `SCOPED_TRACE` +macro will cause the current file name, line number, and the given message to be +added to every failure message. `ScopedTrace` accepts explicit file name and +line number in arguments, which is useful for writing test helpers. The effect +will be undone when the control leaves the current lexical scope. + +For example, + +```c++ +10: void Sub1(int n) { +11: EXPECT_EQ(Bar(n), 1); +12: EXPECT_EQ(Bar(n + 1), 2); +13: } +14: +15: TEST(FooTest, Bar) { +16: { +17: SCOPED_TRACE("A"); // This trace point will be included in +18: // every failure in this scope. +19: Sub1(1); +20: } +21: // Now it won't. +22: Sub1(9); +23: } +``` + +could result in messages like these: + +```none +path/to/foo_test.cc:11: Failure +Value of: Bar(n) +Expected: 1 + Actual: 2 + Trace: +path/to/foo_test.cc:17: A + +path/to/foo_test.cc:12: Failure +Value of: Bar(n + 1) +Expected: 2 + Actual: 3 +``` + +Without the trace, it would've been difficult to know which invocation of +`Sub1()` the two failures come from respectively. (You could add an extra +message to each assertion in `Sub1()` to indicate the value of `n`, but that's +tedious.) + +Some tips on using `SCOPED_TRACE`: + +1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the + beginning of a sub-routine, instead of at each call site. +2. When calling sub-routines inside a loop, make the loop iterator part of the + message in `SCOPED_TRACE` such that you can know which iteration the failure + is from. +3. Sometimes the line number of the trace point is enough for identifying the + particular invocation of a sub-routine. In this case, you don't have to + choose a unique message for `SCOPED_TRACE`. You can simply use `""`. +4. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer + scope. In this case, all active trace points will be included in the failure + messages, in the reverse order in which they are encountered. +5. The trace dump is clickable in Emacs - hit `return` on a line number and + you'll be taken to that line in the source file! + +### Propagating Fatal Failures + +A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that +when they fail they only abort the _current function_, not the entire test. For +example, the following test will segfault: + +```c++ +void Subroutine() { + // Generates a fatal failure and aborts the current function. + ASSERT_EQ(1, 2); + + // The following won't be executed. + ... +} + +TEST(FooTest, Bar) { + Subroutine(); // The intended behavior is for the fatal failure + // in Subroutine() to abort the entire test. + + // The actual behavior: the function goes on after Subroutine() returns. + int* p = NULL; + *p = 3; // Segfault! +} +``` + +To alleviate this, googletest provides three different solutions. You could use +either exceptions, the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions, or the +`HasFatalFailure()` function. They are described in the following three +subsections. + +#### Asserting on Subroutines with an exception + +The following code can turn ASSERT-failure into an exception: + +```c++ +class ThrowListener : public testing::EmptyTestEventListener { + void OnTestPartResult(const testing::TestPartResult& result) override { + if (result.type() == testing::TestPartResult::kFatalFailure) { + throw testing::AssertionException(result); + } + } +}; +int main(int argc, char** argv) { + ...
+ testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener); + return RUN_ALL_TESTS(); +} +``` + +This listener should be added after other listeners if you have any, otherwise +they won't see failed `OnTestPartResult`. + +#### Asserting on Subroutines + +As shown above, if your test calls a subroutine that has an `ASSERT_*` failure +in it, the test will continue after the subroutine returns. This may not be what +you want. + +Often people want fatal failures to propagate like exceptions. For that +googletest offers the following macros: + +Fatal assertion | Nonfatal assertion | Verifies +------------------------------------- | ------------------------------------- | -------- +`ASSERT_NO_FATAL_FAILURE(statement);` | `EXPECT_NO_FATAL_FAILURE(statement);` | `statement` doesn't generate any new fatal failures in the current thread. + +Only failures in the thread that executes the assertion are checked to determine +the result of this type of assertions. If `statement` creates new threads, +failures in these threads are ignored. + +Examples: + +```c++ +ASSERT_NO_FATAL_FAILURE(Foo()); + +int i; +EXPECT_NO_FATAL_FAILURE({ + i = Bar(); +}); +``` + +Assertions from multiple threads are currently not supported on Windows. + +#### Checking for Failures in the Current Test + +`HasFatalFailure()` in the `::testing::Test` class returns `true` if an +assertion in the current test has suffered a fatal failure. This allows +functions to catch fatal failures in a sub-routine and return early. + +```c++ +class Test { + public: + ... + static bool HasFatalFailure(); +}; +``` + +The typical usage, which basically simulates the behavior of a thrown exception, +is: + +```c++ +TEST(FooTest, Bar) { + Subroutine(); + // Aborts if Subroutine() had a fatal failure. + if (HasFatalFailure()) return; + + // The following won't be executed. + ... +} +``` + +If `HasFatalFailure()` is used outside of `TEST()` , `TEST_F()` , or a test +fixture, you must add the `::testing::Test::` prefix, as in: + +```c++ +if (::testing::Test::HasFatalFailure()) return; +``` + +Similarly, `HasNonfatalFailure()` returns `true` if the current test has at +least one non-fatal failure, and `HasFailure()` returns `true` if the current +test has at least one failure of either kind. + +## Logging Additional Information + +In your test code, you can call `RecordProperty("key", value)` to log additional +information, where `value` can be either a string or an `int`. The *last* value +recorded for a key will be emitted to the +[XML output](#generating-an-xml-report) if you specify one. For example, the +test + +```c++ +TEST_F(WidgetUsageTest, MinAndMaxWidgets) { + RecordProperty("MaximumWidgets", ComputeMaxUsage()); + RecordProperty("MinimumWidgets", ComputeMinUsage()); +} +``` + +will output XML like this: + +```xml + ... + <testcase name="MinAndMaxWidgets" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" /> + ... +``` + +> NOTE: +> +> * `RecordProperty()` is a static member of the `Test` class. Therefore it +> needs to be prefixed with `::testing::Test::` if used outside of the +> `TEST` body and the test fixture class. +> * `*key*` must be a valid XML attribute name, and cannot conflict with the +> ones already used by googletest (`name`, `status`, `time`, `classname`, +> `type_param`, and `value_param`). +> * Calling `RecordProperty()` outside of the lifespan of a test is allowed. 
+> If it's called outside of a test but between a test suite's +> `SetUpTestSuite()` and `TearDownTestSuite()` methods, it will be +> attributed to the XML element for the test suite. If it's called outside +> of all test suites (e.g. in a test environment), it will be attributed to +> the top-level XML element. + +## Sharing Resources Between Tests in the Same Test Suite + +googletest creates a new test fixture object for each test in order to make +tests independent and easier to debug. However, sometimes tests use resources +that are expensive to set up, making the one-copy-per-test model prohibitively +expensive. + +If the tests don't change the resource, there's no harm in their sharing a +single resource copy. So, in addition to per-test set-up/tear-down, googletest +also supports per-test-suite set-up/tear-down. To use it: + +1. In your test fixture class (say `FooTest` ), declare as `static` some member + variables to hold the shared resources. +2. Outside your test fixture class (typically just below it), define those + member variables, optionally giving them initial values. +3. In the same test fixture class, define a `static void SetUpTestSuite()` + function (remember not to spell it as **`SetupTestSuite`** with a small + `u`!) to set up the shared resources and a `static void TearDownTestSuite()` + function to tear them down. + +That's it! googletest automatically calls `SetUpTestSuite()` before running the +*first test* in the `FooTest` test suite (i.e. before creating the first +`FooTest` object), and calls `TearDownTestSuite()` after running the *last test* +in it (i.e. after deleting the last `FooTest` object). In between, the tests can +use the shared resources. + +Remember that the test order is undefined, so your code can't depend on a test +preceding or following another. Also, the tests must either not modify the state +of any shared resource, or, if they do modify the state, they must restore the +state to its original value before passing control to the next test. + +Here's an example of per-test-suite set-up and tear-down: + +```c++ +class FooTest : public ::testing::Test { + protected: + // Per-test-suite set-up. + // Called before the first test in this test suite. + // Can be omitted if not needed. + static void SetUpTestSuite() { + shared_resource_ = new ...; + } + + // Per-test-suite tear-down. + // Called after the last test in this test suite. + // Can be omitted if not needed. + static void TearDownTestSuite() { + delete shared_resource_; + shared_resource_ = NULL; + } + + // You can define per-test set-up logic as usual. + virtual void SetUp() { ... } + + // You can define per-test tear-down logic as usual. + virtual void TearDown() { ... } + + // Some expensive resource shared by all tests. + static T* shared_resource_; +}; + +T* FooTest::shared_resource_ = NULL; + +TEST_F(FooTest, Test1) { + ... you can refer to shared_resource_ here ... +} + +TEST_F(FooTest, Test2) { + ... you can refer to shared_resource_ here ... +} +``` + +NOTE: Though the above code declares `SetUpTestSuite()` protected, it may +sometimes be necessary to declare it public, such as when using it with +`TEST_P`. + +## Global Set-Up and Tear-Down + +Just as you can do set-up and tear-down at the test level and the test suite +level, you can also do it at the test program level. Here's how. 
+ +First, you subclass the `::testing::Environment` class to define a test +environment, which knows how to set-up and tear-down: + +```c++ +class Environment : public ::testing::Environment { + public: + virtual ~Environment() {} + + // Override this to define how to set up the environment. + void SetUp() override {} + + // Override this to define how to tear down the environment. + void TearDown() override {} +}; +``` + +Then, you register an instance of your environment class with googletest by +calling the `::testing::AddGlobalTestEnvironment()` function: + +```c++ +Environment* AddGlobalTestEnvironment(Environment* env); +``` + +Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of +each environment object, then runs the tests if none of the environments +reported fatal failures and `GTEST_SKIP()` was not called. `RUN_ALL_TESTS()` +always calls `TearDown()` with each environment object, regardless of whether or +not the tests were run. + +It's OK to register multiple environment objects. In this suite, their `SetUp()` +will be called in the order they are registered, and their `TearDown()` will be +called in the reverse order. + +Note that googletest takes ownership of the registered environment objects. +Therefore **do not delete them** by yourself. + +You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called, +probably in `main()`. If you use `gtest_main`, you need to call this before +`main()` starts for it to take effect. One way to do this is to define a global +variable like this: + +```c++ +::testing::Environment* const foo_env = + ::testing::AddGlobalTestEnvironment(new FooEnvironment); +``` + +However, we strongly recommend you to write your own `main()` and call +`AddGlobalTestEnvironment()` there, as relying on initialization of global +variables makes the code harder to read and may cause problems when you register +multiple environments from different translation units and the environments have +dependencies among them (remember that the compiler doesn't guarantee the order +in which global variables from different translation units are initialized). + +## Value-Parameterized Tests + +*Value-parameterized tests* allow you to test your code with different +parameters without writing multiple copies of the same test. This is useful in a +number of situations, for example: + +* You have a piece of code whose behavior is affected by one or more + command-line flags. You want to make sure your code performs correctly for + various values of those flags. +* You want to test different implementations of an OO interface. +* You want to test your code over various inputs (a.k.a. data-driven testing). + This feature is easy to abuse, so please exercise your good sense when doing + it! + +### How to Write Value-Parameterized Tests + +To write value-parameterized tests, first you should define a fixture class. It +must be derived from both `testing::Test` and `testing::WithParamInterface<T>` +(the latter is a pure interface), where `T` is the type of your parameter +values. For convenience, you can just derive the fixture class from +`testing::TestWithParam<T>`, which itself is derived from both `testing::Test` +and `testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a +raw pointer, you are responsible for managing the lifespan of the pointed +values. + +NOTE: If your test fixture defines `SetUpTestSuite()` or `TearDownTestSuite()` +they must be declared **public** rather than **protected** in order to use +`TEST_P`. 
+ +```c++ +class FooTest : + public testing::TestWithParam<const char*> { + // You can implement all the usual fixture class members here. + // To access the test parameter, call GetParam() from class + // TestWithParam<T>. +}; + +// Or, when you want to add parameters to a pre-existing fixture class: +class BaseTest : public testing::Test { + ... +}; +class BarTest : public BaseTest, + public testing::WithParamInterface<const char*> { + ... +}; +``` + +Then, use the `TEST_P` macro to define as many test patterns using this fixture +as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you +prefer to think. + +```c++ +TEST_P(FooTest, DoesBlah) { + // Inside a test, access the test parameter with the GetParam() method + // of the TestWithParam<T> class: + EXPECT_TRUE(foo.Blah(GetParam())); + ... +} + +TEST_P(FooTest, HasBlahBlah) { + ... +} +``` + +Finally, you can use `INSTANTIATE_TEST_SUITE_P` to instantiate the test suite +with any set of parameters you want. googletest defines a number of functions +for generating test parameters. They return what we call (surprise!) *parameter +generators*. Here is a summary of them, which are all in the `testing` +namespace: + +<!-- mdformat off(github rendering does not support multiline tables) --> + +| Parameter Generator | Behavior | +| ----------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- | +| `Range(begin, end [, step])` | Yields values `{begin, begin+step, begin+step+step, ...}`. The values do not include `end`. `step` defaults to 1. | +| `Values(v1, v2, ..., vN)` | Yields values `{v1, v2, ..., vN}`. | +| `ValuesIn(container)` and `ValuesIn(begin,end)` | Yields values from a C-style array, an STL-style container, or an iterator range `[begin, end)` | +| `Bool()` | Yields sequence `{false, true}`. | +| `Combine(g1, g2, ..., gN)` | Yields all combinations (Cartesian product) as std\:\:tuples of the values generated by the `N` generators. | + +<!-- mdformat on--> + +For more details, see the comments at the definitions of these functions. + +The following statement will instantiate tests from the `FooTest` test suite +each with parameter values `"meeny"`, `"miny"`, and `"moe"`. + +```c++ +INSTANTIATE_TEST_SUITE_P(InstantiationName, + FooTest, + testing::Values("meeny", "miny", "moe")); +``` + +NOTE: The code above must be placed at global or namespace scope, not at +function scope. + +NOTE: Don't forget this step! If you do your test will silently pass, but none +of its suites will ever run! + +To distinguish different instances of the pattern (yes, you can instantiate it +more than once), the first argument to `INSTANTIATE_TEST_SUITE_P` is a prefix +that will be added to the actual test suite name. Remember to pick unique +prefixes for different instantiations. The tests from the instantiation above +will have these names: + +* `InstantiationName/FooTest.DoesBlah/0` for `"meeny"` +* `InstantiationName/FooTest.DoesBlah/1` for `"miny"` +* `InstantiationName/FooTest.DoesBlah/2` for `"moe"` +* `InstantiationName/FooTest.HasBlahBlah/0` for `"meeny"` +* `InstantiationName/FooTest.HasBlahBlah/1` for `"miny"` +* `InstantiationName/FooTest.HasBlahBlah/2` for `"moe"` + +You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests). 
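For example (a sketch; the binary name `footest` is hypothetical), running only the `"miny"` instance of `DoesBlah` from a shell could look like:

```none
$ ./footest --gtest_filter=InstantiationName/FooTest.DoesBlah/1
```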
+ +This statement will instantiate all tests from `FooTest` again, each with +parameter values `"cat"` and `"dog"`: + +```c++ +const char* pets[] = {"cat", "dog"}; +INSTANTIATE_TEST_SUITE_P(AnotherInstantiationName, FooTest, + testing::ValuesIn(pets)); +``` + +The tests from the instantiation above will have these names: + +* `AnotherInstantiationName/FooTest.DoesBlah/0` for `"cat"` +* `AnotherInstantiationName/FooTest.DoesBlah/1` for `"dog"` +* `AnotherInstantiationName/FooTest.HasBlahBlah/0` for `"cat"` +* `AnotherInstantiationName/FooTest.HasBlahBlah/1` for `"dog"` + +Please note that `INSTANTIATE_TEST_SUITE_P` will instantiate *all* tests in the +given test suite, whether their definitions come before or *after* the +`INSTANTIATE_TEST_SUITE_P` statement. + +You can see [sample7_unittest.cc] and [sample8_unittest.cc] for more examples. + +[sample7_unittest.cc]: ../samples/sample7_unittest.cc "Parameterized Test example" +[sample8_unittest.cc]: ../samples/sample8_unittest.cc "Parameterized Test example with multiple parameters" + +### Creating Value-Parameterized Abstract Tests + +In the above, we define and instantiate `FooTest` in the *same* source file. +Sometimes you may want to define value-parameterized tests in a library and let +other people instantiate them later. This pattern is known as *abstract tests*. +As an example of its application, when you are designing an interface you can +write a standard suite of abstract tests (perhaps using a factory function as +the test parameter) that all implementations of the interface are expected to +pass. When someone implements the interface, they can instantiate your suite to +get all the interface-conformance tests for free. + +To define abstract tests, you should organize your code like this: + +1. Put the definition of the parameterized test fixture class (e.g. `FooTest`) + in a header file, say `foo_param_test.h`. Think of this as *declaring* your + abstract tests. +2. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes + `foo_param_test.h`. Think of this as *implementing* your abstract tests. + +Once they are defined, you can instantiate them by including `foo_param_test.h`, +invoking `INSTANTIATE_TEST_SUITE_P()`, and depending on the library target that +contains `foo_param_test.cc`. You can instantiate the same abstract test suite +multiple times, possibly in different source files. + +### Specifying Names for Value-Parameterized Test Parameters + +The optional last argument to `INSTANTIATE_TEST_SUITE_P()` allows the user to +specify a function or functor that generates custom test name suffixes based on +the test parameters. The function should accept one argument of type +`testing::TestParamInfo<class ParamType>`, and return `std::string`. + +`testing::PrintToStringParamName` is a builtin test suffix generator that +returns the value of `testing::PrintToString(GetParam())`. It does not work for +`std::string` or C strings. + +NOTE: test names must be non-empty, unique, and may only contain ASCII +alphanumeric characters. 
In particular, they
+[should not contain underscores](faq.md#why-should-test-suite-names-and-test-names-not-contain-underscore).
+
+```c++
+class MyTestSuite : public testing::TestWithParam<int> {};
+
+TEST_P(MyTestSuite, MyTest) {
+  std::cout << "Example Test Param: " << GetParam() << std::endl;
+}
+
+INSTANTIATE_TEST_SUITE_P(MyGroup, MyTestSuite, testing::Range(0, 10),
+                         testing::PrintToStringParamName());
+```
+
+Providing a custom functor allows for more control over test parameter name
+generation, especially for types where the automatic conversion does not
+generate helpful parameter names (e.g. strings as demonstrated above). The
+following example illustrates this for multiple parameters, an enumeration
+type and a string, and also demonstrates how to combine generators. It uses a
+lambda for conciseness:
+
+```c++
+enum class MyType { MY_FOO = 0, MY_BAR = 1 };
+
+class MyTestSuite
+    : public testing::TestWithParam<std::tuple<MyType, std::string>> {};
+
+INSTANTIATE_TEST_SUITE_P(
+    MyGroup, MyTestSuite,
+    testing::Combine(
+        testing::Values(MyType::MY_FOO, MyType::MY_BAR),
+        testing::Values("A", "B")),
+    [](const testing::TestParamInfo<MyTestSuite::ParamType>& info) {
+      std::string name = absl::StrCat(
+          std::get<0>(info.param) == MyType::MY_FOO ? "Foo" : "Bar", "_",
+          std::get<1>(info.param));
+      absl::c_replace_if(name, [](char c) { return !std::isalnum(c); }, '_');
+      return name;
+    });
+```
+
+## Typed Tests
+
+Suppose you have multiple implementations of the same interface and want to
+make sure that all of them satisfy some common requirements. Or, you may have
+defined several types that are supposed to conform to the same "concept" and
+you want to verify it. In both cases, you want the same test logic repeated
+for different types.
+
+While you can write one `TEST` or `TEST_F` for each type you want to test (and
+you may even factor the test logic into a function template that you invoke
+from the `TEST`), it's tedious and doesn't scale: if you want `m` tests over
+`n` types, you'll end up writing `m*n` `TEST`s.
+
+*Typed tests* allow you to repeat the same test logic over a list of types.
+You only need to write the test logic once, although you must know the type
+list when writing typed tests. Here's how you do it:
+
+First, define a fixture class template. It should be parameterized by a type.
+Remember to derive it from `::testing::Test`:
+
+```c++
+template <typename T>
+class FooTest : public ::testing::Test {
+ public:
+  ...
+  typedef std::list<T> List;
+  static T shared_;
+  T value_;
+};
+```
+
+Next, associate a list of types with the test suite, which will be repeated
+for each type in the list:
+
+```c++
+using MyTypes = ::testing::Types<char, int, unsigned int>;
+TYPED_TEST_SUITE(FooTest, MyTypes);
+```
+
+The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_SUITE`
+macro to parse correctly. Otherwise the compiler will think that each comma in
+the type list introduces a new macro argument.
+
+Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this
+test suite. You can repeat this as many times as you want:
+
+```c++
+TYPED_TEST(FooTest, DoesBlah) {
+  // Inside a test, refer to the special name TypeParam to get the type
+  // parameter. Since we are inside a derived class template, C++ requires
+  // us to visit the members of FooTest via 'this'.
+  TypeParam n = this->value_;
+
+  // To visit static members of the fixture, add the 'TestFixture::'
+  // prefix.
+ n += TestFixture::shared_; + + // To refer to typedefs in the fixture, add the 'typename TestFixture::' + // prefix. The 'typename' is required to satisfy the compiler. + typename TestFixture::List values; + + values.push_back(n); + ... +} + +TYPED_TEST(FooTest, HasPropertyA) { ... } +``` + +You can see [sample6_unittest.cc] for a complete example. + +[sample6_unittest.cc]: ../samples/sample6_unittest.cc "Typed Test example" + +## Type-Parameterized Tests + +*Type-parameterized tests* are like typed tests, except that they don't require +you to know the list of types ahead of time. Instead, you can define the test +logic first and instantiate it with different type lists later. You can even +instantiate it more than once in the same program. + +If you are designing an interface or concept, you can define a suite of +type-parameterized tests to verify properties that any valid implementation of +the interface/concept should have. Then, the author of each implementation can +just instantiate the test suite with their type to verify that it conforms to +the requirements, without having to write similar tests repeatedly. Here's an +example: + +First, define a fixture class template, as we did with typed tests: + +```c++ +template <typename T> +class FooTest : public ::testing::Test { + ... +}; +``` + +Next, declare that you will define a type-parameterized test suite: + +```c++ +TYPED_TEST_SUITE_P(FooTest); +``` + +Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat +this as many times as you want: + +```c++ +TYPED_TEST_P(FooTest, DoesBlah) { + // Inside a test, refer to TypeParam to get the type parameter. + TypeParam n = 0; + ... +} + +TYPED_TEST_P(FooTest, HasPropertyA) { ... } +``` + +Now the tricky part: you need to register all test patterns using the +`REGISTER_TYPED_TEST_SUITE_P` macro before you can instantiate them. The first +argument of the macro is the test suite name; the rest are the names of the +tests in this test suite: + +```c++ +REGISTER_TYPED_TEST_SUITE_P(FooTest, + DoesBlah, HasPropertyA); +``` + +Finally, you are free to instantiate the pattern with the types you want. If you +put the above code in a header file, you can `#include` it in multiple C++ +source files and instantiate it multiple times. + +```c++ +typedef ::testing::Types<char, int, unsigned int> MyTypes; +INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, MyTypes); +``` + +To distinguish different instances of the pattern, the first argument to the +`INSTANTIATE_TYPED_TEST_SUITE_P` macro is a prefix that will be added to the +actual test suite name. Remember to pick unique prefixes for different +instances. + +In the special case where the type list contains only one type, you can write +that type directly without `::testing::Types<...>`, like this: + +```c++ +INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, int); +``` + +You can see [sample6_unittest.cc] for a complete example. + +## Testing Private Code + +If you change your software's internal implementation, your tests should not +break as long as the change is not observable by users. Therefore, **per the +black-box testing principle, most of the time you should test your code through +its public interfaces.** + +**If you still find yourself needing to test internal implementation code, +consider if there's a better design.** The desire to test internal +implementation is often a sign that the class is doing too much. Consider +extracting an implementation class, and testing it. Then use that implementation +class in the original class. 
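+
+As a sketch of that refactoring (all names here are hypothetical, not part of
+any googletest API):
+
+```c++
+#include <string>
+
+// Extracted implementation class: small, self-contained, and directly
+// testable with plain TEST()s in its own *_test.cc.
+class WidgetParser {
+ public:
+  int CountItems(const std::string& input) const {
+    // Counts comma-separated items; trivially unit-testable logic.
+    if (input.empty()) return 0;
+    int count = 1;
+    for (char c : input) {
+      if (c == ',') ++count;
+    }
+    return count;
+  }
+};
+
+// The original class keeps its public interface and delegates the
+// implementation detail to WidgetParser.
+class Widget {
+ public:
+  int CountItems(const std::string& input) const {
+    return parser_.CountItems(input);
+  }
+
+ private:
+  WidgetParser parser_;
+};
+```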
+ +If you absolutely have to test non-public interface code though, you can. There +are two cases to consider: + +* Static functions ( *not* the same as static member functions!) or unnamed + namespaces, and +* Private or protected class members + +To test them, we use the following special techniques: + +* Both static functions and definitions/declarations in an unnamed namespace + are only visible within the same translation unit. To test them, you can + `#include` the entire `.cc` file being tested in your `*_test.cc` file. + (#including `.cc` files is not a good way to reuse code - you should not do + this in production code!) + + However, a better approach is to move the private code into the + `foo::internal` namespace, where `foo` is the namespace your project + normally uses, and put the private declarations in a `*-internal.h` file. + Your production `.cc` files and your tests are allowed to include this + internal header, but your clients are not. This way, you can fully test your + internal implementation without leaking it to your clients. + +* Private class members are only accessible from within the class or by + friends. To access a class' private members, you can declare your test + fixture as a friend to the class and define accessors in your fixture. Tests + using the fixture can then access the private members of your production + class via the accessors in the fixture. Note that even though your fixture + is a friend to your production class, your tests are not automatically + friends to it, as they are technically defined in sub-classes of the + fixture. + + Another way to test private members is to refactor them into an + implementation class, which is then declared in a `*-internal.h` file. Your + clients aren't allowed to include this header but your tests can. Such is + called the + [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/) + (Private Implementation) idiom. + + Or, you can declare an individual test as a friend of your class by adding + this line in the class body: + + ```c++ + FRIEND_TEST(TestSuiteName, TestName); + ``` + + For example, + + ```c++ + // foo.h + class Foo { + ... + private: + FRIEND_TEST(FooTest, BarReturnsZeroOnNull); + + int Bar(void* x); + }; + + // foo_test.cc + ... + TEST(FooTest, BarReturnsZeroOnNull) { + Foo foo; + EXPECT_EQ(foo.Bar(NULL), 0); // Uses Foo's private member Bar(). + } + ``` + + Pay special attention when your class is defined in a namespace, as you + should define your test fixtures and tests in the same namespace if you want + them to be friends of your class. For example, if the code to be tested + looks like: + + ```c++ + namespace my_namespace { + + class Foo { + friend class FooTest; + FRIEND_TEST(FooTest, Bar); + FRIEND_TEST(FooTest, Baz); + ... definition of the class Foo ... + }; + + } // namespace my_namespace + ``` + + Your test code should be something like: + + ```c++ + namespace my_namespace { + + class FooTest : public ::testing::Test { + protected: + ... + }; + + TEST_F(FooTest, Bar) { ... } + TEST_F(FooTest, Baz) { ... } + + } // namespace my_namespace + ``` + +## "Catching" Failures + +If you are building a testing utility on top of googletest, you'll want to test +your utility. What framework would you use to test it? googletest, of course. + +The challenge is to verify that your testing utility reports failures correctly. +In frameworks that report a failure by throwing an exception, you could catch +the exception and assert on it. 
But googletest doesn't use exceptions, so how do
+we test that a piece of code generates an expected failure?
+
+`"gtest/gtest-spi.h"` contains some constructs to do this. After including
+this header, you can use
+
+```c++
+  EXPECT_FATAL_FAILURE(statement, substring);
+```
+
+to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the
+current thread whose message contains the given `substring`, or use
+
+```c++
+  EXPECT_NONFATAL_FAILURE(statement, substring);
+```
+
+if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.
+
+Only failures in the current thread are checked to determine the result of
+these expectations. If `statement` creates new threads, failures in these
+threads are also ignored. If you want to catch failures in other threads as
+well, use one of the following macros instead:
+
+```c++
+  EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
+  EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
+```
+
+NOTE: Assertions from multiple threads are currently not supported on Windows.
+
+For technical reasons, there are some caveats:
+
+1.  You cannot stream a failure message to either macro.
+
+2.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference
+    local non-static variables or non-static members of `this` object.
+
+3.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a
+    value.
+
+## Registering tests programmatically
+
+The `TEST` macros handle the vast majority of all use cases, but there are a
+few where runtime registration logic is required. For those cases, the
+framework provides the `::testing::RegisterTest` function, which allows
+callers to register arbitrary tests dynamically.
+
+This is an advanced API only to be used when the `TEST` macros are
+insufficient. The macros should be preferred when possible, as they avoid most
+of the complexity of calling this function.
+
+It provides the following signature:
+
+```c++
+template <typename Factory>
+TestInfo* RegisterTest(const char* test_suite_name, const char* test_name,
+                       const char* type_param, const char* value_param,
+                       const char* file, int line, Factory factory);
+```
+
+The `factory` argument is a factory callable (move-constructible) object or
+function pointer that creates a new instance of the Test object. It hands
+ownership of the created object to the caller. The signature of the callable
+is `Fixture*()`, where `Fixture` is the test fixture class for the test. All
+tests registered with the same `test_suite_name` must return the same fixture
+type. This is checked at runtime.
+
+The framework will infer the fixture class from the factory and will call the
+`SetUpTestSuite` and `TearDownTestSuite` methods for it.
+
+`RegisterTest` must be called before `RUN_ALL_TESTS()` is invoked; otherwise
+the behavior is undefined.
+
+Use case example:
+
+```c++
+class MyFixture : public ::testing::Test {
+ public:
+  // All of these are optional, just like in regular macro usage.
+  static void SetUpTestSuite() { ... }
+  static void TearDownTestSuite() { ... }
+  void SetUp() override { ... }
+  void TearDown() override { ... }
+};
+
+class MyTest : public MyFixture {
+ public:
+  explicit MyTest(int data) : data_(data) {}
+  void TestBody() override { ... }
+
+ private:
+  int data_;
+};
+
+void RegisterMyTests(const std::vector<int>& values) {
+  for (int v : values) {
+    ::testing::RegisterTest(
+        "MyFixture", ("Test" + std::to_string(v)).c_str(), nullptr,
+        std::to_string(v).c_str(),
+        __FILE__, __LINE__,
+        // Important to use the fixture type as the return type here.
+        [=]() -> MyFixture* { return new MyTest(v); });
+  }
+}
+...
+int main(int argc, char** argv) {
+  std::vector<int> values_to_test = LoadValuesFromConfig();
+  RegisterMyTests(values_to_test);
+  ...
+  return RUN_ALL_TESTS();
+}
+```
+
+## Getting the Current Test's Name
+
+Sometimes a function may need to know the name of the currently running test.
+For example, you may be using the `SetUp()` method of your test fixture to set
+the golden file name based on which test is running. The `::testing::TestInfo`
+class has this information:
+
+```c++
+namespace testing {
+
+class TestInfo {
+ public:
+  // Returns the test suite name and the test name, respectively.
+  //
+  // Do NOT delete or free the return value - it's managed by the
+  // TestInfo class.
+  const char* test_suite_name() const;
+  const char* name() const;
+};
+
+}  // namespace testing
+```
+
+To obtain a `TestInfo` object for the currently running test, call
+`current_test_info()` on the `UnitTest` singleton object:
+
+```c++
+  // Gets information about the currently running test.
+  // Do NOT delete the returned object - it's managed by the UnitTest class.
+  const ::testing::TestInfo* const test_info =
+      ::testing::UnitTest::GetInstance()->current_test_info();
+
+  printf("We are in test %s of test suite %s.\n",
+         test_info->name(),
+         test_info->test_suite_name());
+```
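+
+As a sketch of the golden-file use case mentioned above (the fixture name and
+the file-naming scheme are illustrative assumptions, not googletest API):
+
+```c++
+#include <string>
+#include "gtest/gtest.h"
+
+class GoldenTest : public ::testing::Test {
+ protected:
+  void SetUp() override {
+    const ::testing::TestInfo* const info =
+        ::testing::UnitTest::GetInstance()->current_test_info();
+    // Produces e.g. "goldens/GoldenTest.ProducesExpectedOutput.txt".
+    golden_path_ = std::string("goldens/") + info->test_suite_name() + "." +
+                   info->name() + ".txt";
+  }
+
+  std::string golden_path_;
+};
+```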
+
+`current_test_info()` returns a null pointer if no test is running. In
+particular, you cannot find the test suite name in `SetUpTestSuite()`,
+`TearDownTestSuite()` (where you know the test suite name implicitly), or
+functions called from them.
+
+## Extending googletest by Handling Test Events
+
+googletest provides an **event listener API** to let you receive notifications
+about the progress of a test program and test failures. The events you can
+listen to include the start and end of the test program, a test suite, or a
+test method, among others. You may use this API to augment or replace the
+standard console output, replace the XML output, or provide a completely
+different form of output, such as a GUI or a database. You can also use test
+events as checkpoints to implement a resource leak checker, for example.
+
+### Defining Event Listeners
+
+To define an event listener, you subclass either `testing::TestEventListener`
+or `testing::EmptyTestEventListener`. The former is an (abstract) interface,
+where *each pure virtual method can be overridden to handle a test event* (for
+example, when a test starts, the `OnTestStart()` method will be called). The
+latter provides an empty implementation of all methods in the interface, such
+that a subclass only needs to override the methods it cares about.
+
+When an event is fired, its context is passed to the handler function as an
+argument. The following argument types are used:
+
+*   `UnitTest` reflects the state of the entire test program,
+*   `TestSuite` has information about a test suite, which can contain one or
+    more tests,
+*   `TestInfo` contains the state of a test, and
+*   `TestPartResult` represents the result of a test assertion.
+
+An event handler function can examine the argument it receives to find out
+interesting information about the event and the test program's state.
+
+Here's an example:
+
+```c++
+  class MinimalistPrinter : public ::testing::EmptyTestEventListener {
+    // Called before a test starts.
+    virtual void OnTestStart(const ::testing::TestInfo& test_info) {
+      printf("*** Test %s.%s starting.\n",
+             test_info.test_suite_name(), test_info.name());
+    }
+
+    // Called after a failed assertion or a SUCCEED().
+ virtual void OnTestPartResult(const ::testing::TestPartResult& test_part_result) { + printf("%s in %s:%d\n%s\n", + test_part_result.failed() ? "*** Failure" : "Success", + test_part_result.file_name(), + test_part_result.line_number(), + test_part_result.summary()); + } + + // Called after a test ends. + virtual void OnTestEnd(const ::testing::TestInfo& test_info) { + printf("*** Test %s.%s ending.\n", + test_info.test_suite_name(), test_info.name()); + } + }; +``` + +### Using Event Listeners + +To use the event listener you have defined, add an instance of it to the +googletest event listener list (represented by class TestEventListeners - note +the "s" at the end of the name) in your `main()` function, before calling +`RUN_ALL_TESTS()`: + +```c++ +int main(int argc, char** argv) { + ::testing::InitGoogleTest(&argc, argv); + // Gets hold of the event listener list. + ::testing::TestEventListeners& listeners = + ::testing::UnitTest::GetInstance()->listeners(); + // Adds a listener to the end. googletest takes the ownership. + listeners.Append(new MinimalistPrinter); + return RUN_ALL_TESTS(); +} +``` + +There's only one problem: the default test result printer is still in effect, so +its output will mingle with the output from your minimalist printer. To suppress +the default printer, just release it from the event listener list and delete it. +You can do so by adding one line: + +```c++ + ... + delete listeners.Release(listeners.default_result_printer()); + listeners.Append(new MinimalistPrinter); + return RUN_ALL_TESTS(); +``` + +Now, sit back and enjoy a completely different output from your tests. For more +details, see [sample9_unittest.cc]. + +[sample9_unittest.cc]: ../samples/sample9_unittest.cc "Event listener example" + +You may append more than one listener to the list. When an `On*Start()` or +`OnTestPartResult()` event is fired, the listeners will receive it in the order +they appear in the list (since new listeners are added to the end of the list, +the default text printer and the default XML generator will receive the event +first). An `On*End()` event will be received by the listeners in the *reverse* +order. This allows output by listeners added later to be framed by output from +listeners added earlier. + +### Generating Failures in Listeners + +You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc) +when processing an event. There are some restrictions: + +1. You cannot generate any failure in `OnTestPartResult()` (otherwise it will + cause `OnTestPartResult()` to be called recursively). +2. A listener that handles `OnTestPartResult()` is not allowed to generate any + failure. + +When you add listeners to the listener list, you should put listeners that +handle `OnTestPartResult()` *before* listeners that can generate failures. This +ensures that failures generated by the latter are attributed to the right test +by the former. + +See [sample10_unittest.cc] for an example of a failure-raising listener. + +[sample10_unittest.cc]: ../samples/sample10_unittest.cc "Failure-raising listener example" + +## Running Test Programs: Advanced Options + +googletest test programs are ordinary executables. Once built, you can run them +directly and affect their behavior via the following environment variables +and/or command line flags. For the flags to work, your programs must call +`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`. + +To see a list of supported flags and their usage, please run your test program +with the `--help` flag. 
You can also use `-h`, `-?`, or `/?` for short. + +If an option is specified both by an environment variable and by a flag, the +latter takes precedence. + +### Selecting Tests + +#### Listing Test Names + +Sometimes it is necessary to list the available tests in a program before +running them so that a filter may be applied if needed. Including the flag +`--gtest_list_tests` overrides all other flags and lists tests in the following +format: + +```none +TestSuite1. + TestName1 + TestName2 +TestSuite2. + TestName +``` + +None of the tests listed are actually run if the flag is provided. There is no +corresponding environment variable for this flag. + +#### Running a Subset of the Tests + +By default, a googletest program runs all tests the user has defined. Sometimes, +you want to run only a subset of the tests (e.g. for debugging or quickly +verifying a change). If you set the `GTEST_FILTER` environment variable or the +`--gtest_filter` flag to a filter string, googletest will only run the tests +whose full names (in the form of `TestSuiteName.TestName`) match the filter. + +The format of a filter is a '`:`'-separated list of wildcard patterns (called +the *positive patterns*) optionally followed by a '`-`' and another +'`:`'-separated pattern list (called the *negative patterns*). A test matches +the filter if and only if it matches any of the positive patterns but does not +match any of the negative patterns. + +A pattern may contain `'*'` (matches any string) or `'?'` (matches any single +character). For convenience, the filter `'*-NegativePatterns'` can be also +written as `'-NegativePatterns'`. + +For example: + +* `./foo_test` Has no flag, and thus runs all its tests. +* `./foo_test --gtest_filter=*` Also runs everything, due to the single + match-everything `*` value. +* `./foo_test --gtest_filter=FooTest.*` Runs everything in test suite + `FooTest` . +* `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full + name contains either `"Null"` or `"Constructor"` . +* `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests. +* `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test + suite `FooTest` except `FooTest.Bar`. +* `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs + everything in test suite `FooTest` except `FooTest.Bar` and everything in + test suite `BarTest` except `BarTest.Foo`. + +#### Temporarily Disabling Tests + +If you have a broken test that you cannot fix right away, you can add the +`DISABLED_` prefix to its name. This will exclude it from execution. This is +better than commenting out the code or using `#if 0`, as disabled tests are +still compiled (and thus won't rot). + +If you need to disable all tests in a test suite, you can either add `DISABLED_` +to the front of the name of each test, or alternatively add it to the front of +the test suite name. + +For example, the following tests won't be run by googletest, even though they +will still be compiled: + +```c++ +// Tests that Foo does Abc. +TEST(FooTest, DISABLED_DoesAbc) { ... } + +class DISABLED_BarTest : public ::testing::Test { ... }; + +// Tests that Bar does Xyz. +TEST_F(DISABLED_BarTest, DoesXyz) { ... } +``` + +NOTE: This feature should only be used for temporary pain-relief. You still have +to fix the disabled tests at a later date. As a reminder, googletest will print +a banner warning you if a test program contains any disabled tests. 
+ +TIP: You can easily count the number of disabled tests you have using `gsearch` +and/or `grep`. This number can be used as a metric for improving your test +quality. + +#### Temporarily Enabling Disabled Tests + +To include disabled tests in test execution, just invoke the test program with +the `--gtest_also_run_disabled_tests` flag or set the +`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`. +You can combine this with the `--gtest_filter` flag to further select which +disabled tests to run. + +### Repeating the Tests + +Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it +will fail only 1% of the time, making it rather hard to reproduce the bug under +a debugger. This can be a major source of frustration. + +The `--gtest_repeat` flag allows you to repeat all (or selected) test methods in +a program many times. Hopefully, a flaky test will eventually fail and give you +a chance to debug. Here's how to use it: + +```none +$ foo_test --gtest_repeat=1000 +Repeat foo_test 1000 times and don't stop at failures. + +$ foo_test --gtest_repeat=-1 +A negative count means repeating forever. + +$ foo_test --gtest_repeat=1000 --gtest_break_on_failure +Repeat foo_test 1000 times, stopping at the first failure. This +is especially useful when running under a debugger: when the test +fails, it will drop into the debugger and you can then inspect +variables and stacks. + +$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.* +Repeat the tests whose name matches the filter 1000 times. +``` + +If your test program contains +[global set-up/tear-down](#global-set-up-and-tear-down) code, it will be +repeated in each iteration as well, as the flakiness may be in it. You can also +specify the repeat count by setting the `GTEST_REPEAT` environment variable. + +### Shuffling the Tests + +You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE` +environment variable to `1`) to run the tests in a program in a random order. +This helps to reveal bad dependencies between tests. + +By default, googletest uses a random seed calculated from the current time. +Therefore you'll get a different order every time. The console output includes +the random seed value, such that you can reproduce an order-related test failure +later. To specify the random seed explicitly, use the `--gtest_random_seed=SEED` +flag (or set the `GTEST_RANDOM_SEED` environment variable), where `SEED` is an +integer in the range [0, 99999]. The seed value 0 is special: it tells +googletest to do the default behavior of calculating the seed from the current +time. + +If you combine this with `--gtest_repeat=N`, googletest will pick a different +random seed and re-shuffle the tests in each iteration. 
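+
+For instance, a hunt for a flaky test might combine these flags (using a
+hypothetical test binary named `foo_test`):
+
+```none
+$ foo_test --gtest_shuffle --gtest_repeat=100 --gtest_break_on_failure
+Shuffle with a different seed in each of the 100 iterations, stopping at
+the first failure.
+
+$ foo_test --gtest_shuffle --gtest_random_seed=12345
+Re-run the tests in the order of a previous run whose console output
+reported random seed 12345.
+```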
+ +### Controlling Test Output + +#### Colored Terminal Output + +googletest can use colors in its terminal output to make it easier to spot the +important information: + +<code> +...<br/> + <font color="green">[----------]</font><font color="black"> 1 test from + FooTest</font><br/> + <font color="green">[ RUN ]</font><font color="black"> + FooTest.DoesAbc</font><br/> + <font color="green">[ OK ]</font><font color="black"> + FooTest.DoesAbc </font><br/> + <font color="green">[----------]</font><font color="black"> + 2 tests from BarTest</font><br/> + <font color="green">[ RUN ]</font><font color="black"> + BarTest.HasXyzProperty </font><br/> + <font color="green">[ OK ]</font><font color="black"> + BarTest.HasXyzProperty</font><br/> + <font color="green">[ RUN ]</font><font color="black"> + BarTest.ReturnsTrueOnSuccess ... some error messages ...</font><br/> + <font color="red">[ FAILED ]</font><font color="black"> + BarTest.ReturnsTrueOnSuccess ...</font><br/> + <font color="green">[==========]</font><font color="black"> + 30 tests from 14 test suites ran.</font><br/> + <font color="green">[ PASSED ]</font><font color="black"> + 28 tests.</font><br/> + <font color="red">[ FAILED ]</font><font color="black"> + 2 tests, listed below:</font><br/> + <font color="red">[ FAILED ]</font><font color="black"> + BarTest.ReturnsTrueOnSuccess</font><br/> + <font color="red">[ FAILED ]</font><font color="black"> + AnotherTest.DoesXyz<br/> +<br/> + 2 FAILED TESTS + </font> +</code> + +You can set the `GTEST_COLOR` environment variable or the `--gtest_color` +command line flag to `yes`, `no`, or `auto` (the default) to enable colors, +disable colors, or let googletest decide. When the value is `auto`, googletest +will use colors if and only if the output goes to a terminal and (on non-Windows +platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`. + +#### Suppressing the Elapsed Time + +By default, googletest prints the time it takes to run each test. To disable +that, run the test program with the `--gtest_print_time=0` command line flag, or +set the GTEST_PRINT_TIME environment variable to `0`. + +#### Suppressing UTF-8 Text Output + +In case of assertion failures, googletest prints expected and actual values of +type `string` both as hex-encoded strings as well as in readable UTF-8 text if +they contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8 +text because, for example, you don't have an UTF-8 compatible output medium, run +the test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8` +environment variable to `0`. + + + +#### Generating an XML Report + +googletest can emit a detailed XML report to a file in addition to its normal +textual output. The report contains the duration of each test, and thus can help +you identify slow tests. The report is also used by the http://unittest +dashboard to show per-test-method error messages. + +To generate the XML report, set the `GTEST_OUTPUT` environment variable or the +`--gtest_output` flag to the string `"xml:path_to_output_file"`, which will +create the file at the given location. You can also just use the string `"xml"`, +in which case the output can be found in the `test_detail.xml` file in the +current directory. + +If you specify a directory (for example, `"xml:output/directory/"` on Linux or +`"xml:output\directory\"` on Windows), googletest will create the XML file in +that directory, named after the test executable (e.g. 
`foo_test.xml` for test +program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left +over from a previous run), googletest will pick a different name (e.g. +`foo_test_1.xml`) to avoid overwriting it. + +The report is based on the `junitreport` Ant task. Since that format was +originally intended for Java, a little interpretation is required to make it +apply to googletest tests, as shown here: + +```xml +<testsuites name="AllTests" ...> + <testsuite name="test_case_name" ...> + <testcase name="test_name" ...> + <failure message="..."/> + <failure message="..."/> + <failure message="..."/> + </testcase> + </testsuite> +</testsuites> +``` + +* The root `<testsuites>` element corresponds to the entire test program. +* `<testsuite>` elements correspond to googletest test suites. +* `<testcase>` elements correspond to googletest test functions. + +For instance, the following program + +```c++ +TEST(MathTest, Addition) { ... } +TEST(MathTest, Subtraction) { ... } +TEST(LogicTest, NonContradiction) { ... } +``` + +could generate this report: + +```xml +<?xml version="1.0" encoding="UTF-8"?> +<testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests"> + <testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015"> + <testcase name="Addition" status="run" time="0.007" classname=""> + <failure message="Value of: add(1, 1)
 Actual: 3
Expected: 2" type="">...</failure> + <failure message="Value of: add(1, -1)
 Actual: 1
Expected: 0" type="">...</failure> + </testcase> + <testcase name="Subtraction" status="run" time="0.005" classname=""> + </testcase> + </testsuite> + <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005"> + <testcase name="NonContradiction" status="run" time="0.005" classname=""> + </testcase> + </testsuite> +</testsuites> +``` + +Things to note: + +* The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how + many test functions the googletest program or test suite contains, while the + `failures` attribute tells how many of them failed. + +* The `time` attribute expresses the duration of the test, test suite, or + entire test program in seconds. + +* The `timestamp` attribute records the local date and time of the test + execution. + +* Each `<failure>` element corresponds to a single failed googletest + assertion. + +#### Generating a JSON Report + +googletest can also emit a JSON report as an alternative format to XML. To +generate the JSON report, set the `GTEST_OUTPUT` environment variable or the +`--gtest_output` flag to the string `"json:path_to_output_file"`, which will +create the file at the given location. You can also just use the string +`"json"`, in which case the output can be found in the `test_detail.json` file +in the current directory. + +The report format conforms to the following JSON Schema: + +```json +{ + "$schema": "http://json-schema.org/schema#", + "type": "object", + "definitions": { + "TestCase": { + "type": "object", + "properties": { + "name": { "type": "string" }, + "tests": { "type": "integer" }, + "failures": { "type": "integer" }, + "disabled": { "type": "integer" }, + "time": { "type": "string" }, + "testsuite": { + "type": "array", + "items": { + "$ref": "#/definitions/TestInfo" + } + } + } + }, + "TestInfo": { + "type": "object", + "properties": { + "name": { "type": "string" }, + "status": { + "type": "string", + "enum": ["RUN", "NOTRUN"] + }, + "time": { "type": "string" }, + "classname": { "type": "string" }, + "failures": { + "type": "array", + "items": { + "$ref": "#/definitions/Failure" + } + } + } + }, + "Failure": { + "type": "object", + "properties": { + "failures": { "type": "string" }, + "type": { "type": "string" } + } + } + }, + "properties": { + "tests": { "type": "integer" }, + "failures": { "type": "integer" }, + "disabled": { "type": "integer" }, + "errors": { "type": "integer" }, + "timestamp": { + "type": "string", + "format": "date-time" + }, + "time": { "type": "string" }, + "name": { "type": "string" }, + "testsuites": { + "type": "array", + "items": { + "$ref": "#/definitions/TestCase" + } + } + } +} +``` + +The report uses the format that conforms to the following Proto3 using the +[JSON encoding](https://developers.google.com/protocol-buffers/docs/proto3#json): + +```proto +syntax = "proto3"; + +package googletest; + +import "google/protobuf/timestamp.proto"; +import "google/protobuf/duration.proto"; + +message UnitTest { + int32 tests = 1; + int32 failures = 2; + int32 disabled = 3; + int32 errors = 4; + google.protobuf.Timestamp timestamp = 5; + google.protobuf.Duration time = 6; + string name = 7; + repeated TestCase testsuites = 8; +} + +message TestCase { + string name = 1; + int32 tests = 2; + int32 failures = 3; + int32 disabled = 4; + int32 errors = 5; + google.protobuf.Duration time = 6; + repeated TestInfo testsuite = 7; +} + +message TestInfo { + string name = 1; + enum Status { + RUN = 0; + NOTRUN = 1; + } + Status status = 2; + google.protobuf.Duration time = 3; + string 
classname = 4; + message Failure { + string failures = 1; + string type = 2; + } + repeated Failure failures = 5; +} +``` + +For instance, the following program + +```c++ +TEST(MathTest, Addition) { ... } +TEST(MathTest, Subtraction) { ... } +TEST(LogicTest, NonContradiction) { ... } +``` + +could generate this report: + +```json +{ + "tests": 3, + "failures": 1, + "errors": 0, + "time": "0.035s", + "timestamp": "2011-10-31T18:52:42Z", + "name": "AllTests", + "testsuites": [ + { + "name": "MathTest", + "tests": 2, + "failures": 1, + "errors": 0, + "time": "0.015s", + "testsuite": [ + { + "name": "Addition", + "status": "RUN", + "time": "0.007s", + "classname": "", + "failures": [ + { + "message": "Value of: add(1, 1)\n Actual: 3\nExpected: 2", + "type": "" + }, + { + "message": "Value of: add(1, -1)\n Actual: 1\nExpected: 0", + "type": "" + } + ] + }, + { + "name": "Subtraction", + "status": "RUN", + "time": "0.005s", + "classname": "" + } + ] + }, + { + "name": "LogicTest", + "tests": 1, + "failures": 0, + "errors": 0, + "time": "0.005s", + "testsuite": [ + { + "name": "NonContradiction", + "status": "RUN", + "time": "0.005s", + "classname": "" + } + ] + } + ] +} +``` + +IMPORTANT: The exact format of the JSON document is subject to change. + +### Controlling How Failures Are Reported + +#### Turning Assertion Failures into Break-Points + +When running test programs under a debugger, it's very convenient if the +debugger can catch an assertion failure and automatically drop into interactive +mode. googletest's *break-on-failure* mode supports this behavior. + +To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value +other than `0`. Alternatively, you can use the `--gtest_break_on_failure` +command line flag. + +#### Disabling Catching Test-Thrown Exceptions + +googletest can be used either with or without exceptions enabled. If a test +throws a C++ exception or (on Windows) a structured exception (SEH), by default +googletest catches it, reports it as a test failure, and continues with the next +test method. This maximizes the coverage of a test run. Also, on Windows an +uncaught exception will cause a pop-up window, so catching the exceptions allows +you to run the tests automatically. + +When debugging the test failures, however, you may instead want the exceptions +to be handled by the debugger, such that you can examine the call stack when an +exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS` +environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when +running the tests. diff --git a/security/nss/gtests/google_test/gtest/docs/faq.md b/security/nss/gtests/google_test/gtest/docs/faq.md new file mode 100644 index 0000000000..960a827989 --- /dev/null +++ b/security/nss/gtests/google_test/gtest/docs/faq.md @@ -0,0 +1,753 @@ +# Googletest FAQ + +<!-- GOOGLETEST_CM0014 DO NOT DELETE --> + +## Why should test suite names and test names not contain underscore? + +Underscore (`_`) is special, as C++ reserves the following to be used by the +compiler and the standard library: + +1. any identifier that starts with an `_` followed by an upper-case letter, and +2. any identifier that contains two consecutive underscores (i.e. `__`) + *anywhere* in its name. + +User code is *prohibited* from using such identifiers. + +Now let's look at what this means for `TEST` and `TEST_F`. + +Currently `TEST(TestSuiteName, TestName)` generates a class named +`TestSuiteName_TestName_Test`. What happens if `TestSuiteName` or `TestName` +contains `_`? 
+ +1. If `TestSuiteName` starts with an `_` followed by an upper-case letter (say, + `_Foo`), we end up with `_Foo_TestName_Test`, which is reserved and thus + invalid. +2. If `TestSuiteName` ends with an `_` (say, `Foo_`), we get + `Foo__TestName_Test`, which is invalid. +3. If `TestName` starts with an `_` (say, `_Bar`), we get + `TestSuiteName__Bar_Test`, which is invalid. +4. If `TestName` ends with an `_` (say, `Bar_`), we get + `TestSuiteName_Bar__Test`, which is invalid. + +So clearly `TestSuiteName` and `TestName` cannot start or end with `_` +(Actually, `TestSuiteName` can start with `_` -- as long as the `_` isn't +followed by an upper-case letter. But that's getting complicated. So for +simplicity we just say that it cannot start with `_`.). + +It may seem fine for `TestSuiteName` and `TestName` to contain `_` in the +middle. However, consider this: + +```c++ +TEST(Time, Flies_Like_An_Arrow) { ... } +TEST(Time_Flies, Like_An_Arrow) { ... } +``` + +Now, the two `TEST`s will both generate the same class +(`Time_Flies_Like_An_Arrow_Test`). That's not good. + +So for simplicity, we just ask the users to avoid `_` in `TestSuiteName` and +`TestName`. The rule is more constraining than necessary, but it's simple and +easy to remember. It also gives googletest some wiggle room in case its +implementation needs to change in the future. + +If you violate the rule, there may not be immediate consequences, but your test +may (just may) break with a new compiler (or a new version of the compiler you +are using) or with a new version of googletest. Therefore it's best to follow +the rule. + +## Why does googletest support `EXPECT_EQ(NULL, ptr)` and `ASSERT_EQ(NULL, ptr)` but not `EXPECT_NE(NULL, ptr)` and `ASSERT_NE(NULL, ptr)`? + +First of all you can use `EXPECT_NE(nullptr, ptr)` and `ASSERT_NE(nullptr, +ptr)`. This is the preferred syntax in the style guide because nullptr does not +have the type problems that NULL does. Which is why NULL does not work. + +Due to some peculiarity of C++, it requires some non-trivial template meta +programming tricks to support using `NULL` as an argument of the `EXPECT_XX()` +and `ASSERT_XX()` macros. Therefore we only do it where it's most needed +(otherwise we make the implementation of googletest harder to maintain and more +error-prone than necessary). + +The `EXPECT_EQ()` macro takes the *expected* value as its first argument and the +*actual* value as the second. It's reasonable that someone wants to write +`EXPECT_EQ(NULL, some_expression)`, and this indeed was requested several times. +Therefore we implemented it. + +The need for `EXPECT_NE(NULL, ptr)` isn't nearly as strong. When the assertion +fails, you already know that `ptr` must be `NULL`, so it doesn't add any +information to print `ptr` in this case. That means `EXPECT_TRUE(ptr != NULL)` +works just as well. + +If we were to support `EXPECT_NE(NULL, ptr)`, for consistency we'll have to +support `EXPECT_NE(ptr, NULL)` as well, as unlike `EXPECT_EQ`, we don't have a +convention on the order of the two arguments for `EXPECT_NE`. This means using +the template meta programming tricks twice in the implementation, making it even +harder to understand and maintain. We believe the benefit doesn't justify the +cost. + +Finally, with the growth of the gMock matcher library, we are encouraging people +to use the unified `EXPECT_THAT(value, matcher)` syntax more often in tests. 
One
+significant advantage of the matcher approach is that matchers can be easily
+combined to form new matchers, while the `EXPECT_NE`, etc, macros cannot be
+easily combined. Therefore we want to invest more in the matchers than in the
+`EXPECT_XX()` macros.
+
+## I need to test that different implementations of an interface satisfy some common requirements. Should I use typed tests or value-parameterized tests?
+
+For testing various implementations of the same interface, either typed tests
+or value-parameterized tests can get it done. It's really up to you, the user,
+to decide which is more convenient for you, depending on your particular case.
+Some rough guidelines:
+
+*   Typed tests can be easier to write if instances of the different
+    implementations can be created the same way, modulo the type. For example,
+    if all these implementations have a public default constructor (such that
+    you can write `new TypeParam`), or if their factory functions have the
+    same form (e.g. `CreateInstance<TypeParam>()`).
+*   Value-parameterized tests can be easier to write if you need different
+    code patterns to create different implementations' instances, e.g. `new
+    Foo` vs `new Bar(5)`. To accommodate for the differences, you can write
+    factory function wrappers and pass these function pointers to the tests as
+    their parameters.
+*   When a typed test fails, the default output includes the name of the type,
+    which can help you quickly identify which implementation is wrong.
+    Value-parameterized tests only show the number of the failed iteration by
+    default. You will need to define a function that returns the iteration
+    name and pass it as the third parameter to `INSTANTIATE_TEST_SUITE_P` to
+    have more useful output.
+*   When using typed tests, you need to make sure you are testing against the
+    interface type, not the concrete types (in other words, you want to make
+    sure `implicit_cast<MyInterface*>(my_concrete_impl)` works, not just that
+    `my_concrete_impl` works). It's less likely to make mistakes in this area
+    when using value-parameterized tests.
+
+I hope I didn't confuse you more. :-) If you don't mind, I'd suggest you give
+both approaches a try. Practice is a much better way to grasp the subtle
+differences between the two tools. Once you have some concrete experience, you
+can much more easily decide which one to use the next time.
+
+## I got some run-time errors about invalid proto descriptors when using `ProtocolMessageEquals`. Help!
+
+**Note:** `ProtocolMessageEquals` and `ProtocolMessageEquiv` are *deprecated*
+now. Please use `EqualsProto`, etc instead.
+
+`ProtocolMessageEquals` and `ProtocolMessageEquiv` were redefined recently and
+are now less tolerant of invalid protocol buffer definitions. In particular,
+if you have a `foo.proto` that doesn't fully qualify the type of a protocol
+message it references (e.g. `message<Bar>` where it should be
+`message<blah.Bar>`), you will now get run-time errors like:
+
+```
+... descriptor.cc:...] Invalid proto descriptor for file "path/to/foo.proto":
+... descriptor.cc:...] blah.MyMessage.my_field: ".Bar" is not defined.
+```
+
+If you see this, your `.proto` file is broken and needs to be fixed by making
+the types fully qualified. The new definition of `ProtocolMessageEquals` and
+`ProtocolMessageEquiv` just happens to reveal your bug.
+
+## My death test modifies some state, but the change seems lost after the death test finishes. Why?
+
+Death tests (`EXPECT_DEATH`, etc) are executed in a sub-process so that the
+expected crash won't kill the test program (i.e. the parent process). As a
+result, any in-memory side effects they incur are observable in their
+respective sub-processes, but not in the parent process. You can think of them
+as running in a parallel universe, more or less.
+
+In particular, if you use mocking and the death test statement invokes some
+mock methods, the parent process will think the calls have never occurred.
+Therefore, you may want to move your `EXPECT_CALL` statements inside the
+`EXPECT_DEATH` macro.
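+
+A minimal sketch of that workaround, assuming a hypothetical gMock mock class
+`MockLogger` and a hypothetical function `RunAndCrash()` that uses it before
+dying:
+
+```c++
+#include "gmock/gmock.h"
+
+class MockLogger {
+ public:
+  MOCK_METHOD(void, LogFatal, (const std::string& message), ());
+};
+
+TEST(MyDeathTest, LogsBeforeCrash) {
+  EXPECT_DEATH({
+    MockLogger logger;
+    // The expectation is set and verified in the child process, where the
+    // mock calls actually happen; the parent process never sees them.
+    EXPECT_CALL(logger, LogFatal(testing::_));
+    RunAndCrash(&logger);
+  }, "fatal error");
+}
+```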
+
+## EXPECT_EQ(htonl(blah), blah_blah) generates weird compiler errors in opt mode. Is this a googletest bug?
+
+Actually, the bug is in `htonl()`.
+
+According to `'man htonl'`, `htonl()` is a *function*, which means it's valid
+to use `htonl` as a function pointer. However, in opt mode `htonl()` is
+defined as a *macro*, which breaks this usage.
+
+Worse, the macro definition of `htonl()` uses a `gcc` extension and is *not*
+standard C++. That hacky implementation has some ad hoc limitations. In
+particular, it prevents you from writing `Foo<sizeof(htonl(x))>()`, where
+`Foo` is a template that has an integral argument.
+
+The implementation of `EXPECT_EQ(a, b)` uses `sizeof(... a ...)` inside a
+template argument, and thus doesn't compile in opt mode when `a` contains a
+call to `htonl()`. It is difficult to make `EXPECT_EQ` bypass the `htonl()`
+bug, as the solution must work with different compilers on various platforms.
+
+`htonl()` has some other problems as described in `//util/endian/endian.h`,
+which defines `ghtonl()` to replace it. `ghtonl()` does the same thing
+`htonl()` does, only without its problems. We suggest using `ghtonl()` instead
+of `htonl()`, both in your tests and production code.
+
+`//util/endian/endian.h` also defines `ghtons()`, which solves similar
+problems in `htons()`.
+
+Don't forget to add `//util/endian` to the list of dependencies in the `BUILD`
+file wherever `ghtonl()` and `ghtons()` are used. The library consists of a
+single header file and will not bloat your binary.
+
+## The compiler complains about "undefined references" to some static const member variables, but I did define them in the class body. What's wrong?
+
+If your class has a static data member:
+
+```c++
+// foo.h
+class Foo {
+  ...
+  static const int kBar = 100;
+};
+```
+
+You also need to define it *outside* of the class body in `foo.cc`:
+
+```c++
+const int Foo::kBar;  // No initializer here.
+```
+
+Otherwise your code is **invalid C++**, and may break in unexpected ways. In
+particular, using it in googletest comparison assertions (`EXPECT_EQ`, etc)
+will generate an "undefined reference" linker error. The fact that "it used to
+work" doesn't mean it's valid. It just means that you were lucky. :-)
+
+## Can I derive a test fixture from another?
+
+Yes.
+
+Each test fixture has a corresponding and same-named test suite. This means
+only one test suite can use a particular fixture. Sometimes, however, multiple
+test cases may want to use the same or slightly different fixtures. For
+example, you may want to make sure that all of a GUI library's test suites
+don't leak important system resources like fonts and brushes.
+
+In googletest, you share a fixture among test suites by putting the shared
+logic in a base test fixture, then deriving from that base a separate fixture
+for each test suite that wants to use this common logic. You then use
+`TEST_F()` to write tests using each derived fixture.
+
+Typically, your code looks like this:
+
+```c++
+// Defines a base test fixture.
+class BaseTest : public ::testing::Test {
+ protected:
+  ...
+};
+
+// Derives a fixture FooTest from BaseTest.
+class FooTest : public BaseTest {
+ protected:
+  void SetUp() override {
+    BaseTest::SetUp();  // Sets up the base fixture first.
+    ... additional set-up work ...
+  }
+
+  void TearDown() override {
+    ... clean-up work for FooTest ...
+    BaseTest::TearDown();  // Remember to tear down the base fixture
+                           // after cleaning up FooTest!
+  }
+
+  ... functions and variables for FooTest ...
+};
+
+// Tests that use the fixture FooTest.
+TEST_F(FooTest, Bar) { ... }
+TEST_F(FooTest, Baz) { ... }
+
+... additional fixtures derived from BaseTest ...
+```
+
+If necessary, you can continue to derive test fixtures from a derived fixture.
+googletest has no limit on how deep the hierarchy can be.
+
+For a complete example using derived test fixtures, see
+[sample5_unittest.cc](../samples/sample5_unittest.cc).
+
+## My compiler complains "void value not ignored as it ought to be." What does this mean?
+
+You're probably using an `ASSERT_*()` in a function that doesn't return
+`void`. `ASSERT_*()` can only be used in `void` functions, due to exceptions
+being disabled by our build system. Please see more details
+[here](advanced.md#assertion-placement).
+
+## My death test hangs (or seg-faults). How do I fix it?
+
+In googletest, death tests are run in a child process and the way they work is
+delicate. To write death tests you really need to understand how they work.
+Please make sure you have read [this](advanced.md#how-it-works).
+
+In particular, death tests don't like having multiple threads in the parent
+process. So the first thing you can try is to eliminate creating threads
+outside of `EXPECT_DEATH()`. For example, you may want to use mocks or fake
+objects instead of real ones in your tests.
+
+Sometimes this is impossible as some library you must use may be creating
+threads before `main()` is even reached. In this case, you can try to minimize
+the chance of conflicts by either moving as many activities as possible inside
+`EXPECT_DEATH()` (in the extreme case, you want to move everything inside), or
+leaving as few things as possible in it. Also, you can try to set the death
+test style to `"threadsafe"`, which is safer but slower, and see if it helps.
+
+If you go with thread-safe death tests, remember that they rerun the test
+program from the beginning in the child process. Therefore make sure your
+program can run side-by-side with itself and is deterministic.
+
+In the end, this boils down to good concurrent programming. You have to make
+sure that there are no race conditions or deadlocks in your program. No silver
+bullet - sorry!
+
+## Should I use the constructor/destructor of the test fixture or SetUp()/TearDown()? {#CtorVsSetUp}
+
+The first thing to remember is that googletest does **not** reuse the same
+test fixture object across multiple tests. For each `TEST_F`, googletest will
+create a **fresh** test fixture object, immediately call `SetUp()`, run the
+test body, call `TearDown()`, and then delete the test fixture object.
+
+When you need to write per-test set-up and tear-down logic, you have the
+choice between using the test fixture constructor/destructor or
+`SetUp()/TearDown()`.
+The former is usually preferred, as it has the following benefits: + +* By initializing a member variable in the constructor, we have the option to + make it `const`, which helps prevent accidental changes to its value and + makes the tests more obviously correct. +* In case we need to subclass the test fixture class, the subclass' + constructor is guaranteed to call the base class' constructor *first*, and + the subclass' destructor is guaranteed to call the base class' destructor + *afterward*. With `SetUp()/TearDown()`, a subclass may make the mistake of + forgetting to call the base class' `SetUp()/TearDown()` or call them at the + wrong time. + +You may still want to use `SetUp()/TearDown()` in the following cases: + +* C++ does not allow virtual function calls in constructors and destructors. + You can call a method declared as virtual, but it will not use dynamic + dispatch, it will use the definition from the class the constructor of which + is currently executing. This is because calling a virtual method before the + derived class constructor has a chance to run is very dangerous - the + virtual method might operate on uninitialized data. Therefore, if you need + to call a method that will be overridden in a derived class, you have to use + `SetUp()/TearDown()`. +* In the body of a constructor (or destructor), it's not possible to use the + `ASSERT_xx` macros. Therefore, if the set-up operation could cause a fatal + test failure that should prevent the test from running, it's necessary to + use `abort` <!-- GOOGLETEST_CM0015 DO NOT DELETE --> and abort the whole test executable, + or to use `SetUp()` instead of a constructor. +* If the tear-down operation could throw an exception, you must use + `TearDown()` as opposed to the destructor, as throwing in a destructor leads + to undefined behavior and usually will kill your program right away. Note + that many standard libraries (like STL) may throw when exceptions are + enabled in the compiler. Therefore you should prefer `TearDown()` if you + want to write portable tests that work with or without exceptions. +* The googletest team is considering making the assertion macros throw on + platforms where exceptions are enabled (e.g. Windows, Mac OS, and Linux + client-side), which will eliminate the need for the user to propagate + failures from a subroutine to its caller. Therefore, you shouldn't use + googletest assertions in a destructor if your code could run on such a + platform. + +## The compiler complains "no matching function to call" when I use ASSERT_PRED*. How do I fix it? + +If the predicate function you use in `ASSERT_PRED*` or `EXPECT_PRED*` is +overloaded or a template, the compiler will have trouble figuring out which +overloaded version it should use. `ASSERT_PRED_FORMAT*` and +`EXPECT_PRED_FORMAT*` don't have this problem. + +If you see this error, you might want to switch to +`(ASSERT|EXPECT)_PRED_FORMAT*`, which will also give you a better failure +message. If, however, that is not an option, you can resolve the problem by +explicitly telling the compiler which version to pick. 
+
+## The compiler complains "no matching function to call" when I use ASSERT_PRED*. How do I fix it?
+
+If the predicate function you use in `ASSERT_PRED*` or `EXPECT_PRED*` is
+overloaded or a template, the compiler will have trouble figuring out which
+overloaded version it should use. `ASSERT_PRED_FORMAT*` and
+`EXPECT_PRED_FORMAT*` don't have this problem.
+
+If you see this error, you might want to switch to
+`(ASSERT|EXPECT)_PRED_FORMAT*`, which will also give you a better failure
+message. If, however, that is not an option, you can resolve the problem by
+explicitly telling the compiler which version to pick.
+
+For example, suppose you have
+
+```c++
+bool IsPositive(int n) {
+  return n > 0;
+}
+
+bool IsPositive(double x) {
+  return x > 0;
+}
+```
+
+you will get a compiler error if you write
+
+```c++
+EXPECT_PRED1(IsPositive, 5);
+```
+
+However, this will work:
+
+```c++
+EXPECT_PRED1(static_cast<bool (*)(int)>(IsPositive), 5);
+```
+
+(The stuff inside the angled brackets for the `static_cast` operator is the type
+of the function pointer for the `int`-version of `IsPositive()`.)
+
+As another example, when you have a template function
+
+```c++
+template <typename T>
+bool IsNegative(T x) {
+  return x < 0;
+}
+```
+
+you can use it in a predicate assertion like this:
+
+```c++
+ASSERT_PRED1(IsNegative<int>, -5);
+```
+
+Things are more interesting if your template has more than one parameter. The
+following won't compile:
+
+```c++
+ASSERT_PRED2(GreaterThan<int, int>, 5, 0);
+```
+
+as the C++ pre-processor thinks you are giving `ASSERT_PRED2` 4 arguments, which
+is one more than expected. The workaround is to wrap the predicate function in
+parentheses:
+
+```c++
+ASSERT_PRED2((GreaterThan<int, int>), 5, 0);
+```
+
+## My compiler complains about "ignoring return value" when I call RUN_ALL_TESTS(). Why?
+
+Some people had been ignoring the return value of `RUN_ALL_TESTS()`. That is,
+instead of
+
+```c++
+  return RUN_ALL_TESTS();
+```
+
+they write
+
+```c++
+  RUN_ALL_TESTS();
+```
+
+This is **wrong and dangerous**. The testing service needs to see the return
+value of `RUN_ALL_TESTS()` in order to determine if a test has passed. If your
+`main()` function ignores it, your test will be considered successful even if it
+has a googletest assertion failure. Very bad.
+
+We have decided to fix this (thanks to Michael Chastain for the idea). Now, your
+code will no longer be able to ignore `RUN_ALL_TESTS()` when compiled with
+`gcc`. If you do so, you'll get a compiler error.
+
+If you see the compiler complaining about you ignoring the return value of
+`RUN_ALL_TESTS()`, the fix is simple: just make sure its value is used as the
+return value of `main()`.
+
+But how could we introduce a change that breaks existing tests? Well, in this
+case, the code was already broken in the first place, so we didn't break it. :-)
+
+## My compiler complains that a constructor (or destructor) cannot return a value. What's going on?
+
+Due to a peculiarity of C++, in order to support the syntax for streaming
+messages to an `ASSERT_*`, e.g.
+
+```c++
+  ASSERT_EQ(1, Foo()) << "blah blah" << foo;
+```
+
+we had to give up using `ASSERT*` and `FAIL*` (but not `EXPECT*` and
+`ADD_FAILURE*`) in constructors and destructors. The workaround is to move the
+content of your constructor/destructor to a private void member function, or
+switch to `EXPECT_*()` if that works. This
+[section](advanced.md#assertion-placement) in the user's guide explains it.
+
+## My SetUp() function is not called. Why?
+
+C++ is case-sensitive. Did you spell it as `Setup()`?
+
+Similarly, sometimes people spell `SetUpTestSuite()` as `SetupTestSuite()` and
+wonder why it's never called.
+
+## I have several test suites which share the same test fixture logic, do I have to define a new test fixture class for each of them? This seems pretty tedious.
+
+You don't have to. Instead of
+
+```c++
+class FooTest : public BaseTest {};
+
+TEST_F(FooTest, Abc) { ... }
+TEST_F(FooTest, Def) { ... }
+
+class BarTest : public BaseTest {};
+
+TEST_F(BarTest, Abc) { ... }
+TEST_F(BarTest, Def) { ... }
+```
+
+you can simply `typedef` the test fixtures:
+
+```c++
+typedef BaseTest FooTest;
+
+TEST_F(FooTest, Abc) { ... }
+TEST_F(FooTest, Def) { ... }
+
+typedef BaseTest BarTest;
+
+TEST_F(BarTest, Abc) { ... }
+TEST_F(BarTest, Def) { ... }
+```
+
+## googletest output is buried in a whole bunch of LOG messages. What do I do?
+
+The googletest output is meant to be a concise and human-friendly report. If
+your test generates textual output itself, it will mix with the googletest
+output, making it hard to read. However, there is an easy solution to this
+problem.
+
+Since `LOG` messages go to stderr, we decided to let googletest output go to
+stdout. This way, you can easily separate the two using redirection. For
+example:
+
+```shell
+$ ./my_test > gtest_output.txt
+```
+
+## Why should I prefer test fixtures over global variables?
+
+There are several good reasons:
+
+1.  It's likely your test needs to change the states of its global variables.
+    This makes it difficult to keep side effects from escaping one test and
+    contaminating others, making debugging difficult. By using fixtures, each
+    test has a fresh set of variables that's different (but with the same
+    names). Thus, tests are kept independent of each other.
+2.  Global variables pollute the global namespace.
+3.  Test fixtures can be reused via subclassing, which cannot be done easily
+    with global variables. This is useful if many test suites have something in
+    common.
+
+## What can the statement argument in ASSERT_DEATH() be?
+
+`ASSERT_DEATH(*statement*, *regex*)` (or any death assertion macro) can be used
+wherever `*statement*` is valid. So basically `*statement*` can be any C++
+statement that makes sense in the current context. In particular, it can
+reference global and/or local variables, and can be:
+
+*   a simple function call (often the case),
+*   a complex expression, or
+*   a compound statement.
+
+Some examples are shown here:
+
+```c++
+// A death test can be a simple function call.
+TEST(MyDeathTest, FunctionCall) {
+  ASSERT_DEATH(Xyz(5), "Xyz failed");
+}
+
+// Or a complex expression that references variables and functions.
+TEST(MyDeathTest, ComplexExpression) {
+  const bool c = Condition();
+  ASSERT_DEATH((c ? Func1(0) : object2.Method("test")),
+               "(Func1|Method) failed");
+}
+
+// Death assertions can be used anywhere in a function. In
+// particular, they can be inside a loop.
+TEST(MyDeathTest, InsideLoop) {
+  // Verifies that Foo(0), Foo(1), ..., and Foo(4) all die.
+  for (int i = 0; i < 5; i++) {
+    EXPECT_DEATH_M(Foo(i), "Foo has \\d+ errors",
+                   ::testing::Message() << "where i is " << i);
+  }
+}
+
+// A death assertion can contain a compound statement.
+TEST(MyDeathTest, CompoundStatement) {
+  // Verifies that at least one of Bar(0), Bar(1), ..., and
+  // Bar(4) dies.
+  ASSERT_DEATH({
+    for (int i = 0; i < 5; i++) {
+      Bar(i);
+    }
+  },
+  "Bar has \\d+ errors");
+}
+```
+
+gtest-death-test_test.cc contains more examples if you are interested.
+
+## I have a fixture class `FooTest`, but `TEST_F(FooTest, Bar)` gives me error ``"no matching function for call to `FooTest::FooTest()'"``. Why?
+
+Googletest needs to be able to create objects of your test fixture class, so it
+must have a default constructor. Normally the compiler will define one for you.
+However, there are cases where you have to define your own:
+
+*   If you explicitly declare a non-default constructor for class `FooTest`
+    (`DISALLOW_EVIL_CONSTRUCTORS()` does this), then you need to define a
+    default constructor, even if it would be empty.
+*   If `FooTest` has a const non-static data member, then you have to define
+    the default constructor *and* initialize the const member in the
+    initializer list of the constructor (see the sketch after this list).
+    (Early versions of `gcc` don't force you to initialize the const member.
+    It's a bug that has been fixed in `gcc 4`.)
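+
+For the const-member case, a minimal sketch (the `max_size_` member is
+hypothetical):
+
+```c++
+class FooTest : public ::testing::Test {
+ protected:
+  FooTest() : max_size_(10) {}  // The const member must be initialized in
+                                // the constructor's initializer list.
+  const int max_size_;
+};
+```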
+
+## Why does ASSERT_DEATH complain about previous threads that were already joined?
+
+With the Linux pthread library, there is no turning back once you cross the line
+from single thread to multiple threads. The first time you create a thread, a
+manager thread is created in addition, so you get 3, not 2, threads. Later when
+the thread you create joins the main thread, the thread count decrements by 1,
+but the manager thread will never be killed, so you still have 2 threads, which
+means you cannot safely run a death test.
+
+The new NPTL thread library doesn't suffer from this problem, as it doesn't
+create a manager thread. However, if you don't control which machine your test
+runs on, you shouldn't depend on this.
+
+## Why does googletest require the entire test suite, instead of individual tests, to be named *DeathTest when it uses ASSERT_DEATH?
+
+googletest does not interleave tests from different test suites. That is, it
+runs all tests in one test suite first, and then runs all tests in the next test
+suite, and so on. googletest does this because it needs to set up a test suite
+before the first test in it is run, and tear it down afterwards. Splitting up
+the test suite would require multiple set-up and tear-down processes, which is
+inefficient and makes the semantics unclean.
+
+If we were to determine the order of tests based on test name instead of test
+suite name, then we would have a problem with the following situation:
+
+```c++
+TEST_F(FooTest, AbcDeathTest) { ... }
+TEST_F(FooTest, Uvw) { ... }
+
+TEST_F(BarTest, DefDeathTest) { ... }
+TEST_F(BarTest, Xyz) { ... }
+```
+
+Since `FooTest.AbcDeathTest` needs to run before `BarTest.Xyz`, and we don't
+interleave tests from different test suites, we need to run all tests in the
+`FooTest` suite before running any test in the `BarTest` suite. This contradicts
+the requirement to run `BarTest.DefDeathTest` before `FooTest.Uvw`.
+
+## But I don't like calling my entire test suite \*DeathTest when it contains both death tests and non-death tests. What do I do?
+
+You don't have to, but if you like, you may split up the test suite into
+`FooTest` and `FooDeathTest`, where the names make it clear that they are
+related:
+
+```c++
+class FooTest : public ::testing::Test { ... };
+
+TEST_F(FooTest, Abc) { ... }
+TEST_F(FooTest, Def) { ... }
+
+using FooDeathTest = FooTest;
+
+TEST_F(FooDeathTest, Uvw) { ... EXPECT_DEATH(...) ... }
+TEST_F(FooDeathTest, Xyz) { ... ASSERT_DEATH(...) ... }
+```
+
+## googletest prints the LOG messages in a death test's child process only when the test fails. How can I see the LOG messages when the death test succeeds?
+
+Printing the LOG messages generated by the statement inside `EXPECT_DEATH()`
+makes it harder to search for real problems in the parent's log. Therefore,
+googletest only prints them when the death test has failed.
+
+If you really need to see such LOG messages, a workaround is to temporarily
+break the death test (e.g. by changing the regex pattern it is expected to
+match). Admittedly, this is a hack. We'll consider a more permanent solution
+after the fork-and-exec-style death tests are implemented.
+
+## The compiler complains about "no match for 'operator<<'" when I use an assertion. What gives?
+
+If you use a user-defined type `FooType` in an assertion, you must make sure
+there is an `std::ostream& operator<<(std::ostream&, const FooType&)` function
+defined such that we can print a value of `FooType`.
+
+In addition, if `FooType` is declared in a namespace, the `<<` operator also
+needs to be defined in the *same* namespace. See https://abseil.io/tips/49 for
+details.
+
+## How do I suppress the memory leak messages on Windows?
+
+Since the statically initialized googletest singleton requires allocations on
+the heap, the Visual C++ memory leak detector will report memory leaks at the
+end of the program run. The easiest way to avoid this is to use the
+`_CrtMemCheckpoint` and `_CrtMemDumpAllObjectsSince` calls to not report any
+statically initialized heap objects. See MSDN for more details and additional
+heap check/debug routines.
+
+## How can my code detect if it is running in a test?
+
+If you write code that sniffs whether it's running in a test and does different
+things accordingly, you are leaking test-only logic into production code and
+there is no easy way to ensure that the test-only code paths aren't run by
+mistake in production. Such cleverness also leads to
+[Heisenbugs](https://en.wikipedia.org/wiki/Heisenbug). Therefore we strongly
+advise against the practice, and googletest doesn't provide a way to do it.
+
+In general, the recommended way to cause the code to behave differently under
+test is [Dependency Injection](https://en.wikipedia.org/wiki/Dependency_injection). You can inject
+different functionality from the test and from the production code. Since your
+production code doesn't link in the for-test logic at all (the
+[`testonly`](https://docs.bazel.build/versions/master/be/common-definitions.html#common.testonly) attribute for BUILD targets helps to ensure
+that), there is no danger in accidentally running it.
+
+However, if you *really*, *really*, *really* have no choice, and if you follow
+the rule of ending your test program names with `_test`, you can use the
+*horrible* hack of sniffing your executable name (`argv[0]` in `main()`) to know
+whether the code is under test.
+
+## How do I temporarily disable a test?
+
+If you have a broken test that you cannot fix right away, you can add the
+`DISABLED_` prefix to its name. This will exclude it from execution. This is
+better than commenting out the code or using `#if 0`, as disabled tests are
+still compiled (and thus won't rot).
+
+To include disabled tests in test execution, just invoke the test program with
+the `--gtest_also_run_disabled_tests` flag.
+
+## Is it OK if I have two separate `TEST(Foo, Bar)` test methods defined in different namespaces?
+
+Yes.
+
+The rule is **all test methods in the same test suite must use the same fixture
+class.** This means that the following is **allowed** because both tests use the
+same fixture class (`::testing::Test`).
+ +```c++ +namespace foo { +TEST(CoolTest, DoSomething) { + SUCCEED(); +} +} // namespace foo + +namespace bar { +TEST(CoolTest, DoSomething) { + SUCCEED(); +} +} // namespace bar +``` + +However, the following code is **not allowed** and will produce a runtime error +from googletest because the test methods are using different test fixture +classes with the same test suite name. + +```c++ +namespace foo { +class CoolTest : public ::testing::Test {}; // Fixture foo::CoolTest +TEST_F(CoolTest, DoSomething) { + SUCCEED(); +} +} // namespace foo + +namespace bar { +class CoolTest : public ::testing::Test {}; // Fixture: bar::CoolTest +TEST_F(CoolTest, DoSomething) { + SUCCEED(); +} +} // namespace bar +``` diff --git a/security/nss/gtests/google_test/gtest/docs/primer.md b/security/nss/gtests/google_test/gtest/docs/primer.md new file mode 100644 index 0000000000..0317692bbb --- /dev/null +++ b/security/nss/gtests/google_test/gtest/docs/primer.md @@ -0,0 +1,567 @@ +# Googletest Primer + +## Introduction: Why googletest? + +*googletest* helps you write better C++ tests. + +googletest is a testing framework developed by the Testing Technology team with +Google's specific requirements and constraints in mind. Whether you work on +Linux, Windows, or a Mac, if you write C++ code, googletest can help you. And it +supports *any* kind of tests, not just unit tests. + +So what makes a good test, and how does googletest fit in? We believe: + +1. Tests should be *independent* and *repeatable*. It's a pain to debug a test + that succeeds or fails as a result of other tests. googletest isolates the + tests by running each of them on a different object. When a test fails, + googletest allows you to run it in isolation for quick debugging. +2. Tests should be well *organized* and reflect the structure of the tested + code. googletest groups related tests into test suites that can share data + and subroutines. This common pattern is easy to recognize and makes tests + easy to maintain. Such consistency is especially helpful when people switch + projects and start to work on a new code base. +3. Tests should be *portable* and *reusable*. Google has a lot of code that is + platform-neutral; its tests should also be platform-neutral. googletest + works on different OSes, with different compilers, with or without + exceptions, so googletest tests can work with a variety of configurations. +4. When tests fail, they should provide as much *information* about the problem + as possible. googletest doesn't stop at the first test failure. Instead, it + only stops the current test and continues with the next. You can also set up + tests that report non-fatal failures after which the current test continues. + Thus, you can detect and fix multiple bugs in a single run-edit-compile + cycle. +5. The testing framework should liberate test writers from housekeeping chores + and let them focus on the test *content*. googletest automatically keeps + track of all tests defined, and doesn't require the user to enumerate them + in order to run them. +6. Tests should be *fast*. With googletest, you can reuse shared resources + across tests and pay for the set-up/tear-down only once, without making + tests depend on each other. + +Since googletest is based on the popular xUnit architecture, you'll feel right +at home if you've used JUnit or PyUnit before. If not, it will take you about 10 +minutes to learn the basics and get started. So let's go! 
+
+## Beware of the nomenclature
+
+_Note:_ There might be some confusion arising from different definitions of the
+terms _Test_, _Test Case_ and _Test Suite_, so beware of misunderstanding these.
+
+Historically, googletest started to use the term _Test Case_ for grouping
+related tests, whereas current publications, including International Software
+Testing Qualifications Board ([ISTQB](http://www.istqb.org/)) materials and
+various textbooks on software quality, use the term
+_[Test Suite][istqb test suite]_ for this.
+
+The related term _Test_, as it is used in googletest, corresponds to the term
+_[Test Case][istqb test case]_ of ISTQB and others.
+
+The term _Test_ is commonly used in a sense broad enough to include ISTQB's
+definition of _Test Case_, so it's not much of a problem here. But the term
+_Test Case_ as it was used in Google Test contradicts the ISTQB usage and is
+thus confusing.
+
+googletest recently started replacing the term _Test Case_ with _Test Suite_.
+The preferred API is *TestSuite*. The older TestCase API is being slowly
+deprecated and refactored away.
+
+So please be aware of the different definitions of the terms:
+
+<!-- mdformat off(github rendering does not support multiline tables) -->
+
+Meaning                                                                               | googletest Term         | [ISTQB](http://www.istqb.org/) Term
+:----------------------------------------------------------------------------------- | :---------------------- | :----------------------------------
+Exercise a particular program path with specific input values and verify the results | [TEST()](#simple-tests) | [Test Case][istqb test case]
+
+<!-- mdformat on -->
+
+[istqb test case]: http://glossary.istqb.org/en/search/test%20case
+[istqb test suite]: http://glossary.istqb.org/en/search/test%20suite
+
+## Basic Concepts
+
+When using googletest, you start by writing *assertions*, which are statements
+that check whether a condition is true. An assertion's result can be *success*,
+*nonfatal failure*, or *fatal failure*. If a fatal failure occurs, it aborts the
+current function; otherwise the program continues normally.
+
+*Tests* use assertions to verify the tested code's behavior. If a test crashes
+or has a failed assertion, then it *fails*; otherwise it *succeeds*.
+
+A *test suite* contains one or more tests. You should group your tests into test
+suites that reflect the structure of the tested code. When multiple tests in a
+test suite need to share common objects and subroutines, you can put them into a
+*test fixture* class.
+
+A *test program* can contain multiple test suites.
+
+We'll now explain how to write a test program, starting at the individual
+assertion level and building up to tests and test suites.
+
+## Assertions
+
+googletest assertions are macros that resemble function calls. You test a class
+or function by making assertions about its behavior. When an assertion fails,
+googletest prints the assertion's source file and line number location, along
+with a failure message. You may also supply a custom failure message which will
+be appended to googletest's message.
+
+The assertions come in pairs that test the same thing but have different effects
+on the current function. `ASSERT_*` versions generate fatal failures when they
+fail, and **abort the current function**. `EXPECT_*` versions generate nonfatal
+failures, which don't abort the current function. Usually `EXPECT_*` are
+preferred, as they allow more than one failure to be reported in a test.
+However, you should use `ASSERT_*` if it doesn't make sense to continue when the +assertion in question fails. + +Since a failed `ASSERT_*` returns from the current function immediately, +possibly skipping clean-up code that comes after it, it may cause a space leak. +Depending on the nature of the leak, it may or may not be worth fixing - so keep +this in mind if you get a heap checker error in addition to assertion errors. + +To provide a custom failure message, simply stream it into the macro using the +`<<` operator or a sequence of such operators. An example: + +```c++ +ASSERT_EQ(x.size(), y.size()) << "Vectors x and y are of unequal length"; + +for (int i = 0; i < x.size(); ++i) { + EXPECT_EQ(x[i], y[i]) << "Vectors x and y differ at index " << i; +} +``` + +Anything that can be streamed to an `ostream` can be streamed to an assertion +macro--in particular, C strings and `string` objects. If a wide string +(`wchar_t*`, `TCHAR*` in `UNICODE` mode on Windows, or `std::wstring`) is +streamed to an assertion, it will be translated to UTF-8 when printed. + +### Basic Assertions + +These assertions do basic true/false condition testing. + +Fatal assertion | Nonfatal assertion | Verifies +-------------------------- | -------------------------- | -------------------- +`ASSERT_TRUE(condition);` | `EXPECT_TRUE(condition);` | `condition` is true +`ASSERT_FALSE(condition);` | `EXPECT_FALSE(condition);` | `condition` is false + +Remember, when they fail, `ASSERT_*` yields a fatal failure and returns from the +current function, while `EXPECT_*` yields a nonfatal failure, allowing the +function to continue running. In either case, an assertion failure means its +containing test fails. + +**Availability**: Linux, Windows, Mac. + +### Binary Comparison + +This section describes assertions that compare two values. + +Fatal assertion | Nonfatal assertion | Verifies +------------------------ | ------------------------ | -------------- +`ASSERT_EQ(val1, val2);` | `EXPECT_EQ(val1, val2);` | `val1 == val2` +`ASSERT_NE(val1, val2);` | `EXPECT_NE(val1, val2);` | `val1 != val2` +`ASSERT_LT(val1, val2);` | `EXPECT_LT(val1, val2);` | `val1 < val2` +`ASSERT_LE(val1, val2);` | `EXPECT_LE(val1, val2);` | `val1 <= val2` +`ASSERT_GT(val1, val2);` | `EXPECT_GT(val1, val2);` | `val1 > val2` +`ASSERT_GE(val1, val2);` | `EXPECT_GE(val1, val2);` | `val1 >= val2` + +Value arguments must be comparable by the assertion's comparison operator or +you'll get a compiler error. We used to require the arguments to support the +`<<` operator for streaming to an `ostream`, but this is no longer necessary. If +`<<` is supported, it will be called to print the arguments when the assertion +fails; otherwise googletest will attempt to print them in the best way it can. +For more details and how to customize the printing of the arguments, see the +[documentation](../../googlemock/docs/cook_book.md#teaching-gmock-how-to-print-your-values). + +These assertions can work with a user-defined type, but only if you define the +corresponding comparison operator (e.g., `==` or `<`). Since this is discouraged +by the Google +[C++ Style Guide](https://google.github.io/styleguide/cppguide.html#Operator_Overloading), +you may need to use `ASSERT_TRUE()` or `EXPECT_TRUE()` to assert the equality of +two objects of a user-defined type. + +However, when possible, `ASSERT_EQ(actual, expected)` is preferred to +`ASSERT_TRUE(actual == expected)`, since it tells you `actual` and `expected`'s +values on failure. 
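+
+As a quick sketch of why (the `Add()` function here is hypothetical and
+deliberately buggy, and the exact failure text varies by googletest version):
+
+```c++
+int Add(int a, int b) { return a * b; }  // Hypothetical bug: multiplies.
+
+TEST(AddSketch, EqReportsBothValues) {
+  // On failure, EXPECT_TRUE can only report that the condition was false...
+  EXPECT_TRUE(Add(2, 3) == 5);
+  // ...while EXPECT_EQ also prints the two operands (6 and 5 here), which
+  // usually makes the bug obvious from the log alone.
+  EXPECT_EQ(Add(2, 3), 5);
+}
+```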
+
+Arguments are always evaluated exactly once. Therefore, it's OK for the
+arguments to have side effects. However, as with any ordinary C/C++ function,
+the arguments' evaluation order is undefined (i.e., the compiler is free to
+choose any order), and your code should not depend on any particular argument
+evaluation order.
+
+`ASSERT_EQ()` does pointer equality on pointers. If used on two C strings, it
+tests if they are in the same memory location, not if they have the same value.
+Therefore, if you want to compare C strings (e.g. `const char*`) by value, use
+`ASSERT_STREQ()`, which will be described later on. In particular, to assert
+that a C string is `NULL`, use `ASSERT_STREQ(c_string, NULL)`. Consider using
+`ASSERT_EQ(c_string, nullptr)` if C++11 is supported. To compare two `string`
+objects, you should use `ASSERT_EQ`.
+
+When doing pointer comparisons use `*_EQ(ptr, nullptr)` and `*_NE(ptr, nullptr)`
+instead of `*_EQ(ptr, NULL)` and `*_NE(ptr, NULL)`. This is because `nullptr` is
+typed, while `NULL` is not. See the [FAQ](faq.md) for more details.
+
+If you're working with floating point numbers, you may want to use the floating
+point variations of some of these macros in order to avoid problems caused by
+rounding. See [Advanced googletest Topics](advanced.md) for details.
+
+Macros in this section work with both narrow and wide string objects (`string`
+and `wstring`).
+
+**Availability**: Linux, Windows, Mac.
+
+**Historical note**: Before February 2016 the convention was to call `*_EQ` as
+`ASSERT_EQ(expected, actual)`, so lots of existing code uses this order. Now
+`*_EQ` treats both parameters in the same way.
+
+### String Comparison
+
+The assertions in this group compare two **C strings**. If you want to compare
+two `string` objects, use `EXPECT_EQ`, `EXPECT_NE`, etc., instead.
+
+<!-- mdformat off(github rendering does not support multiline tables) -->
+
+| Fatal assertion                | Nonfatal assertion             | Verifies                                                 |
+| ------------------------------ | ------------------------------ | -------------------------------------------------------- |
+| `ASSERT_STREQ(str1,str2);`     | `EXPECT_STREQ(str1,str2);`     | the two C strings have the same content                  |
+| `ASSERT_STRNE(str1,str2);`     | `EXPECT_STRNE(str1,str2);`     | the two C strings have different contents                |
+| `ASSERT_STRCASEEQ(str1,str2);` | `EXPECT_STRCASEEQ(str1,str2);` | the two C strings have the same content, ignoring case   |
+| `ASSERT_STRCASENE(str1,str2);` | `EXPECT_STRCASENE(str1,str2);` | the two C strings have different contents, ignoring case |
+
+<!-- mdformat on-->
+
+Note that "CASE" in an assertion name means that case is ignored. A `NULL`
+pointer and an empty string are considered *different*.
+
+`*STREQ*` and `*STRNE*` also accept wide C strings (`wchar_t*`). If a comparison
+of two wide strings fails, their values will be printed as UTF-8 narrow strings.
+
+**Availability**: Linux, Windows, Mac.
+
+**See also**: For more string comparison tricks (substring, prefix, suffix, and
+regular expression matching, for example), see [this](advanced.md) in the
+Advanced googletest Guide.
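+
+As a small sketch of the difference (the variable names are illustrative only):
+`EXPECT_EQ` on two `const char*` values compares the pointers, while
+`EXPECT_STREQ` compares the characters they point to:
+
+```c++
+TEST(StringComparisonSketch, ValueVsPointer) {
+  char greeting[] = "hello";  // A local buffer with its own storage.
+  const char* p = greeting;
+  EXPECT_STREQ(p, "hello");   // Passes: the contents are equal.
+  // EXPECT_EQ(p, "hello") would compare addresses instead, and would fail
+  // here because `greeting` does not share storage with the literal.
+
+  std::string s = "hello";
+  EXPECT_EQ(s, "hello");      // Passes: std::string compares by value.
+}
+```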
+
+## Simple Tests
+
+To create a test:
+
+1.  Use the `TEST()` macro to define and name a test function. These are
+    ordinary C++ functions that don't return a value.
+2.  In this function, along with any valid C++ statements you want to include,
+    use the various googletest assertions to check values.
+3.  The test's result is determined by the assertions; if any assertion in the
+    test fails (either fatally or non-fatally), or if the test crashes, the
+    entire test fails. Otherwise, it succeeds.
+
+```c++
+TEST(TestSuiteName, TestName) {
+  ... test body ...
+}
+```
+
+`TEST()` arguments go from general to specific. The *first* argument is the name
+of the test suite, and the *second* argument is the test's name within the test
+suite. Both names must be valid C++ identifiers, and they should not contain
+any underscores (`_`). A test's *full name* consists of its containing test
+suite and its individual name. Tests from different test suites can have the
+same individual name.
+
+For example, let's take a simple integer function:
+
+```c++
+int Factorial(int n);  // Returns the factorial of n
+```
+
+A test suite for this function might look like:
+
+```c++
+// Tests factorial of 0.
+TEST(FactorialTest, HandlesZeroInput) {
+  EXPECT_EQ(Factorial(0), 1);
+}
+
+// Tests factorial of positive numbers.
+TEST(FactorialTest, HandlesPositiveInput) {
+  EXPECT_EQ(Factorial(1), 1);
+  EXPECT_EQ(Factorial(2), 2);
+  EXPECT_EQ(Factorial(3), 6);
+  EXPECT_EQ(Factorial(8), 40320);
+}
+```
+
+googletest groups the test results by test suites, so logically related tests
+should be in the same test suite; in other words, the first argument to their
+`TEST()` should be the same. In the above example, we have two tests,
+`HandlesZeroInput` and `HandlesPositiveInput`, that belong to the same test
+suite `FactorialTest`.
+
+When naming your test suites and tests, you should follow the same convention
+as for
+[naming functions and classes](https://google.github.io/styleguide/cppguide.html#Function_Names).
+
+**Availability**: Linux, Windows, Mac.
+
+## Test Fixtures: Using the Same Data Configuration for Multiple Tests {#same-data-multiple-tests}
+
+If you find yourself writing two or more tests that operate on similar data, you
+can use a *test fixture*. This allows you to reuse the same configuration of
+objects for several different tests.
+
+To create a fixture:
+
+1.  Derive a class from `::testing::Test`. Start its body with `protected:`, as
+    we'll want to access fixture members from sub-classes.
+2.  Inside the class, declare any objects you plan to use.
+3.  If necessary, write a default constructor or `SetUp()` function to prepare
+    the objects for each test. A common mistake is to spell `SetUp()` as
+    **`Setup()`** with a small `u` - use `override` in C++11 to make sure you
+    spelled it correctly.
+4.  If necessary, write a destructor or `TearDown()` function to release any
+    resources you allocated in `SetUp()`. To learn when you should use the
+    constructor/destructor and when you should use `SetUp()/TearDown()`, read
+    the [FAQ](faq.md#CtorVsSetUp).
+5.  If needed, define subroutines for your tests to share.
+
+When using a fixture, use `TEST_F()` instead of `TEST()` as it allows you to
+access objects and subroutines in the test fixture:
+
+```c++
+TEST_F(TestFixtureName, TestName) {
+  ... test body ...
+}
+```
+
+Like `TEST()`, the first argument is the test suite name, but for `TEST_F()`
+this must be the name of the test fixture class. You've probably guessed: `_F`
+is for fixture.
+
+Unfortunately, the C++ macro system does not allow us to create a single macro
+that can handle both types of tests. Using the wrong macro causes a compiler
+error.
+
+Also, you must first define a test fixture class before using it in a
+`TEST_F()`, or you'll get the compiler error "`virtual outside class
+declaration`".
+
+For each test defined with `TEST_F()`, googletest will create a *fresh* test
+fixture at runtime, immediately initialize it via `SetUp()`, run the test,
+clean up by calling `TearDown()`, and then delete the test fixture. Note that
+different tests in the same test suite have different test fixture objects, and
+googletest always deletes a test fixture before it creates the next one.
+googletest does **not** reuse the same test fixture for multiple tests. Any
+changes one test makes to the fixture do not affect other tests.
+
+As an example, let's write tests for a FIFO queue class named `Queue`, which has
+the following interface:
+
+```c++
+template <typename E>  // E is the element type.
+class Queue {
+ public:
+  Queue();
+  void Enqueue(const E& element);
+  E* Dequeue();  // Returns NULL if the queue is empty.
+  size_t size() const;
+  ...
+};
+```
+
+First, define a fixture class. By convention, you should give it the name
+`FooTest` where `Foo` is the class being tested.
+
+```c++
+class QueueTest : public ::testing::Test {
+ protected:
+  void SetUp() override {
+    q1_.Enqueue(1);
+    q2_.Enqueue(2);
+    q2_.Enqueue(3);
+  }
+
+  // void TearDown() override {}
+
+  Queue<int> q0_;
+  Queue<int> q1_;
+  Queue<int> q2_;
+};
+```
+
+In this case, `TearDown()` is not needed since we don't have to clean up after
+each test, other than what's already done by the destructor.
+
+Now we'll write tests using `TEST_F()` and this fixture.
+
+```c++
+TEST_F(QueueTest, IsEmptyInitially) {
+  EXPECT_EQ(q0_.size(), 0);
+}
+
+TEST_F(QueueTest, DequeueWorks) {
+  int* n = q0_.Dequeue();
+  EXPECT_EQ(n, nullptr);
+
+  n = q1_.Dequeue();
+  ASSERT_NE(n, nullptr);
+  EXPECT_EQ(*n, 1);
+  EXPECT_EQ(q1_.size(), 0);
+  delete n;
+
+  n = q2_.Dequeue();
+  ASSERT_NE(n, nullptr);
+  EXPECT_EQ(*n, 2);
+  EXPECT_EQ(q2_.size(), 1);
+  delete n;
+}
+```
+
+The above uses both `ASSERT_*` and `EXPECT_*` assertions. The rule of thumb is
+to use `EXPECT_*` when you want the test to continue to reveal more errors after
+the assertion failure, and use `ASSERT_*` when continuing after failure doesn't
+make sense. For example, the second assertion in the `Dequeue` test is
+`ASSERT_NE(n, nullptr)`, as we need to dereference the pointer `n` later, which
+would lead to a segfault when `n` is `NULL`.
+
+When these tests run, the following happens:
+
+1.  googletest constructs a `QueueTest` object (let's call it `t1`).
+2.  `t1.SetUp()` initializes `t1`.
+3.  The first test (`IsEmptyInitially`) runs on `t1`.
+4.  `t1.TearDown()` cleans up after the test finishes.
+5.  `t1` is destructed.
+6.  The above steps are repeated on another `QueueTest` object, this time
+    running the `DequeueWorks` test.
+
+**Availability**: Linux, Windows, Mac.
+
+## Invoking the Tests
+
+`TEST()` and `TEST_F()` implicitly register their tests with googletest. So,
+unlike with many other C++ testing frameworks, you don't have to re-list all
+your defined tests in order to run them.
+
+After defining your tests, you can run them with `RUN_ALL_TESTS()`, which
+returns `0` if all the tests are successful, or `1` otherwise. Note that
+`RUN_ALL_TESTS()` runs *all tests* in your link unit--they can be from
+different test suites, or even different source files.
+
+When invoked, the `RUN_ALL_TESTS()` macro:
+
+*   Saves the state of all googletest flags.
+
+*   Creates a test fixture object for the first test.
+
+*   Initializes it via `SetUp()`.
+
+*   Runs the test on the fixture object.
+
+*   Cleans up the fixture via `TearDown()`.
+
+*   Deletes the fixture.
+
+*   Restores the state of all googletest flags.
+
+*   Repeats the above steps for the next test, until all tests have run.
+
+If a fatal failure happens the subsequent steps will be skipped.
+
+> IMPORTANT: You must **not** ignore the return value of `RUN_ALL_TESTS()`, or
+> you will get a compiler error. The rationale for this design is that the
+> automated testing service determines whether a test has passed based on its
+> exit code, not on its stdout/stderr output; thus your `main()` function must
+> return the value of `RUN_ALL_TESTS()`.
+>
+> Also, you should call `RUN_ALL_TESTS()` only **once**. Calling it more than
+> once conflicts with some advanced googletest features (e.g., thread-safe
+> [death tests](advanced.md#death-tests)) and thus is not supported.
+
+**Availability**: Linux, Windows, Mac.
+
+## Writing the main() Function
+
+Write your own `main()` function, which should return the value of
+`RUN_ALL_TESTS()`.
+
+You can start from this boilerplate:
+
+```c++
+#include "this/package/foo.h"
+#include "gtest/gtest.h"
+
+namespace {
+
+// The fixture for testing class Foo.
+class FooTest : public ::testing::Test {
+ protected:
+  // You can remove any or all of the following functions if their bodies
+  // are empty.
+
+  FooTest() {
+    // You can do set-up work for each test here.
+  }
+
+  ~FooTest() override {
+    // You can do clean-up work that doesn't throw exceptions here.
+  }
+
+  // If the constructor and destructor are not enough for setting up
+  // and cleaning up each test, you can define the following methods:
+
+  void SetUp() override {
+    // Code here will be called immediately after the constructor (right
+    // before each test).
+  }
+
+  void TearDown() override {
+    // Code here will be called immediately after each test (right
+    // before the destructor).
+  }
+
+  // Objects declared here can be used by all tests in the test suite for Foo.
+};
+
+// Tests that the Foo::Bar() method does Abc.
+TEST_F(FooTest, MethodBarDoesAbc) {
+  const std::string input_filepath = "this/package/testdata/myinputfile.dat";
+  const std::string output_filepath = "this/package/testdata/myoutputfile.dat";
+  Foo f;
+  EXPECT_EQ(f.Bar(input_filepath, output_filepath), 0);
+}
+
+// Tests that Foo does Xyz.
+TEST_F(FooTest, DoesXyz) {
+  // Exercises the Xyz feature of Foo.
+}
+
+}  // namespace
+
+int main(int argc, char **argv) {
+  ::testing::InitGoogleTest(&argc, argv);
+  return RUN_ALL_TESTS();
+}
+```
+
+The `::testing::InitGoogleTest()` function parses the command line for
+googletest flags, and removes all recognized flags. This allows the user to
+control a test program's behavior via various flags, which we'll cover in
+the [Advanced Guide](advanced.md). You **must** call this function before
+calling `RUN_ALL_TESTS()`, or the flags won't be properly initialized.
+
+On Windows, `InitGoogleTest()` also works with wide strings, so it can be used
+in programs compiled in `UNICODE` mode as well.
+
+But maybe you think that writing all those `main()` functions is too much work?
+We agree with you completely, and that's why Google Test provides a basic
+implementation of `main()`. If it fits your needs, then just link your test
+with the `gtest_main` library and you are good to go.
+
+NOTE: `ParseGUnitFlags()` is deprecated in favor of `InitGoogleTest()`.
+
+## Known Limitations
+
+*   Google Test is designed to be thread-safe. The implementation is
+    thread-safe on systems where the `pthreads` library is available. It is
+    currently _unsafe_ to use Google Test assertions from two threads
+    concurrently on other systems (e.g. Windows). In most tests this is not an
+    issue as usually the assertions are done in the main thread. If you want to
+    help, you can volunteer to implement the necessary synchronization
+    primitives in `gtest-port.h` for your platform.
diff --git a/security/nss/gtests/google_test/gtest/docs/pump_manual.md b/security/nss/gtests/google_test/gtest/docs/pump_manual.md
new file mode 100644
index 0000000000..10b3c5ff08
--- /dev/null
+++ b/security/nss/gtests/google_test/gtest/docs/pump_manual.md
@@ -0,0 +1,190 @@
+<b>P</b>ump is <b>U</b>seful for <b>M</b>eta <b>P</b>rogramming.
+
+# The Problem
+
+Template and macro libraries often need to define many classes, functions, or
+macros that vary only (or almost only) in the number of arguments they take.
+It's a lot of repetitive, mechanical, and error-prone work.
+
+Variadic templates and variadic macros can alleviate the problem. However, while
+both are being considered by the C++ committee, neither is in the standard yet
+or widely supported by compilers. Thus they are often not a good choice,
+especially when your code needs to be portable. And their capabilities are still
+limited.
+
+As a result, authors of such libraries often have to write scripts to generate
+their implementation. However, our experience is that it's tedious to write such
+scripts, which tend to reflect the structure of the generated code poorly and
+are often hard to read and edit. For example, a small change needed in the
+generated code may require some non-intuitive, non-trivial changes in the
+script. This is especially painful when experimenting with the code.
+
+# Our Solution
+
+Pump (for Pump is Useful for Meta Programming, Pretty Useful for Meta
+Programming, or Practical Utility for Meta Programming, whichever you prefer) is
+a simple meta-programming tool for C++. The idea is that a programmer writes a
+`foo.pump` file which contains C++ code plus meta code that manipulates the C++
+code. The meta code can handle iterations over a range, nested iterations, local
+meta variable definitions, simple arithmetic, and conditional expressions. You
+can view it as a small Domain-Specific Language. The meta language is designed
+to be non-intrusive (so that it won't confuse Emacs' C++ mode, for example) and
+concise, making Pump code intuitive and easy to maintain.
+
+## Highlights
+
+*   The implementation is in a single Python script and thus ultra portable: no
+    build or installation is needed and it works across platforms.
+*   Pump tries to be smart with respect to
+    [Google's style guide](https://github.com/google/styleguide): it breaks
+    long lines (easy to have when they are generated) at acceptable places to
+    fit within 80 columns and indents the continuation lines correctly.
+*   The format is human-readable and more concise than XML.
+*   The format works relatively well with Emacs' C++ mode.
+
+## Examples
+
+The following Pump code (where meta keywords start with `$`, `[[` and `]]` are
+meta brackets, and `$$` starts a meta comment that ends with the line):
+
+```
+$var n = 3     $$ Defines a meta variable n.
+$range i 0..n  $$ Declares the range of meta iterator i (inclusive).
+$for i [[
+               $$ Meta loop.
+// Foo$i does blah for $i-ary predicates.
+$range j 1..i
+template <size_t N $for j [[, typename A$j]]>
+class Foo$i {
+$if i == 0 [[
+  blah a;
+]] $elif i <= 2 [[
+  blah b;
+]] $else [[
+  blah c;
+]]
+};
+
+]]
+```
+
+will be translated by the Pump compiler to:
+
+```cpp
+// Foo0 does blah for 0-ary predicates.
+template <size_t N>
+class Foo0 {
+  blah a;
+};
+
+// Foo1 does blah for 1-ary predicates.
+template <size_t N, typename A1>
+class Foo1 {
+  blah b;
+};
+
+// Foo2 does blah for 2-ary predicates.
+template <size_t N, typename A1, typename A2>
+class Foo2 {
+  blah b;
+};
+
+// Foo3 does blah for 3-ary predicates.
+template <size_t N, typename A1, typename A2, typename A3>
+class Foo3 {
+  blah c;
+};
+```
+
+In another example,
+
+```
+$range i 1..n
+Func($for i + [[a$i]]);
+$$ The text between i and [[ is the separator between iterations.
+```
+
+will generate one of the following lines (without the comments), depending on
+the value of `n`:
+
+```cpp
+Func();              // If n is 0.
+Func(a1);            // If n is 1.
+Func(a1 + a2);       // If n is 2.
+Func(a1 + a2 + a3);  // If n is 3.
+// And so on...
+```
+
+## Constructs
+
+We support the following meta programming constructs:
+
+| Construct                        | Meaning                                                                                         |
+| :------------------------------- | :---------------------------------------------------------------------------------------------- |
+| `$var id = exp`                  | Defines a named constant value. `$id` is valid until the end of the current meta lexical block. |
+| `$range id exp..exp`             | Sets the range of an iteration variable, which can be reused in multiple loops later.           |
+| `$for id sep [[ code ]]`         | Iteration. The range of `id` must have been defined earlier. `$id` is valid in `code`.          |
+| `$($)`                           | Generates a single `$` character.                                                               |
+| `$id`                            | Value of the named constant or iteration variable.                                              |
+| `$(exp)`                         | Value of the expression.                                                                        |
+| `$if exp [[ code ]] else_branch` | Conditional.                                                                                    |
+| `[[ code ]]`                     | Meta lexical block.                                                                             |
+| `cpp_code`                       | Raw C++ code.                                                                                   |
+| `$$ comment`                     | Meta comment.                                                                                   |
+
+**Note:** To give the user some freedom in formatting the Pump source code, Pump
+ignores a new-line character if it's right after `$for foo` or next to `[[` or
+`]]`. Without this rule you'll often be forced to write very long lines to get
+the desired output. Therefore sometimes you may need to insert an extra new-line
+in such places for a new-line to show up in your output.
+
+## Grammar
+
+```ebnf
+code ::= atomic_code*
+atomic_code ::= $var id = exp
+    | $var id = [[ code ]]
+    | $range id exp..exp
+    | $for id sep [[ code ]]
+    | $($)
+    | $id
+    | $(exp)
+    | $if exp [[ code ]] else_branch
+    | [[ code ]]
+    | cpp_code
+sep ::= cpp_code | empty_string
+else_branch ::= $else [[ code ]]
+    | $elif exp [[ code ]] else_branch
+    | empty_string
+exp ::= simple_expression_in_Python_syntax
+```
+
+## Code
+
+You can find the source code of Pump in [scripts/pump.py](../scripts/pump.py).
+It is still very unpolished and lacks automated tests, although it has been
+successfully used many times. If you find a chance to use it in your project,
+please let us know what you think! We also welcome help on improving Pump.
+
+## Real Examples
+
+You can find real-world applications of Pump in
+[Google Test](https://github.com/google/googletest/tree/master/googletest) and
+[Google Mock](https://github.com/google/googletest/tree/master/googlemock). The
+source file `foo.h.pump` generates `foo.h`.
+
+## Tips
+
+*   If a meta variable is followed by a letter or digit, you can separate them
+    using `[[]]`, which inserts an empty string. For example `Foo$j[[]]Helper`
+    generates `Foo1Helper` when `j` is 1.
+*   To avoid extra-long Pump source lines, you can break a line anywhere you
+    want by inserting `[[]]` followed by a new line. Since any new-line
+    character next to `[[` or `]]` is ignored, the generated code won't contain
+    this new line.
diff --git a/security/nss/gtests/google_test/gtest/docs/samples.md b/security/nss/gtests/google_test/gtest/docs/samples.md
new file mode 100644
index 0000000000..aaa5883830
--- /dev/null
+++ b/security/nss/gtests/google_test/gtest/docs/samples.md
@@ -0,0 +1,22 @@
+# Googletest Samples {#samples}
+
+If you're like us, you'd like to look at
+[googletest samples](https://github.com/google/googletest/tree/master/googletest/samples).
+The sample directory has a number of well-commented samples showing how to use a
+variety of googletest features.
+
+*   Sample #1 shows the basic steps of using googletest to test C++ functions.
+*   Sample #2 shows a more complex unit test for a class with multiple member
+    functions.
+*   Sample #3 uses a test fixture.
+*   Sample #4 teaches you how to use googletest and `googletest.h` together to
+    get the best of both libraries.
+*   Sample #5 puts shared testing logic in a base test fixture, and reuses it
+    in derived fixtures.
+*   Sample #6 demonstrates type-parameterized tests.
+*   Sample #7 teaches the basics of value-parameterized tests.
+*   Sample #8 shows using `Combine()` in value-parameterized tests.
+*   Sample #9 shows use of the listener API to modify Google Test's console
+    output and the use of its reflection API to inspect test results.
+*   Sample #10 shows use of the listener API to implement a primitive memory
+    leak checker.