Diffstat
33 files changed, 5682 insertions, 0 deletions
diff --git a/testing/web-platform/tests/docs/writing-tests/ahem.md b/testing/web-platform/tests/docs/writing-tests/ahem.md new file mode 100644 index 0000000000..30a3fcde26 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/ahem.md @@ -0,0 +1,78 @@ +# The Ahem Font + +A font called [Ahem][ahem-readme] has been developed which consists of +some very well defined glyphs of precise sizes and shapes; it is +especially useful for testing font and text properties. Installation +instructions are available in [Running Tests from the Local +System](../running-tests/from-local-system). + +The font's em-square is exactly square. Its ascent and descent +combined is exactly the size of the em square; this means that the +font's extent is exactly the same as its line-height, meaning that it +can be exactly aligned with padding, borders, margins, and so +forth. Its alphabetic baseline is 0.2em above its bottom, and 0.8em +below its top. + +The font has four glyphs: + +* X (U+0058): A square exactly 1em in height and width. +* p (U+0070): A rectangle exactly 0.2em high, 1em wide, and aligned so +that its top is flush with the baseline. +* É (U+00C9): A rectangle exactly 0.8em high, 1em wide, and aligned so +that its bottom is flush with the baseline. +* [space] (U+0020): A transparent space exactly 1em high and wide. + +Most other US-ASCII characters in the font have the same glyph as X. + +## Usage +Ahem should be loaded in tests as a web font. To simplify this, a test can +link to the `/fonts/ahem.css` stylesheet: + +``` +<link rel="stylesheet" type="text/css" href="/fonts/ahem.css" /> +``` + +If the test uses the Ahem font, make sure its computed font-size is a +multiple of 5px, otherwise baseline alignment may be rendered +inconsistently. A minimum computed font-size of 20px is suggested. + +An explicit (i.e., not `normal`) line-height should also always be +used, with the difference between the computed line-height and +font-size being divisible by 2. 
In the common case, having the same +value for both is desirable. + +Other font properties should be left at their default values; +to that end, the `font` shorthand should normally be used. + +As a result, what is typically recommended is: + + +``` css +div { + font: 25px/1 Ahem; +} +``` + +Some things to avoid: + +``` css +div { + font: 1em/1em Ahem; /* computed font-size is typically 16px and potentially + affected by parent elements */ +} + +div { + font: 20px Ahem; /* computed line-height value is normal */ +} + +div { + /* doesn't use font shorthand; font-weight and font-style are inherited */ + font-family: Ahem; + font-size: 25px; + line-height: 50px; /* the difference between computed line-height and + computed font-size is not divisible by 2 + (50 - 25 = 25; 25 / 2 = 12.5). */ +} +``` + +[ahem-readme]: https://www.w3.org/Style/CSS/Test/Fonts/Ahem/README diff --git a/testing/web-platform/tests/docs/writing-tests/assumptions.md b/testing/web-platform/tests/docs/writing-tests/assumptions.md new file mode 100644 index 0000000000..5afa416121 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/assumptions.md @@ -0,0 +1,40 @@ +# Test Assumptions + +The tests make a number of assumptions about the user agent, and new +tests can freely rely on these assumptions being true: + + * The device is a full-color device. + * The device has a viewport width of at least 800px. + * The UA imposes no minimum font size. + * The `medium` `font-size` computes to 16px. + * The canvas background is `white`. + * The initial value of `color` is `black`. + * The user stylesheet is empty (except where indicated by the tests). + * The device is interactive and uses scroll bars. + * The HTML `div` element is assigned `display: block;`, the + `unicode-bidi` property may be declared, and no other property + declarations. 
+ <!-- unicode-bidi: isolate should be required; we currently don't + assume this because Chrome and Safari are yet to ship this: see + https://bugs.chromium.org/p/chromium/issues/detail?id=296863 and + https://bugs.webkit.org/show_bug.cgi?id=65617 --> + * The HTML `span` element is assigned `display: inline;` and no other + property declaration. + * The HTML `p` element is assigned `display: block;` + * The HTML `li` element is assigned `display: list-item;` + * The HTML `table` elements `table`, `tbody`, `tr`, and `td` are + assigned the `display` values `table`, `table-row-group`, + `table-row`, and `table-cell`, respectively. + * The UA implements reasonable line-breaking behavior; e.g., it is + assumed that spaces between alphanumeric characters provide line + breaking opportunities and that UAs will not break at every + opportunity, but only near the end of a line unless a line break is + forced. + +Tests for printing behavior make some further assumptions: + + * The UA is set to print background colors and, if it supports + graphics, background images. + * The UA implements reasonable page-breaking behavior; e.g., it is + assumed that UAs will not break at every opportunity, but only near + the end of a page unless a page break is forced. diff --git a/testing/web-platform/tests/docs/writing-tests/channels.md b/testing/web-platform/tests/docs/writing-tests/channels.md new file mode 100644 index 0000000000..9296247fca --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/channels.md @@ -0,0 +1,159 @@ +# Message Channels + +```eval_rst + +.. contents:: Table of Contents + :depth: 3 + :local: + :backlinks: none +``` + +Message channels provide a mechanism to communicate across globals, +including in cases where there is no client-side mechanism to +establish a communication channel (i.e. when the globals are in +different browsing context groups). 
+ +## Markup ## + +```html +<script src="/resources/channels.sub.js"></script> +``` + +Channels can be used in any global and are not specifically linked to +`testharness.js`. + +### High Level API ### + +The high level API provides a way to message another global, and to +execute functions in that global and return the result. + +Globals wanting to receive messages using the high level API have to +be loaded with a `uuid` query parameter in their URL, with a value +that's a UUID. This will be used to identify the channel dedicated to +messages sent to that context. + +The context must call either `global_channel` or +`start_global_channel` when it's ready to receive messages. This +returns a `RecvChannel` that can be used to add message handlers. + +```eval_rst + +.. js:autofunction:: global_channel + :short-name: +.. js:autofunction:: start_global_channel + :short-name: +.. js:autoclass:: RemoteGlobalCommandRecvChannel + :members: +``` + +Contexts wanting to communicate with the remote context do so using a +`RemoteGlobal` object. + +```eval_rst + +.. js:autoclass:: RemoteGlobal + :members: +``` + +#### Remote Objects #### + +By default objects (e.g. script arguments) sent to the remote global +are cloned. In order to support referencing objects owned by the +originating global, there is a `RemoteObject` type which can pass a +reference to an object across a channel. + +```eval_rst + +.. 
js:autoclass:: RemoteObject + :members: +``` + +#### Example #### + +test.html + +```html +<!doctype html> +<title>call example</title> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<script src="/resources/channel.js"></script> + +<script> +promise_test(async t => { + let remote = new RemoteGlobal(); + window.open(`child.html?uuid=${remote.uuid}`, "_blank", "noopener"); + let result = await remote.call(id => { + return document.getElementById(id).textContent; + }, "test"); + assert_equals(result, "PASS"); +}); +</script> +``` + +child.html + +```html +<script src="/resources/channel.js"></script> + +<p id="nottest">FAIL</p> +<p id="test">PASS</p> +<script> +start_global_channel(); +</script> +``` + +### Low Level API ### + +The high level API is implemented in terms of a channel +abstraction. Each channel is identified by a UUID, and corresponds to +a message queue hosted by the server. Channels are multiple producer, +single consumer, so there's only one entity responsible for +processing messages sent to the channel. This is designed to +discourage race conditions where multiple consumers try to process the +same message. + +On the client side, the read side of a channel is represented by a +`RecvChannel` object, and the send side by `SendChannel`. An initial +channel pair is created with the `channel()` function. + +```eval_rst + +.. js:autofunction:: channel + :members: +.. js:autoclass:: Channel + :members: +.. js:autoclass:: SendChannel + :members: +.. js:autoclass:: RecvChannel + :members: +``` + +### Navigation and bfcache + +For specific use cases around bfcache, it's important to be able to +ensure that no network connections (including websockets) remain open +at the time of navigation, otherwise the page will be excluded from +bfcache. This is handled as follows: + +* A `disconnectReader` method on `SendChannel`. This causes a + server-initiated disconnect of the corresponding `RecvChannel` + websocket. 
The idea is to allow + a page to send a command that will + initiate a navigation, then without knowing when the navigation is + done, send further commands that will be processed when the + `RecvChannel` reconnects. Without the disconnect, commands sent before + the navigation but not yet processed could be buffered by the remote + and then lost during the navigation. + +* A `close_all_channel_sockets()` function. This just closes all the open + websockets associated with channels in the global in which it's + called. Any channel then has to be reconnected to be used + again. Calling `close_all_channel_sockets()` right before navigating + will leave you in a state with no open websocket connections (unless + something happens to reopen one before the navigation starts). + +```eval_rst + +.. js:autofunction:: close_all_channel_sockets + :members: +``` diff --git a/testing/web-platform/tests/docs/writing-tests/crashtest.md b/testing/web-platform/tests/docs/writing-tests/crashtest.md new file mode 100644 index 0000000000..0166bdeb75 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/crashtest.md @@ -0,0 +1,29 @@ +# crashtest tests + +Crash tests are used to ensure that a document can be loaded without +crashing or experiencing other low-level issues that may be checked by +implementation-specific tooling (e.g. leaks, asserts, or sanitizer +failures). + +Crashtests are identified by the string `-crash` in the filename immediately +before the extension, or by being in a directory called `crashtests`. Examples: + +- `css/css-foo/bar-crash.html` is a crash test +- `css/css-foo/crashtests/bar.html` is a crash test +- `css/css-foo/bar-crash-001.html` is **not** a crash test + +The simplest crashtest is a single HTML file with any content. The +test passes if the load event is reached, and the browser finishes +painting, without terminating. + +In some cases crashtests may need to perform work after the initial page load. 
+In this case the test may specify a `class=test-wait` attribute on the root +element. The test will not complete until that attribute is removed from the +root. At the time when the test would otherwise have ended a `TestRendered` +event is emitted; test authors can use this event to perform modifications that +are guaranteed not to be batched with the initial paint. This matches the +behaviour of [reftests](reftests). + +Note that crash tests **do not** need to include `testharness.js` or use any of +the [testharness API](testharness-api.md) (e.g. they do not need to declare a +`test(..)`). diff --git a/testing/web-platform/tests/docs/writing-tests/css-metadata.md b/testing/web-platform/tests/docs/writing-tests/css-metadata.md new file mode 100644 index 0000000000..9d8ebeddff --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/css-metadata.md @@ -0,0 +1,188 @@ +# CSS Metadata + +CSS tests have some additional metadata. + +### Specification Links + +Each test **requires** at least one link to specifications: + +``` html +<link rel="help" href="RELEVANT_SPEC_SECTION" /> +``` + +The specification link elements provide a way to align the test with +information in the specification being tested. + +* Links should link to relevant sections within the specification +* Use the anchors from the specification's Table of Contents +* A test can have multiple specification links + * Always list the primary section that is being tested as the + first item in the list of specification links + * Order the list from the most used/specific to least used/specific + * There is no need to list common incidental features like the + color green if it is being used to validate the test unless the + case is specifically testing the color green +* If the test is part of multiple test suites, link to the relevant + sections of each spec. 
+ +Example 1: + +``` html +<link rel="help" +href="https://www.w3.org/TR/CSS21/text.html#alignment-prop" /> +``` + +Example 2: + +``` html +<link rel="help" +href="https://www.w3.org/TR/CSS21/text.html#alignment-prop" /> +<link rel="help" href="https://www.w3.org/TR/CSS21/visudet.html#q7" /> +<link rel="help" +href="https://www.w3.org/TR/CSS21/visudet.html#line-height" /> +<link rel="help" +href="https://www.w3.org/TR/CSS21/colors.html#background-properties" /> +``` + +### Requirement Flags + +If a test has any of the following requirements, a meta element can be added +to include the corresponding flags (tokens): + +<table> +<tr> + <th>Token</th> + <th>Description</th> +</tr> +<tr> + <td>asis</td> + <td>The test has particular markup formatting requirements and + cannot be re-serialized.</td> +</tr> +<tr> + <td>HTMLonly</td> + <td>Test case is only valid for HTML</td> +</tr> +<tr> + <td>invalid</td> + <td>Tests handling of invalid CSS. Note: This case contains CSS + properties and syntax that may not validate.</td> +</tr> +<tr> + <td>may</td> + <td>Behavior tested is preferred but OPTIONAL. + <a href="https://www.ietf.org/rfc/rfc2119.txt">[RFC2119]</a></td> +</tr> +<tr> + <td>nonHTML</td> + <td>Test case is only valid for formats besides HTML (e.g. XHTML + or arbitrary XML)</td> +</tr> +<tr> + <td>paged</td> + <td>Only valid for paged media</td> +</tr> +<tr> + <td>scroll</td> + <td>Only valid for continuous (scrolling) media</td> +</tr> +<tr> + <td>should</td> + <td>Behavior tested is RECOMMENDED, but not REQUIRED. <a + href="https://www.ietf.org/rfc/rfc2119.txt">[RFC2119]</a></td> +</tr> +</table> + +The following flags are **deprecated** and should not be declared by new tests. +Tests which satisfy the described criteria should simply be designated as +"manual" using [the `-manual` file name flag](file-names). + +<table> +<tr> + <th>Token</th> + <th>Description</th> +</tr> +<tr> + <td>animated</td> + <td>Test is animated in final state. 
(Cannot be verified using + reftests/screenshots.)</td> +</tr> +<tr> + <td>font</td> + <td>Requires a specific font to be installed at the OS level. (A link to the + font to be installed must be provided; this is not needed if only web + fonts are used.)</td> +</tr> +<tr> + <td>history</td> + <td>User agent session history is required. Testing :visited is a + good example where this may be used.</td> +</tr> +<tr> + <td>interact</td> + <td>Requires human interaction (such as for testing scrolling + behavior)</td> +</tr> +<tr> + <td>speech</td> + <td>Device supports audio output. Text-to-speech (TTS) engine + installed</td> +</tr> +<tr> + <td>userstyle</td> + <td>Requires a user style sheet to be set</td> +</tr> +</table> + + +Example 1 (one token applies): + +``` html +<meta name="flags" content="invalid" /> +``` + +Example 2 (multiple tokens apply): + +``` html +<meta name="flags" content="asis HTMLonly may" /> +``` + +### Test Assertions + +``` html +<meta name="assert" content="TEST ASSERTION" /> +``` + +This element should contain a complete detailed statement expressing +what specifically the test is attempting to prove. If the assertion +is only valid in certain cases, those conditions should be described +in the statement. + +The assertion should not be: + +* A copy of the title text +* A copy of the test verification instructions +* A duplicate of another assertion in the test suite +* A line or reference from the CSS specification unless that line is + a complete assertion when taken out of context. + +The test assertion is **optional**, but is highly recommended. +It helps the reviewer understand +the goal of the test so that he or she can make sure it is being +tested correctly. Also, in case a problem is found with the test +later, the testing method (e.g. using `color` to determine pass/fail) +can be changed (e.g. to using `background-color`) while preserving +the intent of the test (e.g. testing support for ID selectors). 
Examples of good test assertions: + +* "This test checks that a background image with no intrinsic size + covers the entire padding box." +* "This test checks that 'word-spacing' affects each space (U+0020) + and non-breaking space (U+00A0)." +* "This test checks that if 'top' and 'bottom' offsets are specified + on an absolutely-positioned replaced element, then any remaining + space is split amongst the 'auto' vertical margins." +* "This test checks that 'text-indent' affects only the first line + of a block container if that line is also the first formatted line + of an element." diff --git a/testing/web-platform/tests/docs/writing-tests/css-user-styles.md b/testing/web-platform/tests/docs/writing-tests/css-user-styles.md new file mode 100644 index 0000000000..9dac5af651 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/css-user-styles.md @@ -0,0 +1,90 @@ +# CSS User Stylesheets + +Some tests may require special user style sheets to be applied in order +for the case to be verified. In order for proper indications and +prerequisites to be displayed, every user style sheet should contain the +following rules. + +``` css +#user-stylesheet-indication +{ + /* Used by the harness to display an indication there is a user + style sheet applied */ + display: block!important; +} +``` + +The rule `#user-stylesheet-indication` is to be used by any +harness running the test suite. + +A harness should identify tests that need a user style sheet by +looking at their flags meta tag. It should then display appropriate +messages indicating if a style sheet is applied or if a style sheet +should not be applied. 
Harness style sheet rules: + +``` css +.userstyle +{ + color: green; + display: none; +} +.nouserstyle +{ + color: red; + display: none; +} +``` + +Harness userstyle flag found: + +``` html +<p id="user-stylesheet-indication" class="userstyle">A user style +sheet is applied.</p> +``` + +Harness userstyle flag NOT found: + +``` html +<p id="user-stylesheet-indication" class="nouserstyle">A user style +sheet is applied.</p> +``` + +Within the test case it is recommended that the case itself indicate +which user style sheet it requires. + +Examples: (code for the [`cascade.css`][cascade-css] file) + +``` css +#cascade /* ID name should match user style sheet file name */ +{ + /* Used by the test to hide the prerequisite */ + display: none; +} +``` + +The rule `#cascade` in the example above is used by the test +page to hide the prerequisite text. The rule name should match the +user style sheet CSS file name in order to keep this orderly. + +Examples: (code for [the `cascade-###.xht` files][cascade-xht]) + +``` html +<p id="cascade"> + PREREQUISITE: The <a href="support/cascade.css"> + "cascade.css"</a> file is enabled as the user agent's user style + sheet. +</p> +``` + +The id value should match the user style sheet CSS file name and the +user style sheet rule that is used to hide this text when the style +sheet is properly applied. + +Please flag tests that require user style sheets with the userstyle +flag so people running the tests know that a user style sheet is +required. 
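As with other requirement flags, the userstyle flag is declared in a `flags` meta element (this follows the flag syntax shown in the CSS metadata documentation):

``` html
<meta name="flags" content="userstyle" />
```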
+ +[cascade-css]: https://github.com/w3c/csswg-test/blob/master/css21/cascade/support/cascade.css +[cascade-xht]: https://github.com/w3c/csswg-test/blob/master/css21/cascade/cascade-001.xht diff --git a/testing/web-platform/tests/docs/writing-tests/file-names.md b/testing/web-platform/tests/docs/writing-tests/file-names.md new file mode 100644 index 0000000000..96296c4ff6 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/file-names.md @@ -0,0 +1,78 @@ +# File Name Flags + +The test filename is significant in determining the type of test it +contains, and enabling specific optional features. This page documents +the various flags available and their meaning. + +In some cases flags can also be set via a directory name, such that any file +that is a (recursive) descendant of the directory inherits the flag value. +These are individually documented for each flag that supports it. + + +### Test Type + +These flags must be the last element in the filename before the +extension, e.g. `foo-manual.html` will indicate a manual test, but +`foo-manual-other.html` will not. Unlike test features, test types +are mutually exclusive. + + +`-manual` + : Indicates that a test is a non-automated test. + +`-visual` + : Indicates that a file is a visual test. + + +### Test Features + +These flags are preceded by a `.` in the filename, and must +themselves precede any test type flag, but are otherwise unordered. + + +`.https` + : Indicates that a test is loaded over HTTPS. + +`.h2` + : Indicates that a test is loaded over HTTP/2. + +`.www` + : Indicates that a test is run on the `www` subdomain. + +`.sub` + : Indicates that a test uses the [server-side substitution](server-pipes.html#sub) + feature. + +`.window` + : (js files only) Indicates that the file generates a test in which + it is run in a Window environment. + +`.worker` + : (js files only) Indicates that the file generates a test in which + it is run in a dedicated worker environment. 
+ +`.any` + : (js files only) Indicates that the file generates tests in which it + is [run in multiple scopes](testharness). + +`.optional` + : Indicates that a test makes assertions about optional behavior in a + specification, typically marked by the [RFC 2119] "MAY" or "OPTIONAL" + keywords. This flag should not be used for "SHOULD"; such requirements + can be tested with regular tests, like "MUST". + +`.tentative` + : Indicates that a test makes assertions not yet required by any specification, + or in contradiction to some specification. This is useful when implementation + experience is needed to inform the specification. It should be apparent in + context why the test is tentative and what needs to be resolved to make it + non-tentative. + + This flag can be enabled for an entire directory (and all its descendants), + by naming the directory 'tentative'. For example, every test underneath + 'foo/tentative/' will be considered tentative. + +It's preferable that `.window`, `.worker`, and `.any` are immediately followed +by their final `.js` extension. + +[RFC 2119]: https://tools.ietf.org/html/rfc2119 diff --git a/testing/web-platform/tests/docs/writing-tests/general-guidelines.md b/testing/web-platform/tests/docs/writing-tests/general-guidelines.md new file mode 100644 index 0000000000..1689c064a3 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/general-guidelines.md @@ -0,0 +1,230 @@ +# General Test Guidelines + +### File Paths and Names + +When choosing where in the directory structure to put any new tests, +try to follow the structure of existing tests for that specification; +if there are no existing tests, it is generally recommended to create +subdirectories for each section. + +Due to path length limitations on Windows, test paths must be less +than 150 characters relative to the test root directory (this gives +vendors just over 100 characters for their own paths when running in +automation). 
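As a quick sanity check, over-long paths can be listed from the repository root with standard shell tools (this `find`/`awk` pipeline is just a sketch, not an official wpt tool):

```shell
# Print the length and path of any file whose path, relative to the
# current directory, exceeds 150 characters; "length($0) - 2" discounts
# the leading "./" that find prepends to each result.
find . -type f | awk '{ if (length($0) - 2 > 150) print length($0) - 2, $0 }'
```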
+ +File names should generally be somewhat descriptive of what is being +tested; very generic names like `001.html` are discouraged. A common +format is `test-topic-001.html`, where `test-topic` is a short +identifier that describes the test. It should avoid conjunctions, +articles, and prepositions as it should be as concise as possible. The +integer that follows is normally just increased incrementally, and +padded to three digits. (If you'd end up with more than 999 tests, +your `test-topic` is probably too broad!) + +The test filename is significant in enabling specific optional features, such as HTTPS +or server-side substitution. See the documentation on [file name flags][file-name-flags] +for more details. + +In the css directory, the file names should be unique within the whole +css/ directory, regardless of where they are in the directory structure. + +### HTTPS + +By default, tests are served over plain HTTP. If a test requires HTTPS +it must be given a filename containing `.https.`, e.g. +`test-secure.https.html`, or be the generated service worker test of a +`.https`-less `.any` test. For more details see the documentation on +[file names][file-name-flags]. + +### HTTP2 + +If a test must be served from an HTTP/2 server, it must be given a +filename containing `.h2`. + +#### Support Files + +Various support files are available in the directories named `/common/`, +`/media/`, and `/css/support/`. Reusing existing resources is encouraged where +possible, as is adding generally-useful files to these common areas rather than +to specific test suites. + + +#### Tools + +Sometimes you may want to add a script to the repository that's meant +to be used from the command line, not from a browser (e.g., a script +for generating test files). 
If you want to ensure (e.g., for security +reasons) that such scripts will only be usable from the command line +but won't be handled by the HTTP server then place them in a `tools` +subdirectory at the appropriate level—the server will then return a +404 if they are requested. + +For example, if you wanted to add a script for use with tests in the +`notifications` directory, create the `notifications/tools` +subdirectory and put your script there. + + +### File Formats + +Tests are generally formatted as HTML (including XHTML) or XML (including SVG). +Some test types support other formats: + +- [testharness.js tests](testharness) may be expressed as JavaScript files + ([the WPT server automatically generates the HTML documents for these][server + features]) +- [WebDriver specification tests](wdspec) are expressed as Python files + +The best way to determine how to format a new test is to look at how +similar tests have been formatted. You can also ask for advice in [the +project's matrix channel][matrix]. + + +### Character Encoding + +Except when specifically testing encoding, files must be encoded in +UTF-8. In file formats where UTF-8 is not the default encoding, they +must contain metadata to mark them as such (e.g., `<meta +charset=utf-8>` in HTML files) or be pure ASCII. + + +### Server Side Support + +The custom web server +supports [a variety of features][server features] useful for testing +browsers, including (but not limited to!) support for writing out +appropriate domains and custom (per-file and per-directory) HTTP +headers. + + +### Be Short + +Tests should be as short as possible. For reftests in particular +scrollbars at 800×600px window size must be avoided unless scrolling +behavior is specifically being tested. For all tests extraneous +elements on the page should be avoided so it is clear what is part of +the test (for a typical testharness test, the only content on the page +will be rendered by the harness itself). 
+ + +### Be Conservative + +Tests should generally avoid depending on edge case behavior of +features that they don't explicitly intend on testing. For example, +except where testing parsing, tests should contain +no [parse errors](https://validator.nu). + +This is not, however, to discourage testing of edge cases or +interactions between multiple features; such tests are an essential +part of ensuring interoperability of the web platform. When possible, use the +canonical support libraries provided by features; for more information, see the documentation on [testing interactions between features][interacting-features]. + +Tests should pass when the feature under test exposes the expected behavior, +and they should fail when the feature under test is not implemented or is +implemented incorrectly. Tests should not rely on unrelated features if doing +so causes failures in the latest stable release of [Apple +Safari][apple-safari], [Google Chrome][google-chrome], or [Mozilla +Firefox][mozilla-firefox]. They should, therefore, not rely on any features +aside from the one under test unless they are supported in all three browsers. + +Existing tests can be used as a guide to identify acceptable features. For +language features that are not used in existing tests, community-maintained +projects such as [the ECMAScript compatibility tables][es-compat] and +[caniuse.com][caniuse] provide an overview of basic feature support across the +browsers listed above. + +For JavaScript code that is re-used across many tests (e.g. `testharness.js` +and the files located in the directory named `common`), only use language +features that have been supported by each of the major browser engines above +for over a year. This practice avoids introducing test failures for consumers +maintaining older JavaScript runtimes. + +Patches to make tests run on older versions or other browsers will be accepted +provided they are relatively simple and do not add undue complexity to the +test. 
+ + +### Be Cross-Platform + +Tests should be as cross-platform as reasonably possible, working +across different devices, screen resolutions, paper sizes, etc. The +assumptions that can be relied on are documented [here][assumptions]; +tests that rely on anything else should be manual tests that document +their assumptions. + +Fonts cannot be relied on to be either installed or to have specific +metrics. As such, in most cases when a known font is needed, [Ahem][ahem] +should be used and loaded as a web font. In other cases, `@font-face` +should be used. + + +### Be Self-Contained + +Tests must not depend on external network resources. When these tests +are run on CI systems, they are typically configured with access to +external resources disabled, so tests that try to access them will +fail. Where tests want to use multiple hosts, this is possible through +a known set of subdomains and the [text substitution features of +wptserve](server-features). + + +### Be Self-Describing + +Tests should make it obvious when they pass and when they fail. It +shouldn't be necessary to consult the specification to figure out +whether a test has passed or failed. + + +### Style Rules + +A number of style rules should be applied to the test file. These are +not uniformly enforced throughout the existing tests, but will be for +new tests. Any of these rules may be broken if the test demands it: + + * No trailing whitespace + * Use spaces rather than tabs for indentation + * Use UNIX-style line endings (i.e. no CR characters at EOL) + +We have a lint tool for catching these and other common mistakes. 
You +can run it manually by starting the `wpt` executable from the root of +your local web-platform-tests working directory, and invoking the +`lint` subcommand, like this: + +``` +./wpt lint +``` + +The lint tool is also run automatically for every submitted pull request, +and reviewers will not merge branches with tests that have lint errors, so +you must fix any errors the lint tool reports. For details on doing that, +see the [lint-tool documentation][lint-tool]. + +In the unusual case that an error report concerns something essential to a +test, or for some other exceptional reason shouldn't prevent the test from +being merged, update and commit the `lint.ignore` file in the web-platform-tests +root directory to suppress the error reports. For details on doing that, +see the [lint-tool documentation][lint-tool]. + + +## CSS-Specific Requirements + +In order to be included in an official specification test suite, tests +for CSS have some additional requirements for: + +* [Metadata][css-metadata], and +* [User style sheets][css-user-styles]. 
+ + +[server features]: server-features +[assumptions]: assumptions +[ahem]: ahem +[matrix]: https://app.element.io/#/room/#wpt:matrix.org +[lint-tool]: lint-tool +[css-metadata]: css-metadata +[css-user-styles]: css-user-styles +[file-name-flags]: file-names +[interacting-features]: interacting-features +[mozilla-firefox]: https://mozilla.org/firefox +[google-chrome]: https://google.com/chrome/browser/desktop/ +[apple-safari]: https://apple.com/safari +[es-compat]: https://kangax.github.io/compat-table/ +[caniuse]: https://caniuse.com/ diff --git a/testing/web-platform/tests/docs/writing-tests/github-intro.md b/testing/web-platform/tests/docs/writing-tests/github-intro.md new file mode 100644 index 0000000000..f1fc161a8a --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/github-intro.md @@ -0,0 +1,318 @@ +# Introduction to GitHub + +All the basics that you need to know are documented on this page, but for the +full GitHub documentation, visit [help.github.com][help]. + +If you are already an experienced Git/GitHub user, all you need to know is that +we use the [normal GitHub Pull Request workflow][github flow] for test +submissions. + +If you are a first-time GitHub user, read on for more details of the workflow. + +## Setup + +1. Create a GitHub account if you do not already have one on + [github.com][github]. + +2. Download and install the latest version of Git: + [https://git-scm.com/downloads][git]; please refer to the instructions there + for different platforms. + +3. Configure your settings so your commits are properly labeled: + + On Mac or Linux or Solaris, open the Terminal. + + On Windows, open Git Bash (From the Start Menu > Git > Git Bash). 
+ + At the prompt, type: + + $ git config --global user.name "Your Name" + + _This will be the name that is displayed with your test submissions_ + + Next, type: + + $ git config --global user.email "your_email@address.com" + + _This should be the email address you used to create the account in Step 1._ + +4. (Optional) If you don't want to enter your username and password every + time you talk to the remote server, you'll need to set up password caching. + See [Caching your GitHub password in Git][password-caching]. + +## Fork the test repository + +Now that you have Git set up, you will need to "fork" the test repository. Your +fork will be a completely independent version of the repository, hosted on +GitHub.com. This will enable you to [submit](#submit) your tests using a pull +request (more on this [below](#submit)). + +1. In the browser, go to [web-platform-tests on GitHub][main-repo]. + +2. Click the ![fork](/assets/forkbtn.png) button in the upper right. + +3. The fork will take several seconds, then you will be redirected to your + GitHub page for this forked repository. + You will now be at + **https://github.com/username/wpt**. + +4. After the fork is complete, you're ready to [clone](#clone). + +## Clone + +If your [fork](#fork) was successful, the next step is to clone (download a copy of the files). + +### Clone the test repository + +Open a command prompt in the directory where you want to keep the tests. Then +execute the following command: + + $ git clone https://github.com/username/wpt.git + +This will download the tests into a directory named for the repository: `wpt/`. + +You should now have a full copy of the test repository on your local +machine. Feel free to browse the directories on your hard drive. You can also +[browse them on github.com][main-repo] and see the full history of +contributions there. 
## Configure Remote / Upstream

Your forked repository is completely independent of the canonical repository,
which is commonly referred to as the "upstream" repository. Synchronizing your
forked repository with the upstream repository will keep your forked local copy
up-to-date with the latest commits.

In the vast majority of cases, the **only** upstream branch that you should
need to care about is `master`. If you see other branches in the repository,
you can generally safely ignore them.

1. On the command line, navigate to the directory where your forked copy of
   the repository is located.

2. Make sure that you are on the master branch. This will be the case if you
   just forked; otherwise switch to master.

       $ git checkout master

3. Next, add a remote pointing to the repository you forked from. This assigns
   the original repository to a remote called "upstream":

       $ git remote add upstream https://github.com/web-platform-tests/wpt.git

4. To pull in changes in the original repository that are not present in your
   local repository, first fetch them:

       $ git fetch -p upstream

   Then merge them into your local repository:

       $ git merge upstream/master

   We recommend using `-p` to "prune" the outdated branches that would
   otherwise accumulate in your local repository.

For additional information, please see the [GitHub docs][github-fork-docs].

## Configure your environment

If all you intend to do is to load [manual tests](../writing-tests/manual) or [reftests](../writing-tests/reftests) from your local file system,
the above setup should be sufficient.
But many tests (and in particular, all [testharness.js tests](../writing-tests/testharness)) require a local web server.

See [Local Setup][local-setup] for more information.

## Branch

Now that you have everything locally, create a branch for your tests.
_Note: If you have already been through these steps and created a branch
and now want to create another branch, you should always do so from the
master branch. To do this follow the steps from the beginning of the [previous
section](#configure-remote-upstream). If you don't start with a clean master
branch you will end up with a big nested mess._

At the command line:

    $ git checkout -b topic

This will create a branch named `topic` and immediately
switch this to be your active working branch.

The branch name should describe specifically what you are testing. For example:

    $ git checkout -b flexbox-flex-direction-prop

You're ready to start writing tests! Come back to this page when you're ready to
[commit](#commit) them or [submit](#submit) them for review.


## Commit

Before you submit your tests for review and contribution to the main test
repository, you'll need to first commit them locally, where you now have your
own personal version control system with git. In fact, as you are writing your
tests, you may want to save versions of your work as you go before you submit
them to be reviewed and merged.

1. When you're ready to save a version of your work, open a command
   prompt and change to the directory where your files are.

2. First, ask git what new or modified files you have:

       $ git status

   _This will show you files that have been added or modified_.

3. For all new or modified files, you need to tell git to add them to the
   list of things you'd like to commit:

       $ git add [file1] [file2] ... [fileN]

   Or:

       $ git add [directory_of_files]

4. Run `git status` again to see what you have on the 'Changes to be
   committed' list. These files are now 'staged'. Alternatively, you can run
   `git diff --staged` to see a visual representation of the changes to be
   committed.

5. 
Once you've added everything, you can commit and add a message to this
   set of changes:

       $ git commit -m "Tests for indexed getters in the HTMLExampleInterface"

6. Repeat these steps as many times as you'd like before you submit.

## Verify

The Web Platform Test project has an automated tool
to verify that coding conventions have been followed,
and to catch a number of common mistakes.

We recommend running this tool locally. That will help you discover and fix
issues that would make it hard for us to accept your contribution.

1. On the command line, navigate to the directory where your clone
of the repository is located.

2. Run `./wpt lint`

3. Fix any mistake it reports and [commit](#commit) again.

For more details, see the [documentation about the lint tool](../writing-tests/lint-tool).

## Submit

If you're here now looking for more instructions, that means you've written
some awesome tests and are ready to submit them. Congratulations and welcome
back!

1. The first thing to do before submitting them to the web-platform-tests
   repository is to push them back up to your fork:

       $ git push origin topic

   _Note: Here,_ `origin` _refers to the remote repository from which you cloned
   (downloaded) the files after you forked, referred to as
   web-platform-tests.git in the previous example;_
   `topic` _refers to the name of your local branch that
   you want to share_.

2. Now you can send a message that you have changes or additions you'd like
   to be reviewed and merged into the main (original) test repository. You do
   this by creating a pull request. In a browser, open the GitHub page for
   your forked repository: **https://github.com/username/wpt**.

3. Now create the pull request. There are several ways to create a PR in the
GitHub UI. Below is one method; others can be found on
[GitHub.com][github-createpr].

   1. Click the ![new pull request](../assets/pullrequestbtn.png) button.

   2. 
On the left, you should see the base repository is the
      web-platform-tests/wpt. On the right, you should see your fork of that
      repository. In the branch menu of your forked repository, switch to `topic`.

      If you see "There isn't anything to compare", make sure your fork and
      your `topic` branch are selected on the right side.

   3. Select the ![create pull request](../assets/createpr.png) button at the top.

   4. Scroll down and review the summary of changes.

   5. Scroll back up and in the Title field, enter a brief description for
      your submission.

      Example: "Tests for CSS Transforms skew() function."

   6. If you'd like to add more detailed comments, use the comment field
      below.

   7. Click ![the create pull request button](../assets/createpr.png)


4. Wait for feedback on your pull request, and once your pull request is
accepted, delete your branch (see '[When Pull Request is Accepted](#cleanup)').

[This page on the submissions process](submission-process) has more detail
about what to expect when contributing code to WPT.

## Refine

Once you submit your pull request, a reviewer will check your proposed changes
for correctness and style. They may ask you to modify your code. When you are
ready to make the changes, follow these steps:

1. Check out the branch corresponding to your changes, e.g. if your branch was
   called `topic`, run:

       $ git checkout topic

2. Make the changes needed to address the comments, and commit them just like
   before.

3. Push the changes to the remote branch containing the pull request:

       $ git push origin topic

4. The pull request will automatically be updated with the new commit.

Sometimes it takes multiple iterations through a review before the changes are
finally accepted. Don't worry about this; it's totally normal. The goal of test
review is to work together to create the best possible set of tests for the web
platform.
+ +## Cleanup +Once your pull request has been accepted, you will be notified in the GitHub +user interface, and you may get an email. At this point, your changes have been merged +into the main test repository. You do not need to take any further action +on the test but you should delete your branch. This can easily be done in +the GitHub user interface by navigating to the pull request and clicking the +"Delete Branch" button. + +![pull request accepted delete branch](/assets/praccepteddelete.png) + +Alternatively, you can delete the branch on the command line. + + $ git push origin --delete <branchName> + +## Further Reading + +Git is a very powerful tool, and there are many ways to achieve subtly +different results. Recognizing when (and understanding how) to use other +approaches is beyond the scope of this tutorial. [The Pro Git Book][git-book] +is a free digital resource that can help you learn more. + +[local-setup]: ../running-tests/from-local-system +[git]: https://git-scm.com/downloads +[git-book]: https://git-scm.com/book +[github]: https://github.com/ +[github-fork-docs]: https://help.github.com/articles/fork-a-repo +[github-createpr]: https://help.github.com/articles/creating-a-pull-request +[help]: https://help.github.com/ +[main-repo]: https://github.com/web-platform-tests/wpt +[password-caching]: https://help.github.com/articles/caching-your-github-password-in-git +[github flow]: https://guides.github.com/introduction/flow/ diff --git a/testing/web-platform/tests/docs/writing-tests/h2tests.md b/testing/web-platform/tests/docs/writing-tests/h2tests.md new file mode 100644 index 0000000000..7745dca55d --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/h2tests.md @@ -0,0 +1,152 @@ +# Writing H2 Tests + +These instructions assume you are already familiar with the testing +infrastructure and know how to write a standard HTTP/1.1 test. 
On top of the standard `main` handler that the H1 server offers, the
H2 server also offers support for specific frame handlers in the Python
scripts. Currently there is support for `handle_headers` and `handle_data`.
Unlike the `main` handler, these are run whenever the server receives a
HEADERS frame (RequestReceived event) or a DATA frame (DataReceived event).
`main` can still be used, but it will be run after the server has received
the request in its entirety.

Here is what a Python script for a test might look like:
```python
def handle_headers(frame, request, response):
    if request.headers["test"] == "pass":
        response.status = 200
        response.headers.update([('test', 'passed')])
        response.write_status_headers()
    else:
        response.status = 403
        response.headers.update([('test', 'failed')])
        response.write_status_headers()
        response.writer.end_stream()

def handle_data(frame, request, response):
    response.writer.write_data(frame.data[::-1])

def main(request, response):
    response.writer.write_data('\nEnd of File', last=True)
```

The above script is fairly simple:
1. Upon receiving the HEADERS frame, `handle_headers` is run.
   - This checks for a header called 'test' and checks if it is set to 'pass'.
     If true, it will immediately send a response header, otherwise it responds
     with a 403 and ends the stream.
2. Any DATA frames received will then be handled by `handle_data`. This will
simply reverse the data and send it back.
3. Once the request has been fully received, `main` is run, which will send
one last DATA frame and signal the end of the stream.
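The handler order above can be sketched with a small, self-contained simulation. `FakeFrame`, `FakeRequest`, `FakeResponse`, and `FakeWriter` below are hypothetical stand-ins for illustration only, not the real wptserve classes; they only mimic the order in which the server would invoke the handlers for one request.

```python
# Hypothetical stand-ins for illustration -- NOT the real wptserve objects.
class FakeFrame:
    def __init__(self, data=b""):
        self.data = data

class FakeWriter:
    def __init__(self):
        self.sent = []          # records what "went over the wire"
    def write_data(self, item, last=False):
        self.sent.append(item)
    def end_stream(self):
        self.sent.append("END_STREAM")

class FakeResponse:
    def __init__(self):
        self.status = None
        self.headers = {}
        self.writer = FakeWriter()
    def write_status_headers(self):
        self.sent_headers = dict(self.headers)

class FakeRequest:
    def __init__(self, headers):
        self.headers = headers

# The handlers from the example script above.
def handle_headers(frame, request, response):
    if request.headers["test"] == "pass":
        response.status = 200
        response.headers.update([("test", "passed")])
        response.write_status_headers()
    else:
        response.status = 403
        response.headers.update([("test", "failed")])
        response.write_status_headers()
        response.writer.end_stream()

def handle_data(frame, request, response):
    response.writer.write_data(frame.data[::-1])

def main(request, response):
    response.writer.write_data("\nEnd of File", last=True)

# Simulated event order for one request: HEADERS frame, then one DATA
# frame, then `main` once the request has been received in its entirety.
request = FakeRequest({"test": "pass"})
response = FakeResponse()
handle_headers(None, request, response)
handle_data(FakeFrame(b"hello"), request, response)
main(request, response)

print(response.status)       # 200
print(response.writer.sent)  # [b'olleh', '\nEnd of File']
```

In a real test, the reversed data would instead be observed from the client side, for example by a testharness.js test fetching the resource over H2.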
## Response Writer API ##

The H2Response API is pretty much the same as the H1 variant; the main API
difference lies in the H2ResponseWriter, which is accessed through `response.writer`.

---

#### `write_headers(self, headers, status_code, status_message=None, stream_id=None, last=False):`
Write a HEADER frame using the H2 Connection object; this will only work if the
stream is in a state to send HEADER frames. This will automatically format
the headers so that pseudo headers are at the start of the list and correctly
prefixed with ':'. Since this uses the H2 Connection object, it requires that
the stream is in the correct state to be sending this frame.

> <b>Note</b>: Will raise ProtocolErrors if pseudo headers are missing.

- <b>Parameters</b>

  - <b>headers</b>: List of (header, value) tuples
  - <b>status_code</b>: The HTTP status code of the response
  - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None
  - <b>last</b>: Flag to signal if this is the last frame in stream.

---

#### `write_data(self, item, last=False, stream_id=None):`
Write a DATA frame using the H2 Connection object; this will only work if the
stream is in a state to send DATA frames. Uses flow control to split data
into multiple data frames if it exceeds the size that can be in a single frame.
Since this uses the H2 Connection object, it requires that the stream is in
the correct state to be sending this frame.

- <b>Parameters</b>

  - <b>item</b>: The content of the DATA frame
  - <b>last</b>: Flag to signal if this is the last frame in stream.
  - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None

---

#### `write_push(self, promise_headers, push_stream_id=None, status=None, response_headers=None, response_data=None):`
This will write a push promise to the request stream.
If you do not provide +headers and data for the response, then no response will be pushed, and you +should send them yourself using the ID returned from this function. + +- <b>Parameters</b> + - <b>promise_headers</b>: A list of header tuples that matches what the client would use to + request the pushed response + - <b>push_stream_id</b>: The ID of the stream the response should be pushed to. If none given, will + use the next available id. + - <b>status</b>: The status code of the response, REQUIRED if response_headers given + - <b>response_headers</b>: The headers of the response + - <b>response_data</b>: The response data. + +- <b>Returns</b>: The ID of the push stream + +--- + +#### `write_raw_header_frame(self, headers, stream_id=None, end_stream=False, end_headers=False, frame_cls=HeadersFrame):` +Unlike `write_headers`, this does not check to see if a stream is in the +correct state to have HEADER frames sent through to it. It also won't force +the order of the headers or make sure pseudo headers are prefixed with ':'. +It will build a HEADER frame and send it without using the H2 Connection +object other than to HPACK encode the headers. + +> <b>Note</b>: The `frame_cls` parameter is so that this class can be reused +by `write_raw_continuation_frame`, as their construction is identical. + +- <b>Parameters</b> + - <b>headers</b>: List of (header, value) tuples + - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None + - <b>end_stream</b>: Set to `True` to add END_STREAM flag to frame + - <b>end_headers</b>: Set to `True` to add END_HEADERS flag to frame + +--- + +#### `write_raw_data_frame(self, data, stream_id=None, end_stream=False):` +Unlike `write_data`, this does not check to see if a stream is in the correct +state to have DATA frames sent through to it. It will build a DATA frame and +send it without using the H2 Connection object. It will not perform any flow control checks. 
- <b>Parameters</b>
  - <b>data</b>: The data to be sent in the frame
  - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None
  - <b>end_stream</b>: Set to `True` to add END_STREAM flag to frame

---

#### `write_raw_continuation_frame(self, headers, stream_id=None, end_headers=False):`
This provides the ability to create and write a CONTINUATION frame to the
stream, which is not exposed by `write_headers` as the h2 library handles
the split between HEADER and CONTINUATION internally. Will perform HPACK
encoding on the headers. It also ignores the state of the stream.

This calls `write_raw_header_frame` with `frame_cls=ContinuationFrame` since
the HEADER and CONTINUATION frames are constructed in the same way.

- <b>Parameters</b>:
  - <b>headers</b>: List of (header, value) tuples
  - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None
  - <b>end_headers</b>: Set to `True` to add END_HEADERS flag to frame

---

#### `end_stream(self, stream_id=None):`
Ends the stream with the given ID, or the one that the request was made on if no ID is given.

- <b>Parameters</b>
  - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None

diff --git a/testing/web-platform/tests/docs/writing-tests/idlharness.md b/testing/web-platform/tests/docs/writing-tests/idlharness.md new file mode 100644 index 0000000000..e2abce0a48 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/idlharness.md @@ -0,0 +1,101 @@

# IDL Tests (idlharness.js)

## Introduction ##

`idlharness.js` generates tests for Web IDL fragments, using the
[JavaScript Tests (`testharness.js`)](testharness.md) infrastructure. You typically want to use
`.any.js` or `.window.js` for this to avoid having to write unnecessary boilerplate.

## Adding IDL fragments

Web IDL is automatically scraped from specifications and added to the `/interfaces/` directory.
See +the [README](https://github.com/web-platform-tests/wpt/blob/master/interfaces/README.md) there for +details. + +## Testing IDL fragments + +For example, the Fetch API's IDL is tested in +[`/fetch/api/idlharness.any.js`](https://github.com/web-platform-tests/wpt/blob/master/fetch/api/idlharness.any.js): +```js +// META: global=window,worker +// META: script=/resources/WebIDLParser.js +// META: script=/resources/idlharness.js +// META: timeout=long + +idl_test( + ['fetch'], + ['referrer-policy', 'html', 'dom'], + idl_array => { + idl_array.add_objects({ + Headers: ["new Headers()"], + Request: ["new Request('about:blank')"], + Response: ["new Response()"], + }); + if (self.GLOBAL.isWindow()) { + idl_array.add_objects({ Window: ['window'] }); + } else if (self.GLOBAL.isWorker()) { + idl_array.add_objects({ WorkerGlobalScope: ['self'] }); + } + } +); +``` +Note how it includes `/resources/WebIDLParser.js` and `/resources/idlharness.js` in addition to +`testharness.js` and `testharnessreport.js` (automatically included due to usage of `.any.js`). +These are needed to make the `idl_test` function work. + +The `idl_test` function takes three arguments: + +* _srcs_: a list of specifications whose IDL you want to test. The names here need to match the filenames (excluding the extension) in `/interfaces/`. +* _deps_: a list of specifications the IDL listed in _srcs_ depends upon. Be careful to list them in the order that the dependencies are revealed. +* _setup_func_: a function or async function that takes care of creating the various objects that you want to test. + +## Methods of `IdlArray` ## + +`IdlArray` objects can be obtained through the _setup_func_ argument of `idl_test`. Anything not +documented in this section should be considered an implementation detail, and outside callers should +not use it. + +### `add_objects(dict)` + +_dict_ should be an object whose keys are the names of interfaces or exceptions, and whose values +are arrays of strings. 
When an interface or exception is tested, every string registered for it +with `add_objects()` will be evaluated, and tests will be run on the result to verify that it +correctly implements that interface or exception. This is the only way to test anything about +`[LegacyNoInterfaceObject]` interfaces, and there are many tests that can't be run on any interface +without an object to fiddle with. + +The interface has to be the *primary* interface of all the objects provided. For example, don't +pass `{Node: ["document"]}`, but rather `{Document: ["document"]}`. Assuming the `Document` +interface was declared to inherit from `Node`, this will automatically test that document implements +the `Node` interface too. + +Warning: methods will be called on any provided objects, in a manner that WebIDL requires be safe. +For instance, if a method has mandatory arguments, the test suite will try calling it with too few +arguments to see if it throws an exception. If an implementation incorrectly runs the function +instead of throwing, this might have side effects, possibly even preventing the test suite from +running correctly. + +### `prevent_multiple_testing(name)` + +This is a niche method for use in case you're testing many objects that implement the same +interfaces, and don't want to retest the same interfaces every single time. For instance, HTML +defines many interfaces that all inherit from `HTMLElement`, so the HTML test suite has something +like + +```js +.add_objects({ + HTMLHtmlElement: ['document.documentElement'], + HTMLHeadElement: ['document.head'], + HTMLBodyElement: ['document.body'], + ... +}) +``` + +and so on for dozens of element types. This would mean that it would retest that each and every one +of those elements implements `HTMLElement`, `Element`, and `Node`, which would be thousands of +basically redundant tests. The test suite therefore calls `prevent_multiple_testing("HTMLElement")`. 
+This means that once one object has been tested to implement `HTMLElement` and its ancestors, no +other object will be. Thus in the example code above, the harness would test that +`document.documentElement` correctly implements `HTMLHtmlElement`, `HTMLElement`, `Element`, and +`Node`; but `document.head` would only be tested for `HTMLHeadElement`, and so on for further +objects. diff --git a/testing/web-platform/tests/docs/writing-tests/index.md b/testing/web-platform/tests/docs/writing-tests/index.md new file mode 100644 index 0000000000..e5739c9a6e --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/index.md @@ -0,0 +1,91 @@ +# Writing Tests + +So you'd like to write new tests for WPT? Great! For starters, we recommend +reading [the introduction](../index) to learn how the tests are organized and +interpreted. You might already have an idea about what needs testing, but it's +okay if you don't know where to begin. In either case, [the guide on making a +testing plan](making-a-testing-plan) will help you decide what to write. + +There's also a load of [general guidelines](general-guidelines) that apply to all tests. + +## Test Types + +There are various different ways of writing tests: + +* [JavaScript tests (testharness.js)](testharness) are preferred for testing APIs and may be used + for other features too. They are built with the testharness.js unit testing framework, and consist + of assertions written in JavaScript. A high-level [testharness.js tutorial](testharness-tutorial) + is available. + +* Rendering tests should be used to verify that the browser graphically + displays pages as expected. See the [rendering test guidelines](rendering) + for tips on how to write great rendering tests. There are a few different + ways to write rendering tests: + + * [Reftests](reftests) should be used to test rendering and layout. They + consist of two or more pages with assertions as to whether they render + identically or not. 
A high-level [reftest tutorial](reftest-tutorial) is available. A
    [print reftests](print-reftests) variant is available too.

  * [Visual tests](visual) should be used for checking rendering where there is
    a large number of conforming renderings such that reftests are impractical.
    They consist of a page that renders to a final state at which point a
    screenshot can be taken and compared to an expected rendering for that user
    agent on that platform.

* [Crashtests](crashtest) are used to check that the browser is
  able to load a given document without crashing or experiencing other
  low-level issues (asserts, leaks, etc.). They pass if the load
  completes without error.

* [wdspec](wdspec) tests are written in Python using
  [pytest](https://docs.pytest.org/en/latest/) and test [the WebDriver browser
  automation protocol](https://w3c.github.io/webdriver/).

* [Manual tests](manual) are used as a last resort for anything that can't be
  tested using any of the above. They consist of a page that needs manual
  interaction or verification of the final result.

See [file names](file-names) for test types and features determined by the file names,
and [server features](server-features) for advanced testing features.

## Submitting Tests

Once you've written tests, please submit them using
the [typical GitHub Pull Request workflow](submission-process); please
make sure you run the [`lint` script](lint-tool) before opening a pull request!

## Table of Contents

```eval_rst
..
toctree::
   :maxdepth: 1

   general-guidelines
   making-a-testing-plan
   testharness
   testharness-tutorial
   rendering
   reftests
   reftest-tutorial
   print-reftests
   visual
   crashtest
   wdspec
   manual
   file-names
   server-features
   submission-process
   lint-tool
   ahem
   assumptions
   css-metadata
   css-user-styles
   h2tests
   testdriver
   testdriver-extension-tutorial
   tools
   test-templates
   github-intro
   ../tools/webtransport/README.md
```

diff --git a/testing/web-platform/tests/docs/writing-tests/interacting-features.md b/testing/web-platform/tests/docs/writing-tests/interacting-features.md new file mode 100644 index 0000000000..b8c6ce3895 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/interacting-features.md @@ -0,0 +1,25 @@

# Testing interactions between features

Web platform features do not exist in isolation. Often it is necessary to test the interaction between two features.
To support this, many directories contain libraries which are intended to be used from other directories.

These are not WPT server features, but canonical usages of one feature intended for other features to test against.
This allows the tests for a feature to be decoupled as much as possible from the specifics of another feature which it should integrate with.

## Web Platform Feature Testing Support Libraries

### Common

There are several useful utilities in the `/common/` directory.

### Cookies

Features which need to test their interaction with cookies can use the scripts in `cookies/resources` to control which cookies are set on a given request.

### Permissions Policy

Features which integrate with Permissions Policy can make use of the `permissions-policy.js` support library to generate a set of tests for that integration.

### Reporting

Testing integration with the Reporting API can be done with the help of the common report collector.
This service will collect reports sent from tests and provides an API to retrieve them. See documentation at `reporting/resources/README.md`. diff --git a/testing/web-platform/tests/docs/writing-tests/lint-tool.md b/testing/web-platform/tests/docs/writing-tests/lint-tool.md new file mode 100644 index 0000000000..95f8b57415 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/lint-tool.md @@ -0,0 +1,78 @@ +# Lint Tool + +We have a lint tool for catching common mistakes in test files. You can run +it manually by running the `wpt lint` command from the root of your local +web-platform-tests working directory like this: + +``` +./wpt lint +``` + +The lint tool is also run automatically for every submitted pull request, +and reviewers will not merge branches with tests that have lint errors, so +you must either [fix all lint errors](#fixing-lint-errors), or you must +[add an exception](#updating-the-ignored-files) to suppress the errors. + +## Fixing lint errors + +You must fix any errors the lint tool reports, unless an error is for something +essential to a certain test or that for some other exceptional reason shouldn't +prevent the test from being merged; in those cases you can [add an +exception](#updating-the-ignored-files) to suppress the errors. In all other +cases, follow the instructions below to fix all errors reported. + +<!-- + This listing is automatically generated from the linting tool's Python source + code. +--> + +```eval_rst +.. wpt-lint-rules:: tools.lint.rules +``` + +## Updating the ignored files + +Normally you must [fix all lint errors](#fixing-lint-errors). But in the +unusual case of error reports for things essential to certain tests or that +for other exceptional reasons shouldn't prevent a merge of a test, you can +update and commit the `lint.ignore` file in the web-platform-tests root +directory to suppress errors the lint tool would report for a test file. 
+ +To add a test file or directory to the list, use the following format: + +``` +ERROR TYPE:file/name/pattern +``` + +For example, to ignore all `TRAILING WHITESPACE` errors in the file +`example/file.html`, add the following line to the `lint.ignore` file: + +``` +TRAILING WHITESPACE:example/file.html +``` + +To ignore errors for an entire directory rather than just one file, use the `*` +wildcard. For example, to ignore all `TRAILING WHITESPACE` errors in the +`example` directory, add the following line to the `lint.ignore` file: + +``` +TRAILING WHITESPACE:example/* +``` + +Similarly, you can also +use +[shell-style wildcards](https://docs.python.org/library/fnmatch.html) to +express other filename patterns or directory-name patterns. + +Finally, to ignore just one line in a file, use the following format: + +``` +ERROR TYPE:file/name/pattern:line_number +``` + +For example, to ignore the `TRAILING WHITESPACE` error for just line 128 of the +file `example/file.html`, add the following to the `lint.ignore` file: + +``` +TRAILING WHITESPACE:example/file.html:128 +``` diff --git a/testing/web-platform/tests/docs/writing-tests/making-a-testing-plan.md b/testing/web-platform/tests/docs/writing-tests/making-a-testing-plan.md new file mode 100644 index 0000000000..a4007039ae --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/making-a-testing-plan.md @@ -0,0 +1,540 @@ +# Making a Testing Plan + +When contributing to a project as large and open-ended as WPT, it's easy to get +lost in the details. It can be helpful to start by making a rough list of tests +you intend to write. That plan will let you anticipate how much work will be +involved, and it will help you stay focused once you begin. 
+ +Many people come to WPT with a general testing goal in mind: + +- specification authors often want to test for new spec text +- browser maintainers often want to test new features or fixes to existing + features +- web developers often want to test discrepancies between browsers on their web + applications + +(If you don't have any particular goal, we can help you get started. Check out +[the issues labeled with `type:missing-coverage` on +GitHub.com](https://github.com/web-platform-tests/wpt/labels/type%3Amissing-coverage). +Leave a comment if you'd like to get started with one, and don't hesitate to +ask clarifying questions!) + +This guide will help you write a testing plan by: + +1. showing you how to use the specifications to learn what kinds of tests will + be most helpful +2. developing your sense for what *doesn't* need to be tested +3. demonstrating methods for figuring out which tests (if any) have already + been written for WPT + +The level of detail in useful testing plans can vary widely. From [a list of +specific +cases](https://github.com/web-platform-tests/wpt/issues/6980#issue-252255894), +to [an outline of important coverage +areas](https://github.com/web-platform-tests/wpt/issues/18549#issuecomment-522631537), +to [an annotated version of the specification under +test](https://rwaldron.github.io/webrtc-pc/), the appropriate fidelity depends +on your needs, so you can be as precise as you feel is helpful. + +## Understanding the "testing surface" + +Web platform specifications are instructions about how a feature should work. +They're critical for implementers to "build the right thing," but they are also +important for anyone writing tests. We can use the same instructions to infer +what kinds of tests would be likely to detect mistakes. Here are a few common +patterns in specification text and the kind of tests they suggest. + +### Input sources + +Algorithms may accept input from many sources. 
Modifying the input is the most +direct way we can influence the browser's behavior and verify that it matches +the specifications. That's why it's helpful to be able to recognize different +sources of input. + +```eval_rst +================ ============================================================== +Type of feature Potential input sources +================ ============================================================== +JavaScript parameters, `context object <https://dom.spec.whatwg.org/#context-object>`_ +HTML element content, attributes, attribute values +CSS selector strings, property values, markup +================ ============================================================== +``` + +Determine which input sources are relevant for your chosen feature, and build a +list of values which seem worthwhile to test (keep reading for advice on +identifying worthwhile values). For features that accept multiple sources of +input, remember that the interaction between values can often produce +interesting results. Every value you identify should go into your testing plan. + +*Example:* This is the first step of the `Notification` constructor from [the +Notifications standard](https://notifications.spec.whatwg.org/#constructors): + +> The Notification(title, options) constructor, when invoked, must run these steps: +> +> 1. If the [current global +> object](https://html.spec.whatwg.org/multipage/webappapis.html#current-global-object) +> is a +> [ServiceWorkerGlobalScope](https://w3c.github.io/ServiceWorker/#serviceworkerglobalscope) +> object, then [throw](https://webidl.spec.whatwg.org/#dfn-throw) a +> `TypeError` exception. +> 2. Let *notification* be the result of [creating a +> notification](https://notifications.spec.whatwg.org/#create-a-notification) +> given *title* and *options*. Rethrow any exceptions. +> +> [...] 
+ +A thorough test suite for this constructor will include tests for the behavior +of many different values of the *title* parameter and the *options* parameter. +Choosing those values can be a challenge unto itself--see [Avoid Excessive +Breadth](#avoid-excessive-breadth) for advice. + +### Browser state + +The state of the browser may also influence algorithm behavior. Examples +include the current document, the dimensions of the viewport, and the entries +in the browsing history. Just like with direct input, a thorough set of tests +will likely need to control these values. Browser state is often more expensive +to manipulate (whether in terms of code, execution time, or system resources), +and you may want to design your tests to mitigate these costs (e.g. by writing +many subtests from the same state). + +You may not be able to control all relevant aspects of the browser's state. +[The `type:untestable` +label](https://github.com/web-platform-tests/wpt/issues?q=is%3Aopen+is%3Aissue+label%3Atype%3Auntestable) +includes issues for web platform features which cannot be controlled in a +cross-browser way. You should include tests like these in your plan both to +communicate your intention and to remind you when/if testing solutions become +available. + +*Example:* In [the `Notification` constructor referenced +above](https://notifications.spec.whatwg.org/#constructors), the type of "the +current global object" is also a form of input. The test suite should include +tests which execute with different types of global objects. + +### Branches + +When an algorithm branches based on some condition, that's an indication of an +interesting behavior that might be missed. Your testing plan should have at +least one test that verifies the behavior when the branch is taken and at least +one more test that verifies the behavior when the branch is *not* taken. 
+ +*Example:* The following algorithm from [the HTML +standard](https://html.spec.whatwg.org/) describes how the +`localStorage.getItem` method works: + +> The `getItem`(*key*) method must return the current value associated with the +> given *key*. If the given *key* does not exist in the list associated with +> the object then this method must return null. + +This algorithm exhibits different behavior depending on whether or not an item +exists at the provided key. To test this thoroughly, we would write two tests: +one test would verify that `null` is returned when there is no item at the +provided key, and the other test would verify that an item we previously stored +was correctly retrieved when we called the method with its name. + +### Sequence + +Even without branching, the interplay between sequential algorithm steps can +suggest interesting test cases. If two steps have observable side-effects, then +it can be useful to verify they happen in the correct order. + +Most of the time, step sequence is implicit in the nature of the +algorithm--each step operates on the result of the step that precedes it, so +verifying the end result implicitly verifies the sequence of the steps. But +sometimes, the order of two steps isn't particularly relevant to the result of +the overall algorithm. This makes it easier for implementations to diverge. 
+ +There are many common patterns where step sequence is observable but not +necessarily inherent to the correctness of the algorithm: + +- input validation (when an algorithm verifies that two or more input values + satisfy some criteria) +- event dispatch (when an algorithm + [fires](https://dom.spec.whatwg.org/#concept-event-fire) two or more events) +- object property access (when an algorithm retrieves two or more property + values from an object provided as input) + +*Example:* The following text is an abbreviated excerpt of the algorithm that +runs during drag operations (from [the HTML +specification](https://html.spec.whatwg.org/multipage/dnd.html#dnd)): + +> [...] +> 4. Otherwise, if the user ended the drag-and-drop operation (e.g. by +> releasing the mouse button in a mouse-driven drag-and-drop interface), or +> if the `drag` event was canceled, then this will be the last iteration. +> Run the following steps, then stop the drag-and-drop operation: +> 1. If the [current drag +> operation](https://html.spec.whatwg.org/multipage/dnd.html#current-drag-operation) +> is "`none`" (no drag operation) [...] Otherwise, the drag operation +> might be a success; run these substeps: +> 1. Let *dropped* be true. +> 2. If the [current target +> element](https://html.spec.whatwg.org/multipage/dnd.html#current-target-element) +> is a DOM element, [fire a DND +> event](https://html.spec.whatwg.org/multipage/dnd.html#fire-a-dnd-event) +> named `drop` at it; otherwise, use platform-specific conventions for +> indicating a drop. +> 3. [...] +> 2. [Fire a DND +> event](https://html.spec.whatwg.org/multipage/dnd.html#fire-a-dnd-event) +> named `dragend` at the [source +> node](https://html.spec.whatwg.org/multipage/dnd.html#source-node). +> 3. [...] + +A thorough test suite will verify that the `drop` event is fired as specified, +and it will also verify that the `dragend` event is fired as specified. 
An even +better test suite will also verify that the `drop` event is fired *before* the +`dragend` event. + +In September of 2019, [Chromium accidentally changed the ordering of the `drop` +and `dragend` +events](https://bugs.chromium.org/p/chromium/issues/detail?id=1005747), and as +a result, real web applications stopped functioning. If there had been a test +for the sequence of these events, then this confusion would have been avoided. + +When making your testing plan, be sure to look carefully for event dispatch and +the other patterns listed above. They won't always be as clear as the "drag" +example! + +### Optional behavior + +Specifications occasionally allow browsers discretion in how they implement +certain features. These are described using [RFC +2119](https://tools.ietf.org/html/rfc2119) terms like "MAY" and "OPTIONAL". +Although browsers should not be penalized for deciding not to implement such +behavior, WPT offers tests that verify the correctness of the browsers which +do. Be sure to [label the test as optional according to WPT's +conventions](file-names) so that people reviewing test results know how to +interpret failures. + +*Example:* The algorithm underpinning +[`document.getElementsByTagName`](https://developer.mozilla.org/en-US/docs/Web/API/Document/getElementsByTagName) +includes the following paragraph: + +> When invoked with the same argument, and as long as *root*'s [node +> document](https://dom.spec.whatwg.org/#concept-node-document)'s +> [type](https://dom.spec.whatwg.org/#concept-document-type) has not changed, +> the same [HTMLCollection](https://dom.spec.whatwg.org/#htmlcollection) object +> may be returned as returned by an earlier call. + +That statement uses the word "may," so even though it modifies the behavior of +the preceding algorithm, it is strictly optional. The test we write for this +should be designated accordingly. 
+ +It's important to read these sections carefully because the distinction between +"mandatory" behavior and "optional" behavior can be nuanced. In this case, the +optional behavior is never allowed if the document's type has changed. That +makes for a mandatory test, one that verifies browsers don't return the same +result when the document's type changes. + +## Exercising Restraint + +When writing conformance tests, choosing what *not* to test is sometimes just +as hard as finding what needs testing. + +### Don't dive too deep + +Algorithms are composed of many other algorithms which themselves are defined +in terms of still more algorithms. It can be intimidating to consider +exhaustively testing one of those "nested" algorithms, especially when they are +shared by many different APIs. + +In general, you should plan to write "surface tests" for the nested algorithms. +That means only verifying that they exhibit the basic behavior you are +expecting. + +It's definitely important to test exhaustively, but it's just as important to +do so in a structured way. Reach out to the test suite's maintainers to learn +if and how they have already tested those algorithms. In many cases, it's +acceptable to test them in just one place (and maybe through a different API +entirely), and rely only on surface-level testing everywhere else. While it's +always possible for more tests to uncover new bugs, the chances may be slim. +The time we spend writing tests is highly valuable, so we have to be efficient! + +*Example:* The following algorithm from [the DOM +standard](https://dom.spec.whatwg.org/) powers +[`document.querySelector`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector): + +> To **scope-match a selectors string** *selectors* against a *node*, run these +> steps: +> +> 1. Let *s* be the result of [parse a +> selector](https://drafts.csswg.org/selectors-4/#parse-a-selector) +> *selectors*. +> 2. 
If *s* is failure, then +> [throw](https://webidl.spec.whatwg.org/#dfn-throw) a +> "[`SyntaxError`](https://webidl.spec.whatwg.org/#syntaxerror)" +> [DOMException](https://webidl.spec.whatwg.org/#idl-DOMException). +> 3. Return the result of [match a selector against a +> tree](https://drafts.csswg.org/selectors-4/#match-a-selector-against-a-tree) +> with *s* and *node*'s +> [root](https://dom.spec.whatwg.org/#concept-tree-root) using [scoping +> root](https://drafts.csswg.org/selectors-4/#scoping-root) *node*. + +As described earlier in this guide, we'd certainly want to test the branch +regarding the parsing failure. However, there are many ways a string might fail +to parse--should we verify them all in the tests for `document.querySelector`? +What about `document.querySelectorAll`? Should we test them all there, too? + +The answers depend on the current state of the test suite: whether or not tests +for selector parsing exist and where they are located. That's why it's best to +confer with the people who are maintaining the tests. + +### Avoid excessive breadth + +When the set of input values is finite, it can be tempting to test them all +exhaustively. When the set is very large, test authors can reduce repetition by +defining tests programmatically in loops. + +Using advanced control flow techniques to dynamically generate tests can +actually *reduce* test quality. It may obscure the intent of the tests since +readers have to mentally "unwind" the iteration to determine what is actually +being verified. The practice is more susceptible to bugs. These bugs may not be +obvious--they may not cause failures, and they may exercise fewer cases than +intended. Finally, tests authored using this approach often take a relatively +long time to complete, and that puts a burden on people who collect test +results in large numbers. + +The severity of these drawbacks varies with the complexity of the generation +logic. 
For example, it would be pronounced in a test which conditionally made
+different assertions within many nested loops. Conversely, the severity would
+be low in a test which only iterated over a list of values in order to make the
+same assertions about each. Recognizing when the benefits outweigh the risks
+requires discretion, so once you understand them, you should use your best
+judgement.
+
+*Example:* We can see this consideration in the very first step of the
+`Response` constructor from [the Fetch
+standard](https://fetch.spec.whatwg.org/):
+
+> The `Response`(*body*, *init*) constructor, when invoked, must run these
+> steps:
+>
+> 1. If *init*["`status`"] is not in the range `200` to `599`, inclusive, then
+>    [throw](https://webidl.spec.whatwg.org/#dfn-throw) a `RangeError`.
+>
+> [...]
+
+This function accepts exactly 400 values for the "status." With [WPT's
+testharness.js](./testharness), it's easy to dynamically create one test for
+each value. Unless we have reason to believe that a browser may exhibit
+drastically different behavior for any of those values (e.g. correctly
+accepting `546` but incorrectly rejecting `547`), the complexity of testing
+those cases probably isn't warranted.
+
+Instead, focus on writing declarative tests for specific values which are novel
+in the context of the algorithm. For ranges like in this example, testing the
+boundaries is a good idea. `200` and `599` should not produce an error, while
+`199` and `600` should. Feel free to use what you know about the feature to
+choose additional values. In this case, HTTP response status codes are
+classified by the "hundred" order of magnitude, so we might also want to test
+a "3xx" value and a "4xx" value.
+
+## Assessing coverage
+
+It's very likely that WPT already has some tests for the feature (or at least
+the specification) that you're interested in testing.
In that case, you'll
+have to learn what's already been done before starting to write new tests.
+Understanding the design of existing tests will let you avoid duplicating
+effort, and it will also help you integrate your work more logically.
+
+Even if the feature you're testing does *not* have any tests, you should still
+keep these guidelines in mind. Sooner or later, someone else will want to
+extend your work, so you ought to give them a good starting point!
+
+### File names
+
+The names of existing files and folders in the repository can help you find
+tests that are relevant to your work. [This page on the design of
+WPT](../test-suite-design) goes into detail about how files are generally laid
+out in the repository.
+
+Generally speaking, every conformance test is stored in a subdirectory
+dedicated to the specification it verifies. The structure of these
+subdirectories varies. Some organize tests in directories related to
+algorithms or behaviors. Others have a more "flat" layout, where all tests are
+listed together.
+
+Whatever the case, test authors try to choose names that communicate the
+behavior under test, so you can use them to make an educated guess about where
+your tests should go.
+
+*Example:* Imagine you wanted to write a test to verify that headers were made
+immutable by the `Response.error` method defined in [the Fetch
+standard](https://fetch.spec.whatwg.org). Here's the algorithm:
+
+> The static error() method, when invoked, must run these steps:
+>
+> 1. Let *r* be a new [Response](https://fetch.spec.whatwg.org/#response)
+>    object, whose
+>    [response](https://fetch.spec.whatwg.org/#concept-response-response) is a
+>    new [network error](https://fetch.spec.whatwg.org/#concept-network-error).
+> 2. Set *r*'s [headers](https://fetch.spec.whatwg.org/#response-headers) to a
+>    new [Headers](https://fetch.spec.whatwg.org/#headers) object whose
+>    [guard](https://fetch.spec.whatwg.org/#concept-headers-guard) is
+>    "`immutable`".
+> 3.
Return *r*. + +In order to figure out where to write the test (and whether it's needed at +all), you can review the contents of the `fetch/` directory in WPT. Here's how +that looks on a UNIX-like command line: + + $ ls fetch + api/ DIR_METADATA OWNERS + connection-pool/ h1-parsing/ local-network-access/ + content-encoding/ http-cache/ range/ + content-length/ images/ README.md + content-type/ metadata/ redirect-navigate/ + corb/ META.yml redirects/ + cross-origin-resource-policy/ nosniff/ security/ + data-urls/ origin/ stale-while-revalidate/ + +This test is for a behavior directly exposed through the API, so we should look +in the `api/` directory: + + $ ls fetch/api + abort/ cors/ headers/ policies/ request/ response/ + basic/ credentials/ idlharness.any.js redirect/ resources/ + +And since this is a static method on the `Response` constructor, we would +expect the test to belong in the `response/` directory: + + $ ls fetch/api/response + multi-globals/ response-static-error.html + response-cancel-stream.html response-static-redirect.html + response-clone.html response-stream-disturbed-1.html + response-consume-empty.html response-stream-disturbed-2.html + response-consume.html response-stream-disturbed-3.html + response-consume-stream.html response-stream-disturbed-4.html + response-error-from-stream.html response-stream-disturbed-5.html + response-error.html response-stream-disturbed-6.html + response-from-stream.any.js response-stream-with-broken-then.any.js + response-init-001.html response-trailer.html + response-init-002.html + +There seems to be a test file for the `error` method: +`response-static-error.html`. We can open that to decide if the behavior is +already covered. If not, then we know where to [write the +test](https://github.com/web-platform-tests/wpt/pull/19601)! + +### Failures on wpt.fyi + +There are many behaviors that are difficult to describe in a succinct file +name. 
That's commonly the case with low-level rendering details of CSS
+specifications. Test authors may resort to generic number-based naming schemes
+for their files, e.g. `feature-001.html`, `feature-002.html`, etc. This makes
+it difficult to determine if a test case exists judging only by the names of
+files.
+
+If the behavior you want to test is demonstrated by some browsers but not by
+others, you may be able to use the *results* of the tests to locate the
+relevant test.
+
+[wpt.fyi](https://wpt.fyi) is a website which publishes results of WPT in
+various browsers. Because most browsers pass most tests, the pass/fail
+characteristics of the behavior you're testing can help you filter through a
+large number of highly similar tests.
+
+*Example:* Imagine you've found a bug in the way Safari renders the top CSS
+border of HTML tables. By searching through directory names and file names,
+you've determined the probable location for the test: the `css/CSS2/borders/`
+directory. However, there are *three hundred* files that begin with
+`border-top-`! None of the names mention the `<table>` element, so any one of
+the files may already be testing the case you found.
+
+Luckily, you also know that Firefox and Chrome do not exhibit this bug. You
+could find such tests by visual inspection of the [wpt.fyi](https://wpt.fyi)
+results overview, but [the website's "search" feature includes operators that
+let you query for this information
+directly](https://github.com/web-platform-tests/wpt.fyi/blob/master/api/query/README.md).
+To find the tests which begin with `border-top-`, pass in Chrome, pass in
+Firefox, and fail in Safari, you could write [`border-top- chrome:pass
+firefox:pass
+safari:fail`](https://wpt.fyi/results/?label=master&label=experimental&aligned&q=border-top-%20safari%3Afail%20firefox%3Apass%20chrome%3Apass).
+The results show only three such tests exist: + +- `border-top-applies-to-005.xht` +- `border-top-color-applies-to-005.xht` +- `border-top-width-applies-to-005.xht` + +These may not describe the behavior you're interested in testing; the only way +to know for sure is to review their contents. However, this is a much more +manageable set to work with! + +### Querying file contents + +Some web platform features are enabled with a predictable pattern. For example, +HTML attributes follow a fairly consistent format. If you're interested in +testing a feature like this, you may be able to learn where your tests belong +by querying the contents of the files in WPT. + +You may be able to perform such a search on the web. WPT is hosted on +GitHub.com, and [GitHub offers some basic functionality for querying +code](https://help.github.com/en/articles/about-searching-on-github). If your +search criteria are short and distinctive (e.g. all files containing +"querySelectorAll"), then this interface may be sufficient. However, more +complicated criteria may require [regular +expressions](https://www.regular-expressions.info/). For that, you can +[download the WPT +repository](https://web-platform-tests.org/writing-tests/github-intro.html) and +use [git](https://git-scm.com) to perform more powerful searches. 
+
+The following table lists some common search criteria and examples of how they
+can be expressed using regular expressions:
+
+<div class="table-container">
+
+```eval_rst
+================================= ================== ==========================
+Criteria                          Example match      Example regular expression
+================================= ================== ==========================
+JavaScript identifier references  ``obj.foo()``      ``\bfoo\b``
+JavaScript string literals        ``x = "foo";``     ``(["'])foo\1``
+HTML tag names                    ``<foo attr>``     ``<foo(\s|>|$)``
+HTML attributes                   ``<div foo=3>``    ``<[a-zA-Z][^>]*\sfoo(\s|>|=|$)``
+CSS property name                 ``style="foo: 4"`` ``([{;=\"']|\s|^)foo\s*:``
+================================= ================== ==========================
+```
+
+</div>
+
+Bear in mind that searches like this are not necessarily exhaustive. Depending
+on the feature, it may be difficult (or even impossible) to write a query that
+correctly identifies all relevant tests. This strategy can give a helpful
+guide, but the results may not be conclusive.
+
+*Example:* Imagine you're interested in testing how the `src` attribute of the
+`iframe` element works with `javascript:` URLs. Judging only from the names of
+directories, you've found a lot of potential locations for such a test. You
+also know many tests use `javascript:` URLs without describing that in their
+name. How can you find where to contribute new tests?
+
+You can design a regular expression that matches many cases where a
+`javascript:` URL is assigned to the `src` property in HTML.
You can use the +`git grep` command to query the contents of the `html/` directory: + + $ git grep -lE "src\s*=\s*[\"']?javascript:" html + html/browsers/browsing-the-web/navigating-across-documents/javascript-url-query-fragment-components.html + html/browsers/browsing-the-web/navigating-across-documents/javascript-url-return-value-handling.html + html/dom/documents/dom-tree-accessors/Document.currentScript.html + html/dom/self-origin.sub.html + html/editing/dnd/target-origin/114-manual.html + html/semantics/embedded-content/media-elements/track/track-element/cloneNode.html + html/semantics/scripting-1/the-script-element/execution-timing/040.html + html/semantics/scripting-1/the-script-element/execution-timing/080.html + html/semantics/scripting-1/the-script-element/execution-timing/108.html + html/semantics/scripting-1/the-script-element/execution-timing/109.html + html/webappapis/dynamic-markup-insertion/opening-the-input-stream/document-open-cancels-javascript-url-navigation.html + +You will still have to review the contents to know which are relevant for your +purposes (if any), but compared to the 5,000 files in the `html/` directory, +this list is far more approachable! + +## Writing the Tests + +With a complete testing plan in hand, you now have a good idea of the scope of +your work. It's finally time to write the tests! There's a lot to say about how +this is done technically. To learn more, check out [the WPT "reftest" +tutorial](./reftest-tutorial) and [the testharness.js +tutorial](./testharness-tutorial). diff --git a/testing/web-platform/tests/docs/writing-tests/manual.md b/testing/web-platform/tests/docs/writing-tests/manual.md new file mode 100644 index 0000000000..122a22b3f3 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/manual.md @@ -0,0 +1,77 @@ +# Manual Tests + +Some testing scenarios are intrinsically difficult to automate and +require a human to run the test and check the pass condition. 
+
+## When to Write Manual Tests
+
+Whenever possible, it's best to write a fully automated test. For a
+browser vendor it's possible to run an automated test hundreds of
+times a day, but manual tests are likely to be run at most a handful
+of times a year (and quite possibly approximately never!). This makes
+them significantly less useful for catching regressions than automated
+tests.
+
+However, there are certain scenarios in which this is not yet
+possible. For example:
+
+* Tests which require observing animation (e.g., a test for CSS
+  animation or for video playback),
+
+* Tests that require interaction with browser security UI (e.g., a
+  test in which a user refuses a geolocation permissions grant),
+
+* Tests that require interaction with the underlying OS (e.g., tests
+  for drag and drop from the desktop onto the browser),
+
+* Tests that require non-default browser configuration (e.g., images
+  disabled), and
+
+* Tests that require interaction with the physical environment (e.g.,
+  tests that the vibration API causes the device to vibrate or that
+  various sensor APIs respond in the expected way).
+
+## Requirements for a Manual Test
+
+Manual tests are distinguished by their filename; all manual tests
+have filenames of the form `name-manual.ext` (i.e., a `-manual` suffix
+after the main filename but before the extension).
+
+Manual tests must be fully
+[self-describing](general-guidelines).
+It is particularly important for these tests that it is easy to
+determine the result from the information provided in the page to the
+tester, because a tester may have hundreds of tests to get through and
+little understanding of the features that they are testing. As a
+result, minimalism is especially a virtue for manual tests.
+
+A test should have, at a minimum, step-by-step instructions for
+performing the test, and a clear statement of either the test result
+(if it can be automatically determined after some setup) or how to
+otherwise determine the outcome.
+
+Any information other than this (e.g., quotes from the spec) should be
+avoided (though, as always, it can be provided in
+HTML/CSS/JS/etc. comments).
+
+## Using testharness.js for Manual Tests
+
+A convenient way to present the results of a test that can have the
+result determined by script after some manual setup steps is to use
+testharness.js to determine and present the result. In this case one
+must pass `{explicit_timeout: true}` in a call to `setup()` in order
+to disable the automatic timeout of the test. For example:
+
+```html
+<!doctype html>
+<title>Manual click on button triggers onclick handler</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+setup({explicit_timeout: true})
+</script>
+<p>Click on the button below. If a "PASS" result appears, the test
+passes; otherwise it fails.</p>
+<button onclick="done()">Click Here</button>
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/print-reftests.md b/testing/web-platform/tests/docs/writing-tests/print-reftests.md
new file mode 100644
index 0000000000..62a037da12
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/print-reftests.md
@@ -0,0 +1,45 @@
+# Print Reftests
+
+Print reftests are like ordinary [reftests](reftests), except that the
+output is rendered to paginated form and then compared page-by-page
+with the reference.
+
+Print reftests are distinguished by the string `-print` in the
+filename immediately before the extension, or by being under a
+directory named `print`. Examples:
+
+- `css/css-foo/bar-print.html` is a print reftest
+- `css/css-foo/print/bar.html` is a print reftest
+- `css/css-foo/bar-print-001.html` is **not** a print reftest
+
+
+Like ordinary reftests, the reference is specified using a `<link
+rel=match>` element.
+
+The default page size for print reftests is 12.7 cm by 7.62 cm (5
+inches by 3 inches).
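For instance, a minimal print reftest might look like this (the file name follows the `-print` convention above; the markup and the reference file name are hypothetical):

```html
<!-- css/css-foo/bar-print.html -->
<!doctype html>
<link rel="match" href="bar-print-ref.html">
<p>First page</p>
<p style="break-before: page">Second page</p>
```

The matching `bar-print-ref.html` would contain markup expected to paginate into visually identical pages.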
+
+All the features of ordinary reftests also work with print reftests,
+including [fuzzy matching](reftests.html#fuzzy-matching). Any fuzzy
+specifier applies to each image comparison performed, i.e. separately
+for each page.
+
+## Page Ranges
+
+In some cases it may be desirable to only compare a subset of the
+output pages in the reftest. This is possible using
+```
+<meta name=reftest-pages content=[range-specifier]>
+```
+Where a range specifier has the form
+```
+range-specifier = <specifier-item> ["," <specifier-item>]*
+specifier-item = <int> | <int>? "-" <int>?
+```
+
+For example, to specify rendering pages 1 and 2, 4, 6 and 7, and 9 and
+10 of a 10-page document one could write:
+
+```
+<meta name=reftest-pages content="-2,4,6,7,9-">
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/python-handlers/index.md b/testing/web-platform/tests/docs/writing-tests/python-handlers/index.md
new file mode 100644
index 0000000000..e52e137179
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/python-handlers/index.md
@@ -0,0 +1,116 @@
+# Python Handlers
+
+Python file handlers are Python files which the server executes in response to
+requests made to the corresponding URL. This is hooked up to a route like
+`("*", "*.py", python_file_handler)`, meaning that any `.py` file will be
+treated as a handler file (note that this makes it easy to write unsafe
+handlers, particularly when running the server in a web-exposed setting).
+
+The Python files must define a function named `main` with the signature:
+
+    main(request, response)
+
+...where `request` is [a wptserve `Request`
+object](/tools/wptserve/docs/request) and `response` is [a wptserve `Response`
+object](/tools/wptserve/docs/response).
+ 

This function must return a value in one of the following four formats:

    ((status_code, reason), headers, content)
    (status_code, headers, content)
    (headers, content)
    content

Above, `headers` is a list of (field name, value) pairs, and `content` is a
string or an iterable returning strings.

The `main` function may also update the response manually. For example, one may
use `response.headers.set` to set a response header, and only return the
content. One may even use this kind of handler to manipulate the output
socket directly. The `writer` property of the response exposes a
`ResponseWriter` object that allows writing specific parts of the response or
direct access to the underlying socket. If used, the return value of the
`main` function and the properties of the `response` object will be ignored.

The wptserve server implements a number of Python APIs for controlling traffic.

```eval_rst
.. toctree::
   :maxdepth: 1

   /tools/wptserve/docs/request
   /tools/wptserve/docs/response
   /tools/wptserve/docs/stash
```

### Importing local helper scripts

Python file handlers may import local helper scripts, e.g. to share logic
across multiple handlers. To avoid module name collision, however, imports must
be relative to the root of WPT. For example, in an imaginary
`cookies/resources/myhandler.py`:

```python
# DON'T DO THIS
import myhelper

# DO THIS
from cookies.resources import myhelper
```

Only absolute imports are allowed; do not use relative imports. If the path to
your helper script includes a hyphen (`-`), you can use `import_module` from
`importlib` to import it. For example:

```python
import importlib
myhelper = importlib.import_module('common.security-features.myhelper')
```

**Note on `__init__.py` files**: Importing helper scripts like this
requires a 'path' of empty `__init__.py` files in every directory down
to the helper. 
For example, if your helper is +`css/css-align/resources/myhelper.py`, you need to have: + +``` +css/__init__.py +css/css-align/__init__.py +css/css-align/resources/__init__.py +``` + +## Example: Dynamic HTTP headers + +The following code defines a Python handler that allows the requester to +control the value of the `Content-Type` HTTP response header: + +```python +def main(request, response): + content_type = request.GET.first('content-type') + headers = [('Content-Type', content_type)] + + return (200, 'my status text'), headers, 'my response content' +``` + +If saved to a file named `resources/control-content-type.py`, the WPT server +will respond to requests for `resources/control-content-type.py` by executing +that code. + +This could be used from a [testharness.js test](../testharness) like so: + +```html +<!DOCTYPE html> +<meta charset="utf-8"> +<title>Demonstrating the WPT server's Python handler feature</title> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<script> +promise_test(function() { + return fetch('resources/control-content-type.py?content-type=text/foobar') + .then(function(response) { + assert_equals(response.status, 200); + assert_equals(response.statusText, 'my status text'); + assert_equals(response.headers.get('Content-Type'), 'text/foobar'); + }); +}); +</script> +``` diff --git a/testing/web-platform/tests/docs/writing-tests/reftest-tutorial.md b/testing/web-platform/tests/docs/writing-tests/reftest-tutorial.md new file mode 100644 index 0000000000..a51430942c --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/reftest-tutorial.md @@ -0,0 +1,276 @@ +# Writing a reftest + +<!-- +Note to maintainers: + +This tutorial is designed to be an authentic depiction of the WPT contribution +experience. 
It is not intended to be comprehensive; its scope is intentionally +limited in order to demonstrate authoring a complete test without overwhelming +the reader with features. Because typical WPT usage patterns change over time, +this should be updated periodically; please weigh extensions against the +demotivating effect that a lengthy guide can have on new contributors. +--> + +Let's say you've discovered that WPT doesn't have any tests for the `dir` +attribute of [the `<bdo>` +element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/bdo). This +tutorial will guide you through the process of writing and submitting a test. +You'll need to [configure your system to use WPT's +tools](../running-tests/from-local-system), but you won't need them until +towards the end of this tutorial. Although it includes some very brief +instructions on using git, you can find more guidance in [the tutorial for git +and GitHub](../writing-tests/github-intro). + +WPT's reftests are great for testing web-platform features that have some +visual effect. [The reftests reference page](reftests) describes them in the +abstract, but for the purposes of this guide, we'll only consider the features +we need to test the `<bdo>` element. + +```eval_rst +.. contents:: + :local: +``` + +## Setting up your workspace + +To make sure you have the latest code, first type the following into a terminal +located in the root of the WPT git repository: + + $ git fetch git@github.com:web-platform-tests/wpt.git + +Next, we need a place to store the change set we're about to author. Here's how +to create a new git branch named `reftest-for-bdo` from the revision of WPT we +just downloaded: + + $ git checkout -b reftest-for-bdo FETCH_HEAD + +Now you're ready to create your patch. + +## Writing the test file + +First, we'll create a file that demonstrates the "feature under test." That is: +we'll write an HTML document that displays some text using a `<bdo>` element. 
+ 

WPT has thousands of tests, so it can be daunting to decide where to put a new
one. Generally speaking, [test files should be placed in directories
corresponding to the specification text they are
verifying](../test-suite-design). `<bdo>` is defined in [the "text-level
semantics" chapter of the HTML
specification](https://html.spec.whatwg.org/multipage/text-level-semantics.html),
so we'll want to create our new test in the directory
`html/semantics/text-level-semantics/the-bdo-element/`. Create a file named
`rtl.html` and open it in your text editor.

Here's one way to demonstrate the feature:

```html
<!DOCTYPE html>
<meta charset="utf-8">
<title>BDO element dir=rtl</title>
<link rel="help" href="https://html.spec.whatwg.org/#the-bdo-element">
<meta name="assert" content="BDO element's DIR content attribute renders correctly given a value of 'rtl'.">

<p>Test passes if WAS is displayed below.</p>
<bdo dir="rtl">SAW</bdo>
```

That's pretty dense! Let's break it down:

- ```html
  <!DOCTYPE html>
  <meta charset="utf-8">
  ```

  We explicitly set the DOCTYPE and character set to be sure that browsers
  don't infer them to be something we aren't expecting. We're omitting the
  `<html>` and `<head>` tags. That's a common practice in WPT, preferred
  because it makes tests more concise.

- ```html
  <title>BDO element dir=rtl</title>
  ```
  The document's title should succinctly describe the feature under test.

- ```html
  <link rel="help" href="https://html.spec.whatwg.org/#the-bdo-element">
  ```

  The "help" metadata should reference the specification under test so that
  everyone understands the motivation. This is so helpful that [the CSS Working
  Group requires it for CSS tests](css-metadata)! If you're writing a reftest
  for a feature outside of CSS, feel free to omit this tag. 
+ 

- ```html
  <meta name="assert" content="BDO element's DIR content attribute renders correctly given a value of 'rtl'.">
  ```

  The "assert" metadata is a structured way for you to describe exactly what
  you want your reftest to verify. For a direct test like the one we're writing
  here, it might seem a little superfluous. It's much more helpful for
  more-involved tests where reviewers might need some help understanding your
  intentions.

  This tag is optional, so you can skip it if you think it's unnecessary. We
  recommend using it for your first few tests since it may let reviewers give
  you more helpful feedback. As you get more familiar with WPT and the
  specifications, you'll get a sense for when and where it's better to leave it
  out.

- ```html
  <p>Test passes if WAS is displayed below.</p>
  ```

  We're communicating the "pass" condition in plain English to make the test
  self-describing.

- ```html
  <bdo dir="rtl">SAW</bdo>
  ```

  This is the real focus of the test. We're including some text inside a
  `<bdo>` element in order to demonstrate the feature under test.

Since this page doesn't rely on any [special WPT server
features](server-features), we can view it by loading the HTML file directly.
There are a bunch of ways to do this; one is to navigate to the
`html/semantics/text-level-semantics/the-bdo-element/` directory in a file
browser and drag the new `rtl.html` file into an open web browser window.

![](/assets/reftest-tutorial-test-screenshot.png "screen shot of the new test")

Sighted people can open that document and verify whether or not the stated
expectation is satisfied. If we were writing a [manual test](manual), we'd be
done. However, it's time-consuming for a human to run tests, so we should
prefer making tests automatic whenever possible. Remember that we set out to
write a "reference test." Now it's time to write the reference file. 
+ 

## Writing a "match" reference

The "match" reference file describes what the test file is supposed to look
like. Critically, it *must not* use the technology that we are testing. The
reference file is what allows the test to be run by a computer--the computer
can verify that each pixel in the test document exactly matches the
corresponding pixel in the reference document.

Make a new file in the same
`html/semantics/text-level-semantics/the-bdo-element/` directory named
`rtl-ref.html`, and save the following markup into it:

```html
<!DOCTYPE html>
<meta charset="utf-8">
<title>BDO element dir=rtl reference</title>

<p>Test passes if WAS is displayed below.</p>
<p>WAS</p>
```

This is like a stripped-down version of the test file. In order to produce a
visual rendering which is the same as the expected rendering, it uses a `<p>`
element whose contents are the characters in right-to-left order. That way, if
the browser doesn't support the `<bdo>` element, this file will still show text
in the correct sequence.

This file is also completely functional without the WPT server, so you can open
it in a browser directly from your hard drive.

Currently, there's no way for a human operator or an automated script to know
that the two files we've created are supposed to match visually. We'll need to
add one more piece of metadata to the test file we created earlier. 
Open
`html/semantics/text-level-semantics/the-bdo-element/rtl.html` in your text
editor and add another `<link>` tag as described by the following change
summary:

```diff
 <!DOCTYPE html>
 <meta charset="utf-8">
 <title>BDO element dir=rtl</title>
 <link rel="help" href="https://html.spec.whatwg.org/#the-bdo-element">
+<link rel="match" href="rtl-ref.html">
 <meta name="assert" content="BDO element's DIR content attribute renders correctly given a value of 'rtl'.">

 <p>Test passes if WAS is displayed below.</p>
 <bdo dir="rtl">SAW</bdo>
```

Now, anyone (human or computer) reviewing the test file will know where to find
the associated reference file.

## Verifying our work

We're done writing the test, but we should make sure it fits in with the rest
of WPT before we submit it. This involves using some of the project's tools, so
this is the point you'll need to [configure your system to run
WPT](../running-tests/from-local-system).

[The lint tool](lint-tool) can detect some of the common mistakes people make
when contributing to WPT. To run it, open a command-line terminal, navigate to
the root of the WPT repository, and enter the following command:

    python ./wpt lint html/semantics/text-level-semantics/the-bdo-element

If this recognizes any of those common mistakes in the new files, it will tell
you where they are and how to fix them. If you do have changes to make, you can
run the command again to make sure you got them right.

Now, we'll run the test using the automated pixel-by-pixel comparison approach
mentioned earlier. This is important for reftests because the test and the
reference may differ in very subtle ways that are hard to catch with the naked
eye. That's not to say your test has to pass in all browsers (or even in *any*
browser). But if we expect the test to pass, then running it this way will help
us catch other kinds of mistakes. 
+ +The tools support running the tests in many different browsers. We'll use +Firefox this time: + + python ./wpt run firefox html/semantics/text-level-semantics/the-bdo-element/rtl.html + +We expect this test to pass, so if it does, we're ready to submit it. If we +were testing a web platform feature that Firefox didn't support, we would +expect the test to fail instead. + +There are a few problems to look out for in addition to passing/failing status. +The report will describe fewer tests than we expect if the test isn't run at +all. That's usually a sign of a formatting mistake, so you'll want to make sure +you've used the right file names and metadata. Separately, the web browser +might crash. That's often a sign of a browser bug, so you should consider +[reporting it to the browser's +maintainers](https://rachelandrew.co.uk/archives/2017/01/30/reporting-browser-bugs/)! + +## Submitting the test + +First, let's stage the new files for committing: + + $ git add html/semantics/text-level-semantics/the-bdo-element/rtl.html + $ git add html/semantics/text-level-semantics/the-bdo-element/rtl-ref.html + +We can make sure the commit has everything we want to submit (and nothing we +don't) by using `git diff`: + + $ git diff --staged + +On most systems, you can use the arrow keys to navigate through the changes, +and you can press the `q` key when you're done reviewing. + +Next, we'll create a commit with the staged changes: + + $ git commit -m '[html] Add test for the `<bdo>` element' + +And now we can push the commit to our fork of WPT: + + $ git push origin reftest-for-bdo + +The last step is to submit the test for review. WPT doesn't actually need the +test we wrote in this tutorial, but if we wanted to submit it for inclusion in +the repository, we would create a pull request on GitHub. [The guide on git and +GitHub](../writing-tests/github-intro) has all the details on how to do that. 
+ 

## More practice

Here are some ways you can keep experimenting with WPT using this test:

- Improve coverage by adding more tests for related behaviors (e.g. nested
  `<bdo>` elements)
- Add another reference document which describes what the test should *not*
  look like using [`rel=mismatch`](reftests) diff --git a/testing/web-platform/tests/docs/writing-tests/reftests.md b/testing/web-platform/tests/docs/writing-tests/reftests.md new file mode 100644 index 0000000000..219e5887a0 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/reftests.md @@ -0,0 +1,192 @@ +# Reftests + +Reftests are one of the primary tools for testing things relating to +rendering; they are made up of the test and one or more other pages +("references") with assertions as to whether they render identically +or not. This page describes their aspects exhaustively; [the tutorial +on writing a reftest](reftest-tutorial) offers a more limited but +grounded guide to the process. + +## How to Run Reftests + +Reftests can be run manually simply by opening the test and the +reference file in multiple windows or tabs and flipping between the +two. In automation the comparison is performed programmatically, so +differences too subtle for the human eye to notice can still cause +the test to fail. + +## Components of a Reftest + +In the simplest case, a reftest consists of a pair of files called the +*test* and the *reference*. + +The *test* file is the one that makes use of the technology being +tested. It also contains a `link` element with `rel="match"` or +`rel="mismatch"` and an `href` attribute pointing to the *reference* +file, e.g. `<link rel=match href=references/green-box-ref.html>`. A +`match` test only passes if the two files render pixel-for-pixel +identically within an 800x600 window *including* scroll-bars if +present; a `mismatch` test only passes if they *don't* render +identically. 
+ 

The *reference* file is typically written to be as simple as possible,
and does not use the technology under test. It is desirable that the
reference be rendered correctly even in UAs with relatively poor
support for CSS and no support for the technology under test.

## Writing a Good Reftest

In general the files used in a reftest should follow
the [general guidelines][] and
the [rendering test guidelines][rendering]. They should also be
self-describing, to allow a human to determine whether the
rendering is as expected.

References can be shared between tests; this is strongly encouraged as
it makes it easier to tell at a glance whether a test passes (through
familiarity) and enables some optimizations in automated test
runners. Shared references are typically placed in `references`
directories, either alongside the tests they are expected to be useful
for or at the top level if expected to be generally applicable (e.g.,
many layout tests can be written such that the correct rendering is a
100x100 green square!). For references that are applicable only to a
single test, it is recommended to use the test name with a suffix of
`-ref` as their filename; e.g., `test.html` would have `test-ref.html`
as a reference.

## Multiple References

Sometimes, a test's pass condition cannot be captured in a single
reference.

If a test has multiple reference links, then the test passes if:

 * at least one of the `match` references matches (when any are present), and
 * all of the `mismatch` references mismatch (when any are present).

If you need multiple matches to succeed, these can be turned into
multiple tests (for example, by just having a reference be a test
itself!). If this seems like an unreasonable restriction, please file
a bug and let us know! 
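Expressed as an illustrative predicate (a sketch, not the harness's actual implementation), the rule above is:

```python
def reftest_passes(match_results, mismatch_results):
    """Decide the outcome of a reftest with multiple reference links.

    match_results: one bool per rel=match reference (True = rendered identically)
    mismatch_results: one bool per rel=mismatch reference (True = rendered differently)
    """
    # At least one match reference must match, if any are present.
    matches_ok = not match_results or any(match_results)
    # Every mismatch reference must mismatch; all([]) is True when none exist.
    mismatches_ok = all(mismatch_results)
    return matches_ok and mismatches_ok
```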
+ +## Controlling When Comparison Occurs + +By default, reftest screenshots are taken after the following +conditions are met: + +* The `load` event has fired +* Web fonts (if any) are loaded +* Pending paints have completed + +In some cases it is necessary to delay the screenshot later than this, +for example because some DOM manipulation is required to set up the +desired test conditions. To enable this, the test may have a +`class="reftest-wait"` attribute specified on the root element. In +this case the harness will run the following sequence of steps: + +* Wait for the `load` event to fire and fonts to load. +* Wait for pending paints to complete. +* Fire an event named `TestRendered` at the root element, with the + `bubbles` attribute set to true. +* Wait for the `reftest-wait` class to be removed from the root + element. +* Wait for pending paints to complete. +* Screenshot the viewport. + +The `TestRendered` event provides a hook for tests to make +modifications to the test document that are not batched into the +initial layout/paint. + +## Fuzzy Matching + +In some situations a test may have subtle differences in rendering +compared to the reference due to, e.g., anti-aliasing. To allow for +these small differences, we allow tests to specify a fuzziness +characterised by two parameters, both of which must be specified: + + * A maximum difference in the per-channel color value for any pixel. + * A number of total pixels that may be different. + +The maximum difference in the per pixel color value is formally +defined as follows: let <code>T<sub>x,y,c</sub></code> be the value of +colour channel `c` at pixel coordinates `x`, `y` in the test image and +<code>R<sub>x,y,c</sub></code> be the corresponding value in the +reference image, and let <code>width</code> and <code>height</code> be +the dimensions of the image in pixels. Then <code>maxDifference = +max<sub>x=[0,width) y=[0,height), c={r,g,b}</sub>(|T<sub>x,y,c</sub> - +R<sub>x,y,c</sub>|)</code>. 
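The two parameters can be computed as in the following sketch, which operates on nested lists of RGB tuples purely to illustrate the math (it is not wptrunner's actual implementation):

```python
def fuzzy_stats(test_img, ref_img):
    # Both images are lists of rows; each row is a list of (r, g, b) tuples.
    max_difference = 0  # largest per-channel delta over all pixels
    total_pixels = 0    # number of pixels differing in at least one channel
    for test_row, ref_row in zip(test_img, ref_img):
        for t_px, r_px in zip(test_row, ref_row):
            channel_diffs = [abs(t - r) for t, r in zip(t_px, r_px)]
            if any(channel_diffs):
                total_pixels += 1
                max_difference = max(max_difference, max(channel_diffs))
    return max_difference, total_pixels
```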
+ 

To specify the fuzziness, one may add a `<meta name=fuzzy>` element to
the test file (or, in the case of more complex tests, to any
page containing the `<link rel=[mis]match>` elements). In the simplest
case this has a `content` attribute containing the parameters above,
separated by a semicolon, e.g.

```
<meta name=fuzzy content="maxDifference=15;totalPixels=300">
```

would allow for a difference of exactly 15 / 255 on any color channel
and exactly 300 pixels total difference. The argument names are optional
and may be elided; the above is the same as:

```
<meta name=fuzzy content="15;300">
```

The values may also be given as ranges, e.g.

```
<meta name=fuzzy content="maxDifference=10-15;totalPixels=200-300">
```

or

```
<meta name=fuzzy content="10-15;200-300">
```

In this case the maximum pixel difference must be in the range
`10-15` and the total number of different pixels must be in the range
`200-300`. These range checks are inclusive.

In cases where a single test has multiple possible refs and the
fuzziness is not the same for all refs, a ref may be specified by
prefixing the `content` value with the relative URL for the ref, e.g.

```
<meta name=fuzzy content="option1-ref.html:10-15;200-300">
```

One meta element is required per reference requiring a unique
fuzziness value, but any unprefixed value will automatically be
applied to any ref that doesn't have a more specific value.

### Debugging fuzzy reftests

When debugging a fuzzy reftest via `wpt run`, it can be useful to know what the
allowed and detected differences were. Many of the output logger options will
provide this information. 
For example, by passing `--log-mach=-` for a run of a
hypothetical failing test, one might get:

```
 0:08.15 TEST_START: /foo/bar.html
 0:09.70 INFO Found 250 pixels different, maximum difference per channel 6 on page 1
 0:09.70 INFO Allowed 0-100 pixels different, maximum difference per channel 0-0
 0:09.70 TEST_END: FAIL, expected PASS - /foo/bar.html ['f83385ed9c9bea168108b8c448366678c7941627']
```

For other logging flags, see the output of `wpt run --help`.

## Limitations

In some cases, a test cannot be a reftest. For example, there is no
way to create a reference for underlining, since the position and
thickness of the underline depends on the UA, the font, and/or the
platform. However, once it's established that underlining an inline
element works, it's possible to construct a reftest for underlining
a block element, by constructing a reference using underlines on a
`<span>` that wraps all the content inside the block.

[general guidelines]: general-guidelines
[rendering]: rendering diff --git a/testing/web-platform/tests/docs/writing-tests/rendering.md b/testing/web-platform/tests/docs/writing-tests/rendering.md new file mode 100644 index 0000000000..e17b6ef879 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/rendering.md @@ -0,0 +1,84 @@ +# Rendering Test Guidelines + +There are a number of techniques typically used when writing rendering tests; +these are especially useful for [visual](visual) tests which need to be manually +judged, and following common patterns makes it easier to correctly tell if a +given test passed or not. + +## Indicating success + +Success is largely indicated by the color green; typically in one of +two ways: + + * **The green paragraph**: arguably the simplest form of test, this + typically consists of a single line of text with a pass condition of, + "This text should be green". 
A variant of this is using the + background instead, with a pass condition of, "This should have a + green background". + + * **The green square**: applicable to many block layout tests, the test + renders a green square when it passes; these can mostly be written to + match [this][ref-filled-green-100px-square] reference. This green square is + often rendered over a red square, such that when the test fails there is red + visible on the page; this can even be done using text by using the + [Ahem][ahem] font. + +More occasionally, the entire canvas is rendered green, typically when +testing parts of CSS that affect the entire page. Care has to be taken +when writing tests like this that the test will not result in a single +green paragraph if it fails. This is usually done by forcing the short +descriptive paragraph to have a neutral color (e.g., white). + +Sometimes instead of a green square, a white square is used to ensure +any red is obvious. To ensure the stylesheet has loaded, it is +recommended to make the pass condition paragraph green and require +that in addition to there being no red on the page. + +## Indicating failure + +In addition to having clearly defined characteristics when +they pass, well designed tests should have some clear signs when +they fail. It can sometimes be hard to make a test do something only +when the test fails, because it is very hard to predict how user +agents will fail! Furthermore, in a rather ironic twist, the best +tests are those that catch the most unpredictable failures! + +Having said that, here are the best ways to indicate failures: + + * Using the color red is probably the best way of highlighting + failures. Tests should be designed so that if the rendering is a + few pixels off some red is uncovered or otherwise rendered on the + page. + + * Tests of the `line-height`, `font-size` and similar properties can + sometimes be devised in such a way that a failure will result in + the text overlapping. 
+ + * Some properties lend themselves well to making "FAIL" render in the + case of something going wrong, for example `quotes` and + `content`. + +## Other Colors + +Aside from green and red, other colors are generally used in specific +ways: + + * Black is typically used for descriptive text, + + * Blue is frequently used as an obvious color for tests with complex + pass conditions, + + * Fuchsia, yellow, teal, and orange are typically used when multiple + colors are needed, + + * Dark gray is often used for descriptive lines, and + + * Silver or light gray is often used for irrelevant content, such as + filler text. + +None of these rules are absolute because testing +color-related functionality will necessitate using some of these +colors! + +[ref-filled-green-100px-square]: https://github.com/w3c/csswg-test/blob/master/reference/ref-filled-green-100px-square.xht +[ahem]: ahem
\ No newline at end of file diff --git a/testing/web-platform/tests/docs/writing-tests/server-features.md b/testing/web-platform/tests/docs/writing-tests/server-features.md new file mode 100644 index 0000000000..b50b495212 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/server-features.md @@ -0,0 +1,157 @@ +# Server Features + +For many tests, writing one or more static HTML files is +sufficient. However there are a large class of tests for which this +approach is insufficient, including: + +* Tests that require cross-domain access + +* Tests that depend on setting specific headers or status codes + +* Tests that need to inspect the browser-sent request + +* Tests that require state to be stored on the server + +* Tests that require precise timing of the response. + +To make writing such tests possible, we are using a number of +server-side components designed to make it easy to manipulate the +precise details of the response: + +* *wptserve*, a custom Python HTTP server + +* *pywebsocket*, an existing websockets server + +wptserve is a Python-based web server. By default it serves static +files in the test suite. For more sophisticated requirements, several +mechanisms are available to take control of the response. These are +outlined below. + +### Tests Involving Multiple Origins + +Our test servers are guaranteed to be accessible through two domains +and five subdomains under each. The 'main' domain is unnamed; the +other is called 'alt'. These subdomains are: `www`, `www1`, `www2`, +`天気の良い日`, and `élève`; there is also `nonexistent` which is +guaranteed not to resolve. In addition, the HTTP server listens on two +ports, and the WebSockets server on one. These subdomains and ports +must be used for cross-origin tests. + +Tests must not hardcode the hostname of the server that they expect to +be running on or the port numbers, as these are not guaranteed by the +test environment. 
Instead they can get this information in one of two
ways:

* From script, using the `location` API.

* By using a textual substitution feature of the server.

In order for the latter to work, a file must either have a name of the form
`{name}.sub.{ext}` e.g. `example-test.sub.html` or be referenced through a URL
containing `pipe=sub` in the query string e.g. `example-test.html?pipe=sub`.
The substitution syntax uses `{{ }}` to delimit items for substitution. For
example, to substitute in the main host name, one would write: `{{host}}`.

To get full domains, including subdomains, there is the `hosts` dictionary,
where the first dimension is the name of the domain, and the second the
subdomain. For example, `{{hosts[][www]}}` would give the `www` subdomain under
the main (unnamed) domain, and `{{hosts[alt][élève]}}` would give the `élève`
subdomain under the alt domain.

For mostly historic reasons, the subdomains of the main domain are
also available under the `domains` dictionary; this is identical to
`hosts[]`.

Ports are also available on a per-protocol basis. For example,
`{{ports[ws][0]}}` is replaced with the first (and only) WebSockets port, while
`{{ports[http][1]}}` is replaced with the second HTTP port.

The request URL itself can be used as part of the substitution using the
`location` dictionary, which has entries matching the `window.location` API.
For example, `{{location[host]}}` is replaced by `hostname:port` for the
current request, matching `location.host`.


### Tests Requiring Special Headers

For tests requiring that a certain HTTP header is set to some static
value, a file with the same path as the test file except for an
additional `.headers` suffix may be created. For example, for
`/example/test.html`, the headers file would be
`/example/test.html.headers`. 
This file consists of lines of the form

    header-name: header-value

For example

    Content-Type: text/html; charset=big5

To apply the same headers to all files in a directory use a
`__dir__.headers` file. This will only apply to the immediate
directory and not subdirectories.

Headers files may be used in combination with substitutions by naming
the file e.g. `test.html.sub.headers`.


### Tests Requiring Full Control Over The HTTP Response

```eval_rst
.. toctree::
   :maxdepth: 1

   python-handlers/index
   server-pipes
```

For full control over the request and response, the server provides the ability
to write `.asis` files; these are served as literal HTTP responses. In other
words, they are sent byte-for-byte to the client without adding an HTTP status
line, headers, or anything else. This makes them suitable for testing
situations where the precise bytes on the wire are static, and control over the
timing is unnecessary, but the response does not conform to HTTP requirements.

The server also provides the ability to write [Python
"handlers"](python-handlers/index)--Python scripts that have access to request
data and can manipulate the content and timing of the response. Responses are
also influenced by [the `pipe` query string parameter](server-pipes).


### Tests Requiring HTTP/2.0

To make a test run over an HTTP/2.0 connection, use `.h2.` in the filename.
By default the HTTP/2.0 server can be accessed using port 9000. At the moment
accessing tests that use `.h2.` over ports that do not use an HTTP/2.0 server
also succeeds, so beware of that when creating them.

The HTTP/2.0 server supports handlers that work per-frame; these, along with
the API, are documented in [Writing H2 Tests](h2tests).


### Tests Requiring WebTransport over HTTP/3

We do not support loading a test over WebTransport over HTTP/3 yet, but a test
can establish a WebTransport session to the test server. 
+ +The WebTransport over HTTP/3 server is not yet enabled by default, so +WebTransport tests will fail unless `--enable-webtransport` is specified to + `./wpt run`. + +### Test Features specified as query params + +As an alternative to specifying [Test Features](file-names.html#test-features) in +the test filename, they can be specified by setting the `wpt_flags` in the +[test variant](testharness.html#variants). For example, the following variant +will be loaded over HTTPS: +```html +<meta name="variant" content="?wpt_flags=https"> +``` + +`https`, `h2` and `www` features are supported by `wpt_flags`. + +Multiple features can be specified by having multiple `wpt_flags`. For example, +the following variant will be loaded over HTTPS and run on the www subdomain. + +```html +<meta name="variant" content="?wpt_flags=www&wpt_flags=https"> +``` diff --git a/testing/web-platform/tests/docs/writing-tests/server-pipes.md b/testing/web-platform/tests/docs/writing-tests/server-pipes.md new file mode 100644 index 0000000000..dc376ddacf --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/server-pipes.md @@ -0,0 +1,155 @@ +# wptserve Pipes + +Pipes are designed to allow simple manipulation of the way that +static files are sent without requiring any custom code. They are also +useful for cross-origin tests because they can be used to activate a +substitution mechanism which can fill in details of ports and server +names in the setup on which the tests are being run. + +## Enabling + +Pipes are functions that may be used when serving files to alter parts +of the response. These are invoked by adding a `pipe=` query parameter +taking a `|`-separated list of pipe functions and parameters. The pipe +functions are applied to the response from left to right. For example: + + GET /sample.txt?pipe=slice(1,200)|status(404) + +This would serve bytes 1 to 199, inclusive, of sample.txt with the HTTP status +code 404. 
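To make the byte arithmetic concrete, here is a small standalone Python sketch (illustrative only, not wptserve's implementation) of how `slice(1,200)` selects bytes from a response body:

```python
def slice_pipe(body: bytes, start=None, end=None) -> bytes:
    # Mimics the slice() pipe's semantics: `start` is inclusive,
    # `end` is exclusive, and either bound may be omitted (None here,
    # corresponding to null in the pipe syntax).
    return body[start:end]

body = bytes(range(256))
sliced = slice_pipe(body, 1, 200)
assert sliced == body[1:200]   # bytes 1 through 199, inclusive
assert len(sliced) == 199
```

Python's half-open slicing happens to match the pipe's inclusive-start, exclusive-end convention, which makes the correspondence direct.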
+ +Note: If you write directly to the response socket using ResponseWriter, or +when using the asis handler, only the trickle pipe will affect the response. + +There are several built-in pipe functions, and it is possible to add +more using the `@pipe` decorator on a function, if required. + +Note: Because of the way pipes compose, using some pipe functions prevents the +content-length of the response from being known in advance. In these cases the +server will close the connection to indicate the end of the response, +preventing the use of HTTP 1.1 keepalive. + +## Built-In Pipes + +### `sub` + +Used to substitute variables from the server environment, or from the +request, into the response. A typical use case is testing cross-domain +behavior, since the exact domain names and ports of the servers are +generally unknown. + +Substitutions are marked in a file using a block delimited by `{{` +and `}}`. Inside the block the following variables are available: + +- `{{host}}` - The host name of the server excluding any subdomain part. +- `{{domains[]}}` - The domain name of a particular subdomain e.g. + `{{domains[www]}}` for the `www` subdomain. +- `{{hosts[][]}}` - The domain name of a particular subdomain for a particular + host. The first key may be empty (designating the "default" host) or the + value `alt`; i.e., `{{hosts[alt][]}}` (designating the alternate host). +- `{{ports[][]}}` - The port number of servers, by protocol e.g. + `{{ports[http][0]}}` for the first (and, depending on setup, possibly only) + HTTP server. +- `{{headers[]}}` - The HTTP headers in the request e.g. `{{headers[X-Test]}}` + for a hypothetical `X-Test` header. +- `{{header_or_default(header, default)}}` - The value of an HTTP header, or a + default value if it is absent. e.g. `{{header_or_default(X-Test, + test-header-absent)}}` +- `{{GET[]}}` - The query parameters for the request e.g. `{{GET[id]}}` for an id + parameter sent with the request. 
+ +So, for example, to write a JavaScript file called `xhr.js` that +depends on the host name of the server, without hardcoding it, one might +write: + + var server_url = "http://{{host}}:{{ports[http][0]}}/path/to/resource"; + // Create the actual XHR and so on + +The file would then be included as: + + <script src="xhr.js?pipe=sub"></script> + +This pipe can also be enabled by using a filename `*.sub.ext`, e.g. the file above could be called `xhr.sub.js`. + +### `status` + +Used to set the HTTP status of the response, for example: + + example.js?pipe=status(410) + +### `header` + +Used to add or replace HTTP headers in the response. Takes two or +three arguments: the header name, the header value, and whether to +append the header rather than replace an existing header (default: +False). So, for example, a request for: + + example.html?pipe=header(Content-Type,text/plain) + +causes example.html to be returned with a text/plain content type +whereas: + + example.html?pipe=header(Content-Type,text/plain,True) + +Will cause example.html to be returned with both text/html and +text/plain content-type headers. + +If the comma (`,`) or closing parenthesis (`)`) characters appear in the header +value, those characters must be escaped with a backslash (`\`): + + example?pipe=header(Expires,Thu\,%2014%20Aug%201986%2018:00:00%20GMT) + +(Note that the programming environment from which the request is issued may +require that the backslash character itself be escaped.) + +### `slice` + +Used to send only part of a response body. Takes the start and, +optionally, end bytes as arguments, although either can be null to +indicate the start or end of the file, respectively. So for example: + + example.txt?pipe=slice(10,20) + +Would result in a response with a body containing 10 bytes of +example.txt including byte 10 but excluding byte 20. 
+ + example.txt?pipe=slice(10) + +Would cause all bytes from byte 10 of example.txt to be sent, but: + + example.txt?pipe=slice(null,20) + +Would send the first 20 bytes of example.txt. + +### `trickle` + +Note: Using this function will force a connection close. + +Used to send the body of a response in chunks with delays. Takes a +single argument that is a microsyntax consisting of colon-separated +commands. There are three types of commands: + +* Bare numbers represent a number of bytes to send + +* Numbers prefixed `d` indicate a delay in seconds + +* Numbers prefixed `r` must only appear at the end of the command, and + indicate that the preceding N items must be repeated until there is + no more content to send. The number of items to repeat must be even. + +In the absence of a repetition command, the entire remainder of the content is +sent at once when the command list is exhausted. So for example: + + example.txt?pipe=trickle(d1) + +causes a 1s delay before sending the entirety of example.txt. + + example.txt?pipe=trickle(100:d1) + +causes 100 bytes of example.txt to be sent, followed by a 1s delay, +and then the remainder of the file to be sent. On the other hand: + + example.txt?pipe=trickle(100:d1:r2) + +Will cause the file to be sent in 100-byte chunks separated by a 1s +delay until the whole content has been sent. diff --git a/testing/web-platform/tests/docs/writing-tests/submission-process.md b/testing/web-platform/tests/docs/writing-tests/submission-process.md new file mode 100644 index 0000000000..73161cd170 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/submission-process.md @@ -0,0 +1,41 @@ +# Submitting Tests + +Test submission is via the typical [GitHub workflow][github flow]. For detailed +guidelines on setup and each of these steps, please refer to the [GitHub Test +Submission](github-intro) documentation. + +* Fork the [GitHub repository][repo]. + +* Create a feature branch for your changes. + +* Make your changes. 
+ +* Run the `lint` script in the root of your checkout to detect common + mistakes in test submissions. There is [detailed documentation for the lint + tool](lint-tool). + +* Commit your changes. + +* Push your local branch to your GitHub repository. + +* Using the GitHub UI, create a Pull Request for your branch. + +* When you get review comments, make more commits to your branch to + address the comments. + +* Once everything is reviewed and all issues are addressed, your pull + request will be automatically merged. + +We can sometimes take a little while to go through pull requests because we +have to go through all the tests and ensure that they match the specification +correctly. But we look at all of them, and take everything that we can. + +Hop on to the [mailing list][public-test-infra] or [matrix +channel][matrix] if you have an issue. There is no need to announce +your review request; as soon as you make a Pull Request, GitHub will +inform interested parties. + +[repo]: https://github.com/web-platform-tests/wpt/ +[github flow]: https://guides.github.com/introduction/flow/ +[public-test-infra]: https://lists.w3.org/Archives/Public/public-test-infra/ +[matrix]: https://app.element.io/#/room/#wpt:matrix.org diff --git a/testing/web-platform/tests/docs/writing-tests/test-templates.md b/testing/web-platform/tests/docs/writing-tests/test-templates.md new file mode 100644 index 0000000000..e8f4bfe77f --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/test-templates.md @@ -0,0 +1,168 @@ +# Test Templates + +This page contains templates for creating tests. The template syntax +is compatible with several popular editors including TextMate, Sublime +Text, and emacs' YASnippet mode. + +Templates for filenames are also given. In this case `{}` is used to +delimit text to be replaced and `#` represents a digit. + +## Reftests + +### HTML test + +<!-- + Syntax highlighting cannot be enabled for the following template because it + contains invalid CSS. 
+--> + +``` +<!DOCTYPE html> +<meta charset="utf-8"> +<title>${1:Test title}</title> +<link rel="match" href="${2:URL of match}"> +<style> + ${3:Test CSS} +</style> +<body> + ${4:Test content} +</body> +``` + +Filename: `{test-topic}-###.html` + +### HTML reference + +<!-- + Syntax highlighting cannot be enabled for the following template because it + contains invalid CSS. +--> + +``` +<!DOCTYPE html> +<meta charset="utf-8"> +<title>${1:Reference title}</title> +<style> + ${2:Reference CSS} +</style> +<body> + ${3:Reference content} +</body> +``` + +Filename: `{description}.html` or `{test-topic}-###-ref.html` + +### SVG test + +``` xml +<svg xmlns="http://www.w3.org/2000/svg" xmlns:h="http://www.w3.org/1999/xhtml"> + <title>${1:Test title}</title> + <metadata> + <h:link rel="help" href="${2:Specification link}"/> + <h:link rel="match" href="${3:URL of match}"/> + </metadata> + ${4:Test body} +</svg> +``` + +Filename: `{test-topic}-###.svg` + +### SVG reference + +``` xml +<svg xmlns="http://www.w3.org/2000/svg"> + <title>${1:Reference title}</title> + ${2:Reference content} +</svg> +``` + +Filename: `{description}.svg` or `{test-topic}-###-ref.svg` + +## testharness.js tests + +### HTML + +``` html +<!DOCTYPE html> +<meta charset="utf-8"> +<title>${1:Test title}</title> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<script> +${2:Test body} +</script> +``` + +Filename: `{test-topic}-###.html` + +### HTML with [testdriver automation](testdriver) +``` html +<!DOCTYPE html> +<meta charset="utf-8"> +<title>${1:Test title}</title> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<script src="/resources/testdriver.js"></script> +<script src="/resources/testdriver-vendor.js"></script> + +<script> +${2:Test body} +</script> +``` + +Filename: `{test-topic}-###.html` + +### SVG + +``` xml +<svg xmlns="http://www.w3.org/2000/svg" 
xmlns:h="http://www.w3.org/1999/xhtml"> + <title>${1:Test title}</title> + <metadata> + <h:link rel="help" href="${2:Specification link}"/> + </metadata> + <h:script src="/resources/testharness.js"/> + <h:script src="/resources/testharnessreport.js"/> + <script><![CDATA[ + ${4:Test body} + ]]></script> +</svg> +``` + +Filename: `{test-topic}-###.svg` + +### Manual Test + +#### HTML + +``` html +<!DOCTYPE html> +<meta charset="utf-8"> +<title>${1:Test title}</title> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<script> +setup({explicit_timeout: true}); +${2:Test body} +</script> +``` + +Filename: `{test-topic}-###-manual.html` + +#### SVG + +``` xml +<svg xmlns="http://www.w3.org/2000/svg" xmlns:h="http://www.w3.org/1999/xhtml"> + <title>${1:Test title}</title> + <metadata> + <h:link rel="help" href="${2:Specification link}"/> + </metadata> + <h:script src="/resources/testharness.js"/> + <h:script src="/resources/testharnessreport.js"/> + <script><![CDATA[ + setup({explicit_timeout: true}); + ${4:Test body} + ]]></script> +</svg> +``` + +Filename: `{test-topic}-###-manual.svg` diff --git a/testing/web-platform/tests/docs/writing-tests/testdriver-extension-tutorial.md b/testing/web-platform/tests/docs/writing-tests/testdriver-extension-tutorial.md new file mode 100644 index 0000000000..185a27f1a4 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/testdriver-extension-tutorial.md @@ -0,0 +1,300 @@ +# Testdriver extension tutorial +Adding new commands to testdriver.js + +## Assumptions +We assume the following in this writeup: + - You know what web-platform-tests is and you have a working checkout and can run tests + - You know what WebDriver is + - Familiarity with JavaScript and Python + +## Introduction! + +Let's implement window resizing. We can do this via the [Set Window Rect](https://w3c.github.io/webdriver/#set-window-rect) command in WebDriver. 
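For orientation, the WebDriver Set Window Rect command takes a JSON body with `x`, `y`, `width`, and `height` members. A tiny Python sketch (a hypothetical helper, not part of wpt or this tutorial's code) of building such a body:

```python
import json

def build_set_window_rect_body(x, y, width, height):
    # The Set Window Rect command's JSON request body carries these
    # four integer members (per the WebDriver specification).
    return json.dumps({"x": x, "y": y, "width": width, "height": height})

body = build_set_window_rect_body(100, 200, 800, 600)
assert json.loads(body) == {"x": 100, "y": 200, "width": 800, "height": 600}
```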
+ +First, we need to think a little about what the API will look like. We will be using WebDriver and Marionette for this, so we can look and see that they take in x and y coordinates, and width and height integers. + +The first part of this will be browser-agnostic, but later we will need to implement a specific layer for each browser (here we will do Firefox and Chrome). + +## RFC Process + +Before we invest any significant work into extending the testdriver.js API, we should check in with other stakeholders of the Web Platform Tests community on the proposed changes, by writing an [RFC](https://github.com/web-platform-tests/rfcs) ("request for comments"). This is especially useful for changes that may affect test authors or downstream users of web-platform-tests. + +The [process is given in more detail in the RFC repo](https://github.com/web-platform-tests/rfcs#the-rfc-process), but to start let's send in a PR to the RFCs repo by adding a file named `rfcs/testdriver_set_window_rect.md`: + +```md +# RFC N: Add window resizing to testdriver.js +(*Note: N should be replaced by the PR number*) + +## Summary + +Add testdriver.js support for the [Set Window Rect command](https://w3c.github.io/webdriver/#set-window-rect). + +## Details +(*add details here*) + +## Risks +(*add risks here*) +``` + +Members of the community will then have the opportunity to comment on our proposed changes, and perhaps suggest improvements to our ideas. If all goes well, it will be approved and merged in. + +With that said, developing a prototype implementation may help others evaluate the proposal during the RFC process, so let's move on to writing some code. + +## Code! + +### [resources/testdriver.js](https://github.com/web-platform-tests/wpt/blob/master/resources/testdriver.js) + +This is the main entry point that tests use. Here we need to add a function to the `test_driver` object that will call the `test_driver_internal` object. + +```javascript +window.test_driver = { + + // other commands... 
+ + /** + * Triggers browser window to be resized and relocated + * + * This matches the behaviour of the {@link + * https://w3c.github.io/webdriver/#set-window-rect|WebDriver + * Set Window Rect command}. + * + * @param {Integer} x - The x coordinate of the top left of the window + * @param {Integer} y - The y coordinate of the top left of the window + * @param {Integer} width - The width of the window + * @param {Integer} height - The height of the window + * @returns {Promise} fulfilled after the window rect is set, or rejected in + * case the WebDriver command errors + */ + set_window_rect: function(x, y, width, height) { + return window.test_driver_internal.set_window_rect(x, y, width, height); + } +``` + +In the same file, let's add a default implementation to the `test_driver_internal` object. (Make sure to do this if the internal call has different arguments than the external call, especially if it calls multiple internal calls.) + +```javascript +window.test_driver_internal = { + + // other commands... + + set_window_rect: function(x, y, width, height) { + return Promise.reject(new Error("unimplemented")) + } +``` +We will leave this unimplemented and override it in another file. Let's do that now! + +### [tools/wptrunner/wptrunner/testdriver-extra.js](https://github.com/web-platform-tests/wpt/blob/master/tools/wptrunner/wptrunner/testdriver-extra.js) + +This will be the default function called when invoking the test driver commands (sometimes it is overridden by testdriver-vendor.js, but that is outside the scope of this tutorial). In most cases this is just boilerplate: + +```javascript +window.test_driver_internal.set_window_rect = function(x, y, width, height) { + return create_action("set_window_rect", {x, y, width, height}); +}; +``` + +The `create_action` helper function does the heavy lifting of setting up a postMessage to the wptrunner internals as well as returning a promise that will resolve once the call is complete. 
+ +Next, this is passed to the executor and protocol in wptrunner. Time to switch to Python! + +### [tools/wptrunner/wptrunner/executors/protocol.py](https://github.com/web-platform-tests/wpt/blob/master/tools/wptrunner/wptrunner/executors/protocol.py) + +```python +class SetWindowRectProtocolPart(ProtocolPart): + """Protocol part for resizing and changing location of window""" + __metaclass__ = ABCMeta + + name = "set_window_rect" + + @abstractmethod + def set_window_rect(self, x, y, width, height): + """Change the window rect + + :param x: The x coordinate of the top left of the window. + :param y: The y coordinate of the top left of the window. + :param width: The width of the window. + :param height: The height of the window.""" + pass +``` + +Next we create a representation of our new action. + +### [tools/wptrunner/wptrunner/executors/actions.py](https://github.com/web-platform-tests/wpt/blob/master/tools/wptrunner/wptrunner/executors/actions.py) + +```python +class SetWindowRectAction(object): + def __init__(self, logger, protocol): + self.logger = logger + self.protocol = protocol + + def __call__(self, payload): + x, y, width, height = payload["x"], payload["y"], payload["width"], payload["height"] + self.logger.debug("Setting window rect to be: x=%s, y=%s, width=%s, height=%s" % + (x, y, width, height)) + self.protocol.set_window_rect.set_window_rect(x, y, width, height) +``` + +Then add your new class to the `actions = [...]` list at the end of the file. + +Don't forget to write docs in `testdriver.md`. +Now we write the browser-specific implementations. + +### Chrome + +We will modify [executorwebdriver.py](https://github.com/web-platform-tests/wpt/blob/master/tools/wptrunner/wptrunner/executors/executorwebdriver.py) and use the WebDriver API. + +There isn't too much work to do here; we just need to define a subclass of the protocol part we defined earlier. 
+ +```python +class WebDriverSetWindowRectProtocolPart(SetWindowRectProtocolPart): + def setup(self): + self.webdriver = self.parent.webdriver + + def set_window_rect(self, x, y, width, height): + return self.webdriver.set_window_rect(x, y, width, height) +``` + +Make sure to import the protocol part too! + +```python +from .protocol import (BaseProtocolPart, + TestharnessProtocolPart, + Protocol, + SelectorProtocolPart, + ClickProtocolPart, + SendKeysProtocolPart, + {... other protocol parts} + SetWindowRectProtocolPart, # add this! + TestDriverProtocolPart) +``` + +Here we have the setup method which just redefines the webdriver object at this level. The important part is the `set_window_rect` function (and it's important it is named that since we called it that earlier). This will call the WebDriver API for [set window rect](https://w3c.github.io/webdriver/#set-window-rect). + +Finally, we just need to tell the WebDriverProtocol to implement this part. + +```python +class WebDriverProtocol(Protocol): + implements = [WebDriverBaseProtocolPart, + WebDriverTestharnessProtocolPart, + WebDriverSelectorProtocolPart, + WebDriverClickProtocolPart, + WebDriverSendKeysProtocolPart, + {... other protocol parts} + WebDriverSetWindowRectProtocolPart, # add this! + WebDriverTestDriverProtocolPart] +``` + + +### Firefox +We use the [set window rect](https://firefox-source-docs.mozilla.org/python/marionette_driver.html#marionette_driver.marionette.Marionette.set_window_rect) Marionette command. + +We will modify [executormarionette.py](https://github.com/web-platform-tests/wpt/blob/master/tools/wptrunner/wptrunner/executors/executormarionette.py) and use the Marionette Python API. + +We have little actual work to do here! We just need to define a subclass of the protocol part we defined earlier. 
+ +```python +class MarionetteSetWindowRectProtocolPart(SetWindowRectProtocolPart): + def setup(self): + self.marionette = self.parent.marionette + + def set_window_rect(self, x, y, width, height): + return self.marionette.set_window_rect(x, y, width, height) +``` + +Make sure to import the protocol part too! + +```python +from .protocol import (BaseProtocolPart, + TestharnessProtocolPart, + Protocol, + SelectorProtocolPart, + ClickProtocolPart, + SendKeysProtocolPart, + {... other protocol parts} + SetWindowRectProtocolPart, # add this! + TestDriverProtocolPart) +``` + +Here we have the setup method which just redefines the marionette object at this level. The important part is the `set_window_rect` function (and it's important it is named that since we called it that earlier). This will call the Marionette API for [set window rect](https://firefox-source-docs.mozilla.org/python/marionette_driver.html#marionette_driver.marionette.Marionette.set_window_rect) (`self.marionette` is a marionette instance here). + +Finally, we just need to tell the MarionetteProtocol to implement this part. + +```python +class MarionetteProtocol(Protocol): + implements = [MarionetteBaseProtocolPart, + MarionetteTestharnessProtocolPart, + MarionettePrefsProtocolPart, + MarionetteStorageProtocolPart, + MarionetteSelectorProtocolPart, + MarionetteClickProtocolPart, + MarionetteSendKeysProtocolPart, + {... other protocol parts} + MarionetteSetWindowRectProtocolPart, # add this + MarionetteTestDriverProtocolPart] +``` + +### Other Browsers + +Other browsers (such as Safari) may use executorselenium, or a completely new executor (such as Servo). For these, you must change the executor in the same way as we did with Chrome and Firefox. + +### Write an infra test + +Make sure to add a test to `infrastructure/testdriver` :) + +Here is some template code! 
+ +```html +<!DOCTYPE html> +<meta charset="utf-8"> +<title>TestDriver set window rect method</title> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<script src="/resources/testdriver.js"></script> +<script src="/resources/testdriver-vendor.js"></script> + +<script> +promise_test(async t => { + await test_driver.set_window_rect(100, 100, 100, 100); + // do something +}); +</script> +``` + +### What about testdriver-vendor.js? + +The file [testdriver-vendor.js](https://github.com/web-platform-tests/wpt/blob/master/resources/testdriver-vendor.js) is the equivalent of testdriver-extra.js above, except it is +run instead of testdriver-extra.js in browser-specific test environments. For example, in [Chromium web_tests](https://cs.chromium.org/chromium/src/third_party/blink/web_tests/). + +### What if I need to return a value from my testdriver API? + +You can return values from testdriver by just making your Action and Protocol classes use return statements. The data being returned will be serialized into JSON and passed +back to the test on the resolving promise. The test can then deserialize the JSON to access the return values. Here is an example of a theoretical GetWindowRect API: + +```python +class GetWindowRectAction(object): + def __call__(self, payload): + return self.protocol.get_window_rect.get_window_rect() +``` + +The WebDriver command will return a [WindowRect object](https://w3c.github.io/webdriver/#dfn-window-rect), which is a dictionary with keys `x`, `y`, `width`, and `height`. 
+```python +class WebDriverGetWindowRectProtocolPart(GetWindowRectProtocolPart): + def get_window_rect(self): + return self.webdriver.get_window_rect() +``` + +Then a test can access the return value as follows: +```html +<script> +async_test(t => { + test_driver.get_window_rect() + .then(t.step_func_done((result) => { + assert_equals(result.x, 0) + assert_equals(result.y, 10) + assert_equals(result.width, 800) + assert_equals(result.height, 600) + })) +}); +</script> +``` diff --git a/testing/web-platform/tests/docs/writing-tests/testdriver.md b/testing/web-platform/tests/docs/writing-tests/testdriver.md new file mode 100644 index 0000000000..24159e82cc --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/testdriver.md @@ -0,0 +1,235 @@ +# testdriver.js Automation + +```eval_rst + +.. contents:: Table of Contents + :depth: 3 + :local: + :backlinks: none +``` + +testdriver.js provides a means to automate tests that cannot be +written purely using web platform APIs. Outside of automation +contexts, it allows human operators to provide expected input +manually (for operations which may be described in simple terms). + +It is currently supported only for [testharness.js](testharness) +tests. + +## Markup ## + +The `testdriver.js` and `testdriver-vendor.js` scripts must both be included +in any document that uses testdriver (and in the top-level test +document when using testdriver from a different context): + +```html +<script src="/resources/testdriver.js"></script> +<script src="/resources/testdriver-vendor.js"></script> +``` + +## API ## + +testdriver.js exposes its API through the `test_driver` variable in +the global scope. + +### User Interaction ### + +```eval_rst +.. js:autofunction:: test_driver.click +.. js:autofunction:: test_driver.send_keys +.. js:autofunction:: test_driver.action_sequence +.. js:autofunction:: test_driver.bless +``` + +### Window State ### +```eval_rst +.. js:autofunction:: test_driver.minimize_window +.. 
js:autofunction:: test_driver.set_window_rect +``` + +### Cookies ### +```eval_rst +.. js:autofunction:: test_driver.delete_all_cookies +.. js:autofunction:: test_driver.get_all_cookies +.. js:autofunction:: test_driver.get_named_cookie +``` + +### Permissions ### +```eval_rst +.. js:autofunction:: test_driver.set_permission +``` + +### Authentication ### + +```eval_rst +.. js:autofunction:: test_driver.add_virtual_authenticator +.. js:autofunction:: test_driver.remove_virtual_authenticator +.. js:autofunction:: test_driver.add_credential +.. js:autofunction:: test_driver.get_credentials +.. js:autofunction:: test_driver.remove_credential +.. js:autofunction:: test_driver.remove_all_credentials +.. js:autofunction:: test_driver.set_user_verified +``` + +### Page Lifecycle ### +```eval_rst +.. js:autofunction:: test_driver.freeze +``` + +### Reporting Observer ### +```eval_rst +.. js:autofunction:: test_driver.generate_test_report +``` + +### Storage ### +```eval_rst +.. js:autofunction:: test_driver.set_storage_access + +``` + +### Accessibility ### +```eval_rst +.. js:autofunction:: test_driver.get_computed_label +.. js:autofunction:: test_driver.get_computed_role + +``` + +### Secure Payment Confirmation ### +```eval_rst +.. js:autofunction:: test_driver.set_spc_transaction_mode +``` + +### Using test_driver in other browsing contexts ### + +Testdriver can be used in browsing contexts (i.e. windows or frames) +from which it's possible to get a reference to the top-level test +context. There are two basic approaches depending on whether the +context in which testdriver is used is same-origin with the test +context, or different origin. + +For same-origin contexts, the context can be passed directly into the +testdriver API calls. For functions that take an element argument this +is done implicitly using the owner document of the element. For +functions that don't take an element, this is done via an explicit +context argument, which takes a WindowProxy object. 
+ +Example: +```js +let win = window.open("example.html") +win.onload = async () => { + await test_driver.set_permission({ name: "background-fetch" }, "denied", win); +} +``` + +```eval_rst +.. js:autofunction:: test_driver.set_test_context +.. js:autofunction:: test_driver.message_test +``` + +For cross-origin cases, passing in the `context` doesn't work because +of limitations in the WebDriver protocol used to implement testdriver +in a cross-browser fashion. Instead one may include the testdriver +scripts directly in the relevant document, and use the +[`test_driver.set_test_context`](#test_driver.set_test_context) API to +specify the browsing context containing testharness.js. Commands are +then sent via `postMessage` to the test context. For convenience there +is also a [`test_driver.message_test`](#test_driver.message_test) +function that can be used to send arbitrary messages to the test +window. For example, in an auxiliary browsing context: + +```js +test_driver.set_test_context(window.opener) +await test_driver.click(document.getElementsByTagName("button")[0]) +test_driver.message_test("click complete") +``` + +The requirement to have a handle to the test window does mean it's +currently not possible to write tests where such handles can't be +obtained e.g. in the case of `rel=noopener`. + +## Actions ## + +### Markup ### + +To use the [Actions](#Actions) API `testdriver-actions.js` must be +included in the document, in addition to `testdriver.js`: + +```html +<script src="/resources/testdriver-actions.js"></script> +``` + +### API ### + +```eval_rst +.. 
js:autoclass:: Actions + :members: +``` + + +### Using in other browsing contexts ### + +For the actions API, the context can be set using the `setContext` +method on the builder: + +```js +let actions = new test_driver.Actions() + .setContext(frames[0]) + .keyDown("p") + .keyUp("p"); +await actions.send(); +``` + +Note that if an action uses an element reference, the context will be +derived from that element, and must match any explicitly set +context. Using elements in multiple contexts in a single action chain +is not supported. + +### send_keys + +Usage: `test_driver.send_keys(element, keys)` + * _element_: a DOM Element object + * _keys_: string to send to the element + +This function causes the string _keys_ to be sent to the target +element (an `Element` object), potentially scrolling the document to +make it possible to send keys. It returns a promise that resolves +after the keys have been sent, or rejects if the keys cannot be sent +to the element. + +This works with elements in other frames/windows as long as they are +same-origin with the test, and the test does not depend on the +window.name property remaining unset on the target window. + +Note that if the element that the keys need to be sent to does not have +a unique ID, the document must not have any DOM mutations made +between the function being called and the promise settling. + +To send special keys, one must send the respective key's codepoint. Since this uses the WebDriver protocol, you can find a [list of code points for special keys in the spec](https://w3c.github.io/webdriver/#keyboard-actions). +For example, to send the tab key you would send "\uE004". + +_Note: these special-key codepoints are not necessarily what you would expect. 
For example, <kbd>Esc</kbd> is the Private Use Area codepoint `\uE00C`, not the ASCII Escape character `\u001B`._

[activation]: https://html.spec.whatwg.org/multipage/interaction.html#activation

### set_permission

Usage: `test_driver.set_permission(descriptor, state, context=null)`
 * _descriptor_: a
   [PermissionDescriptor](https://w3c.github.io/permissions/#dictdef-permissiondescriptor)
   or derived object
 * _state_: a
   [PermissionState](https://w3c.github.io/permissions/#enumdef-permissionstate)
   value
 * _context_: (optional) a WindowProxy for the browsing context in which to perform the call

This function causes permission requests and queries for the status of a
certain permission type (e.g. "push", or "background-fetch") to always
return _state_. It returns a promise that resolves after the permission has
been set to be overridden with _state_.

Example:

``` js
await test_driver.set_permission({ name: "background-fetch" }, "denied");
await test_driver.set_permission({ name: "push", userVisibleOnly: true }, "granted");
```
diff --git a/testing/web-platform/tests/docs/writing-tests/testharness-api.md b/testing/web-platform/tests/docs/writing-tests/testharness-api.md
new file mode 100644
index 0000000000..339815c5ff
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/testharness-api.md
@@ -0,0 +1,839 @@
# testharness.js API

```eval_rst

.. contents:: Table of Contents
   :depth: 3
   :local:
   :backlinks: none
```

testharness.js provides a framework for writing testcases. It is intended to
provide a convenient API for making common assertions, and to work both
for testing synchronous and asynchronous DOM features in a way that
promotes clear, robust tests.

## Markup ##

The test harness script can be used from HTML or SVG documents and workers. 
+

From an HTML or SVG document, start by importing both `testharness.js` and
`testharnessreport.js` scripts into the document:

```html
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
```

Refer to the [Web Workers](#web-workers) section for details and an example on
testing within a web worker.

Within each file one may define one or more tests. Each test is atomic in the
sense that a single test has a single status (`PASS`/`FAIL`/`TIMEOUT`/`NOTRUN`).
Within each test one may have a number of asserts. The test fails at the first
failing assert, and the remainder of the test is (typically) not run.

**Note:** From the point of view of a test harness, each document
using testharness.js is a single "test" and each js-defined
[`Test`](#Test) is referred to as a "subtest".

By default, tests must be created before the load event fires. For ways
to create tests after the load event, see [determining when all tests
are complete](#determining-when-all-tests-are-complete).

### Harness Timeout ###

Execution of tests on a page is subject to a global timeout. By
default this is 10s, but a test runner may set a timeout multiplier
which alters the value according to the requirements of the test
environment (e.g. to give a longer timeout for debug builds).

Long-running tests may opt into a longer timeout by providing a
`<meta>` element:

```html
<meta name="timeout" content="long">
```

By default this increases the timeout to 60s, again subject to the
timeout multiplier.

Tests which define a large number of subtests may need to use the
[variant](testharness.html#specifying-test-variants) feature to break
a single test document into several chunks that complete inside the
timeout.

Occasionally tests may have a race between the harness timing out and
a particular test failing, typically when the test waits for some
event that never occurs. 
In this case it is possible to use
[`Test.force_timeout()`](#Test.force_timeout) in place of
[`assert_unreached()`](#assert_unreached), to immediately fail the
test but with a status of `TIMEOUT`. This should only be used as a
last resort when it is not possible to make the test reliable in some
other way.

## Defining Tests ##

### Synchronous Tests ###

```eval_rst
.. js:autofunction:: <anonymous>~test
   :short-name:
```
A trivial test for the DOM [`hasFeature()`](https://dom.spec.whatwg.org/#dom-domimplementation-hasfeature)
method (which is defined to always return true) would be:

```js
test(function() {
  assert_true(document.implementation.hasFeature());
}, "hasFeature() with no arguments")
```

### Asynchronous Tests ###

Testing asynchronous features is somewhat more complex since the
result of a test may depend on one or more events or other
callbacks. The API provided for testing these features is intended to
be rather low-level but applicable to many situations.

```eval_rst
.. js:autofunction:: async_test

```

Create a [`Test`](#Test):

```js
var t = async_test("DOMContentLoaded")
```

Code is run as part of the test by calling the [`step`](#Test.step)
method with a function containing the test
[assertions](#assert-functions):

```js
document.addEventListener("DOMContentLoaded", function(e) {
  t.step(function() {
    assert_true(e.bubbles, "bubbles should be true");
  });
});
```

When all the steps are complete, the [`done`](#Test.done) method must
be called:

```js
t.done();
```

`async_test` can also take a function as its first argument. This
function is called with the test object as both its `this` object and
first argument. 
The above example can be rewritten as:

```js
async_test(function(t) {
  document.addEventListener("DOMContentLoaded", function(e) {
    t.step(function() {
      assert_true(e.bubbles, "bubbles should be true");
    });
    t.done();
  });
}, "DOMContentLoaded");
```

In many cases it is convenient to run a step in response to an event or a
callback. A simple way of doing this is with the `step_func` method,
which returns a function that, when called, runs a test step. For example:

```js
document.addEventListener("DOMContentLoaded", t.step_func(function(e) {
  assert_true(e.bubbles, "bubbles should be true");
  t.done();
}));
```

As a further convenience, a `step_func` that calls
[`done`](#Test.done) can instead use
[`step_func_done`](#Test.step_func_done), as follows:

```js
document.addEventListener("DOMContentLoaded", t.step_func_done(function(e) {
  assert_true(e.bubbles, "bubbles should be true");
}));
```

For asynchronous callbacks that should never execute,
[`unreached_func`](#Test.unreached_func) can be used. For example:

```js
document.documentElement.addEventListener("DOMContentLoaded",
  t.unreached_func("DOMContentLoaded should not be fired on the document element"));
```

**Note:** testharness.js doesn't impose any scheduling on async
tests; they run whenever the step functions are invoked. This means
multiple tests in the same global can be running concurrently and must
take care not to interfere with each other.

### Promise Tests ###

```eval_rst
.. js:autofunction:: promise_test
```

`test_function` is a function that receives a new [Test](#Test) as an
argument. It must return a promise. The test completes when the
returned promise settles. The test fails if the returned promise
rejects. 
+

E.g.:

```js
function foo() {
  return Promise.resolve("foo");
}

promise_test(function() {
  return foo()
    .then(function(result) {
      assert_equals(result, "foo", "foo should return 'foo'");
    });
}, "Simple example");
```

In the example above, `foo()` returns a Promise that resolves with the string
"foo". The `test_function` passed into `promise_test` invokes `foo` and attaches
a resolve reaction that verifies the returned value.

Note that in the promise chain constructed in `test_function`,
assertions don't need to be wrapped in [`step`](#Test.step) or
[`step_func`](#Test.step_func) calls.

It is possible to mix promise tests with callback functions using
[`step`](#Test.step). However this tends to produce confusing tests;
it's recommended to convert any asynchronous behaviour into part of
the promise chain. For example, instead of

```js
promise_test(t => {
  return new Promise(resolve => {
    window.addEventListener("DOMContentLoaded", t.step_func(event => {
      assert_true(event.bubbles, "bubbles should be true");
      resolve();
    }));
  });
}, "DOMContentLoaded");
```

try:

```js
promise_test(() => {
  return new Promise(resolve => {
    window.addEventListener("DOMContentLoaded", resolve);
  }).then(event => {
    assert_true(event.bubbles, "bubbles should be true");
  });
}, "DOMContentLoaded");
```

**Note:** Unlike asynchronous tests, testharness.js queues promise
tests so the next test won't start running until after the previous
promise test finishes. [When mixing promise-based logic and async
steps](https://github.com/web-platform-tests/wpt/pull/17924), the next
test may begin to execute before the returned promise has settled. Use
[add_cleanup](#cleanup) to register any necessary cleanup actions, such
as resetting global state, that need to happen consistently before the
next test starts.

To test that a promise rejects with a specified exception, see [promise
rejection](#promise-rejection). 
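The sequential queueing described in the note above can be sketched as a small standalone snippet. This is only an illustration of the ordering contract, not the real testharness.js implementation; `promise_test_sketch` and `results` are hypothetical names:

```javascript
// Hypothetical sketch of how promise tests are queued: each queued test
// only starts after the previous one has settled, and a rejection marks
// that test as failed without preventing later tests from running.
const results = [];
let queue = Promise.resolve();

function promise_test_sketch(fn, name) {
  queue = queue
    .then(() => fn())
    .then(() => results.push(`${name}: PASS`),
          () => results.push(`${name}: FAIL`));
  return queue;
}

promise_test_sketch(() => Promise.resolve(), "first");
promise_test_sketch(() => Promise.reject(new Error("boom")), "second");

queue.then(() => console.log(results));
```

Running this logs `[ 'first: PASS', 'second: FAIL' ]`: the second test only begins once the first has finished, which is the ordering guarantee the real harness provides for `promise_test`.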
+

### Single Page Tests ###

Sometimes, particularly when dealing with asynchronous behaviour,
having exactly one test per page is desirable, and the overhead of
wrapping everything in functions for isolation becomes
burdensome. For these cases `testharness.js` supports "single page
tests".

In order for a test to be interpreted as a single page test, it should set the
`single_test` [setup option](#setup) to `true`.

```html
<!doctype html>
<title>Basic document.body test</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<body>
  <script>
    setup({ single_test: true });
    assert_equals(document.body, document.getElementsByTagName("body")[0])
    done()
  </script>
```

The test title for single page tests is always taken from `document.title`.

## Making assertions ##

Functions for making assertions start with `assert_`. The full list of
asserts available is documented in the [asserts](#assert-functions)
section. The general signature is:

```js
assert_something(actual, expected, description)
```

although not all assertions precisely match this pattern,
e.g. [`assert_true`](#assert_true) only takes `actual` and
`description` as arguments.

The description parameter is used to present more useful error
messages when a test fails.

When assertions are violated, they throw an
[`AssertionError`](#AssertionError) exception. This interrupts test
execution, so subsequent statements are not evaluated. A given test
can only fail due to one such violation, so if you would like to
assert multiple behaviors independently, you should use multiple
tests.

**Note:** Unless the test is a [single page test](#single-page-tests),
assert functions must only be called in the context of a
[`Test`](#Test). 
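The assertion contract described above can be sketched with a minimal standalone stand-in. This is a simplified illustration, not the real testharness.js code; `assert_equals_sketch` is a hypothetical name:

```javascript
// Simplified stand-in for the assert_* behaviour described above: a
// failed assertion throws an AssertionError, which interrupts the test.
class AssertionError extends Error {
  constructor(message) {
    super(message);
    this.name = "AssertionError";
  }
}

function assert_equals_sketch(actual, expected, description) {
  // testharness.js compares with a "same value" algorithm, so NaN equals
  // NaN; Object.is approximates that closely enough for this sketch.
  if (!Object.is(actual, expected)) {
    throw new AssertionError(
      `expected ${String(expected)} but got ${String(actual)}` +
      (description ? ` (${description})` : ""));
  }
}

assert_equals_sketch(1 + 1, 2, "arithmetic works"); // passes silently
```

A mismatch throws, which inside a real subtest would end it with a status of `FAIL`; the optional description is folded into the error message.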
+

### Optional Features ###

If a test depends on a specification or specification feature that is
OPTIONAL (in the [RFC 2119
sense](https://tools.ietf.org/html/rfc2119)),
[`assert_implements_optional`](#assert_implements_optional) can be
used to indicate that failing the test does not mean violating a web
standard. For example:

```js
async_test((t) => {
  const video = document.createElement("video");
  assert_implements_optional(video.canPlayType("video/webm"));
  video.src = "multitrack.webm";
  // test something specific to multiple audio tracks in a WebM container
  t.done();
}, "WebM with multiple audio tracks");
```

A failing [`assert_implements_optional`](#assert_implements_optional)
call is reported as a status of `PRECONDITION_FAILED` for the
subtest. This unusual status code is a legacy leftover; see the [RFC
that introduced
`assert_implements_optional`](https://github.com/web-platform-tests/rfcs/pull/48).

[`assert_implements_optional`](#assert_implements_optional) can also
be used during [test setup](#setup). For example:

```js
setup(() => {
  assert_implements_optional("optionalfeature" in document.body,
                             "'optionalfeature' event supported");
});
async_test(() => { /* test #1 waiting for "optionalfeature" event */ });
async_test(() => { /* test #2 waiting for "optionalfeature" event */ });
```

A failing [`assert_implements_optional`](#assert_implements_optional)
during setup is reported as a status of `PRECONDITION_FAILED` for the
test, and the subtests will not run.

See also the `.optional` [file name convention](file-names.md), which may be
preferable if the entire test is optional.

## Testing Across Globals ##

### Consolidating tests from other documents ###

```eval_rst
.. js:autofunction:: fetch_tests_from_window
```

**Note:** By default any markup file referencing `testharness.js` will
be detected as a test. To avoid this, it must be put in a `support`
directory. 
+

The current test suite will not report completion until all fetched
tests are complete, and errors in the child contexts will result in
failures for the suite in the current context.

Here's an example that uses `window.open`.

`support/child.html`:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Child context test(s)</title>
  <script src="/resources/testharness.js"></script>
</head>
<body>
  <div id="log"></div>
  <script>
    test(function(t) {
      assert_true(true, "true is true");
    }, "Simple test");
  </script>
</body>
</html>
```

`test.html`:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Primary test context</title>
  <script src="/resources/testharness.js"></script>
  <script src="/resources/testharnessreport.js"></script>
</head>
<body>
  <div id="log"></div>
  <script>
    var child_window = window.open("support/child.html");
    fetch_tests_from_window(child_window);
  </script>
</body>
</html>
```

### Web Workers ###

```eval_rst
.. js:autofunction:: fetch_tests_from_worker
```

The `testharness.js` script can be used from within [dedicated workers, shared
workers](https://html.spec.whatwg.org/multipage/workers.html) and [service
workers](https://w3c.github.io/ServiceWorker/).

Testing from a worker script is different from testing from an HTML document in
several ways:

* Workers have no reporting capability since they are running in the background.
  Hence they rely on `testharness.js` running in a companion client HTML document
  for reporting.

* Shared and service workers do not have a unique client document
  since there could be more than one document that communicates with
  these workers. So a client document needs to explicitly connect to a
  worker and fetch test results from it using
  [`fetch_tests_from_worker`](#fetch_tests_from_worker). This is true
  even for a dedicated worker. 
Once connected, the individual tests + running in the worker (or those that have already run to completion) + will be automatically reflected in the client document. + +* The client document controls the timeout of the tests. All worker + scripts act as if they were started with the + [`explicit_timeout`](#setup) option. + +* Dedicated and shared workers don't have an equivalent of an `onload` + event. Thus the test harness has no way to know when all tests have + completed (see [Determining when all tests are + complete](#determining-when-all-tests-are-complete)). So these + worker tests behave as if they were started with the + [`explicit_done`](#setup) option. Service workers depend on the + [oninstall](https://w3c.github.io/ServiceWorker/#service-worker-global-scope-install-event) + event and don't require an explicit [`done`](#done) call. + +Here's an example that uses a dedicated worker. + +`worker.js`: + +```js +importScripts("/resources/testharness.js"); + +test(function(t) { + assert_true(true, "true is true"); +}, "Simple test"); + +// done() is needed because the testharness is running as if explicit_done +// was specified. +done(); +``` + +`test.html`: + +```html +<!DOCTYPE html> +<title>Simple test</title> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<div id="log"></div> +<script> + +fetch_tests_from_worker(new Worker("worker.js")); + +</script> +``` + + +`fetch_tests_from_worker` returns a promise that resolves once all the remote +tests have completed. This is useful if you're importing tests from multiple +workers and want to ensure they run in series: + +```js +(async function() { + await fetch_tests_from_worker(new Worker("worker-1.js")); + await fetch_tests_from_worker(new Worker("worker-2.js")); +})(); +``` + +## Cleanup ## + +Occasionally tests may create state that will persist beyond the test +itself. 
In order to ensure that tests are independent, such state
should be cleaned up once the test has a result. This can be achieved
by adding cleanup callbacks to the test. Such callbacks are registered
using the [`add_cleanup`](#Test.add_cleanup) method. All registered
callbacks will be run as soon as the test result is known. For
example:

```js
  test(function() {
    var element = document.createElement("div");
    element.setAttribute("id", "null");
    document.body.appendChild(element);
    this.add_cleanup(function() { document.body.removeChild(element) });
    assert_equals(document.getElementById(null), element);
  }, "Calling document.getElementById with a null argument.");
```

If the test was created using the [`promise_test`](#promise_test) API,
then cleanup functions may optionally return a Promise and delay the
completion of the test until all cleanup promises have settled.

All callbacks will be invoked synchronously; tests that require more
complex cleanup behavior should manage execution order explicitly. If
any of the eventual values are rejected, the test runner will report
an error.

### AbortSignal support ###

[`Test.get_signal`](#Test.get_signal) gives an AbortSignal that is aborted when
the test finishes. This can be useful when dealing with APIs that accept an
`AbortSignal`.

```js
promise_test(async t => {
  // Throws when the user agent does not support AbortSignal
  const signal = t.get_signal();
  const event = await new Promise(resolve => {
    document.body.addEventListener("click", resolve, { once: true, signal });
    document.body.click();
  });
  assert_equals(event.type, "click");
}, "");
```

## Timers in Tests ##

In general the use of timers (i.e. `setTimeout`) in tests is
discouraged because timers are an observed source of instability for
tests running in CI. 
In particular, if a test should fail when
something doesn't happen, it is good practice to simply let the test
run to the full timeout rather than trying to guess an appropriate
shorter timeout to use.

In other cases it may be necessary to use a timeout (e.g., for a test
that only passes if some event is *not* fired). In this case it is
*not* permitted to use the standard `setTimeout` function. Instead use
either [`Test.step_wait()`](#Test.step_wait),
[`Test.step_wait_func()`](#Test.step_wait_func), or
[`Test.step_timeout()`](#Test.step_timeout). [`Test.step_wait()`](#Test.step_wait)
and [`Test.step_wait_func()`](#Test.step_wait_func) are preferred
when there's a specific condition that needs to be met for the test to
proceed. [`Test.step_timeout()`](#Test.step_timeout) is preferred in other cases.

Note that timeouts generally need to be a few seconds long in order to
produce stable results in all test environments.

For [single page tests](#single-page-tests),
[step_timeout](#step_timeout) is also available as a global function.

```eval_rst

.. js:autofunction:: <anonymous>~step_timeout
   :short-name:
```

## Harness Configuration ##

### Setup ###

<!-- sphinx-js doesn't support documenting types so we have to copy in
     the SettingsObject documentation by hand -->

```eval_rst
.. js:autofunction:: setup

.. js:autofunction:: promise_setup

:SettingsObject:

   :Properties:
      - **single_test** (*bool*) - Use the single-page-test mode. In this
        mode the Document represents a single :js:class:`Test`. Asserts may be
        used directly without requiring :js:func:`Test.step` or similar wrappers,
        and any exceptions set the status of the test rather than the status
        of the harness.

      - **allow_uncaught_exception** (*bool*) - don't treat an
        uncaught exception as an error; needed when e.g. testing the
        `window.onerror` handler. 
+

      - **explicit_done** (*bool*) - Wait for a call to :js:func:`done`
        before declaring all tests complete (this is always true for
        single-page tests).

      - **hide_test_state** (*bool*) - hide the test state output while
        the test is running; this is helpful when the output of the test
        state may interfere with the test results.

      - **explicit_timeout** (*bool*) - disable file timeout; only
        stop waiting for results when the :js:func:`timeout` function is
        called. This should typically only be set for manual tests, or
        by a test runner that provides its own timeout mechanism.

      - **timeout_multiplier** (*Number*) - Multiplier to apply to
        timeouts. This should only be set by a test runner.

      - **output** (*bool*) - (default: `true`) Whether to output a table
        containing a summary of test results. This should typically
        only be set by a test runner, and is typically set to false
        for performance reasons when running in CI.

      - **output_document** (*Document*) - The document to which
        results should be logged. By default this is the current
        document, but it could be an ancestor document in some cases,
        e.g. an SVG test loaded in an HTML wrapper.

      - **debug** (*bool*) - (default: `false`) Whether to output
        additional debugging information such as a list of
        asserts. This should typically only be set by a test runner.
```

### Output ###

If the file containing the tests is an HTML file, a table containing
the test results will be added to the document after all tests have
run. By default this will be added to a `div` element with `id=log` if
it exists, or a new `div` element appended to `document.body` if it
does not. This can be suppressed by setting the [`output`](#setup)
setting to `false`.

If [`output`](#setup) is `true`, the test will, by default, report
progress during execution. In some cases this progress report will
invalidate the test. 
In this case the test should set the
[`hide_test_state`](#setup) setting to `true`.


### Determining when all tests are complete ###

By default, for tests running in a `WindowGlobalScope` that are not
configured as a [single page test](#single-page-tests), the test
harness will assume there are no more results to come when:

 1. There are no `Test` objects that have been created but not completed
 2. The load event on the document has fired

For single page tests, or when the `explicit_done` property has been
set in the [setup](#setup), the [`done`](#done) function must be used.

```eval_rst

.. js:autofunction:: <anonymous>~done
   :short-name:
.. js:autofunction:: <anonymous>~timeout
   :short-name:
```

Dedicated and shared workers don't have an event that corresponds to
the `load` event in a document. Therefore these worker tests always
behave as if the `explicit_done` property is set to true (unless they
are defined using [the "multi-global"
pattern](testharness.html#multi-global-tests)). Service workers depend
on the
[install](https://w3c.github.io/ServiceWorker/#service-worker-global-scope-install-event)
event which is fired following the completion of [running the
worker](https://html.spec.whatwg.org/multipage/workers.html#run-a-worker).

## Reporting API ##

### Callbacks ###

The framework provides callbacks corresponding to 4 events:

 * `start` - triggered when the first Test is created
 * `test_state` - triggered when a test state changes
 * `result` - triggered when a test result is received
 * `complete` - triggered when all results are received

```eval_rst
.. js:autofunction:: add_start_callback
.. js:autofunction:: add_test_state_callback
.. js:autofunction:: add_result_callback
.. js:autofunction:: add_completion_callback
.. js:autoclass:: TestsStatus
   :members:
.. js:autoclass:: AssertRecord
   :members:
```

### External API ###

In order to collect the results of multiple pages containing tests, the test
harness will, when loaded in a nested browsing context, attempt to call
certain functions in each ancestor and opener browsing context:

 * start - `start_callback`
 * test\_state - `test_state_callback`
 * result - `result_callback`
 * complete - `completion_callback`

These are given the same arguments as the corresponding internal callbacks
described above.

The test harness will also send messages using cross-document
messaging to each ancestor and opener browsing context. Since it uses the
wildcard keyword (\*), cross-origin communication is enabled and script on
different origins can collect the results.

This API follows similar conventions to those described above, only slightly
modified to accommodate the message event API. Each message sent by the harness
is a single plain object, available as the `data` property of the event
object. These objects are structured as follows:

 * start - `{ type: "start" }`
 * test\_state - `{ type: "test_state", test: Test }`
 * result - `{ type: "result", test: Test }`
 * complete - `{ type: "complete", tests: [Test, ...], status: TestsStatus }`


## Assert Functions ##

```eval_rst
.. js:autofunction:: assert_true
.. js:autofunction:: assert_false
.. js:autofunction:: assert_equals
.. js:autofunction:: assert_not_equals
.. js:autofunction:: assert_in_array
.. js:autofunction:: assert_array_equals
.. js:autofunction:: assert_approx_equals
.. js:autofunction:: assert_array_approx_equals
.. js:autofunction:: assert_less_than
.. js:autofunction:: assert_greater_than
.. js:autofunction:: assert_between_exclusive
.. js:autofunction:: assert_less_than_equal
.. js:autofunction:: assert_greater_than_equal
.. js:autofunction:: assert_between_inclusive
.. js:autofunction:: assert_regexp_match
.. js:autofunction:: assert_class_string
.. js:autofunction:: assert_own_property
.. js:autofunction:: assert_not_own_property
.. js:autofunction:: assert_inherits
.. js:autofunction:: assert_idl_attribute
.. js:autofunction:: assert_readonly
.. js:autofunction:: assert_throws_dom
.. js:autofunction:: assert_throws_js
.. js:autofunction:: assert_throws_exactly
.. js:autofunction:: assert_implements
.. js:autofunction:: assert_implements_optional
.. js:autofunction:: assert_unreached
.. js:autofunction:: assert_any

```

Assertions fail by throwing an `AssertionError`:

```eval_rst
.. js:autoclass:: AssertionError
```

### Promise Rejection ###

```eval_rst
.. js:autofunction:: promise_rejects_dom
.. js:autofunction:: promise_rejects_js
.. js:autofunction:: promise_rejects_exactly
```

`promise_rejects_dom`, `promise_rejects_js`, and `promise_rejects_exactly` can
be used to test Promises that need to reject.

Here's an example where the `bar()` function returns a Promise that rejects
with a TypeError:

```js
function bar() {
  return Promise.reject(new TypeError());
}

promise_test(function(t) {
  return promise_rejects_js(t, TypeError, bar());
}, "Another example");
```

## Test Objects ##

```eval_rst

.. js:autoclass:: Test
   :members:
```

## Helpers ##

### Waiting for events ###

```eval_rst
.. js:autoclass:: EventWatcher
   :members:
```

Here's an example of how to use `EventWatcher`:

```js
var t = async_test("Event order on animation start");

var animation = watchedNode.getAnimations()[0];
var eventWatcher = new EventWatcher(t, watchedNode, ['animationstart',
                                                    'animationiteration',
                                                    'animationend']);

eventWatcher.wait_for('animationstart').then(t.step_func(function() {
  assertExpectedStateAtStartOfAnimation();
  animation.currentTime = END_TIME; // skip to end
  // We expect two animationiteration events then an animationend event on
  // skipping to the end of the animation. 
+
  return eventWatcher.wait_for(['animationiteration',
                                'animationiteration',
                                'animationend']);
})).then(t.step_func(function() {
  assertExpectedStateAtEndOfAnimation();
  t.done();
}));
```

### Utility Functions ###

```eval_rst
.. js:autofunction:: format_value
```

## Deprecated APIs ##

```eval_rst
.. js:autofunction:: generate_tests
.. js:autofunction:: on_event
```


diff --git a/testing/web-platform/tests/docs/writing-tests/testharness-tutorial.md b/testing/web-platform/tests/docs/writing-tests/testharness-tutorial.md
new file mode 100644
index 0000000000..6689ad5341
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/testharness-tutorial.md
@@ -0,0 +1,395 @@
# testharness.js tutorial

<!--
Note to maintainers:

This tutorial is designed to be an authentic depiction of the WPT contribution
experience. It is not intended to be comprehensive; its scope is intentionally
limited in order to demonstrate authoring a complete test without overwhelming
the reader with features. Because typical WPT usage patterns change over time,
this should be updated periodically; please weigh extensions against the
demotivating effect that a lengthy guide can have on new contributors.
-->

Let's say you've discovered that WPT doesn't have any tests for how [the Fetch
API](https://fetch.spec.whatwg.org/) sets cookies from an HTTP response. This
tutorial will guide you through the process of writing a test for the
web platform, verifying it, and submitting it back to WPT. Although it includes
some very brief instructions on using git, you can find more guidance in [the
tutorial for git and GitHub](github-intro).

WPT's testharness.js is a framework designed to help people write tests for the
web platform's JavaScript APIs. [The testharness.js reference
page](testharness) describes the framework in the abstract, but for the
purposes of this guide, we'll only consider the features we need to test the
behavior of `fetch`. 
+ +```eval_rst +.. contents:: Table of Contents + :depth: 3 + :local: + :backlinks: none +``` + +## Setting up your workspace + +To make sure you have the latest code, first type the following into a terminal +located in the root of the WPT git repository: + + $ git fetch git@github.com:web-platform-tests/wpt.git + +Next, we need a place to store the change set we're about to author. Here's how +to create a new git branch named `fetch-cookie` from the revision of WPT we +just downloaded: + + $ git checkout -b fetch-cookie FETCH_HEAD + +The tests we're going to write will rely on special abilities of the WPT +server, so you'll also need to [configure your system to run +WPT](../running-tests/from-local-system) before you continue. + +With that out of the way, you're ready to create your patch. + +## Writing a subtest + +<!-- +Goals of this section: + +- demonstrate asynchronous testing with Promises +- motivate non-trivial integration with WPT server +- use web technology likely to be familiar to web developers +- use web technology likely to be supported in the reader's browser +--> + +The first thing we'll do is configure the server to respond to a certain request +by setting a cookie. Once that's done, we'll be able to make the request with +`fetch` and verify that it interpreted the response correctly. + +We'll configure the server with an "asis" file. That's the WPT convention for +controlling the contents of an HTTP response. [You can read more about it +here](server-features), but for now, we'll save the following text into a file +named `set-cookie.asis` in the `fetch/api/basic/` directory of WPT: + +``` +HTTP/1.1 204 No Content +Set-Cookie: test1=t1 +``` + +With this in place, any requests to `/fetch/api/basic/set-cookie.asis` will +receive an HTTP 204 response that sets the cookie named `test1`. When writing +more tests in the future, you may want the server to behave more dynamically. 
+In that case, [you can write Python code to control how the server
+responds](python-handlers/index).
+
+Now, we can write the test! Create a new file named `set-cookie.html` in the
+same directory and insert the following text:
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>fetch: setting cookies</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+
+<script>
+promise_test(function() {
+  return fetch('set-cookie.asis')
+    .then(function() {
+      assert_equals(document.cookie, 'test1=t1');
+    });
+});
+</script>
+```
+
+Let's step through each part of this file.
+
+- ```html
+  <!DOCTYPE html>
+  <meta charset="utf-8">
+  ```
+
+  We explicitly set the DOCTYPE and character set to be sure that browsers
+  don't infer them to be something we aren't expecting. We're omitting the
+  `<html>` and `<head>` tags. That's a common practice in WPT, preferred
+  because it makes tests more concise.
+
+- ```html
+  <title>fetch: setting cookies</title>
+  ```
+  The document's title should succinctly describe the feature under test.
+
+- ```html
+  <script src="/resources/testharness.js"></script>
+  <script src="/resources/testharnessreport.js"></script>
+  ```
+
+  These two `<script>` tags retrieve the code that powers testharness.js. A
+  testharness.js test can't run without them!
+
+- ```html
+  <script>
+  promise_test(function() {
+    return fetch('set-cookie.asis')
+      .then(function() {
+        assert_equals(document.cookie, 'test1=t1');
+      });
+  });
+  </script>
+  ```
+
+  This script uses the testharness.js function `promise_test` to define a
+  "subtest". We're using that because the behavior we're testing is
+  asynchronous. By returning a Promise value, we tell the harness to wait until
+  that Promise settles. The harness will report that the test has passed if
+  the Promise is fulfilled, and it will report that the test has failed if the
+  Promise is rejected.
+
+  We invoke the global `fetch` function to exercise the "behavior under test,"
+  and in the fulfillment handler, we verify that the expected cookie is set.
+  We're using the testharness.js `assert_equals` function to verify that the
+  value is correct; the function will throw an error otherwise. That will cause
+  the Promise to be rejected, and *that* will cause the harness to report a
+  failure.
+
+If you run the server according to the instructions in [the guide for local
+configuration](../running-tests/from-local-system), you can access the test at
+[http://web-platform.test:8000/fetch/api/basic/set-cookie.html](http://web-platform.test:8000/fetch/api/basic/set-cookie.html).
+You should see something like this:
+
+![](../assets/testharness-tutorial-test-screenshot-1.png "screen shot of testharness.js reporting the test results")
+
+## Refining the subtest
+
+<!--
+Goals of this section:
+
+- explain the motivation for "clean up" logic and demonstrate its usage
+- motivate explicit test naming
+-->
+
+We'd like to test a little more about `fetch` and cookies, but before we do,
+there are some improvements we can make to what we've written so far.
+
+For instance, we should remove the cookie after the subtest is complete. This
+ensures a consistent state for any additional subtests we may add and also for
+any tests that follow. We'll use the `add_cleanup` method to ensure that the
+cookie is deleted even if the test fails.
+
+```diff
+-promise_test(function() {
++promise_test(function(t) {
++  t.add_cleanup(function() {
++    document.cookie = 'test1=;expires=Thu, 01 Jan 1970 00:00:01 GMT;';
++  });
++
+   return fetch('set-cookie.asis')
+     .then(function() {
+       assert_equals(document.cookie, 'test1=t1');
+     });
+ });
+```
+
+Although we'd prefer it if there were no other cookies defined during our test,
+we shouldn't take that for granted. As written, the test will fail if the
+`document.cookie` includes additional cookies.
We'll use slightly more
+complicated logic to test for the presence of the expected cookie.
+
+
+```diff
+ promise_test(function(t) {
+   t.add_cleanup(function() {
+     document.cookie = 'test1=;expires=Thu, 01 Jan 1970 00:00:01 GMT;';
+   });
+
+   return fetch('set-cookie.asis')
+     .then(function() {
+-      assert_equals(document.cookie, 'test1=t1');
++      assert_true(/(^|; )test1=t1($|;)/.test(document.cookie));
+     });
+ });
+```
+
+In the screen shot above, the subtest's result was reported using the
+document's title, "fetch: setting cookies". Since we expect to add another
+subtest, we should give this one a more specific name:
+
+```diff
+ promise_test(function(t) {
+   t.add_cleanup(function() {
+     document.cookie = 'test1=;expires=Thu, 01 Jan 1970 00:00:01 GMT;';
+   });
+
+   return fetch('set-cookie.asis')
+     .then(function() {
+       assert_true(/(^|; )test1=t1($|;)/.test(document.cookie));
+     });
+-});
++}, 'cookie set for successful request');
+```
+
+## Writing a second subtest
+
+<!--
+Goals of this section:
+
+- introduce the concept of cross-domain testing and the associated tooling
+- demonstrate how to verify promise rejection
+- demonstrate additional assertion functions
+-->
+
+There are many things we might want to verify about how `fetch` sets cookies.
+For instance, it should *not* set a cookie if the request fails due to
+cross-origin security restrictions. Let's write a subtest which verifies that.
+
+We'll add another `<script>` tag for a JavaScript support file:
+
+```diff
+ <!DOCTYPE html>
+ <meta charset="utf-8">
+ <title>fetch: setting cookies</title>
+ <script src="/resources/testharness.js"></script>
+ <script src="/resources/testharnessreport.js"></script>
++<script src="/common/get-host-info.sub.js"></script>
+```
+
+`get-host-info.sub.js` is a general-purpose script provided by WPT. It's
+designed to help with testing cross-domain functionality. Since it's stored in
+WPT's `common/` directory, tests from all sorts of specifications rely on it.
+ +Next, we'll define the new subtest inside the same `<script>` tag that holds +our first subtest. + +```js +promise_test(function(t) { + t.add_cleanup(function() { + document.cookie = 'test1=;expires=Thu, 01 Jan 1970 00:00:01 GMT;'; + }); + const url = get_host_info().HTTP_NOTSAMESITE_ORIGIN + + '/fetch/api/basic/set-cookie.asis'; + + return fetch(url) + .then(function() { + assert_unreached('The promise for the aborted fetch operation should reject.'); + }, function() { + assert_false(/(^|; )test1=t1($|;)/.test(document.cookie)); + }); +}, 'no cookie is set for cross-domain fetch operations'); +``` + +This may look familiar from the previous subtest, but there are some important +differences. + +- ```js + const url = get_host_info().HTTP_NOTSAMESITE_ORIGIN + + '/fetch/api/basic/set-cookie.asis'; + ``` + + We're requesting the same resource, but we're referring to it with an + alternate host name. The name of the host depends on how the WPT server has + been configured, so we rely on the helper to provide an appropriate value. + +- ```js + return fetch(url) + .then(function() { + assert_unreached('The promise for the aborted fetch operation should reject.'); + }, function() { + assert_false(/(^|; )test1=t1($|;)/.test(document.cookie)); + }); + ``` + + We're returning a Promise value, just like the first subtest. This time, we + expect the operation to fail, so the Promise should be rejected. To express + this, we've used `assert_unreached` *in the fulfillment handler*. + `assert_unreached` is a testharness.js utility function which always throws + an error. With this in place, if fetch does *not* produce an error, then this + subtest will fail. + + We've moved the assertion about the cookie to the rejection handler. We also + switched from `assert_true` to `assert_false` because the test should only + pass if the cookie is *not* set. It's a good thing we have the cleanup logic + in the previous subtest, right? 
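At this point, both subtests locate the cookie with the same regular expression. As a rough sketch, that check could be pulled out into a small helper function (the name and signature here are illustrative, not part of testharness.js):

```javascript
// Illustrative helper: reports whether a name=value pair appears in a
// document.cookie-style string. Assumes the pair contains no regular
// expression metacharacters.
function cookieIsSet(cookies, pair) {
  return new RegExp('(^|; )' + pair + '($|;)').test(cookies);
}
```

The assertions would then read `assert_true(cookieIsSet(document.cookie, 'test1=t1'))` and `assert_false(cookieIsSet(document.cookie, 'test1=t1'))`, which makes their intent easier to scan.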
+ +If you run the test in your browser now, you can expect to see both tests +reported as passing with their distinct names. + +![](../assets/testharness-tutorial-test-screenshot-2.png "screen shot of testharness.js reporting the test results") + +## Verifying our work + +We're done writing the test, but we should make sure it fits in with the rest +of WPT before we submit it. + +[The lint tool](lint-tool) can detect some of the common mistakes people make +when contributing to WPT. You enabled it when you [configured your system to +work with WPT](../running-tests/from-local-system). To run it, open a +command-line terminal, navigate to the root of the WPT repository, and enter +the following command: + + python ./wpt lint fetch/api/basic + +If this recognizes any of those common mistakes in the new files, it will tell +you where they are and how to fix them. If you do have changes to make, you can +run the command again to make sure you got them right. + +Now, we'll run the test using the automated test runner. This is important for +testharness.js tests because there are subtleties of the automated test runner +which can influence how the test behaves. That's not to say your test has to +pass in all browsers (or even in *any* browser). But if we expect the test to +pass, then running it this way will help us catch other kinds of mistakes. + +The tools support running the tests in many different browsers. We'll use +Firefox this time: + + python ./wpt run firefox fetch/api/basic/set-cookie.html + +We expect this test to pass, so if it does, we're ready to submit it. If we +were testing a web-platform feature that Firefox didn't support, we would +expect the test to fail instead. + +There are a few problems to look out for in addition to passing/failing status. +The report will describe fewer tests than we expect if the test isn't run at +all. That's usually a sign of a formatting mistake, so you'll want to make sure +you've used the right file names and metadata. 
Separately, the web browser +might crash. That's often a sign of a browser bug, so you should consider +[reporting it to the browser's +maintainers](https://rachelandrew.co.uk/archives/2017/01/30/reporting-browser-bugs/)! + +## Submitting the test + +First, let's stage the new files for committing: + + $ git add fetch/api/basic/set-cookie.asis + $ git add fetch/api/basic/set-cookie.html + +We can make sure the commit has everything we want to submit (and nothing we +don't) by using `git diff`: + + $ git diff --staged + +On most systems, you can use the arrow keys to navigate through the changes, +and you can press the `q` key when you're done reviewing. + +Next, we'll create a commit with the staged changes: + + $ git commit -m '[fetch] Add test for setting cookies' + +And now we can push the commit to our fork of WPT: + + $ git push origin fetch-cookie + +The last step is to submit the test for review. WPT doesn't actually need the +test we wrote in this tutorial, but if we wanted to submit it for inclusion in +the repository, we would create a pull request on GitHub. [The guide on git and +GitHub](github-intro) has all the details on how to do that. + +## More practice + +Here are some ways you can keep experimenting with WPT using this test: + +- Improve the test's readability by defining helper functions like + `cookieIsSet` and `deleteCookie` +- Improve the test's coverage by refactoring it into [a "multi-global" + test](testharness) +- Improve the test's coverage by writing more subtests (e.g. 
the behavior when + the fetch operation is aborted by `window.stop`, or the behavior when the + HTTP response sets multiple cookies) diff --git a/testing/web-platform/tests/docs/writing-tests/testharness.md b/testing/web-platform/tests/docs/writing-tests/testharness.md new file mode 100644 index 0000000000..fd4450f440 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/testharness.md @@ -0,0 +1,285 @@ +# JavaScript Tests (testharness.js) + +JavaScript tests are the correct type of test to write in any +situation where you are not specifically interested in the rendering +of a page, and where human interaction isn't required; these tests are +written in JavaScript using a framework called `testharness.js`. + +A high-level overview is provided below and more information can be found here: + + * [testharness.js Documentation](testharness-api.md) — An introduction + to the library and a detailed API reference. [The tutorial on writing a + testharness.js test](testharness-tutorial) provides a concise guide to writing + a test — a good place to start for newcomers to the project. + + * [testdriver.js Automation](testdriver.md) — Automating end user actions, such as moving or + clicking a mouse. See also the + [testdriver.js extension tutorial](testdriver-extension-tutorial.md) for adding new commands. + + * [idlharness.js](idlharness.md) — A library for testing + IDL interfaces using `testharness.js`. + + * [Message Channels](channels.md) - A way to communicate between + different globals, including window globals not in the same + browsing context group. + + * [Server features](server-features.md) - Advanced testing features + that are commonly used with JavaScript tests. + +See also the [general guidelines](general-guidelines.md) for all test types. + +## Window tests + +### Without HTML boilerplate (`.window.js`) + +Create a JavaScript file whose filename ends in `.window.js` to have the necessary HTML boilerplate +generated for you at `.window.html`. 
I.e., for `test.window.js` the server will ensure +`test.window.html` is available. + +In this JavaScript file you can place one or more tests, as follows: +```js +test(() => { + // Place assertions and logic here + assert_equals(document.characterSet, "UTF-8"); +}, "Ensure HTML boilerplate uses UTF-8"); // This is the title of the test +``` + +If you only need to test a [single thing](testharness-api.html#single-page-tests), you could also use: +```js +// META: title=Ensure HTML boilerplate uses UTF-8 +setup({ single_test: true }); +assert_equals(document.characterSet, "UTF-8"); +done(); +``` + +See [asynchronous (`async_test()`)](testharness-api.html#asynchronous-tests) and +[promise tests (`promise_test()`)](testharness-api.html#promise-tests) for more involved setups. + +### With HTML boilerplate + +You need to be a bit more explicit and include the `testharness.js` framework directly as well as an +additional file used by implementations: + +```html +<!doctype html> +<meta charset=utf-8> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<body> + <script> + test(() => { + assert_equals(document.characterSet, "UTF-8"); + }, "Ensure UTF-8 declaration is observed"); + </script> +``` + +Here too you could avoid the wrapper `test()` function: + +```html +<!doctype html> +<meta charset=utf-8> +<title>Ensure UTF-8 declaration is observed</title> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<body> + <script> + setup({ single_test: true }); + assert_equals(document.characterSet, "UTF-8"); + done(); + </script> +``` + +In this case the test title is taken from the `title` element. + +## Dedicated worker tests (`.worker.js`) + +Create a JavaScript file that imports `testharness.js` and whose filename ends in `.worker.js` to +have the necessary HTML boilerplate generated for you at `.worker.html`. 
+
+For example, one could write a test for the `FileReaderSync` API by
+creating a `FileAPI/FileReaderSync.worker.js` as follows:
+
+```js
+importScripts("/resources/testharness.js");
+test(function () {
+  const blob = new Blob(["Hello"]);
+  const fr = new FileReaderSync();
+  assert_equals(fr.readAsText(blob), "Hello");
+}, "FileReaderSync#readAsText.");
+done();
+```
+
+This test could then be run from `FileAPI/FileReaderSync.worker.html`.
+
+(Removing the need for `importScripts()` and `done()` is tracked in
+[issue #11529](https://github.com/web-platform-tests/wpt/issues/11529).)
+
+## Tests for other or multiple globals (`.any.js`)
+
+Tests for features that exist in multiple global scopes can be written in a way
+that they are automatically run in several scopes. In this case, the test is a
+JavaScript file with extension `.any.js`, which can use all the usual APIs.
+
+By default, the test runs in a window scope and a dedicated worker scope.
+
+For example, one could write a test for the `Blob` constructor by
+creating a `FileAPI/Blob-constructor.any.js` as follows:
+
+```js
+test(function () {
+  const blob = new Blob();
+  assert_equals(blob.size, 0);
+  assert_equals(blob.type, "");
+}, "The Blob constructor.");
+```
+
+This test could then be run from `FileAPI/Blob-constructor.any.worker.html` as well
+as `FileAPI/Blob-constructor.any.html`.
+ +It is possible to customize the set of scopes with a metadata comment, such as + +``` +// META: global=sharedworker +// ==> would run in the shared worker scope +// META: global=window,serviceworker +// ==> would only run in the window and service worker scope +// META: global=dedicatedworker +// ==> would run in the default dedicated worker scope +// META: global=dedicatedworker-module +// ==> would run in the dedicated worker scope as a module +// META: global=worker +// ==> would run in the dedicated, shared, and service worker scopes +``` + +For a test file <code><var>x</var>.any.js</code>, the available scope keywords +are: + +* `window` (default): to be run at <code><var>x</var>.any.html</code> +* `dedicatedworker` (default): to be run at <code><var>x</var>.any.worker.html</code> +* `dedicatedworker-module` to be run at <code><var>x</var>.any.worker-module.html</code> +* `serviceworker`: to be run at <code><var>x</var>.any.serviceworker.html</code> (`.https` is + implied) +* `serviceworker-module`: to be run at <code><var>x</var>.any.serviceworker-module.html</code> + (`.https` is implied) +* `sharedworker`: to be run at <code><var>x</var>.any.sharedworker.html</code> +* `sharedworker-module`: to be run at <code><var>x</var>.any.sharedworker-module.html</code> +* `jsshell`: to be run in a JavaScript shell, without access to the DOM + (currently only supported in SpiderMonkey, and skipped in wptrunner) +* `worker`: shorthand for the dedicated, shared, and service worker scopes +* `shadowrealm`: runs the test code in a + [ShadowRealm](https://github.com/tc39/proposal-shadowrealm) context hosted in + an ordinary Window context; to be run at <code><var>x</var>.any.shadowrealm.html</code> + +To check if your test is run from a window or worker you can use the following two methods that will +be made available by the framework: + + self.GLOBAL.isWindow() + self.GLOBAL.isWorker() + +Although [the global `done()` function must be explicitly invoked for most 
+dedicated worker tests and shared worker +tests](testharness-api.html#determining-when-all-tests-are-complete), it is +automatically invoked for tests defined using the "multi-global" pattern. + +## Other features of `.window.js`, `.worker.js` and `.any.js` + +### Specifying a test title + +Use `// META: title=This is the title of the test` at the beginning of the resource. + +### Including other JavaScript files + +Use `// META: script=link/to/resource.js` at the beginning of the resource. For example, + +``` +// META: script=/common/utils.js +// META: script=resources/utils.js +``` + +can be used to include both the global and a local `utils.js` in a test. + +In window environments, the script will be included using a classic `<script>` tag. In classic +worker environments, the script will be imported using `importScripts()`. In module worker +environments, the script will be imported using a static `import`. + +### Specifying a timeout of long + +Use `// META: timeout=long` at the beginning of the resource. + +### Specifying test [variants](#variants) + +Use `// META: variant=url-suffix` at the beginning of the resource. For example, + +``` +// META: variant= +// META: variant=?wss +``` + +## Variants + +A test file can have multiple variants by including `meta` elements, +for example: + +```html +<meta name="variant" content=""> +<meta name="variant" content="?wss"> +``` + +Test runners will execute the test for each variant specified, appending the corresponding content +attribute value to the URL of the test as they do so. + +`/common/subset-tests.js` and `/common/subset-tests-by-key.js` are two utility scripts that work +well together with variants, allowing a test to be split up into subtests in cases when there are +otherwise too many tests to complete inside the timeout. 
For example: + +```html +<!doctype html> +<title>Testing variants</title> +<meta name="variant" content="?1-1000"> +<meta name="variant" content="?1001-2000"> +<meta name="variant" content="?2001-last"> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<script src="/common/subset-tests.js"></script> +<script> + const tests = [ + { fn: t => { ... }, name: "..." }, + ... lots of tests ... + ]; + for (const test of tests) { + subsetTest(async_test, test.fn, test.name); + } +</script> +``` + +With `subsetTestByKey`, the key is given as the first argument, and the +query string can include or exclude a key (which will be matched as a regular +expression). + +```html +<!doctype html> +<title>Testing variants by key</title> +<meta name="variant" content="?include=Foo"> +<meta name="variant" content="?include=Bar"> +<meta name="variant" content="?exclude=(Foo|Bar)"> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<script src="/common/subset-tests-by-key.js"></script> +<script> + subsetTestByKey("Foo", async_test, () => { ... }, "Testing foo"); + ... +</script> +``` + +## Table of Contents + +```eval_rst +.. toctree:: + :maxdepth: 1 + + testharness-api + testdriver + testdriver-extension-tutorial + idlharness +``` diff --git a/testing/web-platform/tests/docs/writing-tests/tools.md b/testing/web-platform/tests/docs/writing-tests/tools.md new file mode 100644 index 0000000000..0a9a7dcfd5 --- /dev/null +++ b/testing/web-platform/tests/docs/writing-tests/tools.md @@ -0,0 +1,25 @@ +# Command-line utility scripts + +Sometimes you may want to add a script to the repository that's meant to be +used from the command line, not from a browser (e.g., a script for generating +test files). 
If you want to ensure (e.g., for security reasons) that such
+scripts won't be handled by the HTTP server, but will instead only be usable
+from the command line, then place them in either:
+
+* the `tools` subdir at the root of the repository, or
+
+* the `tools` subdir at the root of any top-level directory in the repository
+  which contains the tests the script is meant to be used with
+
+Any files in those `tools` directories won't be handled by the HTTP server;
+instead the server will return a 404 if a user navigates to the URL for a file
+within them.
+
+If you want to add a script for use with a particular set of tests but there
+isn't yet any `tools` subdir at the root of a top-level directory in the
+repository containing those tests, you can create a `tools` subdir at the root
+of that top-level directory and place your scripts there.
+
+For example, if you wanted to add a script for use with tests in the
+`notifications` directory, create the `notifications/tools` subdir and put your
+script there.
diff --git a/testing/web-platform/tests/docs/writing-tests/visual.md b/testing/web-platform/tests/docs/writing-tests/visual.md
new file mode 100644
index 0000000000..a8ae53d071
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/visual.md
@@ -0,0 +1,27 @@
+# Visual Tests
+
+Visual tests are typically used when testing rendering of things that
+cannot be tested with [reftests](reftests).
+
+Their main advantage over manual tests is that they can be verified using
+browser-specific and platform-specific screenshots; note, however, that many
+browser vendors treat them identically to manual tests, so they are
+similarly discouraged: those vendors run them very infrequently, if ever.
+
+## Writing a Visual Test
+
+Visual tests are test files which have `-visual` at the end of their
+filename, before the extension. There is nothing needed in them to
+make them work.
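For example, a minimal visual test could look like the following (the filename, say `green-square-001-visual.html`, and the exact pass condition are illustrative):

```html
<!doctype html>
<meta charset="utf-8">
<title>Visual test: green square rendering</title>
<p>Test passes if there is a filled green square below.</p>
<div style="width: 100px; height: 100px; background: green"></div>
```

The `<p>` sentence is what makes the test self-describing: anyone reviewing a screenshot can judge the rendering without consulting the source.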
+
+They should follow the [general test guidelines](general-guidelines),
+especially noting the requirement to be self-describing (i.e., they
+must give a clear pass condition in their rendering).
+
+Similarly, they should consider the [rendering test guidelines](rendering),
+especially those about color, to ensure those running the test don't
+incorrectly judge its result.
+
+The screenshot for comparison is taken at the same point as when screenshots
+for [reftest comparisons](reftests) are taken, including potentially waiting
+for any `class="reftest-wait"` to be removed from the root element.
diff --git a/testing/web-platform/tests/docs/writing-tests/wdspec.md b/testing/web-platform/tests/docs/writing-tests/wdspec.md
new file mode 100644
index 0000000000..1943fb9e94
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/wdspec.md
@@ -0,0 +1,68 @@
+# wdspec tests
+
+The term "wdspec" describes a type of test in WPT which verifies some aspect of
+[the WebDriver protocol](https://w3c.github.io/webdriver/). These tests are
+written in [the Python programming language](https://www.python.org/) and
+structured with [the pytest testing
+framework](https://docs.pytest.org/en/latest/).
+
+The test files are organized into subdirectories based on the WebDriver
+command under test. For example, tests for [the Close Window
+command](https://w3c.github.io/webdriver/#close-window) are located in the
+`close_window` directory.
+
+Similar to [testharness.js](testharness) tests, wdspec tests contain within
+them any number of "sub-tests." Sub-tests are defined as Python functions whose
+name begins with `test_`, e.g. `test_stale_element`.
+
+## The `webdriver` client library
+
+web-platform-tests maintains a WebDriver client library called `webdriver`
+located in the `tools/webdriver/` directory. Like other client libraries, it
+makes it easier to write code which interfaces with a browser using the
+protocol.
+
+Many tests require some "set up" code--logic intended to bring the browser to a
+known state from which the expected behavior can be verified. The convenience
+methods in the `webdriver` library **should** be used to perform this task
+because they reduce duplication.
+
+However, the same methods **should not** be used to issue the command under
+test. Instead, the HTTP request describing the command should be sent directly.
+This practice promotes the descriptive quality of the tests and limits
+indirection that tends to obfuscate test failures.
+
+Here is an example of a test for [the Element Click
+command](https://w3c.github.io/webdriver/#element-click):
+
+```python
+from tests.support.asserts import assert_success
+
+def test_null_response_value(session, inline):
+    # The high-level API is used to set up a document and locate a click target
+    session.url = inline("<p>foo")
+    element = session.find.css("p", all=False)
+
+    # An HTTP request is explicitly constructed for the "click" command itself
+    response = session.transport.send(
+        "POST", "session/{session_id}/element/{element_id}/click".format(
+            session_id=session.session_id,
+            element_id=element.id))
+
+    assert_success(response)
+```
+
+## Utility functions
+
+The `webdriver` library is minimal by design. It mimics the structure of the
+WebDriver specification. Many conformance tests perform similar operations
+(e.g. calculating the center point of an element or creating a document), but
+the library does not expose methods to facilitate them. Instead, wdspec tests
+define shared functionality in the form of "support" files.
+
+Many of these functions are intended to be used directly from the tests using
+Python's built-in `import` keyword. Others (particularly those that operate on
+a WebDriver session) are defined in terms of Pytest "fixtures" and must be
+loaded accordingly.
For more detail on how to define and use test fixtures,
+please refer to [the pytest project's documentation on the
+topic](https://docs.pytest.org/en/latest/fixture.html).
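As an illustration of the kind of shared functionality a support file might provide, here is a sketch of a helper that computes the center point of an element rect (the function name and its module placement are hypothetical; the rect layout mirrors the value returned by WebDriver's Get Element Rect command):

```python
def center_point(rect):
    """Return the center of a WebDriver-style element rect.

    `rect` is a mapping with "x", "y", "width", and "height" keys, as
    returned by the Get Element Rect command.
    """
    return {
        "x": rect["x"] + rect["width"] / 2.0,
        "y": rect["y"] + rect["height"] / 2.0,
    }
```

A test would bring a helper like this in with an ordinary `import`, while session-dependent helpers would instead be written as pytest fixtures.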