From 36d22d82aa202bb199967e9512281e9a53db42c9 Mon Sep 17 00:00:00 2001
From: Daniel Baumann
Date: Sun, 7 Apr 2024 21:33:14 +0200
Subject: Adding upstream version 115.7.0esr.

Signed-off-by: Daniel Baumann
---
 dom/webgpu/tests/cts/checkout/docs/intro/README.md |  99 +++++++++++++++
 .../cts/checkout/docs/intro/convert_to_issue.png   | Bin 0 -> 2061 bytes
 .../tests/cts/checkout/docs/intro/developing.md    | 134 +++++++++++++++++++++
 .../tests/cts/checkout/docs/intro/life_of.md       |  46 +++++++
 dom/webgpu/tests/cts/checkout/docs/intro/plans.md  |  82 +++++++++++++
 dom/webgpu/tests/cts/checkout/docs/intro/tests.md  |  25 ++++
 6 files changed, 386 insertions(+)
 create mode 100644 dom/webgpu/tests/cts/checkout/docs/intro/README.md
 create mode 100644 dom/webgpu/tests/cts/checkout/docs/intro/convert_to_issue.png
 create mode 100644 dom/webgpu/tests/cts/checkout/docs/intro/developing.md
 create mode 100644 dom/webgpu/tests/cts/checkout/docs/intro/life_of.md
 create mode 100644 dom/webgpu/tests/cts/checkout/docs/intro/plans.md
 create mode 100644 dom/webgpu/tests/cts/checkout/docs/intro/tests.md

diff --git a/dom/webgpu/tests/cts/checkout/docs/intro/README.md b/dom/webgpu/tests/cts/checkout/docs/intro/README.md
new file mode 100644
index 0000000000..e5f8bcedc6
--- /dev/null
+++ b/dom/webgpu/tests/cts/checkout/docs/intro/README.md
@@ -0,0 +1,99 @@
+# Introduction
+
+These documents contain guidelines for contributors to the WebGPU CTS (Conformance Test Suite)
+on how to write effective tests, and on the testing philosophy to adopt.
+
+The WebGPU CTS is arguably more important than the WebGPU specification itself, because
+it is what forces implementations to be interoperable by checking that they conform to the specification.
+However, writing a CTS is hard and requires a lot of effort to reach good coverage.
+
+More than a collection of tests like the regular end-to-end and unit tests of other software
+artifacts, a CTS needs to be exhaustive. Contrast, for example, the WebGL 2 CTS with the ANGLE
+end2end tests: they cover the same functionality (WebGL 2 / OpenGL ES 3) but are structured very
+differently:
+
+- ANGLE's test suite has one or two tests per feature to check that it works correctly, plus
+  regression tests and special tests to cover implementation details.
+- WebGL 2's CTS can have thousands of tests per API aspect to cover every combination of
+  parameters (and global state) used by an operation.
+
+Below are guidelines based on our collective experience with graphics API CTSes like WebGL's.
+They are expected to evolve over time and have exceptions, but should give a general idea of what
+to do.
+
+## Contributing
+
+Testing tasks are tracked in the [CTS project tracker](https://github.com/orgs/gpuweb/projects/3).
+Go here if you're looking for tasks, or if you have a test idea that isn't already covered.
+
+If contributing conformance tests, the directory you'll work in is [`src/webgpu/`](../src/webgpu/).
+This directory is organized according to the goal of the test (API validation behavior vs.
+actual results) and its target (API entry points and spec areas, e.g. texture sampling).
+
+The contents of a test file (`src/webgpu/**/*.spec.ts`) are twofold:
+
+- Documentation ("test plans") on what tests do, how they do it, and what cases they cover.
+  Some test plans are fully or partially unimplemented:
+  they either contain "TODO" in a description or are `.unimplemented()`.
+- Actual tests (see the sketch after this list).
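+
+To make those two parts concrete, here is a minimal, hypothetical sketch of a spec file. The test
+names, descriptions, and checked condition are invented purely for illustration, and the import
+paths assume a file sitting directly under `src/webgpu/`; the real `src/webgpu/examples.spec.ts`
+is the authoritative example.
+
+```ts
+export const description = `
+Test plan documentation goes at the top of the file.
+
+TODO: a note like this marks missing coverage and is surfaced by the tooling.
+`;
+
+import { makeTestGroup } from '../common/framework/test_group.js';
+import { GPUTest } from './gpu_test.js';
+
+export const g = makeTestGroup(GPUTest);
+
+// A test plan with no implementation yet:
+g.test('planned_only')
+  .desc('Describes what this test will cover once it is implemented.')
+  .unimplemented();
+
+// An actual test:
+g.test('implemented')
+  .desc('A trivial test that actually runs.')
+  .fn(t => {
+    t.expect(1 + 1 === 2, 'sanity check');
+  });
+```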
+
+**Please read the following short documents before contributing.**
+
+### 0. [Developing](developing.md)
+
+- Reviewers should also read [Review Requirements](../reviews.md).
+
+### 1. [Life of a Test Change](life_of.md)
+
+### 2. [Adding or Editing Test Plans](plans.md)
+
+### 3. [Implementing Tests](tests.md)
+
+## [Additional Documentation](../)
+
+## Examples
+
+### Operation testing of vertex input id generation
+
+This section provides an example of the planning process for a test.
+It has not been refined into a set of final test plan descriptions.
+(Note: this predates the actual implementation of these tests, so it doesn't match the actual tests.)
+
+Somewhere under the `api/operation` node are tests checking that running `GPURenderPipelines` on
+the device using the `GPURenderEncoderBase.draw` family of functions works correctly. Render
+pipelines are composed of several stages that are mostly independent, so they can be split into
+several parts such as `vertex_input`, `rasterization`, and `blending`.
+
+Vertex input itself has several parts that are mostly separate in hardware:
+
+- generation of the vertex and instance indices to run for this draw
+- fetching of vertex data from vertex buffers based on these indices
+- conversion from the vertex attribute `GPUVertexFormat` to the datatype of the input variable
+  in the shader
+
+Each of these is tested separately and has cases for each combination of the variables that may
+affect it. This means that `api/operation/render/vertex_input/id_generation` checks that the
+correct operation is performed for the cartesian product of all the following dimensions:
+
+- for encoding in a `GPURenderPassEncoder` or a `GPURenderBundleEncoder`
+- whether the draw is direct or indirect
+- whether the draw is indexed or not
+- for various values of the `firstInstance` argument
+- for various values of the `instanceCount` argument
+- if the draw is not indexed:
+  - for various values of the `firstVertex` argument
+  - for various values of the `vertexCount` argument
+- if the draw is indexed:
+  - for each `GPUIndexFormat`
+  - for various values of the indices in the index buffer, including the primitive restart values
+  - for various values of the `offset` argument to `setIndexBuffer`
+  - for various values of the `firstIndex` argument
+  - for various values of the `indexCount` argument
+  - for various values of the `baseVertex` argument
+
+"Various values" above means several small values, including `0` and the second-smallest valid
+value (to check for corner cases), as well as some large value.
+
+An instance of the test sets up a `draw*` call based on the parameters, using point rendering and
+a fragment shader that outputs to a storage buffer. After the draw, the test checks the contents of
+the storage buffer to make sure that all expected vertex shader invocations, and only those, have
+been generated.
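+
+As a purely illustrative sketch of how such a cartesian product could be declared with the CTS
+parameter builder (this is not the real test; the dimension names and value lists are invented,
+the indexed/non-indexed split and the index-buffer contents are omitted, and the usual spec-file
+boilerplate such as `export const g = makeTestGroup(GPUTest)` is assumed):
+
+```ts
+g.test('id_generation')
+  .desc('Checks generation of vertex and instance indices for every combination of draw parameters.')
+  .params(u =>
+    u //
+      .combine('encoder', ['render_pass', 'render_bundle']) // GPURenderPassEncoder vs. GPURenderBundleEncoder
+      .combine('indirect', [false, true])
+      .combine('indexed', [false, true])
+      .beginSubcases()
+      .combine('firstInstance', [0, 1, 1000])
+      .combine('instanceCount', [0, 1, 1000])
+      .combine('firstVertexOrIndex', [0, 1, 1000])
+      .combine('vertexOrIndexCount', [0, 1, 1000])
+  )
+  .unimplemented();
+```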
diff --git a/dom/webgpu/tests/cts/checkout/docs/intro/convert_to_issue.png b/dom/webgpu/tests/cts/checkout/docs/intro/convert_to_issue.png
new file mode 100644
index 0000000000..672324a9d9
Binary files /dev/null and b/dom/webgpu/tests/cts/checkout/docs/intro/convert_to_issue.png differ
diff --git a/dom/webgpu/tests/cts/checkout/docs/intro/developing.md b/dom/webgpu/tests/cts/checkout/docs/intro/developing.md
new file mode 100644
index 0000000000..5b1aeed36d
--- /dev/null
+++ b/dom/webgpu/tests/cts/checkout/docs/intro/developing.md
@@ -0,0 +1,134 @@
+# Developing
+
+The WebGPU CTS is written in TypeScript.
+
+## Setup
+
+After checking out the repository and installing node/npm, run:
+
+```sh
+npm ci
+```
+
+Before uploading, you can run pre-submit checks (`npm test`) to make sure your change will pass CI.
+Use `npm run fix` to fix linting issues.
+
+`npm run` will show the available npm scripts.
+Some more scripts can be listed using `npx grunt`.
+
+## Dev Server
+
+To start the development server, use:
+
+```sh
+npm start
+```
+
+Then, browse to the standalone test runner at the printed URL.
+
+The server will generate and compile code on the fly, so no build step is necessary.
+Only a reload is needed to see saved changes.
+(TODO: except, currently, `README.txt` and file `description` changes won't be reflected in
+the standalone runner.)
+
+Note: the first load of a test suite may take some time, as generating the test suite listing can
+take a few seconds.
+
+## Standalone Test Runner / Test Plan Viewer
+
+**The standalone test runner also serves as a test plan viewer.**
+(This can be done in a browser without WebGPU support.)
+You can use this to preview how your test plan will appear.
+
+You can view different suites (webgpu, unittests, stress, etc.) or different subtrees of
+the test suite.
+
+- `http://localhost:8080/standalone/` (defaults to `?runnow=0&worker=0&debug=0&q=webgpu:*`)
+- `http://localhost:8080/standalone/?q=unittests:*`
+- `http://localhost:8080/standalone/?q=unittests:basic:*`
+
+The following URL parameters change how the harness runs (see the combined example after the list):
+
+- `runnow=1` runs all matching tests on page load.
+- `debug=1` enables verbose debug logging from tests.
+- `worker=1` runs the tests on a Web Worker instead of the main thread.
+- `power_preference=low-power` runs most tests passing `powerPreference: low-power` to `requestAdapter`.
+- `power_preference=high-performance` runs most tests passing `powerPreference: high-performance` to `requestAdapter`.
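+
+For example, a hypothetical combination of these parameters with a `q=` query might look like:
+
+```
+http://localhost:8080/standalone/?q=webgpu:api,validation,*&runnow=1&worker=1&debug=1
+```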
+
+### Web Platform Tests (wpt) - Ref Tests
+
+You can inspect the actual and reference pages for web platform reftests in the standalone
+runner by navigating to them. For example, by loading:
+
+ - `http://localhost:8080/out/webgpu/web_platform/reftests/canvas_clear.https.html`
+ - `http://localhost:8080/out/webgpu/web_platform/reftests/ref/canvas_clear-ref.html`
+
+You can also run a minimal ref test runner:
+
+ - open 2 terminals / command lines.
+ - in one, `npm start`
+ - in the other, `node tools/run_wpt_ref_tests <path-to-browser-executable> [name-of-test]`
+
+Without `[name-of-test]`, all ref tests will be run. `[name-of-test]` is just a simple substring
+check, so passing in `rgba` will run every test with `rgba` in its filename.
+
+Examples:
+
+MacOS
+
+```
+# Chrome
+node tools/run_wpt_ref_tests /Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary
+```
+
+Windows
+
+```
+# Chrome
+node .\tools\run_wpt_ref_tests "C:\Users\your-user-name\AppData\Local\Google\Chrome SxS\Application\chrome.exe"
+```
+
+## Editor
+
+Since this project is written in TypeScript, it integrates best with
+[Visual Studio Code](https://code.visualstudio.com/).
+This is optional, but highly recommended: it automatically adds `import` lines and
+provides robust completions, cross-references, renames, error highlighting,
+deprecation highlighting, and type/JSDoc popups.
+
+Open the `cts.code-workspace` workspace file to load settings convenient for this project.
+You can make local configuration changes in `.vscode/`, which is untracked by Git.
+
+## Pull Requests
+
+When opening a pull request, fill out the PR checklist and attach the issue number.
+If an issue hasn't been opened, find the draft issue on the
+[project tracker](https://github.com/orgs/gpuweb/projects/3) and choose "Convert to issue":
+
+![convert to issue button screenshot](convert_to_issue.png)
+
+Opening a pull request will automatically notify reviewers.
+
+To make the review process smoother, once a reviewer has started looking at your change:
+
+- Avoid major additions or changes that would be best done in a follow-up PR.
+- Avoid rebases (`git rebase`) and force pushes (`git push -f`). These can make
+  it difficult for reviewers to review incremental changes, as GitHub often cannot
+  display a useful diff across a rebase. If it's necessary to resolve conflicts
+  with upstream changes, use a merge commit (`git merge`) and don't include any
+  consequential changes in the merge, so a reviewer can skip over merge commits
+  when working through the individual commits in the PR.
+- When you address a review comment, mark the thread as "Resolved".
+
+Pull requests will (usually) be landed with the "Squash and merge" option.
+
+### TODOs
+
+The word "TODO" refers to missing test coverage. It may only appear inside file/test descriptions
+and README files (enforced by linting).
+
+To refer to a TODO in the description from a code comment, use a backreference: e.g., in the
+description, `TODO: Also test the FROBNICATE usage flag [1]`, and somewhere in the code, `[1]:
+Need to add FROBNICATE to this list.`
+
+Use `MAINTENANCE_TODO` for TODOs which don't impact test coverage.
diff --git a/dom/webgpu/tests/cts/checkout/docs/intro/life_of.md b/dom/webgpu/tests/cts/checkout/docs/intro/life_of.md
new file mode 100644
index 0000000000..8dced4ad84
--- /dev/null
+++ b/dom/webgpu/tests/cts/checkout/docs/intro/life_of.md
@@ -0,0 +1,46 @@
+# Life of a Test Change
+
+A "test change" could be a new test, an expansion of an existing test, a test bug fix, or a
+modification to existing tests to make them match new spec changes.
+
+**CTS contributors should contribute to the tracker and strive to keep it up to date, especially
+relating to their own changes.**
+
+Filing new draft issues in the CTS project tracker is very lightweight.
+Anyone with access should do this eagerly, to ensure no testing ideas are forgotten.
+(And if you don't have access, just file a regular issue.)
+
+1. Enter a [draft issue](https://github.com/orgs/gpuweb/projects/3), with the Status
+   set to "New (not in repo)", and any available info included in the issue description
+   (notes/plans to ensure full test coverage of the change). The source of this may be:
+
+   - Anything in the spec/API that is found not to be covered by the CTS yet.
+   - Any test that is found to be outdated or otherwise buggy.
+   - A spec change from the "Needs CTS Issue" column in the
+     [spec project tracker](https://github.com/orgs/gpuweb/projects/1).
+     Once information on the required test changes is entered into the CTS project tracker,
+     the spec issue moves to "Specification Done".
+
+   Note: at some point, someone may make a PR to flush "New (not in repo)" issues into `TODO`s in
+   CTS file/test description text, changing their "Status" to "Open".
+   These may be done in bulk without linking back to the issue.
+
+1. As necessary:
+
+   - Convert the draft issue to a full, numbered issue for linking from later PRs.
+
+     ![convert to issue button screenshot](convert_to_issue.png)
+
+   - Update the "Assignees" of the issue when an issue is assigned or unassigned
+     (you can assign yourself).
+   - Change the "Status" of the issue to "Started" once you start the task.
+
+1. Open one or more PRs, **each linking to the associated issue**.
+   Each PR is reviewed and landed, and may leave further TODOs for parts it doesn't complete.
+
+   1. Tests are "planned" in test descriptions. (For complex tests, open a separate PR with the
+      tests `.unimplemented()` so a reviewer can evaluate the plan before you implement tests.)
+   1. Tests are implemented.
+
+1. When **no TODOs remain** for an issue, close it and change its status to "Complete".
+   (Enter a new, more specific draft issue into the tracker if you need to track related TODOs.)
diff --git a/dom/webgpu/tests/cts/checkout/docs/intro/plans.md b/dom/webgpu/tests/cts/checkout/docs/intro/plans.md
new file mode 100644
index 0000000000..f8d7af3a78
--- /dev/null
+++ b/dom/webgpu/tests/cts/checkout/docs/intro/plans.md
@@ -0,0 +1,82 @@
+# Adding or Editing Test Plans
+
+## 1. Write a test plan
+
+For new tests, if some notes exist already, incorporate them into your plan.
+
+A detailed test plan should be written and reviewed before substantial test code is written.
+This gives reviewers a chance to identify additional tests and cases, opportunities for
+generalizations that would improve the strength of tests, similar existing tests or test plans,
+and potentially useful [helpers](../helper_index.txt).
+
+**A test plan must serve two functions:**
+
+- Describe the test, succinctly, but in enough detail that a reader can read *only* the test
+  plans and evaluate coverage completeness of a file/directory.
+- Describe the test precisely enough that, when code is added, the reviewer can ensure that the
+  test really covers what the test plan says.
+
+There should be one test plan for each test. It should describe what the test checks, how, and
+which important cases need to be covered. Here's an example:
+
+```ts
+g.test('x,some_detail')
+  .desc(
+    `
+Tests [some detail] about x. Tests calling x in various 'mode's { mode1, mode2 },
+with various values of 'arg', and checks correctness of the result.
+Tries to trigger [some conditional path].
+
+- Valid values (control case) // <- (to make sure the test function works well)
+- Unaligned values (should fail) // <- (only validation tests need to intentionally hit invalid cases)
+- Extreme values`
+  )
+  .params(u =>
+    u //
+      .combine('mode', ['mode1', 'mode2'])
+      .beginSubcases()
+      .combine('arg', [
+        // Valid // <- Comment params as you see fit.
+        4,
+        8,
+        100,
+        // Invalid
+        2,
+        6,
+        1e30,
+      ])
+  )
+  .unimplemented();
+```
+
+"Cases" each appear as individual items in the `/standalone/` runner.
+"Subcases" run inside each case, like a for-loop wrapping the `.fn()` test function.
+Documentation on the parameter builder can be found in the [helper index](../helper_index.txt).
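+
+For instance, in the following hypothetical sketch (names and values are invented, and the usual
+spec-file boilerplate defining `g` is assumed), each `mode` value appears as its own case in the
+`/standalone/` listing, while the `value` combinations become subcases that run together inside
+each case, invoking the test function once per subcase:
+
+```ts
+g.test('cases_and_subcases_example')
+  .desc('Hypothetical sketch of how cases and subcases are declared.')
+  .params(u =>
+    u //
+      .combine('mode', ['mode1', 'mode2']) // before beginSubcases(): each combination is a separate case
+      .beginSubcases()
+      .combine('value', [0, 1, 100]) // after beginSubcases(): combinations become subcases within each case
+  )
+  .fn(t => {
+    const { mode, value } = t.params;
+    t.expect(value >= 0, `ran ${mode} with value ${value}`);
+  });
+```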
+
+It's often impossible to predict the exact case/subcase structure before implementing tests, so it
+can be added during implementation instead of during planning.
+
+For any notes which are not specific to a single test, or for preliminary notes for tests that
+haven't been planned in full detail, put them in the test file's `description` variable at
+the top. Or, if they aren't associated with a test file, put them in a `README.txt` file.
+
+**Any notes about missing test coverage must be marked with the word `TODO` inside a
+description or README.** This makes them appear on the `/standalone/` page.
+
+## 2. Open a pull request
+
+Open a PR, and work with the reviewer(s) to revise the test plan.
+
+Usually (probably), plans will be landed in separate PRs before test implementations.
+
+## Conventions used in test plans
+
+- `Iff`: If and only if
+- `x=`: "cartesian-cross equals", like `+=` for cartesian product.
+  Used for combinatorial test coverage.
+  - Sometimes this will result in too many test cases; simplify/reduce as needed
+    during planning *or* implementation.
+- `{x,y,z}`: list of cases to test
+  - e.g. `x= texture format {r8unorm, r8snorm}`
+- *Control case*: a case included to make sure that the rest of the cases aren't
+  missing their target by testing some other error case.
diff --git a/dom/webgpu/tests/cts/checkout/docs/intro/tests.md b/dom/webgpu/tests/cts/checkout/docs/intro/tests.md
new file mode 100644
index 0000000000..a67b6a20cc
--- /dev/null
+++ b/dom/webgpu/tests/cts/checkout/docs/intro/tests.md
@@ -0,0 +1,25 @@
+# Implementing Tests
+
+Once a test plan is done, you can start writing tests.
+To add new tests, imitate the pattern in neighboring tests or neighboring files.
+New test files must be named ending in `.spec.ts`.
+
+For an example test file, see [`src/webgpu/examples.spec.ts`](../../src/webgpu/examples.spec.ts).
+For a more complex, well-structured reference test file, see
+[`src/webgpu/api/validation/vertex_state.spec.ts`](../../src/webgpu/api/validation/vertex_state.spec.ts).
+
+Implement some tests and open a pull request. You can open a PR any time you're ready for a review.
+(If two tests are non-trivial but independent, consider separate pull requests.)
+
+Before uploading, you can run pre-submit checks (`npm test`) to make sure your change will pass CI.
+Use `npm run fix` to fix linting issues.
+
+## Test Helpers
+
+It's best to be familiar with the helpers available in the test suite for simplifying
+test implementations.
+
+New test helpers can be added at any time, either to the existing helper files or to new `.ts`
+files near the `.spec.ts` files where they're used.
+
+Documentation on existing helpers can be found in the [helper index](../helper_index.txt).
-- 
cgit v1.2.3