Diffstat (limited to 'dom/webgpu/tests/cts/checkout/docs')
-rw-r--r--  dom/webgpu/tests/cts/checkout/docs/adding_timing_metadata.md | 95
-rw-r--r--  dom/webgpu/tests/cts/checkout/docs/build.md                   | 55
-rw-r--r--  dom/webgpu/tests/cts/checkout/docs/case_cache.md              | 81
-rw-r--r--  dom/webgpu/tests/cts/checkout/docs/fp_primer.md               | 102
-rw-r--r--  dom/webgpu/tests/cts/checkout/docs/intro/developing.md        | 23
-rw-r--r--  dom/webgpu/tests/cts/checkout/docs/terms.md                   | 2
6 files changed, 262 insertions, 96 deletions
diff --git a/dom/webgpu/tests/cts/checkout/docs/adding_timing_metadata.md b/dom/webgpu/tests/cts/checkout/docs/adding_timing_metadata.md
index fe32cead20..e251524177 100644
--- a/dom/webgpu/tests/cts/checkout/docs/adding_timing_metadata.md
+++ b/dom/webgpu/tests/cts/checkout/docs/adding_timing_metadata.md
@@ -6,14 +6,22 @@
The raw data may be edited manually, to add entries or change timing values.
-The **list** of tests must stay up to date, so it can be used by external
-tools. This is verified by presubmit checks.
+The list of tests in this file is **not** guaranteed to stay up to date.
+Use the generated `gen/*_variant_list*.json` if you need a complete list.
-The `subcaseMS` values are estimates. They can be set to 0 if for some reason
+The `subcaseMS` values are estimates. They can be set to 0 or omitted if for some reason
you can't estimate the time (or there's an existing test with a long name and
-slow subcases that would result in query strings that are too long), but this
-will produce a non-fatal warning. Avoid creating new warnings whenever
-possible. Any existing failures should be fixed (eventually).
+slow subcases that would result in query strings that are too long).
+It's OK if the number is estimated too high.
+
+These entries are estimates for the amount of time that subcases take to run,
+and are used as inputs into the WPT tooling to attempt to portion out tests into
+approximately same-sized chunks. High estimates are OK; they just may generate
+more chunks than necessary.
+
+To check for missing or 0 entries, run
+`tools/validate --print-metadata-warnings src/webgpu`
+and look at the resulting warnings.
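+
+For reference, each entry in `src/webgpu/listing_meta.json` is a single line
+mapping a test query to a `subcaseMS` estimate, for example:
+
+```
+"webgpu:shader,execution,expression,binary,af_matrix_addition:matrix:*": { "subcaseMS": 0 },
+```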
### Performance
@@ -25,46 +33,25 @@ should still execute in under 5 seconds on lower-end computers.
## Problem
-When adding new tests to the CTS you may occasionally see an error like this
+When renaming or removing tests from the CTS you will see an error like this
when running `npm test` or `npm run standalone`:
```
-ERROR: Tests missing from listing_meta.json. Please add the new tests (set subcaseMS to 0 if you cannot estimate it):
- webgpu:shader,execution,expression,binary,af_matrix_addition:matrix:*
-
-/home/runner/work/cts/cts/src/common/util/util.ts:38
- throw new Error(msg && (typeof msg === 'string' ? msg : msg()));
- ^
-Error:
- at assert (/home/runner/work/cts/cts/src/common/util/util.ts:38:11)
- at crawl (/home/runner/work/cts/cts/src/common/tools/crawl.ts:155:11)
-Warning: non-zero exit code 1
- Use --force to continue.
-
-Aborted due to warnings.
+ERROR: Non-existent tests found in listing_meta.json. Please update:
+ webgpu:api,operation,adapter,requestAdapter:old_test_that_got_renamed:*
```
-What this error message is trying to tell us, is that there is no entry for
-`webgpu:shader,execution,expression,binary,af_matrix_addition:matrix:*` in
-`src/webgpu/listing_meta.json`.
+## Solution
-These entries are estimates for the amount of time that subcases take to run,
-and are used as inputs into the WPT tooling to attempt to portion out tests into
-approximately same-sized chunks.
+This means there is a stale line in `src/webgpu/listing_meta.json` that needs
+to be deleted, or updated to match the rename that you did.
-If a value has been defaulted to 0 by someone, you will see warnings like this:
-
-```
-...
-WARNING: subcaseMS≤0 found in listing_meta.json (allowed, but try to avoid):
- webgpu:shader,execution,expression,binary,af_matrix_addition:matrix:*
-...
-```
+## Problem
-These messages should be resolved by adding appropriate entries to the JSON
-file.
+You run `tools/validate --print-metadata-warnings src/webgpu`
+and want to fix the warnings.
-## Solution 1 (manual, best for simple tests)
+## Solution 1 (manual, best for one-off updates of simple tests)
If you're developing new tests and need to update this file, it is sometimes
easiest to do so manually. Run your tests under your usual development workflow
@@ -82,30 +69,18 @@ these values, though they do require some manual intervention. The rest of this
doc will be a walkthrough of running these tools.
Timing data can be captured in bulk and "merged" into this file using
-the `merge_listing_times` tool. This is useful when a large number of tests
+the `merge_listing_times` tool. This is useful when a large number of tests
change or otherwise a lot of tests need to be updated, but it also automates the
manual steps above.
The tool can also be used without any inputs to reformat `listing_meta.json`.
Please read the help message of `merge_listing_times` for more information.
-### Placeholder Value
-
-If your development workflow requires a clean build, the first step is to add a
-placeholder value for entry to `src/webgpu/listing_meta.json`, since there is a
-chicken-and-egg problem for updating these values.
-
-```
- "webgpu:shader,execution,expression,binary,af_matrix_addition:matrix:*": { "subcaseMS": 0 },
-```
-
-(It should have a value of 0, since later tooling updates the value if the newer
-value is higher.)
-
### Websocket Logger
The first tool that needs to be run is `websocket-logger`, which receives data
-on a WebSocket channel to capture timing data when CTS is run. This
+on a WebSocket channel at `localhost:59497` to capture timing data when CTS is run. This
should be run in a separate process/terminal, since it needs to stay running
throughout the following steps.
@@ -125,10 +100,19 @@ Writing to wslog-2023-09-12T18-57-34.txt
...
```
+See also [tools/websocket-logger/README.md](../tools/websocket-logger/README.md).
+
### Running CTS
Now we need to run the specific cases in CTS that we need to time.
-This should be possible under any development workflow (as long as its runtime environment, like Node, supports WebSockets), but the most well-tested way is using the standalone web runner.
+
+This should be possible under any development workflow by logging through a
+side-channel (as long as its runtime environment, like Node, supports WebSockets).
+Regardless of development workflow, you need to enable the `logToWebSocket` flag
+(`?log_to_web_socket=1` in browser, `--log-to-web-socket` on command line, or
+just hack it in by switching the default in `options.ts`).
+
+The most well-tested way to do this is using the standalone web runner.
This requires serving the CTS locally. In the project root:
@@ -141,7 +125,7 @@ Once this is started you can then direct a WebGPU enabled browser to the
specific CTS entry and run the tests, for example:
```
-http://localhost:8080/standalone/?q=webgpu:shader,execution,expression,binary,af_matrix_addition:matrix:*
+http://localhost:8080/standalone/?log_to_web_socket=1&q=webgpu:*
```
If the tests have a high variance in runtime, you can run them multiple times.
@@ -156,8 +140,9 @@ This can be done using the following command:
```
tools/merge_listing_times webgpu -- tools/websocket-logger/wslog-2023-09-12T18-57-34.txt
+tools/merge_listing_times webgpu -- tools/websocket-logger/wslog-*.txt
```
-where the text file is the result file from websocket-logger.
+Or, you can point it to one of the log files from a specific invocation of websocket-logger.
Now you just need to commit the pending diff in your repo.
diff --git a/dom/webgpu/tests/cts/checkout/docs/build.md b/dom/webgpu/tests/cts/checkout/docs/build.md
index 2d7b2f968c..d786bbc18c 100644
--- a/dom/webgpu/tests/cts/checkout/docs/build.md
+++ b/dom/webgpu/tests/cts/checkout/docs/build.md
@@ -1,16 +1,48 @@
# Building
Building the project is not usually needed for local development.
-However, for exports to WPT, or deployment (https://gpuweb.github.io/cts/),
+However, for exports to WPT, NodeJS, [or deployment](https://gpuweb.github.io/cts/),
files can be pre-generated.
-The project builds into two directories:
+## Build types
-- `out/`: Built framework and test files, needed to run standalone or command line.
-- `out-wpt/`: Build directory for export into WPT. Contains:
- - An adapter for running WebGPU CTS tests under WPT
- - A copy of the needed files from `out/`
- - A copy of any `.html` test cases from `src/`
+The project can be built several different ways, each with a different output directory:
+
+### 0. on-the-fly builds (no output directory)
+
+Use `npm run start` to launch a server that live-compiles everything as needed.
+Use `tools/run_node` and other tools to run under `ts-node` which compiles at runtime.
+
+### 1. `out` directory
+
+**Built with**: `npm run standalone`
+
+**Serve locally with**: `npx grunt serve`
+
+**Used for**: Static deployment of the CTS, primarily for [gpuweb.github.io/cts](https://gpuweb.github.io/cts/).
+
+### 2. `out-wpt` directory
+
+**Built with**: `npm run wpt`
+
+**Used for**: Deploying into [Web Platform Tests](https://web-platform-tests.org/). See [below](#export-to-wpt) for more information.
+
+Contains:
+
+- An adapter for running WebGPU CTS tests under WPT
+- A copy of the needed files from `out/`
+- A copy of any `.html` test cases from `src/`
+
+### 3. `out-node` directory
+
+**Built with**: `npm run node`
+
+**Used for**: Running NodeJS tools, if you want to specifically avoid the live-compilation overhead of the `tools/` versions, or are running on a deployment which no longer has access to `ts-node` (which is a build-time dependency). For example:
+
+- `node out-node/common/runtime/cmdline.js` ([source](../src/common/runtime/cmdline.ts)) - A command line interface test runner
+- `node out-node/common/runtime/server.js` ([source](../src/common/runtime/server.ts)) - An HTTP server for executing CTS tests with a REST interface
+
+## Testing
To build and run all pre-submit checks (including type and lint checks and
unittests), use:
@@ -25,15 +57,6 @@ For checks only:
npm run check
```
-For a quicker iterative build:
-
-```sh
-npm run standalone
-```
-
-## Run
-
-To serve the built files (rather than using the dev server), run `npx grunt serve`.
## Export to WPT
diff --git a/dom/webgpu/tests/cts/checkout/docs/case_cache.md b/dom/webgpu/tests/cts/checkout/docs/case_cache.md
new file mode 100644
index 0000000000..c3ba8718b5
--- /dev/null
+++ b/dom/webgpu/tests/cts/checkout/docs/case_cache.md
@@ -0,0 +1,81 @@
+# Case Cache
+
+The WebGPU CTS contains many tests that check that the results of an operation
+fall within limits defined by the WebGPU and WGSL specifications. Computing
+these allowed limits can be very expensive; however, the values do not vary by
+platform or device, and can be precomputed and reused for multiple CTS runs.
+
+## File cache
+
+To speed up execution of the CTS, the CTS git repo holds pre-computed
+test cases, generated from `*.cache.ts` files and serialized in a set of binary
+files under [`src/resources/cache`](../src/resources/cache).
+
+These files are regenerated by [`src/common/tools/gen_cache.ts`](../src/common/tools/gen_cache.ts)
+which can be run with `npx grunt run:generate-cache`.
+This tool is automatically run by the various Grunt build commands.
+
+As generating the cache is expensive (hence why we build it ahead of time!) the
+cache generation tool will only re-build the cache files it believes may be out
+of date. To determine which files it needs to rebuild, the tool calculates a
+hash of all the transitive source TypeScript files that are used to build the
+output, and compares this hash to the hash stored in
+[`src/resources/cache/hashes.json`](../src/resources/cache/hashes.json). Only
+those cache files with differing hashes are rebuilt.
+
+Transitive imports easily grow, and these can cause unnecessary rebuilds of the cache.
+To help avoid unnecessary rebuilds, files that are known to not be used by the cache can be
+annotated with a `MUST_NOT_BE_IMPORTED_BY_DATA_CACHE` comment anywhere in the file. If a file with
+this comment is transitively imported by a `.cache.ts` file, then the cache generation tool will
+error with a trace of the imports from the `.cache.ts` file to the file with this comment.
+
+The cache files are copied from [`src/resources/cache`](../src/resources/cache)
+to the `resources/cache` subdirectory of the
+[`out` and `out-node` build directories](build.md#build-types), so the runner
+can load these cache files.
+
+The GitHub presubmit checks will error if the cache files or
+[`hashes.json`](../src/resources/cache/hashes.json) need updating.
+
+## In memory cache
+
+If a cache file cannot be found, then the [`CaseCache`](../src/webgpu/shader/execution/expression/case_cache.ts)
+will build the cases during CTS execution and store the results in an in-memory LRU cache.
+
+## Using the cache
+
+To add test cases to the cache:
+
+1. Create a new <code><i>my_file</i>.cache.ts</code> file.
+
+2. In that file, import `makeCaseCache` from [`'case_cache.js'`](../src/webgpu/shader/execution/expression/case_cache.ts);
+
+```ts
+import { makeCaseCache } from '../case_cache.js'; // your relative path may vary
+```
+
+3. Declare an exported global variable with the name `d`, assigned with the return value of `makeCaseCache()`:
+
+```ts
+export const d = makeCaseCache('unique/path/of/your/cache/file', {
+ // Declare any number of fields that build the test cases
+ name_of_your_case: () => {
+ return fullI32Range().map(e => { // example case builder
+ return { input: i32(e), expected: i32(-e) };
+ });
+ },
+});
+```
+
+4. To use the cached cases in a <code><i>my_file</i>.spec.ts</code> file, import `d` from <code><i>my_file</i>.cache.js</code>, and use `d.get();`
+
+```ts
+import { d } from './my_file.cache.js';
+
+const cases = await d.get('name_of_your_case');
+// cases will either be loaded from the cache file, loaded from the in-memory
+// LRU, or built on the fly.
+```
+
+5. Run `npx grunt run:generate-cache` to generate the new cache file.
diff --git a/dom/webgpu/tests/cts/checkout/docs/fp_primer.md b/dom/webgpu/tests/cts/checkout/docs/fp_primer.md
index a8302fb461..769657c8f8 100644
--- a/dom/webgpu/tests/cts/checkout/docs/fp_primer.md
+++ b/dom/webgpu/tests/cts/checkout/docs/fp_primer.md
@@ -39,7 +39,8 @@ A floating point number system defines
- Arithmetic operations on those representatives, trying to approximate the
ideal operations on real numbers.
-The cardinality mismatch alone implies that any floating point number system necessarily loses information.
+The cardinality mismatch alone implies that any floating point number system
+necessarily loses information.
This means that not all numbers in the bounds can be exactly represented as a
floating point value.
@@ -114,7 +115,7 @@ Implementations may assume that infinities are not present. When an evaluation
at runtime would produce an infinity, an indeterminate value is produced
instead.
-When a value goes out of bounds for a specific precision there are special
+When a value goes out-of-bounds for a specific precision there are special
rounding rules that apply. If it is 'near' the edge of finite values for that
precision, it is considered to be near-overflowing, and the implementation may
choose to round it to the edge value or the appropriate infinity. If it is not
@@ -163,7 +164,7 @@ the rules for compile time execution will be discussed below.)
Signaling NaNs are treated as quiet NaNs in the WGSL spec. And quiet NaNs have
the same "may-convert-to-indeterminate-value" behaviour that infinities have, so
-for the purpose of the CTS they are handled by the infinite/out of bounds logic
+for the purpose of the CTS they are handled by the infinite/out-of-bounds logic
normally.
## Notation/Terminology
@@ -231,14 +232,20 @@ referred to as the beginning of the interval and `b` as the end of the interval.
When talking about intervals, this doc and the code endeavours to avoid using
the term **range** to refer to the span of values that an interval covers,
-instead using the term bounds to avoid confusion of terminology around output of
-operations.
+instead using the term **endpoints** to avoid confusion of terminology around
+output of operations.
+
+The term **bounds** is generally used to refer to the conceptual numeric
+spaces, i.e. f32 or abstract float.
+
+Thus a specific interval can have **endpoints** that are either in or out of
+bounds for a specific floating point precision.
## Accuracy
As mentioned above floating point numbers are not able to represent all the
-possible values over their bounds, but instead represent discrete values in that
-interval, and approximate the remainder.
+possible values over their range, but instead represent discrete values in that
+space, and approximate the remainder.
Additionally, floating point numbers are not evenly distributed over the real
number line, but instead are more densely clustered around zero, with the space
@@ -398,7 +405,7 @@ That would be very inefficient though and make your reviewer sad to read.
For mapping intervals to intervals the key insight is that we only need to be
concerned with the extrema of the operation in the interval, since the
-acceptance interval is the bounds of the possible outputs.
+acceptance interval is defined by the largest and smallest of the possible outputs.
In more precise terms:
```
@@ -538,6 +545,65 @@ This algorithmically looks something like this:
Return division result
```
+### Out of Bounds
+When calculating inherited intervals, if an intermediate calculation goes out of
+bounds this will flow through to later calculations, even if a later calculation
+would pull the result back inbounds.
+
+For example, `fp.positive.max + fp.positive.max - fp.positive.max` could be
+simplified to just `fp.positive.max` before execution, but it would also be
+valid for an implementation to naively perform left to right evaluation. Thus
+the addition would produce an intermediate value of `2 * fp.positive.max`. Again,
+the implementation may hoist the intermediate calculation to a higher precision
+to avoid overflow here, but is not required to. So a conforming implementation
+at this point may just return any value, since the calculation went out of bounds.
+Thus the execution tests in the CTS should accept any value returned, so the
+case is just effectively confirming the computation completes.
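The overflow in this example can be sketched by simulating f32 arithmetic with JavaScript's `Math.fround` (an illustrative sketch, not CTS code):

```typescript
// f32.max (largest finite f32), exactly representable as an f64.
const f32Max = 3.4028234663852886e38;

// Naive left-to-right f32 evaluation of f32.max + f32.max - f32.max,
// simulated by rounding every intermediate result to f32 with Math.fround.
const naive = Math.fround(Math.fround(f32Max + f32Max) - f32Max);

// The intermediate `2 * f32.max` overflows f32, so the final result is
// Infinity, even though the algebraically simplified expression is f32.max.
console.log(naive); // Infinity
```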
+
+When looking at validation tests there are some subtleties about out of bounds
+behaviour; specifically, how far out of bounds the value went will influence
+the expected results. This will be discussed in more detail below.
+
+#### Vectors and Matrices
+The above discussion about inheritance of out of bounds intervals assumed scalar
+calculations, where all the result intervals depend on the input intervals, so
+if an out-of-bounds input occurred, all the output values were effectively out
+of bounds.
+
+For vector and matrix operations, this is not always true. Operations on these
+data structures can either define an element-wise mapping, where each output
+element is calculated by executing a scalar operation on an input element
+(sometimes referred to as component-wise), or the operation may be defined
+such that the output elements are dependent on the entire input.
+
+For concrete examples, constant scaling (`c * vec` or `c * mat`) is an
+element-wise operation, because one can define a simple mapping
+`o[i]` = `c * i[i]`, where the ith output only depends on the ith input.
+
+A non-element-wise operation would be something like cross product of vectors
+or the determinant of a matrix, where each output element is dependent on
+multiple input elements.
+
+For component-wise operations, out of bounds-ness flows through per element,
+i.e. if the ith input element is considered to have gone out of bounds, then
+the ith output element is considered to have done so too, regardless of the
+operation performed. Thus an input may be a mixture of out of bounds and
+inbounds elements, and produce another such mixture, assuming the operation
+being performed does not itself push elements out of bounds.
+
+For non-component-wise operations, out of bounds-ness flows through the entire
+operation, i.e. if any of the input elements is out of bounds, then all the
+output elements are considered to be out of bounds. Additionally, if the
+calculation for any of the elements in output goes out of bounds, then the
+entire output is considered to have gone out of bounds, even if other individual
+elements stayed inbounds.
+
+For some non-element-wise operations one could define mappings for individual
+output elements that do not depend on all the input elements, and consider only
+those inputs that are actually used, but for the purposes of WGSL and the CTS,
+OOB inheritance is not so finely defined as to distinguish between using some
+versus all of the input elements for non-element-wise operations.
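The two propagation rules can be sketched as follows, tracking an out-of-bounds flag per element (a hypothetical `Elem` helper for illustration, not the actual CTS interval types):

```typescript
// Hypothetical sketch of OOB propagation; the CTS's actual interval
// machinery is more involved.
interface Elem {
  value: number;
  oob: boolean; // whether this element has gone out of bounds
}

// Element-wise (component-wise) operation: OOB-ness flows through per element.
function scaleVec(c: number, v: Elem[]): Elem[] {
  return v.map(e => ({ value: c * e.value, oob: e.oob }));
}

// Non-element-wise operation (a reduction): if any input element is OOB,
// the entire output is considered OOB.
function sumVec(v: Elem[]): Elem {
  return {
    value: v.reduce((acc, e) => acc + e.value, 0),
    oob: v.some(e => e.oob),
  };
}

const input: Elem[] = [
  { value: 1, oob: false },
  { value: 2, oob: true }, // this element already went out of bounds
];

console.log(scaleVec(2, input).map(e => e.oob)); // [ false, true ]
console.log(sumVec(input).oob); // true
```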
+
## Compile vs Run Time Evaluation
The above discussions have been primarily agnostic to when and where a
@@ -553,14 +619,14 @@ These are compile vs run time, and CPU vs GPU. Broadly speaking compile time
execution happens on the host CPU, and run time evaluation occurs on a dedicated
GPU.
-(Software graphics implementations like WARP and SwiftShader technically break this by
-being a software emulation of a GPU that runs on the CPU, but conceptually one can
-think of these implementations being a type of GPU in this context, since it has
-similar constraints when it comes to precision, etc.)
+(Software graphics implementations like WARP and SwiftShader technically break
+this by being a software emulation of a GPU that runs on the CPU, but
+conceptually one can think of these implementations being a type of GPU in this
+context, since it has similar constraints when it comes to precision, etc.)
Compile time evaluation is execution that occurs when setting up a shader
module, i.e. when compiling WGSL to a platform specific shading language. It is
-part of resolving values for things like constants, and occurs once before the
+part of resolving values for things like constants, and occurs once, before the
shader is run by the caller. It includes constant evaluation and override
evaluation. All AbstractFloat operations are compile time evaluated.
@@ -623,7 +689,7 @@ near-overflow vs far-overflow behaviour. Thankfully this can be broken down into
a case by case basis based on where an interval falls.
Assuming `X`, is the well-defined result of an operation, i.e. not indeterminate
-due to the operation isn't defined for the inputs:
+due to the operation not being defined for the inputs:
| Region | | Result |
|------------------------------|------------------------------------------------------|--------------------------------|
@@ -643,7 +709,9 @@ behaviour in this region as rigorously defined nor tested, so fully testing
here would likely find lots of issues that would just need to be mitigated in
the CTS.
-Currently, we choose to avoid testing validation of near-overflow scenarios.
+Currently, we have chosen to not test validation of near-overflow scenarios to
+avoid this complexity. If this becomes a significant source of bugs and/or
+incompatibility between implementations this can be revisited in the future.
### Additional Technical Limitations
@@ -652,7 +720,7 @@ the theoretical world that the intervals being used for testing are infinitely
precise, when in actuality they are implemented by the ECMAScript `number` type,
which is implemented as a f64 value.
-For the vast majority of cases, even out of bounds and overflow, this is
+For the vast majority of cases, even out-of-bounds and overflow, this is
sufficient. There is one small slice where this breaks down: specifically, if
the result is just outside the finite range by less than 1 f64 ULP of the edge
value. An example of this is `2 ** -11 + f32.max`. This will be between `f32.max`
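Since JavaScript's `number` is an f64, this limitation is directly observable (an illustrative sketch):

```typescript
// f32.max, exactly representable as an f64 (JavaScript's number type).
const f32Max = 3.4028234663852886e38;

// The true real-number result is strictly greater than f32.max, but it is
// within much less than 1 f64 ULP of it, so f64 arithmetic rounds it back
// down to exactly f32.max.
const sum = 2 ** -11 + f32Max;

console.log(sum === f32Max); // true
```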
@@ -752,7 +820,7 @@ const_assert upper > foo(x) // Result was above the acceptance interval
```
where lower and upper would actually be string replaced with literals for the
-bounds of the acceptance interval when generating the shader text.
+endpoints of the acceptance interval when generating the shader text.
This approach has a number of limitations that made it unacceptable for the CTS.
First, how errors are reported is a pain to debug. Someone working with the CTS
diff --git a/dom/webgpu/tests/cts/checkout/docs/intro/developing.md b/dom/webgpu/tests/cts/checkout/docs/intro/developing.md
index 5b1aeed36d..0016c2c048 100644
--- a/dom/webgpu/tests/cts/checkout/docs/intro/developing.md
+++ b/dom/webgpu/tests/cts/checkout/docs/intro/developing.md
@@ -34,6 +34,11 @@ the standalone runner.)
Note: The first load of a test suite may take some time as generating the test suite listing can
take a few seconds.
+## Documentation
+
+In addition to the documentation pages you're reading, there is TSDoc documentation.
+Start at the [helper index](https://gpuweb.github.io/cts/docs/tsdoc/).
+
## Standalone Test Runner / Test Plan Viewer
**The standalone test runner also serves as a test plan viewer.**
@@ -43,7 +48,7 @@ You can use this to preview how your test plan will appear.
You can view different suites (webgpu, unittests, stress, etc.) or different subtrees of
the test suite.
-- `http://localhost:8080/standalone/` (defaults to `?runnow=0&worker=0&debug=0&q=webgpu:*`)
+- `http://localhost:8080/standalone/` (defaults to `?runnow=0&debug=0&q=webgpu:*`)
- `http://localhost:8080/standalone/?q=unittests:*`
- `http://localhost:8080/standalone/?q=unittests:basic:*`
@@ -51,7 +56,9 @@ The following url parameters change how the harness runs:
- `runnow=1` runs all matching tests on page load.
- `debug=1` enables verbose debug logging from tests.
-- `worker=1` runs the tests on a Web Worker instead of the main thread.
+- `worker=dedicated` (or `worker` or `worker=1`) runs the tests on a dedicated worker instead of the main thread.
+- `worker=shared` runs the tests on a shared worker instead of the main thread.
+- `worker=service` runs the tests on a service worker instead of the main thread.
- `power_preference=low-power` runs most tests passing `powerPreference: low-power` to `requestAdapter`
- `power_preference=high-performance` runs most tests passing `powerPreference: high-performance` to `requestAdapter`
@@ -112,15 +119,17 @@ Opening a pull request will automatically notify reviewers.
To make the review process smoother, once a reviewer has started looking at your change:
- Avoid major additions or changes that would be best done in a follow-up PR.
-- Avoid rebases (`git rebase`) and force pushes (`git push -f`). These can make
- it difficult for reviewers to review incremental changes as GitHub often cannot
+- Avoid deleting commits that have already been reviewed, which occurs when using
+ rebases (`git rebase`) and force pushes (`git push -f`). These can make
+ it difficult for reviewers to review incremental changes as GitHub usually cannot
view a useful diff across a rebase. If it's necessary to resolve conflicts
with upstream changes, use a merge commit (`git merge`) and don't include any
- consequential changes in the merge, so a reviewer can skip over merge commits
+ unnecessary changes in the merge, so that a reviewer can skip over merge commits
when working through the individual commits in the PR.
-- When you address a review comment, mark the thread as "Resolved".
-Pull requests will (usually) be landed with the "Squash and merge" option.
+ The "Create a merge commit" merge option is disabled, so `main` history always
+ remains linear (no merge commits). PRs are usually landed using "Squash and merge".
+- When you address a review comment, mark the thread as "Resolved".
### TODOs
diff --git a/dom/webgpu/tests/cts/checkout/docs/terms.md b/dom/webgpu/tests/cts/checkout/docs/terms.md
index 032639be57..0dc6f0ca17 100644
--- a/dom/webgpu/tests/cts/checkout/docs/terms.md
+++ b/dom/webgpu/tests/cts/checkout/docs/terms.md
@@ -111,7 +111,7 @@ Each Suite has one Listing File (`suite/listing.[tj]s`), containing a list of th
in the suite.
In `src/suite/listing.ts`, this is computed dynamically.
-In `out/suite/listing.js`, the listing has been pre-baked (by `tools/gen_listings`).
+In `out/suite/listing.js`, the listing has been pre-baked (by `tools/gen_listings_and_webworkers`).
**Type:** Once `import`ed, `ListingFile`