# Fetch Metadata test generation framework

This directory defines a command-line tool for procedurally generating WPT
tests.

## Motivation

Many features of the web platform involve the browser making one or more HTTP
requests to remote servers. Only some aspects of these requests are specified
within the standard that defines the relevant feature. Other aspects are
specified by external standards which span the entire platform (e.g. [Fetch
Metadata Request Headers](https://w3c.github.io/webappsec-fetch-metadata/)).

This state of affairs makes it difficult to maintain test coverage for two
reasons:

- When a new feature introduces a new kind of web request, it must be verified
  to integrate with every cross-cutting standard.
- When a new cross-cutting standard is introduced, it must be verified to
  integrate with every kind of web request.

The tool in this directory attempts to reduce this tension. It allows
maintainers to express instructions for making web requests in an abstract
sense. These generic instructions can be reused to produce a different suite
of tests for each cross-cutting feature.

When a new kind of request is proposed, a single generic template can be
defined here. This will provide the maintainers of all cross-cutting features
with clear instructions on how to extend their test suites to cover the new
feature.

Similarly, when a new cross-cutting feature is proposed, the authors can use
this tool to build a test suite which spans the entire platform.

## Build script

To generate the Fetch Metadata tests, run `./wpt update-built --include fetch`
in the root of the repository.

## Configuration

The test generation tool requires a YAML-formatted configuration file as its
input. The file should define a dictionary with the following keys (a sketch
follows the list):

- `templates` - a string describing the filesystem path from which template
  files should be loaded
- `output_directory` - a string describing the filesystem path where the
  generated test files should be written
- `cases` - a list of dictionaries describing how the test templates should be
  expanded with individual subtests; each dictionary should have the following
  keys:
  - `all_subtests` - properties which should be defined for every expansion
  - `common_axis` - a list of dictionaries
  - `template_axes` - a dictionary relating template names to properties that
    should be used when expanding that particular template

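The following hypothetical configuration illustrates the shape of such a file;
the template names and the subtest properties (`expected_result`,
`redirection`) are invented here purely for illustration:

```yaml
templates: path/to/templates
output_directory: path/to/output
cases:
  - all_subtests:
      # Hypothetical property applied to every generated subtest
      expected_result: allowed
    common_axis:
      - redirection: no-redirect
      - redirection: cross-origin
    template_axes:
      fetch.html:
        - filename_flags: []
      worker.html: []  # intentionally empty: expands to zero subtests
```
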
Internally, the tool creates a set of "subtests" for each template. This set is
the Cartesian product of the `common_axis` and the given template's entry in
the `template_axes` dictionary. It uses this set of subtests to expand the
template, creating an output file. Refer to the next section for a concrete
example of how the expansion is performed.

In general, the tool will output a single file for each template. However, the
`filename_flags` attribute has special semantics. It is used to separate
subtests for the same template file. This is intended to accommodate [the
web-platform-tests' filename-based
conventions](https://web-platform-tests.org/writing-tests/file-names.html).

For instance, when `.https` is present in a test file's name, the WPT test
harness will load that test using the HTTPS protocol. Subtests which include
the value `https` in the `filename_flags` property will be expanded using the
appropriate template but written to a distinct file whose name includes
`.https`.
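
As a hedged sketch (reusing the `template1.html` name from the example below;
the structure is otherwise invented), a template's axis might mix flagged and
unflagged subtests:

```yaml
template_axes:
  template1.html:
    # Expanded into template1.html, served over plain HTTP
    - filename_flags: []
    # Expanded into a separate template1.https.html, which the
    # WPT harness serves over HTTPS
    - filename_flags: [https]
```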

The generation tool requires that the configuration file reference every
template in the `templates` directory. Because templates and configuration
files may be contributed by different people, this requirement ensures that
configuration authors are aware of all available templates. Some templates may
not be relevant for some features; in those cases, the configuration file can
include an empty array for the template's entry in the `template_axes`
dictionary (as in `template3.html` in the example which follows).

## Expansion example

In the following example configuration file, `a`, `b`, `s`, `w`, `x`, `y`, and
`z` all represent associative arrays.

```yaml
templates: path/to/templates
output_directory: path/to/output
cases:
  - all_subtests: s
    common_axis: [a, b]
    template_axes:
      template1.html: [w]
      template2.html: [x, y, z]
      template3.html: []
```

When run with such a configuration file, the tool would generate two files,
expanded with data as described below (where `(a, b)` represents the union of
`a` and `b`):

    template1.html: [(a, w), (b, w)]
    template2.html: [(a, x), (b, x), (a, y), (b, y), (a, z), (b, z)]
    template3.html: (zero tests; not expanded)
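
The expansion amounts to a Cartesian product followed by a dictionary union.
The following Python sketch reproduces the result above; it illustrates the
semantics rather than the tool's actual code, and the assumption that
template-specific properties win on key collisions is ours:

```python
from itertools import product

def expand(common_axis, template_axis):
    """Return one merged dictionary per (common, template) pair.

    An empty template axis yields an empty product, so the template
    is expanded with zero subtests, as with template3.html above.
    """
    # Assumption: template-specific values override common ones.
    return [{**common, **specific}
            for specific, common in product(template_axis, common_axis)]

a, b = {"id": "a"}, {"id": "b"}
w, x, y, z = ({"tpl": n} for n in "wxyz")

print(expand([a, b], [w]))        # [(a, w), (b, w)]
print(expand([a, b], [x, y, z]))  # [(a, x), (b, x), ..., (a, z), (b, z)]
print(expand([a, b], []))         # []  (template3.html: not expanded)
```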

## Design Considerations

**Efficiency of generated output** The tool is capable of generating a large
number of tests given a small amount of input. Structured naively, that output
could yield test suites which take a large amount of time and computational
resources to complete. The tool has been designed to help authors structure the
generated output to reduce these resource requirements.

**Literalness of generated output** Because the generated output is how most
people will interact with the tests, it is important that it be approachable.
This tool avoids outputting abstractions which would frustrate attempts to read
the source code or step through its execution environment.

**Simplicity** The test generation logic itself was written to be approachable.
This makes it easier to anticipate how the tool will behave with new input, and
it lowers the bar for others to contribute improvements.

Non-goals include conciseness of template files (verbosity makes the potential
expansions more predictable) and conciseness of generated output (verbosity
aids in the interpretation of results).