# Network Error Logging

The tests in this directory exercise the user agent's implementation of [Network
Error Logging](https://w3c.github.io/network-error-logging/) and
[Reporting](https://w3c.github.io/reporting/).

## Collector

Each test case generates a unique `reportID` that is used to distinguish the NEL
reports generated by that test case.

The [support/report.py][] file is a [Python file handler][] that acts as a
Reporting collector.  By default it saves any reports that it receives into the
[stash][].  If you pass the optional `op` URL parameter with the value
`retrieve_report`, it instead returns a list of all of the reports received for
a particular `reportID`.

[Python file handler]: https://wptserve.readthedocs.io/en/latest/handlers.html#python-file-handlers
[stash]: https://wptserve.readthedocs.io/en/latest/stash.html
[support/report.py]: support/report.py
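
In rough outline, the handler looks something like this (a minimal sketch
assuming wptserve's stash API, not the actual contents of
[support/report.py][]):

```python
import json

def main(request, response):
    # Reports are keyed by the test case's unique reportID.
    report_id = request.GET.first(b"reportID").decode("utf-8")
    stash = request.server.stash

    if request.GET.first(b"op", b"") == b"retrieve_report":
        # Return (and re-stash) every report received for this reportID.
        reports = stash.take(report_id) or []
        stash.put(report_id, reports)
        return [(b"Content-Type", b"application/json")], json.dumps(reports)

    # Default operation: append the uploaded reports to the stash.
    reports = stash.take(report_id) or []
    reports.extend(json.loads(request.body))
    stash.put(report_id, reports)
    return [(b"Content-Type", b"application/json")], b"{}"
```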

## Installing NEL policies

NEL reports are only generated if the user agent has received a NEL policy for
the origin of the request.  The current request counts: if its response
contains a policy, that policy applies to the current request and to all future
requests to that origin, until the policy's `max_age` expires.
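
For example, a response that installs a basic policy carries `NEL` and
`Report-To` headers along these lines (the group name and endpoint URL here are
illustrative; the test images point their endpoints at the collector with the
test's `reportID`):

```
NEL: {"report_to": "nel", "max_age": 2592000}
Report-To: {"group": "nel", "max_age": 2592000,
            "endpoints": [{"url": "https://.../support/report.py?reportID=..."}]}
```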

Most of the test cases therefore make a request or two to install NEL policies,
and then make another request that should or should not be covered by those
policies.  They then assert that NEL reports were or were not created, as
required by the spec.

The [support][] directory contains several images, each of which defines a
particular "kind" of NEL policy (e.g., `include_subdomains` set or unset, or no
policy at all).  The [support/nel.sub.js][] file contains JavaScript helper
functions for requesting those images, so that the test cases themselves are
more descriptive.

[support]: support
[support/nel.sub.js]: support/nel.sub.js

## Avoiding spurious reports

NEL policies apply to **all** future requests to the origin.  We therefore
serve all of a test case's "infrastructure" (the test case itself,
[support/report.py][], and [support/nel.sub.js][]) on a different origin from
the requests that exercise the NEL implementation.  That way we don't have to
wade through NEL reports about the infrastructure when verifying the reports
about the requests that we care about.

## Browser configuration

You must configure your browser's Reporting implementation to upload reports
for a request immediately.  The test cases do not currently have any timeouts;
they assume that as soon as the Fetch API promise for a request resolves, any
NEL reports for that request have already been uploaded.

## Test parallelism

Because NEL policies are stored in a global cache in the user agent, we need to
run the tests in this directory serially instead of in parallel.  We implement
a simple spin-lock in [support/lock.py][] to ensure that only one test at a
time performs any NEL-related requests.

[support/lock.py]: support/lock.py
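
A stash-backed lock handler can be sketched roughly as follows; the fixed stash
key and the `op` parameter are illustrative assumptions, and the real
[support/lock.py][] differs in detail:

```python
# Stash keys must be UUIDs; this fixed value is illustrative only.
LOCK_KEY = "0a3a64bb-1b42-4afd-8df2-7a0d3c256a4f"

def main(request, response):
    op = request.GET.first(b"op")
    stash = request.server.stash
    with stash.lock:  # serialize access to the shared stash
        holder = stash.take(LOCK_KEY)
        if op == b"lock":
            if holder is not None:
                # Another test holds the lock: put it back and tell
                # the caller to spin and retry.
                stash.put(LOCK_KEY, holder)
                return b"0"
            stash.put(LOCK_KEY, True)  # lock acquired
            return b"1"
        # op == b"unlock": the take() above already cleared the key.
        return b"1"
```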

## CORS preflights

Reporting uploads are subject to CORS preflights.  We want to test normal
operation (when preflight requests succeed) as well as failures of the CORS
preflight logic in the user agent.  To support this, our test collector is
configured to always reject the CORS preflight for a single subdomain (`www2`),
and to always grant the CORS preflight for all other test subdomains.
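
Inside the collector, that policy boils down to something like the following
sketch (`preflight_response` is a hypothetical helper, and the `www2` check is
simplified; the real [support/report.py][] implements this differently):

```python
def preflight_response(request):
    # Hypothetical helper sketching the collector's CORS policy.
    origin = request.headers.get(b"Origin", b"")
    if b"://www2." in origin:
        # Reject the preflight for www2 by omitting all CORS headers.
        return 200, [], b""
    # Grant the preflight for every other test subdomain.
    headers = [
        (b"Access-Control-Allow-Origin", origin),
        (b"Access-Control-Allow-Methods", b"POST"),
        (b"Access-Control-Allow-Headers", b"Content-Type"),
    ]
    return 200, headers, b""
```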