Diffstat (limited to '')
-rw-r--r--netwerk/docs/cache2/doc.rst569
-rw-r--r--netwerk/docs/captive_portals.md80
-rw-r--r--netwerk/docs/dns/dns-over-https-trr.md158
-rw-r--r--netwerk/docs/dns/trr-skip-reasons.md324
-rw-r--r--netwerk/docs/early_hints.md157
-rw-r--r--netwerk/docs/http/http3.md154
-rw-r--r--netwerk/docs/http/lifecycle.rst220
-rw-r--r--netwerk/docs/http/logging.rst320
-rw-r--r--netwerk/docs/http_server_for_testing.rst482
-rw-r--r--netwerk/docs/index.rst25
-rw-r--r--netwerk/docs/neqo_triage_guideline.md12
-rw-r--r--netwerk/docs/network_test_guidelines.md175
-rw-r--r--netwerk/docs/new_to_necko_resources.rst80
-rw-r--r--netwerk/docs/submitting_networking_bugs.md112
-rw-r--r--netwerk/docs/url_parsers.md143
-rw-r--r--netwerk/docs/webtransport/webtransport.md6
-rw-r--r--netwerk/docs/webtransport/webtransportsessionproxy.md19
17 files changed, 3036 insertions, 0 deletions
diff --git a/netwerk/docs/cache2/doc.rst b/netwerk/docs/cache2/doc.rst
new file mode 100644
index 0000000000..71982be9e6
--- /dev/null
+++ b/netwerk/docs/cache2/doc.rst
@@ -0,0 +1,569 @@
+HTTP Cache
+==========
+
+This document describes the **HTTP cache implementation**.
+
+The code resides in `/netwerk/cache2 (searchfox)
+<https://searchfox.org/mozilla-central/source/netwerk/cache2>`_
+
+API
+---
+
+Here is a detailed description of the HTTP cache v2 API, examples
+included. This document only contains what cannot be found or may not
+be clear directly from the `IDL files <https://searchfox.org/mozilla-central/search?q=&path=cache2%2FnsICache&case=false&regexp=false>`_ comments.
+
+- The cache API is **completely thread-safe** and **non-blocking**.
+- There is **no IPC support**. It's only accessible on the default
+ chrome process.
+- When there is no profile, the HTTP cache still works, but everything is
+  stored only in memory, without obeying any particular limits.
+
+.. _nsICacheStorageService:
+
+nsICacheStorageService
+----------------------
+
+- The HTTP cache entry-point. Accessible as a service only, fully
+ thread-safe, scriptable.
+
+- `nsICacheStorageService.idl (searchfox) <https://searchfox.org/mozilla-central/source/netwerk/cache2/nsICacheStorageService.idl>`_
+
+- \ ``"@mozilla.org/netwerk/cache-storage-service;1"``
+
+- Provides methods accessing "storage" objects – see `nsICacheStorage` below – giving further access to cache entries – see :ref:`nsICacheEntry <nsICacheEntry>` more below – per specific URL.
+
+- Currently we have three types of storage; all the access methods return
+  an :ref:`nsICacheStorage <nsICacheStorage>` object:
+
+ - **memory-only** (``memoryCacheStorage``): stores data only in a
+ memory cache, data in this storage are never put to disk
+
+ - **disk** (``diskCacheStorage``): stores data on disk, but for
+ existing entries also looks into the memory-only storage; when
+ instructed via a special argument also primarily looks into
+ application caches
+
+ .. note::
+
+ **application cache** (``appCacheStorage``): when a consumer has a
+ specific ``nsIApplicationCache`` (i.e. a particular app cache
+ version in a group) in hands, this storage will provide read and
+ write access to entries in that application cache; when the app
+ cache is not specified, this storage will operate over all
+ existing app caches. **This kind of storage is deprecated and will be removed** in `bug 1694662 <https://bugzilla.mozilla.org/show_bug.cgi?id=1694662>`_
+
+- The service also provides methods to clear the whole disk and memory
+ cache content or purge any intermediate memory structures:
+
+  - ``clear`` – after it returns, all entries are no longer accessible
+    through the cache APIs; the method is fast to execute and
+    non-blocking; the actual erase happens in the background
+
+  - ``purgeFromMemory`` – removes (schedules to remove) any
+    intermediate cache data held in memory for faster access (more
+    about :ref:`Intermediate_Memory_Caching <Intermediate_Memory_Caching>` below)
+
+.. _nsILoadContextInfo:
+
+nsILoadContextInfo
+------------------
+
+- Distinguishes the scope of the storage requested to be opened.
+
+- Mandatory argument to ``*Storage`` methods of :ref:`nsICacheStorageService <nsICacheStorageService>`.
+
+- `nsILoadContextInfo.idl (searchfox) <https://searchfox.org/mozilla-central/source/netwerk/base/nsILoadContextInfo.idl>`_
+
+
+- It is a helper interface wrapping the following arguments into a single one:
+
+ - **private-browsing** boolean flag
+ - **anonymous load** boolean flag
+ - **origin attributes** js value
+
+ .. note::
+
+ Helper functions to create nsILoadContextInfo objects:
+
+ - C++ consumers: functions at ``LoadContextInfo.h`` exported
+ header
+
+ - JS consumers: ``Services.loadContextInfo`` which is an instance of ``nsILoadContextInfoFactory``.
+
+- Two storage objects created with the same set of
+ ``nsILoadContextInfo``\ arguments are identical, containing the same
+ cache entries.
+
+- Two storage objects created with ``nsILoadContextInfo`` arguments that
+  differ in any way are strictly and completely distinct, and cache
+  entries in them do not overlap even when they have the same URIs.
+
+.. _nsICacheStorage:
+
+nsICacheStorage
+---------------
+
+- `nsICacheStorage.idl (searchfox) <https://searchfox.org/mozilla-central/source/netwerk/cache2/nsICacheStorage.idl>`_
+
+- Obtained from call to one of the ``*Storage`` methods on
+ :ref:`nsICacheStorageService <nsICacheStorageService>`.
+
+- Represents a distinct storage area (or scope) to put and get cache
+ entries mapped by URLs into and from it.
+
+- *Similarity with the old cache*\ : with some limitations, this
+  interface may be considered a mirror of ``nsICacheSession``, but it is
+  less generic and less prone to abuse.
+
+nsICacheEntryOpenCallback
+-------------------------
+
+- `nsICacheEntryOpenCallback.idl (searchfox) <https://searchfox.org/mozilla-central/source/netwerk/cache2/nsICacheEntryOpenCallback.idl>`_
+
+- The result of ``nsICacheStorage.asyncOpenURI`` is always and only
+ sent to callbacks on this interface.
+
+- These callbacks are guaranteed to be invoked when ``asyncOpenURI``
+  returns ``NS_OK``.
+
+.. note::
+
+   When the cache entry object is already present in memory or is opened
+   as "force-new" (a.k.a. "open-truncate"), this callback is invoked
+   sooner than the ``asyncOpenURI``\ method returns (i.e.
+   immediately); there is currently no way to opt out of this behavior
+   (see `bug
+   938186 <https://bugzilla.mozilla.org/show_bug.cgi?id=938186>`__).
+
+.. _nsICacheEntry:
+
+nsICacheEntry
+-------------
+
+- `nsICacheEntry.idl (searchfox) <https://searchfox.org/mozilla-central/source/netwerk/cache2/nsICacheEntry.idl>`_
+
+- Obtained asynchronously or pseudo-asynchronously by a call to
+ ``nsICacheStorage.asyncOpenURI``.
+
+- Provides access to a cached entry data and meta data for reading or
+ writing or in some cases both, see below.
+
+Lifetime of a new entry
+-----------------------
+
+- Such an entry is initially empty (no data or meta data is stored in it).
+
+- The ``aNew``\ argument in ``onCacheEntryAvailable`` is ``true`` for,
+  and only for, new entries.
+
+- Only one consumer (the so called "*writer*") may have such an entry
+ available (obtained via ``onCacheEntryAvailable``).
+
+- Other parallel openers of the same cache entry are blocked (wait) for
+ invocation of their ``onCacheEntryAvailable`` until one of the
+ following occurs:
+
+  - The *writer* simply throws the entry away: the next waiting opener
+    in line gets the entry again as "*new*", and the cycle repeats.
+
+ .. note::
+
+      This applies in general: a writer throwing away the cache entry
+      means a failure to write it, a new writer is looked for again,
+      and the cache entry remains empty (a.k.a. "new").
+
+ - The *writer* stored all necessary meta data in the cache entry and
+ called ``metaDataReady`` on it: other consumers now get the entry
+ and may examine and potentially modify the meta data and read the
+ data (if any) of the cache entry.
+ - When the *writer* has data (i.e. the response payload) to write to
+ the cache entry, it **must** open the output stream on it
+ **before** it calls ``metaDataReady``.
+
+- While the *writer* still keeps the cache entry and keeps the output
+  stream on it open, other consumers may open input streams on the
+  entry. The data the *writer* writes to the cache entry's output
+  stream becomes available to these readers immediately, even before
+  the output stream is closed. This is called :ref:`concurrent
+  read/write <Concurrent_read_and_write>`.
+
+.. _Concurrent_read_and_write:
+
+Concurrent read and write
+-------------------------
+
+The cache supports reading a cache entry data while it is still being
+written by the first consumer - the *writer*.
+This can only be engaged for resumable responses that don't need
+revalidation (`bug
+960902 <https://bugzilla.mozilla.org/show_bug.cgi?id=960902#c17>`__).
+The reason is that when the writer is interrupted (e.g. by external
+canceling of the loading channel), concurrent readers would not be able
+to reach the remaining unread content.
+
+.. note::
+
+ This could be improved by keeping the network load running and being
+ stored to the cache entry even after the writing channel has been
+ canceled.
+
+When the *writer* is interrupted, the first concurrent *reader* in line
+does a range request for the rest of the data - and that way becomes a
+new *writer*. The rest of the *readers* continue reading the content
+concurrently, since the output stream for the cache entry is again
+open and kept by the current *writer*.
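A minimal sketch of the concurrent read/write idea (illustrative JavaScript only, not the actual cache stream classes; all names are made up): a reader can consume whatever the writer has produced so far, even before the output stream is closed.

```javascript
// Toy model of concurrent read/write: readers see data as soon as the
// writer appends it, without waiting for the stream to be closed.
class ToyEntryData {
  constructor() {
    this.chunks = [];
    this.closed = false;
  }
  write(chunk) { this.chunks.push(chunk); }   // writer side
  close() { this.closed = true; }
  // Each reader keeps its own position and reads up to what is written.
  reader() {
    let pos = 0;
    const self = this;
    return {
      // Returns the next available chunk, or null if nothing new yet.
      read() { return pos < self.chunks.length ? self.chunks[pos++] : null; },
    };
  }
}
```

A reader that catches up with the writer simply gets `null` until more data arrives, mirroring how a concurrent input stream would block on the not-yet-written tail.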
+
+Lifetime of an existing entry with only a partial content
+---------------------------------------------------------
+
+- Such a cache entry is first examined in the
+ ``nsICacheEntryOpenCallback.onCacheEntryCheck`` callback, where it
+ has to be checked for completeness.
+- In this case, the ``Content-Length`` (or a different indicator) header
+  doesn't equal the data size reported by the cache entry.
+- The consumer then indicates the cache entry needs to be revalidated
+ by returning ``ENTRY_NEEDS_REVALIDATION``\ from
+ ``onCacheEntryCheck``.
+- This consumer, from the point of view of the cache, takes the role of
+  the *writer*.
+- Other parallel consumers, if any, are blocked until the *writer*
+ calls ``setValid`` on the cache entry.
+- The consumer is then responsible for validating the partial content
+  cache entry with the network server and attempting to load the rest
+  of the data.
+- When the server responds positively (in case of an HTTP server with a
+ 206 response code) the *writer* (in this order) opens the output
+ stream on the cache entry and calls ``setValid`` to unblock other
+ pending openers.
+- Concurrent read/write is engaged.
+
+Lifetime of an existing entry that doesn't pass server revalidation
+-------------------------------------------------------------------
+
+- Such a cache entry is first examined in the
+ ``nsICacheEntryOpenCallback.onCacheEntryCheck`` callback, where the
+ consumer finds out it must be revalidated with the server before use.
+- The consumer then indicates the cache entry needs to be revalidated
+ by returning ``ENTRY_NEEDS_REVALIDATION``\ from
+ ``onCacheEntryCheck``.
+- This consumer, from the point of view of the cache, takes the role of
+  the *writer*.
+- Other parallel consumers, if any, are blocked until the *writer*
+ calls ``setValid`` on the cache entry.
+- The consumer is then responsible for revalidating the cache entry
+  with the network server.
+- The server responds with a 200 response, which means the cached
+  content is no longer valid and a new version must be loaded from the
+  network.
+- The *writer* then calls ``recreate``\ on the cache entry. This
+ returns a new empty entry to write the meta data and data to, the
+ *writer* exchanges its cache entry by this new one and handles it as
+ a new one.
+- The *writer* then (in this order) fills the necessary meta data of
+ the cache entry, opens the output stream on it and calls
+ ``metaDataReady`` on it.
+- Any other pending openers, if any, are now given this new entry to
+ examine and read as an existing entry.
+
+Adding a new storage
+--------------------
+
+Should there be a need to add a new distinct storage for which the
+current scoping model would not be sufficient - use one of the two
+following ways:
+
+#. *[preferred]* Add a new ``<Your>Storage`` method on
+   :ref:`nsICacheStorageService <nsICacheStorageService>` and, if needed, give it arguments to
+   specify the storage scope even more. The implementation should only
+   need to enhance the context key generation and parsing code and
+   enhance current - or create new when needed - :ref:`nsICacheStorage <nsICacheStorage>`
+   implementations to carry any additional information down to the cache
+   service.
+#. *[*\ **not**\ *preferred]* Add a new argument to
+   :ref:`nsILoadContextInfo <nsILoadContextInfo>`; **be careful
+   here**, since some arguments on the context may not be known at
+   load time, which may lead to inter-context data leaking or
+   implementation problems. Adding more distinction to
+   :ref:`nsILoadContextInfo <nsILoadContextInfo>` also affects all existing storages, which may
+   not always be desirable.
+
+See context keying details for more information.
+
+Threading
+---------
+
+The cache API is fully thread-safe.
+
+The cache uses a single background thread where all IO operations,
+like opening, reading, writing and erasing, happen. Memory pool
+management, eviction and visiting loops also happen on this thread.
+
+The thread supports several priority levels. Dispatching to a level with
+a lower number is executed sooner than dispatching to higher-numbered
+levels; any long-running loop on a lower-priority level also yields to
+higher-priority levels, so that a scheduled deletion of 1000 files will
+not block opening cache entries.
+
+#. **OPEN_PRIORITY:** besides opening priority cache files, file
+   dooming also happens here to prevent races
+#. **READ_PRIORITY:** top level documents and head-blocking script cache
+   files are opened and read first
+#. **OPEN**
+#. **READ:** any normal-priority content, such as images, is opened and
+   read here
+#. **WRITE:** writes are processed last; we cache data in memory in
+   the meantime
+#. **MANAGEMENT:** level for the memory pool and CacheEntry background
+   operations
+#. **CLOSE:** file closing level
+#. **INDEX:** the index is rebuilt here
+#. **EVICT:** files exceeding the disk space consumption limit are
+   evicted here
+
+.. note::
+
+   Special case for eviction - when an eviction is scheduled on the
+   IO thread, all operations pending on the OPEN level are first merged
+   to the OPEN_PRIORITY level. The eviction preparation operation - i.e.
+   clearing of the internal IO state - is then put at the end of the
+   OPEN_PRIORITY level. All this happens atomically.
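The level scheme can be modeled as a set of queues where the lowest-numbered non-empty level always runs first (a simplified illustration only, not the actual cache IO thread implementation):

```javascript
// Simplified model of the prioritized IO thread: lower level numbers run
// first, so a long EVICT backlog cannot starve OPEN_PRIORITY work.
const LEVELS = ["OPEN_PRIORITY", "READ_PRIORITY", "OPEN", "READ",
                "WRITE", "MANAGEMENT", "CLOSE", "INDEX", "EVICT"];

class PrioritizedQueue {
  constructor() { this.queues = LEVELS.map(() => []); }
  dispatch(level, task) {
    this.queues[LEVELS.indexOf(level)].push(task);
  }
  // Run all pending tasks, always picking from the lowest non-empty level;
  // returns the task results in execution order.
  drain() {
    const order = [];
    for (;;) {
      const q = this.queues.find(q => q.length > 0);
      if (!q) break;
      order.push(q.shift()());
    }
    return order;
  }
}
```

Dispatching an eviction task before an open task still runs the open task first, which is the property the level scheme exists to guarantee.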
+
+Storage and entries scopes
+--------------------------
+
+A *scope key* string used to map the storage scope is based on the
+arguments of :ref:`nsILoadContextInfo <nsILoadContextInfo>`. The form is following (currently
+pending in `bug
+968593 <https://bugzilla.mozilla.org/show_bug.cgi?id=968593>`__):
+
+.. code::
+
+ a,b,i1009,p,
+
+- Regular expression: ``(.([-,]+)?,)*``
+- The first letter is an identifier, identifiers are to be
+ alphabetically sorted and always terminate with ','
+- a - when present the scope is belonging to an **anonymous** load
+- b - when present the scope is **in browser element** load
+- i - when present must have a decimal integer value that represents an
+ app ID the scope belongs to, otherwise there is no app (app ID is
+ considered ``0``)
+- p - when present the scope is of a **private browsing** load, this
+ never persists
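A hypothetical helper illustrating the key format described above (the flag names are made up for this sketch; the real key generation lives in the cache's context-key code):

```javascript
// Build a scope key from load-context flags, per the format above:
// single-letter identifiers, alphabetically sorted, each terminated by ','.
// Hypothetical helper for illustration only.
function buildScopeKey({ anonymous = false, inBrowser = false,
                         appId = 0, privateBrowsing = false } = {}) {
  let key = "";
  if (anonymous) key += "a,";
  if (inBrowser) key += "b,";
  if (appId !== 0) key += `i${appId},`;  // app ID 0 means "no app" and is omitted
  if (privateBrowsing) key += "p,";
  return key;
}
```

With all four flags set and app ID 1009, this reproduces the `a,b,i1009,p,` example above.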
+
+``CacheStorageService``\ keeps a global hashtable mapped by the *scope
+key*. Elements in this global hashtable are hashtables of cache entries.
+The cache entries are mapped by the concatenation of the Enhance ID and
+the URI passed to ``nsICacheStorage.asyncOpenURI``. When an entry is
+being looked up, the global hashtable is first searched using the
+*scope key*, which yields an entries hashtable. That entries hashtable
+is then searched using the <enhance-id:><uri> string. The elements in
+this hashtable are ``CacheEntry`` classes, see below.
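The two-level lookup can be modeled with nested maps (an illustration of the scheme only, not the real hashtable classes):

```javascript
// Global table: scope key -> (entries table: "<enhance-id:><uri>" -> entry).
// Illustration of the lookup scheme only; entry fields are invented.
const globalTable = new Map();

function lookupEntry(scopeKey, enhanceId, uri, create = false) {
  let entries = globalTable.get(scopeKey);
  if (!entries) {
    if (!create) return null;
    entries = new Map();
    globalTable.set(scopeKey, entries);
  }
  const entryKey = `${enhanceId}:${uri}`;
  let entry = entries.get(entryKey);
  if (!entry && create) {
    entry = { state: "NOTLOADED", consumers: 0 };
    entries.set(entryKey, entry);
  }
  return entry || null;
}
```

Note how the same URI under two different scope keys yields two completely independent entries, which is exactly the isolation property `nsILoadContextInfo` provides.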
+
+The hash tables keep a strong reference to ``CacheEntry`` objects. The
+only way to remove ``CacheEntry`` objects from memory is by exhausting the
+memory limit for :ref:`Intermediate_Memory_Caching <Intermediate_Memory_Caching>`, which triggers a background
+process of purging expired and then least-used entries from memory.
+Another way is to directly call the
+``nsICacheStorageService.purge``\ method. That method is also called
+automatically on the ``"memory-pressure"`` indication.
+
+Access to the hashtables is protected by a global lock. We also - in a
+thread-safe manner - count the number of consumers keeping a reference
+to each entry. The open callback doesn't actually give the consumer
+the ``CacheEntry`` object directly but a small wrapper class that
+manages the 'consumer reference counter' on its cache entry. These two
+mechanisms ensure thread-safe access and also make it impossible to have
+more than a single instance of a ``CacheEntry`` for a single
+<scope+enhanceID+URL> key.
+
+``CacheStorage``, implementing the :ref:`nsICacheStorage <nsICacheStorage>` interface, is
+forwarding all calls to internal methods of ``CacheStorageService``
+passing itself as an argument. ``CacheStorageService`` then generates
+the *scope key* using the ``nsILoadContextInfo`` of the storage. Note:
+CacheStorage keeps a thread-safe copy of ``nsILoadContextInfo`` passed
+to a ``*Storage`` method on ``nsICacheStorageService``.
+
+Invoking open callbacks
+-----------------------
+
+``CacheEntry``, implementing the ``nsICacheEntry`` interface, is
+responsible for managing the cache entry's internal state and for properly
+invoking the ``onCacheEntryCheck`` and ``onCacheEntryAvailable`` callbacks
+for all callers of ``nsICacheStorage.asyncOpenURI``.
+
+- Keeps a FIFO of all openers.
+- Keeps its internal state like NOTLOADED, LOADING, EMPTY, WRITING,
+ READY, REVALIDATING.
+- Keeps the number of consumers keeping a reference to it.
+- Refers a ``CacheFile`` object that holds actual data and meta data
+ and, when told to, persists it to the disk.
+
+The openers FIFO is an array of ``CacheEntry::Callback`` objects.
+``CacheEntry::Callback`` keeps a strong reference to the opener plus the
+opening flags. ``nsICacheStorage.asyncOpenURI`` forwards to
+``CacheEntry::AsyncOpen`` and triggers the following pseudo-code:
+
+**CacheStorage::AsyncOpenURI** - the API entry point:
+
+- globally atomic:
+
+  - look up the given ``CacheEntry`` in the ``CacheStorageService``
+    hash tables
+ - if not found: create a new one, add it to the proper hash table
+ and set its state to NOTLOADED
+ - consumer reference ++
+
+- call to `CacheEntry::AsyncOpen`
+- consumer reference --
+
+**CacheEntry::AsyncOpen** (entry atomic):
+
+- the opener is added to FIFO, consumer reference ++ (dropped back
+ after an opener is removed from the FIFO)
+- state == NOTLOADED:
+
+ - state = LOADING
+ - when OPEN_TRUNCATE flag was used:
+
+ - ``CacheFile`` is created as 'new', state = EMPTY
+
+ - otherwise:
+
+      - ``CacheFile`` is created and a load on it is started
+ - ``CacheEntry::OnFileReady`` notification is now expected
+
+- state == LOADING: just do nothing and exit
+- call to `CacheEntry::InvokeCallbacks`
+
+**CacheEntry::InvokeCallbacks** (entry atomic):
+
+- called on:
+
+ - a new opener has been added to the FIFO via an ``AsyncOpen`` call
+  - asynchronous result of a ``CacheFile`` open - ``CacheEntry::OnFileReady``
+ - the writer throws the entry away - ``CacheEntry::OnHandleClosed``
+ - the **output stream** of the entry has been **opened** or
+ **closed**
+ - ``metaDataReady``\ or ``setValid``\ on the entry has been called
+ - the entry has been **doomed**
+
+- state == EMPTY:
+
+  - on OPEN_READONLY flag use: onCacheEntryAvailable with
+    ``null``\ for the cache entry
+ - otherwise:
+
+ - state = WRITING
+ - opener is removed from the FIFO and remembered as the current
+ '*writer*'
+ - onCacheEntryAvailable with ``aNew = true``\ and this entry is
+ invoked (on the caller thread) for the *writer*
+
+- state == READY:
+
+ - onCacheEntryCheck with the entry is invoked on the first opener in
+ FIFO - on the caller thread if demanded
+ - result == RECHECK_AFTER_WRITE_FINISHED:
+
+ - opener is left in the FIFO with a flag ``RecheckAfterWrite``
+ - such openers are skipped until the output stream on the entry
+ is closed, then ``onCacheEntryCheck`` is re-invoked on them
+ - Note: here is a potential for endless looping when
+ RECHECK_AFTER_WRITE_FINISHED is abused
+
+ - result == ENTRY_NEEDS_REVALIDATION:
+
+ - state = REVALIDATING, this prevents invocation of any callback
+ until ``CacheEntry::SetValid`` is called
+ - continue as in state ENTRY_WANTED (just below)
+
+ - result == ENTRY_WANTED:
+
+ - consumer reference ++ (dropped back when the consumer releases
+ the entry)
+ - onCacheEntryAvailable is invoked on the opener with
+ ``aNew = false``\ and the entry
+ - opener is removed from the FIFO
+
+ - result == ENTRY_NOT_WANTED:
+
+ - ``onCacheEntryAvailable`` is invoked on the opener with
+ ``null``\ for the entry
+ - opener is removed from the FIFO
+
+- state == WRITING or REVALIDATING:
+
+ - do nothing and exit
+
+- any other value of state is unexpected here (assertion failure)
+- loop this process while there are openers in the FIFO
+
+**CacheEntry::OnFileReady** (entry atomic):
+
+- load result == failure or the file has not been found on disk (is
+ new): state = EMPTY
+- otherwise: state = READY since the cache file has been found and is
+ usable containing meta data and data of the entry
+- call to ``CacheEntry::InvokeCallbacks``
+
+**CacheEntry::OnHandleClosed** (entry atomic):
+
+- Called when any consumer throws the cache entry away
+- If the handle is not the handle given to the current *writer*, then
+ exit
+- state == WRITING: the writer failed to call ``metaDataReady`` on the
+ entry - state = EMPTY
+- state == REVALIDATING: the writer failed the re-validation process
+ and failed to call ``setValid`` on the entry - state = READY
+- call to ``CacheEntry::InvokeCallbacks``
+
+**All consumers release the reference:**
+
+- the entry may now be purged (removed) from memory when found expired
+ or least used on overrun of the :ref:`memory
+ pool <Intermediate_Memory_Caching>` limit
+- when this is a disk cache entry, its cached data chunks are released
+ from memory and only meta data is kept
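The transitions walked through above can be condensed into a toy state machine (illustrative JavaScript only; it ignores threading, dooming, read-only opens and consumer reference counting, and all names are made up):

```javascript
// Toy model of CacheEntry state transitions driven by the events above.
// Illustration only, not the real implementation.
class ToyCacheEntry {
  constructor() {
    this.state = "NOTLOADED";
    this.writer = null;
    this.fifo = [];          // queue of pending openers
  }
  asyncOpen(opener, { truncate = false } = {}) {
    this.fifo.push(opener);
    if (this.state === "NOTLOADED") {
      this.state = "LOADING";
      if (truncate) {
        this.onFileReady(false); // "open-truncate": file created as new -> EMPTY
        return;                  // onFileReady already invoked the callbacks
      }
      // otherwise a real load would deliver onFileReady(found) asynchronously
    }
    this.invokeCallbacks();
  }
  onFileReady(foundOnDisk) {
    this.state = foundOnDisk ? "READY" : "EMPTY";
    this.invokeCallbacks();
  }
  metaDataReady() {
    this.state = "READY";    // writer finished; entry now usable by readers
    this.writer = null;
    this.invokeCallbacks();
  }
  invokeCallbacks() {
    while (this.fifo.length > 0) {
      if (this.state === "EMPTY") {
        this.writer = this.fifo.shift();
        this.state = "WRITING";
        this.writer.onAvailable(this, /* aNew */ true);
      } else if (this.state === "READY") {
        this.fifo.shift().onAvailable(this, /* aNew */ false);
      } else {
        break; // LOADING / WRITING / REVALIDATING: openers stay queued
      }
    }
  }
}
```

Opening a truncated entry makes the first opener the *writer* (`aNew = true`); a second opener stays blocked until `metaDataReady`, then receives the entry with `aNew = false`, matching the walkthrough above.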
+
+.. _Intermediate_Memory_Caching:
+
+Intermediate memory caching
+---------------------------
+
+Intermediate memory caching of frequently used metadata (a.k.a. disk cache memory pool).
+
+For the disk cache entries we keep some of the most recent and most used
+cache entries' meta data in memory for immediate zero-thread-loop
+opening. The default size of this meta data memory pool is only 250kB
+and is controlled by the ``browser.cache.disk.metadata_memory_limit``
+preference. When the limit is exceeded, we purge (throw away) first
+**expired** and then **least used** entries to free up memory again.
+
+Only ``CacheEntry`` objects that are already loaded and filled with data
+and having the 'consumer reference == 0' (`bug
+942835 <https://bugzilla.mozilla.org/show_bug.cgi?id=942835#c3>`__) can
+be purged.
+
+The 'least used' entries are recognized by the lowest value of
+`frecency <https://wiki.mozilla.org/User:Jesse/NewFrecency?title=User:Jesse/NewFrecency>`__,
+which we re-compute for each entry on every access. The decay time is
+controlled by the ``browser.cache.frecency_half_life_hours`` preference
+and defaults to 6 hours. The best decay time will be based on results of
+`an experiment <https://bugzilla.mozilla.org/show_bug.cgi?id=986728>`__.
+
+The memory pool is represented by two lists (strong referring ordered
+arrays) of ``CacheEntry`` objects:
+
+#. Sorted by expiration time (which defaults to 0xFFFFFFFF)
+#. Sorted by frecency (defaults to 0)
+
+We have two such pools, one for memory-only entries actually
+representing the memory-only cache and one for disk cache entries for
+which we only keep the meta data. Each pool has its own limit
+checking - the memory cache pool is controlled by
+``browser.cache.memory.capacity``, and the disk entries pool is already
+described above. The pools can be accessed and modified only on the cache
+background thread.
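As a rough sketch of the purge policy (assuming a simple exponential decay for frecency with the default 6-hour half-life; the real frecency formula and pool code differ, and all names here are invented):

```javascript
// Sketch of the memory-pool purge policy: drop expired entries first,
// then the least-used (lowest frecency) ones, until under the limit.
const HALF_LIFE_MS = 6 * 60 * 60 * 1000; // browser.cache.frecency_half_life_hours

// Assumed decay model: the frecency value halves every half-life
// since the entry's last access.
function decayedFrecency(entry, now) {
  return entry.frecency * Math.pow(0.5, (now - entry.lastAccess) / HALF_LIFE_MS);
}

function purge(pool, limit, now) {
  // 1) expired entries are purged first
  const kept = pool.filter(e => e.expires > now);
  // 2) then least-used entries, lowest decayed frecency first
  kept.sort((a, b) => decayedFrecency(a, now) - decayedFrecency(b, now));
  while (kept.reduce((sum, e) => sum + e.size, 0) > limit) kept.shift();
  return kept;
}
```

Given one expired, one rarely used, and one frequently used entry with a limit that only fits one of them, the frequently used entry is the sole survivor.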
diff --git a/netwerk/docs/captive_portals.md b/netwerk/docs/captive_portals.md
new file mode 100644
index 0000000000..971ea8313e
--- /dev/null
+++ b/netwerk/docs/captive_portals.md
@@ -0,0 +1,80 @@
+# Captive portal detection
+
+## What are captive portals?
+A captive portal is what we call a network that requires your action before it allows you to connect to the Internet. This action could be to log in using a username and password, or just to accept the network's terms and conditions.
+
+There are many different ways in which a captive portal network might attempt to direct you to the captive portal page:
+- A DNS resolver that always resolves to the captive portal server IP
+- A gateway that intercepts all HTTP requests and responds with a 302/307 redirect to the captive portal page
+- A gateway that rewrites all/specific HTTP responses
+ - Changing their content to be that of the captive portal page
+  - Injecting JavaScript or other content into the page (some ISPs do this when the user hasn't paid their internet bill)
+- HTTPS requests are handled differently by captive portals:
+ - They might time out.
+ - They might present the wrong certificate in order to redirect to the captive portal.
+ - They might not be intercepted at all.
+
+## Implementation
+The [CaptivePortalService](https://searchfox.org/mozilla-central/source/netwerk/base/CaptivePortalService.h) controls when the checks are performed. Consumers can check the state on [nsICaptivePortalService](https://searchfox.org/mozilla-central/source/netwerk/base/nsICaptivePortalService.idl) to determine the state of the captive portal.
+- UNKNOWN
+ - The checks have not been performed or have timed out.
+- NOT_CAPTIVE
+ - No captive portal interference was detected.
+- UNLOCKED_PORTAL
+ - A captive portal was previously detected, but has been unlocked by the user. This state might cause the browser to increase the frequency of the captive portal checks.
+- LOCKED_PORTAL
+ - A captive portal was detected, and internet connectivity is not currently available.
+ - A [captive portal notification bar](https://searchfox.org/mozilla-central/source/browser/base/content/browser-captivePortal.js) might be displayed to the user.
+
+The Captive portal service uses [CaptiveDetect.sys.mjs](https://searchfox.org/mozilla-central/source/toolkit/components/captivedetect/CaptiveDetect.jsm) to perform the checks, which in turn uses XMLHttpRequest.
+This request needs to be exempted from HTTPS upgrades, DNS over HTTPS, and many new browser features in order to function as expected.
+
+```{note}
+
+CaptiveDetect.sys.mjs would benefit from being rewritten in Rust or C++.
+This is because the API of XMLHttpRequest makes it difficult to distinguish between different types of network errors such as redirect loops vs certificate errors.
+
+Also, we don't currently allow any redirects to take place, even if the redirected resource acts as a transparent proxy (doesn't modify the response). This sometimes causes issues for users on networks which employ such transparent proxies.
+
+```
+
+## Preferences
+```js
+
+pref("network.captive-portal-service.enabled", false); // controls if the checking is performed
+pref("network.captive-portal-service.minInterval", 60000); // 60 seconds
+pref("network.captive-portal-service.maxInterval", 1500000); // 25 minutes
+// Every 10 checks, the delay is increased by a factor of 5
+pref("network.captive-portal-service.backoffFactor", "5.0");
+
+// The URL used to perform the captive portal checks
+pref("captivedetect.canonicalURL", "http://detectportal.firefox.com/canonical.html");
+// The response we expect to receive back for the canonical URL
+// It contains valid HTML that when loaded in a browser redirects the user
+// to a support page explaining captive portals.
+pref("captivedetect.canonicalContent", "<meta http-equiv=\"refresh\" content=\"0;url=https://support.mozilla.org/kb/captive-portal\"/>");
+
+// The timeout for each request.
+pref("captivedetect.maxWaitingTime", 5000);
+// time to retrigger a new request
+pref("captivedetect.pollingTime", 3000);
+// Number of times to retry the captive portal check if there is an error or timeout.
+pref("captivedetect.maxRetryCount", 5);
+
+```
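Given the prefs above, the effective recheck delay can be modeled as follows (an illustrative reading of the backoff comment in the prefs, not the actual scheduler code):

```javascript
// Model of the captive-portal recheck backoff: start at minInterval,
// multiply by backoffFactor every 10 checks, and cap at maxInterval.
// Illustration only, derived from the pref comments above.
function checkInterval(checkCount, {
  minInterval = 60000,    // network.captive-portal-service.minInterval
  maxInterval = 1500000,  // network.captive-portal-service.maxInterval
  backoffFactor = 5.0,    // network.captive-portal-service.backoffFactor
} = {}) {
  const steps = Math.floor(checkCount / 10);
  return Math.min(minInterval * Math.pow(backoffFactor, steps), maxInterval);
}
```

So the delay grows from 60 seconds to 5 minutes after ten checks and is clamped at the 25-minute ceiling thereafter.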
+
+
+# Connectivity checking
+We use a mechanism similar to captive portal checking to verify whether the browser has internet connectivity. The [NetworkConnectivityService](https://searchfox.org/mozilla-central/source/netwerk/base/NetworkConnectivityService.h) will periodically connect to the same URL we use for captive portal detection, but will restrict each request to either IPv4 or IPv6. Based on which responses succeed, we can infer whether Firefox has IPv4 and/or IPv6 connectivity. We also perform DNS queries to check if the system has an IPv4/IPv6-capable DNS resolver.
+
+## Preferences
+
+```js
+
+pref("network.connectivity-service.enabled", true);
+pref("network.connectivity-service.DNSv4.domain", "example.org");
+pref("network.connectivity-service.DNSv6.domain", "example.org");
+pref("network.connectivity-service.IPv4.url", "http://detectportal.firefox.com/success.txt?ipv4");
+pref("network.connectivity-service.IPv6.url", "http://detectportal.firefox.com/success.txt?ipv6");
+
+```
diff --git a/netwerk/docs/dns/dns-over-https-trr.md b/netwerk/docs/dns/dns-over-https-trr.md
new file mode 100644
index 0000000000..dc48edc967
--- /dev/null
+++ b/netwerk/docs/dns/dns-over-https-trr.md
@@ -0,0 +1,158 @@
+---
+title: DNS over HTTPS (Trusted Recursive Resolver)
+---
+
+## Terminology
+
+**DNS-over-HTTPS (DoH)** allows DNS to be resolved with enhanced
+privacy, secure transfers and comparable performance. The protocol is
+described in [RFC 8484](https://tools.ietf.org/html/rfc8484).
+
+**Trusted Recursive Resolver (TRR)** is the name of Firefox's
+implementation of the protocol and the
+[policy](https://wiki.mozilla.org/Security/DOH-resolver-policy) that
+ensures only privacy-respecting DoH providers are recommended by
+Firefox.
+
+On this page we will use DoH when referring to the protocol, and TRR
+when referring to the implementation.
+
+**Unencrypted DNS (Do53)** is the regular way most programs resolve DNS
+names. This is usually done by the operating system by sending an
+unencrypted packet to the DNS server that normally listens on port 53.
+
+## DoH Rollout
+
+**DoH Rollout** refers to the frontend code that decides whether TRR
+will be enabled automatically for users in the [rollout
+population](https://support.mozilla.org/kb/firefox-dns-over-https#w_about-the-us-rollout-of-dns-over-https).
+
+The functioning of this module is described
+[here](https://wiki.mozilla.org/Security/DNS_Over_HTTPS).
+
+The code lives in
+[browser/components/doh](https://searchfox.org/mozilla-central/source/browser/components/doh).
+
+## Implementation
+
+When enabled, TRR may work in two modes, TRR-first (2) and TRR-only (3).
+These are controlled by the **network.trr.mode** or **doh-rollout.mode**
+prefs. The difference is that when a DoH request fails in TRR-first
+mode, we then fall back to **Do53**.
+
+For TRR-first mode, we have a strict-fallback setting which can be
+enabled by setting network.trr.strict\_native\_fallback to true. With
+this, while we will still completely skip TRR for certain requests (like
+captive portal detection, bootstrapping the TRR provider, etc.), we will
+only fall back to **Do53** after a TRR failure for one of three possible
+reasons:
+1. We detected, via Confirmation, that TRR is currently out of
+service on the network. This could mean the provider is down or blocked.
+2. The address successfully resolved via TRR could not be connected to.
+3. TRR result is NXDOMAIN.
+
+When a DNS resolution doesn't use TRR we will normally preserve that data in the form of a _TRRSkippedReason_. A detailed explanation of each one is available [here](trr-skip-reasons).
+
+In other cases, instead of falling back, we will trigger a fresh
+Confirmation (which will start us on a fresh connection to the provider)
+and retry the lookup with TRR again. We only retry once.
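The strict-fallback decision described above can be sketched as follows. This is a simplified Python illustration, not the actual C++ implementation; the names `next_step` and `STRICT_FALLBACK_REASONS`, and the behavior after the single retry has been used, are assumptions.

```python
# Hypothetical sketch of the strict TRR-first fallback decision.
# Only the three listed failure reasons fall back to Do53 directly;
# anything else triggers a fresh Confirmation and a single TRR retry.
STRICT_FALLBACK_REASONS = {
    "confirmation_failed",   # Confirmation says TRR is out of service
    "address_unreachable",   # TRR-resolved address could not be connected to
    "nxdomain",              # the TRR result is NXDOMAIN
}

def next_step(failure_reason, already_retried):
    """Decide what to do after a TRR failure in strict TRR-first mode."""
    if failure_reason in STRICT_FALLBACK_REASONS:
        return "fallback_to_do53"
    if not already_retried:
        # trigger a fresh Confirmation and retry the lookup over TRR once
        return "reconfirm_and_retry_trr"
    return "fallback_to_do53"  # assumption: give up on TRR after the retry
```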
+
+DNS name resolutions are performed in _nsHostResolver::ResolveHost_. If a
+cached response for the request could not be found,
+_nsHostResolver::NameLookup_ will trigger either a DoH or a Do53 request.
+First it checks the effective TRR mode of the request, as requests
+could have a different mode from the global one. If the request may use
+TRR, then we dispatch a request in _nsHostResolver::TrrLookup_. Since we
+usually resolve both IPv4 and IPv6 names, a **TRRQuery** object is
+created to perform and combine both responses.
+
+Once done, _nsHostResolver::CompleteLookup_ is called. If the DoH server
+returned a valid response we use it, otherwise we report a failure in
+TRR-only mode, or try Do53 in TRR-first mode.
+
+**TRRService** controls the global state and settings of the feature.
+Each individual request is performed by the **TRR** class.
+
+Since HTTP channels in Firefox normally work on the main thread, TRR
+uses a special implementation called **TRRServiceChannel** to avoid
+congestion on the main thread.
+
+## Dynamic Blocklist
+
+In order to improve performance TRR service manages a dynamic blocklist
+for host names that can\'t be resolved with DoH but work with the native
+resolver. Blocklisted entries will not be retried over DoH for one
+minute (see the _network.trr.temp\_blocklist\_duration\_sec_
+pref). When a domain is added to the blocklist, we also check if there
+is an NS record for its parent domain, in which case we add that to the
+blocklist. This feature is controlled by the
+_network.trr.temp\_blocklist_ pref.
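The expiring-entry behavior can be sketched with a small Python class. Class and method names are hypothetical; the real bookkeeping lives in TRRService.

```python
import time

class TempBlocklist:
    """Illustrative sketch of the dynamic TRR blocklist: entries expire
    after `duration` seconds (cf. network.trr.temp_blocklist_duration_sec)."""

    def __init__(self, duration=60.0):
        self.duration = duration
        self._entries = {}  # host -> expiry timestamp

    def add(self, host, now=None):
        now = time.monotonic() if now is None else now
        self._entries[host] = now + self.duration

    def is_blocked(self, host, now=None):
        now = time.monotonic() if now is None else now
        expiry = self._entries.get(host)
        if expiry is None:
            return False
        if now >= expiry:
            del self._entries[host]  # entry expired: retry DoH again
            return False
        return True
```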
+
+## TRR confirmation
+
+TRR requests normally have a 1.5 second timeout. If for some reason we
+do not get a response in that time we fall back to Do53. To avoid this
+delay for all requests when the DoH server is not accessible, we perform
+a confirmation check. If the check fails, we conclude that the server is
+not usable and will use Do53 directly. The confirmation check is retried
+periodically to check if the TRR connection is functional again.
+
+The confirmation state has one of the following values:
+
+- CONFIRM\_OFF: TRR is turned off, so the service is not active.
+- CONFIRM\_TRYING\_OK: TRR is on, but we are not sure yet if the
+ DoH server is accessible. We optimistically try to resolve via
+ DoH and fall back to Do53 after 1.5 seconds. While in this state
+ the TRRService will be performing NS record requests to the DoH
+ server as a connectivity check. Depending on a successful
+ response it will either transition to the CONFIRM\_OK or
+ CONFIRM\_FAILED state.
+- CONFIRM\_OK: TRR is on and we have confirmed that the DoH server
+ is behaving adequately. Will use TRR for all requests (and fall
+ back to Do53 in case of timeout, NXDOMAIN, etc).
+- CONFIRM\_FAILED: TRR is on, but the DoH server is not
+ accessible. Either we have no network connectivity, or the
+ server is down. We don\'t perform DoH requests in this state
+ because they are sure to fail.
+- CONFIRM\_TRYING\_FAILED: This is equivalent to CONFIRM\_FAILED,
+ but we periodically enter this state when rechecking if the DoH
+ server is accessible.
+- CONFIRM\_DISABLED: We are in this state if the browser is in
+ TRR-only mode, or if the confirmation was explicitly disabled
+ via pref.
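The states above can be sketched as a minimal transition table. The event names (`ns_lookup_ok`, `retry_timer`, `network_change`) are hypothetical labels for illustration; the real logic lives in _TRRService::HandleConfirmationEvent_.

```python
# Toy transition table for the TRR confirmation states listed above.
# Unknown (state, event) pairs leave the state unchanged.
TRANSITIONS = {
    ("CONFIRM_TRYING_OK", "ns_lookup_ok"): "CONFIRM_OK",
    ("CONFIRM_TRYING_OK", "ns_lookup_failed"): "CONFIRM_FAILED",
    ("CONFIRM_FAILED", "retry_timer"): "CONFIRM_TRYING_FAILED",
    ("CONFIRM_TRYING_FAILED", "ns_lookup_ok"): "CONFIRM_OK",
    ("CONFIRM_TRYING_FAILED", "ns_lookup_failed"): "CONFIRM_FAILED",
    ("CONFIRM_OK", "network_change"): "CONFIRM_TRYING_OK",
}

def step(state, event):
    """Return the next confirmation state for a given event."""
    return TRANSITIONS.get((state, event), state)
```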
+
+The state machine for the confirmation is defined in the
+_HandleConfirmationEvent_ method in _TRRService.cpp_.
+
+If strict fallback mode is enabled, Confirmation will set a flag to
+refresh our connection to the provider.
+
+## Excluded domains
+
+Some domains will never be resolved via TRR. This includes:
+
+- domains listed in the **network.trr.builtin-excluded-domains** pref
+(normally domains that are equal or end in *localhost* or *local*)
+- domains listed in the **network.trr.excluded-domains** pref (chosen by the user)
+- domains that are subdomains of the network\'s DNS suffix
+(for example if the network has the **lan** suffix, domains such as **computer.lan** will not use TRR)
+- requests made by Firefox to check for the existence of a captive-portal
+- requests made by Firefox to check the network\'s IPv6 capabilities
+- domains listed in _/etc/hosts_
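The suffix-matching part of the exclusion check can be sketched as below. This is an illustrative Python helper, not the actual implementation, and it only covers the pref-based and DNS-suffix cases from the list above.

```python
def is_excluded(host, excluded_domains, dns_suffixes):
    """Return True when `host` equals or is a subdomain of any excluded
    domain or of the network's DNS suffix (illustrative sketch only)."""
    host = host.rstrip(".").lower()
    for domain in list(excluded_domains) + list(dns_suffixes):
        domain = domain.rstrip(".").lower()
        if host == domain or host.endswith("." + domain):
            return True
    return False
```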
+
+## Steering
+
+
+A small set of TRR providers are only available on certain networks.
+Detection is performed in DoHHeuristics.jsm followed by a call to
+_TRRService::SetDetectedURI_. This causes Firefox to use the
+network-specific TRR provider until a network change occurs.
+
+## User choice
+
+The TRR feature is designed to prioritize user choice before user agent
+decisions. That means the user may explicitly disable TRR by setting
+**network.trr.mode** to **5** (TRR-disabled), and that
+_doh-rollout_ will not overwrite user settings. Changes to
+the TRR URL or TRR mode by the user will disable heuristics and use the
+user-configured settings.
diff --git a/netwerk/docs/dns/trr-skip-reasons.md b/netwerk/docs/dns/trr-skip-reasons.md
new file mode 100644
index 0000000000..dbbb4e3336
--- /dev/null
+++ b/netwerk/docs/dns/trr-skip-reasons.md
@@ -0,0 +1,324 @@
+# TRRSkippedReasons
+
+These values are defined in [nsITRRSkipReason.idl](https://searchfox.org/mozilla-central/source/netwerk/dns/nsITRRSkipReason.idl) and are recorded on _nsHostRecord_ for each resolution.
+We normally use them for telemetry or to determine the cause of a TRR failure.
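As a rough illustration, the reasons behave like an integer enum, and `TRR_UNSET` should never reach telemetry. The Python sketch below lists only a few values; the authoritative definitions are in the IDL file, and `telemetry_label` is a hypothetical helper, not a real API.

```python
from enum import IntEnum

class TRRSkippedReason(IntEnum):
    """Excerpt of the values documented below (see nsITRRSkipReason.idl
    for the full list)."""
    TRR_UNSET = 0
    TRR_OK = 1
    TRR_NO_GSERVICE = 2
    TRR_PARENTAL_CONTROL = 3
    TRR_OFF_EXPLICIT = 4
    TRR_FAILED = 7
    TRR_TIMEOUT = 11
    TRR_NXDOMAIN = 30

def telemetry_label(reason):
    # Reporting TRR_UNSET would indicate a bug in the code.
    assert reason != TRRSkippedReason.TRR_UNSET, "TRR_UNSET reported"
    return reason.name
```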
+
+
+## TRR_UNSET
+
+Value: 0
+
+This reason is set on _nsHostRecord_ before we attempt to resolve the domain.
+Normally we should not report this value into telemetry; if we do, that means there's a bug in the code.
+
+
+## TRR_OK
+
+Value: 1
+
+This reason is set when we got a positive TRR result. That means we used TRR for the DNS resolution, the HTTPS request got a 200 response, the response was properly decoded as a DNS packet and that packet contained relevant answers.
+
+
+## TRR_NO_GSERVICE
+
+Value: 2
+
+This reason is only set if there is no TRR service instance when trying to compute the TRR mode for a request. It indicates a bug in the implementation.
+
+
+## TRR_PARENTAL_CONTROL
+
+Value: 3
+
+This reason is set when we have detected system level parental controls are enabled. In this case we will not be using TRR for any requests.
+
+
+## TRR_OFF_EXPLICIT
+
+Value: 4
+
+This reason is set when DNS over HTTPS has been explicitly disabled by the user (by setting _network.trr.mode_ to _5_). In this case we will not be using TRR for any requests.
+
+
+## TRR_REQ_MODE_DISABLED
+
+Value: 5
+
+The request had the _nsIRequest::TRRMode_ set to _TRR\_DISABLED\_MODE_. That is usually the case for requests that should not use TRR, such as the TRRServiceChannel, captive portal and connectivity checks, DoHHeuristics checks, requests originating from PAC scripts, etc.
+
+
+## TRR_MODE_NOT_ENABLED
+
+Value: 6
+
+This reason is set when the TRRService is not enabled. The only way we would end up reporting this to telemetry would be if the TRRService was enabled when the request was dispatched, but by the time it was processed the TRRService was disabled.
+
+
+## TRR_FAILED
+
+Value: 7
+
+The TRR request failed for an unknown reason.
+
+
+## TRR_MODE_UNHANDLED_DEFAULT
+
+Value: 8
+
+This reason is no longer used. This value may be recycled to mean something else in the future.
+
+
+## TRR_MODE_UNHANDLED_DISABLED
+
+Value: 9
+
+This reason is no longer used. This value may be recycled to mean something else in the future.
+
+
+## TRR_DISABLED_FLAG
+
+Value: 10
+
+This reason is used when retrying failed connections, sync DNS resolves on the main thread, or requests coming from webextensions that choose to skip TRR.
+
+
+## TRR_TIMEOUT
+
+Value: 11
+
+The TRR request timed out.
+
+## TRR_CHANNEL_DNS_FAIL
+
+Value: 12
+
+This reason is set when we fail to resolve the name of the DNS over HTTPS server.
+
+
+## TRR_IS_OFFLINE
+
+Value: 13
+
+This reason is recorded when the TRR request fails and the browser is offline (no active interfaces).
+
+
+## TRR_NOT_CONFIRMED
+
+Value: 14
+
+This reason is recorded when the TRR Service is not yet confirmed to work. Confirmation is only enabled when _Do53_ fallback is enabled.
+
+
+## TRR_DID_NOT_MAKE_QUERY
+
+Value: 15
+
+This reason is set when _TrrLookup_ exited without doing a TRR query. It may be set during shutdown, or may indicate an implementation bug.
+
+
+## TRR_UNKNOWN_CHANNEL_FAILURE
+
+Value: 16
+
+The TRR request failed with an unknown channel failure reason.
+
+
+## TRR_HOST_BLOCKED_TEMPORARY
+
+Value: 17
+
+The reason is recorded when the host is temporarily blocked. This happens when a previous attempt to resolve it with TRR failed, but fallback to _Do53_ succeeded.
+
+
+## TRR_SEND_FAILED
+
+Value: 18
+
+The call to TRR::SendHTTPRequest failed.
+
+
+## TRR_NET_RESET
+
+Value: 19
+
+The request failed because the connection to the TRR server was reset.
+
+
+## TRR_NET_TIMEOUT
+
+Value: 20
+
+The request failed because the connection to the TRR server timed out.
+
+
+## TRR_NET_REFUSED
+
+Value: 21
+
+The request failed because the connection to the TRR server was refused.
+
+
+## TRR_NET_INTERRUPT
+
+Value: 22
+
+The request failed because the connection to the TRR server was interrupted.
+
+
+## TRR_NET_INADEQ_SEQURITY
+
+Value: 23
+
+The request failed because the connection to the TRR server used an invalid TLS configuration.
+
+
+## TRR_NO_ANSWERS
+
+Value: 24
+
+The TRR request succeeded but the encoded DNS packet contained no answers.
+
+
+## TRR_DECODE_FAILED
+
+Value: 25
+
+The TRR request succeeded but decoding the DNS packet failed.
+
+
+## TRR_EXCLUDED
+
+Value: 26
+
+This reason is set when the domain being resolved is excluded from TRR, either via the _network.trr.excluded-domains_ pref or because it was covered by the DNS Suffix of the user's network.
+
+
+## TRR_SERVER_RESPONSE_ERR
+
+Value: 27
+
+The server responded with a non-200 code.
+
+
+## TRR_RCODE_FAIL
+
+Value: 28
+
+The decoded DNS packet contains an rcode that is different from NOERROR.
+
+
+## TRR_NO_CONNECTIVITY
+
+Value: 29
+
+This reason is set when the browser has no connectivity.
+
+
+## TRR_NXDOMAIN
+
+Value: 30
+
+This reason is set when the DNS response contains NXDOMAIN rcode (0x03).
+
+
+## TRR_REQ_CANCELLED
+
+Value: 31
+
+This reason is set when the request was cancelled prior to completion.
+
+## ODOH_KEY_NOT_USABLE
+
+Value: 32
+
+This reason is set when we don't have a valid ODoHConfig to use.
+
+## ODOH_UPDATE_KEY_FAILED
+
+Value: 33
+
+This reason is set when we failed to update the ODoHConfigs.
+
+## ODOH_KEY_NOT_AVAILABLE
+
+Value: 34
+
+This reason is set when ODoH requests timeout because of no key.
+
+## ODOH_ENCRYPTION_FAILED
+
+Value: 35
+
+This reason is set when we failed to encrypt DNS packets.
+
+## ODOH_DECRYPTION_FAILED
+
+Value: 36
+
+This reason is set when we failed to decrypt DNS packets.
+
+## TRR_HEURISTIC_TRIPPED_GOOGLE_SAFESEARCH
+
+Value: 37
+
+This reason is set when the google safesearch heuristic was tripped.
+
+## TRR_HEURISTIC_TRIPPED_YOUTUBE_SAFESEARCH
+
+Value: 38
+
+This reason is set when the youtube safesearch heuristic was tripped.
+
+## TRR_HEURISTIC_TRIPPED_ZSCALER_CANARY
+
+Value: 39
+
+This reason is set when the zscaler canary heuristic was tripped.
+
+## TRR_HEURISTIC_TRIPPED_CANARY
+
+Value: 40
+
+This reason is set when the global canary heuristic was tripped.
+
+## TRR_HEURISTIC_TRIPPED_MODIFIED_ROOTS
+
+Value: 41
+
+This reason is set when the modified roots (enterprise_roots cert pref) heuristic was tripped.
+
+## TRR_HEURISTIC_TRIPPED_PARENTAL_CONTROLS
+
+Value: 42
+
+This reason is set when parental controls are detected.
+
+## TRR_HEURISTIC_TRIPPED_THIRD_PARTY_ROOTS
+
+Value: 43
+
+This reason is set when the third party roots heuristic was tripped.
+
+## TRR_HEURISTIC_TRIPPED_ENTERPRISE_POLICY
+
+Value: 44
+
+This reason is set when enterprise policy heuristic was tripped.
+
+## TRR_HEURISTIC_TRIPPED_VPN
+
+Value: 45
+
+This reason is set when the heuristic was tripped by a vpn being detected.
+
+## TRR_HEURISTIC_TRIPPED_PROXY
+
+Value: 46
+
+This reason is set when the heuristic was tripped by a proxy being detected.
+
+## TRR_HEURISTIC_TRIPPED_NRPT
+
+Value: 47
+
+This reason is set when the heuristic was tripped by an NRPT (Name Resolution Policy Table) being detected.
diff --git a/netwerk/docs/early_hints.md b/netwerk/docs/early_hints.md
new file mode 100644
index 0000000000..6390365072
--- /dev/null
+++ b/netwerk/docs/early_hints.md
@@ -0,0 +1,157 @@
+# Early Hints
+
+[Early Hints](https://html.spec.whatwg.org/multipage/semantics.html#early-hints) is an informational HTTP status code allowing a server to send headers likely to appear in the final response before sending the final response.
+This is used to send [Link headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Link) to start `preconnect`s and `preload`s.
+
+This document is about the implementation details of Early Hints in Firefox.
+We focus on the `preload` feature, as it is the feature that interacts with the most classes.
+For Early Hint `preconnect` the Early Hints specific code is rather small and only touches the code path on [`103 Early Hints` responses](#early-hints-response-on-main-document-load).
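To make the data concrete, the sketch below splits a `Link` header value, as a server might send it in a `103 Early Hints` response, into `(url, params)` pairs. This is a deliberately minimal Python illustration; Firefox's real parser handles many more edge cases (quoting, commas inside URLs, etc.).

```python
def parse_link_header(value):
    """Split a Link header into (url, params) pairs. Illustrative only:
    assumes no commas or semicolons inside URLs or quoted values."""
    links = []
    for part in value.split(","):
        segments = [s.strip() for s in part.split(";")]
        url = segments[0].strip("<>")
        params = {}
        for seg in segments[1:]:
            key, _, val = seg.partition("=")
            params[key.strip()] = val.strip().strip('"')
        links.append((url, params))
    return links
```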
+
+```{mermaid}
+sequenceDiagram
+ participant F as Firefox
+ participant S as Server
+ autonumber
+ F->>+S: Main document Request: GET /
+ S-->>F: 103 Early Hints Response
+ note over F: Firefox starts<br/>hinted requests
+ note over S: Server Think Time
+ S->>-F: 200 OK final response
+```
+
+Early Hints benefits originate from leveraging Server Think Time.
+The duration between response (2) and (3) arriving is the theoretical maximal benefit Early Hints can have.
+The server think time can originate from creating dynamic content by interacting with databases or more commonly when proxying the request to a different server.
+
+```{contents}
+:local:
+:depth: 1
+```
+
+## `103 Early Hints` Response on Main Document Load
+
+When receiving a `103 Early Hints` response, the `nsHttpChannel` handling the main document load forwards the `Link` headers (and a few other headers) from the `103 Early Hints` response to the `EarlyHintsService`.
+When the `DocumentLoadListener` receives a cross-origin redirect, it cancels all preloads in progress.
+
+```{note}
+Only the first `103 Early Hints` response is processed.
+The remaining `103 Early Hints` responses are ignored, even after same-origin redirects.
+When we receive cross origin redirects, all ongoing Early Hint preload requests are cancelled.
+```
+
+```{mermaid}
+graph TD
+ MainChannel[nsHttpChannel]
+ EHS[EarlyHintsService]
+ EHC[EarlyHintPreconnect]
+ EHP[EarlyHintPreloader]
+ PreloadChannel[nsIChannel]
+ PCL[ParentChannelListener]
+
+ MainChannel
+ -- "nsIEarlyHintsObserver::EarlyHint(LinkHeader, Csp, RefererPolicy)<br/>via DocumentLoadListener"
+ --> EHS
+ EHS
+ -- "rel=preconnect"
+ --> EHC
+ EHS -->|"rel=preload<br/>via OngoingEarlyHints"| EHP
+ EHP -->|"CSP checks then AsyncOpen"| PreloadChannel
+ PreloadChannel -->|mListener| PCL
+ PCL -->|mNextListener| EHP
+```
+
+## Main document Final Response
+
+On the final response the `DocumentLoadListener` retrieves the list of link headers from the `EarlyHintsService`.
+As a side effect, the `EarlyHintPreloader` also starts a 10s timer to cancel itself if the content process doesn't connect to the `EarlyHintPreloader`.
+The timeout shouldn't occur in normal circumstances, because the content process connects to that `EarlyHintPreloader` immediately.
+The timeout currently only occurs when:
+
+- the main response has different CSP requirements disallowing the load ([Bug 1815884](https://bugzilla.mozilla.org/show_bug.cgi?id=1815884)),
+- the main response has COEP headers disallowing the load ([Bug 1806403](https://bugzilla.mozilla.org/show_bug.cgi?id=1806403)),
+- the user reloads a website and the image/css is already in the image/css-cache ([Bug 1815884](https://bugzilla.mozilla.org/show_bug.cgi?id=1815884)),
+- the tab gets closed before the connect happens or possibly other corner cases.
+
+```{mermaid}
+graph TD
+ DLL[DocumentLoadListener]
+ EHP[EarlyHintPreloader]
+ PS[PreloadService]
+ EHR[EarlyHintsRegistrar]
+ Timer[nsITimer]
+
+ DLL
+ -- "(1)<br/>GetAllPreloads(newCspRequirements)<br/> via EarlyHintsService and OngoingEarlyHints"
+ --> EHP
+ EHP -->|"Start timer to cancel on<br/>ParentConnectTimeout<br/>after 10s"| Timer
+ EHP -->|"Register(earlyHintPreloaderId)"| EHR
+ Timer -->|"RefPtr"| EHP
+ EHR -->|"RefPtr"| EHP
+ DLL
+ -- "(2)<br/>Send to content process via IPC<br/>List of Links+earlyHintPreloaderId"
+ --> PS
+```
+
+## Preload request from Content process
+
+The child process parses Link headers from the `103 Early Hints` response first and then from the main document response.
+Preloads from the Link headers of the `103 Early Hints` response have an `earlyHintPreloaderId` assigned to them.
+The Preloader sets this `earlyHintPreloaderId` on the channel doing the preload before calling `AsyncOpen`.
+The `HttpChannelParent` looks for the `earlyHintPreloaderId` in `AsyncOpen` and connects to the `EarlyHintPreloader` via the `EarlyHintRegistrar` instead of doing a network request.
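The registrar acts as a rendezvous point keyed by `earlyHintPreloaderId`. The toy Python model below shows the shape of that handoff; the class and method names mirror the text but are illustrative, not the real C++ interfaces.

```python
class FakePreloader:
    """Stand-in for EarlyHintPreloader in this sketch."""
    def __init__(self):
        self.parent = None

    def connect(self, parent):
        self.parent = parent  # hand the response over to the channel parent

class EarlyHintRegistrar:
    """Toy rendezvous: preloaders register under an id; when the
    HttpChannelParent arrives with the same id, they are connected."""
    def __init__(self):
        self._preloaders = {}

    def register(self, preload_id, preloader):
        self._preloaders[preload_id] = preloader

    def on_parent_ready(self, preload_id, parent):
        preloader = self._preloaders.pop(preload_id, None)
        if preloader is None:
            return False  # no matching preload; fall back to a network request
        preloader.connect(parent)
        return True
```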
+
+```{mermaid}
+graph TD
+ PS[PreloadService]
+ Preloader["FetchPreloader<br/>FontPreloader<br/>imgLoader<br/>ScriptLoader<br/>StyleLoader"]
+ Parent["HttpChannelParent"]
+ EHR["EarlyHintRegistrar"]
+ EHP["EarlyHintPreloader"]
+
+ PS -- "PreloadLinkHeader" --> Preloader
+ Preloader -- "NewChannel<br/>SetEarlyHintPreloaderId<br/>AsyncOpen" --> Parent
+ Parent -- "EarlyHintRegistrar::OnParentReady(this, earlyHintPreloaderId)" --> EHR
+ EHR -- "OnParentConnect" --> EHP
+```
+
+## Early Hint Preload request
+
+The `EarlyHintPreloader` follows HTTP 3xx redirects and always sets the request header `X-Moz: early hint`.
+
+## Early Hint Preload response
+
+When the `EarlyHintPreloader` receives `OnStartRequest`, it forwards all `nsIRequestObserver` functions to the `HttpChannelParent` as soon as it knows which `HttpChannelParent` to forward them to.
+
+```{mermaid}
+graph TD
+ OPC["EHP::OnParentConnect"]
+ OSR["EHP::OnStartRequest"]
+ Invoke["Invoke StreamListenerFunctions"]
+ End(("&shy;"))
+
+ OPC -- "CancelTimer" --> Invoke
+ OSR -- "Suspend Channel if called<br/>before OnParentReady" --> Invoke
+    Invoke -- "Resume Channel if suspended<br/>Forward OnStartRequest+OnDataAvailable+OnStopRequest<br/>Set listener of ParentChannelListener to HttpChannelParent" --> End
+```
+
+## Final setup
+
+In the end, all remaining `OnDataAvailable` and `OnStopRequest` calls are passed down this call chain from `nsIChannel` to the preloader.
+
+```{mermaid}
+graph TD
+ Channel[nsIChannel]
+ PCL[ParentChannelListener]
+    HCP[HttpChannelParent]
+ HCC[HttpChannelChild]
+ Preloader[FetchPreloader/imgLoader/...]
+
+ Channel -- "mListener" --> PCL
+ PCL -- "mNextListener" --> HCP
+ HCP -- "mChannel" --> Channel
+ HCP -- "..." --> HCC
+ HCC -- "..." --> HCP
+ HCC -- "mListener" --> Preloader
+ Preloader -- "mChannel" --> HCC
+```
diff --git a/netwerk/docs/http/http3.md b/netwerk/docs/http/http3.md
new file mode 100644
index 0000000000..9881421d57
--- /dev/null
+++ b/netwerk/docs/http/http3.md
@@ -0,0 +1,154 @@
+# Http3Session and streams
+
+The HTTP/3 and QUIC protocols are implemented in the neqo library. Http3Session, Http3Stream, and Http3WebTransportStream were added to integrate the library into the existing necko code.
+
+The following classes are necessary:
+- HttpConnectionUDP - this is the object that is registered in nsHttpConnectionMgr and it is also used as an async listener to socket events (it implements nsIUDPSocketSyncListener)
+- nsUDPSocket - represents a UDP socket and implements nsASocketHandler. nsSocketTransportService manages UDP and TCP sockets and calls the corresponding nsASocketHandler when the socket encounters an error or has data to be read, etc.
+- NeqoHttp3Conn is a C++ object that maps to the Rust object Http3Client.
+- Http3Session manages NeqoHttp3Conn/Http3Client and provides a bridge between the Rust implementation and necko legacy code, i.e. HttpConnectionUDP and nsHttpTransaction.
+- Http3Streams are used to map reading and writing from/into a nsHttpTransaction onto the NeqoHttp3Conn/Http3Client API (e.g. nsHttpTransaction::OnWriteSegment will call Http3Client::read_data). NeqoHttp3Conn is only accessed by Http3Session and NeqoHttp3Conn functions are exposed through Http3Session where needed.
+
+```{mermaid}
+graph TD
+ A[HttpConnectionMgr] --> B[HttpConnectionUDP]
+ B --> C[nsUDPSocket]
+ C --> B
+ D[nsSocketTransportService] --> C
+ B --> E[NeqoHttp3Conn]
+ B --> F[Http3Stream]
+ F -->|row| B
+    F --> G[nsHttpTransaction]
+ G --> B
+ B --> G
+```
+
+## Interactions with sockets and driving neqo
+
+As described in [this document](https://github.com/mozilla/neqo/blob/main/neqo-http3/src/lib.rs), neqo does not create a socket; it produces, i.e. encodes, data that should be sent as a payload in a UDP packet and consumes data received on the UDP socket. Therefore necko is responsible for creating a socket and reading and writing data from/into the socket. Necko uses nsUDPSocket and nsSocketTransportService for this.
+The UDP socket is constantly polled for reading. It is not polled for writing; we let QUIC congestion control avoid overloading the network path and buffers.
+
+When the UDP socket has an available packet, nsSocketTransportService will return from the polling function and call nsUDPSocket::OnSocketReady, which calls HttpConnectionUDP::OnPacketReceived, HttpConnectionUDP::RecvData and further Http3Session::RecvData. For writing data
+HttpConnectionUDP::SendData is called which calls Http3Session::SendData.
+
+Neqo needs an external timer. The timer is managed by Http3Session. When the timer expires HttpConnectionUDP::OnQuicTimeoutExpired is executed that calls Http3Session::ProcessOutputAndEvents.
+
+HttpConnectionUDP::RecvData, HttpConnectionUDP::SendData or HttpConnectionUDP::OnQuicTimeoutExpired must be on the stack when we interact with neqo. The reason is that they are responsible for proper clean-up in case of an error. For example, if there is a slow reader that is ready to read, it will call Http3Session::TransactionHasDataToRecv to be registered in a list and HttpConnectionUDP::ForceRecv will be called that will call the same function chain as in the case a new packet is received, i.e. HttpConnectionUDP::RecvData and further Http3Session::RecvData. The other example is when a new HTTP transaction is added to the session, the transaction needs to send data. The transaction will be registered in a list and HttpConnectionUDP::ResumeSend will be called which further calls HttpConnectionUDP::SendData.
+
+Http3Session holds a reference to a ConnectionHandler object which is a wrapper object around HttpConnectionUDP. The destructor of ConnectionHandler calls nsHttpHandler::ReclaimConnection which is responsible for removing the connection from nsHttpConnectionMgr.
+HttpConnectionUDP::RecvData, HttpConnectionUDP::SendData or HttpConnectionUDP::OnQuicTimeoutExpired call HttpConnectionUDP::CloseTransaction which will cause Http3Session to remove the reference to the ConnectionHandler object. The ConnectionHandler object will be destroyed and nsHttpHandler::ReclaimConnection will be called.
+This behavior is historical and it is also used for HTTP/2 and older versions. In the case of the older versions, nsHttpHandler::ReclaimConnection may actually reuse a connection instead of removing it from nsHttpConnectionMgr.
+
+Three main neqo functions responsible for driving neqo are process_input, process_output, and next_event. They are called by:
+- Http3Session::ProcessInput,
+- Http3Session::ProcessOutput, and
+- Http3Session::ProcessEvents.
+
+**ProcessInput**
+In this function we take data from the UDP socket and call NeqoHttp3Conn::ProcessInput that maps to Http3Client::process_input. The packets are read from the socket until the socket buffer is empty.
+
+**ProcessEvents**
+This function processes all available neqo events. It returns early only in case of a fatal error.
+It calls NeqoHttp3Conn::GetEvent which maps to Http3Client::next_event.
+The events and their handling will be explained below.
+
+**ProcessOutput**
+The function is called when necko has performed some action on neqo, e.g. new HTTP transaction is added, certificate verification is done, etc., or when the timer expires. In both cases, necko wants to check if neqo has data to send or change its state. This function calls NeqoHttp3Conn::ProcessOutput that maps to Http3Client::process_output. NeqoHttp3Conn::ProcessOutput may return a packet that is sent on the socket or a callback timeout. In the Http3Session::ProcessOutput function, NeqoHttp3Conn::ProcessOutput is called repeatedly and packets are sent until a callback timer is returned or a fatal error happens.
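The "call ProcessOutput repeatedly until a callback timeout" loop can be sketched as follows. This is an illustrative Python model of the control flow, not the real API: `conn.process_output()` stands in for NeqoHttp3Conn::ProcessOutput and is assumed to return either a datagram to send, a callback delay, or nothing.

```python
def process_output_loop(conn, send_packet, set_timer):
    """Keep asking the connection for output; send datagrams until a
    callback timeout is returned (or there is nothing left to send)."""
    while True:
        kind, value = conn.process_output()
        if kind == "datagram":
            send_packet(value)          # write the payload to the UDP socket
        elif kind == "callback":
            set_timer(value)            # re-enter later via the timer path
            return
        else:                           # "none": nothing to send, no timer
            return

class FakeConn:
    """Scripted stand-in for the neqo connection in this sketch."""
    def __init__(self, outputs):
        self.outputs = outputs

    def process_output(self):
        return self.outputs.pop(0) if self.outputs else ("none", None)
```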
+
+**Http3Session::RecvData** performs the following steps:
+- ProcessSlowConsumers - explained below.
+- ProcessInput - process new packets.
+- ProcessEvents - look if there are new events
+- ProcessOutput - look if we have new packets to send after packets arrive (e.g. sending an ack) or due to event processing (e.g. a stream has been canceled).
+
+**Http3Session::SendData** performs the following steps:
+- Process (HTTP and WebTransport) streams that have data to write.
+- ProcessOutput - look if there are new packets to be sent after streams have supplied data to neqo.
+
+
+**Http3Session::ProcessOutputAndEvents** performs the following steps:
+- ProcessOutput - after a timeout most probably neqo will have data to retransmit or it will send a ping
+- ProcessEvents - look if the state of the connection has changed, i.e. the connection timed out
+
+
+## HTTP and WebTransport Streams reading data
+
+The following diagram shows how data are read from an HTTP stream. The diagram for a WebTransport stream will be added later.
+
+```{mermaid}
+flowchart TD
+ A1[nsUDPSocket::OnSocketReady] --> |HttpConnectionUDP::OnPacketReceived| C[HttpConnectionUDP]
+ A[HttpConnectionUDP::ResumeRecv calls] --> C
+ B[HttpConnectionUDPForceIO] --> |HttpConnectionUDP::RecvData| C
+ C -->|1. Http3Session::RecvData| D[Http3Session]
+ D --> |2. Http3Stream::WriteSegments|E[Http3Stream]
+ E --> |3. nsHttpTransaction::WriteSegmentsAgain| F[nsHttpTransaction]
+ F --> |4. nsPipeOutputStream::WriteSegments| G["nsPipeOutputStream"]
+    G --> |5. nsHttpTransaction::WritePipeSegment| F
+ F --> |6. Http3Stream::OnWriteSegment| E
+ E --> |"7. Return response headers or call Http3Session::ReadResponseData"|D
+    D --> |8. NeqoHttp3Conn::ReadResponseData| H[NeqoHttp3Conn]
+
+```
+
+When there is a new packet carrying stream data arriving on a QUIC connection, nsUDPSocket::OnSocketReady will be called, which will call Http3Session::RecvData. Http3Session::RecvData and ProcessInput will read the new packet from the socket and give it to neqo for processing. In the next step, ProcessEvents will be called; it will get a DataReadable event and Http3Stream::WriteSegments will be called.
+Http3Stream::WriteSegments calls nsHttpTransaction::WriteSegmentsAgain repeatedly until all data are read from the QUIC stream or until the pipe cannot accept more data. The latter can happen when listeners of an HTTP transaction or WebTransport stream are slow and are not able to read all data available on an HTTP3/WebTransport stream fast enough.
+
+When the pipe cannot accept more data, nsHttpTransaction will call nsPipeOutputStream::AsyncWait and wait for the nsHttpTransaction::OnOutputStreamReady callback. When nsHttpTransaction::OnOutputStreamReady is called, Http3Stream/Session::TransactionHasDataToRecv is executed with the following actions:
+- add the corresponding stream to a list (mSlowConsumersReadyForRead) and
+- call nsHttpConnection::ResumeRecv (i.e. force the same code path as when a socket has data to receive so that errors can be properly handled as explained previously).
+
+These streams will be processed in ProcessSlowConsumers which is called by Http3Session::RecvData.
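The slow-consumer bookkeeping amounts to a queue that is drained at the start of RecvData. The Python sketch below models just that shape; class and method names are illustrative, not the real C++ interfaces.

```python
class SlowConsumerQueue:
    """Toy model: streams whose pipe was full are queued, then drained
    when the session next processes received data."""
    def __init__(self):
        self.ready = []  # cf. mSlowConsumersReadyForRead

    def transaction_has_data_to_recv(self, stream):
        self.ready.append(stream)
        # the real code also calls ResumeRecv here to re-enter RecvData

    def process_slow_consumers(self):
        pending, self.ready = self.ready, []
        for stream in pending:
            stream.write_segments()  # drain data already buffered in neqo
```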
+
+## HTTP and WebTransport Streams writing data
+
+The following diagram shows how data are sent from an HTTP stream. The diagram for a WebTransport stream will be added later.
+
+```{mermaid}
+flowchart TD
+ A[HttpConnectionUDP::ResumeSend calls] --> C[HttpConnectionUDP]
+ B[HttpConnectionUDPForceIO] --> |HttpConnectionUDP::SendData| C
+ C -->|1. Http3Session::SendData| D[Http3Session]
+ D --> |2. Http3Stream::ReadSegments|E[Http3Stream]
+ E --> |3. nsHttpTransaction::ReadSegmentsAgain| F[nsHttpTransaction]
+ F --> |4. nsPipeInputStream::ReadSegments| G["nsPipeInputStream(Request stream)"]
+ G --> |5. nsHttpTransaction::ReadRequestSegment| F
+ F --> |6. Http3Stream::OnReadSegment| E
+ E --> |7. Http3Session::TryActivating/SendRequestBody|D
+    D --> |8. NeqoHttp3Conn::Fetch/SendRequestBody| H[NeqoHttp3Conn]
+```
+
+When an nsHttpTransaction has been newly added to a session or when an nsHttpTransaction has more data to write, Http3Session::StreamReadyToWrite is called (in the latter case through Http3Session::TransactionHasDataToWrite), which performs the following actions:
+- add the corresponding stream to a list (mReadyForWrite) and
+- call HttpConnectionUDP::ResumeSend.
+
+The Http3Session::SendData function iterates through mReadyForWrite and calls Http3Stream::ReadSegments for each stream.
+
+## Neqo events
+
+For **HeaderReady** and **DataReadable** the Http3Stream::WriteSegments function of the corresponding stream is called. The code path shown in the flowchart above will call the nsHttpTransaction served by the stream to take headers and data.
+
+**DataWritable** means that a stream could not accept more data earlier and that flow control now allows sending more data. Http3Session will mark the stream as writable (by calling Http3Session::StreamReadyToWrite) to check whether the stream wants to write more data.
+
+**Reset** and **StopSending** events will be propagated to the stream and the stream will be closed.
+
+**RequestsCreatable** events are posted when a QUIC connection previously could not accept new streams due to flow control, and the stream limit has now been increased so that streams are creatable again. Http3Session::ProcessPending will trigger the activation of the queued streams.
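The queue-then-activate behavior can be sketched as a small Python model. Names are illustrative; the real stream-limit accounting is done by neqo and Http3Session::ProcessPending.

```python
class StreamQueue:
    """Toy model: streams beyond the current stream limit are queued and
    activated once the limit is raised (RequestsCreatable)."""
    def __init__(self, max_streams):
        self.max_streams = max_streams
        self.active = []
        self.pending = []

    def try_activate(self, stream):
        if len(self.active) < self.max_streams:
            self.active.append(stream)
        else:
            self.pending.append(stream)  # wait for RequestsCreatable

    def requests_creatable(self, new_limit):
        self.max_streams = new_limit
        while self.pending and len(self.active) < self.max_streams:
            self.active.append(self.pending.pop(0))
```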
+
+**AuthenticationNeeded** and **EchFallbackAuthenticationNeeded** are posted when a certificate verification is needed.
+
+
+**ZeroRttRejected** is posted when zero RTT data was rejected.
+
+**ResumptionToken** is posted when a new resumption token is available.
+
+**ConnectionConnected**, **GoawayReceived**, **ConnectionClosing** and **ConnectionClosed** expose changes in the connection state. The difference between **ConnectionClosing** and **ConnectionClosed** is that after **ConnectionClosed** the connection can be closed immediately, while after **ConnectionClosing** we keep the connection object for a short time until the **ConnectionClosed** event is received. During this period we retransmit the closing frame if it is lost.
+
+### WebTransport events
+
+**Negotiated** - WebTransport is negotiated only after the HTTP/3 settings frame has been received from the server. At that point the **Negotiated** event is posted to inform the application.
+
+The **Session** event is posted when a WebTransport session is successfully negotiated.
+
+The **SessionClosed** event is posted when a connection is closed gracefully or abruptly.
+
+The **NewStream** event is posted when a new stream has been opened by the peer.
diff --git a/netwerk/docs/http/lifecycle.rst b/netwerk/docs/http/lifecycle.rst
new file mode 100644
index 0000000000..007d35e579
--- /dev/null
+++ b/netwerk/docs/http/lifecycle.rst
@@ -0,0 +1,220 @@
+The lifecycle of an HTTP request
+================================
+
+
+HTTP requests in Firefox go through several steps. Each piece of the request and response messages becomes available at a certain point. Extracting that information, though, is a challenge.
+
+What is available when
+----------------------
+
++-----------------------+---------------------------------------------------+---------------------------------------+------------------------+-------------------------------+
+| Data | When it's available | Sample JS code | Interfaces | Test code |
++=======================+===================================================+=======================================+========================+===============================+
+| HTTP request method | *http-on-modify-request* observer notification | channel.requestMethod | nsIHttpChannel_ | |
++-----------------------+---------------------------------------------------+---------------------------------------+------------------------+-------------------------------+
+| HTTP request URI | *http-on-modify-request* observer notification | channel.URI | nsIChannel_ | |
++-----------------------+---------------------------------------------------+---------------------------------------+------------------------+-------------------------------+
+| HTTP request headers | *http-on-modify-request* observer notification | channel.visitRequestHeaders(visitor) | nsIHttpChannel_ | |
++-----------------------+---------------------------------------------------+---------------------------------------+------------------------+-------------------------------+
+| HTTP request body | *http-on-modify-request* observer notification | channel.uploadStream | nsIUploadChannel_ | |
++-----------------------+---------------------------------------------------+---------------------------------------+------------------------+-------------------------------+
+|| HTTP response status || *http-on-examine-response* observer notification || channel.responseStatus || nsIHttpChannel_ || test_basic_functionality.js_ |
+|| || || channel.responseStatusText || || |
++-----------------------+---------------------------------------------------+---------------------------------------+------------------------+-------------------------------+
+| HTTP response headers | *http-on-examine-response* observer notification | channel.visitResponseHeaders(visitor) | nsIHttpChannel_ | |
++-----------------------+---------------------------------------------------+---------------------------------------+------------------------+-------------------------------+
+|| HTTP response body || *onStopRequest* via stream listener tee || See below || nsITraceableChannel_ || test_traceable_channel.js_ |
+|| || || || nsIStreamListenerTee_ || |
+|| || || || nsIPipe_ || |
++-----------------------+---------------------------------------------------+---------------------------------------+------------------------+-------------------------------+
+
+The request: http-on-modify-request
+-----------------------------------
+
+Firefox fires a "http-on-modify-request" observer notification before sending the HTTP request, and this blocks the sending of the request until all observers exit. This is generally the point at which you can modify the HTTP request headers (hence the name).
+
+Attaching a listener for a request is pretty simple::
+
+ const obs = {
+ QueryInterface: ChromeUtils.generateQI(["nsIObserver"]),
+
+ observe: function(channel, topic, data) {
+ if (!(channel instanceof Ci.nsIHttpChannel))
+ return;
+
+ // process the channel's data
+ }
+ }
+
+ Services.obs.addObserver(obs, "http-on-modify-request", false);
+
+See nsIObserverService_ for the details.
+
+The request method and URI are immediately available at this time. Request headers are trivially easy to get::
+
+ /**
+ * HTTP header visitor.
+ */
+ class HeaderVisitor {
+ #targetObject;
+
+ constructor(targetObject) {
+ this.#targetObject = targetObject;
+ }
+
+ // nsIHttpHeaderVisitor
+ visitHeader(header, value) {
+ this.#targetObject[header] = value;
+ }
+
+ QueryInterface = ChromeUtils.generateQI(["nsIHttpHeaderVisitor"]);
+ }
+
+ // ...
+ const requestHeaders = {};
+ const visitor = new HeaderVisitor(requestHeaders);
+ channel.visitRequestHeaders(visitor);
+
+This is also the time to set request headers, if you need to. The method for that on the nsIHttpChannel_ interface is ``channel.setRequestHeader(header, value)``.
+
+Most HTTP requests don't have a body, as they are GET requests. POST requests often have them, though. As the nsIUploadChannel_ documentation indicates, the body of most HTTP requests is available via a seekable stream (nsISeekableStream_). So you can simply capture the body stream and its current position, to revisit it later. network-helper.js_ has code to read the request body.
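+
+In real code the body stream is an XPCOM ``nsISeekableStream``. The
+following plain-JS sketch (every name in it is hypothetical) only
+illustrates the capture-position, read, and rewind pattern that a
+seekable stream enables::
+
+ // Hypothetical stand-in for a seekable stream; not a Necko interface.
+ class FakeSeekableStream {
+   constructor(data) {
+     this.data = data;
+     this.pos = 0;
+   }
+   tell() { return this.pos; }
+   seek(offset) { this.pos = offset; }
+   readRemaining() {
+     const out = this.data.slice(this.pos);
+     this.pos = this.data.length;
+     return out;
+   }
+ }
+
+ function peekRequestBody(stream) {
+   const savedPosition = stream.tell(); // remember where the channel left off
+   const body = stream.readRemaining(); // consume the rest of the body
+   stream.seek(savedPosition);          // rewind so the request is undisturbed
+   return body;
+ }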
+
+The response: http-on-examine-response
+--------------------------------------
+
+Firefox fires a "http-on-examine-response" observer notification after parsing the HTTP response status and headers, but **before** reading the response body. Attaching a listener for this phase is also very easy::
+
+ Services.obs.addObserver(observer, "http-on-examine-response", false);
+
+If you use the same observer for "http-on-modify-request" and "http-on-examine-response", make sure you check the topic argument before interacting with the channel.
+
+The response status is available via the *responseStatus* and *responseStatusText* properties. The response headers are available via the *visitResponseHeaders* method, which takes the same nsIHttpHeaderVisitor interface as *visitRequestHeaders*.
+
+The response body: onStopRequest, stream listener tee
+-----------------------------------------------------
+
+During the "http-on-examine-response" notification, the response body is *not* available. You can, however, use a stream listener tee to *copy* the stream so that the original stream data goes on, and you have a separate input stream you can read from with the same data.
+
+Here's some sample code to illustrate what you need::
+
+ const Pipe = Components.Constructor(
+ "@mozilla.org/pipe;1",
+ "nsIPipe",
+ "init"
+ );
+ const StreamListenerTee = Components.Constructor(
+ "@mozilla.org/network/stream-listener-tee;1",
+ "nsIStreamListenerTee"
+ );
+ const ScriptableStream = Components.Constructor(
+ "@mozilla.org/scriptableinputstream;1",
+ "nsIScriptableInputStream",
+ "init"
+ );
+
+ const obs = {
+ QueryInterface: ChromeUtils.generateQI(["nsIObserver", "nsIRequestObserver"]),
+
+ /** @typedef {WeakMap<nsIHttpChannel, nsIPipe>} */
+ requestToTeePipe: new WeakMap,
+
+ // nsIObserver
+ observe: function(channel, topic, data) {
+ if (!(channel instanceof Ci.nsIHttpChannel))
+ return;
+
+ /* Create input and output streams to take the new data.
+ The 0xffffffff argument is the segment count.
+ It has to be this high because you don't know how much data is coming in the response body.
+
+ As for why these are blocking streams: I believe this is because there's no actual need to make them non-blocking.
+ The stream processing happens during onStopRequest(), so we have all the data then and the operation can be synchronous.
+ But I could be very wrong on this.
+ */
+ const pipe = new Pipe(false, false, 0, 0xffffffff);
+
+ // Install the stream listener tee to intercept the HTTP body.
+ const tee = new StreamListenerTee;
+ const originalListener = channel.setNewListener(tee);
+ tee.init(originalListener, pipe.outputStream, this);
+
+ this.requestToTeePipe.set(channel, pipe);
+ },
+
+ // nsIRequestObserver
+ onStartRequest: function() {
+ // do nothing
+ },
+
+ // nsIRequestObserver
+ onStopRequest: function(channel, statusCode) {
+ const pipe = this.requestToTeePipe.get(channel);
+
+ // No more data coming in anyway.
+ pipe.outputStream.close();
+ this.requestToTeePipe.delete(channel);
+
+ let length = 0;
+ try {
+ length = pipe.inputStream.available();
+ }
+ catch (e) {
+ // NS_BASE_STREAM_CLOSED just means no data arrived; rethrow anything else.
+ if (e.result !== Components.results.NS_BASE_STREAM_CLOSED)
+ throw e;
+ }
+
+ let responseBody = "";
+ if (length) {
+ // C++ code doesn't need the scriptable input stream.
+ const sin = new ScriptableStream(pipe.inputStream);
+ responseBody = sin.read(length);
+ sin.close();
+ }
+
+ void(responseBody); // do something with the body
+ }
+ }
+
+test_traceable_channel.js_ does essentially this.
+
+Character encodings and compression
+-----------------------------------
+
+Canceling requests
+------------------
+
+HTTP activity distributor notes
+-------------------------------
+
+URIContentLoader notes
+----------------------
+
+Order of operations
+-------------------
+
+1. The HTTP channel is constructed.
+2. The "http-on-modify-request" observer service notification fires.
+3. If the request has been canceled, exit at this step.
+4. The HTTP channel's request is submitted to the server. Time passes.
+5. The HTTP channel's response comes in from the server.
+6. The HTTP channel parses the response status and headers.
+7. The "http-on-examine-response" observer service notification fires.
+
+Useful code samples and references
+----------------------------------
+
+- nsIHttpProtocolHandler_ defines a lot of observer topics, and has a lot of details.
+
+.. _nsIHttpChannel: https://searchfox.org/mozilla-central/source/netwerk/protocol/http/nsIHttpChannel.idl
+.. _nsIChannel: https://searchfox.org/mozilla-central/source/netwerk/base/nsIChannel.idl
+.. _nsIUploadChannel: https://searchfox.org/mozilla-central/source/netwerk/base/nsIUploadChannel.idl
+.. _nsITraceableChannel: https://searchfox.org/mozilla-central/source/netwerk/base/nsITraceableChannel.idl
+.. _nsISeekableStream: https://searchfox.org/mozilla-central/source/xpcom/io/nsISeekableStream.idl
+.. _nsIObserverService: https://searchfox.org/mozilla-central/source/xpcom/ds/nsIObserverService.idl
+.. _nsIHttpProtocolHandler: https://searchfox.org/mozilla-central/source/netwerk/protocol/http/nsIHttpProtocolHandler.idl
+.. _nsIStreamListenerTee: https://searchfox.org/mozilla-central/source/netwerk/base/nsIStreamListenerTee.idl
+.. _nsIPipe: https://searchfox.org/mozilla-central/source/xpcom/io/nsIPipe.idl
+
+.. _test_basic_functionality.js: https://searchfox.org/mozilla-central/source/netwerk/test/httpserver/test/test_basic_functionality.js
+.. _test_traceable_channel.js: https://searchfox.org/mozilla-central/source/netwerk/test/unit/test_traceable_channel.js
+.. _network-helper.js: https://searchfox.org/mozilla-central/source/devtools/shared/webconsole/network-helper.js
diff --git a/netwerk/docs/http/logging.rst b/netwerk/docs/http/logging.rst
new file mode 100644
index 0000000000..7dbc418a7a
--- /dev/null
+++ b/netwerk/docs/http/logging.rst
@@ -0,0 +1,320 @@
+HTTP Logging
+============
+
+
+Sometimes, while debugging your Web app (or client-side code using
+Necko), it can be useful to log HTTP traffic. This saves a log of HTTP-related
+information from your browser run into a file that you can examine (or
+upload to Bugzilla if a developer has asked you for a log).
+
+.. note::
+
+ **Note:** The `Web
+ Console <https://developer.mozilla.org/en-US/docs/Tools/Web_Console>`__
+ also offers the ability to peek at HTTP transactions within Firefox.
+ HTTP logging generally provides more detailed logging.
+
+.. _using-about-networking:
+
+Using about:logging
+-------------------
+
+This is the best and easiest way to do HTTP logging. You can turn
+logging on and off at any point while the browser is running.
+
+.. note::
+
+ **Note:** Before Firefox 108 the logging UI used to be located at `about:networking#logging`
+
+This allows you to capture only the "interesting" part of the browser's
+behavior (i.e. your bug), which makes the HTTP log much smaller and
+easier to analyze.
+
+#. Launch the browser and get it into whatever state you need to be in
+ just before your bug occurs.
+#. Open a new tab and type in "about:logging" into the URL bar.
+#. Adjust the location of the log file if you don't like the default
+#. Adjust the list of modules that you want to log: this list has the
+ exact same format as the MOZ_LOG environment variable (see below).
+ Generally the default list is OK, unless a Mozilla developer has told
+ you to modify it.
+#. Click on Start Logging.
+#. Reproduce the bug (i.e. go to the web site that is broken for you and
+ make the bug happen in the browser)
+#. Make a note of the value of "Current Log File".
+#. Click on Stop Logging.
+#. Go to the folder containing the specified log file, and gather all
+ the log files. You will see several files that look like:
+ log.txt-main.1806.moz_log, log.txt-child.1954.moz_log,
+ log.txt-child.1970.moz_log, etc. This is because Firefox now uses
+ multiple processes, and each process gets its own log file.
+#. For many bugs, the "log.txt-main.moz_log" file is the only thing you need to
+ upload as a file attachment to your Bugzilla bug (this is assuming
+ you're logging to help a mozilla developer). Other bugs may require
+ all the logs to be uploaded--ask the developer if you're not sure.
+#. Pat yourself on the back--a job well done! Thanks for helping us
+ debug Firefox.
+
+Logging HTTP activity by manually setting environment variables
+---------------------------------------------------------------
+
+Sometimes the about:logging approach won't work, for instance if your
+bug occurs during startup, or you're running on mobile, etc. In that
+case you can set environment variables \*before\* you launch Firefox.
+Note that this approach winds up logging the whole browser history, so
+files can get rather large (they compress well :)
+
+Setting environment variables differs by operating system. Don't let the
+scary-looking command line stuff frighten you off; it's not hard at all!
+
+Windows
+~~~~~~~
+
+#. If Firefox is already running, exit out of it.
+
+#. Open a command prompt by holding down the Windows key and pressing "R".
+
+#. Type CMD and press enter; a new Command Prompt window with a black
+   background will appear.
+
+#. | Copy and paste the following lines one at a time into the Command
+   Prompt window. Press the enter key after each one:
+ | **For 64-bit Windows:**
+
+ ::
+
+ set MOZ_LOG=timestamp,rotate:200,nsHttp:5,cache2:5,nsSocketTransport:5,nsHostResolver:5
+ set MOZ_LOG_FILE=%TEMP%\log.txt
+ "c:\Program Files\Mozilla Firefox\firefox.exe"
+
+ **For 32-bit Windows:**
+
+ ::
+
+ set MOZ_LOG=timestamp,rotate:200,nsHttp:5,cache2:5,nsSocketTransport:5,nsHostResolver:5
+ set MOZ_LOG_FILE=%TEMP%\log.txt
+ "c:\Program Files (x86)\Mozilla Firefox\firefox.exe"
+
+ (These instructions assume that you installed Firefox to the default
+ location, and that drive C: is your Windows startup disk. Make the
+ appropriate adjustments if those aren't the case.)
+
+#. Reproduce whatever problem it is that you're having.
+
+#. Once you've reproduced the problem, exit Firefox and look for the
+ generated log files in your temporary directory. You can type
+ "%TEMP%" directly into the Windows Explorer location bar to get there
+ quickly.
+
+Linux
+~~~~~
+
+This section offers information on how to capture HTTP logs for Firefox
+running on Linux.
+
+#. Quit out of Firefox if it's running.
+
+#. Open a new shell. The commands listed here assume a bash-compatible
+ shell.
+
+#. Copy and paste the following commands into the shell one at a time.
+ Make sure to hit enter after each line.
+
+ ::
+
+ export MOZ_LOG=timestamp,rotate:200,nsHttp:5,cache2:5,nsSocketTransport:5,nsHostResolver:5
+ export MOZ_LOG_FILE=/tmp/log.txt
+ cd /path/to/firefox
+ ./firefox
+
+#. Reproduce the problem you're debugging.
+
+#. When the problem has been reproduced, exit Firefox and look for the
+ generated log files, which you can find at ``/tmp/log.txt``.
+
+Mac OS X
+~~~~~~~~
+
+These instructions show how to log HTTP traffic in Firefox on Mac OS X.
+
+#. Quit Firefox if it's currently running, by using the Quit option
+ in the File menu. Keep in mind that simply closing all windows does
+ **not** quit Firefox on Mac OS X (this is standard practice for Mac
+ applications).
+
+#. Run the Terminal application, which is located in the Utilities
+ subfolder in your startup disk's Applications folder.
+
+#. Copy and paste the following commands into the Terminal window,
+ hitting the return key after each line.
+
+ ::
+
+ export MOZ_LOG=timestamp,rotate:200,nsHttp:5,cache2:5,nsSocketTransport:5,nsHostResolver:5
+ export MOZ_LOG_FILE=~/Desktop/log.txt
+ cd /Applications/Firefox.app/Contents/MacOS
+ ./firefox-bin
+
+ (The instructions assume that you've installed Firefox directly into
+ your startup disk's Applications folder. If you've put it elsewhere,
+ change the path used on the third line appropriately.)
+
+#. Reproduce whatever problem you're trying to debug.
+
+#. Quit Firefox and look for the generated ``log.txt`` log files on your
+ desktop.
+
+.. note::
+
+ **Note:** The generated log file uses Unix-style line endings. Older
+ editors may have problems with this, but if you're using an even
+ reasonably modern Mac OS X application to view the log, you won't
+ have any problems.
+
+Start logging using command line arguments
+------------------------------------------
+
+Since Firefox 61 it's possible to start logging in a somewhat simpler
+way than setting environment variables: using command line arguments.
+Here is an example for the **Windows** platform; on other platforms the
+arguments take the same form:
+
+#. If Firefox is already running, exit out of it.
+
+#. Open a command prompt. On `Windows
+ XP <https://commandwindows.com/runline.htm>`__, you can find the
+ "Run..." command in the Start menu's "All Programs" submenu. On `all
+ newer versions of
+ Windows <http://www.xp-vista.com/other/where-is-run-in-windows-vista>`__,
+ you can hold down the Windows key and press "R".
+
+#. | Copy and paste the following line into the "Run" command window and
+ then press enter:
+ | **For 32-bit Windows:**
+
+ ::
+
+ "c:\Program Files (x86)\Mozilla Firefox\firefox.exe" -MOZ_LOG=timestamp,rotate:200,nsHttp:5,cache2:5,nsSocketTransport:5,nsHostResolver:5 -MOZ_LOG_FILE=%TEMP%\log.txt
+
+ **For 64-bit Windows:**
+
+ ::
+
+ "c:\Program Files\Mozilla Firefox\firefox.exe" -MOZ_LOG=timestamp,rotate:200,nsHttp:5,cache2:5,nsSocketTransport:5,nsHostResolver:5 -MOZ_LOG_FILE=%TEMP%\log.txt
+
+ (These instructions assume that you installed Firefox to the default
+ location, and that drive C: is your Windows startup disk. Make the
+ appropriate adjustments if those aren't the case.)
+
+#. Reproduce whatever problem it is that you're having.
+
+#. Once you've reproduced the problem, exit Firefox and look for the
+ generated log files in your temporary directory. You can type
+ "%TEMP%" directly into the Windows Explorer location bar to get there
+ quickly.
+
+Advanced techniques
+-------------------
+
+You can adjust some of the settings listed above to change what HTTP
+information gets logged.
+
+Limiting the size of the logged data
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By default there is no limit to the size of log file(s), and they
+capture the logging throughout the time Firefox runs, from start to
+finish. These files can get quite large (gigabytes)! So we have added
+a 'rotate:SIZE_IN_MB' option to MOZ_LOG (we use it in the examples
+above). If you are using Firefox >= 51, setting this option saves only
+the last N megabytes of logging data, which helps keep them manageable
+in size. (Unknown modules are ignored, so it's OK to use 'rotate' in
+your environment even if you're running Firefox <= 50: it will do
+nothing).
+
+This is accomplished by splitting the log into up to 4 separate files
+(their filenames have a numbered extension: .0, .1, .2, .3). The logging
+back end cycles the files it writes to, while ensuring that the sum of
+these files’ sizes will never go over the specified limit.
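+
+The cycling behavior can be pictured with a short sketch. This is
+illustrative only (in-memory strings, hypothetical names), not the real
+logging back end::
+
+ // Writes cycle through 4 numbered files; when the writer wraps around
+ // to a file, that file is started fresh, so total size stays bounded.
+ class RotatingLog {
+   constructor(limitBytes, fileCount = 4) {
+     this.perFileLimit = Math.floor(limitBytes / fileCount);
+     this.files = Array.from({ length: fileCount }, () => "");
+     this.current = 0; // newest data lives here, not in the highest index
+   }
+   write(message) {
+     if (this.files[this.current].length + message.length > this.perFileLimit) {
+       this.current = (this.current + 1) % this.files.length;
+       this.files[this.current] = "";
+     }
+     this.files[this.current] += message;
+   }
+   totalSize() {
+     return this.files.reduce((sum, file) => sum + file.length, 0);
+   }
+ }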
+
+Note 1: **the file with the largest number is not guaranteed to be the
+last file written!** We don’t move the files, we only cycle. Using the
+rotate module automatically adds timestamps to the log, so it’s always
+easy to recognize which file keeps the most recent data.
+
+Note 2: **rotate doesn’t support append**. When you specify rotate, on
+every start all the files (including any previous non-rotated log file)
+are deleted to avoid any mixture of information. The ``append`` module
+specified is then ignored.
+
+Use 'sync' if your browser crashes or hangs
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By default, HTTP logging buffers messages and only periodically writes
+them to disk (this is more efficient and also makes logging less likely
+to interfere with race conditions, etc). However, if you are seeing
+your browser crash (or hang) you should add ",sync" to the list of
+logging modules in your MOZ_LOG environment variable. This will cause
+each log message to be immediately written (and fflush()'d), which is
+likely to give us more information about your crash.
+
+Turning on QUIC logging
+~~~~~~~~~~~~~~~~~~~~~~~
+
+This can be done by setting `MOZ_LOG` to
+`timestamp,rotate:200,nsHttp:5,neqo_http3::*:5,neqo_transport::*:5`.
+
+Logging only HTTP request and response headers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are two ways to do this:
+
+#. Replace MOZ_LOG\ ``=nsHttp:5`` with MOZ_LOG\ ``=nsHttp:3`` in the
+ commands above.
+#. There's a handy extension for Firefox called `HTTP Header
+ Live <https://addons.mozilla.org/firefox/addon/3829>`__ that you can
+ use to capture just the HTTP request and response headers. This is a
+ useful tool when you want to peek at HTTP traffic.
+
+Turning off logging of socket-level transactions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you're not interested in socket-level log information, either because
+it's not relevant to your bug or because you're debugging something that
+includes a lot of noise that's hard to parse through, you can do that.
+Simply remove the text ``nsSocketTransport:5`` from the commands above.
+
+Turning off DNS query logging
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can turn off logging of host resolving (that is, DNS queries) by
+removing the text ``nsHostResolver:5`` from the commands above.
+
+Enable Logging for try server runs
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can enable logging on try by passing the `env` argument via `mach try`.
+For example:
+
+.. note::
+
+ ``./mach try fuzzy --env "MOZ_LOG=nsHttp:5,SSLTokensCache:5"``
+
+See also
+--------
+
+- There are similar options available to debug mailnews protocols.
+ See `this
+ document <https://www-archive.mozilla.org/quality/mailnews/mail-troubleshoot.html>`__ for
+ more info about mailnews troubleshooting.
+- On the Windows platform, nightly Firefox builds have FTP logging
+ built-in (don't ask why this is only the case for Windows!). To
+ enable FTP logging, just set ``MOZ_LOG=nsFtp:5`` (in older versions
+ of Mozilla, you need to use ``nsFTPProtocol`` instead of ``nsFtp``).
+- When Mozilla's built-in logging capabilities aren't good enough, and
+ you need a full-fledged packet tracing tool, two free products are
+ `Wireshark <https://www.wireshark.org/>`__
+ and `ngrep <https://github.com/jpr5/ngrep/>`__. They are available
+ for Windows and most flavors of UNIX (including Linux and Mac OS
+ X), are rock solid, and offer enough features to help uncover any
+ Mozilla networking problem.
diff --git a/netwerk/docs/http_server_for_testing.rst b/netwerk/docs/http_server_for_testing.rst
new file mode 100644
index 0000000000..dbf6ce7520
--- /dev/null
+++ b/netwerk/docs/http_server_for_testing.rst
@@ -0,0 +1,482 @@
+HTTP server for unit tests
+==========================
+
+This page describes the JavaScript implementation of an
+HTTP server located in ``netwerk/test/httpserver/``.
+
+Server functionality
+~~~~~~~~~~~~~~~~~~~~
+
+Here are some of the things you can do with the server:
+
+- map a directory of files onto an HTTP path on the server, for an
+ arbitrary number of such directories (including nested directories)
+- define custom error handlers for HTTP error codes
+- serve a given file for requests for a specific path, optionally with
+ custom headers and status
+- define custom "CGI" handlers for specific paths using a
+ JavaScript-based API to create the response (headers and actual
+ content)
+- run multiple servers at once on different ports (8080, 8081, 8082,
+ and so on.)
+
+This functionality should be more than enough for you to use it with any
+test which requires HTTP-provided behavior.
+
+Where you can use it
+~~~~~~~~~~~~~~~~~~~~
+
+The server is written primarily for use from ``xpcshell``-based
+tests, and it can be used as an inline script or as an XPCOM component. The
+Mochitest framework also uses it to serve its tests, and
+`reftests <https://searchfox.org/mozilla-central/source/layout/tools/reftest/README.txt>`__
+can optionally use it when their behavior is dependent upon specific
+HTTP header values.
+
+Ways you might use it
+~~~~~~~~~~~~~~~~~~~~~
+
+- application update testing
+- cross-"server" security tests
+- cross-domain security tests, in combination with the right proxy
+ settings (for example, using `Proxy
+ AutoConfig <https://en.wikipedia.org/wiki/Proxy_auto-config>`__)
+- tests where the behavior is dependent on the values of HTTP headers
+ (for example, Content-Type)
+- anything which requires use of files not stored locally
+- OpenID: users could provide their own OpenID server (they only
+  need it while they're using their browser)
+- micro-blogging: users could host their own micro-blog based on
+  standards like RSS/Atom
+- REST APIs: a web application could interact with REST or SOAP APIs
+  for purposes such as file/data storage and social sharing
+- download testing
+
+Using the server
+~~~~~~~~~~~~~~~~
+
+The best and first place you should look for documentation is
+``netwerk/test/httpserver/nsIHttpServer.idl``. It's extremely
+comprehensive and detailed, and it should be enough to figure out how to
+make the server do what you want. I also suggest taking a look at the
+less-comprehensive server
+`README <https://searchfox.org/mozilla-central/source/netwerk/test/httpserver/README>`__,
+although the IDL should usually be sufficient.
+
+Running the server
+^^^^^^^^^^^^^^^^^^
+
+From test suites, the server should be importable as a testing-only JS
+module:
+
+.. code:: javascript
+
+ ChromeUtils.import("resource://testing-common/httpd.js");
+
+Once you've done that, you can create a new server as follows:
+
+.. code:: javascript
+
+ let server = new HttpServer(); // Or nsHttpServer() if you don't use ChromeUtils.import.
+
+ server.registerDirectory("/", nsILocalFileForBasePath);
+
+ server.start(-1); // uses a random available port, allows us to run tests concurrently
+ const SERVER_PORT = server.identity.primaryPort; // you can use this further on
+
+ // and when the tests are done, most likely from a callback...
+ server.stop(function() { /* continue execution here */ });
+
+You can also pass a numeric port argument to the ``start()`` method,
+but we strongly suggest you don't. Using a dynamic port allows us
+to run your test in parallel with other tests, which reduces wait times
+and makes everybody happy. If you really have to use a hardcoded port,
+you will have to annotate your test in the xpcshell manifest file with
+``run-sequentially = REASON``.
+However, this should only be used as a last resort.
+
+.. note::
+
+ Note: You **must** make sure to stop the server (the last line above)
+ before your test completes. Failure to do so will result in the
+ "XPConnect is being called on a scope without a Components property"
+ assertion, which will cause your test to fail in debug builds, and
+ you'll make people running tests grumbly because you've broken the
+ tests.
+
+Debugging errors
+^^^^^^^^^^^^^^^^
+
+The server's default error pages don't give much information, partly
+because the error-dispatch mechanism doesn't currently accommodate doing
+so and partly because exposing errors in a real server could make it
+easier to exploit them. If you don't know why the server is acting a
+particular way, edit
+`httpd.js <https://searchfox.org/mozilla-central/source/netwerk/test/httpserver/httpd.js>`__
+and change the value of ``DEBUG`` to ``true``. This will cause the
+server to print information about the processing of requests (and errors
+encountered doing so) to the console, and it's usually not difficult to
+determine why problems exist from that output. ``DEBUG`` is ``false`` by
+default because the information printed with it set to ``true``
+unnecessarily obscures tinderbox output.
+
+Header modification for files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The server supports modifying the headers of the files (not request
+handlers) it serves. To modify the headers for a file, create a sibling
+file with the first file's name followed by ``^headers^``. Here's an
+example of how such a file might look:
+
+.. code::
+
+ HTTP 404 I want a cool HTTP description!
+ Content-Type: text/plain
+
+The status line is optional; all other lines specify HTTP headers in the
+standard HTTP format. Any line ending style is accepted, and the file
+may optionally end with a single newline character, to play nice with
+Unix text tools like ``diff`` and ``hg``.
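+
+For illustration, the format can be parsed along these lines. This is a
+hypothetical sketch, not the server's actual parser:
+
+.. code:: javascript
+
+ // Parse an optional "HTTP <code> <description>" status line followed
+ // by "Name: value" header lines, accepting any line-ending style.
+ function parseHeadersFile(text) {
+   const lines = text.split(/\r\n|\r|\n/).filter(line => line.length > 0);
+   const result = { status: null, headers: {} };
+   const match = lines.length ? /^HTTP (\d{3}) (.*)$/.exec(lines[0]) : null;
+   if (match) {
+     lines.shift();
+     result.status = { code: Number(match[1]), description: match[2] };
+   }
+   for (const line of lines) {
+     const idx = line.indexOf(":");
+     result.headers[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
+   }
+   return result;
+ }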
+
+Hidden files
+^^^^^^^^^^^^
+
+Any file which ends with a single ``^`` is inaccessible when querying
+the web server; if you try to access such a file you'll get a
+``404 File Not Found`` page instead. If for some reason you need to
+serve a file ending with a ``^``, just tack another ``^`` onto the end
+of the file name and the file will then become available at the
+single-``^`` location.
+
+At the moment this feature is basically a way to smuggle header
+modification for files into the file system without making those files
+accessible to clients; it remains to be seen whether and how hidden-file
+capabilities will otherwise be used.
+
+SJS: server-side scripts
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Support for server-side scripts is provided through the SJS mechanism.
+Essentially an SJS is a file with a particular extension, chosen by the
+creator of the server, which contains a function with the name
+``handleRequest`` which is called to determine the response the server
+will generate. That function acts exactly like the ``handle`` function
+on the ``nsIHttpRequestHandler`` interface. First, tell the server what
+extension you're using:
+
+.. code:: javascript
+
+ const SJS_EXTENSION = "cgi";
+ server.registerContentType(SJS_EXTENSION, "sjs");
+
+Now just create an SJS with the extension ``cgi`` and write whatever you
+want. For example:
+
+.. code:: javascript
+
+ function handleRequest(request, response)
+ {
+ response.setStatusLine(request.httpVersion, 200, "OK");
+ response.write("Hello world! This request was dynamically " +
+ "generated at " + new Date().toUTCString());
+ }
+
+Further examples may be found `in the Mozilla source
+tree <https://searchfox.org/mozilla-central/search?q=&path=.sjs>`__
+in existing tests. The request object is an instance of
+``nsIHttpRequest`` and the response is a ``nsIHttpResponse``.
+Please refer to the `IDL
+documentation <https://searchfox.org/mozilla-central/source/netwerk/test/httpserver/nsIHttpServer.idl>`__
+for more details.
+
+Storing information across requests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+HTTP is basically a stateless protocol, and the httpd.js server API is
+for the most part similarly stateless. If you're using the server
+through the XPCOM interface you can simply store whatever state you want
+in enclosing environments or global variables. However, if you're using
+it through an SJS your request is processed in a near-empty environment
+every time processing occurs. To support stateful SJS behavior, the
+following functions have been added to the global scope in which a SJS
+handler executes, providing a simple key-value state storage mechanism:
+
+.. code::
+
+ /*
+ * v : T means v is of type T
+ * function A() : T means A() has type T
+ */
+
+ function getState(key : string) : string
+ function setState(key : string, value : string)
+ function getSharedState(key : string) : string
+ function setSharedState(key : string, value : string)
+ function getObjectState(key : string, callback : function(value : object) : void) // SJS API, XPCOM differs, see below
+ function setObjectState(key : string, value : object)
+
+A key is a string with arbitrary contents. The corresponding value is
+also a string, for the non-object-saving functions. For the
+object-saving functions, it is (wait for it) an object, or also
+``null``. Initially all keys are associated with the empty string or
+with ``null``, depending on whether the function accesses string- or
+object-valued storage. A stored value persists across requests and
+across server shutdowns and restarts. The state methods are available
+both in SJS and, for convenience when working with the server both via
+XPCOM and via SJS, XPCOM through the ``nsIHttpServer`` interface. The
+variants are designed to support different needs.
+
+.. warning::
+
+ **Warning:** Be careful using state: you, the user, are responsible
+ for synchronizing all uses of state through any of the available
+ methods. (This includes the methods that act only on per-path state:
+ you might still run into trouble there if your request handler
+ generates responses asynchronously. Further, any code with access to
+ the server XPCOM component could modify it between requests even if
+ you only ever used or modified that state while generating
+ synchronous responses.) JavaScript's run-to-completion behavior will
+ save you in simple cases, but with anything moderately complex you
+ are playing with fire, and if you do it wrong you will get burned.
+
+``getState`` and ``setState``
+'''''''''''''''''''''''''''''
+
+``getState`` and ``setState`` are designed for the case where a single
+request handler needs to store information from a first request of it
+for use in processing a second request of it — say, for example, if you
+wanted to implement a request handler implementing a counter:
+
+.. code:: javascript
+
+ /**
+ * Generates a response whose body is "0", "1", "2", and so on, each time a
+ * request is made. (Note that browser caching might make it appear
+ * to not quite have that behavior; a Cache-Control header would fix
+ * that issue if desired.)
+ */
+ function handleRequest(request, response)
+ {
+ var counter = +getState("counter"); // convert to number; +"" === 0
+ response.write("" + counter);
+ setState("counter", "" + ++counter);
+ }
+
+The useful feature of these two methods is that this state doesn't bleed
+outside the single path at which it resides. For example, if the above
+SJS were at ``/counter``, the value returned by ``getState("counter")``
+at some other path would be completely distinct from the counter
+implemented above. This makes it much simpler to write stateful handlers
+without state accidentally bleeding between unrelated handlers.
+
+.. note::
+
+ **Note:** State saved by this method is specific to the HTTP path,
+ excluding query string and hash reference. ``/counter``,
+ ``/counter?foo``, and ``/counter?bar#baz`` all share the same state
+ for the purposes of these methods. (Indeed, non-shared state would be
+ significantly less useful if it changed when the query string
+ changed!)
+
+.. note::
+
+   **Note:** The predefined ``__LOCATION__`` state
+   contains the native path of the SJS file itself. You can pass the
+   result directly to ``nsILocalFile.initWithPath()``. Example:
+   ``thisSJSfile.initWithPath(getState('__LOCATION__'));``
+
+``getSharedState`` and ``setSharedState``
+'''''''''''''''''''''''''''''''''''''''''
+
+``getSharedState`` and ``setSharedState`` make up the functionality
+intentionally not supported by ``getState`` and ``setState``: state
+that exists between different paths. If you used the above handler at
+the paths ``/sharedCounters/1`` and ``/sharedCounters/2`` (changing the
+state-calls to use shared state, of course), the first load of either
+handler would return "0", a second load of either handler would return
+"1", a third load either handler would return "2", and so on. This more
+powerful functionality allows you to write cooperative handlers that
+expose and manipulate a piece of shared state. Be careful! One test can
+screw up another test pretty easily if it's not careful what it does
+with this functionality.
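+
+For illustration, here is a minimal simulation of that shared-counter
+behavior. The ``getSharedState``/``setSharedState`` functions below are
+simplified stand-ins for the globals httpd.js injects into the SJS
+scope; in a real SJS you would use the provided globals directly:
+
+.. code:: javascript
+
+   // Stand-ins for the globals httpd.js provides to SJS handlers.
+   const store = new Map();
+   function getSharedState(key) { return store.get(key) || ""; }
+   function setSharedState(key, value) { store.set(key, value); }
+
+   // The same handler served at /sharedCounters/1 and /sharedCounters/2
+   // reads and updates the same "counter" key across both paths.
+   function handleRequest(request, response)
+   {
+     var counter = +getSharedState("counter"); // "" coerces to 0
+     response.write("" + counter);
+     setSharedState("counter", "" + ++counter);
+   }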
+
+``getObjectState`` and ``setObjectState``
+'''''''''''''''''''''''''''''''''''''''''
+
+``getObjectState`` and ``setObjectState`` support the remaining
+functionality not provided by the above methods: storing non-string
+values (object values or ``null``). These two methods are the same as
+``getSharedState`` and ``setSharedState`` in that state is visible
+across paths; ``setObjectState`` in one handler will expose that value
+in another handler that uses ``getObjectState`` with the same key. (This
+choice was intentional, because object values already expose mutable
+state that you have to be careful about using.) This functionality is
+particularly useful for cooperative request handlers where one request
+*suspends* another, and that second request must then be *resumed* at a
+later time by a third request. Without object-valued storage you'd need
+to resort to polling on a string value using either of the previous
+state APIs; with this, however, you can make precise callbacks exactly
+when a particular event occurs.
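+
+As a sketch of that pattern, here is a simulation of one handler parking
+its response for another to resume. The state functions and response
+objects are simplified stand-ins for what httpd.js provides, and the
+handler and key names are invented for illustration:
+
+.. code:: javascript
+
+   // Stand-ins for the object-state globals in the SJS scope.
+   const objectStore = new Map();
+   function setObjectState(key, value) { objectStore.set(key, value); }
+   function getObjectState(key, callback) { callback(objectStore.get(key) || null); }
+
+   // First request: mark the response async and park it under a key.
+   function handleSuspend(request, response) {
+     response.processAsync();
+     response.write("waiting...");
+     setObjectState("parked-response", response);
+   }
+
+   // A later request retrieves the parked response and finishes it.
+   function handleResume(request, response) {
+     getObjectState("parked-response", function(parked) {
+       if (parked) {
+         parked.write("resumed!");
+         parked.finish();
+       }
+     });
+     response.write("done");
+   }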
+
+``getObjectState`` in an SJS differs in one important way from
+``getObjectState`` accessed via XPCOM. In XPCOM the method takes a
+single string argument and returns the object or ``null`` directly. In
+SJS, however, the process to return the value is slightly different:
+
+.. code:: javascript
+
+ function handleRequest(request, response)
+ {
+ var key = request.hasHeader("key")
+ ? request.getHeader("key")
+ : "unspecified";
+ var obj = null;
+ getObjectState(key, function(objval)
+ {
+ // This function is called synchronously with the object value
+ // associated with key.
+ obj = objval;
+ });
+ response.write("Keyed object " +
+ (obj && Object.prototype.hasOwnProperty.call(obj, "doStuff")
+ ? "has "
+ : "does not have ") +
+ "a doStuff method.");
+ }
+
+This idiosyncratic API is a restriction imposed by how sandboxes
+currently work: external functions added to the sandbox can't return
+object values when called within the sandbox. However, such functions
+can accept and call callback functions, so we simply use a callback
+function here to return the object value associated with the key.
+
+Advanced dynamic response creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The default behavior of request handlers is to fully construct the
+response, return, and only then send the generated data. For certain use
+cases, however, this is infeasible. For example, a handler which wanted
+to return an extremely large amount of data (say, over 4GB on a 32-bit
+system) might run out of memory doing so. Alternatively, precise control
+over the timing of data transmission might be required so that, for
+example, one request is received, "paused" while another request is
+received and completes, and then finished. httpd.js solves this problem
+by defining a ``processAsync()`` method which indicates to the server
+that the response will be written and finished by the handler. Here's an
+example of an SJS file which writes some data, waits five seconds, and
+then writes some more data and finishes the response:
+
+.. code:: javascript
+
+ var timer = null;
+
+ function handleRequest(request, response)
+ {
+ response.processAsync();
+ response.setHeader("Content-Type", "text/plain", false);
+ response.write("hello...");
+
+ timer = Cc["@mozilla.org/timer;1"].createInstance(Ci.nsITimer);
+ timer.initWithCallback(function()
+ {
+ response.write("world!");
+ response.finish();
+ }, 5 * 1000 /* milliseconds */, Ci.nsITimer.TYPE_ONE_SHOT);
+ }
+
+The basic flow is simple: call ``processAsync`` to mark the response as
+being sent asynchronously, write data to the response body as desired,
+and when complete call ``finish()``. At the moment if you drop such a
+response on the floor, nothing will ever terminate the connection, and
+the server cannot be stopped (the stop API is asynchronous and
+callback-based); in the future a default connection timeout will likely
+apply, but for now, "don't do that".
+
+Full documentation for ``processAsync()`` and its interactions with
+other methods may, as always, be found in
+``netwerk/test/httpserver/nsIHttpServer.idl``.
+
+Manual, arbitrary response creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The standard mode of response creation is fully synchronous and is
+guaranteed to produce syntactically correct responses (excluding
+headers, which for the most part may be set to arbitrary values).
+Asynchronous processing enables the introduction of response handling
+coordinated with external events, but again, for the most part only
+syntactically correct responses may be generated. The third method of
+processing removes the correct-syntax property by allowing a response to
+contain completely arbitrary data through the ``seizePower()`` method.
+After this method is called, any data subsequently written to the
+response is written directly to the network as the response, skipping
+headers and making no attempt whatsoever to ensure any formatting of the
+transmitted data. As with asynchronous processing, the response is
+generated asynchronously and must be finished manually for the
+connection to be closed. (Again, nothing will terminate the connection
+for a response dropped on the floor, so again, "don't do that".) This
+mode of processing is useful for testing particular data formats that
+are either not HTTP or which do not match the precise, canonical
+representation that httpd.js generates. Here's an example of an SJS file
+which writes an apparent HTTP response whose status text contains a null
+byte (not allowed by HTTP/1.1, and attempting to set such status text
+through httpd.js would throw an exception) and which has a header that
+spans multiple lines (httpd.js responses otherwise generate only
+single-line headers):
+
+.. code:: javascript
+
+ function handleRequest(request, response)
+ {
+ response.seizePower();
+ response.write("HTTP/1.1 200 OK Null byte \u0000 makes this response malformed\r\n" +
+ "X-Underpants-Gnomes-Strategy:\r\n" +
+ " Phase 1: Collect underpants.\r\n" +
+ " Phase 2: ...\r\n" +
+ " Phase 3: Profit!\r\n" +
+ "\r\n" +
+ "FAIL");
+ response.finish();
+ }
+
+While the asynchronous mode is capable of producing certain forms of
+invalid responses (through setting a bogus Content-Length header prior
+to the start of body transmission, among others), it must not be used in
+this manner. No effort will be made to preserve such implementation
+quirks (indeed, some are even likely to be removed over time): if you
+want to send malformed data, use ``seizePower()`` instead.
+
+Full documentation for ``seizePower()`` and its interactions with other
+methods may, as always, be found in
+``netwerk/test/httpserver/nsIHttpServer.idl``.
+
+Example uses of the server
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Shorter examples (for tests which only do one test):
+
+- ``netwerk/test/unit/test_bug331825.js``
+- ``netwerk/test/unit/test_httpcancel.js``
+- ``netwerk/test/unit/test_cookie_header.js``
+
+Longer tests (where you'd need to do multiple async server requests):
+
+- ``netwerk/test/httpserver/test/test_setstatusline.js``
+- ``netwerk/test/unit/test_content_sniffer.js``
+- ``netwerk/test/unit/test_authentication.js``
+- ``netwerk/test/unit/test_event_sink.js``
+- ``netwerk/test/httpserver/test/``
+
+Examples of modifying HTTP headers in files may be found at
+``netwerk/test/httpserver/test/data/cern_meta/``.
+
+Future directions
+~~~~~~~~~~~~~~~~~
+
+The server, while very functional, is not yet complete. There are a
+number of things to fix and features to add, among them support for
+pipelining, support for incrementally-received requests (rather than
+buffering the entire body before invoking a request handler), and better
+conformance to the MUSTs and SHOULDs of HTTP/1.1. If you have
+suggestions for functionality or find bugs, file them in
+`Testing-httpd.js <https://bugzilla.mozilla.org/enter_bug.cgi?product=Testing&component=General>`__.
diff --git a/netwerk/docs/index.rst b/netwerk/docs/index.rst
new file mode 100644
index 0000000000..c89646d00e
--- /dev/null
+++ b/netwerk/docs/index.rst
@@ -0,0 +1,25 @@
+Networking
+==========
+
+These linked pages contain design documents for the Networking stack implementation in Gecko. They live in-tree under the 'netwerk/docs' directory.
+
+There is also documentation for the `HTTP server we use for unit tests`_.
+
+.. toctree::
+ :maxdepth: 1
+
+ cache2/doc
+ http/lifecycle
+ http/logging
+ http/http3.md
+ dns/dns-over-https-trr
+ submitting_networking_bugs.md
+ new_to_necko_resources
+ network_test_guidelines.md
+ url_parsers.md
+ webtransport/webtransport
+ captive_portals.md
+ early_hints.md
+ neqo_triage_guideline.md
+
+.. _HTTP server we use for unit tests: http_server_for_testing.html
diff --git a/netwerk/docs/neqo_triage_guideline.md b/netwerk/docs/neqo_triage_guideline.md
new file mode 100644
index 0000000000..158ca9f1ea
--- /dev/null
+++ b/netwerk/docs/neqo_triage_guideline.md
@@ -0,0 +1,12 @@
+# Neqo triage guideline
+
+[Neqo](https://github.com/mozilla/neqo/issues) has p1, p2, and p3 labels that correspond to the following Bugzilla priorities:
+- p1 - the issue should be fixed as soon as possible because it is a defect or a fix has been planned for a project.
+ - This is P1 and P2 in Bugzilla.
+- p2 - the issue should be fixed but is not critical; an issue can stay in this state for a longer time.
+  - This is P3 in Bugzilla.
+- p3 - we are not planning to fix the issue.
+ - This contains 2 sets of issues:
+ - we would take a fix if someone wants to work on it or
+ - we may not want to fix the issue at all at this time
+ - This is P5 in Bugzilla
diff --git a/netwerk/docs/network_test_guidelines.md b/netwerk/docs/network_test_guidelines.md
new file mode 100644
index 0000000000..cf51815251
--- /dev/null
+++ b/netwerk/docs/network_test_guidelines.md
@@ -0,0 +1,175 @@
+# Networking Test Guidelines
+
+This is a high-level document introducing the different test types used in Necko. The target audience is newcomers to the Necko team.
+
+## Necko Test Types
+
+This section only covers tests under the [netwerk/test](https://searchfox.org/mozilla-central/source/netwerk/test) folder.
+
+- [Chrome Tests](https://firefox-source-docs.mozilla.org/testing/chrome-tests/index.html)
+  - We usually write chrome tests when the code to be tested needs a browser window to load some particular resources.
+ - Path: [netwerk/test/browser](https://searchfox.org/mozilla-central/source/netwerk/test/browser)
+- [Reftest](https://firefox-source-docs.mozilla.org/testing/webrender/index.html)
+ - Rarely used in necko.
+- [Mochitest](https://firefox-source-docs.mozilla.org/testing/mochitest-plain/index.html)
+ - Used when the code to be tested can be triggered by WebIDL. e.g., WebSocket and XMLHttpRequest.
+ - Path: [netwerk/test/mochitests](https://searchfox.org/mozilla-central/source/netwerk/test/mochitests)
+- [XPCShell tests](https://firefox-source-docs.mozilla.org/testing/xpcshell/index.html#xpcshell-tests)
+ - Mostly used in necko to test objects that can be accessed by JS. e.g., `nsIHttpChannel`.
+ - Path: [netwerk/test/unit](https://searchfox.org/mozilla-central/source/netwerk/test/unit)
+- [GTest](https://firefox-source-docs.mozilla.org/gtest/index.html)
+  - Useful when the code doesn't need an HTTP server.
+  - Useful when writing code related to parsing strings. e.g., [Parsing Server Timing Header](https://searchfox.org/mozilla-central/rev/0249c123e74640ed91edeabba00649ef4d929372/netwerk/test/gtest/TestServerTimingHeader.cpp)
+- [Performance tests](https://firefox-source-docs.mozilla.org/testing/perfdocs/index.html)
+ - Current tests in [netwerk/test/perf](https://searchfox.org/mozilla-central/source/netwerk/test/perf) are all for testing `HTTP/3` code.
+
+There are also [web-platform-tests](https://firefox-source-docs.mozilla.org/web-platform/index.html) related to necko. We don't usually write new `web-platform-tests`. However, we do have lots of them for XHR, Fetch, and WebSocket.
+
+## Running Necko xpcshell-tests
+
+- Local:
+
+ Run all xpcshell-tests:
+
+ ```console
+ ./mach xpcshell-test netwerk/test/unit
+ ```
+
+  Note that xpcshell-tests run in parallel; sometimes we want to run them sequentially for debugging.
+
+ ```console
+ ./mach xpcshell-test --sequential netwerk/test/unit
+ ```
+
+  Run a single test:
+
+ ```console
+ ./mach xpcshell-test netwerk/test/unit/test_http3.js
+ ```
+
+ Run with socket process enabled:
+
+ ```console
+ ./mach xpcshell-test --setpref="network.http.network_access_on_socket_process.enabled=true" netwerk/test/unit/test_http3.js
+ ```
+
+ We usually debug networking issues with `HTTP Logging`. To enable logging when running tests:
+
+ ```console
+ MOZ_LOG=nsHttp:5 ./mach xpcshell-test netwerk/test/unit/test_http3.js
+ ```
+
+- Remote:
+
+ First of all, we need to know [Fuzzy Selector](https://firefox-source-docs.mozilla.org/tools/try/selectors/fuzzy.html), which is the tool we use to select which test to run on try. If you already know that your code change can be covered by necko xpcshell-tests, you can use the following command to run all tests in `netwerk/test/unit` on try.
+
+ ```console
+ ./mach try fuzzy netwerk/test/unit
+ ```
+
+ Run a single test on try:
+
+ ```console
+ ./mach try fuzzy netwerk/test/unit/test_http3.js
+ ```
+
+ Sometimes we want to debug the failed test on try with logging enabled:
+
+ ```console
+  ./mach try fuzzy --env "MOZ_LOG=nsHttp:5,nsHostResolver:5" netwerk/test/unit/test_http3.js
+ ```
+
+  Note that it's usually not a good idea to enable logging when running all tests in a folder on try, since the raw log file can be really huge. The log file might not be available if its size exceeds the limit on try.
+ In the case that your code change is too generic or you are not sure which tests to run, you can use [Auto Selector](https://firefox-source-docs.mozilla.org/tools/try/selectors/auto.html) to let it select tests for you.
+
+## Debugging intermittent test failures
+
+There are a lot of intermittent failures on try (usually not able to reproduce locally). Debugging these failures can be really annoying and time consuming. Here are some general guidelines to help you debug intermittent failures more efficiently.
+
+- Identify whether the failure is caused by your code change.
+  - Try to reproduce the intermittent failure locally. This is the most straightforward way. Adding the `--verify` flag is also helpful when debugging locally (see this [document](https://firefox-source-docs.mozilla.org/testing/test-verification/index.html) for more details).
+  - Check the failure summary on try to see if there is already a bug filed for this test failure. If yes, it's likely not caused by your code change.
+  - Re-trigger the failed test a few times and see if it passes. This can be easily done by clicking the `Push Health` button.
+  - Look for similar failures happening now on other submissions. This can be done by:
+ ```
+ click on failing job -> read failure summary -> find similar ones by other authors in similar jobs
+ ```
+  - To re-run the failed test suite more times, you could add the `rebuild` option to `./mach try`. For example, the following command runs the necko xpcshell-tests 20 times on try.
+ ```
+ ./mach try fuzzy netwerk/test/unit --rebuild 20
+ ```
+- In the case that we really need to debug an intermittent test failure, see this [document](https://firefox-source-docs.mozilla.org/devtools/tests/debugging-intermittents.html) first for some general tips. Unfortunately, there is no easy way to debug this. One can try to isolate the failed test first and enable `HTTP logging` on try to collect the log for further analysis.
+
+## Writing Necko XPCShell tests
+
+The most typical form of a necko xpcshell-test is creating an HTTP server and testing your code by letting the server return some specific responses (e.g., `103 Early Hints`). We will only introduce how to write this kind of test in this document.
+
+- Code at server side
+
+  After [bug 1756557](https://bugzilla.mozilla.org/show_bug.cgi?id=1756557), it is possible to create a `nodejs` HTTP server in your test code. This saves us some time writing server-side code by reusing the HTTP module provided by `nodejs`.
+ This is what it looks like to create a simple HTTP server:
+
+ ```js
+ let server = new NodeHTTPServer();
+ await server.start();
+ registerCleanupFunction(async () => {
+ await server.stop();
+ });
+ await server.registerPathHandler("/test", (req, resp) => {
+ resp.writeHead(200);
+ resp.end("done");
+ });
+ ```
+
+  We can also create an `HTTP/2` server easily by replacing `NodeHTTPServer` with `NodeHTTP2Server` and adding the server certificate.
+
+ ```js
+ let certdb = Cc["@mozilla.org/security/x509certdb;1"].getService(
+ Ci.nsIX509CertDB
+ );
+ addCertFromFile(certdb, "http2-ca.pem", "CTu,u,u");
+ let server = new NodeHTTP2Server();
+ ```
+
+- Code at client side
+
+  The recommended way is to create and open an HTTP channel and handle the response asynchronously with a `Promise`.
+ The code would be like:
+
+ ```js
+ function makeChan(uri) {
+ let chan = NetUtil.newChannel({
+ uri,
+ loadUsingSystemPrincipal: true,
+ }).QueryInterface(Ci.nsIHttpChannel);
+ chan.loadFlags = Ci.nsIChannel.LOAD_INITIAL_DOCUMENT_URI;
+ return chan;
+ }
+ let chan = makeChan(`http://localhost:${server.port()}/test`);
+ let req = await new Promise(resolve => {
+ chan.asyncOpen(new ChannelListener(resolve, null, CL_ALLOW_UNKNOWN_CL));
+ });
+ ```
+
+ This is what it looks like to put everything together:
+
+ ```js
+ add_task(async function test_http() {
+ let server = new NodeHTTPServer();
+ await server.start();
+ registerCleanupFunction(async () => {
+ await server.stop();
+ });
+ await server.registerPathHandler("/test", (req, resp) => {
+ resp.writeHead(200);
+ resp.end("done");
+ });
+ let chan = makeChan(`http://localhost:${server.port()}/test`);
+ let req = await new Promise(resolve => {
+ chan.asyncOpen(new ChannelListener(resolve, null, CL_ALLOW_UNKNOWN_CL));
+ });
+ equal(req.status, Cr.NS_OK);
+ equal(req.QueryInterface(Ci.nsIHttpChannel).responseStatus, 200);
+ equal(req.QueryInterface(Ci.nsIHttpChannel).protocolVersion, "http/1.1");
+ });
+ ```
diff --git a/netwerk/docs/new_to_necko_resources.rst b/netwerk/docs/new_to_necko_resources.rst
new file mode 100644
index 0000000000..a046d63bd6
--- /dev/null
+++ b/netwerk/docs/new_to_necko_resources.rst
@@ -0,0 +1,80 @@
+New-to-Necko Resources - An Aggregation
+=======================================
+
+This doc serves as a hub for resources/technologies a new-to-necko developer
+should get familiar with.
+
+Code Generation and IPC
+~~~~~~~~~~~~~~~~~~~~~~~
+
+* `IPC`_ (Inter-Process Communication) and `IPDL`_ (Inter-Thread and Inter-Process Message Passing)
+* `IDL`_ (Interface Description Language)
+ - Implementing an interface (C++/JS)
+ - XPCONNECT (scriptable/builtin)
+ - QueryInterface (QI) - do_QueryInterface/do_QueryObject
+ - do_GetService, do_CreateInstance
+* `WebIDL`_
+
+.. _IPC: /ipc/index.html
+.. _IDL: /xpcom/xpidl.html
+.. _IPDL: /ipc/ipdl.html
+.. _WebIDL: /toolkit/components/extensions/webextensions/webidl_bindings.html
+
+
+Necko interfaces
+~~~~~~~~~~~~~~~~
+
+* :searchfox:`nsISupports <xpcom/base/nsISupports.idl>`
+* :searchfox:`nsIRequest <netwerk/base/nsIRequest.idl>` ->
+ :searchfox:`nsIChannel <netwerk/base/nsIChannel.idl>` ->
+ :searchfox:`nsIHttpChannel <netwerk/protocol/http/nsIHttpChannel.idl>`
+* :searchfox:`nsIRequestObserver <netwerk/base/nsIRequestObserver.idl>` (onStart/onStopRequest)
+* :searchfox:`nsIStreamListener <netwerk/base/nsIStreamListener.idl>` (onDataAvailable)
+* :searchfox:`nsIInputStream <xpcom/io/nsIInputStream.idl>`/
+ :searchfox:`nsIOutputStream <xpcom/io/nsIOutputStream.idl>`
+
+Libraries
+~~~~~~~~~
+* `NSPR`_
+* `NSS`_
+* `PSM`_
+
+.. _NSPR: https://firefox-source-docs.mozilla.org/nspr/about_nspr.html?highlight=nspr
+.. _NSS: https://firefox-source-docs.mozilla.org/security/nss/legacy/faq/index.html
+.. _PSM: https://firefox-source-docs.mozilla.org/security/nss/legacy/faq/index.html?highlight=psm
+
+
+Preferences
+~~~~~~~~~~~
+* :searchfox:`all.js <modules/libpref/init/all.js>`
+* :searchfox:`firefox.js <browser/app/profile/firefox.js>`
+* :searchfox:`StaticPrefList.yaml <modules/libpref/init/StaticPrefList.yaml>`
+
+Debugging
+~~~~~~~~~
+* `HTTP Logging`_
+
+.. _HTTP Logging: /networking/http/logging.html
+
+Testing
+~~~~~~~
+* `xpcshell`_
+* `mochitest`_
+* `web-platform`_
+* `gtest`_
+* `marionette`_
+
+.. _xpcshell: /testing/xpcshell/index.html
+.. _mochitest: /browser/components/newtab/docs/v2-system-addon/mochitests.html
+.. _web-platform: /web-platform/index.html
+.. _gtest: /gtest/index.html
+.. _marionette: /testing/marionette/index.html
+
+
+See also
+~~~~~~~~
+ - E10S_ (Electrolysis) -> Split ``HttpChannel`` into: ``HttpChannelChild`` & ``HttpChannelParent``
+ - Fission_ -> Site isolation
+
+ .. _E10s: https://wiki.mozilla.org/Electrolysis
+ .. _Fission: https://hacks.mozilla.org/2021/05/introducing-firefox-new-site-isolation-security-architecture/
diff --git a/netwerk/docs/submitting_networking_bugs.md b/netwerk/docs/submitting_networking_bugs.md
new file mode 100644
index 0000000000..b94e57d276
--- /dev/null
+++ b/netwerk/docs/submitting_networking_bugs.md
@@ -0,0 +1,112 @@
+# Submitting actionable networking bugs
+
+So you've found a networking issue with Firefox and decided to file a bug. First of all **Thanks!**. 🎉🎉🎉
+
+## Networking bugs lifecycle
+
+After a bug is filed, it gets triaged by one of the Necko team members.
+The engineer will consider the *steps to reproduce* then will do one of the following:
+- Assign a priority. An engineer will immediately or eventually start working on the bug.
+- Move the bug to another team.
+- Request more info from the reporter or someone else.
+
+A necko bug is considered triaged when it has a priority and the `[necko-triaged]` tag has been added to the whiteboard.
+
+As a bug reporter, please do not change the `Priority` or `Severity` flags. Doing so could prevent the bug from showing up in the triage queue.
+
+<div class="note">
+<div class="admonition-title">Note</div>
+
+> For bugs to get fixed as quickly as possible engineers should spend their time
+on the actual fix, not on figuring out what might be wrong. That's why it's
+important to go through the sections below and include as much information as
+possible in the bug report.
+
+</div>
+
+
+## Make sure it's a Firefox bug
+
+Sometimes a website may be misbehaving and you'll initially think it's caused by a bug in Firefox. However, extensions and other customizations could also cause an issue. Here are a few things to check before submitting the bug:
+- [Troubleshoot extensions, themes and hardware acceleration issues to solve common Firefox problems](https://support.mozilla.org/en-US/kb/troubleshoot-extensions-themes-to-fix-problems#w_start-firefox-in-troubleshoot-mode)
+ - This will confirm if an extension is causing the issue you're seeing. If the bug goes away, with extensions turned off, you might then want to figure out which extension is causing the problem. Turn off each extension and see if it keeps happening. Include this information in the bug report.
+- [Try reproducing the bug with a new Firefox profile](https://support.mozilla.org/en-US/kb/profile-manager-create-remove-switch-firefox-profiles#w_creating-a-profile)
+ - If a bug stops happening with a new profile, that could be caused by changed prefs, or some bad configuration in your active profile.
+ - Make sure to include the contents of `about:support` in your bug report.
+- Check if the bug also happens in other browsers
+
+## Make sure the bug has clear steps to reproduce
+
+This is one of the most important requirements of getting the bug fixed. Clear steps to reproduce will help the engineer figure out what the problem is.
+If the bug can only be reproduced on a website that requires authentication you may provide a test account to the engineer via private email.
+If a certain interaction with a web server is required to reproduce the bug, feel free to attach a small nodejs, python, etc script to the bug.
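+
+As an illustration, a minimal Node.js repro script might look like the following. Everything here (the port, header names, and body) is a made-up placeholder; the point is just to show how small such an attachment can be:
+
+```js
+// Minimal Node.js server reproducing a specific server interaction.
+const http = require("http");
+
+const server = http.createServer((req, res) => {
+  // Respond with whatever header/body combination triggers the bug.
+  res.writeHead(200, { "Content-Type": "text/plain", "X-Repro": "1" });
+  res.end("test body");
+});
+
+server.listen(8888, () => console.log("repro server on http://localhost:8888/"));
+```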
+
+Sometimes a bug is intermittent (only happens occasionally) or the steps to reproduce it aren't obvious.
+It's still important to report these bugs but they should include additional info mentioned below so the engineers have a way to investigate.
+
+### Example 1:
+```
+ 1. Load `http://example.com`
+ 2. Click on button
+ 3. See that nothing happens and an exception is present in the web console.
+```
+### Example 2:
+```
+ 1. Download attached testcase
+ 2. Run testcase with the following command: `node index.js`
+ 3. Go to `http://localhost:8888/test` and click the button
+```
+
+## Additional questions
+
+- Are you using a proxy? What kind?
+- Are you using DNS-over-HTTPS?
+ - If the `DoH mode` at about:networking#dns is 2 or 3 then the answer is yes.
+- What platform are you using? (Operating system, Linux distribution, etc)
+ - It's best to simply copy the output of `about:support`
+
+## MozRegression
+
+If a bug is easy to reproduce and you think it used to work before, consider using MozRegression to track down when/what started causing this issue.
+
+First you need to [install the tool](https://mozilla.github.io/mozregression/install.html). Then just follow [the instructions](https://mozilla.github.io/mozregression/quickstart.html) presented by mozregression. Reproducing the bug a dozen times might be necessary before the tool tracks down the cause.
+
+At the end you will be presented with a regression range that includes the commits that introduced the bug.
+
+## Performance issues
+
+If you're seeing a performance issue (site is very slow to load, etc) you should consider submitting a performance profile.
+
+- Activate the profiler at: [https://profiler.firefox.com/](https://profiler.firefox.com/)
+- Use the `Networking` preset and click `Start Recording`.
+
+## Crashes
+
+If something you're doing is causing a crash, having the link to the stack trace is very useful.
+
+- Go to `about:crashes`
+- Paste the **Report ID** of the crash in the bug.
+
+## HTTP logs
+
+See the [HTTP Logging](https://firefox-source-docs.mozilla.org/networking/http/logging.html) page for steps to capture HTTP logs.
+
+If the logs are large you can create a zip archive and attach them to the bug. If the archive is still too large to attach, you can upload it to a file storage service such as Google Drive or OneDrive and submit the public link.
+
+Logs may include personal information such as cookies. Try using a fresh Firefox profile to capture the logs. If that is not possible, you can also put them in a password protected archive, or send them directly via email to the developer working on the bug.
+
+## Wireshark dump
+
+In some cases it is necessary to see exactly what bytes Firefox is sending and receiving over the network. When that happens, the developer working on the bug might ask you for a wireshark capture.
+
+[Download](https://www.wireshark.org/download.html) it then run it while reproducing the bug.
+
+If the website you're loading to reproduce the bug is over HTTPS, then it might be necessary to [decrypt the capture file](https://wiki.wireshark.org/TLS#Using_the_.28Pre.29-Master-Secret) when recording it.
+
+## Web console and browser console errors
+
+Sometimes a website breaks because its assumptions about executing JavaScript in Firefox are wrong. When that happens, the JavaScript engine might throw exceptions that break the website you're viewing.
+
+When reporting a broken website or a proxy issue, also check the [web console](https://developer.mozilla.org/en-US/docs/Tools/Web_Console) (press Ctrl+Shift+K, or Command+Option+K on macOS) and the [browser console](https://developer.mozilla.org/en-US/docs/Tools/Browser_Console) (press Ctrl+Shift+J, or Cmd+Shift+J on macOS).
+
+If they contain errors or warnings, it would be good to add them to the bug report (text is best, but a screenshot is also acceptable).
diff --git a/netwerk/docs/url_parsers.md b/netwerk/docs/url_parsers.md
new file mode 100644
index 0000000000..f5bd1f110e
--- /dev/null
+++ b/netwerk/docs/url_parsers.md
@@ -0,0 +1,143 @@
+# URL parsers
+
+```{warning}
+To ensure thread safety, all URI objects and the interfaces they expose must be immutable.
+If you are implementing a new URI type, please make sure that none of the type's public methods changes the URL.
+```
+
+## Definitions
+- URI - Uniform Resource Identifier
+- URL - Uniform Resource Locator
+
+These two terms are used interchangeably throughout the codebase and essentially mean the same thing: a string of characters that identifies a specific resource.
+
+## Motivation
+
+While we could simply pass strings around and leave parsing to the final consumer, that would burden each consumer and be inefficient. Instead, we parse the string into an nsIURI object as soon as possible and pass that object through function calls. This allows the consumer to easily extract only the part of the string it is interested in (e.g. the hostname or the path).
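+
+The same idea can be shown with Python's standard library: parse the string once into a structured object, then let each consumer read just the component it needs.
+
+```python
+from urllib.parse import urlsplit
+
+# Parse once into an immutable, structured object...
+parts = urlsplit("https://user@example.com:8080/a/b?q=1")
+
+# ...then each consumer reads only the component it cares about.
+assert parts.hostname == "example.com"
+assert parts.port == 8080
+assert parts.path == "/a/b"
+```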
+
+## Interfaces
+- [nsIURI](https://searchfox.org/mozilla-central/source/netwerk/base/nsIURI.idl)
+ - This is the most important interface for URI parsing. It contains a series of readonly attributes that consumers can use to extract information from the URI.
+- [nsIURL](https://searchfox.org/mozilla-central/source/netwerk/base/nsIURL.idl)
+ - Defines a structure for the URI's path (directory, fileName, fileBaseName, fileExtension)
+- [nsIFileURL](https://searchfox.org/mozilla-central/source/netwerk/base/nsIFileURL.idl)
+ - Has a file attribute of type `nsIFile`
+ - Used for local protocols to access the file represented by the `nsIURI`
+- [nsIMozIconURI](https://searchfox.org/mozilla-central/source/image/nsIIconURI.idl)
+  - Used to represent an icon. Contains additional attributes such as the icon's size, contentType, and state.
+- [nsIJARURI](https://searchfox.org/mozilla-central/source/modules/libjar/nsIJARURI.idl)
+ - Used to represent a resource inside of a JAR (zip archive) file.
+ - For example `jar:http://www.example.com/blue.jar!/ocean.html` represents the `/ocean.html` resource located inside the `blue.jar` archive that can be fetched via HTTP from example.com.
+- [nsIStandardURL](https://searchfox.org/mozilla-central/source/netwerk/base/nsIStandardURL.idl)
+ - Defines a few constant flags used to determine the type of the URL. No other attributes.
+- [nsINestedURI](https://searchfox.org/mozilla-central/source/netwerk/base/nsINestedURI.idl)
+ - Defines `innerURI` and `innermostURI`.
+ - `innermostURI` is just a helper - one could also get it by going through `innerURI` repeatedly until the attribute no longer QIs to nsINestedURI.
+- [nsISensitiveInfoHiddenURI](https://searchfox.org/mozilla-central/source/netwerk/base/nsISensitiveInfoHiddenURI.idl)
+ - Objects that implement this interface will have a `getSensitiveInfoHiddenSpec()` method that returns the spec of the URI with sensitive info (such as the password) replaced by the `*` symbol.
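+
+The `innerURI`/`innermostURI` relationship can be sketched in plain Python (hypothetical class names, not the actual XPCOM interfaces), using the JAR example from above:
+
+```python
+class SimpleURI:
+    def __init__(self, spec):
+        self.spec = spec
+
+class NestedURI(SimpleURI):
+    def __init__(self, spec, inner):
+        super().__init__(spec)
+        self.inner_uri = inner  # plays the role of nsINestedURI.innerURI
+
+def innermost_uri(uri):
+    # innermostURI is just a helper: follow innerURI until the current
+    # object no longer nests.
+    while hasattr(uri, "inner_uri"):
+        uri = uri.inner_uri
+    return uri
+
+http = SimpleURI("http://www.example.com/blue.jar")
+jar = NestedURI("jar:http://www.example.com/blue.jar!/ocean.html", http)
+assert innermost_uri(jar) is http
+```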
+
+### Diagram of interfaces
+```{mermaid}
+classDiagram
+nsISupports <-- nsIURI
+nsIURI <-- nsIURL
+nsIURL <-- nsIFileURL
+nsIURI <-- nsIMozIconURI
+nsIURL <-- nsIJARURI
+nsISupports <-- nsIStandardURL
+nsISupports <-- nsINestedURI
+nsISupports <-- nsISensitiveInfoHiddenURI
+```
+
+### Mutation
+
+To ensure thread safety, all implementations of nsIURI must be immutable.
+To change a URI, the consumer must call `nsIURI.mutate()`, which returns an `nsIMutator`. The `nsIMutator` has several setter methods that can be used to change attributes on the concrete object. Once done changing the object, the consumer calls `nsIMutator.finalize()` to obtain an immutable `nsIURI`.
+
+- [nsIURIMutator](https://searchfox.org/mozilla-central/source/netwerk/base/nsIURIMutator.idl)
+ - This interface contains a series of setters that can be used to mutate and/or construct a `nsIURI`
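+
+A conceptual Python sketch of this builder pattern (the real API is `nsIURIMutator` in C++; every name below is illustrative only):
+
+```python
+from dataclasses import dataclass
+
+@dataclass(frozen=True)  # frozen: the URI itself stays immutable
+class URI:
+    scheme: str
+    host: str
+    path: str
+
+    def mutate(self):
+        return Mutator(self)
+
+class Mutator:
+    def __init__(self, uri):
+        self._fields = {"scheme": uri.scheme, "host": uri.host, "path": uri.path}
+
+    def set_scheme(self, scheme):
+        self._fields["scheme"] = scheme
+        return self  # setters chain, like the nsIURIMutator setters
+
+    def finalize(self):
+        # finalize() hands back a new immutable URI.
+        return URI(**self._fields)
+
+uri = URI("http", "example.com", "/path")
+new_uri = uri.mutate().set_scheme("https").finalize()
+assert new_uri.scheme == "https"
+assert uri.scheme == "http"  # the original object is untouched
+```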
+
+
+### Additional interfaces
+
+- [nsISerializable](https://searchfox.org/mozilla-central/source/xpcom/ds/nsISerializable.idl)
+ - Allows us to serialize and deserialize URL objects into strings for persistent storage (such as session restore).
+
+## Implementations
+- [nsStandardURL](https://searchfox.org/mozilla-central/source/netwerk/base/nsStandardURL.h)
+- [SubstitutingURL](https://searchfox.org/mozilla-central/source/netwerk/protocol/res/SubstitutingURL.h)
+  - Overrides `nsStandardURL::GetFile` to provide `nsIFile` resolution.
+ - This allows us to map URLs such as `resource://gre/actors/RemotePageChild.jsm` to the actual file on the disk.
+- [nsMozIconURI](https://searchfox.org/mozilla-central/source/image/decoders/icon/nsIconURI.h)
+ - Used to represent icon URLs
+- [nsSimpleURI](https://searchfox.org/mozilla-central/source/netwerk/base/nsSimpleURI.h)
+ - Used for simple URIs that normally don't have an authority (username, password, host, port)
+- [nsSimpleNestedURI](https://searchfox.org/mozilla-central/source/netwerk/base/nsSimpleNestedURI.h)
+  - e.g. `view-source:http://example.com/path`
+  - Normally only the extra scheme of the nested URI is relevant (e.g. `view-source:`)
+  - Most of the getters/setters are delegated to the innerURI
+- [nsNestedAboutURI](https://searchfox.org/mozilla-central/source/netwerk/protocol/about/nsAboutProtocolHandler.h)
+  - Similar to nsSimpleNestedURI, but has an extra `mBaseURI` member that allows us to propagate the base URI to `about:blank` correctly.
+- [BlobURL](https://searchfox.org/mozilla-central/source/dom/file/uri/BlobURL.h)
+  - Used for JavaScript blobs
+ - Similar to nsSimpleURI, but also has a revoked field.
+- [DefaultURI](https://searchfox.org/mozilla-central/source/netwerk/base/DefaultURI.h)
+ - This class wraps an object parsed by the `rust-url` crate.
+  - While not yet enabled by default due to small bugs in that parser, the plan is to eventually use this implementation for all _unknown protocols_ that don't have their own URL parser.
+- [nsJSURI](https://searchfox.org/mozilla-central/source/dom/jsurl/nsJSProtocolHandler.h)
+  - Used to represent JavaScript code (e.g. `javascript:alert('hello')`)
+- [nsJARURI](https://searchfox.org/mozilla-central/source/modules/libjar/nsJARURI.h)
+ - Used to represent resources inside of JAR files.
+
+### Diagram of implementations
+
+```{mermaid}
+classDiagram
+nsSimpleURI o-- BlobURL
+nsIMozIconURI o-- nsMozIconURI
+nsIFileURL o-- nsStandardURL
+nsIStandardURL o-- nsStandardURL
+nsISensitiveInfoHiddenURI o-- nsStandardURL
+nsStandardURL o-- SubstitutingURL
+nsIURI o-- nsSimpleURI
+nsSimpleURI o-- nsSimpleNestedURI
+nsSimpleNestedURI o-- nsNestedAboutURI
+
+nsIURI o-- DefaultURI
+
+nsSimpleURI o-- nsJSURI
+
+nsINestedURI o-- nsJARURI
+nsIJARURI o-- nsJARURI
+```
+
+## Class and interface diagram
+
+```{mermaid}
+classDiagram
+nsISupports <-- nsIURI
+nsIURI <-- nsIURL
+nsIURL <-- nsIFileURL
+nsIURI <-- nsIMozIconURI
+nsIURL <-- nsIJARURI
+nsISupports <-- nsIStandardURL
+nsISupports <-- nsINestedURI
+nsISupports <-- nsISensitiveInfoHiddenURI
+
+%% classes
+
+nsSimpleURI o-- BlobURL
+nsSimpleURI o-- nsJSURI
+nsIMozIconURI o-- nsMozIconURI
+nsIFileURL o-- nsStandardURL
+nsIStandardURL o-- nsStandardURL
+nsISensitiveInfoHiddenURI o-- nsStandardURL
+nsStandardURL o-- SubstitutingURL
+nsIURI o-- nsSimpleURI
+nsINestedURI o-- nsJARURI
+nsIJARURI o-- nsJARURI
+nsSimpleURI o-- nsSimpleNestedURI
+nsSimpleNestedURI o-- nsNestedAboutURI
+nsIURI o-- DefaultURI
+
+```
diff --git a/netwerk/docs/webtransport/webtransport.md b/netwerk/docs/webtransport/webtransport.md
new file mode 100644
index 0000000000..452d1fd63a
--- /dev/null
+++ b/netwerk/docs/webtransport/webtransport.md
@@ -0,0 +1,6 @@
+WebTransport
+============
+
+Components:
+
+- [WebTransportSessionProxy](webtransportsessionproxy.md)
diff --git a/netwerk/docs/webtransport/webtransportsessionproxy.md b/netwerk/docs/webtransport/webtransportsessionproxy.md
new file mode 100644
index 0000000000..02fae55361
--- /dev/null
+++ b/netwerk/docs/webtransport/webtransportsessionproxy.md
@@ -0,0 +1,19 @@
+# WebTransportSessionProxy
+
+WebTransportSessionProxy enables the creation of an Http3WebTransportSession and coordinates actions that are performed on the main thread and on the socket thread.
+
+WebTransportSessionProxy can be in different states, and the following diagram depicts the transitions between them. "MT" and "ST" mean the action happens on the main thread or the socket thread, respectively. More details about this class can be found in [WebTransportSessionProxy.h](https://searchfox.org/mozilla-central/source/netwerk/protocol/webtransport/WebTransportSessionProxy.h).
+
+```{mermaid}
+graph TD
+ A[INIT] -->|"nsIWebTransport::AsyncConnect; MT"| B[NEGOTIATING]
+ B -->|"200 response; ST"| C[NEGOTIATING_SUCCEEDED]
+ B -->|"nsHttpChannel::OnStart/OnStop failed; MT"| D[DONE]
+ B -->|"nsIWebTransport::CloseSession; MT"| D
+ C -->|"nsHttpChannel::OnStart/OnStop failed; MT"| F[SESSION_CLOSE_PENDING]
+ C -->|"nsHttpChannel::OnStart/OnStop succeeded; MT"| E[ACTIVE]
+ E -->|"nsIWebTransport::CloseSession; MT"| F
+  E -->|"The peer closed the session, or an HTTP/3 connection error; ST"| G[CLOSE_CALLBACK_PENDING]
+  F -->|"CloseSessionInternal called, the peer closed the session, or an HTTP/3 connection error; ST"| D
+ G -->|"CallOnSessionClosed or nsIWebTransport::CloseSession; MT"| D
+```
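+
+The transitions above can also be read as a table. The Python sketch below models the diagram only, not the C++ class; state names come from the diagram and event names are abbreviated.
+
+```python
+from enum import Enum, auto
+
+class State(Enum):
+    INIT = auto()
+    NEGOTIATING = auto()
+    NEGOTIATING_SUCCEEDED = auto()
+    ACTIVE = auto()
+    SESSION_CLOSE_PENDING = auto()
+    CLOSE_CALLBACK_PENDING = auto()
+    DONE = auto()
+
+# (state, event) -> next state, transcribed from the diagram above.
+TRANSITIONS = {
+    (State.INIT, "AsyncConnect"): State.NEGOTIATING,
+    (State.NEGOTIATING, "200 response"): State.NEGOTIATING_SUCCEEDED,
+    (State.NEGOTIATING, "OnStart/OnStop failed"): State.DONE,
+    (State.NEGOTIATING, "CloseSession"): State.DONE,
+    (State.NEGOTIATING_SUCCEEDED, "OnStart/OnStop failed"): State.SESSION_CLOSE_PENDING,
+    (State.NEGOTIATING_SUCCEEDED, "OnStart/OnStop succeeded"): State.ACTIVE,
+    (State.ACTIVE, "CloseSession"): State.SESSION_CLOSE_PENDING,
+    (State.ACTIVE, "peer closed / connection error"): State.CLOSE_CALLBACK_PENDING,
+    (State.SESSION_CLOSE_PENDING, "close completed"): State.DONE,
+    (State.CLOSE_CALLBACK_PENDING, "CallOnSessionClosed"): State.DONE,
+}
+
+def step(state, event):
+    # Raises KeyError for transitions the diagram does not allow.
+    return TRANSITIONS[(state, event)]
+
+s = step(State.INIT, "AsyncConnect")
+s = step(s, "200 response")
+s = step(s, "OnStart/OnStop succeeded")
+assert s is State.ACTIVE
+```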