author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-07 19:33:14 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-07 19:33:14 +0000
commit     36d22d82aa202bb199967e9512281e9a53db42c9 (patch)
tree       105e8c98ddea1c1e4784a60a5a6410fa416be2de /build/docs
parent     Initial commit. (diff)
download   firefox-esr-36d22d82aa202bb199967e9512281e9a53db42c9.tar.xz
           firefox-esr-36d22d82aa202bb199967e9512281e9a53db42c9.zip

Adding upstream version 115.7.0esr. (upstream/115.7.0esr)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'build/docs')
-rw-r--r--  build/docs/build-overview.rst | 117
-rw-r--r--  build/docs/build-targets.rst | 62
-rw-r--r--  build/docs/chrome-registration.rst | 457
-rw-r--r--  build/docs/cppeclipse.rst | 53
-rw-r--r--  build/docs/cross-compile.rst | 15
-rw-r--r--  build/docs/defining-binaries.rst | 345
-rw-r--r--  build/docs/defining-xpcom-components.rst | 313
-rw-r--r--  build/docs/environment-variables.rst | 31
-rw-r--r--  build/docs/files-metadata.rst | 178
-rw-r--r--  build/docs/glossary.rst | 47
-rw-r--r--  build/docs/gn.rst | 17
-rw-r--r--  build/docs/index.rst | 57
-rw-r--r--  build/docs/jar-manifests.rst | 123
-rw-r--r--  build/docs/locales.rst | 368
-rw-r--r--  build/docs/mozbuild-files.rst | 176
-rw-r--r--  build/docs/mozbuild-symbols.rst | 7
-rw-r--r--  build/docs/mozbuild/index.rst | 40
-rw-r--r--  build/docs/mozconfigs.rst | 69
-rw-r--r--  build/docs/mozinfo.rst | 176
-rw-r--r--  build/docs/pgo.rst | 28
-rw-r--r--  build/docs/preprocessor.rst | 219
-rw-r--r--  build/docs/python.rst | 165
-rw-r--r--  build/docs/rust.rst | 180
-rw-r--r--  build/docs/sccache-dist.rst | 194
-rw-r--r--  build/docs/slow.rst | 153
-rw-r--r--  build/docs/sparse.rst | 157
-rw-r--r--  build/docs/supported-configurations.rst | 166
-rw-r--r--  build/docs/telemetry.rst | 49
-rw-r--r--  build/docs/test_certificates.rst | 40
-rw-r--r--  build/docs/test_manifests.rst | 226
-rw-r--r--  build/docs/toolchains.rst | 267
-rw-r--r--  build/docs/unified-builds.rst | 55
-rw-r--r--  build/docs/visualstudio.rst | 82
33 files changed, 4632 insertions, 0 deletions
diff --git a/build/docs/build-overview.rst b/build/docs/build-overview.rst
new file mode 100644
index 0000000000..a7784e7b1a
--- /dev/null
+++ b/build/docs/build-overview.rst
@@ -0,0 +1,117 @@
+.. _build_overview:
+
+=====================
+Build System Overview
+=====================
+
+This document provides an overview of how the build system works. It is
+targeted at people wanting to learn about the internals of the build system.
+It is not meant for people who only casually interact with the build system.
+That being said, knowledge empowers, so consider reading on.
+
+The build system is composed of many different components working in
+harmony to build the source tree. We begin with a graphic overview.
+
+.. graphviz::
+
+ digraph build_components {
+ rankdir="LR";
+ "configure" -> "config.status" -> "build backend" -> "build output"
+ }
+
+Phase 1: Configuration
+======================
+
+Phase 1 centers around the ``configure`` script, which is a bash shell script.
+The file is generated from a file called ``configure.in`` which is written in M4
+and processed using Autoconf 2.13 to create the final configure script.
+You don't have to worry about how you obtain a ``configure`` file: the build
+system does this for you.
+
+The primary job of ``configure`` is to determine characteristics of the system
+and compiler, apply options passed into it, and validate everything looks OK to
+build. The primary output of the ``configure`` script is an executable file
+in the object directory called ``config.status``. ``configure`` also produces
+some additional files (like ``autoconf.mk``). However, the most important file
+in terms of architecture is ``config.status``.
+
+The existence of a ``config.status`` file may be familiar to those who have worked
+with Autoconf before. However, Mozilla's ``config.status`` is different from almost
+any other ``config.status`` you've ever seen: it's written in Python! Instead of
+having our ``configure`` script produce a shell script, we have it generating
+Python.
+
+Now is as good a time as any to mention that Python is prevalent in our build
+system. If we need to write code for the build system, we do it in Python.
+That's just how we roll. For more, see :ref:`python`.
+
+``config.status`` contains 2 parts: data structures representing the output of
+``configure`` and a command-line interface for preparing/configuring/generating
+an appropriate build backend. (A build backend is merely a tool used to build
+the tree - like GNU Make or Tup). These data structures essentially describe
+the current state of the system and what the existing build configuration looks
+like. For example, it defines which compiler to use, how to invoke it, which
+application features are enabled, etc. You are encouraged to open up
+``config.status`` to have a look for yourself!
+
+Once we have emitted a ``config.status`` file, we pass into the realm of
+phase 2.
+
+Phase 2: Build Backend Preparation and the Build Definition
+===========================================================
+
+Once ``configure`` has determined what the current build configuration is,
+we need to apply this to the source tree so we can actually build.
+
+Essentially, the automatically-produced ``config.status`` Python
+script is executed as soon as ``configure`` has generated it. ``config.status``
+is charged with the task of telling a tool how to build the tree. To do this,
+``config.status`` must first scan the build system definition.
+
+The build system definition consists of various ``moz.build`` files in the tree.
+There is roughly one ``moz.build`` file per directory or per set of related directories.
+Each ``moz.build`` file defines how its part of the build config works. For
+example, it says *I want these C++ files compiled* or *look for additional
+information in these directories.* ``config.status`` starts with the ``moz.build``
+file from the root directory and then descends into referenced ``moz.build``
+files by following ``DIRS`` variables or similar.
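+
+As an illustrative sketch (the file names and the ``tests`` subdirectory are
+hypothetical, but ``DIRS`` and ``UNIFIED_SOURCES`` are real variables), a
+``moz.build`` file might look like this:
+
+.. code-block:: python
+
+    # Also read the moz.build files in these subdirectories.
+    DIRS += ['tests']
+
+    # Compile these C++ sources as part of this directory.
+    UNIFIED_SOURCES += [
+        'Widget.cpp',
+        'WidgetFactory.cpp',
+    ]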
+
+As the ``moz.build`` files are read, data structures describing the overall
+build system definition are emitted. These data structures are then fed into a
+build backend, which then performs actions, such as writing out files to
+be read by a build tool. e.g. a ``make`` backend will write a
+``Makefile``.
+
+When ``config.status`` runs, you'll see the following output::
+
+ Reticulating splines...
+ Finished reading 1096 moz.build files into 1276 descriptors in 2.40s
+ Backend executed in 2.39s
+ 2188 total backend files. 0 created; 1 updated; 2187 unchanged
+ Total wall time: 5.03s; CPU time: 3.79s; Efficiency: 75%
+
+What this is saying is that a total of *1096* ``moz.build`` files were read.
+Altogether, *1276* data structures describing the build configuration were
+derived from them. It took *2.40s* wall time to just read these files and
+produce the data structures. The *1276* data structures were fed into the
+build backend which then determined it had to manage *2188* files derived
+from those data structures. Most of them already existed and didn't need
+to be changed. However, *1* was updated as a result of the new configuration.
+The whole process took *5.03s*, although only *3.79s* of that was
+CPU time. That likely means we spent roughly *25%* of the time waiting on
+I/O.
+
+For more on how ``moz.build`` files work, see :ref:`mozbuild-files`.
+
+Phase 3: Invocation of the Build Backend
+========================================
+
+When most people think of the build system, they think of phase 3. This is
+where we take all the code in the tree and produce Firefox or whatever
+application you are creating. Phase 3 effectively takes whatever was
+generated by phase 2 and runs it. Since the dawn of Mozilla, this has been
+make consuming Makefiles. However, with the transition to moz.build files,
+you may soon see non-Make build backends, such as Tup or Visual Studio.
+
+When building the tree, most of the time is spent in phase 3. This is when
+header files are installed, C++ files are compiled, files are preprocessed, etc.
diff --git a/build/docs/build-targets.rst b/build/docs/build-targets.rst
new file mode 100644
index 0000000000..dacd46c7f4
--- /dev/null
+++ b/build/docs/build-targets.rst
@@ -0,0 +1,62 @@
+.. _build_targets:
+
+=============
+Build Targets
+=============
+
+When you build with ``mach build``, there are some special targets that can be
+built. This page attempts to document them.
+
+Partial Tree Targets
+====================
+
+The targets in this section only build part of the tree. Please note that
+partial tree builds can be unreliable. Use at your own risk.
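+
+For example, one of the targets described below can be built by passing its
+name to ``mach build``::
+
+    mach build export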
+
+export
+ Build the *export* tier. The *export* tier builds everything that is
+ required for C/C++ compilation. It stages all header files, processes
+ IDLs, etc.
+
+compile
+ Build the *compile* tier. The *compile* tier compiles all C/C++ files.
+
+libs
+ Build the *libs* tier. The *libs* tier performs linking and performs
+ most build steps which aren't related to compilation.
+
+tools
+ Build the *tools* tier. The *tools* tier mostly deals with supplementary
+ tools and compiled tests. It will link tools against libXUL, including
+ compiled test binaries.
+
+binaries
+ Recompiles and relinks C/C++ files. Only works after a complete normal
+ build, but allows for much faster rebuilds of C/C++ code. For performance
+ reasons, however, it skips nss, nspr, icu and ffi. This is targeted at
+ improving the local developer workflow when touching C/C++ code.
+
+install-manifests
+ Process install manifests. Install manifests handle the installation of
+ files into the object directory.
+
+ Unless ``NO_REMOVE=1`` is defined in the environment, files not accounted
+ for in the install manifests will be deleted from the object directory.
+
+install-tests
+ Processes the tests install manifest.
+
+Common Actions
+==============
+
+The targets in this section correspond to common build-related actions. Many
+of the actions in this section are effectively frontends to shell scripts.
+These actions will likely all be replaced by mach commands someday.
+
+buildsymbols
+ Create a symbols archive for the current build.
+
+ This must be performed after a successful build.
+
+check
+ Run build system tests.
diff --git a/build/docs/chrome-registration.rst b/build/docs/chrome-registration.rst
new file mode 100644
index 0000000000..e62636f7a2
--- /dev/null
+++ b/build/docs/chrome-registration.rst
@@ -0,0 +1,457 @@
+Chrome Registration
+-------------------
+
+What is chrome?
+---------------
+
+`Chrome` is the set of user interface elements of the
+application window that are outside the window's content area. Toolbars,
+menu bars, progress bars, and window title bars are all examples of
+elements that are typically part of the chrome.
+
+``chrome.manifest`` files are used to register XPCOM components and sources for the chrome protocol.
+Every application supplies a root ``chrome.manifest`` file that Mozilla reads on startup.
+
+Chrome providers
+----------------
+
+A supplier of chrome for a given window type (e.g., for the browser
+window) is called a chrome provider. The providers work together to
+supply a complete set of chrome for a particular window, from the images
+on the toolbar buttons to the files that describe the text, content, and
+appearance of the window itself.
+
+There are three basic types of chrome providers:
+
+Content
+ The main source file for a window description comes from the content
+ provider, and it can be any file type viewable from within Mozilla.
+ It will typically be a XUL file, since XUL is designed for describing
+ the contents of windows and dialogs. The JavaScript files that define
+ the user interface are also contained within the content packages.
+
+Locale
+ Localizable applications keep all their localized information in
+ locale providers and Fluent FTL files, which are handled separately.
+ This allows translators to plug in a different
+ chrome package to translate an application without altering the rest
+ of the source code. In a chrome provider, localizable files are mostly
+ Java-style properties files.
+
+Skin
+ A skin provider is responsible for providing a complete set of files
+ that describe the visual appearance of the chrome. Typically a skin
+ provider will provide CSS files and
+ images.
+
+The chrome registry
+-------------------
+
+The Gecko runtime maintains a service known as the chrome registry that
+provides mappings from chrome package names to the physical location of
+chrome packages on disk.
+
+This chrome registry is configurable and persistent, and thus a user can
+install different chrome providers, and select a preferred skin and
+locale. This is accomplished through xpinstall and the extension
+manager.
+
+In order to inform the chrome registry of the available chrome, a plain text
+manifest is used: this manifest is ``chrome.manifest`` in the root of an
+extension, theme, or XULRunner application.
+
+The plaintext chrome manifests are in a simple line-based format. Each
+line is parsed individually; if the line is parsable the chrome registry
+takes the action identified by that line, otherwise the chrome registry
+ignores that line (and prints a warning message in the runtime error
+console).
+
+.. code::
+
+ locale packagename localename path/to/files
+ skin packagename skinname path/to/files
+
+.. note::
+
+ The characters @ # ; : ? / are not allowed in the
+ packagename.
+
+Manifest instructions
+---------------------
+
+comments
+~~~~~~~~
+
+.. code::
+
+ # this line is a comment - you can put here whatever you want
+
+A line is a comment if it begins with the character '#'. Any following
+characters on the same line are ignored.
+
+manifest
+~~~~~~~~
+
+::
+
+ manifest subdirectory/foo.manifest [flags]
+
+This will load a secondary manifest file. This can be useful for
+separating component and chrome registration instructions, or separating
+platform-specific registration data.
+
+component
+~~~~~~~~~
+
+::
+
+ component {00000000-0000-0000-0000-000000000000} components/mycomponent.js [flags]
+
+Informs Mozilla about a component CID implemented by an XPCOM component
+implemented in JavaScript (or another scripting language, if
+applicable). The ClassID {0000...} must match the ClassID implemented by
+the component. To generate a unique ClassID, use a UUID generator
+program or site.
+
+contract
+~~~~~~~~
+
+::
+
+ contract @foobar/mycontract;1 {00000000-0000-0000-0000-000000000000} [flags]
+
+Maps a contract ID (a readable string) to the ClassID for a specific
+implementation. Typically a contract ID will be paired with a component
+entry immediately preceding.
+
+category
+~~~~~~~~
+
+::
+
+ category category entry-name value [flags]
+
+Registers an entry in the `category manager`. The
+specific format and meaning of category entries depend on the category.
+
+content
+~~~~~~~
+
+A content package is registered with the line:
+
+::
+
+ content packagename uri/to/files/ [flags]
+
+This will register a location to use when resolving the URI
+``chrome://packagename/content/...``. The URI may be absolute or
+relative to the location of the manifest file. Note: it must end with a
+'/'.
+
+locale
+~~~~~~
+
+A locale package is registered with the line:
+
+.. code::
+
+ locale packagename localename uri/to/files/ [flags]
+
+This will register a locale package when resolving the URI
+chrome://*packagename*/locale/... . The *localename* is usually a plain
+language identifier "en" or a language-country identifier "en-US". If
+more than one locale is registered for a package, the chrome registry
+will select the best-fit locale using the user's preferences.
+
+skin
+~~~~
+
+A skin package is registered with the line:
+
+.. code::
+
+ skin packagename skinname uri/to/files/ [flags]
+
+This will register a skin package when resolving the URI
+chrome://packagename/skin/... . The *skinname* is an opaque string
+identifying an installed skin. If more than one skin is registered for a
+package, the chrome registry will select the best-fit skin using the
+user's preferences.
+
+style
+~~~~~
+
+Style overlays (custom CSS which will be applied to a chrome page) are
+registered with the following syntax:
+
+.. code::
+
+ style chrome://URI-to-style chrome://stylesheet-URI [flags]
+
+override
+~~~~~~~~
+
+In some cases an extension or embedder may wish to override a chrome
+file provided by the application or XULRunner. In order to allow for
+this, the chrome registration manifest allows for "override"
+instructions:
+
+.. code::
+
+ override chrome://package/type/original-uri.whatever new-resolved-URI [flags]
+
+Note: overrides are not recursive (so overriding
+chrome://foo/content/bar/ with file:///home/john/blah/ will not usually
+do what you want or expect it to do). Also, the path inside overridden
+files is relative to the overridden path, not the original one (this can
+be annoying and/or useful in CSS files, for example).
+
+resource
+~~~~~~~~
+
+Aliases can be created using the ``resource`` instruction:
+
+.. code::
+
+ resource aliasname uri/to/files/ [flags]
+
+This will create a mapping for ``resource://<aliasname>/`` URIs to the
+path given.
+
+.. note::
+
+ **Note:** There are no security restrictions preventing web content
+ from including content at resource: URIs, so take care what you make
+ visible there.
+
+Manifest flags
+--------------
+
+Manifest lines can have multiple, space-delimited flags added at the end
+of the registration line. These flags mark special attributes of chrome
+in that package, or limit the conditions under which the line is used.
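+
+For example (an illustrative line, not taken from a real manifest), several of
+the flags described below can be combined on a single registration line::
+
+    content packagename uri/to/files/ contentaccessible=yes os=WINNT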
+
+application
+~~~~~~~~~~~
+
+Extensions may install into multiple applications. There may be chrome
+registration lines which only apply to one particular application. The
+flag
+
+.. code::
+
+ application=app-ID
+
+indicates that the instruction should only be applied if the extension
+is installed into the application identified by *app-ID*. Multiple
+application flags may be included on a single line, in which case the
+line is applied if any of the flags match.
+
+This example shows how a different overlay can be used for different
+applications:
+
+::
+
+ overlay chrome://browser/content/browser.xul chrome://myaddon/content/ffOverlay.xul application={ec8030f7-c20a-464f-9b0e-13a3a9e97384}
+ overlay chrome://messenger/content/mailWindowOverlay.xul chrome://myaddon/content/tbOverlay.xul application={3550f703-e582-4d05-9a08-453d09bdfdc6}
+ overlay chrome://songbird/content/xul/layoutBaseOverlay.xul chrome://myaddon/content/sbOverlay.xul application=songbird@songbirdnest.com
+
+appversion
+~~~~~~~~~~
+
+Extensions may install into multiple versions of an application. There
+may be chrome registration lines which only apply to a particular
+application version. The flag
+
+.. code::
+
+ appversion=version
+ appversion<version
+ appversion<=version
+ appversion>version
+ appversion>=version
+
+indicates that the instruction should only be applied if the extension
+is installed into the application version identified. Multiple
+``appversion`` flags may be included on a single line, in which case the
+line is applied if any of the flags match. The version string must
+conform to the `Toolkit version format`.
+
+platformversion
+~~~~~~~~~~~~~~~
+
+When supporting more than one application, it is often more convenient
+for an extension to specify which Gecko version it is compatible with.
+This is particularly true for binary components. If there are chrome
+registration lines which only apply to a particular Gecko version, the
+flag
+
+.. code::
+
+ platformversion=version
+ platformversion<version
+ platformversion<=version
+ platformversion>version
+ platformversion>=version
+
+indicates that the instruction should only be applied if the extension
+is installed into an application using the Gecko version identified.
+Multiple ``platformversion`` flags may be included on a single line, in
+which case the line is applied if any of the flags match.
+
+contentaccessible
+~~~~~~~~~~~~~~~~~
+
+Chrome resources can no longer be referenced from within <img>,
+<script>, or other elements contained in, or added to, content that was
+loaded from an untrusted source. This restriction applies to both
+elements defined by the untrusted source and to elements added by
+trusted extensions. If such references need to be explicitly allowed,
+set the ``contentaccessible`` flag to ``yes`` to obtain the behavior
+found in older versions of Firefox. See
+`bug 436989 <https://bugzilla.mozilla.org/show_bug.cgi?id=436989>`__.
+
+The ``contentaccessible`` flag applies only to content packages: it is
+not recognized for locale or skin registration. However, the matching
+locale and skin packages will also be exposed to content.
+
+**n.b.:** Because older versions of Firefox do not understand the
+``contentaccessible`` flag, any extension designed to work with both
+Firefox 3 and older versions of Firefox will need to provide a fallback.
+For example:
+
+::
+
+ content packagename chrome/path/
+ content packagename chrome/path/ contentaccessible=yes
+
+os
+~~
+
+Extensions (or themes) may offer different features depending on the
+operating system on which Firefox is running. The value is compared to
+the value of `OS_TARGET` for the platform.
+
+.. code::
+
+ os=WINNT
+ os=Darwin
+
+osversion
+~~~~~~~~~
+
+An extension or theme may need to operate differently depending on which
+version of an operating system is running. For example, a theme may wish
+to adopt a different look on Mac OS X 10.5 than 10.4:
+
+.. code::
+
+ osversion>=10.5
+
+abi
+~~~
+
+If a component is only compatible with a particular ABI, it can specify
+which ABI/OS by using this directive. The value is taken from the
+`nsIXULRuntime` OS and
+XPCOMABI values (concatenated with an underscore). For example:
+
+::
+
+ binary-component component/myLib.dll abi=WINNT_x86-MSVC
+ binary-component component/myLib.so abi=Linux_x86-gcc3
+
+platform (Platform-specific packages)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Some packages are marked with a special flag indicating that they are
+platform specific. Some parts of content, skin, and locales may be
+different based on the platform being run. These packages contain three
+different sets of files, for Windows and OS/2, Macintosh, and Unix-like
+platforms. For example, the order of the "OK" and "Cancel" buttons in a
+dialog is different, as well as the names of some items.
+
+The "platform" modifier is only parsed for content registration; it is
+not recognized for locale or skin registration. However, it applies to
+content, locale, and skin parts of the package, when specified.
+
+process
+~~~~~~~
+
+In electrolysis (multiprocess Firefox), registrations can be set to apply only
+in the main process or only in content processes. The "process" flag selects
+between these two. This can allow you to register different components
+for the same contract ID, or ensure a component can only be loaded in the
+main process.
+
+::
+
+ component {09543782-22b1-4a0b-ba07-9134365776ee} maincomponent.js process=main
+ component {98309951-ac89-4642-afea-7b2b6216bcef} contentcomponent.js process=content
+
+remoteenabled
+~~~~~~~~~~~~~
+
+In `multiprocess Firefox`, the
+default is that a given chrome: URI will always be loaded into the
+chrome process. If you set the "remoteenabled" flag, then the page will
+be loaded in the same process as the ``browser`` that loaded it:
+
+::
+
+ content packagename chrome/path/ remoteenabled=yes
+
+remoterequired
+~~~~~~~~~~~~~~
+
+In `multiprocess Firefox`, the
+default is that a given chrome: URI will always be loaded into the
+chrome process. If you set the "remoterequired" flag, then the page will
+always be loaded into a child process:
+
+::
+
+ content packagename chrome/path/ remoterequired=yes
+
+Example chrome manifest
+-----------------------
+
+.. list-table::
+ :widths: 20 20 20 20
+ :header-rows: 1
+
+ * - type
+ - package name
+ - value
+ - value / flags
+ * - content
+ - branding
+ - browser/content/branding/
+ - contentaccessible=yes
+ * - content
+ - browser
+ - browser/content/browser/
+ - contentaccessible=yes
+ * - override
+ -
+ - chrome://global/content/license.html
+ - chrome://browser/content/license.html
+ * - resource
+ - payments
+ - browser/res/payments/
+ -
+ * - skin
+ - browser
+ - classic/1.0 browser/skin/classic/browser/
+ -
+ * - locale
+ - branding
+ - en-US
+ - en-US/locale/branding/
+ * - locale
+ - browser
+ - en-US
+ - en-US/locale/browser/
+ * - locale
+ - browser-region
+ - en-US
+ - en-US/locale/browser-region/
diff --git a/build/docs/cppeclipse.rst b/build/docs/cppeclipse.rst
new file mode 100644
index 0000000000..920190feb9
--- /dev/null
+++ b/build/docs/cppeclipse.rst
@@ -0,0 +1,53 @@
+.. _build_cppeclipse:
+
+=====================
+Cpp Eclipse Projects
+=====================
+
+For additional information on using Eclipse CDT see
+`the MDN page
+<https://developer.mozilla.org/en-US/docs/Eclipse_CDT>`_.
+
+The build system contains alpha support for generating C++ Eclipse
+project files to aid with development.
+
+Please report bugs to bugzilla and make them depend on bug 973770.
+
+To generate C++ Eclipse project files, you'll need to have a fully
+built tree::
+
+ mach build
+
+Then, simply generate the C++ Eclipse build backend::
+
+ mach build-backend -b CppEclipse
+
+If all goes well, the path to the generated workspace should be
+printed.
+
+To use the generated C++ Eclipse project files, you'll need to have the
+`Eclipse CDT plugin <https://www.eclipse.org/cdt/>`_ version 8.3 installed
+(we plan to follow the latest Eclipse release). You can then import all the
+projects into Eclipse using
+*File > Import ... > General > Existing Projects into Workspace*,
+but only if you have not run the background indexer.
+
+Updating Project Files
+======================
+
+As you pull and update the source tree, your C++ Eclipse files may
+fall out of sync with the build configuration. The tree should still
+build fine from within Eclipse, but source files may be missing and in
+rare circumstances Eclipse's index may not have the proper build
+configuration.
+
+To account for this, you'll want to periodically regenerate the
+C++ Eclipse project files. You can do this by running ``mach build
+&& mach build-backend -b CppEclipse`` from the
+command line.
+
+Currently, regeneration rewrites the original project files. **If
+you've made any customizations to the projects, they will likely get
+overwritten.** We would like to improve this user experience in the
+future.
diff --git a/build/docs/cross-compile.rst b/build/docs/cross-compile.rst
new file mode 100644
index 0000000000..b9ab59d8b8
--- /dev/null
+++ b/build/docs/cross-compile.rst
@@ -0,0 +1,15 @@
+=================
+Cross-compilation
+=================
+
+If you are planning to perform cross-compilation, e.g. for Linux/Aarch64, you
+will probably want to use the experimental ``--enable-bootstrap`` feature in your
+``.mozconfig``. Then, you just have to specify the target arch, after which
+the build system will automatically set up the sysroot.
+
+For example, cross-compiling for Linux/Aarch64:
+
+.. code-block:: text
+
+ ac_add_options --target=aarch64-linux-gnu
+ ac_add_options --enable-bootstrap
diff --git a/build/docs/defining-binaries.rst b/build/docs/defining-binaries.rst
new file mode 100644
index 0000000000..fdac27e26a
--- /dev/null
+++ b/build/docs/defining-binaries.rst
@@ -0,0 +1,345 @@
+.. _defining_binaries:
+
+======================================
+Defining Binaries for the Build System
+======================================
+
+One part of what the build system does is compile C/C++ and link the resulting
+objects to produce executables and/or libraries. This document describes the
+basics of defining what is going to be built and how. All the following
+describes constructs to use in moz.build files.
+
+
+Source files
+============
+
+Source files to be used in a given directory are registered in the ``SOURCES``
+and ``UNIFIED_SOURCES`` variables. ``UNIFIED_SOURCES`` has special behavior,
+in that the files are aggregated in batches of 16, which requires, for example,
+that there are no conflicting variables in those source files.
+
+``SOURCES`` and ``UNIFIED_SOURCES`` are lists which must be appended to, and
+each append requires the given list to be alphanumerically ordered.
+
+.. code-block:: python
+
+ UNIFIED_SOURCES += [
+ 'FirstSource.cpp',
+ 'SecondSource.cpp',
+ 'ThirdSource.cpp',
+ ]
+
+ SOURCES += [
+ 'OtherSource.cpp',
+ ]
+
+``SOURCES`` and ``UNIFIED_SOURCES`` can contain a mix of different file types,
+for C, C++, and Objective C.
+
+
+Static Libraries
+================
+
+To build a static library, other than defining the source files (see above), one
+just needs to define a library name with the ``Library`` template.
+
+.. code-block:: python
+
+ Library('foo')
+
+The library file name will be ``libfoo.a`` on UNIX systems and ``foo.lib`` on
+Windows.
+
+If the static library needs to aggregate other static libraries, a list of
+``Library`` names can be added to the ``USE_LIBS`` variable. Like ``SOURCES``, it
+requires the appended list to be alphanumerically ordered.
+
+.. code-block:: python
+
+ USE_LIBS += ['bar', 'baz']
+
+If there are multiple directories containing the same ``Library`` name, it is
+possible to disambiguate by prefixing with the path to the wanted one (relative
+or absolute):
+
+.. code-block:: python
+
+ USE_LIBS += [
+ '/path/from/topsrcdir/to/bar',
+ '../relative/baz',
+ ]
+
+Note that the leaf name in those paths is the ``Library`` name, not an actual
+file name.
+
+Note that currently, the build system may not create an actual library for
+static libraries. It is an implementation detail that shouldn't need to be
+worried about.
+
+As a special rule, ``USE_LIBS`` is allowed to contain references to shared
+libraries. In such cases, programs and shared libraries linking this static
+library will inherit those shared library dependencies.
+
+
+Intermediate (Static) Libraries
+===============================
+
+In many cases in the tree, static libraries are built with the only purpose
+of being linked into another, bigger one (like libxul). Instead of adding all
+required libraries to ``USE_LIBS`` for the bigger one, it is possible to tell
+the build system that the library built in the current directory is meant to
+be linked to that bigger library, with the ``FINAL_LIBRARY`` variable.
+
+.. code-block:: python
+
+ FINAL_LIBRARY = 'xul'
+
+The ``FINAL_LIBRARY`` value must match a unique ``Library`` name somewhere
+in the tree.
+
+As a special rule, those intermediate libraries don't need a ``Library`` name
+for themselves.
+
+
+Shared Libraries
+================
+
+Sometimes, we want shared libraries, a.k.a. dynamic libraries. Such libraries
+are defined similarly to static libraries, using the ``SharedLibrary`` template
+instead of ``Library``.
+
+.. code-block:: python
+
+ SharedLibrary('foo')
+
+When this template is used, no static library is built. See further below to
+build both types of libraries.
+
+With a ``SharedLibrary`` name of ``foo``, the library file name will be
+``libfoo.dylib`` on OSX, ``libfoo.so`` on ELF systems (Linux, etc.), and
+``foo.dll`` on Windows. On Windows, there is also an import library named
+``foo.lib``, used on the linker command line. ``libfoo.dylib`` and
+``libfoo.so`` are considered the import library name for OSX and ELF
+systems, respectively.
+
+On OSX, one may want to create a special kind of dynamic library: frameworks.
+This is done with the ``Framework`` template.
+
+.. code-block:: python
+
+ Framework('foo')
+
+With a ``Framework`` name of ``foo``, the framework file name will be ``foo``.
+This template, however, affects the behavior on all platforms, so it needs to
+be used only when building for OSX.
+
+
+Executables
+===========
+
+Executables, a.k.a. programs, are, in the simplest form, defined with the
+``Program`` template.
+
+.. code-block:: python
+
+ Program('foobar')
+
+On UNIX systems, the executable file name will be ``foobar``, while on Windows,
+it will be ``foobar.exe``.
+
+As with static and shared libraries, the build system can be instructed to link
+libraries to the executable with ``USE_LIBS``, listing various ``Library``
+names.
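+
+As a brief sketch (the program, source, and library names here are made up),
+linking a program against a ``Library`` defined elsewhere in the tree might
+look like:
+
+.. code-block:: python
+
+    Program('foobar')
+
+    SOURCES += [
+        'foobar.cpp',
+    ]
+
+    USE_LIBS += [
+        'foo',
+    ]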
+
+In some cases, we want to create an executable per source file in the current
+directory, in which case we can use the ``SimplePrograms`` template
+
+.. code-block:: python
+
+ SimplePrograms([
+ 'FirstProgram',
+ 'SecondProgram',
+ ])
+
+Unlike ``Program``, which requires corresponding ``SOURCES``, when using
+``SimplePrograms``, the corresponding ``SOURCES`` are implied. If those
+sources have an extension different from ``.cpp``, it is
+possible to specify the proper extension:
+
+.. code-block:: python
+
+ SimplePrograms([
+ 'ThirdProgram',
+ 'FourthProgram',
+ ], ext='.c')
+
+Please note this construct was added for compatibility with what already lives
+in the mozilla tree; it is recommended not to add new simple programs whose
+sources have an extension other than ``.cpp``.
+
+Similar to ``SimplePrograms`` is the ``CppUnitTests`` template, which defines,
+with the same rules, C++ unit test programs. Like ``SimplePrograms``, it takes
+an ``ext`` argument to specify the extension for the corresponding ``SOURCES``,
+if it's different from ``.cpp``.
+
+
+Linking with system libraries
+=============================
+
+Programs and libraries usually need to link with system libraries, such as a
+widget toolkit, etc. Those required dependencies can be given with the
+``OS_LIBS`` variable.
+
+.. code-block:: python
+
+ OS_LIBS += [
+ 'foo',
+ 'bar',
+ ]
+
+This expands to ``foo.lib bar.lib`` when building with MSVC, and
+``-lfoo -lbar`` otherwise.
+
+For convenience with ``pkg-config``, ``OS_LIBS`` can also take linker flags
+such as ``-L/some/path`` and ``-llib``, such that it is possible to directly
+assign ``LIBS`` variables from ``CONFIG``, such as:
+
+.. code-block:: python
+
+ OS_LIBS += CONFIG['MOZ_PANGO_LIBS']
+
+(assuming ``CONFIG['MOZ_PANGO_LIBS']`` is a list, not a string)
+
+Like ``USE_LIBS``, this variable applies to static and shared libraries, as
+well as programs.
+
+
+Libraries from third party build system
+=======================================
+
+Some libraries in the tree are not built by the moz.build-governed build
+system, and there is no ``Library`` corresponding to them.
+
+However, ``USE_LIBS`` allows referencing such libraries by giving a full
+path (like when disambiguating identical ``Library`` names). The same naming
+rules apply as for other uses of ``USE_LIBS``, so only the library name, without
+prefix or suffix, should be given.
+
+.. code-block:: python
+
+ USE_LIBS += [
+ '/path/from/topsrcdir/to/third-party/bar',
+ '../relative/third-party/baz',
+ ]
+
+Note that ``/path/from/topsrcdir/to/third-party`` and
+``../relative/third-party/baz`` must lead under a subconfigured directory (a
+directory with an AC_OUTPUT_SUBDIRS in configure.in), or ``security/nss``.
+
+
+Building both static and shared libraries
+=========================================
+
+When both types of libraries are required, one needs to set both
+``FORCE_SHARED_LIB`` and ``FORCE_STATIC_LIB`` boolean variables.
+
+.. code-block:: python
+
+ FORCE_SHARED_LIB = True
+ FORCE_STATIC_LIB = True
+
+But because static libraries and Windows import libraries have the same file
+names, either the static or the shared library name needs to be different
+from the name given to the ``Library`` template.
+
+The ``STATIC_LIBRARY_NAME`` and ``SHARED_LIBRARY_NAME`` variables can be used
+to change either the static or the shared library name.
+
+.. code-block:: python
+
+ Library('foo')
+ STATIC_LIBRARY_NAME = 'foo_s'
+
+With the above, on Windows, ``foo_s.lib`` will be the static library,
+``foo.dll`` the shared library, and ``foo.lib`` the import library.
+
+In some cases, for convenience, it is possible to set both
+``STATIC_LIBRARY_NAME`` and ``SHARED_LIBRARY_NAME``. For example:
+
+.. code-block:: python
+
+ Library('mylib')
+ STATIC_LIBRARY_NAME = 'mylib_s'
+ SHARED_LIBRARY_NAME = CONFIG['SHARED_NAME']
+
+This allows ``mylib`` to be used in the ``USE_LIBS`` of another library or
+executable.
+
+When ``USE_LIBS`` refers to a ``Library`` name that builds both types of
+libraries, the shared library is chosen for linking. If the static version
+should be linked instead, the ``Library`` name
+needs to be prefixed with ``static:`` in ``USE_LIBS``
+
+::
+
+ a/moz.build:
+ Library('mylib')
+ FORCE_SHARED_LIB = True
+ FORCE_STATIC_LIB = True
+ STATIC_LIBRARY_NAME = 'mylib_s'
+ b/moz.build:
+ Program('myprog')
+ USE_LIBS += [
+ 'static:mylib',
+ ]
+
+
+Miscellaneous
+=============
+
+The ``SONAME`` variable declares a "shared object name" for the library. It
+defaults to the ``Library`` name or the ``SHARED_LIBRARY_NAME`` if set. When
+linking to a library with a ``SONAME``, the resulting library or program will
+have a dependency on the library with the name corresponding to the ``SONAME``
+instead of the ``Library`` name. This only impacts ELF systems.
+
+::
+
+ a/moz.build:
+ Library('mylib')
+ b/moz.build:
+ Library('otherlib')
+ SONAME = 'foo'
+ c/moz.build:
+ Program('myprog')
+ USE_LIBS += [
+ 'mylib',
+ 'otherlib',
+ ]
+
+On e.g. Linux, the above ``myprog`` will have DT_NEEDED markers for
+``libmylib.so`` and ``libfoo.so``, instead of the ``libmylib.so`` and
+``libotherlib.so`` it would have without the ``SONAME``. This means the runtime
+requirement for ``myprog`` is ``libfoo.so`` instead of ``libotherlib.so``.
+
+
+Gecko-related binaries
+======================
+
+Some programs or libraries are totally independent of Gecko, and can use the
+above-mentioned templates. Others are Gecko-related in some way, and may
+need XPCOM linkage, mozglue, and so on. Setting all of this up by hand is
+tedious, so a set of additional templates exists to ease defining such
+programs and libraries. They are
+essentially the same as the above-mentioned templates, prefixed with "Gecko":
+
+ - ``GeckoProgram``
+ - ``GeckoSimplePrograms``
+ - ``GeckoCppUnitTests``
+ - ``GeckoSharedLibrary``
+ - ``GeckoFramework``
+
+All the Gecko-prefixed templates take the same arguments as their
+non-Gecko-prefixed counterparts, and can take a few more arguments
+for non-standard cases. See the definition of ``GeckoBinary`` in
+build/gecko_templates.mozbuild for more details, but most use cases
+should not require these additional arguments.
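+
+As a brief sketch (the names are made up), a Gecko-linked program is defined
+just like a plain one, only with the prefixed template:
+
+.. code-block:: python
+
+    GeckoProgram('my-gecko-tool')
+
+    UNIFIED_SOURCES += [
+        'MyGeckoTool.cpp',
+    ]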
diff --git a/build/docs/defining-xpcom-components.rst b/build/docs/defining-xpcom-components.rst
new file mode 100644
index 0000000000..51a1bb4fed
--- /dev/null
+++ b/build/docs/defining-xpcom-components.rst
@@ -0,0 +1,313 @@
+.. _defining_xpcom_components:
+
+=========================================
+Defining XPCOM C++-implemented Components
+=========================================
+
+This document explains how to write a :code:`components.conf` file. For
+documentation on the idl format see :ref:`XPIDL`. For a tutorial on writing
+a new XPCOM interface, see
+:ref:`writing_xpcom_interface`.
+
+Native XPCOM components are registered at build time, and compiled into static
+data structures which allow them to be accessed with little runtime overhead.
+Each module which wishes to register components must provide a manifest
+describing each component it implements, its type, and how it should be
+constructed.
+
+Manifest files are Python data files registered in ``moz.build`` files in a
+``XPCOM_MANIFESTS`` file list:
+
+.. code-block:: python
+
+ XPCOM_MANIFESTS += [
+ 'components.conf',
+ ]
+
+The files may define any of the following special variables:
+
+.. code-block:: python
+
+ # Optional: A function to be called once, the first time any component
+ # listed in this manifest is instantiated.
+ InitFunc = 'nsInitFooModule'
+ # Optional: A function to be called at shutdown if any component listed in
+ # this manifest has been instantiated.
+ UnloadFunc = 'nsUnloadFooModule'
+
+ # Optional: A processing priority, to determine how early or late the
+ # manifest is processed. Defaults to 50. In practice, this mainly affects
+ # the order in which unload functions are called at shutdown, with higher
+ # priority numbers being called later.
+ Priority = 10
+
+ # Optional: A list of header files to include before calling init or
+ # unload functions, or any legacy constructor functions.
+ #
+ # Any header path beginning with a `/` is loaded relative to the root of
+ # the source tree, and must not rely on any local includes.
+ #
+ # Any relative header path must be exported.
+ Headers = [
+ '/foo/nsFooModule.h',
+ 'nsFoo.h',
+ ]
+
+ # A list of component classes provided by this module.
+ Classes = [
+ {
+ # ...
+ },
+ # ...
+ ]
+
+ # A list of category registrations
+ Categories = {
+ 'category': {
+ 'name': 'value',
+ 'other-name': ('value', ProcessSelector.MAIN_PROCESS_ONLY),
+ # ...
+ },
+ # ...
+ }
+
+Class definitions may have the following properties:
+
+``name`` (optional)
+ If present, this component will generate an entry with the given name in the
+ ``mozilla::components`` namespace in ``mozilla/Components.h``, which gives
+ easy access to its CID, service, and instance constructors as (e.g.,)
+ ``components::Foo::CID()``, ``components::Foo::Service()``, and
+ ``components::Foo::Create()``, respectively.
+
+``cid``
+ A UUID string containing this component's CID, in the form
+ ``'{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}'``.
+
+``contract_ids`` (optional)
+ A list of contract IDs to register for this class.
+
+``categories`` (optional)
+ A dict of category entries to register for this component's contract ID.
+ Each key in the dict is the name of the category. Each value is either a
+ string containing a single entry, or a list of entries. Each entry is either
+ a string name, or a dictionary of the form ``{'name': 'value', 'backgroundtasks':
+ BackgroundTasksSelector.ALL_TASKS}``. By default, category entries are registered
+ for **no background tasks**: they have
+ ``'backgroundtasks': BackgroundTasksSelector.NO_TASKS``.
+
+``type`` (optional, default=``nsISupports``)
+ The fully-qualified type of the class implementing this component. Defaults
+ to ``nsISupports``, but **must** be provided if the ``init_method`` property
+ is specified, or if neither the ``constructor`` nor ``legacy_constructor``
+ properties are provided.
+
+``headers`` (optional)
+ A list of headers to include in order to call this component's constructor,
+ in the same format as the global ``Headers`` property.
+
+``init_method`` (optional)
+ The name of a method to call on newly-created instances of this class before
+ returning them. The method must take no arguments, and must return a
+ ``nsresult``. If it returns failure, that failure is propagated to the
+ ``getService`` or ``createInstance`` caller.
+
+``constructor`` (optional)
+ The fully-qualified name of a constructor function to call in order to
+ create instances of this class. This function must be declared in one of the
+ headers listed in the ``headers`` property, must take no arguments, and must
+ return ``already_AddRefed<iface>`` where ``iface`` is the interface provided
+ in the ``type`` property.
+
+ This property is incompatible with ``legacy_constructor``.
+
+``jsm`` (optional)
+ If provided, must be the URL of a JavaScript module which contains a
+ JavaScript implementation of the component. The ``constructor`` property
+ must contain the name of an exported function which can be constructed to
+ create a new instance of the component.
+
+``legacy_constructor`` (optional)
+ This property is deprecated, and should not be used in new code.
+
+ The fully-qualified name of a constructor function to call in order to
+ create instances of this class. This function must be declared in one of the
+ headers listed in the ``headers`` property, and must have the signature
+ ``nsresult(const nsID& aIID, void** aResult)``, and behave equivalently to
+ ``nsIFactory::CreateInstance``.
+
+ This property is incompatible with ``constructor``.
+
+``singleton`` (optional, default=``False``)
+ If true, this component's constructor is expected to return the same
+ singleton for every call, and no ``mozilla::components::<name>::Create()``
+ method will be generated for it.
+
+``overridable`` (optional, default=``False``)
+ If true, this component's contract ID is expected to be overridden by some
+ tests, and its ``mozilla::components::<name>::Service()`` getter will
+ therefore look it up by contract ID for every call. This component must,
+ therefore, provide at least one contract ID in its ``contract_ids`` array.
+
+ If false, the ``Service()`` getter will always retrieve the service based on
+ its static data, and it cannot be overridden.
+
+ Note: Enabling this option is expensive, and should not be done when it can
+ be avoided, or when the getter is used by any hot code.
+
+``external`` (optional, default=``False`` if any ``headers`` are provided, ``True`` otherwise)
+ If true, a constructor for this component's ``type`` must be defined in
+ another translation unit, using ``NS_IMPL_COMPONENT_FACTORY(type)``. The
+ constructor must return an ``already_AddRefed<nsISupports>``, and will be
+ used to construct instances of this type.
+
+ This option should only be used in cases where the headers which define the
+ component's concrete type cannot be easily included without local includes.
+
+ Note: External constructors may not specify an ``init_method``, since the
+ generated code will not have the necessary type information required to call
+ it. This option is also incompatible with ``constructor`` and
+ ``legacy_constructor``.
+
+``processes`` (optional, default=``ProcessSelector.ANY_PROCESS``)
+ An optional specifier restricting which types of process this component may
+ be loaded in. This must be a property of ``ProcessSelector`` with the same
+ name as one of the values in the ``Module::ProcessSelector`` enum.
+
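+Putting several of these properties together, a single entry in the ``Classes``
+list might look like the following sketch (the class, header, contract ID, and
+CID here are all hypothetical):
+
+.. code-block:: python
+
+    Classes = [
+        {
+            'name': 'Foo',
+            # Hypothetical CID; generate a fresh UUID for a real component.
+            'cid': '{c66ca2b1-2c0e-4b9d-9a9d-34da7b53d5d1}',
+            'contract_ids': ['@mozilla.org/foo;1'],
+            'type': 'mozilla::foo::Foo',
+            'headers': ['mozilla/Foo.h'],
+        },
+    ]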
+
+Conditional Compilation
+=======================
+
+This manifest may run any appropriate Python code to customize the values of
+the ``Classes`` array based on build configuration. To simplify this process,
+the following globals are available:
+
+``defined``
+ A function which returns true if the given build config setting is defined
+ and true.
+
+``buildconfig``
+ The ``buildconfig`` python module, with a ``substs`` property containing a
+ dict of all available build substitutions.
+
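+For example, a class can be registered only when a given feature is enabled.
+This is a sketch: ``MOZ_FOO`` and the entry below are hypothetical, but the
+``defined()`` helper and the ``Classes += [...]`` pattern show how conditional
+registration can be expressed:
+
+.. code-block:: python
+
+    if defined('MOZ_FOO'):
+        Classes += [
+            {
+                # Hypothetical component, only registered when MOZ_FOO is set.
+                'cid': '{1a2b3c4d-5e6f-4a0b-8c0d-0123456789ab}',
+                'contract_ids': ['@mozilla.org/foo;1'],
+                'type': 'mozilla::foo::Foo',
+                'headers': ['mozilla/Foo.h'],
+            },
+        ]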
+
+Component Constructors
+======================
+
+There are several ways to define component constructors, which vary mostly
+depending on how old the code that uses them is:
+
+Class Constructors
+------------------
+
+The simplest way to define a component is to include a header defining a
+concrete type, and let the component manager call that class's constructor:
+
+.. code-block:: python
+
+ 'type': 'mozilla::foo::Foo',
+ 'headers': ['mozilla/Foo.h'],
+
+This is generally the preferred method of defining non-singleton constructors,
+but may not be practicable for classes which rely on local includes for their
+definitions.
+
+Singleton Constructors
+----------------------
+
+Singleton classes are generally expected to provide their own constructor
+function which caches a singleton instance the first time it is called, and
+returns the same instance on subsequent calls. This requires declaring the
+constructor in an included header, and implementing it in a separate source
+file:
+
+.. code-block:: python
+
+ 'type': 'mozilla::foo::Foo',
+ 'headers': ['mozilla/Foo.h'],
+ 'constructor': 'mozilla::Foo::GetSingleton',
+
+``Foo.h``
+
+.. code-block:: c++
+
+ class Foo final : public nsISupports {
+ public:
+ static already_AddRefed<Foo> GetSingleton();
+ };
+
+``Foo.cpp``
+
+.. code-block:: c++
+
+ already_AddRefed<Foo> Foo::GetSingleton() {
+ // ...
+ }
+
+External Constructors
+---------------------
+
+For types whose headers can't easily be included, constructors can be defined
+using a template specialization on an incomplete type:
+
+.. code-block:: python
+
+ 'type': 'mozilla::foo::Foo',
+ 'external': True,
+
+``Foo.cpp``
+
+.. code-block:: c++
+
+ NS_IMPL_COMPONENT_FACTORY(Foo) {
+ return do_AddRef(new Foo()).downcast<nsISupports>();
+ }
+
+Legacy Constructors
+-------------------
+
+These should not be used in new code, and are left as an exercise for the
+reader.
+
+
+Registering Categories
+======================
+
+Classes which need to define category entries with the same value as their
+contract ID may do so using the following:
+
+.. code-block:: python
+
+ 'contract_ids': ['@mozilla.org/foo;1'],
+ 'categories': {
+ 'content-policy': 'm-foo',
+ 'Gecko-Content-Viewers': ['image/jpeg', 'image/png'],
+ },
+
+This will define each of the following category entries:
+
+* ``"content-policy"`` ``"m-foo",`` ``"@mozilla.org/foo;1"``
+* ``"Gecko-Content-Viewers"`` ``"image/jpeg"`` ``"@mozilla.org/foo;1"``
+* ``"Gecko-Content-Viewers"`` ``"image/png"`` ``"@mozilla.org/foo;1"``
+
+Some category entries do not have a contract ID as a value. These entries can
+be specified by adding to a global ``Categories`` dictionary:
+
+.. code-block:: python
+
+ Categories = {
+ 'update-timer': {
+ 'nsUpdateService': '@mozilla.org/updates/update-service;1,getService,background-update-timer,app.update.interval,43200,86400',
+ }
+ }
+
+It is possible to limit these on a per-process basis by using a tuple as the
+value:
+
+.. code-block:: python
+
+ Categories = {
+ '@mozilla.org/streamconv;1': {
+ '?from=gzip&to=uncompressed': ('', ProcessSelector.ALLOW_IN_SOCKET_PROCESS),
+ }
+ }
diff --git a/build/docs/environment-variables.rst b/build/docs/environment-variables.rst
new file mode 100644
index 0000000000..c463391596
--- /dev/null
+++ b/build/docs/environment-variables.rst
@@ -0,0 +1,31 @@
+.. _environment_variables:
+
+================================================
+Environment Variables Impacting the Build System
+================================================
+
+Various environment variables have an impact on the behavior of the
+build system. This document attempts to document them.
+
+AUTOCLOBBER
+ If defined, the build system will automatically clobber as needed.
+ The default behavior is to print a message and error out when a
+ clobber is needed.
+
+ This variable is typically defined in a :ref:`mozconfig <mozconfig>`
+ file via ``mk_add_options`` (see the example at the end of this page).
+
+REBUILD_CHECK
+ If defined, the build system will print information about why
+ certain files were rebuilt.
+
+ This feature is disabled by default because it makes the build slower.
+
+MACH_NO_TERMINAL_FOOTER
+ If defined, the terminal footer displayed when building with mach in
+ a TTY is disabled.
+
+MACH_NO_WRITE_TIMES
+ If defined, mach commands will not prefix output lines with the
+ elapsed time since program start. This option is equivalent to
+ passing ``--log-no-times`` to mach.
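+
+For example, ``AUTOCLOBBER`` is typically enabled by adding the following line
+to your mozconfig (a sketch of the usual snippet)::
+
+    mk_add_options AUTOCLOBBER=1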
diff --git a/build/docs/files-metadata.rst b/build/docs/files-metadata.rst
new file mode 100644
index 0000000000..6a7290c55e
--- /dev/null
+++ b/build/docs/files-metadata.rst
@@ -0,0 +1,178 @@
+.. _mozbuild_files_metadata:
+
+==============
+Files Metadata
+==============
+
+:ref:`mozbuild-files` provide a mechanism for attaching metadata to
+files. Essentially, you define some flags to set on a file or file
+pattern. Later, some tool or process queries for metadata attached to a
+file of interest and it does something intelligent with that data.
+
+Defining Metadata
+=================
+
+Files metadata is defined by using the
+:ref:`Files Sub-Context <mozbuild_subcontext_Files>` in ``moz.build``
+files. e.g.::
+
+ with Files('**/Makefile.in'):
+ BUG_COMPONENT = ('Firefox Build System', 'General')
+
+This working example says, *for all Makefile.in files in every directory
+underneath this one - including this directory - set the Bugzilla
+component to Firefox Build System :: General*.
+
+For more info, read the
+:ref:`docs on Files <mozbuild_subcontext_Files>`.
+
+How Metadata is Read
+====================
+
+``Files`` metadata is extracted in :ref:`mozbuild_fs_reading_mode`.
+
+Reading starts by specifying a set of files whose metadata you are
+interested in. For each file, the filesystem is walked to the root
+of the source directory. Any ``moz.build`` files encountered during this
+walk are marked as relevant to the file.
+
+Let's say you have the following filesystem content::
+
+ /moz.build
+ /root_file
+ /dir1/moz.build
+ /dir1/foo
+ /dir1/subdir1/foo
+ /dir2/foo
+
+For ``/root_file``, the relevant ``moz.build`` files are just
+``/moz.build``.
+
+For ``/dir1/foo`` and ``/dir1/subdir1/foo``, the relevant files are
+``/moz.build`` and ``/dir1/moz.build``.
+
+For ``/dir2/foo``, the relevant file is just ``/moz.build``.
+
+Once the list of relevant ``moz.build`` files is obtained, each
+``moz.build`` file is evaluated: the root ``moz.build`` file first, the
+leaf-most files last. This follows the rules of
+:ref:`mozbuild_fs_reading_mode`, with the set of evaluated ``moz.build``
+files being controlled by filesystem content, not ``DIRS`` variables.
+
+The file whose metadata is being resolved maps to a set of ``moz.build``
+files which in turn evaluates to a list of contexts. For file metadata,
+we only care about one of these contexts:
+:ref:`Files <mozbuild_subcontext_Files>`.
+
+We start with an empty ``Files`` instance to represent the file. As
+we encounter a *files sub-context*, we see if it is appropriate to
+this file. If it is, we apply its values. This process is repeated
+until all *files sub-contexts* have been applied or skipped. The final
+state of the ``Files`` instance is used to represent the metadata for
+this particular file.
+
+It may help to visualize this. Say we have 2 ``moz.build`` files::
+
+ # /moz.build
+ with Files('*.cpp'):
+ BUG_COMPONENT = ('Core', 'XPCOM')
+
+ with Files('**/*.js'):
+ BUG_COMPONENT = ('Firefox', 'General')
+
+ # /foo/moz.build
+ with Files('*.js'):
+ BUG_COMPONENT = ('Another', 'Component')
+
+Querying for metadata for the file ``/foo/test.js`` will reveal 3
+relevant ``Files`` sub-contexts. They are evaluated as follows:
+
+1. ``/moz.build - Files('*.cpp')``. Does ``/*.cpp`` match
+ ``/foo/test.js``? **No**. Ignore this context.
+2. ``/moz.build - Files('**/*.js')``. Does ``/**/*.js`` match
+ ``/foo/test.js``? **Yes**. Apply ``BUG_COMPONENT = ('Firefox', 'General')``
+ to us.
+3. ``/foo/moz.build - Files('*.js')``. Does ``/foo/*.js`` match
+ ``/foo/test.js``? **Yes**. Apply
+ ``BUG_COMPONENT = ('Another', 'Component')``.
+
+At the end of execution, we have
+``BUG_COMPONENT = ('Another', 'Component')`` as the metadata for
+``/foo/test.js``.
+
+One way to look at file metadata is as a stack of data structures.
+Each ``Files`` sub-context relevant to a given file is applied on top
+of the previous state, starting from an empty state. The final state
+wins.
+
+.. _mozbuild_files_metadata_finalizing:
+
+Finalizing Values
+=================
+
+The default behavior of ``Files`` sub-context evaluation is to apply new
+values on top of old. In most circumstances, this results in desired
+behavior. However, there are circumstances where this may not be
+desired. There is thus a mechanism to *finalize* or *freeze* values.
+
+Finalizing values is useful for scenarios where you want to prevent
+wildcard matches from overwriting previously-set values. This is useful
+for one-off files.
+
+Let's take ``Makefile.in`` files as an example. The build system module
+policy dictates that ``Makefile.in`` files are part of the ``Build
+Config`` module and should be reviewed by peers of that module. However,
+there exist ``Makefile.in`` files in many directories in the source
+tree. Without finalization, a ``*`` or ``**`` wildcard matching rule
+would match ``Makefile.in`` files and overwrite their metadata.
+
+Finalizing of values is performed by setting the ``FINAL`` variable
+on ``Files`` sub-contexts. See the
+:ref:`Files documentation <mozbuild_subcontext_Files>` for more.
+
+Here is an example with ``Makefile.in`` files, showing how it is
+possible to finalize the ``BUG_COMPONENT`` value.::
+
+ # /moz.build
+ with Files('**/Makefile.in'):
+ BUG_COMPONENT = ('Firefox Build System', 'General')
+ FINAL = True
+
+ # /foo/moz.build
+ with Files('**'):
+ BUG_COMPONENT = ('Another', 'Component')
+
+If we query for metadata of ``/foo/Makefile.in``, both ``Files``
+sub-contexts match the file pattern. However, since ``BUG_COMPONENT`` is
+marked as finalized by ``/moz.build``, the assignment from
+``/foo/moz.build`` is ignored. The final value for ``BUG_COMPONENT``
+is ``('Firefox Build System', 'General')``.
+
+Here is another example::
+
+ with Files('*.cpp'):
+ BUG_COMPONENT = ('One-Off', 'For C++')
+ FINAL = True
+
+ with Files('**'):
+ BUG_COMPONENT = ('Regular', 'Component')
+
+For every file that is not a C++ source file, the bug component will be resolved
+as ``Regular :: Component``. However, a file such as ``foo.cpp`` has its value of
+``One-Off :: For C++`` preserved because it is finalized.
+
+.. important::
+
+   ``FINAL`` only applies to variables defined in a context.
+
+   If you want to mark one variable as finalized but want to leave
+   another mutable, you'll need to use 2 ``Files`` contexts, as in the
+   example below.
+
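+For example, to keep ``BUG_COMPONENT`` finalized while still allowing
+other metadata to be set and overridden by later contexts, split the
+assignments across two contexts (``SCHEDULES`` below is merely an
+illustration of some other ``Files`` variable)::
+
+    with Files('**/Makefile.in'):
+        BUG_COMPONENT = ('Firefox Build System', 'General')
+        FINAL = True
+
+    with Files('**/Makefile.in'):
+        SCHEDULES.exclusive = ['android']
+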
+Guidelines for Defining Metadata
+================================
+
+In general, values defined towards the root of the source tree are
+generic and become more specific towards the leaves. For example,
+the ``BUG_COMPONENT`` for ``/browser`` might be ``Firefox :: General``
+whereas ``/browser/components/preferences`` would list
+``Firefox :: Preferences``.
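+
+Expressed as ``moz.build`` content, that guideline might look like::
+
+    # /browser/moz.build
+    with Files('**'):
+        BUG_COMPONENT = ('Firefox', 'General')
+
+    # /browser/components/preferences/moz.build
+    with Files('**'):
+        BUG_COMPONENT = ('Firefox', 'Preferences')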
diff --git a/build/docs/glossary.rst b/build/docs/glossary.rst
new file mode 100644
index 0000000000..d610f07443
--- /dev/null
+++ b/build/docs/glossary.rst
@@ -0,0 +1,47 @@
+Build Glossary
+==============
+
+.. glossary::
+ :sorted:
+
+ object directory
+ A directory holding the output of the build system. The build
+ system attempts to isolate all file modifications to this
+ directory. By convention, object directories are commonly
+ directories under the source directory prefixed with **obj-**.
+ e.g. **obj-firefox**.
+
+ mozconfig
+ A shell script used to configure the build system.
+
+ configure
+ A generated shell script which detects the current system
+ environment, applies a requested set of build configuration
+ options, and writes out metadata to be consumed by the build
+ system.
+
+ config.status
+ An executable file produced by **configure** that takes the
+ generated build config and writes out files used to build the
+ tree. Traditionally, config.status writes out a bunch of
+ Makefiles.
+
+ install manifest
+ A file containing metadata describing file installation rules.
+ A large part of the build system consists of copying files
+ around to appropriate places. We write out special files
+ describing the set of required operations so we can process the
+ actions efficiently. These files are install manifests.
+
+ clobber build
+ A build performed with an initially empty object directory. All
+ build actions must be performed.
+
+ incremental build
+ A build performed with the result of a previous build in an
+ object directory. The build should not have to work as hard because
+ it will be able to reuse the work from previous builds.
+
+ mozinfo
+ An API for accessing a common and limited subset of the build and
+ run-time configuration. See :ref:`mozinfo`.
diff --git a/build/docs/gn.rst b/build/docs/gn.rst
new file mode 100644
index 0000000000..2a8c769130
--- /dev/null
+++ b/build/docs/gn.rst
@@ -0,0 +1,17 @@
+.. _gn:
+
+==============================
+GN support in the build system
+==============================
+
+:abbr:`GN (Generated Ninja)` is a third-party build tool used by Chromium and
+some related projects that are vendored in mozilla-central. Rather than
+requiring ``GN`` to build or writing our own build definitions for these projects,
+we have support in the build system for translating GN configuration
+files into moz.build files. In most cases these moz.build files will be like any
+others in the tree (except that they shouldn't be modified by hand), however
+those updating vendored code or building on platforms not supported by
+Mozilla automation may need to re-generate these files. This is a per-project
+process, described in dom/media/webrtc/third_party_build/gn-configs/README.md for
+webrtc. As of writing, it is very specific to webrtc, and likely doesn't work as-is
+for other projects.
diff --git a/build/docs/index.rst b/build/docs/index.rst
new file mode 100644
index 0000000000..52963d4942
--- /dev/null
+++ b/build/docs/index.rst
@@ -0,0 +1,57 @@
+============
+Build System
+============
+
+Important Concepts
+==================
+.. toctree::
+ :maxdepth: 1
+
+ glossary
+ build-overview
+ supported-configurations
+ Mozconfig Files <mozconfigs>
+ mozbuild-files
+ mozbuild-symbols
+ files-metadata
+ Profile Guided Optimization <pgo>
+ slow
+ environment-variables
+ build-targets
+ python
+ test_manifests
+ mozinfo
+ preprocessor
+ jar-manifests
+ defining-binaries
+ defining-xpcom-components
+ toolchains
+ locales
+ unified-builds
+ cross-compile
+ rust
+ sparse
+ Support for projects building with GN <gn>
+ telemetry
+ sccache-dist
+ test_certificates
+
+Integrated Development Environment (IDE)
+========================================
+.. toctree::
+ :maxdepth: 1
+
+ androideclipse
+ cppeclipse
+ visualstudio
+
+mozbuild
+========
+
+mozbuild is a Python package containing a lot of the code for the
+Mozilla build system.
+
+.. toctree::
+ :maxdepth: 1
+
+ mozbuild/index
diff --git a/build/docs/jar-manifests.rst b/build/docs/jar-manifests.rst
new file mode 100644
index 0000000000..b6b1be781c
--- /dev/null
+++ b/build/docs/jar-manifests.rst
@@ -0,0 +1,123 @@
+.. _jar_manifests:
+
+=============
+JAR Manifests
+=============
+
+JAR Manifests are plaintext files in the tree that are used to package chrome
+files into ``.jar`` files and create :ref:`Chrome Registration <Chrome Registration>`
+manifests. JAR Manifests are commonly named ``jar.mn``. They are declared in
+``moz.build`` files using the ``JAR_MANIFESTS`` variable, which collects the
+``jar.mn`` files to process. All files declared in JAR Manifests are processed
+and installed into ``omni.ja`` files in ``browser/`` and ``toolkit/`` when
+building Firefox.
+
+``jar.mn`` files are automatically processed by the build system when building a
+source directory that contains one. The ``jar.mn`` is run through the
+:ref:`preprocessor` before being passed to the manifest processor. In order to
+have ``@variables@`` expanded (such as ``@AB_CD@``) throughout the file, add
+the line ``#filter substitution`` at the top of your ``jar.mn`` file.
+
+The format of a jar.mn is fairly simple; it consists of a heading specifying
+which JAR file is being packaged, followed by indented lines listing files and
+chrome registration instructions.
+
+For a simple ``jar.mn`` file, see `toolkit/profile/jar.mn <https://searchfox.org/mozilla-central/rev/5b2d2863bd315f232a3f769f76e0eb16cdca7cb0/toolkit/profile/jar.mn>`_. For a much
+more complex ``jar.mn`` file, see `toolkit/locales/jar.mn <https://searchfox.org/mozilla-central/rev/5b2d2863bd315f232a3f769f76e0eb16cdca7cb0/toolkit/locales/jar.mn>`_. More examples with specific formats and uses are available below.
+
+Shipping Chrome Files
+======================
+General Format
+^^^^^^^^^^^^^^
+To ship chrome files in a JAR, an indented line indicates a file to be packaged::
+
+ <jarfile>.jar:
+ path/in/jar/file_name.xul (source/tree/location/file_name.xul)
+
+Note that file path mappings are listed by destination (left) followed by source (right).
+
+Same Directory Omission
+^^^^^^^^^^^^^^^^^^^^^^^
+If the JAR manifest and packaged files live in the same directory, the source path and parentheses can be omitted.
+A sample of a ``jar.mn`` file with omitted source paths and parentheses is `this revision of browser/components/colorways/jar.mn <https://searchfox.org/mozilla-central/rev/5b2d2863bd315f232a3f769f76e0eb16cdca7cb0/browser/components/colorways/jar.mn>`_::
+
+ browser.jar:
+ content/browser/colorwaycloset.html
+ content/browser/colorwaycloset.css
+ content/browser/colorwaycloset.js
+
+Writing the following is equivalent, given that the aforementioned files exist in the same directory as the ``jar.mn``. Notice the ``.jar`` file is named ``browser.jar``::
+
+ browser.jar:
+ content/browser/colorwaycloset.html (colorwaycloset.html)
+ content/browser/colorwaycloset.css (colorwaycloset.css)
+ content/browser/colorwaycloset.js (colorwaycloset.js)
+
+This manifest is responsible for packaging files needed by Colorway Closet, including
+JS scripts, localization files, images (ex. PNGs, AVIFs), and CSS styling. Look at `browser/components/colorways/colorwaycloset.html <https://searchfox.org/mozilla-central/rev/5b2d2863bd315f232a3f769f76e0eb16cdca7cb0/browser/components/colorways/colorwaycloset.html#18>`_
+to see how a file may be referenced using its chrome URL.
+
+Absolute Paths
+^^^^^^^^^^^^^^
+The source tree location may also be an absolute path (taken from the top of the source tree).
+One such example can be found in `toolkit/components/pictureinpicture/jar.mn <https://searchfox.org/mozilla-central/rev/2005e8d87ee045f19dac58e5bff32eff7d01bc9b/toolkit/components/pictureinpicture/jar.mn>`_::
+
+ toolkit.jar:
+ * content/global/pictureinpicture/player.xhtml (content/player.xhtml)
+ content/global/pictureinpicture/player.js (content/player.js)
+
+Asterisk Marker (Preprocessing)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+An asterisk marker (``*``) at the beginning of the line indicates that the file should be processed by the :ref:`preprocessor` before being packaged.
+The file `toolkit/profile/jar.mn <https://searchfox.org/mozilla-central/rev/5b2d2863bd315f232a3f769f76e0eb16cdca7cb0/toolkit/profile/jar.mn>`_ indicates that the file `toolkit/profile/content/profileDowngrade.xhtml <https://searchfox.org/mozilla-central/rev/2005e8d87ee045f19dac58e5bff32eff7d01bc9b/toolkit/profile/content/profileDowngrade.xhtml#34,36>`_ should be
+run through the preprocessor, since it contains ``#ifdef`` and ``#endif`` statements that need to be interpreted::
+
+ * content/mozapps/profile/profileDowngrade.xhtml (content/profileDowngrade.xhtml)
+
+Base Path, Variables, Wildcards and Localized Files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The ``.jar`` file location may be preceded with a base path between square brackets.
+The file `toolkit/locales/jar.mn <https://searchfox.org/mozilla-central/rev/5b2d2863bd315f232a3f769f76e0eb16cdca7cb0/toolkit/locales/jar.mn>`_ uses a base path so that the ``.jar`` file is under a ``localization`` directory,
+which is a `special directory parsed by mozbuild <https://searchfox.org/mozilla-central/rev/2005e8d87ee045f19dac58e5bff32eff7d01bc9b/python/mozbuild/mozpack/packager/l10n.py#260-265>`_.
+
+It is also named according to the value passed by the variable ``@AB_CD@``, normally a locale. Note the use of the preprocessor directive ``#filter substitution`` at the top of the file for replacing the variable with the value::
+
+ #filter substitution
+
+ ...
+
+ [localization] @AB_CD@.jar:
+ crashreporter (%crashreporter/**/*.ftl)
+ toolkit (%toolkit/**/*.ftl)
+
+The percentage sign in front of the source paths designates the locale to target as a source. By default, this is ``en-US``. With this specific example, `/toolkit/locales/en-US <https://searchfox.org/mozilla-central/source/toolkit/locales/en-US>`_ would be targeted.
+Otherwise, the file from an alternate localization source tree ``/l10n/<locale>/toolkit/`` is read if building a localized version.
+The wildcards in ``**/*.ftl`` tell the processor to install all Fluent files within the ``crashreporter`` and ``toolkit`` directories, as well as their subdirectories.
+
+Registering Chrome
+==================
+
+:ref:`Chrome Registration <Chrome Registration>` instructions are marked with a percent sign (``%``) at the beginning of the
+line, and must be part of the definition of a JAR file. Any additional percent
+signs are replaced with an appropriate relative URL of the JAR file being
+packaged.
+
+There are two possible locations for a manifest file. If the chrome is being
+built into a standalone application, the ``jar.mn`` processor creates a
+``<jarfilename>.manifest`` next to the JAR file itself. This is the default
+behavior.
+
+If the ``moz.build`` specifies ``USE_EXTENSION_MANIFEST = 1``, the ``jar.mn`` processor
+creates a single ``chrome.manifest`` file suitable for registering chrome as
+an extension.
+
+Example
+^^^^^^^
+
+The file `browser/themes/addons/jar.mn <https://searchfox.org/mozilla-central/rev/5b2d2863bd315f232a3f769f76e0eb16cdca7cb0/browser/themes/addons/jar.mn>`_ registers a ``resource`` chrome package under the name ``builtin-themes``. Its source files are in ``%content/builtin-themes/``::
+
+ browser.jar:
+ % resource builtin-themes %content/builtin-themes/
+
+ content/builtin-themes/alpenglow (alpenglow/*.svg)
+ content/builtin-themes/alpenglow/manifest.json (alpenglow/manifest.json)
+
+Notice how other files declare an installation destination using the ``builtin-themes`` resource defined above. As such, an SVG file ``preview.svg`` for a theme ``Alpenglow`` may be loaded using the resource URL ``resource://builtin-themes/alpenglow/preview.svg``
+so that a preview of the theme is available on ``about:addons``. See :ref:`Chrome Registration <Chrome Registration>` for more details on ``resource`` and other manifest instructions.
diff --git a/build/docs/locales.rst b/build/docs/locales.rst
new file mode 100644
index 0000000000..443eebb4e5
--- /dev/null
+++ b/build/docs/locales.rst
@@ -0,0 +1,368 @@
+.. _localization:
+
+================
+Localized Builds
+================
+
+Localization repacks
+====================
+
+To save on build time, the build system and automation collaborate to allow
+downloading a packaged en-US Firefox, performing some locale-specific
+post-processing, and re-packaging a locale-specific Firefox. Such artifacts
+are termed "single-locale language repacks". There is another concept of a
+"multi-locale language build", which is more like a regular build and less
+like a re-packaging post-processing step.
+
+.. note::
+
+ These builds rely on make targets that don't work for
+ `artifact builds <https://bugzilla.mozilla.org/show_bug.cgi?id=1387485>`_.
+
+Instructions for single-locale repacks for developers
+-----------------------------------------------------
+
+This assumes that ``$AB_CD`` is the locale you want to repack with; you
+find the available localizations on `l10n-central <https://hg.mozilla.org/l10n-central/>`_.
+
+#. You must have a built and packaged object directory, or a pre-built
+ ``en-US`` package.
+
+ .. code-block:: shell
+
+ ./mach build
+ ./mach package
+
+#. Repackage using the locale-specific changes.
+
+ .. code-block:: shell
+
+ ./mach build installers-$AB_CD
+
+You should find a re-packaged build at ``OBJDIR/dist/``, and a
+runnable binary in ``OBJDIR/dist/l10n-stage/``.
+The ``installers`` target runs quite a few things for you, including getting
+the repository for the requested locale from
+https://hg.mozilla.org/l10n-central/. It will clone it into
+``~/.mozbuild/l10n-central``. If you have an existing repository there, you
+may want to occasionally update that via ``hg pull -u``. If you prefer
+to have the l10n repositories at a different location on your disk, you
+can point to the directory via
+
+ .. code-block:: shell
+
+ ac_add_options --with-l10n-base=/make/this/a/absolute/path
+
+This build also packages a language pack.
+
+Instructions for language packs
+-------------------------------
+
+Language packs are extensions that contain just the localized resources. Building
+them doesn't require an actual build, but they're only compatible with the
+``mozilla-central`` source they're built with.
+
+
+.. code-block:: shell
+
+ ./mach build langpack-$AB_CD
+
+This target shares much of the logic of the ``installers-$AB_CD`` target above,
+and does the check-out of the localization repository etc. It doesn't require
+a package or a build, though. The generated language pack is in
+``OBJDIR/dist/$(MOZ_PKG_PLATFORM)/xpi/``.
+
+.. note::
+
+ Despite the platform-dependent location in the build directory, language packs
+ are platform independent, and the content that goes into them needs to be
+ built in a platform-independent way.
+
+Instructions for multi-locale builds
+------------------------------------
+
+If you want to create a single build with multiple locales, do the following:
+
+#. Create a build and package
+
+ .. code-block:: shell
+
+ ./mach build
+ ./mach package
+
+#. Create the multi-locale package:
+
+ .. code-block:: shell
+
+ ./mach package-multi-locale --locales de it zh-TW
+
+On Android, this produces a multi-locale GeckoView AAR and multi-locale APKs,
+including GeckoViewExample. You can test different locales by changing your
+Android OS locale and restarting GeckoViewExample. You'll need to install with
+the ``MOZ_CHROME_MULTILOCALE`` variable set, like:
+
+ .. code-block:: shell
+
+ env MOZ_CHROME_MULTILOCALE=en-US,de,it,zh-TW ./mach android install-geckoview_example
+
+Multi-locale builds without compiling
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For deep technical reasons, artifact builds do not support multi-locale builds.
+However, with a little work, we can achieve the same effect:
+
+#. Arrange a ``mozconfig`` without a compilation environment but with support
+ for the ``RecursiveMake`` build backend, like:
+
+ .. code-block:: shell
+
+ ac_add_options --disable-compile-environment
+ export BUILD_BACKENDS=FasterMake,RecursiveMake
+ ... other options ...
+
+#. Configure.
+
+ .. code-block:: shell
+
+ ./mach configure
+
+#. Manually provide compiled artifacts.
+
+ .. code-block:: shell
+
+ ./mach artifact install [-v]
+
+#. Build.
+
+ .. code-block:: shell
+
+ ./mach build
+
+#. Produce a multi-locale package.
+
+ .. code-block:: shell
+
+ ./mach package-multi-locale --locales de it zh-TW
+
+This build configuration is fragile and not generally useful for active
+development (for that, use a full/compiled build), but it certainly speeds
+testing multi-locale packaging.
+
+General flow of repacks
+-----------------------
+
+The general flow of the locale repacks is controlled by
+``$MOZ_BUILD_APP/locales/Makefile.in`` and ``toolkit/locales/l10n.mk``, plus
+the packaging build system. The three main entry points above all trigger
+related build flows:
+
+#. Get the localization repository, if needed
+#. Run l10n-merge with a prior clobber of the merge dir
+#. Copy l10n files to ``dist``, with minor differences here between ``l10n-%`` and ``chrome-%``
+#. Repackage and package
+
+Details on l10n-merge are described in its own section below.
+The copying of files is mainly controlled by ``jar.mn``, in the few source
+directories that include localizable files. ``l10n-%`` is used for repacks,
+``chrome-%`` for multi-locale packages. The repackaging is dedicated
+Python code in ``toolkit/mozapps/installer/l10n-repack.py``, using an existing
+package. It strips existing ``chrome`` l10n resources, and adds localizations
+and metadata.
+
+Language packs don't require repackaging. The Windows installers are generated
+by merely packaging an existing repackaged zip into an installer.
+
+Exposing strings
+================
+
+The localization flow handles a few file formats in well-known locations in the
+source tree.
+
+Alongside being built by including the directory in ``$MOZ_BUILD_APP/locales/Makefile.in``
+and respective entries in a ``jar.mn``, we also have configuration files tailored
+to localization tools and infrastructure. They also control which
+files l10n-merge handles, and how.
+
+These configurations are TOML files. They're part of the bigger
+localization ecosystem at Mozilla, and `the documentation about the
+file format <http://moz-l10n-config.readthedocs.io/en/latest/fileformat.html>`_
+explains how to set them up, and what the entries mean. In short, you find
+
+.. code-block::
+
+ [[paths]]
+ reference = browser/locales/en-US/**
+ l10n = {l}browser/**
+
+to add a directory for all localizations. Changes to these files are best
+submitted for review by :Pike or :flod.
+
+These configuration files are the future, and right now, we still have
+support for the previous way of configuring l10n, which is described below.
+
+The locations are commonly in directories like
+
+ :file:`browser/`\ ``locales/en-US/``\ :file:`subdir/file.ext`
+
+The first thing to note is that only files beneath :file:`locales/en-US` are
+exposed to localizers. The second thing to note is that only a few directories
+are exposed. Which directories are exposed is defined in files called
+``l10n.ini``, which are at a
+`few places <https://searchfox.org/mozilla-central/search?q=path%3Al10n.ini&redirect=true>`_
+in the source code.
+
+An example looks like this
+
+.. code-block:: ini
+
+ [general]
+ depth = ../..
+
+ [compare]
+ dirs = browser
+ browser/branding/official
+
+ [includes]
+ toolkit = toolkit/locales/l10n.ini
+
+This tells the l10n infrastructure three things:
+
+* resolve the paths against the directory two levels up
+* include files in :file:`browser/locales/en-US` and
+ :file:`browser/branding/official/locales/en-US`
+* load more data from :file:`toolkit/locales/l10n.ini`
+
+For projects like Thunderbird and SeaMonkey in ``comm-central``, additional
+data needs to be provided when including an ``l10n.ini`` from a different
+repository:
+
+.. code-block:: ini
+
+ [include_toolkit]
+ type = hg
+ mozilla = mozilla-central
+ repo = https://hg.mozilla.org/
+ l10n.ini = toolkit/locales/l10n.ini
+
+This tells the l10n infrastructure where to find the repository, and where inside
+that repository the ``l10n.ini`` file is. This is needed because for local
+builds, :file:`mail/locales/l10n.ini` references
+:file:`mozilla/toolkit/locales/l10n.ini`, which is where the comm-central
+build setup expects toolkit to be.
+
+Now that the directories exposed to l10n are known, we can talk about the
+supported file formats.
+
+File formats
+------------
+
+The following file formats are known to the l10n tool chains:
+
+Fluent
+ Used in Firefox UI, both declarative and programmatically.
+Properties
+ Used from JavaScript and C++. When used from js, also comes with
+ plural support (avoid if possible).
+ini
+ Used by the crashreporter and updater, avoid if possible.
+
+Adding new formats involves changing various tools, and is strongly
+discouraged.
+
+Exceptions
+----------
+Generally, anything that exists in ``en-US`` needs a one-to-one mapping in
+all localizations. There are a few cases where that's not wanted, notably
+around locale configuration and locale-dependent metadata.
+
+For optional strings and files, l10n-merge won't add ``en-US`` content if
+the localization doesn't have that content.
+
+For the TOML files, the
+`[[filters]] documentation <https://moz-l10n-config.readthedocs.io/en/latest/fileformat.html#filters>`_
+is a good reference. In short, filters match the localized source code, optionally
+a ``key``, and an action. An example like
+
+.. code-block:: toml
+
+ [[filters]]
+ path = "{l}calendar/chrome/calendar/calendar-event-dialog.properties"
+ key = "re:.*Nounclass[1-9].*"
+ action = "ignore"
+
+indicates that the matching messages in ``calendar-event-dialog.properties`` are optional.
+
+For the legacy ini configuration files, there's a Python module
+``filter.py`` next to the main ``l10n.ini``, implementing :py:func:`test`, with the following
+signature
+
+.. code-block:: python
+
+ def test(mod, path, entity = None):
+ if does_not_matter:
+ return "ignore"
+ if show_but_do_not_merge:
+ return "report"
+ # default behavior, localizer or build need to do something
+ return "error"
+
+For any missing file, this function is called with ``mod`` being
+the *module*, and ``path`` being the relative path inside
+:file:`locales/en-US`. The module is the top-level dir as referenced in
+:file:`l10n.ini`.
+
+For missing strings, the :py:data:`entity` parameter is the key of the string
+in the en-US file.
+
+l10n-merge
+==========
+
+The chrome registry in Gecko doesn't support fallback from a localization to ``en-US`` at runtime.
+Thus, the build needs to ensure that the localization as it's built into
+the package has all required strings, and that the strings don't contain
+errors. To ensure that, we're *merging* the localization and ``en-US``
+at build time, a process nicknamed l10n-merge.
+
+For Fluent, we're also removing erroneous messages. For many errors in Fluent,
+that's cosmetic, but when a localization has different values or attributes
+on a message, that's actually important so that the DOM bindings of Fluent
+can apply the translation without having to load the ``en-US`` source to
+compare against.
+
+The process can be manually triggered via
+
+.. code-block:: bash
+
+ $> ./mach build merge-$AB_CD
+
+It creates another directory in the object dir, :file:`browser/locales/merge-dir/$AB_CD`, in
+which the sanitized files are stored. The actual repackaging process only looks
+in the merged directory, so the preparation steps of l10n-merge need to ensure
+that all files are generated or copied.
+
+l10n-merge modifies a file if it supports the particular file type, and there
+are missing strings which are not filtered out, or if an existing string
+shows an error. See the Checks section below for details. If the files are
+not modified, l10n-merge copies them over to the respective location in the
+merge dir.
+
+Checks
+------
+
+As part of the build and other localization tool chains, we run a variety
+of source-based checks. Think of them as linters.
+
+The suite of checks is usually determined by file type, i.e., there's a
+suite of checks for Fluent files and one for properties files, etc.
+
+Localizations
+-------------
+
+Now that we have talked in depth about how to expose content to localizers,
+where are the localizations?
+
+We host a mercurial repository per locale. All of our
+localizations can be found on https://hg.mozilla.org/l10n-central/.
+
+You can search inside our localized files on
+`Transvision <https://transvision.mozfr.org/>`_.
diff --git a/build/docs/mozbuild-files.rst b/build/docs/mozbuild-files.rst
new file mode 100644
index 0000000000..9d69404732
--- /dev/null
+++ b/build/docs/mozbuild-files.rst
@@ -0,0 +1,176 @@
+.. _mozbuild-files:
+
+===============
+moz.build Files
+===============
+
+``moz.build`` files are the mechanism by which tree metadata (notably
+the build configuration) is defined.
+
+Directories in the tree contain ``moz.build`` files which declare
+functionality for their respective part of the tree. This includes
+things such as the list of C++ files to compile, where to find tests,
+etc.
+
+``moz.build`` files are actually Python scripts. However, their
+execution is governed by special rules. This is explained below.
+
+moz.build Python Sandbox
+========================
+
+As mentioned above, ``moz.build`` files are Python scripts. However,
+they are executed in a special Python *sandbox* that significantly
+changes and limits the execution environment. The environment is so
+different, it's doubtful most ``moz.build`` files would execute without
+error if executed by a vanilla Python interpreter (e.g. ``python
+moz.build``).
+
+The following properties make execution of ``moz.build`` files special:
+
+1. The execution environment exposes a limited subset of Python.
+2. There is a special set of global symbols and an enforced naming
+ convention of symbols.
+3. Some symbols are inherited from previously-executed ``moz.build``
+ files.
+
+The limited subset of Python is actually an extremely limited subset.
+Only a few symbols from ``__builtin__`` are exposed. These include
+``True``, ``False``, ``None``, ``sorted``, ``int``, and ``set``. Global
+functions like ``import``, ``print``, and ``open`` aren't available.
+Without these, ``moz.build`` files can do very little. *This is by design*.
+
+The execution sandbox treats all ``UPPERCASE`` variables specially. Any
+``UPPERCASE`` variable must be known to the sandbox before the script
+executes. Any attempt to read or write to an unknown ``UPPERCASE``
+variable will result in an exception being raised. Furthermore, the
+types of all ``UPPERCASE`` variables are strictly enforced. Attempts to
+assign an incompatible type to an ``UPPERCASE`` variable will result in
+an exception being raised.
+
+The strictness of behavior with ``UPPERCASE`` variables is a very
+intentional design decision. By ensuring strict behavior, any operation
+involving an ``UPPERCASE`` variable is guaranteed to have well-defined
+side-effects. Previously, when the build configuration was defined in
+``Makefiles``, assignments to variables that did nothing would go
+unnoticed. ``moz.build`` files fix this problem by eliminating the
+potential for false promises.
+
+After a ``moz.build`` file has completed execution, only the
+``UPPERCASE`` variables are used to retrieve state.
+
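+A minimal ``moz.build`` file therefore tends to be little more than a
+series of assignments to known ``UPPERCASE`` variables. The variable
+names below are common ones; the values are purely illustrative::
+
+    UNIFIED_SOURCES += [
+        'Foo.cpp',
+    ]
+
+    EXPORTS.mozilla += [
+        'Foo.h',
+    ]
+
+    FINAL_LIBRARY = 'xul'
+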
+The set of variables and functions available to the Python sandbox is
+defined by the :py:mod:`mozbuild.frontend.context` module. The
+data structures in this module are consumed by the
+:py:class:`mozbuild.frontend.reader.MozbuildSandbox` class to construct
+the sandbox. There are tests to ensure that the set of symbols exposed
+to an empty sandbox are all defined in the ``context`` module.
+This module also contains documentation for each symbol, so nothing can
+sneak into the sandbox without being explicitly defined and documented.
+
+Reading and Traversing moz.build Files
+======================================
+
+The process for reading ``moz.build`` files roughly consists of:
+
+1. Start at the root ``moz.build`` (``<topsrcdir>/moz.build``).
+2. Evaluate the ``moz.build`` file in a new sandbox.
+3. Emit the main *context* and any *sub-contexts* from the executed
+ sandbox.
+4. Extract a set of ``moz.build`` files to execute next.
+5. For each additional ``moz.build`` file, goto #2 and repeat until all
+ referenced files have executed.
+
+From the perspective of the consumer, the output of reading is a stream
+of :py:class:`mozbuild.frontend.reader.context.Context` instances. Each
+``Context`` defines a particular aspect of data. Consumers iterate over
+these objects and do something with the data inside. Each object is
+essentially a dictionary of all the ``UPPERCASE`` variables populated
+during its execution.
+
+.. note::
+
+ Historically, there was only one ``context`` per ``moz.build`` file.
+ As the number of things tracked by ``moz.build`` files grew and more
+ and more complex processing was desired, it was necessary to split these
+ contexts into multiple logical parts. It is now common to emit
+ multiple contexts per ``moz.build`` file.
+
+Build System Reading Mode
+-------------------------
+
+The traditional mode of evaluation of ``moz.build`` files is what's
+called *build system traversal mode.* In this mode, the ``CONFIG``
+variable in each ``moz.build`` sandbox is populated from data coming
+from ``config.status``, which is produced by ``configure``.
+
+During evaluation, ``moz.build`` files often make decisions conditional
+on the state of the build configuration. e.g. *only compile foo.cpp if
+feature X is enabled*.
+
+In this mode, traversal of ``moz.build`` files is governed by variables
+like ``DIRS`` and ``TEST_DIRS``. For example, to execute a child
+directory, ``foo``, you would add ``DIRS += ['foo']`` to a ``moz.build``
+file and ``foo/moz.build`` would be evaluated.
+
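+A sketch of how ``CONFIG`` and directory traversal interact, assuming
+``ENABLE_TESTS`` is available in ``CONFIG`` (the directory names are
+made up)::
+
+    DIRS += ['foo']
+
+    if CONFIG['ENABLE_TESTS']:
+        TEST_DIRS += ['tests']
+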
+.. _mozbuild_fs_reading_mode:
+
+Filesystem Reading Mode
+-----------------------
+
+There is an alternative reading mode that doesn't involve the build
+system and doesn't use ``DIRS`` variables to control traversal into
+child directories. This mode is called *filesystem reading mode*.
+
+In this reading mode, the ``CONFIG`` variable is a dummy, mostly empty
+object. Accessing all but a few special variables will return an empty
+value. This means that nearly all ``if CONFIG['FOO']:`` branches will
+not be taken.
+
+Instead of using content from within the evaluated ``moz.build``
+file to drive traversal into subsequent ``moz.build`` files, the set
+of files to evaluate is controlled by the thing doing the reading.
+
+A single ``moz.build`` file is not guaranteed to be executable in
+isolation. Instead, we must evaluate all *parent* ``moz.build`` files
+first. For example, in order to evaluate ``/foo/moz.build``, one must
+execute ``/moz.build`` and have its state influence the execution of
+``/foo/moz.build``.
+
+Filesystem reading mode is utilized to power the
+:ref:`mozbuild_files_metadata` feature.
+
+Technical Details
+-----------------
+
+The code for reading ``moz.build`` files lives in
+:py:mod:`mozbuild.frontend.reader`. The Python sandbox's evaluation results
+(:py:class:`mozbuild.frontend.context.Context`) are passed into
+:py:mod:`mozbuild.frontend.emitter`, which converts them to classes defined
+in :py:mod:`mozbuild.frontend.data`. Each class in this module defines a
+domain-specific component of tree metadata. e.g. there will be separate
+classes that represent a JavaScript file vs a compiled C++ file or test
+manifests. This means downstream consumers of this data can filter on class
+types to only consume what they are interested in.
+
+There is no well-defined mapping between ``moz.build`` file instances
+and the number of :py:mod:`mozbuild.frontend.data` classes derived from
+each. Depending on the content of the ``moz.build`` file, there may be 1
+object derived or 100.
+
+The purpose of the ``emitter`` layer between low-level sandbox execution
+and metadata representation is to facilitate a unified normalization and
+verification step. There are multiple downstream consumers of the
+``moz.build``-derived data and many will perform the same actions. This
+logic can be complicated, so we have a component dedicated to it.
+
+:py:class:`mozbuild.frontend.reader.BuildReader` and
+:py:class:`mozbuild.frontend.emitter.TreeMetadataEmitter` have a
+stream-based API courtesy of generators. When you hook them up properly,
+the :py:mod:`mozbuild.frontend.data` classes are emitted before all
+``moz.build`` files have been read. This means that downstream errors
+are raised soon after sandbox execution.
+
+Lots of the code for evaluating Python sandboxes is applicable to
+non-Mozilla systems. In theory, it could be extracted into a standalone
+and generic package. However, until there is a need, there will
+likely be some tightly coupled bits.
diff --git a/build/docs/mozbuild-symbols.rst b/build/docs/mozbuild-symbols.rst
new file mode 100644
index 0000000000..4e9a8853a0
--- /dev/null
+++ b/build/docs/mozbuild-symbols.rst
@@ -0,0 +1,7 @@
+.. _mozbuild_symbols:
+
+========================
+mozbuild Sandbox Symbols
+========================
+
+.. mozbuildsymbols:: mozbuild.frontend.context
diff --git a/build/docs/mozbuild/index.rst b/build/docs/mozbuild/index.rst
new file mode 100644
index 0000000000..1dbb368034
--- /dev/null
+++ b/build/docs/mozbuild/index.rst
@@ -0,0 +1,40 @@
+========
+mozbuild
+========
+
+mozbuild is a Python package providing functionality used by Mozilla's
+build system.
+
+Modules Overview
+================
+
+* mozbuild.backend -- Functionality for producing and interacting with build
+ backends. A build backend is an entity that consumes build system metadata
+ (from mozbuild.frontend) and does something useful with it (typically writing
+ out files that can be used by a build tool to build the tree).
+* mozbuild.compilation -- Functionality related to compiling. This
+ includes managing compiler warnings.
+* mozbuild.frontend -- Functionality for reading build frontend files
+ (what defines the build system) and converting them to data structures
+ which are fed into build backends to produce backend configurations.
+* mozpack -- Functionality related to packaging builds.
+
+Overview
+========
+
+The build system consists of frontend files that define what to do. They
+say things like "compile X" and "copy Y".
+
+The mozbuild.frontend package contains code for reading these frontend
+files and converting them to static data structures. The set of produced
+static data structures for the tree constitute the current build
+configuration.
+
+There exist entities called build backends. From a high level, build
+backends consume the build configuration and do something with it. They
+typically produce tool-specific files such as make files which can be used
+to build the tree.
+
+Piecing it all together, we have frontend files that are parsed into data
+structures. These data structures are fed into a build backend. The output
+from build backends is used by builders to build the tree.
diff --git a/build/docs/mozconfigs.rst b/build/docs/mozconfigs.rst
new file mode 100644
index 0000000000..1859b87875
--- /dev/null
+++ b/build/docs/mozconfigs.rst
@@ -0,0 +1,69 @@
+.. _mozconfig:
+
+===============
+mozconfig Files
+===============
+
+mozconfig files are used to configure how a build works.
+
+mozconfig files are actually shell scripts. They are executed in a
+special context with specific variables and functions exposed to them.
+
+API
+===
+
+Functions
+---------
+
+The following special functions are available to a mozconfig script.
+
+ac_add_options
+^^^^^^^^^^^^^^
+
+This function is used to declare extra options/arguments to pass into
+configure.
+
+e.g.::
+
+ ac_add_options --disable-tests
+ ac_add_options --enable-optimize
+
+mk_add_options
+^^^^^^^^^^^^^^
+
+This function is used to inject statements into client.mk for execution.
+It is typically used to define variables, notably the object directory.
+
+e.g.::
+
+ mk_add_options AUTOCLOBBER=1
+
+Special mk_add_options Variables
+--------------------------------
+
+For historical reasons, the method for communicating certain
+well-defined variables is via mk_add_options(). In this section, we
+document what those special variables are.
+
+MOZ_OBJDIR
+^^^^^^^^^^
+
+This variable is used to define the :term:`object directory` for the current
+build.
+
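+e.g. (the directory name here is only an example)::
+
+    mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-firefox
+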
+Finding the active mozconfig
+============================
+
+Multiple mozconfig files can exist to provide different configuration
+options for different tasks. The rules for finding the active mozconfig
+are defined in the
+:py:func:`mozboot.mozconfig.find_mozconfig` method.
+
+.. automodule:: mozboot.mozconfig
+ :members: find_mozconfig
+
+Loading the active mozconfig
+----------------------------
+
+.. autoclass:: mozbuild.mozconfig.MozconfigLoader
+ :members: read_mozconfig
diff --git a/build/docs/mozinfo.rst b/build/docs/mozinfo.rst
new file mode 100644
index 0000000000..795ee3c219
--- /dev/null
+++ b/build/docs/mozinfo.rst
@@ -0,0 +1,176 @@
+.. _mozinfo:
+
+=======
+mozinfo
+=======
+
+``mozinfo`` is a solution for representing a subset of build
+configuration and run-time data.
+
+``mozinfo`` data is typically accessed through a ``mozinfo.json`` file
+which is written to the :term:`object directory` during build
+configuration. The code for writing this file lives in
+:py:mod:`mozbuild.mozinfo`.
+
+``mozinfo.json`` is an object/dictionary of simple string values.
+
+The attributes in ``mozinfo.json`` are used for many purposes. One use
+is to filter tests for applicability to the current build. For more on
+this, see :ref:`test_manifests`.
+
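+As a sketch, a script could consume ``mozinfo.json`` like this (the
+object directory name is just an example):
+
+.. code-block:: python
+
+    import json
+
+    # mozinfo.json is written to the root of the object directory.
+    with open('obj-firefox/mozinfo.json') as f:
+        mozinfo = json.load(f)
+
+    if mozinfo.get('debug'):
+        print('this is a debug build')
+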
+.. _mozinfo_attributes:
+
+mozinfo.json Attributes
+=================================
+
+``mozinfo`` currently records the following attributes.
+
+appname
+ The application being built.
+
+ Value comes from ``MOZ_APP_NAME`` from ``config.status``.
+
+ Optional.
+
+asan
+ Whether address sanitization is enabled.
+
+ Values are ``true`` and ``false``.
+
+ Always defined.
+
+bin_suffix
+ The file suffix for binaries produced with this build.
+
+ Values may be an empty string, as not all platforms have a binary
+ suffix.
+
+ Always defined.
+
+bits
+ The number of bits in the CPU this build targets.
+
+ Values are typically ``32`` or ``64``.
+
+ Universal Mac builds do not have this key defined.
+
+ Unknown processor architectures (see ``processor`` below) may not have
+ this key defined.
+
+ Optional.
+
+buildapp
+ The path to the XUL application being built.
+
+ For desktop Firefox, this is ``browser``. For Fennec, it's
+ ``mobile/android``.
+
+crashreporter
+ Whether the crash reporter is enabled for this build.
+
+ Values are ``true`` and ``false``.
+
+ Always defined.
+
+datareporting
+ Whether data reporting (MOZ_DATA_REPORTING) is enabled for this build.
+
+ Values are ``true`` and ``false``.
+
+ Always defined.
+
+debug
+ Whether this is a debug build.
+
+ Values are ``true`` and ``false``.
+
+ Always defined.
+
+devedition
+ Whether this is a devedition build.
+
+ Values are ``true`` and ``false``.
+
+ Always defined.
+
+healthreport
+ Whether the Health Report feature is enabled.
+
+ Values are ``true`` and ``false``.
+
+ Always defined.
+
+mozconfig
+ The path of the :ref:`mozconfig file <mozconfig>` used to produce this build.
+
+ Optional.
+
+nightly_build
+ Whether this is a nightly build.
+
+ Values are ``true`` and ``false``.
+
+ Always defined.
+
+os
+ The operating system the build is produced for. Values for tier-1
+ supported platforms are ``linux``, ``win``, ``mac``, and
+ ``android``. For other platforms, the value is the lowercase version
+ of the ``OS_TARGET`` variable from ``config.status``.
+
+ Always defined.
+
+processor
+    Information about the processor architecture this build targets.
+
+    Values come from ``TARGET_CPU``; however, some massaging may be
+    performed, as sketched after this list.
+
+    If the build is a universal build on Mac (it targets both 32-bit and
+    64-bit), the value is ``universal-x86-x86_64``.
+
+    If the value starts with ``arm``, the value is ``arm``.
+
+    If the value starts with a string of the form ``i[3-9]86``, the
+    value is ``x86``.
+
+    Always defined.
+
+release_or_beta
+ Whether this is a release or beta build.
+
+ Values are ``true`` and ``false``.
+
+ Always defined.
+
+stylo
+ Whether the Stylo styling system is being used.
+
+ Values are ``true`` and ``false``.
+
+ Always defined.
+
+tests_enabled
+ Whether tests are enabled for this build.
+
+ Values are ``true`` and ``false``.
+
+ Always defined.
+
+toolkit
+    The widget toolkit in use. The value comes from the
+    ``MOZ_WIDGET_TOOLKIT`` ``config.status`` variable.
+
+    Always defined.
+
+topsrcdir
+ The path to the source directory the build came from.
+
+ Always defined.
+
+webrender
+ Whether or not WebRender is enabled as the Gecko compositor.
+
+ Values are ``true`` and ``false``.
+
+ Always defined.
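+
+The ``processor`` massaging mentioned above can be sketched roughly as
+follows. This is illustrative only, not the actual build system code:
+
+.. code-block:: python
+
+    import re
+
+    def normalize_processor(target_cpu, universal_mac=False):
+        # Mirrors the rules described in the ``processor`` entry.
+        if universal_mac:
+            return 'universal-x86-x86_64'
+        if target_cpu.startswith('arm'):
+            return 'arm'
+        if re.match(r'i[3-9]86', target_cpu):
+            return 'x86'
+        return target_cpu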
diff --git a/build/docs/pgo.rst b/build/docs/pgo.rst
new file mode 100644
index 0000000000..722056c727
--- /dev/null
+++ b/build/docs/pgo.rst
@@ -0,0 +1,28 @@
+.. _pgo:
+
+===========================
+Profile Guided Optimization
+===========================
+
+:abbr:`PGO (Profile Guided Optimization)` is the process of adding
+probes to a compiled binary, running said binary, then using the
+run-time information to *recompile* the binary to (hopefully) make it
+faster.
+
+How PGO Builds Work
+===================
+
+The supported interface for invoking a PGO build is to add ``MOZ_PGO=1`` to
+configure flags and then build. e.g. in your mozconfig::
+
+ ac_add_options MOZ_PGO=1
+
+Then::
+
+ $ ./mach build
+
+This is roughly equivalent to:
+
+#. Perform a build with *--enable-profile-generate* in $topobjdir/instrumented
+#. Perform a run of the instrumented binaries with build/pgo/profileserver.py
+#. Perform a build with *--enable-profile-use* in $topobjdir
diff --git a/build/docs/preprocessor.rst b/build/docs/preprocessor.rst
new file mode 100644
index 0000000000..5ce9092ed9
--- /dev/null
+++ b/build/docs/preprocessor.rst
@@ -0,0 +1,219 @@
+.. _preprocessor:
+
+=================
+Text Preprocessor
+=================
+
+The build system contains a text preprocessor similar to the C preprocessor,
+meant for processing files which have no built-in preprocessor such as XUL
+and JavaScript documents. It is implemented at ``python/mozbuild/mozbuild/preprocessor.py`` and
+is typically invoked via :ref:`jar_manifests`.
+
+When used to preprocess CSS files, the directives are changed to begin with
+``%`` instead of ``#`` to avoid conflicts with ID selectors.
+
+Directives
+==========
+
+Variable Definition
+-------------------
+
+define
+^^^^^^
+
+::
+
+ #define variable
+ #define variable value
+
+Defines a preprocessor variable.
+
+Note that, unlike the C preprocessor, instances of this variable later in the
+source are not automatically replaced (see #filter). If value is not supplied,
+it defaults to ``1``.
+
+Note that whitespace is significant, so ``"#define foo one"`` and
+``"#define foo one "`` is different (in the second case, ``foo`` is defined to
+be a four-character string).
+
+undef
+^^^^^
+
+::
+
+ #undef variable
+
+Undefines a preprocessor variable.
+
+Conditionals
+------------
+
+if
+^^
+
+::
+
+ #if variable
+ #if !variable
+ #if variable == string
+ #if variable != string
+
+Disables output if the conditional is false. This can be nested to arbitrary
+depths. Note that in the equality checks, the variable must come first.
+
+else
+^^^^
+
+::
+
+ #else
+
+Reverses the state of the previous conditional block; for example, if the
+last ``#if`` was true (output was enabled), an ``#else`` turns it off
+(output gets disabled).
+
+endif
+^^^^^
+
+::
+
+ #endif
+
+Ends the conditional block.
+
+ifdef / ifndef
+^^^^^^^^^^^^^^
+
+::
+
+ #ifdef variable
+ #ifndef variable
+
+An ``#if`` conditional that is true only if the given preprocessor
+variable is defined (in the case of ``ifdef``) or not defined (``ifndef``).
+
+elif / elifdef / elifndef
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ #elif variable
+ #elif !variable
+ #elif variable == string
+ #elif variable != string
+ #elifdef variable
+ #elifndef variable
+
+A shorthand to mean an ``#else`` combined with the relevant conditional.
+The following two blocks are equivalent::
+
+ #ifdef foo
+ block 1
+ #elifdef bar
+ block 2
+ #endif
+
+::
+
+ #ifdef foo
+ block 1
+ #else
+ #ifdef bar
+ block 2
+ #endif
+ #endif
+
+File Inclusion
+--------------
+
+include
+^^^^^^^
+
+::
+
+ #include filename
+
+The file specified by filename is processed as if the contents were placed
+at this position. This also means that preprocessor conditionals can even
+be started in one file and ended in another (but is highly discouraged).
+There is no limit on depth of inclusion, or repeated inclusion of the same
+file, or self inclusion; thus, care should be taken to avoid infinite loops.
+
+includesubst
+^^^^^^^^^^^^
+
+::
+
+ #includesubst @variable@filename
+
+Same as a ``#include`` except that all instances of variable in the included
+file are also expanded as in ``#filter`` substitution.
+
+expand
+^^^^^^
+
+::
+
+ #expand string
+
+All variables wrapped in ``__`` are replaced with their value, for this line
+only. If the variable is not defined, it expands to an empty string. For
+example, if ``foo`` has the value ``bar``, and ``baz`` is not defined, then::
+
+ #expand This <__foo__> <__baz__> gets expanded
+
+Is expanded to::
+
+ This <bar> <> gets expanded
+
+filter / unfilter
+^^^^^^^^^^^^^^^^^
+
+::
+
+ #filter filter1 filter2 ... filterN
+ #unfilter filter1 filter2 ... filterN
+
+``#filter`` turns on the given filter.
+
+Filters are run in alphabetical order on a per-line basis.
+
+``#unfilter`` turns off the given filter. Available filters are:
+
+emptyLines
+ strips blank lines from the output
+dumbComments
+    empties out any line that consists of optional whitespace followed by
+    a ``//``. Good for getting rid of comments that are on their own
+    lines, when being smarter with a simple regexp filter is impossible
+substitution
+    all variables wrapped in ``@`` are replaced with their value. If the
+    variable is not defined, it is a fatal error. Similar to ``#expand``
+    and ``#filter``.
+attemptSubstitution
+ all variables wrapped in ``@`` are replaced with their value, or an
+ empty string if the variable is not defined. Similar to ``#expand``.
+
+literal
+^^^^^^^
+
+::
+
+ #literal string
+
+Output the string (i.e. the rest of the line) literally, with no other fixups.
+This is useful to output lines starting with ``#``, or to temporarily
+disable filters.
+
+Other
+-----
+
+#error
+^^^^^^
+
+::
+
+ #error string
+
+Cause a fatal error at this point, with the error message being the
+given string.
diff --git a/build/docs/python.rst b/build/docs/python.rst
new file mode 100644
index 0000000000..37872011aa
--- /dev/null
+++ b/build/docs/python.rst
@@ -0,0 +1,165 @@
+.. _python:
+
+===========================
+Python and the Build System
+===========================
+
+The Python programming language is used significantly in the build
+system. If we need to write code for the build system or for a tool
+related to the build system, Python is typically the first choice.
+
+Python Requirements
+===================
+
+The tree requires Python 3.6 or greater to build.
+All Python packages not in the Python distribution are included in the
+source tree. So all you should need is a vanilla Python install and you
+should be good to go.
+
+Only CPython (the Python distribution available from www.python.org) is
+supported.
+
+Compiled Python Packages
+========================
+
+There are some features of the build that rely on compiled Python packages
+(packages containing C source). These features are currently all
+optional because not every system contains the Python development
+headers required to build these extensions.
+
+We recommend you have the Python development headers installed (``mach
+bootstrap`` should do this for you) so you can take advantage of these
+features.
+
+Issues with OS X System Python
+==============================
+
+The Python that ships with OS X has historically been littered with
+subtle bugs and suboptimalities.
+
+OS X 10.8 and below users will be required to install a new Python
+distribution. This may not be necessary for OS X 10.9+. However, we
+still recommend installing a separate Python because of the history with
+OS X's system Python issues.
+
+We recommend installing Python through Homebrew or MacPorts. If you run
+``mach bootstrap``, this should be done for you.
+
+Virtual Environments
+====================
+
+The build system relies heavily on
+`venv <https://docs.python.org/3/library/venv.html>`_. Venv provides
+standalone and isolated Python "virtual environments". The problem a venv
+solves is that of dependencies across multiple Python components. If two
+components on a system relied on different versions of a package, there
+could be a conflict. Instead of managing multiple versions of a package
+simultaneously, Python and venv take the route that it is easier
+to just keep them separate so there is no potential for conflicts.
+
+Very early in the build process, a venv is created inside the
+:term:`object directory`. The venv is configured such that it can
+find all the Python packages in the source tree. The code for this lives
+in ``mach.site``.
+
+Deficiencies
+------------
+
+There are numerous deficiencies with the way virtual environments are
+handled in the build system.
+
+* mach reinvents the venv.
+
+ There is code in ``build/mach_initialize.py`` that configures ``sys.path``
+ much the same way the venv does. There are various bugs tracking
+ this. However, no clear solution has yet been devised. It's not a huge
+ problem and thus not a huge priority.
+
+* They aren't preserved across copies and packaging.
+
+ If you attempt to copy an entire tree from one machine to another or
+ from one directory to another, chances are the venv will fall
+ apart. It would be nice if we could preserve it somehow. Instead of
+ actually solving portable venv, all we really need to solve is
+ encapsulating the logic for populating the venv along with all
+ dependent files in the appropriate place.
+
+* .pyc files written to source directory.
+
+ We rely heavily on ``.pth`` files in our venv. A ``.pth`` file
+ is a special file that contains a list of paths. Python will take the
+ set of listed paths encountered in ``.pth`` files and add them to
+ ``sys.path``.
+
+  When Python compiles a ``.py`` file to bytecode, it writes out a
+  ``.pyc`` file so it doesn't have to perform this compilation again.
+  It puts these ``.pyc`` files alongside the original ``.py`` file. Python
+  provides very little control for determining where these ``.pyc`` files
+  go, even in Python 3 (which offers custom importers).
+
+ With ``.pth`` files pointing back to directories in the source tree
+ and not the object directory, ``.pyc`` files are created in the source
+ tree. This is bad because when Python imports a module, it first looks
+ for a ``.pyc`` file before the ``.py`` file. If there is a ``.pyc``
+ file but no ``.py`` file, it will happily import the module. This
+ wreaks havoc during file moves, refactoring, etc.
+
+ There are various proposals for fixing this. See bug 795995.
+
+Installing Python Manually
+==========================
+
+We highly recommend you use your system's package manager or a
+well-supported 3rd party package manager to install Python for you. If
+these are not available to you, we recommend the following tools for
+installing Python:
+
+* `buildout.python <https://github.com/collective/buildout.python>`_
+* `pyenv <https://github.com/yyuu/pyenv>`_
+* An official installer from http://www.python.org.
+
+If all else fails, consider compiling Python from source manually. But this
+should be viewed as the least desirable option.
+
+Common Issues with Python
+=========================
+
+Upgrading your Python distribution breaks the venv
+--------------------------------------------------------
+
+If you upgrade the Python distribution (e.g. from Python 3.6.9 to
+3.6.15), chances are parts of the venv will break.
+This commonly manifests as a cryptic ``Cannot import XXX`` exception.
+More often than not, the module being imported contains binary/compiled
+components.
+
+If you upgrade or reinstall your Python distribution, we recommend
+clobbering your build.
+
+Packages installed at the system level conflict with the build system's
+---------------------------------------------------------------------------
+
+It is common for people to install Python packages using ``sudo`` (e.g.
+``sudo pip install psutil``) or with the system's package manager
+(e.g. ``apt-get install python-mysql``).
+
+A problem with this is that packages installed at the system level may
+conflict with the package provided by the source tree. As of bug 907902
+and changeset f18eae7c3b27 (September 16, 2013), this should no longer
+be an issue since the venv created as part of the build doesn't
+add the system's ``site-packages`` directory to ``sys.path``. However,
+poorly installed packages may still find a way to creep into the mix and
+interfere with our venv.
+
+As a general principle, we recommend against using your system's package
+manager or using ``sudo`` to install Python packages. Instead, create
+virtual environments and isolated Python environments for all of your
+Python projects.
+
+Python on $PATH is not appropriate
+----------------------------------
+
+Tools like ``mach`` will look for Python by performing ``/usr/bin/env
+python`` or equivalent. Please be sure the appropriate Python 3.6+
+path is on $PATH. On OS X, this likely means you'll need to modify your
+shell's init script to put something ahead of ``/usr/bin``.
diff --git a/build/docs/rust.rst b/build/docs/rust.rst
new file mode 100644
index 0000000000..f4a7a96d68
--- /dev/null
+++ b/build/docs/rust.rst
@@ -0,0 +1,180 @@
+.. _rust:
+
+==============================
+Including Rust Code in Firefox
+==============================
+
+This page explains how to add, build, link, and vendor Rust crates.
+
+The `code documentation <../../writing-rust-code>`_ explains how to write and
+work with Rust code in Firefox. The
+`test documentation <../../testing-rust-code>`_ explains how to test and debug
+Rust code in Firefox.
+
+Linking Rust crates into libxul
+===============================
+
+Rust crates that you want to link into libxul should be listed in the
+``dependencies`` section of
+`toolkit/library/rust/shared/Cargo.toml <https://searchfox.org/mozilla-central/source/toolkit/library/rust/shared/Cargo.toml>`_.
+You must also add an ``extern crate`` reference to
+`toolkit/library/rust/shared/lib.rs <https://searchfox.org/mozilla-central/source/toolkit/library/rust/shared/lib.rs>`_.
+This ensures that the Rust code will be linked properly into libxul as well
+as the copy of libxul used for gtests. (Even though Rust 2018 mostly doesn't
+require ``extern crate`` declarations, these ones are necessary because the
+gkrust setup is non-typical.)
+
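+Concretely, that amounts to a dependency entry such as the following
+(the crate name and path are hypothetical), plus a matching
+``extern crate my_crate;`` line in ``lib.rs``:
+
+.. code-block:: toml
+
+    # toolkit/library/rust/shared/Cargo.toml
+    [dependencies]
+    my_crate = { path = "../../../../xpcom/rust/my_crate" }
+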
+After adding your crate, execute ``cargo update -p gkrust-shared`` to update
+the ``Cargo.lock`` file. You will also need to do this any time you change the
+dependencies in a ``Cargo.toml`` file. If you don't, you will get a build error
+saying **"error: the lock file /home/njn/moz/mc3/Cargo.lock needs to be updated
+but --frozen was passed to prevent this"**.
+
+By default, all Cargo packages in the mozilla-central repository are part of
+the same
+`workspace <https://searchfox.org/mozilla-central/source/toolkit/library/rust/shared/lib.rs>`_
+and will share the ``Cargo.lock`` file and ``target`` directory in the root of
+the repository. You can change this behavior by adding a path to the
+``exclude`` list in the top-level ``Cargo.toml`` file. You may want to do
+this if your package's development workflow includes dev-dependencies that
+aren't needed by general Firefox developers or test infrastructure.
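+
+As a rough sketch (the path here is hypothetical, and the entries present in
+the real file are elided), such an ``exclude`` entry looks like:
+
+.. code-block:: toml
+
+   [workspace]
+   exclude = [
+     # Hypothetical crate that should keep its own Cargo.lock and target dir.
+     "tools/my-standalone-tool",
+   ]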
+
+The actual build mechanism is as follows. The build system generates a special
+'Rust unified library' crate, compiles that to a static library
+(``libgkrust.a``), and links that into libxul, so all public symbols will be
+available to C++ code. Building a static library that is linked into a dynamic
+library is easier than building dynamic libraries directly, and it also avoids
+some subtle issues around how mozalloc works that make the Rust dynamic library
+path a little wonky.
+
+Linking Rust crates into something else
+=======================================
+
+To link Rust code into libraries other than libxul, create a directory with a
+``Cargo.toml`` file for your crate, and a ``moz.build`` file that contains:
+
+.. code-block:: python
+
+ RustLibrary('crate_name')
+
+where ``crate_name`` matches the name from the ``[package]`` section of your
+``Cargo.toml``. You can refer to `the moz.build file <https://searchfox.org/mozilla-central/rev/603b9fded7a11ff213c0f415198cd637b7c86614/toolkit/library/rust/moz.build#9>`_ and `the Cargo.toml file <https://searchfox.org/mozilla-central/rev/603b9fded7a11ff213c0f415198cd637b7c86614/toolkit/library/rust/Cargo.toml>`_ that are used for libxul.
+
+You can then add ``USE_LIBS += ['crate_name']`` to the ``moz.build`` file
+that defines the binary as you would with any other library in the tree.
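+
+As an illustrative sketch (``my_prog``, ``main.cpp`` and ``my_crate`` are
+hypothetical names, not something defined in the tree):
+
+.. code-block:: python
+
+   # moz.build for a binary that links the Rust crate defined above.
+   Program('my_prog')
+
+   SOURCES += ['main.cpp']
+
+   # Pull in the static library built from the crate's RustLibrary('my_crate').
+   USE_LIBS += ['my_crate']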
+
+.. important::
+
+ You cannot link a Rust crate into an intermediate library that will
+ eventually be linked into libxul. The build system enforces that only a
+ single ``RustLibrary`` may be linked into a binary. If you need to do this,
+ you will have to add a ``RustLibrary`` to any standalone binaries that link
+ the intermediate library, and also add the Rust crate to the libxul
+ dependencies as in `Linking Rust crates into libxul`_.
+
+Conditional compilation
+========================
+
+Edit `toolkit/library/rust/gkrust-features.mozbuild
+<https://searchfox.org/mozilla-central/source/toolkit/library/rust/gkrust-features.mozbuild>`_
+to expose build flags as Cargo features.
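+
+Entries in that file typically map a ``CONFIG`` flag to a Cargo feature name.
+A rough sketch (the flag and feature names below are hypothetical):
+
+.. code-block:: python
+
+   # Hypothetical mapping: expose a build flag as a Cargo feature of gkrust
+   # by appending to the feature list that file defines.
+   if CONFIG['MOZ_MY_FEATURE']:
+       gkrust_features += ['my_feature']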
+
+Standalone Rust programs
+========================
+
+It is also possible to build standalone Rust programs. First, put the Rust
+program (including the ``Cargo.toml`` file and the ``src`` directory) in its
+own directory, and add an empty ``moz.build`` file to the same directory.
+
+Then, if the standalone Rust program must run on the compile target (e.g.
+because it's shipped with Firefox), add this to the ``moz.build``
+file:
+
+.. code-block:: python
+
+ RUST_PROGRAMS = ['prog_name']
+
+where *prog_name* is the name of the executable as specified in the
+``Cargo.toml`` (and probably also matches the name of the directory).
+
+Otherwise, if the standalone Rust program must run on the compile host (e.g.
+because it's used to build Firefox but not shipped with Firefox) then do the
+same thing, but use ``HOST_RUST_PROGRAMS`` instead of ``RUST_PROGRAMS``.
+
+Where should I put my crate?
+============================
+
+If your crate's canonical home is mozilla-central, you can put it next to the
+related code in the appropriate directory.
+
+If your crate is mirrored into mozilla-central from another repository, and
+will not be actively developed in mozilla-central, you can simply list it
+as a ``crates.io``-style dependency with a version number, and let it be
+vendored into the ``third_party/rust`` directory.
+
+If your crate is mirrored into mozilla-central from another repository, but
+will be actively developed in both locations, you should send mail to the
+dev-builds mailing list to start a discussion on how to meet your needs.
+
+Third-party crate dependencies
+==============================
+
+Third-party dependencies for in-tree Rust crates are *vendored* into the
+``third_party/rust`` directory of mozilla-central. This means that a copy of
+each third-party crate's code is committed into mozilla-central. As a result,
+building Firefox does not involve downloading any third-party crates.
+
+If you add a dependency on a new crate you must run ``mach vendor rust`` to
+vendor the dependencies into that directory. (Note that ``mach vendor rust``
+`may not work as well on Windows <https://bugzilla.mozilla.org/show_bug.cgi?id=1647582>`_
+as on other platforms.)
+
+When it comes to checking the suitability of third-party code for inclusion
+into mozilla-central, keep the following in mind.
+
+- ``mach vendor rust`` will check that the licenses of all crates are suitable.
+- You should review the crate code to some degree to check that it looks
+ reasonable (especially for unsafe code) and that it has reasonable tests.
+- Third-party crate tests aren't run, which means that large test fixtures will
+ bloat mozilla-central. Consider working with upstream to mark those test
+ fixtures with ``[package] exclude = ...`` as described
+ `here <https://doc.rust-lang.org/cargo/reference/manifest.html#the-exclude-and-include-fields>`_.
+- If you specify a dependency on a branch, pin it to a specific revision;
+ otherwise other people will get unexpected changes whenever the branch is
+ updated and they run ``./mach vendor rust`` (see the sketch after this
+ list). See `bug 1612619 <https://bugzil.la/1612619>`_ for a case where such
+ a problem was fixed.
+- Other than that, there is no formal sign-off procedure, but one may be added
+ in the future.
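+
+To illustrate the branch-pinning point above, a pinned git dependency in a
+``Cargo.toml`` looks roughly like this (the crate name, URL and revision are
+made up):
+
+.. code-block:: toml
+
+   [dependencies]
+   # Pinning to an explicit revision keeps `mach vendor rust` reproducible.
+   my_crate = { git = "https://github.com/example/my_crate", rev = "0123456789abcdef0123456789abcdef01234567" }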
+
+Note that all dependencies will be vendored, even ones that aren't used due to
+disabled features. It's possible that multiple versions of a crate will end up
+vendored into mozilla-central.
+
+Patching third-party crates
+===========================
+
+Sometimes you might want to temporarily patch a third-party crate, for local
+builds or for a try push.
+
+To do this, first add an entry to the ``[patch.crates-io]`` section of the
+top-level ``Cargo.toml`` that points to the crate within ``third_party``. For
+example:
+
+.. code-block:: toml
+
+ bitflags = { path = "third_party/rust/bitflags" }
+
+Next, run ``cargo update -p $CRATE_NAME --precise $VERSION``, where
+``$CRATE_NAME`` is the name of the patched crate, and ``$VERSION`` is its
+version number. This will update the ``Cargo.lock`` file.
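+
+For example, for the ``bitflags`` entry above (the version shown here is
+illustrative; use the version currently recorded for the crate in
+``Cargo.lock``)::
+
+    cargo update -p bitflags --precise 1.3.2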
+
+Then, make the local changes to the crate.
+
+Finally, make sure you don't accidentally land the changes to the crate or the
+``Cargo.lock`` file.
+
+For an example of a more complex workflow involving a third-party crate, see
+`mp4parse-rust/README.md <https://searchfox.org/mozilla-central/source/media/mp4parse-rust/README.md>`_.
+It describes the workflow for a crate that is hosted on GitHub, and for which
+changes are made via GitHub pull requests, but all pull requests must also be
+tested within mozilla-central before being merged.
diff --git a/build/docs/sccache-dist.rst b/build/docs/sccache-dist.rst
new file mode 100644
index 0000000000..8087c0c017
--- /dev/null
+++ b/build/docs/sccache-dist.rst
@@ -0,0 +1,194 @@
+.. _sccache_dist:
+
+==================================
+Distributed sccache (sccache-dist)
+==================================
+
+`sccache <https://github.com/mozilla/sccache>`_ is a ccache-like tool written in
+Rust by Mozilla.
+
+The steps for setting up your machine as an sccache-dist server are detailed below.
+
+In addition to improved security properties, distributed sccache offers
+distribution and caching of Rust compilation, so it should be an improvement
+above and beyond what we see with icecc. Build servers run on Linux, and
+distributing builds is currently supported from Linux, macOS, and Windows.
+
+
+Steps for distributing a build as an sccache-dist client
+========================================================
+
+Start by following the instructions at https://github.com/mozilla/sccache/blob/master/docs/DistributedQuickstart.md#configure-a-client
+to configure your sccache distributed client.
+*NOTE* If you're distributing from Linux, a toolchain will be packaged
+automatically and provided to the build server. If you're distributing from
+Windows or macOS, start by using the cross-toolchains provided by
+``./mach bootstrap`` rather than attempting to use ``icecc-create-env``.
+sccache 0.2.12 or above is recommended, and the auth section of your config
+must read::
+
+ [dist.auth]
+ type = "mozilla"
+
+* If you're compiling from a macOS client, there are a handful of additional
+ considerations outlined here:
+ https://github.com/mozilla/sccache/blob/master/docs/DistributedQuickstart.md#considerations-when-distributing-from-macos.
+
+ Run ``./mach bootstrap`` to download prebuilt toolchains to
+ ``~/.mozbuild/clang-dist-toolchain.tar.xz`` and
+ ``~/.mozbuild/rustc-dist-toolchain.tar.xz``. This is an example of the paths
+ that should be added to your client config to specify toolchains to build on
+ macOS, located at ``~/Library/Application Support/Mozilla.sccache/config``::
+
+ [[dist.toolchains]]
+ type = "path_override"
+ compiler_executable = "/path/to/home/.rustup/toolchains/stable-x86_64-apple-darwin/bin/rustc"
+ archive = "/path/to/home/.mozbuild/rustc-dist-toolchain.tar.xz"
+ archive_compiler_executable = "/builds/worker/toolchains/rustc/bin/rustc"
+
+ [[dist.toolchains]]
+ type = "path_override"
+ compiler_executable = "/path/to/home/.mozbuild/clang/bin/clang"
+ archive = "/path/to/home/.mozbuild/clang-dist-toolchain.tar.xz"
+ archive_compiler_executable = "/builds/worker/toolchains/clang/bin/clang"
+
+ [[dist.toolchains]]
+ type = "path_override"
+ compiler_executable = "/path/to/home/.mozbuild/clang/bin/clang++"
+ archive = "/path/to/home/.mozbuild/clang-dist-toolchain.tar.xz"
+ archive_compiler_executable = "/builds/worker/toolchains/clang/bin/clang"
+
+ Note that the version of ``rustc`` found in ``rustc-dist-toolchain.tar.xz``
+ must match the version of ``rustc`` used locally. The distributed archive
+ will contain the version of ``rustc`` used by automation builds, which may
+ lag behind stable for a few days after Rust releases, which is specified by
+ the task definition in
+ `this file <https://hg.mozilla.org/mozilla-central/file/tip/taskcluster/ci/toolchain/dist-toolchains.yml>`_.
+ For instance, to specify 1.37.0 rather than the current stable, run
+ ``rustup toolchain add 1.37.0`` and point to
+ ``/path/to/home/.rustup/toolchains/1.37.0-x86_64-apple-darwin/bin/rustc`` in your
+ client config.
+
+ The build system currently requires an explicit target to be passed with
+ ``HOST_CFLAGS`` and ``HOST_CXXFLAGS`` e.g.::
+
+ export HOST_CFLAGS="--target=x86_64-apple-darwin16.0.0"
+ export HOST_CXXFLAGS="--target=x86_64-apple-darwin16.0.0"
+
+* Compiling from a Windows client is supported but hasn't seen as much testing
+ as other platforms. The following example mozconfig can be used as a guide::
+
+ ac_add_options CCACHE="C:/Users/<USER>/.mozbuild/sccache/sccache.exe"
+
+ export CC="C:/Users/<USER>/.mozbuild/clang/bin/clang-cl.exe --driver-mode=cl"
+ export CXX="C:/Users/<USER>/.mozbuild/clang/bin/clang-cl.exe --driver-mode=cl"
+ export HOST_CC="C:/Users/<USER>/.mozbuild/clang/bin/clang-cl.exe --driver-mode=cl"
+ export HOST_CXX="C:/Users/<USER>/.mozbuild/clang/bin/clang-cl.exe --driver-mode=cl"
+
+ The client config should be located at
+ ``~/AppData/Roaming/Mozilla/sccache/config/config``, and as on macOS custom
+ toolchains should be obtained with ``./mach bootstrap`` and specified in the
+ client config, for example::
+
+ [[dist.toolchains]]
+ type = "path_override"
+ compiler_executable = "C:/Users/<USER>/.mozbuild/clang/bin/clang-cl.exe"
+ archive = "C:/Users/<USER>/.mozbuild/clang-dist-toolchain.tar.xz"
+ archive_compiler_executable = "/builds/worker/toolchains/clang/bin/clang"
+
+ [[dist.toolchains]]
+ type = "path_override"
+ compiler_executable = "C:/Users/<USER>/.rustup/toolchains/stable-x86_64-pc-windows-msvc/bin/rustc.exe"
+ archive = "C:/Users/<USER>/.mozbuild/rustc-dist-toolchain.tar.xz"
+ archive_compiler_executable = "/builds/worker/toolchains/rustc/bin/rustc"
+
+* Add the following to your mozconfig::
+
+ ac_add_options CCACHE=/path/to/home/.mozbuild/sccache/sccache
+
+ If you're compiling from a macOS client, you might need some additional configuration::
+
+ # Set the target flag to Darwin
+ export CFLAGS="--target=x86_64-apple-darwin16.0.0"
+ export CXXFLAGS="--target=x86_64-apple-darwin16.0.0"
+ export HOST_CFLAGS="--target=x86_64-apple-darwin16.0.0"
+ export HOST_CXXFLAGS="--target=x86_64-apple-darwin16.0.0"
+
+ # Specify the macOS SDK to use
+ ac_add_options --with-macos-sdk=/path/to/MacOSX-SDKs/MacOSX10.12.sdk
+
+ You can get the right macOS SDK by downloading an old version of XCode from
+ `developer.apple.com <https://developer.apple.com>`_ and unpacking the SDK
+ from it.
+
+* When attempting to get your client running, the output of ``sccache -s`` should
+ be consulted to confirm compilations are being distributed. To receive helpful
+ logging from the local daemon in case they aren't, run
+ ``SCCACHE_NO_DAEMON=1 SCCACHE_START_SERVER=1 SCCACHE_LOG=sccache=trace path/to/sccache``
+ in a terminal window separate from your build prior to building. *NOTE* use
+ ``RUST_LOG`` instead of ``SCCACHE_LOG`` if your build of ``sccache`` does not
+ include `pull request 822
+ <https://github.com/mozilla/sccache/pull/822>`_. (``sccache`` binaries from
+ ``mach bootstrap`` do include this PR.)
+
+* Run ``./mach build -j<value>`` with an appropriately large ``<value>``.
+ ``sccache --dist-status`` should provide the number of cores available to you
+ (or a message if you're not connected). In the future this will be integrated
+ with the build system to automatically select an appropriate value.
+
+This should be enough to distribute your build and replace your use of icecc.
+Bear in mind there may be a few speedbumps, and please ensure your version of
+sccache is current before investigating further. Please see the common questions
+section below, and if anything is preventing you from using it, ask for help by
+email (dev-builds), on Slack in #sccache, or in #build on IRC.
+
+Steps for setting up a server
+=============================
+
+Build servers must run Linux and use bubblewrap 0.3.0+ for sandboxing of compile
+processes. This requires a kernel 4.6 or greater, so Ubuntu 18+, RHEL 8, or
+similar.
+
+* Run ``./mach bootstrap`` or
+ ``./mach artifact toolchain --from-build linux64-sccache`` to acquire a recent
+ version of ``sccache-dist``. Please use a ``sccache-dist`` binary acquired in
+ this fashion to ensure compatibility with statically linked dependencies.
+
+* The instructions at https://github.com/mozilla/sccache/blob/master/docs/DistributedQuickstart.md#configure-a-build-server
+ should contain everything else required to configure and run the server.
+
+ *NOTE* Port 10500 will be used by convention for builders.
+ Please use port 10500 in the ``public_addr`` section of your builder config.
+
+ Extra logging may be helpful when setting up a server. To enable logging,
+ run your server with
+ ``sudo env SCCACHE_LOG=sccache=trace ~/.mozbuild/sccache/sccache-dist server --config ~/.config/sccache/server.conf``
+ (or similar). *NOTE* ``sudo`` *must* come before setting environment variables
+ for this to work. *NOTE* use ``RUST_LOG`` instead of ``SCCACHE_LOG`` if your
+ build of ``sccache`` does not include `pull request 822
+ <https://github.com/mozilla/sccache/pull/822>`_. (``sccache`` binaries from
+ ``mach bootstrap`` do include this PR.)
+
+
+Common questions/considerations
+===============================
+
+* My build is still slow: sccache-dist can only do so much with parts of the
+ build that can't be parallelized. To start debugging a slow build,
+ ensure the "Successful distributed compilations" line in the output of
+ ``sccache -s`` dominates other counts. For a full build, at least a 2-3x
+ improvement should be observed.
+
+* My build output is incomprehensible due to a flood of warnings: clang will
+ treat some warnings differently when it's fed preprocessed code in a separate
+ invocation (preprocessing occurs locally with sccache-dist). Adding
+ ``rewrite_includes_only = true`` to the ``dist`` section of your client config
+ will improve this; however, setting this will cause build failures with a
+ commonly deployed version of ``glibc``. This option will default to ``true``
+ once the fix is more widely available. Details of this fix can be found in
+ `this patch <https://sourceware.org/ml/libc-alpha/2019-11/msg00431.html>`_.
+
+* My build fails with a message about incompatible versions of rustc between
+ dependent crates: if you're using a custom toolchain check that the version
+ of rustc in your ``rustc-dist-toolchain.tar.xz`` is the same as the version
+ you're running locally.
diff --git a/build/docs/slow.rst b/build/docs/slow.rst
new file mode 100644
index 0000000000..3dfdd5b631
--- /dev/null
+++ b/build/docs/slow.rst
@@ -0,0 +1,153 @@
+.. _slow:
+
+==================================
+Why the Build System might be slow
+==================================
+
+A common complaint about the build system is that it can be slow. There are
+many reasons contributing to its slowness.
+However, on modern hardware, Firefox can be built in less than 10 minutes.
+
+First, it is important to distinguish between a :term:`clobber build`
+and an :term:`incremental build`. The reasons for why each are slow can
+be different.
+
+The build does a lot of work
+============================
+
+It may not be obvious, but the main reason the build system is slow is
+because it does a lot of work! The source tree consists of a few
+thousand C++ files. On a modern machine, we spend over 120 minutes of CPU
+core time compiling files! So, if you are looking for the root cause of
+slow clobber builds, look at the sheer volume of C++ files in the tree.
+
+You don't have enough CPU cores and MHz
+=======================================
+
+The build should be CPU bound. If the build system maintainers are
+optimizing the build system perfectly, every CPU core in your machine
+should be 100% saturated during a build. While this isn't currently the
+case (keep reading below), generally speaking, the more CPU cores you
+have in your machine and the more total MHz in your machine, the better.
+
+**We highly recommend building with no fewer than 4 physical CPU
+cores.** Please note the *physical* in this sentence. Hyperthreaded
+cores (for example, an Intel Core i7 will report 8 CPU cores but only 4 are
+physical) only yield at most a 1.25x speedup per core.
+
+This cause impacts both clobber and incremental builds.
+
+You are building with a slow I/O layer
+======================================
+
+The build system can be I/O bound if your I/O layer is slow. Linking
+libxul on some platforms and build architectures can perform gigabytes
+of I/O.
+
+To minimize the impact of slow I/O on build performance, **we highly
+recommend building with an SSD.** Power users with enough memory may opt
+to build from a RAM disk. Mechanical disks should be avoided if at all
+possible.
+
+Some may dispute the importance of an SSD on build times. It is true
+that the beneficial impact of an SSD can be mitigated if your system has
+lots of memory and the build files stay in the page cache. However,
+operating system memory management is complicated. You don't really have
+control over what or when something is evicted from the page cache.
+Therefore, unless your machine is a dedicated build machine or you have
+more memory than is needed by everything running on your machine,
+chances are you'll run into page cache eviction and your I/O layer will
+impact build performance. That being said, an SSD certainly doesn't
+hurt build times. And, anyone who has used a machine with an SSD will
+tell you how great of an investment it is for performance all around the
+operating system. On top of that, some automated tests are I/O bound
+(like those touching SQLite databases), so an SSD will make tests
+faster.
+
+This cause impacts both clobber and incremental builds.
+
+You don't have enough memory
+============================
+
+The build system allocates a lot of memory, especially when building
+many things in parallel. If you don't have enough free system memory,
+the build will cause swap activity, slowing down your system and the
+build. Even if you never get to the point of swapping, the build system
+performs a lot of I/O and having all accessed files in memory and the
+page cache can significantly reduce the influence of the I/O layer on
+the build system.
+
+**We recommend building with no less than 8 GB of system memory.** As
+always, the more memory you have, the better. For a bare bones machine
+doing nothing more than building the source tree, anything more than 16
+GB is likely entering the point of diminishing returns.
+
+This cause impacts both clobber and incremental builds.
+
+You are building on Windows
+===========================
+
+New processes on Windows are about an order of magnitude slower to spawn
+than on UNIX-y systems such as Linux. This is because Windows is optimized
+for spawning new threads while the \*NIX platforms typically optimize for
+spawning new processes. The build system spawns thousands of new processes
+during a build, so parts of the build that rely on rapid spawning of new
+processes are slow on Windows as a result. This is most pronounced when
+running *configure*. The configure file is a giant shell script and shell
+scripts rely heavily on new processes. This is why configure can run over a
+minute slower on Windows.
+
+Another reason Windows builds are slower is because Windows lacks proper
+symlink support. On systems that support symlinks, we can generate a
+file into a staging area then symlink it into the final directory very
+quickly. On Windows, we have to perform a full file copy. This incurs
+much more I/O. And, if done poorly, it can muck with file modification
+times, messing up build dependencies.
+
+These issues impact both clobber and incremental builds.
+
+make is inefficient
+===================
+
+Compared to modern build backends like Tup or Ninja, ``make`` is slow and
+inefficient. We can only make ``make`` so fast. At some point, we'll hit a
+performance plateau and will need to use a different tool to make builds
+faster.
+
+Please note that clobber and incremental builds are different. A clobber build
+with ``make`` will likely be as fast as a clobber build with a modern build
+system.
+
+C++ header dependency hell
+==========================
+
+Modifying a *.h* file can have significant impact on the build system.
+If you modify a *.h* that is used by 1000 C++ files, all of those 1000
+C++ files will be recompiled.
+
+Our code base has traditionally been sloppy about managing the impact of
+changed headers on build performance. Bug 785103 tracks improving the
+situation.
+
+This issue mostly impacts the times of an :term:`incremental build`.
+
+A search/indexing service on your machine is running
+====================================================
+
+Many operating systems have a background service that automatically
+indexes filesystem content to make searching faster. On Windows, you
+have the Windows Search Service. On OS X, you have Finder.
+
+These background services sometimes take a keen interest in the files
+being produced as part of the build. Since the build system produces
+hundreds of megabytes or even a few gigabytes of file data, you can
+imagine how much work this is to index! If this work is being performed
+while the build is running, your build will be slower.
+
+OS X's Finder is notorious for indexing when the build is running. And,
+it has a tendency to suck up a whole CPU core. This can make builds
+several minutes slower. If you build with ``mach`` and have the optional
+``psutil`` package built (it requires Python development headers - see
+:ref:`python` for more) and Finder is running during a build, mach will
+print a warning at the end of the build, complete with instructions on
+how to fix it.
diff --git a/build/docs/sparse.rst b/build/docs/sparse.rst
new file mode 100644
index 0000000000..6dcf548334
--- /dev/null
+++ b/build/docs/sparse.rst
@@ -0,0 +1,157 @@
+.. _build_sparse:
+
+================
+Sparse Checkouts
+================
+
+The Firefox repository is large: over 230,000 files. That many files
+can put a lot of strain on machines, tools, and processes.
+
+Some version control tools have the ability to only populate a
+working directory / checkout with a subset of files in the repository.
+This is called *sparse checkout*.
+
+Various tools in the Firefox repository are configured to work
+when a sparse checkout is being used.
+
+Sparse Checkouts in Mercurial
+=============================
+
+Mercurial 4.3 introduced **experimental** support for sparse checkouts
+in the official distribution (a Facebook-authored extension had
+implemented the feature as a third-party extension for years).
+
+To enable sparse checkout support in Mercurial, enable the ``sparse``
+extension::
+
+ [extensions]
+ sparse =
+
+The *sparseness* of the working directory is managed using
+``hg debugsparse``. Run ``hg help debugsparse`` and ``hg help -e sparse``
+for more info on the feature.
+
+When a *sparse config* is enabled, the working directory only contains
+files matching that config. You cannot ``hg add`` or ``hg remove`` files
+outside the *sparse config*.
+
+.. warning::
+
+ Sparse support in Mercurial 4.3 does not have any backwards
+ compatibility guarantees. Expect things to change. Scripting against
+ commands or relying on behavior is strongly discouraged.
+
+In-Tree Sparse Profiles
+=======================
+
+Mercurial supports defining the sparse config using files under version
+control. These are called *sparse profiles*.
+
+Essentially, the sparse profiles are managed just like any other file in
+the repository. When you ``hg update``, the sparse configuration is
+evaluated against the sparse profile at the revision being updated to.
+From an end-user perspective, you just need to *activate* a profile once
+and files will be added or removed as appropriate whenever the versioned
+profile file updates.
+
+In the Firefox repository, the ``build/sparse-profiles`` directory
+contains Mercurial *sparse profile* files.
+
+Each *sparse profile* essentially defines a list of file patterns
+(see ``hg help patterns``) to include or exclude. See
+``hg help -e sparse`` for more.
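+
+As a rough, hypothetical sketch of what such a profile can look like (the
+paths are made up; see the existing files in ``build/sparse-profiles/`` for
+real examples)::
+
+    [include]
+    glob:build/**
+    glob:python/**
+
+    [exclude]
+    glob:**/generated/**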
+
+Mach Support for Sparse Checkouts
+=================================
+
+``mach`` detects when a sparse checkout is being used and its
+behavior may vary to accommodate this.
+
+By default, it is a fatal error if ``mach`` can't load one of the
+``mach_commands.py`` files it was told to load. But if a sparse checkout
+is being used, ``mach`` assumes that the file isn't part of the sparse
+checkout and ignores missing file errors. This means that
+running ``mach`` inside a sparse checkout will only have access
+to the commands defined in files in the sparse checkout.
+
+Sparse Checkouts in Automation
+==============================
+
+``hg robustcheckout`` (the extension/command used to perform clones
+and working directory operations in automation) supports sparse checkout.
+However, it has a number of limitations compared to Mercurial's default sparse
+checkout implementation:
+
+* Only supports 1 profile at a time
+* Does not support non-profile sparse configs
+* Does not allow transitioning from a non-sparse to sparse checkout or
+ vice-versa
+
+These restrictions ensure that any sparse working directory populated by
+``hg robustcheckout`` is as consistent and robust as possible.
+
+``run-task`` (the low-level script for *bootstrapping* tasks in
+automation) has support for sparse checkouts.
+
+TaskGraph tasks using ``run-task`` can specify a ``sparse-profile``
+attribute in YAML (or in code) to denote the sparse profile file to
+use. e.g.::
+
+ run:
+ using: run-command
+ command: <command>
+ sparse-profile: taskgraph
+
+This automagically results in ``run-task`` and ``hg robustcheckout``
+using the sparse profile defined in ``build/sparse-profiles/<value>``.
+
+Pros and Cons of Sparse Checkouts
+=================================
+
+The main benefit of sparse checkout is that it makes the repository appear
+to be smaller. This means:
+
+* Less time performing working directory operations -> faster version
+ control operations
+* Fewer files to consult -> faster operations
+* Working directories only contain what is needed -> easier to understand
+ what everything does
+
+Having fewer files in the working directory also has disadvantages:
+
+* Searching may not yield hits because a file isn't in the sparse
+ checkout. e.g. a *global* search and replace may not actually be
+ *global* after all.
+* Tools performing filesystem walking or path globbing (e.g.
+ ``**/*.js``) may fail to find files because they don't exist.
+* Various tools and processes make assumptions that all files in the
+ repository are always available.
+
+There can also be problems caused by mixing sparse and non-sparse
+checkouts. For example, if a process in automation is using sparse
+and a local developer is not using sparse, things may work for the
+local developer but fail in automation (because a file isn't included
+in the sparse configuration and is therefore not available to automation).
+Furthermore, if environments aren't using exactly the same sparse
+configuration, differences can contribute to varying behavior.
+
+When Should Sparse Checkouts Be Used?
+=====================================
+
+Developers are discouraged from using sparse checkouts for local work
+until tools for handling sparse checkouts have improved. In particular,
+Mercurial's support for sparse is still experimental and various Firefox
+tools make assumptions that all files are available. Developers should
+use sparse checkout at their own risk.
+
+The use of sparse checkouts in automation is a performance versus
+robustness trade-off. Use of sparse checkouts will make automation
+faster because machines will only have to manage a few thousand files
+in a checkout instead of a few hundred thousand. This can potentially
+translate to minutes saved per machine day. At the scale of thousands
+of machines, the savings can be significant. But adopting sparse
+checkouts will open up new avenues for failures. (See section above.)
+If a process is isolated (in terms of file access) and well-understood,
+sparse checkout can likely be leveraged with little risk. But if a
+process is doing things like walking the filesystem and performing
+lots of wildcard matching, the dangers are higher.
diff --git a/build/docs/supported-configurations.rst b/build/docs/supported-configurations.rst
new file mode 100644
index 0000000000..0b33baf841
--- /dev/null
+++ b/build/docs/supported-configurations.rst
@@ -0,0 +1,166 @@
+Supported Build Hosts and Targets
+=================================
+
+ .. role:: strikethrough
+
+There are three tiers of supported Firefox build hosts and targets.
+These tiers represent the shared engineering priorities of the Mozilla project.
+
+The "build host" is the machine that is performing the build of Firefox, and
+the "build target" is the machine that will run the built Firefox application.
+For example, if you were building Firefox for Android on your Linux computer,
+then the Linux computer would be the "build host" and the Android device would
+be the "build target".
+
+.. note::
+
+ Sheriffs are in charge of monitoring the tree. Their definition for tiers
+ is for automation jobs, which tells a developer what is expected of them when
+ they land code. This document is about the tiers of supported build hosts and targets,
+ which tells a person compiling/using Firefox what they can expect from Mozilla.
+ See the `job tier definition <https://wiki.mozilla.org/Sheriffing/Job_Visibility_Policy#Overview_of_the_Job_Visibility_Tiers>`__ for more information.
+
+
+.. _build_hosts:
+
+Supported Build Hosts
+---------------------
+
+While we want to help users resolve build-related issues on their systems, we
+are unable to help resolve build system issues on all possible operating
+systems and versions.
+
+.. _tier_1_hosts:
+
+Tier-1 Hosts and Toolchains
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Support is available for the following **host operating systems** and versions
+when building for a :ref:`Tier-1 Firefox build target<tier_1_targets>`, including
+cross-compilation where available:
+
+* Ubuntu Linux x86_64
+ * Current stable release
+ * Previous stable release
+ * Current LTS release
+* Debian Linux x86_64
+ * Current stable release
+ * Current testing release
+* Fedora Linux x86_64
+ * Current stable release
+ * Previous stable release
+* macOS Intel and M1
+ * Current major macOS release
+ * Previous major macOS release
+* Windows x86_64
+ * Windows 10 with MozillaBuild Environment
+ * Windows 11 with MozillaBuild Environment
+
+Tier-2 Hosts and Toolchains
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In addition to the limitations outlined above in the Tier-1 list, our ability
+to provide assistance with build issues using/targeting Tier-2
+hosts/targets/compilers is not unbounded.
+
+While we will make a best effort to help resolve issues, you may
+be referred to the relevant community maintainers for further support.
+
+The Tier-2 hosts are:
+
+* Other Linux x86_64 distributions and/or versions
+* Older macOS versions
+* Older Windows x86_64 versions
+
+Tier-3 Hosts and Toolchains
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+We cannot provide any guarantees of assistance in resolving build issues using
+or targeting Tier-3 platforms.
+
+
+Supported Build Targets
+-----------------------
+
+.. _tier_1_targets:
+
+Tier-1 Targets
+^^^^^^^^^^^^^^
+
+The term **"Tier-1 platform"** refers to those platforms - CPU
+architectures and operating systems - that are the primary focus of
+Firefox development efforts. Tier-1 platforms are fully supported by
+Mozilla's `continuous integration processes <https://treeherder.mozilla.org/>`__ and
+:ref:`Pushing to Try`. Any proposed change to Firefox on these
+platforms that results in build failures, test failures, performance
+regressions or other major problems **will be reverted immediately**.
+
+
+The **Tier-1 Firefox platforms** and their supported compilers are:
+
+- Android on Linux x86, x86-64, ARMv7 and ARMv8-A (clang)
+- Linux/x86 and x86-64 (gcc and clang)
+- macOS 10.12 and later on x86-64 and AArch64 (clang)
+- Windows/x86, x86-64 and AArch64 (clang-cl)
+
+Prior to Firefox 63, Windows/x86 and Windows/x86-64 relied on the MSVC
+compiler; from **Firefox 63 onward MSVC is not supported**. Older 32-bit
+x86 CPUs without SSE2 instructions such as the Pentium III and Athlon XP
+are also **not considered Tier-1 platforms, and are not supported**.
+Note also that while Windows/x86 and ARM/AArch64 are supported *as build
+targets*, it is not possible to build Firefox *on* Windows/x86 or
+Windows/AArch64 systems.
+
+Tier-2 Targets
+^^^^^^^^^^^^^^
+
+**Tier-2 platforms** are actively maintained by the Mozilla community,
+though with less rigorous requirements. Proposed changes resulting in
+breakage or regressions limited to these platforms **may not immediately
+result in reversion**. However, developers who break these platforms are
+expected to work with platform maintainers to fix problems, and **may be
+required to revert their changes** if a fix cannot be found.
+
+The **Tier-2 Firefox platforms** and their supported compilers are:
+
+- Linux/AArch64 (clang)
+- Windows/x86 (mingw-clang) - maintained by Tom Ritter and Jacek Caban
+
+  - *Note that some features of this platform are disabled, as they
+    require MS COM or the w32api project doesn't expose the necessary
+    Windows APIs.*
+
+Tier-3 Targets
+^^^^^^^^^^^^^^
+
+**Tier-3 platforms** have a maintainer or community who attempt to
+keep the platform working. These platforms are **not supported by our
+continuous integration processes**, and **Mozilla does not routinely
+test on these platforms**, nor do we block further development on the
+outcomes of those tests.
+
+At any given time a Firefox built from mozilla-central for these
+platforms may or may not work correctly or build at all.
+
+**Tier-3 Firefox platforms** include:
+
+- Linux on various CPU architectures including ARM variants not listed
+ above, PowerPC, and x86 CPUs without SSE2 support - maintained by
+ various Linux distributions
+- FreeBSD/x86, x86-64, Aarch64 (clang) - `maintained by gecko@FreeBSD.org <https://www.freshports.org/www/firefox/>`__
+- OpenBSD/x86, x86-64 (clang) - maintained by Landry Breuil
+- NetBSD/x86-64 (gcc) - maintained by David Laight
+- Solaris/x86-64, sparc64 (gcc) - maintained by Petr Sumbera
+- :strikethrough:`Windows/x86-64 (mingw-gcc)` - Unsupported due to
+ requirements for clang-bindgen
+
+If you're filing a bug against Firefox on a Tier-3 platform (or any
+combination of OS, CPU and compiler not listed above), please bear in
+mind that Mozilla developers do not reliably have access to non-Tier-1
+platforms or build environments. To be actionable, bug reports against
+non-Tier-1 platforms should include as much information as possible to
+help the owner of the bug determine the cause of the problem and the
+proper solution. If you can provide a patch, a regression range or
+assist in verifying that the developer's patches work for your platform,
+that would help a lot towards getting your bugs fixed and checked into
+the tree.
diff --git a/build/docs/telemetry.rst b/build/docs/telemetry.rst
new file mode 100644
index 0000000000..7dd24d7df2
--- /dev/null
+++ b/build/docs/telemetry.rst
@@ -0,0 +1,49 @@
+.. _buildtelemetry:
+
+===============
+Build Telemetry
+===============
+
+The build system (specifically, all the build tooling hooked
+up to ``./mach``) has been configured to collect metrics data
+points and errors for various build system actions. This data
+helps drive planning for the build team and ensures that
+resources are applied to the build processes that need them most.
+You can adjust your telemetry settings by editing your
+``~/.mozbuild/machrc`` file.
+
+Glean Telemetry
+===============
+
+Mozbuild reports data using `Glean <https://mozilla.github.io/glean/>`_ via
+:ref:`mach_telemetry`. The metrics collected are documented :ref:`here<metrics>`.
+
+Error Reporting
+===============
+
+``./mach`` uses `Sentry <https://sentry.io/welcome/>`_
+to automatically report errors to `our issue-tracking dashboard
+<https://sentry.prod.mozaws.net/operations/mach/>`_.
+
+Information captured
+++++++++++++++++++++
+
+Sentry automatically collects useful information surrounding
+the error to help the build team discover what caused the
+issue and how to reproduce it. This information includes:
+
+* Environmental information, such as the computer name, timestamp, Python runtime and Python module versions
+* Process arguments
+* The stack trace of the error, including contextual information:
+
+ * The data contained in the exception
+ * Functions and their respective source file names, line numbers
+ * Variables in each frame
+* `Sentry "Breadcrumbs" <https://docs.sentry.io/platforms/python/default-integrations/>`_,
+ which are important events that have happened which help contextualize the error, such as:
+
+ * An HTTP request has occurred
+ * A subprocess has been spawned
+ * Logging has occurred
+
+Note that file paths may be captured, which include absolute paths (potentially including usernames).
diff --git a/build/docs/test_certificates.rst b/build/docs/test_certificates.rst
new file mode 100644
index 0000000000..ff31f172d4
--- /dev/null
+++ b/build/docs/test_certificates.rst
@@ -0,0 +1,40 @@
+.. _test_certificates:
+
+===============================
+Adding Certificates for Testing
+===============================
+
+Sometimes we need to write tests for scenarios that require custom client, server or certificate authority (CA) certificates. For that purpose, you can generate such certificates using ``build/pgo/genpgocert.py``.
+
+The certificate specifications (and key specifications) are located in ``build/pgo/certs/``.
+
+To add a new **server certificate**, add a ``${cert_name}.certspec`` file to that folder.
+If it needs a non-default private key, add a corresponding ``${cert_name}.server.keyspec``.
+
+For a new **client certificate**, add a ``${cert_name}.client.keyspec`` and corresponding ``${cert_name}.certspec``.
+
+To add a new **CA**, add a ``${cert_name}.ca.keyspec`` as well as a corresponding ``${cert_name}.certspec`` to that folder.
+
+.. hint::
+
+ * The full syntax for .certspec files is documented at https://searchfox.org/mozilla-central/source/security/manager/tools/pycert.py
+
+ * The full syntax for .keyspec files is documented at https://searchfox.org/mozilla-central/source/security/manager/tools/pykey.py
+
+Then regenerate the certificates by running::
+
+ ./mach python build/pgo/genpgocert.py
+
+This command will modify ``cert9.db`` and ``key4.db``, and if you have added a ``.keyspec`` file it will generate a ``${cert_name}.client`` or ``${cert_name}.ca`` file.
+
+**These files need to be committed.**
+
+If you've created a new server certificate, you probably want to modify ``build/pgo/server-locations.txt`` to add a location with your specified certificate::
+
+ https://my-test.example.com:443 cert=${cert_name}
+
+You will need to run ``./mach build`` again afterwards.
+
+.. important::
+
+ Make sure to follow the naming conventions exactly and use the same ``cert_name`` in all places.
diff --git a/build/docs/test_manifests.rst b/build/docs/test_manifests.rst
new file mode 100644
index 0000000000..60f750d679
--- /dev/null
+++ b/build/docs/test_manifests.rst
@@ -0,0 +1,226 @@
+.. _test_manifests:
+
+==============
+Test Manifests
+==============
+
+Many test suites have their test metadata defined in files called
+**test manifests**.
+
+Test manifests are divided into two flavors: :ref:`manifestparser_manifests`
+and :ref:`reftest_manifests`.
+
+Naming Convention
+=================
+
+The build system does not enforce file naming for test manifest files.
+However, the following convention is used.
+
+mochitest.ini
+ For the *plain* flavor of mochitests.
+
+chrome.ini
+ For the *chrome* flavor of mochitests.
+
+browser.ini
+ For the *browser chrome* flavor of mochitests.
+
+a11y.ini
+ For the *a11y* flavor of mochitests.
+
+xpcshell.ini
+ For *xpcshell* tests.
+
+.. _manifestparser_manifests:
+
+ManifestParser Manifests
+==========================
+
+ManifestParser manifests are essentially ini files that conform to a basic
+set of assumptions.
+
+The :doc:`reference documentation </mozbase/manifestparser>`
+for manifestparser manifests describes the basic format of test manifests.
+
+In summary, manifests are ini files with section names describing test files::
+
+ [test_foo.js]
+ [test_bar.js]
+
+Keys under sections can hold metadata about each test::
+
+ [test_foo.js]
+ skip-if = os == "win"
+ [test_bar.js]
+ skip-if = os == "linux" && debug
+ [test_baz.js]
+ fail-if = os == "mac" || os == "android"
+
+There is a special **DEFAULT** section whose keys/metadata apply to all
+sections/tests::
+
+ [DEFAULT]
+ property = value
+
+ [test_foo.js]
+
+In the above example, **test_foo.js** inherits the metadata **property = value**
+from the **DEFAULT** section.
+
+Recognized Metadata
+-------------------
+
+Test manifests can define some common keys/metadata to influence behavior.
+Those keys are as follows:
+
+head
+ List of files that will be executed before the test file. (Used in
+ xpcshell tests.)
+
+tail
+ List of files that will be executed after the test file. (Used in
+ xpcshell tests.)
+
+support-files
+ List of additional files required to run tests. This is typically
+ defined in the **DEFAULT** section.
+
+ Unlike other file lists, *support-files* supports a globbing mechanism
+ to facilitate pulling in many files with minimal typing. This globbing
+ mechanism is activated if an entry in this value contains a ``*``
+ character. A single ``*`` will wildcard match all files in a directory.
+ A double ``**`` will descend into child directories. For example,
+ ``data/*`` will match ``data/foo`` but not ``data/subdir/bar`` where
+ ``data/**`` will match ``data/foo`` and ``data/subdir/bar``.
+
+ Support files starting with ``/`` are placed in a root directory, rather
+ than a location determined by the manifest location. For mochitests,
+ this allows for the placement of files at the server root. The source
+ file is selected from the base name (e.g., ``foo`` for ``/path/foo``).
+ Files starting with ``/`` cannot be selected using globbing.
+
+ Some support files are used by tests across multiple directories. In
+ this case, a test depending on a support file from another directory
+ must note that dependency with the path to the required support file
+ in its own **support-files** entry. These use a syntax where paths
+ starting with ``!/`` will indicate the beginning of the path to a
+ shared support file starting from the root of the srcdir. For example,
+ if a manifest at ``dom/base/test/mochitest.ini`` has a support file,
+ ``dom/base/test/server-script.sjs``, and a mochitest in
+ ``dom/workers/test`` depends on that support file, the test manifest
+ at ``dom/workers/test/mochitest.ini`` must include
+ ``!/dom/base/test/server-script.sjs`` in its **support-files** entry.
+
+generated-files
+ List of files that are generated as part of the build and don't exist in
+ the source tree.
+
+ The build system assumes that each manifest file, test file, and file
+ listed in **head**, **tail**, and **support-files** is static and
+ provided by the source tree (and not automatically generated as part
+ of the build). This variable tells the build system not to make this
+ assumption.
+
+ This variable will likely go away sometime once all generated files are
+ accounted for in the build config.
+
+ If a generated file is not listed in this key, a clobber build will
+ likely fail.
+
+dupe-manifest
+ Record that this manifest duplicates another manifest.
+
+ The common scenario is two manifest files will include a shared
+ manifest file via the ``[include:file]`` special section. The build
+ system enforces that each test file is only provided by a single
+ manifest. Having this key present bypasses that check.
+
+ The value of this key is ignored.
+
+skip-if
+ Skip this test if the specified condition is true.
+ See :ref:`manifest_filter_language`.
+
+ Conditions can be specified on multiple lines, where each line is implicitly
+ joined by a logical OR (``||``). This makes it easier to add comments to
+ distinct failures. For example:
+
+ .. parsed-literal::
+
+ [test_foo.js]
+ skip-if =
+ os == "mac" && fission # bug 123 - fails on fission
+ os == "windows" && debug # bug 456 - hits an assertion
+
+fail-if
+ Expect test failure if the specified condition is true.
+ See :ref:`manifest_filter_language`.
+
+ Conditions can be specified on multiple lines (see ``skip-if``).
+
+run-sequentially
+ If present, the test should not be run in parallel with other tests.
+
+ Some test harnesses support parallel test execution on separate processes
+ and/or threads (behavior varies by test harness). If this key is present,
+ the test harness should not attempt to run this test in parallel with any
+ other test.
+
+ By convention, the value of this key is a string describing why the test
+ can't be run in parallel.
+
+scheme
+ Changes the scheme and domain from which the test runs. (Only used in mochitest suites)
+
+ There are two possible values:
+ - ``http`` (default): The test will run from http://mochi.test:8888
+ - ``https``: The test will run from https://example.com:443
+
+.. _manifest_filter_language:
+
+Manifest Filter Language
+------------------------
+
+Some manifest keys accept a special filter syntax as their values. These
+values are essentially boolean expressions that are evaluated at test
+execution time.
+
+The expressions can reference a well-defined set of variables, such as
+``os`` and ``debug``. These variables are populated from the
+``mozinfo.json`` file. For the full list of available variables, see
+the :ref:`mozinfo documentation <mozinfo_attributes>`.
+
+See
+`the source <https://hg.mozilla.org/mozilla-central/file/default/testing/mozbase/manifestparser/manifestparser/manifestparser.py>`_ for the full documentation of the
+expression syntax until it is documented here.
+
+.. todo::
+
+ Document manifest filter language.
+
+.. _manifest_file_installation:
+
+File Installation
+-----------------
+
+Files referenced by manifests are automatically installed into the object
+directory into paths defined in
+:py:func:`mozbuild.frontend.emitter.TreeMetadataEmitter._process_test_manifest`.
+
+Relative paths resolving to a parent directory (e.g.
+``support-files = ../foo.txt``) have special behavior.
+
+For ``support-files``, the file will be installed to the default destination
+for that manifest. Only the file's base name is used to construct the final
+path: directories are irrelevant. Files starting with ``/`` are an exception;
+these are installed relative to the root of the destination, and the base name
+is instead used to select the file.
+
+For all other entry types, the file installation is skipped.
+
+.. _reftest_manifests:
+
+Reftest Manifests
+=================
+
+See `MDN <https://developer.mozilla.org/en-US/docs/Creating_reftest-based_unit_tests>`_.
diff --git a/build/docs/toolchains.rst b/build/docs/toolchains.rst
new file mode 100644
index 0000000000..534f269c07
--- /dev/null
+++ b/build/docs/toolchains.rst
@@ -0,0 +1,267 @@
+.. _build_toolchains:
+
+===========================
+Creating Toolchain Archives
+===========================
+
+There are various scripts in the repository for producing archives
+of the build tools (e.g. compilers and linkers) required to build.
+
+Clang and Rust
+==============
+
+To modify the toolchains used for a particular task, you may need several
+things:
+
+1. A `build task`_
+
+2. Which uses a toolchain task
+
+ - `clang toolchain`_
+ - `rust toolchain`_
+
+3. Which uses a git fetch
+
+ - `clang fetch`_
+ - (from-source ``dev`` builds only) `rust fetch`_
+
+4. (clang only) Which uses a `config json`_
+
+5. Which takes patches_ you may want to apply.
+
+For the most part, you should be able to accomplish what you want by
+copying/editing the existing examples in those files.
+
+.. _build task: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/taskcluster/ci/build/linux.yml#5-45
+.. _clang toolchain: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/taskcluster/ci/toolchain/clang.yml#51-72
+.. _rust toolchain: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/taskcluster/ci/toolchain/rust.yml#57-74
+.. _clang fetch: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/taskcluster/ci/fetch/toolchains.yml#413-418
+.. _rust fetch: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/taskcluster/ci/fetch/toolchains.yml#434-439
+.. _config json: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/build/build-clang/clang-linux64.json
+.. _patches: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/build/build-clang/static-llvm-symbolizer.patch
+
+Clang
+-----
+
+Building clang is handled by `build-clang.py`_, which uses several resources
+in the `build-clang`_ directory. Read the `build-clang README`_ for more
+details.
+
+Note for local builds: build-clang.py can be run on developer machines but its
+lengthy multi-stage build process is unnecessary for most local development. The
+upstream `LLVM Getting Started Guide`_ has instructions on how to build
+clang more directly.
+
+.. _build-clang.py: https://searchfox.org/mozilla-central/source/build/build-clang/build-clang.py
+.. _build-clang README: https://searchfox.org/mozilla-central/source/build/build-clang/README
+.. _build-clang: https://searchfox.org/mozilla-central/source/build/build-clang/
+.. _LLVM Getting Started Guide: https://llvm.org/docs/GettingStarted.html
+
+Rust
+----
+
+Rust builds are handled by `repack_rust.py`_. The primary purpose of
+that script is to download prebuilt tarballs from the Rust project.
+
+It uses the same basic format as ``rustup`` for specifying the toolchain
+(via ``--channel``):
+
+- request a stable build with ``1.xx.y`` (e.g. ``1.47.0``)
+- request a beta build with ``beta-yyyy-mm-dd`` (e.g. ``beta-2020-08-26``)
+- request a nightly build with ``nightly-yyyy-mm-dd`` (e.g. ``nightly-2020-08-26``)
+- request a build from `Rust's ci`_ with ``bors-$sha`` (e.g. ``bors-796a2a9bbe7614610bd67d4cd0cf0dfff0468778``)
+- request a from-source build with ``dev``
+
+Rust From Source
+----------------
+
+As of this writing, from-source builds for Rust are a new feature, and not
+used anywhere by default. The feature was added so that we can test patches
+to rustc against the tree. Expect things to be a bit hacky and limited.
+
+Most importantly, building from source requires your toolchain to have a
+`fetch of the rust tree`_ as well as `clang and binutils toolchains`_. It is also
+recommended to upgrade the worker-type to e.g. ``b-linux-large``.
+
+Rust's build dependencies are fairly minimal, and it has a sanity check
+that should catch any missing or too-old dependencies. See the `Rust README`_
+for more details.
+
+Patches are set via `the --patch flag`_ (passed via ``toolchain/rust.yml``).
+Patch paths are assumed to be relative to ``/build/build-rust/``, and may be
+optionally prefixed with ``module-path:`` to specify they apply to that git
+submodule in the Rust source. e.g. ``--patch src/llvm-project:mypatch.diff``
+patches rust's llvm with ``/build/build-rust/mypatch.diff``. There are no
+currently checked in rust patches to use as an example, but they should be
+the same format as `the clang ones`_.
+
+Rust builds are not currently configurable, and use a `hardcoded config.toml`_,
+which you may need to edit for your purposes. See Rust's `example config`_ for
+details/defaults. Note that these options do occasionally change, so be sure
+you're using options for the version you're targeting. For instance, there was
+a large change around Rust ~1.48, and the currently checked in config was for
+1.47, so it may not work properly when building the latest version of Rust.
+
+Rust builds are currently limited to targeting only the host platform.
+Although the machinery is in place to request additional targets, the
+cross-compilation fails for some unknown reason. We have not yet investigated
+what needs to be done to get this working.
+
+While Rust generally maintains a clean tree for building ``rustc`` and
+``cargo``, other tools like ``rustfmt`` or ``miri`` are allowed to be
+transiently broken. This means not every commit in the Rust tree will be
+able to build the `tools we require`_.
+
+Although ``repack_rust`` considers ``rustfmt`` an optional package, Rust builds
+do not currently implement this and will fail if ``rustfmt`` is busted. Some
+attempt was made to work around it, but `more work is needed`_.
+
+.. _Rust's ci: https://github.com/rust-lang/rust/pull/77875#issuecomment-736092083
+.. _repack_rust.py: https://searchfox.org/mozilla-central/source/taskcluster/scripts/misc/repack_rust.py
+.. _fetch of the rust tree: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/taskcluster/ci/toolchain/rust.yml#69-71
+.. _clang and binutils toolchains: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/taskcluster/ci/toolchain/rust.yml#72-74
+.. _the --patch flag: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/taskcluster/scripts/misc/repack_rust.py#667-675
+.. _the clang ones: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/build/build-clang/static-llvm-symbolizer.patch
+.. _Rust README: https://github.com/rust-lang/rust/#building-on-a-unix-like-system
+.. _hardcoded config.toml: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/taskcluster/scripts/misc/repack_rust.py#384-421
+.. _example config: https://github.com/rust-lang/rust/blob/b7ebc6b0c1ba3c27ebb17c0b496ece778ef11e18/config.toml.example
+.. _tools we require: https://searchfox.org/mozilla-central/rev/168c45a7acc44e9904cfd4eebcb9eb080e05699c/taskcluster/scripts/misc/repack_rust.py#398
+.. _more work is needed: https://github.com/rust-lang/rust/issues/79249
+
+Python
+------
+
+Python is built from source by ``taskcluster/scripts/misc/build-cpython.sh`` on
+Linux and macOS. On Windows, we use the upstream installer, through
+``taskcluster/scripts/misc/pack-cpython.sh``. To ensure consistency, we use the
+same version for both approaches. Note, however, that the Windows installer is
+not packaged for every patch version, so the versions may differ slightly.
+
+Windows
+=======
+
+The ``build/vs/generate_yaml.py`` and ``taskcluster/scripts/misc/get_vs.py``
+scripts are used to manage and get Windows toolchains containing Visual
+Studio executables, SDKs, etc.
+
+The ``build/vs/generate_yaml.py`` script is used to generate one of the
+YAML files used in the relevant toolchain task. The exact command line
+used to generate the file is stored in the header of the YAML file itself.
+Each YAML file records the necessary downloads from Microsoft servers to
+install the required Visual Studio components given on the command line.
+
+The ``taskcluster/scripts/misc/get_vs.py`` script takes a YAML file as
+input and fills a directory with the corresponding Visual Studio components.
+
+Both scripts should be run via ``mach python --virtualenv build``. The
+latter is automatically invoked by the bootstrapping mechanism.
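+
+As an illustration only, a session using both scripts might look like the
+sketch below; the component arguments, file names, and output directory are
+hypothetical, and the authoritative command line for each checked-in YAML file
+is recorded in that file's header:
+
+.. code-block:: shell
+
+   # Hypothetical invocations; the actual options accepted by these scripts
+   # may differ, so check an existing YAML file's header for a real example.
+   $ ./mach python --virtualenv build build/vs/generate_yaml.py \
+       <visual-studio-components...> > build/vs/vs2022.yaml
+   $ ./mach python --virtualenv build taskcluster/scripts/misc/get_vs.py \
+       build/vs/vs2022.yaml <output-directory>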
+
+
+MacOS
+=====
+
+The ``build/macosx/catalog.py`` and ``taskcluster/scripts/misc/unpack-sdk.py``
+scripts are used to manage and get macOS SDKs.
+
+The ``build/macosx/catalog.py`` script is used to explore the Apple
+software update catalog. Running the script with no argument will show
+a complete list of "products". You probably don't want that; instead, start
+with a filter:
+
+.. code-block:: shell
+
+ $ ./mach python build/macosx/catalog.py --filter SDK
+ 061-44071 Beats Updater 1.0
+ 071-29699 Command Line Tools for Xcode 12.5
+ 001-89745 Command Line Tools for Xcode 12.4
+ 071-54303 Command Line Tools for Xcode 12.5
+ 002-41708 Command Line Tools for Xcode 13.2
+ 002-83793 Command Line Tools for Xcode 13.4
+ 012-92431 Command Line Tools for Xcode 14.2
+ 032-64167 Command Line Tools for Xcode 14.3
+
+From there, pick the id of the product you're interested in, and run the
+script again with that id:
+
+.. code-block:: shell
+
+ $ ./mach python build/macosx/catalog.py 032-64167
+ com.apple.pkg.CLTools_Executables https://swcdn.apple.com/content/downloads/38/61/032-64167-A_F8LL7XSTW6/k3kg0uip4kxd3qupgy6y8fzp27mnxdpt6y/CLTools_Executables.pkg
+ com.apple.pkg.CLTools_SDK_macOS13 https://swcdn.apple.com/content/downloads/38/61/032-64167-A_F8LL7XSTW6/k3kg0uip4kxd3qupgy6y8fzp27mnxdpt6y/CLTools_macOSNMOS_SDK.pkg
+ com.apple.pkg.CLTools_SDK_macOS12 https://swcdn.apple.com/content/downloads/38/61/032-64167-A_F8LL7XSTW6/k3kg0uip4kxd3qupgy6y8fzp27mnxdpt6y/CLTools_macOSLMOS_SDK.pkg
+ com.apple.pkg.CLTools_macOS_SDK https://swcdn.apple.com/content/downloads/38/61/032-64167-A_F8LL7XSTW6/k3kg0uip4kxd3qupgy6y8fzp27mnxdpt6y/CLTools_macOS_SDK.pkg
+ com.apple.pkg.CLTools_SwiftBackDeploy https://swcdn.apple.com/content/downloads/38/61/032-64167-A_F8LL7XSTW6/k3kg0uip4kxd3qupgy6y8fzp27mnxdpt6y/CLTools_SwiftBackDeploy.pkg
+
+From there, pick the id of the package you're interested in, and run the
+script again with a combination of both product and package ids to inspect
+its content and ensure that's what you're looking for:
+
+.. code-block:: shell
+
+ $ ./mach python build/macosx/catalog.py 032-64167/com.apple.pkg.CLTools_SDK_macOS13
+ Library
+ Library/Developer
+ Library/Developer/CommandLineTools
+ Library/Developer/CommandLineTools/SDKs
+ Library/Developer/CommandLineTools/SDKs/MacOSX13.sdk
+ Library/Developer/CommandLineTools/SDKs/MacOSX13.3.sdk
+ Library/Developer/CommandLineTools/SDKs/MacOSX13.3.sdk/usr
+ (...)
+
+Once you have found the SDK you want, you can create or update toolchain tasks
+in ``taskcluster/ci/toolchain/macosx-sdk.yml``.
+
+The ``taskcluster/scripts/misc/unpack-sdk.py`` script takes the URL of an SDK
+package, the sha256 hash of its content, the path to the SDK within the package,
+and an output directory, and extracts the SDK into that directory.
+
+Both scripts should be run via ``mach python``. The latter is automatically
+invoked by the bootstrapping mechanism.
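+
+For illustration, an invocation might look like the following sketch; the
+argument order is an assumption based on the description above, and the URL,
+hash, and paths are placeholders:
+
+.. code-block:: shell
+
+   # Hypothetical invocation; argument order and values are illustrative only.
+   $ ./mach python taskcluster/scripts/misc/unpack-sdk.py \
+       https://swcdn.apple.com/content/downloads/.../CLTools_macOS_SDK.pkg \
+       <sha256-of-the-package> \
+       Library/Developer/CommandLineTools/SDKs/MacOSX13.3.sdk \
+       macos-sdk-output/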
+
+Firefox for Android with Gradle
+===============================
+
+To build Firefox for Android with Gradle in automation, archives
+containing both the Gradle executable and a Maven repository
+comprising the exact build dependencies are produced and uploaded to
+an internal Mozilla server. The build automation will download,
+verify, and extract these archives before building. These archives
+provide a self-contained Gradle and Maven repository so that machines
+don't need to fetch additional Maven dependencies at build time.
+(Gradle and the downloaded Maven dependencies can both be
+redistributed publicly.)
+
+Archiving the Gradle executable is straightforward, but archiving a
+local Maven repository is not. Therefore, a toolchain job exists for
+producing the required archives: ``android-gradle-dependencies``. The
+job runs in a container based on a custom Docker image and spawns a
+Sonatype Nexus proxying Maven repository process in the background.
+The job builds Firefox for Android using Gradle and the in-tree Gradle
+configuration rooted at ``build.gradle``. The spawned proxying Maven
+repository downloads external dependencies and collects them. After
+the Gradle build completes, the job archives the Gradle distribution used
+for the build and the downloaded Maven repository, and exposes them as
+Taskcluster artifacts.
+
+To update the version of Gradle in the archive produced, update
+``gradle/wrapper/gradle-wrapper.properties``. Be sure to also update
+the SHA256 checksum to prevent poisoning the build machines!
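+
+For reference, the relevant entries in
+``gradle/wrapper/gradle-wrapper.properties`` look roughly like the sketch
+below; the version and checksum shown are placeholders, not real values:
+
+.. code-block:: properties
+
+   # Placeholder values; use the real distribution URL and the SHA256
+   # checksum of that exact distribution archive.
+   distributionUrl=https\://services.gradle.org/distributions/gradle-8.0-bin.zip
+   distributionSha256Sum=<sha256-of-the-distribution-zip>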
+
+To update the versions of Gradle dependencies used, update
+``dependencies`` sections in the in-tree Gradle configuration rooted
+at ``build.gradle``. Once you are confident your changes build
+locally, push a fresh build to try. The `android-gradle-dependencies`
+toolchain should run automatically, fetching your new dependencies and
+wiring them into the appropriate try build jobs.
+
+To update the version of Sonatype Nexus, update the `sonatype-nexus`
+`fetch` task definition.
+
+To modify the Sonatype Nexus configuration, typically to proxy a new
+remote Maven repository, modify
+`taskcluster/scripts/misc/android-gradle-dependencies/nexus.xml`.
+
+There is also a toolchain job that fetches the Android SDK and related
+packages. To update the versions of the packages fetched, modify
+``python/mozboot/mozboot/android-packages.txt`` and update the various
+in-tree versions accordingly.
diff --git a/build/docs/unified-builds.rst b/build/docs/unified-builds.rst
new file mode 100644
index 0000000000..b0e93b9e68
--- /dev/null
+++ b/build/docs/unified-builds.rst
@@ -0,0 +1,55 @@
+.. _unified-builds:
+
+==============
+Unified Builds
+==============
+
+The Firefox build system uses the technique of "unified builds" (or elsewhere
+called "`unity builds <https://en.wikipedia.org/wiki/Unity_build>`_") to
+improve compilation performance. Rather than compiling source files individually,
+groups of files in the same directory are concatenated together, then compiled once
+in a single batch.
+
+Unified builds can be configured using the ``UNIFIED_SOURCES`` variable in ``moz.build`` files.
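+
+A minimal ``moz.build`` sketch (the file names are hypothetical):
+
+.. code-block:: python
+
+   # Hypothetical moz.build entry: these files are concatenated into unified
+   # chunks and each chunk is compiled once.
+   UNIFIED_SOURCES += [
+       "ExampleModule.cpp",
+       "ExampleWidget.cpp",
+   ]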
+
+.. _unified_build_compilation_failures:
+
+Why are there unrelated compilation failures when I change files?
+=================================================================
+
+Since multiple files are concatenated together in a unified build, it's possible for a change
+in one file to cause the compilation of a seemingly unrelated file to fail.
+This is usually because source files become implicitly dependent on each other for:
+
+* ``#include`` statements
+* ``using namespace ...;`` statements
+* Other symbol imports or definitions
+
+One of the more common causes of unexpected failures is when source files are added or
+removed and the "chunking" changes. There's a limit on the number of files that are combined
+for a single compilation, so sometimes the addition of a new file will cause another one
+to be bumped into a different chunk. If that other chunk doesn't meet the implicit requirements
+of the bumped file, the result is a tough-to-debug compilation failure.
+
+Building outside of the unified environment
+===========================================
+
+As described above, unified builds can cause source files to implicitly depend on each other, which
+not only causes unexpected build failures but can also cause issues when using source-analysis tools.
+To combat this, you can use a "non-unified" build, which attempts to compile as many files
+individually as possible.
+
+To build in non-unified mode, set the following flag in your ``mozconfig``:
+
+``ac_add_options --disable-unified-build``
+
+Other notes
+===========
+
+* Some IDEs (such as VSCode with ``clangd``) build files in standalone mode, so they may show
+ more failures than a ``mach build``.
+* The number of files per chunk can be adjusted in ``moz.build`` files with the
+  ``FILES_PER_UNIFIED_FILE`` variable (a sketch follows this list). Note that changing the
+  chunk size can introduce compilation failures as described
+  :ref:`above<unified_build_compilation_failures>`.
+* We are happy to accept patches that fix problematic unified build chunks (such as by adding
+ includes or namespace annotations).
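+
+A minimal sketch of adjusting the chunk size in a ``moz.build`` file (the value
+and file names shown are arbitrary):
+
+.. code-block:: python
+
+   # Hypothetical moz.build snippet: compile at most 8 source files per
+   # unified chunk in this directory.
+   FILES_PER_UNIFIED_FILE = 8
+   UNIFIED_SOURCES += [
+       "ExampleA.cpp",
+       "ExampleB.cpp",
+   ]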
diff --git a/build/docs/visualstudio.rst b/build/docs/visualstudio.rst
new file mode 100644
index 0000000000..0051f9480f
--- /dev/null
+++ b/build/docs/visualstudio.rst
@@ -0,0 +1,82 @@
+.. _build_visualstudio:
+
+======================
+Visual Studio Projects
+======================
+
+The build system automatically generates Visual Studio project files to aid
+with development, as part of a normal ``mach build`` from the command line.
+
+You can find the solution file at ``$OBJDIR/msvs/mozilla.sln``.
+
+If you want to generate the project files before/without doing a full build,
+running ``./mach configure && ./mach build-backend -b VisualStudio`` will do
+so.
+
+
+Structure of Solution
+=====================
+
+The Visual Studio solution consists of hundreds of projects spanning thousands
+of files. To help with organization, the solution is divided into the following
+trees/folders:
+
+Build Targets
+ This folder contains common build targets. The *full* project is used to
+ perform a full build. The *binaries* project is used to build just binaries.
+ The *visual-studio* project can be built to regenerate the Visual Studio
+ project files.
+
+ Performing the *clean* action on any of these targets will clean the
+ *entire* build output.
+
+Binaries
+ This folder contains common binaries that can be executed from within
+ Visual Studio. If you are building the Firefox desktop application,
+ the *firefox* project will launch firefox.exe. You probably want to set one
+ of these as your startup project.
+
+Libraries
+ This folder contains entries for each static library that is produced as
+ part of the build. These roughly correspond to each directory in the tree
+ containing C/C++. e.g. code from ``dom/base`` will be contained in the
+ ``dom_base`` project.
+
+ These projects don't compile anything themselves; building any project here
+ simply builds the *binaries* build target project.
+
+Updating Project Files
+======================
+
+Either re-running ``./mach build`` or ``./mach build-backend -b VisualStudio``
+will update the Visual Studio files after the tree changes.
+
+Moving Project Files Around
+===========================
+
+The produced Visual Studio solution and project files should be portable.
+If you want to move them to a non-default directory, they should continue
+to work from wherever they are. If they don't, please file a bug.
+
+Invoking mach through Visual Studio
+===================================
+
+It's possible to run mach commands via Visual Studio. There is some light magic
+involved here.
+
+Alongside the Visual Studio project files is a batch script named ``mach.bat``.
+This batch script sets the environment variables present in your *MozillaBuild*
+development environment at the time of Visual Studio project generation
+and invokes *mach* inside an msys shell with the arguments specified to the
+batch script. This script essentially allows you to invoke mach commands
+inside the MozillaBuild environment without having to load MozillaBuild.
+
+Projects currently use the ``mach build`` and ``mach clobber`` commands
+for building and cleaning the tree, respectively. Note that running ``clobber``
+deletes the Visual Studio project files, and running ``build`` recreates them.
+This might cause issues while Visual Studio is running. Thus, a full rebuild is
+currently neither recommended nor supported, but incremental builds should work.
+
+The batch script is not limited to these commands: any mach command can be
+invoked through it. Developers may use this fact to add custom projects and
+commands that invoke other mach commands.
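+
+For example, a sketch of invoking an arbitrary mach command through the
+generated batch script (run from the directory containing ``mach.bat``; the
+command shown is just an illustration):
+
+.. code-block:: shell
+
+   # Runs "mach build" inside the MozillaBuild environment captured at
+   # project-generation time.
+   mach.bat build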