authorDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-28 14:29:10 +0000
committerDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-28 14:29:10 +0000
commit2aa4a82499d4becd2284cdb482213d541b8804dd (patch)
treeb80bf8bf13c3766139fbacc530efd0dd9d54394c /taskcluster/docs
parentInitial commit. (diff)
Adding upstream version 86.0.1.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'taskcluster/docs')
-rw-r--r--  taskcluster/docs/actions.rst | 271
-rw-r--r--  taskcluster/docs/attributes.rst | 435
-rw-r--r--  taskcluster/docs/balrog.rst | 45
-rw-r--r--  taskcluster/docs/caches.rst | 98
-rw-r--r--  taskcluster/docs/config.rst | 35
-rw-r--r--  taskcluster/docs/cron.rst | 102
-rw-r--r--  taskcluster/docs/docker-images.rst | 210
-rw-r--r--  taskcluster/docs/how-tos.rst | 247
-rw-r--r--  taskcluster/docs/img/enableSourceServer.png | bin 0 -> 28002 bytes
-rw-r--r--  taskcluster/docs/img/windbg-srcfix.png | bin 0 -> 17238 bytes
-rw-r--r--  taskcluster/docs/index.rst | 38
-rw-r--r--  taskcluster/docs/kinds.rst | 727
-rw-r--r--  taskcluster/docs/loading.rst | 34
-rw-r--r--  taskcluster/docs/mach.rst | 117
-rw-r--r--  taskcluster/docs/optimization-process.rst | 78
-rw-r--r--  taskcluster/docs/optimization-schedules.rst | 97
-rw-r--r--  taskcluster/docs/optimization.rst | 52
-rw-r--r--  taskcluster/docs/parameters.rst | 256
-rw-r--r--  taskcluster/docs/partials.rst | 123
-rw-r--r--  taskcluster/docs/partner-attribution.rst | 121
-rw-r--r--  taskcluster/docs/partner-repacks.rst | 256
-rw-r--r--  taskcluster/docs/platforms.rst | 199
-rw-r--r--  taskcluster/docs/reference.rst | 12
-rw-r--r--  taskcluster/docs/release-promotion-action.rst | 158
-rw-r--r--  taskcluster/docs/release-promotion.rst | 54
-rw-r--r--  taskcluster/docs/setting-up-an-update-server.rst | 218
-rw-r--r--  taskcluster/docs/signing.rst | 190
-rw-r--r--  taskcluster/docs/task-graph.rst | 37
-rw-r--r--  taskcluster/docs/taskgraph.rst | 239
-rw-r--r--  taskcluster/docs/transforms.rst | 215
-rw-r--r--  taskcluster/docs/try.rst | 153
-rw-r--r--  taskcluster/docs/using-the-mozilla-source-server.rst | 51
-rw-r--r--  taskcluster/docs/versioncontrol.rst | 108
33 files changed, 4976 insertions, 0 deletions
diff --git a/taskcluster/docs/actions.rst b/taskcluster/docs/actions.rst
new file mode 100644
index 0000000000..a766c1bb29
--- /dev/null
+++ b/taskcluster/docs/actions.rst
@@ -0,0 +1,271 @@
+Actions
+=======
+
+This document shows how to define an action in-tree such that it shows up in
+supported user interfaces like Treeherder. For details on the interface between
+in-tree logic and external user interfaces, see `the actions.json spec`_.
+
+At a very high level, the process looks like this:
+
+ * The decision task produces an artifact, ``public/actions.json``, indicating
+ what actions are available.
+
+ * A user interface (for example, Treeherder or the Taskcluster tools) consults
+ ``actions.json`` and presents appropriate choices to the user, if necessary
+ gathering additional data from the user, such as the number of times to
+ re-trigger a test case.
+
+ * The user interface follows the action description to carry out the action.
+ In most cases (``action.kind == 'task'``), that entails creating an "action
+ task", including the provided information. That action task is responsible
+ for carrying out the named action, and may create new sub-tasks if necessary
+ (for example, to re-trigger a task).
+
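+For orientation, the sketch below shows roughly the shape of one entry in
+``public/actions.json``. The field names follow `the actions.json spec`_, but
+the values here are invented::
+
+    {
+      "kind": "hook",
+      "name": "retrigger",
+      "title": "Retrigger",
+      "description": "Create a clone of the task",
+      "context": [{"platform": "linux"}],
+      "schema": {"type": "object"},
+      "hookGroupId": "project-gecko",
+      "hookId": "in-tree-action-3-generic/abc123"
+    }
+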
+Defining Action Tasks
+---------------------
+
+There is one option for defining actions: creating a callback action.
+A callback action automatically defines an action task that will invoke a
+Python function of your devising.
+
+Creating a Callback Action
+--------------------------
+
+.. note::
+
+ You can generate ``actions.json`` on the command line with ``./mach taskgraph actions``.
+
+A *callback action* is an action that calls back into in-tree logic. That is,
+you register the action with a name, title, description, context, input schema
+and a Python callback. When the action is triggered in a user interface,
+input matching the schema is collected and passed to a new task, which then calls
+your Python callback, enabling it to do pretty much anything it needs to.
+
+To create a new callback action you must create a file
+``taskcluster/taskgraph/actions/my-action.py``, which at minimum contains::
+
+ from __future__ import absolute_import, print_function, unicode_literals
+
+ from .registry import register_callback_action
+
+ @register_callback_action(
+ name='hello',
+ title='Say Hello',
+ symbol='hw', # Show the callback task in treeherder as 'hw'
+ description="Simple **proof-of-concept** callback action",
+ order=10000, # Order in which it should appear relative to other actions
+ )
+ def hello_world_action(parameters, graph_config, input, task_group_id, task_id, task):
+ print("Hello was triggered from taskGroupId: {}".format(task_group_id))
+
+The arguments are:
+
+``parameters``
+ an instance of ``taskgraph.parameters.Parameters``, carrying decision task parameters from the original decision task.
+
+``graph_config``
+ an instance of ``taskgraph.config.GraphConfig``, carrying configuration for this tree
+
+``input``
+ the input from the user triggering the action (if any)
+
+``task_group_id``
+ the target task group on which this action should operate
+
+``task_id``
+ the target task on which this action should operate (or None if it is operating on the whole group)
+
+``task``
+ the definition of the target task (or None, as for ``task_id``)
+
+The example above defines an action that is available in the context-menu for
+the entire task-group (result-set or push in Treeherder terminology). To create
+an action that shows up in the context menu for a task we would specify the
+``context`` parameter.
+
+The ``order`` value is the sort key defining the order of actions in the
+resulting ``actions.json`` file. If multiple actions have the same name and
+match the same task, the action with the smallest ``order`` will be used.
+
+Setting the Action Context
+..........................
+The context parameter should be a list of tag-sets, such as
+``context=[{"platform": "linux"}]``, which will make the task show up in the
+context-menu for any task with ``task.tags.platform = 'linux'``. Below are
+some examples of context parameters and the resulting conditions on
+``task.tags`` (the tags used below are just illustrative).
+
+``context=[{"platform": "linux"}]``:
+ Requires ``task.tags.platform = 'linux'``.
+``context=[{"kind": "test", "platform": "linux"}]``:
+ Requires ``task.tags.platform = 'linux'`` **and** ``task.tags.kind = 'test'``.
+``context=[{"kind": "test"}, {"platform": "linux"}]``:
+ Requires ``task.tags.platform = 'linux'`` **or** ``task.tags.kind = 'test'``.
+``context=[{}]``:
+ Requires nothing and the action will show up in the context menu for all tasks.
+``context=[]``:
+ Is the same as not setting the context parameter, which will make the action
+ show up in the context menu for the task-group.
+ (i.e., the action is not specific to some task)
+
+The example action below will be shown in the context-menu for tasks with
+``task.tags.platform = 'linux'``::
+
+    from .registry import register_callback_action
+
+ @register_callback_action(
+ name='retrigger',
+ title='Retrigger',
+ symbol='re-c', # Show the callback task in treeherder as 're-c'
+ description="Create a clone of the task",
+ order=1,
+ context=[{'platform': 'linux'}]
+ )
+ def retrigger_action(parameters, graph_config, input, task_group_id, task_id, task):
+ # input will be None
+        print("Retriggering: {}".format(task_id))
+        print("task definition: {}".format(task))
+
+When the ``context`` parameter is set, the ``task_id`` and ``task`` parameters
+will be provided to the callback. In this case the ``task_id`` and ``task``
+parameters will be the ``taskId`` and *task definition* of the task from whose
+context-menu the action was triggered.
+
+Typically, the ``context`` parameter is used for actions that operate on
+tasks, such as retriggering, running a specific test case, creating a loaner,
+bisection, etc. You can think of the context as a place the action should
+appear, but it's also very much a form of input the action can use.
+
+
+Specifying an Input Schema
+..........................
+In all examples so far the ``input`` parameter for the callbacks has been
+``None``. To make an action that takes input you must specify an input schema.
+This is done by passing a JSON schema as the ``schema`` parameter.
+
+When designing a schema for the input it is important to exploit as many of the
+JSON schema validation features as reasonably possible. Furthermore, it is
+*strongly* encouraged that the ``title`` and ``description`` properties in
+JSON schemas is used to provide a detailed explanation of what the input
+value will do. Authors can reasonably expect JSON schema ``description``
+properties to be rendered as markdown before being presented.
+
+The example below illustrates how to specify an input schema. Notice that while
+this example doesn't specify a ``context`` it is perfectly legal to specify
+both ``input`` and ``context``::
+
+    from .registry import register_callback_action
+
+ @register_callback_action(
+ name='run-all',
+ title='Run All Tasks',
+ symbol='ra-c', # Show the callback task in treeherder as 'ra-c'
+ description="**Run all tasks** that have been _optimized_ away.",
+ order=1,
+ input={
+ 'title': 'Action Options',
+ 'description': 'Options for how you wish to run all tasks',
+ 'properties': {
+ 'priority': {
+                    'title': 'priority',
+ 'description': 'Priority that should be given to the tasks',
+ 'type': 'string',
+ 'enum': ['low', 'normal', 'high'],
+ 'default': 'low',
+ },
+ 'runTalos': {
+                    'title': 'Run Talos',
+ 'description': 'Do you wish to also include talos tasks?',
+ 'type': 'boolean',
+                    'default': False,
+ }
+ },
+ 'required': ['priority', 'runTalos'],
+ 'additionalProperties': False,
+ },
+ )
+    def run_all_action(parameters, graph_config, input, task_group_id, task_id, task):
+        print("Create all pruned tasks with priority: {}".format(input['priority']))
+        if input['runTalos']:
+            print("Also running talos jobs...")
+
+When the ``schema`` parameter is given the callback will always be called with
+an ``input`` parameter that satisfies the previously given JSON schema.
+It is encouraged to set ``additionalProperties: false``, as well as specifying
+all properties as ``required`` in the JSON schema. Furthermore, it's good
+practice to provide ``default`` values for properties, as user interface generators
+will often take advantage of such properties.
+
+It is possible to specify the ``schema`` parameter as a callable that returns
+the JSON schema. It will be called with a keyword parameter ``graph_config``
+with the :ref:`graph configuration <taskgraph-graph-config>` of the current
+taskgraph.
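+
+For example, a minimal sketch (the schema contents here are invented)::
+
+    @register_callback_action(
+        name='example',
+        title='Example',
+        symbol='ex',
+        description="Illustrates a callable ``schema``",
+        order=100,
+        schema=lambda graph_config: {
+            'type': 'object',
+            'properties': {
+                'times': {'type': 'integer', 'default': 1},
+            },
+        },
+    )
+    def example_action(parameters, graph_config, input, task_group_id, task_id, task):
+        print("Running {} time(s)".format(input.get('times', 1)))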
+
+Once you have specified input and context as applicable for your action, you
+can do pretty much anything you want from within your callback, whether that is
+creating one or more tasks or running a specific piece of code like a test.
+
+Conditional Availability
+........................
+
+The decision parameters ``taskgraph.parameters.Parameters`` passed to
+the callback are also available when the decision task generates the list of
+actions to be displayed in the user interface. When registering an action
+callback, the ``available`` option can be used to specify a callable
+which, given the decision parameters, determines if the action should be available.
+The feature is illustrated below::
+
+    from .registry import register_callback_action
+
+ @register_callback_action(
+ name='hello',
+ title='Say Hello',
+ symbol='hw', # Show the callback task in treeherder as 'hw'
+ description="Simple **proof-of-concept** callback action",
+ order=2,
+ # Define an action that is only included if this is a push to try
+ available=lambda parameters: parameters.get('project', None) == 'try',
+ )
+ def try_only_action(parameters, graph_config, input, task_group_id, task_id, task):
+        print("My try-only action")
+
+Properties of ``parameters`` are documented in the
+:doc:`parameters section <parameters>`. You can also examine the
+``parameters.yml`` artifact created by decision tasks.
+
+Context can be similarly conditionalized by passing a function which returns
+the appropriate context::
+
+ context=lambda params:
+ [{}] if int(params['level']) < 3 else [{'worker-implementation': 'docker-worker'}],
+
+Creating Tasks
+--------------
+
+The ``create_tasks`` utility function provides a full-featured way to create
+new tasks. Its features include creating prerequisite tasks, operating in a
+"testing" mode with ``./mach taskgraph test-action-callback``, and generating
+artifacts that can be used by later action tasks to figure out what happened.
+See the source for more detailed documentation.
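+
+A heavily simplified sketch of how a callback might use it follows; the exact
+signatures live in ``taskcluster/taskgraph/actions/util.py``, so treat the
+argument lists below as assumptions and check the source::
+
+    from .util import create_tasks, fetch_graph_and_labels
+
+    def callback(parameters, graph_config, input, task_group_id, task_id, task):
+        decision_task_id, full_task_graph, label_to_taskid = fetch_graph_and_labels(
+            parameters, graph_config)
+        # request the target task (by label); create_tasks also creates
+        # any prerequisite tasks it depends on
+        to_run = [task['metadata']['name']]
+        create_tasks(graph_config, to_run, full_task_graph, label_to_taskid,
+                     parameters, decision_task_id)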
+
+The artifacts are:
+
+``task-graph.json`` (or ``task-graph-<suffix>.json``):
+ The graph of all tasks created by the action task. Includes tasks
+ created to satisfy requirements.
+``to-run.json`` (or ``to-run-<suffix>.json``):
+ The set of tasks that the action task requested to build. This does not
+ include the requirements.
+``label-to-taskid.json`` (or ``label-to-taskid-<suffix>.json``):
+ This is the mapping from label to ``taskid`` for all tasks involved in
+ the task-graph. This includes dependencies.
+
+More Information
+----------------
+
+For further details on actions in general, see `the actions.json spec`_.
+The hooks used for in-tree actions are set up by `ci-admin`_ based on configuration in `ci-configuration`_.
+
+.. _the actions.json spec: https://firefox-ci-tc.services.mozilla.com/docs/manual/tasks/actions/spec
+.. _ci-admin: http://hg.mozilla.org/ci/ci-admin/
+.. _ci-configuration: http://hg.mozilla.org/ci/ci-configuration/
diff --git a/taskcluster/docs/attributes.rst b/taskcluster/docs/attributes.rst
new file mode 100644
index 0000000000..8de147f672
--- /dev/null
+++ b/taskcluster/docs/attributes.rst
@@ -0,0 +1,435 @@
+===============
+Task Attributes
+===============
+
+Tasks can be filtered, for example to support "try" pushes which only perform a
+subset of the task graph or to link dependent tasks. This filtering is the
+difference between a full task graph and a target task graph.
+
+Filtering takes place on the basis of attributes. Each task has a dictionary
+of attributes and filters over those attributes can be expressed in Python. A
+task may not have a value for every attribute.
+
+The attributes, and acceptable values, are defined here. In general, attribute
+names and values are the short, lower-case form, with underscores.
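+
+As an illustrative sketch, a filter is just a Python predicate over a task's
+attributes (this example is hypothetical, not an in-tree filter):
+
+.. code:: python
+
+    def filter_debug_tests(task):
+        # keep only debug test tasks
+        attrs = task.attributes
+        return attrs.get('kind') == 'test' and attrs.get('build_type') == 'debug'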
+
+kind
+====
+
+A task's ``kind`` attribute gives the name of the kind that generated it, e.g.,
+``build`` or ``spidermonkey``.
+
+run_on_projects
+===============
+
+The projects where this task should be in the target task set. This is how
+requirements like "only run this on autoland" get implemented.
+
+.. note::
+
+ Please use this configuration. Running a job for all projects can quickly add up
+   in terms of cost while not providing any value for some projects.
+
+`run-on-projects` can use either aliases or project names.
+
+These are the aliases:
+
+ * `integration` -- integration repository (autoland)
+ * `trunk` -- integration repository plus mozilla-central
+ * `release` -- release repositories (beta, release, esr) including mozilla-central
+ * `all` -- everywhere (the default)
+
+Project names are the repositories. They can be:
+
+* `autoland`
+* `mozilla-central`
+* `mozilla-beta`
+* `mozilla-release`
+* `mozilla-esr78`
+* ... A partial list can be found in taskcluster/taskgraph/util/attributes.py
+
+For try, this attribute applies only if ``-p all`` is specified. All jobs can
+be specified by name regardless of ``run_on_projects``.
+
+If ``run_on_projects`` is set to an empty list, then the task will not run
+anywhere, unless its build platform is specified explicitly in try syntax.
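+
+As an illustrative sketch, a kind's job stanza might set this as follows (the
+job name is hypothetical):
+
+.. code:: yaml
+
+    my-task:
+        run-on-projects: ['trunk', 'mozilla-beta']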
+
+
+.. note::
+
+ As `try` pushes don't use filter_for_projects by design, there isn't a way
+ to define that a task will run on `try`.
+
+
+.. note::
+
+   A given task `taskA` may not respect `run-on-projects` if there is another
+   task `taskB` which is scheduled to run (such as via run-on-projects) and
+   which depends on `taskA`: by the nature of `taskB` running, `taskA` must
+   run as well.
+
+ See `bug 1640603 <https://bugzilla.mozilla.org/show_bug.cgi?id=1640603#c5>`_ as example.
+
+run_on_hg_branches
+==================
+
+On a given project, the mercurial branch where this task should be in the target
+task set. This is how requirements like "only run on this RELBRANCH" get implemented.
+These are either a regular expression matching a branch name (e.g.: ``GECKOVIEW_\d+_RELBRANCH``)
+or the following alias:
+
+ * `all` -- everywhere (the default)
+
+Like ``run_on_projects``, the same behavior applies if it is set to an empty list.
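+
+As an illustrative sketch (hypothetical stanza):
+
+.. code:: yaml
+
+    my-task:
+        run-on-hg-branches: ['GECKOVIEW_\d+_RELBRANCH']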
+
+task_duplicates
+===============
+
+This is used to indicate that we want multiple copies of the task created.
+This feature is used to track down intermittent job failures.
+
+If this value is set to N, the task-creation machinery will create a total of N
+copies of the task. Only the first copy will be included in the taskgraph
+output artifacts, although all tasks will be contained in the same taskGroup.
+
+While most attributes are considered read-only, target task methods may alter
+this attribute of tasks they include in the target set.
+
+build_platform
+==============
+
+The build platform defines the platform for which the binary was built. It is
+set for both build and test jobs, although test jobs may have a different
+``test_platform``.
+
+build_type
+==========
+
+The type of build being performed. This is a subdivision of ``build_platform``,
+used for different kinds of builds that target the same platform. Values are
+
+ * ``debug``
+ * ``opt``
+
+test_platform
+=============
+
+The test platform defines the platform on which tests are run. It is only
+defined for test jobs and may differ from ``build_platform`` when the same binary
+is tested on several platforms (for example, on several versions of Windows).
+This applies for both talos and unit tests.
+
+Unlike build_platform, the test platform is represented in a slash-separated
+format, e.g., ``linux64/opt``.
+
+unittest_suite
+==============
+
+This is the unit test suite being run in a unit test task. For example,
+``mochitest`` or ``cppunittest``.
+
+unittest_category
+=================
+
+This is the high-level category of test the suite corresponds to. This is
+usually the test harness used to run the suite.
+
+unittest_try_name
+=================
+
+This is the name used to refer to a unit test via try syntax. It
+may not match ``unittest_suite``.
+
+unittest_variant
+================
+
+The configuration variant the test suite is running with. If set, this usually
+means the tests are running with a special pref enabled. These are defined in
+``taskgraph.transforms.tests.TEST_VARIANTS``.
+
+talos_try_name
+==============
+
+This is the name used to refer to a talos job via try syntax.
+
+raptor_try_name
+===============
+
+This is the name used to refer to a raptor job via try syntax.
+
+job_try_name
+============
+
+This is the name used to refer to a "job" via try syntax (``-j``). Note that for
+some kinds, ``-j`` also matches against ``build_platform``.
+
+test_chunk
+==========
+
+This is the chunk number of a chunked test suite. Note that this is a string!
+
+test_manifests
+==============
+
+A list of the test manifests that run in this task.
+
+e10s
+====
+
+For test suites which distinguish whether they run with or without e10s, this
+boolean value identifies this particular run.
+
+image_name
+==========
+
+For the ``docker_image`` kind, this attribute contains the docker image name.
+
+nightly
+=======
+
+Signals whether the task is part of a nightly graph. Useful when filtering
+out nightly tasks from the full task set at the target stage.
+
+shippable
+=========
+Signals whether the task is considered "shippable", meaning it should get signed
+and is OK to be used for nightlies or releases.
+
+all_locales
+===========
+
+For the ``l10n`` and ``shippable-l10n`` kinds, this attribute contains the list
+of relevant locales for the platform.
+
+all_locales_with_changesets
+===========================
+
+Contains a dict of l10n changesets, mapped by locales (same as in ``all_locales``).
+
+l10n_chunk
+==========
+For the ``l10n`` and ``shippable-l10n`` kinds, this attribute contains the chunk
+number of the job. Note that this is a string!
+
+chunk_locales
+=============
+For the ``l10n`` and ``shippable-l10n`` kinds, this attribute contains an array of
+the individual locales this chunk is responsible for processing.
+
+locale
+======
+For jobs that operate on only one locale, we set the attribute ``locale`` to the
+specific locale involved. Currently this is only in l10n versions of the
+``beetmover`` and ``balrog`` kinds.
+
+signed
+======
+Signals that the output of this task contains signed artifacts.
+
+stub-installer
+==============
+Signals to the build system that this build is expected to have a stub installer
+present, and informs follow-on tasks to expect it.
+
+repackage_type
+==============
+This is the type of repackage. Can be ``repackage`` or
+``repackage_signing``.
+
+fetch-artifact
+==============
+
+For fetch jobs, this is the path to the artifact for that fetch operation.
+
+fetch-alias
+===========
+An alias that can be used instead of the real fetch job name in fetch
+stanzas for jobs.
+
+toolchain-artifact
+==================
+For toolchain jobs, this is the path to the artifact for that toolchain.
+
+toolchain-alias
+===============
+An alias that can be used instead of the real toolchain job name in fetch
+stanzas for jobs.
+
+always_target
+=============
+
+Tasks with this attribute will be included in the ``target_task_graph`` if
+``parameters["tasks_for"]`` is ``hg-push``, regardless of any target task
+filtering that occurs. When a task is included in this manner (i.e it otherwise
+would have been filtered out), it will be considered for optimization even if
+the ``optimize_target_tasks`` parameter is False.
+
+This is meant to be used for tasks which a developer would almost always want to
+run. Typically these tasks will be short running and have a high risk of causing
+a backout. For example ``lint`` or ``python-unittest`` tasks.
+
+shipping_product
+================
+For release promotion jobs, this is the product we are shipping.
+
+shipping_phase
+==============
+For release promotion jobs, this is the shipping phase (build, promote, push, ship).
+During the build phase, we build and sign shippable builds. During the promote phase,
+we generate l10n repacks and push to the candidates directory. During the push phase,
+we push to the releases directory. During the ship phase, we update bouncer, push to
+Google Play, version bump, mark as shipped in ship-it.
+
+Using the "snowman model", we depend on previous graphs if they're defined. So if we
+ask for a ``push`` (the head of the snowman) and point at the body and base, we only
+build the head. If we don't point at the body and base, we build the whole snowman
+(build, promote, push).
+
+artifact_prefix
+===============
+Most taskcluster artifacts are public, so we've hardcoded ``public/build`` in a
+lot of places. To support private artifacts, we've moved this to the
+``artifact_prefix`` attribute. It will default to ``public/build`` but will be
+overridable per-task.
+
+artifact_map
+===============
+For beetmover jobs, this indicates which yaml file should be used to
+generate the upstream artifacts and payload instructions to the task.
+
+batch
+=====
+Used by `perftest` to indicate that a task can be run as a batch.
+
+
+enable-full-crashsymbols
+========================
+In automation, full crashsymbol package generation is normally disabled. For
+build kinds where the full crashsymbols should be enabled, set this attribute
+to True. The full symbol packages will then be generated and uploaded on
+release branches and on try.
+
+skip-upload-crashsymbols
+========================
+Shippable/nightly builds are normally required to set enable-full-crashsymbols,
+but in some limited corner cases (universal builds), that is not wanted, because
+the symbols are uploaded independently already.
+
+cron
+====
+Indicates that a task is meant to be run via cron tasks, and should not be run
+on push.
+
+cached_task
+===========
+Some tasks generate artifacts that are cached between pushes. This is a
+dictionary with the type and name of the cache, and the unique string used to
+identify the current version of the artifacts. See :py:mod:`taskgraph.util.cached_task`.
+
+.. code:: yaml
+
+ cached_task:
+ digest: 66dfc2204600b48d92a049b6a18b83972bb9a92f9504c06608a9c20eb4c9d8ae
+ name: debian7-base
+ type: docker-images.v2
+
+eager_indexes
+=============
+A list of strings of indexes to populate before the task ever completes. We want
+some tasks (e.g. cached tasks) to exist in the index before they even run or
+complete. Our current use is to allow us to depend on an unfinished cached task
+in future pushes. This avoids extra overhead from multiple tasks running, and
+can make results available just a bit earlier.
+
+required_signoffs
+=================
+A list of release signoffs that this kind requires, should the release also
+require these signoffs. For example, ``mar-signing`` signoffs may be required
+by some releases in the future; for any releases that require ``mar-signing``
+signoffs, the kinds that also require that signoff are marked with this
+attribute.
+
+update-channel
+==============
+The update channel the build is configured to use.
+
+mar-channel-id
+==============
+The mar-channel-id the build is configured to use.
+
+accepted-mar-channel-ids
+========================
+The mar-channel-ids this build will accept updates to. It should usually be the
+same as the ``mar-channel-id`` value. If more than one ID is needed, use a
+comma-separated list of values.
+
+openh264_rev
+============
+Only used for openh264 plugin builds, used to signify the revision (and thus inform artifact name) of the given build.
+
+code-review
+===========
+If a task sets this boolean attribute to `true`, it will be processed by the
+code review bot and will run for every new Phabricator diff.
+Any supported and detected issue will be automatically reported on the
+Phabricator revision.
+
+resource-monitor
+================
+If a task sets this boolean attribute to `true`, it will collect CPU, memory,
+and - if available - disk and network I/O usage by running the resource-monitor
+utility, provided through fetches.
+
+retrigger
+=========
+Whether the task can be retriggered, or if it needs to be re-run.
+
+disable-push-apk
+================
+Some GeckoView-only Android tasks produce APKs that shouldn't be
+pushed to the Google Play Store. Set this to ``true`` to disable
+pushing.
+
+disable-build-signing
+=====================
+Some GeckoView-only tasks produce APKs, but not APKs that should be
+signed. Set this to ``true`` to disable APK signing.
+
+enable-build-signing
+====================
+We enable build-signing for ``shippable``, ``nightly``, and ``enable-build-signing`` tasks.
+
+run-visual-metrics
+==================
+If set to true, will run the visual metrics task on the provided
+video files.
+
+skip-verify-test-packaging
+==========================
+If set to true, this task will not be checked to see that
+MOZ_AUTOMATION_PACKAGE_TESTS is set correctly based on whether or not the task
+has dependent tests. This should only be used in very unique situations, such
+as Windows AArch64 builds that copy test packages between build tasks.
+
+geckodriver
+===========
+If non-empty, declares that the (toolchain) task is a `geckodriver`
+task that produces a binary that should be signed.
+
+rebuild-on-release
+==================
+If true, the digest for this task will also depend on if the branch is a
+release branch. This will cause tasks like toolchains to be rebuilt as they
+move from e.g. autoland to mozilla-central.
+
+local-toolchain
+===============
+This toolchain is used for local development, so should be built on trunk, even
+if it does not have any in-graph consumers.
+
+artifact-build
+==============
+
+This build is an artifact build.
+
+This deliberately excludes builds that are implemented using the artifact build
+machinery, but are not primarily intended to short-circuit build time. In
+particular the Windows aarch64 builds are not marked this way.
diff --git a/taskcluster/docs/balrog.rst b/taskcluster/docs/balrog.rst
new file mode 100644
index 0000000000..cd00fbb325
--- /dev/null
+++ b/taskcluster/docs/balrog.rst
@@ -0,0 +1,45 @@
+Balrog in Release Promotion
+===========================
+
+Overview
+--------
+Balrog is Mozilla's update server. It is responsible for delivering newer versions of Firefox to existing Firefox installations. If you are not already, it would be useful to be familiar with Balrog's core concepts before continuing with this doc. You can find that information on `Balrog's official documentation`_.
+
+The basic interactions that Release Promotion has with Balrog are as follows:
+
+#. Submit new release metadata to Balrog with a number of ``balrog`` tasks and the ``release-balrog-submit-toplevel`` task.
+#. Update test channels to point at the new release in the ``release-balrog-submit-toplevel`` task.
+#. Verify the new release updates with ``release-update-verify`` and ``release-final-verify`` tasks.
+#. Schedule the new release to be shipped in the ``release-balrog-scheduling`` task.
+
+Submit New Release Metadata
+---------------------------
+Balrog requires many different pieces of information before it can ship updates. Most of this information revolves around update file (MAR) metadata (hash, filesize, target platform, target locale). This information is submitted by ``balrog`` tasks.
+
+We also submit some more general information about releases (version number, MAR url templates, release name, etc.) as part of the ``release-balrog-submit-toplevel`` task.
+
+All balrog submission is done by `balrogscript workers`_, and happens in the ``promote`` phase.
+
+Update Test Channels
+--------------------
+Balrog has "test" channels that we use to allow verification of new release updates prior to shipping. The ``release-balrog-submit-toplevel`` task is responsible for updating these test channels whenever we prepare a new release. This happens in the ``promote`` phase.
+
+Verify the Release
+------------------
+Once a release is live on a test channel, ``release-update-verify`` begins and performs some sanity checks. This happens in the ``promote`` phase.
+
+After a release has been pushed to CDNs, ``release-final-verify`` runs and performs some additional checks. This happens in the ``push`` phase.
+
+Schedule Shipping
+-----------------
+When we're ready to ship a release we need to let Balrog know about it by scheduling a change to the appropriate Balrog rule. If ``release_eta`` is set it will be used as the ship date and time. If not, the release will be scheduled for shipping 5 minutes in the future. In either case, signoff will need to be done in Balrog by multiple parties before the release is actually made live.
+
+This step is done by the ``release-balrog-scheduling`` task in the ``ship`` phase.
+
+``secondary`` tasks
+-------------------
+You may have noticed ``secondary`` variants of the ``release-balrog-submit-toplevel``, ``release-update-verify``, ``release-final-verify``, and ``release-balrog-scheduling`` tasks. These fulfill the same function as their primary counterparts, but for the "beta" update channel. They are only used when we build Release Candidates.
+
+
+.. _Balrog's official documentation: http://mozilla-balrog.readthedocs.io/en/latest/
+.. _balrogscript workers: https://github.com/mozilla-releng/balrogscript
diff --git a/taskcluster/docs/caches.rst b/taskcluster/docs/caches.rst
new file mode 100644
index 0000000000..dcda4102dd
--- /dev/null
+++ b/taskcluster/docs/caches.rst
@@ -0,0 +1,98 @@
+.. _taskcluster_caches:
+
+Caches
+======
+
+There are various caches used by the in-tree tasks. This page attempts to
+document them and their appropriate use.
+
+Caches are essentially isolated filesystems that are persisted between
+tasks. For example, if 2 tasks run on a worker - one after the other -
+and both tasks request the same cache, the subsequent task will be
+able to see files in the cache that were created by the first task.
+It's also worth noting that TaskCluster workers ensure a cache can only
+be used by 1 task at a time. If a worker is simultaneously running
+multiple tasks requesting the same named cache, the worker will
+maintain multiple instances of the same-named cache.
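+
+As an illustrative sketch, a task declares the caches it wants in its payload;
+with the in-tree task transforms this looks roughly like the following (the
+cache name here is hypothetical):
+
+.. code-block:: yaml
+
+    worker:
+        caches:
+            - type: persistent
+              name: level-3-example-workspace
+              mount-point: /builds/worker/workspace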
+
+Caches and ``run-task``
+-----------------------
+
+``run-task`` is our generic task wrapper script. It performs common activities,
+like ensuring a version control checkout is present.
+
+One of the roles of ``run-task`` is to verify and sanitize caches.
+It does this by storing state in a cache on its first use. If the recorded
+*capabilities* of an existing cache don't match expectations for the
+current task, ``run-task`` bails. This ensures that caches are only
+reused by tasks with similar execution *profiles*. This prevents
+accidental cache use across incompatible tasks. It also allows run-task
+to make assumptions about the state of caches, which can help avoid
+costly operations.
+
+In addition, the hash of ``run-task`` is used to derive the cache name.
+So any time ``run-task`` changes, a new set of caches are used. This
+ensures that any backwards incompatible changes or bug fixes to
+``run-task`` result in fresh caches.
+
+Some caches are reserved for use with run-task. That property will be denoted
+below.
+
+Common Caches
+-------------
+
+Version Control Caches
+::::::::::::::::::::::
+
+``level-{{level}}-checkouts-{{hash}}``
+ This cache holds version control checkouts, each in a subdirectory named
+ after the repo (e.g., ``gecko``).
+
+ Checkouts should be read-only. If a task needs to create new files from
+ content of a checkout, this content should be written in a separate
+ directory/cache (like a workspace).
+
+ This cache name pattern is managed by ``run-task`` and must only be
+ used with ``run-task``.
+
+``level-{{level}}-checkouts-sparse-{{hash}}``
+ This is like the above but is used when the checkout is sparse (contains
+ a subset of files).
+
+``level-{{level}}-checkouts-{{version}}`` (deprecated)
+ This cache holds version control checkouts, each in a subdirectory named
+ after the repo (e.g., ``gecko``).
+
+ Checkouts should be read-only. If a task needs to create new files from
+ content of a checkout, this content should be written in a separate
+ directory/cache (like a workspace).
+
+ A ``version`` parameter appears in the cache name to allow
+ backwards-incompatible changes to the cache's behavior.
+
+   The ``hg-store`` contains a `shared store <https://www.mercurial-scm.org/wiki/ShareExtension>`_
+   that is used by ``hg robustcheckout``. If you are using ``run-task`` you
+ should set the ``HG_STORE_PATH`` environment variable to point to this
+ directory. If you are using ``hg robustcheckout``, pass this directory to the
+ ``--sharebase`` option.
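+
+   For example (an illustrative invocation; the URL, revision variable and
+   destination are placeholders)::
+
+       hg robustcheckout --sharebase "$HG_STORE_PATH" \
+           --revision "$GECKO_HEAD_REV" \
+           https://hg.mozilla.org/mozilla-unified /builds/worker/checkouts/gecko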
+
+Workspace Caches
+::::::::::::::::
+
+``level-{{level}}-*-workspace``
+ These caches (of various names typically ending with ``workspace``)
+ contain state to be shared between task invocations. Use cases are
+ dependent on the task.
+
+Other
+:::::
+
+``level-{{level}}-tooltool-cache-{{hash}}``
+ Tooltool invocations should use this cache. Tooltool will store files here
+ indexed by their hash.
+
+ This cache name pattern is reserved for use with ``run-task`` and must only
+   be used by ``run-task``.
+
+``tooltool-cache`` (deprecated)
+ Legacy location for tooltool files. Use the per-level one instead.
diff --git a/taskcluster/docs/config.rst b/taskcluster/docs/config.rst
new file mode 100644
index 0000000000..fefe1265b9
--- /dev/null
+++ b/taskcluster/docs/config.rst
@@ -0,0 +1,35 @@
+Taskcluster Configuration
+=========================
+
+Taskcluster requires configuration of many resources to correctly support Firefox CI.
+Many of those span multiple projects (branches) instead of riding the trains.
+
+Global Settings
+---------------
+
+The data behind configuration of all of these resources is kept in the `ci-configuration`_ repository.
+The files in this repository are intended to be self-documenting, but one of particular interest is ``projects.yml``, which describes the needs of each project.
+
+Configuration Implementation
+----------------------------
+
+Translation of `ci-configuration`_ to Taskcluster resources, and updating those resources, is handled by `ci-admin`_.
+This is a small Python application with commands to generate the expected configuration, compare the expected to actual configuration, and apply the expected configuration.
+Only the ``apply`` subcommand requires elevated privileges.
+
+This tool automatically annotates all managed resources with "DO NOT EDIT", warning users of the administrative UI that changes made through the UI may be reverted.
+
+Changing Configuration
+----------------------
+
+To change Taskcluster configuration, make patches to `ci-configuration`_ or (if necessary) `ci-admin`_, using the Firefox Build System :: Task Configuration Bugzilla component.
+Part of the landing process is for someone with administrative scopes to apply the resulting configuration.
+
+You can test your patches with something like this, assuming ``.`` is a checkout of the `ci-configuration`_ repository containing your changes:
+
+.. code-block:: shell
+
+ ci-admin diff --ci-configuration-directory .
+
+.. _ci-configuration: https://hg.mozilla.org/ci/ci-configuration/file
+.. _ci-admin: https://hg.mozilla.org/ci/ci-admin/file
diff --git a/taskcluster/docs/cron.rst b/taskcluster/docs/cron.rst
new file mode 100644
index 0000000000..b68c61a427
--- /dev/null
+++ b/taskcluster/docs/cron.rst
@@ -0,0 +1,102 @@
+Periodic Taskgraphs
+===================
+
+The cron functionality allows in-tree scheduling of task graphs that run
+periodically, instead of on a push.
+
+Cron.yml
+--------
+
+In the root of the Gecko directory, you will find ``.cron.yml``. This defines
+the periodic tasks ("cron jobs") run for Gecko. Each specifies a name, what to
+do, and some parameters to determine when the cron job should occur.
+
+See `the schema <https://hg.mozilla.org/ci/ci-admin/file/default/build-decision/src/build_decision/cron/schema.yml>`_
+for details on the format and meaning of this file.
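+
+As an orientation, an entry in ``.cron.yml`` looks roughly like this
+(abbreviated; the job name and values are illustrative - consult the schema
+above for the authoritative format):
+
+.. code-block:: yaml
+
+    jobs:
+        - name: nightly-desktop
+          job:
+              type: decision-task
+              treeherder-symbol: Nd
+              target-tasks-method: nightly_desktop
+          run-on-projects:
+              - mozilla-central
+          when:
+              - {hour: 10, minute: 0}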
+
+How It Works
+------------
+
+The `TaskCluster Hooks Service <https://firefox-ci-tc.services.mozilla.com/hooks>`_
+has a hook configured for each repository supporting periodic task graphs. The
+hook runs every 15 minutes, and the resulting task is referred to as a "cron task".
+That cron task runs the `build-decision
+<https://hg.mozilla.org/ci/ci-admin/file/default/build-decision>`_ image in a
+checkout of the Gecko source tree.
+
+The task reads ``.cron.yml``, then consults the current time (actually the time
+the cron task was created, rounded down to the nearest 15 minutes) and creates
+tasks for any cron jobs scheduled at that time.
+
+Each cron job in ``.cron.yml`` specifies a ``job.type``, corresponding to a
+function responsible for creating TaskCluster tasks when the job runs.
+
+Describing Time
+---------------
+
+This cron implementation understands the following directives when
+describing when to run:
+
+* ``minute``: The minute in which to run, must be in 15 minute increments (see above)
+* ``hour``: The hour of the day in which to run, in 24 hour time.
+* ``day``: The day of the month as an integer, such as `1`, `16`. Be cautious above `28`, remember February.
+* ``weekday``: The day of the week, `Monday`, `Tuesday`, etc. Full length ISO compliant words.
+
+Setting both 'day' and 'weekday' will result in a cron job that runs only when
+both match, and so very rarely; this is usually undesirable.
+
+*Examples*
+
+.. code-block:: yaml
+
+ # Never
+ when: []
+
+ # 4 AM and 4 PM, on the hour, every day.
+ when:
+ - {hour: 16, minute: 0}
+ - {hour: 4, minute: 0}
+
+ # The same as above, on a single line
+ when: [{hour: 16, minute: 0}, {hour: 4, minute: 0}]
+
+ # 4 AM on the second day of every month.
+ when:
+ - {day: 2, hour: 4, minute: 0}
+
+ # Mondays and Thursdays at 10 AM
+ when:
+ - {weekday: 'Monday', hour: 10, minute: 0}
+ - {weekday: 'Thursday', hour: 10, minute: 0}
+
+.. note::
+
+ Times are expressed in UTC (Coordinated Universal Time)
+
+
+Decision Tasks
+..............
+
+For ``job.type`` "decision-task", tasks are created based on
+``.taskcluster.yml`` just like the decision tasks that result from a push to a
+repository. They run with a distinct ``taskGroupId``, and are free to create
+additional tasks comprising a task graph.
+
+Scopes
+------
+
+The cron task runs with the sum of all cron job scopes for the given repo. For
+example, for the "sequoia" project, the scope would be
+``assume:repo:hg.mozilla.org/projects/sequoia:cron:*``. Each cron job creates
+tasks with scopes for that particular job, by name. For example, the
+``check-frob`` cron job on that repo would run with
+``assume:repo:hg.mozilla.org/projects/sequoia:cron:check-frob``.
+
+.. important::
+
+ The individual cron scopes are a useful check to ensure that a job is not
+ accidentally doing something it should not, but cannot actually *prevent* a
+ job from using any of the scopes afforded to the cron task itself (the
+ ``..cron:*`` scope). This is simply because the cron task runs arbitrary
+ code from the repo, and that code can be easily modified to create tasks
+ with any scopes that it possesses.
diff --git a/taskcluster/docs/docker-images.rst b/taskcluster/docs/docker-images.rst
new file mode 100644
index 0000000000..8facce6afe
--- /dev/null
+++ b/taskcluster/docs/docker-images.rst
@@ -0,0 +1,210 @@
+.. _taskcluster_dockerimages:
+
+=============
+Docker Images
+=============
+
+TaskCluster Docker images are defined in the source directory under
+``taskcluster/docker``. Each directory therein is named after an image
+used as part of the task graph.
+
+Organization
+------------
+
+Each folder describes a single docker image. We have two types of images that can be defined:
+
+1. Task Images (build-on-push)
+2. Docker Images (prebuilt)
+
+These images depend on one another, as described in the `FROM
+<https://docs.docker.com/v1.8/reference/builder/#from>`_ line at the top of the
+Dockerfile in each folder.
+
+An image can either be intended for pushing to a docker registry, or be meant
+for local testing or for being built as an artifact when pushed to vcs.
+
+Task Images (build-on-push)
+:::::::::::::::::::::::::::
+
+Images can be uploaded as a task artifact, :ref:`indexed <task-image-index-namespace>` under
+a given namespace, and used in other tasks by referencing the task ID.
+
+Importantly, these images do not require building and pushing to a docker
+registry; they are built per push (if necessary) and uploaded as task artifacts.
+
+The decision task that is run per push will :ref:`determine <context-directory-hashing>`
+if the image needs to be built based on the hash of the context directory and if the image
+exists under the namespace for a given branch.
+
+As an additional convenience, and a precaution to loading images per branch, if an image
+has been indexed with a given context hash for mozilla-central, any tasks requiring that image
+will use that indexed task. This is to ensure there are not multiple images built/used
+that were built from the same context. In summary, if the image has been built for mozilla-central,
+pushes to any branch will use that already built image.
+
+To use within an in-tree task definition, the format is:
+
+.. code-block:: yaml
+
+ image:
+ type: 'task-image'
+ path: 'public/image.tar.zst'
+ taskId: '<task_id_for_image_builder>'
+
+.. _context-directory-hashing:
+
+Context Directory Hashing
+.........................
+
+Decision tasks will calculate the sha256 hash of the contents of the image
+directory and will determine if the image already exists for a given branch and hash
+or if a new image must be built and indexed.
+
+Note: this is the contents of *only* the context directory, not the
+image contents.
+
+The decision task will:
+
+1. Recursively collect the paths of all files within the context directory
+2. Sort the filenames alphabetically to ensure the hash is consistently calculated
+3. Generate a sha256 hash of the contents of each file
+4. All file hashes will then be combined with their path and used to update the
+ hash of the context directory
+
+This ensures that the hash is consistently calculated and path changes will result
+in different hashes being generated.
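+
+The sketch below illustrates the scheme described above; it is an
+approximation for orientation, not the exact in-tree implementation:
+
+.. code-block:: python
+
+    import hashlib
+    import os
+
+    def hash_context_directory(context_dir):
+        """Compute a stable hash over every file in a context directory."""
+        combined = hashlib.sha256()
+        paths = []
+        for root, _dirs, names in os.walk(context_dir):
+            for name in names:
+                paths.append(os.path.join(root, name))
+        # sort so the hash does not depend on filesystem iteration order
+        for path in sorted(paths):
+            with open(path, 'rb') as f:
+                file_hash = hashlib.sha256(f.read()).hexdigest()
+            rel = os.path.relpath(path, context_dir)
+            # combine each file's hash with its (relative) path
+            combined.update('{} {}\n'.format(file_hash, rel).encode('utf-8'))
+        return combined.hexdigest()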
+
+.. _task-image-index-namespace:
+
+Task Image Index Namespace
+..........................
+
+Images that are built on push and uploaded as an artifact of a task will be indexed under the
+following namespaces.
+
+* gecko.cache.level-{level}.docker.v2.{name}.hash.{digest}
+* gecko.cache.level-{level}.docker.v2.{name}.latest
+* gecko.cache.level-{level}.docker.v2.{name}.pushdate.{year}.{month}-{day}-{pushtime}
+
+Not only can images be browsed by the pushdate and context hash, but the 'latest' namespace
+is meant to view the latest built image. This functions similarly to the 'latest' tag
+for docker images that are pushed to a registry.
+
+Docker Registry Images (prebuilt)
+:::::::::::::::::::::::::::::::::
+
+.. warning::
+
+   Registry images are only used for ``decision`` and ``image_builder`` images.
+
+These are images that are intended to be pushed to a docker registry and used
+by specifying the docker image name in task definitions. They are generally
+referred to by a ``<repo>@<repodigest>`` string:
+
+Example:
+
+.. code-block:: none
+
+ image: taskcluster/decision:0.1.10@sha256:c5451ee6c655b3d97d4baa3b0e29a5115f23e0991d4f7f36d2a8f793076d6854
+
+Such images must always be referred to with both a version and a repo digest.
+For the decision image, the repo digest is stored in the ``HASH`` file in the
+image directory and used to refer to the image as above. The version for both
+images is in ``VERSION``.
+
+The version file serves to help users identify which image is being used, and makes old
+versions easy to discover in the registry.
+
+The file ``taskcluster/docker/REGISTRY`` specifies the image registry to which
+the completed image should be uploaded.
+
+Docker Hashes and Digests
+.........................
+
+There are several hashes involved in this process:
+
+ * Image Hash -- the long version of the image ID; can be seen with
+ ``docker images --no-trunc`` or in the ``Id`` field in ``docker inspect``.
+
+ * Repo Digest -- hash of the image manifest; seen when running ``docker
+ push`` or ``docker pull``.
+
+ * Context Directory Hash -- see above (not a Docker concept at all)
+
+The use of hashes allows older tasks which were designed to run on an older
+version of the image to be executed in Taskcluster while new tasks use the new
+version. Furthermore, this mitigates attacks against the registry as docker
+will verify the image hash when loading the image.
+
+(Re)-Building images
+--------------------
+
+Generally, images can be pulled from the Docker registry rather than built
+locally; however, for developing new images it's often helpful to hack on them
+locally.
+
+To build an image, invoke ``mach taskcluster-build-image`` with the name of the
+folder (without a trailing slash):
+
+.. code-block:: none
+
+ ./mach taskcluster-build-image <image-name>
+
+This is a wrapper around ``docker build -t $REGISTRY/$FOLDER:$VERSION``.
+
+It's a good idea to bump the ``VERSION`` early in this process, to avoid
+``docker push``-ing over any old tags.
+
+For task images, test your image locally or push to try. This is all that is
+required.
+
+Docker Registry Images
+::::::::::::::::::::::
+
+Landing docker registry images takes a little more care.
+
+Begin by bumping the ``VERSION``. Once the new version of the image has been
+built and tested locally, push it to the docker registry and make note of the
+resulting repo digest. Put this value in the ``HASH`` file for the
+``decision`` image and in ``taskcluster/taskgraph/transforms/docker_image.py``
+for the ``image_builder`` image.
+
+The change is now safe to use in Try pushes.
+
+Note that an ``image_builder`` change can be tested directly in try pushes without
+using a registry, as the in-registry ``image_builder`` image is used to build a
+task image which is then used to build other images. It is referenced by hash
+in ``taskcluster/taskgraph/transforms/docker_image.py``.
+
+Special Dockerfile Syntax
+-------------------------
+
+Dockerfile syntax has been extended to allow *any* file from the
+source checkout to be added to the image build *context*. (Traditionally
+you can only ``ADD`` files from the same directory as the Dockerfile.)
+
+Simply add the following syntax as a comment in a Dockerfile::
+
+ # %include <path>
+
+e.g.::
+
+ # %include mach
+ # %include testing/mozharness
+
+The argument to ``# %include`` is a relative path from the root level of
+the source directory. It can be a file or a directory. If a file, only that
+file will be added. If a directory, every file under that directory will be
+added (even files that are untracked or ignored by version control).
+
+Files added using ``# %include`` syntax are available inside the build
+context under the ``topsrcdir/`` path.
+
+Files are added as they exist on disk; e.g., executable flags are
+preserved. However, the file owner/group is changed to ``root`` and the
+``mtime`` of the file is normalized.
+
+Here is an example Dockerfile snippet::
+
+ # %include mach
+ ADD topsrcdir/mach /builds/worker/mach
diff --git a/taskcluster/docs/how-tos.rst b/taskcluster/docs/how-tos.rst
new file mode 100644
index 0000000000..1b5247928f
--- /dev/null
+++ b/taskcluster/docs/how-tos.rst
@@ -0,0 +1,247 @@
+How Tos
+=======
+
+All of this equipment is here to help you get your work done more efficiently.
+However, learning how task-graphs are generated is probably not the work you
+are interested in doing. This section should help you accomplish some of the
+more common changes to the task graph with minimal fuss.
+
+.. important::
+
+ If you cannot accomplish what you need with the information provided here,
+ please consider whether you can achieve your goal in a different way.
+ Perhaps something simpler would cost a bit more in compute time, but save
+ the much more expensive resource of developers' mental bandwidth.
+ Task-graph generation is already complex enough!
+
+ If you want to proceed, you may need to delve into the implementation of
+ task-graph generation. The documentation and code are designed to help, as
+ are the authors - ``hg blame`` may help track down helpful people.
+
+ As you write your new transform or add a new kind, please consider the next
+ developer. Where possible, make your change data-driven and general, so
+ that others can make a much smaller change. Document the semantics of what
+ you are changing clearly, especially if it involves modifying a transform
+ schema. And if you are adding complexity temporarily while making a
+ gradual transition, please open a new bug to remind yourself to remove the
+ complexity when the transition is complete.
+
+Hacking Task Graphs
+-------------------
+
+The recommended process for changing task graphs is this:
+
+1. Run one of the ``mach taskgraph`` subcommands (see :doc:`mach`) to
+ generate a baseline against which to measure your changes.
+
+ .. code-block:: none
+
+ ./mach taskgraph tasks --json > old-tasks.json
+
+2. Make your modifications under ``taskcluster/``.
+
+3. Run the same ``mach taskgraph`` command, sending the output to a new file,
+ and use ``diff`` to compare the old and new files. Make sure your changes
+ have the desired effect and no undesirable side-effects. A plain unified
+ diff should be useful for most changes, but in some cases it may be helpful
+ to post-process the JSON to strip distracting changes.
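+
+   For example:
+
+   .. code-block:: none
+
+       ./mach taskgraph tasks --json > new-tasks.json
+       diff old-tasks.json new-tasks.json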
+
+4. When you are satisfied with the changes, push them to try to ensure that the
+ modified tasks work as expected.
+
+Hacking Actions
+...............
+
+If you are working on an action task and wish to test it out locally, use the
+``./mach taskgraph test-action-callback`` command:
+
+ .. code-block:: none
+
+ ./mach taskgraph test-action-callback \
+ --task-id I4gu9KDmSZWu3KHx6ba6tw --task-group-id sMO4ybV9Qb2tmcI1sDHClQ \
+ --input input.yml hello_world_action
+
+This invocation will run the hello world callback with the given inputs and
+print any created tasks to stdout, rather than actually creating them.
+
+Common Changes
+--------------
+
+Changing Test Characteristics
+.............................
+
+First, find the test description. This will be in
+``taskcluster/ci/*/tests.yml``, for the appropriate kind (consult
+:doc:`kinds`). You will find a YAML stanza for each test suite, and each
+stanza defines the test's characteristics. For example, the ``chunks``
+property gives the number of chunks to run. This can be specified as a simple
+integer if all platforms have the same chunk count, or it can be keyed by test
+platform. For example:
+
+.. code-block:: yaml
+
+ chunks:
+ by-test-platform:
+ linux64/debug: 10
+ default: 8
+
+The full set of available properties is in
+``taskcluster/taskgraph/transforms/tests.py``. Some other
+commonly-modified properties are ``max-run-time`` (useful if tests are being
+killed for exceeding maxRunTime) and ``treeherder-symbol``.
+
+.. note::
+
+ Android tests are also chunked at the mozharness level, so you will need to
+ modify the relevant mozharness config, as well.
+
+Adding a Test Suite
+...................
+
+To add a new test suite, you will need to know the proper mozharness invocation
+for that suite, and which kind it fits into (consult :doc:`kinds`).
+
+Add a new stanza to ``taskcluster/ci/<kind>/tests.yml``, copying from the other
+stanzas in that file. The meanings should be clear, but authoritative
+documentation is in
+``taskcluster/taskgraph/transforms/tests.py`` should you need
+it. The stanza name is the name by which the test will be referenced in try
+syntax.
+
+Add your new test to a test set in ``test-sets.yml`` in the same directory. If
+the test should only run on a limited set of platforms, you may need to define
+a new test set and reference that from the appropriate platforms in
+``test-platforms.yml``. If you do so, include some helpful comments in
+``test-sets.yml`` for the next person.
+
+Greening Up a New Test
+......................
+
+When a test is not yet reliably green, configuration for that test should not
+be landed on integration branches. Of course, you can control where the
+configuration is landed! For many cases, it is easiest to green up a test in
+try: push the configuration to run the test to try along with your work to fix
+the remaining test failures.
+
+When working with a group, check out a "twig" repository to share among your
+group, and land the test configuration in that repository. Once the test is
+green, merge to an integration branch and the test will begin running there as
+well.
+
+Adding a New Task
+.................
+
+If you are adding a new task that is not a test suite, there are a number of
+options. A few questions to consider:
+
+ * Is this a new build platform or variant that will produce an artifact to
+ be run through the usual test suites?
+
+ * Does this task depend on other tasks? Do other tasks depend on it?
+
+ * Is this one of a few related tasks, or will you need to generate a large
+ set of tasks using some programmatic means (for example, chunking)?
+
+ * How is the task actually executed? Mozharness? Mach?
+
+ * What kind of environment does the task require?
+
+Armed with that information, you can choose among a few options for
+implementing this new task. Try to choose the simplest solution that will
+satisfy your near-term needs. Since this is all implemented in-tree, it
+is not difficult to refactor later when you need more generality.
+
+Existing Kind
+`````````````
+
+The simplest option is to add your task to an existing kind. This is most
+practical when the task "makes sense" as part of that kind -- for example, if
+your task is building an installer for a new platform using mozharness scripts
+similar to the existing build tasks, it makes most sense to add your task to
+the ``build`` kind. If you need some additional functionality in the kind,
+it's OK to modify the implementation as necessary, as long as the modification
+is complete and useful to the next developer to come along.
+
+Tasks in the ``build`` kind generate Firefox installers, and the ``test`` kind
+will add a full set of Firefox tests for each ``build`` task.
+
+New Kind
+````````
+
+The next option to consider is adding a new kind. A distinct kind gives you
+some isolation from other task types, which can be nice if you are adding an
+experimental kind of task.
+
+Kinds can range in complexity. The simplest sort of kind uses the transform
+loader to read a list of jobs from the ``jobs`` key, and applies the standard
+``job`` and ``task`` transforms:
+
+.. code-block:: yaml
+
+ loader: taskgraph.loader.transform:loader
+ transforms:
+    - taskgraph.transforms.job:transforms
+    - taskgraph.transforms.task:transforms
+ jobs:
+    - ..your job description here..
+
+Job descriptions are defined and documented in
+``taskcluster/taskgraph/transforms/job/__init__.py``.
+
+Custom Kind Loader
+``````````````````
+
+If your task depends on other tasks, then the decision of which tasks to create
+may require some code. For example, the ``test`` kind iterates over
+the builds in the graph, generating a full set of test tasks for each one. This specific
+post-build behavior is implemented as a loader defined in ``taskcluster/taskgraph/loader/test.py``.
+
+A custom loader is useful when the set of tasks you want to create is not
+static but based on something else (such as the available builds) or when the
+dependency relationships for your tasks are complex.
+
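+For illustration, a minimal custom loader might look like the following
+sketch, which follows the loader signature described in :doc:`loading`. The
+kind name and the keys in the yielded job descriptions are hypothetical, not
+taken from the real test loader:
+
+.. code-block:: python
+
+ def loader(cls, kind, path, config, parameters, loaded_tasks):
+     """Yield one job description per build task already in the graph.
+
+     A sketch only: it assumes each build task carries a
+     ``build_platform`` attribute, as the real test loader does.
+     """
+     for task in loaded_tasks:
+         if task.kind != "build":
+             continue
+         yield {
+             "name": "{}-smoketest".format(task.label),
+             "build-label": task.label,
+             "build-platform": task.attributes.get("build_platform"),
+         }
+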
+Custom Transforms
+`````````````````
+
+Most loaders apply a series of ":doc:`transforms <transforms>`" that start with
+an initial human-friendly description of a task and end with a task definition
+suitable for insertion into a Taskcluster queue.
+
+Custom transforms can be useful to apply defaults, simplifying the YAML files
+in your kind. They can also apply business logic that is more easily expressed
+in code than in YAML.
+
+Transforms need not be one-to-one: a transform can produce zero or more outputs
+for each input. For example, the test transforms perform chunking by producing
+an output for each chunk of a given input.
+
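+As a sketch, a transform that fans each input out into several chunks might
+look like the following. The ``TransformSequence`` helper is the same one the
+in-tree transforms use; the ``chunks`` input key here is hypothetical:
+
+.. code-block:: python
+
+ from taskgraph.transforms.base import TransformSequence
+
+ transforms = TransformSequence()
+
+ @transforms.add
+ def split_chunks(config, jobs):
+     """Produce one output job per chunk of each input job."""
+     for job in jobs:
+         chunks = job.pop("chunks", 1)
+         for this_chunk in range(1, chunks + 1):
+             chunked = dict(job)  # a shallow copy suffices for this sketch
+             chunked["name"] = "{}-{}".format(job["name"], this_chunk)
+             yield chunked
+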
+Ideally those transforms will produce job descriptions, so you can use the
+existing ``job`` and ``task`` transforms:
+
+.. code-block:: yaml
+
+ transforms:
+    - taskgraph.transforms.my_stuff:transforms
+    - taskgraph.transforms.job:transforms
+    - taskgraph.transforms.task:transforms
+
+Try to keep transforms simple, single-purpose and well-documented!
+
+Custom Run-Using
+````````````````
+
+If the way your task is executed is unique (so, not a mach command or
+mozharness invocation), you can add a new implementation of the job
+description's "run" section. Before you do this, consider that it might be a
+better investment to modify your task to support invocation via mozharness or
+mach, instead. If this is not possible, then adding a new file in
+``taskcluster/taskgraph/transforms/job`` with a structure similar to its peers
+will make the new run-using option available for job descriptions.
+
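+For illustration, such a file might register its implementation with the
+``run_job_using`` decorator that the existing peers use. Treat this as a
+sketch: the run-using name and the ``script`` key are hypothetical, and a real
+implementation would also declare a schema for its ``run`` section:
+
+.. code-block:: python
+
+ from taskgraph.transforms.job import run_job_using
+
+ @run_job_using("docker-worker", "my-custom-runner")
+ def docker_worker_my_custom_runner(config, job, taskdesc):
+     """Translate the job's ``run`` section into a worker payload."""
+     run = job["run"]
+     worker = taskdesc["worker"] = job["worker"]
+     # The hypothetical 'script' key holds a shell snippet to execute.
+     worker["command"] = ["/bin/bash", "-c", run["script"]]
+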
+Something Else?
+...............
+
+If you make another change not described here that turns out to be simple or
+common, please include an update to this file in your patch.
+
+
diff --git a/taskcluster/docs/img/enableSourceServer.png b/taskcluster/docs/img/enableSourceServer.png
new file mode 100644
index 0000000000..2a3a469129
--- /dev/null
+++ b/taskcluster/docs/img/enableSourceServer.png
Binary files differ
diff --git a/taskcluster/docs/img/windbg-srcfix.png b/taskcluster/docs/img/windbg-srcfix.png
new file mode 100644
index 0000000000..f9102ea913
--- /dev/null
+++ b/taskcluster/docs/img/windbg-srcfix.png
Binary files differ
diff --git a/taskcluster/docs/index.rst b/taskcluster/docs/index.rst
new file mode 100644
index 0000000000..5347ce2ce5
--- /dev/null
+++ b/taskcluster/docs/index.rst
@@ -0,0 +1,38 @@
+.. _taskcluster_index:
+
+TaskCluster Task-Graph Generation
+=================================
+
+The ``taskcluster`` directory contains support for defining the graph of tasks
+that must be executed to build and test the Gecko tree. This is more complex
+than you might suppose! This implementation supports:
+
+ * A huge array of tasks
+ * Different behavior for different repositories
+ * "Try" pushes, with special means to select a subset of the graph for execution
+ * Optimization -- skipping tasks that have already been performed
+ * Extremely flexible generation of a variety of tasks using an approach of
+   incrementally transforming job descriptions into task definitions.
+
+This section of the documentation describes the process in some detail,
+referring to the source where necessary. If you are reading this with a
+particular goal in mind and would rather avoid becoming a task-graph expert,
+check out the :doc:`how-to section <how-tos>`.
+
+.. toctree::
+
+ taskgraph
+ mach
+ loading
+ transforms
+ optimization
+ docker-images
+ cron
+ try
+ actions
+ release-promotion
+ versioncontrol
+ config
+ how-tos
+ task-graph
+ reference
diff --git a/taskcluster/docs/kinds.rst b/taskcluster/docs/kinds.rst
new file mode 100644
index 0000000000..3ec1aff9ad
--- /dev/null
+++ b/taskcluster/docs/kinds.rst
@@ -0,0 +1,727 @@
+Task Kinds
+==========
+
+This section lists and documents the available task kinds.
+
+build
+-----
+
+Builds are tasks that produce an installer or other output that can be run by
+users or automated tests. This is more restrictive than most definitions of
+"build" in a Mozilla context: it does not include tasks that run build-like
+actions for static analysis or to produce instrumented artifacts.
+
+build-fat-aar
+-------------
+
+Build architecture-independent GeckoView AAR (Android ARchive) files. This build-like task is an
+artifact build (ARMv7, but this is arbitrary) that itself depends on arch-specific Android build
+jobs. It fetches arch-specific AAR files, extracts arch-specific libraries and preference files,
+and then assembles a multi-architecture "fat AAR". Downstream consumers are expected to use
+per-ABI feature splits to produce arch-specific APKs.
+
+If you want to run this task locally, you need to specify these environment variables:
+
+ - MOZ_ANDROID_FAT_AAR_ARCHITECTURES: must be a comma-separated list of architectures,
+   e.g. "armeabi-v7a,arm64-v8a,x86,x86_64".
+ - each of MOZ_ANDROID_FAT_AAR_ARM64_V8A, MOZ_ANDROID_FAT_AAR_ARMEABI_V7A,
+   MOZ_ANDROID_FAT_AAR_X86, and MOZ_ANDROID_FAT_AAR_X86_64 must be a path relative to
+   MOZ_FETCHES_DIR.
+
+build-signing
+-------------
+
+Many builds must be signed. The build-signing task takes the unsigned `build`
+kind artifacts and passes them through signingscriptworker to a signing server
+and returns signed results.
+
+For mac notarization, we download the signed bits that have been notarized by Apple, and we staple the notarization to the app and pkg.
+
+build-notarization-part-1
+-------------------------
+
+We switched to a 3-part mac notarization workflow in bug 1562412. This is the first task, which signs the files and submits them for notarization.
+
+build-notarization-poller
+-------------------------
+
+We switched to a 3-part mac notarization workflow in bug 1562412. This is the second task, which polls Apple for notarization status. Because this is run in a separate, special notarization poller pool, we free up the mac notarization pool for actual signing work.
+
+artifact-build
+--------------
+
+This kind performs an artifact build: one based on precompiled binaries
+discovered via the TaskCluster index. This task verifies that such builds
+continue to work correctly.
+
+hazard
+------
+
+Hazard builds are similar to "regular" builds, but use a compiler extension to
+extract a bunch of data from the build and then analyze that data looking for
+hazardous behaviors.
+
+l10n
+----
+
+The l10n kind takes the last published nightly build, and generates localized builds
+from it. You can read more about how to trigger these on the `wiki
+<https://wiki.mozilla.org/ReleaseEngineering/TryServer#Desktop_l10n_jobs_.28on_Taskcluster.29>`_.
+
+shippable-l10n
+--------------
+
+The shippable-l10n kind repacks a specific nightly build (from the same source code)
+in order to provide localized versions of the same source.
+
+shippable-l10n-signing
+----------------------
+
+The shippable l10n signing kind takes artifacts from the shippable-l10n kind and
+passes them to signing servers to have their contents signed appropriately, based
+on an appropriate signing format. One signing job is created for each shippable-l10n
+job (usually chunked).
+
+For mac notarization, we download the signed bits that have been notarized by Apple, and we staple the notarization to the app and pkg.
+
+shippable-l10n-notarization-part-1
+----------------------------------
+
+We switched to a 3-part mac notarization workflow in bug 1562412. This is the first task, which signs the files and submits them for notarization.
+
+shippable-l10n-notarization-poller
+----------------------------------
+
+We switched to a 3-part mac notarization workflow in bug 1562412. This is the second task, which polls Apple for notarization status. Because this is run in a separate, special notarization poller pool, we free up the mac notarization pool for actual signing work.
+
+source-test
+-----------
+
+Source-tests are tasks that run directly from the Gecko source. This can include linting,
+unit tests, source-code analysis, or measurement work. While source-test tasks run from
+a source checkout, it is still possible for them to depend on a build artifact, though
+often they do not.
+
+code-review
+-----------
+
+Publish issues found by source-test tasks on Phabricator.
+This is a part of Release Management code review Bot.
+
+upload-symbols
+--------------
+
+Upload-symbols tasks run after builds and upload the symbols files generated by
+build tasks to Socorro for later use in crash analysis.
+
+upload-generated-sources
+------------------------
+
+Upload-generated-sources tasks run after builds and upload source files that were generated as part of the build process to an s3 bucket for later use in links from crash reports or when debugging shipped builds.
+
+valgrind
+--------
+
+Valgrind tasks produce builds instrumented by valgrind.
+
+searchfox
+---------
+
+Searchfox builds generate C++ index data for Searchfox.
+
+static-analysis-autotest
+------------------------
+
+Static analysis autotest utility, used to make sure that there are no regressions
+when upgrading the utilities that static analysis depends on.
+
+toolchain
+---------
+
+Toolchain builds create the compiler toolchains used to build Firefox. These
+will eventually be dependencies of the builds themselves, but for the moment
+are run manually via try pushes and the results uploaded to tooltool.
+
+spidermonkey
+------------
+
+Spidermonkey tasks check out the full gecko source tree, then compile only the
+spidermonkey portion. Each task runs specific tests after the build.
+
+test
+----
+
+The ``test`` kind defines tests for builds. Its ``tests.yml`` defines
+the full suite of desktop tests and their particulars, leaving it to the
+transforms to determine how those particulars apply to the various platforms.
+
+The process of generating tests goes like this, based on a set of YAML files
+named in ``kind.yml``:
+
+ * For each build task, determine the related test platforms based on the build
+   platform. For example, a Windows build might be tested on Windows 7
+   and Windows 10. Each test platform specifies "test sets" indicating which
+   tests to run. This is configured in the file named
+   ``test-platforms.yml``.
+
+ * Each test set is expanded to a list of tests to run. This is configured in
+   the file named by ``test-sets.yml``. A platform may specify several test
+   sets, in which case the union of those sets is used.
+
+ * Each named test is looked up in the file named by ``tests.yml`` to find a
+   test description. This test description indicates what the test does, how
+   it is reported to treeherder, and how to perform the test, all in a
+   platform-independent fashion.
+
+ * Each test description is converted into one or more tasks. This is
+   performed by a sequence of transforms defined in the ``transforms`` key in
+   ``kind.yml``. See :doc:`transforms` for more information on these
+   transforms.
+
+ * The resulting tasks become a part of the task graph.
+
+.. important::
+
+ This process generates *all* test jobs, regardless of tree or try syntax.
+ It is up to later stages of the task-graph generation (the target set and
+ optimization) to select the tests that will actually be performed.
+
+docker-image
+------------
+
+Tasks of the ``docker-image`` kind build the Docker images in which other
+Docker tasks run.
+
+The tasks to generate each docker image have predictable labels:
+``docker-image-<name>``.
+
+Docker images are built from subdirectories of ``taskcluster/docker``, using
+``docker build``. There is currently no capability for one Docker image to
+depend on another in-tree docker image, without uploading the latter to a
+Docker repository.
+
+balrog
+------
+
+Balrog tasks are responsible for submitting metadata to our update server (Balrog).
+They are typically downstream of a beetmover job that moves signed MARs somewhere
+(eg: beetmover and beetmover-l10n for releases, beetmover-repackage for nightlies).
+
+beetmover
+---------
+
+Beetmover takes specific artifacts ("beets") and pushes them to a location
+outside of Taskcluster's task artifacts (archive.mozilla.org, for example), and
+in the process determines the final location and a "pretty" name (versioned
+product name).
+beetmover-l10n
+--------------
+
+Beetmover-l10n takes specific artifacts ("beets") and pushes them to a location
+outside of Taskcluster's task artifacts (archive.mozilla.org, for example), and
+in the process determines the final location and a "pretty" name (versioned
+product name). This separate kind uses logic specific to localized artifacts,
+such as including the language in the final artifact names.
+
+beetmover-repackage
+-------------------
+
+Beetmover-repackage is beetmover but for tasks that need an intermediate step
+between signing and packaging, such as OSX. For more details see the definitions
+of the Beetmover kind above and the repackage kind below.
+
+release-beetmover-push-to-release
+---------------------------------
+
+release-beetmover-push-to-release publishes promoted releases from the
+candidates directory to the release directory. This is part of release
+promotion.
+
+beetmover-snap
+--------------
+Beetmover-snap publishes Ubuntu's snap. This is part of release promotion.
+
+beetmover-source
+----------------
+Beetmover-source publishes release source. This is part of release promotion.
+
+beetmover-geckoview
+-------------------
+Beetmover-geckoview publishes the Android library called "geckoview".
+
+condprof
+--------
+condprof creates and updates realistic profiles.
+
+release-source-checksums-signing
+--------------------------------
+release-source-checksums-signing takes as input the checksums file generated by
+the source-related beetmover task and signs it via the signing scriptworkers.
+Returns the same file signed plus an additional detached signature.
+
+beetmover-checksums
+-------------------
+Beetmover takes specific artifact checksums and pushes them to a location
+outside of Taskcluster's task artifacts (archive.mozilla.org, for example), and
+in the process determines the final location and a "pretty" name (versioned
+product name).
+
+release-beetmover-source-checksums
+----------------------------------
+Beetmover takes source-specific artifact checksums and pushes them to a
+location outside of Taskcluster's task artifacts (archive.mozilla.org, for
+example), and in the process determines the final location and a "pretty" name
+(versioned product name).
+
+perftest
+--------
+Runs performance tests using mozperftest.
+
+release-balrog-submit-toplevel
+------------------------------
+Toplevel tasks are responsible for submitting metadata to Balrog that is not specific to any
+particular platform+locale. For example: fileUrl templates, versions, and platform aliases.
+
+Toplevel tasks are also responsible for updating test channel rules to point at the Release
+being generated.
+
+release-secondary-balrog-submit-toplevel
+----------------------------------------
+Performs the same function as `release-balrog-submit-toplevel`, but against the beta channel
+during RC builds.
+
+release-balrog-scheduling
+-------------------------
+Schedules a Release for shipping in Balrog. If a `release_eta` was provided when starting the Release,
+it will be scheduled to go live at that day and time.
+
+release-secondary-balrog-scheduling
+-----------------------------------
+Performs the same function as `release-balrog-scheduling`, except for the beta channel as part of RC
+Releases.
+
+release-binary-transparency
+---------------------------
+Binary transparency creates a publicly verifiable log of binary shas for downstream
+release auditing. https://wiki.mozilla.org/Security/Binary_Transparency
+
+release-snap-repackage
+----------------------
+Generate an installer using Ubuntu's Snap format.
+
+release-flatpak-repackage
+-------------------------
+Generate an installer using Flathub's Flatpak format.
+
+release-snap-push
+-----------------
+Pushes the Snap repackage to the Snap store.
+
+release-flatpak-push
+--------------------
+Pushes the Flatpak repackage to Flathub.
+
+release-secondary-snap-push
+---------------------------
+Performs the same function as `release-snap-push`, except for the beta channel as part of RC
+Releases.
+
+release-secondary-flatpak-push
+------------------------------
+Performs the same function as `release-flatpak-push`, except for the beta channel as part of RC
+Releases.
+
+release-notify-av-announce
+--------------------------
+Notify anti-virus vendors when a release is likely shipping.
+
+release-notify-push
+-------------------
+Notify when a release has been pushed to CDNs.
+
+release-notify-ship
+-------------------
+Notify when a release has been shipped.
+
+release-secondary-notify-ship
+-----------------------------
+Notify when an RC release has been shipped to the beta channel.
+
+release-notify-promote
+----------------------
+Notify when a release has been promoted.
+
+release-notify-started
+----------------------
+Notify when a release has been started.
+
+release-bouncer-sub
+-------------------
+Submits bouncer information for releases.
+
+release-mark-as-shipped
+-----------------------
+Marks releases as shipped in Ship-It v1.
+
+release-bouncer-aliases
+-----------------------
+Update Bouncer's (download.mozilla.org) "latest" aliases.
+
+cron-bouncer-check
+------------------
+Checks Bouncer (download.mozilla.org) uptake.
+
+bouncer-locations
+-----------------
+Updates nightly bouncer locations for version bump.
+
+release-bouncer-check
+---------------------
+Checks Bouncer (download.mozilla.org) uptake as part of the release tasks.
+
+release-generate-checksums
+--------------------------
+Generate the per-release checksums along with the summaries.
+
+release-generate-checksums-signing
+----------------------------------
+Sign the per-release checksums produced by the above task.
+
+release-generate-checksums-beetmover
+------------------------------------
+Submit to S3 the artifacts produced by the release-checksums task and its signing counterpart.
+
+release-final-verify
+--------------------
+Verifies the contents and package of release update MARs.
+
+release-secondary-final-verify
+------------------------------
+Verifies the contents and package of release update MARs for RC releases.
+
+release-push-langpacks
+-------------------------------
+Publishes language packs onto addons.mozilla.org.
+
+release-beetmover-signed-langpacks
+----------------------------------
+Publishes signed langpacks to archive.mozilla.org
+
+release-beetmover-signed-langpacks-checksums
+--------------------------------------------
+Publishes checksums for the signed langpacks to archive.mozilla.org.
+
+release-update-verify
+---------------------
+Verifies the contents and package of release update MARs.
+
+release-secondary-update-verify
+-------------------------------
+Verifies the contents and package of release update MARs.
+
+release-update-verify-next
+--------------------------
+Verifies the contents and package of release update MARs from the previous ESR release.
+
+release-update-verify-config
+----------------------------
+Creates configs for release-update-verify tasks.
+
+release-secondary-update-verify-config
+--------------------------------------
+Creates configs for release-secondary-update-verify tasks.
+
+release-update-verify-config-next
+---------------------------------
+Creates configs for release-update-verify-next tasks.
+
+release-updates-builder
+-----------------------
+Top level Balrog blob submission & patcher/update verify config updates.
+
+release-version-bump
+--------------------
+Bumps to the next version.
+
+release-source
+--------------
+Generates source for the release.
+
+release-source-signing
+----------------------
+Signs source for the release.
+
+release-partner-repack
+----------------------
+Generates customized versions of releases for partners.
+
+release-partner-attribution
+---------------------------
+Generates attributed versions of releases for partners.
+
+release-partner-repack-chunking-dummy
+-------------------------------------
+Chunks the partner repacks by locale.
+
+release-partner-repack-signing
+------------------------------
+Internal signing of partner repacks.
+
+For mac notarization, we download the signed bits that have been notarized by Apple, and we staple the notarization to the app and pkg.
+
+release-partner-repack-notarization-part-1
+------------------------------------------
+
+We switched to a 3-part mac notarization workflow in bug 1562412. This is the first task, which signs the files and submits them for notarization.
+
+release-partner-repack-notarization-poller
+------------------------------------------
+
+We switched to a 3-part mac notarization workflow in bug 1562412. This is the second task, which polls Apple for notarization status. Because this is run in a separate, special notarization poller pool, we free up the mac notarization pool for actual signing work.
+
+release-partner-repack-repackage
+--------------------------------
+Repackaging of partner repacks.
+
+release-partner-repack-repackage-signing
+----------------------------------------
+External signing of partner repacks.
+
+release-partner-repack-beetmover
+--------------------------------
+Moves the partner repacks to S3 buckets.
+
+release-partner-attribution-beetmover
+-------------------------------------
+Moves the partner attributions to S3 buckets.
+
+release-partner-repack-bouncer-sub
+----------------------------------
+Sets up bouncer products for partners.
+
+release-early-tagging
+---------------------
+Utilises treescript to perform tagging that should happen near the start of a release.
+
+release-eme-free-repack
+-----------------------
+Generates customized versions of releases for eme-free repacks.
+
+release-eme-free-repack-signing
+-------------------------------
+Internal signing of eme-free repacks.
+
+For mac notarization, we download the signed bits that have been notarized by Apple, and we staple the notarization to the app and pkg.
+
+release-eme-free-repack-notarization-part-1
+-------------------------------------------
+
+We switched to a 3-part mac notarization workflow in bug 1562412. This is the first task, which signs the files and submits them for notarization.
+
+release-eme-free-repack-notarization-poller
+-------------------------------------------
+
+We switched to a 3-part mac notarization workflow in bug 1562412. This is the second task, which polls Apple for notarization status. Because this is run in a separate, special notarization poller pool, we free up the mac notarization pool for actual signing work.
+
+release-eme-free-repack-repackage
+---------------------------------
+Repackaging of eme-free repacks.
+
+release-eme-free-repack-repackage-signing
+-----------------------------------------
+External signing of eme-free repacks.
+
+release-eme-free-repack-beetmover
+---------------------------------
+Moves the eme-free repacks to S3 buckets.
+
+release-eme-free-repack-beetmover-checksums
+-------------------------------------------
+Moves the beetmover checksum for eme-free repacks to S3 buckets.
+
+repackage
+---------
+Repackage tasks take a signed output and package it up into something suitable
+for shipping to our users. For example, on OSX we return a tarball as the signed output
+and this task would package that up as an Apple Disk Image (.dmg).
+
+repackage-l10n
+--------------
+Repackage-L10n is a ``Repackage`` task split up to be suitable for use after l10n repacks.
+
+
+repackage-signing
+-----------------
+Repackage-signing takes the repackaged installers (Windows) and signs them.
+
+repackage-signing-l10n
+----------------------
+Repackage-signing-l10n takes the repackaged installers (Windows) and signs them for localized versions.
+
+mar-signing
+-----------
+Mar-signing takes the complete update MARs and signs them.
+
+mar-signing-l10n
+----------------
+Mar-signing-l10n takes the complete update MARs and signs them for localized versions.
+
+mar-signing-autograph-stage
+---------------------------
+These tasks are only to test autograph-stage, when the autograph team asks for their staging environment to be tested.
+
+repackage-msi
+-------------
+Repackage-msi takes the signed full installer and produces an MSI installer
+(that wraps the full installer) using the ``./mach repackage`` command.
+
+repackage-signing-msi
+---------------------
+Repackage-signing-msi takes the repackaged msi installers and signs them.
+
+repo-update
+-----------
+Repo-Update tasks are tasks that perform some action on the project repo itself,
+in order to update its state in some way.
+
+python-dependency-update
+------------------------
+Python-dependency-update runs ``pip-compile --generate-hashes`` against the specified ``requirements.in`` and
+submits patches to Phabricator.
+
+partials
+--------
+Partials takes the complete.mar files produced in previous tasks and generates partial
+updates between previous nightly releases and the new one. Requires a release_history
+in the parameters. See ``mach release-history`` if doing this manually.
+
+partials-signing
+----------------
+Partials-signing takes the partial updates produced in Partials and signs them.
+
+post-balrog-dummy
+-----------------
+Dummy tasks to consolidate balrog dependencies to avoid taskcluster limits on number of dependencies per task.
+
+post-beetmover-dummy
+--------------------
+Dummy tasks to consolidate beetmover dependencies to avoid taskcluster limits on number of dependencies per task.
+
+post-beetmover-checksums-dummy
+------------------------------
+Dummy tasks to consolidate beetmover-checksums dependencies to avoid taskcluster limits on number of dependencies per task.
+
+post-langpack-dummy
+-------------------
+Dummy tasks to consolidate language pack beetmover dependencies to avoid taskcluster limits on number of dependencies per task.
+
+post-update-verify-dummy
+------------------------
+Dummy tasks to consolidate update verify dependencies to avoid taskcluster limits on number of dependencies per task.
+
+fetch
+-----
+Tasks that obtain something from a remote service and re-expose it as a
+task artifact. These tasks are used to effectively cache and re-host
+remote content so it is reliably and deterministically available.
+
+packages
+--------
+Tasks used to build packages for use in docker images.
+
+diffoscope
+----------
+Tasks used to compare pairs of Firefox builds using https://diffoscope.org/.
+As of writing, this is mainly meant to be used in try builds, by editing
+``taskcluster/ci/diffoscope/kind.yml`` for your needs.
+
+addon
+-----
+Tasks used to build/package add-ons.
+
+openh264-plugin
+---------------
+Tasks used to build the openh264 plugin.
+
+openh264-signing
+----------------
+Signing for the openh264 plugin.
+
+webrender
+---------
+Tasks used to do testing of WebRender standalone (without gecko). The
+WebRender code lives in gfx/wr and has its own testing infrastructure.
+
+wgpu
+---------
+Tasks used to do testing of WebGPU standalone (without gecko). The
+WebGPU code lives in gfx/wgpu and has its own testing infrastructure.
+
+github-sync
+------------
+Tasks used to synchronize parts of Gecko that have downstream GitHub
+repositories.
+
+instrumented-build
+------------------
+Tasks that generate builds with PGO instrumentation enabled. This is an
+intermediate build that can be used to generate profiling information for a
+final PGO build. This is the 1st stage of the full 3-step PGO process.
+
+generate-profile
+----------------
+Tasks that take a build configured for PGO and run the binary against a sample
+set to generate profile data. This is the 2nd stage of the full 3-step PGO
+process.
+
+geckodriver-signing
+-------------------
+Signing for geckodriver binary.
+
+visual-metrics
+--------------
+Tasks that compute visual performance metrics from videos and images captured
+by other tasks.
+
+visual-metrics-dep
+------------------
+Tasks that compute visual performance metrics from videos and images captured
+by another task that produces a ``jobs.json`` artifact.
+
+iris
+----
+The Iris testing suite.
+
+maybe-release
+-------------
+A shipitscript task that does the following:
+
+1. Checks if automated releases are disabled.
+2. Checks if the changes between the current revision and the previous release's
+   revision are considered "worthwhile" for a new release.
+3. Triggers the release via ship-it, which will then create an action task.
+
+l10n-bump
+---------
+Cron-driven tasks that bump l10n-changesets files in-tree, using data from the l10n dashboard.
+
+merge-automation
+----------------
+Hook-driven tasks that automate "Merge Day" tasks during the release cycle.
+
+system-symbols
+--------------
+Generate missing macOS and Windows system symbols from crash reports.
+
+system-symbols-upload
+---------------------
+Upload macOS and Windows system symbols to Tecken.
+
+scriptworker-canary
+-------------------
+Push tasks to try to test new scriptworker deployments.
+
+updatebot
+------------------
+Check for updates to (supported) third-party libraries, and manage their lifecycle.
+
+fuzzing
+-------
+
+Performs fuzzing smoke tests.
diff --git a/taskcluster/docs/loading.rst b/taskcluster/docs/loading.rst
new file mode 100644
index 0000000000..4de4c5ee08
--- /dev/null
+++ b/taskcluster/docs/loading.rst
@@ -0,0 +1,34 @@
+Loading Tasks
+=============
+
+The full task graph generation involves creating tasks for each kind. Kinds
+are ordered to satisfy ``kind-dependencies``, and then the ``loader`` specified
+in ``kind.yml`` is used to load the tasks for that kind. It should point to
+a Python function like::
+
+ def loader(cls, kind, path, config, parameters, loaded_tasks):
+     pass
+
+The ``kind`` is the name of the kind; the configuration for that kind
+named this loader.
+
+The ``path`` is the path to the configuration directory for the kind. This
+can be used to load extra data, templates, etc.
+
+The ``parameters`` give details on which to base the task generation. See
+:doc:`parameters` for details.
+
+At the time this method is called, all kinds on which this kind depends
+(that is, specified in the ``kind-dependencies`` key in ``config``)
+have already loaded their tasks, and those tasks are available in
+the list ``loaded_tasks``.
+
+The return value is a list of inputs to the transforms listed in the kind's
+``transforms`` property. The specific format for the input depends on the first
+transform - whatever it expects. The final transform should be
+``taskgraph.transforms.task:transforms``, which produces the output format the
+task-graph generation infrastructure expects.
+
+The ``transforms`` key in ``kind.yml`` is further documented in
+:doc:`transforms`. For more information on how all of this works, consult the
+docstrings and comments in the source code itself.
diff --git a/taskcluster/docs/mach.rst b/taskcluster/docs/mach.rst
new file mode 100644
index 0000000000..ccb4ce5ac9
--- /dev/null
+++ b/taskcluster/docs/mach.rst
@@ -0,0 +1,117 @@
+Taskcluster Mach commands
+=========================
+
+A number of mach subcommands are available aside from ``mach taskgraph
+decision`` to make this complex system more accessible to those trying to
+understand or modify it. They allow you to run portions of the
+graph-generation process and output the results.
+
+``mach taskgraph tasks``
+ Get the full task set
+
+``mach taskgraph full``
+ Get the full task graph
+
+``mach taskgraph target``
+ Get the target task set
+
+``mach taskgraph target-graph``
+ Get the target task graph
+
+``mach taskgraph optimized``
+ Get the optimized task graph
+
+``mach taskgraph morphed``
+ Get the morphed task graph
+
+See :doc:`how-tos` for further practical tips on debugging task-graph mechanics
+locally.
+
+Parameters
+----------
+
+Each of these commands takes an optional ``--parameters`` argument giving a file
+with parameters to guide the graph generation. The decision task helpfully
+produces such a file on every run, and that is generally the easiest way to get
+a parameter file. The parameter keys and values are described in
+:doc:`parameters`; using that information, you may modify an existing
+``parameters.yml`` or create your own. The ``--parameters`` option can also
+take the following forms:
+
+``project=<project>``
+ Fetch the parameters from the latest push on that project
+``task-id=<task-id>``
+ Fetch the parameters from the given decision task id
+
+If not specified, parameters will default to ``project=mozilla-central``.
+
+Taskgraph JSON Format
+---------------------
+By default, the above commands will only output a list of tasks. Use the
+``-J`` flag to output full task definitions. For example:
+
+.. code-block:: shell
+
+ $ ./mach taskgraph optimized -J
+
+
+Task graphs -- both the graph artifacts produced by the decision task and those
+output by the ``--json`` option to the ``mach taskgraph`` commands -- are JSON
+objects, keyed by label, or for optimized task graphs, by taskId. For
+convenience, the decision task also writes out ``label-to-taskid.json``
+containing a mapping from label to taskId. Each task in the graph is
+represented as a JSON object.
+
+Each task has the following properties:
+
+``kind``
+ The name of this task's kind
+
+``task_id``
+ The task's taskId (only for optimized task graphs)
+
+``label``
+ The task's label
+
+``attributes``
+ The task's attributes
+
+``dependencies``
+ The task's in-graph dependencies, represented as an object mapping
+ dependency name to label (or to taskId for optimized task graphs)
+
+``optimizations``
+ The optimizations to be applied to this task
+
+``task``
+ The task's TaskCluster task definition.
+
+The results from each command are in the same format, but with some differences
+in the content:
+
+* The ``tasks`` and ``target`` subcommands both return graphs with no edges.
+  That is, just collections of tasks without any dependencies indicated.
+
+* The ``optimized`` subcommand returns tasks that have been assigned taskIds.
+  The dependencies array, too, contains taskIds instead of labels, with
+  dependencies on optimized tasks omitted. However, the ``task.dependencies``
+  array is populated with the full list of dependency taskIds. All task
+  references are resolved in the optimized graph.
+
+The output of the ``mach taskgraph`` commands is suitable for processing with
+the `jq <https://stedolan.github.io/jq/>`_ utility. For example, to extract all
+tasks' labels and their dependencies:
+
+.. code-block:: shell
+
+ jq 'to_entries | map({label: .value.label, dependencies: .value.dependencies})'
+
+An alternate way of searching the output of ``mach taskgraph`` is
+`gron <https://github.com/tomnomnom/gron>`_, which converts JSON into a format
+that's easily searched with ``grep``:
+
+.. code-block:: shell
+
+ gron taskgraph.json | grep -E 'test.*machine.platform = "linux64";'
+ ./mach taskgraph --json | gron | grep ...
+
diff --git a/taskcluster/docs/optimization-process.rst b/taskcluster/docs/optimization-process.rst
new file mode 100644
index 0000000000..4ead2094f0
--- /dev/null
+++ b/taskcluster/docs/optimization-process.rst
@@ -0,0 +1,78 @@
+Optimization Process
+====================
+
+Optimization proceeds in three phases: removing tasks, replacing tasks,
+and finally generating a subgraph containing only the remaining tasks.
+
+Assume the following task graph as context for these examples::
+
+ TC1 <--\       ,- UP1
+         , B1 <--- T1a
+ I1 <----|      `- T1b
+         ` B2 <--- T2a
+ TC2 <--/       |- T2b
+                `- UP2
+
+Removing Tasks
+--------------
+
+This phase begins with tasks on which nothing depends and follows the
+dependency graph backward from there -- right to left in the diagram above. If
+a task is not removed, then nothing it depends on will be removed either.
+Thus if T1a and T1b are both removed, B1 may be removed as well. But if T2b is
+not removed, then B2 may not be removed either.
+
+For each task with no remaining dependents, the decision whether to remove is
+made by calling the optimization strategy's ``should_remove_task`` method. If
+this method returns True, the task is removed.
+
+The optimization process takes a ``do_not_optimize`` argument containing a list
+of tasks that cannot be removed under any circumstances. This is used to
+"force" running specific tasks.
+
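+In pseudocode, this phase looks roughly like the following sketch; the helper
+names (``reverse_topological_order``, ``dependents_of``) are illustrative
+rather than the real implementation:
+
+.. code-block:: python
+
+ def remove_tasks(graph, strategies, params, do_not_optimize):
+     removed = set()
+     # Visit each task only after every task that depends on it.
+     for task in reverse_topological_order(graph):
+         dependents = set(dependents_of(graph, task.label))
+         # A task is a candidate only once all of its dependents are removed.
+         if task.label in do_not_optimize or dependents - removed:
+             continue
+         # ``task.optimization`` maps a single strategy name to its argument.
+         name, arg = next(iter(task.optimization.items()))
+         if strategies[name].should_remove_task(task, params, arg):
+             removed.add(task.label)
+     return removed
+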
+Replacing Tasks
+---------------
+
+This phase begins with tasks having no dependencies and follows the reversed
+dependency graph from there -- left to right in the diagram above. If a task is
+not replaced, then anything depending on that task cannot be replaced.
+Replacement is generally done on the basis of some hash of the inputs to the
+task. In the diagram above, if both TC1 and I1 are replaced with existing
+tasks, then B1 is a candidate for replacement. But if TC2 has no replacement,
+then replacement of B2 will not be considered.
+
+It is possible to replace a task with nothing. This is similar to optimizing
+away, but is useful for utility tasks like UP1. If such a task is considered
+for replacement, then all of its dependencies (here, B1) have already been
+replaced and there is no utility in running the task and no need for a
+replacement task. It is an error for a task on which others depend to be
+replaced with nothing.
+
+The ``do_not_optimize`` set applies to task replacement, as does an additional
+``existing_tasks`` dictionary which allows the caller to supply a set of
+known, pre-existing tasks. This is used for action tasks, for example, where it
+contains the entire task-graph generated by the original decision task.
+
+Subgraph Generation
+-------------------
+
+The first two phases annotate each task in the existing taskgraph with its
+fate: removed, replaced, or retained. The tasks that are replaced also have a
+replacement taskId.
+
+The last phase constructs a subgraph containing the retained tasks, and
+simultaneously rewrites all dependencies to refer to taskIds instead of labels.
+To do so, it assigns a taskId to each retained task and uses the replacement
+taskId for all replaced tasks.
+
+The ``soft-dependencies`` of each task are then resolved by adding, to its
+``dependencies``, any of those tasks that remain in the subgraph.
+
+The result is an optimized taskgraph with tasks named by taskId instead of
+label. At this phase, the edges in the task graph diverge from the
+``task.dependencies`` attributes, as the latter may contain dependencies
+outside of the taskgraph (for replacement tasks).
+
+As a side-effect, this phase also expands all ``{"task-reference": ".."}`` and
+``{"artifact-reference": ".."}`` objects within the task definitions.
+
diff --git a/taskcluster/docs/optimization-schedules.rst b/taskcluster/docs/optimization-schedules.rst
new file mode 100644
index 0000000000..22483c3023
--- /dev/null
+++ b/taskcluster/docs/optimization-schedules.rst
@@ -0,0 +1,97 @@
+Optimization and SCHEDULES
+==========================
+
+Most optimization of builds and tests is handled with ``SCHEDULES``.
+The concept is this: we allocate tasks into named components, and associate a set of such components to each file in the source tree.
+Given a set of files changed in a push, we then calculate the union of components affected by each file, and remove tasks that are not tagged with any of them.
+
+This optimization system is intended to be *conservative*.
+It represents what could *possibly* be affected, rather than any intuitive notion of what tasks would be useful to run for changes to a particular file.
+For example:
+
+* ``dom/url/URL.cpp`` schedules tasks on all platforms and could potentially cause failures in any test suite
+
+* ``dom/system/mac/CoreLocationLocationProvider.mm`` could not possibly affect any platform but ``macosx``, but could potentially affect any test suite
+
+* ``python/mozbuild/mozbuild/preprocessor.py`` could possibly affect any platform, and should also schedule Python lint tasks
+
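+Conceptually, this pruning amounts to the small sketch below, where
+``components_for_file`` stands in for the mapping derived from ``moz.build``
+annotations (both helper names are illustrative):
+
+.. code-block:: python
+
+ def affected_components(changed_files, components_for_file):
+     """Union the components associated with every changed file."""
+     affected = set()
+     for path in changed_files:
+         affected |= components_for_file(path)
+     return affected
+
+ def task_is_removed(task_components, affected):
+     # A task survives only if it is tagged with an affected component.
+     return not (task_components & affected)
+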
+Exclusive and Inclusive
+-----------------------
+
+The first wrinkle in this "simple" plan is that there are a lot of files, and for the most part they all affect most components.
+But there are some components which are only affected by a well-defined set of files.
+For example, a Python lint component need only be scheduled when Python files are changed.
+
+We divide the components into "exclusive" and "inclusive" components.
+Absent any other configuration, any file in the repository is assumed to affect all of the exclusive components and none of the inclusive components.
+
+Exclusive components can be thought of as a series of families.
+For example, the platform (linux, windows, macosx, android) is a component family.
+The test suite (mochitest, reftest, xpcshell, etc.) is another.
+By default, source files are associated with every component in every family.
+This means tasks tagged with an exclusive component will *always* run, unless none of the modified source files are associated with that component.
+
+But what if we only want to run a particular task when a pre-determined file is modified?
+This is where inclusive components are used.
+Any task tagged with an inclusive component will *only* be run when a source file associated with that component is modified.
+Lint tasks and well separated unittest tasks are good examples of things you might want to schedule inclusively.
+
+A good way to keep this straight is to think of exclusive platform-family components (``macosx``, ``android``, ``windows``, ``linux``) and inclusive linting components (``py-lint``, ``js-lint``).
+An arbitrary file in the repository affects all platform families, but does not necessarily require a lint run.
+But we can configure mac-only files such as ``CoreLocationLocationProvider.mm`` to affect exclusively ``macosx``, and Python files like ``preprocessor.py`` to affect ``py-lint`` in addition to the exclusive components.
+
+It is also possible to define a file as affecting an inclusive component and nothing else.
+For example, the source code and configuration for the Python linting tasks does not affect any tasks other than linting.
+
+.. note::
+
+ Most unit test suite tasks are allocated to components for their platform family and for the test suite.
+ This indicates that if a platform family is affected (for example, ``android``) then the builds for that platform should execute as well as the full test suite.
+ If only a single suite is affected (for example, by a change to a reftest source file), then the reftests should execute for all platforms.
+
+ However, some test suites, for which the set of contributing files are well-defined, are represented as inclusive components.
+ These components will not be executed by default for any platform families, but only when one or more of the contributing files are changed.
+
+Specification
+-------------
+
+Components are defined as either inclusive or exclusive in :py:mod:`mozbuild.schedules`.
+
+File Annotation
+:::::::::::::::
+
+Files are annotated with their affected components in ``moz.build`` files with stanzas like ::
+
+ with Files('**/*.py'):
+     SCHEDULES.inclusive += ['py-lint']
+
+for inclusive components and ::
+
+ with Files('*gradle*'):
+     SCHEDULES.exclusive = ['android']
+
+for exclusive components.
+Note the use of ``+=`` for inclusive components (as this is adding to the existing set of affected components) but ``=`` for exclusive components (as this is resetting the affected set to something smaller).
+For cases where an inclusive component is affected exclusively (such as the python-lint configuration in the example above), that component can be assigned to ``SCHEDULES.exclusive``::
+
+ with Files('**/pep8rc'):
+     SCHEDULES.exclusive = ['py-lint']
+
+If multiple stanzas set ``SCHEDULES.exclusive``, the last one will take precedence. Thus the following
+will set ``SCHEDULES.exclusive`` to ``hpux`` for all files except those under ``docs/``. ::
+
+ with Files('**'):
+     SCHEDULES.exclusive = ['hpux']
+
+ with Files('**/docs'):
+     SCHEDULES.exclusive = ['docs']
+
+Task Annotation
+:::::::::::::::
+
+Tasks are annotated with the components they belong to using the ``"skip-unless-schedules"`` optimization, which takes a list of components for this task::
+
+ task['optimization'] = {'skip-unless-schedules': ['windows', 'gtest']}
+
+For tests, this value is set automatically by the test transform based on the suite name and the platform family, doing the correct thing for inclusive test suites.
+Tests can also use a variety of other optimizers, such as ``relevant_tests``, ``bugbug`` (uses machine learning) or ``backstop`` (ensures regressions aren't missed).
diff --git a/taskcluster/docs/optimization.rst b/taskcluster/docs/optimization.rst
new file mode 100644
index 0000000000..624b479ece
--- /dev/null
+++ b/taskcluster/docs/optimization.rst
@@ -0,0 +1,52 @@
+Optimization
+============
+
+The objective of optimization is to remove as many tasks from the graph as
+possible, as efficiently as possible, thereby delivering useful results as
+quickly as possible. For example, ideally if only a test script is modified in
+a push, then the resulting graph contains only the corresponding test suite
+task.
+
+A task is said to be "optimized" when it is either replaced with an equivalent,
+already-existing task, or dropped from the graph entirely.
+
+Optimization Strategies
+-----------------------
+
+Each task has a single named optimization strategy, and can provide an argument
+to that strategy. Each strategy is defined as an ``OptimizationStrategy``
+instance in ``taskcluster/taskgraph/optimization.py``.
+
+Each task has a ``task.optimization`` property describing the optimization
+strategy that applies, specified as a dictionary mapping strategy to argument. For
+example::
+
+ task.optimization = {'skip-unless-changed': ['js/**', 'tests/**']}
+
+Strategy implementations are shared across all tasks, so they may cache
+commonly-used information as instance variables.
+
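+For illustration, a custom strategy definition might look like the following
+sketch. The base class and the ``register_strategy`` helper are the ones the
+in-tree strategies use, but the strategy name and its argument here are
+hypothetical:
+
+.. code-block:: python
+
+ from datetime import datetime
+
+ from taskgraph.optimize import OptimizationStrategy, register_strategy
+
+ @register_strategy("skip-unless-weekday")
+ class SkipUnlessWeekday(OptimizationStrategy):
+     """Remove (skip) the task unless today is a weekday listed in ``arg``."""
+
+     def should_remove_task(self, task, params, arg):
+         # ``arg`` comes from the task's ``optimization`` property, e.g.
+         # {'skip-unless-weekday': ['Monday', 'Thursday']}.
+         return datetime.utcnow().strftime("%A") not in arg
+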
+Optimizing Target Tasks
+-----------------------
+
+In some cases, such as try pushes, tasks in the target task set have been
+explicitly requested and are thus excluded from optimization. In other cases,
+the target task set is almost the entire task graph, so targeted tasks are
+considered for optimization. This behavior is controlled with the
+``optimize_target_tasks`` parameter.
+
+.. note::
+
+ Because it is a mix of "what the push author wanted" and "what should run
+ when necessary", try pushes with the old option syntax (``-b do -p all``,
+ etc.) *do* optimize target tasks. This can cause unexpected results when
+ requested jobs are optimized away. If those jobs were actually necessary,
+ then a try push with ``try_task_config.json`` is the solution.
+
+More Information
+----------------
+
+.. toctree::
+
+ optimization-process
+ optimization-schedules
diff --git a/taskcluster/docs/parameters.rst b/taskcluster/docs/parameters.rst
new file mode 100644
index 0000000000..ac5746bee4
--- /dev/null
+++ b/taskcluster/docs/parameters.rst
@@ -0,0 +1,256 @@
+==========
+Parameters
+==========
+
+Task-graph generation takes a collection of parameters as input, in the form of
+a JSON or YAML file.
+
+During decision-task processing, some of these parameters are supplied on the
+command line or by environment variables. The decision task helpfully produces
+a full parameters file as one of its output artifacts. The other ``mach
+taskgraph`` commands can take this file as input. This can be very helpful
+when working on a change to the task graph.
+
+When experimenting with local runs of the task-graph generation, it is always
+best to find a recent decision task's ``parameters.yml`` file, and modify that
+file if necessary, rather than starting from scratch. This ensures you have a
+complete set of parameters.
+
+The properties of the parameters object are described here, divided roughly by
+topic.
+
+Push Information
+----------------
+
+``backstop``
+ Whether or not this push is a "backstop" push. That is a push where all
+ builds and tests should run to ensure regressions aren't accidentally
+ missed.
+
+``base_repository``
+ The repository from which to do an initial clone, utilizing any available
+ caching.
+
+``head_repository``
+ The repository containing the changeset to be built. This may differ from
+ ``base_repository`` in cases where ``base_repository`` is likely to be cached
+ and only a few additional commits are needed from ``head_repository``.
+
+``head_rev``
+ The revision to check out; this can be a short revision string.
+
+``head_ref``
+ For Mercurial repositories, this is the same as ``head_rev``. For
+ git repositories, which do not allow pulling explicit revisions, this gives
+ the symbolic ref containing ``head_rev`` that should be pulled from
+ ``head_repository``.
+
+``owner``
+ Email address indicating the person who made the push. Note that this
+ value may be forged and *must not* be relied on for authentication.
+
+``message``
+ The try syntax in the commit message, if any.
+
+``pushlog_id``
+ The ID from the ``hg.mozilla.org`` pushlog.
+
+``pushdate``
+ The timestamp of the push to the repository that triggered this decision
+ task. Expressed as an integer seconds since the UNIX epoch.
+
+``hg_branch``
+ The Mercurial branch the revision lives on.
+
+``build_date``
+ The timestamp of the build date. Defaults to ``pushdate``, falling back to the
+ time of the taskgraph invocation. Expressed as an integer seconds since the UNIX epoch.
+
+``moz_build_date``
+ A formatted timestamp of ``build_date``. Expressed as a string in the
+ format ``%Y%m%d%H%M%S``.
+
+``tasks_for``
+ The ``tasks_for`` value used to generate the decision task.
+
+Tree Information
+----------------
+
+``project``
+ Another name for what may otherwise be called tree or branch or
+ repository. This is the unqualified name, such as ``mozilla-central`` or
+ ``cedar``.
+
+``level``
+ The `SCM level
+ <https://www.mozilla.org/en-US/about/governance/policies/commit/access-policy/>`_
+ associated with this tree. This dictates the names of resources used in the
+ generated tasks, and those tasks will fail if it is incorrect.
+
+Try Configuration
+-----------------
+
+``try_mode``
+ The mode in which a try push is operating. This can be one of:
+
+ * ``"try_task_config"`` - Used to configure the taskgraph.
+ * ``"try_option_syntax"`` - Used when pushing to try with legacy try syntax.
+ * ``"try_auto"`` - Used to make try pushes behave more like a push on ``autoland``.
+ * ``"try_select"`` - Used by ``mach try`` to build a list of tasks locally.
+ * ``None`` - Not a try push or ``mach try release``.
+
+``try_options``
+ The arguments given as try syntax (as a dictionary), or ``None`` if
+ ``try_mode`` is not ``try_option_syntax``.
+
+``try_task_config``
+ The contents of the ``try_task_config.json`` file, or ``{}`` if
+ ``try_mode`` is not ``try_task_config``.
+
+Test Configuration
+------------------
+
+``test_manifest_loader``
+ The test manifest loader to use as defined in ``taskgraph.util.chunking.manifest_loaders``.
+
+Target Set
+----------
+
+The "target set" is the set of task labels which must be included in a task
+graph. The task graph generation process will include any tasks required by
+those in the target set, recursively. In a decision task, this set can be
+specified programmatically using one of a variety of methods (e.g., parsing try
+syntax or reading a project-specific configuration file).
+
+``filters``
+ List of filter functions (from ``taskcluster/taskgraph/filter_tasks.py``) to
+ apply. This is usually defined internally, as filters are typically
+ global.
+
+``target_tasks_method``
+ The method to use to determine the target task set. This is the suffix of
+ one of the functions in ``taskcluster/taskgraph/target_tasks.py``.
+
+``release_history``
+ History of recent releases by platform and locale, used when generating
+ partial updates for nightly releases.
+ Suitable contents can be generated with ``mach release-history``,
+ which will print to the console by default.
+
+Optimization
+------------
+
+``optimize_strategies``
+ A python path of the form ``<module>:<object>`` containing a dictionary of
+ optimization strategies to use, overwriting the defaults.
+
+``optimize_target_tasks``
+ If true, then target tasks are eligible for optimization.
+
+``do_not_optimize``
+ Specify tasks to not optimize out of the graph. This is a list of labels.
+ Any tasks in the graph matching one of the labels will not be optimized out
+ of the graph.
+
+``existing_tasks``
+ Specify tasks to optimize out of the graph. This is a dictionary of label to taskId.
+ Any tasks in the graph matching one of the labels will use the previously-run
+ taskId rather than submitting a new task.
+
+Release Promotion
+-----------------
+
+``build_number``
+ Specify the release promotion build number.
+
+``version``
+ Specify the version for release tasks.
+
+``app_version``
+ Specify the application version for release tasks. For releases, this is often a less specific version number than ``version``.
+
+``next_version``
+ Specify the next version for version bump tasks.
+
+``release_type``
+ The type of release being promoted. One of "nightly", "beta", "esr78", "release-rc", or "release".
+
+``release_eta``
+ The time and date when a release is scheduled to live. This value is passed to Balrog.
+
+``release_enable_partner_repack``
+ Boolean which controls repacking vanilla Firefox builds for partners.
+
+``release_enable_partner_attribution``
+ Boolean which controls adding attribution to vanilla Firefox builds for partners.
+
+``release_enable_emefree``
+ Boolean which controls repacking vanilla Firefox builds into EME-free builds.
+
+``release_partners``
+ List of partners to repack or attribute if a subset of the whole config. A null value defaults to all.
+
+``release_partner_config``
+ Configuration for partner repacks & attribution, as well as EME-free repacks.
+
+``release_partner_build_number``
+ The build number for partner repacks. We sometimes have multiple partner build numbers per release build number; this parameter lets us bump them independently. Defaults to 1.
+
+``release_product``
+ The product that is being released.
+
+``required_signoffs``
+ A list of signoffs that are required for this release promotion flavor. If specified, and if the corresponding ``signoff_urls`` URL isn't specified, tasks that require these signoffs will not be scheduled.
+
+``signoff_urls``
+ A dictionary of signoff keys to URL values. These are the URLs marking the corresponding ``required_signoffs`` as signed off.
+
+
+Repository Merge Day
+--------------------
+
+``merge_config``
+ Merge config describes the repository merge behaviour, using an alias to cover which set of file replacements and version increments are required, along with overrides for the source and target repository URIs.
+
+``source_repo``
+ The clone/push URI of the source repository, such as https://hg.mozilla.org/mozilla-central
+
+``target_repo``
+ The clone/push URI of the target repository, such as https://hg.mozilla.org/releases/mozilla-beta
+
+``source_branch``
+ The firefoxtree alias of the source branch, such as 'central', 'beta'
+
+``target_branch``
+ The firefoxtree alias of the target branch, such as 'beta', 'release'
+
+``force-dry-run``
+ Don't push any results to target repositories.
+
+
+Comm Push Information
+---------------------
+
+These parameters correspond to the repository and revision of the comm-central
+repository to checkout. Their meaning is the same as the corresponding
+parameters for the gecko repository above. They are optional, but if any of
+them are specified, they must all be specified.
+
+``comm_base_repository``
+``comm_head_repository``
+``comm_head_rev``
+``comm_head_ref``
+
+Code Review
+-----------
+
+``phabricator_diff``
+ The code review process needs to know the Phabricator Differential diff that
+ started the analysis. This parameter must start with ``PHID-DIFF-``.
+
+Local configuration
+-------------------
+
+``target-kind``
+ Generate only the given kind and its kind-dependencies. This is used for local inspection of the graph
+ and is not supported at run-time.
diff --git a/taskcluster/docs/partials.rst b/taskcluster/docs/partials.rst
new file mode 100644
index 0000000000..e4cb369354
--- /dev/null
+++ b/taskcluster/docs/partials.rst
@@ -0,0 +1,123 @@
+Partial Update Generation
+=========================
+
+Overview
+--------
+
+Windows, Mac and Linux releases have partial updates, to reduce
+the file size end-users have to download in order to receive new
+versions. These are created using a docker image, some Python,
+``mbsdiff``, and the tools in ``tools/update-packaging``.
+
+The task has been called 'Funsize' for quite some time. This might
+make sense depending on what brands of chocolate bar are available
+near you.
+
+How the Task Works
+------------------
+
+Funsize uses a docker image that's built in-tree, named funsize-update-generator.
+The image contains some Python to examine the task definition and determine
+what needs to be done, but it downloads tools like ``mar`` and ``mbsdiff``
+from either locations specified in the task definition or default
+mozilla-central locations.
+
+The 'extra' section of the task definition contains most of the payload, under
+the 'funsize' key. In here is a list of partials that this specific task will
+generate, and each entry includes the earlier (or 'from') version, and the most
+recent (or 'to') version, which for most releases will likely be a taskcluster
+artifact.
+
+.. code-block:: json
+
+ {
+ "to_mar": "https://tc.net/api/queue/v1/task/EWtBFqVuT-WqG3tGLxWhmA/artifacts/public/build/ach/target.complete.mar",
+ "product": "Firefox",
+ "dest_mar": "target-60.0b8.partial.mar",
+ "locale": "ach",
+ "from_mar": "http://archive.mozilla.org/pub/firefox/candidates/60.0b8-candidates/build1/update/linux-i686/ach/firefox-60.0b8.complete.mar",
+ "update_number": 2,
+ "platform": "linux32",
+ "previousVersion": "60.0b8",
+ "previousBuildNumber": "1",
+ "branch": "mozilla-beta"
+ }
+
+The 'update number' indicates how many released versions there are between 'to' and the current 'from'.
+For example, if we are building a partial update for the current nightly from the previous one, the update
+number will be 1. For the release before that, it will be 2. This lets us use generic output artifact
+names that we can rename in the later ``beetmover`` tasks.
+
+Inside the task, for each partial it has been told to generate, it will download, unpack and virus
+scan the 'from_mar' and 'to_mar', download the tools, and run ``make_incremental_update.sh`` from
+``tools/update-packaging``.
+
+If a scope is given for a set of temporary S3 credentials, the task will use a caching script
+to allow re-use of the diffs made for larger files. Some of the larger files are not localised,
+and this allows us to save a lot of compute time.
+
+For Releases
+------------
+
+Partials are made as part of the ``promote`` task group. The previous
+versions used to create the update are specified in ship-it by
+Release Management.
+
+Nightly Partials
+----------------
+
+Since nightly releases don't appear in ship-it, the partials to create
+are determined in the decision task. This was controversial, and so here
+are the assumptions and reasons, so that when an alternative solution is
+discovered, we can assess it in context:
+
+1. Balrog is the source of truth for previous nightly releases.
+2. Re-running a task should produce the same results.
+3. A task's input and output should be specified in the definition.
+4. A task transform should avoid external dependencies. This is to
+ increase the number of scenarios in which 'mach taskgraph' works.
+5. A task graph doesn't explicitly know that it's intended for nightlies,
+ only that specific tasks are only present for nightly.
+6. The decision task is explicitly told that its target is nightly
+ using the target-tasks-method argument.
+
+a. From 2 and 3, this means that the partials task itself cannot query
+ balrog for the history, as it may get different results when re-run,
+ and hides the inputs and outputs from the task definition.
+b. From 4, anything run by 'mach taskgraph' is an inappropriate place
+ to query Balrog, even if it results in a repeatable task graph.
+c. Since these restrictions don't apply to the decision task, and given
+ 6, we can query Balrog in the decision task if the target-tasks-method
+ given contains 'nightly', such as 'nightly_desktop' or 'nightly_linux'.
+
+Using the decision task involves making fewer, larger queries to Balrog,
+and storing the results for task graph regeneration and later audit. At
+the moment this data is stored in the ``parameters`` under the label
+``release_history``, since the parameters are an existing method for
+passing data to the task transforms, but a case could be made
+for adding a separate store, as it's a significantly larger number of
+records than anything else in the parameters.
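+
+As a rough sketch, the ``release_history`` structure is keyed by platform and
+locale; the exact fields below are illustrative rather than the precise schema:
+
+.. code-block:: python
+
+    release_history = {
+        "linux64": {
+            "en-US": {
+                "firefox-71.0a1.en-US.linux-x86_64.partial.mar": {  # hypothetical name
+                    "buildid": "20190917093629",
+                    "mar_url": "https://archive.mozilla.org/...",  # truncated
+                },
+            },
+        },
+    }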
+
+Nightly Partials and Beetmover
+------------------------------
+
+A release for a specific platform and locale may not have a history of
+prior releases that can be used to build partial updates. This could be
+for a variety of reasons, such as a new locale, or a hiatus in nightly
+releases creating too long a gap in the history.
+
+This means that the ``partials`` and ``partials-signing`` tasks may have
+nothing to do for a platform and locale. If this is true, then the tasks
+are filtered out in the ``transform``.
+
+This does mean that the downstream task, ``beetmover-repackage``, cannot
+rely on the ``partials-signing`` task existing. It can depend on either the
+``partials-signing`` or the ``repackage-signing`` task, and chooses which
+to depend on in the transform.
+
+If there is a history in the ``parameters`` ``release_history`` section
+then ``beetmover-repackage`` will depend on ``partials-signing``.
+Otherwise, it will depend on ``repackage-signing``.
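+
+A minimal sketch of that choice (not the actual transform code):
+
+.. code-block:: python
+
+    def choose_upstream(parameters, platform, locale):
+        """Pick the signing task beetmover-repackage should depend on."""
+        history = parameters.get("release_history", {})
+        if history.get(platform, {}).get(locale):
+            # Partials exist for this platform/locale, so a partials-signing
+            # task will be present in the graph.
+            return "partials-signing"
+        return "repackage-signing"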
+
+This is not ideal, as it results in unclear logic in the task graph
+generation. It will be improved.
diff --git a/taskcluster/docs/partner-attribution.rst b/taskcluster/docs/partner-attribution.rst
new file mode 100644
index 0000000000..51365018cc
--- /dev/null
+++ b/taskcluster/docs/partner-attribution.rst
@@ -0,0 +1,121 @@
+Partner attribution
+===================
+.. _partner attribution:
+
+In contrast to :ref:`partner repacks`, attributed builds only differ from the normal Firefox
+builds by the addition of a string in the dummy windows signing certificate. We support doing this for
+full installers but not stub installers. The parameters of the string are carried into the telemetry system,
+tagging an install into a cohort of users. This is a lighter-weight process because we don't
+repackage or re-sign the builds.
+
+Parameters & Scheduling
+-----------------------
+
+Partner attribution uses a number of parameters to control how it works:
+
+* ``release_enable_partner_attribution``
+* ``release_partner_config``
+* ``release_partner_build_number``
+* ``release_partners``
+
+The enable parameter is a boolean, a simple on/off switch. We set it in shipit's
+`is_partner_enabled() <https://github.com/mozilla-releng/shipit/blob/main/api/src/shipit_api/admin/release.py#L93>`_ when starting a
+release. It's true for Firefox betas >= b8 and releases, but otherwise false, the same as
+partner repacks.
+
+``release_partner_config`` is a dictionary of configuration data which drives the task generation
+logic. It's usually looked up during the release promotion action task, using the Github
+GraphQL API in the `get_partner_config_by_url()
+<python/taskgraph.util.html#taskgraph.util.partners.get_partner_config_by_url>`_ function, with the
+url defined in `taskcluster/ci/config.yml <https://searchfox.org/mozilla-central/search?q=partner-urls&path=taskcluster%2Fci%2Fconfig.yml&case=true&regexp=false&redirect=true>`_.
+
+``release_partner_build_number`` is an integer used to create unique upload paths in the firefox
+candidates directory, while ``release_partners`` is a list of partners that should be
+attributed (i.e. a subset of the whole config). Both are intended for use when respinning a partner after
+the regular Firefox has shipped. More information on that can be found in the
+`RelEng Docs <https://moz-releng-docs.readthedocs.io/en/latest/procedures/misc-operations/off-cycle-partner-repacks-and-funnelcake.html>`_.
+
+``release_partners`` is shared with partner repacks but we don't support doing both at the same time.
+
+
+Configuration
+-------------
+
+This is done using an ``attribution_config.yml`` file which lives next to the ``default.xml`` used
+for partner repacks. There are no per-partner repos; the whole configuration exists in the one
+file because the amount of information to be tracked is much smaller.
+
+An example config looks like this:
+
+.. code-block:: yaml
+
+ defaults:
+ medium: distribution
+ source: mozilla
+ configs:
+ - campaign: sample
+ content: sample-001
+ locales:
+ - en-US
+ - de
+ - ru
+ platforms:
+ - win64-shippable
+ - win32-shippable
+ upload_to_candidates: true
+
+The four main parameters are ``medium``, ``source``, ``campaign``, and ``content``, of which the first two are
+common to all attributions. The combination of ``campaign`` and ``content`` should be unique
+to avoid confusion in telemetry data. They correspond to the repo name and sub-directory in partner repacks,
+so avoid any overlap between values in partner repacks and attribution.
+The optional parameters ``variation`` and ``experiment`` may also be specified.
+
+Non-empty lists of locales and platforms are required parameters (NB the ``-shippable`` suffix should be used on
+the platforms).
+
+``upload_to_candidates`` is an optional setting which controls whether the Firefox installers
+are uploaded into the `candidates directory <https://archive.mozilla.org/pub/firefox/candidates/>`_.
+If not set, the files are uploaded to the private S3 bucket for partner builds.
+
+
+Repacking process
+-----------------
+
+Attribution only has two kinds:
+
+* attribution - add attribution code to the regular builds
+* beetmover - move the files to a partner-specific destination
+
+Attribution
+^^^^^^^^^^^
+
+* kinds: ``release-partner-attribution``
+* platforms: Any Windows, runs on linux
+* upstreams: ``repackage-signing`` ``repackage-signing-l10n``
+
+There is one task, calling out to `python/mozrelease/mozrelease/partner_attribution.py
+<https://hg.mozilla.org/releases/mozilla-release/file/default/python/mozrelease/mozrelease/partner_attribution.py>`_.
+
+It takes as input the repackage-signing and repackage-signing-l10n artifacts, which are all
+target.exe full installers. The ``ATTRIBUTION_CONFIG`` environment variable controls the script.
+It produces more target.exe installers.
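+
+As a hedged sketch of what "adding attribution" means: Firefox attribution
+data is a URL-encoded string of the configured parameters. The helper below is
+illustrative only, not the code in ``partner_attribution.py``:
+
+.. code-block:: python
+
+    from urllib.parse import urlencode
+
+    def attribution_code(campaign, content, medium="distribution", source="mozilla"):
+        # Build the key=value string that gets embedded in the installer.
+        return urlencode(
+            {"campaign": campaign, "content": content, "medium": medium, "source": source}
+        )
+
+    # attribution_code("sample", "sample-001") ->
+    # "campaign=sample&content=sample-001&medium=distribution&source=mozilla"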
+
+The size of the ``ATTRIBUTION_CONFIG`` variable may grow large if the number of configurations
+increases, and it may be necessary to pass the content of ``attribution_config.yml`` to the
+script instead, or via an artifact of the promotion task.
+
+Beetmover
+^^^^^^^^^
+
+* kinds: ``release-partner-attribution-beetmover``
+* platforms: N/A, scriptworker
+* upstreams: ``release-partner-attribution``
+
+Moves and renames the artifacts to their public location in the `candidates directory
+<https://archive.mozilla.org/pub/firefox/candidates/>`_, or a private S3 bucket. There is one task
+for public artifacts and another for private.
+
+Each task will have the ``project:releng:beetmover:action:push-to-partner`` scope, with public uploads having
+``project:releng:beetmover:bucket:release`` and private uploads using
+``project:releng:beetmover:bucket:partner``. There's a partner-specific code path in
+`beetmoverscript <https://github.com/mozilla-releng/scriptworker-scripts/tree/master/beetmoverscript>`_.
diff --git a/taskcluster/docs/partner-repacks.rst b/taskcluster/docs/partner-repacks.rst
new file mode 100644
index 0000000000..236b7babd4
--- /dev/null
+++ b/taskcluster/docs/partner-repacks.rst
@@ -0,0 +1,256 @@
+Partner repacks
+===============
+.. _partner repacks:
+
+We create slightly-modified Firefox releases for some extra audiences:
+
+* EME-free builds, which disable DRM plugins by default
+* Funnelcake builds, which are used for Mozilla experiments
+* partner builds, which customize Firefox for external partners
+
+We use the phrase "partner repacks" to refer to all these builds because they
+use the same process of repacking regular Firefox releases with additional files.
+The specific differences depend on the type of build.
+
+We produce partner repacks for some beta builds, and for release builds, as part of the release
+automation. We don't produce any files to update these builds as they are handled automatically
+(see updates_).
+
+We also produce :ref:`partner attribution` builds, which are Firefox Windows installers with a cohort identifier
+added.
+
+Parameters & Scheduling
+-----------------------
+
+Partner repacks have a number of parameters which control how they work:
+
+* ``release_enable_emefree``
+* ``release_enable_partner_repack``
+* ``release_partner_config``
+* ``release_partner_build_number``
+* ``release_partners``
+
+We split the repacks into two 'paths', EME-free and everything else, to retain some
+flexibility over enabling/disabling them separately. This costs us some duplication of the kinds
+in the repacking stack. The two enable parameters are booleans to turn these two paths
+on/off. We set them in shipit's `is_partner_enabled() <https://github.com/mozilla-releng/shipit/blob/main/api/src/shipit_api/admin/release.py#L93>`_ when starting a
+release. They're both true for Firefox betas >= b8 and releases, but otherwise disabled.
+
+``release_partner_config`` is a dictionary of configuration data which drives the task generation
+logic. It's usually looked up during the release promotion action task, using the Github
+GraphQL API in the `get_partner_config_by_url()
+<python/taskgraph.util.html#taskgraph.util.partners.get_partner_config_by_url>`_ function, with the
+url defined in `taskcluster/ci/config.yml <https://searchfox.org/mozilla-release/search?q=regexp%3A^partner+path%3Aconfig.yml&redirect=true>`_.
+
+``release_partner_build_number`` is an integer used to create unique upload paths in the firefox
+candidates directory, while ``release_partners`` is a list of partners that should be
+repacked (i.e. a subset of the whole config). Both are intended for use when respinning a few partners after
+the regular Firefox has shipped. More information on that can be found in the
+`RelEng Docs <https://moz-releng-docs.readthedocs.io/en/latest/procedures/misc-operations/off-cycle-partner-repacks-and-funnelcake.html>`_.
+
+Most of the machine time for generating partner repacks takes place in the ``promote`` phase of the
+automation, or ``promote_rc`` in the case of X.0 release candidates. The EME-free builds are copied into the
+Firefox releases directory in the ``push`` phase, along with the regular bits.
+
+
+Configuration
+-------------
+
+We need some configuration to know *what* to repack, and *how* to do that. The *what* is defined by
+default.xml manifests, as used with the `repo <https://gerrit.googlesource.com/git-repo>`_ tool
+for git. The `default.xml for EME-free <https://github.com/mozilla-partners/mozilla-EME-free-manifest/blob/master/default.xml>`_ illustrates this::
+
+ <?xml version="1.0" ?>
+ <manifest>
+ <remote fetch="git@github.com:mozilla-partners/" name="mozilla-partners"/>
+ <remote fetch="git@github.com:mozilla/" name="mozilla"/>
+
+ <project name="repack-scripts" path="scripts" remote="mozilla-partners" revision="master"/>
+ <project name="build-tools" path="scripts/tools" remote="mozilla" revision="master"/>
+ <project name="mozilla-EME-free" path="partners/mozilla-EME-free" remote="mozilla-partners" revision="master"/>
+ </manifest>
+
+The repack-scripts and build-tools repos are found in all manifests, and then there is a list of
+partner repositories which contain the *how* configuration. Some of these repos are not publicly
+visible.
+
+A partner repository may contain multiple configurations inside the ``desktop`` directory. Each
+subdirectory must contain a ``repack.cfg`` and a ``distribution`` directory, the latter
+containing the customizations needed. Here's `EME-free's repack.cfg <https://github.com/mozilla-partners/mozilla-EME-free/blob/master/desktop/mozilla-EME-free/repack.cfg>`_::
+
+ aus="mozilla-EMEfree"
+ dist_id="mozilla-EMEfree"
+ dist_version="1.0"
+ linux-i686=false
+ linux-x86_64=false
+ locales="ach af an ar" # truncated for display here
+ mac=true
+ win32=true
+ win64=true
+ output_dir="%(platform)s-EME-free/%(locale)s"
+
+ # Upload params
+ upload_to_candidates=true
+
+Note the list of locales and boolean toggles for enabling platforms. The ``output_dir`` and
+``upload_to_candidates`` parameters are only present for repacks which are uploaded into the
+`candidates directory <https://archive.mozilla.org/pub/firefox/candidates/>`_.
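+
+The ``output_dir`` value uses Python-style string interpolation, as the
+``%(name)s`` syntax suggests; a quick illustration of how the example above
+expands:
+
+.. code-block:: python
+
+    output_dir = "%(platform)s-EME-free/%(locale)s"
+    print(output_dir % {"platform": "win64", "locale": "de"})
+    # -> win64-EME-free/de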
+
+All customizations will be placed in the ``distribution`` directory at the root of the Firefox
+install directory, or in the case of OS X in ``Firefox.app/Contents/Resources/distribution/``. A
+``distribution.ini`` file is the minimal requirement, here's an example from `EME-free
+<https://github.com/mozilla-partners/mozilla-EME-free/blob/master/desktop/mozilla-EME-free/distribution/distribution.ini>`_::
+
+ # Partner Distribution Configuration File
+ # Author: Mozilla
+ # Date: 2015-03-27
+
+ [Global]
+ id=mozilla-EMEfree
+ version=1.0
+ about=Mozilla Firefox EME-free
+
+ [Preferences]
+ media.eme.enabled=false
+ app.partner.mozilla-EMEfree="mozilla-EMEfree"
+
+Extensions and other customizations might also be included in repacks.
+
+
+Repacking process
+-----------------
+
+The stack of tasks to create partner repacks is broadly similar to localised nightlies and
+regular releases. The basic form is
+
+* partner repack - insert the customisations into the regular builds
+* signing - sign the internals which will become the installer (Mac only)
+* repackage - create the "installer" (Mac and Windows)
+* chunking dummy - a linux only bridge to ...
+* repackage signing - sign the "installers" (mainly Windows)
+* beetmover - move the files to a partner-specific destination
+* beetmover checksums - possibly beetmove the checksums from previous step
+
+Some key divergences are:
+
+* all intermediate artifacts are uploaded with a ``releng/partner`` prefix
+* we don't insert any binaries on Windows so no need for internal signing
+* there's no need to create any complete mar files at the repackage step
+* we support both public and private destinations in beetmover
+* we only need beetmover checksums for EME-free builds
+
+
+Partner repack
+^^^^^^^^^^^^^^
+
+* kinds: ``release-partner-repack`` ``release-eme-free-repack``
+* platforms: Typically all (but depends on what's enabled by partner configuration)
+* upstreams: ``build-signing`` ``l10n-signing``
+
+There is one task per platform in this step, calling out to `scripts/desktop_partner_repacks.py
+<https://hg.mozilla.org/mozilla-central/file/default/testing/mozharness/scripts/desktop_partner_repacks.py>`_
+in mozharness to prepare an environment and then perform the repacks.
+The actual repacking is done by `python/mozrelease/mozrelease/partner_repack.py
+<https://hg.mozilla.org/mozilla-central/file/default/python/mozrelease/mozrelease/partner_repack.py>`_.
+
+It takes as input the build-signing and l10n-signing artifacts, which are all zip/tar.gz/tar.bz2
+archives, simplifying the repack process by avoiding dmg and exe. Windows produces ``target.zip``
+& ``setup.exe``, Mac is ``target.tar.gz``, Linux is the final product ``target.tar.bz2``
+(beetmover handles pretty naming as usual).
+
+Signing
+^^^^^^^
+
+* kinds: ``release-partner-repack-notarization-part-1`` ``release-partner-repack-notarization-poller`` ``release-partner-repack-signing``
+* platforms: Mac
+* upstreams: ``release-partner-repack`` ``release-eme-free-repack``
+
+We chunk the single partner repack task out to signing tasks with 5 artifacts each. For
+example, EME-free will become 19 tasks. We collect the target.tar.gz from the
+upstream, and return a signed target.tar.gz. We use a ``target.dmg`` artifact for
+nightlies/regular releases, but this is converted to ``target.tar.gz`` by the signing
+scriptworker before sending it to the signing server, so partners are equivalent. The ``part-1`` task
+uploads the binaries to apple, while the ``poller`` task waits for their approval, then
+``release-partner-repack-signing`` staples on the notarization ticket.
+
+Repackage
+^^^^^^^^^
+
+* kinds: ``release-partner-repack-repackage`` ``release-eme-free-repack-repackage``
+* platforms: Mac & Windows
+* upstreams:
+
+ * Mac: ``release-partner-signing`` ``release-eme-free-signing``
+ * Windows: ``release-partner-repack`` ``release-eme-free-repack``
+
+Mac has a repackage job for each of the signing tasks. Windows repackages are chunked here to
+the same granularity as mac. Takes ``target.zip`` & ``setup.exe`` to produce ``target.exe`` on
+Windows, and ``target.tar.gz`` to produce ``target.dmg`` on Mac. There's no need to produce any
+complete.mar files here like regular release bits do because we can reuse those.
+
+Chunking dummy
+^^^^^^^^^^^^^^
+
+* kinds: ``release-partner-repack-chunking-dummy``
+* platforms: Linux
+* upstreams: ``release-partner-repack``
+
+We need Linux chunked at the next step, so this dummy takes care of that for the relatively simple path
+Linux follows. One task per sub config+locale combination, the same as Windows and Mac. This doesn't need to
+exist for EME-free because we don't need to create Linux builds there.
+
+Repackage Signing
+^^^^^^^^^^^^^^^^^
+
+* kinds: ``release-partner-repack-repackage-signing`` ``release-eme-free-repack-repackage-signing``
+* platforms: All
+* upstreams:
+
+ * Mac & Windows: ``release-partner-repackage`` ``release-eme-free-repackage``
+ * Linux: ``release-partner-repack-chunking-dummy``
+
+This step GPG signs all platforms, and authenticode signs the Windows installer.
+
+Beetmover
+^^^^^^^^^
+
+* kinds: ``release-partner-repack-beetmover`` ``release-eme-free-repack-beetmover``
+* platforms: All
+* upstreams: ``release-partner-repack-repackage-signing`` ``release-eme-free-repack-repackage-signing``
+
+Moves and renames the artifacts to their public location in the `candidates directory
+<https://archive.mozilla.org/pub/firefox/candidates/>`_, or a private S3 bucket. Each task will
+have the ``project:releng:beetmover:action:push-to-partner`` scope, with public uploads having
+``project:releng:beetmover:bucket:release`` and private uploads using
+``project:releng:beetmover:bucket:partner``. The ``upload_to_candidates`` key in the partner config
+controls the second scope. There's a separate partner code path in `beetmoverscript <https://github.com/mozilla-releng/scriptworker-scripts/tree/master/beetmoverscript>`_.
+
+Beetmover checksums
+^^^^^^^^^^^^^^^^^^^
+
+* kinds: ``release-eme-free-repack-beetmover-checksums``
+* platforms: Mac & Windows
+* upstreams: ``release-eme-free-repack-repackage-beetmover``
+
+The EME-free builds should be present in our SHA256SUMS file and friends
+(`e.g. <https://archive.mozilla.org/pub/firefox/releases/61.0/SHA256SUMS>`_) so we beetmove the target.checksums from
+the beetmover tasks into the candidates directory. They get picked up by the
+``release-generate-checksums`` kind.
+
+.. _updates:
+
+Updates
+-------
+
+It's very rare to need to update a partner repack differently from the original
+release build but we retain that capability. A partner build with distribution name ``foo``,
+based on a release Firefox build, will query for an update on the ``release-cck-foo`` channel. If
+the update server `Balrog <http://mozilla-balrog.readthedocs.io/en/latest/>`_ finds no rule for
+that channel it will fall back to the ``release`` channel. The update files for the regular releases do not
+modify the ``distribution/`` directory, so the customizations are not modified.
+
+`Bug 1430254 <https://bugzilla.mozilla.org/show_bug.cgi?id=1430254>`_ is an example of an exception to this
+logic.
diff --git a/taskcluster/docs/platforms.rst b/taskcluster/docs/platforms.rst
new file mode 100644
index 0000000000..3db479e978
--- /dev/null
+++ b/taskcluster/docs/platforms.rst
@@ -0,0 +1,199 @@
+Platforms in the CI
+===================
+
+
+.. https://raw.githubusercontent.com/mozilla/treeherder/HEAD/ui/helpers/constants.js
+ awk -e /thPlatformMap = {/,/};/ constants.js |grep ""|cut -d: -f2|sed -e s/^/ /|sed -e "s/$/ ,, /g"
+ TODO:
+ * Leverage verify_docs - https://bugzilla.mozilla.org/show_bug.cgi?id=1636400
+ * Add a new column (when executed ? ie always, rarely, etc)
+ * asan reporter isn't listed for mac os x
+
+Build Platforms
+---------------
+
+.. csv-table::
+ :header: "Platform", "Owner", "Why?"
+ :widths: 40, 20, 40
+
+ Linux, ,
+ Linux DevEdition, ,
+ Linux shippable, ,
+ Linux x64, ,
+ Linux x64 addon, ,
+ Linux x64 DevEdition, ,
+ Linux x64 WebRender Shippable, Jeff Muizelaar, Build with WebRender
+ Linux x64 WebRender, Jeff Muizelaar, Build with WebRender
+ Linux x64 shippable, , "| What we ship to our users.
+ | Builds with PGO"
+ Linux x64 NoOpt, , "| Developer build - Disable optimizations, enable debug options
+ | Only runs on m-c"
+ Linux AArch64, ,
+ OS X 10.14, ,
+ OS X Cross Compiled, ,
+ OS X 10.14 shippable, ,
+ OS X Cross Compiled shippable, , What we ship to our users
+ OS X Cross Compiled NoOpt, , "| Developer build - Disable optimizations, enable debug options
+ | Only runs on m-c"
+ OS X Cross Compiled addon, ,
+ OS X Cross Compiled DevEdition, ,
+ OS X 10.14 WebRender, Jeff Muizelaar, Build with WebRender
+ OS X 10.14 Shippable, ,
+ OS X 10.14 WebRender Shippable, Jeff Muizelaar, Build with WebRender
+ OS X 10.14 DevEdition, ,
+ Windows 2012, ,
+ Windows 2012 shippable, , What we ship to our users
+ Windows 2012 addon, ,
+ Windows 2012 NoOpt, , "| Developer build - Disable optimizations, enable debug options
+ | Only runs on m-c"
+ Windows 2012 DevEdition, ,
+ Windows 2012 x64, ,
+ Windows 2012 x64 shippable, ,
+ Windows 2012 AArch64, ,
+ Windows 2012 AArch64 Shippable, ,
+ Windows 2012 AArch64 DevEdition, ,
+ Windows 2012 x64 addon, ,
+ Windows 2012 x64 NoOpt, , "| Developer build - Disable optimizations, enable debug options
+ | Only runs on m-c"
+ Windows 2012 x64 DevEdition, ,
+ Windows MinGW, Tom Ritter, "| the Tor project uses MinGW; make sure we test that for them
+ | Only runs on autoland, m-c and m-esr"
+ Android 4.0 API16+, , "| All Android jobs are for GeckoView. Fenix nightly uses m-c, Fenix beta => m-b, Fenix release => m-r and Focus uses m-r.
+ | We run these tests in the CI to make sure that GeckoView tests do not regress."
+ Android 4.0 API16+ Beta, James Willcox (Snorp), To ship/test Android 4.1 on arm v7 CPU
+ Android 4.0 API16+ Release, , To ship/test Android 4.1 on arm v7 CPU
+ Android 4.0 API16+ GeckoView multi-arch fat AAR, ,
+ Android 4.2 x86, ,
+ Android 4.2 x86 Beta, ,
+ Android 4.2 x86 Release, ,
+ Android 4.3 API16+, ,
+ Android 4.3 API16+ Beta, ,
+ Android 4.3 API16+ Release, ,
+ Android 5.0 AArch64, ,
+ Android 5.0 AArch64 Beta, ,
+ Android 5.0 AArch64 Release, ,
+ Android 5.0 x86-64, ,
+ Android 5.0 x86-64 Beta, ,
+ Android 5.0 x86-64 Release, ,
+ Android 7.0 x86, ,
+ Android 7.0 x86 Beta, ,
+ Android 7.0 x86 Release, ,
+ Android 7.0 x86-64, ,
+ Android 7.0 x86-64 WebRender, Kris Taeleman, Build and test GeckoView with WebRender
+ Android 7.0 x86-64 Beta, ,
+ Android 7.0 x86-64 Release, ,
+ Android 7.0 MotoG5, ,
+ Android 8.0 Pixel2, ,
+ Android 8.0 Pixel2 WebRender, Kris Taeleman, Build and test GeckoView with WebRender
+ Android 8.0 Pixel2 AArch64, ,
+ Android 8.0 Pixel2 AArch64 WebRender, Kris Taeleman, Build and test GeckoView with WebRender
+ Android, ,
+
+Testing configurations
+----------------------
+
+We have some platforms used to run the tests to make sure they run correctly on different versions of the operating systems.
+
+.. csv-table::
+ :header: "Platform", "Owner", "Why?"
+ :widths: 40, 20, 40
+
+ Linux 18.04 shippable, ,
+ Linux 18.04 x64, ,
+ Linux 18.04 x64 DevEdition, ,
+ Linux 18.04 x64 WebRender Shippable, Jeff Muizelaar, Build with WebRender for testing
+ Linux 18.04 x64 WebRender, Jeff Muizelaar, Build with WebRender for testing
+ Linux 18.04 x64 shippable, ,
+ Linux 18.04 x64 Stylo-Seq, ,
+ Windows 7, ,
+ Windows 7 DevEdition, ,
+ Windows 7 Shippable, ,
+ Windows 7 MinGW, Tom Ritter, "| the Tor project uses MinGW; make sure we test that for them
+ | Only runs on autoland, m-c and m-esr"
+ Windows 10 x64, ,
+ Windows 10 x64 DevEdition, ,
+ Windows 10 x64 Shippable, ,
+ Windows 10 x64 WebRender Shippable, Jeff Muizelaar, Build with WebRender for testing
+ Windows 10 x64 WebRender, Jeff Muizelaar, Build with WebRender for testing
+ Windows 10 x64 2017 Ref HW, ,
+ Windows 10 x64 MinGW, Tom Ritter, "| the Tor project uses MinGW; make sure we test that for them
+ | Only runs on autoland, m-c and m-esr"
+ Windows 10 AArch64, ,
+
+
+Quality platforms
+-----------------
+
+We have many platforms used to run various quality tools. They aren't directly focused on user quality but on code quality,
+or preventing some classes of errors (memory, threading, etc).
+
+.. csv-table::
+ :header: "Platform", "Owner", "Why?"
+ :widths: 40, 20, 40
+
+ Linux 18.04 x64 tsan, Christian Holler, Identify threading issues with ThreadSanitizer
+ Linux x64 asan, "| Christian Holler
+ | Tyson Smith (ubsan)", "| Identify memory issues with :ref:`Address Sanitizer`.
+ | Also includes the UndefinedBehaviorSanitizer"
+ Linux x64 WebRender asan, "| Christian Holler
+ | Tyson Smith (ubsan)", "| Identify memory issues with :ref:`Address Sanitizer`.
+ | Also includes the UndefinedBehaviorSanitizer"
+ Linux x64 asan reporter, Christian Holler, Generate :ref:`ASan Nightly Project <ASan Nightly>` builds
+ Linux x64 CCov, Marco Castelluccio , Collect :ref:`Code coverage` information to identify what is tested (or not)
+ Linux 18.04 x64 asan, "| Christian Holler
+ | Tyson Smith (ubsan)", "| Identify memory issues with :ref:`Address Sanitizer`.
+ | Also includes the UndefinedBehaviorSanitizer"
+ Linux 18.04 x64 WebRender asan, "| Christian Holler
+ | Tyson Smith (ubsan)", "| Identify memory issues with :ref:`Address Sanitizer`.
+ | Also includes the UndefinedBehaviorSanitizer"
+ Linux 18.04 x64 CCov, Marco Castelluccio , Collect :ref:`Code coverage` information to identify what is tested (or not)
+ OS X Cross Compiled CCov, Marco Castelluccio , Collect :ref:`Code coverage` information to identify what is tested (or not)
+ OS X 10.14 Cross Compiled CCov, Marco Castelluccio , Collect :ref:`Code coverage` information to identify what is tested (or not)
+ Windows 2012 x64 asan reporter, Christian Holler, Generate :ref:`ASan Nightly Project <ASan Nightly>` builds
+ Windows 10 x64 CCov, Marco Castelluccio , Collect :ref:`Code coverage` information to identify what is tested (or not)
+ Android 4.0 API16+ CCov, Marco Castelluccio , Collect :ref:`Code coverage` information to identify what is tested (or not)
+ Android 4.3 API16+ CCov, Marco Castelluccio , Collect :ref:`Code coverage` information to identify what is tested (or not)
+ Diffoscope, Mike Hommey, Make sure the build remains reproducible
+ Linting, "| Sylvestre Ledru
+ | Andrew Halberstadt", "| Identify :ref:`code quality` earlier
+ | Also contains some Bugzilla
+ | Run on all branches (except the Bugzilla task)"
+ Documentation, "| Sylvestre Ledru
+ | Andrew Halberstadt", "| :ref:`Documentation jobs <Managing Documentation>`
+ | integration repository plus mozilla-central"
+
+
+
+Infrastructure tasks
+--------------------
+
+The decision tasks responsible for creating the task graph.
+
+.. csv-table::
+ :header: "Task", "Owner", "Why?"
+ :widths: 40, 20, 40
+
+ Gecko Decision Task, , Define the tasks to run and their order
+ Firefox Release Tasks, ,
+ Devedition Release Tasks, ,
+ Fennec Beta Tasks, ,
+ Fennec Release Tasks, ,
+ Thunderbird Release Tasks, ,
+
+
+Others
+------
+
+.. csv-table::
+ :header: "Platform", "Owner", "Why?"
+ :widths: 40, 20, 40
+
+ Docker Images, ,
+ Fetch, ,
+ Packages, ,
+ Toolchains, ,
+ Other, ,
diff --git a/taskcluster/docs/reference.rst b/taskcluster/docs/reference.rst
new file mode 100644
index 0000000000..813a3f630a
--- /dev/null
+++ b/taskcluster/docs/reference.rst
@@ -0,0 +1,12 @@
+Reference
+=========
+
+These sections contain some reference documentation for various aspects of
+taskgraph generation.
+
+.. toctree::
+
+ kinds
+ parameters
+ attributes
+ caches
diff --git a/taskcluster/docs/release-promotion-action.rst b/taskcluster/docs/release-promotion-action.rst
new file mode 100644
index 0000000000..14c6b80036
--- /dev/null
+++ b/taskcluster/docs/release-promotion-action.rst
@@ -0,0 +1,158 @@
+Release Promotion Action
+========================
+
+The release promotion action is how Releng triggers `release promotion`_
+taskgraphs. The one action covers all release promotion needs: different
+*flavors* allow for us to trigger the different :ref:`release promotion phases`
+for each product. The input schema and release promotion flavors are defined in
+the `release promotion action`_.
+
+.. _snowman model:
+
+The snowman model
+-----------------
+
+The `release promotion action`_ allows us to chain multiple taskgraphs (aka graphs, aka task groups) together.
+Essentially, we're using `optimization`_ logic to replace task labels in the
+current taskgraph with task IDs from the previous taskgraph(s).
+
+This is the ``snowman`` model. If you request the body of
+the snowman and point it at an existing base, we only create the middle section of the snowman.
+If you request the body of the snowman and don't point it at a base, we build
+both the base and the body of the snowman from scratch.
+
+For example, let's generate a task ``t2`` that depends on ``t1``. Let's call our new taskgraph ``G``::
+
+ G
+ |
+ t1
+ |
+ t2
+
+Task ``t2`` waits on task ``t1`` to finish, and downloads some artifacts from task ``t1``.
+
+Now let's specify task groups ``G1`` and ``G2`` as previous task group IDs. If task ``t1`` is in one of them, ``t2`` will depend on that task, rather than spawning a new ``t1`` in task group ``G``::
+
+ G1 G2 G
+ | | |
+ t1 t1 |
+ \______ |
+ \|
+ t2
+
+ or
+
+ G1 G2 G
+ | | |
+ t1 t0 |
+ \________________ |
+ \|
+ t2
+
+For a more real-world example::
+
+ G
+ |
+ build
+ |
+ signing
+ |
+ l10n-repack
+ |
+ l10n-signing
+
+If we point the ``promote`` task group G at the on-push build task group ``G1``, the l10n-repack job will depend on the previously finished build and build-signing tasks::
+
+ G1 G
+ | |
+ build |
+ | |
+ signing |
+ \_________|
+ |
+ l10n-repack
+ |
+ l10n-signing
+
+We can also explicitly exclude certain tasks from being optimized out.
+We currently do this by specifying ``rebuild_kinds`` in the action; these
+are `kinds`_ that we want to explicitly rebuild in the current task group,
+even if they existed in previous task groups. We also allow for specifying a list of
+``do_not_optimize`` labels, which would be more verbose and specific than
+specifying kinds to rebuild.
+
+Release promotion action mechanics
+----------------------------------
+
+There are a number of inputs defined in the `release promotion action`_. Among these are the ``previous_graph_ids``, which is an ordered list of taskGroupIds of the task groups that we want to build our task group off of. In the :ref:`snowman model`, these define the already-built portions of the snowman.
+
+The action downloads the ``parameters.yml`` from the initial ``previous_graph_id``, which matches the decision or action taskId. (See :ref:`taskid vs taskgroupid`.) This is most likely the decision task of the revision to promote, which is generally the same revision the release promotion action is run against.
+
+.. note:: If the parameters have been changed since the build happened, *and* we explicitly want the new parameters for the release promotion action task, the first ``previous_graph_id`` should be the new revision's decision task. Then the build and other previous action task group IDs can follow, so we're still replacing the task labels with the task IDs from the original revision.
+
+The action then downloads the various ``label-to-taskid.json`` artifacts from each previous task group, and builds an ``existing_tasks`` parameter of which labels to replace with which task IDs. Each successive update to this dictionary overwrites existing keys with new task IDs, so the rightmost task group with a given label takes precedence. Any labels that match the ``do_not_optimize`` list or that belong to tasks in the ``rebuild_kinds`` list are excluded from the ``existing_tasks`` parameter.
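+
+A minimal sketch of that assembly, assuming each ``label-to-taskid.json``
+artifact is a flat ``{label: taskId}`` mapping (the real action code differs in
+its details):
+
+.. code-block:: python
+
+    def build_existing_tasks(previous_mappings, do_not_optimize, rebuild_labels):
+        existing_tasks = {}
+        for mapping in previous_mappings:   # ordered: earliest task group first
+            existing_tasks.update(mapping)  # rightmost group wins on conflicts
+        return {
+            label: task_id
+            for label, task_id in existing_tasks.items()
+            if label not in do_not_optimize and label not in rebuild_labels
+        }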
+
+Once all that happens, and we've gotten our configuration from the original parameters and our action config and inputs, we run the decision task function with our custom parameters. The `optimization`_ phase replaces any ``existing_tasks`` with the task IDs we've built from the previous task groups.
+
+Release Promotion Flavors
+-------------------------
+
+For the most part, release promotion flavors match the pattern ``phase_product``,
+e.g. ``promote_firefox``, ``push_devedition``, or ``ship_firefox``.
+
+We've added ``_rc`` suffix flavors, to deal with special RC behavior around rolling out updates using a different rate or channel.
+
+We are planning on adding ``_partners`` suffix flavors, to allow for creating partner repacks off-cycle.
+
+The various flavors are defined in the `release promotion action`_.
+
+Triggering the release promotion action via Treeherder
+------------------------------------------------------
+
+Currently, we're able to trigger this action via `Treeherder`_; we sometimes use this method for testing purposes. This is powerful, because we can modify the inputs directly, but is less production friendly, because it requires us to enter the inputs manually. At some point we may disable the ability to trigger the action via Treeherder.
+
+This requires being signed in with the right scopes. On `Release Promotion Projects`_, there's a dropdown in the top right of a given revision. Choose ``Custom Push Action``, then ``Release Promotion``. The inputs are specifiable as raw yaml on the left hand column.
+
+Release promotion action taskId and taskGroupId
+-----------------------------------------------
+
+The ``taskGroupId`` of a release promotion action task will be the same as the ``taskId`` of the decision task.
+
+The ``taskGroupId`` of a release promotion *task group* will be the same as the ``taskId`` of the release promotion action task.
+
+So:
+
+* for a given push, the decision taskId ``D`` will create the taskGroupId ``D``
+* we create a release promotion action task with the taskId ``A``. The ``A`` task will be part of the ``D`` task group, but will spawn a task group with the taskGroupId ``A``.
+
+Another way of looking at it:
+
+* If you're looking at a task ``t1`` in the action taskGroup, ``t1``'s taskGroupId is the action task's taskId. (In the above example, this would be ``A``.)
+* Then if you look at the action task's taskGroupId, that's the original decision task's taskId. (In the above example, this would be ``D``.)
+
+Testing and developing the release promotion action
+---------------------------------------------------
+
+To test the release promotion action, we can use ``./mach taskgraph test-action-callback`` to debug.
+
+The full command for a ``promote_firefox`` test might look like::
+
+ ./mach taskgraph test-action-callback \
+ --task-group-id LR-xH1ViTTi2jrI-N1Mf2A \
+ --input /src/gecko/params/promote_firefox.yml \
+ -p /src/gecko/params/maple-promote-firefox.yml \
+ release_promotion_action > ../promote.json
+
+The input file (in the above example, that would be ``/src/gecko/params/promote_firefox.yml``), contains the action inputs. The input schema is defined in the `release promotion action`_. Previous example inputs are embedded in previous promotion task group action task definitions (``task.extra.action.input``).
+
+The ``parameters.yml`` file is downloadable from a previous decision or action task.
+
+.. _release promotion: release-promotion.html
+.. _optimization: optimization.html
+.. _kinds: kinds.html
+.. _release promotion action: https://searchfox.org/mozilla-central/source/taskcluster/taskgraph/actions/release_promotion.py
+.. _Treeherder: https://treeherder.mozilla.org
+.. _Release Promotion Projects: https://searchfox.org/mozilla-central/search?q=RELEASE_PROMOTION_PROJECTS&path=taskcluster/taskgraph/util/attributes.py
+.. _releasewarrior docs: https://github.com/mozilla-releng/releasewarrior-2.0/blob/master/docs/release-promotion/desktop/howto.md#how
+.. _trigger_action.py: https://searchfox.org/build-central/source/tools/buildfarm/release/trigger_action.py#118
+.. _.taskcluster.yml: https://searchfox.org/mozilla-central/source/.taskcluster.yml
diff --git a/taskcluster/docs/release-promotion.rst b/taskcluster/docs/release-promotion.rst
new file mode 100644
index 0000000000..c0239351cc
--- /dev/null
+++ b/taskcluster/docs/release-promotion.rst
@@ -0,0 +1,54 @@
+Release Promotion
+=================
+
+Release promotion allows us to ship the same compiled binaries that we've
+already tested.
+
+In the olden days, we used to re-compile our release builds with separate
+configs, which led to release-specific bugs which weren't caught by continuous
+integration tests. This meant we required new builds at release time, which
+increased the end-to-end time for a given release significantly. Release
+promotion removes these anti-patterns.
+
+By running our continuous integration tests against our shippable builds, we
+have a higher degree of confidence at release time. By separating the build
+phase tasks (compilation, packaging, and related tests) from the promotion
+phase tasks, we can schedule each phase at their own independent cadence, as
+needed, and the end-to-end time for promotion is reduced significantly.
+
+.. _release promotion phases:
+
+Release Promotion Phases
+------------------------
+
+Currently, we have the ``build``, ``promote``, ``push``, and ``ship`` phases.
+
+The ``build`` phase creates ``shippable builds``. These optimize for correctness
+over speed, and are designed to be of shipping quality, should we decide to
+ship that revision of code. These are triggered on push on release branches.
+(We also schedule ``depend`` builds on most branches, which optimize for speed
+over correctness, so we can detect new code bustage sooner.)
+
+The ``promote`` phase localizes the shippable builds, creates any update MARs,
+and populates the candidates directories on S3. (On Android, we rebuild, because
+we haven't been successful at repacking the APK.)
+
+The ``push`` phase pushes the candidates files to the appropriate release directory
+on S3.
+
+The ``ship`` phase ships or schedules updates to users. These are often at a
+limited rollout percentage or are dependent on multiple downstream signoffs to
+fully ship.
+
+In-depth relpro guide
+---------------------
+
+.. toctree::
+
+ release-promotion-action
+ balrog
+ setting-up-an-update-server
+ partials
+ signing
+ partner-repacks
+ partner-attribution
diff --git a/taskcluster/docs/setting-up-an-update-server.rst b/taskcluster/docs/setting-up-an-update-server.rst
new file mode 100644
index 0000000000..c1ec57c3d0
--- /dev/null
+++ b/taskcluster/docs/setting-up-an-update-server.rst
@@ -0,0 +1,218 @@
+Setting Up An Update Server
+===========================
+
+The goal of this document is to provide instructions for installing a
+locally-served Firefox update.
+
+Obtaining an update MAR
+-----------------------
+
+Updates are served as MAR files. There are two common ways to obtain a
+MAR to use: download a prebuilt one, or build one yourself.
+
+Downloading a MAR
+~~~~~~~~~~~~~~~~~
+
+Prebuilt Nightly MARs can be found
+`here <https://archive.mozilla.org/pub/firefox/nightly/>`__ on
+archive.mozilla.org. Be sure that you use the one that matches your
+machine's configuration. For example, if you want the Nightly MAR from
+2019-09-17 for a 64 bit Windows machine, you probably want the MAR
+located at
+https://archive.mozilla.org/pub/firefox/nightly/2019/09/2019-09-17-09-36-29-mozilla-central/firefox-71.0a1.en-US.win64.complete.mar.
+
+Prebuilt MARs for release and beta can be found
+`here <https://archive.mozilla.org/pub/firefox/releases/>`__. Beta
+builds are those with a ``b`` in the version string. After locating the
+desired version, the MARs will be in the ``update`` directory. You want
+to use the MAR labelled ``complete``, not a partial MAR. Here is an
+example of an appropriate MAR file to use:
+https://archive.mozilla.org/pub/firefox/releases/69.0b9/update/win64/en-US/firefox-69.0b9.complete.mar.
+
+Building a MAR
+~~~~~~~~~~~~~~
+
+Building a MAR locally is more complicated. Part of the problem is that
+MARs are signed by Mozilla and so you cannot really build an "official"
+MAR yourself. This is a security measure designed to prevent anyone from
+serving malicious updates. If you want to use a locally-built MAR, the
+copy of Firefox being updated will need to be built to allow un-signed
+MARs. See :ref:`Building Firefox <Firefox Contributors' Quick Reference>`
+for more information on building Firefox locally. These changes will
+need to be made in order to use the locally built MAR:
+
+- Put this line in the mozconfig file in the root of the build directory
+ (create it if it does not exist):
+ ``ac_add_options --disable-verify-mar``
+- Several files contain a line that must be uncommented. Open them and
+ find this line:
+ ``#DEFINES['DISABLE_UPDATER_AUTHENTICODE_CHECK'] = True``. Delete the
+ ``#`` at the beginning of the line to uncomment it. These are the
+ files that must be changed:
+
+ - toolkit/components/maintenanceservice/moz.build
+ - toolkit/mozapps/update/tests/moz.build
+
+Firefox should otherwise be built normally. After building, you may want
+to copy the installation of Firefox elsewhere. If you update the
+installation without moving it, attempts at further incremental builds
+will not work properly, and a clobber will be needed when building next.
+To move the installation, first call ``./mach package``, then copy
+``<obj dir>/dist/firefox`` elsewhere. The copied directory will be your
+install directory.
+
+If you are running Windows and want the `Mozilla Maintenance
+Service <https://support.mozilla.org/en-US/kb/what-mozilla-maintenance-service>`__
+to be used, there are a few additional steps to be taken here. First,
+the maintenance service needs to be "installed". Most likely, a
+different maintenance service is already installed, probably at
+``C:\Program Files (x86)\Mozilla Maintenance Service\maintenanceservice.exe``.
+Backup that file to another location and replace it with
+``<obj dir>/dist/bin/maintenanceservice.exe``. Don't forget to restore
+the backup when you are done. Next, you will need to change the
+permissions on the Firefox install directory that you created. Both that
+directory and its parent directory should have permissions preventing
+the current user from writing to them.
+
+Now that you have a build of Firefox capable of using a locally-built
+MAR, it's time to build the MAR. First, build Firefox the way you want
+it to be after updating. If you want it to be the same before and after
+updating, this step is unnecessary and you can use the same build that
+you used to create the installation. Then run these commands,
+substituting ``<obj dir>``, ``<MAR output path>``, ``<version>`` and
+``<channel>`` appropriately:
+
+.. code:: bash
+
+ $ ./mach package
+ $ touch "<obj dir>/dist/firefox/precomplete"
+ $ MAR="<obj dir>/dist/host/bin/mar.exe" MOZ_PRODUCT_VERSION=<version> MAR_CHANNEL_ID=<channel> ./tools/update-packaging/make_full_update.sh <MAR output path> "<obj dir>/dist/firefox"
+
+For a local build, ``<channel>`` can be ``default``, and ``<version>``
+can be the value from ``browser/config/version.txt`` (or something
+arbitrarily large like ``2000.0a1``).
+
+.. container:: blockIndicator note
+
+ Note: It can be a bit tricky to get the ``make_full_update.sh``
+ script to accept paths with spaces.
+
+Serving the update
+------------------
+
+Preparing the update files
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+First, create the directory that updates will be served from and put the
+MAR file in it. Then, create a file within called ``update.xml`` with
+these contents, replacing ``<mar name>``, ``<hash>`` and ``<size>`` with
+the MAR's filename, its sha512 hash, and its file size in bytes.
+
+::
+
+ <?xml version="1.0" encoding="UTF-8"?>
+ <updates>
+ <update type="minor" displayVersion="2000.0a1" appVersion="2000.0a1" platformVersion="2000.0a1" buildID="21181002100236">
+ <patch type="complete" URL="http://127.0.0.1:8000/<mar name>" hashFunction="sha512" hashValue="<hash>" size="<size>"/>
+ </update>
+ </updates>
+
+If you've downloaded the MAR you're using, you'll find the sha512 value
+in a file called SHA512SUMS in the root of the release directory on
+archive.mozilla.org for a release or beta build. You'll have to search
+it for the file name of your MAR, since it includes the sha512 for every
+file that's part of that release. For a nightly build, you'll find a
+file with a .checksums extension adjacent to your MAR that contains that
+information. For instance, for the MAR file at
+https://archive.mozilla.org/pub/firefox/nightly/2019/09/2019-09-17-09-36-29-mozilla-central/firefox-71.0a1.en-US.win64.complete.mar,
+the file
+https://archive.mozilla.org/pub/firefox/nightly/2019/09/2019-09-17-09-36-29-mozilla-central/firefox-71.0a1.en-US.win64.checksums
+contains the sha512 for that file as well as for all the other win64
+files that are part of that nightly release.
+
+If you've built your own MAR, you can obtain its sha512 checksum by
+running the following command, which should work in Linux, macOS, or
+Windows in the MozillaBuild environment:
+
+.. code::
+
+ shasum --algorithm 512 <filename>
+
+On Windows, you can get the exact file size in bytes for your MAR by
+right clicking on it in the file explorer and selecting Properties.
+You'll find the correct size in bytes at the end of the line that begins
+"Size", **not** the one that begins "Size on disk". Be sure to remove
+the commas when you paste this number into the XML file.
+
+On macOS, you can get the exact size of your MAR by running the command:
+
+.. code::
+
+ stat -f%z <filename>
+
+Or on Linux, the same command would be:
+
+.. code::
+
+ stat --format "%s" <filename>
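+
+Alternatively, this small cross-platform Python snippet (a convenience
+sketch, not part of any official tooling) prints both values at once:
+
+.. code:: python
+
+    import hashlib, os, sys
+
+    path = sys.argv[1]
+    digest = hashlib.sha512()
+    with open(path, "rb") as f:
+        for chunk in iter(lambda: f.read(1 << 20), b""):
+            digest.update(chunk)
+    print("sha512:", digest.hexdigest())
+    print("size in bytes:", os.path.getsize(path))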
+
+Starting your update server
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Now, start an update server to serve the update files on port 8000. An
+easy way to do this is with Python. Remember to navigate to the correct
+directory before starting the server. This is the Python2 command:
+
+.. code:: bash
+
+ $ python -m SimpleHTTPServer 8000
+
+or, this is the Python3 command:
+
+.. code:: bash
+
+ $ python3 -m http.server 8000
+
+.. container:: blockIndicator note
+
+ If you aren't sure that you started the server correctly, try using a
+ web browser to navigate to ``http://127.0.0.1:8000/update.xml`` and
+ make sure that you get the XML file you created earlier.
+
+Installing the update
+---------------------
+
+You may want to start by deleting any pending updates to ensure that no
+previously found updates interfere with installing the desired update.
+You can use this command with Firefox's browser console to determine the
+update directory:
+
+.. code::
+
+ const {FileUtils} = ChromeUtils.import("resource://gre/modules/FileUtils.jsm");
+ FileUtils.getDir("UpdRootD", [], false).path
+
+Once you have determined the update directory, close Firefox, browse to
+the directory and remove the subdirectory called ``updates``.
+
+| Next, you need to change the update URL to point to the local XML
+ file. This can be done most reliably with an enterprise policy. The
+ policy file location depends on the operating system you are using.
+| Windows/Linux: ``<install dir>/distribution/policies.json``
+| macOS: ``<install dir>/Contents/Resources/distribution/policies.json``
+| Create the ``distribution`` directory, if necessary, and put this in
+ ``policies.json``:
+
+::
+
+ {
+ "policies": {
+ "AppUpdateURL": "http://127.0.0.1:8000/update.xml"
+ }
+ }
+
+Now you are ready to update! Launch Firefox out of its installation
+directory and navigate to the Update section of ``about:preferences``. You
+should see it downloading the update to the update directory. Since the
+transfer is entirely local this should finish quickly, and a "Restart to
+Update" button should appear. Click it to restart and apply the update.
diff --git a/taskcluster/docs/signing.rst b/taskcluster/docs/signing.rst
new file mode 100644
index 0000000000..5bf4b41448
--- /dev/null
+++ b/taskcluster/docs/signing.rst
@@ -0,0 +1,190 @@
+Signing
+=======
+
+Overview
+--------
+
+Our `code signing`_ happens in discrete tasks, for both performance reasons
+and to limit which machines have access to the signing servers and keys.
+
+In general, the binary-to-be-signed is generated in one task, and the request
+to sign it is in a second task. We verify the request via the `chain of trust`_,
+sign the binary, then upload the signed binary or original binary + detached
+signature as artifacts.
+
+How the Task Works
+------------------
+
+Scriptworker_ verifies the task definition and the upstream tasks until it
+determines the graph comes from a trusted tree; this is `chain of trust`_
+verification. Part of this verification is downloading and verifying the shas
+of the ``upstreamArtifacts`` in the task payload.
+
+An example signing task payload:
+
+::
+
+ {
+ "payload": {
+ "upstreamArtifacts": [{
+ "paths": ["public/build/target.dmg"],
+ "formats": ["macapp"],
+ "taskId": "abcde",
+ "taskType": "build"
+ }, {
+ "paths": ["public/build/target.tar.gz"],
+ "formats": ["autograph_gpg"],
+ "taskId": "12345",
+ "taskType": "build"
+ }]
+ }
+ }
+
+In the above example, scriptworker would download the ``target.dmg`` from task
+``abcde`` and ``target.tar.gz`` from task ``12345`` and verify their shas and
+task definitions via `chain of trust`_ verification. Then it will launch
+`signingscript`_, which requests a signing token from the signing server pool.
+
+Signingscript determines it wants to sign ``target.dmg`` with the ``macapp``
+format, and ``target.tar.gz`` with the ``autograph_gpg`` format. Each of the
+`signing formats`_ has their own behavior. After performing any format-specific
+checks or optimizations, it calls `signtool`_ to submit the file to the signing
+servers and poll them for signed output. Once it downloads all of the signed
+output files, it exits and scriptworker uploads the signed binaries.
+
+We can specify multiple paths from a single task for a given set of formats,
+and multiple formats for a given set of paths.
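+
+For example, a single hypothetical ``upstreamArtifacts`` entry combining both,
+shown here as a Python dict (the taskId and second path are made up)::
+
+    {
+        "paths": ["public/build/target.tar.gz", "public/build/ach/target.tar.gz"],
+        # gpg runs last when combined with other formats
+        "formats": ["autograph_widevine", "autograph_gpg"],
+        "taskId": "abcde",
+        "taskType": "build",
+    }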
+
+Signing kinds
+-------------
+
+We currently have multiple signing kinds. These fall into several categories:
+
+**Build internal signing**: Certain package types require the internals to be signed.
+For those package types, e.g. exe or dmg, we extract the internal binaries
+(e.g. xul.dll) and sign them. This is true for certain zipfiles, exes, and dmgs;
+we need to sign the internals before we [re]create the package. For linux
+tarballs, we don't need special packaging, so we can sign everything in this
+task. These kinds include ``build-signing``, ``shippable-l10n-signing``,
+``release-eme-free-repack-signing``, and ``release-partner-repack-signing``.
+
+**Build repackage signing**: Once we take the signed internals and package them
+(known as a ``repackage``), certain formats require a signed external package.
+If we have created an update MAR file from the signed internals, the MAR
+file will also need to be signed. These kinds include ``repackage-signing``,
+``release-eme-free-repack-repackage-signing``, and ``release-partner-repack-repackage-signing``.
+
+``release-source-signing`` and ``partials-signing`` sign the release source tarball
+and partial update MARs.
+
+**Mac signing and notarization**: For mac, we have three kinds.
+``*-notarization-part-1`` signs the app and pkg and submits them to Apple for
+notarization. ``*-notarization-poller`` polls Apple until it finds a
+successful notarization status. Finally, the ``*-signing`` task downloads the
+signed app and pkg from the ``part-1`` task and staples the notarization to
+them.
+
+We generate signed checksums at the top of the releases directories, like
+in `60.0`_. To generate these, we have the checksums signing kinds, including
+``release-generate-checksums-signing``, ``checksums-signing``, and
+``release-source-checksums-signing``.
+
+.. _signing formats:
+
+Signing formats
+---------------
+
+The known signingscript formats are listed in the fourth column of the
+`signing password files`_.
+
+The formats are specified in the ``upstreamArtifacts`` list-of-dicts.
+``autograph_gpg`` signing results in a detached ``.asc`` signature file. Because of its
+nature, we gpg-sign at the end if given multiple formats for a given set of
+files.
+
+``jar`` signing is Android apk signing. After signing, we ``zipalign`` the apk.
+This includes the ``focus-jar`` format, which is just a way to specify a different
+set of keys for the Focus app.
+
+``macapp`` signing accepts either a ``dmg`` or ``tar.gz``; it converts ``dmg``
+files to ``tar.gz`` before submitting to the signing server. The signed binary
+is a ``tar.gz``.
+
+``authenticode`` signing takes individual binaries or a zipfile. We sign the
+individual file or internals of the zipfile, skipping any already-signed files
+and a select few blocklisted files (using the `should_sign_windows`_ function).
+It returns a signed individual binary or zipfile with signed internals, depending
+on the input. This format includes ``autograph_authenticode`` and
+``autograph_authenticode_stub``.
+
+``mar`` signing signs our update files (Mozilla ARchive). ``mar_sha384`` is
+the same, but with a different hashing algorithm.
+
+``autograph_widevine`` is video-related; see the
+`widevine site`_. We sign specific files inside the package and rebuild the
+``precomplete`` file that we use for updates.
+
+Cert levels
+-----------
+
+Cert levels are how we separate signing privileges. We have the following levels:
+
+``dep`` is short for ``depend``, which is a term from the Netscape days. (This
+refers to builds that don't clobber, so they keep their dependency object files
+cached from the previous build.) These certs and keys are designed to be used
+for Try or on-push builds that we don't intend to ship. Many of these are
+self-signed and not of high security value; they're intended for testing
+purposes.
+
+``nightly`` refers to the Nightly product and channel. We use these keys for
+signing and shipping nightly builds, as well as Devedition on the beta channel.
+Because these are shipping keys, they are restricted; only a subset of branches
+can request the use of these keys.
+
+``release`` refers to our releases, off the beta, release, or esr channels.
+These are the most restricted keys.
+
+We request a certain cert level via scopes:
+``project:releng:signing:cert:dep-signing``,
+``project:releng:signing:cert:nightly-signing``, or
+``project:releng:signing:cert:release-signing``. Each signing task is required
+to have exactly one of those scopes, and only nightly- and release-enabled
+branches are able to use the latter two scopes. If a task is scheduled with one
+of those restricted scopes on a non-allowlisted branch, Chain of Trust
+verification will raise an exception.
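+
+For example, a minimal sketch of the scopes section of a dep-signing task
+(the rest of the task definition is omitted):
+
+::
+
+  {
+    "scopes": [
+      "project:releng:signing:cert:dep-signing"
+    ]
+  }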
+
+Signing scriptworker workerTypes
+--------------------------------
+
+The `linux-depsigning`_ pool handles all of the non-mac dep signing. These are
+heavily in use on try and autoland, but also on other branches. These verify
+the `chain of trust`_ artifact but not its signature, and they don't have a
+gpg key to sign their own chain of trust artifact. This is by design; the chain
+of trust should and will break if a production scriptworker is downstream from
+a depsigning worker.
+
+The `linux-signing`_ pool is the production signing pool; it handles the
+nightly- and release- signing requests. As such, it verifies the upstream
+chain of trust and all signatures, and signs its chain of trust artifact.
+
+The `linux-devsigning`_ pool is intended for signingscript and scriptworker
+development use. Because it isn't used on any Firefox-developer-facing branch,
+Mozilla Releng is able to make breaking changes on this pool without affecting
+any other team.
+
+Similarly, we have the `mac-depsigning`_ and `mac-signing`_ pools, which handle
+CI and nightly/release signing, respectively. The `mac-notarization-poller`_
+pool consists of lightweight workers that poll Apple for status.
+
+.. _60.0: https://archive.mozilla.org/pub/firefox/releases/60.0/
+.. _addonscript: https://github.com/mozilla-releng/addonscript/
+.. _code signing: https://en.wikipedia.org/wiki/Code_signing
+.. _chain of trust: https://scriptworker.readthedocs.io/en/latest/chain_of_trust.html
+.. _linux-depsigning: https://firefox-ci-tc.services.mozilla.com/provisioners/scriptworker-k8s/worker-types/gecko-t-signing
+.. _should_sign_windows: https://github.com/mozilla-releng/signingscript/blob/65cbb99ea53896fda9f4844e050a9695c762d24f/signingscript/sign.py#L369
+.. _Encrypted Media Extensions: https://hacks.mozilla.org/2014/05/reconciling-mozillas-mission-and-w3c-eme/
+.. _signing password files: https://github.com/mozilla/build-puppet/tree/feff5e12ab70f2c060b29940464e77208c7f0ef2/modules/signing_scriptworker/templates
+.. _signingscript: https://github.com/mozilla-releng/signingscript/
+.. _linux-devsigning: https://firefox-ci-tc.services.mozilla.com/provisioners/scriptworker-k8s/worker-types/gecko-t-signing-dev
+.. _linux-signing: https://firefox-ci-tc.services.mozilla.com/provisioners/scriptworker-k8s/worker-types/gecko-3-signing
+.. _mac-depsigning: https://firefox-ci-tc.services.mozilla.com/provisioners/scriptworker-prov-v1/worker-types/depsigning-mac-v1
+.. _mac-signing: https://firefox-ci-tc.services.mozilla.com/provisioners/scriptworker-prov-v1/worker-types/signing-mac-v1
+.. _mac-notarization-poller: https://firefox-ci-tc.services.mozilla.com/provisioners/scriptworker-prov-v1/worker-types/mac-notarization-poller
+.. _signtool: https://github.com/mozilla-releng/signtool
+.. _Scriptworker: https://github.com/mozilla-releng/scriptworker/
+.. _widevine site: https://www.widevine.com/wv_drm.html
diff --git a/taskcluster/docs/task-graph.rst b/taskcluster/docs/task-graph.rst
new file mode 100644
index 0000000000..04745d1288
--- /dev/null
+++ b/taskcluster/docs/task-graph.rst
@@ -0,0 +1,37 @@
+Task Graph
+==========
+
++--------------------------------------------------------------------+
+| This page is an import from MDN and the contents might be outdated |
++--------------------------------------------------------------------+
+
+After a change to the Gecko source code is pushed to version-control,
+jobs for that change appear
+on `Treeherder <https://treeherder.mozilla.org/>`__. How does this
+work?
+
+- A "decision task" is created to decide what to do with the push.
+- The decision task creates a lot of additional tasks. These tasks
+ include build and test tasks, along with lots of other kinds of tasks
+ to build docker images, build toolchains, perform analyses, check
+ syntax, and so on.
+- These tasks are arranged in a "task graph", with some tasks (e.g.,
+ tests) depending on others (builds). Once its prerequisite tasks
+ complete, a dependent task begins.
+- The result of each task is sent to
+ `TreeHerder <https://treeherder.mozilla.org>`__ where developers and
+ sheriffs can track the status of the push.
+- The outputs from each task (log files, Firefox installers, and so on)
+ appear attached to each task when it completes. These are viewable in
+ the `Task
+ Inspector <https://tools.taskcluster.net/task-inspector/>`__.
+
+All of this is controlled from within the Gecko source code, through a
+process called *task-graph generation*. This means it's easy to add a
+new job or tweak the parameters of a job in a :ref:`try
+push <Try Server>`, eventually landing
+that change on an integration branch.
+
+The details of task-graph generation are documented :ref:`in the source
+code itself <TaskCluster Task-Graph Generation>`,
+including some :ref:`quick recipes for common changes <How Tos>`.
diff --git a/taskcluster/docs/taskgraph.rst b/taskcluster/docs/taskgraph.rst
new file mode 100644
index 0000000000..a51e4b360b
--- /dev/null
+++ b/taskcluster/docs/taskgraph.rst
@@ -0,0 +1,239 @@
+========
+Overview
+========
+
+The task graph is built by linking different kinds of tasks together, pruning
+out tasks that are not required, then optimizing by replacing subgraphs with
+links to already-completed tasks.
+
+Concepts
+--------
+
+* *Task Kind* - Tasks are grouped by kind, where tasks of the same kind
+ have substantial similarities or share common processing logic. Kinds
+ are the primary means of supporting diversity, in that a developer can
+ add a new kind to do just about anything without impacting other kinds.
+
+* *Task Attributes* - Tasks have string attributes that can be used for
+ filtering. Attributes are documented in :doc:`attributes`.
+
+* *Task Labels* - Each task has a unique identifier within the graph that is
+ stable across runs of the graph generation algorithm. Labels are replaced
+ with TaskCluster TaskIds at the latest time possible, facilitating analysis
+ of graphs without distracting noise from randomly-generated taskIds.
+
+* *Optimization* - replacement of a task in a graph with an equivalent,
+ already-completed task, or a null task, avoiding repetition of work.
+
+Kinds
+-----
+
+Kinds are the focal point of this system. They provide an interface between
+the large-scale graph-generation process and the small-scale task-definition
+needs of different kinds of tasks. Each kind may implement task generation
+differently. Some kinds may generate task definitions entirely internally (for
+example, symbol-upload tasks are all alike, and very simple), while other kinds
+may do little more than parse a directory of YAML files.
+
+A ``kind.yml`` file contains data about the kind, as well as referring to a
+Python class implementing the kind in its ``implementation`` key. That
+implementation may rely on lots of code shared with other kinds, or contain a
+completely unique implementation of some functionality.
+
+The full list of pre-defined keys in this file is:
+
+``implementation``
+ Class implementing this kind, in the form ``<module-path>:<object-path>``.
+ This class should be a subclass of ``taskgraph.kind.base:Kind``.
+
+``kind-dependencies``
+ Kinds which should be loaded before this one. This is useful when the kind
+ will use the list of already-created tasks to determine which tasks to
+ create, for example adding an upload-symbols task after every build task.
+
+Any other keys are subject to interpretation by the kind implementation.
+
+The result is a segmentation of implementation so that the more esoteric
+in-tree projects can do their crazy stuff in an isolated kind without making
+the bread-and-butter build and test configuration more complicated.
+
+Dependencies
+------------
+
+Dependencies between tasks are represented as labeled edges in the task graph.
+For example, a test task must depend on the build task creating the artifact it
+tests, and this dependency edge is named 'build'. The task graph generation
+process later resolves these dependencies to specific taskIds.
+
+Dependencies are typically used to ensure that prerequisites to a task, such as
+creation of binary artifacts, are completed before that task runs. But
+dependencies can also be used to schedule follow-up work such as summarizing
+test results. In the latter case, the summarization task will "pull in" all of
+the tasks it depends on, even if those tasks might otherwise be optimized away.
+There are two ways to work around this problem.
+
+If Dependencies
+...............
+
+The ``if-dependencies`` key (a list) can be used to denote a task that should
+only run if at least one of the specified dependencies also runs.
+Dependencies specified by this key will not be "pulled in". This makes it
+suitable for things like signing builds or uploading symbols.
+
+This key is specified as a list of dependency names (e.g., ``build`` rather than
+the label of the build).
+
+Soft Dependencies
+.................
+
+To add a task depending on arbitrary tasks remaining after the optimization
+process has completed, you can use ``soft-dependencies``, as a list of
+optimized task labels. This is useful for tasks that need to perform some
+action on N other tasks, where N is not known in advance. Unlike
+``if-dependencies``, tasks
+that specify ``soft-dependencies`` will still be scheduled, even if none of the
+candidate dependencies are.
+
+
+Decision Task
+-------------
+
+The decision task is the first task created when a new graph begins. It is
+responsible for creating the rest of the task graph.
+
+The decision task for pushes is defined in-tree, in ``.taskcluster.yml``. That
+task description invokes ``mach taskgraph decision`` with some metadata about
+the push. That mach command determines the optimized task graph, then calls
+the TaskCluster API to create the tasks.
+
+Note that this mach command is *not* designed to be invoked directly by humans.
+Instead, use the mach commands described below, supplying ``parameters.yml``
+from a recent decision task. These commands allow testing everything the
+decision task does except the command-line processing and the
+``queue.createTask`` calls.
+
+Graph Generation
+----------------
+
+Graph generation, as run via ``mach taskgraph decision``, proceeds as follows:
+
+#. For all kinds, generate all tasks. The result is the "full task set".
+#. Create dependency links between tasks using kind-specific mechanisms. The
+ result is the "full task graph".
+#. Filter the target tasks (based on a series of filters, such as try syntax,
+ tree-specific specifications, etc). The result is the "target task set".
+#. Based on the full task graph, calculate the transitive closure of the target
+ task set. That is, the target tasks and all requirements of those tasks.
+ The result is the "target task graph".
+#. Optimize the target task graph using task-specific optimization methods.
+ The result is the "optimized task graph" with fewer nodes than the target
+ task graph. See :doc:`optimization`.
+#. Morph the graph. Morphs are like syntactic sugar: they keep the same meaning,
+ but express it in a lower-level way. These generally work around limitations
+ in the TaskCluster platform, such as number of dependencies or routes in
+ a task.
+#. Create tasks for all tasks in the morphed task graph.
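+
+Each intermediate result can be inspected locally by running the
+corresponding ``mach taskgraph`` subcommand, supplying a ``parameters.yml``
+downloaded from a recent decision task; for example:
+
+.. parsed-literal::
+
+    $ ./mach taskgraph tasks -p parameters.yml         # full task set
+    $ ./mach taskgraph full -p parameters.yml          # full task graph
+    $ ./mach taskgraph target -p parameters.yml        # target task set
+    $ ./mach taskgraph target-graph -p parameters.yml  # target task graph
+    $ ./mach taskgraph optimized -p parameters.yml     # optimized task graph
+    $ ./mach taskgraph morphed -p parameters.yml       # morphed task graph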
+
+Transitive Closure
+..................
+
+Transitive closure is a fancy name for this sort of operation:
+
+ * start with a set of tasks
+ * add all tasks on which any of those tasks depend
+ * repeat until nothing changes
+
+The effect is this: imagine you start with a linux32 test job and a linux64 test job.
+In the first round, each test task depends on the test docker image task, so add that image task.
+Each test also depends on a build, so add the linux32 and linux64 build tasks.
+
+Then repeat: the test docker image task is already present, as are the build
+tasks, but those build tasks depend on the build docker image task. So add
+that build docker image task. Repeat again: this time, none of the tasks in
+the set depend on a task not in the set, so nothing changes and the process is
+complete.
+
+And as you can see, the graph we've built now includes everything we wanted
+(the test jobs) plus everything required to do that (docker images, builds).
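+
+The computation itself is a small fixed-point loop. A minimal sketch,
+assuming a ``dependencies`` mapping from each task label to the set of labels
+it depends on:
+
+.. code-block:: python
+
+    def transitive_closure(target_tasks, dependencies):
+        """Expand a set of task labels to include everything they depend on."""
+        closure = set(target_tasks)
+        while True:
+            new = {
+                dep
+                for label in closure
+                for dep in dependencies[label]
+                if dep not in closure
+            }
+            if not new:
+                # Nothing changed; the set is closed under "depends on".
+                return closure
+            closure |= new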
+
+
+Action Tasks
+------------
+
+Action Tasks are tasks which help you to schedule new jobs via Treeherder's
+"Add New Jobs" feature. The Decision Task creates a YAML file named
+``action.yml`` which can be used to schedule Action Tasks after suitably replacing
+``{{decision_task_id}}`` and ``{{task_labels}}``, which correspond to the decision
+task ID of the push and a comma-separated list of task labels which need to be
+scheduled.
+
+This task invokes ``mach taskgraph action-callback``, which builds up a task
+graph of the requested tasks. This graph is optimized against the tasks the
+decision task already scheduled in the same push.
+
+So for instance, if you had already requested a build task in the ``try`` command,
+and you wish to add a test which depends on this build, the original build task
+is re-used.
+
+
+Runnable jobs
+-------------
+As part of the execution of the Gecko decision task we generate a
+``public/runnable-jobs.json.gz`` file. It contains a subset of all the data
+contained within the ``full-task-graph.json``.
+
+This file has the minimum amount of data needed by Treeherder to show all
+tasks that can be scheduled on a push.
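+
+A hypothetical way to inspect this artifact locally, assuming a decision task
+ID and the queue's usual artifact URL layout:
+
+.. code-block:: python
+
+    import gzip
+    import json
+    import urllib.request
+
+    DECISION_TASK_ID = "abcdefgh"  # hypothetical decision task id
+    url = (
+        "https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/"
+        + DECISION_TASK_ID
+        + "/artifacts/public/runnable-jobs.json.gz"
+    )
+    with urllib.request.urlopen(url) as response:
+        # The artifact is stored gzip-compressed, per its .gz suffix.
+        runnable = json.loads(gzip.decompress(response.read()))
+    print(len(runnable), "runnable tasks")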
+
+
+Task Parameterization
+---------------------
+
+A few components of tasks are only known at the very end of the decision task
+-- just before the ``queue.createTask`` call is made. These are specified
+using simple parameterized values, as follows:
+
+``{"relative-datestamp": "certain number of seconds/hours/days/years"}``
+ Objects of this form will be replaced with an offset from the current time
+ just before the ``queue.createTask`` call is made. For example, an
+ artifact expiration might be specified as ``{"relative-datestamp": "1
+ year"}``.
+
+``{"task-reference": "string containing <dep-name>"}``
+    The task definition may contain "task references" of this form. These
+    will be replaced during the optimization step, with the appropriate
+    taskId for the named dependency substituted for ``<dep-name>`` in the
+    string. Additionally, ``decision`` and ``self`` can be used as dependency
+    names to refer to the decision task and the task itself. Multiple labels
+    may be substituted in a single string, and ``<<>`` can be used to escape
+    a literal ``<``.
+
+``{"artifact-reference": "..<dep-name/artifact/name>.."}``
+ Similar to a ``task-reference``, but this substitutes a URL to the queue's
+ ``getLatestArtifact`` API method (for which a GET will redirect to the
+ artifact itself).
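+
+For example, a hypothetical task fragment combining these forms might look
+like this (the environment variable names are illustrative):
+
+::
+
+  {
+    "expires": {"relative-datestamp": "1 year"},
+    "payload": {
+      "env": {
+        "BUILD_TASK_ID": {"task-reference": "<build>"},
+        "INSTALLER_URL": {"artifact-reference": "<build/public/build/target.tar.gz>"}
+      }
+    }
+  }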
+
+.. _taskgraph-graph-config:
+
+Graph Configuration
+-------------------
+
+There are several configuration settings that pertain to the entire
+taskgraph. These are specified in :file:`config.yml` at the root of the
+taskgraph configuration (typically :file:`taskcluster/ci/`). The available
+settings are documented inline in `taskcluster/taskgraph/config.py
+<https://searchfox.org/mozilla-central/source/taskcluster/taskgraph/config.py>`_.
+
+.. _taskgraph-trust-domain:
+
+Trust Domain
+------------
+
+When publishing and signing releases, tasks verify that their definition and
+all upstream tasks come from a decision task based on a trusted tree (see
+`chain-of-trust verification <https://scriptworker.readthedocs.io/en/latest/chain_of_trust.html>`_).
+Firefox and Thunderbird share the taskgraph code, but they have separate
+taskgraph configurations and, in particular, distinct decision tasks.
+Although they use identical docker images and toolchains, in order to track the
+provenance of those artifacts when verifying the chain of trust, they use
+different index paths to cache those artifacts. The ``trust-domain`` graph
+configuration controls the base path for indexing these cached artifacts.
diff --git a/taskcluster/docs/transforms.rst b/taskcluster/docs/transforms.rst
new file mode 100644
index 0000000000..8509dbd071
--- /dev/null
+++ b/taskcluster/docs/transforms.rst
@@ -0,0 +1,215 @@
+Transforms
+==========
+
+Many task kinds generate tasks by a process of transforming job descriptions
+into task definitions. The basic operation is simple, although the sequence of
+transforms applied for a particular kind may not be!
+
+Overview
+--------
+
+To begin, a kind implementation generates a collection of items; see
+:doc:`loading`. The items are simply Python dictionaries, and describe
+"semantically" what the resulting task or tasks should do.
+
+The kind also defines a sequence of transformations. These are applied, in
+order, to each item. Early transforms might apply default values or break
+items up into smaller items (for example, chunking a test suite). Later
+transforms rewrite the items entirely, with the final result being a task
+definition.
+
+Transform Functions
+...................
+
+Each transformation looks like this:
+
+.. code-block:: python
+
+ @transforms.add
+ def transform_an_item(config, items):
+ """This transform ...""" # always a docstring!
+ for item in items:
+ # ..
+ yield item
+
+The ``config`` argument is a Python object containing useful configuration for
+the kind, and is a subclass of
+:class:`taskgraph.transforms.base.TransformConfig`, which specifies a few of
+its attributes. Kinds may subclass and add additional attributes if necessary.
+
+While most transforms yield one item for each item consumed, this is not always
+true: items that are not yielded are effectively filtered out. Yielding
+multiple items for each consumed item implements item duplication; this is how
+test chunking is accomplished, for example.
+
+The ``transforms`` object is an instance of
+:class:`taskgraph.transforms.base.TransformSequence`, which serves as a simple
+mechanism to combine a sequence of transforms into one.
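+
+Putting these pieces together, a kind's transform module typically creates a
+sequence and registers transforms on it. A minimal sketch (the default value
+set here is purely illustrative):
+
+.. code-block:: python
+
+    from taskgraph.transforms.base import TransformSequence
+
+    transforms = TransformSequence()
+
+    @transforms.add
+    def set_defaults(config, jobs):
+        """Apply default values before later transforms run."""
+        for job in jobs:
+            job.setdefault("description", "an example job")
+            yield job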
+
+Schemas
+.......
+
+The items used in transforms are validated against some simple schemas at
+various points in the transformation process. These schemas accomplish two
+things: they provide a place to add comments about the meaning of each field,
+and they enforce that the fields are actually used in the documented fashion.
+
+Keyed By
+........
+
+Several fields in the input items can be "keyed by" another value in the item.
+For example, a test description's chunks may be keyed by ``test-platform``.
+In the item, this looks like:
+
+.. code-block:: yaml
+
+ chunks:
+ by-test-platform:
+ linux64/debug: 12
+ linux64/opt: 8
+ android.*: 14
+ default: 10
+
+This is a simple but powerful way to encode business rules in the items
+provided as input to the transforms, rather than expressing those rules in the
+transforms themselves. If you are implementing a new business rule, prefer
+this mode where possible. The structure is easily resolved to a single value
+using :func:`taskgraph.util.schema.resolve_keyed_by`.
+
+Exact matches are used immediately. If no exact matches are found, each
+alternative is treated as a regular expression, matched against the whole
+value. Thus ``android.*`` would match ``android-api-16/debug``. If nothing
+matches as a regular expression, but there is a ``default`` alternative, it is
+used. Otherwise, an exception is raised and graph generation stops.
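+
+A sketch of resolving the example above, assuming the helper's in-tree
+``(item, field, item_name, **extra_values)`` signature:
+
+.. code-block:: python
+
+    from taskgraph.util.schema import resolve_keyed_by
+
+    test = {
+        "test-name": "mochitest-example",  # illustrative
+        "test-platform": "android-api-16/debug",
+        "chunks": {
+            "by-test-platform": {
+                "linux64/debug": 12,
+                "linux64/opt": 8,
+                "android.*": 14,
+                "default": 10,
+            }
+        },
+    }
+    resolve_keyed_by(test, "chunks", item_name=test["test-name"])
+    # "android.*" matched the whole test-platform value:
+    assert test["chunks"] == 14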
+
+Organization
+-------------
+
+Task creation operates broadly in a few phases, with the interfaces between
+those phases defined by schemas. The process begins with the raw data
+structures parsed from the YAML files in the kind configuration. This data
+can be processed by kind-specific transforms resulting, for test jobs, in a
+"test description".
+For non-test jobs, the next step is a "job description". These transformations
+may also "duplicate" tasks, for example to implement chunking or several
+variations of the same task.
+
+In any case, shared transforms then convert this into a "task description",
+which the task-generation transforms then convert into a task definition
+suitable for ``queue.createTask``.
+
+Test Descriptions
+-----------------
+
+Test descriptions specify how to run a unittest or talos run. They aim to
+describe this abstractly, although in many cases the unique nature of
+invocation on different platforms leaves a lot of specific behavior in the test
+description, divided by ``by-test-platform``.
+
+Test descriptions are validated to conform to the schema in
+``taskcluster/taskgraph/transforms/tests.py``. This schema is extensively
+documented and is the primary reference for anyone modifying tests.
+
+The output of ``tests.py`` is a task description. Test dependencies are
+produced in the form of a dictionary mapping dependency name to task label.
+
+Job Descriptions
+----------------
+
+A job description says what to run in the task. It is a combination of a
+``run`` section and all of the fields from a task description. The run section
+has a ``using`` property that defines how this task should be run; for example,
+``mozharness`` to run a mozharness script, or ``mach`` to run a mach command.
+The remainder of the run section is specific to the run-using implementation.
+
+The effect of a job description is to say "run this thing on this worker". The
+job description must contain enough information about the worker to identify
+the workerType and the implementation (docker-worker, generic-worker, etc.).
+Alternatively, job descriptions can specify the ``platforms`` field in
+conjunction with the ``by-platform`` key to specify multiple workerTypes and
+implementations. Any other task-description information is passed along
+verbatim, although it is augmented by the run-using implementation.
+
+The run-using implementations are all located in
+``taskcluster/taskgraph/transforms/job``, along with the schemas for their
+implementations. Those well-commented source files are the canonical
+documentation for what constitutes a job description, and should be considered
+part of the documentation.
+
+The following ``run-using`` values are available:
+
+ * ``hazard``
+ * ``mach``
+ * ``mozharness``
+ * ``mozharness-test``
+ * ``run-task``
+ * ``spidermonkey`` or ``spidermonkey-package`` or ``spidermonkey-mozjs-crate`` or ``spidermonkey-rust-bindings``
+ * ``debian-package``
+ * ``toolchain-script``
+ * ``always-optimized``
+ * ``fetch-url``
+ * ``python-test``
+
+
+Task Descriptions
+-----------------
+
+Every kind needs to create tasks, and all of those tasks have some things in
+common. They all run on one of a small set of worker implementations, each
+with its own idiosyncrasies. And they all report to TreeHerder in a similar
+way.
+
+The transforms in ``taskcluster/taskgraph/transforms/task.py`` implement
+this common functionality. They expect a "task description", and produce a
+task definition. The schema for a task description is defined at the top of
+``task.py``, with copious comments. Go forth and read it now!
+
+In general, the task-description transforms handle functionality that is common
+to all Gecko tasks. While the schema is the definitive reference, the
+functionality includes:
+
+* TreeHerder metadata
+
+* Build index routes
+
+* Information about the projects on which this task should run
+
+* Optimizations
+
+* Defaults for ``expires-after`` and ``deadline-after``, based on project
+
+* Worker configuration
+
+The parts of the task description that are specific to a worker implementation
+are isolated in a ``task_description['worker']`` object which has an
+``implementation`` property naming the worker implementation. Each worker
+implementation has its own section of the schema describing the fields it
+expects. Thus the transforms that produce a task description must be aware of
+the worker implementation to be used, but need not be aware of the details of
+its payload format.
+
+The ``task.py`` file also contains a dictionary mapping treeherder group
+symbols to group names. Feel free to add additional groups to this dictionary
+as necessary.
+
+Signing Descriptions
+--------------------
+
+Each signing kind is passed a single dependent job (from its kind dependency)
+to act on.
+
+The transforms in ``taskcluster/taskgraph/transforms/signing.py`` implement
+this common functionality. They expect a "signing description", and produce a
+task definition. The schema for a signing description is defined at the top of
+``signing.py``, with copious comments.
+
+In particular, you define a set of upstream artifact urls (that point at the
+dependent task) and can optionally provide a dependent name (defaults to build)
+for use in ``task-reference``/``artifact-reference``. You also need to provide
+the signing formats to use.
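+
+A hypothetical fragment of a signing job description (field names follow the
+schema in ``signing.py``; the values are illustrative):
+
+::
+
+  {
+    "upstream-artifacts": [{
+      "taskId": {"task-reference": "<build>"},
+      "taskType": "build",
+      "paths": ["public/build/target.tar.gz"],
+      "formats": ["autograph_gpg"]
+    }]
+  }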
+
+More Detail
+-----------
+
+The source files provide lots of additional detail, both in the code itself and
+in the comments and docstrings. For the next level of detail beyond this file,
+consult the transform source under ``taskcluster/taskgraph/transforms``.
diff --git a/taskcluster/docs/try.rst b/taskcluster/docs/try.rst
new file mode 100644
index 0000000000..c868064e11
--- /dev/null
+++ b/taskcluster/docs/try.rst
@@ -0,0 +1,153 @@
+Try
+===
+
+"Try" is a way to "try out" a proposed change safely before review, without
+officially landing it. This functionality has been around for a *long* time in
+various forms, and can sometimes show its age.
+
+Access to "push to try" is typically available to a much larger group of
+developers than those who can land changes in integration and release branches.
+Specifically, try pushes are allowed for anyone with `SCM Level`_ 1, while
+integration branches are at SCM level 3.
+
+Scheduling a Task on Try
+------------------------
+
+There are three methods for scheduling a task on try: legacy try option syntax,
+try task config, and an empty try.
+
+Try Option Syntax
+:::::::::::::::::
+
+The first, older method is a command line string called ``try syntax`` which is passed
+into the decision task via the commit message. The resulting commit is then
+pushed to the https://hg.mozilla.org/try repository. An example try syntax
+might look like:
+
+.. parsed-literal::
+
+ try: -b o -p linux64 -u mochitest-1 -t none
+
+This gets parsed by ``taskgraph.try_option_syntax:TryOptionSyntax`` and returns
+a list of matching task labels. For more information see the
+`TryServer wiki page <https://wiki.mozilla.org/Try>`_.
+
+Try Task Config
+:::::::::::::::
+
+The second, more modern method specifies exactly the tasks to run. That list
+of tasks is usually generated locally with some :doc:`local tool </tools/try/selectors/fuzzy>`
+and attached to the commit pushed to the try repository. This gives
+finer control over exactly what runs and enables growth of an
+ecosystem of tooling appropriate to varied circumstances.
+
+Implementation
+,,,,,,,,,,,,,,
+
+This method uses a checked-in file called ``try_task_config.json`` which lives
+at the root of the source dir. The JSON object in this file contains a
+``tasks`` key giving the labels of the tasks to run. For example, the
+``try_task_config.json`` file might look like:
+
+.. parsed-literal::
+
+ {
+ "version": 1,
+ "tasks": [
+ "test-windows10-64/opt-web-platform-tests-12",
+ "test-windows7-32/opt-reftest-1",
+ "test-windows7-32/opt-reftest-2",
+ "test-windows7-32/opt-reftest-3",
+ "build-linux64/debug",
+ "source-test-mozlint-eslint"
+ ]
+ }
+
+Very simply, this will run any task labels that get passed in, as well as their
+dependencies. While it is possible to manually commit this file and push to
+try, it is mainly meant to be a generation target for various :ref:`try server <Try Server>`
+choosers. For example:
+
+.. parsed-literal::
+
+ $ ./mach try fuzzy
+
+A list of all possible task labels can be obtained by running:
+
+.. parsed-literal::
+
+ $ ./mach taskgraph tasks
+
+A list of task labels relevant to a tree (defaults to mozilla-central) can be
+obtained with:
+
+.. parsed-literal::
+
+ $ ./mach taskgraph target
+
+Modifying Tasks in a Try Push
+,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
+
+It's possible to alter the definition of a task by defining additional
+configuration in ``try_task_config.json``. For example, to set an environment
+variable in all tasks, you can add:
+
+.. parsed-literal::
+
+ {
+ "version": 1,
+ "tasks": [...],
+ "env": {
+ "FOO": "bar"
+ }
+ }
+
+The allowed configs are defined in :py:obj:`taskgraph.decision.try_task_config_schema`.
+The values will be available to all transforms, so how each config applies will
+vary wildly from one context to the next. Some (such as ``env``) will affect
+all tasks in the graph. Others might only affect certain kinds of task. The
+``use-artifact-builds`` config, for instance, only applies to build tasks.
+
+Empty Try
+:::::::::
+
+If there is no try syntax or ``try_task_config.json``, the ``try_mode``
+parameter is None and no tasks are selected to run. The resulting push will
+only have a decision task, but one with an "add jobs" action that can be used
+to add the desired jobs to the try push.
+
+
+Complex Configuration
+:::::::::::::::::::::
+
+If you need more control over the build configuration
+(:doc:`staging releases </tools/try/selectors/release>`, for example),
+you can directly specify :doc:`parameters <parameters>`
+to override from the ``try_task_config.json`` like this:
+
+.. parsed-literal::
+
+ {
+ "version": 2,
+ "parameters": {
+ "optimize_target_tasks": true,
+ "release_type": "beta",
+ "target_tasks_method": "staging_release_builds"
+ }
+ }
+
+This format can express a superset of the version 1 format, as a
+version 1 configuration is equivalent to the following version 2
+config:
+
+.. parsed-literal::
+
+ {
+ "version": 2,
+ "parameters": {
+ "try_task_config": {...},
+ "try_mode": "try_task_config",
+ }
+ }
+
+.. _SCM Level: https://www.mozilla.org/en-US/about/governance/policies/commit/access-policy/
diff --git a/taskcluster/docs/using-the-mozilla-source-server.rst b/taskcluster/docs/using-the-mozilla-source-server.rst
new file mode 100644
index 0000000000..7443b50393
--- /dev/null
+++ b/taskcluster/docs/using-the-mozilla-source-server.rst
@@ -0,0 +1,51 @@
+Using The Mozilla Source Server
+===============================
+
++--------------------------------------------------------------------+
+| This page is an import from MDN and the contents might be outdated |
++--------------------------------------------------------------------+
+
+Using the Mozilla source server is now even more feature-packed. The
+nightly debug builds are now also source indexed, so that by following a
+couple of simple steps you can have the source code served to you
+for debugging without a local build.
+
+What you'll need
+----------------
+
+- `WinDbg <https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/>`__
+ or Visual Studio (Note: express editions will not work, but WinDbg is
+ a free download)
+- A nightly build that was created after April 15, 2008; go to the
+ `/pub/firefox/nightly/latest-mozilla-central/ <https://archive.mozilla.org/pub/firefox/nightly/latest-mozilla-central/>`__
+ folder and grab the installer
+
+Set up symbols
+--------------
+
+Follow the instructions for :ref:`Using the Mozilla symbol
+server <Using The Mozilla Symbol Server>`. Once
+the symbol path is set, you must enable source server support.
+
+Using the source server in WinDbg
+---------------------------------
+
+In the WinDbg command line, type ``.srcfix`` and hit enter. This enables
+source server support.
+
+.. image:: img/windbg-srcfix.png
+
+
+Using the source server in Visual Studio
+----------------------------------------
+
+Source server support does not work correctly out of the
+box in Visual Studio 2005. If you install WinDbg and copy srcsrv.dll
+from the WinDbg install dir to the Visual Studio install dir
+(replacing the existing copy), it will work.
+
+Enable source server support under Tools -> Options. Also, disable
+(uncheck) the box that says "Require source files to exactly match the
+original version".
+
+.. image:: img/enableSourceServer.png
diff --git a/taskcluster/docs/versioncontrol.rst b/taskcluster/docs/versioncontrol.rst
new file mode 100644
index 0000000000..4cc98c922e
--- /dev/null
+++ b/taskcluster/docs/versioncontrol.rst
@@ -0,0 +1,108 @@
+=====================
+Version Control in CI
+=====================
+
+Upgrading Mercurial
+===================
+
+Upgrading Mercurial in CI requires touching a handful of different
+components.
+
+Vendored robustcheckout
+-----------------------
+
+The ``robustcheckout`` Mercurial extension is used throughout CI to
+perform clones and working directory updates. The canonical home of
+the extension is in the
+https://hg.mozilla.org/hgcustom/version-control-tools repository
+at the path ``hgext/robustcheckout/__init__.py``.
+
+
+When upgrading Mercurial, the ``robustcheckout`` extension should also
+be updated to ensure it is compatible with the version of Mercurial
+being upgraded to. Typically, one simply copies the latest version
+from ``version-control-tools`` into the vendored locations.
+
+The locations are as follows:
+
+- In-tree: ``testing/mozharness/external_tools/robustcheckout.py``
+- Treescript: ``https://github.com/mozilla-releng/scriptworker-scripts/blob/master/treescript/treescript/py2/robustcheckout.py``
+- build-puppet: ``https://github.com/mozilla-releng/build-puppet/blob/master/modules/mercurial/files/robustcheckout.py``
+- ronin_puppet: ``https://github.com/mozilla-platform-ops/ronin_puppet/blob/master/modules/mercurial/files/robustcheckout.py``
+- OpenCloudConfig: ``https://github.com/mozilla-releng/OpenCloudConfig/blob/master/userdata/Configuration/FirefoxBuildResources/robustcheckout.py``
+
+
+Debian Packages for Debian Based Docker Images
+----------------------------------------------
+
+``taskcluster/ci/packages/kind.yml`` defines custom Debian packages for
+Mercurial. These are installed in various Docker images.
+
+To upgrade Mercurial, typically you just need to update the source URL
+and its hash in this file.
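+
+The relevant part of such a package definition looks roughly like this
+(illustrative; check the real entries in ``kind.yml`` for the exact layout):
+
+::
+
+    run:
+        using: debian-package
+        tarball:
+            url: https://www.mercurial-scm.org/release/mercurial-x.y.z.tar.gz
+            sha256: <sha256 of the tarball>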
+
+Non-Debian Linux Docker Images
+------------------------------
+
+The ``taskcluster/docker/recipes/install-mercurial.sh`` script is sourced
+by a handful of Docker images to install Mercurial.
+
+The script references 3 tooltool artifacts:
+
+* A Mercurial source tarball (for ``pip`` installs).
+* A ``mercurial_*_amd64.deb`` Debian package.
+* A ``mercurial-common_*_all.deb`` Debian package.
+
+The source tarball is a vanilla Mercurial source distribution. The Debian
+archives will need to be produced manually.
+
+To produce the Debian archives,
+``hg clone https://www.mercurial-scm.org/repo/hg`` and ``hg update`` to
+the tag being built. Then run ``make docker-ubuntu-xenial``. This will
+build the Mercurial Debian packages in a Docker container. It will deposit
+the produced packages in ``packages/ubuntu-xenial/``.
+
+Once all 3 files are available, copy them to the same directory and
+upload them to tooltool::
+
+ $ tooltool.py add --public mercurial-x.y.z.tar.gz mercurial*.deb
+ $ tooltool.py upload --message 'Bug XXX - Mercurial x.y.z' --authentication-file ~/.tooltoolauth
+
+.. note::
+
+ See https://wiki.mozilla.org/ReleaseEngineering/Applications/Tooltool#How_To_Upload_To_Tooltool
+ for how to use tooltool and where to obtain ``tooltool.py``.
+
+Next, copy values from the ``manifest.tt`` file into
+``taskcluster/docker/recipes/install-mercurial.sh``. See revision
+``977768c296ca`` for an example upgrade.
+
+Windows AMIs
+------------
+
+https://github.com/mozilla-releng/OpenCloudConfig defines the Windows
+environment for various Windows AMIs used by Taskcluster. Several of
+the files reference a ``mercurial-x.y.z-*.msi`` installer. These references
+will need to be updated to the Mercurial version being upgraded to.
+
+The ``robustcheckout`` extension is also vendored into this repository
+at ``userdata/Configuration/FirefoxBuildResources/robustcheckout.py``. It
+should also be updated if needed.
+
+Puppet Maintained Hosts
+-----------------------
+
+Some hosts (namely macOS machines) are managed by Puppet, which is also used
+to install Mercurial.
+
+Puppet code lives in the https://github.com/mozilla-releng/build-puppet repository.
+Relevant files are in ``modules/mercurial/``,
+``modules/packages/manifests/mozilla/mozilla-python27-mercurial-debian/``,
+and ``modules/packages/manifests/mozilla/py27_mercurial*``. A copy of
+``robustcheckout`` is also vendored at
+``modules/mercurial/files/robustcheckout.py``.
+
+.. note::
+
+ The steps to upgrade Mercurial in Puppet aren't currently captured here.
+ Someone should capture those...