Diffstat (limited to 'Documentation/process')
-rw-r--r--  Documentation/process/1.Intro.rst | 268
-rw-r--r--  Documentation/process/2.Process.rst | 497
-rw-r--r--  Documentation/process/3.Early-stage.rst | 223
-rw-r--r--  Documentation/process/4.Coding.rst | 421
-rw-r--r--  Documentation/process/5.Posting.rst | 360
-rw-r--r--  Documentation/process/6.Followthrough.rst | 219
-rw-r--r--  Documentation/process/7.AdvancedTopics.rst | 178
-rw-r--r--  Documentation/process/8.Conclusion.rst | 73
-rw-r--r--  Documentation/process/adding-syscalls.rst | 577
-rw-r--r--  Documentation/process/applying-patches.rst | 444
-rw-r--r--  Documentation/process/botching-up-ioctls.rst | 225
-rw-r--r--  Documentation/process/changes.rst | 571
-rw-r--r--  Documentation/process/clang-format.rst | 184
-rw-r--r--  Documentation/process/code-of-conduct-interpretation.rst | 158
-rw-r--r--  Documentation/process/code-of-conduct.rst | 86
-rw-r--r--  Documentation/process/coding-style.rst | 1271
-rw-r--r--  Documentation/process/contribution-maturity-model.rst | 109
-rw-r--r--  Documentation/process/deprecated.rst | 374
-rw-r--r--  Documentation/process/development-process.rst | 28
-rw-r--r--  Documentation/process/email-clients.rst | 372
-rw-r--r--  Documentation/process/embargoed-hardware-issues.rst | 319
-rw-r--r--  Documentation/process/handling-regressions.rst | 790
-rw-r--r--  Documentation/process/howto.rst | 626
-rw-r--r--  Documentation/process/index.rst | 82
-rw-r--r--  Documentation/process/kernel-docs.rst | 210
-rw-r--r--  Documentation/process/kernel-driver-statement.rst | 202
-rw-r--r--  Documentation/process/kernel-enforcement-statement.rst | 163
-rw-r--r--  Documentation/process/license-rules.rst | 485
-rw-r--r--  Documentation/process/magic-number.rst | 84
-rw-r--r--  Documentation/process/maintainer-handbooks.rst | 22
-rw-r--r--  Documentation/process/maintainer-kvm-x86.rst | 390
-rw-r--r--  Documentation/process/maintainer-netdev.rst | 454
-rw-r--r--  Documentation/process/maintainer-pgp-guide.rst | 919
-rw-r--r--  Documentation/process/maintainer-soc-clean-dts.rst | 25
-rw-r--r--  Documentation/process/maintainer-soc.rst | 177
-rw-r--r--  Documentation/process/maintainer-tip.rst | 804
-rw-r--r--  Documentation/process/maintainers.rst | 1
-rw-r--r--  Documentation/process/management-style.rst | 290
-rw-r--r--  Documentation/process/programming-language.rst | 58
-rw-r--r--  Documentation/process/researcher-guidelines.rst | 170
-rw-r--r--  Documentation/process/security-bugs.rst | 93
-rw-r--r--  Documentation/process/stable-api-nonsense.rst | 204
-rw-r--r--  Documentation/process/stable-kernel-rules.rst | 235
-rw-r--r--  Documentation/process/submit-checklist.rst | 120
-rw-r--r--  Documentation/process/submitting-patches.rst | 871
-rw-r--r--  Documentation/process/volatile-considered-harmful.rst | 125
46 files changed, 14557 insertions, 0 deletions
diff --git a/Documentation/process/1.Intro.rst b/Documentation/process/1.Intro.rst
new file mode 100644
index 000000000..c3d0270bb
--- /dev/null
+++ b/Documentation/process/1.Intro.rst
@@ -0,0 +1,268 @@
+.. _development_process_intro:
+
+Introduction
+============
+
+Executive summary
+-----------------
+
+The rest of this section covers the scope of the kernel development process
+and the kinds of frustrations that developers and their employers can
+encounter there. There are a great many reasons why kernel code should be
+merged into the official ("mainline") kernel, including automatic
+availability to users, community support in many forms, and the ability to
+influence the direction of kernel development. Code contributed to the
+Linux kernel must be made available under a GPL-compatible license.
+
+:ref:`development_process` introduces the development process, the kernel
+release cycle, and the mechanics of the merge window. The various phases in
+the patch development, review, and merging cycle are covered. There is some
+discussion of tools and mailing lists. Developers wanting to get started
+with kernel development are encouraged to track down and fix bugs as an
+initial exercise.
+
+:ref:`development_early_stage` covers early-stage project planning, with an
+emphasis on involving the development community as soon as possible.
+
+:ref:`development_coding` is about the coding process; several pitfalls which
+have been encountered by other developers are discussed. Some requirements for
+patches are covered, and there is an introduction to some of the tools
+which can help to ensure that kernel patches are correct.
+
+:ref:`development_posting` talks about the process of posting patches for
+review. To be taken seriously by the development community, patches must be
+properly formatted and described, and they must be sent to the right place.
+Following the advice in this section should help to ensure the best
+possible reception for your work.
+
+:ref:`development_followthrough` covers what happens after posting patches; the
+job is far from done at that point. Working with reviewers is a crucial part
+of the development process; this section offers a number of tips on how to
+avoid problems at this important stage. Developers are cautioned against
+assuming that the job is done when a patch is merged into the mainline.
+
+:ref:`development_advancedtopics` introduces a couple of "advanced" topics:
+managing patches with git and reviewing patches posted by others.
+
+:ref:`development_conclusion` concludes the document with pointers to sources
+for more information on kernel development.
+
+What this document is about
+---------------------------
+
+The Linux kernel, at over 8 million lines of code and well over 1000
+contributors to each release, is one of the largest and most active free
+software projects in existence. Since its humble beginning in 1991, this
+kernel has evolved into a best-of-breed operating system component which
+runs on pocket-sized digital music players, desktop PCs, the largest
+supercomputers in existence, and all types of systems in between. It is a
+robust, efficient, and scalable solution for almost any situation.
+
+With the growth of Linux has come an increase in the number of developers
+(and companies) wishing to participate in its development. Hardware
+vendors want to ensure that Linux supports their products well, making
+those products attractive to Linux users. Embedded systems vendors, who
+use Linux as a component in an integrated product, want Linux to be as
+capable and well-suited to the task at hand as possible. Distributors and
+other software vendors who base their products on Linux have a clear
+interest in the capabilities, performance, and reliability of the Linux
+kernel. And end users, too, will often wish to change Linux to make it
+better suit their needs.
+
+One of the most compelling features of Linux is that it is accessible to
+these developers; anybody with the requisite skills can improve Linux and
+influence the direction of its development. Proprietary products cannot
+offer this kind of openness, which is a characteristic of the free software
+process. But, if anything, the kernel is even more open than most other
+free software projects. A typical three-month kernel development cycle can
+involve over 1000 developers working for more than 100 different companies
+(or for no company at all).
+
+Working with the kernel development community is not especially hard. But,
+that notwithstanding, many potential contributors have experienced
+difficulties when trying to do kernel work. The kernel community has
+evolved its own distinct ways of operating which allow it to function
+smoothly (and produce a high-quality product) in an environment where
+thousands of lines of code are being changed every day. So it is not
+surprising that the Linux kernel development process differs greatly from
+proprietary development methods.
+
+The kernel's development process may come across as strange and
+intimidating to new developers, but there are good reasons and solid
+experience behind it. A developer who does not understand the kernel
+community's ways (or, worse, who tries to flout or circumvent them) will
+have a frustrating experience in store. The development community, while
+being helpful to those who are trying to learn, has little time for those
+who will not listen or who do not care about the development process.
+
+It is hoped that those who read this document will be able to avoid that
+frustrating experience. There is a lot of material here, but the effort
+involved in reading it will be repaid in short order. The development
+community is always in need of developers who will help to make the kernel
+better; the following text should help you - or those who work for you -
+join our community.
+
+Credits
+-------
+
+This document was written by Jonathan Corbet, corbet@lwn.net. It has been
+improved by comments from Johannes Berg, James Berry, Alex Chiang, Roland
+Dreier, Randy Dunlap, Jake Edge, Jiri Kosina, Matt Mackall, Arthur Marsh,
+Amanda McPherson, Andrew Morton, Andrew Price, Tsugikazu Shibata, and
+Jochen Voß.
+
+This work was supported by the Linux Foundation; thanks especially to
+Amanda McPherson, who saw the value of this effort and made it all happen.
+
+The importance of getting code into the mainline
+------------------------------------------------
+
+Some companies and developers occasionally wonder why they should bother
+learning how to work with the kernel community and get their code into the
+mainline kernel (the "mainline" being the kernel maintained by Linus
+Torvalds and used as a base by Linux distributors). In the short term,
+contributing code can look like an avoidable expense; it seems easier to
+just keep the code separate and support users directly. The truth of the
+matter is that keeping code separate ("out of tree") is a false economy.
+
+As a way of illustrating the costs of out-of-tree code, here are a few
+relevant aspects of the kernel development process; most of these will be
+discussed in greater detail later in this document. Consider:
+
+- Code which has been merged into the mainline kernel is available to all
+ Linux users. It will automatically be present on all distributions which
+ enable it. There is no need for driver disks, downloads, or the hassles
+ of supporting multiple versions of multiple distributions; it all just
+ works, for the developer and for the user. Incorporation into the
+ mainline solves a large number of distribution and support problems.
+
+- While kernel developers strive to maintain a stable interface to user
+ space, the internal kernel API is in constant flux. The lack of a stable
+ internal interface is a deliberate design decision; it allows fundamental
+ improvements to be made at any time and results in higher-quality code.
+ But one result of that policy is that any out-of-tree code requires
+ constant upkeep if it is to work with new kernels. Maintaining
+ out-of-tree code requires significant amounts of work just to keep that
+ code working.
+
+ Code which is in the mainline, instead, does not require this work as the
+ result of a simple rule requiring any developer who makes an API change
+ to also fix any code that breaks as the result of that change. So code
+ which has been merged into the mainline has significantly lower
+ maintenance costs.
+
+- Beyond that, code which is in the kernel will often be improved by other
+ developers. Surprising results can come from empowering your user
+ community and customers to improve your product.
+
+- Kernel code is subjected to review, both before and after merging into
+ the mainline. No matter how strong the original developer's skills are,
+ this review process invariably finds ways in which the code can be
+ improved. Often review finds severe bugs and security problems. This is
+ especially true for code which has been developed in a closed
+ environment; such code benefits strongly from review by outside
+ developers. Out-of-tree code is lower-quality code.
+
+- Participation in the development process is your way to influence the
+ direction of kernel development. Users who complain from the sidelines
+ are heard, but active developers have a stronger voice - and the ability
+ to implement changes which make the kernel work better for their needs.
+
+- When code is maintained separately, the possibility that a third party
+ will contribute a different implementation of a similar feature always
+ exists. Should that happen, getting your code merged will become much
+ harder - to the point of impossibility. Then you will be faced with the
+ unpleasant alternatives of either (1) maintaining a nonstandard feature
+ out of tree indefinitely, or (2) abandoning your code and migrating your
+ users over to the in-tree version.
+
+- Contribution of code is the fundamental action which makes the whole
+ process work. By contributing your code you can add new functionality to
+ the kernel and provide capabilities and examples which are of use to
+ other kernel developers. If you have developed code for Linux (or are
+ thinking about doing so), you clearly have an interest in the continued
+ success of this platform; contributing code is one of the best ways to
+ help ensure that success.
+
+All of the reasoning above applies to any out-of-tree kernel code,
+including code which is distributed in proprietary, binary-only form.
+There are, however, additional factors which should be taken into account
+before considering any sort of binary-only kernel code distribution. These
+include:
+
+- The legal issues around the distribution of proprietary kernel modules
+ are cloudy at best; quite a few kernel copyright holders believe that
+ most binary-only modules are derived products of the kernel and that, as
+ a result, their distribution is a violation of the GNU General Public
+ License (about which more will be said below). Your author is not a
+ lawyer, and nothing in this document can possibly be considered to be
+ legal advice. The true legal status of closed-source modules can only be
+ determined by the courts. But the uncertainty which haunts those modules
+ is there regardless.
+
+- Binary modules greatly increase the difficulty of debugging kernel
+ problems, to the point that most kernel developers will not even try. So
+ the distribution of binary-only modules will make it harder for your
+ users to get support from the community.
+
+- Support is also harder for distributors of binary-only modules, who must
+ provide a version of the module for every distribution and every kernel
+ version they wish to support. Dozens of builds of a single module can
+ be required to provide reasonably comprehensive coverage, and your users
+ will have to upgrade your module separately every time they upgrade their
+ kernel.
+
+- Everything that was said above about code review applies doubly to
+ closed-source code. Since this code is not available at all, it cannot
+ have been reviewed by the community and will, beyond doubt, have serious
+ problems.
+
+Makers of embedded systems, in particular, may be tempted to disregard much
+of what has been said in this section in the belief that they are shipping
+a self-contained product which uses a frozen kernel version and requires no
+more development after its release. This argument misses the value of
+widespread code review and the value of allowing your users to add
+capabilities to your product. But these products, too, have a limited
+commercial life, after which a new version must be released. At that
+point, vendors whose code is in the mainline and well maintained will be
+much better positioned to get the new product ready for market quickly.
+
+Licensing
+---------
+
+Code is contributed to the Linux kernel under a number of licenses, but all
+code must be compatible with version 2 of the GNU General Public License
+(GPLv2), which is the license covering the kernel distribution as a whole.
+In practice, that means that all code contributions are covered either by
+GPLv2 (with, optionally, language allowing distribution under later
+versions of the GPL) or the three-clause BSD license. Any contributions
+which are not covered by a compatible license will not be accepted into the
+kernel.
+
+Copyright assignments are not required (or requested) for code contributed
+to the kernel. All code merged into the mainline kernel retains its
+original ownership; as a result, the kernel now has thousands of owners.
+
+One implication of this ownership structure is that any attempt to change
+the licensing of the kernel is doomed to almost certain failure. There are
+few practical scenarios where the agreement of all copyright holders could
+be obtained (or their code removed from the kernel). So, in particular,
+there is no prospect of a migration to version 3 of the GPL in the
+foreseeable future.
+
+It is imperative that all code contributed to the kernel be legitimately
+free software. For that reason, code from anonymous (or pseudonymous)
+contributors will not be accepted. All contributors are required to "sign
+off" on their code, stating that the code can be distributed with the
+kernel under the GPL. Code which has not been licensed as free software by
+its owner, or which risks creating copyright-related problems for the
+kernel (such as code which derives from reverse-engineering efforts lacking
+proper safeguards) cannot be contributed.
+
+Questions about copyright-related issues are common on Linux development
+mailing lists. Such questions will normally receive no shortage of
+answers, but one should bear in mind that the people answering those
+questions are not lawyers and cannot provide legal advice. If you have
+legal questions relating to Linux source code, there is no substitute for
+talking with a lawyer who understands this field. Relying on answers
+obtained on technical mailing lists is a risky affair.
diff --git a/Documentation/process/2.Process.rst b/Documentation/process/2.Process.rst
new file mode 100644
index 000000000..613a01da4
--- /dev/null
+++ b/Documentation/process/2.Process.rst
@@ -0,0 +1,497 @@
+.. _development_process:
+
+How the development process works
+=================================
+
+Linux kernel development in the early 1990s was a pretty loose affair,
+with relatively small numbers of users and developers involved. With a
+user base in the millions and with some 2,000 developers involved over the
+course of one year, the kernel has since had to evolve a number of
+processes to keep development happening smoothly. A solid understanding of
+how the process works is required in order to be an effective part of it.
+
+The big picture
+---------------
+
+The kernel developers use a loosely time-based release process, with a new
+major kernel release happening every two or three months. The recent
+release history looks like this:
+
+ ====== =================
+ 5.0 March 3, 2019
+ 5.1 May 5, 2019
+ 5.2 July 7, 2019
+ 5.3 September 15, 2019
+ 5.4 November 24, 2019
+ 5.5 January 26, 2020
+ ====== =================
+
+Every 5.x release is a major kernel release with new features, internal
+API changes, and more. A typical release can contain about 13,000
+changesets with changes to several hundred thousand lines of code. 5.x is
+the leading edge of Linux kernel development; the kernel uses a
+rolling development model which is continually integrating major changes.
+
+A relatively straightforward discipline is followed with regard to the
+merging of patches for each release. At the beginning of each development
+cycle, the "merge window" is said to be open. At that time, code which is
+deemed to be sufficiently stable (and which is accepted by the development
+community) is merged into the mainline kernel. The bulk of changes for a
+new development cycle (and all of the major changes) will be merged during
+this time, at a rate approaching 1,000 changes ("patches," or "changesets")
+per day.
+
+(As an aside, it is worth noting that the changes integrated during the
+merge window do not come out of thin air; they have been collected, tested,
+and staged ahead of time. How that process works will be described in
+detail later on).
+
+The merge window lasts for approximately two weeks. At the end of this
+time, Linus Torvalds will declare that the window is closed and release the
+first of the "rc" kernels. For the kernel which is destined to be 5.6,
+for example, the release which happens at the end of the merge window will
+be called 5.6-rc1. The -rc1 release is the signal that the time to
+merge new features has passed, and that the time to stabilize the next
+kernel has begun.
+
+Over the next six to ten weeks, only patches which fix problems should be
+submitted to the mainline. On occasion a more significant change will be
+allowed, but such occasions are rare; developers who try to merge new
+features outside of the merge window tend to get an unfriendly reception.
+As a general rule, if you miss the merge window for a given feature, the
+best thing to do is to wait for the next development cycle. (An occasional
+exception is made for drivers for previously-unsupported hardware; if they
+touch no in-tree code, they cannot cause regressions and should be safe to
+add at any time).
+
+As fixes make their way into the mainline, the patch rate will slow over
+time. Linus releases new -rc kernels about once a week; a normal series
+will get up to somewhere between -rc6 and -rc9 before the kernel is
+considered to be sufficiently stable and the final release is made.
+At that point the whole process starts over again.
+
+As an example, here is how the 5.4 development cycle went (all dates in
+2019):
+
+ ============== ===============================
+ September 15 5.3 stable release
+ September 30 5.4-rc1, merge window closes
+ October 6 5.4-rc2
+ October 13 5.4-rc3
+ October 20 5.4-rc4
+ October 27 5.4-rc5
+ November 3 5.4-rc6
+ November 10 5.4-rc7
+ November 17 5.4-rc8
+ November 24 5.4 stable release
+ ============== ===============================
+
+How do the developers decide when to close the development cycle and create
+the stable release? The most significant metric used is the list of
+regressions from previous releases. No bugs are welcome, but those which
+break systems which worked in the past are considered to be especially
+serious. For this reason, patches which cause regressions are looked upon
+unfavorably and are quite likely to be reverted during the stabilization
+period.
+
+The developers' goal is to fix all known regressions before the stable
+release is made. In the real world, this kind of perfection is hard to
+achieve; there are just too many variables in a project of this size.
+There comes a point where delaying the final release just makes the problem
+worse; the pile of changes waiting for the next merge window will grow
+larger, creating even more regressions the next time around. So most 5.x
+kernels go out with a handful of known regressions though, hopefully, none
+of them are serious.
+
+Once a stable release is made, its ongoing maintenance is passed off to the
+"stable team," currently Greg Kroah-Hartman. The stable team will release
+occasional updates to the stable release using the 5.x.y numbering scheme.
+To be considered for an update release, a patch must (1) fix a significant
+bug, and (2) already be merged into the mainline for the next development
+kernel. Kernels will typically receive stable updates for a little more
+than one development cycle past their initial release. So, for example, the
+5.2 kernel's history looked like this (all dates in 2019):
+
+ ============== ===============================
+ July 7 5.2 stable release
+ July 14 5.2.1
+ July 21 5.2.2
+ July 26 5.2.3
+ July 28 5.2.4
+ July 31 5.2.5
+ ... ...
+ October 11 5.2.21
+ ============== ===============================
+
+5.2.21 was the final stable update of the 5.2 release.
+
+Some kernels are designated "long term" kernels; they will receive support
+for a longer period. Please refer to the following link for the list of active
+long term kernel versions and their maintainers:
+
+ https://www.kernel.org/category/releases.html
+
+The selection of a kernel for long-term support is purely a matter of a
+maintainer having the need and the time to maintain that release. There
+are no known plans for long-term support for any specific upcoming
+release.
+
+
+The lifecycle of a patch
+------------------------
+
+Patches do not go directly from the developer's keyboard into the mainline
+kernel. There is, instead, a somewhat involved (if somewhat informal)
+process designed to ensure that each patch is reviewed for quality and that
+each patch implements a change which is desirable to have in the mainline.
+This process can happen quickly for minor fixes, or, in the case of large
+and controversial changes, go on for years. Much developer frustration
+comes from a lack of understanding of this process or from attempts to
+circumvent it.
+
+In the hopes of reducing that frustration, this document will describe how
+a patch gets into the kernel. What follows below is an introduction which
+describes the process in a somewhat idealized way. A much more detailed
+treatment will come in later sections.
+
+The stages that a patch goes through are, generally:
+
+ - Design. This is where the real requirements for the patch - and the way
+ those requirements will be met - are laid out. Design work is often
+ done without involving the community, but it is better to do this work
+ in the open if at all possible; it can save a lot of time redesigning
+ things later.
+
+ - Early review. Patches are posted to the relevant mailing list, and
+ developers on that list reply with any comments they may have. This
+ process should turn up any major problems with a patch if all goes
+ well.
+
+ - Wider review. When the patch is getting close to ready for mainline
+ inclusion, it should be accepted by a relevant subsystem maintainer -
+ though this acceptance is not a guarantee that the patch will make it
+ all the way to the mainline. The patch will show up in the maintainer's
+ subsystem tree and into the -next trees (described below). When the
+ process works, this step leads to more extensive review of the patch and
+ the discovery of any problems resulting from the integration of this
+ patch with work being done by others.
+
+- Please note that most maintainers also have day jobs, so merging
+ your patch may not be their highest priority. If your patch is
+ getting feedback about changes that are needed, you should either
+ make those changes or justify why they should not be made. If your
+ patch has no review complaints but is not being merged by its
+ appropriate subsystem or driver maintainer, you should be persistent
+ in updating the patch to the current kernel so that it applies cleanly
+ and keep sending it for review and merging.
+
+ - Merging into the mainline. Eventually, a successful patch will be
+ merged into the mainline repository managed by Linus Torvalds. More
+ comments and/or problems may surface at this time; it is important that
+ the developer be responsive to these and fix any issues which arise.
+
+ - Stable release. The number of users potentially affected by the patch
+ is now large, so, once again, new problems may arise.
+
+ - Long-term maintenance. While it is certainly possible for a developer
+ to forget about code after merging it, that sort of behavior tends to
+ leave a poor impression in the development community. Merging code
+ eliminates some of the maintenance burden, in that others will fix
+ problems caused by API changes. But the original developer should
+ continue to take responsibility for the code if it is to remain useful
+ in the longer term.
+
+One of the largest mistakes made by kernel developers (or their employers)
+is to try to cut the process down to a single "merging into the mainline"
+step. This approach invariably leads to frustration for everybody
+involved.
+
+How patches get into the Kernel
+-------------------------------
+
+There is exactly one person who can merge patches into the mainline kernel
+repository: Linus Torvalds. But, for example, of the over 9,500 patches
+which went into the 2.6.38 kernel, only 112 (around 1.3%) were directly
+chosen by Linus himself. The kernel project has long since grown to a size
+where no single developer could possibly inspect and select every patch
+unassisted. The way the kernel developers have addressed this growth is
+through the use of a lieutenant system built around a chain of trust.
+
+The kernel code base is logically broken down into a set of subsystems:
+networking, specific architecture support, memory management, video
+devices, etc. Most subsystems have a designated maintainer, a developer
+who has overall responsibility for the code within that subsystem. These
+subsystem maintainers are the gatekeepers (in a loose way) for the portion
+of the kernel they manage; they are the ones who will (usually) accept a
+patch for inclusion into the mainline kernel.
+
+Subsystem maintainers each manage their own version of the kernel source
+tree, usually (but certainly not always) using the git source management
+tool. Tools like git (and related tools like quilt or mercurial) allow
+maintainers to track a list of patches, including authorship information
+and other metadata. At any given time, the maintainer can identify which
+patches in his or her repository are not found in the mainline.
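+
+For a maintainer who tracks Linus's tree as a remote, a quick way to see
+which local commits have not yet reached the mainline is something like
+the following (the remote and branch names here are only illustrative):
+
+::
+
+ git log --oneline linus/master..HEAD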
+
+When the merge window opens, top-level maintainers will ask Linus to "pull"
+the patches they have selected for merging from their repositories. If
+Linus agrees, the stream of patches will flow up into his repository,
+becoming part of the mainline kernel. The amount of attention that Linus
+pays to specific patches received in a pull operation varies. It is clear
+that, sometimes, he looks quite closely. But, as a general rule, Linus
+trusts the subsystem maintainers to not send bad patches upstream.
+
+Subsystem maintainers, in turn, can pull patches from other maintainers.
+For example, the networking tree is built from patches which accumulated
+first in trees dedicated to network device drivers, wireless networking,
+etc. This chain of repositories can be arbitrarily long, though it rarely
+exceeds two or three links. Since each maintainer in the chain trusts
+those managing lower-level trees, this process is known as the "chain of
+trust."
+
+Clearly, in a system like this, getting patches into the kernel depends on
+finding the right maintainer. Sending patches directly to Linus is not
+normally the right way to go.
+
+
+Next trees
+----------
+
+The chain of subsystem trees guides the flow of patches into the kernel,
+but it also raises an interesting question: what if somebody wants to look
+at all of the patches which are being prepared for the next merge window?
+Developers will be interested in what other changes are pending to see
+whether there are any conflicts to worry about; a patch which changes a
+core kernel function prototype, for example, will conflict with any other
+patches which use the older form of that function. Reviewers and testers
+want access to the changes in their integrated form before all of those
+changes land in the mainline kernel. One could pull changes from all of
+the interesting subsystem trees, but that would be a big and error-prone
+job.
+
+The answer comes in the form of -next trees, where subsystem trees are
+collected for testing and review. The older of these trees, maintained by
+Andrew Morton, is called "-mm" (for memory management, which is how it got
+started). The -mm tree integrates patches from a long list of subsystem
+trees; it also has some patches aimed at helping with debugging.
+
+Beyond that, -mm contains a significant collection of patches which have
+been selected by Andrew directly. These patches may have been posted on a
+mailing list, or they may apply to a part of the kernel for which there is
+no designated subsystem tree. As a result, -mm operates as a sort of
+subsystem tree of last resort; if there is no other obvious path for a
+patch into the mainline, it is likely to end up in -mm. Miscellaneous
+patches which accumulate in -mm will eventually either be forwarded on to
+an appropriate subsystem tree or be sent directly to Linus. In a typical
+development cycle, approximately 5-10% of the patches going into the
+mainline get there via -mm.
+
+The current -mm patch is available in the "mmotm" (-mm of the moment)
+directory at:
+
+ https://www.ozlabs.org/~akpm/mmotm/
+
+Use of the MMOTM tree is likely to be a frustrating experience, though;
+there is a definite chance that it will not even compile.
+
+The primary tree for next-cycle patch merging is linux-next, maintained by
+Stephen Rothwell. The linux-next tree is, by design, a snapshot of what
+the mainline is expected to look like after the next merge window closes.
+Linux-next trees are announced on the linux-kernel and linux-next mailing
+lists when they are assembled; they can be downloaded from:
+
+ https://www.kernel.org/pub/linux/kernel/next/
+
+Linux-next has become an integral part of the kernel development process;
+all patches merged during a given merge window should really have found
+their way into linux-next some time before the merge window opens.
+
+
+Staging trees
+-------------
+
+The kernel source tree contains the drivers/staging/ directory, where
+many sub-directories for drivers or filesystems that are on their way to
+being added to the kernel tree live. They remain in drivers/staging while
+they still need more work; once complete, they can be moved into the
+kernel proper. This is a way to keep track of drivers that aren't
+up to Linux kernel coding or quality standards, but people may want to use
+them and track development.
+
+Greg Kroah-Hartman currently maintains the staging tree. Drivers that
+still need work are sent to him, with each driver having its own
+subdirectory in drivers/staging/. Along with the driver source files, a
+TODO file should be present in the directory as well. The TODO file lists
+the pending work that the driver needs for acceptance into the kernel
+proper, as well as a list of people that should be Cc'd for any patches to
+the driver. Current rules require that drivers contributed to staging
+must, at a minimum, compile properly.
+
+Staging can be a relatively easy way to get new drivers into the mainline
+where, with luck, they will come to the attention of other developers and
+improve quickly. Entry into staging is not the end of the story, though;
+code in staging which is not seeing regular progress will eventually be
+removed. Distributors also tend to be relatively reluctant to enable
+staging drivers. So staging is, at best, a stop on the way toward becoming
+a proper mainline driver.
+
+
+Tools
+-----
+
+As can be seen from the above text, the kernel development process depends
+heavily on the ability to herd collections of patches in various
+directions. The whole thing would not work anywhere near as well as it
+does without suitably powerful tools. Tutorials on how to use these tools
+are well beyond the scope of this document, but there is space for a few
+pointers.
+
+By far the dominant source code management system used by the kernel
+community is git. Git is one of a number of distributed version control
+systems being developed in the free software community. It is well tuned
+for kernel development, in that it performs quite well when dealing with
+large repositories and large numbers of patches. It also has a reputation
+for being difficult to learn and use, though it has gotten better over
+time. Some sort of familiarity with git is almost a requirement for kernel
+developers; even if they do not use it for their own work, they'll need git
+to keep up with what other developers (and the mainline) are doing.
+
+Git is now packaged by almost all Linux distributions. There is a home
+page at:
+
+ https://git-scm.com/
+
+That page has pointers to documentation and tutorials.
+
+Among the kernel developers who do not use git, the most popular choice is
+almost certainly Mercurial:
+
+ https://www.selenic.com/mercurial/
+
+Mercurial shares many features with git, but it provides an interface which
+many find easier to use.
+
+The other tool worth knowing about is Quilt:
+
+ https://savannah.nongnu.org/projects/quilt/
+
+Quilt is a patch management system, rather than a source code management
+system. It does not track history over time; it is, instead, oriented
+toward tracking a specific set of changes against an evolving code base.
+Some major subsystem maintainers use quilt to manage patches intended to go
+upstream. For the management of certain kinds of trees (-mm, for example),
+quilt is the best tool for the job.
+
+
+Mailing lists
+-------------
+
+A great deal of Linux kernel development work is done by way of mailing
+lists. It is hard to be a fully-functioning member of the community
+without joining at least one list somewhere. But Linux mailing lists also
+represent a potential hazard to developers, who risk getting buried under a
+load of electronic mail, running afoul of the conventions used on the Linux
+lists, or both.
+
+Most kernel mailing lists are run on vger.kernel.org; the master list can
+be found at:
+
+ http://vger.kernel.org/vger-lists.html
+
+There are lists hosted elsewhere, though; a number of them are at
+redhat.com/mailman/listinfo.
+
+The core mailing list for kernel development is, of course, linux-kernel.
+This list is an intimidating place to be; volume can reach 500 messages per
+day, the amount of noise is high, the conversation can be severely
+technical, and participants are not always concerned with showing a high
+degree of politeness. But there is no other place where the kernel
+development community comes together as a whole; developers who avoid this
+list will miss important information.
+
+There are a few hints which can help with linux-kernel survival:
+
+- Have the list delivered to a separate folder, rather than your main
+ mailbox. One must be able to ignore the stream for sustained periods of
+ time.
+
+- Do not try to follow every conversation - nobody else does. It is
+ important to filter on both the topic of interest (though note that
+ long-running conversations can drift away from the original subject
+ without changing the email subject line) and the people who are
+ participating.
+
+- Do not feed the trolls. If somebody is trying to stir up an angry
+ response, ignore them.
+
+- When responding to linux-kernel email (or that on other lists) preserve
+ the Cc: header for all involved. In the absence of a strong reason (such
+ as an explicit request), you should never remove recipients. Always make
+ sure that the person you are responding to is in the Cc: list. This
+ convention also makes it unnecessary to explicitly ask to be copied on
+ replies to your postings.
+
+- Search the list archives (and the net as a whole) before asking
+ questions. Some developers can get impatient with people who clearly
+ have not done their homework.
+
+- Use interleaved ("inline") replies, which makes your response easier to
+ read. (i.e. avoid top-posting -- the practice of putting your answer above
+ the quoted text you are responding to.) For more details, see
+ :ref:`Documentation/process/submitting-patches.rst <interleaved_replies>`.
+
+- Ask on the correct mailing list. Linux-kernel may be the general meeting
+ point, but it is not the best place to find developers from all
+ subsystems.
+
+The last point - finding the correct mailing list - is a common place for
+beginning developers to go wrong. Somebody who asks a networking-related
+question on linux-kernel will almost certainly receive a polite suggestion
+to ask on the netdev list instead, as that is the list frequented by most
+networking developers. Other lists exist for the SCSI, video4linux, IDE,
+filesystem, etc. subsystems. The best place to look for mailing lists is
+in the MAINTAINERS file packaged with the kernel source.
+
+
+Getting started with Kernel development
+---------------------------------------
+
+Questions about how to get started with the kernel development process are
+common - from both individuals and companies. Equally common are missteps
+which make the beginning of the relationship harder than it has to be.
+
+Companies often look to hire well-known developers to get a development
+group started. This can, in fact, be an effective technique. But it also
+tends to be expensive and does not do much to grow the pool of experienced
+kernel developers. It is possible to bring in-house developers up to speed
+on Linux kernel development, given the investment of a bit of time. Taking
+this time can endow an employer with a group of developers who understand
+the kernel and the company both, and who can help to train others as well.
+Over the medium term, this is often the more profitable approach.
+
+Individual developers are often, understandably, at a loss for a place to
+start. Beginning with a large project can be intimidating; one often wants
+to test the waters with something smaller first. This is the point where
+some developers jump into the creation of patches fixing spelling errors or
+minor coding style issues. Unfortunately, such patches create a level of
+noise which is distracting for the development community as a whole, so,
+increasingly, they are looked down upon. New developers wishing to
+introduce themselves to the community will not get the sort of reception
+they wish for by these means.
+
+Andrew Morton gives this advice for aspiring kernel developers
+
+::
+
+ The #1 project for all kernel beginners should surely be "make sure
+ that the kernel runs perfectly at all times on all machines which
+ you can lay your hands on". Usually the way to do this is to work
+ with others on getting things fixed up (this can require
+ persistence!) but that's fine - it's a part of kernel development.
+
+(https://lwn.net/Articles/283982/).
+
+In the absence of obvious problems to fix, developers are advised to look
+at the current lists of regressions and open bugs in general. There is
+never any shortage of issues in need of fixing; by addressing these issues,
+developers will gain experience with the process while, at the same time,
+building respect with the rest of the development community.
diff --git a/Documentation/process/3.Early-stage.rst b/Documentation/process/3.Early-stage.rst
new file mode 100644
index 000000000..894a92004
--- /dev/null
+++ b/Documentation/process/3.Early-stage.rst
@@ -0,0 +1,223 @@
+.. _development_early_stage:
+
+Early-stage planning
+====================
+
+When contemplating a Linux kernel development project, it can be tempting
+to jump right in and start coding. As with any significant project,
+though, much of the groundwork for success is best laid before the first
+line of code is written. Some time spent in early planning and
+communication can save far more time later on.
+
+
+Specifying the problem
+----------------------
+
+Like any engineering project, a successful kernel enhancement starts with a
+clear description of the problem to be solved. In some cases, this step is
+easy: when a driver is needed for a specific piece of hardware, for
+example. In others, though, it is tempting to confuse the real problem
+with the proposed solution, and that can lead to difficulties.
+
+Consider an example: some years ago, developers working with Linux audio
+sought a way to run applications without dropouts or other artifacts caused
+by excessive latency in the system. The solution they arrived at was a
+kernel module intended to hook into the Linux Security Module (LSM)
+framework; this module could be configured to give specific applications
+access to the realtime scheduler. This module was implemented and sent to
+the linux-kernel mailing list, where it immediately ran into problems.
+
+To the audio developers, this security module was sufficient to solve their
+immediate problem. To the wider kernel community, though, it was seen as a
+misuse of the LSM framework (which is not intended to confer privileges
+onto processes which they would not otherwise have) and a risk to system
+stability. Their preferred solutions involved realtime scheduling access
+via the rlimit mechanism for the short term, and ongoing latency reduction
+work in the long term.
+
+The audio community, however, could not see past the particular solution
+they had implemented; they were unwilling to accept alternatives. The
+resulting disagreement left those developers feeling disillusioned with the
+entire kernel development process; one of them went back to an audio list
+and posted this:
+
+ There are a number of very good Linux kernel developers, but they
+ tend to get outshouted by a large crowd of arrogant fools. Trying
+ to communicate user requirements to these people is a waste of
+ time. They are much too "intelligent" to listen to lesser mortals.
+
+(https://lwn.net/Articles/131776/).
+
+The reality of the situation was different; the kernel developers were far
+more concerned about system stability, long-term maintenance, and finding
+the right solution to the problem than they were with a specific module.
+The moral of the story is to focus on the problem - not a specific solution
+- and to discuss it with the development community before investing in the
+creation of a body of code.
+
+So, when contemplating a kernel development project, one should obtain
+answers to a short set of questions:
+
+ - What, exactly, is the problem which needs to be solved?
+
+ - Who are the users affected by this problem? Which use cases should the
+ solution address?
+
+ - How does the kernel fall short in addressing that problem now?
+
+Only then does it make sense to start considering possible solutions.
+
+
+Early discussion
+----------------
+
+When planning a kernel development project, it makes great sense to hold
+discussions with the community before launching into implementation. Early
+communication can save time and trouble in a number of ways:
+
+ - It may well be that the problem is addressed by the kernel in ways which
+ you have not understood. The Linux kernel is large and has a number of
+ features and capabilities which are not immediately obvious. Not all
+ kernel capabilities are documented as well as one might like, and it is
+ easy to miss things. Your author has seen the posting of a complete
+ driver which duplicated an existing driver that the new author had been
+ unaware of. Code which reinvents existing wheels is not only wasteful;
+ it will also not be accepted into the mainline kernel.
+
+ - There may be elements of the proposed solution which will not be
+ acceptable for mainline merging. It is better to find out about
+ problems like this before writing the code.
+
+ - It's entirely possible that other developers have thought about the
+ problem; they may have ideas for a better solution, and may be willing
+ to help in the creation of that solution.
+
+Years of experience with the kernel development community have taught a
+clear lesson: kernel code which is designed and developed behind closed
+doors invariably has problems which are only revealed when the code is
+released into the community. Sometimes these problems are severe,
+requiring months or years of effort before the code can be brought up to
+the kernel community's standards. Some examples include:
+
+ - The Devicescape network stack was designed and implemented for
+ single-processor systems. It could not be merged into the mainline
+ until it was made suitable for multiprocessor systems. Retrofitting
+ locking and such into code is a difficult task; as a result, the merging
+ of this code (now called mac80211) was delayed for over a year.
+
+ - The Reiser4 filesystem included a number of capabilities which, in the
+ core kernel developers' opinion, should have been implemented in the
+ virtual filesystem layer instead. It also included features which could
+ not easily be implemented without exposing the system to user-caused
+ deadlocks. The late revelation of these problems - and refusal to
+ address some of them - has caused Reiser4 to stay out of the mainline
+ kernel.
+
+ - The AppArmor security module made use of internal virtual filesystem
+ data structures in ways which were considered to be unsafe and
+ unreliable. This concern (among others) kept AppArmor out of the
+ mainline for years.
+
+In each of these cases, a great deal of pain and extra work could have been
+avoided with some early discussion with the kernel developers.
+
+
+Who do you talk to?
+-------------------
+
+When developers decide to take their plans public, the next question will
+be: where do we start? The answer is to find the right mailing list(s) and
+the right maintainer. For mailing lists, the best approach is to look in
+the MAINTAINERS file for a relevant place to post. If there is a suitable
+subsystem list, posting there is often preferable to posting on
+linux-kernel; you are more likely to reach developers with expertise in the
+relevant subsystem and the environment may be more supportive.
+
+Finding maintainers can be a bit harder. Again, the MAINTAINERS file is
+the place to start. That file tends to not always be up to date, though,
+and not all subsystems are represented there. The person listed in the
+MAINTAINERS file may, in fact, not be the person who is actually acting in
+that role currently. So, when there is doubt about who to contact, a
+useful trick is to use git (and "git log" in particular) to see who is
+currently active within the subsystem of interest. Look at who is writing
+patches, and who, if anybody, is attaching Signed-off-by lines to those
+patches. Those are the people who will be best placed to help with a new
+development project.
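+
+For example, assuming interest in a particular directory (the path below
+is only a placeholder), the first command below lists the recent commits
+there, Signed-off-by lines included, while the second summarizes the most
+active authors:
+
+::
+
+ git log --since="1 year ago" -- drivers/net/wireless/
+ git shortlog -sne --since="1 year ago" -- drivers/net/wireless/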
+
+The task of finding the right maintainer is sometimes challenging enough
+that the kernel developers have added a script to ease the process:
+
+::
+
+ .../scripts/get_maintainer.pl
+
+This script will return the current maintainer(s) for a given file or
+directory when given the "-f" option. If passed a patch on the
+command line, it will list the maintainers who should probably receive
+copies of the patch. This is the preferred way (unlike the "-f" option) to get the
+list of people to Cc for your patches. There are a number of options
+regulating how hard get_maintainer.pl will search for maintainers; please be
+careful about using the more aggressive options as you may end up including
+developers who have no real interest in the code you are modifying.
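+
+As an illustration (the file and patch names here are placeholders only),
+the two modes of operation look like this:
+
+::
+
+ ./scripts/get_maintainer.pl -f drivers/char/mem.c
+ ./scripts/get_maintainer.pl 0001-some-fix.patch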
+
+If all else fails, talking to Andrew Morton can be an effective way to
+track down a maintainer for a specific piece of code.
+
+
+When to post?
+-------------
+
+If possible, posting your plans during the early stages can only be
+helpful. Describe the problem being solved and any plans that have been
+made on how the implementation will be done. Any information you can
+provide can help the development community provide useful input on the
+project.
+
+One discouraging thing which can happen at this stage is not a hostile
+reaction, but, instead, little or no reaction at all. The sad truth of the
+matter is (1) kernel developers tend to be busy, (2) there is no shortage
+of people with grand plans and little code (or even prospect of code) to
+back them up, and (3) nobody is obligated to review or comment on ideas
+posted by others. Beyond that, high-level designs often hide problems
+which are only revealed when somebody actually tries to implement those
+designs; for that reason, kernel developers would rather see the code.
+
+If a request-for-comments posting yields little in the way of comments, do
+not assume that it means there is no interest in the project.
+Unfortunately, you also cannot assume that there are no problems with your
+idea. The best thing to do in this situation is to proceed, keeping the
+community informed as you go.
+
+
+Getting official buy-in
+-----------------------
+
+If your work is being done in a corporate environment - as most Linux
+kernel work is - you must, obviously, have permission from suitably
+empowered managers before you can post your company's plans or code to a
+public mailing list. The posting of code which has not been cleared for
+release under a GPL-compatible license can be especially problematic; the
+sooner that a company's management and legal staff can agree on the posting
+of a kernel development project, the better off everybody involved will be.
+
+Some readers may be thinking at this point that their kernel work is
+intended to support a product which does not yet have an officially
+acknowledged existence. Revealing their employer's plans on a public
+mailing list may not be a viable option. In cases like this, it is worth
+considering whether the secrecy is really necessary; there is often no real
+need to keep development plans behind closed doors.
+
+That said, there are also cases where a company legitimately cannot
+disclose its plans early in the development process. Companies with
+experienced kernel developers may choose to proceed in an open-loop manner
+on the assumption that they will be able to avoid serious integration
+problems later. For companies without that sort of in-house expertise, the
+best option is often to hire an outside developer to review the plans under
+a non-disclosure agreement. The Linux Foundation operates an NDA program
+designed to help with this sort of situation; more information can be found
+at:
+
+ https://www.linuxfoundation.org/nda/
+
+This kind of review is often enough to avoid serious problems later on
+without requiring public disclosure of the project.
diff --git a/Documentation/process/4.Coding.rst b/Documentation/process/4.Coding.rst
new file mode 100644
index 000000000..1f0d81f44
--- /dev/null
+++ b/Documentation/process/4.Coding.rst
@@ -0,0 +1,421 @@
+.. _development_coding:
+
+Getting the code right
+======================
+
+While there is much to be said for a solid and community-oriented design
+process, the proof of any kernel development project is in the resulting
+code. It is the code which will be examined by other developers and merged
+(or not) into the mainline tree. So it is the quality of this code which
+will determine the ultimate success of the project.
+
+This section will examine the coding process. We'll start with a look at a
+number of ways in which kernel developers can go wrong. Then the focus
+will shift toward doing things right and the tools which can help in that
+quest.
+
+
+Pitfalls
+---------
+
+Coding style
+************
+
+The kernel has long had a standard coding style, described in
+:ref:`Documentation/process/coding-style.rst <codingstyle>`. For much of
+that time, the policies described in that file were taken as being, at most,
+advisory. As a result, there is a substantial amount of code in the kernel
+which does not meet the coding style guidelines. The presence of that code
+leads to two independent hazards for kernel developers.
+
+The first of these is to believe that the kernel coding standards do not
+matter and are not enforced. The truth of the matter is that adding new
+code to the kernel is very difficult if that code is not coded according to
+the standard; many developers will request that the code be reformatted
+before they will even review it. A code base as large as the kernel
+requires some uniformity of code to make it possible for developers to
+quickly understand any part of it. So there is no longer room for
+strangely-formatted code.
+
+Occasionally, the kernel's coding style will run into conflict with an
+employer's mandated style. In such cases, the kernel's style will have to
+win before the code can be merged. Putting code into the kernel means
+giving up a degree of control in a number of ways - including control over
+how the code is formatted.
+
+The other trap is to assume that code which is already in the kernel is
+urgently in need of coding style fixes. Developers may start to generate
+reformatting patches as a way of gaining familiarity with the process, or
+as a way of getting their name into the kernel changelogs - or both. But
+pure coding style fixes are seen as noise by the development community;
+they tend to get a chilly reception. So this type of patch is best
+avoided. It is natural to fix the style of a piece of code while working
+on it for other reasons, but coding style changes should not be made for
+their own sake.
+
+The coding style document also should not be read as an absolute law which
+can never be transgressed. If there is a good reason to go against the
+style (a line which becomes far less readable if split to fit within the
+80-column limit, for example), just do it.
+
+Note that you can also use the ``clang-format`` tool to help you with
+these rules, to quickly re-format parts of your code automatically,
+and to review full files in order to spot coding style mistakes,
+typos and possible improvements. It is also handy for sorting ``#includes``,
+for aligning variables/macros, for reflowing text and other similar tasks.
+See the file :ref:`Documentation/process/clang-format.rst <clangformat>`
+for more details.
+
+
+Abstraction layers
+******************
+
+Computer Science professors teach students to make extensive use of
+abstraction layers in the name of flexibility and information hiding.
+Certainly the kernel makes extensive use of abstraction; no project
+involving several million lines of code could do otherwise and survive.
+But experience has shown that excessive or premature abstraction can be
+just as harmful as premature optimization. Abstraction should be used to
+the level required and no further.
+
+At a simple level, consider a function which has an argument which is
+always passed as zero by all callers. One could retain that argument just
+in case somebody eventually needs to use the extra flexibility that it
+provides. By that time, though, chances are good that the code which
+implements this extra argument has been broken in some subtle way which was
+never noticed - because it has never been used. Or, when the need for
+extra flexibility arises, it does not do so in a way which matches the
+programmer's early expectation. Kernel developers will routinely submit
+patches to remove unused arguments; they should, in general, not be added
+in the first place.
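+
+As a purely hypothetical sketch of that situation (the names here are
+invented for illustration), consider a helper whose ``flags`` argument is
+passed as zero by every caller::
+
+  /* Every caller passes 0 for "flags", which is never examined. */
+  int wibble_setup(struct wibble *w, unsigned int flags);
+
+  /* Preferred: drop the unused argument until a real need appears. */
+  int wibble_setup(struct wibble *w);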
+
+Abstraction layers which hide access to hardware - often to allow the bulk
+of a driver to be used with multiple operating systems - are especially
+frowned upon. Such layers obscure the code and may impose a performance
+penalty; they do not belong in the Linux kernel.
+
+On the other hand, if you find yourself copying significant amounts of code
+from another kernel subsystem, it is time to ask whether it would, in fact,
+make sense to pull out some of that code into a separate library or to
+implement that functionality at a higher level. There is no value in
+replicating the same code throughout the kernel.
+
+
+#ifdef and preprocessor use in general
+**************************************
+
+The C preprocessor seems to present a powerful temptation to some C
+programmers, who see it as a way to efficiently encode a great deal of
+flexibility into a source file. But the preprocessor is not C, and heavy
+use of it results in code which is much harder for others to read and
+harder for the compiler to check for correctness. Heavy preprocessor use
+is almost always a sign of code which needs some cleanup work.
+
+Conditional compilation with #ifdef is, indeed, a powerful feature, and it
+is used within the kernel. But there is little desire to see code which is
+sprinkled liberally with #ifdef blocks. As a general rule, #ifdef use
+should be confined to header files whenever possible.
+Conditionally-compiled code can be confined to functions which, if the code
+is not to be present, simply become empty. The compiler will then quietly
+optimize out the call to the empty function. The result is far cleaner
+code which is easier to follow.
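+
+A minimal sketch of that pattern, using an invented ``CONFIG_WIBBLE``
+option and function names, might live in a header file::
+
+  #ifdef CONFIG_WIBBLE
+  void wibble_init(struct wibble_dev *dev);
+  #else
+  static inline void wibble_init(struct wibble_dev *dev) { }
+  #endif
+
+Callers can then invoke wibble_init() unconditionally; when the option is
+disabled, the empty inline version costs nothing at run time.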
+
+C preprocessor macros present a number of hazards, including possible
+multiple evaluation of expressions with side effects and no type safety.
+If you are tempted to define a macro, consider creating an inline function
+instead. The code which results will be the same, but inline functions are
+easier to read, do not evaluate their arguments multiple times, and allow
+the compiler to perform type checking on the arguments and return value.
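+
+As an illustration (not taken from any real kernel header), compare a
+macro which evaluates its argument twice with the equivalent inline
+function::
+
+  /* Evaluates "x" twice - dangerous if x has side effects; no type checks. */
+  #define WIBBLE_SQUARE(x) ((x) * (x))
+
+  /* Evaluates its argument once and is checked by the compiler. */
+  static inline long wibble_square(long x)
+  {
+          return x * x;
+  }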
+
+
+Inline functions
+****************
+
+Inline functions present a hazard of their own, though. Programmers can
+become enamored of the perceived efficiency inherent in avoiding a function
+call and fill a source file with inline functions. Those functions,
+however, can actually reduce performance. Since their code is replicated
+at each call site, they end up bloating the size of the compiled kernel.
+That, in turn, creates pressure on the processor's memory caches, which can
+slow execution dramatically. Inline functions, as a rule, should be quite
+small and relatively rare. The cost of a function call, after all, is not
+that high; the creation of large numbers of inline functions is a classic
+example of premature optimization.
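+
+A hypothetical example of the sort of function which remains a reasonable
+candidate for inlining - a tiny accessor whose body is no larger than a
+function call would be::
+
+  static inline bool wibble_is_ready(const struct wibble_dev *dev)
+  {
+          return dev->state == WIBBLE_READY;
+  }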
+
+In general, kernel programmers ignore cache effects at their peril. The
+classic time/space tradeoff taught in beginning data structures classes
+often does not apply to contemporary hardware. Space *is* time, in that a
+larger program will run slower than one which is more compact.
+
+More recent compilers take an increasingly active role in deciding whether
+a given function should actually be inlined or not. So the liberal
+placement of "inline" keywords may not just be excessive; it could also be
+irrelevant.
+
+
+Locking
+*******
+
+In May, 2006, the "Devicescape" networking stack was, with great
+fanfare, released under the GPL and made available for inclusion in the
+mainline kernel. This donation was welcome news; support for wireless
+networking in Linux was considered substandard at best, and the Devicescape
+stack offered the promise of fixing that situation. Yet, this code did not
+actually make it into the mainline until June, 2007 (2.6.22). What
+happened?
+
+This code showed a number of signs of having been developed behind
+corporate doors. But one large problem in particular was that it was not
+designed to work on multiprocessor systems. Before this networking stack
+(now called mac80211) could be merged, a locking scheme needed to be
+retrofitted onto it.
+
+Once upon a time, Linux kernel code could be developed without thinking
+about the concurrency issues presented by multiprocessor systems. Now,
+however, this document is being written on a dual-core laptop. Even on
+single-processor systems, work being done to improve responsiveness will
+raise the level of concurrency within the kernel. The days when kernel
+code could be written without thinking about locking are long past.
+
+Any resource (data structures, hardware registers, etc.) which could be
+accessed concurrently by more than one thread must be protected by a lock.
+New code should be written with this requirement in mind; retrofitting
+locking after the fact is a rather more difficult task. Kernel developers
+should take the time to understand the available locking primitives well
+enough to pick the right tool for the job. Code which shows a lack of
+attention to concurrency will have a difficult path into the mainline.
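+
+A minimal sketch, with invented names, of data kept next to the lock which
+protects it::
+
+  struct wibble_dev {
+          spinlock_t lock;        /* protects entries and nr_entries */
+          struct list_head entries;
+          unsigned int nr_entries;
+  };
+
+  static void wibble_add(struct wibble_dev *dev, struct wibble_entry *e)
+  {
+          spin_lock(&dev->lock);
+          list_add(&e->node, &dev->entries);
+          dev->nr_entries++;
+          spin_unlock(&dev->lock);
+  }
+
+Whether a spinlock, a mutex, RCU, or something else is the right primitive
+depends on how the data is accessed; the point is that the protection is
+designed in from the start rather than retrofitted later.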
+
+
+Regressions
+***********
+
+One final hazard worth mentioning is this: it can be tempting to make a
+change (which may bring big improvements) which causes something to break
+for existing users. This kind of change is called a "regression," and
+regressions have become most unwelcome in the mainline kernel. With few
+exceptions, changes which cause regressions will be backed out if the
+regression cannot be fixed in a timely manner. Far better to avoid the
+regression in the first place.
+
+It is often argued that a regression can be justified if it causes things
+to work for more people than it creates problems for. Why not make a
+change if it brings new functionality to ten systems for each one it
+breaks? The best answer to this question was expressed by Linus in July,
+2007:
+
+::
+
+ So we don't fix bugs by introducing new problems. That way lies
+ madness, and nobody ever knows if you actually make any real
+ progress at all. Is it two steps forwards, one step back, or one
+ step forward and two steps back?
+
+(https://lwn.net/Articles/243460/).
+
+An especially unwelcome type of regression is any sort of change to the
+user-space ABI. Once an interface has been exported to user space, it must
+be supported indefinitely. This fact makes the creation of user-space
+interfaces particularly challenging: since they cannot be changed in
+incompatible ways, they must be done right the first time. For this
+reason, a great deal of thought, clear documentation, and wide review for
+user-space interfaces is always required.
+
+
+Code checking tools
+-------------------
+
+For now, at least, the writing of error-free code remains an ideal that few
+of us can reach. What we can hope to do, though, is to catch and fix as
+many of those errors as possible before our code goes into the mainline
+kernel. To that end, the kernel developers have put together an impressive
+array of tools which can catch a wide variety of obscure problems in an
+automated way. Any problem caught by the computer is a problem which will
+not afflict a user later on, so it stands to reason that the automated
+tools should be used whenever possible.
+
+The first step is simply to heed the warnings produced by the compiler.
+Contemporary versions of gcc can detect (and warn about) a large number of
+potential errors. Quite often, these warnings point to real problems.
+Code submitted for review should, as a rule, not produce any compiler
+warnings. When silencing warnings, take care to understand the real cause
+and try to avoid "fixes" which make the warning go away without addressing
+its cause.
+
+Note that not all compiler warnings are enabled by default. Build the
+kernel with "make KCFLAGS=-W" to get the full set.
+
+The kernel provides several configuration options which turn on debugging
+features; most of these are found in the "kernel hacking" submenu. Several
+of these options should be turned on for any kernel used for development or
+testing purposes. In particular, you should turn on:
+
+ - FRAME_WARN to get warnings for stack frames larger than a given amount.
+ The output generated can be verbose, but one need not worry about
+ warnings from other parts of the kernel.
+
+ - DEBUG_OBJECTS will add code to track the lifetime of various objects
+ created by the kernel and warn when things are done out of order. If
+ you are adding a subsystem which creates (and exports) complex objects
+ of its own, consider adding support for the object debugging
+ infrastructure.
+
+ - DEBUG_SLAB can find a variety of memory allocation and use errors; it
+ should be used on most development kernels.
+
+ - DEBUG_SPINLOCK, DEBUG_ATOMIC_SLEEP, and DEBUG_MUTEXES will find a
+ number of common locking errors.
+
+There are quite a few other debugging options, some of which will be
+discussed below. Some of them have a significant performance impact and
+should not be used all of the time. But some time spent learning the
+available options will likely be paid back many times over in short order.
+
+One of the heavier debugging tools is the locking checker, or "lockdep."
+This tool will track the acquisition and release of every lock (spinlock or
+mutex) in the system, the order in which locks are acquired relative to
+each other, the current interrupt environment, and more. It can then
+ensure that locks are always acquired in the same order, that the same
+interrupt assumptions apply in all situations, and so on. In other words,
+lockdep can find a number of scenarios in which the system could, on rare
+occasion, deadlock. This kind of problem can be painful (for both
+developers and users) in a deployed system; lockdep allows them to be found
+in an automated manner ahead of time. Code with any sort of non-trivial
+locking should be run with lockdep enabled before being submitted for
+inclusion.
+
+As a diligent kernel programmer, you will, beyond doubt, check the return
+status of any operation (such as a memory allocation) which can fail. The
+fact of the matter, though, is that the resulting failure recovery paths
+are, probably, completely untested. Untested code tends to be broken code;
+you could be much more confident of your code if all those error-handling
+paths had been exercised a few times.
+
+The kernel provides a fault injection framework which can do exactly that,
+especially where memory allocations are involved. With fault injection
+enabled, a configurable percentage of memory allocations will be made to
+fail; these failures can be restricted to a specific range of code.
+Running with fault injection enabled allows the programmer to see how the
+code responds when things go badly. See
+Documentation/fault-injection/fault-injection.rst for more information on
+how to use this facility.
+
+Other kinds of errors can be found with the "sparse" static analysis tool.
+With sparse, the programmer can be warned about confusion between
+user-space and kernel-space addresses, mixture of big-endian and
+little-endian quantities, the passing of integer values where a set of bit
+flags is expected, and so on. Sparse must be installed separately (it can
+be found at https://sparse.wiki.kernel.org/index.php/Main_Page if your
+distributor does not package it); it can then be run on the code by adding
+"C=1" to your make command.
+
+The "Coccinelle" tool (http://coccinelle.lip6.fr/) is able to find a wide
+variety of potential coding problems; it can also propose fixes for those
+problems. Quite a few "semantic patches" for the kernel have been packaged
+under the scripts/coccinelle directory; running "make coccicheck" will run
+through those semantic patches and report on any problems found. See
+:ref:`Documentation/dev-tools/coccinelle.rst <devtools_coccinelle>`
+for more information.
+
+Other kinds of portability errors are best found by compiling your code for
+other architectures. If you do not happen to have an S/390 system or a
+Blackfin development board handy, you can still perform the compilation
+step. A large set of cross compilers for x86 systems can be found at
+
+ https://www.kernel.org/pub/tools/crosstool/
+
+Some time spent installing and using these compilers will help avoid
+embarrassment later.
+
+
+Documentation
+-------------
+
+Documentation has often been more the exception than the rule with kernel
+development. Even so, adequate documentation will help to ease the merging
+of new code into the kernel, make life easier for other developers, and
+will be helpful for your users. In many cases, the addition of
+documentation has become essentially mandatory.
+
+The first piece of documentation for any patch is its associated
+changelog. Log entries should describe the problem being solved, the form
+of the solution, the people who worked on the patch, any relevant
+effects on performance, and anything else that might be needed to
+understand the patch. Be sure that the changelog says *why* the patch is
+worth applying; a surprising number of developers fail to provide that
+information.
+
+Any code which adds a new user-space interface - including new sysfs or
+/proc files - should include documentation of that interface which enables
+user-space developers to know what they are working with. See
+Documentation/ABI/README for a description of how this documentation should
+be formatted and what information needs to be provided.
+
+The file :ref:`Documentation/admin-guide/kernel-parameters.rst
+<kernelparameters>` describes all of the kernel's boot-time parameters.
+Any patch which adds new parameters should add the appropriate entries to
+this file.
+
+Any new configuration options must be accompanied by help text which
+clearly explains the options and when the user might want to select them.
+
+Internal API information for many subsystems is documented by way of
+specially-formatted comments; these comments can be extracted and formatted
+in a number of ways by the "kernel-doc" script. If you are working within
+a subsystem which has kerneldoc comments, you should maintain them and add
+them, as appropriate, for externally-available functions. Even in areas
+which have not been so documented, there is no harm in adding kerneldoc
+comments for the future; indeed, this can be a useful activity for
+beginning kernel developers. The format of these comments, along with some
+information on how to create kerneldoc templates can be found at
+:ref:`Documentation/doc-guide/ <doc_guide>`.
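+
+A kerneldoc comment for a hypothetical exported function looks roughly
+like this (see the document referenced above for the full rules)::
+
+  /**
+   * wibble_frobnicate() - Run frobnication passes on a device.
+   * @dev: Device to operate on.
+   * @count: Number of passes to perform.
+   *
+   * A longer description of the behavior, context, and locking
+   * requirements of the function goes here.
+   *
+   * Return: 0 on success or a negative errno value on failure.
+   */
+  int wibble_frobnicate(struct wibble_dev *dev, unsigned int count);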
+
+Anybody who reads through a significant amount of existing kernel code will
+note that, often, comments are most notable by their absence. Once again,
+the expectations for new code are higher than they were in the past;
+merging uncommented code will be harder. That said, there is little desire
+for verbosely-commented code. The code should, itself, be readable, with
+comments explaining the more subtle aspects.
+
+Certain things should always be commented. Uses of memory barriers should
+be accompanied by a line explaining why the barrier is necessary. The
+locking rules for data structures generally need to be explained somewhere.
+Major data structures need comprehensive documentation in general.
+Non-obvious dependencies between separate bits of code should be pointed
+out. Anything which might tempt a code janitor to make an incorrect
+"cleanup" needs a comment saying why it is done the way it is. And so on.
+
+
+Internal API changes
+--------------------
+
+The binary interface provided by the kernel to user space cannot be broken
+except under the most severe circumstances. The kernel's internal
+programming interfaces, instead, are highly fluid and can be changed when
+the need arises. If you find yourself having to work around a kernel API,
+or simply not using a specific functionality because it does not meet your
+needs, that may be a sign that the API needs to change. As a kernel
+developer, you are empowered to make such changes.
+
+There are, of course, some catches. API changes can be made, but they need
+to be well justified. So any patch making an internal API change should be
+accompanied by a description of what the change is and why it is
+necessary. This kind of change should also be broken out into a separate
+patch, rather than buried within a larger patch.
+
+The other catch is that a developer who changes an internal API is
+generally charged with the task of fixing any code within the kernel tree
+which is broken by the change. For a widely-used function, this duty can
+lead to literally hundreds or thousands of changes - many of which are
+likely to conflict with work being done by other developers. Needless to
+say, this can be a large job, so it is best to be sure that the
+justification is solid. Note that the Coccinelle tool can help with
+wide-ranging API changes.
+
+When making an incompatible API change, one should, whenever possible,
+ensure that code which has not been updated is caught by the compiler.
+This will help you to be sure that you have found all in-tree uses of that
+interface. It will also alert developers of out-of-tree code that there is
+a change that they need to respond to. Supporting out-of-tree code is not
+something that kernel developers need to be worried about, but we also do
+not have to make life harder for out-of-tree developers than it needs to
+be.
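+
+One way to arrange that, sketched here with invented names, is to change
+the function's signature rather than silently changing its behavior::
+
+  /* Before the API change. */
+  int wibble_attach(struct wibble_dev *dev);
+
+  /*
+   * After: any caller which has not been converted to pass the new ops
+   * structure will fail to compile instead of misbehaving at run time.
+   */
+  int wibble_attach(struct wibble_dev *dev, const struct wibble_ops *ops);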
diff --git a/Documentation/process/5.Posting.rst b/Documentation/process/5.Posting.rst
new file mode 100644
index 000000000..de4edd42d
--- /dev/null
+++ b/Documentation/process/5.Posting.rst
@@ -0,0 +1,360 @@
+.. _development_posting:
+
+Posting patches
+===============
+
+Sooner or later, the time comes when your work is ready to be presented to
+the community for review and, eventually, inclusion into the mainline
+kernel. Unsurprisingly, the kernel development community has evolved a set
+of conventions and procedures which are used in the posting of patches;
+following them will make life much easier for everybody involved. This
+document will attempt to cover these expectations in reasonable detail;
+more information can also be found in the files
+:ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
+and :ref:`Documentation/process/submit-checklist.rst <submitchecklist>`.
+
+
+When to post
+------------
+
+There is a constant temptation to avoid posting patches before they are
+completely "ready." For simple patches, that is not a problem. If the
+work being done is complex, though, there is a lot to be gained by getting
+feedback from the community before the work is complete. So you should
+consider posting in-progress work, or even making a git tree available so
+that interested developers can catch up with your work at any time.
+
+When posting code which is not yet considered ready for inclusion, it is a
+good idea to say so in the posting itself. Also mention any major work
+which remains to be done and any known problems. Fewer people will look at
+patches which are known to be half-baked, but those who do will come in
+with the idea that they can help you drive the work in the right direction.
+
+
+Before creating patches
+-----------------------
+
+There are a number of things which should be done before you consider
+sending patches to the development community. These include:
+
+ - Test the code to the extent that you can. Make use of the kernel's
+ debugging tools, ensure that the kernel will build with all reasonable
+ combinations of configuration options, use cross-compilers to build for
+ different architectures, etc.
+
+ - Make sure your code is compliant with the kernel coding style
+ guidelines.
+
+ - Does your change have performance implications? If so, you should run
+ benchmarks showing what the impact (or benefit) of your change is; a
+ summary of the results should be included with the patch.
+
+ - Be sure that you have the right to post the code. If this work was done
+ for an employer, the employer likely has a right to the work and must be
+ agreeable with its release under the GPL.
+
+As a general rule, putting in some extra thought before posting code almost
+always pays back the effort in short order.
+
+
+Patch preparation
+-----------------
+
+The preparation of patches for posting can be a surprising amount of work,
+but, once again, attempting to save time here is not generally advisable
+even in the short term.
+
+Patches must be prepared against a specific version of the kernel. As a
+general rule, a patch should be based on the current mainline as found in
+Linus's git tree. When basing on mainline, start with a well-known release
+point - a stable or -rc release - rather than branching off the mainline at
+an arbitrary spot.
+
+It may become necessary to make versions against -mm, linux-next, or a
+subsystem tree, though, to facilitate wider testing and review. Depending
+on the area of your patch and what is going on elsewhere, basing a patch
+against these other trees can require a significant amount of work
+resolving conflicts and dealing with API changes.
+
+Only the most simple changes should be formatted as a single patch;
+everything else should be made as a logical series of changes. Splitting
+up patches is a bit of an art; some developers spend a long time figuring
+out how to do it in the way that the community expects. There are a few
+rules of thumb, however, which can help considerably:
+
+ - The patch series you post will almost certainly not be the series of
+ changes found in your working revision control system. Instead, the
+ changes you have made need to be considered in their final form, then
+ split apart in ways which make sense. The developers are interested in
+ discrete, self-contained changes, not the path you took to get to those
+ changes.
+
+ - Each logically independent change should be formatted as a separate
+ patch. These changes can be small ("add a field to this structure") or
+ large (adding a significant new driver, for example), but they should be
+ conceptually small and amenable to a one-line description. Each patch
+ should make a specific change which can be reviewed on its own and
+ verified to do what it says it does.
+
+ - As a way of restating the guideline above: do not mix different types of
+ changes in the same patch. If a single patch fixes a critical security
+ bug, rearranges a few structures, and reformats the code, there is a
+ good chance that it will be passed over and the important fix will be
+ lost.
+
+ - Each patch should yield a kernel which builds and runs properly; if your
+ patch series is interrupted in the middle, the result should still be a
+ working kernel. Partial application of a patch series is a common
+ scenario when the "git bisect" tool is used to find regressions; if the
+ result is a broken kernel, you will make life harder for developers and
+ users who are engaging in the noble work of tracking down problems.
+
+ - Do not overdo it, though. One developer once posted a set of edits
+ to a single file as 500 separate patches - an act which did not make him
+ the most popular person on the kernel mailing list. A single patch can
+ be reasonably large as long as it still contains a single *logical*
+ change.
+
+ - It can be tempting to add a whole new infrastructure with a series of
+ patches, but to leave that infrastructure unused until the final patch
+ in the series enables the whole thing. This temptation should be
+ avoided if possible; if that series adds regressions, bisection will
+ finger the last patch as the one which caused the problem, even though
+ the real bug is elsewhere. Whenever possible, a patch which adds new
+ code should make that code active immediately.
+
+Working to create the perfect patch series can be a frustrating process
+which takes quite a bit of time and thought after the "real work" has been
+done. When done properly, though, it is time well spent.
+
+
+Patch formatting and changelogs
+-------------------------------
+
+So now you have a perfect series of patches for posting, but the work is
+not done quite yet. Each patch needs to be formatted into a message which
+quickly and clearly communicates its purpose to the rest of the world. To
+that end, each patch will be composed of the following:
+
+ - An optional "From" line naming the author of the patch. This line is
+ only necessary if you are passing on somebody else's patch via email,
+ but it never hurts to add it when in doubt.
+
+ - A one-line description of what the patch does. This message should be
+ enough for a reader who sees it with no other context to figure out the
+ scope of the patch; it is the line that will show up in the "short form"
+ changelogs. This message is usually formatted with the relevant
+ subsystem name first, followed by the purpose of the patch. For
+ example:
+
+ ::
+
+ gpio: fix build on CONFIG_GPIO_SYSFS=n
+
+ - A blank line followed by a detailed description of the contents of the
+ patch. This description can be as long as is required; it should say
+ what the patch does and why it should be applied to the kernel.
+
+ - One or more tag lines, with, at a minimum, one Signed-off-by: line from
+ the author of the patch. Tags will be described in more detail below.
+
+The items above, together, form the changelog for the patch. Writing good
+changelogs is a crucial but often-neglected art; it's worth spending
+another moment discussing this issue. When writing a changelog, you should
+bear in mind that a number of different people will be reading your words.
+These include subsystem maintainers and reviewers who need to decide
+whether the patch should be included, distributors and other maintainers
+trying to decide whether a patch should be backported to other kernels, bug
+hunters wondering whether the patch is responsible for a problem they are
+chasing, users who want to know how the kernel has changed, and more. A
+good changelog conveys the needed information to all of these people in the
+most direct and concise way possible.
+
+To that end, the summary line should describe the effects of and motivation
+for the change as well as possible given the one-line constraint. The
+detailed description can then amplify on those topics and provide any
+needed additional information. If the patch fixes a bug, cite the commit
+which introduced the bug if possible (and please provide both the commit ID
+and the title when citing commits). If a problem is associated with
+specific log or compiler output, include that output to help others
+searching for a solution to the same problem. If the change is meant to
+support other changes coming in a later patch, say so. If internal APIs are
+changed, detail those changes and how other developers should respond. In
+general, the more you can put yourself into the shoes of everybody who will
+be reading your changelog, the better that changelog (and the kernel as a
+whole) will be.
+
+Needless to say, the changelog should be the text used when committing the
+change to a revision control system. It will be followed by:
+
+ - The patch itself, in the unified ("-u") patch format. Using the "-p"
+ option to diff will associate function names with changes, making the
+ resulting patch easier for others to read.
+
+You should avoid including changes to irrelevant files (those generated by
+the build process, for example, or editor backup files) in the patch. The
+file "dontdiff" in the Documentation directory can help in this regard;
+pass it to diff with the "-X" option.
+
+The tags already briefly mentioned above are used to provide insights into
+how the patch came into being. They are described in detail in the
+:ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
+document; what follows here is a brief summary.
+
+One tag is used to refer to earlier commits which introduced problems fixed by
+the patch::
+
+ Fixes: 1f2e3d4c5b6a ("The first line of the commit specified by the first 12 characters of its SHA-1 ID")
+
+Another tag is used for linking web pages with additional backgrounds or
+details, for example an earlier discussion which leads to the patch or a
+document with a specification implemented by the patch::
+
+ Link: https://example.com/somewhere.html optional-other-stuff
+
+Many maintainers, when applying a patch, also add this tag to link to the
+latest public review posting of the patch; often this is automatically done
+by tools like b4 or a git hook like the one described in
+'Documentation/maintainer/configure-git.rst'.
+
+If the URL points to a public bug report being fixed by the patch, use the
+"Closes:" tag instead::
+
+ Closes: https://example.com/issues/1234 optional-other-stuff
+
+Some bug trackers have the ability to close issues automatically when a
+commit with such a tag is applied. Some bots monitoring mailing lists can
+also track such tags and take certain actions. Private bug trackers and
+invalid URLs are forbidden.
+
+Another kind of tag is used to document who was involved in the development of
+the patch. Each of these uses this format::
+
+ tag: Full Name <email address> optional-other-stuff
+
+The tags in common use are:
+
+ - Signed-off-by: this is a developer's certification that he or she has
+ the right to submit the patch for inclusion into the kernel. It is an
+ agreement to the Developer's Certificate of Origin, the full text of
+ which can be found in :ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
+ Code without a proper signoff cannot be merged into the mainline.
+
+ - Co-developed-by: states that the patch was co-created by several developers;
+   it is used to give attribution to co-authors (in addition to the author
+ attributed by the From: tag) when multiple people work on a single patch.
+ Every Co-developed-by: must be immediately followed by a Signed-off-by: of
+ the associated co-author. Details and examples can be found in
+ :ref:`Documentation/process/submitting-patches.rst <submittingpatches>`.
+
+ - Acked-by: indicates an agreement by another developer (often a
+ maintainer of the relevant code) that the patch is appropriate for
+ inclusion into the kernel.
+
+ - Tested-by: states that the named person has tested the patch and found
+ it to work.
+
+ - Reviewed-by: the named developer has reviewed the patch for correctness;
+ see the reviewer's statement in :ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
+ for more detail.
+
+ - Reported-by: names a user who reported a problem which is fixed by this
+ patch; this tag is used to give credit to the (often underappreciated)
+ people who test our code and let us know when things do not work
+ correctly. Note, this tag should be followed by a Closes: tag pointing to
+ the report, unless the report is not available on the web. The Link: tag
+ can be used instead of Closes: if the patch fixes a part of the issue(s)
+ being reported.
+
+ - Cc: the named person received a copy of the patch and had the
+ opportunity to comment on it.
+
+Be careful in the addition of tags to your patches, as only Cc: is appropriate
+for addition without the explicit permission of the person named; using
+Reported-by: is fine most of the time as well, but ask for permission if
+the bug was reported in private.
+
+
+Sending the patch
+-----------------
+
+Before you mail your patches, there are a couple of other things you should
+take care of:
+
+ - Are you sure that your mailer will not corrupt the patches? Patches
+ which have had gratuitous white-space changes or line wrapping performed
+ by the mail client will not apply at the other end, and often will not
+ be examined in any detail. If there is any doubt at all, mail the patch
+ to yourself and convince yourself that it shows up intact.
+
+ :ref:`Documentation/process/email-clients.rst <email_clients>` has some
+ helpful hints on making specific mail clients work for sending patches.
+
+ - Are you sure your patch is free of silly mistakes? You should always
+ run patches through scripts/checkpatch.pl and address the complaints it
+ comes up with. Please bear in mind that checkpatch.pl, while being the
+ embodiment of a fair amount of thought about what kernel patches should
+ look like, is not smarter than you. If fixing a checkpatch.pl complaint
+ would make the code worse, don't do it.
+
+Patches should always be sent as plain text. Please do not send them as
+attachments; that makes it much harder for reviewers to quote sections of
+the patch in their replies. Instead, just put the patch directly into your
+message.
+
+When mailing patches, it is important to send copies to anybody who might
+be interested in it. Unlike some other projects, the kernel encourages
+people to err on the side of sending too many copies; don't assume that the
+relevant people will see your posting on the mailing lists. In particular,
+copies should go to:
+
+ - The maintainer(s) of the affected subsystem(s). As described earlier,
+ the MAINTAINERS file is the first place to look for these people.
+
+ - Other developers who have been working in the same area - especially
+ those who might be working there now. Using git to see who else has
+ modified the files you are working on can be helpful.
+
+ - If you are responding to a bug report or a feature request, copy the
+ original poster as well.
+
+ - Send a copy to the relevant mailing list, or, if nothing else applies,
+ the linux-kernel list.
+
+ - If you are fixing a bug, think about whether the fix should go into the
+ next stable update. If so, stable@vger.kernel.org should get a copy of
+ the patch. Also add a "Cc: stable@vger.kernel.org" to the tags within
+ the patch itself; that will cause the stable team to get a notification
+ when your fix goes into the mainline.
+
+When selecting recipients for a patch, it is good to have an idea of who
+you think will eventually accept the patch and get it merged. While it
+is possible to send patches directly to Linus Torvalds and have him merge
+them, things are not normally done that way. Linus is busy, and there are
+subsystem maintainers who watch over specific parts of the kernel. Usually
+you will be wanting that maintainer to merge your patches. If there is no
+obvious maintainer, Andrew Morton is often the patch target of last resort.
+
+Patches need good subject lines. The canonical format for a patch line is
+something like:
+
+::
+
+ [PATCH nn/mm] subsys: one-line description of the patch
+
+where "nn" is the ordinal number of the patch, "mm" is the total number of
+patches in the series, and "subsys" is the name of the affected subsystem.
+Clearly, nn/mm can be omitted for a single, standalone patch.
+
+If you have a significant series of patches, it is customary to send an
+introductory description as part zero. This convention is not universally
+followed, though; if you use it, remember that information in the
+introduction does not make it into the kernel changelogs. So please ensure
+that the patches, themselves, have complete changelog information.
+
+In general, the second and following parts of a multi-part patch should be
+sent as a reply to the first part so that they all thread together at the
+receiving end. Tools like git and quilt have commands to mail out a set of
+patches with the proper threading. If you have a long series, though, and
+are using git, please stay away from the --chain-reply-to option to avoid
+creating exceptionally deep nesting.
diff --git a/Documentation/process/6.Followthrough.rst b/Documentation/process/6.Followthrough.rst
new file mode 100644
index 000000000..66fa400c6
--- /dev/null
+++ b/Documentation/process/6.Followthrough.rst
@@ -0,0 +1,219 @@
+.. _development_followthrough:
+
+Followthrough
+=============
+
+At this point, you have followed the guidelines given so far and, with the
+addition of your own engineering skills, have posted a perfect series of
+patches. One of the biggest mistakes that even experienced kernel
+developers can make is to conclude that their work is now done. In truth,
+posting patches indicates a transition into the next stage of the process,
+with, possibly, quite a bit of work yet to be done.
+
+It is a rare patch which is so good at its first posting that there is no
+room for improvement. The kernel development process recognizes this fact,
+and, as a result, is heavily oriented toward the improvement of posted
+code. You, as the author of that code, will be expected to work with the
+kernel community to ensure that your code is up to the kernel's quality
+standards. A failure to participate in this process is quite likely to
+prevent the inclusion of your patches into the mainline.
+
+
+Working with reviewers
+----------------------
+
+A patch of any significance will result in a number of comments from other
+developers as they review the code. Working with reviewers can be, for
+many developers, the most intimidating part of the kernel development
+process. Life can be made much easier, though, if you keep a few things in
+mind:
+
+ - If you have explained your patch well, reviewers will understand its
+ value and why you went to the trouble of writing it. But that value
+ will not keep them from asking a fundamental question: what will it be
+ like to maintain a kernel with this code in it five or ten years later?
+ Many of the changes you may be asked to make - from coding style tweaks
+ to substantial rewrites - come from the understanding that Linux will
+ still be around and under development a decade from now.
+
+ - Code review is hard work, and it is a relatively thankless occupation;
+ people remember who wrote kernel code, but there is little lasting fame
+ for those who reviewed it. So reviewers can get grumpy, especially when
+ they see the same mistakes being made over and over again. If you get a
+ review which seems angry, insulting, or outright offensive, resist the
+ impulse to respond in kind. Code review is about the code, not about
+ the people, and code reviewers are not attacking you personally.
+
+ - Similarly, code reviewers are not trying to promote their employers'
+ agendas at the expense of your own. Kernel developers often expect to
+ be working on the kernel years from now, but they understand that their
+ employer could change. They truly are, almost without exception,
+ working toward the creation of the best kernel they can; they are not
+ trying to create discomfort for their employers' competitors.
+
+ - Be prepared for seemingly silly requests for coding style changes
+ and requests to factor out some of your code to shared parts of
+ the kernel. One job the maintainers do is to keep things looking
+ the same. Sometimes this means that the clever hack in your driver
+ to get around a problem actually needs to become a generalized
+ kernel feature ready for next time.
+
+What all of this comes down to is that, when reviewers send you comments,
+you need to pay attention to the technical observations that they are
+making. Do not let their form of expression or your own pride keep that
+from happening. When you get review comments on a patch, take the time to
+understand what the reviewer is trying to say. If possible, fix the things
+that the reviewer is asking you to fix. And respond back to the reviewer:
+thank them, and describe how you will answer their questions.
+
+Note that you do not have to agree with every change suggested by
+reviewers. If you believe that the reviewer has misunderstood your code,
+explain what is really going on. If you have a technical objection to a
+suggested change, describe it and justify your solution to the problem. If
+your explanations make sense, the reviewer will accept them. Should your
+explanation not prove persuasive, though, especially if others start to
+agree with the reviewer, take some time to think things over again. It can
+be easy to become blinded by your own solution to a problem to the point
+that you don't realize that something is fundamentally wrong or, perhaps,
+you're not even solving the right problem.
+
+Andrew Morton has suggested that every review comment which does not result
+in a code change should result in an additional code comment instead; that
+can help future reviewers avoid the questions which came up the first time
+around.
+
+One fatal mistake is to ignore review comments in the hope that they will
+go away. They will not go away. If you repost code without having
+responded to the comments you got the time before, you're likely to find
+that your patches go nowhere.
+
+Speaking of reposting code: please bear in mind that reviewers are not
+going to remember all the details of the code you posted the last time
+around. So it is always a good idea to remind reviewers of previously
+raised issues and how you dealt with them; the patch changelog is a good
+place for this kind of information. Reviewers should not have to search
+through list archives to familiarize themselves with what was said last
+time; if you help them get a running start, they will be in a better mood
+when they revisit your code.
+
+What if you've tried to do everything right and things still aren't going
+anywhere? Most technical disagreements can be resolved through discussion,
+but there are times when somebody simply has to make a decision. If you
+honestly believe that this decision is going against you wrongly, you can
+always try appealing to a higher power. As of this writing, that higher
+power tends to be Andrew Morton. Andrew has a great deal of respect in the
+kernel development community; he can often unjam a situation which seems to
+be hopelessly blocked. Appealing to Andrew should not be done lightly,
+though, and not before all other alternatives have been explored. And bear
+in mind, of course, that he may not agree with you either.
+
+
+What happens next
+-----------------
+
+If a patch is considered to be a good thing to add to the kernel, and once
+most of the review issues have been resolved, the next step is usually
+entry into a subsystem maintainer's tree. How that works varies from one
+subsystem to the next; each maintainer has his or her own way of doing
+things. In particular, there may be more than one tree - one, perhaps,
+dedicated to patches planned for the next merge window, and another for
+longer-term work.
+
+For patches applying to areas for which there is no obvious subsystem tree
+(memory management patches, for example), the default tree often ends up
+being -mm. Patches which affect multiple subsystems can also end up going
+through the -mm tree.
+
+Inclusion into a subsystem tree can bring a higher level of visibility to a
+patch. Now other developers working with that tree will get the patch by
+default. Subsystem trees typically feed linux-next as well, making their
+contents visible to the development community as a whole. At this point,
+there's a good chance that you will get more comments from a new set of
+reviewers; these comments need to be answered as in the previous round.
+
+What may also happen at this point, depending on the nature of your patch,
+is that conflicts with work being done by others turn up. In the worst
+case, heavy patch conflicts can result in some work being put on the back
+burner so that the remaining patches can be worked into shape and merged.
+Other times, conflict resolution will involve working with the other
+developers and, possibly, moving some patches between trees to ensure that
+everything applies cleanly. This work can be a pain, but count your
+blessings: before the advent of the linux-next tree, these conflicts often
+only turned up during the merge window and had to be addressed in a hurry.
+Now they can be resolved at leisure, before the merge window opens.
+
+Some day, if all goes well, you'll log on and see that your patch has been
+merged into the mainline kernel. Congratulations! Once the celebration is
+complete (and you have added yourself to the MAINTAINERS file), though, it
+is worth remembering an important little fact: the job still is not done.
+Merging into the mainline brings its own challenges.
+
+To begin with, the visibility of your patch has increased yet again. There
+may be a new round of comments from developers who had not been aware of
+the patch before. It may be tempting to ignore them, since there is no
+longer any question of your code being merged. Resist that temptation,
+though; you still need to be responsive to developers who have questions or
+suggestions.
+
+More importantly, though: inclusion into the mainline puts your code into
+the hands of a much larger group of testers. Even if you have contributed
+a driver for hardware which is not yet available, you will be surprised by
+how many people will build your code into their kernels. And, of course,
+where there are testers, there will be bug reports.
+
+The worst sort of bug reports are regressions. If your patch causes a
+regression, you'll find an uncomfortable number of eyes upon you;
+regressions need to be fixed as soon as possible. If you are unwilling or
+unable to fix the regression (and nobody else does it for you), your patch
+will almost certainly be removed during the stabilization period. Beyond
+negating all of the work you have done to get your patch into the mainline,
+having a patch pulled as the result of a failure to fix a regression could
+well make it harder for you to get work merged in the future.
+
+After any regressions have been dealt with, there may be other, ordinary
+bugs to deal with. The stabilization period is your best opportunity to
+fix these bugs and ensure that your code's debut in a mainline kernel
+release is as solid as possible. So, please, answer bug reports, and fix
+the problems if at all possible. That's what the stabilization period is
+for; you can start creating cool new patches once any problems with the old
+ones have been taken care of.
+
+And don't forget that there are other milestones which may also create bug
+reports: the next mainline stable release, when prominent distributors pick
+up a version of the kernel containing your patch, etc. Continuing to
+respond to these reports is a matter of basic pride in your work. If that
+is insufficient motivation, though, it's also worth considering that the
+development community remembers developers who lose interest in their code
+after it's merged. The next time you post a patch, they will be evaluating
+it with the assumption that you will not be around to maintain it
+afterward.
+
+
+Other things that can happen
+-----------------------------
+
+One day, you may open your mail client and see that somebody has mailed you
+a patch to your code. That is one of the advantages of having your code
+out there in the open, after all. If you agree with the patch, you can
+either forward it on to the subsystem maintainer (be sure to include a
+proper From: line so that the attribution is correct, and add a signoff of
+your own), or send an Acked-by: response back and let the original poster
+send it upward.
+
+If you disagree with the patch, send a polite response explaining why. If
+possible, tell the author what changes need to be made to make the patch
+acceptable to you. There is a certain resistance to merging patches which
+are opposed by the author and maintainer of the code, but it only goes so
+far. If you are seen as needlessly blocking good work, those patches will
+eventually flow around you and get into the mainline anyway. In the Linux
+kernel, nobody has absolute veto power over any code. Except maybe Linus.
+
+On very rare occasion, you may see something completely different: another
+developer posts a different solution to your problem. At that point,
+chances are that one of the two patches will not be merged, and "mine was
+here first" is not considered to be a compelling technical argument. If
+somebody else's patch displaces yours and gets into the mainline, there is
+really only one way to respond: be pleased that your problem got solved and
+get on with your work. Having one's work shoved aside in this manner can
+be hurtful and discouraging, but the community will remember your reaction
+long after they have forgotten whose patch actually got merged.
diff --git a/Documentation/process/7.AdvancedTopics.rst b/Documentation/process/7.AdvancedTopics.rst
new file mode 100644
index 000000000..bf7cbfb4c
--- /dev/null
+++ b/Documentation/process/7.AdvancedTopics.rst
@@ -0,0 +1,178 @@
+.. _development_advancedtopics:
+
+Advanced topics
+===============
+
+At this point, hopefully, you have a handle on how the development process
+works. There is still more to learn, however! This section will cover a
+number of topics which can be helpful for developers wanting to become a
+regular part of the Linux kernel development process.
+
+Managing patches with git
+-------------------------
+
+The use of distributed version control for the kernel began in early 2002,
+when Linus first started playing with the proprietary BitKeeper
+application. While BitKeeper was controversial, the approach to software
+version management it embodied most certainly was not. Distributed version
+control enabled an immediate acceleration of the kernel development
+project. In current times, there are several free alternatives to
+BitKeeper. For better or for worse, the kernel project has settled on git
+as its tool of choice.
+
+Managing patches with git can make life much easier for the developer,
+especially as the volume of those patches grows. Git also has its rough
+edges and poses certain hazards; it is a young and powerful tool which is
+still being civilized by its developers. This document will not attempt to
+teach the reader how to use git; that would be sufficient material for a
+long document in its own right. Instead, the focus here will be on how git
+fits into the kernel development process in particular. Developers who
+wish to come up to speed with git will find more information at:
+
+ https://git-scm.com/
+
+ https://www.kernel.org/pub/software/scm/git/docs/user-manual.html
+
+and on various tutorials found on the web.
+
+The first order of business is to read the above sites and get a solid
+understanding of how git works before trying to use it to make patches
+available to others. A git-using developer should be able to obtain a copy
+of the mainline repository, explore the revision history, commit changes to
+the tree, use branches, etc. An understanding of git's tools for the
+rewriting of history (such as rebase) is also useful. Git comes with its
+own terminology and concepts; a new user of git should know about refs,
+remote branches, the index, fast-forward merges, pushes and pulls, detached
+heads, etc. It can all be a little intimidating at the outset, but the
+concepts are not that hard to grasp with a bit of study.
+
+Using git to generate patches for submission by email can be a good
+exercise while coming up to speed.
+
+When you are ready to start putting up git trees for others to look at, you
+will, of course, need a server that can be pulled from. Setting up such a
+server with git-daemon is relatively straightforward if you have a system
+which is accessible to the Internet. Otherwise, free, public hosting sites
+(GitHub, for example) are starting to appear on the net. Established
+developers can get an account on kernel.org, but those are not easy to come
+by; see https://kernel.org/faq/ for more information.
+
+The normal git workflow involves the use of a lot of branches. Each line
+of development can be separated into a separate "topic branch" and
+maintained independently. Branches in git are cheap, there is no reason to
+not make free use of them. And, in any case, you should not do your
+development in any branch which you intend to ask others to pull from.
+Publicly-available branches should be created with care; merge in patches
+from development branches when they are in complete form and ready to go -
+not before.
+
+Git provides some powerful tools which can allow you to rewrite your
+development history. An inconvenient patch (one which breaks bisection,
+say, or which has some other sort of obvious bug) can be fixed in place or
+made to disappear from the history entirely. A patch series can be
+rewritten as if it had been written on top of today's mainline, even though
+you have been working on it for months. Changes can be transparently
+shifted from one branch to another. And so on. Judicious use of git's
+ability to revise history can help in the creation of clean patch sets with
+fewer problems.
+
+Excessive use of this capability can lead to other problems, though, beyond
+a simple obsession for the creation of the perfect project history.
+Rewriting history will rewrite the changes contained in that history,
+turning a tested (hopefully) kernel tree into an untested one. But, beyond
+that, developers cannot easily collaborate if they do not have a shared
+view of the project history; if you rewrite history which other developers
+have pulled into their repositories, you will make life much more difficult
+for those developers. So a simple rule of thumb applies here: history
+which has been exported to others should generally be seen as immutable
+thereafter.
+
+So, once you push a set of changes to your publicly-available server, those
+changes should not be rewritten. Git will attempt to enforce this rule if
+you try to push changes which do not result in a fast-forward merge
+(i.e. changes which do not share the same history). It is possible to
+override this check, and there may be times when it is necessary to rewrite
+an exported tree. Moving changesets between trees to avoid conflicts in
+linux-next is one example. But such actions should be rare. This is one
+of the reasons why development should be done in private branches (which
+can be rewritten if necessary) and only moved into public branches when
+it's in a reasonably advanced state.
+
+As the mainline (or other tree upon which a set of changes is based)
+advances, it is tempting to merge with that tree to stay on the leading
+edge. For a private branch, rebasing can be an easy way to keep up with
+another tree, but rebasing is not an option once a tree is exported to the
+world. Once that happens, a full merge must be done. Merging occasionally
+makes good sense, but overly frequent merges can clutter the history
+needlessly. The suggested technique in this case is to merge infrequently,
+and generally only at specific release points (such as a mainline -rc
+release). If you are nervous about specific changes, you can always
+perform test merges in a private branch. The git "rerere" tool can be
+useful in such situations; it remembers how merge conflicts were resolved
+so that you don't have to do the same work twice.
+
+One of the biggest recurring complaints about tools like git is this: the
+mass movement of patches from one repository to another makes it easy to
+slip in ill-advised changes which go into the mainline below the review
+radar. Kernel developers tend to get unhappy when they see that kind of
+thing happening; putting up a git tree with unreviewed or off-topic patches
+can affect your ability to get trees pulled in the future. Quoting Linus:
+
+::
+
+ You can send me patches, but for me to pull a git patch from you, I
+ need to know that you know what you're doing, and I need to be able
+ to trust things *without* then having to go and check every
+ individual change by hand.
+
+(https://lwn.net/Articles/224135/).
+
+To avoid this kind of situation, ensure that all patches within a given
+branch stick closely to the associated topic; a "driver fixes" branch
+should not be making changes to the core memory management code. And, most
+importantly, do not use a git tree to bypass the review process. Post an
+occasional summary of the tree to the relevant list, and, when the time is
+right, request that the tree be included in linux-next.
+
+If and when others start to send patches for inclusion into your tree,
+don't forget to review them. Also ensure that you maintain the correct
+authorship information; the git "am" tool does its best in this regard, but
+you may have to add a "From:" line to the patch if it has been relayed to
+you via a third party.
+
+When requesting a pull, be sure to give all the relevant information: where
+your tree is, what branch to pull, and what changes will result from the
+pull. The git request-pull command can be helpful in this regard; it will
+format the request as other developers expect, and will also check to be
+sure that you have remembered to push those changes to the public server.
+
+
+Reviewing patches
+-----------------
+
+Some readers will certainly object to putting this section with "advanced
+topics" on the grounds that even beginning kernel developers should be
+reviewing patches. It is certainly true that there is no better way to
+learn how to program in the kernel environment than by looking at code
+posted by others. In addition, reviewers are forever in short supply; by
+looking at code you can make a significant contribution to the process as a
+whole.
+
+Reviewing code can be an intimidating prospect, especially for a new kernel
+developer who may well feel nervous about questioning code - in public -
+which has been posted by those with more experience. Even code written by
+the most experienced developers can be improved, though. Perhaps the best
+piece of advice for reviewers (all reviewers) is this: phrase review
+comments as questions rather than criticisms. Asking "how does the lock
+get released in this path?" will always work better than stating "the
+locking here is wrong."
+
+Different developers will review code from different points of view. Some
+are mostly concerned with coding style and whether code lines have trailing
+white space. Others will focus primarily on whether the change implemented
+by the patch as a whole is a good thing for the kernel or not. Yet others
+will check for problematic locking, excessive stack usage, possible
+security issues, duplication of code found elsewhere, adequate
+documentation, adverse effects on performance, user-space ABI changes, etc.
+All types of review, if they lead to better code going into the kernel, are
+welcome and worthwhile.
diff --git a/Documentation/process/8.Conclusion.rst b/Documentation/process/8.Conclusion.rst
new file mode 100644
index 000000000..8c847dffe
--- /dev/null
+++ b/Documentation/process/8.Conclusion.rst
@@ -0,0 +1,73 @@
+.. _development_conclusion:
+
+For more information
+====================
+
+There are numerous sources of information on Linux kernel development and
+related topics. First among those will always be the Documentation
+directory found in the kernel source distribution. Start with the
+top-level :ref:`process/howto.rst <process_howto>`; also read
+:ref:`process/submitting-patches.rst <submittingpatches>`. Many internal
+kernel APIs are documented using the kerneldoc mechanism; "make htmldocs"
+or "make pdfdocs" can be used to generate those documents in HTML or PDF
+format (though the version of TeX shipped by some distributions runs into
+internal limits and fails to process the documents properly).
+
+Various web sites discuss kernel development at all levels of detail. Your
+author would like to humbly suggest https://lwn.net/ as a source;
+information on many specific kernel topics can be found via the LWN kernel
+index at:
+
+ https://lwn.net/Kernel/Index/
+
+Beyond that, a valuable resource for kernel developers is:
+
+ https://kernelnewbies.org/
+
+And, of course, one should not forget https://kernel.org/, the definitive
+location for kernel release information.
+
+There are a number of books on kernel development:
+
+ Linux Device Drivers, 3rd Edition (Jonathan Corbet, Alessandro
+ Rubini, and Greg Kroah-Hartman). Online at
+ https://lwn.net/Kernel/LDD3/.
+
+ Linux Kernel Development (Robert Love).
+
+ Understanding the Linux Kernel (Daniel Bovet and Marco Cesati).
+
+All of these books suffer from a common fault, though: they tend to be
+somewhat obsolete by the time they hit the shelves, and they have been on
+the shelves for a while now. Still, there is quite a bit of good
+information to be found there.
+
+Documentation for git can be found at:
+
+ https://www.kernel.org/pub/software/scm/git/docs/
+
+ https://www.kernel.org/pub/software/scm/git/docs/user-manual.html
+
+
+Conclusion
+==========
+
+Congratulations to anybody who has made it through this long-winded
+document. Hopefully it has provided a helpful understanding of how the
+Linux kernel is developed and how you can participate in that process.
+
+In the end, it's the participation that matters. Any open source software
+project is no more than the sum of what its contributors put into it. The
+Linux kernel has progressed as quickly and as well as it has because it has
+been helped by an impressively large group of developers, all of whom are
+working to make it better. The kernel is a premier example of what can be
+done when thousands of people work together toward a common goal.
+
+The kernel can always benefit from a larger developer base, though. There
+is always more work to do. But, just as importantly, most other
+participants in the Linux ecosystem can benefit through contributing to the
+kernel. Getting code into the mainline is the key to higher code quality,
+lower maintenance and distribution costs, a higher level of influence over
+the direction of kernel development, and more. It is a situation where
+everybody involved wins. Fire up your editor and come join us; you will be
+more than welcome.
diff --git a/Documentation/process/adding-syscalls.rst b/Documentation/process/adding-syscalls.rst
new file mode 100644
index 000000000..906c47f1a
--- /dev/null
+++ b/Documentation/process/adding-syscalls.rst
@@ -0,0 +1,577 @@
+
+.. _addsyscalls:
+
+Adding a New System Call
+========================
+
+This document describes what's involved in adding a new system call to the
+Linux kernel, over and above the normal submission advice in
+:ref:`Documentation/process/submitting-patches.rst <submittingpatches>`.
+
+
+System Call Alternatives
+------------------------
+
+The first thing to consider when adding a new system call is whether one of
+the alternatives might be suitable instead. Although system calls are the
+most traditional and most obvious interaction points between userspace and the
+kernel, there are other possibilities -- choose what fits best for your
+interface.
+
+ - If the operations involved can be made to look like a filesystem-like
+ object, it may make more sense to create a new filesystem or device. This
+ also makes it easier to encapsulate the new functionality in a kernel module
+ rather than requiring it to be built into the main kernel.
+
+ - If the new functionality involves operations where the kernel notifies
+ userspace that something has happened, then returning a new file
+ descriptor for the relevant object allows userspace to use
+ ``poll``/``select``/``epoll`` to receive that notification.
+ - However, operations that don't map to
+ :manpage:`read(2)`/:manpage:`write(2)`-like operations
+ have to be implemented as :manpage:`ioctl(2)` requests, which can lead
+ to a somewhat opaque API.
+
+ - If you're just exposing runtime system information, a new node in sysfs
+ (see ``Documentation/filesystems/sysfs.rst``) or the ``/proc`` filesystem may
+ be more appropriate. However, access to these mechanisms requires that the
+ relevant filesystem is mounted, which might not always be the case (e.g.
+ in a namespaced/sandboxed/chrooted environment). Avoid adding any API to
+ debugfs, as this is not considered a 'production' interface to userspace.
+ - If the operation is specific to a particular file or file descriptor, then
+ an additional :manpage:`fcntl(2)` command option may be more appropriate. However,
+ :manpage:`fcntl(2)` is a multiplexing system call that hides a lot of complexity, so
+ this option is best for when the new function is closely analogous to
+ existing :manpage:`fcntl(2)` functionality, or the new functionality is very simple
+ (for example, getting/setting a simple flag related to a file descriptor).
+ - If the operation is specific to a particular task or process, then an
+ additional :manpage:`prctl(2)` command option may be more appropriate. As
+ with :manpage:`fcntl(2)`, this system call is a complicated multiplexor so
+ is best reserved for near-analogs of existing ``prctl()`` commands or
+ getting/setting a simple flag related to a process.
+
+
+Designing the API: Planning for Extension
+-----------------------------------------
+
+A new system call forms part of the API of the kernel, and has to be supported
+indefinitely. As such, it's a very good idea to explicitly discuss the
+interface on the kernel mailing list, and it's important to plan for future
+extensions of the interface.
+
+(The syscall table is littered with historical examples where this wasn't done,
+together with the corresponding follow-up system calls --
+``eventfd``/``eventfd2``, ``dup2``/``dup3``, ``inotify_init``/``inotify_init1``,
+``pipe``/``pipe2``, ``renameat``/``renameat2`` -- so
+learn from the history of the kernel and plan for extensions from the start.)
+
+For simpler system calls that only take a couple of arguments, the preferred
+way to allow for future extensibility is to include a flags argument to the
+system call. To make sure that userspace programs can safely use flags
+between kernel versions, check whether the flags value holds any unknown
+flags, and reject the system call (with ``EINVAL``) if it does::
+
+ if (flags & ~(THING_FLAG1 | THING_FLAG2 | THING_FLAG3))
+ return -EINVAL;
+
+(If no flags values are used yet, check that the flags argument is zero.)
+
+For more sophisticated system calls that involve a larger number of arguments,
+it's preferred to encapsulate the majority of the arguments into a structure
+that is passed in by pointer. Such a structure can cope with future extension
+by including a size argument in the structure::
+
+ struct xyzzy_params {
+ u32 size; /* userspace sets p->size = sizeof(struct xyzzy_params) */
+ u32 param_1;
+ u64 param_2;
+ u64 param_3;
+ };
+
+As long as any subsequently added field, say ``param_4``, is designed so that a
+zero value gives the previous behaviour, then this allows both directions of
+version mismatch:
+
+ - To cope with a later userspace program calling an older kernel, the kernel
+ code should check that any memory beyond the size of the structure that it
+ expects is zero (effectively checking that ``param_4 == 0``).
+ - To cope with an older userspace program calling a newer kernel, the kernel
+ code can zero-extend a smaller instance of the structure (effectively
+ setting ``param_4 = 0``).
+
+See :manpage:`perf_event_open(2)` and the ``perf_copy_attr()`` function (in
+``kernel/events/core.c``) for an example of this approach.
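+
+As a rough sketch of the kernel side (``xyzzy`` and ``xyzzy_params`` are this
+document's placeholder names), the ``copy_struct_from_user()`` helper is
+designed for exactly this zero-checking and zero-extending pattern::
+
+    SYSCALL_DEFINE1(xyzzy, struct xyzzy_params __user *, uparams)
+    {
+            struct xyzzy_params params;
+            u32 usize;
+            int ret;
+
+            /* Read the size that (possibly older or newer) userspace set. */
+            if (get_user(usize, &uparams->size))
+                    return -EFAULT;
+
+            /*
+             * Copies min(sizeof(params), usize) bytes, zero-fills the rest
+             * of 'params', and returns -E2BIG if userspace passed a larger
+             * structure whose extra bytes are not all zero.
+             */
+            ret = copy_struct_from_user(&params, sizeof(params), uparams, usize);
+            if (ret)
+                    return ret;
+
+            /* A later 'param_4' field now reads as zero for old callers. */
+            return 0;
+    }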
+
+
+Designing the API: Other Considerations
+---------------------------------------
+
+If your new system call allows userspace to refer to a kernel object, it
+should use a file descriptor as the handle for that object -- don't invent a
+new type of userspace object handle when the kernel already has mechanisms and
+well-defined semantics for using file descriptors.
+
+If your new :manpage:`xyzzy(2)` system call does return a new file descriptor,
+then the flags argument should include a value that is equivalent to setting
+``O_CLOEXEC`` on the new FD. This makes it possible for userspace to close
+the timing window between ``xyzzy()`` and calling
+``fcntl(fd, F_SETFD, FD_CLOEXEC)``, where an unexpected ``fork()`` and
+``execve()`` in another thread could leak a descriptor to
+the exec'ed program. (However, resist the temptation to re-use the actual value
+of the ``O_CLOEXEC`` constant, as it is architecture-specific and is part of a
+numbering space of ``O_*`` flags that is fairly full.)
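+
+A minimal sketch of that pattern (``XYZZY_CLOEXEC`` and ``xyzzy_fops`` are
+made up for this example; ``anon_inode_getfd()`` is one common way to create
+such descriptors)::
+
+    /* Hypothetical flag bit in xyzzy's own flag space. */
+    #define XYZZY_CLOEXEC   (1U << 0)
+
+    SYSCALL_DEFINE1(xyzzy, unsigned int, flags)
+    {
+            if (flags & ~XYZZY_CLOEXEC)
+                    return -EINVAL;
+
+            /*
+             * Passing O_CLOEXEC here makes the new descriptor close-on-exec
+             * atomically, with no window for a concurrent fork()/execve()
+             * in another thread.
+             */
+            return anon_inode_getfd("[xyzzy]", &xyzzy_fops, NULL,
+                                    O_RDWR | (flags & XYZZY_CLOEXEC ? O_CLOEXEC : 0));
+    }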
+
+If your system call returns a new file descriptor, you should also consider
+what it means to use the :manpage:`poll(2)` family of system calls on that file
+descriptor. Making a file descriptor ready for reading or writing is the
+normal way for the kernel to indicate to userspace that an event has
+occurred on the corresponding kernel object.
+
+If your new :manpage:`xyzzy(2)` system call involves a filename argument::
+
+ int sys_xyzzy(const char __user *path, ..., unsigned int flags);
+
+you should also consider whether an :manpage:`xyzzyat(2)` version is more appropriate::
+
+ int sys_xyzzyat(int dfd, const char __user *path, ..., unsigned int flags);
+
+This allows more flexibility for how userspace specifies the file in question;
+in particular it allows userspace to request the functionality for an
+already-opened file descriptor using the ``AT_EMPTY_PATH`` flag, effectively
+giving an :manpage:`fxyzzy(3)` operation for free::
+
+ - xyzzyat(AT_FDCWD, path, ..., 0) is equivalent to xyzzy(path,...)
+ - xyzzyat(fd, "", ..., AT_EMPTY_PATH) is equivalent to fxyzzy(fd, ...)
+
+(For more details on the rationale of the \*at() calls, see the
+:manpage:`openat(2)` man page; for an example of AT_EMPTY_PATH, see the
+:manpage:`fstatat(2)` man page.)
+
+If your new :manpage:`xyzzy(2)` system call involves a parameter describing an
+offset within a file, make its type ``loff_t`` so that 64-bit offsets can be
+supported even on 32-bit architectures.
+
+If your new :manpage:`xyzzy(2)` system call involves privileged functionality,
+it needs to be governed by the appropriate Linux capability bit (checked with
+a call to ``capable()``), as described in the :manpage:`capabilities(7)` man
+page. Choose an existing capability bit that governs related functionality,
+but try to avoid combining lots of only vaguely related functions together
+under the same bit, as this goes against capabilities' purpose of splitting
+the power of root. In particular, avoid adding new uses of the already
+overly-general ``CAP_SYS_ADMIN`` capability.
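+
+For illustration only -- the choice of capability bit here is arbitrary --
+such a check typically looks like::
+
+    if (!capable(CAP_NET_ADMIN))
+            return -EPERM;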
+
+If your new :manpage:`xyzzy(2)` system call manipulates a process other than
+the calling process, it should be restricted (using a call to
+``ptrace_may_access()``) so that only a calling process with the same
+permissions as the target process, or with the necessary capabilities, can
+manipulate the target process.
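+
+A rough sketch, assuming the target process is named by a ``pid`` argument
+(``find_get_task_by_vpid()`` is one way to look it up)::
+
+    struct task_struct *task;
+
+    task = find_get_task_by_vpid(pid);      /* takes a reference on the task */
+    if (!task)
+            return -ESRCH;
+
+    /*
+     * PTRACE_MODE_ATTACH_REALCREDS is the usual mode for operations that
+     * can modify the target; PTRACE_MODE_READ_REALCREDS suffices for
+     * read-only introspection.
+     */
+    if (!ptrace_may_access(task, PTRACE_MODE_ATTACH_REALCREDS)) {
+            put_task_struct(task);
+            return -EPERM;
+    }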
+
+Finally, be aware that some non-x86 architectures have an easier time if
+system call parameters that are explicitly 64-bit fall on odd-numbered
+arguments (i.e. parameter 1, 3, 5), to allow use of contiguous pairs of 32-bit
+registers. (This concern does not apply if the arguments are part of a
+structure that's passed in by pointer.)
+
+
+Proposing the API
+-----------------
+
+To make new system calls easy to review, it's best to divide up the patchset
+into separate chunks. These should include at least the following items as
+distinct commits (each of which is described further below):
+
+ - The core implementation of the system call, together with prototypes,
+ generic numbering, Kconfig changes and fallback stub implementation.
+ - Wiring up of the new system call for one particular architecture, usually
+ x86 (including all of x86_64, x86_32 and x32).
+ - A demonstration of the use of the new system call in userspace via a
+ selftest in ``tools/testing/selftests/``.
+ - A draft man-page for the new system call, either as plain text in the
+ cover letter, or as a patch to the (separate) man-pages repository.
+
+New system call proposals, like any change to the kernel's API, should always
+be cc'ed to linux-api@vger.kernel.org.
+
+
+Generic System Call Implementation
+----------------------------------
+
+The main entry point for your new :manpage:`xyzzy(2)` system call will be called
+``sys_xyzzy()``, but you add this entry point with the appropriate
+``SYSCALL_DEFINEn()`` macro rather than explicitly. The 'n' indicates the
+number of arguments to the system call, and the macro takes the system call name
+followed by the (type, name) pairs for the parameters as arguments. Using
+this macro allows metadata about the new system call to be made available for
+other tools.
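+
+For example, a hypothetical three-argument ``xyzzy()`` (parameter names and
+types chosen purely for illustration) would be defined as::
+
+    SYSCALL_DEFINE3(xyzzy, const char __user *, path, unsigned long, count,
+                    unsigned int, flags)
+    {
+            /* implementation of the system call goes here */
+            return 0;
+    }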
+
+The new entry point also needs a corresponding function prototype, in
+``include/linux/syscalls.h``, marked as asmlinkage to match the way that system
+calls are invoked::
+
+ asmlinkage long sys_xyzzy(...);
+
+Some architectures (e.g. x86) have their own architecture-specific syscall
+tables, but several other architectures share a generic syscall table. Add your
+new system call to the generic list by adding an entry to the list in
+``include/uapi/asm-generic/unistd.h``::
+
+ #define __NR_xyzzy 292
+ __SYSCALL(__NR_xyzzy, sys_xyzzy)
+
+Also update the __NR_syscalls count to reflect the additional system call, and
+note that if multiple new system calls are added in the same merge window,
+your new syscall number may get adjusted to resolve conflicts.
+
+The file ``kernel/sys_ni.c`` provides a fallback stub implementation of each
+system call, returning ``-ENOSYS``. Add your new system call here too::
+
+ COND_SYSCALL(xyzzy);
+
+Your new kernel functionality, and the system call that controls it, should
+normally be optional, so add a ``CONFIG`` option (typically to
+``init/Kconfig``) for it. As usual for new ``CONFIG`` options:
+
+ - Include a description of the new functionality and system call controlled
+ by the option.
+ - Make the option depend on EXPERT if it should be hidden from normal users.
+ - Make any new source files implementing the function dependent on the CONFIG
+ option in the Makefile (e.g. ``obj-$(CONFIG_XYZZY_SYSCALL) += xyzzy.o``).
+ - Double check that the kernel still builds with the new CONFIG option turned
+ off.
+
+To summarize, you need a commit that includes:
+
+ - ``CONFIG`` option for the new function, normally in ``init/Kconfig``
+ - ``SYSCALL_DEFINEn(xyzzy, ...)`` for the entry point
+ - corresponding prototype in ``include/linux/syscalls.h``
+ - generic table entry in ``include/uapi/asm-generic/unistd.h``
+ - fallback stub in ``kernel/sys_ni.c``
+
+
+x86 System Call Implementation
+------------------------------
+
+To wire up your new system call for x86 platforms, you need to update the
+master syscall tables. Assuming your new system call isn't special in some
+way (see below), this involves a "common" entry (for x86_64 and x32) in
+``arch/x86/entry/syscalls/syscall_64.tbl``::
+
+ 333 common xyzzy sys_xyzzy
+
+and an "i386" entry in ``arch/x86/entry/syscalls/syscall_32.tbl``::
+
+ 380 i386 xyzzy sys_xyzzy
+
+Again, these numbers are liable to be changed if there are conflicts in the
+relevant merge window.
+
+
+Compatibility System Calls (Generic)
+------------------------------------
+
+For most system calls the same 64-bit implementation can be invoked even when
+the userspace program is itself 32-bit; even if the system call's parameters
+include an explicit pointer, this is handled transparently.
+
+However, there are a couple of situations where a compatibility layer is
+needed to cope with size differences between 32-bit and 64-bit.
+
+The first is if the 64-bit kernel also supports 32-bit userspace programs, and
+so needs to parse areas of (``__user``) memory that could hold either 32-bit or
+64-bit values. In particular, this is needed whenever a system call argument
+is:
+
+ - a pointer to a pointer
+ - a pointer to a struct containing a pointer (e.g. ``struct iovec __user *``)
+ - a pointer to a varying sized integral type (``time_t``, ``off_t``,
+ ``long``, ...)
+ - a pointer to a struct containing a varying sized integral type.
+
+The second situation that requires a compatibility layer is if one of the
+system call's arguments has a type that is explicitly 64-bit even on a 32-bit
+architecture, for example ``loff_t`` or ``__u64``. In this case, a value that
+arrives at a 64-bit kernel from a 32-bit application will be split into two
+32-bit values, which then need to be re-assembled in the compatibility layer.
+
+(Note that a system call argument that's a pointer to an explicit 64-bit type
+does **not** need a compatibility layer; for example, :manpage:`splice(2)`'s arguments of
+type ``loff_t __user *`` do not trigger the need for a ``compat_`` system call.)
+
+The compatibility version of the system call is called ``compat_sys_xyzzy()``,
+and is added with the ``COMPAT_SYSCALL_DEFINEn()`` macro, analogously to
+SYSCALL_DEFINEn. This version of the implementation runs as part of a 64-bit
+kernel, but expects to receive 32-bit parameter values and does whatever is
+needed to deal with them. (Typically, the ``compat_sys_`` version converts the
+values to 64-bit versions and either calls on to the ``sys_`` version, or both of
+them call a common inner implementation function.)
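+
+As a rough sketch of the second situation above -- an explicitly 64-bit
+argument arriving as two 32-bit halves -- the compat entry point might look
+like this (``do_xyzzy()`` is a hypothetical shared helper, and which half
+carries the high bits depends on the 32-bit ABI in question)::
+
+    COMPAT_SYSCALL_DEFINE4(xyzzy, int, fd, u32, offset_low, u32, offset_high,
+                           unsigned int, flags)
+    {
+            /* Reassemble the 64-bit offset from its two 32-bit halves. */
+            loff_t offset = ((loff_t)offset_high << 32) | offset_low;
+
+            return do_xyzzy(fd, offset, flags);
+    }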
+
+The compat entry point also needs a corresponding function prototype, in
+``include/linux/compat.h``, marked as asmlinkage to match the way that system
+calls are invoked::
+
+ asmlinkage long compat_sys_xyzzy(...);
+
+If the system call involves a structure that is laid out differently on 32-bit
+and 64-bit systems, say ``struct xyzzy_args``, then the ``include/linux/compat.h``
+header file should also include a compat version of the structure (``struct
+compat_xyzzy_args``) where each variable-size field has the appropriate
+``compat_`` type that corresponds to the type in ``struct xyzzy_args``. The
+``compat_sys_xyzzy()`` routine can then use this ``compat_`` structure to
+parse the arguments from a 32-bit invocation.
+
+For example, if there are fields::
+
+ struct xyzzy_args {
+ const char __user *ptr;
+ __kernel_long_t varying_val;
+ u64 fixed_val;
+ /* ... */
+ };
+
+in ``struct xyzzy_args``, then ``struct compat_xyzzy_args`` would have::
+
+ struct compat_xyzzy_args {
+ compat_uptr_t ptr;
+ compat_long_t varying_val;
+ u64 fixed_val;
+ /* ... */
+ };
+
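+A minimal sketch of how ``compat_sys_xyzzy()`` might use that structure
+(``do_xyzzy()`` again being a hypothetical shared helper)::
+
+    COMPAT_SYSCALL_DEFINE1(xyzzy, struct compat_xyzzy_args __user *, uargs)
+    {
+            struct compat_xyzzy_args cargs;
+            struct xyzzy_args args;
+
+            if (copy_from_user(&cargs, uargs, sizeof(cargs)))
+                    return -EFAULT;
+
+            /*
+             * Widen the 32-bit fields; compat_ptr() turns the 32-bit user
+             * pointer representation back into a normal __user pointer.
+             */
+            args.ptr = compat_ptr(cargs.ptr);
+            args.varying_val = cargs.varying_val;
+            args.fixed_val = cargs.fixed_val;
+
+            return do_xyzzy(&args);
+    }
+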
+The generic system call list also needs adjusting to allow for the compat
+version; the entry in ``include/uapi/asm-generic/unistd.h`` should use
+``__SC_COMP`` rather than ``__SYSCALL``::
+
+ #define __NR_xyzzy 292
+ __SC_COMP(__NR_xyzzy, sys_xyzzy, compat_sys_xyzzy)
+
+To summarize, you need:
+
+ - a ``COMPAT_SYSCALL_DEFINEn(xyzzy, ...)`` for the compat entry point
+ - corresponding prototype in ``include/linux/compat.h``
+ - (if needed) 32-bit mapping struct in ``include/linux/compat.h``
+ - instance of ``__SC_COMP`` not ``__SYSCALL`` in
+ ``include/uapi/asm-generic/unistd.h``
+
+
+Compatibility System Calls (x86)
+--------------------------------
+
+To wire up the x86 architecture of a system call with a compatibility version,
+the entries in the syscall tables need to be adjusted.
+
+First, the entry in ``arch/x86/entry/syscalls/syscall_32.tbl`` gets an extra
+column to indicate that a 32-bit userspace program running on a 64-bit kernel
+should hit the compat entry point::
+
+ 380 i386 xyzzy sys_xyzzy __ia32_compat_sys_xyzzy
+
+Second, you need to figure out what should happen for the x32 ABI version of
+the new system call. There's a choice here: the layout of the arguments
+should either match the 64-bit version or the 32-bit version.
+
+If there's a pointer-to-a-pointer involved, the decision is easy: x32 is
+ILP32, so the layout should match the 32-bit version, and the entry in
+``arch/x86/entry/syscalls/syscall_64.tbl`` is split so that x32 programs hit
+the compatibility wrapper::
+
+ 333 64 xyzzy sys_xyzzy
+ ...
+ 555 x32 xyzzy __x32_compat_sys_xyzzy
+
+If no pointers are involved, then it is preferable to re-use the 64-bit system
+call for the x32 ABI (and consequently the entry in
+``arch/x86/entry/syscalls/syscall_64.tbl`` is unchanged).
+
+In either case, you should check that the types involved in your argument
+layout do indeed map exactly from x32 (-mx32) to either the 32-bit (-m32) or
+64-bit (-m64) equivalents.
+
+
+System Calls Returning Elsewhere
+--------------------------------
+
+For most system calls, once the system call is complete the user program
+continues exactly where it left off -- at the next instruction, with the
+stack the same and most of the registers the same as before the system call,
+and with the same virtual memory space.
+
+However, a few system calls do things differently. They might return to a
+different location (``rt_sigreturn``) or change the memory space
+(``fork``/``vfork``/``clone``) or even architecture (``execve``/``execveat``)
+of the program.
+
+To allow for this, the kernel implementation of the system call may need to
+save and restore additional registers to the kernel stack, allowing complete
+control of where and how execution continues after the system call.
+
+This is arch-specific, but typically involves defining assembly entry points
+that save/restore additional registers and invoke the real system call entry
+point.
+
+For x86_64, this is implemented as a ``stub_xyzzy`` entry point in
+``arch/x86/entry/entry_64.S``, and the entry in the syscall table
+(``arch/x86/entry/syscalls/syscall_64.tbl``) is adjusted to match::
+
+ 333 common xyzzy stub_xyzzy
+
+The equivalent for 32-bit programs running on a 64-bit kernel is normally
+called ``stub32_xyzzy`` and implemented in ``arch/x86/entry/entry_64_compat.S``,
+with the corresponding syscall table adjustment in
+``arch/x86/entry/syscalls/syscall_32.tbl``::
+
+ 380 i386 xyzzy sys_xyzzy stub32_xyzzy
+
+If the system call needs a compatibility layer (as in the previous section)
+then the ``stub32_`` version needs to call on to the ``compat_sys_`` version
+of the system call rather than the native 64-bit version. Also, if the x32 ABI
+implementation is not common with the x86_64 version, then its syscall
+table will also need to invoke a stub that calls on to the ``compat_sys_``
+version.
+
+For completeness, it's also nice to set up a mapping so that user-mode Linux
+still works -- its syscall table will reference ``stub_xyzzy``, but the UML
+build doesn't include the ``arch/x86/entry/entry_64.S`` implementation (because UML
+simulates registers etc). Fixing this is as simple as adding a #define to
+``arch/x86/um/sys_call_table_64.c``::
+
+ #define stub_xyzzy sys_xyzzy
+
+
+Other Details
+-------------
+
+Most of the kernel treats system calls in a generic way, but there is the
+occasional exception that may need updating for your particular system call.
+
+The audit subsystem is one such special case; it includes (arch-specific)
+functions that classify some special types of system call -- specifically
+file open (``open``/``openat``), program execution (``execve``/``execveat``) or
+socket multiplexor (``socketcall``) operations. If your new system call is
+analogous to one of these, then the audit system should be updated.
+
+More generally, if there is an existing system call that is analogous to your
+new system call, it's worth doing a kernel-wide grep for the existing system
+call to check there are no other special cases.
+
+
+Testing
+-------
+
+A new system call should obviously be tested; it is also useful to provide
+reviewers with a demonstration of how user space programs will use the system
+call. A good way to combine these aims is to include a simple self-test
+program in a new directory under ``tools/testing/selftests/``.
+
+For a new system call, there will obviously be no libc wrapper function and so
+the test will need to invoke it using ``syscall()``; also, if the system call
+involves a new userspace-visible structure, the corresponding header will need
+to be installed to compile the test.
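+
+A skeleton of such a test might look like the following; the syscall number
+here is a placeholder, and a real test would pick it up from the installed
+kernel headers::
+
+    #define _GNU_SOURCE
+    #include <stdio.h>
+    #include <unistd.h>
+    #include <sys/syscall.h>
+
+    #ifndef __NR_xyzzy
+    #define __NR_xyzzy 292          /* placeholder number for this sketch */
+    #endif
+
+    int main(void)
+    {
+            /* No libc wrapper exists yet, so invoke the call directly. */
+            long ret = syscall(__NR_xyzzy, "/some/path", 0);
+
+            if (ret < 0)
+                    perror("xyzzy");
+            return ret < 0 ? 1 : 0;
+    }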
+
+Make sure the selftest runs successfully on all supported architectures. For
+example, check that it works when compiled as an x86_64 (-m64), x86_32 (-m32)
+and x32 (-mx32) ABI program.
+
+For more extensive and thorough testing of new functionality, you should also
+consider adding tests to the Linux Test Project, or to the xfstests project
+for filesystem-related changes.
+
+ - https://linux-test-project.github.io/
+ - git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
+
+
+Man Page
+--------
+
+All new system calls should come with a complete man page, ideally using groff
+markup, but plain text will do. If groff is used, it's helpful to include a
+pre-rendered ASCII version of the man page in the cover email for the
+patchset, for the convenience of reviewers.
+
+The man page should be cc'ed to linux-man@vger.kernel.org.
+For more details, see https://www.kernel.org/doc/man-pages/patches.html
+
+
+Do not call System Calls in the Kernel
+--------------------------------------
+
+System calls are, as stated above, interaction points between userspace and
+the kernel. Therefore, system call functions such as ``sys_xyzzy()`` or
+``compat_sys_xyzzy()`` should only be called from userspace via the syscall
+table, but not from elsewhere in the kernel. If the syscall's functionality
+needs to be used within the kernel, shared between an old and a new syscall,
+or shared between a syscall and its compatibility variant, it should be
+implemented by means of a "helper" function (such as
+``ksys_xyzzy()``). This kernel function may then be called within the
+syscall stub (``sys_xyzzy()``), the compatibility syscall stub
+(``compat_sys_xyzzy()``), and/or other kernel code.
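+
+In outline, the pattern looks like this (a sketch, not a drop-in
+implementation)::
+
+    /* Shared implementation, callable from elsewhere in the kernel. */
+    long ksys_xyzzy(unsigned int flags)
+    {
+            /* the real work happens here */
+            return 0;
+    }
+
+    SYSCALL_DEFINE1(xyzzy, unsigned int, flags)
+    {
+            return ksys_xyzzy(flags);
+    }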
+
+At least on 64-bit x86, not calling system call functions from within the
+kernel is a hard requirement from v4.17 onwards. That architecture uses a
+different calling convention for system calls, in which ``struct pt_regs`` is
+decoded on-the-fly in a syscall wrapper that then hands processing over to the
+actual syscall function.
+This means that only those parameters which are actually needed for a specific
+syscall are passed on during syscall entry, instead of filling in six CPU
+registers with random user space content all the time (which may cause serious
+trouble down the call chain).
+
+Moreover, rules on how data may be accessed may differ between kernel data and
+user data. This is another reason why calling ``sys_xyzzy()`` is generally a
+bad idea.
+
+Exceptions to this rule are only allowed in architecture-specific overrides,
+architecture-specific compatibility wrappers, or other code in arch/.
+
+
+References and Sources
+----------------------
+
+ - LWN article from Michael Kerrisk on use of flags argument in system calls:
+ https://lwn.net/Articles/585415/
+ - LWN article from Michael Kerrisk on how to handle unknown flags in a system
+ call: https://lwn.net/Articles/588444/
+ - LWN article from Jake Edge describing constraints on 64-bit system call
+ arguments: https://lwn.net/Articles/311630/
+ - Pair of LWN articles from David Drysdale that describe the system call
+ implementation paths in detail for v3.14:
+
+ - https://lwn.net/Articles/604287/
+ - https://lwn.net/Articles/604515/
+
+ - Architecture-specific requirements for system calls are discussed in the
+ :manpage:`syscall(2)` man-page:
+ http://man7.org/linux/man-pages/man2/syscall.2.html#NOTES
+ - Collated emails from Linus Torvalds discussing the problems with ``ioctl()``:
+ https://yarchive.net/comp/linux/ioctl.html
+ - "How to not invent kernel interfaces", Arnd Bergmann,
+ https://www.ukuug.org/events/linux2007/2007/papers/Bergmann.pdf
+ - LWN article from Michael Kerrisk on avoiding new uses of CAP_SYS_ADMIN:
+ https://lwn.net/Articles/486306/
+ - Recommendation from Andrew Morton that all related information for a new
+ system call should come in the same email thread:
+ https://lore.kernel.org/r/20140724144747.3041b208832bbdf9fbce5d96@linux-foundation.org
+ - Recommendation from Michael Kerrisk that a new system call should come with
+ a man page: https://lore.kernel.org/r/CAKgNAkgMA39AfoSoA5Pe1r9N+ZzfYQNvNPvcRN7tOvRb8+v06Q@mail.gmail.com
+ - Suggestion from Thomas Gleixner that x86 wire-up should be in a separate
+ commit: https://lore.kernel.org/r/alpine.DEB.2.11.1411191249560.3909@nanos
+ - Suggestion from Greg Kroah-Hartman that it's good for new system calls to
+ come with a man-page & selftest: https://lore.kernel.org/r/20140320025530.GA25469@kroah.com
+ - Discussion from Michael Kerrisk of new system call vs. :manpage:`prctl(2)` extension:
+ https://lore.kernel.org/r/CAHO5Pa3F2MjfTtfNxa8LbnkeeU8=YJ+9tDqxZpw7Gz59E-4AUg@mail.gmail.com
+ - Suggestion from Ingo Molnar that system calls that involve multiple
+ arguments should encapsulate those arguments in a struct, which includes a
+ size field for future extensibility: https://lore.kernel.org/r/20150730083831.GA22182@gmail.com
+ - Numbering oddities arising from (re-)use of O_* numbering space flags:
+
+ - commit 75069f2b5bfb ("vfs: renumber FMODE_NONOTIFY and add to uniqueness
+ check")
+ - commit 12ed2e36c98a ("fanotify: FMODE_NONOTIFY and __O_SYNC in sparc
+ conflict")
+ - commit bb458c644a59 ("Safer ABI for O_TMPFILE")
+
+ - Discussion from Matthew Wilcox about restrictions on 64-bit arguments:
+ https://lore.kernel.org/r/20081212152929.GM26095@parisc-linux.org
+ - Recommendation from Greg Kroah-Hartman that unknown flags should be
+ policed: https://lore.kernel.org/r/20140717193330.GB4703@kroah.com
+ - Recommendation from Linus Torvalds that x32 system calls should prefer
+ compatibility with 64-bit versions rather than 32-bit versions:
+ https://lore.kernel.org/r/CA+55aFxfmwfB7jbbrXxa=K7VBYPfAvmu3XOkGrLbB1UFjX1+Ew@mail.gmail.com
diff --git a/Documentation/process/applying-patches.rst b/Documentation/process/applying-patches.rst
new file mode 100644
index 000000000..c269f5e1a
--- /dev/null
+++ b/Documentation/process/applying-patches.rst
@@ -0,0 +1,444 @@
+.. _applying_patches:
+
+Applying Patches To The Linux Kernel
+++++++++++++++++++++++++++++++++++++
+
+Original by:
+ Jesper Juhl, August 2005
+
+.. note::
+
+ This document is obsolete. In most cases, rather than using ``patch``
+ manually, you'll almost certainly want to look at using Git instead.
+
+A frequently asked question on the Linux Kernel Mailing List is how to apply
+a patch to the kernel or, more specifically, what base kernel a patch for
+one of the many trees/branches should be applied to. Hopefully this document
+will explain this to you.
+
+In addition to explaining how to apply and revert patches, a brief
+description of the different kernel trees (and examples of how to apply
+their specific patches) is also provided.
+
+
+What is a patch?
+================
+
+A patch is a small text document containing a delta of changes between two
+different versions of a source tree. Patches are created with the ``diff``
+program.
+
+To correctly apply a patch you need to know what base it was generated from
+and what new version the patch will change the source tree into. These
+should both be present in the patch file metadata or be possible to deduce
+from the filename.
+
+
+How do I apply or revert a patch?
+=================================
+
+You apply a patch with the ``patch`` program. The patch program reads a diff
+(or patch) file and makes the changes to the source tree described in it.
+
+Patches for the Linux kernel are generated relative to the parent directory
+holding the kernel source dir.
+
+This means that paths to files inside the patch file contain the name of the
+kernel source directories it was generated against (or some other directory
+names like "a/" and "b/").
+
+Since this is unlikely to match the name of the kernel source dir on your
+local machine (but is often useful info to see what version an otherwise
+unlabeled patch was generated against) you should change into your kernel
+source directory and then strip the first element of the path from filenames
+in the patch file when applying it (the ``-p1`` argument to ``patch`` does
+this).
+
+To revert a previously applied patch, use the -R argument to patch.
+So, if you applied a patch like this::
+
+ patch -p1 < ../patch-x.y.z
+
+You can revert (undo) it like this::
+
+ patch -R -p1 < ../patch-x.y.z
+
+
+How do I feed a patch/diff file to ``patch``?
+=============================================
+
+This (as usual with Linux and other UNIX-like operating systems) can be
+done in several different ways.
+
+In all the examples below I feed the file (in uncompressed form) to patch
+via stdin using the following syntax::
+
+ patch -p1 < path/to/patch-x.y.z
+
+If you just want to be able to follow the examples below and don't want to
+know of more than one way to use patch, then you can stop reading this
+section here.
+
+Patch can also get the name of the file to use via the -i argument, like
+this::
+
+ patch -p1 -i path/to/patch-x.y.z
+
+If your patch file is compressed with gzip or xz and you don't want to
+uncompress it before applying it, then you can feed it to patch like this
+instead::
+
+        xzcat path/to/patch-x.y.z.xz | patch -p1
+        zcat path/to/patch-x.y.z.gz | patch -p1
+
+If you wish to uncompress the patch file by hand first before applying it
+(what I assume you've done in the examples below), then you simply run
+gunzip or xz on the file -- like this::
+
+ gunzip patch-x.y.z.gz
+ xz -d patch-x.y.z.xz
+
+This will leave you with a plain text patch-x.y.z file that you can feed to
+patch via stdin or the ``-i`` argument, as you prefer.
+
+A few other nice arguments for patch are ``-s`` which causes patch to be silent
+except for errors which is nice to prevent errors from scrolling out of the
+screen too fast, and ``--dry-run`` which causes patch to just print a listing of
+what would happen, but doesn't actually make any changes. Finally ``--verbose``
+tells patch to print more information about the work being done.
+
+
+Common errors when patching
+===========================
+
+When patch applies a patch file it attempts to verify the sanity of the
+file in different ways.
+
+Checking that the file looks like a valid patch file and checking the code
+around the bits being modified matches the context provided in the patch are
+just two of the basic sanity checks patch does.
+
+If patch encounters something that doesn't look quite right it has two
+options. It can either refuse to apply the changes and abort or it can try
+to find a way to make the patch apply with a few minor changes.
+
+One example of something that's not 'quite right' that patch will attempt to
+fix up is if all the context matches, the lines being changed match, but the
+line numbers are different. This can happen, for example, if the patch makes
+a change in the middle of the file but for some reasons a few lines have
+been added or removed near the beginning of the file. In that case
+everything looks good; it has just moved up or down a bit, and patch will
+usually adjust the line numbers and apply the patch.
+
+Whenever patch applies a patch that it had to modify a bit to make it fit,
+it'll tell you about it by saying the patch applied with **fuzz**.
+You should be wary of such changes since, even though patch probably got it
+right, it doesn't /always/ get it right, and the result will sometimes be
+wrong.
+
+When patch encounters a change that it can't fix up with fuzz it rejects it
+outright and leaves a file with a ``.rej`` extension (a reject file). You can
+read this file to see exactly what change couldn't be applied, so you can
+go fix it up by hand if you wish.
+
+If you don't have any third-party patches applied to your kernel source, but
+only patches from kernel.org and you apply the patches in the correct order,
+and have made no modifications yourself to the source files, then you should
+never see a fuzz or reject message from patch. If you do see such messages
+anyway, then there's a high risk that either your local source tree or the
+patch file is corrupted in some way. In that case you should probably try
+re-downloading the patch and if things are still not OK then you'd be advised
+to start with a fresh tree downloaded in full from kernel.org.
+
+Let's look a bit more at some of the messages patch can produce.
+
+If patch stops and presents a ``File to patch:`` prompt, then patch could not
+find a file to be patched. Most likely you forgot to specify -p1 or you are
+in the wrong directory. Less often, you'll find patches that need to be
+applied with ``-p0`` instead of ``-p1`` (reading the patch file should reveal if
+this is the case -- if so, then this is an error by the person who created
+the patch but is not fatal).
+
+If you get ``Hunk #2 succeeded at 1887 with fuzz 2 (offset 7 lines).`` or a
+message similar to that, then it means that patch had to adjust the location
+of the change (in this example it needed to move 7 lines from where it
+expected to make the change to make it fit).
+
+The resulting file may or may not be OK, depending on the reason the file
+was different than expected.
+
+This often happens if you try to apply a patch that was generated against a
+different kernel version than the one you are trying to patch.
+
+If you get a message like ``Hunk #3 FAILED at 2387.``, then it means that the
+patch could not be applied correctly and the patch program was unable to
+fuzz its way through. This will generate a ``.rej`` file with the change that
+caused the patch to fail and also a ``.orig`` file showing you the original
+content that couldn't be changed.
+
+If you get ``Reversed (or previously applied) patch detected! Assume -R? [n]``
+then patch detected that the change contained in the patch seems to have
+already been made.
+
+If you actually did apply this patch previously and you just re-applied it
+in error, then just say [n]o and abort this patch. If you applied this patch
+previously and actually intended to revert it, but forgot to specify -R,
+then you can say [**y**]es here to make patch revert it for you.
+
+This can also happen if the creator of the patch reversed the source and
+destination directories when creating the patch, and in that case reverting
+the patch will in fact apply it.
+
+A message similar to ``patch: **** unexpected end of file in patch`` or
+``patch unexpectedly ends in middle of line`` means that patch could make no
+sense of the file you fed to it. Either your download is broken, you tried to
+feed patch a compressed patch file without uncompressing it first, or the patch
+file that you are using has been mangled by a mail client or mail transfer
+agent along the way somewhere, e.g., by splitting a long line into two lines.
+Often these warnings can easily be fixed by joining (concatenating) the
+two lines that had been split.
+
+As I already mentioned above, these errors should never happen if you apply
+a patch from kernel.org to the correct version of an unmodified source tree.
+So if you get these errors with kernel.org patches then you should probably
+assume that either your patch file or your tree is broken and I'd advise you
+to start over with a fresh download of a full kernel tree and the patch you
+wish to apply.
+
+
+Are there any alternatives to ``patch``?
+========================================
+
+Yes, there are alternatives.
+
+You can use the ``interdiff`` program (http://cyberelk.net/tim/patchutils/) to
+generate a patch representing the differences between two patches and then
+apply the result.
+
+This will let you move from something like 5.7.2 to 5.7.3 in a single
+step. The -z flag to interdiff will even let you feed it patches in gzip or
+bzip2 compressed form directly without the use of zcat or bzcat or manual
+decompression.
+
+Here's how you'd go from 5.7.2 to 5.7.3 in a single step::
+
+ interdiff -z ../patch-5.7.2.gz ../patch-5.7.3.gz | patch -p1
+
+Although interdiff may save you a step or two you are generally advised to
+do the additional steps since interdiff can get things wrong in some cases.
+
+Another alternative is ``ketchup``, which is a python script for automatic
+downloading and applying of patches (https://www.selenic.com/ketchup/).
+
+Other nice tools are diffstat, which shows a summary of changes made by a
+patch; lsdiff, which displays a short listing of affected files in a patch
+file, along with (optionally) the line numbers of the start of each patch;
+and grepdiff, which displays a list of the files modified by a patch where
+the patch contains a given regular expression.
+
+
+Where can I download the patches?
+=================================
+
+The patches are available at https://kernel.org/
+Most recent patches are linked from the front page, but they also have
+specific homes.
+
+The 5.x.y (-stable) and 5.x patches live at
+
+ https://www.kernel.org/pub/linux/kernel/v5.x/
+
+The 5.x.y incremental patches live at
+
+ https://www.kernel.org/pub/linux/kernel/v5.x/incr/
+
+The -rc patches are not stored on the webserver but are generated on
+demand from git tags such as
+
+ https://git.kernel.org/torvalds/p/v5.1-rc1/v5.0
+
+The stable -rc patches live at
+
+ https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/
+
+
+The 5.x kernels
+===============
+
+These are the base stable releases released by Linus. The highest numbered
+release is the most recent.
+
+If regressions or other serious flaws are found, then a -stable fix patch
+will be released (see below) on top of this base. Once a new 5.x base
+kernel is released, a patch is made available that is a delta between the
+previous 5.x kernel and the new one.
+
+To apply a patch moving from 5.6 to 5.7, you'd do the following (note
+that such patches do **NOT** apply on top of 5.x.y kernels but on top of the
+base 5.x kernel -- if you need to move from 5.x.y to 5.x+1 you need to
+first revert the 5.x.y patch).
+
+Here are some examples::
+
+ # moving from 5.6 to 5.7
+
+ $ cd ~/linux-5.6 # change to kernel source dir
+ $ patch -p1 < ../patch-5.7 # apply the 5.7 patch
+ $ cd ..
+ $ mv linux-5.6 linux-5.7 # rename source dir
+
+ # moving from 5.6.1 to 5.7
+
+ $ cd ~/linux-5.6.1 # change to kernel source dir
+ $ patch -p1 -R < ../patch-5.6.1 # revert the 5.6.1 patch
+ # source dir is now 5.6
+ $ patch -p1 < ../patch-5.7 # apply new 5.7 patch
+ $ cd ..
+ $ mv linux-5.6.1 linux-5.7 # rename source dir
+
+
+The 5.x.y kernels
+=================
+
+Kernels with 3-digit versions are -stable kernels. They contain small(ish)
+critical fixes for security problems or significant regressions discovered
+in a given 5.x kernel.
+
+This is the recommended branch for users who want the most recent stable
+kernel and are not interested in helping test development/experimental
+versions.
+
+If no 5.x.y kernel is available, then the highest numbered 5.x kernel is
+the current stable kernel.
+
+The -stable team provides normal as well as incremental patches. Below is
+how to apply these patches.
+
+Normal patches
+~~~~~~~~~~~~~~
+
+These patches are not incremental, meaning that for example the 5.7.3
+patch does not apply on top of the 5.7.2 kernel source, but rather on top
+of the base 5.7 kernel source.
+
+So, in order to apply the 5.7.3 patch to your existing 5.7.2 kernel
+source you have to first back out the 5.7.2 patch (so you are left with a
+base 5.7 kernel source) and then apply the new 5.7.3 patch.
+
+Here's a small example::
+
+ $ cd ~/linux-5.7.2 # change to the kernel source dir
+ $ patch -p1 -R < ../patch-5.7.2 # revert the 5.7.2 patch
+ $ patch -p1 < ../patch-5.7.3 # apply the new 5.7.3 patch
+ $ cd ..
+ $ mv linux-5.7.2 linux-5.7.3 # rename the kernel source dir
+
+Incremental patches
+~~~~~~~~~~~~~~~~~~~
+
+Incremental patches are different: instead of being applied on top
+of base 5.x kernel, they are applied on top of previous stable kernel
+(5.x.y-1).
+
+Here's the example to apply these::
+
+ $ cd ~/linux-5.7.2 # change to the kernel source dir
+ $ patch -p1 < ../patch-5.7.2-3 # apply the new 5.7.3 patch
+ $ cd ..
+ $ mv linux-5.7.2 linux-5.7.3 # rename the kernel source dir
+
+
+The -rc kernels
+===============
+
+These are release-candidate kernels. These are development kernels released
+by Linus whenever he deems the current git (the kernel's source management
+tool) tree to be in a reasonably sane state adequate for testing.
+
+These kernels are not stable and you should expect occasional breakage if
+you intend to run them. This is however the most stable of the main
+development branches and is also what will eventually turn into the next
+stable kernel, so it is important that it be tested by as many people as
+possible.
+
+This is a good branch to run for people who want to help out testing
+development kernels but do not want to run some of the really experimental
+stuff (such people should see the sections about -next and -mm kernels below).
+
+The -rc patches are not incremental, they apply to a base 5.x kernel, just
+like the 5.x.y patches described above. The kernel version before the -rcN
+suffix denotes the version of the kernel that this -rc kernel will eventually
+turn into.
+
+So, 5.8-rc5 means that this is the fifth release candidate for the 5.8
+kernel and the patch should be applied on top of the 5.7 kernel source.
+
+Here are 3 examples of how to apply these patches::
+
+ # first an example of moving from 5.7 to 5.8-rc3
+
+ $ cd ~/linux-5.7 # change to the 5.7 source dir
+ $ patch -p1 < ../patch-5.8-rc3 # apply the 5.8-rc3 patch
+ $ cd ..
+ $ mv linux-5.7 linux-5.8-rc3 # rename the source dir
+
+ # now let's move from 5.8-rc3 to 5.8-rc5
+
+ $ cd ~/linux-5.8-rc3 # change to the 5.8-rc3 dir
+ $ patch -p1 -R < ../patch-5.8-rc3 # revert the 5.8-rc3 patch
+ $ patch -p1 < ../patch-5.8-rc5 # apply the new 5.8-rc5 patch
+ $ cd ..
+ $ mv linux-5.8-rc3 linux-5.8-rc5 # rename the source dir
+
+ # finally let's try and move from 5.7.3 to 5.8-rc5
+
+ $ cd ~/linux-5.7.3 # change to the kernel source dir
+ $ patch -p1 -R < ../patch-5.7.3 # revert the 5.7.3 patch
+ $ patch -p1 < ../patch-5.8-rc5 # apply new 5.8-rc5 patch
+ $ cd ..
+ $ mv linux-5.7.3 linux-5.8-rc5 # rename the kernel source dir
+
+
+The -mm patches and the linux-next tree
+=======================================
+
+The -mm patches are experimental patches released by Andrew Morton.
+
+In the past, the -mm tree was also used to test subsystem patches, but this
+function is now handled by the
+``linux-next`` (https://www.kernel.org/doc/man-pages/linux-next.html)
+tree. Subsystem maintainers push their patches to linux-next first and
+then, during the merge window, send them directly to Linus.
+
+The -mm patches serve as a sort of proving ground for new features and other
+experimental patches that aren't merged via a subsystem tree.
+Once such a patch has proved its worth in -mm for a while, Andrew pushes
+it on to Linus for inclusion in mainline.
+
+The linux-next tree is updated daily and includes the -mm patches.
+Both are in constant flux and contain many experimental features and a
+lot of debugging patches not appropriate for mainline; they are the most
+experimental of the branches described in this document.
+
+These patches are not appropriate for use on systems that are supposed to be
+stable, and they are riskier to run than any of the other branches (make
+sure you have up-to-date backups -- that goes for any experimental kernel, but
+even more so for -mm patches or a kernel from the linux-next tree).
+
+Testing of -mm patches and linux-next is greatly appreciated, since the whole
+point of those trees is to weed out regressions, crashes, data corruption bugs,
+build breakage (and any other bug in general) before changes are merged into
+the more stable mainline Linus tree.
+
+But testers of -mm and linux-next should be aware that breakages are
+more common than in any other tree.
+
+
+This concludes this list of explanations of the various kernel trees.
+I hope you are now clear on how to apply the various patches and help testing
+the kernel.
+
+Thanks to Randy Dunlap, Rolf Eike Beer, Linus Torvalds, Bodo Eggert,
+Johannes Stezenbach, Grant Coady, Pavel Machek and others that I may have
+forgotten for their reviews and contributions to this document.
diff --git a/Documentation/process/botching-up-ioctls.rst b/Documentation/process/botching-up-ioctls.rst
new file mode 100644
index 000000000..a05e8401d
--- /dev/null
+++ b/Documentation/process/botching-up-ioctls.rst
@@ -0,0 +1,225 @@
+=================================
+(How to avoid) Botching up ioctls
+=================================
+
+From: https://blog.ffwll.ch/2013/11/botching-up-ioctls.html
+
+By: Daniel Vetter, Copyright © 2013 Intel Corporation
+
+One clear insight kernel graphics hackers gained in the past few years is that
+trying to come up with a unified interface to manage the execution units and
+memory on completely different GPUs is a futile effort. So nowadays every
+driver has its own set of ioctls to allocate memory and submit work to the GPU.
+Which is nice, since there's no more insanity in the form of fake-generic, but
+actually only used once interfaces. But the clear downside is that there's much
+more potential to screw things up.
+
+To avoid repeating all the same mistakes again I've written up some of the
+lessons learned while botching the job for the drm/i915 driver. Most of these
+only cover technicalities and not the big-picture issues like what the command
+submission ioctl exactly should look like. Learning these lessons is probably
+something every GPU driver has to do on its own.
+
+
+Prerequisites
+-------------
+
+First the prerequisites. Without these you have already failed, because you
+will need to add a 32-bit compat layer:
+
+ * Only use fixed sized integers. To avoid conflicts with typedefs in userspace
+ the kernel has special types like __u32, __s64. Use them.
+
+ * Align everything to the natural size and use explicit padding. 32-bit
+ platforms don't necessarily align 64-bit values to 64-bit boundaries, but
+ 64-bit platforms do. So we always need padding to the natural size to get
+ this right.
+
+ * Pad the entire struct to a multiple of 64-bits if the structure contains
+ 64-bit types - the structure size will otherwise differ on 32-bit versus
+ 64-bit. Having a different structure size hurts when passing arrays of
+ structures to the kernel, or if the kernel checks the structure size, which
+ e.g. the drm core does.
+
+ * Pointers are __u64, cast from/to a uintptr_t on the userspace side and
+ from/to a void __user * in the kernel. Try really hard not to delay this
+ conversion or worse, fiddle the raw __u64 through your code since that
+ diminishes the checking tools like sparse can provide. The macro
+ u64_to_user_ptr can be used in the kernel to avoid warnings about integers
+ and pointers of different sizes.
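+
+Putting the points above together, a (completely made-up) ioctl argument
+struct might look like this::
+
+    struct foo_frobnicate_args {            /* example names only */
+            __u32 width;
+            __u32 height;                   /* fixed-size, naturally aligned fields */
+            __u64 data_ptr;                 /* userspace pointer carried as __u64 */
+            __u32 flags;
+            __u32 pad;                      /* explicit padding to a 64-bit multiple */
+    };
+
+    /* In the kernel, convert the pointer back without open-coded casts
+     * ('args' points at the copied-in struct): */
+    void __user *data = u64_to_user_ptr(args->data_ptr);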
+
+
+Basics
+------
+
+With the joys of writing a compat layer avoided we can take a look at the basic
+fumbles. Neglecting these will make backward and forward compatibility a real
+pain. And since getting things wrong on the first attempt is guaranteed you
+will have a second iteration or at least an extension for any given interface.
+
+ * Have a clear way for userspace to figure out whether your new ioctl or ioctl
+ extension is supported on a given kernel. If you can't rely on old kernels
+ rejecting the new flags/modes or ioctls (since doing that was botched in the
+ past) then you need a driver feature flag or revision number somewhere.
+
+ * Have a plan for extending ioctls with new flags or new fields at the end of
+ the structure. The drm core checks the passed-in size for each ioctl call
+ and zero-extends any mismatches between kernel and userspace. That helps,
+ but isn't a complete solution since newer userspace on older kernels won't
+ notice that the newly added fields at the end get ignored. So this still
+   needs a new driver feature flag.
+
+ * Check all unused fields and flags and all the padding for whether it's 0,
+ and reject the ioctl if that's not the case. Otherwise your nice plan for
+ future extensions is going right down the gutters since someone will submit
+ an ioctl struct with random stack garbage in the yet unused parts. Which
+ then bakes in the ABI that those fields can never be used for anything else
+ but garbage. This is also the reason why you must explicitly pad all
+ structures, even if you never use them in an array - the padding the compiler
+ might insert could contain garbage.
+
+ * Have simple testcases for all of the above.
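+
+A minimal sketch of that kind of checking (``FOO_SUPPORTED_FLAGS`` and the
+field names are made up; 'args' is the copied-in ioctl struct)::
+
+    if (args.flags & ~FOO_SUPPORTED_FLAGS)
+            return -EINVAL;         /* unknown flag bits */
+    if (args.pad || args.reserved)
+            return -EINVAL;         /* padding and reserved fields must be zero */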
+
+
+Fun with Error Paths
+--------------------
+
+Nowadays we don't have any excuse left for drm drivers being neat
+little root exploits. This means we both need full input validation and solid
+error handling paths - GPUs will die eventually in the oddmost corner cases
+anyway:
+
+ * The ioctl must check for array overflows. Also it needs to check for
+ over/underflows and clamping issues of integer values in general. The usual
+ example is sprite positioning values fed directly into the hardware with the
+ hardware just having 12 bits or so. Works nicely until some odd display
+ server doesn't bother with clamping itself and the cursor wraps around the
+ screen.
+
+ * Have simple testcases for every input validation failure case in your ioctl.
+ Check that the error code matches your expectations. And finally make sure
+ that you only test for one single error path in each subtest by submitting
+ otherwise perfectly valid data. Without this an earlier check might reject
+ the ioctl already and shadow the codepath you actually want to test, hiding
+ bugs and regressions.
+
+ * Make all your ioctls restartable. First, X really loves signals, and second,
+ this will allow you to test 90% of all error handling paths by just
+ interrupting your main test suite constantly with signals. Thanks to X's
+ love for signals you'll get an excellent base coverage of all your error
+ paths pretty much for free for graphics drivers (a sketch of a restartable
+ wait follows this list). Also, be consistent with how you handle ioctl
+ restarting - e.g. drm has a tiny drmIoctl helper in its userspace library.
+ The i915 driver botched this with the set_tiling ioctl; now we're stuck
+ forever with some arcane semantics in both the kernel and userspace.
+
+ * If you can't make a given codepath restartable, make a stuck task at least
+ killable. GPUs just die and your users won't like you more if you hang their
+ entire box (by means of an unkillable X process). If the state recovery is
+ still too tricky have a timeout or hangcheck safety net as a last-ditch
+ effort in case the hardware has gone bananas.
+
+ * Have testcases for the really tricky corner cases in your error recovery code
+ - it's way too easy to create a deadlock between your hangcheck code and
+ waiters.
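+
+A hedged sketch of the restartable pattern mentioned above - the wait queue
+and the condition helper are invented names::
+
+    ret = wait_event_interruptible(foo->fence_wq, foo_fence_signalled(foo));
+    if (ret)
+            return ret;     /* -ERESTARTSYS: the ioctl simply gets restarted */
+
+Returning -ERESTARTSYS before any state has been modified is what makes it
+safe to re-run the whole ioctl after the signal has been handled.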
+
+
+Time, Waiting and Missing it
+----------------------------
+
+GPUs do almost everything asynchronously, so we need to time operations and
+wait for outstanding ones. This is really tricky business; at the moment none of
+the ioctls supported by the drm/i915 get this fully right, which means there's
+still tons more lessons to learn here.
+
+ * Use CLOCK_MONOTONIC as your reference time, always. It's what alsa, drm and
+ v4l use by default nowadays. But let userspace know which timestamps are
+ derived from different clock domains like your main system clock (provided
+ by the kernel) or some independent hardware counter somewhere else. Clocks
+ will mismatch if you look close enough, but if performance measuring tools
+ have this information they can at least compensate. If your userspace can
+ get at the raw values of some clocks (e.g. through in-command-stream
+ performance counter sampling instructions) consider exposing those also.
+
+ * Use __s64 seconds plus __u64 nanoseconds to specify time. It's not the most
+ convenient time specification, but it's mostly the standard (see the sketch
+ after this list).
+
+ * Check that input time values are normalized and reject them if not. Note
+ that the kernel native struct ktime has a signed integer for both seconds
+ and nanoseconds, so beware here.
+
+ * For timeouts, use absolute times. If you're a good fellow and made your
+ ioctl restartable, relative timeouts tend to be too coarse and can
+ indefinitely extend your wait time due to rounding on each restart,
+ especially if your reference clock is something really slow like the display
+ frame counter. With a spec lawyer hat on this isn't a bug since timeouts can
+ always be extended - but users will surely hate you if their neat animations
+ start to stutter due to this.
+
+ * Consider ditching any synchronous wait ioctls with timeouts and just deliver
+ an asynchronous event on a pollable file descriptor. It fits much better
+ into event driven applications' main loop.
+
+ * Have testcases for corner-cases, especially whether the return values for
+ already-completed events, successful waits and timed-out waits are all sane
+ and suited to your needs.
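+
+A hedged sketch of such a timeout parameter - the struct name is invented,
+only the types and the normalization check follow the rules above::
+
+    struct foo_dummy_wait {
+            __s64 timeout_sec;      /* absolute CLOCK_MONOTONIC time */
+            __u64 timeout_nsec;
+    };
+
+    if (args->timeout_nsec >= NSEC_PER_SEC)
+            return -EINVAL;         /* not normalized */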
+
+
+Leaking Resources, Not
+----------------------
+
+A full-blown drm driver essentially implements a little OS, but specialized to
+the given GPU platforms. This means a driver needs to expose tons of handles
+for different objects and other resources to userspace. Doing that right
+entails its own little set of pitfalls:
+
+ * Always attach the lifetime of your dynamically created resources to the
+ lifetime of a file descriptor. Consider using a 1:1 mapping if your resource
+ needs to be shared across processes - fd-passing over unix domain sockets
+ also simplifies lifetime management for userspace.
+
+ * Always have O_CLOEXEC support.
+
+ * Ensure that you have sufficient insulation between different clients. By
+ default pick a private per-fd namespace which forces any sharing to be done
+ explicitly. Only go with a more global per-device namespace if the objects
+ are truly device-unique. One counterexample in the drm modeset interfaces is
+ that the per-device modeset objects like connectors share a namespace with
+ framebuffer objects, which mostly are not shared at all. A separate
+ namespace, private by default, for framebuffers would have been more
+ suitable.
+
+ * Think about uniqueness requirements for userspace handles. E.g. for most drm
+ drivers it's a userspace bug to submit the same object twice in the same
+ command submission ioctl. But then if objects are shareable userspace needs
+ to know whether it has seen an imported object from a different process
+ already or not. I haven't tried this myself yet due to lack of a new class
+ of objects, but consider using inode numbers on your shared file descriptors
+ as unique identifiers - it's how real files are told apart, too.
+ Unfortunately this requires a full-blown virtual filesystem in the kernel.
+
+
+Last, but not Least
+-------------------
+
+Not every problem needs a new ioctl:
+
+ * Think hard whether you really want a driver-private interface. Of course
+ it's much quicker to push a driver-private interface than engaging in
+ lengthy discussions for a more generic solution. And occasionally doing a
+ private interface to spearhead a new concept is what's required. But in the
+ end, once the generic interface comes around you'll end up maintaining two
+ interfaces. Indefinitely.
+
+ * Consider other interfaces than ioctls. A sysfs attribute is much better for
+ per-device settings, or for child objects with fairly static lifetimes (like
+ output connectors in drm with all the detection override attributes). Or
+ maybe only your testsuite needs this interface, and then debugfs with its
+ disclaimer of not having a stable ABI would be better.
+
+Finally, the name of the game is to get it right on the first attempt, since if
+your driver proves popular and your hardware platforms long-lived then you'll
+be stuck with a given ioctl essentially forever. You can try to deprecate
+horrible ioctls on newer iterations of your hardware, but generally it takes
+years to accomplish this. And then again years until the last user able to
+complain about regressions disappears, too.
diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
new file mode 100644
index 000000000..b48da698d
--- /dev/null
+++ b/Documentation/process/changes.rst
@@ -0,0 +1,571 @@
+.. _changes:
+
+Minimal requirements to compile the Kernel
+++++++++++++++++++++++++++++++++++++++++++
+
+Intro
+=====
+
+This document is designed to provide a list of the minimum levels of
+software necessary to run the current kernel version.
+
+This document is originally based on my "Changes" file for 2.0.x kernels
+and therefore owes credit to the same people as that file (Jared Mauch,
+Axel Boldt, Alessandro Sigala, and countless other users all over the
+'net).
+
+Current Minimal Requirements
+****************************
+
+Upgrade to at **least** these software revisions before thinking you've
+encountered a bug! If you're unsure what version you're currently
+running, the suggested command should tell you.
+
+Again, keep in mind that this list assumes you are already functionally
+running a Linux kernel. Also, not all tools are necessary on all
+systems; obviously, if you don't have any PC Card hardware, for example,
+you probably needn't concern yourself with pcmciautils.
+
+====================== =============== ========================================
+ Program Minimal version Command to check the version
+====================== =============== ========================================
+GNU C 5.1 gcc --version
+Clang/LLVM (optional) 11.0.0 clang --version
+Rust (optional) 1.71.1 rustc --version
+bindgen (optional) 0.65.1 bindgen --version
+GNU make 3.82 make --version
+bash 4.2 bash --version
+binutils 2.25 ld -v
+flex 2.5.35 flex --version
+bison 2.0 bison --version
+pahole 1.16 pahole --version
+util-linux 2.10o fdformat --version
+kmod 13 depmod -V
+e2fsprogs 1.41.4 e2fsck -V
+jfsutils 1.1.3 fsck.jfs -V
+reiserfsprogs 3.6.3 reiserfsck -V
+xfsprogs 2.6.0 xfs_db -V
+squashfs-tools 4.0 mksquashfs -version
+btrfs-progs 0.18 btrfsck
+pcmciautils 004 pccardctl -V
+quota-tools 3.09 quota -V
+PPP 2.4.0 pppd --version
+nfs-utils 1.0.5 showmount --version
+procps 3.2.0 ps --version
+udev 081 udevd --version
+grub 0.93 grub --version || grub-install --version
+mcelog 0.6 mcelog --version
+iptables 1.4.2 iptables -V
+openssl & libcrypto 1.0.0 openssl version
+bc 1.06.95 bc --version
+Sphinx\ [#f1]_ 1.7 sphinx-build --version
+cpio any cpio --version
+GNU tar 1.28 tar --version
+gtags (optional) 6.6.5 gtags --version
+====================== =============== ========================================
+
+.. [#f1] Sphinx is needed only to build the Kernel documentation
+
+Kernel compilation
+******************
+
+GCC
+---
+
+The gcc version requirements may vary depending on the type of CPU in your
+computer.
+
+Clang/LLVM (optional)
+---------------------
+
+The latest formal release of clang and LLVM utils (according to
+`releases.llvm.org <https://releases.llvm.org>`_) is supported for building
+kernels. Older releases aren't guaranteed to work, and we may drop workarounds
+from the kernel that were used to support older versions. Please see additional
+docs on :ref:`Building Linux with Clang/LLVM <kbuild_llvm>`.
+
+Rust (optional)
+---------------
+
+A particular version of the Rust toolchain is required. Newer versions may or
+may not work because the kernel depends on some unstable Rust features, for
+the moment.
+
+Each Rust toolchain comes with several "components", some of which are required
+(like ``rustc``) and some that are optional. The ``rust-src`` component (which
+is optional) needs to be installed to build the kernel. Other components are
+useful for developing.
+
+Please see Documentation/rust/quick-start.rst for instructions on how to
+satisfy the build requirements of Rust support. In particular, the ``Makefile``
+target ``rustavailable`` is useful to check why the Rust toolchain may not
+be detected.
+
+bindgen (optional)
+------------------
+
+``bindgen`` is used to generate the Rust bindings to the C side of the kernel.
+It depends on ``libclang``.
+
+Make
+----
+
+You will need GNU make 3.82 or later to build the kernel.
+
+Bash
+----
+
+Some bash scripts are used for the kernel build.
+Bash 4.2 or newer is needed.
+
+Binutils
+--------
+
+Binutils 2.25 or newer is needed to build the kernel.
+
+pkg-config
+----------
+
+The build system, as of 4.18, requires pkg-config to check for installed
+kconfig tools and to determine flag settings for use in
+'make {g,x}config'. Previously pkg-config was being used but not
+verified or documented.
+
+Flex
+----
+
+Since Linux 4.16, the build system generates lexical analyzers
+during build. This requires flex 2.5.35 or later.
+
+
+Bison
+-----
+
+Since Linux 4.16, the build system generates parsers
+during build. This requires bison 2.0 or later.
+
+pahole
+------
+
+Since Linux 5.2, if CONFIG_DEBUG_INFO_BTF is selected, the build system
+generates BTF (BPF Type Format) from DWARF in vmlinux and, in later kernel
+versions, from kernel modules as well. This requires pahole v1.16 or later.
+
+It is found in the 'dwarves' or 'pahole' distro packages or from
+https://fedorapeople.org/~acme/dwarves/.
+
+Perl
+----
+
+You will need perl 5 and the following modules: ``Getopt::Long``,
+``Getopt::Std``, ``File::Basename``, and ``File::Find`` to build the kernel.
+
+BC
+--
+
+You will need bc to build kernels 3.10 and higher.
+
+
+OpenSSL
+-------
+
+Module signing and external certificate handling use the OpenSSL program and
+crypto library to do key creation and signature generation.
+
+You will need openssl to build kernels 3.7 and higher if module signing is
+enabled. You will also need openssl development packages to build kernels 4.3
+and higher.
+
+Tar
+---
+
+GNU tar is needed if you want to enable access to the kernel headers via sysfs
+(CONFIG_IKHEADERS).
+
+gtags / GNU GLOBAL (optional)
+-----------------------------
+
+The kernel build requires GNU GLOBAL version 6.6.5 or later to generate
+tag files through ``make gtags``. This is due to its use of the gtags
+``-C (--directory)`` flag.
+
+System utilities
+****************
+
+Architectural changes
+---------------------
+
+DevFS has been obsoleted in favour of udev
+(https://www.kernel.org/pub/linux/utils/kernel/hotplug/)
+
+32-bit UID support is now in place. Have fun!
+
+Linux documentation for functions is transitioning to inline
+documentation via specially-formatted comments near their
+definitions in the source. These comments can be combined with ReST
+files in the Documentation/ directory to make enriched documentation, which can
+then be converted to PostScript, HTML, LaTeX, ePUB and PDF files.
+In order to convert from ReST format to a format of your choice, you'll need
+Sphinx.
+
+Util-linux
+----------
+
+New versions of util-linux provide ``fdisk`` support for larger disks,
+support new options to mount, recognize more supported partition
+types, have a fdformat which works with 2.4 kernels, and similar goodies.
+You'll probably want to upgrade.
+
+Ksymoops
+--------
+
+If the unthinkable happens and your kernel oopses, you may need the
+ksymoops tool to decode it, but in most cases you don't.
+It is generally preferred to build the kernel with ``CONFIG_KALLSYMS`` so
+that it produces readable dumps that can be used as-is (this also
+produces better output than ksymoops). If for some reason your kernel
+is not built with ``CONFIG_KALLSYMS`` and you have no way to rebuild and
+reproduce the Oops with that option, then you can still decode that Oops
+with ksymoops.
+
+Mkinitrd
+--------
+
+These changes to the ``/lib/modules`` file tree layout also require that
+mkinitrd be upgraded.
+
+E2fsprogs
+---------
+
+The latest version of ``e2fsprogs`` fixes several bugs in fsck and
+debugfs. Obviously, it's a good idea to upgrade.
+
+JFSutils
+--------
+
+The ``jfsutils`` package contains the utilities for the file system.
+The following utilities are available:
+
+- ``fsck.jfs`` - initiate replay of the transaction log, and check
+ and repair a JFS formatted partition.
+
+- ``mkfs.jfs`` - create a JFS formatted partition.
+
+- other file system utilities are also available in this package.
+
+Reiserfsprogs
+-------------
+
+The reiserfsprogs package should be used for reiserfs-3.6.x
+(Linux kernels 2.4.x). It is a combined package and contains working
+versions of ``mkreiserfs``, ``resize_reiserfs``, ``debugreiserfs`` and
+``reiserfsck``. These utils work on both i386 and alpha platforms.
+
+Xfsprogs
+--------
+
+The latest version of ``xfsprogs`` contains ``mkfs.xfs``, ``xfs_db``, and the
+``xfs_repair`` utilities, among others, for the XFS filesystem. It is
+architecture independent and any version from 2.0.0 onward should
+work correctly with this version of the XFS kernel code (2.6.0 or
+later is recommended, due to some significant improvements).
+
+PCMCIAutils
+-----------
+
+PCMCIAutils replaces ``pcmcia-cs``. It properly sets up
+PCMCIA sockets at system startup and loads the appropriate modules
+for 16-bit PCMCIA devices if the kernel is modularized and the hotplug
+subsystem is used.
+
+Quota-tools
+-----------
+
+Support for 32-bit UIDs and GIDs is required if you want to use
+the newer version 2 quota format. Quota-tools version 3.07 and
+newer have this support. Use the recommended version or newer
+from the table above.
+
+Intel IA32 microcode
+--------------------
+
+A driver has been added to allow updating of Intel IA32 microcode,
+accessible as a normal (misc) character device. If you are not using
+udev you may need to::
+
+ mkdir /dev/cpu
+ mknod /dev/cpu/microcode c 10 184
+ chmod 0644 /dev/cpu/microcode
+
+as root before you can use this. You'll probably also want to
+get the user-space microcode_ctl utility to use with this.
+
+udev
+----
+
+``udev`` is a userspace application for populating ``/dev`` dynamically with
+only entries for devices actually present. ``udev`` replaces the basic
+functionality of devfs, while allowing persistent device naming for
+devices.
+
+FUSE
+----
+
+Needs libfuse 2.4.0 or later. Absolute minimum is 2.3.0 but mount
+options ``direct_io`` and ``kernel_cache`` won't work.
+
+Networking
+**********
+
+General changes
+---------------
+
+If you have advanced network configuration needs, you should probably
+consider using the network tools from ip-route2.
+
+Packet Filter / NAT
+-------------------
+The packet filtering and NAT code uses the same tools as the previous 2.4.x
+kernel series (iptables). It still includes backwards-compatibility modules
+for 2.2.x-style ipchains and 2.0.x-style ipfwadm.
+
+PPP
+---
+
+The PPP driver has been restructured to support multilink and to
+enable it to operate over diverse media layers. If you use PPP,
+upgrade pppd to at least 2.4.0.
+
+If you are not using udev, you must have the device file /dev/ppp
+which can be made by::
+
+ mknod /dev/ppp c 108 0
+
+as root.
+
+NFS-utils
+---------
+
+In ancient (2.4 and earlier) kernels, the nfs server needed to know
+about any client that expected to be able to access files via NFS. This
+information would be given to the kernel by ``mountd`` when the client
+mounted the filesystem, or by ``exportfs`` at system startup. exportfs
+would take information about active clients from ``/var/lib/nfs/rmtab``.
+
+This approach is quite fragile as it depends on rmtab being correct
+which is not always easy, particularly when trying to implement
+fail-over. Even when the system is working well, ``rmtab`` suffers from
+getting lots of old entries that never get removed.
+
+With modern kernels we have the option of having the kernel tell mountd
+when it gets a request from an unknown host, and mountd can give
+appropriate export information to the kernel. This removes the
+dependency on ``rmtab`` and means that the kernel only needs to know about
+currently active clients.
+
+To enable this new functionality, you need to::
+
+ mount -t nfsd nfsd /proc/fs/nfsd
+
+before running exportfs or mountd. It is recommended that all NFS
+services be protected from the internet-at-large by a firewall where
+that is possible.
+
+mcelog
+------
+
+On x86 kernels the mcelog utility is needed to process and log machine check
+events when ``CONFIG_X86_MCE`` is enabled. Machine check events are errors
+reported by the CPU. Processing them is strongly encouraged.
+
+Kernel documentation
+********************
+
+Sphinx
+------
+
+Please see :ref:`sphinx_install` in :ref:`Documentation/doc-guide/sphinx.rst <sphinxdoc>`
+for details about Sphinx requirements.
+
+rustdoc
+-------
+
+``rustdoc`` is used to generate the documentation for Rust code. Please see
+Documentation/rust/general-information.rst for more information.
+
+Getting updated software
+========================
+
+Kernel compilation
+******************
+
+gcc
+---
+
+- <ftp://ftp.gnu.org/gnu/gcc/>
+
+Clang/LLVM
+----------
+
+- :ref:`Getting LLVM <getting_llvm>`.
+
+Rust
+----
+
+- Documentation/rust/quick-start.rst.
+
+bindgen
+-------
+
+- Documentation/rust/quick-start.rst.
+
+Make
+----
+
+- <ftp://ftp.gnu.org/gnu/make/>
+
+Bash
+----
+
+- <ftp://ftp.gnu.org/gnu/bash/>
+
+Binutils
+--------
+
+- <https://www.kernel.org/pub/linux/devel/binutils/>
+
+Flex
+----
+
+- <https://github.com/westes/flex/releases>
+
+Bison
+-----
+
+- <ftp://ftp.gnu.org/gnu/bison/>
+
+OpenSSL
+-------
+
+- <https://www.openssl.org/>
+
+System utilities
+****************
+
+Util-linux
+----------
+
+- <https://www.kernel.org/pub/linux/utils/util-linux/>
+
+Kmod
+----
+
+- <https://www.kernel.org/pub/linux/utils/kernel/kmod/>
+- <https://git.kernel.org/pub/scm/utils/kernel/kmod/kmod.git>
+
+Ksymoops
+--------
+
+- <https://www.kernel.org/pub/linux/utils/kernel/ksymoops/v2.4/>
+
+Mkinitrd
+--------
+
+- <https://code.launchpad.net/initrd-tools/main>
+
+E2fsprogs
+---------
+
+- <https://www.kernel.org/pub/linux/kernel/people/tytso/e2fsprogs/>
+- <https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/>
+
+JFSutils
+--------
+
+- <https://jfs.sourceforge.net/>
+
+Reiserfsprogs
+-------------
+
+- <https://git.kernel.org/pub/scm/linux/kernel/git/jeffm/reiserfsprogs.git/>
+
+Xfsprogs
+--------
+
+- <https://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git>
+- <https://www.kernel.org/pub/linux/utils/fs/xfs/xfsprogs/>
+
+Pcmciautils
+-----------
+
+- <https://www.kernel.org/pub/linux/utils/kernel/pcmcia/>
+
+Quota-tools
+-----------
+
+- <https://sourceforge.net/projects/linuxquota/>
+
+
+Intel P6 microcode
+------------------
+
+- <https://downloadcenter.intel.com/>
+
+udev
+----
+
+- <https://www.freedesktop.org/software/systemd/man/udev.html>
+
+FUSE
+----
+
+- <https://github.com/libfuse/libfuse/releases>
+
+mcelog
+------
+
+- <https://www.mcelog.org/>
+
+cpio
+----
+
+- <https://www.gnu.org/software/cpio/>
+
+Networking
+**********
+
+PPP
+---
+
+- <https://download.samba.org/pub/ppp/>
+- <https://git.ozlabs.org/?p=ppp.git>
+- <https://github.com/paulusmack/ppp/>
+
+NFS-utils
+---------
+
+- <https://sourceforge.net/project/showfiles.php?group_id=14>
+- <https://nfs.sourceforge.net/>
+
+Iptables
+--------
+
+- <https://netfilter.org/projects/iptables/index.html>
+
+Ip-route2
+---------
+
+- <https://www.kernel.org/pub/linux/utils/net/iproute2/>
+
+OProfile
+--------
+
+- <https://oprofile.sf.net/download/>
+
+Kernel documentation
+********************
+
+Sphinx
+------
+
+- <https://www.sphinx-doc.org/>
diff --git a/Documentation/process/clang-format.rst b/Documentation/process/clang-format.rst
new file mode 100644
index 000000000..1d089a847
--- /dev/null
+++ b/Documentation/process/clang-format.rst
@@ -0,0 +1,184 @@
+.. _clangformat:
+
+clang-format
+============
+
+``clang-format`` is a tool to format C/C++/... code according to
+a set of rules and heuristics. Like most tools, it is not perfect,
+nor does it cover every single case, but it is good enough to be helpful.
+
+``clang-format`` can be used for several purposes:
+
+ - Quickly reformat a block of code to the kernel style. Especially useful
+ when moving code around and aligning/sorting. See clangformatreformat_.
+
+ - Spot style mistakes, typos and possible improvements in files
+ you maintain, patches you review, diffs, etc. See clangformatreview_.
+
+ - Help you follow the coding style rules, especially useful for those
+ new to kernel development or working at the same time in several
+ projects with different coding styles.
+
+Its configuration file is ``.clang-format`` in the root of the kernel tree.
+The rules contained there try to approximate the most common kernel
+coding style. They also try to follow :ref:`Documentation/process/coding-style.rst <codingstyle>`
+as much as possible. Since not all the kernel follows the same style,
+it is possible that you may want to tweak the defaults for a particular
+subsystem or folder. To do so, you can override the defaults by writing
+another ``.clang-format`` file in a subfolder.
+
+The tool itself has already been included in the repositories of popular
+Linux distributions for a long time. Search for ``clang-format`` in
+your repositories. Otherwise, you can either download pre-built
+LLVM/clang binaries or build the source code from:
+
+ https://releases.llvm.org/download.html
+
+See more information about the tool at:
+
+ https://clang.llvm.org/docs/ClangFormat.html
+
+ https://clang.llvm.org/docs/ClangFormatStyleOptions.html
+
+
+.. _clangformatreview:
+
+Review files and patches for coding style
+-----------------------------------------
+
+By running the tool in its inline mode, you can review full subsystems,
+folders or individual files for code style mistakes, typos or improvements.
+
+To do so, you can run something like::
+
+ # Make sure your working directory is clean!
+ clang-format -i kernel/*.[ch]
+
+And then take a look at the git diff.
+
+Counting the lines of such a diff is also useful for improving/tweaking
+the style options in the configuration file, as well as for testing new
+``clang-format`` features/versions.
+
+``clang-format`` also supports reading unified diffs, so you can review
+patches and git diffs easily. See the documentation at:
+
+ https://clang.llvm.org/docs/ClangFormat.html#script-for-patch-reformatting
+
+To avoid ``clang-format`` formatting some portion of a file, you can do::
+
+ int formatted_code;
+ // clang-format off
+ void unformatted_code ;
+ // clang-format on
+ void formatted_code_again;
+
+While it might be tempting to use this to keep a file always in sync with
+``clang-format``, especially if you are writing new files or if you are
+a maintainer, please note that people might be running different
+``clang-format`` versions or not have it available at all. Therefore,
+you should probably refrain from using this in kernel sources;
+at least until we see if ``clang-format`` becomes commonplace.
+
+
+.. _clangformatreformat:
+
+Reformatting blocks of code
+---------------------------
+
+By using an integration with your text editor, you can reformat arbitrary
+blocks (selections) of code with a single keystroke. This is especially
+useful when moving code around, for complex code that is deeply indented,
+for multi-line macros (and aligning their backslashes), etc.
+
+Remember that you can always tweak the changes afterwards in those cases
+where the tool did not do an optimal job. But as a first approximation,
+it can be very useful.
+
+There are integrations for many popular text editors. For some of them,
+like vim, emacs, BBEdit and Visual Studio, you can find support built-in.
+For instructions, read the appropriate section at:
+
+ https://clang.llvm.org/docs/ClangFormat.html
+
+For Atom, Eclipse, Sublime Text, Visual Studio Code, XCode and other
+editors and IDEs you should be able to find ready-to-use plugins.
+
+For this use case, consider using a secondary ``.clang-format``
+so that you can tweak a few options. See clangformatextra_.
+
+
+.. _clangformatmissing:
+
+Missing support
+---------------
+
+``clang-format`` is missing support for some things that are common
+in kernel code. They are easy to remember, so if you use the tool
+regularly, you will quickly learn to avoid/ignore those.
+
+In particular, some very common ones you will notice are:
+
+ - Aligned blocks of one-line ``#defines``, e.g.::
+
+ #define TRACING_MAP_BITS_DEFAULT 11
+ #define TRACING_MAP_BITS_MAX 17
+ #define TRACING_MAP_BITS_MIN 7
+
+ vs.::
+
+ #define TRACING_MAP_BITS_DEFAULT 11
+ #define TRACING_MAP_BITS_MAX 17
+ #define TRACING_MAP_BITS_MIN 7
+
+ - Aligned designated initializers, e.g.::
+
+ static const struct file_operations uprobe_events_ops = {
+ .owner = THIS_MODULE,
+ .open = probes_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+ .write = probes_write,
+ };
+
+ vs.::
+
+ static const struct file_operations uprobe_events_ops = {
+ .owner = THIS_MODULE,
+ .open = probes_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+ .write = probes_write,
+ };
+
+
+.. _clangformatextra:
+
+Extra features/options
+----------------------
+
+Some features/style options are not enabled by default in the configuration
+file in order to minimize the differences between the output and the current
+code. In other words, to make the difference as small as possible, which makes
+reviewing full-file style, as well as diffs and patches, as easy as possible.
+
+In other cases (e.g. particular subsystems/folders/files), the kernel style
+might be different, and enabling some of these options may better approximate
+the style used there.
+
+For instance:
+
+ - Aligning assignments (``AlignConsecutiveAssignments``).
+
+ - Aligning declarations (``AlignConsecutiveDeclarations``).
+
+ - Reflowing text in comments (``ReflowComments``).
+
+ - Sorting ``#includes`` (``SortIncludes``).
+
+They are typically useful for block re-formatting, rather than full-file.
+You might want to create another ``.clang-format`` file and use that one
+from your editor/IDE instead.
diff --git a/Documentation/process/code-of-conduct-interpretation.rst b/Documentation/process/code-of-conduct-interpretation.rst
new file mode 100644
index 000000000..66b07f147
--- /dev/null
+++ b/Documentation/process/code-of-conduct-interpretation.rst
@@ -0,0 +1,158 @@
+.. _code_of_conduct_interpretation:
+
+Linux Kernel Contributor Covenant Code of Conduct Interpretation
+================================================================
+
+The :ref:`code_of_conduct` is a general document meant to
+provide a set of rules for almost any open source community. Every
+open-source community is unique and the Linux kernel is no exception.
+Because of this, this document describes how we in the Linux kernel
+community will interpret it. We also do not expect this interpretation
+to be static over time, and will adjust it as needed.
+
+The Linux kernel development effort is a very personal process compared
+to "traditional" ways of developing software. Your contributions and
+ideas behind them will be carefully reviewed, often resulting in
+critique and criticism. The review will almost always require
+improvements before the material can be included in the
+kernel. Know that this happens because everyone involved wants to see
+the best possible solution for the overall success of Linux. This
+development process has been proven to create the most robust operating
+system kernel ever, and we do not want to do anything to cause the
+quality of submission and eventual result to ever decrease.
+
+Maintainers
+-----------
+
+The Code of Conduct uses the term "maintainers" numerous times. In the
+kernel community, a "maintainer" is anyone who is responsible for a
+subsystem, driver, or file, and is listed in the MAINTAINERS file in the
+kernel source tree.
+
+Responsibilities
+----------------
+
+The Code of Conduct mentions rights and responsibilities for
+maintainers, and this needs some further clarifications.
+
+First and foremost, it is a reasonable expectation to have maintainers
+lead by example.
+
+That being said, our community is vast and broad, and there is no new
+requirement for maintainers to unilaterally handle how other people
+behave in the parts of the community where they are active. That
+responsibility is upon all of us, and ultimately the Code of Conduct
+documents final escalation paths in case of unresolved concerns
+regarding conduct issues.
+
+Maintainers should be willing to help when problems occur, and work with
+others in the community when needed. Do not be afraid to reach out to
+the Technical Advisory Board (TAB) or other maintainers if you're
+uncertain how to handle situations that come up. It will not be
+considered a violation report unless you want it to be. If you are
+uncertain about approaching the TAB or any other maintainers, please
+reach out to our conflict mediator, Joanna Lee <jlee@linuxfoundation.org>.
+
+In the end, "be kind to each other" is really what the end goal is for
+everybody. We know everyone is human and we all fail at times, but the
+primary goal for all of us should be to work toward amicable resolutions
+of problems. Enforcement of the code of conduct will only be a last
+resort option.
+
+Our goal of creating a robust and technically advanced operating system
+and the technical complexity involved naturally require expertise and
+decision-making.
+
+The required expertise varies depending on the area of contribution. It
+is determined mainly by context and technical complexity and only
+secondarily by the expectations of contributors and maintainers.
+
+Both the expertise expectations and decision-making are subject to
+discussion, but at the very end there is a basic necessity to be able to
+make decisions in order to make progress. This prerogative is in the
+hands of maintainers and project's leadership and is expected to be used
+in good faith.
+
+As a consequence, setting expertise expectations, making decisions and
+rejecting unsuitable contributions are not viewed as a violation of the
+Code of Conduct.
+
+While maintainers are in general welcoming to newcomers, their capacity
+of helping contributors overcome the entry hurdles is limited, so they
+have to set priorities. This, also, is not to be seen as a violation of
+the Code of Conduct. The kernel community is aware of that and provides
+entry level programs in various forms like kernelnewbies.org.
+
+Scope
+-----
+
+The Linux kernel community primarily interacts on a set of public email
+lists distributed around a number of different servers controlled by a
+number of different companies or individuals. All of these lists are
+defined in the MAINTAINERS file in the kernel source tree. Any emails
+sent to those mailing lists are considered covered by the Code of
+Conduct.
+
+Developers who use the kernel.org bugzilla, and other subsystem bugzilla
+or bug tracking tools should follow the guidelines of the Code of
+Conduct. The Linux kernel community does not have an "official" project
+email address, or "official" social media address. Any activity
+performed using a kernel.org email account must follow the Code of
+Conduct as published for kernel.org, just as any individual using a
+corporate email account must follow the specific rules of that
+corporation.
+
+The Code of Conduct does not prohibit continuing to include names, email
+addresses, and associated comments in mailing list messages, kernel
+change log messages, or code comments.
+
+Interaction in other forums is covered by whatever rules apply to said
+forums and is in general not covered by the Code of Conduct. Exceptions
+may be considered for extreme circumstances.
+
+Contributions submitted for the kernel should use appropriate language.
+Content that already exists predating the Code of Conduct will not be
+addressed now as a violation. Inappropriate language can be seen as a
+bug, though; such bugs will be fixed more quickly if any interested
+parties submit patches to that effect. Expressions that are currently
+part of the user/kernel API, or reflect terminology used in published
+standards or specifications, are not considered bugs.
+
+Enforcement
+-----------
+
+The address listed in the Code of Conduct goes to the Code of Conduct
+Committee. The exact members receiving these emails at any given time
+are listed at https://kernel.org/code-of-conduct.html. Members cannot
+access reports made before they joined or after they have left the
+committee.
+
+The Code of Conduct Committee consists of volunteer community members
+appointed by the TAB, as well as a professional mediator acting as a
+neutral third party. The processes the Code of Conduct Committee will
+use to address reports are varied and will depend on the individual
+circumstances; however, this file serves as documentation for the
+general process used.
+
+Any member of the committee, including the mediator, can be contacted
+directly if a reporter does not wish to include the full committee in a
+complaint or concern.
+
+The Code of Conduct Committee reviews the cases according to the
+processes (see above) and consults with the TAB as needed and
+appropriate, for instance to request and receive information about the
+kernel community.
+
+Any decisions regarding enforcement recommendations will be brought to
+the TAB for implementation of enforcement with the relevant maintainers
+if needed. A decision by the Code of Conduct Committee can be overturned
+by the TAB by a two-thirds vote.
+
+At quarterly intervals, the Code of Conduct Committee and TAB will
+provide a report summarizing the anonymised reports that the Code of
+Conduct Committee has received and their status, as well as details of any
+overridden decisions including complete and identifiable voting details.
+
+Because how we interpret and enforce the Code of Conduct will evolve over
+time, this document will be updated when necessary to reflect any
+changes.
diff --git a/Documentation/process/code-of-conduct.rst b/Documentation/process/code-of-conduct.rst
new file mode 100644
index 000000000..be50294ae
--- /dev/null
+++ b/Documentation/process/code-of-conduct.rst
@@ -0,0 +1,86 @@
+.. _code_of_conduct:
+
+Contributor Covenant Code of Conduct
+++++++++++++++++++++++++++++++++++++
+
+Our Pledge
+==========
+
+In the interest of fostering an open and welcoming environment, we as
+contributors and maintainers pledge to making participation in our project and
+our community a harassment-free experience for everyone, regardless of age, body
+size, disability, ethnicity, sex characteristics, gender identity and
+expression, level of experience, education, socio-economic status, nationality,
+personal appearance, race, religion, or sexual identity and orientation.
+
+Our Standards
+=============
+
+Examples of behavior that contributes to creating a positive environment
+include:
+
+* Using welcoming and inclusive language
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy towards other community members
+
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or
+ advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others’ private information, such as a physical or electronic
+ address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a
+ professional setting
+
+
+Our Responsibilities
+====================
+
+Maintainers are responsible for clarifying the standards of acceptable behavior
+and are expected to take appropriate and fair corrective action in response to
+any instances of unacceptable behavior.
+
+Maintainers have the right and responsibility to remove, edit, or reject
+comments, commits, code, wiki edits, issues, and other contributions that are
+not aligned to this Code of Conduct, or to ban temporarily or permanently any
+contributor for other behaviors that they deem inappropriate, threatening,
+offensive, or harmful.
+
+Scope
+=====
+
+This Code of Conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community. Examples of
+representing a project or community include using an official project e-mail
+address, posting via an official social media account, or acting as an appointed
+representative at an online or offline event. Representation of a project may be
+further defined and clarified by project maintainers.
+
+Enforcement
+===========
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting the Code of Conduct Committee at
+<conduct@kernel.org>. All complaints will be reviewed and investigated
+and will result in a response that is deemed necessary and appropriate
+to the circumstances. The Code of Conduct Committee is obligated to
+maintain confidentiality with regard to the reporter of an incident.
+Further details of specific enforcement policies may be posted
+separately.
+
+Attribution
+===========
+
+This Code of Conduct is adapted from the Contributor Covenant, version 1.4,
+available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+
+Interpretation
+==============
+
+See the :ref:`code_of_conduct_interpretation` document for how the Linux
+kernel community will be interpreting this document.
diff --git a/Documentation/process/coding-style.rst b/Documentation/process/coding-style.rst
new file mode 100644
index 000000000..6db37a46d
--- /dev/null
+++ b/Documentation/process/coding-style.rst
@@ -0,0 +1,1271 @@
+.. _codingstyle:
+
+Linux kernel coding style
+=========================
+
+This is a short document describing the preferred coding style for the
+linux kernel. Coding style is very personal, and I won't **force** my
+views on anybody, but this is what goes for anything that I have to be
+able to maintain, and I'd prefer it for most other things too. Please
+at least consider the points made here.
+
+First off, I'd suggest printing out a copy of the GNU coding standards,
+and NOT read it. Burn them, it's a great symbolic gesture.
+
+Anyway, here goes:
+
+
+1) Indentation
+--------------
+
+Tabs are 8 characters, and thus indentations are also 8 characters.
+There are heretic movements that try to make indentations 4 (or even 2!)
+characters deep, and that is akin to trying to define the value of PI to
+be 3.
+
+Rationale: The whole idea behind indentation is to clearly define where
+a block of control starts and ends. Especially when you've been looking
+at your screen for 20 straight hours, you'll find it a lot easier to see
+how the indentation works if you have large indentations.
+
+Now, some people will claim that having 8-character indentations makes
+the code move too far to the right, and makes it hard to read on an
+80-character terminal screen. The answer to that is that if you need
+more than 3 levels of indentation, you're screwed anyway, and should fix
+your program.
+
+In short, 8-char indents make things easier to read, and have the added
+benefit of warning you when you're nesting your functions too deep.
+Heed that warning.
+
+The preferred way to ease multiple indentation levels in a switch statement is
+to align the ``switch`` and its subordinate ``case`` labels in the same column
+instead of ``double-indenting`` the ``case`` labels. E.g.:
+
+.. code-block:: c
+
+ switch (suffix) {
+ case 'G':
+ case 'g':
+ mem <<= 30;
+ break;
+ case 'M':
+ case 'm':
+ mem <<= 20;
+ break;
+ case 'K':
+ case 'k':
+ mem <<= 10;
+ fallthrough;
+ default:
+ break;
+ }
+
+Don't put multiple statements on a single line unless you have
+something to hide:
+
+.. code-block:: c
+
+ if (condition) do_this;
+ do_something_everytime;
+
+Don't use commas to avoid using braces:
+
+.. code-block:: c
+
+ if (condition)
+ do_this(), do_that();
+
+Always use braces for multiple statements:
+
+.. code-block:: c
+
+ if (condition) {
+ do_this();
+ do_that();
+ }
+
+Don't put multiple assignments on a single line either. Kernel coding style
+is super simple. Avoid tricky expressions.
+
+
+Outside of comments, documentation and except in Kconfig, spaces are never
+used for indentation, and the first example above is deliberately broken.
+
+Get a decent editor and don't leave whitespace at the end of lines.
+
+
+2) Breaking long lines and strings
+----------------------------------
+
+Coding style is all about readability and maintainability using commonly
+available tools.
+
+The preferred limit on the length of a single line is 80 columns.
+
+Statements longer than 80 columns should be broken into sensible chunks,
+unless exceeding 80 columns significantly increases readability and does
+not hide information.
+
+Descendants are always substantially shorter than the parent and
+are placed substantially to the right. A very commonly used style
+is to align descendants to a function open parenthesis.
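+
+For instance, a long call would typically be wrapped like this (the function
+and argument names are made up purely for illustration):
+
+.. code-block:: c
+
+        ret = do_frobnicate(device, buffer, length, offset,
+                            flags);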
+
+These same rules are applied to function headers with a long argument list.
+
+However, never break user-visible strings such as printk messages because
+that breaks the ability to grep for them.
+
+
+3) Placing Braces and Spaces
+----------------------------
+
+The other issue that always comes up in C styling is the placement of
+braces. Unlike the indent size, there are few technical reasons to
+choose one placement strategy over the other, but the preferred way, as
+shown to us by the prophets Kernighan and Ritchie, is to put the opening
+brace last on the line, and put the closing brace first, thusly:
+
+.. code-block:: c
+
+ if (x is true) {
+ we do y
+ }
+
+This applies to all non-function statement blocks (if, switch, for,
+while, do). E.g.:
+
+.. code-block:: c
+
+ switch (action) {
+ case KOBJ_ADD:
+ return "add";
+ case KOBJ_REMOVE:
+ return "remove";
+ case KOBJ_CHANGE:
+ return "change";
+ default:
+ return NULL;
+ }
+
+However, there is one special case, namely functions: they have the
+opening brace at the beginning of the next line, thus:
+
+.. code-block:: c
+
+ int function(int x)
+ {
+ body of function
+ }
+
+Heretic people all over the world have claimed that this inconsistency
+is ... well ... inconsistent, but all right-thinking people know that
+(a) K&R are **right** and (b) K&R are right. Besides, functions are
+special anyway (you can't nest them in C).
+
+Note that the closing brace is empty on a line of its own, **except** in
+the cases where it is followed by a continuation of the same statement,
+ie a ``while`` in a do-statement or an ``else`` in an if-statement, like
+this:
+
+.. code-block:: c
+
+ do {
+ body of do-loop
+ } while (condition);
+
+and
+
+.. code-block:: c
+
+ if (x == y) {
+ ..
+ } else if (x > y) {
+ ...
+ } else {
+ ....
+ }
+
+Rationale: K&R.
+
+Also, note that this brace-placement also minimizes the number of empty
+(or almost empty) lines, without any loss of readability. Thus, as the
+supply of new-lines on your screen is not a renewable resource (think
+25-line terminal screens here), you have more empty lines to put
+comments on.
+
+Do not unnecessarily use braces where a single statement will do.
+
+.. code-block:: c
+
+ if (condition)
+ action();
+
+and
+
+.. code-block:: none
+
+ if (condition)
+ do_this();
+ else
+ do_that();
+
+This does not apply if only one branch of a conditional statement is a single
+statement; in the latter case use braces in both branches:
+
+.. code-block:: c
+
+ if (condition) {
+ do_this();
+ do_that();
+ } else {
+ otherwise();
+ }
+
+Also, use braces when a loop contains more than a single simple statement:
+
+.. code-block:: c
+
+ while (condition) {
+ if (test)
+ do_something();
+ }
+
+3.1) Spaces
+***********
+
+Linux kernel style for use of spaces depends (mostly) on
+function-versus-keyword usage. Use a space after (most) keywords. The
+notable exceptions are sizeof, typeof, alignof, and __attribute__, which look
+somewhat like functions (and are usually used with parentheses in Linux,
+although they are not required in the language, as in: ``sizeof info`` after
+``struct fileinfo info;`` is declared).
+
+So use a space after these keywords::
+
+ if, switch, case, for, do, while
+
+but not with sizeof, typeof, alignof, or __attribute__. E.g.,
+
+.. code-block:: c
+
+
+ s = sizeof(struct file);
+
+Do not add spaces around (inside) parenthesized expressions. This example is
+**bad**:
+
+.. code-block:: c
+
+
+ s = sizeof( struct file );
+
+When declaring pointer data or a function that returns a pointer type, the
+preferred use of ``*`` is adjacent to the data name or function name and not
+adjacent to the type name. Examples:
+
+.. code-block:: c
+
+
+ char *linux_banner;
+ unsigned long long memparse(char *ptr, char **retptr);
+ char *match_strdup(substring_t *s);
+
+Use one space around (on each side of) most binary and ternary operators,
+such as any of these::
+
+ = + - < > * / % | & ^ <= >= == != ? :
+
+but no space after unary operators::
+
+ & * + - ~ ! sizeof typeof alignof __attribute__ defined
+
+no space before the postfix increment & decrement unary operators::
+
+ ++ --
+
+no space after the prefix increment & decrement unary operators::
+
+ ++ --
+
+and no space around the ``.`` and ``->`` structure member operators.
+
+Do not leave trailing whitespace at the ends of lines. Some editors with
+``smart`` indentation will insert whitespace at the beginning of new lines as
+appropriate, so you can start typing the next line of code right away.
+However, some such editors do not remove the whitespace if you end up not
+putting a line of code there, such as if you leave a blank line. As a result,
+you end up with lines containing trailing whitespace.
+
+Git will warn you about patches that introduce trailing whitespace, and can
+optionally strip the trailing whitespace for you; however, if applying a series
+of patches, this may make later patches in the series fail by changing their
+context lines.
+
+
+4) Naming
+---------
+
+C is a Spartan language, and your naming conventions should follow suit.
+Unlike Modula-2 and Pascal programmers, C programmers do not use cute
+names like ThisVariableIsATemporaryCounter. A C programmer would call that
+variable ``tmp``, which is much easier to write, and not the least more
+difficult to understand.
+
+HOWEVER, while mixed-case names are frowned upon, descriptive names for
+global variables are a must. To call a global function ``foo`` is a
+shooting offense.
+
+GLOBAL variables (to be used only if you **really** need them) need to
+have descriptive names, as do global functions. If you have a function
+that counts the number of active users, you should call that
+``count_active_users()`` or similar, you should **not** call it ``cntusr()``.
+
+Encoding the type of a function into the name (so-called Hungarian
+notation) is asinine - the compiler knows the types anyway and can check
+those, and it only confuses the programmer.
+
+LOCAL variable names should be short, and to the point. If you have
+some random integer loop counter, it should probably be called ``i``.
+Calling it ``loop_counter`` is non-productive, if there is no chance of it
+being mis-understood. Similarly, ``tmp`` can be just about any type of
+variable that is used to hold a temporary value.
+
+If you are afraid to mix up your local variable names, you have another
+problem, which is called the function-growth-hormone-imbalance syndrome.
+See chapter 6 (Functions).
+
+For symbol names and documentation, avoid introducing new usage of
+'master / slave' (or 'slave' independent of 'master') and 'blacklist /
+whitelist'.
+
+Recommended replacements for 'master / slave' are:
+ '{primary,main} / {secondary,replica,subordinate}'
+ '{initiator,requester} / {target,responder}'
+ '{controller,host} / {device,worker,proxy}'
+ 'leader / follower'
+ 'director / performer'
+
+Recommended replacements for 'blacklist/whitelist' are:
+ 'denylist / allowlist'
+ 'blocklist / passlist'
+
+Exceptions for introducing new usage are maintaining a userspace ABI/API,
+or updating code for an existing (as of 2020) hardware or protocol
+specification that mandates those terms. For new specifications
+translate specification usage of the terminology to the kernel coding
+standard where possible.
+
+5) Typedefs
+-----------
+
+Please don't use things like ``vps_t``.
+It's a **mistake** to use typedef for structures and pointers. When you see a
+
+.. code-block:: c
+
+
+ vps_t a;
+
+in the source, what does it mean?
+In contrast, if it says
+
+.. code-block:: c
+
+ struct virtual_container *a;
+
+you can actually tell what ``a`` is.
+
+Lots of people think that typedefs ``help readability``. Not so. They are
+useful only for:
+
+ (a) totally opaque objects (where the typedef is actively used to **hide**
+ what the object is).
+
+ Example: ``pte_t`` etc. opaque objects that you can only access using
+ the proper accessor functions.
+
+ .. note::
+
+ Opaqueness and ``accessor functions`` are not good in themselves.
+ The reason we have them for things like pte_t etc. is that there
+ really is absolutely **zero** portably accessible information there.
+
+ (b) Clear integer types, where the abstraction **helps** avoid confusion
+ whether it is ``int`` or ``long``.
+
+ u8/u16/u32 are perfectly fine typedefs, although they fit into
+ category (d) better than here.
+
+ .. note::
+
+ Again - there needs to be a **reason** for this. If something is
+ ``unsigned long``, then there's no reason to do
+
+ typedef unsigned long myflags_t;
+
+ but if there is a clear reason for why it under certain circumstances
+ might be an ``unsigned int`` and under other configurations might be
+ ``unsigned long``, then by all means go ahead and use a typedef.
+
+ (c) when you use sparse to literally create a **new** type for
+ type-checking.
+
+ (d) New types which are identical to standard C99 types, in certain
+ exceptional circumstances.
+
+ Although it would only take a short amount of time for the eyes and
+ brain to become accustomed to the standard types like ``uint32_t``,
+ some people object to their use anyway.
+
+ Therefore, the Linux-specific ``u8/u16/u32/u64`` types and their
+ signed equivalents which are identical to standard types are
+ permitted -- although they are not mandatory in new code of your
+ own.
+
+ When editing existing code which already uses one or the other set
+ of types, you should conform to the existing choices in that code.
+
+ (e) Types safe for use in userspace.
+
+ In certain structures which are visible to userspace, we cannot
+ require C99 types and cannot use the ``u32`` form above. Thus, we
+ use __u32 and similar types in all structures which are shared
+ with userspace.
+
+Maybe there are other cases too, but the rule should basically be to NEVER
+EVER use a typedef unless you can clearly match one of those rules.
+
+In general, a pointer, or a struct that has elements that can reasonably
+be directly accessed should **never** be a typedef.
+
+
+6) Functions
+------------
+
+Functions should be short and sweet, and do just one thing. They should
+fit on one or two screenfuls of text (the ISO/ANSI screen size is 80x24,
+as we all know), and do one thing and do that well.
+
+The maximum length of a function is inversely proportional to the
+complexity and indentation level of that function. So, if you have a
+conceptually simple function that is just one long (but simple)
+case-statement, where you have to do lots of small things for a lot of
+different cases, it's OK to have a longer function.
+
+However, if you have a complex function, and you suspect that a
+less-than-gifted first-year high-school student might not even
+understand what the function is all about, you should adhere to the
+maximum limits all the more closely. Use helper functions with
+descriptive names (you can ask the compiler to in-line them if you think
+it's performance-critical, and it will probably do a better job of it
+than you would have done).
+
+Another measure of the function is the number of local variables. They
+shouldn't exceed 5-10, or you're doing something wrong. Re-think the
+function, and split it into smaller pieces. A human brain can
+generally easily keep track of about 7 different things, anything more
+and it gets confused. You know you're brilliant, but maybe you'd like
+to understand what you did 2 weeks from now.
+
+In source files, separate functions with one blank line. If the function is
+exported, the **EXPORT** macro for it should follow immediately after the
+closing function brace line. E.g.:
+
+.. code-block:: c
+
+ int system_is_up(void)
+ {
+ return system_state == SYSTEM_RUNNING;
+ }
+ EXPORT_SYMBOL(system_is_up);
+
+6.1) Function prototypes
+************************
+
+In function prototypes, include parameter names with their data types.
+Although this is not required by the C language, it is preferred in Linux
+because it is a simple way to add valuable information for the reader.
+
+Do not use the ``extern`` keyword with function declarations as this makes
+lines longer and isn't strictly necessary.
+
+When writing function prototypes, please keep the `order of elements regular
+<https://lore.kernel.org/mm-commits/CAHk-=wiOCLRny5aifWNhr621kYrJwhfURsa0vFPeUEm8mF0ufg@mail.gmail.com/>`_.
+For example, using this function declaration example::
+
+ __init void * __must_check action(enum magic value, size_t size, u8 count,
+ char *fmt, ...) __printf(4, 5) __malloc;
+
+The preferred order of elements for a function prototype is:
+
+- storage class (below, ``static __always_inline``, noting that ``__always_inline``
+ is technically an attribute but is treated like ``inline``)
+- storage class attributes (here, ``__init`` -- i.e. section declarations, but also
+ things like ``__cold``)
+- return type (here, ``void *``)
+- return type attributes (here, ``__must_check``)
+- function name (here, ``action``)
+- function parameters (here, ``(enum magic value, size_t size, u8 count, char *fmt, ...)``,
+ noting that parameter names should always be included)
+- function parameter attributes (here, ``__printf(4, 5)``)
+- function behavior attributes (here, ``__malloc``)
+
+Note that for a function **definition** (i.e. the actual function body),
+the compiler does not allow function parameter attributes after the
+function parameters. In these cases, they should go after the storage
+class attributes (e.g. note the changed position of ``__printf(4, 5)``
+below, compared to the **declaration** example above)::
+
+ static __always_inline __init __printf(4, 5) void * __must_check action(enum magic value,
+ size_t size, u8 count, char *fmt, ...) __malloc
+ {
+ ...
+ }
+
+7) Centralized exiting of functions
+-----------------------------------
+
+Albeit deprecated by some people, the equivalent of the goto statement is
+used frequently by compilers in the form of the unconditional jump instruction.
+
+The goto statement comes in handy when a function exits from multiple
+locations and some common work such as cleanup has to be done. If there is no
+cleanup needed then just return directly.
+
+Choose label names which say what the goto does or why the goto exists. An
+example of a good name could be ``out_free_buffer:`` if the goto frees ``buffer``.
+Avoid using GW-BASIC names like ``err1:`` and ``err2:``, as you would have to
+renumber them if you ever add or remove exit paths, and they make correctness
+difficult to verify anyway.
+
+The rationale for using gotos is:
+
+- unconditional statements are easier to understand and follow
+- nesting is reduced
+- errors by not updating individual exit points when making
+ modifications are prevented
+- saves the compiler work to optimize redundant code away ;)
+
+.. code-block:: c
+
+ int fun(int a)
+ {
+ int result = 0;
+ char *buffer;
+
+ buffer = kmalloc(SIZE, GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
+
+ if (condition1) {
+ while (loop1) {
+ ...
+ }
+ result = 1;
+ goto out_free_buffer;
+ }
+ ...
+ out_free_buffer:
+ kfree(buffer);
+ return result;
+ }
+
+A common type of bug to be aware of is the ``one err`` bug, which looks
+like this:
+
+.. code-block:: c
+
+ err:
+ kfree(foo->bar);
+ kfree(foo);
+ return ret;
+
+The bug in this code is that on some exit paths ``foo`` is NULL. Normally the
+fix for this is to split it up into two error labels ``err_free_bar:`` and
+``err_free_foo:``:
+
+.. code-block:: c
+
+ err_free_bar:
+ kfree(foo->bar);
+ err_free_foo:
+ kfree(foo);
+ return ret;
+
+Ideally you should simulate errors to test all exit paths.
+
+
+8) Commenting
+-------------
+
+Comments are good, but there is also a danger of over-commenting. NEVER
+try to explain HOW your code works in a comment: it's much better to
+write the code so that the **working** is obvious, and it's a waste of
+time to explain badly written code.
+
+Generally, you want your comments to tell WHAT your code does, not HOW.
+Also, try to avoid putting comments inside a function body: if the
+function is so complex that you need to separately comment parts of it,
+you should probably go back to chapter 6 for a while. You can make
+small comments to note or warn about something particularly clever (or
+ugly), but try to avoid excess. Instead, put the comments at the head
+of the function, telling people what it does, and possibly WHY it does
+it.
+
+When commenting the kernel API functions, please use the kernel-doc format.
+See the files at :ref:`Documentation/doc-guide/ <doc_guide>` and
+``scripts/kernel-doc`` for details.
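+
+As a minimal sketch of that format (the function and parameter names below
+are only placeholders):
+
+.. code-block:: c
+
+ /**
+  * foobar_enable - brief one-line description of the function
+  * @dev: device to operate on
+  * @flags: behaviour flags for the operation
+  *
+  * A longer description can follow, explaining why the function exists
+  * rather than how it works.
+  *
+  * Return: 0 on success or a negative errno on failure.
+  */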
+
+The preferred style for long (multi-line) comments is:
+
+.. code-block:: c
+
+ /*
+ * This is the preferred style for multi-line
+ * comments in the Linux kernel source code.
+ * Please use it consistently.
+ *
+ * Description: A column of asterisks on the left side,
+ * with beginning and ending almost-blank lines.
+ */
+
+For files in net/ and drivers/net/ the preferred style for long (multi-line)
+comments is a little different.
+
+.. code-block:: c
+
+ /* The preferred comment style for files in net/ and drivers/net
+ * looks like this.
+ *
+ * It is nearly the same as the generally preferred comment style,
+ * but there is no initial almost-blank line.
+ */
+
+It's also important to comment data, whether they are basic types or derived
+types. To this end, use just one data declaration per line (no commas for
+multiple data declarations). This leaves you room for a small comment on each
+item, explaining its use.
+
+
+9) You've made a mess of it
+---------------------------
+
+That's OK, we all do. You've probably been told by your long-time Unix
+user helper that ``GNU emacs`` automatically formats the C sources for
+you, and you've noticed that yes, it does do that, but the defaults it
+uses are less than desirable (in fact, they are worse than random
+typing - an infinite number of monkeys typing into GNU emacs would never
+make a good program).
+
+So, you can either get rid of GNU emacs, or change it to use saner
+values. To do the latter, you can stick the following in your .emacs file:
+
+.. code-block:: none
+
+ (defun c-lineup-arglist-tabs-only (ignored)
+ "Line up argument lists by tabs, not spaces"
+ (let* ((anchor (c-langelem-pos c-syntactic-element))
+ (column (c-langelem-2nd-pos c-syntactic-element))
+ (offset (- (1+ column) anchor))
+ (steps (floor offset c-basic-offset)))
+ (* (max steps 1)
+ c-basic-offset)))
+
+ (dir-locals-set-class-variables
+ 'linux-kernel
+ '((c-mode . (
+ (c-basic-offset . 8)
+ (c-label-minimum-indentation . 0)
+ (c-offsets-alist . (
+ (arglist-close . c-lineup-arglist-tabs-only)
+ (arglist-cont-nonempty .
+ (c-lineup-gcc-asm-reg c-lineup-arglist-tabs-only))
+ (arglist-intro . +)
+ (brace-list-intro . +)
+ (c . c-lineup-C-comments)
+ (case-label . 0)
+ (comment-intro . c-lineup-comment)
+ (cpp-define-intro . +)
+ (cpp-macro . -1000)
+ (cpp-macro-cont . +)
+ (defun-block-intro . +)
+ (else-clause . 0)
+ (func-decl-cont . +)
+ (inclass . +)
+ (inher-cont . c-lineup-multi-inher)
+ (knr-argdecl-intro . 0)
+ (label . -1000)
+ (statement . 0)
+ (statement-block-intro . +)
+ (statement-case-intro . +)
+ (statement-cont . +)
+ (substatement . +)
+ ))
+ (indent-tabs-mode . t)
+ (show-trailing-whitespace . t)
+ ))))
+
+ (dir-locals-set-directory-class
+ (expand-file-name "~/src/linux-trees")
+ 'linux-kernel)
+
+This will make emacs go better with the kernel coding style for C
+files below ``~/src/linux-trees``.
+
+But even if you fail in getting emacs to do sane formatting, not
+everything is lost: use ``indent``.
+
+Now, again, GNU indent has the same brain-dead settings that GNU emacs
+has, which is why you need to give it a few command line options.
+However, that's not too bad, because even the makers of GNU indent
+recognize the authority of K&R (the GNU people aren't evil, they are
+just severely misguided in this matter), so you just give indent the
+options ``-kr -i8`` (stands for ``K&R, 8 character indents``), or use
+``scripts/Lindent``, which indents in the latest style.
+
+``indent`` has a lot of options, and especially when it comes to comment
+re-formatting you may want to take a look at the man page. But
+remember: ``indent`` is not a fix for bad programming.
+
+Note that you can also use the ``clang-format`` tool to help you with
+these rules, to quickly re-format parts of your code automatically,
+and to review full files in order to spot coding style mistakes,
+typos and possible improvements. It is also handy for sorting ``#includes``,
+for aligning variables/macros, for reflowing text and other similar tasks.
+See the file :ref:`Documentation/process/clang-format.rst <clangformat>`
+for more details.
+
+
+10) Kconfig configuration files
+-------------------------------
+
+For all of the Kconfig* configuration files throughout the source tree,
+the indentation is somewhat different. Lines under a ``config`` definition
+are indented with one tab, while help text is indented an additional two
+spaces. Example::
+
+ config AUDIT
+ bool "Auditing support"
+ depends on NET
+ help
+ Enable auditing infrastructure that can be used with another
+ kernel subsystem, such as SELinux (which requires this for
+ logging of avc messages output). Does not do system-call
+ auditing without CONFIG_AUDITSYSCALL.
+
+Seriously dangerous features (such as write support for certain
+filesystems) should advertise this prominently in their prompt string::
+
+ config ADFS_FS_RW
+ bool "ADFS write support (DANGEROUS)"
+ depends on ADFS_FS
+ ...
+
+For full documentation on the configuration files, see the file
+Documentation/kbuild/kconfig-language.rst.
+
+
+11) Data structures
+-------------------
+
+Data structures that have visibility outside the single-threaded
+environment they are created and destroyed in should always have
+reference counts. In the kernel, garbage collection doesn't exist (and
+outside the kernel garbage collection is slow and inefficient), which
+means that you absolutely **have** to reference count all your uses.
+
+Reference counting means that you can avoid locking, and allows multiple
+users to have access to the data structure in parallel - and not having
+to worry about the structure suddenly going away from under them just
+because they slept or did something else for a while.
+
+Note that locking is **not** a replacement for reference counting.
+Locking is used to keep data structures coherent, while reference
+counting is a memory management technique. Usually both are needed, and
+they are not to be confused with each other.
+
+Many data structures can indeed have two levels of reference counting,
+when there are users of different ``classes``. The subclass count counts
+the number of subclass users, and decrements the global count just once
+when the subclass count goes to zero.
+
+Examples of this kind of ``multi-level-reference-counting`` can be found in
+memory management (``struct mm_struct``: mm_users and mm_count), and in
+filesystem code (``struct super_block``: s_count and s_active).
+
+Remember: if another thread can find your data structure, and you don't
+have a reference count on it, you almost certainly have a bug.
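+
+The kernel's ``struct kref`` helpers implement this pattern. As an
+illustrative sketch only (``struct foo`` and its helpers are made up):
+
+.. code-block:: c
+
+ struct foo {
+         struct kref refcount;
+         /* ... payload ... */
+ };
+
+ static void foo_release(struct kref *kref)
+ {
+         struct foo *f = container_of(kref, struct foo, refcount);
+
+         kfree(f);
+ }
+
+ /* Take a reference while the object is still reachable. */
+ static void foo_get(struct foo *f)
+ {
+         kref_get(&f->refcount);
+ }
+
+ /* Drop a reference; the last put frees the object via foo_release(). */
+ static void foo_put(struct foo *f)
+ {
+         kref_put(&f->refcount, foo_release);
+ }
+
+A freshly allocated object would start its life with
+``kref_init(&f->refcount)``.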
+
+
+12) Macros, Enums and RTL
+-------------------------
+
+Names of macros defining constants and labels in enums are capitalized.
+
+.. code-block:: c
+
+ #define CONSTANT 0x12345
+
+Enums are preferred when defining several related constants.
+
+CAPITALIZED macro names are appreciated but macros resembling functions
+may be named in lower case.
+
+Generally, inline functions are preferable to macros resembling functions.
+
+Macros with multiple statements should be enclosed in a do - while block:
+
+.. code-block:: c
+
+ #define macrofun(a, b, c) \
+ do { \
+ if (a == 5) \
+ do_this(b, c); \
+ } while (0)
+
+Things to avoid when using macros:
+
+1) macros that affect control flow:
+
+.. code-block:: c
+
+ #define FOO(x) \
+ do { \
+ if (blah(x) < 0) \
+ return -EBUGGERED; \
+ } while (0)
+
+is a **very** bad idea. It looks like a function call but exits the ``calling``
+function; don't break the internal parsers of those who will read the code.
+
+2) macros that depend on having a local variable with a magic name:
+
+.. code-block:: c
+
+ #define FOO(val) bar(index, val)
+
+might look like a good thing, but it's confusing as hell when one reads the
+code and it's prone to breakage from seemingly innocent changes.
+
+3) macros with arguments that are used as l-values: FOO(x) = y; will
+bite you if somebody e.g. turns FOO into an inline function.
+
+4) forgetting about precedence: macros defining constants using expressions
+must enclose the expression in parentheses. Beware of similar issues with
+macros using parameters.
+
+.. code-block:: c
+
+ #define CONSTANT 0x4000
+ #define CONSTEXP (CONSTANT | 3)
+
+5) namespace collisions when defining local variables in macros resembling
+functions:
+
+.. code-block:: c
+
+ #define FOO(x) \
+ ({ \
+ typeof(x) ret; \
+ ret = calc_ret(x); \
+ (ret); \
+ })
+
+ret is a common name for a local variable - __foo_ret is less likely
+to collide with an existing variable.
+
+The cpp manual deals with macros exhaustively. The gcc internals manual also
+covers RTL which is used frequently with assembly language in the kernel.
+
+
+13) Printing kernel messages
+----------------------------
+
+Kernel developers like to be seen as literate. Do mind the spelling
+of kernel messages to make a good impression. Do not use incorrect
+contractions like ``dont``; use ``do not`` or ``don't`` instead. Make the
+messages concise, clear, and unambiguous.
+
+Kernel messages do not have to be terminated with a period.
+
+Printing numbers in parentheses (%d) adds no value and should be avoided.
+
+There are a number of driver model diagnostic macros in <linux/dev_printk.h>
+which you should use to make sure messages are matched to the right device
+and driver, and are tagged with the right level: dev_err(), dev_warn(),
+dev_info(), and so forth. For messages that aren't associated with a
+particular device, <linux/printk.h> defines pr_notice(), pr_info(),
+pr_warn(), pr_err(), etc.
+
+Coming up with good debugging messages can be quite a challenge; and once
+you have them, they can be a huge help for remote troubleshooting. However
+debug message printing is handled differently than printing other non-debug
+messages. While the other pr_XXX() functions print unconditionally,
+pr_debug() does not; it is compiled out by default, unless either DEBUG is
+defined or CONFIG_DYNAMIC_DEBUG is set. That is true for dev_dbg() also,
+and a related convention uses VERBOSE_DEBUG to add dev_vdbg() messages to
+the ones already enabled by DEBUG.
+
+Many subsystems have Kconfig debug options to turn on -DDEBUG in the
+corresponding Makefile; in other cases specific files #define DEBUG. And
+when a debug message should be unconditionally printed, such as if it is
+already inside a debug-related #ifdef section, printk(KERN_DEBUG ...) can be
+used.
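+
+As a hedged sketch of typical usage (the device pointer, message text and
+variable names below are made up for illustration):
+
+.. code-block:: c
+
+ /* Tied to a specific device and driver, printed at error level. */
+ dev_err(&pdev->dev, "failed to enable clock: %d\n", ret);
+
+ /* Not associated with any particular device. */
+ pr_info("frobnicator: %d channels available\n", nchannels);
+
+ /* Compiled out unless DEBUG or CONFIG_DYNAMIC_DEBUG is in effect. */
+ pr_debug("state transition %d -> %d\n", old_state, new_state);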
+
+
+14) Allocating memory
+---------------------
+
+The kernel provides the following general purpose memory allocators:
+kmalloc(), kzalloc(), kmalloc_array(), kcalloc(), vmalloc(), and
+vzalloc(). Please refer to the API documentation for further information
+about them. :ref:`Documentation/core-api/memory-allocation.rst
+<memory_allocation>`
+
+The preferred form for passing a size of a struct is the following:
+
+.. code-block:: c
+
+ p = kmalloc(sizeof(*p), ...);
+
+The alternative form where struct name is spelled out hurts readability and
+introduces an opportunity for a bug when the pointer variable type is changed
+but the corresponding sizeof that is passed to a memory allocator is not.
+
+Casting the return value which is a void pointer is redundant. The conversion
+from void pointer to any other pointer type is guaranteed by the C programming
+language.
+
+The preferred form for allocating an array is the following:
+
+.. code-block:: c
+
+ p = kmalloc_array(n, sizeof(...), ...);
+
+The preferred form for allocating a zeroed array is the following:
+
+.. code-block:: c
+
+ p = kcalloc(n, sizeof(...), ...);
+
+Both forms check for overflow on the allocation size n * sizeof(...),
+and return NULL if that occurred.
+
+These generic allocation functions all emit a stack dump on failure when used
+without __GFP_NOWARN so there is no use in emitting an additional failure
+message when NULL is returned.
+
+15) The inline disease
+----------------------
+
+There appears to be a common misperception that gcc has a magic "make me
+faster" speedup option called ``inline``. While the use of inlines can be
+appropriate (for example as a means of replacing macros, see Chapter 12), it
+very often is not. Abundant use of the inline keyword leads to a much bigger
+kernel, which in turn slows the system as a whole down, due to a bigger
+icache footprint for the CPU and simply because there is less memory
+available for the pagecache. Just think about it; a pagecache miss causes a
+disk seek, which easily takes 5 milliseconds. There are a LOT of cpu cycles
+that can go into these 5 milliseconds.
+
+A reasonable rule of thumb is to not put inline at functions that have more
+than 3 lines of code in them. An exception to this rule are the cases where
+a parameter is known to be a compiletime constant, and as a result of this
+constantness you *know* the compiler will be able to optimize most of your
+function away at compile time. For a good example of this latter case, see
+the kmalloc() inline function.
+
+Often people argue that adding inline to functions that are static and used
+only once is always a win since there is no space tradeoff. While this is
+technically correct, gcc is capable of inlining these automatically without
+help, and the maintenance issue of removing the inline when a second user
+appears outweighs the potential value of the hint that tells gcc to do
+something it would have done anyway.
+
+
+16) Function return values and names
+------------------------------------
+
+Functions can return values of many different kinds, and one of the
+most common is a value indicating whether the function succeeded or
+failed. Such a value can be represented as an error-code integer
+(-Exxx = failure, 0 = success) or a ``succeeded`` boolean (0 = failure,
+non-zero = success).
+
+Mixing up these two sorts of representations is a fertile source of
+difficult-to-find bugs. If the C language included a strong distinction
+between integers and booleans then the compiler would find these mistakes
+for us... but it doesn't. To help prevent such bugs, always follow this
+convention::
+
+ If the name of a function is an action or an imperative command,
+ the function should return an error-code integer. If the name
+ is a predicate, the function should return a "succeeded" boolean.
+
+For example, ``add work`` is a command, and the add_work() function returns 0
+for success or -EBUSY for failure. In the same way, ``PCI device present`` is
+a predicate, and the pci_dev_present() function returns 1 if it succeeds in
+finding a matching device or 0 if it doesn't.
+
+All EXPORTed functions must respect this convention, and so should all
+public functions. Private (static) functions need not, but it is
+recommended that they do.
+
+Functions whose return value is the actual result of a computation, rather
+than an indication of whether the computation succeeded, are not subject to
+this rule. Generally they indicate failure by returning some out-of-range
+result. Typical examples would be functions that return pointers; they use
+NULL or the ERR_PTR mechanism to report failure.
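+
+A short sketch of the convention (``add_work()``, ``struct work_item`` and
+``work_list`` below are hypothetical):
+
+.. code-block:: c
+
+ /* A command: the name is an action, so return 0 or a negative errno. */
+ int add_work(struct work_item *item)
+ {
+         if (!item->callback)
+                 return -EINVAL;
+         list_add_tail(&item->list, &work_list);
+         return 0;
+ }
+
+ /* A predicate: the name asks a question, so return a boolean. */
+ bool work_list_empty(void)
+ {
+         return list_empty(&work_list);
+ }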
+
+
+17) Using bool
+--------------
+
+The Linux kernel bool type is an alias for the C99 _Bool type. bool values can
+only evaluate to 0 or 1, and implicit or explicit conversion to bool
+automatically converts the value to true or false. When using bool types the
+!! construction is not needed, which eliminates a class of bugs.
+
+When working with bool values the true and false definitions should be used
+instead of 1 and 0.
+
+bool function return types and stack variables are always fine to use whenever
+appropriate. Use of bool is encouraged to improve readability and is often a
+better option than 'int' for storing boolean values.
+
+Do not use bool if cache line layout or size of the value matters, as its size
+and alignment varies based on the compiled architecture. Structures that are
+optimized for alignment and size should not use bool.
+
+If a structure has many true/false values, consider consolidating them into a
+bitfield with 1 bit members, or using an appropriate fixed width type, such as
+u8.
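+
+For example (a sketch with made-up member names), a layout-sensitive
+structure can use 1-bit members of a fixed-width type instead of bool:
+
+.. code-block:: c
+
+ struct device_state {
+         u8 enabled:1;
+         u8 suspended:1;
+         u8 removable:1;
+ };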
+
+Similarly for function arguments, many true/false values can be consolidated
+into a single bitwise 'flags' argument and 'flags' can often be a more
+readable alternative if the call-sites have naked true/false constants.
+
+Otherwise limited use of bool in structures and arguments can improve
+readability.
+
+18) Don't re-invent the kernel macros
+-------------------------------------
+
+The header file include/linux/kernel.h contains a number of macros that
+you should use, rather than explicitly coding some variant of them yourself.
+For example, if you need to calculate the length of an array, take advantage
+of the macro
+
+.. code-block:: c
+
+ #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+Similarly, if you need to calculate the size of some structure member, use
+
+.. code-block:: c
+
+ #define sizeof_field(t, f) (sizeof(((t*)0)->f))
+
+There are also min() and max() macros that do strict type checking if you
+need them. Feel free to peruse that header file to see what else is already
+defined that you shouldn't reproduce in your code.
+
+
+19) Editor modelines and other cruft
+------------------------------------
+
+Some editors can interpret configuration information embedded in source files,
+indicated with special markers. For example, emacs interprets lines marked
+like this:
+
+.. code-block:: c
+
+ -*- mode: c -*-
+
+Or like this:
+
+.. code-block:: c
+
+ /*
+ Local Variables:
+ compile-command: "gcc -DMAGIC_DEBUG_FLAG foo.c"
+ End:
+ */
+
+Vim interprets markers that look like this:
+
+.. code-block:: c
+
+ /* vim:set sw=8 noet */
+
+Do not include any of these in source files. People have their own personal
+editor configurations, and your source files should not override them. This
+includes markers for indentation and mode configuration. People may use their
+own custom mode, or may have some other magic method for making indentation
+work correctly.
+
+
+20) Inline assembly
+-------------------
+
+In architecture-specific code, you may need to use inline assembly to interface
+with CPU or platform functionality. Don't hesitate to do so when necessary.
+However, don't use inline assembly gratuitously when C can do the job. You can
+and should poke hardware from C when possible.
+
+Consider writing simple helper functions that wrap common bits of inline
+assembly, rather than repeatedly writing them with slight variations. Remember
+that inline assembly can use C parameters.
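+
+For instance (using the made-up ``magic_op`` instruction in the spirit of
+the example further below):
+
+.. code-block:: c
+
+ static inline void set_magic(unsigned long val)
+ {
+         /* The C parameter is passed straight into the asm statement. */
+         asm volatile("magic_op %0" : : "r" (val) : "memory");
+ }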
+
+Large, non-trivial assembly functions should go in .S files, with corresponding
+C prototypes defined in C header files. The C prototypes for assembly
+functions should use ``asmlinkage``.
+
+You may need to mark your asm statement as volatile, to prevent GCC from
+removing it if GCC doesn't notice any side effects. You don't always need to
+do so, though, and doing so unnecessarily can limit optimization.
+
+When writing a single inline assembly statement containing multiple
+instructions, put each instruction on a separate line in a separate quoted
+string, and end each string except the last with ``\n\t`` to properly indent
+the next instruction in the assembly output:
+
+.. code-block:: c
+
+ asm ("magic %reg1, #42\n\t"
+ "more_magic %reg2, %reg3"
+ : /* outputs */ : /* inputs */ : /* clobbers */);
+
+
+21) Conditional Compilation
+---------------------------
+
+Wherever possible, don't use preprocessor conditionals (#if, #ifdef) in .c
+files; doing so makes code harder to read and logic harder to follow. Instead,
+use such conditionals in a header file defining functions for use in those .c
+files, providing no-op stub versions in the #else case, and then call those
+functions unconditionally from .c files. The compiler will avoid generating
+any code for the stub calls, producing identical results, but the logic will
+remain easy to follow.
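+
+A sketch of the pattern, with a made-up ``CONFIG_FOO`` option and helper:
+
+.. code-block:: c
+
+ /* In a header file: */
+ #ifdef CONFIG_FOO
+ int foo_init(struct foo_device *dev);
+ #else
+ static inline int foo_init(struct foo_device *dev) { return 0; }
+ #endif /* CONFIG_FOO */
+
+ /* In a .c file, call it unconditionally: */
+ err = foo_init(dev);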
+
+Prefer to compile out entire functions, rather than portions of functions or
+portions of expressions. Rather than putting an ifdef in an expression, factor
+out part or all of the expression into a separate helper function and apply the
+conditional to that function.
+
+If you have a function or variable which may potentially go unused in a
+particular configuration, and the compiler would warn about its definition
+going unused, mark the definition as __maybe_unused rather than wrapping it in
+a preprocessor conditional. (However, if a function or variable *always* goes
+unused, delete it.)
+
+Within code, where possible, use the IS_ENABLED macro to convert a Kconfig
+symbol into a C boolean expression, and use it in a normal C conditional:
+
+.. code-block:: c
+
+ if (IS_ENABLED(CONFIG_SOMETHING)) {
+ ...
+ }
+
+The compiler will constant-fold the conditional away, and include or exclude
+the block of code just as with an #ifdef, so this will not add any runtime
+overhead. However, this approach still allows the C compiler to see the code
+inside the block, and check it for correctness (syntax, types, symbol
+references, etc). Thus, you still have to use an #ifdef if the code inside the
+block references symbols that will not exist if the condition is not met.
+
+At the end of any non-trivial #if or #ifdef block (more than a few lines),
+place a comment after the #endif on the same line, noting the conditional
+expression used. For instance:
+
+.. code-block:: c
+
+ #ifdef CONFIG_SOMETHING
+ ...
+ #endif /* CONFIG_SOMETHING */
+
+
+22) Do not crash the kernel
+---------------------------
+
+In general, the decision to crash the kernel belongs to the user, rather
+than to the kernel developer.
+
+Avoid panic()
+*************
+
+panic() should be used with care and primarily only during system boot.
+panic() is, for example, acceptable when running out of memory during boot and
+not being able to continue.
+
+Use WARN() rather than BUG()
+****************************
+
+Do not add new code that uses any of the BUG() variants, such as BUG(),
+BUG_ON(), or VM_BUG_ON(). Instead, use a WARN*() variant, preferably
+WARN_ON_ONCE(), and possibly with recovery code. Recovery code is not
+required if there is no reasonable way to at least partially recover.
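+
+For example (a sketch; the condition and return value are illustrative):
+
+.. code-block:: c
+
+ if (WARN_ON_ONCE(!ctx->buf))
+         return -EINVAL; /* recover instead of crashing */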
+
+"I'm too lazy to do error handling" is not an excuse for using BUG(). Major
+internal corruptions with no way of continuing may still use BUG(), but need
+good justification.
+
+Use WARN_ON_ONCE() rather than WARN() or WARN_ON()
+**************************************************
+
+WARN_ON_ONCE() is generally preferred over WARN() or WARN_ON(), because it
+is common for a given warning condition, if it occurs at all, to occur
+multiple times. This can fill up and wrap the kernel log, and can even slow
+the system enough that the excessive logging turns into its own, additional
+problem.
+
+Do not WARN lightly
+*******************
+
+WARN*() is intended for unexpected, this-should-never-happen situations.
+WARN*() macros are not to be used for anything that is expected to happen
+during normal operation. These are not pre- or post-condition asserts, for
+example. Again: WARN*() must not be used for a condition that is expected
+to trigger easily, for example, by user space actions. pr_warn_once() is a
+possible alternative, if you need to notify the user of a problem.
+
+Do not worry about panic_on_warn users
+**************************************
+
+A few more words about panic_on_warn: Remember that ``panic_on_warn`` is an
+available kernel option, and that many users set this option. This is why
+there is a "Do not WARN lightly" writeup, above. However, the existence of
+panic_on_warn users is not a valid reason to avoid the judicious use of
+WARN*(). That is because whoever enables panic_on_warn has explicitly
+asked the kernel to crash if a WARN*() fires, and such users must be
+prepared to deal with the consequences of a system that is somewhat more
+likely to crash.
+
+Use BUILD_BUG_ON() for compile-time assertions
+**********************************************
+
+The use of BUILD_BUG_ON() is acceptable and encouraged, because it is a
+compile-time assertion that has no effect at runtime.
+
+Appendix I) References
+----------------------
+
+The C Programming Language, Second Edition
+by Brian W. Kernighan and Dennis M. Ritchie.
+Prentice Hall, Inc., 1988.
+ISBN 0-13-110362-8 (paperback), 0-13-110370-9 (hardback).
+
+The Practice of Programming
+by Brian W. Kernighan and Rob Pike.
+Addison-Wesley, Inc., 1999.
+ISBN 0-201-61586-X.
+
+GNU manuals - where in compliance with K&R and this text - for cpp, gcc,
+gcc internals and indent, all available from https://www.gnu.org/manual/
+
+WG14 is the international standardization working group for the programming
+language C, URL: http://www.open-std.org/JTC1/SC22/WG14/
+
+Kernel CodingStyle, by greg@kroah.com at OLS 2002:
+http://www.kroah.com/linux/talks/ols_2002_kernel_codingstyle_talk/html/
diff --git a/Documentation/process/contribution-maturity-model.rst b/Documentation/process/contribution-maturity-model.rst
new file mode 100644
index 000000000..b87ab34de
--- /dev/null
+++ b/Documentation/process/contribution-maturity-model.rst
@@ -0,0 +1,109 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========================================
+Linux Kernel Contribution Maturity Model
+========================================
+
+
+Background
+==========
+
+As a part of the 2021 Linux Kernel Maintainers’ Summit, there was a
+`discussion <https://lwn.net/Articles/870581/>`_ about the challenges in
+recruiting kernel maintainers as well as maintainer succession. Some of
+the conclusions from that discussion included that companies which are a
+part of the Linux Kernel community need to allow engineers to be
+maintainers as part of their job, so they can grow into becoming
+respected leaders and eventually, kernel maintainers. To support a
+strong talent pipeline, developers should be allowed and encouraged to
+take on upstream contributions such as reviewing other people’s patches,
+refactoring kernel infrastructure, and writing documentation.
+
+To that end, the Linux Foundation Technical Advisory Board (TAB)
+proposes this Linux Kernel Contribution Maturity Model. These common
+expectations for upstream community engagement aim to increase the
+influence of individual developers, increase the collaboration of
+organizations, and improve the overall health of the Linux Kernel
+ecosystem.
+
+The TAB urges organizations to continuously evaluate their Open Source
+maturity model and commit to improvements to align with this model. To
+be effective, this evaluation should incorporate feedback from across
+the organization, including management and developers at all seniority
+levels. In the spirit of Open Source, we encourage organizations to
+publish their evaluations and plans to improve their engagement with the
+upstream community.
+
+Level 0
+=======
+
+* Software Engineers are not allowed to contribute patches to the Linux
+ kernel.
+
+
+Level 1
+=======
+
+* Software Engineers are allowed to contribute patches to the Linux
+ kernel, either as part of their job responsibilities or on their own
+ time.
+
+Level 2
+=======
+
+* Software Engineers are expected to contribute to the Linux Kernel as
+ part of their job responsibilities.
+* Software Engineers will be supported to attend Linux-related
+ conferences as a part of their job.
+* A Software Engineer’s upstream code contributions will be considered
+ in promotion and performance reviews.
+
+Level 3
+=======
+
+* Software Engineers are expected to review patches (including patches
+ authored by engineers from other companies) as part of their job
+ responsibilities.
+* Contributing presentations or papers to Linux-related or academic
+ conferences (such as those organized by the Linux Foundation, Usenix,
+ ACM, etc.) is considered part of an engineer’s work.
+* A Software Engineer’s community contributions will be considered in
+ promotion and performance reviews.
+* Organizations will regularly report metrics of their open source
+ contributions and track these metrics over time. These metrics may be
+ published only internally within the organization, or at the
+ organization’s discretion, some or all may be published externally.
+ Metrics that are strongly suggested include:
+
+ * The number of upstream kernel contributions by team or organization
+ (e.g., all people reporting up to a manager, director, or VP).
+ * The percentage of kernel developers who have made upstream
+ contributions relative to the total kernel developers in the
+ organization.
+ * The time interval between kernels used in the organization’s servers
+ and/or products, and the publication date of the upstream kernel
+ upon which the internal kernel is based.
+ * The number of out-of-tree commits present in internal kernels.
+
+Level 4
+=======
+
+* Software Engineers are encouraged to spend a portion of their work
+ time focused on Upstream Work, which is defined as reviewing patches,
+ serving on program committees, improving core project infrastructure
+ such as writing or maintaining tests, upstream tech debt reduction,
+ writing documentation, etc.
+* Software Engineers are supported in helping to organize Linux-related
+ conferences.
+* Organizations will consider community member feedback in official
+ performance reviews.
+
+Level 5
+=======
+
+* Upstream kernel development is considered a formal job position, with
+ at least a third of the engineer’s time spent doing Upstream Work.
+* Organizations will actively seek out community member feedback as a
+ factor in official performance reviews.
+* Organizations will regularly report internally on the ratio of
+ Upstream Work to work focused on directly pursuing business goals.
diff --git a/Documentation/process/deprecated.rst b/Documentation/process/deprecated.rst
new file mode 100644
index 000000000..1f7f3e6c9
--- /dev/null
+++ b/Documentation/process/deprecated.rst
@@ -0,0 +1,374 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+.. _deprecated:
+
+=====================================================================
+Deprecated Interfaces, Language Features, Attributes, and Conventions
+=====================================================================
+
+In a perfect world, it would be possible to convert all instances of
+some deprecated API into the new API and entirely remove the old API in
+a single development cycle. However, due to the size of the kernel, the
+maintainership hierarchy, and timing, it's not always feasible to do these
+kinds of conversions at once. This means that new instances may sneak into
+the kernel while old ones are being removed, only making the amount of
+work to remove the API grow. In order to educate developers about what
+has been deprecated and why, this list has been created as a place to
+point when uses of deprecated things are proposed for inclusion in the
+kernel.
+
+__deprecated
+------------
+While this attribute does visually mark an interface as deprecated,
+it `does not produce warnings during builds any more
+<https://git.kernel.org/linus/771c035372a036f83353eef46dbb829780330234>`_
+because one of the standing goals of the kernel is to build without
+warnings and no one was actually doing anything to remove these deprecated
+interfaces. While using `__deprecated` is nice to note an old API in
+a header file, it isn't the full solution. Such interfaces must either
+be fully removed from the kernel, or added to this file to discourage
+others from using them in the future.
+
+BUG() and BUG_ON()
+------------------
+Use WARN() and WARN_ON() instead, and handle the "impossible"
+error condition as gracefully as possible. While the BUG()-family
+of APIs were originally designed to act as an "impossible situation"
+assert and to kill a kernel thread "safely", they turn out to just be
+too risky. (e.g. "In what order do locks need to be released? Have
+various states been restored?") Very commonly, using BUG() will
+destabilize a system or entirely break it, which makes it impossible
+to debug or even get viable crash reports. Linus has `very strong
+<https://lore.kernel.org/lkml/CA+55aFy6jNLsywVYdGp83AMrXBo_P-pkjkphPGrO=82SPKCpLQ@mail.gmail.com/>`_
+feelings `about this
+<https://lore.kernel.org/lkml/CAHk-=whDHsbK3HTOpTF=ue_o04onRwTEaK_ZoJp_fjbqq4+=Jw@mail.gmail.com/>`_.
+
+Note that the WARN()-family should only be used for "expected to
+be unreachable" situations. If you want to warn about "reachable
+but undesirable" situations, please use the pr_warn()-family of
+functions. System owners may have set the *panic_on_warn* sysctl,
+to make sure their systems do not continue running in the face of
+"unreachable" conditions. (For example, see commits like `this one
+<https://git.kernel.org/linus/d4689846881d160a4d12a514e991a740bcb5d65a>`_.)
+
+open-coded arithmetic in allocator arguments
+--------------------------------------------
+Dynamic size calculations (especially multiplication) should not be
+performed in memory allocator (or similar) function arguments due to the
+risk of them overflowing. This could lead to values wrapping around and a
+smaller allocation being made than the caller was expecting. Using those
+allocations could lead to linear overflows of heap memory and other
+misbehaviors. (One exception to this is literal values where the compiler
+can warn if they might overflow. However, the preferred way in these
+cases is to refactor the code as suggested below to avoid the open-coded
+arithmetic.)
+
+For example, do not use ``count * size`` as an argument, as in::
+
+ foo = kmalloc(count * size, GFP_KERNEL);
+
+Instead, the 2-factor form of the allocator should be used::
+
+ foo = kmalloc_array(count, size, GFP_KERNEL);
+
+Specifically, kmalloc() can be replaced with kmalloc_array(), and
+kzalloc() can be replaced with kcalloc().
+
+If no 2-factor form is available, the saturate-on-overflow helpers should
+be used::
+
+ bar = dma_alloc_coherent(dev, array_size(count, size), &dma, GFP_KERNEL);
+
+Another common case to avoid is calculating the size of a structure with
+a trailing array of other structures, as in::
+
+ header = kzalloc(sizeof(*header) + count * sizeof(*header->item),
+ GFP_KERNEL);
+
+Instead, use the helper::
+
+ header = kzalloc(struct_size(header, item, count), GFP_KERNEL);
+
+.. note:: If you are using struct_size() on a structure containing a zero-length
+ or a one-element array as a trailing array member, please refactor such
+ array usage and switch to a `flexible array member
+ <#zero-length-and-one-element-arrays>`_ instead.
+
+For other calculations, please compose the use of the size_mul(),
+size_add(), and size_sub() helpers. For example, in the case of::
+
+ foo = krealloc(foo, current_size + chunk_size * (count - 3), GFP_KERNEL);
+
+Instead, use the helpers::
+
+ foo = krealloc(foo, size_add(current_size,
+ size_mul(chunk_size,
+ size_sub(count, 3))), GFP_KERNEL);
+
+For more details, also see array3_size() and flex_array_size(),
+as well as the related check_mul_overflow(), check_add_overflow(),
+check_sub_overflow(), and check_shl_overflow() family of functions.
+
+simple_strtol(), simple_strtoll(), simple_strtoul(), simple_strtoull()
+----------------------------------------------------------------------
+The simple_strtol(), simple_strtoll(),
+simple_strtoul(), and simple_strtoull() functions
+explicitly ignore overflows, which may lead to unexpected results
+in callers. The respective kstrtol(), kstrtoll(),
+kstrtoul(), and kstrtoull() functions tend to be the
+correct replacements, though note that those require the string to be
+NUL or newline terminated.
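+
+A sketch of a typical replacement (``buf`` is assumed to be a NUL-terminated
+string, e.g. from a sysfs write)::
+
+ unsigned long val;
+ int ret;
+
+ ret = kstrtoul(buf, 10, &val);
+ if (ret)
+         return ret;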
+
+strcpy()
+--------
+strcpy() performs no bounds checking on the destination buffer. This
+could result in linear overflows beyond the end of the buffer, leading to
+all kinds of misbehaviors. While `CONFIG_FORTIFY_SOURCE=y` and various
+compiler flags help reduce the risk of using this function, there is
+no good reason to add new uses of this function. The safe replacement
+is strscpy(), though care must be given to any cases where the return
+value of strcpy() was used, since strscpy() does not return a pointer to
+the destination, but rather a count of non-NUL bytes copied (or negative
+errno when it truncates).
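+
+For example (a sketch; ``dest`` is assumed to be a fixed-size array)::
+
+ ssize_t len;
+
+ len = strscpy(dest, src, sizeof(dest));
+ if (len < 0)
+         return len; /* -E2BIG: the source was truncated */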
+
+strncpy() on NUL-terminated strings
+-----------------------------------
+Use of strncpy() does not guarantee that the destination buffer will
+be NUL terminated. This can lead to various linear read overflows and
+other misbehavior due to the missing termination. It also NUL-pads
+the destination buffer if the source contents are shorter than the
+destination buffer size, which may be a needless performance penalty
+for callers using only NUL-terminated strings.
+
+When the destination is required to be NUL-terminated, the replacement is
+strscpy(), though care must be given to any cases where the return value
+of strncpy() was used, since strscpy() does not return a pointer to the
+destination, but rather a count of non-NUL bytes copied (or negative
+errno when it truncates). Any cases still needing NUL-padding should
+instead use strscpy_pad().
+
+If a caller is using non-NUL-terminated strings, strtomem() should be
+used, and the destinations should be marked with the `__nonstring
+<https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html>`_
+attribute to avoid future compiler warnings. For cases still needing
+NUL-padding, strtomem_pad() can be used.
+
+strlcpy()
+---------
+strlcpy() reads the entire source buffer first (since the return value
+is meant to match that of strlen()). This read may exceed the destination
+size limit. This is both inefficient and can lead to linear read overflows
+if a source string is not NUL-terminated. The safe replacement is strscpy(),
+though care must be given to any cases where the return value of strlcpy()
+is used, since strscpy() will return negative errno values when it truncates.
+
+%p format specifier
+-------------------
+Traditionally, using "%p" in format strings would lead to regular address
+exposure flaws in dmesg, proc, sysfs, etc. Instead of leaving these to
+be exploitable, all "%p" uses in the kernel are being printed as a hashed
+value, rendering them unusable for addressing. New uses of "%p" should not
+be added to the kernel. For text addresses, using "%pS" is likely better,
+as it produces the more useful symbol name instead. For nearly everything
+else, just do not add "%p" at all.
+
+Paraphrasing Linus's current `guidance <https://lore.kernel.org/lkml/CA+55aFwQEd_d40g4mUCSsVRZzrFPUJt74vc6PPpb675hYNXcKw@mail.gmail.com/>`_:
+
+- If the hashed "%p" value is pointless, ask yourself whether the pointer
+ itself is important. Maybe it should be removed entirely?
+- If you really think the true pointer value is important, why is some
+ system state or user privilege level considered "special"? If you think
+ you can justify it (in comments and commit log) well enough to stand
+ up to Linus's scrutiny, maybe you can use "%px", along with making sure
+ you have sensible permissions.
+
+If you are debugging something where "%p" hashing is causing problems,
+you can temporarily boot with the debug flag "`no_hash_pointers
+<https://git.kernel.org/linus/5ead723a20e0447bc7db33dc3070b420e5f80aa6>`_".
+
+Variable Length Arrays (VLAs)
+-----------------------------
+Using stack VLAs produces much worse machine code than statically
+sized stack arrays. While these non-trivial `performance issues
+<https://git.kernel.org/linus/02361bc77888>`_ are reason enough to
+eliminate VLAs, they are also a security risk. Dynamic growth of a stack
+array may exceed the remaining memory in the stack segment. This could
+lead to a crash, possibly overwriting sensitive contents at the end of the
+stack (when built without `CONFIG_THREAD_INFO_IN_TASK=y`), or overwriting
+memory adjacent to the stack (when built without `CONFIG_VMAP_STACK=y`).
+
+Implicit switch case fall-through
+---------------------------------
+The C language allows switch cases to fall through to the next case
+when a "break" statement is missing at the end of a case. This, however,
+introduces ambiguity in the code, as it's not always clear if the missing
+break is intentional or a bug. For example, it's not obvious just from
+looking at the code if `STATE_ONE` is intentionally designed to fall
+through into `STATE_TWO`::
+
+ switch (value) {
+ case STATE_ONE:
+ do_something();
+ case STATE_TWO:
+ do_other();
+ break;
+ default:
+ WARN("unknown state");
+ }
+
+As there has been a long list of flaws `due to missing "break" statements
+<https://cwe.mitre.org/data/definitions/484.html>`_, we no longer allow
+implicit fall-through. In order to identify intentional fall-through
+cases, we have adopted a pseudo-keyword macro "fallthrough" which
+expands to gcc's extension `__attribute__((__fallthrough__))
+<https://gcc.gnu.org/onlinedocs/gcc/Statement-Attributes.html>`_.
+(When the C17/C18 `[[fallthrough]]` syntax is more commonly supported by
+C compilers, static analyzers, and IDEs, we can switch to using that syntax
+for the macro pseudo-keyword.)
+
+All switch/case blocks must end in one of:
+
+* break;
+* fallthrough;
+* continue;
+* goto <label>;
+* return [expression];
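+
+For instance, the earlier example becomes unambiguous once the intentional
+fall-through is annotated::
+
+ switch (value) {
+ case STATE_ONE:
+         do_something();
+         fallthrough;
+ case STATE_TWO:
+         do_other();
+         break;
+ default:
+         WARN("unknown state");
+ }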
+
+Zero-length and one-element arrays
+----------------------------------
+There is a regular need in the kernel to provide a way to declare having
+a dynamically sized set of trailing elements in a structure. Kernel code
+should always use `"flexible array members" <https://en.wikipedia.org/wiki/Flexible_array_member>`_
+for these cases. The older style of one-element or zero-length arrays should
+no longer be used.
+
+In older C code, dynamically sized trailing elements were done by specifying
+a one-element array at the end of a structure::
+
+ struct something {
+ size_t count;
+ struct foo items[1];
+ };
+
+This led to fragile size calculations via sizeof() (which would need to
+remove the size of the single trailing element to get a correct size of
+the "header"). A `GNU C extension <https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html>`_
+was introduced to allow for zero-length arrays, to avoid these kinds of
+size problems::
+
+ struct something {
+ size_t count;
+ struct foo items[0];
+ };
+
+But this led to other problems, and didn't solve some problems shared by
+both styles, like not being able to detect when such an array is accidentally
+being used _not_ at the end of a structure (which could happen directly, or
+when such a struct was in unions, structs of structs, etc).
+
+C99 introduced "flexible array members", which lack a numeric size for
+the array declaration entirely::
+
+ struct something {
+ size_t count;
+ struct foo items[];
+ };
+
+This is the way the kernel expects dynamically sized trailing elements
+to be declared. It allows the compiler to generate errors when the
+flexible array does not occur last in the structure, which helps to prevent
+some kinds of `undefined behavior
+<https://git.kernel.org/linus/76497732932f15e7323dc805e8ea8dc11bb587cf>`_
+bugs from being inadvertently introduced to the codebase. It also allows
+the compiler to correctly analyze array sizes (via sizeof(),
+`CONFIG_FORTIFY_SOURCE`, and `CONFIG_UBSAN_BOUNDS`). For instance,
+there is no mechanism that warns us that the following application of the
+sizeof() operator to a zero-length array always results in zero::
+
+ struct something {
+ size_t count;
+ struct foo items[0];
+ };
+
+ struct something *instance;
+
+ instance = kmalloc(struct_size(instance, items, count), GFP_KERNEL);
+ instance->count = count;
+
+ size = sizeof(instance->items) * instance->count;
+ memcpy(instance->items, source, size);
+
+At the last line of code above, ``size`` turns out to be ``zero``, when one might
+have thought it represents the total size in bytes of the dynamic memory recently
+allocated for the trailing array ``items``. Here are a couple examples of this
+issue: `link 1
+<https://git.kernel.org/linus/f2cd32a443da694ac4e28fbf4ac6f9d5cc63a539>`_,
+`link 2
+<https://git.kernel.org/linus/ab91c2a89f86be2898cee208d492816ec238b2cf>`_.
+Instead, `flexible array members have incomplete type, and so the sizeof()
+operator may not be applied <https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html>`_,
+so any misuse of such operators will be immediately noticed at build time.
+
+With respect to one-element arrays, one has to be acutely aware that `such arrays
+occupy at least as much space as a single object of the type
+<https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html>`_,
+hence they contribute to the size of the enclosing structure. This is prone
+to error every time people want to calculate the total size of dynamic memory
+to allocate for a structure containing an array of this kind as a member::
+
+ struct something {
+ size_t count;
+ struct foo items[1];
+ };
+
+ struct something *instance;
+
+ instance = kmalloc(struct_size(instance, items, count - 1), GFP_KERNEL);
+ instance->count = count;
+
+ size = sizeof(instance->items) * instance->count;
+ memcpy(instance->items, source, size);
+
+In the example above, we had to remember to calculate ``count - 1`` when using
+the struct_size() helper, otherwise we would have --unintentionally-- allocated
+memory for one too many ``items`` objects. The cleanest and least error-prone way
+to implement this is through the use of a `flexible array member`, together with
+struct_size() and flex_array_size() helpers::
+
+ struct something {
+ size_t count;
+ struct foo items[];
+ };
+
+ struct something *instance;
+
+ instance = kmalloc(struct_size(instance, items, count), GFP_KERNEL);
+ instance->count = count;
+
+ memcpy(instance->items, source, flex_array_size(instance, items, instance->count));
+
+There are two special cases of replacement where the DECLARE_FLEX_ARRAY()
+helper needs to be used. (Note that it is named __DECLARE_FLEX_ARRAY() for
+use in UAPI headers.) Those cases are when the flexible array is either
+alone in a struct or is part of a union. These are disallowed by the C99
+specification, but for no technical reason (as can be seen by both the
+existing use of such arrays in those places and the work-around that
+DECLARE_FLEX_ARRAY() uses). For example, to convert this::
+
+ struct something {
+ ...
+ union {
+ struct type1 one[0];
+ struct type2 two[0];
+ };
+ };
+
+The helper must be used::
+
+ struct something {
+ ...
+ union {
+ DECLARE_FLEX_ARRAY(struct type1, one);
+ DECLARE_FLEX_ARRAY(struct type2, two);
+ };
+ };
diff --git a/Documentation/process/development-process.rst b/Documentation/process/development-process.rst
new file mode 100644
index 000000000..61c627e41
--- /dev/null
+++ b/Documentation/process/development-process.rst
@@ -0,0 +1,28 @@
+.. _development_process_main:
+
+A guide to the Kernel Development Process
+=========================================
+
+Contents:
+
+.. toctree::
+ :numbered:
+ :maxdepth: 2
+
+ 1.Intro
+ 2.Process
+ 3.Early-stage
+ 4.Coding
+ 5.Posting
+ 6.Followthrough
+ 7.AdvancedTopics
+ 8.Conclusion
+
+The purpose of this document is to help developers (and their managers)
+work with the development community with a minimum of frustration. It is
+an attempt to document how this community works in a way which is
+accessible to those who are not intimately familiar with Linux kernel
+development (or, indeed, free software development in general). While
+there is some technical material here, this is very much a process-oriented
+discussion which does not require a deep knowledge of kernel programming to
+understand.
diff --git a/Documentation/process/email-clients.rst b/Documentation/process/email-clients.rst
new file mode 100644
index 000000000..471e1f93f
--- /dev/null
+++ b/Documentation/process/email-clients.rst
@@ -0,0 +1,372 @@
+.. _email_clients:
+
+Email clients info for Linux
+============================
+
+Git
+---
+
+These days most developers use ``git send-email`` instead of regular
+email clients. The man page for this is quite good. On the receiving
+end, maintainers use ``git am`` to apply the patches.
+
+If you are new to ``git`` then send your first patch to yourself. Save it
+as raw text including all the headers. Run ``git am raw_email.txt`` and
+then review the changelog with ``git log``. When that works then send
+the patch to the appropriate mailing list(s).
+
+General Preferences
+-------------------
+
+Patches for the Linux kernel are submitted via email, preferably as
+inline text in the body of the email. Some maintainers accept
+attachments, but then the attachments should have content-type
+``text/plain``. However, attachments are generally frowned upon because
+it makes quoting portions of the patch more difficult in the patch
+review process.
+
+It's also strongly recommended that you use plain text in your email body,
+for patches and other emails alike. https://useplaintext.email may be useful
+for information on how to configure your preferred email client, as well as
+listing recommended email clients should you not already have a preference.
+
+Email clients that are used for Linux kernel patches should send the
+patch text untouched. For example, they should not modify or delete tabs
+or spaces, even at the beginning or end of lines.
+
+Don't send patches with ``format=flowed``. This can cause unexpected
+and unwanted line breaks.
+
+Don't let your email client do automatic word wrapping for you.
+This can also corrupt your patch.
+
+Email clients should not modify the character set encoding of the text.
+Emailed patches should be in ASCII or UTF-8 encoding only.
+If you configure your email client to send emails with UTF-8 encoding,
+you avoid some possible charset problems.
+
+Email clients should generate and maintain "References:" or "In-Reply-To:"
+headers so that mail threading is not broken.
+
+Copy-and-paste (or cut-and-paste) usually does not work for patches
+because tabs are converted to spaces. Using xclipboard, xclip, and/or
+xcutsel may work, but it's best to test this for yourself or just avoid
+copy-and-paste.
+
+Don't use PGP/GPG signatures in mail that contains patches.
+This breaks many scripts that read and apply the patches.
+(This should be fixable.)
+
+It's a good idea to send a patch to yourself, save the received message,
+and successfully apply it with 'patch' before sending patches to Linux
+mailing lists.
+
+
+Some email client (MUA) hints
+-----------------------------
+
+Here are some specific MUA configuration hints for editing and sending
+patches for the Linux kernel. These are not meant to be complete
+software package configuration summaries.
+
+
+Legend:
+
+- TUI = text-based user interface
+- GUI = graphical user interface
+
+Alpine (TUI)
+************
+
+Config options:
+
+In the :menuselection:`Sending Preferences` section:
+
+- :menuselection:`Do Not Send Flowed Text` must be ``enabled``
+- :menuselection:`Strip Whitespace Before Sending` must be ``disabled``
+
+When composing the message, the cursor should be placed where the patch
+should appear, and then pressing :kbd:`CTRL-R` lets you specify the patch file
+to insert into the message.
+
+Claws Mail (GUI)
+****************
+
+Works. Some people use this successfully for patches.
+
+To insert a patch use :menuselection:`Message-->Insert File` (:kbd:`CTRL-I`)
+or an external editor.
+
+If the inserted patch has to be edited in the Claws composition window
+"Auto wrapping" in
+:menuselection:`Configuration-->Preferences-->Compose-->Wrapping` should be
+disabled.
+
+Evolution (GUI)
+***************
+
+Some people use this successfully for patches.
+
+When composing mail select: Preformat
+ from :menuselection:`Format-->Paragraph Style-->Preformatted` (:kbd:`CTRL-7`)
+ or the toolbar
+
+Then use:
+:menuselection:`Insert-->Text File...` (:kbd:`ALT-N x`)
+to insert the patch.
+
+You can also ``diff -Nru old.c new.c | xclip``, select
+:menuselection:`Preformat`, then paste with the middle button.
+
+Kmail (GUI)
+***********
+
+Some people use Kmail successfully for patches.
+
+The default setting of not composing in HTML is appropriate; do not
+enable it.
+
+When composing an email, under options, uncheck "word wrap". The only
+disadvantage is any text you type in the email will not be word-wrapped
+so you will have to manually word wrap text before the patch. The easiest
+way around this is to compose your email with word wrap enabled, then save
+it as a draft. Once you pull it up again from your drafts it is now hard
+word-wrapped and you can uncheck "word wrap" without losing the existing
+wrapping.
+
+At the bottom of your email, put the commonly-used patch delimiter before
+inserting your patch: three hyphens (``---``).
+
+Then from the :menuselection:`Message` menu item, select
+:menuselection:`insert file` and choose your patch.
+As an added bonus you can customise the message creation toolbar menu
+and put the :menuselection:`insert file` icon there.
+
+Make the composer window wide enough so that no lines wrap. As of
+KMail 1.13.5 (KDE 4.5.4), KMail will apply word wrapping when sending
+the email if the lines wrap in the composer window. Having word wrapping
+disabled in the Options menu isn't enough. Thus, if your patch has very
+long lines, you must make the composer window very wide before sending
+the email. See: https://bugs.kde.org/show_bug.cgi?id=174034
+
+You can safely GPG sign attachments, but inlined text is preferred for
+patches so do not GPG sign them. Signing patches that have been inserted
+as inlined text will make them tricky to extract from their 7-bit encoding.
+
+If you absolutely must send patches as attachments instead of inlining
+them as text, right click on the attachment and select :menuselection:`properties`,
+and highlight :menuselection:`Suggest automatic display` so that the
+attachment is shown inline, which makes it easier to view.
+
+When saving patches that are sent as inlined text, select the email that
+contains the patch from the message list pane, right click and select
+:menuselection:`save as`. You can use the whole email unmodified as a patch
+if it was properly composed. Emails are saved as read-write for user only so
+you will have to chmod them to make them group and world readable if you copy
+them elsewhere.
+
+Lotus Notes (GUI)
+*****************
+
+Run away from it.
+
+IBM Verse (Web GUI)
+*******************
+
+See Lotus Notes.
+
+Mutt (TUI)
+**********
+
+Plenty of Linux developers use ``mutt``, so it must work pretty well.
+
+Mutt doesn't come with an editor, so whatever editor you use should be
+set up so that it adds no automatic line breaks. Most editors have
+an :menuselection:`insert file` option that inserts the contents of a file
+unaltered.
+
+To use ``vim`` with mutt::
+
+ set editor="vi"
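+
+If you compose in ``vim``, it may also help to make sure the editor itself
+does not wrap lines automatically, for example with::
+
+  :set textwidth=0 wrapmargin=0  " disable automatic wrapping while composing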
+
+If using xclip, type the command::
+
+ :set paste
+
+before middle button or shift-insert or use::
+
+ :r filename
+
+if you want to include the patch inline.
+(a)ttach works fine without ``set paste``.
+
+You can also generate patches with ``git format-patch`` and then use Mutt
+to send them::
+
+ $ mutt -H 0001-some-bug-fix.patch
+
+Config options:
+
+It should work with default settings.
+However, it's a good idea to set the ``send_charset`` to::
+
+ set send_charset="us-ascii:utf-8"
+
+Mutt is highly customizable. Here is a minimum configuration to start
+using Mutt to send patches through Gmail::
+
+ # .muttrc
+ # ================ IMAP ====================
+ set imap_user = 'yourusername@gmail.com'
+ set imap_pass = 'yourpassword'
+ set spoolfile = imaps://imap.gmail.com/INBOX
+ set folder = imaps://imap.gmail.com/
+ set record="imaps://imap.gmail.com/[Gmail]/Sent Mail"
+ set postponed="imaps://imap.gmail.com/[Gmail]/Drafts"
+ set mbox="imaps://imap.gmail.com/[Gmail]/All Mail"
+
+ # ================ SMTP ====================
+ set smtp_url = "smtp://username@smtp.gmail.com:587/"
+ set smtp_pass = $imap_pass
+ set ssl_force_tls = yes # Require encrypted connection
+
+ # ================ Composition ====================
+ set editor = `echo \$EDITOR`
+ set edit_headers = yes # See the headers when editing
+ set charset = UTF-8 # value of $LANG; also fallback for send_charset
+ # Sender, email address, and sign-off line must match
+ unset use_domain # because joe@localhost is just embarrassing
+ set realname = "YOUR NAME"
+ set from = "username@gmail.com"
+ set use_from = yes
+
+The Mutt docs have lots more information:
+
+ https://gitlab.com/muttmua/mutt/-/wikis/UseCases/Gmail
+
+ http://www.mutt.org/doc/manual/
+
+Pine (TUI)
+**********
+
+Pine has had some whitespace truncation issues in the past, but these
+should all be fixed now.
+
+Use alpine (pine's successor) if you can.
+
+Config options:
+
+- ``quell-flowed-text`` is needed for recent versions
+- the ``no-strip-whitespace-before-send`` option is needed
+
+
+Sylpheed (GUI)
+**************
+
+- Works well for inlining text (or using attachments).
+- Allows use of an external editor.
+- Is slow on large folders.
+- Won't do TLS SMTP auth over a non-SSL connection.
+- Has a helpful ruler bar in the compose window.
+- Adding addresses to address book doesn't understand the display name
+ properly.
+
+Thunderbird (GUI)
+*****************
+
+Thunderbird is an Outlook clone that likes to mangle text, but there are ways
+to coerce it into behaving.
+
+After making these modifications, which includes installing the extensions,
+you need to restart Thunderbird.
+
+- Allow use of an external editor:
+
+ The easiest thing to do with Thunderbird and patches is to use extensions
+ which open your favorite external editor.
+
+ Here are some example extensions which are capable of doing this.
+
+ - "External Editor Revived"
+
+ https://github.com/Frederick888/external-editor-revived
+
+ https://addons.thunderbird.net/en-GB/thunderbird/addon/external-editor-revived/
+
+ It requires installing a "native messaging host".
+ Please read the wiki which can be found here:
+ https://github.com/Frederick888/external-editor-revived/wiki
+
+ - "External Editor"
+
+ https://github.com/exteditor/exteditor
+
+ To do this, download and install the extension, then open the
+ :menuselection:`compose` window, add a button for it using
+ :menuselection:`View-->Toolbars-->Customize...`
+ then just click on the new button when you wish to use the external editor.
+
+ Please note that "External Editor" requires that your editor must not
+ fork, or in other words, the editor must not return before closing.
+ You may have to pass additional flags or change the settings of your
+ editor. Most notably, if you are using gvim then you must pass it the
+ ``-f`` (``--nofork``) option by putting ``/usr/bin/gvim --nofork`` (if the
+ binary is in ``/usr/bin``) into the text editor field in :menuselection:`external editor`
+ settings. If you are using some other editor then please read its manual
+ to find out how to do this.
+
+To beat some sense out of the internal editor, do this:
+
+- Edit your Thunderbird config settings so that it won't use ``format=flowed``!
+ Go to your main window and find the button for your main dropdown menu;
+ select :menuselection:`Main Menu-->Preferences-->General-->Config Editor...`
+ to bring up Thunderbird's config editor.
+
+ - Set ``mailnews.send_plaintext_flowed`` to ``false``
+
+ - Set ``mailnews.wraplength`` from ``72`` to ``0``
+
+- Don't write HTML messages! Go to the main window
+ :menuselection:`Main Menu-->Account Settings-->youracc@server.something-->Composition & Addressing`!
+ There you can disable the option "Compose messages in HTML format".
+
+- Open messages only as plain text! Go to the main window
+ :menuselection:`Main Menu-->View-->Message Body As-->Plain Text`!
+
+TkRat (GUI)
+***********
+
+Works. Use "Insert file..." or external editor.
+
+Gmail (Web GUI)
+***************
+
+Does not work for sending patches.
+
+The Gmail web client converts tabs to spaces automatically.
+
+It also wraps lines every 78 characters with CRLF-style line breaks; the
+tab-to-space problem, at least, can be worked around with an external editor.
+
+Another problem is that Gmail will base64-encode any message that has a
+non-ASCII character. That includes things like European names.
+
+Proton Mail
+***********
+
+Proton Mail has a "feature" where it looks up keys using Web Key Directory
+(WKD) and encrypts mail to any recipients for which it finds a key.
+Kernel.org publishes the WKD for all developers who have kernel.org accounts.
+As a result, emails sent using Proton Mail to kernel.org addresses will be
+encrypted.
+Unfortunately, Proton Mail does not provide a mechanism to disable the
+automatic encryption, viewing it as a privacy feature.
+The automatic encryption feature is also enabled for mail sent via the Proton
+Mail Bridge, so this affects all outgoing messages, including patches sent with
+``git send-email``.
+Encrypted mail adds unnecessary friction, as other developers may not have mail
+clients, or tooling, configured for use with encrypted mail and some mail
+clients may encrypt responses to encrypted mail for all recipients, including
+the mailing lists.
+Unless a way to disable this "feature" is introduced, Proton Mail is unsuited
+to kernel development.
diff --git a/Documentation/process/embargoed-hardware-issues.rst b/Documentation/process/embargoed-hardware-issues.rst
new file mode 100644
index 000000000..31000f075
--- /dev/null
+++ b/Documentation/process/embargoed-hardware-issues.rst
@@ -0,0 +1,319 @@
+.. _embargoed_hardware_issues:
+
+Embargoed hardware issues
+=========================
+
+Scope
+-----
+
+Hardware issues which result in security problems are a different category
+of security bugs than pure software bugs which only affect the Linux
+kernel.
+
+Hardware issues like Meltdown, Spectre, L1TF etc. must be treated
+differently because they usually affect all Operating Systems ("OS") and
+therefore need coordination across different OS vendors, distributions,
+hardware vendors and other parties. For some of the issues, software
+mitigations can depend on microcode or firmware updates, which need further
+coordination.
+
+.. _Contact:
+
+Contact
+-------
+
+The Linux kernel hardware security team is separate from the regular Linux
+kernel security team.
+
+The team only handles developing fixes for embargoed hardware security
+issues. Reports of pure software security bugs in the Linux kernel are not
+handled by this team and the reporter will be guided to contact the regular
+Linux kernel security team (:ref:`Documentation/admin-guide/
+<securitybugs>`) instead.
+
+The team can be contacted by email at <hardware-security@kernel.org>. This
+is a private list of security officers who will help you to coordinate a
+fix according to our documented process.
+
+The list is encrypted and email to the list can be sent by either PGP or
+S/MIME encrypted and must be signed with the reporter's PGP key or S/MIME
+certificate. The list's PGP key and S/MIME certificate are available from
+the following URLs:
+
+ - PGP: https://www.kernel.org/static/files/hardware-security.asc
+ - S/MIME: https://www.kernel.org/static/files/hardware-security.crt
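+
+As a rough sketch, preparing an encrypted and signed report with GnuPG on the
+command line (assuming your own key is already set up, and that the list
+key's recipient address matches the contact address above) could look like
+this; most mail clients can do the same via their PGP/MIME or S/MIME
+support::
+
+  $ curl -O https://www.kernel.org/static/files/hardware-security.asc
+  $ gpg --import hardware-security.asc
+  $ # recipient address assumed to match the list's contact address
+  $ gpg --armor --sign --encrypt -r hardware-security@kernel.org report.txt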
+
+While hardware security issues are often handled by the affected hardware
+vendor, we welcome contact from researchers or individuals who have
+identified a potential hardware flaw.
+
+Hardware security officers
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The current team of hardware security officers:
+
+ - Linus Torvalds (Linux Foundation Fellow)
+ - Greg Kroah-Hartman (Linux Foundation Fellow)
+ - Thomas Gleixner (Linux Foundation Fellow)
+
+Operation of mailing-lists
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The encrypted mailing-lists which are used in our process are hosted on
+Linux Foundation's IT infrastructure. By providing this service, members
+of Linux Foundation's IT operations personnel technically have the
+ability to access the embargoed information, but are obliged to
+confidentiality by their employment contract. Linux Foundation IT
+personnel are also responsible for operating and managing the rest of
+kernel.org infrastructure.
+
+The Linux Foundation's current director of IT Project infrastructure is
+Konstantin Ryabitsev.
+
+
+Non-disclosure agreements
+-------------------------
+
+The Linux kernel hardware security team is not a formal body and therefore
+unable to enter into any non-disclosure agreements. The kernel community
+is aware of the sensitive nature of such issues and offers a Memorandum of
+Understanding instead.
+
+
+Memorandum of Understanding
+---------------------------
+
+The Linux kernel community has a deep understanding of the requirement to
+keep hardware security issues under embargo for coordination between
+different OS vendors, distributors, hardware vendors and other parties.
+
+The Linux kernel community has successfully handled hardware security
+issues in the past and has the necessary mechanisms in place to allow
+community compliant development under embargo restrictions.
+
+The Linux kernel community has a dedicated hardware security team for
+initial contact, which oversees the process of handling such issues under
+embargo rules.
+
+The hardware security team identifies the developers (domain experts) who
+will form the initial response team for a particular issue. The initial
+response team can bring in further developers (domain experts) to address
+the issue in the best technical way.
+
+All involved developers pledge to adhere to the embargo rules and to keep
+the received information confidential. Violation of the pledge will lead to
+immediate exclusion from the current issue and removal from all related
+mailing-lists. In addition, the hardware security team will also exclude
+the offender from future issues. The impact of this consequence is a highly
+effective deterrent in our community. In case a violation happens the
+hardware security team will inform the involved parties immediately. If you
+or anyone becomes aware of a potential violation, please report it
+immediately to the Hardware security officers.
+
+
+Process
+^^^^^^^
+
+Due to the globally distributed nature of Linux kernel development,
+face-to-face meetings to address hardware security issues are almost
+impossible to arrange. Phone conferences are hard to coordinate due to time
+zones and other factors and should only be used when absolutely necessary.
+Encrypted
+email has been proven to be the most effective and secure communication
+method for these types of issues.
+
+Start of Disclosure
+"""""""""""""""""""
+
+Disclosure starts by contacting the Linux kernel hardware security team by
+email. This initial contact should contain a description of the problem and
+a list of any known affected hardware. If your organization builds or
+distributes the affected hardware, we encourage you to also consider what
+other hardware could be affected.
+
+The hardware security team will provide an incident-specific encrypted
+mailing-list which will be used for initial discussion with the reporter,
+further disclosure, and coordination of fixes.
+
+The hardware security team will provide the disclosing party a list of
+developers (domain experts) who should be informed initially about the
+issue after confirming with the developers that they will adhere to this
+Memorandum of Understanding and the documented process. These developers
+form the initial response team and will be responsible for handling the
+issue after initial contact. The hardware security team supports the
+response team, but is not necessarily involved in the mitigation
+development process.
+
+While individual developers might be covered by a non-disclosure agreement
+via their employer, they cannot enter individual non-disclosure agreements
+in their role as Linux kernel developers. They will, however, agree to
+adhere to this documented process and the Memorandum of Understanding.
+
+The disclosing party should provide a list of contacts for all other
+entities who have already been, or should be, informed about the issue.
+This serves several purposes:
+
+ - The list of disclosed entities allows communication across the
+ industry, e.g. other OS vendors, HW vendors, etc.
+
+ - The disclosed entities can be contacted to name experts who should
+ participate in the mitigation development.
+
+ - If an expert who is required to handle an issue is employed by a listed
+ entity, or is a member of a listed entity, then the response teams can
+ request the disclosure of that expert from that entity. This ensures
+ that the expert is also part of the entity's response team.
+
+Disclosure
+""""""""""
+
+The disclosing party provides detailed information to the initial response
+team via the specific encrypted mailing-list.
+
+From our experience the technical documentation of these issues is usually
+a sufficient starting point and further technical clarification is best
+done via email.
+
+Mitigation development
+""""""""""""""""""""""
+
+The initial response team sets up an encrypted mailing-list or repurposes
+an existing one if appropriate.
+
+Using a mailing-list is close to the normal Linux development process and
+has been successfully used in developing mitigations for various hardware
+security issues in the past.
+
+The mailing-list operates in the same way as normal Linux development.
+Patches are posted, discussed and reviewed and if agreed on applied to a
+non-public git repository which is only accessible to the participating
+developers via a secure connection. The repository contains the main
+development branch against the mainline kernel and backport branches for
+stable kernel versions as necessary.
+
+The initial response team will identify further experts from the Linux
+kernel developer community as needed. Bringing in experts can happen at any
+time of the development process and needs to be handled in a timely manner.
+
+If an expert is employed by, or is a member of, an entity on the disclosure list
+provided by the disclosing party, then participation will be requested from
+the relevant entity.
+
+If not, then the disclosing party will be informed about the expert's
+participation. The experts are covered by the Memorandum of Understanding
+and the disclosing party is requested to acknowledge their participation. If
+the disclosing party has a compelling reason to object, the objection has to
+be raised within five work days and resolved with the incident team
+immediately. If the disclosing party does not react within five work days,
+this is taken as silent acknowledgement.
+
+After acknowledgement or resolution of an objection the expert is disclosed
+by the incident team and brought into the development process.
+
+List participants may not communicate about the issue outside of the
+private mailing list. List participants may not use any shared resources
+(e.g. employer build farms, CI systems, etc) when working on patches.
+
+
+Coordinated release
+"""""""""""""""""""
+
+The involved parties will negotiate the date and time where the embargo
+ends. At that point the prepared mitigations are integrated into the
+relevant kernel trees and published. There is no pre-notification process:
+fixes are published in public and available to everyone at the same time.
+
+While we understand that hardware security issues need coordinated embargo
+time, the embargo time should be constrained to the minimum time which is
+required for all involved parties to develop, test and prepare the
+mitigations. Extending the embargo time artificially to meet conference talk
+dates or for other non-technical reasons creates more work and burden for
+the involved developers and response teams as the patches need to be kept
+up to date in order to follow the ongoing upstream kernel development,
+which might create conflicting changes.
+
+CVE assignment
+""""""""""""""
+
+Neither the hardware security team nor the initial response team assign
+CVEs, nor are CVEs required for the development process. If CVEs are
+provided by the disclosing party they can be used for documentation
+purposes.
+
+Process ambassadors
+-------------------
+
+For assistance with this process we have established ambassadors in various
+organizations, who can answer questions about or provide guidance on the
+reporting process and further handling. Ambassadors are not involved in the
+disclosure of a particular issue, unless requested by a response team or by
+an involved disclosed party. The current ambassadors list:
+
+ =============  ========================================================
+ AMD            Tom Lendacky <thomas.lendacky@amd.com>
+ Ampere         Darren Hart <darren@os.amperecomputing.com>
+ ARM            Catalin Marinas <catalin.marinas@arm.com>
+ IBM Power      Anton Blanchard <anton@linux.ibm.com>
+ IBM Z          Christian Borntraeger <borntraeger@de.ibm.com>
+ Intel          Tony Luck <tony.luck@intel.com>
+ Qualcomm       Trilok Soni <tsoni@codeaurora.org>
+ RISC-V         Palmer Dabbelt <palmer@dabbelt.com>
+ Samsung        Javier González <javier.gonz@samsung.com>
+
+ Microsoft      James Morris <jamorris@linux.microsoft.com>
+ Xen            Andrew Cooper <andrew.cooper3@citrix.com>
+
+ Canonical      John Johansen <john.johansen@canonical.com>
+ Debian         Ben Hutchings <ben@decadent.org.uk>
+ Oracle         Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
+ Red Hat        Josh Poimboeuf <jpoimboe@redhat.com>
+ SUSE           Jiri Kosina <jkosina@suse.cz>
+
+ Google         Kees Cook <keescook@chromium.org>
+
+ LLVM           Nick Desaulniers <ndesaulniers@google.com>
+ =============  ========================================================
+
+If you want your organization to be added to the ambassadors list, please
+contact the hardware security team. The nominated ambassador has to
+understand and support our process fully and is ideally well connected in
+the Linux kernel community.
+
+Encrypted mailing-lists
+-----------------------
+
+We use encrypted mailing-lists for communication. The operating principle
+of these lists is that email sent to the list is encrypted either with the
+list's PGP key or with the list's S/MIME certificate. The mailing-list
+software decrypts the email and re-encrypts it individually for each
+subscriber with the subscriber's PGP key or S/MIME certificate. Details
+about the mailing-list software and the setup which is used to ensure the
+security of the lists and protection of the data can be found here:
+https://korg.wiki.kernel.org/userdoc/remail.
+
+List keys
+^^^^^^^^^
+
+For initial contact see :ref:`Contact`. For incident specific mailing-lists
+the key and S/MIME certificate are conveyed to the subscribers by email
+sent from the specific list.
+
+Subscription to incident specific lists
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Subscription is handled by the response teams. Disclosed parties who want
+to participate in the communication send a list of potential subscribers to
+the response team so the response team can validate subscription requests.
+
+Each subscriber needs to send a subscription request to the response team
+by email. The email must be signed with the subscriber's PGP key or S/MIME
+certificate. If a PGP key is used, it must be available from a public key
+server and is ideally connected to the Linux kernel's PGP web of trust. See
+also: https://www.kernel.org/signature.html.
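+
+For example, one way to publish your public key on a commonly used keyserver
+(the key ID below is just a placeholder for your own) is::
+
+  $ gpg --keyserver hkps://keys.openpgp.org --send-keys 0xDEADBEEFDEADBEEF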
+
+The response team verifies that the subscriber request is valid and adds
+the subscriber to the list. After subscription the subscriber will receive
+email from the mailing-list which is signed either with the list's PGP key
+or the list's S/MIME certificate. The subscriber's email client can extract
+the PGP key or the S/MIME certificate from the signature so the subscriber
+can send encrypted email to the list.
+
diff --git a/Documentation/process/handling-regressions.rst b/Documentation/process/handling-regressions.rst
new file mode 100644
index 000000000..5d3c3de3f
--- /dev/null
+++ b/Documentation/process/handling-regressions.rst
@@ -0,0 +1,790 @@
+.. SPDX-License-Identifier: (GPL-2.0+ OR CC-BY-4.0)
+.. See the bottom of this file for additional redistribution information.
+
+Handling regressions
+++++++++++++++++++++
+
+*We don't cause regressions* -- this document describes what this "first rule of
+Linux kernel development" means in practice for developers. It complements
+Documentation/admin-guide/reporting-regressions.rst, which covers the topic from a
+user's point of view; if you have never read that text, go and at least skim
+over it before continuing here.
+
+The important bits (aka "The TL;DR")
+====================================
+
+#. Ensure subscribers of the `regression mailing list <https://lore.kernel.org/regressions/>`_
+ (regressions@lists.linux.dev) quickly become aware of any new regression
+ report:
+
+ * When receiving a mailed report that did not CC the list, bring it into the
+ loop by immediately sending at least a brief "Reply-all" with the list
+ CCed.
+
+ * Forward or bounce any reports submitted in bug trackers to the list.
+
+#. Make the Linux kernel regression tracking bot "regzbot" track the issue (this
+ is optional, but recommended):
+
+ * For mailed reports, check if the reporter included a line like ``#regzbot
+ introduced v5.13..v5.14-rc1``. If not, send a reply (with the regressions
+ list in CC) containing a paragraph like the following, which tells regzbot
+ when the issue started to happen::
+
+ #regzbot ^introduced 1f2e3d4c5b6a
+
+ * When forwarding reports from a bug tracker to the regressions list (see
+ above), include a paragraph like the following::
+
+ #regzbot introduced: v5.13..v5.14-rc1
+ #regzbot from: Some N. Ice Human <some.human@example.com>
+ #regzbot monitor: http://some.bugtracker.example.com/ticket?id=123456789
+
+#. When submitting fixes for regressions, add "Link:" tags to the patch
+ description pointing to all places where the issue was reported, as
+ mandated by Documentation/process/submitting-patches.rst and
+ :ref:`Documentation/process/5.Posting.rst <development_posting>`.
+
+#. Try to fix regressions quickly once the culprit has been identified; fixes
+ for most regressions should be merged within two weeks, but some need to be
+ resolved within two or three days.
+
+
+All the details on Linux kernel regressions relevant for developers
+===================================================================
+
+
+The important basics in more detail
+-----------------------------------
+
+
+What to do when receiving regression reports
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Ensure the Linux kernel's regression tracker and other subscribers of the
+`regression mailing list <https://lore.kernel.org/regressions/>`_
+(regressions@lists.linux.dev) become aware of any newly reported regression:
+
+ * When you receive a report by mail that did not CC the list, immediately bring
+ it into the loop by sending at least a brief "Reply-all" with the list CCed;
+ try to ensure it gets CCed again in case you reply to a reply that omitted
+ the list.
+
+ * If a report submitted in a bug tracker hits your Inbox, forward or bounce it
+ to the list. Consider checking the list archives beforehand, in case the
+ reporter already forwarded the report as instructed by
+ Documentation/admin-guide/reporting-issues.rst.
+
+When doing either, consider making the Linux kernel regression tracking bot
+"regzbot" immediately start tracking the issue:
+
+ * For mailed reports, check if the reporter included a "regzbot command" like
+ ``#regzbot introduced 1f2e3d4c5b6a``. If not, send a reply (with the
+ regressions list in CC) with a paragraph like the following::
+
+ #regzbot ^introduced: v5.13..v5.14-rc1
+
+ This tells regzbot the version range in which the issue started to happen;
+ you can specify a range using commit-ids as well or state a single commit-id
+ in case the reporter bisected the culprit.
+
+ Note the caret (^) before the "introduced": it tells regzbot to treat the
+ parent mail (the one you reply to) as the initial report for the regression
+ you want to see tracked; that's important, as regzbot will later look out
+ for patches with "Link:" tags pointing to the report in the archives on
+ lore.kernel.org.
+
+ * When forwarding a regression reported to a bug tracker, include a paragraph
+ with these regzbot commands::
+
+ #regzbot introduced: 1f2e3d4c5b6a
+ #regzbot from: Some N. Ice Human <some.human@example.com>
+ #regzbot monitor: http://some.bugtracker.example.com/ticket?id=123456789
+
+ Regzbot will then automatically associate patches with the report that
+ contain "Link:" tags pointing to your mail or the mentioned ticket.
+
+What's important when fixing regressions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You don't need to do anything special when submitting fixes for regressions; just
+remember to do what Documentation/process/submitting-patches.rst,
+:ref:`Documentation/process/5.Posting.rst <development_posting>`, and
+Documentation/process/stable-kernel-rules.rst already explain in more detail
+(a combined example follows the list below):
+
+ * Point to all places where the issue was reported using "Link:" tags::
+
+ Link: https://lore.kernel.org/r/30th.anniversary.repost@klaava.Helsinki.FI/
+ Link: https://bugzilla.kernel.org/show_bug.cgi?id=1234567890
+
+ * Add a "Fixes:" tag to specify the commit causing the regression.
+
+ * If the culprit was merged in an earlier development cycle, explicitly mark
+ the fix for backporting using the ``Cc: stable@vger.kernel.org`` tag.
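+
+Put together, the tail of such a fix's commit message might look roughly like
+this; the commit id, title, and sign-off are made up, while the links reuse
+the examples above::
+
+  Fixes: 1f2e3d4c5b6a ("subsys: add the feature that went sideways")
+  Cc: stable@vger.kernel.org
+  Link: https://lore.kernel.org/r/30th.anniversary.repost@klaava.Helsinki.FI/
+  Link: https://bugzilla.kernel.org/show_bug.cgi?id=1234567890
+  Signed-off-by: Some N. Ice Human <some.human@example.com>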
+
+All this is expected from you and important when it comes to regressions, as
+these tags are of great value for everyone (you included) who might be looking
+into the issue weeks, months, or years later. These tags are also crucial for
+tools and scripts used by other kernel developers or Linux distributions; one of
+these tools is regzbot, which heavily relies on the "Link:" tags to associate
+reports of regressions with changes resolving them.
+
+Expectations and best practices for fixing regressions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+As a Linux kernel developer, you are expected to give your best to prevent
+situations where a regression caused by a recent change of yours leaves users
+only these options:
+
+ * Run a kernel with a regression that impacts usage.
+
+ * Switch to an older or newer kernel series.
+
+ * Continue running an outdated and thus potentially insecure kernel for more
+ than three weeks after the regression's culprit was identified. Ideally it
+ should be less than two. And it ought to be just a few days, if the issue is
+ severe or affects many users -- either in general or in prevalent
+ environments.
+
+How to realize that in practice depends on various factors. Use the following
+rules of thumb as a guide.
+
+In general:
+
+ * Prioritize work on regressions over all other Linux kernel work, unless the
+ latter concerns a severe issue (e.g. acute security vulnerability, data loss,
+ bricked hardware, ...).
+
+ * Expedite fixing mainline regressions that recently made it into a proper
+ mainline, stable, or longterm release (either directly or via backport).
+
+ * Do not consider regressions from the current cycle as something that can wait
+ till the end of the cycle, as the issue might discourage or prevent users and
+ CI systems from testing mainline now or generally.
+
+ * Work with the required care to avoid additional or bigger damage, even if
+ resolving an issue then might take longer than outlined below.
+
+On timing once the culprit of a regression is known:
+
+ * Aim to mainline a fix within two or three days, if the issue is severe or
+ bothering many users -- either in general or in prevalent conditions like a
+ particular hardware environment, distribution, or stable/longterm series.
+
+ * Aim to mainline a fix by Sunday after the next, if the culprit made it
+ into a recent mainline, stable, or longterm release (either directly or via
+ backport); if the culprit became known early during a week and is simple to
+ resolve, try to mainline the fix within the same week.
+
+ * For other regressions, aim to mainline fixes before the hindmost Sunday
+ within the next three weeks. One or two Sundays later are acceptable, if the
+ regression is something people can live with easily for a while -- like a
+ mild performance regression.
+
+ * It's strongly discouraged to delay mainlining regression fixes till the next
+ merge window, except when the fix is extraordinarily risky or when the
+ culprit was mainlined more than a year ago.
+
+On procedure:
+
+ * Always consider reverting the culprit, as it's often the quickest and least
+ dangerous way to fix a regression. Don't worry about mainlining a fixed
+ variant later: that should be straight-forward, as most of the code went
+ through review once already.
+
+ * Try to resolve any regressions introduced in mainline during the past
+ twelve months before the current development cycle ends: Linus wants such
+ regressions to be handled like those from the current cycle, unless fixing
+ bears unusual risks.
+
+ * Consider CCing Linus on discussions or patch review, if a regression seems
+ tangly. Do the same in precarious or urgent cases -- especially if the
+ subsystem maintainer might be unavailable. Also CC the stable team, when you
+ know such a regression made it into a mainline, stable, or longterm release.
+
+ * For urgent regressions, consider asking Linus to pick up the fix straight
+ from the mailing list: he is totally fine with that for uncontroversial
+ fixes. Ideally though such requests should happen in accordance with the
+ subsystem maintainers or come directly from them.
+
+ * In case you are unsure if a fix is worth the risk of applying just days before
+ a new mainline release, send Linus a mail with the usual lists and people in
+ CC; in it, summarize the situation while asking him to consider picking up
+ the fix straight from the list. He can then make the call himself and, when
+ needed, even postpone the release. Such requests again should ideally happen
+ in accordance with the subsystem maintainers or come directly from them.
+
+Regarding stable and longterm kernels:
+
+ * You are free to leave regressions to the stable team, if they at no point in
+ time occurred with mainline or were fixed there already.
+
+ * If a regression made it into a proper mainline release during the past
+ twelve months, make sure to tag the fix with "Cc: stable@vger.kernel.org", as a
+ "Fixes:" tag alone does not guarantee a backport. Please add the same tag,
+ in case you know the culprit was backported to stable or longterm kernels.
+
+ * When receiving reports about regressions in recent stable or longterm kernel
+ series, please evaluate at least briefly if the issue might happen in current
+ mainline as well -- and if that seems likely, take hold of the report. If in
+ doubt, ask the reporter to check mainline.
+
+ * Whenever you want to swiftly resolve a regression that recently also made it
+ into a proper mainline, stable, or longterm release, fix it quickly in
+ mainline; when appropriate, involve Linus to fast-track the fix (see
+ above). That's because the stable team normally neither reverts nor fixes
+ any changes that cause the same problems in mainline.
+
+ * In case of urgent regression fixes you might want to ensure prompt
+ backporting by dropping the stable team a note once the fix was mainlined;
+ this is especially advisable during merge windows and shortly thereafter, as
+ the fix otherwise might land at the end of a huge patch queue.
+
+On patch flow:
+
+ * Developers, when trying to reach the time periods mentioned above, remember
+ to account for the time it takes to get fixes tested, reviewed, and merged by
+ Linus, ideally with them being in linux-next at least briefly. Hence, if a
+ fix is urgent, make it obvious to ensure others handle it appropriately.
+
+ * Reviewers, you are kindly asked to assist developers in reaching the time
+ periods mentioned above by reviewing regression fixes in a timely manner.
+
+ * Subsystem maintainers, you likewise are encouraged to expedite the handling
+ of regression fixes. Thus evaluate if skipping linux-next is an option for
+ the particular fix. Also consider sending git pull requests more often than
+ usual when needed. And try to avoid holding onto regression fixes over
+ weekends -- especially when the fix is marked for backporting.
+
+
+More aspects regarding regressions developers should be aware of
+----------------------------------------------------------------
+
+
+How to deal with changes where a risk of regression is known
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Evaluate how big the risk of regressions is, for example by performing a code
+search in Linux distributions and Git forges. Also consider asking other
+developers or projects likely to be affected to evaluate or even test the
+proposed change; if problems surface, maybe some solution acceptable for all
+can be found.
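+
+For instance, one quick way to get a first impression when changing a sysfs
+interface is to query a public code search service for the affected string;
+the path below is just a placeholder::
+
+  $ # placeholder query; substitute the string you are about to change
+  $ xdg-open "https://codesearch.debian.net/search?q=/sys/kernel/foo_feature"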
+
+If the risk of regressions in the end seems to be relatively small, go ahead
+with the change, but let all involved parties know about the risk. Hence, make
+sure your patch description makes this aspect obvious. Once the change is
+merged, tell the Linux kernel's regression tracker and the regressions mailing
+list about the risk, so everyone has the change on the radar in case reports
+trickle in. Depending on the risk, you also might want to ask the subsystem
+maintainer to mention the issue in his mainline pull request.
+
+What else is there to know about regressions?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Check out Documentation/admin-guide/reporting-regressions.rst; it covers a lot
+of other aspects you might want to be aware of:
+
+ * the purpose of the "no regressions rule"
+
+ * what issues actually qualify as regression
+
+ * who's in charge of finding the root cause of a regression
+
+ * how to handle tricky situations, e.g. when a regression is caused by a
+ security fix or when fixing a regression might cause another one
+
+Whom to ask for advice when it comes to regressions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Send a mail to the regressions mailing list (regressions@lists.linux.dev) while
+CCing the Linux kernel's regression tracker (regressions@leemhuis.info); if the
+issue might better be dealt with in private, feel free to omit the list.
+
+
+More about regression tracking and regzbot
+------------------------------------------
+
+
+Why the Linux kernel has a regression tracker, and why is regzbot used?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Rules like "no regressions" need someone to ensure they are followed, otherwise
+they are broken either accidentally or on purpose. History has shown this to be
+true for the Linux kernel as well. That's why Thorsten Leemhuis volunteered to
+keep an eye on things as the Linux kernel's regression tracker, who's
+occasionally helped by other people. Neither of them is paid to do this,
+which is why regression tracking is done on a best effort basis.
+
+Earlier attempts to manually track regressions have shown it's exhausting and
+frustrating work, which is why they were abandoned after a while. To prevent
+this from happening again, Thorsten developed regzbot to facilitate the work,
+with the long term goal to automate regression tracking as much as possible for
+everyone involved.
+
+How does regression tracking work with regzbot?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The bot watches for replies to reports of tracked regressions. Additionally,
+it's looking out for posted or committed patches referencing such reports
+with "Link:" tags; replies to such patch postings are tracked as well.
+Combined, this data provides good insights into the current state of the fixing
+process.
+
+Regzbot tries to do its job with as little overhead as possible for both
+reporters and developers. In fact, only reporters are burdened with an extra
+duty: they need to tell regzbot about the regression report using the ``#regzbot
+introduced`` command outlined above; if they don't do that, someone else can
+take care of that using ``#regzbot ^introduced``.
+
+For developers there normally is no extra work involved; they just need to make
+sure to do something that was expected long before regzbot came to light: add
+"Link:" tags to the patch description pointing to all reports about the issue
+fixed.
+
+Do I have to use regzbot?
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It's in the interest of everyone if you do, as kernel maintainers like Linus
+Torvalds partly rely on regzbot's tracking in their work -- for example when
+deciding to release a new version or extend the development phase. For this they
+need to be aware of all unfixed regressions; to do that, Linus is known to look
+into the weekly reports sent by regzbot.
+
+Do I have to tell regzbot about every regression I stumble upon?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Ideally yes: we are all humans and easily forget problems when something more
+important unexpectedly comes up -- for example a bigger problem in the Linux
+kernel or something in real life that's keeping us away from keyboards for a
+while. Hence, it's best to tell regzbot about every regression, except when you
+immediately write a fix and commit it to a tree regularly merged to the affected
+kernel series.
+
+How to see which regressions regzbot tracks currently?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Check `regzbot's web-interface <https://linux-regtracking.leemhuis.info/regzbot/>`_
+for the latest info; alternatively, `search for the latest regression report
+<https://lore.kernel.org/lkml/?q=%22Linux+regressions+report%22+f%3Aregzbot>`_,
+which regzbot normally sends out once a week on Sunday evening (UTC), a few
+hours before Linus usually publishes new (pre-)releases.
+
+What places is regzbot monitoring?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Regzbot is watching the most important Linux mailing lists as well as the git
+repositories of linux-next, mainline, and stable/longterm.
+
+What kind of issues are supposed to be tracked by regzbot?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The bot is meant to track regressions, hence please don't involve regzbot for
+regular issues. But it's okay for the Linux kernel's regression tracker if you
+use regzbot to track severe issues, like reports about hangs, corrupted data,
+or internal errors (Panic, Oops, BUG(), warning, ...).
+
+Can I add regressions found by CI systems to regzbot's tracking?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Feel free to do so, if the particular regression likely has impact on practical
+use cases and thus might be noticed by users; hence, please don't involve
+regzbot for theoretical regressions unlikely to show themselves in real world
+usage.
+
+How to interact with regzbot?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By using a 'regzbot command' in a direct or indirect reply to the mail with the
+regression report. These commands need to be in their own paragraph (IOW: they
+need to be separated from the rest of the mail using blank lines).
+
+One such command is ``#regzbot introduced <version or commit>``, which makes
+regzbot consider your mail as a regression report added to the tracking, as
+already described above; ``#regzbot ^introduced <version or commit>`` is another
+such command, which makes regzbot consider the parent mail as a report for a
+regression which it starts to track.
+
+Once one of those two commands has been utilized, other regzbot commands can be
+used in direct or indirect replies to the report. You can write them below one
+of the `introduced` commands or in replies to the mail that used one of them
+or that is itself a reply to that mail:
+
+ * Set or update the title::
+
+ #regzbot title: foo
+
+ * Monitor a discussion or bugzilla.kernel.org ticket where additional aspects of
+ the issue or a fix are discussed -- for example the posting of a patch fixing
+ the regression::
+
+ #regzbot monitor: https://lore.kernel.org/all/30th.anniversary.repost@klaava.Helsinki.FI/
+
+ Monitoring only works for lore.kernel.org and bugzilla.kernel.org; regzbot
+ will consider all messages in that thread or ticket as related to the fixing
+ process.
+
+ * Point to a place with further details of interest, like a mailing list post
+ or a ticket in a bug tracker that is somewhat related, but about a different
+ topic::
+
+ #regzbot link: https://bugzilla.kernel.org/show_bug.cgi?id=123456789
+
+ * Mark a regression as fixed by a commit that is heading upstream or already
+ landed::
+
+ #regzbot fixed-by: 1f2e3d4c5d
+
+ * Mark a regression as a duplicate of another one already tracked by regzbot::
+
+ #regzbot dup-of: https://lore.kernel.org/all/30th.anniversary.repost@klaava.Helsinki.FI/
+
+ * Mark a regression as invalid::
+
+ #regzbot invalid: wasn't a regression, problem has always existed
+
+Is there more to tell about regzbot and its commands?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+More detailed and up-to-date information about the Linux
+kernel's regression tracking bot can be found on its
+`project page <https://gitlab.com/knurd42/regzbot>`_, which among others
+contains a `getting started guide <https://gitlab.com/knurd42/regzbot/-/blob/main/docs/getting_started.md>`_
+and `reference documentation <https://gitlab.com/knurd42/regzbot/-/blob/main/docs/reference.md>`_
+which both cover more details than the above section.
+
+Quotes from Linus about regressions
+-----------------------------------
+
+Find below a few real life examples of how Linus Torvalds expects regressions to
+be handled:
+
+ * From `2017-10-26 (1/2)
+ <https://lore.kernel.org/lkml/CA+55aFwiiQYJ+YoLKCXjN_beDVfu38mg=Ggg5LFOcqHE8Qi7Zw@mail.gmail.com/>`_::
+
+ If you break existing user space setups THAT IS A REGRESSION.
+
+ It's not ok to say "but we'll fix the user space setup".
+
+ Really. NOT OK.
+
+ [...]
+
+ The first rule is:
+
+ - we don't cause regressions
+
+ and the corollary is that when regressions *do* occur, we admit to
+ them and fix them, instead of blaming user space.
+
+ The fact that you have apparently been denying the regression now for
+ three weeks means that I will revert, and I will stop pulling apparmor
+ requests until the people involved understand how kernel development
+ is done.
+
+ * From `2017-10-26 (2/2)
+ <https://lore.kernel.org/lkml/CA+55aFxW7NMAMvYhkvz1UPbUTUJewRt6Yb51QAx5RtrWOwjebg@mail.gmail.com/>`_::
+
+ People should basically always feel like they can update their kernel
+ and simply not have to worry about it.
+
+ I refuse to introduce "you can only update the kernel if you also
+ update that other program" kind of limitations. If the kernel used to
+ work for you, the rule is that it continues to work for you.
+
+ There have been exceptions, but they are few and far between, and they
+ generally have some major and fundamental reasons for having happened,
+ that were basically entirely unavoidable, and people _tried_hard_ to
+ avoid them. Maybe we can't practically support the hardware any more
+ after it is decades old and nobody uses it with modern kernels any
+ more. Maybe there's a serious security issue with how we did things,
+ and people actually depended on that fundamentally broken model. Maybe
+ there was some fundamental other breakage that just _had_ to have a
+ flag day for very core and fundamental reasons.
+
+ And notice that this is very much about *breaking* peoples environments.
+
+ Behavioral changes happen, and maybe we don't even support some
+ feature any more. There's a number of fields in /proc/<pid>/stat that
+ are printed out as zeroes, simply because they don't even *exist* in
+ the kernel any more, or because showing them was a mistake (typically
+ an information leak). But the numbers got replaced by zeroes, so that
+ the code that used to parse the fields still works. The user might not
+ see everything they used to see, and so behavior is clearly different,
+ but things still _work_, even if they might no longer show sensitive
+ (or no longer relevant) information.
+
+ But if something actually breaks, then the change must get fixed or
+ reverted. And it gets fixed in the *kernel*. Not by saying "well, fix
+ your user space then". It was a kernel change that exposed the
+ problem, it needs to be the kernel that corrects for it, because we
+ have a "upgrade in place" model. We don't have a "upgrade with new
+ user space".
+
+ And I seriously will refuse to take code from people who do not
+ understand and honor this very simple rule.
+
+ This rule is also not going to change.
+
+ And yes, I realize that the kernel is "special" in this respect. I'm
+ proud of it.
+
+ I have seen, and can point to, lots of projects that go "We need to
+ break that use case in order to make progress" or "you relied on
+ undocumented behavior, it sucks to be you" or "there's a better way to
+ do what you want to do, and you have to change to that new better
+ way", and I simply don't think that's acceptable outside of very early
+ alpha releases that have experimental users that know what they signed
+ up for. The kernel hasn't been in that situation for the last two
+ decades.
+
+ We do API breakage _inside_ the kernel all the time. We will fix
+ internal problems by saying "you now need to do XYZ", but then it's
+ about internal kernel API's, and the people who do that then also
+ obviously have to fix up all the in-kernel users of that API. Nobody
+ can say "I now broke the API you used, and now _you_ need to fix it
+ up". Whoever broke something gets to fix it too.
+
+ And we simply do not break user space.
+
+ * From `2020-05-21
+ <https://lore.kernel.org/all/CAHk-=wiVi7mSrsMP=fLXQrXK_UimybW=ziLOwSzFTtoXUacWVQ@mail.gmail.com/>`_::
+
+ The rules about regressions have never been about any kind of
+ documented behavior, or where the code lives.
+
+ The rules about regressions are always about "breaks user workflow".
+
+ Users are literally the _only_ thing that matters.
+
+ No amount of "you shouldn't have used this" or "that behavior was
+ undefined, it's your own fault your app broke" or "that used to work
+ simply because of a kernel bug" is at all relevant.
+
+ Now, reality is never entirely black-and-white. So we've had things
+ like "serious security issue" etc that just forces us to make changes
+ that may break user space. But even then the rule is that we don't
+ really have other options that would allow things to continue.
+
+ And obviously, if users take years to even notice that something
+ broke, or if we have sane ways to work around the breakage that
+ doesn't make for too much trouble for users (ie "ok, there are a
+ handful of users, and they can use a kernel command line to work
+ around it" kind of things) we've also been a bit less strict.
+
+ But no, "that was documented to be broken" (whether it's because the
+ code was in staging or because the man-page said something else) is
+ irrelevant. If staging code is so useful that people end up using it,
+ that means that it's basically regular kernel code with a flag saying
+ "please clean this up".
+
+ The other side of the coin is that people who talk about "API
+ stability" are entirely wrong. API's don't matter either. You can make
+ any changes to an API you like - as long as nobody notices.
+
+ Again, the regression rule is not about documentation, not about
+ API's, and not about the phase of the moon.
+
+ It's entirely about "we caused problems for user space that used to work".
+
+ * From `2017-11-05
+ <https://lore.kernel.org/all/CA+55aFzUvbGjD8nQ-+3oiMBx14c_6zOj2n7KLN3UsJ-qsd4Dcw@mail.gmail.com/>`_::
+
+ And our regression rule has never been "behavior doesn't change".
+ That would mean that we could never make any changes at all.
+
+ For example, we do things like add new error handling etc all the
+ time, which we then sometimes even add tests for in our kselftest
+ directory.
+
+ So clearly behavior changes all the time and we don't consider that a
+ regression per se.
+
+ The rule for a regression for the kernel is that some real user
+ workflow breaks. Not some test. Not a "look, I used to be able to do
+ X, now I can't".
+
+ * From `2018-08-03
+ <https://lore.kernel.org/all/CA+55aFwWZX=CXmWDTkDGb36kf12XmTehmQjbiMPCqCRG2hi9kw@mail.gmail.com/>`_::
+
+ YOU ARE MISSING THE #1 KERNEL RULE.
+
+ We do not regress, and we do not regress exactly because your are 100% wrong.
+
+ And the reason you state for your opinion is in fact exactly *WHY* you
+ are wrong.
+
+ Your "good reasons" are pure and utter garbage.
+
+ The whole point of "we do not regress" is so that people can upgrade
+ the kernel and never have to worry about it.
+
+ > Kernel had a bug which has been fixed
+
+ That is *ENTIRELY* immaterial.
+
+ Guys, whether something was buggy or not DOES NOT MATTER.
+
+ Why?
+
+ Bugs happen. That's a fact of life. Arguing that "we had to break
+ something because we were fixing a bug" is completely insane. We fix
+ tens of bugs every single day, thinking that "fixing a bug" means that
+ we can break something is simply NOT TRUE.
+
+ So bugs simply aren't even relevant to the discussion. They happen,
+ they get found, they get fixed, and it has nothing to do with "we
+ break users".
+
+ Because the only thing that matters IS THE USER.
+
+ How hard is that to understand?
+
+ Anybody who uses "but it was buggy" as an argument is entirely missing
+ the point. As far as the USER was concerned, it wasn't buggy - it
+ worked for him/her.
+
+ Maybe it worked *because* the user had taken the bug into account,
+ maybe it worked because the user didn't notice - again, it doesn't
+ matter. It worked for the user.
+
+ Breaking a user workflow for a "bug" is absolutely the WORST reason
+ for breakage you can imagine.
+
+ It's basically saying "I took something that worked, and I broke it,
+ but now it's better". Do you not see how f*cking insane that statement
+ is?
+
+ And without users, your program is not a program, it's a pointless
+ piece of code that you might as well throw away.
+
+ Seriously. This is *why* the #1 rule for kernel development is "we
+ don't break users". Because "I fixed a bug" is absolutely NOT AN
+ ARGUMENT if that bug fix broke a user setup. You actually introduced a
+ MUCH BIGGER bug by "fixing" something that the user clearly didn't
+ even care about.
+
+ And dammit, we upgrade the kernel ALL THE TIME without upgrading any
+ other programs at all. It is absolutely required, because flag-days
+ and dependencies are horribly bad.
+
+ And it is also required simply because I as a kernel developer do not
+ upgrade random other tools that I don't even care about as I develop
+ the kernel, and I want any of my users to feel safe doing the same
+ time.
+
+ So no. Your rule is COMPLETELY wrong. If you cannot upgrade a kernel
+ without upgrading some other random binary, then we have a problem.
+
+ * From `2021-06-05
+ <https://lore.kernel.org/all/CAHk-=wiUVqHN76YUwhkjZzwTdjMMJf_zN4+u7vEJjmEGh3recw@mail.gmail.com/>`_::
+
+ THERE ARE NO VALID ARGUMENTS FOR REGRESSIONS.
+
+ Honestly, security people need to understand that "not working" is not
+ a success case of security. It's a failure case.
+
+ Yes, "not working" may be secure. But security in that case is *pointless*.
+
+ * From `2011-05-06 (1/3)
+ <https://lore.kernel.org/all/BANLkTim9YvResB+PwRp7QTK-a5VNg2PvmQ@mail.gmail.com/>`_::
+
+ Binary compatibility is more important.
+
+ And if binaries don't use the interface to parse the format (or just
+ parse it wrongly - see the fairly recent example of adding uuid's to
+ /proc/self/mountinfo), then it's a regression.
+
+ And regressions get reverted, unless there are security issues or
+ similar that makes us go "Oh Gods, we really have to break things".
+
+ I don't understand why this simple logic is so hard for some kernel
+ developers to understand. Reality matters. Your personal wishes matter
+ NOT AT ALL.
+
+ If you made an interface that can be used without parsing the
+ interface description, then we're stuck with the interface. Theory
+ simply doesn't matter.
+
+ You could help fix the tools, and try to avoid the compatibility
+ issues that way. There aren't that many of them.
+
+ From `2011-05-06 (2/3)
+ <https://lore.kernel.org/all/BANLkTi=KVXjKR82sqsz4gwjr+E0vtqCmvA@mail.gmail.com/>`_::
+
+ it's clearly NOT an internal tracepoint. By definition. It's being
+ used by powertop.
+
+ From `2011-05-06 (3/3)
+ <https://lore.kernel.org/all/BANLkTinazaXRdGovYL7rRVp+j6HbJ7pzhg@mail.gmail.com/>`_::
+
+ We have programs that use that ABI and thus it's a regression if they break.
+
+ * From `2012-07-06 <https://lore.kernel.org/all/CA+55aFwnLJ+0sjx92EGREGTWOx84wwKaraSzpTNJwPVV8edw8g@mail.gmail.com/>`_::
+
+ > Now this got me wondering if Debian _unstable_ actually qualifies as a
+ > standard distro userspace.
+
+ Oh, if the kernel breaks some standard user space, that counts. Tons
+ of people run Debian unstable
+
+ * From `2019-09-15
+ <https://lore.kernel.org/lkml/CAHk-=wiP4K8DRJWsCo=20hn_6054xBamGKF2kPgUzpB5aMaofA@mail.gmail.com/>`_::
+
+ One _particularly_ last-minute revert is the top-most commit (ignoring
+ the version change itself) done just before the release, and while
+ it's very annoying, it's perhaps also instructive.
+
+ What's instructive about it is that I reverted a commit that wasn't
+ actually buggy. In fact, it was doing exactly what it set out to do,
+ and did it very well. In fact it did it _so_ well that the much
+ improved IO patterns it caused then ended up revealing a user-visible
+ regression due to a real bug in a completely unrelated area.
+
+ The actual details of that regression are not the reason I point that
+ revert out as instructive, though. It's more that it's an instructive
+ example of what counts as a regression, and what the whole "no
+ regressions" kernel rule means. The reverted commit didn't change any
+ API's, and it didn't introduce any new bugs. But it ended up exposing
+ another problem, and as such caused a kernel upgrade to fail for a
+ user. So it got reverted.
+
+ The point here being that we revert based on user-reported _behavior_,
+ not based on some "it changes the ABI" or "it caused a bug" concept.
+ The problem was really pre-existing, and it just didn't happen to
+ trigger before. The better IO patterns introduced by the change just
+ happened to expose an old bug, and people had grown to depend on the
+ previously benign behavior of that old issue.
+
+ And never fear, we'll re-introduce the fix that improved on the IO
+ patterns once we've decided just how to handle the fact that we had a
+ bad interaction with an interface that people had then just happened
+ to rely on incidental behavior for before. It's just that we'll have
+ to hash through how to do that (there are no less than three different
+ patches by three different developers being discussed, and there might
+ be more coming...). In the meantime, I reverted the thing that exposed
+ the problem to users for this release, even if I hope it will be
+ re-introduced (perhaps even backported as a stable patch) once we have
+ consensus about the issue it exposed.
+
+ Take-away from the whole thing: it's not about whether you change the
+ kernel-userspace ABI, or fix a bug, or about whether the old code
+ "should never have worked in the first place". It's about whether
+ something breaks existing users' workflow.
+
+ Anyway, that was my little aside on the whole regression thing. Since
+ it's that "first rule of kernel programming", I felt it is perhaps
+ worth just bringing it up every once in a while
+
+..
+ end-of-content
+..
+ This text is available under GPL-2.0+ or CC-BY-4.0, as stated at the top
+ of the file. If you want to distribute this text under CC-BY-4.0 only,
+ please use "The Linux kernel developers" for author attribution and link
+ this as source:
+ https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/plain/Documentation/process/handling-regressions.rst
+..
+ Note: Only the content of this RST file as found in the Linux kernel sources
+ is available under CC-BY-4.0, as versions of this text that were processed
+ (for example by the kernel's build system) might contain content taken from
+ files which use a more restrictive license.
diff --git a/Documentation/process/howto.rst b/Documentation/process/howto.rst
new file mode 100644
index 000000000..deb8235e2
--- /dev/null
+++ b/Documentation/process/howto.rst
@@ -0,0 +1,626 @@
+.. _process_howto:
+
+HOWTO do Linux kernel development
+=================================
+
+This is the be-all, end-all document on this topic. It contains
+instructions on how to become a Linux kernel developer and how to learn
+to work with the Linux kernel development community. It tries to not
+contain anything related to the technical aspects of kernel programming,
+but will help point you in the right direction for that.
+
+If anything in this document becomes out of date, please send in patches
+to the maintainer of this file, who is listed at the bottom of the
+document.
+
+
+Introduction
+------------
+
+So, you want to learn how to become a Linux kernel developer? Or you
+have been told by your manager, "Go write a Linux driver for this
+device." This document's goal is to teach you everything you need to
+know to achieve this by describing the process you need to go through,
+and hints on how to work with the community. It will also try to
+explain some of the reasons why the community works like it does.
+
+The kernel is written mostly in C, with some architecture-dependent
+parts written in assembly. A good understanding of C is required for
+kernel development. Assembly (any architecture) is not required unless
+you plan to do low-level development for that architecture. Though they
+are not a good substitute for a solid C education and/or years of
+experience, the following books are good for, if anything, reference:
+
+ - "The C Programming Language" by Kernighan and Ritchie [Prentice Hall]
+ - "Practical C Programming" by Steve Oualline [O'Reilly]
+ - "C: A Reference Manual" by Harbison and Steele [Prentice Hall]
+
+The kernel is written using GNU C and the GNU toolchain. While it
+adheres to the ISO C11 standard, it uses a number of extensions that are
+not featured in the standard. The kernel is a freestanding C
+environment, with no reliance on the standard C library, so some
+portions of the C standard are not supported. Arbitrary long long
+divisions and floating point are not allowed. It can sometimes be
+difficult to understand the assumptions the kernel has on the toolchain
+and the extensions that it uses, and unfortunately there is no
+definitive reference for them. Please check the gcc info pages (`info
+gcc`) for some information on them.
+
+Please remember that you are trying to learn how to work with the
+existing development community. It is a diverse group of people, with
+high standards for coding, style and procedure. These standards have
+been created over time based on what they have found to work best for
+such a large and geographically dispersed team. Try to learn as much as
+possible about these standards ahead of time, as they are well
+documented; do not expect people to adapt to you or your company's way
+of doing things.
+
+
+Legal Issues
+------------
+
+The Linux kernel source code is released under the GPL. Please see the file
+COPYING in the main directory of the source tree. The Linux kernel licensing
+rules and how to use `SPDX <https://spdx.org/>`_ identifiers in source code are
+described in :ref:`Documentation/process/license-rules.rst <kernel_licensing>`.
+If you have further questions about the license, please contact a lawyer, and do
+not ask on the Linux kernel mailing list. The people on the mailing lists are
+not lawyers, and you should not rely on their statements on legal matters.
+
+For common questions and answers about the GPL, please see:
+
+ https://www.gnu.org/licenses/gpl-faq.html
+
+
+Documentation
+-------------
+
+The Linux kernel source tree has a large range of documents that are
+invaluable for learning how to interact with the kernel community. When
+new features are added to the kernel, it is recommended that new
+documentation files are also added which explain how to use the feature.
+When a kernel change causes the interface that the kernel exposes to
+userspace to change, it is recommended that you send the information or
+a patch to the manual pages explaining the change to the manual pages
+maintainer at mtk.manpages@gmail.com, and CC the list
+linux-api@vger.kernel.org.
+
+Here is a list of files that are in the kernel source tree that are
+required reading:
+
+ :ref:`Documentation/admin-guide/README.rst <readme>`
+ This file gives a short background on the Linux kernel and describes
+ what is necessary to do to configure and build the kernel. People
+ who are new to the kernel should start here.
+
+ :ref:`Documentation/process/changes.rst <changes>`
+ This file gives a list of the minimum levels of various software
+ packages that are necessary to build and run the kernel
+ successfully.
+
+ :ref:`Documentation/process/coding-style.rst <codingstyle>`
+ This describes the Linux kernel coding style, and some of the
+ rationale behind it. All new code is expected to follow the
+ guidelines in this document. Most maintainers will only accept
+ patches if these rules are followed, and many people will only
+ review code if it is in the proper style.
+
+ :ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
+ This file describes in explicit detail how to successfully create
+ and send a patch, including (but not limited to):
+
+ - Email contents
+ - Email format
+ - Who to send it to
+
+ Following these rules will not guarantee success (as all patches are
+ subject to scrutiny for content and style), but not following them
+ will almost always prevent it.
+
+ Other excellent descriptions of how to create patches properly are:
+
+ "The Perfect Patch"
+ https://www.ozlabs.org/~akpm/stuff/tpp.txt
+
+ "Linux kernel patch submission format"
+ https://web.archive.org/web/20180829112450/http://linux.yyz.us/patch-format.html
+
+ :ref:`Documentation/process/stable-api-nonsense.rst <stable_api_nonsense>`
+ This file describes the rationale behind the conscious decision to
+ not have a stable API within the kernel, including things like:
+
+ - Subsystem shim-layers (for compatibility?)
+ - Driver portability between Operating Systems.
+ - Mitigating rapid change within the kernel source tree (or
+ preventing rapid change)
+
+ This document is crucial for understanding the Linux development
+ philosophy and is very important for people moving to Linux from
+ development on other Operating Systems.
+
+ :ref:`Documentation/process/security-bugs.rst <securitybugs>`
+ If you feel you have found a security problem in the Linux kernel,
+ please follow the steps in this document to help notify the kernel
+ developers, and help solve the issue.
+
+ :ref:`Documentation/process/management-style.rst <managementstyle>`
+ This document describes how Linux kernel maintainers operate and the
+ shared ethos behind their methodologies. This is important reading
+ for anyone new to kernel development (or anyone simply curious about
+ it), as it resolves a lot of common misconceptions and confusion
+ about the unique behavior of kernel maintainers.
+
+ :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
+ This file describes the rules on how the stable kernel releases
+ happen, and what to do if you want to get a change into one of these
+ releases.
+
+ :ref:`Documentation/process/kernel-docs.rst <kernel_docs>`
+ A list of external documentation that pertains to kernel
+ development. Please consult this list if you do not find what you
+ are looking for within the in-kernel documentation.
+
+ :ref:`Documentation/process/applying-patches.rst <applying_patches>`
+ A good introduction describing exactly what a patch is and how to
+ apply it to the different development branches of the kernel.
+
+The kernel also has a large number of documents that can be
+automatically generated from the source code itself or from
+ReStructuredText markups (ReST), like this one. This includes a
+full description of the in-kernel API, and rules on how to handle
+locking properly.
+
+All such documents can be generated as PDF or HTML by running::
+
+ make pdfdocs
+ make htmldocs
+
+respectively from the main kernel source directory.
+
+The documents that use ReST markup will be generated in Documentation/output.
+They can also be generated in LaTeX and ePub formats with::
+
+ make latexdocs
+ make epubdocs
+
+Becoming A Kernel Developer
+---------------------------
+
+If you do not know anything about Linux kernel development, you should
+look at the Linux KernelNewbies project:
+
+ https://kernelnewbies.org
+
+It consists of a helpful mailing list where you can ask almost any type
+of basic kernel development question (make sure to search the archives
+first, before asking something that has already been answered in the
+past.) It also has an IRC channel that you can use to ask questions in
+real-time, and a lot of helpful documentation that is useful for
+learning about Linux kernel development.
+
+The website has basic information about code organization, subsystems,
+and current projects (both in-tree and out-of-tree). It also describes
+some basic logistical information, like how to compile a kernel and
+apply a patch.
+
+If you do not know where you want to start, but you want to look for
+some task to start doing to join into the kernel development community,
+go to the Linux Kernel Janitor's project:
+
+ https://kernelnewbies.org/KernelJanitors
+
+It is a great place to start. It describes a list of relatively simple
+problems that need to be cleaned up and fixed within the Linux kernel
+source tree. Working with the developers in charge of this project, you
+will learn the basics of getting your patch into the Linux kernel tree,
+and possibly be pointed in the direction of what to go work on next, if
+you do not already have an idea.
+
+Before making any actual modifications to the Linux kernel code, it is
+imperative to understand how the code in question works. For this
+purpose, nothing is better than reading through it directly (most tricky
+bits are commented well), perhaps even with the help of specialized
+tools. One such tool that is particularly recommended is the Linux
+Cross-Reference project, which is able to present source code in a
+self-referential, indexed webpage format. An excellent up-to-date
+repository of the kernel code may be found at:
+
+ https://elixir.bootlin.com/
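+
+If you prefer to browse the code locally, the kernel's build system can also
+generate index files for common source-navigation tools. For example (run
+from the top of a kernel source tree; the exact set of targets may vary
+between kernel versions)::
+
+ make tags    # ctags index for vi/vim
+ make TAGS    # etags index for Emacs
+ make cscope  # cscope database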
+
+
+The development process
+-----------------------
+
+Linux kernel development process currently consists of a few different
+main kernel "branches" and lots of different subsystem-specific kernel
+branches. These different branches are:
+
+ - Linus's mainline tree
+ - Various stable trees with multiple major numbers
+ - Subsystem-specific trees
+ - linux-next integration testing tree
+
+Mainline tree
+~~~~~~~~~~~~~
+
+The mainline tree is maintained by Linus Torvalds, and can be found at
+https://kernel.org or in the repo. Its development process is as follows:
+
+ - As soon as a new kernel is released, a two-week window opens during
+ which maintainers can submit big diffs to Linus, usually patches that
+ have already been included in linux-next for a few weeks. The
+ preferred way to submit big changes
+ is using git (the kernel's source management tool, more information
+ can be found at https://git-scm.com/) but plain patches are also just
+ fine.
+ - After two weeks a -rc1 kernel is released and the focus is on making the
+ new kernel as rock solid as possible. Most of the patches at this point
+ should fix a regression. Bugs that have always existed are not
+ regressions, so only push these kinds of fixes if they are important.
+ Please note that a whole new driver (or filesystem) might be accepted
+ after -rc1 because there is no risk of causing regressions with such a
+ change as long as the change is self-contained and does not affect areas
+ outside of the code that is being added. git can be used to send
+ patches to Linus after -rc1 is released, but the patches need to also be
+ sent to a public mailing list for review.
+ - A new -rc is released whenever Linus deems the current git tree to
+ be in a reasonably sane state adequate for testing. The goal is to
+ release a new -rc kernel every week.
+ - The process continues until the kernel is considered "ready"; it
+ usually lasts around 6 weeks.
+
+It is worth mentioning what Andrew Morton wrote on the linux-kernel
+mailing list about kernel releases:
+
+ *"Nobody knows when a kernel will be released, because it's
+ released according to perceived bug status, not according to a
+ preconceived timeline."*
+
+Various stable trees with multiple major numbers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Kernels with 3-part versions are -stable kernels. They contain
+relatively small and critical fixes for security problems or significant
+regressions discovered in a given major mainline release. Each release
+in a major stable series increments the third part of the version
+number, keeping the first two parts the same.
+
+This is the recommended branch for users who want the most recent stable
+kernel and are not interested in helping test development/experimental
+versions.
+
+Stable trees are maintained by the "stable" team <stable@vger.kernel.org>, and
+are released as needs dictate. The normal release period is approximately
+two weeks, but it can be longer if there are no pressing problems. A
+security-related problem, instead, can cause a release to happen almost
+instantly.
+
+The file :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
+in the kernel tree documents what kinds of changes are acceptable for
+the -stable tree, and how the release process works.
+
+Subsystem-specific trees
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The maintainers of the various kernel subsystems --- and also many
+kernel subsystem developers --- expose their current state of
+development in source repositories. That way, others can see what is
+happening in the different areas of the kernel. In areas where
+development is rapid, a developer may be asked to base his submissions
+on such a subsystem kernel tree so that conflicts between the
+submission and other ongoing work are avoided.
+
+Most of these repositories are git trees, but there are also other SCMs
+in use, or patch queues being published as quilt series. Addresses of
+these subsystem repositories are listed in the MAINTAINERS file. Many
+of them can be browsed at https://git.kernel.org/.
+
+Before a proposed patch is committed to such a subsystem tree, it is
+subject to review which primarily happens on mailing lists (see the
+respective section below). For several kernel subsystems, this review
+process is tracked with the tool patchwork. Patchwork offers a web
+interface which shows patch postings, any comments on a patch or
+revisions to it, and maintainers can mark patches as under review,
+accepted, or rejected. Most of these patchwork sites are listed at
+https://patchwork.kernel.org/.
+
+linux-next integration testing tree
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before updates from subsystem trees are merged into the mainline tree,
+they need to be integration-tested. For this purpose, a special
+testing repository exists into which virtually all subsystem trees are
+pulled on an almost daily basis:
+
+ https://git.kernel.org/?p=linux/kernel/git/next/linux-next.git
+
+This way, linux-next gives a summary outlook of what is expected to go
+into the mainline kernel during the next merge window.
+Adventurous testers are very welcome to runtime-test linux-next.
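+
+For those who want to try it, one possible way to fetch and check out a
+linux-next release from an existing mainline clone looks roughly like this
+(the tag name below is only a placeholder; use whatever the most recent
+next-YYYYMMDD tag is)::
+
+ # add linux-next as an additional remote and fetch its tags
+ git remote add linux-next https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
+ git fetch linux-next --tags
+ # check out a recent release tag on a throwaway branch
+ git checkout -b next-test next-20240101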
+
+
+Bug Reporting
+-------------
+
+The file 'Documentation/admin-guide/reporting-issues.rst' in the main kernel
+source directory describes how to report a possible kernel bug, and details
+what kind of information is needed by the kernel developers to help track
+down the problem.
+
+
+Managing bug reports
+--------------------
+
+One of the best ways to put your hacking skills into practice is by fixing
+bugs reported by other people. Not only will you help make the kernel
+more stable, you'll also learn to fix real-world problems, improve your
+skills, and make other developers aware of your presence.
+Fixing bugs is one of the best ways to earn merit among other developers,
+because not many people like wasting time fixing other people's bugs.
+
+To work on already reported bugs, find a subsystem you are interested in.
+Check the MAINTAINERS file for where bugs for that subsystem get reported;
+often it will be a mailing list, rarely a bugtracker. Search the archives of
+that place for recent reports and help where you see fit. You may also want to
+check https://bugzilla.kernel.org for bug reports; only a handful of kernel
+subsystems use it actively for reporting or tracking, but bugs for the whole
+kernel get filed there nevertheless.
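+
+One way to find out where reports and patches for a given piece of code are
+handled is the in-tree ``scripts/get_maintainer.pl`` helper. A small example,
+run from the top of the source tree (the file path is only an illustration)::
+
+ ./scripts/get_maintainer.pl -f fs/ext4/inode.c
+
+It prints the responsible maintainers and the mailing lists to CC, which is
+usually also where existing reports for that code can be found.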
+
+
+Mailing lists
+-------------
+
+As some of the above documents describe, the majority of the core kernel
+developers participate on the Linux Kernel Mailing list. Details on how
+to subscribe and unsubscribe from the list can be found at:
+
+ http://vger.kernel.org/vger-lists.html#linux-kernel
+
+There are archives of the mailing list on the web in many different
+places. Use a search engine to find these archives. For example:
+
+ https://lore.kernel.org/lkml/
+
+It is highly recommended that you search the archives about the topic
+you want to bring up, before you post it to the list. A lot of things
+already discussed in detail are only recorded at the mailing list
+archives.
+
+Most of the individual kernel subsystems also have their own separate
+mailing list where they do their development efforts. See the
+MAINTAINERS file for a list of what these lists are for the different
+groups.
+
+Many of the lists are hosted on kernel.org. Information on them can be
+found at:
+
+ http://vger.kernel.org/vger-lists.html
+
+Please remember to follow good behavioral habits when using the lists.
+Though a bit cheesy, the following URL has some simple guidelines for
+interacting with the list (or any list):
+
+ http://www.albion.com/netiquette/
+
+If multiple people respond to your mail, the CC: list of recipients may
+get pretty large. Don't remove anybody from the CC: list without a good
+reason, and don't reply only to the list address. Get used to receiving the
+mail twice, once from the sender and once from the list, and don't try
+to tune that by adding fancy mail headers; people will not like it.
+
+Remember to keep the context and the attribution of your replies intact,
+keep the "John Kernelhacker wrote ...:" lines at the top of your reply, and
+add your statements between the individual quoted sections instead of
+writing at the top of the mail.
+
+If you add patches to your mail, make sure they are plain readable text
+as stated in :ref:`Documentation/process/submitting-patches.rst <submittingpatches>`.
+Kernel developers don't want to deal with
+attachments or compressed patches; they may want to comment on
+individual lines of your patch, which works only that way. Make sure you
+use a mail program that does not mangle spaces and tab characters. A
+good first test is to send the mail to yourself and try to apply your
+own patch by yourself. If that doesn't work, get your mail program fixed
+or change it until it works.
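+
+A rough sketch of such a self-test, using git (here, you@example.com stands
+for your own address and received.mbox for the message as saved back out of
+your mail client)::
+
+ git format-patch -1                    # export the latest commit as 0001-*.patch
+ ./scripts/checkpatch.pl 0001-*.patch   # check it for style problems
+ git send-email --to=you@example.com 0001-*.patch
+ git am received.mbox                   # the received mail must still apply cleanly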
+
+Above all, please remember to show respect to other subscribers.
+
+
+Working with the community
+--------------------------
+
+The goal of the kernel community is to provide the best possible kernel
+there is. When you submit a patch for acceptance, it will be reviewed
+on its technical merits and those alone. So, what should you be
+expecting?
+
+ - criticism
+ - comments
+ - requests for change
+ - requests for justification
+ - silence
+
+Remember, this is part of getting your patch into the kernel. You have
+to be able to take criticism and comments about your patches, evaluate
+them at a technical level and either rework your patches or provide
+clear and concise reasoning as to why those changes should not be made.
+If there are no responses to your posting, wait a few days and try
+again, sometimes things get lost in the huge volume.
+
+What should you not do?
+
+ - expect your patch to be accepted without question
+ - become defensive
+ - ignore comments
+ - resubmit the patch without making any of the requested changes
+
+In a community that is looking for the best technical solution possible,
+there will always be differing opinions on how beneficial a patch is.
+You have to be cooperative, and willing to adapt your idea to fit within
+the kernel. Or at least be willing to prove your idea is worth it.
+Remember, being wrong is acceptable as long as you are willing to work
+toward a solution that is right.
+
+It is normal that the answers to your first patch might simply be a list
+of a dozen things you should correct. This does **not** imply that your
+patch will not be accepted, and it is **not** meant against you
+personally. Simply correct all issues raised against your patch and
+resend it.
+
+
+Differences between the kernel community and corporate structures
+-----------------------------------------------------------------
+
+The kernel community works differently than most traditional corporate
+development environments. Here is a list of things that you can try to
+do to avoid problems:
+
+ Good things to say regarding your proposed changes:
+
+ - "This solves multiple problems."
+ - "This deletes 2000 lines of code."
+ - "Here is a patch that explains what I am trying to describe."
+ - "I tested it on 5 different architectures..."
+ - "Here is a series of small patches that..."
+ - "This increases performance on typical machines..."
+
+ Bad things you should avoid saying:
+
+ - "We did it this way in AIX/ptx/Solaris, so therefore it must be
+ good..."
+ - "I've being doing this for 20 years, so..."
+ - "This is required for my company to make money"
+ - "This is for our Enterprise product line."
+ - "Here is my 1000 page design document that describes my idea"
+ - "I've been working on this for 6 months..."
+ - "Here's a 5000 line patch that..."
+ - "I rewrote all of the current mess, and here it is..."
+ - "I have a deadline, and this patch needs to be applied now."
+
+Another way the kernel community is different than most traditional
+software engineering work environments is the faceless nature of
+interaction. One benefit of using email and IRC as the primary forms of
+communication is the lack of discrimination based on gender or race.
+The Linux kernel work environment is accepting of women and minorities
+because all you are is an email address. The international aspect also
+helps to level the playing field because you can't guess gender based on
+a person's name. A man may be named Andrea and a woman may be named Pat.
+Most women who have worked in the Linux kernel and have expressed an
+opinion have had positive experiences.
+
+The language barrier can cause problems for some people who are not
+comfortable with English. A good grasp of the language is often needed
+to get ideas across properly on mailing lists, so it is
+recommended that you check your emails to make sure they make sense in
+English before sending them.
+
+
+Break up your changes
+---------------------
+
+The Linux kernel community does not gladly accept large chunks of code
+dropped on it all at once. The changes need to be properly introduced,
+discussed, and broken up into tiny, individual portions. This is almost
+the exact opposite of what companies are used to doing. Your proposal
+should also be introduced very early in the development process, so that
+you can receive feedback on what you are doing. It also lets the
+community feel that you are working with them, and not simply using them
+as a dumping ground for your feature. However, don't send 50 emails at
+one time to a mailing list; your patch series should almost always be
+smaller than that.
+
+The reasons for breaking things up are the following:
+
+1) Small patches increase the likelihood that your patches will be
+ applied, since they don't take much time or effort to verify for
+ correctness. A 5 line patch can be applied by a maintainer with
+ barely a second glance. However, a 500 line patch may take hours to
+ review for correctness (the time it takes is exponentially
+ proportional to the size of the patch, or something).
+
+ Small patches also make it very easy to debug when something goes
+ wrong. It's much easier to back out patches one by one than it is
+ to dissect a very large patch after it's been applied (and broken
+ something).
+
+2) It's important not only to send small patches, but also to rewrite
+ and simplify (or simply re-order) patches before submitting them.
+
+Here is an analogy from kernel developer Al Viro:
+
+ *"Think of a teacher grading homework from a math student. The
+ teacher does not want to see the student's trials and errors
+ before they came up with the solution. They want to see the
+ cleanest, most elegant answer. A good student knows this, and
+ would never submit her intermediate work before the final
+ solution.*
+
+ *The same is true of kernel development. The maintainers and
+ reviewers do not want to see the thought process behind the
+ solution to the problem one is solving. They want to see a
+ simple and elegant solution."*
+
+It may be challenging to keep the balance between presenting an elegant
+solution and working together with the community, discussing your
+unfinished work. Therefore it is good to get into the process early in
+order to get feedback and improve your work, but it is also good to keep
+your changes in small chunks that may be accepted on their own, even
+when your whole task is not yet ready for inclusion.
+
+Also realize that it is not acceptable to send patches for inclusion
+that are unfinished and will be "fixed up later."
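+
+One common way to reshape a pile of work-in-progress commits into the kind of
+small, self-contained series described above is git's interactive rebase
+followed by format-patch. A rough sketch (the base branch and the number of
+patches are just placeholders)::
+
+ git rebase -i origin/master          # reorder, split and squash commits into clean steps
+ git format-patch --cover-letter -3   # export the series plus a cover letter for review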
+
+
+Justify your change
+-------------------
+
+Along with breaking up your patches, it is very important for you to let
+the Linux community know why they should add this change. New features
+must be justified as being needed and useful.
+
+
+Document your change
+--------------------
+
+When sending in your patches, pay special attention to what you say in
+the text in your email. This information will become the ChangeLog
+information for the patch, and will be preserved for everyone to see for
+all time. It should describe the patch completely, containing:
+
+ - why the change is necessary
+ - the overall design approach in the patch
+ - implementation details
+ - testing results
+
+For more details on what this should all look like, please see the
+ChangeLog section of the document:
+
+ "The Perfect Patch"
+ https://www.ozlabs.org/~akpm/stuff/tpp.txt
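+
+In practice, the changelog is simply your git commit message, so it pays to
+write it carefully before the patch is ever exported. A minimal sketch (the
+-s switch adds your Signed-off-by line, which submitting-patches.rst
+describes in detail)::
+
+ git commit -s          # the commit message you write here becomes the changelog
+ git show --stat HEAD   # review the final message and diffstat before posting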
+
+
+All of these things are sometimes very hard to do. It can take years to
+perfect these practices (if you ever do). It's a continuous process of
+improvement that requires a lot of patience and determination. But
+don't give up; it's possible. Many have done it before, and each had to
+start exactly where you are now.
+
+
+
+
+----------
+
+Thanks to Paolo Ciarrocchi who allowed the "Development Process"
+(https://lwn.net/Articles/94386/) section
+to be based on text he had written, and to Randy Dunlap and Gerrit
+Huizenga for some of the list of things you should and should not say.
+Also thanks to Pat Mochel, Hanna Linder, Randy Dunlap, Kay Sievers,
+Vojtech Pavlik, Jan Kara, Josh Boyer, Kees Cook, Andrew Morton, Andi
+Kleen, Vadim Lobanov, Jesper Juhl, Adrian Bunk, Keri Harris, Frans Pop,
+David A. Wheeler, Junio Hamano, Michael Kerrisk, and Alex Shepard for
+their review, comments, and contributions. Without their help, this
+document would not have been possible.
+
+
+
+Maintainer: Greg Kroah-Hartman <greg@kroah.com>
diff --git a/Documentation/process/index.rst b/Documentation/process/index.rst
new file mode 100644
index 000000000..b501cd977
--- /dev/null
+++ b/Documentation/process/index.rst
@@ -0,0 +1,82 @@
+.. raw:: latex
+
+ \renewcommand\thesection*
+ \renewcommand\thesubsection*
+
+.. _process_index:
+
+=============================================
+Working with the kernel development community
+=============================================
+
+So you want to be a Linux kernel developer? Welcome! While there is a lot
+to be learned about the kernel in a technical sense, it is also important
+to learn about how our community works. Reading these documents will make
+it much easier for you to get your changes merged with a minimum of
+trouble.
+
+Below are the essential guides that every developer should read.
+
+.. toctree::
+ :maxdepth: 1
+
+ license-rules
+ howto
+ code-of-conduct
+ code-of-conduct-interpretation
+ development-process
+ submitting-patches
+ handling-regressions
+ programming-language
+ coding-style
+ maintainer-handbooks
+ maintainer-pgp-guide
+ email-clients
+ kernel-enforcement-statement
+ kernel-driver-statement
+
+For security issues, see:
+
+.. toctree::
+ :maxdepth: 1
+
+ security-bugs
+ embargoed-hardware-issues
+
+Other guides to the community that are of interest to most developers are:
+
+.. toctree::
+ :maxdepth: 1
+
+ changes
+ stable-api-nonsense
+ management-style
+ stable-kernel-rules
+ submit-checklist
+ kernel-docs
+ deprecated
+ maintainers
+ researcher-guidelines
+ contribution-maturity-model
+
+These are some overall technical guides that have been put here for now for
+lack of a better place.
+
+.. toctree::
+ :maxdepth: 1
+
+ applying-patches
+ adding-syscalls
+ magic-number
+ volatile-considered-harmful
+ botching-up-ioctls
+ clang-format
+ ../riscv/patch-acceptance
+ ../core-api/unaligned-memory-access
+
+.. only:: subproject and html
+
+ Indices
+ =======
+
+ * :ref:`genindex`
diff --git a/Documentation/process/kernel-docs.rst b/Documentation/process/kernel-docs.rst
new file mode 100644
index 000000000..8660493b9
--- /dev/null
+++ b/Documentation/process/kernel-docs.rst
@@ -0,0 +1,210 @@
+.. _kernel_docs:
+
+Index of Further Kernel Documentation
+=====================================
+
+The need for a document like this one became apparent in the
+linux-kernel mailing list as the same questions, asking for pointers
+to information, appeared again and again.
+
+Fortunately, as more and more people come to GNU/Linux, more and more
+become interested in the kernel. But reading the sources is not always
+enough: it is easy to understand the code but miss the concepts, the
+philosophy, and the design decisions behind it.
+
+Unfortunately, not many documents are available for beginners to
+start with. And even when they exist, there was no "well-known" place
+that kept track of them. These lines try to fill that gap.
+
+PLEASE, if you know any paper not listed here or write a new document,
+include a reference to it here, following the kernel's patch submission
+process. Any corrections, ideas or comments are also welcome.
+
+All documents are cataloged with the following fields: the document's
+"Title", the "Author"/s, the "URL" where they can be found, some
+"Keywords" helpful when searching for specific topics, and a brief
+"Description" of the Document.
+
+.. note::
+
+ The documents in each section of this file are ordered by their
+ publication date, from the newest to the oldest. The maintainer(s) should
+ periodically retire resources as they become obsolete or outdated, with
+ the exception of foundational books.
+
+Docs at the Linux Kernel tree
+-----------------------------
+
+The Sphinx books should be built with ``make {htmldocs | pdfdocs | epubdocs}``.
+
+ * Name: **linux/Documentation**
+
+ :Author: Many.
+ :Location: Documentation/
+ :Keywords: text files, Sphinx.
+ :Description: Documentation that comes with the kernel sources,
+ inside the Documentation directory. Some pages from this document
+ (including this document itself) have been moved there, and might
+ be more up to date than the web version.
+
+On-line docs
+------------
+
+ * Title: **Linux Kernel Mailing List Glossary**
+
+ :Author: various
+ :URL: https://kernelnewbies.org/KernelGlossary
+ :Date: rolling version
+ :Keywords: glossary, terms, linux-kernel.
+ :Description: From the introduction: "This glossary is intended as
+ a brief description of some of the acronyms and terms you may hear
+ during discussion of the Linux kernel".
+
+ * Title: **The Linux Kernel Module Programming Guide**
+
+ :Author: Peter Jay Salzman, Michael Burian, Ori Pomerantz, Bob Mottram,
+ Jim Huang.
+ :URL: https://sysprog21.github.io/lkmpg/
+ :Date: 2021
+ :Keywords: modules, GPL book, /proc, ioctls, system calls,
+ interrupt handlers.
+ :Description: A very nice GPL book on the topic of modules
+ programming. Lots of examples. Currently the new version is being
+ actively maintained at https://github.com/sysprog21/lkmpg.
+
+Published books
+---------------
+
+ * Title: **Linux Kernel Debugging: Leverage proven tools and advanced techniques to effectively debug Linux kernels and kernel modules**
+
+ :Author: Kaiwan N Billimoria
+ :Publisher: Packt Publishing Ltd
+ :Date: August, 2022
+ :Pages: 638
+ :ISBN: 978-1801075039
+ :Notes: Debugging book
+
+ * Title: **Linux Kernel Programming: A Comprehensive Guide to Kernel Internals, Writing Kernel Modules, and Kernel Synchronization**
+
+ :Author: Kaiwan N Billimoria
+ :Publisher: Packt Publishing Ltd
+ :Date: March, 2021
+ :Pages: 754
+ :ISBN: 978-1789953435
+
+ * Title: **Linux Kernel Programming Part 2 - Char Device Drivers and Kernel Synchronization: Create user-kernel interfaces, work with peripheral I/O, and handle hardware interrupts**
+
+ :Author: Kaiwan N Billimoria
+ :Publisher: Packt Publishing Ltd
+ :Date: March, 2021
+ :Pages: 452
+ :ISBN: 978-1801079518
+
+ * Title: **Linux System Programming: Talking Directly to the Kernel and C Library**
+
+ :Author: Robert Love
+ :Publisher: O'Reilly Media
+ :Date: June, 2013
+ :Pages: 456
+ :ISBN: 978-1449339531
+ :Notes: Foundational book
+
+ * Title: **Linux Kernel Development, 3rd Edition**
+
+ :Author: Robert Love
+ :Publisher: Addison-Wesley
+ :Date: July, 2010
+ :Pages: 440
+ :ISBN: 978-0672329463
+ :Notes: Foundational book
+
+ * Title: **Practical Linux System Administration: A Guide to Installation, Configuration, and Management, 1st Edition**
+
+ :Author: Kenneth Hess
+ :Publisher: O'Reilly Media
+ :Date: May, 2023
+ :Pages: 246
+ :ISBN: 978-1098109035
+ :Notes: System administration
+
+.. _ldd3_published:
+
+ * Title: **Linux Device Drivers, 3rd Edition**
+
+ :Authors: Jonathan Corbet, Alessandro Rubini, and Greg Kroah-Hartman
+ :Publisher: O'Reilly & Associates
+ :Date: 2005
+ :Pages: 636
+ :ISBN: 0-596-00590-3
+ :Notes: Foundational book. Further information in
+ http://www.oreilly.com/catalog/linuxdrive3/
+ PDF format, URL: https://lwn.net/Kernel/LDD3/
+
+ * Title: **The Design of the UNIX Operating System**
+
+ :Author: Maurice J. Bach
+ :Publisher: Prentice Hall
+ :Date: 1986
+ :Pages: 471
+ :ISBN: 0-13-201757-1
+ :Notes: Foundational book
+
+Miscellaneous
+-------------
+
+ * Name: **Cross-Referencing Linux**
+
+ :URL: https://elixir.bootlin.com/
+ :Keywords: Browsing source code.
+ :Description: Another web-based Linux kernel source code browser.
+ Lots of cross references to variables and functions. You can see
+ where they are defined and where they are used.
+
+ * Name: **Linux Weekly News**
+
+ :URL: https://lwn.net
+ :Keywords: latest kernel news.
+ :Description: The title says it all. There's a fixed kernel section
+ summarizing developers' work, bug fixes, new features and versions
+ produced during the week.
+
+ * Name: **The home page of Linux-MM**
+
+ :Author: The Linux-MM team.
+ :URL: https://linux-mm.org/
+ :Keywords: memory management, Linux-MM, mm patches, TODO, docs,
+ mailing list.
+ :Description: Site devoted to Linux Memory Management development.
+ Memory related patches, HOWTOs, links, mm developers... Don't miss
+ it if you are interested in memory management development!
+
+ * Name: **Kernel Newbies IRC Channel and Website**
+
+ :URL: https://www.kernelnewbies.org
+ :Keywords: IRC, newbies, channel, asking doubts.
+ :Description: #kernelnewbies on irc.oftc.net.
+ #kernelnewbies is an IRC channel dedicated to the 'newbie'
+ kernel hacker. The audience mostly consists of people who are
+ learning about the kernel, working on kernel projects or
+ professional kernel hackers that want to help less seasoned kernel
+ people.
+ #kernelnewbies is on the OFTC IRC Network.
+ Try irc.oftc.net as your server and then /join #kernelnewbies.
+ The kernelnewbies website also hosts articles, documents, FAQs...
+
+ * Name: **linux-kernel mailing list archives and search engines**
+
+ :URL: http://vger.kernel.org/vger-lists.html
+ :URL: http://www.uwsg.indiana.edu/hypermail/linux/kernel/index.html
+ :URL: http://groups.google.com/group/mlist.linux.kernel
+ :Keywords: linux-kernel, archives, search.
+ :Description: Some of the linux-kernel mailing list archivers. If
+ you have a better/another one, please let me know.
+
+-------
+
+This document was originally based on:
+
+ https://www.dit.upm.es/~jmseyas/linux/kernel/hackers-docs.html
+
+and written by Juan-Mariano de Goyeneche
diff --git a/Documentation/process/kernel-driver-statement.rst b/Documentation/process/kernel-driver-statement.rst
new file mode 100644
index 000000000..a849790a6
--- /dev/null
+++ b/Documentation/process/kernel-driver-statement.rst
@@ -0,0 +1,202 @@
+.. _process_statement_driver:
+
+Kernel Driver Statement
+-----------------------
+
+Position Statement on Linux Kernel Modules
+==========================================
+
+
+We, the undersigned Linux kernel developers, consider any closed-source
+Linux kernel module or driver to be harmful and undesirable. We have
+repeatedly found them to be detrimental to Linux users, businesses, and
+the greater Linux ecosystem. Such modules negate the openness,
+stability, flexibility, and maintainability of the Linux development
+model and shut their users off from the expertise of the Linux
+community. Vendors that provide closed-source kernel modules force their
+customers to give up key Linux advantages or choose new vendors.
+Therefore, in order to take full advantage of the cost savings and
+shared support benefits open source has to offer, we urge vendors to
+adopt a policy of supporting their customers on Linux with open-source
+kernel code.
+
+We speak only for ourselves, and not for any company we might work for
+today, have in the past, or will in the future.
+
+ - Dave Airlie
+ - Nick Andrew
+ - Jens Axboe
+ - Ralf Baechle
+ - Felipe Balbi
+ - Ohad Ben-Cohen
+ - Muli Ben-Yehuda
+ - Jiri Benc
+ - Arnd Bergmann
+ - Thomas Bogendoerfer
+ - Vitaly Bordug
+ - James Bottomley
+ - Josh Boyer
+ - Neil Brown
+ - Mark Brown
+ - David Brownell
+ - Michael Buesch
+ - Franck Bui-Huu
+ - Adrian Bunk
+ - François Cami
+ - Ralph Campbell
+ - Luiz Fernando N. Capitulino
+ - Mauro Carvalho Chehab
+ - Denis Cheng
+ - Jonathan Corbet
+ - Glauber Costa
+ - Alan Cox
+ - Magnus Damm
+ - Ahmed S. Darwish
+ - Robert P. J. Day
+ - Hans de Goede
+ - Arnaldo Carvalho de Melo
+ - Helge Deller
+ - Jean Delvare
+ - Mathieu Desnoyers
+ - Sven-Thorsten Dietrich
+ - Alexey Dobriyan
+ - Daniel Drake
+ - Alex Dubov
+ - Randy Dunlap
+ - Michael Ellerman
+ - Pekka Enberg
+ - Jan Engelhardt
+ - Mark Fasheh
+ - J. Bruce Fields
+ - Larry Finger
+ - Jeremy Fitzhardinge
+ - Mike Frysinger
+ - Kumar Gala
+ - Robin Getz
+ - Liam Girdwood
+ - Jan-Benedict Glaw
+ - Thomas Gleixner
+ - Brice Goglin
+ - Cyrill Gorcunov
+ - Andy Gospodarek
+ - Thomas Graf
+ - Krzysztof Halasa
+ - Harvey Harrison
+ - Stephen Hemminger
+ - Michael Hennerich
+ - Tejun Heo
+ - Benjamin Herrenschmidt
+ - Kristian Høgsberg
+ - Henrique de Moraes Holschuh
+ - Marcel Holtmann
+ - Mike Isely
+ - Takashi Iwai
+ - Olof Johansson
+ - Dave Jones
+ - Jesper Juhl
+ - Matthias Kaehlcke
+ - Kenji Kaneshige
+ - Jan Kara
+ - Jeremy Kerr
+ - Russell King
+ - Olaf Kirch
+ - Roel Kluin
+ - Hans-Jürgen Koch
+ - Auke Kok
+ - Peter Korsgaard
+ - Jiri Kosina
+ - Aaro Koskinen
+ - Mariusz Kozlowski
+ - Greg Kroah-Hartman
+ - Michael Krufky
+ - Aneesh Kumar
+ - Clemens Ladisch
+ - Christoph Lameter
+ - Gunnar Larisch
+ - Anders Larsen
+ - Grant Likely
+ - John W. Linville
+ - Yinghai Lu
+ - Tony Luck
+ - Pavel Machek
+ - Matt Mackall
+ - Paul Mackerras
+ - Roland McGrath
+ - Patrick McHardy
+ - Kyle McMartin
+ - Paul Menage
+ - Thierry Merle
+ - Eric Miao
+ - Akinobu Mita
+ - Ingo Molnar
+ - James Morris
+ - Andrew Morton
+ - Paul Mundt
+ - Oleg Nesterov
+ - Luca Olivetti
+ - S.Çağlar Onur
+ - Pierre Ossman
+ - Keith Owens
+ - Venkatesh Pallipadi
+ - Nick Piggin
+ - Nicolas Pitre
+ - Evgeniy Polyakov
+ - Richard Purdie
+ - Mike Rapoport
+ - Sam Ravnborg
+ - Gerrit Renker
+ - Stefan Richter
+ - David Rientjes
+ - Luis R. Rodriguez
+ - Stefan Roese
+ - Francois Romieu
+ - Rami Rosen
+ - Stephen Rothwell
+ - Maciej W. Rozycki
+ - Mark Salyzyn
+ - Yoshinori Sato
+ - Deepak Saxena
+ - Holger Schurig
+ - Amit Shah
+ - Yoshihiro Shimoda
+ - Sergei Shtylyov
+ - Kay Sievers
+ - Sebastian Siewior
+ - Rik Snel
+ - Jes Sorensen
+ - Alexey Starikovskiy
+ - Alan Stern
+ - Timur Tabi
+ - Hirokazu Takata
+ - Eliezer Tamir
+ - Eugene Teo
+ - Doug Thompson
+ - FUJITA Tomonori
+ - Dmitry Torokhov
+ - Marcelo Tosatti
+ - Steven Toth
+ - Theodore Tso
+ - Matthias Urlichs
+ - Geert Uytterhoeven
+ - Arjan van de Ven
+ - Ivo van Doorn
+ - Rik van Riel
+ - Wim Van Sebroeck
+ - Hans Verkuil
+ - Horst H. von Brand
+ - Dmitri Vorobiev
+ - Anton Vorontsov
+ - Daniel Walker
+ - Johannes Weiner
+ - Harald Welte
+ - Matthew Wilcox
+ - Dan J. Williams
+ - Darrick J. Wong
+ - David Woodhouse
+ - Chris Wright
+ - Bryan Wu
+ - Rafael J. Wysocki
+ - Herbert Xu
+ - Vlad Yasevich
+ - Peter Zijlstra
+ - Bartlomiej Zolnierkiewicz
diff --git a/Documentation/process/kernel-enforcement-statement.rst b/Documentation/process/kernel-enforcement-statement.rst
new file mode 100644
index 000000000..dc2d813b2
--- /dev/null
+++ b/Documentation/process/kernel-enforcement-statement.rst
@@ -0,0 +1,163 @@
+.. _process_statement_kernel:
+
+Linux Kernel Enforcement Statement
+----------------------------------
+
+As developers of the Linux kernel, we have a keen interest in how our software
+is used and how the license for our software is enforced. Compliance with the
+reciprocal sharing obligations of GPL-2.0 is critical to the long-term
+sustainability of our software and community.
+
+Although there is a right to enforce the separate copyright interests in the
+contributions made to our community, we share an interest in ensuring that
+individual enforcement actions are conducted in a manner that benefits our
+community and do not have an unintended negative impact on the health and
+growth of our software ecosystem. In order to deter unhelpful enforcement
+actions, we agree that it is in the best interests of our development
+community to undertake the following commitment to users of the Linux kernel
+on behalf of ourselves and any successors to our copyright interests:
+
+ Notwithstanding the termination provisions of the GPL-2.0, we agree that
+ it is in the best interests of our development community to adopt the
+ following provisions of GPL-3.0 as additional permissions under our
+ license with respect to any non-defensive assertion of rights under the
+ license.
+
+ However, if you cease all violation of this License, then your license
+ from a particular copyright holder is reinstated (a) provisionally,
+ unless and until the copyright holder explicitly and finally
+ terminates your license, and (b) permanently, if the copyright holder
+ fails to notify you of the violation by some reasonable means prior to
+ 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+ reinstated permanently if the copyright holder notifies you of the
+ violation by some reasonable means, this is the first time you have
+ received notice of violation of this License (for any work) from that
+ copyright holder, and you cure the violation prior to 30 days after
+ your receipt of the notice.
+
+Our intent in providing these assurances is to encourage more use of the
+software. We want companies and individuals to use, modify and distribute
+this software. We want to work with users in an open and transparent way to
+eliminate any uncertainty about our expectations regarding compliance or
+enforcement that might limit adoption of our software. We view legal action
+as a last resort, to be initiated only when other community efforts have
+failed to resolve the problem.
+
+Finally, once a non-compliance issue is resolved, we hope the user will feel
+welcome to join us in our efforts on this project. Working together, we will
+be stronger.
+
+Except where noted below, we speak only for ourselves, and not for any company
+we might work for today, have in the past, or will in the future.
+
+ - Laura Abbott
+ - Bjorn Andersson (Linaro)
+ - Andrea Arcangeli
+ - Neil Armstrong
+ - Jens Axboe
+ - Pablo Neira Ayuso
+ - Khalid Aziz
+ - Ralf Baechle
+ - Felipe Balbi
+ - Arnd Bergmann
+ - Ard Biesheuvel
+ - Tim Bird
+ - Paolo Bonzini
+ - Christian Borntraeger
+ - Mark Brown (Linaro)
+ - Paul Burton
+ - Javier Martinez Canillas
+ - Rob Clark
+ - Kees Cook (Google)
+ - Jonathan Corbet
+ - Dennis Dalessandro
+ - Vivien Didelot (Savoir-faire Linux)
+ - Hans de Goede
+ - Mel Gorman (SUSE)
+ - Sven Eckelmann
+ - Alex Elder (Linaro)
+ - Fabio Estevam
+ - Larry Finger
+ - Bhumika Goyal
+ - Andy Gross
+ - Juergen Gross
+ - Shawn Guo
+ - Ulf Hansson
+ - Stephen Hemminger (Microsoft)
+ - Tejun Heo
+ - Rob Herring
+ - Masami Hiramatsu
+ - Michal Hocko
+ - Simon Horman
+ - Johan Hovold (Hovold Consulting AB)
+ - Christophe JAILLET
+ - Olof Johansson
+ - Lee Jones (Linaro)
+ - Heiner Kallweit
+ - Srinivas Kandagatla
+ - Jan Kara
+ - Shuah Khan (Samsung)
+ - David Kershner
+ - Jaegeuk Kim
+ - Namhyung Kim
+ - Colin Ian King
+ - Jeff Kirsher
+ - Greg Kroah-Hartman (Linux Foundation)
+ - Christian König
+ - Vinod Koul
+ - Krzysztof Kozlowski
+ - Viresh Kumar
+ - Aneesh Kumar K.V
+ - Julia Lawall
+ - Doug Ledford
+ - Chuck Lever (Oracle)
+ - Daniel Lezcano
+ - Shaohua Li
+ - Xin Long
+ - Tony Luck
+ - Catalin Marinas (Arm Ltd)
+ - Mike Marshall
+ - Chris Mason
+ - Paul E. McKenney
+ - Arnaldo Carvalho de Melo
+ - David S. Miller
+ - Ingo Molnar
+ - Kuninori Morimoto
+ - Trond Myklebust
+ - Martin K. Petersen (Oracle)
+ - Borislav Petkov
+ - Jiri Pirko
+ - Josh Poimboeuf
+ - Sebastian Reichel (Collabora)
+ - Guenter Roeck
+ - Joerg Roedel
+ - Leon Romanovsky
+ - Steven Rostedt (VMware)
+ - Frank Rowand
+ - Ivan Safonov
+ - Anna Schumaker
+ - Jes Sorensen
+ - K.Y. Srinivasan
+ - David Sterba (SUSE)
+ - Heiko Stuebner
+ - Jiri Kosina (SUSE)
+ - Willy Tarreau
+ - Dmitry Torokhov
+ - Linus Torvalds
+ - Thierry Reding
+ - Rik van Riel
+ - Luis R. Rodriguez
+ - Geert Uytterhoeven (Glider bvba)
+ - Eduardo Valentin (Amazon.com)
+ - Daniel Vetter
+ - Linus Walleij
+ - Richard Weinberger
+ - Dan Williams
+ - Rafael J. Wysocki
+ - Arvind Yadav
+ - Masahiro Yamada
+ - Wei Yongjun
+ - Lv Zheng
+ - Marc Zyngier (Arm Ltd)
diff --git a/Documentation/process/license-rules.rst b/Documentation/process/license-rules.rst
new file mode 100644
index 000000000..2ef44ada3
--- /dev/null
+++ b/Documentation/process/license-rules.rst
@@ -0,0 +1,485 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+.. _kernel_licensing:
+
+Linux kernel licensing rules
+============================
+
+The Linux Kernel is provided under the terms of the GNU General Public
+License version 2 only (GPL-2.0), as provided in LICENSES/preferred/GPL-2.0,
+with an explicit syscall exception described in
+LICENSES/exceptions/Linux-syscall-note, as described in the COPYING file.
+
+This documentation file provides a description of how each source file
+should be annotated to make its license clear and unambiguous.
+It doesn't replace the Kernel's license.
+
+The license described in the COPYING file applies to the kernel source
+as a whole, though individual source files can have a different license
+which is required to be compatible with the GPL-2.0::
+
+ GPL-1.0+ : GNU General Public License v1.0 or later
+ GPL-2.0+ : GNU General Public License v2.0 or later
+ LGPL-2.0 : GNU Library General Public License v2 only
+ LGPL-2.0+ : GNU Library General Public License v2 or later
+ LGPL-2.1 : GNU Lesser General Public License v2.1 only
+ LGPL-2.1+ : GNU Lesser General Public License v2.1 or later
+
+Aside from that, individual files can be provided under a dual license,
+e.g. one of the compatible GPL variants and alternatively under a
+permissive license like BSD, MIT etc.
+
+The User-space API (UAPI) header files, which describe the interface of
+user-space programs to the kernel are a special case. According to the
+note in the kernel COPYING file, the syscall interface is a clear boundary,
+which does not extend the GPL requirements to any software which uses it to
+communicate with the kernel. Because the UAPI headers must be includable
+into any source files which create an executable running on the Linux
+kernel, the exception must be documented by a special license expression.
+
+The common way of expressing the license of a source file is to add the
+matching boilerplate text into the top comment of the file. Due to
+formatting, typos etc. these "boilerplates" are hard to validate for
+tools which are used in the context of license compliance.
+
+An alternative to boilerplate text is the use of Software Package Data
+Exchange (SPDX) license identifiers in each source file. SPDX license
+identifiers are machine parsable and precise shorthands for the license
+under which the content of the file is contributed. SPDX license
+identifiers are managed by the SPDX Workgroup at the Linux Foundation and
+have been agreed on by partners throughout the industry, tool vendors, and
+legal teams. For further information see https://spdx.org/
+
+The Linux kernel requires the precise SPDX identifier in all source files.
+The valid identifiers used in the kernel are explained in the section
+`License identifiers`_ and have been retrieved from the official SPDX
+license list at https://spdx.org/licenses/ along with the license texts.
+
+License identifier syntax
+-------------------------
+
+1. Placement:
+
+ The SPDX license identifier in kernel files shall be added at the first
+ possible line in a file which can contain a comment. For the majority
+ of files this is the first line, except for scripts which require the
+ '#!PATH_TO_INTERPRETER' in the first line. For those scripts the SPDX
+ identifier goes into the second line.
+
+|
+
+2. Style:
+
+ The SPDX license identifier is added in form of a comment. The comment
+ style depends on the file type::
+
+ C source: // SPDX-License-Identifier: <SPDX License Expression>
+ C header: /* SPDX-License-Identifier: <SPDX License Expression> */
+ ASM: /* SPDX-License-Identifier: <SPDX License Expression> */
+ scripts: # SPDX-License-Identifier: <SPDX License Expression>
+ .rst: .. SPDX-License-Identifier: <SPDX License Expression>
+ .dts{i}: // SPDX-License-Identifier: <SPDX License Expression>
+
+ If a specific tool cannot handle the standard comment style, then the
+ appropriate comment mechanism which the tool accepts shall be used. This
+ is the reason for having the "/\* \*/" style comment in C header
+ files. There was build breakage observed with generated .lds files where
+ 'ld' failed to parse the C++ comment. This has been fixed by now, but
+ there are still older assembler tools which cannot handle C++ style
+ comments.
+
+|
+
+3. Syntax:
+
+ A <SPDX License Expression> is either an SPDX short form license
+ identifier found on the SPDX License List, or the combination of two
+ SPDX short form license identifiers separated by "WITH" when a license
+ exception applies. When multiple licenses apply, an expression consists
+ of keywords "AND", "OR" separating sub-expressions and surrounded by
+ "(", ")" .
+
+ License identifiers for licenses like [L]GPL with the 'or later' option
+ are constructed by using a "+" for indicating the 'or later' option.::
+
+ // SPDX-License-Identifier: GPL-2.0+
+ // SPDX-License-Identifier: LGPL-2.1+
+
+ WITH should be used when there is a modifier to a license needed.
+ For example, the linux kernel UAPI files use the expression::
+
+ // SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+ // SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note
+
+ Other examples using WITH exceptions found in the kernel are::
+
+ // SPDX-License-Identifier: GPL-2.0 WITH mif-exception
+ // SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0
+
+ Exceptions can only be used with particular License identifiers. The
+ valid License identifiers are listed in the tags of the exception text
+ file. For details see the point `Exceptions`_ in the chapter `License
+ identifiers`_.
+
+ OR should be used if the file is dual licensed and only one license is
+ to be selected. For example, some dtsi files are available under dual
+ licenses::
+
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+
+ Examples from the kernel for license expressions in dual licensed files::
+
+ // SPDX-License-Identifier: GPL-2.0 OR MIT
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+ // SPDX-License-Identifier: GPL-2.0 OR Apache-2.0
+ // SPDX-License-Identifier: GPL-2.0 OR MPL-1.1
+ // SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR MIT
+ // SPDX-License-Identifier: GPL-1.0+ OR BSD-3-Clause OR OpenSSL
+
+ AND should be used if the file has multiple licenses whose terms all
+ apply to use the file. For example, if code is inherited from another
+ project and permission has been given to put it in the kernel, but the
+ original license terms need to remain in effect::
+
+ // SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) AND MIT
+
+ Another example where both sets of license terms need to be
+ adhered to is::
+
+ // SPDX-License-Identifier: GPL-1.0+ AND LGPL-2.1+
+
+License identifiers
+-------------------
+
+The licenses currently used, as well as the licenses for code added to the
+kernel, can be broken down into:
+
+1. _`Preferred licenses`:
+
+ Whenever possible these licenses should be used as they are known to be
+ fully compatible and widely used. These licenses are available from the
+ directory::
+
+ LICENSES/preferred/
+
+ in the kernel source tree.
+
+ The files in this directory contain the full license text and
+ `Metatags`_. The file names are identical to the SPDX license
+ identifier which shall be used for the license in source files.
+
+ Examples::
+
+ LICENSES/preferred/GPL-2.0
+
+ Contains the GPL version 2 license text and the required metatags::
+
+ LICENSES/preferred/MIT
+
+ Contains the MIT license text and the required metatags
+
+ _`Metatags`:
+
+ The following meta tags must be available in a license file:
+
+ - Valid-License-Identifier:
+
+ One or more lines which declare which License Identifiers are valid
+ inside the project to reference this particular license text. Usually
+ this is a single valid identifier, but e.g. for licenses with the 'or
+ later' option two identifiers are valid.
+
+ - SPDX-URL:
+
+ The URL of the SPDX page which contains additional information related
+ to the license.
+
+ - Usage-Guide:
+
+ Freeform text for usage advice. The text must include correct examples
+ for the SPDX license identifiers as they should be put into source
+ files according to the `License identifier syntax`_ guidelines.
+
+ - License-Text:
+
+ All text after this tag is treated as the original license text
+
+ File format examples::
+
+ Valid-License-Identifier: GPL-2.0
+ Valid-License-Identifier: GPL-2.0+
+ SPDX-URL: https://spdx.org/licenses/GPL-2.0.html
+ Usage-Guide:
+ To use this license in source code, put one of the following SPDX
+ tag/value pairs into a comment according to the placement
+ guidelines in the licensing rules documentation.
+ For 'GNU General Public License (GPL) version 2 only' use:
+ SPDX-License-Identifier: GPL-2.0
+ For 'GNU General Public License (GPL) version 2 or any later version' use:
+ SPDX-License-Identifier: GPL-2.0+
+ License-Text:
+ Full license text
+
+ ::
+
+ SPDX-License-Identifier: MIT
+ SPDX-URL: https://spdx.org/licenses/MIT.html
+ Usage-Guide:
+ To use this license in source code, put the following SPDX
+ tag/value pair into a comment according to the placement
+ guidelines in the licensing rules documentation.
+ SPDX-License-Identifier: MIT
+ License-Text:
+ Full license text
+
+|
+
+2. Deprecated licenses:
+
+ These licenses should only be used for existing code or for importing
+ code from a different project. These licenses are available from the
+ directory::
+
+ LICENSES/deprecated/
+
+ in the kernel source tree.
+
+ The files in this directory contain the full license text and
+ `Metatags`_. The file names are identical to the SPDX license
+ identifier which shall be used for the license in source files.
+
+ Examples::
+
+ LICENSES/deprecated/ISC
+
+ Contains the Internet Systems Consortium license text and the required
+ metatags::
+
+ LICENSES/deprecated/GPL-1.0
+
+ Contains the GPL version 1 license text and the required metatags.
+
+ Metatags:
+
+ The metatag requirements for deprecated licenses are identical to the
+ requirements of the `Preferred licenses`_.
+
+ File format example::
+
+ Valid-License-Identifier: ISC
+ SPDX-URL: https://spdx.org/licenses/ISC.html
+ Usage-Guide:
+ Usage of this license in the kernel for new code is discouraged
+ and it should solely be used for importing code from an already
+ existing project.
+ To use this license in source code, put the following SPDX
+ tag/value pair into a comment according to the placement
+ guidelines in the licensing rules documentation.
+ SPDX-License-Identifier: ISC
+ License-Text:
+ Full license text
+
+|
+
+3. Dual Licensing Only:
+
+ These licenses should only be used to dual license code with another
+ license in addition to a preferred license. These licenses are available
+ from the directory::
+
+ LICENSES/dual/
+
+ in the kernel source tree.
+
+ The files in this directory contain the full license text and
+ `Metatags`_. The file names are identical to the SPDX license
+ identifier which shall be used for the license in source files.
+
+ Examples::
+
+ LICENSES/dual/MPL-1.1
+
+ Contains the Mozilla Public License version 1.1 license text and the
+ required metatags::
+
+ LICENSES/dual/Apache-2.0
+
+ Contains the Apache License version 2.0 license text and the required
+ metatags.
+
+ Metatags:
+
+ The metatag requirements for 'dual licensing only' licenses are identical
+ to the requirements of the `Preferred licenses`_.
+
+ File format example::
+
+ Valid-License-Identifier: MPL-1.1
+ SPDX-URL: https://spdx.org/licenses/MPL-1.1.html
+ Usage-Guide:
+ Do NOT use. The MPL-1.1 is not GPL2 compatible. It may only be used for
+ dual-licensed files where the other license is GPL2 compatible.
+ If you end up using this it MUST be used together with a GPL2 compatible
+ license using "OR".
+ To use the Mozilla Public License version 1.1 put the following SPDX
+ tag/value pair into a comment according to the placement guidelines in
+ the licensing rules documentation:
+ SPDX-License-Identifier: MPL-1.1
+ License-Text:
+ Full license text
+
+|
+
+4. _`Exceptions`:
+
+ Some licenses can be amended with exceptions which grant certain rights
+ which the original license does not. These exceptions are available
+ from the directory::
+
+ LICENSES/exceptions/
+
+ in the kernel source tree. The files in this directory contain the full
+ exception text and the required `Exception Metatags`_.
+
+ Examples::
+
+ LICENSES/exceptions/Linux-syscall-note
+
+ Contains the Linux syscall exception as documented in the COPYING
+ file of the Linux kernel, which is used for UAPI header files.
+ e.g. /\* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note \*/::
+
+ LICENSES/exceptions/GCC-exception-2.0
+
+ Contains the GCC 'linking exception' which allows linking any binary,
+ independent of its license, against the compiled version of a file marked
+ with this exception. This is required for creating runnable executables
+ from source code which is not compatible with the GPL.
+
+ _`Exception Metatags`:
+
+ The following meta tags must be available in an exception file:
+
+ - SPDX-Exception-Identifier:
+
+ One exception identifier which can be used with SPDX license
+ identifiers.
+
+ - SPDX-URL:
+
+ The URL of the SPDX page which contains additional information related
+ to the exception.
+
+ - SPDX-Licenses:
+
+ A comma separated list of SPDX license identifiers for which the
+ exception can be used.
+
+ - Usage-Guidance:
+
+ Freeform text for usage advice. The text must be followed by correct
+ examples for the SPDX license identifiers as they should be put into
+ source files according to the `License identifier syntax`_ guidelines.
+
+ - Exception-Text:
+
+ All text after this tag is treated as the original exception text
+
+ File format examples::
+
+ SPDX-Exception-Identifier: Linux-syscall-note
+ SPDX-URL: https://spdx.org/licenses/Linux-syscall-note.html
+ SPDX-Licenses: GPL-2.0, GPL-2.0+, GPL-1.0+, LGPL-2.0, LGPL-2.0+, LGPL-2.1, LGPL-2.1+
+ Usage-Guidance:
+ This exception is used together with one of the above SPDX-Licenses
+ to mark user-space API (uapi) header files so they can be included
+ into non GPL compliant user-space application code.
+ To use this exception add it with the keyword WITH to one of the
+ identifiers in the SPDX-Licenses tag:
+ SPDX-License-Identifier: <SPDX-License> WITH Linux-syscall-note
+ Exception-Text:
+ Full exception text
+
+ ::
+
+ SPDX-Exception-Identifier: GCC-exception-2.0
+ SPDX-URL: https://spdx.org/licenses/GCC-exception-2.0.html
+ SPDX-Licenses: GPL-2.0, GPL-2.0+
+ Usage-Guidance:
+ The "GCC Runtime Library exception 2.0" is used together with one
+ of the above SPDX-Licenses for code imported from the GCC runtime
+ library.
+ To use this exception add it with the keyword WITH to one of the
+ identifiers in the SPDX-Licenses tag:
+ SPDX-License-Identifier: <SPDX-License> WITH GCC-exception-2.0
+ Exception-Text:
+ Full exception text
+
+
+All SPDX license identifiers and exceptions must have a corresponding file
+in the LICENSES subdirectories. This is required to allow tool
+verification (e.g. checkpatch.pl) and to have the licenses ready to read
+and extract right from the source, which is recommended by various FOSS
+organizations, e.g. the `FSFE REUSE initiative <https://reuse.software/>`_.
+
+_`MODULE_LICENSE`
+-----------------
+
+ Loadable kernel modules also require a MODULE_LICENSE() tag. This tag is
+ neither a replacement for proper source code license information
+ (SPDX-License-Identifier) nor in any way relevant for expressing or
+ determining the exact license under which the source code of the module
+ is provided.
+
+ The sole purpose of this tag is to provide sufficient information for
+ the kernel module loader and for user space tools about whether the
+ module is free software or proprietary.
+
+ The valid license strings for MODULE_LICENSE() are:
+
+ ============================= =============================================
+ "GPL" Module is licensed under GPL version 2. This
+ does not express any distinction between
+ GPL-2.0-only or GPL-2.0-or-later. The exact
+ license information can only be determined
+ via the license information in the
+ corresponding source files.
+
+ "GPL v2" Same as "GPL". It exists for historic
+ reasons.
+
+ "GPL and additional rights" Historical variant of expressing that the
+ module source is dual licensed under a
+ GPL v2 variant and MIT license. Please do
+ not use in new code.
+
+ "Dual MIT/GPL" The correct way of expressing that the
+ module is dual licensed under a GPL v2
+ variant or MIT license choice.
+
+ "Dual BSD/GPL" The module is dual licensed under a GPL v2
+ variant or BSD license choice. The exact
+ variant of the BSD license can only be
+ determined via the license information
+ in the corresponding source files.
+
+ "Dual MPL/GPL" The module is dual licensed under a GPL v2
+ variant or Mozilla Public License (MPL)
+ choice. The exact variant of the MPL
+ license can only be determined via the
+ license information in the corresponding
+ source files.
+
+ "Proprietary" The module is under a proprietary license.
+ This string is solely for proprietary third
+ party modules and cannot be used for modules
+ which have their source code in the kernel
+ tree. Modules tagged that way are tainting
+ the kernel with the 'P' flag when loaded and
+ the kernel module loader refuses to link such
+ modules against symbols which are exported
+ with EXPORT_SYMBOL_GPL().
+ ============================= =============================================
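+
+ For illustration only (this snippet is not taken from the kernel tree), a
+ minimal module source file carrying both the SPDX license identifier and a
+ matching MODULE_LICENSE() tag could look like this::
+
+    // SPDX-License-Identifier: GPL-2.0
+    #include <linux/module.h>
+
+    static int __init demo_init(void)
+    {
+            return 0;
+    }
+
+    static void __exit demo_exit(void)
+    {
+    }
+
+    module_init(demo_init);
+    module_exit(demo_exit);
+
+    /*
+     * Tells the module loader that this module is free software (GPL v2
+     * variant); the exact license is expressed by the SPDX identifier above.
+     */
+    MODULE_LICENSE("GPL");
+    MODULE_DESCRIPTION("Illustrative example module");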
+
+
+
diff --git a/Documentation/process/magic-number.rst b/Documentation/process/magic-number.rst
new file mode 100644
index 000000000..7029c3c08
--- /dev/null
+++ b/Documentation/process/magic-number.rst
@@ -0,0 +1,84 @@
+.. _magicnumbers:
+
+Linux magic numbers
+===================
+
+This file is a registry of magic numbers which are in use. When you
+add a magic number to a structure, you should also add it to this
+file, since it is best if the magic numbers used by various structures
+are unique.
+
+It is a **very** good idea to protect kernel data structures with magic
+numbers. This allows you to check at run time whether (a) a structure
+has been clobbered, or (b) you've passed the wrong structure to a
+routine. The latter is especially useful when you are
+passing pointers to structures via a void * pointer. The tty code,
+for example, does this frequently to pass driver-specific and line
+discipline-specific structures back and forth.
+
+The way to use magic numbers is to declare them at the beginning of
+the structure, like so::
+
+ struct tty_ldisc {
+ int magic;
+ ...
+ };
+
+Please follow this discipline when you are adding future enhancements
+to the kernel! It has saved me countless hours of debugging,
+especially in the screwy cases where an array has been overrun and
+structures following the array have been overwritten. Using this
+discipline, these cases get detected quickly and safely.
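+
+A run-time check can then look like the following sketch (purely
+illustrative; the structure name and magic value are made up and are not
+part of the registry below)::
+
+    #define FOO_MAGIC 0x5a5af00d
+
+    struct foo {
+            int magic;
+            /* ... */
+    };
+
+    static int foo_do_something(void *arg)
+    {
+            struct foo *f = arg;
+
+            /*
+             * Catch callers that passed the wrong structure, or a structure
+             * that has been clobbered by an overrun elsewhere.
+             */
+            if (f->magic != FOO_MAGIC)
+                    return -EINVAL;
+
+            /* ... operate on the structure ... */
+            return 0;
+    }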
+
+Changelog::
+
+ Theodore Ts'o
+ 31 Mar 94
+
+ The magic table is current to Linux 2.1.55.
+
+ Michael Chastain
+ <mailto:mec@shout.net>
+ 22 Sep 1997
+
+ Now it should be up to date with Linux 2.1.112. Because
+ we are in feature freeze time it is very unlikely that
+ something will change before 2.2.x. The entries are
+ sorted by number field.
+
+ Krzysztof G. Baranowski
+ <mailto: kgb@knm.org.pl>
+ 29 Jul 1998
+
+ Updated the magic table to Linux 2.5.45. Right over the feature freeze,
+ but it is possible that some new magic numbers will sneak into the
+ kernel before 2.6.x yet.
+
+ Petr Baudis
+ <pasky@ucw.cz>
+ 03 Nov 2002
+
+ Updated the magic table to Linux 2.5.74.
+
+ Fabian Frederick
+ <ffrederick@users.sourceforge.net>
+ 09 Jul 2003
+
+
+===================== ================ ======================== ==========================================
+Magic Name Number Structure File
+===================== ================ ======================== ==========================================
+PG_MAGIC 'P' pg_{read,write}_hdr ``include/linux/pg.h``
+APM_BIOS_MAGIC 0x4101 apm_user ``arch/x86/kernel/apm_32.c``
+FASYNC_MAGIC 0x4601 fasync_struct ``include/linux/fs.h``
+SLIP_MAGIC 0x5302 slip ``drivers/net/slip.h``
+BAYCOM_MAGIC 0x19730510 baycom_state ``drivers/net/baycom_epp.c``
+HDLCDRV_MAGIC 0x5ac6e778 hdlcdrv_state ``include/linux/hdlcdrv.h``
+KV_MAGIC 0x5f4b565f kernel_vars_s ``arch/mips/include/asm/sn/klkernvars.h``
+CODA_MAGIC 0xC0DAC0DA coda_file_info ``fs/coda/coda_fs_i.h``
+YAM_MAGIC 0xF10A7654 yam_port ``drivers/net/hamradio/yam.c``
+CCB_MAGIC 0xf2691ad2 ccb ``drivers/scsi/ncr53c8xx.c``
+QUEUE_MAGIC_FREE 0xf7e1c9a3 queue_entry ``drivers/scsi/arm/queue.c``
+QUEUE_MAGIC_USED 0xf7e1cc33 queue_entry ``drivers/scsi/arm/queue.c``
+NMI_MAGIC 0x48414d4d455201 nmi_s ``arch/mips/include/asm/sn/nmi.h``
+===================== ================ ======================== ==========================================
diff --git a/Documentation/process/maintainer-handbooks.rst b/Documentation/process/maintainer-handbooks.rst
new file mode 100644
index 000000000..976391cec
--- /dev/null
+++ b/Documentation/process/maintainer-handbooks.rst
@@ -0,0 +1,22 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+.. _maintainer_handbooks_main:
+
+Subsystem and maintainer tree specific development process notes
+================================================================
+
+The purpose of this document is to provide subsystem specific information
+which is supplementary to the general development process handbook
+:ref:`Documentation/process <development_process_main>`.
+
+Contents:
+
+.. toctree::
+ :numbered:
+ :maxdepth: 2
+
+ maintainer-netdev
+ maintainer-soc
+ maintainer-soc-clean-dts
+ maintainer-tip
+ maintainer-kvm-x86
diff --git a/Documentation/process/maintainer-kvm-x86.rst b/Documentation/process/maintainer-kvm-x86.rst
new file mode 100644
index 000000000..9183bd449
--- /dev/null
+++ b/Documentation/process/maintainer-kvm-x86.rst
@@ -0,0 +1,390 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+KVM x86
+=======
+
+Foreword
+--------
+KVM strives to be a welcoming community; contributions from newcomers are
+valued and encouraged. Please do not be discouraged or intimidated by the
+length of this document and the many rules/guidelines it contains. Everyone
+makes mistakes, and everyone was a newbie at some point. So long as you make
+an honest effort to follow KVM x86's guidelines, are receptive to feedback,
+and learn from any mistakes you make, you will be welcomed with open arms, not
+torches and pitchforks.
+
+TL;DR
+-----
+Testing is mandatory. Be consistent with established styles and patterns.
+
+Trees
+-----
+KVM x86 is currently in a transition period from being part of the main KVM
+tree to being "just another KVM arch". As such, KVM x86 is split across the
+main KVM tree, ``git.kernel.org/pub/scm/virt/kvm/kvm.git``, and a KVM x86
+specific tree, ``github.com/kvm-x86/linux.git``.
+
+Generally speaking, fixes for the current cycle are applied directly to the
+main KVM tree, while all development for the next cycle is routed through the
+KVM x86 tree. In the unlikely event that a fix for the current cycle is routed
+through the KVM x86 tree, it will be applied to the ``fixes`` branch before
+making its way to the main KVM tree.
+
+Note, this transition period is expected to last quite some time, i.e. will be
+the status quo for the foreseeable future.
+
+Branches
+~~~~~~~~
+The KVM x86 tree is organized into multiple topic branches. The purpose of
+using finer-grained topic branches is to make it easier to keep tabs on an area
+of development, and to limit the collateral damage of human errors and/or buggy
+commits, e.g. dropping the HEAD commit of a topic branch has no impact on other
+in-flight commits' SHA1 hashes, and having to reject a pull request due to bugs
+delays only that topic branch.
+
+All topic branches, except for ``next`` and ``fixes``, are rolled into ``next``
+via a Cthulhu merge on an as-needed basis, i.e. when a topic branch is updated.
+As a result, force pushes to ``next`` are common.
+
+Lifecycle
+~~~~~~~~~
+Fixes that target the current release, a.k.a. mainline, are typically applied
+directly to the main KVM tree, i.e. do not route through the KVM x86 tree.
+
+Changes that target the next release are routed through the KVM x86 tree. Pull
+requests (from KVM x86 to main KVM) are sent for each KVM x86 topic branch,
+typically the week before Linus' opening of the merge window, e.g. the week
+following rc7 for "normal" releases. If all goes well, the topic branches are
+rolled into the main KVM pull request sent during Linus' merge window.
+
+The KVM x86 tree doesn't have its own official merge window, but there's a soft
+close around rc5 for new features, and a soft close around rc6 for fixes (for
+the next release; see above for fixes that target the current release).
+
+Timeline
+~~~~~~~~
+Submissions are typically reviewed and applied in FIFO order, with some wiggle
+room for the size of a series, patches that are "cache hot", etc. Fixes,
+especially for the current release and/or stable trees, get to jump the queue.
+Patches that will be taken through a non-KVM tree (most often through the tip
+tree) and/or have other acks/reviews also jump the queue to some extent.
+
+Note, the vast majority of review is done between rc1 and rc6, give or take.
+The period between rc6 and the next rc1 is used to catch up on other tasks,
+i.e. radio silence during this period isn't unusual.
+
+Pings to get a status update are welcome, but keep in mind the timing of the
+current release cycle and have realistic expectations. If you are pinging for
+acceptance, i.e. not just for feedback or an update, please do everything you
+can, within reason, to ensure that your patches are ready to be merged! Pings
+on series that break the build or fail tests lead to unhappy maintainers!
+
+Development
+-----------
+
+Base Tree/Branch
+~~~~~~~~~~~~~~~~
+Fixes that target the current release, a.k.a. mainline, should be based on
+``git://git.kernel.org/pub/scm/virt/kvm/kvm.git master``. Note, fixes do not
+automatically warrant inclusion in the current release. There is no singular
+rule, but typically only fixes for bugs that are urgent, critical, and/or were
+introduced in the current release should target the current release.
+
+Everything else should be based on ``kvm-x86/next``, i.e. there is no need to
+select a specific topic branch as the base. If there are conflicts and/or
+dependencies across topic branches, it is the maintainer's job to sort them
+out.
+
+The only exception to using ``kvm-x86/next`` as the base is if a patch/series
+is a multi-arch series, i.e. has non-trivial modifications to common KVM code
+and/or has more than superficial changes to other architectures' code.
+Multi-arch patch/series should instead be based on a common, stable point in
+KVM's history, e.g. the release candidate upon which ``kvm-x86/next`` is
+based. If you're unsure whether a patch/series is truly multi-arch, err on
+the side of caution and treat it as multi-arch, i.e. use a common base.
+
+Coding Style
+~~~~~~~~~~~~
+When it comes to style, naming, patterns, etc., consistency is the number one
+priority in KVM x86. If all else fails, match what already exists.
+
+With a few caveats listed below, follow the tip tree maintainers' preferred
+:ref:`maintainer-tip-coding-style`, as patches/series often touch both KVM and
+non-KVM x86 files, i.e. draw the attention of KVM *and* tip tree maintainers.
+
+Using reverse fir tree, a.k.a. reverse Christmas tree or reverse XMAS tree, for
+variable declarations isn't strictly required, though it is still preferred.
+
+Except for a handful of special snowflakes, do not use kernel-doc comments for
+functions. The vast majority of "public" KVM functions aren't truly public as
+they are intended only for KVM-internal consumption (there are plans to
+privatize KVM's headers and exports to enforce this).
+
+Comments
+~~~~~~~~
+Write comments using imperative mood and avoid pronouns. Use comments to
+provide a high level overview of the code, and/or to explain why the code does
+what it does. Do not reiterate what the code literally does; let the code
+speak for itself. If the code itself is inscrutable, comments will not help.
+
+SDM and APM References
+~~~~~~~~~~~~~~~~~~~~~~
+Much of KVM's code base is directly tied to architectural behavior defined in
+Intel's Software Development Manual (SDM) and AMD's Architecture Programmer’s
+Manual (APM). Use of "Intel's SDM" and "AMD's APM", or even just "SDM" or
+"APM", without additional context is a-ok.
+
+Do not reference specific sections, tables, figures, etc. by number, especially
+not in comments. Instead, if necessary (see below), copy-paste the relevant
+snippet and reference sections/tables/figures by name. The layouts of the SDM
+and APM are constantly changing, and so the numbers/labels aren't stable.
+
+Generally speaking, do not explicitly reference or copy-paste from the SDM or
+APM in comments. With few exceptions, KVM *must* honor architectural behavior,
+therefore it's implied that KVM behavior is emulating SDM and/or APM behavior.
+Note, referencing the SDM/APM in changelogs to justify the change and provide
+context is perfectly ok and encouraged.
+
+Shortlog
+~~~~~~~~
+The preferred prefix format is ``KVM: <topic>:``, where ``<topic>`` is one of::
+
+ - x86
+ - x86/mmu
+ - x86/pmu
+ - x86/xen
+ - selftests
+ - SVM
+ - nSVM
+ - VMX
+ - nVMX
+
+**DO NOT use x86/kvm!** ``x86/kvm`` is used exclusively for Linux-as-a-KVM-guest
+changes, i.e. for arch/x86/kernel/kvm.c. Do not use file names or complete file
+paths as the subject/shortlog prefix.
+
+Note, these don't align with the topic branches (the topic branches care much
+more about code conflicts).
+
+All names are case sensitive! ``KVM: x86:`` is good, ``kvm: vmx:`` is not.
+
+Capitalize the first word of the condensed patch description, but omit ending
+punctuation. E.g.::
+
+ KVM: x86: Fix a null pointer dereference in function_xyz()
+
+not::
+
+ kvm: x86: fix a null pointer dereference in function_xyz.
+
+If a patch touches multiple topics, traverse up the conceptual tree to find the
+first common parent (which is often simply ``x86``). When in doubt,
+``git log path/to/file`` should provide a reasonable hint.
+
+New topics do occasionally pop up, but please start an on-list discussion if
+you want to propose introducing a new topic, i.e. don't go rogue.
+
+See :ref:`the_canonical_patch_format` for more information, with one amendment:
+do not treat the 70-75 character limit as an absolute, hard limit. Instead,
+use 75 characters as a firm-but-not-hard limit, and use 80 characters as a hard
+limit. I.e. let the shortlog run a few characters over the standard limit if
+you have good reason to do so.
+
+Changelog
+~~~~~~~~~
+Most importantly, write changelogs using imperative mood and avoid pronouns.
+
+See :ref:`describe_changes` for more information, with one amendment: lead with
+a short blurb on the actual changes, and then follow up with the context and
+background. Note! This order directly conflicts with the tip tree's preferred
+approach! Please follow the tip tree's preferred style when sending patches
+that primarily target arch/x86 code that is _NOT_ KVM code.
+
+Stating what a patch does before diving into details is preferred by KVM x86
+for several reasons. First and foremost, what code is actually being changed
+is arguably the most important information, and so that info should be easy to
+find. Changelogs that bury the "what's actually changing" in a one-liner after
+3+ paragraphs of background make it very hard to find that information.
+
+For initial review, one could argue the "what's broken" is more important, but
+for skimming logs and git archaeology, the gory details matter less and less.
+E.g. when doing a series of "git blame", the details of each change along the
+way are useless; the details only matter for the culprit. Providing the "what
+changed" makes it easy to quickly determine whether or not a commit might be of
+interest.
+
+Another benefit of stating "what's changing" first is that it's almost always
+possible to state "what's changing" in a single sentence. Conversely, all but
+the most simple bugs require multiple sentences or paragraphs to fully describe
+the problem. If both the "what's changing" and "what's the bug" are super
+short then the order doesn't matter. But if one is shorter (almost always the
+"what's changing), then covering the shorter one first is advantageous because
+it's less of an inconvenience for readers/reviewers that have a strict ordering
+preference. E.g. having to skip one sentence to get to the context is less
+painful than having to skip three paragraphs to get to "what's changing".
+
+Fixes
+~~~~~
+If a change fixes a KVM/kernel bug, add a Fixes: tag even if the change doesn't
+need to be backported to stable kernels, and even if the change fixes a bug in
+an older release.
+
+Conversely, if a fix does need to be backported, explicitly tag the patch with
+"Cc: stable@vger.kernel" (though the email itself doesn't need to Cc: stable);
+KVM x86 opts out of backporting Fixes: by default. Some auto-selected patches
+do get backported, but require explicit maintainer approval (search MANUALSEL).
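+
+For example (the SHA1 and subject below are made up, purely for
+illustration), the resulting trailers in the changelog would look like::
+
+  Fixes: 0123456789ab ("KVM: x86: Do the thing that was broken")
+  Cc: stable@vger.kernel.org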
+
+Function References
+~~~~~~~~~~~~~~~~~~~
+When a function is mentioned in a comment, changelog, or shortlog (or anywhere
+for that matter), use the format ``function_name()``. The parentheses provide
+context and disambiguate the reference.
+
+Testing
+-------
+At a bare minimum, *all* patches in a series must build cleanly for KVM_INTEL=m,
+KVM_AMD=m, and KVM_WERROR=y. Building every possible combination of Kconfigs
+isn't feasible, but the more the merrier. KVM_SMM, KVM_XEN, PROVE_LOCKING, and
+X86_64 are particularly interesting knobs to turn.
+
+Running KVM selftests and KVM-unit-tests is also mandatory (and stating the
+obvious, the tests need to pass). The only exception is for changes that have
+negligible probability of affecting runtime behavior, e.g. patches that only
+modify comments. When possible and relevant, testing on both Intel and AMD is
+strongly preferred. Booting an actual VM is encouraged, but not mandatory.
+
+For changes that touch KVM's shadow paging code, running with TDP (EPT/NPT)
+disabled is mandatory. For changes that affect common KVM MMU code, running
+with TDP disabled is strongly encouraged. For all other changes, if the code
+being modified depends on and/or interacts with a module param, testing with
+the relevant settings is mandatory.
+
+Note, KVM selftests and KVM-unit-tests do have known failures. If you suspect
+a failure is not due to your changes, verify that the *exact same* failure
+occurs with and without your changes.
+
+Changes that touch reStructured Text documentation, i.e. .rst files, must build
+htmldocs cleanly, i.e. with no new warnings or errors.
+
+If you can't fully test a change, e.g. due to lack of hardware, clearly state
+what level of testing you were able to do, e.g. in the cover letter.
+
+New Features
+~~~~~~~~~~~~
+With one exception, new features *must* come with test coverage. KVM specific
+tests aren't strictly required, e.g. if coverage is provided by running a
+sufficiently enabled guest VM, or by running a related kernel selftest in a VM,
+but dedicated KVM tests are preferred in all cases. Negative testcases in
+particular are mandatory for enabling of new hardware features as error and
+exception flows are rarely exercised simply by running a VM.
+
+The only exception to this rule is if KVM is simply advertising support for a
+feature via KVM_GET_SUPPORTED_CPUID, i.e. for instructions/features that KVM
+can't prevent a guest from using and for which there is no true enabling.
+
+Note, "new features" does not just mean "new hardware features"! New features
+that can't be well validated using existing KVM selftests and/or KVM-unit-tests
+must come with tests.
+
+Posting new feature development without tests to get early feedback is more
+than welcome, but such submissions should be tagged RFC, and the cover letter
+should clearly state what type of feedback is requested/expected. Do not abuse
+the RFC process; RFCs will typically not receive in-depth review.
+
+Bug Fixes
+~~~~~~~~~
+Except for "obvious" found-by-inspection bugs, fixes must be accompanied by a
+reproducer for the bug being fixed. In many cases the reproducer is implicit,
+e.g. for build errors and test failures, but it should still be clear to
+readers what is broken and how to verify the fix. Some leeway is given for
+bugs that are found via non-public workloads/tests, but providing regression
+tests for such bugs is strongly preferred.
+
+In general, regression tests are preferred for any bug that is not trivial to
+hit. E.g. even if the bug was originally found by a fuzzer such as syzkaller,
+a targeted regression test may be warranted if the bug requires hitting a
+one-in-a-million type race condition.
+
+Note, KVM bugs are rarely urgent *and* non-trivial to reproduce. Ask yourself
+if a bug is really truly the end of the world before posting a fix without a
+reproducer.
+
+Posting
+-------
+
+Links
+~~~~~
+Do not explicitly reference bug reports, prior versions of a patch/series, etc.
+via ``In-Reply-To:`` headers. Using ``In-Reply-To:`` becomes an unholy mess
+for large series and/or when the version count gets high, and ``In-Reply-To:``
+is useless for anyone that doesn't have the original message, e.g. if someone
+wasn't Cc'd on the bug report or if the list of recipients changes between
+versions.
+
+To link to a bug report, previous version, or anything of interest, use lore
+links. For referencing previous version(s), generally speaking do not include
+a Link: in the changelog as there is no need to record the history in git, i.e.
+put the link in the cover letter or in the section git ignores. Do provide a
+formal Link: for bug reports and/or discussions that led to the patch. The
+context of why a change was made is highly valuable for future readers.
+
+Git Base
+~~~~~~~~
+If you are using git version 2.9.0 or later (Googlers, this is all of you!),
+use ``git format-patch`` with the ``--base`` flag to automatically include the
+base tree information in the generated patches.
+
+Note, ``--base=auto`` works as expected if and only if a branch's upstream is
+set to the base topic branch, e.g. it will do the wrong thing if your upstream
+is set to your personal repository for backup purposes. An alternative "auto"
+solution is to derive the names of your development branches based on their
+KVM x86 topic, and feed that into ``--base``. E.g. ``x86/pmu/my_branch_name``,
+and then write a small wrapper to extract ``pmu`` from the current branch name
+to yield ``--base=x/pmu``, where ``x`` is whatever name your repository uses to
+track the KVM x86 remote.
+
+Co-Posting Tests
+~~~~~~~~~~~~~~~~
+KVM selftests that are associated with KVM changes, e.g. regression tests for
+bug fixes, should be posted along with the KVM changes as a single series. The
+standard kernel rules for bisection apply, i.e. KVM changes that result in test
+failures should be ordered after the selftests updates, and vice versa, new
+tests that fail due to KVM bugs should be ordered after the KVM fixes.
+
+KVM-unit-tests should *always* be posted separately. Tools, e.g. b4 am, don't
+know that KVM-unit-tests is a separate repository and get confused when patches
+in a series apply on different trees. To tie KVM-unit-tests patches back to
+KVM patches, first post the KVM changes and then provide a lore Link: to the
+KVM patch/series in the KVM-unit-tests patch(es).
+
+Notifications
+-------------
+When a patch/series is officially accepted, a notification email will be sent
+in reply to the original posting (cover letter for multi-patch series). The
+notification will include the tree and topic branch, along with the SHA1s of
+the commits of applied patches.
+
+If a subset of patches is applied, this will be clearly stated in the
+notification. Unless stated otherwise, it's implied that any patches in the
+series that were not accepted need more work and should be submitted in a new
+version.
+
+If for some reason a patch is dropped after officially being accepted, a reply
+will be sent to the notification email explaining why the patch was dropped, as
+well as the next steps.
+
+SHA1 Stability
+~~~~~~~~~~~~~~
+SHA1s are not 100% guaranteed to be stable until they land in Linus' tree! A
+SHA1 is *usually* stable once a notification has been sent, but things happen.
+In most cases, an update to the notification email will be provided if an
+applied
+patch's SHA1 changes. However, in some scenarios, e.g. if all KVM x86 branches
+need to be rebased, individual notifications will not be given.
+
+Vulnerabilities
+---------------
+Bugs that can be exploited by the guest to attack the host (kernel or
+userspace), or that can be exploited by a nested VM to *its* host (L2 attacking
+L1), are of particular interest to KVM. Please follow the protocol for
+:ref:`securitybugs` if you suspect a bug can lead to an escape, data leak, etc.
+
diff --git a/Documentation/process/maintainer-netdev.rst b/Documentation/process/maintainer-netdev.rst
new file mode 100644
index 000000000..09dcf6377
--- /dev/null
+++ b/Documentation/process/maintainer-netdev.rst
@@ -0,0 +1,454 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+.. _netdev-FAQ:
+
+=============================
+Networking subsystem (netdev)
+=============================
+
+tl;dr
+-----
+
+ - designate your patch to a tree - ``[PATCH net]`` or ``[PATCH net-next]``
+ - for fixes the ``Fixes:`` tag is required, regardless of the tree
+ - don't post large series (> 15 patches), break them up
+ - don't repost your patches within one 24h period
+ - reverse xmas tree
+
+netdev
+------
+
+netdev is a mailing list for all network-related Linux stuff. This
+includes anything found under net/ (i.e. core code like IPv6) and
+drivers/net (i.e. hardware specific drivers) in the Linux source tree.
+
+Note that some subsystems (e.g. wireless drivers) which have a high
+volume of traffic have their own specific mailing lists and trees.
+
+The netdev list is managed (like many other Linux mailing lists) through
+VGER (http://vger.kernel.org/) with archives available at
+https://lore.kernel.org/netdev/
+
+Aside from subsystems like those mentioned above, all network-related
+Linux development (i.e. RFC, review, comments, etc.) takes place on
+netdev.
+
+Development cycle
+-----------------
+
+Here is a bit of background information on
+the cadence of Linux development. Each new release starts off with a
+two week "merge window" where the main maintainers feed their new stuff
+to Linus for merging into the mainline tree. After the two weeks, the
+merge window is closed, and it is called/tagged ``-rc1``. No new
+features get mainlined after this -- only fixes to the rc1 content are
+expected. After roughly a week of collecting fixes to the rc1 content,
+rc2 is released. This repeats on a roughly weekly basis until rc7
+(typically; sometimes rc6 if things are quiet, or rc8 if things are in a
+state of churn), and a week after the last vX.Y-rcN was done, the
+official vX.Y is released.
+
+To find out where we are now in the cycle - load the mainline (Linus)
+page here:
+
+ https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+
+and note the top of the "tags" section. If it is rc1, it is early in
+the dev cycle. If it was tagged rc7 a week ago, then a release is
+probably imminent. If the most recent tag is a final release tag
+(without an ``-rcN`` suffix) - we are most likely in a merge window
+and ``net-next`` is closed.
+
+git trees and patch flow
+------------------------
+
+There are two networking trees (git repositories) in play. Both are
+driven by David Miller, the main network maintainer. There is the
+``net`` tree, and the ``net-next`` tree. As you can probably guess from
+the names, the ``net`` tree is for fixes to existing code already in the
+mainline tree from Linus, and ``net-next`` is where the new code goes
+for the future release. You can find the trees here:
+
+- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
+- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
+
+Relating that to kernel development: At the beginning of the 2-week
+merge window, the ``net-next`` tree will be closed - no new changes/features.
+The accumulated new content of the past ~10 weeks will be passed onto
+mainline/Linus via a pull request for vX.Y -- at the same time, the
+``net`` tree will start accumulating fixes for this pulled content
+relating to vX.Y.
+
+An announcement indicating when ``net-next`` has been closed is usually
+sent to netdev, but knowing the above, you can predict that in advance.
+
+.. warning::
+ Do not send new ``net-next`` content to netdev during the
+ period during which ``net-next`` tree is closed.
+
+RFC patches sent for review only are obviously welcome at any time
+(use ``--subject-prefix='RFC net-next'`` with ``git format-patch``).
+
+Shortly after the two weeks have passed (and vX.Y-rc1 is released), the
+tree for ``net-next`` reopens to collect content for the next (vX.Y+1)
+release.
+
+If you aren't subscribed to netdev and/or are simply unsure if
+``net-next`` has re-opened yet, simply check the ``net-next`` git
+repository link above for any new networking-related commits. You may
+also check the following website for the current status:
+
+ https://netdev.bots.linux.dev/net-next.html
+
+The ``net`` tree continues to collect fixes for the vX.Y content, and is
+fed back to Linus at regular (~weekly) intervals. Meaning that the
+focus for ``net`` is on stabilization and bug fixes.
+
+Finally, the vX.Y gets released, and the whole cycle starts over.
+
+netdev patch review
+-------------------
+
+.. _patch_status:
+
+Patch status
+~~~~~~~~~~~~
+
+Status of a patch can be checked by looking at the main patchwork
+queue for netdev:
+
+ https://patchwork.kernel.org/project/netdevbpf/list/
+
+The "State" field will tell you exactly where things are at with your
+patch:
+
+================== =============================================================
+Patch state Description
+================== =============================================================
+New, Under review pending review, patch is in the maintainer’s queue for
+ review; the two states are used interchangeably (depending on
+ the exact co-maintainer handling patchwork at the time)
+Accepted patch was applied to the appropriate networking tree, this is
+ usually set automatically by the pw-bot
+Needs ACK waiting for an ack from an area expert or testing
+Changes requested patch has not passed the review, new revision is expected
+ with appropriate code and commit message changes
+Rejected patch has been rejected and new revision is not expected
+Not applicable patch is expected to be applied outside of the networking
+ subsystem
+Awaiting upstream patch should be reviewed and handled by appropriate
+ sub-maintainer, who will send it on to the networking trees;
+ patches set to ``Awaiting upstream`` in netdev's patchwork
+ will usually remain in this state, whether the sub-maintainer
+ requested changes, accepted or rejected the patch
+Deferred patch needs to be reposted later, usually due to dependency
+ or because it was posted for a closed tree
+Superseded new version of the patch was posted, usually set by the
+ pw-bot
+RFC not to be applied, usually not in maintainer’s review queue,
+ pw-bot can automatically set patches to this state based
+ on subject tags
+================== =============================================================
+
+Patches are indexed by the ``Message-ID`` header of the emails
+which carried them so if you have trouble finding your patch append
+the value of ``Message-ID`` to the URL above.
+
+Updating patch status
+~~~~~~~~~~~~~~~~~~~~~
+
+Contributors and reviewers do not have the permissions to update patch
+state directly in patchwork. Patchwork doesn't expose much information
+about the history of the state of patches, therefore having multiple
+people update the state leads to confusion.
+
+Instead of delegating patchwork permissions netdev uses a simple mail
+bot which looks for special commands/lines within the emails sent to
+the mailing list. For example to mark a series as Changes Requested
+one needs to send the following line anywhere in the email thread::
+
+ pw-bot: changes-requested
+
+As a result the bot will set the entire series to Changes Requested.
+This may be useful when the author discovers a bug in their own series
+and wants to prevent it from getting applied.
+
+The use of the bot is entirely optional, if in doubt ignore its existence
+completely. Maintainers will classify and update the state of the patches
+themselves. No email should ever be sent to the list with the main purpose
+of communicating with the bot, the bot commands should be seen as metadata.
+
+The use of the bot is restricted to authors of the patches (the ``From:``
+header on patch submission and command must match!), maintainers of
+the modified code according to the MAINTAINERS file (again, ``From:``
+must match the MAINTAINERS entry) and a handful of senior reviewers.
+
+The bot records its activity here:
+
+ https://netdev.bots.linux.dev/pw-bot.html
+
+Review timelines
+~~~~~~~~~~~~~~~~
+
+Generally speaking, the patches get triaged quickly (in less than
+48h). But be patient: if your patch is active in patchwork (i.e. it's
+listed on the project's patch list) the chances it was missed are close to zero.
+Asking the maintainer for status updates on your patch is a good way to
+ensure your patch is ignored or pushed to the bottom of the priority list.
+
+.. _Changes requested:
+
+Changes requested
+~~~~~~~~~~~~~~~~~
+
+Patches :ref:`marked<patch_status>` as ``Changes Requested`` need
+to be revised. The new version should come with a change log,
+preferably including links to previous postings, for example::
+
+ [PATCH net-next v3] net: make cows go moo
+
+ Even users who don't drink milk appreciate hearing the cows go "moo".
+
+ The amount of mooing will depend on packet rate so should match
+ the diurnal cycle quite well.
+
+ Signed-off-by: Joe Defarmer <joe@barn.org>
+ ---
+ v3:
+ - add a note about time-of-day mooing fluctuation to the commit message
+ v2: https://lore.kernel.org/netdev/123themessageid@barn.org/
+ - fix missing argument in kernel doc for netif_is_bovine()
+ - fix memory leak in netdev_register_cow()
+ v1: https://lore.kernel.org/netdev/456getstheclicks@barn.org/
+
+The commit message should be revised to answer any questions reviewers
+had to ask in previous discussions. Occasionally the update of
+the commit message will be the only change in the new version.
+
+Partial resends
+~~~~~~~~~~~~~~~
+
+Please always resend the entire patch series and make sure you do number your
+patches such that it is clear this is the latest and greatest set of patches
+that can be applied. Do not try to resend just the patches which changed.
+
+Handling misapplied patches
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Occasionally a patch series gets applied before receiving critical feedback,
+or the wrong version of a series gets applied.
+
+Making the patch disappear once it is pushed out is not possible; the commit
+history in netdev trees is immutable.
+Please send incremental versions on top of what has been merged in order to fix
+the patches so that they end up the way they would have looked if your latest
+patch series had been merged.
+
+In cases where full revert is needed the revert has to be submitted
+as a patch to the list with a commit message explaining the technical
+problems with the reverted commit. Reverts should be used as a last resort,
+when original change is completely wrong; incremental fixes are preferred.
+
+Stable tree
+~~~~~~~~~~~
+
+While it used to be the case that netdev submissions were not supposed
+to carry explicit ``CC: stable@vger.kernel.org`` tags that is no longer
+the case today. Please follow the standard stable rules in
+:ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`,
+and make sure you include appropriate Fixes tags!
+
+Security fixes
+~~~~~~~~~~~~~~
+
+Do not email netdev maintainers directly if you think you discovered
+a bug that might have possible security implications.
+The current netdev maintainer has consistently requested that
+people use the mailing lists and not reach out directly. If you aren't
+OK with that, then perhaps consider mailing security@kernel.org or
+reading about http://oss-security.openwall.org/wiki/mailing-lists/distros
+as possible alternative mechanisms.
+
+
+Co-posting changes to user space components
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+User space code exercising kernel features should be posted
+alongside kernel patches. This gives reviewers a chance to see
+how any new interface is used and how well it works.
+
+When user space tools reside in the kernel repo itself, all changes
+should generally come as one series. If the series becomes too large
+or the user space project is not reviewed on netdev, include a link
+to a public repo where the user space patches can be seen.
+
+In case user space tooling lives in a separate repository but is
+reviewed on netdev (e.g. patches to ``iproute2`` tools) kernel and
+user space patches should form separate series (threads) when posted
+to the mailing list, e.g.::
+
+ [PATCH net-next 0/3] net: some feature cover letter
+ └─ [PATCH net-next 1/3] net: some feature prep
+ └─ [PATCH net-next 2/3] net: some feature do it
+ └─ [PATCH net-next 3/3] selftest: net: some feature
+
+ [PATCH iproute2-next] ip: add support for some feature
+
+Posting as one thread is discouraged because it confuses patchwork
+(as of patchwork 2.2.2).
+
+Preparing changes
+-----------------
+
+Attention to detail is important. Re-read your own work as if you were the
+reviewer. You can start with using ``checkpatch.pl``, perhaps even with
+the ``--strict`` flag. But do not be mindlessly robotic in doing so.
+If your change is a bug fix, make sure your commit log indicates the
+end-user visible symptom, the underlying reason as to why it happens,
+and then if necessary, explain why the fix proposed is the best way to
+get things done. Don't mangle whitespace, and as is common, don't
+mis-indent function arguments that span multiple lines. If it is your
+first patch, mail it to yourself so you can test apply it to an
+unpatched tree to confirm infrastructure didn't mangle it.
+
+Finally, go back and read
+:ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
+to be sure you are not repeating some common mistake documented there.
+
+Indicating target tree
+~~~~~~~~~~~~~~~~~~~~~~
+
+To help maintainers and CI bots you should explicitly mark which tree
+your patch is targeting. Assuming that you use git, use the prefix
+flag::
+
+ git format-patch --subject-prefix='PATCH net-next' start..finish
+
+Use ``net`` instead of ``net-next`` (always lower case) in the above for
+bug-fix ``net`` content.
+
+Dividing work into patches
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Put yourself in the shoes of the reviewer. Each patch is read separately
+and therefore should constitute a comprehensible step towards your stated
+goal.
+
+Avoid sending series longer than 15 patches. A larger series takes longer
+to review as reviewers will defer looking at it until they find a large
+chunk of time. A small series can be reviewed in a short time, so maintainers
+just do it. As a result, a sequence of smaller series gets merged quicker and
+with better review coverage. Re-posting large series also increases the mailing
+list traffic.
+
+Multi-line comments
+~~~~~~~~~~~~~~~~~~~
+
+Comment style convention is slightly different for networking and most of
+the tree. Instead of this::
+
+ /*
+ * foobar blah blah blah
+ * another line of text
+ */
+
+it is requested that you make it look like this::
+
+ /* foobar blah blah blah
+ * another line of text
+ */
+
+Local variable ordering ("reverse xmas tree", "RCS")
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Netdev has a convention for ordering local variables in functions.
+Order the variable declaration lines longest to shortest, e.g.::
+
+ struct scatterlist *sg;
+ struct sk_buff *skb;
+ int err, i;
+
+If there are dependencies between the variables preventing the ordering,
+move the initialization out of line.
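+
+For example (purely illustrative, with made-up names), when one variable's
+initializer depends on another variable declared on a shorter line, drop the
+initializer and assign the value after the declarations::
+
+  struct some_long_type_name *foo;
+  struct short_type *bar = get_bar();
+  int i;
+
+  foo = bar->foo;	/* depends on 'bar', so initialized out of line */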
+
+Format precedence
+~~~~~~~~~~~~~~~~~
+
+When working in existing code which uses nonstandard formatting, make
+your code follow the most recent guidelines, so that eventually all code
+in the domain of netdev is in the preferred format.
+
+Resending after review
+~~~~~~~~~~~~~~~~~~~~~~
+
+Allow at least 24 hours to pass between postings. This will ensure reviewers
+from all geographical locations have a chance to chime in. Do not wait
+too long (weeks) between postings either as it will make it harder for reviewers
+to recall all the context.
+
+Make sure you address all the feedback in your new posting. Do not post a new
+version of the code if the discussion about the previous version is still
+ongoing, unless directly instructed by a reviewer.
+
+The new version of patches should be posted as a separate thread,
+not as a reply to the previous posting. Change log should include a link
+to the previous posting (see :ref:`Changes requested`).
+
+Testing
+-------
+
+Expected level of testing
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+At the very minimum your changes must survive an ``allyesconfig`` and an
+``allmodconfig`` build with ``W=1`` set without new warnings or failures.
+
+Ideally you will have done run-time testing specific to your change,
+and the patch series contains a set of kernel selftests for
+``tools/testing/selftests/net`` or tests using the KUnit framework.
+
+You are expected to test your changes on top of the relevant networking
+tree (``net`` or ``net-next``) and not e.g. a stable tree or ``linux-next``.
+
+patchwork checks
+~~~~~~~~~~~~~~~~
+
+Checks in patchwork are mostly simple wrappers around existing kernel
+scripts; the sources are available at:
+
+https://github.com/kuba-moo/nipa/tree/master/tests
+
+**Do not** post your patches just to run them through the checks.
+You must ensure that your patches are ready by testing them locally
+before posting to the mailing list. The patchwork build bot instance
+gets overloaded very easily and netdev@vger really doesn't need more
+traffic if we can help it.
+
+netdevsim
+~~~~~~~~~
+
+``netdevsim`` is a test driver which can be used to exercise driver
+configuration APIs without requiring capable hardware.
+Mock-ups and tests based on ``netdevsim`` are strongly encouraged when
+adding new APIs, but ``netdevsim`` in itself is **not** considered
+a use case/user. You must also implement the new APIs in a real driver.
+
+We give no guarantees that ``netdevsim`` won't change in the future
+in a way which would break what would normally be considered uAPI.
+
+``netdevsim`` is reserved for use by upstream tests only, so any
+new ``netdevsim`` features must be accompanied by selftests under
+``tools/testing/selftests/``.
+
+Testimonials / feedback
+-----------------------
+
+Some companies use peer feedback in employee performance reviews.
+Please feel free to request feedback from netdev maintainers,
+especially if you spend a significant amount of time reviewing code
+and go out of your way to improve shared infrastructure.
+
+The feedback must be requested by you, the contributor, and will always
+be shared with you (even if you request that it be submitted to your
+manager).
diff --git a/Documentation/process/maintainer-pgp-guide.rst b/Documentation/process/maintainer-pgp-guide.rst
new file mode 100644
index 000000000..f5277993b
--- /dev/null
+++ b/Documentation/process/maintainer-pgp-guide.rst
@@ -0,0 +1,919 @@
+.. _pgpguide:
+
+===========================
+Kernel Maintainer PGP guide
+===========================
+
+:Author: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
+
+This document is aimed at Linux kernel developers, and especially at
+subsystem maintainers. It contains a subset of information discussed in
+the more general "`Protecting Code Integrity`_" guide published by the
+Linux Foundation. Please read that document for more in-depth discussion
+on some of the topics mentioned in this guide.
+
+.. _`Protecting Code Integrity`: https://github.com/lfit/itpol/blob/master/protecting-code-integrity.md
+
+The role of PGP in Linux Kernel development
+===========================================
+
+PGP helps ensure the integrity of the code that is produced by the Linux
+kernel development community and, to a lesser degree, establish trusted
+communication channels between developers via PGP-signed email exchange.
+
+The Linux kernel source code is available in two main formats:
+
+- Distributed source repositories (git)
+- Periodic release snapshots (tarballs)
+
+Both git repositories and tarballs carry PGP signatures of the kernel
+developers who create official kernel releases. These signatures offer a
+cryptographic guarantee that downloadable versions made available via
+kernel.org or any other mirrors are identical to what these developers
+have on their workstations. To this end:
+
+- git repositories provide PGP signatures on all tags
+- tarballs provide detached PGP signatures with all downloads
+
+.. _devs_not_infra:
+
+Trusting the developers, not infrastructure
+-------------------------------------------
+
+Ever since the 2011 compromise of core kernel.org systems, the main
+operating principle of the Kernel Archives project has been to assume
+that any part of the infrastructure can be compromised at any time. For
+this reason, the administrators have taken deliberate steps to emphasize
+that trust must always be placed with developers and never with the code
+hosting infrastructure, regardless of how good the security practices
+for the latter may be.
+
+The above guiding principle is the reason why this guide is needed. We
+want to make sure that by placing trust into developers we do not simply
+shift the blame for potential future security incidents to someone else.
+The goal is to provide a set of guidelines developers can use to create
+a secure working environment and safeguard the PGP keys used to
+establish the integrity of the Linux kernel itself.
+
+.. _pgp_tools:
+
+PGP tools
+=========
+
+Use GnuPG 2.2 or later
+----------------------
+
+Your distro should already have GnuPG installed by default, you just
+need to verify that you are using a reasonably recent version of it.
+To check, run::
+
+ $ gpg --version | head -n1
+
+If you have version 2.2 or above, then you are good to go. If you have a
+version that is older than 2.2, then some commands from this guide may
+not work.
+
+Configure gpg-agent options
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The GnuPG agent is a helper tool that will start automatically whenever
+you use the ``gpg`` command and run in the background with the purpose
+of caching the private key passphrase. There are two options you should
+know in order to tweak when the passphrase should expire from the cache:
+
+- ``default-cache-ttl`` (seconds): If you use the same key again before
+ the time-to-live expires, the countdown will reset for another period.
+ The default is 600 (10 minutes).
+- ``max-cache-ttl`` (seconds): Regardless of how recently you've used
+ the key since initial passphrase entry, if the maximum time-to-live
+ countdown expires, you'll have to enter the passphrase again. The
+ default is 30 minutes.
+
+If you find either of these defaults too short (or too long), you can
+edit your ``~/.gnupg/gpg-agent.conf`` file to set your own values::
+
+ # set to 30 minutes for regular ttl, and 2 hours for max ttl
+ default-cache-ttl 1800
+ max-cache-ttl 7200
+
+.. note::
+
+ It is no longer necessary to start gpg-agent manually at the
+ beginning of your shell session. You may want to check your rc files
+ to remove anything you had in place for older versions of GnuPG, as
+ it may not be doing the right thing any more.
+
+.. _protect_your_key:
+
+Protect your PGP key
+====================
+
+This guide assumes that you already have a PGP key that you use for Linux
+kernel development purposes. If you do not yet have one, please see the
+"`Protecting Code Integrity`_" document mentioned earlier for guidance
+on how to create a new one.
+
+You should also make a new key if your current one is weaker than 2048
+bits (RSA).
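+
+If you are not sure how strong your current key is, check the algorithm
+and size shown on the ``pub`` line of the listing (e.g. ``rsa2048``,
+``rsa4096`` or ``ed25519``)::
+
+ $ gpg --list-keys [fpr]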
+
+Understanding PGP Subkeys
+-------------------------
+
+A PGP key rarely consists of a single keypair -- usually it is a
+collection of independent subkeys that can be used for different
+purposes based on their capabilities, assigned at their creation time.
+PGP defines four capabilities that a key can have:
+
+- **[S]** keys can be used for signing
+- **[E]** keys can be used for encryption
+- **[A]** keys can be used for authentication
+- **[C]** keys can be used for certifying other keys
+
+The key with the **[C]** capability is often called the "master" key,
+but this terminology is misleading because it implies that the Certify
+key can be used in place of any other subkey on the same chain (like
+a physical "master key" can be used to open locks made for other keys).
+Since this is not the case, this guide will refer to it as "the Certify
+key" to avoid any ambiguity.
+
+It is critical to fully understand the following:
+
+1. All subkeys are fully independent from each other. If you lose a
+ private subkey, it cannot be restored or recreated from any other
+ private key on your chain.
+2. With the exception of the Certify key, there can be multiple subkeys
+ with identical capabilities (e.g. you can have 2 valid encryption
+ subkeys, 3 valid signing subkeys, but only one valid certification
+ subkey). All subkeys are fully independent -- a message encrypted to
+ one **[E]** subkey cannot be decrypted with any other **[E]** subkey
+ you may also have.
+3. A single subkey may have multiple capabilities (e.g. your **[C]** key
+ can also be your **[S]** key).
+
+The key carrying the **[C]** (certify) capability is the only key that
+can be used to indicate relationship with other keys. Only the **[C]**
+key can be used to:
+
+- add or revoke other keys (subkeys) with S/E/A capabilities
+- add, change or revoke identities (uids) associated with the key
+- add or change the expiration date on itself or any subkey
+- sign other people's keys for web of trust purposes
+
+By default, GnuPG creates the following when generating new keys:
+
+- One primary key carrying both Certify and Sign capabilities (**[SC]**)
+- A separate subkey with the Encryption capability (**[E]**)
+
+If you used the default parameters when generating your key, then that
+is what you will have. You can verify by running ``gpg --list-secret-keys``,
+for example::
+
+ sec ed25519 2022-12-20 [SC] [expires: 2024-12-19]
+ 000000000000000000000000AAAABBBBCCCCDDDD
+ uid [ultimate] Alice Dev <adev@kernel.org>
+ ssb cv25519 2022-12-20 [E] [expires: 2024-12-19]
+
+The long line under the ``sec`` entry is your key fingerprint --
+whenever you see ``[fpr]`` in the examples below, that 40-character
+string is what it refers to.
+
+Ensure your passphrase is strong
+--------------------------------
+
+GnuPG uses passphrases to encrypt your private keys before storing them on
+disk. This way, even if your ``.gnupg`` directory is leaked or stolen in
+its entirety, the attackers cannot use your private keys without first
+obtaining the passphrase to decrypt them.
+
+It is absolutely essential that your private keys are protected by a
+strong passphrase. To set it or change it, use::
+
+ $ gpg --change-passphrase [fpr]
+
+Create a separate Signing subkey
+--------------------------------
+
+Our goal is to protect your Certify key by moving it to offline media,
+so if you only have a combined **[SC]** key, then you should create a
+separate signing subkey::
+
+ $ gpg --quick-addkey [fpr] ed25519 sign
+
+.. note:: ECC support in GnuPG
+
+ Note that if you intend to use a hardware token that does not
+ support ED25519 ECC keys, you should choose "nistp256" instead of
+ "ed25519". See the section below on recommended hardware devices.
+
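+For example, to create a NISTP256 signing subkey instead, you would
+run::
+
+ $ gpg --quick-addkey [fpr] nistp256 sign
+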
+
+Back up your Certify key for disaster recovery
+----------------------------------------------
+
+The more signatures you have on your PGP key from other developers, the
+more reason you have to create a backup copy that lives on something
+other than digital media, for disaster recovery purposes.
+
+The best way to create a printable hardcopy of your private key is by
+using the ``paperkey`` software written for this very purpose. See ``man
+paperkey`` for more details on the output format and its benefits over
+other solutions. Paperkey should already be packaged for most
+distributions.
+
+Run the following command to create a hardcopy backup of your private
+key::
+
+ $ gpg --export-secret-key [fpr] | paperkey -o /tmp/key-backup.txt
+
+Print out that file (or pipe the output straight to lpr), then take a
+pen and write your passphrase on the margin of the paper. **This is
+strongly recommended** because the key printout is still encrypted with
+that passphrase, and if you ever change it you will not remember what it
+used to be when you created the backup -- *guaranteed*.
+
+Put the resulting printout and the hand-written passphrase into an
+envelope and store it in a secure and well-protected place, preferably
+away from your home, such as your bank vault.
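+
+Should you ever need to recover the key from that printout, ``paperkey``
+can reconstruct the secret key data as long as you still have the public
+key available. A rough sketch of the procedure (the file names are just
+placeholders)::
+
+ $ gpg --export [fpr] > /tmp/pubkey.gpg
+ $ paperkey --pubring /tmp/pubkey.gpg --secrets typed-in-printout.txt --output /tmp/key-restored.gpg
+ $ gpg --import /tmp/key-restored.gpg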
+
+.. note::
+
+ Your printer is probably no longer a simple dumb device connected to
+ your parallel port, but since the output is still encrypted with
+ your passphrase, printing out even to "cloud-integrated" modern
+ printers should remain a relatively safe operation.
+
+Back up your whole GnuPG directory
+----------------------------------
+
+.. warning::
+
+ **!!!Do not skip this step!!!**
+
+It is important to have a readily available backup of your PGP keys
+should you need to recover them. This is different from the
+disaster-level preparedness we did with ``paperkey``. You will also rely
+on these external copies whenever you need to use your Certify key --
+such as when making changes to your own key or signing other people's
+keys after conferences and summits.
+
+Start by getting a small USB "thumb" drive (preferably two!) that you
+will use for backup purposes. You will need to encrypt them using LUKS
+-- refer to your distro's documentation on how to accomplish this.
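+
+A minimal sketch of the LUKS setup, assuming the thumb drive shows up as
+``/dev/sdX`` (double-check the device name first -- this will destroy
+all data on the drive)::
+
+ $ sudo cryptsetup luksFormat /dev/sdX
+ $ sudo cryptsetup open /dev/sdX gnupg-backup
+ $ sudo mkfs.ext4 /dev/mapper/gnupg-backup
+ $ sudo cryptsetup close gnupg-backup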
+
+For the encryption passphrase, you can use the same one as on your
+PGP key.
+
+Once the encryption process is over, re-insert the USB drive and make
+sure it gets properly mounted. Copy your entire ``.gnupg`` directory
+over to the encrypted storage::
+
+ $ cp -a ~/.gnupg /media/disk/foo/gnupg-backup
+
+You should now test to make sure everything still works::
+
+ $ gpg --homedir=/media/disk/foo/gnupg-backup --list-key [fpr]
+
+If you don't get any errors, then you should be good to go. Unmount the
+USB drive, distinctly label it so you don't blow it away next time you
+need to use a random USB drive, and put it in a safe place -- but not too
+far away, because you'll need to use it every now and again for things
+like editing identities, adding or revoking subkeys, or signing other
+people's keys.
+
+Remove the Certify key from your homedir
+----------------------------------------
+
+The files in our home directory are not as well protected as we like to
+think. They can be leaked or stolen via many different means:
+
+- by accident when making quick homedir copies to set up a new workstation
+- by systems administrator negligence or malice
+- via poorly secured backups
+- via malware in desktop apps (browsers, pdf viewers, etc)
+- via coercion when crossing international borders
+
+Protecting your key with a good passphrase greatly helps reduce the risk
+of any of the above, but passphrases can be discovered via keyloggers,
+shoulder-surfing, or any number of other means. For this reason, the
+recommended setup is to remove your Certify key from your home directory
+and store it on offline storage.
+
+.. warning::
+
+ Please see the previous section and make sure you have backed up
+ your GnuPG directory in its entirety. What we are about to do will
+ render your key useless if you do not have a usable backup!
+
+First, identify the keygrip of your Certify key::
+
+ $ gpg --with-keygrip --list-key [fpr]
+
+The output will be something like this::
+
+ pub ed25519 2022-12-20 [SC] [expires: 2024-12-19]
+ 000000000000000000000000AAAABBBBCCCCDDDD
+ Keygrip = 1111000000000000000000000000000000000000
+ uid [ultimate] Alice Dev <adev@kernel.org>
+ sub cv25519 2022-12-20 [E] [expires: 2024-12-19]
+ Keygrip = 2222000000000000000000000000000000000000
+ sub ed25519 2022-12-20 [S]
+ Keygrip = 3333000000000000000000000000000000000000
+
+Find the keygrip entry that is beneath the ``pub`` line (right under the
+Certify key fingerprint). This will correspond directly to a file in your
+``~/.gnupg`` directory::
+
+ $ cd ~/.gnupg/private-keys-v1.d
+ $ ls
+ 1111000000000000000000000000000000000000.key
+ 2222000000000000000000000000000000000000.key
+ 3333000000000000000000000000000000000000.key
+
+All you have to do is remove the ``.key`` file that corresponds to the
+Certify key keygrip::
+
+ $ cd ~/.gnupg/private-keys-v1.d
+ $ rm 1111000000000000000000000000000000000000.key
+
+Now, if you issue the ``--list-secret-keys`` command, it will show that
+the Certify key is missing (the ``#`` indicates it is not available)::
+
+ $ gpg --list-secret-keys
+ sec# ed25519 2022-12-20 [SC] [expires: 2024-12-19]
+ 000000000000000000000000AAAABBBBCCCCDDDD
+ uid [ultimate] Alice Dev <adev@kernel.org>
+ ssb cv25519 2022-12-20 [E] [expires: 2024-12-19]
+ ssb ed25519 2022-12-20 [S]
+
+You should also remove any ``secring.gpg`` files in the ``~/.gnupg``
+directory, which may be left over from previous versions of GnuPG.
+
+If you don't have the "private-keys-v1.d" directory
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you do not have a ``~/.gnupg/private-keys-v1.d`` directory, then your
+secret keys are still stored in the legacy ``secring.gpg`` file used by
+GnuPG v1. Making any changes to your key, such as changing the
+passphrase or adding a subkey, should automatically convert the old
+``secring.gpg`` format to use ``private-keys-v1.d`` instead.
+
+Once you get that done, make sure to delete the obsolete ``secring.gpg``
+file, which still contains your private keys.
+
+.. _smartcards:
+
+Move the subkeys to a dedicated crypto device
+=============================================
+
+Even though the Certify key is now safe from being leaked or stolen, the
+subkeys are still in your home directory. Anyone who manages to get
+their hands on those will be able to decrypt your communication or fake
+your signatures (if they know the passphrase). Furthermore, each time a
+GnuPG operation is performed, the keys are loaded into system memory and
+can be stolen from there by sufficiently advanced malware (think
+Meltdown and Spectre).
+
+The best way to completely protect your keys is to move them to a
+specialized hardware device that is capable of smartcard operations.
+
+The benefits of smartcards
+--------------------------
+
+A smartcard contains a cryptographic chip that is capable of storing
+private keys and performing crypto operations directly on the card
+itself. Because the key contents never leave the smartcard, the
+operating system of the computer into which you plug in the hardware
+device is not able to retrieve the private keys themselves. This is very
+different from the encrypted USB storage device we used earlier for
+backup purposes -- while that USB device is plugged in and mounted, the
+operating system is able to access the private key contents.
+
+Using external encrypted USB media is not a substitute for having a
+smartcard-capable device.
+
+Available smartcard devices
+---------------------------
+
+Unless all your laptops and workstations have smartcard readers, the
+easiest option is to get a specialized USB device that implements
+smartcard functionality. There are several options available:
+
+- `Nitrokey Start`_: Open hardware and Free Software, based on FSI
+ Japan's `Gnuk`_. One of the few available commercial devices that
+ support ED25519 ECC keys, but it offers the fewest security features
+ (such as resistance to tampering or some side-channel attacks).
+- `Nitrokey Pro 2`_: Similar to the Nitrokey Start, but more
+ tamper-resistant and offers more security features. Pro 2 supports ECC
+ cryptography (NISTP).
+- `Yubikey 5`_: proprietary hardware and software, but cheaper than the
+ Nitrokey Pro and available in a USB-C form factor that is more useful
+ with newer laptops. Offers additional security features such as FIDO
+ U2F, among others, and now finally supports NISTP and ED25519 ECC
+ keys.
+
+Your choice will depend on cost, shipping availability in your
+geographical region, and open/proprietary hardware considerations.
+
+.. note::
+
+ If you are listed in MAINTAINERS or have an account at kernel.org,
+ you `qualify for a free Nitrokey Start`_ courtesy of The Linux
+ Foundation.
+
+.. _`Nitrokey Start`: https://shop.nitrokey.com/shop/product/nitrokey-start-6
+.. _`Nitrokey Pro 2`: https://shop.nitrokey.com/shop/product/nkpr2-nitrokey-pro-2-3
+.. _`Yubikey 5`: https://www.yubico.com/products/yubikey-5-overview/
+.. _Gnuk: https://www.fsij.org/doc-gnuk/
+.. _`qualify for a free Nitrokey Start`: https://www.kernel.org/nitrokey-digital-tokens-for-kernel-developers.html
+
+Configure your smartcard device
+-------------------------------
+
+Your smartcard device should Just Work (TM) the moment you plug it into
+any modern Linux workstation. You can verify it by running::
+
+ $ gpg --card-status
+
+If you see full smartcard details, then you are good to go.
+Unfortunately, troubleshooting all possible reasons why things may not
+be working for you is beyond the scope of this guide. If you are having
+trouble getting the card to work with GnuPG, please seek help via the
+usual support channels.
+
+To configure your smartcard, you will need to use the GnuPG menu system, as
+there are no convenient command-line switches::
+
+ $ gpg --card-edit
+ [...omitted...]
+ gpg/card> admin
+ Admin commands are allowed
+ gpg/card> passwd
+
+You should set the user PIN (1), Admin PIN (3), and the Reset Code (4).
+Please make sure to record and store these in a safe place -- especially
+the Admin PIN and the Reset Code (which allows you to completely wipe
+the smartcard). You will need the Admin PIN so rarely that you will
+inevitably forget what it is if you do not record it.
+
+Getting back to the main card menu, you can also set other values (such
+as name, sex, login data, etc), but it's not necessary and will
+additionally leak information about your smartcard should you lose it.
+
+.. note::
+
+ Despite having the name "PIN", neither the user PIN nor the admin
+ PIN on the card need to be numbers.
+
+.. warning::
+
+ Some devices may require that you move the subkeys onto the device
+ before you can change the passphrase. Please check the documentation
+ provided by the device manufacturer.
+
+Move the subkeys to your smartcard
+----------------------------------
+
+Exit the card menu (using "q") and save all changes. Next, let's move
+your subkeys onto the smartcard. You will need both your PGP key
+passphrase and the admin PIN of the card for most operations::
+
+ $ gpg --edit-key [fpr]
+
+ Secret subkeys are available.
+
+ pub ed25519/AAAABBBBCCCCDDDD
+ created: 2022-12-20 expires: 2024-12-19 usage: SC
+ trust: ultimate validity: ultimate
+ ssb cv25519/1111222233334444
+ created: 2022-12-20 expires: never usage: E
+ ssb ed25519/5555666677778888
+ created: 2017-12-07 expires: never usage: S
+ [ultimate] (1). Alice Dev <adev@kernel.org>
+
+ gpg>
+
+Using ``--edit-key`` puts us into the menu mode again, and you will
+notice that the key listing is a little different. From here on, all
+commands are done from inside this menu mode, as indicated by ``gpg>``.
+
+First, let's select the key we'll be putting onto the card -- you do
+this by typing ``key 1`` (it's the first one in the listing, the **[E]**
+subkey)::
+
+ gpg> key 1
+
+In the output, you should now see ``ssb*`` on the **[E]** key. The ``*``
+indicates which key is currently "selected." It works as a *toggle*,
+meaning that if you type ``key 1`` again, the ``*`` will disappear and
+the key will not be selected any more.
+
+Now, let's move that key onto the smartcard::
+
+ gpg> keytocard
+ Please select where to store the key:
+ (2) Encryption key
+ Your selection? 2
+
+Since it's our **[E]** key, it makes sense to put it into the Encryption
+slot. When you submit your selection, you will be prompted first for
+your PGP key passphrase, and then for the admin PIN. If the command
+returns without an error, your key has been moved.
+
+**Important**: Now type ``key 1`` again to unselect the first key, and
+``key 2`` to select the **[S]** key::
+
+ gpg> key 1
+ gpg> key 2
+ gpg> keytocard
+ Please select where to store the key:
+ (1) Signature key
+ (3) Authentication key
+ Your selection? 1
+
+You can use the **[S]** key both for Signature and Authentication, but
+we want to make sure it's in the Signature slot, so choose (1). Once
+again, if your command returns without an error, then the operation was
+successful::
+
+ gpg> q
+ Save changes? (y/N) y
+
+Saving the changes will delete the keys you moved to the card from your
+home directory (but it's okay, because we have them in our backups
+should we need to do this again for a replacement smartcard).
+
+Verifying that the keys were moved
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you perform ``--list-secret-keys`` now, you will see a subtle
+difference in the output::
+
+ $ gpg --list-secret-keys
+ sec# ed25519 2022-12-20 [SC] [expires: 2024-12-19]
+ 000000000000000000000000AAAABBBBCCCCDDDD
+ uid [ultimate] Alice Dev <adev@kernel.org>
+ ssb> cv25519 2022-12-20 [E] [expires: 2024-12-19]
+ ssb> ed25519 2022-12-20 [S]
+
+The ``>`` in the ``ssb>`` output indicates that the subkey is only
+available on the smartcard. If you go back into your secret keys
+directory and look at the contents there, you will notice that the
+``.key`` files there have been replaced with stubs::
+
+ $ cd ~/.gnupg/private-keys-v1.d
+ $ strings *.key | grep 'private-key'
+
+The output should contain ``shadowed-private-key`` to indicate that
+these files are only stubs and the actual content is on the smartcard.
+
+Verifying that the smartcard is functioning
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To verify that the smartcard is working as intended, you can create a
+signature::
+
+ $ echo "Hello world" | gpg --clearsign > /tmp/test.asc
+ $ gpg --verify /tmp/test.asc
+
+This should ask for your smartcard PIN on your first command, and then
+show "Good signature" after you run ``gpg --verify``.
+
+Congratulations, you have successfully made it extremely difficult to
+steal your digital developer identity!
+
+Other common GnuPG operations
+-----------------------------
+
+Here is a quick reference for some common operations you'll need to do
+with your PGP key.
+
+Mounting your safe offline storage
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You will need your Certify key for any of the operations below, so you
+will first need to mount your backup offline storage and tell GnuPG to
+use it::
+
+ $ export GNUPGHOME=/media/disk/foo/gnupg-backup
+ $ gpg --list-secret-keys
+
+You want to make sure that you see ``sec`` and not ``sec#`` in the
+output (the ``#`` means the key is not available and you're still using
+your regular home directory location).
+
+Extending key expiration date
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Certify key has the default expiration date of 2 years from the date
+of creation. This is done both for security reasons and to make obsolete
+keys eventually disappear from keyservers.
+
+To extend the expiration on your key by a year from the current date,
+just run::
+
+ $ gpg --quick-set-expire [fpr] 1y
+
+You can also use a specific date if that is easier to remember (e.g.
+your birthday, January 1st, or Canada Day)::
+
+ $ gpg --quick-set-expire [fpr] 2025-07-01
+
+Remember to send the updated key back to keyservers::
+
+ $ gpg --send-key [fpr]
+
+Updating your work directory after any changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+After you make any changes to your key using the offline storage, you will
+want to import these changes back into your regular working directory::
+
+ $ gpg --export | gpg --homedir ~/.gnupg --import
+ $ unset GNUPGHOME
+
+Using gpg-agent over ssh
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can forward your gpg-agent over ssh if you need to sign tags or
+commits on a remote system. Please refer to the instructions provided
+on the GnuPG wiki:
+
+- `Agent Forwarding over SSH`_
+
+It works more smoothly if you can modify the sshd server settings on the
+remote end.
+
+.. _`Agent Forwarding over SSH`: https://wiki.gnupg.org/AgentForwarding
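+
+A rough sketch of what is involved, assuming a reasonably recent OpenSSH
+on both ends (the exact socket paths vary between systems, so query them
+with ``gpgconf`` first)::
+
+ $ gpgconf --list-dirs agent-extra-socket # on your local workstation
+ $ gpgconf --list-dirs agent-socket # on the remote system
+
+You would then add a ``RemoteForward`` entry to your ``~/.ssh/config``
+that forwards the remote agent socket path to your local extra-socket
+path, as described on the wiki page above.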
+
+.. _pgp_with_git:
+
+Using PGP with Git
+==================
+
+One of the core features of Git is its decentralized nature -- once a
+repository is cloned to your system, you have full history of the
+project, including all of its tags, commits and branches. However, with
+hundreds of cloned repositories floating around, how does anyone verify
+that their copy of linux.git has not been tampered with by a malicious
+third party?
+
+Or what happens if a backdoor is discovered in the code and the "Author"
+line in the commit says it was done by you, while you're pretty sure you
+had `nothing to do with it`_?
+
+To address both of these issues, Git introduced PGP integration. Signed
+tags prove the repository integrity by assuring that its contents are
+exactly the same as on the workstation of the developer who created the
+tag, while signed commits make it nearly impossible for someone to
+impersonate you without having access to your PGP keys.
+
+.. _`nothing to do with it`: https://github.com/jayphelps/git-blame-someone-else
+
+Configure git to use your PGP key
+---------------------------------
+
+If you only have one secret key in your keyring, then you don't really
+need to do anything extra, as it becomes your default key. However, if
+you happen to have multiple secret keys, you can tell git which key
+should be used (``[fpr]`` is the fingerprint of your key)::
+
+ $ git config --global user.signingKey [fpr]
+
+How to work with signed tags
+----------------------------
+
+To create a signed tag, simply pass the ``-s`` switch to the tag
+command::
+
+ $ git tag -s [tagname]
+
+Our recommendation is to always sign git tags, as this allows other
+developers to ensure that the git repository they are pulling from has
+not been maliciously altered.
+
+How to verify signed tags
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To verify a signed tag, simply use the ``verify-tag`` command::
+
+ $ git verify-tag [tagname]
+
+If you are pulling a tag from another fork of the project repository,
+git should automatically verify the signature at the tip you're pulling
+and show you the results during the merge operation::
+
+ $ git pull [url] tags/sometag
+
+The merge message will contain something like this::
+
+ Merge tag 'sometag' of [url]
+
+ [Tag message]
+
+ # gpg: Signature made [...]
+ # gpg: Good signature from [...]
+
+If you are verifying someone else's git tag, then you will need to
+import their PGP key. Please refer to the
+":ref:`verify_identities`" section below.
+
+Configure git to always sign annotated tags
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Chances are, if you're creating an annotated tag, you'll want to sign
+it. To force git to always sign annotated tags, you can set a global
+configuration option::
+
+ $ git config --global tag.forceSignAnnotated true
+
+How to work with signed commits
+-------------------------------
+
+It is easy to create signed commits, but it is much more difficult to
+use them in Linux kernel development, since it relies on patches sent to
+the mailing list, and this workflow does not preserve PGP commit
+signatures. Furthermore, when rebasing your repository to match
+upstream, even your own PGP commit signatures will end up discarded. For
+this reason, most kernel developers don't bother signing their commits
+and will ignore signed commits in any external repositories that they
+rely upon in their work.
+
+However, if you have your working git tree publicly available at some
+git hosting service (kernel.org, infradead.org, ozlabs.org, or others),
+then the recommendation is that you sign all your git commits even if
+upstream developers do not directly benefit from this practice.
+
+We recommend this for the following reasons:
+
+1. Should there ever be a need to perform code forensics or track code
+ provenance, even externally maintained trees carrying PGP commit
+ signatures will be valuable for such purposes.
+2. If you ever need to re-clone your local repository (for example,
+ after a disk failure), this lets you easily verify the repository
+ integrity before resuming your work.
+3. If someone needs to cherry-pick your commits, this allows them to
+ quickly verify their integrity before applying them.
+
+Creating signed commits
+~~~~~~~~~~~~~~~~~~~~~~~
+
+To create a signed commit, you just need to pass the ``-S`` flag to the
+``git commit`` command (it's capital ``-S`` due to collision with
+another flag)::
+
+ $ git commit -S
+
+Configure git to always sign commits
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can tell git to always sign commits::
+
+ git config --global commit.gpgSign true
+
+.. note::
+
+ Make sure you configure ``gpg-agent`` before you turn this on.
+
+.. _verify_identities:
+
+
+How to work with signed patches
+-------------------------------
+
+It is possible to use your PGP key to sign patches sent to kernel
+developer mailing lists. Since existing email signature mechanisms
+(PGP-Mime or PGP-inline) tend to cause problems with regular code
+review tasks, you should use the tool kernel.org created for this
+purpose that puts cryptographic attestation signatures into message
+headers (a-la DKIM):
+
+- `Patatt Patch Attestation`_
+
+.. _`Patatt Patch Attestation`: https://pypi.org/project/patatt/
+
+Installing and configuring patatt
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Patatt is packaged for many distributions already, so please check there
+first. You can also install it from pypi using "``pip install patatt``".
+
+If you already have your PGP key configured with git (via the
+``user.signingKey`` configuration parameter), then patatt requires no
+further configuration. You can start signing your patches by installing
+the git-send-email hook in the repository you want::
+
+ patatt install-hook
+
+Now any patches you send with ``git send-email`` will be automatically
+signed with your cryptographic signature.
+
+Checking patatt signatures
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you are using ``b4`` to retrieve and apply patches, then it will
+automatically attempt to verify all DKIM and patatt signatures it
+encounters, for example::
+
+ $ b4 am 20220720205013.890942-1-broonie@kernel.org
+ [...]
+ Checking attestation on all messages, may take a moment...
+ ---
+ ✓ [PATCH v1 1/3] kselftest/arm64: Correct buffer allocation for SVE Z registers
+ ✓ [PATCH v1 2/3] arm64/sve: Document our actual ABI for clearing registers on syscall
+ ✓ [PATCH v1 3/3] kselftest/arm64: Enforce actual ABI for SVE syscalls
+ ---
+ ✓ Signed: openpgp/broonie@kernel.org
+ ✓ Signed: DKIM/kernel.org
+
+.. note::
+
+ Patatt and b4 are still in active development and you should check
+ the latest documentation for these projects for any new or updated
+ features.
+
+.. _kernel_identities:
+
+How to verify kernel developer identities
+=========================================
+
+Signing tags and commits is easy, but how does one go about verifying
+that the key used to sign something belongs to the actual kernel
+developer and not to a malicious imposter?
+
+Configure auto-key-retrieval using WKD and DANE
+-----------------------------------------------
+
+If you are not already someone with an extensive collection of other
+developers' public keys, then you can jumpstart your keyring by relying
+on key auto-discovery and auto-retrieval. GnuPG can piggyback on other
+delegated trust technologies, namely DNSSEC and TLS, to get you going if
+the prospect of starting your own Web of Trust from scratch is too
+daunting.
+
+Add the following to your ``~/.gnupg/gpg.conf``::
+
+ auto-key-locate wkd,dane,local
+ auto-key-retrieve
+
+DNS-Based Authentication of Named Entities ("DANE") is a method for
+publishing public keys in DNS and securing them using DNSSEC signed
+zones. Web Key Directory ("WKD") is the alternative method that uses
+https lookups for the same purpose. When using either DANE or WKD for
+looking up public keys, GnuPG will validate DNSSEC or TLS certificates,
+respectively, before adding auto-retrieved public keys to your local
+keyring.
+
+Kernel.org publishes the WKD for all developers who have kernel.org
+accounts. Once you have the above changes in your ``gpg.conf``, you can
+auto-retrieve the keys for Linus Torvalds and Greg Kroah-Hartman (if you
+don't already have them)::
+
+ $ gpg --locate-keys torvalds@kernel.org gregkh@kernel.org
+
+If you have a kernel.org account, then you should `add the kernel.org
+UID to your key`_ to make WKD more useful to other kernel developers.
+
+.. _`add the kernel.org UID to your key`: https://korg.wiki.kernel.org/userdoc/mail#adding_a_kernelorg_uid_to_your_pgp_key
+
+Web of Trust (WOT) vs. Trust on First Use (TOFU)
+------------------------------------------------
+
+PGP incorporates a trust delegation mechanism known as the "Web of
+Trust." At its core, this is an attempt to replace the need for
+centralized Certification Authorities of the HTTPS/TLS world. Instead of
+various software makers dictating who should be your trusted certifying
+entity, PGP leaves this responsibility to each user.
+
+Unfortunately, very few people understand how the Web of Trust works.
+While it remains an important aspect of the OpenPGP specification,
+recent versions of GnuPG (2.2 and above) have implemented an alternative
+mechanism called "Trust on First Use" (TOFU). You can think of TOFU as
+"the SSH-like approach to trust." With SSH, the first time you connect
+to a remote system, its key fingerprint is recorded and remembered. If
+the key changes in the future, the SSH client will alert you and refuse
+to connect, forcing you to make a decision on whether you choose to
+trust the changed key or not. Similarly, the first time you import
+someone's PGP key, it is assumed to be valid. If at any point in the
+future GnuPG comes across another key with the same identity, both the
+previously imported key and the new key will be marked as invalid and
+you will need to manually figure out which one to keep.
+
+We recommend that you use the combined TOFU+PGP trust model (which is
+the new default in GnuPG v2). To set it, add (or modify) the
+``trust-model`` setting in ``~/.gnupg/gpg.conf``::
+
+ trust-model tofu+pgp
+
+Using the kernel.org web of trust repository
+--------------------------------------------
+
+Kernel.org maintains a git repository with developers' public keys as a
+replacement for replicating keyserver networks that have gone mostly
+dark in the past few years. The full documentation for how to set up
+that repository as your source of public keys can be found here:
+
+- `Kernel developer PGP Keyring`_
+
+If you are a kernel developer, please consider submitting your key for
+inclusion into that keyring.
+
+.. _`Kernel developer PGP Keyring`: https://korg.docs.kernel.org/pgpkeys.html
diff --git a/Documentation/process/maintainer-soc-clean-dts.rst b/Documentation/process/maintainer-soc-clean-dts.rst
new file mode 100644
index 000000000..1b32430d0
--- /dev/null
+++ b/Documentation/process/maintainer-soc-clean-dts.rst
@@ -0,0 +1,25 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==============================================
+SoC Platforms with DTS Compliance Requirements
+==============================================
+
+Overview
+--------
+
+SoC platforms or subarchitectures should follow all the rules from
+Documentation/process/maintainer-soc.rst. This document, referenced in
+MAINTAINERS, imposes the additional requirements listed below.
+
+Strict DTS DT Schema and dtc Compliance
+---------------------------------------
+
+No changes to the SoC platform Devicetree sources (DTS files) should introduce
+new ``make dtbs_check W=1`` warnings. Warnings in a new board DTS, which are
+results of issues in an included DTSI file, are considered existing, not new
+warnings. The platform maintainers have automation in place which should point
+out any new warnings.
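+
+For example, a full compliance check for arm64 platforms could look
+roughly like this (assuming an aarch64 cross-toolchain and the dtschema
+tools are installed)::
+
+ $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
+ $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- W=1 dtbs_check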
+
+If a commit introducing new warnings gets accepted somehow, the resulting
+issues shall be fixed in a reasonable time (e.g. within one release) or
+the commit reverted.
diff --git a/Documentation/process/maintainer-soc.rst b/Documentation/process/maintainer-soc.rst
new file mode 100644
index 000000000..12637530d
--- /dev/null
+++ b/Documentation/process/maintainer-soc.rst
@@ -0,0 +1,177 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+SoC Subsystem
+=============
+
+Overview
+--------
+
+The SoC subsystem is a place of aggregation for SoC-specific code.
+The main components of the subsystem are:
+
+* devicetrees for 32- & 64-bit ARM and RISC-V
+* 32-bit ARM board files (arch/arm/mach*)
+* 32- & 64-bit ARM defconfigs
+* SoC-specific drivers across architectures, in particular for 32- & 64-bit
+ ARM, RISC-V and LoongArch
+
+These "SoC-specific drivers" do not include clock, GPIO etc drivers that have
+other top-level maintainers. The drivers/soc/ directory is generally meant
+for kernel-internal drivers that are used by other drivers to provide SoC-
+specific functionality like identifying an SoC revision or interfacing with
+power domains.
+
+The SoC subsystem also serves as an intermediate location for changes to
+drivers/bus, drivers/firmware, drivers/reset and drivers/memory. The addition
+of new platforms, or the removal of existing ones, often go through the SoC
+tree as a dedicated branch covering multiple subsystems.
+
+The main SoC tree is housed on git.kernel.org:
+ https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git/
+
+Clearly this is quite a wide range of topics, which no one person, or even
+a small group of people, is capable of maintaining. Instead, the SoC
+subsystem comprises many submaintainers, each taking care of individual
+platforms and driver subdirectories.
+In this regard, "platform" usually refers to a series of SoCs from a given
+vendor, for example, Nvidia's series of Tegra SoCs. Many submaintainers operate
+on a vendor level, responsible for multiple product lines. For several reasons,
+including acquisitions/different business units in a company, things vary
+significantly here. The various submaintainers are documented in the
+MAINTAINERS file.
+
+Most of these submaintainers have their own trees where they stage patches,
+sending pull requests to the main SoC tree. These trees are usually, but not
+always, listed in MAINTAINERS. The main SoC maintainers can be reached via the
+alias soc@kernel.org if there is no platform-specific maintainer, or if they
+are unresponsive.
+
+What the SoC tree is not, however, is a location for architecture-specific code
+changes. Each architecture has its own maintainers that are responsible for
+architectural details, CPU errata and the like.
+
+Information for (new) Submaintainers
+------------------------------------
+
+As new platforms spring up, they often bring with them new submaintainers,
+many of whom work for the silicon vendor, and may not be familiar with the
+process.
+
+Devicetree ABI Stability
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Perhaps one of the most important things to highlight is that dt-bindings
+document the ABI between the devicetree and the kernel.
+Please read Documentation/devicetree/bindings/ABI.rst.
+
+If changes are being made to a devicetree that are incompatible with old
+kernels, the devicetree patch should not be applied until the driver is, or an
+appropriate time later. Most importantly, any incompatible changes should be
+clearly pointed out in the patch description and pull request, along with the
+expected impact on existing users, such as bootloaders or other operating
+systems.
+
+Driver Branch Dependencies
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A common problem is synchronizing changes between device drivers and devicetree
+files. Even if a change is compatible in both directions, this may require
+coordinating how the changes get merged through different maintainer trees.
+
+Usually the branch that includes a driver change will also include the
+corresponding change to the devicetree binding description, to ensure they are
+in fact compatible. This means that the devicetree branch can end up causing
+warnings in the "make dtbs_check" step. If a devicetree change depends on
+missing additions to a header file in include/dt-bindings/, it will fail the
+"make dtbs" step and not get merged.
+
+There are multiple ways to deal with this:
+
+* Avoid defining custom macros in include/dt-bindings/ for hardware constants
+ that can be derived from a datasheet -- binding macros in header files should
+ only be used as a last resort if there is no natural way to define a binding
+
+* Use literal values in the devicetree file in place of macros even when a
+ header is required, and change them to the named representation in a
+ following release
+
+* Defer the devicetree changes to a release after the binding and driver have
+ already been merged
+
+* Change the bindings in a shared immutable branch that is used as the base for
+ both the driver change and the devicetree changes
+
+* Add duplicate defines in the devicetree file guarded by an #ifndef section,
+ removing them in a later release
+
+Devicetree Naming Convention
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The general naming scheme for devicetree files is as follows. The aspects of a
+platform that are set at the SoC level, like CPU cores, are contained in a file
+named $soc.dtsi, for example, jh7100.dtsi. Integration details that vary
+from board to board are described in $soc-$board.dts; an example of this is
+jh7100-beaglev-starlight.dts. Often many boards are variations on a theme, and
+frequently there are intermediate files, such as jh7100-common.dtsi, which sit
+between the $soc.dtsi and $soc-$board.dts files, containing the descriptions of
+common hardware.
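+
+Using the JH7100 example above, the resulting layout looks roughly like
+this (paths shown purely for illustration)::
+
+ arch/riscv/boot/dts/starfive/jh7100.dtsi
+ arch/riscv/boot/dts/starfive/jh7100-common.dtsi
+ arch/riscv/boot/dts/starfive/jh7100-beaglev-starlight.dts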
+
+Some platforms also have System on Modules, containing an SoC, which are then
+integrated into several different boards. For these platforms, $soc-$som.dtsi
+and $soc-$som-$board.dts are typical.
+
+Directories are usually named after the vendor of the SoC at the time of its
+inclusion, leading to some historical directory names in the tree.
+
+Validating Devicetree Files
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``make dtbs_check`` can be used to validate that devicetree files are compliant
+with the dt-bindings that describe the ABI. Please read the section
+"Running checks" of Documentation/devicetree/bindings/writing-schema.rst for
+more information on the validation of devicetrees.
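+
+When iterating on a single binding, the check can be narrowed down with
+the ``DT_SCHEMA_FILES`` variable, for example (the schema path is just a
+placeholder)::
+
+ $ make ARCH=arm64 dtbs_check DT_SCHEMA_FILES=path/to/binding.yaml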
+
+For new platforms, or additions to existing ones, ``make dtbs_check`` should not
+add any new warnings. For RISC-V and Samsung SoC, ``make dtbs_check W=1`` is
+required to not add any new warnings.
+If in any doubt about a devicetree change, reach out to the devicetree
+maintainers.
+
+Branches and Pull Requests
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Just as the main SoC tree has several branches, it is expected that
+submaintainers will do the same. Driver, defconfig and devicetree changes should
+all be split into separate branches and appear in separate pull requests to the
+SoC maintainers. Each branch should be usable by itself and avoid
+regressions that originate from dependencies on other branches.
+
+Small sets of patches can also be sent as separate emails to soc@kernel.org,
+grouped into the same categories.
+
+If changes do not fit into the normal patterns, there can be additional
+top-level branches, e.g. for a treewide rework, or the addition of new SoC
+platforms including dts files and drivers.
+
+Branches with a lot of changes can benefit from getting split up into separate
+topic branches, even if they end up getting merged into the same branch of the
+SoC tree. An example here would be one branch for devicetree warning fixes, one
+for a rework and one for newly added boards.
+
+Another common way to split up changes is to send an early pull request with the
+majority of the changes at some point between rc1 and rc4, following up with one
+or more smaller pull requests towards the end of the cycle that can add late
+changes or address problems identified while testing the first set.
+
+While there is no cut-off time for late pull requests, it helps to only send
+small branches as time gets closer to the merge window.
+
+Pull requests for bugfixes for the current release can be sent at any time, but
+again having multiple smaller branches is better than trying to combine too many
+patches into one pull request.
+
+The subject line of a pull request should begin with "[GIT PULL]", and the
+request should be made using a signed tag, rather than a branch. This tag
+should contain a short description summarising the changes in the pull
+request. For more detail on sending pull requests, please see
+Documentation/maintainer/pull-requests.rst.
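+
+A rough sketch of preparing such a pull request, with placeholder names,
+versions and URLs::
+
+ $ git tag -s -m "SoC driver updates for 6.x" soc-drivers-6.x
+ $ git push <your-remote> soc-drivers-6.x
+ $ git request-pull v6.x-rc1 <your-tree-url> soc-drivers-6.x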
diff --git a/Documentation/process/maintainer-tip.rst b/Documentation/process/maintainer-tip.rst
new file mode 100644
index 000000000..08dd0f804
--- /dev/null
+++ b/Documentation/process/maintainer-tip.rst
@@ -0,0 +1,804 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+The tip tree handbook
+=====================
+
+What is the tip tree?
+---------------------
+
+The tip tree is a collection of several subsystems and areas of
+development. The tip tree is both a direct development tree and an
+aggregation tree for several sub-maintainer trees. The tip tree gitweb URL
+is: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
+
+The tip tree contains the following subsystems:
+
+ - **x86 architecture**
+
+ The x86 architecture development takes place in the tip tree except
+ for the x86 KVM and XEN specific parts which are maintained in the
+ corresponding subsystems and routed directly to mainline from
+ there. It's still good practice to Cc the x86 maintainers on
+ x86-specific KVM and XEN patches.
+
+ Some x86 subsystems have their own maintainers in addition to the
+ overall x86 maintainers. Please Cc the overall x86 maintainers on
+ patches touching files in arch/x86 even when they are not called out
+ by the MAINTAINERS file.
+
+ Note that ``x86@kernel.org`` is not a mailing list. It is merely a
+ mail alias which distributes mails to the x86 top-level maintainer
+ team. Please always Cc the Linux Kernel mailing list (LKML)
+ ``linux-kernel@vger.kernel.org``, otherwise your mail ends up only in
+ the private inboxes of the maintainers.
+
+ - **Scheduler**
+
+ Scheduler development takes place in the -tip tree, in the
+ sched/core branch - with occasional sub-topic trees for
+ work-in-progress patch-sets.
+
+ - **Locking and atomics**
+
+ Locking development (including atomics and other synchronization
+ primitives that are connected to locking) takes place in the -tip
+ tree, in the locking/core branch - with occasional sub-topic trees
+ for work-in-progress patch-sets.
+
+ - **Generic interrupt subsystem and interrupt chip drivers**:
+
+ - interrupt core development happens in the irq/core branch
+
+ - interrupt chip driver development also happens in the irq/core
+ branch, but the patches are usually applied in a separate maintainer
+ tree and then aggregated into irq/core
+
+ - **Time, timers, timekeeping, NOHZ and related chip drivers**:
+
+ - timekeeping, clocksource core, NTP and alarmtimer development
+ happens in the timers/core branch, but patches are usually applied in
+ a separate maintainer tree and then aggregated into timers/core
+
+ - clocksource/event driver development happens in the timers/core
+ branch, but patches are mostly applied in a separate maintainer tree
+ and then aggregated into timers/core
+
+ - **Performance counters core, architecture support and tooling**:
+
+ - perf core and architecture support development happens in the
+ perf/core branch
+
+ - perf tooling development happens in the perf tools maintainer
+ tree and is aggregated into the tip tree.
+
+ - **CPU hotplug core**
+
+ - **RAS core**
+
+ Mostly x86-specific RAS patches are collected in the tip ras/core
+ branch.
+
+ - **EFI core**
+
+ EFI development takes place in the efi git tree. The collected patches
+ are aggregated in the tip efi/core branch.
+
+ - **RCU**
+
+ RCU development happens in the linux-rcu tree. The resulting changes
+ are aggregated into the tip core/rcu branch.
+
+ - **Various core code components**:
+
+ - debugobjects
+
+ - objtool
+
+ - random bits and pieces
+
+
+Patch submission notes
+----------------------
+
+Selecting the tree/branch
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In general, development against the head of the tip tree master branch is
+fine, but for the subsystems which are maintained separately, have their
+own git tree and are only aggregated into the tip tree, development should
+take place against the relevant subsystem tree or branch.
+
+Bug fixes which target mainline should always be applicable against the
+mainline kernel tree. Potential conflicts against changes which are already
+queued in the tip tree are handled by the maintainers.
+
+Patch subject
+^^^^^^^^^^^^^
+
+The tip tree preferred format for patch subject prefixes is
+'subsys/component:', e.g. 'x86/apic:', 'x86/mm/fault:', 'sched/fair:',
+'genirq/core:'. Please do not use file names or complete file paths as
+prefix. 'git log path/to/file' should give you a reasonable hint in most
+cases.
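+
+For example, a quick way to see which prefixes have been used for a
+particular file in the past::
+
+ $ git log --oneline -5 -- kernel/sched/fair.c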
+
+The condensed patch description in the subject line should start with an
+uppercase letter and should be written in imperative tone.
+
+
+Changelog
+^^^^^^^^^
+
+The general rules about changelogs in the :ref:`Submitting patches guide
+<describe_changes>`, apply.
+
+The tip tree maintainers place great value on following these rules,
+especially on the request to write changelogs in imperative mood and not
+to impersonate code or its execution. This is not just a whim of the
+maintainers. Changelogs written in abstract words are more precise and
+tend to be less confusing than those written in the form of novels.
+
+It's also useful to structure the changelog into several paragraphs and not
+lump everything together into a single one. A good structure is to explain
+the context, the problem and the solution in separate paragraphs, in that
+order.
+
+Examples for illustration:
+
+ Example 1::
+
+ x86/intel_rdt/mbm: Fix MBM overflow handler during hot cpu
+
+ When a CPU is dying, we cancel the worker and schedule a new worker on a
+ different CPU on the same domain. But if the timer is already about to
+ expire (say 0.99s) then we essentially double the interval.
+
+ We modify the hot cpu handling to cancel the delayed work on the dying
+ cpu and run the worker immediately on a different cpu in same domain. We
+ donot flush the worker because the MBM overflow worker reschedules the
+ worker on same CPU and scans the domain->cpu_mask to get the domain
+ pointer.
+
+ Improved version::
+
+ x86/intel_rdt/mbm: Fix MBM overflow handler during CPU hotplug
+
+ When a CPU is dying, the overflow worker is canceled and rescheduled on a
+ different CPU in the same domain. But if the timer is already about to
+ expire this essentially doubles the interval which might result in a non
+ detected overflow.
+
+ Cancel the overflow worker and reschedule it immediately on a different CPU
+ in the same domain. The work could be flushed as well, but that would
+ reschedule it on the same CPU.
+
+ Example 2::
+
+ time: POSIX CPU timers: Ensure that variable is initialized
+
+ If cpu_timer_sample_group returns -EINVAL, it will not have written into
+ *sample. Checking for cpu_timer_sample_group's return value precludes the
+ potential use of an uninitialized value of now in the following block.
+ Given an invalid clock_idx, the previous code could otherwise overwrite
+ *oldval in an undefined manner. This is now prevented. We also exploit
+ short-circuiting of && to sample the timer only if the result will
+ actually be used to update *oldval.
+
+ Improved version::
+
+ posix-cpu-timers: Make set_process_cpu_timer() more robust
+
+ Because the return value of cpu_timer_sample_group() is not checked,
+ compilers and static checkers can legitimately warn about a potential use
+ of the uninitialized variable 'now'. This is not a runtime issue as all
+ call sites hand in valid clock ids.
+
+ Also cpu_timer_sample_group() is invoked unconditionally even when the
+ result is not used because *oldval is NULL.
+
+ Make the invocation conditional and check the return value.
+
+ Example 3::
+
+ The entity can also be used for other purposes.
+
+ Let's rename it to be more generic.
+
+ Improved version::
+
+ The entity can also be used for other purposes.
+
+ Rename it to be more generic.
+
+
+For complex scenarios, especially race conditions and memory ordering
+issues, it is valuable to depict the scenario with a table which shows
+the parallelism and the temporal order of events. Here is an example::
+
+ CPU0 CPU1
+ free_irq(X) interrupt X
+ spin_lock(desc->lock)
+ wake irq thread()
+ spin_unlock(desc->lock)
+ spin_lock(desc->lock)
+ remove action()
+ shutdown_irq()
+ release_resources() thread_handler()
+ spin_unlock(desc->lock) access released resources.
+ ^^^^^^^^^^^^^^^^^^^^^^^^^
+ synchronize_irq()
+
+Lockdep provides similar useful output to depict a possible deadlock
+scenario::
+
+ CPU0 CPU1
+ rtmutex_lock(&rcu->rt_mutex)
+ spin_lock(&rcu->rt_mutex.wait_lock)
+ local_irq_disable()
+ spin_lock(&timer->it_lock)
+ spin_lock(&rcu->mutex.wait_lock)
+ --> Interrupt
+ spin_lock(&timer->it_lock)
+
+
+Function references in changelogs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When a function is mentioned in the changelog, either the text body or the
+subject line, please use the format 'function_name()'. Omitting the
+brackets after the function name can be ambiguous::
+
+ Subject: subsys/component: Make reservation_count static
+
+ reservation_count is only used in reservation_stats. Make it static.
+
+The variant with brackets is more precise::
+
+ Subject: subsys/component: Make reservation_count() static
+
+ reservation_count() is only called from reservation_stats(). Make it
+ static.
+
+
+Backtraces in changelogs
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+See :ref:`backtraces`.
+
+Ordering of commit tags
+^^^^^^^^^^^^^^^^^^^^^^^
+
+To have a uniform view of the commit tags, the tip maintainers use the
+following tag ordering scheme:
+
+ - Fixes: 12char-SHA1 ("sub/sys: Original subject line")
+
+ A Fixes tag should be added even for changes which do not need to be
+ backported to stable kernels, i.e. when addressing a recently introduced
+ issue which only affects tip or the current head of mainline. These tags
+ are helpful to identify the original commit and are much more valuable
+ than prominently mentioning the commit which introduced a problem in the
+ text of the changelog itself because they can be automatically
+ extracted.
+
+ The following example illustrates the difference::
+
+ Commit
+
+ abcdef012345678 ("x86/xxx: Replace foo with bar")
+
+ left an unused instance of variable foo around. Remove it.
+
+ Signed-off-by: J.Dev <j.dev@mail>
+
+ Please say instead::
+
+ The recent replacement of foo with bar left an unused instance of
+ variable foo around. Remove it.
+
+ Fixes: abcdef012345678 ("x86/xxx: Replace foo with bar")
+ Signed-off-by: J.Dev <j.dev@mail>
+
+ The latter puts the information about the patch into the focus and
+ amends it with the reference to the commit which introduced the issue
+ rather than putting the focus on the original commit in the first place.
+
+ - Reported-by: ``Reporter <reporter@mail>``
+
+ - Originally-by: ``Original author <original-author@mail>``
+
+ - Suggested-by: ``Suggester <suggester@mail>``
+
+ - Co-developed-by: ``Co-author <co-author@mail>``
+
+ Signed-off-by: ``Co-author <co-author@mail>``
+
+ Note that Co-developed-by and Signed-off-by of the co-author(s) must
+ come in pairs.
+
+ - Signed-off-by: ``Author <author@mail>``
+
+ The first Signed-off-by (SOB) after the last Co-developed-by/SOB pair is the
+ author SOB, i.e. the person flagged as author by git.
+
+ - Signed-off-by: ``Patch handler <handler@mail>``
+
+ SOBs after the author SOB are from people handling and transporting
+ the patch, but were not involved in development. SOB chains should
+ reflect the **real** route a patch took as it was propagated to us,
+ with the first SOB entry signalling primary authorship of a single
+ author. Acks should be given as Acked-by lines and review approvals
+ as Reviewed-by lines.
+
+ If the handler made modifications to the patch or the changelog, then
+ this should be mentioned **after** the changelog text and **above**
+ all commit tags in the following format::
+
+ ... changelog text ends.
+
+ [ handler: Replaced foo by bar and updated changelog ]
+
+ First-tag: .....
+
+ Note the two empty new lines which separate the changelog text and the
+ commit tags from that notice.
+
+ If a patch is sent to the mailing list by a handler then the author has
+ to be noted in the first line of the changelog with::
+
+ From: Author <author@mail>
+
+ Changelog text starts here....
+
+ so the authorship is preserved. The 'From:' line has to be followed
+ by an empty line. If that 'From:' line is missing, then the patch
+ would be attributed to the person who sent (transported, handled) it.
+ The 'From:' line is automatically removed when the patch is applied
+ and does not show up in the final git changelog. It merely affects
+ the authorship information of the resulting Git commit.
+
+ - Tested-by: ``Tester <tester@mail>``
+
+ - Reviewed-by: ``Reviewer <reviewer@mail>``
+
+ - Acked-by: ``Acker <acker@mail>``
+
+ - Cc: ``cc-ed-person <person@mail>``
+
+ If the patch should be backported to stable, then please add a '``Cc:
+ stable@vger.kernel.org``' tag, but do not Cc stable when sending your
+ mail.
+
+ - Link: ``https://link/to/information``
+
+ For referring to an email on LKML or other kernel mailing lists,
+ please use the lore.kernel.org redirector URL::
+
+ https://lore.kernel.org/r/email-message@id
+
+ The kernel.org redirector is considered a stable URL, unlike other email
+ archives.
+
+ Maintainers will add a Link tag referencing the email of the patch
+ submission when they apply a patch to the tip tree. This tag is useful
+ for later reference and is also used for commit notifications.
+
+Please do not use combined tags, e.g. ``Reported-and-tested-by``, as
+they just complicate automated extraction of tags.
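+
+Putting it all together, a tag block following this ordering could look
+like this (all names, addresses and hashes are placeholders)::
+
+ Fixes: abcdef012345678 ("x86/xxx: Replace foo with bar")
+ Reported-by: Reporter <reporter@mail>
+ Suggested-by: Suggester <suggester@mail>
+ Co-developed-by: Co-author <co-author@mail>
+ Signed-off-by: Co-author <co-author@mail>
+ Signed-off-by: Author <author@mail>
+ Signed-off-by: Patch handler <handler@mail>
+ Tested-by: Tester <tester@mail>
+ Reviewed-by: Reviewer <reviewer@mail>
+ Acked-by: Acker <acker@mail>
+ Cc: stable@vger.kernel.org
+ Link: https://lore.kernel.org/r/email-message@id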
+
+
+Links to documentation
+^^^^^^^^^^^^^^^^^^^^^^
+
+Providing links to documentation in the changelog is a great help to later
+debugging and analysis. Unfortunately, URLs often break very quickly
+because companies restructure their websites frequently. Non-'volatile'
+exceptions include the Intel SDM and the AMD APM.
+
+Therefore, for 'volatile' documents, please create an entry in the kernel
+bugzilla https://bugzilla.kernel.org and attach a copy of these documents
+to the bugzilla entry. Finally, provide the URL of the bugzilla entry in
+the changelog.
+
+Patch resend or reminders
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+See :ref:`resend_reminders`.
+
+Merge window
+^^^^^^^^^^^^
+
+Please do not expect large patch series to be handled during the merge
+window or even during the week before. Such patches should be submitted in
+mergeable state *at* *least* a week before the merge window opens.
+Exceptions are made for bug fixes and *sometimes* for small standalone
+drivers for new hardware or minimally invasive patches for hardware
+enablement.
+
+During the merge window, the maintainers instead focus on following the
+upstream changes, fixing merge window fallout, collecting bug fixes, and
+allowing themselves a breath. Please respect that.
+
+The release candidate -rc1 is the starting point for new patches to be
+applied which are targeted for the next merge window.
+
+So-called *urgent* branches will be merged into mainline during the
+stabilization phase of each release.
+
+
+Git
+^^^
+
+The tip maintainers accept git pull requests from maintainers who provide
+subsystem changes for aggregation in the tip tree.
+
+Pull requests for new patch submissions are usually not accepted and do not
+replace proper patch submission to the mailing list. The main reason for
+this is that the review workflow is email based.
+
+If you submit a larger patch series it is helpful to provide a git branch
+in a private repository which allows interested people to easily pull the
+series for testing. The usual way to offer this is a git URL in the cover
+letter of the patch series.
+
+Testing
+^^^^^^^
+
+Code should be tested before it is submitted to the tip maintainers. Anything
+other than minor changes should be built, booted and tested with
+comprehensive (and heavyweight) kernel debugging options enabled.
+
+These debugging options can be found in kernel/configs/x86_debug.config
+and can be added to an existing kernel config by running::
+
+  make x86_debug.config
+
+Some of these options are x86-specific and can be left out when testing
+on other architectures.
+
+.. _maintainer-tip-coding-style:
+
+Coding style notes
+------------------
+
+Comment style
+^^^^^^^^^^^^^
+
+Sentences in comments start with an uppercase letter.
+
+Single line comments::
+
+ /* This is a single line comment */
+
+Multi-line comments::
+
+ /*
+ * This is a properly formatted
+ * multi-line comment.
+ *
+ * Larger multi-line comments should be split into paragraphs.
+ */
+
+No tail comments:
+
+ Please refrain from using tail comments. Tail comments disturb the
+ reading flow in almost all contexts, but especially in code::
+
+ if (somecondition_is_true) /* Don't put a comment here */
+ dostuff(); /* Neither here */
+
+ seed = MAGIC_CONSTANT; /* Nor here */
+
+ Use freestanding comments instead::
+
+ /* This condition is not obvious without a comment */
+ if (somecondition_is_true) {
+ /* This really needs to be documented */
+ dostuff();
+ }
+
+ /* This magic initialization needs a comment. Maybe not? */
+ seed = MAGIC_CONSTANT;
+
+Comment the important things:
+
+ Comments should be added where the operation is not obvious. Documenting
+ the obvious is just a distraction::
+
+ /* Decrement refcount and check for zero */
+ if (refcount_dec_and_test(&p->refcnt)) {
+ do;
+ lots;
+ of;
+ magic;
+ things;
+ }
+
+ Instead, comments should explain the non-obvious details and document
+ constraints::
+
+ if (refcount_dec_and_test(&p->refcnt)) {
+ /*
+ * Really good explanation why the magic things below
+ * need to be done, ordering and locking constraints,
+ * etc..
+ */
+ do;
+ lots;
+ of;
+ magic;
+ /* Needs to be the last operation because ... */
+ things;
+ }
+
+Function documentation comments:
+
+ To document functions and their arguments please use kernel-doc format
+ and not free form comments::
+
+ /**
+ * magic_function - Do lots of magic stuff
+ * @magic: Pointer to the magic data to operate on
+ * @offset: Offset in the data array of @magic
+ *
+ * Deep explanation of mysterious things done with @magic along
+ * with documentation of the return values.
+ *
+ * Note, that the argument descriptors above are arranged
+ * in a tabular fashion.
+ */
+
+ This applies especially to globally visible functions and inline
+ functions in public header files. It might be overkill to use kernel-doc
+ format for every (static) function which needs a tiny explanation. The
+ usage of descriptive function names often replaces these tiny comments.
+ Apply common sense as always.
+
+
+Documenting locking requirements
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ Documenting locking requirements is a good thing, but comments are not
+ necessarily the best choice. Instead of writing::
+
+ /* Caller must hold foo->lock */
+ void func(struct foo *foo)
+ {
+ ...
+ }
+
+ Please use::
+
+ void func(struct foo *foo)
+ {
+ lockdep_assert_held(&foo->lock);
+ ...
+ }
+
+ In PROVE_LOCKING kernels, lockdep_assert_held() emits a warning
+ if the caller doesn't hold the lock. Comments can't do that.
+
+Bracket rules
+^^^^^^^^^^^^^
+
+Brackets should be omitted only if the statement which follows 'if', 'for',
+'while' etc. is truly a single line::
+
+ if (foo)
+ do_something();
+
+The following is not considered to be a single line statement even
+though C does not require brackets::
+
+ for (i = 0; i < end; i++)
+ if (foo[i])
+ do_something(foo[i]);
+
+Adding brackets around the outer loop enhances the reading flow::
+
+ for (i = 0; i < end; i++) {
+ if (foo[i])
+ do_something(foo[i]);
+ }
+
+
+Variable declarations
+^^^^^^^^^^^^^^^^^^^^^
+
+The preferred ordering of variable declarations at the beginning of a
+function is reverse fir tree order::
+
+ struct long_struct_name *descriptive_name;
+ unsigned long foo, bar;
+ unsigned int tmp;
+ int ret;
+
+The above is faster to parse than the reverse ordering::
+
+ int ret;
+ unsigned int tmp;
+ unsigned long foo, bar;
+ struct long_struct_name *descriptive_name;
+
+And even more so than random ordering::
+
+ unsigned long foo, bar;
+ int ret;
+ struct long_struct_name *descriptive_name;
+ unsigned int tmp;
+
+Also please try to aggregate variables of the same type into a single
+line. There is no point in wasting screen space::
+
+ unsigned long a;
+ unsigned long b;
+ unsigned long c;
+ unsigned long d;
+
+It's really sufficient to do::
+
+ unsigned long a, b, c, d;
+
+Please also refrain from introducing line splits in variable declarations::
+
+ struct long_struct_name *descriptive_name = container_of(bar,
+ struct long_struct_name,
+ member);
+ struct foobar foo;
+
+It's way better to move the initialization to a separate line after the
+declarations::
+
+ struct long_struct_name *descriptive_name;
+ struct foobar foo;
+
+ descriptive_name = container_of(bar, struct long_struct_name, member);
+
+
+Variable types
+^^^^^^^^^^^^^^
+
+Please use the proper u8, u16, u32, u64 types for variables which are meant
+to describe hardware or are used as arguments for functions which access
+hardware. These types clearly define the bit width and avoid
+truncation, expansion and 32/64-bit confusion.
+
+u64 is also recommended in code which would become ambiguous for 32-bit
+kernels when 'unsigned long' would be used instead. While in such
+situations 'unsigned long long' could be used as well, u64 is shorter
+and also clearly shows that the operation is required to be 64 bits wide
+independent of the target CPU.
+
+Please use 'unsigned int' instead of 'unsigned'.
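+
+As a purely illustrative sketch (the register block and its layout are made
+up), a structure describing a hardware register window uses the fixed width
+types, while a plain counter is simply 'unsigned int'::
+
+  struct foo_hw_regs {
+          u32          ctrl;       /* Control register, 32 bits wide in hardware */
+          u32          status;     /* Status register */
+          u64          dma_base;   /* 64-bit DMA base, unambiguous on 32-bit kernels */
+  };
+
+  unsigned int num_queues;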
+
+
+Constants
+^^^^^^^^^
+
+Please do not use literal (hexa)decimal numbers in code or initializers.
+Either use proper defines which have descriptive names or consider using
+an enum.
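+
+As a purely illustrative sketch (the register offset, the mask and the device
+name are made up), instead of::
+
+  writel(0x1f, base + 0x24);
+
+please use descriptive defines::
+
+  #define FOO_IRQ_MASK_REG     0x24
+  #define FOO_IRQ_MASK_ALL     0x1f
+
+  writel(FOO_IRQ_MASK_ALL, base + FOO_IRQ_MASK_REG);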
+
+
+Struct declarations and initializers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Struct declarations should align the struct member names in a tabular
+fashion::
+
+ struct bar_order {
+ unsigned int guest_id;
+ int ordered_item;
+ struct menu *menu;
+ };
+
+Please avoid documenting struct members within the declaration, because
+this often results in strangely formatted comments and the struct members
+become obfuscated::
+
+ struct bar_order {
+ unsigned int guest_id; /* Unique guest id */
+ int ordered_item;
+ /* Pointer to a menu instance which contains all the drinks */
+ struct menu *menu;
+ };
+
+Instead, please consider using the kernel-doc format in a comment preceding
+the struct declaration, which is easier to read and has the added advantage
+of including the information in the kernel documentation, for example, as
+follows::
+
+ /**
+ * struct bar_order - Description of a bar order
+ * @guest_id: Unique guest id
+ * @ordered_item: The item number from the menu
+ * @menu: Pointer to the menu from which the item
+ * was ordered
+ *
+ * Supplementary information for using the struct.
+ *
+ * Note, that the struct member descriptors above are arranged
+ * in a tabular fashion.
+ */
+ struct bar_order {
+ unsigned int guest_id;
+ int ordered_item;
+ struct menu *menu;
+ };
+
+Static struct initializers must use C99 initializers and should also be
+aligned in a tabular fashion::
+
+ static struct foo statfoo = {
+ .a = 0,
+ .plain_integer = CONSTANT_DEFINE_OR_ENUM,
+ .bar = &statbar,
+ };
+
+Note that while C99 syntax allows the omission of the final comma,
+we recommend the use of a comma on the last line because it makes
+reordering and addition of new lines easier, and makes such future
+patches slightly easier to read as well.
+
+Line breaks
+^^^^^^^^^^^
+
+Restricting line length to 80 characters makes deeply indented code hard to
+read. Consider breaking out code into helper functions to avoid excessive
+line breaking.
+
+The 80 character rule is not a strict rule, so please use common sense when
+breaking lines. In particular, format strings should never be broken up.
+
+When splitting function declarations or function calls, please align the
+first argument in the second line with the first argument in the first
+line::
+
+ static int long_function_name(struct foobar *barfoo, unsigned int id,
+ unsigned int offset)
+ {
+
+ if (!id) {
+ ret = longer_function_name(barfoo, DEFAULT_BARFOO_ID,
+ offset);
+ ...
+
+Namespaces
+^^^^^^^^^^
+
+Function/variable namespaces improve readability and allow easy
+grepping. These namespaces are string prefixes for globally visible
+function and variable names, including inlines. These prefixes should
+combine the subsystem and the component name such as 'x86_comp\_',
+'sched\_', 'irq\_', and 'mutex\_'.
+
+This also includes static file scope functions that are immediately put
+into globally visible driver templates - it's useful for those symbols
+to carry a good prefix as well, for backtrace readability.
+
+Namespace prefixes may be omitted for local static functions and
+variables. Truly local functions, only called by other local functions,
+can have shorter descriptive names - our primary concern is greppability
+and backtrace readability.
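+
+As a purely illustrative sketch (the 'foo_comp' component and the function
+names are made up)::
+
+  /* Truly local helper, only called from within this file */
+  static void seed_hw_rng(struct foo_comp_device *fdev)
+  {
+          ...
+  }
+
+  /* Globally visible function, carries the subsystem/component prefix */
+  int foo_comp_enable_rng(struct foo_comp_device *fdev)
+  {
+          seed_hw_rng(fdev);
+          ...
+  }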
+
+Please note that 'xxx_vendor\_' and 'vendor_xxx\_' prefixes are not
+helpful for static functions in vendor-specific files. After all, it
+is already clear that the code is vendor-specific. In addition, vendor
+names should only be used for truly vendor-specific functionality.
+
+As always, apply common sense and aim for consistency and readability.
+
+
+Commit notifications
+--------------------
+
+The tip tree is monitored by a bot for new commits. The bot sends an email
+for each new commit to a dedicated mailing list
+(``linux-tip-commits@vger.kernel.org``) and Cc's all people who are
+mentioned in one of the commit tags. It uses the email message ID from the
+Link tag at the end of the tag list to set the In-Reply-To email header so
+the message is properly threaded with the patch submission email.
+
+The tip maintainers and submaintainers try to reply to the submitter
+when merging a patch, but they sometimes forget or it does not fit the
+workflow of the moment. While the bot message is purely mechanical, it
+also implies a 'Thank you! Applied.'.
diff --git a/Documentation/process/maintainers.rst b/Documentation/process/maintainers.rst
new file mode 100644
index 000000000..6174cfb41
--- /dev/null
+++ b/Documentation/process/maintainers.rst
@@ -0,0 +1 @@
+.. maintainers-include::
diff --git a/Documentation/process/management-style.rst b/Documentation/process/management-style.rst
new file mode 100644
index 000000000..dfbc69bf4
--- /dev/null
+++ b/Documentation/process/management-style.rst
@@ -0,0 +1,290 @@
+.. _managementstyle:
+
+Linux kernel management style
+=============================
+
+This is a short document describing the preferred (or made up, depending
+on who you ask) management style for the linux kernel. It's meant to
+mirror the :ref:`process/coding-style.rst <codingstyle>` document to some
+degree, and mainly written to avoid answering [#f1]_ the same (or similar)
+questions over and over again.
+
+Management style is very personal and much harder to quantify than
+simple coding style rules, so this document may or may not have anything
+to do with reality. It started as a lark, but that doesn't mean that it
+might not actually be true. You'll have to decide for yourself.
+
+Btw, when talking about "kernel manager", it's all about the technical
+lead persons, not the people who do traditional management inside
+companies. If you sign purchase orders or you have any clue about the
+budget of your group, you're almost certainly not a kernel manager.
+These suggestions may or may not apply to you.
+
+First off, I'd suggest buying "Seven Habits of Highly Effective
+People", and NOT reading it. Burn it, it's a great symbolic gesture.
+
+.. [#f1] This document does so not so much by answering the question, but by
+ making it painfully obvious to the questioner that we don't have a clue
+ to what the answer is.
+
+Anyway, here goes:
+
+.. _decisions:
+
+1) Decisions
+------------
+
+Everybody thinks managers make decisions, and that decision-making is
+important. The bigger and more painful the decision, the bigger the
+manager must be to make it. That's very deep and obvious, but it's not
+actually true.
+
+The name of the game is to **avoid** having to make a decision. In
+particular, if somebody tells you "choose (a) or (b), we really need you
+to decide on this", you're in trouble as a manager. The people you
+manage had better know the details better than you, so if they come to
+you for a technical decision, you're screwed. You're clearly not
+competent to make that decision for them.
+
+(Corollary: if the people you manage don't know the details better than
+you, you're also screwed, although for a totally different reason.
+Namely that you are in the wrong job, and that **they** should be managing
+your brilliance instead).
+
+So the name of the game is to **avoid** decisions, at least the big and
+painful ones. Making small and non-consequential decisions is fine, and
+makes you look like you know what you're doing, so what a kernel manager
+needs to do is to turn the big and painful ones into small things where
+nobody really cares.
+
+It helps to realize that the key difference between a big decision and a
+small one is whether you can fix your decision afterwards. Any decision
+can be made small by just always making sure that if you were wrong (and
+you **will** be wrong), you can always undo the damage later by
+backtracking. Suddenly, you get to be doubly managerial for making
+**two** inconsequential decisions - the wrong one **and** the right one.
+
+And people will even see that as true leadership (*cough* bullshit
+*cough*).
+
+Thus the key to avoiding big decisions becomes simply to avoid doing
+things that can't be undone. Don't get ushered into a corner from which
+you cannot escape. A cornered rat may be dangerous - a cornered manager
+is just pitiful.
+
+It turns out that since nobody would be stupid enough to ever really let
+a kernel manager have huge fiscal responsibility **anyway**, it's usually
+fairly easy to backtrack. Since you're not going to be able to waste
+huge amounts of money that you might not be able to repay, the only
+thing you can backtrack on is a technical decision, and there
+back-tracking is very easy: just tell everybody that you were an
+incompetent nincompoop, say you're sorry, and undo all the worthless
+work you had people work on for the last year. Suddenly the decision
+you made a year ago wasn't a big decision after all, since it could be
+easily undone.
+
+It turns out that some people have trouble with this approach, for two
+reasons:
+
+ - admitting you were an idiot is harder than it looks. We all like to
+ maintain appearances, and coming out in public to say that you were
+ wrong is sometimes very hard indeed.
+ - having somebody tell you that what you worked on for the last year
+ wasn't worthwhile after all can be hard on the poor lowly engineers
+ too, and while the actual **work** was easy enough to undo by just
+ deleting it, you may have irrevocably lost the trust of that
+ engineer. And remember: "irrevocable" was what we tried to avoid in
+ the first place, and your decision ended up being a big one after
+ all.
+
+Happily, both of these reasons can be mitigated effectively by just
+admitting up-front that you don't have a friggin' clue, and telling
+people ahead of the fact that your decision is purely preliminary, and
+might be the wrong thing. You should always reserve the right to change
+your mind, and make people very **aware** of that. And it's much easier
+to admit that you are stupid when you haven't **yet** done the really
+stupid thing.
+
+Then, when it really does turn out to be stupid, people just roll their
+eyes and say "Oops, not again".
+
+This preemptive admission of incompetence might also make the people who
+actually do the work think twice about whether it's worth doing or
+not. After all, if **they** aren't certain whether it's a good idea, you
+sure as hell shouldn't encourage them by promising them that what they
+work on will be included. Make them at least think twice before they
+embark on a big endeavor.
+
+Remember: they'd better know more about the details than you do, and
+they usually already think they have the answer to everything. The best
+thing you can do as a manager is not to instill confidence, but rather a
+healthy dose of critical thinking on what they do.
+
+Btw, another way to avoid a decision is to plaintively just whine "can't
+we just do both?" and look pitiful. Trust me, it works. If it's not
+clear which approach is better, they'll eventually figure it out. The
+answer may end up being that both teams get so frustrated by the
+situation that they just give up.
+
+That may sound like a failure, but it's usually a sign that there was
+something wrong with both projects, and the reason the people involved
+couldn't decide was that they were both wrong. You end up coming up
+smelling like roses, and you avoided yet another decision that you could
+have screwed up on.
+
+
+2) People
+---------
+
+Most people are idiots, and being a manager means you'll have to deal
+with it, and perhaps more importantly, that **they** have to deal with
+**you**.
+
+It turns out that while it's easy to undo technical mistakes, it's not
+as easy to undo personality disorders. You just have to live with
+theirs - and yours.
+
+However, in order to prepare yourself as a kernel manager, it's best to
+remember not to burn any bridges, bomb any innocent villagers, or
+alienate too many kernel developers. It turns out that alienating people
+is fairly easy, and un-alienating them is hard. Thus "alienating"
+immediately falls under the heading of "not reversible", and becomes a
+no-no according to :ref:`decisions`.
+
+There are just a few simple rules here:
+
+ (1) don't call people d*ckheads (at least not in public)
+ (2) learn how to apologize when you forgot rule (1)
+
+The problem with #1 is that it's very easy to do, since you can say
+"you're a d*ckhead" in millions of different ways [#f2]_, sometimes without
+even realizing it, and almost always with a white-hot conviction that
+you are right.
+
+And the more convinced you are that you are right (and let's face it,
+you can call just about **anybody** a d*ckhead, and you often **will** be
+right), the harder it ends up being to apologize afterwards.
+
+To solve this problem, you really only have two options:
+
+ - get really good at apologies
+ - spread the "love" out so evenly that nobody really ends up feeling
+ like they get unfairly targeted. Make it inventive enough, and they
+ might even be amused.
+
+The option of being unfailingly polite really doesn't exist. Nobody will
+trust somebody who is so clearly hiding their true character.
+
+.. [#f2] Paul Simon sang "Fifty Ways to Leave Your Lover", because quite
+ frankly, "A Million Ways to Tell a Developer They're a D*ckhead" doesn't
+ scan nearly as well. But I'm sure he thought about it.
+
+
+3) People II - the Good Kind
+----------------------------
+
+While it turns out that most people are idiots, the corollary to that is
+sadly that you are one too, and that while we can all bask in the secure
+knowledge that we're better than the average person (let's face it,
+nobody ever believes that they're average or below-average), we should
+also admit that we're not the sharpest knife around, and there will be
+other people that are less of an idiot than you are.
+
+Some people react badly to smart people. Others take advantage of them.
+
+Make sure that you, as a kernel maintainer, are in the second group.
+Suck up to them, because they are the people who will make your job
+easier. In particular, they'll be able to make your decisions for you,
+which is what the game is all about.
+
+So when you find somebody smarter than you are, just coast along. Your
+management responsibilities largely become ones of saying "Sounds like a
+good idea - go wild", or "That sounds good, but what about xxx?". The
+second version in particular is a great way to either learn something
+new about "xxx" or seem **extra** managerial by pointing out something the
+smarter person hadn't thought about. In either case, you win.
+
+One thing to look out for is to realize that greatness in one area does
+not necessarily translate to other areas. So you might prod people in
+specific directions, but let's face it, they might be good at what they
+do, and suck at everything else. The good news is that people tend to
+naturally gravitate back to what they are good at, so it's not like you
+are doing something irreversible when you **do** prod them in some
+direction, just don't push too hard.
+
+
+4) Placing blame
+----------------
+
+Things will go wrong, and people want somebody to blame. Tag, you're it.
+
+It's not actually that hard to accept the blame, especially if people
+kind of realize that it wasn't **all** your fault. Which brings us to the
+best way of taking the blame: do it for someone else. You'll feel good
+for taking the fall, they'll feel good about not getting blamed, and the
+person who lost their whole 36GB porn-collection because of your
+incompetence will grudgingly admit that you at least didn't try to weasel
+out of it.
+
+Then make the developer who really screwed up (if you can find them) know
+**in private** that they screwed up. Not just so they can avoid it in the
+future, but so that they know they owe you one. And, perhaps even more
+importantly, they're also likely the person who can fix it. Because, let's
+face it, it sure ain't you.
+
+Taking the blame is also why you get to be manager in the first place.
+It's part of what makes people trust you, and allow you the potential
+glory, because you're the one who gets to say "I screwed up". And if
+you've followed the previous rules, you'll be pretty good at saying that
+by now.
+
+
+5) Things to avoid
+------------------
+
+There's one thing people hate even more than being called "d*ckhead",
+and that is being called a "d*ckhead" in a sanctimonious voice. The
+first you can apologize for, the second one you won't really get the
+chance. They likely will no longer be listening even if you otherwise
+do a good job.
+
+We all think we're better than anybody else, which means that when
+somebody else puts on airs, it **really** rubs us the wrong way. You may
+be morally and intellectually superior to everybody around you, but
+don't try to make it too obvious unless you really **intend** to irritate
+somebody [#f3]_.
+
+Similarly, don't be too polite or subtle about things. Politeness easily
+ends up going overboard and hiding the problem, and as they say, "On the
+internet, nobody can hear you being subtle". Use a big blunt object to
+hammer the point in, because you can't really depend on people getting
+your point otherwise.
+
+Some humor can help pad both the bluntness and the moralizing. Going
+overboard to the point of being ridiculous can drive a point home
+without making it painful to the recipient, who just thinks you're being
+silly. It can thus help get through the personal mental block we all
+have about criticism.
+
+.. [#f3] Hint: internet newsgroups that are not directly related to your work
+ are great ways to take out your frustrations at other people. Write
+ insulting posts with a sneer just to get into a good flame every once in
+ a while, and you'll feel cleansed. Just don't crap too close to home.
+
+
+6) Why me?
+----------
+
+Since your main responsibility seems to be to take the blame for other
+people's mistakes, and make it painfully obvious to everybody else that
+you're incompetent, the obvious question becomes one of why do it in the
+first place?
+
+First off, while you may or may not get screaming teenage girls (or
+boys, let's not be judgmental or sexist here) knocking on your dressing
+room door, you **will** get an immense feeling of personal accomplishment
+for being "in charge". Never mind the fact that you're really leading
+by trying to keep up with everybody else and running after them as fast
+as you can. Everybody will still think you're the person in charge.
+
+It's a great job if you can hack it.
diff --git a/Documentation/process/programming-language.rst b/Documentation/process/programming-language.rst
new file mode 100644
index 000000000..bc56dee6d
--- /dev/null
+++ b/Documentation/process/programming-language.rst
@@ -0,0 +1,58 @@
+.. _programming_language:
+
+Programming Language
+====================
+
+The kernel is written in the C programming language [c-language]_.
+More precisely, the kernel is typically compiled with ``gcc`` [gcc]_
+under ``-std=gnu11`` [gcc-c-dialect-options]_: the GNU dialect of ISO C11.
+``clang`` [clang]_ is also supported; see docs on
+:ref:`Building Linux with Clang/LLVM <kbuild_llvm>`.
+
+This dialect contains many extensions to the language [gnu-extensions]_,
+and many of them are used within the kernel as a matter of course.
+
+Attributes
+----------
+
+Attributes [gcc-attribute-syntax]_ are one of the extensions commonly used
+throughout the kernel. They make it possible to attach
+implementation-defined semantics to language entities (like variables,
+functions or types) without having to make significant syntactic changes
+to the language (e.g. adding a new keyword) [n2049]_.
+
+In some cases, attributes are optional (i.e. a compiler not supporting them
+should still produce proper code, even if it is slower or does not perform
+as many compile-time checks/diagnostics).
+
+The kernel defines pseudo-keywords (e.g. ``__pure``) instead of directly
+using the GNU attribute syntax (e.g. ``__attribute__((__pure__))``) in
+order to feature-detect which ones can be used and/or to shorten the code.
+
+Please refer to ``include/linux/compiler_attributes.h`` for more information.
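+
+As a minimal, purely illustrative sketch, a small helper with no side effects
+whose result depends only on its argument could be marked with the ``__pure``
+pseudo-keyword rather than with the raw attribute syntax::
+
+  /* Pseudo-keyword provided by include/linux/compiler_attributes.h */
+  static __pure int clamp_to_byte(int v)
+  {
+          return v < 0 ? 0 : (v > 255 ? 255 : v);
+  }
+
+  /* Roughly equivalent, spelled out with the raw GNU syntax */
+  static __attribute__((__pure__)) int clamp_to_byte_raw(int v)
+  {
+          return v < 0 ? 0 : (v > 255 ? 255 : v);
+  }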
+
+Rust
+----
+
+The kernel has experimental support for the Rust programming language
+[rust-language]_ under ``CONFIG_RUST``. It is compiled with ``rustc`` [rustc]_
+under ``--edition=2021`` [rust-editions]_. Editions are a way to introduce
+small changes to the language that are not backwards compatible.
+
+On top of that, some unstable features [rust-unstable-features]_ are used in
+the kernel. Unstable features may change in the future, so it is an important
+goal to reach a point where only stable features are used.
+
+Please refer to Documentation/rust/index.rst for more information.
+
+.. [c-language] http://www.open-std.org/jtc1/sc22/wg14/www/standards
+.. [gcc] https://gcc.gnu.org
+.. [clang] https://clang.llvm.org
+.. [gcc-c-dialect-options] https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html
+.. [gnu-extensions] https://gcc.gnu.org/onlinedocs/gcc/C-Extensions.html
+.. [gcc-attribute-syntax] https://gcc.gnu.org/onlinedocs/gcc/Attribute-Syntax.html
+.. [n2049] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2049.pdf
+.. [rust-language] https://www.rust-lang.org
+.. [rustc] https://doc.rust-lang.org/rustc/
+.. [rust-editions] https://doc.rust-lang.org/edition-guide/editions/
+.. [rust-unstable-features] https://github.com/Rust-for-Linux/linux/issues/2
diff --git a/Documentation/process/researcher-guidelines.rst b/Documentation/process/researcher-guidelines.rst
new file mode 100644
index 000000000..d159cd4f5
--- /dev/null
+++ b/Documentation/process/researcher-guidelines.rst
@@ -0,0 +1,170 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+.. _researcher_guidelines:
+
+Researcher Guidelines
++++++++++++++++++++++
+
+The Linux kernel community welcomes transparent research on the Linux
+kernel, the activities involved in producing it, and any other byproducts
+of its development. Linux benefits greatly from this kind of research, and
+most aspects of Linux are driven by research in one form or another.
+
+The community greatly appreciates it when researchers share preliminary
+findings before making their results public, especially if such research
+involves security. Getting involved early helps improve both the quality
+of the research and the ability of Linux to benefit from it. In any case,
+sharing open access copies of the published research with the community
+is recommended.
+
+This document seeks to clarify what the Linux kernel community considers
+acceptable and non-acceptable practices when conducting such research. At
+the very least, such research and related activities should follow
+standard research ethics rules. For more background on research ethics
+generally, ethics in technology, and research of developer communities
+in particular, see:
+
+* `History of Research Ethics <https://www.unlv.edu/research/ORI-HSR/history-ethics>`_
+* `IEEE Ethics <https://www.ieee.org/about/ethics/index.html>`_
+* `Developer and Researcher Views on the Ethics of Experiments on Open-Source Projects <https://arxiv.org/pdf/2112.13217.pdf>`_
+
+The Linux kernel community expects that everyone interacting with the
+project is participating in good faith to make Linux better. Research on
+any publicly-available artifact (including, but not limited to source
+code) produced by the Linux kernel community is welcome, though research
+on developers must be distinctly opt-in.
+
+Passive research that is based entirely on publicly available sources,
+including posts to public mailing lists and commits to public
+repositories, is clearly permissible. However, as with any research,
+standard ethics must still be followed.
+
+Active research on developer behavior, however, must be done with the
+explicit agreement of, and full disclosure to, the individual developers
+involved. Developers cannot be interacted with/experimented on without
+consent; this, too, is standard research ethics.
+
+Surveys
+=======
+
+Research often takes the form of surveys sent to maintainers or
+contributors. As a general rule, though, the kernel community derives
+little value from these surveys. The kernel development process works
+because every developer benefits from their participation, even working
+with others who have different goals. Responding to a survey, though, is a
+one-way demand placed on busy developers with no corresponding benefit to
+themselves or to the kernel community as a whole. For this reason, this
+method of research is discouraged.
+
+Kernel community members already receive far too much email and are likely
+to perceive survey requests as just another demand on their time. Sending
+such requests deprives the community of valuable contributor time and is
+unlikely to yield a statistically useful response.
+
+As an alternative, researchers should consider attending developer events,
+hosting sessions where the research project and its benefits to the
+participants can be explained, and interacting directly with the community
+there. The information received will be far richer than that obtained from
+an email survey, and the community will gain from the ability to learn from
+your insights as well.
+
+Patches
+=======
+
+To help clarify: sending patches to developers *is* interacting
+with them, but they have already consented to receiving *good faith
+contributions*. Sending intentionally flawed/vulnerable patches or
+contributing misleading information to discussions is not consented
+to. Such communication can be damaging to the developer (e.g. draining
+time, effort, and morale) and damaging to the project by eroding
+the entire developer community's trust in the contributor (and the
+contributor's organization as a whole), undermining efforts to provide
+constructive feedback to contributors, and putting end users at risk of
+software flaws.
+
+Participation in the development of Linux itself by researchers, as
+with anyone, is welcomed and encouraged. Research into Linux code is
+a common practice, especially when it comes to developing or running
+analysis tools that produce actionable results.
+
+When engaging with the developer community, sending a patch has
+traditionally been the best way to make an impact. Linux already has
+plenty of known bugs -- what's much more helpful is having vetted fixes.
+Before contributing, carefully read the appropriate documentation:
+
+* Documentation/process/development-process.rst
+* Documentation/process/submitting-patches.rst
+* Documentation/admin-guide/reporting-issues.rst
+* Documentation/process/security-bugs.rst
+
+Then send a patch (including a commit log with all the details listed
+below) and follow up on any feedback from other developers.
+
+When sending patches produced from research, the commit logs should
+contain at least the following details, so that developers have
+appropriate context for understanding the contribution. Answer:
+
+* What is the specific problem that has been found?
+* How could the problem be reached on a running system?
+* What effect would encountering the problem have on the system?
+* How was the problem found? Specifically include details about any
+ testing, static or dynamic analysis programs, and any other tools or
+ methods used to perform the work.
+* Which version of Linux was the problem found on? Using the most recent
+ release or a recent linux-next branch is strongly preferred (see
+ Documentation/process/howto.rst).
+* What was changed to fix the problem, and why is it believed to be correct?
+* How was the change build-tested and run-time tested?
+* What prior commit does this change fix? This should go in a "Fixes:"
+ tag as the documentation describes.
+* Who else has reviewed this patch? This should go in appropriate
+ "Reviewed-by:" tags; see below.
+
+For example::
+
+ From: Author <author@email>
+ Subject: [PATCH] drivers/foo_bar: Add missing kfree()
+
+ The error path in foo_bar driver does not correctly free the allocated
+ struct foo_bar_info. This can happen if the attached foo_bar device
+ rejects the initialization packets sent during foo_bar_probe(). This
+ would result in a 64 byte slab memory leak once per device attach,
+ wasting memory resources over time.
+
+ This flaw was found using an experimental static analysis tool we are
+ developing, LeakMagic[1], which reported the following warning when
+ analyzing the v5.15 kernel release:
+
+ path/to/foo_bar.c:187: missing kfree() call?
+
+ Add the missing kfree() to the error path. No other references to
+ this memory exist outside the probe function, so this is the only
+ place it can be freed.
+
+ x86_64 and arm64 defconfig builds with CONFIG_FOO_BAR=y using GCC
+ 11.2 show no new warnings, and LeakMagic no longer warns about this
+ code path. As we don't have a FooBar device to test with, no runtime
+ testing was able to be performed.
+
+ [1] https://url/to/leakmagic/details
+
+ Reported-by: Researcher <researcher@email>
+ Fixes: aaaabbbbccccdddd ("Introduce support for FooBar")
+ Signed-off-by: Author <author@email>
+ Reviewed-by: Reviewer <reviewer@email>
+
+If you are a first-time contributor, it is recommended that the patch
+itself be vetted by others privately before being posted to public lists.
+(This is required if you have been explicitly told your patches need
+more careful internal review.) These people are expected to have their
+"Reviewed-by" tag included in the resulting patch. Finding another
+developer familiar with Linux contribution, especially within your own
+organization, and having them help with reviews before sending them to
+the public mailing lists tends to significantly improve the quality of the
+resulting patches, and thereby reduces the burden on other developers.
+
+If no one can be found to internally review patches and you need
+help finding such a person, or if you have any other questions
+related to this document and the developer community's expectations,
+please reach out to the private Technical Advisory Board mailing list:
+<tech-board@lists.linux-foundation.org>.
diff --git a/Documentation/process/security-bugs.rst b/Documentation/process/security-bugs.rst
new file mode 100644
index 000000000..5a6993795
--- /dev/null
+++ b/Documentation/process/security-bugs.rst
@@ -0,0 +1,93 @@
+.. _securitybugs:
+
+Security bugs
+=============
+
+Linux kernel developers take security very seriously. As such, we'd
+like to know when a security bug is found so that it can be fixed and
+disclosed as quickly as possible. Please report security bugs to the
+Linux kernel security team.
+
+Contact
+-------
+
+The Linux kernel security team can be contacted by email at
+<security@kernel.org>. This is a private list of security officers
+who will help verify the bug report and develop and release a fix.
+If you already have a fix, please include it with your report, as
+that can speed up the process considerably. It is possible that the
+security team will bring in extra help from area maintainers to
+understand and fix the security vulnerability.
+
+As with any bug, the more information provided, the easier it
+will be to diagnose and fix. Please review the procedure outlined in
+'Documentation/admin-guide/reporting-issues.rst' if you are unclear about what
+information is helpful. Any exploit code is very helpful and will not
+be released without consent from the reporter unless it has already been
+made public.
+
+Please send plain text emails without attachments where possible.
+It is much harder to have a context-quoted discussion about a complex
+issue if all the details are hidden away in attachments. Think of it like a
+:doc:`regular patch submission <../process/submitting-patches>`
+(even if you don't have a patch yet): describe the problem and impact, list
+reproduction steps, and follow it with a proposed fix, all in plain text.
+
+Disclosure and embargoed information
+------------------------------------
+
+The security list is not a disclosure channel. For that, see Coordination
+below.
+
+Once a robust fix has been developed, the release process starts. Fixes
+for publicly known bugs are released immediately.
+
+Although our preference is to release fixes for publicly undisclosed bugs
+as soon as they become available, this may be postponed at the request of
+the reporter or an affected party for up to 7 calendar days from the start
+of the release process, with an exceptional extension to 14 calendar days
+if it is agreed that the criticality of the bug requires more time. The
+only valid reason for deferring the publication of a fix is to accommodate
+the logistics of QA and large scale rollouts which require release
+coordination.
+
+While embargoed information may be shared with trusted individuals in
+order to develop a fix, such information will not be published alongside
+the fix or on any other disclosure channel without the permission of the
+reporter. This includes but is not limited to the original bug report
+and followup discussions (if any), exploits, CVE information or the
+identity of the reporter.
+
+In other words our only interest is in getting bugs fixed. All other
+information submitted to the security list and any followup discussions
+of the report are treated confidentially even after the embargo has been
+lifted, in perpetuity.
+
+Coordination with other groups
+------------------------------
+
+The kernel security team strongly recommends that reporters of potential
+security issues NEVER contact the "linux-distros" mailing list until
+AFTER discussing it with the kernel security team. Do not Cc: both
+lists at once. You may contact the linux-distros mailing list after a
+fix has been agreed on and you fully understand the requirements that
+doing so will impose on you and the kernel community.
+
+The different lists have different goals and the linux-distros rules do
+not contribute to actually fixing any potential security problems.
+
+CVE assignment
+--------------
+
+The security team does not assign CVEs, nor do we require them for
+reports or fixes, as this can needlessly complicate the process and may
+delay the bug handling. If a reporter wishes to have a CVE identifier
+assigned, they should find one by themselves, for example by contacting
+MITRE directly. However under no circumstances will a patch inclusion
+be delayed to wait for a CVE identifier to arrive.
+
+Non-disclosure agreements
+-------------------------
+
+The Linux kernel security team is not a formal body and therefore unable
+to enter any non-disclosure agreements.
diff --git a/Documentation/process/stable-api-nonsense.rst b/Documentation/process/stable-api-nonsense.rst
new file mode 100644
index 000000000..a9625ab1f
--- /dev/null
+++ b/Documentation/process/stable-api-nonsense.rst
@@ -0,0 +1,204 @@
+.. _stable_api_nonsense:
+
+The Linux Kernel Driver Interface
+==================================
+
+(all of your questions answered and then some)
+
+Greg Kroah-Hartman <greg@kroah.com>
+
+This is being written to try to explain why Linux **does not have a binary
+kernel interface, nor does it have a stable kernel interface**.
+
+.. note::
+
+ Please realize that this article describes the **in kernel** interfaces, not
+ the kernel to userspace interfaces.
+
+ The kernel to userspace interface is the one that application programs use,
+ the syscall interface. That interface is **very** stable over time, and
+ will not break. I have old programs that were built on a pre 0.9something
+ kernel that still work just fine on the latest 2.6 kernel release.
+ That interface is the one that users and application programmers can count
+ on being stable.
+
+
+Executive Summary
+-----------------
+
+You think you want a stable kernel interface, but you really do not, and
+you don't even know it. What you want is a stable running driver, and
+you get that only if your driver is in the main kernel tree. You also
+get lots of other good benefits if your driver is in the main kernel
+tree, all of which has made Linux into such a strong, stable, and mature
+operating system which is the reason you are using it in the first
+place.
+
+
+Intro
+-----
+
+It's only the odd person who wants to write a kernel driver that needs
+to worry about the in-kernel interfaces changing. For the majority of
+the world, they neither see this interface, nor do they care about it at
+all.
+
+First off, I'm not going to address **any** legal issues about closed
+source, hidden source, binary blobs, source wrappers, or any other term
+that describes kernel drivers that do not have their source code
+released under the GPL. Please consult a lawyer if you have any legal
+questions, I'm a programmer and hence, I'm just going to be describing
+the technical issues here (not to make light of the legal issues, they
+are real, and you do need to be aware of them at all times.)
+
+So, there are two main topics here, binary kernel interfaces and stable
+kernel source interfaces. They both depend on each other, but we will
+discuss the binary stuff first to get it out of the way.
+
+
+Binary Kernel Interface
+-----------------------
+
+Assuming that we had a stable kernel source interface for the kernel, a
+binary interface would naturally happen too, right? Wrong. Please
+consider the following facts about the Linux kernel:
+
+ - Depending on the version of the C compiler you use, different kernel
+ data structures will contain different alignment of structures, and
+ possibly include different functions in different ways (putting
+ functions inline or not.) The individual function organization
+ isn't that important, but the different data structure padding is
+ very important.
+
+ - Depending on what kernel build options you select, a wide range of
+ different things can be assumed by the kernel:
+
+ - different structures can contain different fields
+ - Some functions may not be implemented at all, (i.e. some locks
+ compile away to nothing for non-SMP builds.)
+ - Memory within the kernel can be aligned in different ways,
+ depending on the build options.
+
+ - Linux runs on a wide range of different processor architectures.
+ There is no way that binary drivers from one architecture will run
+ on another architecture properly.
+
+Now a number of these issues can be addressed by simply compiling your
+module for the exact specific kernel configuration, using the same exact
+C compiler that the kernel was built with. This is sufficient if you
+want to provide a module for a specific release version of a specific
+Linux distribution. But multiply that single build by the number of
+different Linux distributions and the number of different supported
+releases of the Linux distribution and you quickly have a nightmare of
+different build options on different releases. Also realize that each
+Linux distribution release contains a number of different kernels, all
+tuned to different hardware types (different processor types and
+different options), so for even a single release you will need to create
+multiple versions of your module.
+
+Trust me, you will go insane over time if you try to support this kind
+of release, I learned this the hard way a long time ago...
+
+
+Stable Kernel Source Interfaces
+-------------------------------
+
+This is a much more "volatile" topic if you talk to people who try to
+keep a Linux kernel driver that is not in the main kernel tree up to
+date over time.
+
+Linux kernel development is continuous and happens at a rapid pace, never
+slowing down. As such, the kernel developers find bugs in
+current interfaces, or figure out a better way to do things. If they do
+that, they then fix the current interfaces to work better. When they do
+so, function names may change, structures may grow or shrink, and
+function parameters may be reworked. If this happens, all of the
+instances of where this interface is used within the kernel are fixed up
+at the same time, ensuring that everything continues to work properly.
+
+As a specific example of this, the in-kernel USB interfaces have
+undergone at least three different reworks over the lifetime of this
+subsystem. These reworks were done to address a number of different
+issues:
+
+ - A change from a synchronous model of data streams to an asynchronous
+ one. This reduced the complexity of a number of drivers and
+ increased the throughput of all USB drivers such that we are now
+ running almost all USB devices at their maximum speed possible.
+ - A change was made in the way data packets were allocated from the
+ USB core by USB drivers so that all drivers now needed to provide
+ more information to the USB core to fix a number of documented
+ deadlocks.
+
+This is in stark contrast to a number of closed source operating systems
+which have had to maintain their older USB interfaces over time. This
+provides the ability for new developers to accidentally use the old
+interfaces and do things in improper ways, causing the stability of the
+operating system to suffer.
+
+In both of these instances, all developers agreed that these were
+important changes that needed to be made, and they were made, with
+relatively little pain. If Linux had to ensure that it would preserve a
+stable source interface, a new interface would have been created, and
+the older, broken one would have had to be maintained over time, leading
+to extra work for the USB developers. Since all Linux USB developers do
+their work on their own time, asking programmers to do extra work for no
+gain, for free, is not a possibility.
+
+Security issues are also very important for Linux. When a
+security issue is found, it is fixed in a very short amount of time. A
+number of times this has caused internal kernel interfaces to be
+reworked to prevent the security problem from occurring. When this
+happens, all drivers that use the interfaces are also fixed at the
+same time, ensuring that the security problem is fixed and cannot
+come back at some future time accidentally. If the internal interfaces
+were not allowed to change, fixing this kind of security problem and
+ensuring that it could not happen again would not be possible.
+
+Kernel interfaces are cleaned up over time. If there is no one using a
+current interface, it is deleted. This ensures that the kernel remains
+as small as possible, and that all potential interfaces are tested as
+well as they can be (unused interfaces are pretty much impossible to
+test for validity.)
+
+
+What to do
+----------
+
+So, if you have a Linux kernel driver that is not in the main kernel
+tree, what are you, a developer, supposed to do? Releasing a binary
+driver for every different kernel version for every distribution is a
+nightmare, and trying to keep up with an ever changing kernel interface
+is also a rough job.
+
+Simple, get your kernel driver into the main kernel tree (remember we are
+talking about drivers released under a GPL-compatible license here, if your
+code doesn't fall under this category, good luck, you are on your own here,
+you leech). If your driver is in the tree, and a kernel interface changes,
+it will be fixed up by the person who did the kernel change in the first
+place. This ensures that your driver is always buildable, and works over
+time, with very little effort on your part.
+
+The very good side effects of having your driver in the main kernel tree
+are:
+
+ - The quality of the driver will rise as the maintenance costs (to the
+ original developer) will decrease.
+ - Other developers will add features to your driver.
+ - Other people will find and fix bugs in your driver.
+ - Other people will find tuning opportunities in your driver.
+ - Other people will update the driver for you when external interface
+ changes require it.
+ - The driver automatically gets shipped in all Linux distributions
+ without having to ask the distros to add it.
+
+As Linux supports a larger number of different devices "out of the box"
+than any other operating system, and it supports these devices on more
+different processor architectures than any other operating system, this
+proven type of development model must be doing something right :)
+
+
+
+------
+
+Thanks to Randy Dunlap, Andrew Morton, David Brownell, Hanna Linder,
+Robert Love, and Nishanth Aravamudan for their review and comments on
+early drafts of this paper.
diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
new file mode 100644
index 000000000..41f1e07ab
--- /dev/null
+++ b/Documentation/process/stable-kernel-rules.rst
@@ -0,0 +1,235 @@
+.. _stable_kernel_rules:
+
+Everything you ever wanted to know about Linux -stable releases
+===============================================================
+
+Rules on what kind of patches are accepted, and which ones are not, into the
+"-stable" tree:
+
+ - It or an equivalent fix must already exist in Linus' tree (upstream).
+ - It must be obviously correct and tested.
+ - It cannot be bigger than 100 lines, with context.
+ - It must follow the
+ :ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
+ rules.
+ - It must either fix a real bug that bothers people or just add a device ID.
+ To elaborate on the former:
+
+ - It fixes a problem like an oops, a hang, data corruption, a real security
+ issue, a hardware quirk, a build error (but not for things marked
+ CONFIG_BROKEN), or some "oh, that's not good" issue.
+ - Serious issues as reported by a user of a distribution kernel may also
+ be considered if they fix a notable performance or interactivity issue.
+ As these fixes are not as obvious and have a higher risk of a subtle
+ regression they should only be submitted by a distribution kernel
+ maintainer and include an addendum linking to a bugzilla entry if it
+ exists and additional information on the user-visible impact.
+ - No "This could be a problem..." type of things like a "theoretical race
+ condition", unless an explanation of how the bug can be exploited is also
+ provided.
+ - No "trivial" fixes without benefit for users (spelling changes, whitespace
+ cleanups, etc).
+
+
+Procedure for submitting patches to the -stable tree
+----------------------------------------------------
+
+.. note::
+
+ Security patches should not be handled (solely) by the -stable review
+ process but should follow the procedures in
+ :ref:`Documentation/process/security-bugs.rst <securitybugs>`.
+
+There are three options to submit a change to -stable trees:
+
+ 1. Add a 'stable tag' to the description of a patch you then submit for
+ mainline inclusion.
+ 2. Ask the stable team to pick up a patch already mainlined.
+ 3. Submit a patch to the stable team that is equivalent to a change already
+ mainlined.
+
+The sections below describe each of the options in more detail.
+
+:ref:`option_1` is **strongly** preferred; it is the easiest and most common.
+:ref:`option_2` is mainly meant for changes where backporting was not considered
+at the time of submission. :ref:`option_3` is an alternative to the two earlier
+options for cases where a mainlined patch needs adjustments to apply in older
+series (for example due to API changes).
+
+When using option 2 or 3 you can ask for your change to be included in specific
+stable series. When doing so, ensure the fix or an equivalent is applicable,
+submitted, or already present in all newer stable trees still supported. This is
+meant to prevent regressions that users might later encounter on updating, if
+e.g. a fix merged for 5.19-rc1 would be backported to 5.10.y, but not to 5.15.y.
+
+.. _option_1:
+
+Option 1
+********
+
+To have a patch you submit for mainline inclusion later automatically picked up
+for stable trees, add the tag
+
+.. code-block:: none
+
+ Cc: stable@vger.kernel.org
+
+in the sign-off area. Once the patch is mainlined it will be applied to the
+stable tree without anything else needing to be done by the author or
+subsystem maintainer.
+
+To send additional instructions to the stable team, use a shell-style inline
+comment:
+
+ * To specify any additional patch prerequisites for cherry picking use the
+ following format in the sign-off area:
+
+ .. code-block:: none
+
+ Cc: <stable@vger.kernel.org> # 3.3.x: a1f84a3: sched: Check for idle
+ Cc: <stable@vger.kernel.org> # 3.3.x: 1b9508f: sched: Rate-limit newidle
+ Cc: <stable@vger.kernel.org> # 3.3.x: fd21073: sched: Fix affinity logic
+ Cc: <stable@vger.kernel.org> # 3.3.x
+ Signed-off-by: Ingo Molnar <mingo@elte.hu>
+
+ The tag sequence has the meaning of:
+
+ .. code-block:: none
+
+ git cherry-pick a1f84a3
+ git cherry-pick 1b9508f
+ git cherry-pick fd21073
+ git cherry-pick <this commit>
+
+ * For patches that may have kernel version prerequisites specify them using
+ the following format in the sign-off area:
+
+ .. code-block:: none
+
+ Cc: <stable@vger.kernel.org> # 3.3.x
+
+ The tag has the meaning of:
+
+ .. code-block:: none
+
+ git cherry-pick <this commit>
+
+ For each "-stable" tree starting with the specified version.
+
+ Note, such tagging is unnecessary if the stable team can derive the
+ appropriate versions from Fixes: tags.
+
+ * To delay pick up of patches, use the following format:
+
+ .. code-block:: none
+
+ Cc: <stable@vger.kernel.org> # after 4 weeks in mainline
+
+ * For any other requests, just add a note to the stable tag. This, for
+   example, can be used to point out known problems:
+
+ .. code-block:: none
+
+ Cc: <stable@vger.kernel.org> # see patch description, needs adjustments for <= 6.3
+
+.. _option_2:
+
+Option 2
+********
+
+If the patch has already been merged into mainline, send an email to
+stable@vger.kernel.org containing the subject of the patch, the commit ID,
+why you think it should be applied, and what kernel versions you wish it to
+be applied to.
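+
+A minimal example of such a request might look like this (the commit ID, the
+subject and the versions are, of course, made up):
+
+.. code-block:: none
+
+   Subject: Please backport to 6.1.y and 6.6.y
+
+   Please apply commit aaaabbbbccccdddd ("foo: Fix leak in foo_probe()")
+   to the 6.1.y and 6.6.y trees. It fixes a memory leak that users of the
+   foo driver have reported and it applies cleanly to both trees.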
+
+.. _option_3:
+
+Option 3
+********
+
+Send the patch, after verifying that it follows the above rules, to
+stable@vger.kernel.org and mention the kernel versions you wish it to be applied
+to. When doing so, you must note the upstream commit ID in the changelog of your
+submission with a separate line above the commit text, like this:
+
+.. code-block:: none
+
+ commit <sha1> upstream.
+
+or alternatively:
+
+.. code-block:: none
+
+ [ Upstream commit <sha1> ]
+
+If the submitted patch deviates from the original upstream patch (for example
+because it had to be adjusted for the older API), this must be very clearly
+documented and justified in the patch description.
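+
+One way to prepare such a submission is to cherry-pick the mainline commit
+onto the target stable branch and record the upstream ID while doing so. This
+is only a sketch; the branch name, commit ID, and subject prefix are
+placeholders:
+
+.. code-block:: none
+
+   git checkout -b backport-5.15 stable/linux-5.15.y
+   git cherry-pick -x <mainline sha1>   # records "cherry picked from commit ..."
+   # Resolve conflicts and adjust for the older API, then reword the changelog
+   # so that its first line reads: commit <sha1> upstream.
+   git format-patch -1 --subject-prefix="PATCH 5.15.y"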
+
+
+Following the submission
+------------------------
+
+The sender will receive an ACK when the patch has been accepted into the
+queue, or a NAK if the patch is rejected. This response might take a few
+days, according to the schedules of the stable team members.
+
+If accepted, the patch will be added to the -stable queue, for review by other
+developers and by the relevant subsystem maintainer.
+
+
+Review cycle
+------------
+
+ - When the -stable maintainers decide on a review cycle, the patches will be
+   sent to the review committee and to the maintainer of the affected area of
+   the patch (unless the submitter is the maintainer of that area), and CC'd
+   to the linux-kernel mailing list.
+ - The review committee has 48 hours in which to ACK or NAK the patch.
+ - If the patch is rejected by a member of the committee, or linux-kernel
+ members object to the patch, bringing up issues that the maintainers and
+ members did not realize, the patch will be dropped from the queue.
+ - The ACKed patches will be posted again as part of a release candidate
+   (-rc) to be tested by developers and testers.
+ - Usually only one -rc release is made; however, if there are any
+   outstanding issues, some patches may be modified or dropped, or additional
+   patches may be queued. Additional -rc releases are then released and
+   tested until no issues are found.
+ - Responding to the -rc releases can be done on the mailing list by sending
+ a "Tested-by:" email with any testing information desired. The "Tested-by:"
+ tags will be collected and added to the release commit.
+ - At the end of the review cycle, the new -stable release will be released
+ containing all the queued and tested patches.
+ - Security patches will be accepted into the -stable tree directly from the
+   security kernel team, and will not go through the normal review cycle.
+ Contact the kernel security team for more details on this procedure.
+
+
+Trees
+-----
+
+ - The queues of patches, for both completed versions and in-progress
+   versions, can be found at:
+
+ https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
+
+ - The finalized and tagged releases of all stable kernels can be found
+ in separate branches per version at:
+
+ https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
+
+ - The release candidates of all stable kernel versions can be found at:
+
+ https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git/
+
+ .. warning::
+ The -stable-rc tree is a snapshot in time of the stable-queue tree and
+ will change frequently, hence will be rebased often. It should only be
+ used for testing purposes (e.g. to be consumed by CI systems).
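+
+   For example, to build-test the current release candidate of a particular
+   series (the branch name is illustrative; branches follow the usual
+   ``linux-<version>.y`` naming):
+
+   .. code-block:: none
+
+      git clone --depth 1 --branch linux-6.1.y \
+          https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
+      cd linux-stable-rc && make defconfig && make -j"$(nproc)"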
+
+
+Review committee
+----------------
+
+ - This is made up of a number of kernel developers who have volunteered for
+ this task, and a few that haven't.
diff --git a/Documentation/process/submit-checklist.rst b/Documentation/process/submit-checklist.rst
new file mode 100644
index 000000000..b1bc2d37b
--- /dev/null
+++ b/Documentation/process/submit-checklist.rst
@@ -0,0 +1,120 @@
+.. _submitchecklist:
+
+Linux Kernel patch submission checklist
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Here are some basic things that developers should do if they want to see their
+kernel patch submissions accepted more quickly.
+
+These are all above and beyond the documentation that is provided in
+:ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
+and elsewhere regarding submitting Linux kernel patches.
+
+
+1) If you use a facility then #include the file that defines/declares
+ that facility. Don't depend on other header files pulling in ones
+ that you use.
+
+2) Builds cleanly:
+
+ a) with applicable or modified ``CONFIG`` options ``=y``, ``=m``, and
+ ``=n``. No ``gcc`` warnings/errors, no linker warnings/errors.
+
+ b) Passes ``allnoconfig``, ``allmodconfig``
+
+ c) Builds successfully when using ``O=builddir``
+
+ d) Any Documentation/ changes build successfully without new warnings/errors.
+ Use ``make htmldocs`` or ``make pdfdocs`` to check the build and
+ fix any issues.
+
+3) Builds on multiple CPU architectures by using local cross-compile tools
+ or some other build farm.
+
+4) ppc64 is a good architecture for cross-compilation checking because it
+ tends to use ``unsigned long`` for 64-bit quantities.
+
+5) Check your patch for general style as detailed in
+ :ref:`Documentation/process/coding-style.rst <codingstyle>`.
+ Check for trivial violations with the patch style checker prior to
+ submission (``scripts/checkpatch.pl``).
+   You should be able to justify all violations that remain in
+   your patch. (See the example commands after this list.)
+
+6) Any new or modified ``CONFIG`` options do not muck up the config menu and
+   default to off unless they meet the exception criteria documented in
+   "Menu attributes: default value" in ``Documentation/kbuild/kconfig-language.rst``.
+
+7) All new ``Kconfig`` options have help text.
+
+8) Has been carefully reviewed with respect to relevant ``Kconfig``
+ combinations. This is very hard to get right with testing -- brainpower
+ pays off here.
+
+9) Check cleanly with sparse.
+
+10) Use ``make checkstack`` and fix any problems that it finds.
+
+ .. note::
+
+ ``checkstack`` does not point out problems explicitly,
+ but any one function that uses more than 512 bytes on the stack is a
+ candidate for change.
+
+11) Include :ref:`kernel-doc <kernel_doc>` to document global kernel APIs.
+ (Not required for static functions, but OK there also.) Use
+ ``make htmldocs`` or ``make pdfdocs`` to check the
+ :ref:`kernel-doc <kernel_doc>` and fix any issues.
+
+12) Has been tested with ``CONFIG_PREEMPT``, ``CONFIG_DEBUG_PREEMPT``,
+ ``CONFIG_DEBUG_SLAB``, ``CONFIG_DEBUG_PAGEALLOC``, ``CONFIG_DEBUG_MUTEXES``,
+ ``CONFIG_DEBUG_SPINLOCK``, ``CONFIG_DEBUG_ATOMIC_SLEEP``,
+ ``CONFIG_PROVE_RCU`` and ``CONFIG_DEBUG_OBJECTS_RCU_HEAD`` all
+ simultaneously enabled.
+
+13) Has been build- and runtime tested with and without ``CONFIG_SMP`` and
+    ``CONFIG_PREEMPT``.
+
+14) All codepaths have been exercised with all lockdep features enabled.
+
+15) All new ``/proc`` entries are documented under ``Documentation/``
+
+16) All new kernel boot parameters are documented in
+ ``Documentation/admin-guide/kernel-parameters.rst``.
+
+17) All new module parameters are documented with ``MODULE_PARM_DESC()``
+
+18) All new userspace interfaces are documented in ``Documentation/ABI/``.
+ See ``Documentation/ABI/README`` for more information.
+ Patches that change userspace interfaces should be CCed to
+ linux-api@vger.kernel.org.
+
+19) Has been checked with injection of at least slab and page-allocation
+ failures. See ``Documentation/fault-injection/``.
+
+ If the new code is substantial, addition of subsystem-specific fault
+ injection might be appropriate.
+
+20) Newly-added code has been compiled with ``gcc -W`` (use
+ ``make KCFLAGS=-W``). This will generate lots of noise, but is good
+ for finding bugs like "warning: comparison between signed and unsigned".
+
+21) Tested after it has been merged into the -mm patchset to make sure
+ that it still works with all of the other queued patches and various
+ changes in the VM, VFS, and other subsystems.
+
+22) All memory barriers {e.g., ``barrier()``, ``rmb()``, ``wmb()``} need a
+ comment in the source code that explains the logic of what they are doing
+ and why.
+
+23) If any ioctl's are added by the patch, then also update
+ ``Documentation/userspace-api/ioctl/ioctl-number.rst``.
+
+24) If your modified source code depends on or uses any of the kernel
+ APIs or features that are related to the following ``Kconfig`` symbols,
+ then test multiple builds with the related ``Kconfig`` symbols disabled
+ and/or ``=m`` (if that option is available) [not all of these at the
+ same time, just various/random combinations of them]:
+
+    ``CONFIG_SMP``, ``CONFIG_SYSFS``, ``CONFIG_PROC_FS``, ``CONFIG_INPUT``,
+    ``CONFIG_PCI``, ``CONFIG_BLOCK``, ``CONFIG_PM``, ``CONFIG_MAGIC_SYSRQ``,
+    ``CONFIG_NET``, ``CONFIG_INET=n`` (but the latter with ``CONFIG_NET=y``).
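+
+For reference, here is a rough sketch of shell commands covering several of
+the build and static-check items above (2, 3, 5, 9 and 10); the patch file
+name, cross-compiler prefix, and output directory are only illustrative::
+
+    # Item 2: build with different configurations and a separate output dir
+    make allnoconfig && make -j"$(nproc)"
+    make allmodconfig && make -j"$(nproc)"
+    make O=/tmp/kbuild defconfig && make O=/tmp/kbuild -j"$(nproc)"
+
+    # Item 3: cross-compile for another architecture (toolchain assumed present)
+    make ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- defconfig
+    make ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- -j"$(nproc)"
+
+    # Item 5: run the style checker on a generated patch
+    ./scripts/checkpatch.pl 0001-my-change.patch
+
+    # Items 9 and 10: sparse and stack-usage checks
+    make C=1 -j"$(nproc)"      # C=1 runs sparse on files being recompiled
+    make checkstack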
diff --git a/Documentation/process/submitting-patches.rst b/Documentation/process/submitting-patches.rst
new file mode 100644
index 000000000..efac910e2
--- /dev/null
+++ b/Documentation/process/submitting-patches.rst
@@ -0,0 +1,871 @@
+.. _submittingpatches:
+
+Submitting patches: the essential guide to getting your code into the kernel
+============================================================================
+
+For a person or company who wishes to submit a change to the Linux
+kernel, the process can sometimes be daunting if you're not familiar
+with "the system." This text is a collection of suggestions which
+can greatly increase the chances of your change being accepted.
+
+This document contains a large number of suggestions in a relatively terse
+format. For detailed information on how the kernel development process
+works, see Documentation/process/development-process.rst. Also, read
+Documentation/process/submit-checklist.rst
+for a list of items to check before submitting code.
+For device tree binding patches, read
+Documentation/devicetree/bindings/submitting-patches.rst.
+
+This documentation assumes that you're using ``git`` to prepare your patches.
+If you're unfamiliar with ``git``, you would be well-advised to learn how to
+use it; it will make your life as a kernel developer, and life in general,
+much easier.
+
+Some subsystems and maintainer trees have additional information about
+their workflow and expectations, see
+:ref:`Documentation/process/maintainer-handbooks.rst <maintainer_handbooks_main>`.
+
+Obtain a current source tree
+----------------------------
+
+If you do not have a repository with the current kernel source handy, use
+``git`` to obtain one. You'll want to start with the mainline repository,
+which can be grabbed with::
+
+ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+
+Note, however, that you may not want to develop against the mainline tree
+directly. Most subsystem maintainers run their own trees and want to see
+patches prepared against those trees. See the **T:** entry for the subsystem
+in the MAINTAINERS file to find that tree, or simply ask the maintainer if
+the tree is not listed there.
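+
+For example, ``scripts/get_maintainer.pl`` can print the relevant tree(s)
+along with the maintainers; the path below is only an illustration::
+
+    ./scripts/get_maintainer.pl --scm -f drivers/net/ethernet/
+
+which should list the trees from the matching MAINTAINERS entries, if any.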
+
+.. _describe_changes:
+
+Describe your changes
+---------------------
+
+Describe your problem. Whether your patch is a one-line bug fix or
+5000 lines of a new feature, there must be an underlying problem that
+motivated you to do this work. Convince the reviewer that there is a
+problem worth fixing and that it makes sense for them to read past the
+first paragraph.
+
+Describe user-visible impact. Straight up crashes and lockups are
+pretty convincing, but not all bugs are that blatant. Even if the
+problem was spotted during code review, describe the impact you think
+it can have on users. Keep in mind that the majority of Linux
+installations run kernels from secondary stable trees or
+vendor/product-specific trees that cherry-pick only specific patches
+from upstream, so include anything that could help route your change
+downstream: provoking circumstances, excerpts from dmesg, crash
+descriptions, performance regressions, latency spikes, lockups, etc.
+
+Quantify optimizations and trade-offs. If you claim improvements in
+performance, memory consumption, stack footprint, or binary size,
+include numbers that back them up. But also describe non-obvious
+costs. Optimizations usually aren't free but trade-offs between CPU,
+memory, and readability; or, when it comes to heuristics, between
+different workloads. Describe the expected downsides of your
+optimization so that the reviewer can weigh costs against benefits.
+
+Once the problem is established, describe what you are actually doing
+about it in technical detail. It's important to describe the change
+in plain English for the reviewer to verify that the code is behaving
+as you intend it to.
+
+The maintainer will thank you if you write your patch description in a
+form which can be easily pulled into Linux's source code management
+system, ``git``, as a "commit log". See :ref:`the_canonical_patch_format`.
+
+Solve only one problem per patch. If your description starts to get
+long, that's a sign that you probably need to split up your patch.
+See :ref:`split_changes`.
+
+When you submit or resubmit a patch or patch series, include the
+complete patch description and justification for it. Don't just
+say that this is version N of the patch (series). Don't expect the
+subsystem maintainer to refer back to earlier patch versions or referenced
+URLs to find the patch description and put that into the patch.
+I.e., the patch (series) and its description should be self-contained.
+This benefits both the maintainers and reviewers. Some reviewers
+probably didn't even receive earlier versions of the patch.
+
+Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
+instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
+to do frotz", as if you are giving orders to the codebase to change
+its behaviour.
+
+If you want to refer to a specific commit, don't just refer to the
+SHA-1 ID of the commit. Please also include the oneline summary of
+the commit, to make it easier for reviewers to know what it is about.
+Example::
+
+ Commit e21d2170f36602ae2708 ("video: remove unnecessary
+ platform_set_drvdata()") removed the unnecessary
+ platform_set_drvdata(), but left the variable "dev" unused,
+ delete it.
+
+You should also be sure to use at least the first twelve characters of the
+SHA-1 ID. The kernel repository holds a *lot* of objects, making
+collisions with shorter IDs a real possibility. Bear in mind that, even if
+there is no collision with your six-character ID now, that condition may
+change five years from now.
+
+If related discussions or any other background information behind the change
+can be found on the web, add 'Link:' tags pointing to it. If the patch is a
+result of some earlier mailing list discussions or something documented on the
+web, point to it.
+
+When linking to mailing list archives, preferably use the lore.kernel.org
+message archiver service. To create the link URL, use the contents of the
+``Message-Id`` header of the message without the surrounding angle brackets.
+For example::
+
+ Link: https://lore.kernel.org/r/30th.anniversary.repost@klaava.Helsinki.FI/
+
+Please check the link to make sure that it is actually working and points
+to the relevant message.
+
+However, try to make your explanation understandable without external
+resources. In addition to giving a URL to a mailing list archive or bug,
+summarize the relevant points of the discussion that led to the
+patch as submitted.
+
+In case your patch fixes a bug, use the 'Closes:' tag with a URL referencing
+the report in the mailing list archives or a public bug tracker. For example::
+
+ Closes: https://example.com/issues/1234
+
+Some bug trackers have the ability to close issues automatically when a
+commit with such a tag is applied. Some bots monitoring mailing lists can
+also track such tags and take certain actions. Private bug trackers and
+invalid URLs are forbidden.
+
+If your patch fixes a bug in a specific commit, e.g. you found an issue using
+``git bisect``, please use the 'Fixes:' tag with the first 12 characters of
+the SHA-1 ID, and the one line summary. Do not split the tag across multiple
+lines; tags are exempt from the "wrap at 75 columns" rule in order to simplify
+parsing scripts. For example::
+
+ Fixes: 54a4f0239f2e ("KVM: MMU: make kvm_mmu_zap_page() return the number of pages it actually freed")
+
+The following ``git config`` settings can be used to add a pretty format for
+outputting the above style in the ``git log`` or ``git show`` commands::
+
+ [core]
+ abbrev = 12
+ [pretty]
+ fixes = Fixes: %h (\"%s\")
+
+An example call::
+
+ $ git log -1 --pretty=fixes 54a4f0239f2e
+ Fixes: 54a4f0239f2e ("KVM: MMU: make kvm_mmu_zap_page() return the number of pages it actually freed")
+
+.. _split_changes:
+
+Separate your changes
+---------------------
+
+Separate each **logical change** into a separate patch.
+
+For example, if your changes include both bug fixes and performance
+enhancements for a single driver, separate those changes into two
+or more patches. If your changes include an API update, and a new
+driver which uses that new API, separate those into two patches.
+
+On the other hand, if you make a single change to numerous files,
+group those changes into a single patch. Thus a single logical change
+is contained within a single patch.
+
+The point to remember is that each patch should make an easily understood
+change that can be verified by reviewers. Each patch should be justifiable
+on its own merits.
+
+If one patch depends on another patch in order for a change to be
+complete, that is OK. Simply note **"this patch depends on patch X"**
+in your patch description.
+
+When dividing your change into a series of patches, take special care to
+ensure that the kernel builds and runs properly after each patch in the
+series. Developers using ``git bisect`` to track down a problem can end up
+splitting your patch series at any point; they will not thank you if you
+introduce bugs in the middle.
+
+If you cannot condense your patch set into a smaller set of patches,
+then only post say 15 or so at a time and wait for review and integration.
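+
+``git`` makes it reasonably easy to carve one large change into a series of
+logical patches after the fact. A rough sketch (the file name and base branch
+are placeholders)::
+
+    # stage and commit one logical change at a time, hunk by hunk
+    git add -p drivers/foo/foo_main.c
+    git commit -s
+
+    # or split, squash and reorder existing commits on a work branch
+    git rebase -i origin/master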
+
+
+
+Style-check your changes
+------------------------
+
+Check your patch for basic style violations, details of which can be
+found in Documentation/process/coding-style.rst.
+Failure to do so simply wastes
+the reviewers' time and will get your patch rejected, probably
+without even being read.
+
+One significant exception is when moving code from one file to
+another -- in this case you should not modify the moved code at all in
+the same patch which moves it. This clearly delineates the act of
+moving the code and your changes. This greatly aids review of the
+actual differences and allows tools to better track the history of
+the code itself.
+
+Check your patches with the patch style checker prior to submission
+(scripts/checkpatch.pl). Note, though, that the style checker should be
+viewed as a guide, not as a replacement for human judgment. If your code
+looks better with a violation, then it's probably best left alone.
+
+The checker reports at three levels:
+ - ERROR: things that are very likely to be wrong
+ - WARNING: things requiring careful review
+ - CHECK: things requiring thought
+
+You should be able to justify all violations that remain in your
+patch.
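+
+For example, checkpatch can be run on a generated patch, or on uncommitted
+changes via stdin (the patch name is only an illustration)::
+
+    ./scripts/checkpatch.pl 0001-foo-fix-bar.patch
+    git diff | ./scripts/checkpatch.pl --no-signoff -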
+
+
+Select the recipients for your patch
+------------------------------------
+
+You should always copy the appropriate subsystem maintainer(s) and list(s) on
+any patch to code that they maintain; look through the MAINTAINERS file and the
+source code revision history to see who those maintainers are. The script
+scripts/get_maintainer.pl can be very useful at this step (pass paths to your
+patches as arguments to scripts/get_maintainer.pl). If you cannot find a
+maintainer for the subsystem you are working on, Andrew Morton
+(akpm@linux-foundation.org) serves as a maintainer of last resort.
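+
+For example, running the script on your generated patches (the file names are
+placeholders) prints suggested recipients::
+
+    ./scripts/get_maintainer.pl outgoing/*.patch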
+
+linux-kernel@vger.kernel.org should be used by default for all patches, but the
+volume on that list has caused a number of developers to tune it out. Please
+do not spam unrelated lists and unrelated people, though.
+
+Many kernel-related lists are hosted on vger.kernel.org; you can find a
+list of them at http://vger.kernel.org/vger-lists.html. There are
+kernel-related lists hosted elsewhere as well, though.
+
+Do not send more than 15 patches at once to the vger mailing lists!!!
+
+Linus Torvalds is the final arbiter of all changes accepted into the
+Linux kernel. His e-mail address is <torvalds@linux-foundation.org>.
+He gets a lot of e-mail, and, at this point, very few patches go through
+Linus directly, so typically you should do your best to -avoid-
+sending him e-mail.
+
+If you have a patch that fixes an exploitable security bug, send that patch
+to security@kernel.org. For severe bugs, a short embargo may be considered
+to allow distributors to get the patch out to users; in such cases,
+obviously, the patch should not be sent to any public lists. See also
+Documentation/process/security-bugs.rst.
+
+Patches that fix a severe bug in a released kernel should be directed
+toward the stable maintainers by putting a line like this::
+
+ Cc: stable@vger.kernel.org
+
+into the sign-off area of your patch (note, NOT an email recipient). You
+should also read Documentation/process/stable-kernel-rules.rst
+in addition to this document.
+
+If changes affect userland-kernel interfaces, please send the MAN-PAGES
+maintainer (as listed in the MAINTAINERS file) a man-pages patch, or at
+least a notification of the change, so that some information makes its way
+into the manual pages. User-space API changes should also be copied to
+linux-api@vger.kernel.org.
+
+
+No MIME, no links, no compression, no attachments. Just plain text
+-------------------------------------------------------------------
+
+Linus and other kernel developers need to be able to read and comment
+on the changes you are submitting. It is important for a kernel
+developer to be able to "quote" your changes, using standard e-mail
+tools, so that they may comment on specific portions of your code.
+
+For this reason, all patches should be submitted by e-mail "inline". The
+easiest way to do this is with ``git send-email``, which is strongly
+recommended. An interactive tutorial for ``git send-email`` is available at
+https://git-send-email.io.
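+
+A typical invocation looks something like the following; the addresses and
+file names are placeholders, and the real recipients should come from
+scripts/get_maintainer.pl and the MAINTAINERS file::
+
+    git send-email --to=maintainer@example.org \
+        --cc=linux-kernel@vger.kernel.org \
+        outgoing/*.patch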
+
+If you choose not to use ``git send-email``:
+
+.. warning::
+
+ Be wary of your editor's word-wrap corrupting your patch,
+ if you choose to cut-n-paste your patch.
+
+Do not attach the patch as a MIME attachment, compressed or not.
+Many popular e-mail applications will not always transmit a MIME
+attachment as plain text, making it impossible to comment on your
+code. A MIME attachment also takes Linus a bit more time to process,
+decreasing the likelihood of your MIME-attached change being accepted.
+
+Exception: If your mailer is mangling patches then someone may ask
+you to re-send them using MIME.
+
+See Documentation/process/email-clients.rst for hints about configuring
+your e-mail client so that it sends your patches untouched.
+
+Respond to review comments
+--------------------------
+
+Your patch will almost certainly get comments from reviewers on ways in
+which the patch can be improved, in the form of a reply to your email. You must
+respond to those comments; ignoring reviewers is a good way to get ignored in
+return. You can simply reply to their emails to answer their comments. Review
+comments or questions that do not lead to a code change should almost certainly
+bring about a comment or changelog entry so that the next reviewer better
+understands what is going on.
+
+Be sure to tell the reviewers what changes you are making and to thank them
+for their time. Code review is a tiring and time-consuming process, and
+reviewers sometimes get grumpy. Even in that case, though, respond
+politely and address the problems they have pointed out. When sending a next
+version, add a ``patch changelog`` to the cover letter or to individual patches
+explaining the differences from the previous submission (see
+:ref:`the_canonical_patch_format`).
+
+See Documentation/process/email-clients.rst for recommendations on email
+clients and mailing list etiquette.
+
+.. _interleaved_replies:
+
+Use trimmed interleaved replies in email discussions
+----------------------------------------------------
+Top-posting is strongly discouraged in Linux kernel development
+discussions. Interleaved (or "inline") replies make conversations much
+easier to follow. For more details see:
+https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
+
+As is frequently quoted on the mailing list::
+
+ A: http://en.wikipedia.org/wiki/Top_post
+  Q: Where do I find info about this thing called top-posting?
+ A: Because it messes up the order in which people normally read text.
+ Q: Why is top-posting such a bad thing?
+ A: Top-posting.
+ Q: What is the most annoying thing in e-mail?
+
+Similarly, please trim all unneeded quotations that aren't relevant
+to your reply. This makes responses easier to find, and saves time and
+space. For more details see: http://daringfireball.net/2007/07/on_top ::
+
+ A: No.
+ Q: Should I include quotations after my reply?
+
+.. _resend_reminders:
+
+Don't get discouraged - or impatient
+------------------------------------
+
+After you have submitted your change, be patient and wait. Reviewers are
+busy people and may not get to your patch right away.
+
+Once upon a time, patches used to disappear into the void without comment,
+but the development process works more smoothly than that now. You should
+receive comments within a week or so; if that does not happen, make sure
+that you have sent your patches to the right place. Wait for a minimum of
+one week before resubmitting or pinging reviewers - possibly longer during
+busy times like merge windows.
+
+It's also ok to resend the patch or the patch series after a couple of
+weeks with the word "RESEND" added to the subject line::
+
+ [PATCH Vx RESEND] sub/sys: Condensed patch summary
+
+Don't add "RESEND" when you are submitting a modified version of your
+patch or patch series - "RESEND" only applies to resubmission of a
+patch or patch series which have not been modified in any way from the
+previous submission.
+
+
+Include PATCH in the subject
+-----------------------------
+
+Due to high e-mail traffic to Linus, and to linux-kernel, it is common
+convention to prefix your subject line with [PATCH]. This lets Linus
+and other kernel developers more easily distinguish patches from other
+e-mail discussions.
+
+``git send-email`` will do this for you automatically.
+
+
+Sign your work - the Developer's Certificate of Origin
+------------------------------------------------------
+
+To improve tracking of who did what, especially with patches that can
+percolate to their final resting place in the kernel through several
+layers of maintainers, we've introduced a "sign-off" procedure on
+patches that are being emailed around.
+
+The sign-off is a simple line at the end of the explanation for the
+patch, which certifies that you wrote it or otherwise have the right to
+pass it on as an open-source patch. The rules are pretty simple: if you
+can certify the below:
+
+Developer's Certificate of Origin 1.1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+By making a contribution to this project, I certify that:
+
+ (a) The contribution was created in whole or in part by me and I
+ have the right to submit it under the open source license
+ indicated in the file; or
+
+ (b) The contribution is based upon previous work that, to the best
+ of my knowledge, is covered under an appropriate open source
+ license and I have the right under that license to submit that
+ work with modifications, whether created in whole or in part
+ by me, under the same open source license (unless I am
+ permitted to submit under a different license), as indicated
+ in the file; or
+
+ (c) The contribution was provided directly to me by some other
+ person who certified (a), (b) or (c) and I have not modified
+ it.
+
+ (d) I understand and agree that this project and the contribution
+ are public and that a record of the contribution (including all
+ personal information I submit with it, including my sign-off) is
+ maintained indefinitely and may be redistributed consistent with
+ this project or the open source license(s) involved.
+
+then you just add a line saying::
+
+ Signed-off-by: Random J Developer <random@developer.example.org>
+
+using a known identity (sorry, no anonymous contributions.)
+This will be done for you automatically if you use ``git commit -s``.
+Reverts should also include "Signed-off-by". ``git revert -s`` does that
+for you.
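+
+If you forgot to sign off a series of local commits, ``git rebase --signoff``
+can add your Signed-off-by to each of them; for example (the base ref is a
+placeholder)::
+
+    git rebase --signoff origin/master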
+
+Some people also put extra tags at the end. They'll just be ignored for
+now, but you can do this to mark internal company procedures or just
+point out some special detail about the sign-off.
+
+Any further SoBs (Signed-off-by:'s) following the author's SoB are from
+people handling and transporting the patch, but were not involved in its
+development. SoB chains should reflect the **real** route a patch took
+as it was propagated to the maintainers and ultimately to Linus, with
+the first SoB entry signalling primary authorship of a single author.
+
+
+When to use Acked-by:, Cc:, and Co-developed-by:
+------------------------------------------------
+
+The Signed-off-by: tag indicates that the signer was involved in the
+development of the patch, or that he/she was in the patch's delivery path.
+
+If a person was not directly involved in the preparation or handling of a
+patch but wishes to signify and record their approval of it then they can
+ask to have an Acked-by: line added to the patch's changelog.
+
+Acked-by: is often used by the maintainer of the affected code when that
+maintainer neither contributed to nor forwarded the patch.
+
+Acked-by: is not as formal as Signed-off-by:. It is a record that the acker
+has at least reviewed the patch and has indicated acceptance. Hence patch
+mergers will sometimes manually convert an acker's "yep, looks good to me"
+into an Acked-by: (but note that it is usually better to ask for an
+explicit ack).
+
+Acked-by: does not necessarily indicate acknowledgement of the entire patch.
+For example, if a patch affects multiple subsystems and has an Acked-by: from
+one subsystem maintainer then this usually indicates acknowledgement of just
+the part which affects that maintainer's code. Judgement should be used here.
+When in doubt people should refer to the original discussion in the mailing
+list archives.
+
+If a person has had the opportunity to comment on a patch, but has not
+provided such comments, you may optionally add a ``Cc:`` tag to the patch.
+This is the only tag which might be added without an explicit action by the
+person it names - but it should indicate that this person was copied on the
+patch. This tag documents that potentially interested parties
+have been included in the discussion.
+
+Co-developed-by: states that the patch was co-created by multiple developers;
+it is used to give attribution to co-authors (in addition to the author
+attributed by the From: tag) when several people work on a single patch. Since
+Co-developed-by: denotes authorship, every Co-developed-by: must be immediately
+followed by a Signed-off-by: of the associated co-author. Standard sign-off
+procedure applies, i.e. the ordering of Signed-off-by: tags should reflect the
+chronological history of the patch insofar as possible, regardless of whether
+the author is attributed via From: or Co-developed-by:. Notably, the last
+Signed-off-by: must always be that of the developer submitting the patch.
+
+Note, the From: tag is optional when the From: author is also the person (and
+email) listed in the From: line of the email header.
+
+Example of a patch submitted by the From: author::
+
+ <changelog>
+
+ Co-developed-by: First Co-Author <first@coauthor.example.org>
+ Signed-off-by: First Co-Author <first@coauthor.example.org>
+ Co-developed-by: Second Co-Author <second@coauthor.example.org>
+ Signed-off-by: Second Co-Author <second@coauthor.example.org>
+ Signed-off-by: From Author <from@author.example.org>
+
+Example of a patch submitted by a Co-developed-by: author::
+
+ From: From Author <from@author.example.org>
+
+ <changelog>
+
+ Co-developed-by: Random Co-Author <random@coauthor.example.org>
+ Signed-off-by: Random Co-Author <random@coauthor.example.org>
+ Signed-off-by: From Author <from@author.example.org>
+ Co-developed-by: Submitting Co-Author <sub@coauthor.example.org>
+ Signed-off-by: Submitting Co-Author <sub@coauthor.example.org>
+
+
+Using Reported-by:, Tested-by:, Reviewed-by:, Suggested-by: and Fixes:
+----------------------------------------------------------------------
+
+The Reported-by tag gives credit to people who find bugs and report them and it
+hopefully inspires them to help us again in the future. The tag is intended for
+bugs; please do not use it to credit feature requests. The tag should be
+followed by a Closes: tag pointing to the report, unless the report is not
+available on the web. The Link: tag can be used instead of Closes: if the patch
+fixes a part of the issue(s) being reported. Please note that if the bug was
+reported in private, then ask for permission first before using the Reported-by
+tag.
+
+A Tested-by: tag indicates that the patch has been successfully tested (in
+some environment) by the person named. This tag informs maintainers that
+some testing has been performed, provides a means to locate testers for
+future patches, and ensures credit for the testers.
+
+Reviewed-by:, instead, indicates that the patch has been reviewed and found
+acceptable according to the Reviewer's Statement:
+
+Reviewer's statement of oversight
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+By offering my Reviewed-by: tag, I state that:
+
+ (a) I have carried out a technical review of this patch to
+ evaluate its appropriateness and readiness for inclusion into
+ the mainline kernel.
+
+ (b) Any problems, concerns, or questions relating to the patch
+ have been communicated back to the submitter. I am satisfied
+ with the submitter's response to my comments.
+
+ (c) While there may be things that could be improved with this
+ submission, I believe that it is, at this time, (1) a
+ worthwhile modification to the kernel, and (2) free of known
+ issues which would argue against its inclusion.
+
+ (d) While I have reviewed the patch and believe it to be sound, I
+ do not (unless explicitly stated elsewhere) make any
+ warranties or guarantees that it will achieve its stated
+ purpose or function properly in any given situation.
+
+A Reviewed-by tag is a statement of opinion that the patch is an
+appropriate modification of the kernel without any remaining serious
+technical issues. Any interested reviewer (who has done the work) can
+offer a Reviewed-by tag for a patch. This tag serves to give credit to
+reviewers and to inform maintainers of the degree of review which has been
+done on the patch. Reviewed-by: tags, when supplied by reviewers known to
+understand the subject area and to perform thorough reviews, will normally
+increase the likelihood of your patch getting into the kernel.
+
+Both Tested-by and Reviewed-by tags, once received on the mailing list from a
+tester or reviewer, should be added by the author to the applicable patches
+when sending the next version. However, if the patch has changed substantially
+in the following version, these tags might no longer be applicable and thus
+should be removed.
+Usually removal of someone's Tested-by or Reviewed-by tags should be mentioned
+in the patch changelog (after the '---' separator).
+
+A Suggested-by: tag indicates that the patch idea is suggested by the person
+named and ensures credit to the person for the idea. Please note that this
+tag should not be added without the reporter's permission, especially if the
+idea was not posted in a public forum. That said, if we diligently credit our
+idea reporters, they will, hopefully, be inspired to help us again in the
+future.
+
+A Fixes: tag indicates that the patch fixes an issue in a previous commit. It
+is used to make it easy to determine where a bug originated, which can help
+review a bug fix. This tag also assists the stable kernel team in determining
+which stable kernel versions should receive your fix. This is the preferred
+method for indicating a bug fixed by the patch. See :ref:`describe_changes`
+for more details.
+
+Note: Attaching a Fixes: tag does not subvert the stable kernel rules
+process nor the requirement to Cc: stable@vger.kernel.org on all stable
+patch candidates. For more information, please read
+Documentation/process/stable-kernel-rules.rst.
+
+.. _the_canonical_patch_format:
+
+The canonical patch format
+--------------------------
+
+This section describes how the patch itself should be formatted. Note
+that, if you have your patches stored in a ``git`` repository, proper patch
+formatting can be had with ``git format-patch``. The tools cannot create
+the necessary text, though, so read the instructions below anyway.
+
+The canonical patch subject line is::
+
+ Subject: [PATCH 001/123] subsystem: summary phrase
+
+The canonical patch message body contains the following:
+
+ - A ``from`` line specifying the patch author, followed by an empty
+ line (only needed if the person sending the patch is not the author).
+
+ - The body of the explanation, line wrapped at 75 columns, which will
+ be copied to the permanent changelog to describe this patch.
+
+ - An empty line.
+
+ - The ``Signed-off-by:`` lines, described above, which will
+ also go in the changelog.
+
+ - A marker line containing simply ``---``.
+
+ - Any additional comments not suitable for the changelog.
+
+ - The actual patch (``diff`` output).
+
+The Subject line format makes it very easy to sort the emails
+alphabetically by subject line - pretty much any email reader will
+support that - since the sequence number is zero-padded,
+the numerical and alphabetic sort is the same.
+
+The ``subsystem`` in the email's Subject should identify which
+area or subsystem of the kernel is being patched.
+
+The ``summary phrase`` in the email's Subject should concisely
+describe the patch which that email contains. The ``summary
+phrase`` should not be a filename. Do not use the same ``summary
+phrase`` for every patch in a whole patch series (where a ``patch
+series`` is an ordered sequence of multiple, related patches).
+
+Bear in mind that the ``summary phrase`` of your email becomes a
+globally-unique identifier for that patch. It propagates all the way
+into the ``git`` changelog. The ``summary phrase`` may later be used in
+developer discussions which refer to the patch. People will want to
+google for the ``summary phrase`` to read discussion regarding that
+patch. It will also be the only thing that people may quickly see
+when, two or three months later, they are going through perhaps
+thousands of patches using tools such as ``gitk`` or ``git log
+--oneline``.
+
+For these reasons, the ``summary`` must be no more than 70-75
+characters, and it must describe both what the patch changes, as well
+as why the patch might be necessary. It is challenging to be both
+succinct and descriptive, but that is what a well-written summary
+should do.
+
+The ``summary phrase`` may be prefixed by tags enclosed in square
+brackets: "Subject: [PATCH <tag>...] <summary phrase>". The tags are
+not considered part of the summary phrase, but describe how the patch
+should be treated. Common tags might include a version descriptor if
+multiple versions of the patch have been sent out in response to
+comments (i.e., "v1, v2, v3"), or "RFC" to indicate a request for
+comments.
+
+If there are four patches in a patch series the individual patches may
+be numbered like this: 1/4, 2/4, 3/4, 4/4. This assures that developers
+understand the order in which the patches should be applied and that
+they have reviewed or applied all of the patches in the patch series.
+
+Here are some good example Subjects::
+
+ Subject: [PATCH 2/5] ext2: improve scalability of bitmap searching
+ Subject: [PATCH v2 01/27] x86: fix eflags tracking
+ Subject: [PATCH v2] sub/sys: Condensed patch summary
+ Subject: [PATCH v2 M/N] sub/sys: Condensed patch summary
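+
+``git format-patch`` can generate the numbering and version prefixes for you;
+for example (the revision range is a placeholder)::
+
+    git format-patch -v2 --cover-letter -o outgoing/ HEAD~4
+
+produces subjects of the form "[PATCH v2 0/4] ...", "[PATCH v2 1/4] ..." and
+so on.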
+
+The ``from`` line must be the very first line in the message body,
+and has the form:
+
+ From: Patch Author <author@example.com>
+
+The ``from`` line specifies who will be credited as the author of the
+patch in the permanent changelog. If the ``from`` line is missing,
+then the ``From:`` line from the email header will be used to determine
+the patch author in the changelog.
+
+The explanation body will be committed to the permanent source
+changelog, so should make sense to a competent reader who has long since
+forgotten the immediate details of the discussion that might have led to
+this patch. Including symptoms of the failure which the patch addresses
+(kernel log messages, oops messages, etc.) is especially useful for
+people who might be searching the commit logs looking for the applicable
+patch. The text should be written in such detail that, when read
+weeks, months or even years later, it can give the reader the needed
+details to grasp the reasoning for **why** the patch was created.
+
+If a patch fixes a compile failure, it may not be necessary to include
+_all_ of the compile failures; just enough that it is likely that
+someone searching for the patch can find it. As in the ``summary
+phrase``, it is important to be both succinct as well as descriptive.
+
+The ``---`` marker line serves the essential purpose of marking for
+patch handling tools where the changelog message ends.
+
+One good use for the additional comments after the ``---`` marker is
+for a ``diffstat``, to show what files have changed, and the number of
+inserted and deleted lines per file. A ``diffstat`` is especially useful
+on bigger patches. If you are going to include a ``diffstat`` after the
+``---`` marker, please use ``diffstat`` options ``-p 1 -w 70`` so that
+filenames are listed from the top of the kernel source tree and don't
+use too much horizontal space (easily fit in 80 columns, maybe with some
+indentation). (``git`` generates appropriate diffstats by default.)
+
+Other comments relevant only to the moment or the maintainer, not
+suitable for the permanent changelog, should also go here. A good
+example of such comments might be ``patch changelogs`` which describe
+what has changed between the v1 and v2 version of the patch.
+
+Please put this information **after** the ``---`` line which separates
+the changelog from the rest of the patch. The version information is
+not part of the changelog which gets committed to the git tree. It is
+additional information for the reviewers. If it's placed above the
+commit tags, it needs manual interaction to remove it. If it is below
+the separator line, it gets automatically stripped off when applying the
+patch::
+
+ <commit message>
+ ...
+ Signed-off-by: Author <author@mail>
+ ---
+ V2 -> V3: Removed redundant helper function
+ V1 -> V2: Cleaned up coding style and addressed review comments
+
+ path/to/file | 5+++--
+ ...
+
+See more details on the proper patch format in the following
+references.
+
+.. _backtraces:
+
+Backtraces in commit messages
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Backtraces help document the call chain leading to a problem. However,
+not all backtraces are helpful. For example, early boot call chains are
+unique and obvious. Copying the full dmesg output verbatim, however,
+adds distracting information like timestamps, module lists, register and
+stack dumps.
+
+Therefore, the most useful backtraces should distill the relevant
+information from the dump, which makes it easier to focus on the real
+issue. Here is an example of a well-trimmed backtrace::
+
+ unchecked MSR access error: WRMSR to 0xd51 (tried to write 0x0000000000000064)
+ at rIP: 0xffffffffae059994 (native_write_msr+0x4/0x20)
+ Call Trace:
+ mba_wrmsr
+ update_domains
+ rdtgroup_mkdir
+
+.. _explicit_in_reply_to:
+
+Explicit In-Reply-To headers
+----------------------------
+
+It can be helpful to manually add In-Reply-To: headers to a patch
+(e.g., when using ``git send-email``) to associate the patch with
+previous relevant discussion, e.g. to link a bug fix to the email with
+the bug report. However, for a multi-patch series, it is generally
+best to avoid using In-Reply-To: to link to older versions of the
+series. This way multiple versions of the patch don't become an
+unmanageable forest of references in email clients. If a link is
+helpful, you can use the https://lore.kernel.org/ redirector (e.g., in
+the cover email text) to link to an earlier version of the patch series.
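+
+For example (the message ID is a placeholder)::
+
+    git send-email --in-reply-to='<bug-report-message-id@example.org>' \
+        0001-foo-fix-bar.patch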
+
+
+Providing base tree information
+-------------------------------
+
+When other developers receive your patches and start the review process,
+it is often useful for them to know where in the tree history they
+should place your work. This is particularly useful for automated CI
+processes that attempt to run a series of tests in order to establish
+the quality of your submission before the maintainer starts the review.
+
+If you are using ``git format-patch`` to generate your patches, you can
+automatically include the base tree information in your submission by
+using the ``--base`` flag. The easiest and most convenient way to use
+this option is with topical branches::
+
+ $ git checkout -t -b my-topical-branch master
+ Branch 'my-topical-branch' set up to track local branch 'master'.
+ Switched to a new branch 'my-topical-branch'
+
+ [perform your edits and commits]
+
+ $ git format-patch --base=auto --cover-letter -o outgoing/ master
+ outgoing/0000-cover-letter.patch
+ outgoing/0001-First-Commit.patch
+ outgoing/...
+
+When you open ``outgoing/0000-cover-letter.patch`` for editing, you will
+notice that it will have the ``base-commit:`` trailer at the very
+bottom, which provides the reviewer and the CI tools enough information
+to properly perform ``git am`` without worrying about conflicts::
+
+ $ git checkout -b patch-review [base-commit-id]
+ Switched to a new branch 'patch-review'
+ $ git am patches.mbox
+ Applying: First Commit
+ Applying: ...
+
+Please see ``man git-format-patch`` for more information about this
+option.
+
+.. note::
+
+ The ``--base`` feature was introduced in git version 2.9.0.
+
+If you are not using git to format your patches, you can still include
+the same ``base-commit`` trailer to indicate the commit hash of the tree
+on which your work is based. You should add it either in the cover
+letter or in the first patch of the series and it should be placed
+either below the ``---`` line or at the very bottom of all other
+content, right before your email signature.
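+
+For example, when placed below the ``---`` line (the hash is a placeholder)::
+
+    Signed-off-by: Author <author@example.org>
+    ---
+    base-commit: 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b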
+
+
+References
+----------
+
+Andrew Morton, "The perfect patch" (tpp).
+ <https://www.ozlabs.org/~akpm/stuff/tpp.txt>
+
+Jeff Garzik, "Linux kernel patch submission format".
+ <https://web.archive.org/web/20180829112450/http://linux.yyz.us/patch-format.html>
+
+Greg Kroah-Hartman, "How to piss off a kernel subsystem maintainer".
+ <http://www.kroah.com/log/linux/maintainer.html>
+
+ <http://www.kroah.com/log/linux/maintainer-02.html>
+
+ <http://www.kroah.com/log/linux/maintainer-03.html>
+
+ <http://www.kroah.com/log/linux/maintainer-04.html>
+
+ <http://www.kroah.com/log/linux/maintainer-05.html>
+
+ <http://www.kroah.com/log/linux/maintainer-06.html>
+
+NO!!!! No more huge patch bombs to linux-kernel@vger.kernel.org people!
+ <https://lore.kernel.org/r/20050711.125305.08322243.davem@davemloft.net>
+
+Kernel Documentation/process/coding-style.rst
+
+Linus Torvalds's mail on the canonical patch format:
+ <https://lore.kernel.org/r/Pine.LNX.4.58.0504071023190.28951@ppc970.osdl.org>
+
+Andi Kleen, "On submitting kernel patches"
+ Some strategies to get difficult or controversial changes in.
+
+ http://halobates.de/on-submitting-patches.pdf
diff --git a/Documentation/process/volatile-considered-harmful.rst b/Documentation/process/volatile-considered-harmful.rst
new file mode 100644
index 000000000..7eb6bd7c9
--- /dev/null
+++ b/Documentation/process/volatile-considered-harmful.rst
@@ -0,0 +1,125 @@
+
+.. _volatile_considered_harmful:
+
+Why the "volatile" type class should not be used
+------------------------------------------------
+
+C programmers have often taken volatile to mean that the variable could be
+changed outside of the current thread of execution; as a result, they are
+sometimes tempted to use it in kernel code when shared data structures are
+being used. In other words, they have been known to treat volatile types
+as a sort of easy atomic variable, which they are not. The use of volatile in
+kernel code is almost never correct; this document describes why.
+
+The key point to understand with regard to volatile is that its purpose is
+to suppress optimization, which is almost never what one really wants to
+do. In the kernel, one must protect shared data structures against
+unwanted concurrent access, which is very much a different task. The
+process of protecting against unwanted concurrency will also avoid almost
+all optimization-related problems in a more efficient way.
+
+Like volatile, the kernel primitives which make concurrent access to data
+safe (spinlocks, mutexes, memory barriers, etc.) are designed to prevent
+unwanted optimization. If they are being used properly, there will be no
+need to use volatile as well. If volatile is still necessary, there is
+almost certainly a bug in the code somewhere. In properly-written kernel
+code, volatile can only serve to slow things down.
+
+Consider a typical block of kernel code::
+
+ spin_lock(&the_lock);
+ do_something_on(&shared_data);
+ do_something_else_with(&shared_data);
+ spin_unlock(&the_lock);
+
+If all the code follows the locking rules, the value of shared_data cannot
+change unexpectedly while the_lock is held. Any other code which might
+want to play with that data will be waiting on the lock. The spinlock
+primitives act as memory barriers - they are explicitly written to do so -
+meaning that data accesses will not be optimized across them. So the
+compiler might think it knows what will be in shared_data, but the
+spin_lock() call, since it acts as a memory barrier, will force it to
+forget anything it knows. There will be no optimization problems with
+accesses to that data.
+
+If shared_data were declared volatile, the locking would still be
+necessary. But the compiler would also be prevented from optimizing access
+to shared_data _within_ the critical section, when we know that nobody else
+can be working with it. While the lock is held, shared_data is not
+volatile. When dealing with shared data, proper locking makes volatile
+unnecessary - and potentially harmful.
+
+The volatile storage class was originally meant for memory-mapped I/O
+registers. Within the kernel, register accesses, too, should be protected
+by locks, but one also does not want the compiler "optimizing" register
+accesses within a critical section. But, within the kernel, I/O memory
+accesses are always done through accessor functions; accessing I/O memory
+directly through pointers is frowned upon and does not work on all
+architectures. Those accessors are written to prevent unwanted
+optimization, so, once again, volatile is unnecessary.
+
+Another situation where one might be tempted to use volatile is
+when the processor is busy-waiting on the value of a variable. The right
+way to perform a busy wait is::
+
+ while (my_variable != what_i_want)
+ cpu_relax();
+
+The cpu_relax() call can lower CPU power consumption or yield to a
+hyperthreaded twin processor; it also happens to serve as a compiler
+barrier, so, once again, volatile is unnecessary. Of course,
+busy-waiting is generally an anti-social act to begin with.
+
+There are still a few rare situations where volatile makes sense in the
+kernel:
+
+ - The above-mentioned accessor functions might use volatile on
+ architectures where direct I/O memory access does work. Essentially,
+ each accessor call becomes a little critical section on its own and
+ ensures that the access happens as expected by the programmer.
+
+ - Inline assembly code which changes memory, but which has no other
+ visible side effects, risks being deleted by GCC. Adding the volatile
+ keyword to asm statements will prevent this removal.
+
+ - The jiffies variable is special in that it can have a different value
+ every time it is referenced, but it can be read without any special
+ locking. So jiffies can be volatile, but the addition of other
+ variables of this type is strongly frowned upon. Jiffies is considered
+ to be a "stupid legacy" issue (Linus's words) in this regard; fixing it
+ would be more trouble than it is worth.
+
+ - Pointers to data structures in coherent memory which might be modified
+ by I/O devices can, sometimes, legitimately be volatile. A ring buffer
+ used by a network adapter, where that adapter changes pointers to
+ indicate which descriptors have been processed, is an example of this
+ type of situation.
+
+For most code, none of the above justifications for volatile apply. As a
+result, the use of volatile is likely to be seen as a bug and will bring
+additional scrutiny to the code. Developers who are tempted to use
+volatile should take a step back and think about what they are truly trying
+to accomplish.
+
+Patches to remove volatile variables are generally welcome - as long as
+they come with a justification which shows that the concurrency issues have
+been properly thought through.
+
+
+References
+==========
+
+[1] https://lwn.net/Articles/233481/
+
+[2] https://lwn.net/Articles/233482/
+
+Credits
+=======
+
+Original impetus and research by Randy Dunlap
+
+Written by Jonathan Corbet
+
+Improvements via comments from Satyam Sharma, Johannes Stezenbach, Jesper
+Juhl, Heikki Orsila, H. Peter Anvin, Philipp Hahn, and Stefan
+Richter.