Diffstat (limited to 'Documentation/gpu')
-rw-r--r--  Documentation/gpu/amdgpu/driver-misc.rst         |  18
-rw-r--r--  Documentation/gpu/amdgpu/thermal.rst             |  30
-rw-r--r--  Documentation/gpu/automated_testing.rst          |  20
-rw-r--r--  Documentation/gpu/drivers.rst                    |   1
-rw-r--r--  Documentation/gpu/drm-kms.rst                    |   2
-rw-r--r--  Documentation/gpu/drm-mm.rst                     |  20
-rw-r--r--  Documentation/gpu/drm-uapi.rst                   |  92
-rw-r--r--  Documentation/gpu/drm-usage-stats.rst            |   1
-rw-r--r--  Documentation/gpu/drm-vm-bind-async.rst          | 309
-rw-r--r--  Documentation/gpu/i915.rst                       |  29
-rw-r--r--  Documentation/gpu/implementation_guidelines.rst  |   9
-rw-r--r--  Documentation/gpu/index.rst                      |   1
-rw-r--r--  Documentation/gpu/panfrost.rst                   |  40
-rw-r--r--  Documentation/gpu/rfc/xe.rst                     |  93
14 files changed, 588 insertions(+), 77 deletions(-)
diff --git a/Documentation/gpu/amdgpu/driver-misc.rst b/Documentation/gpu/amdgpu/driver-misc.rst
index 4321c38fef..e40e15f89f 100644
--- a/Documentation/gpu/amdgpu/driver-misc.rst
+++ b/Documentation/gpu/amdgpu/driver-misc.rst
@@ -26,12 +26,30 @@ serial_number
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
:doc: serial_number
+fru_id
+-------------
+
+.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+ :doc: fru_id
+
+manufacturer
+-------------
+
+.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+ :doc: manufacturer
+
unique_id
---------
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: unique_id
+board_info
+----------
+
+.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+ :doc: board_info
+
Accelerated Processing Units (APU) Info
---------------------------------------
diff --git a/Documentation/gpu/amdgpu/thermal.rst b/Documentation/gpu/amdgpu/thermal.rst
index 5e27e4eb39..2f6166f81e 100644
--- a/Documentation/gpu/amdgpu/thermal.rst
+++ b/Documentation/gpu/amdgpu/thermal.rst
@@ -64,6 +64,36 @@ gpu_metrics
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: gpu_metrics
+fan_curve
+---------
+
+.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
+ :doc: fan_curve
+
+acoustic_limit_rpm_threshold
+----------------------------
+
+.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
+ :doc: acoustic_limit_rpm_threshold
+
+acoustic_target_rpm_threshold
+-----------------------------
+
+.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
+ :doc: acoustic_target_rpm_threshold
+
+fan_target_temperature
+----------------------
+
+.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
+ :doc: fan_target_temperature
+
+fan_minimum_pwm
+---------------
+
+.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
+ :doc: fan_minimum_pwm
+
GFXOFF
======
diff --git a/Documentation/gpu/automated_testing.rst b/Documentation/gpu/automated_testing.rst
index 469b6fb65c..240e29d5ba 100644
--- a/Documentation/gpu/automated_testing.rst
+++ b/Documentation/gpu/automated_testing.rst
@@ -67,6 +67,19 @@ Lists the tests that for a given driver on a specific hardware revision are
known to behave unreliably. These tests won't cause a job to fail regardless of
the result. They will still be run.
+Each new flake entry must be associated with a link to the email reporting the
+bug to the author of the affected driver, the board name or Device Tree name of
+the board, the first kernel version affected, and an approximation of the
+failure rate.
+
+They should be provided in the following format::
+
+ # Bug Report: $LORE_OR_PATCHWORK_URL
+ # Board Name: broken-board.dtb
+ # Version: 6.6-rc1
+ # Failure Rate: 100
+ flaky-test
+
drivers/gpu/drm/ci/${DRIVER_NAME}-${HW_REVISION}-skips.txt
-----------------------------------------------------------
@@ -86,10 +99,13 @@ https://gitlab.freedesktop.org/janedoe/linux/-/settings/ci_cd), change the
CI/CD configuration file from .gitlab-ci.yml to
drivers/gpu/drm/ci/gitlab-ci.yml.
-3. Next time you push to this repository, you will see a CI pipeline being
+3. Request to be added to the drm/ci-ok group so that your user has the
+necessary privileges to run the CI on https://gitlab.freedesktop.org/drm/ci-ok
+
+4. Next time you push to this repository, you will see a CI pipeline being
created (eg. https://gitlab.freedesktop.org/janedoe/linux/-/pipelines)
-4. The various jobs will be run and when the pipeline is finished, all jobs
+5. The various jobs will be run and when the pipeline is finished, all jobs
should be green unless a regression has been found.
diff --git a/Documentation/gpu/drivers.rst b/Documentation/gpu/drivers.rst
index 3a52f48215..45a12e5520 100644
--- a/Documentation/gpu/drivers.rst
+++ b/Documentation/gpu/drivers.rst
@@ -18,6 +18,7 @@ GPU Driver Documentation
xen-front
afbc
komeda-kms
+ panfrost
.. only:: subproject and html
diff --git a/Documentation/gpu/drm-kms.rst b/Documentation/gpu/drm-kms.rst
index 690d2ffe72..a98a7e04e8 100644
--- a/Documentation/gpu/drm-kms.rst
+++ b/Documentation/gpu/drm-kms.rst
@@ -360,6 +360,8 @@ Format Functions Reference
.. kernel-doc:: drivers/gpu/drm/drm_fourcc.c
:export:
+.. _kms_dumb_buffer_objects:
+
Dumb Buffer Objects
===================
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index c19b34b1c0..602010cb68 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -466,40 +466,40 @@ DRM MM Range Allocator Function References
.. kernel-doc:: drivers/gpu/drm/drm_mm.c
:export:
-DRM GPU VA Manager
-==================
+DRM GPUVM
+=========
Overview
--------
-.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Overview
Split and Merge
---------------
-.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Split and Merge
Locking
-------
-.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Locking
Examples
--------
-.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Examples
-DRM GPU VA Manager Function References
---------------------------------------
+DRM GPUVM Function References
+-----------------------------
-.. kernel-doc:: include/drm/drm_gpuva_mgr.h
+.. kernel-doc:: include/drm/drm_gpuvm.h
:internal:
-.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:export:
DRM Buddy Allocator
diff --git a/Documentation/gpu/drm-uapi.rst b/Documentation/gpu/drm-uapi.rst
index 65fb3036a5..370d820be2 100644
--- a/Documentation/gpu/drm-uapi.rst
+++ b/Documentation/gpu/drm-uapi.rst
@@ -285,6 +285,83 @@ for GPU1 and GPU2 from different vendors, and a third handler for
mmapped regular files. Threads cause additional pain with signal
handling as well.
+Device reset
+============
+
+The GPU stack is complex and prone to errors, from hardware bugs and
+faulty applications to everything in between across its many layers. Some
+errors require resetting the device in order to make it usable again. This
+section describes the expectations placed on DRM and user-mode drivers when
+a device resets, and how to propagate the reset status.
+
+Device resets cannot be disabled without tainting the kernel, since disabling
+them can hang the entire kernel through shrinkers/mmu_notifiers. Userspace's
+role in device resets is to propagate the message to the application and to
+apply any special policy for blocking guilty applications, if any. A corollary
+is that debugging a hung GPU context requires hardware support to preempt such
+a GPU context while it is stopped.
+
+Kernel Mode Driver
+------------------
+
+The KMD is responsible for checking if the device needs a reset, and for
+performing it as needed. Usually a hang is detected when a job gets stuck
+executing. The KMD should keep track of resets, because userspace can query
+the reset status for a specific context at any time. This is needed to
+propagate to the rest of the stack that a reset has happened. Currently,
+this is implemented by each driver separately, with no common DRM interface.
+Ideally it would be properly integrated into the DRM scheduler to provide a
+common ground for all drivers. After a reset, the KMD should reject new
+command submissions for affected contexts.
+
+User Mode Driver
+----------------
+
+After command submission, the UMD should check whether the submission was
+accepted or rejected. After a reset, the KMD rejects submissions, and the UMD
+can issue an ioctl to the KMD to check the reset status; this can be checked
+as often as the UMD requires. After detecting a reset, the UMD reports it to
+the application using the appropriate API error code, as explained in the
+section below about robustness.
+
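+As an illustration only (there is no common DRM interface for this), a
+minimal sketch of such a query on amdgpu, which exposes a driver-specific
+context-query ioctl, assuming an already-created context ``ctx_id``:
+
+.. code-block:: c
+
+   #include <stdbool.h>
+   #include <stdint.h>
+   #include <xf86drm.h>
+   #include <amdgpu_drm.h> /* libdrm's copy of the kernel uAPI header */
+
+   /* Sketch: ask the KMD whether a reset touched this context. */
+   static bool gpu_was_reset(int fd, uint32_t ctx_id)
+   {
+           union drm_amdgpu_ctx args = {0};
+
+           args.in.op = AMDGPU_CTX_OP_QUERY_STATE2;
+           args.in.ctx_id = ctx_id;
+           if (drmIoctl(fd, DRM_IOCTL_AMDGPU_CTX, &args))
+                   return false; /* query failed */
+
+           return args.out.state.flags & AMDGPU_CTX_QUERY2_FLAGS_RESET;
+   }
+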
+Robustness
+----------
+
+The only way to try to keep a graphical API context working after a reset is if
+it complies with the robustness aspects of the graphical API that it is using.
+
+Graphical APIs provide ways for applications to deal with device resets.
+However, there is no guarantee that an app will use such features correctly,
+and userspace that doesn't support robust interfaces (like a non-robust
+OpenGL context, or an API without any robustness support, such as libva)
+leaves the robustness handling entirely to the userspace driver. There is no
+strong community consensus on what the userspace driver should do in that
+case, since all reasonable approaches have some clear downsides.
+
+OpenGL
+~~~~~~
+
+Apps using OpenGL should use the available robust interfaces, like the
+extension ``GL_ARB_robustness`` (or ``GL_EXT_robustness`` for OpenGL ES).
+This interface tells the app whether a reset has happened, and if so, all
+context state is considered lost and the app proceeds by creating a new
+context. There is no consensus on what to do if robustness is not in use.
+
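+A minimal sketch of the check, assuming a context created with a
+lose-context-on-reset notification strategy and a hypothetical helper
+``recreate_context()``:
+
+.. code-block:: c
+
+   /* GL_GLEXT_PROTOTYPES exposes the extension prototype here; real
+    * applications typically load the entry point through the usual
+    * extension mechanism instead.
+    */
+   #define GL_GLEXT_PROTOTYPES
+   #include <GL/gl.h>
+   #include <GL/glext.h>
+
+   extern void recreate_context(void); /* hypothetical app helper */
+
+   void check_for_reset(void)
+   {
+           switch (glGetGraphicsResetStatusARB()) {
+           case GL_NO_ERROR:
+                   break; /* no reset observed */
+           case GL_GUILTY_CONTEXT_RESET_ARB:
+           case GL_INNOCENT_CONTEXT_RESET_ARB:
+           case GL_UNKNOWN_CONTEXT_RESET_ARB:
+           default:
+                   /* All context state is lost; rebuild it. */
+                   recreate_context();
+                   break;
+           }
+   }
+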
+Vulkan
+~~~~~~
+
+Apps using Vulkan should check for ``VK_ERROR_DEVICE_LOST`` on submissions.
+This error code means, among other things, that a device reset has happened
+and that the app needs to recreate its contexts to keep going.
+
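+A minimal sketch, with ``recreate_device()`` standing in for the
+application's (hypothetical) device re-creation path:
+
+.. code-block:: c
+
+   #include <vulkan/vulkan.h>
+
+   extern void recreate_device(void); /* hypothetical app helper */
+
+   void submit_and_check(VkQueue queue, const VkSubmitInfo *submit,
+                         VkFence fence)
+   {
+           VkResult res = vkQueueSubmit(queue, 1, submit, fence);
+
+           if (res == VK_ERROR_DEVICE_LOST) {
+                   /* The logical device and everything created from
+                    * it are gone and must be recreated.
+                    */
+                   recreate_device();
+           }
+   }
+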
+Reporting causes of resets
+--------------------------
+
+Apart from propagating the reset through the stack so apps can recover, it's
+really useful for driver developers to learn more about what caused the reset in
+the first place. DRM devices should make use of devcoredump to store relevant
+information about the reset, so this information can be added to user bug
+reports.
+
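+A minimal sketch of how a KMD might hand a reset snapshot to devcoredump;
+``struct my_gpu`` and the snapshot helpers are hypothetical:
+
+.. code-block:: c
+
+   #include <linux/devcoredump.h>
+   #include <linux/device.h>
+   #include <linux/vmalloc.h>
+
+   struct my_gpu {               /* hypothetical driver state */
+           struct device *dev;
+   };
+
+   size_t my_snapshot_size(struct my_gpu *gpu);                  /* hypothetical */
+   void my_snapshot_capture(struct my_gpu *gpu, void *buf, size_t len);
+
+   static void my_gpu_coredump(struct my_gpu *gpu)
+   {
+           size_t len = my_snapshot_size(gpu);
+           void *data = vmalloc(len);
+
+           if (!data)
+                   return;
+
+           my_snapshot_capture(gpu, data, len);
+           /* devcoredump takes ownership of @data and vfree()s it. */
+           dev_coredumpv(gpu->dev, data, len, GFP_KERNEL);
+   }
+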
.. _drm_driver_ioctl:
IOCTL Support on Device Nodes
@@ -450,12 +527,12 @@ VBlank event handling
The DRM core exposes two vertical blank related ioctls:
-DRM_IOCTL_WAIT_VBLANK
+:c:macro:`DRM_IOCTL_WAIT_VBLANK`
This takes a struct drm_wait_vblank structure as its argument, and
it is used to block or request a signal when a specified vblank
event occurs.
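+
+    A minimal illustration, using libdrm's ``drmIoctl()`` wrapper to wait
+    for the next vblank on the first CRTC:
+
+    .. code-block:: c
+
+       #include <xf86drm.h>
+
+       void wait_one_vblank(int fd)
+       {
+               union drm_wait_vblank vbl = {0};
+
+               vbl.request.type = _DRM_VBLANK_RELATIVE;
+               vbl.request.sequence = 1; /* block until the next vblank */
+               drmIoctl(fd, DRM_IOCTL_WAIT_VBLANK, &vbl);
+       }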
-DRM_IOCTL_MODESET_CTL
+:c:macro:`DRM_IOCTL_MODESET_CTL`
This was only used for user-mode-setting drivers around modesetting
changes to allow the kernel to update the vblank interrupt after
mode setting, since on many devices the vertical blank counter is
@@ -478,11 +555,18 @@ The index is used in cases where a densely packed identifier for a CRTC is
needed, for instance a bitmask of CRTCs. The member possible_crtcs of struct
drm_mode_get_plane is an example.
-DRM_IOCTL_MODE_GETRESOURCES populates a structure with an array of CRTC ID's,
-and the CRTC index is its position in this array.
+:c:macro:`DRM_IOCTL_MODE_GETRESOURCES` populates a structure with an array of
+CRTC IDs, and the CRTC index is its position in this array.
.. kernel-doc:: include/uapi/drm/drm.h
:internal:
.. kernel-doc:: include/uapi/drm/drm_mode.h
:internal:
+
+
+dma-buf interoperability
+========================
+
+Please see Documentation/userspace-api/dma-buf-alloc-exchange.rst for
+information on how dma-buf is integrated and exposed within DRM.
diff --git a/Documentation/gpu/drm-usage-stats.rst b/Documentation/gpu/drm-usage-stats.rst
index 044e6b2ed1..7aca5c7a7b 100644
--- a/Documentation/gpu/drm-usage-stats.rst
+++ b/Documentation/gpu/drm-usage-stats.rst
@@ -169,3 +169,4 @@ Driver specific implementations
-------------------------------
:ref:`i915-usage-stats`
+:ref:`panfrost-usage-stats`
diff --git a/Documentation/gpu/drm-vm-bind-async.rst b/Documentation/gpu/drm-vm-bind-async.rst
new file mode 100644
index 0000000000..3d709d0209
--- /dev/null
+++ b/Documentation/gpu/drm-vm-bind-async.rst
@@ -0,0 +1,309 @@
+.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
+
+====================
+Asynchronous VM_BIND
+====================
+
+Nomenclature:
+=============
+
+* ``VRAM``: On-device memory. Sometimes referred to as device local memory.
+
+* ``gpu_vm``: A virtual GPU address space. Typically per process, but
+ can be shared by multiple processes.
+
+* ``VM_BIND``: An operation or a list of operations to modify a gpu_vm using
+  an IOCTL. The operations include mapping and unmapping system memory or
+  VRAM.
+
+* ``syncobj``: A container that abstracts synchronization objects. The
+ synchronization objects can be either generic, like dma-fences or
+ driver specific. A syncobj typically indicates the type of the
+ underlying synchronization object.
+
+* ``in-syncobj``: Argument to a VM_BIND IOCTL, the VM_BIND operation waits
+ for these before starting.
+
+* ``out-syncobj``: Argument to a VM_BIND IOCTL, the VM_BIND operation
+ signals these when the bind operation is complete.
+
+* ``dma-fence``: A cross-driver synchronization object. A basic
+ understanding of dma-fences is required to digest this
+ document. Please refer to the ``DMA Fences`` section of the
+ :doc:`dma-buf doc </driver-api/dma-buf>`.
+
+* ``memory fence``: A synchronization object, different from a dma-fence.
+ A memory fence uses the value of a specified memory location to determine
+ signaled status. A memory fence can be awaited and signaled by both
+ the GPU and CPU. Memory fences are sometimes referred to as
+ user-fences, userspace-fences or gpu futexes and do not necessarily obey
+ the dma-fence rule of signaling within a "reasonable amount of time".
+ The kernel should thus avoid waiting for memory fences with locks held.
+
+* ``long-running workload``: A workload that may take more than the
+ current stipulated dma-fence maximum signal delay to complete and
+ which therefore needs to set the gpu_vm or the GPU execution context in
+ a certain mode that disallows completion dma-fences.
+
+* ``exec function``: An exec function is a function that revalidates all
+ affected gpu_vmas, submits a GPU command batch and registers the
+ dma_fence representing the GPU command's activity with all affected
+ dma_resvs. For completeness, although not covered by this document,
+ it's worth mentioning that an exec function may also be the
+ revalidation worker that is used by some drivers in compute /
+ long-running mode.
+
+* ``bind context``: A context identifier used for the VM_BIND
+ operation. VM_BIND operations that use the same bind context can be
+ assumed, where it matters, to complete in order of submission. No such
+ assumptions can be made for VM_BIND operations using separate bind contexts.
+
+* ``UMD``: User-mode driver.
+
+* ``KMD``: Kernel-mode driver.
+
+
+Synchronous / Asynchronous VM_BIND operation
+============================================
+
+Synchronous VM_BIND
+___________________
+With Synchronous VM_BIND, the VM_BIND operations all complete before the
+IOCTL returns. A synchronous VM_BIND takes neither in-fences nor
+out-fences. Synchronous VM_BIND may block and wait for GPU operations,
+for example swap-in or clearing, or even previous binds.
+
+Asynchronous VM_BIND
+____________________
+Asynchronous VM_BIND accepts both in-syncobjs and out-syncobjs. While the
+IOCTL may return immediately, the VM_BIND operations wait for the in-syncobjs
+before modifying the GPU page-tables, and signal the out-syncobjs when
+the modification is done, in the sense that the next exec function that
+awaits the out-syncobjs will see the change. Errors are reported
+synchronously.
+In low-memory situations the implementation may block, performing the
+VM_BIND synchronously, because there might not be enough memory
+immediately available for preparing the asynchronous operation.
+
+If the VM_BIND IOCTL takes a list or an array of operations as an argument,
+the in-syncobjs need to signal before the first operation starts to
+execute, and the out-syncobjs signal after the last operation
+completes. Operations in the operation list can be assumed, where it
+matters, to complete in order.
+
+Since asynchronous VM_BIND operations may use dma-fences embedded in
+out-syncobjs and internally in KMD to signal bind completion, any
+memory fences given as VM_BIND in-fences need to be awaited
+synchronously before the VM_BIND ioctl returns, since dma-fences,
+required to signal in a reasonable amount of time, can never be made
+to depend on memory fences that don't have such a restriction.
+
+The purpose of an Asynchronous VM_BIND operation is for user-mode
+drivers to be able to pipeline interleaved gpu_vm modifications and
+exec functions. For long-running workloads, such pipelining of a bind
+operation is not allowed and any in-fences need to be awaited
+synchronously. The reason for this is twofold. First, any memory
+fences gated by a long-running workload and used as in-syncobjs for the
+VM_BIND operation will need to be awaited synchronously anyway (see
+above). Second, any dma-fences used as in-syncobjs for VM_BIND
+operations for long-running workloads will not allow for pipelining
+anyway since long-running workloads don't allow for dma-fences as
+out-syncobjs, so while theoretically possible, their use is questionable
+and should be rejected until there is a valuable use-case.
+Note that this is not a limitation imposed by dma-fence rules, but
+rather a limitation imposed to keep KMD implementation simple. It does
+not affect using dma-fences as dependencies for the long-running
+workload itself, which is allowed by dma-fence rules, but rather for
+the VM_BIND operation only.
+
+An asynchronous VM_BIND operation may take a substantial amount of time to
+complete and signal the out_fence, in particular if the operation is
+deeply pipelined behind other VM_BIND operations and workloads
+submitted using exec functions. In that case, the UMD might want to avoid
+having a subsequent VM_BIND operation queued behind the first one when
+there are no explicit dependencies. In order to circumvent such a queue-up, a
+VM_BIND implementation may allow for VM_BIND contexts to be
+created. For each context, VM_BIND operations are guaranteed to
+complete in the order they were submitted, but that is not the case
+for VM_BIND operations executing on separate VM_BIND contexts. Instead,
+the KMD will attempt to execute such VM_BIND operations in parallel, but
+makes no guarantee that they will actually be executed in
+parallel. There may be internal implicit dependencies that only the KMD
+knows about, for example page-table structure changes. A way to attempt
+to avoid such internal dependencies is to have different VM_BIND
+contexts use separate regions of a VM.
+
+Also, for VM_BINDs on long-running gpu_vms the user-mode driver should
+typically select memory fences as out-fences, since that gives the
+kernel-mode driver greater flexibility to inject other operations into the
+bind / unbind operations, like for example inserting breakpoints into batch
+buffers. The workload execution can then easily be pipelined behind the
+bind completion, using the memory out-fence as the signal condition for
+a GPU semaphore embedded by the UMD in the workload.
+
+There is no difference in the operations supported or in
+multi-operation support between asynchronous VM_BIND and synchronous VM_BIND.
+
+Multi-operation VM_BIND IOCTL error handling and interrupts
+===========================================================
+
+The VM_BIND operations of the IOCTL may error for various reasons, for
+example due to a lack of resources to complete, or due to interrupted
+waits.
+In these situations the UMD should preferably restart the IOCTL after
+taking suitable action.
+If the UMD has over-committed a memory resource, an -ENOSPC error will be
+returned, and the UMD may then unbind resources that are not used at the
+moment and rerun the IOCTL. On -EINTR, the UMD should simply rerun the
+IOCTL, and on -ENOMEM user-space may either attempt to free known
+system memory resources or fail. If the UMD decides to fail a bind
+operation due to an error return, no additional action is needed to
+clean up the failed operation, and the VM is left in the same state
+as it was before the failing IOCTL.
+Unbind operations are guaranteed not to return any errors due to
+resource constraints, but they may return errors due to, for example,
+invalid arguments or the gpu_vm being banned.
+If an unexpected error happens during the asynchronous bind process,
+the gpu_vm will be banned, and attempts to use it after banning will
+return -ENOENT.
+
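+A minimal sketch of such a retry policy; ``xe_vm_bind_ioctl()`` and the
+resource-freeing helpers are hypothetical UMD-internal functions:
+
+.. code-block:: c
+
+   #include <errno.h>
+   #include <stdbool.h>
+
+   struct drm_xe_vm_bind;
+
+   int xe_vm_bind_ioctl(int fd, struct drm_xe_vm_bind *args); /* hypothetical */
+   bool unbind_unused_resources(void);                        /* hypothetical */
+   bool free_system_memory(void);                             /* hypothetical */
+
+   int bind_with_retry(int fd, struct drm_xe_vm_bind *args)
+   {
+           for (;;) {
+                   int ret = xe_vm_bind_ioctl(fd, args);
+
+                   if (ret == -EINTR)
+                           continue; /* interrupted wait: just rerun */
+                   if (ret == -ENOSPC && unbind_unused_resources())
+                           continue; /* freed GPU resources: rerun */
+                   if (ret == -ENOMEM && free_system_memory())
+                           continue; /* freed system memory: rerun */
+                   /* Success, or an unhandled failure; on failure the
+                    * VM is left in its pre-IOCTL state.
+                    */
+                   return ret;
+           }
+   }
+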
+Example: The Xe VM_BIND uAPI
+============================
+
+Starting with the VM_BIND operation struct, the IOCTL call can take
+zero, one or many such operations. A zero number means only the
+synchronization part of the IOCTL is carried out: an asynchronous
+VM_BIND updates the syncobjects, whereas a sync VM_BIND waits for the
+implicit dependencies to be fulfilled.
+
+.. code-block:: c
+
+ struct drm_xe_vm_bind_op {
+ /**
+ * @obj: GEM object to operate on, MBZ for MAP_USERPTR, MBZ for UNMAP
+ */
+ __u32 obj;
+
+ /** @pad: MBZ */
+ __u32 pad;
+
+ union {
+ /**
+ * @obj_offset: Offset into the object for MAP.
+ */
+ __u64 obj_offset;
+
+ /** @userptr: user virtual address for MAP_USERPTR */
+ __u64 userptr;
+ };
+
+ /**
+ * @range: Number of bytes from the object to bind to addr, MBZ for UNMAP_ALL
+ */
+ __u64 range;
+
+ /** @addr: Address to operate on, MBZ for UNMAP_ALL */
+ __u64 addr;
+
+ /**
+ * @tile_mask: Mask for which tiles to create binds for, 0 == All tiles,
+ * only applies to creating new VMAs
+ */
+ __u64 tile_mask;
+
+ /* Map (parts of) an object into the GPU virtual address range. */
+ #define XE_VM_BIND_OP_MAP 0x0
+ /* Unmap a GPU virtual address range */
+ #define XE_VM_BIND_OP_UNMAP 0x1
+ /*
+ * Map a CPU virtual address range into a GPU virtual
+ * address range.
+ */
+ #define XE_VM_BIND_OP_MAP_USERPTR 0x2
+ /* Unmap a gem object from the VM. */
+ #define XE_VM_BIND_OP_UNMAP_ALL 0x3
+ /*
+ * Make the backing memory of an address range resident if
+ * possible. Note that this doesn't pin backing memory.
+ */
+ #define XE_VM_BIND_OP_PREFETCH 0x4
+
+ /* Make the GPU map readonly. */
+ #define XE_VM_BIND_FLAG_READONLY (0x1 << 16)
+ /*
+ * Valid on a faulting VM only, do the MAP operation immediately rather
+ * than deferring the MAP to the page fault handler.
+ */
+ #define XE_VM_BIND_FLAG_IMMEDIATE (0x1 << 17)
+ /*
+ * When the NULL flag is set, the page tables are set up with a special
+ * bit which indicates that writes are dropped and all reads return zero. In
+ * the future, the NULL flag will only be valid for XE_VM_BIND_OP_MAP
+ * operations, the BO handle MBZ, and the BO offset MBZ. This flag is
+ * intended to implement VK sparse bindings.
+ */
+ #define XE_VM_BIND_FLAG_NULL (0x1 << 18)
+ /** @op: Operation to perform (lower 16 bits) and flags (upper 16 bits) */
+ __u32 op;
+
+ /** @region: Memory region to prefetch VMA to, instance not a mask */
+ __u32 region;
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+ };
+
+
+The VM_BIND IOCTL argument itself looks as follows. Note that for
+synchronous VM_BIND, the num_syncs and syncs fields must be zero. Here
+the ``exec_queue_id`` field is the VM_BIND context discussed previously,
+which is used to facilitate out-of-order VM_BINDs.
+
+.. code-block:: c
+
+ struct drm_xe_vm_bind {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @vm_id: The ID of the VM to bind to */
+ __u32 vm_id;
+
+ /**
+ * @exec_queue_id: exec_queue_id, must be of class DRM_XE_ENGINE_CLASS_VM_BIND
+ * and the exec queue must have the same vm_id. If zero, the default VM bind
+ * is used.
+ */
+ __u32 exec_queue_id;
+
+ /** @num_binds: number of binds in this IOCTL */
+ __u32 num_binds;
+
+ /* If set, perform an async VM_BIND; if clear, a sync VM_BIND */
+ #define XE_VM_BIND_IOCTL_FLAG_ASYNC (0x1 << 0)
+
+ /** @flags: Flags controlling all operations in this ioctl. */
+ __u32 flags;
+
+ union {
+ /** @bind: used if num_binds == 1 */
+ struct drm_xe_vm_bind_op bind;
+
+ /**
+ * @vector_of_binds: userptr to array of struct
+ * drm_xe_vm_bind_op if num_binds > 1
+ */
+ __u64 vector_of_binds;
+ };
+
+ /** @num_syncs: number of syncs to wait for or to signal on completion. */
+ __u32 num_syncs;
+
+ /** @pad2: MBZ */
+ __u32 pad2;
+
+ /** @syncs: pointer to struct drm_xe_sync array */
+ __u64 syncs;
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+ };
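+
+For completeness, a hedged usage sketch of a single asynchronous map using
+the structs above. The ioctl number macro ``DRM_IOCTL_XE_VM_BIND`` and the
+``drm_xe_sync`` setup are assumptions about the surrounding proposed uAPI:
+
+.. code-block:: c
+
+   #include <stdint.h>
+   #include <sys/ioctl.h>
+
+   int map_buffer_async(int fd, uint32_t vm_id, uint32_t bo_handle,
+                        uint64_t bo_size, uint64_t gpu_va,
+                        struct drm_xe_sync *out_sync)
+   {
+           struct drm_xe_vm_bind bind = {
+                   .vm_id = vm_id,
+                   .exec_queue_id = 0, /* default VM bind engine */
+                   .num_binds = 1,
+                   .bind = {
+                           .obj = bo_handle,
+                           .range = bo_size,
+                           .addr = gpu_va,
+                           .op = XE_VM_BIND_OP_MAP,
+                   },
+                   .flags = XE_VM_BIND_IOCTL_FLAG_ASYNC,
+                   .num_syncs = 1,
+                   .syncs = (uintptr_t)out_sync, /* one out-syncobj */
+           };
+
+           return ioctl(fd, DRM_IOCTL_XE_VM_BIND, &bind);
+   }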
diff --git a/Documentation/gpu/i915.rst b/Documentation/gpu/i915.rst
index 378e825754..0ca1550fd9 100644
--- a/Documentation/gpu/i915.rst
+++ b/Documentation/gpu/i915.rst
@@ -267,19 +267,22 @@ i915 driver.
Intel GPU Basics
----------------
-An Intel GPU has multiple engines. There are several engine types.
-
-- RCS engine is for rendering 3D and performing compute, this is named
- `I915_EXEC_RENDER` in user space.
-- BCS is a blitting (copy) engine, this is named `I915_EXEC_BLT` in user
- space.
-- VCS is a video encode and decode engine, this is named `I915_EXEC_BSD`
- in user space
-- VECS is video enhancement engine, this is named `I915_EXEC_VEBOX` in user
- space.
-- The enumeration `I915_EXEC_DEFAULT` does not refer to specific engine;
- instead it is to be used by user space to specify a default rendering
- engine (for 3D) that may or may not be the same as RCS.
+An Intel GPU has multiple engines. There are several engine types:
+
+- Render Command Streamer (RCS). An engine for rendering 3D and
+ performing compute.
+- Blitting Command Streamer (BCS). An engine for performing blitting and/or
+ copying operations.
+- Video Command Streamer (VCS). An engine used for video encoding and decoding. Also
+ sometimes called 'BSD' in hardware documentation.
+- Video Enhancement Command Streamer (VECS). An engine for video enhancement.
+ Also sometimes called 'VEBOX' in hardware documentation.
+- Compute Command Streamer (CCS). An engine that has access to the media and
+ GPGPU pipelines, but not the 3D pipeline.
+- Graphics Security Controller (GSCCS). A dedicated engine for internal
+ communication with the GSC controller on security-related tasks like
+ High-bandwidth Digital Content Protection (HDCP), Protected Xe Path (PXP),
+ and HuC firmware authentication.
The Intel GPU family is a family of integrated GPUs using Unified
Memory Access. For having the GPU "do work", user space will feed the
diff --git a/Documentation/gpu/implementation_guidelines.rst b/Documentation/gpu/implementation_guidelines.rst
new file mode 100644
index 0000000000..138e637dcc
--- /dev/null
+++ b/Documentation/gpu/implementation_guidelines.rst
@@ -0,0 +1,9 @@
+.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
+
+===========================================================
+Misc DRM driver uAPI- and feature implementation guidelines
+===========================================================
+
+.. toctree::
+
+ drm-vm-bind-async
diff --git a/Documentation/gpu/index.rst b/Documentation/gpu/index.rst
index e45ff09152..37e383ccf7 100644
--- a/Documentation/gpu/index.rst
+++ b/Documentation/gpu/index.rst
@@ -18,6 +18,7 @@ GPU Driver Developer's Guide
vga-switcheroo
vgaarbiter
automated_testing
+ implementation_guidelines
todo
rfc/index
diff --git a/Documentation/gpu/panfrost.rst b/Documentation/gpu/panfrost.rst
new file mode 100644
index 0000000000..b80e41f4b2
--- /dev/null
+++ b/Documentation/gpu/panfrost.rst
@@ -0,0 +1,40 @@
+.. SPDX-License-Identifier: GPL-2.0+
+
+=========================
+ drm/Panfrost Mali Driver
+=========================
+
+.. _panfrost-usage-stats:
+
+Panfrost DRM client usage stats implementation
+==============================================
+
+The drm/Panfrost driver implements the DRM client usage stats specification as
+documented in :ref:`drm-client-usage-stats`.
+
+Example of the output showing the implemented key-value pairs and the
+entirety of the currently possible format options:
+
+::
+
+ pos: 0
+ flags: 02400002
+ mnt_id: 27
+ ino: 531
+ drm-driver: panfrost
+ drm-client-id: 14
+ drm-engine-fragment: 1846584880 ns
+ drm-cycles-fragment: 1424359409
+ drm-maxfreq-fragment: 799999987 Hz
+ drm-curfreq-fragment: 799999987 Hz
+ drm-engine-vertex-tiler: 71932239 ns
+ drm-cycles-vertex-tiler: 52617357
+ drm-maxfreq-vertex-tiler: 799999987 Hz
+ drm-curfreq-vertex-tiler: 799999987 Hz
+ drm-total-memory: 290 MiB
+ drm-shared-memory: 0 MiB
+ drm-active-memory: 226 MiB
+ drm-resident-memory: 36496 KiB
+ drm-purgeable-memory: 128 KiB
+
+Possible `drm-engine-` key names are: `fragment` and `vertex-tiler`.
+`drm-curfreq-` values convey the current operating frequency for that engine.
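+
+As an aside, these keys can be sampled from userspace by reading the fdinfo
+node of an open DRM file descriptor; a minimal sketch:
+
+.. code-block:: c
+
+   #include <stdio.h>
+   #include <string.h>
+
+   /* Print the drm-* usage-stats keys for an open DRM fd. */
+   static void dump_drm_fdinfo(int fd)
+   {
+           char path[64], line[256];
+           FILE *f;
+
+           snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd);
+           f = fopen(path, "r");
+           if (!f)
+                   return;
+           while (fgets(line, sizeof(line), f))
+                   if (!strncmp(line, "drm-", 4))
+                           fputs(line, stdout);
+           fclose(f);
+   }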
diff --git a/Documentation/gpu/rfc/xe.rst b/Documentation/gpu/rfc/xe.rst
index 2516fe141d..c29113a0ac 100644
--- a/Documentation/gpu/rfc/xe.rst
+++ b/Documentation/gpu/rfc/xe.rst
@@ -67,14 +67,8 @@ platforms.
When the time comes for Xe, the protection will be lifted on Xe and kept in i915.
-Xe driver will be protected with both STAGING Kconfig and force_probe. Changes in
-the uAPI are expected while the driver is behind these protections. STAGING will
-be removed when the driver uAPI gets to a mature state where we can guarantee the
-‘no regression’ rule. Then force_probe will be lifted only for future platforms
-that will be productized with Xe driver, but not with i915.
-
-Xe – Pre-Merge Goals
-====================
+Xe – Pre-Merge Goals - Work-in-Progress
+=======================================
Drm_scheduler
-------------
@@ -94,41 +88,6 @@ depend on any other patch touching drm_scheduler itself that was not yet merged
through drm-misc. This, by itself, already includes the reach of an agreement for
uniform 1 to 1 relationship implementation / usage across drivers.
-GPU VA
-------
-Two main goals of Xe are meeting together here:
-
-1) Have an uAPI that aligns with modern UMD needs.
-
-2) Early upstream engagement.
-
-RedHat engineers working on Nouveau proposed a new DRM feature to handle keeping
-track of GPU virtual address mappings. This is still not merged upstream, but
-this aligns very well with our goals and with our VM_BIND. The engagement with
-upstream and the port of Xe towards GPUVA is already ongoing.
-
-As a key measurable result, Xe needs to be aligned with the GPU VA and working in
-our tree. Missing Nouveau patches should *not* block Xe and any needed GPUVA
-related patch should be independent and present on dri-devel or acked by
-maintainers to go along with the first Xe pull request towards drm-next.
-
-DRM_VM_BIND
------------
-Nouveau, and Xe are all implementing ‘VM_BIND’ and new ‘Exec’ uAPIs in order to
-fulfill the needs of the modern uAPI. Xe merge should *not* be blocked on the
-development of a common new drm_infrastructure. However, the Xe team needs to
-engage with the community to explore the options of a common API.
-
-As a key measurable result, the DRM_VM_BIND needs to be documented in this file
-below, or this entire block deleted if the consensus is for independent drivers
-vm_bind ioctls.
-
-Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
-Xe merged, it is mandatory to enforce the overall locking scheme for all major
-structs and list (so vm and vma). So, a consensus is needed, and possibly some
-common helpers. If helpers are needed, they should be also documented in this
-document.
-
ASYNC VM_BIND
-------------
Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
@@ -138,8 +97,8 @@ memory fences. Ideally with helper support so people don't get it wrong in all
possible ways.
As a key measurable result, the benefits of ASYNC VM_BIND and a discussion of
-various flavors, error handling and a sample API should be documented here or in
-a separate document pointed to by this document.
+various flavors, error handling and sample API suggestions are documented in
+:doc:`The ASYNC VM_BIND document </gpu/drm-vm-bind-async>`.
Userptr integration and vm_bind
-------------------------------
@@ -212,6 +171,14 @@ This item ties into the GPUVA, VM_BIND, and even long-running compute support.
As a key measurable result, we need to have a community consensus documented in
this document and the Xe driver prepared for the changes, if necessary.
+Xe – uAPI high level overview
+=============================
+
+...Warning: To be done in follow-up patches after/when/where the main consensus on the various items is individually reached.
+
+Xe – Pre-Merge Goals - Completed
+================================
+
Dev_coredump
------------
@@ -229,7 +196,37 @@ infrastructure with overall possible improvements, like multiple file support
for better organization of the dumps, snapshot support, dmesg extra print,
and whatever may make sense and help the overall infrastructure.
-Xe – uAPI high level overview
-=============================
+DRM_VM_BIND
+-----------
+Nouveau and Xe are both implementing ‘VM_BIND’ and new ‘Exec’ uAPIs in order to
+fulfill the needs of the modern uAPI. Xe merge should *not* be blocked on the
+development of a common new drm_infrastructure. However, the Xe team needs to
+engage with the community to explore the options of a common API.
-...Warning: To be done in follow up patches after/when/where the main consensus in various items are individually reached.
+As a key measurable result, the DRM_VM_BIND needs to be documented in this file
+below, or this entire block deleted if the consensus is for independent,
+per-driver vm_bind ioctls.
+
+Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
+Xe merged, it is mandatory to enforce the overall locking scheme for all major
+structs and lists (i.e. vm and vma). So, a consensus is needed, and possibly
+some common helpers. If helpers are needed, they should also be documented
+in this document.
+
+GPU VA
+------
+Two main goals of Xe are meeting together here:
+
+1) Have an uAPI that aligns with modern UMD needs.
+
+2) Early upstream engagement.
+
+Red Hat engineers working on Nouveau proposed a new DRM feature to handle keeping
+track of GPU virtual address mappings. This is still not merged upstream, but
+this aligns very well with our goals and with our VM_BIND. The engagement with
+upstream and the port of Xe towards GPUVA is already ongoing.
+
+As a key measurable result, Xe needs to be aligned with the GPU VA and working in
+our tree. Missing Nouveau patches should *not* block Xe and any needed GPUVA
+related patch should be independent and present on dri-devel or acked by
+maintainers to go along with the first Xe pull request towards drm-next.