author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-11 08:27:49 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-11 08:27:49 +0000
commit     ace9429bb58fd418f0c81d4c2835699bddf6bde6 (patch)
tree       b2d64bc10158fdd5497876388cd68142ca374ed3 /Documentation/sound/designs
parent     Initial commit. (diff)
download   linux-ace9429bb58fd418f0c81d4c2835699bddf6bde6.tar.xz
           linux-ace9429bb58fd418f0c81d4c2835699bddf6bde6.zip
Adding upstream version 6.6.15. (tag: upstream/6.6.15)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'Documentation/sound/designs')
-rw-r--r--  Documentation/sound/designs/channel-mapping-api.rst   164
-rw-r--r--  Documentation/sound/designs/compress-offload.rst      329
-rw-r--r--  Documentation/sound/designs/control-names.rst         142
-rw-r--r--  Documentation/sound/designs/index.rst                  18
-rw-r--r--  Documentation/sound/designs/jack-controls.rst          48
-rw-r--r--  Documentation/sound/designs/jack-injection.rst        166
-rw-r--r--  Documentation/sound/designs/midi-2.0.rst              566
-rw-r--r--  Documentation/sound/designs/oss-emulation.rst         336
-rw-r--r--  Documentation/sound/designs/powersave.rst              43
-rw-r--r--  Documentation/sound/designs/procfile.rst              238
-rw-r--r--  Documentation/sound/designs/seq-oss.rst               371
-rw-r--r--  Documentation/sound/designs/timestamping.rst          215
-rw-r--r--  Documentation/sound/designs/tracepoints.rst           172
13 files changed, 2808 insertions(+), 0 deletions(-)
diff --git a/Documentation/sound/designs/channel-mapping-api.rst b/Documentation/sound/designs/channel-mapping-api.rst
new file mode 100644
index 0000000000..58e6312a43
--- /dev/null
+++ b/Documentation/sound/designs/channel-mapping-api.rst
@@ -0,0 +1,164 @@

============================
ALSA PCM channel-mapping API
============================

Takashi Iwai <tiwai@suse.de>

General
=======

The channel mapping API allows a user to query the possible channel
maps and the current channel map of a stream, and optionally to modify
the channel map of the current stream.

A channel map is an array of positions, one for each PCM channel.
Typically, a stereo PCM stream has a channel map of
``{ front_left, front_right }``
while a 4.0 surround PCM stream has a channel map of
``{ front_left, front_right, rear_left, rear_right }``.

The problem, so far, was that there was no explicit standard channel
map, and applications had no way to know which channel corresponds to
which (speaker) position.  As a result, applications assigned the
wrong channels on 5.1 outputs, and sound suddenly came from the rear
speakers.  Also, some devices silently assume that center/LFE are the
third/fourth channels, while others place C/LFE as the 5th/6th
channels.

In addition, some devices such as HDMI can be configured for different
speaker positions even with the same total number of channels.
However, there was no way to specify this because no channel map
specification existed.  These are the main motivations for the new
channel mapping API.


Design
======

The channel mapping API doesn't actually introduce anything new from
the kernel/user-space ABI perspective.  It uses only the existing
control element features.

As the basic design, each PCM substream may contain a control element
providing the channel mapping information and configuration.  This
element is specified by:

* iface = SNDRV_CTL_ELEM_IFACE_PCM
* name = "Playback Channel Map" or "Capture Channel Map"
* device = the same device number as the assigned PCM substream
* index = the same index number as the assigned PCM substream

Note that the name differs depending on the PCM substream direction.

Each control element provides at least the TLV read operation and the
normal read operation.  Optionally, the write operation can be
provided to allow the user to change the channel map dynamically.

TLV
---

The TLV operation gives the list of available channel maps.  A list
item of a channel map is usually a TLV of
``type data-bytes ch0 ch1 ch2...``
where type is the TLV type value, the second argument is the total
size in bytes (not the number) of the channel values, and the rest are
the position values for each channel.

As a TLV type, either ``SNDRV_CTL_TLVT_CHMAP_FIXED``,
``SNDRV_CTL_TLVT_CHMAP_VAR`` or ``SNDRV_CTL_TLVT_CHMAP_PAIRED`` can be
used.  The ``_FIXED`` type is for a channel map with fixed channel
positions, while the latter two are for flexible channel positions:
the ``_VAR`` type is for a channel map where all channels are freely
swappable, and the ``_PAIRED`` type is for one where channels are
swappable pair-wise.  For example, with an {FL/FR/RL/RR} channel map,
the ``_PAIRED`` type would allow you only to swap to {RL/RR/FL/FR},
while the ``_VAR`` type would allow even swapping FL and RR.

These new TLV types are defined in ``sound/tlv.h``.
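
For driver writers, the kernel offers a helper,
``snd_pcm_add_chmap_ctls()`` (declared in ``include/sound/pcm.h``),
that builds such a channel-map control and its TLV list from a simple
table.  The sketch below is only an illustration; the map table and
the wrapper function are hypothetical, not taken from this document::

  #include <sound/pcm.h>

  /* hypothetical table: this device offers a stereo map and a 4.0 map */
  static const struct snd_pcm_chmap_elem my_chmaps[] = {
          { .channels = 2,
            .map = { SNDRV_CHMAP_FL, SNDRV_CHMAP_FR } },
          { .channels = 4,
            .map = { SNDRV_CHMAP_FL, SNDRV_CHMAP_FR,
                     SNDRV_CHMAP_RL, SNDRV_CHMAP_RR } },
          { } /* terminator */
  };

  static int my_driver_add_chmap(struct snd_pcm *pcm)
  {
          struct snd_pcm_chmap *chmap_info;

          /* registers a "Playback Channel Map" control whose TLV
           * read operation lists the maps given in my_chmaps */
          return snd_pcm_add_chmap_ctls(pcm, SNDRV_PCM_STREAM_PLAYBACK,
                                        my_chmaps, 4, 0, &chmap_info);
  }

Here ``4`` is the maximum number of channels and ``0`` the private
value; the returned ``chmap_info`` can be kept by the driver for later
use.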
The available channel position values are defined in
``sound/asound.h``; here is an excerpt:

::

  /* channel positions */
  enum {
        SNDRV_CHMAP_UNKNOWN = 0,
        SNDRV_CHMAP_NA,         /* N/A, silent */
        SNDRV_CHMAP_MONO,       /* mono stream */
        /* this follows the alsa-lib mixer channel value + 3 */
        SNDRV_CHMAP_FL,         /* front left */
        SNDRV_CHMAP_FR,         /* front right */
        SNDRV_CHMAP_RL,         /* rear left */
        SNDRV_CHMAP_RR,         /* rear right */
        SNDRV_CHMAP_FC,         /* front center */
        SNDRV_CHMAP_LFE,        /* LFE */
        SNDRV_CHMAP_SL,         /* side left */
        SNDRV_CHMAP_SR,         /* side right */
        SNDRV_CHMAP_RC,         /* rear center */
        /* new definitions */
        SNDRV_CHMAP_FLC,        /* front left center */
        SNDRV_CHMAP_FRC,        /* front right center */
        SNDRV_CHMAP_RLC,        /* rear left center */
        SNDRV_CHMAP_RRC,        /* rear right center */
        SNDRV_CHMAP_FLW,        /* front left wide */
        SNDRV_CHMAP_FRW,        /* front right wide */
        SNDRV_CHMAP_FLH,        /* front left high */
        SNDRV_CHMAP_FCH,        /* front center high */
        SNDRV_CHMAP_FRH,        /* front right high */
        SNDRV_CHMAP_TC,         /* top center */
        SNDRV_CHMAP_TFL,        /* top front left */
        SNDRV_CHMAP_TFR,        /* top front right */
        SNDRV_CHMAP_TFC,        /* top front center */
        SNDRV_CHMAP_TRL,        /* top rear left */
        SNDRV_CHMAP_TRR,        /* top rear right */
        SNDRV_CHMAP_TRC,        /* top rear center */
        SNDRV_CHMAP_LAST = SNDRV_CHMAP_TRC,
  };

When a PCM stream can provide more than one channel map, you can
provide multiple channel maps in a TLV container type.  The TLV data
to be returned will look like:

::

  SNDRV_CTL_TLVT_CONTAINER 96
      SNDRV_CTL_TLVT_CHMAP_FIXED 4 SNDRV_CHMAP_FC
      SNDRV_CTL_TLVT_CHMAP_FIXED 8 SNDRV_CHMAP_FL SNDRV_CHMAP_FR
      SNDRV_CTL_TLVT_CHMAP_FIXED 16 SNDRV_CHMAP_FL SNDRV_CHMAP_FR \
                                    SNDRV_CHMAP_RL SNDRV_CHMAP_RR

The channel position is provided in the lower 16 bits.  The upper bits
are used for bit flags.

::

  #define SNDRV_CHMAP_POSITION_MASK   0xffff
  #define SNDRV_CHMAP_PHASE_INVERSE   (0x01 << 16)
  #define SNDRV_CHMAP_DRIVER_SPEC     (0x02 << 16)

``SNDRV_CHMAP_PHASE_INVERSE`` indicates that the channel is phase
inverted (thus summing the left and right channels would result in
almost silence).  Some digital mic devices have this.

When ``SNDRV_CHMAP_DRIVER_SPEC`` is set, the channel position values
don't follow the standard definition above but are driver-specific.

Read Operation
--------------

The control read operation provides the current channel map of the
given stream.  The control element returns an integer array containing
the position of each channel.

When this is performed before the number of channels is specified
(i.e. before hw_params is set), it should return all channels set to
``UNKNOWN``.

Write Operation
---------------

The control write operation is optional, and only for devices that can
change the channel configuration on the fly, such as HDMI.  The user
needs to pass an integer array containing the valid channel positions
for all channels of the assigned PCM substream.

This operation is allowed only in the PCM PREPARED state.  When called
in other states, it shall return an error.
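
From user space, these TLV/read/write operations are usually accessed
through alsa-lib's channel-map API rather than raw control calls.  A
minimal sketch (assuming the standard ``default`` PCM device; error
handling trimmed) could look like this::

  #include <stdio.h>
  #include <stdlib.h>
  #include <alsa/asoundlib.h>

  int main(void)
  {
          snd_pcm_t *pcm;
          snd_pcm_chmap_query_t **maps;
          snd_pcm_chmap_t *map;
          int i, err;

          err = snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);
          if (err < 0)
                  return 1;

          /* TLV read: the list of available channel maps */
          maps = snd_pcm_query_chmaps(pcm);
          if (maps) {
                  for (i = 0; maps[i]; i++) {
                          unsigned int c;
                          printf("map %d (type %d):", i, maps[i]->type);
                          for (c = 0; c < maps[i]->map.channels; c++)
                                  printf(" %s",
                                         snd_pcm_chmap_name(maps[i]->map.pos[c]));
                          printf("\n");
                  }
                  snd_pcm_free_chmaps(maps);
          }

          /* set hw_params before reading the current map, otherwise
           * all positions are reported as UNKNOWN */
          err = snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                   SND_PCM_ACCESS_RW_INTERLEAVED,
                                   2, 48000, 1, 500000);
          if (err >= 0) {
                  /* control read: the currently active channel map */
                  map = snd_pcm_get_chmap(pcm);
                  if (map) {
                          unsigned int c;
                          printf("current:");
                          for (c = 0; c < map->channels; c++)
                                  printf(" %s", snd_pcm_chmap_name(map->pos[c]));
                          printf("\n");
                          free(map);
                  }
          }
          snd_pcm_close(pcm);
          return 0;
  }

Note that ``snd_pcm_get_chmap()`` is called only after the parameters
are set, matching the ``UNKNOWN`` behaviour described above;
``snd_pcm_set_chmap()`` would similarly be the user-space entry point
for the optional write operation.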
diff --git a/Documentation/sound/designs/compress-offload.rst b/Documentation/sound/designs/compress-offload.rst new file mode 100644 index 0000000000..655624f770 --- /dev/null +++ b/Documentation/sound/designs/compress-offload.rst @@ -0,0 +1,329 @@ +========================= +ALSA Compress-Offload API +========================= + +Pierre-Louis.Bossart <pierre-louis.bossart@linux.intel.com> + +Vinod Koul <vinod.koul@linux.intel.com> + + +Overview +======== +Since its early days, the ALSA API was defined with PCM support or +constant bitrates payloads such as IEC61937 in mind. Arguments and +returned values in frames are the norm, making it a challenge to +extend the existing API to compressed data streams. + +In recent years, audio digital signal processors (DSP) were integrated +in system-on-chip designs, and DSPs are also integrated in audio +codecs. Processing compressed data on such DSPs results in a dramatic +reduction of power consumption compared to host-based +processing. Support for such hardware has not been very good in Linux, +mostly because of a lack of a generic API available in the mainline +kernel. + +Rather than requiring a compatibility break with an API change of the +ALSA PCM interface, a new 'Compressed Data' API is introduced to +provide a control and data-streaming interface for audio DSPs. + +The design of this API was inspired by the 2-year experience with the +Intel Moorestown SOC, with many corrections required to upstream the +API in the mainline kernel instead of the staging tree and make it +usable by others. + + +Requirements +============ +The main requirements are: + +- separation between byte counts and time. Compressed formats may have + a header per file, per frame, or no header at all. The payload size + may vary from frame-to-frame. As a result, it is not possible to + estimate reliably the duration of audio buffers when handling + compressed data. Dedicated mechanisms are required to allow for + reliable audio-video synchronization, which requires precise + reporting of the number of samples rendered at any given time. + +- Handling of multiple formats. PCM data only requires a specification + of the sampling rate, number of channels and bits per sample. In + contrast, compressed data comes in a variety of formats. Audio DSPs + may also provide support for a limited number of audio encoders and + decoders embedded in firmware, or may support more choices through + dynamic download of libraries. + +- Focus on main formats. This API provides support for the most + popular formats used for audio and video capture and playback. It is + likely that as audio compression technology advances, new formats + will be added. + +- Handling of multiple configurations. Even for a given format like + AAC, some implementations may support AAC multichannel but HE-AAC + stereo. Likewise WMA10 level M3 may require too much memory and cpu + cycles. The new API needs to provide a generic way of listing these + formats. + +- Rendering/Grabbing only. This API does not provide any means of + hardware acceleration, where PCM samples are provided back to + user-space for additional processing. This API focuses instead on + streaming compressed data to a DSP, with the assumption that the + decoded samples are routed to a physical output or logical back-end. + +- Complexity hiding. Existing user-space multimedia frameworks all + have existing enums/structures for each compressed format. 
This new + API assumes the existence of a platform-specific compatibility layer + to expose, translate and make use of the capabilities of the audio + DSP, eg. Android HAL or PulseAudio sinks. By construction, regular + applications are not supposed to make use of this API. + + +Design +====== +The new API shares a number of concepts with the PCM API for flow +control. Start, pause, resume, drain and stop commands have the same +semantics no matter what the content is. + +The concept of memory ring buffer divided in a set of fragments is +borrowed from the ALSA PCM API. However, only sizes in bytes can be +specified. + +Seeks/trick modes are assumed to be handled by the host. + +The notion of rewinds/forwards is not supported. Data committed to the +ring buffer cannot be invalidated, except when dropping all buffers. + +The Compressed Data API does not make any assumptions on how the data +is transmitted to the audio DSP. DMA transfers from main memory to an +embedded audio cluster or to a SPI interface for external DSPs are +possible. As in the ALSA PCM case, a core set of routines is exposed; +each driver implementer will have to write support for a set of +mandatory routines and possibly make use of optional ones. + +The main additions are + +get_caps + This routine returns the list of audio formats supported. Querying the + codecs on a capture stream will return encoders, decoders will be + listed for playback streams. + +get_codec_caps + For each codec, this routine returns a list of + capabilities. The intent is to make sure all the capabilities + correspond to valid settings, and to minimize the risks of + configuration failures. For example, for a complex codec such as AAC, + the number of channels supported may depend on a specific profile. If + the capabilities were exposed with a single descriptor, it may happen + that a specific combination of profiles/channels/formats may not be + supported. Likewise, embedded DSPs have limited memory and cpu cycles, + it is likely that some implementations make the list of capabilities + dynamic and dependent on existing workloads. In addition to codec + settings, this routine returns the minimum buffer size handled by the + implementation. This information can be a function of the DMA buffer + sizes, the number of bytes required to synchronize, etc, and can be + used by userspace to define how much needs to be written in the ring + buffer before playback can start. + +set_params + This routine sets the configuration chosen for a specific codec. The + most important field in the parameters is the codec type; in most + cases decoders will ignore other fields, while encoders will strictly + comply to the settings + +get_params + This routines returns the actual settings used by the DSP. Changes to + the settings should remain the exception. + +get_timestamp + The timestamp becomes a multiple field structure. It lists the number + of bytes transferred, the number of samples processed and the number + of samples rendered/grabbed. All these values can be used to determine + the average bitrate, figure out if the ring buffer needs to be + refilled or the delay due to decoding/encoding/io on the DSP. + +Note that the list of codecs/profiles/modes was derived from the +OpenMAX AL specification instead of reinventing the wheel. 
+Modifications include: +- Addition of FLAC and IEC formats +- Merge of encoder/decoder capabilities +- Profiles/modes listed as bitmasks to make descriptors more compact +- Addition of set_params for decoders (missing in OpenMAX AL) +- Addition of AMR/AMR-WB encoding modes (missing in OpenMAX AL) +- Addition of format information for WMA +- Addition of encoding options when required (derived from OpenMAX IL) +- Addition of rateControlSupported (missing in OpenMAX AL) + +State Machine +============= + +The compressed audio stream state machine is described below :: + + +----------+ + | | + | OPEN | + | | + +----------+ + | + | + | compr_set_params() + | + v + compr_free() +----------+ + +------------------------------------| | + | | SETUP | + | +-------------------------| |<-------------------------+ + | | compr_write() +----------+ | + | | ^ | + | | | compr_drain_notify() | + | | | or | + | | | compr_stop() | + | | | | + | | +----------+ | + | | | | | + | | | DRAIN | | + | | | | | + | | +----------+ | + | | ^ | + | | | | + | | | compr_drain() | + | | | | + | v | | + | +----------+ +----------+ | + | | | compr_start() | | compr_stop() | + | | PREPARE |------------------->| RUNNING |--------------------------+ + | | | | | | + | +----------+ +----------+ | + | | | ^ | + | |compr_free() | | | + | | compr_pause() | | compr_resume() | + | | | | | + | v v | | + | +----------+ +----------+ | + | | | | | compr_stop() | + +--->| FREE | | PAUSE |---------------------------+ + | | | | + +----------+ +----------+ + + +Gapless Playback +================ +When playing thru an album, the decoders have the ability to skip the encoder +delay and padding and directly move from one track content to another. The end +user can perceive this as gapless playback as we don't have silence while +switching from one track to another + +Also, there might be low-intensity noises due to encoding. Perfect gapless is +difficult to reach with all types of compressed data, but works fine with most +music content. The decoder needs to know the encoder delay and encoder padding. +So we need to pass this to DSP. This metadata is extracted from ID3/MP4 headers +and are not present by default in the bitstream, hence the need for a new +interface to pass this information to the DSP. Also DSP and userspace needs to +switch from one track to another and start using data for second track. + +The main additions are: + +set_metadata + This routine sets the encoder delay and encoder padding. This can be used by + decoder to strip the silence. This needs to be set before the data in the track + is written. + +set_next_track + This routine tells DSP that metadata and write operation sent after this would + correspond to subsequent track + +partial drain + This is called when end of file is reached. The userspace can inform DSP that + EOF is reached and now DSP can start skipping padding delay. 
Also next write + data would belong to next track + +Sequence flow for gapless would be: +- Open +- Get caps / codec caps +- Set params +- Set metadata of the first track +- Fill data of the first track +- Trigger start +- User-space finished sending all, +- Indicate next track data by sending set_next_track +- Set metadata of the next track +- then call partial_drain to flush most of buffer in DSP +- Fill data of the next track +- DSP switches to second track + +(note: order for partial_drain and write for next track can be reversed as well) + +Gapless Playback SM +=================== + +For Gapless, we move from running state to partial drain and back, along +with setting of meta_data and signalling for next track :: + + + +----------+ + compr_drain_notify() | | + +------------------------>| RUNNING | + | | | + | +----------+ + | | + | | + | | compr_next_track() + | | + | V + | +----------+ + | compr_set_params() | | + | +-----------|NEXT_TRACK| + | | | | + | | +--+-------+ + | | | | + | +--------------+ | + | | + | | compr_partial_drain() + | | + | V + | +----------+ + | | | + +------------------------ | PARTIAL_ | + | DRAIN | + +----------+ + +Not supported +============= +- Support for VoIP/circuit-switched calls is not the target of this + API. Support for dynamic bit-rate changes would require a tight + coupling between the DSP and the host stack, limiting power savings. + +- Packet-loss concealment is not supported. This would require an + additional interface to let the decoder synthesize data when frames + are lost during transmission. This may be added in the future. + +- Volume control/routing is not handled by this API. Devices exposing a + compressed data interface will be considered as regular ALSA devices; + volume changes and routing information will be provided with regular + ALSA kcontrols. + +- Embedded audio effects. Such effects should be enabled in the same + manner, no matter if the input was PCM or compressed. + +- multichannel IEC encoding. Unclear if this is required. + +- Encoding/decoding acceleration is not supported as mentioned + above. It is possible to route the output of a decoder to a capture + stream, or even implement transcoding capabilities. This routing + would be enabled with ALSA kcontrols. + +- Audio policy/resource management. This API does not provide any + hooks to query the utilization of the audio DSP, nor any preemption + mechanisms. + +- No notion of underrun/overrun. Since the bytes written are compressed + in nature and data written/read doesn't translate directly to + rendered output in time, this does not deal with underrun/overrun and + maybe dealt in user-library + + +Credits +======= +- Mark Brown and Liam Girdwood for discussions on the need for this API +- Harsha Priya for her work on intel_sst compressed API +- Rakesh Ughreja for valuable feedback +- Sing Nallasellan, Sikkandar Madar and Prasanna Samaga for + demonstrating and quantifying the benefits of audio offload on a + real platform. diff --git a/Documentation/sound/designs/control-names.rst b/Documentation/sound/designs/control-names.rst new file mode 100644 index 0000000000..765ff9b5b7 --- /dev/null +++ b/Documentation/sound/designs/control-names.rst @@ -0,0 +1,142 @@ +=========================== +Standard ALSA Control Names +=========================== + +This document describes standard names of mixer controls. 
+ +Standard Syntax +--------------- +Syntax: [LOCATION] SOURCE [CHANNEL] [DIRECTION] FUNCTION + + +DIRECTION +~~~~~~~~~ +================ =============== +<nothing> both directions +Playback one direction +Capture one direction +Bypass Playback one direction +Bypass Capture one direction +================ =============== + +FUNCTION +~~~~~~~~ +======== ================================= +Switch on/off switch +Volume amplifier +Route route control, hardware specific +======== ================================= + +CHANNEL +~~~~~~~ +============ ================================================== +<nothing> channel independent, or applies to all channels +Front front left/right channels +Surround rear left/right in 4.0/5.1 surround +CLFE C/LFE channels +Center center channel +LFE LFE channel +Side side left/right for 7.1 surround +============ ================================================== + +LOCATION (Physical location of source) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +============ ===================== +Front front position +Rear rear position +Dock on docking station +Internal internal +============ ===================== + +SOURCE +~~~~~~ +=================== ================================================= +Master +Master Mono +Hardware Master +Speaker internal speaker +Bass Speaker internal LFE speaker +Headphone +Line Out +Beep beep generator +Phone +Phone Input +Phone Output +Synth +FM +Mic +Headset Mic mic part of combined headset jack - 4-pin + headphone + mic +Headphone Mic mic part of either/or - 3-pin headphone or mic +Line input only, use "Line Out" for output +CD +Video +Zoom Video +Aux +PCM +PCM Pan +Loopback +Analog Loopback D/A -> A/D loopback +Digital Loopback playback -> capture loopback - + without analog path +Mono +Mono Output +Multi +ADC +Wave +Music +I2S +IEC958 +HDMI +SPDIF output only +SPDIF In +Digital In +HDMI/DP either HDMI or DisplayPort +=================== ================================================= + +Exceptions (deprecated) +----------------------- + +===================================== ======================= +[Analogue|Digital] Capture Source +[Analogue|Digital] Capture Switch aka input gain switch +[Analogue|Digital] Capture Volume aka input gain volume +[Analogue|Digital] Playback Switch aka output gain switch +[Analogue|Digital] Playback Volume aka output gain volume +Tone Control - Switch +Tone Control - Bass +Tone Control - Treble +3D Control - Switch +3D Control - Center +3D Control - Depth +3D Control - Wide +3D Control - Space +3D Control - Level +Mic Boost [(?dB)] +===================================== ======================= + +PCM interface +------------- + +=================== ======================================== +Sample Clock Source { "Word", "Internal", "AutoSync" } +Clock Sync Status { "Lock", "Sync", "No Lock" } +External Rate external capture rate +Capture Rate capture rate taken from external source +=================== ======================================== + +IEC958 (S/PDIF) interface +------------------------- + +============================================ ====================================== +IEC958 [...] [Playback|Capture] Switch turn on/off the IEC958 interface +IEC958 [...] [Playback|Capture] Volume digital volume control +IEC958 [...] [Playback|Capture] Default default or global value - read/write +IEC958 [...] [Playback|Capture] Mask consumer and professional mask +IEC958 [...] [Playback|Capture] Con Mask consumer mask +IEC958 [...] [Playback|Capture] Pro Mask professional mask +IEC958 [...] 
[Playback|Capture] PCM Stream the settings assigned to a PCM stream +IEC958 Q-subcode [Playback|Capture] Default Q-subcode bits + +IEC958 Preamble [Playback|Capture] Default burst preamble words (4*16bits) +============================================ ====================================== diff --git a/Documentation/sound/designs/index.rst b/Documentation/sound/designs/index.rst new file mode 100644 index 0000000000..b79db9ad87 --- /dev/null +++ b/Documentation/sound/designs/index.rst @@ -0,0 +1,18 @@ +Designs and Implementations +=========================== + +.. toctree:: + :maxdepth: 2 + + control-names + channel-mapping-api + compress-offload + timestamping + jack-controls + tracepoints + procfile + powersave + oss-emulation + seq-oss + jack-injection + midi-2.0 diff --git a/Documentation/sound/designs/jack-controls.rst b/Documentation/sound/designs/jack-controls.rst new file mode 100644 index 0000000000..e8a18f126a --- /dev/null +++ b/Documentation/sound/designs/jack-controls.rst @@ -0,0 +1,48 @@ +================== +ALSA Jack Controls +================== + +Why we need Jack kcontrols +========================== + +ALSA uses kcontrols to export audio controls(switch, volume, Mux, ...) +to user space. This means userspace applications like pulseaudio can +switch off headphones and switch on speakers when no headphones are +plugged in. + +The old ALSA jack code only created input devices for each registered +jack. These jack input devices are not readable by userspace devices +that run as non root. + +The new jack code creates embedded jack kcontrols for each jack that +can be read by any process. + +This can be combined with UCM to allow userspace to route audio more +intelligently based on jack insertion or removal events. + +Jack Kcontrol Internals +======================= + +Each jack will have a kcontrol list, so that we can create a kcontrol +and attach it to the jack, at jack creation stage. We can also add a +kcontrol to an existing jack, at anytime when required. + +Those kcontrols will be freed automatically when the Jack is freed. + +How to use jack kcontrols +========================= + +In order to keep compatibility, snd_jack_new() has been modified by +adding two params: + +initial_kctl + if true, create a kcontrol and add it to the jack list. +phantom_jack + Don't create a input device for phantom jacks. + +HDA jacks can set phantom_jack to true in order to create a phantom +jack and set initial_kctl to true to create an initial kcontrol with +the correct id. + +ASoC jacks should set initial_kctl as false. The pin name will be +assigned as the jack kcontrol name. diff --git a/Documentation/sound/designs/jack-injection.rst b/Documentation/sound/designs/jack-injection.rst new file mode 100644 index 0000000000..f979052152 --- /dev/null +++ b/Documentation/sound/designs/jack-injection.rst @@ -0,0 +1,166 @@ +============================ +ALSA Jack Software Injection +============================ + +Simple Introduction On Jack Injection +===================================== + +Here jack injection means users could inject plugin or plugout events +to the audio jacks through debugfs interface, it is helpful to +validate ALSA userspace changes. 
For example, we change the audio +profile switching code in the pulseaudio, and we want to verify if the +change works as expected and if the change introduce the regression, +in this case, we could inject plugin or plugout events to an audio +jack or to some audio jacks, we don't need to physically access the +machine and plug/unplug physical devices to the audio jack. + +In this design, an audio jack doesn't equal to a physical audio jack. +Sometimes a physical audio jack contains multi functions, and the +ALSA driver creates multi ``jack_kctl`` for a ``snd_jack``, here the +``snd_jack`` represents a physical audio jack and the ``jack_kctl`` +represents a function, for example a physical jack has two functions: +headphone and mic_in, the ALSA ASoC driver will build 2 ``jack_kctl`` +for this jack. The jack injection is implemented based on the +``jack_kctl`` instead of ``snd_jack``. + +To inject events to audio jacks, we need to enable the jack injection +via ``sw_inject_enable`` first, once it is enabled, this jack will not +change the state by hardware events anymore, we could inject plugin or +plugout events via ``jackin_inject`` and check the jack state via +``status``, after we finish our test, we need to disable the jack +injection via ``sw_inject_enable`` too, once it is disabled, the jack +state will be restored according to the last reported hardware events +and will change by future hardware events. + +The Layout of Jack Injection Interface +====================================== + +If users enable the SND_JACK_INJECTION_DEBUG in the kernel, the audio +jack injection interface will be created as below: +:: + + $debugfs_mount_dir/sound + |-- card0 + |-- |-- HDMI_DP_pcm_10_Jack + |-- |-- |-- jackin_inject + |-- |-- |-- kctl_id + |-- |-- |-- mask_bits + |-- |-- |-- status + |-- |-- |-- sw_inject_enable + |-- |-- |-- type + ... + |-- |-- HDMI_DP_pcm_9_Jack + |-- |-- jackin_inject + |-- |-- kctl_id + |-- |-- mask_bits + |-- |-- status + |-- |-- sw_inject_enable + |-- |-- type + |-- card1 + |-- HDMI_DP_pcm_5_Jack + |-- |-- jackin_inject + |-- |-- kctl_id + |-- |-- mask_bits + |-- |-- status + |-- |-- sw_inject_enable + |-- |-- type + ... 
+ |-- Headphone_Jack + |-- |-- jackin_inject + |-- |-- kctl_id + |-- |-- mask_bits + |-- |-- status + |-- |-- sw_inject_enable + |-- |-- type + |-- Headset_Mic_Jack + |-- jackin_inject + |-- kctl_id + |-- mask_bits + |-- status + |-- sw_inject_enable + |-- type + +The Explanation Of The Nodes +====================================== + +kctl_id + read-only, get jack_kctl->kctl's id + :: + + sound/card1/Headphone_Jack# cat kctl_id + Headphone Jack + +mask_bits + read-only, get jack_kctl's supported events mask_bits + :: + + sound/card1/Headphone_Jack# cat mask_bits + 0x0001 HEADPHONE(0x0001) + +status + read-only, get jack_kctl's current status + +- headphone unplugged: + + :: + + sound/card1/Headphone_Jack# cat status + Unplugged + +- headphone plugged: + + :: + + sound/card1/Headphone_Jack# cat status + Plugged + +type + read-only, get snd_jack's supported events from type (all supported events on the physical audio jack) + :: + + sound/card1/Headphone_Jack# cat type + 0x7803 HEADPHONE(0x0001) MICROPHONE(0x0002) BTN_3(0x0800) BTN_2(0x1000) BTN_1(0x2000) BTN_0(0x4000) + +sw_inject_enable + read-write, enable or disable injection + +- injection disabled: + + :: + + sound/card1/Headphone_Jack# cat sw_inject_enable + Jack: Headphone Jack Inject Enabled: 0 + +- injection enabled: + + :: + + sound/card1/Headphone_Jack# cat sw_inject_enable + Jack: Headphone Jack Inject Enabled: 1 + +- to enable jack injection: + + :: + + sound/card1/Headphone_Jack# echo 1 > sw_inject_enable + +- to disable jack injection: + + :: + + sound/card1/Headphone_Jack# echo 0 > sw_inject_enable + +jackin_inject + write-only, inject plugin or plugout + +- to inject plugin: + + :: + + sound/card1/Headphone_Jack# echo 1 > jackin_inject + +- to inject plugout: + + :: + + sound/card1/Headphone_Jack# echo 0 > jackin_inject diff --git a/Documentation/sound/designs/midi-2.0.rst b/Documentation/sound/designs/midi-2.0.rst new file mode 100644 index 0000000000..086487ca7a --- /dev/null +++ b/Documentation/sound/designs/midi-2.0.rst @@ -0,0 +1,566 @@ +================= +MIDI 2.0 on Linux +================= + +General +======= + +MIDI 2.0 is an extended protocol for providing higher resolutions and +more fine controls over the legacy MIDI 1.0. The fundamental changes +introduced for supporting MIDI 2.0 are: + +- Support of Universal MIDI Packet (UMP) +- Support of MIDI 2.0 protocol messages +- Transparent conversions between UMP and legacy MIDI 1.0 byte stream +- MIDI-CI for property and profile configurations + +UMP is a new container format to hold all MIDI protocol 1.0 and MIDI +2.0 protocol messages. Unlike the former byte stream, it's 32bit +aligned, and each message can be put in a single packet. UMP can send +the events up to 16 "UMP Groups", where each UMP Group contain up to +16 MIDI channels. + +MIDI 2.0 protocol is an extended protocol to achieve the higher +resolution and more controls over the old MIDI 1.0 protocol. + +MIDI-CI is a high-level protocol that can talk with the MIDI device +for the flexible profiles and configurations. It's represented in the +form of special SysEx. + +For Linux implementations, the kernel supports the UMP transport and +the encoding/decoding of MIDI protocols on UMP, while MIDI-CI is +supported in user-space over the standard SysEx. + +As of this writing, only USB MIDI device supports the UMP and Linux +2.0 natively. The UMP support itself is pretty generic, hence it +could be used by other transport layers, although it could be +implemented differently (e.g. 
as a ALSA sequencer client), too. + +The access to UMP devices are provided in two ways: the access via +rawmidi device and the access via ALSA sequencer API. + +ALSA sequencer API was extended to allow the payload of UMP packets. +It's allowed to connect freely between MIDI 1.0 and MIDI 2.0 sequencer +clients, and the events are converted transparently. + + +Kernel Configuration +==================== + +The following new configs are added for supporting MIDI 2.0: +`CONFIG_SND_UMP`, `CONFIG_SND_UMP_LEGACY_RAWMIDI`, +`CONFIG_SND_SEQ_UMP`, `CONFIG_SND_SEQ_UMP_CLIENT`, and +`CONFIG_SND_USB_AUDIO_MIDI_V2`. The first visible one is +`CONFIG_SND_USB_AUDIO_MIDI_V2`, and when you choose it (to set `=y`), +the core support for UMP (`CONFIG_SND_UMP`) and the sequencer binding +(`CONFIG_SND_SEQ_UMP_CLIENT`) will be automatically selected. + +Additionally, `CONFIG_SND_UMP_LEGACY_RAWMIDI=y` will enable the +support for the legacy raw MIDI device for UMP Endpoints. + + +Rawmidi Device with USB MIDI 2.0 +================================ + +When a device supports MIDI 2.0, the USB-audio driver probes and uses +the MIDI 2.0 interface (that is found always at the altset 1) as +default instead of the MIDI 1.0 interface (at altset 0). You can +switch back to the binding with the old MIDI 1.0 interface by passing +`midi2_enable=0` option to snd-usb-audio driver module, too. + +The USB audio driver tries to query the UMP Endpoint and UMP Function +Block information that are provided since UMP v1.1, and builds up the +topology based on those information. When the device is older and +doesn't respond to the new UMP inquiries, the driver falls back and +builds the topology based on Group Terminal Block (GTB) information +from the USB descriptor. Some device might be screwed up by the +unexpected UMP command; in such a case, pass `midi2_ump_probe=0` +option to snd-usb-audio driver for skipping the UMP v1.1 inquiries. + +When the MIDI 2.0 device is probed, the kernel creates a rawmidi +device for each UMP Endpoint of the device. Its device name is +`/dev/snd/umpC*D*` and different from the standard rawmidi device name +`/dev/snd/midiC*D*` for MIDI 1.0, in order to avoid confusing the +legacy applications accessing mistakenly to UMP devices. + +You can read and write UMP packet data directly from/to this UMP +rawmidi device. For example, reading via `hexdump` like below will +show the incoming UMP packets of the card 0 device 0 in the hex +format:: + + % hexdump -C /dev/snd/umpC0D0 + 00000000 01 07 b0 20 00 07 b0 20 64 3c 90 20 64 3c 80 20 |... ... d<. d<. | + +Unlike the MIDI 1.0 byte stream, UMP is a 32bit packet, and the size +for reading or writing the device is also aligned to 32bit (which is 4 +bytes). + +The 32-bit words in the UMP packet payload are always in CPU native +endianness. Transport drivers are responsible to convert UMP words +from / to system endianness to required transport endianness / byte +order. + +When `CONFIG_SND_UMP_LEGACY_RAWMIDI` is set, the driver creates +another standard raw MIDI device additionally as `/dev/snd/midiC*D*`. +This contains 16 substreams, and each substream corresponds to a +(0-based) UMP Group. Legacy applications can access to the specified +group via each substream in MIDI 1.0 byte stream format. With the +ALSA rawmidi API, you can open the arbitrary substream, while just +opening `/dev/snd/midiC*D*` will end up with opening the first +substream. 
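
To illustrate the 32-bit packing described above, here is a small
sketch that reads UMP words directly from the UMP rawmidi character
device with plain POSIX I/O (the card/device numbers are just an
example; a real application would rather use the ALSA rawmidi API)::

  #include <stdio.h>
  #include <stdint.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
          /* example path; adjust the card/device numbers */
          int fd = open("/dev/snd/umpC0D0", O_RDONLY);
          uint32_t words[64];
          ssize_t len;

          if (fd < 0)
                  return 1;

          /* reads are always a multiple of 4 bytes (one UMP word) */
          while ((len = read(fd, words, sizeof(words))) > 0) {
                  ssize_t i;
                  for (i = 0; i < len / 4; i++) {
                          uint32_t w = words[i];  /* CPU-native endianness */
                          unsigned int mt = w >> 28;             /* message type */
                          unsigned int group = (w >> 24) & 0x0f; /* 0-based group */
                          printf("word 0x%08x  type 0x%x  group %u\n",
                                 w, mt, group);
                  }
          }
          close(fd);
          return 0;
  }

The message type in the top four bits determines how many words the
packet occupies; for groupless types (such as 0x0f, mentioned later in
this document) the group field carries no meaning.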
+ +Each UMP Endpoint can provide the additional information, constructed +from the information inquired via UMP 1.1 Stream messages or USB MIDI +2.0 descriptors. And a UMP Endpoint may contain one or more UMP +Blocks, where UMP Block is an abstraction introduced in the ALSA UMP +implementations to represent the associations among UMP Groups. UMP +Block corresponds to Function Block in UMP 1.1 specification. When +UMP 1.1 Function Block information isn't available, it's filled +partially from Group Terminal Block (GTB) as defined in USB MIDI 2.0 +specifications. + +The information of UMP Endpoints and UMP Blocks are found in the proc +file `/proc/asound/card*/midi*`. For example:: + + % cat /proc/asound/card1/midi0 + ProtoZOA MIDI + + Type: UMP + EP Name: ProtoZOA + EP Product ID: ABCD12345678 + UMP Version: 0x0000 + Protocol Caps: 0x00000100 + Protocol: 0x00000100 + Num Blocks: 3 + + Block 0 (ProtoZOA Main) + Direction: bidirection + Active: Yes + Groups: 1-1 + Is MIDI1: No + + Block 1 (ProtoZOA Ext IN) + Direction: output + Active: Yes + Groups: 2-2 + Is MIDI1: Yes (Low Speed) + .... + +Note that `Groups` field shown in the proc file above indicates the +1-based UMP Group numbers (from-to). + +Those additional UMP Endpoint and UMP Block information can be +obtained via the new ioctls `SNDRV_UMP_IOCTL_ENDPOINT_INFO` and +`SNDRV_UMP_IOCTL_BLOCK_INFO`, respectively. + +The rawmidi name and the UMP Endpoint name are usually identical, and +in the case of USB MIDI, it's taken from `iInterface` of the +corresponding USB MIDI interface descriptor. If it's not provided, +it's copied from `iProduct` of the USB device descriptor as a +fallback. + +The Endpoint Product ID is a string field and supposed to be unique. +It's copied from `iSerialNumber` of the device for USB MIDI. + +The protocol capabilities and the actual protocol bits are defined in +`asound.h`. + + +ALSA Sequencer with USB MIDI 2.0 +================================ + +In addition to the rawmidi interfaces, ALSA sequencer interface +supports the new UMP MIDI 2.0 device, too. Now, each ALSA sequencer +client may set its MIDI version (0, 1 or 2) to declare itself being +either the legacy, UMP MIDI 1.0 or UMP MIDI 2.0 device, respectively. +The first, legacy client is the one that sends/receives the old +sequencer event as was. Meanwhile, UMP MIDI 1.0 and 2.0 clients send +and receive in the extended event record for UMP. The MIDI version is +seen in the new `midi_version` field of `snd_seq_client_info`. + +A UMP packet can be sent/received in a sequencer event embedded by +specifying the new event flag bit `SNDRV_SEQ_EVENT_UMP`. When this +flag is set, the event has 16 byte (128 bit) data payload for holding +the UMP packet. Without the `SNDRV_SEQ_EVENT_UMP` bit flag, the event +is treated as a legacy event as it was (with max 12 byte data +payload). + +With `SNDRV_SEQ_EVENT_UMP` flag set, the type field of a UMP sequencer +event is ignored (but it should be set to 0 as default). + +The type of each client can be seen in `/proc/asound/seq/clients`. +For example:: + + % cat /proc/asound/seq/clients + Client info + cur clients : 3 + .... 
+ Client 14 : "Midi Through" [Kernel Legacy] + Port 0 : "Midi Through Port-0" (RWe-) + Client 20 : "ProtoZOA" [Kernel UMP MIDI1] + UMP Endpoint: ProtoZOA + UMP Block 0: ProtoZOA Main [Active] + Groups: 1-1 + UMP Block 1: ProtoZOA Ext IN [Active] + Groups: 2-2 + UMP Block 2: ProtoZOA Ext OUT [Active] + Groups: 3-3 + Port 0 : "MIDI 2.0" (RWeX) [In/Out] + Port 1 : "ProtoZOA Main" (RWeX) [In/Out] + Port 2 : "ProtoZOA Ext IN" (-We-) [Out] + Port 3 : "ProtoZOA Ext OUT" (R-e-) [In] + +Here you can find two types of kernel clients, "Legacy" for client 14, +and "UMP MIDI1" for client 20, which is a USB MIDI 2.0 device. +A USB MIDI 2.0 client gives always the port 0 as "MIDI 2.0" and the +rest ports from 1 for each UMP Group (e.g. port 1 for Group 1). +In this example, the device has three active groups (Main, Ext IN and +Ext OUT), and those are exposed as sequencer ports from 1 to 3. +The "MIDI 2.0" port is for a UMP Endpoint, and its difference from +other UMP Group ports is that UMP Endpoint port sends the events from +the all ports on the device ("catch-all"), while each UMP Group port +sends only the events from the given UMP Group. +Also, UMP groupless messages (such as the UMP message type 0x0f) are +sent only to the UMP Endpoint port. + +Note that, although each UMP sequencer client usually creates 16 +ports, those ports that don't belong to any UMP Blocks (or belonging +to inactive UMP Blocks) are marked as inactive, and they don't appear +in the proc outputs. In the example above, the sequencer ports from 4 +to 16 are present but not shown there. + +The proc file above shows the UMP Block information, too. The same +entry (but with more detailed information) is found in the rawmidi +proc output. + +When clients are connected between different MIDI versions, the events +are translated automatically depending on the client's version, not +only between the legacy and the UMP MIDI 1.0/2.0 types, but also +between UMP MIDI 1.0 and 2.0 types, too. For example, running +`aseqdump` program on the ProtoZOA Main port in the legacy mode will +give you the output like:: + + % aseqdump -p 20:1 + Waiting for data. Press Ctrl+C to end. + Source Event Ch Data + 20:1 Note on 0, note 60, velocity 100 + 20:1 Note off 0, note 60, velocity 100 + 20:1 Control change 0, controller 11, value 4 + +When you run `aseqdump` in MIDI 2.0 mode, it'll receive the high +precision data like:: + + % aseqdump -u 2 -p 20:1 + Waiting for data. Press Ctrl+C to end. + Source Event Ch Data + 20:1 Note on 0, note 60, velocity 0xc924, attr type = 0, data = 0x0 + 20:1 Note off 0, note 60, velocity 0xc924, attr type = 0, data = 0x0 + 20:1 Control change 0, controller 11, value 0x2000000 + +while the data is automatically converted by ALSA sequencer core. + + +Rawmidi API Extensions +====================== + +* The additional UMP Endpoint information can be obtained via the new + ioctl `SNDRV_UMP_IOCTL_ENDPOINT_INFO`. It contains the associated + card and device numbers, the bit flags, the protocols, the number of + UMP Blocks, the name string of the endpoint, etc. + + The protocols are specified in two field, the protocol capabilities + and the current protocol. Both contain the bit flags specifying the + MIDI protocol version (`SNDRV_UMP_EP_INFO_PROTO_MIDI1` or + `SNDRV_UMP_EP_INFO_PROTO_MIDI2`) in the upper byte and the jitter + reduction timestamp (`SNDRV_UMP_EP_INFO_PROTO_JRTS_TX` and + `SNDRV_UMP_EP_INFO_PROTO_JRTS_RX`) in the lower byte. 
+ + A UMP Endpoint may contain up to 32 UMP Blocks, and the number of + the currently assigned blocks are shown in the Endpoint information. + +* Each UMP Block information can be obtained via another new ioctl + `SNDRV_UMP_IOCTL_BLOCK_INFO`. The block ID number (0-based) has to + be passed for the block to query. The received data contains the + associated the direction of the block, the first associated group ID + (0-based) and the number of groups, the name string of the block, + etc. + + The direction is either `SNDRV_UMP_DIR_INPUT`, + `SNDRV_UMP_DIR_OUTPUT` or `SNDRV_UMP_DIR_BIDIRECTION`. + +* For the device supports UMP v1.1, the UMP MIDI protocol can be + switched via "Stream Configuration Request" message (UMP type 0x0f, + status 0x05). When UMP core receives such a message, it updates the + UMP EP info and the corresponding sequencer clients as well. + + +Control API Extensions +====================== + +* The new ioctl `SNDRV_CTL_IOCTL_UMP_NEXT_DEVICE` is introduced for + querying the next UMP rawmidi device, while the existing ioctl + `SNDRV_CTL_IOCTL_RAWMIDI_NEXT_DEVICE` queries only the legacy + rawmidi devices. + + For setting the subdevice (substream number) to be opened, use the + ioctl `SNDRV_CTL_IOCTL_RAWMIDI_PREFER_SUBDEVICE` like the normal + rawmidi. + +* Two new ioctls `SNDRV_CTL_IOCTL_UMP_ENDPOINT_INFO` and + `SNDRV_CTL_IOCTL_UMP_BLOCK_INFO` provide the UMP Endpoint and UMP + Block information of the specified UMP device via ALSA control API + without opening the actual (UMP) rawmidi device. + The `card` field is ignored upon inquiry, always tied with the card + of the control interface. + + +Sequencer API Extensions +======================== + +* `midi_version` field is added to `snd_seq_client_info` to indicate + the current MIDI version (either 0, 1 or 2) of each client. + When `midi_version` is 1 or 2, the alignment of read from a UMP + sequencer client is also changed from the former 28 bytes to 32 + bytes for the extended payload. The alignment size for the write + isn't changed, but each event size may differ depending on the new + bit flag below. + +* `SNDRV_SEQ_EVENT_UMP` flag bit is added for each sequencer event + flags. When this bit flag is set, the sequencer event is extended + to have a larger payload of 16 bytes instead of the legacy 12 + bytes, and the event contains the UMP packet in the payload. + +* The new sequencer port type bit (`SNDRV_SEQ_PORT_TYPE_MIDI_UMP`) + indicates the port being UMP-capable. + +* The sequencer ports have new capability bits to indicate the + inactive ports (`SNDRV_SEQ_PORT_CAP_INACTIVE`) and the UMP Endpoint + port (`SNDRV_SEQ_PORT_CAP_UMP_ENDPOINT`). + +* The event conversion of ALSA sequencer clients can be suppressed the + new filter bit `SNDRV_SEQ_FILTER_NO_CONVERT` set to the client info. + For example, the kernel pass-through client (`snd-seq-dummy`) sets + this flag internally. + +* The port information gained the new field `direction` to indicate + the direction of the port (either `SNDRV_SEQ_PORT_DIR_INPUT`, + `SNDRV_SEQ_PORT_DIR_OUTPUT` or `SNDRV_SEQ_PORT_DIR_BIDIRECTION`). + +* Another additional field for the port information is `ump_group` + which specifies the associated UMP Group Number (1-based). + When it's non-zero, the UMP group field in the UMP packet updated + upon delivery to the specified group (corrected to be 0-based). + Each sequencer port is supposed to set this field if it's a port to + specific to a certain UMP group. 
+ +* Each client may set the additional event filter for UMP Groups in + `group_filter` bitmap. The filter consists of bitmap from 1-based + Group numbers. For example, when the bit 1 is set, messages from + Group 1 (i.e. the very first group) are filtered and not delivered. + The bit 0 is used for filtering UMP groupless messages. + +* Two new ioctls are added for UMP-capable clients: + `SNDRV_SEQ_IOCTL_GET_CLIENT_UMP_INFO` and + `SNDRV_SEQ_IOCTL_SET_CLIENT_UMP_INFO`. They are used to get and set + either `snd_ump_endpoint_info` or `snd_ump_block_info` data + associated with the sequencer client. The USB MIDI driver provides + those information from the underlying UMP rawmidi, while a + user-space client may provide its own data via `*_SET` ioctl. + For an Endpoint data, pass 0 to the `type` field, while for a Block + data, pass the block number + 1 to the `type` field. + Setting the data for a kernel client shall result in an error. + +* With UMP 1.1, Function Block information may be changed + dynamically. When the update of Function Block is received from the + device, ALSA sequencer core changes the corresponding sequencer port + name and attributes accordingly, and notifies the changes via the + announcement to the ALSA sequencer system port, similarly like the + normal port change notification. + + +MIDI2 USB Gadget Function Driver +================================ + +The latest kernel contains the support for USB MIDI 2.0 gadget +function driver, which can be used for prototyping and debugging MIDI +2.0 features. + +`CONFIG_USB_GADGET`, `CONFIG_USB_CONFIGFS` and +`CONFIG_USB_CONFIGFS_F_MIDI2` need to be enabled for the MIDI2 gadget +driver. + +In addition, for using a gadget driver, you need a working UDC driver. +In the example below, we use `dummy_hcd` driver (enabled via +`CONFIG_USB_DUMMY_HCD`) that is available on PC and VM for debugging +purpose. There are other UDC drivers depending on the platform, and +those can be used for a real device, instead, too. + +At first, on a system to run the gadget, load `libcomposite` module:: + + % modprobe libcomposite + +and you'll have `usb_gadget` subdirectory under configfs space +(typically `/sys/kernel/config` on modern OS). Then create a gadget +instance and add configurations there, for example:: + + % cd /sys/kernel/config + % mkdir usb_gadget/g1 + + % cd usb_gadget/g1 + % mkdir configs/c.1 + % mkdir functions/midi2.usb0 + + % echo 0x0004 > idProduct + % echo 0x17b3 > idVendor + % mkdir strings/0x409 + % echo "ACME Enterprises" > strings/0x409/manufacturer + % echo "ACMESynth" > strings/0x409/product + % echo "ABCD12345" > strings/0x409/serialnumber + + % mkdir configs/c.1/strings/0x409 + % echo "Monosynth" > configs/c.1/strings/0x409/configuration + % echo 120 > configs/c.1/MaxPower + +At this point, there must be a subdirectory `ep.0`, and that is the +configuration for a UMP Endpoint. You can fill the Endpoint +information like:: + + % echo "ACMESynth" > functions/midi2.usb0/iface_name + % echo "ACMESynth" > functions/midi2.usb0/ep.0/ep_name + % echo "ABCD12345" > functions/midi2.usb0/ep.0/product_id + % echo 0x0123 > functions/midi2.usb0/ep.0/family + % echo 0x4567 > functions/midi2.usb0/ep.0/model + % echo 0x123456 > functions/midi2.usb0/ep.0/manufacturer + % echo 0x12345678 > functions/midi2.usb0/ep.0/sw_revision + +The default MIDI protocol can be set either 1 or 2:: + + % echo 2 > functions/midi2.usb0/ep.0/protocol + +And, you can find a subdirectory `block.0` under this Endpoint +subdirectory. 
This defines the Function Block information:: + + % echo "Monosynth" > functions/midi2.usb0/ep.0/block.0/name + % echo 0 > functions/midi2.usb0/ep.0/block.0/first_group + % echo 1 > functions/midi2.usb0/ep.0/block.0/num_groups + +Finally, link the configuration and enable it:: + + % ln -s functions/midi2.usb0 configs/c.1 + % echo dummy_udc.0 > UDC + +where `dummy_udc.0` is an example case and it differs depending on the +system. You can find the UDC instances in `/sys/class/udc` and pass +the found name instead:: + + % ls /sys/class/udc + dummy_udc.0 + +Now, the MIDI 2.0 gadget device is enabled, and the gadget host +creates a new sound card instance containing a UMP rawmidi device by +`f_midi2` driver:: + + % cat /proc/asound/cards + .... + 1 [Gadget ]: f_midi2 - MIDI 2.0 Gadget + MIDI 2.0 Gadget + +And on the connected host, a similar card should appear, too, but with +the card and device names given in the configfs above:: + + % cat /proc/asound/cards + .... + 2 [ACMESynth ]: USB-Audio - ACMESynth + ACME Enterprises ACMESynth at usb-dummy_hcd.0-1, high speed + +You can play a MIDI file on the gadget side:: + + % aplaymidi -p 20:1 to_host.mid + +and this will appear as an input from a MIDI device on the connected +host:: + + % aseqdump -p 20:0 -u 2 + +Vice versa, a playback on the connected host will work as an input on +the gadget, too. + +Each Function Block may have different direction and UI-hint, +specified via `direction` and `ui_hint` attributes. +Passing `1` is for input-only, `2` for out-only and `3` for +bidirectional (the default value). For example:: + + % echo 2 > functions/midi2.usb0/ep.0/block.0/direction + % echo 2 > functions/midi2.usb0/ep.0/block.0/ui_hint + +When you need more than one Function Blocks, you can create +subdirectories `block.1`, `block.2`, etc dynamically, and configure +them in the configuration procedure above before linking. +For example, to create a second Function Block for a keyboard:: + + % mkdir functions/midi2.usb0/ep.0/block.1 + % echo "Keyboard" > functions/midi2.usb0/ep.0/block.1/name + % echo 1 > functions/midi2.usb0/ep.0/block.1/first_group + % echo 1 > functions/midi2.usb0/ep.0/block.1/num_groups + % echo 1 > functions/midi2.usb0/ep.0/block.1/direction + % echo 1 > functions/midi2.usb0/ep.0/block.1/ui_hint + +The `block.*` subdirectories can be removed dynamically, too (except +for `block.0` which is persistent). + +For assigning a Function Block for MIDI 1.0 I/O, set up in `is_midi1` +attribute. 1 is for MIDI 1.0, and 2 is for MIDI 1.0 with low speed +connection:: + + % echo 2 > functions/midi2.usb0/ep.0/block.1/is_midi1 + +For disabling the processing of UMP Stream messages in the gadget +driver, pass `0` to `process_ump` attribute in the top-level config:: + + % echo 0 > functions/midi2.usb0/process_ump + +The MIDI 1.0 interface at altset 0 is supported by the gadget driver, +too. When MIDI 1.0 interface is selected by the connected host, the +UMP I/O on the gadget is translated from/to USB MIDI 1.0 packets +accordingly while the gadget driver keeps communicating with the +user-space over UMP rawmidi. + +MIDI 1.0 ports are set up from the config in each Function Block. +For example:: + + % echo 0 > functions/midi2.usb0/ep.0/block.0/midi1_first_group + % echo 1 > functions/midi2.usb0/ep.0/block.0/midi1_num_groups + +The configuration above will enable the Group 1 (the index 0) for MIDI +1.0 interface. Note that those groups must be in the groups defined +for the Function Block itself. 
+ +The gadget driver supports more than one UMP Endpoints, too. +Similarly like the Function Blocks, you can create a new subdirectory +`ep.1` (but under the card top-level config) to enable a new Endpoint:: + + % mkdir functions/midi2.usb0/ep.1 + +and create a new Function Block there. For example, to create 4 +Groups for the Function Block of this new Endpoint:: + + % mkdir functions/midi2.usb0/ep.1/block.0 + % echo 4 > functions/midi2.usb0/ep.1/block.0/num_groups + +Now, you'll have 4 rawmidi devices in total: the first two are UMP +rawmidi devices for Endpoint 0 and Endpoint 1, and other two for the +legacy MIDI 1.0 rawmidi devices corresponding to both EP 0 and EP 1. + +The current altsetting on the gadget can be informed via a control +element "Operation Mode" with `RAWMIDI` iface. e.g. you can read it +via `amixer` program running on the gadget host like:: + + % amixer -c1 cget iface=RAWMIDI,name='Operation Mode' + ; type=INTEGER,access=r--v----,values=1,min=0,max=2,step=0 + : values=2 + +The value (shown in the second returned line with `: values=`) +indicates 1 for MIDI 1.0 (altset 0), 2 for MIDI 2.0 (altset 1) and 0 +for unset. + +As of now, the configurations can't be changed after binding. diff --git a/Documentation/sound/designs/oss-emulation.rst b/Documentation/sound/designs/oss-emulation.rst new file mode 100644 index 0000000000..e8dcb9633e --- /dev/null +++ b/Documentation/sound/designs/oss-emulation.rst @@ -0,0 +1,336 @@ +============================= +Notes on Kernel OSS-Emulation +============================= + +Jan. 22, 2004 Takashi Iwai <tiwai@suse.de> + + +Modules +======= + +ALSA provides a powerful OSS emulation on the kernel. +The OSS emulation for PCM, mixer and sequencer devices is implemented +as add-on kernel modules, snd-pcm-oss, snd-mixer-oss and snd-seq-oss. +When you need to access the OSS PCM, mixer or sequencer devices, the +corresponding module has to be loaded. + +These modules are loaded automatically when the corresponding service +is called. The alias is defined ``sound-service-x-y``, where x and y are +the card number and the minor unit number. Usually you don't have to +define these aliases by yourself. + +Only necessary step for auto-loading of OSS modules is to define the +card alias in ``/etc/modprobe.d/alsa.conf``, such as:: + + alias sound-slot-0 snd-emu10k1 + +As the second card, define ``sound-slot-1`` as well. +Note that you can't use the aliased name as the target name (i.e. +``alias sound-slot-0 snd-card-0`` doesn't work any more like the old +modutils). + +The currently available OSS configuration is shown in +/proc/asound/oss/sndstat. This shows in the same syntax of +/dev/sndstat, which is available on the commercial OSS driver. +On ALSA, you can symlink /dev/sndstat to this proc file. + +Please note that the devices listed in this proc file appear only +after the corresponding OSS-emulation module is loaded. Don't worry +even if "NOT ENABLED IN CONFIG" is shown in it. + + +Device Mapping +============== + +ALSA supports the following OSS device files: +:: + + PCM: + /dev/dspX + /dev/adspX + + Mixer: + /dev/mixerX + + MIDI: + /dev/midi0X + /dev/amidi0X + + Sequencer: + /dev/sequencer + /dev/sequencer2 (aka /dev/music) + +where X is the card number from 0 to 7. + +(NOTE: Some distributions have the device files like /dev/midi0 and +/dev/midi1. They are NOT for OSS but for tclmidi, which is +a totally different thing.) + +Unlike the real OSS, ALSA cannot use the device files more than the +assigned ones. 
For example, the first card cannot use /dev/dsp1 or +/dev/dsp2, but only /dev/dsp0 and /dev/adsp0. + +As seen above, PCM and MIDI may have two devices. Usually, the first +PCM device (``hw:0,0`` in ALSA) is mapped to /dev/dsp and the secondary +device (``hw:0,1``) to /dev/adsp (if available). For MIDI, /dev/midi and +/dev/amidi, respectively. + +You can change this device mapping via the module options of +snd-pcm-oss and snd-rawmidi. In the case of PCM, the following +options are available for snd-pcm-oss: + +dsp_map + PCM device number assigned to /dev/dspX + (default = 0) +adsp_map + PCM device number assigned to /dev/adspX + (default = 1) + +For example, to map the third PCM device (``hw:0,2``) to /dev/adsp0, +define like this: +:: + + options snd-pcm-oss adsp_map=2 + +The options take arrays. For configuring the second card, specify +two entries separated by comma. For example, to map the third PCM +device on the second card to /dev/adsp1, define like below: +:: + + options snd-pcm-oss adsp_map=0,2 + +To change the mapping of MIDI devices, the following options are +available for snd-rawmidi: + +midi_map + MIDI device number assigned to /dev/midi0X + (default = 0) +amidi_map + MIDI device number assigned to /dev/amidi0X + (default = 1) + +For example, to assign the third MIDI device on the first card to +/dev/midi00, define as follows: +:: + + options snd-rawmidi midi_map=2 + + +PCM Mode +======== + +As default, ALSA emulates the OSS PCM with so-called plugin layer, +i.e. tries to convert the sample format, rate or channels +automatically when the card doesn't support it natively. +This will lead to some problems for some applications like quake or +wine, especially if they use the card only in the MMAP mode. + +In such a case, you can change the behavior of PCM per application by +writing a command to the proc file. There is a proc file for each PCM +stream, ``/proc/asound/cardX/pcmY[cp]/oss``, where X is the card number +(zero-based), Y the PCM device number (zero-based), and ``p`` is for +playback and ``c`` for capture, respectively. Note that this proc file +exists only after snd-pcm-oss module is loaded. + +The command sequence has the following syntax: +:: + + app_name fragments fragment_size [options] + +``app_name`` is the name of application with (higher priority) or without +path. +``fragments`` specifies the number of fragments or zero if no specific +number is given. +``fragment_size`` is the size of fragment in bytes or zero if not given. +``options`` is the optional parameters. The following options are +available: + +disable + the application tries to open a pcm device for + this channel but does not want to use it. +direct + don't use plugins +block + force block open mode +non-block + force non-block open mode +partial-frag + write also partial fragments (affects playback only) +no-silence + do not fill silence ahead to avoid clicks + +The ``disable`` option is useful when one stream direction (playback or +capture) is not handled correctly by the application although the +hardware itself does support both directions. +The ``direct`` option is used, as mentioned above, to bypass the automatic +conversion and useful for MMAP-applications. 
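+
+As a hypothetical illustration of the full command syntax above (the
+application path and the numbers are made up), an entry that matches an
+application by its full path and forces direct mode with 4 fragments of
+4096 bytes each would look like:
+::
+
+  % echo "/usr/bin/wine 4 4096 direct" > /proc/asound/card0/pcm0p/oss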
+
+As a concrete case, to play back on the first PCM device without
+plugins for quake, send a command via echo like the following:
+::
+
+  % echo "quake 0 0 direct" > /proc/asound/card0/pcm0p/oss
+
+Since quake uses only playback, you may append a second command to
+notify the driver that the capture direction will not be used:
+::
+
+  % echo "quake 0 0 disable" > /proc/asound/card0/pcm0c/oss
+
+The permissions of the proc files depend on the module options of snd.
+By default they are owned by root, so you'll likely need to be
+superuser to send the commands above.
+
+The block and non-block options are used to change the behavior of
+opening the device file.
+
+By default, ALSA behaves like the original OSS drivers, i.e. it does
+not block the open of a busy device file; the -EBUSY error is returned
+in this case.
+
+This blocking behavior can be changed globally via the nonblock_open
+module option of snd-pcm-oss. To use the blocking mode as default for
+OSS devices, define it like the following:
+::
+
+  options snd-pcm-oss nonblock_open=0
+
+The ``partial-frag`` and ``no-silence`` commands are intended for
+optimization use only. The former invokes the write transfer only
+when the whole fragment is filled. The latter stops the automatic
+filling of silence data ahead of the write position. Both are
+disabled by default.
+
+You can check the currently defined configuration by reading the proc
+file. The read image can be sent back to the proc file, hence you
+can save the current configuration
+::
+
+  % cat /proc/asound/card0/pcm0p/oss > /somewhere/oss-cfg
+
+and restore it like
+::
+
+  % cat /somewhere/oss-cfg > /proc/asound/card0/pcm0p/oss
+
+Also, to clear the whole current configuration, send the ``erase``
+command as below:
+::
+
+  % echo "erase" > /proc/asound/card0/pcm0p/oss
+
+
+Mixer Elements
+==============
+
+Since ALSA has a completely different mixer interface, the emulation
+of the OSS mixer is relatively complicated. ALSA builds up a mixer
+element from several different ALSA (mixer) controls based on the name
+string. For example, the volume element SOUND_MIXER_PCM is composed
+of the "PCM Playback Volume" and "PCM Playback Switch" controls for
+the playback direction and of "PCM Capture Volume" and "PCM Capture
+Switch" for the capture direction (if it exists). When the PCM volume
+of OSS is changed, all the volume and switch controls above are
+adjusted automatically.
+
+By default, ALSA uses the following controls for OSS volumes:
+
+==================== ===================== =====
+OSS volume           ALSA control          Index
+==================== ===================== =====
+SOUND_MIXER_VOLUME   Master                0
+SOUND_MIXER_BASS     Tone Control - Bass   0
+SOUND_MIXER_TREBLE   Tone Control - Treble 0
+SOUND_MIXER_SYNTH    Synth                 0
+SOUND_MIXER_PCM      PCM                   0
+SOUND_MIXER_SPEAKER  PC Speaker            0
+SOUND_MIXER_LINE     Line                  0
+SOUND_MIXER_MIC      Mic                   0
+SOUND_MIXER_CD       CD                    0
+SOUND_MIXER_IMIX     Monitor Mix           0
+SOUND_MIXER_ALTPCM   PCM                   1
+SOUND_MIXER_RECLEV   (not assigned)
+SOUND_MIXER_IGAIN    Capture               0
+SOUND_MIXER_OGAIN    Playback              0
+SOUND_MIXER_LINE1    Aux                   0
+SOUND_MIXER_LINE2    Aux                   1
+SOUND_MIXER_LINE3    Aux                   2
+SOUND_MIXER_DIGITAL1 Digital               0
+SOUND_MIXER_DIGITAL2 Digital               1
+SOUND_MIXER_DIGITAL3 Digital               2
+SOUND_MIXER_PHONEIN  Phone                 0
+SOUND_MIXER_PHONEOUT Phone                 1
+SOUND_MIXER_VIDEO    Video                 0
+SOUND_MIXER_RADIO    Radio                 0
+SOUND_MIXER_MONITOR  Monitor               0
+==================== ===================== =====
+
+The second column is the base-string of the corresponding ALSA
+control.
In fact, the controls with ``XXX [Playback|Capture] +[Volume|Switch]`` will be checked in addition. + +The current assignment of these mixer elements is listed in the proc +file, /proc/asound/cardX/oss_mixer, which will be like the following +:: + + VOLUME "Master" 0 + BASS "" 0 + TREBLE "" 0 + SYNTH "" 0 + PCM "PCM" 0 + ... + +where the first column is the OSS volume element, the second column +the base-string of the corresponding ALSA control, and the third the +control index. When the string is empty, it means that the +corresponding OSS control is not available. + +For changing the assignment, you can write the configuration to this +proc file. For example, to map "Wave Playback" to the PCM volume, +send the command like the following: +:: + + % echo 'VOLUME "Wave Playback" 0' > /proc/asound/card0/oss_mixer + +The command is exactly as same as listed in the proc file. You can +change one or more elements, one volume per line. In the last +example, both "Wave Playback Volume" and "Wave Playback Switch" will +be affected when PCM volume is changed. + +Like the case of PCM proc file, the permission of proc files depend on +the module options of snd. you'll likely need to be superuser for +sending the command above. + +As well as in the case of PCM proc file, you can save and restore the +current mixer configuration by reading and writing the whole file +image. + + +Duplex Streams +============== + +Note that when attempting to use a single device file for playback and +capture, the OSS API provides no way to set the format, sample rate or +number of channels different in each direction. Thus +:: + + io_handle = open("device", O_RDWR) + +will only function correctly if the values are the same in each direction. + +To use different values in the two directions, use both +:: + + input_handle = open("device", O_RDONLY) + output_handle = open("device", O_WRONLY) + +and set the values for the corresponding handle. + + +Unsupported Features +==================== + +MMAP on ICE1712 driver +---------------------- +ICE1712 supports only the unconventional format, interleaved +10-channels 24bit (packed in 32bit) format. Therefore you cannot mmap +the buffer as the conventional (mono or 2-channels, 8 or 16bit) format +on OSS. diff --git a/Documentation/sound/designs/powersave.rst b/Documentation/sound/designs/powersave.rst new file mode 100644 index 0000000000..138157452e --- /dev/null +++ b/Documentation/sound/designs/powersave.rst @@ -0,0 +1,43 @@ +========================== +Notes on Power-Saving Mode +========================== + +AC97 and HD-audio drivers have the automatic power-saving mode. +This feature is enabled via Kconfig ``CONFIG_SND_AC97_POWER_SAVE`` +and ``CONFIG_SND_HDA_POWER_SAVE`` options, respectively. + +With the automatic power-saving, the driver turns off the codec power +appropriately when no operation is required. When no applications use +the device and/or no analog loopback is set, the power disablement is +done fully or partially. It'll save a certain power consumption, thus +good for laptops (even for desktops). + +The time-out for automatic power-off can be specified via ``power_save`` +module option of snd-ac97-codec and snd-hda-intel modules. Specify +the time-out value in seconds. 0 means to disable the automatic +power-saving. The default value of timeout is given via +``CONFIG_SND_AC97_POWER_SAVE_DEFAULT`` and +``CONFIG_SND_HDA_POWER_SAVE_DEFAULT`` Kconfig options. 
Setting this to 1
+(the minimum value) isn't recommended because many applications try to
+reopen the device frequently. 10 would be a good choice for normal
+operations.
+
+The ``power_save`` option is exported as writable. This means you can
+adjust the value via sysfs on the fly. For example, to turn on the
+automatic power-save mode with a 10-second timeout, write to
+``/sys/module/snd_ac97_codec/parameters/power_save`` (usually as root):
+::
+
+  # echo 10 > /sys/module/snd_ac97_codec/parameters/power_save
+
+
+Note that you might hear a click/pop noise when changing the power
+state. Also, it often takes a certain time to wake up from the
+power-down state to the active state. These are often hard to fix, so
+don't send extra bug reports unless you have a fix patch ;-)
+
+For the HD-audio interface, there is another module option,
+power_save_controller. This enables/disables the power-save mode of
+the controller side. Turning it on may reduce the power consumption a
+bit more, but might result in a longer wake-up time and click noise.
+Try turning it off if you experience such problems too often.
diff --git a/Documentation/sound/designs/procfile.rst b/Documentation/sound/designs/procfile.rst
new file mode 100644
index 0000000000..e9f7e0cbdc
--- /dev/null
+++ b/Documentation/sound/designs/procfile.rst
@@ -0,0 +1,238 @@
+==========================
+Proc Files of ALSA Drivers
+==========================
+
+Takashi Iwai <tiwai@suse.de>
+
+General
+=======
+
+ALSA has its own proc tree, /proc/asound. A lot of useful information
+is found in this tree. When you encounter a problem and need
+debugging, check the files listed in the following sections.
+
+Each card has its subtree cardX, where X is from 0 to 7. The
+card-specific files are stored in the ``card*`` subdirectories.
+
+
+Global Information
+==================
+
+cards
+  Shows the list of currently configured ALSA drivers,
+  index, the id string, short and long descriptions.
+
+version
+  Shows the version string and compile date.
+
+modules
+  Lists the module used by each card.
+
+devices
+  Lists the ALSA native device mappings.
+
+meminfo
+  Shows the status of pages allocated via ALSA drivers.
+  Appears only when ``CONFIG_SND_DEBUG=y``.
+
+hwdep
+  Lists the currently available hwdep devices in the format of
+  ``<card>-<device>: <name>``
+
+pcm
+  Lists the currently available PCM devices in the format of
+  ``<card>-<device>: <id>: <name> : <sub-streams>``
+
+timer
+  Lists the currently available timer devices.
+
+
+oss/devices
+  Lists the OSS device mappings.
+
+oss/sndstat
+  Provides output compatible with /dev/sndstat.
+  You can symlink this to /dev/sndstat.
+
+
+Card Specific Files
+===================
+
+The card-specific files are found in ``/proc/asound/card*`` directories.
+Some drivers (e.g. cmipci) have their own proc entries for the
+register dump, etc. (e.g. ``/proc/asound/card*/cmipci`` shows the
+register dump). These files are really helpful for debugging.
+
+When PCM devices are available on this card, you can see directories
+like pcm0p or pcm1c. They hold the PCM information for each PCM
+stream. The number after ``pcm`` is the PCM device number from 0, and
+the last ``p`` or ``c`` means playback or capture direction. The files
+in this subtree are described later.
+
+The status of MIDI I/O is found in ``midi*`` files. They show the
+device name and the received/transmitted bytes through the MIDI
+device.
+
+When the card is equipped with AC97 codecs, there are ``codec97#*``
+subdirectories (described later).
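+
+As a quick sketch of how these entries are typically inspected (card 0
+and the entry names below are only examples; the actual contents
+depend on the driver):
+::
+
+  % ls /proc/asound/card0
+  % cat /proc/asound/card0/pcm0p/info
+  % cat /proc/asound/card0/midi0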
+ +When the OSS mixer emulation is enabled (and the module is loaded), +oss_mixer file appears here, too. This shows the current mapping of +OSS mixer elements to the ALSA control elements. You can change the +mapping by writing to this device. Read OSS-Emulation.txt for +details. + + +PCM Proc Files +============== + +``card*/pcm*/info`` + The general information of this PCM device: card #, device #, + substreams, etc. + +``card*/pcm*/xrun_debug`` + This file appears when ``CONFIG_SND_DEBUG=y`` and + ``CONFIG_SND_PCM_XRUN_DEBUG=y``. + This shows the status of xrun (= buffer overrun/xrun) and + invalid PCM position debug/check of ALSA PCM middle layer. + It takes an integer value, can be changed by writing to this + file, such as:: + + # echo 5 > /proc/asound/card0/pcm0p/xrun_debug + + The value consists of the following bit flags: + + * bit 0 = Enable XRUN/jiffies debug messages + * bit 1 = Show stack trace at XRUN / jiffies check + * bit 2 = Enable additional jiffies check + + When the bit 0 is set, the driver will show the messages to + kernel log when an xrun is detected. The debug message is + shown also when the invalid H/W pointer is detected at the + update of periods (usually called from the interrupt + handler). + + When the bit 1 is set, the driver will show the stack trace + additionally. This may help the debugging. + + Since 2.6.30, this option can enable the hwptr check using + jiffies. This detects spontaneous invalid pointer callback + values, but can be lead to too much corrections for a (mostly + buggy) hardware that doesn't give smooth pointer updates. + This feature is enabled via the bit 2. + +``card*/pcm*/sub*/info`` + The general information of this PCM sub-stream. + +``card*/pcm*/sub*/status`` + The current status of this PCM sub-stream, elapsed time, + H/W position, etc. + +``card*/pcm*/sub*/hw_params`` + The hardware parameters set for this sub-stream. + +``card*/pcm*/sub*/sw_params`` + The soft parameters set for this sub-stream. + +``card*/pcm*/sub*/prealloc`` + The buffer pre-allocation information. + +``card*/pcm*/sub*/xrun_injection`` + Triggers an XRUN to the running stream when any value is + written to this proc file. Used for fault injection. + This entry is write-only. + +AC97 Codec Information +====================== + +``card*/codec97#*/ac97#?-?`` + Shows the general information of this AC97 codec chip, such as + name, capabilities, set up. + +``card*/codec97#0/ac97#?-?+regs`` + Shows the AC97 register dump. Useful for debugging. + + When CONFIG_SND_DEBUG is enabled, you can write to this file for + changing an AC97 register directly. Pass two hex numbers. + For example, + +:: + + # echo 02 9f1f > /proc/asound/card0/codec97#0/ac97#0-0+regs + + +USB Audio Streams +================= + +``card*/stream*`` + Shows the assignment and the current status of each audio stream + of the given card. This information is very useful for debugging. + + +HD-Audio Codecs +=============== + +``card*/codec#*`` + Shows the general codec information and the attribute of each + widget node. + +``card*/eld#*`` + Available for HDMI or DisplayPort interfaces. + Shows ELD(EDID Like Data) info retrieved from the attached HDMI sink, + and describes its audio capabilities and configurations. + + Some ELD fields may be modified by doing ``echo name hex_value > eld#*``. + Only do this if you are sure the HDMI sink provided value is wrong. + And if that makes your HDMI audio work, please report to us so that we + can fix it in future kernel releases. 
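+
+When collecting the HD-audio or HDMI information above for a bug
+report, the relevant files can simply be dumped; a sketch (the card
+number and the exact file names differ from system to system):
+::
+
+  % cat /proc/asound/card0/codec#0 > codec0.txt
+  % cat /proc/asound/card0/eld#*
+  % cat /proc/asound/card1/stream0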
+ + +Sequencer Information +===================== + +seq/drivers + Lists the currently available ALSA sequencer drivers. + +seq/clients + Shows the list of currently available sequencer clients and + ports. The connection status and the running status are shown + in this file, too. + +seq/queues + Lists the currently allocated/running sequencer queues. + +seq/timer + Lists the currently allocated/running sequencer timers. + +seq/oss + Lists the OSS-compatible sequencer stuffs. + + +Help For Debugging? +=================== + +When the problem is related with PCM, first try to turn on xrun_debug +mode. This will give you the kernel messages when and where xrun +happened. + +If it's really a bug, report it with the following information: + +- the name of the driver/card, show in ``/proc/asound/cards`` +- the register dump, if available (e.g. ``card*/cmipci``) + +when it's a PCM problem, + +- set-up of PCM, shown in hw_parms, sw_params, and status in the PCM + sub-stream directory + +when it's a mixer problem, + +- AC97 proc files, ``codec97#*/*`` files + +for USB audio/midi, + +- output of ``lsusb -v`` +- ``stream*`` files in card directory + + +The ALSA bug-tracking system is found at: +https://bugtrack.alsa-project.org/alsa-bug/ diff --git a/Documentation/sound/designs/seq-oss.rst b/Documentation/sound/designs/seq-oss.rst new file mode 100644 index 0000000000..ec6304a074 --- /dev/null +++ b/Documentation/sound/designs/seq-oss.rst @@ -0,0 +1,371 @@ +=============================== +OSS Sequencer Emulation on ALSA +=============================== + +Copyright (c) 1998,1999 by Takashi Iwai + +ver.0.1.8; Nov. 16, 1999 + +Description +=========== + +This directory contains the OSS sequencer emulation driver on ALSA. Note +that this program is still in the development state. + +What this does - it provides the emulation of the OSS sequencer, access +via ``/dev/sequencer`` and ``/dev/music`` devices. +The most of applications using OSS can run if the appropriate ALSA +sequencer is prepared. + +The following features are emulated by this driver: + +* Normal sequencer and MIDI events: + + They are converted to the ALSA sequencer events, and sent to the + corresponding port. + +* Timer events: + + The timer is not selectable by ioctl. The control rate is fixed to + 100 regardless of HZ. That is, even on Alpha system, a tick is always + 1/100 second. The base rate and tempo can be changed in ``/dev/music``. + +* Patch loading: + + It purely depends on the synth drivers whether it's supported since + the patch loading is realized by callback to the synth driver. + +* I/O controls: + + Most of controls are accepted. Some controls + are dependent on the synth driver, as well as even on original OSS. + +Furthermore, you can find the following advanced features: + +* Better queue mechanism: + + The events are queued before processing them. + +* Multiple applications: + + You can run two or more applications simultaneously (even for OSS + sequencer)! + However, each MIDI device is exclusive - that is, if a MIDI device + is opened once by some application, other applications can't use + it. No such a restriction in synth devices. + +* Real-time event processing: + + The events can be processed in real time without using out of bound + ioctl. To switch to real-time mode, send ABSTIME 0 event. The followed + events will be processed in real-time without queued. To switch off the + real-time mode, send RELTIME 0 event. 
+ +* ``/proc`` interface: + + The status of applications and devices can be shown via + ``/proc/asound/seq/oss`` at any time. In the later version, + configuration will be changed via ``/proc`` interface, too. + + +Installation +============ + +Run configure script with both sequencer support (``--with-sequencer=yes``) +and OSS emulation (``--with-oss=yes``) options. A module ``snd-seq-oss.o`` +will be created. If the synth module of your sound card supports for OSS +emulation (so far, only Emu8000 driver), this module will be loaded +automatically. +Otherwise, you need to load this module manually. + +At beginning, this module probes all the MIDI ports which have been +already connected to the sequencer. Once after that, the creation and deletion +of ports are watched by announcement mechanism of ALSA sequencer. + +The available synth and MIDI devices can be found in proc interface. +Run ``cat /proc/asound/seq/oss``, and check the devices. For example, +if you use an AWE64 card, you'll see like the following: +:: + + OSS sequencer emulation version 0.1.8 + ALSA client number 63 + ALSA receiver port 0 + + Number of applications: 0 + + Number of synth devices: 1 + synth 0: [EMU8000] + type 0x1 : subtype 0x20 : voices 32 + capabilities : ioctl enabled / load_patch enabled + + Number of MIDI devices: 3 + midi 0: [Emu8000 Port-0] ALSA port 65:0 + capability write / opened none + + midi 1: [Emu8000 Port-1] ALSA port 65:1 + capability write / opened none + + midi 2: [0: MPU-401 (UART)] ALSA port 64:0 + capability read/write / opened none + +Note that the device number may be different from the information of +``/proc/asound/oss-devices`` or ones of the original OSS driver. +Use the device number listed in ``/proc/asound/seq/oss`` +to play via OSS sequencer emulation. + +Using Synthesizer Devices +========================= + +Run your favorite program. I've tested playmidi-2.4, awemidi-0.4.3, gmod-3.1 +and xmp-1.1.5. You can load samples via ``/dev/sequencer`` like sfxload, +too. + +If the lowlevel driver supports multiple access to synth devices (like +Emu8000 driver), two or more applications are allowed to run at the same +time. + +Using MIDI Devices +================== + +So far, only MIDI output was tested. MIDI input was not checked at all, +but hopefully it will work. Use the device number listed in +``/proc/asound/seq/oss``. +Be aware that these numbers are mostly different from the list in +``/proc/asound/oss-devices``. + +Module Options +============== + +The following module options are available: + +maxqlen + specifies the maximum read/write queue length. This queue is private + for OSS sequencer, so that it is independent from the queue length of ALSA + sequencer. Default value is 1024. + +seq_oss_debug + specifies the debug level and accepts zero (= no debug message) or + positive integer. Default value is 0. + +Queue Mechanism +=============== + +OSS sequencer emulation uses an ALSA priority queue. The +events from ``/dev/sequencer`` are processed and put onto the queue +specified by module option. + +All the events from ``/dev/sequencer`` are parsed at beginning. +The timing events are also parsed at this moment, so that the events may +be processed in real-time. Sending an event ABSTIME 0 switches the operation +mode to real-time mode, and sending an event RELTIME 0 switches it off. +In the real-time mode, all events are dispatched immediately. + +The queued events are dispatched to the corresponding ALSA sequencer +ports after scheduled time by ALSA sequencer dispatcher. 
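+
+The length of this queue is controlled by the ``maxqlen`` module
+option described above. A hypothetical ``/etc/modprobe.d`` entry that
+enlarges the queue and enables debug messages could look like this:
+::
+
+  options snd-seq-oss maxqlen=2048 seq_oss_debug=1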
+ +If the write-queue is full, the application sleeps until a certain amount +(as default one half) becomes empty in blocking mode. The synchronization +to write timing was implemented, too. + +The input from MIDI devices or echo-back events are stored on read FIFO +queue. If application reads ``/dev/sequencer`` in blocking mode, the +process will be awaked. + +Interface to Synthesizer Device +=============================== + +Registration +------------ + +To register an OSS synthesizer device, use snd_seq_oss_synth_register() +function: +:: + + int snd_seq_oss_synth_register(char *name, int type, int subtype, int nvoices, + snd_seq_oss_callback_t *oper, void *private_data) + +The arguments ``name``, ``type``, ``subtype`` and ``nvoices`` +are used for making the appropriate synth_info structure for ioctl. The +return value is an index number of this device. This index must be remembered +for unregister. If registration is failed, -errno will be returned. + +To release this device, call snd_seq_oss_synth_unregister() function: +:: + + int snd_seq_oss_synth_unregister(int index) + +where the ``index`` is the index number returned by register function. + +Callbacks +--------- + +OSS synthesizer devices have capability for sample downloading and ioctls +like sample reset. In OSS emulation, these special features are realized +by using callbacks. The registration argument oper is used to specify these +callbacks. The following callback functions must be defined: +:: + + snd_seq_oss_callback_t: + int (*open)(snd_seq_oss_arg_t *p, void *closure); + int (*close)(snd_seq_oss_arg_t *p); + int (*ioctl)(snd_seq_oss_arg_t *p, unsigned int cmd, unsigned long arg); + int (*load_patch)(snd_seq_oss_arg_t *p, int format, const char *buf, int offs, int count); + int (*reset)(snd_seq_oss_arg_t *p); + +Except for ``open`` and ``close`` callbacks, they are allowed to be NULL. + +Each callback function takes the argument type ``snd_seq_oss_arg_t`` as the +first argument. +:: + + struct snd_seq_oss_arg_t { + int app_index; + int file_mode; + int seq_mode; + snd_seq_addr_t addr; + void *private_data; + int event_passing; + }; + +The first three fields, ``app_index``, ``file_mode`` and ``seq_mode`` +are initialized by OSS sequencer. The ``app_index`` is the application +index which is unique to each application opening OSS sequencer. The +``file_mode`` is bit-flags indicating the file operation mode. See +``seq_oss.h`` for its meaning. The ``seq_mode`` is sequencer operation +mode. In the current version, only ``SND_OSSSEQ_MODE_SYNTH`` is used. + +The next two fields, ``addr`` and ``private_data``, must be +filled by the synth driver at open callback. The ``addr`` contains +the address of ALSA sequencer port which is assigned to this device. If +the driver allocates memory for ``private_data``, it must be released +in close callback by itself. + +The last field, ``event_passing``, indicates how to translate note-on +/ off events. In ``PROCESS_EVENTS`` mode, the note 255 is regarded +as velocity change, and key pressure event is passed to the port. In +``PASS_EVENTS`` mode, all note on/off events are passed to the port +without modified. ``PROCESS_KEYPRESS`` mode checks the note above 128 +and regards it as key pressure event (mainly for Emu8000 driver). + +Open Callback +------------- + +The ``open`` is called at each time this device is opened by an application +using OSS sequencer. This must not be NULL. Typically, the open callback +does the following procedure: + +#. Allocate private data record. +#. 
Create an ALSA sequencer port. +#. Set the new port address on ``arg->addr``. +#. Set the private data record pointer on ``arg->private_data``. + +Note that the type bit-flags in port_info of this synth port must NOT contain +``TYPE_MIDI_GENERIC`` +bit. Instead, ``TYPE_SPECIFIC`` should be used. Also, ``CAP_SUBSCRIPTION`` +bit should NOT be included, too. This is necessary to tell it from other +normal MIDI devices. If the open procedure succeeded, return zero. Otherwise, +return -errno. + +Ioctl Callback +-------------- + +The ``ioctl`` callback is called when the sequencer receives device-specific +ioctls. The following two ioctls should be processed by this callback: + +IOCTL_SEQ_RESET_SAMPLES + reset all samples on memory -- return 0 + +IOCTL_SYNTH_MEMAVL + return the available memory size + +FM_4OP_ENABLE + can be ignored usually + +The other ioctls are processed inside the sequencer without passing to +the lowlevel driver. + +Load_Patch Callback +------------------- + +The ``load_patch`` callback is used for sample-downloading. This callback +must read the data on user-space and transfer to each device. Return 0 +if succeeded, and -errno if failed. The format argument is the patch key +in patch_info record. The buf is user-space pointer where patch_info record +is stored. The offs can be ignored. The count is total data size of this +sample data. + +Close Callback +-------------- + +The ``close`` callback is called when this device is closed by the +application. If any private data was allocated in open callback, it must +be released in the close callback. The deletion of ALSA port should be +done here, too. This callback must not be NULL. + +Reset Callback +-------------- + +The ``reset`` callback is called when sequencer device is reset or +closed by applications. The callback should turn off the sounds on the +relevant port immediately, and initialize the status of the port. If this +callback is undefined, OSS seq sends a ``HEARTBEAT`` event to the +port. + +Events +====== + +Most of the events are processed by sequencer and translated to the adequate +ALSA sequencer events, so that each synth device can receive by input_event +callback of ALSA sequencer port. The following ALSA events should be +implemented by the driver: + +============= =================== +ALSA event Original OSS events +============= =================== +NOTEON SEQ_NOTEON, MIDI_NOTEON +NOTE SEQ_NOTEOFF, MIDI_NOTEOFF +KEYPRESS MIDI_KEY_PRESSURE +CHANPRESS SEQ_AFTERTOUCH, MIDI_CHN_PRESSURE +PGMCHANGE SEQ_PGMCHANGE, MIDI_PGM_CHANGE +PITCHBEND SEQ_CONTROLLER(CTRL_PITCH_BENDER), + MIDI_PITCH_BEND +CONTROLLER MIDI_CTL_CHANGE, + SEQ_BALANCE (with CTL_PAN) +CONTROL14 SEQ_CONTROLLER +REGPARAM SEQ_CONTROLLER(CTRL_PITCH_BENDER_RANGE) +SYSEX SEQ_SYSEX +============= =================== + +The most of these behavior can be realized by MIDI emulation driver +included in the Emu8000 lowlevel driver. In the future release, this module +will be independent. + +Some OSS events (``SEQ_PRIVATE`` and ``SEQ_VOLUME`` events) are passed as event +type SND_SEQ_OSS_PRIVATE. The OSS sequencer passes these event 8 byte +packets without any modification. The lowlevel driver should process these +events appropriately. + +Interface to MIDI Device +======================== + +Since the OSS emulation probes the creation and deletion of ALSA MIDI +sequencer ports automatically by receiving announcement from ALSA +sequencer, the MIDI devices don't need to be registered explicitly +like synth devices. 
+However, the MIDI port_info registered to ALSA sequencer must include +a group name ``SND_SEQ_GROUP_DEVICE`` and a capability-bit +``CAP_READ`` or ``CAP_WRITE``. Also, subscription capabilities, +``CAP_SUBS_READ`` or ``CAP_SUBS_WRITE``, must be defined, too. If +these conditions are not satisfied, the port is not registered as OSS +sequencer MIDI device. + +The events via MIDI devices are parsed in OSS sequencer and converted +to the corresponding ALSA sequencer events. The input from MIDI sequencer +is also converted to MIDI byte events by OSS sequencer. This works just +a reverse way of seq_midi module. + +Known Problems / TODO's +======================= + +* Patch loading via ALSA instrument layer is not implemented yet. + diff --git a/Documentation/sound/designs/timestamping.rst b/Documentation/sound/designs/timestamping.rst new file mode 100644 index 0000000000..7c7ecf5dbc --- /dev/null +++ b/Documentation/sound/designs/timestamping.rst @@ -0,0 +1,215 @@ +===================== +ALSA PCM Timestamping +===================== + +The ALSA API can provide two different system timestamps: + +- Trigger_tstamp is the system time snapshot taken when the .trigger + callback is invoked. This snapshot is taken by the ALSA core in the + general case, but specific hardware may have synchronization + capabilities or conversely may only be able to provide a correct + estimate with a delay. In the latter two cases, the low-level driver + is responsible for updating the trigger_tstamp at the most appropriate + and precise moment. Applications should not rely solely on the first + trigger_tstamp but update their internal calculations if the driver + provides a refined estimate with a delay. + +- tstamp is the current system timestamp updated during the last + event or application query. + The difference (tstamp - trigger_tstamp) defines the elapsed time. + +The ALSA API provides two basic pieces of information, avail +and delay, which combined with the trigger and current system +timestamps allow for applications to keep track of the 'fullness' of +the ring buffer and the amount of queued samples. + +The use of these different pointers and time information depends on +the application needs: + +- ``avail`` reports how much can be written in the ring buffer +- ``delay`` reports the time it will take to hear a new sample after all + queued samples have been played out. + +When timestamps are enabled, the avail/delay information is reported +along with a snapshot of system time. Applications can select from +``CLOCK_REALTIME`` (NTP corrections including going backwards), +``CLOCK_MONOTONIC`` (NTP corrections but never going backwards), +``CLOCK_MONOTIC_RAW`` (without NTP corrections) and change the mode +dynamically with sw_params + + +The ALSA API also provide an audio_tstamp which reflects the passage +of time as measured by different components of audio hardware. In +ascii-art, this could be represented as follows (for the playback +case): +:: + + --------------------------------------------------------------> time + ^ ^ ^ ^ ^ + | | | | | + analog link dma app FullBuffer + time time time time time + | | | | | + |< codec delay >|<--hw delay-->|<queued samples>|<---avail->| + |<----------------- delay---------------------->| | + |<----ring buffer length---->| + + +The analog time is taken at the last stage of the playback, as close +as possible to the actual transducer + +The link time is taken at the output of the SoC/chipset as the samples +are pushed on a link. 
The link time can be directly measured if +supported in hardware by sample counters or wallclocks (e.g. with +HDAudio 24MHz or PTP clock for networked solutions) or indirectly +estimated (e.g. with the frame counter in USB). + +The DMA time is measured using counters - typically the least reliable +of all measurements due to the bursty nature of DMA transfers. + +The app time corresponds to the time tracked by an application after +writing in the ring buffer. + +The application can query the hardware capabilities, define which +audio time it wants reported by selecting the relevant settings in +audio_tstamp_config fields, thus get an estimate of the timestamp +accuracy. It can also request the delay-to-analog be included in the +measurement. Direct access to the link time is very interesting on +platforms that provide an embedded DSP; measuring directly the link +time with dedicated hardware, possibly synchronized with system time, +removes the need to keep track of internal DSP processing times and +latency. + +In case the application requests an audio tstamp that is not supported +in hardware/low-level driver, the type is overridden as DEFAULT and the +timestamp will report the DMA time based on the hw_pointer value. + +For backwards compatibility with previous implementations that did not +provide timestamp selection, with a zero-valued COMPAT timestamp type +the results will default to the HDAudio wall clock for playback +streams and to the DMA time (hw_ptr) in all other cases. + +The audio timestamp accuracy can be returned to user-space, so that +appropriate decisions are made: + +- for dma time (default), the granularity of the transfers can be + inferred from the steps between updates and in turn provide + information on how much the application pointer can be rewound + safely. + +- the link time can be used to track long-term drifts between audio + and system time using the (tstamp-trigger_tstamp)/audio_tstamp + ratio, the precision helps define how much smoothing/low-pass + filtering is required. The link time can be either reset on startup + or reported as is (the latter being useful to compare progress of + different streams - but may require the wallclock to be always + running and not wrap-around during idle periods). If supported in + hardware, the absolute link time could also be used to define a + precise start time (patches WIP) + +- including the delay in the audio timestamp may + counter-intuitively not increase the precision of timestamps, e.g. if a + codec includes variable-latency DSP processing or a chain of + hardware components the delay is typically not known with precision. + +The accuracy is reported in nanosecond units (using an unsigned 32-bit +word), which gives a max precision of 4.29s, more than enough for +audio applications... + +Due to the varied nature of timestamping needs, even for a single +application, the audio_tstamp_config can be changed dynamically. In +the ``STATUS`` ioctl, the parameters are read-only and do not allow for +any application selection. To work around this limitation without +impacting legacy applications, a new ``STATUS_EXT`` ioctl is introduced +with read/write parameters. ALSA-lib will be modified to make use of +``STATUS_EXT`` and effectively deprecate ``STATUS``. + +The ALSA API only allows for a single audio timestamp to be reported +at a time. 
This is a conscious design decision, reading the audio +timestamps from hardware registers or from IPC takes time, the more +timestamps are read the more imprecise the combined measurements +are. To avoid any interpretation issues, a single (system, audio) +timestamp is reported. Applications that need different timestamps +will be required to issue multiple queries and perform an +interpolation of the results + +In some hardware-specific configuration, the system timestamp is +latched by a low-level audio subsystem, and the information provided +back to the driver. Due to potential delays in the communication with +the hardware, there is a risk of misalignment with the avail and delay +information. To make sure applications are not confused, a +driver_timestamp field is added in the snd_pcm_status structure; this +timestamp shows when the information is put together by the driver +before returning from the ``STATUS`` and ``STATUS_EXT`` ioctl. in most cases +this driver_timestamp will be identical to the regular system tstamp. + +Examples of timestamping with HDAudio: + +1. DMA timestamp, no compensation for DMA+analog delay +:: + + $ ./audio_time -p --ts_type=1 + playback: systime: 341121338 nsec, audio time 342000000 nsec, systime delta -878662 + playback: systime: 426236663 nsec, audio time 427187500 nsec, systime delta -950837 + playback: systime: 597080580 nsec, audio time 598000000 nsec, systime delta -919420 + playback: systime: 682059782 nsec, audio time 683020833 nsec, systime delta -961051 + playback: systime: 852896415 nsec, audio time 853854166 nsec, systime delta -957751 + playback: systime: 937903344 nsec, audio time 938854166 nsec, systime delta -950822 + +2. DMA timestamp, compensation for DMA+analog delay +:: + + $ ./audio_time -p --ts_type=1 -d + playback: systime: 341053347 nsec, audio time 341062500 nsec, systime delta -9153 + playback: systime: 426072447 nsec, audio time 426062500 nsec, systime delta 9947 + playback: systime: 596899518 nsec, audio time 596895833 nsec, systime delta 3685 + playback: systime: 681915317 nsec, audio time 681916666 nsec, systime delta -1349 + playback: systime: 852741306 nsec, audio time 852750000 nsec, systime delta -8694 + +3. link timestamp, compensation for DMA+analog delay +:: + + $ ./audio_time -p --ts_type=2 -d + playback: systime: 341060004 nsec, audio time 341062791 nsec, systime delta -2787 + playback: systime: 426242074 nsec, audio time 426244875 nsec, systime delta -2801 + playback: systime: 597080992 nsec, audio time 597084583 nsec, systime delta -3591 + playback: systime: 682084512 nsec, audio time 682088291 nsec, systime delta -3779 + playback: systime: 852936229 nsec, audio time 852940916 nsec, systime delta -4687 + playback: systime: 938107562 nsec, audio time 938112708 nsec, systime delta -5146 + +Example 1 shows that the timestamp at the DMA level is close to 1ms +ahead of the actual playback time (as a side time this sort of +measurement can help define rewind safeguards). Compensating for the +DMA-link delay in example 2 helps remove the hardware buffering but +the information is still very jittery, with up to one sample of +error. In example 3 where the timestamps are measured with the link +wallclock, the timestamps show a monotonic behavior and a lower +dispersion. + +Example 3 and 4 are with USB audio class. Example 3 shows a high +offset between audio time and system time due to buffering. 
Example 4 +shows how compensating for the delay exposes a 1ms accuracy (due to +the use of the frame counter by the driver) + +Example 3: DMA timestamp, no compensation for delay, delta of ~5ms +:: + + $ ./audio_time -p -Dhw:1 -t1 + playback: systime: 120174019 nsec, audio time 125000000 nsec, systime delta -4825981 + playback: systime: 245041136 nsec, audio time 250000000 nsec, systime delta -4958864 + playback: systime: 370106088 nsec, audio time 375000000 nsec, systime delta -4893912 + playback: systime: 495040065 nsec, audio time 500000000 nsec, systime delta -4959935 + playback: systime: 620038179 nsec, audio time 625000000 nsec, systime delta -4961821 + playback: systime: 745087741 nsec, audio time 750000000 nsec, systime delta -4912259 + playback: systime: 870037336 nsec, audio time 875000000 nsec, systime delta -4962664 + +Example 4: DMA timestamp, compensation for delay, delay of ~1ms +:: + + $ ./audio_time -p -Dhw:1 -t1 -d + playback: systime: 120190520 nsec, audio time 120000000 nsec, systime delta 190520 + playback: systime: 245036740 nsec, audio time 244000000 nsec, systime delta 1036740 + playback: systime: 370034081 nsec, audio time 369000000 nsec, systime delta 1034081 + playback: systime: 495159907 nsec, audio time 494000000 nsec, systime delta 1159907 + playback: systime: 620098824 nsec, audio time 619000000 nsec, systime delta 1098824 + playback: systime: 745031847 nsec, audio time 744000000 nsec, systime delta 1031847 diff --git a/Documentation/sound/designs/tracepoints.rst b/Documentation/sound/designs/tracepoints.rst new file mode 100644 index 0000000000..b0a7e30101 --- /dev/null +++ b/Documentation/sound/designs/tracepoints.rst @@ -0,0 +1,172 @@ +=================== +Tracepoints in ALSA +=================== + +2017/07/02 +Takasahi Sakamoto + +Tracepoints in ALSA PCM core +============================ + +ALSA PCM core registers ``snd_pcm`` subsystem to kernel tracepoint system. +This subsystem includes two categories of tracepoints; for state of PCM buffer +and for processing of PCM hardware parameters. These tracepoints are available +when corresponding kernel configurations are enabled. When ``CONFIG_SND_DEBUG`` +is enabled, the latter tracepoints are available. When additional +``SND_PCM_XRUN_DEBUG`` is enabled too, the former trace points are enabled. + +Tracepoints for state of PCM buffer +------------------------------------ + +This category includes four tracepoints; ``hwptr``, ``applptr``, ``xrun`` and +``hw_ptr_error``. + +Tracepoints for processing of PCM hardware parameters +----------------------------------------------------- + +This category includes two tracepoints; ``hw_mask_param`` and +``hw_interval_param``. + +In a design of ALSA PCM core, data transmission is abstracted as PCM substream. +Applications manage PCM substream to maintain data transmission for PCM frames. +Before starting the data transmission, applications need to configure PCM +substream. In this procedure, PCM hardware parameters are decided by +interaction between applications and ALSA PCM core. Once decided, runtime of +the PCM substream keeps the parameters. + +The parameters are described in struct snd_pcm_hw_params. This +structure includes several types of parameters. Applications set preferable +value to these parameters, then execute ioctl(2) with SNDRV_PCM_IOCTL_HW_REFINE +or SNDRV_PCM_IOCTL_HW_PARAMS. The former is used just for refining available +set of parameters. The latter is used for an actual decision of the parameters. 
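+
+To observe this interaction at runtime, the two tracepoints can be
+enabled via tracefs before running an application (provided the kernel
+configuration above enables them). Below is a sketch; it assumes that
+tracefs is mounted at /sys/kernel/tracing (older systems may use
+/sys/kernel/debug/tracing) and that the events appear under the
+``snd_pcm`` group.
+
+::
+
+  # cd /sys/kernel/tracing
+  # echo 1 > events/snd_pcm/hw_mask_param/enable
+  # echo 1 > events/snd_pcm/hw_interval_param/enable
+  # cat trace_pipe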
+ +The struct snd_pcm_hw_params structure has below members: + +``flags`` + Configurable. ALSA PCM core and some drivers handle this flag to select + convenient parameters or change their behaviour. +``masks`` + Configurable. This type of parameter is described in + struct snd_mask and represent mask values. As of PCM protocol + v2.0.13, three types are defined. + + - SNDRV_PCM_HW_PARAM_ACCESS + - SNDRV_PCM_HW_PARAM_FORMAT + - SNDRV_PCM_HW_PARAM_SUBFORMAT +``intervals`` + Configurable. This type of parameter is described in + struct snd_interval and represent values with a range. As of + PCM protocol v2.0.13, twelve types are defined. + + - SNDRV_PCM_HW_PARAM_SAMPLE_BITS + - SNDRV_PCM_HW_PARAM_FRAME_BITS + - SNDRV_PCM_HW_PARAM_CHANNELS + - SNDRV_PCM_HW_PARAM_RATE + - SNDRV_PCM_HW_PARAM_PERIOD_TIME + - SNDRV_PCM_HW_PARAM_PERIOD_SIZE + - SNDRV_PCM_HW_PARAM_PERIOD_BYTES + - SNDRV_PCM_HW_PARAM_PERIODS + - SNDRV_PCM_HW_PARAM_BUFFER_TIME + - SNDRV_PCM_HW_PARAM_BUFFER_SIZE + - SNDRV_PCM_HW_PARAM_BUFFER_BYTES + - SNDRV_PCM_HW_PARAM_TICK_TIME +``rmask`` + Configurable. This is evaluated at ioctl(2) with + SNDRV_PCM_IOCTL_HW_REFINE only. Applications can select which + mask/interval parameter can be changed by ALSA PCM core. For + SNDRV_PCM_IOCTL_HW_PARAMS, this mask is ignored and all of parameters + are going to be changed. +``cmask`` + Read-only. After returning from ioctl(2), buffer in user space for + struct snd_pcm_hw_params includes result of each operation. + This mask represents which mask/interval parameter is actually changed. +``info`` + Read-only. This represents hardware/driver capabilities as bit flags + with SNDRV_PCM_INFO_XXX. Typically, applications execute ioctl(2) with + SNDRV_PCM_IOCTL_HW_REFINE to retrieve this flag, then decide candidates + of parameters and execute ioctl(2) with SNDRV_PCM_IOCTL_HW_PARAMS to + configure PCM substream. +``msbits`` + Read-only. This value represents available bit width in MSB side of + a PCM sample. When a parameter of SNDRV_PCM_HW_PARAM_SAMPLE_BITS was + decided as a fixed number, this value is also calculated according to + it. Else, zero. But this behaviour depends on implementations in driver + side. +``rate_num`` + Read-only. This value represents numerator of sampling rate in fraction + notation. Basically, when a parameter of SNDRV_PCM_HW_PARAM_RATE was + decided as a single value, this value is also calculated according to + it. Else, zero. But this behaviour depends on implementations in driver + side. +``rate_den`` + Read-only. This value represents denominator of sampling rate in + fraction notation. Basically, when a parameter of + SNDRV_PCM_HW_PARAM_RATE was decided as a single value, this value is + also calculated according to it. Else, zero. But this behaviour depends + on implementations in driver side. +``fifo_size`` + Read-only. This value represents the size of FIFO in serial sound + interface of hardware. Basically, each driver can assigns a proper + value to this parameter but some drivers intentionally set zero with + a care of hardware design or data transmission protocol. + +ALSA PCM core handles buffer of struct snd_pcm_hw_params when +applications execute ioctl(2) with SNDRV_PCM_HW_REFINE or SNDRV_PCM_HW_PARAMS. +Parameters in the buffer are changed according to +struct snd_pcm_hardware and rules of constraints in the runtime. The +structure describes capabilities of handled hardware. The rules describes +dependencies on which a parameter is decided according to several parameters. 
+A rule has a callback function, and drivers can register arbitrary functions
+to compute the target parameter. ALSA PCM core registers some rules to the
+runtime by default.
+
+Each driver can join in the interaction as long as it prepares the following
+two things in the callback of struct snd_pcm_ops.open:
+
+1. In the callback, drivers are expected to change a member of
+   struct snd_pcm_hardware type in the runtime, according to the
+   capabilities of the corresponding hardware.
+2. In the same callback, drivers are also expected to register additional rules
+   of constraints into the runtime when several parameters have dependencies
+   due to hardware design.
+
+The driver can refer to the result of the interaction in the callback of
+struct snd_pcm_ops.hw_params, however it should not change the
+content.
+
+Tracepoints in this category are designed to trace changes of the
+mask/interval parameters. When ALSA PCM core changes them, the
+``hw_mask_param`` or ``hw_interval_param`` event is probed according to the
+type of the changed parameter.
+
+ALSA PCM core also has a pretty-print format for each of the tracepoints.
+Below is an example for ``hw_mask_param``.
+
+::
+
+  hw_mask_param: pcmC0D0p 001/023 FORMAT 00000000000000000000001000000044 00000000000000000000001000000044
+
+
+Below is an example for ``hw_interval_param``.
+
+::
+
+  hw_interval_param: pcmC0D0p 000/023 BUFFER_SIZE 0 0 [0 4294967295] 0 1 [0 4294967295]
+
+The first three fields are common. They represent the name of the ALSA PCM
+character device, the rule of constraint and the name of the changed
+parameter, in this order. The field for the rule of constraint consists of
+two sub-fields: the index of the applied rule and the total number of rules
+added to the runtime. As an exception, the index 000 means that the parameter
+is changed by ALSA PCM core, regardless of the rules.
+
+The remaining fields represent the state of the parameter before and after
+the change. These fields differ according to the type of the parameter. For
+parameters of mask type, the fields are a hexadecimal dump of the content of
+the parameter. For parameters of interval type, the fields are the values of
+the ``empty``, ``integer``, ``openmin``, ``min``, ``max`` and ``openmax``
+members of struct snd_interval, in this order.
+
+Tracepoints in drivers
+======================
+
+Some drivers have tracepoints for developers' convenience. For them, please
+refer to the documentation or implementation of each driver.
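+
+Which tracepoint groups are actually available on a running system can
+be checked directly in tracefs; a sketch (the group names in the
+pattern, such as ``hda`` or ``asoc``, are only examples and depend on
+the drivers built into the kernel):
+
+::
+
+  # ls /sys/kernel/tracing/events/ | grep -E 'snd|hda|asoc'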