<html devsite>
  <head>
    <title>Audio Terminology</title>
    <meta name="project_path" value="/_project.yaml" />
    <meta name="book_path" value="/_book.yaml" />
  </head>
  <body>
  <!--
      Copyright 2017 The Android Open Source Project

      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License.
  -->



<p>
This glossary of audio-related terminology includes widely used generic terms
and Android-specific terms.
</p>

<h2 id="genericTerm">Generic Terms</h2>

<p>
Generic audio-related terms have conventional meanings.
</p>

<h3 id="digitalAudioTerms">Digital Audio</h3>
<p>
Digital audio terms relate to handling sound using audio signals encoded
in digital form. For details, refer to
<a href="http://en.wikipedia.org/wiki/Digital_audio">Digital Audio</a>.
</p>

<dl>

<dt>acoustics</dt>
<dd>
Study of the mechanical properties of sound, such as how the physical
placement of transducers (speakers, microphones, etc.) on a device affects
perceived audio quality.
</dd>

<dt>attenuation</dt>
<dd>
Multiplicative factor less than or equal to 1.0, applied to an audio signal
to decrease the signal level. Compare to <em>gain</em>.
</dd>

<dt>audiophile</dt>
<dd>
Person concerned with a superior music reproduction experience, especially
willing to make substantial tradeoffs (expense, component size, room design,
etc.) for sound quality. For details, refer to
<a href="http://en.wikipedia.org/wiki/Audiophile">audiophile</a>.
</dd>

<dt>bits per sample or bit depth</dt>
<dd>
Number of bits of information per sample.
</dd>

<dt>channel</dt>
<dd>
Single stream of audio information, usually corresponding to one location of
recording or playback.
</dd>

<dt>downmixing</dt>
<dd>
Decrease the number of channels, such as from stereo to mono or from 5.1 to
stereo. Accomplished by dropping channels, mixing channels, or more advanced
signal processing. Simple mixing without attenuation or limiting has the
potential for overflow and clipping. Compare to <em>upmixing</em>.
</dd>
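<dd>
<p>
A minimal sketch of one downmixing approach (mixing with attenuation), not the
platform implementation: averaging the left and right channels of interleaved
16-bit PCM attenuates each channel by 0.5, so the sum cannot overflow.
</p>
<pre class="prettyprint">
// Illustrative stereo-to-mono downmix of interleaved 16-bit PCM.
// Averaging attenuates each channel by 0.5, keeping the result in range.
static short[] downmixToMono(short[] stereoInterleaved) {
    short[] mono = new short[stereoInterleaved.length / 2];
    for (int i = 0; i &lt; mono.length; i++) {
        int left = stereoInterleaved[2 * i];
        int right = stereoInterleaved[2 * i + 1];
        mono[i] = (short) ((left + right) / 2);
    }
    return mono;
}
</pre>
</dd>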

<dt>DSD</dt>
<dd>
Direct Stream Digital. Proprietary audio encoding based on
<a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">pulse-density
modulation</a>. While Pulse Code Modulation (PCM) encodes a waveform as a
sequence of individual audio samples of multiple bits, DSD encodes a waveform as
a sequence of bits at a very high sample rate (without the concept of samples).
Both PCM and DSD represent multiple channels by independent sequences. DSD is
better suited to content distribution than as an internal representation for
processing as it can be difficult to apply traditional digital signal processing
(DSP) algorithms to DSD. DSD is used in <a href="http://en.wikipedia.org/wiki/Super_Audio_CD">Super Audio CD (SACD)</a> and in DSD over PCM (DoP) for USB. For details, refer
to <a href="http://en.wikipedia.org/wiki/Direct_Stream_Digital">Direct Stream
Digital</a>.
</dd>

<dt>duck</dt>
<dd>
Temporarily reduce the volume of a stream when another stream becomes active.
For example, if music is playing when a notification arrives, the music ducks
while the notification plays. Compare to <em>mute</em>.
</dd>

<dt>FIFO</dt>
<dd>
First In, First Out. Hardware module or software data structure that implements
<a href="http://en.wikipedia.org/wiki/FIFO">First In, First Out</a>
queueing of data. In an audio context, the data stored in the queue are
typically audio frames. FIFO can be implemented by a
<a href="http://en.wikipedia.org/wiki/Circular_buffer">circular buffer</a>.
</dd>
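<dd>
<p>
A minimal sketch of a FIFO of audio frames built on a circular buffer, for
illustration only (single-threaded; the real audio_utils FIFO is non-blocking
and more elaborate):
</p>
<pre class="prettyprint">
// Illustrative circular-buffer FIFO of interleaved audio frames.
// Not thread-safe and not the audio_utils implementation.
static class FrameFifo {
    private final short[] buffer;      // interleaved samples
    private final int frameSize;       // samples per frame (= channel count)
    private final int capacityFrames;
    private int readFrame = 0, writeFrame = 0, count = 0;

    FrameFifo(int capacityFrames, int channelCount) {
        this.capacityFrames = capacityFrames;
        this.frameSize = channelCount;
        this.buffer = new short[capacityFrames * channelCount];
    }

    boolean write(short[] frame) {     // enqueue one frame; false if full
        if (count == capacityFrames) return false;
        System.arraycopy(frame, 0, buffer, writeFrame * frameSize, frameSize);
        writeFrame = (writeFrame + 1) % capacityFrames;
        count++;
        return true;
    }

    boolean read(short[] frame) {      // dequeue one frame; false if empty
        if (count == 0) return false;
        System.arraycopy(buffer, readFrame * frameSize, frame, 0, frameSize);
        readFrame = (readFrame + 1) % capacityFrames;
        count--;
        return true;
    }
}
</pre>
</dd>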

<dt>frame</dt>
<dd>
Set of samples, one per channel, at a point in time.
</dd>

<dt>frames per buffer</dt>
<dd>
Number of frames handed from one module to the next at one time. The audio HAL
interface uses the concept of frames per buffer.
</dd>
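<dd>
<p>
For example, assuming 16-bit stereo PCM at 48 kHz, the buffer size in bytes and
the buffer duration follow directly from the frame count:
</p>
<pre class="prettyprint">
// Illustrative buffer arithmetic for 16-bit stereo PCM at 48 kHz.
int framesPerBuffer = 240;                              // one 5 ms period
int channelCount = 2;                                   // stereo
int bytesPerSample = 2;                                 // 16-bit PCM
int bytesPerFrame = channelCount * bytesPerSample;      // 4 bytes
int bufferSizeBytes = framesPerBuffer * bytesPerFrame;  // 960 bytes
double bufferMillis = 1000.0 * framesPerBuffer / 48000; // 5.0 ms
</pre>
</dd>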

<dt>gain</dt>
<dd>
Multiplicative factor greater than or equal to 1.0, applied to an audio signal
to increase the signal level. Compare to <em>attenuation</em>.
</dd>
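<dd>
<p>
As an illustration only, applying a gain (or an attenuation, when the factor is
at most 1.0) to 16-bit PCM is a per-sample multiplication; results are clamped
so that overflow becomes clipping rather than wrap-around:
</p>
<pre class="prettyprint">
// Illustrative gain/attenuation applied in place to 16-bit PCM samples.
// A factor &gt; 1.0 is gain; a factor &lt;= 1.0 is attenuation.
static void applyGain(short[] samples, float factor) {
    for (int i = 0; i &lt; samples.length; i++) {
        int scaled = Math.round(samples[i] * factor);
        if (scaled &gt; 32767) scaled = 32767;    // clamp to 16-bit range
        if (scaled &lt; -32768) scaled = -32768;
        samples[i] = (short) scaled;
    }
}
</pre>
</dd>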

<dt>HD audio</dt>
<dd>
High-Definition audio. Synonym for high-resolution audio (not to be confused
with Intel High Definition Audio).
</dd>

<dt>Hz</dt>
<dd>
Units for sample rate or frame rate.
</dd>

<dt>high-resolution audio</dt>
<dd>
Representation with greater bit-depth and sample rate than CDs (stereo 16-bit
PCM at 44.1 kHz) and without lossy data compression. Equivalent to HD audio.
For details, refer to
<a href="http://en.wikipedia.org/wiki/High-resolution_audio">high-resolution
audio</a>.
</dd>

<dt>latency</dt>
<dd>
Time delay as a signal passes through a system.
</dd>

<dt>lossless</dt>
<dd>
A <a href="http://en.wikipedia.org/wiki/Lossless_compression">lossless data
compression algorithm</a> that preserves bit accuracy across encoding and
decoding, where the result of decoding previously encoded data is equivalent
to the original data. Examples of lossless audio content distribution formats
include <a href="http://en.wikipedia.org/wiki/Compact_disc">CDs</a>, PCM within
<a href="http://en.wikipedia.org/wiki/WAV">WAV</a>, and
<a href="http://en.wikipedia.org/wiki/FLAC">FLAC</a>.
The authoring process may reduce the bit depth or sample rate from that of the
<a href="http://en.wikipedia.org/wiki/Audio_mastering">masters</a>; distribution
formats that preserve the resolution and bit accuracy of masters are the subject
of high-resolution audio.
</dd>

<dt>lossy</dt>
<dd>
A <a href="http://en.wikipedia.org/wiki/Lossy_compression">lossy data
compression algorithm</a> that attempts to preserve the most important features
of media across encoding and decoding where the result of decoding previously
encoded data is perceptually similar to the original data but not identical.
Examples of lossy audio compression algorithms include MP3 and AAC. As analog
values are from a continuous domain and digital values are discrete, ADC and DAC
are lossy conversions with respect to amplitude. See also <em>transparency</em>.
</dd>

<dt>mono</dt>
<dd>
One channel.
</dd>

<dt>multichannel</dt>
<dd>
See <em>surround sound</em>. In strict terms, <em>stereo</em> is more than one
channel and could be considered multichannel; however, such usage is confusing
and thus avoided.
</dd>

<dt>mute</dt>
<dd>
Temporarily force volume to be zero, independent from the usual volume controls.
</dd>

<dt>overrun</dt>
<dd>
Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
failure to accept supplied data in sufficient time. For details, refer to
<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
Compare to <em>underrun</em>.
</dd>

<dt>panning</dt>
<dd>
Direct a signal to a desired position within a stereo or multichannel field.
</dd>

<dt>PCM</dt>
<dd>
Pulse Code Modulation. Most common low-level encoding of digital audio. The
audio signal is sampled at a regular interval, called the sample rate, then
quantized to discrete values within a particular range depending on the bit
depth. For example, for 16-bit PCM the sample values are integers between
-32768 and +32767.
</dd>
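<dd>
<p>
For example, quantizing a normalized floating-point signal (range -1.0 to 1.0)
to 16-bit PCM maps each value onto that integer range (illustrative sketch
only):
</p>
<pre class="prettyprint">
// Illustrative quantization of normalized float samples to 16-bit PCM.
static short[] floatToPcm16(float[] input) {
    short[] out = new short[input.length];
    for (int i = 0; i &lt; input.length; i++) {
        float clamped = Math.max(-1.0f, Math.min(1.0f, input[i]));
        out[i] = (short) Math.round(clamped * 32767);
    }
    return out;
}
</pre>
</dd>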

<dt>ramp</dt>
<dd>
Gradually increase or decrease the level of a particular audio parameter, such
as the volume or the strength of an effect. A volume ramp is commonly applied
when pausing and resuming music to avoid a hard audible transition.
</dd>
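<dd>
<p>
A minimal sketch of a linear volume ramp over one buffer of mono 16-bit PCM;
applying a ramp like this when pausing or resuming avoids an audible click:
</p>
<pre class="prettyprint">
// Illustrative linear ramp from startVol to endVol across one buffer
// (for example 1.0 to 0.0 when pausing). Assumes mono PCM.
static void ramp(short[] samples, float startVol, float endVol) {
    int n = samples.length;
    for (int i = 0; i &lt; n; i++) {
        float t = (n &gt; 1) ? (float) i / (n - 1) : 1.0f;
        float vol = startVol + (endVol - startVol) * t;
        samples[i] = (short) (samples[i] * vol);
    }
}
</pre>
</dd>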

<dt>sample</dt>
<dd>
Number representing the audio value for a single channel at a point in time.
</dd>

<dt>sample rate or frame rate</dt>
<dd>
Number of frames per second. While <em>frame rate</em> is more accurate,
<em>sample rate</em> is conventionally used to mean frame rate.
</dd>

<dt>sonification</dt>
<dd>
Use of sound to express feedback or information, such as touch sounds and
keyboard sounds.
</dd>

<dt>stereo</dt>
<dd>
Two channels.
</dd>

<dt>stereo widening</dt>
<dd>
Effect applied to a stereo signal to make another stereo signal that sounds
fuller and richer. The effect can also be applied to a mono signal, where it is
a type of upmixing.
</dd>

<dt>surround sound</dt>
<dd>
Techniques for increasing the ability of a listener to perceive sound position
beyond stereo left and right.
</dd>

<dt>transparency</dt>
<dd>
Ideal result of lossy data compression. Lossy data conversion is transparent if
it is perceptually indistinguishable from the original by a human subject. For
details, refer to
<a href="http://en.wikipedia.org/wiki/Transparency_%28data_compression%29">Transparency</a>.

</dd>

<dt>underrun</dt>
<dd>
Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
failure to supply needed data in sufficient time. For details, refer to
<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
Compare to <em>overrun</em>.
</dd>

<dt>upmixing</dt>
<dd>
Increase the number of channels, such as from mono to stereo or from stereo to
surround sound. Accomplished by duplication, panning, or more advanced signal
processing. Compare to <em>downmixing</em>.
</dd>

<dt>virtualizer</dt>
<dd>
Effect that attempts to spatialize audio channels, such as trying to simulate
more speakers or give the illusion that sound sources have position.
</dd>

<dt>volume</dt>
<dd>
Loudness, the subjective strength of an audio signal.
</dd>

</dl>

<h3 id="interDeviceTerms">Inter-device interconnect</h3>

<p>
Inter-device interconnection technologies connect audio and video components
between devices and are readily visible at the external connectors. The HAL
implementer and end user should be aware of these terms.
</p>

<dl>

<dt>Bluetooth</dt>
<dd>
Short range wireless technology. For details on the audio-related
<a href="http://en.wikipedia.org/wiki/Bluetooth_profile">Bluetooth profiles</a>
and
<a href="http://en.wikipedia.org/wiki/Bluetooth_protocols">Bluetooth protocols</a>,
refer to <a href="http://en.wikipedia.org/wiki/Bluetooth_profile#Advanced_Audio_Distribution_Profile_.28A2DP.29">A2DP</a> for
music, <a href="http://en.wikipedia.org/wiki/Bluetooth_protocols#Synchronous_connection-oriented_.28SCO.29_link">SCO</a> for telephony, and <a href="http://en.wikipedia.org/wiki/List_of_Bluetooth_profiles#Audio.2FVideo_Remote_Control_Profile_.28AVRCP.29">Audio/Video Remote Control Profile (AVRCP)</a>.
</dd>

<dt>DisplayPort</dt>
<dd>
Digital display interface by the Video Electronics Standards Association (VESA).
</dd>

<dt>dongle</dt>
<dd>
A <a href="https://en.wikipedia.org/wiki/Dongle">dongle</a>
is a small device that plugs into and hangs off another device.
</dd>

<dt>HDMI</dt>
<dd>
High-Definition Multimedia Interface. Interface for transferring audio and
video data. For mobile devices, a micro-HDMI (type D) or MHL connector is used.
</dd>

<dt>Intel HDA</dt>
<dd>
Intel High Definition Audio (do not confuse with generic <em>high-definition
audio</em> or <em>high-resolution audio</em>). Specification for a front-panel
connector. For details, refer to
<a href="http://en.wikipedia.org/wiki/Intel_High_Definition_Audio">Intel High
Definition Audio</a>.
</dd>

<dt>interface</dt>
<dd>
An <a href="https://en.wikipedia.org/wiki/Interface_(computing)">interface</a>
converts a signal from one representation to another.  Common interfaces
include a USB audio interface and MIDI interface.
</dd>

<dt>line level</dt>
<dd>
<a href="http://en.wikipedia.org/wiki/Line_level">Line level</a> is the strength
of an analog audio signal that passes between audio components, not transducers.
</dd>

<dt>MHL</dt>
<dd>
Mobile High-Definition Link. Mobile audio/video interface, often over micro-USB
connector.
</dd>

<dt>phone connector</dt>
<dd>
Mini or sub-mini component that connects a device to wired headphones, a
headset, or a line-level amplifier.
</dd>

<dt>SlimPort</dt>
<dd>
Adapter from micro-USB to HDMI.
</dd>

<dt>S/PDIF</dt>
<dd>
Sony/Philips Digital Interface Format. Interconnect for uncompressed PCM. For
details, refer to <a href="http://en.wikipedia.org/wiki/S/PDIF">S/PDIF</a>.
S/PDIF is the consumer grade variant of <a href="https://en.wikipedia.org/wiki/AES3">AES3</a>.
</dd>

<dt>Thunderbolt</dt>
<dd>
Multimedia interface that competes with USB and HDMI for connecting to high-end
peripherals. For details, refer to <a href="http://en.wikipedia.org/wiki/Thunderbolt_%28interface%29">Thunderbolt</a>.
</dd>

<dt>TOSLINK</dt>
<dd>
<a href="https://en.wikipedia.org/wiki/TOSLINK">TOSLINK</a> is an optical audio cable
used with <em>S/PDIF</em>.
</dd>

<dt>USB</dt>
<dd>
Universal Serial Bus. For details, refer to
<a href="http://en.wikipedia.org/wiki/USB">USB</a>.
</dd>

</dl>

<h3 id="intraDeviceTerms">Intra-device interconnect</h3>

<p>
Intra-device interconnection technologies connect internal audio components
within a given device and are not visible without disassembling the device. The
HAL implementer may need to be aware of these, but not the end user. For details
on intra-device interconnections, refer to the following articles:
</p>
<ul>
<li><a href="http://en.wikipedia.org/wiki/General-purpose_input/output">GPIO</a></li>
<li><a href="http://en.wikipedia.org/wiki/I%C2%B2C">I²C</a>, for control channel</li>
<li><a href="http://en.wikipedia.org/wiki/I%C2%B2S">I²S</a>, for audio data, simpler than SLIMbus</li>
<li><a href="http://en.wikipedia.org/wiki/McASP">McASP</a></li>
<li><a href="http://en.wikipedia.org/wiki/SLIMbus">SLIMbus</a></li>
<li><a href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus">SPI</a></li>
<li><a href="http://en.wikipedia.org/wiki/AC%2797">AC'97</a></li>
<li><a href="http://en.wikipedia.org/wiki/Intel_High_Definition_Audio">Intel HDA</a></li>
<li><a href="http://mipi.org/specifications/soundwire">SoundWire</a></li>
</ul>

<p>
In
<a href="http://www.alsa-project.org/main/index.php/ASoC">ALSA System on Chip (ASoC)</a>,
these are collectively called
<a href="https://www.kernel.org/doc/Documentation/sound/alsa/soc/DAI.txt">Digital Audio Interfaces</a>
(DAI).
</p>

<h3 id="signalTerms">Audio Signal Path</h3>

<p>
Audio signal path terms relate to the signal path that audio data follows from
an application to the transducer or vice-versa.
</p>

<dl>

<dt>ADC</dt>
<dd>
Analog-to-digital converter. Module that converts an analog signal (continuous
in time and amplitude) to a digital signal (discrete in time and amplitude).
Conceptually, an ADC consists of a periodic sample-and-hold followed by a
quantizer, although it does not have to be implemented that way. An ADC is
usually preceded by a low-pass filter to remove any high frequency components
that are not representable using the desired sample rate. For details, refer to
<a href="http://en.wikipedia.org/wiki/Analog-to-digital_converter">Analog-to-digital
converter</a>.
</dd>

<dt>AP</dt>
<dd>
Application processor. Main general-purpose computer on a mobile device.
</dd>

<dt>codec</dt>
<dd>
Coder-decoder. Module that encodes and/or decodes an audio signal from one
representation to another (typically analog to PCM or PCM to analog). In strict
terms, <em>codec</em> is reserved for modules that both encode and decode but
can be used loosely to refer to only one of these. For details, refer to
<a href="http://en.wikipedia.org/wiki/Audio_codec">Audio codec</a>.
</dd>

<dt>DAC</dt>
<dd>
Digital-to-analog converter. Module that converts a digital signal (discrete in
time and amplitude) to an analog signal (continuous in time and amplitude).
Often followed by a low-pass filter to remove high-frequency components
introduced by digital quantization. For details, refer to
<a href="http://en.wikipedia.org/wiki/Digital-to-analog_converter">Digital-to-analog
converter</a>.
</dd>

<dt>DSP</dt>
<dd>
Digital Signal Processor. Optional component typically located after the
application processor (for output) or before the application processor (for
input). Primary purpose is to off-load the application processor and provide
signal processing features at a lower power cost.
</dd>

<dt>PDM</dt>
<dd>
Pulse-density modulation. Form of modulation used to represent an analog signal
by a digital signal, where the relative density of 1s versus 0s indicates the
signal level. Commonly used by digital-to-analog converters. For details, refer
to <a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">Pulse-density
modulation</a>.
</dd>

<dt>PWM</dt>
<dd>
Pulse-width modulation. Form of modulation used to represent an analog signal by
a digital signal, where the relative width of a digital pulse indicates the
signal level. Commonly used by analog-to-digital converters. For details, refer
to <a href="http://en.wikipedia.org/wiki/Pulse-width_modulation">Pulse-width
modulation</a>.
</dd>

<dt>transducer</dt>
<dd>
Converts variations in physical real-world quantities to electrical signals. In
audio, the physical quantity is sound pressure, and the transducers are the
loudspeaker and microphone. For details, refer to
<a href="http://en.wikipedia.org/wiki/Transducer">Transducer</a>.
</dd>

</dl>

<h3 id="srcTerms">Sample Rate Conversion</h3>
<p>
Sample rate conversion terms relate to the process of converting from one
sampling rate to another.
</p>

<dl>

<dt>downsample</dt>
<dd>Resample, where sink sample rate &lt; source sample rate.</dd>

<dt>Nyquist frequency</dt>
<dd>
Maximum frequency component that can be represented by a discretized signal:
one-half of the sample rate. For example, the human hearing range extends to
approximately 20 kHz, so a digital audio signal must have a sample rate of at
least 40 kHz to represent that range. In practice, sample rates of 44.1 kHz and
48 kHz are commonly used, with Nyquist frequencies of 22.05 kHz and 24 kHz
respectively. For details, refer to
<a href="http://en.wikipedia.org/wiki/Nyquist_frequency">Nyquist frequency</a>
and
<a href="http://en.wikipedia.org/wiki/Hearing_range">Hearing range</a>.
</dd>

<dt>resampler</dt>
<dd>Synonym for sample rate converter.</dd>

<dt>resampling</dt>
<dd>Process of converting sample rate.</dd>

<dt>sample rate converter</dt>
<dd>Module that resamples.</dd>

<dt>sink</dt>
<dd>Output of a resampler.</dd>

<dt>source</dt>
<dd>Input to a resampler.</dd>

<dt>upsample</dt>
<dd>Resample, where sink sample rate &gt; source sample rate.</dd>

</dl>
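<p>
To illustrate these terms (not a production algorithm), a naive
linear-interpolation resampler reads a mono 16-bit PCM <em>source</em> at one
sample rate and writes a <em>sink</em> at another; real sample rate converters
use higher-quality polyphase or windowed-sinc filters:
</p>
<pre class="prettyprint">
// Illustrative linear-interpolation resampler for mono 16-bit PCM.
static short[] resample(short[] source, int sourceRate, int sinkRate) {
    int sinkLength = (int) ((long) source.length * sinkRate / sourceRate);
    short[] sink = new short[sinkLength];
    for (int i = 0; i &lt; sinkLength; i++) {
        double srcPos = (double) i * sourceRate / sinkRate;
        int index = (int) srcPos;
        double frac = srcPos - index;
        int next = Math.min(index + 1, source.length - 1);
        sink[i] = (short) Math.round(
                source[index] * (1.0 - frac) + source[next] * frac);
    }
    return sink;
}
</pre>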

<h2 id="androidSpecificTerms">Android-Specific Terms</h2>

<p>
Android-specific terms include terms used only in the Android audio framework
and generic terms that have special meaning within Android.
</p>

<dl>

<dt>ALSA</dt>
<dd>
Advanced Linux Sound Architecture. An audio framework for Linux that has also
influenced other systems. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Advanced_Linux_Sound_Architecture">ALSA</a>.
In Android, ALSA refers to the kernel audio framework and drivers and not to the
user-mode API. See also <em>tinyalsa</em>.
</dd>

<dt>audio device</dt>
<dd>
Audio I/O endpoint backed by a HAL implementation.
</dd>

<dt>AudioEffect</dt>
<dd>
API and implementation framework for output (post-processing) effects and input
(pre-processing) effects. The API is defined at
<a href="http://developer.android.com/reference/android/media/audiofx/AudioEffect.html">android.media.audiofx.AudioEffect</a>.
</dd>

<dt>AudioFlinger</dt>
<dd>
Android sound server implementation. AudioFlinger runs within the mediaserver
process. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Sound_server">Sound server</a>.
</dd>

<dt>audio focus</dt>
<dd>
Set of APIs for managing audio interactions across multiple independent apps.
For details, see <a href="http://developer.android.com/training/managing-audio/audio-focus.html">Managing Audio Focus</a> and the focus-related methods and constants of
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
</dd>
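<dd>
<p>
A minimal sketch of requesting focus before playback and ducking on transient
loss, using the stream-based request form; <code>context</code>,
<code>lowerVolume()</code>, <code>restoreVolume()</code>, and
<code>stopPlayback()</code> are assumed to be supplied by the app:
</p>
<pre class="prettyprint">
// Illustrative audio focus request with ducking.
AudioManager am =
        (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
AudioManager.OnAudioFocusChangeListener listener = focusChange -&gt; {
    switch (focusChange) {
        case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
            lowerVolume();      // duck while the other stream plays
            break;
        case AudioManager.AUDIOFOCUS_GAIN:
            restoreVolume();    // regained focus; restore volume
            break;
        case AudioManager.AUDIOFOCUS_LOSS:
            stopPlayback();     // focus lost; stop playback
            break;
    }
};
int result = am.requestAudioFocus(listener,
        AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    // Safe to start or resume playback here.
}
</pre>
</dd>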

<dt>AudioMixer</dt>
<dd>
Module in AudioFlinger responsible for combining multiple tracks and applying
attenuation (volume) and effects. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Audio_mixing_(recorded_music)">Audio mixing (recorded music)</a> (discusses a mixer as a hardware device or software application, rather
than a software module within a system).
</dd>

<dt>audio policy</dt>
<dd>
Service responsible for all actions that require a policy decision to be made
first, such as opening a new I/O stream, re-routing after a change, and stream
volume management.
</dd>

<dt>AudioRecord</dt>
<dd>
Primary low-level client API for receiving data from an audio input device such
as a microphone. The data is usually in PCM format. The API is defined at
<a href="http://developer.android.com/reference/android/media/AudioRecord.html">android.media.AudioRecord</a>.
</dd>
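<dd>
<p>
A minimal capture sketch using android.media.AudioRecord (the RECORD_AUDIO
permission, error checking, and a real processing loop are omitted):
</p>
<pre class="prettyprint">
// Illustrative 16-bit mono capture at 48 kHz.
int sampleRate = 48000;
int minBytes = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, minBytes * 2);

short[] buffer = new short[sampleRate / 10];   // about 100 ms of frames
record.startRecording();
int read = record.read(buffer, 0, buffer.length);
// ... process the captured PCM here ...
record.stop();
record.release();
</pre>
</dd>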

<dt>AudioResampler</dt>
<dd>
Module in AudioFlinger responsible for <a href="src.html">sample rate conversion</a>.
</dd>

<dt>audio source</dt>
<dd>
An enumeration of constants that indicates the desired use case for capturing
audio input. For details, see <a href="http://developer.android.com/reference/android/media/MediaRecorder.AudioSource.html">audio source</a>. In API level 21 and higher,
<a href="attributes.html">audio attributes</a> are preferred.
</dd>

<dt>AudioTrack</dt>
<dd>
Primary low-level client API for sending data to an audio output device such as
a speaker. The data is usually in PCM format. The API is defined at
<a href="http://developer.android.com/reference/android/media/AudioTrack.html">android.media.AudioTrack</a>.
</dd>
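<dd>
<p>
A minimal playback sketch using android.media.AudioTrack in streaming mode;
<code>pcmData</code> is assumed to be interleaved 16-bit PCM produced
elsewhere:
</p>
<pre class="prettyprint">
// Illustrative 16-bit stereo playback at 44.1 kHz.
int sampleRate = 44100;
int minBytes = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBytes * 2, AudioTrack.MODE_STREAM);

track.play();
track.write(pcmData, 0, pcmData.length);   // blocks until the data is queued
track.stop();
track.release();
</pre>
</dd>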

<dt>audio_utils</dt>
<dd>
Audio utility library, largely independent of the Android platform, that
provides features such as PCM format conversion, WAV file I/O, and a
<a href="avoiding_pi.html#nonBlockingAlgorithms">non-blocking FIFO</a>.
</dd>

<dt>client</dt>
<dd>
Usually an application or app client. However, an AudioFlinger client can be a
thread running within the mediaserver system process, such as when playing media
decoded by a MediaPlayer object.
</dd>

<dt>HAL</dt>
<dd>
Hardware Abstraction Layer. HAL is a generic term in Android; in audio, it is a
layer between AudioFlinger and the kernel device driver with a C API (which
replaces the C++ libaudio).
</dd>

<dt>FastCapture</dt>
<dd>
Thread within AudioFlinger that sends audio data to lower latency fast tracks
and drives the input device when configured for reduced latency.
</dd>

<dt>FastMixer</dt>
<dd>
Thread within AudioFlinger that receives and mixes audio data from lower latency
fast tracks and drives the primary output device when configured for reduced
latency.
</dd>

<dt>fast track</dt>
<dd>
AudioTrack or AudioRecord client with lower latency but fewer features on some
devices and routes.
</dd>

<dt>MediaPlayer</dt>
<dd>
Higher-level client API than AudioTrack. Plays encoded content or content that
includes multimedia audio and video tracks.
</dd>

<dt>media.log</dt>
<dd>
AudioFlinger debugging feature available in custom builds only. Used for logging
audio events to a circular buffer where they can then be retroactively dumped
when needed.
</dd>

<dt>mediaserver</dt>
<dd>
Android system process that contains media-related services, including
AudioFlinger.
</dd>

<dt>NBAIO</dt>
<dd>
Non-blocking audio input/output. Abstraction for AudioFlinger ports. The term
can be misleading as some implementations of the NBAIO API support blocking. The
key implementations of NBAIO are for different types of pipes.
</dd>

<dt>normal mixer</dt>
<dd>
Thread within AudioFlinger that services most full-featured AudioTrack clients.
Directly drives an output device or feeds its sub-mix into FastMixer via a pipe.
</dd>

<dt>OpenSL ES</dt>
<dd>
Audio API standard by
<a href="http://www.khronos.org/">The Khronos Group</a>. Android versions since
API level 9 support a native audio API that is based on a subset of
<a href="http://www.khronos.org/opensles/">OpenSL ES 1.0.1</a>.
</dd>

<dt>silent mode</dt>
<dd>
User-settable feature to mute the phone ringer and notifications without
affecting media playback (music, videos, games) or alarms.
</dd>

<dt>SoundPool</dt>
<dd>
Higher-level client API than AudioTrack. Plays sampled audio clips. Useful for
triggering UI feedback, game sounds, etc. The API is defined at
<a href="http://developer.android.com/reference/android/media/SoundPool.html">android.media.SoundPool</a>.
</dd>
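<dd>
<p>
A short sketch of playing a clip with SoundPool; <code>context</code> and the
<code>R.raw.click</code> resource are placeholders the app would supply:
</p>
<pre class="prettyprint">
// Illustrative SoundPool usage for a short UI sound.
SoundPool pool = new SoundPool.Builder().setMaxStreams(4).build();
pool.setOnLoadCompleteListener((soundPool, sampleId, status) -&gt; {
    if (status == 0) {
        // left/right volume 1.0, priority 0, no loop, normal rate
        soundPool.play(sampleId, 1.0f, 1.0f, 0, 0, 1.0f);
    }
});
pool.load(context, R.raw.click, 1 /* priority */);
</pre>
</dd>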

<dt>Stagefright</dt>
<dd>
See <a href="/devices/media.html">Media</a>.
</dd>

<dt>StateQueue</dt>
<dd>
Module within AudioFlinger responsible for synchronizing state among threads.
Whereas NBAIO is used to pass data, StateQueue is used to pass control
information.
</dd>

<dt>strategy</dt>
<dd>
Group of stream types with similar behavior. Used by the audio policy service.
</dd>

<dt>stream type</dt>
<dd>
Enumeration that expresses a use case for audio output. The audio policy
implementation uses the stream type, along with other parameters, to determine
volume and routing decisions. For a list of stream types, see
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
</dd>

<dt>tee sink</dt>
<dd>
See <a href="debugging.html#teeSink">Audio Debugging</a>.
</dd>

<dt>tinyalsa</dt>
<dd>
Small user-mode API above the ALSA kernel interface, with a BSD license.
Recommended for HAL implementations.
</dd>

<dt>ToneGenerator</dt>
<dd>
Higher-level client API than AudioTrack. Plays dual-tone multi-frequency (DTMF)
signals. For details, refer to
<a href="http://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling">Dual-tone
multi-frequency signaling</a> and the API definition at
<a href="http://developer.android.com/reference/android/media/ToneGenerator.html">android.media.ToneGenerator</a>.
</dd>
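<dd>
<p>
A short sketch of playing one DTMF digit:
</p>
<pre class="prettyprint">
// Illustrative DTMF playback: digit "5" for 200 ms at 80% of maximum volume.
ToneGenerator tones = new ToneGenerator(AudioManager.STREAM_DTMF, 80);
tones.startTone(ToneGenerator.TONE_DTMF_5, 200);
// ... later, when no more tones are needed:
tones.release();
</pre>
</dd>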

<dt>track</dt>
<dd>
Audio stream. Controlled by the AudioTrack or AudioRecord API.
</dd>

<dt>volume attenuation curve</dt>
<dd>
Device-specific mapping from a generic volume index to a specific attenuation
factor for a given output.
</dd>

<dt>volume index</dt>
<dd>
Unitless integer that expresses the desired relative volume of a stream. The
volume-related APIs of
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>
operate in volume indices rather than absolute attenuation factors.
</dd>
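<dd>
<p>
For example, a sketch of reading and setting the music stream's volume index
(<code>context</code> is assumed to be available):
</p>
<pre class="prettyprint">
// Illustrative use of volume indices for the music stream.
AudioManager am =
        (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
int max = am.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
int current = am.getStreamVolume(AudioManager.STREAM_MUSIC);
am.setStreamVolume(AudioManager.STREAM_MUSIC, max / 2, 0 /* flags */);
</pre>
</dd>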

</dl>

  </body>
</html>