author     Clay Murphy <claym@google.com>  2014-09-10 20:35:11 +0000
committer  Gerrit Code Review <noreply-gerritcodereview@google.com>  2014-09-10 20:35:11 +0000
commit     bc803ea37da136c64b9eea3657ded058f949e831 (patch)
tree       d434f9fba09e08c047b5f2b5a80dfe8fbc291978
parent     e2caa98f86abc325a11ecff6980d703ffd673ab1 (diff)
parent     3a7af3aaa9226c73214b1c8b566f8f753fb69aa7 (diff)
download   source.android.com-bc803ea37da136c64b9eea3657ded058f949e831.tar.gz
Merge "Docs: Fixing text in Devices section of site."
-rw-r--r--  src/devices/audio_avoiding_pi.jd   | 39
-rw-r--r--  src/devices/audio_debugging.jd     | 54
-rw-r--r--  src/devices/audio_implement.jd     |  2
-rw-r--r--  src/devices/audio_src.jd           | 45
-rw-r--r--  src/devices/audio_terminology.jd   | 88
-rw-r--r--  src/devices/audio_warmup.jd        |  2
-rw-r--r--  src/devices/devices_toc.cs         |  2
-rw-r--r--  src/devices/index.jd               |  2
-rw-r--r--  src/devices/latency_design.jd      | 12
9 files changed, 117 insertions, 129 deletions
diff --git a/src/devices/audio_avoiding_pi.jd b/src/devices/audio_avoiding_pi.jd
index a8cd208c..49b901e8 100644
--- a/src/devices/audio_avoiding_pi.jd
+++ b/src/devices/audio_avoiding_pi.jd
@@ -42,14 +42,14 @@ allows the audio buffer sizes and counts to be reduced while still
avoiding artifacts due to underruns.
</p>
-<h2 id="priorityInversion">Priority Inversion</h2>
+<h2 id="priorityInversion">Priority inversion</h2>
<p>
<a href="http://en.wikipedia.org/wiki/Priority_inversion">Priority inversion</a>
is a classic failure mode of real-time systems,
where a higher-priority task is blocked for an unbounded time waiting
-for a lower-priority task to release a resource such as [shared
-state protected by] a
+for a lower-priority task to release a resource such as (shared
+state protected by) a
<a href="http://en.wikipedia.org/wiki/Mutual_exclusion">mutex</a>.
</p>
@@ -64,7 +64,7 @@ are used, or delay in responding to a command.
<p>
In the Android audio implementation, priority inversion is most
-likely to occur in these places. And so we focus attention here:
+likely to occur in these places, so focus your attention here:
</p>
<ul>
@@ -99,10 +99,10 @@ not yet implemented. The likely priority inversion spots will be
similar to those for AudioTrack.
</p>
-<h2 id="commonSolutions">Common Solutions</h2>
+<h2 id="commonSolutions">Common solutions</h2>
<p>
-The typical solutions listed in the Wikipedia article include:
+The typical solutions include:
</p>
<ul>
@@ -130,18 +130,17 @@ Priority inheritance
in Linux kernel, but are not currently exposed by the Android C
runtime library
<a href="http://en.wikipedia.org/wiki/Bionic_(software)">Bionic</a>.
-We chose not to use them in the audio system
-because they are relatively heavyweight, and because they rely on
-a trusted client.
+They are not used in the audio system because they are relatively heavyweight,
+and because they rely on a trusted client.
</p>
<h2 id="androidTechniques">Techniques used by Android</h2>
<p>
-We started with "try lock" and lock with timeout. These are
+Experiments started with "try lock" and lock with timeout. These are
non-blocking and bounded blocking variants of the mutex lock
-operation. Try lock and lock with timeout worked fairly well for
-us, but were susceptible to a couple of obscure failure modes: the
+operation. Try lock and lock with timeout worked fairly well but were
+susceptible to a couple of obscure failure modes: the
server was not guaranteed to be able to access the shared state if
the client happened to be busy, and the cumulative timeout could
be too long if there was a long sequence of unrelated locks that
@@ -167,10 +166,9 @@ SMP barriers. The disadvantage is they can require unbounded retries.
In practice, we've found that the retries are not a problem.
</p>
-<p>
-<strong>Note</strong>: Atomic operations and their interactions with memory barriers
-are notoriously badly misunderstood and used incorrectly. We include
-these methods here for completeness but recommend you also read the article
+<p class="note"><strong>Note:</strong> Atomic operations and their interactions with memory barriers
+are notoriously badly misunderstood and used incorrectly. These methods are
+included here for completeness but recommend you also read the article
<a href="https://developer.android.com/training/articles/smp.html">
SMP Primer for Android</a>
for further information.
@@ -234,7 +232,7 @@ such as PCM audio where a corruption is inconsequential.
</ul>
-<h2 id="nonBlockingAlgorithms">Non-Blocking Algorithms</h2>
+<h2 id="nonBlockingAlgorithms">Non-blocking algorithms</h2>
<p>
<a href="http://en.wikipedia.org/wiki/Non-blocking_algorithm">Non-blocking algorithms</a>
@@ -273,9 +271,8 @@ suitable for other purposes.
</p>
<p>
-For developers, we may update some of the sample OpenSL ES application
-code to use non-blocking algorithms or reference a non-Android open source
-library.
+For developers, some of the sample OpenSL ES application code may be updated to
+use non-blocking algorithms or reference a non-Android open source library.
</p>
<h2 id="tools">Tools</h2>
@@ -297,7 +294,7 @@ are useful for seeing priority inversion after it occurs, but do
not tell you in advance.
</p>
-<h2 id="aFinalWord">A Final Word</h2>
+<h2 id="aFinalWord">A final word</h2>
<p>
After all of this discussion, don't be afraid of mutexes. Mutexes
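
As a concrete illustration of the "try lock" and lock-with-timeout techniques discussed in the file above, the following sketch shows the general pattern in C++ using standard pthread calls. It is illustrative only and is not the AudioFlinger code: the lock name and helper function are hypothetical, and as the article itself explains, the time-critical paths were later moved to atomic operations and non-blocking algorithms.

// Illustrative sketch of bounded-blocking lock acquisition ("try lock" then
// lock with timeout). gStateLock and readSharedStateBounded() are hypothetical
// names, not AudioFlinger APIs.
#include <pthread.h>
#include <time.h>

static pthread_mutex_t gStateLock = PTHREAD_MUTEX_INITIALIZER;

// Try to take the lock without blocking; if that fails, wait a bounded time.
// Returns true if the shared state was read, false if this cycle is skipped
// (a mixer could then, for example, reuse its previous parameters).
bool readSharedStateBounded(long maxWaitNs) {
    if (pthread_mutex_trylock(&gStateLock) != 0) {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_nsec += maxWaitNs;
        if (deadline.tv_nsec >= 1000000000L) {
            deadline.tv_sec += 1;
            deadline.tv_nsec -= 1000000000L;
        }
        if (pthread_mutex_timedlock(&gStateLock, &deadline) != 0) {
            return false;  // bounded blocking: give up rather than miss a deadline
        }
    }
    /* ... copy the shared state while holding the lock ... */
    pthread_mutex_unlock(&gStateLock);
    return true;
}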
diff --git a/src/devices/audio_debugging.jd b/src/devices/audio_debugging.jd
index 31d61d53..ebab35b6 100644
--- a/src/devices/audio_debugging.jd
+++ b/src/devices/audio_debugging.jd
@@ -39,12 +39,12 @@ and may require changes for other versions.
<ol>
<li><code>cd frameworks/av/services/audioflinger</code></li>
-<li>edit <code>Configuration.h</code></li>
-<li>uncomment <code>#define TEE_SINK</code></li>
-<li>re-build <code>libaudioflinger.so</code></li>
+<li>Edit <code>Configuration.h</code>.</li>
+<li>Uncomment <code>#define TEE_SINK</code>.</li>
+<li>Re-build <code>libaudioflinger.so</code>.</li>
<li><code>adb root</code></li>
<li><code>adb remount</code></li>
-<li>push or sync the new <code>libaudioflinger.so</code> to the device's <code>/system/lib</code></li>
+<li>Push or sync the new <code>libaudioflinger.so</code> to the device's <code>/system/lib</code>.</li>
</ol>
<h3>Run-time setup</h3>
@@ -72,7 +72,7 @@ chown media:media /data/misc/media
</code>
</li>
<li><code>echo af.tee=# &gt; /data/local.prop</code>
-<br />where the <code>af.tee</code> value is a number described below
+<br />Where the <code>af.tee</code> value is a number described below.
</li>
<li><code>chmod 644 /data/local.prop</code></li>
<li><code>reboot</code></li>
@@ -100,17 +100,17 @@ but you can get similar results using "4."
<h3>Test and acquire data</h3>
<ol>
-<li>Run your audio test</li>
+<li>Run your audio test.</li>
<li><code>adb shell dumpsys media.audio_flinger</code></li>
<li>Look for a line in dumpsys output such as this:<br />
<code>tee copied to /data/misc/media/20131010101147_2.wav</code>
-<br />This is a PCM .wav file</br>
+<br />This is a PCM .wav file.
</li>
<li><code>adb pull</code> any <code>/data/misc/media/*.wav</code> files of interest;
note that track-specific dump filenames do not appear in the dumpsys output,
-but are still saved to <code>/data/misc/media</code> upon track closure
+but are still saved to <code>/data/misc/media</code> upon track closure.
</li>
-<li>Review the dump files for privacy concerns before sharing with others</li>
+<li>Review the dump files for privacy concerns before sharing with others.</li>
</ol>
<h4>Suggestions</h4>
@@ -118,15 +118,15 @@ but are still saved to <code>/data/misc/media</code> upon track closure
<p>Try these ideas for more useful results:</p>
<ul>
-<li>Disable touch sounds and key clicks</li>
-<li>Maximize all volumes</li>
+<li>Disable touch sounds and key clicks.</li>
+<li>Maximize all volumes.</li>
<li>Disable apps that make sound or record from microphone,
-if they are not of interest to your test
+if they are not of interest to your test.
</li>
<li>Track-specific dumps are only saved when the track is closed;
you may need to force close an app in order to dump its track-specific data
<li>Do the <code>dumpsys</code> immediately after test;
-there is a limited amount of recording space available</li>
+there is a limited amount of recording space available.</li>
<li>To make sure you don't lose your dump files,
upload them to your host periodically.
Only a limited number of dump files are preserved;
@@ -140,10 +140,10 @@ As noted above, the tee sink feature should not be left enabled.
Restore your build and device as follows:
</p>
<ol>
-<li>Revert the source code changes to <code>Configuration.h</code></li>
-<li>Re-build <code>libaudioflinger.so</code></li>
+<li>Revert the source code changes to <code>Configuration.h</code>.</li>
+<li>Re-build <code>libaudioflinger.so</code>.</li>
<li>Push or sync the restored <code>libaudioflinger.so</code>
-to the device's <code>/system/lib</code>
+to the device's <code>/system/lib</code>.
</li>
<li><code>adb shell</code></li>
<li><code>rm /data/local.prop</code></li>
@@ -228,15 +228,14 @@ By convention, each thread should use it's own timeline.
<h3>Benefits</h3>
<p>
-The benefits of the <code>media.log</code> system include:
+The benefits of the <code>media.log</code> system are that it:
</p>
<ul>
-<li>doesn't spam the main log unless and until it is needed</li>
-<li>can be examined even when <code>mediaserver</code> crashes or hangs</li>
-<li>is non-blocking per timeline</li>
-<li>
-less disturbance to performance
-(of course no form of logging is completely non-intrusive)
+<li>Doesn't spam the main log unless and until it is needed.</li>
+<li>Can be examined even when <code>mediaserver</code> crashes or hangs.</li>
+<li>Is non-blocking per timeline.</li>
+<li>Offers less disturbance to performance.
+(Of course no form of logging is completely non-intrusive.)
</li>
</ul>
@@ -251,9 +250,9 @@ and the <code>init</code> process, before <code>media.log</code> is introduced:
Notable points:
</p>
<ul>
-<li><code>init</code> forks and execs <code>mediaserver</code></li>
-<li><code>init</code> detects the death of <code>mediaserver</code>, and re-forks as necessary</li>
-<li><code>ALOGx</code> logging is not shown
+<li><code>init</code> forks and execs <code>mediaserver</code>.</li>
+<li><code>init</code> detects the death of <code>mediaserver</code>, and re-forks as necessary.</li>
+<li><code>ALOGx</code> logging is not shown.
</ul>
<p>
@@ -348,8 +347,7 @@ within a context where the thread's mutex <code>mLock</code> is held.
After you have added the logs, re-build AudioFlinger.
</p>
-<b>Caution:</b>
-<p>
+<p class="caution"><strong>Caution:</strong>
A separate <code>NBLog::Writer</code> timeline is required per thread,
to ensure thread safety, since timelines omit mutexes by design. If you
want more than one thread to use the same timeline, you can protect with an
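
The caution above about using one NBLog::Writer timeline per thread follows from each timeline being a single-writer, non-blocking structure. The sketch below shows a generic single-writer/single-reader ring buffer of the kind such a timeline could be built on; it is an illustration of the pattern under that assumption, not the actual NBLog implementation, and the class and member names are invented.

// Illustrative only: minimal single-writer / single-reader ring buffer.
// TimelineSketch and its members are hypothetical, not the real NBLog code.
#include <atomic>
#include <cstddef>
#include <string>

class TimelineSketch {
  public:
    // Called only by the owning (writer) thread; never blocks.
    bool log(const std::string& entry) {
        size_t head = mHead.load(std::memory_order_relaxed);
        size_t next = (head + 1) % kCapacity;
        if (next == mTail.load(std::memory_order_acquire)) {
            return false;               // full: drop the entry rather than block
        }
        mEntries[head] = entry;
        mHead.store(next, std::memory_order_release);
        return true;
    }

    // Called only by the single reader (e.g. a dump or merge thread).
    bool read(std::string* out) {
        size_t tail = mTail.load(std::memory_order_relaxed);
        if (tail == mHead.load(std::memory_order_acquire)) {
            return false;               // empty
        }
        *out = mEntries[tail];
        mTail.store((tail + 1) % kCapacity, std::memory_order_release);
        return true;
    }

  private:
    static const size_t kCapacity = 256;
    std::string mEntries[kCapacity];
    std::atomic<size_t> mHead{0};       // next slot to write
    std::atomic<size_t> mTail{0};       // next slot to read
};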
diff --git a/src/devices/audio_implement.jd b/src/devices/audio_implement.jd
index 2016367f..32cd1376 100644
--- a/src/devices/audio_implement.jd
+++ b/src/devices/audio_implement.jd
@@ -244,7 +244,7 @@ pre_processing {
<p>For <code>AudioSource</code> tuning, there are no explicit requirements on audio gain or audio processing
with the exception of voice recognition (<code>VOICE_RECOGNITION</code>).</p>
-<p>The following are the requirements for voice recognition:</p>
+<p>The requirements for voice recognition are:</p>
<ul>
<li>"flat" frequency response (+/- 3dB) from 100Hz to 4kHz</li>
diff --git a/src/devices/audio_src.jd b/src/devices/audio_src.jd
index b57717c6..9454e54d 100644
--- a/src/devices/audio_src.jd
+++ b/src/devices/audio_src.jd
@@ -9,6 +9,8 @@ page.title=Sample Rate Conversion
</div>
</div>
+<h2 id="srcIntro">Introduction</h2>
+
<p>
See the Wikipedia article
<a class="external-link" href="http://en.wikipedia.org/wiki/Resampling_(audio)" target="_android">Resampling (audio)</a>
@@ -69,49 +71,6 @@ below lists the available resamplers, summarizes their characteristics,
and identifies where they should typically be used.
</p>
-<h2 id="srcTerms">Terminology</h2>
-
-<dl>
-
-<dt>downsample</dt>
-<dd>to resample, where sink sample rate &lt; source sample rate</dd>
-
-<dt>Nyquist frequency</dt>
-<dd>
-The Nyquist frequency, equal to 1/2 of a given sample rate, is the
-maximum frequency component that can be represented by a discretized
-signal at that sample rate. For example, the human hearing range is
-typically assumed to extend up to approximately 20 kHz, and so a digital
-audio signal must have a sample rate of at least 40 kHz to represent that
-range. In practice, sample rates of 44.1 kHz and 48 kHz are commonly
-used, with Nyquist frequencies of 22.05 kHz and 24 kHz respectively.
-See the Wikipedia articles
-<a class="external-link" href="http://en.wikipedia.org/wiki/Nyquist_frequency" target="_android">Nyquist frequency</a>
-and
-<a class="external-link" href="http://en.wikipedia.org/wiki/Hearing_range" target="_android">Hearing range</a>
-for more information.
-</dd>
-
-<dt>resampler</dt>
-<dd>synonym for sample rate converter</dd>
-
-<dt>resampling</dt>
-<dd>the process of converting sample rate</dd>
-
-<dt>sample rate converter</dt>
-<dd>a module that resamples</dd>
-
-<dt>sink</dt>
-<dd>the output of a resampler</dd>
-
-<dt>source</dt>
-<dd>the input to a resampler</dd>
-
-<dt>upsample</dt>
-<dd>to resample, where sink sample rate &gt; source sample rate</dd>
-
-</dl>
-
<h2 id="srcResamplers">Resampler implementations</h2>
<p>
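
To make the resampling terminology above concrete, here is a minimal C++ sketch of textbook linear-interpolation sample rate conversion. It is not one of the platform resamplers described in this file (those use higher-quality filtered designs), it omits the anti-aliasing low-pass filtering a real downsampler needs to respect the Nyquist limit, and the function name is hypothetical.

// Illustrative only: convert a mono source buffer at sourceRate to sinkRate
// by linear interpolation (upsample if sinkRate > sourceRate, downsample
// otherwise). No anti-aliasing filter is applied.
#include <cstddef>
#include <vector>

std::vector<float> resampleLinear(const std::vector<float>& source,
                                  double sourceRate, double sinkRate) {
    if (source.size() < 2 || sourceRate <= 0 || sinkRate <= 0) {
        return source;
    }
    const double step = sourceRate / sinkRate;   // source samples per sink sample
    const size_t sinkFrames =
            static_cast<size_t>((source.size() - 1) / step) + 1;
    std::vector<float> sink(sinkFrames);
    for (size_t i = 0; i < sinkFrames; ++i) {
        const double pos = i * step;
        const size_t idx = static_cast<size_t>(pos);
        const double frac = pos - idx;
        const float a = source[idx];
        const float b = (idx + 1 < source.size()) ? source[idx + 1] : a;
        sink[i] = static_cast<float>(a + (b - a) * frac);
    }
    return sink;
}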
diff --git a/src/devices/audio_terminology.jd b/src/devices/audio_terminology.jd
index a27703b7..bd59a84f 100644
--- a/src/devices/audio_terminology.jd
+++ b/src/devices/audio_terminology.jd
@@ -201,23 +201,21 @@ may need to be aware of these, as well as the end user.
<dd>
A short range wireless technology.
The major audio-related
-<a class="external-link" href="http://en.wikipedia.org/wiki/Bluetooth_profile"
+<a href="http://en.wikipedia.org/wiki/Bluetooth_profile"
target="_android">Bluetooth profiles</a>
and
-<a class="external-link" href="http://en.wikipedia.org/wiki/Bluetooth_protocols"
+<a href="http://en.wikipedia.org/wiki/Bluetooth_protocols"
target="_android">Bluetooth protocols</a>
are described at these Wikipedia articles:
<ul>
-<li><a class="external-link"
-href="http://en.wikipedia.org/wiki/Bluetooth_profile#Advanced_Audio_Distribution_Profile_.28A2DP.29"
+<li><a href="http://en.wikipedia.org/wiki/Bluetooth_profile#Advanced_Audio_Distribution_Profile_.28A2DP.29"
target="_android">A2DP</a>
for music
</li>
-<li><a class="external-link"
-href="http://en.wikipedia.org/wiki/Bluetooth_protocols#Synchronous_connection-oriented_.28SCO.29_link"
+<li><a href="http://en.wikipedia.org/wiki/Bluetooth_protocols#Synchronous_connection-oriented_.28SCO.29_link"
target="_android">SCO</a>
for telephony
</li>
@@ -257,14 +255,14 @@ An adapter from micro-USB to HDMI.
<dt>S/PDIF</dt>
<dd>
Sony/Philips Digital Interface Format is an interconnect for uncompressed PCM.
-See Wikipedia article <a class="external-link" href="http://en.wikipedia.org/wiki/S/PDIF"
+See Wikipedia article <a href="http://en.wikipedia.org/wiki/S/PDIF"
target="_android">S/PDIF</a>.
</dd>
<dt>USB</dt>
<dd>
Universal Serial Bus.
-See Wikipedia article <a class="external-link" href="http://en.wikipedia.org/wiki/USB" target="_android">USB</a>.
+See Wikipedia article <a href="http://en.wikipedia.org/wiki/USB" target="_android">USB</a>.
</dd>
</dl>
@@ -279,13 +277,13 @@ implementor may need to be aware of these, but not the end user.
See these Wikipedia articles:
<ul>
-<li><a class="external-link" href="http://en.wikipedia.org/wiki/General-purpose_input/output"
+<li><a href="http://en.wikipedia.org/wiki/General-purpose_input/output"
target="_android">GPIO</a></li>
-<li><a class="external-link" href="http://en.wikipedia.org/wiki/I%C2%B2C" target="_android">I²C</a></li>
-<li><a class="external-link" href="http://en.wikipedia.org/wiki/I%C2%B2S" target="_android">I²S</a></li>
-<li><a class="external-link" href="http://en.wikipedia.org/wiki/McASP" target="_android">McASP</a></li>
-<li><a class="external-link" href="http://en.wikipedia.org/wiki/SLIMbus" target="_android">SLIMbus</a></li>
-<li><a class="external-link" href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus"
+<li><a href="http://en.wikipedia.org/wiki/I%C2%B2C" target="_android">I²C</a></li>
+<li><a href="http://en.wikipedia.org/wiki/I%C2%B2S" target="_android">I²S</a></li>
+<li><a href="http://en.wikipedia.org/wiki/McASP" target="_android">McASP</a></li>
+<li><a href="http://en.wikipedia.org/wiki/SLIMbus" target="_android">SLIMbus</a></li>
+<li><a href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus"
target="_android">SPI</a></li>
</ul>
@@ -307,7 +305,7 @@ sample-and-hold followed by a quantizer, although it does not have to
be implemented that way. An ADC is usually preceded by a low-pass filter
to remove any high frequency components that are not representable using
the desired sample rate. See Wikipedia article
-<a class="external-link" href="http://en.wikipedia.org/wiki/Analog-to-digital_converter"
+<a href="http://en.wikipedia.org/wiki/Analog-to-digital_converter"
target="_android">Analog-to-digital_converter</a>.
</dd>
@@ -323,7 +321,7 @@ from one representation to another. Typically this is analog to PCM, or PCM to
Strictly, the term "codec" is reserved for modules that both encode and decode,
however it can also more loosely refer to only one of these.
See Wikipedia article
-<a class="external-link" href="http://en.wikipedia.org/wiki/Audio_codec" target="_android">Audio codec</a>.
+<a href="http://en.wikipedia.org/wiki/Audio_codec" target="_android">Audio codec</a>.
</dd>
<dt>DAC</dt>
@@ -334,7 +332,7 @@ Digital to analog converter, a module that converts a digital signal
a low-pass filter to remove any high frequency components introduced
by digital quantization.
See Wikipedia article
-<a class="external-link" href="http://en.wikipedia.org/wiki/Digital-to-analog_converter"
+<a href="http://en.wikipedia.org/wiki/Digital-to-analog_converter"
target="_android">Digital-to-analog converter</a>.
</dd>
@@ -353,7 +351,7 @@ is a form of modulation used to represent an analog signal by a digital signal,
where the relative density of 1s versus 0s indicates the signal level.
It is commonly used by digital to analog converters.
See Wikipedia article
-<a class="external-link" href="http://en.wikipedia.org/wiki/Pulse-density_modulation"
+<a href="http://en.wikipedia.org/wiki/Pulse-density_modulation"
target="_android">Pulse-density modulation</a>.
</dd>
@@ -364,7 +362,7 @@ is a form of modulation used to represent an analog signal by a digital signal,
where the relative width of a digital pulse indicates the signal level.
It is commonly used by analog to digital converters.
See Wikipedia article
-<a class="external-link" href="http://en.wikipedia.org/wiki/Pulse-width_modulation"
+<a href="http://en.wikipedia.org/wiki/Pulse-width_modulation"
target="_android">Pulse-width modulation</a>.
</dd>
@@ -384,7 +382,7 @@ may have a special meaning within Android beyond their general meaning.
Advanced Linux Sound Architecture. As the name suggests, it is an audio
framework primarily for Linux, but it has influenced other systems.
See Wikipedia article
-<a class="external-link" href="http://en.wikipedia.org/wiki/Advanced_Linux_Sound_Architecture" target="_android">ALSA</a>
+<a href="http://en.wikipedia.org/wiki/Advanced_Linux_Sound_Architecture" target="_android">ALSA</a>
for the general definition. As used within Android, it refers primarily
to the kernel audio framework and drivers, not to the user-mode API. See
tinyalsa.
@@ -401,7 +399,7 @@ and input (pre-processing) effects. The API is defined at
<dd>
The sound server implementation for Android. AudioFlinger
runs within the mediaserver process. See Wikipedia article
-<a class="external-link" href="http://en.wikipedia.org/wiki/Sound_server" target="_android">Sound server</a>
+<a href="http://en.wikipedia.org/wiki/Sound_server" target="_android">Sound server</a>
for the generic definition.
</dd>
@@ -418,7 +416,7 @@ Focus</a> and the focus-related methods and constants of
The module within AudioFlinger responsible for
combining multiple tracks and applying attenuation
(volume) and certain effects. The Wikipedia article
-<a class="external-link" href="http://en.wikipedia.org/wiki/Audio_mixing_(recorded_music)" target="_android">Audio mixing (recorded music)</a>
+<a href="http://en.wikipedia.org/wiki/Audio_mixing_(recorded_music)" target="_android">Audio mixing (recorded music)</a>
may be useful for understanding the generic
concept. But that article describes a mixer more as a hardware device
or a software application, rather than a software module within a system.
@@ -580,7 +578,7 @@ for use in HAL implementations.
<dd>
A higher-level client API than AudioTrack, used for playing DTMF signals.
See the Wikipedia article
-<a class="external-link" href="http://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling"
+<a href="http://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling"
target="_android">Dual-tone multi-frequency signaling</a>,
and the API definition at
<a href="http://developer.android.com/reference/android/media/ToneGenerator.html"
@@ -610,8 +608,44 @@ operate in volume indices rather than absolute attenuation factors.
<h2 id="srcTerms">Sample Rate Conversion</h2>
-<p>
-For terms related to sample rate conversion, see the separate article
-<a href="audio_src.html">Sample Rate Conversion</a>.
-</p>
+<dl>
+
+<dt>downsample</dt>
+<dd>To resample, where sink sample rate &lt; source sample rate.</dd>
+
+<dt>Nyquist frequency</dt>
+<dd>
+The Nyquist frequency, equal to 1/2 of a given sample rate, is the
+maximum frequency component that can be represented by a discretized
+signal at that sample rate. For example, the human hearing range is
+typically assumed to extend up to approximately 20 kHz, and so a digital
+audio signal must have a sample rate of at least 40 kHz to represent that
+range. In practice, sample rates of 44.1 kHz and 48 kHz are commonly
+used, with Nyquist frequencies of 22.05 kHz and 24 kHz respectively.
+See
+<a href="http://en.wikipedia.org/wiki/Nyquist_frequency" target="_android">Nyquist frequency</a>
+and
+<a href="http://en.wikipedia.org/wiki/Hearing_range" target="_android">Hearing range</a>
+for more information.
+</dd>
+
+<dt>resampler</dt>
+<dd>Synonym for sample rate converter.</dd>
+
+<dt>resampling</dt>
+<dd>The process of converting sample rate.</dd>
+
+<dt>sample rate converter</dt>
+<dd>A module that resamples.</dd>
+
+<dt>sink</dt>
+<dd>The output of a resampler.</dd>
+
+<dt>source</dt>
+<dd>The input to a resampler.</dd>
+
+<dt>upsample</dt>
+<dd>To resample, where sink sample rate &gt; source sample rate.</dd>
+
+</dl>
diff --git a/src/devices/audio_warmup.jd b/src/devices/audio_warmup.jd
index 0a0ec046..777650b8 100644
--- a/src/devices/audio_warmup.jd
+++ b/src/devices/audio_warmup.jd
@@ -24,7 +24,7 @@ page.title=Audio Warmup
</div>
</div>
-<p>Audio warmup is the time for the audio amplifier circuit in your device to
+<p>Audio warmup is the time it takes for the audio amplifier circuit in your device to
be fully powered and reach its normal operation state. The major contributors
to audio warmup time are power management and any "de-pop" logic to stabilize
the circuit.
diff --git a/src/devices/devices_toc.cs b/src/devices/devices_toc.cs
index b20ea6da..1eb43aad 100644
--- a/src/devices/devices_toc.cs
+++ b/src/devices/devices_toc.cs
@@ -31,6 +31,7 @@
</a>
</div>
<ul>
+ <li><a href="<?cs var:toroot ?>devices/audio_terminology.html">Terminology</a></li>
<li><a href="<?cs var:toroot ?>devices/audio_implement.html">Implementation</a></li>
<li><a href="<?cs var:toroot ?>devices/audio_warmup.html">Warmup</a></li>
<li class="nav-section">
@@ -47,7 +48,6 @@
</li>
<li><a href="<?cs var:toroot ?>devices/audio_avoiding_pi.html">Priority Inversion</a></li>
<li><a href="<?cs var:toroot ?>devices/audio_src.html">Sample Rate Conversion</a></li>
- <li><a href="<?cs var:toroot ?>devices/audio_terminology.html">Terminology</a></li>
<li><a href="<?cs var:toroot ?>devices/audio_debugging.html">Debugging</a></li>
</ul>
</li>
diff --git a/src/devices/index.jd b/src/devices/index.jd
index f0b4e42e..da9438dd 100644
--- a/src/devices/index.jd
+++ b/src/devices/index.jd
@@ -33,7 +33,7 @@ page.title=Porting Android to Devices
<p>To ensure that your devices maintain a high level of quality and offers a consistent
experience for your users, they must must also
pass the tests in the compatibility test suite (CTS). CTS ensures that anyone
- building a device meets a quality standard that ensures apps run reliabaly well
+ building a device meets a quality standard that ensures apps run reliably well
and gives users a good experience. For more information, see the
<a href="{@docRoot}compatibility/index.html">Compatibility</a> section.</p>
diff --git a/src/devices/latency_design.jd b/src/devices/latency_design.jd
index eb503f30..15485a52 100644
--- a/src/devices/latency_design.jd
+++ b/src/devices/latency_design.jd
@@ -53,11 +53,11 @@ The factors that impact the decision include:
</p>
<ul>
-<li>presence of a fast mixer thread for this output (see below)</li>
-<li>track sample rate</li>
-<li>presence of a client thread to execute callback handlers for this track</li>
-<li>track buffer size</li>
-<li>available fast track slots (see below)</li>
+<li>Presence of a fast mixer thread for this output (see below)</li>
+<li>Track sample rate</li>
+<li>Presence of a client thread to execute callback handlers for this track</li>
+<li>Track buffer size</li>
+<li>Available fast track slots (see below)</li>
</ul>
<p>
@@ -85,7 +85,7 @@ The fast mixer thread provides these features:
</p>
<ul>
-<li>mixing of the normal mixer's sub-mix and up to 7 client fast tracks</li>
+<li>Mixing of the normal mixer's sub-mix and up to 7 client fast tracks</li>
<li>Per track attenuation</li>
</ul>