Diffstat (limited to 'en/devices')
22 files changed, 4 insertions, 1130 deletions
diff --git a/en/devices/_toc-tech.yaml b/en/devices/_toc-tech.yaml index 77e2419f..c30d4570 100644 --- a/en/devices/_toc-tech.yaml +++ b/en/devices/_toc-tech.yaml @@ -260,18 +260,4 @@ toc: - title: An End-to-End Example path: /devices/tech/test_infra/tradefed/full_example - title: Package Index - path: /reference/tradefed/ -- title: Vendor Test Suite (VTS) - section: - - title: Overview - path: /devices/tech/vts/ - - title: Systems Testing with VTS - path: /devices/tech/test_infra/tradefed/fundamentals/vts - - title: VTS Dashboard Setup - path: /devices/tech/vts/setup - - title: VTS Dashboard Database - path: /devices/tech/vts/database - - title: VTS Dashboard UI - path: /devices/tech/vts/ui - - title: Performance Testing - path: /devices/tech/vts/performance + path: /reference/tradefed/
\ No newline at end of file diff --git a/en/devices/tech/connect/data-saver.html b/en/devices/tech/connect/data-saver.html index e9624295..32e3d6aa 100644 --- a/en/devices/tech/connect/data-saver.html +++ b/en/devices/tech/connect/data-saver.html @@ -48,7 +48,9 @@ ensures desired background data exchange when Data Saver is on per user control. <p> Since the Data Saver is a feature in the platform, device manufacturers gain its -functionality by default with the N release. +functionality by default with the N release. Find the source files in:<br> +<a class="external" + href="https://android.googlesource.com/platform/packages/apps/Settings/+/master/src/com/android/settings/datausage">packages/apps/Settings/src/com/android/settings/datausage</a> </p> <h3 id="settings-interface">Settings interface</h3> diff --git a/en/devices/tech/images/treble_latency_bubble.png b/en/devices/tech/images/treble_latency_bubble.png Binary files differ deleted file mode 100644 index d0337ddd..00000000 --- a/en/devices/tech/images/treble_latency_bubble.png +++ /dev/null diff --git a/en/devices/tech/images/treble_priority_inv_rta.png b/en/devices/tech/images/treble_priority_inv_rta.png Binary files differ deleted file mode 100644 index 57df8f5b..00000000 --- a/en/devices/tech/images/treble_priority_inv_rta.png +++ /dev/null diff --git a/en/devices/tech/images/treble_priority_inv_rta_blocked.png b/en/devices/tech/images/treble_priority_inv_rta_blocked.png Binary files differ deleted file mode 100644 index fa21c958..00000000 --- a/en/devices/tech/images/treble_priority_inv_rta_blocked.png +++ /dev/null diff --git a/en/devices/tech/images/treble_systrace_binder_processes.png b/en/devices/tech/images/treble_systrace_binder_processes.png Binary files differ deleted file mode 100644 index a2aefa59..00000000 --- a/en/devices/tech/images/treble_systrace_binder_processes.png +++ /dev/null diff --git a/en/devices/tech/images/treble_vts_dash_arch.png b/en/devices/tech/images/treble_vts_dash_arch.png
Binary files differ deleted file mode 100644 index 14d5f9bb..00000000 --- a/en/devices/tech/images/treble_vts_dash_arch.png +++ /dev/null diff --git a/en/devices/tech/images/treble_vts_dash_entity_ancestry.png b/en/devices/tech/images/treble_vts_dash_entity_ancestry.png Binary files differ deleted file mode 100644 index 5b639376..00000000 --- a/en/devices/tech/images/treble_vts_dash_entity_ancestry.png +++ /dev/null diff --git a/en/devices/tech/images/treble_vts_descend.png b/en/devices/tech/images/treble_vts_descend.png Binary files differ deleted file mode 100644 index 00b1b0ad..00000000 --- a/en/devices/tech/images/treble_vts_descend.png +++ /dev/null diff --git a/en/devices/tech/images/treble_vts_descend_not.png b/en/devices/tech/images/treble_vts_descend_not.png Binary files differ deleted file mode 100644 index c84a8da0..00000000 --- a/en/devices/tech/images/treble_vts_descend_not.png +++ /dev/null diff --git a/en/devices/tech/images/treble_vts_ui_coverage.png b/en/devices/tech/images/treble_vts_ui_coverage.png Binary files differ deleted file mode 100644 index d4363a25..00000000 --- a/en/devices/tech/images/treble_vts_ui_coverage.png +++ /dev/null diff --git a/en/devices/tech/images/treble_vts_ui_coverage_source.png b/en/devices/tech/images/treble_vts_ui_coverage_source.png Binary files differ deleted file mode 100644 index f3580a80..00000000 --- a/en/devices/tech/images/treble_vts_ui_coverage_source.png +++ /dev/null diff --git a/en/devices/tech/images/treble_vts_ui_favorites.png b/en/devices/tech/images/treble_vts_ui_favorites.png Binary files differ deleted file mode 100644 index bc875627..00000000 --- a/en/devices/tech/images/treble_vts_ui_favorites.png +++ /dev/null diff --git a/en/devices/tech/images/treble_vts_ui_histogram.png b/en/devices/tech/images/treble_vts_ui_histogram.png Binary files differ deleted file mode 100644 index b25f5a25..00000000 --- a/en/devices/tech/images/treble_vts_ui_histogram.png +++ /dev/null diff --git
a/en/devices/tech/images/treble_vts_ui_main.png b/en/devices/tech/images/treble_vts_ui_main.png Binary files differ deleted file mode 100644 index f68ab2b9..00000000 --- a/en/devices/tech/images/treble_vts_ui_main.png +++ /dev/null diff --git a/en/devices/tech/images/treble_vts_ui_performance.png b/en/devices/tech/images/treble_vts_ui_performance.png Binary files differ deleted file mode 100644 index 67f6da2d..00000000 --- a/en/devices/tech/images/treble_vts_ui_performance.png +++ /dev/null diff --git a/en/devices/tech/images/treble_vts_ui_results.png b/en/devices/tech/images/treble_vts_ui_results.png Binary files differ deleted file mode 100644 index 9996b469..00000000 --- a/en/devices/tech/images/treble_vts_ui_results.png +++ /dev/null diff --git a/en/devices/tech/vts/database.html b/en/devices/tech/vts/database.html deleted file mode 100644 index e0c285b1..00000000 --- a/en/devices/tech/vts/database.html +++ /dev/null @@ -1,224 +0,0 @@ -<html devsite> - <head> - <title>VTS Dashboard Database</title> - <meta name="project_path" value="/_project.yaml" /> - <meta name="book_path" value="/_book.yaml" /> - </head> - <body> - <!-- - Copyright 2017 The Android Open Source Project - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. - --> - - -<p> -To support a continuous integration dashboard that is scalable, performant, and -flexible, the VTS Dashboard backend must be carefully designed with a strong -understanding of database functionality.
-<a href="https://cloud.google.com/datastore/docs/" class="external">Google Cloud -Datastore</a> is a NoSQL database that offers transactional ACID guarantees and -eventual consistency as well as strong consistency within entity groups. -However, the structure is very different from SQL databases (and even Cloud -Bigtable); instead of tables, rows, and cells there are kinds, entities, and -properties. -</p> -<p> -The following sections outline the data structure and querying patterns for -creating an effective backend for the VTS Dashboard web service. -</p> - -<h2 id=entities>Entities</h2> -<p> -The following entities store summaries and resources from VTS test runs: -</p> -<ul> -<li><strong>Test Entity</strong>. The test entity stores metadata -about test runs of a particular test. Its key is the test name and its -properties include the failure count, passing count, and list of test case -breakages from when the alert jobs update it.</li> -<li><strong>Test Run Entity</strong>. The test run entity contains metadata -from runs of a particular test. It must store the test start and end timestamps, -the test build ID, the number of passing and failing test cases, the type of run -(e.g. pre-submit, post-submit, or local), a list of log links, the host machine -name, and coverage summary counts.</li> -<li><strong>Device Information Entity</strong>. Device information entities -contain details about the devices used during the test run. Each includes the -device build ID, product name, build target, branch, and ABI information. This -is stored separately from the test run entity to support multi-device test runs -in a one-to-many fashion.</li> -<li><strong>Profiling Point Run Entity</strong>. The profiling point run entity -summarizes the data gathered for a particular profiling point within a test -run. It describes the axis labels, profiling point name, values, type, and -regression mode of the profiling data.</li> -<li><strong>Coverage Entity</strong>.
Each coverage entity describes the -coverage data gathered for one file. It contains the Git project information, -file path, and the list of coverage counts per line in the source file.</li> -<li><strong>Test Case Run Entity</strong>. A test case run describes the outcome -of a particular test case from a test run, including the test case name and its -result.</li> -<li><strong>User Favorites Entity</strong>. Each user subscription can be -represented in an entity containing a reference to the test and the user ID -generated from the App Engine user service. This allows for efficient -bi-directional querying (i.e. for all users subscribed to a test and for all -tests favorited by a user).</li> -</ul> - -<h2 id=entity-grouping>Entity grouping</h2> -<p> -When designing ancestry relationships, you must balance the need to provide -effective and consistent querying mechanisms against the limitations enforced by -the database. -</p> - -<p> -Each test module will represent the root of an entity group, with test run -entities as children. Each test run entity is also a parent for device entities, -profiling point entities, and coverage entities relevant to the respective test -and test run ancestor. -</p> -<img src="../images/treble_vts_dash_entity_ancestry.png"> -<figcaption><strong>Figure 1</strong>. Test entity ancestry.</figcaption> - -<h3 id=benefits>Benefits</h3> -<p> -The consistency requirement ensures that future operations will not see the -effects of a transaction until it commits, and that transactions in the past are -visible to present operations. In Cloud Datastore, entity grouping creates -islands of strong read and write consistency within the group, which in this -case is all of the test runs and data related to a test module.
This offers the -following benefits: -</p> -<ul> -<li>Reads and updates to test module state by alert jobs can be treated as -atomic</li> -<li>Guaranteed consistent view of test case results within test modules</li> -<li>Faster querying within ancestry trees</li> -</ul> - -<h3 id=limitations>Limitations</h3> -<p> -Writing to an entity group at a rate faster than one entity per second is not -advised as some writes may be rejected. As long as the alert jobs and the -uploading do not happen at a rate faster than one write per second, the -structure is solid and guarantees strong consistency. -</p> -<p> -Ultimately, the cap of one write per test module per second is reasonable because -test runs usually take at least one minute including the overhead of the VTS -framework; unless a test is consistently being executed simultaneously on more -than 60 different hosts, there cannot be a write bottleneck. This becomes even -more unlikely given that each module is part of a test plan which often takes -longer than one hour. However, if hosts do run the tests at the same time and -cause short bursts of writes to the same entity group, such anomalies can easily -be handled (e.g. by catching write errors and retrying). -</p> - -<h3 id=scaling>Scaling considerations</h3> -<p> -A test run doesn't necessarily need to have the test as its parent (e.g. it -could take some other key and have test name, test start time as properties); -however, this will exchange strong consistency for eventual consistency. For -instance, the alert job may not see a mutually consistent snapshot of the most -recent test runs within a test module, which means that the global state may not -depict a fully accurate representation of the sequence of test runs. This may also -impact the display of test runs within a single test module, which may not -necessarily be a consistent snapshot of the run sequence. Eventually the -snapshot will be consistent, but there is no guarantee the freshest data -will be.
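The 60-host break-even point quoted in this section can be sanity-checked with a short Python sketch (the constants are the numbers stated in the text: at most one entity-group write per second, and at least 60 seconds per test run; the helper name is illustrative only):

```python
# Back-of-the-envelope check of the write-rate cap discussed above.
MAX_WRITES_PER_SEC = 1.0   # Datastore entity-group write guidance
MIN_RUN_SECONDS = 60.0     # a test run takes at least one minute

def peak_writes_per_sec(hosts):
    """Worst case: every host finishes a run of the same test module each minute."""
    return hosts / MIN_RUN_SECONDS

# 60 simultaneous hosts is exactly the break-even point; 61 exceeds the cap.
assert peak_writes_per_sec(60) <= MAX_WRITES_PER_SEC
assert peak_writes_per_sec(61) > MAX_WRITES_PER_SEC
```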
-</p> - -<h2 id=test-cases>Test cases</h2> -<p> -Another potential bottleneck is large tests with many test cases. The two -operative constraints are the maximum write throughput within an entity group -of one write per second, along with a maximum transaction size of 500 entities. -</p> -<p> -One natural approach would be to specify a test case that has a test run as an -ancestor (similar to how coverage data, profiling data, and device information -are stored): -</p> -<img src="../images/treble_vts_descend_not.png"> -<figcaption><strong>Figure 2</strong>. Test Cases descend from Test Runs (NOT -RECOMMENDED).</figcaption> - -<p>While this approach offers atomicity and consistency, it imposes strong -limitations on tests: If a transaction is limited to 500 entities, then a test -can have no more than 498 test cases (assuming no coverage or profiling data). -If a test were to exceed this, then a single transaction could not write all of -the test case results at once, and dividing the test cases into separate -transactions could exceed the maximum entity group write throughput of one -write per second. As this solution will not scale well without sacrificing -performance, it is not recommended. -</p> - -<p> -However, instead of storing the test case results as children of the test run, -the test cases can be stored independently and their keys provided to the test -run (a test run contains a list of identifiers to its test case entities): -</p> - -<img src="../images/treble_vts_descend.png"> -<figcaption><strong>Figure 3</strong>. Test Cases stored independently -(RECOMMENDED).</figcaption> - -<p> -At first glance, this may appear to break the strong consistency guarantee. -However, if the client has a test run entity and a list of test case -identifiers, it doesn't need to construct a query; it can instead directly get -the test cases by their identifiers, which is always guaranteed to be -consistent.
-</p> -<p> -This approach vastly alleviates the constraint on the number of test cases a -test run may have while gaining strong consistency without threatening -excessive writing within an entity group. -</p> - -<h2 id=patterns>Data access patterns</h2> -<p> -The VTS Dashboard uses the following data access patterns: -</p> -<ul> -<li><strong>User favorites</strong>. User favorites can be queried by using -an equality filter on user favorites entities having the particular App Engine -User object as a property.</li> -<li><strong>Test listing</strong>. Test listing is a simple query of test -entities; to reduce bandwidth to render the home page, a projection can be used -on passing and failing counts so as to omit the potentially long listing of -failed test case IDs and other metadata used by the alerting jobs.</li> -<li><strong>Test runs</strong>. Querying for test run entities requires a sort -on the key (timestamp) and possible filtering on the test run properties such as -build ID, passing count, etc. By performing an ancestor query with a test entity -key, the read is strongly consistent. At this point, all of the test case -results can be retrieved using the list of IDs stored in a test run property; -this also is guaranteed to be a strongly consistent outcome by the nature of -datastore get operations.</li> -<li><strong>Profiling and coverage data</strong>. Querying for profiling or -coverage data associated with a test can be done without also retrieving any -other test run data (such as other profiling/coverage data, test case data, -etc.). An ancestor query using the test and test run entity keys will -retrieve all profiling points recorded during the test run; by also filtering on -the profiling point name or filename, a single profiling or coverage entity can -be retrieved.
By the nature of ancestor queries, this operation is strongly -consistent.</li> -</ul> - -<p> -For details on the UI and screenshots of these data patterns in action, see -<a href="/devices/architecture/testing/ui.html">VTS Dashboard UI</a>. -</p> - - </body> -</html> diff --git a/en/devices/tech/vts/index.html b/en/devices/tech/vts/index.html deleted file mode 100644 index 510c1200..00000000 --- a/en/devices/tech/vts/index.html +++ /dev/null @@ -1,58 +0,0 @@ -<html devsite> - <head> - <title>Vendor Test Suite (VTS) & Infrastructure</title> - <meta name="project_path" value="/_project.yaml" /> - <meta name="book_path" value="/_book.yaml" /> - </head> - <body> - <!-- - Copyright 2017 The Android Open Source Project - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. - --> - - -<p> -The Android Vendor Test Suite (VTS) provides extensive new functionality for -Android testing and promotes a test-driven development process. To help the -Android development community interact with test data, Android O includes the -following testing resources:</p> -<ul> -<li><a href="/devices/tech/test_infra/tradefed/fundamentals/vts.html">Systems -Testing with VTS</a>. Describes how to use VTS to test an Android native system -implementation, set up a testing environment, then test a patch using a VTS -plan.</li> -<li><strong>VTS Dashboard</strong>. Web-based user interface for viewing VTS -results. Includes details on: - <ul> - <li><a href="/devices/tech/vts/database.html">Dashboard - database</a>.
A scalable back-end to support the continuous integration - dashboard.</li> - <li><a href="/devices/tech/vts/ui.html">Dashboard UI</a>. A - cohesive user interface that uses material design to effectively display - information about test results, profiling, and coverage.</li> - <li><a href="/devices/tech/vts/setup.html">Dashboard setup</a>. - Instructions for setting up and configuring the VTS Dashboard.</li> - </ul> -</li> -<li><a href="/devices/tech/vts/performance.html">binder and hwbinder -performance tests</a>. Tools for measuring throughput and latency.</li> -</ul> - -<p>For additional details, refer to the -<a href="https://codelabs.developers.google.com/codelabs/android-vts/#0" class="external">Android -VTS v8.0 Codelab</a> on developer.android.com and the -<a href="https://www.youtube.com/watch?v=7BX7oSHc7nk&list=PLWz5rJ2EKKc9JOMtoWWMJHFHgvXDoThva" class="external">Android VTS Products video</a> produced by Google Developers.</p> - - </body> -</html> diff --git a/en/devices/tech/vts/performance.html b/en/devices/tech/vts/performance.html deleted file mode 100644 index 0f05b234..00000000 --- a/en/devices/tech/vts/performance.html +++ /dev/null @@ -1,473 +0,0 @@ -<html devsite> - <head> - <title>Performance Testing</title> - <meta name="project_path" value="/_project.yaml" /> - <meta name="book_path" value="/_book.yaml" /> - </head> - <body> - <!-- - Copyright 2017 The Android Open Source Project - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
- --> - - -<p>Android O includes binder and hwbinder performance tests for throughput and -latency. While many scenarios exist for detecting perceptible performance -problems, running such scenarios can be time-consuming and results are often -unavailable until after a system is integrated. Using the provided performance -tests makes it easier to test during development, detect serious problems -earlier, and improve user experience.</p> - -<p>Performance tests include the following four categories:</p> -<ul> -<li>binder throughput (available in -<code>system/libhwbinder/vts/performance/Benchmark_binder.cpp</code>)</li> -<li>binder latency (available in -<code>frameworks/native/libs/binder/tests/schd-dbg.cpp</code>)</li> -<li>hwbinder throughput (available in -<code>system/libhwbinder/vts/performance/Benchmark.cpp</code>)</li> -<li>hwbinder latency (available in -<code>system/libhwbinder/vts/performance/Latency.cpp</code>)</li> -</ul> - -<h2 id=about>About binder and hwbinder</h2> -<p>Binder and hwbinder are Android inter-process communication (IPC) -infrastructures that share the same Linux driver but have the following -qualitative differences:</p> - -<table> -<tr> -<th>Aspect</th> -<th>binder</th> -<th>hwbinder</th> -</tr> - -<tr> -<td>Purpose</td> -<td>Provide a general-purpose IPC scheme for the framework</td> -<td>Communicate with hardware</td> -</tr> - -<tr> -<td>Property</td> -<td>Optimized for Android framework usage</td> -<td>Minimum overhead, low latency</td> -</tr> - -<tr> -<td>Change scheduling policy for foreground/background</td> -<td>Yes</td> -<td>No</td> -</tr> - -<tr> -<td>Argument passing</td> -<td>Uses serialization supported by Parcel object</td> -<td>Uses scatter buffers and avoids the overhead of copying data required for -Parcel serialization</td> -</tr> - -<tr> -<td>Priority inheritance</td> -<td>No</td> -<td>Yes</td> -</tr> - -</table> - -<h3 id=transactions>Binder and hwbinder processes</h3> -<p>A systrace visualizer displays transactions as
follows:</p> -<img src="../images/treble_systrace_binder_processes.png"> -<figcaption><strong>Figure 1.</strong> Systrace visualization of binder -processes.</figcaption> - -<p>In the above example:</p> -<ul> -<li>The four (4) schd-dbg processes are client processes.</li> -<li>The four (4) binder processes are server processes (names start with -<strong>Binder</strong> and end with a sequence number).</li> -<li>A client process is always paired with a server process, which is dedicated -to its client.</li> -<li>All client-server process pairs are scheduled independently and -concurrently by the kernel.</li> -</ul> - -<p>In CPU 1, the OS kernel executes the client to issue the request. It then -uses the same CPU whenever possible to wake up a server process, handle the -request, and context switch back after the request is complete.</p> - -<h3 id=throughput-diffs>Throughput vs. latency</h3> -<p>In a perfect transaction, where the client and server processes switch -seamlessly, throughput and latency tests do not produce substantially different -results. However, when the OS kernel is handling an interrupt request (IRQ) -from hardware, waiting for locks, or simply choosing not to handle a message -immediately, a latency bubble can form.</p> - -<img src="../images/treble_latency_bubble.png"> -<figcaption><strong>Figure 2.</strong> Latency bubble due to differences in -throughput and latency.</figcaption> - -<p>The throughput test generates a large number of transactions with different -payload sizes, providing a good estimation for the regular transaction time (in -best case scenarios) and the maximum throughput the binder can achieve.</p> - -<p>In contrast, the latency test performs no actions on the payload to minimize -the regular transaction time.
We can use transaction time to estimate the binder -overhead, gather statistics for the worst case, and calculate the ratio of -transactions whose latency meets a specified deadline.</p> - -<h3 id=priority-inversions>Handling priority inversions</h3> -<p>A priority inversion occurs when a thread with higher priority is logically -waiting for a thread with lower priority. Real-time (RT) applications have a -priority inversion problem:</p> - -<img src="../images/treble_priority_inv_rta.png"> -<figcaption><strong>Figure 3.</strong> Priority inversion in real-time -applications.</figcaption> - -<p>When using Linux Completely Fair Scheduler (CFS) scheduling, a thread always -has a chance to run even when other threads have a higher priority. As a result, -applications with CFS scheduling handle priority inversion as expected behavior -and not as a problem. However, in cases where the Android framework needs RT -scheduling to guarantee the privilege of high-priority threads, priority -inversion must be resolved.</p> - -<p>Example priority inversion during a binder transaction (the RT thread is -logically blocked by other CFS threads while waiting for a binder thread to -service the request):</p> -<img src="../images/treble_priority_inv_rta_blocked.png"> -<figcaption><strong>Figure 4.</strong> Priority inversion, blocked real-time -threads.</figcaption> - -<p>To avoid blockages, you can use priority inheritance to temporarily escalate -the binder thread to an RT thread when it services a request from an RT client. -Keep in mind that RT scheduling has limited resources and should be used -carefully. In a system with <em>n</em> CPUs, the maximum number of concurrent RT -threads is also <em>n</em>; additional RT threads might need to wait (and thus -miss their deadlines) if all CPUs are taken by other RT threads.</p> - -<p>To resolve all possible priority inversions, you could use priority -inheritance for both binder and hwbinder.
However, as binder is widely used -across the system, enabling priority inheritance for binder transactions might -spam the system with more RT threads than it can service.</p> - -<h2 id=throughput>Running throughput tests</h2> -<p>The throughput test measures binder/hwbinder transaction throughput. In -a system that is not overloaded, latency bubbles are rare and their impact -can be eliminated as long as the number of iterations is high enough.</p> - -<ul> -<li>The <strong>binder</strong> throughput test is in -<code>system/libhwbinder/vts/performance/Benchmark_binder.cpp</code>.</li> -<li>The <strong>hwbinder</strong> throughput test is in -<code>system/libhwbinder/vts/performance/Benchmark.cpp</code>.</li> -</ul> - -<h3 id=throughput-results>Test results</h3> -<p>Example throughput test results for transactions using different payload -sizes:</p> - -<pre class="prettyprint"> -Benchmark Time CPU Iterations ---------------------------------------------------------------------- -BM_sendVec_binderize/4 70302 ns 32820 ns 21054 -BM_sendVec_binderize/8 69974 ns 32700 ns 21296 -BM_sendVec_binderize/16 70079 ns 32750 ns 21365 -BM_sendVec_binderize/32 69907 ns 32686 ns 21310 -BM_sendVec_binderize/64 70338 ns 32810 ns 21398 -BM_sendVec_binderize/128 70012 ns 32768 ns 21377 -BM_sendVec_binderize/256 69836 ns 32740 ns 21329 -BM_sendVec_binderize/512 69986 ns 32830 ns 21296 -BM_sendVec_binderize/1024 69714 ns 32757 ns 21319 -BM_sendVec_binderize/2k 75002 ns 34520 ns 20305 -BM_sendVec_binderize/4k 81955 ns 39116 ns 17895 -BM_sendVec_binderize/8k 95316 ns 45710 ns 15350 -BM_sendVec_binderize/16k 112751 ns 54417 ns 12679 -BM_sendVec_binderize/32k 146642 ns 71339 ns 9901 -BM_sendVec_binderize/64k 214796 ns 104665 ns 6495 -</pre> - -<ul> -<li><strong>Time</strong> indicates the round-trip delay measured in real time.
-</li> -<li><strong>CPU</strong> indicates the accumulated time when CPUs are scheduled -for the test.</li> -<li><strong>Iterations</strong> indicates the number of times the test function -executed.</li> -</ul> - -<p>For example, for an 8-byte payload:</p> - -<pre class="prettyprint"> -BM_sendVec_binderize/8 69974 ns 32700 ns 21296 -</pre> -<p>… the maximum throughput the binder can achieve is calculated as:</p> -<p><em>MAX throughput with 8-byte payload = (8 * 21296)/69974 ~= 2.435 B/ns ~= -2.268 GiB/s</em></p> - -<h3 id=throughput-options>Test options</h3> -<p>To get results in .json, run the test with the -<code>--benchmark_format=json</code> argument:</p> - -<pre class="prettyprint"> -<code class="devsite-terminal">libhwbinder_benchmark --benchmark_format=json</code> -{ - "context": { - "date": "2017-05-17 08:32:47", - "num_cpus": 4, - "mhz_per_cpu": 19, - "cpu_scaling_enabled": true, - "library_build_type": "release" - }, - "benchmarks": [ - { - "name": "BM_sendVec_binderize/4", - "iterations": 32342, - "real_time": 47809, - "cpu_time": 21906, - "time_unit": "ns" - }, - …. -} -</pre> - -<h2 id=latency>Running latency tests</h2> -<p>The latency test measures the time it takes for the client to begin -initializing the transaction, switch to the server process for handling, and -receive the result.
The test also looks for known bad scheduler behaviors that -can negatively impact transaction latency, such as a scheduler that does not -support priority inheritance or honor the sync flag.</p> - -<ul> -<li>The binder latency test is in -<code>frameworks/native/libs/binder/tests/schd-dbg.cpp</code>.</li> -<li>The hwbinder latency test is in -<code>system/libhwbinder/vts/performance/Latency.cpp</code>.</li> -</ul> - -<h3 id=latency-results>Test results</h3> -<p>Results (in .json) show statistics for average/best/worst latency and the -number of deadlines missed.</p> - -<h3 id=latency-options>Test options</h3> -<p>Latency tests take the following options:</p> - -<table> -<tr> -<th>Command</th> -<th>Description</th> -</tr> - -<tr> -<td><code>-i <em>value</em></code></td> -<td>Specify number of iterations.</td> -</tr> - -<tr> -<td><code>-pair <em>value</em></code></td> -<td>Specify the number of process pairs.</td> -</tr> - -<tr> -<td><code>-deadline_us 2500</code></td> -<td>Specify the deadline in us.</td> -</tr> - -<tr> -<td><code>-v</code></td> -<td>Get verbose (debugging) output.</td> -</tr> - -<tr> -<td><code>-trace</code></td> -<td>Halt the trace on a deadline hit.</td> -</tr> - -</table> - -<p>The following sections detail each option, describe usage, and provide -example results.</p> - -<h4 id=iterations>Specifying iterations</h4> -<p>Example with a large number of iterations and verbose output disabled:</p> - -<pre class="prettyprint"> -<code class="devsite-terminal">libhwbinder_latency -i 5000 -pair 3</code> -{ -"cfg":{"pair":3,"iterations":5000,"deadline_us":2500}, -"P0":{"SYNC":"GOOD","S":9352,"I":10000,"R":0.9352, - "other_ms":{ "avg":0.2 , "wst":2.8 , "bst":0.053, "miss":2, "meetR":0.9996}, - "fifo_ms": { "avg":0.16, "wst":1.5 , "bst":0.067, "miss":0, "meetR":1} -}, -"P1":{"SYNC":"GOOD","S":9334,"I":10000,"R":0.9334, - "other_ms":{ "avg":0.19, "wst":2.9 , "bst":0.055, "miss":2, "meetR":0.9996}, - "fifo_ms": { "avg":0.16, "wst":3.1 , "bst":0.066, 
"miss":1, "meetR":0.9998} -}, -"P2":{"SYNC":"GOOD","S":9369,"I":10000,"R":0.9369, - "other_ms":{ "avg":0.19, "wst":4.8 , "bst":0.055, "miss":6, "meetR":0.9988}, - "fifo_ms": { "avg":0.15, "wst":1.8 , "bst":0.067, "miss":0, "meetR":1} -}, -"inheritance": "PASS" -} -</pre> -<p>These test results show the following:</p> - -<dl> -<dt><strong><code>"pair":3</code></strong></dt> -<dd>Creates one client and server pair.</dd> - -<dt><strong><code>"iterations": 5000</code></strong></dt> -<dd>Includes 5000 iterations.</dd> - -<dt><strong><code>"deadline_us":2500</code></strong></dt> -<dd>Deadline is 2500us (2.5ms); most transactions are expected to meet this -value.</dd> - -<dt><strong><code>"I": 10000</code></strong></dt> -<dd>A single test iteration includes two (2) transactions: -<ul> - <li>One transaction by normal priority (<code>CFS other</code>)</li> - <li>One transaction by real time priority (<code>RT-fifo</code>)</li> -</ul> -5000 iterations equals a total of 10000 transactions.</dd> - -<dt><strong><code>"S": 9352</code></strong></dt> -<dd>9352 of the transactions are synced in the same CPU.</dd> - -<dt><strong><code>"R": 0.9352</code></strong></dt> -<dd>Indicates the ratio at which the client and server are synced together in -the same CPU.</dd> - -<dt><strong><code>"other_ms":{ "avg":0.2 , "wst":2.8 , "bst":0.053, "miss":2, -"meetR":0.9996}</code></strong></dt> -<dd>The average (<code>avg</code>), worst (<code>wst</code>), and the best -(<code>bst</code>) case for all transactions issued by a normal priority caller. -Two transactions <code>miss</code> the deadline, making the meet ratio -(<code>meetR</code>) 0.9996.</dd> - -<dt><strong><code>"fifo_ms": { "avg":0.16, "wst":1.5 , "bst":0.067, "miss":0, -"meetR":1}</code></strong></dt> -<dd>Similar to <code>other_ms</code>, but for transactions issued by client with -<code>rt_fifo</code> priority. 
It's likely (but not required) that the
-<code>fifo_ms</code> results are better than the <code>other_ms</code> results,
-with lower <code>avg</code> and <code>wst</code> values and a higher
-<code>meetR</code> (the difference can be even more significant with load in the
-background).</dd>
-
-</dl>
-
-<p class="note"><strong>Note:</strong> Background load may impact the throughput
-result and the <code>other_ms</code> tuple in the latency test. Only the
-<code>fifo_ms</code> results may remain consistent, as long as the background
-load has a lower priority than <code>RT-fifo</code>.</p>
-
-<h4 id=pair-values>Specifying pair values</h4>
-<p>Each client process is paired with a server process dedicated to that client,
-and each pair may be scheduled independently onto any CPU. However, CPU
-migration should not happen during a transaction as long as the sync flag is
-honored.</p>
-
-<p>Ensure the system is not overloaded! While high latency in an overloaded
-system is expected, test results for an overloaded system do not provide useful
-information. To test a system with higher pressure, use <code>-pair
-#cpu-1</code> (or <code>-pair #cpu</code> with caution). Testing using
-<code>-pair <em>n</em></code> with <code><em>n</em> > #cpu</code> overloads the
-system and generates useless information.</p>
-
-<h4 id=deadline-values>Specifying deadline values</h4>
-<p>After extensive user scenario testing (running the latency test on a
-qualified product), we determined that 2.5ms is the deadline to meet. For new
-applications with higher requirements (such as 1000 photos/second), this
-deadline value will change.</p>
-
-<h4 id=verbose>Specifying verbose output</h4>
-<p>Using the <code>-v</code> option displays verbose output. 
Example:</p>
-
-<pre class="devsite-click-to-copy">
-<code class="devsite-terminal">libhwbinder_latency -i 1 -v</code>
-
-<div style="color: orange">--------------------------------------------------
-service pid: 8674 tid: 8674 cpu: 1
-SCHED_OTHER 0</div>
---------------------------------------------------
-main pid: 8673 tid: 8673 cpu: 1
-
---------------------------------------------------
-client pid: 8677 tid: 8677 cpu: 0
-SCHED_OTHER 0
-
-<div style="color: blue">--------------------------------------------------
-fifo-caller pid: 8677 tid: 8678 cpu: 0
-SCHED_FIFO 99
-
---------------------------------------------------
-hwbinder pid: 8674 tid: 8676 cpu: 0
-??? 99</div>
-<div style="color: green">--------------------------------------------------
-other-caller pid: 8677 tid: 8677 cpu: 0
-SCHED_OTHER 0
-
---------------------------------------------------
-hwbinder pid: 8674 tid: 8676 cpu: 0
-SCHED_OTHER 0</div>
-</pre>
-
-<ul>
-<li>The <font style="color:orange">service thread</font> is created with
-<code>SCHED_OTHER</code> priority and runs on <code>CPU:1</code> with <code>pid
-8674</code>.</li>
-<li>The <font style="color:blue">first transaction</font> is then started by a
-<code>fifo-caller</code>. To service this transaction, the hwbinder upgrades the
-priority of the server (<code>pid: 8674 tid: 8676</code>) to 99 and marks it
-with a transient scheduling class (printed as <code>???</code>). The scheduler
-then runs the server process on <code>CPU:0</code>, synchronized on the same
-CPU as its client.</li>
-<li>The <font style="color:green">second transaction</font> caller has
-<code>SCHED_OTHER</code> priority. The server downgrades itself and services the
-caller with <code>SCHED_OTHER</code> priority.</li>
-</ul>
-
-<h4 id=trace>Using trace for debugging</h4>
-<p>You can specify the <code>-trace</code> option to debug latency issues. 
When
-used, the latency test stops the tracelog recording at the moment when bad
-latency is detected. Example:</p>
-
-<pre class="prettyprint">
-<code class="devsite-terminal">atrace --async_start -b 8000 -c sched idle workq binder_driver sync freq</code>
-<code class="devsite-terminal">libhwbinder_latency -deadline_us 50000 -trace -i 50000 -pair 3</code>
-deadline triggered: halt &amp; stop trace
-log:/sys/kernel/debug/tracing/trace
-</pre>
-
-<p>The following components can impact latency:</p>
-
-<ul>
-<li><strong>Android build mode</strong>. Eng mode is usually slower than
-userdebug mode.</li>
-<li><strong>Framework</strong>. How does the framework service use
-<code>ioctl</code> to configure the binder?</li>
-<li><strong>Binder driver</strong>. Does the driver support fine-grained
-locking? Does the driver contain all performance tuning patches?</li>
-<li><strong>Kernel version</strong>. The better the real-time capability of the
-kernel, the better the results.</li>
-<li><strong>Kernel config</strong>. Does the kernel config contain
-<code>DEBUG</code> configs such as <code>DEBUG_PREEMPT</code> and
-<code>DEBUG_SPIN_LOCK</code>?</li>
-<li><strong>Kernel scheduler</strong>. Does the kernel have an Energy-Aware
-scheduler (EAS) or Heterogeneous Multi-Processing (HMP) scheduler? Are there
-kernel drivers (<code>cpu-freq</code> driver, <code>cpu-idle</code> driver,
-<code>cpu-hotplug</code>, etc.) 
that impact the scheduler?</li>
-</ul>
-
-  </body>
-</html>
diff --git a/en/devices/tech/vts/setup.html b/en/devices/tech/vts/setup.html
deleted file mode 100644
index 79bc79d7..00000000
--- a/en/devices/tech/vts/setup.html
+++ /dev/null
@@ -1,198 +0,0 @@
-<html devsite>
-  <head>
-    <title>VTS Dashboard Setup</title>
-    <meta name="project_path" value="/_project.yaml" />
-    <meta name="book_path" value="/_book.yaml" />
-  </head>
-  <body>
-  <!--
-      Copyright 2017 The Android Open Source Project
-
-      Licensed under the Apache License, Version 2.0 (the "License");
-      you may not use this file except in compliance with the License.
-      You may obtain a copy of the License at
-
-          http://www.apache.org/licenses/LICENSE-2.0
-
-      Unless required by applicable law or agreed to in writing, software
-      distributed under the License is distributed on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-      See the License for the specific language governing permissions and
-      limitations under the License.
-  -->
-
-<p>
-The VTS Dashboard provides a user backend and user interface for viewing test
-results from the VTS continuous integration system. It supports test-driven
-development with tools such as test status notifications to help developers
-locate and prevent regression areas during the development cycle (including test
-monitoring and
-triaging support).
-</p>
-
-<p>
-The user interface supports new features (such as native code coverage) provided
-by the VTS infrastructure and enables the development of tools with optimized
-and well-characterized performance by offering continuous performance
-monitoring. 
-</p> - -<h2 id=requirements>Requirements</h2> -<p> -The following services are required to use the VTS Dashboard: -</p> -<ul> -<li><a href="https://maven.apache.org/">Apache Maven</a>, for building and -deployment</li> -<li><a href="https://cloud.google.com/appengine">Google Cloud App Engine</a>, -for web-service hosting</li> -<li><a href="https://cloud.google.com/datastore/docs/">Google Cloud -Datastore</a>, for storage</li> -<li><a href="http://www.stackdriver.com/">Google Stackdriver</a>, for -monitoring</li> -</ul> - -<p> -Viewing <a href="/devices/architecture/testing/ui.html#coverage">test -coverage</a> relies on a REST API to a source code server (e.g. Gerrit), which -enables the web service to fetch original source code according to existing -access control lists. -</p> - -<h2 id=arch>Architecture</h2> -<p> -The VTS Dashboard uses the following architecture: -</p> -<img src="../images/treble_vts_dash_arch.png" title="VTS Dashboard Architecture"> -<figcaption><strong>Figure 1</strong>. VTS Dashboard architecture.</figcaption> - -<p> -Test status results are continuously uploaded to the Cloud Datastore database -via a REST interface. The VTS runner automatically processes the results and -serializes them using the Protobuf format. -</p> -<p> -Web servlets form the primary access point for users, delivering and processing -data from the Datastore database. The servlets include: a main servlet for -delivering all of the tests, a preferences servlet for managing user favorites, -a results servlet for populating a test table, a graph servlet for preparing -profiling data, and a coverage servlet for preparing coverage data for the -client. -</p> -<p> -Each test module has its own Datastore ancestry tree and test results are -indexed with the Unix timestamp of the test start time. Coverage data in the -database is stored with the test results as a vector of counts (i.e. 
for each
-line in the original source file) and identifying information to fetch the
-source code from a source code server.
-</p>
-<p>
-The notification service runs using task queues, identifying test case status
-changes, and notifying subscribers. Stateful information is stored in a status
-table to keep track of data freshness and existing failures. This allows the
-notification service to provide rich information about individual test case
-failures and fixes.
-</p>
-
-<h2 id=code-structure>Code structure</h2>
-<p>
-VTS Dashboard essential components include the servlets implemented in Java,
-the front-end JSPs, CSS stylesheets, and configuration files. The following list
-details the locations and descriptions of these components (all paths relative
-to <code>test/vts/web/dashboard</code>):
-</p>
-<ul>
-<li><code>pom.xml</code><br>Settings file where environment variables and
-dependencies are defined.</li>
-<li><code>src/main/java/com/android/vts/api/</code><br>Contains endpoints for
-interacting with the data via REST.</li>
-<li><code>src/main/java/com/android/vts/entity/</code><br>Contains Java models
-of the Datastore entities.</li>
-<li><code>src/main/java/com/android/vts/proto/</code><br>Contains Java files
-for Protobuf, including <code>VtsReportMessage.java</code>, which is a Java
-implementation of the Protobuf type used to describe VTS test results.</li>
-<li><code>src/main/java/com/android/vts/servlet/</code><br>Contains Java
-files for servlets.</li>
-<li><code>src/main/java/com/android/vts/util/</code><br>Contains Java files
-for utility functions and classes used by the servlets.</li>
-<li><code>src/test/java/com/android/vts/</code><br>Contains UI tests for the
-servlets and utils.</li>
-<li><code>src/main/webapp/</code><br>Contains files related to the UI (JSP,
-CSS, XML):
-  <ul>
-  <li><code>js/</code>. Contains Javascript files used by the web pages.</li>
-  <li><code>WEB-INF/</code>. 
Contains configuration and UI files.</li>
-  <li><code>jsp/</code>. Contains the JSP files for each web page.</li>
-  </ul>
-</li>
-<li><code>appengine-web.xml</code><br>Settings file where environment
-variables are loaded.</li>
-<li><code>web.xml</code><br>Settings file where servlet mappings and
-security constraints are defined.</li>
-<li><code>cron.xml</code><br>Settings file defining scheduled tasks (i.e.
-the notification service).</li>
-</ul>
-
-<h2 id=setup>Setting up the Dashboard</h2>
-<p>
-To set up the VTS Dashboard:
-</p>
-<ol>
-<li>Create a Google Cloud App Engine Project.</li>
-<li>Set up the deployment host by installing:
-  <ul>
-  <li>Java 8</li>
-  <li>Google App Engine SDK</li>
-  <li>Maven</li>
-  </ul>
-</li>
-<li>Generate an OAuth 2.0 Client ID in the Google Cloud API Manager.</li>
-<li>Create a Service Account and create a keyfile.</li>
-<li>Add an email address to the App Engine Email API Authorized Senders List.</li>
-<li>Set up a Google Analytics Account.</li>
-<li>Specify environment variables in the Dashboard <code>pom.xml</code>:
-  <ul>
-  <li>Set the client ID with the OAuth 2.0 ID (from step 3).</li>
-  <li>Set the service client ID with the identifier included in the keyfile (from
-  step 4).</li>
-  <li>Specify the sender email address for alerts (from step 5).</li>
-  <li>Specify an email domain to which all emails will be sent.</li>
-  <li>Specify the address to the Gerrit REST server.</li>
-  <li>Specify the OAuth 2.0 scope to use for the Gerrit REST server.</li>
-  <li>Specify the Google Analytics ID (from step 6).</li>
-  </ul>
-</li>
-<li>To build and deploy the project, in a terminal run
-<code>mvn clean appengine:update</code>.</li>
-</ol>
-
-<p>
-For more information regarding Dashboard setup and configuration, refer to the
-<a href="https://codelabs.developers.google.com/codelabs/android-vts">Android
-VTS Code Lab</a>. 
-</p>
-
-<h2 id=security>Security considerations</h2>
-<p>
-Robust coverage information requires access to the original source code.
-However, some code may be sensitive, and an additional gateway to it could allow
-existing access control lists to be circumvented.
-</p>
-<p>
-To avoid this threat, instead of serving the source code with the coverage
-information, the Dashboard directly handles a coverage vector (i.e., a vector of
-execution counts mapping to the lines in a source file). Along with the coverage
-vector, the Dashboard receives a Git project name and path so that the client
-can fetch the code from an external source code API. The client browser receives
-this information and uses cross-origin resource sharing (CORS) in Javascript to
-query the source code server for the original source code; the resulting code is
-combined with the coverage vector to produce a display.
-</p>
-<p>
-This approach does not widen the attack surface because the Dashboard uses the
-user's cookies to authenticate with an outside service. A user who cannot access
-source code directly cannot exploit the Dashboard to view sensitive information.
-</p>
-
-  </body>
-</html>
diff --git a/en/devices/tech/vts/ui.html b/en/devices/tech/vts/ui.html
deleted file mode 100644
index 775ab372..00000000
--- a/en/devices/tech/vts/ui.html
+++ /dev/null
@@ -1,161 +0,0 @@
-<html devsite>
-  <head>
-    <title>VTS Dashboard UI</title>
-    <meta name="project_path" value="/_project.yaml" />
-    <meta name="book_path" value="/_book.yaml" />
-  </head>
-  <body>
-  <!--
-      Copyright 2017 The Android Open Source Project
-
-      Licensed under the Apache License, Version 2.0 (the "License");
-      you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. - --> - - -<p> -The VTS Dashboard provides a cohesive user interface that uses material design -to effectively display information about test results, profiling, and coverage. -Dashboard styling uses open-source Javascript libraries including Materialize -CSS and jQueryUI to process data delivered by Java servlets in Google App -Engine. -</p> - -<h2>Dashboard home</h2> -<p> -The Dashboard home page displays a list of test suites a user has added to -favorites. -</p> -<img src="../images/treble_vts_ui_main.png" title="VTS Dashboard landing page"> -<figcaption><strong>Figure 1.</strong> VTS Dashboard, home page.</figcaption> - -<p> -From this list, users can: -</p> -<ul> -<li>Select a test suite to view results for that suite. -<li>Click <strong>SHOW ALL</strong> to view all VTS test names. -<li>Select the <strong>Edit</strong> icon to modify the Favorites list. -<img src="../images/treble_vts_ui_favorites.png" title="VTS Dashboard favorites"> -<figcaption><strong>Figure 2.</strong> VTS Dashboard, editing Favorites -page.</figcaption></li> -</ul> - -<h2 id=test-results>Test results</h2> -<p> -Test Results displays the latest information about the selected test suite, -including a list of profiling points, a table of test case results in -chronological order, and a pie chart displaying the result breakdown of the -latest run (users can load older data by paging right). 
-</p>
-
-<img src="../images/treble_vts_ui_results.png" title="VTS Dashboard results">
-<figcaption><strong>Figure 3.</strong> VTS Dashboard, test results.</figcaption>
-
-<p>
-Users can filter data using queries or by modifying the test type (pre-submit,
-post-submit, or both). Search queries support general tokens and field-specific
-qualifiers; supported search fields are: device build ID, branch, target name,
-device name, and test build ID. These are specified in the format:
-<var>FIELD-ID</var>="<var>SEARCH QUERY</var>". Quotes are used to treat multiple
-words as a single token to match with the data in the columns.
-</p>
-
-<h2 id=profiling>Data profiling</h2>
-<p>
-Users can select a profiling point to reach an interactive view of the
-quantitative data for that point in a <strong>line graph</strong> or
-<strong>histogram</strong> (examples below). By default, the view displays the
-latest information; users can use the date picker to load specific time windows.
-</p>
-<img src="../images/treble_vts_ui_performance.png" title="VTS Dashboard performance">
-<figcaption><strong>Figure 4.</strong> VTS Dashboard, line graph performance.
-</figcaption>
-<p>
-Line graphs display data from a collection of unordered performance values,
-which can be useful when a performance test produces a vector of performance
-values that vary as a function of another variable (e.g., throughput versus
-message size).
-</p>
-<img src="../images/treble_vts_ui_histogram.png" title="VTS Dashboard histogram">
-<figcaption><strong>Figure 5.</strong> VTS Dashboard, histogram performance.</figcaption>
-
-<h2 id=coverage>Test coverage</h2>
-<p>
-Users can view coverage information from the coverage percent link in the test
-results.</p>
-<img src="../images/treble_vts_ui_coverage.png" title="VTS Dashboard coverage"> -<figcaption> -<strong>Figure 6.</strong> VTS Dashboard, coverage percentages.</figcaption> - -<p> -For each test case and source file, users can view an expandable element -containing color-coded source code according to the coverage provided by the -selected test: -</p> -<img src="../images/treble_vts_ui_coverage_source.png" title="VTS Dashboard coverage_source"> -<figcaption> -<strong>Figure 7.</strong> VTS Dashboard, coverage source code.</figcaption> - -<ul> -<li>Uncovered lines are highlighted <font style="color:red">red</font>.</li> -<li>Covered lines are highlighted <font style="color:green">green</font>.</li> -<li>Non-executable lines are <strong>uncolored</strong>.</li> -</ul> - -<p> -Coverage information is grouped depending into sections depending on how it was -provided at run-time. Tests may upload coverage: -</p> -<ul> -<li><strong>Per function</strong>. Section headers have the format "Coverage: -<var>FUNCTION-NAME</var>".</li> -<li><strong>In Total</strong> (provided at the end of the test run). Only one -header is present: "Coverage: All".</li> -</ul> - -<p> -The Dashboard fetches source code client-side from a server, which uses the -open-source -<a href="https://gerrit-review.googlesource.com/Documentation/rest-api.html">Gerrit -REST API</a>. -</p> - -<h2 id=monitor>Monitoring & testing</h2> -<p> -The VTS Dashboard provides the following monitors and unit tests. -</p> -<ul> -<li><strong>Test email alerts</strong>. Alerts are configured in a Cron job that -executes at a fixed interval of two (2) minutes. The job reads the VTS status -table to determine if new data has been uploaded to each table, done by checking -the test's raw data upload timestamp is newer than the last status update -timestamp. If the upload timestamp is newer, the job queries for new data -between now and the last raw data upload. 
New test case failures, continued test
-case failures, transient test case failures, test case fixes, and inactive tests
-are identified; this information is then emailed to the subscribers
-of each test.</li>
-<li><strong>Web service health</strong>. Google Stackdriver integrates with
-Google App Engine to provide easy monitoring of the VTS Dashboard. Simple uptime
-checks verify that pages can be accessed, while other tests can be created to
-verify latency on each page, servlet, or database. These checks ensure the
-Dashboard is always accessible (otherwise an administrator is notified).</li>
-<li><strong>Analytics</strong>. Each page on the VTS Dashboard supports
-integration with Google Cloud Analytics, provided that an Analytics ID is
-specified in the configuration. This provides more robust analysis of page
-usage, user interaction, locality, session statistics, etc. Simply providing
-a valid ID in the pom.xml file enables analytics automatically.</li>
-</ul>
-
-  </body>
-</html>
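As a quick illustration of the latency-test result format shown earlier, the JSON output can be checked programmatically. This is a minimal sketch, not part of the VTS tooling: the field names (`other_ms`, `fifo_ms`, `meetR`, `inheritance`) come from the sample output above, while the script, its `check_pair` helper, and the 0.999 meet-ratio threshold are hypothetical.

```python
import json

# Sample result in the format shown in the libhwbinder_latency example above
# (one pair "P0" kept for brevity); all field names come from that output.
result = json.loads("""
{
"cfg":{"pair":3,"iterations":5000,"deadline_us":2500},
"P0":{"SYNC":"GOOD","S":9352,"I":10000,"R":0.9352,
  "other_ms":{"avg":0.2,"wst":2.8,"bst":0.053,"miss":2,"meetR":0.9996},
  "fifo_ms":{"avg":0.16,"wst":1.5,"bst":0.067,"miss":0,"meetR":1}},
"inheritance": "PASS"
}
""")

def check_pair(pair, min_meet_ratio=0.999):
    """Return True if both priority classes met the deadline often enough.

    The 0.999 threshold is an assumed example value, not an official limit.
    """
    return all(pair[k]["meetR"] >= min_meet_ratio for k in ("other_ms", "fifo_ms"))

# Pair entries are keyed "P0", "P1", ... in the test output.
failures = [name for name, data in result.items()
            if name.startswith("P") and not check_pair(data)]
print("inheritance:", result["inheritance"], "failing pairs:", failures)
```

A script like this could gate a continuous-integration run on both the priority-inheritance verdict and the per-pair deadline meet ratios.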