<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
<head>
  <meta charset="utf-8" />
  <meta name="generator" content="pandoc" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <title>Testing the JDK</title>
  <style>
    code{white-space: pre-wrap;}
    span.smallcaps{font-variant: small-caps;}
    div.columns{display: flex; gap: min(4vw, 1.5em);}
    div.column{flex: auto; overflow-x: auto;}
    div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
    ul.task-list{list-style: none;}
    ul.task-list li input[type="checkbox"] {
      width: 0.8em;
      margin: 0 0.8em 0.2em -1.6em;
      vertical-align: middle;
    }
    .display.math{display: block; text-align: center; margin: 0.5rem auto;}
  </style>
  <link rel="stylesheet" href="../make/data/docs-resources/resources/jdk-default.css" />
  <style type="text/css">pre, code, tt { color: #1d6ae5; }</style>
  <!--[if lt IE 9]>
    <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
  <![endif]-->
</head>
<body>
<header id="title-block-header">
<h1 class="title">Testing the JDK</h1>
</header>
<nav id="TOC" role="doc-toc">
<ul>
<li><a href="#overview" id="toc-overview">Overview</a></li>
<li><a href="#running-tests-locally-with-make-test"
id="toc-running-tests-locally-with-make-test">Running tests locally with
<code>make test</code></a>
<ul>
<li><a href="#configuration"
id="toc-configuration">Configuration</a></li>
</ul></li>
<li><a href="#test-selection" id="toc-test-selection">Test selection</a>
<ul>
<li><a href="#common-test-groups" id="toc-common-test-groups">Common
Test Groups</a></li>
<li><a href="#jtreg" id="toc-jtreg">JTReg</a></li>
<li><a href="#gtest" id="toc-gtest">Gtest</a></li>
<li><a href="#microbenchmarks"
id="toc-microbenchmarks">Microbenchmarks</a></li>
<li><a href="#special-tests" id="toc-special-tests">Special
tests</a></li>
</ul></li>
<li><a href="#test-results-and-summary"
id="toc-test-results-and-summary">Test results and summary</a></li>
<li><a href="#test-suite-control" id="toc-test-suite-control">Test suite
control</a>
<ul>
<li><a href="#general-keywords-test_opts"
id="toc-general-keywords-test_opts">General keywords
(TEST_OPTS)</a></li>
<li><a href="#jtreg-keywords" id="toc-jtreg-keywords">JTReg
keywords</a></li>
<li><a href="#gtest-keywords" id="toc-gtest-keywords">Gtest
keywords</a></li>
<li><a href="#microbenchmark-keywords"
id="toc-microbenchmark-keywords">Microbenchmark keywords</a></li>
</ul></li>
<li><a href="#notes-for-specific-tests"
id="toc-notes-for-specific-tests">Notes for Specific Tests</a>
<ul>
<li><a href="#docker-tests" id="toc-docker-tests">Docker Tests</a></li>
<li><a href="#non-us-locale" id="toc-non-us-locale">Non-US
locale</a></li>
<li><a href="#pkcs11-tests" id="toc-pkcs11-tests">PKCS11 Tests</a></li>
<li><a href="#testing-ahead-of-time-optimizations"
id="toc-testing-ahead-of-time-optimizations">Testing Ahead-of-time
Optimizations</a></li>
<li><a href="#testing-with-alternative-security-providers"
id="toc-testing-with-alternative-security-providers">Testing with
alternative security providers</a></li>
<li><a href="#client-ui-tests" id="toc-client-ui-tests">Client UI
Tests</a></li>
</ul></li>
<li><a href="#editing-this-document"
id="toc-editing-this-document">Editing this document</a></li>
</ul>
</nav>
<h2 id="overview">Overview</h2>
<p>The bulk of JDK tests use <a
href="https://openjdk.org/jtreg/">jtreg</a>, a regression test framework
and test runner built for the JDK's specific needs. Other test
frameworks are also used. The different test frameworks can be executed
directly, but there is also a set of make targets intended to simplify
the interface, and figure out how to run your tests for you.</p>
<h2 id="running-tests-locally-with-make-test">Running tests locally with
<code>make test</code></h2>
<p>This is the easiest way to get started. Assuming you've built the JDK
locally, execute:</p>
<pre><code>$ make test</code></pre>
<p>This will run a default set of tests against the JDK, and present you
with the results. <code>make test</code> is part of a family of
test-related make targets which simplify running tests, because they
invoke the various test frameworks for you. The "make test framework" is
simple to start with, but more complex ad-hoc combinations of tests are
also possible. You can always invoke the test frameworks directly if you
want even more control.</p>
<p>Some example command-lines:</p>
<pre><code>$ make test-tier1
$ make test-jdk_lang JTREG=&quot;JOBS=8&quot;
$ make test TEST=jdk_lang
$ make test-only TEST=&quot;gtest:LogTagSet gtest:LogTagSetDescriptions&quot; GTEST=&quot;REPEAT=-1&quot;
$ make test TEST=&quot;hotspot:hotspot_gc&quot; JTREG=&quot;JOBS=1;TIMEOUT_FACTOR=8;JAVA_OPTIONS=-XshowSettings -Xlog:gc+ref=debug&quot;
$ make test TEST=&quot;jtreg:test/hotspot:hotspot_gc test/hotspot/jtreg/native_sanity/JniVersion.java&quot;
$ make test TEST=&quot;micro:java.lang.reflect&quot; MICRO=&quot;FORK=1;WARMUP_ITER=2&quot;
$ make exploded-test TEST=tier2</code></pre>
<p>"tier1" and "tier2" refer to tiered testing, see further down. "TEST"
is a test selection argument which the make test framework will use to
try to find the tests you want. It iterates over the available test
frameworks, and if the test isn't present in one, it tries the next one.
The main target <code>test</code> uses the jdk-image as the tested
product. There is also an alternate target <code>exploded-test</code>
that uses the exploded image instead. Not all tests will run
successfully on the exploded image, but using this target can greatly
improve rebuild times for certain workflows.</p>
<p>Previously, <code>make test</code> was used to invoke an old system
for running tests, and <code>make run-test</code> was used for the new
test framework. For backward compatibility with scripts and muscle
memory, <code>run-test</code> and variants like
<code>exploded-run-test</code> or <code>run-test-tier1</code> are kept
as aliases.</p>
<h3 id="configuration">Configuration</h3>
<p>To be able to run JTReg tests, <code>configure</code> needs to know
where to find the JTReg test framework. If it is not picked up
automatically by configure, use the
<code>--with-jtreg=&lt;path to jtreg home&gt;</code> option to point to
the JTReg framework. Note that this option should point to the JTReg
home, i.e. the top directory, containing <code>lib/jtreg.jar</code> etc.
(An alternative is to set the <code>JT_HOME</code> environment variable
to point to the JTReg home before running <code>configure</code>.)</p>
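<p>For example, assuming JTReg has been unpacked under a hypothetical
<code>/opt/jtreg</code> (so that <code>/opt/jtreg/lib/jtreg.jar</code>
exists), either of the following would work; the path is only an
illustration:</p>

```shell
# Point configure at the JTReg home directly (example path):
bash configure --with-jtreg=/opt/jtreg

# Or set JT_HOME before running configure:
export JT_HOME=/opt/jtreg
bash configure
```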
<p>To be able to run microbenchmarks, <code>configure</code> needs to
know where to find the JMH dependency. Use
<code>--with-jmh=&lt;path to JMH jars&gt;</code> to point to a directory
containing the core JMH and transitive dependencies. The recommended
dependencies can be retrieved by running
<code>sh make/devkit/createJMHBundle.sh</code>, after which
<code>--with-jmh=build/jmh/jars</code> should work.</p>
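<p>Putting the two steps together, a typical sequence, run from the JDK
source root, might look like this:</p>

```shell
# Download the recommended JMH dependencies, then point configure at them.
sh make/devkit/createJMHBundle.sh
bash configure --with-jmh=build/jmh/jars
```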
<p>When tests fail or time out, jtreg runs its failure handler to
capture necessary data from the system where the test was run. This data
can then be used to analyze the test failures. Collecting this data
involves running various commands (which are listed in files residing in
<code>test/failure_handler/src/share/conf</code>) and some of these
commands use <code>sudo</code>. If the system's <code>sudoers</code>
file isn't configured to allow running these commands, then you may be
prompted for a password during the failure handler execution. Typically,
when running locally, collecting this additional data isn't necessary.
To disable running the failure handler, use
<code>--enable-jtreg-failure-handler=no</code> when running
<code>configure</code>. If, however, you want to let the failure handler
run and don't want to be prompted for a sudo password, then you can
configure your <code>sudoers</code> file appropriately. Please read the
documentation of your operating system to see how to do that; here we
only show one possible way: edit the
<code>/etc/sudoers.d/sudoers</code> file to include the following
line:</p>
<pre><code>johndoe ALL=(ALL) NOPASSWD: /sbin/dmesg</code></pre>
<p>This line configures <code>sudo</code> to <em>not</em> prompt for a
password for the <code>/sbin/dmesg</code> command (this is one of the
commands that is listed in the files at
<code>test/failure_handler/src/share/conf</code>), for the user
<code>johndoe</code>. Here <code>johndoe</code> is the user account
under which the jtreg tests are run. Replace the username with a
relevant user account of your system.</p>
<h2 id="test-selection">Test selection</h2>
<p>All functionality is available using the <code>test</code> make
target. In this use case, the test or tests to be executed are selected
using the <code>TEST</code> variable. To speed up subsequent test runs
with no source code changes, <code>test-only</code> can be used instead,
which does not depend on the source and test image build.</p>
<p>For some common top-level tests, direct make targets have been
generated. This includes all JTReg test groups, the hotspot gtest, and
custom tests (if present). This means that <code>make test-tier1</code>
is equivalent to <code>make test TEST="tier1"</code>, but the former is
more tab-completion friendly. For more complex test runs, the
<code>test TEST="x"</code> solution needs to be used.</p>
<p>The test specifications given in <code>TEST</code> are parsed into
fully qualified test descriptors, which clearly and unambiguously show
which tests will be run. As an example, <code>:tier1</code> will expand
to include all subcomponent test directories that define
<code>tier1</code>, for example:
<code>jtreg:$(TOPDIR)/test/hotspot/jtreg:tier1 jtreg:$(TOPDIR)/test/jdk:tier1 jtreg:$(TOPDIR)/test/langtools:tier1 ...</code>.
You can always submit a list of fully qualified test descriptors in the
<code>TEST</code> variable if you want to shortcut the parser.</p>
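<p>For instance, a run that bypasses the parser by naming fully
qualified descriptors directly could look like this (illustrative
only):</p>

```shell
# Fully qualified descriptors skip test-selection parsing:
make test TEST="jtreg:test/jdk:tier1 gtest:all"
```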
<h3 id="common-test-groups">Common Test Groups</h3>
<p>Ideally, all tests are run for every change but this may not be
practical due to the limited testing resources, the scope of the change,
etc.</p>
<p>The source tree currently defines a few common test groups in the
relevant <code>TEST.groups</code> files. There are test groups that
cover a specific component, for example <code>hotspot_gc</code>. It is a
good idea to look into <code>TEST.groups</code> files to get a sense of
what tests are relevant to a particular JDK component.</p>
<p>Component-specific tests may miss some unintended consequences of a
change, so other tests should also be run. Again, it might be
impractical to run all tests, and therefore <em>tiered</em> test groups
exist. Tiered test groups are not component-specific, but rather cover
the significant parts of the entire JDK.</p>
<p>Multiple tiers allow balancing test coverage and testing costs. Lower
test tiers are supposed to contain the simpler, quicker and more stable
tests. Higher tiers are supposed to contain progressively more thorough,
slower, and sometimes less stable tests, or the tests that require
special configuration.</p>
<p>Contributors are expected to run the tests for the areas that are
changed, and the first N tiers they can afford to run, but at least
tier1.</p>
<p>A brief description of the tiered test groups:</p>
<ul>
<li><p><code>tier1</code>: This is the most fundamental test tier.
Roughly speaking, a failure of a test in this tier has the potential to
indicate a problem that would affect many Java programs. Tests in
<code>tier1</code> include tests of HotSpot, core APIs in the
<code>java.base</code> module, and the <code>javac</code> compiler.
Multiple developers run these tests every day. Because of the widespread
use, the tests in <code>tier1</code> are carefully selected and
optimized to run fast, and to run in the most stable manner. As a
guideline, nearly all individual tests in <code>tier1</code> are
expected to run to completion in ten seconds or less when run on common
configurations used for development. Long-running tests, even of core
functionality, should occur in higher tiers or be covered in other kinds
of testing. The test failures in <code>tier1</code> are usually followed
up on quickly, either with fixes, or by adding the relevant tests to the
problem list. GitHub Actions workflows, if enabled, run
<code>tier1</code> tests.</p></li>
<li><p><code>tier2</code>: This test group covers even more ground.
These contain, among other things, tests that either run for too long to
be at <code>tier1</code>, or may require special configuration, or tests
that are less stable, or cover the broader range of non-core JVM and JDK
features/components (for example, XML).</p></li>
<li><p><code>tier3</code>: This test group includes more stressful
tests, the tests for corner cases not covered by previous tiers, plus
the tests that require GUIs. As such, this suite should either be run
with low concurrency (<code>TEST_JOBS=1</code>), or without headful
tests (<code>JTREG_KEYWORDS=\!headful</code>), or both.</p></li>
<li><p><code>tier4</code>: This test group includes every other test not
covered by previous tiers. It includes, for example,
<code>vmTestbase</code> suites for Hotspot, which run for many hours
even on large machines. It also runs GUI tests, so the same
<code>TEST_JOBS</code> and <code>JTREG_KEYWORDS</code> caveats
apply.</p></li>
</ul>
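<p>For example, a local tier3 run that applies both of these caveats
could look like this (illustrative only):</p>

```shell
# Run tier3 with concurrency 1 and headful tests excluded.
make test TEST=tier3 JTREG="JOBS=1;KEYWORDS=\!headful"
```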
<h3 id="jtreg">JTReg</h3>
<p>JTReg tests can be selected either by picking a JTReg test group, or
a selection of files or directories containing JTReg tests.
Documentation can be found at <a
href="https://openjdk.org/jtreg/">https://openjdk.org/jtreg/</a>, note
especially the extensive <a
href="https://openjdk.org/jtreg/faq.html">FAQ</a>.</p>
<p>JTReg test groups can be specified either without a test root, e.g.
<code>:tier1</code> (or <code>tier1</code>, the initial colon is
optional), or with, e.g. <code>hotspot:tier1</code>,
<code>test/jdk:jdk_util</code> or
<code>$(TOPDIR)/test/hotspot/jtreg:hotspot_all</code>. The test root can
be specified either as an absolute path, or a path relative to the JDK
top directory, or the <code>test</code> directory. For simplicity, the
hotspot JTReg test root, which really is <code>hotspot/jtreg</code>, can
be abbreviated as just <code>hotspot</code>.</p>
<p>When specified without a test root, all matching groups from all test
roots will be added. Otherwise, only the group from the specified test
root will be added.</p>
<p>Individual JTReg tests or directories containing JTReg tests can also
be specified, like
<code>test/hotspot/jtreg/native_sanity/JniVersion.java</code> or
<code>hotspot/jtreg/native_sanity</code>. Just like for test root
selection, you can either specify an absolute path (which can even point
to JTReg tests outside the source tree), or a path relative to either
the JDK top directory or the <code>test</code> directory.
<code>hotspot</code> can be used as an alias for
<code>hotspot/jtreg</code> here as well.</p>
<p>As long as the test groups or test paths can be uniquely resolved,
you do not need to enter the <code>jtreg:</code> prefix. If this is not
possible, or if you want to use a fully qualified test descriptor, add
<code>jtreg:</code>, e.g.
<code>jtreg:test/hotspot/jtreg/native_sanity</code>.</p>
<h3 id="gtest">Gtest</h3>
<p><strong>Note:</strong> To be able to run the Gtest suite, you need to
configure your build to be able to find a proper version of the gtest
source. For details, see the section <a
href="building.html#running-tests">"Running Tests" in the build
documentation</a>.</p>
<p>Since the Hotspot Gtest suite is so quick, the default is to run all
tests. This is specified by just <code>gtest</code>, or as a fully
qualified test descriptor <code>gtest:all</code>.</p>
<p>If you want, you can single out an individual test or a group of
tests, for instance <code>gtest:LogDecorations</code> or
<code>gtest:LogDecorations.level_test_vm</code>. This can be
particularly useful if you want to run a flaky test repeatedly.</p>
<p>For Gtest, there is a separate test suite for each JVM variant. The
JVM variant is defined by adding <code>/&lt;variant&gt;</code> to the
test descriptor, e.g. <code>gtest:Log/client</code>. If you specify no
variant, gtest will run once for each JVM variant present (e.g. server,
client). So if you only have the server JVM present, then
<code>gtest:all</code> will be equivalent to
<code>gtest:all/server</code>.</p>
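<p>For example, assuming only a server JVM variant is present, these two
invocations select the same single Gtest group:</p>

```shell
# Select one gtest group; the /server suffix pins the JVM variant.
make test TEST=gtest:LogDecorations
make test TEST=gtest:LogDecorations/server
```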
<h3 id="microbenchmarks">Microbenchmarks</h3>
<p>Which microbenchmarks to run is selected using a regular expression
following the <code>micro:</code> test descriptor, e.g.,
<code>micro:java.lang.reflect</code>. This delegates the test selection
to JMH, meaning package name, class name and even benchmark method names
can be used to select tests.</p>
<p>Using special characters like <code>|</code> in the regular
expression is possible, but needs to be escaped multiple times:
<code>micro:ArrayCopy\\\\\|reflect</code>.</p>
<h3 id="special-tests">Special tests</h3>
<p>A handful of odd tests that are not covered by any other testing
framework are accessible using the <code>special:</code> test
descriptor. Currently, this includes <code>failure-handler</code> and
<code>make</code>.</p>
<ul>
<li><p>Failure handler testing is run using
<code>special:failure-handler</code> or just
<code>failure-handler</code> as test descriptor.</p></li>
<li><p>Tests for the build system, including both makefiles and related
functionality, are run using <code>special:make</code> or just
<code>make</code> as test descriptor. This is equivalent to
<code>special:make:all</code>.</p>
<p>A specific make test can be run by supplying it as an argument, e.g.
<code>special:make:idea</code>. As a special syntax, this can also be
expressed as <code>make-idea</code>, which allows for command lines such
as <code>make test-make-idea</code>.</p></li>
</ul>
<h2 id="test-results-and-summary">Test results and summary</h2>
<p>At the end of the test run, a summary of all tests run will be
presented. This will have a consistent look, regardless of what test
suites were used. This is a sample summary:</p>
<pre><code>==============================
Test summary
==============================
   TEST                                          TOTAL  PASS  FAIL ERROR
&gt;&gt; jtreg:jdk/test:tier1                           1867  1865     2     0 &lt;&lt;
   jtreg:langtools/test:tier1                     4711  4711     0     0
   jtreg:nashorn/test:tier1                        133   133     0     0
==============================
TEST FAILURE</code></pre>
<p>Tests where the number of TOTAL tests does not equal the number of
PASSed tests will be considered a test failure. These are marked with
the <code>&gt;&gt; ... &lt;&lt;</code> marker for easy
identification.</p>
<p>The classification of non-passed tests differs a bit between test
suites. In the summary, ERROR is used as a catch-all for tests that
neither passed nor are classified as failed by the framework. This might
indicate a test framework error, a timeout, or other problems.</p>
<p>In case of test failures, <code>make test</code> will exit with a
non-zero exit value.</p>
<p>All tests have their result stored in
<code>build/$BUILD/test-results/$TEST_ID</code>, where TEST_ID is a
path-safe conversion from the fully qualified test descriptor, e.g. for
<code>jtreg:jdk/test:tier1</code> the TEST_ID is
<code>jtreg_jdk_test_tier1</code>. This path is also printed in the log
at the end of the test run.</p>
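<p>The path-safe conversion can be approximated with a one-liner; this
is an illustration of the naming scheme, not the build system's actual
implementation:</p>

```shell
# Illustration only: approximate the TEST_ID conversion by replacing
# the path-unsafe characters ':' and '/' with underscores.
descriptor="jtreg:jdk/test:tier1"
test_id=$(echo "$descriptor" | tr ':/' '__')
echo "$test_id"   # jtreg_jdk_test_tier1
```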
<p>Additional work data is stored in
<code>build/$BUILD/test-support/$TEST_ID</code>. For some frameworks,
this directory might contain information that is useful in determining
the cause of a failed test.</p>
<h2 id="test-suite-control">Test suite control</h2>
<p>It is possible to control various aspects of the test suites using
make control variables.</p>
<p>These variables use a keyword=value approach to allow multiple values
to be set. So, for instance,
<code>JTREG="JOBS=1;TIMEOUT_FACTOR=8"</code> will set the JTReg
concurrency level to 1 and the timeout factor to 8. This is equivalent
to setting <code>JTREG_JOBS=1 JTREG_TIMEOUT_FACTOR=8</code>, but using
the keyword format means that the <code>JTREG</code> variable is parsed
and verified for correctness, so <code>JTREG="TMIEOUT_FACTOR=8"</code>
would give an error, while <code>JTREG_TMIEOUT_FACTOR=8</code> would
just pass unnoticed.</p>
<p>To separate multiple keyword=value pairs, use <code>;</code>
(semicolon). Since the shell normally eats <code>;</code>, the
recommended usage is to write the assignment inside quotes, e.g.
<code>JTREG="...;..."</code>. This will also make sure spaces are
preserved, as in
<code>JTREG="JAVA_OPTIONS=-XshowSettings -Xlog:gc+ref=debug"</code>.</p>
<p>(Other ways are possible, e.g. using backslash:
<code>JTREG=JOBS=1\;TIMEOUT_FACTOR=8</code>. Also, as a special
technique, the string <code>%20</code> will be replaced with space for
certain options, e.g.
<code>JTREG=JAVA_OPTIONS=-XshowSettings%20-Xlog:gc+ref=debug</code>.
This can be useful if you have layers of scripts and have trouble
getting proper quoting of command line arguments through.)</p>
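<p>The effect of the <code>%20</code> substitution can be illustrated in
plain shell; the replacement below only mimics what the make test
framework does internally for such options:</p>

```shell
# Illustration only: the framework replaces '%20' with a space for
# certain options, so a value can be passed without shell quoting.
value="JAVA_OPTIONS=-XshowSettings%20-Xlog:gc+ref=debug"
expanded=$(echo "$value" | sed 's/%20/ /g')
echo "$expanded"
```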
<p>As far as possible, the names of the keywords have been standardized
between test suites.</p>
<h3 id="general-keywords-test_opts">General keywords (TEST_OPTS)</h3>
<p>Some keywords are valid across different test suites. If you want to
run tests from multiple test suites, or just don't want to care which
test-suite-specific control variable to use, then you can use the
general TEST_OPTS control variable.</p>
<p>There are also some keywords that apply globally to the test runner
system, not to any specific test suite. These are also available as
TEST_OPTS keywords.</p>
<h4 id="jobs">JOBS</h4>
<p>Currently only applies to JTReg.</p>
<h4 id="timeout_factor">TIMEOUT_FACTOR</h4>
<p>Currently only applies to <a href="#timeout_factor-1">JTReg
-timeoutFactor</a>.</p>
<h4 id="java_options">JAVA_OPTIONS</h4>
<p>Applies to JTReg, GTest and Micro.</p>
<h4 id="vm_options">VM_OPTIONS</h4>
<p>Applies to JTReg, GTest and Micro.</p>
<h4 id="jcov">JCOV</h4>
<p>This keyword applies globally to the test runner system. If set to
<code>true</code>, it enables JCov coverage reporting for all tests run.
To be useful, the JDK under test must be built with JCov
instrumentation
(<code>configure --with-jcov=&lt;path to directory containing lib/jcov.jar&gt;</code>,
<code>make jcov-image</code>).</p>
<p>The simplest way to run tests with JCov coverage report is to use the
special target <code>jcov-test</code> instead of <code>test</code>, e.g.
<code>make jcov-test TEST=jdk_lang</code>. This will make sure the JCov
image is built, and that JCov reporting is enabled.</p>
<p>To include JCov coverage for just a subset of all modules, you can
use the <code>--with-jcov-modules</code> argument to
<code>configure</code>, e.g.
<code>--with-jcov-modules=jdk.compiler,java.desktop</code>.</p>
<p>For more fine-grained control, you can pass arbitrary filters to JCov
using <code>--with-jcov-filters</code>, and you can specify a specific
JDK to instrument using <code>--with-jcov-input-jdk</code>.</p>
<p>The JCov report is stored in
<code>build/$BUILD/test-results/jcov-output/report</code>.</p>
<p>Please note that running with JCov reporting can be very memory
intensive.</p>
<h4 id="jcov_diff_changeset">JCOV_DIFF_CHANGESET</h4>
<p>While collecting code coverage with JCov, it is also possible to find
coverage for only recently changed code. JCOV_DIFF_CHANGESET specifies a
source revision. A textual report will be generated showing coverage of
the diff between the specified revision and the repository tip.</p>
<p>The report is stored in the
<code>build/$BUILD/test-results/jcov-output/diff_coverage_report</code>
file.</p>
<h4 id="aot_jdk">AOT_JDK</h4>
<p>See <a href="#testing-ahead-of-time-optimizations">Testing
Ahead-of-time Optimizations</a>.</p>
<h3 id="jtreg-keywords">JTReg keywords</h3>
<h4 id="jobs-1">JOBS</h4>
<p>The test concurrency (<code>-concurrency</code>).</p>
<p>Defaults to TEST_JOBS (if set by <code>--with-test-jobs=</code>),
otherwise it defaults to JOBS, except for Hotspot, where the default is
<em>number of CPU cores/2</em>, but never more than <em>memory size in
GB/2</em>.</p>
<h4 id="timeout_factor-1">TIMEOUT_FACTOR</h4>
<p>The <code>TIMEOUT_FACTOR</code> is forwarded to the JTReg framework
itself (<code>-timeoutFactor</code>). Also, some test cases that
programmatically wait a certain amount of time will apply this factor.
If we run in forced compilation mode (<code>-Xcomp</code>), the build
system will automatically adjust this factor to compensate for the lower
performance. Defaults to 4.</p>
<h4 id="failure_handler_timeout">FAILURE_HANDLER_TIMEOUT</h4>
<p>Sets the argument <code>-timeoutHandlerTimeout</code> for JTReg. The
default value is 0. This is only valid if the failure handler is
built.</p>
<h4 id="test_thread_factory">TEST_THREAD_FACTORY</h4>
<p>Sets the <code>-testThreadFactory</code> for JTReg. It should be the
fully qualified classname of a class which implements
<code>java.util.concurrent.ThreadFactory</code>. One such implementation
class, named <code>Virtual</code>, is currently part of the JDK build in
the <code>test/jtreg_test_thread_factory/</code> directory. This class
gets compiled during the test image build. The implementation of the
<code>Virtual</code> class creates a new virtual thread for executing
each test class.</p>
<h4 id="jvmti_stress_agent">JVMTI_STRESS_AGENT</h4>
<p>Executes JTReg tests with the JVM TI stress agent. The stress agent
is part of the test library and is located in
<code>test/lib/jdk/test/lib/jvmti/libJvmtiStressAgent.cpp</code>. The
value of this argument is set as the JVM TI agent options. This mode
uses <code>ProblemList-jvmti-stress-agent.txt</code> as an additional
exclude list.</p>
<h4 id="test_mode">TEST_MODE</h4>
<p>The test mode (<code>agentvm</code> or <code>othervm</code>).</p>
<p>Defaults to <code>agentvm</code>.</p>
<h4 id="assert">ASSERT</h4>
<p>Enable asserts (<code>-ea -esa</code>, or none).</p>
<p>Set to <code>true</code> or <code>false</code>. If true, adds
<code>-ea -esa</code>. Defaults to true, except for hotspot.</p>
<h4 id="verbose">VERBOSE</h4>
<p>The verbosity level (<code>-verbose</code>).</p>
<p>Defaults to <code>fail,error,summary</code>.</p>
<h4 id="retain">RETAIN</h4>
<p>What test data to retain (<code>-retain</code>).</p>
<p>Defaults to <code>fail,error</code>.</p>
<h4 id="max_mem">MAX_MEM</h4>
<p>Limit memory consumption (<code>-Xmx</code> and
<code>-vmoption:-Xmx</code>, or none).</p>
<p>Limit memory consumption for JTReg test framework and VM under test.
Set to 0 to disable the limits.</p>
<p>Defaults to 512m, except for hotspot, where it defaults to 0 (no
limit).</p>
<h4 id="max_output">MAX_OUTPUT</h4>
<p>Set the property <code>javatest.maxOutputSize</code> for the
launcher, to change the default JTReg log limit.</p>
<h4 id="keywords">KEYWORDS</h4>
<p>JTReg keywords sent to JTReg using <code>-k</code>. Please be careful
in making sure that spaces and special characters (like <code>!</code>)
are properly quoted. To avoid some issues, the special value
<code>%20</code> can be used instead of space.</p>
<h4 id="extra_problem_lists">EXTRA_PROBLEM_LISTS</h4>
<p>Use an additional problem list file or files, in addition to the
default <code>ProblemList.txt</code> located at the JTReg test
roots.</p>
<p>If multiple file names are specified, they should be separated by a
space (or, to help avoid quoting issues, the special value
<code>%20</code>).</p>
<p>The file names should be either absolute, or relative to the JTReg
test root of the tests to be run.</p>
<h4 id="run_problem_lists">RUN_PROBLEM_LISTS</h4>
<p>Use the problem lists to select tests instead of excluding them.</p>
<p>Set to <code>true</code> or <code>false</code>. If <code>true</code>,
JTReg will use the <code>-match:</code> option, otherwise
<code>-exclude:</code> will be used. Default is <code>false</code>.</p>
<h4 id="options">OPTIONS</h4>
<p>Additional options to the JTReg test framework.</p>
<p>Use <code>JTREG="OPTIONS=--help all"</code> to see all available
JTReg options.</p>
<h4 id="java_options-1">JAVA_OPTIONS</h4>
<p>Additional Java options for running test classes (sent to JTReg as
<code>-javaoption</code>).</p>
<h4 id="vm_options-1">VM_OPTIONS</h4>
<p>Additional Java options to be used when compiling and running classes
(sent to JTReg as <code>-vmoption</code>).</p>
<p>This option is only needed in special circumstances. To pass Java
options to your test classes, use <code>JAVA_OPTIONS</code>.</p>
<h4 id="launcher_options">LAUNCHER_OPTIONS</h4>
<p>Additional Java options that are sent to the java launcher that
starts the JTReg harness.</p>
<h4 id="retry_count">RETRY_COUNT</h4>
<p>Retry failed tests up to a set number of times, until they pass. This
allows tests with intermittent failures to eventually pass. Defaults to
0.</p>
<h4 id="repeat_count">REPEAT_COUNT</h4>
<p>Repeat the tests up to a set number of times, stopping at first
failure. This helps to reproduce intermittent test failures. Defaults to
0.</p>
<h4 id="report">REPORT</h4>
<p>Use this report style when reporting test results (sent to JTReg as
<code>-report</code>). Defaults to <code>files</code>.</p>
<h3 id="gtest-keywords">Gtest keywords</h3>
<h4 id="repeat">REPEAT</h4>
<p>The number of times to repeat the tests
(<code>--gtest_repeat</code>).</p>
<p>Default is 1. Set to -1 to repeat indefinitely. This can be
especially useful combined with
<code>OPTIONS=--gtest_break_on_failure</code> to reproduce an
intermittent problem.</p>
<h4 id="options-1">OPTIONS</h4>
<p>Additional options to the Gtest test framework.</p>
<p>Use <code>GTEST="OPTIONS=--help"</code> to see all available Gtest
options.</p>
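<p>For instance, to hunt an intermittent Gtest failure, the keywords
above can be combined like this (illustrative only):</p>

```shell
# Repeat a single gtest group indefinitely, stopping at the first failure.
make test TEST=gtest:LogTagSet GTEST="REPEAT=-1;OPTIONS=--gtest_break_on_failure"
```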
<h3 id="microbenchmark-keywords">Microbenchmark keywords</h3>
<h4 id="fork">FORK</h4>
<p>Override the number of benchmark forks to spawn. Same as specifying
<code>-f &lt;num&gt;</code>.</p>
<h4 id="iter">ITER</h4>
<p>Number of measurement iterations per fork. Same as specifying
<code>-i &lt;num&gt;</code>.</p>
557 <h4 id="time">TIME</h4>
<p>Amount of time to spend in each measurement iteration, in seconds.
Same as specifying <code>-r &lt;num&gt;</code>.</p>
560 <h4 id="warmup_iter">WARMUP_ITER</h4>
561 <p>Number of warmup iterations to run before the measurement phase in
562 each fork. Same as specifying <code>-wi &lt;num&gt;</code>.</p>
563 <h4 id="warmup_time">WARMUP_TIME</h4>
564 <p>Amount of time to spend in each warmup iteration. Same as specifying
565 <code>-w &lt;num&gt;</code>.</p>
566 <h4 id="results_format">RESULTS_FORMAT</h4>
567 <p>Specify to have the test run save a log of the values. Accepts the
568 same values as <code>-rff</code>, i.e., <code>text</code>,
569 <code>csv</code>, <code>scsv</code>, <code>json</code>, or
570 <code>latex</code>.</p>
571 <h4 id="test_jdk">TEST_JDK</h4>
572 <p>The path to the JDK that will be used to run the benchmarks.</p>
573 <p>Defaults to <code>build/&lt;CONF-NAME&gt;/jdk</code>.</p>
574 <h4 id="benchmarks_jar">BENCHMARKS_JAR</h4>
575 <p>The path to the JAR containing the benchmarks.</p>
576 <p>Defaults to <code>test/micro/benchmarks.jar</code>.</p>
577 <h4 id="vm_options-2">VM_OPTIONS</h4>
<p>Additional VM arguments to provide to forked off VMs. Same as
<code>-jvmArgs &lt;args&gt;</code>.</p>
580 <h4 id="options-2">OPTIONS</h4>
581 <p>Additional arguments to send to JMH.</p>
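<p>To illustrate how several of the microbenchmark keywords combine
(the benchmark selection here is only an example), a quick,
low-precision run that saves results as JSON might look like:</p>
<pre><code>$ make test TEST=&quot;micro:java.lang.invoke&quot; \
    MICRO=&quot;FORK=1;ITER=2;TIME=5;WARMUP_ITER=2;RESULTS_FORMAT=json&quot;</code></pre>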
582 <h2 id="notes-for-specific-tests">Notes for Specific Tests</h2>
583 <h3 id="docker-tests">Docker Tests</h3>
584 <p>Docker tests with default parameters may fail on systems with glibc
585 versions not compatible with the one used in the default docker image
586 (e.g., Oracle Linux 7.6 for x86). For example, they pass on Ubuntu 16.04
587 but fail on Ubuntu 18.04 if run like this on x86:</p>
588 <pre><code>$ make test TEST=&quot;jtreg:test/hotspot/jtreg/containers/docker&quot;</code></pre>
<p>To run these tests correctly on Ubuntu 18.04, additional parameters
selecting a compatible docker image must be passed using
<code>JAVA_OPTIONS</code>:</p>
592 <pre><code>$ make test TEST=&quot;jtreg:test/hotspot/jtreg/containers/docker&quot; \
593     JTREG=&quot;JAVA_OPTIONS=-Djdk.test.docker.image.name=ubuntu
594     -Djdk.test.docker.image.version=latest&quot;</code></pre>
595 <h3 id="non-us-locale">Non-US locale</h3>
<p>If your locale is non-US, some tests are likely to fail. To work
around this, you can set the locale to US. On Unix platforms, simply
setting <code>LANG="en_US"</code> in the environment before running
tests should work. On Windows or macOS, setting
<code>JTREG="VM_OPTIONS=-Duser.language=en -Duser.country=US"</code>
helps for most, but not all, test cases.</p>
602 <p>For example:</p>
603 <pre><code>$ export LANG=&quot;en_US&quot; &amp;&amp; make test TEST=...
604 $ make test JTREG=&quot;VM_OPTIONS=-Duser.language=en -Duser.country=US&quot; TEST=...</code></pre>
605 <h3 id="pkcs11-tests">PKCS11 Tests</h3>
<p>It is highly recommended to use the latest NSS version when running
PKCS11 tests. An improper NSS version may lead to unexpected failures
that are hard to diagnose. For example,
sun/security/pkcs11/Secmod/AddTrustedCert.java may fail on Ubuntu 18.04
with the system's default NSS version. To run these tests correctly on
Ubuntu 18.04, set the system property
<code>jdk.test.lib.artifacts.&lt;NAME&gt;</code> to specify an
alternative NSS lib directory. The
614 <code>&lt;NAME&gt;</code> component should be replaced with the name
615 element of the appropriate <code>@Artifact</code> class. (See
616 <code>test/jdk/sun/security/pkcs11/PKCS11Test.java</code>)</p>
617 <p>For example:</p>
618 <pre><code>$ make test TEST=&quot;jtreg:sun/security/pkcs11/Secmod/AddTrustedCert.java&quot; \
619     JTREG=&quot;JAVA_OPTIONS=-Djdk.test.lib.artifacts.nsslib-linux_aarch64=/path/to/NSS-libs&quot;</code></pre>
620 <p>For more notes about the PKCS11 tests, please refer to
621 test/jdk/sun/security/pkcs11/README.</p>
622 <h3 id="testing-ahead-of-time-optimizations">Testing Ahead-of-time
623 Optimizations</h3>
624 <p>One way to improve test coverage of ahead-of-time (AOT) optimizations
625 in the JDK is to run existing jtreg test cases in a special "AOT_JDK"
626 mode. Example:</p>
627 <pre><code>$ make test JTREG=&quot;AOT_JDK=onestep&quot; \
628     TEST=open/test/hotspot/jtreg/runtime/invokedynamic</code></pre>
629 <p>In this testing mode, we first perform an AOT training run (see
630 https://openjdk.org/jeps/483) of a special test program (<a
631 href="../test/setup_aot/TestSetupAOT.java">test/setup_aot/TestSetupAOT.java</a>)
that accesses about 5,000 classes in the JDK core libraries.
633 Optimization artifacts for these classes (such as pre-linked lambda
634 expressions, execution profiles, and pre-generated native code) are
635 stored into an AOT cache file, which will be used by all the JVMs
636 launched by the selected jtreg test cases.</p>
637 <p>When the jtreg tests call into the core libraries classes that are in
638 the AOT cache, we will be able to test the AOT optimizations that were
639 used on those classes.</p>
640 <p>Please note that not all existing jtreg test cases can be executed
641 with the AOT_JDK mode. See <a
642 href="../test/hotspot/jtreg/ProblemList-AotJdk.txt">test/hotspot/jtreg/ProblemList-AotJdk.txt</a>
643 and <a
644 href="../test/jdk/ProblemList-AotJdk.txt">test/jdk/ProblemList-AotJdk.txt</a>.</p>
645 <p>Also, test cases that were written specifically to test AOT, such as
646 the tests under <a
647 href="../test/hotspot/jtreg/runtime/cds/">test/hotspot/jtreg/runtime/cds</a>,
648 cannot be executed with the AOT_JDK mode.</p>
649 <p>Valid values for <code>AOT_JDK</code> are <code>onestep</code> and
650 <code>twostep</code>. These control how the AOT cache is generated. See
651 https://openjdk.org/jeps/514 for details. All other values are
652 ignored.</p>
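<p>For example, the same test selection as above could be run with
two-step AOT cache generation:</p>
<pre><code>$ make test JTREG=&quot;AOT_JDK=twostep&quot; \
    TEST=open/test/hotspot/jtreg/runtime/invokedynamic</code></pre>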
653 <h3 id="testing-with-alternative-security-providers">Testing with
654 alternative security providers</h3>
655 <p>Some security tests use a hardcoded provider for
656 <code>KeyFactory</code>, <code>Cipher</code>,
657 <code>KeyPairGenerator</code>, <code>KeyGenerator</code>,
658 <code>AlgorithmParameterGenerator</code>, <code>KeyAgreement</code>,
659 <code>Mac</code>, <code>MessageDigest</code>, <code>SecureRandom</code>,
660 <code>Signature</code>, <code>AlgorithmParameters</code>,
661 <code>Configuration</code>, <code>Policy</code>, or
662 <code>SecretKeyFactory</code> objects. Specify the
663 <code>-Dtest.provider.name=NAME</code> property to use a different
664 provider for the service(s).</p>
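<p>As a sketch (both the test selection and the provider name are only
examples; any registered security provider can be named):</p>
<pre><code>$ make test TEST=&quot;jtreg:test/jdk/sun/security&quot; \
    JTREG=&quot;JAVA_OPTIONS=-Dtest.provider.name=SunJCE&quot;</code></pre>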
665 <h3 id="client-ui-tests">Client UI Tests</h3>
666 <h4 id="system-key-shortcuts">System key shortcuts</h4>
<p>Some Client UI tests use key sequences which may be reserved by the
operating system. Usually this causes the test to fail, so it is highly
recommended to disable system key shortcuts prior to testing. The steps
to access and disable system key shortcuts for various platforms are
provided below.</p>
672 <h5 id="macos">macOS</h5>
<p>Choose Apple menu; System Preferences, click Keyboard, then click
Shortcuts; select or deselect the desired shortcut.</p>
<p>For example,
test/jdk/javax/swing/TooltipManager/JMenuItemToolTipKeyBindingsTest/JMenuItemToolTipKeyBindingsTest.java
fails on macOS because it uses the <code>CTRL + F1</code> key sequence
to show or hide a tooltip message, but this key combination is reserved
by the operating system. To run the test correctly, disable the default
global key shortcut using the steps described above, deselecting the
"Turn keyboard access on or off" option, which is responsible for the
<code>CTRL + F1</code> combination.</p>
683 <h5 id="linux">Linux</h5>
<p>Open the Activities overview and start typing Settings; Choose
Settings, click Devices, then click Keyboard; set or override the
desired shortcut.</p>
687 <h5 id="windows">Windows</h5>
688 <p>Type <code>gpedit</code> in the Search and then click Edit group
689 policy; navigate to User Configuration -&gt; Administrative Templates
690 -&gt; Windows Components -&gt; File Explorer; in the right-side pane
691 look for "Turn off Windows key hotkeys" and double click on it; enable
692 or disable hotkeys.</p>
<p>Note: a restart is required for the settings to take effect.</p>
694 <h4 id="robot-api">Robot API</h4>
<p>Most automated Client UI tests use the <code>Robot</code> API to
control the UI. Usually, the default operating system settings need to
be adjusted for Robot to work correctly. Detailed steps for accessing
and updating these settings on different platforms are provided
below.</p>
700 <h5 id="macos-1">macOS</h5>
701 <p><code>Robot</code> is not permitted to control your Mac by default
702 since macOS 10.15. To allow it, choose Apple menu -&gt; System Settings,
703 click Privacy &amp; Security; then click Accessibility and ensure the
704 following apps are allowed to control your computer: <em>Java</em> and
705 <em>Terminal</em>. If the tests are run from an IDE, the IDE should be
706 granted this permission too.</p>
707 <h5 id="windows-1">Windows</h5>
<p>On Windows, if a Cygwin terminal is used to run the tests, there is
a delay in focus transfer which usually causes automated UI tests to
fail. To disable the delay, type <code>regedit</code> in the Search and
then select Registry Editor; navigate to the following key:
<code>HKEY_CURRENT_USER\Control Panel\Desktop</code>; make sure the
<code>ForegroundLockTimeout</code> value is set to 0.</p>
<p>Additional information about Client UI test configuration for
various operating systems can be obtained at <a
href="https://wiki.openjdk.org/display/ClientLibs/Automated+client+GUI+testing+system+set+up+requirements">Automated
client GUI testing system set up requirements</a>.</p>
718 <h2 id="editing-this-document">Editing this document</h2>
719 <p>If you want to contribute changes to this document, edit
720 <code>doc/testing.md</code> and then run
721 <code>make update-build-docs</code> to generate the same changes in
722 <code>doc/testing.html</code>.</p>
723 </body>
724 </html>