home
vvtest is a test harness designed to run verification, validation, and regression tests on workstations and on batch-queued cluster machines. Because it is a stand-alone product, analysts can also use it to manage sets of simulations.
Inline documentation is available from `vvtest --help` and can be used as a reference. This document is organized by topic.
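For quick orientation before the topic list: a vvtest test is typically a Python script whose header carries `#VVT:` directive comments. The sketch below is illustrative only; the file name, the keyword, and the `NAME` and `PLATFORM` fields are assumptions here, and the linked topics below are authoritative.

```python
# mytest.vvt -- a minimal, hypothetical test file
#VVT: keywords : fast

import vvtest_util as vvt  # module generated by vvtest in the test run directory

# A test passes when the script exits with a zero return code.
print('running test', vvt.NAME, 'on platform', vvt.PLATFORM)
```

Running `vvtest` from the directory containing the file scans for tests, executes them, and reports the results.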
- Obtaining and installing vvtest
- Writing and running a simple test
- Specifying the files used by a test
- Scanning for test files
- Rerunning tests
- Writing test files (see the combined directive sketch after this list)
- General format and common attributes
- link and copy: Soft link or copy working files into the test execution directory
- testname: Set the name of the test or define multiple test names in the same file
- enable: Enable or disable a test based on platforms and options
- skipif: Skip tests based on evaluation of a Python expression
- keywords: Arbitrary keyword strings and implicit keywords
- parameterize: Create multiple test instances with different parameter names and values
- analyze: Define an analysis test for a parameterized test
- sources: Associate additional source files with a test
- timeout: Set a specific runtime limit for a test
- baseline: Define a baseline or re-baseline operation
- include: Read and insert files containing more directives
- depends on: Specify test-to-test dependencies
- preload: Define a label to pass to the preload user plugin function
- Parameterizing a test
- Writing an execute/analyze test
- The importance of the "np" and "ndevices" parameters
- Using the "analyze only" option and writing tests that work with it
- Using staging in parameterized tests
- Setting parameterized values on the command line with the -S option
- Filtering and selecting tests
- Selectively rerunning tests
- Using arbitrary option strings
- Using the script utilities
- Marking tests for Test Driven Development
- Specifying test timeouts
- Encoding Test Results in Return Code
- Testing with devices, such as GPUs
- Saving and using test runtimes, for runtime filtering and batch performance
- The special "long" keyword
- Directory locking and the --force option
- Configuration overview
- Platform configuration
- Project specific configuration and plugins
- Project provided script utilities
- Setting a termination function in the project script_util_plugin.py file
- General test dependencies
- Integration with Jenkins using JUnit
- Integration with Kitware CDash
- Writing test results to JSON files
- Test baseline mechanism
- Inserting header directives from another file
- Batch mode
- Tracking and using test run times
- The -s or --search option
- Automating ssh with keytab.py
- The old inline documentation
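The directive topics above correspond to `#VVT:` header lines in a test file. The following is a hedged sketch combining several of them; the test name, linked file, and parameter values are invented for illustration, and the directive pages above are authoritative.

```python
# heat.vvt -- hypothetical sketch combining several header directives
#VVT: testname : heat_conduction
#VVT: keywords : medium
#VVT: parameterize : np = 1 4
#VVT: link : input.yaml
#VVT: timeout : 1200

import vvtest_util as vvt

# Each value of "np" produces a separate test instance; the value chosen
# for this instance is assumed to be exposed as an attribute of vvtest_util.
print('this instance runs with np =', vvt.np)
```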