A LAVA test job comprises the actions to be run on the device together with the test instructions. For certain tests, the instructions can be included inline with the actions. For more complex tests, or to share test definitions across multiple devices, environments and purposes, the test can use a repository of YAML files.
The YAML is downloaded from the repository (or handled inline) and installed into the test image, either as a single file or as part of a git or bzr repository. (See Test definitions in version control)
Each test definition YAML file contains metadata and instructions. Metadata includes:
metadata:
format: Lava-Test Test Definition 1.0
name: singlenode-advanced
description: "Advanced (level 3): single node test commands for Linux Linaro ubuntu Images"
Note
The short name of the test definition (the value of the name field) must not contain whitespace, non-ASCII characters, or any of the following special characters: $& "'`()<>/\|;
If the file is not under version control (i.e. not in a git or bzr repository), the version of the file must also be specified in the metadata:
metadata:
format: Lava-Test Test Definition 1.0
name: singlenode-advanced
description: "Advanced (level 3): single node test commands for Linux Linaro ubuntu Images"
version: "1.0"
There are also optional metadata fields:
maintainer:
- user.user@linaro.org
os:
- ubuntu
scope:
- functional
devices:
- kvm
- arndale
- panda
- beaglebone-black
- beagle-xm
The instructions within the YAML file can include installation requirements for images based on supported distributions (currently, Ubuntu or Debian):
install:
deps:
- curl
- realpath
- ntpdate
- lsb-release
- usbutils
Note
For an install step to work, the test must first raise a usable network interface without running any instructions from the rest of the YAML file. If this is not possible, raise a network interface manually as a run step and install or build the components directly at that point.
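As a sketch, such a manual run step could look like the following. The interface name eth0, the use of dhclient, and the package names are assumptions; adjust them for the image under test:

```yaml
run:
    steps:
    # Bring up networking by hand before using the package manager.
    - ip link set eth0 up
    - dhclient eth0
    # Now install the components directly as run steps.
    - apt-get -q update
    - apt-get -q install -y curl
```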
When an external PPA or package repository (specific to Debian-based distributions) is required for the installation of packages, it can be added in the install section as follows:
install:
keys:
- 7C751B3F
- 6CCD4038
sources:
- https://security.debian.org
- ppa:linaro-maintainers/tools
deps:
- curl
- ntpdate
- lava-tool
Debian and Ubuntu repositories must be signed for the apt package management tool to trust them as package sources. To tell the system to trust extra repositories listed here, add references to the PGP keys used in the keys list. These may be either the names of Debian keyring packages (already available in the standard Debian archive) or PGP key IDs. If using key IDs, LAVA will import them from a key server (pgp.mit.edu). PPA keys will be imported automatically using data from launchpad.net. For more information, see the documentation of apt-add-repository (man 1 apt-add-repository).
See Debian apt source addition and Ubuntu PPA addition.
Note
When a new source is added and there are no ‘deps’ in the ‘install’ section, then it is the test writer’s responsibility to run apt update before attempting any other apt operation elsewhere in the test definition.
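For example, a definition that adds a source but lists no deps might handle the update itself as the first run step. The source URL and package name here are illustrative:

```yaml
install:
    sources:
    - https://security.debian.org
run:
    steps:
    # No deps were listed, so the test must refresh the package index itself.
    - apt-get -q update
    - apt-get -q install -y curl
```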
Note
When keys are not added for an apt source repository listed in the sources section, packages may fail to install if the repository is not trusted. LAVA does not add the --force-yes option during apt operations, which would override the trust check.
The principal purpose of the test definitions in the YAML file is to run commands on the device. These are specified in the run steps. Take care with colons inside commands: YAML treats an unquoted "text: text" as a mapping rather than a string, so this syntax will fail to parse as intended:
run:
steps:
- echo "test1: pass"
- echo test2: fail
While this syntax will pass:
- echo "test1:" "pass"
- echo "test2:" "fail"
Note
Commands must not try to access files from other test definitions. If a script needs to be in multiple tests, either combine the repositories into one or copy the script into multiple repositories. The copy of the script executed will be the one below the working directory of the current test.
Rather than refer to a separate file or VCS repository, it is also possible to create a test definition directly inside the test action of a job submission. This is called an inline test definition:
- test:
    definitions:
    - repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: apache-server
          description: "server installation"
          os:
          - debian
          scope:
          - functional
        run:
          steps:
          - apt update
          - apt install apache2
      from: inline
      name: apache-server
      path: inline/apache-server.yaml
Inline test definitions are written out as single files, so if the test definition needs to call any scripts or programs, those need to be downloaded or installed before being called in the inline test definition.
Note
Custom scripts are not available in an inline definition, unless the definition itself downloads the script and makes it executable.
When multiple actions are necessary to get usable output, write a custom script to go alongside the YAML and execute that script as a run step:
run:
steps:
- $(./my-script.sh arguments)
You can choose whatever scripting language you prefer, as long as you ensure that it is available in the test image.
Take care when using cd inside custom scripts - always store the initial return value or the value of pwd before the call and change back to that directory at the end of the script.
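A defensive sketch of this pattern in plain POSIX shell; the directory /tmp and the commands in between are placeholders:

```shell
#!/bin/sh
# Remember where the test started before changing directory.
start_dir=$(pwd)

cd /tmp
# ... commands that need to run inside /tmp would go here ...

# Return to the original directory so later steps and relative
# paths in the test definition keep working.
cd "$start_dir"
```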
Example of a custom script wrapping the output:
https://git.linaro.org/lava-team/refactoring.git/blob/HEAD:/functional/unittests.sh
The script is simply called directly from the test shell definition.
Example V2 job using this support:
https://git.linaro.org/lava-team/refactoring.git/blob/HEAD:/functional/qemu-server-pipeline.yaml
If your YAML file does not reside in a repository, the YAML run steps will need to ensure that a network interface is raised, install a tool like wget and then use that to obtain the script, setting permissions if appropriate.
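A sketch of such run steps; the URL is a placeholder, and tool availability depends on the image under test:

```yaml
run:
    steps:
    # Assumes a network interface has already been raised.
    - apt-get -q update && apt-get -q install -y wget
    - wget http://example.com/my-script.sh    # placeholder URL
    - chmod +x ./my-script.sh
    - ./my-script.sh
```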
If all your test does is feed the textual output of commands to the log file, you will spend a lot of time reading log files. To make test results easier to parse, aggregate and compare, individual commands can be converted into test cases with a pass or fail result. The simplest way to do this is to use the exit value of the command. A non-zero exit value is a test case failure. This produces a simple list of passes and failures in the result bundle which can be easily tracked over time.
To use the exit value, simply precede the command with a call to lava-test-case with a test-case name (no spaces):
run:
steps:
- lava-test-case test-ls-command --shell ls /usr/bin/sort
- lava-test-case test-ls-fail --shell ls /user/somewhere/else/
Use subshells instead of backticks to execute a command as an argument to another command:
- lava-test-case pointless-example --shell ls $(pwd)
For more details on the contents of the YAML file and how to construct YAML for your own tests, see Writing Tests.
Warning
Parse patterns and fixup dictionaries are confusing and hard to debug. The syntax is Python and the support remains for compatibility with existing Lava Test Shell Definitions. With LAVA V2, it is recommended to move parsing into a custom script contained within the test definition repository. The script can simply call lava-test-case directly with the relevant options once the data is parsed. This has the advantage that the log output from LAVA can be tested directly as input for the script.
If the test involves parsing the output of a command rather than simply relying on the exit value, LAVA can use a pass/fail/skip/unknown output:
run:
steps:
- echo "test1:" "pass"
- echo "test2:" "fail"
- echo "test3:" "skip"
- echo "test4:" "unknown"
The quotes are required to ensure correct YAML parsing.
The parse section can supply a parser to convert the output into test case results:
parse:
pattern: "(?P<test_case_id>.*-*):\\s+(?P<result>(pass|fail|skip|unknown))"
The result of the above test would be a set of results:
test1 -> pass
test2 -> fail
test3 -> skip
test4 -> unknown
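Because parse patterns use Python regular expression syntax, their behaviour can be checked outside LAVA. A minimal sketch, where the sample output lines are assumptions standing in for the run step output:

```python
import re

# A pattern in the same style as the parse section above,
# covering all four result words.
pattern = re.compile(
    r"(?P<test_case_id>.*-*):\s+(?P<result>(pass|fail|skip|unknown))")

# Sample lines, as the run steps would print them.
lines = ["test1: pass", "test2: fail", "test3: skip", "test4: unknown"]

results = {}
for line in lines:
    match = pattern.match(line)
    if match:
        results[match.group("test_case_id")] = match.group("result")

print(results)
# → {'test1': 'pass', 'test2': 'fail', 'test3': 'skip', 'test4': 'unknown'}
```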
lava-test-case can also be used with a parser with the extra support for checking the exit value of the call:
run:
steps:
- echo "test1:" "pass"
- echo "test2:" "fail"
- lava-test-case echo1 --shell echo "test3:" "pass"
- lava-test-case echo2 --shell echo "test4:" "fail"
This syntax will result in extra test results:
test1 -> pass
test2 -> fail
test3 -> pass
test4 -> fail
echo1 -> pass
echo2 -> pass
Note that echo2 passed because the echo "test4:" "fail" returned an exit code of zero.
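This behaviour can be seen in any POSIX shell, independent of LAVA:

```shell
# echo prints its arguments and succeeds, regardless of what the text says.
echo "test4:" "fail"
status=$?
echo "exit code: $status"    # prints "exit code: 0"
```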
Alternatively, the --result option can be used to output the value to be picked up by the parser:
run:
steps:
- echo "test1:" "pass"
- echo "test2:" "fail"
- lava-test-case test5 --result pass
- lava-test-case test6 --result fail
This syntax will result in the test results:
test1 -> pass
test2 -> fail
test5 -> pass
test6 -> fail
Various tests require measurements, and lava-test-case supports a measurement and units per test case at a precision of 10 digits.
--result must always be specified, and only numbers can be recorded as measurements (to support charts based on measurement trends).
For example:
run:
steps:
- echo "test1:" "pass"
- echo "test2:" "fail"
- lava-test-case test5 --result pass --measurement 99 --units bottles
- lava-test-case test6 --result fail --measurement 0 --units mugs
This syntax will result in the test results:
test1 -> pass
test2 -> fail
test5 -> pass -> 99.0000000000 bottles
test6 -> fail -> 0E-10 mugs
The simplest way to use this with real data is to use a custom script which runs lava-test-case with the relevant arguments.
A version string or similar can be recorded as a lava-test-case name:
lava-test-case ${VERSION} --result pass
Version strings need specific handling to compare for newer, older, etc., so LAVA does not support comparing or ordering such strings beyond simple alphanumeric sorting. A custom frontend would be the best way to handle such results.
lava-test-case-attach is not supported in V2.
A test job may consist of several LAVA test definitions and multiple deployments, but this flexibility needs to be balanced against the complexity of the job and the ways to analyse the results.
lava-test-shell is a useful helper, but that help can become a limitation. Avoid relying upon the helper for anything more than the automation: put the logic and the parsing of your test into a more capable language. Remember: as test writer, you control which languages are available inside your test.
lava-test-shell itself has to get by with little more than busybox ash as the lowest common denominator.
Please don’t expect lava-test-shell to do everything.
Let lava-test-shell provide you with a directory layout containing your scripts, some basic information about the job and a way of reporting test case results - that’s about all it should be doing outside of the MultiNode API.
Do not lock yourself out of your tests
Follow the standard UNIX model of "Make each program do one thing well." Make a set of separate test definitions. Each definition should concentrate on one area of functionality and test that one area thoroughly.
While it is supported to reboot from one distribution and boot into a different one, the usefulness of this is limited. If the first environment fails, the subsequent tests might not run at all.
While LAVA tries to ensure that all tests are run, adding more and more test repositories to a single LAVA job increases the risk that one test will fail in a way that prevents the results from all tests being collected.
Overly long sets of test definitions also increase the complexity of the log files, which can make it hard to identify why a particular job failed.
Splitting a large job into smaller chunks also means that the device can run other jobs for other users in between the smaller jobs.