Verify - A simple verification tool to manage the definition, building, and running of tests in a verification environment.
verify [options] config::test[,params...] ...
Options:
-h, --help Print usage information
--man Display manpage-styled help
--man2html Writes manpage-styled help to a formatted HTML file
-l, --log Specifies log filename
-d, --debug Use debug mode
-i, --interactive Interactive debug mode
-n Just print what would have run without running anything
-e, --email Email Verify status
-q, --quiet Quiet output
-v, --verbose Verbose output
-p, --print List available tests
-P, --parallel Run in parallel mode
-r, --report Enable test(s) pass/fail report logging
--no-build Skip build
--build Enable build
--no-run Skip run
--run Enable run
--version Display version information
-h, --help
    Shows this usage information.

--man
    Displays manpage-styled help documentation.

--man2html
    Writes the same manpage-styled help displayed by --man into an HTML-formatted file called verify.html.

-l, --log
    Specifies the name of the log file. Default is verify.log.

-d, --debug
    Enables debugging mode for the test(s). (Requires implementation in build_test.pm and run_test.pm.)

-i, --interactive
    Enables debugging in interactive mode for the test(s). (Requires implementation in build_test.pm and run_test.pm.)

-n
    Only prints what the tool would have done, without actually running anything in the shell.

--no-build
    Skips the build step.

--build
    Enables the build step (default). This will override previous usage of --no-build.

--no-run
    Skips the run step.

--run
    Enables the run step (default). This will override previous usage of --no-run.

-e, --email
    Emails results to one or more email recipients at tool completion.

-q, --quiet
    Suppresses all console output.

-v, --verbose
    Enables console output (default). This will override previous usage of --quiet.

-p, --print
    Causes Verify to find and list all available tests, and then exit.

-P, --parallel
    Enables running of multiple tests in parallel. See the section called Running Tests in the manual for more information.

-r, --report
    Records the pass/fail status of all tests run in a tab-delimited file, report.txt.

--version
    Displays version information for Verify on the console.
This script was designed with portability in mind, so only a few files are needed for basic operation. The initial configuration step when installing it is also minimal. The script consists of three groups of files: the main source, the indexing and parsing modules, and the user-implemented modules for the build and run steps.
The main source is in verify.pl. It contains all the code necessary for organizing and invoking tests. It queries the test file indexer (by default, TestIndex.pm) and invokes the build and run steps through methods implemented in build_test.pm and run_test.pm, respectively. These two files may exist anywhere, so long as they are both in the same directory and you configure verify.pl to know where they are. I will touch on configuration in the next subsection.
TestIndex.pm contains the code necessary for resolving config::testname pairs into their test files. It uses the test file parsing functions from TestFileParser.pm to store each test definition in a hash variable. It also contains the code for outputting the entire test list (when the tool is invoked with -p).
build_test.pm contains the code necessary for your build step. It implements three functions: pre_build(), build(), and post_build(). The implementation of these functions is not provided, because they are to be written by the user. run_test.pm is similar, but is associated with the run step. It implements three similar functions: pre_run(), run(), and post_run().
To install this tool, simply store verify.pl wherever you deem appropriate. Implement your build_test.pm and run_test.pm files and store them somewhere appropriate as well. (Details on implementing these files are provided in the next subsection.) This script requires a single environment variable to be set: $PRJ_HOME. It points to the head of your verification environment.
Next, you have to tell the tool where your build_test.pm and run_test.pm files are, as well as where it should start looking for test files. To do this, create a file called verify.ini and place it in $PRJ_HOME. Open it in a text editor and add any of the following lines:
testsdir=./path/to/tests
libsdir=./path/to/libs
testsdir specifies the relative path to your test files. It causes the tool to start under $PRJ_HOME/testsdir and recursively search for test definition files. libsdir tells the script where to look for your build_test.pm and run_test.pm files. This is also a relative path, under $PRJ_HOME.
Your verify.ini file can exclude either of these lines. For whatever the tool does not read in from this file, it will use the default.
NOTE: This tool will always look for the verify.ini file in $PRJ_HOME. If the file does not exist, it will default testsdir to $PRJ_HOME/verification/tests and libsdir to $PRJ_HOME/verification/scripts. If any errors occur when it tries to pull in these modules (whether compilation errors or a file not found), the tool will die, logging the error.
Templates for these two files are provided with the source code of this tool. You will notice that they are very sparse: each contains only three empty subroutine definitions. For both build() and run(), there is an associated pre_*() and post_*() subroutine. As you might guess, these subroutines are invoked before and after the corresponding method, respectively. You don't have to implement these functions, but they must exist; otherwise, the Perl compiler will fail. I recommend using them for steps that are required as part of your build and run flows, but aren't directly involved in actually building or running the test itself. For instance, I use pre_build() and pre_run() to parse test parameters, and post_run() to let me invoke my debugging environment after a test completes, if desired. However, you can use them to do anything you'd like.
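For orientation, here is a minimal sketch of what a build_test.pm implementation might look like before you fill in your flow. The package name and the return-value convention are assumptions for illustration; only the three subroutine names come from this tool's templates. run_test.pm would follow the same shape with pre_run(), run(), and post_run().

    package build_test;    # assumed package name; adjust to your environment

    use strict;
    use warnings;

    sub pre_build {
        # Parse test parameters here, if desired.
        return 0;
    }

    sub build {
        # Your build flow goes here; return a non-zero error code on failure.
        return 0;
    }

    sub post_build {
        # Post-build reporting or cleanup, if desired.
        return 0;
    }

    1;    # a Perl module must end with a true value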
When implementing these functions, you should not use Perl's built-in print() variants for displaying output on standard output. There is an extensive output and logging mechanism built into the tool, and anything you report using Perl's built-in functions cannot be captured in the log files. Instead, I provide a function for you to use:
verify::tlog(level, message);
This function displays a message on your console, and logs it as well. message can be any string. level can be either 0 or 1; these level codes correspond to standard output and standard error, respectively. The power of this function is that if you run tests in regression mode (with --parallel), or decide to force console output to be suppressed (with --quiet), any calls to tlog() will suppress console output but still write to the logfiles for later reference. Also, anything you output using tlog() with a level of 1 will be written both to the log file and to a dedicated error log (named logfile.log.error). This can be useful if you want to isolate errors for debug purposes.
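For example (the message text here is invented for illustration), a run step might report progress and flag an error like this:

    verify::tlog(0, "Running test with default seed");      # console and log file
    verify::tlog(1, "ERROR: simulator exited abnormally");  # also copied to the error log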
Similarly, instead of using Perl's built-in system() call for invoking shell commands, I provide another subroutine for you to use:
verify::run_command(command_str);
This function behaves just like Perl's built-in system() call, except that it has a few additional capabilities. It executes your command (stored in command_str), retrieves the error code, and returns it to the caller. It also captures all output from the console and redirects it to the log file: anything the shell command sends to standard output or standard error is displayed on the console and written to the log. It can also suppress console output in the same way tlog() does. For logging purposes, it records the command being executed in the subshell. As an additional benefit, if you invoke the tool using -n ("null" mode), the tool follows all the steps of a normal run but skips actual execution of your build and run tools; this switch tells verify::run_command() to simply output the command it would have run, and return without actually running it.
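For instance, a build() implementation might wrap its build tool like this (the make invocation and path are placeholders for your real build command):

    sub build {
        # Invoke the build tool through the logging wrapper; placeholder command.
        my $status = verify::run_command("make -C $ENV{PRJ_HOME}/tb all");
        verify::tlog(1, "Build failed with status $status") if $status != 0;
        return $status;    # non-zero is treated as a failure
    }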
As before, if you invoke any commands using Perl's built-in system() call, you will not capture the test's output in the log files, nor be able to suppress console output. The tool will also not be able to skip running the command if you invoke it with -n.
Lastly, I have a replacement for Perl's built-in die() function. die() causes the tool to exit immediately, but I want to control how the tool exits, so that even in a case where a die() call is appropriate, potentially important information is still logged correctly. Instead of calling die(), use:
verify::tdie(message);
It does the same thing as Perl's die(), but allows messages to be captured in our log files, and results to be emailed (if you enabled emailing when you invoked the tool).
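For example (the filename is invented), a pre_run() step might bail out on a missing input like this:

    # Abort cleanly, with the message captured in the logs:
    open(my $fh, '<', 'stimulus.cfg')
        or verify::tdie("Could not open stimulus.cfg: $!");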
Of course, I do not prevent you from using Perl's built-in print() variants, nor system() (and related). You may use them if you feel it would be useful. However, if you choose to do so, be aware of the caveats described above.
To specify which test(s) you wish to run, simply give each test's config and name together on the command line, like so:
verify config::name
Verify will then search its database of tests for the test definition file containing this test. It parses this file to gather test information (such as the test and tool parameters to use). This information is passed to the build and run functions defined in the build_test.pm and run_test.pm files, which you implement with your build and run steps.
You may specify multiple tests to run in series by listing their config::name pairs one after another, separated by spaces, like so:
verify config1::test1 config2::test2 ...
If you want to run the same test multiple times, you can use the repetition operator ({}) instead of specifying the full test name multiple times. For instance, to run config1::test1 twice, you can type:
verify config1::test1{2}
If you have test parameters, simply include them before the repetition operator:
verify config1::test1,param1{2}
For more on test parameters, see the section called Test Parameters.
You may pass parameters to the test itself by appending them to the test name after a comma, like so:
verify config::name,param1
You can pass any number of parameters to the test itself, this way:
verify config::name,param1,param2,param3,param4
These parameters need not follow any particular syntax, so long as spaces and other special characters are properly escaped. You may parse these arguments as you see fit in the pre_build()/build()/post_build() and pre_run()/run()/post_run() subroutines in build_test.pm and run_test.pm. This makes them most useful for directly configuring test build and run behavior from the command line.
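As a sketch only: how the raw parameter strings reach your subroutines depends on your implementation, so the calling convention and the timeout parameter below are both assumptions for illustration.

    sub pre_build {
        my (@params) = @_;                    # assumed calling convention
        for my $p (@params) {
            if ($p =~ /^timeout=(\d+)$/) {    # "timeout" is an invented parameter
                $build_test::timeout = $1;    # stash the value for build() to use
            }
        }
        return 0;
    }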
If you use the --parallel command-line option, you can specify the maximum number of tests allowed to run at once. This causes Verify to run each test in its own separate process. At the same time, it keeps track of all currently running tests, logging test results and dispatching more as tests complete, until all tests have run. For example:

verify --parallel 2 config1::test1 config2::test2 config3::test3

This executes all of the tests listed, while allowing a maximum of 2 tests to run in parallel. The Verify script first runs config1::test1 and config2::test2 in their own individual processes, which are monitored by the main process. Once one of them completes, config3::test3 is dispatched. Once all tests complete, error status is logged and the script exits. This means that when running in parallel mode with N available parallel slots, you will at any given time have no more than N+1 processes executing (the N test processes plus the main monitoring process), and no more than N tests running simultaneously. Test dispatch is first-come-first-served, in the order listed on the command line.
Once all tests complete, Verify exits with a status code indicating pass or failure. A code of zero means 'pass'; a non-zero code means 'fail'. The actual fail codes are determined by the user-implemented build_test.pm and run_test.pm files. The final error status is a bitwise OR of these error codes, and the script exits indicating failure if this final status is non-zero.
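In other words (a minimal sketch; the variable names and sample codes are invented), the final status accumulates like this:

    my @test_error_codes = (0, 2, 0);           # e.g. codes collected from each test
    my $final_status = 0;
    $final_status |= $_ for @test_error_codes;  # bitwise OR of all error codes
    exit $final_status;                         # zero = pass, non-zero = failure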
The files required when writing a test are:
Your test source
testfile.test
These files are assumed to live in $PRJ_HOME/verification/tests by default. This can be redefined, however; see the subsection labelled Installation and Configuration for more information.
The testfile.test file is a simple text file containing the test definition. It recognizes the following keywords: test...endtest, description, config, params, build.args, run.args, and define.
NOTE: You do not have to have an individual *.test file per test, so long as the test's definition exists in a *.test file located somewhere Verify can find it.
test...endtest
    All test definitions are enclosed in a test...endtest block (henceforth referred to as "test blocks"). These test blocks are also used to define the name of the test. The syntax is:

        test: test_name
            # Test info goes here
        endtest

    The test_name field is required.

description
    A string containing a brief description of the test. This is a required field.

config
    This corresponds to the testbench configuration associated with this test. This is a required field.

build.args
    Contains a list of arguments to your build tool, to be passed directly on the command line. These arguments will not be parsed in any way, so be sure that they are well-formed!

run.args
    Contains a list of arguments to your test binary, to be passed directly on the command line. These arguments will not be parsed in any way, so be sure that they are well-formed!

params
    Contains a list of parameters to always use, in addition to whatever is passed on the command line as comma-separated test parameters.

define
    (Also: define build and define run.) You can use this to specify custom parameters for the test, which can be passed on the command line by appending a comma after the test name, followed by a comma-separated list of parameters. You can also list them in the test file using the params line.

    A define line in a test file looks like this:

        define option1=+option1+1

    If you specify option1 as a parameter when you invoke the test, +option1+1 will be appended to the build and run tools' invocations via the build.args and run.args fields. Here's an example of how you invoke the test with this newly defined parameter:

        verify.pl config::test,option1

    You can also create a define and specify that it should apply only to the build tool or only to the run tool, like this:

        define build option2=+option2+1
        define run option2=+option2+1

    When using this method, +option2+1 will be applied only to its corresponding tool when the test is invoked with the option2 parameter.

    You can also define parameters that accept strings. This allows you to change build and run step behavior based on arbitrary values passed in when invoking the test. To do this, simply define your custom parameter as before, and use $$ to represent the string to replace. For example, this defines a custom parameter that lets you set a plusarg to an arbitrary value:

        define option3=+option3+$$

    Now you can invoke the test and set the plusarg +option3 to any number or string you'd like. To do this, invoke the test with the parameter as such:

        verify.pl config::test,option3=15

    This will pass +option3+15 to your build and run tools as command-line arguments, setting the +option3 plusarg to 15.

    If you want a test parameter to pass $$ through without substitution, simply escape it with a backslash (\):

        define option4=+option4+\$$

    When that backslash is present, the $$ will not be substituted, and the following will be passed to your build and run tools:

        +option4+$$
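Putting the pieces together, a complete test definition might look like the following. This is a hypothetical example: the names and argument values are invented, and the colon-separated layout of the fields is an assumption modeled on the test: line above.

    # A hypothetical test definition; all values are invented for illustration
    test: example_test
        description: A brief smoke test of the example testbench
        config: example_config
        params: option1
        build.args: -some_build_flag
        run.args: +some_run_plusarg
        define option1=+option1+1
    endtest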
NOTE: Your test source contains the code for your actual test. It may involve one or more files, depending on your verification methods and tool environment. Since the build_test.pm and run_test.pm files are user-implemented, identifying the test source may be done using values from one or more of these fields. Therefore, one must be conscious of the file and test naming conventions used.
v2.3.1, July 19, 2013
Bugfixes:
- Reimplemented the check for optional parameters in TestIndex so that they no longer trigger false positives.
- Disabled warning messages about $PRJ_HOME being unset in the environment when simply getting help or version information.
Enhancements:
- Put the tool under GPLv2 and added copyright notices to the files.
v2.3.0, July 12, 2013
Bugfixes:
- Fixed some bugs in TestIndexCSV where a new testfile added to the file tree would not be properly indexed.
Enhancements:
- Refactored the test file parsing code, consolidating it into a module called TestFileParser.pm. This module simplifies test file reading and returns test file instructions in a standard format. TestIndex and TestIndexCSV now use this module for all test file parsing.
- Aside from moving all test file parsing code to the TestFileParser module, did a lot of clean-up of the surrounding indexing code. Added some debug switches (accessible in the source code) to help with SQL statement debugging.
v2.2.2, June 17, 2013
Features:
- Added a new module that uses CSV files and SQL queries for test indexing. It supports indexing additional metadata alongside the test name and file (for instance, line numbers). This should be more useful for indexing tests kept in a few very large .test files.
v2.2.1, June 13, 2013
Enhancements:
- Switched from using a flat, sequential text file for the index to using the Berkeley DB_File format with a BTree container structure. This should allow indexes to scale to larger sizes without introducing a significant performance penalty.
v2.2.0, May 13, 2013
NOTE:
- With this release, I'm migrating the tool to the GitHub repository.
Bugfixes:
- Moved the check for whether $PRJ_HOME is set from compile time to run time, so you don't have to set it just to view the help documentation.
v2.1.1, April 25, 2013
Bugfixes:
- Fixed a bug where test parameters that pass a value to build and run might lose the value, especially when specified using the params field in the test file.
Features:
- Finally added a --version switch with version information.
Enhancements:
- For log files, changed from using hard links and relative paths to symbolic links and absolute paths.
- If a log file from a previous run is found, it is now backed up to verify.old.log before being overwritten.
v2.1, February 15, 2013
Bugfixes:
- Fixed a bug where specifying a test that doesn't exist in the index would cause the reindexing process to delete the entire index and rebuild it from scratch every time.
- Fixed a bug where, when running a list of tests one after the other, the script would fail when trying to create log files for the second test in the list.
- When running in parallel mode, if the build step failed for a test, the tool would still attempt to continue running that test, instead of properly exiting and continuing to the next test in the list.
- Pass/fail statuses reported when running with the -n switch contradicted each other (logging FAIL while the logfile is named pass*.log). Fixed so that it is now reported as a FAIL throughout the tool output.
- When run with certain command-line switches that cause the tool to exit immediately (for instance, --help or --print), it would never generate the verify_status.env log of environment variables. Now it does.
Features:
- Added the repetition operator so that you can tell the tool to repeat tests an arbitrary number of times without typing the full test name more than once.
- Added the -r/--report switch to enable saving a pass/fail report for the list of tests at run time.
- Added the --man2html switch as a more readable option for the manpage help.
Enhancements:
- Changed the run_command() implementation to use Perl IPC facilities instead of system() with tee appended to the command. This makes the logging mechanism a bit more robust.
- Overhauled the formatting of the manpage help, for readability and to make the generated HTML help nicer as well.
v2.0, December 18, 2012
Features:
- Overhauled the test parsing code, now implemented in a module called TestIndexParser.pm. The new parser updates the test definition file syntax and greatly increases its flexibility and scalability. Multiple test definitions are now supported within a single *.test file, through the use of test blocks. Test files may also exist anywhere underneath the location specified by the testsdir configuration option, instead of being confined to testsdir/config/testname.test. To reduce filesystem access time, the tool also transparently maintains a test index, allowing quick, direct access to the associated test file when running a test.
v1.3, December 13, 2012
Enhancements:
- Extracted the test parsing code into a separate module called TestParser.pm and modified the tool to call it. This is in preparation for a future revision allowing greater enhancement of the test parsing and management system.
v1.2.6, November 29, 2012
Bugfixes:
- Fixed a bug where, if the tool failed while trying to include the build_test.pm and run_test.pm modules, it would fail silently and cause the verify tool to fail. Now the script dies with a specific, more descriptive error.
Features:
- Extended the ability to configure the Verify tool using verify.ini. The tool now looks for its configuration in $PRJ_HOME, and supports configuring where to find build_test.pm and run_test.pm. Instead of loading these files at compile time, it loads them at run time, which makes it possible to configure where they are loaded from based on what is read in from the file.
- Extended custom test parameters to allow passing in values. Any instance of double dollar signs ($$) is now replaced with whatever you put to the right of the equals sign when passing the parameter on the command line.
Enhancements:
- Updated the documentation to include a section describing how to install and configure the tool, as well as some basic information about implementing the build and run flows.
v1.2.5, November 20, 2012
Bugfixes:
- Fixed enforcement of required fields in test files. The script now checks that they exist and errors out if any are missing.
Features:
- Added the ability to define test-specific parameter options from within the *.test files. You can list these on the command line as comma-separated strings, or include them in the params line as default parameters in the *.test files themselves.
Enhancements:
- Since --debug and --interactive are supposed to set flags that are acted upon by the user code in build_test.pm and run_test.pm, the script now sets dedicated global flags, so the user doesn't have to use %options.
- Following the previous enhancement, %options is now private to the main script file instead of global. This protects the script from being put into undefined configuration states by errant user code.
v1.2.1, October 26, 2012
Bugfixes:
- Fixed a bug that caused zero-length files to accumulate when doing multiple runs over the same directory. Not a critical bug, but it would eventually run into the filesystem limit if left unchecked.
Features:
- Added the ability to specify the directory where test files are stored using a configuration file.
Enhancements:
- Updated script logging for readability, and to reflect the paths where the script and tests are stored.
v1.2, October 18, 2012
Bugfixes:
- Fixed a bug where, if the build step failed, log files weren't cleanly finished and links to the log files weren't created.
- Removed some stale code for debug mode, since it is no longer handled in the main script.
- Fixed a bug exposed when running a list of tests, where if one or more tests failed while the last test in the list passed, the script would return with exit code '0' (pass). This caused the script to effectively forget the error status of previous tests and incorrectly report the test results.
- Added enforcement making the -i/--interactive command-line option override use of -d/--debug, so that they are mutually exclusive.
Features:
- Enhanced comment parsing in *.test files to allow you to include # characters by escaping them C-style (\#).
Enhancements:
- Updated the help documentation with more descriptions on running tests and more organized formatting in the VERSIONS section.
v1.1, August 28, 2012
Bugfixes:
- Fixed a bug preventing errors from being logged to the error logfile.
Features:
- Added additional test fields to allow individual customization of the build and run steps.
- Added the ability to hard-code test parameters into test files as an alternative to passing them on the command line.
- Added version information to the manpage-style documentation.
v1.0, August 17, 2012
Initial revision.
Benjamin D. Richards
This tool is licensed under the GNU GENERAL PUBLIC LICENSE Version 2 (GPLv2). See COPYRIGHT for its terms.