# ahkab.testing

A straightforward framework to build tests to ensure no regressions occur during development.

Two classes for describing tests are defined in this module: NetlistTest and APITest.

Every test, no matter which class is used internally, is univocally identified by an alphanumeric id, which will be referred to as <test_id> in the following.

## Directory structure

The tests are placed in tests/, under a directory with the same id as the test, i.e.:

tests/<test_id>/


## Running tests

The test must be run with the working directory set to one of the following:

• The ahkab repository root,
• tests/,
• tests/<test_id>.

This is necessary for the framework to locate the reference files.

More specifically a test can either be run manually through the Python interpreter:

python tests/<test_id>/test_<test_id>.py


or with the nose testing package:

nosetests tests/<test_id>/test_<test_id>.py


To run the whole test suite, issue:

nosetests tests/*/*.py


Please refer to the nose documentation for more information on the nosetests command.

## Running your tests for the first time

The first time you run a test you defined yourself, no reference data is available to check the results against, so there is no way to decide whether the test passed or failed.

In this case, if you call nose, the test will (expectedly) fail.

Please run the test manually (see above) and the test framework will generate the reference data for you.

Please check the generated reference data carefully! Wrong reference data defeats the whole point of running tests!

## Overview of a typical test based on NetlistTest

Each test is composed of multiple files.

### Required files

The main directory must contain:

• <test_id>.ini, an INI configuration file containing the details of the test,
• test_<test_id>.py, the script executing the test,
• <test_id>.ckt, the main netlist file to be run,
• the reference data files for checking the pass/fail status of the test. These can be generated automatically, as shown below.

With the exception of the netlist file, which is free for the test writer to define, and the data files, which clearly depend on the test at hand, the other files have a predefined structure which will be examined in more detail in the next sections.

#### Configuration file

A few rules apply to the entries in the configuration file.

They are as follows:

• The file name must be <test_id>.ini,
• It must be located under tests/<test_id>/,
• It must have a [test] section, containing the following entries:
• name, set to the <test_id>, for error-checking,
• netlist, set to the netlist filename, <test_id>.ckt, prefixed with the netlist path relative to tests/<test_id>/ (most of the time that means just <test_id>.ckt),
• type, a comma-separated list of analyses that will be executed during the test. Values may be op, dc, tran, symbolic... and so on.
• One entry <analysis>_ref for each of the analyses listed in the type entry above. The value is recommended to be set to <test_id>-ref.<analysis> or <test_id>-ref.<analysis>.pickle, if you prefer to save data in Python’s pickle format. Notice only trusted pickle files should ever be loaded.
• skip-on-travis, set to either 0 or 1, to flag whether this test should be run on Travis-CI or not. Torture tests, tests needing lots of CPU or memory, and long-running tests in general should be disabled on Travis-CI so as not to exceed:
• a total build time of 50 minutes,
• a maximum of 10 minutes without any stdout activity.
• skip-on-pypy, set to either 0 or 1, to flag whether the test should be skipped when running on the PyPy Python implementation. In general, as PyPy supports neither scipy nor matplotlib, only symbolic-oriented tests make sense with PyPy (where it really excels!).
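As an aside, the two skip flags map naturally onto runtime checks: Travis-CI sets the TRAVIS environment variable, and platform.python_implementation() reports 'PyPy' on PyPy. The helper below is a hypothetical illustration, not part of ahkab:

```python
import os
import platform

def should_skip(skip_on_travis, skip_on_pypy):
    """Return True if the current environment matches an active skip flag."""
    on_travis = os.environ.get('TRAVIS') == 'true'  # set by Travis-CI builds
    on_pypy = platform.python_implementation() == 'PyPy'
    return (skip_on_travis and on_travis) or (skip_on_pypy and on_pypy)
```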

The contents of an example configuration file, rtest1.ini, follow.

[test]
name = rtest1
netlist = rtest1.ckt
type = dc, op
dc_ref = rtest1-ref.dc
op_ref = rtest1-ref.op
skip-on-travis = 0
skip-on-pypy = 1


#### Script file

The test script file is where most of the action takes place and where the highest amount of flexibility is available.

That said, the ahkab testing framework was designed to make for extremely simple and straight-forward test scripts.

It is probably easier to introduce writing the scripts with an example.

Below is a typical script file.

from ahkab.testing import NetlistTest
from ahkab import options

# add this to prevent interactive plot directives
# in the netlist from halting the test waiting for
# user input
options.plotting_show_plots = False

def myoptions():
    # optionally, set non-standard options
    sim_opts = {}
    sim_opts.update({'gmin': 1e-9})
    sim_opts.update({'iea': 1e-3})
    sim_opts.update({'transient_max_nr_iter': 200})
    return sim_opts

def test():
    # this requires a netlist mytest.ckt
    # and a configuration file mytest.ini
    nt = NetlistTest('mytest', sim_opts=myoptions())
    nt.setUp()
    nt.test()
    nt.tearDown()

# It is recommended to set the docstring to a meaningful value
test.__doc__ = "My test description, printed out by nose"

if __name__ == '__main__':
    nt = NetlistTest('mytest', sim_opts=myoptions())
    nt.setUp()
    nt.test()


Notice how a function test() is defined, as that will be run by nose, and a '__main__' block is defined too, to allow running the script from the command line.

It is slightly non-standard, as NetlistTest.setUp() and NetlistTest.tearDown() are called inside test(), but this was found to be an acceptable compromise between complexity and following standard practices.

The script is meant to be run from the command line in case a regression is detected by nose, possibly with the aid of a debugger. As such, the NetlistTest.tearDown() function is not executed in the '__main__' block, so that the test outputs are preserved for inspection.

That said, the example file should be easy to understand and in most cases a simple:

:%s/mytest/<test_id>/g


in VIM will suffice to generate your own script file. Just remember to save it as test_<test_id>.py.
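The same substitution can also be scripted in Python, if you prefer; a minimal sketch (the helper name is made up):

```python
def make_test_script(template_text, test_id):
    """Return the template with every occurrence of 'mytest' replaced
    by the new test id."""
    return template_text.replace('mytest', test_id)

# usage sketch:
# with open('tests/mytest/test_mytest.py') as fp:
#     text = fp.read()
# with open('tests/%s/test_%s.py' % (test_id, test_id), 'w') as fp:
#     fp.write(make_test_script(text, test_id))
```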

## Overview of a typical test based on APITest

### Required files

The main directory must contain:

• test_<test_id>.py, the script executing the test,
• the reference data files for checking the pass/fail status of the test. These can be automatically generated, as it will be shown below.

#### Script file

Again, it is probably easier to introduce the API test scripts with an example.

Below is a typical test script file:

import ahkab
from ahkab import ahkab, circuit, printing, devices, testing

cli = False

def test():
    """Test docstring to be printed out by nose"""

    mycircuit = circuit.Circuit(title="Butterworth Example circuit", filename=None)

    ## define nodes
    gnd = mycircuit.get_ground_node()
    n1 = mycircuit.create_node('n1')
    n2 = mycircuit.create_node('n2')
    # ...

    # ...

    if cli:
        print(mycircuit)

    ## define analyses
    op_analysis = ahkab.new_op(outfile='<test_id>')
    ac_analysis = ahkab.new_ac(start=1e3, stop=1e5, points=100, outfile='<test_id>')
    # ...

    ## create a testbench
    testbench = testing.APITest('<test_id>', mycircuit,
                                [op_analysis, ac_analysis],
                                skip_on_travis=True, skip_on_pypy=True)

    ## setup and test
    testbench.setUp()
    testbench.test()

    ## this section is recommended. If something goes wrong, you may call the
    ## test from the cli and the plots to video in the following will allow
    ## for quick inspection
    if cli:
        ## re-run the test to grab the results
        r = ahkab.run(mycircuit, an_list=[op_analysis, ac_analysis])
        ## plot and save interesting data
        fig = plt.figure()
        plt.title(mycircuit.title + " - TRAN Simulation")
        plt.plot(r['tran']['T'], r['tran']['VN1'], label="Input voltage")
        plt.hold(True)
        plt.plot(r['tran']['T'], r['tran']['VN4'], label="output voltage")
        plt.legend()
        plt.hold(False)
        plt.grid(True)
        plt.ylabel('Step response')
        plt.xlabel('Time [s]')
        fig.savefig('tran_plot.png')
    else:
        ## don't forget to tearDown the testbench when under nose!
        testbench.tearDown()

if __name__ == '__main__':
    import pylab as plt
    cli = True
    test()
    plt.show()


Once again, a function test() is defined, as that will be the entry point of nose, and a '__main__' block is defined as well, to allow running the script from the command line.

Inside test(), the circuit to be tested is defined, accessing the ahkab module directly, to set up elements, sources and analyses. Directly calling ahkab.run() is not necessary, APITest.test() will take care of that for you.

Notice how APITest.setUp() and APITest.tearDown() are called inside test(), as in the previous case.

The script is meant to be run from the command line in case a regression is detected by nose, possibly with the aid of a debugger. As such, the APITest.tearDown() function is not executed in the '__main__' block, so that the test outputs are preserved for inspection.

Additionally, plotting is performed if the test is directly run from the command line.

In case non-standard simulation options are necessary, they can be set as in the previous example.

## Module reference

class APITest(test_id, circ, an_list, er=1e-06, ea=1e-09, sim_opts=None, skip_on_travis=False, skip_on_pypy=True)[source]

A class to run a supplied circuit and check the results against a pre-computed reference.

Parameters:

test_id : string
The test id.
circ : circuit instance
The circuit to be tested
an_list : list of dicts
A list of the analyses to be performed
er : float, optional
Allowed relative error (applies to numeric results only).
ea : float, optional
Allowed absolute error (applies to numeric results only).
sim_opts : dict, optional
A dictionary containing the options to be used for the test.
skip_on_travis : bool, optional
Should we skip the test on Travis? Set to True for long tests. Defaults to False.
skip_on_pypy : bool, optional
Should we skip the test on PyPy? Set to True for tests requiring libraries not supported by PyPy (e.g. scipy, matplotlib). Defaults to True, as most numeric tests will fail.
setUp()[source]

Set up the testbench

tearDown()[source]

Remove temporary files - if needed.

class NetlistTest(test_id, er=1e-06, ea=1e-09, sim_opts=None, verbose=6)[source]

A class to run a netlist file and check the results against a pre-computed reference.

Parameters:

test_id : string
The test id. For a netlist named "rc_network.ckt", this is to be set to "rc_network".
er : float, optional
Allowed relative error (applies to numeric results only).
ea : float, optional
Allowed absolute error (applies to numeric results only).
sim_opts : dict, optional
A dictionary containing the options to be used for the test.
verbose : int
The verbosity level to be used in the test. From 0 (silent) to 6 (verbose). Notice higher verbosity values usually result in higher coverage. Defaults to 6.
setUp()[source]

Set up the testbench.

tearDown()[source]

Remove temporary files - if needed.

ok(x, ref, rtol, atol, msg)[source]
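While the exact semantics of ok() are defined in the source, the er/ea tolerances described above suggest a combined relative-plus-absolute comparison of the familiar form |x - ref| <= ea + er*|ref|. A minimal illustrative sketch, not ahkab's actual implementation:

```python
def close_enough(x, ref, er=1e-6, ea=1e-9):
    """True if x matches ref within relative tolerance er
    plus absolute tolerance ea."""
    return abs(x - ref) <= ea + er * abs(ref)
```

The absolute term ea dominates for references near zero, where a purely relative check would be meaninglessly strict.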