Monthly Archives: April 2011

Automatically List a Directory’s Contents After Changing Dir

The Problem

After the cd command, the next command is almost always ls, so we want to combine the two: automatically issue the ls command right after the cd command.

The Solution

In bash, add the following line in either ~/.bash_profile or ~/.bashrc:

function cd() { builtin cd "${@:-$HOME}" && ls -l; }

If you are using csh or tcsh, add the following line to .cshrc:

alias cd 'cd \!*; ls -l'

Now, whenever we type a cd command, we not only change the working directory but also list the files at the new location. I would like to thank Matt Jenkins for helping me out with the csh part.

tcltest Part 8: Recursive Test Suites

The Problem

We want to organize tests in a hierarchy of directories and execute them all at once.

The Solution

We can organize tests in subdirectories of arbitrary depth, but in order for tcltest to include a directory, that directory must contain a file named all.tcl.

In this example, we create only two subdirectories, but in practice the number of directories and the depth can be arbitrary. Let’s say under the tcltest_part8 directory we have two subdirectories: suiteA and suiteB. Each of these directories contains an all.tcl file and a number of *.test files:

  • tcltest_part8 (dir)
    • all.tcl
    • suiteA (dir)
      • all.tcl
      • a.test
    • suiteB (dir)
      • all.tcl
      • b.test

The contents of all.tcl are the same for each directory, unless we have reasons to customize them:

package require tcltest
tcltest::configure -testdir [file dirname [file normalize [info script]]]
eval tcltest::configure $argv
tcltest::runAllTests

Once we have the directories and their files in place, we can execute all tests by issuing the following command in the tcltest_part8 directory:

tclsh all.tcl

Because all.tcl processes command-line parameters, we can pass any parameter to tcltest::configure. Below are a couple of examples of command-line usage:

# Skip tests in directory suiteB
tclsh all.tcl -asidefromdir suiteB

# Skip tests in certain files
tclsh all.tcl -notfile undone_*.test


The documentation for tcltest is confusing, but if we read it carefully, the following requirements must be met to set up hierarchical test directories:

  1. Each directory, including the root directory, must contain a file named all.tcl. Without this file, tcltest will skip the tests in that directory.
  2. If a directory does not contain all.tcl, tcltest ignores only the tests within that directory; it might still include tests in its subdirectories. For example, if directory suiteB does not have all.tcl, tcltest will ignore tests in that directory but might still include tests in suiteB’s subdirectories.
  3. Consequently, if a directory does not contain any tests, it is not required to have all.tcl.

If for some reason the tests in a directory are not running, make sure that the directory has an all.tcl whose contents include at least the four lines in the sample all.tcl shown above.
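Since a directory without all.tcl is skipped silently, a small checker script can save debugging time. Below is a hypothetical sketch in Python (the helper name find_missing_all_tcl is my own invention, not part of tcltest) that walks a test tree and flags directories holding .test files but no all.tcl driver:

```python
import os

def find_missing_all_tcl(root):
    """Return directories under root that contain .test files but
    no all.tcl driver; tcltest would silently skip tests in these."""
    missing = []
    for dirpath, dirnames, filenames in os.walk(root):
        # A directory only needs all.tcl if it actually holds tests.
        has_tests = any(name.endswith(".test") for name in filenames)
        if has_tests and "all.tcl" not in filenames:
            missing.append(dirpath)
    return sorted(missing)
```

Run over the tcltest_part8 tree above, it should return an empty list; if suiteB were missing its all.tcl, suiteB’s path would be reported.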


The key to executing tests in a hierarchy of directories is the presence of the file all.tcl. We can use the command line parameters to tailor the test run to include or exclude tests as discussed in earlier installments. At this point, we know quite a bit about tcltest. However, there are still many other aspects of tcltest which we have not explored. In our next installment, we will discuss some of the more useful commands.

tcltest Part 7 – Inexact Result Matching

The Problem

We want to test the result of a function, but we don’t know the exact value.

The Solution

In cases where we have only a vague idea of what the result or output should look like, we can use the test command’s -match option. For example:

# fuzzy.test

package require tcltest
namespace import ::tcltest::*

test divide_by_zero {} -body {

    expr {5 / 0}

} -returnCodes {error} -match regexp -result {[Dd]ivide by zero}

test open_non_existing_file {} -body {

   open zzzzz r

} -returnCodes {error} -match glob -result {*no such file or directory}



  • The divide_by_zero test is the same test we wrote in part 6.
  • Suppose we don’t know whether the error message (result) is “Divide by zero” (with a capital ‘D’) or “divide by zero”. To deal with this situation, we use the -match regexp option and supply a regular expression as the -result.
  • Similarly, the open_non_existing_file test uses the -match glob option, whose pattern syntax follows the glob command. For more information, please consult Tcl’s glob documentation.
  • Besides regexp and glob, the -match option also accepts a third mode: exact, which requires the exact output. Since exact is the default, we normally do not specify it.
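The three -match modes map onto familiar string-matching semantics: exact is plain string equality, glob is a shell-style wildcard match over the whole string, and regexp may match anywhere in the string. As an illustration only (this is a Python sketch of the semantics, not tcltest itself):

```python
import fnmatch
import re

def matches(mode, pattern, actual):
    """Mimic the semantics of tcltest's three -match modes."""
    if mode == "exact":
        return actual == pattern
    if mode == "glob":
        # Glob patterns must cover the whole string.
        return fnmatch.fnmatchcase(actual, pattern)
    if mode == "regexp":
        # Regular expressions may match any part of the string.
        return re.search(pattern, actual) is not None
    raise ValueError("unknown match mode: " + mode)
```

For example, matches("regexp", r"[Dd]ivide by zero", "Divide by zero") and matches("glob", "*no such file or directory", "couldn't open \"zzzzz\": no such file or directory") are both true, while the corresponding exact comparisons would fail.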

tcltest Part 6 – Test for Error Conditions

The Problem

We want to test functions which might return an error.

The Solution

tcltest provides a way to test functions which might return an error: the -returnCodes flag. Before we go into the details of -returnCodes, let’s take a look at the behavior when a function returns an error:

$ tclsh
% expr 5 / 0
divide by zero
% open zzzzz r
couldn't open "zzzzz": no such file or directory

We start an interactive Tcl session, then trigger two different kinds of errors, which we will capture in our tests:

# negative.test

package require tcltest
namespace import ::tcltest::*

test divide_by_zero {} -body {

    expr {5 / 0}

} -returnCodes {error} -result "divide by zero"

test open_non_existing_file {} -body {

   open zzzzz r

} -returnCodes {error} -result {couldn't open "zzzzz": no such file or directory}



  • divide_by_zero: we know this expression will trigger an error, so we use the -returnCodes flag to specify this behavior. The -result flag in this case specifies the expected error message.
  • Likewise, open_non_existing_file tests the situation where we open a non-existent file for reading.


Testing error conditions in tcltest is straightforward. All we need to know is when the error condition occurs and what its error message is. In the next installment, we will discuss situations where we have to test functions whose exact output we don’t know.

tcltest part 5: Capture the Standard Output

The Problem

Not all functions return a value. Some of them produce output to the standard output (stdout) and we want to test that.

The Solution

We can capture the stdout for testing using the -output option to the test command. Furthermore, we can also use the -match option to deal with cases when we cannot test for exact output. In this installment, we create a directory called tcltest_part5, which contains 3 files: hello.tcl (the file under test), all.tcl (the test driver), and stdout.test (the test cases).

# hello.tcl
proc greeting {name} {
    puts "Hello, $name"
}

proc showWeekDay {} {
    set now [clock seconds]
    puts [clock format $now -format "Today is %A"]
}

File hello.tcl contains two functions, greeting and showWeekDay. These are the functions we want to test.

# all.tcl
package require tcltest
namespace import ::tcltest::*
configure -verbose {skip start}
eval ::tcltest::configure $::argv
runAllTests

File all.tcl is unchanged from the previous installment.

# stdout.test
source hello.tcl
package require tcltest
namespace import ::tcltest::*
test simple_capture {
    Capture the output of the greeting command
} -body {
    greeting world!
} -output "Hello, world!\n"

test regexp_capture {
    Capture the output, which can be Monday ... Sunday; we want to be
    able to match any of those values.
} -body {
    showWeekDay
} -match regexp -output "Today is (Mon|Tues|Wednes|Thurs|Fri|Satur|Sun)day"

File stdout.test contains two tests: one for greeting and one for showWeekDay. Let’s take a look at the first test: simple_capture.

  • The test declaration and the -body block should be familiar by now.
  • The body calls the greeting function, which prints to stdout.
  • The -output flag specifies what we expect the output to be. Note that the expected output includes the trailing newline to match puts’ behavior.

The second test is more interesting. It deals with cases where we don’t know the exact output. Here the output can only be one of seven possibilities: Monday through Sunday. In practice, however, the possibilities are sometimes infinite, and we need a way to deal with those situations. The test command provides the -match option to allow testing the output in a fuzzy way.

  • The body invokes the showWeekDay command, which prints to stdout.
  • The -output flag states the expected output as a regular expression, as opposed to the default exact match.


tcltest makes it easy to capture and test the standard output. tcltest can also test the standard error with the -errorOutput option. So far, we have been dealing with positive tests. In the next installment, we will discuss negative testing strategies, such as testing situations in which a function is expected to fail.

Determine the Last Day of a Month

The Problem

Given a year and a month, I want to determine the last day of that month. For example, if the year is 2004 and the month is 2, then the last day is 29th because of leap year.

The Solution

Calculating the last day of a month is not hard, but it is complicated by leap years: if the year is a leap year, February’s last day is the 29th instead of the usual 28th. My algorithm sidesteps the leap-year check entirely: take the first day of the next month and count backward by one day. Here is the code.

#!/usr/bin/env python

import datetime

def get_last_day_of_the_month(y, m):
    """Return an integer representing the last day of the month,
    given a year and a month."""

    # Algorithm: take the first day of the next month, then count
    # backward one day; that will be the last day of the given month.
    # The advantage of this algorithm is we don't have to determine
    # whether the year is a leap year.

    m += 1
    if m == 13:
        m = 1
        y += 1

    first_of_next_month =, m, 1)
    last_of_this_month = first_of_next_month + datetime.timedelta(-1)
    return


The code above is easy enough to understand. The datetime.timedelta(-1) expression basically says, “subtract one day.”

tcltest Part 4: Constraints

The Problem

I want to be able to put constraints on tests such as:

  • Platform constraints: For example, on the Unix platform, I don’t want to run Windows-specific tests.
  • Known bugs: I want to temporarily disable valid tests that are failing until the developers fix the code.
  • Crashed tests: I want to temporarily disable tests that crashed, until I have time to investigate.
  • Custom constraints: I want to define my own constraints, such as tests which are time-consuming, or tests that run only on a specific host (presumably because that host has some special setup).

The Solution

In the previous installment, I discussed ways to include or exclude tests based on file names or test names. In this installment, I will discuss the use of the -constraints flag to filter tests. For the purposes of this discussion, we create tests which do not do anything useful, so we can concentrate on the constraints instead. For demonstration purposes, I created a directory called tcltest_part4 with two files: all.tcl and constraints.test.

# all.tcl
package require tcltest
namespace import ::tcltest::*

configure -verbose {skip start}
eval ::tcltest::configure $::argv
runAllTests

# constraints.test

package require tcltest
namespace import ::tcltest::*

test runOnUnix {}           -constraints unix -body {}
test runOnWindows {}        -constraints win -body {}
test runOnMac {}            -constraints mac -body {}
test runNotOnUnix {}        -constraints tempNotUnix -body {}
test knownBug {}            -constraints knownBug -body {}
test customConstraints {}   -constraints timeConsuming -body {}
test multipleConstraints {} -constraints {timeConsuming unix} -body {}
test runOnlyOnMyLaptop {}   -constraints {[info hostname] == "haiv-mac.local"} -body {}
test runOnlyOnMyLaptop {}   -constraints {[info hostname] != "haiv-mac.local"} -body {}



  • constraints.test introduces a number of tests whose bodies are empty so we can concentrate on the -constraints flag.
  • runOnUnix, runOnWindows, runOnMac: these are platform-specific tests. Note that I am using the term “platform” instead of “operating system”. The platform is determined by the $tcl_platform(platform) variable. On Mac OS X, the platform is unix, not mac. My guess is that the mac platform refers to Mac OS 9 or earlier.
  • runNotOnUnix: this test should not run on the unix platform.
  • knownBug is a predefined constraint which disables the test. One common use for this constraint is to disable valid tests which are failing because of bugs in the code. Once the developers fix their code and the test passes again, the test engineer can remove this constraint, thus re-enabling the test. Many people will point out that this is a way to tamper with test statistics by disabling failing tests. While that is true, the constraint is still useful in cases where the bug’s severity and priority are low but the cost of fixing it is high, so the developers want to postpone the fix. Meanwhile, it does not make sense to watch the test keep failing.
  • customConstraints introduces a custom constraint: timeConsuming is a name I made up; it can be anything. In practice, I often use this constraint to disable tests that take too long to run and are not required to run on a regular basis.
  • multipleConstraints shows that a test can have more than one constraint.
  • The two runOnlyOnMyLaptop tests are something different. Instead of tokens identifying test constraints, we introduce constraints in the form of expressions. tcltest evaluates these expressions, and if they are true, the test runs. The first defines a test which runs only on my laptop (perhaps because it meets a certain setup condition); the second is its opposite.

Now that we have discussed the constraints, let’s see the run output on my Mac OS X laptop:

$ tclsh all.tcl 
Tests running in interp:  /usr/bin/tclsh
Tests located in:  /Users/haiv/src/tcl/tcltest_part4
Tests running in:  /Users/haiv/src/tcl/tcltest_part4
Temporary files stored in /Users/haiv/src/tcl/tcltest_part4
Test files run in separate interpreters
Running tests that match:  *
Skipping test files that match:  l.*.test
Only running test files that match:  *.test
Tests began at Sun Apr 03 09:14:12 PDT 2011
---- runOnUnix start
++++ runOnWindows SKIPPED: win
++++ runOnMac SKIPPED: mac
++++ runNotOnUnix SKIPPED: tempNotUnix
++++ knownBug SKIPPED: knownBug
++++ customConstraints SKIPPED: timeConsuming
++++ multipleConstraints SKIPPED: timeConsuming
---- runOnlyOnMyLaptop start
++++ runOnlyOnMyLaptop SKIPPED: [info hostname] != "haiv-mac.local"

Tests ended at Sun Apr 03 09:14:12 PDT 2011
all.tcl:	Total	9	Passed	2	Skipped	7	Failed	0
Sourced 1 Test Files.
Number of tests skipped for each constraint:
	1	[info hostname] != "haiv-mac.local"
	1	knownBug
	1	mac
	1	tempNotUnix
	2	timeConsuming
	1	win

At the end of the output, tcltest lists 7 skipped tests and their reasons such as knownBug, tempNotUnix, or win (platform). What if I want to run the time-consuming tests? I can enable these timeConsuming tests by adding the -constraints flag to the command line:

$ tclsh all.tcl -constraints timeConsuming
... (irrelevant output snipped)
---- runOnUnix start
++++ runOnWindows SKIPPED: win
++++ runOnMac SKIPPED: mac
++++ runNotOnUnix SKIPPED: tempNotUnix
++++ knownBug SKIPPED: knownBug
---- customConstraints start
---- multipleConstraints start
---- runOnlyOnMyLaptop start
++++ runOnlyOnMyLaptop SKIPPED: [info hostname] != "haiv-mac.local"

Tests ended at Sun Apr 03 09:19:26 PDT 2011
all.tcl:	Total	9	Passed	4	Skipped	5	Failed	0
Sourced 1 Test Files.
Number of tests skipped for each constraint:
	1	[info hostname] != "haiv-mac.local"
	1	knownBug
	1	mac
	1	tempNotUnix
	1	win

Notice that this time around, the two timeConsuming tests are included in the run, reducing the number of skipped tests from 7 down to 5. However, there are times when I want to run only these two tests. I can accomplish this by adding the -limitconstraints flag:

$ tclsh all.tcl -constraints timeConsuming -limitconstraints true

I can also enable more than one constraint by grouping the constraints within single or double quotes:

$ tclsh all.tcl -constraints "timeConsuming tempNotUnix" -limitconstraints true

Predefined Constraints

The documentation for tcltest 2.2.5 lists the following predefined constraints. Please consult this manual for their meanings and usage.

  • Platforms: unix, win, mac, unixOrWin, macOrWin, macOrUnix, tempNotWin, tempNotMac
  • Operating systems: nt, 95, 98
  • Disabling crashed tests: unixCrash, winCrash, macCrash
  • Other constraints: emptyTest, knownBug, nonPortable, userInteraction, interactive, nonBlockFiles, asyncPipeClose, unixExecs, hasIsoLocale, root, notRoot, eformat, stdio, singleTestInterp

Custom Constraints

Below is a list of suggested custom constraints.

  • Time-consuming tests: timeConsuming
  • Run only on Mac OS X: {$tcl_platform(os) == "Darwin"}
  • Never on Sunday: {[clock format [clock seconds] -format "%a"] != "Sun"}
  • On the first day of each month: {[clock format [clock seconds] -format "%d"] == "01"}
  • Only for the i386 architecture: {$tcl_platform(machine) == "i386"}
  • For a specific user: {$tcl_platform(user) == "test1user"}
  • For a specific Tcl version or later: {[info tclversion] >= "8.5"}
  • Only when a specific function is found: {[info procs myFunction] == "myFunction"}

For constraints that are very complex, we can write a function which returns true (1) or false (0) and pass its result to the test’s -constraints flag:

proc myComplexConstraint {} {
    # ... code that returns 1 (true) or 0 (false)
}

test myTest {} -constraints [myComplexConstraint] -body { ... }

What’s Next?

So far, we have only tested functions that return values, such as sum and square. Sometimes we need to test functions which write to the standard output device. The next installment will discuss this feature of tcltest.

tcltest Part 3 – Include and Exclude Tests

The Problem

Sometimes I want to run just a subset of the tests. Some other times, I want to skip some specific tests.

The Solution

tcltest comes with a configure command which customizes many aspects of the test invocation, among them the ability to include or exclude tests based on file names or test names. Below is a list of useful configure commands and their shortcuts:

  • configure -file patternList
  • configure -notfile patternList
  • configure -match patternList
  • configure -skip patternList
  • matchFiles patternList = shortcut for configure -file
  • skipFiles patternList = shortcut for configure -notfile
  • match patternList = shortcut for configure -match
  • skip patternList = shortcut for configure -skip

Dynamically Configure Test Runs

Before I move on to explain the configure command, let’s modify the all.tcl main script to allow us to configure test runs from the command line. Here is the revised all.tcl:

package require tcltest
namespace import ::tcltest::*

if {$argc != 0} {
    foreach {action arg} $::argv {
        if {[string match -* $action]} {
            configure $action $arg
        } else {
            $action $arg
        }
    }
}

runAllTests
  • The script assumes all command-line parameters are meant for configuration.
  • The foreach loop assumes the command-line parameters come in pairs of action and argument.
  • If the action starts with a dash, I assume it is an option of the configure command, so I invoke configure accordingly.
  • Otherwise, the action must be a shortcut command, so I invoke that action directly.

Now that I have updated all.tcl, I am ready to have some fun with test filtering.

Run Only Selective Files

In this installment, I assume the same settings as in part 2, with the exception of all.tcl, which was modified as discussed above. That means our tests directory has two test files: square.test and sum.test. To run tests in one or more files:

tclsh all.tcl -file sum.test # Run only tests in sum.test
tclsh all.tcl matchFiles sum.test # Same as above

The above command will run only the tests in sum.test and ignore other test files. This filter is especially useful when you are developing tests and want to run only those you are working on. You can choose more than one file by using wildcards or by listing several patterns at once:

# Run all files whose names start with s
tclsh all.tcl -file 's*.test'

# List more than one file
tclsh all.tcl -file 'su*.test sq*.test'

Note that the configure command and its shortcuts will override previous settings. The following command will only run tests in sum.test, but not in square.test:

tclsh all.tcl -file square.test -file sum.test

Skipping Files

Here are some examples of skipping files:

# Run all files except sum.test
tclsh all.tcl -notfile sum.test

# Same as above
tclsh all.tcl skipFiles sum.test

# Skip all files whose names start with a or b
tclsh all.tcl -notfile 'a*.test b*.test'

Run Based on Test Names

While the previous commands filter based on file names, in this section and the next I will filter based on the names of the tests instead. These filters work regardless of which files the tests reside in.

# Run only tests whose names start with 'square_'
tclsh all.tcl -match 'square_*' 

# Same as above
tclsh all.tcl match 'square_*' 

# Only tests with Negative or Zero in their names
tclsh all.tcl -match '*Negative* *Zero*' 

# Tests whose names end with Zero
tclsh all.tcl -match '*Zero' 

Skip Tests Based on Test Names

While the previous section filtered tests in based on their names, this section filters them out instead.

# Skip tests that start with sum
tclsh all.tcl -skip 'sum*' 

# Skip tests that start with sum
tclsh all.tcl skip 'sum*' 

# Skip tests that contain either Positive or Negative in their names
tclsh all.tcl -skip '*Positive* *Negative*'

Combine the Filters

Finally, I can mix and match filters:

# Run tests in the file square.test, but skipping over those tests
# whose names contain 'Zero'
tclsh all.tcl -file square.test -skip '*Zero*'

# Skip the file sum.test, also skip tests with 'Negative' in their names
tclsh all.tcl -skip '*Negative*' -notfile sum.test

What’s Next?

So far, I have only touched a little on test filtering. In my next installment, I will discuss test constraints. Better yet, please subscribe to my blog to make sure you won’t miss my posts. For reference, please review my previous posts.