How to test shell scripts

Extreme hipster superheroes like me need tests for their shell. Here’s what’s available.

YOLO: No automated testing

Few shell scripts have any automated testing because shell programmers live life on the edge. Inevitably, this results in tedious manual ‘testing’. Loads of projects use this approach.

Here are some more. I separated them because they’re all shell profiles.

This is actually okay much of the time. The programs I reference above are reasonably complex, but shell scripts are often much simpler; shell is often convenient for small connections among programs and for simple configuration. If your shell scripts are short and easy to read, maybe you don’t need tests.

Posers: Automated commands with manual human review

You can easily generate a rough test suite by just saving the commands you used for manual debugging; this creates the illusion of living only once while actually living multiple times. Here are some examples.

These scripts just run a range of commands, and you look for weird things in the output. You can also write up the intended output for comparison.
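
A minimal sketch of this style, with an illustrative command and a written-up intended output; diff or a string comparison replaces the human squinting at the terminal:

```shell
#!/bin/sh
# Re-run the commands used during manual debugging and compare against
# the output we previously wrote down as correct. The command and the
# expected text here are placeholders.

actual=$(printf 'b\na\n' | sort)

expected='a
b'

if [ "$actual" = "$expected" ]; then
    echo 'ok: sort output matches'
else
    echo 'FAIL: sort output changed'
    printf '%s\n' "$actual"
fi
```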

Mainstream: Test cases are functions

This approach is somewhat standard in other languages. Write functions inside of files or classes, and run assertions within those functions. Failed assertions and other errors are caught and raised.

In Roundup, test cases are functions, and their return codes determine whether the tests pass. Shell already has a nice assertion function called test, so Roundup doesn't need to implement its own. It also helps you structure your tests; you can use the describe function to name your tests, and you can define before and after functions to be run before and after each test case, respectively. For an example of Roundup in action, check out spark.
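
Roundup ships its own runner, so you wouldn't write the loop below yourself; this sketch only imitates the shape of a Roundup-style file (describe, before, after, and it_-prefixed test functions), with a hand-rolled runner and invented test names standing in for the real tool.

```shell
#!/bin/sh
# Imitation of a Roundup-style test file. The describe/before/after/it_
# naming follows Roundup's convention; the runner loop at the bottom is
# a stand-in for what Roundup itself would do.

describe() { echo "$1"; }

before() { tmp=$(mktemp); }      # runs before each test case
after()  { rm -f "$tmp"; }       # runs after each test case

it_starts_each_test_empty() {
    test ! -s "$tmp"             # before() gave us a fresh, empty file
}

it_writes_to_the_temp_file() {
    echo hello > "$tmp"
    test "$(cat "$tmp")" = hello # the return code decides pass/fail
}

describe 'temp file handling'
for t in it_starts_each_test_empty it_writes_to_the_temp_file; do
    before
    if $t; then echo "  $t: pass"; else echo "  $t: FAIL"; fi
    after
done
```

Note that the tests themselves need nothing beyond the plain test builtin; the framework's job is just naming, ordering, and reporting.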

shunit is similar. One notable difference is that it defines its own assertion functions, like assertEquals and assertFalse. git-ftp uses it.

tf is also similar, but it is cool because it provides some special shell-style assertions (“matchers”) that are specified as shell comments. Rather than just testing status codes or stdout, you can also test environment characteristics, and you can test multiple properties of one command. rvm uses it.

There are some language-agnostic protocols with assertion libraries in multiple languages. The idea is that you can combine test results from several languages. I guess this is more of a big deal for shell than for other languages because shell is likely to be used for a small component of a project that mostly uses another language. WvTest and the Test Anything Protocol (the latter's site is down for me right now) are examples of that.

Even though all of these frameworks exist, artisanal test frameworks are often specially crafted for a specific project. This is the case for bash-toolbox and treegit.

Implementing your own framework like this is pretty simple; the main thing you need to know is that $? gives you the exit code of the previous command, so something like this will tell you whether that command passed:

test "$?" = '0'
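
In context, that check looks like this (the command under test and the messages are placeholders); note that $? must be captured immediately, since the very next command overwrites it:

```shell
#!/bin/sh
# $? holds the exit code of the most recent command.
# Save it right away if anything else runs before the check.
printf 'hello\n' | grep -q hello
status=$?
test "$status" = '0' && echo pass || echo "fail (exit code $status)"
```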

Ironic elegance: Design for the shell

Assertion libraries are common and reasonable in other languages, but I don’t think they work as well for shell. Shell uses a bizarre concept of input and output, so the sort of assertion functions that work in other languages don’t feel natural to me in shell.

In Urchin, test cases are executable files. A test passes if its exit code is 0. You can define setup and teardown procedures; these are also files. For an example of Urchin tests, check out nvm. (By the way, I wrote both Urchin and the nvm tests.)
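
Here is what one such test case can look like: an ordinary executable script that exits 0 on success. The file name and the behavior under test are invented; Urchin's job is just to run each file and read its exit code.

```shell
#!/bin/sh
# An Urchin-style test case: one executable file, saved as something
# like tests/head_takes_the_first_two_lines and marked chmod +x.
# The test passes iff this script exits 0.

lines=$(printf '1\n2\n3\n' | head -n 2 | wc -l)
test "$lines" -eq 2
```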

In cmdtest, one test case spans multiple files. Minimally, you provide the test script, but you can also provide files for the stdin, the intended stdout, the intended stderr, and the intended exit code. Like in Urchin, the setup and teardown procedures are files.
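
The sketch below simulates by hand what such a file-per-aspect runner does: feed the stdin file to the script, capture the stdout, and compare it to the expected-output file. The .script/.stdin/.stdout naming follows my reading of cmdtest's layout; check its manual for the exact conventions.

```shell
#!/bin/sh
# Hand simulation of a cmdtest-style test case, where each aspect of
# the test lives in its own file. (File-name suffixes are my reading
# of cmdtest's layout, not gospel.)
dir=$(mktemp -d)
printf 'b\na\n' > "$dir/sort.stdin"        # input to feed the script
printf 'a\nb\n' > "$dir/sort.stdout"       # output we expect
printf '#!/bin/sh\nsort\n' > "$dir/sort.script"
chmod +x "$dir/sort.script"

# What the runner would do: feed stdin, capture stdout, compare.
"$dir/sort.script" < "$dir/sort.stdin" > "$dir/actual"
if cmp -s "$dir/actual" "$dir/sort.stdout"; then
    echo pass
else
    echo FAIL
fi
rm -r "$dir"
```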

The fundamental similarity that I see between Urchin and cmdtest is that they are based on files rather than functions; this is much more of a shell way to do things. There are obviously other similarities between these two frameworks, but I think most of the other similarities can be seen as stemming from the file basis of test cases.

Here’s one particularly cool feature that might not be obvious. Earlier, I mentioned some protocols for testing in multiple languages. I found them somewhat strange because I see shell as the standard interface between languages. In Urchin and cmdtest, test cases are just files, so you can actually use these frameworks to test code written in any language.

Which framework should I use?

If you are writing anything complicated in shell, it could probably use some tests. For the simplest tests, writing your own framework is fine, but for anything complicated, I recommend either Urchin or cmdtest. You’ll want to use a different one depending on your project.

cmdtest makes it easy to specify inputs and test outputs, but it doesn’t have a special way of testing what files have changed. Also, the installation is a bit more involved.

Urchin doesn’t help you at all with outputs, but it makes testing side effects easier. In Urchin, you can nest tests inside of directories; to test a side effect, you make a subdirectory, put the command of interest in the setup_dir file, and then check the side effects in your test files. Urchin is also easier to install; it’s just a shell script.
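
A sketch of that nesting idea: setup_dir is the Urchin hook named above, but the directory name, file names, and side effect are invented, and the two files are run by hand here rather than by Urchin (which I believe runs them from within their directory).

```shell
#!/bin/sh
# Testing a side effect the Urchin way: the command under test runs
# once in setup_dir, and the test files inspect what it left behind.
# (setup_dir is Urchin's hook; everything else here is invented.)
suite=$(mktemp -d)
mkdir "$suite/creates_a_logfile"

cat > "$suite/creates_a_logfile/setup_dir" <<'EOF'
#!/bin/sh
touch app.log      # stand-in for the command whose side effect we test
EOF

cat > "$suite/creates_a_logfile/logfile_exists" <<'EOF'
#!/bin/sh
test -e app.log    # pass iff the side effect happened
EOF

# What the runner would do, by hand:
cd "$suite/creates_a_logfile"
sh setup_dir
sh logfile_exists && echo pass || echo FAIL
cd / && rm -r "$suite"
```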

I recommend cmdtest if you are mainly testing input and output; otherwise, I recommend Urchin. If you are working on a very simple project, you might also consider writing your own framework.

For hip trend-setters like me

Test-driven development is mainstream in other languages but practically nonexistent in shell, so all of these approaches are ahead of the curve. Hip programmers like me know this, so we’re testing our shell scripts now, before shell testing gets big.


3 Responses to How to test shell scripts

  1. descartavel1 says:

Had you been writing scripts the way it’s the norm when writing Makefiles, all you had to do was replace your side-effect-generating command with something innocuous.

    e.g.
    SORT=/bin/sort
    CURL=/bin/curl
$SORT $1 | $CURL http://example.com/save.php -T - # or something like this, totally forgot now how to POST something with curl

Then you could have another script for testing that would call the initial script but with $CURL set to /bin/echo or something. And for repeatability testing you can write another script that uses a known input and compares the result to a known output of the echo.

    never touched any of the frameworks you mention and would be scared of what complexity they introduce on my scripts to begin with.

  2. dgvncsz0f says:

    There is also a golden-test approach: http://joyful.com/shelltestrunner/

    I find it quite useful, and it is possible to test shell scripts using
    a pattern like this:

    -- script.sh
    some_function () {

    }

    -- script.test
    . script.sh; some_function [args…]
    >>>
    expected stdout
    >>>2
    expected stderr
    >>>= 0 # expected exit code

  3. Pingback: npm install urchin | ScraperWiki Data Blog
