Book Review - Python Testing Cookbook by Greg L. Turnquist
I have been doing (developer) automated testing for years now, but I recently moved from .NET to Python. At one point I suggested to colleagues that we try Concordion, only to learn that there is the doctest module that can be used to achieve a similar result (more about that in a later post). Remembering my own advice - when in Rome, do as the Romans do - I set out looking for a Python-specific book about testing, and the Python Testing Cookbook by Greg L. Turnquist seemed to be a good fit based on reviews.
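For those who, like me, had not met doctest before: it lets you embed example runs in a docstring and have them checked automatically. A minimal sketch of the idea - my own toy example, not from the book, and the to_roman function is hypothetical:

```python
def to_roman(n):
    """Convert a positive integer to a Roman numeral.

    The examples below double as tests - doctest runs each one and
    compares the actual result against the expected output shown:

    >>> to_roman(4)
    'IV'
    >>> to_roman(1987)
    'MCMLXXXVII'
    """
    numerals = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
                (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
                (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]
    result = []
    for value, symbol in numerals:
        # Greedily subtract the largest numeral value that still fits.
        while n >= value:
            result.append(symbol)
            n -= value
    return ''.join(result)
```

Running `python -m doctest` on the module (or calling `doctest.testmod()`) reports any example whose actual output differs from the one written in the docstring.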
Overall, I liked the book, and it lived up to my expectations - it provided me with a list of tools and some sample code to get started with each of them.
Beware that it is an entry-level book - if, like me, you are already familiar with testing concepts and are looking for a book about advanced testing concepts and theories, this book might be too little for you (or just read through the "There is more" sections of the recipes). But it is great for someone new to testing - though discussions with (and code reviews by) someone experienced in testing should accompany the reading.
The criticisms below are meant to be a companion to the book rather than advice against it (probably the only book I wouldn't find anything missing from or anything to criticise would be one written for me, in real time, based on my immediate needs). The fact that the list is short shows how valuable I found the rest of the book, with great advice that goes beyond the cookbook format (why you shouldn't use fail(), why there should be one assert per test, etc.). I don't see eye to eye with the book on every topic, but just as the book is not written in a "my way or the highway" style, I will not get into minor differences of opinion.
Format of the book
Each chapter explores a topic, with multiple specific recipes. Each recipe is relatively self-contained, so if you are in a hurry and need the solution to one specific problem without reading the whole book/chapter, that's possible. However, for those reading whole chapters, this results in a bit of repetition - I had to fight the urge to just gloss over the code I (thought I) had seen before.
Each recipe follows the format of
- stating the problem
- showing code that solves it
- explaining how the code works
- and finally, providing warnings about pitfalls and problems in the code, and some further advice
While this format is easy to follow, it has a few drawbacks.
- until I got used to this style, I often found myself cursing out loud like the code reviewers in this comic while reading code that would only later be explained to be bad or an antipattern.
- each recipe has a lot of additional testing insight, potentially unrelated to the tool being demonstrated - but one can easily miss these by thinking "oh, I know all about doctest, I'll just skip that chapter"
- for people in a hurry, just scanning the cookbook and copying (typing up) the code: there is nothing in the code itself to indicate that an antipattern is present - only in the later paragraphs, which you might not read when in a hurry. Just thinking about the examples where the unit tests have no asserts, only print statements, gives me the shivers (and such a test is even used for illustration in the chapter about Continuous Integration!).
What was missing from the book
- About testing legacy code, I was missing two things:
- a pointer to Michael Feathers' classic book, Working Effectively with Legacy Code
- a warning about a mistake I have seen almost everyone (myself included) make when getting started with testing legacy code: don't write tests just because you can - only add cases for the area you are working on and only add enough to cover your current needs. This is hinted at, but I've found it's important to state it explicitly.
- Notes about test maintainability
- I strongly disagree with the approach of having one test class per physical class, with test methods directly exercising that class's methods. I've found this can lead to maintainability problems down the road, so I prefer to introduce helper abstractions (e.g. an
assert_roman_numeral_conversion(expected_result, roman_numeral_string)
method) for most of my tests, and to organize test methods by logical scenarios instead of mirroring code organizational units (on successful login, validating user input, etc.). These abstractions (indirections) make it easier to update tests after an API change or refactoring. It might sound like an advanced topic, but I think it's a key concept for effective tests, and one that people should be exposed to early (just after they've made the mental jump from throwaway main methods with visual/human assertions to automated tests with automated assertions).
- Acceptance Testing - it is notoriously difficult for us programmers to write good acceptance tests that are both maintainable and readable by the customers. I'm rather sure that in the example given in the book, the customers would not be interested in knowing which html tag contains the price in the output.
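A minimal sketch of the helper-abstraction idea (the names are mine, and to_roman is a hypothetical stand-in for the real code under test):

```python
import unittest

def to_roman(n):
    # Hypothetical stand-in for the real converter under test.
    return {4: 'IV', 9: 'IX', 40: 'XL'}[n]

class RomanConversionScenarios(unittest.TestCase):
    def assert_roman(self, expected, n):
        # Single point of contact with the API under test: if the call
        # signature changes, only this helper needs updating, not
        # every test method.
        self.assertEqual(expected, to_roman(n))

    def test_subtractive_four(self):
        self.assert_roman('IV', 4)

    def test_subtractive_nine(self):
        self.assert_roman('IX', 9)

    def test_subtractive_forty(self):
        self.assert_roman('XL', 40)
```

Note that the test class is named after a logical scenario (subtractive forms of Roman numerals), not after a physical class in the production code.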
Minor criticisms
- there is an inconsistent level of detail and further pointers. E.g. while optparse is explained in detail, virtualenv and setuptools are glossed over.
- In addition to the assertless test methods, the other thing that shocked me was the example in the doctest chapter that - to illustrate that the test works - introduced a bug in the test code. While the fact that test code is code too, and thus can be buggy, is worth emphasizing, that wasn't the point being made here. This could leave the reader wondering why exactly we introduced the bug in the test code - aren't we testing the application?
- The book is careful not to fall into the trap of elitist speak that might alienate people, but saying that coupling and cohesiveness are subjective terms is just providing ammunition for unwinnable arguments (*).
Interesting notes
- This might be a cultural thing (I'm coming from .NET), but I found it rather surprising that such an entry-level book talks about test runners, and even about writing custom test runners. It's useful knowledge, just something I have not seen mentioned in this much detail so early in the Java/.NET world. Maybe it has to do with IDEs being more widespread there, where running a subset of the tests is easy.
As I said, the book lives up to expectations, so if you would like a quick and practical introduction to testing in Python - both tools and concepts - I can recommend this one.
(*) This reminds me of a story from long ago. The team in question had decided to use bind variables for all SQL invocations (I said it was some time ago) to prevent SQL injection. One programmer wrote a stored procedure that concatenated the SQL command in its body... and argued that this was just a matter of style. At least the procedure was invoked from the application using bind parameters...