20081218

Test Driven Learning

Everyone already knows about Test Driven Development, but today I happened to come across a clever idea, from someone called Mike Clark, about using exactly the same process for learning a new programming language. This is quite simply brilliant.

Astoundingly Awesome!

Yes, that's right, it's astounding. Why? Because now when I learn something new, I'll write some code about it, and as Mike pointed out, if you write it down you are much more likely to remember it. But that's barely scratching the surface: not only will I now have a great way of forcing otherwise unmemorable knowledge into my brain, I'll also have documentation for everything I've already used. Documentation for everything I've already used, documentation that I've written! Seriously, think for a moment about how good that is: no more searching through APIs and obtuse JavaDoc output trying to find that function or method I once used and now can't find, just because I've forgotten its name and where I last used it.

You seriously think that's enough of a reason to justify spending so much time writing code you're never actually going to use? Yes! Not only is it reason enough, there are others, but I'll come back to those later. For now, let me tell you why it doesn't actually take any more time.

Every time I go to use a new library I find myself writing a file named "debug.ext". This file is roughly the same in every language I've played with: it has the appropriate require statements, some initialization, and then a few print/dump-type statements testing how the thing works. Up till now this has always been code that I've written and then never used, that's right, never! Often, once I've actually implemented the new feature, I delete the file altogether, what a waste! It takes absolutely no more time to write exactly the same thing into a test case file.
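To make the contrast concrete, here's a hypothetical example (the library and URL are just illustrations, using Ruby's standard URI library): the commented-out half is the kind of throwaway debug file I'd normally write, and below it is exactly the same poking around captured as a Test Case I get to keep.

```ruby
require 'uri'
require 'test/unit'

# What my throwaway debug.rb would have looked like:
#   u = URI.parse('http://example.com:8080/search?q=ruby')
#   puts u.host   # just eyeball the output and move on
#   puts u.port
#   puts u.query

# The same pokes, written down as assertions instead of puts:
class TC_UriLearning < Test::Unit::TestCase
  def test_parse_splits_url_into_parts
    u = URI.parse('http://example.com:8080/search?q=ruby')
    assert_equal 'example.com', u.host
    assert_equal 8080,          u.port
    assert_equal 'q=ruby',      u.query
  end
end
```

Same number of lines, same amount of typing, but instead of eyeballing output and deleting the file, the knowledge is recorded and re-checkable forever.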

Ok, now that I've convinced you it's not going to hurt much to switch to using this genius idea (I have convinced you, haven't I? If you answered no, please just stop reading now), let's go back and briefly discuss the remaining good points.

I can reuse the same Test Cases I write while learning how to use a library over and over again. So even though I'm still writing about the same amount of code while learning a new library, I'm no longer throwing it away at the end of that process, meaning I already have the basis for a Test Case for my app when it comes time to write one.

Sometimes libraries just change a little. I'm not referring to big things like the way objects changed between PHP 4 and 5; I mean subtler things, sometimes even bugs. Library maintainers have a picture in their mind of the way everyone uses their library, and if I'm not using it in just the way they pictured, one day I'll have a piece of code that stops working for no other reason than that the API has changed slightly. Sure, I should be able to find that out from a changelog, but some libs don't have changelogs, and let's be honest, I usually don't read them anyway.

Couldn't you just catch changes to the language and libraries you use in the normal test cases for your app? Probably, but maybe I don't want to. The app I currently work on (at work) is written in PHP 4 and has an insane number of dependencies and things that can go wrong at install time. At some point we will need to upgrade to PHP 5. Why would I want to go through all the hassle of configuring a full install with PHP 5 just to start finding out which of my assumptions about class constructors have broken? I wouldn't, that's why. Much easier would be to take a set of PHP 4 "learning test cases" (which unfortunately I don't have :( ) and run them against an out-of-the-box PHP 5 install. I could then modify them into working PHP 5 test cases and have a great heads-up on all the things I'm going to have to change in the rest of my code, possibly even giving me great data for a script that would automate the changes between the two versions of PHP :)
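The same trick works for pinning down assumptions about the language itself. Here's a sketch in Ruby rather than PHP (since that's what the rest of this post uses), with made-up assumptions purely for illustration: each assertion records something my code silently relies on, so a version upgrade that changes it fails loudly in this one file instead of mysteriously somewhere in my app.

```ruby
require 'test/unit'

# Assumptions about the language/stdlib that my real code relies on.
# If a future Ruby upgrade changes any of these, this file tells me
# exactly which assumption broke, without a full app install.
class TC_LanguageAssumptions < Test::Unit::TestCase
  def test_integer_division_truncates
    assert_equal 3, 7 / 2
  end

  def test_hash_preserves_insertion_order
    h = {}
    h[:b] = 1
    h[:a] = 2
    assert_equal [:b, :a], h.keys
  end

  def test_split_drops_trailing_empty_strings
    assert_equal ['a', 'b'], 'a,b,,'.split(',')
  end
end
```

Running this file against a candidate interpreter is a lot cheaper than standing up a whole install of the app just to see what breaks.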

Boring! Gimme example!

Ok. Today I was trying to figure out how to unit test a web crawler app written in Ruby. I decided I would use Mongrel. I had never used Mongrel before and at first I had no idea how to use it. Rather than trying to integrate it straight into my Unit Test Suite and risk confusing myself, I needed to learn to use Mongrel separately from any of the code I was actually writing. Normally I would just write a debug file and be done with it, but not today! Today I wrote a Test Case. The best part was that it didn't turn out to be significantly longer than a debug.rb file would have been, and now rather than throwing it away I can keep it forever and even reuse the same Test Case in my web crawler app.

For those who want at least a little proof that I actually wrote something today, here it is:
require 'rubygems'
require 'mongrel'
require 'net/http'

require 'test/unit'

# Minimal handler that just returns a plain-text 'hello'
class TC_MongrelHandler < Mongrel::HttpHandler
  def process(request, response)
    response.start(200) do |head, out|
      head['Content-Type'] = 'text/plain'
      out.write 'hello'
    end
  end
end

class TC_Mongrel < Test::Unit::TestCase

  def setup
    @http_port = 12345

    # Start the Mongrel web server; if we can't bind to the port,
    # increment it and try again. Rescuing the exception classes
    # directly is cleaner than matching on the exception message.
    begin
      @http_server = Mongrel::HttpServer.new('127.0.0.1', @http_port)
    rescue Errno::EADDRINUSE, Errno::EPERM
      @http_port += 1
      retry
    end

    @http_server.register('/', TC_MongrelHandler.new)

    # Join the server thread, but wait 0 seconds for it to return,
    # i.e. leave it running in the background
    @http_server.run.join 0
  end

  def teardown
    # kill off the Mongrel web server
    @http_server.stop
  end

  def test_hello_world
    res = Net::HTTP.get_response(URI.parse("http://localhost:#{@http_port}/"))
    assert_equal 'hello', res.body
  end
end
It's perfectly possible that I'm not using the libraries involved exactly as they were designed, but that doesn't matter much, because if they ever break on me, I'll know!