I’m currently reading the beta edition of the RSpec book by David Chelimsky et al. Since a book like this only really sinks in when you put its content into practice, I’ve decided to start documenting a new project I’m building along with it.
So far, it has been about Behavior Driven Development (BDD), an acronym I’d heard before but never had the time to read up on.
It feels a bit weird to specify things mostly in natural language, but on the other hand it’s also quite a comfortable way to work. What’s really neat is that you start using the API you want to specify right away, instead of first formalizing a design for it. That way you know that all the methods in your API really belong there and actually work.
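To illustrate that outside-in flow, here is a sketch in RSpec style. The `Stack` class is my own toy example, not one from the book, and the spec would normally live in its own file and be run with the `rspec` command. The spec is written first, exercising the API before it exists:

```ruby
# The spec comes first and uses the API before any code exists:
#
#   RSpec.describe Stack do
#     it "returns the last pushed item first" do
#       stack = Stack.new
#       stack.push(:a)
#       stack.push(:b)
#       expect(stack.pop).to eq(:b)
#     end
#   end
#
# Only then do you write just enough code to make the spec pass:
class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items.push(item)
  end

  def pop
    @items.pop
  end
end
```

Because the spec drove the design, every method on `Stack` exists for a reason you can point to.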
While I was busy coding up a small project, the new issue of Linux Journal arrived, which had an article on metric_fu. metric_fu bundles a collection of tools that measure the quality of your code. That is always a good thing to do: the more checks you run on your code, the bigger the chance you catch a bug waiting to happen. Of course, you also run into false positives sooner, and many people stop using checks like these because they hit false positives too often.
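For context, wiring metric_fu into a project was, if I remember the gem’s README correctly, roughly a one-line affair in the Rakefile (task names and configuration options vary between versions, so treat this as a sketch, not gospel):

```ruby
# Rakefile — assumes the metric_fu gem is installed (`gem install metric_fu`)
require 'rake'
require 'metric_fu'

# With the gem loaded, `rake metrics:all` runs the bundled metric tools
# (code complexity, churn, duplication, coverage, ...) and writes a
# combined HTML report. Check the gem's README for the exact set of
# metrics and how to configure or disable individual ones.
```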
But while reading the article I kept thinking to myself: why don’t we combine BDD with something like metric_fu for the one-off tools we create to solve a case? Most forensic practitioners I know eventually run into a situation where none of the available tooling is adequate for a certain job. Things that come to mind are re-filing images by camera (but oh wait, by resolution first), or extracting all email addresses from an image and comparing them against some filter. Tools like these should be rigorously tested before being put to use, because a simple code snafu can dump all your material in the bin and cost you valuable time to clean up again.

There is an obvious trade-off here between coding time, solving time, clean-up-the-errors time, and the time you need in court to explain that you did everything in your power not to botch up the code. For that last part, you would ideally show test output demonstrating that your test cases have 100% coverage and that the code passes every test you could think of.
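As a concrete sketch of what that could look like, here is a toy version of the email-extraction tool mentioned above, with its spec written BDD-style. The regex, method names, and the idea of filtering on a domain list are all my own assumptions, not anything from the article or the book:

```ruby
# email_scan.rb — a toy one-off tool: pull e-mail addresses out of text.
# (In practice the text would come from strings extracted from a disk image.)
EMAIL_PATTERN = /\b[\w.+-]+@[\w-]+(?:\.[\w-]+)+\b/

# Return every unique address found in the text, in order of appearance.
def extract_emails(text)
  text.scan(EMAIL_PATTERN).uniq
end

# Keep only addresses whose domain appears in the list of interest.
def filter_emails(addresses, domains)
  addresses.select { |address| domains.include?(address.split('@').last) }
end

# The matching RSpec spec (run with `rspec`) would read something like:
#
#   RSpec.describe "email_scan" do
#     it "extracts each address once and filters on domain" do
#       text  = "contact alice@example.com, bob@other.org, alice@example.com"
#       found = extract_emails(text)
#       expect(filter_emails(found, ["example.com"])).to eq(["alice@example.com"])
#     end
#   end
```

Running such a spec under a coverage tool (rcov, in the Ruby of that era) is exactly the kind of output you could later show in court.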