Recently, I read a post that described purposely doing things in a way that doesn't scale, in the hope of gaining better insight into the problem domain, and of deferring investment until confidence in the return is better known. In particular, Joel suggested starting a project by running by hand processes that could later be automated. These processes could then be automated if needed. So, how could you go about following this advice?
Being a keen test driven development advocate, this advice was tough to accept. When I first learnt about TDD and 'saw the light', I thought all code must be test driven, if only for the regression benefits. Having test driven software for ~10 years, I'm now less dogmatic.
I pretty much always use it when writing complex, and thus risky, software. Following this approach, I've found my first stab at a solution has a higher probability of being 'right', and the second stab is easier to iterate towards, thanks to the confidence the tests provide.
The other real benefit I gain from test driving development is the deliberately incremental approach to learning unknown problem domains by way of examples. Instead of trying to 'eat the elephant' and learn it all in one big lesson, I prefer to identify a simple, understandable example, write a test, and make it pass by writing the simplest production code possible. I then repeat, and repeat, until gradually I understand the wider problem domain, and not only have a solution, but one proven by the tests to solve the problem!
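That cycle might look something like this in practice. This is a minimal sketch using a made-up `slugify()` problem (the example, names, and tests are all hypothetical, not from the post), showing one test, the simplest code that passes it, then the next test forcing the code to grow:

```python
# A hypothetical illustration of the incremental test-first cycle:
# one small example, one small test, the simplest code that passes, repeat.

# Step 1: write a test for the one example we understand so far.
def test_lowercases_the_title():
    assert slugify("Hello") == "hello"

# Step 2: write the simplest production code that makes it pass.
def slugify(title):
    return title.lower()

# Step 3: identify the next example we now understand, and test it...
def test_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# ...which forces the production code to grow, one small lesson at a time.
def slugify(title):  # redefined here to show the next iteration of the cycle
    return title.lower().replace(" ", "-")

test_lowercases_the_title()
test_replaces_spaces_with_hyphens()
print("all tests pass")
```

Each loop leaves behind both a slightly deeper understanding and a test that pins that understanding down.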
However, being less dogmatic, I appreciate there are scenarios in which test driving code is less than optimal.
Quite often, I'm faced with writing software using a language, platform, or framework that I've little experience with. During these projects, I'm often faced with the challenge that I don't know the shape of the components I will write. Will they be object oriented? Functional? Should they follow platform-specific patterns that I don't yet understand? If I don't know the structure of the production code, how can I know its responsibilities, and thus how to write tests? Often this is accompanied by a lack of knowledge about the maturity of the testing tools and which ones to choose. The time needed just to evaluate the test tools might exceed the time the tests save through their regression benefits.
The other scenario in which I often find writing tests first to be premature is based not on complexity, but on the unpredictability of the software's expected result. For example, if I were asked to write software that tracked a mountain biker around a UK trail, I would likely not know the exact GPX polygons needed to write a valid test. Instead, I'd possibly just spike a solution and initially see if the recorded tracks look roughly sensible. I don't need a failing Cucumber feature to tell me that a resulting track crossing the equator is obviously wrong. Only when I've proved GPS tracking basically works would I consider investigating how to test-double device locations, times, and other factors, to ensure the system really works, and continues to do so in the future.
Joel's article describes some benefits of doing things deliberately, in a way that doesn't scale. For example, instead of using a simple script to send standardised emails to new website members, perhaps send the first thousand emails by hand. Embrace the chance to personally connect with the first members. Make them feel valued, and possibly gain feedback from them about the sign-up experience. They're more likely to provide feedback to a personal email than to an automated script and survey tool.
During development of my mountain bike ride recording application (now abandoned - thank you Strava...), I decided against initially test driving the recording features for the reasons mentioned in the previous section. Instead, I'd hack basic prototypes that I'd actually take out cycling, which allowed me to explore the general concept and the suitability of mobile GPS for this problem. I quickly discovered the battery life and GPS accuracy limits that would make this software extremely difficult to write and support. It also allowed me to identify other unknown unknowns: how would I actually specify the trails, and their start and end points? Could this be crowdsourced?
I also gained valuable feedback from people I met on the trails about whether they'd be interested in such an app, and if not, what services would be useful to them. One in particular would make a good candidate for a startup in the future. These are benefits I just couldn't have gained by crafting a substantial, high-quality test suite.
Don't shun automation. Embrace it. Save time, and reap the risk reduction of repeatability and predictability. But don't do this dogmatically. Instead, consider doing things the manual way and weigh up the pros and cons of both approaches. If it makes sense, try doing things by hand, and when you think you're close to learning 'all the things', then automate, automate, automate....