Dave Horner's Website - Yet another perspective on things...
"Whenever two men meet there are really six people present. There is each man as he sees himself, each man as the other sees him, and each man as he really is." -William James
\begin{bmatrix} 1 & 0 & \ldots & 0 \\ 0 & 1 & 0 & \vdots \\ \vdots & 0 & \ddots & 0\\ 0 & \ldots & 0 & 1_{n} \end{bmatrix}

The Three Laws of Test Driven Development (TDD)

Sunday, 16 November 2008 11:49
By now everyone knows that TDD asks us to write unit tests first, before we write production code. But that rule is just the tip of the iceberg. Consider the following three laws:

First Law:      You may not write production code until
                you have written a *failing* unit test.

Second Law:     You may not write more of a unit test than is
                sufficient to fail, and not compiling is failing.

Third Law:      You may not write more production code than
                is sufficient to pass the current failing test.

These three laws lock you into a cycle that is perhaps thirty seconds long. The test and the production code are written *TOGETHER*, with the tests just a few seconds ahead of the production code.
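To make that cycle concrete, here is a minimal sketch in Python's unittest (the leap-year function and the test names are illustrative, not from the book). Each test was written first, run to see it fail, and only then was just enough of leap_year written to make it pass:

```python
import unittest

# Third Law: only enough production code to pass the current failing tests.
# (An earlier cycle, with only test_divisible_by_4_is_leap in place, would
# have justified nothing more than `return year % 4 == 0`.)
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# First and Second Laws: each test below existed, and failed, before the
# production code that satisfies it was written.
class LeapYearTest(unittest.TestCase):
    def test_divisible_by_4_is_leap(self):
        self.assertTrue(leap_year(1996))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_fourth_century_is_leap(self):
        self.assertTrue(leap_year(2000))
```

Run with `python -m unittest` after each small change; the point of the three laws is that you see the red bar before you ever see the green one.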

If we work this way, we will write dozens of tests every day, hundreds of tests every month, and thousands of tests every year. If we work this way, those tests *will cover* virtually *all* of our production code. The sheer bulk of those tests, which *can rival the size* of the production code itself, can present a daunting management problem.

Keeping Tests Clean
===================
Some years back I was asked to coach a team who had explicitly decided that their test code should not be maintained to the same standards of quality as their production code. They gave each other license to break the rules in their unit tests. "Quick and dirty" was the watchword. Their variables did not have to be well named, their test functions did not need to be short and descriptive. Their test code did not need to be well designed and thoughtfully partitioned. So long as the test code worked, and so long as it covered the production code, it was good enough.

Some of you reading this might sympathize with that decision. [...]

It's a huge step from writing that kind of throw-away test, to writing a suite of automated unit tests. So, like the team I was coaching, you might decide that having dirty tests is better than having no tests.

What this team did not realize was that having dirty tests is equivalent to, if not worse than, having no tests. The problem is that tests must change as the production code evolves. The dirtier the tests, the harder they are to change. The more tangled the test code, the more likely it is that you will spend more time cramming new tests into the suite than it takes to write the new production code. As you modify the production code, old tests start to fail, and the mess in the test code makes it hard to get those tests to pass again. So the tests become viewed as an ever-increasing liability.

From release to release the cost of maintaining my team's test suite rose. Eventually it became the single biggest complaint among the developers. When managers asked why their estimates were getting so large, the developers blamed the tests. In the end they were forced to discard the test suite entirely.

But, without a test suite they lost the ability to make sure that changes to their code base worked as expected. Without a test suite they could not ensure that changes to one part of their system did not break other parts of their system. So their defect rate began to rise. As the number of unintended defects rose, they started to fear making changes. They stopped cleaning their production code because they feared the changes would do more harm than good. Their production code began to *rot*. In the end they were left with no tests, tangled and bug-riddled
production code, frustrated customers, and the feeling that their testing effort had failed them.

In a way they were right. Their testing effort had failed them. But it was *their decision* to allow the tests to be messy that was the seed of that failure. Had they kept their tests clean, their testing effort would not have failed. I can say this with some certainty because I have participated in, and coached, many teams who have been successful with *clean* unit tests.

The moral of the story is simple:
     *Test code is just as important as production code*

It is not a second-class citizen. It requires thought, design, and care.
It must be kept as clean as production code.

--------------
excerpt from:
"Clean Code"
'A Handbook of Agile Software Craftsmanship'
Robert C. Martin.


Magic tricks of testing
=======================

Test the interface, not the implementation
Message origin, in relation to the class under test: incoming, sent to self, outgoing.
Message types: queries (read, no write) and commands (write). We conflate commands and queries at our peril.

                   Query            Command
    Incoming       assert result    assert direct public side effects
    Sent to self   ignore           ignore
    Outgoing       ignore           expect to send

Be a minimalist.
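A sketch of those rules in Python, using a hypothetical Gear/Wheel/observer trio (the class and method names are assumptions for illustration, with unittest.mock standing in for collaborators):

```python
import unittest
from unittest.mock import Mock

# Hypothetical classes for illustration only.
class Wheel:
    def __init__(self, rim, tire):
        self.rim, self.tire = rim, tire

    def diameter(self):                       # answers an incoming query
        return self.rim + (self.tire * 2)

class Gear:
    def __init__(self, cog, chainring, wheel, observer):
        self.cog, self.chainring = cog, chainring
        self.wheel, self.observer = wheel, observer

    def gear_inches(self):                    # sends an outgoing query
        return (self.chainring / self.cog) * self.wheel.diameter()

    def set_cog(self, new_cog):               # sends an outgoing command
        self.cog = new_cog
        self.observer.changed(self.cog, self.chainring)

class MessageTests(unittest.TestCase):
    def test_incoming_query_assert_result(self):
        # Incoming query: assert the returned value, not how it was computed.
        self.assertEqual(Wheel(26, 1.5).diameter(), 29.0)

    def test_outgoing_query_ignore(self):
        # Outgoing query: stub the collaborator; no expectation it was sent.
        wheel = Mock()
        wheel.diameter.return_value = 29.0
        gear = Gear(11, 52, wheel, observer=Mock())
        self.assertAlmostEqual(gear.gear_inches(), (52 / 11) * 29.0)

    def test_outgoing_command_expect_to_send(self):
        # Outgoing command: expect the message to be sent; its side effects
        # belong to the receiver's own tests.
        observer = Mock()
        gear = Gear(11, 52, Mock(), observer)
        gear.set_cog(27)
        observer.changed.assert_called_once_with(27, 52)
```

Note what is *not* tested: nothing peeks at how diameter() computes its answer, and nothing asserts on messages the object sends to itself. That is the minimalism the table prescribes.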

Agile Manifesto – Agile Rails – the proof | Richard Bucker - the agile speak is sometimes very hard to swallow.

Last Updated on Saturday, 05 July 2014 19:16