In an earlier post, Design & Development Principles - XP - Keyboard Circle, which will be a couple of months old by the time this goes live, I talk about doing TDD and the benefits it provides.
There I mention that it's not about the tests; it's about the mindset the tests support. My favorite way to phrase this is: The least important thing about TDD is the test.
The reason I'm writing now is to ramble out my thoughts about unit tests. I kinda see a few levels of unit tests. (I'm thinking three, but we'll see how many the rambling produces.)
Code Coverage Unit Test
I think this is the lowest form of Unit Test. It's a test solely to improve code coverage.
These tests don't come up often; they appear primarily when Code Coverage becomes THE METRIC! (tm). That's a big part of why you never want Code Coverage as a metric. The way I look at it: low code coverage is a smell; high code coverage ... is nothing.
Code can't have a metric for "quality". Not really. It's a lot of systems interacting; legacy and new, holdovers and workarounds. Requiring high code coverage everywhere reduces the meaning of the tests: once you can get away with passing tests that "don't do anything", then no test needs to do anything. But it looks like a well-tested system... with none of the benefits. Code that lies is more dangerous than code that does nothing.
Implementation Based Unit Tests
The first type of test I think anyone coming into testing writes is the "Implementation Test".
Implementation based tests are testing the ... unsurprisingly ... implementation.
This generally implies Test-After tests. The method should be doing X, Y, and Z - the test will then verify that X, Y, and Z were done. If mocks are being used, then, generally, the test validates that those methods were called.
Implementation based tests are fragile. The Assert of the test has to be updated whenever the implementation changes, and we don't want to have to change the asserts to get a test to pass. An implementation based unit test can go so far as to test the order things were called in, which increases the fragility even more.
If we have a single assert verifying the return value, the test is less fragile. If the algorithm changes, we may need to update the values the mocks return, but the assert remains the same. We're not "changing the test" to get it to pass; we're updating the Arrangement to match what the code requires. That happens when you refactor, but the implementation based form of test is really fragile to refactoring. It's going to make ruthless refactoring REALLY hard to do without having to fix a bunch of tests every time.
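As a quick sketch of the difference (the `PriceCalculator` and tax-service names here are hypothetical, just for illustration): the first test asserts which calls the code made; the second only asserts the return value, treating the mock as part of the Arrangement.

```python
from unittest.mock import Mock

# Hypothetical class: computes a total using a rate from an injected service.
class PriceCalculator:
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, amount):
        rate = self.tax_service.rate_for(amount)
        return round(amount * (1 + rate), 2)

def test_implementation_style():
    # Fragile: asserts HOW total() works -- which call it made, with what args.
    tax_service = Mock()
    tax_service.rate_for.return_value = 0.10
    PriceCalculator(tax_service).total(100)
    tax_service.rate_for.assert_called_once_with(100)  # breaks if internals change

def test_behavior_style():
    # Sturdier: asserts WHAT total() returns; the mock just supplies data.
    tax_service = Mock()
    tax_service.rate_for.return_value = 0.10
    assert PriceCalculator(tax_service).total(100) == 110.0
```

If `total()` is refactored to fetch the rate differently, the behavior-style test only needs its Arrangement touched; the implementation-style test's assert has to change too.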
Another downside of these tests is that they don't support an emergent design. This is tied pretty tightly to ruthless refactoring. If you can't refactor, you can't get an emergent design to come out of the changes.
This same hindrance to refactoring prevents the design from becoming simple. When you have to fix multiple tests for every change, you can't be sure your change hasn't actually broken anything.
Did you just fix the test to match the new implementation, or... did you change the expected behavior and update the test to reflect it?
If you're in the habit of updating tests for every small implementation change, you're less likely to notice.
Continuing with what a proper TDD unit test provides and these tests lack: Documentation.
A good test suite is like documentation for the code. The tests should show the behavior, not the how. Implementation based tests show the HOW; they tend to have longer, more convoluted test names and Arrange sections. The asserts also tend to get out of hand - each test needs to validate that the code does what we told it to do, in the order we expect...
These tests no longer describe what the class should be doing, but the details of how. When a test knows the details, it's no longer going to protect against behavior changes; "everything" will break the tests.
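The documentation point shows up most clearly in test names. A small illustration (the `Stack` class here is hypothetical, not from the post): the first name describes observable behavior and reads like documentation; the second names an internal detail and couples the test to it.

```python
# A minimal stack, just to hang the two test names on.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Reads like documentation: describes WHAT the class does.
def test_pop_returns_the_most_recently_pushed_item():
    stack = Stack()
    stack.push("a")
    stack.push("b")
    assert stack.pop() == "b"

# Reads like implementation: names HOW it does it.
def test_push_appends_to_internal_items_list():
    stack = Stack()
    stack.push("a")
    assert stack._items == ["a"]  # couples the test to a private detail
```

Swap the list for a linked list and the first test still passes and still documents the behavior; the second breaks, even though nothing observable changed.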
I'm sure I could complain more about these types of tests... I wrote so many of them as I dragged myself up into the value of unit testing.
All The Things Unit Test
This type of unit test doesn't understand the term "unit". It tries to do all of the tests in a giant chain, verifying a lot of functionality in a single pass of a single test.
In the proper Arrange/Act/Assert structure, there's only one Act and the asserts required to verify that behavior. With a unit test trying to do all the things, there will be many calls triggering a lot of behavior, and many unrelated asserts.
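Side by side, the difference looks something like this (the `ShoppingCart` class is hypothetical, just for illustration): the first test chains acts and asserts together; the second has one Act and asserts only the behavior it names.

```python
# A minimal cart for the two test styles to exercise.
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def remove(self, name):
        self._items = [i for i in self._items if i[0] != name]

    def total(self):
        return sum(price for _, price in self._items)

# "All the things": many acts, many unrelated asserts.
# Any change to add, remove, or total can break this one test.
def test_cart():
    cart = ShoppingCart()
    cart.add("apple", 2)
    assert cart.total() == 2
    cart.add("pear", 3)
    assert cart.total() == 5
    cart.remove("apple")
    assert cart.total() == 3

# Focused Arrange/Act/Assert: one act, one behavior.
def test_removing_an_item_excludes_it_from_the_total():
    cart = ShoppingCart()     # Arrange
    cart.add("apple", 2)
    cart.add("pear", 3)
    cart.remove("apple")      # Act
    assert cart.total() == 3  # Assert
```

Notice how the second test's name tells you exactly what broke when it fails; `test_cart` tells you nothing.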
These tests are easiest to spot by their names; they're going to be very non-specific. The test provides no information on how the system behaves.
These tests, while they verify the code, are fragile. A single test can break because of many different changes to the code.
The fewer places that can cause a test to fail, the more useful the test is.
The way I look at unit tests is: test one thing. This is where TDD helps drive the behavior. You're writing a test for one thing; the test tests one thing.
It's a small test. There's only one Act and the smallest number of Asserts needed to validate the behavior. Don't be lured into testing the implementation, even if it's easy to do.
I was hoping to ramble out a bit more sitting at my daughter's practice... Oh well. That's some thoughts...