Skimming through the introduction of the post, it looks like its goals align well with what MicroObjects give us in terms of testability. I'm going to go through the article and post my thoughts about what's in it - both the "in general" thoughts and how it aligns with MicroObjects.
No broad tests required. The test suite consists entirely of "narrow" tests that are focused on specific concepts. Although broad integration tests can be added as a safety net, their failure indicates a gap in the main test suite.
While a lack of broad tests isn't a requirement of MicroObjects, it's a result of them. I like having a few end-to-end tests, but when every piece integrates with the next correctly, as shown by the narrow tests... It's rare for broad tests to find issues.
Easy refactoring. Object interactions are considered implementation to be encapsulated, not behavior to be tested. Although the consequences of object interactions are tested, the specific method calls aren't. This allows structural refactorings to be made without breaking tests.
MicroObjects support easy refactoring. Most of the time it's not changing classes but encapsulating the interactions or creating a new object to do the new behavior.
Readable tests. Tests follow a straightforward "arrange, act, assert" structure. They describe the externally-visible behavior of the unit under test, not its implementation. They can act as documentation for the unit under test.
The small amounts of code in the methods of MicroObjects make the tests very clear. Using the AAA model also makes it clear what the test does and helps us keep our tests clean.
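A minimal sketch of that "arrange, act, assert" shape (in Java here, but the shape is identical in C#; the `Greeter` class is invented for illustration, not from the article):

```java
// Hypothetical unit under test - zero-work constructor, one tiny behavior.
final class Greeter {
    private final String name;

    Greeter(String name) {
        this.name = name; // assignment only
    }

    String greet() {
        return "Hello, " + name + "!";
    }
}

final class GreeterTest {
    static void greetsByName() {
        // Arrange: build the unit under test
        Greeter greeter = new Greeter("Ada");
        // Act: exercise the externally visible behavior
        String result = greeter.greet();
        // Assert: hard-code the expected value
        assert result.equals("Hello, Ada!");
    }
}
```

Because the method does so little, the test reads as documentation: the assert line alone tells you what the class is for.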
No magic. Tools that automatically remove busywork, such as dependency-injection frameworks and auto-mocking frameworks, are not required.
MicroObjects don't need these tools. They are a problem.
Fast and deterministic. The test suite only executes "slow" code, such as network calls or file system requests, when that behavior is explicitly part of the unit under test. Such tests are organized so they produce the same results on every test run.
I don't think the tests should hit "slow" calls. Create BookEnds and Shims to enable testing the "slow" code w/o it being slow.
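Roughly what I mean by that (a sketch, with invented names - the "BookEnd" is an interface at the boundary, the "Shim" is a fast, deterministic stand-in for the slow thing):

```java
// BookEnd: an interface at the system boundary.
interface Clock {
    long nowMillis();
}

// Production side: talks to the real, potentially "slow" source.
final class SystemClock implements Clock {
    public long nowMillis() {
        return System.currentTimeMillis();
    }
}

// Shim: deterministic, no slow calls, no mocking framework required.
final class FixedClock implements Clock {
    private final long millis;

    FixedClock(long millis) {
        this.millis = millis;
    }

    public long nowMillis() {
        return millis;
    }
}

// The unit under test only ever sees the interface.
final class SessionTimeout {
    private final Clock clock;
    private final long expiresAtMillis;

    SessionTimeout(Clock clock, long expiresAtMillis) {
        this.clock = clock;
        this.expiresAtMillis = expiresAtMillis;
    }

    boolean isExpired() {
        return clock.nowMillis() >= expiresAtMillis;
    }
}
```

The tests hand `SessionTimeout` a `FixedClock` and get the same answer every run; only the test for `SystemClock` itself ever touches the real boundary.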
The goals are very well aligned with what MicroObjects gives us in regards to testability. Quick, easy, clear, concise - The way tests should be.
This style does have tradeoffs. That's called out, and some of them are presented. I think there are tradeoffs to using MicroObjects as well, but they are far less costly than not using them.
Test-specific production code. Some code needed for the tests is written as tested production code, particularly for infrastructure classes. It requires extra time to write and adds noise to class APIs.
This ties directly into Creating BookEnds and Shims. That's 'test' code in production. It should be heavily mitigated and ideally made unavailable to Release builds. Code cannot be untestable; whatever is required to test it should be done. Then fix it so it's not horrible, but... Testing is the only way we're gonna have repeatable confidence it still works.
Hand-written stub code. Some third-party infrastructure code has to be mimicked with hand-written stub code. It can't be auto-generated and takes extra time to write.
I think this ties into the Creating BookEnds and Shims again. Especially around making it not as painful to write the stub code.
Sociable tests. Although tests are written to focus on specific concepts, the units under test execute code in their dependencies. (Jay Fields coined the term "sociable tests" for this behavior.) This can result in multiple tests failing when a bug is introduced.
This is where I deviate from this a bit. I don't object to it - it's just not what I've done. It's an interesting concept though. I like it, it just changes the approach.
This takes it away from being a 'micro-test', to use Kent Beck's term. It's no longer JUST dealing with the class under test; it's dealing with one extra layer of classes.
I like what it ends up doing; and I'll probably use this in a project.
Not a silver bullet. Code must be written with careful thought to design. Design mistakes are inevitable and this necessitates continuous attention to design and refactoring.
No practice or style or constraints are going to make your design for you. There are ways to make it easier to refactor your design. To make it harder to make mistakes... but we can't stop bad code from happening. It takes discipline and constant attention to keep the design from killing us.
Overlapping Sociable Tests
I'm really interested to try this. Creating that fine mesh of overlapping tests to prove the entire system works would be great.
There's a bunch of patterns recommended. Some I like, some I don't - at first glance. I see the value; I just think it can be achieved without as much test-in-prod pollution. Of course... the way I'm thinking requires some type of Mock/Stub/Spy. These are indeed techniques that enable us to NOT need those.
This is an interesting style. I've used flavors of this before. It doesn't fit how my systems evolve. That's probably just my style. I like to have a "flow". Not "yo-yo" through the system. Go down this path, maybe some branching, finish. I think going down to get something, coming back up with that, then going down another path... maybe repeat... is MUCH harder to follow what's happening. Could be that the code I'm involved with now has made me bitter with how much it does that... It's a recognized preference/bias on my part.
I do tend to use the outside-in design he cites from Growing Object-Oriented Software, Guided by Tests. I find it very valuable to only write the code the system needs. No future proofing.
I've got a new name for no logic in the constructor - Zero-Work Instantiation! Do that. Everywhere. I take it even further than what James says
Don't do significant work in constructors.
I say don't do ANY work in the constructor. (Excepting, of course, assignment).
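What Zero-Work Instantiation looks like in practice (a sketch, names invented; Java here, same idea in C#):

```java
// Zero-Work Instantiation: the constructor only assigns, never computes.
final class Report {
    private final String rawCsv;

    // Good: assignment only. Nothing can fail here, nothing is slow here,
    // and a test can always construct one.
    Report(String rawCsv) {
        this.rawCsv = rawCsv;
    }

    // Any actual work happens in a method, where it can be tested
    // and deferred until it's needed.
    int rowCount() {
        if (rawCsv.isEmpty()) return 0;
        return rawCsv.split("\n").length;
    }
}
```

If parsing happened in the constructor, every test that merely wants a `Report` in hand would pay for (and depend on) that parsing. With assignment-only construction, instantiation can never surprise you.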
Signature Shielding... I don't really agree with it - at least not for methods. But it's basically what I do with constructor chaining. So... maybe?
It's not bad, I think some langs will support it much more than others.
There are more patterns in this section. Go read them.
Getting into this section, it becomes clearer why the A-Frame pattern is appealing.
When using A-Frame Architecture, the application's Logic layer has no infrastructure dependencies. It represents pure computation, so it's fast and deterministic. To qualify for the Logic layer, code can't talk to a database, communicate across a network, or touch the file system.
I find that this implies that there's no decoupling from actually interacting with the "slow" dependencies in the infrastructure leg of the A-Frame. ... It's looking a lot more like my practice of Creating BookEnds and Shims sets me up in a way that I don't need to A-Frame my code. Could I? Probably. I think I'm solving the same problem differently.
I'll give the A-Frame credit that it's probably a lot clearer at drawing lines between types of behavior than I am. Not sure if that's positive or negative, as those lines might bring in additional abstractions or complexity, like a "yo-yo" in the code.
I think Easily-Visible Behavior for tests is something MicroObjects heavily results in. With objects being immutable, they're easy to test. C# is an object-oriented language, so I shy away from "pure functions" as a concept to use in the language, but I don't disagree with the idea - I just disagree with the "static methods" it starts to bring in.
I've had a few instances where change has been communicated by eventing. Works great. Not a lot, given the immutable nature I strive for, but there have been instances outside of that - a cache refresh, say, where we want to let other things know what's happening.
In all cases, avoid writing code that explicitly depends on (or changes) the state of dependencies more than one level deep. That makes test setup difficult, and it's a sign of poor design anyway. Instead, design dependencies so they completely encapsulate their next-level-down dependencies.
That. Don't make your tests set up or interrogate layers of code. That's going to hinder refactoring efforts. The parts of the system your tests know about are parts of the system with resistance to refactoring. If every test uses a specific API, that API is going to resist refactoring from the entire system - it will never change, and if you try, you can't change it safely.
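A sketch of what "completely encapsulate the next level down" buys the test (invented names): the test only ever touches the class under test's direct collaborator, never anything two levels deep.

```java
import java.util.ArrayList;
import java.util.List;

// Direct collaborator. Whatever lower-level details it has (a transport,
// a queue, a socket) stay fully encapsulated behind these two methods.
final class Outbox {
    private final List<String> sent = new ArrayList<>();

    void deliver(String message) {
        sent.add(message);
    }

    boolean hasDelivered(String message) {
        return sent.contains(message);
    }
}

// Unit under test: depends on Outbox, knows nothing below it.
final class Mailer {
    private final Outbox outbox;

    Mailer(Outbox outbox) {
        this.outbox = outbox;
    }

    void sendWelcome(String user) {
        outbox.deliver("Welcome, " + user + "!");
    }
}
```

The test constructs an `Outbox`, hands it to `Mailer`, and asks the `Outbox` what happened. It never reaches through `Mailer` into `Outbox`'s internals, so either class can be restructured without touching the test.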
This is one of my favorite sections of this post. 3rd party code should be wrapped. Create a wrapper that reflects what your system needs.
I don't agree with adding getters/setters - those are verboten. None. Ever. You don't need them.
For the most part I don't think we should be testing 3rd party code. We wrap it, and POSSIBLY test CRITICAL functionality. For example, hashing. If we are using a library to hash for us, it makes sense to wrap it and provide some tests around the functionality to alert us if required behavior changes. If A => b and suddenly A => a, even if it's fixing a bug in their system, we'd need to know. Sometimes it won't matter; if it's only in memory, then a change won't affect stored values, so it's not at that CRITICAL point.
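Concretely, the wrap-plus-pin idea might look like this (a sketch in Java using the platform's `MessageDigest`; the `Hasher` wrapper name is mine):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Wrapper over the third-party/platform hashing API. The rest of the
// system only ever sees Hasher, never MessageDigest.
final class Hasher {
    String sha256Hex(String input) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] bytes = digest.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : bytes) {
                hex.append(String.format("%02x", b)); // unsigned hex per byte
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }
}
```

A pinning test then asserts a known input/output pair (e.g. the well-known SHA-256 of the empty string). If a library upgrade ever changes A => b into A => a, that test screams before stored values get corrupted.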
Also - Never getter or setter.
James recognizes the huge win of wrapping 3rd party code
When the third-party code introduces a breaking change, or needs to be replaced, modify the wrapper so no other code is affected.
This is the constructor chaining I do. Or Manual Dependency Injection. Avoid the framework. They add and hide complexity. Minimize the parameters, default as much as possible.
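The constructor-chaining / manual DI shape I mean (a sketch with invented names; Java here, but it's the same chained-constructor pattern in C#):

```java
// The dependency is an interface, so tests can hand in a shim.
interface Notifier {
    void send(String message);
}

// Default production implementation.
final class ConsoleNotifier implements Notifier {
    public void send(String message) {
        System.out.println(message);
    }
}

final class OrderProcessor {
    private final Notifier notifier;

    // Convenience constructor: production code calls this.
    // The dependency is defaulted - no DI framework, no magic.
    OrderProcessor() {
        this(new ConsoleNotifier());
    }

    // Full constructor: tests chain into this one with a shim/spy.
    OrderProcessor(Notifier notifier) {
        this.notifier = notifier;
    }

    void process(String orderId) {
        notifier.send("Processed order " + orderId);
    }
}
```

Production code sees a zero-argument constructor; tests see the full one. All the "injection" is visible, greppable Java, with nothing hidden in a container.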
I've had some ideas about it - but I don't like including it in production code. With a #if DEBUG around it... MAYBE... but BLEH... don't like that either.
It's an idea I'm not ruling out, just haven't been convinced it's the best way.
This, oddly, breaks some of the rules I've followed. I try to practice never using values from objects in the tests. Always hard code what you expect. This advocates having the collaborators from Overlapping Sociable Tests define the values. Which I get. I'll probably follow that when I get to doing the OSTs.
It makes sense when doing things slightly differently. In this case, when class A is under test and uses class B, our test shouldn't KNOW about B's internals. We then NEED to use B's methods to construct our asserts.
Maybe it isn't breaking the rules I've been following, not really. Not the intent of the rules?
In the end, nothing the class under test does should be used to assert what it does. But the collaborators we provide? We should be able to use those to verify the class under test did it correctly.
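A small sketch of that distinction (invented names): `Cart` is under test, `Price` is the collaborator we provide, and the assert is built from `Price`'s own methods rather than from anything `Cart` computed or from `Price`'s internals.

```java
// Collaborator: the test builds expected values through its public methods.
final class Price {
    private final int cents;

    Price(int cents) {
        this.cents = cents;
    }

    Price plus(Price other) {
        return new Price(cents + other.cents);
    }

    @Override public boolean equals(Object o) {
        return o instanceof Price && ((Price) o).cents == cents;
    }

    @Override public int hashCode() {
        return cents;
    }
}

// Class under test.
final class Cart {
    private final Price first;
    private final Price second;

    Cart(Price first, Price second) {
        this.first = first;
        this.second = second;
    }

    Price total() {
        return first.plus(second);
    }
}
```

The assert is `cart.total().equals(a.plus(b))` - the expectation comes from the collaborators we handed in, not from peeking at cents inside `Price` and not from re-running `Cart`'s own logic.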
It's 3rd party code. It's not our system - Wrap that shit up.
Focused Integration Tests
Make sure your BookEnds work right. Yep.
Testing the BookEnd itself, as an integration test, avoids the complicated dependencies of consumers of the BookEnd, and those consumers have the Overlapping Sociable Tests, so no worries.
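A focused integration test might look like this (a sketch, names invented): the BookEnd wraps the file system, and its one test exercises the real file system and nothing else.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// BookEnd over the file system. Consumers depend on this class and are
// tested with a shim; only this class gets a focused integration test.
final class FileStore {
    private final Path path;

    FileStore(Path path) {
        this.path = path;
    }

    void save(String contents) {
        try {
            Files.writeString(path, contents);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    String load() {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The round-trip test (save, then load through a fresh instance) proves the BookEnd really talks to the infrastructure; nothing above it ever needs to touch the disk in its tests.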
I like the idea of spinning up servers to run tests against. Don't do it a lot, but have tried in the past.
This is interesting, and I can see how it might take a bit of time to craft the first time through. Is it better than Creating BookEnds and Shims... I don't know. It's an interesting approach.
Same effect, for the most part.
There's some more patterns around how to test the infrastructure side. You should go read them.
It's an interesting approach. Very much in line with MicroObjects. BookEnds aren't specific to MicroObjects, just a technique to decouple from 3rd party dependencies. The Infrastructure Patterns are another way. Clearly I'm still favoring the way I've been using for years. :) It's great to get different insights and think about this some more.