Writing Tests Top Down

One bit of info that I’ve seen floating around on the ‘Net revolves around the idea that if you’re doing test-driven development you should start from the bottom up, writing tests for small pieces of functionality and then moving up through the layers. This sounds good, and I’m sure it works for some people, but I’m coming to the conclusion that it’s not right for me.

I was re-reading some of the older posts on Ted’s blog, and came across an article he linked to titled Test Driving Code: Top-Down or Bottom-Up? which made me think a little bit. My executive summary of the article is as follows…

  • Top-down testing makes you focus on the specification of the business requirements of the code. The implementation becomes less of a distraction. Once your test is finished and passing, it becomes easy to refactor without tripping over your implementation.
  • Bottom-up testing locks down your implementation to a certain degree. If you need to change your model for some reason, you also need to change your tests.

To give some background, my current project writes almost no unit tests. We write functional tests on DTO’s that are passed into and returned from a Facade layer. Our goal is to have each DTO map exactly to the system it’s targeting, which is most often the Presentation layer. Therefore, we’re effectively testing the very thin, logic-free (we hope) portion of the app that’s closest to the user. We don’t care how the DTO gets its data, just that the data it contains is correct. The end benefit is that we can easily refactor the domain model without tripping on our tests. The downside is that when a test fails, it doesn’t tell us where it’s failing, just that the client’s specification of what they want to see isn’t being met. That means we spend a lot more time in the debugger than we otherwise might. It’s a tradeoff.
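A minimal sketch of the functional-test style described above, with hypothetical names (`CustomerDTO`, `BillingFacade` are invented for illustration): the test exercises the Facade and asserts only on the DTO it returns, never on how the domain model produced the data.

```python
class CustomerDTO:
    """Flat data holder mapped to what the Presentation layer shows."""
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance


class BillingFacade:
    """Stand-in facade; the domain model behind it is free to change."""
    def get_customer(self, customer_id):
        # Domain logic elided -- the test below doesn't care how this works.
        return CustomerDTO(name="Acme Corp", balance=150.0)


def test_customer_dto_matches_client_spec():
    dto = BillingFacade().get_customer(42)
    # Assert only on the data the client specified, not on the implementation.
    assert dto.name == "Acme Corp"
    assert dto.balance == 150.0


test_customer_dto_matches_client_spec()
```

Because the assertions touch only the DTO’s fields, the domain model behind `BillingFacade` can be refactored freely without breaking this test.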

From my understanding, we largely got to this point through first-hand experience, as detailed by Ted in some of his posts.

I don’t have enough experience with writing actual unit tests to have a valid opinion on this matter, although I do know that I tend to think in a top-down manner, which fits in well with the way we write our functional tests. Now is that way of thinking natural to me, or a side-effect of coming to a project that already worked that way? I suspect I won’t know for a while.

One thought on “Writing Tests Top Down”

  1. Vlad

    When I am doing my own programming, I tend to do a bit of both, although in practice I think it winds up being “bottom up.” I do think about the higher level functionality, and I often start by writing a test for it. However, if I realize that the code I want to start writing can be supported by a smaller test, then I comment out the higher level test, get the small test passing, and move on from there.

    The reason I like this approach is that it leads to clean interfaces and a larger number of smaller tests. Some of the advantages are faster-running tests, and tests that are more isolated, so that if they fail, the failure is more likely the result of a smaller number of problems. Also, with functional tests, you may end up needing to write a cross product of several conditions. If you test individual functions, you can flatten that out from i * j * k… to just i + j + k…

    It’s true that if you change the design, you have to deal with the tests. My argument in favour is that the goal of development is, after all, to stabilize the code base over time. Sure, you can in theory re-write all of the code behind the Facade in our project and be reasonably happy if all the tests pass, but would you ever do that? In that case I think I’d rather just re-write the whole app from scratch anyway. However, functional tests are certainly better than nothing, and they have the advantage of being simpler conceptually. At the end of the day I’d like to see more true unit tests for our project, but still write functional and fit tests too.
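The cross-product point above can be made concrete with a small sketch (the condition lists below are invented for illustration): a functional test at the facade level must cover every combination of conditions, while per-function unit tests cover each list in isolation.

```python
from itertools import product

# Hypothetical conditions the code must handle, one list per dimension.
payment_methods = ["card", "cheque", "transfer"]   # i = 3
regions = ["NA", "EU"]                             # j = 2
statuses = ["active", "suspended", "closed"]       # k = 3

# Functional tests at the facade: one test case per combination.
functional_cases = list(product(payment_methods, regions, statuses))
assert len(functional_cases) == 3 * 2 * 3    # i * j * k = 18 cases

# Unit tests per function: each dimension is tested on its own.
unit_cases = len(payment_methods) + len(regions) + len(statuses)
assert unit_cases == 3 + 2 + 3               # i + j + k = 8 cases
```

With more dimensions or more values per dimension, the multiplicative count grows much faster than the additive one, which is the commenter’s argument for smaller, more isolated tests.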