
Mock Me, Amadeus

Posted on December 10, 2010


Nick Gauthier’s post about mock testing got some attention, and I was kind of opining about it on Twitter, but it’s well over 140 characters, so lucky you, I’ve decided to take it to the blog. (Do read Nick’s post, and then the comments, with Nick and Dave Chelimsky and others discussing the topic.)

I want to back up a bit from the argument between “Mock testing is bad for you and hurts puppies” and “Mock testing will make your breath fresh and cause you to lose weight.” (It’s possible I might be oversimplifying the two sides.) I’ve used mocks badly, and I stopped using them for a while. I want to discuss how I’ve started to use mocks with what feels like effectiveness. I’m not all that confident that it’s the best way – I expect to have changed my mind again in six months – but it’s working for me and I like it.

Take a fairly typical Rails feature, maybe a list filtered based on certain criteria. I think we can have this discussion without specifying exactly what the criteria are – let’s stay abstract.

From a user perspective, the only valid test is the acceptance test, which is “when I submit this form describing my filter, I see the objects that I expect”.
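In Cucumber terms, that acceptance test might look something like this sketch – the feature, the fields, and the data are all invented here, just to have something concrete to point at:

    Feature: Filtering a list of products
      Scenario: Filtering the list by name
        Given a product named "Widget"
        And a product named "Gadget"
        When I go to the products page
        And I fill in "Name contains" with "Wid"
        And I press "Filter"
        Then I should see "Widget"
        But I should not see "Gadget"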

From a developer’s perspective, though, that’s not enough information to fully test or code the feature. Specifically, it’s too big a chunk of code to do in one TDD step, so we need to break it into smaller pieces that we can write developer tests against.

The TDD process is supposed to ensure that any change in program logic is driven by a failed test. In this particular case, we’ll most likely be changing the program logic in two places.

  • The model object or objects being discussed need to change to take in parameters from the user request and return the expected set of objects.

  • The controller that is the target of the user request needs to convert the Rails request parameters to some kind of call to the model object.

I’m assuming the typical Rails thin-controller structure, where the controller basically makes a single call to a model that does most of the work.

We’re getting to mock objects, promise.

Okay, we’ve written a Cucumber acceptance test and it’s failing, so we need to start working on the actual application. It doesn’t matter technically whether we do the model or controller first, but for the purposes of storytelling, let’s talk about the model first.

The model testing is straightforward. We have some set of cases, usually the happy-path case, plus exceptional cases like blank input. Maybe we’ll test for case sensitivity or partial matches, or whatever the business logic requires.
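As a sketch – all the names are invented, since we’re staying abstract – the model specs might look something like this:

    # spec/models/product_spec.rb
    # Product and filtered_by_name are hypothetical names for this example.
    describe Product do
      describe ".filtered_by_name" do
        it "returns the products whose names match the filter" do
          widget = Product.create!(:name => "Widget")
          Product.create!(:name => "Gadget")
          Product.filtered_by_name("Wid").should == [widget]
        end

        it "returns everything when the filter is blank" do
          widget = Product.create!(:name => "Widget")
          Product.filtered_by_name("").should == [widget]
        end

        it "matches without regard to case" do
          widget = Product.create!(:name => "Widget")
          Product.filtered_by_name("wid").should == [widget]
        end
      end
    end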

For our purposes here, the main point is that we don’t mock the calls within the model layer. The way I do it, even if this filter spans multiple model objects, I don’t use mock objects between them, for most of the reasons that Nick lays out in his post.

You might argue that the database itself is a different layer than the model layer, and you could mock the actual calls to the database itself so that your tests don’t have the overhead of dealing with the database. I think that’s a useful line of argument in some web frameworks, but in Rails I generally haven’t found that helpful, except maybe for pure speed benefits. I do, however, try to write as many tests as I can without actually saving objects to the database.
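For example, when the logic under test doesn’t actually need a database query, an unsaved object does the job just fine (display_name here is another invented method, purely for illustration):

    # No save, no query -- the object never touches the database.
    it "combines name and code into a display name" do
      product = Product.new(:name => "Widget", :code => "W-1")
      product.display_name.should == "Widget (W-1)"
    end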

Okay, model tested. That brings us to the changes to the controller. (To keep this a little less complicated, I’m leaving the view test out of it; these days I tend to let Cucumber stand as my view testing, on the theory that if there’s view logic that doesn’t present an acceptance-testable change to the user, then it probably shouldn’t be view logic.) So, what is the behavior of the controller that needs to be tested and built?

There are two different ways of describing the controller’s behavior, each with a different implication for specifying that behavior. I’m not saying that one is right or one is wrong, but I do think that the kinds of other tests you are writing make one plan more useful than the other.

  • The controller’s job is to take the user’s input and present to the view the expected set of objects.

  • The controller’s job is to take the user’s input and send it to the model.

The difference, I guess, is between a conductor and a conduit – in both cases, the controller dispatches the work elsewhere, but in the conductor view, the controller object has more perceived responsibility for the behavior of the system.

Or, to put this another way, if the model behavior is not correct, should the controller test also fail? Thinking of the controller in the first way, the answer is yes, the controller’s job is to deliver objects to the view, and if the calls are incorrect, then the controller has also failed. In the narrower view of the controller’s responsibilities, the controller’s job is just to call the model layer. Whether or not the model is correct is somebody else’s problem.
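In test terms, the conductor version exercises the real model and asserts on what the controller hands to the view. A sketch, reusing the invented names from above:

    # spec/controllers/products_controller_spec.rb
    # Conductor style: a real model object, and an assertion on the result.
    describe ProductsController do
      it "assigns the products that match the filter" do
        widget = Product.create!(:name => "Widget")
        Product.create!(:name => "Gadget")
        get :index, :filter => "Wid"
        assigns(:products).should == [widget]
      end
    end

If the model’s filtering logic breaks, this test breaks right along with it.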

Before I started using Cucumber regularly, I tended toward the conductor metaphor, but when I also have Cucumber tests, controller tests written like that feel very redundant with the acceptance test. So now I’m more likely to use the conduit metaphor and just test that the controller has the expected interaction with the model.

Which means mocks. And that’s largely how I use mocks within my application – to prevent the controller layer tests from directly interacting with model code. (I also use mocks to avoid dealing with external libraries, but that is a different story.)
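Concretely, the conduit version of that same controller spec might look something like this – same invented names, classic RSpec mocking:

    # Conduit style: mock the model call and assert only on the interaction.
    describe ProductsController do
      it "passes the filter to the model and hands the result to the view" do
        products = [mock_model(Product)]
        Product.should_receive(:filtered_by_name).with("Wid").and_return(products)
        get :index, :filter => "Wid"
        assigns(:products).should == products
      end
    end

This spec fails if the controller stops making that call, and for no other reason.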

There are two potential problems with using mocks like this. First, the mocked controller test doesn’t fail if the model is wrong or the model’s API changes. If the mocked controller test is part of a larger test suite, though, this isn’t an issue. The controller test won’t fail, but something else will, either a model test or an integration test. You might even argue that limiting the number of failing tests from one code problem makes the issue easier to diagnose. (You might also argue that the Cucumber test makes the controller test irrelevant. I don’t agree, but you might argue it.) So this issue doesn’t bother me much.

The second thing to look out for is the inverse – the mock test will fail if the API to the model changes, even if the actual behavior at the end hasn’t changed. I have more trouble with this one, but it does tend to work out (while admitting that it has been annoying when I’ve gotten sloppy and used mocks to cover an uglier API). Assuming that the model API is reasonable, and that the controller’s mission in life is to call a particular API, then if that API changes, in some sense the controller’s behavior is changing, and that should trigger a failed test.

However, in order to prevent this kind of problem, you really do need to use your tests to drive the code structure and keep the API between the controller and the model clean. If the mocking starts to feel like a burden, that’s a sign the code is not properly factored.

I’m not yet prepared to defend that all the way to the death; as I’ve said, I have been bitten by cases where mocks made the tests more brittle by exposing the internals of the model to the controller in more detail than necessary. Almost always, that happens on legacy systems that were written without tests and might not have a clean separation between model and view concerns. I have much, much less trouble with newer code that has a TDD-accented modular design.

You can mess yourself up badly with mocks. I for sure have, and it kept me away from RSpec for a long time. I’ve gotten better at using them, I think, and have started using them more frequently, but pretty much as I’ve described here. It’s what works for me this week.





