Testing Strategies
Previously on Locally Sourced: I wrote about building a small feature in Hotwire. Also, I have two, count 'em, two, books for sale. Modern Front End Development For Rails (ebook) (amazon) and Modern CSS with Tailwind (ebook) (amazon).
I have this idea that teams get in trouble when they do something that they are “supposed to do” without understanding what problem they are trying to solve and what tradeoffs they are willing to make to solve it. This especially goes for things that “everybody” knows are a good idea, like testing. Or pair programming. (Or reviews, but that’s a different take).
Your team's testing practice might have any of these goals, and a few others that I'm probably missing:
- Prevent bugs from making it to production
- Increase confidence in making changes
- Improve the design of the code
- Replace manual testing
- Keep build times as small as possible
Each of these is a worthy goal, but each would end up with a different set of tests being written, and if you try to do all of them, you'll wind up with a lot of duplicate tests and huge extra carrying costs. Eventually you'll wind up in the testing uncanny valley, where you have the costs of maintaining a large test suite without the benefit of confidence in your code.
I wanted to go into my project tracker side project with a testing strategy that made my testing goals explicit and that I could use to make decisions about what tests to write. Then I'd be able to refine it as it inevitably turned out not to work.
To be clear, writing an explicit testing strategy is not a technique that I’ve battle-tested on a bunch of projects. It is, though, an extension of “Why did you hire this test?” and my general bias that explicit guidelines that you can refer to and refine are better than implicit knowledge that nobody fully understands.
Here’s the testing strategy that I’m starting with for this tracker project.
Testing Goal
- The goal of this test suite is to increase confidence in the code and increase my ability to make changes.
Techniques
- Each new feature gets a happy-path end-to-end test in Cypress.
- Error-case end-to-end tests only where the error behavior is unusual or important.
- Ruby code for back end logic will be written using RSpec and TDD, or at least written in very small bits with fast unit tests as the focus (there's a rough sketch of what I mean after this list).
- Ruby tests only need to be written where logic is added on top of Rails. A normal Rails path with no conditionals is not considered extra logic.
- Cypress tests won't be written test-first, because I find it really hard to write an end-to-end test without at least a scratch implementation of the markup. But I will try to write them in small increments.
- New tests will be written as bugs are found, even in development.
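To make the Ruby bullets a little more concrete, here's a rough sketch of the kind of test I have in mind. The Project model, its projected_finish_date method, and the attributes are hypothetical stand-ins, not code from the actual tracker; the point is that the test exercises a small piece of logic layered on top of Rails and stays fast because it never goes near the browser.

```ruby
# spec/models/project_spec.rb -- hypothetical example, not code from the
# actual tracker. A Project that projects a finish date from the points
# remaining and its recent weekly velocity.
require "rails_helper"

RSpec.describe Project, type: :model do
  describe "#projected_finish_date" do
    it "projects forward from remaining points and weekly velocity" do
      project = Project.new(remaining_points: 12, weekly_velocity: 4)

      expect(project.projected_finish_date).to eq(Date.current + 3.weeks)
    end

    it "returns nil when there is no velocity to project from" do
      project = Project.new(remaining_points: 12, weekly_velocity: 0)

      expect(project.projected_finish_date).to be_nil
    end
  end
end
```

A plain Rails CRUD path with no conditionals, per the rule above, wouldn't get a test like this at all.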
Speed
- Given the list of test speeds in this post, individual Ruby test files should be runnable in level 2 (fast enough to not break your focus), and I'd like the entire Ruby suite to stay in that level, which should be possible.
- Level 2 speed is almost certainly not feasible for the Cypress suite as a whole, but it might be possible for individual tests. But for practical reasons, I’m less concerned about the speed of the Cypress tests.
Tradeoffs
- The focus of the end-to-end tests is behavior, not necessarily the minutiae of look and feel. I acknowledge this may cause the tests to be incomplete, at least to start.
- It's okay if this setup is a little short on coverage to start, with the idea that the bug tests will catch it up.
- I am willing to trade off end-to-end coverage in the name of overall test suite speed.
That's my plan. It lays out an abstract goal and a speed target, sets out techniques for how I might proceed, and crucially states that I'm not going to aim for 100% coverage as an end in itself.
There are a couple of reasons why I’m comfortable playing a little fast and loose with coverage. On this project, bugs might not be the end of the world, especially given this code might never have a user who isn’t named Noel Rappin.
More broadly, I’ll throw out the Hot Take that for most features on most web applications, if you never have a bug in production you have spent too much time and effort on QA.
Let me put that a different way. Most web teams have an infinite amount of work and a finite amount of time. Some bugs are easy to find and fix, of course, but at some point along the curve of increasing bug subtlety, the time spent squashing the bug is time better spent building new stuff; you just aren't preventing enough cost by fixing the bug.
And for this app, where I am going to be the only coder for the foreseeable future, and the app is only going to happen if it's fun for me… I mean, I like writing tests, but there are limits to everything.
Writing tests to cover bugs I find in development is a key to this strategy, though. The idea is that buggy code tends to cluster, so a bug in code is an indicator that the code in question needs more scrutiny.
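As a purely hypothetical illustration (the bug, the Card association, and the remaining_points method are all made up), the regression test for a bug found in development might look something like this:

```ruby
# Hypothetical regression test, reusing the made-up Project model from the
# earlier sketch. Suppose I found a bug where archived cards still counted
# toward the remaining work; once it's fixed, this test keeps the bug from
# quietly coming back.
require "rails_helper"

RSpec.describe Project, type: :model do
  describe "#remaining_points" do
    it "does not count points from archived cards" do
      project = Project.create!(name: "Tracker")
      project.cards.create!(points: 3)
      project.cards.create!(points: 5, archived: true)

      expect(project.remaining_points).to eq(3)
    end
  end
end
```

The test documents the bug as much as it prevents it, which is also a nudge to look harder at the surrounding code.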
We’ll see how it goes.
What does a test strategy for your project look like? What are you willing to say is your top priority and what are you willing to let be a secondary priority?
The permanent home of this post is https://noelrappin.com/blog/2021/06/testing-strategies/, let me know what you think.
Coming soon: The test strategy in practice…