Unit tests are hard!
They’re not hard in the same way as, say, a stubborn concurrency bug is hard.
Unit tests are hard because you have to write a lot of them, and they have to be useful and correct, now and forever.
After the monotony of adding dozens of unit tests to verify your new change, it’s especially tempting to commit them as soon as they start passing.
But there’s one more step you have to do! Make sure they actually fail if the code is wrong.
Imagine a function that multiplies two numbers, and you’re updating it to support a division mode (I don’t know why! Did you bring these concerns up during planning?). Here’s one error I often see:
import unittest

def multiply_or_divide(a, b, multiply=True):
    return a*b if multiply else a/b

class test(unittest.TestCase):
    def testMultiply(self):
        self.assertEqual(multiply_or_divide(3, 4), 12)

    def testDivide(self):
        self.assertEqual(multiply_or_divide(5, 1), 5)

test().testMultiply()
test().testDivide()
Great, both tests pass! Of course, the divide test doesn’t really test the divide path because you forgot to pass the multiply=False flag. It only passes because 5*1 happens to be the same as 5/1.
But if you did the ‘make sure this test fails’ step and ran the tests against this code:
def multiply_or_divide(a, b, multiply=True):
    return a*b  # if multiply else a/b TODO do not commit, commented out for test
then you’ll know something is up when the test doesn’t fail!
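For completeness, a corrected divide test might look like the sketch below: it passes multiply=False so the divide path actually runs, and it picks inputs where a*b and a/b disagree. The FixedTest name is just for this sketch; it reuses multiply_or_divide and unittest from above.

class FixedTest(unittest.TestCase):
    def testDivide(self):
        # multiply=False forces the divide path, and 8/2 != 8*2,
        # so a multiply-only bug can't sneak through
        self.assertEqual(multiply_or_divide(8, 2, multiply=False), 4)

FixedTest().testDivide()  # goes red against the sabotaged code above, as it should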
That’s a Ridiculous Example
Ok, sure. A more realistic version of ‘the test is doing nothing because it doesn’t even run your new code’ would look something like:
1) You need to add another branch to some legacy function nobody owns anymore
2) The function has tests, and they pass after your change! Ship it!
3) Except your code is never even run, because tests never existed for the branch you’re modifying, or the test is actually for a very similarly-named file that has been pasted into another directory, or…
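To make that third failure mode concrete, here’s a sketch with hypothetical names:

import unittest

# legacy.py: nobody owns this anymore
def parse_amount(s):
    if s.endswith("%"):  # your shiny new branch
        return float(s[:-1]) / 100
    return int(s)

# The existing test: green before and after your change,
# because it never reaches the new branch.
class TestParseAmount(unittest.TestCase):
    def test_plain_integer(self):
        self.assertEqual(parse_amount("5"), 5)

TestParseAmount().test_plain_integer()

The suite passes, the PR looks covered, and the "%" branch has never executed once.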
There Has to Be a Better Way
Test-driven development allegedly solves this sort of thing. You write your tests first (and they fail, naturally). Then you write your feature, and the tests turn green!
This is a fabulous idea that I almost never use. Why? Because the way I write software is very iterative, and my initial design is rarely a 100% match for what I ship in the end. I always find better ways or hidden constraints after I start writing.
If I write my tests up-front, then I have to rewrite them as well, resulting in a) churn and b) tests that have never failed, taking me back to square one.
While TDD is a beautiful ideal, the pragmatic habit I actually stick to works better for me! So right before you publish that PR, break your code, and make sure the tests fail.
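For the multiply_or_divide example, the sabotage can be as small as this (temporary, never committed):

def multiply_or_divide(a, b, multiply=True):
    # temporary sabotage: any test that truly exercises
    # the divide path must now fail
    if not multiply:
        raise AssertionError("divide path deliberately broken")
    return a*b if multiply else a/b

Run the suite. Every divide test should go red; any that stay green never ran your new code. Revert the sabotage, fix the gap, and ship.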