Book1
Most programmers I've talked to believe it's a good idea to write automated tests for all their code. But the fact is, real-world software is often quite difficult to test. When the production code makes network requests, checks the current time, or generates random data, testing it requires elaborate setup, including the creation of mocks. Tests for this kind of code tend to be long and complex. They rarely inspire confidence that the software is actually working. I've seen tests that have more bugs than the code they're testing. I've even seen tests that short-circuit themselves, and pass without testing any production code at all. To make matters worse, complicated tests—especially those that make heavy use of mocks—often have to be completely rewritten when we make improvements to the code.
Because automated testing is so difficult, we often perform an informal cost/benefit analysis when we think of a test we "should" write. Does the likely value of this test to future maintainers outweigh the cost of writing it? Sometimes, the answer is "no". The software is so hard to test that the pragmatic option is simply to omit certain test cases.
Less testing, of course, means that more bugs escape into production. When code is hard to test, its quality suffers. But more importantly, we suffer—and that suffering creates a detrimental feedback loop. When testing is an unpleasant chore—a to-do item we have to check off before the software can ship—we are less likely to engage fully with it. That leads to poorer test coverage, lower quality, and a worse experience for everyone involved with the software: users, customers, and developers.
If we're to improve on the situation I've described, we need to reframe the idea of testing.
Testing is not something we do to get the bugs out of our code. Rather, testing is something we can do to get objective feedback about our code, in real time, as we work. Testing is part of a programming process that lets us avoid putting bugs into the code in the first place.
Testing is not an upfront cost that we pay to benefit some hypothetical future developer. Rather, it's an aid to our thinking that benefits us, at the moment we write the test.
The behavior of our software is not inherently hard to test. We can rethink the design of our code so it's easy to test. Testable code is often more reusable, maintainable, and observable as well.
By adopting the view of testing above, we can reverse the destructive feedback loop of discouraging tests, poorly-tested code, and unhappy programmers. We can write simple, well-tested code that promotes feelings of confidence, satisfaction, and joy.
Confidence means knowing your code will work in production. Passing tests can boost our confidence, but the main factor in how confident we are is whether we grok how the code works. The primary benefit of tests is that they help us reach that deep understanding.
Satisfaction comes from feeling that the value you produce is proportional to the effort you put in. When the code and tests are simple and reliable, the programming process becomes smooth and untroubled. There are fewer production issues, bugs, and other interruptions. Improving the software becomes a calm, step-by-step process, rather than a series of heroic feats and narrow escapes.
Joy is more subtle. Joy comes from feeling connected to our work and to other people. When we feel that our code stands on the shoulders of giants—while also recognizing that those "giants" are people who struggle with the same things that we do—we can feel joy. When we're empowered to shape the code however it needs to be shaped, to do whatever we need it to do—that's joy, too. We can make deeper connections, too, by seeing our code not just as code, but as a contribution to the endless project of human knowledge. The Appendix on the scientific method might help you make this connection.
Confidence, satisfaction, and joy are the human benefits that well-tested code provides.
Test-driven development is a process for writing code, (re-)discovered and popularized by Kent Beck. It's often described as a three-step cycle:
- Write a test for functionality you wish you had. Watch it fail.
- Write the code to make the test pass.
- Refactor the code to simplify it, while keeping all the tests passing.
By only adding functionality to our code when a failing test forces us to, we ensure that all the functionality is tested. By refactoring code to remove hardcoded values and special-case "if" statements, we ensure that it generalizes beyond the specific cases we've tested.
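Here's a minimal sketch of one turn of the cycle in Go (the `Greet` function and its behavior are invented for illustration, not taken from any real codebase):

```go
// greet_test.go
package greet

import "testing"

// Step 1: write a test for functionality we wish we had.
// With no Greet function yet, this fails to compile: our "red" step.
func TestGreet(t *testing.T) {
	got := Greet("World")
	want := "Hello, World!"
	if got != want {
		t.Errorf("Greet(%q) == %q, want %q", "World", got, want)
	}
}
```

```go
// greet.go
package greet

// Step 2: the simplest code that passes might return a hardcoded
// "Hello, World!". Step 3 refactors away the hardcoded value into
// this general form, with the test staying green throughout.
func Greet(name string) string {
	return "Hello, " + name + "!"
}
```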
TDD is, unfortunately, one of the most misused and maligned buzzwords in the software field today. Over the last two decades, lots of people have published their own spin on it, often saddling it with unhelpfully dogmatic baggage. If you learned TDD from one of those sources, you might have found it... well, unhelpfully dogmatic.
Even if you've hated your TDD experiences so far, I hope this book will convince you to give it another chance. To clarify what I think TDD is not, here is a short list:
Test-driven development is NOT:
- writing tests for every method of every class
- automated testing through the UI
- always writing tests before the production code
- 100% test coverage
- testing classes and functions in isolation by mocking out all their dependencies
- having to wait more than a fraction of a second for your tests to run
If these are the things about "TDD" that have vexed you, you might like the way this book treats it. I believe this treatment is aligned with Kent Beck's vision of the practice. Here's the man himself:
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.
—Kent Beck on Stack Overflow (2008, https://stackoverflow.com/a/153565)
Notes, in priority order:
Multiple assertions per test are OK, as long as the expected values are unique.
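For instance (a sketch using Go's standard library; any value with several distinct properties would do), each assertion has a unique expected value, so a failure message identifies exactly which property went wrong:

```go
package example

import (
	"net/url"
	"testing"
)

func TestParseURL(t *testing.T) {
	u, err := url.Parse("https://example.com:8080/docs")
	if err != nil {
		t.Fatal(err)
	}
	// Three assertions, three unique expected values: no two
	// failures can be confused with each other.
	if u.Scheme != "https" {
		t.Errorf("Scheme == %q, want %q", u.Scheme, "https")
	}
	if u.Host != "example.com:8080" {
		t.Errorf("Host == %q, want %q", u.Host, "example.com:8080")
	}
	if u.Path != "/docs" {
		t.Errorf("Path == %q, want %q", u.Path, "/docs")
	}
}
```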
Don't pretend to do load testing in a functional test. Avoid realistic values in programmer unit tests.
Avoid negative assertions. (They're okay when they're syntactic sugar for tight assertions, e.g. gomega's Expect(err).NotTo(HaveOccurred()); see the sketch below.)
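A sketch of the distinction (the `status` value is invented for illustration):

```go
package example

import (
	"testing"

	. "github.com/onsi/gomega"
)

func TestStatus(t *testing.T) {
	g := NewWithT(t)
	status, err := "OK", error(nil)

	// Loose negative assertion: it passes for infinitely many wrong
	// values, so a pass tells us almost nothing about status.
	if status == "ERROR" {
		t.Errorf("status == %q, want anything but ERROR", status)
	}

	// Tight assertion: pins status to the one expected value.
	if status != "OK" {
		t.Errorf("status == %q, want %q", status, "OK")
	}

	// A negative assertion that's fine, because it's sugar for
	// the tight claim "err is exactly nil".
	g.Expect(err).NotTo(HaveOccurred())
}
```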
A process is the running instantiation of a program...
Avoid passing objects to objects.
No missed opportunities for abstraction: often code can be made app-agnostic just by renaming things.
== "parse, don't validate"
Use test doubles to evoke an open class of collaborators.
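One way to read that note (a sketch; the `Clock` interface is invented): depend on an interface, and the production code is written against the whole open class of implementations, of which a trivial test double is one member:

```go
package sched

import "time"

// Clock names an open class of collaborators: IsExpired accepts
// any implementation, not one specific clock.
type Clock interface {
	Now() time.Time
}

// systemClock is the production member of the class.
type systemClock struct{}

func (systemClock) Now() time.Time { return time.Now() }

// fixedClock is a test double: another, trivial member.
type fixedClock struct{ t time.Time }

func (c fixedClock) Now() time.Time { return c.t }

// IsExpired is trivially testable with a fixedClock; no mocks needed.
func IsExpired(deadline time.Time, clock Clock) bool {
	return clock.Now().After(deadline)
}
```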
All essential state wants to be global, so use something like Redux to make that suck less.
You'll need to serialize your essential state, so keep it in a form that's amenable to JSONification.
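For example, in Go (the `AppState` fields are invented): essential state kept as plain data, with no function values, channels, or other surprises, round-trips through JSON without ceremony:

```go
package state

import "encoding/json"

// AppState holds essential state as plain, JSON-friendly data.
type AppState struct {
	CurrentUser string   `json:"currentUser"`
	CartItems   []string `json:"cartItems"`
	DarkMode    bool     `json:"darkMode"`
}

func Save(s AppState) ([]byte, error) { return json.Marshal(s) }

func Load(b []byte) (AppState, error) {
	var s AppState
	err := json.Unmarshal(b, &s)
	return s, err
}
```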
Caching? Monitoring? Error handling? Null checks? These concerns don't need to be entangled with your business logic.
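One way to keep them untangled (a sketch; the `Catalog` interface is invented) is to layer a concern like caching around the business logic as a decorator:

```go
package catalog

// Catalog is the business-logic interface; implementations know
// nothing about caching.
type Catalog interface {
	Price(sku string) (int, error)
}

// cachingCatalog layers caching around any Catalog.
// (Not safe for concurrent use; a mutex would fix that.)
type cachingCatalog struct {
	inner Catalog
	cache map[string]int
}

func WithCache(inner Catalog) Catalog {
	return &cachingCatalog{inner: inner, cache: map[string]int{}}
}

func (c *cachingCatalog) Price(sku string) (int, error) {
	if p, ok := c.cache[sku]; ok {
		return p, nil
	}
	p, err := c.inner.Price(sku)
	if err == nil {
		c.cache[sku] = p
	}
	return p, err
}
```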
UIs are designed for humans, not tests. You need to manually test your UI anyway.
Glossary:
- Test
- Automated Test
- Manual Test
- Formal Test
- Informal Test
- Unit Test
- System Test
- Functional Test
- Non-functional Test
- Integration Test
- Contract Test
- Sum Type
- Union Type
- Object
- Entity
- Essential State
- Accidental State
- Test Double
- Mock
- Stub
- Fake
- Spy
- Dummy
- Call Graph
- Dependency Graph
Programming is learning