
Learning to Love Tests and Types

What Is This Book About?

The trouble with testing

Most programmers I've talked to believe it's a good idea to write automated tests for all their code. But the fact is, real-world software is often quite difficult to test. When the production code makes network requests, checks the current time, or generates random data, testing it requires elaborate setup, including the creation of mocks. Tests for this kind of code tend to be long and complex. They rarely inspire confidence that the software is actually working. I've seen tests that have more bugs than the code they're testing. I've even seen tests that short-circuit themselves, and pass without testing any production code at all. To make matters worse, complicated tests—especially those that make heavy use of mocks—often have to be completely rewritten when we make improvements to the code.

Because automated testing is so difficult, we often perform an informal cost/benefit analysis when we think of a test we "should" write. Does the likely value of this test to future maintainers outweigh the cost of writing it? Sometimes, the answer is "no". The software is so hard to test that the pragmatic option is simply to omit certain test cases.

Less testing, of course, means that more bugs escape into production. When code is hard to test, its quality suffers. But more importantly, we suffer—and that suffering creates a detrimental feedback loop. When testing is an unpleasant chore—a to-do item we have to check off before the software can ship—we are less likely to engage fully with it. That leads to poorer test coverage, lower quality, and a worse experience for everyone involved with the software: users, customers, and developers.

A different view of testing

If we're to improve on the situation I've described, we need to reframe the idea of testing.

Testing is not something we do to get the bugs out of our code. Rather, testing is something we can do to get objective feedback about our code, in real time, as we work. Testing is part of a programming process that lets us avoid putting bugs into the code in the first place.

Testing is not an upfront cost that we pay to benefit some hypothetical future developer. Rather, it's an aid to our thinking that benefits us, at the moment we write the test.

The behavior of our software is not inherently hard to test. We can rethink the design of our code so it's easy to test. Testable code is often more reusable, maintainable, and observable as well.

Confidence, satisfaction, and joy

By adopting the view of testing above, we can reverse the destructive feedback loop of discouraging tests, poorly-tested code, and unhappy programmers. We can write simple, well-tested code that promotes feelings of confidence, satisfaction, and joy.

Confidence means knowing our code will work in production. Passing tests can boost our confidence, but the main factor in how confident we are is whether we grok how the code works. The primary benefit of tests is that they help us reach that deep understanding.

Satisfaction comes from feeling that the value you produce is proportional to the effort you put in. When the code and tests are simple and reliable, the programming process becomes smooth and untroubled. There are fewer production issues, bugs, and other interruptions. Improving the software becomes a calm, step-by-step process, rather than a series of heroic feats and narrow escapes.

Joy is more subtle. Joy comes from feeling connected to our work and to other people. When we feel that our code stands on the shoulders of giants—while also recognizing that those "giants" are people who struggle with the same things that we do—we can feel joy. When we're empowered to shape the code however it needs to be shaped, to do whatever we need it to do—that's joy, too. We can make still deeper connections by seeing our code not just as code, but as a contribution to the endless project of human knowledge. The Appendix on the scientific method might help you make this connection.

Confidence, satisfaction, and joy are the human benefits that well-tested code provides.

How do we get there?

Test-Driven Development

Test-driven development is a process for writing code, (re-)discovered and popularized by Kent Beck. It's often described as a three-step cycle:

  1. Write a test for functionality you wish you had. Watch it fail.
  2. Write the code to make the test pass.
  3. Refactor the code to simplify it, while keeping all the tests passing.

By only adding functionality to our code when a failing test forces us to, we ensure that all the functionality is tested. By refactoring code to remove hardcoded values and special-case "if" statements, we ensure that it generalizes beyond the specific cases we've tested.
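
To make the cycle concrete, here is one pass through it, sketched in TypeScript with Jest-style assertions. The totalPrice function and its test are invented for illustration, not taken from any real codebase:

    // Step 1: a failing test for functionality we wish we had.
    // (totalPrice doesn't exist yet, so this test fails.)
    test("totals the prices of the items in a cart", () => {
      expect(totalPrice([{ price: 3 }, { price: 4 }])).toBe(7);
    });

    // Step 2: the simplest code that makes the test pass. Even a
    // hardcoded value would be legitimate at this point:
    //   function totalPrice(items: { price: number }[]) { return 7; }

    // Step 3: refactor to remove the hardcoding, generalizing beyond the
    // specific case we tested, while the test stays green.
    function totalPrice(items: { price: number }[]): number {
      return items.reduce((sum, item) => sum + item.price, 0);
    }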

TDD is, unfortunately, one of the most misused and maligned buzzwords in the software field today. Over the last two decades, lots of people have published their own spin on it, often saddling it with unhelpfully dogmatic baggage. If you learned TDD from one of those sources, you might have found it... well, unhelpfully dogmatic.

Even if you've hated your TDD experiences so far, I hope this book will convince you to give it another chance. To clarify what I think TDD is not, here is a short list:

Test-driven development is NOT:

  • writing tests for every method of every class
  • automated testing through the UI
  • always writing tests before the production code
  • 100% test coverage
  • testing classes and functions in isolation by mocking out all their dependencies
  • having to wait more than a fraction of a second for your tests to run

If these are the things about "TDD" that have vexed you, you might like the way this book treats it. I believe this treatment is aligned with Kent Beck's vision of the practice. Here's the man himself:

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.

Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.

—Kent Beck on Stack Overflow (2008, https://stackoverflow.com/a/153565)

Algebraic Type-Driven Design

Compositional Reasoning

Domain Modeling

Techniques

In priority order

Get Feedback in Four Hundred Milliseconds

Get Feedback on Every Change

Give Up On Proving Correctness

Make Optimistic Hypotheses and Disprove Them Ruthlessly

Demonstrate That Your Tests Can Fail

Design Tests to Fail Informatively

Multiple assertions per test are OK, as long as the expected values are unique.
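
For instance, in a hypothetical Jest-style test (parsePhoneNumber is an invented function), distinct expected values mean the failure output alone identifies which assertion broke:

    test("parses a US phone number", () => {
      const parsed = parsePhoneNumber("(555) 123-4567");
      // Each expected value is unique, so a failure message such as
      // "expected '555', received '123'" pinpoints the broken property.
      expect(parsed.areaCode).toBe("555");
      expect(parsed.exchange).toBe("123");
      expect(parsed.lineNumber).toBe("4567");
    });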

Keep Test Output Tidy

Call Your Shots and Investigate Surprises

Don't Believe Code Coverage Tools

Don't Bother With Automated Tests For Unconditional, Unstable Code

Move Logic Where You Can Test It

Know What—and Why—You're Testing

Don't pretend to do load testing in a functional test. Avoid realistic values in programmer unit tests.
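A sketch of the unit-test half of that advice, with an invented formatName function: deliberately fake values signal that the test is about formatting, not about any particular person's data.

    test("formats a name as 'last, first'", () => {
      // "Firstname"/"Lastname" are obviously unrealistic, so a reader
      // knows no behavior depends on the specific strings chosen.
      expect(formatName({ first: "Firstname", last: "Lastname" }))
        .toBe("Lastname, Firstname");
    });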

Design So You Can Test Only What You're Curious About

Answer One Question At A Time

Design Interfaces, Refactor Implementations

Test Exceptional And Boundary Cases First

Write Fewer Integration Tests

Flatten Your Test Suites

Inline Test Values

Reduce Test Setup To Its Essence—But No Further

Use Test Data Builders

Create New Objects For Each Test

Keep Assertions Snug

Avoid negative assertions

Negative assertions are okay when they're syntactic sugar for tight assertions, e.g. Gomega's Expect(err).NotTo(HaveOccurred()).
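
A TypeScript analogue of that Gomega example, assuming Jest (parseConfig, validInput, and result are invented names): the first assertion is phrased negatively but has exactly one way to pass, so it is effectively tight.

    // Nominally negative, but really the tight assertion "this call
    // completes normally": there is only one way to pass.
    expect(() => parseConfig(validInput)).not.toThrow();

    // A genuinely loose negative assertion, by contrast, passes for an
    // open-ended set of wrong values. Prefer asserting what the result is.
    expect(result).not.toBe("ERROR");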

Separate Effects, State, and Calculations

A process is the running instantiation of a program...
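
One way to sketch the three categories in TypeScript (all names invented): the calculation is testable with plain assertions, the state is testable by construction, and only the effect touches the world outside the process.

    // A calculation: same inputs, same outputs, nothing else observable.
    function discountedTotal(prices: number[], discount: number): number {
      return prices.reduce((sum, p) => sum + p, 0) * (1 - discount);
    }

    // State: remembered between calls, but still effect-free.
    class Cart {
      private prices: number[] = [];
      add(price: number): void {
        this.prices.push(price);
      }
      total(discount: number): number {
        return discountedTotal(this.prices, discount);
      }
    }

    // An effect: an interaction with the world outside the process.
    async function chargeCard(cents: number): Promise<void> {
      await fetch("https://payments.example.com/charge", {
        method: "POST",
        body: JSON.stringify({ cents }),
      });
    }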

Move Effects Up The Call Stack

Keep Effects Generic

Maintain Ephemeral State In Objects

Pass Data to Objects

Avoid passing objects to objects.
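A sketch of the difference, with invented names: a renderer that takes plain data can be tested with a literal, while one that took a Cart object would drag the whole Cart interface into every test.

    // Passing an object couples the renderer to Cart's whole interface:
    //   receiptRenderer.draw(cart)
    // Passing plain data keeps the renderer generic and easy to test:
    type LineItem = { name: string; cents: number };

    class ReceiptRenderer {
      draw(items: LineItem[]): string {
        return items
          .map((i) => i.name + ": $" + (i.cents / 100).toFixed(2))
          .join("\n");
      }
    }

    // In a test, the input is just a literal:
    //   new ReceiptRenderer().draw([{ name: "tea", cents: 250 }])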

Represent Time as a Message

Separate Business Logic From Presentation

Separate Mechanism From Policy

Call Things What They Are

Don't miss opportunities for abstraction: often, code can be made app-agnostic just by renaming things.
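
For example (invented names): the body doesn't change at all; only the name stops claiming the code is app-specific.

    // Before: the name ties general-purpose code to one application.
    //   function formatBlogPostDate(d: Date): string { ... }

    // After: the same body, renamed to reveal it was never blog-specific.
    function formatLongDate(d: Date): string {
      return d.toLocaleDateString("en-US", {
        year: "numeric",
        month: "long",
        day: "numeric",
      });
    }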

Treat Tests As An Append-Only List Of Requirements

Document Desired Behaviors In Tests, Not In Production Code

Write Characterization Tests For Legacy Code

Separate Input, Processing, And Output

Rule Out Inconsistencies Early

In other words: "parse, don't validate."
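
A TypeScript sketch of the idea using a branded type (Email, parseEmail, and the toy "@" check are all invented for illustration):

    // Validation answers yes/no and then forgets the answer; parsing
    // produces a value whose type cannot represent the inconsistency.
    type Email = string & { readonly __brand: "Email" };

    function parseEmail(raw: string): Email | null {
      return raw.includes("@") ? (raw as Email) : null; // toy check
    }

    // Downstream code demands Email, so an unvetted string is rejected
    // at compile time; the inconsistency is ruled out once, early.
    function sendWelcome(to: Email): void {
      // ...
    }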

Enumerate The Possible States

Eliminate Conditionals by Simplifying State

Represent Independent Variables As Independent Variables

Separate App Code From General-Purpose Code

Flatten The Call Graph

Use The Part You're Certain About To Discover The Part You're Uncertain About

Avoid Mocking Static Dependencies

Use test doubles to evoke an open class of collaborators

Use Contracts to "Cut" the Dependency Graph

Design Contracts Around Algebraic Properties

Fit Contracts To Ubiquitous Interfaces

Use Types to Prove the Parts Fit Together

Generate Types From API Specs

Represent Domain Errors As Algebraic Types

Represent Infrastructural Errors As Exceptions

Represent Entities As Data, Not Objects

All essential state wants to be global, so use something like Redux to make that suck less.

You'll need to serialize your essential state, so keep it in a form that's amenable to JSONification.
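A sketch of what "amenable to JSONification" can mean in TypeScript (the AppState shape is invented): plain data only, so round-tripping through JSON is lossless.

    // Plain, JSON-friendly data: no class instances, functions, Dates,
    // or Maps, so serialization is trivial.
    type AppState = {
      todos: { id: string; title: string; done: boolean }[];
      selectedTodoId: string | null;
    };

    const state: AppState = {
      todos: [{ id: "1", title: "write tests", done: false }],
      selectedTodoId: "1",
    };

    const saved = JSON.stringify(state);          // persist or transmit
    const restored: AppState = JSON.parse(saved); // the same state back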

Encapsulate Accidental State

Design Essential State to Be Persisted and Transmitted

Version and Migrate Essential State

Compose Aspects

Caching? Monitoring? Error handling? Null checks? These concerns don't need to be entangled with your business logic.
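One way to keep them out, sketched in TypeScript with invented names: express the concern as a generic wrapper and compose it around the business logic at the edge of the app.

    type User = { id: string; name: string };

    // Business logic, free of cross-cutting concerns.
    async function fetchUser(id: string): Promise<User> {
      const res = await fetch("https://api.example.com/users/" + id);
      return res.json();
    }

    // Caching as a standalone, generic aspect.
    function cached<A, R>(fn: (arg: A) => Promise<R>): (arg: A) => Promise<R> {
      const cache = new Map<A, R>();
      return async (arg: A): Promise<R> => {
        if (!cache.has(arg)) cache.set(arg, await fn(arg));
        return cache.get(arg)!;
      };
    }

    // Composed at the edge; neither function knows about the other.
    const fetchUserCached = cached(fetchUser);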

Refactor Tests While They're Failing

Don't Test Functionality Through The UI

UIs are designed for humans, not tests. You need to manually test your UI anyway.

Shun Complicated Testing Tools

Peer-Review Process, Not Just Code

Embrace Imperfection

Glossary

  • Test

  • Automated Test

  • Manual Test

  • Formal Test

  • Informal Test

  • Unit Test

  • System Test

  • Functional Test

  • Non-functional Test

  • Integration Test

  • Contract Test

  • Sum Type

  • Union Type

  • Object

  • Entity

  • Essential State

  • Accidental State

  • Test Double

  • Mock

  • Stub

  • Fake

  • Spy

  • Dummy

  • Call Graph

  • Dependency Graph

Appendix A: Testing and the Philosophy of Science

Appendix B: The Behavior of a Software System

Appendix C: Research and Development

Programming is learning

Appendix D: The Dao of Test-Driven Development
