Ben Christel edited this page Oct 23, 2022 · 22 revisions

Software Application Engineering

How to Write Code You Know Will Work

In this book, bold headlines alternate with supporting details.
To get the gist, you can just read the headlines.

As you try to put these ideas into practice, you'll undoubtedly have questions. That is a good time to come back to the book and thoroughly understand the details.

The programming language used in this book is TypeScript.

What is Software Engineering?

Engineering means applying science and mathematics to the design of practical things, in order to make them a better fit for the contexts of their creation, distribution, use, and retirement.

The goal of software engineering is to create systems that are free of errors and annoyances, easy to change, and satisfying to work on and use. This book explains how to write code that has these attributes.

The fundamental techniques are test-driven development and type-driven design. These techniques aid the creation of multi-paradigm code that can be mentally modeled and therefore known to behave a certain way. When we understand what code does, how, and why, we can change it without fear of breaking it. That makes the system satisfying to work on because the outcomes we produce are in proportion to our effort: working on the software creates value reliably and steadily, without turmoil, catastrophe, or heroism. All these qualities enable us to adapt the system to the changing needs of its users, so it stays useful and usable in the long run.

The term software engineering is a controversial one in 2022. Programmers, and others involved in the software trade, sometimes muse doubtfully about whether software engineering is a "real" engineering field. I contend that, while much of the software-making that takes place today is not engineering, some of it is, and software could indeed become an engineering discipline in the not-too-distant future. I hope this book will give you a good sense of what such a shift might look like, and get you excited about being involved in it.

We apply the scientific method to programming by testing our software. We apply mathematics by using type algebra.

I used to think that there wasn't much science in computer science, and not much interesting math in your average web application. What do science and math have to do with building better software?

A lot, it turns out. While "science" in the sense of "the study of the natural world" isn't really applicable to software, we can use a version of the scientific method to help us develop simple programs that exhibit the behavior required of them. That scientific method is called test-driven development.

A note on test-driven development: TDD is one of the most misused and maligned buzzwords in the software field today. If you've learned it from a source other than Kent Beck's original book Test Driven Development by Example, you've likely been exposed to some unhelpfully dogmatic ideas along with the good stuff. If you've tried TDD and hated it, well... I hope this book will convince you to give it a second chance. To clarify what I think TDD is not, here is a short list:

Test-driven development is NOT:

  • writing tests for every method of every class
  • automated testing through the UI
  • always writing tests before the production code
  • 100% test coverage
  • testing units of code in isolation by mocking out all their dependencies
  • having to wait more than a fraction of a second for your tests to run

If these are the things about "TDD" that have vexed you (or your coworkers), you might like this book.

Math enters the picture in the form of type algebra, which is a system of logical rules for reasoning about types. A type is a set of possible values. So, for example, when we talk about the type of a variable, we're talking about the set of values that could be stored in that variable.

A type system is a language for expressing theorems about a program—statements like "the concat function is always called with two strings as arguments, and always returns a string". A type checker is a program we can run on our code, which tries to prove that all the theorems we've stated are true. If it can't prove some of them, we get a type error.
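The concat statement above can be written down directly in TypeScript. A minimal sketch: the signature is the theorem, and the type checker attempts the proof at every call site.

```typescript
// The signature states a theorem: concat is always called with
// two strings, and always returns a string.
function concat(a: string, b: string): string {
  return a + b;
}

// The type checker proves the theorem holds here.
const greeting = concat("hello, ", "world");

// A call that would falsify the theorem is rejected before the
// program ever runs:
// concat("hello", 42);
// => Argument of type 'number' is not assignable to parameter of type 'string'.
```

Note that the proof happens entirely at compile time; no test needs to run for the theorem to be checked.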

A note on type systems: many programmers' only exposure to types is via Java and C, which I think is dreadfully unfortunate. These languages have horrible, bungled type systems that are hardly deserving of the name. Their type annotations exist mainly to help the compiler optimize the code, not to help the programmer.

Much better type systems—ones that do substantially help the programmer—exist, and this book focuses on those. TypeScript is one of the good ones. It is an algebraic type system, which means its types can be composed to form more sophisticated types. For instance, you can have so-called union types like this, which allow a value to be any of a set of alternatives:

// The httpResponse variable can contain either the exact string
// "pending", an Error object, or a response object with a `data`
// property.
let httpResponse: "pending" | Error | {data: string}

Unlike the type systems of Java and C, which are riddled with opportunities for NullPointerExceptions and segmentation faults, TypeScript can effectively rule out the equivalent errors from JavaScript code. What this means is that if the type checker accepts your code, you do not have to worry about those kinds of errors. That dramatically simplifies the process of reasoning about the software.
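To make this concrete, here is a sketch of how TypeScript rules out null errors (assuming strictNullChecks is enabled; the findUser function is hypothetical):

```typescript
// A hypothetical lookup that can fail. The return type makes the
// possibility of absence explicit, rather than hiding it the way
// a Java method returning null would.
function findUser(id: number): { name: string } | null {
  return id === 1 ? { name: "Ada" } : null;
}

const user = findUser(2);
// Accessing user.name right here is a compile-time error:
// "'user' is possibly 'null'".
if (user !== null) {
  // Inside this branch, the type is narrowed to { name: string },
  // so the access is proven safe.
  console.log(user.name);
}
```

Because the null case must be handled before the type checker will accept the code, the JavaScript equivalent of a NullPointerException simply cannot occur here.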

Even with good tests and good type systems, many programmers still see them as an annoying stumbling block—just one more thing they have to deal with before they can ship their code. I used to have this adversarial relationship with tests and types, but over time I discovered that, with the right approach, they can be extremely useful. The test failures and type errors cease to be annoying once you gain intellectual control over them and begin to wield them as a tool. The error messages can form a kind of self-checking to-do list, reminding you what still needs to be fixed up after your last change. And there are subtler and more powerful benefits too, which will be explored in depth throughout the rest of this book.

This book is about engineering software applications—software that people can use to solve particular problems.

I call this book Software Application Engineering because I do not have enough experience with systems programming to feel confident saying anything about it, or giving examples from it. The ideas and techniques in this book may not be applicable to, say, the engineering of operating systems. That said, I would be very surprised if they were not applicable at all to such systems. Most likely they will just require adaptation or reinterpretation—reading between the lines, if you will. Principles still more general than the ones in this book might someday elucidate a comprehensive theory of information-systems engineering, one that includes applications and systems programming, and perhaps even hardware. That will have to wait for another book, though.

Software engineering is not a matter of thinking or acting according to rigid, bureaucratic procedures, and never will be.

When something is completely algorithmically solvable in software, we tend to automate it. Case in point: compilation. It is likely that more and more programming tasks will be automated, at least partially, as AI becomes more capable. However, to the extent that there is human work involved in software, it will always require intuition, holistic awareness, judgment, ethical values, and knowledge of the world outside the machine.

Some people worry that if programming becomes engineering, all the fun will be taken out of it. I certainly hope not—and I don't think that's likely, anyway. At "worst," the superficial fun will be replaced by a much deeper joy. The worry seems to come from the idea—unfortunately reinforced by too much of the STEM curriculum in schools today—that math, science, and engineering are dry, soulless disciplines, lovable only by people who want to think like machines. That simply isn't accurate. Science and mathematics are, at their heart, the investigation of reality by engaged and curious minds—investigation that is made much easier by creativity and an appreciation of beauty. Nor is the reality that science reveals to us depressing. While twentieth-century philosophy has left us with the idea that reality is fundamentally machine-like and inhuman, closer investigation reveals this view to be misguided. There is nothing fundamentally true about the view that the universe is like a machine—that view is an incomplete mental model, like any other. A deep understanding of software has the power to reveal this to you, through quasi-mystical insight. Once you grok that insight, science and mathematics become a window through which you can glimpse the awe-inspiring and inexpressible metapattern that generates all experience. So don't worry!

In fact, curiosity, adaptability, and creativity are the core of the engineering mindset. You need good judgment, too.

While some might believe engineering is all about tradeoffs, I think that the best solutions come from transcending tradeoffs (GoUpALevel), which often represent false dichotomies. E.g. in a healthy software project, there is no tradeoff between cost and quality. My experience is that low quality is such a strong driver of increased costs that efforts to improve quality actually speed up the overall pace of feature delivery.

Accordingly, this book does not present a "process" or "method" that you can follow step by step.
Rather, it presents a set of heuristics and "views" of programming—techniques for understanding existing software and making it do what you want.

These techniques may or may not be applicable in your situation. The important thing is not just to learn the techniques, but to understand when and why they're useful.

Mental Models

As programmers, we're generally tasked with making changes to an existing software system.
The first thing we have to do is understand that existing system.
The code for large systems is too complex to understand in minute detail, so we build simplified mental models.
Then we reason deductively from those models to predict what will happen in a given situation.
All models have flaws, because they're simplifications. But if our models are good enough, our predictions generally will be, too.
If our models allow us to correctly predict what the system will do when we make a given change to it, we can change it safely.
Each of the techniques in this book helps with at least one of four things: understanding programs (building models), designing programs so they can be modeled more effectively, predicting the results of changes, and changing programs.

The Goal

The first goal of software engineering is to make software that does what we intend.

Define SoftwareSystem

Code that DoesWhatYouIntend.

This is an easier goal to agree on than code that is "correct" or "high-quality". What "correct" and "high-quality" mean will depend on your context. But we can all agree that if code doesn't do what we intended it to do, it's no good.

Definitions of SoftwareQuality often focus on conformance to requirements. For application software, this is problematic, because we almost never have a complete and correct description of "requirements" before we start writing code. We discover "requirements" as we build the software and observe people using it. Furthermore, the "requirements" are not really requirements, in the sense of "behaviors the software must exhibit to be considered a success". They're more like options: "this is one possible way of solving the user's problem". We're constantly weighing the cost and value of these options, some of which may be incompatible with each other, to design the product.

Our second goal is to be able to explain, rationally, why we believe that the software does what we intend.

If we can't explain why the system works, to ourselves and others, with evidence and arguments, then we have no firm basis for asserting that it works. Other people are likely to look at the code and see an unintelligible mess—legacy code. We certainly are going to have a hard time keeping it working as we change it....

Our third goal is to keep the system going in the face of change, because the context we're operating in always changes over time.

Even if we're developing against a frozen specification, code that we've already written becomes part of the context of future development. We can't keep the whole system in our heads, and it doesn't spring into being all at once like Athena out of the head of Zeus. When developing any system larger than a few lines of code, we have to design, code, and test in small increments.

Much of the effort of programming is in understanding the context in which the next increment of functionality is to take shape. That means learning about existing code—our own or other people's. I estimate that half of my working hours are spent learning: reading code or documentation, or probing the code with tests or other experiments. A further quarter of my time goes toward communicating and recording what I learned, in code, tests, or documentation—essentially the flip side of learning, since I do this in the hope that it will let my teammates learn what I just learned more quickly. Most of the rest of the time I'm thinking about how to solve a problem computationally—what Rich Hickey calls "hammock-driven development". Only a tiny fraction of my time—a few percent—is spent typing code.

This means that if we want to look for ways to reduce the cost of software development, we must look for ways to reduce the cost of learning and communication. Further improvements in programming language design (which might help with computation problem solving) or input methods (which would help with typing in code) can't reduce the overall cost by more than a quarter or so.

Some of a system's characteristics may be explicable only in the context of its history. E.g. "we're leaving this database column around because it's needed for backwards compatibility, even though we don't write to it anymore". While such historical explanations may not be completely avoidable, we should strive to minimize them where possible. Admitting a historical explanation for something basically means that part of the software that was once the "form", the object of design, has escaped our control and become the "context", which our designs must now accommodate.

Worth emphasizing: there is no contradiction between sound design / traceable decisions and agile development. If we learn something new and need to change a decision, the system should be changeable enough for us to completely reverse the old, bad decision and cleanly encode the new decision.

A project that rushes forward without any foresight will have no provisions for change. Effective design and engineering enable change. They don't obstruct it. The techniques described in this book are largely about change-enablement.

Modeling Software Systems' Behavior

In order to know if a software system does what we intend, we first have to be able to describe what the system does. "What the system does" is called its behavior.
To grasp the behavior of a system, we first divide the system into components. The behavior of the system is the set of possible interactions among those components. An interaction is a sequence of discrete messages.
Creating systems that do what we intend is hard in two ways. First we have to define, even imprecisely, what the system is supposed to do.
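The behavior model described above can be sketched as plain data. This is only an illustration of the definitions—the component names and messages are hypothetical:

```typescript
// A message is a discrete communication between two components.
type Message = { from: string; to: string; payload: string };

// An interaction is a sequence of discrete messages.
type Interaction = Message[];

// One desirable interaction out of the (in general, infinite) set
// that makes up the system's behavior: the UI asks a data store
// for the user's todos, and the store answers.
const fetchTodos: Interaction = [
  { from: "ui", to: "store", payload: "list todos" },
  { from: "store", to: "ui", payload: "[buy milk]" },
];
```

Thinking of behavior as a set of interactions like this one is what lets us describe "what the system does" without reference to how the code is written.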

TheTaskOfProgramming

No one pays for code for code's sake—they pay for the behavior of the running system. Therefore, our job as programmers is to shape our SoftwareSystem's Behavior. In general, the behavior is an infinite set of desirable Interactions. We're tasked with sculpting an infinitely large shape in hyperspace, using finite means—finite code and finite brainpower.

It's easy to lose sight of this while coding, because code is easy to change. The risk of making the wrong change should give us pause—though too often it doesn't. It's easy to forge ahead with false confidence, thinking (or more soberly, hoping) that things will just work out. The reason this seems easy is that obtaining true confidence seems prohibitively hard. It would be nice if our systems had the simplicity and lucidity of textbook examples, but they don't, so we make do.

Then we have to encode an infinite set of potential behaviors in a compact form.

What do programmers actually do all day?

About half my programming time is spent learning.
A large chunk of the remaining time is spent communicating what I've learned.
The rest of the time I'm inventing new things—that is, solving novel problems computationally.
The time it takes to type in the code is insignificant compared to all of the other work—learning, communicating, and inventing. It's only 1 or 2 percent of the total.
With this in mind, it is obvious why Brooks' Law—that adding staff to a late software project makes it later—is true.
Most of our job is understanding—and 10 people can't understand something faster than one person can.

Given that learning and understanding are so fundamental to programming, let's spend a few pages considering the nature of understanding. The next chapter, on mental models, delves into this issue.

Starting with a test

At some point in this book, I'm going to have to show you code. Since we've talked about the behavior model of software systems, writing a test that expresses the behavior of some piece of software seems like as good a place as any to begin.

A test is a self-verifying example of correct behavior.

Testing the tests

Code expresses a theory about how to solve a problem.
A test is a reproducible experiment that can disprove the theory expressed by the code.

In other words, a test can quickly, reliably, and automatically tell you, "no, that won't work".

In order for a test to be valuable, it must be possible for the test to fail.
Tests are worth more when they give understandable failure messages.
Therefore, test your tests by watching them fail when the code is broken.

The Means

What we seek is a workable mental model of our software—what PeterNaur in ProgrammingAsTheoryBuilding calls a theory of the software. Actually, we need several mental models. Each model will give us a wrong or incomplete impression of some details, so having multiple models is necessary to fill the gaps.

A set of techniques that can help solve problems that you might run into while programming. Emphasis on you. Writing code that does what you intend is largely a matter of the experience that you have while programming.

Example problems: Some behavior of your code is hard to test. The test output is hard to understand. You get lost while navigating the codebase and trying to understand what calls what. There are techniques that address all of these problems.

This book is not a set of rules to follow. Not everything in here is appropriate for every situation. Some of it is only appropriate very rarely. The point is not to do what the book says, the point is to use these techniques if and when they improve your experience of programming.

The Shape of the Solution

Though the Wholeness of the system is Ineffable (not amenable to rule-based explication) that doesn't mean it can't be communicated or understood. We can use multiple complementary MentalModels (36Views) to communicate about, and eventually reach a shared understanding of, what we can't explicitly teach.

Mental Models of Programs

Sources of Confidence

  • InformalReasoning - c.f. OutOfTheTarPit
    • GregWilson cited some research showing that reading the code finds more bugs per hour than testing
  • SoftwareTesting - passing tests make us more confident
  • AlgebraicType - proofs of some kinds of internal consistency, ruling out many errors that could happen in a program with dynamic types. If our typechecker outputs no errors, that makes us more confident.
    • easiest to see in a language where we can do an apples-to-apples comparison of typed and untyped forms, e.g. TypeScript vs. JavaScript.
  • CompositionalReasoning with AlgebraicProperties (i.e. "semi-formal" reasoning)

How the Confidence Sources complement each other

  • InformalReasoning
  • SoftwareTesting
    • Flaw: spot-checking is not a proof.
    • Flaw: process-external Effects are hard to test
    • Flaw: duplicate test coverage makes failures hard to interpret and coverage hard to analyze. The opposite, "over-mocking," leads to situations where all the tests pass but the system as a whole doesn't work.
    • Flaw: you're not the Oracle
      • e.g. you need to call an API that returns some complicated data. It's not clearly documented and you misinterpret the meaning of one of the fields when creating a Stub of the API. So your UnitTests pass, but the system as a whole is wrong.
      • Partial fix: ensure you only make this mistake once by transforming incoming data into a form that's self-documenting or otherwise well-documented, and hard to misuse. I.e. ParseDontValidate.
    • Summary: testing is complemented by TDD (writing the simplest code that passes the tests), an architecture that pushes effects to the boundaries or abstracts them behind contracts with convenient algebraic properties, a shallowly layered architecture, and a discipline of understanding the data that's passed across architectural boundaries.
  • AlgebraicType
    • Flaw: certain generic interfaces are very difficult or impossible to express in certain type systems. E.g. generic variadic functions.
      • This is a shortcoming of current, specific type system technologies, not the mathematics of types
      • Even proponents of dynamic typing rely on the idea of types to make sense of their programs—they just don't have automated tools to check their thinking.
      • Possible resolution: something like Clojure's spec? Then you can't write fewer tests, though.
    • Summary: algebraic types are complemented by tests.
  • AlgebraicProperty and CompositionalReasoning
    • Flaw: error propagation threatens the simplicity of algebraic properties when the implementor has process-external Effects.
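The ParseDontValidate tactic mentioned above can be sketched in TypeScript. This is a hedged illustration—the field names and the shape of the API response are hypothetical:

```typescript
// Raw shape of a hypothetical API response, as we believe it to
// be documented.
type RawResponse = { amount_cents: number };

// The parsed form encodes our interpretation in the type itself:
// once you hold a Money value, its meaning is hard to mistake.
type Money = { cents: number };

// The parser at the boundary is the single place where the raw
// data's meaning is decided. If we misread the API docs, there is
// exactly one function to fix—the mistake is made at most once.
function parseMoney(raw: RawResponse): Money {
  return { cents: raw.amount_cents };
}

const price = parseMoney({ amount_cents: 250 });
```

The rest of the system, and its tests, work only with Money; no other code ever reinterprets the raw field.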

Programming Tactics

The True Goal

The first goal is the sine qua non of the second.

A sense of oneness with your work. "Oneness" isn't quite the right word, and there are many possible near-synonyms: connectedness, identity, care, kindness. Oneness involves:

  • a feeling of Mastery: you are confident that you can get the code to do what you need it to do. When you get a feature request, you can often develop plans of action that will work.
  • a feeling of Compassion toward your fellow programmers, the users of your software, and the authors of your dependencies. Compassion also involves doing things that help your coworkers achieve oneness with their work (e.g. writing understandable code with APIs that are hard to misuse).
  • a sense of responsibility. If the code's Messy or has a Bug, you take it seriously and fix it. This kind of responsibility can't be forced on you by others. You assume it, naturally and inevitably, when you care about the code and the people it affects. If the other qualities of oneness are present, you'll find it easy to fix any problems you cause, so the responsibility won't be a burden.
  • non-Instrumentality. Instrumentality often appears when you try to get something for free, without putting time or attention into it, or identifying with it. E.g. suppose you use someone else's code that you don't understand very well to try to make a job easier. If what you try doesn't work, it's easy to get frustrated and blame the code or the other person, which causes everyone to suffer. An attitude of non-instrumentality both:
    • recognizes that you may have to put some learning effort in to get the benefit of using the code. Nothing is free.
    • is willing to let go of a dependency on bad code and reimplement the functionality, if that's the pragmatic option.
  • continuous attention and course-correction as you work.