MutualBenefit
Mutual benefit is a principle of Extreme Programming. It's the idea that sometimes, there's no meaningful tradeoff between different types of Good. If A is good, and B is good, and by increasing A we get more B too, then we should maximize A. It makes no sense to try to trade off A to get more B.
Concrete example: InternalSoftwareQuality (ISQ). Improvements to internal software quality make the software cheaper to develop and maintain, and usually improve ExternalSoftwareQuality (ESQ) as well, i.e. produce fewer bugs. Philip Crosby wrote that QualityIsFree: the least expensive way to operate is to treat defects as unacceptable, because whatever you spend on eliminating defects is more than paid back by the money you avoid spending on customer support, bugfixes, and the like.
Another way of framing the mutual benefit principle is that we should try to do things in a way that has positive externalities, and avoid negative externalities whenever we can.
Specialization, compartmentalization, and reductionism destroy opportunities for positive externalities. In Antifragile, Nassim Taleb relates an anecdote about a businessman who employs a valet to lug his heavy suitcase up the hotel stairs. After checking in, the businessman goes straight to the hotel gym to lift weights. In effect, he pays someone to lift a heavy thing, and then pays someone else to give him access to a different heavy thing to lift. This is farcically inefficient, and a great example of a missed opportunity for positive externalities.
As the people and machines in a SoftwareSystem interact to produce value, complex webs of mutual benefit tend to form. System components get used in off-label ways due to KranzsLaw, simply because they (accidentally) have affordances for those uses. Side-channels get exploited because they happen to reveal useful system state. The value-harvesting tendrils of the system find their way, lichen-like, into every crevice and irregularity.
This is one major reason that big-bang rewrites of a complex system are doomed to fail. Most would-be utopian reformers don't take the time, or don't have the patience, to understand the positive externalities at work in the current system. The probability that the new system will offer the same opportunities for positive externalities is small; more likely, it will have a rigid, compartmentalized design that seems intended to destroy as many of them as it can.
In a software company that uses Slack for distributed collaboration, employees tend to ask questions about internal systems on Slack. Some would-be reformers want to replace this "inefficient" dialogue with documentation. The reformist view assumes that the only Good that comes out of asking questions on Slack is that the question-asker gets the information they're seeking. In fact, though, there are many positive externalities:
- There's an opportunity for interpersonal connection during a Slack interaction that doesn't exist when reading or writing documentation.
- Asking and answering questions in public signals to others that this is a safe place to ask questions.
- Questions about a system can be viewed as feedback about the user experience. If the same question gets asked over and over, the system probably has some rough edges, or doesn't accommodate a common use case.
- Answering questions in public establishes one's credibility and authority in that area. It can be a way to meet new employees and welcome them.