Erik Larson

Oct 8, 2007

Tightly Coupled

The Phoenix Project is one of my theoretical forays into finding a common conceptual landscape that accurately reflects our situation, and enables us to collaborate more effectively on problem-solving tasks to make the world a better place. It’s about getting on the same page. Supposing such a project is possible, and that it can be carried out, we still won’t be able to predict the future, but we can make plausible inferences about what will likely succeed and what won’t. I’ll gloss, for now, over differences between explanation and prediction; suffice it to say that what we’re after are abductive inferences, or ones based on relevance, even if the conclusions reached are not deductively certain (or, in some cases, even more likely than not—more on that later).

Abduction is a strange beast because, to date (and probably for a very long time to come), only humans can do it. We make abductive inferences all the time, so in this sense the proposal is not particularly groundbreaking. But the problem is that, for reasons not entirely clear to me, there exists considerable confusion about our place in the world and how best to proceed.

To return to the point: my basic thesis is that we live in a tightly coupled system of our own making, and the digitization of information, while itself a hallmark of our practical ingenuity, is also making the world harder to understand and to predict. This is, in some sense, just a consequence of building a complex world capable of creating many goods: economic prosperity, technologies for convenience, modern medicine. But the sheer complexity of the system nonetheless makes it dangerous. The fact that things can change so quickly means we have to adapt quickly as well, and so in everything from procedures to follow in the event of airline hijackings to setting policy for auto emissions, and many, many other decisions, we are under increasing demand to get it right, with the consequences of failure growing larger as well.

In football there is a phenomenon of becoming too conservative in running the offense. It is easy enough to spot when it happens, and, like many things in life, the reasoning that led to it may or may not have made perfect sense. But then it happens. The quarterback can’t push the ball down the field. Short passes across the middle, or to the sidelines (“outs,” as they are called). The defense senses a constriction of the opposing team—an inability to strike big, to score by opening up the pass game and airing it out. Secondary players come up to play the quick pass or run; linebackers start keying on the ball. The game gets predictable. If the offense doesn’t adjust, it’s over. The run game shuts down because half the defense is waiting for the ball carrier, the edge of unpredictability having been lost. The short passes are smothered by defenders, and the spectre of interceptions, fumbles, and in general a collapse of the offense is manifest.

This is what I want to avoid. It’s easy to make observations that the world is complicated and that things are dangerous. It’s easy, too, to give “cable news channel” analyses of what to do: all reactive, driven largely by action-reaction scenarios shot through with politics. We don’t want that. We can’t afford it anymore. (In fact it’s really watching the news that prompted me to start thinking about this—we’re not thinking, really thinking, about our situation anymore. Too much information. Not enough understanding.) So, I think in large part the entire edifice of decision makers right now is becoming predictable. We can’t push the ball down the field. Possibilities are shrinking. To someone watching our system, it IS becoming predictable. If that person’s goal is to cause massive damage—to really shake up the established geopolitical system—we’re in big trouble.

So, let’s “air it out,” let’s take a really bold look at things, and see what happens.

Really Obvious Things that Smart People Don’t Get

The power of generalizations drops drastically when applied to complex systems. By contrast, a classical generalization—say, the inverse square law in physics—has enormous predictive power when applied to “the very large,” as Hawking put it, and the same is equally true of the really small. What’s common to such mechanical laws is the lack of complexity in the systems to which they apply. Celestial mechanics ignores quite a lot: we want to know how long it will take for one body to orbit another, but we don’t inject millions of other possible interactions (meteorites, and so on) into the calculation. Likewise, we isolate photons or other quantum phenomena in order to use quantum mechanics to predict outcomes.

Classical mechanics—Newton’s, or, for these purposes, Einstein’s theories of relativity as well—comprises really beautiful, powerful generalizations. So strange, then, that these classical mechanics are so irrelevant to prediction in everyday experience. The location at some time t + n of an entire planet, given its location at time t, is knowable under our classical theory. But something seemingly simple—a particular cubic inch of fluid in a turbulent system—is not.
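The predictive power of the classical picture is striking when made concrete. A short sketch, using Newton’s form of Kepler’s third law: from just two numbers—the orbit’s size and the central mass—the orbital period falls out, with no need to model the millions of interactions the calculation deliberately ignores.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the Sun, kg
AU = 1.496e11      # one astronomical unit, meters

def orbital_period(semi_major_axis_m: float, central_mass_kg: float = M_SUN) -> float:
    """Kepler's third law in Newtonian form: T = 2*pi*sqrt(a^3 / (G*M)).

    Only two "fixable" features of the system are needed; everything
    else about the bodies is irrelevant to the prediction.
    """
    return 2 * math.pi * math.sqrt(semi_major_axis_m ** 3 / (G * central_mass_kg))

# Earth's orbit: semi-major axis of about 1 AU
days = orbital_period(AU) / 86400
print(round(days, 1))  # ~365.2 days
```

No comparably compact formula yields the trajectory of that cubic inch of turbulent fluid, even though it obeys the same underlying physics.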

We have other generalizations for predicting outcomes in complex systems. Mostly, we don’t use laws but past experience. This is true of people, of course (we rely on our knowledge of past events to make plausible inferences about future ones), and computer models of complex systems typically invoke observed prior cases and relevant features (where “relevance” is supplied by the human) to generalize to likely outcomes given unseen data. Laws don’t do the predictive work in messy systems (we may assume, of course, that laws governing the relations between pressure, temperature, and so on continue to apply in such systems).
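Prediction-by-past-cases can be sketched as a nearest-neighbor lookup. The example below is illustrative (the data and labels are invented): the human contributes the relevance judgment by choosing which features to measure and how to compare them; the model itself just retrieves the most similar prior case.

```python
from math import dist

def predict(query, past_cases):
    """Predict the outcome of an unseen case by returning the outcome
    of the most similar past case. 'Similarity' is Euclidean distance
    over hand-chosen features -- the human supplies relevance by
    deciding which features matter and how to weigh them."""
    _, outcome = min(past_cases, key=lambda case: dist(case[0], query))
    return outcome

# Toy past cases: (feature vector, observed outcome)
past = [
    ((0.9, 0.1), "calm"),
    ((0.2, 0.8), "turbulent"),
    ((0.5, 0.5), "mixed"),
]
print(predict((0.85, 0.2), past))  # nearest past case is (0.9, 0.1) -> "calm"
```

Note that no law appears anywhere in the prediction; the generalization lives entirely in the record of prior cases and the human-chosen measure of relevance.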

Humans use generalizations in everyday experience that constitute “heuristics,” or “rules of thumb.” These are generalizations that no one expects will always apply. We know they admit of exceptions, but still they capture correlations between events of certain types that make them useful. Don’t get into a car with a stranger, I tell my children, knowing full well that there are scenarios where that is exactly what they should do (say, to save them from a maniac on the street).

We’re tightly linked to changing circumstances—to facts—in a way that classical generalizations are expressly designed to avoid. We don’t care about the details of celestial bodies when computing their trajectories through space. We care about a few fixable features (their mass and velocity, mostly).

Because we’re tightly linked to changing circumstances, the kind of reasoning we do is completely different. And, again, our use of generalizations is very much attenuated.