Erik Larson

Feb 5, 2014

Information Terms

I’ll use Kurzweil again, as I find he’s a spokesperson for the latest sci-fi thinking on smart machines and the like. He does his homework, I mean, so when he draws all the wrong conclusions he does so with an impressive command of the facts. He’s also got an unapologetic vision, and he articulates it in his books in a way that lets critics and enthusiasts alike know exactly where he’s coming from. I like the guy, really. He’s just wrong.

For example, in his eminently skimmable “The Singularity is Near”, he quips on page who-cares that the project of Strong AI is to reverse engineer the human brain in “information terms.” What is this? Everything is information these days, but the problem with seeing the world through the lens of “information” or even “information theory” is that it’s just a theory about transmitting bits (or “yes/no”s). Computation, then, is just processing bits (which is really what a Turing machine does: traverse a graph, making a deterministic, discrete decision at each node), and communication is just, well, communicating them. But information in this sense is just a way of seeing processes discretely. You can then build a mathematics around it, and processes like communication can be handled in terms of “throughput” (of bits) and “loss” (of bits) and compression and so on. Nothing about this is “smart” or should even really generate a lot of excitement about intelligence. It’s a way of packaging up processes so we can handle them. But intelligence isn’t really a “process” in this boring, deterministic way, and so we shouldn’t expect “information terms” to shed much theoretical light on it.
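To see just how unglamorous “information terms” really are, here’s a minimal sketch (my own illustration, not anything from Kurzweil) of the quantity the whole theory bottoms out in: Shannon entropy, the number of “yes/no”s a source is worth.

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete distribution: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # 1.0 -- a fair coin is exactly one bit per toss
print(shannon_entropy([0.9, 0.1]))  # ~0.469 -- a biased coin is worth less
print(shannon_entropy([1.0]))       # 0.0 -- a certainty carries no information at all
```

Throughput, loss, and compression are all just bookkeeping on top of this number. Nothing in it is thinking.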

Intelligence is about skipping all those “yes/no” decisions and intuitively reaching a conclusion from background facts or knowledge in a context. It’s sort of anti-information-terms, really. Or to put it more carefully: after intelligence has reached its conclusions, we can view what happened as a process, discretize the process, graph it in “information terms” and voilà!, we’ve got something described in information terms.

So my gripe here is that “information” may be a groundbreaking way of understanding processes and even of expressing results from science (e.g., thermodynamics, or entropy, or quantum limitations, or what have you), but it’s not in the driver’s seat for intelligence, properly construed. Saying we’re reverse engineering the brain is a nice buzz-phrase for doing some very mysterious thinking about thinking; saying “oh, and we’re doing it in information terms” doesn’t really add much. In fact, whenever we do have a theory of intelligence (whatever that might look like, who knows?), we can be pretty confident that there’ll be some way of fitting it into an information-terms framework. My point here is that that’s small solace when it comes to finding the elusive theory in the first place.

Shannon himself—the pioneer of information theory (Hurray! Boo!)—bluntly dismissed any mystery when formalizing the theory, saying in effect that we should ignore what happens with the sender and receiver, and how it gets translated into meaning and so on. This is the “hard problem” of information—how we make meaning in our brains out of mindless bits. That problem is not illuminated by formalizing the transmission of bits in purely physical terms between sender and receiver. As Shannon knew, drawing the boundary at the hard problem meant he could make progress on the easier parts. And so it is with science when it comes face to face with the mysteries of mind. Ray, buddy, you’re glossing it all with your information terms. But then, maybe you have to, to have anything smart sounding to say at all.
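Shannon’s boundary-drawing is easy to see in miniature. Assuming a textbook binary symmetric channel (my example, nothing from Kurzweil or from Shannon’s own papers), the entire formalism cares about one number, the flip probability, and says nothing whatsoever about what the bits mean to sender or receiver:

```python
import math
import random

def bsc_capacity(p):
    """Capacity in bits per use of a binary symmetric channel that
    flips each bit with probability p: C = 1 - H(p)."""
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

def transmit(bits, p, rng=random.Random(0)):
    """'Purely physical' transmission: flip each bit with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

print(bsc_capacity(0.0))            # 1.0 -- a noiseless channel carries a full bit per use
print(bsc_capacity(0.5))            # 0.0 -- pure noise carries nothing
print(transmit([1, 0, 1, 1], 0.0))  # [1, 0, 1, 1] -- no noise, the bits arrive intact
```

The hard problem lives entirely outside these functions: nothing here knows or cares whether the bits encode a sonnet or static.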