Erik Larson

May 8, 2013

Coming to a Town Near You: Singularitarians, Transhumanists, and Smart Robots

If you’re lucky enough to work at a software startup in a bastion of innovation like Palo Alto, you’ll have a front-row seat watching 20-somethings with oodles of technical talent write tomorrow’s killer apps, talk about the latest tech news (everyone is in the know), and generally map out a techno-vision of the future. It’s exciting stuff. Walk down University Ave and take it all in; it doesn’t matter much which bistro or restaurant you wander into, you’ll hear the same excited patter of future talk, the next “New New Thing,” as writer Michael Lewis put it. The techno-ethos of Palo Alto is of course understandable, as hundreds of millions in venture capital flow into startups each year, making millionaires of kids barely out of school and changing the nature of business and everyday life for the rest of us. It’s an exciting place. Yet for all the benefits and sheer exhilaration of innovation, if you stick around long enough, you’ll catch some oddly serious discussions about seemingly sillier topics. While there are plenty of skeptics and agnostics, lots of technical types are drawn to “Sci Fi” versions of the future. And some of them, for whatever reason, seem to think they can predict it.

What’s next, “big picture”? Ask Google’s founders, to take a notable example. In a 2004 Newsweek interview, Sergey Brin ruminated:

“I think we’re pretty far along compared to 10 years ago,” he says. “At the same time, where can you go? Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off. Between that and today, there’s plenty of space to cover.”

And it’s not just Brin. Google technology director Craig Silverstein chimed in, in the same article: “The ultimate goal is to have a computer that has the kind of semantic knowledge that a reference librarian has.”

Really? From the Google intelligentsia, no less. But this is part of the culture in Silicon Valley, and indeed all over the world: it’s the engineers, computer scientists, and entrepreneurs who seem obsessed with the idea of reverse engineering our brains to create artificial versions. If you’re an engineer immersed in the project of making better, “smarter” software all day, it’s an understandable vision, even a noble one, by “geek” standards. But cerebral types have been trumpeting the imminent arrival of Artificial Intelligence for decades, almost since Alan Turing gave us the original theoretical spec for a universal computing machine in 1936.

Well, as a member of the “geek squad” myself, I’ve been following the debates for years, since graduate school at Texas and Arizona, where debates about the nature of the human mind and the differences between humans and machines were commonplace. Not much has changed, fundamentally, since those years (as far as I can tell), and the question of whether a machine can reproduce a mind is still largely unanswered. But the world of technology has changed, quite radically, with the development and widespread adoption of the Web. Perhaps our software isn’t “human smart,” but impressive technology is everywhere these days, and it seems to grow further into every corner of our lives almost daily. The notion, then, that our minds might end up in silicon-based systems is perhaps not so far-fetched after all.

In fact, the explosion of Web technology is probably most to credit (or blame) for the latest version of the Sci Fi future. If you dare browse through all the “isms” that have sprung up out of this cornucopia of digitization, you’ll likely find yourself wishing Lonely Planet published a tourist’s guide for would-be futurists. Failing that, let’s take a look at a CliffsNotes version, next.

The Isms

As far as I can tell, there are three main strands to the Sci Fi future involving superintelligent artificial beings. First, we have Singularitarianism (no, this isn’t misspelled). Entrepreneurs like Ray Kurzweil have popularized the neologism in books like The Age of Spiritual Machines (1999), The Singularity is Near (2005), and the most recent How to Create a Mind: The Secrets of Human Thought Revealed (2012). The “singularity,” as the name suggests, is the future point at which human (biological) and machine (non-biological) intelligence merge, creating a superintelligence that is no longer constrained by the limits of our physical bodies. At “the singularity,” we can download our brains onto better hardware and create a future world where we never have to get old and die, or get injured (we can have titanium bodies). Plus, we’ll be super smart, just like Brin suggests. When we need some information about something, we’ll just, well, “think,” and the information will come to our computer-enhanced brains.

If this sounds incredible, you’re not alone. But Singularitarians insist that the intelligence of computers is increasing exponentially, and that as highfalutin as this vision might seem, the laws of exponential growth make it not only plausible but imminent. In his earlier works, Kurzweil famously predicted that the “s-spot,” the singularity, where machines outstrip the intelligence of humans, would occur by 2029; by 2005 he had revised this to 2045. Right up ahead. (His predictions are predictably precise; understandably, they also tend to get revised to more distant futures as reality marches on.) And Carnegie Mellon robotics expert Hans Moravec agrees, citing evidence from Moore’s Law (the generally accepted observation that computing capacity on integrated circuits doubles roughly every eighteen months) that a coming “mind fire” will replace human intelligence with a “superintelligence” vastly outstripping mere mortals. Moravec’s prediction? Eerily on par with Kurzweil’s: in his 1998 Robot: Mere Machine to Transcendent Mind, he sees machines achieving human levels of intelligence by 2040, and surpassing our biologically flawed hardware and software by 2050.
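For what it’s worth, the extrapolation itself is simple arithmetic. Here is a minimal back-of-the-envelope sketch, in Python, of the kind of growth curve the futurists lean on; the fixed 18-month doubling period and the baseline and target years are illustrative assumptions of mine, not figures taken from Kurzweil or Moravec:

```python
# Back-of-the-envelope Moore's Law extrapolation: the raw capacity
# growth implied by a fixed 18-month doubling period. The baseline
# and target years are illustrative, not the futurists' own figures.

DOUBLING_PERIOD_YEARS = 1.5  # "roughly every eighteen months"

def growth_factor(start_year: int, end_year: int) -> float:
    """Capacity multiple implied by fixed-period doubling."""
    doublings = (end_year - start_year) / DOUBLING_PERIOD_YEARS
    return 2.0 ** doublings

for target in (2029, 2040, 2045):
    print(f"2013 -> {target}: ~{growth_factor(2013, target):,.0f}x raw capacity")
```

The numbers get astronomical quickly, which is the whole rhetorical appeal. Whether millions of times more raw capacity has anything to do with being millions of times smarter is, of course, exactly the question the neuroscientists raise below.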

Well, if all of this singularity talk creeps you out, don’t worry. There are tamer visions of the future from the geek squad, like transhumanism. Transhumanists (many of whom share the millennial raptures of Singularitarians) seek an extension of our current cognitive powers through the fusion of machine and human intelligence. Smart drugs, artificial brain implants for enhanced memory or cognitive function, and even “nanobots” (microscopic robots let loose in our brains to map out and enhance our neural activities) promise to evolve our species from the boring, latte-drinking Humans 1.0 to the machine-fused 2.0 versions, where, as Brin suggests, we can have “all the world’s information directly attached to your brain.” (Sweet!)

Enter True AI

Singularitarians. Transhumanists. They’re all bearish on mere humanity, it seems. But there’s another common thread besides the disdain for mere flesh and blood, one that makes the futurists’ “isms” a distinction without a substantive difference: whether your transhuman future includes a singularity or merely perpetual, incremental enhancement (which, arguably, we’ve been doing with our technology since prehistory), you’re into Artificial Intelligence. Smart robots.

After all, who would fuse themselves with a shovel, or a toaster? It’s the promise of artificial intelligence that infuses techno-futurists’ prognostications with hope for tomorrow. And while the history of AI suggests that deeper and thornier issues bedevil the engineering of truly intelligent machines, the exponential explosion of computing power and speed, along with the miniaturization of nearly everything, makes the world of smart robots seem plausible (again), at least to the “isms” crowd. As Wired magazine co-founder and techno-futurist Kevin Kelly remarks in his 2010 What Technology Wants, we are witnessing the “intelligenization” of nearly everything. Everywhere we look, “smart technologies” are enhancing our driving experiences, our ability to navigate with GPS, to find what we want, to shop, bank, socialize, you name it. Computers are embedded in our clothing now, and in our eyewear (you can wear a prototype of the computer-embedded Google Glass these days, if you’re one of the select few chosen). Intelligenization, everywhere.

Or, not. Computers are getting faster and more useful, no doubt, but are they really getting smarter, like humans? That’s a question for neuroscience, to which we now turn.

The Verdict from Neuroscience? Don’t Ask

One peculiarity of the current theorizing among the technology “nerds,” focused as they are on the possibility of unlocking the neural “software” in our brains to use as blueprints for machine smarts, is the rather lackluster, even hostile, reception their ideas receive from the people ostensibly most in the know about “intelligence” and its prospects or challenges: the brain scientists. Scientists like Gerald Edelman, Nobel laureate and director of the Neurosciences Institute in San Diego, for example. Edelman is notably skeptical, almost sarcastic, when he’s asked about the prospects of reverse engineering the brain in software systems. “This is a wonderful project, that we’re going to have a spiritual bar mitzvah in some galaxy,” Edelman says of the singularity. “But it’s a very unlikely idea.” Bummer. (In California parlance: “dude, you’re dragging us down.”)

And Edelman is not alone in voicing skepticism of what sci fi writer Ken MacLeod calls “the Rapture for nerds.” In fact, almost in proportion to the enthusiasm among the “machine types” (the engineers and entrepreneurs like Google’s Brin, and countless others in the slick office spaces adorning high-tech places like Silicon Valley), the “brain types” seem to pour cold water. Wolf Singer of the Max Planck Institute for Brain Research in Frankfurt, Germany, is best known for his “oscillations” proposal, which theorizes that patterns in the firing of neurons are linked, perhaps, to cognition. Singer’s research inspired no less than Francis Crick, co-discoverer of the structure of DNA, and Caltech neuroscience star Christof Koch to propose that “40 Hz oscillations” play a central role in forming our conscious experiences.

Yet Singer is notably unimpressed by the futurists’ prognostications about artificial minds. As former Scientific American writer John Horgan notes in his IEEE Spectrum article “The Consciousness Conundrum”: “Given our ignorance about the brain, Singer calls the idea of an imminent singularity [achieving true AI] ‘science fiction’.” Koch agrees. Comparing the decoding of DNA, his collaborator Crick’s earlier triumph, to the project of understanding the “neural code” for purposes of engineering a mind, he muses: “It is very unlikely that the neural code will be anything as simple and as universal as the genetic code.” What gives?

It’s hard to say. As always, the business of predicting the future is uncertain. One thing seems probable, however. The core mysteries of life, like conscious experience and intelligence, will continue to beguile and humble us, leaving us with a greater appreciation for their complexity and beauty. And, predictably, what have been called “Level 1” technologies, the “shop floor” technologies we employ to achieve specific goals, like traveling from A to B quickly (an airplane), digging a ditch (a shovel), or searching millions of electronic web pages (a search engine), will continue to get more powerful and complex. What is less predictable, it seems, is whether all these enhancement projects will really unlock anything special, beyond the digitization of our everyday experiences in zillions of gadgets and tools. Indeed, whether all these gadgets and tools really are getting “smarter,” or just faster, smaller, and more ubiquitous in our lives, is itself an open question, properly understood. In the complicated connections between technologies and the broader social, political, and cultural contexts within which they exist, almost any future seems possible. As Allenby and Sarewitz note in their 2011 critique of transhumanism, The Techno-Human Condition, the real world is always a struggle to define values, and contra technology-centered types like Kurzweil or Moravec, it gets more and more complicated, and harder, not easier, to predict. Technology, in other words, makes things murkier for futurists. And real science, real thinking, can ideally provide some balance. We’ll see.

Back in Silicon Valley, things don’t seem so philosophically confusing. The future, as always, seems perpetually wide open to more and better, which, lockstep-like, seems also certain to mean better outcomes for us, too. But the sobering news, as the frontiers of neuroscience report, is that the “big questions” are still unanswered today, and answering them seems a long way off to boot. I’m not a betting person, but however the world appears in 2045 (or was it 2029?), it’s safe to say we don’t know yet. In the meantime, the all-too-human tendency to see nails everywhere with each new version of a hammer is likely to continue, unabated. Well, so what? Perhaps the Google founders and their legions of programmers have earned the right to prognosticate. We humans can smile and shrug, and wait and see. We’re all just human, after all.
