Tuesday, April 11, 2006

On the mistakes of futurism

Futurism is the art and science of predicting the future. Most futurists (other than science fiction authors) are involved in the technology industries they think will shape the future, partly as a matter of shared interest, though also because, as Alan Kay said, "The best way to predict the future is to invent it."

It's easy to gauge the accuracy of futurism: just go read what was predicted in the past about the present. You might be struck by the eerie accuracy of some predictions (Jules Verne is often cited here), but more likely, you'll find yourself laughing at them. "It's the 21st century; where are my flying cars? Where is my moonbase? Where is my vaguely threatening humanoid robot household servant?" More interestingly, why didn't anyone predict that the computer in my pocket would be a thousand times more powerful than a mainframe? Why didn't anyone foresee microprocessors being used to make neckties and disposable birthday cards sing?

Futurism errors can be classified into two groups: the insufficiently ambitious and the overly ambitious. The insufficiently ambitious ones are usually just a failure of imagination, though they do point out imagination's fundamental limits: one person cannot imagine in a year everything that six billion people will think of in a century, pretty much by definition. It's the overly ambitious failures that are more interesting to consider.

Typically, an overly ambitious prediction arises thus: "This technology is cool, but wouldn't it be far more useful and cooler if someday we had such-and-such a variation, application, or alternative to it?" The futurist starts with some future involving something great, and then works out a way to get there from here. Or, sometimes, doesn't bother to work out the intermediate steps at all.

The problem with this approach is typically the same as the problem people have understanding evolution. Evolution doesn't decide, "hey, eyes would be useful" and come up with a long-term plan to develop them. It decides, "right now, photoreceptors would be useful"; then when that's done, "hey, multifaceted photoreceptors would be useful"; then, "it'd be even more useful if these could focus"; and so on. Each step has to make sense and be profitable in and of itself, regardless of the future directions it might open up.
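To make the incremental process concrete, here's a toy sketch in Python (purely illustrative; TARGET, fitness, and evolve are invented names for this example, and the bit-string "genome" is a stand-in, not a model of biology). Each mutation is kept only if it pays off immediately, yet a complex result accumulates with no long-term plan:

    import random

    # A hypothetical end state that no individual step is "aiming" for.
    TARGET = [1] * 20

    def fitness(genome):
        # Immediate usefulness: how many bits already match.
        return sum(g == t for g, t in zip(genome, TARGET))

    def evolve(steps=10000):
        genome = [random.randint(0, 1) for _ in TARGET]
        for _ in range(steps):
            candidate = genome[:]
            i = random.randrange(len(candidate))
            candidate[i] ^= 1  # one small variation
            # Keep the step only if it's profitable (or at least neutral)
            # in and of itself; there is no plan to reach TARGET.
            if fitness(candidate) >= fitness(genome):
                genome = candidate
        return genome

    print(fitness(evolve()), "of", len(TARGET), "bits match")

The same greedy rule applies to the technology case: each "mutation" (a product, a feature, a company) survives only if it's useful right now, whatever doors it happens to open later.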

It's not that technological advances are never done in a single huge leap. They are, sometimes. But each individual leap, small or large, is done because it makes sense at that moment and can be made to work and be profitable. No one makes an advance that doesn't work or do something useful, on the basis that it'll become useful later, save within the narrow bounds of venture capital.

For example, consider the transhumanist question: will our brains be digitized and uploaded? Let's assume for the sake of argument (though I will post another time about this) that it is physically possible to represent your mind as software, linked to inputs and outputs that represent your senses and motor nerves well enough that you wouldn't go insane. How do we get from here to there? Which intermediate steps, each of them practical on its own, will lead there? Even setting aside the various philosophical and religious oppositions such progress will have to overcome, I cannot see a series of steps leading to that point that doesn't have to cross some impossible gulfs. How will anyone trust that a digitized personality is really the same as the person would have been, particularly given that we can't even make a word processor that isn't bug-ridden after thirty years of improving them?

If we ever get to uploaded minds, it'll be via a sideswipe. Some technology going in an apparently unrelated direction will suddenly turn out, by surprise, to be relevant to the problem. And it won't be artificial intelligence, I bet. The problems of AI are essentially reverse-engineering thought; the solutions bear little resemblance to the internal hardware or information representations of the brain, since they emulate the end result far, far more than they emulate the underlying processes (which are only dimly understood at best). The idea of AI research leading to uploaded minds is tantamount to the idea of robotic spacecraft leading to prosthetic limbs: not that it's impossible, but it's very, very roundabout.

And yet, uploaded minds are a staple of modern futurism, particularly its transhumanist strain. Why? Because we can't help thinking, "that'd be so cool, it'd solve so many problems," and trying to figure out a way we could get there; because we want to get there, not because it's likely we will, at least not by any foreseeable path. But we'll probably have ways to make fashion statements with our spleens or something similarly absurd, and no one's forecasting that.

3 comments:

litlfrog said...

One of the academic journals I used to work on was Technological Forecasting and Social Change. We used to joke about futurism being such a sweet job because you'd be retired by the time anyone knew you'd been wrong. There was an article in one that analyzed a lot of glaringly incorrect predictions by intelligent people; it did a good job of categorizing the different kinds of mistakes. If I come across it when I'm cleaning my storage area, I'll bring it over.

Hawthorn Thistleberry said...

Could've used that when I was doing a class on futurism!

Should have mentioned that in the post. One of the products of that class was a short story I wrote that made its own predictions. Though the story is now nine years old, it still seems to me to be holding up, so I'm cautiously optimistic.

Idham said...

:) ...am an optimist who goes by the saying, "Go as far as you can see... when you get there, you will see further."

IdHam