Recently in the Washington Post, Joel Achenbach wrote a long and very smart piece of reporting on the state of Artificial Intelligence doom-mongering.
The short version goes something like this: Science fiction writers have been worrying about the possibility that Artificial Intelligence (AI) could bring about a cataclysm since the 1950s, when computers barely existed. In the last few years those concerns have begun to be taken seriously by some people in the academy. There are now academic books and even a fledgling non-profit foundation dedicated to understanding the dangers of AI and developing strategies to hedge against them.
Yet as Achenbach details, real AI—that is, AI strong enough to approach superintelligence—is nowhere near a reality. In fact, the possibility of AI has been lurking just over the horizon for decades. And that’s where it seems to stay—always just over the horizon—which suggests that there might be a fundamental obstacle to it. Instead, Achenbach suggests we ought to worry about “superstupidity”: machines we have programmed to perform important tasks, but that aren’t smart enough to manage their inputs properly. Think here about the control systems for power grids, deepwater oil wells, or nuclear reactors—even the fly-by-wire control structures of aircraft. Even at a smaller scale, machine intelligence poses risks. Achenbach has written elsewhere about the problem of programming driverless cars to handle an unavoidable collision: Someone will have to write into their code the possibility that they be directed to kill one person if doing so avoids a greater loss of life.
All of that said, worrying about technology is not an either/or binary choice. It’s and/both.
We ought to be devoting resources to hedging against superintelligent AI even if it never develops. And we ought to be concerned about the degree to which (relatively dumb) algorithms have already taken over the world. Look, for instance, at the rise of algorithmic stock trading, commonly referred to as “flash trading.” The problem with building entire financial strategies around computer algorithms is actually two-fold. First, the algorithms have become tantamount to black boxes: even their creators don’t fully understand them and can’t predict their behavior. Second, they contribute to the perversion of the market, turning it from an engine for capital allocation into, depending on your view, either a casino or a mine. Neither prospect seems likely to produce good outcomes in the long term.
And while we’re at it, we should probably go full-spectrum with our tech skepticism by worrying about how technology is changing us. Ten years ago the TED talk crowd liked to rhapsodize about how awesome the world would be when the “digital natives” took over, because they would grok technology on a level that the olds never could. But if you look around at the internet, the opposite seems to be the case. Consider an analogy: Most people who came of age in the 1950s understood how cars worked. They could change the oil; they knew what a carburetor did. And that’s because they grew up as the automobile was maturing. Today, everyone takes cars for granted, and the result is that we know much less about how they work. I suspect most people couldn’t change the oil on their Toyota if you offered them $10,000 to do so.
In the same way, kids seem dumber about computers today than they were twenty years ago. Sure, they can take videos with their cell phone and post them to Twitter. But ask them to do anything more complicated than a Google search and, well, good luck with that.
And that’s just the practical side of the internet making us dumber. There’s a philosophical component, too. Two years ago the web developer Maciej Ceglowski gave a lecture about the problem of the internet and memory. He, too, began with an analogy concerning cars and the 1950s: Because of the rise of the automobile, the American government decided to build the interstate highway system. And the very fact of this collection of roads exerted enormous influence over the subsequent development of America. It made possible the suburbs, McDonald’s, shopping malls, and much else. But there was a flip side. Here’s Ceglowski:
The wide-open spaces that first attracted people to the suburbs were soon filled with cookie-cutter buildings. Our commercial spaces became windowless islands in a sea of parking lots. We discovered gridlock, smog, and the frustrations of trying to walk in a landscape not designed for people. When everyone has a car, it means you can’t get anywhere without one. Instead of freeing you, the car becomes a cage. The people who built the cars and the roads didn’t intend for this to happen. Perhaps they didn’t feel they had a say in the matter. Maybe the economic interests promoting car culture were too strong. Maybe they thought this was the inevitable price of progress. Or maybe they lacked an alternative vision for what a world with cars could look like.
His point is that you could say the exact same thing about internet culture and the idea that everyone should always be online. Ceglowski argues that one of the principal problems with the internet is that its conventions for memory are the direct opposite of human conventions for memory. Human beings are designed to remember some things and forget others. All of our social norms and customs are built around this fact.
Computers are designed to remember everything, forever. That mismatch is why we have so many social conflicts at the intersection where computers impose their mnemonic conventions onto humans.
In other words, we ought to be worried about—probably even skeptical of—everything involving computers, all the way up to AI and down to Facebook. It would be terrible if Skynet ever became self-aware, of course. But even if it doesn’t, the internet might destroy us anyway.