In human history, there have been three great technological revolutions and many smaller ones. The three great ones are the agricultural revolution, the industrial revolution, and the one we are now in the middle of—the software revolution. [1]
The great technological revolutions have affected what most people do every day and how society is structured. The previous one, the industrial revolution, created lots of jobs because the new technology required huge numbers of humans to run it. But this is not the normal course of technology; in that sense, the industrial revolution was an anomaly. It leads people to assume, perhaps subconsciously, that technological revolutions are always good for most people's personal economic status.
It appears that the software revolution will do what technology usually does—create wealth but destroy jobs. Of course, we will probably find new things to do to satisfy limitless human demand. But we should stop pretending that the software revolution, by itself, is going to be good for median wages.
Technology provides leverage on ability and luck, and in the process concentrates wealth and drives inequality. I think that drastic wealth inequality is likely to be one of the biggest social problems of the next 20 years. [2] We can—and we will—redistribute wealth, but it still doesn’t solve the real problem of people needing something fulfilling to do.
Trying to hold on to worthless jobs is a terrible but popular idea. Trying to find new jobs for billions of people is a good idea but obviously very hard because whatever the new jobs are, they will probably be so fundamentally different from anything that exists today that meaningful planning is almost impossible. But the current strategy—“let’s just pretend that Travis is kidding when he talks about self-driving cars and that Uber really is going to create millions of jobs forever”—is not the right answer.
The second major challenge of the software revolution is the concentration of power in small groups. This also happens with most technological revolutions, but the last truly terrifying technology, the atomic bomb, taught us the wrong lessons here, much as the industrial revolution did about job growth.
It is hard to make an atomic bomb not because the knowledge is restricted (though it is—if I, hypothetically, knew how to make an atomic bomb, it would be tremendously illegal for me to say anything about it) but because enriching uranium takes huge amounts of energy. One effectively needs the resources of nations to do it. [3]
Again, this is not the normal course for technology—it was an idiosyncrasy of nuclear development. The software revolution is likely to do what technology usually does, and make more power available to small groups.
Two of the biggest risks I see emerging from the software revolution—AI and synthetic biology—may put tremendous capability to cause harm in the hands of small groups, or even individuals. It is probably already possible to design and produce a terrible disease in a small lab; development of an AI that could end human life may require only a few hundred people in an office building anywhere in the world, with no equipment other than laptops.
The new existential threats won’t require the resources of nations to produce. A number of things that used to take the resources of nations—building a rocket, for example—are now doable by companies, at least partially enabled by software. But a rocket can destroy anything on earth.
What can we do? We can't make the knowledge of these things illegal and expect that to work. We can't stop technological progress.
I think the best strategy is to try to legislate sensible safeguards but work very hard to make sure the edge we get from technology on the good side is stronger than the edge that bad actors get. If we can synthesize new diseases, maybe we can synthesize vaccines. If we can make a bad AI, maybe we can make a good AI that stops the bad one.
The current strategy is badly misguided. It’s not going to be like the atomic bomb this time around, and the sooner we stop pretending otherwise, the better off we’ll be. The fact that we don’t have serious efforts underway to combat threats from synthetic biology and AI development is astonishing.
To be clear, I’m a fan of the software revolution and I feel fortunate I was born when I was. But I worry we learned the wrong lessons from recent examples, and these two issues—the large-scale destruction of jobs and the concentration of enormous power—are getting lost.
[1] A lot of the smaller ones have been very important, like the hand axe (incidentally, the longest-serving piece of technology in human history), writing, cannons, the internal combustion engine, atomic bombs, fishing (many people believe that fishing is what allowed us to develop the brains we have now), and many more.
[2] It is true that life is better in an absolute sense than it was a hundred years ago, even for very poor people. Most of what people say in defense of current levels of wealth inequality is also true—highly paid people do indeed create inexpensive services that benefit poor people.
However, ignoring quality of life relative to other people alive today feels like it ignores what makes us human. I think it’s a good thing when some people make thousands of times as much money as other people do, but I also don’t resent paying my taxes, and I think we should do much more to help people who are actually poor. The social safety net will have to trend up with the development of technology.
[3] Or at least, one used to: http://en.wikipedia.org/wiki/Separation_of_isotopes_by_laser_excitation