Artificial intelligence is still in its infancy. But it may well prove to be the most powerful technology ever invented. It has the potential to improve health, supercharge intellects, multiply productivity, save the environment and enhance both freedom and democracy.
But as that intelligence continues to climb, so does the danger: used irresponsibly, AI has the potential to become a social and cultural H-bomb. It’s a technology that can deprive us of our liberty, power autocracies and genocides, program our behavior, turn us into human machines and, ultimately, turn us into slaves. Therefore, we must be very careful about the ascendance of AI; we don’t dare make a mistake. And our best defense may be to put AI on an extreme diet.
We already know certain threatening attributes of AI. For one thing, the progress of this technology has been, and will continue to be, shockingly quick. Many people were likely stunned by Microsoft’s recent announcement that AI was proving to be better at reading X-rays than trained radiologists. Most newspaper readers don’t realize how much of their daily paper is now written by AI. That wasn’t supposed to happen; robots were supposed to supplant manual labor jobs, not professional brainwork. Yet here we are: AI is quickly gobbling up entire professions, and those jobs will never come back.
We also are getting closer to creating machines capable of artificial general intelligence: machines as intelligent as humans. We may never get all of the way to actual consciousness, but in terms of processing power, inference, metaphor and even acquired wisdom, it is easy to imagine AI surpassing humanity. More than 20 years ago, chess grandmaster Garry Kasparov, playing IBM’s supercomputer Deep Blue, sensed a mind on the other side of the board. Today, there are hundreds of thousands of computers in use around the world that are more powerful than Deep Blue, and that doesn’t include the millions of personal computers with access to the cloud.
We also know that profit motives and the will to power and control have already driven the rapid growth of vast libraries of antisocial applications. We need look no further than the use of facial recognition and other AI techniques by the government of China to control the behavior of its citizens to see one such trajectory. That country’s Social Credit System monitors the behavior of millions of its citizens, rewarding them for what the government judges to be “good” behavior and punishing them for “bad” behavior by expanding or limiting their access to the institutions of daily life. Those being punished often do not even know that their lives are being circumscribed. They are simply not offered access to locations, promotions, entertainment and services enjoyed by their neighbors.
Meanwhile, here in the free world, the most worrisome threat is the use of AI by industry to exploit us, and by special interest groups to build and manipulate affinity groups that increasingly polarize society. The latter activity is particularly egregious in election years like this one. We are also concerned about the use of AI by law enforcement, the IRS and regulators to better surveil people who might commit crimes, evade taxes or engage in other transgressive acts. Some of this is necessary, but without guardrails it can lead to a police state.
Sound extreme? Consider that already all of us are being detained against our wills, often even without our knowledge, in what have been called “algorithmic prisons.” We do not know who sentenced us to them or even the terms of that sentence. What we do know is that, based upon a decision made by some AI system about our behavior (such as a low credit rating), our choices are being limited. Predeterminations are being made about the information we see: whether a company will look at our resume, whether we are eligible for a home loan at a favorable rate, whether we can rent a certain apartment, how much we must pay for car insurance (our driving quality monitored by devices attached to our engine computers), whether we will get into the college of our choice and whether police should closely monitor our behavior.
Looking ahead, we can be certain that such monitoring will grow. We know as well that AI will be used by groups to recruit members and influence their opinions, and by foreign governments to influence elections. We can also be certain that as AI tools become more powerful and the Internet of Things grows, the arsenal of virtual weapons will become more commercially, and socially, deadly.
We need to act. The problem is that, even now, it will be hard to get the horse back into the barn. The alarm about the growing power of AI has already led to warnings from the likes of Stephen Hawking and Elon Musk. But it is hard to figure out what to do legislatively. We haven’t seen any proposals that would have a broad impact without crushing the enormous potential advantages of AI.
Europeans now have the “right to explanation,” which requires a humanly readable justification for all decisions rendered by AI systems. Certainly, that transparency is desirable, but it is not clear how much good it will do. After all, AI systems are in constant flux. So, any actions taken based on the discovery of an injustice will be like shaping water. AI will just adopt a different shape.
We think a better approach is to make AI less powerful. That is, not to control artificial intelligence, but to put it on an extreme diet. And what does AI consume? Our personal information.
If AI systems and the algorithms in charge of these “algorithmic prisons” cannot get their hands on our personal information, cannot indulge their insatiable hunger for this data, they necessarily will become much less intrusive and powerful.
How do we choke off the flow of this personal information? One obvious way is to give individuals ownership of their private data. Today, each of us is surrounded by a penumbra of data that we continuously generate, and that body of data is a free target for anyone who wishes to capture and monetize it. Why not, rather than letting that information flow directly into the servers of the world, store it in the equivalent of a safe deposit box at an information fiduciary like Equifax? Once it is safely there, the consumer could decide who gets access to it.
For example, suppose a consumer wants a loan: he or she could release the relevant information to a credit provider, who in turn would have the right to use that information for that one instance. If that consumer wants free service from, say, Facebook, he or she could provide the company with relevant information for that application alone. If the government needs access to that information to catch a terrorist, it would need to get a search warrant. (Another nice feature of such a system is that the consumer would only have to go to one place to check the accuracy of the information on file.)
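To make the mechanics of this "safe deposit box" concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `DataVault` class, the field names and the requester "AcmeBank" are illustrative inventions, not a reference to any real fiduciary or system. The key property it demonstrates is that the consumer, not the data collector, issues each grant, and that a grant is consumed per use.

```python
class DataVault:
    """Hypothetical 'safe deposit box' for personal data, held by a fiduciary.

    The consumer deposits data and issues per-requester, per-use grants;
    a requester without a live grant gets nothing.
    """

    def __init__(self):
        self._records = {}   # field name -> value
        self._grants = {}    # (requester, field name) -> remaining uses

    def deposit(self, field_name, value):
        # Store one piece of personal data with the fiduciary.
        self._records[field_name] = value

    def grant(self, requester, field_name, uses=1):
        # Only the consumer calls this: authorize n uses for one named party.
        self._grants[(requester, field_name)] = uses

    def request(self, requester, field_name):
        # A requester must present a live grant; each access consumes one use.
        key = (requester, field_name)
        if self._grants.get(key, 0) <= 0:
            raise PermissionError(f"{requester} has no grant for {field_name}")
        self._grants[key] -= 1
        return self._records[field_name]


vault = DataVault()
vault.deposit("credit_history", {"score": 710, "defaults": 0})
vault.grant("AcmeBank", "credit_history", uses=1)   # one loan application

vault.request("AcmeBank", "credit_history")          # first use succeeds
try:
    vault.request("AcmeBank", "credit_history")      # second use is refused
except PermissionError as err:
    print(err)
```

A real system would of course need authentication, encryption and audit logs, but the design choice is the same: access is the exception the consumer grants, not the default the collector enjoys.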
Human society existed for millennia before AI systems had unlimited knowledge about each of us. And it will continue to exist, even if we limit that knowledge by starving our machines of that personal information. AI will still be able to make the economy more efficient, create medical advances, reduce traffic and create more effective regulations to ensure the health of the environment. What it will be less able to do is threaten human autonomy, liberty and pursuit of happiness.
In the case of AI, lean will mean less mean. It’s time to put artificial intelligence on a data diet.