“We Shouldn’t Be Scared by ‘Superintelligent A.I.’”
The New York Times, October 31, 2019
“‘Superintelligence’ is a flawed concept and shouldn’t inform our policy decisions.”
Intelligent machines catastrophically misinterpreting human desires is a frequent trope in science fiction, perhaps used most memorably in Isaac Asimov’s stories of robots that misconstrue the famous “three laws of robotics.” The idea of artificial intelligence going awry resonates with human fears about technology. But current discussions of superhuman A.I. are plagued by flawed intuitions about the nature of intelligence.
The problem with such forecasts is that they underestimate the complexity of general, human-level intelligence. Human intelligence is a strongly integrated system, one whose many attributes — including emotions, desires, and a strong sense of selfhood and autonomy — can’t easily be separated.
What’s more, the notion of superintelligence without humanlike limitations may be a myth. It seems likely to me that many of the supposed deficiencies of human cognition are inseparable aspects of our general intelligence, which evolved in large part to allow us to function as a social group. It’s possible that the emotions, “irrational” biases and other qualities sometimes considered cognitive shortcomings are what enable us to be generally intelligent social beings rather than narrow savants. I can’t prove it, but I believe that general intelligence can’t be isolated from all these apparent shortcomings, either in humans or in machines that operate in our human world.
It’s fine to speculate about aligning an imagined superintelligent — yet strangely mechanical — A.I. with human objectives. But without more insight into the complex nature of intelligence, such speculations will remain in the realm of science fiction and cannot serve as a basis for A.I. policy in the real world.
Understanding our own thinking is a hard problem for our plain old intelligent minds. But I’m hopeful that we, and our future thinking computers, will eventually achieve such understanding in spite of — or perhaps thanks to — our shared lack of superintelligence.
About the Author:
Melanie Mitchell is a professor of computer science at Portland State University, external professor at the Santa Fe Institute and the author of “Artificial Intelligence: A Guide for Thinking Humans,” from which this essay is partly adapted.