Rebooting AI: Building Artificial Intelligence We Can Trust

Author(s): Gary Marcus and Ernest Davis
Release Date: September 10, 2019
Publisher/Imprint: Pantheon
Pages: 288
Reviewed by:

“Machines will be capable, within twenty years, of doing any work a man can do.” This was the confident pronouncement of artificial intelligence pioneer Herb Simon in 1965. His confidence was shared by Marvin Minsky and other students of AI in the 1960s. As recently as 2002 the futurist prophet Ray Kurzweil made a public bet that AI would surpass native human intelligence by 2029. Now at Google, Kurzweil predicted in 2017 that by 2045 a “Singularity” will occur, when humans “will multiply our effective intelligence a billion-fold by merging with the intelligence we have created.”

Whether you hoped or feared that AI machines would not just supplement but replace or even rule the human mind, this book will diminish both your hopes and your fears. Its authors summarize the formidable progress made in the development of artificial intelligence but go on to underscore the deep obstacles to moving AI machines from narrow to general intelligence.

Co-author Gary Marcus is an entrepreneur in machine learning and author of The Birth of the Mind, while Ernest Davis is a professor of computer science at New York University and author of Verses for the Information Age. Together, Marcus and Davis trace the progress of what they call “narrow” AI, which can enable a machine to master a game like Go because the board’s dimensions and all the rules are fixed.

In that context, the machine can even “learn” by trial and error. In real life, however, everything is open-ended and evolving. “The ability to cope with open systems, relying on a general form of intelligence rather than brute force tailored to closed systems, is the key to moving the whole field forward.”

The authors provide a catalog of structural and other reasons for their skeptical appraisal. Of course, machines are limited by their dependence on inputs from outside. A fast-moving calculator will be no better than a weary historian at deciding whether the Soviet invasion of Finland in 1939 should be seen as part of World War II.

Just because an AI machine can recognize a syllable or a border collie doesn’t mean it is smart. “To extrapolate from success in one aspect of cognition to success in all aspects . . . is to succumb to the illusion of progress.” The authors provide photos of objects misidentified by deep learning systems—a turtle for a rifle, a foam-covered baseball for an espresso, a banana and a sticker identified as a toaster. 

The authors’ skeptical view parallels the caution voiced by one of the creators of complexity science, Stuart Kauffman, who stresses the role of “adjacent possibles”: the myriad unforeseen and unforeseeable possibilities opened by every invention or human action. No human and certainly no robot could have foreseen how the Turing Machine would evolve into giant computers followed by laptops, the Internet, social media, the Arab Spring, and Russian manipulation of American and European voters.

However, the authors do suggest that things could change in the future. To improve AI, researchers must somehow invest machines with common sense and then with a kind of general intelligence. They must develop systems able to represent the core frameworks of knowledge such as time, space, causality, and how physical objects and humans interact. The list of prerequisites goes on and on, which the authors concede is a “tall order.”

Like most technologies, AI can be used for evil as well as good purposes. Indeed, even noble goals can engender harmful byproducts. The authors ignore the tendencies of humans to abuse computers and AI.

The list of social problems spawned by computers is long and, for some observers, distressing and beyond redemption. It includes the physical, mental, and moral consequences of looking at a cellphone day and night; blind acceptance of, or profound uncertainty about, what to believe or distrust in the piles of alleged facts spewed out by individual cranks and corporate liars such as Fox News; the character assassination of persons such as Nancy Pelosi, whose words and image can be distorted online; the rise of drone warfare, in which targets can be located (perhaps erroneously) and then destroyed by AI systems based thousands of miles away; facial recognition at airport screening that misreads a face or discriminates against non-whites; and the possibility that hardware provided by firms such as Huawei could not only perform its ostensible task but also conduct espionage or sabotage energy and water networks.

We now know that for several years Russian and US “intelligence” agencies have planted “digital land mines” able to subvert if not destroy their adversaries’ electric grids—an adjacent possible for deterrence. 

The plethora of musical styles and content available on cellphones works against social unity: even teenagers have little common basis for arguing the pros and cons of particular singers and bands. In earlier times, by contrast, nearly every American, young and old, rich or poor, was familiar with Louis Armstrong and Ella Fitzgerald. Their work was known even in the USSR thanks to smuggled records and the Voice of America.

As Xi Jinping’s China is showing with its nearly universal surveillance, AI can bring humanity ever closer to Aldous Huxley’s Brave New World. It will take more than rebooting to alter this projection.