In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence
In Our Own Image makes accessible to readers many of the complex issues behind artificial intelligence (AI). Author George Zarkadakis has good credentials for this task as an engineer with a PhD in AI; however, his narrative has many problems.
Zarkadakis believes it is possible to develop a machine consciousness, though he admits that this is not possible by means of computer programming. He cannot explain how to develop a machine consciousness, but he suggests it may be possible by studying the brain. He then shies away from the topic of the brain, instead providing a long detour around it, tap dancing, more or less, to distract the reader. The issues he does cover center on AI in culture. That is, if we could create AI, what might AI look like? Here Zarkadakis substitutes popular culture for popular science.
Zarkadakis begins with his theory of the evolution of mind, drawing evidence from anthropological history, ancient art carvings, and cave paintings from 40,000 years ago. From these scant artifacts he asks questions such as: When does creativity kick in? When does the human mind begin? And how different was the ancient mind from today’s mind? Zarkadakis attempts to answer these but, time and again and in what becomes an ongoing theme, offers conjecture as fact.
Zarkadakis addresses the more philosophical issues of AI, such as the theory of mind, the significance of metaphor, the mind-body problem, paradigms, the competing theories of reductionism versus emergence, the Turing Test, cultural attitudes toward robots, golems, androids, and cyborgs in history, science fiction, literature, and movies, and the possibility that we may not be alive but living in a simulation inside a computer.
However, many of Zarkadakis’s claims are open to debate. Following one such claim, that thinking machines will be more intelligent, more perfect, and more moral than man, this reviewer searched the text for supporting evidence; for example, is there a definition of artificial intelligence or machine consciousness, so that one might recognize AI when one sees it? Alas, within these pages there is no definition of intelligence, artificial or otherwise. There is no definition of machine consciousness. There is no definition of human consciousness. There is no definition of perfection. There is no definition of morals. There is no definition of morality. So what is the value of a PhD in AI?
To top this off, in chapter 15 Zarkadakis disavows assertions made in the previous 14 chapters. He writes, “There were far too many things in the world that lay beyond logic yet were very much part of life experience,” and “Abstract symbolic reasoning cannot provide meaning on its own.” A more honest author would have started with this disclaimer instead of hiding it near the end. Only after the disclaimer does Zarkadakis mention non-algorithmic approaches, and once again only lightly.
If not Zarkadakis, then who? Curious readers might do better to start with an essay by computer scientist John McCarthy published in 1977, “Epistemological Problems of Artificial Intelligence.” In this essay McCarthy identifies a number of problems that were difficult in 1977 and remain difficult today, though computer scientists are gaining traction on one class of problems: the algorithmic manipulation and understanding of 3D objects in a cluttered environment. Given low-cost sensors and high-powered computers, self-driving cars are on their way to practicality, albeit with a few accidental deaths along the way.
Despite today’s successes, computer scientists still fail at entire classes of problems, for example real-world situations in which the rules are unclear or undefined. In general, problems can be solved algorithmically where the rules are well defined (e.g., games such as chess) or where statistical or other properties allow heuristics to be applied. These approaches fail, however, on “corner cases” and “black swans,” that is, where the rules depend on context and context can vary (a lot!).
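The contrast can be sketched in a few lines of Python. These are toy functions invented purely for illustration, not anything from the book or from any real vision system: a rule with a complete definition can be decided with certainty, while a heuristic that holds only on average breaks confidently on a corner case.

```python
# Toy illustration (hypothetical functions, for this review only).

# 1) Well-defined rules: legality in tic-tac-toe is fully specified,
#    so an algorithm can decide every case with certainty.
def legal_move(board, cell):
    """board: list of 9 cells ('X', 'O', or None); cell: index 0-8."""
    return 0 <= cell < 9 and board[cell] is None

# 2) Statistical heuristic: "very bright pixels are sky" holds for
#    typical scenes, so it works most of the time...
def looks_like_sky(brightness):      # brightness on a 0-255 scale
    return brightness > 200          # usually true of open sky

empty_board = [None] * 9
assert legal_move(empty_board, 4)    # the rule covers every input

# ...but it fails on a corner case: a white panel truck is also
# bright, so the heuristic gives a confident wrong answer.
white_truck_brightness = 240
print(looks_like_sky(white_truck_brightness))  # True, yet it is a truck
```

The point of the sketch is that the first function cannot be surprised by any input, while the second is only as good as the statistical regularity it encodes; change the context and it fails without warning.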
Algorithms will also fail when pitted against humans who lie, cheat, and “game” the rules. For example, when “chatbots” that are supposed to learn through human interaction are brought online (think: Microsoft’s Tay), they quickly degrade into cursing bigots, because that is what clever humans will teach them to do. If AI has difficulty distinguishing between truth and lies, or between a white sky and a white panel truck, then AI will never perform “better than” humans.
This reviewer recommends In Our Own Image with reservations. Readers will find the book entertaining on the topics of philosophy, psychology, sociology, and culture, but these are empty calories with respect to the science.