“Exposes the vast gap between the actual science underlying AI and the dramatic claims being made for it.”—John Horgan
“If you want to know about AI, read this book…It shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence.”—Peter Thiel
“Larson worries that we’re making two mistakes at once, defining human intelligence down while overestimating what AI is likely to achieve…Another concern is learned passivity: our tendency to assume that AI will solve problems and our failure, as a result, to cultivate human ingenuity.”—David A. Shaywitz, Wall Street Journal
“A convincing case that artificial general intelligence—machine-based intelligence that matches our own—is beyond the capacity of algorithmic machine learning because there is a mismatch between how humans and machines know what they know.”—Sue Halpern, New York Review of Books
Futurists insist that AI will soon eclipse the capacities of the most gifted human mind. What hope do we have against superintelligent machines? But we aren’t really on the path to developing intelligent machines. In fact, we don’t even know where that path might lie.
A tech entrepreneur and pioneering research scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape of AI to show how far we are from superintelligence, and what it would take to get there. Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. This is a profound mistake. AI works on inductive reasoning, crunching data sets to predict outcomes. But humans don’t correlate data sets: we make conjectures informed by context and experience. Human intelligence is a web of best guesses, given what we know about the world. We haven’t a clue how to program this kind of intuitive reasoning, known as abduction. Yet it is the heart of common sense. That’s why Alexa can’t understand what you are asking, and why AI can only take us so far.
Larson argues that AI hype is both bad science and bad for science. A culture of invention thrives on exploring unknowns, not overselling existing methods. Inductive AI will continue to improve at narrow tasks, but if we want to make real progress, we will need to start by more fully appreciating the only true intelligence we know—our own.