On the web, you will find some of the most amazing stories about AI. In a word: sensationalism. The people who publish them can charge more for ad space because they get lots of clicks.
But the problem with stories about AI goes beyond sensationalist “If it bleeds it leads” journalism. Articles about AI can also be colored by extreme ideology. A belief that we are just computers made of meat can lead to materialist errors when interpreting results from AI. Then there are those who want an AI superintelligence to come along and save us so badly that they will glom onto anything that gives them hope. And some of the articles are written by people without a pinch of domain expertise.
At Mind Matters.ai, we seek traffic too. We don’t have ads or a materialist ideology, but we do have experts. And if you have a moment, we have the stories you won’t get from Clickbait News.
So, to close out the year, here are our Top Ten hypes, flops, and spins in AI news 2018, beginning with # 10…
A headline from the UK Telegraph reads, “DeepMind’s AlphaZero now showing human-like intuition in historical ‘turning point’ for AI”. The subsequent text reads:
Washington, July 7 (UPI) Deep Mind revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.
Let me quickly confess: I just lied to you. The subsequent text is actually taken from a July 8, 1958, article in The New York Times titled “NEW NAVY DEVICE LEARNS BY DOING”. Replace “Deep Mind” with “The Navy” and the subsequent text reads, correctly,
The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.
The DeepMind story, sixty years later, goes on to say:
…a year of testing and analysis by chess grandmasters, the machine has developed a new style of play unlike anything ever seen before, suggesting the programme is now improvising like a human.
Incremental results are often extrapolated in this way into hype stories about the future. A small improvement in performance is simply assumed to be followed by an inevitable and indefinite series of further improvements. Many people thought that way in 1958 and many still do today. But a series of improvements usually ends, often abruptly.
Bottom line: This improvement might be surprising and impressive, but, so far as human capabilities are concerned, it is only incremental.
Watch for #9, coming soon…
See also: Deep learning won’t solve AI. AlphaGo pioneer: We need “another dozen or half-a-dozen breakthroughs”
Can simple probabilities outperform deep learning? (Eric Holloway)
Robert J. Marks II, Ph.D., is Distinguished Professor of Engineering in the Department of Electrical & Computer Engineering at Baylor University. Marks is the founding Director of the Walter Bradley Center for Natural & Artificial Intelligence and hosts the podcast Mind Matters. He is the Editor-in-Chief of BIO-Complexity and the former Editor-in-Chief of the IEEE Transactions on Neural Networks. He served as the first President of the IEEE Neural Networks Council, now the IEEE Computational Intelligence Society. He is a Fellow of the IEEE and a Fellow of the Optical Society of America. His latest book is Introduction to Evolutionary Informatics, coauthored with William Dembski and Winston Ewert. A Christian, Marks served for 17 years as the faculty advisor for CRU at the University of Washington and currently is a faculty advisor at Baylor University for the student groups the American Scientific Affiliation and Oso Logos, a Christian apologetics group.