8: AI Just Needs a Bigger Truck!
AI help, not hype, with Robert J. Marks: Can we create superintelligent computers just by adding more computing power?

Here's #8 of our Top Ten AI hypes, flops, and spins of 2018: The claim that AI can be written to evolve even smarter AI [1] is slowly being abandoned. [2] AI software pioneer François Chollet, for example, concluded in "The Impossibility of Intelligence Explosion" that the search should be abandoned: "An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself." A computer cannot do that either.
However, some claim that "massive parallelism" lifts the human brain's performance above that of AI. So strong AI is still possible, they argue, if we can harness more computational power, "taking advantage of the large number of neurons and large number of connections each neuron makes":
For instance, the moving tennis ball activates many cells in the retina called photoreceptors, whose job is to convert light into electrical signals. These signals are then transmitted to many different kinds of neurons in the retina in parallel. By the time signals originating in the photoreceptor cells have passed through two to three synaptic connections in the retina, information regarding the location, direction, and speed of the ball has been extracted by parallel neuronal circuits and is transmitted in parallel to the brain. Likewise, the motor cortex (part of the cerebral cortex that is responsible for volitional motor control) sends commands in parallel to control muscle contraction in the legs, the trunk, the arms, and the wrist, such that the body and the arms are simultaneously well positioned to receiving the incoming ball.

Liqun Luo, "Why Is the Human Brain So Efficient?" at Nautilus
In other words, our brains are thought to be effective because they are more computationally powerful than any computer we have today. We see similar claims in the computer science literature. [3] The implication is that if only we could add more computing power, computers could do what people do. That reminds me of an old story:
John bought a truck so he could go into a farm-to-market business. He bought tomatoes from local farmers for two dollars a pound and drove them to market where he sold them for a dollar a pound. Needless to say, he began to lose money hand over fist.
He consulted his accountant who, after a detailed analysis, said, "The answer is obvious. You need a bigger truck."
A bigger computer would be like a bigger truck. All a truck can really do is haul things, and all a computer can really do is calculate. The limits of computer performance are set by algorithmic information theory. According to the Church-Turing thesis, anything done on the very fast computers of today could, in principle, have been done on Turing's original 1930s machine. We can perform tasks faster today, but the fundamental limitations of computing remain. Bigger and faster computers are not going to start creating new ideas. Those who believe that bigger computers will lead to superintelligence are asking for a bigger truck.
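To make the Church-Turing point concrete, here is a minimal sketch of my own (not from the article): a few lines of Python that simulate a one-tape Turing machine incrementing a binary number. The function name run_turing_machine and the rule table are illustrative assumptions, not any standard library. A modern processor gets the same answer billions of times faster, but it computes nothing that this 1930s-style machine cannot compute in principle.

```python
# Illustrative sketch: a one-tape Turing machine that adds 1 to a binary number.
# Faster hardware shrinks the running time; it does not enlarge the class of
# functions that can be computed.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right). The machine stops in state 'halt'.
    """
    tape = list(tape)
    head = len(tape) - 1              # start at the rightmost (least significant) bit
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = rules[(state, symbol)]
        if head < 0:                  # grow the tape on the left if needed
            tape.insert(0, write)
            head = 0
        elif head >= len(tape):       # grow the tape on the right if needed
            tape.append(write)
        else:
            tape[head] = write
        head += move
    raise RuntimeError("machine did not halt within max_steps")

# Rules for binary increment: scan right-to-left, turning trailing 1s into 0s,
# then turn the first 0 (or a leading blank) into 1 and halt.
INCREMENT_RULES = {
    ("start", "1"): ("start", "0", -1),   # carry propagates left
    ("start", "0"): ("halt",  "1",  0),   # absorb the carry
    ("start", "_"): ("halt",  "1",  0),   # number was all 1s; extend the tape
}

if __name__ == "__main__":
    print(run_turing_machine("1011", INCREMENT_RULES))  # prints "1100" (11 + 1)
    print(int("1011", 2) + 1)                           # 12: the same answer, just faster
```

The step-by-step simulation and the one-line arithmetic produce the same result; the "bigger truck" only delivers it sooner.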
1. Kory Becker and Justin Gottschlich, "AI Programmer: Autonomously Creating Software Programs Using Genetic Algorithms."
2. Roman Yampolskiy, "Why We Do Not Evolve Software? Analysis of Evolutionary Algorithms."
3. See, for example, Roman Yampolskiy, "Why We Do Not Evolve Software? Analysis of Evolutionary Algorithms."
See also: 2018 AI Hype Countdown 9: Will That Army Robot Squid Ever Be “Self-Aware”? The thrill of fear invites the reader to accept a metaphorical claim as a literal fact.
and
2018 AI Hype Countdown 10: Is AI really becoming "human-like"? A headline from the UK Telegraph reads, "DeepMind's AlphaZero now showing human-like intuition in historical 'turning point' for AI." Don't worry if you missed it.
Robert J. Marks II, Ph.D., is Distinguished Professor of Engineering in the Department of Electrical & Computer Engineering at Baylor University. Marks is the founding Director of the Walter Bradley Center for Natural & Artificial Intelligence and hosts the podcast Mind Matters. He is the Editor-in-Chief of BIO-Complexity and the former Editor-in-Chief of the IEEE Transactions on Neural Networks. He served as the first President of the IEEE Neural Networks Council, now the IEEE Computational Intelligence Society. He is a Fellow of the IEEE and a Fellow of the Optical Society of America. His latest book is Introduction to Evolutionary Informatics, coauthored with William Dembski and Winston Ewert. A Christian, Marks served for 17 years as the faculty advisor for CRU at the University of Washington and currently is a faculty advisor at Baylor University for the student groups the American Scientific Affiliation and Oso Logos, a Christian apologetics group.