Here’s #9 of our Top Ten AI hypes, flops, and spins of 2018: The Army Times headline would jolt you awake faster than your morning coffee: “Army researchers are developing a self-aware squid-like robot you can 3D print in the field.”
Reporter Todd South helpfully adds, “your next nightmare.”
The thrill of fear invites the reader to accept the metaphorical claim that the robot will be “self-aware” as a literal fact.
Although we could, for technical reasons, quibble with the claim that the robot squid will be printed in 3D, we won’t just now. Let’s focus instead on the seductive semantics of the term “self-aware.” For humans, Oxford tells me, self-aware means “having conscious knowledge of one’s own character and feelings.” Computers have no character and no feelings. So we can rule that meaning out.
In a more general sense, “self-aware” could mean being aware of ourselves in our surroundings. Could mechanisms be self-aware in that sense? For example, does placing sensors on a car make the car self-aware? A sensor in my gas tank tells me when I need gas. When I back up, the car beeps if I get too close to an obstacle. Automatic parking requires sensors.
All of these are examples of cars being aware of their surroundings. And that’s probably what the robot squid’s developers mean when they use the term “self-aware.”
But their semantics are misleading. I wouldn’t call my car “self-aware” because there is no self in the car that experiences awareness. Electronic sensors generate information about the car’s position, but the car does not experience that information.
Even more simply, my thermostat detects the temperature of its immediate surroundings. Does that make the thermostat self-aware? You’d really have to stretch the meaning of the word.
For what it is worth, the robot squid is not even a machine right now, just a concept. The article goes on to say:
“If we can understand these interactions, then we can use those insights to fabricate dynamic structures and flexible robots which are designed to be self-aware, self-sensing and capable of adjusting their morphologies and properties in real time to adapt to a myriad of external and internal conditions,” Habtour said.
The material is still in early development stages, so don’t expect to see a robot squid in the foxhole next to you tomorrow.

Todd South, “Army researchers are developing a self-aware squid-like robot you can 3D print in the field” at Army Times
See also: 2018 AI Hype Countdown: 10. Is AI really becoming “human-like”? (Robert J. Marks), on the UK Telegraph headline “DeepMind’s AlphaZero now showing human-like intuition in historical ‘turning point’ for AI.”
Robert J. Marks II, Ph.D., is Distinguished Professor of Engineering in the Department of Electrical & Computer Engineering at Baylor University. Marks is the founding Director of the Walter Bradley Center for Natural & Artificial Intelligence and hosts the podcast Mind Matters. He is the Editor-in-Chief of BIO-Complexity and the former Editor-in-Chief of the IEEE Transactions on Neural Networks. He served as the first President of the IEEE Neural Networks Council, now the IEEE Computational Intelligence Society. He is a Fellow of the IEEE and a Fellow of the Optical Society of America. His latest book is Introduction to Evolutionary Informatics, coauthored with William Dembski and Winston Ewert. A Christian, Marks served for 17 years as the faculty advisor for CRU at the University of Washington and currently is a faculty advisor at Baylor University for the student groups the American Scientific Affiliation and Oso Logos, a Christian apologetics group.