Many high tech companies, including Microsoft, are headquartered near the coast in the state of Washington. The executives must have been terrified when they read the headline:
“Tuna Biting Off Washington Coast”
But wait. Tuna are not chomping off pieces of the Washington coastline. The headline, meant to convey good news for fishermen, can of course be misread that way. We use common sense to identify the intended meaning, and the incorrect interpretation makes us smile. But AI has trouble doing this because it lacks common sense.
To solve the problem of AI’s lack of common sense, Microsoft’s co-founder Paul Allen (1953–2018) poured big bucks into Seattle’s Allen Institute for Artificial Intelligence. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” he acknowledged. Oren Etzioni, a former University of Washington professor who now oversees the Allen Institute, agrees, “[AI] is devoid of common sense.”
Here’s another example of a flubbed headline:
“Body Search Reveals $4,000 in Crack”
You and I both know what the headline means to say. The alternative incorrect interpretation makes us giggle. AI has no sense of humor and will not know which interpretation is correct.
I therefore offer AI researchers the Flubbed Headline Challenge: Given an ambiguous headline, enable the AI to independently identify the meaning intended by the headline’s author.
Some AI systems require training data to learn from examples. So, to kick off the Flubbed Headline Challenge, here are some training examples I have collected from various sources:
- “New Housing for Elderly Not Yet Dead”
- “Shouting Match Ends Teacher’s Hearing”
- “Dr. Gonzalez Gives Talk on Moon”
- “Man Seeking Help for Dog Charged with DUI”
- “Navy SEALS Responsible for Getting Osama Bin Laden to Be Honored at Museum”
- “General Who Ran Vietnam Briefly Dies at 86”
- “Police Squad Helps Dog Bite Victim”
- “Red Tape Holds Up New Bridge”
- “Police Begin Campaign to Run Down Jaywalkers”
- “Iraqi Head Seeks Arms”
- “Include Your Children when Baking Cookies”
- “Stolen Painting Found by Tree”
- “Two Sisters Reunited After 18 Years at Checkout Counter”
- “Kids Make Nutritious Snacks”
- “Hospitals are Sued by 7 Foot Doctors”
- “New Vaccine May Contain Rabies”
- “Man Struck By Lightning Faces Battery Charge”
- “Students Cook and Serve Grandparents”
- “Utah Girl Does Well in Dog Show”
- “Local High School Dropouts Cut in Half”
- “Death Causes Loneliness, Feelings of Isolation”
- “Legislatures Tax Brains to Cut Deficit”
- “Meat Head Resigns”
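To make the challenge concrete, here is a minimal sketch of how such training examples might be represented and scored. This is an illustrative assumption on my part, not an official benchmark format: each headline is paired with its intended reading and its comical misreading, and a system is scored on how often it picks the intended one.

```python
# Hypothetical framing of the Flubbed Headline Challenge as a
# binary-choice task. The data structure and glosses are illustrative.
from dataclasses import dataclass

@dataclass
class FlubbedHeadline:
    headline: str
    intended: str    # the reading the headline writer meant
    unintended: str  # the comical misreading

EXAMPLES = [
    FlubbedHeadline(
        "Kids Make Nutritious Snacks",
        "Children prepare healthy snacks.",
        "Children are healthy snacks.",
    ),
    FlubbedHeadline(
        "Stolen Painting Found by Tree",
        "A stolen painting was found next to a tree.",
        "A tree found a stolen painting.",
    ),
]

def score(system, examples):
    """Fraction of headlines for which the system picks the intended reading."""
    correct = sum(
        system(ex.headline, [ex.intended, ex.unintended]) == ex.intended
        for ex in examples
    )
    return correct / len(examples)

# A trivial baseline that always picks the first candidate "succeeds" here
# only because the intended reading happens to be listed first; shuffling
# the candidates would drive it back to chance, i.e. 50%.
first_choice = lambda headline, candidates: candidates[0]
print(score(first_choice, EXAMPLES))  # 1.0
```

The point of the sketch is that the hard part is entirely inside `system`: picking the intended reading requires exactly the common sense the examples above are designed to probe.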
A more established test for detecting common sense in AI is the Winograd Schema. Mind Matters News contributor Brendan Dixon explains the challenge the schema addresses here. Gary Smith, author of The AI Delusion, talks about AI’s overall problem with ambiguity here.
Won’t the great AI of the future get around the problem of common sense? Maybe. That’s the goal of the gatherings known as the Winograd Schema Challenge, in which AI tries to unravel ambiguities it has not seen before. Smith notes in his fine book that AI success at these meetings is only a bit above 50% so far. Because random guessing on such problems also scores 50%, that is hardly an impressive figure.
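The 50% baseline is easy to verify. Winograd-style problems offer two candidate answers, so a coin-flip guesser converges to about 50% accuracy. A quick simulation (purely illustrative, with a made-up problem count):

```python
import random

def random_guesser_accuracy(n_problems=100_000, seed=0):
    """Simulate guessing at random between the two candidate answers
    of n_problems binary-choice questions; return the fraction correct."""
    rng = random.Random(seed)
    # Each guess is correct with probability 1/2, independent of the question.
    correct = sum(rng.random() < 0.5 for _ in range(n_problems))
    return correct / n_problems

print(f"{random_guesser_accuracy():.3f}")  # close to 0.500
```

That is why "a bit above 50%" is faint praise: the margin over a system with no understanding at all is small.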
The success rate on the Flubbed Headline Challenge will, I suspect, also be about 50% with current technology. So how is my Flubbed Headline Challenge an improvement over the Winograd Schema Challenge? It is funnier.
Note: The technical term for words that happen to have two or more separate meanings (“change,” for example) is polysemy. When an additional, perhaps risky, meaning is actually intended, that’s double entendre. For example, in a 19th-century spiritual, the expression “wade in the water” alluded both to baptism and to the water-based escape routes from slavery, but only the first meaning was safe to use. (Richard Nordquist, ThoughtCo)
More fun with computers and ambiguity:
Teaching computers common sense is very hard: Those fancy voice interfaces are little more than immense lookup tables guided by complex statistics.
AI is no match for ambiguity: Many simple sentences confuse AI but not humans (Robert J. Marks)
Computers’ stupidity makes them dangerous: The real danger today is not that computers are smarter than us, but that we think computers are smarter than us
Also: Why did Watson think Toronto was in the USA? How that happened tells us a lot about what AI can and can’t do, to this day.