Random Thoughts on Recent AI Headlines: Google Gives Away “Free” Cookies…
Also, why AI can’t predict the stock market or deal with windblown plastic bags

1. “Google Chrome has become surveillance software. It’s time to switch” (Silicon Valley) Our privacy continues to erode. Washington Post columnist Geoffrey A. Fowler writes “My tests of Chrome versus Firefox unearthed a personal data caper of absurd proportions. In a week of web surfing on my desktop, I discovered 11,189 requests for tracker ‘cookies’ that Chrome would have ushered right onto my computer, but were automatically blocked by Firefox.”
Firefox, here I come.
2. “Who to Sue When a Robot Loses Your Fortune” (Bloomberg). I’ve been approached a number of times by software-writing enthusiasts who sincerely believe they have developed AI that beats the stock market. John Marshall, PhD, a former professor of financial engineering and co-author of books such as Financial Engineering: A Complete Guide to Financial Innovation, told me how to respond to such claims: cut straight to the truth and ask the enthusiasts what kind of car they drive. If the answer falls short of a low-end Lamborghini, don’t buy what they’re selling.
Forecasting with AI such as neural networks does not work because stock market price tick data is not ergodic. Successful computer trading instead exploits arbitrage opportunities and pseudo-insider knowledge of big stock purchases or trades. No AI is needed here.
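To make the non-ergodicity point concrete, here is a minimal sketch of my own (synthetic data and made-up regime parameters, not anything from the article): a one-lag autoregressive predictor fit on one price regime degrades badly once the statistics of the series shift, which is exactly what real tick data keeps doing.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_regime(n, drift, vol):
    """Synthetic price increments for one market 'regime'."""
    return drift + vol * rng.standard_normal(n)

# Train on a calm, upward-drifting regime; test on a volatile, downward one.
train = make_regime(2000, drift=0.02, vol=0.5)
test = make_regime(2000, drift=-0.03, vol=2.0)

def fit_ar1(x):
    """Least-squares fit of x[t] ~ a * x[t-1] + b (one-lag autoregression)."""
    X = np.column_stack([x[:-1], np.ones(len(x) - 1)])
    (a, b), *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return a, b

def mse(x, a, b):
    """Mean squared one-step prediction error."""
    return np.mean((x[1:] - (a * x[:-1] + b)) ** 2)

a, b = fit_ar1(train)
print("in-regime MSE: ", mse(train, a, b))
print("out-of-regime MSE:", mse(test, a, b))  # much larger once the regime changes
```

The sketch only illustrates the general problem: parameters learned from one stretch of data stop describing the process when its statistics drift, so time averages from the past do not stand in for the future.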
3. “Scientific American: no consensus on smartphones’ effect on teen brains” (Mind Matters News) The debate over the effect of cell phones on the brain reminds me of the older debate about the health effects of living under high-voltage power lines. My friend, the late power engineering professor Mohamed El-Sharkawi, told the story of one such study. A single middle-aged man who worked under power lines had his blood pressure measured in the morning when he arrived at work and again in the evening, and the evening readings were always markedly higher. The researchers discarded their data when they realized the morning readings were taken by a male nurse and the evening readings by an attractive female RN.
The power line debate continues with anecdotal evidence on both sides. The question, though, remains unresolved. I suspect this will also be the case for the future debate over teen brains and cell phones.
4. “Does workplace automation improve service or merely cut costs?” (Mind Matters News) Good question. Anyone who has wasted time pushing phone buttons or talking to a machine to traverse stupid decision trees for online help knows that there are places where automation has worsened service. And please don’t tell me “Your call is very important to us” when your AI voice-recognition system shows me no more sensitivity than a toilet seat.
5. “How do you teach a car that a snowman won’t walk across the road?” (Aeon) A windblown plastic bag fooling a self-driving car is an example of a contingency that the writers of the AI code did not anticipate. Another example comes from IBM Watson’s appearance on Jeopardy: when a human contestant gave an incorrect response, Watson buzzed in and gave the exact same response. No seasoned human Jeopardy contestant would do that. These sorts of problems will increase as conjunctive AI systems increase in complexity. A good rule of thumb is that unintended contingencies increase exponentially as a function of AI complexity, as the toy calculation below suggests. The impact on the development of Artificial General Intelligence can be enormous—maybe prohibitive. We’ll see.
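As a toy version of that rule of thumb (my own numbers, purely illustrative): if an autonomous system must behave correctly under every combination of n independent yes/no conditions (night or day, rain or shine, plastic bag or rock, and so on), the contingency space it must cover is 2^n, so each added condition doubles it.

```python
# Toy calculation: contingencies vs. number of independent binary conditions.
# With n yes/no environmental factors, exhaustively anticipating behavior
# means covering 2**n combinations -- exponential growth in n.
for n in (5, 10, 20, 30, 40):
    print(f"{n:2d} binary conditions -> {2**n:,} distinct situations")
```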
6. “No AI in humor: R2-D2 walks into a bar, doesn’t get the joke” (Associated Press) I’ve often wondered whether AI can write really funny jokes. When I say, “Alexa, tell me a joke,” I usually get a one-dimensional pun—the lowest form of humor other than vulgarity. Most are oldies but goodies. Where do these jokes come from? “Most … tech companies—Apple, Google, Amazon and Microsoft—have a specific team thinking up these tidbits of joy.”
Will AI ever construct even simple one-dimensional puns on its own? They look to be the lowest-hanging fruit. For example: “What has four wheels and flies?” A garbage truck.
Higher-dimensional puns, requiring a play on more than one word, are funnier but require more creativity. Confusing the Christian hymn “Gladly the Cross I’d Bear” with a children’s book “Gladly, the Cross-Eyed Bear” still makes me smile.
Paraprosdokian humor is a form of joke in which the second part of a sentence unexpectedly changes the meaning of the first. Examples include Groucho Marx’s “I’ve had a perfectly wonderful evening, but this wasn’t it” and Mitch Hedberg’s “I haven’t slept for ten days, because that would be too long.”
A Markov process branches from its current state to possible next states with different probabilities. Paraprosdokian humor seems to take a meaningful branch from the initial idea with low but non-zero probability, and we find that funny. Are any AI researchers up for giving this a try? How about puns? And no, I don’t even know where to begin.
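Here is a toy sketch of that framing, with invented probabilities of my own: a tiny Markov chain over joke continuations that usually picks the expected ending but occasionally samples the meaningful, low-probability twist that makes a paraprosdokian land.

```python
import random

# Toy Markov chain over joke continuations. The branch probabilities are
# invented for illustration; the "twist" branch is meaningful but unlikely.
setup = "I've had a perfectly wonderful evening"
continuations = [
    ("and I thank you for it.", 0.90),  # expected, unfunny branch
    ("but this wasn't it.",     0.10),  # low-probability twist (the joke)
]

def sample(branches):
    """Pick one continuation according to its branch probability."""
    endings, weights = zip(*branches)
    return random.choices(endings, weights=weights, k=1)[0]

random.seed(1)
for _ in range(5):
    print(setup + ", " + sample(continuations))
```

Whether sampling an improbable branch is actually funny, rather than merely odd, is of course the hard part.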
The computer won’t laugh, because computers have no sense of humor. But having a sense of humor is not the same thing as being humorous. Then again, I also know people who crack me up but are clueless as to why.
Also by Robert J. Marks: Random Thoughts on Recent AI Headlines (March 18, 2019)