Software engineer Gregory Coppola bravely exposed the fact that political views influence what Google's search engine displays. The surprise is not the bias itself but its severity.
Just as Big Mac buns will always have sesame seeds, AI algorithms will always have bias. Every computer program ever written, including Google’s news filtering algorithm, has bias. At times, the bias is so obvious that we are numbed by familiarity and don’t notice. At other times, the bias is slanted in a secret, unethical, or even unlawful manner.
Some biases are unintentional. In 2015, Google's image-recognition software labeled a black software developer and a friend as gorillas. Google immediately apologized and applied a quick fix by blocking the labels "gorilla" and "chimpanzee" from its image recognition algorithm. Unintentional bias can be fixed once it is identified. But those who have an intentional bias — think of CEOs of cigarette manufacturers testifying at a congressional hearing — can sneakily try to avoid detection and scrutiny.
All computer algorithms are biased by design: programs are biased to perform whatever tasks their programmers tell them to do. The need for bias was first explicitly noted by Tom Mitchell about forty years ago in "The Need for Biases in Learning Generalizations."1 Twenty-five years ago computer scientist Cullen Schaffer noted, in reference to machine learning, that "a learner… that achieves at least mildly better-than-chance performance [without bias]… is like a perpetual motion machine."2 In machine learning, the amount of infused bias can even be measured in bits.3 Any attempt at machine learning or search-engine data mining4 without bias is "futile."5
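Schaffer's conservation law is easy to verify at toy scale. The enumeration below is a minimal illustrative sketch (not code from any of the cited papers): it averages a fixed predictor's accuracy over every possible target function on eight unseen 3-bit inputs. With no assumption favoring some targets over others, every predictor — however clever — lands at exactly chance.

```python
import itertools

# All 3-bit inputs the learner has NOT seen during training.
unseen = list(itertools.product([0, 1], repeat=3))

def avg_accuracy_on_unseen(predict):
    """Average accuracy of a fixed predictor over every possible
    labeling of the unseen inputs (all targets equally likely)."""
    labelings = list(itertools.product([0, 1], repeat=len(unseen)))
    total = 0.0
    for labels in labelings:
        hits = sum(predict(x) == y for x, y in zip(unseen, labels))
        total += hits / len(unseen)
    return total / len(labelings)

# Two very different learners: one biased toward the constant
# function, one biased toward parity functions...
always_zero = lambda x: 0
parity = lambda x: sum(x) % 2

# ...and both average exactly chance over all possible targets:
print(avg_accuracy_on_unseen(always_zero))  # 0.5
print(avg_accuracy_on_unseen(parity))       # 0.5
```

Only when the learner's bias happens to match the actual target — when some targets are more likely than others — can it beat that 50 percent average.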
Is bias resident in all computer programs? Yes. Programming a computer to add and multiply numbers means biasing it toward adding and multiplying numbers. Here the bias is so obvious it almost escapes recognition — the way you don't notice how your foot feels in your shoe until you think about it. Like many things, bias is neither good nor bad; how it is used is what matters. I like bias against filthy words, explicit pornography, and references to novels by Dan Brown.
The takeaway is obvious. In ranking news and censoring content, artificial intelligence will never be "fair and balanced" toward everyone, because fairness is perceived differently by different people. One person's critical analysis is another's hate speech. One person appreciates nudes in art; another is disgusted by pornography. One American raises a fist and declares "MAGA!" while another screams "Impeach!"
So how can search engines and AI be fair when everyone biases their code to one degree or another? One answer: either announce and celebrate your bias, or show us your ranking and sorting algorithms so we can see the bias for ourselves. Disinterested computer nerds, smarter than I am, can analyze the code and let us know. But hmmm. Won't these computer nerds' reports themselves be biased?
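To see how bias can hide inside an otherwise innocuous ranking algorithm — and why publishing the code would expose it — here is a toy news ranker. All outlet names, field names, and weights are hypothetical, invented purely for illustration; this is emphatically not Google's algorithm.

```python
# Editorial bias hides in a hand-tuned multiplier table. It is
# invisible to users but obvious to anyone who reads the code.
SOURCE_WEIGHT = {
    "outlet_a": 1.5,   # quietly boosted
    "outlet_b": 0.6,   # quietly buried
}

def score(article):
    """Freshness-weighted clicks, then a silent per-outlet multiplier."""
    base = article["clicks"] / (1 + article["age_hours"])
    return base * SOURCE_WEIGHT.get(article["source"], 1.0)

stories = [
    {"source": "outlet_a", "clicks": 100, "age_hours": 4},
    {"source": "outlet_b", "clicks": 180, "age_hours": 4},
]
ranked = sorted(stories, key=score, reverse=True)
print([s["source"] for s in ranked])  # ['outlet_a', 'outlet_b']
```

Note that outlet_a ranks first despite having far fewer clicks, solely because of the weight table — exactly the kind of bias that code transparency would reveal.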
Luckily, performance history and reputation play a big role in trust. Most know that HuffPost leans sharply left and that Ben Shapiro's Daily Wire leans right. Their biases are obvious and known. In the same way, Google should come clean and confess to the world where its biases lie.
1 Tom M. Mitchell, "The Need for Biases in Learning Generalizations." New Brunswick, NJ: Department of Computer Science, Laboratory for Computer Science Research, Rutgers University, 1980.
2 Cullen Schaffer, "A Conservation Law for Generalization Performance," in Proceedings of the Eleventh International Conference on Machine Learning, ed. William W. Cohen and Haym Hirsh. San Francisco: Morgan Kaufmann, 1994, pp. 259–265.
3 William A. Dembski and Robert J. Marks II, "Conservation of Information in Search: Measuring the Cost of Success," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 39, no. 5 (2009): 1051–1061; Robert J. Marks II, William A. Dembski, and Winston Ewert, Introduction to Evolutionary Informatics. World Scientific, 2017.
4 George D. Montanez, Jonathan Hayase, Julius Lauw, Dominique Macias, Akshay Trikha, and Julia Vendemiatti, "The Futility of Bias-Free Learning and Search," arXiv preprint arXiv:1907.06010 (2019).
5 George D. Montanez, "Why Machine Learning Works." Ph.D. dissertation, Carnegie Mellon University, 2017.