How Google’s LaMDA solved an old conflict in AI


In the movie Fiddler on the Roof, there is a scene in which a dispute is being argued. After listening to the cases made, a bystander agrees with the conclusions of both sides of the conflict. Someone points out that “they can't both be right!” to which the agreeable listener replies, “You know, you're right, too.”

Interestingly, the assumption that the two sides of an issue must be opposites is not always true. Two positions may seem to conflict with each other and yet both be right. Sometimes, but not always. The classic example is the parable of the blind men and the elephant. After feeling the elephant's leg, one blind man says that the elephant is like a tree. After feeling the elephant's tail, another says that the elephant is like a rope. The blind men may argue, but both are right.

John Polkinghorne, a Cambridge University physics professor who became an Anglican priest, used this observation to reconcile apparent discrepancies between science and faith. As an example, he cites the old debate in physics over whether light is a particle or a wave. The seemingly irreconcilable conflict was resolved by quantum mechanics, which showed that light has both particle and wave properties. Both sides were right.

Here's another example. In Christianity, there is a long-running debate about whether we are predestined (the Calvinist view) or have free will (the Arminian view). Can both sides be right? Some argue that the debate is resolved when perspective is taken into account.

Most agree that God exists outside of time, so God has access to the entire timeline. He knows exactly where you and I will be and what each of us will be doing a year from now. That is predestination. You and I, on the other hand, are forced to move through time and are free to make choices. From our point of view, we have free will. Some argue that these two different perspectives offer a resolution of the apparent conflict between free will and predestination.

In the 1980s, a conflict arose in the field of artificial intelligence (AI). At the time, AI did not refer to neural networks; it referred narrowly to so-called expert systems. Expert systems essentially polled human experts and coded their answers. Follow-up questions made it possible to construct decision trees that arrive at definitive answers. Neural networks, on the other hand, learn from large amounts of training data without expert elaboration, as the sketch below illustrates.
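
To make the contrast concrete, here is a minimal sketch; it is not drawn from any particular system, and the rules, data, and function names are hypothetical illustrations. The expert system hard-codes a decision tree distilled from human experts, while the tiny single-neuron network learns its parameters from labeled examples.

```python
# Hypothetical illustration: an expert system hard-codes a decision tree
# distilled from interviews with human experts.
def expert_diagnosis(fever: bool, cough: bool) -> str:
    if fever:
        return "flu" if cough else "infection"
    return "cold" if cough else "healthy"

# A neural network learns the same kind of mapping from data instead.
def train_perceptron(examples, epochs=100, lr=0.1):
    """Tiny single-neuron network: weights come from data, not experts."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred                # perceptron learning rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Hypothetical training data: (fever, cough) -> has_flu
data = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
weights, bias = train_perceptron(data)
```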

The neural network community of the day and its sister communities wanted an identity separate from expert systems. They settled on computational intelligence as the name for their discipline. I wrote an editorial in 1993 spelling out the difference between computational and artificial intelligence as it was then defined.

As described in my book Non-Computable You, the battle between expert systems and neural network proponents was fierce in the 1980s. In the expert systems camp, Marvin Minsky (1927–2016) and Seymour Papert (1928–2016) wrote a scathing critique of neural networks in their 1969 book Perceptrons. Minsky, at MIT, had influence. He was instrumental in founding what is known today as the MIT Computer Science and Artificial Intelligence Laboratory. The conflict eventually dried up funding for both sides in the United States and Europe and sparked what some call the first AI winter. To quote Fiddler on the Roof again, “If you spit in the air, it lands in your face.” Minsky and Papert spat in the air, and the funding, including theirs, evaporated.

Now let's look at Google's impressive chatbot LaMDA. The acronym stands for Language Models for Dialog Applications. As the name implies, the chatbot is specially designed for dialogue. That's why dialogue with LaMDA is so good: dialogue with people is what LaMDA is trained for.

In an informative paper co-authored by more than fifty people, including Ray Kurzweil, LaMDA is described as “a family of Transformer-based neural language models specialized for dialog.” A transformer is a type of neural network often used in natural language processing.
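
For readers who want a feel for the mechanism, here is a minimal sketch of the self-attention step at a transformer's core, assuming only NumPy. Real transformers add learned query/key/value projections, multiple attention heads, and many stacked layers; this toy version applies attention to the raw token vectors directly.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of token vectors.

    X has shape (sequence_length, model_dim). Each output row is a
    weighted mix of all input rows, so every token can draw context
    from every other token in the sequence.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ X

tokens = np.random.randn(5, 8)   # 5 tokens, 8-dimensional embeddings
mixed = self_attention(tokens)   # same shape, now context-aware
```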

LaMDA was trained using an amalgamation of experts with neural networks, making the historical conflict between expert systems and neural networks look silly today. Here's how it works. After pre-training, LaMDA is fine-tuned on human dialogue. The humans, called crowdworkers in the paper, had thousands of back-and-forth conversations with LaMDA. For example, “we collect 6400 dialogs with 121K turns by asking crowdworkers to interact with a LaMDA instance about any topic.” Crowdworkers were asked to interact “in a safe, sensible, specific, interesting, grounded, and informative manner.” The crowdworkers were also asked to rate the quality of LaMDA's responses. LaMDA was updated according to measures of the responses such as sensibleness, specificity, groundedness, interestingness, and informativeness.
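
As a rough sketch of that loop, and only a sketch: the `score_response` function and the 0.8 threshold below are hypothetical stand-ins, not the paper's actual code. The idea is that candidate training dialogues are scored on the quality metrics and low-scoring turns are discarded before further fine-tuning.

```python
# Hypothetical sketch of metric-based filtering of crowdworker dialogues.
METRICS = ["sensibleness", "specificity", "interestingness",
           "safety", "groundedness", "informativeness"]

def filter_dialogues(dialogues, score_response, threshold=0.8):
    """Keep (context, response) pairs whose mean metric score clears the bar."""
    kept = []
    for context, response in dialogues:
        scores = score_response(context, response)  # metric name -> value in [0, 1]
        if sum(scores[m] for m in METRICS) / len(METRICS) >= threshold:
            kept.append((context, response))
    return kept

# The surviving pairs would then feed another round of supervised fine-tuning:
#   fine_tune(base_model, filter_dialogues(crowdworker_dialogues, scorer))
```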

LaMDA was not the first AI to combine subject-matter experts with neural networks, but it is the most visible. So-called fuzzy expert systems have been reduced to practice in air conditioners, washing machines, vacuum cleaners, rice cookers, microwave ovens, clothes dryers, electric fans, and refrigerators. Parameters for such devices can be heuristically initialized and then fine-tuned like a neural network for optimal performance.
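
Here is a minimal sketch of that initialize-then-tune idea, using a made-up fan-speed rule; all numbers and names are hypothetical. An expert heuristic sets the starting parameters of a smooth fuzzy-style rule, and gradient descent then fine-tunes them against measured data, exactly as one would train a one-neuron network.

```python
import numpy as np

def fan_speed(temp, center, slope):
    """Fuzzy-style rule: speed rises smoothly once temp exceeds `center`."""
    return 1.0 / (1.0 + np.exp(-slope * (temp - center)))

# Heuristic initialization from an "expert": ramp up around 25 degrees C.
center, slope = 25.0, 1.0

# Fine-tune on (hypothetical) measurements by gradient descent on MSE.
temps = np.array([18.0, 22.0, 26.0, 30.0])
targets = np.array([0.05, 0.2, 0.7, 0.95])   # desired speeds in [0, 1]
lr = 0.01
for _ in range(1000):
    pred = fan_speed(temps, center, slope)
    grad = 2 * (pred - targets) / len(temps)  # d(MSE)/d(pred)
    dsig = pred * (1 - pred)                  # sigmoid derivative
    center -= lr * np.sum(grad * dsig * -slope)
    slope -= lr * np.sum(grad * dsig * (temps - center))
```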

As initially disjoint disciplines in the large arena of AI research mature, they broaden. Eventually they may intersect. Such is the case with the previously disjoint AI fields of expert systems and artificial neural networks.

Here's the scene from Fiddler on the Roof: [embedded video]