Bender is not opposed to using language models for question answering in all cases. She has a Google Assistant in her kitchen, which she uses to convert units of measure while cooking. "Being able to use voice to access information is very convenient," she says.
But Shah and Bender also point to a more troubling example that surfaced last year, when Google answered the query "What is the worst language in India?" with a snippet reading "The answer is Kannada, a language spoken by around 40 million people in South India."
There are no easy answers
There is a tension here. Direct answers may be convenient, but they are also often wrong, irrelevant, or offensive. They can hide the complexity of the real world, says Benno Stein of Bauhaus University in Weimar, Germany. In 2020, Stein and his colleagues Martin Potthast, at Leipzig University, and Matthias Hagen, at Martin Luther University Halle-Wittenberg, Germany, published a paper highlighting the problems with direct answers. "The answer to most questions is 'It depends,'" says Hagen. "That is difficult to get across to the person searching."
Stein and his colleagues see search technology as having moved from organizing and filtering information, through techniques such as providing a list of documents matching a search query or making recommendations, to offering a single answer to a question. And they think that is a step too far.
Once again, the problem is not the limitations of existing technology. Even with perfect technology, we would not get accurate answers, says Stein: "We don't know what a good answer is, because the world is complicated, but we stop thinking when we see these direct answers."
Shah agrees. Providing people with a single answer can be problematic because it hides the sources of the information and any disagreements between them, he says: "It really hinges on placing complete trust in these systems."
Shah and Bender suggest a number of solutions to the problems they anticipate. In general, search technologies should support the variety of ways people use search engines today, many of which are not served by direct answers. People often use search to explore topics about which they may not even have specific questions, says Shah. In that case, simply offering a list of documents would be more helpful.
It should be clear where information comes from, especially if an AI draws fragments from more than one source. Some voice assistants already do this, for example prefacing an answer with "Here's what I found on Wikipedia." Future search tools should also be able to say "That's a dumb question," Shah says. That would help the technology avoid parroting offensive or biased premises in a query.
Stein suggests that AI-based search engines could present the reasoning behind their answers, giving the pros and cons of different points of view.
However, many of these suggestions simply highlight the tension that Stein and his colleagues identified. Anything that makes the experience less convenient will be less attractive to most users. "If people don't even click through to the second page of Google results, they won't want to read different arguments," says Stein.
Google says it is aware of many of the issues these researchers raise and is working hard to develop technology that people find useful. But Google is the developer of a multibillion-dollar service. Ultimately, it will build the tools that attract the most people.
Stein hopes it won't come down to that alone. "Search is so important to us, to society," he says.