Pineau has helped change the way research is published, pushing researchers to submit a checklist of materials along with their results, including details of their code and of how experiments were run. Since she joined Meta (then Facebook) in 2017, she has championed that culture in its AI lab.
“That commitment to open science is why I’m here,” she says. “I wouldn’t be here on any other terms.”
Ultimately, Pineau wants to change how we judge AI. “What we call state of the art nowadays can’t just be about performance,” she says. “It has to be state of the art in terms of responsibility as well.”
Still, giving away a large language model is a bold move for Meta. “I can’t tell you that there’s no risk of this model producing language that we’re not proud of,” says Pineau. “It will.”
Weighing the risks
Margaret Mitchell, one of the AI ethics researchers Google forced out in 2020, who is now at Hugging Face, sees the release of OPT as a positive move. But she thinks there are limits to transparency. Has the language model been tested with sufficient rigor? Do the predictable benefits outweigh its predictable harms, such as the generation of misinformation, or racist and abusive language?
“Releasing a large language model to the world, where a wide audience is likely to use it or be affected by its output, comes with responsibilities,” she says. Mitchell notes that the model can generate harmful content not only by itself, but also through downstream applications that researchers build on top of it.
Meta AI audited OPT to remove some harmful behaviors, but the point, says Pineau, is to release a model that researchers can learn from, warts and all.
“There were a lot of conversations about how to do that in a way that lets us sleep at night, knowing that there’s a non-zero risk in terms of reputation, a non-zero risk in terms of harm,” she says. She dismisses the idea that you shouldn’t release a model because it’s too risky, which was the reason OpenAI gave for not releasing GPT-3’s predecessor, GPT-2. “I understand the weaknesses of these models, but that’s not a research mindset,” she says.