But most of Oasis’ plans remain, at best, idealistic. One example is the proposal to use machine learning to detect harassment and hateful speech. As my colleague Karen Hao reported last year, AI models tend either to let hate speech slip through or to overcorrect and flag benign speech. Nevertheless, Wang defends Oasis’ promotion of AI as a central tool. “AI is only as good as the data it gets,” she says. “Platforms share many different moderation practices, but all work toward safety through better accuracy, faster response, and safety by design.”
The document itself is seven pages long and outlines future goals for the consortium. Much of it reads like a mission statement, and Wang says the first few months of work will focus on creating advisory groups to help set those goals.
Other components of the plan, such as its content moderation strategy, are unclear. Wang says she wants companies to hire a diverse range of content moderators so they can understand and address the harassment of people of color and people who are not men. But the plan offers no further steps toward achieving that goal.
The consortium will also expect member companies to share data on which users are abusive, which is important for identifying repeat offenders. Wang says the participating tech companies will partner with nonprofits, government agencies, and law enforcement to help shape safety policies. She also plans for Oasis to have a law enforcement response team, whose job will be to report harassment and abuse to police. But it remains unclear how the task force’s work will differ from the status quo with law enforcement.
Balancing privacy and security
Despite the lack of concrete details, the experts I spoke to consider the consortium’s standards document a good first step. “It’s good that Oasis is looking at self-regulation, starting with the people who know the systems and their limitations,” says Brittan Heller, a lawyer specializing in technology and human rights.
This is not the first time tech companies have worked together in this way. In 2017, some agreed to exchange information freely through the Global Internet Forum to Counter Terrorism. Today, GIFCT operates independently, and the companies that sign on to it self-regulate.
Lucy Sparrow, a researcher at the University of Melbourne’s School of Computing and Information Systems, says what Oasis has going for it is that it offers companies something to work with, rather than waiting for them to come up with the language themselves or for a third party to do that work.
Sparrow adds that baking ethics into design from the start, as Oasis is pushing for, is admirable, and that her research into multiplayer game systems shows it makes a difference. “Ethics tends to get pushed to the side, but here, they [Oasis] are encouraging people to think about ethics from the beginning,” she says.
But Heller says ethical design may not be enough. She suggests that tech companies rework their terms of service, which have been heavily criticized for taking advantage of consumers who lack legal expertise.
Sparrow agrees, saying she is reluctant to believe that a group of tech companies will act in consumers’ best interests. “It really raises two questions,” she says. “One, how much do we trust capital-driven corporations to control safety? And two, how much control do we want tech companies to have over our virtual lives?”
It’s a sticky situation, especially since users have a right to both safety and privacy, yet those needs can be in tension.