For example, Melanie Dawes, chief executive of Ofcom, which regulates social media in the UK, has said that social media platforms should have to explain how their code works. And the European Union's recently agreed Digital Services Act, finalized on April 23, will similarly compel platforms to provide transparency about their algorithms. In the US, Democratic senators introduced the Algorithmic Accountability Act in February 2022, aiming to bring new transparency and oversight to the algorithms that govern our timelines, our news feeds, and much more.
Making Twitter's algorithm visible to all, and adaptable by competitors, in principle means that someone could simply copy Twitter's source code and release a rebranded version. Much of the internet already runs on open-source software; the most notorious example is OpenSSL, a security toolkit used by large swaths of the web, which suffered a major security breach in 2014.
There are already examples of open-source social networks. Mastodon, a microblogging platform set up in response to concerns about Twitter's dominant position, lets anyone inspect its code, which is posted on the software repository GitHub.
But seeing the code behind an algorithm doesn't necessarily tell you how it works, and it certainly doesn't give the average person much insight into the business structures and processes that go into its creation.
"It's a bit like trying to understand ancient organisms from genetic material alone," says Jonathan Gray, a senior lecturer in critical infrastructure studies at King's College London. "It tells us more than nothing, but it would be a stretch to say we know how they live."
There is also no single algorithm controlling Twitter. "Some of them will determine what people see in their timelines, in terms of trends, content, or suggested follows," says Catherine Flick, a researcher in computing and social responsibility at De Montfort University in the UK. The algorithm that decides which content appears in users' timelines is the one most people are primarily interested in, but even that code wouldn't be very useful without the data it was trained on.
"Most of the time when people talk about algorithmic accountability these days, we understand that the algorithms themselves aren't necessarily what we want to see; what we really want is information about how they were developed," says Jennifer Cobbe, a postdoctoral research associate at the University of Cambridge. That is largely because of concerns that AI algorithms can perpetuate the human biases present in the data used to train them. Who develops an algorithm, and what data they use, can make a significant difference in the results.
For Cobbe, the risks outweigh the potential benefits. Computer code gives no insight into how an algorithm was trained or tested, what factors or considerations went into its design, or what was prioritized in the process, so open-sourcing it may not make a meaningful difference to transparency at Twitter. Meanwhile, it could introduce some significant security risks.
Companies often publish impact assessments that probe and test their data protection systems to highlight weaknesses and flaws. When flaws are detected, they get fixed, but details are frequently redacted to prevent security risks. Open-sourcing Twitter's algorithms would make the website's entire code base accessible to all, allowing would-be bad actors to pore over the software and hunt for vulnerabilities to exploit.
"I don't believe for a moment that Elon Musk envisages open-sourcing all of Twitter's infrastructure and security aspects," says Eerke Boiten, a professor of cybersecurity at De Montfort University.