Building a better society with better AI

“As human beings, we are very biased,” says Beena Ammanath, global head of the Deloitte AI Institute and technology and AI ethics lead at Deloitte. “And as these biases get built into the systems, there is a very high probability that certain groups will be left behind – minorities, people who do not have access to certain resources – and that could lead to greater inequality in the world.”

Projects that start with good intentions – creating equitable outcomes or reducing past inequities – can still end up biased if the system is trained on biased data, or if researchers fail to account for how their own perspectives shape the work.

So far, responses to AI bias have largely been reactive, coming only after biased algorithms or underrepresented demographics are discovered after the fact, says Ammanath. Companies now have to learn how to be proactive, mitigate these problems early, and take responsibility for errors in their AI efforts.

Algorithmic bias in AI

In AI, bias appears in the form of algorithmic bias. Kirk Bresniker, chief architect at Hewlett Packard Labs and vice president at Hewlett Packard Enterprise (HPE), explains that “algorithmic bias is a set of challenges in building an AI model. We may have a challenge because we have an algorithm that is not able to handle diverse inputs, or because we have not collected a comprehensive enough set of data to include in our model training. In either case, we have insufficient data.”

Algorithmic bias can also be caused by inaccurate processing, alteration of data, or the injection of a false signal. Bias, whether intentional or not, results in unfair outcomes, perhaps privileging one group or excluding another.

For example, Ammanath describes an algorithm designed to identify different types of shoes, such as flip-flops, sandals, formal shoes, and sneakers. However, when it was released, the algorithm could not identify women’s shoes with heels. The development team was a group of recent college graduates – all men – who had never thought to train it on women’s heels.

“This is a small example, but you realize the data set was limited,” said Ammanath. “Now consider a similar algorithm using historical data to diagnose a disease or illness. What if it is not trained on certain body types or certain races? Those effects are huge.”

Critically, she says, if you don’t have that diversity at the table, you’ll miss certain scenarios.
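
The shoe example points to a simple proactive check teams could run before release. The sketch below is illustrative only – the class names, labels, and threshold are assumptions, not from the article – but it shows how auditing class coverage in a training set would have flagged that women’s heels were never represented:

```python
from collections import Counter

# Hypothetical categories a shoe classifier is expected to recognize.
EXPECTED_CLASSES = {"flip_flop", "sandal", "formal", "sneaker", "heel"}

def audit_class_coverage(labels, expected=EXPECTED_CLASSES, min_fraction=0.05):
    """Report classes that are missing or underrepresented in the training labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for cls in expected:
        fraction = counts.get(cls, 0) / total if total else 0.0
        if fraction < min_fraction:
            report[cls] = fraction
    return report  # e.g. {"heel": 0.0} if no heels were ever labeled

# Example: a dataset collected without any women's heels.
labels = ["sneaker"] * 400 + ["sandal"] * 300 + ["flip_flop"] * 200 + ["formal"] * 100
print(audit_class_coverage(labels))  # -> {'heel': 0.0}
```

The same idea extends beyond output classes: the audit could just as well count demographic attributes in the data, surfacing the kinds of gaps Ammanath describes before a model ever reaches users.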
