Bias in AI is spreading and it’s time to fix the problem



This article was contributed by Lauren Goodman, co-founder and CTO of InRule Technology.

Traditional machine learning (ML) does only one thing: it makes predictions based on historical information.

Machine learning begins with the analysis of a table of historical data to produce what is called a model; this process is known as training. Once the model is created, a new row of data can be fed into it and a prediction is returned. For example, you can train a model on a list of housing transactions and then use it to predict the selling price of a building that has not yet been sold.
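
To make the train-then-predict loop concrete, here is a minimal sketch in Python using scikit-learn. The housing columns (square_feet, bedrooms, year_built, sale_price) and the gradient-boosted model are illustrative assumptions, not details from any particular system.

```python
# A minimal sketch of the train-then-predict workflow described above.
# Column names and values are invented for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Historical data: each row is a past sale.
history = pd.DataFrame({
    "square_feet": [1400, 2100, 900, 1750],
    "bedrooms":    [3, 4, 2, 3],
    "year_built":  [1995, 2008, 1972, 2001],
    "sale_price":  [250_000, 410_000, 160_000, 320_000],
})

# "Training" produces the model from the historical table.
model = GradientBoostingRegressor().fit(
    history.drop(columns="sale_price"), history["sale_price"]
)

# A new row (an unsold building) is fed in and a forecast comes back.
unsold = pd.DataFrame({"square_feet": [1600], "bedrooms": [3], "year_built": [1999]})
print(model.predict(unsold))  # predicted selling price
```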

There are two primary problems in machine learning today. The first is the “black box” problem. Machine learning models can make extremely accurate predictions, but they lack the ability to explain the reasoning behind a prediction in terms that humans can understand. All a machine learning model gives you is a prediction and a score indicating confidence in that prediction.
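
A short sketch of that “prediction plus confidence score” behavior, assuming scikit-learn and invented applicant data; note that the model offers no human-readable rationale alongside the numbers.

```python
# The model returns a class and a probability, but no "why".
from sklearn.ensemble import RandomForestClassifier

X = [[25, 40_000], [47, 95_000], [33, 58_000], [52, 120_000]]  # e.g., age, income
y = [0, 1, 0, 1]                                               # e.g., loan approved?

clf = RandomForestClassifier(random_state=0).fit(X, y)

applicant = [[38, 70_000]]
print(clf.predict(applicant))        # the prediction: a class label
print(clf.predict_proba(applicant))  # the confidence score, with no explanation
```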

Second, machine learning cannot think beyond the data used to train it. If historical bias exists in the training data, then, if unchecked, that bias will be present in the predictions. While machine learning offers exciting opportunities for both consumers and businesses, the historical data on which these algorithms are built can be fraught with inherent biases.

The cause for alarm is that business decision-makers do not have an effective way of seeing the biased practices encoded in their models. For that reason, there is an urgent need to understand which biases are hidden in the source data. On top of that, human-in-the-loop governors need to be established as a safeguard against actions triggered by machine learning predictions.
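
As a hypothetical illustration of such an audit, a first pass can be as simple as comparing historical outcome rates across groups before any model is trained; the column names below are invented for the example.

```python
# A first-pass audit of the training data itself: compare historical
# outcome rates across a protected group before training on the data.
import pandas as pd

training_data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1],
})

# Approval rate per group; a large gap is a red flag that a model
# trained on this table will learn and reproduce the pattern.
rates = training_data.groupby("group")["approved"].mean()
print(rates)
print("disparity:", rates.max() - rates.min())
```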

Biased predictions lead to biased actions and, as a result, we “breathe our own exhaust.” We continuously make decisions based on biased actions that resulted from earlier biased decisions. This creates a self-perpetuating cycle, compounding the problem with each prediction over time. The sooner you find and eliminate the bias, the faster you reduce risk and expand your market to previously denied opportunities. Those who fail to address bias now are exposing themselves to a myriad of future unknowns with respect to risk, penalties, and lost revenue.

Demographic patterns in financial services

Demographic patterns and trends can also feed further bias into the financial services industry. A famous example came in 2019, when web programmer and author David Heinemeier Hansson shared his outrage on Twitter that Apple’s credit card offered him 20 times the credit limit of his wife, even though the couple files joint tax returns.

There are two things to keep in mind about this example:

  • The underwriting process was found to be compliant with the law. Why? Because there are currently no U.S. laws around bias in AI, since the subject is seen as highly subjective.
  • To properly train these models, the historical biases need to be made visible to the algorithms; otherwise, the AI cannot know that it is biased and cannot correct its mistakes (see the sketch after this list). Doing so fixes the “breathing our own exhaust” problem and yields better predictions for tomorrow.
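
The sketch below illustrates the second bullet under stated assumptions: the protected attribute (here an invented “sex” column) is held out of the model’s inputs but must still be available at evaluation time, or the disparity can never be measured, let alone corrected.

```python
# Train WITHOUT the protected attribute, but keep it available so
# bias in the predictions can be measured. Data is invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "income": [40, 95, 58, 120, 45, 88],
    "label":  [0, 1, 0, 1, 0, 1],
    "sex":    ["F", "M", "F", "M", "F", "M"],  # protected attribute
})

model = LogisticRegression().fit(df[["income"]], df["label"])
df["pred"] = model.predict(df[["income"]])

# Without this step, any disparity stays invisible.
print(df.groupby("sex")["pred"].mean())
```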

The real-world impact of AI bias

Machine learning is used in a variety of public-sector applications. In particular, scrutiny of social service programs such as Medicaid, housing assistance, and Supplemental Security Income is on the rise. The historical data these programs rely on may be riddled with bias, and relying on that data in machine learning models perpetuates the bias. However, awareness of potential bias is the first step toward correcting it.

A popular algorithm used by many large U.S. health care systems to screen patients for high-risk care management intervention programs turned out to discriminate against Black patients because it relied on data about the cost of treating patients. The model did not account for racial disparities in access to healthcare, which contributed to lower spending on Black patients than on similarly diagnosed white patients. According to Ziad Obermeyer, an acting associate professor at the University of California, Berkeley, “cost is a reasonable proxy for health, but it is a biased one, and that choice actually introduces bias into the algorithm.”
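
A toy illustration of that proxy problem, with entirely invented numbers: if “risk” is defined as predicted or observed cost, an equally sick patient from a historically lower-spending group can fall below a care-management threshold.

```python
# Two patients are equally sick, but one group historically incurs
# lower costs (e.g., due to unequal access to care). Defining "need"
# as cost then under-flags that group. All numbers are invented.
import pandas as pd

patients = pd.DataFrame({
    "group":              ["white", "black"],
    "chronic_conditions": [4, 4],              # same underlying health
    "annual_cost":        [12_000, 7_000],     # unequal spending due to access
})

threshold = 10_000
patients["flagged_for_care"] = patients["annual_cost"] >= threshold
print(patients)  # the equally sick patient in the lower-cost group is missed
```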

Another widely cited case shows that judges in Florida and several other states relied on a machine-learning-powered tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to estimate recidivism rates for inmates. However, numerous studies challenged the accuracy of the algorithm and exposed racial bias, even though race was not included as an input to the model.
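
A toy sketch of why excluding race as an input is not enough: when a remaining feature (here an invented zip_code column) correlates strongly with race, a model trained without race still splits along racial lines.

```python
# Race is dropped from the features, yet a correlated proxy remains.
# Data is invented to make the proxy effect obvious.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({
    "zip_code":   [1, 1, 1, 2, 2, 2],   # proxy: segregated neighborhoods
    "race":       ["black"] * 3 + ["white"] * 3,
    "reoffended": [1, 1, 0, 0, 0, 1],   # biased historical labels
})

# Race is excluded from the model's inputs...
model = DecisionTreeClassifier().fit(df[["zip_code"]], df["reoffended"])
df["risk"] = model.predict(df[["zip_code"]])

# ...yet predicted risk still differs sharply by race via the proxy.
print(df.groupby("race")["risk"].mean())
```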

Removing bias

The solution to AI bias in models? Put people at the helm of deciding when to take real-world actions based on machine learning predictions. Explainability and transparency are critical for letting people understand why AI systems make certain decisions. By surfacing the logic and factors affecting ML predictions, algorithmic biases can be brought to light, and decisions can be adjusted to avoid costly penalties or harsh backlash on social media.
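
One sketch of that transparency step, using scikit-learn’s permutation importance on invented lending data: surfacing which factors drive predictions lets a human reviewer spot a suspect proxy before acting on the output.

```python
# Surface which features drive the model so a human can review them.
# Data and feature names are invented for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

df = pd.DataFrame({
    "income":   [40, 95, 58, 120, 45, 88, 52, 110],
    "zip_code": [1, 2, 1, 2, 1, 2, 1, 2],   # potential proxy feature
    "approved": [0, 1, 0, 1, 0, 1, 1, 1],
})
X, y = df[["income", "zip_code"]], df["approved"]

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# If a proxy like zip_code dominates, that is a flag for human review
# before the prediction is turned into a real-world action.
for name, score in zip(X.columns, result.importances_mean):
    print(f"{name}: {score:.3f}")
```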

Businesses and technologists need to focus on clarity and transparency within AI.

There is limited but growing regulation and guidance from lawmakers aimed at curbing biased AI practices. Recently, the UK government unveiled its Ethics, Transparency and Accountability Framework for Automated Decision-Making to provide more specific guidance on the ethical use of artificial intelligence in the public sector. This seven-point framework will help government departments build safe, sustainable, and ethical algorithmic decision-making systems.

To unlock the full power of automation and bring about equitable change, humans need to understand how and why AI bias leads to certain outcomes and what that means for all of us.

Lauren Goodman is the co-founder and CTO of InRule Technology.
