Safeguarding user interests: 3 core principles of Design for Trust

Confidence in technology is declining, and that is especially true of emerging technologies such as AI, machine learning, augmented and virtual reality, and the Internet of Things. These technologies are powerful and hold great potential, but they are not well understood by end users and, in some cases, not even by the people who build them. Distrust runs particularly high when these technologies are applied in areas such as healthcare, food safety, and law enforcement, where the consequences of defective or biased technology are far more serious than getting a bad movie recommendation from Netflix.

What can companies that rely on emerging technologies to connect with and serve customers do to regain lost trust? The simple answer is to protect the interests of users. That is easier said than done.

The approach I recommend is a concept I call design for trust. In simple terms, design for trust is a collection of three design principles and their associated methods. The three principles are fairness, explainability and accountability.

1. Fairness

There is an old saying from the early days of computing: garbage in, garbage out, shorthand for the idea that poor-quality input will always produce defective output. In AI and machine learning (ML) systems, defective output usually means output that is inaccurate or biased. Both are problematic, but the latter is the more damaging, because biased systems can adversely affect people based on characteristics such as race, gender or ethnicity.

There are numerous examples of bias in AI/ML systems. A particularly egregious one came to light in September 2021, when it was reported that Facebook users who watched a video featuring Black men saw an automated prompt from the social network asking if they wanted to “keep seeing videos about primates,” which led the company to disable the AI-powered feature that pushed the message.

Facebook called this an “unacceptable error,” and of course it was. It happened because the AI/ML system’s facial recognition feature did a poor job of identifying people of color and minorities. The underlying problem was likely data bias: the datasets used to train the system did not contain enough images of, or references to, minorities for the system to learn properly.
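
One practical way to catch this kind of data bias before training is a simple representation audit of the dataset itself. The sketch below is a minimal illustration in Python, assuming a hypothetical metadata table with a "subject_group" column; the column name and the 5% threshold are placeholders, not a description of Facebook's actual tooling.

```python
# A minimal sketch of a training-data representation audit, assuming a hypothetical
# metadata table with a "subject_group" column; the column name and 5% threshold are
# illustrative placeholders.
import pandas as pd

def audit_representation(metadata: pd.DataFrame, group_col: str = "subject_group",
                         min_share: float = 0.05) -> pd.DataFrame:
    """Flag groups that fall below a minimum share of the training set."""
    shares = metadata[group_col].value_counts(normalize=True).rename("share").to_frame()
    shares["underrepresented"] = shares["share"] < min_share
    return shares

# Toy example: one group makes up only 2% of the data and gets flagged.
df = pd.DataFrame({"subject_group": ["a"] * 900 + ["b"] * 80 + ["c"] * 20})
print(audit_representation(df))
```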

Another type of bias, model bias, has ensnared many tech companies, including Google. In Google’s early days, fairness was not an issue. But as the company grew and became the de facto global standard for search, more and more people began to complain that its search results were biased.

Google’s search results are driven by algorithms that determine which results are presented to searchers. To help people get the results they want, Google also auto-completes search requests with suggestions and displays a “knowledge panel” that provides snapshots of search results and news based on what is available on the web, which typically cannot be changed or removed by moderators. There is nothing inherently biased about these features, but they can add to or detract from fairness depending on how they are designed, implemented and operated by Google.

Over the years, Google has taken a series of actions to improve the fairness of search results and protect users. Today, Google uses blacklists, algorithm tweaks and an army of human reviewers to shape what people see on its search results pages. The company has created an algorithm review board to monitor for bias and to ensure that search results do not favor its own offerings or links over those of independent third parties. Google has also upgraded its privacy options to curb location tracking of users.

For tech makers who want to build fair systems, the keys are the datasets, the models and the teams. Datasets should be diverse and extensive enough to give systems adequate material for learning to identify and differentiate races, genders and ethnicities. Models must be properly designed so that the factors the system weighs when making decisions are appropriate. And because datasets and models are selected and designed by humans, well-trained and diverse teams are an essential ingredient. Design for trust also means, it should go without saying, that systems are tested extensively before they are deployed.
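
As one example of what such pre-deployment testing can look like, a team might compare positive-decision rates across groups before a model ships. The sketch below is a simplified illustration of a demographic-parity check; the column names and the toy data are assumptions for the example, not a complete fairness audit.

```python
# A minimal sketch of a pre-deployment fairness check (demographic parity), assuming
# you already have model decisions alongside a protected attribute for each record.
# The column names and toy data below are illustrative, not an industry standard.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Gap between the highest and lowest positive-decision rate across groups."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Toy example: decisions for two groups; a real audit would use held-out evaluation data.
results = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b"],
    "approved": [1,   0,   1,   1,   1,   1],
})
gap = demographic_parity_gap(results, "group", "approved")
print(f"Parity gap: {gap:.2f}")  # a large gap is a cue to revisit the data, model and weights
```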

2. Explainability

Even when tech makers take steps to improve the accuracy and fairness of their AI/ML systems, there is often a lack of transparency in how those systems make decisions and produce results. AI/ML systems are generally understood only by the data scientists, programmers and designers who created them. So while their inputs and outputs may be visible to users, their internal workings, such as the logic and objective or reward functions of their algorithms and platforms, cannot be examined by others to confirm that they are performing as expected and learning appropriately from their results and feedback. Equally opaque is whether the data and analytical models are designed and monitored by people who understand the relevant processes, functions, measures and desired outcomes. Design for trust can help.

A lack of transparency is not always a problem. But when the decisions made by AI/ML systems carry serious consequences, consider medical diagnoses, safety-critical systems such as autonomous vehicles, and loan approvals, the systems need to be able to explain how their conclusions were reached. So in addition to fairness, explainability is needed.

Take the example of the long-standing problem of systemic racism in lending. Before technology, the problem was human bias in deciding who got loans and who didn’t. But the same bias can be present in AI/ML systems, because the datasets and models behind those decisions are selected and designed by humans. If a person believes they have been unfairly denied a loan, banks and credit card companies should be able to explain the decision. In fact, in a growing number of jurisdictions, they are required to.

This is especially true in the insurance industry in many parts of Europe, where insurers are required to design their claims-processing and approval systems to meet standards of both fairness and explainability in order to improve trust. When an insurance claim is denied, companies must provide the criteria used and a full explanation of why.

Today, explainability is often provided by the people who developed a system, who create an audit trail documenting the system’s design and the processes it goes through to make decisions. A major challenge is that systems increasingly analyze and process data far faster than humans can follow or understand. In those situations, the only way to provide explainability is to have machines monitor and check the workings of other machines. This is the driver behind an emerging field called explainable AI (XAI), a set of processes and methods that allow humans to understand the results and output of an AI/ML system.
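
To make this concrete, the sketch below shows one widely used explainability technique, permutation importance from scikit-learn, applied to a toy loan-approval classifier. The features, data and model are synthetic placeholders for illustration; production systems would pair this kind of analysis with richer XAI tooling and per-decision explanations.

```python
# A minimal sketch of one common explainability technique, permutation importance,
# applied to a toy loan-approval classifier. The features, data and model here are
# synthetic placeholders, not any lender's real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_years"]  # illustrative features
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.2 * rng.normal(size=500) > 0).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)

# Shuffle each feature and measure how much accuracy drops: the bigger the drop,
# the more the model relies on that feature, which is one way to explain its behavior.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```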

3. Accountability

Even with the best efforts to create systems that are fair and explainable, things can go awry. When they do, the fact that the internal workings of many systems are known only to the data scientists, developers and programmers who created them can make it difficult to identify what went wrong and to trace the failure back to the choices made by the creators, the providers and the users that led to those results. Nevertheless, someone or some entity must be held accountable.

Take the example of Microsoft’s chatbot, Tay. Released in 2016, Tay was designed to engage people in dialogue while mimicking the language of a teenage girl. Within 16 hours of its release, Tay had tweeted more than 95,000 times, with a large percentage of those tweets abusive and derogatory toward minorities. The problem was that Tay was designed to learn about language from its interactions with people, and many of the responses to Tay’s tweets were themselves offensive and abusive toward minorities. The underlying problem was model bias: the people at Microsoft who designed Tay’s learning model made weak decisions. Still, it was Tay learning racist language from people on the internet that caused it to respond the way it did. Because it is impossible to hold “the people on the internet” accountable, Microsoft had to bear the brunt of the responsibility, and it did.

Now consider Tesla, its Autopilot driver-assistance system and its higher-level functionality called Full Self-Driving capability. Tesla has long been criticized for giving its driver-assistance features names that could lead people to believe the cars can drive themselves, and for overselling the capabilities of both systems. Over the years, the US National Highway Traffic Safety Administration (NHTSA) has opened more than 30 special crash investigations involving Teslas that may have been linked to Autopilot. In August 2021, following 11 crashes involving Teslas and first-responder vehicles that resulted in 17 injuries and one death, NHTSA began a formal investigation into Autopilot.

NHTSA has its work cut out for it, because it is difficult to determine who is to blame in an accident involving a Tesla. Was the cause a defect in the design of Autopilot, misuse of Autopilot by the driver, a defect in a Tesla component that has nothing to do with self-driving, or a driver error or violation that could occur in any vehicle, with or without an autonomous driving system, such as texting or speeding while driving?

Despite the complexity of determining fault in some of these situations, it is always the responsibility of the makers and providers of technology to: 1) adhere to global and local laws, regulations and community norms and standards; and 2) clearly define and communicate the financial, legal and ethical responsibilities of each party involved in using their systems.

Practices that can help tech providers meet these responsibilities include:

  • Completely and continuously testing data, models, algorithms, usage, learning and system results to ensure the system meets financial, legal and ethical requirements and standards
  • Creating and maintaining a record demonstrating how the system is performing and making decisions, in a format humans can understand, available whenever needed (a simplified sketch follows this list)
  • Developing contingency plans to withdraw or disable AI/ML implementations that violate any of these standards
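
The sketch below illustrates how the second and third practices might look in code for a hypothetical decision service: every automated decision is logged in a human-readable format, and a contingency switch routes traffic away from the model when it needs to be withdrawn. Names such as MODEL_ENABLED, log_decision and the scoring logic are illustrative assumptions, not a reference to any vendor's framework.

```python
# A minimal sketch of the record-keeping and contingency practices above, for a
# hypothetical decision service; MODEL_ENABLED, log_decision and the scoring logic
# are illustrative placeholders.
import json
import time
import uuid

MODEL_ENABLED = True  # contingency switch: flip to False to withdraw the AI/ML path

def log_decision(inputs: dict, output: str, model_version: str,
                 path: str = "decisions.log") -> None:
    """Append a human-readable record of every automated decision for later audit."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a") as fh:
        fh.write(json.dumps(record) + "\n")

def decide(inputs: dict) -> str:
    """Route around the model entirely when the contingency switch is off."""
    if not MODEL_ENABLED:
        return "routed_to_human_review"
    # Stand-in for the real model's scoring logic.
    output = "approved" if inputs.get("score", 0) > 0.5 else "denied"
    log_decision(inputs, output, model_version="v1.3.0")
    return output

print(decide({"score": 0.72}))
```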

Ultimately, design for trust is not a one-time activity. It is an ongoing system of management, oversight and adjustment that guards against the qualities that undermine trust.

Arun ‘Rock’ Ramachandran is a corporate VP at Hexaware.
