Responsible AI will give you a competitive advantage

There is no doubt that AI is changing the business landscape and providing competitive advantages to those who embrace it. It is time, however, to move beyond simply implementing AI and to ensure that AI is used safely and ethically. This is called responsible AI, and it serves not only as a safeguard against negative consequences but also as a competitive advantage in its own right.

What is responsible AI?

Responsible AI is a governance framework that covers ethical, legal, security, privacy, and accountability concerns. Although the implementation of responsible AI varies by company, the need for it is clear. Without responsible AI practices, a company exposes itself to serious financial, reputational, and legal risks. On the positive side, responsible AI practices are becoming a prerequisite for bidding on certain contracts, especially when governments are involved, and a well-executed strategy will go a long way toward winning those bids. Adopting responsible AI can also bolster the company's overall reputation.

Values by design

Most of the problems in implementing responsible AI stem from a lack of foresight: the ability to predict what ethical or legal issues an AI system may raise during its development and deployment lifecycle. Right now, most responsible AI considerations happen after an AI product has been developed, which is a very ineffective way to implement AI. If you want to protect your company from financial, legal, and reputational risk, you need to start projects with responsible AI in mind. Your company needs to build values in by design, not tack them on at the end of the project.

Implementation of values by design

Responsible AI covers a large number of values that company leadership needs to prioritize. Any responsible AI plan should cover all of these areas, but how much effort your company puts into each value is up to its leaders. There must be a balance between scrutinizing the AI for responsibility concerns and actually implementing the AI: put too much effort into responsible AI and your effectiveness can suffer; ignore responsible AI and you are being reckless with the company's resources. The best way to handle this trade-off is to start with a thorough analysis at the beginning of the project, rather than as an after-the-fact effort.

A best practice is to set up a responsible AI committee to review your AI projects before they start, periodically while they run, and after they are completed. The purpose of this committee is to evaluate each project against the responsible AI values and to approve it, reject it, or approve it conditionally on actions that bring it into compliance. This may include requesting more information or requiring that parts of the project be fundamentally changed. Like an institutional review board, which is used to oversee ethics in biomedical research, this committee should include both AI specialists and non-technical members. Non-technical members can come from any background and serve as a reality check on the AI specialists. The AI specialists, in turn, may better understand possible difficulties and mitigations, but they may be so accustomed to organizational and industry norms that they are not sensitive enough to the concerns of the broader community. The committee should be convened for approval at the start of the project, periodically during it, and at its conclusion.

What values should the responsible AI committee consider?

The values to focus on should be chosen by the business to fit its overall mission statement. Your business will choose to emphasize certain values over others, but all of the major areas of concern should be covered. There are many frameworks you can draw on for inspiration, such as those published by Google and Facebook. For this article, however, we will work from the recommendations of the High-Level Expert Group on Artificial Intelligence set up by the European Commission, as captured in its Assessment List for Trustworthy Artificial Intelligence. These recommendations cover seven areas. We will explore each area and suggest questions to ask about it.

1. Human agency and oversight

AI projects should respect human agency and human decision making. This principle concerns how the AI project will influence or support humans in the decision-making process. It also covers how the subjects of the AI will be made aware that they are interacting with AI and how far its results should be trusted. Some of the questions that need to be asked include:

  • Have users been informed that the decision or outcome is the result of an AI project?
  • Is there a detection and response mechanism to monitor the adverse effects of the AI project?

2. Technical robustness and safety

Technical robustness and safety require that AI projects preemptively address the risks associated with the AI and minimize their impact. This includes the AI's ability to behave predictably and consistently, as well as the need to protect the AI from cybersecurity threats (a minimal stability check is sketched after the questions below). Some of the questions that need to be asked include:

  • Has the AI system been tested by cybersecurity experts?
  • Is there a monitoring process to measure and assess the risks associated with the AI project?
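
To make the predictability requirement concrete, one simple check is to measure how stable a model's predictions are under small input perturbations. The sketch below is illustrative only; `model` and `X_test` are hypothetical stand-ins for your own trained classifier and held-out feature matrix, and the 95% threshold is an assumed example rather than a standard.

```python
import numpy as np

def prediction_stability(model, X_test, noise_scale=0.01, n_trials=20, seed=0):
    """Fraction of test rows whose predicted label never changes under small noise."""
    rng = np.random.default_rng(seed)
    base = model.predict(X_test)
    stable = np.ones(len(X_test), dtype=bool)
    for _ in range(n_trials):
        noisy = X_test + rng.normal(scale=noise_scale, size=X_test.shape)
        stable &= (model.predict(noisy) == base)
    return stable.mean()

# Hypothetical usage: flag the project for committee review if fewer than
# 95% of predictions remain stable under perturbation.
# score = prediction_stability(model, X_test)
# assert score >= 0.95, f"Model unstable under perturbation: {score:.2%}"
```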

3. Privacy and data governance

AI must protect the privacy of individuals and groups in both its inputs and its outputs. The algorithm should not include data that was collected in a way that violates privacy, and it should not deliver results that violate the privacy of its subjects, even when bad actors try to force such errors. To do this effectively, data governance must also be a concern (a basic anonymity check is sketched after the questions below). Suitable questions to ask include:

  • Does any of the training or inference data contain protected personal data?
  • Can the results of this AI project be cross-referenced with external data in a way that violates an individual's privacy?
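
As one illustration of what such a data-governance check might look like, the sketch below tests whether a released dataset is k-anonymous over a set of quasi-identifiers, meaning every combination of those identifiers describes at least k individuals. The column names and data are hypothetical examples, not a prescribed schema.

```python
import pandas as pd

def min_group_size(df: pd.DataFrame, quasi_identifiers: list) -> int:
    """Smallest number of rows sharing any single combination of quasi-identifiers."""
    return int(df.groupby(quasi_identifiers).size().min())

# Hypothetical released dataset with two quasi-identifiers.
df = pd.DataFrame({
    "age_band": ["30-39", "30-39", "40-49", "40-49"],
    "zip3":     ["941",   "941",   "100",   "100"],
    "outcome":  [1, 0, 1, 1],
})

k = min_group_size(df, ["age_band", "zip3"])
print(f"Dataset is {k}-anonymous over the chosen quasi-identifiers")
```

A low k (such as 1) suggests a single individual could be re-identified by joining the release with outside data, which speaks directly to the cross-referencing question above.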

4. Transparency

Transparency covers concerns about the traceability of individual results and the overall explainability of AI algorithms. Traceability allows the user to understand why an individual decision was made. Explainability means the user is able to understand the basics of the algorithm that was used to make the decision, including which factors contribute to a given prediction; a small example of surfacing those factors follows the questions below. Questions to ask are:

  • Do you monitor and record the quality of input data?
  • Can a user get feedback on how a particular decision was made and what they can do to change that decision?
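
One concrete way to start answering these questions is to report which input features most influence the model's predictions. The sketch below uses scikit-learn's permutation importance on a toy dataset purely as an illustration; your own model, data, and explanation technique (SHAP, for example) may differ.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy stand-in data and model for illustration only.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the top factors so users (and the review committee) can see what
# drives the model's decisions.
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```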

5. Diversity, non-discrimination, and fairness

For AI to be considered responsible, the AI project should work as well as possible for all subgroups of people. While AI bias can rarely be eliminated completely, it can be effectively managed. This mitigation can occur during data collection, by including people from a more diverse range of backgrounds in the training dataset, and it can also be applied at prediction time to help balance accuracy between different groups of people, as in the sketch following the questions below. Common questions include:

  • Have you adjusted your training dataset as much as possible to accommodate different subgroups of people?
  • Do you define fairness and then quantitatively evaluate the results?
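
A minimal sketch of such a quantitative evaluation appears below: it compares accuracy and positive-prediction rates across subgroups and reports the demographic parity difference. The labels, predictions, and group assignments are made-up examples, and demographic parity is only one of several possible fairness definitions.

```python
import numpy as np

def per_group_metrics(y_true, y_pred, groups):
    """Return {group: (accuracy, positive_rate)} for each subgroup."""
    metrics = {}
    for g in np.unique(groups):
        mask = groups == g
        accuracy = (y_true[mask] == y_pred[mask]).mean()
        positive_rate = y_pred[mask].mean()
        metrics[g] = (accuracy, positive_rate)
    return metrics

# Hypothetical labels, predictions, and subgroup membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

metrics = per_group_metrics(y_true, y_pred, groups)
rates = [positive_rate for _, positive_rate in metrics.values()]
print(metrics)
print("Demographic parity difference:", max(rates) - min(rates))
```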

6. Societal and environmental well-being

The AI project should be evaluated for its impact on the environment along with its impact on its subjects and users. Societal norms such as democratic decision making, the upholding of shared values, and the prevention of addiction to the AI project should all be taken into account. Where applicable, the consequences of the AI project's decisions for the environment should also be considered. One factor that applies in almost all cases is evaluating the amount of energy required to train the models involved (a rough estimate is sketched after the questions below). Questions to ask include:

  • Have you evaluated the project's impact on its users and subjects, as well as on other stakeholders?
  • How much energy is needed to train the model and how much does it contribute to carbon emissions?
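
A rough, back-of-the-envelope estimate is often enough to answer the energy question. The sketch below multiplies GPU count, training hours, and average power draw, applies a data-center overhead factor, and converts the result to emissions using a grid carbon intensity. Every constant here is an assumed placeholder to be replaced with measured values for your hardware and region.

```python
def training_footprint(num_gpus: int, hours: float,
                       gpu_watts: float = 300.0,      # assumed average draw per GPU
                       pue: float = 1.5,              # assumed data-center overhead (PUE)
                       kg_co2_per_kwh: float = 0.4):  # assumed grid carbon intensity
    """Estimate training energy (kWh) and emissions (kg CO2e) from GPU hours."""
    energy_kwh = num_gpus * hours * gpu_watts / 1000.0 * pue
    return energy_kwh, energy_kwh * kg_co2_per_kwh

# Hypothetical training run: 8 GPUs for 72 hours.
energy, co2 = training_footprint(num_gpus=8, hours=72)
print(f"~{energy:.0f} kWh, ~{co2:.0f} kg CO2e")
```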

7. Accountability

Specific individuals or organizations need to be held accountable for the actions and decisions of the AI project, as well as for issues encountered during its development. There should be a system in place to ensure adequate redress in cases where harmful decisions are made. Some time and attention should also be given to risk management and mitigation; one way to support later audits is sketched after the questions below. Appropriate questions include:

  • Can AI systems be audited by third parties for risk?
  • What are the major risks associated with the AI ​​project and how can they be mitigated?
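
One way to make third-party audits practical is to write an append-only log entry for every decision the system makes, recording the model version, a hash of the inputs, and the output. The record format below is an illustrative assumption, not a standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, features: dict, decision, log_path="decisions.log"):
    """Append one auditable record of a single AI decision to a log file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage for a loan-approval model.
audit_record("credit-model-1.3.0", {"income": 52000, "age_band": "30-39"}, "approved")
```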

Bottom line

The seven values of responsible AI outlined above provide a starting point for an organization's responsible AI initiative. Organizations that choose to pursue responsible AI will gain access to more opportunities, such as bidding on government contracts. Organizations that do not implement these practices expose themselves to legal, ethical, and reputational risks.

David Ellison is a senior AI data scientist at Lenovo.
