The quest for explainable AI

Artificial intelligence (AI) is extremely effective at parsing huge amounts of data and making decisions based on information beyond the limits of human comprehension. But it suffers from a serious drawback: it cannot explain how it arrives at the conclusions it presents, at least not in a way that most people can understand.

This “black box” quality is starting to create serious problems for the applications that AI powers, especially in medicine, finance and other critical fields, where the “why” behind a particular action is often more important than the “what.”

Peek under the hood

This has led to a new field of study called explainable AI (XAI), which seeks to imbue AI algorithms with enough transparency that users outside the realm of data scientists and programmers can double-check their AI’s logic to make sure it operates within the bounds of acceptable reasoning, free of bias and other problems.

As tech writer Scott Clark noted recently on CMSWire, explainable AI provides much-needed insight into the decision-making process so that users can understand why the model behaves the way it does. In this way, organizations can identify flaws in their data models, which ultimately leads to improved predictive capabilities and a deeper understanding of what works and what doesn’t in AI-powered applications.

The key element in XAI is trust. Without it, any action or decision an AI model generates remains open to doubt, and this heightens the risk of deploying it in production environments, where AI is supposed to deliver real value to the enterprise.

According to the National Institute of Standards and Technology, explainable AI should be built on four principles:

  • Explanation – the ability to provide evidence, support or reasoning for each output;
  • Meaningfulness – the ability to express explanations in a way that users can understand;
  • Accuracy – the ability to explain not only why a decision was made, but how it was made;
  • Knowledge limits – the ability to determine when its findings are unreliable because a query falls outside the boundaries of its design.

While these principles can be used to guide the development and training of intelligent algorithms, they are also intended to guide human understanding of what can, and cannot, be explained when mathematical constructs are at work.
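To make the four principles concrete, here is a minimal, hypothetical sketch of how an XAI system might package an output alongside the information each principle calls for. The class and field names are illustrative assumptions, not part of any NIST specification or real API:

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    """Hypothetical container pairing a model output with NIST's four XAI principles."""
    evidence: str        # Explanation: evidence or reasoning behind the output
    plain_language: str  # Meaningfulness: phrased so the end user can understand it
    faithful: bool       # Accuracy: does the explanation reflect how the decision was made?
    within_limits: bool  # Knowledge limits: was the input inside the model's design envelope?

    def is_trustworthy(self) -> bool:
        # Only present the output with confidence when the explanation is
        # faithful and the query falls within the model's knowledge limits.
        return self.faithful and self.within_limits

exp = Explanation(
    evidence="loan denied: debt-to-income ratio 0.62 exceeds threshold 0.45",
    plain_language="Your monthly debt is too high relative to your income.",
    faithful=True,
    within_limits=True,
)
print(exp.is_trustworthy())  # True
```

In this sketch, an answer that fails either the accuracy or the knowledge-limit check would be flagged rather than silently returned, which is the behavior the fourth principle is meant to enforce.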

Buyer beware: explainable AI

According to Jeremy Kahn of Fortune, the main problem with XAI at the moment is that it has become a marketing buzzword used to push platforms out the door, rather than a genuine product designation developed to any reasonable standard.

By the time buyers realize that “explainable” may mean nothing more than a raft of ambiguity that may or may not have anything to do with the task at hand, the system has already been implemented and switching is very expensive and time-consuming. Ongoing studies are also finding flaws in many of the leading explanation techniques: they are too simplistic and fail to explain why a given piece of data was deemed important or unimportant to the algorithm’s output.
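One reason such techniques can oversimplify is that many of them reduce a feature’s role to a single score. The pure-Python sketch below implements one common scoring idea, permutation importance (shuffle one feature and measure the accuracy drop), on a toy model invented for illustration; it is not any vendor’s method, and real critiques target far richer techniques:

```python
import random

def model(x):
    # Toy classifier: the decision is driven almost entirely by feature 0.
    return 1 if x[0] + 0.1 * x[1] > 0.5 else 0

random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [model(x) for x in X]  # labels come from the model itself, so baseline accuracy is 1.0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    # Shuffle one feature's column and report the resulting drop in accuracy.
    shuffled = [row[:] for row in X]
    column = [row[feature] for row in shuffled]
    random.shuffle(column)
    for row, value in zip(shuffled, column):
        row[feature] = value
    return accuracy(X, y) - accuracy(shuffled, y)

print(permutation_importance(X, y, 0))  # large drop: feature 0 drives the output
print(permutation_importance(X, y, 1))  # small drop: feature 1 barely matters
```

The scores correctly rank the two features here, but note what they leave out: a single number says nothing about thresholds, interactions between features, or why a particular input crossed the decision boundary, which is exactly the kind of shallowness the studies above criticize.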

Anthony Habayeb, CEO of AI governance developer Monitaur, argues that this is partly why explainable AI is not enough. What is really needed is understandable AI. The difference lies in the broader context that understanding implies. As any teacher knows, you can explain something to your students, but that does not mean they will understand it, especially if they lack the foundational knowledge that understanding requires. For AI, this means users should have transparency not only into how the model is working, but also into how and why it was chosen for a particular task; what data went into the model and why; what problems arose during development and training; and a host of other issues.

At its core, explainability is a data management problem. Developing the tools and techniques to probe AI processes at a granular level, and to do so within a reasonable time frame, will be neither easy nor inexpensive. And an equal effort will likely be required of the knowledge workforce, as it learns to engage with AI in a way that helps it understand the frequently irrational, chaotic reasoning of the human brain.

After all, it takes two to have a dialogue.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.
