How to protect AI from cyberattacks – start with the data

Artificial intelligence is a game-changer when it comes to security. Not only does it greatly expand the ability to manage and monitor systems and data, it also adds a level of agility to both security and recovery that significantly increases the difficulty of mounting a successful attack and reduces the rewards for doing so.

But AI is still a digital technology, which means it can be compromised, especially when faced with an intelligent attack. As the world becomes increasingly dependent on intelligent, autonomous systems for everything from business processes to healthcare to transportation, the consequences of security breaches are likely to grow.

For this reason, enterprises should keep a close eye on both their existing AI deployments and their ongoing strategies to see where vulnerabilities reside and what can be done to address them.

According to Robotics Biz, the most common type of attack on AI systems so far is the manipulation of their data-hungry algorithms to alter their projected output. In most cases, this involves inserting false or malicious inputs (i.e., data) into the system so that its view of reality is distorted.

Any AI that is connected to the internet can be compromised in this way, often gradually, so that the effects are slow to surface and the damage lasts longer. The best counter is to safeguard both the AI algorithm and the processes in which its data is used, and to maintain strict control over data conditioning so that defective or corrupted data is detected before it enters the pipeline.
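
As an illustration of the kind of data-conditioning control described above, the sketch below screens an incoming batch against statistics computed from a trusted reference set before it reaches the training pipeline. The z-score test, the 4-sigma threshold and the toy data are illustrative assumptions; a production pipeline would layer richer validation on top of a check like this.

```python
import numpy as np

def validate_batch(batch: np.ndarray, reference_mean: np.ndarray,
                   reference_std: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Reject records whose features deviate wildly from the reference
    distribution before they reach the training pipeline.

    Returns a boolean mask of records considered safe to ingest."""
    # Z-score each feature against statistics computed from trusted data
    z_scores = np.abs((batch - reference_mean) / (reference_std + 1e-9))
    # A record passes only if every feature stays within the threshold
    return (z_scores < z_threshold).all(axis=1)

# Example: screen an incoming batch against trusted baseline statistics
trusted = np.random.default_rng(0).normal(0.0, 1.0, size=(1000, 8))
ref_mean, ref_std = trusted.mean(axis=0), trusted.std(axis=0)

incoming = np.vstack([trusted[:5], np.full((1, 8), 25.0)])  # last row is poisoned
mask = validate_batch(incoming, ref_mean, ref_std)
print(mask)  # the out-of-distribution row is flagged False and can be dropped
```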

Attacking AI through its data sources

The need for large amounts of data is actually one of AI's biggest weaknesses, as it creates a situation in which security can be breached without attacking the AI itself. A series of recent papers by CSET, the Center for Security and Emerging Technology, highlights the growing number of ways in which AI can be compromised by attackers targeting its data sources.

Such attacks can be used to mislead autonomous cars into oncoming traffic or into accelerating to dangerous speeds, or to subtly derail business processes. Unlike traditional cyberattacks, however, the purpose is usually not to destroy the AI or take down the system, but to give the attacker control for their own benefit, such as diverting data or funds, or simply causing disruption.

Dan Boneh, a professor of cryptography at Stanford University, says image-based training data is the most vulnerable. Typically, the attacker will use the Fast Gradient Sign Method (FGSM), which makes pixel-level changes to a training image that are undetectable to the human eye but confuse the model being trained. These "adversarial examples" are hard to spot, yet can alter the results of algorithms in a variety of ways, even if the attacker only has access to inputs, training data and outputs. And as AI algorithms become increasingly dependent on open-source tools, hackers will have even more access to them.
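
To make the FGSM idea concrete, here is a minimal sketch of the method against a toy logistic-regression classifier, written in plain NumPy: the input is nudged by epsilon in the sign of the loss gradient, one sign per dimension. The model, weights and epsilon value are illustrative assumptions, not anything drawn from Boneh's work or the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Fast Gradient Sign Method for a logistic-regression classifier.

    Shifts the input by epsilon in the direction that maximally
    increases the cross-entropy loss, one sign per input dimension."""
    p = sigmoid(np.dot(w, x) + b)   # model's predicted probability
    grad_x = (p - y) * w            # d(loss)/dx for cross-entropy loss
    return x + epsilon * np.sign(grad_x)

# Toy model and a correctly classified input (true label 1)
w, b = np.array([2.0, -1.5, 0.5]), 0.1
x, y = np.array([1.0, -0.5, 0.3]), 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.5)
print(sigmoid(np.dot(w, x) + b))      # ~0.95: confident and correct
print(sigmoid(np.dot(w, x_adv) + b))  # ~0.73: noticeably degraded by the attack
```

In real image attacks, epsilon is kept small enough that the per-pixel changes stay invisible to the human eye, which is what makes these examples so hard to detect.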

How to protect your AI

What can an enterprise do to protect itself? According to Great Learning's Figure Galav and SEO consultant Saket Gupta, the three main steps to take now are:

  • Maintain as strict a security protocol as possible throughout the data environment.
  • Ensure that every operation performed by the AI is logged and placed in an audit trail (a minimal logging sketch follows this list).
  • Implement strong access control and authentication.
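
To make the audit-trail recommendation concrete, here is a minimal sketch of an append-only log for AI operations using only the Python standard library. The hash-chaining scheme, file path and field names are illustrative assumptions rather than a prescription from the article; chaining each entry to the previous one simply makes after-the-fact tampering detectable.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log of AI operations; each entry carries a hash of the
    previous entry so after-the-fact tampering is detectable."""

    def __init__(self, path="ai_audit.log"):
        self.path = path
        self.prev_hash = "0" * 64  # genesis hash for the first entry

    def record(self, operation: str, details: dict) -> None:
        entry = {
            "timestamp": time.time(),
            "operation": operation,
            "details": details,
            "prev_hash": self.prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True)
        self.prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        with open(self.path, "a") as f:
            f.write(payload + "\n")

# Example: log every prediction the model serves
trail = AuditTrail()
trail.record("predict", {"model": "fraud-v3", "input_id": "txn-1042", "score": 0.87})
```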

In addition, organizations should pursue long-term strategic goals, such as developing a data protection policy specifically for AI training, educating employees about AI risks and how to detect defective outcomes, and maintaining an ongoing risk assessment process that is both dynamic and forward-looking.

No digital system can be 100% secure, no matter how intelligent it may be. The risks posed by compromised AI are more subtle, but no less consequential, than those facing traditional platforms, so enterprises need to update their security policies to reflect this new reality now, rather than waiting until the damage is done.

And just as with legacy technology, protecting AI is a two-pronged effort: reduce the means and opportunity of attack, and when an attack inevitably succeeds, minimize the damage and restore trusted operations as quickly as possible.
