Now that AI is advancing into the mainstream of IT architecture, the race is on to ensure that it remains safe when exposed to data sources beyond the enterprise's control. From the data center to the cloud to the edge, AI will face a variety of vulnerabilities and increasingly sophisticated threats, many of which will have to be managed by AI itself.
The stakes, meanwhile, will only get higher, as AI is likely to become the backbone of healthcare, transportation, finance, and other sectors crucial to modern life. So before organizations push AI too deeply into this distributed architecture, it is worth pausing to make sure it can be adequately secured.
Trust and transparency
In a recent interview with VentureBeat, IBM chief AI officer Seth Dobrin noted that building trust and transparency across the AI data chain is essential if the enterprise expects to draw maximum value from its investment. Unlike traditional architectures, which can merely be shut down or have their data stolen when compromised by viruses and malware, AI faces a far greater risk because it can be trained to retrain itself from data gathered at the endpoint.
“The endpoint is the REST API collecting data,” Dobrin said. “We need to protect AI from poisoning. We need to make sure AI endpoints are secure and continuously monitored, not just for performance but for bias.”
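The monitoring Dobrin describes can start simply: screen the data an endpoint collects before it is allowed to feed retraining. A minimal sketch of that idea, using a z-score outlier check — the function name and the threshold are illustrative assumptions, not IBM's implementation:

```python
# Minimal sketch of screening endpoint data before retraining, to
# reduce the risk of data poisoning. Names and the z-score threshold
# are illustrative assumptions, not any vendor's API.
import statistics

def screen_batch(history, incoming, z_threshold=3.0):
    """Flag incoming values that deviate sharply from historical data.

    Returns (accepted, quarantined). Values more than z_threshold
    standard deviations from the historical mean are quarantined for
    review instead of being fed straight into retraining.
    """
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    accepted, quarantined = [], []
    for value in incoming:
        z = abs(value - mean) / stdev if stdev else 0.0
        (quarantined if z > z_threshold else accepted).append(value)
    return accepted, quarantined
```

A statistical filter like this is only a first line of defense — subtle poisoning stays within normal ranges — but it keeps the most blatant manipulation out of the training loop automatically.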
To do this, Dobrin said, IBM is working to establish adversarial robustness at the system level of platforms like Watson. By implementing AI models that interrogate other AI models to explain their decision-making processes, and then correct those models if they deviate from the norm, the enterprise will be able to maintain its security posture at the speed of today's fast-paced digital economy. But this requires a shift in thinking: away from hunting and thwarting rogue code, and toward monitoring and managing AI's reaction to what appears to be ordinary data.
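One model checking another, as described above, can be sketched very simply: a reference model replays the deployed model's inputs and flags it for review when their decisions diverge too often. All names and the 10% threshold here are illustrative assumptions, not Watson's mechanism:

```python
# Hedged sketch of one model monitoring another: flag the deployed
# model when its decisions diverge from a trusted reference model
# more often than a set threshold. Names and threshold are assumed.

def deviation_rate(deployed, reference, inputs):
    """Fraction of inputs on which the two models disagree."""
    disagreements = sum(1 for x in inputs if deployed(x) != reference(x))
    return disagreements / len(inputs)

def needs_review(deployed, reference, inputs, threshold=0.1):
    """True when disagreement exceeds the tolerated threshold."""
    return deviation_rate(deployed, reference, inputs) > threshold
```

In practice the reference might be a frozen earlier version of the same model, so drift introduced by retraining on poisoned or biased data shows up as rising disagreement.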
Already, reports are circulating of a number of ingenious ways in which data can be manipulated to fool AI into altering its code in harmful ways. Jim Dempsey, a lecturer at UC Berkeley Law School and a senior advisor at the Stanford Cyber Policy Center, says it is possible to create audio that registers as speech to ML algorithms but not to humans. Image recognition systems and deep neural networks can be led astray by perturbations that are imperceptible to the human eye, sometimes by altering just a single pixel. Moreover, these attacks can be launched even when the perpetrator has no access to the model itself or to the data used to train it.
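The imperceptible perturbations described above are easiest to see on a toy model: a tiny, targeted nudge to each input feature flips a linear classifier's decision. The weights and inputs below are invented for the demo; real attacks on deep networks apply the same idea along the model's gradient:

```python
# Toy illustration of an adversarial perturbation: a change too small
# to notice flips the classifier's decision. Weights and inputs are
# invented for the demo (a 1-D analogue of gradient-sign attacks).

def predict(weights, x):
    """Linear classifier: positive score -> class 1, else class 0."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def perturb(weights, x, epsilon):
    """Nudge each feature by epsilon in the direction that raises
    the score -- the sign of its weight."""
    return [xi + epsilon * (1 if w > 0 else -1)
            for w, xi in zip(weights, x)]
```

With weights `[1.0, -1.0]` and input `[0.49, 0.52]`, the model outputs class 0; shifting each feature by only 0.05 flips it to class 1, even though the input looks essentially unchanged.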
Prevent and respond
To deal with this, the enterprise must focus on two things. First, says John Roese, global CTO of Dell Technologies, it must devote more resources to preventing and responding to attacks. Most organizations are adept at detecting threats using AI-driven security information and event management (SIEM) services or managed security service providers, but prevention and response remain too slow to adequately mitigate a serious breach.
This leads to the second change the enterprise should implement, says Corey Thomas, CEO of Rapid7: empower prevention and response with more AI. This is a tough pill for most organizations to swallow, since it essentially gives AI leeway to alter the data environment. But Thomas says there are ways to do this safely, allowing AI to act on the aspects of security it is most adept at handling while reserving key capabilities for human operators.
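The division of labor Thomas describes can be sketched as a tiered response policy: the AI handles routine, low-risk events autonomously and defers high-impact actions to a human. The thresholds, field names, and action labels below are illustrative assumptions, not Rapid7's product behavior:

```python
# Hedged sketch of tiered, human-in-the-loop incident response:
# AI acts on its own only for low-risk events outside production.
# Thresholds, fields, and action names are illustrative assumptions.

def respond(event):
    """Choose a response for an event with a 'risk' score in [0, 1]
    and a boolean 'touches_production' flag."""
    if event["risk"] < 0.3:
        return "log"                 # routine: record only
    if event["risk"] < 0.7 and not event["touches_production"]:
        return "auto_quarantine"     # AI may act autonomously
    return "escalate_to_human"       # reserve key decisions for people
```

The point of the structure is that the autonomy boundary is explicit and auditable: widening what the AI may do on its own is a deliberate policy change, not an emergent behavior.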
In the end, it comes down to trust. AI is the new kid in the office right now, so it shouldn't have the keys to the vault. But over time, as it proves its worth in entry-level settings, it should come to earn the same trust as any other employee. That means rewarding it when it performs well, teaching it to do better when it fails, and always making sure it has adequate resources and the right data to do the job correctly.