When Eric Horvitz, Microsoft’s chief scientific officer, testified before the U.S. Senate Armed Services Committee’s Subcommittee on Cybersecurity on May 3, he focused on how cyberattacks, including those that use AI, pose new challenges even to sophisticated organizations.
While AI is improving the ability to detect cybersecurity threats, he explained, threat actors are also upping their game.
“While there is a dearth of information today about the active use of AI in cyberattacks, it is widely accepted that AI technologies can be used to scale cyberattacks via various forms of probing and automation … referred to as offensive AI,” he said.
However, it’s not just the military that needs to stay ahead of malicious actors using AI to scale their attacks and evade detection. As enterprise companies face a growing number of security breaches, they need to prepare for increasingly sophisticated AI-powered cybercrime, experts say.
Attackers want to make a big leap forward with AI
“We haven’t seen the ‘big bang’ yet, where ‘Terminator’ cyber AI comes in and wreaks havoc everywhere, but attackers are preparing that battlefield,” Max Heinemeyer, VP of cyber innovation at AI cybersecurity firm Darktrace, told VentureBeat. What we are currently seeing, he added, “is a big driver in cybersecurity – when attackers want to make a great leap, with game-changing attacks that would be hugely disruptive.”
For example, there have been non-AI-powered attacks, such as the 2017 WannaCry ransomware attack, which was considered a novel cyberweapon, he explained, while in the Russia-Ukraine war today, rarely seen malware has been deployed. “This kind of game-changing attack is where we expect to see AI,” he said.
So far, at least publicly, the use of AI in the Russia-Ukraine war has been limited to Russia’s use of deepfakes and Ukraine’s use of Clearview AI’s controversial facial recognition software. But security experts are preparing for the fight: a Darktrace survey last year found that a growing number of IT security leaders are concerned about the potential use of AI by cybercriminals. Sixty percent of respondents said human responses are falling behind the pace of cyberattacks, while almost all (96%) have begun to protect their companies against AI-based threats – mostly related to email, advanced spear phishing and impersonation threats.
“There hasn’t been much documented real-world research on malicious machine learning or AI attacks, but the bad guys are definitely already using AI,” said Corey Nachreiner, CSO of WatchGuard, which provides enterprise-grade security products to midmarket customers.
Threat actors are already using machine learning to assist with social engineering attacks, he said. If they have a large dataset of leaked passwords, they can learn patterns in those passwords to improve their password cracking.
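As a hedged illustration of the idea (the password list and model here are invented for this sketch, not drawn from the article), even a tiny character-level Markov chain built from leaked passwords surfaces the kinds of patterns an attacker could exploit to order their guesses:

```python
from collections import defaultdict

# Hypothetical example: an order-1 character Markov model built from a
# tiny sample of leaked passwords. Real attacks use far larger corpora.
leaked = ["password1", "password123", "letmein", "sunshine", "princess1"]

# Count how often each character follows another across the corpus.
transitions = defaultdict(lambda: defaultdict(int))
for pw in leaked:
    for a, b in zip(pw, pw[1:]):
        transitions[a][b] += 1

def most_likely_next(ch):
    """Most frequently observed character after `ch`, or None."""
    options = transitions[ch]
    return max(options, key=options.get) if options else None

# Learned patterns guide guessing: in this sample, 'p' is most often
# followed by 'a', so candidates starting "pa..." get tried first.
print(most_likely_next("p"))
```

Scaled up to millions of breached credentials, the same statistical idea lets cracking tools try high-probability candidates first instead of brute-forcing blindly.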
Machine-learning algorithms will also drive more spear phishing attacks – highly targeted, non-generic fraudulent emails – than in the past, he said. “Unfortunately, it is difficult to train users against clicking on spear phishing messages,” he said.
Do enterprises really need to worry?
According to Seth Siegel, North America leader of artificial intelligence consulting at Infosys, security professionals may not think of threat actors as using AI explicitly, but they are seeing more, faster attacks and can sense the increasing use of AI on the horizon.
“I think they see it coming in fast and furious,” he told VentureBeat. “The threat landscape is really aggressive compared to three years ago, even compared to last year, and it’s getting worse.”
However, he cautioned, organizations should be concerned about far more than spear phishing attacks. “The question really is, how can companies deal with one of the biggest AI threats – the introduction of bad data into your machine learning models?” he said.
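To make that concern concrete, here is a minimal, hypothetical sketch (not from the article) of label-flipping data poisoning against a toy nearest-centroid classifier: injecting mislabeled records drags a class centroid far enough that malicious samples start passing as benign:

```python
# Hypothetical toy example of training-data poisoning (label flipping).
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(data):
    # data: list of ((x, y), label) pairs; one centroid per class.
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda lbl: dist2(model[lbl], point))

clean = [((0, 0), "benign"), ((1, 0), "benign"),
         ((5, 5), "malicious"), ((6, 5), "malicious")]
model = train(clean)
print(predict(model, (4, 4)))        # near the malicious cluster: caught

# Attacker injects malicious-region points mislabeled as "benign",
# dragging the benign centroid toward the malicious cluster.
poisoned = clean + [((5, 5), "benign")] * 4
bad_model = train(poisoned)
print(predict(bad_model, (4, 4)))    # the same sample now slips through
```

The attack requires no access to the model itself, only to the data pipeline that feeds it – which is exactly why Siegel frames poisoning as a data-governance problem.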
These efforts will come not from individual attackers, but from sophisticated nation-state hackers and criminal gangs.
“This is where the problem lies – they use the most available, the fastest, the most sophisticated technology, because they not only need to be able to get away with crimes, but they’re overwhelming departments that simply aren’t equipped to handle this level of bad acting,” he said. “Basically, you can’t bring a human tool to an AI fight.”
4 Ways to Prepare for the Future of AI Cyber Attacks
Experts say security professionals should take several essential steps to prepare for the future of AI cyberattacks:
Provide ongoing security awareness training.
The problem with spear phishing, Nachreiner said, is that because the emails are customized to look like real business messages, they are harder to block. “You need security awareness training so users know to expect and be suspicious of these emails, even when they seem to arrive in a business context,” he said.
Use AI-powered tools.
Infosec organizations should adopt AI as a fundamental part of their security strategy, Heinemeyer said. “They shouldn’t wait to use AI or just consider it a cherry on top – they should expect attacks and implement AI themselves,” he explained. “I don’t think they realize how necessary it is at the moment – but once threat actors start using more furious automation and, perhaps, more destructive attacks against the West, you really want to have AI in place.”
Think beyond individual bad actors.
Companies need to refocus their perspective away from individual bad actors, Siegel said. “They should think more about nation-state hacking and criminal gang hacking, become competent in a defensive posture, and understand that these threats now need to be dealt with on a daily basis.”
Maintain a proactive strategy.
Organizations also need to make sure they stay on top of their security posture, Siegel said. “When patches are deployed, you have to deal with them with the level of seriousness they deserve,” he explained, “and you need to audit your data and models so that malicious information is not inserted into the model.”
Siegel added that his organization embeds cybersecurity professionals in the data science team and also trains data scientists in cybersecurity techniques.
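A minimal sketch of what such a data audit might look like in practice (the feature values, batch and threshold here are invented for illustration, not Infosys’s method): flag records whose values deviate sharply from the rest of a batch before they reach the model:

```python
import statistics

# Hypothetical pre-training audit: flag records whose value deviates
# strongly from the batch mean. A low z-score threshold suits this tiny
# illustrative batch; production audits would be far more involved.
def audit(values, threshold=2.0):
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values)
            if stdev and abs(v - mean) / stdev > threshold]

batch = [10.1, 9.8, 10.3, 9.9, 10.0, 55.0]  # last record looks injected
print(audit(batch))  # indices of suspect records to hold for review
```

Quarantining flagged records for human review before each retraining run is one cheap line of defense against the poisoning scenario Siegel describes.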
The future of offensive AI
According to Nachreiner, more adversarial machine learning is coming down the pike.
“This is how we use machine learning for defense – people will use it against us,” he said.
For example, organizations today use AI and machine learning to catch malware better, because malware changes so quickly that signature-based detection can no longer reliably keep up. In the future, however, those ML models will themselves be vulnerable to attack by threat actors.
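A hedged sketch of what such an evasion attack might look like (the feature names, weights and threshold are all invented): against a toy linear malware scorer, an attacker pads a sample with benign-looking features until its score falls below the detection threshold, without changing the malicious behavior at all:

```python
# Hypothetical toy linear malware scorer and an evasion attack on it.
WEIGHTS = {"packed": 2.0, "net_calls": 1.5, "benign_strings": -0.5}
THRESHOLD = 3.0

def score(features):
    """Weighted sum over extracted features; >= THRESHOLD means flagged."""
    return sum(WEIGHTS.get(name, 0.0) * value
               for name, value in features.items())

sample = {"packed": 1, "net_calls": 2}      # genuine malicious traits
print(score(sample) >= THRESHOLD)           # detected

# Evasion: append inert benign-looking strings that lower the score
# while leaving the malware's actual behavior untouched.
evaded = dict(sample, benign_strings=5)
print(score(evaded) >= THRESHOLD)           # slips past the detector
```

Real detectors are far more complex, but the principle is the same: any model whose decision boundary an attacker can probe becomes an optimization target, which is the cat-and-mouse dynamic Nachreiner describes.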
The AI-powered threat landscape will continue to worsen, Heinemeyer said, with rising geopolitical tensions contributing to the trend. He cited a recent Georgetown University study that examined China and how it links its AI research to universities and state-sponsored hacking. “It says a lot about how closely the Chinese government, like other governments, works with academics, universities and AI research to use it for potential cyber operations and hacking.”
“As I think about this study and other things happening, I think a year from now my threat outlook will be gloomier than it is today,” he admitted. However, he noted that the defensive side will also improve as more organizations adopt AI. “We’re still stuck in this cat-and-mouse game,” he said.