The War with Algorithms: Why Your Next Security Strategy Includes A.I. and Machine Learning

The image of a hooded individual illuminated by the glare of a computer screen hacking into a company’s network is the classic picture of a cyber attack. The reality, though, is that these attackers are almost never a one-man band, but rather a sophisticated team armed with the same technology companies often deploy against them. And what makes these attackers so devious is not the technology they use, but the fact that they never use the same approach twice.

“Cyber is big business and unfortunately it’s causing a tremendous amount of business disruption. They continue to increase in sophistication and they have new and novel approaches. There are constantly unknown threats and new threat actors and that’s part of the reason it gets very challenging. In some ways, a lot of the focus is people love to hear about all these cool techniques they use, but it’s the fact that they’re constantly evolving and changing the techniques that make it so difficult for security teams and for what I think of as static security tools to stay current against this range of threats.”

Nicole Eagan is the Chief A.I. Officer at Darktrace, a leader in autonomous cybersecurity A.I. On this episode of IT Visionaries, Nicole dives into why A.I. is a good tool, but not one powerful enough on its own to prevent today’s cybercriminals. Plus, she explains how security systems must constantly be learning, the impact IoT devices have had on security threats, and why algorithms are at war with one another.

Main Takeaways

  • Update Available: Companies are utilizing A.I. to predict and prevent attacks, but there is one major issue: the models they rely on to keep attackers out of their networks are often trained on old datasets. An A.I. trained only on old data cannot learn how to protect you from new attacks.
  • The War of Algorithms: Future cyber attacks are no longer going to come simply from hackers infiltrating a network. Instead, attackers are becoming more sophisticated and will use machine learning techniques to study a user’s behavior in order to build a tailored attack. The only way to combat these attacks is with real-time, self-learning A.I. systems that can detect abnormalities within a system.
  • Stop Acting in Silos: Security teams need to stop operating in silos when it comes to preventing attacks. Cybercriminals are no longer attacking users only through email or local networks, so security teams need to stop focusing on a single niche area. Instead, develop an all-encompassing, self-learning platform that constantly monitors all aspects of your business and your employees.

—–

For a more in-depth look at this episode, check out the article below.


Founded in 2013, Darktrace utilizes a combination of human-centric security teams and A.I. to both find and prevent attackers from infiltrating a company’s network. While in theory deploying autonomous technologies sounds like a foolproof plan, the complexity and professionalism of these bad actors have changed over the years. These criminals now operate as full-fledged organizations, even running their own call centers. As the attacks have evolved, they have become more difficult for services such as Darktrace to prevent.

“It’s this thought that you need to know what the attack looks like to spot it,” Eagan said. “Or you need to second guess what the attacker’s going to do and come up with a rule to catch it. I’ve seen companies try to use more A.I. techniques, but what they’re doing is they’re pointing the AI at a training data set of historical attack data.”

This technique is known as the rearview mirror approach: security teams spot an attack, dissect what it was and how it infiltrated their system, and then train their algorithm to prevent that same type of attack from occurring down the road. Eagan said the issue with this approach is that while you are using historical data to understand how an attack occurred, your system is not updating fast enough to understand how the next attack is going to happen.
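
To make that limitation concrete, here is a hypothetical, simplified sketch of a purely rearview-mirror detector: it only recognizes indicators harvested from incidents that have already been analyzed, so anything genuinely new passes straight through. The hashes, domains, and function names below are illustrative placeholders, not any real product’s detection logic.

```python
# Hypothetical "rearview mirror" detector: it only recognizes indicators
# (hashes, domains) harvested from attacks that have already been analyzed.
# All values and names here are illustrative placeholders.
KNOWN_BAD_HASHES = {"9f86d081884c7d65", "2c26b46b68ffc68f"}
KNOWN_BAD_DOMAINS = {"malware-c2.example", "phish-login.example"}

def is_known_threat(file_hash: str, destination: str) -> bool:
    """Flag activity only if it matches something seen in a past incident."""
    return file_hash in KNOWN_BAD_HASHES or destination in KNOWN_BAD_DOMAINS

# A novel attack with a fresh payload and an unseen domain sails straight through:
print(is_known_threat("a3b5c7d9e1f20304", "brand-new-c2.example"))  # False -> missed
```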

“The biggest shift that has to happen in security is to move it to this notion that the threats are constantly changing,” she said. “They’re going to come from all over the place. They’re going to range in sophistication. Almost to a certain degree, it doesn’t matter what the attack vector is. It could be a video conferencing camera. It could be a sensor, or it could be literally a car connecting to a wifi network. The thing is you have to be prepared for the unknown attack. And that is something that I think has been really hard for the industry as a whole to get its head wrapped around.”

Over the years, the attack surface, meaning the set of entry points criminals can exploit, has drastically increased. No longer are criminals limited to infiltrating a system from a local computer or an encrypted email; those same attackers can now find their way in through a multitude of internet-enabled devices, such as phones or thermostats. So as this attack surface widens, how can A.I. be deployed to stop these bad actors? It starts with having the right data.

“There’s a lot of different types of machine learning and A.I. out there, but we need a much more pragmatic approach,” Eagan said. “We really sought out to develop ways to do this through what’s called self-learning A.I. The A.I. basically is learning how people work, how they connect to systems and what websites they use.”

The objective is simple: if a system can fully understand a company’s workflow, rather than being limited to protecting only certain areas of the company, it is far more likely to stop attacks from finding a way into the network.
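
As a rough illustration of that idea (and only an illustration: the feature names, data, and threshold below are placeholders, not Darktrace’s actual models), a self-learning detector starts from a baseline of each user’s normal activity and flags behavior that deviates sharply from it:

```python
import statistics

# Hypothetical self-learning baseline: model what "normal" looks like for one
# user from observed activity, then flag observations that deviate sharply.
# Feature names, data, and the z-score threshold are illustrative only.

def build_baseline(history: list) -> dict:
    """Learn mean and standard deviation for each activity feature."""
    baseline = {}
    for feature in history[0]:
        values = [observation[feature] for observation in history]
        baseline[feature] = (statistics.mean(values), statistics.pstdev(values) or 1.0)
    return baseline

def is_anomalous(observation: dict, baseline: dict, z_threshold: float = 3.0) -> bool:
    """Flag the observation if any feature is far outside the learned norm."""
    return any(
        abs(observation[feature] - mean) / stdev > z_threshold
        for feature, (mean, stdev) in baseline.items()
    )

# Thirty days of routine behavior, then one very unusual day:
history = [{"logins": 2, "mb_uploaded": 15, "new_domains": 1} for _ in range(30)]
baseline = build_baseline(history)
print(is_anomalous({"logins": 3, "mb_uploaded": 900, "new_domains": 40}, baseline))  # True
```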

“The reality is you’ve got to look at it all, and not just look at silos,” Eagan said. “Attackers, by the way, are expecting that you’re only looking at silos. The reality is you have to be able to have self-learning technology look at all of that and be able to protect you against vulnerabilities wherever they might come from. And so that’s another big shift that we see is moving away from these niche or siloed approaches and actually being able to protect all areas of your network. You need the machine learning that sits there, silent in the background, to be able to self-learn across all those environments in order to keep you safe.”

Speaking of machine learning, Eagan said the future of security systems is not just preventing attackers, but having machine learning techniques that can communicate with each other on a real-time basis: techniques that not only pass information back and forth, but also work together to solve problems in real time.

“The next evolution of A.I. is the ability for one A.I. system that finds the threat to actually feed that information to the next A.I. system, that actually says, ‘Okay, I kind of, I see what you mean about this threat, but I have a few more questions for you.’ And it feeds it back to the first system. And that AI system does a little more analysis and feeds it back. So this idea that A.I. systems don’t only sit independently, but can learn from each other in a way to have these algorithmic conversations with each other to get a higher sense of context to actually make the right decisions.”
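
A minimal sketch of that kind of algorithmic back-and-forth might look like the following. The class names, message fields, and confidence scores are hypothetical, invented purely to show one system flagging a finding and a second system enriching it before a decision is made; this is not Darktrace’s architecture.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of two A.I. components having an "algorithmic conversation"
# about a finding. Names, fields, and scores are invented for illustration.

@dataclass
class Finding:
    device: str
    description: str
    confidence: float
    evidence: dict = field(default_factory=dict)

class NetworkDetector:
    """First system: spots a suspicious pattern and hands it to a peer."""
    def flag(self) -> Finding:
        return Finding("camera-07", "unusual outbound traffic volume", 0.55)

class ContextAnalyzer:
    """Second system: asks its own questions, adds evidence, and updates confidence."""
    def enrich(self, finding: Finding) -> Finding:
        finding.evidence["new_external_domain"] = True
        finding.evidence["off_hours_activity"] = True
        finding.confidence = min(1.0, finding.confidence + 0.3)
        return finding

finding = ContextAnalyzer().enrich(NetworkDetector().flag())
print(finding.confidence, finding.evidence)  # higher-confidence decision, richer context
```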

To learn more about autonomous security systems and the work Darktrace is doing, check out the full episode of IT Visionaries.

To hear the entire discussion, tune into IT Visionaries here


Episode 244