
Analyzing the Impact of A.I. and Technology on Society and Cybersecurity


Technology is advancing at a pace never seen before, and the newest tech, applications and widgets are being widely adopted at an even quicker rate.

Just look at A.I. and machine learning tools, which are now used to do things once thought unimaginable. Whether it’s figuring out which clothes best suit consumers or completing everyday work tasks, the end state for these technologies appears limitless.

But as technology grows more sophisticated, why is the software that operates it not being secured?

“The human without the suit is weak and the suit without the human is dumb. A.I. and machine learning, these different computer learnings we’ve got to work with now in cybersecurity and across the board, they’re levers. They’re not a replacement in my mind for human intelligence. When that happens, we’re going to be worried about Skynet, not these conversations. And I’m going to be thinking about how to hack that stuff, to make sure that humans stay safe.”

The future of A.I. and machine learning is mostly rooted in Hollywood sci-fi; Tony Stark’s Jarvis and Skynet represent the furthest our imaginations of these technologies have advanced so far. The reality of these tools isn’t there yet, but the power is. So why are we not protecting ourselves from it? On this roundtable episode of IT Visionaries, we explore the impact A.I. and technology are having on society and cybersecurity with Casey Ellis, the founder and CTO of Bugcrowd, and Malcolm Harkins, a cybersecurity advisor, coach and board member.

The two discuss why you’ll never be able to eliminate risk and why the lack of financial incentives is leaving most companies vulnerable to nefarious attacks. Enjoy this episode!

Main Takeaways

  • Just Throw Money at the Problem: One of the leading issues right now when it comes to cybersecurity is that app developers are not incentivized to protect products during the development lifecycle. When there is no monetary incentive for developers to protect their software, the needed layers of security are not built in. This leads to security teams doing patchwork on problems that could have been architected during the development process.  
  • Two Repelling Magnets: Security and privacy are often bound together, but the reality is that good security can encroach on a user’s privacy. When designing products, developers must think not only about the layers of security they place within the code but also about how those security measures might infringe on users’ rights.
  • Working Hand-in-Hand: Your cybersecurity strategy should be a mix of technology and human creativity. While A.I. and machine learning algorithms can help detect irregularities within a platform, most of those algorithms are not trained to learn from their mistakes, leaving them open to vulnerabilities. Instead of relying on technology alone, deploy a hybrid model that uses crowdsourced protection, allowing highly trained and skilled hackers to probe a system for vulnerabilities that can then be fixed immediately.
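To make the "detecting irregularities" takeaway concrete, here is a minimal sketch of the kind of statistical anomaly detection such algorithms build on, using only Python's standard library. The function name, sample data and threshold are illustrative, not from the episode; real systems use far more sophisticated models, which is exactly why human testers remain necessary.

```python
import statistics

def flag_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Hypothetical hourly login counts; the spike is an obvious irregularity.
counts = [12, 15, 11, 14, 13, 500, 12, 16]
print(flag_anomalies(counts))  # [500]
```

A simple threshold like this catches crude spikes but is easy for an attacker to stay under, which illustrates the episode's point: automated detection is a lever, not a replacement for skilled humans probing the system.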

For a more in-depth look at this episode, check out the article below.


Article 


With more products and services being developed at a rapid pace, and open-source APIs connecting a broad spectrum of internet users, the current state of cybersecurity can be summed up in one word: vulnerable.

“The reality is if it can execute code, it can execute malicious code,” Harkins said. “And because of the job we’ve done in the security development lifecycle to mitigate risk upfront, and then the sloppy job we’ve done on the back end to operationalize security, we frankly are living in the mess that we’ve created for ourselves because of the poor economic incentives that have existed for the creators of technology and the operators of it to do something the right way and manage the risks better.”

Ellis oversees Bugcrowd, a crowdsourced security platform that deploys white-hat security researchers to find and eliminate vulnerabilities within a platform.

“Builders don’t think like breakers,” Ellis said. “They’re not incentivized to do the same things. They’re logically different. People who deploy code don’t necessarily believe in the boogeyman.”

The reality is, as Ellis states, most products are vulnerable to outside attackers, so what needs to change? For Harkins, it starts with recognizing a basic difference between incentives and motivations.

“There’s a difference between incentives and motivations, and you can incentivize somebody, but it doesn’t mean they’re motivated,” he said. “It’s the difference between somebody who is compliant and somebody who cares. I’d rather have somebody who cares than somebody who is compliant. We have to work on both sides: the incentives we were talking about on the economic side, but also the motivations.”

Even as these technologies continue to develop, Ellis stressed that technology should never be used as a replacement for creativity, and that human judgment must remain at the center of every product that gets pushed online. Harkins, for his part, remained skeptical of the idea that A.I. alone can protect users.

“A.I. itself will be manipulated and compromised,” he said. “You can manipulate it. It’s the same thing as that exoskeleton suit. If it’s increasing your effectiveness and efficiency, do it. If it’s not, don’t buy the marketing.” 

To hear more about the impact A.I. and technology are having on cybersecurity, check out the full episode of IT Visionaries!




Episode 288