AI has raised a lot of hackles in the cybersecurity world. The prospect of autonomous agents dreaming up their own ways to attack systems is terrifying.
Still, there’s evidence that the same technologies have been a boon to defenders, too. This TechNewsWorld article, covering a study by Sapio Research, reports that 97% of surveyed cybersecurity decision-makers said they were using, or planning to use, a cybersecurity solution that takes advantage of AI.
That idea of AI as a “cybersecurity force multiplier” also came up during a segment of the Imagination in Action event where Google engineer Ayush Khandelwal interviewed Heather Adkins, Google VP of Security Engineering, about the ways and means of protecting internet-connected networks in 2025.
In considering likely strategies, Adkins pointed to sandboxing and other tactics as useful ways to pre-empt the AI-driven hacking on the horizon. She also noted that security pros have always had to be versatile.
“We’re pretty used to non-deterministic situations, because your hacker is usually fairly non-deterministic,” she said. “Predicting what they’re going to do is very difficult, and so we’re kind of used to having to put ourselves in those shoes. It is true, though, that the tools we use, we want (them) to be as deterministic as possible. And so when it comes down to starting to use a large language model in a very specific application, one of the things we think about are constraints. We want to be able to constrain what it does.”
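Her point about constraining what a model does maps onto a familiar engineering pattern: rather than letting the model act freely, the surrounding application validates everything it proposes against a fixed allowlist before anything executes. The sketch below is my own illustration of that general pattern, not Google’s tooling; the JSON schema, the `ALLOWED_ACTIONS` set, and `execute_model_output` are all invented for the example.

```python
# Illustrative sketch: constraining a language model inside a specific
# application by checking its proposed action against an allowlist.
# All names and the JSON format are assumptions made for this example.

import json

# The only operations the application will ever execute, no matter
# what the model asks for.
ALLOWED_ACTIONS = {"lookup_ip_reputation", "quarantine_host", "open_ticket"}

def execute_model_output(raw_output: str) -> str:
    """Parse a model's JSON response and act on it only if it names an
    allowlisted action with a plain, non-empty string target."""
    try:
        proposal = json.loads(raw_output)
    except json.JSONDecodeError:
        return "rejected: output was not valid JSON"

    action = proposal.get("action")
    target = proposal.get("target")

    if action not in ALLOWED_ACTIONS:
        return f"rejected: {action!r} is not an allowlisted action"
    if not isinstance(target, str) or not target:
        return "rejected: target must be a non-empty string"

    # In a real system this would dispatch to audited, rate-limited code.
    return f"executed {action} on {target}"

# A well-formed proposal passes; anything outside the allowlist is refused.
print(execute_model_output('{"action": "quarantine_host", "target": "10.0.0.5"}'))
print(execute_model_output('{"action": "delete_all_logs", "target": "*"}'))
```

The design choice is the one Adkins hints at: the model stays non-deterministic, but the blast radius of whatever it says is bounded by deterministic code around it.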
In response to questions about “vibe hacking,” where human black hats can simply ask AI to attack on their behalf, Adkins acknowledged that things will soon be different.
Everything is changing right now; the whole field is undergoing a kind of quiet revolution.
Who’s Calling?
Discussing the ways that AI can boost deepfakes and spearphishing campaigns, Adkins explained how something in this threat vector might work.
“My hypothesis is that deepfakes and the ability to create digital identities will actually bring us closer together as humans,” she said, “and when it really matters that I know your name, that I know who you are, that you’re actually my banker, we will probably seek some kind of connection on a human level. And I’m seeing this already with fraud, people will call up and say, ‘Mom, did you really just call me? Let me verify’ or ‘it sounds like my mom. Mom, what did we do on my 16th birthday?’ And ‘if she knows the answer, then it’s probably my mom.’ And so these kinds of things, I think, these instincts will build naturally over time, as a natural defense.”
Some of this, she suggested, is already happening, perhaps pre-emptively.
“In some parts of the world, nobody (is) believing anything on the internet anymore,” she said. “They question everything, every news story, every picture. So these amazing feats by athletes – (they’re) like, ‘that’s not real.’ And as an athlete, I’m sure that feels pretty terrible, because maybe that was real. But I think we are going to start to question more and more.”
In addition, Adkins mentioned the Dead Internet theory, generally traced back to the Agora Road forum, which cropped up as a way to explain all of the automated content and activity now on the web.
Defenders of the Nets
As a backstop, though, she also described “self-defending systems,” which might, for example, use tactics like C++ hardening, and may end up evolving dynamically in real time to foil hackers.
“What we think we’re moving towards now is where the security engineer’s job will be to train the machine … where the system is kind of defending itself,” she said. “The security person becomes a guide and orchestrator, making decisions for the business with the business context. And so it changes from kind of what we have considered to be a very tedious and toilsome job, into one where you’re really assisting the machine to make the right decisions.”
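To make the “system defending itself” idea concrete, here is a deliberately tiny sketch of one such building block: the engineer sets a policy threshold, and the system applies a mitigation on its own when telemetry crosses it. The event format, the threshold, and `respond_to_events` are assumptions invented for illustration; nothing here is drawn from Adkins’ talk.

```python
# Toy sketch of one "self-defending" building block: automatically
# flagging sources that exceed a failed-login threshold. The event
# shape and threshold are invented for illustration only.

from collections import Counter

FAILED_LOGIN_THRESHOLD = 5  # policy chosen by the security engineer

def respond_to_events(events):
    """Count failed logins per source IP and return the set of IPs the
    system would block on its own, per the engineer-defined policy."""
    failures = Counter(
        event["source_ip"] for event in events if event["type"] == "login_failure"
    )
    return {ip for ip, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD}

events = (
    [{"type": "login_failure", "source_ip": "203.0.113.9"}] * 6
    + [{"type": "login_failure", "source_ip": "198.51.100.2"}] * 2
)
print(respond_to_events(events))  # {'203.0.113.9'}
```

In this framing, the engineer’s “guide and orchestrator” role lives in choosing policies like that threshold, with business context, and reviewing what the automation does with them.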
That’s a snapshot of where cybersecurity stands in 2025. We have big challenges, as Adkins pointed out, but also big opportunities. I thought this was one of the more interesting discussions at the Stanford event, where so many people close to AI came together to figure out solutions for the years ahead. Stay tuned for more.


