Generative artificial intelligence is transforming cybersecurity, aiding both attackers and defenders. Cybercriminals are harnessing AI to launch sophisticated and novel attacks at large scale. And defenders are using the same technology to protect critical infrastructure, government organizations, and corporate networks, said Christopher Ahlberg, CEO of threat intelligence platform Recorded Future.
Generative AI has helped bad actors innovate and develop new attack strategies, keeping them one step ahead of cybersecurity defenses. AI helps cybercriminals automate attacks, scan attack surfaces, and generate content tailored to specific geographic regions and demographics, letting them target a broader range of potential victims across different countries. Cybercriminals have also adopted the technology to create convincing phishing emails: AI-generated text helps attackers produce highly personalized emails and text messages that are more likely to deceive targets.
“I think you don’t have to think very creatively to realize that, man, this can actually help [cybercriminals] be authors, which is a problem,” Ahlberg said.
Defenders are using AI to fend off attacks. Organizations are using the technology to prevent leaks and proactively find network vulnerabilities. AI also automates tasks such as setting up alerts for specific keywords and detecting sensitive information exposed online. Threat hunters are using AI to identify unusual patterns and summarize large amounts of data, connecting the dots across multiple sources of information and surfacing hidden patterns.
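As a rough illustration of the keyword-alerting task described above, the sketch below scans a batch of collected text for watchlist terms. The watchlist, function name, and sample posts are all hypothetical; a real threat intelligence pipeline would pull monitored terms from a platform like Recorded Future and add deduplication, scoring, and analyst review.

```python
import re

# Hypothetical watchlist -- in practice this would come from a threat
# intelligence platform, not a hard-coded list.
WATCHLIST = ["acme-corp", "vpn credentials", "internal-wiki"]

def scan_for_alerts(documents, watchlist=WATCHLIST):
    """Return (doc_index, keyword) pairs for every watchlist hit.

    A toy stand-in for the alerting workflow: each document is checked
    case-insensitively against every monitored term.
    """
    alerts = []
    for i, doc in enumerate(documents):
        for keyword in watchlist:
            if re.search(re.escape(keyword), doc, re.IGNORECASE):
                alerts.append((i, keyword))
    return alerts

# Example: a scraped forum post mentioning a monitored term
posts = [
    "Selling VPN credentials for a mid-size retailer",
    "nothing interesting here",
]
print(scan_for_alerts(posts))  # [(0, 'vpn credentials')]
```

The generative-AI layer Ahlberg describes sits on top of matching like this, summarizing the hits and connecting them to related reporting rather than doing the raw detection itself.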
The work still requires human experts, but Ahlberg says the generative AI technology we’re seeing in projects like ChatGPT can help.
“We want to speed up the analysis cycle [to] help us analyze at the speed of thought,” he said. “That’s a very hard thing to do and I think we’re seeing a breakthrough here, which is pretty exciting.”
Ahlberg also discussed the potential threats that highly intelligent machines might bring. As the world becomes increasingly digital and interconnected, the ability to bend reality and shape perceptions could be exploited by malicious actors. These threats are not limited to nation-states, making the landscape even more complex and asymmetric.
AI has the potential to help protect against these emerging threats, but it also presents its own set of risks. For example, machines with high processing capabilities could hack systems faster and more effectively than humans. To counter these threats, we need to ensure that AI is used defensively and with a clear understanding of who is in control.
As AI becomes more integrated into society, it’s important for lawmakers, judges, and other decision-makers to understand the technology and its implications. Building strong alliances between technical experts and policymakers will be crucial in navigating the future of AI in threat hunting and beyond.
AI’s opportunities, challenges, and ethical considerations in cybersecurity are complex and evolving. Ensuring unbiased AI models and maintaining human involvement in decision-making will help manage ethical challenges. Vigilance, collaboration, and a clear understanding of the technology will be crucial in addressing the potential long-term threats of highly intelligent machines.
Ahlberg also raised concerns about China, Russia, and economic adversaries deploying autonomous machines. These countries likely won't slow down AI development or share Western ethical considerations. While having the ability to "pull the plug" on such machines is a smart safeguard, he suggested that the technology's integration into society and the global economy will make it hard to detach. Ahlberg emphasized the need to design products and machines with clarity about who controls them.
“The big thing that the internet did in all of this is that the internet sort of became the place where all the world’s information migrated,” said Ahlberg. “These large language models are doing pretty magical things… to speed up that thinking cycle.”
He added, “In the next 25 years, the world becomes a reflection of the internet.”