AI: a Double-Edged Sword for Security Teams?
In our exclusive GenAI segment, The Fast Mode spoke to Sygnia’s Rob Kehl on how AI and Generative AI is transforming telecom and enterprise networks, from managing network performance to mitigating threats and improving efficiency.
First published on The Fast Mode, January 30, 2025
To sum up, the main messages are as follows:
- AI will enhance network operations but also enable attackers to exploit vulnerabilities faster.
- Without governance, monitoring, and Zero Trust defenses, AI systems will amplify existing risks.
- Treat AI as an attack surface from the start with measures like monitoring for prompt injections, protecting training data integrity, applying ethical frameworks (e.g., NIST AI RMF), and using Zero Trust architectures to segment networks, identities, and code repositories.
AI is transforming network operations with real-time optimization, predictive maintenance, and automated fault detection. Yet, AI is just another application built on infrastructure layers requiring vast data, compute power, and interconnected systems. The key challenge is securing AI-driven networks before attackers weaponize the same technology. The benefits are clear. Google’s AI-based fuzzing recently uncovered a 20-year-old security flaw missed by humans. AI’s ability to analyze massive datasets and detect subtle issues makes it invaluable for defenders trying to outpace threats.
But attackers will leverage AI, too. Smarter fuzzing will rapidly identify vulnerabilities, enabling faster, more precise attacks. Tools that automate reconnaissance and targeted exploit creation will make breaches harder to detect. Zero-day vulnerabilities, like those seen in Citrix, Ivanti, Palo Alto, and Fortinet, could go from discovery to full exploitation in hours. AI-powered botnets will behave like organized threat actors, adapting and hiding as they propagate.
The real risk lies in unprepared organizations. Legacy systems, misconfigurations, and unpatched vulnerabilities remain widespread. Adding AI-driven tools will not solve these problems; they may introduce new vulnerabilities that become targets. AI systems themselves are susceptible to supply chain compromises, model poisoning (where corrupted training data skews outputs), and prompt injection attacks (where manipulated inputs bypass model restrictions). Without governance, monitoring, and Zero Trust defenses, AI will amplify existing risks.
Smarter does not mean safer. Organizations must harden their AI infrastructure early and treat it as a critical attack surface. Continuous monitoring, robust governance, and Zero Trust architectures are essential to detect manipulation and ensure every interaction is verified. For example, continuous monitoring can detect prompt injection attacks targeting AI models or anomalies that compromise the integrity of training data. Robust governance should include ethical frameworks, such as NIST’s AI Risk Management Framework or the OECD AI Principles, to ensure transparency, accountability, and resilience. Zero Trust architectures can segment and segregate networks, identities, data sets, and code repositories, limiting the blast radius if one component is compromised.
Just as Google used AI to uncover vulnerabilities, attackers will do the same. Securing AI as part of broader defenses is not optional it is essential. Without these measures, organizations risk turning their smartest tools into their greatest vulnerabilities.
This article is a part of The Fast Mode’s 2025 Special Edition: GenAI segment. To learn more about the segment, visit the dedicated page here. To view all articles published under the segment, click here. A research report based on the findings of the segment survey will be published in February 2025. To access the survey, click here.
By clicking Subscribe, I agree to the use of my personal data in accordance with Sygnia Privacy Policy. Sygnia will not sell, trade, lease, or rent your personal data to third parties.
