AI’s Double-Edged Sword: Countering AI-Enabled Cyberattacks by Deploying Defensive AI Strategies | Martech Edge | Best News on Marketing and Technology
GFG image
AI’s Double-Edged Sword: Countering AI-Enabled Cyberattacks by Deploying Defensive AI Strategies

artificial intelligencecybersecurity

AI’s Double-Edged Sword: Countering AI-Enabled Cyberattacks by Deploying Defensive AI Strategies

MTEMTE

Published on 24th Feb, 2026

By Dr. David Utzke, CEO and CTO at MyKey Technologies
 
Organizations are at an inflection point where AI is accelerating cybercrime at scale, as experts warn that it broadens the attack surface, creates new vulnerabilities, and introduces complex governance and compliance challenges.

 Like all AI systems, those deployed in cyberattacks continuously learn and evolve, enabling them to adapt, evade detection, and develop attack patterns that traditional security tools may fail to recognize.

 Furthermore, AI agents capable of operating autonomously are significantly increasing the scalability and sophistication of cyberattacks and fraud operations.
 

(Q) What makes AI-powered cyberattacks fundamentally different from traditional automated cyber threats?

 

This is a great question and one that I am frequently asked. I find it helpful to begin by defining a cyberattack. In cybersecurity, a cyberattack is an intentional, malicious attempt by an individual or organization to breach a computer network or system. These attacks aim to compromise the CIA triad: the Confidentiality, Integrity, or Availability of digital assets and information. The NIST (National Institute of Standards and Technology) CSRC (Computer Security Resource Center) Glossary officially defines it as an attempt to gain unauthorized access to system services or resources, or to compromise system integrity and availability.

 

So, working from this common definition of cyberattacks, AI-powered cyberattacks differ from traditional, non-AI attacks primarily through increased speed that increases the capability in the number of attacks, considerable automation lowering the barrier to entry, and intelligent, real-time adaptation to avoid detection. While traditional attacks rely on manual, static methods, AI technologies enable autonomous scanning, evasive polymorphic (i.e., occurring in several different forms) malware, and highly personalized social engineering at scale, transforming the threat landscape from weeks of planning to near-instantaneous execution.

 

Some of the core advancements in AI technology-facilitated cyberattacks include:
 
o   Hyper-personalized social engineering
o   Synthetic media (deepfakes)
o   Autonomous vulnerability discovery
o   LLMjacking
o   Prompt Injection
o   AI Model data poisoning
 

(Q) How can autonomous AI agents amplify the speed, sophistication, and scale of modern cybercrime?

 

It is important to articulate that the term “autonomous AI agents technology” is considered partially accurate but often hyped, representing an emerging capability rather than a fully realized, foolproof technology as of the time of this interview. I have to laugh every time I see the ServiceNow ad on streaming. In the dialogue, when AI agents are brought up, it is clarified that they are not just “secret agents,” but rather “autonomous minions that you control” to handle routine, repetitive tasks. How can the minion (def.: underling of a powerful person) at the same time be autonomous? Get it? The hype!
 

So, here is another opportunity to define another frequently misunderstood term from the perspective of AI architecture. An “AI agent” is most commonly an LLM (Large Language Model) that can take actions to achieve specific, high-level goals with minimal human oversight – a step up from an AI bot. Unlike an AI bot, AI agents can break down complex tasks, use tools, and learn from experience. 
 

An AI agent is a coded system that can, to a limited extent, set its own sub-goals, plan, and take actions to achieve a high-level objective with little to no human intervention. However, most, if not all, current “autonomous” agents require human-in-the-loop for oversight (aka Human Agent), especially for high-stakes decisions, making them more “agentic” than fully autonomous.
 

The term “autonomous AI agents” is often used as a marketing buzzword that obscures the actual technology behind it. To highlight AI technologies involved in cyberattacks, include:
 
o   ML (Machine Learning) and DL (Deep Learning)
o   GPTs (General Pre-trained Transformers) and LLMs (e.g., WormGPT and FraudGPT)
o   GANs (Generative Adversarial Networks) and NNs (Neural Networks)
o   NLP (Natural Language Processing): Voice-to-Text and Text-to-Voice
 

Given the advancements in ML, specifically DL, AI models can understand complex, nuanced language patterns. NLP is the driving force underpinning LLMs, enabling more accurate, context-aware, and human-like interactions to enact more sophisticated cyberattacks against cybersecurity frameworks, even if an organization deploys AI-enhanced cybersecurity systems.
 

It is for this reason that it is crucial for cybersecurity professionals to understand AI model architecture rather than treating AI as an impenetrable “singularity” or a magical black box. As AI models become deeply integrated into IT infrastructure, understanding the specific mechanisms, data pipelines, potential failure points of these systems, and how to audit AI models for vulnerabilities is essential for effective, proactive defense. Viewing AI as a “singularity,” or as a mysterious, all-knowing entity, leaves organizations vulnerable to unique, AI-based cyberattack threats.  
 

(Q) How can organizations detect AI-generated attacks that are specifically designed to evade conventional security tools?

 

When I teach grad students and CPE sessions on the topic of cybersecurity, I emphasize that the first necessary step for an organization is to have a well-established AI model and data governance framework. Implementing technology governance frameworks is no longer just a compliance task; it is a foundational strategic requirement for any organization. Having AI model and data governance frameworks is critical for organizations to ensure AI initiatives are reliable, ethical, secure, and compliant with emerging regulations. Without a governance framework, organizations face significant risks beyond cyberattacks that include biased models, inaccurate or harmful outputs, as well as suffering from potential reputational damage and legal penalties.
 

With the above noted, cybersecurity professionals can audit and detect AI-based cyberattacks, which often evade traditional defense mechanisms. But it requires moving from point-in-time, snapshot, random, or set periodic audits to a continuous monitoring approach. Continuous monitoring is crucial because it replaces snapshot, point-in-time, or random audits with real-time, “always-on” visibility, allowing organizations to detect and remediate risks instantly rather than months later. It reduces security vulnerabilities and ensures continuous regulatory compliance (e.g., DORA, PCI DSS 4.0). 
 

(Q) In what ways can companies move from reactive incident response to predictive, AI-driven threat prevention?
 

To protect against AI-based cyberattacks, organizations need to adopt a ZTA (zero-trust architecture) and a defense-in-depth strategy that combines AI-driven security tools, robust AI governance, and enhanced human training. Key measures include deploying anomaly detection, behavioral biometrics, and automated AI-based security tools to counter rapid, automated attacks, while enforcing strict data validation to prevent data poisoning.
 
·       Defense-in-depth is a comprehensive cybersecurity strategy that layers multiple, heterogeneous security controls—covering people, technology, and operations—to protect assets, ensuring that if one defense fails, others contain the threat. Inspired by military, castle-style tactics (i.e., reinforced architecture), it aims to increase attacker complexity and prevent single points of failure.
 
·       ZTA is a cybersecurity framework based on “never trust, always verify,” treating all network traffic as hostile, regardless of origin. It removes implicit trust, focusing on strict IAM (Identity & Access Management) verification, least-privilege access, and microsegmentation (divides networks into small, isolated, and granular security zones) to contain breaches. Key components include continuous monitoring, MFA, and data encryption to secure distributed, modern, cloud-based environments.
 

(Q) How can resilient risk-based AI governance frameworks help organizations rebuild trust and accountability as AI-driven threats continue to escalate?
 

As AI-driven cyber threats, such as adversarial attacks, data poisoning, and model BS (imprecisely called hallucination), escalate, the need for governance frameworks to provide the necessary guardrails to ensure AI technologies are reliable, ethical, and secure becomes even more urgent.
 

Key ways that governance frameworks rebuild trust and accountability include:
 
·   Establishing Proactive Risk Management
·   Ensuring Transparency and Explainability
·   Enforcing Clear Accountability
·   Implementing Real-Time Monitoring and Control
·   Aligning with Ethical Standards
 

In addition, well-devised governance frameworks counteract escalating threats as AI-driven threats grow, by offering a structured approach to resilience by incorporating Red-teaming and adversarial testing to uncover security gaps before deployment, Data Security Posture Management (DSPM) to protect sensitive data used in AI workloads, and continuous monitoring to identify vulnerabilities and potential threats in real-time. 
 

Ultimately, these frameworks turn cyberattacks into a manageable risk and compliant processes, moving from a position of “control” to “confidence.”
 

As a final note, this interview is given with an eye on research of the near-term future of cybercrimes through cyberattacks as AI technologies that are currently being converged with quantum computing. MyKey Technologies is addressing the research involving the near-term future (2026–2030) of the integration of Artificial Intelligence (AI) technologies with emerging quantum computing capabilities, which is set to fundamentally reshape the threat landscape, turning cybercrime into a highly automated, “agentic” ecosystem. While fully functional quantum attacks on encryption are anticipated closer to the 2030s, the immediate threat lies in the combination of AI-powered reconnaissance with the “harvest now, decrypt later” (HNDL) strategy.
 

So, balancing the immediate, “here-and-now” threat responses with attention given to near-term strategic planning is a critical, yet challenging endeavor for organizations. However, failing to do so can lead to a “whack-a-mole” cycle of endless crisis management. Effective approaches involve integrating short-term actions into a strategic vision regarded as strategic agility. 
REQUEST PROPOSAL