OpenText Report: Enterprises Race to Deploy GenAI, but Security and Governance Are Falling Behind

PR Newswire

Published on: Mar 24, 2026

Enterprises are embracing generative AI at a rapid pace—but many are doing so without the safeguards needed to manage its risks. That’s the central finding of a new global study released by OpenText in partnership with the Ponemon Institute.

The report, “Managing Risks and Optimizing the Value of AI, GenAI & Agentic AI,” reveals that 52% of organizations have already fully or partially deployed generative AI, yet security, governance, and risk-management practices are lagging far behind adoption.

The research underscores a growing tension across the enterprise AI landscape: companies are racing to integrate AI into operations, but the governance frameworks required to ensure trust, compliance, and reliability are still catching up.

As AI systems become more embedded in business workflows—and increasingly autonomous—the stakes for managing these risks are rising quickly.

The AI Maturity Gap

According to the study, only one in five enterprises has reached what researchers consider “AI maturity.” In practical terms, that means organizations where AI-driven cybersecurity systems are fully deployed and the risks associated with those systems are properly assessed and managed.

For the majority of enterprises, that maturity remains out of reach.

In fact, 79% of organizations have not yet achieved full AI maturity in cybersecurity, indicating that while AI adoption is widespread, the operational and governance infrastructure needed to support it is still developing.

The findings reflect a broader industry reality: implementing AI technology is often faster and easier than establishing the policies, oversight structures, and risk frameworks needed to manage it responsibly.

“AI maturity isn’t just about adopting AI tools—it’s about doing it responsibly,” said Muhi Majzoub, EVP of Product and Engineering at OpenText. “Security and governance are foundational to getting real value from AI.”

Governance and Security Struggling to Keep Pace

The study highlights several major gaps in how organizations are managing AI-related risks.

Among the most striking findings:

  • Only 43% of organizations have adopted a risk-based governance strategy for AI systems.
  • Just 41% have AI-specific data privacy policies in place.
  • 59% say AI makes regulatory compliance more difficult, especially under privacy and security frameworks.

These numbers suggest that while enterprises recognize AI’s potential benefits, many are still struggling to integrate governance practices that address risks such as bias, misinformation, or security vulnerabilities.

In fact, 58% of respondents reported that prompt and input risks (inputs that can lead to misleading or harmful outputs) are extremely difficult to mitigate.

User behavior also introduces new challenges. More than half of organizations surveyed reported difficulty controlling how employees interact with AI systems, particularly when it comes to the unintended spread of inaccurate or misleading information.

Bias and Reliability Remain Major Obstacles

Beyond governance concerns, organizations are also confronting technical limitations in AI systems themselves.

Roughly three in five respondents (62%) say minimizing model bias is very or extremely difficult, raising concerns about fairness, reliability, and ethical AI use.

Operational challenges further complicate deployment:

  • 45% of organizations report errors in AI decision rules as a barrier to effectiveness.
  • 40% cite problems with incorrect or incomplete data inputs affecting AI performance.

These issues directly affect how well AI can perform in cybersecurity and threat detection scenarios.

While many organizations hope AI will accelerate security operations, the study suggests those gains remain uneven.

Just 51% of respondents say AI effectively reduces the time needed to detect anomalies or emerging threats, and only 48% believe AI meaningfully improves threat detection and analysis.

In other words, the technology is promising—but far from perfect.

Autonomous AI Still a Distant Goal

One of the most ambitious visions for enterprise AI involves systems that can operate autonomously—analyzing threats, making decisions, and responding without human intervention.

But the study indicates that level of independence is still a long way off.

Fewer than half of organizations surveyed (47%) say their AI models are capable of learning behavioral norms and making safe decisions autonomously.

Because of these limitations, 51% of organizations say human oversight remains essential in AI governance—particularly as cyber attackers evolve their tactics and attempt to exploit AI systems themselves.

This reliance on human supervision highlights a fundamental paradox in AI adoption: the technology promises automation and efficiency, yet still requires careful monitoring to ensure reliability.

Why Trust and Explainability Matter

The study also points to a deeper issue affecting enterprise AI adoption: trust.

For AI systems to be widely accepted in critical business operations, they must be transparent and explainable. Organizations need to understand not only what decisions AI makes, but why it makes them.

Without that transparency, enterprises may hesitate to rely fully on automated systems—particularly in high-risk areas such as cybersecurity or regulatory compliance.

Industry experts increasingly argue that explainability, governance frameworks, and policy-based controls must be built into AI systems from the start, rather than added later as an afterthought.

A Global Snapshot of AI Adoption

The report’s findings are based on a global survey conducted by the Ponemon Institute in November 2025.

Researchers gathered responses from 1,878 IT and security professionals across North America, Europe, Asia-Pacific, the Middle East, Africa, and Latin America. Participants represented a wide range of industries including financial services, healthcare, technology, manufacturing, and energy.

The survey included executives, engineers, security specialists, compliance professionals, and other decision-makers involved in AI and cybersecurity strategy.

This broad sample provides a global perspective on how organizations are navigating the challenges of AI adoption.

The Road Ahead for Enterprise AI

The report arrives at a critical moment for enterprise technology. Generative AI adoption has accelerated dramatically over the past two years, and many organizations are now experimenting with more advanced systems such as agentic AI—models capable of performing complex tasks with minimal human direction.

But as AI systems grow more powerful, the risks associated with them also increase.

For companies hoping to unlock the full value of AI, the message from the study is clear: adoption alone isn’t enough. Governance, security frameworks, and responsible AI policies must evolve just as quickly.

Organizations that invest early in these foundations may gain a significant competitive advantage—not only by avoiding regulatory and security pitfalls, but by building AI systems that employees and customers can trust.

As Majzoub noted, the next generation of AI leaders will likely be those that combine innovation with transparency and control.
