Cohesity CEO warns AI tools need strict access controls to prevent data breaches

Cohesity CEO Sanjay Poonen outlined how enterprises must implement strict role-based access controls and secure data governance to prevent AI hallucinations and unauthorized access when deploying generative AI tools for data analysis. Speaking at The AI Security Summit, Poonen emphasized that companies need robust security frameworks to safely unlock the potential of their massive, unclassified data stores through AI-powered summarization and search capabilities.

What you should know: Cohesity, a data security company, positions itself as an “AI-powered data security company” that helps enterprises balance accessibility with protection when implementing AI tools.

  • The company serves over 13,000 customers, including major banks, and works closely with them to understand evolving AI security needs.
  • Cohesity is collaborating with Nvidia, a leading AI chip manufacturer, to deliver generative AI tools that allow customers to securely search, summarize and analyze both primary and backup data.

The security challenge: Enterprises face significant risks when deploying AI tools without proper safeguards, particularly around data access and AI accuracy.

  • “You have to have very strict role-based access controls to ensure the wrong people aren’t searching for and summarizing your data,” Poonen said.
  • AI hallucinations, instances where AI systems generate false or misleading information, pose a critical business risk, since incorrect summarizations could lead to flawed decision-making.
  • Companies must ensure AI tools don’t provide unauthorized access to sensitive information or generate misleading insights.
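The access-control principle Poonen describes can be sketched in a few lines: filter documents by role before anything reaches a generative model. This is a minimal illustration only; all names here (`Document`, `ROLE_PERMISSIONS`, `summarize_for_role`) are hypothetical and not part of any Cohesity or Nvidia API.

```python
# Minimal sketch of role-based access control gating an AI summarization
# request. Roles, classifications, and function names are illustrative.
from dataclasses import dataclass

# Which data classifications each role may see (hypothetical mapping).
ROLE_PERMISSIONS = {
    "analyst": {"public", "internal"},
    "finance": {"public", "internal", "financial"},
    "admin":   {"public", "internal", "financial", "restricted"},
}

@dataclass
class Document:
    doc_id: str
    classification: str
    text: str

def accessible_documents(role: str, documents: list[Document]) -> list[Document]:
    """Return only the documents this role is permitted to search or summarize."""
    allowed = ROLE_PERMISSIONS.get(role, set())  # unknown roles see nothing
    return [d for d in documents if d.classification in allowed]

def summarize_for_role(role: str, documents: list[Document]) -> str:
    """Filter first, then summarize: the model never sees out-of-scope data."""
    docs = accessible_documents(role, documents)
    if not docs:
        return "No accessible documents for this role."
    # A real system would call a generative model here; we just join IDs.
    return " | ".join(d.doc_id for d in docs)
```

The key design choice is enforcing access *before* the model is invoked, so "the wrong people" cannot have sensitive data summarized for them even by a well-crafted prompt.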

Why regulation matters: Data governance requirements vary significantly across industries, creating complex compliance landscapes that AI implementations must navigate.

  • Heavily regulated sectors like banking and healthcare face strict mandates requiring permanent data retention.
  • Less regulated industries have more flexibility, with some companies choosing to delete emails after a year to reduce costs and legal liability risks.
  • “Different organizations have different regulatory requirements from the government on how long you need to keep email and messaging,” Poonen explained.
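The retention differences Poonen describes could be encoded as a simple per-industry policy check. The rules below are purely illustrative assumptions (they are not regulatory guidance, and the retention windows are invented for the example):

```python
# Sketch of a per-industry message-retention check. Values are hypothetical.
from datetime import date

# None means retain permanently (e.g. heavily regulated sectors);
# an integer is a retention window in days after which deletion is allowed.
RETENTION_DAYS = {
    "banking": None,
    "healthcare": None,
    "retail": 365,  # some less regulated companies delete email after a year
}

def may_delete(industry: str, received: date, today: date) -> bool:
    """Return True if a message is past its retention window and may be deleted."""
    window = RETENTION_DAYS.get(industry)  # unknown industries default to permanent
    if window is None:
        return False
    return (today - received).days > window
```

A real governance system would key such rules to specific regulations rather than broad industry labels, but the safe-by-default behavior (retain when in doubt) is the point of the sketch.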

The bigger picture: As generative and agentic AI reshape enterprise automation, the intersection of data security and AI capabilities becomes increasingly critical for business success.

  • Companies must evolve their data protection strategies to accommodate AI-powered analysis while maintaining security standards.
  • The rise of sovereign clouds and hybrid computing adds additional complexity to enterprise AI security frameworks.
  • Trust and transparency will drive the next era of enterprise AI adoption, requiring companies to balance innovation with robust security measures.
