“AI amplifies threats, increases the attack surface, and exacerbates existing vulnerabilities”

AI has become a major cross-functional risk, affecting the entire organization rather than a single technical department. Treated as a strategic priority, it requires comprehensive governance. That is TrendAI’s mission. Insights from Nadine Serneels, Country Director, Trend Micro BeLux.

“AI has ushered in a new era of cyber threats: faster, more deceptive, and easier to deploy at scale,” notes Nadine Serneels, Country Director, Trend Micro BeLux. “Most companies are aware of the risks. But they don’t fully understand them.”

AI is no longer limited to isolated projects. It is now integrated into business tools, SaaS platforms, development environments, and, increasingly, into cybersecurity solutions themselves. This gradual proliferation is transforming information systems, but it also makes their AI usage harder to grasp in its entirety.

“We can call this a cross-functional business initiative. AI no longer involves just IT, and therefore the CIO, but the business units themselves. Organizations must ask: Do I have the internal expertise to use these technologies? Are my models trained on my own data? Do I have the necessary infrastructure and budgets? That is our mission at TrendAI.”

A radically different risk profile

“There is, quite clearly, a paradigm shift. Just yesterday, cybersecurity was contained; IT managed it. Today, with AI, we’re entering a new dimension,” continues Nadine Serneels. “The risks are operational, legal, and reputational. AI amplifies threats, increases the attack surface, and exacerbates existing vulnerabilities.”

New liability risks are also emerging, tied to automated decision-making, biased or discriminatory models, and the misuse of intellectual property. And ambiguity remains over which party is liable when harm results from an AI-driven decision.

That is enough to upend the security landscape. When AI systems can plan, act, and interact autonomously with other tools, the risk profile is radically different from that of traditional AI.

A matter of governance

“In terms of governance, frameworks are still being developed in many organizations,” notes Nadine Serneels. “Responsibilities are not always clearly defined, and usage frameworks continue to evolve within a regulatory environment that is itself still under construction. That’s a lot to handle!”

The challenge is no longer simply to adopt AI, but to track its actual deployment within an information system where uses are multiplying and overlapping. The issue, then, becomes less technological and more organizational: regaining visibility in an environment that produces less and less of it.

“This evolution marks a product, commercial, and technological reorientation centered on a single promise: securing AI infrastructures, applications, and agents. At Trend Micro, we are rolling it out under a new identity, TrendAI.” As companies rebuild their information systems on AI foundations, the goal is to transform reactive cybersecurity into proactive governance of autonomous systems.

From a product portfolio to a unified platform

TrendAI can transform agent-based AI from a high-risk experiment into an enterprise-ready architecture. Organizations can thus define trust boundaries, enforce policies in real time, and maintain continuous visibility into the behavior of autonomous AI, while preserving the flexibility and power that make agent-based systems valuable.

“In practical terms,” continues Nadine Serneels, “we’re moving from a portfolio of industry-leading products to a unified AI cybersecurity platform for enterprises.” Indeed, TrendAI adds an enterprise-grade security layer that governs agent behavior, the tools they can access, and how risks are detected and addressed—before, during, and after execution.

Finally, TrendAI’s approach is based on four fundamental principles: visibility into AI usage, systems, and agents interacting across different environments; understanding the context and underlying intentions behind these interactions; enforcing policies and controlling agent-driven usage and actions; and introducing human oversight at critical decision points.
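In the abstract, the “enforce policies” and “human oversight” principles can be sketched in code. The following is a minimal, hypothetical illustration, not TrendAI’s actual implementation: a gate that every agent tool call passes through, with an allowlist, an escalation path for sensitive actions, and an audit trail for visibility. All class and tool names are invented for the example.

```python
# Hypothetical sketch of a policy gate for agent tool calls.
# Illustrates policy enforcement, human oversight at critical
# decision points, and continuous visibility (audit log).
from dataclasses import dataclass, field

@dataclass
class PolicyGate:
    allowed_tools: set                         # tools the agent may invoke at all
    needs_approval: set                        # tools requiring human sign-off
    audit_log: list = field(default_factory=list)  # visibility trail

    def check(self, tool: str, human_approved: bool = False) -> str:
        """Return 'allow', 'escalate', or 'deny' for a tool call."""
        if tool not in self.allowed_tools:
            decision = "deny"                  # outside the trust boundary
        elif tool in self.needs_approval and not human_approved:
            decision = "escalate"              # pause and ask a human
        else:
            decision = "allow"
        self.audit_log.append((tool, decision))
        return decision

gate = PolicyGate(
    allowed_tools={"search_docs", "send_email", "delete_records"},
    needs_approval={"delete_records"},
)
print(gate.check("search_docs"))                          # allow
print(gate.check("delete_records"))                       # escalate
print(gate.check("delete_records", human_approved=True))  # allow
print(gate.check("format_disk"))                          # deny
```

The point of the sketch is the placement of the control: the gate sits between the agent and its tools, so policy is enforced at execution time rather than relying on the model to police itself.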