How can companies be reorganized to promote effective collaboration between humans and AI?

Trust in fully autonomous AI agents has fallen from 43% to 27% over the past year due to concerns about privacy and ethics. Behind this decline lies a real challenge.

“Companies are discovering that AI agents have a greater impact when humans remain actively involved,” says Franck Greverie, Chief Portfolio & Technology Officer at Capgemini. This is also the conclusion of the Capgemini Institute report, Rise of agentic AI: How trust is the key to human-AI collaboration. Trust and human oversight are essential to realizing the full potential of agentic AI. As such, the gap between intention and implementation capability is now one of the main barriers to seizing this opportunity.

“To succeed, companies must remain focused on results, rethinking their processes with an AI-first approach,” continues Franck Greverie. “The success of this transformation lies in building trust in AI by ensuring it is developed responsibly, with ethics and security built in from the design stage.”

The issue of trust involves reorganizing businesses

Trust in fully autonomous AI agents has fallen sharply, from 43% to 27% in one year, according to the report. Nearly two in five executives now believe that the risks of implementing AI outweigh the benefits. Only 40% of companies say they trust AI agents to manage tasks and processes autonomously; the majority remain wary of the technology.

Confidence is growing, however, as companies move from exploration to implementation: among those that have started deployment, 47% report above-average trust in AI agents, compared with 37% of those still in the exploratory phase. “The issue of trust involves reorganizing businesses to foster effective collaboration between humans and AI,” says Franck Greverie. “It’s about creating the right conditions for these systems to reinforce human judgment and improve economic performance.”

Human-AI chemistry: the key to sustainable adoption

The real promise of agentic AI lies in its ability to address strategic business challenges and fundamentally rethink ways of working, according to the Capgemini Institute. In the next 12 months, more than 60% of companies plan to train hybrid human-agent teams, where AI agents will act as subordinates, tools, or support for human capabilities. “This means that AI agents can no longer be seen as mere tools: they are becoming full-fledged members of teams…”

While companies recognize that AI agents add more value when humans remain involved, few are ready to deploy agentic AI at scale. Eighty percent lack a sufficiently mature AI infrastructure, according to the Capgemini Institute, and fewer than one in five consider themselves truly ready in terms of data. Ethical concerns such as data protection, algorithmic bias, and lack of explainability remain widespread, yet few companies are taking concrete action.

To fully exploit the potential of AI agents, companies must move beyond the hype, the report recommends. This involves rethinking processes, reinventing business models, transforming organizational structures, and finding the right balance between agent autonomy and human involvement.