Study: Trust in GenAI surges globally despite gaps in AI safeguards

Organisations building trustworthy AI are 60% more likely to double the ROI of their AI projects, underscoring the high cost of ignoring responsible AI practices

SAS, a global leader in data and AI, has unveiled new research that explores the use, impact and trustworthiness of AI. The IDC Data and AI Impact Report: The Trust Imperative, commissioned by SAS, found that IT and business leaders across the UK report having greater trust in generative AI than any other form of AI.

The research, which explored AI use and adoption, also found that only 40% of organisations globally are investing to make AI systems trustworthy through governance, explainability and ethical safeguards, even though organisations that prioritise trustworthy AI were 60% more likely to report doubling the ROI of their AI projects.

Paradoxically, among those reporting the least investment in trustworthy AI systems, GenAI (e.g., ChatGPT) was rated roughly three times as trustworthy as traditional AI (e.g., machine learning), despite the latter being the most established, reliable and explainable form of AI.

Emerging AI technologies evoke most trust

Overall, the study found that the most trusted AI deployments in Europe were emerging technologies, such as GenAI and agentic AI, rather than more established forms of AI. Almost half of respondents (46%) reported “complete trust” in GenAI, while over a quarter (27%) said the same for agentic AI. The least trusted form of AI is traditional AI, where just one in seven respondents (13%) indicated complete trust.

In the UK, trust in AI is strongly tied to privacy and compliance. Respondents are far more likely than their global peers to cite data privacy and security (+10%) and data privacy and compliance challenges (+8%) as barriers, reflecting the UK’s heightened sensitivity to regulatory oversight and the need for transparent, well-governed AI systems that meet both customer and regulator expectations.

At the same time, difficulty accessing relevant data sources is a significant challenge (+8%). This points to a readiness gap: without seamless, secure access to high-quality data, scaling AI becomes difficult.

This mix highlights regulation as both a constraint and catalyst: slowing experimentation but reinforcing the need for trusted, transparent AI. UK enterprises that treat compliance as an enabler will be best placed to deliver AI systems that balance innovation with resilience.

Across UK organisations, familiarity with quantum AI is growing only slowly. Almost a third of global decision makers say they are familiar with quantum AI, whereas just 10% of UK respondents said the same.

A strong link between trustworthy AI and AI impact

Across Europe, there is significant variation between countries in how organisations approach trustworthy AI implementation and in the level of business impact they achieve.

For example, the UK invests more in trustworthiness but struggles to turn this into business impact, underusing its capabilities. Across the continent, Ireland achieves strong business impact from AI, while Denmark faces challenges with both impact and trustworthiness.

Trust dilemma

A trust dilemma exists where organisations’ perceived trust in AI systems is misaligned with those systems’ actual trustworthiness.

Two risks emerge: underutilisation of reliable systems when trust remains low and overreliance on unproven systems when confidence is disproportionately high. The challenge is particularly acute for GenAI, where rapid enthusiasm has outpaced governance and data quality.

Only 8% of UK organisations report a significant degree of alignment between their trust in AI and the actions they are taking to implement trustworthy AI, slightly below the global average of 9%. Meanwhile, 44% of UK organisations experience the trust dilemma, compared with 46% globally.

Most crucially, almost a third of UK organisations (32%) are in the “danger zone”: they report complete trust in AI but do not underpin that trust with a corresponding investment in delivering trustworthy AI.

“Our research shows a contradiction: that forms of AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy,” said Kathy Lange, Research Director of the AI and Automation Practice at IDC. “As AI providers, professionals and personal users, we must ask: GenAI is trusted, but is it always trustworthy? And are leaders applying the necessary guardrails and AI governance practices to this emerging technology?”

“For AI to deliver meaningful business impact, trust must be matched by action,” said Dr Iain Brown, Head of AI & Data Science at SAS Northern Europe. “Our findings show that while many UK organisations have confidence in AI systems, there is room to strengthen governance, data quality, and trustworthy practices. By addressing this trust-action gap, organisations can reduce risk, make better use of AI capabilities, and ultimately turn confidence into tangible value for both business and society.”

The research draws on a global survey of 2,375 respondents conducted across North America, Latin America, Europe, the Middle East and Africa, and Asia Pacific. Participants included a balanced mix of IT professionals and line-of-business leaders, offering perspectives from both technology and business functions. 

SAS Innovate 2026 – a one-of-a-kind experience for business leaders, technical users and SAS partners – is coming April 27–30, 2026, in Grapevine, Texas. Visit the SAS Innovate website for more information and to save the date!

About SAS

SAS is a global leader in data and AI. With SAS software and industry-specific solutions, organisations transform data into trusted decisions. SAS gives you THE POWER TO KNOW®.

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are trademarks of their respective companies. Copyright © 2025 SAS Institute Inc. All rights reserved.

Editorial contacts:

Visit the SAS Newsroom
https://www.sas.com/en_gb/news.html