Study: Trust in GenAI surges globally despite gaps in AI safeguards
Organisations building trustworthy AI are 60% more likely to double the ROI of their AI projects, underscoring the high cost of ignoring responsible practices
SAS, a global leader in data and AI, recently unveiled new research that explores the use, impact and trustworthiness of AI. The IDC Data and AI Impact Report: The Trust Imperative, commissioned by SAS, found that IT and business leaders report having greater trust in generative AI than any other form of AI.
The global research exploring AI use and adoption also found that only 40% of organisations are investing to make AI systems trustworthy through governance, explainability and ethical safeguards, even though organisations prioritising trustworthy AI are 60% more likely to double the ROI of their AI projects. Paradoxically, among those reporting the least investment in trustworthy AI systems, GenAI (e.g., ChatGPT) was viewed as 200% more trustworthy than traditional AI (e.g., machine learning), despite traditional AI being the most established, reliable and explainable form of AI.
The research revealed that organisations in Australia and New Zealand are accelerating their AI adoption and data readiness, driven by investments in AI, cloud computing, and digital infrastructure. Instead of basic AI experimentation, organisations are prioritising sophisticated, data-driven strategies, leveraging cloud migration, unified AI platforms, and robust data governance.
“It’s encouraging to see Australia and New Zealand advancing so strongly in AI adoption and data readiness, even as challenges like legacy systems, skills shortages, and cybersecurity risks persist,” said Craig Jennings, Vice President & Managing Director of SAS Australia and New Zealand. “Through strategic investments in AI governance, infrastructure modernisation, and workforce capability, we’re well positioned to continue capitalising on this momentum and driving sustainable, responsible innovation at scale.”
The Asia Pacific regional report is available from SAS.
The research draws on a global survey of 2,375 respondents conducted across North America, Latin America, Europe, the Middle East and Africa, and Asia Pacific. Participants included a balanced mix of IT professionals and line-of-business leaders, offering perspectives from both technology and business functions.
Emerging AI technologies evoke most trust
Overall, the study found that the most trusted AI deployments were emerging technologies, like GenAI and agentic AI, rather than more established forms of AI. Results for Australia and New Zealand are consistent with the global findings, with organisations in the region showing relatively high trust in GenAI (43%). Globally, almost half of respondents (48%) reported “complete trust” in GenAI, while a third (33%) said the same for agentic AI. The least trusted form of AI is traditional AI: fewer than one in five respondents (18%) indicated complete trust.
Even as they reported high trust in GenAI and agentic AI, survey respondents expressed concerns, including data privacy (62%), transparency and explainability (57%), and ethical use (56%).
Meanwhile, quantum AI is picking up confidence quickly, even though the technology to execute most use cases has yet to be fully realised. Almost a third of global decision makers say they are familiar with quantum AI, and 26% report complete trust in the technology, despite real-world applications still being in the early stages.
Lagging AI guardrails weaken AI impact ... and ROI
The study showed a rapid rise in AI usage – particularly GenAI, which has quickly eclipsed traditional AI in both visibility and application (81% vs. 66%). Locally, Australia and New Zealand have emerged as leaders in adopting GenAI, with 92% of organisations using it – more than 10 percentage points above the global average. This momentum has also raised new risks and ethical concerns.
Across all regions, IDC researchers identified a misalignment in how much organisations trust AI versus how trustworthy the technology truly is. Per the study, while nearly 8 in 10 (78%) organisations claim to fully trust AI, only 40% have invested to make systems demonstrably trustworthy through AI governance, explainability and ethical safeguards.
The research also showed that trustworthy AI measures are given low priority when organisations operationalise AI projects. Among respondents’ top three organisational priorities, only 2% selected developing an AI governance framework, and less than 10% reported developing a responsible AI policy. Deprioritising trustworthy AI measures in this way may prevent these organisations from fully realising the value of their AI investments down the road.
In Australia and New Zealand, however, organisations have demonstrated that even with rapid adoption of GenAI, trust in AI and trustworthy AI can be aligned. Organisations are making significant investments in trustworthy AI, positioning the region as one of a small group of markets that are not just adopting AI but doing so responsibly. These investments are driving substantial business impact, with Australia and New Zealand ranked among the top performers globally, achieving a score of 3.53 out of 5 on the Impact Index.
Researchers divided survey respondents into trustworthy AI leaders and trustworthy AI followers. Leaders invested the most in practices, technologies and governance frameworks to make their AI systems trustworthy – and appear to be reaping rewards. Those same trustworthy AI leaders were 1.6 times more likely to report double or greater ROI on their AI projects.
Lack of strong data foundations and governance stalls AI
As AI systems become more autonomous and deeply integrated into critical processes, data foundations also become more important. The quality, diversity and governance of data directly influence AI outcomes, making smart data strategies essential to realising benefits (e.g., ROI, productivity gains) and mitigating risks.
The study identified three major hurdles preventing success with AI implementations: weak data infrastructure, poor governance and a lack of AI skills. Nearly half (49%) of organisations cite decentralised data foundations or unoptimised cloud data environments as a major barrier. This top concern was followed by a lack of sufficient data governance processes (44%) and a shortage of skilled specialists within the organisation (41%).
Australia and New Zealand’s success in AI is rooted in balanced operational priorities, with organisations’ key priorities including upskilling and training (55%), building a data science and AI team (51%), and developing data architecture for enterprise AI (45%). Data is a dynamic resource, continuously growing with new enterprise applications and accelerated by the rapid adoption of AI. ANZ’s advanced data and AI readiness is therefore driving a sustained focus on data excellence and supporting the region’s success in AI adoption.
“Our research shows a contradiction: that forms of AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy,” said Kathy Lange, Research Director of the AI and Automation Practice at IDC. “As AI providers, professionals and personal users, we must ask: GenAI is trusted, but is it always trustworthy? And are leaders applying the necessary guardrails and AI governance practices to this emerging technology?”
About SAS
SAS is a global leader in data and AI. With SAS software and industry-specific solutions, organizations transform data into trusted decisions. SAS gives you THE POWER TO KNOW®.
Editorial contacts:
- SAS Australia and New Zealand
Nicholas Quirke
+61 2 9428 0519