What People Are Saying About
The Data and AI Impact Report
“Change is the only constant, and AI is evolving faster than ever, but speed without trust is a risk we can’t afford. This report reveals a striking trend: Industries like banking and insurance are seeing both high returns and high volatility – a clear sign of the trust gap. As a passionate leader in trustworthy AI, I see these findings reflected in real business conversations every day. The path forward is clear: Trust and data governance must drive the race.”
Preeti Shivpuri
Partner, National Leader Trustworthy AI and Data Risk, Deloitte
“Trying to scale GenAI on weak data foundations is like building a skyscraper on quicksand. The Data and AI Impact Report by SAS and IDC unpacks important nuances for trust in AI and the impact of AI and offers essential guidance for leaders navigating the fast-moving world of technology.”
Derek Yueh
Research Lead, LinkedIn B2B Institute
“Trust is essential to create impact. This report aligns with our own findings on the importance of improving employee trust in AI adoption, and this research will aid organizations in that process.”
Roy Ikink
Technology Lead, Accenture the Netherlands
“As an organization deeply committed to leveraging data and AI responsibly, we found the Data and AI Impact Report to be a powerful validation of what drives real business value – trust, governance and scalable data foundations. This analysis goes beyond surface-level insights, offering a global and actionable view into how AI maturity correlates with strategic impact. For leaders navigating the shift from experimentation to enterprise transformation, this report is a must-read.”
Ryan Bisharaz
EVP, Revenue & Data Strategy, LAFC and BMO Stadium
Contents
Executive Summary
A Message from the Analyst Team
Methodology
Introduction
The Global State of Data and AI
Core Metrics of this Research
The Trust Dilemma
Global - Key Findings
The State of Data and AI Across Industries
The Trust Dilemma Across Industries
Global Life Sciences Overview
Global Insurance Overview
Global Banking Overview
Global Government Overview
About IDC and SAS
Executive Summary
In just the last two years, generative AI has already eclipsed the use of traditional AI. As the market rapidly advances to agentic AI, the impact on decision making will be pervasive, often concealed behind automation and integration.
For the good of society, businesses and employees: Trust in AI is imperative.
To achieve trust, the AI industry must increase the success rate of implementations, humans must critically review AI results, and leaders must empower the workforce with AI.
We set out to tackle these important topics and other questions in a global survey around the use, impact and trustworthiness of data and AI.
In these pages, you’ll learn:
We tend to overly trust more humanlike technology.
In spite of evidence that GenAI can be error-prone, organizations have more faith in this technology than in other types of AI, including machine learning.
Tangible ROI increases with trust.
To realize greater value from AI initiatives, the survey found businesses must focus on governance, explainability and ethical safeguards.
Businesses must move beyond cost cutting.
Saving money is often a top goal for AI initiatives, but the study showed it delivers the lowest ROI of all AI goals. Organizations with more strategic AI initiatives significantly expand market share and improve customer experiences.
Agentic AI requires groundwork.
The study revealed that agentic AI progress will likely stall when faced with non-optimized cloud data environments, poor data governance, or talent shortages.
Quantum AI is quickly becoming a reality.
Quantum applications are currently being explored in logistics, finance, cybersecurity, life sciences, climate modeling, and materials science. Although this technology is in an experimental phase, survey respondents shared that the excitement for its potential is very real.
In addition to these findings, you will also learn about the Trustworthy AI Index and the Impact Index – two new measurements we are introducing to compare the use and impact of AI across regions and industries.
For AI to reach its next growth phase, it must be grounded in delivering tangible ROI to businesses.
This requires everyone to overcome the trust dilemma.
A Message from the Analyst Team
Chris Marshall
Vice President, Data, Analytics, AI,
Sustainability, and Industry Research, IDC
From an academic perspective, the trust dilemma represents one of the most consequential barriers to realizing AI’s full potential. Our research shows that while 78% of organizations claim to fully trust AI, only 40% have invested to make systems demonstrably trustworthy through governance, explainability and ethical safeguards. This misalignment leaves much of AI’s potential untapped, with ROI lower where there is a lack of trustworthiness. Conceptually, the trust dilemma illustrates the difference between perception and practice: the trust in AI’s promise versus the organizational capacity to ensure its reliability. Solving this dilemma is not optional; it is the prerequisite for sustainable impact.
Neil Ward-Dutton
VP AI, Automation,
Data & Analytics Europe, IDC
From a global perspective, but grounded in Europe’s regulatory maturity, it is clear that AI’s success depends less on hype and more on the strength of its underlying data foundations. Our survey shows that weak cloud environments, siloed data and limited governance are consistently holding back the adoption of all types of AI. The most advanced use cases, ranging from fraud detection in banking to personalized health care, only succeed when powered by high-quality, well-governed data. Stronger infrastructure and compliance frameworks ease the trust dilemma, but globally, many organizations must still invest in data strategy before scaling transformative AI.
Dave Schubmehl
Research Vice President,
AI & Automation, IDC
What stands out in this research is how rapidly the center of gravity has shifted from traditional machine learning toward generative and agentic AI. Organizations are no longer experimenting on the edges; they are embedding these technologies into workflows that span customer service, coding and decision support. Yet, the real differentiator is not just adoption but integration: the ability to unify structured and unstructured information, apply governance and embed explainability into automated processes. Companies that align trustworthy AI practices with their automation strategies are seeing the clearest improvements in efficiency and workflow impact, while those that don’t risk amplifying inefficiencies at scale.
Kathy Lange
Research Director,
AI Software, IDC
What this research makes clear is that the conversation about AI trust and impact cannot be separated from the AI life cycle itself. From data preparation and labeling to model training, deployment and monitoring, every stage introduces opportunities and risks that directly affect outcomes. Too often, organizations focus on adoption metrics without investing in life cycle automation, governance and continuous oversight – leaving them exposed to bias, drift or compliance failures. The data shows that those building maturity across the full AI pipeline not only solve the trust dilemma, but also realize greater business value. Trustworthy AI is not a final checkpoint; it is embedded in every phase of the life cycle.
Methodology
This research draws on a global survey of 2,375 respondents conducted across North America, Latin America, Europe, the Middle East and Africa, and Asia Pacific. Participants included a balanced mix of IT professionals and line-of-business leaders, offering perspectives from both technology and business functions. The sample covered a wide range of industries, with a particular focus on banking, insurance, life sciences and government, while also including responses from sectors such as manufacturing, retail and telecommunications.
The questionnaire was designed to explore how organizations are approaching AI adoption and how they are embedding trust into their AI initiatives. It examined a broad set of dimensions, including business value, use cases, governance frameworks, deployment practices and data foundations. AI maturity was assessed across a spectrum, from early-stage efforts to advanced implementations, allowing for comparative analysis.
The first graphic on this page outlines the key research questions, showing how AI maturity was evaluated across strategic, technical and organizational layers. The second graphic presents the research framework and shows how the questionnaire was structured around five core themes: organizational context, AI adoption, enablers of trustworthy AI, enterprise trust in AI and industry-specific applications. Together, these elements provided the foundation for the findings presented in this report.
Introduction
Once the domain of rule-based systems and neural networks, the AI market now finds itself energized and redefined by the advent of generative AI. This wave of innovation has not only accelerated the adoption of AI (65% currently using AI, 32% planning to in the next 12 months) but has fundamentally shifted the way businesses and individuals perceive, trust and harness AI capabilities.
Generative AI, with its ability to automate knowledge work and tackle complex tasks, has quickly eclipsed traditional AI in both visibility and application (81% vs. 66%). In just a few short years, GenAI has moved from an emerging curiosity to a mainstream essential.
And, of course, the AI landscape continues to evolve beyond GenAI.
Agentic AI (52% current adoption), which can act autonomously and make decisions in dynamic environments, is poised to further expand the boundaries of automation and intelligence. Likewise, quantum AI promises to solve problems previously considered intractable due to computational limitations. While these technologies are still emerging, their potential has captured the imagination of decision makers eager to experiment and innovate (61% seek greater process efficiency, only 32% seek cost savings).
This moment brings with it new requirements and responsibilities. As AI systems become more autonomous and deeply integrated into critical processes, data foundations (16% siloed or ad hoc) also become more important. The quality, diversity and governance of data directly influence AI outcomes, making smart data strategies essential to realizing benefits and mitigating risks.
Central to this new landscape is the question of trust. Two elements are at play:
- The degree to which users trust AI, shaped by factors such as training data, usage patterns and observed results (78% say they have complete trust in AI).
- The inherent trustworthiness of the technology itself, including capabilities for explainability and transparency, along with strong governance practices (only 40% show advanced or higher levels of AI trustworthiness).
As the AI market matures, organizations must address both dimensions of trust to drive adoption and unlock the full transformative potential of artificial intelligence.
This report goes beyond tracking change: it introduces the key metrics that define AI’s progress, uncovers the forces reshaping the market and highlights how trust is emerging as the foundation for impact. By examining enabling technologies, implementation strategies and evolving standards of responsibility, it provides a guide not just to where AI stands today, but to where it must go to deliver lasting value.
“AI should give clear justifications for its choices, because ethical and long-term AI adoption is fueled by trust.”
Survey respondent, asked about trustworthy AI and why it matters
The Global State of Data and AI
Current status of AI
Disconnected. AI initiatives are tactical and disconnected from organizational strategy.
Functional. AI initiatives are initiated at the function or line-of-business (LOB) level, with some connection to organizational strategy.
Short-term focus. AI initiatives are organization-oriented but typically have a short-term focus.
Integrated. Integrated, continuous organization-wide AI innovation is in place with operations and customer/service experiences.
Transformative. There is a longer-term investment plan in place, and the organizational strategy is to use governed AI to transform markets and customers by creating new business models and product/service experiences.
Current status of Data Infrastructure
Ad hoc. Data architecture is unstructured and lacks formal processes or governance. Data is siloed and inconsistently managed, and decisions are reactive.
Siloed. Basic frameworks and processes begin to emerge, but they remain fragmented with gaps in consistency and governance.
Standardized. Clear governance procedures, standards and operating models are established, but compliance is incomplete.
Managed. Data architecture processes are integrated across the organization, with iterative updates reflecting business needs.
Optimized. Data architecture is fully optimized and continuously improved using metrics like KPIs.
Core Metrics of this Research
To understand how organizations approach and benefit from AI, this report introduces three core metrics. The Trustworthy AI Index captures the strength of practices that ensure AI systems are built responsibly and deserving of trust. The Impact Index measures the tangible business value organizations derive from AI investments. Finally, the concept of the trust dilemma highlights where trust and trustworthiness fall out of alignment, exposing either underutilized potential or elevated risk. Together, these measures provide a comprehensive view of how AI is governed, how it delivers impact and where organizations must act to align trust with outcomes.
The Trustworthy AI Index
The Trustworthy AI Index measures the extent to which organizations have invested in practices, technologies and governance frameworks that make AI systems reliable, ethical, transparent and worthy of trust. This includes data governance, responsible AI, compliance, explainability and risk management.
The Impact Index
The Impact Index is a quantitative measure that captures the realized business value and outcomes from AI investments. It aggregates factors such as productivity, innovation, customer experience, operational efficiency and financial returns. A higher score indicates greater tangible benefits from AI.
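To illustrate how a composite measure of this kind is typically constructed (the report does not publish its formulas, so the factor names and equal weights below are hypothetical assumptions, not IDC’s actual methodology), an impact-style index can be sketched as a weighted average of normalized factor scores:

```python
# Hypothetical sketch of a composite index such as the Impact Index.
# Factor names come from the report's description of the index; the
# scores, weights and aggregation method are illustrative assumptions.

def composite_index(scores, weights):
    """Weighted average of factor scores, each on a 0-100 scale.

    scores  -- dict mapping factor name -> raw score (0-100)
    weights -- dict mapping factor name -> relative weight
    """
    total_weight = sum(weights[f] for f in scores)
    return sum(scores[f] * weights[f] for f in scores) / total_weight

# Example: the five value dimensions named in the report's description
example_scores = {
    "productivity": 70,
    "innovation": 55,
    "customer_experience": 80,
    "operational_efficiency": 65,
    "financial_returns": 50,
}
equal_weights = {f: 1 for f in example_scores}
print(round(composite_index(example_scores, equal_weights), 1))  # 64.0
```

A higher composite score would correspond to greater tangible benefits from AI, mirroring how the Impact Index is described above.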
The Interplay of Trustworthiness and Impact
The research shows how countries cluster differently across trustworthiness and impact. Ireland and Australia/New Zealand combine high trustworthiness with strong impact, demonstrating the value of governance and responsible AI. Others achieve high impact but with weaker foundations, gaining short-term benefits while risking long-term setbacks. Meanwhile, some invest in trustworthiness but fail to translate it into impact, underutilizing their capabilities. These patterns highlight the need to align trust in AI with actual trustworthiness to realize sustainable benefits.
The Trust Dilemma
The chart below plots the relationship between the perceived trust in AI systems and their actual trustworthiness, illustrating the “trust dilemma.” This misalignment, evident across all regions, represents a critical barrier to effective AI adoption. Most organizations experience this misalignment, with relatively few achieving the ideal balance. Two risks emerge: underutilization of reliable systems when trust remains low and overreliance on unproven systems when confidence is disproportionately high. The challenge is particularly acute for generative AI, where rapid enthusiasm has outpaced governance and data quality.
Global Trust Dilemma
The matrix presents clear categories, but both trust in AI and its trustworthiness lie on a continuum. While the report uses a 2x2 framework, readers should keep in mind that shifts between levels are gradual, not binary.
The trust dilemma is a persistent global issue, affecting nearly half of organizations worldwide (46%). It is slightly more pronounced in Asia Pacific and North America, where 47% of organizations face misalignment between trust in AI and actual system trustworthiness. META (Middle East, Turkey and Africa) and Latin America perform slightly better at 45%, though the dissonance remains significant. Even in Europe, where regulatory oversight is stronger, 46% of organizations still fall into this dilemma. Solving it will require sustained investment in governance frameworks, the development of skilled talent and robust infrastructure to ensure that organizational confidence in AI technologies is firmly grounded in demonstrable reliability and integrity. Ultimately, solving this trust dilemma is essential if organizations are to unlock the full impact and value of AI.
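The 2x2 framing can be made concrete with a small sketch. Assuming each organization has a trust score and a trustworthiness score on a common 0-100 scale, and using a hypothetical cutoff of 50 (the report treats both dimensions as continuous, so any fixed threshold is a simplification of ours, not IDC’s), quadrant assignment looks like:

```python
# Illustrative quadrant assignment for the trust dilemma matrix.
# The 50-point threshold is a hypothetical assumption; the report
# stresses that both dimensions lie on a continuum.

def trust_quadrant(trust, trustworthiness, threshold=50):
    """Map (trust, trustworthiness) scores to a trust-dilemma quadrant."""
    if trust >= threshold and trustworthiness >= threshold:
        return "ideal"             # high trust, demonstrably trustworthy
    if trust >= threshold:
        return "overreliance"      # confidence outpaces safeguards
    if trustworthiness >= threshold:
        return "underutilization"  # reliable systems, but low trust
    return "low alignment"         # neither trust nor trustworthiness

print(trust_quadrant(78, 40))  # overreliance
print(trust_quadrant(30, 70))  # underutilization
```

In this framing, the trust dilemma is the share of organizations landing in the overreliance or underutilization quadrants rather than the ideal one.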
“This research from SAS and IDC illustrates one of our core beliefs at Microsoft that we cannot harness the full potential of AI without trust, and that trust is never assumed or given, it must be earned.”
Sarah Bird
Chief Product Officer, Responsible AI, Microsoft
Global - Key Findings
Trust isn’t optional: The dilemma holding AI back
AI trust issues are more than just a philosophical concern; they’re a financial one.
According to our research, organizations with a significant misalignment – a gap between how much AI is trusted and how responsibly it is implemented – see markedly lower ROI.
In fact, 46% of companies around the world report experiencing the trust dilemma, meaning nearly half of organizations are leaving AI potential untapped. The report makes it clear: The more mature an organization’s AI adoption, the more it invests in responsible AI. Only those with robust governance frameworks are seeing truly transformational results.
Solving this dilemma isn’t just the right thing to do; it’s essential for unlocking AI’s full value.
Companies are responding. Fifty-seven percent plan to moderately increase investments in responsible AI, and 25% expect significant increases. Top areas of investment include hiring or training experts in AI ethics and compliance (56%), building platforms that embed responsible AI principles (54%), and developing technical capabilities for model explainability and bias detection (47%).
With only about a quarter of organizations having a central group dedicated to AI governance, the demand for ethical AI expertise is rising. Just as importantly, responsible AI must become a company-wide cultural value to drive lasting impact.
“In order for organizations to succeed in this new era of generative AI, they need to establish high trust through a solid data foundation with cloud-based scale and governance. AWS and SAS provide innovative solutions to companies that want to set up a data foundation for future AI success.”
Danielle Greshock
Director, Partner Solution Architects - World Wide Technology Partners, Amazon Web Services
Organizations that get the most ROI from AI have more strategic goals
Companies that get the most ROI from AI aren’t just trying to cut costs – they’re reimagining how their organizations work.
This research shows that newer AI adopters (less than two years in) tend to focus on personal productivity, with 57% listing it as their top priority. But in organizations with over eight years of AI experience, the focus shifts to process efficiency (64%) and decision making (60%). These mature users are the only ones who consistently prioritize decision making as a key AI goal.
The data also reveals that cost reduction, while common among newer adopters, delivers the lowest ROI. In fact, saving costs ranks as the third-highest priority for AI newcomers, but drops to seventh for mature organizations. Companies that aim to improve customer experience, expand market share and boost business resilience report significantly higher returns. This shows that focusing on using AI to enhance processes and deliver better experiences across the board is key to unlocking measurable value.
From GenAI to agentic AI: The next leap requires a stronger foundation
Agentic AI is poised to revolutionize enterprise operations, but most organizations aren’t ready for it. Like GenAI, agentic AI is being held back by weak data infrastructure, poor governance and a lack of AI skills.
To scale value from these technologies, companies must build strong data foundations that support innovation. Right now, 49% of organizations cite nonoptimized cloud data environments as a major barrier, followed by insufficient data governance (44%) and a shortage of skilled specialists (41%). For more experienced AI organizations, the challenges shift toward building organizational AI architecture (57%), running effective data and AI teams (45%) and supporting training and reskilling (42%).
Since 2024, data centralization has surged to become the number one challenge in AI implementation. Concerns around data privacy, explainability and ethical use are also top of mind for organizations deploying agentic AI.
With these limits in mind, agentic AI adoption will be slower than GenAI because it requires organization-wide process redesign and scales one agent at a time. As companies progress in AI maturity, they’ll face new hurdles like API integration, agent orchestration and hybrid cloud data management. Executives focused on process efficiency must recognize that a solid data foundation is essential for driving impact with agentic AI.
Quantum AI is gaining confidence faster than capabilities
Quantum AI is gaining trust faster than it is gaining understanding. According to our findings, 30% of global decision makers say they’re familiar with quantum AI, and 25% say they trust it, despite real-world applications still being rare and mostly confined to research labs. Global leadership in this space is concentrated in the US, Europe and select Asian innovation hubs, but the technology remains in its early stages, defined by experimental hardware and a handful of promising research-led use cases.
This research reveals that while quantum AI is on the threshold of real-world impact, its adoption is largely aspirational. The fusion of quantum computing and AI holds transformative potential in fields like logistics, finance, cybersecurity and climate modeling. To prepare, organizations should explore hybrid quantum-classical architectures, where quantum processors work alongside traditional AI systems to accelerate complex model training and inference. As advancements in quantum hardware and error correction continue, strategic attention is essential – even if full-scale deployment is still a future goal.
“Trust is the cornerstone of successful AI adoption. Building transparent, secure, reliable and ethical AI systems ensures that organizations can confidently leverage AI to drive innovation and achieve transformative outcomes. By prioritizing responsible AI practices and robust governance frameworks, businesses worldwide can bridge the trust gap and unlock the full potential of AI.”
Chris Tobias
General Manager, Intel Americas Technology Leadership & Platform ISVs
Low trustworthiness: GenAI is trusted 200% more than machine learning
Despite being the newest and trendiest form of AI, generative AI is the most trusted – by a wide margin. Survey respondents scoring low on trustworthiness trust GenAI 200% (3x) more than traditional machine learning, even though machine learning is mathematically explainable and far more transparent (blue outlined boxes). This paradox reveals a humanlike bias: People tend to trust AI that feels intuitive and conversational, even when they know it may not always be accurate. The more “human” an AI feels, the more we trust it, regardless of its actual reliability.
At the same time, trust in GenAI is layered with concern. While users report high levels of trust, they also express significant worries about data privacy (62%), transparency and explainability (57%) and ethical use (56%). This trust is often driven by perceived usefulness and interactivity – GenAI delivers fast, tailored responses that feel helpful, especially in low-stakes situations. But interactivity doesn’t equal control, and as GenAI becomes more embedded in daily workflows, organizations must be cautious not to mistake familiarity for reliability.
The State of Data and AI Across Industries
The chart shows the relative use of different AI technologies across industries worldwide. Generative AI and traditional AI are the most widely applied, while agentic AI sees more selective adoption. Quantum AI remains the least deployed, reflecting its immaturity, limited availability and specialized use cases.
Banking, insurance, life sciences and government stand out as focus industries because they combine high AI adoption with disruption, regulatory scrutiny and broad societal impact. In banking and insurance, AI reshapes fraud detection, risk management and personalization, where sensitive data and strict compliance make governance essential. Life sciences push AI into drug discovery, clinical trials and diagnostics, emphasizing transparency and ethics. Government use of AI is accelerating in services and policymaking but faces intense scrutiny on fairness and accountability. Together, these sectors illustrate AI’s most dynamic opportunities and risks, making them central to questions of trust, governance and impact.
In non-focus industries, AI adoption is more incremental. Manufacturing applies AI mainly to predictive maintenance, quality control and supply chains, improving efficiency but without deep transformation. Retail uses AI for personalization, pricing and inventory management, creating value but facing fewer governance challenges. Health care overlaps with life sciences, with much of AI’s impact tied to diagnostics and treatment innovations already captured there. Education adoption remains modest and uneven, focused on personalized learning and administrative tasks, but with limited scale or urgency compared to priority sectors.
“It matters because our business decisions depend on data-driven insights.”
AI decision maker,
asked about trustworthy AI and why it matters
In terms of AI maturity, life sciences organizations stand out. They lead in the proportion of respondents identifying as being in the “transformative” stage of AI use, with 14.6% of life sciences organizations at this level. In contrast, only 8-10% of organizations in banking, insurance and government fall into this category. Banking in particular lags, with over a quarter of respondents placing themselves in the “functional” stage of maturity.
Data infrastructure maturity has a similar distribution to AI maturity: Life sciences again lead, with 13.9% of organizations reporting they are in the “optimized” stage of maturity. Meanwhile, banking trails, with 19.4% of organizations identifying as being in the “siloed” stage of maturity.
Across the board, the data shows a strong correlation between AI maturity and data infrastructure maturity when analyzed by industry.
The Trust Dilemma Across Industries
As artificial intelligence continues to reshape industries, the level of trust in AI and the investment in trustworthy AI practices vary significantly across sectors. From banking and insurance to government and life sciences, organizations are navigating the complex balance between innovation and responsible adoption.
Banking
Banking is often at the forefront of AI adoption, especially in regions like North America and META. Banks invest heavily in trustworthy AI, motivated by regulatory scrutiny and risk sensitivity. This commitment is reflected in banking having the largest share (11%) of organizations in the ideal quadrant, those combining high trust with strong trustworthy AI practices. However, a significant trust dilemma remains: 47% of banks globally fall into the underutilization and overreliance quadrants, with a slightly higher tilt toward underutilization than our other focus industries. This cautious stance means that, despite substantial investments in trustworthy AI, banks still face foundational challenges such as data governance and talent shortages, which can lead to missed opportunities.
Government
Government organizations worldwide are rapidly adopting AI, especially GenAI, but face a unique trust dilemma. Forty-six percent of government organizations are in the underutilization and overreliance quadrants, with a surprisingly high percentage over-relying on untrustworthy systems. This means that many government entities place strong confidence in AI systems that may not yet be fully trustworthy. While some governments (notably in Europe and Latin America) are making progress in embedding responsible AI practices, most still face significant gaps in data centralization, governance and talent, which hinder their ability to fully realize AI’s potential.
Insurance
The insurance sector shows a moderate trust dilemma globally, with 43% in the underutilization and overreliance quadrants. Insurers are generally cautious, prioritizing data governance and risk management, which reduces misalignment compared to other industries. However, many insurance organizations remain in early or functional stages of AI maturity, and only a minority have fully aligned their trust in AI with investments in trustworthy AI practices. This cautious approach slows innovation but helps avoid the risks of overreliance on unproven AI systems.
Life Sciences
Life sciences organizations face the largest overall trust dilemma among the four focus industries, with 48% of organizations falling into the underutilization and overreliance quadrants. This means that while many life sciences organizations have advanced AI and data infrastructure maturity, a significant portion either underutilizes reliable AI systems due to low trust or over-relies on systems lacking trustworthy foundations. Notably, life sciences organizations also have a high percentage in the overreliance quadrant, indicating that enthusiasm for AI adoption sometimes outpaces the implementation of robust governance and ethical safeguards. As the industry moves toward more autonomous and agentic AI, aligning perceived trust with actual trustworthiness will be increasingly critical for realizing sustainable impact.
Solving the trust dilemma
While each sector faces unique challenges, a common theme emerges: Trust in AI must be matched by tangible investments in governance, talent and infrastructure. Solving the trust dilemma is not just a matter of belief in AI’s potential – it requires deliberate action to ensure responsible and sustainable adoption. To solve the trust dilemma, organizations must focus on strengthening data and model governance, embedding ethical and transparent practices into AI life cycles and developing the skills and culture needed to align human trust with technological trustworthiness.
Global Life Sciences Overview
The AI and data infrastructure maturity leader
Among the four industries examined in this study, life sciences stands out for its advanced maturity in both AI and data infrastructure. A significantly higher share of life sciences organizations identify as being in the “transformative” stage of AI maturity compared to banking, insurance and government. This leadership position also extends to data infrastructure maturity.
Trustworthy AI efforts today, but comparatively modest trustworthy AI investment plans for agentic AI
Overall trustworthy AI: Transformational status
Trustworthy AI investment for agentic AI: Significantly increasing
Life sciences organizations outperform only the government sector in their current efforts to deliver trustworthy AI, with 20% operating at the highest level of our Trustworthy AI Index – slightly above the global average of 19.8%. However, they are the least likely among the four industries to anticipate a significant increase in future investment toward trustworthy AI initiatives that support agentic AI, falling behind both our other focus industries and the global average.
Modest investment plans, focused on driving efficiency with investments in architecture and skills acquisition
Life sciences organizations generally have a long-standing history of working with AI. Given the nature of the industry, they tend to approach AI implementation with a strong emphasis on responsibility and ethical use to ensure safety and meaningful outcomes for all patients.
Globally, life sciences organizations report facing key AI implementation challenges, particularly around data foundations, a lack of specialized AI technology personnel and data governance. In response, their top priorities include building appropriate data and technology architectures and investing in the development of specialized teams to address these gaps.
Among our four focus industries, life sciences organizations express the most modest expectations for future AI investment. Fewer than 8% expect investments in AI to grow by more than 20% over the coming year, and over a third (34.4%) expect only a limited change in AI investment in that period.
When it comes to AI’s business value, nearly two-thirds of life sciences leaders prioritize process efficiency and effectiveness as the primary outcome of AI adoption.
Global Insurance Overview
AI and data infrastructure maturity lag
Among the four industries analyzed in this study, insurance presents the most modest profile in terms of both AI and data infrastructure maturity. Insurance organizations report the lowest share of respondents identifying as being in the “transformative” stage of AI maturity compared to government, life sciences and banking. This trend is mirrored in data infrastructure maturity as well.
Significant trustworthy AI efforts today, but comparatively modest trustworthy AI investment plans for agentic AI
Insurance organizations outperform life sciences and government sectors in their current efforts to deliver trustworthy AI, with 20.1% operating at the highest level of our Trustworthy AI Index – slightly above the global average of 19.8%. Only banking ranks higher in this regard. When it comes to future investment plans for trustworthy AI initiatives supporting agentic AI, insurance organizations are roughly aligned with the global average, again trailing only banking respondents.
Modest investment plans, focused on driving efficiency with investments in skills acquisition leading
Globally, insurers express modest expectations for AI investment growth. Fewer than 8% anticipate a significant increase in investment over the next year, while more than 30% expect only limited changes in investment. Nearly two-thirds of insurance leaders identify process efficiency and effectiveness as the primary way AI delivers business value.
Like the broader respondent base, insurers cite data foundations, data governance and limited access to specialized AI technology personnel as major implementation challenges. Notably, insurers are the most likely among the four industries to identify data governance as a top concern.
In response to these challenges, insurers are prioritizing skill development across both general employee populations and technical specialist roles focused on data and AI. Compared to other industries, insurers are slightly less likely to emphasize architectural solutions as key implementation priorities to address data governance issues going forward. Additionally, about one-third of insurance organizations are prioritizing the creation of AI strategies and roadmaps.
Global Banking Overview
Modest AI and data infrastructure maturity
Banking organizations show modest levels of maturity in both AI and data infrastructure. The proportion of banking respondents identifying as being in the “transformative” stage of AI maturity is roughly on par with insurance and government organizations, with only life sciences reporting higher levels.
Significant trustworthy AI efforts today, together with strong trustworthy AI investment plans for agentic AI
Banking organizations lead all other focus industries in their current efforts to deliver trustworthy AI, with 23.4% operating at the highest level of our Trustworthy AI Index – well above the global average of 19.8%. They are also significantly more likely than peers in life sciences, insurance and government to anticipate increased future investment in trustworthy AI initiatives that support agentic AI.
Aggressive investment plans, focused first on driving innovation with investments in skills acquisition and architecture
Globally, banking organizations report the most aggressive investment outlook for AI among the four industries – 11.5% expect AI investment to grow by more than 20%, and nearly 60% anticipate growth between 4% and 20%. Unlike other sectors, banking leaders prioritize product and service innovation over process efficiency and effectiveness as the primary source of AI-driven business value.
Banking organizations identify key challenges in AI implementation, including a lack of data governance, a lack of specialized AI technology personnel and insufficient data centralization and optimization. Notably, however, none of these issues was cited by half or more of banking respondents.
In response to these challenges, banking organizations are prioritizing the development of robust technology architectures and workforce skills. Notably, they also rank the development and tuning of AI models among their top five implementation priorities.
Global Government Overview
Strong AI maturity, but behind in data infrastructure maturity
Government organizations report notably strong AI maturity on a global scale. Nearly half of respondents place themselves in either the “transformative” or “integrated” stages of maturity – significantly ahead of banking and insurance organizations. However, the picture shifts when it comes to data infrastructure maturity: Government organizations trail life sciences by a wide margin, though their responses are generally consistent with those from banking and insurance.
Lagging trustworthy AI efforts today, together with comparatively weak trustworthy AI investment plans for agentic AI
In terms of delivering trustworthy AI, government organizations lag behind the other three focus industries. Only 15.3% operate at the highest level of our Trustworthy AI Index, compared to the global average of 19.8%. They also fall short of banking and insurance organizations in their expectations for future investment in trustworthy AI initiatives.
Aggressive investment plans, focused first on driving efficiency with investments in skills acquisition and architecture
Like the other industries in this study, government organizations frequently cite challenges related to data foundations, data governance and skills availability. However, they stand out as the only sector where respondents are more likely to highlight skills gaps among general employee populations rather than among specialized technical teams.
Reflecting these challenges, government organizations are prioritizing investments in technology architecture alongside workforce skill development. Nearly one-third of respondents also emphasize the importance of creating an AI strategy and roadmap.
Government organizations express strong expectations for AI investment growth in the coming year – 12.6% anticipate increases of more than 20%, and nearly half expect growth between 4% and 20%. Like leaders in life sciences and insurance, government respondents view process efficiency and effectiveness as the primary lens for realizing AI’s business value. Additionally, personal productivity is cited by over 60%, the highest rate among all four industries.
For additional insights on AI, trust and governance
About IDC
International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets.
With more than 1,300 analysts worldwide, IDC offers global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. IDC’s analysis and insight help IT professionals, business executives, and the investment community to make fact-based technology decisions and to achieve their key business objectives.
Founded in 1964, IDC is a wholly owned subsidiary of International Data Group (IDG, Inc.), the world’s leading tech media, data, and marketing services company.
About SAS
SAS is a global leader in data and AI. With SAS software and industry-specific solutions, organizations transform data into trusted decisions. SAS gives you THE POWER TO KNOW®.
