The SAS Approach to the EU AI Act 

 

Introduction

The EU Artificial Intelligence Act (AI Act) is a regulation that governs the use and deployment of AI systems. The AI Act categorizes AI systems based on their risk levels and imposes compliance requirements. This document outlines the responsibilities of both SAS and our customers in relation to the AI Act.

The EU AI Act entered into force on August 1, 2024, but its requirements apply in phases: provisions on prohibited AI practices and general-purpose AI models apply from 2025, and those on high-risk AI systems apply from 2026 or 2027, depending on the type of system.

 

Understanding the EU AI Act

The EU AI Act defines AI systems and places them into four categories based on risk: 

  1. the Act prohibits those classified as “unacceptable risk” (e.g., AI systems that deploy purposefully manipulative or deceptive techniques);
  2. the Act regulates “high-risk” AI systems (e.g., AI systems intended to be used as a safety component of a product);  
  3. the Act regulates “limited risk” AI systems (e.g., those with transparency risks); and
  4. the Act does not regulate “minimal risk” AI systems (e.g., AI-enabled video games). 

The regulatory requirements are stricter for high-risk AI systems, which require greater oversight and compliance with specific provisions of the Act. To be classified as high-risk, an AI system must be intended for certain purposes, such as biometric identification or the management of critical infrastructure. While many AI systems will fall into the minimal-risk or limited-risk categories, an AI system built using SAS platforms and tools that involves such high-risk functions may be classified as a high-risk AI system, triggering additional regulatory requirements.

The European Commission has been publishing guidelines to facilitate the interpretation and application of the law.

The AI Pact

SAS has signed the European Commission’s AI Pact, joining more than 130 companies accelerating the adoption of the European Union AI Act’s principles. The pledge encourages companies to advance AI governance, identify high-risk AI systems and promote AI literacy, which are all requirements of the Act.

AI Pact signatories will provide updates on their progress and can test and share their solutions with the wider community to share best practices and build trust in AI systems. This collaborative effort is reflective of SAS’ approach to AI.

 

AI Oversight at SAS

AI Oversight Committee

The AI Oversight Committee (AIOC) at SAS plays a crucial role in understanding and overseeing the company's use of AI as a deployer and in developing AI as a provider. The AIOC is composed of a diverse group of experienced SAS staff members from various departments and geographical regions, bringing a wide range of perspectives and expertise to the table. The AIOC meets regularly to review AI initiatives, assess compliance efforts, and provide recommendations for improvement. The AIOC reports directly to the Executive Leadership Team (ELT), which allows for a robust oversight mechanism. The guidance from the AIOC has been important in planning SAS’ approach to the EU AI Act as well as influencing key decisions and internal policies regarding SAS’ use of AI. 

SAS Data Ethics Practice

Our Data Ethics Practice (DEP) is a guiding hand for our responsible innovation efforts to ensure our products, people, and processes globally reflect our key principles:

  • human-centricity,
  • inclusivity,
  • accountability,
  • transparency,
  • privacy and security, and
  • robustness.

The DEP works internally and externally to:

  • Drive R&D development of important features related to responsible innovation and AI governance.
  • Guide SAS and its customers in the development and deployment of technologies that reflect these principles.
  • Establish SAS as a leading contributor to the growth of responsible innovation in the fields of AI and analytics.
  • Develop best practices to help SAS quickly respond to regulatory changes.

The SAS DEP also includes the Data for Good Team, which generates thought leadership and showcases the power of SAS technology by putting our principles into practice and addressing pressing global issues through the lens of our responsible innovation methodologies.

 

Prohibited Practices

SAS has conducted a thorough internal assessment to ensure that SAS is not engaging in any of the EU AI Act’s prohibited practices. SAS organized a cross-functional working group, including experienced SAS staff members from various functions and geographies, and reviewed our products and practices. The assessment concluded that SAS does not engage in any of the prohibited practices outlined in the EU AI Act, and the working group identified no planned initiatives at risk of violating these provisions. This conclusion is based on feedback from participants who conducted broad inquiries within their respective departments.

SAS has also established standard screening practices as part of our Product Development Lifecycle (PDLC) to ensure no new SAS offerings violate EU law regarding prohibited practices.

Restricting Customer Use Cases

As a software developer, SAS has not designed any products to perform or enable prohibited practices. To mitigate the potential risk that customers could misuse some of our products for such purposes, our Responsible Use Policy and contract terms prohibit customers from engaging in prohibited practices using SAS offerings. SAS has also implemented employee training to address prohibited practices.

 

AI Literacy

SAS is committed to addressing AI literacy requirements in alignment with the AI Act through a comprehensive and multi-faceted approach. This includes a variety of training programs, continuous learning content, and internal resources aimed at fostering responsible AI practices. The following initiatives are examples of how SAS is ensuring that its employees and customers are well-equipped to meet the AI literacy requirements of the AI Act and contribute to the responsible development and deployment of AI technologies:

  • SAS has included AI literacy content in company-wide compliance training. The content provides foundational knowledge on AI and its ethical implications. This program is designed to ensure that employees are well-versed in the principles of responsible AI development and usage. 
  • SAS requires AI-system-specific training for users of certain Generative AI productivity tools to ensure responsible use in compliance with the SAS Generative AI Policy. 
  • SAS offers a Continuous Learning Journey for customer-facing employees on Responsible Innovation. This series of content covers various aspects of responsible AI, including ethical considerations and best practices for innovation.
  • SAS disseminates information on AI regulation through internal channels, such as articles, newsletters, and videos that provide insights into navigating new AI regulations.
  • SAS provides a range of Responsible AI internal resources, which are accessible to all employees. These resources include guidelines, best practices, and case studies on data ethics and responsible AI. The SAS Data Ethics Practice (DEP) plays a crucial role in overseeing these initiatives by offering support and advice to employees on integrating ethical considerations throughout the lifecycle of AI systems and promoting a culture of responsible AI practices within the organization.
  • SAS makes available to its customers and employees a variety of courses on AI and machine learning. These courses are designed to enhance technical skills and understanding of AI technologies.
  • The SAS course “Responsible Innovation and Trustworthy AI” has been included in the EU AI Office Repository of AI Literacy Practices. This free, publicly available online course is designed for anyone who wants to gain a deeper understanding of the importance of trust and responsibility in AI, analytics, and innovation.
  • Through the SAS DEP, SAS established an AI Ethics Ambassadors program. This program aims to promote ethical AI practices and foster a culture of responsibility within the organization.

 

General Purpose AI Models and the SAS GenAI Approach

The AI Act includes specific provisions for General Purpose AI (GPAI) models, such as large language models (LLMs). Requirements include detailed documentation about the models involved and transparency requirements. A first common approach for providing this documentation stems from the GPAI Model Code of Practice. Moreover, the Act emphasizes the importance of transparency by requiring that the use of GPAI models in products and services be clearly communicated to users. 

The Use of GPAI Models in SAS Offerings

SAS does not create GPAI models like large language models (LLMs) from scratch. 

Where we have decided to offer features that rely on Generative AI, such as SAS Viya Copilot, SAS leverages third-party models from upstream model providers under commercial arrangements or models that are available under permissive licenses (such as open source licenses). 

Depending on the feature and use case, SAS may apply fine-tuning to tailor the performance of models to specific use cases. As explained in Recital 109 of the AI Act and the Commission’s Guidelines for providers of GPAI models, for providers who have fine-tuned a preexisting model, “the obligations for providers of general-purpose AI models should be limited to that modification or fine-tuning, for example by complementing the already existing technical documentation with information on the modifications, including new training data sources, as a means to comply with the value chain obligations provided in this Regulation.” Per those Guidelines, this means that the requirements of the AI Act’s Article 53(1), including documentation, a copyright policy, and a summary of the content used for training, are limited to the modification.

Therefore, where SAS may be a “downstream provider” of a GPAI model under the AI Act based on fine-tuning or other modifications, SAS plans to meet its obligations under the law, including the following:

  • Model Information: SAS documents basic information about the models we leverage, including the model name, input and output modalities, and intended use.
  • Fine-tuning information: When SAS conducts fine-tuning, we document the aspects under our control at SAS and describe what data was used. 
  • Transparency Obligations: We disclose the use of GenAI in features, clearly labeling GenAI-powered features and providing usage instructions and disclosures as needed.
  • Compliance with Acceptable Use Policies (AUPs): We ensure that we adhere to relevant acceptable use policies from upstream model providers and that our legal terms list acceptable use policies relevant to our customers, including those from SAS and from upstream model providers.
  • Copyright Obligations: SAS maintains a copyright policy aligned with the GPAI Code of Practice, requiring that SAS only use appropriately sourced, lawfully obtained data for model creation, including for fine-tuning of Generative AI models conducted by SAS. SAS intends to provide information required by the GPAI Code of Practice. For more information, see https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice.

SAS features that leverage Generative AI may also employ techniques such as prompt engineering, retrieval-augmented generation (RAG), or similar techniques that do not fine-tune or otherwise modify an upstream model. Where SAS plays no role in the development of a model and is not the “provider” of a model, SAS will maintain for transparency purposes simplified sets of information regarding these models, make that information available to our customers to the extent it is available from the upstream provider, and document SAS’ use of the models in our documentation. 

Copyright Policy Summary

The EU AI Act’s GPAI Code of Practice encourages companies to maintain a publicly available summary of the required copyright policy. The policy SAS implemented includes the following key aspects:

  • Teams may only use appropriately sourced data for training AI models and must comply with all applicable laws regarding copyright and related rights.
  • Teams must follow relevant review processes to request use of third-party data, including data scraped from public websites or similar resources.
  • Teams must adhere to contractual obligations regarding Customer data usage.
  • Teams must implement appropriate and proportional practices, where applicable, to mitigate the risk of outputs that infringe copyright.
  • Through SAS’ Responsible Use Policy (https://www.sas.com/en_us/legal/responsible-use.html), SAS legal terms prohibit Customers from using SAS models for copyright-infringing purposes.

Information about how to raise concerns about copyright issues to SAS is available in the “Copyright Complaints” section of SAS’ online Terms and Conditions. The Copyright Agent’s contact information is also included here for ease of reference:

Copyright Agent
SAS Institute Inc.
SAS Campus Drive
Cary, North Carolina 27513
e-mail: legalWeb@sas.com

Customer FAQ on Generative AI models

Does SAS use my prompts or completions to train models?

No, SAS does not train any models using your prompts or completions. 

Do any third-party models leveraged by SAS use my prompts or completions to train models?

No, SAS only chooses third-party foundation models that can be used consistently with SAS’ obligations to you, such as our obligation to protect your confidential information. That means we make sure any third party (such as Microsoft Azure or Amazon Web Services) has appropriate protections for confidential data, and we confirm they do not train their models on your data, prompts, or completions. 

Does SAS treat data that I input into a Generative AI-enabled function as confidential?

Yes, SAS treats your prompts as confidential information and protects them per the terms of your contract with SAS.

Will outputs from Generative AI features always be accurate?

Generative AI models are known to sometimes generate incorrect or otherwise unexpected outputs. Customers are responsible for ensuring any Generative AI output or function is suitable for their use cases and should have humans review their outputs. Consistent with the AI Act's transparency obligations, SAS designs its Generative AI features to ensure the user is aware of the involvement of Generative AI and to ensure the user knows when outputs are created using Generative AI.

What data has SAS used to fine-tune Generative AI models?

To date, SAS has only conducted fine-tuning of Generative AI models using data that SAS owns or that SAS has synthetically generated based on data SAS owns.

Will you make these commitments in your contract?

Yes. These commitments are reflected in our contract terms: https://www.sas.com/generative-ai-terms

 

High-Risk AI Systems

SAS provides platforms that offer algorithms for training machine learning models and conducting statistical analysis. Customers may think of SAS tools as a set of building blocks or a toolbox: SAS provides tools that customers can use to build systems that are meaningful for their business needs.    

In anticipation of the European Commission’s Guidelines on High-Risk AI Systems that are currently being drafted, SAS is committed to complying with the EU AI Act and helping our customers do the same.

How does SAS help customers?

When customers rely on our tools and platforms to build AI systems, we are dedicated to supporting them in their AI governance efforts. Our commitment includes: