SAS® Business Analytics Features

Data access

  • Provides access to data from more than 60 data sources, including relational and nonrelational databases, PC files, Hadoop, Amazon Redshift and data warehouse appliances with a single SAS Business Analytics license.
  • Provides direct, secure data access with native interfaces and integration standards.
  • Supports business decisions with complete, consistent, up-to-date and accurate data.


Seamless, transparent data access

  • Broad access to data through an intuitive interface, regardless of where it’s stored.
  • Support for a wide range of databases and platforms, including big data databases, relational stores, data warehouses, mainframe sources and PC files.
  • Easy integration with popular platforms without detailed knowledge of the database or SQL.
  • Option of working in SAS or SQL with automatic generation of the appropriate SQL statements, passed through to the database for execution.
  • Support for integration standards outside of the dedicated SAS/ACCESS module, including SAS/ACCESS Interface to ODBC, SAS/ACCESS Interface to JDBC and SAS/ACCESS Interface to OLE DB.
  • Ability to execute procedural language, complementing SQL-based statement logic.
  • High level of security with native database security mechanisms.
  • Option to import or export data from a PC file to a SAS data set, as well as the ability to read and write directly to PC files.
  • DBMS metadata can be accurately maintained within the SAS Metadata Repository for metadata reuse.
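The LIBNAME-based access pattern described above can be sketched in SAS code as follows. This is an illustrative example only: the Oracle engine choice, credentials and table names are placeholders, not taken from this document.

```sas
/* Assign a library through a SAS/ACCESS engine (Oracle as an example); */
/* user=, password=, path= and schema= are placeholder values.          */
libname orders oracle user=myuser password=mypass path=orapath schema=sales;

/* The database table can now be read like any SAS data set,            */
/* without writing SQL.                                                 */
proc print data=orders.invoices(obs=10);
run;

/* Copy the table into the WORK library as a SAS data set.              */
data work.invoices;
   set orders.invoices;
run;
```

Once the library is assigned, the same two-level `library.table` name works in DATA steps and procedures alike, which is what makes the access transparent.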

Flexible query language support

  • Seamlessly access data with minimal knowledge of the data or the SQL required to surface it.
  • Take more control by using your own custom SQL statements to modify or maintain automatically generated SQL.
  • Map SAS-specific statements or functions to database-specific statements or functions, and process SQL statements directly inside the database for optimal performance.
  • Enlist a SAS extension to process the appropriate query logic using the SAS or database engine.
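A sketch of the two query styles described above: implicit pass-through, where SAS generates the database SQL from your SAS SQL, and explicit pass-through, where you supply your own SQL text. Connection values, library and table names are placeholders.

```sas
/* Implicit pass-through: write SAS SQL against a library assigned      */
/* with a SAS/ACCESS engine; the engine generates the matching          */
/* database SQL and passes it to the DBMS for execution.                */
proc sql;
   create table work.totals as
   select customer_id, sum(amount) as total
   from orders.invoices            /* "orders" = a SAS/ACCESS library   */
   group by customer_id;
quit;

/* Explicit pass-through: send your own SQL text to the database.       */
proc sql;
   connect to oracle (user=myuser password=mypass path=orapath);
   create table work.totals as
   select * from connection to oracle
      (select customer_id, sum(amount) as total
       from invoices
       group by customer_id);
   disconnect from oracle;
quit;
```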

Performance tuning options

  • Take advantage of a multithreaded read interface, as well as threaded kernel technology and native APIs to Oracle, DB2, Spark and Teradata.
  • Enable federated views through optimized read and write, including buffering, compression, threading, chunking, and sort and join performance.
  • Join processing is automatically pushed into the database.
  • Boost performance with temporary table support.
  • Work effortlessly with seamless interfaces to loaders and utilities without an in-depth understanding of each loader.
  • Use PROC TRANSPOSE pushdown capability for Teradata and Hadoop.
  • Use PROC pushdown capability for Amazon Redshift, Postgres, Microsoft SQL Server and metadata integration.
  • Maintain DBMS metadata within the SAS metadata repository, and reuse data jobs.
  • Use jobs across a variety of SAS solutions, including SAS® Enterprise Guide® and SAS Data Management.
  • Use native storage options, including support for temporary tables, materialized views and partitioned tables.
  • Make use of native database types that translate the source database to the appropriate SAS data type.
  • Increase database performance with processor threads, placing data into a memory buffer between reads.
  • National language support.
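A few of the tuning options above can be expressed as LIBNAME and data set options. The sketch below is illustrative: the values are placeholders, and option availability varies by SAS/ACCESS engine.

```sas
/* Multithreaded read and buffered I/O via engine options.              */
libname orders oracle user=myuser password=mypass path=orapath
        readbuff=5000      /* rows buffered per fetch                   */
        dbsliceparm=all;   /* allow threaded reads where supported      */

/* Loader integration on write: hand bulk inserts to the native loader. */
data orders.invoices_copy(bulkload=yes);
   set work.invoices;
run;
```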

Optimization features for better performance

  • Use pipeline read to increase database read performance by up to 30 percent via a processor thread that reads data from a database and places it into a memory buffer. Pipeline read is available in SAS/ACCESS Interface to DB2 (non-z/OS), SAS/ACCESS Interface to Greenplum, SAS/ACCESS Interface to Oracle and SAS/ACCESS Interface to Teradata.
  • Improve efficiency with native storage options, including temporary tables, materialized views and partitioned tables.
  • Speed processing with native database types, which are automatically translated to the appropriate SAS data type.

SAS/ACCESS® for databases

SAS/ACCESS interfaces to relational databases and database appliances include:

  • Teradata
  • Aster Data
  • Cloudera Impala
  • Datacom
  • Greenplum
  • Netezza
  • Vertica
  • Informix
  • DB2
  • Oracle and Oracle RDB
  • PostgreSQL
  • Microsoft SQL Server
  • MySQL
  • Amazon Redshift
  • JDBC
  • Spark SQL – SAS Viya
  • ODBC
  • OLE DB 

SAS/ACCESS® for mainframes

Supported mainframe sources include:

  • SAS/ACCESS® Interface to ADABAS
  • SAS/ACCESS® Interface to DATACOM/DB
  • SAS/ACCESS® Interface to CA IDMS
  • SAS/ACCESS® Interface to IMS-DL/I

SAS/ACCESS® for NoSQL data platforms

  • SAS/ACCESS® Interface to the PI System

SAS/ACCESS® for distributed file systems

Supported distributed file system sources include:

  • SAS/ACCESS® Interface to Hadoop

SAS/ACCESS® Interface to PC Files 

SAS/ACCESS Interface to PC Files includes access to:

  • DBF
  • DIF (Unix)
  • XLS (Windows)
  • WK1
  • WK3
  • WK4
  • XLSX
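For example, reading and writing an XLSX workbook through SAS/ACCESS Interface to PC Files typically looks like this (paths and the sheet name are placeholders):

```sas
/* Import a worksheet from an Excel workbook into a SAS data set.       */
proc import datafile="C:\data\sales.xlsx"
            out=work.sales
            dbms=xlsx
            replace;
   sheet="Q1";
run;

/* Export the SAS data set back out to a new workbook.                  */
proc export data=work.sales
            outfile="C:\data\sales_out.xlsx"
            dbms=xlsx
            replace;
run;
```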

SAS/ACCESS® for non-relational sources

  • MongoDB
  • Salesforce

Self-service data preparation

  • Provides an interactive, self-service, easy-to-use GUI for profiling, cleansing and blending data.
  • Enables full integration with your analytics pipeline.
  • Provides access to data lineage with network diagrams.
  • Enables you to reuse, schedule and monitor jobs.


Data and metadata access

  • Use any authorized internal source, accessible external data sources and data held in-memory in SAS Viya.
    • View a sample of a table or file loaded in the in-memory engine of SAS Viya, or from data sources registered with SAS/ACCESS, to visualize the data you want to work with.
    • Quickly create connections to and between external data sources.
    • Access physical metadata information like column names, data types, encoding, column count and row count to gain further insight into the data.
  • Data sources and types include:
    • DNFS, HDFS, PATH-based files (CSV, SAS, Excel, delimited).
    • DB2.
    • Hive.
    • Impala.
    • SAS® LASR.
    • ODBC.
    • Oracle.
    • Postgres.
    • Teradata.
    • Feeds from Twitter, YouTube, Facebook, Google Analytics, Google Drive, Esri and local files.
    • SAS® Cloud Analytic Services (CAS).

Data provisioning

  • Load data in parallel from desired data sources into memory simply by selecting them – no need to write code or have experience with an ETL tool. (Data cannot be sent back to the following data sources: Twitter, YouTube, Facebook, Google Analytics, Esri; it can only be sourced from these sites.)
    • Reduce the amount of data being copied by performing row filtering or column filtering before the data is provisioned.
    • Retain big data in situ, and push processing to the source system by including SAS In-Database optional add-ons.

Guided, interactive data preparation

  • Transform, blend, shape, cleanse and standardize data in an interactive, visual environment that guides you through data preparation processes.
  • Easily understand how a transformation affected results, getting visual feedback in near-real-time through the distributed, in-memory processing of SAS Viya.

Column-based transformations

  • Use column-based transformations to standardize, remediate and shape data without any configuration. You can:
    • Change case.
    • Convert column type.
    • Rename.
    • Remove.
    • Split.
    • Trim whitespace.
    • Apply a custom calculation.

Row-based transformations

  • Use row-based transformations to filter and shape data.
  • Create analytics-ready tables using the transpose transformation to prepare the data for analytics and reporting tasks.
  • Create simple or complex filters to remove unnecessary data.

Code-based transformations

  • Write custom code to transform, shape, blend, remediate and standardize data.
  • Write simple expressions to create calculated columns, write advanced code or reuse code snippets for greater transformational flexibility.
  • Import custom code defined by others, sharing best practices and collaborative productivity.

Multiple-input-based transformations

  • Use multiple-input-based transformations to blend and shape data.
  • Blend or shape one or more sets of data together using the guided interface – there’s no requirement to know SQL or SAS. You can:
    • Append data.
    • Join data.
    • Transpose data.
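Each of these multiple-input transformations has a familiar code equivalent in SAS; the following sketch uses placeholder table and column names chosen for illustration.

```sas
/* Append: add the rows of one table to another.                        */
proc append base=work.all_sales data=work.new_sales;
run;

/* Join: combine two tables on a key column.                            */
proc sql;
   create table work.joined as
   select a.customer_id, a.month, a.amount, b.region
   from work.all_sales as a
        inner join work.customers as b
        on a.customer_id = b.customer_id;
quit;

/* Transpose: pivot rows into columns, one column per month value.      */
proc sort data=work.joined;        /* BY processing requires sorted data */
   by customer_id;
run;

proc transpose data=work.joined out=work.wide prefix=month_;
   by customer_id;
   id month;
   var amount;
run;
```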

Data profiling

  • Profile data to generate column-based and table-based basic and advanced profile metrics.
  • Use the table-level profile metrics to uncover data quality issues and get further insight into the data itself.
  • Drill into each column for column-level profile metrics and to see visual graphs of pattern distribution and frequency distribution results that help uncover hidden insights.
  • Use a variety of data types/sources (listed previously). To profile data from Twitter, Facebook, Google Analytics or YouTube, you must first explicitly import the data into the SAS Viya in-memory environment.

Data quality processing
(SAS® Data Quality on SAS® Viya® is included in SAS Data Preparation)

Data cleansing

  • Use locale- and context-specific parsing and field extraction definitions to reshape data and uncover additional insights.
  • Use the extraction transformation to identify and extract contact information (e.g., name, gender, field, pattern, identity, email and phone number) in a specified column.
  • Use parsing when data in a specified column needs to be tokenized into substrings (e.g., a full name tokenized into prefix, given name, middle name and family name).
  • Derive unique identifiers from match codes that link disparate data sources.
  • Standardize data with locale- and context-specific definitions to transform data into a common format, like casing.
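In code, this kind of cleansing maps to SAS Data Quality functions. The sketch below is a hypothetical example: it assumes a Quality Knowledge Base is available, and the input table plus the definition and locale names ('Name', 'ENUSA') are illustrative choices, not prescribed by this document.

```sas
data work.clean;
   set work.raw_contacts;                       /* placeholder input    */
   length name_std $ 60 parsed $ 200 given $ 40;

   /* Standardize a name into a common casing/format.                   */
   name_std = dqStandardize(name, 'Name', 'ENUSA');

   /* Tokenize the full name, then pull out a single token.             */
   parsed = dqParse(name_std, 'Name', 'ENUSA');
   given  = dqParseTokenGet(parsed, 'Given Name', 'Name', 'ENUSA');
run;
```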

Identity definition

  • Analyze column data using locale-specific rules to determine gender or context.
  • Use identification analysis to analyze the data and determine its context, which is particularly valuable if the data or source of data is unfamiliar.
  • Use gender analysis to determine the gender of a name using locale-specific rules so the data can be easily filtered or segmented.

Data matching

  • Determine matching records based upon locale- and context-specific definitions.
  • Easily identify matching records using more than 25 context-specific rules such as date, address, name, email, etc.
  • Use the results of the match code transformation to remove duplicates, perform a fuzzy search or a fuzzy join.
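A hypothetical sketch of match-code-based deduplication with the DQMATCH function (again assuming a Quality Knowledge Base; the input table, the sensitivity of 85 and the 'ENUSA' locale are illustrative):

```sas
data work.matched;
   set work.customers;                 /* placeholder input table        */
   length name_mc $ 60;
   /* Records that produce the same match code are treated as fuzzy     */
   /* matches even when spellings differ slightly.                      */
   name_mc = dqMatch(name, 'Name', 85, 'ENUSA');
run;

/* Remove duplicates that share a match code.                           */
proc sort data=work.matched nodupkey out=work.dedup;
   by name_mc;
run;
```

Lowering the sensitivity value makes matching looser; raising it makes matching stricter.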

System and job monitoring

  • Use integrated monitoring capabilities for system- and job-level processes.
  • Gain insight into how many processes are running, how long they’re taking and who is running them.
  • Easily filter through all system jobs based on job status (running, successful, failed, pending and cancelled).
  • Access job error logs to help with root-cause analysis and troubleshooting. (Note: Monitoring is available using SAS Environment Manager and the job monitor application.)

Data import and data preparation job scheduling

  • Create a data import job from automatically generated code to perform a data refresh using the integrated scheduler.
  • Schedule data explorer imports as jobs so they will become an automatic, repeatable process.
  • Specify a time, date, frequency and/or interval for the jobs.

Data lineage

  • Explore relationships between accessible data sources, data objects and jobs.
  • Use the relationship graph to visually show the relationships that exist between objects, making it easier to understand the origin of data and trace its processing.

Plan templates and project collaboration

  • Use data preparation plans (templates), which consist of a set of transformation rules that get applied to one or more sources of data, to improve productivity (spend less time preparing data).
  • Reuse the templates by applying them to different sets of data to ensure that data is transformed consistently to adhere to enterprise data standards and policies.
  • Rely on team-based collaboration through a project hub used with SAS Viya projects. The project’s activity feed shows who did what and when, and can be used to communicate with other team members.

Visual data exploration & insights deployment

  • Provides an integrated environment for self-service data discovery, reporting and world-class analytics.
  • Delivers easy-to-use predictive analytics with “smart algorithms.”
  • Enables data exploration and information sharing via email, web browser, Microsoft Office or mobile devices.
  • Provides web-based administration, monitoring and governance of a single platform.


Approachable analytics

  • Provides access to advanced analytic capabilities without coding, including:
    • Correlations.
    • Forecasting.
    • Scenario analysis.
    • Decision trees.
    • Text analysis.
    • Automated goal seeking (an advanced SAS Forecasting feature).
