Four steps to realizing your data center of the future
SAS and Sun partner to help IT manage SAS® applications
As more SAS users move from departmental analytics to enterprise business intelligence, the transition generally means that SAS applications will migrate to the data center, where they will be deployed and managed by IT. In this new model, the IT organization is an equal partner, along with the business users, in helping everyone in the organization compete on analytics.
Moving SAS resources to the data center also means that IT matters even more for enterprise SAS users. What IT issues are of the highest importance to SAS users?
- Ensuring that SAS is always available.
- Providing reasonable and predictable SAS application performance.
- Offering secure access to SAS applications and data.
- Delivering the IT infrastructure as cost-effectively as possible.
For each of the four goals above, the SAS and Sun Data Center of the Future team spent three months developing and executing an engineering plan to study and characterize the issue. You’ll find white papers and videos documenting our findings at www.sas.com/sascom-dcof. For a brief overview of what we’ve discovered, continue reading below.
ONE: Creating highly available SAS® environments
From a business point of view, high availability for SAS applications means that application services and data are nearly always accessible. Because today’s application services are increasingly interlinked, the failure of a single component can cascade throughout the enterprise, posing significant business risk and financial loss and forcing users to make business decisions without supporting data. To deploy high availability, you need to understand the function of each component, how the components interact with one another, and how the failure of any one of them affects the operation of the entire platform.
For example, the SAS Metadata Server is the most important component of the SAS environment. Every decision and nearly all interactions involve it in some way: it handles authentication and authorization, and it holds information about all of the other SAS applications. Given this critical role within the SAS environment, maximizing the availability of the SAS Metadata Server should be one of the first steps in increasing overall availability.
Sun and SAS engineers carefully studied the interaction of SAS applications and developed and documented a layered approach that can help keep SAS application services operating continuously, regardless of the cause of failure. Each layer of availability provides the tools to shield SAS application services from failures in hardware, software, buildings and geographic sites. Each layer can be implemented individually to help you deploy the right level of application availability at the right cost.
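The escalation logic behind such a layered approach can be sketched in a few lines. This is a minimal illustration, not SAS or Sun product behavior: the layer names, recovery actions and service name are all hypothetical, and a real availability stack (clustering software, failover managers) is far more involved.

```python
# Hypothetical layered availability sketch: a supervisor checks each
# layer in turn and escalates from a local restart up to a site failover.
# Layer names and recovery actions are illustrative only.

LAYERS = ["process", "host", "cluster", "site"]

def recover(layer, service):
    """Return the recovery action a supervisor might take at each layer."""
    actions = {
        "process": f"restart {service} locally",
        "host":    f"fail {service} over to a standby host",
        "cluster": f"relocate {service} within the cluster",
        "site":    f"switch {service} to the disaster-recovery site",
    }
    return actions[layer]

def supervise(service, health_checks):
    """Walk the layers until a health check passes; escalate on failure."""
    for layer, healthy in zip(LAYERS, health_checks):
        if healthy:
            return f"{service} healthy at {layer} layer"
        print(recover(layer, service))
    return f"{service} unrecoverable: manual intervention required"
```

The point of the layering is that each level can be deployed independently, so you pay only for the degree of protection a given service warrants.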
TWO: Increasing flexibility, adaptability and ROI
Today’s enterprises are under pressure to be innovative and adaptable. SAS applications can help you achieve these goals with the ability to integrate, manage and analyze data. However, when SAS is deployed one application per server, the results are data everywhere, underutilized resources, lax security and increased complexity. Server sprawl also leads to higher training, operational and data center costs, as well as an inability to deal with real-time analytics, globalization and changing needs.
Consolidating and virtualizing the SAS environments within your organization can deliver reduced costs, improved ROI, increased agility and responsiveness, increased performance and reduced downtime, but it is important to carefully design the architecture beforehand. For example, suppose the SAS OLAP Server is housed in one container (OS partition). This server’s function is to respond to requests for data stored in SAS cubes. However, the SAS workspace server generally builds the cubes. Since the data access and resource usage patterns for these two servers differ, you might locate the SAS workspace server in a different, isolated container, which means the cubes created by the SAS workspace server might not be available to the SAS OLAP Server, even though both servers are running on the same physical machine. You must identify a mechanism to make the cubes available everywhere they are needed. This is only one example of the issues that must be considered when designing a consolidated and virtualized architecture.
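On Solaris, one way to share cube data between isolated containers is a loopback (lofs) mount that exposes a global-zone directory inside another zone. The zone name and paths below are hypothetical placeholders, and the recommended configurations from Sun and SAS should take precedence; this is only a sketch of the mechanism.

```shell
# Hypothetical zone name and paths; adjust to your environment.
# Expose the directory where the SAS workspace server writes cubes
# (/export/sas/cubes in the global zone) read-only inside the OLAP zone.
zonecfg -z olap-zone <<'EOF'
add fs
set dir=/sas/cubes
set special=/export/sas/cubes
set type=lofs
add options ro
end
commit
EOF
# The new filesystem resource takes effect when the zone is rebooted.
zoneadm -z olap-zone reboot
```

A lofs mount avoids copying data between containers while preserving each container's resource and fault isolation.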
Sun and SAS considered these types of situations and documented recommended configurations and procedures to make it easier and faster for you to consolidate and virtualize SAS applications. The configurations use no-cost Sun virtualization technologies to help effectively manage virtualized resources and workloads to reduce TCO, decrease complexity and rapidly respond to changes in demand and business requirements.
THREE: Going grid
As SAS users begin to take greater advantage of their SAS applications, it’s natural for the number of SAS jobs to grow over time. At some point it becomes challenging to prioritize and balance SAS workloads across the existing computing resources. A grid computing environment enables SAS jobs to be balanced across the existing computing resources and scheduled according to priority. This enables more efficient utilization of computing resources and helps achieve faster results for SAS users. Another common reason for grid computing is to reduce run times for large and complex computational problems.
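The core scheduling idea — run higher-priority jobs first and place each job on the least-loaded node — can be sketched generically. This is not how SAS grid software is implemented; the job and node names are hypothetical, and a real grid manager also accounts for data locality, queues and node failures.

```python
import heapq

def schedule(jobs, nodes):
    """Assign each job (name, priority, cost) to the least-loaded node.

    Higher priority runs first; a node's load is the sum of the costs
    of the jobs placed on it. Returns {node: [job names in order]}.
    """
    # Min-heap of (current_load, node) yields the least-loaded node first.
    loads = [(0, node) for node in nodes]
    heapq.heapify(loads)
    placement = {node: [] for node in nodes}
    # Sort by priority, highest first, so urgent jobs are placed first.
    for name, priority, cost in sorted(jobs, key=lambda j: -j[1]):
        load, node = heapq.heappop(loads)
        placement[node].append(name)
        heapq.heappush(loads, (load + cost, node))
    return placement
```

For example, with jobs `[("etl", 1, 4), ("report", 3, 1), ("model", 2, 2)]` and nodes `["n1", "n2"]`, the high-priority report is placed first and the expensive ETL job lands wherever load is lowest at that point.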
Sun and SAS configured and tested a solution using low-cost servers that also offers improved application performance, higher resource utilization and greater flexibility in responding to change. A white paper and free grid computing technical assessment can help you understand how to effectively apply Sun technologies to reduce the cost of purchasing and operating a SAS grid.
FOUR: Affordable, high-performance storage
Rapid data growth and the need to deliver high service levels for more and more SAS users make it difficult to contain costs while providing fast access to the right information. Large SAS data sets require fast I/O processing with sustained high throughput to keep servers from sitting idle while waiting for data. The traditional solutions are SAN or direct-attached storage, which introduce complexity and cost. In addition, it takes a skilled storage administrator to keep these complex storage systems running smoothly, and it can be difficult to trace the source of performance issues. In contrast, networked storage is easy to set up and less expensive, but generally lacks the performance required for larger SAS installations. Ideally, you need a fast, easy-to-manage and less expensive storage platform to meet the needs of demanding SAS users.
To help you achieve this goal, SAS and Sun engineers characterized SAS application performance and created best practices and recommendations to help you quickly deploy and easily manage terabytes of SAS data on Sun Open Storage products that address all of these issues.
Sun and SAS – delivering an analytical advantage
SAS and Sun understand that IT matters and strive to address the IT issues that are paramount to SAS users. The Data Center of the Future initiative provides a forum for SAS and Sun to test and design solutions that take advantage of the latest technology advances and are optimized for high performance. Together, SAS and Sun engineers determine the best approach for implementing the technologies to save time, reduce risk and ultimately improve business results through optimized solutions.
John Smale is Global SAS Market Development Manager, Sun Microsystems