
Deep Learning

What it is and why it matters

Deep learning trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing.

Why is deep learning important today?

Deep learning is one of the foundations of cognitive computing. The current interest in deep learning is due in part to the buzz surrounding cognitive computing – software applications that understand human input and can respond with humanlike output. Deep learning techniques have greatly improved our ability to classify, recognize, detect and describe – in one word, understand. Many applications are in fields where these tasks apply to non-numerical data – for example, classifying images, recognizing speech, detecting objects and describing content. Systems such as Siri and Cortana are powered in part by cognitive computing and driven by deep machine learning.

Several developments are now advancing deep learning. Algorithmic improvements have boosted the performance of deep learning methods, and improving the accuracy of machine learning approaches creates massive business value. New classes of neural networks have been developed that are particularly well suited to applications like text translation and image classification. Neural networks have existed for almost 50 years, but they had fallen out of favor by the late 1990s because it was difficult to achieve good accuracy without large training data sets. Two changes within the last decade, however, have revolutionized their use:

  1. We have a lot more data available to build neural networks with many deep layers, including streaming data from the Internet of Things, textual data from social media, physicians' notes and investigative transcripts.
  2. Computational advances of distributed cloud computing and graphics processing units have put incredible computing power at our disposal. This level of computing power is necessary to train deep algorithms.

At the same time, human-to-machine interfaces have evolved greatly as well. The mouse and the keyboard are being replaced with gesture, swipe, touch and natural language, ushering in a renewed interest in cognitive computing.


Innovations in data science

Patrick Hall, a data scientist at SAS, explains how advances in deep learning will lead to text recognition capabilities that can decode, understand and even generate text.

What other data science breakthroughs are on the horizon? Hear what a leading data scientist thinks will be next for deep learning, mass modeling, image recognition and sound recognition.

Deep learning opportunities and applications

Deep learning problems demand a lot of computational power because the algorithms are iterative, their complexity grows as the number of layers increases, and large volumes of data are needed to train the networks.

Traditional modeling methods are well understood, and their predictive methods and business rules can be explained. Deep learning methods have been characterized as more of a black-box approach. You can prove that they perform well by testing them on new data. However, it is difficult to explain to decision makers why they produce a particular outcome, due to their nonlinear nature. This can create some resistance to adoption of these techniques, especially in highly regulated industries.

On the other hand, the dynamic nature of learning methods – their ability to continuously improve and adapt to changes in the underlying information pattern – presents a great opportunity to introduce less deterministic, more dynamic behavior into analytics. Greater personalization of customer analytics is one possibility.

Another great opportunity is to improve accuracy and performance in applications where neural networks have been used for a long time. Through better algorithms and more computing power, we can add greater depth.

While the current market focus of deep learning techniques is in applications of cognitive computing, SAS sees great potential in more traditional analytics applications, for example, time series analysis.

Another opportunity is simply to make existing analytical operations more efficient and streamlined. Recently, SAS experimented with deep neural networks in speech-to-text transcription problems. Compared with standard techniques, the word error rate (WER) decreased by more than 10 percent when deep neural networks were applied, and about 10 steps of data preprocessing, feature engineering and modeling were eliminated. Training the deep network may take longer than the traditional modeling flow, but the impressive performance gains and the time saved on feature engineering signify a paradigm shift.
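Word error rate, the metric cited above, is simply the word-level edit distance between a reference transcript and the system's hypothesis, divided by the reference length. A minimal sketch (the sentences below are made-up examples, not SAS data):

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words.
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

A "more than 10 percent" WER improvement means, for example, moving from a WER of 0.20 to below 0.18 on the same test transcripts.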

Deep learning in today's world

The impact that deep learning has had on the world has been significant – and it's only getting started. Learn more about what people are saying.


Deep learning methods and applications

From playing AlphaGo to detecting fraud, deep learning can be used to solve many complex problems. In this interview, SAS VP of Analytic Server R&D Oliver Schabenberger provides examples and explains how it differs from standard analytics.

Read the interview


Deep learning for natural language processing

Find out how deep learning techniques are being applied to natural language processing. James C. Lester, a Distinguished Professor of Computer Science at NC State University, discusses text normalization and other topics with James A. Cox, Director of Text Analytics at SAS.

Watch the webinar


The digitization of everything

“The collective increase in available processing power and new frontiers in artificial intelligence and deep learning are allowing us to not only collect, but also make better sense of [location, network and activity] details leading to a greater understanding of our customers,” says Brian Vellmure, a management consultant.

Learn more

How is deep learning being used?

To the outside eye, deep learning may appear to be in a research phase as computer science researchers and data scientists continue to test its capabilities. However, deep learning has many practical applications that businesses are using today, and many more that will be used as research continues. Popular uses today include:

Speech Recognition

Both the business and academic worlds have embraced deep learning for speech recognition. Xbox, Skype, Google Now and Apple’s Siri®, to name a few, are already employing deep learning technologies in their systems to recognize human speech and voice patterns.

Image Recognition

One practical application of image recognition is automatic image captioning and scene description. This could be crucial in law enforcement investigations for identifying criminal activity in thousands of photos submitted by bystanders in a crowded area where a crime has occurred. Self-driving cars will also benefit from image recognition through the use of 360-degree camera technology.

Natural Language Processing

Neural networks, a central component of deep learning, have been used to process and analyze written text for many years. A specialization of text mining, this technique can be used to discover patterns in customer complaints, physician notes or news reports, to name a few. 

Recommendation Systems

Amazon and Netflix have popularized the notion of a recommendation system with a good chance of knowing what you might be interested in next, based on past behavior. Deep learning can be used to enhance recommendations in complex environments such as music interests or clothing preferences across multiple platforms.
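As a toy illustration of the underlying idea, a classic recommender scores items a user hasn't seen by how similar users rated them; deep learning replaces such hand-built similarity measures with learned representations. The rating matrix below is invented purely for the sketch:

```python
import numpy as np

# Tiny user-by-item rating matrix (rows: users, columns: items; 0 = unrated).
# Invented data for illustration only.
ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 1, 0],   # user 1
    [0, 1, 5, 4],   # user 2
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Score items for user 0 by similarity-weighted ratings of all users,
# then mask out items user 0 has already rated.
target = 0
sims = np.array([cosine(ratings[target], ratings[u]) for u in range(3)])
scores = sims @ ratings
scores[ratings[target] > 0] = -np.inf
print(int(np.argmax(scores)))  # → 2, the best unseen item for user 0
```

A deep-learning recommender would instead learn dense embeddings for users and items from behavior across platforms, which handles much richer signals than explicit ratings.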



The thinking machine: Experiments in deep learning

The MNIST database is a public data set containing thousands of handwritten images of digits. It serves as a classic benchmark in machine learning research, illustrating how complex it can be to process and decode images.

Learn more about how this data set, filled with thousands of handwritten zeros, ones, twos and other single-digit numbers, can be used to further research into deep learning. And discover what one data scientist learned when he applied deep learning to the data.
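A minimal version of such a digit-recognition experiment can be run locally. MNIST itself must be downloaded, so this sketch (assuming scikit-learn is installed) substitutes the library's bundled 8x8 digits data set and a small multilayer network:

```python
# A minimal sketch of a handwritten-digit experiment, assuming scikit-learn
# is available. The bundled 8x8 digits data set stands in for MNIST.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 grayscale 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, test_size=0.25, random_state=0)

# Two hidden layers: each layer learns a progressively more abstract
# representation of the raw pixel values.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.3f}")
```

Even this small network classifies most held-out digits correctly, which is what makes the data set a good first experiment in deep learning.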

Watch the deep learning webinar now


How deep learning works

Deep learning changes how you think about representing the problems that you’re solving with analytics. It moves from telling the computer how to solve a problem to training the computer to solve the problem itself.

The traditional approach is to use the data at hand to engineer features that derive new variables, then select an analytic model and finally estimate the parameters (or the unknowns) of that model. These techniques can yield predictive systems that do not generalize well, because completeness and correctness depend on the quality of the model and its features. For example, if you develop a fraud model with feature engineering, you start with a set of variables and most likely derive a model from those variables using data transformations. You may end up with 30,000 variables that your model depends on; you then have to shape the model and figure out which variables are meaningful and which are not. Adding more data requires you to do it all over again.
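To make the feature engineering step concrete, here is a toy sketch in the spirit of the fraud example. Every derived column is a human decision made before any model is fit; the data is synthetic and the features are invented for illustration:

```python
import numpy as np

# Synthetic transaction data standing in for raw input.
rng = np.random.default_rng(0)
amounts = rng.lognormal(mean=3.0, sigma=1.0, size=1000)  # dollar amounts
hours = rng.integers(0, 24, size=1000)                   # hour of day

# Hand-engineered features: each derived variable encodes an analyst's
# judgment about what might matter, made before any model sees the data.
features = np.column_stack([
    amounts,
    np.log1p(amounts),                                      # tame the long tail
    (amounts > np.percentile(amounts, 99)).astype(float),   # unusually large
    ((hours < 6) | (hours > 22)).astype(float),             # off-hours flag
])
print(features.shape)  # → (1000, 4)
```

Scale this up to tens of thousands of such derived columns and the maintenance burden described above becomes clear: new data means revisiting every one of these decisions.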

The new approach with deep learning is to replace the formulation and specification of the model with appropriate hierarchical characterizations (layers) that learn to recognize latent features of the data from the regularities in the layers.
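The hierarchical characterizations can be sketched as a stack of layers, each transforming the previous layer's output into a more abstract representation. In a real network, training adjusts the weights; here they are random and the sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One dense layer with a ReLU activation."""
    return np.maximum(0.0, x @ w + b)

x = rng.normal(size=(8, 100))  # batch of 8 raw 100-dimensional inputs

# Each successive layer learns (after training) a more abstract
# representation; weights are random here purely for illustration.
h1 = layer(x, 0.1 * rng.normal(size=(100, 64)), np.zeros(64))   # low-level features
h2 = layer(h1, 0.1 * rng.normal(size=(64, 32)), np.zeros(32))   # mid-level features
h3 = layer(h2, 0.1 * rng.normal(size=(32, 10)), np.zeros(10))   # task-level output
print(h3.shape)  # → (8, 10)
```

No hand-written rules appear anywhere in the stack: the latent features live in the learned weights, which is the sense in which the layers replace explicit model specification.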

The paradigm shift with deep learning is a move from feature engineering to feature representation.

The premise and promise of deep learning is that it can lead to predictive systems that generalize well, that adapt well, that can continuously improve as new data arrives, and that are more dynamic than predictive systems built on hard and deterministic business rules. You no longer fit a model. Instead, you train the task.

Read more about advanced analytics

