What it is and why it matters
What is machine learning?
Machine learning is a branch of artificial intelligence (AI) based on two things: mathematical algorithms and automation. The idea is to automate the building of analytic models that use algorithms to learn from data in an iterative fashion. The “machine” (it’s really an algorithm) learns from its mistakes in previous steps to derive the best results without human intervention. These models can then be used to produce reliable, repeatable decisions.
The iterative aspect of machine learning is important because your models aren’t going to get any smarter by themselves. They need to learn from previous computations to produce the best results.
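That iterative loop can be sketched in a few lines (an illustrative toy, with made-up data): a one-parameter model repeatedly measures its error against known answers and corrects its weight.

```python
# Illustrative sketch: iterative learning as gradient descent.
# Fit y ≈ w * x; each pass measures the model's error on known
# answers and nudges the weight w to reduce it.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, observed output)

w = 0.0    # initial guess for the weight
lr = 0.05  # learning rate: size of each correction step
for step in range(200):
    # Average gradient of the squared error over the data.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # learn from the previous step's mistakes

print(round(w, 2))  # settles near 2.04, the slope the data suggests
```

Each pass uses the error from the previous pass, which is exactly why the model gets no smarter if the loop stops running.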
It’s a science that’s not new – but one that’s gaining fresh momentum. The highly hyped, self-driving Google car? Essence of machine learning. Online recommendation offers? A machine learning application for everyday life. Fraud detection? One of the more obvious, important uses in our world today.
Why is machine learning important?
For most organizations, the race is on to extract valuable information from growing volumes and varieties of data. Electronic (and other) data is increasing at rates never seen before. Storage options for big data are more affordable than ever – maybe even free! And computational processing power has never been cheaper or more powerful.
That means with the right data, the right technologies and the right analytics, it’s possible to quickly and automatically produce models that can analyze bigger, more complex data and deliver faster, more accurate results without human intervention – even on a very large scale. The result? High-value predictions that can guide better decisions and actions in real time.
One key to this is automated model building. Analytics thought leader Thomas H. Davenport wrote in The Wall Street Journal that with rapidly changing, growing volumes of data, "... you need fast-moving modeling streams to keep up." And you can do it with machine learning, he says: "Humans can typically create one or two good models a week; machine learning can create thousands of models a week."
Ever wonder how an online retailer provides nearly instantaneous offers for other products that may interest you? Or how lenders can provide near-real-time answers to your loan requests? Many day-to-day activities are powered by machine learning algorithms, including:
- Fraud detection.
- Online recommendations.
- Real-time ads on web pages and mobile devices.
- Text-based sentiment analysis.
- Credit scoring and next-best offers.
- Prediction of equipment failures.
- New pricing models.
- Network intrusion detection.
- Handwriting analysis.
- Email spam filtering.
It’s not your daddy's (or granddaddy's) machine learning anymore
Machine learning today is not like machine learning of the past. While many of the mathematical algorithms have been around for a long time, the ability to apply complex mathematical calculations to huge quantities of data – over and over, faster and faster – is a recent development. Cheaper data storage, distributed processing, more powerful computers and the analytical opportunities they provide are all responsible for the resurging interest in these systems.
Machine learning methods explained
Two of the most widely adopted machine learning methods are supervised learning and unsupervised learning. Most machine learning – about 70 percent – is supervised learning; unsupervised learning accounts for another 10 to 20 percent.
- Supervised learning algorithms are trained using labeled examples – inputs where the desired output is already known. For example, a piece of equipment could have data points labeled either “F” (failed) or “R” (runs). The learning algorithm receives a set of inputs along with the corresponding correct outputs, and it learns by comparing its actual output with the correct outputs to find errors. It then modifies the model accordingly. Through methods like classification, regression, prediction and gradient boosting, supervised learning uses patterns to predict the values of the label on additional unlabeled data. Supervised learning is commonly used in applications where historical data predicts likely future events. For example, it can anticipate when credit card transactions are likely to be fraudulent or which insurance customer is likely to file a claim.
- Unsupervised learning is used against data that has no historical labels. The system is not told the "right answer." The algorithm must figure out for itself what's being shown. The goal is to explore the data and find some structure within. Unsupervised learning works well on transactional data. For example, it can identify segments of customers with similar attributes who can then be treated similarly in marketing campaigns. Or it can find the main attributes that separate customer segments from each other. Popular techniques include self-organizing maps, nearest-neighbor mapping, k-means clustering and singular value decomposition. These algorithms are also used to segment text topics, recommend items and identify data outliers.
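Both methods can be sketched with toy data (hypothetical readings and a hypothetical nearest-centroid rule, not SAS code): a classifier trained on labeled “F”/“R” sensor readings, followed by k-means grouping unlabeled values into two segments with no “right answer” given.

```python
# --- Supervised: nearest-centroid classifier (illustrative sketch) ---
# Labeled sensor readings: "F" = failed, "R" = runs.
training = [(0.9, "F"), (1.1, "F"), (4.8, "R"), (5.2, "R")]

def centroid(label):
    vals = [x for x, lab in training if lab == label]
    return sum(vals) / len(vals)

def predict(x):
    # Classify a new reading by its nearest class average.
    return "F" if abs(x - centroid("F")) < abs(x - centroid("R")) else "R"

print(predict(1.3))  # "F": closer to the failed examples
print(predict(4.5))  # "R": closer to the running examples

# --- Unsupervised: k-means with k = 2 (illustrative sketch) ---
# Unlabeled customer spend amounts; no labels are ever given.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.8]
centers = [0.0, 10.0]  # initial guesses
for _ in range(10):
    groups = [[], []]
    for p in points:
        # Assign each point to its nearest center...
        groups[0 if abs(p - centers[0]) < abs(p - centers[1]) else 1].append(p)
    # ...then move each center to the mean of its group.
    centers = [sum(g) / len(g) for g in groups]
print([round(c, 1) for c in centers])  # two segments emerge: [1.0, 9.1]
```

The supervised half needs the labels to learn; the unsupervised half discovers the two customer segments purely from the structure of the data.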
Semi-supervised learning and reinforcement learning are two other techniques that are sometimes used.
- Semi-supervised learning is used for the same applications as supervised learning. But it uses both labeled and unlabeled data for training – typically a small amount of labeled data with a large amount of unlabeled data (because unlabeled data takes less effort and is less expensive to acquire). This type of learning can be used with methods such as classification, regression and prediction. Semi-supervised learning is useful when the cost of labeling is too high to allow for a fully labeled training process. An early example is identifying a person's face on a webcam.
- Reinforcement learning is often used for robotics, gaming and navigation. With reinforcement learning, the algorithm discovers for itself through trial and error which actions yield the greatest rewards. This type of learning has three primary components: the agent (the learner or decision maker), the environment (everything the agent interacts with) and actions (what the agent can do). The objective is for the agent to choose actions that maximize the expected reward over a given amount of time. A good policy – a rule mapping each situation to an action – lets the agent reach that objective much faster, so the goal in reinforcement learning is to learn the best policy.
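Both techniques can be illustrated with toy sketches (made-up data and standard textbook methods, not anything specific from the source): self-training, one common semi-supervised scheme in which a model pseudo-labels only the unlabeled points it is confident about, and tabular Q-learning on a tiny corridor, showing the agent, environment and actions named above.

```python
import random

random.seed(0)

# --- Semi-supervised self-training (illustrative sketch) ---
# A few expensive labeled readings plus cheap unlabeled ones.
labeled = [(1.0, "F"), (5.0, "R")]
unlabeled = [1.2, 0.9, 4.8, 5.3, 3.1]
for x in unlabeled:
    d_f = min(abs(x - v) for v, lab in labeled if lab == "F")
    d_r = min(abs(x - v) for v, lab in labeled if lab == "R")
    if abs(d_f - d_r) > 2.0:  # confident enough to trust its own guess
        labeled.append((x, "F" if d_f < d_r else "R"))
print(len(labeled))  # 6: the ambiguous point 3.1 stays unlabeled

# --- Reinforcement learning: tabular Q-learning (illustrative sketch) ---
# Agent: the Q-table and its policy. Environment: a five-state corridor
# with a reward at the far end. Actions: move left (-1) or right (+1).
n_states = 5
actions = [-1, +1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # step size, discount, exploration
for episode in range(500):
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:   # trial and error: explore...
            a = random.choice(actions)
        else:                           # ...or exploit what was learned
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0
        # Update toward the observed reward plus discounted future value.
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2
policy = [max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)]
print(policy)  # the learned policy: always move right, [1, 1, 1, 1]
```

Note how the self-training loop leaves the point midway between the two classes unlabeled, and how the Q-learner converges on the best policy purely from trial-and-error rewards.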
Machine learning vs. other approaches
The difference between machine learning and other statistical and mathematical approaches, like data mining, is another popular topic of discussion. In simple terms, while machine learning uses many of the same algorithms as data mining, the two disciplines differ in what they predict: data mining discovers previously unknown patterns and knowledge, while machine learning reproduces known patterns and knowledge, automatically applies them to new data, and then uses the results in decision making.
Did you know?
- In machine learning, a target is called a label.
- In statistics, a target is called a dependent variable.
What do you need to create good machine learning systems?
- Data preparation capabilities.
- Algorithms – basic and advanced.
- Automation and iterative processes.
- Ensemble modeling.
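The last item, ensemble modeling, can be sketched in a few lines (hypothetical threshold models, purely for illustration): several simple models vote, and the majority decides, so individual errors tend to cancel out.

```python
# Illustrative sketch of ensemble modeling by majority vote.
# Three hypothetical threshold "models" score the same input.
models = [
    lambda x: "F" if x < 2.0 else "R",
    lambda x: "F" if x < 3.0 else "R",
    lambda x: "F" if x < 2.5 else "R",
]

def ensemble_predict(x):
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)  # majority wins

print(ensemble_predict(2.2))  # "F": two of the three models vote "F"
print(ensemble_predict(2.8))  # "R": two of the three models vote "R"
```

Techniques such as bagging and gradient boosting, mentioned elsewhere in this piece, combine models in more sophisticated ways, but the underlying idea is the same.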
How SAS can help
SAS graphical user interfaces help you build machine learning models and implement an iterative machine learning process. You don't have to be an advanced statistician. Our comprehensive selection of machine learning algorithms can help you quickly get value from your big data. They include:
- Neural networks.
- Decision trees.
- Random forests.
- Associations and sequence discovery.
- Gradient boosting and bagging.
- Support vector machines.
- Nearest-neighbor mapping.
- k-means clustering.
- Self-organizing maps.
- Local search optimization techniques (e.g., genetic algorithms).
- Expectation maximization.
- Multivariate adaptive regression splines.
- Bayesian networks.
- Kernel density estimation.
- Principal components analysis.
- Singular value decomposition.
- Gaussian mixture models.
- Sequential covering rule building.
As we know by now, it’s not just the algorithms. Ultimately, the secret to getting the most value from your big data lies in pairing the best algorithms for the task at hand with:
- Comprehensive data quality and management.
- GUIs for building models and process flows.
- Comparisons of different machine learning models to quickly identify the best one.
- Interactive data exploration and visualization of model results.
- Automated ensemble model evaluation to identify the best performers.
- Easy model deployment so you can get repeatable, objective decisions quickly.
- An integrated, end-to-end platform for the automation of the data-to-decision process.
Experience and expertise
At SAS, we are continuously searching for and evaluating new approaches. Ours is a long history of implementing the statistical methods best suited to solving the problems you face. We combine our rich, sophisticated heritage in statistics and data mining with new architectural advances to ensure your models run as fast as possible – even in huge enterprise environments.
We understand that quick time to value not only means fast, automated model performance, but also time NOT spent moving data between platforms – especially when it comes to big data. High-performance, distributed analytical techniques take advantage of massively parallel processing integrated with Hadoop, as well as all major databases. You can cycle quickly through all steps of the modeling process – without moving data.