ModelOps: How to operationalize the model life cycle
Improving the odds that more analytical models are deployed and that they can more quickly create business value
Jeff Alford, SAS Insights Editor
ModelOps is how analytical models are cycled from the data science team to the IT production team in a regular cadence of deployment and updates. In the race to realize value from AI models, it’s a winning ingredient that only a few companies are using.
More and more, organizations are relying on machine learning (ML) models to turn massive volumes of data into fresh insights and information. These ML models are not limited by the number of data dimensions they can handle, and they can effectively use vast amounts of unstructured data to identify patterns for predictive purposes.
Introducing . . . ModelOps
But model development and deployment are difficult. Only about 50% of models are ever put into production, and those that are take at least three months to be ready for deployment. That time and effort represent a real operational cost, and they mean a slower time to value, too.
All models degrade, and if they are not given regular attention, performance suffers. Models are like cars: To ensure quality performance, you need to perform regular maintenance. Model performance depends not just on model construction, but also on data, fine-tuning, regular updates and retraining.
ModelOps allows you to move models from the lab to validation, testing and production as quickly as possible while ensuring quality results. It enables you to manage and scale models to meet demand and continuously monitor them to spot and fix early signs of degradation. ModelOps is based on long-standing DevOps principles. It’s a must-have for implementing scalable predictive analytics. But let’s be clear: Model development practices are not the same as software engineering best practices. The difference should become clearer as you continue reading.
Measuring results from start to finish
As a first step, you need to monitor the performance of your ModelOps program itself. Why? Because ModelOps represents a cycle of development, testing, deployment and monitoring, and it can only be effective if it’s making progress toward the goal of providing the scale and accuracy that your organization requires.
You need, at the highest level, to determine the effectiveness of your ModelOps program. Has the implementation of ModelOps practices helped you achieve the scale, accuracy and process rigor the organization needs?
Then, at the operational level, you need to monitor the performance of each model. As models degrade, they will need retraining and redeployment. Here are some considerations when creating a performance dashboard (a minimal drift check is sketched after this list):
- For models (or classes of models), set accuracy targets and track them through development, validation and deployment for dimensions such as drift and degradation.
- Identify business metrics affected by the model in operation. For example, is a model designed to increase subscribers having a positive effect on subscription rates?
- Track metrics such as data size, update frequency, locations, categories and types. Sometimes model performance problems are due to changes in the data and its sources, and these metrics can help your investigation.
- Monitor how much compute and memory your models consume.
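One widely used drift metric for such a dashboard is the population stability index (PSI), which compares the distribution of scores (or of an input variable) at training time against recent production data. Below is a minimal sketch using NumPy and synthetic scores; the variable names and the 0.2 alert threshold are rule-of-thumb assumptions, not part of any particular product.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; a higher PSI means more drift."""
    # Bin edges come from the baseline (training-time) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Clip production values into the baseline range so every value lands in a bin.
    actual = np.clip(actual, edges[0], edges[-1])
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid division by zero and log(0).
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 10_000)    # baseline at deployment
production_scores = rng.normal(0.3, 1.1, 10_000)  # recent, drifted scores
psi = population_stability_index(training_scores, production_scores)
if psi > 0.2:  # a common rule-of-thumb threshold for significant drift
    print(f"PSI = {psi:.3f}: significant drift, consider retraining")
```

In practice, you would run a check like this on each input feature as well as on the model’s output scores, and feed the results to your dashboard.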
Related to metrics, model validation is an important foundation of ModelOps. Some use validation and verification interchangeably, but their intent is different.
Verification confirms that a model was correctly implemented and works as designed. Validation ensures the model provides the results it should, based on the model’s fundamental objectives. Both are important best practices in the development and deployment of quality models.
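The distinction is easy to see in code. Here is a minimal sketch using a scikit-learn classifier on synthetic data; the 0.75 AUC target is an illustrative stand-in for whatever objective the business has agreed on.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real modeling problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2_000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = LogisticRegression().fit(X_train, y_train)

# Verification: was the model correctly implemented, and does it work as designed?
probs = model.predict_proba(X_test)[:, 1]
assert probs.shape == (len(X_test),)        # correct output shape
assert np.all((probs >= 0) & (probs <= 1))  # valid probabilities

# Validation: does the model provide the results it should, given its objectives?
auc = roc_auc_score(y_test, probs)
assert auc >= 0.75, f"AUC {auc:.2f} is below the agreed business target"
print(f"Verified and validated: AUC = {auc:.2f}")
```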
Three common problems addressed using a ModelOps approach
Models can degrade as soon as they are deployed (sometimes within days). Of course, some factors affect the performance of your models more than others. Below are some common problems – ones that you will almost certainly encounter.
Data quality
Subtle changes or shifts in data that might go unnoticed, or that have only a minor effect on traditional analytical processes, can have a significant effect on machine learning model accuracy.
As part of your ModelOps efforts, it’s important to properly assess the data sources and variables available for use by your models so you can answer questions like these (a minimal input check is sketched after this list):
- What data sources will you use?
- Would you be comfortable telling a customer that a decision was made based on that data?
- Do the data inputs directly or indirectly violate any regulations?
- How have you addressed model bias?
- How frequently are new data fields added or changed?
- Can you replicate your feature engineering in production?
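One practical way to catch such issues before they reach a model is an automated check of each scoring batch. Here is a minimal sketch assuming pandas DataFrames; the expected schema and value bounds are illustrative assumptions.

```python
import pandas as pd

# Illustrative expectations for the model's inputs.
EXPECTED_SCHEMA = {"age": "int64", "income": "float64", "region": "object"}
VALUE_BOUNDS = {"age": (18, 120), "income": (0.0, 1e7)}

def check_scoring_inputs(df: pd.DataFrame) -> list[str]:
    """Return a list of data quality problems; an empty list means OK to score."""
    problems = []
    # New, renamed or missing fields are a common cause of silent degradation.
    missing = set(EXPECTED_SCHEMA) - set(df.columns)
    unexpected = set(df.columns) - set(EXPECTED_SCHEMA)
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if unexpected:
        problems.append(f"new or renamed fields: {sorted(unexpected)}")
    # Type and range checks catch subtler shifts in the data itself.
    for col, dtype in EXPECTED_SCHEMA.items():
        if col in df.columns and str(df[col].dtype) != dtype:
            problems.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    for col, (lo, hi) in VALUE_BOUNDS.items():
        if col in df.columns and not df[col].between(lo, hi).all():
            problems.append(f"{col}: values outside [{lo}, {hi}]")
    return problems

batch = pd.DataFrame({"age": [34, 151], "income": [52_000.0, 61_000.0],
                      "region": ["east", "west"]})
for problem in check_scoring_inputs(batch):
    print("Data quality issue:", problem)  # flags age = 151
```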
Time to deployment
Because the model development/deployment cycle can be long, first assess how long that cycle is for your organization, then set benchmarks to measure improvement. Break down your process into discrete steps, then measure and compare projects to identify best and worst practices. Also consider model management software that can help automate some activities.
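As a sketch of what such measurement might look like, the snippet below logs one timestamp per life cycle stage, per model, so the elapsed time between stages can be compared across projects. The stage names, model name and dates are illustrative assumptions.

```python
from datetime import datetime, timezone

# One ordered list of (stage, timestamp) entries per model.
stage_log = {}

def record_stage(model_id, stage):
    """Log when a model enters a life cycle stage."""
    stage_log.setdefault(model_id, []).append((stage, datetime.now(timezone.utc)))

def cycle_report(model_id):
    """Print elapsed days between consecutive stages to spot bottlenecks."""
    stages = stage_log[model_id]
    for (prev, t0), (curr, t1) in zip(stages, stages[1:]):
        print(f"{prev} -> {curr}: {(t1 - t0).days} days")

# Illustrative backfilled dates for one hypothetical model.
stage_log["churn_model"] = [
    ("development", datetime(2021, 1, 4, tzinfo=timezone.utc)),
    ("validation", datetime(2021, 2, 15, tzinfo=timezone.utc)),
    ("deployment", datetime(2021, 4, 1, tzinfo=timezone.utc)),
]
cycle_report("churn_model")  # e.g., development -> validation: 42 days
```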
Degradation
Be on the lookout for problems like drift and bias. The answer to these problems is a strong approach to model stewardship in your organization. If everyone from model developers to business users takes ownership of the health of your models, these problems can be addressed before they affect your bottom line.
When to update your models
The most difficult aspect of machine learning is deploying models and maintaining their precision. It means continually searching for newer, better data to feed them to improve accuracy.
Is there a standard schedule you can set for retraining models that fall below your accuracy thresholds? The simple answer is “no.” Why? One reason is that models degrade at differing rates. Another is that the need for precision is relative to what you’re trying to accomplish. For example, where an inaccurate prediction is costly or dangerous, model updates may need to be continuous.
That’s why it’s important for you to understand your models’ levels of accuracy by monitoring results against your own accuracy benchmarks.
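One common pattern is a per-model accuracy floor that reflects the cost of a bad prediction, checked against the monitored results. Here is a minimal sketch; the model names and floor values are illustrative assumptions.

```python
# Accuracy floors per model, reflecting the stakes of a wrong prediction
# (illustrative values; as noted above, no single schedule fits all models).
ACCURACY_FLOORS = {
    "churn_model": 0.70,  # lower stakes: tolerate more degradation
    "fraud_model": 0.95,  # costly or dangerous errors: retrain quickly
}

def needs_retraining(model_id, recent_accuracy):
    """Flag a model for retraining when monitored accuracy falls below its floor."""
    floor = ACCURACY_FLOORS[model_id]
    if recent_accuracy < floor:
        print(f"{model_id}: accuracy {recent_accuracy:.2f} below floor {floor:.2f}")
        return True
    return False

needs_retraining("fraud_model", 0.93)  # triggers retraining
needs_retraining("churn_model", 0.73)  # still within tolerance
```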
The danger of ignoring ModelOps
The predictive power of these models, in combination with the availability of big data and increasing computational power, will continue to be a source of competitive advantage for smart organizations. Those who fail to embrace ModelOps will face increasing challenges in scaling their analytics and will fall behind the competition.