Building bulletproof models
Including macroeconomic factors provides more accurate stress testing
Within any financial institution, there’s an inherent tension between conservative instincts and profit-seeking motives. Regulators pressure institutions to accumulate a larger equity base to absorb losses and stay solvent. Investors, of course, want to keep that equity base low (and out in the market) to generate higher returns.
To strike that balance and guide their strategic decisions, financial institutions increasingly rely on risk models built with data mining techniques. For instance, the minimum equity (buffer capital) and provisions that a financial institution holds are governed by a range of models covering credit risk, market risk, operational risk, fraud risk and insurance risk.
Most institutions rely on data mining techniques to build these models. Unfortunately, almost all of these models are imperfect – and some are badly flawed in material ways. These errors can have a broad impact on profitability, solvency and shareholder value – even on the macro economy and society. But what’s the alternative? Usually, managers address those imperfections through conservative parameter choices. For example, if the statistically estimated probability of default is 3 percent, you might use 5 percent in strategic decisions to account for model risk.
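To make that margin of conservatism concrete, here is a minimal sketch using the standard expected-loss formula (EL = PD × LGD × EAD). All of the figures – the 45 percent loss given default and the $1 million exposure – are hypothetical, chosen only to show how the buffer propagates into the loss estimate.

```python
# Hypothetical illustration of a margin of conservatism: the statistically
# estimated probability of default (PD) is 3%, but strategic decisions use
# 5% to buffer against model risk.

def expected_loss(pd_rate, lgd, ead):
    """Standard expected-loss formula: EL = PD * LGD * EAD."""
    return pd_rate * lgd * ead

lgd = 0.45        # loss given default: assume 45% of exposure is lost
ead = 1_000_000   # exposure at default: assume a $1M portfolio segment

el_estimated = expected_loss(0.03, lgd, ead)     # model's point estimate
el_conservative = expected_loss(0.05, lgd, ead)  # with margin of conservatism

print(el_estimated)
print(el_conservative)
```

The gap between the two figures is the price the institution pays, in extra provisions, for not fully trusting its own model.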
Strengthening your models
From my perspective, most models developed with data mining can be strengthened by incorporating macroeconomic factors, collective behaviors and improved validation. A financial institution's model must do more than score entities by the likelihood of an event – such as a loan default or account churn. We need to account for macroeconomic factors as well, and we achieve that through model calibration. Whereas data mining lets us group and discriminate classes of internally focused data (typically customer data), calibration looks beyond the four walls. A calibrated model ties in historical data, time series analyses, simulation techniques, Markov chains and subjective expectations of future events (e.g., GDP contraction vs. expansion) to produce more accurate projected probabilities.
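As a rough illustration of what calibration adds, the sketch below weights scenario-conditional default probabilities by subjective probabilities of macroeconomic outcomes. Every number – the scenario weights and the per-grade PDs – is an assumption for illustration, not an estimate from any real portfolio.

```python
# Minimal sketch of scenario-based calibration: the data-mined model ranks
# borrowers into grades; calibration then blends each grade's scenario-
# conditional PD using subjective probabilities of macro outcomes.

scenario_probs = {"expansion": 0.6, "baseline": 0.3, "contraction": 0.1}

# Assumed conditional PDs per rating grade under each macro scenario.
conditional_pd = {
    "A": {"expansion": 0.010, "baseline": 0.015, "contraction": 0.040},
    "B": {"expansion": 0.030, "baseline": 0.050, "contraction": 0.120},
}

def calibrated_pd(grade):
    """Probability-weighted PD across macro scenarios."""
    return sum(scenario_probs[s] * conditional_pd[grade][s]
               for s in scenario_probs)

print(round(calibrated_pd("A"), 4))
print(round(calibrated_pd("B"), 4))
```

Shifting weight toward the contraction scenario immediately raises every grade's projected PD – which is exactly the forward-looking behavior that a purely discriminatory model lacks.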
By introducing macroeconomic factors into a model, we can perform a range of data-driven stress tests. The fact is, correlations among customer segments can often break down during stress events (typically a negative event that happens once in 25 years).
For sensitivity analysis, we can assess the effects of changing single or multiple variables. We can perform a scenario analysis using either historical or hypothetical data. For instance, what happens if GDP contracts for three straight years or housing prices drop 5 percent? Simulations that integrate macroeconomic factors provide an ideal way to strengthen these risk models.
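One simple way to run such a scenario analysis is to link the portfolio default probability to macroeconomic drivers through a logistic function and then feed it baseline and stressed scenarios. The functional form and every coefficient below are purely illustrative assumptions, not estimated values.

```python
# Hedged sketch: a macro-linked PD model for scenario analysis. PD is
# assumed to follow a logistic link in GDP growth and house-price change;
# the intercept and slopes are illustrative, not fitted.
import math

def scenario_pd(gdp_growth, house_price_change,
                intercept=-3.5, b_gdp=-25.0, b_hp=-4.0):
    """Map a macroeconomic scenario to a portfolio default probability."""
    z = intercept + b_gdp * gdp_growth + b_hp * house_price_change
    return 1.0 / (1.0 + math.exp(-z))

# Baseline: 2% GDP growth, 3% house-price appreciation.
baseline = scenario_pd(gdp_growth=0.02, house_price_change=0.03)
# Stress: GDP contracts 1%, housing prices drop 5%.
stressed = scenario_pd(gdp_growth=-0.01, house_price_change=-0.05)

print(round(baseline, 4))
print(round(stressed, 4))
```

For sensitivity analysis, vary one input at a time; for scenario analysis, move several together – the same function serves both.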
In addition to discrimination and calibration, consider the following key requirements as you build your model:
Interpretability – If the users of the model don’t understand its underlying logic, they won’t use it. A so-called “white-box” model is easy to understand. For instance, such a model may call for approving applicants over age 35, with incomes of more than $75,000, who are currently employed. That conclusion makes sense to most people.
Justifiability – This is subjective but crucial. If your model’s conclusions defy intuition, few will use it or believe its output. For example, a model that posits that low-income, unemployed borrowers are good credit risks flies in the face of intuition. Neural networks, which perform multiple nonlinear transformations with complex formulas, produce “black-box” models of exactly this kind – and these often see lower adoption rates because they’re hard to justify.
Operational Efficiency – How much effort is needed to evaluate, monitor or retrain the model?
Economic Cost – Get a clear sense of the total cost to construct and run your model. This can include expenses associated with gathering input data and evaluating the model. Ensure it makes sense to buy and use external data or sub-models (such as FICO).
Regulatory Compliance – Be certain that your model aligns with key regulatory frameworks such as Basel II and Solvency II.
Validation – Quantitatively backtest and benchmark your model, and validate it qualitatively for aspects such as data quality, model design, documentation and corporate governance.
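As one example of a quantitative backtest, a standard binomial test (one common choice, not a mandate of any particular framework) checks whether the observed number of defaults is statistically consistent with the PD the model predicted. The portfolio figures below are hypothetical.

```python
# Sketch of a binomial backtest using the normal approximation: compare
# observed defaults against the count implied by the model's predicted PD.
import math

def binomial_backtest_z(n, defaults, predicted_pd):
    """Z-score of observed vs. expected defaults; large positive values
    suggest the model underestimates risk."""
    expected = n * predicted_pd
    std = math.sqrt(n * predicted_pd * (1 - predicted_pd))
    return (defaults - expected) / std

# Hypothetical portfolio: 10,000 loans, predicted PD of 3%, 370 defaults.
z = binomial_backtest_z(10_000, 370, 0.03)
print(round(z, 2))  # well above ~2, so the 3% PD looks too optimistic
```

A z-score this far into the tail would normally trigger a review of the model's calibration rather than a one-off parameter tweak.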
As financial institutions increase their reliance on these sophisticated models, the steps they take to strengthen the validity and quality of model output have a direct bearing on the decisions they make – and thus on the strength and safety of the global financial system. Systematically identifying and eliminating potential sources of error, tying in macroeconomic data, and rigorously and continually refining the data are essential to building models that reduce risk and increase efficiency and profitability.
Assistant Professor, Department of Decision Sciences and Information Management, Katholieke Universiteit Leuven, and School of Management, University of Southampton