Firms that are among the first to adopt a dynamic predictive analytics framework can gain a competitive advantage. That advantage comes from aligning three elements: a new capital optimization framework, a high-performance computation environment and a scenario-based adaptive learning process. Earlier in this series we discussed the need for a new capital optimization methodology and the deployment of a high-performance computational environment. This final article covers the last leg of a high-performance risk management capability: the evolution of the analytical process.
As we have noted throughout this series, advances in modeling approaches and technology give firms the ability to construct highly adaptive models that rely less on the traditional technique of minimizing parameters before a valid model can be constructed for a particular risk measurement or optimization scenario. With the modeling process automated, and model outputs monitored against expected results, model performance evaluation and model calibration can become far more dynamic.
Traditionally, as markets changed, domain experts would analyze model outputs for variation, and expert judgment determined whether a change in model performance was due to systemic changes in the market or to other risk factors. Model adjustments were made by hand, and a series of back tests determined whether the new parameters were more predictive of the expected result. The problem with this process is that it is time consuming and hinges on the modeling team's skill and experience. And because computation time is a scarce resource, experimentation or adaptive learning to tune model parameters is not a process most firms are willing to undertake.
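To make the contrast concrete, here is a minimal sketch of what an automated monitor-adjust-backtest cycle might look like. The toy exponential smoother, the parameter grid and the error metric are our illustrative assumptions, not the methodology of any particular platform:

```python
import itertools
import statistics

def backtest_error(params, history):
    """Score a parameter set by one-step-ahead forecast error over
    held-out history. The 'model' here is a toy exponential smoother;
    a real risk model would slot into the same loop."""
    alpha = params["alpha"]
    level, sq_errors = history[0], []
    for observed in history[1:]:
        sq_errors.append((observed - level) ** 2)
        level = alpha * observed + (1 - alpha) * level
    return statistics.mean(sq_errors)

def recalibrate(history, param_grid):
    """Replace the manual adjust-and-backtest cycle with an exhaustive
    search over a bounded parameter grid, keeping the best performer."""
    candidates = [dict(zip(param_grid, values))
                  for values in itertools.product(*param_grid.values())]
    return min(candidates, key=lambda p: backtest_error(p, history))

# Illustrative usage: rerun whenever output monitoring flags drift.
prices = [100.0, 101.2, 99.8, 102.5, 103.1, 101.9, 104.4, 103.7]
best = recalibrate(prices, {"alpha": [0.1, 0.3, 0.5, 0.7, 0.9]})
print(best)  # best alpha from the grid, e.g. {'alpha': 0.9}
```

The loop is trivially cheap here, but the same pattern, run on a high-performance environment, is what turns recalibration from an occasional manual project into a routine automated step.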
To evolve beyond a traditional reporting and analytics function to a more predictive analytical framework, a firm must be able to incorporate dynamically changing market, portfolio and event information. An option analysis framework also helps bound the problem by establishing the range of parameters that may be adjusted. In other words, while we advocate a dynamic process for evolving risk factor inputs, the efficient way to run that process is to bound the set of constraints by making the decision construct a structured one.
Scenarios define the levels and types of dynamic inputs for event types that have already been encountered or that merit consideration. Base-level reports evolve into robust measurement, robust measurement evolves under a decision or option framework, and when the underlying factors are allowed to move dynamically within this construct, a predictive framework is achieved.
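As a sketch of how scenarios can bound the dynamic inputs, consider the following. The scenario name, factor names and ranges are invented for illustration; the point is only that every proposed input adjustment is clamped to a scenario-defined range:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """A scenario bounds which risk factor inputs may move, and by
    how much: the structured decision construct described above.
    Names and ranges below are purely illustrative."""
    name: str
    factor_bounds: dict  # factor -> (low, high)

    def bound(self, proposed: dict) -> dict:
        """Clamp proposed dynamic inputs to the scenario's ranges,
        ignoring factors the scenario does not allow to move."""
        bounded = {}
        for factor, value in proposed.items():
            if factor in self.factor_bounds:
                low, high = self.factor_bounds[factor]
                bounded[factor] = min(max(value, low), high)
        return bounded

liquidity_shock = Scenario(
    name="liquidity shock",
    factor_bounds={"credit_spread": (0.01, 0.08), "haircut": (0.0, 0.3)},
)
print(liquidity_shock.bound(
    {"credit_spread": 0.12, "haircut": 0.1, "fx_vol": 0.5}))
# {'credit_spread': 0.08, 'haircut': 0.1} -- fx_vol is not dynamic here
```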
Org chart changes
So how does a firm make this type of alignment occur? Incentives to management teams are critical. Employees will do what they are incented to do, period. Without an incentive to work within the framework (outlined in “Risk management: Shifting from a portfolio-theory view to an optimization view of risk management”), the firm will revert to its previously fragmented processes.
Extending the incentive discussion to enterprisewide alignment, a few models illustrate best practice. Given the large number of interaction points between business units, asset managers, risk managers and capital providers, a fully vertically integrated team would not be cost-effective because it would duplicate functions, and those functions are still needed to serve the firm in other areas. The organizational approach required is a federated one across the functions listed.
Because each area contributes significantly, a very large firm should consider a liaison function feeding a larger, dedicated working group. The overall group should be held accountable at the CEO and board level given the expansive nature of the process; if any one component (risk, CFO, business unit, etc.) were in charge, it could easily dominate the decision-making process. This team could also sit within an already functioning group, such as an ALCO committee.
Last in this discussion is the implementation of the technology. Whereas the organizational side of the equation needs a federated approach, the computational side should be centralized. Traditionally, when the financial services industry faces a large risk or capital problem, it treats it as a reporting problem rather than an analytics problem. Hundreds of millions of dollars have been spent fork-lifting data from transactional systems into divisional or point risk-exposure analytics engines, with the outputs then bifurcated further into separate reporting and analysis engines.
There has been pioneering work at SAS to eliminate this slow, outdated process and to deliver these capabilities from one common risk, capital and management platform. Portfolio data and market inputs will always come from disparate sources, but the various analytical methods outlined in this series can converge within a single analytic environment.
The cost of not moving to an integrated, common platform is that the highly deconstructed process of managing a distributed risk and capital environment demands a large, cost-ineffective management support capability and takes days to compute. Along the way, data quality becomes suspect, which invites inconsistencies in the analytical results. SAS has addressed this problem with a high-performance risk management appliance that provides a common platform for the following:
- An integrated risk and capital analytics mart that can manage millions of rows of asset data and thousands of risk and market factors.
- An integrated risk analytics engine that is efficient at computing very large risk factor matrices, finishing in minutes what normally takes days (a simplified sketch of this parallel pattern follows the list).
- An integrated management analytics environment that lets business users drive scenarios through interactive parameter selection and execute, on demand, any combination of the option analysis models described earlier.
- An integrated hardware environment that utilizes commodity hardware to provide thousands of processors at a tenth of the cost of traditional technology platforms.
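The appliance itself is proprietary, but the computational shape it exploits, many independent scenario revaluations fanned across many processors, can be suggested in a few lines of standard Python. The linear-sensitivity portfolio below is a toy stand-in for a real risk engine:

```python
from multiprocessing import Pool
import random

def revalue(args):
    """Revalue a portfolio under one scenario's risk factor shock.
    The linear-sensitivity model is a toy stand-in for a full
    risk engine."""
    positions, shock = args
    return sum(value * (1 + sensitivity * shock)
               for value, sensitivity in positions)

if __name__ == "__main__":
    random.seed(7)
    positions = [(random.uniform(1e4, 1e6), random.uniform(-2, 2))
                 for _ in range(100_000)]
    shocks = [s / 100 for s in range(-50, 51)]  # -50% .. +50%
    # Fan independent scenario revaluations across all local cores --
    # the same embarrassingly parallel shape that a grid of commodity
    # machines exploits at far larger scale.
    with Pool() as pool:
        results = pool.map(revalue, [(positions, s) for s in shocks])
    worst = min(zip(shocks, results), key=lambda x: x[1])
    print(f"worst scenario: shock={worst[0]:+.2f}, value={worst[1]:,.0f}")
```

Because each scenario is independent, adding processors cuts wall-clock time almost linearly, which is what makes on-demand, business-user-driven scenario execution practical.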
With a smarter approach to the technology problem and all of the core components already integrated on a common platform, less time and money go into infrastructure, and more go into analyzing risk problems and optimizing the capital deployed.
Much has been said in the past two years about the human capital and time consumed, and the financial capital potentially lost, in responding to market shocks such as the subprime, Dubai and Greece crises. By implementing a more inclusive and robust framework that spans the capital, market and liquidity dimensions of risk, a firm can react to market shocks with more precision and better exploit the arbitrage opportunities they create. Because most firms' risk exposure calculations run stepwise rather than concurrently with the decision of whether to stay in, exit or increase asset positions in the market, millions of dollars are lost in both the operational support process and the capital exposure process.
Shifting to a concurrent, proactive process that integrates the latest market information, portfolio updates, capital returns and a market view of liquidity on a scenario basis provides a more structured decision capability, allowing firms to evaluate product structure and expected return against market inflections far more consistently. Beyond closer alignment with risk and capital management policy and the consistency regulators expect, it gives the firm an analytical framework with a repeatable process for making decisions.
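One way to picture that concurrent, scenario-based decision capability: each scenario delivers its market, return and liquidity inputs together, and a single rule maps the snapshot to the stay, exit or increase decision mentioned above. The field names and thresholds here are illustrative assumptions, not policy recommendations:

```python
from dataclasses import dataclass

@dataclass
class ScenarioView:
    """One concurrent snapshot per scenario: the latest market move,
    the product's return hurdle, and a liquidity score arrive together
    rather than in stepwise batches. All values are illustrative."""
    name: str
    market_move: float      # shocked return on the position
    expected_return: float  # capital return hurdle for the product
    liquidity_score: float  # 0 = frozen market, 1 = fully liquid

def decide(view: ScenarioView) -> str:
    """Map one scenario view to a stay/exit/increase decision: a
    consistent, repeatable rule instead of ad hoc judgment."""
    if view.liquidity_score < 0.2:
        return "exit"  # may be unable to trade out if conditions worsen
    if view.market_move >= view.expected_return:
        return "increase"
    return "stay"

for view in [ScenarioView("rates +200bp", -0.04, 0.06, 0.7),
             ScenarioView("funding freeze", 0.01, 0.06, 0.1),
             ScenarioView("spread rally", 0.09, 0.06, 0.8)]:
    print(view.name, "->", decide(view))
```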
This is the final article in the series based upon our joint white paper, Evolving from Quantitative Risk Management to a High-Performance Risk Management Analytic Framework. If you’ve missed any of the articles in the series, please go back and read them at the links provided below.