During the past few months, we have hosted a series of articles advancing the idea of a much-needed evolution of the risk framework. This evolution requires risk managers and Chief Risk Officers to rethink the tools and theories they use to manage risk. This is the sixth article in the series, following:
- “The evolution of risk management”
- “Moving toward optimization”
- “Quantitative math or management analytics?”
- “Shifting from a portfolio-theory view to an optimization view of risk management”
- “Risk management is capital optimization”
As we continue the discussion, it is important to note that the framework will evolve in one of two directions. The first approach is to allocate capital to a strategy or line of business based on stress loss. While portfolio theory allocates capital based on total returns and shares capital across strategies based on correlations, capital optimization based on stress loss or tail exposure requires each business's capital allocation to stand on its own. If capital is allocated based on stress loss, the risks are additive, because the underlying assumption is that in times of stress all of the strategies will suffer losses and correlations will approach one. The firm can then set a risk budget and ensure it is not exceeded as risk is added. The remaining problem is to estimate the stress loss to apply to each strategy.
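The additive arithmetic of the stress-loss approach can be sketched in a few lines. This is a minimal illustration only; the strategy names, stress-loss figures and risk budget below are all hypothetical.

```python
def allocate_stress_capital(stress_losses, risk_budget):
    """Allocate capital per strategy equal to its stress loss.

    Under the stress-loss approach, correlations are assumed to go to one
    in a crisis, so per-strategy allocations simply add up, and the sum
    is checked against the firmwide risk budget.
    """
    total = sum(stress_losses.values())
    return total, total <= risk_budget


# Hypothetical stress losses per strategy, in $MM
stress_losses = {"rates": 40.0, "credit": 25.0, "equities": 30.0}

total, within_budget = allocate_stress_capital(stress_losses, risk_budget=100.0)
print(total, within_budget)  # 95.0 True
```

Because the allocations are additive, adding a new strategy simply raises the total by that strategy's stress loss, making the budget check straightforward.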
The second approach is to allocate capital to a strategy to handle normal market conditions and then buy protection against losses on the strategy; the value of liquidity is the cost of that protection. First, it is necessary to lay out when, and under what conditions, the entity would need liquidity. Second, the option that provides these contingent payoffs must be valued. The option cost would then be added to the base capital to define the total capital needed to support positions.
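One way to put a rough number on the cost of such protection is to value a put on the strategy's value. The sketch below uses the standard Black-Scholes put formula purely as an illustration; the position value, protection level, rate and volatility are hypothetical, and, as the next paragraph notes, real systemic shocks are exactly the states such a model handles poorly.

```python
from math import log, sqrt, exp, erf


def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def bs_put(S, K, T, r, sigma):
    """Black-Scholes price of a European put (illustrative only)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)


# Hypothetical figures: $100MM position, protection at $90MM, one year
base_capital = 80.0
protection_cost = bs_put(S=100.0, K=90.0, T=1.0, r=0.02, sigma=0.35)
total_capital = base_capital + protection_cost
```

The total capital is the base allocation for normal conditions plus the price of the contingent protection, which is how the option approach makes the cost of liquidity explicit.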
Most likely, the firm could not acquire this option in the market, nor trust counterparties to make good on it in a systemic market shock. Thus, the firm would need to adjust its assets or capital support dynamically in an attempt to replicate these options; in times of shock, however, it would not be able to adjust quickly enough.
Both of these approaches require more capital than standard VaR calculations indicate. Moreover, the option approach is dynamic: as asset values change, the need for additional protection changes and the capital required to protect positions increases. Although the option approach is theoretically superior to the stress-loss approach, it is very difficult to implement. Every entity should attempt the analysis and undertake the exercise, but it is extremely difficult to articulate in advance the states that require protection and when they might occur; this is a computationally difficult problem with many interactions. If done, however, the option framework prices liquidity. The stress-loss approach provides this “insurance” protection indirectly, by giving the firm the staying power (capital) to weather shocks.
It is possible to use previous periods of shock to estimate the capital necessary to sustain positions. As the number of shock events increases, however, the amount of capital allocated to stress events tends to increase as well. It therefore becomes necessary to parameterize the calculation of stress capital to some probability level, such as the 99 percent stress-loss event, and to understand the underlying data and the actual frequency of occurrence.
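Parameterizing stress capital to a probability level amounts to taking a quantile of the historical shock losses. A minimal sketch, assuming a hypothetical sample of loss observations from past shock periods:

```python
def stress_capital(shock_losses, level=0.99):
    """Loss at the given percentile of a historical sample,
    interpolating linearly between order statistics."""
    xs = sorted(shock_losses)
    rank = level * (len(xs) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (rank - lo) * (xs[hi] - xs[lo])


# Hypothetical losses observed across past shock periods ($MM)
shock_losses = [12.0, 18.0, 25.0, 31.0, 44.0, 52.0, 60.0, 75.0, 90.0, 130.0]
capital_99 = stress_capital(shock_losses, level=0.99)
```

With only a handful of observations, as here, the estimate is dominated by the one or two largest losses, which is why the text stresses understanding the underlying data and the actual frequency of occurrence.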
The beauty of the stress-loss approach is that the capital needed to support positions does not change very much, so entities are unlikely to target volatility. Currently, when volatility is deemed to be low, entities increase risk and leverage to augment returns. Under the stress-loss approach, returns might change, but stress risk remains relatively constant; there is thus no tendency to target volatility.
This approach provides a better starting point for return on capital in any optimizing technology. That optimizing technology should take into account both normal correlations and the possibility of shocks. With shock possibilities included, the costs to adjust the portfolio enter the optimizer; strategies that are expensive to liquidate would see their returns reduced by expected liquidation costs.
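To make the idea concrete, here is a deliberately simplified sketch in which each strategy's expected return is netted of its expected liquidation cost and the stress-risk budget is filled greedily. A real optimizer would also model correlations and shock states; every name and figure below is hypothetical.

```python
def optimize(strategies, risk_budget):
    """Greedy allocation under a stress-capital budget.

    strategies: list of (name, expected_return, liquidation_cost, stress_capital).
    Ranks by return net of liquidation cost per unit of stress capital.
    """
    ranked = sorted(
        strategies,
        key=lambda s: (s[1] - s[2]) / s[3],  # net return per unit of stress capital
        reverse=True,
    )
    chosen, used = [], 0.0
    for name, ret, liq_cost, cap in ranked:
        if used + cap <= risk_budget:
            chosen.append(name)
            used += cap
    return chosen, used


# Hypothetical strategies: (name, expected return, liquidation cost, stress capital)
strategies = [
    ("rates", 6.0, 0.5, 40.0),
    ("credit", 5.0, 2.0, 25.0),   # expensive to liquidate
    ("equities", 7.0, 1.0, 30.0),
]
chosen, used = optimize(strategies, risk_budget=75.0)
print(chosen, used)  # ['equities', 'rates'] 70.0
```

Note how the high liquidation cost pushes the credit strategy to the bottom of the ranking even though its gross return is respectable, which is exactly the effect the paragraph above describes.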
It is our view that liquidity issues are of first order. This assumption leads to fundamentally different approaches to optimizing risk and return from approaches that ignore liquidity costs, or that assume the price of liquidity remains constant over time regardless of whether markets are calm or in shock. A firm will continue to use VaR to define short-term risks, and will rely on correlations to dampen return volatility. But it must also move to a risk-budget allocation based on stress, and consider the costs of adjusting positions when the opportunity set changes. Return generation must be integrated with risk management. This means that analytics must be run on a daily basis to assist allocation decisions; in fact, they might need to be run more frequently if market prices change dramatically. This will help traders hedge and evaluate positions.
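The short-term VaR the firm continues to run can be as simple as a one-day historical-simulation quantile of daily P&L. A minimal sketch, using a simulated and entirely hypothetical P&L series:

```python
import random


def historical_var(pnl, level=0.99):
    """One-day historical-simulation VaR: the loss threshold exceeded
    on roughly (1 - level) of historical days."""
    losses = sorted(-p for p in pnl)  # express losses as positive numbers
    idx = min(int(level * len(losses)), len(losses) - 1)
    return losses[idx]


# Hypothetical daily P&L series (standard normal, for illustration only)
random.seed(0)
pnl = [random.gauss(0.0, 1.0) for _ in range(1000)]
var_99 = historical_var(pnl)
```

Run daily, or intraday when prices move sharply, a measure like this covers the short-term view while the stress-based risk budget covers staying power through shocks.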
Looking at the combined criteria in an option decision framework will allow a firm to make each investment path stand on its own. Building on this combined analysis, applying scenario analysis at the factor level across the entire framework is critical to having a consistent set of parameters; consistency in the stress parameters helps produce a consistent effect across each of the analyses applied.
By developing a programmatic method for working through large computational matrices, we can rely less on the elegance of the mathematics and begin to combine effects across a number of areas. Portfolio views of risk and return can become firmwide views of combined portfolios. Instead of determining liquidity in the market for a limited set of instruments, or estimating volatility on a limited basis, we can provide an interrelated view of the potential dependencies across instruments, portfolios and markets.
In “Implementation and management of a high-performance risk environment,” the next and final article of this series, we’ll discuss the final stage in evolving the high-performance risk management framework – evolving the analytical process. Today’s advances in computing allow firms to construct highly adaptive models. Additionally, model performance and calibration can now be a dynamic process. High-performance analytics move risk decision making to near real time.
For more complete reading on this topic, download the white paper, “Evolving from Quantitative Risk Management to a High-Performance Risk Management Analytic Framework.”