Part one of a seven-part series documenting the steps toward a high-performance risk management framework.
As a result of the recent market shocks, banks, capital markets firms and asset managers are rethinking how they integrate risk and reward tradeoffs. To do this, they are applying portfolio theory and planning for market shocks and their impact on the business and its divisions. Leading financial entities are linking portfolio risk with the return on capital and integrating market liquidity into their analyses in an attempt to gain a more complete view of risk and return. Optimizing the capital deployed has become the new role of risk management.
Risk management has traditionally relied on expert judgment coupled with a narrow use of quantitative techniques. Those techniques are now being supplanted by sophisticated analytics that make them more transparent and more accessible to decision makers, embedding them in an analytic decision framework that optimizes exposures and the return on capital.
Predictive, on-demand scenarios provide an up-to-the-minute, scenario-optimized view of risk and return, allowing executives to understand and allocate capital across the firm's asset classes and divisions. By incorporating all elements of the risk and reward equation – exposures, return, capital reserves, capital deployed in various forms, firm liquidity and market liquidity – we now have the opportunity to provide and grow high-performance risk management capabilities within firms.
Portfolio theory has dominated the thinking of capital-management strategy planning, and value at risk (VaR) has dominated the quantification of risk management exposures within the portfolio theory construct. Traditionally, accounting and planning for liquidity shocks and market dislocations has been handled indirectly through techniques such as scenario analyses or forms of risk budgeting. However, scenarios were typically constructed only to deal with interest rate shocks or changes in top-level economic indicators.
Although common risk techniques such as VaR and probability of default are still employed, they fail to anticipate systemic changes in the structure of markets. These techniques assume that the volatility of the market and correlations among assets change slowly or not at all; they are not designed to handle systemic negative changes caused by jumps in the availability of liquidity or jumps in market values.
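To make that assumption concrete, here is a minimal sketch of parametric (variance-covariance) VaR for a hypothetical two-asset portfolio. All numbers are illustrative. The point is that this classic calculation treats volatilities and correlation as constants, so a VaR calibrated to a calm regime will substantially understate losses when volatilities double and correlation jumps toward 1, as tends to happen when liquidity evaporates:

```python
import math

# Illustrative two-asset portfolio: position sizes in millions of dollars.
weights = [60.0, 40.0]

def parametric_var(weights, vols, corr, z=2.33):
    """One-day 99% parametric VaR assuming normally distributed returns.

    vols: daily volatility per asset; corr: pairwise correlation.
    This is the classic variance-covariance VaR, which treats both
    volatilities and correlation as fixed inputs."""
    w1, w2 = weights
    s1, s2 = vols
    variance = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * s1 * s2 * corr
    return z * math.sqrt(variance)

# Calm regime, calibrated from recent benign history.
calm = parametric_var(weights, vols=[0.01, 0.015], corr=0.3)

# Stressed regime: volatilities double and correlation jumps toward 1.
stressed = parametric_var(weights, vols=[0.02, 0.03], corr=0.9)

print(f"VaR, calm regime:      {calm:.2f}")       # ~2.25
print(f"VaR, stressed regime:  {stressed:.2f}")   # ~5.45
print(f"Understatement factor: {stressed / calm:.1f}x")
```

In this toy calibration the stressed-regime VaR is roughly 2.4 times the calm-regime figure, even though the positions are unchanged; a static-covariance measure never registers the shift.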
As we emerge from the crisis that began in 2008, not only have banks had to boost capital reserves, but sovereign debt burdens have also constricted the flow of capital. This has the potential to heighten market volatility, and unanticipated events could amplify volatility and market shocks. A reduced ability to acquire capital for investment or to liquidate a position may accelerate default events over the next few years as markets adjust to systemic changes in market structure.
Although most firms use dynamic measures such as VaR to gauge the sensitivity of results to short-run market factor movements, they realize that they need to overlay these measures with additional capital and static reserves to handle shocks. Entities correctly recognize that short-term measures alone are inadequate. However, new work is needed to measure the size of the required risk reserves or cushions – that is, how to adjust them dynamically and how to partition the cushion among the various asset categories within an entity to make more accurate risk and return tradeoffs. This is a new direction for research. The ability to enhance risk methodologies stems from advances in technology, such as the SAS high-performance computing environment, that remove the computational complexity associated with multifactor, cross-firm, full-valuation methods. Large regression computations spanning thousands of risk factors and thousands of market states can now be completed in minutes rather than days. Breakthroughs in methodology can now be realized because of breakthroughs in technology.
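As a sketch of that research direction – sizing a reserve cushion from scenario losses and partitioning it across asset categories – here is a purely illustrative Python example. All category names, distributions and numbers are hypothetical; a real implementation would use full-revaluation P&L across thousands of market states rather than a toy simulator:

```python
import random

random.seed(7)

# Hypothetical asset categories with simulated one-period P&L paths.
categories = ["credit", "rates", "equities"]

def simulate_pnl(scale, n=10000):
    """Fat-tailed P&L via a simple normal mixture:
    95% calm draws, 5% stressed draws with 5x the volatility."""
    pnl = []
    for _ in range(n):
        vol = scale * (5.0 if random.random() < 0.05 else 1.0)
        pnl.append(random.gauss(0.0, vol))
    return pnl

paths = {c: simulate_pnl(s) for c, s in zip(categories, [3.0, 1.0, 2.0])}
n = len(next(iter(paths.values())))

# Firm-level P&L per market state is the sum across categories.
firm = [sum(paths[c][i] for c in categories) for i in range(n)]

# Reserve cushion: average loss in the worst 1% of states
# (an expected-shortfall-style measure).
worst = sorted(range(n), key=lambda i: firm[i])[: n // 100]
cushion = -sum(firm[i] for i in worst) / len(worst)

# Partition the cushion by each category's average loss in those same
# tail states; these contributions sum exactly to the total cushion.
contrib = {c: -sum(paths[c][i] for i in worst) / len(worst)
           for c in categories}

print(f"Total cushion: {cushion:.1f}")
for c in categories:
    print(f"  {c:9s} share: {contrib[c] / cushion:.0%}")
```

Because each category's contribution is measured in the same tail states that define the firm-level cushion, the allocation is additive by construction, which is what makes it usable for the partitioning and risk/return tradeoffs described above.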
Risk management comprises three tools with unique benefits and costs:
- Diversification. Financial entities concentrate their holdings to earn returns and serve client needs. However, these concentrations have a cost in the loss of diversification.
- Reserves. Likewise, a large reserve held in the form of low-yielding bonds (including low levels of debt) protects against shocks but limits investments and expected returns.
- Insurance. The “correct” form and amount of insurance is unknowable. It can also be costly if hedging needs and timing are unknown. For example, financing activities with overnight wholesale funds is usually less expensive than term financing – until the moment when term funding would have saved the entity from collapse.
Finally, each tool might be insufficient at times of market shocks or systemic risk.
Moving from theory to implementation issues, market participants relied too heavily on recent market experience (during the 1990s and 2000s) to frame their views on risk and to calibrate their models. They concluded incorrectly that the likely need – and the resultant cost – to adjust their holdings and reduce risks in light of shocks and illiquid markets was extremely low. They relied almost exclusively on the advantages of diversification across uncorrelated firm activities and concluded that risks were controlled within the isolated portfolios; they relied too heavily on a limited set of quantitative techniques to measure risk and to plan how to react to unexpected market conditions.
They also relied extensively on external monitors, such as the rating agencies, to validate risks. The rating agencies failed to incorporate multiple, simultaneous failures in their models; they also overlooked the fact that recent market event data might not tell the complete story, or that the quality of the composition of structured products might deteriorate over time as entities reverse-engineered them to “just pass” to receive a rating of “AAA.”
Therefore, the problem is that firms simply maximized along a truncated view of possible investment paths, assuming that recent volatilities and observed correlations were the best indicators of future volatility and correlations. Additionally, firms viewed planning for shocks and changes in the opportunity set as unnecessary or of little value. Risk had been tamed, and risk officers had cried wolf too many times to be heard. It turns out that observed low portfolio volatilities largely contributed to low observed correlations. As a result, regulators and market participants believed that the risks observed years ago were the risks of the past; risks today were “understood” and would remain as such into the foreseeable future. Market participants responded to this belief by increasing their own risks through leverage, concentrating holdings (becoming less diversified) and holding riskier positions, and reducing contingency reserves for shocks. Contingency reserves were reduced because risk could be either diversified or distributed through securitized products. Flexibility planning in the form of capital optimization became less necessary with reduced uncertainty.
If risk had been controlled, these were the correct planning decisions. In retrospect, relying too heavily on recent data – and even ignoring recent minishocks – was the wrong decision. We had gone through a long period of market quiescence; risk had not been tamed. The business cycle remains; information sets are too vast to understand all of the interactions necessary to tame risk. This is a key lesson.
One additional dimension to note was the role of the sovereign countries in international capital flows. Stability at the sovereign level, in terms of having an adequate and stable central bank funding mechanism, enabled multinational banks to push capital across countries. With stable banking and sovereign central banking systems, the perception was that markets within the European Economic Union and North America would see continuing flows from the emerging financial markets in the Middle East, China, Russia and Latin America. Ready access to capital from these markets, which only 10 years earlier did not participate in global capital markets, fueled the notion that risk management was merely a set of tools for complying with regulatory measures such as the Basel II Accord.
Flexibility planning – including liquidity, asset allocation and capital structure – was relegated to lower-level status and to the risk officer – not to senior planners. The focus on risk management was compliance for regulatory purposes. This banking stability allowed a few large global players to create a system for sourcing capital from new markets and distributing newly formed securitized products of repackaged risk back into these markets. Again, the notion of easy and available distribution systems for transferring risk became paramount.
We want to move forward and learn from these observations to propose a new framework geared toward optimization and the tradeoffs for achieving the required returns for adequate levels of invested capital. In the weeks ahead, we will continue in this series of articles excerpted from “Evolving from quantitative risk management to a high-performance risk management analytic framework.”
Continue in this series by reading “Moving toward optimization.” In that article, we discuss reasons firms should consider capital optimization – to reduce risk, increase flexibility and improve competitiveness.