Most investment banks are now under considerable pressure from their clients, shareholders and boards to reduce their trading costs and improve their return on capital employed in their trading operations. There is also strong pressure from the regulators to improve the quality of their trading books and increase the capital cover set against the perceived risks on these books.
Add to this the fact that many investment banks are seeing significantly reduced margins across a number of their trading operations, and it is no wonder that ‘change the bank’ is now at the top of the board’s agenda.
Banks currently gather risk information from their numerous front office trading systems at the end of day and then stage it through a large number of intermediate steps and processes that aggregate and summarize the data before it is surfaced in the risk reports the following day. This is a time-consuming process that causes considerable delay in the finalization and circulation of the reports.
How can technology help overcome these challenges? We offer six recommendations based on our work with investment banking customers around the world:
1. Create a holistic view of risk
In most investment banks today, market and credit risk are treated as separate operational functions, with little, if any, interaction. In many operations, much of the risk and pricing information still resides within the numerous trading systems used by the front office.
Many try to tackle this problem by investing in cumbersome data warehouses that take many years to implement and, by their very nature, are obsolete the day they are put into production. These expensive ventures consume huge amounts of valuable resources while delivering little genuine value.
The best, most cost effective way of gaining a holistic view of risk is to leave the data at its source and federate all the relevant data at the lowest level of granularity via a low cost, highly scalable ‘data fabric.’ Essentially, the system stages only the relevant data into massively scalable, commodity, in-memory information stores where high-performance risk analytics can analyze billions of rows of data in a matter of seconds.
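As a minimal sketch of this federated, in-memory approach (all system names, fields and figures below are hypothetical), leaf-level rows from several source systems are staged into a single store and aggregated on demand:

```python
from collections import defaultdict

# Hypothetical leaf-level risk rows from two front-office systems.
source_a = [{"book": "rates", "trade_id": "A1", "pv": 1200000.0},
            {"book": "rates", "trade_id": "A2", "pv": -300000.0}]
source_b = [{"book": "fx", "trade_id": "B7", "pv": -450000.0}]

# "Federate": stage the leaf rows, untouched, into one in-memory store.
fabric = source_a + source_b

def aggregate(rows, key, measure):
    """Roll any measure up any dimension on demand -- no pre-built cubes."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[measure]
    return dict(totals)

print(aggregate(fabric, "book", "pv"))
```

Because the store keeps the lowest level of granularity, the same rows can be re-aggregated along any other dimension without another overnight load.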
2. Increase processing speeds
Once trading on a desk has been completed and the trading book reconciled and “closed,” the book needs to be processed and passed to the risk management system for analysis and reporting. Traditionally this is done in batch mode via “end of day” (EOD) processing, ready for reporting the following day. Errors, omissions and corrections are then dealt with on the following day, a process that can take an inordinate amount of time. This long-winded process, and the consequent delay in the availability of comprehensive risk reports, is no longer acceptable to either senior management or the business.
With the limited time window now available for a global “batch” EOD, such an approach is simply no longer practical. By using the latest complex event processing (CEP) technologies, each “book closure” event can be treated separately and processed semi-independently of other similar events. These “book closed” processes can then run in parallel through the bank’s high-performance pricing engines (in the majority of cases) before being routed on to the high-performance risk analytics and reporting system.
This means that instead of waiting until the following day to get the latest reports, the risk management team can have an up-to-date view of their risk exposures on an intraday basis using the latest blended view of the “interim” and “official” risk figures.
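A rough sketch of this event-driven pattern, using Python’s standard thread pool as a stand-in for the bank’s pricing and risk engines (the book names and the pipeline function are purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def price_and_analyse(book):
    # Stand-in for the pricing + risk-analytics pipeline that a real
    # "book closed" event would trigger for that book alone.
    return {"book": book, "status": "risk figures published"}

# "Book closed" events arriving intraday, each handled independently
# instead of waiting for one global end-of-day batch.
closed_books = ["rates", "credit", "fx", "equities"]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(price_and_analyse, closed_books))

for r in results:
    print(r["book"], "->", r["status"])
```

The point of the sketch is the shape, not the mechanics: because no book waits on any other, interim risk figures become available book by book throughout the day.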
3. Improve data transparency
Current risk systems, for performance reasons, typically aggregate risk data at the book or portfolio level and retain little, if any, detail at the trade or leaf level. While this lets the risk department produce its reports quickly, the approach is fraught with challenges. At the very least it obscures the underlying information and makes it very hard for the front office and the risk managers to reconcile discrepancies in the underlying data without considerable time, effort and angst, further widening the information “confidence gap.”
By taking a “big data” approach to the problem and holding the data right down to the leaf level, including the lineage and provenance of the data, the “confidence gap” can be all but eliminated and high levels of transparency achieved. The business, senior management and indeed the regulators can then be much more confident of the veracity of the figures used in the risk reports.
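A sketch of what leaf-level records with lineage might look like (all fields, system names and values are hypothetical): each row carries its source system and load time, so any aggregate can be drilled straight back to the trades behind it.

```python
# Leaf rows that keep their lineage alongside the risk figures.
leaves = [
    {"trade_id": "T1", "book": "rates", "pv": 500.0,
     "source": "system_a", "loaded_at": "2013-06-03T18:05Z"},
    {"trade_id": "T2", "book": "rates", "pv": -200.0,
     "source": "system_b", "loaded_at": "2013-06-03T19:12Z"},
]

# The aggregate figure that appears in a report...
book_pv = sum(r["pv"] for r in leaves if r["book"] == "rates")

# ...and the provenance behind it: which trades, from which systems.
provenance = [(r["trade_id"], r["source"])
              for r in leaves if r["book"] == "rates"]

print(book_pv, provenance)
```

Keeping lineage at the leaf means a challenged number in a report can be answered with the exact trades and source systems that produced it, rather than with a rebuilt extract.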
4. Enrich data quality
No data set is ever perfect. With the right level of care and attention, however, it can be of high quality and integrity. But quality takes time, especially where old-style “legacy” batch processing systems are used.
By deploying “big data” analytics and using a “bottom up” approach to the issue of data quality, errors and anomalies can be quickly and easily detected and prompt corrective action taken.
With the latest CEP technologies, these corrections can be applied at source rather than to the intermediate data stores, and the resulting clean data quickly propagated throughout the risk reporting infrastructure.
The bottom-up approach to data quality is a very powerful tool. With complete in-memory access to the full data set, at its lowest level of granularity, every data item can be quickly reviewed in detail for correctness and integrity. Ownership “up” the associated hierarchies can then be quickly established and the root causes identified and resolved.
This contrasts with the more traditional top-down approach, which is hit or miss at the best of times, suffering as it does from low-level data issues being obscured by aggregations in the upper levels of the hierarchies.
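The bottom-up sweep described above can be sketched as follows; the anomaly rules, thresholds and rows are purely illustrative:

```python
# Toy leaf-level rows, two of which contain quality problems.
rows = [
    {"trade_id": "T1", "book": "rates", "pv": 500.0},
    {"trade_id": "T2", "book": "rates", "pv": None},   # missing value
    {"trade_id": "T3", "book": "fx",    "pv": 9e12},   # implausible outlier
]

def find_anomalies(rows, limit=1e9):
    """Scan every leaf row, then attribute each issue to its owning book."""
    issues = []
    for r in rows:
        if r["pv"] is None:
            issues.append((r["book"], r["trade_id"], "missing pv"))
        elif abs(r["pv"]) > limit:
            issues.append((r["book"], r["trade_id"], "pv outside plausible range"))
    return issues

for book, trade, problem in find_anomalies(rows):
    print(f"{book}: trade {trade} -> {problem}")  # route back to the source system
```

Because the scan starts at the leaves, nothing is hidden by an aggregate: every flagged item names the exact trade and the book hierarchy that owns it.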
5. Expand risk measures
Value at risk (VaR) on its own is no longer an adequate measure of risk, nor is it capable of informing senior managers of the true nature of the market and credit risk exposures on their trading books. To provide full transparency and the level of detail required by the business, many different types of analysis are now required, including eVaR, sVaR, transient concentrations, extreme value analysis, abnormal correlations, stress tests and a wide range of other tests and scenarios.
The system needs to be able to run ad hoc analyses on the base-level data, which consists of many billions of rows and many trillions of data points. To be effective, this needs to happen in seconds, and the system must be flexible enough to cater for a wide range of hierarchical structures and combinations.
This data also needs to be analyzed quickly and flexibly to enable the business to be better informed about “known” and “suspected” areas of concentration, exposure and risk.
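As one concrete instance of such measures, here is a minimal historical-simulation sketch of VaR and expected shortfall over a daily P&L vector (the figures are made up, and a production system would run this over billions of leaf-level rows rather than twenty numbers):

```python
def historical_var_es(pnl, confidence=0.95):
    """Historical-simulation VaR and expected shortfall (losses as positives)."""
    losses = sorted((-x for x in pnl), reverse=True)   # largest loss first
    tail = max(1, int(len(losses) * (1 - confidence))) # size of the loss tail
    var = losses[tail - 1]                             # loss at the quantile
    es = sum(losses[:tail]) / tail                     # average loss beyond VaR
    return var, es

# Illustrative daily P&L observations.
pnl = [-120, 45, -300, 80, -60, 10, -500, 95, -20, 30,
       -75, 40, -10, 55, -250, 65, -35, 25, -90, 15]

var, es = historical_var_es(pnl, confidence=0.95)
print(f"95% VaR: {var}, ES: {es}")
```

Expected shortfall is shown alongside VaR precisely because it looks past the quantile into the tail, which is where VaR alone says nothing.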
6. Improve self-service reporting
In order for the risk managers to be more effective at their job of analysing and reporting on market and credit risk, they need a system that allows them to quickly and easily report on the risk data at the level and in the way that best fits the rapidly evolving needs of the business. Risk managers need to be able to slice and dice the risk data in numerous new and interesting ways to be able to highlight areas of interest and/or concern in the risk figures.
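The kind of ad hoc slice-and-dice described above amounts to pivoting leaf-level data along whatever dimensions the risk manager chooses; a toy sketch (field names and figures are hypothetical):

```python
from collections import defaultdict

# Leaf-level exposure rows with several candidate dimensions.
rows = [
    {"book": "rates", "ccy": "USD", "exposure": 5.0},
    {"book": "rates", "ccy": "EUR", "exposure": 2.0},
    {"book": "fx",    "ccy": "USD", "exposure": -1.5},
]

def pivot(rows, row_dim, col_dim, measure):
    """Cross-tabulate any measure along any two user-chosen dimensions."""
    table = defaultdict(float)
    for r in rows:
        table[(r[row_dim], r[col_dim])] += r[measure]
    return dict(table)

# Book-by-currency today; the same call could pivot by desk, tenor, rating...
print(pivot(rows, "book", "ccy", "exposure"))
```

Because the dimensions are parameters rather than a pre-built report, a new cut of the data is a one-line change rather than a request to a business analyst.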
Traditionally, any new report would require the services of a skilled business analyst, who would typically have to access and manipulate the data from a variety of source systems to provide the risk manager with the required report. This time-consuming process could take many weeks to complete.
With the new SAS Visual Analytics module, risk managers can create and manipulate reports on a self-service basis in a matter of minutes and publish the results to their colleagues, the front office and senior management via a number of channels, including on-screen and hard copy reports and the latest multimedia devices such as the iPad.
Acting on these six recommendations will help the bank reduce its operational costs, focus on its more profitable trading operations, reduce its risk-weighted assets (RWA) and thereby achieve a better return on the capital deployed in its trading operations.