


Grid computing primer

How to tackle large computational problems with grid computing 

The portfolio optimization and capital allocation problem that international banks face is complex: the goal is to find a portfolio composition that maximizes the bank's expected return, subject to a set of limits expressed as risk measures and bounds.

An example is maximizing a commodity trading portfolio's expected return subject to a maximum 99 percent, one-day value-at-risk of US$1 million and a diversification rule that limits the exposure to any single commodity to 10 percent of the total portfolio exposure.
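The VaR limit in this example can be made concrete with a small sketch: simulate one-day portfolio P&L, take the 99 percent quantile of the losses, and compare it to the US$1 million limit. The portfolio size, volatility and normal P&L model below are illustrative assumptions, not a bank's actual risk model.

```python
# Hypothetical sketch: checking a portfolio's 99% one-day VaR against a
# US$1M limit via Monte Carlo simulation. All numbers are illustrative.
import random

random.seed(42)

N_SIMS = 100_000
PORTFOLIO_VALUE = 20_000_000   # assumed total exposure (USD)
DAILY_VOL = 0.015              # assumed one-day portfolio volatility

# Simulate one-day profit-and-loss outcomes under a normal model.
pnl = [PORTFOLIO_VALUE * random.gauss(0.0, DAILY_VOL) for _ in range(N_SIMS)]

# 99% one-day VaR: the loss exceeded in only 1% of simulations.
pnl.sort()
var_99 = -pnl[int(0.01 * N_SIMS)]

VAR_LIMIT = 1_000_000
print(f"99% one-day VaR: ${var_99:,.0f} "
      f"({'within' if var_99 <= VAR_LIMIT else 'breaches'} the $1M limit)")
```

In a real optimization this check would sit inside the constraint set, rejecting candidate portfolio compositions whose VaR exceeds the limit.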

Applying grid computing
The optimization process for a problem like this relies on the valuation of a set of N simulations – i.e. simulating possible market conditions in a number of ways and evaluating the assets' value under those conditions. The simulations are based on simulated market states at specified future time points and can be generated using different techniques, including model-based Monte Carlo simulations, covariance matrix-based simulations, historical simulations or scenario simulations. Consider an optimization in which N simulations are performed on M instruments, some of which are valued using binomial trees or Monte Carlo simulations. Parallel programming or grid computing can speed up the optimization valuation process at three different levels:

Simulations level: by splitting the workload linked to the N simulations into x parallel streams.
Portfolio level: by splitting the evaluation of M instruments into y parallel streams.
Instrument level: by splitting the evaluation of the binomial tree or Monte Carlo simulations into z parallel streams.
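The simulations-level split above can be sketched as follows: the N simulations are divided into x independent streams that a grid scheduler would dispatch to separate nodes. In this minimal sketch the "nodes" are worker threads on one machine, the pricing function is a stub, and all names and numbers are illustrative; a real grid would run each stream on a separate machine, and CPU-bound pricing would use processes rather than threads.

```python
# Minimal sketch of the simulations-level split (x parallel streams).
from concurrent.futures import ThreadPoolExecutor

N = 1_000      # total number of simulations
X = 4          # number of parallel streams

def run_stream(sim_indices):
    """Value the portfolio under each simulated market state (stubbed)."""
    # A real worker would reprice all M instruments under each market state;
    # here we return a placeholder value per simulation.
    return [1.0 for _ in sim_indices]

# Split the N simulations into x roughly equal streams.
streams = [range(i, N, X) for i in range(X)]

with ThreadPoolExecutor(max_workers=X) as pool:
    results = pool.map(run_stream, streams)

# Aggregate the partial results exactly as if one node had valued all N.
values = [v for stream in results for v in stream]
print(f"{len(values)} simulations valued across {X} streams")
```

The same pattern applies at the portfolio level (split the M instruments into y streams) and at the instrument level (split one binomial tree or Monte Carlo valuation into z streams); only the unit of work handed to each worker changes.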

By making the valuation process faster, parallel programming ultimately resolves the optimization and capital allocation problem faster.

Other uses for grid computing
Besides portfolio optimization, parallel programming can reduce the time required to perform many other risk calculations and banking processes, including:

Mark-to-market portfolio valuation.
Stress-test scenario valuation.
Sensitivity analysis.
Value-at-risk computation.

In the past, these types of risk calculations were often performed after the fact, once the deal was done. Grid computing makes it possible to make that information available before executing the deal, using real-time and what-if calculations.
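A pre-deal what-if calculation of the kind described above can be sketched as follows: recompute the 99 percent VaR with the candidate trade added, so the risk impact is visible before execution. The scenario P&L vectors below are randomly generated stand-ins for a bank's actual historical-simulation data.

```python
# Hypothetical what-if check before executing a deal: compare the 99% VaR
# of the portfolio with and without the candidate trade. Numbers are
# illustrative, not real market data.
import random

random.seed(7)

N = 500  # number of historical scenarios

# One-day P&L of the current portfolio and of the candidate deal under the
# same N scenarios (illustrative stand-ins for historical-simulation data).
portfolio_pnl = [random.gauss(0, 200_000) for _ in range(N)]
deal_pnl = [random.gauss(0, 80_000) for _ in range(N)]

def var_99(pnl):
    """99% VaR: the loss exceeded in 1% of scenarios."""
    return -sorted(pnl)[int(0.01 * len(pnl))]

pre = var_99(portfolio_pnl)
post = var_99([p + d for p, d in zip(portfolio_pnl, deal_pnl)])
print(f"VaR before deal: ${pre:,.0f}, after: ${post:,.0f}, "
      f"impact: ${post - pre:+,.0f}")
```

On a grid, the repricing of all instruments under all scenarios is what gets parallelized, which is what brings this check from an overnight batch into the pre-trade workflow.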

Benefiting the industry
While past gains in performance were incremental and allowed banks to price instruments with ever greater precision, calculating risk with parallel pricing methods is transformational, reducing the time to price an instrument by a factor of more than 100. Risk analyses that once had to run in the back office at the end of the day can now be run in a few seconds at the front office.

The bank's Chief Risk Officer could ultimately benefit from this solution by being able to impose risk boundaries before they are broken. Likewise, the Chief Financial Officer gains on two fronts: not only does the bank operate in a more controlled manner, but it does so at a reduced cost, as the performance-to-cost ratio keeps improving.

Overall, calculating risk in near-real time will give the industry a tool to detect catastrophic deals before they occur and will help remove the Sword of Damocles that the banking sector holds over the global economy.

Etienne Hermand
Risk Management Consultant, SAS EMEA Risk Practice
