At the peak of the financial crisis, regulators around the world moved quickly to ask banks for an aggregate picture of their risk exposures. Until then, banks usually managed their different risks in isolated departments with little communication between them. It worked as follows: the Credit Risk department developed complex credit-rating systems to classify clients and separate the wheat from the chaff. The Treasury department hired a team of market risk analysts to keep the mark-to-market under control. And the Operational Risk people, who usually sat on a different floor and whose work was a mystery to everyone else, tried to put together a methodology that produced reasonable figures. At the end, a Senior Risk Manager, in a rush, pulled all this information together in an MS Excel spreadsheet, added it up, and that was that.
This system broke down for many reasons but, fortunately, a few months later the Basel Committee attempted to address the problem by publishing a document, "Principles for effective risk data aggregation and risk reporting", which sets out core principles for the aggregation of risk data. It is a big deal because, for the first time in banking regulation, there are explicit requirements for accuracy, completeness and timeliness.
At first glance, it all seems very reasonable. The 14 principles cover the commonplaces of database management, and it is impossible to disagree with the main conclusion: "Risk management reports should cover all material risk areas in the organization". Last December, the Basel Committee published an update assessing progress in adopting the 14 principles. As expected, the banks report that they are progressing well. However, the main question remains open: how do you build a system that aggregates the huge range of risks facing an organization as big as a global bank?
The first issue that comes up is the "technological challenge". Nowadays many banks are running projects to build a huge, efficient, complete, flexible and robust Data Warehouse that can hold the whole range of risks. The purpose is to overcome two endemic weaknesses that have dogged these institutions for decades:
(1) Data standardization within the organization. Anyone who has worked in a bank knows how hard it is to gather homogeneous information from different departments: data reported in millions, thousands or units; the currency and the FX rate applied; pre- or post-netting figures; even the sign, depending on the kind of asset. And these are only the formal discrepancies; there are further sources of heterogeneity caused by methodological or regulatory differences.
(2) The proliferation of MS Excel spreadsheets in which a senior manager calculates the ratio that finally appears in the PowerPoint. All of us who have worked in a bank have, at some time, kept our own spreadsheet to estimate a last-minute request. Perhaps the best-known example is Hypo Real Estate: in 2011, having already been bailed out by the German government, the bank announced a EUR 55bn accounting error that was attributed to a spreadsheet mistake. I am sure this case is not unique. Let he who has not sinned cast the first stone.
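To make the standardization problem in (1) concrete, here is a minimal sketch of what a common-convention layer has to do before any aggregation: convert every departmental figure to one scale, one currency and one sign convention. The field names, FX rates and sample records are illustrative assumptions, not a real data feed.

```python
# Normalize heterogeneous exposure reports to EUR units, losses positive.
# FX rates and field names are assumed for illustration only.

FX_TO_EUR = {"EUR": 1.0, "USD": 0.92, "GBP": 1.17}          # assumed spot rates
SCALE = {"units": 1, "thousands": 1_000, "millions": 1_000_000}

def normalize(record):
    """Return the exposure in EUR units, with losses as positive numbers."""
    amount = record["amount"] * SCALE[record["scale"]]
    amount *= FX_TO_EUR[record["currency"]]
    # Some desks report losses as negatives, others as positives.
    if record["sign_convention"] == "losses_negative":
        amount = -amount
    return amount

reports = [
    {"amount": 1.2, "scale": "millions", "currency": "USD",
     "sign_convention": "losses_positive"},    # e.g. Credit Risk desk
    {"amount": -850, "scale": "thousands", "currency": "EUR",
     "sign_convention": "losses_negative"},    # e.g. Treasury desk
]

total_loss_eur = sum(normalize(r) for r in reports)
```

Trivial as it looks, in practice this mapping table is exactly what the Data Warehouse projects spend years getting every department to agree on.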
But there is more. Once an entity has overcome its "technological challenge", it must face the harder one, the "methodological challenge": how do you combine apples and oranges with a minimum level of consistency?
The first step should be to define a new metric that allows us to compare different approaches and bank sizes. I have called it the "Absorption ratio", and it would look something like this:

Absorption ratio = Risk exposure / Risk capacity
So, now we have two variables that must be calculated with accuracy. The denominator measures the capacity to absorb potential losses. Assuming a one-year horizon, the Risk Capacity would be the core capital, plus expected earnings, plus any other liquid resources, taking regulatory constraints into account. Even though there is still some debate about what counts as core capital or what can be considered a liquid asset, this is the easy part.
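As a toy illustration of the ratio with made-up figures (all in EUR bn), the denominator could be assembled exactly as described above; the numerator would come from one of the aggregation approaches discussed next.

```python
# Illustrative "Absorption ratio" calculation. All figures are invented
# for the sketch; the decomposition of Risk Capacity follows the text:
# core capital + expected earnings + other liquid resources, net of
# regulatory constraints. One-year horizon assumed throughout.

core_capital = 40.0        # e.g. CET1 capital
expected_earnings = 5.0    # earnings expected over the horizon
other_liquid = 8.0         # other liquid resources
regulatory_buffer = 10.0   # capital that regulation does not let you run down

risk_capacity = core_capital + expected_earnings + other_liquid - regulatory_buffer

risk_exposure = 25.8       # aggregated potential loss (from one of the two approaches)

absorption_ratio = risk_exposure / risk_capacity   # below 1.0 means losses are absorbable
```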
However, the numerator, the risk exposure, is the challenging one. There are two basic approaches to tackling the issue:
(1) Statistical approach: allows the aggregation of a wide range of risks using statistical techniques that can be associated with probabilities. Here, playing with historical observations, Monte Carlo simulations, non-normal distributions and different kinds of copulas is allowed.
(2) Scenario approach: a more intuitive stress measure that calculates the impact of a scenario on an entity, including the causality chain by which losses would arise in each case. This approach must incorporate a broad variety of alternative scenarios, some of them highly unlikely.
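A minimal sketch of the statistical approach in (1): two risk types are coupled through an assumed Gaussian copula and aggregated by Monte Carlo, then a high quantile of the total loss is read off. The marginal distributions, the correlation and the confidence level are all assumptions for illustration, not a prescribed methodology.

```python
# Monte Carlo aggregation of two risk types under a Gaussian copula.
# Marginals (lognormal-ish), correlation and quantile are assumed.
import math
import random

random.seed(42)
rho = 0.3            # assumed correlation between credit and market losses
n = 100_000          # number of simulated one-year outcomes

totals = []
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    # Correlate the second driver with the first (Gaussian copula).
    z2 = rho * z1 + math.sqrt(1.0 - rho**2) * random.gauss(0.0, 1.0)
    credit_loss = math.exp(0.5 * z1)   # assumed marginal for credit risk
    market_loss = math.exp(0.8 * z2)   # assumed heavier-tailed market marginal
    totals.append(credit_loss + market_loss)

totals.sort()
var_99 = totals[int(0.99 * n)]   # 99% quantile of the aggregate loss
```

Swapping the Gaussian copula for a t-copula, or the marginals for empirical distributions of historical losses, changes only a few lines; the disagreement in practice is about which of those choices is defensible.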
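The scenario approach in (2) can be sketched in a similarly minimal way: each hypothetical scenario lists shocks to a few risk factors, the book carries a linear sensitivity to each factor, and the loss is the shock propagated through those sensitivities. All scenario names, shocks and sensitivities are illustrative assumptions; a real implementation would model the causality chain far more richly.

```python
# Scenario-based risk aggregation sketch. Each scenario shocks a set of
# risk factors; the book's P&L sensitivity to each factor is assumed linear.
# Every number here is invented for illustration.

scenarios = {
    "rates_up_200bp":   {"rates": 2.0,  "equity": -0.10},
    "equity_crash":     {"rates": -0.5, "equity": -0.40},
    "sovereign_stress": {"rates": 1.0,  "equity": -0.25},
}

# Sensitivity of P&L (EUR mn) to a unit move in each factor.
book_sensitivities = {"rates": -30.0, "equity": 500.0}

def scenario_loss(shocks):
    """Propagate factor shocks through the book; report losses as positives."""
    pnl = sum(book_sensitivities[f] * shock for f, shock in shocks.items())
    return max(0.0, -pnl)

losses = {name: scenario_loss(s) for name, s in scenarios.items()}
worst = max(losses, key=losses.get)   # the binding scenario for the numerator
```

The risk exposure in the numerator would then be the worst (or some weighted combination) of these scenario losses, which is why the approach lives or dies by how imaginative the scenario set is.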
Big entities are investing a lot of money and effort in trying to meet the risk aggregation requirements. As I said, everybody is currently building a new Data Warehouse, so good news for IT business analysts. In addition, these large entities are calculating their risk exposure through both approaches, so good news for risk consultants as well.
In summary, there are now thousands of smart people trying to figure out future events. The target, in other words, is to forecast a Black Swan. It reminds me of a passage from Taleb: "the randomness in the Black Swan domain is intractable. (…). What is nonmeasurable and nonpredictable will remain nonmeasurable and nonpredictable, no matter how many PhDs with Russian and Indian names you put on the job. (…) There is, in the Black Swan zone, a limit to knowledge that can never be reached."