Now that’s a sexy title… but only to a small audience interested in scenario analysis (stress testing) and model risk.1 During the past week, many analysts have written and commented about the Brexit vote and its implications. We are interested in two particular aspects of the event:
- Was there sufficient scenario analysis (stress testing in SR 12-7) prior to the vote, and
- Did model risk–associated with the inappropriate use of models–and aggregate or combined model risk contribute to the large losses observed around the world?
Across the world and across different types of markets–e.g., equities and commodities–initial, first-day losses were extreme. Given that, it would seem that one of three scenarios must be true:
- Initial reactions (revisions to estimates of earnings and values) by investors, economists, analysts and traders were/are appropriate and accurate;
- Initial reactions over-estimated the long-term, negative effects; or
- Initial reactions, while severe, underestimated the negative implications of the exit, which will be experienced in the coming days, weeks, months or years.
Consider the following facts:
- According to the Financial Times, “some” U.S. banks included a supplemental or idiosyncratic Brexit “leave” vote scenario in their recent CCAR submission. (The Fed did not, nor did the other CCAR banks that aren’t part of the “some.”)2 No firm was mentioned by name, but the article implies, and we would imagine, that the very largest banks that have the biggest operations and exposure in London and Europe would have performed the analysis.
- Although we try not to infer too much from one-day stock returns, of the 17 U.S. banks that we follow, the four biggest losers–in percentage terms–on Friday were regionals: Zions (-10.19%), Comerica (-10.03%), Citizens (-9.81%) and Regions (-9.74%). (Clearly, the largest banks lost more in absolute terms.) To our knowledge, none of the four has much business, if any at all, in London or Europe, and we would imagine that their FX positions would be mostly hedged. So, it seems unlikely that those large stock price declines resulted from any direct effect(s).
- While we heard a commentator on the news erroneously refer to the result as a “black swan,” Britain leaving the EU was not an unpredictable, low-probability or unforeseen event, e.g., the polling had been close, and many people–but, unfortunately, not everyone–considered the implications of leaving prior to the event.
SR 12-7: Scenario Analysis, or the lack thereof
If (1) is true, then given the prior publicity–again, this was not a black swan–it seems that risk management, especially risk identification, as well as scenario analysis and stress testing, needs to be strengthened and made more comprehensive at many firms. Per the first sentence of SR 12-7, “All banking organizations should have the capacity to understand fully their risks and the potential impact of stressful events and circumstances on their financial condition.”
Of the banks in our tracking portfolio, the smallest percentage loss was 4.59%, which was still a decrease of over $11B in that firm’s market cap. So, at least for bank shareholders, and at least initially, Brexit has been a stressful event–and one whose date had been known for quite a while.
The first two principles in SR 12-7 are:
Principle 1: A banking organization’s stress testing framework should include activities and exercises that are tailored to and sufficiently capture the banking organization’s exposures, activities, and risks.
Principle 2: An effective stress testing framework employs multiple conceptually sound stress testing activities and approaches.
Other than complaining that these two aims are challenging or expensive to meet, it’s difficult to argue with either one. They are central to effective and robust risk management and provide much of the framework for intermediate and long-term risk management. Moreover, Principle 1 is not limited only to direct threats to the firm. For most domestic firms, the process of determining the implications of Brexit would involve asking two questions. The first is “if the ‘leave’ vote wins, what are the likely direct implications?” The second one is: “how would those changes or shocks (in (i) investors’ and regulators’ perceptions and (ii) certain markets and (iii) certain asset values or classes) affect our firm, given our exposures, activities, and risks?”
Clearly, such an analysis can’t be delegated to junior “quants” but must involve experienced experts who understand the connections and possible, conditional connections and relationships among markets, economies, firms, and values, especially in times of stress (or perceived stress). In other words, how would people panic and how could models mislead?
Please note that we aren’t saying that scenario analysis would have led to perfect estimates or that such a scenario analysis would have prevented Friday’s losses. However, the lack of any type of analysis leaves executives unprepared to respond to and influence investors’ (and others’) perceptions and reactions. Moreover, while executives may argue that “so far, our firm hasn’t lost a penny from Brexit,” we doubt that their shareholders would agree. While there has only been one day of trading, shareholders at the 17 U.S. banks in our portfolio lost about $67B on Friday.
If the losses observed on Friday persist, then it would seem that many firms were unprepared for a relatively high probability event, i.e., much closer to 50% than zero; they had no contingency plans. Alternatively, investors and traders may have over-reacted to the vote. In that case the question is “why?” Our answer follows.
SR 11-7: Model Risk, especially Aggregate Model Risk
If (2) is true, and Friday’s losses were the result of panic and overreaction, then we think it is likely that poorly-specified models and (non-obvious) interactions between and among those models played a role in the over-reaction.
The aggregate model risk–the combined adverse consequences–would be both within individual firms and across firms (and other trading and investing organizations) and would result from positive feedback loops.3 We’ll explain our hypothesis below; however, if such loops exist, then the Fed not only ignored the firm-specific effects in 2016 CCAR, it also missed the systemic effects because almost every bank was substantially worse off (after one day of trading) at least in terms of its share price. We agree that it is only one day, plus the weekend, but those losses still have real implications for a lot of real people.
So, what do we mean by “poorly-specified models” and “aggregate model risk?”
First, we speculate that (1) models were extensively used for forecasting, pricing and valuation on Thursday night and Friday, and (2) few, if any, of those models were redeveloped or refined with Brexit in mind. In other words, whatever models were available for everyday purposes prior to Brexit were applied to the immediate reactions to the “leave” vote.
How could that be bad? Well, if you have been watching or reading about the vote, especially afterwards, you likely heard or read the word “unprecedented.” It’s almost as common as the sentence, “this has never happened before.” That means that the effects of similar events–whether short-term or long-term–can’t be in anyone’s sample set because they haven’t occurred. It also means that models and parameters based on those sample sets aren’t necessarily relevant to this situation.4 Therefore, the blanket, thoughtless, and automatic applications of models that represent past average and typical situations aren’t necessarily appropriate or useful in a novel case. Their use in such a situation should have to be justified. Moreover, it makes one wonder whether the event caused a bigger shock only because it was a big shock within models that hadn’t been shown to be valid.
In model risk/governance talk, that application would pretty much be a separate and new use, which at many firms would require at least a review, if not a full validation. Note that the issue is somewhat analogous to the problem faced by CCAR/DFAST PPNR and loss-forecasting modelers: there are much more data about normal times than bad times, but data from typical quarters don’t necessarily inform about what happens in the worst ones.
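The out-of-sample problem can be made concrete with a small, purely illustrative sketch (all numbers and the saturation story are invented): a model calibrated on “normal times” data will happily produce an answer for an unprecedented shock, even though nothing in its sample set supports that answer.

```python
# Hypothetical illustration: a model calibrated on "normal times" data
# extrapolates mechanically when fed a shock far outside its sample domain.

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# "Normal times" sample: currency shocks of size 0.1 .. 1.0 moved our
# (invented) asset by roughly half the shock, linearly.
normal_shocks = [0.1 * i for i in range(1, 11)]
responses = [0.5 * x for x in normal_shocks]

a, b = fit_line(normal_shocks, responses)

# An "unprecedented" shock, eight times larger than anything in the sample:
big_shock = 8.0
predicted = a + b * big_shock   # the model extrapolates without complaint
print(f"in-sample slope: {b:.2f}, extrapolated response: {predicted:.2f}")
# If the true response saturates or changes regime beyond the sample range,
# this number is fiction -- but nothing in the model flags it as such.
```

The model is perfectly sound inside its calibration range; the risk arises only at the moment of a new, unreviewed use.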
So, we can imagine analysts, traders and others misapplying (what are otherwise sound) models to a situation for which those models were never intended when they were created. In modeling, context matters, a lot. That’s why it is very important to make certain that all assumptions–behavioral, economic, mathematical and statistical–are clearly stated and justified for each alternative use. It’s also why any scenario analysis around Brexit should have included the question: “will our models still be valid, or will a ‘leave’ vote cause a regime change?” (It has for the British government, to be sure.)
If there was an overreaction on Friday, then it was the result of either: (a) human fear or panic or (b) the inappropriate application of forecasting, pricing and valuation models to a new and unique situation or (c) some combination thereof. If it was either (b) or (c), then it is quite possible that it resulted from the (rarely considered) interactions between or among models, i.e., the positive feedback loops mentioned above.
For example, we imagine that someone has a model that relates the pound (GBP), as denominated in U.S. dollars, to oil prices, say, Brent crude. Someone else relates GBP to interest rates and someone else to British GDP. (Or some economist has all these factors in one, large, seemingly simultaneous system of equations that needs a few “exogenous shocks” to get started calculating.) Other traders or analysts have models that relate one or more of those items to U.S. GDP, interest rates, West Texas Intermediate (WTI) oil prices, natural gas prices, other commodities, and bank stocks… Whenever a model’s output changes enough, it triggers a trade, thereby causing cascading (and rebounding) effects as price changes provide fresh inputs into others’ models for trading and investment decisions. Moreover, this is not just between British and American markets and banks but throughout the world.5
That, indeed, would be aggregate model risk–on a global scale. It would be harder to observe those interactions across firms and markets, unless there were “industry standard” models, which then, ironically, would intensify the systemic behavior and effects. It might be hard to observe even within firms. Unfortunately, we can imagine uncoordinated traders in the same firm destroying each other’s values each time a programmed trade is executed and ripples across markets and bounces back to the other side of the firm.
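The loop we hypothesize can be sketched as a toy, two-desk simulation (the sensitivities and shock size are invented for illustration): each desk’s model passes through some fraction of the other’s last price move as its own trade impact, and the same initial shock is amplified several-fold when both desks react more aggressively under stress.

```python
# Hypothetical sketch of a positive feedback loop between two trading models.
# Desk A trades asset A based on moves in asset B, and desk B the reverse;
# "sensitivity" is the fraction of an observed move each model passes through.

def cascade(initial_shock, sens_a, sens_b, rounds=10):
    """Bounce a shock between two models; return cumulative moves in A and B."""
    move_a = initial_shock
    total_a, total_b = move_a, 0.0
    for _ in range(rounds):
        move_b = sens_b * move_a   # B's model reacts to A's latest move
        move_a = sens_a * move_b   # A's model reacts to B's reaction
        total_b += move_b
        total_a += move_a
    return total_a, total_b

# Normal times: low sensitivities, the echo dies out almost immediately.
calm_a, _ = cascade(initial_shock=1.0, sens_a=0.3, sens_b=0.3)

# Stress: both desks pass through 90% of each move. The round-trip gain
# (0.9 * 0.9) is still below 1, so the loop converges, but the same 1-unit
# shock produces several times the total movement; at a gain of 1 or more
# it would diverge outright.
stressed_a, _ = cascade(initial_shock=1.0, sens_a=0.9, sens_b=0.9)

print(f"calm total move in A: {calm_a:.2f}, stressed: {stressed_a:.2f}")
```

The point of the sketch is that neither desk’s model is individually broken; the amplification lives entirely in the interaction between them.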
In model risk management, many banks group models into related families as a way to capture aggregate risk: all CCAR models, all scorecards, all models in a chain that are processed sequentially. However, what we are describing involves seemingly unrelated models–or models that tend to have negligible effects on each other in normal times–having pernicious effects on each other at the very worst of times. Now, that would be a useful family to identify beforehand, so it could be controlled–by stopping and thinking–in the worst of times. (It’s why we were all frequently admonished in school not to extrapolate outside of our sample set or a function’s domain.)
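One way to look for such a family before a crisis, sketched below with invented model names and links: treat models as nodes, treat “this model’s output feeds that model’s input (or its trades move that model’s inputs)” as edges, and take the connected components as candidate families to review together.

```python
# Hypothetical sketch: group models into "families" by finding connected
# components of an output-feeds-input graph, via a simple union-find.

from collections import defaultdict

def model_families(edges):
    """Group model names linked by edges into sorted families."""
    parent = {}

    def find(m):
        parent.setdefault(m, m)
        while parent[m] != m:
            parent[m] = parent[parent[m]]   # path halving
            m = parent[m]
        return m

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in edges:
        union(a, b)
    groups = defaultdict(set)
    for m in parent:
        groups[find(m)].add(m)
    return sorted(sorted(g) for g in groups.values())

# Invented example: an FX model feeds a rates model, which feeds a
# bank-equity model; a separate scorecard chain stays isolated.
links = [("GBP/USD", "UK rates"), ("UK rates", "bank equity"),
         ("retail scorecard", "loss forecast")]
families = model_families(links)
print(families)
# → [['GBP/USD', 'UK rates', 'bank equity'], ['loss forecast', 'retail scorecard']]
```

In practice the hard part is not the graph algorithm but eliciting the edges: the stress-time links (panic trades, shared data feeds, common “industry standard” assumptions) are exactly the ones missing from normal-times documentation.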
It’s why the emphasis on documentation and transparency, and what we would call “considered use” in SR 11-7, is appropriate and crucial to minimizing potential adverse consequences when those mistakes most need to be identified, managed and controlled. Model risk management isn’t academic and shouldn’t be bureaucratic. It should warn of situations where models are likely to be misleading, including when different and usually unrelated models across the firm–used by distant and unrelated parties–interact to almost everyone’s detriment.
P.S. We won’t spend any time on it here, but scenario (3)–that the losses were bad but the worst is yet to come–combines much of what was said under (1) and (2). Its realization would provide further, albeit very expensive, evidence of the importance and value of both (1) scenario analysis (and contingency planning) and (2) model risk management.