Tuesday, May 6, 2014

Risk: Its Wildness Lies In Wait

G.K. Chesterton wrote that "[life] is a trap for logicians. It looks just a little more mathematical and regular than it is; its exactitude is obvious, but its inexactitude is hidden; its wildness lies in wait." I'm often reminded of this when I hear sophisticated investors describe their risk management processes. For many investors, particularly institutions or those who cater to them, rigorous risk management seems to mean, almost by definition, a highly quantitative risk management system. Where the use of these models is concerned, there are three possibilities:

(1) Their risk models actually drive portfolio construction.
(2) Their risk models are presented to investors to give the veneer of rigour, but are basically ignored.
(3) Risk models are used to augment commonsense approaches, but are not the final arbiter of portfolio construction.

I suspect (2) and (3) are significantly more common than (1). While (2) is unethical, it may have the benefit of shielding investors from blind submission to a model. The version of risk (i.e., volatility) taught in introductory investment textbooks can be a dangerous tool. It's particularly problematic when investors don't understand the math and statistics that underlie these quantitative risk models, and are therefore unable to fully grasp their limitations. For example, it seems like a lot of people are willing to say, "Yes, we understand that asset prices don't follow a normal distribution, but let's use it as a tool anyway." Obviously, one doesn't need to be an automotive engineer to drive a car, but being unclear about the limitations of one's vehicle calls for humility and caution.
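
To make that limitation concrete, here's a toy sketch in Python. The return series is simulated with made-up parameters (a fat-tailed Student-t distribution standing in for real data), so the numbers themselves mean nothing; the point is only that a risk figure built on the normal distribution can understate the losses sitting in the tail.

    import numpy as np
    from scipy.stats import norm

    # Purely illustrative: simulate fat-tailed daily "returns" (Student-t with 3
    # degrees of freedom) as a stand-in for a real return series.
    rng = np.random.default_rng(0)
    returns = 0.01 * rng.standard_t(df=3, size=10_000)

    # Textbook parametric VaR: fit a normal distribution to the sample.
    mu, sigma = returns.mean(), returns.std()
    var_99_normal = -(mu + sigma * norm.ppf(0.01))

    # Empirical VaR: read the 1st percentile straight off the observed returns.
    var_99_empirical = -np.percentile(returns, 1)

    print(f"99% one-day VaR, normal assumption: {var_99_normal:.2%}")
    print(f"99% one-day VaR, empirical:         {var_99_empirical:.2%}")
    # With fat tails, the normal-based figure typically understates the loss threshold.

None of this is sophisticated, and that's the point: the gap between the two numbers only shows up if you look past the model's own assumptions.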

The quintessential definition of risk comes from Frank Knight in Risk, Uncertainty and Profit: "The practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from statistics), while in the case of uncertainty this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique."

Critics like Taleb waste no time in attacking Knight's classification. Taleb writes, "Had [Knight] taken economic or financial risks he would have realized that these "computable" risks are largely absent from real life! They are laboratory contraptions!" Indeed, those who take Knightian risk to excessive lengths (and I don't think Knight himself intended this) are guilty of reifying risk. Berger & Luckmann, in their classic The Social Construction of Reality, describe reification as "the apprehension of the products of human activity as if they were something other than human products - such as facts of nature, results of cosmic laws, or manifestations of divine will. Reification implies that man is capable of forgetting his own authorship of the human world... The reified world is, by definition, a dehumanized world." This is how simplistic models treat risk - as historical volatility that is indicative of future volatility because of some inherent feature, rather than as the product of economic fundamentals and the actions of economic agents.

One reason this thinking is dangerous is what economists have termed endogenous risk. This is just a fancy phrase for something that other social scientists are well aware of. For example, the sociologist Kathleen Tierney notes that, "Risks associated with social and physical systems are not inherent in those systems, nor are they fixed; rather they are the outcome of interactions among those social and physical units, social structure, and human (usually organizational) decisions." Translating this into economics jargon, Danielsson and Shin define endogenous risk as "the risk from shocks that are generated and amplified within the system." Woody Brock cautions that, "In episodes of market turmoil, the relevant endogenous risks have probability distributions that cannot be known. This, in turn, means that those "market risks" we prattle on about cannot be properly assessed and thus cannot be correctly priced and thus cannot be optimally "managed" by individuals or institutions - despite widespread beliefs by today's risk managers that they can. Ironically, just when optimal risk assessment and risk management tend to be most needed - in periods of crisis - we learn that they cannot exist." No doubt there are very clever folks out there with complicated models that treat the economy and financial markets as complex adaptive systems (the Danielsson and Shin paper suggests competitive equilibrium and game theoretic models), but since I don't understand their workings, I have no way of commenting on whether these models will actually work for investors.
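
For what it's worth, the amplification half of that definition can be shown with a deliberately crude toy (the shock size and feedback strength below are invented): an initial price fall triggers forced selling, the selling deepens the fall, and the loop repeats until the pressure exhausts itself.

    # Toy sketch of endogenous amplification: each round of forced selling is a
    # fixed fraction k of the previous round's price fall, so an outside shock
    # ends up magnified by a feedback multiplier of 1 / (1 - k).
    shock, k = 2.0, 0.4          # initial price fall and feedback strength (made up)
    fall, total = shock, shock
    for round_number in range(1, 9):
        fall *= k                # forced selling driven by the previous round's fall
        total += fall
        print(f"round {round_number}: cumulative fall = {total:.2f}")
    print(f"limit: {shock / (1 - k):.2f} (versus an initial shock of {shock:.2f})")

The toy is not meant to be realistic; it is only meant to show why a distribution estimated while the feedback loop is dormant says little about what happens once it wakes up.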

The backward-looking nature of many of these models is also widely recognized, though many treat it as a necessary evil. It is worth asking, for example, whether the low historical correlation between certain assets will hold as large institutional investors become ever more creative in allocating funds.
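
As a purely illustrative sketch of that worry, here are two simulated return series whose co-movement is rigged by construction to jump in the final quarter of the sample; all the parameters are made up. The correlation estimated over the calm stretch of history says nothing about the correlation that materialises afterwards.

    import numpy as np

    # Purely illustrative: synthetic returns with a common shock that only
    # appears in the final "stress" period.
    rng = np.random.default_rng(1)
    n_calm, n_stress = 750, 250
    common = rng.normal(0, 0.010, n_stress)   # shared shock, stress period only

    a = np.concatenate([rng.normal(0, 0.010, n_calm), common + rng.normal(0, 0.005, n_stress)])
    b = np.concatenate([rng.normal(0, 0.010, n_calm), common + rng.normal(0, 0.005, n_stress)])

    print(f"Correlation, calm period:   {np.corrcoef(a[:n_calm], b[:n_calm])[0, 1]:+.2f}")
    print(f"Correlation, stress period: {np.corrcoef(a[n_calm:], b[n_calm:])[0, 1]:+.2f}")

Again, the example is contrived; real diversification failures are harder to see coming, which is exactly the problem.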

I'm not suggesting that we should give up on thinking about risk, merely that quantifying it may not be helpful for everyone. But I also reject the notion of risk as the "permanent loss of capital". In a terrific article on his Top 10 Peeves, AQR's Cliff Asness very effectively defends the use of volatility in an investing framework. He clarifies first, "Volatility isn't how much the security is likely to move; it's how much it's likely to move versus the forecast of expected return." More importantly, he explains, "Risk is the chance you are wrong. Saying that your risk control is to buy cheap stocks and hold them... is another way of saying that your risk control is not being wrong. That's nice work if you can get it. Trying not to be wrong is great and something we all strive for, but it's not risk control. Risk control is limiting how bad it could be if you are wrong." 

That's a great description of risk management, and I appreciate that Asness is able to convey it in straightforward language. Highly quantitative risk models are not appropriate for many investors, but that doesn't mean they need to eschew risk management altogether, or feel insecure about the lack of precision in their attempts. As Danielsson and Shin note, "an effective risk manager should be able to make an intelligent distinction between those cases where the standard "roulette wheel" view of uncertainty is sufficient, and to distinguish those cases from instances where endogeneity of risk is important. Common sense and a feel for the underlying pressures dormant in a market are essential complements to any quantitative risk management tool that merely looks back at the recent past." In a future post, I'll attempt to lay out a commonsense approach to thinking about risk and its management.

2 comments:

  1. Great post. Couldn't agree more on the inherent inaccuracies of models and the issues with causal direction in investing that you point out. To defend mathematicians though, I’ll say most of them are pretty abstract thinkers and already know that uncertainty is not an ontological concept (randomness is - and that is modelled). It is what people choose or claim to do with models without understanding them that causes obvious problems.

    1. Fair point! I can't deny that there are very sophisticated models out there that capture reality better, but I do worry when people of average quantitative abilities mistake mathematics for rigour.
