Does standard deviation really measure risk?


In college finance programs and the CFA curriculum, the degree to which a security’s price fluctuates is taught as the measure of a security’s “riskiness.” Statistically, standard deviation (SD) is used to quantify this “risk.” But is it a true measure of risk?
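As a minimal sketch of the textbook approach (my own illustration, not from any curriculum; the price figures are hypothetical), volatility is typically computed as the annualized standard deviation of periodic returns:

```python
# A sketch of the conventional volatility calculation: the annualized
# standard deviation of daily returns. Prices here are hypothetical.
import numpy as np

prices = np.array([100.0, 101.5, 99.8, 102.3, 103.1, 101.9, 104.0])

# Daily simple returns.
returns = prices[1:] / prices[:-1] - 1

# Sample standard deviation of daily returns, annualized by the usual
# convention of roughly 252 trading days per year.
daily_sd = returns.std(ddof=1)
annualized_sd = daily_sd * np.sqrt(252)

print(f"daily SD: {daily_sd:.4f}, annualized SD: {annualized_sd:.4f}")
```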

I would contend that SD is not the true measure of risk. The amount a security moves up and down in price does not define risk. The notion that SD measures risk is a product of academia's search for a standardized way to quantify risk; risk is not easily quantifiable, and this is the best that academics can do.

Benjamin Graham, the father of value investing, discusses this subject in his celebrated book “The Intelligent Investor”.  Here is what he says:

“…the idea of risk is often extended to apply to a possible decline in the price of a security, even though the decline may be of a cyclical and temporary nature and even though the holder is unlikely to be forced to sell at such times.  These chances are present in all securities, other than United States savings bonds, and to a greater extent in the general run of common stocks than in senior issues as a class.  But we believe that what is here involved is not a true risk in the useful sense of the term.  The man who holds a mortgage on a building might have to take a substantial loss if he were forced to sell it at an unfavorable time.  That element is not taken into account in judging the safety or risk of ordinary real-estate mortgages, the only criterion being the certainty of punctual payments.  In the same way the risk attached to an ordinary commercial business is measured by the chance of its losing money, not by what would happen if the owner were forced to sell.”

While SD provides a convenient technique for measuring the degree of price change, it cannot assess the potential for income streams to be reduced or eliminated, or for a true loss of investment capital.
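To make that distinction concrete, here is a hedged illustration of my own (the two return series are invented, not drawn from any real security): a choppy series that ends roughly where it started registers a high SD, while a series that loses a little money every period registers an SD of zero even though the capital loss is permanent and growing.

```python
# Two hypothetical return series: one volatile but roughly capital-preserving,
# one with zero volatility but a steady, permanent erosion of capital.
import numpy as np

def summarize(returns):
    returns = np.asarray(returns)
    wealth = np.cumprod(1 + returns)            # growth of $1 invested
    peak = np.maximum.accumulate(wealth)
    max_drawdown = ((wealth - peak) / peak).min()
    return returns.std(ddof=1), wealth[-1], max_drawdown

volatile = [0.05, -0.05] * 6      # swings up and down, ends near breakeven
declining = [-0.015] * 12         # small, steady losses every period

for name, series in [("volatile", volatile), ("declining", declining)]:
    sd, final, mdd = summarize(series)
    print(f"{name:9s}  SD={sd:.4f}  final wealth={final:.3f}  max drawdown={mdd:.3f}")
```

By the SD yardstick the “volatile” series looks far riskier, yet it is the “riskless” declining series that actually destroys capital, which is the sense of risk Graham has in mind.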

Copyright 2017 Mark T. McLaren