
Mutual fund risk: standard deviation is not the answer: don’t use standard deviation by itself as a measure of risk. It is misleading and wrong

Chuck Chakrapani


What is the biggest risk you face when you invest your money in a mutual fund? Hardly anyone would deny that the biggest risk one faces in investing is losing one’s capital.

One would also assume that the measure of risk calculated by many specialists would take this obvious fact into account. But, believe it or not, the most commonly reported measure of risk does not even care if you lose all your capital, as long as you lose it in a steady fashion!


Most mutual fund industry analysts define risk as the unpredictability of returns. As far as they are concerned, a mutual fund that goes up every month by different amounts is riskier than a fund that goes down every month by a fixed amount! Absurd as this may sound, that is exactly how many analysts calculate how 'risky' a mutual fund is.

To be fair, a mutual fund whose return is predictable is less risky than another fund whose return is unpredictable. But this is not the most serious risk. As we noted, the most serious risk is losing your money, which is not at all taken into account in computing the standard deviation.


Is standard deviation any good at all? Yes, it can be useful if it is used in conjunction with the rate of return. Standard deviation by itself is NOT a measure of risk, despite the conventional wisdom that it is.

So how do we use standard deviation correctly? One approach is to compare standard deviations only across funds with comparable returns. For instance, if two funds each have an annual return of 10%, and fund A has a standard deviation of 3.4 while fund B has a standard deviation of 4.5, then fund B is the riskier one. Both funds have similar returns, but fund B's are less predictable. In such cases we should prefer fund A, since its returns are comparable to fund B's but much more predictable.
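The equal-return rule above can be sketched in a few lines of Python. The fund names and figures are the hypothetical ones from the example; the point is simply that when returns match, the lower standard deviation wins.

```python
# Two hypothetical funds with identical 10% annual returns but different
# standard deviations. At equal returns, the lower SD fund is preferable.
fund_a = {"name": "A", "ret": 10.0, "sd": 3.4}
fund_b = {"name": "B", "ret": 10.0, "sd": 4.5}

# The comparison is only meaningful because the returns are comparable.
assert fund_a["ret"] == fund_b["ret"], "compare SDs only at similar returns"

preferred = min([fund_a, fund_b], key=lambda f: f["sd"])
print(f"Prefer fund {preferred['name']}")  # fund A: same return, more predictable
```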


In real life, we seldom compare funds with identical returns. For instance, consider these two funds:

Fund   Return   Std. Dev.
X        8.7      3.6
Y       14.6      4.9

If we looked only at the standard deviations reported in newspapers, we would conclude that fund Y, despite its higher return, is the 'riskier' one. In reality this is not so. Standard deviations by themselves are meaningless. To interpret risk correctly, we need to divide the standard deviation by the average return (a ratio known as the coefficient of variation). For example:

Fund   Return   SD    (SD/Return) x 100
X        8.7    3.6   (3.6/8.7) x 100 = 41%
Y       14.6    4.9   (4.9/14.6) x 100 = 34%

Our calculations clearly show that fund X is the riskier of the two, since its relative volatility of 41% is higher than fund Y's 34%. Fund Y not only provides a higher return; it is also less risky.
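The table's arithmetic is easy to verify. The short sketch below computes the same ratio (standard deviation divided by average return, as a percentage) for the two hypothetical funds:

```python
# Risk per unit of return (the coefficient of variation, as a percentage)
# for the two hypothetical funds in the article. Lower is better.
funds = {
    "X": {"ret": 8.7, "sd": 3.6},
    "Y": {"ret": 14.6, "sd": 4.9},
}

for name, f in funds.items():
    cv = f["sd"] / f["ret"] * 100
    print(f"Fund {name}: relative volatility = {cv:.0f}%")
# Fund X: relative volatility = 41%
# Fund Y: relative volatility = 34%
```

Dividing by the return is what makes the two figures comparable: a standard deviation of 4.9 sounds larger than 3.6, but relative to a 14.6% return it represents less risk.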


If you had used standard deviation alone as a measure of risk (as most people do), you might have bought a higher-risk, lower-return fund instead of a lower-risk, higher-return fund.

The moral: do not use standard deviation by itself as a measure of risk. It is not one. If you must use it, use it in conjunction with the average return, as explained above.

COPYRIGHT 1996 Money Digest

COPYRIGHT 2004 Gale Group