Studies have suggested that equally-weighted diversification (also known as 1/N) is ingrained in human behaviour. For example, research has shown that more than 30 percent of defined contribution plan participants use equally weighted diversification – in other words, they divide their money equally across investment opportunities.
Investing using 1/N strategies has some positive qualities. It never underperforms a portfolio fully invested in the worst-performing asset, and it avoids concentrated positions. To the extent that there is a size premium, the strategy will capture it because, relative to capitalization weighting, it underweights large-cap assets and overweights small-cap assets. It will also capture any mean reversion: when investors rebalance to 1/N, assets that have appreciated are sold and those that have depreciated are bought, as the sketch below illustrates.
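A minimal Python sketch of the rebalancing mechanics (the dollar values are purely hypothetical) shows why rebalancing to 1/N automatically trims winners and adds to laggards:

```python
import numpy as np

def rebalance_to_equal_weight(holdings_value):
    """Return the trades needed to restore equal (1/N) weights.

    holdings_value: current dollar value held in each asset.
    Returns dollar trades per asset (positive = buy, negative = sell).
    """
    holdings_value = np.asarray(holdings_value, dtype=float)
    total = holdings_value.sum()
    n = holdings_value.size
    target = np.full(n, total / n)      # equal dollar value in each asset
    return target - holdings_value      # sell winners, buy losers

# Illustration: asset 1 has appreciated, asset 3 has lagged.
trades = rebalance_to_equal_weight([120.0, 100.0, 80.0])
print(trades)  # sells 20 of the winner, buys 20 of the laggard
```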
There has even been empirical research to suggest that 1/N outperforms optimally diversified portfolios. In one study[2], across multiple models and data sets, including sectors, countries and factors, researchers concluded that no optimization method they examined could consistently outperform 1/N diversification. In fact, they found that 1/N portfolios produced Sharpe ratios (which characterize how well the return of an asset compensates investors for risks taken) that were on average 50 percent higher than those of mean-variance-optimized portfolios.
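For reference, the Sharpe ratio used in that comparison is conventionally defined as the expected excess return per unit of volatility:

\[ \text{Sharpe ratio} = \frac{E[R_p] - R_f}{\sigma_p}, \]

where \(R_p\) is the portfolio return, \(R_f\) is the risk-free rate and \(\sigma_p\) is the standard deviation of the portfolio's excess returns.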
This is a surprising result. After all, 1/N makes no use of any investment knowledge, nor of any information in the data sample. In the study, however, the researchers relied, in one way or another, on small samples of realized returns to construct their expected returns. The problem is that returns estimated from small rolling samples can vary widely and may bear no relation to the assumptions a reasonable practitioner would make, such as the widely held presumption that stocks will outperform bonds.
The Fallacy of 1/N
In recent years, we have seen growing skepticism of optimization techniques. In a research paper[3], David Turkington and coauthors Mark Kritzman and Sebastien Page argued that the perceived failure of optimization is largely predicated on fundamental misunderstandings and an over-reliance on short-term samples to extrapolate expected returns. The authors showed that, using inputs that are intentionally simplistic but reasonable in terms of basic financial intuition, optimized portfolios usually outperform equally weighted portfolios out of sample.
Many investors object to the extreme allocations that optimizers sometimes recommend. The claim “optimizers are error maximizers” epitomizes this view. Consequently, a vast amount of effort has been spent on research to create better-behaved portfolios. For example, Jorion[4] introduced the Bayes-Stein approach to compress expected returns toward the minimum variance portfolio. Michaud[5] introduced re-sampling as a method of reducing estimation error. Black and Litterman[6] combined investor views with equilibrium expected returns to reduce extreme allocations. Chow[7] proposed adding a benchmark tracking error term to anchor allocations around a benchmark, and Chevrier and McCulloch[8] used Bayesian estimation and economic theory to create superior optimized portfolios.
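As a rough sketch of the shrinkage idea behind Jorion's approach (the notation here is simplified rather than the paper's exact estimator), expected returns are pulled toward the expected return of the minimum-variance portfolio:

\[ \hat{\mu}_{BS} = (1 - w)\,\hat{\mu} + w\,\hat{\mu}_{\text{min}}\mathbf{1}, \qquad 0 \le w \le 1, \]

where \(\hat{\mu}\) is the vector of sample mean returns, \(\hat{\mu}_{\text{min}}\) is the estimated expected return of the minimum-variance portfolio, \(\mathbf{1}\) is a vector of ones, and \(w\) is a data-driven shrinkage intensity that increases with estimation error.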
But do techniques for generating better-behaved portfolios really solve the underlying problem?
The perceived superiority of 1/N arises not from limitations in optimization but from reliance on short rolling samples to estimate expected returns. Sixty- or 120-month realized returns are extremely noisy. Just because equities have lost value over the past decade does not mean they will over the next 10 years. We shouldn’t expect optimizers to translate valueless inputs into superior portfolios any more than we should expect calculators to generate correct solutions from mistaken inputs.
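A simple simulation (purely illustrative, with assumed parameters) makes the point: even when the true equity premium is positive, 60-month sample averages are widely dispersed and frequently imply a negative premium.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) parameters: ~5% annual premium, ~15% annual volatility.
true_monthly_mean = 0.05 / 12
monthly_vol = 0.15 / np.sqrt(12)

# Draw many independent 60-month samples and annualize their average returns.
samples = rng.normal(true_monthly_mean, monthly_vol, size=(10_000, 60))
annualized_means = samples.mean(axis=1) * 12

print(f"Std. dev. of 60-month estimates: {annualized_means.std():.1%}")
print(f"Share of samples implying a negative premium: {(annualized_means < 0).mean():.0%}")
```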
State Street Associates performed an extensive study[9] covering 13 datasets, generating over 50,000 optimal portfolios and testing their performance out of sample, with tests organized into three categories corresponding to the typical hierarchy of decisions for plan sponsors: asset-liability management, allocation across beta sources, and the search for alpha.
In developing the tests, the authors did not assume any forecasting skill, nor did they rely on short rolling samples of returns. The study used either no information about expected returns, which leads to the minimum-variance portfolio, or expected returns based on long-term risk premiums estimated over long (e.g., 50-year) periods. It found that optimization added value even with these naïve yet plausible inputs.
Quantitative and discretionary practitioners should use the minimum-variance portfolio as a starting point when they build models of expected returns. They should begin by assuming they have no information about expected returns, optimize using only the covariance matrix, and then add models of expected returns to see how they perform relative to the minimum-variance portfolio.
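As a minimal sketch of that starting point (hypothetical covariance numbers, and the unconstrained closed form rather than a production optimizer), the minimum-variance portfolio can be computed directly from the covariance matrix, with no expected-return estimates at all:

```python
import numpy as np

def min_variance_weights(cov):
    """Fully invested minimum-variance weights: w = inv(S) @ 1 / (1' inv(S) 1).

    Uses only the covariance matrix; short positions are allowed.
    """
    cov = np.asarray(cov, dtype=float)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # inv(S) @ 1 without forming the inverse
    return w / w.sum()               # scale so the weights sum to one

# Illustrative annualized covariance matrix for three assets.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
print(min_variance_weights(cov))
```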
Likewise, there are numerous ways to improve mean-variance optimization, such as incorporating regime-dependent inputs and using optimization methods that account for the so-called “tail risk” inherent in periods of extreme market turbulence.
As investors re-evaluate portfolio structures in the wake of the financial crisis, portfolio optimization tools are more important than ever. Approaches such as 1/N may be appealing in their simplicity, but they fail to employ essential investment knowledge. Basic tools and reasonable intuition can go a long way toward achieving efficient portfolio diversification.
[1] Pension Investment Association of Canada. 2008. “Composite Asset Mix Report 2008.”
[2] DeMiguel, Victor, Lorenzo Garlappi, and Raman Uppal. 2009. “Optimal versus Naïve Diversification: How Inefficient Is the 1/N Portfolio Strategy?” Review of Financial Studies, vol. 22, no. 5 (May):1915-1953.
[3] Kritzman, Mark, Sébastien Page, and David Turkington. 2010. “In Defense of Optimization: The Fallacy of 1/N.” Financial Analysts Journal, vol. 66, no. 2 (March/April).
[4] Jorion, Philippe. 1986. “Bayes-Stein Estimation for Portfolio Analysis.” Journal of Financial and Quantitative Analysis, vol. 21, no. 3 (September):279-292.
[5] Michaud, Richard O. 1998. Efficient Asset Management: A Practical Guide to Stock Portfolio Optimization and Asset Allocation. Cambridge, MA: Harvard Business School Press.
[6] Black, Fischer, and Robert Litterman. 1992. “Global Portfolio Optimization.” Financial Analysts Journal, vol. 48, no. 5 (September/October):28-43.
[7] Chow, George. 1995. “Portfolio Selection Based on Return, Risk, and Relative Performance.” Financial Analysts Journal, vol. 51, no. 2 (March/April):54-60.
[8] Chevrier, Thomas, and Robert E. McCulloch. 2008. “Using Economic Theory to Build Optimal Portfolios.” Working paper, University of Chicago (April).
[9] Op. cit., Kritzman, Page and Turkington.
David Turkington, CFA, is Vice-president with State Street Associates.