Random Portfolios in Finance


Random portfolios have the power to revolutionize fund management. You might think that means they must be esoteric and complex. You would be wrong — the idea is very simple.

This page is divided into the following sections:

  • The Idea
  • Of Monkeys and Men, and Darts
  • Performance Measurement
  • Testing Trading Strategies
  • Rational Investment
  • Additional Uses of Random Portfolios
  • History
  • Some Technical Points
  • Discussion
  • Products
  • References

The Idea

In order to have random portfolios you need a universe of assets and some set of constraints to impose on the portfolios. A set of random portfolios is a sample from the population of portfolios that obey all of the constraints.

Figure 1 shows the sampling area (in weights) for a toy problem of three assets. The constraints are:

  • long-only
  • no weight greater than 45%
  • a maximum volatility

The volatility constraint is quadratic in the weights, so its boundary is curved rather than a straight line.

Figure 1: Allowable weights given some constraints
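To make the idea concrete, here is a minimal sketch of generating such a sample by naive rejection sampling. The covariance matrix and constraint bounds below are invented for illustration; real generators are considerably cleverer about satisfying the constraints.

    import numpy as np

    rng = np.random.default_rng(42)

    # Invented annualized covariance matrix for three assets.
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.16]])

    MAX_WEIGHT = 0.45   # no weight greater than 45%
    MAX_VOL = 0.25      # maximum portfolio volatility

    def random_portfolio():
        """Draw long-only weights until all constraints hold."""
        while True:
            w = rng.dirichlet(np.ones(3))   # uniform on the simplex
            vol = np.sqrt(w @ cov @ w)
            if w.max() <= MAX_WEIGHT and vol <= MAX_VOL:
                return w

    sample = np.array([random_portfolio() for _ in range(1000)])

Rejection sampling is fine for three assets but becomes hopeless as the universe grows, which is one reason specialized software exists.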


Of Monkeys and Men, and Darts

The most familiar form of random portfolios is the stock market dartboard game. Humans or monkeys throw darts to select one or a few assets. The selection via darts is then compared to some professional selection. This is fun, and almost a great approach, but has two failings.

The first failing is that we only get to see if the professional outperforms one random selection. We don’t get to see what fraction of random selections the professional outperforms. To be truly informed we need to see on the order of a hundred or more random selections.

The second failing is that the darts do not obey any constraints. This is fair in a newspaper contest where the experts don’t have constraints either. But real funds do have constraints. Comparing a fund with constraints to random portfolios without constraints puts the fund at a disadvantage.


Performance Measurement

There are two ways of using random portfolios for performance measurement: the static method and the shadowing method. We will also see why performance measurement via benchmarks is inferior.

The Static Method

In the static method we generate a set of random portfolios that obey the constraints at the beginning of the time period, hold those portfolios throughout the time period, and find their returns for the period. The percentile of the fund is the percent of the random portfolios with larger returns. (The convention in performance measurement is for good to be near the zeroth percentile and bad to be near the 100th percentile.)

Figure 2 is an example. It shows the distribution of returns of the random portfolios in blue, and the return of the fund in gold. In this case the fund did not perform very well.

Figure 2: Static method of performance measurement
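Computing the percentile is then trivial once the random portfolio returns are in hand; a minimal sketch with invented numbers:

    import numpy as np

    def performance_percentile(fund_return, random_returns):
        """Percent of random portfolios with larger returns.
        By the convention above, near 0 is good, near 100 is bad."""
        return 100.0 * np.mean(np.asarray(random_returns) > fund_return)

    # Invented example: the fund returned 4% over the period.
    rng = np.random.default_rng(0)
    random_returns = rng.normal(0.06, 0.03, size=1000)
    print(performance_percentile(0.04, random_returns))   # about 75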

This is very much like performance measurement with peer groups. In both cases we are using a single time period, and in both cases we are comparing our fund to a set of alternative possibilities. There are some significant differences though — we highlight two.

In peer groups the alternatives are other funds that are “similar” to the fund of interest. Ideally only funds with the same constraints would be used, yet we also want a lot of peers in order to get more precision. Those two aims pull in opposite directions. There is no such tension with random portfolios: we can generate as many random portfolios as we like.

A more serious problem with peer groups is that we don’t know what the results mean. We are meant to believe that if our fund of interest did better than all but 10% of its peers, then our fund’s skill is roughly at the 10th percentile among its peers. This assumes that differences in skill dominate differences in luck. Such an assumption is unlikely to be justified. In particular if it is the case that no fund has skill (or all the funds have equal skill), then our fund is at the 10th percentile of luck — the measure contains no information at all. Burns (2007a) expands on this argument. Surz (2006, 2009) discusses additional problems with peer groups.

The Shadowing Method

The static method for random portfolios is more informative than peer groups. But it is still rather generic information.

Performance is — at root — about decisions. The idea of the shadowing method is to use random trades to mimic the decisions that the fund takes. This can give us a much clearer picture of the value of the decision process. An example is discussed in the performance measurement application page.
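A rough sketch of the mechanics, assuming we are given the fund’s starting weights and its turnover at each period. The trade rule here, shifting a random amount of weight between two random assets, is a gross simplification of our own; a real implementation would make each random trade obey the fund’s constraints.

    import numpy as np

    rng = np.random.default_rng(1)

    def shadow(start_weights, turnovers, period_returns):
        """Mimic the fund: whenever it trades, make a random trade
        of the same size; return the shadow's total return.
        period_returns is an iterable of per-asset return vectors."""
        w = np.array(start_weights, dtype=float)
        total = 1.0
        for turnover, r in zip(turnovers, period_returns):
            # Hypothetical trade rule: move weight between two assets.
            sell, buy = rng.choice(len(w), size=2, replace=False)
            amount = min(turnover, w[sell])
            w[sell] -= amount
            w[buy] += amount
            total *= 1.0 + w @ r   # earn the period's return
            w = w * (1.0 + r)      # let weights drift with returns
            w /= w.sum()
        return total - 1.0

Running many shadows gives a distribution of returns from mindless versions of the fund’s own activity, against which the fund’s actual return can be placed.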

Benchmarks

A fund is judged against a benchmark by comparing a series of returns from the fund with the corresponding returns for the benchmark. This method has a few problems. The major one is the time it takes to decide that a good fund really is better than the benchmark — it probably will take decades.

The power of these tests in the ideal setting is given in Burns (2007a) — several years are required to get reasonable power even for exceptional skill. But the reality is much worse than the ideal because the difficulty of beating a benchmark is not constant. If the most heavily weighted assets in the benchmark happen to perform relatively well, then it will be hard to beat the benchmark. Conversely, if the most heavily weighted assets perform relatively poorly, then it will be easy to beat the benchmark. Kothari and Warner (2001) discuss this.
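A back-of-the-envelope version of that power calculation, using the normal approximation to a t-test on annual active returns. This sketch uses scipy; the 5% significance and 80% power levels are conventional choices of ours, not from the paper.

    from scipy.stats import norm

    def years_needed(information_ratio, alpha=0.05, power=0.80):
        """Years of annual data for a test of zero active return
        to reach the given power (normal approximation)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return (z / information_ratio) ** 2

    print(years_needed(1.0))   # exceptional skill: about 8 years
    print(years_needed(0.5))   # good skill: about 31 years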

Figure 3 shows, for each year, the percentage of funds benchmarked against the S&P 500 that outperformed the benchmark; the specifics are in “Performance Measurement via Random Portfolios”. For the comparison to be meaningful, we would need to believe that the fund managers, as a group, were poor for years, suddenly became good for three years, and then went back to being poor.

Figure 3: Percent of S&P 500 benchmarked funds outperforming by year

More details about performance measurement are in the working paper “Performance Measurement via Random Portfolios”.

Burns (2007b) discusses performance measurement in the slightly different setting of testing the recommendations of market commentators.


Testing Trading Strategies

Fund managers and potential fund managers face a number of problems when deciding on a trading strategy. Here we examine two:

  • Data snooping
  • Herd risk

Essentially there is the problem of being wrong, and the problem of being right.

Data snooping makes the strategies look better than they really are. To see why, suppose that you tried 1000 trading strategies that were completely random. The one that performed best might look reasonably good. Hopefully an investment manager isn’t going to be trying completely random strategies, but selection bias will still exist.
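The effect is easy to demonstrate. In the simulation below (all numbers invented) every strategy is pure noise, yet the best of the 1000 typically shows an annualized Sharpe ratio around 1.5:

    import numpy as np

    rng = np.random.default_rng(7)

    # 1000 skill-free strategies: 5 years of monthly returns,
    # all drawn from the same zero-mean distribution.
    returns = rng.normal(0.0, 0.04, size=(1000, 60))

    # Annualized Sharpe ratio of each strategy.
    sharpe = returns.mean(axis=1) / returns.std(axis=1) * np.sqrt(12)
    print(sharpe.max())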

If similar models are being used in several companies to manage a lot of money, then a fund manager using those models is subject to dramatic moves in the market. This became evident to a lot of people in August 2007. Without a crisis it is hard to tell that this is happening.

Random portfolios can help with the first problem, and possibly with the second.

Trading strategies can be tested using the shadowing method discussed above. There is one key difference between performance measurement and testing a trading strategy. When testing a trading strategy we want to do the shadowing process a number of times with different starting portfolios.

This testing process reduces the effect of data snooping because there is a much stricter definition of a successful strategy. The fund manager is still vulnerable to changes in market behavior, but much less susceptible to wrong interpretations of the historical period.

Testing with random portfolios may be able to reduce herding because the technology makes it feasible to pick up more ephemeral signals.

The mechanics of doing the testing are given in the working paper “Random Portfolios for Evaluating Trading Strategies”. There is also a blog post about backtesting.


Rational Investment

Current practice is less than rational for:

  • tracking error constraints
  • performance fees
  • constraint bounds

Tracking Error Constraints

Many mandates give the investment manager a benchmark and a maximum tracking error from the benchmark. This is wasteful in several respects.

In virtually all cases the investor can buy an index fund for the benchmark with very low management fees. What’s the advantage of hiring an active manager to run a fund that is highly correlated with that index fund?

If the manager doesn’t outperform the benchmark by more than the extra management fees, then there is obviously no advantage at all. If the manager does have the skill to consistently beat the benchmark, then that skill could be put to much better use. A skilled fund manager should, in general, be able to achieve higher returns when the tracking error constraint is dropped.

Assuming the investor has money in the index, the higher return of the unconstrained manager will be more valuable as well. All else being equal, it is better for the active fund to have a low correlation with the index, and a low correlation is the same thing as a large tracking error. That is, the rational thing would be to impose a minimum tracking error constraint rather than a maximum tracking error constraint.
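The equivalence comes straight from the tracking error formula: the squared tracking error is the fund’s variance plus the benchmark’s variance minus twice their covariance, so for fixed volatilities the tracking error falls as the correlation rises. A quick numerical check (the volatilities are invented):

    import numpy as np

    sigma_f, sigma_b = 0.18, 0.16   # invented fund and benchmark vols

    for rho in (0.95, 0.8, 0.5, 0.2):
        te = np.sqrt(sigma_f**2 + sigma_b**2
                     - 2 * rho * sigma_f * sigma_b)
        print(f"correlation {rho:.2f}: tracking error {te:.1%}")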

Maximum tracking error constraints exist in order to preserve the illusion that we can see whether the fund manager is outperforming. We can’t really tell by using benchmarks, but we can tell using random portfolios even if there is no tracking error constraint at all. Random portfolios work equally well for performance measurement no matter what the tracking error is.

Performance Fees

If you have a performance fee, it is not a good idea to have it relative to a benchmark. As Figure 3 implies, that is mostly a bet between the fund manager and the investor on whether large caps will outperform. Skill will have very little to do with it.

A more reasonable target would be the mean return of a set of random portfolios that obey the constraints of the fund.

Constraint Effects

We can use random portfolios to decide rationally what the constraint bounds should be. Constraints are habitually imposed with no sense of what is being gained and lost.

The working paper “Does my beta look big in this?” includes such an analysis.

Figure 4 shows an example analysis of constraints. The densities of realized utility over time are shown for a certain set of constraints (gold) and for the same constraints plus a volatility constraint (blue). During normal market times the two densities are similar, so we would be fairly indifferent to the volatility constraint. During the poor market conditions of 2008, however, the volatility constraint was quite valuable.

Figure 4: Effect of constraints in 2007-2008
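In outline, producing such a figure requires only a utility function and two sets of random portfolios, one per constraint set. A sketch with a mean-variance utility (the utility and the risk aversion value are our own choices for illustration):

    import numpy as np

    def realized_utilities(weights, asset_returns, risk_aversion=2.0):
        """Mean-variance utility realized by each random portfolio.

        weights:       (n_portfolios, n_assets)
        asset_returns: (n_periods, n_assets) over the period
        """
        port = weights @ asset_returns.T   # portfolios x periods
        return port.mean(axis=1) - risk_aversion * port.var(axis=1)

Compute this for portfolios generated with and without the volatility constraint, then compare the two densities, as in Figure 4.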


Additional Uses of Random Portfolios

A number of additional uses of random portfolios have been suggested, and there are surely many applications yet to be discovered. Here we discuss a few.

Assessing risk models

Random portfolios provide a means of generating realistic portfolios that can be put through risk models in order to see how they perform. Risk models can be compared with each other, or individual models can be tested for weak spots.

Figure 5 shows an example of comparing a risk model’s prediction of volatility to the realized volatility for some 120/20 portfolios: the correlation between predicted and realized volatility was computed across a large number of random portfolios.

Figure 5: Correlation of predicted and realized volatility
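A sketch of the computation behind such a comparison, with the risk model represented abstractly by the covariance matrix it predicts (the shapes and inputs are illustrative assumptions):

    import numpy as np

    def predicted_vs_realized(weights, predicted_cov, future_returns):
        """Correlation of model-predicted volatility with realized
        volatility across a set of random portfolios.

        weights:        (n_portfolios, n_assets)
        predicted_cov:  (n_assets, n_assets) from the risk model
        future_returns: (n_periods, n_assets) over the test window
        """
        predicted = np.sqrt(
            np.einsum('pi,ij,pj->p', weights, predicted_cov, weights))
        realized = (weights @ future_returns.T).std(axis=1)
        return np.corrcoef(predicted, realized)[0, 1]

Comparing this correlation across candidate risk models, or across portfolio types such as the 120/20 portfolios of Figure 5, shows where each model is weak.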

General quant tool

Random portfolios can be used in pretty much all quantitative exercises involving portfolios. A list of some of the uses is in the quant research applications page.


History

The idea of random portfolios is not new. An early use was “program selected portfolios” by Dean LeBaron and colleagues at Batterymarch Financial Management in the 1970s. An even earlier use is described in a 1965 American Statistical Association speech by James Lorie (any speech that starts with Mark Twain and ends in St. Tropez can’t be all bad).

At that time random portfolios stretched computational ability. With suitable technology, computational speed is no longer a serious issue.


Some Technical Points

The statistical bootstrap and random permutation tests are techniques that have radically changed data analysis in the last couple of decades. Depending on how random portfolios are used, an analysis with them is generally equivalent to one of these two techniques.

The use of random portfolios for performance measurement is analogous to a random permutation test. The examination of the effect of constraint bounds, as in Figure 4, is similar to how the bootstrap can be used.

The only real difference is that, because of the constraints, random portfolios are harder to compute.


Discussion

Senior Consultant published some testimonials on PIPODs. While this is specifically about one implementation, most of the comments apply to random portfolios in general.

Even naively generated random portfolios can be useful; examples include Mikkelsen (2001), Kritzman and Page (2003), and Assoé, L’Her and Plante (2004). Kothari and Warner (2001) show that benchmarking against an index is problematic, and their technique involves random portfolios.


Products

The following products were created independently of each other, and only Portfolio Probe is associated with Burns Statistics.

Portfolio Probe from Burns Statistics. This has a wide range of constraints, including the very important one of limiting the volatility of the portfolios.

PODs and PIPODs from PPCA Inc.


References

Assoé, Kodjovi, Jean-François L’Her and Jean-François Plante (2004). “Is There Really a Hierarchy in Investment Choice?” http://www.hec.ca/cref/pdf/c-04-15e.pdf

Bridgeland, Sally (2001). “Process attribution — a new way to measure skill in portfolio construction” Journal of Asset Management.

Burns, Patrick (2003). Does my beta look big in this? (pdf)

Burns, Patrick (2004). Performance measurement via random portfolios (pdf)

Burns, Patrick (2006). Random portfolios for evaluating trading strategies (pdf)

Burns, Patrick (2006). Portfolio analysis with random portfolios (pdf of annotated presentation slides)

Burns, P. (2006). “Random Portfolios for Performance Measurement” in Optimisation, Econometric and Financial Analysis E. Kontoghiorghes and C. Gatu, editors. Springer.

Burns, P. (2007a). “Bullseye” Professional Investor March issue.
A very similar version is available as Dart to the Heart

Burns, P. (2007b). Cramer vs. Pseudo-Cramer (pdf)

Carl, Peter, Brian Peterson and Kris Boudt (2010). “Business Objectives and Complex Portfolio Optimization”. R/Finance tutorial.

Clare, Andrew and Nicholas Motson and Stephen Thomas (2013). “An evaluation of alternative equity indices Part 1: Heuristic and optimised weighting schemes” Cass Business School

Cohen, Kalman J. and Jerry A. Pogue (1967). “Some comments concerning mutual fund vs. random portfolio performance”. Journal of Business.

Daniel, G., D. Sornette and P. Wohrmann (2008). “Look-Ahead Benchmark Bias in Portfolio Performance Evaluation” working paper at SSRN

Dawson, Richard and Richard Young (2003). “Nearly-uniformly distributed, stochastically generated portfolios” in Advances in Portfolio Construction and Implementation edited by Stephen Satchell and Alan Scowcroft. Butterworth-Heinemann.

Elton, E. J., M. J. Gruber, S. J. Brown and W. N. Goetzmann (2003). Modern Portfolio Theory and Investment Analysis, Sixth Edition (Chapter 24, Evaluation of Portfolio Performance).

Friend, Irwin and Douglas Vickers (1967). “Re-evaluation of alternative portfolio-selection methods” (gated). Journal of Business

Kothari, S. P. and Jerold Warner (2001). “Evaluating Mutual Fund Performance”. Journal of Finance. Working paper at SSRN.

Kritzman, Mark and Sébastien Page (2003). “The Hierarchy of Investment Choice” Journal of Portfolio Management 29, number 4, pages 11-23.

Lisi, Francesco (2011). “Dicing with the Market: Randomized Procedures for Evaluation of Mutual Funds”. Quantitative Finance 11, number 2, pages 163-172. University of Padova working paper. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1375730

Mikkelsen, Hans (2001). “The Relation Between Expected Return and Beta: A Random Resampling Approach” SSRN papers

Shaw, William (2010) “Monte Carlo Portfolio Optimization for General Investor Risk-Return Objectives and Arbitrary Return Distributions: A Solution for Long-Only Portfolios” SSRN version

Simon, Thibaut (2010). “An empirical study of stock portfolios based on diversification and innovative measures of risks”. Masters thesis.

Stein, Roberto (2012). “Not Fooled by Randomness: Using Random Portfolios to Analyze Investment Funds”. SSRN version.

Surz, Ron (1994). “Portfolio Opportunity Distributions: An Innovation in Performance Evaluation” Journal of Investing.

Surz, Ron (1996). “Portfolio Opportunity Distributions: A Solution to the Problems with Benchmarks and Peer Groups” Journal of Performance Measurement.

Surz, Ron (1997). “Global Performance Evaluation and Equity Style: Introducing Portfolio Opportunity Distributions” in Handbook of Equity Style Management. Frank Fabozzi Associates.

Surz, Ron (2004). “‘Hedge Funds Have Alpha’ is A Hypothesis Worth Testing” Albourne Village library.

Surz, Ron (2005). “Testing the Hypothesis ‘Hedge Fund Performance is Good’”. Journal of Wealth Management. Spring issue.

Surz, Ron (2006). “A Fresh Look at Investment Performance Evaluation: Unifying Best Practice to Improve Timeliness and Reliability” Journal of Portfolio Management Summer issue.

Surz, Ron (2007). “Accurate Benchmarking is Gone But Not Forgotten: The Imperative Need to Get Back to Basics” Journal of Performance Measurement, Vol. 11, No. 3, Spring, pp 34-43.

Surz, Ron (2009). “A Handicap of the Investment Performance Horserace” Published as “Handicap in the Investment Performance Horserace” in Advisor Perspectives 2009 April 28.

Surz, Ron (2010). “The New Trust but Verify”. Investment and Wealth Management. SSRN version.

Videos

Surz, Ron (2013) Peer into the Future of Hedge Fund Evaluation: Send in the Clones

See Also

The random portfolios blog category
