Stumbling blocks on the trek from theory to practical optimization in fund management.
Problem 1: portfolio optimization is too hard
If you are using a spreadsheet, then this is indeed a problem. Spreadsheets are dangerous when given a complex task, and portfolio optimization qualifies as complex in this context (complex at least in its data requirements).
If you are using a more appropriate computing environment, then it isn’t really all that hard. There are a few issues that need to be dealt with, but taking them one at a time keeps the task from being overwhelming.
If you are using spreadsheets, my prescription is to switch to R. When there is real money on the line, using a spreadsheet for portfolio optimization seems to me to be penny wise and dollar foolish.
If you have other problems with optimization, read the rest of this post.
Problem 2: portfolio optimizers suggest too much trading
A major frustration with optimizers is that the turnover can be excessive.
All reasonable portfolio optimizers allow:
- turnover constraints
- transaction costs
Use either of these to reduce the turnover to a suitable amount.
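As a toy sketch of the second option (all numbers here are invented, and a real optimizer would use a proper solver rather than a grid search), adding a linear transaction-cost term to a mean-variance utility shrinks the suggested trade:

```r
## Toy sketch: two assets, fully invested, mean-variance utility with a
## linear transaction-cost penalty.  All numbers are invented.
mu <- c(0.08, 0.05)                     # hypothetical expected returns
V  <- matrix(c(0.04, 0.01,
               0.01, 0.02), nrow = 2)   # hypothetical variance matrix
w0 <- c(0.5, 0.5)                       # current portfolio
lambda <- 3                             # risk aversion

best.weight <- function(kappa) {
  w1 <- seq(0, 1, by = 0.001)           # weight in asset 1
  util <- sapply(w1, function(w) {
    wv <- c(w, 1 - w)
    sum(mu * wv) - lambda * drop(t(wv) %*% V %*% wv) -
      kappa * sum(abs(wv - w0))
  })
  w1[which.max(util)]
}

best.weight(0)     # no costs: the full suggested trade
best.weight(0.01)  # with costs: a smaller trade, closer to w0
```

Raising `kappa` far enough suppresses the trade entirely, which is the same effect as a tight turnover constraint.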
We don’t often let cars roll uncontrolled down a hill, and we shouldn’t let optimizers run uncontrolled either.
Problem 3: expected returns are needed
First off, this isn’t strictly true. You can find minimum variance portfolios which need a variance matrix but not expected returns. The success of low volatility investing is a reason to go down this route.
But assuming that you are an active investor, you need expectations in some sense. There are a number of techniques that don’t require numerical expected returns.
Anyone should be able to provide an ideal target portfolio — the portfolio that you would like to hold when all constraints are ignored. Once you have the target portfolio, then you can get a portfolio that is “close” to the target but does obey the constraints. One of those constraints should almost surely be turnover.
In Portfolio Probe you can get close to your target portfolio without either expected returns or a variance matrix.
Probably a better solution would be to minimize the tracking error to the target portfolio. This does require a variance matrix.
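For concreteness, the tracking error to the target is just the square root of a quadratic form in the distance from the target, computed with the variance matrix (the numbers below are made up):

```r
## Tracking error of portfolio 'w' relative to 'target', given a
## variance matrix V (invented numbers, per-period units).
tracking.error <- function(w, target, V) {
  d <- w - target
  sqrt(drop(t(d) %*% V %*% d))
}

V <- matrix(c(0.04, 0.01,
              0.01, 0.02), nrow = 2)
target <- c(0.6, 0.4)
tracking.error(target, target, V)       # zero at the target itself
tracking.error(c(0.5, 0.5), target, V)  # positive away from the target
```

Minimizing this subject to the real-world constraints gives a feasible portfolio that is as close to the target as the risk model can measure.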
The technique of reverse optimization (also called implied alpha) can be used iteratively to try to find a portfolio that looks like what you want in terms of the expected returns that are implied. This avoids actually doing optimization, but it is labor-intensive and it depends on the constraints not spoiling the implied alphas (which is perhaps doubtful).
If you can order the assets in your universe in terms of expected returns, then it is feasible to produce expected returns to give to an optimizer. Ranking assets is much easier than giving numerical estimates of returns.
A paper by Almgren and Chriss explains how to turn ranks into numerical expected returns. The simple case just requires the use of the qnorm function in R. That gives you relative sizes, but you will still want to scale them to match the variance matrix.
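A minimal sketch of the simple case (assuming the convention that rank n is the best asset; the full Almgren-Chriss centroid is a refinement of this):

```r
## Turn ranks into numeric scores with normal quantiles.  By the
## convention assumed here, rank n (the best asset) gets the largest
## score.  The scores are relative: they still need to be scaled to
## match the variance matrix.
rank.to.alpha <- function(ranks) {
  n <- length(ranks)
  qnorm(ranks / (n + 1))
}

alpha <- rank.to.alpha(1:5)   # increasing, symmetric around zero
```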
Problem 4: mean-variance optimization is restrictive
There is a myth that mean-variance optimization is only useful when returns are normally distributed. That’s backwards. If returns are normally distributed, then mean-variance optimization is all that can be done — all other utilities will be equivalent. See more at “Ancient portfolio theory”.
If the return distribution of any asset in the universe is not reasonably close to symmetric, then, yes, mean-variance optimization is restrictive and should not be used. Examples of disruptive assets are bonds and options.
However, if the universe is just stocks, then mean-variance is a pretty good approximation to the best we can do. Skewness and kurtosis could be added to the utility to account for the non-normality of returns. The blog post “Predictability of skewness and kurtosis in S&P constituents” indicates that skewness is probably close to impossible to predict and the predictability of kurtosis is limited.
In 1999 lower partial moments and semi-variance were popular with tech-stock investors because tech stocks weren’t really risky: they only went up. It turned out that there was symmetry in the returns of tech stocks; it was just that the down-side came later.
If indeed you are in a situation — including fixed income or options — where mean-variance optimization is not appropriate, then you should probably do scenario optimization.
Problem 5: portfolio optimization inputs are noisy estimates
Portfolio optimizers are stupid enough to believe what we tell them. The optimizer gives us a solution as if we really knew the expected returns and the variance matrix. In fact:
- estimates of expected returns are almost total noise
- estimates of the variance matrix are quite noisy
“almost total noise” applies to the best fund managers — the “almost” needs to be dropped for below-average fund managers.
Factor models of variance are often input to optimizers. These are much better than sample variance matrices for large universes. However, using a shrinkage estimate is probably better than either.
We have a Whorfian problem with “portfolio optimization”. People think that we are optimizing the portfolio when we say that. In fact we are really optimizing the trade. For some purposes it doesn’t matter, but it does matter when we are thinking about what to do about noisy inputs.
Black-Litterman type operations
Some people think that doing something like Black-Litterman is a solution to this problem. It isn’t. If done intelligently, then it reduces — but does not eliminate — the noise in the expected returns.
The real solution to this problem goes by the name of robust optimization. I find this term unfortunate since “robust” has several other meanings that are easily confused with what is meant here: getting good solutions to a trade optimization from noisy inputs.
There is a rather large selection of proposals for implementing solutions. Most of them are quite complicated.
There is a simple and easily implemented solution (though the exact amount of shrinkage probably needs to be found via experimentation).
Here’s the story (assuming we have an existing portfolio):
If the inputs we give to the optimizer are exactly true, then we should accept what the optimizer says. We should do the suggested trade — remember we are optimizing the trade.
If the inputs we give to the optimizer are complete garbage, we should do nothing. Our trade should be zero.
The reality is that our inputs are somewhere between exactly true and complete garbage, so our trade should be somewhere between the suggested trade and no trade. We want to shrink the trade.
It is easy to shrink the trade either by imposing a (stronger) turnover constraint or by increasing the transaction costs. How much to do that is an issue, of course, but the principle is simple. A guess is likely to be better than not doing it at all.
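The most direct form of that shrinkage is to move only part of the way from the current portfolio to the optimizer's suggestion (the trust parameter here is exactly the guess that needs experimentation):

```r
## Shrink the trade: 'trust' near 1 means believe the optimizer,
## 'trust' near 0 means believe the inputs are garbage and stay put.
shrink.trade <- function(current, suggested, trust) {
  stopifnot(trust >= 0, trust <= 1)
  current + trust * (suggested - current)
}

current   <- c(0.5, 0.3, 0.2)
suggested <- c(0.7, 0.2, 0.1)
shrink.trade(current, suggested, 0.4)   # do 40% of the suggested trade
```

In practice the same effect is usually achieved inside the optimizer, via the turnover constraint or the cost multiplier, so that the shrunken trade still obeys the constraints.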
Problem 6: transaction costs are tricky
This is true. Some of the costs are straightforward, but market impact is hard to pin down.
But there’s an even trickier bit: either the transaction costs need to be scaled to match the expected returns and variance, or the expected returns and variance need to be scaled to match the transaction costs.
The three entities all appear in the utility function, and scaling is necessary for the utility to make sense.
The coward’s way out is just to impose a turnover constraint.
The other way is to work and think hard about trading costs. And probably to use an optimizer that allows flexible specification of costs.
Problem 7: risk and alpha factor alignment trouble
There has been talk among the portfolio optimization literati about alpha eating and factor alignment. The whole thing sounds seriously geeky (even to a nerd like me).
The gist of it is that if there are factors used in the expected returns that are not factors in the risk model, then the optimizer will think those factors are essentially riskless and use them too much.
Update: There is now a blog post just about this topic: “Alpha alignment”.
One of the main “solutions” to this is to add the missing factors to the risk model. This of course assumes that there are factors in the expected returns model.
I suspect that the real problem is that factor models are the wrong technology to use as the variance matrix in optimizers. The solution, then, is better technology. My suggestion is to use Ledoit-Wolf estimates which shrink towards equal correlation.
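A simplified sketch of the idea (the real Ledoit-Wolf estimator also chooses the shrinkage intensity from the data; the fixed intensity here is an assumption for illustration):

```r
## Shrink a sample variance matrix toward the equal-correlation target.
## 'intensity' is fixed here; Ledoit-Wolf estimate it from the data.
shrink.eqcor <- function(returns, intensity = 0.3) {
  S <- cov(returns)
  sdev <- sqrt(diag(S))
  cors <- cov2cor(S)
  rbar <- mean(cors[lower.tri(cors)])   # average off-diagonal correlation
  target <- rbar * outer(sdev, sdev)    # equal-correlation matrix
  diag(target) <- diag(S)               # keep the sample variances
  (1 - intensity) * S + intensity * target
}

set.seed(42)
rets <- matrix(rnorm(200 * 5, sd = 0.01), 200, 5)  # fake daily returns
Vs <- shrink.eqcor(rets)
```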
Problem 8: constraints get in the way
This is the invisible problem. It is of no concern to people because they don’t know they have it.
Constraints are in place so that the portfolio doesn’t do anything too stupid. But how many have checked that the constraints are doing what was intended?
You can directly investigate the effect of your constraints.
There might be a way to look for constraints that actually help optimization.
The “the” in the title is of course huckstering nonsense — I don’t really know which problems are on top. What other problems are in the running?
Portfolio optimization in R
Many of the commercial portfolio optimizers have an R interface. This of course includes Portfolio Probe.
There are a number of more or less naive portfolio optimization implementations in R that have been contributed. See the Empirical Finance task view for more details.
You can get a function that does Ledoit-Wolf shrinkage towards equal correlation by doing (in R):
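The commands themselves are missing from this copy of the post; given that BurStFin is now on CRAN (see the update below), they would presumably be:

```r
install.packages("BurStFin")   # once per version of R
library(BurStFin)              # in each session where you use the function
```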
The first command you only need to do once (per version of R); the second you need to do in each R session in which you wish to use the function, which is called var.shrink.eqcor.
By default this ensures that the minimum eigenvalue is at least 0.001 times the largest eigenvalue. This is a way of avoiding the factor alignment problem. There is no scientific reason for that particular value of the limit — feel free to experiment and report back.
The BurStFin package also has factor.model.stat which estimates a statistical factor model.
Pingback: Thursday links: paused portfolios | Abnormal Returns
I arrived at your blog via the Systematic Investor blog. I am not an advanced R programmer and more often than not errors stack up.
I would kindly ask if you can explain how I can load and test the code you have written. I would highly appreciate your assistance.
Wishing you a pleasant weekend, I here remain
If you are talking about the ‘BurStFin’ code, then doing what is shown should work if you are using a version of R older than 2.14.x. If you are using 2.14, then probably the easiest thing to do is use an older version of R to get BurStFin, ‘save’ the functions, and then when using 2.14 ‘attach’ the saved file. I’ll update BurStFin at some point to add a namespace to it so it works in 2.14.
If you have more specific problems with ‘BurStFin’, then email me as the maintainer to ask questions.
If you have more general problems, then asking questions on Stackoverflow (tagged R) or on R-help is a possibility.
The R Inferno has suggestions on asking better questions.
Some hints for the R beginner can help you getting started.
BurStFin has been updated and is now on CRAN, see “The BurStFin R package”
Pingback: Multiple Factor Model Summary « Systematic Investor
With respect to problem 3, “asset ranks”, I am not clear why you suggest scaling the “centroid” return vector resulting from the Almgren and Chriss ranking approach before running the mean-variance optimization. The result of mean-variance optimization should be invariant to scaling the vector of expected returns.
How exactly should the vector be scaled to match the covariance matrix?
The authors of the paper themselves state on page 13:
In that quote, they are constraining the variance to a specific value. In that case the scaling of the expected returns does not matter. But in general it does.
If the ‘l’ is barely above zero, then you will essentially be getting the minimum variance portfolio. If ‘l’ is very large, then you will be maximizing expected return.
In order to have the same mean-variance problem, you need to change the risk aversion to correspond to the value of ‘l’. If you are maximizing the information ratio, then you want the scales of the expected returns and the variance to match.
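A sketch of why the scales must match (unconstrained mean-variance, so the algebra is exact): the solution is w = V^{-1} mu / (2 lambda), hence doubling the expected returns doubles every position unless the risk aversion is doubled with them. All numbers are invented.

```r
## Unconstrained mean-variance solution: maximize w'mu - lambda * w'Vw.
mv.weights <- function(mu, V, lambda) solve(V, mu) / (2 * lambda)

V  <- matrix(c(0.04, 0.01,
               0.01, 0.02), nrow = 2)     # invented variance matrix
mu <- c(0.08, 0.05)                       # invented expected returns

w1 <- mv.weights(mu, V, lambda = 3)
w2 <- mv.weights(2 * mu, V, lambda = 3)   # doubled returns: doubled bets
w3 <- mv.weights(2 * mu, V, lambda = 6)   # lambda rescaled too: same bets
```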
I have a specific question concerning the scale of “expected return” in the optimization process. If I transform the ranking information into z-scores (with mean 0 and standard deviation 1), I need to scale either the z-scores or the covariance matrix in order to make the utility function meaningful. Do you think there is a practical way to generate the scaling factor? What I am thinking is to calculate the average variance from the covariance matrix and downscale the “expected returns” so that the variance of the expected returns matches that average variance.
I think you are on the right track. But “match” should not, in general, mean equal. If your predictions are very poor, then the ratio of those two should be close to zero.
Pingback: Review of “Numerical Methods and Optimization in Finance” by Gilli, Maringer and Schumann | Portfolio Probe | Generate random portfolios. Fund management software by Burns Statistics
Pingback: Blog year 2012 in review | Portfolio Probe
Pingback: Popular posts 2012 January | Portfolio Probe
Pingback: Popular posts 2012 March | Portfolio Probe
Pingback: A comparison of some heuristic optimization methods | Portfolio Probe
Pingback: Another comparison of heuristic optimizers | Portfolio Probe
Pingback: Popular posts 2012 February | Portfolio Probe
Pingback: Popular posts 2012 October | Portfolio Probe
Pingback: Predicted correlations and portfolio optimization | Portfolio Probe
When getting data on stocks, do the series have to cover the same dates?
Having data for company A’s stock in the years before the financial crisis (when it did really well) and company B’s stock in recession years isn’t very useful, because the portfolio optimization software/algorithm will return output saying that all the weight should go to company A. Right?
You are in Problem 3 here: expected returns are hard.
As I said in my higher moments talk, using historical returns is close to completely useless for most purposes.
As you rightly point out, they would be even more dangerous if the historical periods are not the same (at least largely).
Thank you for the enlightening post. I am especially interested in the power of R as an investment analysis tool.
After successfully implementing the classical portfolio optimization model, I am looking for an efficient way to draw the whole feasible investment area in R (in addition to the efficient investment frontier). My current approach is to generate random portfolio weights (uniformly distributed inside a simplex), check that constraints are held and plot them. However, the charts I get are very different to the ones that I have seen as output from other programs (e.g. OptiFolio, ECVaR). My results show a very small cloud of portfolios.
Do you have any suggestions on how to produce a more detailed feasible investment area using R?
I suspect you are seeing something like Figure 3 of “Realized efficient frontiers”.
It seems that typical portfolios live in a fairly small part of the feasible space. I haven’t ever tried to do what you are doing, so I don’t really have any wisdom on the subject. I think you will need to do some sort of optimization with varying inputs. But I don’t fully see it, at the moment at least.
If you can assume that the feasible space is convex, then the ‘chull’ (as in ‘convex hull’) R function is your friend.
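A sketch of that combination, with invented expected returns and variance: generate random fully-invested long-only portfolios (uniform on the simplex), compute their risk-return points, and outline the cloud with chull:

```r
## Outline the cloud of feasible long-only portfolios in risk-return
## space.  Normalized exponentials are uniform on the simplex, so the
## sample is not biased toward the middle of the feasible region.
set.seed(1)
nport <- 2000
mu <- c(0.06, 0.09, 0.04)            # invented expected returns
V  <- diag(c(0.04, 0.09, 0.02))      # invented (diagonal) variance

W <- matrix(rexp(nport * 3), nport, 3)
W <- W / rowSums(W)                  # each row: weights summing to 1
risk <- sqrt(rowSums((W %*% V) * W)) # portfolio standard deviations
ret  <- drop(W %*% mu)               # portfolio expected returns
hull <- chull(risk, ret)             # indices of the boundary points

plot(risk, ret, pch = ".")
polygon(risk[hull], ret[hull], border = "blue")
```

The upper-left edge of that hull approximates the efficient frontier; a bigger nport fills in the cloud.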
Different optimization methods (like Markowitz, CVaR, robust optimization) perform differently in different years.
No one method always performed better than the others in my tests. Certain methods are good in some years and others are good in other years. Can you explain the reason for this?
I have two possibilities:
1) the results are random. The optimal portfolio depends on your starting place (see https://www.portfolioprobe.com/2010/11/05/backtesting-almost-wordless/) and your tests are just showing which technique appears to be best.
2) there really are differences between the techniques at different times. Markets are dynamic and so it is natural that different styles of portfolios would be best at different times.
My guess is that what you are seeing is a combination of these two.
I have a question: I have constructed the efficient frontier (going to infinity), so as I increase the expected return the Sharpe ratio increases, and I cannot find the tangent optimal portfolio. When I decrease the risk-free rate I reach the maximum Sharpe ratio within the same range. I am not sure if this is because I have only 7 stocks, or because of the high covariance between my stocks?
Nice article, it definitely was an interesting read!
This year I am writing my master’s thesis and I’m just enquiring whether you have any ideas. I wish to do some sort of coding, and the only ‘strict’ condition is that it has to be something to do with liquidity! I would love to incorporate this into portfolio optimisation. However, I don’t have many ideas!
Any assistance would be great!