Unproxying weight constraints

It is common practice to have portfolio constraints like:

wᵢ ≤ 0.05

That is, the weight of each asset can be no more than 5%.

Proxy for risk

We think that is what we want to do because we are so used to doing it.  But why should we care about the weight of assets?

I don’t think we do.  I think that that constraint is a proxy for constraining risk.

The reason we’ve used weight as a proxy for risk is a lack of technology.  We didn’t have the ability to constrain risk on an asset-by-asset basis, so we used a constraint that we could impose.

There are weight constraints that are about liquidity risk — these are sensible as weight constraints.  But they are different than blanket constraints over all assets.

Attributing variance to assets

We can partition the variance of a portfolio into pieces attributed to each asset.

Let’s start by noting that the portfolio variance is:

w'Vw

where w is the vector of portfolio weights and V is the variance matrix of the assets.  In R notation this is:

w %*% V %*% w

We can change this slightly to get our partition of the variance:

w * Vw

where the * operator is element-by-element multiplication.  In R this is:

w * V %*% w

(No extra parentheses are needed: %*% has higher precedence than *, so this computes w * (V %*% w).)

Just as weights sum to 1, we are interested in the risk fractions summing to 1.  We get this with:

f = (w * Vw) / (w'Vw)
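As a concrete check, here is a minimal R sketch with made-up numbers for three assets (the weights and variance matrix are purely illustrative):

```r
# three assets with illustrative weights and variance matrix
w <- c(0.5, 0.3, 0.2)
V <- matrix(c(0.04, 0.01, 0.00,
              0.01, 0.09, 0.02,
              0.00, 0.02, 0.16), nrow = 3)

portvar <- drop(w %*% V %*% w)       # portfolio variance w'Vw
f <- drop(w * (V %*% w)) / portvar   # risk fraction of each asset

sum(f)   # sums to 1 by construction
```

Constraining fᵢ ≤ 0.05 then bounds each asset’s share of the variance rather than its share of the capital.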

We started with:

wᵢ ≤ 0.05

I’m claiming that what is really wanted there is:

fᵢ ≤ 0.05

This is a type of constraint that is now available in the Portfolio Probe software.


If there is a benchmark, then the benchmark weight vector b needs to be subtracted from the portfolio weight vector. So we have:

f = ((w - b) * V(w - b)) / ((w - b)'V(w - b))

Here f is the risk fraction of each asset deviating from the benchmark.

Without a benchmark only assets that are in the portfolio contribute to the variance — f is zero for assets not in the portfolio.  With a benchmark it is only assets that have the same weight in the portfolio as in the benchmark that are guaranteed to contribute zero to the variance.  Assets not in the portfolio can be quite risky relative to the benchmark.
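A small R sketch (again with made-up numbers) illustrates the point: an asset with zero portfolio weight can still dominate the benchmark-relative variance:

```r
# three assets; asset 3 is held at zero weight but is in the benchmark
w <- c(0.5, 0.5, 0.0)                 # portfolio weights (illustrative)
b <- c(0.4, 0.3, 0.3)                 # benchmark weights (illustrative)
V <- matrix(c(0.04, 0.01, 0.00,
              0.01, 0.09, 0.02,
              0.00, 0.02, 0.16), nrow = 3)

a <- w - b                            # active weights
f <- drop(a * (V %*% a)) / drop(a %*% V %*% a)

round(f, 2)   # about 0.04 0.16 0.80 -- the unheld asset dominates
```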

The application that led to the original user request was an asset allocation with a benchmark.

Risk parity

While risk fraction constraints can be used more generally, one application of them is in the creation of risk parity portfolios.  These are portfolios where each asset class contributes the same amount of variance to the portfolio.

I quite like the idea of controlling how much risk comes from various pieces of the portfolio.  However, I’m not convinced that equality across a semi-arbitrary categorization of assets is going to be the best thing to do.  I would prefer more thought going into the balance.
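For a flavor of how such a portfolio might be found, here is a rough R sketch that searches for equal risk fractions with a general-purpose optimizer.  This is only an illustration with a made-up variance matrix, not the routine used in Portfolio Probe:

```r
# illustrative variance matrix for 3 asset classes
V <- matrix(c(0.04, 0.01, 0.00,
              0.01, 0.09, 0.02,
              0.00, 0.02, 0.16), nrow = 3)

riskfrac <- function(w, V) drop(w * (V %*% w)) / drop(w %*% V %*% w)

# distance of the risk fractions from equality (1/n each)
objective <- function(x, V) {
  w <- exp(x) / sum(exp(x))   # positive weights that sum to 1
  sum((riskfrac(w, V) - 1 / nrow(V))^2)
}

opt <- optim(rep(0, 3), objective, V = V)
w_rp <- exp(opt$par) / sum(exp(opt$par))
round(riskfrac(w_rp, V), 3)   # roughly 1/3 each
```

As expected, the lowest-variance asset class ends up with the largest weight.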


This entry was posted in Fund management in general.

7 Responses to Unproxying weight constraints

  1. Stephen says:

    This analysis looks like it assumes identical distributions for each asset class and that the distribution changes at the same time period for each asset class. This may lead to incorrect allocation decisions when an asset class like alternatives exhibits periods of temporary skewness or kurtosis that are uncorrelated with other asset class distributions.

    • Pat says:


      Thanks for the comment. I presume you are talking about risk parity. The advocates of risk parity don’t assume that the distributions of asset classes are the same, but they want to have equal risk from them. So an equity-bond risk parity portfolio would have something like 90% bonds.

      I think your point about market changes is well placed. In Portfolio Probe it is possible to put risk fraction constraints on multiple variance matrices. So you can have your standard constraints on the “real” variance matrix, and probably looser constraints on alternative variance matrices. For example you might add a variance matrix that you think would happen during a crash.

      • Stephen says:

        In my opinion, wanting equal risk from all asset classes is simply assuming that long term estimates of risk are more accurate than long term estimates of returns. If you were to automate the process and make decisions on risk allocation, what would be the downside to using a DCC-MVGARCH model to maintain an updated covariance matrix rather than having a handful of scenarios?

        • Pat says:

          I think something stronger than that is being assumed. Risk estimates are more accurate than return estimates. So that should be an assumption in all fund management.

          That really is a great question, I think: What assumptions are needed to make you want the variance fractions to be equal?

If you use a dynamic variance matrix like GARCH, then you are setting yourself up for a lot of turnover. If you use some scenario variances, then you are avoiding (hopefully) getting yourself into a position that is likely to be terrible. Prevention rather than cure.

  2. Antonio says:

    Hi Pat, although the risk fraction concept is actually interesting, I think that weight constraints are necessary to avoid concentration or to keep to generic portfolio rules.

    I’m thinking of algorithmic models that have to follow an initial asset allocation strategy, moving weights between pre-set boundaries.

    • Pat says:


      Thanks for your comment. I’m certainly not saying that weight constraints are never appropriate. I am saying though that there are times when weight constraints are imposed when risk fraction constraints are intended.

  3. Pingback: Backtesting Asset Allocation portfolios « Systematic Investor
