garch and the Algorithmic Trading Conference

The Imperial College Algorithmic Trading Conference was Saturday.

Talks

Massoud Mussavian

Massoud gave a great talk on “Algo Evolution”.  It started with a historical review of how trading used to be done “by hand”.  It culminated in a phylogenetic tree of trading algorithms.  There was an herbivore branch and a carnivore branch.

Robert Macrae

Robert talked about risk.  He put it into a 2-by-2 table:

  • Are you measuring or controlling?
  • Are you thinking of normal times or extreme times?

Measuring for normal times is “easy”.  The quotation marks are deliberate: easy only relative to the other three cells.

Controlling risk in normal times is possible, but there are some tricky bits.

Measuring extremes is too hard.  We can get estimates, but the variability of those estimates is extremely high.  To get practical value we have to impose prior beliefs.

Controlling extreme risk is: “Err …”  It is basically impossible.  On the other hand, it is essential for civilization to continue.  In Robert’s words, “If the ATMs stop working, things are going to get rough.”  Robert also pointed out that it is very easy to do things in the name of controlling extreme risk that will make it worse.

Panos Parpas

Panos talked on “Multiscale Stochastic Volatility”.  Stochastic volatility is the continuous-time cousin of garch (see below).  However, the trail to stochastic volatility was hard to see.  I found the talk interesting because I saw some mathematics and computing that I know nothing about.
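
To make the family resemblance concrete (my comparison, not from the talk): garch updates variance in discrete time steps, while a stochastic volatility model lets variance follow its own continuous-time process.  For instance, garch(1,1) next to a Heston-style variance process:

    \sigma^2_t = \omega + \alpha \epsilon^2_{t-1} + \beta \sigma^2_{t-1}   % garch(1,1), discrete time
    dv_t = \kappa (\theta - v_t)\, dt + \xi \sqrt{v_t}\, dW_t              % Heston variance, continuous time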

Keith Quinton

Keith talked about 130/30 funds.  These, in his view, solve a few problems.  In particular, they allow the fund manager to be more active, and they allow bets on stocks going down (which tend to work better than positive bets) to be fully realized.

He emphasized that 130/30 is a portfolio construction device, not an alpha generator.  In fact, if you don’t have alpha and you do 130/30, you get a whole lot more of nothing.

The 130/30 form has fallen out of favor with investors because performance happened to be not so great.  Keith thinks a restart is called for.
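
As a toy illustration of the construction (numbers mine, purely hypothetical): a 130/30 fund is 130% long and 30% short, so the net exposure stays at 100%, like a long-only fund, while the gross exposure rises to 160%.

    # toy 130/30 portfolio: six hypothetical position weights
    weights <- c(0.40, 0.35, 0.30, 0.25, -0.10, -0.20)
    sum(weights)                 # net exposure:   1.0 (100%)
    sum(abs(weights))            # gross exposure: 1.6 (160%)
    sum(weights[weights > 0])    # long side:      1.3 (130%)
    -sum(weights[weights < 0])   # short side:     0.3 (30%)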

Pat Burns

I talked about “3 realms of garch modelling” (pdf).  A companion to the talk is the blog post “A practical introduction to garch modeling”.  This points to another blog post on variance targeting, which is also discussed in the talk.  The talk concludes with an argument for doing financial research in R.
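
For those who want to play along at home, here is a minimal sketch of fitting a garch(1,1) with Student-t errors in R using the rugarch package.  The ‘returns’ series is a placeholder for your own data, and variance.targeting is, if I recall the package correctly, rugarch’s switch for the variance targeting idea:

    library(rugarch)

    # spec: garch(1,1) variance, constant mean, Student-t errors;
    # variance.targeting = TRUE pins the unconditional variance to the
    # sample variance rather than estimating omega freely
    spec <- ugarchspec(
      variance.model = list(model = "sGARCH", garchOrder = c(1, 1),
                            variance.targeting = TRUE),
      mean.model = list(armaOrder = c(0, 0)),
      distribution.model = "std"
    )

    fit <- ugarchfit(spec = spec, data = returns)  # 'returns' is your series
    sigma(fit)  # conditional volatility path
    coef(fit)   # estimates, including the t degrees of freedom ("shape")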

High Frequency Trading Panel

There was a lively debate around three questions:

  • Do we know what “high frequency trading” means?  No.
  • Does it provide liquidity?  Maybe, maybe not.
  • Is it dangerous?  Various opinions.

6 Responses to garch and the Algorithmic Trading Conference

  1. Bill Alpert says:

    I favor controlling against extreme risk. What a pity that it’s seemingly impossible. I liked civilization.

    Bill Alpert
    Barron’s

    • Robert Macrae says:

      Perhaps I put it a bit strongly; there are plenty of sensible things you can do after an event to support stability.  For example, if money gets distributed really badly then you can print lots more of it and wash the mistakes away!

      My intended point was that it is verging on impossible to prevent a future financial crisis by deducing control rules from historical data.  Fancy stats are completely inappropriate; what is required is to make very simple deductions (e.g. that TBTFs, complexity and leverage are bad) and then move on to the more difficult task of taking steps to restrict them.

  2. Pat says:

    It seems that I have libeled Alexios, the author of the rugarch package.  The components garch model is already implemented.  Since libel reform has not yet gone through in the UK, I could be at risk.
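
    (For the record, a sketch of specifying it; the components model is, if I have the name right, “csGARCH” in rugarch:)

        # components garch: variance split into long-run and short-run
        # pieces; "csGARCH" is the rugarch model name
        cspec <- ugarchspec(
          variance.model = list(model = "csGARCH", garchOrder = c(1, 1)),
          mean.model = list(armaOrder = c(0, 0)),
          distribution.model = "std"
        )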

  3. Robert Macrae says:

    Based on your garch paper I think some of my concerns over garch have been misplaced.  At any point we get:
    1) a sensible estimate of what vol will be in the absence of a new spike;
    2) a probability for a new spike;
    3) a t distribution on the size of the spike if it happens.

    I think it’s data intensive mainly because spikes are rare.  Because of its importance, you might wish to shrink 3) towards something like the unconditional distribution… maybe.

    • Pat says:

      I think that is a very interesting comment.

      I recall suspecting that the distribution of the standardized residuals of real data didn’t match the estimated t distribution very well.

      This would be a mechanism to explain such differences.

      However, in Figure 3 of “Variability of garch estimates” we see that the degrees of freedom estimates for simulated data look pretty unbiased — they just aren’t overly precise.
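
      (A sketch of the kind of check I have in mind, given a fit object from rugarch; the rescaling matches the unit-variance t that the package fits:)

          # compare standardized residuals to the estimated t distribution
          zres <- as.numeric(residuals(fit, standardize = TRUE))
          nu <- coef(fit)["shape"]   # estimated degrees of freedom
          # classic t quantiles rescaled to unit variance
          tq <- qt(ppoints(length(zres)), df = nu) * sqrt((nu - 2) / nu)
          qqplot(tq, zres, xlab = "t quantiles (unit variance)",
                 ylab = "standardized residuals")
          abline(0, 1)   # points near this line support the fitted t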
