An attempt to clarify the basics.
There have been several posts about garch. In particular:
A reader emailed me because he was confused about the workings of garch in general, and simulation with the empirical distribution in particular.
If there is one confused person, there are more.
A basic feature of garch models is that they are in discrete time. The frequency is usually daily, in which case we pretend that the volatility of the returns is constant throughout each day.
Deep in the heart of a garch model is an innovation at each timepoint. This is — conceptually — a draw from a statistical distribution that has variance 1. Note that variance 1 rules out infinite variance, and hence stable distributions. Also, the normal distribution is unlikely to be a good approximation.
The innovation is the net result of the “news” that comes into the market during the period. If the innovation is large (in absolute value, and compared to a variance 1 distribution), then that is a signal that future volatility is likely to be larger.
But we don’t see innovations. We see innovations scaled by the current volatility.
What the garch model keeps track of is the conditional variance at each time. The “conditional” being short for conditional on the returns (and possibly other variables) that have already occurred. “Variance” because it is mathematically handy — in higher dimensions there are variance matrices but you’ve never heard of a standard deviation matrix.
So far our model is:
return at time t = innovation at time t, scaled using the conditional variance at time t
There is one more element to the model: the mean return at time t. The typical garch model is:
return at time t = mean return at time t + innovation at time t, scaled using the conditional variance at time t
Keep in mind that the conditional variance needs to be transformed (its square root taken, giving a volatility) before it can be used to scale the innovation.
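The decomposition can be written as a one-liner. The numbers below are purely hypothetical, chosen only to show the arithmetic:

```r
## hypothetical values, for illustration only
mu  <- 0.0004   # mean return at time t
h_t <- 0.0002   # conditional variance at time t
z_t <- -1.7     # innovation at time t (a draw from a variance-1 distribution)

## the variance is square-rooted into a volatility before it scales the innovation
r_t <- mu + sqrt(h_t) * z_t
```

The return `r_t` is the only one of these quantities that we actually observe.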
The fact is that there is a return — a single number — for a time period. The garch models are taking that number and decomposing it into three pieces:
- the mean return at that time
- the conditional variance at that time
- the innovation for that time
Those three things are fictional. And the fictions of the last two heavily depend on each other.
Sometimes they are useful fictions.
The basic input to a garch model is a time series of returns. It is log returns that make the most sense in this setting.
Possible specifications for the mean return are:
- a constant (to be estimated)
- a simple autoregression — AR(1)
- an ARMA model
- something even fancier
There is unlikely to be much difference between assuming the mean is zero and any of the others. In particular, the estimation of the variance is highly unlikely to change significantly.
It is easy for the term “residual” to be confusing in garch models. That’s because the residuals don’t really have anything to do with the garchiness of the model.
The residual at time t is the return at time t minus the estimated mean return at time t.
More garchy — and more useful — are the standardized residuals. The standardized residual at time t is the residual at time t divided by the square root of the conditional variance at time t.
In other words, the standardized residuals are the estimates of the innovations.
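In vector form the two kinds of residuals are computed like so (the returns, fitted means and conditional variances here are made-up stand-ins for fitted values):

```r
## hypothetical fitted quantities, for illustration only
returns <- c(0.010, -0.021, 0.003)
fitMean <- rep(0.0004, 3)           # estimated mean return at each time
condVar <- c(1e-4, 2e-4, 1.5e-4)    # estimated conditional variances

resid      <- returns - fitMean      # plain residuals
standResid <- resid / sqrt(condVar)  # standardized residuals: estimates of the innovations
```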
There are two parts to the fit of a garch model:
- fit for the mean return
- fit for the conditional variance (or standard deviation or volatility)
The second of these is the more interesting.
To do a garch simulation you need to start with:
- a garch model with the parameters specified
- a state for the conditional variance
Usually, of course, the parameter specification is taken from the estimated model. It is also typical that the variance state is the most recent variance state from the estimated model.
There is one more thing the simulation needs — innovations.
The two most common choices for generating a series of innovations are:
- sample from the distribution assumed in the model (including any estimated parameters)
- sample from the standardized residuals of the estimated model
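Both choices are easy in R. Here is a sketch, where `nu` stands in for an estimated Student-t degrees of freedom and `standResid` stands in for the standardized residuals of a fitted model (both made up here). Note that `rt` draws have variance `nu/(nu - 2)`, so they need rescaling to variance 1:

```r
nu <- 6                     # hypothetical estimated degrees of freedom
standResid <- rnorm(2000)   # stand-in for real standardized residuals

## choice 1: the assumed distribution, rescaled to variance 1
innov1 <- rt(250, df=nu) / sqrt(nu / (nu - 2))

## choice 2: the empirical distribution of the standardized residuals
innov2 <- sample(standResid, 250, replace=TRUE)
```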
The process of simulation is to use the first innovation, scale it via the volatility state and add in the mean return to get the first return in the simulation — let’s say this is for time T+1.
We then need to update the volatility state (giving the conditional variance to use at time T+2); this update depends on the innovation just used.
After those two steps we are in the same situation except we are one time step further. We continue doing those steps until we have as many timepoints as we want.
Usually we do that process multiple times so that we have a large number of simulated series.
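Those steps can be sketched for a plain garch(1,1) recursion. The parameter values and the starting variance state below are invented; in practice they would come from the estimated model (and rugarch packages the whole process up in its ugarchsim function):

```r
## made-up garch(1,1) parameters -- in practice, taken from the fit
omega <- 1e-6; alpha <- 0.08; beta <- 0.9
mu    <- 4e-4     # constant mean return
h     <- 1.5e-4   # most recent variance state from the fit

nsim   <- 250
simRet <- numeric(nsim)
for(i in seq_len(nsim)) {
  z   <- rnorm(1)         # innovation (could instead be a draw from standardized residuals)
  eps <- sqrt(h) * z      # innovation scaled by the current volatility
  simRet[i] <- mu + eps   # the simulated return for this timepoint
  h <- omega + alpha * eps^2 + beta * h   # update the variance state
}
```

Running this loop many times, each pass starting from the same state, gives the collection of simulated series.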
Your brow is sweatin’ and your mouth gets dry,
Fancy people go driftin’ by.
The moment of truth is right at hand,
Just one more nightmare you can stand.
from “Stage Fright” by Robbie Robertson
You can use garch in R, among other places.
There are alternatives, but the rugarch package seems to be the most complete implementation in R. Here is a small example. The first step is:
library(rugarch)
This command will only work once the package has been installed on your machine.
set the garch specification
We start by creating a specification of the garch model:
comtspec <- ugarchspec(mean.model=list( armaOrder=c(0,0)), distribution="std", variance.model=list(model="csGARCH"))
This specification does three things. It sets:
- the model for the mean return to be a constant to be estimated — the default is an ARMA(1,1)
- the distribution of the innovations to be a Student-t distribution — the default is normal
- the model for the variance to be components garch — the default is garch(1,1)
estimate the model
We use the specification when fitting the model:
garEstim <- ugarchfit(comtspec, data=tail(spxret, 2000))
The data are the most recent 2000 returns in the vector.
The regular residuals for the garch model are:
garResid <- residuals(garEstim)
The standardized residuals are computed like this:
garStandResid <- residuals(garEstim, standardize=TRUE)
We can see that the results make sense by inspecting the variances:
> var(tail(spxret, 2000))
[1] 0.0001969673
> var(garResid)
             [,1]
[1,] 0.0001969673
> var(garStandResid)
         [,1]
[1,] 1.204764
I’m slightly concerned that the variance of the standardized residuals is so far from 1, but clearly it is trying to be close to 1.
how you might have found how to get residuals
The residuals function is generic. There are two ways to look for the methods of a generic function:
> methods('residuals')
 residuals.default        residuals.glm
 residuals.HoltWinters*   residuals.isoreg*
 residuals.lm             residuals.nls*
 residuals.smooth.spline* residuals.tukeyline*

   Non-visible functions are asterisked
> showMethods('residuals')
Function: residuals (package stats)
object="ANY"
object="ARFIMAfilter"
object="ARFIMAfit"
object="ARFIMAmultifilter"
object="ARFIMAmultifit"
object="uGARCHfilter"
object="uGARCHfit"
object="uGARCHmultifilter"
object="uGARCHmultifit"
(Note that the results you get depend on what packages are loaded into your R session.)
The class of the object that we want is:
> class(garEstim)
[1] "uGARCHfit"
attr(,"package")
[1] "rugarch"
This is found in the second group. To have a look at the method in this group, you do:
getMethod('residuals', 'uGARCHfit')
It is here that we see that standardize is an argument (though not exactly blatantly obvious).
The post “garch and the distribution of returns” has some examples of simulation.