/******************************************************************/
/*                                                                */
/*   Drawing random variables                                     */
/*                                                                */
/*   Frank Schorfheide                                            */
/*                                                                */
/******************************************************************/

/* filename:  random.g
** created:   09/18/97
*/

cls;
library pgraph;

/* In this application we will generate random variables from
** the exponential distribution with density exp(-x).
** Let F(x) be the cdf. Recall Casella and Berger, pp. 52-54.
** If u is drawn from a U[0,1] distribution, then the x that solves
** F(x) = u is a draw from the distribution with cdf F(x).
**
** Find the cdf:   F(x) = 1 - exp(-x)
** Solve F(x) = u for x  ->>  x = -ln(1-u)
*/

/* We could generate a nobs*1 vector of u's with the
** command rndu(nobs,1). However, to illustrate how to
** program a loop, we will do it a little differently:
*/
nobs = 1000;
u = zeros(nobs,1);

i = 1;
do until i > nobs;
   u[i,1] = rndu(1,1);
   i = i+1;
endo;

/* Convert the u's into x's; here we apply the
** transformation to the entire vector.
*/
x = -ln(1-u);

/* Compute the sample mean and the sample variance.
** What are the population mean and variance of the standard exponential?
** You might change the number of observations and see what happens
** to the sample moments...
*/
meanx = meanc(x);
varx  = (stdc(x))^2;

"The sample mean is:"     meanx;
"The sample variance is:" varx;

/* We now try to recover the density from the randomly generated data
** using what is called a non-parametric kernel density estimate, which is
** a fancy version of a histogram. You do not have to understand the details
** of the density estimation at this point. The density will be estimated
** at npt points t between 0 and maxt.
**
** The density estimator is of the form
**    f(t) = 1/(nobs*h) * sum_{i=1}^{nobs} K( (t - x[i])/h )
**
** h is called the bandwidth; just play around with it and see what happens.
** K is the kernel function: it is small if |t - x[i]| is large. That is,
** if most of the observations are far away from t
** (meaning t is unlikely), this estimator will produce
** a small value for the density function.
** We will use the pdf of a standard Gaussian random variable,
** which is proportional to exp( -0.5*(t-x[i])^2/h^2 ).
** The GAUSS command that returns the density of a
** standard Gaussian random variable is pdfn(x).
*/
npt  = 100;
maxt = 10;
t = seqa(0,maxt/(npt-1),npt);

/* You should try to change h: if h is increased the density estimate
** becomes smoother; if h is decreased it becomes bumpier.
** The formula below (Silverman's rule of thumb) gives a standard
** reference value.
*/
h = 1.06*nobs^(-1/5)*sqrt(varx);

pdfxhat = zeros(npt,1);   /* this variable will contain our density estimate          */
pdfx    = zeros(npt,1);   /* this variable will contain the true value of the density */

i = 1;
do until i > npt;
   pdfxhat[i] = 1/(nobs*h)*sumc( pdfn( (ones(nobs,1)*t[i] - x)/h ) );
   pdfx[i]    = exp(-t[i]);
   i = i+1;
endo;

/* Create a density plot:
** does the shape look like exp(-x)?
*/
graphset;
margin(0.5,0.6,0.5,0.6);
fonts("MICROB");
_ptitlht = 0.3;
_paxht   = 0.25;
_pnumht  = 0.2;
_pcolor  = 0;
_plwidth = 1.8;
xlabel("x");
ylabel("density");

begwind;
window(1,1,0);
_plegstr = "Density\000Estimate";
_plegctl = {1 4 5 0.3};
xy(t,pdfx~pdfxhat);
endwind;
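
/* The two sketches below are optional additions (not part of the original
** program). They are minimal illustrations that assume only standard GAUSS
** commands (rndu, ln, meanc, stdc, sortc, seqa, abs, maxc) and the objects
** x and nobs defined above.
**
** (1) As noted earlier, the loop that fills u could be replaced by a single
**     vectorized call to rndu(nobs,1). This sketch draws a second sample
**     that way and compares its sample moments with those of x; both should
**     be close to the population values of the standard exponential.
*/
u2 = rndu(nobs,1);            /* nobs*1 vector of U[0,1] draws in one call */
x2 = -ln(1-u2);               /* inverse-cdf transformation, as before     */
"Vectorized draw -- sample mean:    " meanc(x2);
"Vectorized draw -- sample variance:" (stdc(x2))^2;

/* (2) A rough check that the draws really follow the exponential cdf
**     F(x) = 1 - exp(-x): compare the empirical cdf of the sorted sample
**     with the true cdf and report the largest absolute discrepancy
**     (a Kolmogorov-Smirnov-type statistic). For nobs = 1000 this gap
**     should be small, typically a few percent.
*/
xs   = sortc(x,1);            /* sample sorted in ascending order   */
ecdf = seqa(1,1,nobs)/nobs;   /* empirical cdf at the sorted points */
tcdf = 1 - exp(-xs);          /* true exponential cdf               */
"Largest gap between empirical and true cdf:" maxc(abs(ecdf-tcdf));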