1.B HP filter and plot US quarterly (log) GDP. Store the figure in PostScript or PDF. Compute the same table as in the Cooley book for those 4 variables using data up to 2003:4 or later.
1.C Calculate a linear trend and decompose log GDP into the linear trend, the HP trend, and the HP residual.
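The HP trend in 1.B–1.C is the solution of a linear system: minimizing the sum of squared deviations plus λ times the sum of squared second differences of the trend gives (I + λD′D)τ = y, with D the second-difference matrix and λ = 1600 the standard quarterly value. A minimal NumPy sketch (the series below is simulated, since the GDP data source is not pinned down here):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """HP filter: solve (I + lam*D'D) trend = y, D = second-difference matrix."""
    T = len(y)
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(T) + lam * D.T @ D, y)
    return trend, y - trend          # (HP trend, HP residual)

# demo on a fake quarterly log-GDP series: linear trend plus a cycle
t = np.arange(120)
y = 7.5 + 0.005 * t + 0.02 * np.sin(2 * np.pi * t / 20)
trend, cycle = hp_filter(y)
```

Because D annihilates linear functions, a pure linear trend passes through the filter untouched; that underlies the decomposition of 1.C (log GDP = linear trend + (HP trend − linear trend) + HP residual).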
1.D Plot the growth rates together with the HP residual and comment on the differences.
1.E Compute a VAR of those 4 variables and plot the impulse responses. Make sure that you state explicitly the identifying assumptions that you make.
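One standard choice for 1.E is recursive identification: estimate the VAR by OLS and orthogonalize the innovations with a Cholesky factor, so the identifying assumption is that variables ordered earlier do not respond within the period to shocks of variables ordered later. A sketch under that assumption (data simulated here; plotting omitted):

```python
import numpy as np

def var_irf(Y, p=1, horizon=12):
    """OLS VAR(p) with intercept; impulse responses to Cholesky-orthogonalized
    shocks, where the ordering is the column order of Y."""
    T, n = Y.shape
    X = np.hstack([Y[p - 1 - j:T - 1 - j] for j in range(p)])
    X = np.hstack([np.ones((T - p, 1)), X])
    B = np.linalg.lstsq(X, Y[p:], rcond=None)[0]
    A = [B[1 + j * n:1 + (j + 1) * n].T for j in range(p)]   # lag matrices
    U = Y[p:] - X @ B
    Sigma = U.T @ U / (T - p - n * p - 1)
    P = np.linalg.cholesky(Sigma)        # identifying assumption: recursive ordering
    irf = np.zeros((horizon + 1, n, n))
    irf[0] = P
    for h in range(1, horizon + 1):
        irf[h] = sum(A[j] @ irf[h - 1 - j] for j in range(min(p, h)))
    return irf

# demo on data simulated from a known stable VAR(1)
rng = np.random.default_rng(0)
A_true = np.array([[0.7, 0.1], [0.0, 0.5]])
Y = np.zeros((300, 2))
for t in range(1, 300):
    Y[t] = Y[t - 1] @ A_true.T + 0.01 * rng.normal(size=2)
irf = var_irf(Y, p=1, horizon=12)
```

irf[h][i, j] is the response of variable i, h periods after a one-standard-deviation shock to variable j; by construction the upper-triangular entries of irf[0] are zero, which is exactly the identifying restriction to state.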
Write a routine that linearly interpolates. Apply it by tabulating exp(x) on [0, 1] at intervals of 0.1 and then evaluating the interpolant at intervals of 0.05. Plot the function together with the result of the approximation.
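A sketch of such a routine, applied exactly as the exercise specifies (plotting omitted):

```python
import numpy as np

def lin_interp(xg, yg, x):
    """Piecewise-linear interpolation of the table (xg, yg) at points x
    (xg assumed sorted and increasing)."""
    i = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    w = (x - xg[i]) / (xg[i + 1] - xg[i])
    return (1 - w) * yg[i] + w * yg[i + 1]

xg = np.arange(0.0, 1.0 + 1e-9, 0.1)      # tabulate exp on [0,1] in steps of .1
yg = np.exp(xg)
xf = np.arange(0.0, 1.0 + 1e-9, 0.05)     # evaluate the interpolant in steps of .05
approx = lin_interp(xg, yg, xf)
err = approx - np.exp(xf)
```

At the tabulation nodes the interpolant is exact; in between, the error of linear interpolation is bounded by max|f″|h²/8 ≈ e · 0.01/8 ≈ 3.4e-3, and it is one-sided (positive) because exp is convex.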
What about with Cobb-Douglas (r = 1)?
Note that labor share = wN/Y, and that under competition w = ∂Y/∂N.
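Under the Cobb-Douglas assumption Y = A·K^α·N^(1−α), the hint pins the labor share down immediately: w = ∂Y/∂N = (1−α)Y/N, hence wN/Y = 1−α for any A, K, N. A quick numerical check (the parameter values below are hypothetical; the conclusion is independent of them):

```python
# hypothetical values; the labor share does not depend on them
alpha, A, K, N = 0.36, 1.3, 5.0, 0.9
Y = A * K**alpha * N**(1 - alpha)
w = (1 - alpha) * A * K**alpha * N**(-alpha)   # marginal product of labor
labor_share = w * N / Y                        # equals 1 - alpha
```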
7.B Take away a linear trend (in logs) from the Solow residual and call the new object the linearly detrended Solow residual. Estimate a univariate process for this new variable. Make sure that you argue forcefully for your specification.
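A sketch of the two steps in 7.B: OLS linear detrending followed by fitting an AR(1). The AR(1) is only the natural starting point, not the argued-for specification the exercise demands. Data are simulated here with ρ = 0.95 so the estimator can be sanity-checked:

```python
import numpy as np

def detrend_and_ar1(logz):
    """OLS-detrend logz on a constant and time, then fit
    z_t = rho*z_{t-1} + e_t on the residual."""
    T = len(logz)
    X = np.column_stack([np.ones(T), np.arange(T)])
    resid = logz - X @ np.linalg.lstsq(X, logz, rcond=None)[0]
    rho = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
    e = resid[1:] - rho * resid[:-1]
    return resid, rho, np.sqrt(e @ e / (T - 2))

# simulated "Solow residual": linear trend plus an AR(1) with rho = 0.95
rng = np.random.default_rng(1)
z = np.zeros(2000)
for s in range(1, 2000):
    z[s] = 0.95 * z[s - 1] + 0.007 * rng.normal()
logz = 0.5 + 0.002 * np.arange(2000) + z
resid, rho, sigma = detrend_and_ar1(logz)
```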
7.C Compute a bivariate VAR with the linearly detrended Solow residual and linearly detrended output, and a trivariate VAR that adds linearly detrended hours worked.
8.B Solve for the steady state. Be lazy and use software to get the derivatives and Dynare to get the steady state.
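Even if Dynare does the work in 8.B, the steady state of the baseline model is worth having in closed form as a check on its output. With log consumption utility, f(k) = k^α, and depreciation δ, the steady-state Euler equation 1 = β(αk^(α−1) + 1 − δ) gives k* directly (the calibration below is a common but hypothetical quarterly choice):

```python
# steady state of the baseline growth model from the Euler equation
alpha, beta, delta = 0.36, 0.99, 0.025     # hypothetical quarterly calibration
kss = (alpha / (1 / beta - 1 + delta)) ** (1 / (1 - alpha))
css = kss**alpha - delta * kss             # steady-state consumption
# verify the Euler equation holds at the computed steady state
euler_gap = beta * (alpha * kss ** (alpha - 1) + 1 - delta) - 1
```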
8.C Answer the question using Dynare and the estimated process for the Solow residual.
8.D Reassess your answer with data since 1982. Get the labor data from Manovskii (CPS-based hours) and reestimate your answer.
8.E Redo your answer posing alternative processes for the Solow residual (random walk with drift, AR(2)).
9.B Compute your answer using Dynare.
9.C Explore alternative specifications of the calibration targets based on some alternative logic and report your answers. What matters?
9.D Redo your answer posing TFP shocks and shocks to the relative price of investment (which you have to compute, mostly using the series of Violante and coauthors).
10.B Do it again with three independent shocks: one to hours worked in the utility function, another to patience, and another to productivity. Estimate them using data on output, the Solow residual, and hours, by ML or by Bayesian methods.
10.C Redo the estimation, adding the labor share and the Frisch elasticity of labor to the set of estimated parameters alongside those that govern the three shocks.
10.D Compare the answers obtained in 10.A, 10.B, and 10.C.
12.B Write an f90 or f95 code that computes a piecewise-linear approximation to the decision rule of capital in the optimal capital accumulation problem. Use collocation (i.e., Dirac measures at the grid points) to weight the errors. Plot it. If you insist, use some alternative global solution method.
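The exercise asks for Fortran; the algorithm can first be sketched compactly in Python and then ported. Below, time iteration on the Euler equation with the policy stored as a piecewise-linear function on 21 nodes and the Euler residual forced to zero exactly at the nodes (collocation). The calibration uses log utility and full depreciation, a deliberately special case in which the exact rule k′ = αβk^α is known, so the approximation can be checked:

```python
import numpy as np

alpha, beta = 0.36, 0.96                      # hypothetical calibration
kss = (alpha * beta) ** (1 / (1 - alpha))     # steady state (delta = 1)
grid = np.linspace(0.2 * kss, 1.8 * kss, 21)  # 21 collocation nodes

def euler_gap(k, kp, g):
    """Euler residual 1/c - beta*alpha*kp**(alpha-1)/c', with next-period
    policy given by piecewise-linear interpolation of g on the grid."""
    c = k**alpha - kp
    cp = kp**alpha - np.interp(kp, grid, g)
    return 1.0 / c - beta * alpha * kp ** (alpha - 1) / cp

g = 0.5 * grid**alpha                         # initial guess: save half of output
for _ in range(400):                          # time iteration on the Euler equation
    g_new = np.empty_like(g)
    for i, k in enumerate(grid):
        lo, hi = 1e-8, k**alpha - 1e-8        # bisect: residual increasing in kp
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            if euler_gap(k, mid, g) > 0.0:
                hi = mid
            else:
                lo = mid
        g_new[i] = 0.5 * (lo + hi)
    done = np.max(np.abs(g_new - g)) < 1e-7
    g = g_new
    if done:
        break

max_err = np.max(np.abs(g - alpha * beta * grid**alpha))   # vs the exact rule
```

Rerunning with 7 nodes in place of 21 answers 12.C; the error of a piecewise-linear rule scales roughly with the squared node spacing, which is what the comparison should reveal.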
12.C Compare it with the approximation that results from using only 7 grid points over the same range. Was it worth going from 7 to 21 grid points?
12.D Now use the endogenous grid method to solve for the decision rule for capital (in case you did not use it for parts B and C).
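A sketch of the endogenous grid method for the same special case (log utility, full depreciation, exact consumption rule c(k) = (1−αβ)k^α available as a check). The grid is placed on tomorrow's capital k′, the Euler equation is inverted analytically for today's consumption, and the budget constraint delivers the endogenous value of today's capital, so no root-finding is needed:

```python
import numpy as np

alpha, beta = 0.36, 0.96                          # hypothetical calibration
kss = (alpha * beta) ** (1 / (1 - alpha))
kp_grid = np.linspace(0.2 * kss, 1.8 * kss, 101)  # exogenous grid on k'

c = 0.5 * kp_grid**alpha                          # guess for c(.) on the grid
for _ in range(1000):
    # invert the Euler equation u'(c) = beta*alpha*kp**(alpha-1)*u'(c(kp)):
    c_today = c / (beta * alpha * kp_grid ** (alpha - 1))
    k_endog = (c_today + kp_grid) ** (1 / alpha)  # budget: k**alpha = c + kp
    c_new = np.interp(kp_grid, k_endog, c_today)  # map back onto the fixed grid
    done = np.max(np.abs(c_new - c)) < 1e-9
    c = c_new
    if done:
        break

max_err = np.max(np.abs(c - (1 - alpha * beta) * kp_grid**alpha))
```

The only approximation left is the interpolation from the endogenous points back to the fixed grid, which is why EGM is both fast and accurate here.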
12.E Do it now for a stochastic version of the growth model with leisure. Use this paper for details.
15.A Take your favorite version of the neoclassical growth model. Calculate its steady state.
15.B Now suppose that, by surprise, TFP doubles. Compute the new steady state and the transition from the old to the new, assuming the transition is complete after 200 periods. Compute the transition three ways and report how long each takes to solve. The first way solves a system of 200 equations in 200 unknowns; the second guesses first-period capital and shoots forward, hoping to obtain the right capital in period 201; the third guesses capital in period 200 and shoots backward, hoping to obtain the right initial capital.
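All three methods need the two steady states, and a benchmark path is useful for checking them. In the special log-utility, full-depreciation case the exact decision rule k′ = αβAk^α is known, so the entire transition after a surprise doubling of TFP can be computed directly; any of the three general methods should reproduce it (calibration hypothetical):

```python
import numpy as np

alpha, beta = 0.36, 0.96
A0, A1 = 1.0, 2.0                                 # TFP doubles by surprise
k_old = (alpha * beta * A0) ** (1 / (1 - alpha))  # old steady state
k_new = (alpha * beta * A1) ** (1 / (1 - alpha))  # new steady state

# exact transition: start at the old steady state, apply the new-TFP rule
k = np.empty(201)
k[0] = k_old
for t in range(200):
    k[t + 1] = alpha * beta * A1 * k[t] ** alpha
```

Convergence here is geometric at rate α, so 200 periods is far more than enough. In the general case (δ < 1, other utility curvature) there is no closed-form rule, which is where the stacked 200-equation system and the two shooting methods come in; note that forward shooting must fight the saddle-path instability, which is why the backward variant is worth trying.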
17.A Compute the stationary distribution of this economy. Use both an approximation to the CDF and a huge sample.
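The two routes of 17.A, illustrated on a small Markov chain standing in for the discretized wealth process: (i) the analogue of the CDF approximation for a finite chain, solving π′P = π′ exactly; (ii) a huge simulated sample. The transition matrix below is made up for the illustration:

```python
import numpy as np

P = np.array([[0.90, 0.10, 0.00],     # hypothetical transition matrix
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])

# (i) solve pi' P = pi' together with the adding-up constraint sum(pi) = 1
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi_exact = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# (ii) huge sample: simulate the chain and take state frequencies
rng = np.random.default_rng(3)
cum = np.cumsum(P, axis=1)
cum[:, -1] = 1.0                      # guard against rounding at the top
u = rng.random(200_000)
s, counts = 0, np.zeros(3)
for t in range(200_000):
    s = int(np.searchsorted(cum[s], u[t]))
    counts[s] += 1
pi_sim = counts / counts.sum()
```

The sampling error of the simulated frequencies shrinks slowly when the chain is persistent, which is the practical trade-off between the two approaches that the exercise is after.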
17.B Compute and plot the Lorenz curve for wealth.
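A sketch of the Lorenz curve and the implied Gini coefficient from a wealth sample. Here a lognormal sample stands in for the model's stationary distribution; for lognormal wealth with σ = 1 the Gini is 2Φ(1/√2) − 1 ≈ 0.52, which provides a check (plotting omitted):

```python
import numpy as np

def lorenz(w):
    """Lorenz curve: cumulative population share vs cumulative wealth share."""
    w = np.sort(np.asarray(w, dtype=float))
    cum = np.concatenate([[0.0], np.cumsum(w)])
    return np.arange(len(w) + 1) / len(w), cum / cum[-1]

def gini(w):
    """Gini = 1 - 2 * area under the Lorenz curve (trapezoid rule)."""
    p, L = lorenz(w)
    return 1.0 - np.sum((L[1:] + L[:-1]) * np.diff(p))

rng = np.random.default_rng(2)
sample = np.exp(rng.normal(0.0, 1.0, size=100_000))  # stand-in wealth sample
```

With perfect equality the Lorenz curve is the 45-degree line and the Gini is zero, which is a cheap unit test for the routine.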
17.C Compute the persistence of inequality in this economy. Choose a statistic, compute it, and argue for its usefulness.