## Econ 8185 Quantitative Macro, Fall of 2012

##### José-Víctor Ríos-Rull: vr0j@umn.edu

• Department of Economics, University of Minnesota, 4-101 Hanson Hall (off 4-179), 1925 Fourth Street South, Minneapolis, MN 55455. Phone: (612) 625-0941. Fed phone: (612) 204-5528. Fax: (612) 624-0209. Homepage: http://www.econ.umn.edu/~vr0j/index.html

• Wed 3:45-6:15, Hanson Hall 4-170. This is a mini course that will last all semester, so some classes will be shorter and others will be cancelled. The total teaching time will be no less than that of a regular mini course. Office hours: before and after class, and by appointment. Email: vr0j@umn.edu.

• http://EasyWebCal.com/econ8185 Web page to sign up for presenting in class.

• ## Please plan to come and talk to me individually to tell me which homework you are proudest of. Any time that I am around is good. The Aiyagari calibration homework is ready.

### What are we doing? A class by class diary.

1. #### Sept 5.

We talked about the class and what it is about, going over all the details. We started with a discussion of what it means to approximate a function and the criteria (family of approximating functions, measure of distance, and method to find the approximation) used to describe a particular approximation approach. We then went in detail over the first set of homeworks and their purpose. We finished with a discussion of the Solow residual. This included the components of GDP on the income side (and I forgot the difference between GDP and GNP: GNP adds to GDP the payments to national factors from abroad net of payments to foreign factors at home). You should read the Cooley-Prescott chapter in the Cooley book or the appendices here or here for the details of the data needed to construct the Solow residual.
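The mechanics of backing out the Solow residual from the Cobb-Douglas production function can be sketched as follows. This is a minimal illustration in Python (not the course's F90/Matlab), and the capital share and the data points are made-up numbers, not values from the class notes:

```python
import math

# Solow residual under Cobb-Douglas technology: Y = Z * K**theta * L**(1-theta)
# => Z = Y / (K**theta * L**(1-theta)). All numbers below are illustrative.
theta = 0.36  # capital share; a common calibration, assumed here

def solow_residual(Y, K, L, theta=theta):
    """Back out TFP Z from output Y, capital K, and labor L."""
    return Y / (K**theta * L**(1.0 - theta))

# Made-up annual observations (Y, K, L):
data = [(100.0, 300.0, 50.0), (104.0, 310.0, 51.0), (107.0, 322.0, 51.5)]
Z = [solow_residual(Y, K, L) for Y, K, L in data]

# Log growth of the residual, period to period:
gZ = [math.log(Z[t + 1] / Z[t]) for t in range(len(Z) - 1)]
```

The real exercise, of course, turns on how Y, K, and L are measured from NIPA, which is what the Cooley-Prescott chapter and the appendices cover.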

2. #### Sept 19

We discussed how to use a model to answer quantitative questions about the economy. In particular, we talked about how calibration and estimation operate, and what their differences and similarities are. We put special emphasis on the point that what matters is not the statistical technique as much as the source of identification. I refer you to the first 5 sections of this paper, in particular Section 3. Gero and Arun presented some of the homeworks (thanks to both). Arun will continue with the discussion of the ordering assumptions in the VAR next class. We then discussed the next batch of homeworks (and I want volunteers to present some of them in the next class, October 3). That discussion involved a minimal description of log linearization using Dynare, as well as comments on the use of software to take derivatives and on how to implement the notions of calibration discussed at the beginning of the class. While doing this, we talked about how to calibrate a model economy so that its steady state looks like the U.S. in the dimensions that we want. We may start the discussion of global approximation methods using piecewise linear functions with exact solutions at the grid points.
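The idea of calibrating so that the steady state matches the data can be sketched in the simplest case: picking the discount factor so the growth model's steady state hits a target capital-output ratio. The Python language, the parameter values, and the target are all assumptions for illustration, not the course's own calibration:

```python
# Calibrate beta so the standard growth model's steady state matches a target
# capital-output ratio. All numbers are illustrative.
theta = 0.36   # capital share
delta = 0.06   # annual depreciation rate
KY    = 3.0    # target capital-output ratio (annual)

# Steady-state Euler equation: 1 = beta * (1 + theta * Y/K - delta),
# so the implied steady-state interest rate is r = theta / (K/Y) - delta.
r = theta / KY - delta
beta = 1.0 / (1.0 + r)

# Sanity check: the Euler equation holds at the calibrated beta.
assert abs(beta * (1.0 + theta / KY - delta) - 1.0) < 1e-12
```

The same logic extends to matching labor shares, hours, and other steady-state moments: each target disciplines one parameter through a steady-state condition.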

3. #### Oct 3

Arun finished discussing why certain assumptions are made when estimating VARs (thanks Arun). Zhifeng presented the gist of homework 8 and some of the subtle issues that arise (thanks Zhifeng). I discussed how to do global approximations and what they mean. In particular, I went over the endogenous grid method. This paper is a great reference for a particularly efficient way to obtain a global approximation.
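A minimal sketch of the endogenous grid method, for a deterministic consumption-savings problem with CRRA utility. The Python language, the parameters, and the grid are all my assumptions for illustration; the paper's setup is richer:

```python
import numpy as np

# Illustrative parameters: discount factor, interest rate, income, risk aversion
beta, r, y, gamma = 0.96, 0.03, 1.0, 2.0
a_grid = np.linspace(0.0, 10.0, 200)          # fixed grid for savings a'

def up(c):      return c ** (-gamma)          # marginal utility
def up_inv(m):  return m ** (-1.0 / gamma)    # its inverse

def egm_step(c_next):
    """Given next period's consumption on a_grid, return this period's policy."""
    # Euler equation: u'(c) = beta * (1+r) * u'(c'(a')), solved for c directly
    c_endog = up_inv(beta * (1.0 + r) * up(c_next))
    # Budget: c + a' = (1+r) a + y  =>  endogenous current asset level
    a_endog = (c_endog + a_grid - y) / (1.0 + r)
    # Interpolate back onto the fixed grid; below a_endog[0] the constraint binds
    c_new = np.interp(a_grid, a_endog, c_endog)
    binds = a_grid < a_endog[0]
    c_new[binds] = (1.0 + r) * a_grid[binds] + y   # consume everything (a' = 0)
    return c_new

# Time iteration: start from "consume all resources" and iterate to convergence
c = (1.0 + r) * a_grid + y
for _ in range(2000):
    c_new = egm_step(c)
    err = np.max(np.abs(c_new - c))
    c = c_new
    if err < 1e-8:
        break
```

The efficiency gain is that each step needs no root-finding: the Euler equation is inverted analytically, and the only numerical work is one interpolation.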

4. #### Oct 10

George presented the endogenous grid method. I then discussed how to adapt the problem to the Aiyagari economy given its specificities.

5. #### Oct 17

Gero and Zhifeng presented the problem of solving for the Aiyagari decision rules using endogenous grid methods. Thanks to both. I then discussed how to store the distribution of types in the Aiyagari economy using both a large sample of agents and approximating the cumulative distribution function. I then talked a bit about Markov perfect equilibria in a growth model with a public good.
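The two ways of storing the distribution mentioned above can be sketched side by side: a large simulated panel of agents, and a distribution on a grid whose mass is split between the nodes bracketing each agent's next asset position. The savings rule and all numbers are made up, and the snippet is in Python purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
a_grid = np.linspace(0.0, 10.0, 101)
g = lambda a: np.clip(0.9 * a + 0.5, a_grid[0], a_grid[-1])  # made-up rule

# (1) Large sample: push a panel of agents through the decision rule many times
panel = rng.uniform(0.0, 10.0, size=100_000)
for _ in range(200):
    panel = g(panel)

# (2) Grid method: split each node's mass between the grid points bracketing
# g(a), which keeps the distribution on a fixed grid with exact total mass
dist = np.full(a_grid.size, 1.0 / a_grid.size)
ga = g(a_grid)
idx = np.minimum(np.searchsorted(a_grid, ga, side="right") - 1, a_grid.size - 2)
w = (ga - a_grid[idx]) / (a_grid[idx + 1] - a_grid[idx])
for _ in range(200):
    new = np.zeros_like(dist)
    np.add.at(new, idx, (1.0 - w) * dist)
    np.add.at(new, idx + 1, w * dist)
    dist = new

# With this deterministic rule, both representations collapse on the fixed
# point a* = 5.0 (where a = 0.9 a + 0.5).
```

The grid update is deterministic and mass-preserving, which is why it tends to give smoother aggregates than a simulated panel of the same cost; the panel, on the other hand, extends trivially to rich idiosyncratic shocks.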

6. #### Oct 24

David Wiczer did the following (and I am sure that he did a fantastic job). These are his notes on using data sets. The notes come from a long tradition.

• He discussed the mechanics of the CPS.

  • The organization of the survey with rotating groups, and why "outgoing rotation groups" are special.
  • The supplements, e.g., Displaced Workers and March.
  • How hours are tricky between "actual" and "usual".
  • How employment status definitions are nuanced and change by year.

• He talked about using the cross-sectional data to create synthetic panels. He also talked about Autor, Katz and Kearney (2006) and Juhn, Murphy and Pierce (1993). He went over chaining the CPS to get month-over-month data such as employment status transitions.

• CPS data can be found in three locations:

• Went to the NBER website and looked at their chaining programs.
• Went to IPUMS for their integrated data, mostly the March files.
• Went to CEPR data, where they have a collection of the "outgoing rotation groups".

• Then he talked about the CEX and SCF.

The SCF website was used to look at some figures from the 2009 SCF follow-up of the 2007 wave.

• He talked about the use of panel data. He described cohort, time, and age effects and the usefulness of fixed effects.
• He discussed the mechanics of the PSID: the individual and family files, the "line number" identifiers, and the occupational codes (which are relatively error-free).
• He went to the PSID website and talked about the shortcomings of the search feature.
• He introduced the NLSY79 and NLSY97. He talked about the ASVAB scores and other unique features.
• The NLSY79 website was used to download some data.
• I talked about the SIPP, noting that it is monthly but covers only 4 years, and described where to get it.
7. #### Oct 31

Gero and Zhifeng presented the computation of the stationary distribution of the Aiyagari economy using the two methods (approximating the cdf and using a large sample). Thanks to both. I then discussed how to solve for the steady state of the Aiyagari economy, which involves finding the equilibrium capital-output ratio. From there we discussed some policies and how to compute a steady state that also imposes a period-by-period balanced budget constraint. In this context I showed how steady state comparisons, while informative about the long-run properties of the economies, do not tell us anything about welfare. For this we need a transition. I then asked you to pose a strategy to solve for such a transition, starting from the economy with no taxes and moving to one that imposes every period a 20% labor income tax that is redistributed lump sum, and then to discuss welfare. I then discussed the Krusell-Smith method to solve for equilibria of economies with distributions as state variables when predetermined variables are sufficient statistics for current prices. I finished by developing the Generalized Euler Equation of the Markov perfect equilibrium of an economy with hyperbolic discounting and noting how it poses difficulties for calculating the steady state.
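The outer fixed-point problem in the Aiyagari steady state, finding the interest rate at which aggregate household asset supply equals firms' capital demand, can be sketched as a bisection. This is a Python illustration with assumed parameters, and the household block is replaced by a made-up increasing asset-supply function; in the real algorithm that function comes from solving the household problem and aggregating over the stationary distribution:

```python
# Bisect on the interest rate r until household asset supply A(r) equals
# firms' capital demand K(r). All parameters are illustrative.
theta, delta, L = 0.36, 0.06, 1.0

def K_demand(r):
    # Firm FOC: r = theta * (K/L)**(theta - 1) - delta, inverted for K
    return L * ((r + delta) / theta) ** (1.0 / (theta - 1.0))

def A_supply(r):
    # Stand-in for aggregate household savings, increasing in r (made up);
    # in the real algorithm this requires solving the household problem.
    return 200.0 * (r + 0.02)

lo, hi = -delta + 1e-6, 1.0 / 0.96 - 1.0   # r stays below the complete-markets rate
for _ in range(200):
    r = 0.5 * (lo + hi)
    if A_supply(r) > K_demand(r):
        hi = r          # too much saving relative to demand: r must fall
    else:
        lo = r
# r now (approximately) clears the capital market
```

The same outer loop wraps the transition computation as well, except that there one must clear markets period by period along a guessed path of prices.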
8. #### Nov 7

Joao talked about the Fella paper on solving problems with continuous and discrete variables. We discussed the intricacies of this problem, which are many. I then discussed solving the hard Krusell-Smith problem, where predetermined variables are not sufficient statistics for prices (we will also see an intermediate case with two predetermined state variables).

9. #### Nov 21

Joao, Richard, and Radek gave various nice presentations on the homeworks and the Fella paper. Thanks to all. We are still struggling with the details of the very difficult, but fascinating, Fella paper. Another thing that we still have to clarify is how to approximate aggregate capital in the decision rule of agents when using the JPE Krusell-Smith approximating method. I talked briefly about parallel computing, especially MPI; that was all we will do on it.
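The approximating step in question, fitting a log-linear law of motion for aggregate capital by OLS on simulated aggregates, one regression per aggregate state, can be sketched as follows. The simulated series and coefficients here are made up; in the actual algorithm the data come from simulating the model itself, and one re-solves the household problem with the new law and iterates until the coefficients settle:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend simulated aggregates (z_t, log K_t, log K_{t+1}) generated from a
# known log-linear law of motion, one intercept/slope pair per state z:
a_true, b_true = {0: 0.10, 1: 0.14}, {0: 0.96, 1: 0.96}
z = rng.integers(0, 2, size=5000)
logK = rng.uniform(0.5, 1.5, size=5000)
logKp = np.array([a_true[s] + b_true[s] * k for s, k in zip(z, logK)])

# Agents' forecast rule: log K' = a_z + b_z * log K, estimated by OLS by state
coeffs = {}
for s in (0, 1):
    m = z == s
    X = np.column_stack([np.ones(m.sum()), logK[m]])
    coeffs[s], *_ = np.linalg.lstsq(X, logKp[m], rcond=None)
```

With noiseless made-up data the regression recovers the coefficients exactly; in the real algorithm the fit's R-squared is the usual (if debated) measure of how well the approximate aggregation works.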

10. #### Nov 28

Radek finished the Krusell-Smith I approximation. Zhifeng discussed the Krusell-Smith II approximation, where predetermined aggregates are not sufficient statistics for prices. Thanks to both. I discussed how to solve n equations with m < n+1 unknowns by minimizing (or finding a zero of) a function of several variables. I discussed various ways of doing so, speaking badly of Newton-Raphson and of Nelder-Mead.
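Solving more equations than unknowns by minimization can be illustrated in the linear case, where the least-squares minimizer is available in closed form. The system below is made up, and the snippet is a Python illustration, not the course's code:

```python
import numpy as np

# n = 4 equations in m = 2 unknowns: A x = b has no exact solution, so we
# minimize the sum of squared residuals ||A x - b||^2 instead.
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([1.1, 1.9, 3.2, 3.8])

x, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)

def ssr(v):
    """Sum of squared residuals at a candidate solution v."""
    return float(np.sum((A @ v - b) ** 2))

# The minimizer beats any other candidate in squared-residual terms
assert ssr(x) <= ssr(np.array([0.0, 1.0]))
```

For nonlinear systems no closed form exists and one falls back on iterative minimizers, which is where the relative merits of derivative-based and derivative-free methods come in.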

11. #### Dec 5

Kai will deal with how to solve hyperbolic-discounting-type problems where the derivatives of equilibrium functions appear in the Euler equations. Rocio will talk to us about how to use both values and derivatives to construct approximations to the value function, using this paper for the three-country example. I may talk about how to solve problems where the value functions are not even continuous (marriage). This finishes the class.

### Course Description

The target of this course is for you to be able to go to a macro seminar and to realize, after the model is laid out, that you are able to solve it and relate it to data, probably much better than the presenter.

This course can be thought of as an addendum to the Macro sequence; it follows naturally after 8105-8108. Its purpose is to learn the map from models to data, i.e., to answer quantitative questions that we are interested in (in the process of doing so, some interesting theoretical questions arise). We will develop tools by stating general questions and then discussing how to approach their answers. The tools that we will be developing beyond those already covered in the first year can be grouped into:
• Theoretical tools. While most of the necessary theoretical tools have been acquired in the first year, we may on occasion develop some additional theory to look at a particular issue. We will use representative agent models, models with a continuum of agents represented with measures, overlapping generations models, as well as models where agents form households. We will look at models where equilibria are optima and where they are not. We will look at stationary and non-stationary equilibria. We will look at models without perfect commitment and without perfect information.

• Empirical tools. A necessary condition for being able to do applied theory is to be able to characterize some properties of the world. This involves the capability of accessing data sets and of understanding the way they are organized, as well as the principles that guide the construction of the main data sources. This requires some knowledge of NIPA and of some software to read data (Eviews, Stata, Gretl, R).

• Computational Tools. Students should be able to construct and characterize the properties of the equilibrium allocations of artificial model economies. This is the main element of the course and where you will spend most of the time.

• Calibration. We will spend a good deal of time thinking about how a model is related to the data. This is, I think, the most important part of the learning process.

This is a Ph.D. course, not a Masters course. As such, students are not expected to learn what other people have discovered, but rather the tools needed to discover things by themselves. For this reason, the active work of the students is crucial to achieving the objective of mastering the tools described above. This is a course about learning to do things, and, therefore, it requires doing some things.

Every class except the first, we will devote the first twenty minutes or so to student presentations of homeworks. I expect professional competence in this regard.

### Course Requirements

There are various types of requirements that are a necessary part of the course, all of which are pertinent to achieving the course goals. This course believes drastically in Learning by Doing but, on the other hand, you are adults.
• Regular Homeworks. The ones posted here. Full credit only if on time. Partial credit otherwise.

Students will place the solutions to the homeworks and to the other requirements in electronic form. You have to send an email ASAP to help@cla.umn.edu, stating your name, university email, and username, and that you are in my course, to get access to a directory named /pkg/econ8185 and a subdirectory named after your username.

• Class Presentations. Every student will make at least two class presentations, with at least one being of a subset of a homework. The first presentation should take no more than 15 minutes and it will be absolutely professional. Every second wasted, every statement not planned, every bad thing will be highlighted. The second presentation (which will depend on class size and interests) may be on a paper or on another homework.

• Referee Report. I will assign a paper to each of you as we go along, to write a referee report on and perhaps also to present in no more than 20 minutes. The referee report should be no longer than five pages and should contain a clear and concise exposition of the main points of the article as well as a critical evaluation of the article's contributions. In addition, you should write a letter to the editor with your personal recommendations. If very good, I will use them, anonymously.

• Wikipedia Article. Optional, but excellent training. This is something that should be done by the end of the course. The moment you post it, email me and place a copy in your directory. Think of a topic from the course, no matter how silly.

### Class schedules.

This class meets mostly on Wed 10:00 to 1 in Hanson Hall 4-170 for the whole first semester. This is a mini-length course, yet the long homeworks required of you make it advisable to extend its duration. In addition, due to travel and other commitments, we will teach at other times whenever I am out of town on a Wednesday. For the sake of parallelism, the main candidate as a substitute is Monday. See this homepage for details.

### What about knowledge of Computers?

This is not a course in computer languages, so students are responsible for learning to write computer programs. Students are also responsible for learning their way around the McNeil computational facilities. I do not expect anybody to have a computer at home or anything like that. It is better to work in the McNeil computer room because you can talk to each other.

There are various general classes of computer languages.

• Fortran 90. This is the best and most powerful computer language. Among economists, very few prefer C. It is a little bit hard at the beginning (you have to declare variables and the like), but students have told me it is well worth learning as soon as possible. A very good introduction to Fortran can be found here.

• MPI and OpenMP. These are the mother of all serious calculations. They are ways of using F90 to parallelize code and take advantage of various kinds of hardware.

• Matlab, Gauss, Scilab, Octave, and R. These are very popular packages in economics. They are relatively easy to learn and code writing is easy. They generate much slower code than F90 (about 100 times slower), but they are probably a good choice for some problems. They may have an interface with F90, but I have never seen it working. Matlab is by now used in 90% of the cases. R and Octave are free. Dynare works with Matlab and Octave and is dead simple.

• Stata, Eviews, R, Gretl, and SAS. These are packages best suited for reading data. Stata is the most popular, and expensive. (I have used Fortran for this, which is insane; others have used Gauss.) Still, it is worth learning Stata.

• Mathematica and Maple. These are packages capable of doing symbolic manipulation of equations. Occasionally they can also be used to do numeric calculations. It does not hurt to know them. They may work together with Matlab and Sciword.

• Excel, OpenOffice Calc. Quick-and-dirty tools for fast data manipulation. Perfect for grades, and for getting output out of models. Calc is free and slow.

• A program to write plots. Sciword does it, as do Matlab, Excel, and Gauss. Some dinosaurs like me use gnuplot.
Students should be able to write code in F90, in addition to Matlab or Gauss, and in Stata. Most students tell me in later years that I should have enforced the learning of F90 harder, but I am willing to consider exceptions. If somebody has a serious reason not to use F90, please come and talk to me. At least one homework should be answered in F90.

To satisfactorily complete the course, students have to do the requirements well.

For those that do not register but take the course, I recommend that they do the homeworks. We learn to solve problems by facing them. Learning jointly with others greatly speeds the process.

### What about textbooks?

In addition to the standard macro books (Stokey and Lucas, [1989], Harris, [1987], Ljungqvist and Sargent, [2000]) I find that there are a few books of interest.
• Cooley and Prescott, [1995]. It is now dated but it contains some important lines of attack on business cycles. The computational techniques are a little bit obsolete, but the questions less so.
• Judd, [1998]. This is a general computational textbook with special attention to economists. While it is short on some details that we care about (complicated equilibrium considerations, multidimensional value functions, multidimensional interpolation), it is a very useful book for many topics.
• Marimon and Scott, [1998]. It has a bunch of chapters that deal with specific problems. I find the continuous time chapter nice as well as some scattered other chapters.
• Miranda and Fackler, [2002]. It is quite a nice book. Like others, it is too irrelevant in some places and too easy in others. It is designed for Matlab, which is a pity, but it has a nice implementation (via a [downloadable toolbox](http://www4.ncsu.edu/unity/users/p/pfackler/www/compecon/toolbox.html)) of global function approximations.
• Heer and Maussner, [2004]. This is a nice book with a lot more economics than the others. Its consideration of the theory is closest to what we do. It has also many examples. The codes can be found here.
• Press et al., [1992]. The classic book for numerical analysis. Very useful.

### Some interesting and useful links

Tips for Doing Computational Work in Economics by Tony Smith for insights.

Makoto Nakajima's course materials a great place for stuff.

Allan Miller's Fortran Software A good list of recently updated F90 codes.

Computer Codes from RED.

Fortran repositories. A place to look for that routine that you need.