Department of Economics, University of Minnesota
4-101 Hanson Hall (off 4-179), 1925 Fourth Street South, Minneapolis, MN 55455
Phone: (612) 625-0941. Fed phone: (612) 204-5528. Fax: (612) 624-0209.
Homepage
http://www.econ.umn.edu/~vr0j/index.html
Wed 10:00-11:20 Hanson Hall 4-170. Some Mondays 10:00-11:20 in 4-170 Hanson will substitute for Wednesdays. Office hours: before and after class and by appointment.
email:
vr0j@umn.edu.
The new deadline for all homeworks is the last day of
class. Please plan to come and talk to me individually to
tell me which homework you are proudest of.
The eighth homework batch has been corrected to
talk about labor and households. I would like you to do the new 14
instead of either 13 or 15.
Those of you who present, please put your slides in some location on
the web and post the address on the easywebcal page of the course
so that we can all have access to them.
A preliminary version of the
Syllabus. It is very likely to change; I will update it.
Other relevant links.
Here you may find an occasional
paper, homepage or subroutine of interest.
What are we doing?
A class by class ex-post diary.
Sept 7.
We talked about the class and what
it is about, going over all the details. We started
with a discussion of the first homework and of what it
means to approximate a function, including the three
ingredients (a family of approximating functions, a
criterion of distance, and a method to find the
approximation) that describe a particular approximation approach.
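To make those three ingredients concrete, here is a minimal Python sketch (the target function and the degree are arbitrary choices, not part of the homework): the family is degree-3 polynomials, the distance criterion is the sum of squared errors at sample points, and the method is least squares.

```python
import numpy as np

# Family: polynomials of degree 3. Distance: sum of squared errors at
# the sample points. Method: least squares (np.polyfit).
x = np.linspace(0.0, 1.0, 50)
y = np.exp(x)                    # arbitrary function to approximate
coefs = np.polyfit(x, y, 3)      # least-squares fit within the family
yhat = np.polyval(coefs, x)
max_err = np.max(np.abs(yhat - y))
```

Changing any one ingredient, say Chebyshev polynomials for the family or the sup norm for the distance, gives a different approximation approach.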
Sept 14
We discussed the construction of the
series for the Solow residual. This required the
imputation of all series in NIPA on the factor side to
either capital or labor and the addition of some
components that are not in GDP but are in our measure of
output (services from Government capital and from
consumer durables). Look at Cooley and Prescott's
chapter in the Cooley volume or the
appendices here
or here
for details. We started to discuss how to use the
stochastic properties of the time series and the basic
growth model to answer the question about the role of
productivity shocks in shaping output
fluctuations.
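Once the output, capital, and labor series are assembled, the residual itself is one line. A minimal Python sketch, assuming Cobb-Douglas technology; the series and the capital share of 0.36 are illustrative stand-ins, not the NIPA-based measures discussed above:

```python
import numpy as np

def solow_residual(Y, K, L, theta=0.36):
    """TFP implied by Y_t = A_t * K_t**theta * L_t**(1 - theta).
    theta is the capital share; 0.36 is illustrative, in practice it
    comes from the NIPA factor-payment imputation."""
    return Y / (K**theta * L**(1 - theta))

# toy series, purely illustrative
Y = np.array([100.0, 103.0, 106.5])
K = np.array([300.0, 306.0, 312.0])
L = np.array([100.0, 101.0, 102.0])
A = solow_residual(Y, K, L)
```

Log differences of A then give the productivity series whose stochastic properties the exercise uses.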
Sept 21
Gustavo presented some of the
homeworks (Thanks Gustavo). We continued the discussion
of how to use the stochastic properties of the time
series and the basic growth model to answer the question
about the role of productivity shocks in shaping output
fluctuations. In particular, we talked about how to
calibrate a model economy so that its steady state looks
like the U.S. in the dimensions that we want.
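As a minimal illustration of that logic, assuming Cobb-Douglas technology and round-number targets (these are not the course's actual targets), the depreciation rate and the discount factor fall out of two steady-state conditions:

```python
# Steady-state calibration sketch: choose (delta, beta) so the model's
# steady state hits U.S.-style targets. The targets and the capital
# share below are round illustrative numbers.
KY = 3.0      # target capital-output ratio
IY = 0.25     # target investment-output ratio
theta = 0.36  # capital share

delta = IY / KY              # steady state: I = delta * K
r = theta / KY - delta       # net return: MPK minus depreciation
beta = 1 / (1 + r)           # Euler equation at the steady state
print(round(beta, 4))        # prints 0.9646
```

The direction of the mapping is the point: parameters are not picked freely, each one is pinned down by a dimension of the data we want the steady state to match.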
Sept 28
Rishabh presented the 4th Hwk batch
(thanks Rishabh). We talked about various issues related
to comparing model-generated data with U.S. data and
about how to use the model to say something about the
U.S. Rishabh discussed the calibration (more on that
next week). We talked briefly about solving the growth
model with log-linear approximations (via Dynare). We
then started the discussion of global approximation
methods using piecewise linear functions with exact
solutions at the grid points.
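A minimal Python sketch of that idea for the deterministic growth model (log utility; all parameter values are illustrative): the value function is stored at the grid points and evaluated off the grid by piecewise linear interpolation.

```python
import numpy as np

# Value function iteration with a piecewise linear approximation of V:
# V is stored on kgrid and evaluated at off-grid choices with np.interp.
# Parameters are illustrative, not the course calibration.
beta, theta, delta = 0.96, 0.36, 0.08
kgrid = np.linspace(0.5, 8.0, 101)            # grid where V is stored
kchoice = np.linspace(0.5, 8.0, 501)          # finer grid of k' choices

resources = kgrid**theta + (1 - delta) * kgrid
C = resources[:, None] - kchoice[None, :]     # consumption for each (k, k')
U = np.where(C > 0, np.log(np.maximum(C, 1e-12)), -1e12)

V = np.zeros(kgrid.size)
for _ in range(2000):
    EV = np.interp(kchoice, kgrid, V)         # piecewise linear evaluation
    V_new = np.max(U + beta * EV[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
```

With shocks one would carry one such array per productivity state; the interpolation step is unchanged.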
Oct 3
Zachary Mahone presented
parts of Hwk batch 5 to get an answer to the question of
how much productivity shocks contribute to the
volatility of hours worked. Thanks Zach. We still have
to get an answer and to discuss in more depth how it
compares with estimation. I would like another student
to finish this homework. I interrupted often to discuss
the nature of the homework. I did a bit of talking on
global approximation methods, including the inverse
search method. I pointed to this paper for a description
of how it works.
Oct 12
Zoe Xie presented a
business cycle model with TFP shocks and shocks to the
relative price of investment (from Violante and
coauthors' series). Thanks Zoe. I then discussed the
nature of the problems involved in solving for the
allocations in the Aiyagari economy, Hwk batch
6. This involved the nature of the functional equation
that determines the laws of motion and the way to
approximate the stationary distribution of agents.
Oct 19
Gustavo made a
presentation on global approximation. We discussed in
some detail how to use the endogenous grid method to solve a
problem that uses piecewise linear approximations with
leisure. Thanks Gustavo. I then talked briefly about
splines and about how to find a steady state in the
Aiyagari economy. I pointed to some twist on
ambiguity (Sargent?)
using these
or these
slides.
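For reference, here is a minimal Python sketch of the endogenous grid idea for a bare-bones income-fluctuation problem, without the leisure choice and without shocks (constant income, CRRA utility, all parameters illustrative): invert the Euler equation on a fixed grid of tomorrow's assets, back out the endogenous grid of today's assets from the budget constraint, and interpolate the policy back onto the fixed grid.

```python
import numpy as np

# One EGM iteration: fix a grid for a' (assets tomorrow), invert the
# Euler equation to get today's consumption, recover the endogenous
# grid of today's assets, and interpolate back to the fixed grid.
beta, r, gamma = 0.96, 0.03, 2.0   # illustrative parameters
y = 1.0                            # constant income, for simplicity
agrid = np.linspace(0.0, 20.0, 200)

def egm_step(c_next):
    """Map tomorrow's consumption policy c'(a') into today's policy."""
    mu_next = c_next**(-gamma)                      # marginal utility at a'
    c_today = (beta * (1 + r) * mu_next)**(-1 / gamma)
    a_endog = (c_today + agrid - y) / (1 + r)       # budget: (1+r)a + y = c + a'
    c_on_grid = np.interp(agrid, a_endog, c_today)  # back onto the fixed grid
    # where the borrowing constraint binds, consume all resources
    return np.minimum(c_on_grid, (1 + r) * agrid + y)

c = (1 + r) * agrid + y            # initial guess: consume everything
for _ in range(2000):
    c_new = egm_step(c)
    if np.max(np.abs(c_new - c)) < 1e-10:
        c = c_new
        break
    c = c_new
```

The payoff relative to grid search is that no maximization is performed: each iteration is a handful of vectorized array operations. Adding shocks replaces mu_next by a conditional expectation over tomorrow's states.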
Oct 26
David Wiczer described the
class as follows.
First we talked about general pitfalls with
these micro data sources, i.e., (1) what the weights are and
why we need them, (2) what top coding is, and (3) what
sort of checks one can do for coding errors.
Cross-section: We talked about the CPS and its
flavors: the Displaced Worker Survey, the Outgoing Rotation
Groups, and the March Supplement. I talked about the DWS by
looking at Farber's recent NBER WP 17040 about the
experience of the unemployed in the Great Recession. We
went to IPUMS and downloaded some March Supplement data,
went to CEPR and got the ORG, and went to the NBER page and
talked about how to chain them.
Then I covered the CEX: what is there and how to get it
from the NBER or ICPSR. We talked about the survey and
diary versions and how they match up with macro data.
Then I talked about the SCF and some of your "facts on
distributions of wealth" papers, and I showed them the
awesome new panel they made for 2007 respondents.
Panels: I talked about why to use panels in two contexts.
What is the nature of earnings risk: isolating
idiosyncratic and persistent shocks and individual
trends. We went through MaCurdy's (1982) covariance test
for individual trends and how panel data give it more
power. What are individual fixed effects and why might
they be economically meaningful? I talked about Abowd,
Kramarz and Margolis (1999), which studies whether high-wage
firms are just collections of individuals with high
wage fixed effects.
What are cohort, time and age effects? I went through
Storesletten, Telmer & Yaron (2004) and how to construct
earnings variance profiles.
Nov 2
Actually, David Wiczer
came back to finish the use of panel data and to
discuss how to calibrate the Aiyagari model to certain
statistics of the wealth and income distribution and of
the persistence of wages, using a model with leisure
choice. He will talk more about how to get an
approximation of a distribution into a computer and then
use it, so he will get a bit into the nitty-gritty of the
actual computation. In his own words, what he did
was:
Panel Data: We talked about the problem with short,
dynamic panels and how the Arellano-Bond estimator
addresses this.
We went to the PSID and talked about the strengths and
shortcomings of the data. For retrieving it, their
search engine is useless but PSIDUSE is good.
About the SIPP, we discussed the unique structure of its
high-frequency data and how to get it from the census and
the CEPR. Finally we discussed the NLSY and all its
unique details about young people.
Wealth & Income Distribution: We went through
step-by-step how to approximate distributions by
discretizing and by Monte Carlo.
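As a minimal sketch of the discretization route (the three-state chain here is made up purely for illustration): put the mass on a grid and push it through the transition matrix until it stops moving. The Monte Carlo route instead simulates a large panel and uses its empirical histogram.

```python
import numpy as np

# Stationary distribution by discretization: iterate mu' = mu P until
# convergence. P is an illustrative three-state transition matrix.
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])
mu = np.array([1.0, 0.0, 0.0])     # start with all mass in state 0
for _ in range(100000):
    mu_new = mu @ P
    if np.max(np.abs(mu_new - mu)) < 1e-12:
        mu = mu_new
        break
    mu = mu_new
```

In the Aiyagari economy the state also includes asset holdings, so mu lives on an assets-times-productivity grid and the transition combines the decision rules with the shock process, but the fixed-point logic is identical.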
We reviewed some facts about the joint wealth-income
distribution and how many models fail to match them.
We discussed some remedies, starting with heterogeneous
discount rates. Then we touched on why entrepreneurs
might be important and how Cagetti and De Nardi (2006)
incorporate them.
Then we talked about Castaneda, Diaz-Gimenez and Rios-Rull
(2003). I highlighted the bequest/benevolence-toward-kids,
retirement, and consumption-smoothing motives.
Technically, we went through how to summarize tons of
stuff as a single stochastic state variable and how we
have some flexibility with the transition matrix to hit
various objectives. And there was much rejoicing.
Nov 9
Filippo presented
using this
some material from Hwk Batch 5 (the three-shock economy). We
used his very nice presentation (thanks Filippo) to discuss
how to think about the mapping between model and data. Then I
discussed the issues associated with solving for a transition:
first in the simple case of the growth model, and then in the
context of the Aiyagari economy.
Nov 16
Zach presented, using
endogenous grid methods, how to solve the Aiyagari
economy. Thanks Zach. Then I presented the Krusell-Smith
family of methods to solve for equilibria of economies with
distributions as state variables when predetermined
variables are sufficient statistics for current prices.
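A minimal sketch of the step that gives these methods their flavor, assuming the common log-linear specification: the forecasting rule log K' = b0 + b1 log K is refit by OLS on simulated aggregates. The series below is synthetic noise around an AR(1), generated only so the regression has something to chew on; in the actual algorithm it would come from simulating the cross-section under the current rule.

```python
import numpy as np

# The updating step at the heart of Krusell-Smith: regress simulated
# aggregate capital on its lag to refit log K' = b0 + b1 log K.
rng = np.random.default_rng(0)
T = 1000
logK = np.empty(T)
logK[0] = np.log(5.0)
for t in range(T - 1):                        # synthetic stand-in series
    logK[t + 1] = 0.1 + 0.95 * logK[t] + 0.005 * rng.standard_normal()

X = np.column_stack([np.ones(T - 1), logK[:-1]])
b, *_ = np.linalg.lstsq(X, logK[1:], rcond=None)
resid = logK[1:] - X @ b
R2 = 1 - resid.var() / logK[1:].var()
```

One then iterates: solve the household problem under the fitted rule, resimulate, refit, and stop when b settles down; the R² measures how well the restricted state vector forecasts prices.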
Nov 23
Rishabh presented the
computation of the stationary distribution of wealth and
productivity in the Aiyagari economy and how to find the
steady state. We made comments about how to improve the
speed and accuracy of the calculations. Thanks
Rishabh. Then I presented the second-generation
Krusell-Smith family of methods to solve for
equilibria of economies with distributions as state
variables when predetermined variables are NOT sufficient
statistics for current prices.
Dec 5
Given the sparse attendance, I talked about things that were interesting for the few in the class. This was about time-consistent policies: what the issues are, and what the characterization is. It is partly related to my paper with Klein and Krusell.
Dec 7
I went over some issues in time-consistent
policy determination again.
Dec 14
I will wrap up the course and ask about the homeworks. We will discuss some political economy issues and perhaps what interesting topics of research there are.
Course Description
The target of this course is for you to be able to go to a
macro seminar and to realize, after the layout of the model,
that you are able to solve it and relate it to data, probably
much better than the presenter.
This course can be thought of as an addendum to the Macro
sequence; it follows naturally after 8105-8108. Its purpose is
to learn the map from models to data, i.e., to answer
quantitative questions that we are interested in (in the
process of doing so, some interesting theoretical questions
arise). We will develop tools by stating general questions
and then discussing how to approach their answers. The tools
that we will be developing beyond those already covered in the
first year can be grouped into:
Theoretical tools. Not all the
necessary tools have been acquired in the first year. We
will look at representative agent models, models with a
continuum of agents represented with measures, overlapping
generations models, as well as models where agents form
households. We will look at models where equilibria are
optima and where they are not. We will look at stationary
and non-stationary equilibria. We will look at models
without perfect commitment and without perfect
information.
Empirical tools. A necessary condition
for being able to do applied theory is being able to
characterize some properties of the world. This involves the
capability of accessing data sets and of understanding the
way they are organized, as well as the principles that guide
the construction of the main sources. This requires some
knowledge of NIPA and of some software to read data (EViews,
Stata, Gretl, R).
Computational Tools. Students should
be able to construct and characterize the properties of the
equilibrium allocations of artificial model economies. This
is the main element of the course and where you will spend
most of the time.
Calibration. We will spend a good deal
of time thinking about how a model is related to the data. This
is, I think, the most important part of the learning process.
This is a Ph.D. course, not a Masters course. As such, students
are not expected to learn what other people have discovered, but
rather the tools that are needed in order to discover things by
themselves. For this reason, the active work of the
students is crucial to achieving the objective of mastering the
tools described above. This is a course to learn to do
things, and, therefore, it requires doing some things.
Every class except the first one, we will devote the first
twenty minutes or so to student presentations of
homeworks. I expect professional competence in this regard.
Course Requirements
There are various types of requirements that are a necessary
part of the course, all of which are pertinent to achieving
the course goals. This course believes
drastically in learning by doing but, on the other hand, you
are adults.
Regular Homeworks. The ones posted
here. Full credit only if on time; partial credit otherwise.
Students will place the solution to the homeworks and to
other requirements in electronic form. You have to send an
email ASAP to help@cla.umn.edu stating your name and
university email and username and that you are in my course
to have access to a directory named /pkg/econ8185 and a
subdirectory your user name.
Class Presentations. Every student
will make at least two class presentations, with at least
one being of a subset of a homework. The first
presentation should take no more than 15 minutes and
should be absolutely professional. Every second wasted,
every unplanned statement, every bad thing will be
highlighted. The second presentation (which will depend on
class size and interests) may be on a paper or on another
homework.
Referee Report. I will assign a
paper to each of you as we go along, on which to write a
referee report and perhaps also to present in no more
than 20 minutes. The referee report should be no longer
than five pages and should contain a clear and concise
exposition of the main points of the article as well as a
critical evaluation of the article's contributions. In
addition, you should write a letter to the editor with
your personal recommendations. If very good, I will use
them, anonymously.
Wikipedia Article. Optional, but
excellent training. This is something that should be done
by the end of the course. The moment you post it, email me
and place a copy in your directory. Think of a topic from
the course, no matter how silly.
Class schedules.
This class meets mostly on Wed 10:00 to 1 in Hanson Hall
4-170 for the whole first semester. This is a mini-length course,
yet the need for long homeworks on your part makes it advisable to
extend its duration. In addition, due to travel and other things,
we will teach at other times whenever I am out of town on
Wednesdays. For the sake of parallelism, the main candidate as a
substitute is Monday. See this homepage for details.
What about knowledge of Computers?
This is not a course in computer languages, so students are
responsible for learning to write computer programs. Students are
also responsible for learning their way around the McNeil
computational facilities. I do not expect anybody to have a
computer at home or anything like that. It is better to work
in McNeil's computer room because you can talk to each other.
There are various general classes of computer languages.
Fortran 90. This is the best and most powerful computer
language. Among economists, very few prefer C. It is a little
bit hard at the beginning (you have to declare variables and
the like), but students have told me it is well worth learning
as soon as possible. A very good introduction to Fortran can
be found here.
MPI and OpenMP. These are the mother of all serious
calculations: forms of using F90 to parallelize code
and take advantage of various kinds of hardware.
Matlab, Gauss, Scilab, Octave and R. These are very
popular packages in economics. They are relatively easy to
learn, and code writing is easy. They generate much slower
code than F90 (about 100 times slower), but they are probably
a good choice for some problems. They may have an interface
with F90, but I have never seen it working. Matlab is by now
used in 90% of the cases. R and Octave are free. Dynare works
with Matlab and Octave and is dumb simple.
Stata, Eviews, R, Gretl, Maple, and SAS. These are
packages best suited for reading data. Stata is the most
popular and expensive. (I have used Fortran for this, which is
insane; others have used Gauss.) Still, it is worth learning
Stata.
Mathematica and Maple. These are packages capable of
doing symbolic manipulation of equations. Occasionally they
can also be used to do numeric calculations. It does not hurt
to know them. They may work together with Matlab and Sciword.
Excel, OpenOffice Calc. A dirty way to do fast data
manipulation. Perfect for grades, and for getting output from
models. Calc is free and slow.
A program to write plots. Sciword does it, as do Matlab,
Excel, and Gauss. Some dinosaurs like me use gnuplot.
Students should be able to write code in F90, in addition to Matlab
or Gauss and to Stata. Most students tell me in later years that I
should have enforced the learning of F90 harder, but I am willing
to consider exceptions. If somebody has a serious reason not to
use F90, please come and talk to me. At least one homework should
be answered in F90.
Grading Rules
To satisfactorily complete the course, students have to do the
requirements well.
For those who do not register but take the course, I recommend
doing the homeworks. We learn to solve problems by facing
them, and learning jointly with others greatly speeds the process.
Cooley and Prescott,
[1995]. It is now dated but it contains some important
lines of attack on business cycles. The computational
techniques are a little bit obsolete, but the questions less
so.
Judd, [1998]. This is a general computational textbook
with special attention to economists. While it is short on
some details that we care about (complicated equilibrium
considerations, multidimensional value functions,
multidimensional interpolation), it is a very useful book
for many topics.
Marimon and Scott,
[1998]. It has a bunch of chapters that deal with
specific problems. I find the continuous-time chapter nice,
as well as some other scattered chapters.
Miranda and Fackler,
[2002]. It is quite a nice book. Like the others, it is too
irrelevant in some places and too easy in others. It is
designed for Matlab, which is a pity, but it has a nice
implementation (via a downloadable toolbox,
http://www4.ncsu.edu/unity/users/p/pfackler/www/compecon/toolbox.html)
of global function approximations.
Heer and Maussner,
[2004].
This is a nice book with a lot more economics than the others. Its
treatment of the theory is closest to what we do. It also has many
examples. The codes can be found
here.