Registry of published analyses (Harvard School of Public Health)

Compare two intervals:

A -------- B

C ---------------- D

1. *Direct judgment.*

Rate ratio or proportion (analog scale, attribute weights)

Rank order (dichotomous choice, "conjoint")

2. *Find D' so that CD' matches AB.*

CD is money, probability, time, or number of people

N -------------------- BBDD
100                    0

Doesn't force a difference of 10 to mean the same thing everywhere on the scale.

But could be seen as difference measurement, assigning numbers to differences between outcomes, e.g., A-B=?, B-C=?.

Baron and Ubel's ohp1 experiment using Lenert's scale

RATIO JUDGMENTS (RJ):

BBDD is ___ times as bad as BB.

How many people would have to get outcome A for the total good (or bad) to be the same as if X people got outcome B? E.g., "Curing how many people of headache is just as good as curing 100 people of cough?" Suppose the answer was 50 people. Then if U(cough) = -20 (relative to U(normal health) = 0), we can infer that U(headache) = -40.
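The person-tradeoff inference above is simple arithmetic: if curing n people of A is judged as good as curing m people of B, then n·U(A) = m·U(B). A minimal sketch using the headache/cough numbers (the function name is illustrative, not from the source):

```python
def infer_utility_pto(n_equiv, n_ref, u_ref):
    """Person tradeoff: curing n_equiv people of A is judged as good as
    curing n_ref people of B, so n_equiv * U(A) = n_ref * U(B)."""
    return (n_ref / n_equiv) * u_ref

# "Curing 50 people of headache is as good as curing 100 people of cough":
u_headache = infer_utility_pto(50, 100, -20)
print(u_headache)  # -40.0
```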

Suppose you had to choose between:

Option 1: p (Down syndrome birth) = X

Option 2: Abortion for certain.

What would X have to be for you to be indifferent between the two options?

Suppose X would have to be .03. If we assume that you would be indifferent only when the expected utility of the two options is the same, then, on a scale where 0 = normal birth and -100 = Down syndrome birth, the expected utility of Option 1 is (.03)(-100) = -3, so we can infer that the utility of Option 2 is also -3.
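The indifference condition above can be computed directly: at indifference, p times the utility of the worst outcome equals the utility of the certain option. A minimal sketch (the function name is an assumption, not from the source):

```python
def infer_utility_prob(p_indiff, u_worst):
    """At indifference, expected utilities match:
    p * U(worst outcome) = U(certain option)."""
    return p_indiff * u_worst

# X = .03 with U(Down syndrome birth) = -100:
u_option2 = infer_utility_prob(0.03, -100)
print(u_option2)  # approximately -3
```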

RISK-RISK TRADEOFFS (RR):

What would be the risk of BB (___/1000) so that you would have a hard time
deciding whether to accept this risk or a 10/1000 risk of BBDD?

"How many days of headaches is just as bad as 10 days of cough?"

Suppose the answer was 5 days. Then if U(cough) = -20 (relative to U(normal health) = 0), we can infer that U(headache) = -40.

- In a population of 1000 who are at risk of some disease, we must choose between saving 10 lives or curing 100 people with the disease.
- PTO says that saving 10 lives is just as good as curing the disease in 90 people. Therefore, we would do better by curing 100 people than by saving 10 lives.
- But, with RR, each person says that a 10/1000 risk of death is just as bad as a 110/1000 risk of the disease. Therefore, we would do better by removing the risk of death, since a 100/1000 risk of the disease is not as bad as a 10/1000 risk of death.
- In sum, if we use the PTO to override individual judgment, we can wind up doing something that is worse for each person.
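The conflict can be checked numerically. By each person's own risk-risk judgment, death is 110/10 = 11 times as bad as the disease, so every individual prefers the policy that removes the death risk, even though the PTO exchange rate favors curing. The utility scale below is an arbitrary illustration, not from the source:

```python
u_disease = -10.0         # arbitrary unit, chosen for illustration
u_death = 11 * u_disease  # RR: 10/1000 death risk == 110/1000 disease risk

def expected_gain(risk_removed_death, risk_removed_disease):
    """Ex-ante expected utility gain per person from a policy."""
    return -(risk_removed_death * u_death +
             risk_removed_disease * u_disease)

save_lives = expected_gain(10 / 1000, 0)     # remove the death risk
cure_disease = expected_gain(0, 100 / 1000)  # remove the disease risk
print(save_lives > cure_disease)  # True: each person prefers saving lives
```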

Superadditivity: (x-y) + (y-z) > (x-z)

Explained by diminishing sensitivity
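Diminishing sensitivity can be modeled by passing the true difference through a concave judgment function; any such function produces the superadditive pattern. A sketch with an illustrative power exponent (the exponent and values are assumptions, not from the source):

```python
def judged(diff, beta=0.5):
    """Judged size of a difference under diminishing sensitivity:
    a concave power function of the true difference (beta < 1)."""
    return diff ** beta

x, y, z = 100, 60, 0
lhs = judged(x - y) + judged(y - z)  # judged (x-y) plus judged (y-z)
rhs = judged(x - z)                  # judged (x-z) directly
print(lhs > rhs)  # True: superadditivity
```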

Ratio inconsistency: A/C less extreme than (A/B)(B/C)

Explained by Parducci's range-frequency theory

Certainty effect (avoided by RR)

Ratio bias: 10/1000 seems worse than 1/100.

Epstein (but also Piaget and Inhelder)

not avoided by RR :(

PVs for life in time tradeoff

"Of course, if the respondent has strong, crisp, unalterable views on all questions and if these are inconsistent, then we would be in a mess, wouldn't we?

In practice, however, the respondent usually feels fuzzier about some of his answers than others..."

This would create a problem for the insurer. To determine the badness of being One-blind relative to Both-blind-Both-deaf, they would not know which answer to use. ...

Try to make your numerical answers consistent. Can you do this and still have them reflect your true opinions about the conditions? (If not, why not?)

1. A-----------B
   A-------------------C

2. B-------C
   A-------------------C

For example, suppose question 1 is "How large is the difference (in percent) between no problems and being blind in one eye, compared to the difference between no problems and being blind?" Suppose question 2 is "How large is the difference between being blind in one eye and being blind, compared to the difference between no problems and being blind?" Then the sum of these answers should be 100%.
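The consistency requirement here is just that the two percentage answers sum to 100. A minimal check (the function name is illustrative):

```python
def consistent(pct_ab_of_ac, pct_bc_of_ac, tol=1e-9):
    """The judged difference A-B plus the judged difference B-C,
    each expressed as a percent of A-C, should sum to 100%."""
    return abs(pct_ab_of_ac + pct_bc_of_ac - 100) < tol

print(consistent(30, 70))  # True
print(consistent(30, 50))  # False: the differences don't add up
```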

- Assume utility is a sum of utility on attributes.
- Ask for judgments of every combination of top and bottom.
- Measure the utility of each attribute by subtraction.
- Gives utility of each attribute *relative* to others.
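The subtraction step in the bullets above can be sketched with ratings of the four top/bottom combinations of two attributes (all ratings below are invented for illustration):

```python
# Additive model: U(combo) = sum of attribute part-utilities.
# Hypothetical judgments of every top/bottom combination:
ratings = {
    ("top", "top"): 100,     # best level on both attributes
    ("top", "bottom"): 70,
    ("bottom", "top"): 40,
    ("bottom", "bottom"): 10,
}

# Utility of each attribute = effect of moving it bottom -> top,
# holding the other attribute fixed (subtraction):
u_attr1 = ratings[("top", "top")] - ratings[("bottom", "top")]  # 60
u_attr2 = ratings[("top", "top")] - ratings[("top", "bottom")]  # 30
print(u_attr1, u_attr2)  # 60 30: attribute 1 matters twice as much
```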

Example (from Psych 353):

Study on attractiveness and relationships (Christine Kam)