Decision theory

About 17 years ago, I spent a year at Dartmouth Medical School getting another degree, in medical outcomes research. I had retired from the practice of surgery after a 14-hour spine fusion. In college, a fall in gymnastics class had sent me to the student health center. They x-rayed my neck but not my back below it. When I began medical school, we all had to have chest x-rays, and mine showed that the fall had caused a three-level compression fracture in my thoracic spine. After 18 years in practice and 25 years of standing at an operating table, I had begun to have trouble with my back. It began with pain but progressed to signs of spinal cord compression. In 1994, I went to UC San Francisco to consult David Bradford, who had written a number of papers on newer techniques in the surgery I needed. He agreed that I needed it, and we arranged for me to have the surgery after Christmas 1994. It involved a lengthy recovery, so I retired from my practice and turned it over to a younger associate. I had planned to return part time and see office patients only, but he had other ideas, which were not well thought out; there was little I could do about it.

I had been interested in medical quality measurement for years. Now, with no activity planned once I recovered, I got interested in the Dartmouth program. It was called the “Center for Evaluative Clinical Sciences,” a rather clumsy name. It is now called something else, but the idea is the same. Jim Weinstein, who is now CEO of the Dartmouth-Hitchcock Medical Center, was in my class, which began in 1994.

The program included some remedial math for us oldsters. Although I had been an engineer, that had been in the 1950s. We got a lot of statistics education and some health policy. The Dartmouth folks had been involved in the design of Hillary Clinton’s health plan, and I had some fundamental disagreements with them about policy. Like so many academics, they were convinced that they knew how to run a top-down system; I was not so sure. The methodology training, however, proved invaluable to me.

Two areas new to me were very enlightening. One was survey design, which taught me a lot about surveys and, incidentally, about polling. The other was decision theory. I had no idea how important it would become in health care.

Choice under uncertainty
This area represents the heart of decision theory. The procedure now referred to as expected value was known from the 17th century. Blaise Pascal invoked it in his famous wager, contained in his Pensées, published in 1670. The idea of expected value is that, when faced with a number of actions, each of which could give rise to more than one possible outcome with different probabilities, the rational procedure is to identify all possible outcomes, determine their values (positive or negative) and their probabilities under each course of action, and multiply the two to give an expected value. The action chosen should be the one with the highest total expected value.

In 1738, Daniel Bernoulli published an influential paper entitled Exposition of a New Theory on the Measurement of Risk, in which he used the St. Petersburg paradox to show that expected value theory must be normatively wrong. He also gave an example in which a Dutch merchant is trying to decide whether to insure a cargo being sent from Amsterdam to St. Petersburg in winter, when it is known that there is a 5% chance that the ship and cargo will be lost. In his solution, he defined a utility function and computed expected utility rather than expected financial value (see [1] for a review).
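Bernoulli’s resolution is easy to see in a short sketch. Here is a minimal Python example, assuming illustrative numbers for the merchant’s wealth, the cargo’s value, and the premium (only the 5% loss probability comes from the text), comparing expected monetary value with Bernoulli’s logarithmic expected utility:

```python
import math

# Bernoulli's merchant, sketched with illustrative numbers: the 5% chance
# of losing ship and cargo is from the text; the merchant's other wealth,
# the cargo's value, and the premium are assumptions for illustration.
wealth = 3_000       # merchant's other wealth
cargo = 10_000       # value of the cargo if it arrives safely
premium = 800        # hypothetical cost of insuring the cargo
p_loss = 0.05        # chance the ship is lost in winter

# Expected monetary value of each action.
ev_uninsured = (1 - p_loss) * (wealth + cargo) + p_loss * wealth
ev_insured = wealth + cargo - premium   # outcome is certain once insured

# Bernoulli's proposal: compare expected *utility* using log utility,
# which values each additional dollar less as wealth grows.
eu_uninsured = (1 - p_loss) * math.log(wealth + cargo) + p_loss * math.log(wealth)
eu_insured = math.log(wealth + cargo - premium)

print(f"Expected value:   uninsured {ev_uninsured:,.0f}  insured {ev_insured:,.0f}")
print(f"Expected utility: uninsured {eu_uninsured:.4f}  insured {eu_insured:.4f}")
```

With these numbers the premium costs more than the expected loss, so expected value says to sail uninsured, while expected log utility says to buy the insurance. Rerun the sketch with a much larger wealth and the log-utility merchant declines the policy too, which was Bernoulli’s point: the right choice depends on how much the decision-maker has to lose.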
In the 20th century, interest was reignited by Abraham Wald’s 1939 paper[2] pointing out that the two central procedures of sampling-distribution-based statistical theory, namely hypothesis testing and parameter estimation, are special cases of the general decision problem. Wald’s paper renewed and synthesized many concepts of statistical theory, including loss functions, risk functions, admissible decision rules, antecedent distributions, Bayesian procedures, and minimax procedures. The phrase “decision theory” itself was used in 1950 by E. L. Lehmann.[3]

I was to spend considerable time learning about Bayes’ theorem. It lies at the base of probability theory, and it is also the basis of all diagnostic testing, including the physical examination. The math looks intimidating, but the result can be quite simple. The proportion of true positives among those who actually have the condition determines the sensitivity of a test; the proportion of true negatives among those who do not determines its specificity. Combined with how common the condition is, these figures determine the value of a diagnostic test, and the results can be counterintuitive. For example:

Drug testing

Suppose a drug test is 99% sensitive and 99% specific. That is, the test will produce 99% true positive results for drug users and 99% true negative results for non-drug users. Suppose that 0.5% of people are users of the drug. If a randomly selected individual tests positive, what is the probability he or she is a user?

Despite the apparent accuracy of the test, if an individual tests positive, it is more likely that they do not use the drug than that they do.
This surprising result arises because the number of non-users is very large compared to the number of users, such that the number of false positives (0.995% of everyone tested) outweighs the number of true positives (0.495%). To use concrete numbers, if 1,000 individuals are tested, there are expected to be 995 non-users and 5 users. From the 995 non-users, about 10 false positives (1%) are expected. From the 5 users, about 5 true positives (99%) are expected. Out of these 15 positive results, only 5, about 33%, are genuine.
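The same arithmetic can be written as a two-line application of Bayes’ theorem. A minimal Python sketch, using the numbers from the example above:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(user | positive test), by Bayes' theorem."""
    true_pos = prevalence * sensitivity               # users who test positive
    false_pos = (1 - prevalence) * (1 - specificity)  # non-users who test positive
    return true_pos / (true_pos + false_pos)

# Numbers from the drug-testing example: 0.5% prevalence,
# 99% sensitivity, 99% specificity.
ppv = positive_predictive_value(prevalence=0.005, sensitivity=0.99, specificity=0.99)
print(f"P(user | positive) = {ppv:.1%}")   # about 33%
```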

This is important in breast screening with mammograms and became very important with AIDS testing.

The same theorem can be applied to electronics. In signal detection, receiver operating characteristic (ROC) curves plot the true positive rate against the false positive rate and can show whether the signal-to-noise ratio makes a device useful.
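A minimal sketch of the idea, with simulated detector readings (the Gaussian score distributions and the threshold values here are illustrative assumptions):

```python
import random

random.seed(1)

# Simulated detector scores: noise-only readings vs signal-plus-noise
# readings, drawn from overlapping Gaussian distributions.
noise = [random.gauss(0.0, 1.0) for _ in range(1000)]
signal = [random.gauss(1.5, 1.0) for _ in range(1000)]

# Sweep the decision threshold; each threshold yields one ROC point:
# true positive rate (signal correctly flagged) vs false positive rate
# (noise wrongly flagged).
for threshold in [-1.0, 0.0, 0.75, 1.5, 3.0]:
    tpr = sum(s > threshold for s in signal) / len(signal)
    fpr = sum(n > threshold for n in noise) / len(noise)
    print(f"threshold {threshold:5.2f}: TPR {tpr:.2f}, FPR {fpr:.2f}")
```

Plotting TPR against FPR over all thresholds traces the ROC curve; the more it bows toward the top-left corner, the better the device separates signal from noise.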

Anyway, the subject of decision theory has become more important in medicine in recent years. One example is the too-many-choices dilemma. Used car salesmen know this as a practical matter: they are taught that showing a potential buyer more than one car decreases the likelihood that he will buy anything. A New England Journal of Medicine article some years ago found the same effect in medicine: offering a patient a wide choice of NSAIDs, like Motrin, for osteoarthritis decreased the probability that they would take any of them.

The example used in my class at Dartmouth was a student headed for the library to study. If a friend stopped the student on the way and suggested a party instead, there was an increased probability that the student would do neither and return to the dorm.

A recent article suggests how this affects the average person in daily life.

You walk into a Starbucks and see two deals for a cup of coffee. The first deal offers 33% extra coffee. The second takes 33% off the regular price. What’s the better deal?

“They’re about equal!” you’d say, if you’re like the students who participated in a new study published in the Journal of Marketing. And you’d be wrong. The deals appear to be equivalent, but in fact, a 33% discount is the same as a 50% increase in quantity. Math time: let’s say the standard coffee is $1 for 3 quarts ($0.33 per quart). The first deal gets you 4 quarts for $1 ($0.25 per quart) and the second gets you 3 quarts for 66 cents ($0.22 per quart).

The upshot: Getting something extra “for free” feels better than getting the same for less. The applications of this simple fact are huge. Selling cereal? Don’t talk up the discount. Talk up how much bigger the box is! Selling a car? Skip the MPG conversion. Talk about all the extra miles.

There are two broad reasons why these kinds of tricks work. First: consumers don’t know what the heck anything should cost, so we rely on parts of our brains that aren’t strictly quantitative. Second: although humans spend in numbered dollars, we make decisions based on clues and half-thinking that amount to innumeracy.
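The arithmetic in the quoted passage is worth checking for yourself. A quick Python sketch, using the article’s illustrative prices and units:

```python
base_price = 1.00  # dollars for the standard serving
base_qty = 3.0     # quarts, the article's illustrative unit

# Deal 1: 33% extra coffee at the regular price (3 quarts -> 4 quarts).
deal1_per_quart = base_price / (base_qty * 4 / 3)

# Deal 2: 33% off the regular price for the regular amount.
deal2_per_quart = (base_price * 0.67) / base_qty

print(f"33% more coffee: ${deal1_per_quart:.3f} per quart")  # $0.250
print(f"33% off price:   ${deal2_per_quart:.3f} per quart")  # $0.223
```

The general rule: x% extra product is equivalent to a discount of only x/(100 + x), so 33% extra works out to a 25% discount, while a 33% discount matches a 50% increase in quantity.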

This might seem to be just a matter of the public’s math skills, but innumeracy is not the whole story. We are not good at estimating the probability of uncommon events. When general surgeons were asked the probability that a positive mammogram indicated the presence of breast cancer, they should have been experts, yet the estimates they offered were far too high. This goes back to Bayes’ theorem and what happens when the number of true positives is quite small and the pool of true negatives is quite large. The issue has been very controversial in recent years as breast cancer screening is debated in light of the coming changes in American health care.
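To see why even experts go wrong, run the same Bayes calculation with screening numbers. The figures below are illustrative assumptions in the spirit of Gerd Gigerenzer’s well-known surveys of physicians, not data from the study mentioned above:

```python
# Illustrative screening numbers (assumptions): about 1% of screened
# women have breast cancer, the mammogram detects 90% of cancers, and
# 9% of healthy women receive a false positive.
prevalence, sensitivity, false_positive_rate = 0.01, 0.90, 0.09

true_pos = prevalence * sensitivity                 # 0.009 of all screened
false_pos = (1 - prevalence) * false_positive_rate  # 0.0891 of all screened
ppv = true_pos / (true_pos + false_pos)

print(f"P(cancer | positive mammogram) = about {ppv:.0%}")  # roughly 9%
```

With a prevalence of only 1%, even a good test produces mostly false positives, so the true answer is closer to 1 in 10 than to the much higher figures the surgeons guessed.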

