Archive for the ‘medical history’ Category

Neuroscience, the next medical frontier

Monday, December 15th, 2008

When I was a freshman medical student, I spent a summer working in the VA psychiatric hospital in West Los Angeles. While there, I spent many hours talking to chronic schizophrenic patients, some from World War II and one even from World War I. I watched electroshock therapy for psychosis and spent hours listening to the professor there, George Harrington. He was one of the two or three most impressive men I met in medicine. He was convinced that psychosis was an organic disease and had no confidence in psychoanalysis to explain anything about it. I was very interested in psychiatry for a while, but my exposure to other psychiatrists in medical school soon ended my enthusiasm.

Now, neuroscience is one of the most promising areas in medicine. We have increasing evidence of the anatomy of mental illness. Severe obsessive-compulsive disorder can now be treated with a surgical interruption of a feedback loop in the brain. Functional MRI can show differences in the response to stimuli between schizophrenic and non-schizophrenic twins.

Now we are getting to the analysis of normal function. The visual cortex seems to contain a map of the retina. By analyzing the fMRI of the visual cortex in a subject looking at a picture, it has now been possible to reconstruct the image from the fMRI alone. We can look at the brain in a functional way and read what it is seeing.
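The reconstruction idea is easier to grasp with a toy model. The sketch below is entirely my own illustration, not the method of the study described above: it simulates voxels as linear receptive fields over an image and fits a ridge-regression decoder that maps the simulated fMRI response back to pixels. All names and numbers are invented for the example.

```python
import numpy as np

# Toy model: each "voxel" responds linearly to a sparse patch of a
# 16x16 image, loosely mimicking the retinotopic map in visual cortex.
rng = np.random.default_rng(0)
n_pixels, n_voxels = 16 * 16, 400

# Random sparse receptive fields: each voxel weights a handful of pixels.
W = rng.normal(size=(n_voxels, n_pixels)) * (rng.random((n_voxels, n_pixels)) < 0.05)

def record(images):
    """Simulate fMRI responses: linear receptive fields plus noise."""
    return images @ W.T + 0.1 * rng.normal(size=(images.shape[0], n_voxels))

# "Training set" of images the subject viewed while being scanned.
train_imgs = rng.random((500, n_pixels))
train_resp = record(train_imgs)

# Fit a ridge-regression decoder mapping responses back to pixels.
lam = 1.0
A = train_resp.T @ train_resp + lam * np.eye(n_voxels)
decoder = np.linalg.solve(A, train_resp.T @ train_imgs)

# Reconstruct an image the decoder has never seen.
test_img = rng.random((1, n_pixels))
recon = record(test_img) @ decoder
print("pixel correlation:", np.corrcoef(test_img.ravel(), recon.ravel())[0, 1])
```

The point of the sketch is only that a stable mapping from image to brain response can be inverted; the real studies needed far more sophisticated encoding models, but the logic is the same.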

The next step, and it is coming fast, is to create a biological-electronic interface. We already have one: the cochlear implant, which restores hearing by stimulating the auditory nerve directly, bypassing the damaged hair cells of the inner ear. A visual implant would stimulate the optic nerve when the rod and cone cells are lost.

If I were a medical student today, I would be looking very hard at this field. When I was a medical student 46 years ago, I decided that the sciences of the brain and the immune system were too primitive at the time to have any application to clinical work. I decided that, if I wanted to go into research, I would be better off as a physical chemist. That was true then but is no longer true.

Michael DeBakey dies at 99

Saturday, July 12th, 2008

–Houston Chronicle, ‘Dr. Michael DeBakey: 1908-2008 — ‘Greatest surgeon of the 20th century’ dies’: ‘Dr. Michael Ellis DeBakey, internationally acclaimed as the father of modern cardiovascular surgery – and considered by many to be the greatest surgeon ever – died Friday night at The Methodist Hospital in Houston. He was 99. Methodist officials said DeBakey died of natural causes. They gave no additional details.

‘Medical statesman, chancellor emeritus of Baylor College of Medicine, and a surgeon at The Methodist Hospital since 1949, DeBakey trained thousands of surgeons over several generations, achieving legendary status decades before his death. During his career, he estimated he had performed more than 60,000 operations. His patients included the famous – Russian President Boris Yeltsin and movie actress Marlene Dietrich among them – and the uncelebrated.’

DeBakey was an amazing pioneer in surgery. In 1938, he and his mentor, Alton Ochsner, published an article on the drainage of subphrenic abscess, a surgical plague in the days before antibiotics. Patients with perforated appendicitis were kept in the hospital for weeks in “Fowler’s position,” to avoid the dreaded complication of subphrenic abscess. Fowler’s position, inadequately explained in that Wikipedia article, was a seated position in bed to allow pus to drain into the pelvis, where it could be drained through the rectum or, in females, through the vagina. Before DeBakey’s and Ochsner’s article, the approach to a subphrenic abscess, above the liver and below the diaphragm, was extremely dangerous. The abscess was also common, because lying flat in bed tended to allow pus to flow up to the space above the liver. They found that the space could be drained via an approach through the 11th rib. It was simple and effective in the days before antibiotics made subphrenic abscess rare. He should be famous for that alone. The rest of his career will be covered extensively by others, but I fear his first great contribution may be ignored. He was a great man.

Tuskegee

Sunday, May 4th, 2008

There is so much horse shit being put out about the Tuskegee Study, most recently this weekend, that it is time to add a few facts. Syphilis was a great scourge, brought to Europe from the Americas by Columbus’s crew when they returned. It was ferocious when the epidemic was still new. By 1600, one third of Paris was infected. With time, as with all infectious diseases, the virulence declined and the manifestations became less horrible, but it remained a common and serious disease.

The first successful treatment was with mercury, first described by Paracelsus, who cured nine syphilitics with it in 1530. He also provided the first accurate description of the disease and its manifestations. For centuries after, it was said, “One night with Venus may lead to a life with Mercury.” The treatment was onerous and needed to be repeated periodically for life. The discovery of mercurial diuretics in the 1920s came about accidentally through the treatment of syphilis cases with heart failure from syphilitic heart lesions. When I was a medical student, the only powerful diuretics we had were still mercurials.

In 1905, Paul Ehrlich was searching for a chemical cure for syphilis when he stumbled upon the use of organic arsenic compounds. By 1910, he announced the new drug, compound 606, or Salvarsan. This was more effective than mercury and moderately less toxic, but it was not the “magic bullet” he had been searching for. Of course, we currently have hysteria over the tiny doses of mercury used in vaccines to prevent bacterial contamination.

In 1932, the Public Health Service began a study of Negro males who were infected with syphilis. No one was “given” syphilis. This Wikipedia entry, while somewhat biased in tone, gets the facts right in the beginning. The group of subjects was divided into those with early signs, such as genital lesions, and those who were in what is called the “latent phase.” Those with early signs were treated with arsenicals. There was no evidence that latent-phase syphilis was treatable.

Instead, we get this sort of thing:
And, you know, you can explain them, as he explained, for instance, the idea that the government in fact would infect blacks with AIDS, by saying, well, remember Tuskegee, when the government actually did infect blacks with syphilis. He does come from a different era, a different age. And so the way he presents himself is very different.

from Sally Quinn of the Washington Post, who should know better but probably doesn’t do science.

The discovery and manufacture of penicillin came about in the 1940s, and by 1950 there were serious questions about whether treatment should have been offered to those men. The treatment of tertiary syphilis, especially neurosyphilis, requires very high doses of penicillin, doses that were not available until after 1950.
Penicillin remains the treatment of choice for all stages of syphilis, although it penetrates the blood brain barrier poorly. Treatment with intramuscular benzathine penicillin 2.4 million units stat, or 600,000 units procaine penicillin daily does not produce treponemicidal levels within the CSF. However, the incidence of neurosyphilis is low in immunocompetent patients treated with such regimens during early syphilis.

In late syphilis, it is the policy to treat everyone.

Does penicillin cure tertiary syphilis? Sometimes.

Should the “Tuskegee Boys” have been offered penicillin in 1950 and after? Yes.

Would it have made a difference? I don’t know.

I do know that Reverend Wright and Sally Quinn are ignoramuses, although he may actually know better.

The second era of bacteriology

Wednesday, March 26th, 2008

The history of medicine, as distinct from surgery, took giant steps in the 19th century when bacteria were identified and then linked to human illness. Surgery had been able to treat battle wounds for centuries although bacteriology would also lead to major advances there. For the medical doctor, however, there was little that could be accomplished for the sick prior to Louis Pasteur. Medicine in that era was concerned with diagnosis and prognosis, a significant benefit if accurate, which it sometimes was. Treatment was more harmful than effective.

William Withering had introduced the first effective medicine, digitalis, in 1785.

Paracelsus had discovered in the 16th century that mercury would inhibit syphilis, but that was the only effective medical treatment before Withering’s. It was said, in an era when syphilis was endemic, that “A night with Venus leads to a lifetime with Mercury,” as the treatment required continuous use to be effective. There would be no other treatment for syphilis until the 20th century.

Edward Jenner discovered the ability of cowpox infection to prevent the far more dangerous infection of smallpox. These few pioneers were bright supernovae in a dark universe of ignorance. Infectious diseases were the most common cause of death prior to the 20th century.

Louis Pasteur

Louis Pasteur was a chemist who first recognized that living organisms were responsible for such phenomena as the fermentation of wine and the souring of milk. His research opened an age of bacteriology that would last for the next 50 years.

Robert Koch

Robert Koch was a German physician who learned to grow bacteria in cultures that could be purified and subcultured. He established the principles of infection by a specific organism. Pasteur grew bacteria in liquid medium, which did not lend itself to purifying cultures; Koch began the use of solid medium, and his assistant invented the Petri dish. Koch also discovered the organism that causes cholera, which lives in the human intestine and is transmitted in water supplies contaminated by fecal material.

John Snow

John Snow, the founder of epidemiology (along with Florence Nightingale), had identified the connection of cholera to water supplies in 1854, but he could not go further because bacteria had not yet been discovered.

The microscope, especially after improvements by Joseph Jackson Lister, allowed these men to see the bacteria in wounds, diseased organs and rotting flesh. Lister’s son would add the first great step in treating these diseases.

Joseph Lister

Joseph Lister, the son, was an orthopedic surgeon who learned to prevent infection by applying carbolic acid to compound fracture wounds after the fracture had been reduced. Lister was still somewhat vague about the organisms he was treating because they were poorly visualized. In fact, that lack of proof caused great resistance to his innovation.

Hans Christian Gram

In 1884, Hans Christian Gram discovered that some bacteria stain blue with crystal violet and that this characteristic is related to other features of the organism. Bacteriologists now had a powerful new tool, called Gram staining.

The era of the bacteriologist reached its pinnacle when Koch described the tuberculosis organism in 1882, proving that “consumption” was an infection, and Pasteur was able to prevent rabies with a vaccine. Unfortunately, Koch’s career ended with a bit of farce when he announced a cure for tuberculosis that was, in fact, no such thing. He fled with a girlfriend to Egypt, proving there is nothing new under the sun. His other innovations survived.

Domagk

Vaccines would dominate medicine until the discovery of antibiotics, first by Domagk, who discovered the sulfa drugs in 1935. A German physician, he was not permitted by Hitler to accept the Nobel Prize and received it only after the war.

Alexander Fleming

Even before Domagk’s discovery, in 1928, Alexander Fleming had discovered penicillin, but he did not pursue it beyond a few tentative attempts at treatment.

Howard Florey

Ten years later, Howard Florey, an Australian physician at Oxford, resumed the study of penicillin, with the result that infectious diseases caused by bacteria would recede into a secondary role in medicine. Other antibiotics were discovered, and new ones continue to be synthesized. Cancer and other degenerative diseases became the most common causes of death.

The New Era

Carl Woese

In 1977, a microbiologist named Carl Woese proposed a new kingdom of biology, called Archaea, and met considerable resistance at first. These organisms are also called extremophiles, as they were often found in extreme environments, such as steam vents on the ocean floor or national park geysers, at temperatures far above boiling. Bacteria and most other forms of life cannot exist there because proteins denature at temperatures well below those found in such environments. However, it was soon found that these organisms are widely distributed and some are quite common, such as Methanococcus, which makes swamp gas by metabolizing rotting vegetation into methane. Some varieties are even found in the gut of cows.

The genomes of over 50 varieties have now been sequenced, and similarities with higher life forms have been found, placing them between the bacteria and higher forms. They may well represent the first life forms, and similar organisms may yet be found on other planets. Since some of these organisms are capable of synthesizing carbon chains, like those in oil, part of the answer to the energy crisis may be found here. Some of them are capable of scrubbing CO2 from the exhaust of coal-burning power plants. Some are capable of making methane (natural gas) from coal without burning it at all, and this may even be possible without digging up the coal. For example, it is now known that Archaea are still making methane in abandoned coal mines. This creates danger for anyone entering these old mines but may provide a source of natural gas from residual coal that was left behind. In the future, it may be possible to inject coal deposits with cultures of Archaea and collect the gas without ever digging a mine or stripping the surface layers above the coal.

The possibility of processing nuclear waste should not be ignored. Some of these organisms survive in high-radiation environments, and other Archaea are capable of generating electricity in fuel cells.

The future is with biotechnology, and the limits are not yet visible. We are entering the second Age of Bacteriology.

Lies in the service of policy

Tuesday, March 11th, 2008

Politics has always been infested with lies. As it becomes more important in our daily lives, those lies become more significant. Woodrow Wilson said he would keep us out of war. He lied, although there is some possibility that he believed it when he said it. Roosevelt said something similar, but there is no chance that he believed what he was saying. A few years ago, the issue of the minimum wage was influenced by a published report which purported to prove that raising minimum wages, contrary to economic theory, would not increase unemployment for low-income workers. The study was deeply flawed, but it has remained a popular basis for those who wish to justify the policy of raising the minimum wage.

Now, the major domestic issue that influences public policy is immigration. Sure enough, a new study has appeared that purports to show that illegal immigration raises average wages for the native-born poor. Once again, it has been shown that the study in question is bogus.

I’ve always been a little skeptical of the Ottaviano-Peri evidence. A couple of years ago, Jeff Grogger, Gordon Hanson, and I worked on a paper that examined the link between immigration and African-American economic status. As a by-product of that work, we explicitly attempted to replicate the Ottaviano-Peri finding–but couldn’t. Since then, we’ve been quite interested in trying to see what explains the discrepancy between our evidence and theirs.

Then they found why the discrepancy existed. The other authors had doctored the data.

The Ottaviano and Peri data includes currently enrolled high school juniors and seniors. They classify these high school juniors and seniors as part of the “high school dropout” workforce. Their finding of immigrant-native complementarity disappears if the analysis excludes these high school juniors and seniors.

Things that seem too good to be true usually aren’t.

This is not a new phenomenon. I saw something very similar in surgery 30 years ago. At one time, there was a flurry of interest in what was called “the no-touch technique” in colon cancer surgery. The principal author was George Crile Jr., often known as “Barney” Crile. His father had founded the Cleveland Clinic and was a famous pioneer surgeon. The son had ambitions to emulate his famous father and had become a senior surgeon in the clinic his father founded. He published the “no-touch technique” study when I was a resident in surgery, and we all immediately adopted the method, as Crile’s study suggested a significant improvement in patient survival. Years after it was shown to be a fraud, it is still being studied. It is difficult to find the original paper anymore, but it is still referred to proudly in Cleveland Clinic literature. In that account, Rupert Turnbull is credited with the development of the technique, which involved isolating and ligating the veins from the colon before the tumor-bearing area was touched or dissected. It made sense logically: tumor cells were thought to flow in the venous blood to the liver, where they lodged and became metastases. By ligating the veins first, tumor cells disturbed by manipulating the tumor would not escape and flow to the liver. Every surgeon who did colon cancer surgery adopted it.

A few years later, I attended the GI cancer postgraduate course at the American College of Surgeons annual meeting. One of the items on the program was a study of the effect of injecting 5-FU, a chemotherapy drug, into the colon before removing the tumor. The theory here was that the chemotherapy drug would flow, in the same distribution of portal vein blood as the cancer cells, toward the liver. It was a reasonable premise but the study produced one of the most dramatic scenes I have ever witnessed in a medical meeting.

The senior author was describing the 5-FU study and pointed out that the control group for his study was treated the same way as the control group of the “no-touch” study: the veins were not ligated until the colon and tumor had been completely dissected. Any tumor cells that broke off and flowed to the liver should have made the control group’s results worse than those of the no-touch treatment group and similar to those of the control group in Crile’s study. In fact, that did not happen. The control group of the 5-FU study did as well at five years as the treated group of the no-touch study, and the control group of the no-touch study had a significantly lower survival than any of the three other groups. Why?

The senior author of the 5-FU study answered the question for all of us right then and there. He had contacted the Cleveland Clinic statistician to learn why the results were so different, and he finally figured out what had happened.

All medical studies that involve time-survival statistics use what are called “life tables.” These are usually generated by actuaries for life insurance companies. Over five years, a certain percentage of people will die of various causes, and the percentage who die is based on their age, sex and the other factors that these tables consider. Any medical study that considers survival over five years or longer must use these tables to be statistically valid. Some people will die from causes unrelated to the treated condition, and these must be allowed for. You have to correct your results for the normal death rates, or you will show more deaths in the treated group (and control group) than can be attributed to the disease you are studying. The 5-FU study author had learned that Crile, who had written the “no-touch” paper, had applied the life-table correction to the treated group in his study (thus improving its apparent survival) but not to the control group. This is not poor statistical method; it is lying. He twisted the data to make his study look like progress in cancer treatment. In fact, there was no benefit to the early ligation of the veins. Cancer is not affected by those theoretical considerations, probably because host resistance is far more important.
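The arithmetic of the life-table correction, and of the manipulation just described, can be shown in a few lines. This is a minimal sketch with invented numbers, not Crile’s actual data:

```python
# Relative survival = observed survival divided by the survival expected
# from an actuarial life table for an age- and sex-matched population.
# All numbers below are invented for illustration; they are not Crile's data.

expected_5yr = 0.85        # life-table 5-year survival for a matched cohort

observed_treated = 0.50    # raw 5-year survival, "no-touch" arm
observed_control = 0.50    # raw 5-year survival, control arm (identical)

# Honest method: apply the same correction to both arms.
rel_treated = observed_treated / expected_5yr
rel_control = observed_control / expected_5yr

# The manipulation described above: correct only the treated arm.
biased_treated = observed_treated / expected_5yr
biased_control = observed_control          # left uncorrected

print(f"honest comparison: {rel_treated:.3f} vs {rel_control:.3f}")   # no difference
print(f"biased comparison: {biased_treated:.3f} vs {biased_control:.3f}")
# Identical raw survival now looks like a large advantage for "no-touch".
```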

Rupert Turnbull, a justifiably famous colon and rectal surgeon, was in the audience at that conference, and the author of the 5-FU paper invited him to comment. Turnbull declined, saying that they would have to “ask Dr. Crile about methods.” Crile was not there and nothing further was said, but the tension was tremendous. Turnbull was, no doubt, humiliated, but everybody knew about Barney Crile and his obsession to surpass his father. There were questions about his earlier work on breast cancer and the validity of his papers on that subject. Ironically, his son, a journalist and the author of “Charlie Wilson’s War,” would become more famous. Also ironic is the fact that CBS was sued for libel by General Westmoreland over a George Crile III documentary on Vietnam (the suit was eventually dropped). Maybe that’s another family tradition: manipulating data.

Isn’t it interesting that the “no-touch” technique is still being promoted as a science breakthrough 30 years after the study was shown to be a fraud? I suspect that few people who were not at that American College of Surgeons meeting are aware of what happened. I suspect the other fraudulent studies will be influencing public policy years from now, as well.

I have been called a cynic.

Health Reform IV

Tuesday, February 19th, 2008

This is not going to be the full story, as I just don’t have the time right now, but there are a few items that I will try to add as I go along. The fee-for-service system served us well until about 1950. Paul Starr’s book, The Social Transformation of American Medicine, does a good job of telling the history. We must remember that doctors were unable to do much to influence the course of disease until 1900. Surgery came ahead of medicine here and, by 1867, a surgeon did more good than harm in most cases. That was the year that Lord Lister published his revolutionary paper on the prevention of infection. In 1905, John B. Murphy published a chapter in W.W. Keen’s American Textbook of Surgery that defined the condition of acute appendicitis and explained how to make the early diagnosis.

I had a patient one time whose appendix had been removed in 1905 when he was 15. He complained to me about the appearance of the scar. I told him to just be grateful for a surgeon in Denver (where he lived at the time) who knew enough to get him through.

By 1945, antibiotics, first the sulfa drugs, then penicillin, had cut the mortality rate of pneumonia from 30% to 5% in England. By 1948, streptomycin, discovered in Waksman’s laboratory, had been shown to cure tuberculosis. That was as great a triumph as that of Fleming and Florey, who discovered and purified penicillin. Consumption (tuberculosis) was the great scourge of civilization, dating back to the invention of agriculture. Now medicine could really do something, and the value of medical care, as opposed to health care, was rising.

When I was a medical student in 1962, the coronary care unit was not a feature of medicine. The Massachusetts General Hospital did not have a surgical ICU. There were no total hip replacements and no coronary bypass operations. In 1967, one of the great, and largely unknown, heroes of medical care, Rene Favaloro, performed the first coronary bypass surgery in man. Five years later, when I was a cardiac surgery trainee, it was becoming a common operation, and ten years later there were 70,000 performed in the US. By 2002, there were 657,000 CABG procedures in the US, in spite of the fact that an alternative procedure, angioplasty and stenting, had appeared. Technology was racing ahead of any attempt to control it. CABG made people live longer. Total hip replacement, followed by total knee replacement, made life more enjoyable. The level of medical care intensified. Cost quickly followed.

Fee-for-service medicine had a fatal flaw once it was combined with health insurance. Health insurance appeared during the 1930s. The beginning came in Dallas in 1929, when the schoolteachers contracted with Baylor University Hospital to provide hospital care for their members. In the 1930s, California doctors formed a group plan to provide medical care in return for a monthly fee of two dollars. The hospital associations had already followed the Dallas initiative and formed Blue Cross. Both of these programs were responses to the Depression, when people had less money to spend on health care and the concept of insurance became more attractive. When combined with fee-for-service, a serious problem resulted.

In the new system, the patient was not responsible for the cost of his own health care. The doctor had a relationship with the patient but, until about 1978, the insurance companies were passive partners. This was true for several reasons. One, Blue Cross was owned by the hospitals and Blue Shield was owned by the medical associations. They were non-profit corporations, different in each state, and the boards of directors were dominated by providers, doctors or hospitals. Large insurance companies had also entered the business of health care in the late 1940s but they dealt with large corporations that bought coverage for their own employees, or with unions that had coverage for members, paid by employers. The employers and union officers were powerful as customers. For years, little scrutiny was devoted to the details of the care provided and increased costs were handled by increasing premiums. High technology and an aging population would shatter this complacency.

The advent of Medicare in 1965 brought a new player to the table. Lyndon Johnson, in order to assuage the fears of doctors about government medicine, adopted the solution of Aneurin Bevan, who wrote the legislation for the National Health Service in England. Bevan said, “I stuffed their mouths with gold,” referring to the objections of hospital specialists to the NHS. That link, by the way, has an excellent summary of the issue of single-payer health care. Johnson followed Bevan’s example and made physician compensation generous. That would not last, and it aggravated the problems, but initially everybody was happy with Medicare. By 1978, that impression no longer applied to the government, which was seeing double-digit inflation everywhere, including health care.

In 1978, a new program called Professional Standards Review Organizations, or PSROs, appeared, and the government was funding what were called “Peer Review Organizations” to oversee Medicare. They were everywhere advertised as concerned with quality, but quality was always measured by cost, so physicians were completely cynical about their focus. We were obliged to participate, and we quickly noticed that these organizations attracted many critics of fee-for-service medicine. Some had failed to find success in caring for patients and sought bureaucratic positions instead; some were idealists; and some were political zealots.

The honeymoon was over.

More to follow.

Previous posts on this topic are under “Health Reform” in the right side column.

Another death on the ACLU’s conscience

Saturday, February 16th, 2008

The deinstitutionalization of the mentally ill in the 1970s followed directly from the ACLU lawsuits against commitment of the mentally ill. This followed the movie “One Flew Over the Cuckoo’s Nest.” That is a damn poor way of making public policy, but that is what we have. Now we have one more murder to chalk up to the ACLU. Here is another such example. Mental health professionals worry about the effects. Still, nothing is done. The legal situation is chaotic. But still people, psychotic patients and their victims, continue to die.

A loss of history-updated

Monday, February 4th, 2008

ANOTHER UPDATE: No wonder the British teenagers don’t know any history. They are listening to the BBC.

I previously posted a bit about the loss of history in school curricula. I don’t expect much of American public schools anymore, but Britain has a much longer history, and I have found much more interest in subjects such as medical history in Britain than in the USA. That may be changing, as British teenagers increasingly believe that historical figures are fictional and vice versa.

Despite his celebrated military reputation, 47 per cent of respondents dismissed the 12th-century crusading English king Richard the Lionheart as fictional.

More than a quarter (27 per cent) thought Florence Nightingale, the pioneering nurse who coaxed injured soldiers back to health in the Crimean War, was a mythical figure.

In contrast, a series of fictitious characters that have featured in British films and literature over the past few centuries were awarded real-life status.

King Arthur is the mythical figure most commonly mistaken for fact – almost two thirds of teens (65 per cent) believe that he existed and led a round table of knights at Camelot.

Twenty percent of British teens believed that Winston Churchill was a fictional character.

On the medical front, female Muslim medical students are refusing to scrub their forearms because of “modesty rules.”

Minutes of a clinical academics’ meeting at Liverpool University revealed that female Muslim students at Alder Hey children’s hospital had objected to rolling up their sleeves to wear gowns.

Similar concerns have been raised at Leicester University. Minutes from a medical school committee said that “a number of Muslim females had difficulty in complying with the procedures to roll up sleeves to the elbow for appropriate handwashing”.

No doubt Allah will prevent MRSA infections.

Thanks to Eric Blair for the tip to that story.

Is Evidence-Based Medicine Socialized Medicine?

Tuesday, December 18th, 2007

Today, Glenn Reynolds of Instapundit linked to a rather heated denunciation of Evidence-Based Medicine. The term, vigorously debated in medicine, may not be familiar to those not part of the industry. The definition involves two major issues. One is the medical literature and what is called a randomized trial. This involves a new, or occasionally a standard, form of treatment. The question to be answered is whether the tested form of treatment is better than the control form, which may be no treatment at all. Ideally, neither the treating physicians (or institutions) nor the patients being treated should know whether the treatment being given is the test version or the control; the mechanics are sketched in the code below. Obviously, this is easier to do with pills. For ethical reasons, it is difficult to do with surgery, although a very few such trials have occurred. Arthroscopy of joints involves very small incisions, and a few sham-operation trials have been conducted to test the effect of arthroscopic surgery. Those studies are very controversial. Another alternative is the use of randomized trials comparing surgery against non-surgical treatment; the problem here is that it is obvious to everybody concerned who got the surgery and who didn’t. One such study compared surgery on the medial meniscus of the knee (the rubbery cushion in the knee that is subject to tears) to simple exercise therapy. The results? According to the outcome scores, arthroscopic partial medial meniscectomy combined with exercise did not lead to greater improvement than exercise alone.
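For readers outside medicine, here is a hypothetical illustration of permuted-block randomization with blinded kit codes; it is a sketch of the general idea, not the protocol of any particular trial:

```python
import random

random.seed(42)  # reproducible for the example

def allocation_list(n_blocks, block_size=4):
    """Permuted-block randomization: shuffle 'A'/'B' within small blocks
    so the two arms stay balanced as patients enroll over time."""
    arms = []
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        random.shuffle(block)
        arms.extend(block)
    return arms

arms = allocation_list(n_blocks=5)

# Blinding: patients and clinicians see only a kit code; the code-to-arm
# key stays with the statistician until the trial is unblinded.
key = {f"KIT-{i:03d}": arm for i, arm in enumerate(arms, start=1)}

print(list(key)[:4])                      # what the clinic sees
print(arms.count("A"), arms.count("B"))   # balance check: 10 and 10
```

With pills, both arms receive identical-looking kits; with surgery, as noted above, no kit code can hide who was operated on, which is why sham-operation trials are so rare and so controversial.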

This brings up the concept of “Outcomes Research.” The difference between outcomes research and the standard clinical research we have done for 100 years is in what is measured. Most journal articles on clinical research list the results as mortality (death) or morbidity (poor health) or, if cancer is the topic, cure. In studies of non-fatal conditions, such as low back pain or knee arthritis, such results can be misleading. Dartmouth Medical School has studied benign prostatic hypertrophy (known as BPH) for 30 years. The results of surgery, removal of the prostate, cannot be assessed by measuring the mortality rate, as the mortality of that operation is very low. The cure rate is also misleading because removing the prostate “cures” the condition but often leaves a series of complications, major and minor, in its wake. Outcomes research uses survey methods to determine how the treatment affects the patient’s quality of life. One such survey is called the “SF-36.” If a man is cured of BPH but dribbles urine all day, the improvement from the pre-treatment condition may be minor or even negative. Similar studies have been done with low back pain. The reader will note that many of these studies come from countries with government-funded health care programs. Some of that is because it is easier to follow patients for a long time in such systems, with a uniform record and a single source of data. In a distributed system like the US, there may be difficulty tracking subsequent care, a major consideration for this type of study. We want to know how the patient is doing five years later. In the US, people change employer and/or insurance carrier every three years on average. Regrettably, insurance companies usually do not share data.

Take the example of spine fusion for back pain, a common procedure. For many years, the literature on such procedures looked like this: a small number of patients, followed only in retrospect and with no control for possible bias. Surgeons like to operate, and they rarely report bad results. A series of 67 patients reported by a malpractice lawyer might look very different.

The Patient Outcomes Research Team approach (or PORT) looks like this. Other studies have shown that spine fusion done for back pain alone has a 95% failure rate when residual pain is the metric. An awful lot of spinal fusion surgery is done in this country every year, billions of dollars worth. Is it all useless? There are “Guidelines” for what will produce results worth the cost and risk. How are these derived?

Guidelines are of several types. Some are established by the government. How do they decide what will be included? The best guidelines are based on randomized trials; those are few. Many are based on the PORT method, in which common conditions are studied over years using every system of data collection available. Prospective trials, which are randomized, are the best in surgical cases, where it is obvious who got what treatment, but they are difficult to do. Patients may refuse to be included because they, or their doctor, are convinced one type of treatment is best. This is where fear of socialized medicine is most concentrated: critics fear that the guidelines are based on cost, not efficacy. I might add that insurance companies often resist outcomes research because they fear that optimal treatment may be more expensive than what is commonly done now. There is always a lot of fear when changes come.

The least useful guidelines, and the most common, are called “Consensus Guidelines.” These are derived from committee meetings in which a group of experts concludes what the best treatment should be. Most of the time, the experts are using a lifetime of experience and a thorough knowledge of the medical literature to come to their conclusions. Bias, however, is not excluded by this process, and the guidelines are often muddied with second-guessing and reluctance to challenge colleagues who may be out of date. If all doctors kept up to date on medical progress, such guidelines would probably be unnecessary. As it is, they are better than nothing. Evidence-Based Medicine, then, consists of trying to use “best practices” when they can be identified. In many instances, the art of medicine still remains the better indicator of what should be done. Doctors need to listen to their patients, and they may find that explanation will cure something that surgery would only aggravate. I see this every week when I make hospital rounds with medical students. The students have much less information than the older resident physicians, but they have time and interest, and patients may respond to that when the science has failed. It is important to know the difference.

God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.

Medical History

Friday, November 16th, 2007

One of the three reasons for starting this blog is medical history. So, let’s have a little history.

Lecture Announcement

I am giving a lecture the Monday after Thanksgiving about the medical history of the Civil War. This is an interesting period because anesthesia had been in use for 14 years when the war began, but the concept of antisepsis would appear only two years after its end. At the beginning of the war, the US Army was tiny and scattered, and the medical corps was in the hands of incompetents. For example, to save funds, they had provided no textbooks. As the war began, Samuel Gross, a professor of surgery in Philadelphia, wrote his own textbook of military medicine in nine days. It was copied by the Confederates in 1863 and became their medical “bible” as well.

The Gross Clinic

This painting of Gross in his surgery clinic is one of the most famous American medical paintings. The procedure was the drainage of an infected femur. Surgery in America, or anywhere else, in 1860 was very basic, and few sophisticated procedures were even contemplated. You might also note the absence of any attempt at antiseptic practice. There are no gowns or gloves, and the idea that contamination of the wound was a hazard was unknown.

The military surgery manual

The textbook was very basic, chiefly concerned with how to do amputations. In 1860, a compound fracture, that is, a fracture in which the bone ends are exposed to the outside, either because they have penetrated the flesh or because a bullet or other missile caused the fracture, was usually fatal without amputation. There was no understanding of infection. Even in 1867, when Lister published his famous first report on antisepsis, he did not know why infection occurred. It would be 20 years before infection was known to be due to microscopic organisms called bacteria.