ACHRE Report

Part I

Chapter 2

Introduction

The American Expert, the American Medical Association, and the Nuremberg Medical Trial

The "Real World" of Human Experimentation

Nuremberg and Research with Patients

American Medical Researchers' Reactions to News of the Nuremberg Medical Trial

New Times, New Codes

Conclusion

Chapter 2: The "Real World" of Human Experimentation

It would be historically irresponsible, however, to rely solely on records related directly to the Nuremberg Medical Trial in evaluating the postwar scene in American medical research. The panorama of American thought and practice in human experimentation was considerably more complex than Ivy acknowledged on the witness stand in Nuremberg. In general, it does seem that most American medical scientists probably sought to approximate the practices suggested in the Nuremberg Code and the AMA principles when working with "healthy volunteers." Indeed, a subtle, yet pervasive, indication of the recognition during this period that consent should be obtained from healthy subjects was the widespread use of the term volunteer to describe such research participants. Yet, as Advisory Committee member Susan Lederer has recently pointed out, the use of the word volunteer cannot always be taken as an indication that researchers intended to use subjects who had knowingly and freely agreed to participate in an experiment; it seems that researchers sometimes used volunteer as a synonym for research subject, with no special meaning intended regarding the decision of the participants to join in an experiment.[21]

Even with this ambiguity it is, however, quite clear that a strong tradition of consent has existed in research with healthy subjects, research that generally offered no prospect of medical benefit to the participant. In the United States much of this tradition has rested on the well-known example of Walter Reed's turn-of-the-century experiments, when he employed informed volunteers to establish the mosquito as the vector of transmission for yellow fever.[22] Indeed, it seems that a tradition of research with consenting subjects has been particularly strong among Reed's military descendants in the field of infectious disease research (which has frequently required the use of healthy subjects). For example, Dr. Theodore Woodward, a physician-researcher commissioned in the Army, conducted vaccine research during the 1950s with healthy subjects under the auspices of the Armed Forces Epidemiological Board. In a recent interview conducted by the Advisory Committee, Woodward recalled that the risks of exposure to diseases such as typhus were always fully disclosed to potential healthy subjects and that their consent was obtained. Since some of these studies were conducted in other countries with non-English-speakers, the disclosure was given in the volunteer's language.[23] Of his own values during this time, Woodward stated: "If I gave someone something that could make them sick or kill them and hadn't told them, I'm a murderer."[24] Similarly, Dr. John Arnold, a physician who conducted Army-sponsored malaria research on prisoners from the late 1940s through the mid-1950s, recalled that he always obtained written permission from his subjects.[25]

Not all the evidence on consent and healthy subjects comes from the military tradition. A particularly compelling general characterization of research with "normal volunteers" during this period comes from the "Analytic Summary" of a conference on the "Concept of Consent in Clinical Research," which the Law-Medicine Research Institute (LMRI) of Boston University convened on April 29, 1961. At this conference, twenty-one researchers from universities, hospitals, and pharmaceutical companies across the country were brought together "to explore problems arising from the legal and ethical requirements of informed consent of research subjects."[26] The LMRI project was what one might now call a fact-finding mission; the LMRI staff was attempting "to define and to analyze the actual patterns of administrative practice governing the conduct of clinical research in the United States" during the early 1960s.[27] Anne S. Harris, an LMRI staff member and author of the conference's final report, offered a simple but significant assessment of the handling of healthy participants in nontherapeutic research as expressed by the researchers at the meeting, whose careers included the decade and a half since the end of World War II: "The conferees indicated that normal subjects are usually fully informed."[28]

Even so, researchers who almost certainly knew better sometimes employed unconsenting healthy subjects in research that offered them no medical benefits. For example, Dr. Louis Lasagna, who has since become a respected authority on bioethics, stated in an interview conducted by the Advisory Committee that between 1952 and 1954, when he was a research fellow at Harvard Medical School, he helped carry out secret, Army-sponsored experiments in which hallucinogens were administered to healthy subjects without their full knowledge or consent:

The idea was that we were supposed to give hallucinogens or possible hallucinogens to healthy volunteers and see if we could worm out of them secret information. And it went like this: a volunteer would be told, 'Now we're going to ask you a lot of questions, but under no circumstances tell us your mother's maiden name or your social security number,' I forget what. I refused to participate in this because it was so mindless that a psychologist did the interviewing and then we'd give them a drug and ask them a number of questions and sure enough, one of the questions was 'What is your mother's maiden name?' Well, it was laughable in retrospect . . . [The subjects] weren't informed about anything [emphasis added].[29]

Lasagna, reflecting "not with pride" on the episode, offered the following explanation: "It wasn't that we were Nazis and said, 'If we ask for consent we lose our subjects,' it was just that we were so ethically insensitive that it never occurred to us that you ought to level with people that they were in an experiment."[30] This might have been true for Lasagna the young research fellow, but the explanation is harder to understand for the director of the research project, Henry Beecher. Beecher was a Harvard anesthesiologist who, as we will see later in this chapter and in chapter 3, would emerge as an important figure in biomedical research and ethics during the mid-1960s.[31]

If American researchers experimenting on healthy subjects sometimes did not strive to follow the standards enunciated at Nuremberg, research practices with sick patients seem even more problematic in retrospect. Advisory Committee member Jay Katz has recently argued that this type of research still gives rise to ethical difficulties for physicians engaged in research with patients, and he has offered an explanation: "In conflating clinical trials and therapy, as well as patients and subjects, as if both were one and the same, physician-investigators unwittingly become double agents with conflicting loyalties."[32]

It is likely that such confusion and conflict would have been at least as troublesome several decades ago as it is today, if not more so. The immediate postwar period was a time of vast expansion and change in American medical science (see Introduction). Clinical research was emerging as a new and prestigious career possibility for a growing number of medical school graduates. Most of these young clinical researchers almost certainly would have absorbed in their early training a paternalistic approach to medical practice that was not seriously challenged until the 1970s. This approach encouraged physicians to take the responsibility for determining what was in the best interest of their patients and to act accordingly. The general public allowed physicians to act with great authority in assuming this responsibility because of an implicit trust that doctors were guided in their actions by a desire to help their patients.

This paternalistic approach to medical practice can be traced to the Hippocratic admonition: "to help, or at least do no harm."[33] Another long-standing medical tradition that can be found in Hippocratic medicine is the belief that each patient poses a unique medical problem calling for a creative solution. Creativity in the treatment of individuals, which was not commonly thought of as requiring consent, could be--and often was--called experimentation. This tradition of medical tinkering without explicit and informed consent from a patient was intended to achieve proper treatment for an individual's ailments; but it seems also to have served (often unconsciously) as a justification for some researchers who engaged in large-scale clinical research projects without particular concern for consent from patients.

Members of the medical profession and the American public have today come to better understand the intellectual and institutional distinctions between organized medical research and standard medical practice. There were significant differences between research and practice in the 1950s, but these differences were harder to recognize because they were relatively new. For example, randomized, controlled, double-blind trials of drugs, which have brought so much benefit to medical practice by greatly decreasing bias in the testing of new medicines, were introduced in the 1950s. The postwar period also brought an unprecedented expansion of universities and research institutes. Many more physicians than ever before were no longer solely concerned, or even primarily concerned, with aiding individual patients. These medical scientists instead set their sights on goals they deemed more important: expanding basic knowledge of the natural world, curing a dread disease (for the benefit of many, not one), and in some cases, helping to defend the nation against foreign aggressors. At the same time, this new breed of clinical researchers was motivated by more pragmatic concerns, such as getting published and moving up the academic career ladder. But these differences between medical practice and medical science, which seem relatively clear in retrospect, were not necessarily easy to recognize at the time. And coming to terms with these differences was not especially convenient for researchers; using readily available patients as "clinical material" was an expedient solution to a need for human subjects.

As difficult and inconvenient as it might have been for researchers in the boom years of American medical science following World War II to confront the fundamental differences between therapeutic and nontherapeutic relationships with other human beings, it was not impossible. Otto E. Guttentag, a physician at the University of California School of Medicine in San Francisco, directly addressed these issues in a 1953 Science magazine article. Guttentag's article, and three others that appeared with it, originated as presentations in a symposium held in 1951 on "The Problem of Experimentation on Human Beings" at Guttentag's home institution. Guttentag constructed his paper around a comparison between the traditional role of the physician as healer and the relatively new role of physician as medical researcher. Guttentag referred to the former as "physician-friend" and the latter as "physician-experimenter." He explicitly laid out the manner in which medical research could conflict with the traditional doctor-patient relationship:

Historically, . . . one human being is in distress, in need, crying for help; and another fellow human being is concerned and wants to help and the desire for it precipitates the relationship. Here both the healthy and the sick persons are . . . fellow-companions, partners to conquer a common enemy who has overwhelmed one of them. . . . Objective experimentation to confirm or disprove some doubtful or suggested biological generalization is foreign to this relationship . . . for it would involve taking advantage of the patient's cry for help, and of his insecurity.[34]

Guttentag worried that a "physician-experimenter" could not resist the temptation to "tak[e] advantage of the patient's cry for help."[35] To prevent the experimental exploitation of the sick that he envisioned (or knew about), Guttentag suggested the following arrangement:

Research and care would not be pursued by the same doctor for the same person, but would be kept distinct. The physician-friend and the physician-experimenter would be two different persons as far as a single patient is concerned. . . . The responsibility for the patient as patient would rest, during the experimental period, with the physician-friend, unless the patient decided differently.

Retaining his original physician as personal adviser, the patient would at least be under less conflict than he is at present when the question of experimentation arises.[36]

Guttentag was nearly unique among physicians in those days in raising such problems in print. Another example of concern about the moral issues raised by research at the bedside comes from what might be an unexpected source: a Catholic theologian writing in 1945. In the course of a general review of issues in moral theology, John C. Ford, a prominent Jesuit scholar, devoted several pages to the matter of experimentation with human subjects. Ford was not a physician, but his thoughts on this topic--published a year before the beginning of the Nuremberg Medical Trial--suggest that a thoughtful observer could recognize, even decades ago, serious problems with conducting medical research on unconsenting hospital patients:

The point of getting the patient's consent [before conducting an experiment] is increasingly important, I believe, because of reports which occasionally reach me of grave abuses in this matter. In some cases, especially charity cases, patients are not provided with a sure, well-tried, and effective remedy that is at hand, but instead are subjected to other treatment. The purpose of delaying the well-tried remedy is, not to cure this patient, but to discover experimentally what the effects of the new treatment will be, in the hope, of course, that a new discovery will benefit later generations, and that the delay in administering the well-tried remedy will not harm the patient too much. . . . This sort of thing is not only immoral, but unethical from the physician's own standpoint, and is illegal as well.[37]

The transcripts and reports produced in the Law-Medicine Research Institute's effort during the early 1960s to gather information on ethical and administrative practices in research in medical settings suggest that by this time more researchers had come to recognize the troubling issues associated with using sick patients as subjects in research that could not benefit them. The body of evidence from the LMRI project also suggests that problems with this type of human experimentation had been widespread before the early 1960s and remained common at that time. The transcript of a May 1, 1961, closed-door meeting of medical researchers organized by LMRI to explore issues in pediatric research shows a medical scientist from the University of Iowa offering a revealing generalization from which none of his colleagues dissented. In order to understand this transcript excerpt one must know that item "A1" on the meeting agenda related to research "primarily directed toward the advancement of medical science" and item "A2" referred to "clinical investigation . . . primarily directed toward diagnostic, therapeutic and/or prophylactic benefit to patients."

We have done a thousand things with an implied feeling [of consent]. . . . We wear two hats. Item A2 allows us to do A1 but we feel uncomfortable about it. The responsibility of the physician includes responsibility to advance in knowledge. Things are different now and this problem of a secondary role [i.e., to advance knowledge] is increasingly in front stage [emphasis added].[38]

This researcher acknowledged that many physicians during the period let themselves slide into nontherapeutic research with patients. He provided the additional, and significant, assessment that he and his colleagues felt guilty about this behavior, even though it was quite common.

An even more probing analysis of these issues had taken place two days earlier at the April 29, 1961, LMRI conference on "The Concept of Consent," referred to above in our discussion of research with healthy subjects. The participants at this meeting recognized that research with sick patients could be both therapeutic and nontherapeutic. Interestingly, they suggested that patients employed for research in which "there was the possibility of therapeutic benefit with minimal or moderate risk" were "usually informed" of the proposed study. The author of the conference report offered the plausible explanation that informing subjects in potentially beneficial research "is psychologically more comfortable for investigators [because] the [therapeutic] expectations of potential subjects coincide with the purpose and expected results of the experiment."[39] The conferees identified research in which "patients are used for studies unrelated to their own disease, or in studies in which therapeutic benefits are unlikely" as the most problematic. Those at the meeting "indicated that it is most often subjects in this category to whom disclosure is not made."[40] The conference report outlined an approach employed by many researchers (including some at the meeting), in which, rather than seeking consent from patients for research that offers them no benefit,

[t]he therapeutic illusion is maintained, and the patient is often not even told he is participating in research. Instead, he is told he is "just going to have a test." If the experimental procedure involves minimal risk, but some discomfort, such as hourly urine collection, "All you do is tell the patient: 'We want you to urinate every hour.' We merely let them assume that it is part of the hospital work that is being done."[41]

Again, it is important to note that the conference participants displayed some moral discomfort with this pattern of behavior, as can be seen from the following exchange:

Dr. X: There is a matter here of whether the patient is not informed because the risk is too trivial, or because it's too serious.

Dr. Y: I think you're getting right at it. There's a great difference in not telling the patient because you're afraid he won't participate and not telling him because you don't think there is a conceivable risk, and it's so trivial you don't bother to inform him.

Dr. Z: On the question of whether it's [acceptable] not to tell, we would say that it is not permissible on the grounds of refusal potential.[42]

It is also important to draw out of this transcript excerpt the general point that most researchers in this period appear not to have had great ethical qualms about enrolling an uninformed patient in a research project if the risk was deemed low or nonexistent. Of course, the varying definitions of "low risk" could lead to problems with this approach. Indeed, the participants at the "Concept of Consent" conference grappled at length with this very issue without ever reaching consensus. A minority steadfastly asserted that participants in an experiment should be asked for consent even if the risk would be extremely low, such as taking only a small clipping of hair.

The Advisory Committee's Ethics Oral History Project[43] has provided extensive additional evidence that medical researchers sometimes (perhaps even often) took liberties with sick patients during the decades immediately following World War II. The element of opportunism was recounted in several interviews. Dr. Lasagna, who was involved in pain-management studies in postoperative patients at Harvard in the 1950s, explained rather bluntly:

[M]ostly, I'm ashamed to say, it was as if, and I'm putting this very crudely purposely, as if you'd ordered a bunch of rats from a laboratory and you had experimental subjects available to you. They were never asked by anybody. They might have guessed they were involved in something because a young woman would come around every hour and ask them how they were and quantified their pain. We never made any efforts to find out if they guessed that they were part of it.[44]

Other researchers told similar tales, with a similar mixture of matter-of-fact reporting and regretful recollection. Dr. Paul Beeson remembered a study he conducted in the 1940s, while a professor at Emory University, on patients with bacterial endocarditis, an invariably fatal disease at the time. He recalled that he thought it would be interesting to use the new technique of cardiac catheterization to compare the number of bacteria in the blood at different points in circulation:

[This is] something I wouldn't dare do now. It would do no good for the patient. They had to come to the lab and lie on a fluoroscopic table for a couple of hours, a catheter was put into the heart, a femoral needle was put in so we could get femoral arterial blood and so on. . . . All I could say at the end was that these poor people were lying there and we had nothing to offer them and it might have given them some comfort that a lot of people were paying attention to them for this one study. I don't remember ever asking their permission to do it. I did go around and see them, of course, and said, "We want to do a study on you in the X-ray department, we'll do it tomorrow morning," and they said yes. There was never any question. Such a thing as informed consent, that term didn't even exist at that time. . . . [I]f I were ever on a hospital ethics committee today, I wouldn't ever pass on that particular study.[45]

Radiologist Leonard Sagan recalled an experiment in which he assisted during his training on a metabolic unit at Moffett Hospital in San Francisco in 1956-1957.

At the time, the adrenal gland was hot stuff. ACTH [adreno-corticotropic hormone] had just become available and it was an important tool for exploring the function of the adrenal gland. . . . This was the project I was involved in during that year, the study of adrenal function in patients with thyroid disease, both hypo- and hyperthyroid disease. So what did we do? I'd find some patients in the hospital and I'd add a little ACTH to their infusion and collect urines and measure output of urinary corticoids. . . . I didn't consider it dangerous. But I didn't consider it necessary to inform them either. So far as they were concerned, this was part of their treatment. They didn't know, and no one had asked me to tell them. As far as I know, informed consent was not practiced anyplace in that hospital at the time.[46]

Sagan viewed the above experiment as conforming not only with the practices of the particular hospital but also with the high degree of professional autonomy and respect that was granted to physicians in this era:

In 1945, '50, the doctor . . . was king or queen. It never occurred to a doctor to ask for consent for anything. . . . People say, oh, injection with plutonium, why didn't the doctor tell the patient? Doctors weren't in the habit of telling the patients anything. They were in charge and nobody questioned their authority. Now that seems egregious. But at the time, that's the way the world was.[47]

Another investigator, Dr. Stuart Finch, who was a professor of medicine at Yale during the 1950s and 1960s, recalled instances when oncologists there were overly aggressive in pursuing experimental therapies with terminal patients.

[I]t's very easy to talk a terminal patient into taking that medication or to try that compound or whatever the substance is. . . . Sometimes the oncologists [got] way overenthused using it. It's very easy when you have a dying patient to say, "Look, you're going to die. Why don't you let me try this substance on you?" I don't think if they have informed consent or not it makes much difference at that point.[48]

Economically disadvantaged patients seem to have been perceived by some physicians as particularly appropriate subjects for medical experimentation. Dr. Beeson offered a frank description of a quid pro quo rationale that was probably quite common in justifying the use of poor patients in medical research: "We were taking care of them, and felt we had a right to get some return from them, since it wouldn't be in professional fees and since our taxes were paying their hospital bills."[49]

Another investigator, Dr. Thomas Chalmers, who began his career in medical research during the 1940s, identified sick patients as the most vulnerable type of experimental subjects--more vulnerable even than prisoners:

One of the real ludicrous aspects of talking about a prisoner being a captive, and therefore needing more protection than others, is, there's nobody more captive than a sick patient. You've got pain. You feel awful. You've got this one person who's going to help you. You do anything he says. You're a captive. You can't, especially if you're sick and dying, discharge the doctor and get another one without a great deal of trauma and possible loss of lifesaving measures.[50]

Thus, as compared with prisoners, who are now generally viewed as vulnerable to coercion, those who are sick may be even more compromised in their ability to withstand subtle pressure to be research subjects. Appropriate protection for the sick who might be candidates for medical research has proved to be an especially troublesome issue in the era following Nuremberg.
