ACHRE Report

Part I

Chapter 4: Applying the Ethical Framework

The three kinds of standards presented in this chapter provide a general framework for evaluating the ethics of human radiation experiments. In this section of the chapter, we revisit those standards in the specific context of human radiation experiments conducted between 1944 and 1974 and what we have learned about the policies and practices involving human subjects during that period.

Basic Ethical Principles

Earlier in this chapter we identified six basic ethical principles as particularly relevant to our work: "One ought not to treat people as mere means to the ends of others"; "One ought not to deceive others"; "One ought not to inflict harm or risk of harm"; "One ought to promote welfare and prevent harm"; "One ought to treat people fairly and with equal respect"; and "One ought to respect the self-determination of others."

These principles are central to our analysis of the cases we present in part II of the report, although not every case we evaluate engages every principle. Two of the principles, however, recur repeatedly as we consider the ethics of past experiments. These are "One ought not to treat people as mere means to the ends of others" and "One ought not to inflict harm or risk of harm." Whether an experiment involving human subjects violates the principle not to use people as mere means generally depends on two factors--consent and therapeutic intent. An individual may give his or her consent to being treated as a means to the ends of others. If a person freely consents, then he or she is no longer being used as a mere means, that is, as a means only. Thus, if a person is used as a subject in an experiment from which the person cannot possibly benefit directly, but the person's consent to that use is obtained, the person is not being used as a mere means to the ends of others. By contrast, if a person is used as a subject in such an experiment but the person's consent is not obtained for that use, the person is being used as a mere means to the ends of the investigator conducting the experiment and the institutions funding or sponsoring the experiment.

If an action that involves the use of a person is undertaken in whole or in part for that person's benefit, then the person is not being used as a mere means toward the ends of others. Thus, if a person is used as a subject in an experiment that is intended to offer the subject a prospect of direct benefit, then, even if the subject's consent has not been obtained, the subject is not being used as a mere means to the ends of others. This is because the experiment is intended to serve the subject's interests as well as the interests of the investigator and funding agency. It may be wrong not to obtain the subject's consent in this case, but the wrong does not stem from a violation of the principle not to use people as mere means. Instead, the wrong reflects the violation of other basic principles such as the principles enjoining us to respect self-determination and to promote welfare and prevent harm.

These two factors--the obtaining of consent and an intention to benefit--also can transform the moral quality of an act that involves the imposition of harm or risk of harm. One important way to make the imposition of a risk of harm justifiable is to obtain the person's permission for the imposition. The imposition of risk on a person also is more justifiable when the risk is imposed to secure a benefit for that person, although even in the presence of a prospect of offsetting benefit, the imposition of risk on another without that person's consent is morally questionable because it appears to violate the principle of respect for self-determination.[17]

Consider the following example of how the factors of therapeutic intent and consent can transform a morally questionable action into a morally acceptable one. Patients are enrolled in an experiment in which they are given a new drug that is unproven in humans, induces substantial discomfort or even suffering, and may produce irreversible damage to vital organs. There is, however, no effective treatment for the condition from which these patient-subjects suffer, and the condition is life threatening. The drug is theoretically promising compared with related drugs used in similar diseases, and it has proven effective in animals. Further, the opportunity to participate in the experiment is offered to patients while they are lucid, comfortable, and at ease. Under these circumstances the imposition of harm may be transformed into a caring and respectful act.

Policies of Government Agencies

Where agencies of the government had policies on the conduct of research involving human subjects, and where these policies included requirements or rules that are morally sound, these policies constitute standards against which the conduct of the agencies and the people who worked there, as well as the experiments the agencies sponsored or conducted, can be evaluated. Government agencies must be held responsible for failures to implement their own policies. To do otherwise is to break faith with the American people, who have a reasonable expectation that an agency will conduct its affairs in accord with the agency's stated policies. As we noted in chapter 1, it is not always clear, however, whether statements made in letters or memorandums constitute agency policy. When there is little evidence that a statement by a government official was ever implemented, it is often difficult to determine whether this was an instance of an agency failing to implement its own policies or an instance where a statement by a government official was not perceived as agency policy in the first place.

Among the general conclusions that can be drawn from the discussions about policies during the late 1940s and early 1950s is that the AEC, DOD, and NIH required investigators to obtain the consent of healthy or "normal" subjects, and that prior group review of risk was required for all privately and publicly financed research using radioisotopes (and, at the NIH, for all hazardous procedures). Also, in 1953, the Department of Defense adopted the Nuremberg Code as the policy for research related to atomic, biological, and chemical warfare, and the NIH Clinical Center articulated a consent requirement for patient-subjects in intramural research (see chapter 1).

Two questions that arise at this juncture are whether an experiment was wrong if it violated one of these policies but took place at another government agency, and whether an experiment was wrong if it took place under the auspices of an agency before it promulgated the policy. The answer to both questions is the same: Even if such an experiment was not wrong according to the policy of the agency sponsoring the experiment at the time, the experiment may nevertheless have been unethical based on one or more basic ethical principles or rules of professional ethics.

As is the case today, decades ago government officials had obligations to take reasonable steps to see that policies were adequately implemented.[18] Policies constitute organizational commitments, and organizational commitments generate obligations on the part of the organization and its members. In some cases, however, it is not clear that statements made by individual officials rise to a level that all would be comfortable calling "policies." Accordingly, it is not clear whether corresponding obligations to implement can be inferred. The two letters signed by AEC General Manager Carroll Wilson in April and November 1947 are the best examples of this problem. Nevertheless, if it is correct to say that high officials have an obligation to exert due efforts to implement and communicate the rules they are empowered to establish, then they may reasonably be blamed for failures in this regard. Further, if they do not even attempt to articulate rules that are indicated by basic ethical principles and that are clearly relevant to organizational activities that fall under their authority, they are also subject to moral blame.

The mitigating condition of culturally induced moral ignorance does not apply to government officials who failed to exercise their responsibilities to implement or communicate requirements that clearly fell within the ambit of their office and of which they were aware. The very fact that these requirements were articulated by the agencies in which they worked is evidence that officials could not have been morally ignorant of them.

We have observed, however, that, especially with regard to research involving patients, policies were frequently unclear. When this research offered patient-subjects a chance to benefit medically, the widespread discretion granted physicians to make decisions on behalf of their patients is a mitigating factor in judging the blameworthiness of government officials for failing to impose consent requirements on physician-investigators. This failure could be attributed to a cultural moral ignorance concerning the proper limits to the authority of physicians over their patients.

The same cannot be said of government officials for failing to impose consent requirements on physician-investigators who used patient-subjects in research from which the patients could not benefit medically. This use of human subjects took place outside of the therapeutic context that defines the doctor-patient relationship and therefore also was outside of the authority then ceded to physicians. In this case responsible agency officials had a ready analogy to healthy subjects for whom there was a lengthy tradition of policies and rules requiring the use of "volunteers" and the obtaining of consent. Government officials could and should have perceived the morally identical nature of these cases--that, without consent, both cases involved violation of the principle not to use people as mere means to the ends of others. Those who were ill should have been granted the same protections as those who were well.

In contrast to requirements for consent, requirements intended to ensure that risks to experimental subjects were acceptable were far more clearly stated. Government officials are blameworthy if they permitted research to continue that was known to entail unusual risks to the subjects, in direct violation of agency policy.

Finally, some lessons that can be drawn from the experience of the human radiation experiments we considered speak to the conduct of government itself as a collective agent, rather than simply to individual government officials. In too many instances, as we saw in chapter 1, we found a lack of clarity about the status within an agency of specific declarations by responsible officials. Particularly when agencies are engaged in activities that may compromise the rights or interests of citizens, it is critically important that agencies be clear about their commitments and policies and that they not remain passive in the face of questionable practices for which they may bear some responsibility. In chapter 3 we saw an effective response to such a situation in the 1960s by the PHS. This example attests to the fact that institutional clarity and active reform measures can succeed and, when they do, can constitute great strides forward.

Rules of Professional Ethics

Even if the federal government had adopted no formal human research ethics policy whatsoever, the medical profession and its members would still have moral obligations to those who entrust themselves to their care. The successes of modern medical research, regardless of its funding source, are ultimately due to the efforts of talented and dedicated medical scientists. These investigators bear a profound ethical burden in their work with human subjects. Society entrusts them with the privilege of using other human beings to advance their important work. Although society must not discourage them from the pursuit of new information, it also must diligently pursue signs that medical scientists have not exercised their ethical responsibility with the care and sensitivity that society has good reason to expect from them.

Without reference to the policies adopted by federal agencies, what rules of professional ethics were seen by the medical profession during the 1944-1974 period as relevant to the conduct of its members engaged in human subjects research? The answer to this question depends upon which kind of experimental situation is under discussion: an experiment on a healthy subject; an experiment on a patient-subject without a scientific or clinical basis for an expectation of benefit to the patient-subject; or an experiment on a patient-subject with a scientific or clinical basis for an expectation of benefit to the patient-subject.

Experiments on Healthy Subjects: By the mid-1940s it was common to obtain the voluntary consent of healthy subjects who were to participate in biomedical experiments that offered no prospect of medical benefit to them. Sophisticated philosophical analysis is not required to reach the conclusion that using a human being in a medical experiment that offers the person no prospect of personal benefits without that person's consent is wrong. As we have already noted, such conduct violates the basic ethical principle that one ought not use people as mere means to the ends of others.

Experiments on Patient-Subjects Without a Scientific or Clinical Basis for an Expectation of Benefit to the Patient-Subject: The Hippocratic tradition of medical ethics inherited by physicians in the 1940s holds that, unless the physician is reasonably sure that his or her treatment is, on balance, likely to do the patient more good than harm, the treatment should not be introduced. The heart of the Hippocratic ethic is the physician's commitment to putting the interests of the patient first. Subjecting one's patient to experimentation that offers no prospect of benefit to the patient without his or her consent is a direct repudiation of this commitment. (If the patient consents to this use, the moral warrant for proceeding with the experiment comes from the patient's permission, not from the Hippocratic ethic.)

Experiments on Patient-Subjects with a Scientific or Clinical Basis for an Expectation of Benefit to the Patient-Subject: Even in Hippocratic medicine it is recognized that physicians should attempt to use unproven or experimental methods to benefit the patient, whether through efforts at cure or palliation, but only so long as there is no efficacious standard therapy available and innovative measures are compatible with the obligation to avoid doing harm without the prospect of offsetting benefit. Interventions in this category should be based on scientific reasoning and conservative clinical judgment. Arguably, so long as these conditions prevailed, it was not thought morally necessary within the medical profession to obtain the patient's consent to such experimentation prior to the 1960s. But the physician assumed a corresponding obligation to base his or her deviation from standard practice on the reasonable likelihood of patient benefit, sufficient to outweigh the risks associated with being in the experiment. This type of reasoning, too, has been available to and accepted by physicians for many years, even though the ability to assess and calculate risks has developed greatly.

* * *

Although the professional ethics of the period thus had relevant moral rules for each of these three experimental situations, compliance with these rules is a separate matter. There may be many reasons for specific failures by physicians to adhere to the requirements of their ethical tradition, some of which may render them nonculpable, and there are various limitations on our ability to assign blame for particular cases of a physician's failure to adhere to professional ethics. However, any use of human subjects that did not proceed in accordance with these rules of professional ethics was wrong in the sense that it was a violation of sound professional ethical standards. Moreover, even if there was then or is now a lack of clarity about the rules of professional ethics, recognition by morally serious individuals of basic ethical principles is enough to identify certain sorts of human experiments as morally unacceptable.

The special moral responsibilities of the medical profession as a whole, whether decades ago or in our own time, deserve careful consideration, especially insofar as previous experience can help formulate lessons for the future. Like the government, the medical profession as a whole must be held to a higher standard than individuals in society. Confidence in the medical profession is important because individuals put their very lives, and the lives of their loved ones, in the hands of those whom the profession has certified as competent to practice. Unlike government officials, members of the medical profession are explicitly bound to a moral tradition in their professional relations, based on which society grants the medical profession the privilege of largely policing itself. This authority is part of what constitutes the medical profession as a profession, but the authority is granted by society on the condition that the profession will adhere to the high moral rules it professes and that, if necessary, the medical profession will reform or encourage the reform of relevant institutions to ensure that those rules will be honored in practice.

Moreover, many of the privileges that devolve on the medical profession are granted on the condition that it is sufficiently well organized to police itself, with minimal intervention by the government and the legal system. Therefore, members of the medical profession are further legitimately expected to engage in organizational conduct that constitutes sound moral practices. Implicit in this arrangement is also the assumption that it will be self-critical even about its relatively well-entrenched attitudes and beliefs, so that it will be prepared to undertake reforms. Without this commitment to self-criticism, self-regulation cannot be effective and the public's trust in the professional's ability to self-regulate would be unwarranted.

Today we regard subjects of biomedical research whose consent was not obtained to have been wronged; under conditions of significant risk, the wrong is greater, and in the absence of the potential for offsetting medical benefit, greater still. The historical silence of the medical profession with respect to nontherapeutic experiments was perhaps based on the rationale that those who are ill and perhaps dying may be used in experiments because they will not be harmed even though they will not benefit. But this rationale overlooks both the principle that people should never be used as mere means and the principle of respect for self-determination; it may also provide insufficient protection against harm, given the position of conflict of interest in which the physician-researcher may find him- or herself. Nevertheless, until the mid-1960s medical conventions were silent on experiments with patient-subjects that offered no direct benefit but which physicians believed to pose acceptable risk. This silence was a failure of the profession.

One defense of the profession in this regard is that it was as subject to the phenomenon we have called cultural moral ignorance as any other group in society at the time, including the arguably excessive deference to physician authority on the part of the government and possibly the public at large. However, the medical profession was in a wholly different position from the others, in several respects. First, it insisted upon and was given the privilege of policing its own behavior. Second, the profession was the direct beneficiary of the deference paid to it. Third, there were already examples of experiments that had involved subject consent that could have served as models of reform. Under these conditions the profession had an obligation to be self-critical concerning the norms and rules it thought appropriate to govern its members' conduct.

The medical profession could and should have seen that healthy subjects and patient-subjects in nontherapeutic experiments were in similar moral positions--neither was expected to benefit medically. Just as physicians had no moral license to determine an "acceptable risk" for healthy subjects without their voluntary consent, they had no moral license to do so in the case of other subjects who also could not benefit from being in research, even if they were patients. The prevailing standards for healthy subject groups could easily have been applied to patient-subjects for whom there was no expectation of medical benefit. The moral equivalence of the use of healthy people and ill people as subjects of experiments from which no subject could possibly benefit directly was perceptible at the time.

This moral equivalence would have made it clear that no one, well or sick, should be used as a mere means to advance medical science without voluntary consent. Thus, this moral ignorance could have and should have been remedied at the time. Indeed, it is arguably the case that physicians could and should have seen that using patients in this way was morally worse than using healthy people, for in so doing one was violating not only the basic ethical principle not to use people as a mere means but also the basic ethical principle to treat people fairly and with equal respect.

American physicians are members of a society that values these basic moral principles even more highly than the advancement of medical science. These principles are as easily known to physicians as to anyone else, and it is unacceptable to single oneself out as an exception to these principles simply because one is a member of an esteemed profession. Someone who is ill deserves to be treated with the same respect as someone who is well. Accordingly, a physician who failed to tell a patient that what was proposed was an experiment with no therapeutic intent was and is blameworthy. To the extent that the experiment entailed significant risk, the physician is more blameworthy; where it was reasonable to assume that the experiment imposed no risk, or only minimal risk or inconvenience, the blame is less.

We argue here that the use of patients in nontherapeutic experiments without their consent was not only a violation of these basic moral principles but also a violation of the Hippocratic principle that was the cornerstone of professional medical ethics at that time. That principle enjoins physicians to act in the best interests of their patients and thus would seem to prohibit subjecting patients to experiments from which they could not benefit. It might be argued that a widespread practice that is not in conformity with a principle of professional ethics invalidates the principle, since the practice shows that the profession was not really committed to the principle in the first place. This is a misunderstanding, however, of what it means for a profession to adopt and espouse a moral principle. Even if many or most physicians sometimes fail or even often fail to comply with the principle, it is still coherent to say that the principle is accepted by the profession, if the principle has been publicly pronounced and affirmed by the profession, as was clearly the case with respect to the Hippocratic ethic.

To characterize a great profession as having engaged over many years in unethical conduct--years in which massive progress was being made in curbing some of mankind's greatest ills--may strike some as arrogant and unreasonable. However, fair assessment indicates that this was one of those times in history in which wrongs were committed by very decent people who were in a position to know that a specific aspect of their interactions with others should be improved. Wrongs are not less egregious because they were committed by a member of a certain profession or by people who are very decent in their relationships with other parties. It is common for us to look back at such conduct in amazement that so many otherwise good and decent people could have engaged in it without a high level of self-awareness. Moral consistency requires the Advisory Committee to conclude that, if the use of healthy subjects without consent was understood to be wrong at the time, then the use of patients without consent in nontherapeutic experiments should also have been discerned as wrong at the time, no matter how widespread the practice.

It should be emphasized, however, that often these nontherapeutic experiments on unconsenting patients constituted only minor wrongs. Often there was little or no risk to patient-subjects and no inconvenience. Although it is always morally offensive to use a person as a means only, as the burden on the patient-subject decreased, so too did the seriousness of the wrong.

Much the same can be said of experiments that were conducted on patient-subjects without their consent but that offered a prospect of medical benefit. The more such experiments were conducted within the moral environment of the doctor-patient relationship, that is, based on the physician's considered and informed judgment that it was in the patient's best interests to be enrolled in the research, the less blameworthy the physician was for failing to obtain consent. However, where the risks were great or where there were viable alternatives to participation in research, the physician was more blameworthy for failing to obtain consent.

It is often difficult to establish standards and make judgments about right and wrong, and about blame and exculpation. Our charge was all the more difficult because the context of the actions and agents we were asked to evaluate differs from our own. In arriving at this moral framework for evaluating human radiation experiments, we have tried to be fair to history, to considerations of ethics, and above all, to the people affected by our analysis--former subjects, physician-investigators, and government officials.
