DOE Openness: Human Radiation Experiments: Roadmap to the Project ACHRE Report

ACHRE Report, Part I, Chapter 4: An Ethical Framework
Chapter 4: An Ethical Framework

For purposes of the Committee's charge, there are two main types of moral judgment: judgments about the moral quality of actions, policies, practices, institutions, and organizations; and judgments about the praiseworthiness or blameworthiness of individual agents and in some cases entities such as professions and governments (insofar as these can be viewed as collective agents with powers and responsibilities). The first type contains several kinds of judgments. Actions may be judged to be obligatory, wrong, or permissible. Institutions, policies, and practices can be characterized as just or unjust, equitable or inequitable, humane or inhumane. Organizations can be said to be responsible or negligent, fair-dealing or exploitative.

The second type of judgment, about the praiseworthiness or blameworthiness of agents, also contains a diversity of determinations. Agents, whether individual or collective, can be judged to be culpable or praiseworthy for this or that action or policy, to be generous or mean-spirited, responsible or negligent, to respect the moral equality of people or to discriminate against certain individuals or groups, and so on.
Three Kinds of Ethical Standards

A recognized way to make moral judgments is to evaluate the facts of a case in the context of ethical standards. The Committee identified three kinds of ethical standards as relevant to the evaluation of the human radiation experiments:[1]
1. Basic ethical principles that are widely accepted and generally regarded as so fundamental as to be applicable to the past as well as the present;

2. The policies of government departments and agencies at the time the experiments were conducted; and

3. Rules of professional ethics in place at the time the experiments were conducted.

Basic Ethical Principles

Basic ethical principles are general standards or rules that all morally serious individuals accept. The Advisory Committee has identified six basic ethical principles as particularly relevant to our work: "One ought not to treat people as mere means to the ends of others"; "One ought not to deceive others"; "One ought not to inflict harm or risk of harm"; "One ought to promote welfare and prevent harm"; "One ought to treat people fairly and with equal respect"; and "One ought to respect the self-determination of others." These principles state moral requirements; they are principles of obligation telling us what we ought to do.[2]

Every principle on this list has exceptions, because all moral principles can justifiably be overridden by other basic principles in circumstances when they conflict. To give priority to one principle over another is not a moral mistake; it is a reality of moral judgment. The justifiability of such judgments depends on many factors in the circumstance; it is not possible to assign priorities to these principles in the abstract. Far more social consensus exists about the acceptability of these basic principles than exists about any philosophical, religious, or political theory of ethics. This is not surprising, given the central social importance of morality and the fact that its precepts are embraced in some form by virtually all major ethical theories and traditions. These principles are at the deepest level of any person's commitment to a moral way of life. It is important to emphasize that the validity of these basic principles is not typically thought of as limited by time: we commonly judge agents in the past by these standards.
For example, the passing of fifty years in no way changes the fact that Hitler's extermination of millions of people was wrong, nor does it erase or even diminish his culpability. Nor would the passing of a hundred years or a thousand do so. This is not to deny that it might be inappropriate to apply to the distant past some ethical principles to which we now subscribe. It is only to note that there are some principles so basic that we ordinarily assume, with good reason, that they are applicable to the past as well as the present (and will be applicable in the future as well). We regard these principles as basic because any minimally acceptable ethical standpoint must include them.
Policies of Government Departments and Agencies

The policies of departments and agencies of the government can be understood as statements of commitment on the part of those governmental organizations, and hence of individuals in them, to conduct their affairs according to the rules and procedures that constitute those policies. In this sense, policies create ethical obligations. When a department or agency adopts a particular policy, it in effect promises to make reasonable efforts to abide by it.[3]

At least where participation in the organization is voluntary, and where the organization's defining purpose is morally legitimate (it is not, for example, a criminal organization), to assume a role in the organization is to assume the obligations that attach to that role. Depending upon their roles in the organization, particular individuals may have a greater or lesser responsibility for helping to ensure that the policy commitments of the organization are honored. For example, high-level managers who formulate organizational policies have an obligation to take reasonable steps to ensure that these policies are effectively implemented. If they fail to discharge these obligations, they have done wrong and are blameworthy, unless some extenuating circumstance absolves them of responsibility. One sort of extenuating circumstance is that the policy in question is unethical. In that case, we would hold an individual blameless for not attempting to implement it (at least if the individual did so because of a recognition that the policy was unethical). Moreover, we might praise the individual for attempting an institutional reform at some professional or personal risk. Different types of organizations have different defining purposes, and these differences determine the character of the department's or agency's role-derived obligations.
All government organizations have special responsibilities to act impartially and to fairly protect all citizens, including the most vulnerable ones. These special obligations constitute a standard for evaluating the conduct of government officials.
Rules of Professional Ethics

Professions traditionally assume responsibilities for self-regulation, including the promulgation of certain standards to which all members are supposed to adhere. These standards are of two kinds: technical standards that establish the minimum conditions for competent practice, and ethical principles that are intended to govern the conduct of members in their practice. In exchange for exercising this responsibility, society implicitly grants professions a degree of autonomy. The privilege of this autonomy in turn creates certain special obligations for the profession's members.

These obligations function as constraints on professionals to reduce the risk that they will use their special power and knowledge to the detriment of those whom they are supposed to serve. Thus, physicians, whose special knowledge gives them opportunities for exploiting patients or breaching confidentiality, are obligated to act in the patient's best interest in general and to follow various prescriptions for minimizing conflicts of interest. Unlike basic ethical principles that speak to the whole of moral life, rules of professional ethics are particularized to the practices, social functions, and relationships that characterize a profession. Rules of professional ethics are often justified by appeal to basic ethical principles. For example, as we discuss later in this chapter, the obligation to obtain informed consent, which is a rule of research and medical ethics, is grounded in principles of respect for self-determination, the promotion of others' welfare, and the noninfliction of harm. In one respect, rules of professional ethics are like the policies of institutions and organizations: they express commitments to which their members may be rightly held by others. That is, rules of professional ethics express the obligations that collective entities impose on their members and constitute a commitment to the public that the members will abide by them.
Absent some special justification, failure to honor the commitment to fulfill these obligations constitutes a wrong. To the extent that the profession as a collective entity has obligations of self-regulation, failure to fulfill these obligations can lead to judgments of collective blame.
Ethical Pluralism and the Convergence of Moral Positions

Although we have argued that there is broad agreement about and acceptance of basic ethical principles in the United States, such as principles that enjoin us to promote the welfare of others and to respect self-determination, people nevertheless disagree about the relative priority or importance of these principles in the moral life. For example, although any minimally acceptable ethical standpoint must include both these principles, some approaches to morality emphasize the importance of respecting self-determination while others place a higher priority on duties to promote welfare. These differences in approaches to morality pose a problem for public moral discourse. How can a public body, such as the Advisory Committee, purport to speak on behalf of society as a whole and at the same time respect this diversity of views about ethics? The key to understanding how this is possible is to appreciate that different ethical approaches can and often do converge on the same ethical conclusions. People can agree about what ought to be done without necessarily appealing to the same moral arguments to defend their common position.

This phenomenon of convergence has been observed in the work of other public bodies whose charge was to make ethical evaluations on research involving human subjects, including the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research and the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research.[4] For example, both those who take the viewpoint that emphasizes obligations to promote welfare and to refrain from inflicting harm and those who accord priority to self-determination can agree that law and medical and research practice should recognize a right to informed consent for competent individuals.
The argument for a requirement of informed consent based on promoting welfare and refraining from inflicting harm assumes that individuals are generally most interested in and knowledgeable about their own well-being. Individuals are thus in the best position to discern what will promote their welfare when deciding about participation in research or medical care. Allowing physicians or others to decide for them runs too great a risk of harm or loss of benefits. By contrast, an approach based on self-determination assumes that, at least for competent individuals, being able to make important decisions concerning one's own life and health is intrinsically valuable, independent of its contribution to promoting one's well-being. The most compelling case for recognizing a right of informed consent for competent subjects and patients draws upon both lines of justification, emphasizing that this requirement is necessary from the perspective of self-determination considered as valuable in itself and from the standpoint of promoting welfare and refraining from doing harm.

Therefore, although people may have different approaches to the moral life, which reflect different priorities among basic moral principles, these differences need not result in a lack of consensus on social policy or even on particular moral rules such as the rule that competent individuals ought to be allowed to accept or refuse participation in experiments. On the contrary, the fact that the same moral rules or social policies can be grounded in different basic moral principles and points of view greatly strengthens the case for their public endorsement by official bodies charged to speak for society as a whole. The three kinds of ethical standards upon which the Committee relies for our ethical evaluations--the basic moral principles, government policies, and rules of professional ethics--also enjoy a broad consensus. They are not idiosyncratic to a particular ethical value system.
Thus it would be a mistake to think that in order to fulfill our charge of ethical evaluation, the Advisory Committee must assume that there is only one uniquely correct ethical standpoint. A broad range of views can acknowledge that the medical profession should be held accountable for moral rules it publicly professes and that individual physicians can be held responsible for abiding by these rules of professional ethics. Likewise, regardless of whether one believes that the ultimate justification for government policies is the goal of promoting welfare and minimizing harms or respect for self-determination, one can agree that policies represent commitments to action and hence generate obligations. Moreover, any plausible ethical viewpoint will recognize that when individuals assume roles in organizations they thereby undertake role-derived obligations. We have already argued that the basic ethical principles that we employ in evaluating experiments are widely accepted and command significant allegiance not only from our contemporaries but also from reflective and morally sensitive individuals and ethical traditions in the past. It would be very implausible to construe any of them as parochial or controversial.
Retrospective Moral Judgment and the Challenge of Relativism

Some may still have reservations about the project of evaluating the ethics of decisions and actions that occurred several decades ago. The worry is that it is somehow inappropriate, if not muddled, to apply currently accepted standards to earlier periods when they were not accepted, recognized, or viewed as matters of obligation. This is an important worry, though one that does not apply to our framework.

The position that the values and principles of today cannot be validly applied to past situations in which they may not have been accepted is called historical ethical relativism. This is the thesis that moral judgments across time are invalid because moral judgments can be justified only by reference to a set of shared values, and the values of a society change over time. According to this view, one historical period differs from another by virtue of lacking the relevant values contained in the other historical period, namely, those that support or justify the particular moral judgments in question. Understood in this way, historical ethical relativism, if true, would explain why some retrospective moral judgments are invalid, namely, where the past society about which the judgments are made lacked the values that, in our time, support our judgments. In other words, the claim is that moral judgments made about actions and agents in one period of history cannot be made from the perspective of the values of another historical period.

The question of whether historical ethical relativism limits the validity of retrospective moral judgment is not a mere theoretical puzzle for moral philosophers. It is an eminently practical question, since how we answer it has direct and profound implications for what we ought to do now.
Most obviously, the position we adopt on the validity of retrospective moral judgment will determine whether we should honor claims that people now make for remedies for historical injustices allegedly perpetrated against themselves or their ancestors. Similarly, we must know whether there is any special circumstance resulting from the historical context in which the responsible parties acted that mitigates whatever blame would be appropriate. We return to this question later in the chapter.

In addition, something even more fundamental is at stake in the debate over retrospective moral judgment: the possibility of moral progress. The idea of moral progress makes sense only if it is possible to make moral judgments about the past and to make them by appealing to some of the same moral standards that we apply to the present. Unless we can apply the same moral yardstick to the past and the present, we cannot meaningfully say either that there has been moral progress or that there has not. For example, unless some retrospective moral judgments are valid, we cannot say that the abolition of slavery is a case of moral progress, moral regression, or neither. More specifically, unless we can say that slavery was wrong, we cannot say that the abolition of slavery was a moral improvement.

For these and other reasons, the acceptance of historical ethical relativism has troubling implications. But even if we were to accept historical ethical relativism as the correct position, it would not follow from this alone that there is anything improper about making judgments about radiation experiments conducted decades ago based on the three kinds of ethical standards the Committee has identified. Two of these kinds of standards--government policies and rules of professional ethics--are standards used at the time the experiments were conducted. Neither of these kinds of standards involves projecting current cultural values onto a different cultural milieu.
We have already argued that basic ethical principles, the third kind of standard adopted by the Committee, are not temporally limited. Although there have been changes in ethical values in the United States between the mid-1940s and the present, it is implausible that these changes involved the rejection or affirmation of principles so basic as those holding that it is wrong to treat people as mere means, to inflict harm, or to deceive people. Thus, the Advisory Committee's evaluations of the human radiation experiments in light of these basic principles are based on a simple and, we think, reasonable assumption: that, even fifty years ago, these principles were pervasive features of moral life in the United States that were widely recognized and accepted, much as we recognize and accept them today.[5]
Factors That Influence or Limit Ethical Evaluation

Several considerations influence and can limit the ability to reach ethical conclusions about rightness and wrongness and praise and blame. Some of these may be more likely to be present in efforts to evaluate the past, but all can arise when attempts are made to evaluate contemporary events as well. The most important such limitations relevant to the Advisory Committee's evaluations are these:
(1) Lack of evidence as to whether ethical standards were followed or violated and, if so, by whom; and

(2) The presence of conflicting obligations.

The three kinds of ethical standards adopted by the Committee can yield the conclusion that an individual or collective agent had or has a particular obligation. But this conclusion is not by itself sufficient to determine in any particular case whether anything wrong was done or whether any individual or collective agent deserves blame.
Lack of Evidence

Sound evaluations cannot be made without sufficient evidence. Sometimes it cannot be determined if anything wrong was done because key facts about a case are missing or unclear. Other times there may be sufficient evidence that a wrong was done, but insufficient evidence to determine who performed the action that was wrong or who authorized the policy that was wrong or who was responsible for a practice that was wrong. This is why the Advisory Committee strove during our tenure to reconstruct the details of the circumstances under which the human radiation experiments themselves took place. However, these records are incomplete, and even the copious documentation we have gathered does not tell as complete a story as was sometimes needed to make ethical evaluations.
Conflicting Obligations

Because we all have more than one obligation, because they can conflict with one another, and because some obligations are weightier than others, a particular obligation that is otherwise morally binding may not be binding in a particular circumstance, all things considered. For example, a government official might be obligated to follow certain routine procedures, but in a time of dire emergency he or she might have a weightier obligation to avert great harm to many people by taking direct action that disregards the procedures. Similarly, a physician is obligated to keep his patient's condition confidential, but in some cases it is permissible and even obligatory to breach this confidence (for example, in order to prevent the spread of deadly infectious diseases). In such cases, the agent has done nothing wrong in failing to do what he or she would ordinarily be morally obligated to do; that obligation has been validly overridden by what is in the particular circumstances a weightier obligation.

The presence of conflicting obligations may limit our ability to make moral judgments when, for example, it is difficult to determine, in a particular case, which obligation should take precedence. At the same time, however, if it can be determined which obligation is weightier, then the presence of this factor does not serve as an impediment to evaluation; rather, it can lead to the conclusion that nothing morally wrong was done and that no one should be blamed. An example of a potentially overriding obligation that is especially important for the Advisory Committee's work is the possibility that, during the period of the radiation experiments, obligations to protect national security were sometimes more morally weighty than obligations to comply with standards for human subjects research.
If the threat were great enough, considerations of national security grounded in the basic ethical principle that one ought to promote welfare and prevent harm could justifiably override the basic ethical principle of not using people as mere means to the ends of others, as well as the more specific rule of research ethics requiring the voluntary consent of human subjects. Had such an overriding obligation to protect national security existed during the period we studied, it also would have relieved responsible individuals of any blame otherwise attributable to them for using individuals in experiments that were crucial to the national defense.

Especially during the late 1940s and early 1950s, and again in the early 1960s, our country was engaged in an intense competition with the Soviet Union. A high premium was placed upon military superiority, not only in "conventional" warfare but also in atomic, biological, and chemical warfare. The DOD's Wilson memorandum, when originally promulgated in 1953, declared that it was directed toward the need to pursue experiments in atomic, biological, and chemical warfare "for defensive purposes." It would not be surprising, therefore, to discover that, in the government's policies and rules for human subject research, provisions had been made for the possibility that obligations to protect national security might conflict with and take priority over obligations to protect human subjects, and thus that such policies would have included exceptions for national security needs. The moral justification would also not be surprising: that, in order to preserve the American way of life with its precious freedoms, some sacrifices of individual rights and interests would have to be made for the greater good. The very phrase Cold War expressed the conviction that we already were engaged in a life-or-death struggle and that in war actions may be permissible that would be impermissible in peacetime.
Survival in the treacherous and heavily armed post-World War II era might demand no less, repugnant as those actions otherwise might be to many Americans. The Advisory Committee did not undertake an inquiry to determine whether during either World War II or the Cold War there were ever circumstances in which considerations of national security might have justified infringements of the rights and protections that would otherwise be enjoyed by American citizens in the context of human experimentation. Our sources for answering this question were limited to materials pertinent to specific human radiation experiments and declassified defense-related memorandums and transcripts. With regard to the experiments, particular cases are reviewed in part II of this report. In those experiments that took place under circumstances most closely tied to national security considerations, such as the plutonium injections (see chapter 5), it does not appear that such considerations would have barred satisfying the basic elements of voluntary consent. Thus, for instance, although the word plutonium was classified until the end of World War II, subjects could still have been asked their permission after having been told that subjects in the experiment would be injected with a radioactive substance with which medical science had had little experience and which might be dangerous and that would not help them personally, but that the experiment was important to protecting the health of people involved in the war effort or safeguarding the national defense. With regard to defense-related documents, in none of the memorandums or transcripts of various agencies did we encounter a formal national security exception to conditions under which human subjects may be used. In none of these materials does any official, military or civilian, argue for the position that individual rights may be justifiably overridden owing to the needs of the nation in the Cold War. 
In none of them is an official position expressed that the Nuremberg Code or other conventions concerning human subjects could be overridden because of national security needs. Some government officials, military and civilian, may have personally advocated the view that obligations to protect national security were more important than obligations to protect the rights and interests of human subjects. It is, of course, possible that the priority placed on national security was so great in some circles of government that the ability of security interests to override other national interests was implicitly assumed, rather than explicitly articulated. It is a matter of historical record that some initiatives undertaken by government officials at some agencies during this period adopted the view that greater national purposes justified the exploitation of individuals. Notorious examples are the CIA's MKULTRA project and the Army's psychochemical experiments, which subjected unsuspecting people to experiments with LSD and other substances (see chapter 3).[6] However, even the internal investigation of the Department of Defense into these incidents in the 1970s concluded that these incidents were violations of government policy, not recognized legitimate exceptions to it.[7] During the era of the Manhattan Project, the United States and its allies were engaged in a declared and just war against the Axis powers. Regarding the possibility of a wartime exception, it is well documented that during World War II the Committee on Medical Research (CMR) of the Executive Office of the President funded research on various problems confronting U.S. troops in the field, including dysentery, malaria, and influenza. 
This research involved the use of many subjects whose capacity to consent to be a volunteer was questionable at best, including children, the mentally retarded, and prisoners.[8] However, when the CMR considered proposed gonorrhea experiments that would have involved deliberately exposing prisoners to infection, the resulting discussion about the ethics of research exhibited a cautious attitude. The conclusion was that only "volunteers" could be used and that they had to be carefully informed about the risks and benefits of participation. In these and other classified conversations, the CMR took the position that care is to be taken with human subjects, including conscientious objectors and military personnel.[9] It is difficult to reconcile these deliberations with the fact that many subjects of CMR-funded research were not true volunteers. Whether the CMR believed that the needs of a country at war justified the use of people who could not be true volunteers as research subjects is not known. It would, however, be an error to conclude that, even in contexts where important national security interests are at stake, such as during wartime, a conflict between obligations to protect national defense and obligations to protect human subjects ought always to be resolved in favor of national security. The question of whether any and all means are morally acceptable for the sake of national security and the national defense is a complex one. Even in the case of a representative democracy that is not an aggressor, it would be wrong to assume that there are no moral constraints in time of war. All of the major religious and secular traditions concerning the morality of warfare recognize that there are substantial limitations upon the manner in which even a just war is conducted.[10] The issue of the morality of "total warfare" for a just cause, including the use of medical science, was beyond the scope of the Advisory Committee's charter, deliberations, and expertise.
Distinguishing Between the Wrongness of Actions and Policies and the Blameworthiness of Agents

Factors That Influence or Limit Judgments About Blame

The factors we have just discussed--lack of evidence and the presence of conflicting obligations--place limits on our ability to make judgments about both the rightness and wrongness of actions and the blameworthiness of the agents responsible for them. Some factors, however, place limits only on our ability to make judgments about the blameworthiness of agents. Even in cases where actions or policies are clearly morally wrong, it may be uncertain how blameworthy the agents who conducted or promulgated them are, or in fact, whether they are blameworthy at all. Some factors make it difficult to affix blame; other factors can mitigate or lessen the blame actors deserve. Four such factors are of particular concern to the Committee:[11]
(1) Factual ignorance;

(2) Culturally induced moral ignorance;

(3) Evolution in interpretations of ethical principles;
Factual Ignorance

Factual ignorance refers to circumstances in which some information relevant to the moral assessment of a situation is not available to the agent. There are many reasons that this may be so, including that the information in question is beyond the scope of human knowledge at the time or that there was no good reason to think that a particular item of information was relevant or significant. However, just because an agent's ignorance of morally relevant information leads him or her to commit a morally wrong act, it does not follow that the person is not blameworthy for that act. The agent is blameworthy if a reasonably prudent person in that agent's position should have been aware that some information was required prior to action, and the information could have been obtained without undue effort or cost on his or her part. Some people are in positions that obligate them to make special efforts to acquire knowledge, such as those who are directly responsible for the well-being of others. Determinations of culpable and nonculpable factual ignorance often turn on whether the competent person in the field at that time had that knowledge or had the means to acquire it without undue burdens.
Culturally Induced Moral Ignorance

Sometimes cultural factors can prevent individuals from discerning what they are morally required to do and can therefore mitigate the blame we would otherwise place on individuals for failing to do what they ought to do. In some cases these factors may have been at work in the past but are no longer operative in the present, because of changes in culture over time.

An individual may, like other members of the culture, be morally ignorant. Because of features of his or her deeply enculturated beliefs, the individual may be unable to recognize, for example, that certain people (such as members of another race) deserve equal respect or even that they are people with rights. Moral ignorance can impair moral judgment and hence may result in a failure to act morally. In extreme cases, a culture may instill a moral ignorance so profound that we may speak of cultural moral blindness. In some societies the dominant culture may recognize that it is wrong to exploit people but fail to recognize certain classes of individuals as being people. Some of those committed to the ideology of slavery may have been morally blind in just this way, and their culture may have induced this blindness. Here it is crucial to distinguish between culpable and nonculpable moral ignorance. The fact that one's moral ignorance is instilled by one's culture does not by itself mean that one is not responsible for being ignorant; nor does it necessarily render one blameless for actions or omissions that result from that ignorance. What matters is not whether the erroneous belief that constitutes the moral ignorance was instilled by one's culture. What matters is the extent to which the individual can be held responsible for maintaining this belief, as opposed to correcting it.
Where opportunities for remedying culturally induced moral ignorance are available, a person may rightly be held responsible for remaining in ignorance and for the wrongful behavior that issues from his or her mistaken beliefs. People who maintain their culturally induced moral ignorance in the face of repeated opportunities for correction typically do so by indulging in unjustifiable rationalizations, such as those associated with racist attitudes. They show an excessive partiality to their own opinions and interests, a willful rejection of facts that they find inconvenient or disturbing, an inflated sense of their own self-worth relative to others, a lack of sensitivity to the predicament of others, and the like. These moral failings are widely recognized as such across a broad spectrum of cultural values and ethical traditions, both religious and secular. Only if an agent could not be reasonably expected to remedy his or her culturally induced moral ignorance would such ignorance exculpate his or her conduct. But even in cases in which the individual could not be blamed for persisting in ignorance, this would do nothing to show that the actions or omissions resulting from his or her ignorance were not wrong. Nonculpable moral ignorance only exculpates the agent; it does not make wrong acts right.
Evolution in Interpretations of Ethical Principles

There is another respect in which the dependence of our perceptions of right and wrong on our cultural context has a bearing on the Advisory Committee's evaluations. While basic ethical principles do not change, interpretations and applications of basic ethical principles as they are expressed in more specific rules of conduct do evolve over time through processes of cultural change.

Recognizing that more specific moral rules do change has implications for how we judge the past. For example, the current requirement of informed consent is the result of evolution. Acceptance of the simple idea that medical treatment requires the consent of the patient (at least in the case of competent adults) seems to have preceded by a considerable interval the more complex notion that informed consent is required.[12] Furthermore, the notion of informed consent itself has undergone refinement and development through common law rulings, through analyses and explanations of these rulings in the scholarly legal literature, through philosophical treatments of the key concepts emerging from legal analyses, and through guidelines in reports by government and professional bodies.[13] For example, as early as 1914, the duty to obtain consent to medical treatment was established in American law: "Every human being of adult years and sound mind has a right to determine what shall be done with his own body; and a surgeon who performs an operation without his patient's consent commits an assault."[14] However, it was not until 1957 that the courts decreed that consent must be informed,[15] and this 1957 ruling was only the beginning of a long debate about what it means for a consent to be informed.
Thus it is probably fair to say that the current understanding of informed consent is more sophisticated, and what is required of physicians and scientists more demanding, than both the preceding requirement of consent and earlier interpretations of what counts as informed consent. As the content of the concept has evolved, so has the scope of the corresponding obligation on the part of these professionals. For this reason it would be inappropriate to blame clinicians or researchers of the 1940s and 1950s for not adhering to the details of a standard that emerged through a complex process of cultural change that was to span decades. At the same time, however, it remains appropriate to hold them to the general requirements of the basic moral principles that underlie informed consent--not treating others as mere means, promoting the welfare of others, and respecting self-determination.
Inferring Bureaucratic Responsibilities

It is often unclear in complex organizations such as government agencies who has responsibility for implementing the organization's policies and rules. This is particularly common in new and changing organizations, where it is more likely than in stable organizations that there will be interconnecting lines of authority among employees and officials, and job descriptions that are not explicit with respect to responsibility for implementation of policies and initiatives. When policies are not properly implemented in organizations that fit this description, it often is difficult to assign blame to particular individuals. An employee or official of an agency cannot fairly be blamed for a failed or poorly executed policy unless it can be determined with confidence that the person had responsibility for implementing that policy and should have known that he or she had this responsibility.
The Importance of Distinguishing Wrongdoing from Blameworthiness

Judgments of wrongdoing and judgments of blameworthiness have very different implications. Even where a wrong was done, it does not follow that anyone should be blamed for the wrong. This is because there are factors, including the four we have just described, that can lessen or remove blame from an agent for a morally wrong act but that cannot in any way make the wrong act right. If experiments violated basic ethical principles, institutional or organizational policies, or rules of professional ethics, then they were and will always be wrong. Whether and how much anyone should be blamed for these wrongs are separate questions.[16]

The distinction between the moral status of experiments and that of the individuals who were involved with conducting, funding, or sponsoring them also has important implications for our own time. For a society to make moral progress, individuals must be able to exercise moral judgment about their actions. It is important for social actors to be critical about their activities, even those in which they have been engaged for some time. It is important for them to be able to step back and analyze their actions as right or wrong. If we did not distinguish between actions and agents, then people might feel that, once they have perceived their moral error, it is "too late" for them to change their ways, to object to the ongoing activity, and to try to rally others in support of reform. For any generation to initiate morally indicated reforms, it must be able to take this critical stance. As we see in part III of this report, even now there are aspects of our society's use of human subjects that should be critically examined. The actions we ourselves have performed do not condemn us as moral agents unless we refuse to open ourselves to the possibility that we have in some ways been in error.
As we have said, even if we are exculpated by our own culturally induced moral ignorance, that does not make our wrong acts right. Even if we must accept a measure of blame for our actions, we are free to achieve a critical assessment and to initiate and participate in needed change.
The Significance of Judgments About Blameworthiness

The Committee believes that its first task is to evaluate the rightness or wrongness of the actions, practices, and policies involved in the human radiation experiments that occurred from 1944 to 1974. However, it is also important to consider whether judgments ascribing blame to individuals or groups or organizations can responsibly be made and whether they ought to be made.

There are three main reasons for judging culpability as well as wrongness. First, a crucial part of the Committee's task is to make recommendations that will reduce the risk of errors and abuses in human experimentation in the future, on the basis of its diagnoses of what went wrong in the past. A complete and accurate diagnosis requires not only stating what wrongs were done, but also explaining who was responsible for the wrongs occurring. To do this is likely to yield the judgment that some individuals were morally blameworthy. Second, unless judgments of culpability are made about particular individuals, one important means of deterring future wrongs will be precluded. People contemplating unethical behavior will presumably be more likely to refrain from it, other things being equal, if they believe that they, as individuals, may be held accountable for wrongdoing than if they can assure themselves that at most their government or their particular government agency or their profession may be subject to blame. Third, ethical evaluation generally involves both evaluation of the rightness or wrongness of actions and the praiseworthiness or blameworthiness of agents. In the absence of any explicit exemption of the latter sorts of judgment in our mandate, the Committee believes it would be arbitrary to exclude them.
Having made a case for judgments of culpability as well as wrongness, the Committee believes it is very important to distinguish carefully between judging that an individual was culpable for a particular action and judging that he or she is a person of bad moral character. Justifiable judgments of character must be based on accurate information about long-standing and stable patterns of action in a number of areas of a person's life, under a variety of different situations. Such patterns cannot usually be inferred from information about a few isolated actions a person performs in one particular department of his or her life, unless the actions are so extreme as to be on the order of heinous crimes.