ACHRE Report

Part I

Chapter 3


The Development of Human Subject Research Policy at DHEW

As the largest funding source in the federal government for human subject research, DHEW led the way in developing regulations aimed at protecting the rights and welfare of subjects. The evolution of the regulations, which would eventually be adopted on a government-wide basis, was influenced by revelations of unethical research, congressional reaction to the revelations, and concern over public perception of such research. That DHEW eventually adopted regulations at all reflected the political realities of the time and the lack of congressional support for a standing regulatory body to oversee human subject research, as had been recommended by an influential federally appointed panel, the Tuskegee Syphilis Study Ad Hoc Panel. In a trade-off that would have a major influence on the future of human subject research oversight, the proposed bill creating the standing regulatory body was withdrawn in exchange for the National Research Act, establishing the National Commission, and an understanding that DHEW would promulgate the aforementioned regulations. This historical backdrop is outlined in the remainder of this chapter.

The Thalidomide Tragedy and the Congressional Requirement for Patient Consent

In 1959 a Senate subcommittee chaired by Senator Estes Kefauver of Tennessee began hearings into the conduct of pharmaceutical companies. Testimony revealed that it was common practice for drug companies to provide samples of experimental drugs, whose safety and efficacy had not been established, to physicians, who were then paid to collect data on their patients taking these drugs. Physicians throughout the country prescribed these drugs to patients without their knowledge or consent as part of this loosely controlled research. These practices and others prompted calls by Kefauver and other senators for an amendment to the Food, Drug, and Cosmetic Act of 1938 to address the injuriousness and ineffectiveness of certain drugs. In 1961 the dangers of new drug uses were vividly exemplified by the thalidomide disaster in Europe, Canada, and to a lesser degree, the United States.[5] Starting in late 1957, the sedative thalidomide was given to countless pregnant women and caused thousands of birth defects in newborn infants (most commonly, missing or deformed limbs). The thalidomide disaster was widely covered by the television networks, and the visual impact of these babies stunned viewers and caused Americans to question the protections afforded those receiving investigational agents.

It is in large measure because of the thalidomide episode that the 1962 Kefauver-Harris amendments to the Food, Drug, and Cosmetic Act were passed,[6] requiring that informed consent be obtained in the testing of investigational drugs.[7] While such testing occurred mainly with patients, Congress carefully avoided interfering in the doctor-patient relationship and in the process severely reduced the effectiveness of the requirement. Consent was not required when it was "not feasible" or was deemed not to be in the best interests of the patient--both judgments made "according to the best judgment of the doctors involved."[8] Although limited in scope, the Kefauver-Harris amendments were influential in advancing the consideration of protections for research subjects, first within DHEW and later throughout the rest of the government.

NIH and PHS Develop a Uniform Policy to Protect Human Subjects

In late 1963, concerns were raised within NIH by Director James Shannon after disturbing revelations about two research projects funded in part by the Public Health Service and NIH. One was the unsuccessful transplantation of a chimpanzee kidney into a human being at Tulane University, a procedure that promised neither benefit to the recipient nor new scientific information. The transplant was reportedly done with the consent of the patient, but without consultation or review by anyone other than the medical team involved.[9]

The second was research undertaken in mid-1963 at the Brooklyn Jewish Chronic Disease Hospital. There, investigators injected live cancer cells into indigent elderly patients without their consent. (The chief investigator, Dr. Chester M. Southam, was a physician at the Sloan-Kettering Cancer Research Institute; he received permission to proceed with the work from the hospital's medical director, Dr. Emmanuel E. Mandel.) The research went forward without review by the hospital's research committee and over the objections of three physicians consulted, who argued that the proposed subjects were incapable of giving adequate consent to participate.[10] The disclosure of the experiment made both PHS officials like Shannon and the Board of Regents of the University of the State of New York, which had jurisdiction over the licensure of physicians, aware of the shortcomings of the procedures in place to protect human subjects. They were further concerned about the public's reaction to disclosure of the research and the impact it would have on research generally and on the institutions in particular. After a review, the Board of Regents censured the researchers, suspending the licenses of Drs. Mandel and Southam, but it subsequently stayed the suspension and placed the physicians on probation for one year.[11] There were no immediate repercussions for the hospital, Sloan-Kettering, the university, or PHS, but the case nonetheless profoundly affected the subsequent development of federal guidelines to protect research subjects.

To add to the ferment, NIH officials had closely followed the work of the Law-Medicine Research Institute at Boston University, which issued survey findings in 1962 showing that few institutions had procedural guidelines covering clinical research.[12] And in the year after both the above-mentioned cases came to light, the World Medical Association issued its Declaration of Helsinki, which set standards for clinical research and required that subjects give informed consent prior to enrolling in an experiment.[13] Thus national and world opinion on matters related to the ethics of human subject research created a climate ripe for changes in policies and approaches toward research ethics.

Concern over disturbing cases and the growing attention paid to research ethics prompted NIH director James Shannon to create a committee in late 1963 under the direction of the NIH associate chief for program development, Robert B. Livingston, whose office supported centers at which NIH-funded research took place. The internal committee was charged with studying problems of inadequate consent and the standards of self-scrutiny involving research protocols and procedures. The committee was also to recommend a suitable set of controls for the protection of human subjects in NIH-sponsored research. The Livingston Committee recognized that ethically questionable research--exemplified by the research at the Jewish Chronic Disease Hospital--could wreak havoc on public perception, increase the likelihood of liability, and inhibit research.[14] These problems made it worthwhile to reconsider central oversight--or lack thereof--for research contracted out. However, the committee expressed concern over NIH taking too authoritarian a posture toward research oversight and so argued that it would be difficult for the agency to assume responsibility for ethics and research practices. When it issued its report in late 1964, the committee did not recommend any changes in the current NIH policies and, moreover, cautioned that "whatever NIH might do by way of designating a code or stipulating standards for acceptable clinical research would be likely to inhibit, delay, or distort the carrying out of clinical research. . . ."[15] In deference to physician autonomy and traditional regard for the sanctity of the doctor-patient relationship, the report concluded that NIH was "not in a position to shape the educational foundations of medical ethics. . . ."[16]

Director Shannon did not think the conclusions of the Livingston Committee went far enough, feeling as he did that NIH should take a position of increased responsibility for research ethics.[17] Especially in light of the Jewish Chronic Disease Hospital case and its implications for the NIH, both internally and in terms of public perception, he felt that a stronger reaction was needed. Thus, despite the committee's limited conclusions, Shannon and Surgeon General Luther Terry together decided in 1965 to propose to the National Advisory Health Council (NAHC), an advisory committee to the surgeon general of the Public Health Service,[18] that in light of recent problems, the NIH should assume responsibility for formal controls on individual investigators.[19] At the NAHC meeting, Shannon argued for impartial prior peer review of the risks research posed to subjects and questioned the adequacy of the protections of the rights of subjects.[20]

The council's members mostly agreed with Shannon's concerns and three months later issued a "resolution concerning research on humans" following Shannon's broad recommendations and endorsing the importance of obtaining informed consent from subjects:

Be it resolved that the National Advisory Health Council believes that Public Health Service support of clinical research and investigation involving human beings should be provided only if the judgment of the investigator is subject to prior review by his institutional associates to assure an independent determination of the protection of the rights and welfare of the individual or individuals involved, of the appropriateness of the methods used to secure informed consent, and of the risks and potential medical benefits of the investigation.[21]

What this statement did not do, however, was explain what would count as informed consent. The NAHC recommendations were accepted by the new surgeon general, William H. Stewart, and in February 1966 he issued a policy statement requiring that PHS grantee institutions subject all proposed research involving human subjects to prior committee review addressing three topics:

This review should assure an independent determination (1) of the rights and welfare of the individual or individuals involved, (2) of the appropriateness of the methods used to secure informed consent, and (3) of the risks and potential medical benefits of the investigation.[22]

The 1966 PHS policy required that institutions give the funding agency a written "assurance" of compliance, but like the NAHC recommendations, the policy spoke strictly to the procedural aspects of informed consent and not to its meaning and criteria. Substantive informed consent criteria were established for research at the NIH Clinical Center shortly after the PHS policy was issued, but this new policy applied only to intramural research, that is, to research undertaken at the Clinical Center. The Clinical Center policy was important as the first federal research policy with a specific definition of what constituted informed consent requirements in the research context. The inclusion of specific consent requirements in policies applying to extramural research would not occur, however, until the mid-1970s.

The 1966 PHS policy is significant both for its recognition that patient-subjects, like healthy subjects, should be included in the consent provisions for federally sponsored human experimentation and for its attempt to strike a balance between federal regulation and local control, which continues to this day. Such a balancing continued the work begun by the AEC, in its provision for local human use committees as a condition for the use of AEC-supplied isotopes, and the DOD, in the provision for high-level review of proposed experimentation. Although a landmark in the government regulation of biomedical research, the 1966 policy was to be revised and changed throughout the decade as biomedical research drew greater attention and informed consent grew in importance.

Although the PHS policy was revised periodically from the outset,[23] site visits by PHS employees to randomly selected institutions revealed a wide range of compliance.[24] These site visits found widespread confusion about how to assess risks and benefits, refusal by some researchers to cooperate with the policy, and in many cases, indifference by those charged with administering research and its rules at local institutions. Complaints of overworked review committees and requests for clarification and guidance came from research institutions all over the country.[25]

In response to continued questions about the scope and meaning of the policy, DHEW in 1971 produced The Institutional Guide to DHEW Policy on Protection of Human Subjects.[26] Better known as the "Yellow Book" because of its cover's color, this substantial guide contained both the requirements and commentary on how the requirements were to be understood and implemented. The guide provided that informed consent was to be obtained from anyone who "may be at risk as a consequence of participation" in research--including both patients and healthy volunteers.[27]

As the 1960s progressed, increased discussion of research practices appeared in both professional literature and the popular press. One person who advanced the debate in both arenas was Henry Beecher of Harvard Medical School.

Henry Beecher: The Medical Insider Speaks Out

Henry Beecher, as noted in chapter 2, was an active participant in professional discussions of ethics in research during the late 1950s and early 1960s. In March 1965, Beecher focused attention on the issues at a conference for science journalists sponsored by the Upjohn pharmaceutical company. There Beecher presented a paper discussing twenty-two examples of potentially serious ethical violations in experiments that he had found in recent issues of medical journals.[28] (Among them was the Brooklyn Jewish Chronic Disease Hospital study.) He explained this research had not taken place "in a remote corner, but [in] . . . leading medical schools, university hospitals, top governmental military departments, governmental institutes and industry."[29] He also acknowledged that his own conscience was not entirely clear: "Lest I seem to stand aside from these matters I am obliged to say that in years gone by work in my laboratory could have been criticized."[30] Beecher also explained the consciousness-raising purpose of these revelations with stark clarity: "It is hoped that blunt presentation of these examples will attract the attention of the uninformed or the thoughtless and careless, the great majority of offenders."[31]

In making this presentation to a group of journalists, Beecher was clearly breaking with a professional expectation that such matters should be addressed within the biomedical community. After some resistance from the medical journals--the March 1965 paper had been rejected by at least the Journal of the American Medical Association (JAMA)--Beecher published a revised version in the New England Journal of Medicine in June 1966.[32] That article, like his presentation at the conference, indicted the entire biomedical research community and the journals that published biomedical research results.

Beecher's efforts to focus professional, press, and therefore public awareness on the conduct of research involving human subjects met with some success. A July 1965 article in the New York Times Magazine was headlined "Doctors Must Experiment on Humans--But What Are the Patient's Rights?"[33] In February 1966, as the PHS issued its first uniform policy for biomedical research, more headlines, this time in the Saturday Review, asked, "Do We Need New Rules for Experimentation on People?"[34] In July 1966, following Beecher's article in the New England Journal of Medicine and an editorial in JAMA,[35] another article declared "Experiments on People--The Growing Debate."[36] Thus, by the mid- to late 1960s, professional, governmental, and public attention was all being drawn to issues of research on human subjects. Revelations of purportedly unethical treatment of research subjects did not end there, but changes in policy, driven largely by attention from so many corners, were beginning to move toward a more comprehensive approach to research oversight.

Public Attention Is Galvanized: Willowbrook and Tuskegee

From 1956 to 1972 Dr. Saul Krugman of New York University led a study team at the Willowbrook State School for the Retarded, on Staten Island, New York. The study was not secret or hidden. (It was one of the twenty-two projects Beecher discussed as ethically troublesome in his 1966 article.) The Willowbrook study was discovered by the media beginning in the late 1960s[37] and was discussed further, in separate publications, by Beecher,[38] theologian Paul Ramsey,[39] and physician Stephen Goldby.[40] Noting the high incidence of hepatitis among the residents of the school, nearly all of whom were profoundly mentally impaired children and adolescents, Krugman and his colleagues injected some of them with a mild form of hepatitis serum. The researchers justified their work on the grounds that the subjects probably would have become infected anyway, and they hoped to find a prophylaxis for the virus by studying it from the earliest stages of infection. Before beginning the work, Krugman discussed it with many physician colleagues and sought approval from the Armed Forces Epidemiological Board, which approved and funded the research,[41] and from the executive faculty of the New York University School of Medicine, which also approved it. A review committee for human experimentation did not exist in 1955,[42] but later, when such a committee was formed, it too approved the research.

According to Krugman, the parents of each subject signed a consent form after receiving a detailed explanation of the research, without any pressure to enroll their child.[43] Some critics argued that the content of the consent form was itself deceiving, since it seemed to say that children were to receive a vaccine against the virus. Moreover, charges of coercion arose. It is alleged that parents who enrolled their children in the study were initially offered more rapid admission to the school through the hepatitis unit and later found, due to overcrowding, that the only route for admission of new patients was through the hepatitis unit.[44] Commentators further argued that the fault in the doctors' study lay in their deliberate attempt to infect the children, with or without parental consent, as opposed to studying the course of disease in children who naturally became sick.

Soon after Willowbrook, another research project, the Tuskegee syphilis study, provoked widespread public outcry when it was revealed that the study had exposed people to unnecessary and serious harm with no prospect of direct benefit to them. Beginning in 1932, PHS physicians sought to trace the natural history of syphilis by observing some 400 African-American men affected by the disease and another group of approximately 200 African-American men without syphilis serving as controls. All the subjects lived in or around Tuskegee, Alabama. The study was originally designed to last only six to eight months, but some investigators successfully argued that the potential scientific value of a longer-term study was so great that the research ought to go on indefinitely. The subjects were enticed into the study with offers of free medical examinations. Many of those who came from around the area to be tested by "government doctors" had never had a blood test before and had no idea what one was.[45] Once selected to be subjects in the study, the men were not informed of the nature of their disease or of the fact that the research held no therapeutic benefit for them. Subjects were asked to appear for "special free treatments," which included purely diagnostic procedures such as lumbar punctures.[46]

By the mid-1940s it was becoming clear that the death rate for the infected men in the study was twice as high as for those in the control group. This was also the period in which penicillin became widely available and soon began to be used to treat syphilis, at least in its primary stage. The study was reviewed by PHS officials and medical societies and reported in a number of journals from the early 1930s to 1970. In the 1960s a growing number of criticisms began to appear, although the study was not stopped until 1973.

Thus, men with a confirmed disease were not told of their diagnosis and were deceived into participating in the study under the guise of its being therapeutic for unspecified maladies. In addition to exposing the subjects to the additional harms of participation in the study, the false belief that treatment was being administered prevented subjects from otherwise seeking medical care for their disease. As at Willowbrook, a justification given after the fact for the research was that the disease had appeared in a way that was natural and inevitable and that the study would be of immense benefit to future patients.[47] Over this forty-year history, at least 28 participants died and approximately 100 more suffered blindness and insanity from untreated syphilis before the study was stopped.

In 1972, an account of the study was published on the front page of the New York Times.[48] In response, DHEW appointed the Tuskegee Syphilis Study Ad Hoc Panel to review the Tuskegee study as well as the department's policies and procedures for the protection of human subjects. The work of the ad hoc panel--which consisted of physicians, a university president, a theologian, an attorney, and a labor representative--contributed in large measure to the passage of the first comprehensive regulations for federally sponsored human subjects research. One member of the ad hoc panel who is also a member of the Advisory Committee, Jay Katz, expressed his dismay over the unwillingness or incapacity of society to mobilize the necessary resources for "treatment" at the beginning of the study and the deliberate efforts of the investigators to "obstruct the opportunity for treatment."[49]

Despite the fact that the PHS Policy for the Protection of Human Subjects had been in place for six years by the time the Tuskegee study was revealed, it was exposed by a journalist rather than by a review committee. Although an institutional committee had allegedly reviewed the Tuskegee study, the study was not discontinued until after the recommendation of the ad hoc panel.[50] The human rights abuses of the Tuskegee study demonstrated the need for both prior and ongoing review, in that the study was undertaken before prior review requirements were in place, and the prevailing review policies during the period of the study were so flawed that the study was allowed to continue.

As a result of its deliberations, the ad hoc panel found that neither DHEW nor any other agency of the government had adequate policies for the oversight of human subjects research. The panel recommended that the Tuskegee study be stopped immediately and that the remaining subjects be given the medical care made necessary by their participation.[51] The panel also recommended that Congress establish "a permanent body with the authority to regulate at least all federally supported research involving human subjects."[52] In summary, the panel concluded that despite the lessons of Nuremberg, the Jewish Chronic Disease Hospital case, and the Declaration of Helsinki, human subject research oversight and mechanisms to ensure informed consent were still inadequate and that new approaches were needed to adequately protect the rights and welfare of human subjects.

Congressional Response to Abuses of Human Subjects: The National Research Act

Public attention to abuses such as those inflicted on the subjects of the Tuskegee study increased during the late 1960s and early 1970s. Following the initial revelations about the Tuskegee syphilis study, several bills were introduced in Congress to regulate the conduct of human experimentation. In February 1973 Senator Edward Kennedy held hearings on these bills;[53] the Tuskegee study; experimentation with prisoners, children, and poor women; and a variety of other issues related to biomedical research and the need for a national body to consider the ethics of research and advancing medical technology.[54] After the hearings, Senator Kennedy introduced a bill to create a National Human Experimentation Board, as recommended by the Tuskegee Syphilis Study Ad Hoc Panel. When it became clear that this bill would not pass, Senator Kennedy introduced the bill that would become the National Research Act, which endorsed the regulations about to be promulgated by DHEW and established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.[55] The trade-off was clear: no national regulatory body in return for regulations applying to the research funded or performed by the government agency responsible for the greatest proportion of human subject research. This meant that the goal of oversight of all federally funded research would not be achieved and that whatever oversight did exist was left to the funding agencies rather than to an independent body.

On May 30, 1974, DHEW published regulations for the use of human subjects in the Federal Register. [56] These regulations required that each grantee institution form a committee (what became known as an institutional review board, or IRB) to approve all research proposals before they were passed to DHEW for funding consideration. These committees were charged with reviewing the safety of the proposals brought to them as well as the adequacy of the informed consent obtained from each subject prior to participation in the research. Additionally, the regulations defined not only the procedure for obtaining informed consent but substantive criteria for it as well. Shortly after the announcement of the DHEW regulations, in July 1974, the National Research Act was passed, and with it came the establishment of the National Commission.[57]

The National Commission--charged with advising the secretary of DHEW (though the National Research Act did not require the secretary to follow the commission's recommendations)--existed over the next four years and published seventeen reports and appendix volumes. During its tenure, the commission did pioneering work as it addressed issues of autonomy, informed consent, and third-party permission, particularly in relation to research involving vulnerable subjects such as prisoners, children, and people with cognitive disabilities. It was also charged with examining the IRB system and procedures for informed consent, as background for proposing guidelines that would ensure that basic ethical principles were instituted in the research oversight system and in research involving vulnerable populations.

In the course of its deliberations, the commission identified three general moral principles--respect for persons, beneficence, and justice--as the appropriate framework for guiding the ethics of research involving human subjects. These three are known as the Belmont principles because they appeared in The Belmont Report, one of the commission's major publications.[58]

The National Commission was required to examine the "nature and definition" of informed consent as well as the "adequacy" of current practices. In its reports, the commission decisively argued that the basic justification for obligations to obtain informed consent is the moral principle of respect for persons. This emphasis on respect for persons meant a great premium was put on autonomous decision making by the research subject, an emphasis that continues to the current day.

While it may not have been the intent of those who sponsored it, the National Research Act--because it was limited to DHEW-funded research--did not ensure that all federally sponsored research would be subject to requirements for informed consent and prior review. Nonetheless, by this time, as described below, published policies within the DOD, the AEC, the VA, and NASA did meet these requirements.

The passage of the National Research Act and the promulgation of DHEW's regulations were important milestones in the development of federal standards for the protection of human subjects of research. They represented the first national recognition of the need to protect human subjects. Moreover, they attempted to provide for that protection through the IRB requirement and establishment of the National Commission. The Advisory Committee's charter requires that it examine the standards for research between 1944 and 1974. These two landmark events in 1974 ushered in a new era in which the conduct and oversight of biomedical experimentation with humans remained a topic of national scrutiny and debate. Eventually, the approaches required by the 1974 DHEW regulations would be applied to nearly all federally sponsored human research, as described in chapter 14.
