A History of US IRBs
Book Summary of "Regulating Human Research"
Disclaimer: I've now read a lot more about international IRB comparisons and no longer believe UK IRBs are more efficient than US IRBs
The condensed version of this book by Professor Sarah Babb is that regulatory pressure and America's decentralized government combined to form our current IRB system, which blends top-down pronouncements and audits with for-profit private IRBs competing on convenience and efficiency. The resulting mess is more constraining and wasteful than the equivalent system in the UK, where regulation is centralized and top-down.
The whole system started informally, with peer review boards at institutions that would glance at research proposals and approve them. When the federal government got into the business of funding biomedical research on a large scale after WWII, the NIH began requiring scientists to submit their proposals for peer review at their institutions before receiving money. In the 1960s and '70s a series of research scandals, among them the Tuskegee syphilis study, caught public and Senatorial attention, and Senators Kennedy and Humphrey pushed for a centralized bureaucracy that could regulate human subjects research.
Institutional players like the NIH, plus a lack of political will for a new, highly centralized federal bureaucracy, managed to stop wholesale reform à la a National Human Experimentation Board. Instead we got the passage of the National Research Act in 1974 and the publication of the Belmont Report in 1979, and a slightly more structured system for regulating research emerged. The idea was noble: protecting human subjects from ethical abuse by researchers. The new rules mandated that federally funded research be reviewed by Institutional Review Boards (IRBs).
Per this post from Dr. Albert Jonsen, that Commission was the birth of bioethics as a distinct field [emphasis mine]:
Senator Ted Kennedy was one of the founding figures of bioethics. Those of us who call ourselves bioethicists, or those who work in this field, should be aware of his contributions to its origins…1973, he presided over hearings of the Senate Labor and Public Welfare Committee that examined a range of problems that today constitute the agenda of bioethics….A bill written by Senator Kennedy, with the collaboration of Senator Javits…National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research... President Nixon signed the bill as Public Law 93-348 on July 12, 1974. The Commission became an engine that drove bioethics research and debate.
Mostly Status Quo…
For the next twenty years or so, the state of affairs for university and drug researchers did not change dramatically, in spite of the federal government promulgating rules. Since local IRBs were staffed by faculty volunteering their time, without the expertise or time to learn confusing federal regulations in depth, these IRBs tended toward benign neglect. In the mid-to-late '90s, several research scandals and subsequent government action changed the game:
Two years later, in 1996, a research participant named Nicole Wan died after being given a lethal dose of lidocaine in a University of Rochester clinical trial. In 1999, Jesse Gelsinger died after being given an experimental treatment without having received accurate information about its substantial risks. (Location 465)

The main message of the crackdown—conveyed in hundreds of enforcement letters scrutinized anxiously by thousands of administrators and IRB members—was that approximate compliance was not enough. (Location 484)
Though the federal government was sending a strong signal that approximate compliance would not be enough, the rules it had issued were still confusing, and in the late '90s and early 2000s the relevant government offices were too understaffed to issue clarifications. Risk-averse institutions like universities responded to this regulatory uncertainty with "hypercompliance", and research slowed to a crawl. Even social scientists doing research that no reasonable person could construe as endangering a human subject became subject to IRB rules as universities rushed to demonstrate compliance. As Babb puts it:
The production of auditable hypercompliance was labor-intensive and created multiple levels of obstruction to the research process. (Location 793)
The market response to regulatory uncertainty and institutional demand for "auditable hypercompliance" was the private IRB industry. IRBs had previously consisted of faculty volunteers, research administrators, and a non-scientist member; now a class of IRB professionals emerged, represented by Public Responsibility in Medicine and Research (PRIM&R). Another organization, AAHRPP, emerged as the most stringent voluntary accreditation program. At conferences, this newly emerging class of research bureaucrats could share the most efficient ways to stay compliant with federal regulation. These private IRBs would be contracted by universities and pharmaceutical firms. Inflexible grant rules initially kept private IRBs from complete market penetration, but eventually even federal studies would use private IRBs.
The era of overcompliance gave way to "efficient compliance". Three factors were key: IRB customers demanded less burdensome and quicker review; the regulatory environment improved under President Bush's OHRP head, Bernard Schwetz; and AAHRPP was founded. The AAHRPP was sponsored by the AAMC, PRIM&R, and other "blue-chip" science and regulatory organizations, and oozed respectability. When the AAHRPP began pushing for efficiency, not just strict standards, the rest of the industry followed suit. A variety of IRB technologies and methods helped this along: checklists that could de-skill some IRB work, electronic tools to complete some IRB requirements without staff, and more. This sped up IRBs substantially.
An unintended consequence of this process was that universities began to use the newly efficient (and now mostly free of voluntary faculty) IRB system to exert more control over the research process:
[A] perhaps more troubling form of goal displacement was the use of IRBs to engage in institutional protection—to shield their organizations from either legal liability or reputational damage. I was surprised to find that my informants openly acknowledged this practice. (Location 1042) "The whole IRB operation operates according to the [regional newspaper] rule; you don't want it to appear in the headlines." (Location 1050)

A specific example of this is the "site permission" policy that many IRBs have, which requires researchers to get permission from an organization whose members will be studied. Site permission was not a regulatory requirement, and had far more to do with institutional protection than protecting human participants. (Location 1070)
This policy was particularly infuriating to social scientists, especially those studying more dangerous subjects-- for instance, interviewing people outside a police station about their interactions with police might require site permission from the station. To add to their annoyance, IRB review that had started under the pretense of "peer review" became review by regulatory compliance experts, and, depending on the university, might have no appeal mechanism.
As the author emphasizes, this decentralized, hyper-cautious, and professionalized system emerged as a response to federal policy failure:
The system was costly because its rules were fragmented, confusing, and imperfectly clarified; because it delegated decision making and administrative work to local institutions; and because it incentivized meticulous, labor-intensive proceduralism. The rationalization of IRBs was a logical adaptation to an illogical system. (Location 1096)
Brits Do It Better
In contrast, the UK, which had previously had over 200 local IRBs under a model similar to the US's, centralized in 2000:
the British government was able to initiate major reforms. The Department of Health began to standardize the system, and inaugurated a structure of region-based committees to serve as lead boards in multisite clinical trials. In 2000 it established a national Central Office for Research Ethics Committees (Location 1943) … [applications go through a] National Health Service–sponsored central booking service, to be assigned to one of fewer than 100 government-administered review committees, composed of a mix of expert and lay members. (Location 1948) The health department supplies these committees with a regularly updated, comprehensive set of standard operating procedures, more than 300 pages in length, providing guidance—in everyday, non-legalistic language—on how committees should operate. (Location 1950)
The resulting body of policy is much clearer, can be updated more rapidly by senior officials, and perhaps more importantly, can be responsive to political pressure in a way the US system cannot:
[A] problematic feature is diffuse accountability. Because of their complex, multilayered structure, systems that delegate government functions often lack transparent lines of authority that show "who's in charge"; and the objects of policy often misrecognize "who they're dealing with." Local policies or best practices may be taken for federal regulations; conversely, the role of government may be veiled entirely. (Location 1985)
In other words, because the US model of research regulation is a mix of federal and state law, involves a dozen federal agencies, and has generated an ostensibly private IRB industry, it is less legible (to use James C. Scott's term) to basically everybody involved than the UK's equivalent system. More precisely: though the current IRB system generates an enormous amount of precise information about how and when a trial is being conducted, how subjects will be protected, and so on, and is highly legible in that regard, it is illegible to outsiders who wish to understand and reform the system as a whole. Asking "why does my local IRB say I can't do X?" yields very confusing answers.
In a striking real-world example of the UK system's superior adaptability to real-world events, it was the first (and so far, the only!) country to approve human challenge trials (HCTs) with COVID-19. In the US, Representatives Donna Shalala (former head of HHS, so she should know better than anybody how to compel them) and Bill Foster, among others, sent an open letter to HHS and the FDA urging them to consider using challenge trials to speed up coronavirus vaccine development, to no visible effect. My guess is that the diffuse IRB system of the US, in contrast to the UK's centralized system, was a big part of why the US never used HCTs.
IRBs vs. Civil Rights EEO
The rest of the book compares the IRB compliance bureaucracy with Civil Rights legislation, and is quite interesting. The key difference, as the author sees it, is that while federal research-subject regulation demands "process compliance" and performs audits, Civil Rights enforcement hinges on whether the judge presiding over your suit views you as making credible commitments to equal opportunity:
[Informants] more often described them as embracing symbolic best practices, designed not to improve speed and reduce costs, but to signal to powerful outsiders (especially judges in lawsuits) that organizations are "doing [their] best to figure out how to comply." (Location 1652)

EEO offices were not primarily organized around auditability, and therefore were not pressured to innovate in this way. These differences suggest two distinct varieties of compliance: symbolic compliance, assessed in court cases that reward recognizable gestures of good faith; and auditable compliance… (Location 1661)
Key points I learned from this book:
Decentralized governance, in the late-twentieth-century US model, can have all the downsides of government that libertarians complain about-- regulation, fussy bureaucrats, audits, glacial pace, etc.-- without the upside of a centralized system that can actually respond to challenges in a coordinated way.
The idea that risk aversion has increased, and that this is part of the reason why progress, broadly construed, is slowing, seems true; but the causal model of institutions --> culture of risk aversion seems underrated, and is much of the story in this case. Of course, a certain level of risk aversion has to exist to create these institutions in the first place, but once they're extant, they can create an entire culture of compliance with regulation that has its own momentum.
People interested in "progress studies", effective altruists, and techno-optimists should try to get into the details of how exactly regulatory systems work. In this case, given the system's decentralized nature, there's no obvious policy solution that cuts the Gordian knot of rules, but having a few more effective-altruist or utilitarian-curious government bureaucrats would probably be great. In US politics, Personnel is Policy, and if there were an already existing EA-friendly base to draw from, policy would eventually move in that direction.