Many researchers would admit to being cynical about the IRB-industrial complex, but Carl Schneider, a professor of law and of internal medicine at the University of Michigan, systematically dismantles every argument in favor of the current IRB system in his book The Censor's Hand. I'll summarize his arguments in this post and excerpt some of his case studies in a later post.
Sidenote: I went over the history of IRB's in my last post, but here¹ is a short recap.
Research is low-risk
He starts by arguing that IRB's do more harm than good because (1) the risk they're trying to regulate is already low, and (2) the cost of IRB's is high. Per him, "regulationists" (those who mostly agree with the current IRB system) have used "justification by scandal" as their main argument, when quantitative studies of research risks show little to worry about:
In 1977—well before IRBs proliferated—2,000 researchers, 800 IRB members, and 1,000 subjects were asked about harmful effects from research not predictable “as integral to the procedure.” The vast majority of projects had no such effects. Three percent of projects experienced harms (to an average of two subjects), generally trivial or temporary. The three projects with fatalities involved cancer research, and in two at least some subjects were already terminally ill.9 Similarly, Levine reported that three large institutions studied “reported a very low incidence of injury to research subjects and an extremely low rate of injuries that could be attributed directly to the performance of research.” In both 1981 and 1988, Levine concluded: “On the basis of all the empirical evidence” he knew, “the role of research subject is not particularly hazardous. … [A]ttempts to portray it as such and arguments for policies designed to restrict research generally because it is hazardous are without warrant.”10 Recent literature reviews concur. Burris and Moss report that harm from research misconduct is “apparently very rare.”11 Saver says “the Advisory Committee on Human Radiation Experiments’ comprehensive review of federally funded research … determined that most studies posed only minimal risks of harm.”
When OHRP audits IRB's, it rarely finds harmed subjects or violations of regulations. Instead, "most commonly, they are about following and documenting 'required procedures'". And as a third piece of evidence: if participation in research were so risky, we'd expect a negative "trial effect", with research subjects in trials having worse survival than comparable non-participants. And yet multiple reviews find either the opposite (participation is associated with higher survival) or no effect.
Even before any empirical investigation of research risks, common sense would lead us to expect most human-subject research to be physically harmless: almost all social-science research, observational biomedical research like chart review and biobanking, and diagnostic testing are harmless. And yet all of it is subject to some form of IRB oversight— with the caveat that recent changes to federal guidance have loosened IRB authority somewhat.
A key category of biomedical research aims to discover which of two standard medical treatments is superior. Without trial evidence, doctors and patients routinely choose one or the other with no supervision— but when researchers attempt to study the same treatments in a randomized fashion, they come under IRB jurisdiction. He documents multiple cases of IRB's unduly interfering with such research on the presumption that the inferior treatment could cause harm. Of course, until a study is completed, we have no way of knowing which treatment is inferior! Ending Medical Reversal is a book-length treatment of this tricky question; incidentally, its authors argue for a "randomization by default" approach to clinical medicine, which would surely require some IRB reform.
The point of this argument is that because research risk is already so low, there is little room for IRB's to make research safer. That is, their potential benefit has a low ceiling.
Other “Harms”
In response to much of the argument above, a regulationist might grudgingly agree and then invoke nonphysical harms: social, psychological, or dignitary. Such harms are possible, but there is scant evidence that they actually occur or that they exceed the harms people handle on a daily basis. The caution of IRB's on these questions reaches the point of satire:
But nightmares haunt IRBs. An IRB warned a historian not to ask civil rights workers about laws broken during civil disobedience [emphasis mine]. Did the IRB know that the point of civil disobedience is to break the law openly, be arrested, and even jailed? Did it know the “Letter from Birmingham Jail”? Did it know that the statute of limitations on these “crimes” ran out long since? Did it know that civil rights workers are heroes because they defied the law? Did it know that suppressing evidence of that defiance dishonors them?
Because IRB's often have members who are not subject-matter experts, their evidence-free speculation about potential harms can directly contradict the available evidence. For instance, IRB's tend to tread very cautiously around research that involves recalling traumatic events, for fear of causing emotional harm. However:
IRBs widely fear emotional harm from recalling traumatic events, and “many IRBs” require interviewers to assure interviewees counseling. But as a literature review found, people can and do say no if they fear distress. Even parents who recently lost a child “clearly and unambiguously” decided whether to participate. A third of them declined, and 8% agreed but withdrew later. Furthermore, few interviewers like upsetting people, and it is callous and stupid to hector the distraught, who won’t answer well, or at all. No answers, no research.
It gets worse: there is evidence that people tend to report a benefit from openly discussing painful topics, so IRB's may actually be harming would-be subjects by keeping them out of this kind of research.
Even if people did experience distress from discussing traumatic events, people discuss important and troubling topics with friends and family all the time, so research would be in line with their daily choices and distress tolerance. In a similar vein, there is evidence against the claims that deception harms subjects or that payments impair decision-making, both recurring pain points for IRB's. And to the degree that research does cause harm, it pales in comparison to the harm done in other fields, like journalism or medicine:
researchers are much less likely to harm subjects than, for example, doctors are to harm patients. As Fost writes, deaths related to research “since 1973 are extremely uncommon compared to standard care.” Fost cites the IOM’s estimate of “60,000 to 90,000 deaths a year in doctors’ offices. These are preventable deaths.” The estimate “is surely exaggerated,” but “the unit of measurement is the tens of thousands,” while in research, it is “in single digits or possibly two digits.”
In sum, IRB's routinely inflate small risks or imagine them where there are none.
The hypersensitivity to risk that IRB's share appears to stem from poorly worded federal guidance:
So in a 2,400-word pronouncement, the HHS Secretary used “ensure” nine times and “guarantee” and “make sure” twice each. The “system of protections” must “ensure” not optimal but “maximal protection for all human subjects” and “guarantee” not optimal but “the greatest possible protection for every human subject, in every clinical trial and at every research institution in the country.” Really and truly: “even one lapse is too many.”131
To put my Tyler Cowen hat on for a minute in response: what is the optimal number of research deaths in a country of 330 million?
IRB’s are costly!
If IRB's could do only a limited amount of good but were also very cheap, they might still be a good deal. But because of its incentives and design, the IRB system is in fact very costly. First, IRB's use "event licensing" as their regulatory strategy: every single study must be reviewed in advance of any wrongdoing to prevent rare and mostly trivial harms, as opposed to a "command and control" model, where rule-breaking is punished after the fact. Most research is harmless, yet IRB's spend much of their time reviewing those safe projects.
Second, IRB's impose direct and indirect costs. Direct costs (the fees researchers pay IRB's for their review) can be substantial, around 15% of a grant budget. Indirect costs are harder to calculate but likely much more important: the delays imposed as IRB's request revisions of research questions and documents, the distortions in research questions and priorities, and the "invisible graveyard" of studies that IRB's have rejected or deterred altogether. When medical researchers are surveyed on obstacles to research, they rate such obstacles as "moderate or severe" and commonly cite IRB's as a cause.
Sloth Kills
Whole areas of important research can be crippled for years as regulators struggle to generate new rules: in 1993, emergency medicine research became nearly impossible to conduct after OHRP's forerunner (OPRR) "forbade research without prospective consent. The blow to resuscitation research was 'far reaching and devastating'. During 3 years, one protocol was permitted." New rules passed in 1996 eventually provided ways to waive consent in emergency cases, but they were very burdensome, imposing "community consultation" requirements that waste time and resources on meetings with community members who generally see no problem with the studies in question. Because many of these emergency conditions have such high mortality, incremental advances in treatment would save many lives; correspondingly, the cost that IRB's impose through delays or deterred studies is quite large: "'the number and proportion of randomized trials' of treatments for sudden cardiac arrest fell significantly during the last decade because of the regulations...roughly 300,000 people annually have cardiac arrests out of hospitals. Five percent of unselected patients survive, so 'a one-year delay of a new therapy that improves survival by 1% may cost approximately 3,000 lives.'" Another hint at IRB costs is the delay of the ISIS-2 trial, a multi-national study of thrombolytics in which American consent requirements slowed the study by six months, a delay that, given the prevalence of heart attacks and the high efficacy of thrombolytics, is estimated to have caused thousands of unnecessary deaths.
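To make the quoted arithmetic explicit (all figures come from the quote itself; the "1%" is an assumed absolute improvement in survival):

$$ \underbrace{300{,}000}_{\text{arrests/year}} \times \underbrace{0.01}_{\text{absolute survival gain}} = 3{,}000 \ \text{lives per year of delay} $$

Note that the 5% baseline survival rate doesn't enter the calculation; only the absolute improvement does.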
IRB's and the associated compliance industry demand and generate enormous amounts of paperwork, often require multiple rounds of review to approve minutiae like consent documents, and increasingly demand training for anybody involved in running a study. These are hidden costs that show up in study timelines, opaque consent forms, and researcher time. Because IRB demands have changed over time, and because researchers often modify their long-run projects, they must occasionally ask for IRB approval of protocol changes, even when those changes do not increase risks. To give a sense of the delay:
IRB approval can take months. An extreme example is Gittner’s trivially risky health-service research—8 years of negotiation with several IRBs. More normally, McWilliams’ genetic epidemiologic research study “took 9 to 252 days for approval by 31 IRB committees.” Dziak’s health-services research “took 5 to 172 days by 15 IRB committees.” Stair’s “clinical trial took 26 to 62 days for approval by 44 IRB committees.” Sherwood’s median time between enlisting an investigator at a site and IRB approval there was 14.7 months.
Some areas of study are especially vulnerable to delay: any lag in studying emerging pathogens, as occurred with the Seattle Flu Study in early 2020, can be harmful. Participants can get frustrated with long delays and drop out of studies. And when IRB's modify consent forms to include fanciful risks, or restrict how researchers can recruit participants, they can harm recruitment efforts or make some studies impossible.
IRBs are inflexible
Social scientists who use qualitative and exploratory methods, like ethnographers, are hit particularly hard by IRB inflexibility, especially those working with "vulnerable" populations:
IRBs often tell those studying “hospitalized psychiatric patients, marihuana smokers, homeless black men, or pool hustlers” to get signed consent. But “most ethnographers establish trust only through gaining rapport,” so requiring consent first usually severs “ties before trust can begin to be cultivated.” Thus government “shields us from knowing the truth about ghettos, tearooms, marihuana users, and abuses of patients in psychiatric hospitals.” (Location 1068).
The vulnerable are understudied
Because IRB's are particularly strict with so-called "vulnerable populations," the problems of marginalized people are probably understudied. Routine medical care generates information and often involves blood and tissue samples; linking that data to health records or nonmedical data (like a pollution index, for example) can be very fruitful. But under the guise of protecting subject privacy (violation of which is considered very harmful), IRB's have made archival research much harder:
Not only can IRBs make archival research costlier, longer, and harder; they can stop it. Fost reports that HIPAA reduced medical-records research at the University of Wisconsin 77%. Multiplied nationally, “tens of thousands of important epidemiologic studies” are stopped. Yet there is scant evidence that archival research is unsafe.
In a recent example, employees at the US Census Bureau, likely motivated by similar ideals, are attempting to restrict researcher access to census data, provoking a backlash from researchers. Nobody has yet been harmed by freely available census data; this purely speculative risk nonetheless has a serious chance of hobbling huge chunks of social science for generations.
The perverse result of federal regulations that attempt to protect vulnerable populations is that those populations (pregnant women, pediatric patients, and others) end up understudied and stuck in a fog of non-evidence-based treatment. For example:
The children of incarcerated adolescent girls “are among the most vulnerable and least well served” American children, but little is known about their “numbers, health, developmental, and placement status.” HIPAA’s stringency and IRB ignorance “make obtaining permission to conduct institution-based case file reviews a long and expensive process.” Acoca reports the “increasing difficulty—if not impossibility—of obtaining permission to conduct face-to-face interviews.”
Infuriatingly, IRB's will even sometimes require researchers to destroy data to better protect subject privacy, which has made longitudinal and comparative research much more difficult.
IRBs slow research timelines, which hurts early-career researchers
The "death by a thousand cuts" tendency of IRB's makes research particularly difficult for graduate students and others on short timelines: if you don't know when an IRB will finally approve a protocol, best to try something else. Social science students have stopped performing as much fieldwork:
Van den Hoonaard describes a “crisis brought on by research-ethics review”—“the decline of methods that have traditionally been at the heart of social research,” including “fieldwork/participant observation, covert research, [and] action research.” He studied Canadian master’s theses before and after ethics review had proliferated and found a decrease in theses involving research subjects from 31% in 1995 to 8% in 2004. Of those theses, 40% included fieldwork in 1995, but only 5% in 2004.
Individual IRB's may have particular aversions to some subjects and demand extra protection for any research related to their pet peeves, which distorts the scientific literature. Whole fields become less attractive to researchers as the regulatory burden increases. When IRB's interfere with researchers' protocols, they sometimes harm subjects directly, such as by demanding names on signed consent forms from subjects seeking anonymity or by reducing the payments offered to participants. IRB's have even begun to intrude into other domains of university or hospital life: obtaining student feedback on curriculum changes or running a quality-improvement study can suddenly become subject to IRB review, with little recourse for the faculty involved besides quietly acceding to IRB demands.
IRBs have strayed from their mandate
It's worth pausing for a minute to understand how far from their original purpose IRB's have strayed: originally constituted to prevent researcher scandals that caused severe harm to human subjects, they have extended themselves to regulate research on curriculum changes ("how do you feel about the new curriculum compared to the old one?") which no sane person would regard as causing physical harm.
IRB's make bad and capricious decisions
Part II of his book demonstrates that IRB decisions are of poor quality, contrary to common notions of procedural fairness, and ethically incoherent.
Putting aside the cost-benefit analysis of an IRB system, the actual operations of IRB's are "chronically arbitrary and capricious." This is demonstrated by multi-site studies in which different IRB's come to different conclusions— a recurring problem that the NIH recently addressed with a policy requiring a single IRB of record for funded multi-site work. High-profile studies addressing pressing clinical issues— like what level of oxygenation premature infants should be maintained at (the SUPPORT trial) or how aggressively to ventilate ARDS patients— have been unfairly criticized by OHRP for exposing patients to risks, even though both treatments in each study were standard clinical practice and multiple IRB's had signed off on the studies. That is, the research was not exposing any patients to additional risk, merely trying to answer which extant treatment was superior. In another case, involving vitamin A administration for premature infants, the 18 IRB's involved could not agree either:
One IRB thought the risks of supplemental vitamin A so outweighed the benefits that giving it would be unethical; another thought the benefits so outweighed the risks that withholding it would be unethical.
Variation between IRB's on similar questions has been a "persistent pattern" since the late 1970's, with disagreements on basic questions like "who may be a subject...enrolling a child who cannot assent...approaching people by phone...literacy forms...risks of blood draws". Schneider argues that this variation is proof of IRB arbitrariness.
Consent forms are unreadable
IRB's emphasize the importance of informed consent, but their insistence on lengthy explanations of the research and every possible risk has made consent forms increasingly long and unreadable— a far cry from how Paul Offit, in Vaccinated, described consent forms for the mumps vaccine, tested in the 1960's: "parents...received a 3-by-5-inch card stating 'I allow my child to get a mumps vaccine.' At the bottom of the card was a line for the parents' signatures."
IRB’s core documents are equivocal
The guiding principles of IRB's are derived, in theory, from their "guiding texts": the Belmont Report, the Helsinki Declaration, and others. While the Belmont Report lists "respect for persons, beneficence, and justice" as its core principles, it doesn't provide detailed guidance the way a legal statute might. The OHRP guidebook is also unhelpfully vague: it restates the principles that researchers must follow, but it doesn't give explicit procedural steps to achieve those ethical aims. While those principles might be decent ethical principles, they are not useful regulatory guidance. Schneider shows how a reasonable person might apply the Belmont principles to some ethics question— say, the regulation of archival research like biobanks— and arrive at the opposite of the conclusions that IRB's usually reach: "shouldn't respect for persons require the default assumption that people want to behave well, to help the larger research enterprise that benefits everyone"? That is, while the US IRB system has come to some vague consensus on these recurring ethical issues, those decisions weren't made by keeping the Belmont Report's principles in mind. Just as plausibly, they reflect the outputs of a risk-averse and veto-prone institution. In other cases IRB's use words in completely alien ways: "coercion," for instance, can mean anything from physical coercion to being paid too much to participate in a study.
IRB’s are paternalistic
Another place where IRB practice diverges from officially stated principle is paternalism. While bioethicists regularly denounce medical paternalism and IRB's officially agree, in practice IRB's are very paternalistic: they regulate how much money subjects can be paid and what risks they are allowed to take on, even though research subjects regularly make decisions that are just as consequential in ordinary life. Patients choose their own medical care, engage in potentially distressing conversations with friends, and, horror of horrors, often perform paid labor that IRB's don't control. IRB paternalism is even more evident with "vulnerable" subjects: when a Canadian sociologist wanted to interview farm children, her appeal to overturn the first IRB's denial took over a year; all this merely to interview 7-to-12-year-olds about their lives. IRB's are similarly restrictive with prisoners, viewing them as uniquely vulnerable to harm, yet the available data show that inmates view their research participation positively.
Schneider is pessimistic about any agency's ability to construct a "coherent ethics for human-subject research [because]...IRBs are government agencies, their ethics must be legible and enforceable....affects ethics....law has limits that arise from its special social purpose." He instead thinks such an ethics should arise organically within the different disciplines, as indeed it did before 1974: "in the bad old days, disciplines developed their own ethics through debates among specialists who understood their disciplines' ethical problems."
What’s the purpose of IRB’s?
He asks an important question: was a lack of IRB's the reason that historical research scandals occurred? He makes a strong case that differing ethical standards were the real reason the Tuskegee Study and others were undertaken and sustained. In fact, many of the classic research scandals, like Willowbrook, were approved by ethics committees at the time. In a time when the military experimented on unknowing soldiers and corporal punishment was the norm, I’m not convinced ethics committees would have prevented bad research from being conducted.
Faced with these arguments, the Straussian read on IRB's might be that they're not really about ethics but about preserving public trust. However, the data are not clear on whether the public even realizes the IRB system exists in the first place! Moreover, polls consistently find that doctors and the medical system rate quite highly in the public's perception; perhaps the IRB system's focus on historical scandals actually reduces trust instead of increasing it.
Opt-in vs Opt-out consent
Drawing on his legal expertise, Schneider proposes an alternative ethos for IRB's in line with legal precedents for promoting the common good: "when research is little risky and when some default rule must be used— as in permission for archival research— rules should serve both the majority's wishes and social interests. For example, people are massively willing to let researchers use information...only 3.6% of the patients refused....95% of the Ugandans asked agreed". Some Nordic countries use this approach for their population registries, which has helped them produce much of the world's best epidemiological research. This alternative ethos would also recognize that daily life presents hazards and choices riskier and more consequential than most faced in research— driving a car or a motorcycle? rock climbing or skydiving?— and would consequently trust participants to make decisions with much more autonomy.
IRB’s lack Due Process
Schneider thinks the IRB process is itself fatally flawed. IRB decisions are cloaked in secrecy, don't give petitioners a chance to argue their case, and lack an appeals process— that is, they lack due process, the bedrock of a just legal system. The first problem is that IRB's lack a clear-cut set of rules at all, a gap stemming from vaguely worded federal regulation. Different IRB's, guidebooks, and federal agencies disagree on many specifics. Blatant falsehoods can spread in such an environment:
"Schrag finds lawlessness compounded by “false claims” about the “regulations flitting from university to university, without citation.” The University of Iowa says that the federal regulations prohibit researchers from deciding whether their “study meets the definition of human subjects research.” Schrag writes that the “regulations say no such thing, but you can find the same falsehood at the University of Southern California.” Schrag describes “various mechanisms by which such falsehoods spread.” In the IRB Forum, for example, “most queries are answered by IRB staffers explaining how they believe a situation ought to be handled” with no authority “beyond the Belmont Report and the federal regulations.”
IRB members sometimes rely merely on "gut feelings" to judge whether a given protocol is acceptable!
Second, IRB hearings don't follow accepted rules of procedure: IRB members are colleagues or administrators, not neutral third parties; IRB's have no rules for introducing evidence and don't protect the right to call expert witnesses; IRB meetings are "closed to the public"; and there is no appeals process, whether through the IRB system or the courts.
IRB’s are Censor Boards
Beyond mere process, IRB's are, per Schneider, effectively censor boards. They regulate the language of consent forms and how subjects can be approached, compel researchers to report progress in certain ways, and control which study questions can be asked (one meta-study that assigned IRB's similar hypothetical studies found they "withheld approval from 63% of the reverse-discrimination proposals, 51% of the discrimination proposals, but only 26% of the height/weight proposals"). Moreover, they do so in an academic context, which has historically received substantial free-speech protection from the Supreme Court ("the Supreme Court calls the university a 'traditional sphere of free expression'"). Even more egregiously, IRB's restrict speech through prior restraint, the form of censorship that has historically faced the greatest hostility from the Court. Like censor boards, IRB's serve the status quo by discouraging research that discomforts any IRB member or could provoke backlash against a university or institution.
IRB’s grow through ethical imperialism
Overall, he is skeptical of IRB reform short of radical change because of the IRB system's tendency to grow in influence:
Furthermore, the greater the reform, the more opposition it would provoke. Even if a serious reform were implemented, I doubt it would last. Reforms that truly reduced the IRB system’s costs would usually bring it nearer its original (relatively modest) ideal. But the same forces that caused that ideal to collapse would gradually corrode serious reforms. To put it differently, the IRB system achieved its authority through the power of its imperialism, and no meaningful reform could long resist that power. Just sketching the history of IRB imperialism suggests how hard it would be to tame it and to institute and preserve genuine reform.
Lately even high-school science projects are coming under IRB control, as are quality-improvement projects and possibly journalism (though my reading of the recent rule changes is that they place journalism outside IRB hands).
IRB’s are Entrenched Interests
Historically the IRB system has tended to resist attempts to rein it in, to the point of deceiving duly elected officials:
When Reagan was elected, Charles McCarthy was Director of the Office of Protection from Research Risks (OHRP’s predecessor). As he “later recalled, ‘everybody knew that this was not a time to try to propose a new regulation.’” So he described jurisdiction over social-science research “as a reduction of regulation.” To do this, “he had to distort the effects of both the 1974 regulations and their proposed replacements.”42 He “exaggerated the existing extent of” regulation of behavioral research and “then claimed that the new rules were more lenient, stating that the ‘proposed new rules would exempt risk-free behavioral and social science research resulting in deregulation of about 80% of research.’” (Location 3407)
The revisions to the Common Rule made under Trump appear to be an exception: they describe more clearly which social-science research is exempt from IRB approval, but they preserve the vast majority of the IRB system.
This should make us pessimistic about achieving IRB reform through Executive Branch fiat alone. In addition, the IRB system has powerful incumbents: the whole quasi-private IRB apparatus (embodied in PRIM&R and AAHRPP) benefits financially from the current system. The recent flurry of discussion on Twitter over a controversial but innovative paper on the effects of Protestant evangelism on economic outcomes introduces another danger: academics comfortable with the status quo may come to view IRB's explicitly as a mechanism for suppressing some kinds of research, and may advocate against IRB reform. As he puts it: "how...can Big Ethics...want real reform enough to demand, or even tolerate it?...Bioethicists and a succession of expert investigations have identified the regulatory defects of the IRB, but defined the remedy as more of the same— more review...IRB system was crucially shaped by moral entrepreneurs in a moral panic."
IRB Reform?
Though Schneider doesn't offer a detailed prescription, he thinks tort law + occasional criminal sanctions + self-regulation by professional organizations are more than sufficient to regulate research.
Overall, Schneider presents a compelling case that IRB's are very costly, don't do much good, and delay or deter enormous amounts of beneficial research. One downside of his book is that it was finished (published in 2015) before the rule changes made under the Trump administration in 2018, which seem to have changed some aspects of IRB regulation on the margins. Since those changes do not appear to have radically reduced IRB power, however, the main conclusions of the book are not substantially affected.
I think Schneider may somewhat overstate how costly IRB review is to researchers nowadays, in time and paperwork, in the "efficient compliance" era (per my last post): it seems private IRB's have sped up review timelines substantially over the last few years. But I haven't seen good data on this, so I'm not sure.
Ideas for Reform
Finally, here are some tentative thoughts on IRB reform:
The original intent of IRBs, as embodied in the National Research Act of 1974, was to prevent unethical medical experiments. Refocusing IRBs on risky biomedical research would ensure those studies get higher-quality, more sustained review while freeing low-risk research from unnecessary oversight. To some degree this has already been done, but we could move further in this direction.
If we accept the premise that overly vague federal regulation enabled "IRB Imperialism," it seems likely that very detailed federal regulation (on which kinds of research are subject to IRB approval, standardized forms, and so on) would reduce IRB scope creep. I'm not very confident in this, because one of the proposals to do this, back in 2002, is vague on details and doesn't appear to have aimed at reducing regulatory burden.
As I mentioned in my last post, the UK IRB system underwent modernization and centralization in 2000 with good results: the UK was, for instance, the only country to approve COVID-19 human challenge trials. Some changes made during the Trump administration, like requiring a single IRB of record for multi-site studies, were apparently inspired by the UK reforms. We could seek further policy inspiration from their system, which, unlike the US system, also has a formal appeals process.
Note: new material added to the original post on 06/06/2021
Institutional Review Boards (IRB's) were originally committees, made up of a mix of university faculty and administrators, that regulated "human subjects" research. The system has some precedent in pre-1974 review boards that were much more informal and focused on biomedical research, but it really came into existence in 1974, when the National Research Act was passed in response to the Tuskegee scandal; the Office for Protection from Research Risks (OPRR, later reorganized as the Office of Human Research Protections, or OHRP) supervised IRB's and promulgated rules on human-subject research. Since then the IRB system has gone through 4 phases of compliance and become a hybrid of private for-profit IRB's and nonprofit, ostensibly non-governmental organizations like PRIM&R and AAHRPP, all constrained by extremely vague pronouncements from OHRP.