“The experiment worked,” the doctor told me, “but the patient died.”
The patient was my father, who had been diagnosed with leukemia. Three months earlier, the doctor had said he would probably live only three more months. No effective therapy existed. Most doctors would not have offered any intervention.
“We could try chemotherapy,” the doctor said, “but it would be a bit of an experiment. There’s a 50 percent chance he could live an additional three to 18 months.”
The doctor was upbeat, optimistic about the treatment. My father was torn. My mother thought it was too risky. But as a newly minted doctor, I had high hopes for science.
“I think you should try it,” I said. He agreed — largely because of me.
Three months later, his white blood count normalized, but the side effects of the treatment killed him.
His death has haunted me ever since. I felt let down: Science had failed to prolong his life any longer than if he had refused the experimental drugs. I wondered whether I or the doctor should have acted differently; whether the process could have been handled better; whether the doctor had been too optimistic and downplayed the risks; whether he should have helped us more with this decision.
Controversies involving experiments have been increasing — from stem cells, genomics and Ebola to studies by pharmaceutical and social media companies. Last year, Facebook admitted it had experimented on almost 700,000 users, testing, without telling them, whether it could influence their moods.
Such studies present critical questions of who oversees science, how well these overseers do so and whether they should do more.
Most people take for granted that some protective mechanism — laws or watchdogs — ensures that experiments are ethical. Indeed, research ethics committees or institutional review boards (IRBs) do review all human experiments. But they have become increasingly controversial.
Why? In part because they operate behind closed doors and, scientists now argue, often stymie, rather than support, key studies.
Investigators commonly call IRBs “the Ethics Police” and complain that these boards unnecessarily block or delay studies. As a researcher, I, too, have sometimes been frustrated by them.
Yet despite the controversy in the field, the public knows little about these boards, even though they affect all our lives.
These committees were created following revelations about the gross ethical violations in the Tuskegee Syphilis Study. In that experiment, researchers studied African-American men in the South who had syphilis. But the researchers, funded by the U.S. government, decided not to inform the men about penicillin when it became available as a definitive treatment. If the men got treatment, it would destroy the experiment. In response to this grave moral lapse, Congress passed the National Research Act in 1974, establishing IRBs.
The United States now has more than 4,000 of these boards, examining tens of billions of dollars of research each year. Large medical centers have five or six of these committees. These boards must have at least five members, including at least one scientific member, one nonscientific member and one unaffiliated with the institution. Typically, they include about 14 individuals, mostly researchers. They try to minimize risks and maximize benefits in any studies and ensure researchers do not overly burden disadvantaged groups.
But complaints about them have been mounting.
This is perhaps not surprising. Since the regulations were written, science has evolved and become more complex. The NIH budget alone has increased about 15-fold. The biotech and pharmaceutical industries have burgeoned.
The problem is that the regulations have not kept up with science, and the system needs to be revised. Currently, a single study frequently needs approval from IRBs at 50 different institutions, whose decisions vary, and these committees now often delay important studies. Despite this oversight, research scandals still occur. Some subjects die unnecessarily in experiments. And many critics argue that social media companies are not sufficiently protecting your rights when they experiment on you online.
In 2011, with such criticisms in mind, the Obama administration proposed several changes, including increased use of central IRBs (or CIRBs) in multisite studies and listed 74 questions about how to alter and oversee those committees. A revised version of these proposals was submitted to the Office of Management and Budget this February, but the content of these revisions and their fate remain unknown.
Unfortunately, these proposals have received far less attention than they should, given that they will affect tens of billions of dollars of research.
We urgently need to consider these complex issues, yet very little research has been undertaken on how exactly these committees make decisions.
With that in mind, I recently looked into the issue myself, interviewing IRB leaders around the country. What I learned constantly surprised me.
These committees wrestle with genuine dilemmas: they must constantly weigh the possible future risks and potential benefits of studies that have not yet been conducted, decide exactly what scientists should tell participants and determine how much to trust versus monitor individual researchers.
IRBs have “different flavors and colors,” one chair told me.
Some are “nitpicky,” others “user-friendly.” They are well-intentioned but base many decisions on “gut feelings” and “the smell test.” Some committees are flexible and have “open door” policies, inviting researchers to attend meetings or call; others prohibit it. Reforms are thus essential.
It seems obvious that greater clarity and standardization are vital.
Boards that remain closed to researchers should be more open. A body of “case law” should be built, based on documented precedents. Interpretations and applications of principles in specific cases should, as much as possible, be openly vetted.
IRB members, staff and chairs should, for their part, undergo rigorous training and testing using standardized protocols. IRBs also need to recognize more fully that they are engaged in complex, interpretive processes and to acknowledge the costs some of their decisions impose on research.
As a researcher, I am still at times frustrated by review boards, but at least I now better understand their perspectives. After all, they face decisions that affect everyone — present and future patients, patients’ family members and doctors. It is therefore important that we all understand these dilemmas and the possible solutions.
The ancient Roman writer Juvenal once asked, “Who will guard the guards themselves?” The answer is that we all have a role to play.