In June 2017, Yale CRIT hosted an international conference titled “Ensuring Safety, Efficacy, and Access to Medical Products in the Age of Global Deregulation.” The following blogpost is the first installment of a blog series with commentaries from the conference participants. The views and opinions expressed in this blogpost are those of the authors and do not necessarily reflect the position of Yale CRIT. For more blogposts related to this series, see here or click the tag “YaleCRIT17” below.
We are currently witnessing the pendulum of public opinion swing toward a less protectionist FDA, with the result that it is no longer clear what is expected of the agency. The agency traces its origins to the Pure Food and Drug Act of 1906, which sought to ensure that foods and drugs are (1) what they claim to be on their packaging, and (2) safe. Over time, an additional assurance was added for drugs: that they are proven effective.
Now we are facing a deregulatory trend. There are serious proposals to allow the marketing of drugs before they are proven to be effective via conditional approval, where full approval is granted after post-market efficacy studies. Others call for allowing the approval of drugs without Phase 3 trials, leaving payors, providers, and patients to determine effectiveness.
In the American public opinion (or at least a vocal segment of it), patients view themselves as consumers being barred from purchasing items by an overly protectionist, bureaucratic government agency. The shift from perceiving medicine as a trusted profession to seeing healthcare as a business has resulted in less interest, on the part of some, in expert assessment and more reliance on customer choice.
If Americans really want the FDA to assure merely that drugs are "safe," this is problematic in two ways:
One, drugs are never 100% safe. Rather, we deem some of them safe enough to use, given the expected benefits. That many Americans do not fully appreciate this underscores our nation's current lack of scientific literacy and unrealistic expectations concerning what drugs can and cannot do.
Two, if Americans do not care about proof of efficacy before FDA approval, that may entrench an even more segregated system. Suppose the FDA were to approve drugs shortly after Phase 1 clinical studies (i.e., dose finding and initial safety testing). Those who can afford to purchase drugs out of pocket could buy whatever they want, while those who rely on insurance or government programs would be left to the whims of individual insurers' judgments about which drugs are effective. Marketing would also shift more strongly from "ask your doctor" to, for those who have money, "demand from your doctor."
It seems doubtful, especially looking at the recent Sarepta experience (where the FDA conditionally approved a drug targeting Duchenne muscular dystrophy against the recommendations of some of its own scientists), that insurers are willing to pay for drugs that have not been proven to be effective. But how would the landscape look if insurers were called upon to demand robust efficacy studies? Would insurance companies share the outcomes of these trials? Would they band together in accepting or not accepting a new drug or device?
Or would insurance companies hoard the outcomes of their trials, considering them proprietary data essential to internal business decisions? How would insurance companies' drive to contain costs skew what they say about efficacy or eligibility for reimbursement? Current concerns over pharmacy benefit managers' (PBMs') formulary decisions do not instill confidence in an insurance-mediated future.
Many argue that the pharmaceutical industry is complicit in, or even central to, the deregulatory tide buffeting the FDA. That has not been our experience.
In our work with patient advocacy groups and pharmaceutical companies, we have seen industry’s reluctance to have FDA rules changed. The companies we work or speak with know they are subject to the FDA's decisions about their drugs, but they know that their competitors are too. During the decade or so it takes to develop a drug, they rely upon the goalposts not being moved: they know, more or less, what the FDA will expect them to prove. Deregulation, which may seem to be in industry's favor, is actually undesirable because it introduces additional unpredictability in an already high-risk innovation field.
In our experience, the people who desire FDA deregulation are (1) certain patient advocates who are frustrated about waiting years without even one good drug, and (2) libertarian activists who either morally oppose the FDA "keeping" drugs from patients or who cynically see the FDA as easy pickings in their government-wide effort to roll back regulation. Not all patient advocates take this view, but some do, and their complaints that the FDA is responsible for patient deaths and disability find fertile ground in a public that believes that government is slow and, moreover, that individual people deserve to be “rescued” from unenviable fates, including death and serious disability.
The FDA has a crucial role in overseeing drug development, one that includes ensuring safety and efficacy. Other tools need to be in place to then decide what ought to be made available to patients and at what price. Beating on the FDA from the left and the right does the public little good. But in an age when health care in America is viewed as a business, agencies that stand for the public good and public health become increasingly hard to defend, however much they are needed.
Alison Bateman-House, PhD, MPH, is an assistant professor in the Division of Medical Ethics at NYU School of Medicine.
Arthur Caplan, PhD, is the Drs. William F. and Virginia Connolly Mitty Professor and founding head of the Division of Medical Ethics at NYU School of Medicine.