FDA’s Standards for Device Approvals Under Scrutiny

Studies often low quality — or missing entirely when it comes to post-market surveillance

https://www.medpagetoday.com/publichealthpolicy/fdageneral/67300?xid=nl_mpt_WeeklyVideos_2017-08-19&eun=g578717d0r

A study appearing in the Journal of the American Medical Association documents the occasionally poor quality of clinical studies used to expand FDA approvals for high-risk medical devices. But as F. Perry Wilson, MD, discusses in this 150-second analysis, adequate post-marketing surveillance may help to balance regulation with access to improved devices.

This week, we take a walk along the rocky path of medical device regulation with this study, appearing in the Journal of the American Medical Association.

Devices are regulated in a different way than drugs. Once approved, a drug doesn’t really get changed. Indications may be expanded, but the drug is still the drug. But devices get modified frequently – think different pacemaker leads.

The FDA does not require a clinical trial demonstrating proof of safety and efficacy for all those changes – far from it. In fact, in the vast majority of cases the FDA requires no clinical data at all for approval.

One exception to that rule is for high-risk devices, which include things like cardiac stents. Modifications of these devices must go through the most rigorous review pathway, known as the “panel track.”

But, as the JAMA paper suggests, the panel track is not really that rigorous. There were only 78 panel-track approvals between 2006 and 2016, underscoring how rarely manufacturers use this pathway. In contrast, from 1979 through 2012 there were 5,800 non-panel-track supplements for cardiac implantable electronic devices alone.

According to the study, the data supporting the changes rarely measure up to the quality we might expect. Of 83 studies supporting these approvals, only 45% were randomized. Only 30% were blinded.

Almost a quarter didn’t specify a primary endpoint. And shockingly, only 87% reported the number of patients enrolled. Only 84% reported the mean age of enrollees. These are pretty basic stats, folks.
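
To put those percentages in concrete terms, here is a quick back-of-the-envelope conversion of the reported rates into approximate counts of the 83 supporting studies. The integers below are rounded from the percentages rather than taken from the paper, and the “almost a quarter” figure is assumed to be roughly 24%:

```python
# Rough conversion of the reported percentages into approximate counts
# of the 83 supporting studies. Rounded estimates only; the exact
# integers would come from the JAMA paper itself.
n_studies = 83

reported_rates = {
    "randomized": 0.45,                      # 45% were randomized
    "blinded": 0.30,                         # 30% were blinded
    "no primary endpoint specified": 0.24,   # "almost a quarter" (assumed ~24%)
    "reported number enrolled": 0.87,        # 87% reported enrollment
    "reported mean age of enrollees": 0.84,  # 84% reported mean age
}

for label, rate in reported_rates.items():
    print(f"{label}: ~{round(rate * n_studies)} of {n_studies} studies")
```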

Obviously, we could spin this data to make it look like the FDA is asleep at the wheel — but before we grab our pitchforks, let me ask this question: Why were some studies randomized, and some not?

 

The reason is that there are civil servants at the FDA whose job it is to interface with manufacturers to decide how these studies should be conducted. They are charged with determining the “least burdensome” standard of data. That’s the law. In other words, sometimes a blinded, randomized trial is the least burdensome thing you can do to make sure the device is still safe and effective. But not always. We’d need to review each of these 78 approvals separately to determine if we, as a medical community, think the data presented was inadequate.

I’m actually OK with this system, with one caveat: rigorous post-approval research must be conducted to ensure safety, especially as indications are expanded. And here the FDA has not done a great job; it has been lax about enforcing post-marketing surveillance. According to the study authors, only 13% of post-marketing safety studies are completed between 3 and 5 years after FDA approval, and the FDA has never issued a warning letter, penalty, or fine against a manufacturer for noncompliance.

Getting these products to patients quickly may be laudable, but once they are in the wild, manufacturers should not be left entirely to their own devices.

Postscript

After making this video, I heard from lead author Rita Redberg concerning some questions I had with the manuscript.

I was curious about the “denominator” for these approvals. The study looks at 78 approved device supplements, but we are not told how many applications were rejected. Her response: “FDA does not make available the number of applications they receive and do not approve … Anecdotally, I have heard it is about 80% approved.”
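
For a rough sense of scale, taking that anecdotal 80% figure at face value (it is not a published statistic), 78 approvals would imply somewhere around 97 to 98 panel-track applications over the decade, or roughly 20 rejections. A minimal sketch of that arithmetic:

```python
# Back-of-the-envelope estimate of the "denominator": how many
# panel-track applications would yield 78 approvals, assuming the
# anecdotal ~80% approval rate is roughly right.
approvals = 78
approval_rate = 0.80  # anecdotal figure, not a published FDA statistic

implied_applications = approvals / approval_rate
implied_rejections = implied_applications - approvals

print(f"Implied applications: ~{implied_applications:.1f}")  # ~97.5
print(f"Implied rejections:  ~{implied_rejections:.1f}")     # ~19.5
```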

Dr. Redberg also stated that she feels the current standard for approval is relatively lax, citing a recent cluster of deaths associated with a rapidly approved gastric balloon. She suggests that high-risk devices should face the same standard as drugs, for which two clinical trials, preferably randomized and with meaningful endpoints, are necessary for approval. Finally, she notes that post-marketing surveillance may not be the best solution to this problem (as I had suggested), since while drugs can be quickly pulled from the market, many devices are not easily removed from patients.

F. Perry Wilson, MD, MSCE, is an assistant professor of medicine at the Yale School of Medicine. He is a MedPage Today reviewer, and in addition to his video analyses, he authors a blog, The Methods Man. You can follow @methodsmanmd on Twitter.

One Response

  1. I have a brain implant. I had questions about the wires, chip, and battery, as mine had been in for 20 years when it failed: what is the maximum length of time patients have had them, the mean time frame, the reasons for failure (nothing looked out of order on the X-rays), etc.? To my astonishment, I was told the company doesn’t keep any records on it.
