Info & Opinion
December 8, 2016
History records that after her husband was assassinated, someone asked Mrs. Abraham Lincoln about the theatrical production she had just seen.
With all deference to Mrs. Lincoln, we are in a similarly stunned state of mind after talking to Kit Howard, principal of Kestrel Consultants. Based in Michigan, with a Canadian's calm demeanor, Howard speaks in methodical, reasonable cadences. There's no rancor. But oh, my—it's a hot topic.
Howard's career includes working at and consulting for some of the largest firms in the industry. Lately, she's been helping academic medical centers. And she's been surprised, to say the least, by what the bow-tied physician-experts at medical schools actually know about, um, well, clinical trials and GCP. 21 CFR Part 11. Minor stuff. Details.
If Howard is right, major medical schools, by relying on self-reinforcing reputations of high rectitude, have been held to a far lower standard of clinical research quality than the rest of the biomedical research world.
"Most of what has been published in the research journals over the past 50 years or so is probably untrustworthy," Howard believes. "Operational and statistical issues have not been addressed as part of the review process. We have no way of knowing if the research is valid. The peer review process does not include reviewing protocols, much less statistical analysis plans. Forget about operational data-definition type things."
Aside from all medical research being suspect, Mrs. Lincoln, is anything at your local pharmacy worthwhile? Kleenex? Suntan lotion?
Howard goes on to suggest that two parallel research worlds have arisen. One is inhabited by academics and their well-meaning aides. They have almost no idea what they are doing in terms of FDA-approved science. But they enjoy full emotional certainty that their research is superior to the science in the other, second world.
Over there, in industry, companies learn the regulations, implement them, and endure the heckling from the first world. It appears that Howard has had her fill of that dynamic.
"The people in industry absolutely are held to a higher standard," she says. "They've had the regulatory agencies breathing down their necks for a number of years. Nobody has ever subjected academic researchers to that kind of scrutiny." She hastens to add there is no deliberate misconduct in academia, just a lack of exposure to a certain body of expertise.
In essence, the professors have granted themselves a free pass to publish, based on the affirmation of their peers, who turn out to be other academics on some other campus not too far away. They learn a bit of epidemiology, a term or two in biostatistics, and voila! They're scientists.
Howard believes the time has come for a boot camp for clinical research. Such tutorials would augment a few degree-based programs in clinical research. "The majority of protocols are not rigorously designed in academia because they have not been held to standards," she says. "Most people doing research have not been taught."
Some of the most basic concepts of integrity in clinical data, Howard believes, are rashly denigrated or discounted by the occupants of the medical ivory towers. Data management, such folks believe, is a bit nebulous, but probably something for the IT department.
On this score, Howard is especially severe: "They think it is either building a database or data entry. They know it has something to do with case report forms. The concept of data quality starting from making sure you're asking the right questions and gathering the right data to answer the questions? It's never occurred to them."
In a way, Howard has issues with a scientific culture that rewards scientists who veer off into their own micro-specialty without rigorous statistical and operational constraints. The pressure to establish a reputation in academia, she says, puts a premium on iconoclastic, quirky geniuses like, say, Craig Venter. In their zeal to find new knowledge, such iconic figures tend to ignore the best practices of clinical trials.
In the manner of meteors and comets, such figures zoom around wonderfully. Trouble is, the speed and glory of the innovators' journeys cause them to bypass some of the prosaic, boring nuances of clinical research that allow scientific ideas to be placed in a proper context. Without that context, every comet can look like Halley's, everybody gets published, and some baseline level of clinical trial rigor is absent.
For society, the stakes are profound. Howard says one of the best examples of the consequences of the lack of rigor in academic science is the controversy around vaccines for childhood diseases. A few squirrel-headed academics have subverted a crucial public health agenda, she says, by convincing parents that vaccines are harmful and part of a global conspiracy.
Such scientists, though isolated, have persuaded large numbers of U.S. parents not to vaccinate their kids. Some epidemiologists are worrying about communicable diseases once believed to have been vanquished.
The general lack of scientific rigor in academia, Howard suspects, is probably having effects on the ability of industry to find patients for research. As academic grants run out, or medical professors move to new jobs, many trials just peter out. They're never finished. When that happens, patients can be stranded.
And the scientific world? It doesn't get the answer to the question the trial was supposed to deliver. Citing a U.S. Institute of Medicine report, Howard estimates that perhaps half of all oncology trials are never concluded. "That's absolutely horrifying," she says. The statistics for incomplete trials in less prominent therapeutic areas could well be worse.
In the end, Howard believes, the academic world may need something besides a clinical trial boot camp. She's pondering the need for an online community to bring shared experiences into a single internet location.
Without such a moderated online forum, she fears, hard-won insights into the proper ways to run clinical trials will never filter down to the professors who desperately need them. "It takes a lot of people talking about this type of thing and bringing in their points of view that creates a gestalt of knowledge. That gives you the judgment when to do one thing and when to do another," Howard explains. "We need something like a Wikipedia, where these things can be discussed and curated—where we end up with a body of knowledge that doesn't reflect people's opinions."