Like those who work in quarries and mines, journalists covering the pharmaceutical industry wear special ear protection while on the job. It's all required by OSHA and has greatly advanced over the years. Certain terms that (for those in the life sciences) are oxymoronic or their own antonyms never reach journalists' battered ears. Other words, which tend to have highly convoluted meanings (think "governance," "real-time," "proactive"), are automatically back-translated into English.

We had to yank out our earplugs yesterday while listening to Janet Woodcock, director of the FDA’s Center for Drug Evaluation and Research (CDER). It turns out that she speaks English fluently. Woodcock was speaking at the 2011 Post Approval Summit, a conference dedicated to Phase IV research. It's held at Harvard's medical school campus and sponsored by Outcome, a technology firm that specializes in registries.

Woodcock has spoken at the Post Approval conference before, and seems to assume a more sophisticated level of comprehension than is typical for FDA presentations at other industry events. Woodcock is now in her 25th year at FDA, but that tenure has not diminished her considerable intellectual spark.

Paper Chase

Woodcock said the line between the science and regulatory activities of pre- and post-approval research is blurring. “We are no longer going to be looking at each of these stages of development—bench, clinic—as separate,” she said. The goal is a “unified” picture of a drug: a single continuum that runs from basic research to registries after therapies reach the market.

Woodcock believes there will be increasing regulatory awareness of, and planning for, real-world usage. That means off-label prescribing by physicians for indications that were never examined in a controlled clinical trial, or even abuse or misuse of medication. Taking such a broad perspective on approving a new therapy is much more complex than what typically happens today in a controlled clinical trial.

No doubt aware that the audience might feel differently, Woodcock stated that she views this merging of pre- and post-approval analysis as a positive shift. Said Woodcock: “This increases its complexity, but I think it is good news because it gets us to knowledge much faster. The availability of electronic data is going to speed this up remarkably.”

Woodcock didn’t dwell on electronic data in her own organization. That might have been wise, with no mandates to move away from paper on any U.S. regulatory horizon.

In 2011, the FDA still accepts drug safety reports via prodigious quantities of paper and fax, which is a bit like the pit crew at a NASCAR race using solid rubber tires or hand-cranked engines from the 1920s. Readers are encouraged to pray that public awareness of antiquated industry and FDA computer systems remains an arcane topic—not the germ of a new drug safety controversy in which someone accidentally misplaces 73 coffee-stained faxes and then rediscovers them years afterward.

Out of Consideration

Woodcock’s presentation wasn’t necessarily stuffed with brand-new information about FDA policies. But she did provide an engaging and largely positive look at the overall drug safety environment. For starters, she said, in vitro or in silico testing of drug-drug interactions or liver toxicity is keeping unsafe drugs from ever reaching human testing, much less FDA evaluation in the regulatory process.

“We are not seeing those drugs,” she said. “The newer drugs are not going to have these liabilities unless they have some wonderful property that makes them be developed anyway.”

Woodcock gently took issue, as FDA speakers invariably do, with the perception that the barriers to drug approval are newly high in the post-Vioxx, post-Avandia era. Older statutes and regulations, she implied, have always included elastic and fluid requirements for industry to assess drug safety. The height of the bar to get new therapies to market has never been static, she suggested. It’s always rising.

New Tools

Yet Woodcock also mentioned a few new pushes. One example is the Tox21 initiative, a National Institutes of Health (NIH) research project that uses high-throughput screening to test large numbers of compounds. She cited another boutique project: the International Serious Adverse Event Consortium (iSAEC), led by Arthur Holden with significant help from prominent sponsors such as Abbott, GSK, JNJ, Pfizer, Roche, Sanofi and Wyeth.

And then there are the mushrooming efforts in personalized medicine that seek to unravel the genetic reasons for adverse events. Woodcock cited peer-reviewed journal publications that are tightly linking specific genetic markers to specific safety issues. Similarly, some 138 INDs in 2010 contained pharmacogenomic information about the compound; her slides suggested similar uptrends in BLA and NDA filings. Pharmacoepidemiological data is probably part of regulatory documents to a greater degree than many in the industry may appreciate.

The upshot of all that effort, it appears, is a safer and more informed response to drug safety. Research into drug safety tools and assays leads to different products, which in turn supports even better regulatory techniques. “This is a feedback loop,” Woodcock said. “We are able to intervene in ways that were unheard of 10 years ago.”

[Image: Janet Woodcock, director of CDER at FDA]

Diary Usage

Woodcock singled out patient-reported outcomes as a special area of FDA interest. In some circumstances, she sounded dismissive of clinician assessments of whether a drug has affected a patient’s sexual function or quality of life during chemotherapy. Said Woodcock: “What really matters is what patients think. We are planning to have more patient input.”

She also addressed the ways that the industry and the agency communicate with patients. Woodcock mocked the traditional paper sheets that accompany consumer prescriptions in the U.S. She joked about the still-common diagrams of a drug’s chemical structure, with each atom duly drawn, as if physicians should study carbon or hydrogen icons before prescribing a pill.

FDA has been pushing for briefer, clearer, more standardized package inserts. “We are working on that very hard,” Woodcock said. “Most grownup countries in the world have [standardized] leaflets. We don't have that in the U.S.” She didn’t note whether the fonts and typography would be required to be readable, as with cereal boxes or other food labels, or would continue to be best viewed with a microscope. She chided the audience on the topic: “I am not happy about how long it is taking to get all the package inserts in the new format,” she added.

Woodcock quickly sketched changes to the agency’s infrastructure and processes to manage hundreds of thousands of spontaneous reports of adverse events. She implied that significant managerial resources at FDA have been thrown at doing a faster, more consistent job of processing such reports. She noted that the FDA was on track to analyze drug safety data from 100 million Americans by 2012, as politicians have demanded.

Push-Button Math

So far, so good. But Woodcock suggested that while better piles of data will be essential and welcome, they will only go so far. There may be no simple, easy way to analyze large quantities of drug safety data without hiring a person with an advanced degree.

In some cases, Woodcock said, the primary constraint is neither statistical nor methodological. Rather, it is incomplete medical knowledge. Citing the cardiovascular risks of diabetes drugs, she said the underlying mechanisms and risk factors are simply unknown at this point. Woodcock concluded: “The whole field needs to get a higher level of rigor.”

In her presentation, Woodcock also spent a bit of time on pharmacovigilance techniques she described as “push button.” There is no doubt that the worlds of clinical trials (with predetermined plans for statistical analysis) and epidemiology (with a more fluid, dynamic approach to selecting the right math) are distinct. Woodcock comes from the former realm.
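For the uninitiated, “push button” pharmacovigilance usually means a disproportionality screen: a canned statistic computed over a database of spontaneous reports, with no bespoke epidemiological modeling. Here is a minimal sketch of one such statistic, the proportional reporting ratio (PRR); the drug name and every count below are invented for illustration and come from no FDA database.

```python
# A minimal sketch of a "push-button" disproportionality screen: the
# proportional reporting ratio (PRR). All counts are invented for
# illustration; "DrugX" is hypothetical.

def prr(a: int, b: int, c: int, d: int) -> float:
    """PRR from a 2x2 table of spontaneous adverse event reports.

    a: reports of the event of interest for the drug of interest
    b: reports of all other events for the drug of interest
    c: reports of the event of interest for all other drugs
    d: reports of all other events for all other drugs
    """
    rate_drug = a / (a + b)      # event rate among the drug's reports
    rate_others = c / (c + d)    # event rate among everyone else's reports
    return rate_drug / rate_others

# Hypothetical counts: 40 liver-injury reports out of 2,000 for DrugX,
# versus 300 out of 100,000 for every other drug in the database.
signal = prr(a=40, b=1_960, c=300, d=99_700)
print(f"PRR = {signal:.1f}")  # ~6.7; a common screening rule flags PRR >= 2
```

The appeal is obvious: no protocol, no covariates, no epidemiologist. Whether such canned screens actually find true safety signals is an empirical question, and one the agency has helped put to the test.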

As one push-button effort, Woodcock cited the Observational Medical Outcomes Partnership (OMOP), which is administered through the Foundation for the NIH with FDA involvement. Industry is also participating; Woodcock herself chairs the effort’s executive board.

OMOP Project

By Woodcock’s account, OMOP researchers have tried to combine vast stores of medical data and a long list of respected statistical techniques. The goal has been to see if well-established statistical tools could automatically detect subtle drug safety signals or verify an epidemiological hypothesis. “I see some people in the audience laughing,” Woodcock said. “You were right.” No single method evaluated by OMOP, in short, was particularly insightful. It’s unclear whether any of the OMOP research will be published in peer-reviewed journals.
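Woodcock’s point is easy to demonstrate in miniature. In the sketch below, three textbook screening rules are applied to the same invented 2x2 table of spontaneous reports and return conflicting verdicts; the counts and thresholds are illustrative assumptions, not OMOP’s actual data or methods.

```python
# Three common screening rules applied to one invented 2x2 table of
# spontaneous reports. The counts are deliberately borderline so the
# rules disagree; nothing here reflects real FDA or OMOP data.
a, b = 3, 997        # drug of interest: event reports, all other reports
c, d = 120, 98_880   # all other drugs: event reports, all other reports
n = a + b + c + d

prr = (a / (a + b)) / (c / (c + d))  # proportional reporting ratio
ror = (a * d) / (b * c)              # reporting odds ratio

# Full Pearson chi-square over the 2x2 table.
chi2 = sum(
    (obs - exp) ** 2 / exp
    for obs, exp in [
        (a, (a + b) * (a + c) / n),
        (b, (a + b) * (b + d) / n),
        (c, (c + d) * (a + c) / n),
        (d, (c + d) * (b + d) / n),
    ]
)

print(f"PRR={prr:.2f}  ROR={ror:.2f}  chi2={chi2:.2f}")
print("PRR >= 2: ", "signal" if prr >= 2 else "no signal")   # signal
print("ROR >= 2: ", "signal" if ror >= 2 else "no signal")   # signal
print("chi2 >= 4:", "signal" if chi2 >= 4 else "no signal")  # no signal
```

If off-the-shelf rules can disagree even on a toy table, it is not hard to see why a battery of methods run against real observational databases failed to crown a winner.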

Woodcock doesn’t appear especially optimistic that push-button, database-driven drug safety will soon prove valuable for either regulatory or industry professionals. Despite the poor OMOP results, she added, enthusiasm for the automated techniques remains high in certain circles. “There is a huge group of people who believe that you can do this—once you amass the data, and push a button, you get truth. This put some boundaries on that,” she said.

Woodcock faced more technical questions from the audience than are typically heard at industry events. She had no difficulties. One epidemiologist asked whether her talk had unfairly characterized the field’s methods, but Woodcock did not yield an inch.

Indeed, Woodcock referenced the differences between clinical trial math and epidemiological math, and displayed a touch of tartness that isn’t always visible when FDA officials are at the podium. “I have trouble where people went through fifteen iterations [of the statistical analysis plan] and then came up with a result,” she said.
