Not long ago, the Drug Information Association (DIA) and the FDA’s two most important centers, the Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER), held a joint meeting. Its title was less than intuitive: “Computational Science Annual Meeting.”

But the goal of the meeting was laudable and significant to everyone in clinical research. The idea was to explore how technology, broadly defined, could help the agency become more efficient. Unlike many business-oriented pharmaceutical conferences, this gathering was intended to be a working, practical meeting.

The take-home message was sobering. FDA speakers clearly suggested that the lack of data standards, adequate computing power and appropriate informatics tools is hindering the agency’s ability to fulfill its public mandate. That mandate is vast, and not always fully supported with adequate dollars from Congress.

Incomplete Toolbox

The FDA knows that it needs significant, industrial-strength computing power and software to support the analysis of:

• Lab data, including molecular modeling, microarrays and proteomic analysis

• Simulations and Monte Carlo modeling

• Large database and surveillance analyses

• Bayesian analyses of large clinical databases (e.g., detecting potential safety signals by comparing new submissions to huge historical databases; a rough sketch of the idea follows this list)
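To make that last bullet concrete, here is a minimal, purely hypothetical sketch (not an FDA method) of one such comparison: a beta-binomial model of an adverse event rate in a new submission checked against a pooled historical rate, using Monte Carlo draws from the posteriors. All of the counts are invented.

```python
# A minimal, hypothetical sketch (not an FDA method): a beta-binomial check
# of an adverse event rate in a new submission against a pooled historical
# rate, using Monte Carlo draws from the posteriors. All counts are invented.
import numpy as np

rng = np.random.default_rng(0)

new_events, new_n = 42, 1_200        # hypothetical current submission
hist_events, hist_n = 310, 15_000    # hypothetical pooled historical database

# With a flat Beta(1, 1) prior, each posterior is Beta(events + 1, non-events + 1)
new_rate = rng.beta(new_events + 1, new_n - new_events + 1, 100_000)
hist_rate = rng.beta(hist_events + 1, hist_n - hist_events + 1, 100_000)

# Posterior probability that the new product's event rate exceeds history
p_signal = (new_rate > hist_rate).mean()
print(f"P(new rate > historical rate) ~ {p_signal:.3f}")
```

Scale that idea up to thousands of event types across millions of historical records, and the agency’s appetite for serious computing power becomes obvious.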

Naturally, there are political dimensions to how the agency will obtain these resources amid serious federal budget challenges. But it is some small measure of progress that FDA also recognizes what industry needs: clear direction about clinical data standards. (The agency acknowledged the difficulties caused when it changes its priorities.)

In that regard, the agency tried to clear up confusion about long-promised transitions away from the Clinical Data Interchange Standards Consortium's (CDISC) Study Data Tabulation Model (SDTM) and SAS transport submission standards. While HL7 submission standards will be developed over time, they will not be required until industry has had ample time to evaluate and adjust to them.

The five-year IT plan in the PDUFA IV appendix will be changed accordingly. Some future, indeterminate migration to HL7-based standards and away from SAS XPORT Version 5 appears inevitable and non-negotiable; the FDA’s parent agency, the U.S. Department of Health and Human Services, has decreed that HL7 will be the standard.

Pushing Industry

In the meantime, FDA strongly encouraged industry participants at the meeting to adopt CDISC standards throughout their processes rather than trying to map to the standards at the end of a trial, which is expensive and risks changing the meaning of the underlying data. This recommendation includes, at a minimum, using the Clinical Data Acquisition Standards Harmonization (CDASH) standard for case report form (CRF) content and SDTM for electronic submission to the agency.
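To show what “standards throughout the process” can look like in practice, here is a toy sketch of a single vital-sign measurement carried from CDASH-style CRF fields into the SDTM VS domain. The variable names follow the published standards as generally understood; the study identifiers and values are hypothetical.

```python
# A toy illustration, not a validated mapping: one vital-sign record carried
# from CDASH-style CRF fields into SDTM VS variables. Study identifiers and
# values are hypothetical.
crf_record = {             # data as collected on the CRF (CDASH-style names)
    "SUBJID": "1001",
    "VSTESTCD": "SYSBP",   # vital signs test short code
    "VSORRES": "128",      # result as originally collected
    "VSORRESU": "mmHg",    # original units
    "VSDAT": "2009-06-15", # collection date as recorded
}

sdtm_vs = {                # the same observation restructured for SDTM VS
    "STUDYID": "ABC-123",
    "DOMAIN": "VS",
    "USUBJID": "ABC-123-" + crf_record["SUBJID"],  # unique subject identifier
    "VSTESTCD": crf_record["VSTESTCD"],
    "VSORRES": crf_record["VSORRES"],
    "VSORRESU": crf_record["VSORRESU"],
    "VSSTRESN": float(crf_record["VSORRES"]),      # standardized numeric result
    "VSDTC": crf_record["VSDAT"],                  # ISO 8601 date
}
print(sdtm_vs)
```

Collecting the data in this shape from the outset makes the final restructuring largely mechanical; doing it retrospectively, after the CRFs are designed and the trial is run, is where the expense and the risk of distorting meaning creep in.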

After a period of relative quiet, FDA appears to be getting back into a more active role in standards development. It promised participants that it would actively work with industry to ensure that the results meet the needs of government and sponsors. The FDA is dedicating both administrative and scientific staff to its informatics and standards projects.

There was some bad news for CDISC adherents. While CDISC standards are a step toward the right solution, they may be too flexible, too malleable to the demands of individual companies or programmers.

Excessive Flexibility?

That looseness allows the creation of datasets that vary too much for the FDA’s analysis tools and software to process consistently. Yes, a dataset may technically earn some minimal designation of “SDTM-compliant” by dumping non-CDISC-defined data into supplemental qualifier datasets (SUPPQUALs), but because such submissions do not properly adhere to the structure and naming conventions, they do not necessarily help the agency work more efficiently or analyze massive submissions quickly. Resolving these challenges may require limiting the flexibility of the model, developing new standards or requiring companies to use the naming and structuring rules when submitting new data domains.
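As a hedged illustration of that “dumping” pattern, here is roughly what a sponsor-defined qualifier pushed into a supplemental dataset looks like. The record layout follows the SUPPQUAL convention; the qualifier name (QNAM) and the values are invented.

```python
# A sketch of the "SUPPQUAL dumping" pattern: a sponsor-defined qualifier
# placed in a supplemental dataset rather than a standard variable. The
# layout follows SUPPQUAL conventions; the QNAM and values are invented.
suppae_record = {
    "STUDYID": "ABC-123",
    "RDOMAIN": "AE",             # parent domain this row qualifies
    "USUBJID": "ABC-123-1001",
    "IDVAR": "AESEQ",            # key linking back to the parent AE record
    "IDVARVAL": "3",
    "QNAM": "AETRTEM",           # sponsor-coined name for a custom flag
    "QLABEL": "Treatment Emergent Flag",
    "QVAL": "Y",
}
print(suppae_record)
```

The row is legal, but a reviewer’s software cannot know in advance what QNAM values each sponsor will coin, and that is exactly the kind of variability that defeats standard analysis tools.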

There were five breakout sessions at the meeting, intended to identify and prioritize different sets of tools that should be developed. The sessions focused on non-clinical preapproval needs, clinical preapproval needs, post-market safety requirements, product quality and data quality. Notes were taken in each session to allow progress on the issues over time. The attendees were assured that the notes would be published, although it was not clear where.

Nascent Efforts

There were also presentations on a variety of efforts that show the government is moving slowly toward more standards and more electronic, automated ways of interacting with industry. These projects may be small in scale at present but telegraph how the FDA is imagining its own future data landscape. The initiatives include the caBIG informatics projects, the FDA's Janus safety data warehouse, plans for CDISC to collaborate on standards development with a new FDA Computational Sciences Center, and an effort to explain data standards to those outside a tight circle of fervent partisans.

In a keynote address, Janet Woodcock, director of CDER, described the agency’s current workload and baldly conceded that the agency still needs more robust informatics capabilities. The word “swamped” was used with ample justification. Her 3,000 employees at CDER annually handle nearly 500,000 spontaneous adverse event reports; trial meta-analyses with tens of thousands of patients; more than 5,000 manufacturing supplements; 300,000 import requests at over 300 import points; 800 ANDAs for generics; 10,000 generics amendments and 1,600 generics label changes; 5,728 INDs with ongoing activity; 1,977 IND/NDA meeting requests; and 3,189 submission supplements.

What To Do Now

Needless to say, some of the above activity could be automated or streamlined. But that will only be feasible if the industry and the agency are well aligned not only technologically but also terminologically and conceptually. At the moment, sadly, FDA appears to lack the necessary computing power, applications and databases to meet such a daunting challenge.

Yet it’s not too early for industry to support and extend the excellent work of CDISC even further. Such an effort would involve anticipating what FDA will need, and the agency appears to be open to such discussions.

One aspect of this is especially forward-looking and welcome. Both at individual companies and at an industry-wide level, there is a new and embryonic discussion of what quality in clinical trials means. No doubt quality in clinical research will look quite different from the way it does in manufacturing. But if we begin to define it in the research arena, we will be well prepared to help regulatory bodies anticipate and manage their responsibilities in ways that benefit human health and allow innovative treatments to reach the market as efficiently as possible.

Kit Howard helps life science companies and academic research organizations optimize their clinical trials by providing data standards, data quality and data management consulting and education. Kit is a Registered Solutions Provider at CDISC and a member of the CDASH team, where she co-leads its medical devices effort. She is also a Certified Clinical Data Manager through the Society for Clinical Data Management (SCDM).