This reporter is always disheartened when lawyers are ridiculed. Lately, we've been saddened by the news that a few law schools, just trying to help graduates find employment, have unilaterally granted all their students better grades. A formula here, a few computer key strokes there—and bingo! All of the lawyers just got smarter.

Elastic metrics are not confined to the legal arena. If three organizations working on the same clinical trial have three different benchmarks for the same item, something delivered on October 7th might be simultaneously early, late and punctual. By the same token, if 500 companies in clinical development develop 5,000 different “metrics,” the word begins to get as fuzzy as law student grade point averages.

One perverse result is that metrics with divergent definitions can make it harder to measure what matters. That makes life tricky for sponsors trying to predict contract research organization (CRO) performance across projects or therapeutic areas.

So we were intrigued to get on the phone with Ken Faulkner, VP of medical imaging at Perceptive Informatics. The firm is taking a novel step: using industry-wide benchmarks for its imaging reports. Specifically, it will fold the metrics of the Metrics Champion Consortium (MCC) into its reports.

“We have come to agreement about metrics that would help,” says Faulkner. “We were thrilled with a standard set of metrics. We view it as a positive for the field—that people are held to a certain standard.” Perceptive has announced the move in an official news release.

MCC Endorsement

Perceptive is a division of the Parexel contract research organization, and takes a certain pride in its imaging operation. It believes that reporting out its performance according to the MCC metrics will further establish its reputation. “We are finding our sponsors requesting the MCC metrics, some more than others,” Faulkner says. “[MCC] will become more standard, although there is always customization that happens.” The firm will continue to tweak reports on a project-by-project basis at the request of clients.

The MCC metrics were originally developed around oncology, but have since been broadened into benchmarks for electrocardiograms, labs and trials in general. For now, Faulkner says, some key imaging metrics relate to last patient to database lock; image analysis time; and site eligibility.

Awareness of the MCC is not yet widespread in the industry, Faulkner says, but is growing slowly, especially among biotechnology firms. ClinPage has covered the MCC effort in two earlier articles.

The MCC metrics are intended to ensure that there are no curve balls. “We are just keeping track of the health of the study so that when we get to an interim analysis, we're not surprised by anything,” Faulkner says.

Adjudication Data

Faulkner predicts that the move could spark internal efficiencies at Perceptive, with a main set of metrics to report against. But the company is also welcoming the chance to stand toe to toe with competitors and be compared to them in an apples-to-apples manner. Some metrics, he suggests, are perfectly valid ways to assess competitors. “The most critical ones typically have to do with eligibility,” he says. “It's about turnaround time, whether it's about data or patient qualification.”

The metrics are not just collected for the sake of gathering them. They're used to run projects. As an example, he cites something the company has always worried about. “We monitor our adjudication rate,” Faulkner says. “If you see a large adjudication rate, that would be a sign you don't have the training in place.”

Competitive Juices

Faulkner says some smaller imaging core labs may not appreciate the demands of clinical research as it scales up to larger projects. To win a sponsor's heart, he suggests, some small imaging labs may overstate their capacity. Says Faulkner: “They just don't have the capability of handling these large trials. We are happy to compete on quality. We'll go up against anybody.”

The metrics Perceptive is trying to hold itself to are not nebulous poems of technical jargon and buzz phrases. Rather they are hard, numerical, granular. That could give sponsors confidence that (on the imaging side at least) a trial will proceed smoothly. Since the performance of any CRO depends on its research sites, the metrics could showcase superlative sites or mercilessly expose investigators who are not performing well.

So far, he says, the MCC metrics have not been written into any contracts. If that changes, it would mark another stage in the industry taking the evaluation of its own performance out of its present informal, idiosyncratic, flexible, unmanaged state.

It’s still distant on the horizon, but it’s possible to imagine a more rigorous and consistent metrics environment. If sponsors insisted upon it, all providers of clinical trial technologies and services might have to hew to the same set of yardsticks. That prospect could be delightful to firms that are well-run and worrisome to those at the other end of the bell curve, especially if financial incentives were linked to meeting or missing metrics.