
What do world-class universities and too many of today’s trials have in common?

They’re really hard to get into.

That’s as worrisome a problem for clinical researchers as it is for high school seniors. Stringent criteria make recruitment more difficult, a burden for sites, sponsors, and even statisticians. Even if enrollment does proceed apace, limitations crop up at the analysis stage: the stricter the qualifications, the less generalizable the results. The mantra is common by now: we need to open the gates wider if we want to gather robust, real-world evidence of how new treatments impact disease.

Yet inclusion and exclusion criteria aren’t going anywhere, especially in early-phase trials. Investigators who fail to evaluate patients against eligibility criteria compound the risk inherent in all interventional trials. Good experimental science requires criteria, too. Imagine you’re launching a one-year study to test whether once-a-day ColdFreeze reduces occurrences of upper respiratory tract infection. You can’t give all your spots to those who suffered a lot of colds in the prior year: patients selected for an extreme year will tend, on average, to have a more typical one the next year, treatment or no treatment. When the positive results pour in, it won’t be ColdFreeze that deserves the headlines, but regression to the mean.

Of course, most of us are quickly out of our depth when it comes to drafting criteria for a particular study. This is where biostatisticians shine. Eventually, though, those criteria need to inform study operations. How should we approach eligibility at the level of data management and workflow optimization? Here, I’ll offer some best practices and share an example eligibility form that puts them into action.


#1 Make eligibility a form.

We’ll start with the obvious. A form that comprises your inclusion and exclusion criteria does more than drive protocol compliance. It makes confirming eligibility easier, which saves monitoring time and cost. A well-designed eligibility eCRF also encourages your CRCs (clinical research coordinators) to review the criteria in a predetermined order: the order you’ve established to maximize efficiency.


#2 Fail early. (And cheaply.)

No one likes to invest a lot of time or money in a project that will eventually hit an impasse. But when it comes to trials, there’s more at stake than frustration or finances. Quickly disqualifying a potential participant frees up more time for the site to:

  • match that patient with suitable care or alternative research, and
  • screen more patients who may be a fit for your trial.

That’s a win for all parties. But how do you ensure that disqualification, if it is going to occur, occurs early in the evaluation? Here we’re faced with an optimization problem that includes three key variables:

  1. which criteria are most likely to disqualify a patient,
  2. which are the quickest to evaluate, and
  3. which are the most cost-effective to evaluate.

Let’s start by considering the happy case where those three properties coincide. Suppose that participation in your neuroscientific study is restricted to the chosen few of us who are left-handed. A CRC working on your study could disqualify 90% of randomly selected persons in the time it takes to witness a signature. The cost? A piece of paper. So there’s excellent reason to make the handedness criterion the first one assessed, and thus the first item on your form. (You’re not obliged to follow the protocol’s order of criteria. You just need to ensure that your form applies them all.)

Protocols often bury their most productive disqualifier deep within the exclusion list. That’s a rational strategy if evaluating that “buried” criterion is unduly time-consuming. Suppose that five 12-minute tests, conducted serially, each pose a 10% chance of disqualifying a study candidate. In that case, it makes sense to conduct those tests before an hour-long one, even if that longer test comes with a 40% chance of disqualification. With the five-test battery, the chance of disqualification after an hour is actually 41% (1 – .9^5 ≈ .41). That’s not a big improvement by itself. But note also that, because testing stops at the first failure, the average time to reach a qualification decision using the five-test method is a little over 49 minutes. Think what could be accomplished in the 11 minutes saved per patient, amassed over hundreds of patients.

But what if those five tests each cost $20 to conduct, while the single test with the 40% failure rate costs $60? Suddenly, that one-point boost becomes a lot less appealing. Why? Assume again that sites would conduct the five tests sequentially, stopping after the first failure. Sites would then need to test the average candidate a little more than four times. (10% of patients will take and fail exactly one test, 9% will pass the first test but fail the second, and so on. The average for all comers is about 4.1 tests.) Ultimately, the site or sponsor would spend roughly 37% more in hard dollars using the five-test method (about $82 versus $60 per candidate) to disqualify roughly the same number of candidates.
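If you’d like to verify the arithmetic, here’s a minimal Python sketch of the expected-value calculation. All figures are the hypothetical ones from the example above, not from any real study.

```python
# Five serial 12-minute, $20 tests, each with a 10% chance of
# disqualifying a candidate; testing stops at the first failure.
p = 0.1                      # per-test probability of disqualification
n = 5                        # number of tests in the battery
minutes, dollars = 12, 20    # duration and cost of each test

# Chance the battery disqualifies a candidate: 1 - 0.9^5.
p_disqualify = 1 - (1 - p) ** n

# Expected number of tests: fail on test k having passed the first k-1,
# or sit all five (the fifth is taken whether it's passed or failed).
expected_tests = sum(k * ((1 - p) ** (k - 1)) * p for k in range(1, n)) \
    + n * (1 - p) ** (n - 1)

print(f"P(disqualify):  {p_disqualify:.1%}")                  # 41.0%
print(f"Expected tests: {expected_tests:.2f}")                # 4.10
print(f"Expected time:  {expected_tests * minutes:.1f} min")  # ~49.1
print(f"Expected cost:  ${expected_tests * dollars:.2f}")     # ~$81.90 vs $60
```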

Generally, then, criteria that are likely to disqualify, as well as inexpensive and quick to evaluate, ought to come first. Costly, time-consuming, and less decisive criteria should fall further down the list. Inevitably, trade-offs will occur. Which should take priority: criteria that are easy to check and somewhat likely to disqualify, or difficult to check but very likely to disqualify? In these cases, perfect mathematical rigor may be impossible. Often, it’s not even necessary; most criteria can be assessed simply by consulting the patient’s chart. But thinking like an economist, even an amateur one, about how to fail early and cheaply could pay big dividends for everyone involved.
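For those trade-offs, one classical rule of thumb for independent, serial checks that stop at the first failure is to order them by cost divided by probability of disqualification, so that cheap, decisive checks run first. A minimal sketch, with made-up figures:

```python
# Order eligibility checks by dollar cost per unit of disqualification
# probability (ascending). All names and figures are illustrative.
criteria = [
    {"name": "left-handedness", "cost": 0.01, "p_disqualify": 0.90},
    {"name": "chart review",    "cost": 5.00, "p_disqualify": 0.30},
    {"name": "lab panel",       "cost": 60.0, "p_disqualify": 0.40},
    {"name": "cognitive test",  "cost": 20.0, "p_disqualify": 0.10},
]

for c in sorted(criteria, key=lambda c: c["cost"] / c["p_disqualify"]):
    print(f'{c["name"]:16} ${c["cost"]:>6.2f}  p={c["p_disqualify"]:.0%}')
# left-handedness, chart review, lab panel, cognitive test
```

Time can be folded into the same ranking by pricing a minute of site effort in dollars before sorting.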


#3 Make your forms carry out the logic.

Speaking of failure, think for a moment about the human brain. Specifically, think about its capacity to carry out any logical deduction without flaw, time and again, against a background of distractions and even urgent medical issues.

It doesn’t have the best track record.

Research coordinators typically boast sharper-than-average minds, especially if they’re left-handed. But even they could benefit from a reliable aid in parsing all of the and’s, or’s, and not’s scattered throughout your study’s eligibility criteria. A good form serves as that aid. Consider the following inclusion criteria, taken from a protocol published on clinicaltrials.gov.

[Image: inclusion criteria #1 and #2, excerpted from the published protocol]

Inclusion criterion #1 is straightforward enough. (Although even there, two criteria are compounded into one.) By contrast, there are countless ways of meeting, or missing, criterion #2. It’s easy to imagine a busy CRC mistaking some combination of metformin dose and A1C level for a qualifying one, when in fact it isn’t.

But computing devices don’t make these sorts of errors. All the software needs from you is the right logical expression (e.g., criterion #2 is met if and only if A and B are both true, OR C and D are both true, etc.). Once that’s in place, the CRC can depend on your form to deliver perfect judgment every time. Best of all, that statement can live under the surface of your form. All the CRC needs to do is provide the input that corresponds to A, B, C, and D. The form then evaluates the logic instantly, invisibly, and flawlessly.
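Here’s a minimal sketch of what such a hidden statement might look like, in Python for concreteness. The (A and B) or (C and D) shape mirrors the example above; the input names and thresholds are hypothetical placeholders, not the published protocol’s actual values.

```python
# The CRC records atomic inputs; the form evaluates the compound logic.
# Thresholds and names below are illustrative placeholders only.
def criterion_2_met(on_metformin: bool, metformin_dose_mg: float,
                    a1c_pct: float, diet_controlled: bool) -> bool:
    a_and_b = on_metformin and metformin_dose_mg >= 1500 \
        and 7.0 <= a1c_pct <= 10.0
    c_and_d = diet_controlled and 7.5 <= a1c_pct <= 10.5
    return a_and_b or c_and_d

# Four hypothetical candidates; the user never touches the logic itself.
for inputs in [(True, 2000, 8.1, False),   # qualifies via the metformin arm
               (True, 1000, 8.1, False),   # dose too low: disqualified
               (False, 0, 7.8, True),      # qualifies via the diet arm
               (False, 0, 11.2, True)]:    # A1C out of range: disqualified
    print(inputs, "->", criterion_2_met(*inputs))
```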

The form snippets below show criterion #2 applied to four sets of inputs. Nowhere is the user asked to determine the truth of the compound statement built up out of those or’s and and’s. Rather, the form consults a truth table “behind the scenes” to return the result.

[Image: form snippets showing criterion #2 evaluated against four sets of inputs]

Form engines will vary in the syntax needed to build up these logical formulas. But the concepts themselves are either already familiar to you or easily grasped. So build reasoning into your forms and spare your sites all that deductive work!


#4 Move beyond ‘yes’ or ‘no.’

The practices offered so far place a premium on in-clinic efficiency: getting to the right answer quickly for a particular study participant. But eligibility eCRFs can serve another goal. As long as your form is collecting the “logical inputs” described above, you’ll eventually gather a mass of fine-grained data about study candidates who did not meet eligibility. And that data is worth your consideration if the protocol ever needs to be amended. Was the range set by the hemoglobin criterion broad enough? Is a medical history that precludes hypertension really necessary for evaluating safety and efficacy? If so, maybe version 2 can modify those criteria.
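As a rough sketch of that downstream review (the field names and records here are hypothetical), tallying which criteria actually drove screen failures is straightforward once the form captures the logical inputs:

```python
from collections import Counter

# Hypothetical screen-failure records: one dict of logical inputs per
# disqualified candidate, as captured by the eligibility form.
screen_failures = [
    {"hemoglobin_in_range": False, "no_hypertension_history": True},
    {"hemoglobin_in_range": True,  "no_hypertension_history": False},
    {"hemoglobin_in_range": False, "no_hypertension_history": False},
]

# Count how often each criterion contributed to a disqualification.
failed = Counter(criterion
                 for candidate in screen_failures
                 for criterion, met in candidate.items()
                 if not met)
print(failed.most_common())
# [('hemoglobin_in_range', 2), ('no_hypertension_history', 2)]
```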

Of course, as with any statistical consideration, it’s easy to get in trouble. You (or, more likely, your biostatistician) will need to consider carefully whether any changes in eligibility will undermine your study’s ability to test its explicitly stated hypothesis. Recall, too, that if you’re working with data from only some of your disqualified candidates (i.e., those for whom the form was completed), that data may not be representative of all disqualified candidates, much less of the target population at large. In other words, more data isn’t always better, and there are regulations in place (valuable ones) to stop us from collecting data that isn’t germane to a study. It’s critical that you and your team hold these caveats in mind. But even aside from statistical inferences, collecting eligibility data “from the ground up” can dispel a good deal of doubt about whether a candidate really did meet or miss a criterion. That’s a tremendous boon to your monitors.

“Of course we need to uphold eligibility criteria. But is the form worth this much thought?”

That’s an understandable response. But as data managers and study operations professionals, it’s up to us to squeeze every drop of efficiency and quality out of the work we do. We owe it to our sites and patients. Consent and eligibility constitute the foundation of their study journey. There’s no better place to start making good on our responsibility.


Bring the practices above to life with the example form below. For more on designing forms that capture better data, faster, view our on-demand webinars from December 2018.

Looking for more best practices? See our guides on KPIs, ePRO form design, and more here in our success guide library.