“Unless someone like you cares a whole awful lot, nothing is going to get better. It’s not.”  — Dr. Seuss

Years of college and training. Professional certification. Memorizing what seems like an entire periodic table of data management acronyms: CDISC, CDM, CRF, eCRF, EDC, SOP, UAT. Tests. More tests. Clinical data managers spend their careers ensuring the accuracy and integrity of clinical trial data. It’s a bit ironic, then, that perhaps the most important CDM test is one that we are supposed to fail.

User Acceptance Testing (UAT) is the process of testing CDM software. UAT is the last step along the path to a live study launch. It’s the 11:30 AM seminar speaker who is the only thing standing between you and lunch. The proximity of UAT to study launch is unfortunate. Our collective mindset at this stage screams, “Can’t we just get on with it?” The necessity of UAT, however, cannot be overstated. Done well, two weeks of UAT will save the clinical data manager months of headaches in post-collection data cleaning.

Breaking Bad: Why should we care about UAT?
The obvious answer is that we care about data accuracy and integrity. This answer is specious. Of course we care; caring is why we will diligently (and manually) correct errors after the fact. If bugs, missing form logic, or incorrect form logic are not caught until the end of the study, we will dive in and dutifully correct hundreds of data points without hesitation.

The correct answer as to why we should care about UAT, therefore, is that UAT saves time. Breaking things before we start protects us from having to fix things down the road. We’re doing our future selves a favor.

An Ounce of Prevention (UAT Best Practices)
To reap the benefits of UAT, you need to take the time to develop a thorough testing plan. Yes, it’s cathartic to just start hacking away like they do on HGTV home renovations, but we are striving for a more targeted probing of the data platform. Poking, not smashing. We need a plan of attack that focuses on key areas of risk.

During UAT planning:

Don’t reinvent the wheel. It is possible to invest an unlimited amount of time in testing, so limit your UAT scope to areas of the system that would not have been covered by the documented validation testing the software vendor has already carried out. Do you need to test that a date field only allows a date to be entered? Do you need to verify that all entered data appear in an export? Items like these were probably covered in earlier stages of vendor testing (e.g., performance qualification (PQ) testing). Instead, focus your UAT scope on the custom configuration you have built in the software platform. For example, the following types of questions should be addressed in UAT:

  • Are user permissions set correctly?
  • Do forms collect the right study data?
  • Are data validations functioning correctly? (For example, what happens if we enter a BMI of 236 instead of 23.6? See the sketch after this list.)
  • Are calculated fields showing the right data?
  • Do form logic and rules you have defined work as intended?
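The last three questions lend themselves to concrete, repeatable test cases. Below is a minimal sketch, in Python, of the calculated-field and range-check behavior a tester might probe. The field names, the formula’s placement, and the plausibility range are assumptions for illustration only; the real checks live in your EDC platform’s rule engine.

```python
# Hypothetical model of an EDC calculated field and its edit check.
# Field names and the plausibility range are illustrative assumptions,
# not taken from any specific EDC platform.

def derived_bmi(weight_kg: float, height_cm: float) -> float:
    """Calculated field: BMI = weight (kg) / height (m) squared."""
    height_m = height_cm / 100
    return round(weight_kg / height_m ** 2, 1)

def bmi_range_check(bmi: float, low: float = 10.0, high: float = 80.0):
    """Validation rule: return a query message when BMI is implausible."""
    if not low <= bmi <= high:
        return f"Query: BMI {bmi} is outside the plausible range {low}-{high}."
    return None  # value passes silently

# UAT-style checks: a transcription slip (236 instead of 23.6) should fire a query.
assert bmi_range_check(23.6) is None   # clean value passes
assert bmi_range_check(236) is not None  # implausible value fires a query
assert derived_bmi(70, 175) == 22.9    # 70 / 1.75^2 = 22.857... -> 22.9
```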

Define the risks. You can further enhance the effectiveness of your UAT by identifying which parts of the study are most critical. For example, which fields, logic, and workflows support safety? Which support primary endpoints? What data show inclusion/exclusion compliance? Be sure to define robust tests for these areas in your UAT plan, along the lines of the sketch below.
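As an illustration of what “defining the risks” can look like on paper, here is a hypothetical risk-to-test map. The study areas, forms, and expected behaviors are all invented, not drawn from any particular protocol:

```python
# Hypothetical risk-to-test map for UAT planning. Every form, field,
# and expected behavior below is an invented example.
risk_based_tests = {
    "Safety (Adverse Event form)": [
        "Enter an AE onset date before informed consent; expect a date query.",
        "Mark an AE as serious; expect the SAE workflow to trigger.",
    ],
    "Primary endpoint (Week 12 assessment)": [
        "Leave the endpoint measurement blank; expect a missing-data query.",
        "Enter a value outside the validated range; expect an edit check to fire.",
    ],
    "Inclusion/exclusion (Screening form)": [
        "Answer an exclusion criterion 'Yes'; expect a screen-failure flag.",
    ],
}

# Quick sanity check that every risk area has at least one planned test.
for area, tests in risk_based_tests.items():
    print(f"{area}: {len(tests)} planned test(s)")
```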

Identify the Users. It’s great that you’re willing to get your own hands dirty, but the “U” in UAT isn’t texting shorthand for “you”. It’s “user”, which could be you but is more likely a different member of the research team. We need to find these users, then bribe (er…reward) them for participating. Create a friendly competition, or individual bounties for identifying flaws and errors. More importantly, make sure your new “users” have been given a demo of the system’s functionality and understand the workflow itself. Far too often, UAT “findings” are raised by “users” working the system sight unseen, which only produces erroneous findings to sift through later. Finally, if you failed to include someone from Stats in your design period (tsk-tsk, design best practice oversight alert), don’t leave them out here as well. Stats feedback during UAT gives them a seat at the table to help solidify the expected output. If Stats, or more specifically the Stats programmer dealing with study data output, gives their approval here, you’ve got a friend for life!

Document Testing Results. Create a bug tracking form or error reporting tool. At a minimum, we want the user to report:

  • where they encountered the bug, spelling error, missing data options, etc. (“Where” as in form name and field name)
  • what they expected to happen (“What” exact steps did they take, and what exact data did they enter into the field?)
  • what actually happened (“What” error message did they receive, what rule should have fired but failed, what data options did they expect to see that are MIA?)
  • what priority applies to this finding (think in terms of high, medium, and low). High is the “show-stopper” stuff: there is no way you can go live unless this is resolved. Picture a flashing stop sign. Low, by contrast, may be something that applies to a later visit and would not impact your first patient, first visit (FPFV). Why is this distinction important? UAT is often carried out in crunch mode (we know, understatement of the year!), but the ability to negotiate which findings must be addressed before FPFV versus those that can wait for a post-go-live fix may mean the difference between hitting and missing a very important milestone.

To encourage as much feedback as possible, give your testers a simple and straightforward way to log results. You don’t need a fancy ticketing system. A simple spreadsheet can work great.

For example, a single logged finding might look like this (the form, field, and behavior are invented for illustration):
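  • Where: Vital Signs form, BMI field
  • What I expected: entering 236 should fire the out-of-range query
  • What actually happened: the value saved with no query or warning
  • Priority: High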

Get User Feedback. This step is separate from bug tracking. Your users just spent days kicking the tires on your shiny new system. You’re missing a golden opportunity if you don’t separately ask about their overall user experience. And while you may capture some aspects of the experience through surveys, you’ll likely get more useful feedback through personal interviews. Here are some questions you might include in such interviews:

  • What do you find easiest/most intuitive?
  • What do you find challenging or confusing?
  • What is one thing you’d recommend we improve?

If we can be of any help with UAT or anything else, let us know. Thank you.
