Each sponsor and its partners enter the Risk-Based Monitoring (RBM) process from a different starting point. Now that the new version of GCP (the ICH GCP E6 (R2) addendum) mandates risk-based thinking, all of us in clinical trials need to embrace risk-based processes.

What exactly does this mean?

We need to begin implementing risk assessment, risk measurement, task-level action, and, in particular, tighter linkage of Data Management and ClinOps. At CROS NT, we've worked with a variety of teams on implementing RBM.

Risk-Based Monitoring Implementation

Case Study: Implementing RBM from a Data Perspective

CROS NT and one of its sponsors started fairly simply. For its pilot RBM experience, the sponsor chose a quite low-risk study: an open-label extension, but one that was key on the development path for a particular CNS candidate molecule. The key data were safety information on 24-month exposure and real-world experience with concomitant medications.

The team decided to implement RBM initially as a metrics-based approach focused on the study's readily available EDC data. CROS NT and the sponsor began with a series of meetings reviewing the TransCelerate RACT (Risk Assessment and Categorization Tool) model and adapted it to our needs. We reviewed the CRF together as a team and pulled out 8-10 key variables for our risk-based metrics. This process encompassed a month of weekly meetings.

We set a goal of fewer than 10 key data fields to put into our risk plan. We purposefully did not want to over-complicate our first experience with risk-based tools when deciding on a data review plan for setting clinical monitoring priorities.

Initially we focused on a metrics- or threshold-based analysis. We selected the key measures you would expect, including adverse events and serious adverse events, along with other measures such as concomitant medication usage and an investigator's global impression score.

After selecting the appropriate data, CROS NT built a series of queries in the sponsor's EDC system to produce the raw metrics data. We interpreted these data and merged them with the historical risk-based metrics to maintain a three-month rolling view of the key variables. Where significant changes appeared in the month-to-month comparison, we color-coded the data to highlight new or changed information.
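As a rough illustration of the rolling review just described, the sketch below keeps the last three months of one metric and flags month-over-month changes for highlighting. The metric shown, the three-month window, and the 25% change threshold are illustrative assumptions for this example, not the actual study's configuration.

```python
# Illustrative sketch of a rolling three-month metrics view; the metric,
# window size, and 25% change threshold are assumptions for this example.

FLAG_THRESHOLD = 0.25  # flag month-over-month changes greater than 25%

def rolling_view(monthly_counts, window=3):
    """Keep only the most recent `window` months of (month, count) pairs."""
    return monthly_counts[-window:]

def flag_changes(monthly_counts, threshold=FLAG_THRESHOLD):
    """Return (month, prior, current, flagged) for each month-to-month step."""
    flags = []
    for (prev_month, prev_value), (curr_month, curr_value) in zip(
        monthly_counts, monthly_counts[1:]
    ):
        if prev_value == 0:
            flagged = curr_value > 0  # any new occurrence merits a look
        else:
            flagged = abs(curr_value - prev_value) / prev_value > threshold
        flags.append((curr_month, prev_value, curr_value, flagged))
    return flags

# Example: serious adverse event counts per month at one site
sae_by_month = [("Jan", 2), ("Feb", 2), ("Mar", 5), ("Apr", 5)]
recent = rolling_view(sae_by_month)  # last three months only
changes = flag_changes(recent)
# The Feb-to-Mar jump (2 -> 5) is flagged and would be color-coded for review
```

In practice a plan like the one described would track several such measures side by side, but the flag-on-change logic is the same per metric.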

The team had an iterative process for the first few months. We needed to balance the goals of the analysis against data availability. In a few cases, we learned that some of our anticipated measures could not easily be derived from the EDC data without revisions to the system configuration. We opted not to invest in reworking the EDC setup to make some of the metadata available to the risk-based metrics analysis. In the future, the sponsor will implement a standardized approach to EDC system setup that will provide better data for the metrics analysis.

We spent two months in a planning and definition phase and have been producing and using the metrics analysis for the last 10 months. The joint CRO and sponsor team has learned a lot together about the terminology, the process, and the usefulness of a risk-based approach to data review and monitoring practices. This close collaboration is key to a successful RBM implementation.

Next blog: As you probably know, the metrics approach is really only half of the RBM story.

To reduce the amount of SDV, the FDA recommends combining a metrics or threshold approach with a statistical approach. The theory is that the metrics draw on close knowledge of the protocol, the patient population, and clinically relevant measures indicative of relative risk to subject safety or data integrity, while the statistical approach relies on the study data itself to generate comparative analyses of performance in key measures across investigative sites. Essentially, you want to know whether one or more sites have data out of line with the others, or, put more formally, whether individual sites show statistically significant differences in their data compared with the aggregated site data.
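One minimal sketch of that site-versus-aggregate comparison is a two-proportion z-test of each site's event rate against all other sites pooled. The site names, counts, and the 1.96 cutoff (roughly the 5% significance level) below are illustrative assumptions, not the FDA's prescribed method or any study's actual data.

```python
import math

def site_vs_rest_z(site_events, site_subjects, all_events, all_subjects):
    """z statistic comparing one site's event rate to all other sites pooled."""
    rest_events = all_events - site_events
    rest_subjects = all_subjects - site_subjects
    p_site = site_events / site_subjects
    p_rest = rest_events / rest_subjects
    p_pool = all_events / all_subjects  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / site_subjects + 1 / rest_subjects))
    return (p_site - p_rest) / se

# Illustrative adverse-event counts: (events, subjects) per site
sites = {"Site 01": (5, 50), "Site 02": (5, 50), "Site 03": (12, 50)}
total_events = sum(e for e, _ in sites.values())
total_subjects = sum(n for _, n in sites.values())

flagged = [
    name
    for name, (events, subjects) in sites.items()
    if abs(site_vs_rest_z(events, subjects, total_events, total_subjects)) > 1.96
]
# Here only Site 03 (24% vs. 10% elsewhere) stands out for follow-up
```

A production implementation would use more robust methods (exact tests, adjustment for multiple comparisons, site size), but the core idea is the same: compare each site against the aggregate and escalate the outliers.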