Webinar Q&A: How to reduce costs and risks in clinical trials using adaptive designs

On May 28, CROS NT hosted a complimentary webinar on reducing risks and costs in clinical trials. The webinar gave a brief introduction to current trial designs that can help decrease the risk of failure and reduce costs, and discussed their advantages and disadvantages.

The webinar was conducted by Thomas Zwingers, Statistical Consultant at CROS NT, who has been working in the clinical trial environment since 1980. He specializes in statistical analysis and reporting, with particular expertise in Adaptive Trial Design. Prior to joining CROS NT, Thomas ran his own biometrics consulting CRO in Germany for over 20 years. During this time, he gained considerable experience in Dermatology, Respiratory and Oncology studies. He has written and collaborated on over 100 publications in pharmaceutical and clinical trial statistics.

Continue reading for the speaker’s answers to the questions raised during the webinar’s Q&A session.

How can I perform the adaptive process on the available data if the study has not started yet (i.e. I do not have the data yet)?

Thomas Zwingers: The adaptive process in general starts with the definition of the intended adaptations and the rules for when to apply them. The decision itself can only be made at the interim analysis, when data are already available.

Do adaptive designs imply protocol amendments?

Thomas Zwingers: Yes, whenever you make a change/adaptation to the original protocol for the conduct of a trial, you have to implement an amendment. These amendments are not “substantial”, as you only refer to a possible adaptation that is already described.

You mentioned adaptations should always be pre-specified. Have you seen any relaxation on this by regulators in the COVID-19 period such as trials making adaptations when not previously planned? E.g. sample size re-estimation.

Thomas Zwingers: I don’t think that COVID-19 has any impact on the principle of pre-specification. We should distinguish between two scenarios:

  • adaptations caused by external information, e.g. results from other studies that were not available at the time of protocol generation
  • adaptations based on data from the current study and not related to safety

Adaptations caused by external information, e.g. new data on safety, are not considered part of an “adaptive design” and are generally accepted. Adaptations based on data from the current study, which necessarily require an interim analysis, introduce an “adaptive design” and need a substantial amendment. This topic is discussed in Posch M., Proschan M. A.: Unplanned adaptations before breaking the blind, Statistics in Medicine, 2012, 31, 4146–4153.

Is it possible to start a study with a conventional design and then move to an adaptive design (for example allowing a sample size re-estimation)?

Thomas Zwingers: It should be possible to start with a conventional study design and then introduce an interim analysis in order to re-estimate the sample size based on current data, but this would require a substantial amendment.

Could you please give examples of p-value combination designs?

Thomas Zwingers: The most popular statistical methods for p-value combination are the following (an illustrative sketch is given after the list):

  • Fisher’s combination test (Bauer P., Köhne K.: Evaluation of Experiments with Adaptive Interim Analyses, Biometrics, 1994, 50(4), 1029–1041)
  • Weighted inverse normal method (Lehmacher W., Wassmer G.: Adaptive Sample Size Calculations in Group Sequential Trials, Biometrics, 1999, 55(4), 1286–1290)
  • Conditional error functions (Posch M., Bauer P.: Adaptive two stage designs and the conditional error function, Biometrical Journal, 1999, 41(6), 689–696)
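
To make the first two methods more concrete, below is a minimal numerical sketch in Python (an illustration added for this write-up, not part of the webinar material). It combines two stage-wise one-sided p-values with Fisher's combination test and the weighted inverse normal method; the stage-wise p-values and the equal weights are assumptions chosen purely for illustration, and the conditional error function approach is not shown.

```python
# Minimal sketch: combining stage-wise p-values from a two-stage adaptive
# design. Assumes the stage-wise p-values are independent and uniformly
# distributed under the null hypothesis.
import numpy as np
from scipy import stats

def fisher_combination(p1, p2):
    """Fisher's combination test: -2*(ln p1 + ln p2) ~ chi-square(4) under H0."""
    statistic = -2.0 * (np.log(p1) + np.log(p2))
    return stats.chi2.sf(statistic, df=4)  # combined p-value

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Weighted inverse normal combination with pre-specified weights
    w1 + w2 = 1, e.g. proportional to the planned stage sample sizes."""
    z = np.sqrt(w1) * stats.norm.isf(p1) + np.sqrt(w2) * stats.norm.isf(p2)
    return stats.norm.sf(z)  # combined p-value

# Hypothetical stage-wise one-sided p-values
print(fisher_combination(0.08, 0.03))          # ~0.017
print(inverse_normal_combination(0.08, 0.03))  # ~0.010
```

At a one-sided significance level of 0.025, both combined p-values would lead to rejection in this toy example; in a real two-stage design the decision rules and any early stopping boundaries would of course be pre-specified in the protocol.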

If an interim analysis is performed only for sample size re-estimation, in which cases should I adjust the significance level? Can I reduce the sample size or only increase it?

Thomas Zwingers: If an interim analysis based on unblinded data is performed, the significance level has to be adjusted.  EMEA states in section 4.2.2 of their “Points to consider”: “Sample size reassessment based on results of an ongoing trial may lead to an increase of the type I error and in these instances methods should be used that can sufficiently correct for this”. A blinded interim analysis does not require an adjustment of the error probability.

In principle, the sample size can be both increased and decreased. FDA states in their Guidance for Industry-Adaptive Design Clinical Trials for Drugs and Biologics: “Adaptive designs employing these methods should be used only for increases in the sample size, not for decreases. The potential to decrease the sample size is best achieved through group sequential designs with well-understood alpha spending rules structured to accommodate the opportunity to decrease the study size by early termination at the time of the interim analysis.” Therefore, for adaptive designs it is recommended to start with a low number of patients and then possibly increase the number after the interim analysis.
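
As a small illustration of the blinded case described above (a sketch under assumed numbers, not the speaker's material), the following Python snippet re-estimates the nuisance parameter, i.e. the common standard deviation, from pooled blinded interim data and, in line with the recommendation quoted above, only allows the sample size to increase. All figures and variable names are hypothetical.

```python
# Sketch: blinded sample size re-estimation for a two-arm trial with a
# continuous endpoint. Only the nuisance parameter (the common standard
# deviation) is re-estimated from pooled, still-blinded data; the assumed
# treatment effect stays fixed, so no adjustment of alpha is needed.
import math
import numpy as np
from scipy import stats

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Standard per-group sample size for a two-sided two-sample z-test."""
    z_alpha = stats.norm.isf(alpha / 2)
    z_beta = stats.norm.isf(1 - power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# Planning assumptions: standard deviation 10, clinically relevant difference 5
planned_n = n_per_group(delta=5, sd=10)   # 63 per group

# Blinded interim look: pooled values from both (unlabelled) arms
pooled = np.random.default_rng(1).normal(52, 12, size=60)
sd_blinded = pooled.std(ddof=1)  # slightly overestimates the within-group SD
                                 # if a treatment effect exists; adjusted
                                 # estimators exist but are omitted here

# Re-estimated sample size; only increases are applied
new_n = max(planned_n, n_per_group(delta=5, sd=sd_blinded))
print(planned_n, round(sd_blinded, 1), new_n)
```

Because the interim data remain blinded, this kind of re-estimation does not inflate the type I error, which is why no alpha adjustment is required in this scenario.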

Are group sequential and p-value combination methods accepted by regulatory authorities?

Thomas Zwingers: Both methods – group sequential and p-value combination – are in general accepted by the regulatory authorities. For more details on the possible concerns of the authorities, we invite you to have a look at the regulatory documents mentioned above: the EMEA “Points to consider” and the FDA Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics.
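
As a brief illustration of how group sequential designs control the type I error across looks (a sketch added here, not from the webinar), the snippet below evaluates the Lan-DeMets O'Brien-Fleming-type alpha spending function, which specifies how much of the overall two-sided alpha may be spent at each interim analysis as a function of the information fraction. The chosen looks and the overall alpha are assumptions for illustration.

```python
# Sketch: Lan-DeMets O'Brien-Fleming-type alpha spending function.
from scipy import stats

def obf_alpha_spent(t, alpha=0.05):
    """Cumulative two-sided alpha spent at information fraction t (0 < t <= 1)."""
    return 2 * stats.norm.sf(stats.norm.isf(alpha / 2) / t ** 0.5)

for t in (0.25, 0.50, 0.75, 1.00):
    print(f"t = {t:.2f}: cumulative alpha spent = {obf_alpha_spent(t):.4f}")
# -> roughly 0.0001, 0.0056, 0.0236, 0.0500

# Turning the spent alpha into stage-wise critical values requires recursive
# numerical integration over the joint distribution of the stage-wise test
# statistics; dedicated software is normally used for that step.
```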

When using adaptive trial designs, how much detail has to be written in the protocol?

Thomas Zwingers: EMEA states in section 4.2.1 of their “Points to consider”: “A minimal prerequisite for statistical methods to be accepted in the regulatory setting is the control of the pre-specified type I error”. In addition, I recommend mentioning all intended adaptations, together with some kind of justification.