
ENCePP Guide on Methodological Standards in Pharmacoepidemiology

 

Chapter 14: Specific topics

 

14.1. Comparative effectiveness research

      14.1.1. Introduction

      14.1.2. General aspects

      14.1.3. Prominent issues in CER

14.2. Vaccine safety and effectiveness

      14.2.1. Vaccine safety

      14.2.2. Vaccine effectiveness

14.3. Design, implementation and analysis of pharmacogenomic studies

      14.3.1. Introduction

      14.3.2. Identification of genetic variants influencing drug response

      14.3.3. Study designs

      14.3.4. Data collection

      14.3.5. Data analysis

      14.3.6. Reporting

      14.3.7. Clinical implementation and resources

14.4. Methods for pharmacovigilance impact research

      14.4.1. Introduction

      14.4.2. Outcomes

      14.4.3. Considerations on data sources

      14.4.4. Study designs

      14.4.5. Analytical methods

      14.4.6. Measuring unintended effects of regulatory interventions

 

 

14.1. Comparative effectiveness research

 

Note: Chapter 14.1. has not been updated for revision 9

 

14.1.1. Introduction

 

Comparative effectiveness research (CER) is designed to inform health-care decisions at the level of both policy and the individual by comparing the benefits and harms of therapeutic strategies available in routine practice, for the prevention, the diagnosis or the treatment of a given health condition. The interventions under comparison may be related to similar treatments, such as competing drugs, or different approaches, such as surgical procedures and drug therapy. The comparison may focus only on the relative medical benefits and risks of the different options or it may weigh both their costs and their benefits. The methods of comparative effectiveness research (Annu Rev Public Health 2012;33:425-45) defines the key elements of CER as (a) head-to-head comparison of active treatments, (b) study populations typical of day-to-day clinical practice, and (c) a focus on evidence to inform health care tailored to the characteristics of individual patients. In What is Comparative Effectiveness Research, the AHRQ highlights that CER requires the development, expansion and use of a variety of data sources and methods to conduct timely and relevant research and disseminate the results in a form that is quickly usable. The evidence may come from a review and synthesis of available evidence from existing clinical trials or observational studies or from the conduct of studies that generate new evidence. In Developing a Protocol for Observational Comparative Effectiveness Research: A User’s Guide, AHRQ also highlights that CER is still a relatively new field of enquiry that has its origin across multiple disciplines and is likely to evolve and be refined over time.

 

Among resources for keeping up with the evolution in this field, the US National Library of Medicine provides a web site for queries on CER.

 

The term ‘relative effectiveness assessment’ (REA) is also used when comparing multiple technologies or a new technology against the standard of care, while ‘rapid REA’ refers to performing an assessment within a limited timeframe when a new marketing authorisation or a new indication is granted for an approved medicine (What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Int J Evid Based Healthc. 2012;10(4):397-410).

 

14.1.2. General aspects

 

Several initiatives have promoted the conduct of CER and REA and proposed general methodological guidance to help in the design and analysis of such studies.

 

The Methodological Guidelines for Rapid Relative Effectiveness Assessment of Pharmaceuticals developed by EUnetHTA cover a broad spectrum of issues on REA. They address methodological challenges that are encountered by health technology assessors while performing rapid REA and provide and discuss practical recommendations on definitions to be used and how to extract, assess and present relevant information in assessment reports. Specific topics covered include the choice of comparators, strengths and limitations of various data sources and methods, internal and external validity of studies, the selection and assessment of endpoints (including composite and surrogate endpoints and Health Related Quality of Life [HRQoL]) and the evaluation of relative safety.

 

AHRQ’s Developing a Protocol for Observational Comparative Effectiveness Research: A User’s Guide identifies minimal standards and best practices for observational CER. It provides principles on a wide range of topics for designing research and developing protocols, with relevant questions to be addressed and checklists of key elements to be considered. The GRACE Principles provide guidance on the evaluation of the quality of observational CER studies, to help decision-makers recognise high-quality studies and researchers design and conduct high-quality studies. A checklist to evaluate the quality of observational CER studies is also provided. The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) addressed several key issues of CER in three publications: Part I includes the selection of study design and data sources and the reporting and interpretation of results in the light of policy questions; Part II relates to the validity and generalisability of study results, with an overview of potential threats to validity; Part III includes approaches to reducing such threats and, in particular, to controlling for confounding. The Patient-Centered Outcomes Research Institute (PCORI) Methodology Standards document provides standards for patient-centred outcome research that aim to improve the way research questions are selected, formulated and addressed, and findings reported. The PCORI group has also described how stakeholders may be involved in PCORI research in Stakeholder-Driven Comparative Effectiveness Research (JAMA 2015;314:2235-6). In a Journal of Clinical Epidemiology series of articles, the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group offers a structured process for rating quality of evidence and grading strength of recommendations in systematic reviews, health technology assessment and clinical practice guidelines. The GRADE group recommends that individuals new to GRADE first read the 6-part 2008 BMJ series.

 

A guideline on methods for performing systematic reviews of existing comparative effectiveness research has been published by the AHRQ (Methods Guide for Effectiveness and Comparative Effectiveness Reviews).

 

The RWE Navigator website has been developed by the IMI GetReal consortium to provide recommendations on the use of real-world evidence for decision-making on effectiveness and relative effectiveness of medicinal products. It discusses important topics such as the sources of real-world data, study designs, approaches to summarising and synthesising the evidence, modelling of effectiveness and methods to adjust for bias and governance aspects. It also presents a glossary of terms and case studies relevant for RWD research, with a focus on effectiveness research.

 

14.1.3. Prominent issues in CER

 

14.1.3.1. Randomised clinical trials vs. observational studies

 

While RCTs are considered to provide the most robust evidence of the efficacy of therapeutic options, they are affected by well-recognised qualitative and quantitative limitations and may not reflect how the drug of interest will perform in real life. Moreover, relatively few RCTs are traditionally designed using an alternative therapeutic strategy as a comparator, which limits the utility of the resulting data in establishing recommendations for treatment choices. For these reasons, other research methodologies such as pragmatic trials and observational studies may complement traditional explanatory RCTs in CER.

 

Explanatory and Pragmatic Attitudes in Therapeutic Trials (J Chron Dis 1967; republished in J Clin Epidemiol 2009;62(5):499-505) distinguishes between two approaches in designing clinical trials: the ‘explanatory’ approach, which seeks to understand differences between the effects of treatments administered in experimental conditions, and the ‘pragmatic’ approach, which seeks to answer the practical question of choosing the best treatment administered in normal conditions of use. The two approaches affect the definition of the treatments, the assessment of results, the choice of subjects and the way in which the treatments are compared. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers (CMAJ 2009;180(10):E47-57) quantifies distinguishing characteristics between pragmatic and explanatory trials and has been updated in The PRECIS-2 tool: designing trials that are fit for purpose (BMJ 2015;350:h2147). A checklist of eight items for the reporting of pragmatic trials was also developed as an extension of the CONSORT statement to facilitate the use of results from such trials in decisions about health care (Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337:a2390).

 

The article Why we need observational studies to evaluate effectiveness of health care (BMJ 1996;312(7040):1215-18) documents situations in the field of health care intervention assessment where observational studies are needed because randomised trials are either unnecessary, inappropriate, impossible or inadequate. In a review of five interventions, Randomized, controlled trials, observational studies, and the hierarchy of research designs (N Engl J Med 2000;342(25):1887-92) found that the results of well-designed observational studies (with either a cohort or case-control design) did not systematically overestimate the magnitude of treatment effects. In defense of Pharmacoepidemiology-Embracing the Yin and Yang of Drug Research (N Engl J Med 2007;357(22):2219-21) shows that strengths and weaknesses of RCTs and observational studies make both designs necessary in the study of drug effects. However, When are observational studies as credible as randomised trials? (Lancet 2004;363(9422):1728-31) explains that observational studies are suitable for the study of adverse (non-predictable) effects of drugs but should not be used for intended effects of drugs because of the potential for selection bias.

 

With regard to the selection and assessment of endpoints for CER, the COMET (Core Outcome Measures in Effectiveness Trials) Initiative aims at developing agreed minimum standardised sets of outcomes (‘core outcome sets’, COS) to be assessed and reported in effectiveness trials of a specific condition, as discussed in Choosing Important Health Outcomes for Comparative Effectiveness Research: An Updated Review and User Survey (PLoS One 2016;11(1):e0146444).

 

14.1.3.2. Use of electronic healthcare databases

 

A review of uses of health care utilization databases for epidemiologic research on therapeutics (J Clin Epidemiol 2005;58(4):323-37) considers the application of health care utilisation databases to epidemiology and health services research, with particular reference to the study of medications. Information on relevant covariates, and in particular on confounding factors, may not be available or adequately measured in electronic healthcare databases. To overcome this limitation, CER studies have integrated information from health databases with information collected ad hoc from study subjects. Enhancing electronic health record measurement of depression severity and suicide ideation: a Distributed Ambulatory Research in Therapeutics Network (DARTNet) study (J Am Board Fam Med. 2012;25(5):582-93) shows the value of adding direct measurements and pharmacy claims data to data from electronic healthcare records of practices participating in DARTNet. Assessing medication exposures and outcomes in the frail elderly: assessing research challenges in nursing home pharmacotherapy (Med Care 2010;48(6 Suppl):S23-31) describes how merging longitudinal electronic clinical and functional data from nursing home sources with Medicare and Medicaid claims data can support unique study designs in CER but poses many challenging design and analytic issues. Pragmatic randomised trials using routine electronic health records: putting them to the test (BMJ 2012;344:e55) discusses opportunities for using electronic healthcare records for conducting pragmatic trials.

 

A model based on counterfactual theory for CER using large administrative healthcare databases has been suggested, in which causal inference from observational studies based on large administrative health databases is viewed as an emulation of a randomised trial. This ‘target trial’ is made explicit, and design and analytic approaches are reviewed in Using Big Data to Emulate a Target Trial When a Randomized Trial Is Not Available (Am J Epidemiol 2016;183(8):758-64).

 

14.1.3.3. Bias and confounding in observational CER

 

Methodological issues and principles of Chapter 5 of the ENCePP Guide are applicable to CER as well and the textbooks cited in that chapter are recommended for consultation.

 

The article Methods to assess intended effects of drug treatment in observational studies are reviewed (J Clin Epidemiol 2004;57(12):1223-31) provides an overview of methods that seek to adjust for confounding in observational studies when assessing intended drug effects. Developments in post-marketing comparative effectiveness research (Clin Pharmacol Ther 2007;82(2):143-56) also reviews the roles of propensity scores (PS), instrumental variables and sensitivity analyses to reduce measured and unmeasured confounding in CER. Use of propensity scores and disease risk scores in the context of observational health-care programme research is described in Summary Variables in Observational Research: Propensity Scores and Disease Risk Scores. More recently, the high-dimensional propensity score has been suggested as a method to further improve control for confounding, as the large number of empirically selected covariates may collectively be proxies for unobserved factors.

 

Results presented in High-dimensional propensity score adjustment in studies of treatment effects using health care claims data (Epidemiology 2009;20(4):512-22) show that, in a selected empirical evaluation, the high-dimensional propensity score improved confounding control compared to conventional PS adjustment when benchmarked against results from randomised controlled trials. See Chapter 5.3.4 of the Guide for an in-depth discussion of propensity scores. Several methods can be considered to handle confounders in non-experimental CER (Confounding adjustment in comparative effectiveness research conducted within distributed research networks (Med Care 2013;51(8 Suppl 3):S4-S10); Disease Risk Score (DRS) as a Confounder Summary Method: Systematic Review and Recommendations (Pharmacoepidemiol Drug Saf 2013;22(2):122-9)). Strategies for selecting variables for adjustment in non-experimental CER have also been proposed (Pharmacoepidemiol Drug Saf 2013;22(11):1139-45).
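As an illustration of the propensity score methods discussed above, the following minimal sketch shows inverse probability of treatment weighting (IPTW) with stabilised weights. It is a sketch under stated assumptions, not a definitive implementation: the DataFrame, the ‘treated’ column and the covariate list are hypothetical, and a real CER analysis would add covariate balance diagnostics, weight truncation and sensitivity analyses.

```python
# Minimal IPTW sketch; 'df', 'treated' and the covariate names are
# hypothetical placeholders, not taken from the Guide.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def stabilised_iptw(df: pd.DataFrame, treatment: str, covariates: list) -> np.ndarray:
    """Fit a propensity model and return stabilised IPT weights."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treatment])
    ps = model.predict_proba(df[covariates])[:, 1]   # P(treated | covariates)
    p_treated = df[treatment].mean()                 # marginal treatment probability
    treated = df[treatment].to_numpy() == 1
    # Stabilised weights (marginal probability over individual propensity)
    # have smaller variance than the unstabilised 1/ps weights.
    return np.where(treated, p_treated / ps, (1 - p_treated) / (1 - ps))
```

The returned weights can then be used in a weighted outcome model to estimate the average treatment effect in a pseudo-population in which the measured confounders are balanced between treatment groups.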

 

A reason for discrepancies between results of randomised trials and observational studies may be the inclusion of prevalent drug users in the latter. Evaluating medication effects outside of clinical trials: new-user designs (Am J Epidemiol 2003;158(9):915-20) explains the biases introduced by the use of prevalent drug users and how a new-user (or incident user) design eliminates these biases by restricting analyses to persons under observation at the start of the current course of treatment. The Incident User Design in Comparative Effectiveness Research (Pharmacoepidemiol Drug Saf 2013;22(1):1-6) reviews published CER case studies in which investigators used the incident user design, discusses its strength (reduced bias) and weakness (reduced precision of comparative effectiveness estimates) and provides recommendations to investigators considering the use of this design. The value of the incident user design and its exceptions have been reviewed.

 

14.2. Vaccine safety and effectiveness

 

14.2.1. Vaccine safety

 

14.2.1.1. General considerations

 

The ADVANCE Report on appraisal of vaccine safety methods is a comprehensive reference providing a brief description of a wide range of direct and indirect methods of vaccine risk assessment, evaluated based on nine criteria related to five domains: Effect Measure, Statistical Criteria, Timeliness, Restriction and Robustness, and Operational Criteria. It also emphasises the specificities of safety assessment for vaccines and how they differ from other medicines, evaluates study designs, discusses perspectives of different stakeholders on risk assessment, describes experiences from other projects and systems, and provides recommendations. This document is highly relevant for all the topics covered in this section on vaccine safety.

 

Specific aspects related to vaccine safety are discussed in several other documents.

  • The Report of the CIOMS/WHO Working Group on Definition and Application of Terms for Vaccine Pharmacovigilance (2012) provides definitions and explanatory notes for the terms ‘vaccine pharmacovigilance’, ‘vaccination failure’ and ‘adverse event following immunisation (AEFI)’.
  • The CIOMS Guide to Active Vaccine Safety Surveillance (2017) describes the process of determining whether active vaccine safety surveillance is necessary, more specifically in the context of resource-limited countries, and, if so, of choosing the best type of active safety surveillance and considering key implementation issues.
  • The CIOMS Guide to Vaccine Safety Communication (2018) provides an overview of strategic communication issues faced by regulators, those responsible for vaccination policies and other stakeholders in introducing current or new vaccines in populations. Building upon existing recommendations, it provides a guide for vaccine risk communication approaches.
  • The Brighton Collaboration provides resources to facilitate and harmonise collection, analysis and presentation of vaccine safety data, including case definitions specifically intended for pharmacoepidemiological research, an electronic tool to help the classification of reported signs and symptoms, template protocols, and guidelines.
  • Module 4 (Surveillance) of the e-learning training course Vaccine Safety Basics of the World Health Organization (WHO) describes pharmacovigilance principles, causality assessment procedures, surveillance systems and places safety in the context of the vaccine benefit/risk profile. For example, the systematic review Maternal Influenza Vaccination and Risk for Congenital Malformations: A Systematic Review and Meta-analysis (Obstet Gynecol 2015;126(5):1075-84) on influenza vaccination in pregnancy and risk of congenital anomalies in newborns did not find an association, adding to the evidence base in favour of influenza vaccination in pregnancy.
  • Recommendations on vaccine-specific aspects of the EU pharmacovigilance system, including on risk management, signal detection and post-authorisation safety studies (PASS) are presented in Module P.I: Vaccines for prophylaxis against infectious diseases of the Good pharmacovigilance practices (GVP).
  • A vaccine study design selection framework for the postlicensure rapid immunization safety monitoring program (Am J Epidemiol. 2015;181(8):608-18) describes and summarises, in a tabular form, strengths and weaknesses of the cohort, case-centered, risk-interval, case-control, self-controlled risk interval (SCRI), self-controlled case series (SCCS) and case-crossover designs for vaccine safety monitoring, to support decision-making.
  • The WHO Covid-19 vaccines safety surveillance manual has been developed upon recommendation and guidance of the Global Advisory Committee on Vaccine Safety (GACVS) and other experts and addresses pharmacovigilance preparedness for the launch of COVID-19 vaccines.

There is an increasing interest in the influence of genetics on safety and efficacy outcomes of vaccination. Understanding this influence may optimise the choice of vaccines and the vaccination schedule. Research in this field is illustrated by Effects of vaccines in patients with sickle cell disease: a systematic review protocol (BMJ Open 2018;8:e021140) and Adversomics: a new paradigm for vaccine safety and design (Expert Rev Vaccines. 2015 Jul; 14(7): 935–47). Vaccinomics and Adversomics in the Era of Precision Medicine: A Review Based on HBV, MMR, HPV, and COVID-19 Vaccines (J Clin Med. 2020;9(11):3561) highlights that knowledge of genetic factors modulating responses to vaccination could contribute to the evaluation of the safety and effectiveness of vaccines, including COVID-19 vaccines.

 

14.2.1.2. Signal detection and validation

 

Besides a qualitative analysis of spontaneous case reports or case series, quantitative methods such as disproportionality analyses (described in Chapter 9) and observed vs. expected (O/E) analyses are routinely employed in signal detection for vaccines. Several documents discuss the merits and review the methods of these approaches for vaccines.

 

Disproportionality analyses

 

GVP Module P.I: Vaccines for prophylaxis against infectious diseases describes issues to be considered when applying methods for disproportionality analyses for vaccines, including the choice of the comparator group and the use of stratification. Effects of stratification on data mining in the US Vaccine Adverse Event Reporting System (VAERS) (Drug Saf. 2008;31(8):667-74) demonstrates that stratification can reveal and reduce confounding and unmask some vaccine-event pairs not found by crude analyses. However, Stratification for Spontaneous Report Databases (Drug Saf. 2008;31(11):1049-52) highlights that extensive use of stratification in signal detection algorithms should be avoided as it can mask true signals. Vaccine-Based Subgroup Analysis in VigiBase: Effect on Sensitivity in Paediatric Signal Detection (Drug Saf. 2012;35(4):335-46) further examines the effects of subgroup analyses based on the relative distribution of vaccine/non-vaccine reports in paediatric ADR data. In Performance of Stratified and Subgrouped Disproportionality Analyses in Spontaneous Databases (Drug Saf. 2016;39(4):355-64), it was found that subgrouping by vaccines/non-vaccines resulted in a decrease in both precision and sensitivity in all spontaneous report databases that contributed data.

 

The article Optimization of a quantitative signal detection algorithm for spontaneous reports of adverse events post immunization (Pharmacoepidemiol Drug Saf. 2013;22(5):477-87) explores various ways of improving the performance of signal detection algorithms when looking for vaccine adverse events.

 

The article Adverse events associated with pandemic influenza vaccines: comparison of the results of a follow-up study with those coming from spontaneous reporting (Vaccine 2011;29(3):519-22) reported a more complete pattern of reactions when using two complementary methods for first characterisation of the post-marketing safety profile of a new vaccine, which may impact on signal detection.

In Review of the initial post-marketing safety surveillance for the recombinant zoster vaccine (Vaccine 2020;38(18):3489-500), the time-to-onset distribution of zoster vaccine-adverse event pairs was used to generate a quantitative signal of unexpected temporal relationship.

 

Bayesian methods have also been developed to generate disproportionality signals. In Association of Facial Paralysis With mRNA COVID-19 Vaccines: A Disproportionality Analysis Using the World Health Organization Pharmacovigilance Database (JAMA Intern Med. 2021;e212219), a potential safety signal for facial paralysis was explored using the Bayesian neural network method.

 

In Disproportionality analysis of anaphylactic reactions after vaccination with messenger RNA coronavirus disease 2019 vaccines in the United States (Ann Allergy Asthma Immunol. 2021; S1081-1206(21)00267-2), the CDC Wide-ranging Online Data for Epidemiologic Research (CDC WONDER) system was used in conjunction with proportional reporting ratios to evaluate whether the rates of anaphylaxis cases reported in the VAERS database following administration of mRNA COVID-19 vaccines were disproportionately different from those for all other vaccines.

 

Observed-to-expected analyses

 

In vaccine vigilance, an O/E analysis compares the ‘observed’ number of cases of an adverse event occurring in vaccinated individuals and recorded in a data collection system (e.g. a spontaneous reporting system or an electronic health care record database) and the ‘expected’ number of cases that would have naturally occurred in the same population without vaccination, estimated from available incidence rates in a non-vaccinated population. GVP Module P.I: Vaccines for prophylaxis against infectious diseases suggests the conduct of O/E analyses for signal validation and preliminary signal evaluation when prompt decision-making is required and there is insufficient time to review a large number of individual cases. It discusses key requirements of O/E analyses: the observed number of cases detected in a passive or active surveillance system, near real-time exposure data, appropriately stratified background incidence rates calculated on a population similar to the vaccinated population (for the expected number of cases), the definition of appropriate risk periods (where there is suspicion and/or biological plausibility that there is a vaccine‐associated increased risk of experiencing the event) and sensitivity analyses around these measures. O/E analyses may require some adjustments for continuous monitoring due to inflation of type 1 error rates when multiple tests are performed. The method is further discussed in Pharmacoepidemiological considerations in observed‐to‐expected analyses for vaccines (Pharmacoepidemiol Drug Saf. 2016;25(2):215-22) and the review Near real‐time vaccine safety surveillance using electronic health records - a systematic review of the application of statistical methods (Pharmacoepidemiol Drug Saf. 2016;25(3):225-37).

 

O/E analyses require several pre-defined assumptions based on the requirements listed above. Each of these assumptions can be associated with some uncertainties. How to manage these uncertainties is also addressed in Pharmacoepidemiological considerations in observed-to-expected analyses for vaccines (Pharmacoepidemiol Drug Saf. 2016;25(2):215–22).
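To make the computation concrete, the sketch below implements a crude O/E ratio with an exact (Garwood) Poisson confidence interval. All inputs are hypothetical; a real analysis would use appropriately stratified background rates, carefully defined risk periods and the sensitivity analyses discussed above.

```python
# Crude observed-to-expected (O/E) sketch with an exact Poisson CI;
# the counts, background rate and person-time below are hypothetical.
from scipy.stats import chi2

def oe_ratio(observed: int, background_rate: float, person_years: float,
             alpha: float = 0.05):
    """Return the O/E ratio with an exact (Garwood) confidence interval."""
    expected = background_rate * person_years
    # Exact Poisson CI bounds for the observed count, via chi-squared quantiles
    lower = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return observed / expected, lower / expected, upper / expected

# Hypothetical example: 12 cases observed during the risk window, background
# rate of 2 per 10,000 person-years, 40,000 person-years of post-vaccination
# follow-up (expected = 8 cases).
ratio, lo, hi = oe_ratio(12, 2 / 10_000, 40_000)
print(f"O/E = {ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```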

 

Use of population based background rates of disease to assess vaccine safety in childhood and mass immunisation in Denmark: nationwide population based cohort study (BMJ 2012;345:e5823) illustrates the importance of collecting background rates by estimating risks of coincident associations of emergency consultations, hospitalisations and outpatient consultations with vaccination. Rates of selected disease events for several countries may vary by age, sex, method of ascertainment and geography, as shown in Incidence Rates of Autoimmune Diseases in European Healthcare Databases: A Contribution of the ADVANCE Project (Drug Saf. 2021;44(3):383-95), where age-, gender-, and calendar-year stratified incidence rates of nine autoimmune diseases in seven European healthcare databases from four countries were generated to support O/E analyses of vaccines. Guillain-Barré syndrome and influenza vaccines: A meta-analysis (Vaccine 2015;33(31):3773-8) suggests that a trend observed between different geographical areas would be consistent with a different susceptibility of developing a particular adverse reaction among different populations. In addition, comparisons with background rates may be invalid if conditions are unmasked at vaccination visits (see Human papillomavirus vaccination of adult women and risk of autoimmune and neurological diseases (J Intern Med. 2018;283(2):154-165)).

 

The article The critical role of background rates of possible adverse events in the assessment of COVID-19 vaccine safety (Vaccine 2021;39(19):2712-18) describes two key steps for the safety evaluation of COVID-19 vaccines: defining a dynamic list of Adverse Events of Special Interest (AESIs) and establishing background rates for these AESIs, and discusses tools from the Brighton Collaboration to facilitate case evaluation.

 

A protocol for generating background rates of AESIs for the monitoring of COVID-19 vaccines has been developed by the vACcine Covid-19 monitoring readinESS (ACCESS) consortium. These background rate data are publicly available on the VAC4EU website. Similarly, the FDA BEST Initiative has published a protocol for Background Rates of Adverse Events of Special Interest for COVID-19 Vaccine Safety Monitoring.

 

In Arterial events, venous thromboembolism, thrombocytopenia, and bleeding after vaccination with Oxford-AstraZeneca ChAdOx1-S in Denmark and Norway: population based cohort study (BMJ 2021;373:n1114), observed rates of events among vaccinated people were compared with expected rates, based on national age- and sex-specific rates from the general population calculated from the same databases, thereby removing a source of variability between observed and expected rates. Where this is not possible, background rates available from multiple large healthcare databases have shown to be heterogeneous, and the choice of relevant data for a given analysis should take into account differences in database and population characteristics related to different diagnosis, recording and coding practices, source populations (e.g., inclusion of patients from general practitioners and/or hospitals), healthcare systems determining reimbursement and inclusion of data in claims databases, and linkage ability (e.g., to hospital records). This is further discussed in Characterising the background incidence rates of adverse events of special interest for covid-19 vaccines in eight countries: multinational network cohort study (BMJ, 2021).

 

Sequential methods

 

Sequential methods, as described in Early detection of adverse drug events within population-based health networks: application of sequential methods (Pharmacoepidemiol Drug Saf. 2007;16(12):1275-84), allow O/E analyses to be performed on a routine (e.g. weekly) basis using cumulative data with adjustment for multiplicity. Such methods are routinely used for near-real time surveillance in the Vaccine Safety Datalink (VSD) (see: Near real-time surveillance for influenza vaccine safety: proof-of-concept in the Vaccine Safety Datalink Project, Am J Epidemiol 2010;171(2):177-88). Potential issues are described in Challenges in the design and analysis of sequentially monitored postmarket safety surveillance evaluations using electronic observational health care data (Pharmacoepidemiol Drug Saf. 2012;21(S1):62-71). A review of signals detected over 3 years with these methods in the Vaccine Safety Datalink concluded that care with data quality, outcome definitions, comparison groups and duration of surveillance is required to enable detection of true safety issues while controlling error rates (Active surveillance for adverse events: the experience of the Vaccine Safety Datalink Project (Pediatrics 2011;127(S1):S54-S64)). Sequential methods are therefore considered more valid but also more complex to perform, understand and communicate to a non-expert audience.
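As a simplified illustration of such sequential monitoring, the sketch below evaluates the Poisson maxSPRT log-likelihood ratio statistic described by Kulldorff and colleagues on cumulative observed and expected counts, signalling when a pre-set critical value is crossed. The weekly counts and the critical value are hypothetical placeholders; in practice the critical value is obtained from published tables or simulation for the chosen alpha level and planned surveillance length.

```python
# Poisson maxSPRT sketch for weekly near real-time surveillance;
# all counts and CRITICAL_VALUE are hypothetical placeholders.
import math

def poisson_maxsprt_llr(observed: int, expected: float) -> float:
    """Log-likelihood ratio statistic; non-zero only for excess risk."""
    if observed == 0 or observed <= expected:
        return 0.0
    return expected - observed + observed * math.log(observed / expected)

CRITICAL_VALUE = 2.85  # placeholder; depends on alpha and surveillance plan
cumulative = [(2, 1.1), (5, 2.3), (8, 3.6), (14, 4.8)]  # (observed, expected)
for week, (obs, exp) in enumerate(cumulative, start=1):
    llr = poisson_maxsprt_llr(obs, exp)
    status = "signal" if llr >= CRITICAL_VALUE else "continue"
    print(f"Week {week}: LLR = {llr:.2f} -> {status}")
    if status == "signal":
        break
```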

 

A new self-controlled case series method for analyzing spontaneous reports of adverse events after vaccination (Am J Epidemiol. 2013;178(9):1496-504) extends the self-controlled case series approach to explore and quantify vaccine safety signals from spontaneous reports. It uses parametric and nonparametric versions with different assumptions to account for the specific features of the data (e.g., large amount of underreporting and variation of reporting with time since vaccination). The method should be seen as a signal strengthening approach for quickly exploring a signal based on spontaneous reports prior to a pharmacoepidemiologic study, if any. The method was used in Intussusception after Rotavirus Vaccination -- Spontaneous Reports (N Engl J Med. 2011;365:2139) and Kawasaki disease and 13-valent pneumococcal conjugate vaccination among young children: A self-controlled risk interval and cohort study with null results (PLoS Med. 2019;16(7):e100284). 

 

The tree-based scan statistic (TreeScan) is a statistical data mining method that can be used for the detection of vaccine safety signals from large health insurance claims and electronic health records (see Drug safety data mining with a tree-based scan statistic, Pharmacoepidemiol Drug Saf. 2013;22(5):517-23). A Broad Safety Assessment of the 9-Valent Human Papillomavirus Vaccine (Am J Epidemiol. 2021;kwab022) uses the self-controlled tree-temporal scan statistic, which builds on this method but does not require pre-specified outcomes or specific post-exposure risk periods, to evaluate outcomes associated with receipt of a HPV vaccine by scanning data on all diagnoses recorded to detect any clustering of cases within a large hierarchy, or “tree,” of diagnoses as well as within the follow-up period. The method requires further evaluation of its utility for routine vaccine surveillance in terms of requirements for large databases and computer resources, as well as predictive value of the signals detected.

 

14.2.1.3. Hypothesis testing safety studies

 

A complete review of study designs and methods for hypothesis-testing studies in the field of vaccine safety is included in the ADVANCE Report on appraisal of vaccine safety methods.

 

Case-only designs

 

Traditional study designs such as cohort and case-control studies (see Chapter 5.2) may be difficult to implement for vaccines in circumstances where there is high vaccine coverage in the study population, an appropriate unvaccinated group is lacking, or adequate information on covariates at the individual level is not available. Frequent sources of confounding to be considered are socioeconomic status, underlying health status and other factors influencing the probability of being vaccinated such as access to healthcare. In such situations, case-only designs (see Chapters 5.2.3 and 5.4.3) may be useful, as illustrated in Control without separate controls: evaluation of vaccine safety using case-only methods (Vaccine 2004; 22(15-16):2064-70). It concludes that properly designed and analysed epidemiological studies using only cases, especially the SCCS method, may provide stronger evidence than large cohort studies as they control for fixed individual-level confounders (such as demographics, genetics and social deprivation) and typically have similar, sometimes higher, power.
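As a schematic illustration of the within-person comparison underlying the SCCS design, the sketch below computes a crude relative incidence under the simplifying assumption that all cases share the same risk and control window lengths; the counts are hypothetical. Actual SCCS analyses use conditional Poisson regression with adjustment for age and season (see Chapter 5.4.3).

```python
# Crude SCCS relative incidence sketch; assumes identical risk and
# control windows for every case (a hypothetical simplification).
def sccs_relative_incidence(events_risk: int, events_control: int,
                            risk_days: float, control_days: float) -> float:
    """Event rate in the risk window divided by the rate in control time."""
    return (events_risk / risk_days) / (events_control / control_days)

# Hypothetical example: 18 events in a 42-day post-vaccination risk
# window vs. 30 events in 323 days of control time.
print(f"Relative incidence = {sccs_relative_incidence(18, 30, 42, 323):.2f}")
```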

 

Several publications have compared traditional and case-only study designs for vaccine studies:

  • Epidemiological designs for vaccine safety assessment: methods and pitfalls (Biologicals 2012;40(5):389-92) used three study designs (cohort, case-control and SCCS) to illustrate issues such as correct understanding and definition of the vaccine safety question, case definition and interpretation of findings, limitations of data sources, uncontrolled confounding, and pitfalls that apply to the individual designs.

  • Comparison of epidemiologic methods for active surveillance of vaccine safety (Vaccine 2008; 26(26):3341-45) performed a simulation study to compare four designs (matched-cohort, vaccinated-only (risk interval) cohort, case-control and SCCS) in the context of vaccine safety surveillance. The cohort study design allowed for the most rapid signal detection, the least false-positive error and highest statistical power in performing sequential analysis. The authors highlight, however, that the main limitation of this simulation is the exclusion of confounding effects and the lack of chart review, which is a time and resource intensive requirement.

  • The simulation study (Four different study designs to evaluate vaccine safety were equally validated with contrasting limitations, J Clin Epidemiol. 2006; 59(8):808-818) compared four study designs (cohort, case-control, risk-interval and SCCS) with the conclusion that all the methods were valid, with contrasting strengths and weaknesses. The SCCS method, in particular, proved to be an efficient and valid alternative to the cohort method.

  • Hepatitis B vaccination and first central nervous system demyelinating events: Reanalysis of a case-control study using the self-controlled case series method (Vaccine 2007;25(31):5938-43) describes how the SCCS found similar results as the case-control study but with greater precision as it used cases without matched controls excluded from the case-control analysis. This is at the cost of the assumption that exposures are independent of earlier events. The authors recommended that, if case-control studies of vaccination and adverse events are undertaken, parallel case-series analyses should also be conducted, where appropriate.

While the SCCS is suited to secondary use of data, it may not always be appropriate in situations where primary data collection and rapid data generation are needed (e.g., a pandemic) since follow-up time needs to be accrued. In such instances, the Self-controlled Risk Interval (SCRI) method can be used to shorten the observation time (see The risk of Guillain-Barre Syndrome associated with influenza A (H1N1) 2009 monovalent vaccine and 2009-2010 seasonal influenza vaccines: Results from self-controlled analyses, Pharmacoepidemiol Drug Saf 2012;21(5):546-52), historical background rates can be used for an O/E analysis (see Near real-time surveillance for influenza vaccine safety: proof-of-concept in the Vaccine Safety Datalink Project, Am J Epidemiol 2010;171(2):177-88), or a classical case-control study can be performed, as in Guillain-Barré syndrome and adjuvanted pandemic influenza A (H1N1) 2009 vaccine: multinational case-control study in Europe (BMJ 2011;343:d3908).

 

Nevertheless, the SCCS design remains an adequate method to study vaccine safety, provided the main requirements of the method are taken into account (see Chapter 5.4.3). An illustrative example is shown in Bell's palsy and influenza(H1N1)pdm09 containing vaccines: A self-controlled case series (PLoS One. 2017;12(5):e0175539).

 

Cohort-event monitoring

 

Prospective cohort-event monitoring including active surveillance of vaccinated subjects using applications and/or other web-based tools has been extensively used to monitor the safety of COVID-19 vaccines, as primary data collection was the only means to rapidly identify potential safety concerns as soon as the vaccines began to be used at large scale. A definition of cohort-event monitoring is provided in The safety of medicines in public health programmes: pharmacovigilance, an essential tool (who.int) (see Chapter 6.5, Cohort event monitoring, pp 40-41). Specialist Cohort Event Monitoring studies: a new study method for risk management in pharmacovigilance (Drug Saf. 2015;38(2):153-63) discusses the rationale, features to address possible bias, and some applications of this design. The study Vaccine side-effects and SARS-CoV-2 infection after vaccination in users of the COVID Symptom Study app in the UK: a prospective observational study (Lancet Infect Dis 2021;S1473) examined the proportion and probability of self-reported systemic and local side-effects 8 days after one or two doses of the BNT162b2 vaccine or one dose of the ChAdOx1 nCoV-19 vaccine. Such self-reported data may introduce information bias, as some participants might be more likely to report symptoms and some may drop out; however, use of an app made it possible to monitor a large sample. Adverse events following mRNA SARS-CoV-2 vaccination among U.S. nursing home residents (Vaccine 2021) prospectively monitored residents of nursing homes using electronic health record data on vaccinations and pre-specified adverse events, and compared them to unvaccinated residents during the same time period. As immunisation campaigns expand and vaccination coverage increases, non-vaccinated comparator groups will no longer be feasible and alternative designs will need to be applied.

 

Case-coverage design

 

The case-coverage design is a type of ecological design that uses exposure information on cases and population data on vaccination coverage to serve as control. It compares the odds of exposure in cases to the odds of exposure in the general population, similar to the screening method used in vaccine effectiveness studies. However, this method does not control for residual confounding and may be prone to selection bias introduced by propensity to seek care (or vaccination) and awareness of the possible occurrence of a specific outcome, and it does not consider underlying medical conditions, limiting comparability between cases and controls. In addition, it requires reliable and detailed vaccine coverage data corresponding to the population from which cases are drawn to allow control of confounding by stratified analysis. An example of a vaccine safety study using a case-coverage method is Risk of narcolepsy in children and young people receiving AS03 adjuvanted pandemic A/H1N1 2009 influenza vaccine: retrospective analysis (BMJ 2013;346:f794).
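A minimal sketch of the underlying computation follows, with hypothetical counts and coverage; a real case-coverage analysis would stratify by age group, region and calendar time, and requires coverage data matched to the source population of the cases.

```python
# Case-coverage (screening method) sketch; all numbers are hypothetical.
def screening_odds_ratio(cases_vaccinated: int, cases_total: int,
                         coverage: float) -> float:
    """Odds of vaccination in cases vs. odds of vaccination in the population."""
    pcv = cases_vaccinated / cases_total     # proportion of cases vaccinated
    return (pcv / (1 - pcv)) / (coverage / (1 - coverage))

# Hypothetical example: 30 of 40 cases vaccinated, 50% population coverage.
print(f"OR = {screening_odds_ratio(30, 40, 0.50):.2f}")  # -> 3.00
```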

 

Generic protocols

 

The ACCESS consortium has published four Template study protocols to support the choice of design for COVID-19 vaccine safety studies. The prospective cohort-event monitoring protocol uses primary data collection to record data on suspected adverse drug reactions from vaccinated subjects, while protocols for the rapid assessment of safety concerns or the evaluation of safety signals are based on electronic health records. The protocol Rapid assessment of COVID-19 vaccines safety concerns through electronic health records - a protocol template from the ACCESS project compares the suitability of the ecological design and the unadjusted SCRI for rapid assessment by type of AESI. Similarly, the FDA BEST Initiative has published a COVID-19 Vaccine Safety Active Monitoring Protocol and a Master Protocol: Assessment of Risk of Safety Outcomes Following COVID-19 Vaccination.

 

14.2.1.4. Meta-analyses

 

The guidance on conducting meta-analyses of completed comparative pharmacoepidemiological studies of safety outcomes (Annex 1 of this Guide) also applies to vaccines. A systematic review evaluating the potential for bias and the methodological quality of meta-analyses in vaccinology (Vaccine 2007; 25(52):8794-806) provides a comprehensive overview of the methodological quality and limitations of 121 meta-analyses of vaccine studies. Association between Guillain-Barré syndrome and influenza A (H1N1) 2009 monovalent inactivated vaccines in the USA: a meta-analysis (Lancet 2013;381(9876):1461-8) describes a self-controlled risk-interval design in a meta-analysis of six studies at the patient level with a reclassification of cases according to the Brighton Collaboration classification. Meta-analysis of the risk of autoimmune thyroiditis, Guillain-Barré syndrome, and inflammatory bowel disease following vaccination with AS04-adjuvanted human papillomavirus 16/18 vaccine (Pharmacoepidemiol Drug Saf. 2020;29(9):1159-67) combined data from 18 randomised controlled trials, one cluster-randomised trial, two large observational retrospective cohort studies, and one case-control study, resulting in a large sample size for these rare events.
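For illustration, the sketch below pools hypothetical study estimates with fixed-effect inverse-variance weighting; vaccine meta-analyses of rare events would typically also assess heterogeneity and consider random-effects or exact methods, in line with Annex 1.

```python
# Fixed-effect inverse-variance pooling sketch; the three log relative
# risks and their standard errors are hypothetical.
import math

def pooled_fixed_effect(log_rrs, ses):
    """Pool log relative risks; return the pooled RR and its 95% CI."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(pooled - 1.96 * se_pooled),
          math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci

rr, (lo, hi) = pooled_fixed_effect([0.26, 0.47, 0.10], [0.15, 0.22, 0.30])
print(f"Pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```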

 

14.2.1.5. Studies on vaccine safety in special populations

 

The article Vaccine safety in special populations (Hum Vaccin 2011;7(2):269-71) highlights common methodological issues that may arise in evaluating vaccine safety in special populations, especially infants and children who often differ in important ways from healthy individuals and change rapidly during the first few years of life, and elderly patients.

 

Pregnant and lactating women represent an important group to be addressed when monitoring vaccine use, and recommendations have been provided on methodological standards to be applied in vaccine studies in this population. Pregnancy registries can be used to assess pregnancy and neonatal outcomes (see Chapter 4.3.5). Assessing the effect of vaccine on spontaneous abortion using time-dependent covariates Cox models (Pharmacoepidemiol Drug Saf 2012;21(8):844-50) demonstrates that rates of spontaneous abortion can be severely underestimated without survival analysis techniques using time-dependent covariates to avoid immortal time bias and shows how to fit such models. Risk of miscarriage with bivalent vaccine against human papillomavirus (HPV) types 16 and 18: pooled analysis of two randomised controlled trials (BMJ 2010;340:c712) explains methods to calculate rates of miscarriage, address the lack of knowledge of time of conception during which vaccination might confer risk, and perform subgroup and sensitivity analyses. In Harmonising Immunisation Safety Assessment in Pregnancy Part I (Vaccine 2016;34(49):5991-6110) and Part II (Vaccine 2017;35(48):6469-582), the Global Alignment of Immunization Safety Assessment in pregnancy (GAIA) project has provided a selection of case definitions and guidelines for the evaluation of pregnancy outcomes following immunisation. The Systematic overview of data sources for Drug Safety in pregnancy research provides an inventory of pregnancy exposure registries and alternative data sources useful to assess the safety of prenatal vaccine exposure.

 

The Guidance for design and analysis of observational studies of fetal and newborn outcomes following COVID-19 vaccination during pregnancy (Vaccine 2021;39(14):1882-86) provides useful insights on study design, data collection, and analytical issues in COVID-19 vaccine safety studies in pregnant women, and Methodologic approaches in studies using real-world data (RWD) to measure pediatric safety and effectiveness of vaccines administered to pregnant women: A scoping review (Vaccine 2021) describes the types of data sources that have been used in maternal immunisation studies, the methods to link maternal and infant data and estimate gestational age at time of maternal vaccination, and how exposure was documented.

 

Post-authorisation studies in immunocompromised subjects are often required as this population is usually not included in the clinical development of vaccines. Influenza vaccination for immunocompromised patients: systematic review and meta-analysis by etiology (J Infect Dis 2012;206(8):1250-9) illustrates the importance of performing stratified analyses by aetiology of immunocompromise and possible limitations due to residual confounding, differences within and between etiological groups and small sample size in some etiological groups. In anticipation of the design of post-authorisation vaccine effectiveness and safety studies, the study Burden of herpes zoster in 16 selected immunocompromised populations in England: a cohort study in the Clinical Practice Research Datalink 2000–2012 (BMJ Open. 2018; 8(6): e020528) illustrated the challenges of defining an immunocompromised cohort and a relevant comparator cohort when making secondary use of a primary healthcare database.

 

14.2.2. Vaccine effectiveness

 

14.2.2.1. General considerations

 

The textbook Design and Analysis of Vaccine Studies (ME Halloran, IM Longini Jr., CJ Struchiner, Ed., Springer, 2010) presents methods for vaccine effectiveness evaluation and a conceptual framework of the different effects of vaccination at the individual and population level, and includes methods for evaluating indirect, total and overall effects of vaccination in populations.

 

The article Vaccine effects and impact of vaccination programmes in post-licensure studies (Vaccine 2013;31(48):5634-42) reviews methods for evaluating the effects of vaccines and of vaccination programmes, proposes epidemiological measures of public health impact, describes relevant methods to measure these effects and discusses the assumptions and potential biases involved.

 

A framework for research on vaccine effectiveness (Vaccine 2018;36(48):7286-93) proposes standardised definitions, considers models of vaccine failure and provides methodological considerations for different designs. This article is useful to researchers who investigate the effectiveness of vaccines and vaccination programs and why they may fail.

 

The World Health Organisation’s Evaluation of influenza vaccine effectiveness: a guide to the design and interpretation of observational studies (2017) provides an overview of methods to study the effectiveness of influenza vaccines, also relevant for other vaccines.

 

Study designs and methods for measuring vaccine effectiveness in the Post-Licensure Rapid Immunization Safety Monitoring (PRISM) program are further explained in Exploring the Feasibility of Conducting Vaccine Effectiveness Studies in Sentinel’s PRISM Program (CBER, 2018).

 

The ADVANCE Report on appraisal of vaccine safety methods, although primarily dedicated to vaccine safety methods, also offers considerations relevant for effectiveness evaluation.

 

The WHO document Evaluation of COVID-19 vaccine effectiveness provides interim best practice guidance on how to monitor COVID-19 vaccine effectiveness using observational study designs, including considerations on effectiveness evaluation in low- and middle-income countries.

 

The template protocols developed by the ACCESS consortium for effectiveness studies of COVID-19 vaccines using the cohort design and the test-negative case-control design are published on the EU PAS Register.

It is worth mentioning that there are few comparative effectiveness studies of vaccines, except for some head-to-head immunogenicity studies. However, comparative effectiveness methods have been used to compare vaccination schedules or vaccine formulations. For example, see: Analysis of relative effectiveness of high-dose versus standard-dose influenza vaccines using an instrumental variable method (Vaccine 2019;37(11):1484-90) and The risk of non-specific hospitalised infections following MMR vaccination given with and without inactivated vaccines in the second year of life. Comparative self-controlled case-series study in England (Vaccine 2019;37(36):5211-17).

 

Assessment of Effectiveness of 1 Dose of BNT162b2 Vaccine for SARS-CoV-2 Infection 13 to 24 Days After Immunization (JAMA Netw Open. 2021;4(6):e2115985) compared the effectiveness of the first vaccine dose between two post-immunisation periods. It is likely that further comparative studies will be conducted to compare the real-world performance of COVID-19 vaccines. Postmarketing studies: can they provide a safety net for COVID-19 vaccines in the UK? (BMJ Evid Based Med. 2020:bmjebm-2020-111507) discusses methodological and operational aspects of post-authorisation studies of COVID-19 vaccines and provides considerations on head-to-head vaccine comparisons.

 

Vaccination programmes have indirect effects at the population level, also called herd immunity, as a result of reduced transmission. Besides measuring the direct effect of vaccination in vaccine effectiveness studies, it is important to assess whether vaccination will have an effect on transmission. As a high-risk setting for transmission, households can provide early evidence of such impact. Impact of vaccination on household transmission of SARS-COV-2 in England (Public Health England, 2021) was a nested case-control study estimating the odds ratio of household members becoming secondary cases when the index case had been vaccinated 21 days or more before testing positive, compared with household members of unvaccinated cases (see Chapter 5.2 for more details on this study).

 

14.2.2.2. Sources of exposure and outcome data

 

Data sources for vaccine studies largely rely on vaccine-preventable infectious disease surveillance (for effectiveness studies) and vaccine registries or vaccination data recorded in healthcare databases (for safety and effectiveness studies). Considerations on validation of exposure and outcome data are provided in Chapter 5.3.

 

Infectious disease surveillance is a population-based, routine public health activity involving systematic data collection to monitor epidemiological trends over time in a defined catchment population, and can use various indicators. Data can be obtained from reference laboratories, outbreak reports, hospital records or sentinel systems, using consistent case definitions and reporting methods. Usually there is no known population denominator, so surveillance data cannot be used to measure incidence. Limitations include under-detection or under-reporting (in passive surveillance) or, conversely, over-reporting due to improvements in case detection or the introduction of new vaccines with increased disease awareness.

 

Changes or delays in case counting or reporting can artificially reduce the number of reported cases, thus artificially inflating estimated vaccine effectiveness. Infectious Disease Surveillance (International Encyclopedia of Public Health 2017;222-229) is a comprehensive review including definitions, methods, and considerations on the use of surveillance data in vaccine studies. The chapter on Routine Surveillance of Infectious Diseases in Modern Infectious Disease Epidemiology (J. Giesecke, 3rd Ed., CRC Press 2017) discusses how data for surveillance are collected and interpreted and identifies several sources of potential bias.

 

Access to valid surveillance data for SARS-CoV-2 infection is of particular importance for studies evaluating the effectiveness of COVID-19 vaccines against variants of concern. Such epidemic surveillance data can be obtained, for example, from the ECDC COVID-19 Dashboard.

 

Examples of vaccination registries, and challenges in developing such registries, are discussed in a special journal issue on Vaccine registers--experiences from Europe and elsewhere (Euro Surveill. 2012;17(17):20159), in Validation of the new Swedish vaccination register - Accuracy and completeness of register data (Vaccine 2020; 38(25):4104-10), and in Establishing and maintaining the National Vaccination Register in Finland (Euro Surveill. 2017;22(17):30520).

 

14.2.2.3. Traditional cohort and case-control designs

 

Generic protocols for retrospective case-control studies and retrospective cohort studies to assess the effectiveness of rotavirus and influenza vaccination in EU Member States based on computerised databases were published by the European Centre for Disease Prevention and Control (ECDC). They describe the information that should be collected by country and region in vaccine effectiveness studies and the data sources that may be available to identify virus-related outcomes a vaccine is intended to avert, including hospital registers, computerised primary care databases, specific surveillance systems (i.e. laboratory surveillance, hospital surveillance, primary care surveillance) and laboratory registers. The DRIVE project has developed a similar Core protocol for population-based database cohort-studies. These templates can potentially be used as a guide for the design of effectiveness studies for vaccines other than influenza vaccines.

 

The case-control methodology is frequently used to evaluate vaccine effectiveness post-authorisation, but the potential for bias and confounding in such studies is an important limitation. The articles Case-control vaccine effectiveness studies: Preparation, design, and enrollment of cases and controls (Vaccine 2017;35(25):3295-302) and Case-control vaccine effectiveness studies: Data collection, analysis and reporting results (Vaccine 2017;35(25):3303-8) summarise the recommendations of an expert group regarding best practices for the design, analysis and reporting of case-control vaccine effectiveness studies.

 

Based on a meta-analysis comprising 49 cohort studies and 10 case-control studies, Efficacy and effectiveness of influenza vaccines in elderly people: a systematic review (Lancet 2005;366(9492):1165-74) highlights the heterogeneity of outcomes and study populations included in such studies and the high likelihood of selection bias.

 

Non-specific effects of vaccines, such as a decrease of mortality, have been claimed in observational studies but generally can be affected by bias and confounding. Epidemiological studies of the 'non-specific effects' of vaccines: I--data collection in observational studies (Trop Med Int Health 2009;14(9):969-76.) and Epidemiological studies of the non-specific effects of vaccines: II--methodological issues in the design and analysis of cohort studies (Trop Med Int Health 2009;14(9):977-85) provide recommendations for vaccine observational studies conducted in countries with high mortality; however, these recommendations have wider relevance. The study Observational studies of non-specific effects of Diphtheria-Tetanus-Pertussis vaccines in low-income countries: Assessing the potential impact of study characteristics, bias and confounding through meta-regression (Vaccine. 2019;37(1):34–40) used meta-regression to analyse study characteristics significantly associated with increased relative risks of non-specific effects of DTP vaccines. 

The cohort design has been used to monitor the effectiveness of COVID-19 vaccines in mass immunisation settings. BNT162b2 mRNA Covid-19 Vaccine in a Nationwide Mass Vaccination Setting (N Engl J Med. 2021;384(15):1412-1423) used data from a nationwide healthcare organisation to match vaccinated and unvaccinated persons according to demographic and clinical characteristics and to assess effectiveness against documented infection, symptomatic infection, COVID-19 related hospitalisation, severe illness, and death. BNT162b2 vaccine effectiveness in preventing asymptomatic infection with SARS-CoV-2 virus: a nationwide historical cohort study (Open Forum Infectious Diseases 2021;ofab262) used data from a large health maintenance organisation to compare vaccinated and unvaccinated individuals repeatedly tested for SARS-CoV-2 infection.

 

14.2.2.4. Test negative design

 

The test-negative design aims to reduce bias associated with confounding by healthcare-seeking behaviour. The article The test-negative design for estimating influenza vaccine effectiveness (Vaccine 2013;31(17):2165-8) explains the rationale, assumptions and analysis of the test-negative design, originally developed for influenza vaccines. Study subjects are all persons who seek care for an acute respiratory illness, and influenza VE is estimated from the ratio of the odds of vaccination among subjects testing positive for influenza to the odds of vaccination among subjects testing negative. This design is less susceptible to bias due to misclassification of infection and to confounding by healthcare-seeking behaviour, at the cost of difficult-to-test assumptions. Test-Negative Designs: Differences and Commonalities with Other Case-Control Studies with "Other Patient" Controls (Epidemiology 2019;30(6):838-44) discusses the advantages and disadvantages of the test-negative design in various circumstances.
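
As a minimal illustration of this estimator (hypothetical counts, not taken from any of the studies cited here), the crude test-negative VE can be computed as one minus the odds ratio of vaccination in test-positive versus test-negative subjects:

```python
# Test-negative design: VE = 1 - OR, where OR compares the odds of
# vaccination among test-positives with the odds among test-negatives.
# Counts below are hypothetical, for illustration only.

def test_negative_ve(vax_pos, unvax_pos, vax_neg, unvax_neg):
    """Crude vaccine effectiveness from a 2x2 test-negative table."""
    odds_ratio = (vax_pos / unvax_pos) / (vax_neg / unvax_neg)
    return 1 - odds_ratio

# 40 vaccinated and 160 unvaccinated test-positives;
# 300 vaccinated and 400 unvaccinated test-negatives.
print(f"Crude VE: {test_negative_ve(40, 160, 300, 400):.1%}")  # Crude VE: 66.7%
```

In practice the odds ratio would be estimated with logistic regression adjusting for calendar time, age and other covariates rather than from a crude 2x2 table.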

 

Effectiveness of rotavirus vaccines in preventing cases and hospitalizations due to rotavirus gastroenteritis in Navarre, Spain (Vaccine 2012;30(3):539-43) used a test-negative case-control design based on electronic clinical reports. Cases were children with confirmed rotavirus and controls were those who tested negative for rotavirus in all samples. The design rested on the assumption that the rate of gastroenteritis caused by pathogens other than rotavirus is the same in vaccinated and unvaccinated persons. This approach may rule out differences in parental attitudes towards seeking medical care and differences among physicians in decisions about stool sampling or hospitalisation. A limitation is the sensitivity of antigen detection, which may lead to underestimation of vaccine effectiveness. In addition, if virus serotype is not available, it is not possible to study the association between vaccine failure and a possible mismatch between vaccine strains and circulating strains of the virus.

 

The article Theoretical basis of the test-negative study design for assessment of influenza vaccine effectiveness (Am J Epidemiol. 2016;184(5):345-53; see also the related Comments) uses directed acyclic graphs to characterise potential biases in studies using this design and shows how bias can be avoided or minimised, and where bias may be introduced with particular design variations. The DRIVE project has developed a Core protocol for test-negative design studies which outlines the key elements of the test-negative design, applied to influenza vaccines.

 

The article 2012/13 influenza vaccine effectiveness against hospitalised influenza A(H1N1)pdm09, A(H3N2) and B: estimates from a European network of hospitals (Euro Surveill. 2015;20(2):pii=21011) illustrates a multicentre test-negative case-control study to estimate influenza VE in 18 hospitals. Confounding due to health-seeking behaviour is believed to be minimised since, in the study sites, all people needing hospitalisation are likely to be hospitalised. The study Trivalent inactivated seasonal influenza vaccine effectiveness for the prevention of laboratory-confirmed influenza in a Scottish population 2000 to 2009 (Euro Surveill. 2015;20(8):pii=21043) applied this method using a Scotland-wide linkage of patient-level primary care, hospital and virological swab data over nine influenza seasons and discusses strengths and weaknesses of the design in this context.

 

Postlicensure Evaluation of COVID-19 Vaccines (JAMA 2020;324(19):1939-40) describes methodological challenges of the test-negative case-control design applied to the evaluation of COVID-19 vaccine effectiveness and discusses potential solutions to reduce bias. The study Effectiveness of the Pfizer-BioNTech and Oxford-AstraZeneca vaccines on covid-19 related symptoms, hospital admissions, and mortality in older adults in England: test negative case-control study (BMJ 2021;373:n1088) linked routine community testing data with vaccination data from the UK National Immunisation Management System to estimate the effect of vaccination on confirmed symptomatic infection, COVID-19 related hospital admissions and case fatality. Odds ratios were estimated for testing positive for SARS-CoV-2 by polymerase chain reaction (PCR) in vaccinated compared with unvaccinated people with compatible symptoms. The study also provides considerations on strengths and limitations of the test-negative design applied to COVID-19 vaccine effectiveness studies.

 

14.2.2.5. Case-population, case-coverage, and screening methods

 

These methods are related and are on occasion also applied to vaccine safety studies. All include, to some extent, an ecological component, such as vaccine coverage or infectious disease surveillance data at population level, and the terms referring to these designs are sometimes used interchangeably. The case-coverage design is discussed above in paragraph 14.2.1.3. Case-population studies are described in Chapter 5.4.7 and in Vaccine Case-Population: A New Method for Vaccine Safety Surveillance (Drug Saf. 2016;39(12):1197-1209).

 

The screening method estimates vaccine effectiveness by comparing the vaccination coverage in (usually laboratory-confirmed) cases of a disease (e.g. influenza) with the vaccination coverage in the population from which the cases arise (e.g. the same age group). If representative data on cases and vaccination coverage are available, it is an inexpensive and ready-to-use method that can be useful for providing early effectiveness estimates or identifying changes in effectiveness over time. However, Application of the screening method to monitor influenza vaccine effectiveness among the elderly in Germany (BMC Infect Dis. 2015;15(1):137) emphasises that accurate, age-specific vaccine coverage rates are crucial for valid VE estimates. Since adjustment for important confounders and assessment of product-specific VE are generally not possible, this method should be considered only a supplementary tool for assessing crude VE.
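
The estimator underlying the screening method can be written as VE = 1 − [PCV/(1 − PCV)] × [(1 − PPV)/PPV], where PCV is the proportion of cases vaccinated and PPV is the vaccination coverage in the source population. A minimal sketch with hypothetical inputs:

```python
def screening_method_ve(pcv, ppv):
    """Crude VE from the screening method.

    pcv: proportion of cases vaccinated
    ppv: vaccination coverage in the source population
    """
    return 1 - (pcv / (1 - pcv)) * ((1 - ppv) / ppv)

# Hypothetical example: 30% of confirmed cases vaccinated,
# 60% coverage in the same age group of the population.
print(f"Crude VE: {screening_method_ve(0.30, 0.60):.1%}")  # Crude VE: 71.4%
```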

 

14.2.2.6. Indirect cohort (Broome) method

 

The indirect cohort method is a case-control type design which uses cases caused by non-vaccine serotypes as controls. Use of surveillance data to estimate the effectiveness of the 7-valent conjugate pneumococcal vaccine in children less than 5 years of age over a 9 year period (Vaccine 2012;30(27):4067-72) applied this method to evaluate the effectiveness of a pneumococcal conjugate vaccine against invasive pneumococcal disease (IPD) and compared the results to the effectiveness measured using a standard case-control study conducted during the same time period. The authors considered the method would be most useful shortly after vaccine introduction, and less useful in a setting of very high vaccine coverage and fewer vaccine-type cases.

 

Using the indirect cohort design to estimate the effectiveness of the seven valent pneumococcal conjugate vaccine in England and Wales (PLoS One 2011;6(12):e28435) and Effectiveness of the seven-valent and thirteen-valent pneumococcal conjugate vaccines in England: The indirect cohort design, 2006-2018 (Vaccine. 2019;37(32):4491-4498) describe how the method was used to estimate effectiveness of various vaccine schedules as well as for each vaccine serotype.

 

14.2.2.7. Density case-control design

 

Effectiveness of live-attenuated Japanese encephalitis vaccine (SA14-14-2): a case-control study (Lancet 1996;347(9015):1583-6) describes a case-control study of incident cases in which the control group consisted of all village-matched children of a given age who were at risk of developing disease at the time the case occurred (density sampling). The effect measured is an incidence density rate ratio. Vaccine Effectiveness of Polysaccharide Vaccines Against Clinical Meningitis - Niamey, Niger, June 2015 (PLoS Curr. 2016;8) conducted a case-control study comparing the odds of vaccination among suspected meningitis cases to controls enrolled in a vaccine coverage survey performed at the end of the epidemic. A simulated density case-control design, randomly attributing recruitment dates to controls based on case dates of onset, was used to compute vaccine effectiveness.
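
A minimal sketch of the corresponding analysis (hypothetical data): with controls drawn from the risk set at each case's onset date, the Mantel-Haenszel odds ratio across matched sets estimates the incidence density rate ratio, and VE is one minus that ratio:

```python
# Density sampling: each matched set holds one case and the controls at
# risk on the case's onset date. The Mantel-Haenszel OR over the sets
# estimates the incidence density rate ratio. Hypothetical data only.

def mantel_haenszel_or(matched_sets):
    """matched_sets: (case_vaccinated, n_controls_vaccinated,
    n_controls_unvaccinated) tuples, one case per set."""
    num = den = 0.0
    for case_vax, ctl_vax, ctl_unvax in matched_sets:
        n = 1 + ctl_vax + ctl_unvax          # subjects in the set
        if case_vax:
            num += ctl_unvax / n             # exposed case, unexposed controls
        else:
            den += ctl_vax / n               # unexposed case, exposed controls
    return num / den

sets = [(1, 2, 2), (0, 3, 1), (0, 2, 2), (1, 1, 3), (0, 1, 3)]
irr = mantel_haenszel_or(sets)
print(f"IRR: {irr:.2f}, VE: {1 - irr:.1%}")  # IRR: 0.83, VE: 16.7%
```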

 

14.2.2.8. Impact assessment

 

Vaccine impact studies measure the indirect, total and overall effects of a vaccine, either before and after a vaccination campaign or between two populations during the vaccination campaign, and are largely based on ecological designs; for an overview, see Vaccine effects and impact of vaccination programmes in post-licensure studies (Vaccine 2013;31(48):5634-42). For example, for a paediatric vaccine, the impact of vaccination can be quantified in the age group targeted for vaccination (overall effect) or in children of other age groups (indirect effect). A generic study protocol to assess the impact of rotavirus vaccination in EU Member States has been published by the ECDC. It recommends the information that needs to be collected to compare the incidence or proportion of rotavirus cases in the periods before and after introduction of the vaccine. Such generic protocols need to be adapted to each country or region and its specific situation. Direct and indirect effects in vaccine efficacy and effectiveness (Am J Epidemiol. 1991;133(4):323-31) describes how parameters intended to measure direct effects must be robust and interpretable in the midst of complex indirect effects of vaccine intervention programmes.

 

Impact of rotavirus vaccination in regions with low and moderate vaccine uptake in Germany (Hum Vaccin Immunother. 2012;8(10):1407-15) describes an impact assessment of rotavirus vaccination comparing the incidence rates of hospitalisations before, and in seasons after, vaccine introduction using data from the national mandatory disease reporting system. First year experience of rotavirus immunisation programme in Finland (Vaccine 2012;31(1):176-82) estimates the impact of a rotavirus immunisation programme on the total hospital inpatient and outpatient treated acute gastroenteritis burden and on severe rotavirus disease burden during the first year after introduction. It may be considered a vaccine-probe study, in which the non-specific disease burden prevented by immunisation is assumed to be caused by the agent the vaccine targets. The study Lack of impact of rotavirus vaccination on childhood seizure hospitalizations in England - An interrupted time series analysis (Vaccine 2018;36(31):4589-92) discusses possible reasons for its negative findings, although previous studies had established a protective association of vaccination in this age group.
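
As a sketch of the interrupted time-series approach mentioned above, a segmented Poisson regression can be fitted to monthly event counts with terms for the underlying trend, a level change and a slope change at vaccine introduction. The data below are simulated; a real analysis would also address seasonality and overdispersion:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated monthly hospitalisation counts: 36 months before and 36 after
# vaccine introduction, with a true 40% drop in the underlying rate.
months = np.arange(72)
post = (months >= 36).astype(float)
time_since = np.where(post == 1, months - 36, 0)
counts = rng.poisson(100 * np.exp(-0.4 * post))

# Segmented regression: intercept, secular trend, level change, slope change.
X = sm.add_constant(np.column_stack([months, post, time_since]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
step_rr = np.exp(fit.params[2])  # rate ratio at introduction (~0.67 here)
print(f"Estimated level change (rate ratio): {step_rr:.2f}")
```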

 

In a review of 65 included articles, Population-level impact and herd effects following the introduction of human papillomavirus vaccination programmes: updated systematic review and meta-analysis (Lancet. 2019;394(10197):497–509) compared the frequency (prevalence or incidence) of several HPV-related endpoints between the pre-vaccination and post-vaccination periods with stratification by sex, age, and years since introduction of HPV vaccination.

 

Impact and effectiveness of mRNA BNT162b2 vaccine against SARS-CoV-2 infections and COVID-19 cases, hospitalisations, and deaths following a nationwide vaccination campaign in Israel: an observational study using national surveillance data (Lancet. 2021;397(10287):1819-1829) evaluated the nationwide public-health impact following the widespread introduction of the vaccine using national surveillance and vaccine uptake data. Although such population-level data are ecological, and teasing apart the impact of the vaccination programme from the impact of non-pharmaceutical interventions is complex, declines in incident cases of SARS-CoV-2 by age group were aligned with high vaccine coverage rather than initiation of the nationwide lockdown. Even after re-opening occurred, incidence remained low, suggesting that high vaccine coverage might provide a sustainable path towards resuming normal activity.

 

The effectiveness of currently available COVID-19 vaccines suggests a potential for a population-level effect, which is critical to control the pandemic. Community-level evidence for SARS-CoV-2 vaccine protection of unvaccinated individuals (Nat Med. 2021) measured this effect by analysing vaccination records and test results in a large population from 177 geographically defined communities, while mitigating the confounding effects of natural immunity and the spatiotemporally dynamic nature of the epidemic. The results suggest that vaccination not only protects vaccinated individuals but also provides cross-protection to unvaccinated individuals in the community.

 

14.2.2.9. Cluster design

 

A cluster is a group of subjects sharing common characteristics, which may be geographical (community, administrative area), health-related (hospital), educational (school) or social (household). In cluster randomised trials, clusters rather than individual subjects are randomly allocated to an intervention, whereas in infectious disease epidemiology studies clusters are sampled based on aspects of transmission (e.g. within a community). This design is often used in low- and middle-income settings and can measure vaccination interventions naturally applied at the cluster level, or be chosen when the study objectives require it (e.g. to estimate herd immunity).
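
Because outcomes within a cluster tend to be correlated, sample size calculations for cluster designs typically inflate the sample size needed under individual sampling by the design effect DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC the intracluster correlation coefficient. A minimal sketch with hypothetical values:

```python
def design_effect(mean_cluster_size, icc):
    """Inflation factor for clustered sampling relative to simple
    random sampling of individuals (Kish design effect)."""
    return 1 + (mean_cluster_size - 1) * icc

# Hypothetical: 30 children per school, intracluster correlation 0.02.
deff = design_effect(30, 0.02)
n_individual = 1200  # sample size required under individual sampling
print(f"DEFF = {deff:.2f}; clustered n = {round(n_individual * deff)}")
# DEFF = 1.58; clustered n = 1896
```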

 

The core Protocol for Cluster Investigations to Measure Influenza Vaccine Effectiveness builds on the cluster design to generate rapid/early influenza season estimates in settings where vaccination records might be easily obtainable and investigation can take place at the same time as vaccination is carried out (e.g. schools, care homes).

 

In Post-authorisation passive enhanced safety surveillance of seasonal influenza vaccines: protocol of a pilot study in England (BMJ Open. 2017;7(5):e015469) the effect of clustering by GP practice was examined. Meningococcal B Vaccine and Meningococcal Carriage in Adolescents in Australia (N Engl J Med. 2020 Jan 23;382(4):318-327) used cluster randomisation to assign students, according to school, to receive 4CMenB vaccination either at baseline or at 12 months (control) to measure oropharyngeal carriage.

 

In The ring vaccination trial: a novel cluster randomised controlled trial design to evaluate vaccine efficacy and effectiveness during outbreaks, with special reference to Ebola (BMJ 2015;351:h3740), a newly diagnosed Ebola case served as the index case for forming a “ring”, which was then randomised to immediate or delayed vaccination; ring members were identified through contact tracing supported by active surveillance rather than sampled at random.

 

The Prospective study to evaluate the safety, effectiveness and impact of the RTS,S/AS01E malaria vaccine in young children in sub-Saharan Africa is using active surveillance to enrol large numbers of children in vaccinated and unvaccinated clusters as part of the WHO Malaria Vaccine Implementation Programme to conduct temporal (before/after) and concurrent (exposed vs. unexposed clusters) comparisons. Clusters are selected based on geographically limited areas with demographic surveillance in place and infrastructure to monitor population health and vaccination programmes.

 

14.2.2.10. Methods to study waning immunity

 

The study of vaccine effectiveness against diseases where immunity wanes over time requires consideration of both the within-host dynamics of the pathogen and immune system and the associated population-level transmission dynamics. Implications of vaccination and waning immunity (Proc Biol Sci 2009;276(1664):2071-80) combines immunological and epidemiological models for measles infection to examine the interplay between disease incidence, waning immunity and boosting.

 

Besides discussing the effectiveness of varicella vaccines over time, Global Varicella Vaccine Effectiveness: A Meta-analysis (Pediatrics 2016;137(3):e20153741) reports low effectiveness in outbreak investigations and highlights the difficulty of reliably measuring effectiveness in situations where some confounders cannot be controlled for, the force of infection may be high, the degree of exposure may vary across study participants and estimates may originate from settings with epidemiologic evidence of vaccine failure. Multiple estimates are therefore needed to assess vaccine effectiveness accurately and to conclude that immunity is waning.

 

14.2.2.11. Misclassification in studies of vaccine effectiveness

 

Like vaccine safety studies, studies of vaccine effectiveness rely on accurate identification of vaccination status and of cases of vaccine-preventable diseases, but in practice diagnostic tests, clinical case definitions and vaccination records often present inaccuracies. For outcomes with a complex natural history, such as neurological or potentially immune-mediated diseases, and particularly when using secondary data collection (where case finding may be difficult), validation studies of cases may be needed as a first step. Bias due to differential and non-differential disease- and exposure misclassification in studies of vaccine effectiveness (PLoS One 2018;13(6):e0199180) explores through simulations the impact of non-differential and differential disease- and exposure misclassification when estimating vaccine effectiveness using cohort, case-control, test-negative case-control and case-cohort designs.

 

Misclassification can lead to significant bias, and its impact strongly depends on the vaccination scenario. A web application developed in the ADVANCE project is publicly available to assess the potential (joint) impact of possibly differential disease and exposure misclassification.
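
The direction and magnitude of such bias can be explored with simple simulations in the spirit of the study cited above. The sketch below (all parameters hypothetical) applies non-differential outcome misclassification to a simulated cohort and compares the observed with the true vaccine effectiveness:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cohort: 100,000 subjects, 70% vaccinated, true VE of 60%.
n, coverage, risk_unvax, true_ve = 100_000, 0.70, 0.05, 0.60
vaccinated = rng.random(n) < coverage
risk = np.where(vaccinated, risk_unvax * (1 - true_ve), risk_unvax)
disease = rng.random(n) < risk

# Non-differential outcome misclassification: sensitivity 80%, specificity 99%.
sens, spec = 0.80, 0.99
observed = np.where(disease, rng.random(n) < sens, rng.random(n) > spec)

def cohort_ve(vax, case):
    """VE as one minus the risk ratio (vaccinated vs unvaccinated)."""
    return 1 - case[vax].mean() / case[~vax].mean()

print(f"True VE:     {cohort_ve(vaccinated, disease):.1%}")   # ~60%
print(f"Observed VE: {cohort_ve(vaccinated, observed):.1%}")  # biased towards null
```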

 

14.3. Design, implementation and analysis of pharmacogenomic studies

 

14.3.1. Introduction

 

Individual differences in the response to medicines encompass variation in both efficacy and safety, including the risk of severe adverse drug reactions. Clinical factors influencing response include disease severity, age, gender and concomitant drug use. However, natural genetic variation that influences the expression or activity of proteins involved in drug disposition (absorption, metabolism, distribution and excretion), as well as the protein targets of drug action (such as enzymes and receptors), may be an important additional source of inter-individual variability in both the beneficial and adverse effects of drugs (see Pharmacogenomics: translating functional genomics into rational therapeutics, Science 1999;286(5439):487-91).

 

Pharmacogenomics is defined as the study of genetic variation as a determinant of drug response. Drug response may vary as a result of differences in the DNA sequence present in the germline or, in the case of cancer treatments, due to somatic variation in the DNA arising in cancer cells (see The Roles of Common Variation and Somatic Mutation in Cancer Pharmacogenomics, Oncol Ther. 2019;7(1):1-32) or, in the case of treatment or prevention of infectious diseases, due to variation in the pathogen's genome (see Pharmacogenomics and infectious diseases: impact on drug response and applications to disease management, Am J Health Syst Pharm. 2002;59(17):1626-31). The study of genetic variation underlying drug response can complement information on clinical factors and disease sub-phenotypes to optimise the prediction of treatment response and reduce the risk of adverse reactions. The identification of variation in genes that modify the response to drugs provides an opportunity to optimise the safety and effectiveness of currently available drugs and to develop new drugs for paediatric and adult populations (see Drug discovery: a historical perspective, Science 2000;287(5460):1960-4).

 

It is important to note that pharmacogenomics is one of several approaches available to identify useful biomarkers of drug effects. Other approaches include, but are not limited to, epigenomics (the study of gene expression changes not attributable to changes in the DNA sequence), transcriptomics, proteomics (protein function and levels, see Precision medicine: from pharmacogenomics to pharmacoproteomics, Clin Proteomics 2016;13:25), and metabolomics.

 

14.3.2. Identification of genetic variants influencing drug response

 

Approaches

 

Identification of genetic variation associated with important drug- or therapy-related outcomes can be carried out with three main technologies, the choice of which may be dictated by whether the aim is research and discovery or clinical application, and by whether the genetic variants being sought occur at high or low frequency in the population or patient group being evaluated. The strategy to identify genetic variants will depend on the aim and design of the pharmacogenetic study or the clinical application (see Methodological and statistical issues in pharmacogenomics, J Pharm Pharmacol. 2010;62(2):161-6). For illustration, to assess clinical applications, technologies might be used to identify genetic variants where there is already prior knowledge about the gene or the variant (candidate gene studies). These studies require prior information about the likelihood of the polymorphism, gene or gene product interacting with a drug or drug pathway; resources can thus be directed to several important genetic polymorphisms with a higher a priori chance of relevant drug-gene interactions. Moving towards individualized medicine with pharmacogenomics (Nature 2004;429(6990):464-8) explains that lack or incompleteness of information on genes from previous studies may result in failure to identify every important genetic determinant in the genome.

 

In contrast, genome-wide scan approaches are discovery-orientated and use technologies that identify genetic variants across the genome without previous information or a gene/variant hypothesis (hypothesis-generating or hypothesis-agnostic approach). Genome-wide approaches are widely used to discover the genetic basis of common complex diseases in which multiple genetic variants contribute to disease risk. The same study design is applicable to the identification of genetic variants that influence treatment response. However, common variants in the genome, if functional, generally have small effect sizes, and large sample sizes are therefore needed, for example by pooling different studies as done by the CHARGE Consortium with its focus on cardiovascular diseases (see The Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium as a model of collaborative science, Epidemiology 2013;24(3):346-8). By comparing the frequency of genetic variants between drug responders and non-responders, or between those with and without drug toxicity, genome-wide approaches can identify important genetic determinants, including variants in genes not previously considered as candidates, or even variants outside genes. However, because of linkage disequilibrium, whereby certain genetic determinants tend to be co-inherited, an association identified through a genome-wide approach may not reflect a truly biologically functional polymorphism but simply a linkage-related marker of the true biologically relevant determinant. This approach is therefore discovery in nature, and failure to cover all relevant genetic risk factors can still be a problem, although less so than with the candidate gene approach. It is therefore essential to conduct replication studies in independent cohorts and validation studies (in vivo and in vitro) to ascertain the generalisability of findings to populations of individuals, to characterise the mechanistic basis of the effect of these genes on drug action, and to identify the true biologic genetic determinants. Importantly, allele frequencies differ across populations, and these differences should be accounted for to reduce bias when designing and analysing pharmacogenetic studies, and to ensure equity when implementing pharmacogenomics in the healthcare setting (see Preventing the exacerbation of health disparities by iatrogenic pharmacogenomic applications: lessons from warfarin, Pharmacogenomics 2018;19(11):875-81).

 

More recently, pharmacogenomic research has also been undertaken in large national biobanks that link genetic data to healthcare data for cohorts of hundreds of thousands of participants, for example the UK Biobank (see UK Biobank: An Open Access Resource for Identifying the Causes of a Wide Range of Complex Diseases of Middle and Old Age, PLoS Med. 2015;12(3):e1001779) and the Estonian Biobank (see Cohort Profile: Estonian Biobank of the Estonian Genome Center, University of Tartu, Int J Epidemiol. 2015;44(4):1137-47). Translating genotype data of 44,000 biobank participants into clinical pharmacogenetic recommendations: challenges and solutions (Genet Med. 2019;21(6):1345-54) and other studies have shown that these large-scale resources represent unique opportunities to discover novel and rare variants.

 

Technologies used for detection of genetic variants

 

The main technologies are:

  • Genotyping and array-based technologies, which are the most feasible and cost-effective approach for most large-scale clinical utility studies and for clinical implementation, either through commercial or customised arrays. They can identify hundreds of thousands of genetic variants within one or several genes, including a common form of variation known as single nucleotide polymorphisms (SNPs). The identification of genetic determinants is limited to the variants included in the array, so this approach cannot be used to discover novel variants. Generally, the variants on an array are chosen on the grounds of biological plausibility, which may have been demonstrated in previous studies, or of knowledge of functional genes known to be involved in pharmacokinetic and pharmacodynamic pathways or related to the disease or an intermediate phenotype.

  • Sanger sequencing, which has been the gold standard for confirming genetic variants in clinical settings since it was first commercialised in 1986. More recently, Sanger sequencing has been replaced by other sequencing methods that increase the speed and reduce the cost of DNA sequencing, especially for automated analysis involving large numbers of samples.

  • Next generation sequencing (NGS) is a high-throughput sequencing technology that identifies genetic variants across the genome (whole genome sequencing; WGS) or the exome (whole exome sequencing; WES) without requiring prior knowledge on genetic biomarkers. These techniques may prove valuable in early research settings for discovery of novel or rare variants, and for the detection of structural variants and copy number variation which are common in pharmacogenes such as CYP2D6 (see A Review of the Important Role of CYP2D6 in Pharmacogenomics. Genes (Basel) 2020;11(11):1295). As use of clinical WGS testing increases, the return of secondary pharmacogenomic findings will benefit from greater understanding of rare and novel variants.

Variant curation and annotation

 

Lastly, the identification of genetic variants requires careful curation and annotation to ensure that their description and allelic designation are standardised. Common pharmacogenomic variants and haplotypes (combinations of sequence variants in the same individual) are catalogued by the Pharmacogene Variation Consortium (PharmVar) using a ‘star allele’ nomenclature. The use of this nomenclature is historic; in human disease genetics, the reference sequence identifier (rs-id) is more commonly used to assign a genetic variant unambiguously. Although the star allele nomenclature remains the most widely used classification in pharmacogenomic research, it is recognised to have several limitations. Pharmacogenomic haplotypes and star alleles can lack accurate definition and validation, and there may be limited annotation of phenotypic effects. In addition, current classifications exclude many rare variants, which are increasingly recognised as having an important effect, as described in Pharmacogenetics at Scale: An Analysis of the UK Biobank (Clin Pharmacol Ther. 2021;109(6):1528-37). Some authors have called for an effort to standardise the annotation of sequence variants (see The Star-Allele Nomenclature: Retooling for Translational Genomics, Clin Pharmacol Ther. 2007;82(3):244-8).

 

14.3.3. Study designs

 

Several options are available for the design of pharmacogenetic studies to ascertain the effect and importantly the clinical relevance and utility of obtaining pharmacogenetic information to guide prescribing decisions regarding the choice and dose of agent for a particular condition (see Prognosis research strategy (PROGRESS) 4: Stratified medicine research, BMJ. 2013;346:e5793).

 

Firstly, RCTs, both pre- and post-authorisation, provide the opportunity to address several pharmacogenetic questions. Pharmacogenetics in randomized controlled trials: considerations for trial design (Pharmacogenomics 2011;12(10):1485-92) describes three trial designs differing in the timing of randomisation and genotyping, and Promises and challenges of pharmacogenetics: an overview of study design, methodological and statistical issues (JRSM Cardiovasc Dis. 2012;1(1)) discusses outstanding methodological and statistical issues that may lead to heterogeneity among reported pharmacogenetic studies and how they may be addressed. Pharmacogenetic trials can be designed (or analysed post hoc) with the intention to study whether a subgroup of patients, defined by certain genetic characteristics, responds differently to the treatment under study. Alternatively, a trial can verify whether genotype-guided treatment is beneficial over standard care. Obvious limitations with regard to the assessment of rare adverse drug events or low-prevalence genetic variants are the large sample size required and the related high costs. In order to make a trial as efficient as possible in terms of time, money and/or sample size, it is possible to opt for an adaptive trial design, which allows prospectively planned modifications in design after patients have been enrolled in the study. Such a design uses accumulating data to decide how to modify aspects of the study during its progress, without undermining the validity and integrity of the trial. An additional benefit is that the expected number of patients exposed to an inferior/harmful treatment can be reduced (see Potential of adaptive clinical trial designs in pharmacogenetic research, Pharmacogenomics 2012;13(5):571-8).

 

Observational studies are an alternative and can be family-based (using twins or siblings) or population-based (using unrelated individuals). The main advantage of family-based studies is the avoidance of bias due to population stratification. A clear practical disadvantage for pharmacogenetic studies is the requirement to study families where patients have been treated with the same drugs (see Methodological quality of pharmacogenetic studies: issues of concern, Stat Med. 2008;27(30):6547-69).

 

Population-based studies may be designed to assess drug-gene interactions as cohort (including exposure-only), case-cohort or case-control studies (including case-only, as described in Nontraditional epidemiologic approaches in the analysis of gene-environment interaction: case-control studies with no controls! Am J Epidemiol. 1996;144(3):207-13). Sound pharmacoepidemiological principles as described in the current Guide also apply to observational pharmacogenetic studies. A specific type of confounding due to population stratification needs to be considered in pharmacogenetic studies and, if present, needs to be dealt with. Its presence may be obvious where the study population includes more than one immediately recognisable ethnic group; in other studies stratification may be more subtle. Population stratification can be detected by Pritchard and Rosenberg's method, which involves genotyping additional SNPs in other areas of the genome and testing for association between them and the outcome (see Association mapping in structured populations, Am J Hum Genet. 2000;67(1):170-81). In genome-wide association studies, the data contained within the many typed SNPs can be used to assess population stratification without the need for further genotyping. Several methods have been suggested to control for population stratification, such as genomic control, structured association and EIGENSTRAT. These methods are discussed in Methodological quality of pharmacogenetic studies: issues of concern (Stat Med. 2008;27(30):6547-69), Softwares and methods for estimating genetic ancestry in human populations (Hum Genomics 2013;7(1):1) and Population Stratification in Genetic Association Studies (Curr Protoc Hum Genet. 2017;95:1.22.1-1.22.23).
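
A common way to adjust for stratification in genome-wide data, in the spirit of the EIGENSTRAT approach, is to compute principal components of the standardised genotype matrix and include the leading components as covariates in the association model. A minimal sketch with simulated genotypes (hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated genotype matrix: 500 subjects x 2,000 SNPs coded 0/1/2.
genotypes = rng.binomial(2, 0.3, size=(500, 2000)).astype(float)

# Standardise each SNP by its allele frequency, then extract the
# leading principal components via singular value decomposition.
freq = genotypes.mean(axis=0) / 2
z = (genotypes - 2 * freq) / np.sqrt(2 * freq * (1 - freq))
u, s, _ = np.linalg.svd(z, full_matrices=False)
ancestry_pcs = u[:, :10]  # top 10 axes of genetic ancestry

# These components would then enter the drug-response model as covariates,
# e.g. outcome ~ genotype + exposure + genotype:exposure + PC1..PC10.
print(ancestry_pcs.shape)  # (500, 10)
```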

 

The main advantage of exposure-only and case-only designs is the smaller sample size required, at the cost of not being able to study the main effect on the outcome of drug exposure (case-only) or of the genetic variant (exposure-only). Furthermore, interaction can be assessed only on a multiplicative scale, whereas from a public health perspective additive interactions are very relevant. To date, genome-wide studies of gene-drug interactions have not been very rewarding because of the very large sample sizes required for sufficient power. However, this is likely to improve as genetic data are linked to longitudinal clinical data in large biobanks, as described in Drug Response Pharmacogenetics for 200,000 UK Biobank Participants (Biocomputing 2021;184-95). An important condition that has to be fulfilled for case-only studies is that the exposure is independent of the genetic variant, e.g. prescribers are not aware of the genotype of a patient and do not take it into account, directly or indirectly (by observing clinical characteristics associated with the genetic variant). In the exposure-only design, the genetic variant should not be associated with the outcome, for example variants of genes coding for cytochrome P450 enzymes. When these conditions are fulfilled and the main interest is in the drug-gene interaction, these designs may be an efficient option. In practice, case-control and case-only studies usually result in the same interaction effect, as empirically assessed in Bias in the case-only design applied to studies of gene-environment and gene-gene interaction: a systematic review and meta-analysis (Int J Epidemiol. 2011;40(5):1329-41). The assumption of independence of genetic and exposure factors can be verified among controls before proceeding to the case-only analysis. Further development of the case-only design for assessing gene-environment interaction: evaluation of and adjustment for bias (Int J Epidemiol. 2004;33(5):1014-24) conducted sensitivity analyses to describe the circumstances in which controls can be used as a proxy for the source population when evaluating gene-environment independence. The gene-environment association in controls will be a reasonably accurate reflection of that in the source population if the baseline risk of disease is small (<1%) and the interaction and independent effects are moderate (i.e. risk ratio <2), or if the disease risk is low (e.g. <5%) in all strata of genotype and exposure. Furthermore, non-independence of gene and environment can be adjusted for in multivariable models if it can be measured in controls. Further methodological considerations and assumptions of study designs in pharmacogenomics research are discussed in A critical appraisal of pharmacogenetic inference (Clin Genet. 2018;93(3):498-507).
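
A minimal sketch of the case-only estimator (hypothetical counts): given gene-exposure independence in the source population, the odds ratio relating genotype to exposure among cases alone estimates the multiplicative gene-drug interaction.

```python
# Case-only design: cross-classify cases by genotype and drug exposure.
# Counts are hypothetical, for illustration only.
cases = {
    ("variant", "exposed"): 60,
    ("variant", "unexposed"): 40,
    ("wildtype", "exposed"): 150,
    ("wildtype", "unexposed"): 250,
}

interaction_or = (
    cases[("variant", "exposed")] * cases[("wildtype", "unexposed")]
) / (cases[("variant", "unexposed")] * cases[("wildtype", "exposed")])

print(f"Case-only interaction OR: {interaction_or:.2f}")  # 2.50
```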

 

Lastly, variation in the prevalence and effect of pharmacogenetic variants across ethnicities is an important consideration for study design and, ultimately, for the clinical utility, cost-effectiveness and implementation of testing. International research collaborations, as demonstrated in several studies (see HLA-B*5701 Screening for Hypersensitivity to Abacavir, N Engl J Med. 2008;358(6):568-79; and Effect of Genotype-Guided Oral P2Y12 Inhibitor Selection vs Conventional Clopidogrel Therapy on Ischemic Outcomes After Percutaneous Coronary Intervention: The TAILOR-PCI Randomized Clinical Trial, JAMA 2020;324(8):761-71), encourage greater representation of different populations and ensure broader applicability of pharmacogenomic study results. Diverse ethnic representation in study recruitment is important to detect the range of variant alleles of importance across different ethnic groups and to reduce inequity in the clinical impact of pharmacogenomic testing once implemented.

 

14.3.4. Data collection

 

The same principles and approaches to data collection as for other pharmacoepidemiological studies can be followed (see Chapter 4 of this Guide on Approaches to Data Collection). An efficient approach to data collection for pharmacogenetic studies is to combine secondary use of electronic health records with primary data collection (e.g. biological samples to extract DNA).

 

Examples are given in SLCO1B1 genetic variant associated with statin-induced myopathy: a proof-of-concept study using the clinical practice research datalink (Clin Pharmacol Ther. 2013;94(6):695-701), Diuretic therapy, the alpha-adducin gene variant, and the risk of myocardial infarction or stroke in persons with treated hypertension (JAMA. 2002;287(13):1680-9) and Interaction between the Gly460Trp alpha-adducin gene variant and diuretics on the risk of myocardial infarction (J Hypertens. 2009;27(1):61-8). Another approach to enrich electronic health records with biological samples is record linkage to biobanks, as illustrated in Genetic variation in the renin-angiotensin system modifies the beneficial effects of ACE inhibitors on the risk of diabetes mellitus among hypertensives (J Hum Hypertens. 2008;22(11):774-80). A third approach is to use active surveillance methods to fully characterise drug effects such that a rigorous phenotype can be developed prior to genetic analysis. This approach was followed in Adverse drug reaction active surveillance: developing a national network in Canada's children's hospitals (Pharmacoepidemiol Drug Saf. 2009;18(8):713-21) and EUDRAGENE: European collaboration to establish a case-control DNA collection for studying the genetic basis of adverse drug reactions (Pharmacogenomics 2006;7(4):633-8).

 

14.3.5. Data analysis

 

The focus of data analysis should be on the measure of effect modification (see Chapter 4.2.4 of this Guide on Effect Modification). Attention should be given to whether the mode of inheritance (e.g. dominant, recessive or additive) is defined a priori based on prior knowledge from functional studies. However, investigators are usually naïve regarding the underlying mode of inheritance. A solution might be to undertake several analyses, each under a different assumption, though this raises the problem of multiple testing (see Methodological quality of pharmacogenetic studies: issues of concern, Stat Med. 2008;27(30):6547-69). Multiple testing and the resulting increased risk of type I error are a general concern in pharmacogenetic studies evaluating multiple SNPs, multiple exposures and multiple interactions. The most common approach to correct for multiple testing is the Bonferroni correction, which may be considered too conservative and runs the risk of producing many pharmacogenetic studies with null results. Other approaches to adjust for multiple testing, such as permutation testing and false discovery rate (FDR) control, are less conservative. The FDR, described in Statistical significance for genome-wide studies (Proc Natl Acad Sci USA 2003;100(16):9440-5), estimates the expected proportion of false positives among associations declared significant, expressed as a q-value.
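
Both types of correction are readily applied in practice. The sketch below (hypothetical p-values) contrasts the Bonferroni correction with Benjamini-Hochberg FDR control using statsmodels:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from testing 10 candidate SNP-drug interactions.
pvals = np.array([0.0004, 0.003, 0.008, 0.02, 0.04,
                  0.06, 0.11, 0.35, 0.62, 0.91])

# Family-wise error control (conservative).
reject_bonf, _, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")

# False discovery rate control (less conservative).
reject_fdr, _, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

print(f"Significant after Bonferroni: {reject_bonf.sum()}")  # 2
print(f"Significant after FDR (BH):   {reject_fdr.sum()}")   # 4
```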

 

Alternative innovative methods are under development and may be used in the future, such as Mendelian Randomization (see Mendelian Randomization: New Applications in the Coming Age of Hypothesis-Free Causality, Annu Rev Genomics Hum Genet. 2015;16:327-50), systems biology, Bayesian approaches, or data mining (see Methodological and statistical issues in pharmacogenomics, J Pharm Pharmacol. 2010;62(2):161-6).

Important complementary approaches include individual patient data meta-analyses and/or replication studies to reduce the risk of false-positive findings.

 

An important step in the analysis of genome-wide association study data is the conduct of rigorous quality control procedures before the final association analyses. This becomes particularly important when phenotypic data were originally collected for a different purpose (“secondary use of data”). Relevant guidelines include Guideline for data analysis of genomewide association studies (Cancer Genomics Proteomics 2007;4(1):27-34) and Statistical Optimization of Pharmacogenomics Association Studies: Key Considerations from Study Design to Analysis (Curr Pharmacogenomics Person Med. 2011;9(1):41-66).

 

14.3.6. Reporting

 

The guideline STrengthening the Reporting Of Pharmacogenetic Studies: Development of the STROPS guideline (PLOS Medicine 2020;17(9):e1003344) should be followed when reporting findings of pharmacogenetic studies. Essential Characteristics of Pharmacogenomics Study Publications (Clin Pharmacol Ther. 2019;105(1):86-91) also provides recommendations to ensure that all relevant information is reported in pharmacogenetic studies. As pharmacogenetic information is increasingly found in drug labels, as described in Pharmacogenomic information in drug labels: European Medicines Agency perspective (Pharmacogenomics J. 2015;15(3):201-10), it is essential to ensure consistency in the reporting of pharmacogenetic studies. Additional efforts by regulatory agencies, international organisations or boards to standardise the reporting and utilisation of pharmacogenetic studies are discussed in the next section.

 

14.3.7. Clinical implementation and resources

 

An important step towards the implementation of the use of genotype information to guide pharmacotherapy is the development of clinical practice guidelines. An important pharmacogenomics knowledge resource is PharmGKB, which curates and disseminates clinical information about the impact of human genetic variation on drug responses, including genotype-phenotype relationships, potentially clinically actionable gene-drug associations, clinical guidelines and drug labels. The development and publication of clinical practice guidelines for pharmacogenomics has been driven by international initiatives including the Clinical Pharmacogenetics Implementation Consortium, the European Medicines Agency Pharmacogenomics Working Party, the Dutch Pharmacogenetics Working Group (see Pharmacogenetics: From Bench to Byte - An Update of Guidelines, Clin Pharmacol Ther. 2011;89(5):662-73; Use of Pharmacogenetic Drugs by the Dutch Population, Front Genet. 2019;10:567) and the Canadian Pharmacogenomics Network for Drug Safety. Evidence of the clinical utility and cost-effectiveness of pharmacogenomic tests is important to support the translation of clinical guidelines into policies for implementation across health services, such as pharmacogenomic testing for DPYD polymorphisms with fluoropyrimidine therapies (see EMA recommendations on DPD testing prior to treatment with fluorouracil, capecitabine, tegafur and flucytosine).

 

The clinical implementation of pharmacogenomic testing requires consideration of complex clinical pathways and the multifactorial nature of drug response. Translational research and clinical utility studies can identify issues arising from the translation of pharmacokinetic or retrospective studies into real-world implementation of pharmacogenomic testing (see Carbamazepine-induced toxic effects and HLA-B*1502 screening in Taiwan, N Engl J Med. 2011;364(12):1126-33). Careful consideration is required in the interpretation of gene variants which cause a spectrum of effects. Binary interpretation or thresholds for phenotypic categorisation within clinical guidelines may result in different treatment recommendations for patients who would ultimately have the same drug response. In addition, the safety, efficacy and cost-effectiveness of alternative treatments are important factors in assessing the overall health benefit to patients from pharmacogenomic testing.

 

Within clinical practice, the choice of technology for testing must be mapped to the clinical pathway to ensure that test results are available at an appropriate time to guide decision-making. Other key factors for clinical implementation include workforce education in pharmacogenomics, multidisciplinary pathway design, digital integration and tools to aid shared decision making (see Attitudes of clinicians following large-scale pharmacogenomics implementation, Pharmacogenomics J. 2016;16(4):393-8; Pharmacogenomics Implementation at the National Institutes of Health Clinical Center, J Clin Pharmacol. 2017;57 (Suppl 10):S67-S77; The implementation of pharmacogenomics into UK general practice: a qualitative study exploring barriers, challenges and opportunities, J Community Genet. 2020;11(3):269-277; Implementation of a multidisciplinary pharmacogenomics clinic in a community health system, Am J Health Syst Pharm. 2016;73(23):1956-66).

 

Large-scale international population studies of clinical utility in pharmacogenomics will contribute to understanding these real-world implementation factors, including studies underway by the U-PGx consortium (see Implementing Pharmacogenomics in Europe: Design and Implementation Strategy of the Ubiquitous Pharmacogenomics Consortium, Clin Pharmacol Ther. 2017;101(3):341-58) and the IGNITE network (see The IGNITE Pharmacogenetics Working Group: An Opportunity for Building Evidence with Pharmacogenetic Implementation in a Real-World Setting, Clin Transl Sci. 2017;10(3):143-6).

 

14.4. Methods for pharmacovigilance impact research

 

14.4.1. Introduction

 

Pharmacovigilance activities aim to protect patients and promote public health. This includes implementing risk minimisation measures that lead to changes in the knowledge and behaviour of individuals (e.g. patients, consumers, caregivers and healthcare professionals) and in healthcare practice. Impact research aims to generate evidence to evaluate the outcomes of these activities which may be intended or unintended. This approach has been adopted in the EMA Guideline on good pharmacovigilance practices (GVP) - Module XVI – Risk minimisation measures: selection of tools and effectiveness indicators (Rev 2), which is currently undergoing revision (see Guideline on good pharmacovigilance practices (GVP) - Module Risk Minimisation Measures for the draft of Rev. 3).

 

Pharmacovigilance activities are frequently examined for their impact on healthcare delivery, for example healthcare outcomes or drug utilisation patterns following changes to the product information. In addition, it is important to measure the dissemination of risk minimisation information as well as changes in the knowledge, awareness and behaviour of healthcare professionals and patients.

 

These effects can be assessed separately or combined in a framework, which is more challenging and therefore rarely done. One example of a standardised framework evaluates the effectiveness of risk minimisation measures across four domains: data, knowledge, behaviour and outcomes (Evaluating the effectiveness of risk minimisation measures: the application of a conceptual framework to Danish real-world dabigatran data; Pharmacoepidemiol Drug Saf. 2017;26(6):607-14). Further testing of this method is needed, however, to ascertain its usefulness in regulatory practice.

 

Measuring the impact of pharmacovigilance activities may be challenging as these activities may target stakeholder groups at different levels of the healthcare system, co-exist with other unrelated events that can influence healthcare, and may use several tools applied simultaneously or sequentially to deliver information and influence behaviour (Measuring the impact of pharmacovigilance activities, challenging but important; Br J Clin Pharmacol. 2019;85(10):2235-7). In addition to the intended outcomes of pharmacovigilance activities, there may be unintended outcomes, which are important to measure as they could counteract the effectiveness of risk minimisation. Another challenging aspect is separating the outcomes of individual pharmacovigilance activities from those of simultaneous events such as media attention, reimbursement policies, publications in scientific journals, changes in clinical guidelines and practice, or secular trends in health outcomes.

 

This Chapter provides detailed guidance on the methodological conduct of impact studies.

 

14.4.2. Outcomes

 

Outcomes to be studied in impact research are closely tied to the nature and objective of the pharmacovigilance activities. Because regulatory actions are mostly tailored to individual medicinal products, there is no standard outcome that could be measured for each activity and the concepts outlined in this chapter need to be applied on a case-by-case basis (Post-approval evaluation of effectiveness of risk minimisation: methods, challenges and interpretation; Drug Saf. 2014;37(1):33-42).

 

Outcome measures provide an overall indication of the level of risk reduction achieved with a specific risk minimisation measure in place. This may also require measuring outcomes not linked to the specific medicinal product but representing potential unintended consequences of regulatory interventions, e.g. changes in the use of non-target medicines in a population leading to less favourable health outcomes. Examples are provided in Table XVI.1 of the Guideline on good pharmacovigilance practices (GVP) - Module Risk Minimisation Measures.

 

Relevant outcomes may include: information dissemination and risk knowledge; changes in behaviour or clinical practice; drug utilisation patterns (e.g. prescribing or dispensing rates, use of treatment alternatives); and health outcomes (Measuring the impact of medicines regulatory interventions - Systematic review and methodological considerations; Br J Clin Pharmacol. 2018;84(3):419-33).

 

Dissemination of information and risk knowledge can be assessed in a quantitative, qualitative or mixed-methods manner. Quantitative assessment can involve measuring the proportion of healthcare professionals and patients aware of the risk minimisation measure as well as their level of comprehension (Effectiveness of Risk Minimization Measures to Prevent Pregnancy Exposure to Mycophenolate-Containing Medicines in Europe; Pharmaceut Med. 2019;33(5):395-406). Qualitative measures often focus on understanding attitudes towards the risk minimisation measure and the impact of external factors on implementation and information uptake, whilst mixed methods utilise both qualitative and quantitative approaches.

 

Assessment of behavioural changes is performed to measure whether, and to what extent, changes towards the intended behaviour have been achieved. These measures align with those applied when measuring the dissemination of information and risk knowledge. Quantitative assessment can include measuring the proportion of patients exposed to a medicinal product in a way that is not in accordance with its authorised use (off-label use, contraindicated use, interactions). A qualitative assessment may allow an in-depth understanding of enablers and barriers in relation to awareness and attitudes towards use of the medicinal product, and of the reasons why intended outcomes may not have been achieved.

 

Health outcomes should preferably be measured directly. They may include clinical outcomes such as all-cause mortality, congenital defects or other conditions that prompted the pharmacovigilance activity. Direct measurement of health outcomes is not always feasible or necessary, for example when it can be replaced with indirect measures. Indirect surrogate measures may use data on hospitalisations, emergency department admissions or laboratory values, e.g. blood pressure as a surrogate for cardiac risk, as outlined in Practical Approaches to Risk Minimisation for Medicinal Products: Report of CIOMS Working Group IX. An example of the use of a surrogate measure is glycaemic control (HbA1c change from baseline) in patients with diabetes mellitus in the Veterans Integrated Services Network database; the results confirmed a 45% discontinuation of thiazolidinedione use in this population and a worsening of glycaemic control following safety warning publicity in 2007, which may have driven the decline in usage of this class of medicines (Impact of thiazolidinedione safety warnings on medication use patterns and glycemic control among veterans with diabetes mellitus; J Diabetes Complications 2011;25(3):143-50).

 

Depending on the nature of the safety concern and the regulatory intervention, or when the assessment of patient-relevant health outcomes is unfeasible (e.g. inadequate number of exposed patients, rare adverse reaction), the dissemination of safety information, risk knowledge or behavioural changes may be alternative objectives of impact research (Guideline on good pharmacovigilance practices (GVP) - Module VIII – Post-authorisation safety studies (Rev 3)).

 

14.4.3. Considerations on data sources

 

The impact of pharmacovigilance activities can be measured using both primary and secondary data collection, although the literature shows that the latter is more commonly used (Measuring the impact of medicines regulatory interventions - Systematic review and methodological considerations; Br J Clin Pharmacol. 2018;84(3):419-33). Chapter 4 of this Guide provides a general description of the main characteristics, advantages and disadvantages of various data sources. Chapter 4.1.2 provides guidance on primary data collection through surveys.

 

The impact of pharmacovigilance activities should be interpreted in light of the limitations of the data sources used for the evaluation (A General Framework for Considering Selection Bias in EHR-Based Studies: What Data Are Observed and Why?; EGEMS (Wash DC) 2016;4(1):1203). Researchers should have a clear understanding of the limitations of the different data sources when planning their research and should assess whether these limitations could shift the results in one direction or the other to such a degree that their interpretation may be significantly affected, for example due to bias or unmeasured confounders. As for all observational studies, evaluating the usefulness and limitations of a given data source for a study requires a very good understanding of the research question.

 

Primary data collection, via interviews or surveys, can rarely cover the complete target population. A sampling approach is therefore often required, which can involve those who prescribe, dispense or use the medicinal product. Sampling should be performed in accordance with the Guideline on good pharmacovigilance practices (GVP) - Module XVI Addendum II, ensuring representativeness of the target population. The following elements should be considered to minimise bias and optimise generalisability: sampling procedures (including sample size), design and administration of the data collection instrument, analytical approaches and overall feasibility (including ethics).
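
For example, the number of respondents needed to estimate a proportion (such as the proportion of prescribers aware of a new contraindication) with a given absolute precision can be approximated with the standard formula n = z²p(1 − p)/e². A minimal sketch with hypothetical inputs:

```python
import math

def sample_size_proportion(p_expected, margin, z=1.96):
    """Approximate n to estimate a proportion with a given absolute
    margin of error at 95% confidence (infinite population)."""
    return math.ceil(z**2 * p_expected * (1 - p_expected) / margin**2)

# Hypothetical: expect ~70% awareness, want a +/-5% margin of error.
print(sample_size_proportion(0.70, 0.05))  # 323
```

Finite-population corrections and adjustments for expected non-response would further modify this figure in a real survey.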

 

Different databases are unlikely to capture all impact-relevant outcomes, even when they are linked to one another. Data of good quality may be available on hard outcomes such as death, hospital admission, emergency room visits or medical contacts, but claims databases rarely capture primary care diagnoses, symptoms, conditions or other events that do not lead to a claim, such as suicidal ideation, abuse or misuse. An accurate definition of the outcomes also often requires the development of algorithms that need to be validated in the database that will be used for impact measurement.

 

Nurse-Led Medicines' Monitoring for Patients with Dementia in Care Homes: A Pragmatic Cohort Stepped Wedge Cluster Randomised Trial (PLoS One 2015;10(10):e0140203) reported that only about 50% of the less serious drug-related problems listed in the product information are recorded in patient notes. If generalisable to electronic data sources, this would indicate that incomplete recording of patient-reported outcomes of low severity may reduce the likelihood of identifying some outcomes related to a pharmacovigilance activity, for example a change in the frequency of occurrence of an adverse drug reaction (ADR). Combining different approaches, such as integrating a patient survey, may be necessary to overcome this limitation.

 

Missing information on vulnerable populations, such as pregnant women, and missing mother-child or father-child links are significant barriers to measuring the impact of maternal or paternal exposure or behaviour. For example, the impact of pregnancy prevention programmes could not be accurately assessed using European databases that had been used to report prescribing in pregnancy (The limitations of some European healthcare databases for monitoring the effectiveness of pregnancy prevention programmes as risk minimisation measures; Eur J Clin Pharmacol. 2018;74(4):513-20). This was largely due to inadequate data on planned abortions and exposure to oral contraceptives.

 

Depending on the initial purpose of the data source used for impact research, information on potential confounders may be missing, such as the indication for drug use, co-morbidities, co-medication, smoking, diet, body mass index, family history of disease or recreational drug use. Missing information may impair a valid assessment of risk factors for changes in health care practice, but this limitation should be considered in light of the research question. In some settings, record linkage between different types of data sources containing complementary information can provide comprehensive data on the frequency of ADRs and potential confounders (Health services research and data linkages: issues, methods, and directions for the future; Health Serv Res. 2010;45(5 Pt 2):1468-88; Selective Serotonin Reuptake Inhibitor (SSRI) Antidepressants in Pregnancy and Congenital Anomalies: Analysis of Linked Databases in Wales, Norway and Funen, Denmark; PLoS One 2016;11(12):e0165122; Linking electronic health records to better understand breast cancer patient pathways within and between two health systems; EGEMS (Wash DC) 2015;3(1):1127).
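As a simple illustration of such record linkage, the sketch below links a hypothetical prescription extract to a births table and an anomaly register through a mother-child link; all table and column names are invented, and a real study would use validated gestational-age information rather than the crude nine-month window shown here.

```python
import pandas as pd

# Hypothetical extracts; all table and column names are illustrative only.
prescriptions = pd.DataFrame({
    "mother_id": [1, 1, 2, 3],
    "atc_code": ["N06AB03", "N06AB03", "N02BE01", "N06AB03"],
    "dispense_date": pd.to_datetime(["2015-02-01", "2015-05-10",
                                     "2015-03-15", "2016-01-20"]),
})
births = pd.DataFrame({
    "mother_id": [1, 2, 3],
    "child_id": [101, 102, 103],
    "birth_date": pd.to_datetime(["2015-09-01", "2015-11-20", "2016-06-30"]),
})
anomalies = pd.DataFrame({"child_id": [101], "icd10": ["Q21.1"]})

# Deterministic linkage: mother -> child -> outcome.
linked = (prescriptions
          .merge(births, on="mother_id", how="inner")
          .merge(anomalies, on="child_id", how="left"))

# Keep dispensings in the nine months before birth as a crude pregnancy proxy.
in_pregnancy = linked[
    (linked["dispense_date"] >= linked["birth_date"] - pd.DateOffset(months=9))
    & (linked["dispense_date"] <= linked["birth_date"])
]
print(in_pregnancy[["mother_id", "child_id", "atc_code", "icd10"]])
```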

 

14.4.4. Study designs

 

14.4.4.1. Single time point cross-sectional study

 

The cross-sectional study design as defined in Appendix 1.1.2.1 of the Guideline on good pharmacovigilance practices (GVP) - Module VIII – Post-authorisation safety studies (Rev 3) collects data at a single point in time after implementation of a regulatory intervention. However, cross-sectional studies have limitations as a sole measure of the impact of interventions, since they provide no pre-intervention baseline against which to compare. Cross-sectional studies may include data collected through surveys and can be complemented with data from other studies, e.g. on patterns of drug use (Healthcare professional surveys to investigate the implementation of the isotretinoin Pregnancy Prevention Programme: a descriptive study; Expert Opin Drug Saf. 2013;12(1):29-38; Prescriptive contraceptive use among isotretinoin users in the Netherlands in comparison with non-users: a drug utilisation study; Pharmacoepidemiol Drug Saf. 2012;21(10):1060-6).

 

14.4.4.2. Before/after cross-sectional study

 

A before/after cross-sectional study is defined as an evaluation at one point in time before and one point in time after the date of the intervention and/or its implementation. When uncontrolled, before/after cross-sectional studies need to be interpreted with caution as any baseline trends are ignored, potentially leading to the intervention effect being incorrectly estimated. Including a control (e.g. a population that did not receive the intervention or a drug not targeted by the risk minimisation measure) can strengthen this design by minimising potential confounding. However, identifying a suitable control group may be challenging or unfeasible as any regulatory action aimed at reducing risk is intended to be applied to the entire target population (Post-approval evaluation of effectiveness of risk minimisation: methods, challenges and interpretation; Drug Saf. 2014;37(1):33-42; Measuring the impact of medicines regulatory interventions - Systematic review and methodological considerations; Br J Clin Pharmacol. 2018;84(3):419-33).

 

14.4.4.3. Time series design

 

A time series is a sequence of data points (values) usually gathered at regularly spaced intervals over time. In impact research, these data points typically quantify an outcome of interest, whose underlying trend is ‘interrupted’ by a regulatory intervention at a known point in time. Time series data can be analysed using various methods, including interrupted time series (ITS) and joinpoint analysis.
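As a minimal illustration of how such a series is typically constructed, the Python sketch below aggregates individual dispensing events (hypothetical dates) into a regularly spaced monthly count series using pandas.

```python
import pandas as pd

# Hypothetical dispensing events; in practice these come from a prescription
# or claims database.
events = pd.DataFrame({
    "dispense_date": pd.to_datetime(["2019-01-03", "2019-01-28", "2019-02-14",
                                     "2019-02-20", "2019-03-02"]),
})

# Aggregate to a regularly spaced monthly series of dispensing counts.
monthly = (events.set_index("dispense_date")
                 .resample("MS")   # month-start frequency
                 .size()
                 .rename("n_dispensings"))
print(monthly)
```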

 

14.4.4.4. Cohort study

 

The cohort study design as defined in Appendix 1.1.2.2 of the Guideline on good pharmacovigilance practices (GVP) - Module VIII – Post-authorisation safety studies (Rev 3) can be useful in impact research to establish the base population for the conduct of drug utilisation studies or to perform aetiological studies.

Cohort studies can be used to study exposure to the medicine targeted by a regulatory intervention before and after its implementation, and to perform drug utilisation studies in the clinical populations targeted by the intervention. Modelling the impact of interventions on health outcomes may require more complex study designs, which are the subject of further research.


14.4.4.5. Randomised controlled trial

 

The randomised controlled trial (RCT) as defined in Appendix 1.1.2.2 of the Guideline on good pharmacovigilance practices (GVP) - Module VIII – Post-authorisation safety studies (Rev 3) can be useful in evaluating the effectiveness of different interventions, but it is not always possible to randomise individual participants and few examples exist (Improved therapeutic monitoring with several interventions: a randomized trial; Arch Intern Med. 2006;166(17):1848-54). Designs such as cluster randomised trials, in which randomisation is conducted at the level of an organisation, or stepped-wedge trials, in which a phased roll-out of the intervention is randomised, may be more feasible (Research designs for studies evaluating the effectiveness of change and improvement strategies; Qual Saf Health Care 2003;12(1):47-52). RCTs could be considered more often to generate evidence on the impact of pharmacovigilance interventions, for example by evaluating interventions that could potentially enhance the agreed safety information and the usual dissemination and communication channels.

 

14.4.5. Analytical methods

 

The analytical methods to be applied in impact research depend on the study design and the approach to data collection. Various types of analyses have been used to assess the impact of regulatory guidance, as described in: Measuring the impact of medicines regulatory interventions - Systematic review and methodological considerations (Br J Clin Pharmacol. 2018;84(3):419-33); Impact of regulatory guidances and drug regulation on risk minimization interventions in drug safety: a systematic review (Drug Saf. 2012;35(7):535-46); and A descriptive review of additional risk minimisation measures applied to EU centrally authorised medicines 2006-2015 (Expert Opin Drug Saf. 2017;16(8):877-84).

 

14.4.5.1 Descriptive statistics

 

Descriptive measures are the basis of quantitative analyses in studies evaluating the impact of regulatory interventions. Whilst appropriate for describing the study population and understanding generalisability, simple descriptive approaches alone cannot determine whether statistically significant changes have occurred (Measuring the impact of medicines regulatory interventions - Systematic review and methodological considerations; Br J Clin Pharmacol. 2018;84(3):419-33).

 

14.4.5.2 Time series analysis

 

Interrupted time series (ITS) analysis

 

ITS analysis, sometimes referred to as interrupted segmented regression analysis, can provide statistical evidence about whether observed changes in a time series represent a real decrease or increase by accounting for secular trends. ITS has commonly been used to measure the impact of regulatory interventions and is among the more robust approaches to pharmacovigilance impact research (Measuring the impact of medicines regulatory interventions - Systematic review and methodological considerations; Br J Clin Pharmacol. 2018;84(3):419-33; Impact of EMA regulatory label changes on systemic diclofenac initiation, discontinuation, and switching to other pain medicines in Scotland, England, Denmark, and The Netherlands; Pharmacoepidemiol Drug Saf. 2020;29(3):296-305; The Effect of Safety Warnings on Antipsychotic Drug Prescribing in Elderly Persons with Dementia in the United Kingdom and Italy: A Population-Based Study; CNS Drugs 2016;30(11):1097-109).

 

ITS is well suited to studying changes in outcomes that are expected to occur relatively quickly following an intervention, such as a change in prescribing; the outcome series can consist of averages, proportions, counts or rates. ITS can be used to estimate a variety of effects, including the immediate change in the outcome after the intervention, the change in the outcome trend compared with before the intervention, and the effects at specific time points following the intervention.

 

Common segmented regression models fit a least squares regression line to each time segment and assume a linear relationship between time and the outcome within each segment.
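A minimal sketch of this model in Python (using statsmodels) is shown below, fitted to a simulated monthly prescribing rate; the series, the intervention month and the effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate 48 months of a prescribing rate with a drop in level and in slope
# after a hypothetical safety communication at month 24.
n, intervention = 48, 24
time = np.arange(n)
post = (time >= intervention).astype(int)
time_after = np.where(post == 1, time - intervention, 0)
rate = 100 + 0.5 * time - 15 * post - 1.0 * time_after + rng.normal(0, 2, n)
df = pd.DataFrame({"rate": rate, "time": time, "post": post,
                   "time_after": time_after})

# Segmented (interrupted) regression:
#   coefficient on time       = pre-intervention secular trend
#   coefficient on post       = immediate level change at the intervention
#   coefficient on time_after = change in trend after the intervention
fit = smf.ols("rate ~ time + post + time_after", data=df).fit()
print(fit.params)
```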

 

When the effects of an intervention take time to manifest, this can be accounted for through the use of lag times in the analysis to avoid mis-specifying the intervention effect. One option is to exclude from the analysis the outcome values that occur during the lag or intervention period; alternatively, with enough data points, this period may be modelled as a separate segment.
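One way to implement the exclusion option is sketched below, continuing the simulated segmented-regression setup above; the three-month lag is an invented example.

```python
import numpy as np
import pandas as pd

# Segmented-regression setup as above, now with a 3-month lag during which the
# intervention effect is assumed to be still building up (numbers illustrative).
n, intervention, lag = 48, 24, 3
df = pd.DataFrame({"time": np.arange(n)})
df["post"] = (df["time"] >= intervention + lag).astype(int)
df["time_after"] = np.where(df["post"] == 1,
                            df["time"] - (intervention + lag), 0)

# Exclude the lag window so the level-change estimate is not diluted by
# observations taken while the effect was still emerging.
analysis_set = df[(df["time"] < intervention) | (df["time"] >= intervention + lag)]
print(len(analysis_set), "of", n, "time points retained")
```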

 

ITS regression requires that the time point of the intervention is known prior to the analysis and that sufficient data points are collected before and after the intervention to provide adequate power. Studies with a small number of data points should be interpreted with caution as they may be underpowered.

 

An assumption of ITS segmented regression analysis is that time points are independent of each other. Autocorrelation measures how strongly observations collected close together in time are correlated with each other. If present, autocorrelation violates the assumption that observations are independent and can lead to over-estimation of the statistical significance of effects. It can be checked by examining autocorrelation and partial autocorrelation function plots, the Durbin-Watson statistic or the Breusch-Godfrey test (Testing for serial correlation in least squares regression. I; Biometrika 1950;37(3-4):409-28; Testing for serial correlation in least squares regression. II; Biometrika 1951;38(1-2):159-78). Autocorrelation, seasonality and non-stationarity should therefore be checked and, if detected, may require more complex modelling approaches, e.g. autoregressive integrated moving average (ARIMA) models (Impact of FDA Black Box Warning on Psychotropic Drug Use in Noninstitutionalized Elderly Patients Diagnosed With Dementia: A Retrospective Study; J Pharm Pract. 2016;29(5):495-502; IMI Work Package 5: Benefit–Risk Integration and Visual Representation).
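These diagnostics are readily available in statsmodels, as the sketch below illustrates on a simulated series (all numbers invented); autocorrelation and partial autocorrelation plots can likewise be produced with statsmodels.graphics.tsaplots.plot_acf and plot_pacf.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(0)
n, intervention = 48, 24
df = pd.DataFrame({"time": np.arange(n)})
df["post"] = (df["time"] >= intervention).astype(int)
df["time_after"] = np.where(df["post"] == 1, df["time"] - intervention, 0)
df["rate"] = 100 + 0.5 * df["time"] - 12 * df["post"] + rng.normal(0, 2, n)

fit = smf.ols("rate ~ time + post + time_after", data=df).fit()

# Durbin-Watson: values near 2 suggest no first-order autocorrelation;
# values towards 0 (positive) or 4 (negative) suggest correlated residuals.
print("Durbin-Watson:", durbin_watson(fit.resid))

# Breusch-Godfrey tests for serial correlation up to a chosen lag order.
lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(fit, nlags=12)
print("Breusch-Godfrey p-value:", lm_pvalue)
```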

 

Long time periods may also be affected by historical changes in trend that can violate model assumptions. Therefore, data should always be visually inspected and reported.

 

Data point outliers that are explainable, such as a sudden peak in drug dispensing in anticipation of a drug restriction policy, can be controlled for using an indicator term. Outliers that result from random variation can be treated as regular data points.
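A sketch of such an indicator term is shown below, with an invented stockpiling spike added to the simulated series in the month before the intervention; the dummy variable absorbs the spike so that the trend and level estimates are not distorted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, intervention = 48, 24
df = pd.DataFrame({"time": np.arange(n)})
df["post"] = (df["time"] >= intervention).astype(int)
df["time_after"] = np.where(df["post"] == 1, df["time"] - intervention, 0)
df["rate"] = 100 + 0.5 * df["time"] - 12 * df["post"] + rng.normal(0, 2, n)
df.loc[intervention - 1, "rate"] += 30  # stockpiling just before the restriction

# Indicator (dummy) term for the explainable outlier month.
df["spike"] = (df["time"] == intervention - 1).astype(int)
fit = smf.ols("rate ~ time + post + time_after + spike", data=df).fit()
print(fit.params["spike"])
```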

 

Another caveat when conducting ITS analysis relates to possible ceiling or floor effects in the outcome measure. For example, when studying the impact of an intervention intended to increase the proportion of patients treated with a drug, the outcome has a natural ceiling of 100% and thus, depending on the initial level of measurement, minimal change in the outcome may be observable.

 

Time-varying confounding, such as from concomitant interventions, may be addressed by using a control outcome in the same population or a control population with the same outcome. An advantage of ITS analysis is the ease of stratifying results by different groups.
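A controlled ITS can be specified by interacting the segmented-regression terms with a group indicator, as in the sketch below (simulated target and control series; all values invented); the interaction coefficients then estimate the intervention effect over and above any change seen in the control series.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, intervention = 48, 24
frames = []
for group, level_drop in [("target", -12.0), ("control", 0.0)]:
    time = np.arange(n)
    post = (time >= intervention).astype(int)
    time_after = np.where(post == 1, time - intervention, 0)
    rate = 100 + 0.5 * time + level_drop * post + rng.normal(0, 2, n)
    frames.append(pd.DataFrame({"rate": rate, "time": time, "post": post,
                                "time_after": time_after, "group": group}))
df = pd.concat(frames, ignore_index=True)

# Interaction terms = intervention effect in the target series relative to
# the control series, which shares time-varying confounders.
fit = smf.ols("rate ~ group * (time + post + time_after)", data=df).fit()
print(fit.params.filter(like="group[T.target]"))
```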

 

Joinpoint analysis

 

Accurately establishing the date of the intervention may be challenging (e.g. during a phased roll-out of a regulatory intervention or when attempting to assess different parts of a regulatory intervention). In such instances, more complex modelling techniques and other time series approaches could be considered.

 

Statistical analysis using joinpoint regression identifies the time point(s) where there is a marked change in trend (the ‘joinpoints’) in the time series data and estimates a regression function within each segment between joinpoints. Joinpoints can be identified using permutation tests based on Monte Carlo methods or Bayesian Information Criterion approaches (Permutation tests for joinpoint regression with applications to cancer rates; Stat Med. 2000;19(3):335-51). As the final number of joinpoints is established on the basis of a statistical criterion, their position is not fixed in advance; joinpoint regression therefore does not require the date of the regulatory intervention to be pre-specified. It can be used to estimate the average percent change in an outcome, which is a summary measure of the trend over a pre-specified fixed interval, and to undertake single or pairwise comparisons.
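Dedicated software (e.g. the NCI Joinpoint program) implements the permutation-test approach; purely as an illustration of the underlying idea, the sketch below selects a single joinpoint on simulated data by comparing the Bayesian Information Criterion across candidate positions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 60
time = np.arange(n)
true_jp = 35  # unknown to the analyst; used only to simulate the data
y = 50 + 0.8 * time - 1.5 * np.clip(time - true_jp, 0, None) + rng.normal(0, 2, n)

def fit_with_joinpoint(jp: int):
    """Piecewise-linear fit with a slope change at candidate joinpoint jp."""
    X = sm.add_constant(np.column_stack([time, np.clip(time - jp, 0, None)]))
    return sm.OLS(y, X).fit()

# Search candidate joinpoints, keeping a few points clear of each end, and
# retain the position with the lowest BIC.
candidates = range(5, n - 5)
best_jp = min(candidates, key=lambda jp: fit_with_joinpoint(jp).bic)
print("Estimated joinpoint at t =", best_jp)
```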

 

14.4.5.3 Other statistical techniques

 

Depending on the exact question being asked, different types of regression models can be applied to the time series data once it has been properly organised, such as Poisson regression (Interrupted time series regression for the evaluation of public health interventions: a tutorial; Int J Epidemiol. 2017;46(1):348-55. Erratum in: Int J Epidemiol. 2020;49(4):1414). Standard least squares methods are based on the assumption that error terms are normally distributed. When the time series measurements lie at extreme values (e.g. all near 0% or near 100%, or with low cell counts near 0), alternative approaches may be required (e.g. aggregate binomial regression models) and advice from an experienced statistician is recommended.
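For count outcomes, a Poisson model with a person-time offset is a common choice; the sketch below applies it to the simulated segmented-regression setup used earlier (all figures invented), with exponentiated coefficients interpretable as rate ratios.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n, intervention = 48, 24
df = pd.DataFrame({"time": np.arange(n)})
df["post"] = (df["time"] >= intervention).astype(int)
df["time_after"] = np.where(df["post"] == 1, df["time"] - intervention, 0)
df["person_months"] = rng.integers(9_000, 11_000, n)  # denominator per month
mu = df["person_months"] * np.exp(-8 + 0.01 * df["time"] - 0.3 * df["post"])
df["events"] = rng.poisson(mu)

# Poisson regression on event counts with a log person-time offset;
# exp(coefficient on post) is the immediate rate ratio at the intervention.
fit = smf.glm("events ~ time + post + time_after", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["person_months"])).fit()
print(np.exp(fit.params))
```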

 

14.4.5.4 Examples of impact research using time series analysis

 

Before/after time series analyses have been used in published impact research to evaluate the effects of a range of regulatory interventions, and joinpoint regression analysis has likewise been applied in this context.

14.4.5.5 Regression modelling

 

Multivariable regression allows researchers to control for potential confounding factors or to study factors associated with the impact, or lack of impact, of regulatory interventions.

 

An analysis using multivariable regression was performed in Measuring the Effectiveness of Safety Warnings on the Risk of Stroke in Older Antipsychotic Users: A Nationwide Cohort Study in Two Large Electronic Medical Records Databases in the United Kingdom and Italy (Drug Saf. 2019;42(12):1471-85). The Medicines and Healthcare products Regulatory Agency (MHRA) and the Italian Medicines Agency (AIFA) both launched a safety warning on the risk of stroke and all-cause mortality with antipsychotics in older people with dementia. In the UK, the MHRA launched a warning in March 2004 for the use of risperidone and olanzapine, which was expanded to all antipsychotics in March 2009. In Italy, AIFA restricted prescribing of antipsychotics in the elderly to specific prescribing centres in July 2005, which was followed by communication about these restrictions in May 2009. A retrospective new-user cohort study was undertaken to estimate incidence rates of stroke in elderly incident antipsychotic users. The authors showed a significant reduction in stroke after both safety warnings in the UK, while the warning had no impact on incidence rates of stroke in Italy.

Metabolic screening in children receiving antipsychotic drug treatment (Arch Pediatr Adolesc Med. 2010;164(4):344-51) measured the impact of a class warning issued by the Food and Drug Administration (FDA) in 2003 for all second-generation antipsychotics (SGAs) regarding the risk of hyperglycaemia and diabetes mellitus. This warning stated that glucose levels should be monitored in at-risk patients. A retrospective new-user cohort study was undertaken to estimate population-based rates of glucose and lipid testing in children after the availability of the FDA warning and to identify predictors of the likelihood of receiving glucose or lipid testing among SGA-treated children after adjusting for covariates. Children without diabetes taking albuterol but no SGAs were used as controls. The authors showed that most of the included children starting treatment with SGAs did not receive the recommended glucose and lipid screening.

 

More sophisticated methodologies, such as propensity-score matching (chapter 6.2.3.2), instrumental variable analysis (chapter 6.2.3.3), and time-varying exposures and covariates may be implemented in regression analyses if relevant.

 

Whichever design and method of analysis is used, consideration should be given to reporting both relative and absolute effects.

 

14.4.5.6 Other types of analytical methods

 

Metrics such as the “Population Impact Number of Eliminating a Risk factor over time t” (PIN-ER-t) and the “Number of Events Prevented in a Population” (NEPP) have proven valuable in assessing the public health impact of removing a risk factor, and may be useful in assessing the impact of regulatory interventions. Illustrative examples of population impact analyses include Potential population impact of changes in heroin treatment and smoking prevalence rates: using Population Impact Measures (Eur J Public Health 2009;19(1):28-31) and Assessing the population impact of low rates of vitamin D supplementation on type 1 diabetes using a new statistical method (JRSM Open 2016;7(11):2054270416653522). Further, statistical analysis using impact metrics is possible where proxy measures are used to assess the impact that one event or resource has on another, as shown in Communicating risks at the population level: application of population impact numbers (BMJ. 2003;327(7424):1162-5); the benefit-risk case study report for rimonabant in IMI Work Package 5: Benefit–Risk Integration and Visual Representation; and Population Impact Analysis: a framework for assessing the population impact of a risk or intervention (J Public Health (Oxf) 2012;34(1):83-9).
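Purely as an arithmetic illustration of such metrics, the sketch below computes PIN-ER-t in its usual formulation (population size × baseline incidence over time t × population attributable fraction); all input figures are invented, and the cited publications should be consulted for the exact definitions.

```python
def pin_er_t(n_population: int, incidence_t: float,
             p_exposed: float, relative_risk: float) -> float:
    """Population Impact Number of Eliminating a Risk factor over time t.

    Sketch of the usual formulation: the expected number of events prevented
    in a population of size n over time t if the risk factor were eliminated,
    i.e. n * baseline incidence over t * population attributable fraction (PAF).
    """
    paf = (p_exposed * (relative_risk - 1)
           / (1 + p_exposed * (relative_risk - 1)))
    return n_population * incidence_t * paf

# Invented figures: population of 500,000; 2% five-year incidence of the event;
# 10% exposed to the risk factor; relative risk 2.0.
print(round(pin_er_t(500_000, 0.02, 0.10, 2.0)))  # ~909 preventable events
```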

 

Predictive modelling techniques may provide insight into the future impact of regulatory actions. Modelling the risk of adverse reactions leading to product withdrawal, alongside drug utilisation data, can be used to estimate the number of patients at risk of experiencing the adverse reactions per year and the number of patients per year who are protected from them as a result of regulatory action (Population Impact Analysis: a framework for assessing the population impact of a risk or intervention; J Public Health (Oxf) 2012;34(1):83-9; Assessing the population impact of low rates of vitamin D supplementation on type 1 diabetes using a new statistical method; JRSM Open 2016;7(11):2054270416653522).

 

Chronographs, typically used for rapid signal detection in observational longitudinal databases, have been used to visualise the impact of regulatory actions. Although this novel method could potentially be applied to assess impact rapidly, it lacks a means of controlling for confounding, and further validation may be required to understand in which situations it performs well (A Novel Approach to Visualize Risk Minimization Effectiveness: Peeping at the 2012 UK Proton Pump Inhibitor Label Change Using a Rapid Cycle Analysis Tool; Drug Saf. 2019;42(11):1365-76).

 

14.4.6. Measuring unintended effects of regulatory interventions

 

Pharmacovigilance activities can have unintended consequences, which could in some cases counteract the effectiveness of risk minimisation measures. To determine the net attributable impact of pharmacovigilance activities, outcomes associated with potential unintended consequences may need to be measured alongside the intended outcomes and incorporated into the design of impact research (see Table XVI.1 of the Guideline on good pharmacovigilance practices (GVP) - Module XVI – Risk minimisation measures). Examples of such studies include the Effect of withdrawal of fusafungine from the market on prescribing of antibiotics and other alternative treatments in Germany: a pharmacovigilance impact study (Eur J Clin Pharmacol. 2019;75(7):979-84), in which the withdrawal was associated with an increase in prescribing of other nasal or throat preparations but no increase in alternative antibiotic prescribing. Another example concerns the unintended increased use of conventional antipsychotics in two European countries after the introduction of EU risk minimisation measures for the risk of stroke and all-cause mortality with atypical antipsychotic drug use (The Effect of Safety Warnings on Antipsychotic Drug Prescribing in Elderly Persons with Dementia in the United Kingdom and Italy: A Population-Based Study; CNS Drugs 2016;30(11):1097-109).

Further, prescribers may extrapolate warnings issued for one group of patients to other groups that may not share the same risk factors (spill-over effects). In 2003, the FDA warned of an association between SSRI prescription and suicidality in paediatric patients (<18 years of age). Subsequently, the number of prescriptions of SSRIs in newly diagnosed adult patients fell, without compensation by alternative medicines or treatments (Spillover effects on treatment of adult depression in primary care after FDA advisory on risk of pediatric suicidality with SSRIs; Am J Psychiatry 2007;164(8):1198-205).

 

Socio-economic factors may also play an important role in the local implementation of regulatory interventions. It has been suggested that practices in affluent communities implement regulatory interventions faster than over-stretched or under-resourced practices in more deprived communities, and that permanent changes to daily practice in the latter may take longer (The International Marcé Society for Perinatal Mental Health Biennial Scientific Conference; Arch Womens Ment Health 2015;18:269–408; Prescribing of antipsychotics in UK primary care: a cohort study; BMJ Open 2014;4(12):e006135).

 

Both health care service providers and users may circumvent or ‘work round’ restrictions. Where medicines are restricted or restrictions are perceived as inconvenient, patients may turn to buying medicines over the internet, self-medicating with over-the-counter medicines, or using herbal or other complementary medicines. Healthcare professionals may subvert requirements for additional documentation by realigning diagnostic categories (Changes in rates of recorded depression in English primary care 2003-2013: Time trend analyses of effects of the economic recession, and the GP contract quality outcomes framework (QOF); J Affect Disord. 2015;180:68-78) or by switching to medicines for which patient monitoring is not mandated (Incorporating Comprehensive Management of Direct Oral Anticoagulants into Anticoagulation Clinics; Pharmacotherapy 2017;37(10):1284-97). A study of the effects of the progressive withdrawal of dextropropoxyphene in the EU from 2007 showed an increased use of analgesics of the same level, but also an increased use of paracetamol as monotherapy. Aggregated dispensing data suggested that the choice of analgesics depended on physician speciality, healthcare setting, indication, and patients’ comorbidities and age, underlining the complexity of, and international differences in, pain management (Use of analgesics in France, following dextropropoxyphene withdrawal; BMC Health Serv Res. 2018;18(1):231).

 

 
