10.1.2. General aspects
10.1.3. Prominent issues in CER
10.2.1. Vaccine safety
10.2.2. Vaccine effectiveness
10.3.3. Study designs
10.3.4. Data collection
10.3.5. Data analysis
10.3.7. Clinical practice guidelines
Comparative effectiveness research (CER) is designed to inform health-care decisions at both the policy and the individual level by comparing the benefits and harms of therapeutic strategies available in routine practice for the prevention, diagnosis or treatment of a given health condition. The interventions under comparison may be similar treatments, such as competing drugs, or different approaches, such as surgical procedures and drug therapy. The comparison may focus only on the relative medical benefits and risks of the different options, or it may weigh both their costs and their benefits. The methods of comparative effectiveness research (Annu Rev Public Health 2012;33:425-45) defines the key elements of CER as (a) head-to-head comparison of active treatments, (b) study populations typical of day-to-day clinical practice, and (c) a focus on evidence to inform health care tailored to the characteristics of individual patients. In What is Comparative Effectiveness Research, the AHRQ highlights that CER requires the development, expansion and use of a variety of data sources and methods to conduct timely and relevant research and to disseminate the results in a quickly usable form. The evidence may come from a review and synthesis of available evidence from existing clinical trials or observational studies, or from the conduct of studies that generate new evidence. In Developing a Protocol for Observational Comparative Effectiveness Research: A User’s Guide, AHRQ also highlights that CER is still a relatively new field of enquiry, with origins across multiple disciplines, and is likely to evolve and be refined over time.
Among resources for keeping up with the evolution in this field, the US National Library of Medicine provides a web site for queries on CER.
The term ‘relative effectiveness assessment’ (REA) is also used when comparing multiple technologies or a new technology against standard of care, while ‘rapid’ REA refers to performing an assessment within a limited timeframe when a new marketing authorisation or a new indication is granted for an approved medicine (What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Int J Evid Based Healthc. 2012;10(4):397-410).
Several initiatives have promoted the conduct of CER and REA and proposed general methodological guidance to help in the design and analysis of such studies.
The Methodological Guidelines for Rapid Relative Effectiveness Assessment of Pharmaceuticals developed by EUnetHTA cover a broad spectrum of issues on REA. They address methodological challenges that are encountered by health technology assessors while performing rapid REA and provide and discuss practical recommendations on definitions to be used and how to extract, assess and present relevant information in assessment reports. Specific topics covered include the choice of comparators, strengths and limitations of various data sources and methods, internal and external validity of studies, the selection and assessment of endpoints (including composite and surrogate endpoints and Health Related Quality of Life [HRQoL]) and the evaluation of relative safety.
AHRQ’s Developing a Protocol for Observational Comparative Effectiveness Research: A User’s Guide identifies minimal standards and best practices for observational CER. It provides principles on a wide range of topics for designing research and developing protocols, with relevant questions to be addressed and checklists of key elements to be considered. The GRACE Principles provide guidance on evaluating the quality of observational CER studies, to help decision-makers recognise high-quality studies and researchers design and conduct high-quality studies. A checklist to evaluate the quality of observational CER studies is also provided. The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) addressed several key issues of CER in three publications: Part I covers the selection of study design and data sources and the reporting and interpretation of results in the light of policy questions; Part II relates to the validity and generalisability of study results, with an overview of potential threats to validity; Part III covers approaches to reducing such threats and, in particular, to controlling confounding. The Patient-Centered Outcomes Research Institute (PCORI) Methodology Standards document provides standards for patient-centred outcome research that aim to improve the way research questions are selected, formulated and addressed, and findings reported. The PCORI group has recently described how stakeholders may be involved in PCORI research in Stakeholder-Driven Comparative Effectiveness Research (JAMA 2015;314:2235-6). In a Journal of Clinical Epidemiology series of articles, the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group offers a structured process for rating quality of evidence and grading strength of recommendations in systematic reviews, health technology assessments and clinical practice guidelines.
The GRADE group recommends individuals new to GRADE to first read the 6-part 2008 BMJ series.
A guideline on methods for performing systematic reviews of existing comparative effectiveness research has been published by the AHRQ (Methods Guide for Effectiveness and Comparative Effectiveness Reviews).
The RWE Navigator website has been developed by the IMI GetReal consortium to provide recommendations on the use of real-world evidence for decision-making on the effectiveness and relative effectiveness of medicinal products. It discusses important topics such as sources of real-world data, study designs, approaches to summarising and synthesising the evidence, modelling of effectiveness, methods to adjust for bias, and governance aspects. It also presents a glossary of terms and case studies relevant for RWD research, with a focus on effectiveness research.
While RCTs are considered to provide the most robust evidence of the efficacy of therapeutic options, they are subject to well-recognised qualitative and quantitative limitations and may not reflect how the drug of interest will perform in real life. Moreover, relatively few RCTs are traditionally designed with an alternative therapeutic strategy as comparator, which limits the utility of the resulting data in establishing recommendations for treatment choices. For these reasons, other research methodologies such as pragmatic trials and observational studies may complement traditional explanatory RCTs in CER.
Explanatory and Pragmatic Attitudes in Therapeutic Trials (J Chron Dis 1967; republished in J Clin Epidemiol 2009;62(5):499-505) distinguishes between two approaches in designing clinical trials: the ‘explanatory’ approach, which seeks to understand differences between the effects of treatments administered in experimental conditions, and the ‘pragmatic’ approach, which seeks to answer the practical question of choosing the best treatment administered in normal conditions of use. The two approaches affect the definition of the treatments, the assessment of results, the choice of subjects and the way in which the treatments are compared. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers (CMAJ 2009;180(10):E47-57) quantifies distinguishing characteristics between pragmatic and explanatory trials and has been updated in The PRECIS-2 tool: designing trials that are fit for purpose (BMJ 2015;350:h2147). A checklist of eight items for the reporting of pragmatic trials was also developed as an extension of the CONSORT statement to facilitate the use of results from such trials in decisions about health care (Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337(a2390):1-8).
The article Why we need observational studies to evaluate effectiveness of health care (BMJ 1996;312(7040):1215-18) documents situations in the field of health care intervention assessment where observational studies are needed because randomised trials are either unnecessary, inappropriate, impossible or inadequate. In a review of five interventions, Randomized, controlled trials, observational studies, and the hierarchy of research designs (N Engl J Med 2000;342(25):1887-92) found that the results of well-designed observational studies (with either a cohort or case-control design) did not systematically overestimate the magnitude of treatment effects. In Defense of Pharmacoepidemiology: Embracing the Yin and Yang of Drug Research (N Engl J Med 2007;357(22):2219-21) argues that the strengths and weaknesses of RCTs and observational studies make both designs necessary in the study of drug effects. However, When are observational studies as credible as randomised trials? (Lancet 2004;363(9422):1728-31) explains that observational studies are suitable for the study of adverse (non-predictable) effects of drugs but should not be used for intended effects of drugs, because of the potential for selection bias.
With regard to the selection and assessment of endpoints for CER, the COMET (Core Outcome Measures in Effectiveness Trials) Initiative aims at developing agreed minimum standardized sets of outcomes (‘core outcome sets’, COS) to be assessed and reported in effectiveness trials of a specific condition, as discussed in Choosing Important Health Outcomes for Comparative Effectiveness Research: An Updated Review and User Survey (PLoS One 2016;11(1):e0146444).
A review of uses of health care utilization databases for epidemiologic research on therapeutics (J Clin Epidemiol 2005;58(4):323-37) considers the application of health care utilisation databases to epidemiology and health services research, with particular reference to the study of medications. Information on relevant covariates, and in particular on confounding factors, may not be available or adequately measured in electronic healthcare databases. To overcome this limitation, CER studies have integrated information from health databases with information collected ad hoc from study subjects. Enhancing electronic health record measurement of depression severity and suicide ideation: a Distributed Ambulatory Research in Therapeutics Network (DARTNet) study (J Am Board Fam Med 2012;25(5):582-93) shows the value of adding direct measurements and pharmacy claims data to data from participating electronic healthcare records. Assessing medication exposures and outcomes in the frail elderly: assessing research challenges in nursing home pharmacotherapy (Med Care 2010;48(6 Suppl):S23-31) describes how merging longitudinal electronic clinical and functional data from nursing home sources with Medicare and Medicaid claims data can support unique study designs in CER but poses many challenging design and analytic issues. Pragmatic randomised trials using routine electronic health records: putting them to the test (BMJ 2012;344:e55) discusses opportunities for using electronic healthcare records to conduct pragmatic trials.
A model based on counterfactual theory for CER using large administrative healthcare databases has been suggested, in which causal inference from observational studies based on such databases is viewed as an emulation of a randomised trial. This ‘target trial’ is made explicit, and design and analytic approaches are reviewed, in Using Big Data to Emulate a Target Trial When a Randomized Trial Is Not Available (Am J Epidemiol 2016;183(8):758-64).
Methodological issues and principles of Chapter 5 of the ENCePP Guide are applicable to CER as well and the textbooks cited in that chapter are recommended for consultation.
The article Methods to assess intended effects of drug treatment in observational studies are reviewed (J Clin Epidemiol 2004;57(12):1223-31) provides an overview of methods that seek to adjust for confounding in observational studies when assessing intended drug effects. Developments in post-marketing comparative effectiveness research (Clin Pharmacol Ther 2007;82(2):143-56) also reviews the roles of propensity scores (PS), instrumental variables and sensitivity analyses in reducing measured and unmeasured confounding in CER. The use of propensity scores and disease risk scores in observational health-care programme research is described in Summary Variables in Observational Research: Propensity Scores and Disease Risk Scores. More recently, the high-dimensional propensity score has been suggested as a method to further improve control for confounding, as the many variables it includes may collectively be proxies for unobserved factors.
Results presented in High-dimensional propensity score adjustment in studies of treatment effects using health care claims data (Epidemiology 2009;20(4):512-22) show that, in a selected empirical evaluation, the high-dimensional propensity score improved confounding control compared to conventional PS adjustment when benchmarked against results from randomised controlled trials. See Chapter 5.3.4 of the Guide for an in-depth discussion of propensity scores. Several methods can be considered to handle confounders in non-experimental CER (Confounding adjustment in comparative effectiveness research conducted within distributed research networks, Med Care 2013;51(8 Suppl 3):S4-S10; Disease Risk Score (DRS) as a Confounder Summary Method: Systematic Review and Recommendations, Pharmacoepidemiol Drug Saf 2013;22(2):122-9). Strategies for selecting variables for adjustment in non-experimental CER have also been proposed (Pharmacoepidemiol Drug Saf 2013;22(11):1139-45).
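The conventional propensity score workflow discussed above (model treatment assignment, then stratify or match on the fitted score) can be sketched with simulated data. This is a minimal illustration, not a recommended analysis: the single confounder, all parameter values and the null treatment effect are invented, and a real CER study would use many covariates, balance diagnostics and sensitivity analyses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Simulated cohort: one measured confounder drives both treatment choice and
# outcome; the true treatment effect is null by construction.
x = rng.normal(size=n)                                   # e.g. baseline severity
treated = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))    # sicker -> more treated
p_outcome = 1 / (1 + np.exp(-(-2.0 + 1.0 * x + 0.0 * treated)))  # null effect
outcome = rng.binomial(1, p_outcome)

# Crude (confounded) risk difference
crude_rd = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Propensity score: logistic regression of treatment on x (Newton-Raphson)
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    hessian = (X * (p * (1 - p))[:, None]).T @ X
    beta += np.linalg.solve(hessian, X.T @ (treated - p))
ps = 1 / (1 + np.exp(-X @ beta))

# Stratify on PS quintiles and average the within-stratum risk differences
stratum = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
rds, sizes = [], []
for s in range(5):
    in_s = stratum == s
    t, c = in_s & (treated == 1), in_s & (treated == 0)
    rds.append(outcome[t].mean() - outcome[c].mean())
    sizes.append(in_s.sum())
adj_rd = np.average(rds, weights=sizes)

print(f"crude RD {crude_rd:.3f} vs PS-stratified RD {adj_rd:.3f} (truth: 0)")
```

Because the confounder is fully captured by the propensity score, stratification moves the estimate close to the true null, while the crude comparison remains biased; with unmeasured confounding neither would be guaranteed.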
A reason for discrepancies between results of randomised trials and observational studies may be the use of prevalent drug users in the latter. Evaluating medication effects outside of clinical trials: new-user designs (Am J Epidemiol 2003;158(9):915-20) explains the biases introduced by the inclusion of prevalent drug users and how a new-user (or incident user) design eliminates these biases by restricting analyses to persons under observation from the start of the current course of treatment. The Incident User Design in Comparative Effectiveness Research (Pharmacoepidemiol Drug Saf 2013;22(1):1-6) reviews published CER case studies in which investigators used the incident user design, discusses its strengths (reduced bias) and weakness (reduced precision of comparative effectiveness estimates) and provides recommendations for investigators considering this design. The value of the incident user design, and its exceptions, has also been reviewed.
A thorough and up-to-date reference to be consulted for vaccine safety assessment is the ADVANCE Report on appraisal of vaccine safety methods. Together with a large number of relevant references, it provides a brief description of a very wide range of direct and indirect methods of risk assessment for vaccines (listed in the Table of Contents) and evaluates them based on 9 criteria related to five domains: Effect Measure, Statistical Criteria, Timeliness, Restriction and Robustness, and Operational Criteria. It also emphasises the specificities of safety assessment for vaccines and how they differ from other pharmaceutical drugs, evaluates study designs, discusses perspectives of different stakeholders on risk assessment, describes experiences from other projects and systems, and provides recommendations. This document is highly relevant for all the topics covered in this chapter on vaccine safety.
Specific aspects related to vaccine safety are discussed in several other documents.
The Report of the CIOMS/WHO Working Group on Definition and Application of Terms for Vaccine Pharmacovigilance (2012) provides definitions and explanatory notes for the terms ‘vaccine pharmacovigilance’, ‘vaccination failure’ and ‘adverse event following immunisation (AEFI)’.
The CIOMS Guide to Active Vaccine Safety Surveillance (2017) describes the process of determining whether active vaccine safety surveillance is necessary, more specifically in the context of resource-limited countries, and, if so, of choosing the best type of active safety surveillance and considering key implementation issues.
The CIOMS Guide to Vaccine Safety Communication (2018) provides an overview of strategic communication issues faced by regulators, those responsible for vaccination policies and other stakeholders in introducing current or new vaccines in populations. Building upon existing recommendations, it provides a guide for vaccine risk safety communication approaches.
The Brighton Collaboration provides resources to facilitate and harmonise collection, analysis and presentation of vaccine safety data, including case definitions, an electronic tool to help the classification of reported signs and symptoms, template protocols and guidelines.
Module 4 (Surveillance) of the e-learning training course Vaccine Safety Basics of the World Health Organization (WHO) describes pharmacovigilance principles, causality assessment procedures, surveillance systems and factors influencing the risk-benefit balance of vaccines. In particular, in contrast to other medicines, vaccines are often given to healthy people, so it is important not only to identify possible risks but also to provide evidence of safety. For example, a systematic review on influenza vaccination in pregnancy and the risk of congenital anomalies in newborns did not find an association, adding to the evidence base supporting influenza vaccination in pregnancy (Maternal Influenza Vaccination and Risk for Congenital Malformations: A Systematic Review and Meta-analysis. Obstet Gynecol 2015;126(5):1075-84).
Recommendations on vaccine-specific aspects of the EU pharmacovigilance system, including risk management, signal detection and post-authorisation safety studies (PASS), are presented in the Module P.I: Vaccines for prophylaxis against infectious diseases of the Good pharmacovigilance practices (GVP).
Aside from a qualitative analysis of spontaneous case reports or case series, quantitative methods such as disproportionality analyses and observed vs. expected (O/E) analyses are routinely employed in signal detection for vaccines. Several documents discuss the merits and review the methods of these approaches.
GVP Module P.I: Vaccines for prophylaxis against infectious diseases describes issues to be considered when applying methods for disproportionality analyses for vaccines, including the choice of the comparator group and the use of stratification. Effects of stratification on data mining in the US Vaccine Adverse Event Reporting System (VAERS) (Drug Saf 2008;31(8):667-74) demonstrates that stratification can reveal and reduce confounding and unmask some vaccine-event pairs not found by crude analyses. However, Stratification for Spontaneous Report Databases (Drug Saf 2008;31(11):1049-52) highlights that extensive use of stratification in signal detection algorithms should be avoided as it can mask true signals. Vaccine-Based Subgroup Analysis in VigiBase: Effect on Sensitivity in Paediatric Signal Detection (Drug Saf 2012;35(4):335-46) further examines the effects of subgroup analyses based on the relative distribution of vaccine/non-vaccine reports in paediatric ADR data.
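The effect of stratification on a disproportionality statistic can be made concrete with a toy proportional reporting ratio (PRR) calculation. All report counts below are invented for illustration; they are constructed so that pooling an age stratum where the event is vaccine-associated with a stratum where the event is common for other reasons masks the signal in the crude analysis.

```python
# Toy illustration of confounding by age in a disproportionality analysis:
# crude vs age-stratified proportional reporting ratio (PRR).

def prr(a, b, c, d):
    """PRR = [a/(a+b)] / [c/(c+d)] for a 2x2 table of spontaneous reports:
    a: vaccine & event of interest, b: vaccine & all other events,
    c: other products & event,      d: other products & all other events."""
    return (a / (a + b)) / (c / (c + d))

# Stratum 1: children (invented counts; vaccine reporting dominates)
s1 = dict(a=30, b=970, c=10, d=990)
# Stratum 2: elderly (invented counts; event common for other reasons)
s2 = dict(a=5, b=95, c=400, d=3600)

# Crude analysis pools the two strata
crude = prr(s1["a"] + s2["a"], s1["b"] + s2["b"],
            s1["c"] + s2["c"], s1["d"] + s2["d"])

prr1, prr2 = prr(**s1), prr(**s2)
print(f"crude PRR {crude:.2f}; children {prr1:.2f}; elderly {prr2:.2f}")
```

Here the crude PRR falls below 1 even though the children's stratum shows a three-fold disproportionality, illustrating why GVP Module P.I discusses both the value and the pitfalls of stratification in vaccine signal detection.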
The article Optimization of a quantitative signal detection algorithm for spontaneous reports of adverse events post immunization (Pharmacoepidemiol Drug Saf 2013;22(5):477-87) explores various ways of improving the performance of signal detection algorithms when applied to vaccines.
The article Adverse events associated with pandemic influenza vaccines: comparison of the results of a follow-up study with those coming from spontaneous reporting (Vaccine 2011;29(3):519-22) reported a more complete pattern of reactions when using two complementary methods for first characterisation of the post-marketing safety profile of a new vaccine, which may impact on signal detection.
When prompt decision-making about a safety concern is required and there is insufficient time to review individual cases, GVP Module P.I: Vaccines for prophylaxis against infectious diseases suggests the conduct of O/E analyses for signal validation and preliminary signal evaluation. The module discusses the key requirements of O/E analyses: the observed number of cases detected in passive or active surveillance systems, near real-time exposure data, appropriately stratified background incidence rates (to calculate the expected number of cases) and sensitivity analyses around these measures. O/E analyses for vaccines are further discussed in Pharmacoepidemiological considerations in observed‐to‐expected analyses for vaccines (Pharmacoepidemiol Drug Saf 2016;25(2):215-22) and are also addressed in the review Near real‐time vaccine safety surveillance using electronic health records—a systematic review of the application of statistical methods (Pharmacoepidemiol Drug Saf 2016;25(3):225-37).
Simple ‘snapshot’ O/E analyses may not be appropriate for continuous monitoring, due to inflation of the type 1 error rate when multiple tests are performed. Safety monitoring of Influenza A/H1N1 pandemic vaccines in EudraVigilance (Vaccine 2011;29(26):4378-87) illustrates that such analyses are also affected by uncertainties regarding the numbers of vaccinated individuals and age-specific background incidence rates.
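A single 'snapshot' O/E calculation of the kind described above can be sketched as follows: the expected count is the background rate multiplied by the accrued person-time, and an exact (Garwood) Poisson confidence interval is placed around the observed count. The event counts, person-time and background rate below are invented for illustration, and a single overall rate stands in for the stratified rates a real analysis would use.

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), by direct summation of the pmf."""
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def poisson_exact_ci(observed, alpha=0.05):
    """Exact (Garwood) confidence interval for a Poisson mean, by bisection."""
    def bisect(pred, lo, hi):
        # pred is True below the boundary and False above; return the boundary
        for _ in range(100):
            mid = (lo + hi) / 2
            if pred(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    hi0 = 5.0 * observed + 10.0
    lower = 0.0
    if observed > 0:
        lower = bisect(lambda m: 1.0 - poisson_cdf(observed - 1, m) < alpha / 2,
                       0.0, hi0)
    upper = bisect(lambda m: poisson_cdf(observed, m) > alpha / 2, 0.0, hi0)
    return lower, upper

def oe_analysis(observed, person_years, background_rate):
    """O/E ratio with an exact CI. background_rate should ideally be
    age/sex-stratified; one overall rate is used here for simplicity."""
    expected = background_rate * person_years
    lo, hi = poisson_exact_ci(observed)
    return expected, observed / expected, lo / expected, hi / expected

# Invented example: 12 cases over 200,000 vaccinated person-years against a
# background rate of 3 per 100,000 person-years.
expected, ratio, lo, hi = oe_analysis(12, 200_000, 3 / 100_000)
print(f"expected {expected:.1f}, O/E {ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Repeating such a test every week without adjustment is exactly the multiplicity problem noted above; the sequential methods discussed next address it.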
Human papilloma virus immunization in adolescents and young adults: a cohort study to illustrate what events might be mistaken for adverse reactions (Pediatr Infect Dis J 2007;26(11):979-84) and Health problems most commonly diagnosed among young female patients during visits to general practitioners and gynecologists in France before the initiation of the human papillomavirus vaccination program (Pharmacoepidemiol Drug Saf 2012; 21(3):261-80) illustrate the importance of collecting background rates by estimating risks of coincident associations of emergency consultations, hospitalisations and outpatients consultations with vaccination. Rates of selected disease events for several countries also vary by age, sex, method of ascertainment and geography, as shown in Importance of background rates of disease in assessment of vaccine safety during mass immunisation with pandemic H1N1 influenza vaccines (Lancet 2009; 374(9707):2115-22). Moreover, Guillain-Barré syndrome and influenza vaccines: A meta-analysis (Vaccine 2015; 33(31):3773-8) suggests that a trend observed between different geographical areas would be consistent with a different susceptibility of developing a particular adverse reaction among different populations.
Sequential methods, as described in Early detection of adverse drug events within population-based health networks: application of sequential methods (Pharmacoepidemiol Drug Saf 2007; 16(12):1275-1284), allow O/E analyses to be performed on a routine (e.g. weekly) basis using cumulative data with adjustment for multiplicity. Such methods are routinely used for near-real time surveillance in the Vaccine Safety Datalink (VSD) (Near real-time surveillance for influenza vaccine safety: proof-of-concept in the Vaccine Safety Datalink Project. Am J Epidemiol 2010;171(2):177-88). Potential issues are described in Challenges in the design and analysis of sequentially monitored postmarket safety surveillance evaluations using electronic observational health care data (Pharmacoepidemiol Drug Saf 2012;21(S1):62-71). A review of signals detected over 3 years with these methods in Vaccine Safety Datalink concluded that care with data quality, outcome definitions, comparison groups and length of surveillance is required to enable detection of true safety problems while controlling error rates (Active surveillance for adverse events: the experience of the Vaccine Safety Datalink Project (Pediatrics 2011;127(S1):S54-S64)). Sequential methods are, therefore, more robust but also more complex to perform, understand and communicate to a non-statistical audience.
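Sequential monitoring of this kind is often based on a maximised sequential probability ratio test (maxSPRT); the sketch below computes the Poisson log-likelihood-ratio statistic on cumulative counts. The weekly counts and the critical value are invented for illustration only: in practice the critical value is derived from maxSPRT tables or algorithms so that the overall type 1 error is controlled across the whole surveillance period.

```python
import math

def max_sprt_llr(observed, expected):
    """Poisson maxSPRT log-likelihood ratio (one-sided test for RR > 1).
    Returns 0 when no excess over the expected count is observed."""
    if observed <= expected or expected <= 0:
        return 0.0
    return expected - observed + observed * math.log(observed / expected)

# Weekly monitoring on cumulative (observed, expected) counts - invented data.
# NOTE: critical_value below is illustrative; a real application computes it
# so that the overall alpha is preserved over the planned surveillance length.
critical_value = 3.0
cumulative = [(2, 1.1), (5, 2.3), (9, 3.4), (15, 4.6)]

signal_week = next((week + 1 for week, (obs, exp) in enumerate(cumulative)
                    if max_sprt_llr(obs, exp) >= critical_value), None)
print("signal flagged at week:", signal_week)
```

Because the statistic is recomputed on cumulative data each week against a single pre-set threshold, the procedure can be run continuously without the per-test alpha inflation of repeated snapshot O/E analyses.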
A new self-controlled case series method for analyzing spontaneous reports of adverse events after vaccination (Am J Epidemiol 2013;178(9):1496-504) extends the self-controlled case series approach to explore and quantify vaccine safety signals from spontaneous reports. It uses parametric and nonparametric versions with different assumptions to account for the specific features of the data (e.g., large amount of underreporting and variation of reporting with time since vaccination). The method should be seen as a signal strengthening approach for quickly exploring a signal based on spontaneous reports prior to a pharmacoepidemiologic study, if any. The method was used to document the risk of intussusception after rotavirus vaccines (see Intussusception after Rotavirus Vaccination — Spontaneous Reports; N Engl J Med 2011; 365:2139).
A complete review of study designs and methods from hypothesis testing studies in the field of vaccine safety is included in the ADVANCE Report on appraisal of vaccine safety methods.
Traditional study designs such as cohort and case-control studies may be difficult to implement for vaccines when studies involve populations with high vaccine coverage, an appropriate unvaccinated group is lacking, or adequate information on covariates at the individual level is not available. Frequent sources of confounding to be considered are socioeconomic status, underlying health status and other factors influencing the probability of being vaccinated. Control without separate controls: evaluation of vaccine safety using case-only methods (Vaccine 2004;22(15-16):2064-70) describes and illustrates epidemiological methods that are useful in such situations. These are mostly the case-only designs described in Chapter 5.3.2 of the Guide:
The case-crossover design was primarily developed to investigate the association between a vaccine and an adverse event. In this design, control information for each case is drawn from the case’s own past exposure experience, and a person can ‘crossover’ between two or more exposure levels. It is a retrospective design that requires the strong assumption that the underlying probability of vaccination is the same in all defined time intervals, which is unlikely to hold for paediatric vaccines administered according to strict schedules or for seasonally administered vaccines.
The self-controlled case series (SCCS) design can be both prospective and retrospective and aims to estimate a relative incidence, which compares the incidence of adverse events within periods of hypothesised excess risk due to exposure with incidence during all other times (baseline risk).
The case-coverage design uses exposure information on cases and population data on vaccination coverage to serve as control. It requires reliable and detailed vaccine coverage data corresponding to the population from which cases are drawn. This will allow control of confounding by stratified analysis. During vaccine introduction, it is also particularly important to address selection bias introduced by awareness of possible occurrence of a specific outcome. An example of a study using a case-coverage method is Risk of narcolepsy in children and young people receiving AS03 adjuvanted pandemic A/H1N1 2009 influenza vaccine: retrospective analysis (BMJ 2013; 346:f794).
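In its simplest setting (one event type, identical observation periods for all cases, no age effect), the relative incidence estimated by the SCCS design described above reduces to a within-case comparison of event rates in risk and baseline person-time. The window lengths and event counts below are invented, and a real SCCS analysis would use a conditional Poisson regression with age adjustment rather than this closed-form shortcut.

```python
# Self-controlled case series, simplest possible setting: one event per case,
# identical observation periods, no age effect. Under these assumptions the
# maximum likelihood estimate of the relative incidence reduces to the ratio
# of event rates in risk vs baseline person-time among cases only.
# All numbers are invented for illustration.

RISK_DAYS = 42                 # hypothesised risk window after vaccination
OBSERVATION_DAYS = 365         # observation period per case
BASELINE_DAYS = OBSERVATION_DAYS - RISK_DAYS

events_in_risk_window = 18     # events occurring within the risk window
events_in_baseline = 60        # events occurring in the remaining time

rate_risk = events_in_risk_window / RISK_DAYS
rate_baseline = events_in_baseline / BASELINE_DAYS
relative_incidence = rate_risk / rate_baseline

print(f"relative incidence: {relative_incidence:.2f}")
```

Because both rates come from the same individuals, fixed person-level confounders (genetics, deprivation, chronic disease) cancel out of the comparison, which is the key property of the case-only designs listed above.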
The study Control without separate controls: evaluation of vaccine safety using case-only methods (Vaccine 2004;22(15-16):2064-70) concludes that properly designed and analysed epidemiological studies using only cases, especially the SCCS method, may provide stronger evidence than large cohort studies, as they control completely for fixed individual-level confounders (such as demographics, genetics and social deprivation) and typically have similar, sometimes better, power. Three factors are however critical in making optimal use of such methods: access to good data on cases, computerised vaccination records that can be linked to cases, and availability of appropriate analysis techniques.
Several studies on vaccines have compared traditional and case-only study designs:
Epidemiological designs for vaccine safety assessment: methods and pitfalls (Biologicals 2012;40(5):389-92) used three study designs (cohort, case-control and self-controlled case series) to illustrate the issues that may arise when designing an epidemiological study, such as understanding the vaccine safety question, case definition and finding, limitations of data sources, uncontrolled confounding, and pitfalls that apply to the individual designs.
Comparison of epidemiologic methods for active surveillance of vaccine safety (Vaccine 2008; 26(26):3341-3345) performed a simulation study to compare four designs (matched-cohort, vaccinated-only (risk interval) cohort, case-control and self-controlled case series) in the context of vaccine safety surveillance. The cohort study design allowed for the most rapid signal detection, the least false-positive error and highest statistical power in performing sequential analysis. The authors highlight, however, that the chief limitation of this simulation is the exclusion of confounding effects and the lack of chart review, which is a time and resource intensive requirement.
Another simulation study (Four different study designs to evaluate vaccine safety were equally validated with contrasting limitations. J Clin Epidemiol 2006; 59(8):808-818) compared four study designs (cohort, case-control, risk-interval and SCCS) with the conclusion that all the methods were valid designs, with contrasting strengths and weaknesses. The SCCS method, in particular, proved to be an efficient and valid alternative to the cohort method.
Hepatitis B vaccination and first central nervous system demyelinating events: Reanalysis of a case-control study using the self-controlled case series method (Vaccine 2007;25(31):5938-43) describes how the SCCS found similar results to the case-control study but with greater precision, as it could use cases without matched controls that had been excluded from the case-control analysis. This comes at the cost of the assumption that exposures are independent of earlier events. The authors recommended that, if case-control studies of vaccination and adverse events are undertaken, parallel case-series analyses should also be conducted, where appropriate.
In situations where primary data collection is needed (e.g. a pandemic), the SCCS may not be adequate since follow-up time needs to be accrued. In such instances, the Self-controlled Risk Interval (SCRI) method can be used to shorten the observation time (see The risk of Guillain-Barre Syndrome associated with influenza A (H1N1) 2009 monovalent vaccine and 2009-2010 seasonal influenza vaccines: Results from self-controlled analyses. Pharmacoepidemiol Drug Saf 2012;21(5):546-52), historical background rates can be used for an O/E analysis (see Near real-time surveillance for influenza vaccine safety: proof-of-concept in the Vaccine Safety Datalink Project. Am J Epidemiol 2010;171(2):177-88), or a classical case-control study can be performed, as in Guillain-Barré syndrome and adjuvanted pandemic influenza A (H1N1) 2009 vaccine: multinational case-control study in Europe (BMJ 2011;343:d3908).
Ecological analyses should not be considered hypothesis testing studies. See Chapter 5.5 of this Guide.
A systematic review evaluating the potential for bias and the methodological quality of meta-analyses in vaccinology (Vaccine 2007; 25(52):8794-806) provides a comprehensive overview of the methodological quality and limitations of 121 meta-analyses of vaccine studies. Association between Guillain-Barré syndrome and influenza A (H1N1) 2009 monovalent inactivated vaccines in the USA: a meta-analysis (Lancet 2013;381(9876):1461-8) describes a self-controlled risk-interval design in a meta-analysis of six studies at the patient level with a reclassification of cases according to the Brighton Collaboration classification.
The article Vaccine safety in special populations (Hum Vaccin 2011;7(2):269-71) highlights common methodological issues that may arise in evaluating vaccine safety in special populations, especially infants and children who often differ in important ways from healthy individuals and change rapidly during the first few years of life, and elderly patients.
Observational studies on vaccine adverse effects during pregnancy (especially on pregnancy loss), which often use pregnancy registries or healthcare databases, are faced with three challenges: embryonic and early foetal loss are often not recognised or recorded, data on the gestational age at which these events occur are often missing, and the likelihood of vaccination increases with gestational age whereas the likelihood of foetal death decreases. Assessing the effect of vaccine on spontaneous abortion using time-dependent covariates Cox models (Pharmacoepidemiol Drug Saf 2012;21(8):844-850) demonstrates that rates of spontaneous abortion can be severely underestimated without survival analysis techniques using time-dependent covariates to avoid immortal time bias and shows how to fit such models. Risk of miscarriage with bivalent vaccine against human papillomavirus (HPV) types 16 and 18: pooled analysis of two randomised controlled trials (BMJ 2010; 340:c712) explains methods to calculate rates of miscarriage, address the lack of knowledge of time of conception during which vaccination might confer risk and perform subgroup and sensitivity analyses.
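The key step in such survival analyses is to treat vaccination as a time-dependent covariate, so that person-time before vaccination is counted as unexposed follow-up rather than being attributed to the vaccinated group. A minimal sketch of this data preparation in counting-process format follows; the function and the gestational weeks shown are illustrative assumptions, not taken from the cited studies.

```python
def split_follow_up(start, end, vaccination=None):
    """Split one subject's follow-up into (begin, stop, exposed) intervals,
    switching exposure status at the vaccination date
    (counting-process format for a time-dependent Cox model)."""
    if vaccination is None or vaccination >= end:
        return [(start, end, 0)]   # never exposed during follow-up
    if vaccination <= start:
        return [(start, end, 1)]   # exposed from entry onwards
    return [(start, vaccination, 0), (vaccination, end, 1)]

# Hypothetical pregnancy: enters at gestational week 4, vaccinated at
# week 12, ends at week 30 — weeks 4 to 12 contribute unexposed time.
print(split_follow_up(4, 30, 12))  # → [(4, 12, 0), (12, 30, 1)]
```

Attributing the pre-vaccination interval to the vaccinated group, or excluding it, is precisely what creates the immortal time bias the cited article warns against.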
In Harmonising Immunisation Safety Assessment in Pregnancy (Vaccine 2016;34 (49): 5991-6110; Vaccine 2017;35 (48), 6469-582), the Global Alignment of Immunization Safety Assessment in pregnancy (GAIA) project has provided a selection of case definitions and guidelines for the evaluation of pregnancy outcomes following immunization. The Systematic overview of data sources for Drug Safety in pregnancy research provides an inventory of pregnancy exposure registries and alternative data sources useful to assess the safety of prenatal vaccine exposure.
Few vaccine studies are performed in immunocompromised subjects. Influenza vaccination for immunocompromised patients: systematic review and meta-analysis by etiology (J Infect Dis 2012;206(8):1250-9) illustrates the importance of performing stratified analyses by aetiology of immunocompromise and possible limitations due to residual confounding, differences within and between etiological groups and small sample size in some etiological groups. Further research is needed on this topic.
There is an increasing interest in the influence of genetics on safety and efficacy outcomes of vaccinations. Understanding this influence may optimise the choice of vaccines and the vaccination schedule. Research in this field is illustrated by Effects of vaccines in patients with sickle cell disease: a systematic review protocol (BMJ Open 2018;8:e021140. doi:10.1136/bmjopen-2017-021140).
Vaccine effects and impact of vaccination programmes in post-licensure studies (Vaccine 2013;31(48):5634-42) reviews the effectiveness of vaccines and of vaccination programmes, proposes epidemiological measures of public health impact, describes relevant methods to measure these effects and discusses the assumptions and potential biases involved.
Generic protocols for retrospective case-control studies and retrospective cohort studies to assess the effectiveness of rotavirus vaccination in EU Member States based on computerised databases were published by the European Centre for Disease Prevention and Control (ECDC). They describe the information that should be collected by country and region in vaccine effectiveness studies and the data sources that may be available to identify virus-related outcomes a vaccine is intended to avert, including hospital registers, computerised primary care databases, specific surveillance systems (i.e. laboratory surveillance, hospital surveillance, primary care surveillance) and laboratory registers. Based on a meta-analysis comprising 49 cohort studies and 10 case-control studies, Efficacy and effectiveness of influenza vaccines in elderly people: a systematic review (Lancet 2005;366(9492):1165-74) highlights the heterogeneity of outcomes and study populations included in such studies and the high likelihood of selection bias.
Non-specific effects of vaccines, such as a decrease in mortality, have been claimed in observational studies, but such findings can generally be affected by bias and confounding. Epidemiological studies of the 'non-specific effects' of vaccines: I--data collection in observational studies (Trop Med Int Health 2009;14(9):969-76) and Epidemiological studies of the non-specific effects of vaccines: II--methodological issues in the design and analysis of cohort studies (Trop Med Int Health 2009;14(9):977-85) provide recommendations for vaccine observational studies conducted in countries with high mortality; these recommendations have wider relevance.
The screening method estimates vaccine effectiveness by comparing the vaccination coverage in positive cases of a disease (e.g. influenza) with the vaccination coverage in the population from which the cases are derived (e.g. the same age group). If representative data on cases and vaccination coverage are available, it is an inexpensive and readily applicable method, useful for providing early effectiveness estimates or identifying changes in effectiveness over time. However, Application of the screening method to monitor influenza vaccine effectiveness among the elderly in Germany (BMC Infect Dis 2015;15(1):137) emphasises that accurate and age-specific vaccine coverage rates are crucial for valid VE estimates. Since adjustment for important confounders and assessment of product-specific VE are generally not possible, this method should be considered only a supplementary tool for assessing crude VE.
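The screening-method estimate can be written as VE = 1 − [PCV/(1 − PCV)] × [(1 − PPV)/PPV], where PCV is the proportion of cases vaccinated and PPV is the vaccination coverage in the source population. A minimal sketch with hypothetical figures (the function name is illustrative):

```python
def screening_ve(pcv, ppv):
    """Screening-method vaccine effectiveness:
    VE = 1 - (PCV / (1 - PCV)) * ((1 - PPV) / PPV),
    where PCV is the proportion of cases vaccinated and
    PPV is the vaccination coverage in the source population."""
    return 1 - (pcv / (1 - pcv)) * ((1 - ppv) / ppv)

# Hypothetical example: 30% of cases vaccinated, 60% population coverage.
print(round(screening_ve(0.30, 0.60), 3))  # → 0.714
```

As the formula makes clear, a small error in the coverage estimate PPV feeds directly into the VE estimate, which is why accurate, age-specific coverage data are essential.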
The indirect cohort method is a case-control type design which uses cases caused by non-vaccine serotypes as controls. Use of surveillance data to estimate the effectiveness of the 7-valent conjugate pneumococcal vaccine in children less than 5 years of age over a 9 year period (Vaccine 2012;30(27):4067-72) applied this method to evaluate the effectiveness of a pneumococcal conjugate vaccine against invasive pneumococcal disease (IPD) and compared the results to the effectiveness measured using a standard case-control study conducted during the same time period. The authors considered the method would be most useful shortly after vaccine introduction, and less useful in a setting of very high vaccine coverage and fewer vaccine-type cases. Using the Indirect Cohort Design to Estimate the Effectiveness of the Seven Valent Pneumococcal Conjugate Vaccine in England and Wales (PLoS One 6(12):e28435. doi:10.1371/journal.pone.0028435) describes how the method was used to estimate effectiveness of various numbers of doses as well as for each vaccine serotype.
Effectiveness of live-attenuated Japanese encephalitis vaccine (SA14-14-2): a case-control study (Lancet 1996;347(9015):1583-6) describes a case control study of incident cases in which the control group consisted of all village-matched children of a given age who were at risk of developing disease at the time that the case occurred (density sampling). The effect measured is an incidence density rate ratio.
The article The test-negative design for estimating influenza vaccine effectiveness (Vaccine 2013;31(17):2165-8) explains the rationale, assumptions and analysis of the test-negative study as applied to influenza VE. Study subjects are all persons who seek care for an acute respiratory illness, and influenza VE is estimated from the ratio of the odds of vaccination among subjects testing positive for influenza to the odds of vaccination among subjects testing negative. This design is less susceptible to bias due to misclassification of infection and to confounding by health care-seeking behaviour, at the cost of difficult-to-test assumptions.
Effectiveness of rotavirus vaccines in preventing cases and hospitalizations due to rotavirus gastroenteritis in Navarre, Spain (Vaccine 2012;30(3):539-43) evaluates effectiveness using a test-negative case-control design based on electronic clinical reports. Cases were children with confirmed rotavirus and controls were those who tested negative for rotavirus in all samples. The test-negative design relies on the assumption that the rate of gastroenteritis caused by pathogens other than rotavirus is the same in vaccinated and unvaccinated persons. This approach may rule out differences in parental attitudes to seeking medical care and differences between physicians in decisions about stool sampling or hospitalisation. A limitation is the imperfect sensitivity of antigen detection, which may lead to underestimation of vaccine effectiveness. In addition, if the virus serotype is not available, it is not possible to study the association between vaccine failure and a possible mismatch between vaccine strains and circulating strains.
The article 2012/13 influenza vaccine effectiveness against hospitalised influenza A(H1N1)pdm09, A(H3N2) and B: estimates from a European network of hospitals (EuroSurveill 2015;20(2):pii=21011) illustrates a multicentre test-negative case-control study to estimate influenza VE in 18 hospitals. It is believed that confounding due to health-seeking behaviour is minimised since, in the study sites, all people needing hospitalisation are likely to be hospitalised. The study Trivalent inactivated seasonal influenza vaccine effectiveness for the prevention of laboratory-confirmed influenza in a Scottish population 2000 to 2009 (EuroSurveill 2015;20(8):pii=21043) applied this method using a Scotland-wide linkage of patient-level primary care, hospital and virological swab data over nine influenza seasons and discusses strengths and weaknesses of the design in this context.
This design is described in Chapter 10.2.1.3.
A generic study protocol to assess the impact of rotavirus vaccination in EU Member States has been published by the ECDC. It recommends the information that needs to be collected to compare the incidence/proportion of rotavirus cases in the periods before and after the introduction of the vaccine. These generic protocols need to be adapted to each country/region and specific situation.
The impact of vaccination can be quantified in children in the age group targeted for the vaccine (overall effect) or in children of other age groups (indirect effect). The direct effect of a vaccine, however, needs to be defined by the protection it confers given a specific amount of exposure to infection and not just a comparable exposure. Direct and indirect effects in vaccine efficacy and effectiveness (Am J Epidemiol 1991; 133(4):323-31) describes how parameters intended to measure direct effects must be robust and interpretable in the midst of complex indirect effects of vaccine intervention programmes.
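The distinctions drawn in that article can be summarised with attack rates from a community with a vaccination programme (A) and a comparable community without one (B). The sketch below follows the framework of the cited paper, but the function and the attack rates are illustrative assumptions:

```python
def vaccine_effects(ar_vacc_a, ar_unvacc_a, ar_unvacc_b, ar_all_a, ar_all_b):
    """Effect measures comparing community A (with a vaccination
    programme) to community B (without); inputs are attack rates
    (proportions of persons at risk who become cases)."""
    return {
        "direct": 1 - ar_vacc_a / ar_unvacc_a,     # vaccinated vs unvaccinated within A
        "indirect": 1 - ar_unvacc_a / ar_unvacc_b, # unvaccinated in A vs unvaccinated in B
        "total": 1 - ar_vacc_a / ar_unvacc_b,      # vaccinated in A vs unvaccinated in B
        "overall": 1 - ar_all_a / ar_all_b,        # everyone in A vs everyone in B
    }

# Hypothetical attack rates over one season
effects = vaccine_effects(0.01, 0.04, 0.05, 0.02, 0.05)
print({k: round(v, 2) for k, v in effects.items()})
# → {'direct': 0.75, 'indirect': 0.2, 'total': 0.8, 'overall': 0.6}
```

Note that the "direct" measure here still conditions on comparable, not identical, exposure to infection, which is the caveat the cited article elaborates.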
Impact of rotavirus vaccination in regions with low and moderate vaccine uptake in Germany (Hum Vaccin Immunother 2012; 8(10):1407-15) describes an impact assessment of rotavirus vaccination comparing the incidence rates of hospitalisations before, and in seasons after, vaccine introduction using data from national mandatory disease reporting system.
First year experience of rotavirus immunisation programme in Finland (Vaccine 2012;31(1):176-82) estimates the impact of a rotavirus immunisation programme on the total hospital inpatient and outpatient treated acute gastroenteritis burden and on the severe rotavirus disease burden during the first year after introduction. The study may be considered a vaccine-probe study, in which the non-specific disease burden prevented by immunisation is assumed to be caused by the agent the vaccine targets.
The study of vaccine effectiveness against diseases where immunity wanes over time requires consideration of both the within-host dynamics of the pathogen and immune system as well as the associated population-level transmission dynamics. Implications of vaccination and waning immunity (Proc Biol Sci 2009; 276(1664):2071-80) seeks to combine immunological and epidemiological models for measles infection to examine the interplay between disease incidence, waning immunity and boosting.
Studies of vaccine effectiveness rely on accurate identification of vaccinations and of cases of vaccine-preventable diseases, but in practice diagnostic tests, clinical case definitions and vaccination records often present inaccuracies. Bias due to differential and non-differential disease- and exposure misclassification in studies of vaccine effectiveness (PLoS One 2018;13(6):e0199180) explores through simulations the impact of non-differential and differential disease- and exposure-misclassification when estimating vaccine effectiveness using cohort, case-control, test-negative case-control and case-cohort designs.
Misclassification can lead to significant bias and its impact strongly depends on the vaccination scenarios. A web application is publicly available to assess the potential (joint) impact of possibly differential disease- and exposure misclassification.
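The direction and size of such bias can be explored with a simple expected-value calculation rather than a full simulation. The sketch below, with hypothetical risks, sensitivity and specificity (illustrative only, not taken from the cited study), shows how non-differential misclassification of case status attenuates a cohort VE estimate towards the null:

```python
def observed_ve(true_ve, risk_unvacc, sens, spec):
    """Expected cohort VE estimate when case status is ascertained with
    the given sensitivity and specificity, non-differentially with
    respect to vaccination."""
    risk_vacc = (1 - true_ve) * risk_unvacc
    # expected observed risks mix true positives and false positives
    obs_v = sens * risk_vacc + (1 - spec) * (1 - risk_vacc)
    obs_u = sens * risk_unvacc + (1 - spec) * (1 - risk_unvacc)
    return 1 - obs_v / obs_u

# True VE of 80%; even 99% specificity drags the estimate towards the null
# because false positives accrue in the large non-diseased population.
print(round(observed_ve(0.80, 0.05, 0.9, 0.99), 2))  # → 0.65
```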
Pharmacogenetics is defined as the study of genetic variation as a determinant of drug response. It can complement information on clinical factors and disease sub-phenotypes to optimise the prediction of treatment response and reduce the risk of adverse reactions.
Individual variation in the response to drugs is an important clinical issue and may range from a lack of therapeutic effect to serious adverse drug reactions. This heterogeneity of response has important policy implications if individual patients not responding to conventional agents are denied access to other agents based on clinical trial evidence and systematic reviews that show no overall benefit. While clinical variables such as disease severity, age, concomitant drug use and comorbid illnesses are also potentially important determinants of the response to drugs, heterogeneity in drug disposition (absorption, metabolism, distribution and excretion) and in drug targets (such as receptors and signal transduction modulators) may be an important cause of inter-individual variability in the therapeutic effects of drugs (see Pharmacogenomics: translating functional genomics into rational therapeutics. Science 1999;286(5439):487-91). Identification of variation in genes that modify the response to drugs provides the opportunity to optimise the safety and effectiveness of currently available drugs and to develop new drugs for paediatric and adult populations (see Drug discovery: a historical perspective. Science 2000;287(5460):1960-4).
It is important to note that genetic variants are not the only potentially useful biomarkers of drug effects but represent the first step in the chain of genomics [DNA variation: SNPs, copy number variations, indels], epigenomics [methylation], transcriptomics [RNA transcription] and proteomics [protein function and structure].
Identification of genetic variation associated with important drug or therapy-related outcomes can follow two main approaches.
The first is the candidate gene approach, in which dozens to thousands of genetic variants within one or several genes, including a common form of variation known as single nucleotide polymorphisms (SNPs), are genotyped across both coding and non-coding sequence. Candidates are generally chosen on the grounds of biological plausibility, demonstrated in previous studies, or of knowledge of genes involved in pharmacokinetic and pharmacodynamic pathways or related to the disease or an intermediate phenotype. Methodological and statistical issues in pharmacogenomics (J Pharm Pharmacol 2010;62(2):161-6) discusses the pros and cons of a candidate gene approach and a genome-wide scan approach (see below), and A tutorial on statistical methods for population association studies (Nat Rev Genet 2006;7(10):781-91) gives an outline of key methods that can be used. The advantages of the candidate gene approach are that resources can be directed to a limited number of important genetic polymorphisms and that there is a higher a priori probability of detecting relevant drug-gene interactions. This approach, however, requires prior information about the likelihood of the polymorphism, gene or gene product interacting with a drug or drug pathway. Moving towards individualized medicine with pharmacogenomics (Nature 2004;429:464-8) explains that a lack or incompleteness of information on genes from previous studies may result in failure to identify every important genetic determinant in the genome.
The second approach is hypothesis-generating or hypothesis-agnostic, known as the genome-wide approach, which examines genetic variants across the whole genome. By comparing the frequency of genetic or SNP markers between drug responders and non-responders, or between those with and without drug toxicity, important genetic determinants are identified. In this approach, no prior information or specific gene/variant hypothesis is needed. Because of linkage disequilibrium, whereby certain genetic determinants tend to be co-inherited, the associations identified through a genome-wide approach may not involve truly biologically functional polymorphisms but may simply mark another genetic determinant that is the biologically relevant one. This approach is therefore discovery-oriented in nature. It may detect SNPs in genes that were not previously considered candidate genes, or even SNPs outside genes. Nonetheless, failure to cover all relevant genetic risk factors can still be a problem, though less so than with the candidate gene approach. It is therefore important to conduct replication and validation studies (in vivo and in vitro) to ascertain the generalisability of findings, to characterise the mechanistic basis of the effect of these genes on drug action, and to identify the true biological genetic determinants. This approach is particularly useful for studying complex diseases, where multiple genetic variants contribute to risk, and is applicable to both disease and treatment outcomes.
Various genome-wide approaches are currently available, including genome and exome sequencing and chips that type hundreds of thousands to millions of SNPs (e.g. the exome chip). Power is usually sufficient to detect only common variants with a large effect, and large sample sizes should therefore be considered, e.g. through pooling of biobanks. An example of such pooling is the CHARGE Consortium, with its focus on cardiovascular diseases [The Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium as a model of collaborative science. Epidemiology 2013;24:346-8]. It is important that findings are replicated in other cohorts and consortia, and that other techniques [epigenomics, transcriptomics and proteomics] are actively used to confirm or refute associations.
Several options are available for the design of pharmacogenetic studies. Firstly, RCTs, both pre- and post-authorisation, provide the opportunity to address several pharmacogenetic questions. Pharmacogenetics in randomized controlled trials: considerations for trial design (Pharmacogenomics 2011;12(10):1485-92) describes three trial designs differing in the timing of randomisation and genotyping, and Promises and challenges of pharmacogenetics: an overview of study design, methodological and statistical issues (JRSM Cardiovasc Dis 2012;1(1)) discusses outstanding methodological and statistical issues that may lead to heterogeneity among reported pharmacogenetic studies and how they may be addressed. Pharmacogenetic trials can be designed (or analysed post hoc) with the intention to study whether a subgroup of patients, defined by certain genetic characteristics, responds differently to the treatment under study. Alternatively, a trial can verify whether genotype-guided treatment is beneficial over standard care. Obvious limitations with regard to the assessment of rare adverse drug events are the large sample size required and the related high costs. In order to make a trial as efficient as possible in terms of time, money and/or sample size, it is possible to opt for an adaptive trial design, which allows prospectively planned modifications in design after patients have been enrolled in the study. Such a design uses accumulating data to decide how to modify aspects of the study during its progress, without undermining the validity and integrity of the trial. An additional benefit is that the expected number of patients exposed to an inferior/harmful treatment can be reduced (see Potential of adaptive clinical trial designs in pharmacogenetic research. Pharmacogenomics 2012;13(5):571-8).
Observational studies are the alternative and can be family-based (using twins or siblings) or population-based (using unrelated individuals). The main advantage of family-based studies is the avoidance of bias due to population stratification. A clear practical disadvantage for pharmacogenetic studies is the requirement to study families where patients have been treated with the same drugs (see Methodological quality of pharmacogenetic studies: issues of concern. Stat Med 2008;27(30):6547-69).
Population-based studies may be designed to assess drug-gene interactions as cohort (including exposure-only), case-cohort and case-control studies (including case-only, as described in Nontraditional epidemiologic approaches in the analysis of gene-environment interaction: case-control studies with no controls! Am J Epidemiol 1996;144(3):207-13). Sound pharmacoepidemiological principles as described in the current Guide also apply to observational pharmacogenetic studies. A specific type of confounding due to population stratification needs to be considered in pharmacogenetic studies and, if present, addressed. Its presence may be obvious where the study population includes more than one immediately recognisable ethnic group, but in other studies stratification may be more subtle. Population stratification can be detected by Pritchard and Rosenberg's method, which involves genotyping additional SNPs in other areas of the genome and testing for association between them and the outcome. In genome-wide association studies, the data contained within the many SNPs typed can be used to assess population stratification without any further genotyping. Several methods have been suggested to control for population stratification, such as genomic control, structured association and EIGENSTRAT.
These methods are discussed in Methodological quality of pharmacogenetic studies: issues of concern (Stat Med 2008;27(30):6547-69) and Softwares and methods for estimating genetic ancestry in human populations (Hum Genomics 2013;7:1).
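Of these, genomic control is the simplest to illustrate: the inflation factor lambda is the median of the observed 1-df chi-square association statistics divided by its expected value (approximately 0.455) under no stratification; values substantially above 1 suggest stratification or cryptic relatedness. A minimal sketch (the statistics shown are hypothetical):

```python
import statistics

def genomic_inflation(chi2_stats):
    """Genomic-control inflation factor lambda: the median of the
    observed 1-df chi-square statistics divided by the expected
    median (~0.4549) of a chi-square distribution with 1 df."""
    return statistics.median(chi2_stats) / 0.4549

# Hypothetical genome-wide association statistics; a well-behaved
# scan gives lambda close to 1.
print(round(genomic_inflation([0.1, 0.3, 0.45, 0.6, 1.2]), 2))  # → 0.99
```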
The main advantage of exposure-only and case-only designs is the smaller sample size required, at the cost of not being able to study the main effect of drug exposure (case-only) or of the genetic variant (exposure-only) on the outcome. Furthermore, interaction can be assessed only on a multiplicative scale, whereas additive interactions are highly relevant from a public health perspective. To date, genome-wide analyses of gene-drug interactions have not been very rewarding because of the very large sample sizes they require. An important condition that has to be fulfilled for case-only studies is that the exposure is independent of the genetic variant, e.g. prescribers are not aware of the genotype of a patient and do not take it into account, directly or indirectly (by observing clinical characteristics associated with the genetic variant). In the exposure-only design, the genetic variant should not be associated with the outcome, for example variants of genes coding for cytochrome P450 enzymes. When these conditions are fulfilled and the main interest is in the drug-gene interaction, these designs may be an efficient option. In practice, case-control and case-only studies usually yield the same interaction effect, as empirically assessed in Bias in the case-only design applied to studies of gene-environment and gene-gene interaction: a systematic review and meta-analysis (Int J Epidemiol 2011;40(5):1329-41). The assumption of independence of genetic and exposure factors can be verified among controls before proceeding to the case-only analysis. Further development of the case-only design for assessing gene-environment interaction: evaluation of and adjustment for bias (Int J Epidemiol 2004;33(5):1014-24) conducted sensitivity analyses to describe the circumstances in which controls can be used as a proxy for the source population when evaluating gene-environment independence.
The gene-environment association in controls will be a reasonably accurate reflection of that in the source population if the baseline risk of disease is small (<1%) and the interaction and independent effects are moderate (i.e. risk ratio <2), or if the disease risk is low (e.g. <5%) in all strata of genotype and exposure. Furthermore, non-independence of gene and environment can be adjusted for in multivariable models if it can be measured in controls.
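Under the independence assumption, the multiplicative gene-exposure interaction can be estimated from cases alone as the odds ratio of joint occurrence of genotype and exposure among cases. A minimal sketch with hypothetical case counts (illustrative only):

```python
def case_only_or(cases_g_e, cases_g_only, cases_e_only, cases_neither):
    """Case-only estimate of multiplicative gene-exposure interaction:
    the cross-product ratio of genotype (G) by exposure (E) among cases,
    valid only if G and E are independent in the source population."""
    return (cases_g_e * cases_neither) / (cases_g_only * cases_e_only)

# Hypothetical case counts: 40 with both G and E, 20 with G only,
# 25 with E only, 50 with neither.
print(case_only_or(40, 20, 25, 50))  # → 4.0
```

An interaction odds ratio of 4 here would indicate that the joint effect of genotype and drug exposure is four times the product of their separate effects, on the multiplicative scale.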
The same principles and approaches to data collection as for other pharmacoepidemiological studies can be followed (see Chapter 3 of this Guide on Approaches to Data Collection). An efficient approach to data collection for pharmacogenetic studies is to combine secondary use of electronic health records with primary data collection (e.g. biological samples to extract DNA).
Examples are given by SLCO1B1 genetic variant associated with statin-induced myopathy: a proof-of-concept study using the clinical practice research datalink (Clin Pharmacol Ther 2013;94(6):695-701), Diuretic therapy, the alpha-adducin gene variant, and the risk of myocardial infarction or stroke in persons with treated hypertension (JAMA 2002;287(13):1680-9) and Interaction between the Gly460Trp alpha-adducin gene variant and diuretics on the risk of myocardial infarction (J Hypertens 2009 Jan;27(1):61-8). Another approach to enrich electronic health records with biological samples is record linkage to biobanks as illustrated in Genetic variation in the renin-angiotensin system modifies the beneficial effects of ACE inhibitors on the risk of diabetes mellitus among hypertensives (Hum Hypertens 2008;22(11):774-80). A third approach is to use active surveillance methods to fully characterise drug effects such that a rigorous phenotype can be developed prior to genetic analysis. This approach was followed in Adverse drug reaction active surveillance: developing a national network in Canada's children's hospitals (Pharmacoepidemiol Drug Saf 2009;18(8):713-21) and EUDRAGENE: European collaboration to establish a case-control DNA collection for studying the genetic basis of adverse drug reactions (Pharmacogenomics 2006;7(4):633-8).
The focus of data analysis should be on the measure of effect modification (see Chapter 4.2.4 of this Guide on Effect Modification). Attention should be given to whether the mode of inheritance (e.g. dominant, recessive or additive) is defined a priori based on knowledge from functional studies. In practice, however, the underlying mode of inheritance is usually unknown. A solution might be to undertake several analyses, each under a different assumption, although this approach raises the problem of multiple testing (see Methodological quality of pharmacogenetic studies: issues of concern. Stat Med 2008;27(30):6547-69). Multiple testing and the resulting increased risk of type I error are a general problem in pharmacogenetic studies evaluating multiple SNPs, multiple exposures and multiple interactions. The most common approach to correct for multiple testing is the Bonferroni correction. This correction may be considered too conservative and runs the risk of producing many pharmacogenetic studies with a null result. Less conservative approaches to adjust for multiple testing include permutation testing and false discovery rate (FDR) control. The FDR, described in Statistical significance for genomewide studies (Proc Natl Acad Sci USA 2003;100(16):9440-5), estimates the expected proportion of false positives among associations declared significant, expressed as a q-value.
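As an illustration of FDR control, the following sketch implements the Benjamini-Hochberg step-up procedure (the p-values and function name are hypothetical). It rejects the k smallest p-values, where k is the largest rank whose ordered p-value does not exceed rank/m × alpha:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return the indices of hypotheses rejected by the
    Benjamini-Hochberg step-up procedure at FDR level alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # find the largest rank k with p_(k) <= (k / m) * alpha
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k = rank
    return sorted(order[:k])  # reject the k smallest p-values

# Hypothetical p-values from testing several SNP-drug interactions
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6]))  # → [0, 1]
```

Note that 0.039 is below 0.05 but is not rejected, because at rank 3 of 5 its threshold is 3/5 × 0.05 = 0.03; a Bonferroni correction would have used the stricter uniform threshold 0.05/5 = 0.01 for every test.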
Alternative innovative methods are under development and may be used in the future, such as the systems biology approach, a Bayesian approach, or data mining (see Methodological and statistical issues in pharmacogenomics. J Pharm Pharmacol 2010;62(2):161-6).
Important complementary approaches include the conduct of individual patient data meta-analyses and/or replication studies to avoid the risk of false-positive findings.
An important step in analysis of genome-wide association studies data that needs to be considered is the conduct of rigorous quality control procedures before conducting the final association analyses. Relevant guidelines include Guideline for data analysis of genomewide association studies (Cancer Genomics Proteomics 2007;4(1):27-34) and Statistical Optimization of Pharmacogenomics Association Studies: Key Considerations from Study Design to Analysis (Curr Pharmacogenomics Person Med 2011;9(1):41-66).
The guideline STrengthening the REporting of Genetic Association studies (STREGA)--an extension of the STROBE statement (Eur J Clin Invest 2009;39(4):247-66) should be followed for reporting findings of genetic studies.
An important step towards the implementation of the use of genotype information to guide pharmacotherapy is the development of clinical practice guidelines. Several initiatives have been developed to provide these guidelines such as the Clinical Pharmacogenetics Implementation Consortium. Furthermore, several clinical practice recommendations have been published, for example Recommendations for HLA-B*15:02 and HLA-A*31:01 genetic testing to reduce the risk of carbamazepine-induced hypersensitivity reactions (Epilepsia 2014;55(4):496-506) or Clinical practice guideline: CYP2D6 genotyping for safe and efficacious codeine therapy (J Popul Ther Clin Pharmacol 2013;20(3):e369-96).
An important pharmacogenomics knowledge resource is PharmGKB, which encompasses clinical information including dosing guidelines and drug labels, potentially clinically actionable gene-drug associations and genotype-phenotype relationships. PharmGKB collects, curates and disseminates knowledge about the impact of human genetic variation on drug responses.