
ENCePP Guide on Methodological Standards in Pharmacoepidemiology

 

5.4. Specific aspects of study design

 

5.4.1. Pragmatic trials and large simple trials

 

5.4.1.1 Pragmatic trials

 

RCTs are considered the gold standard for demonstrating the efficacy of medicinal products and for obtaining an initial estimate of the risk of adverse outcomes. However, they are not necessarily indicative of the benefits, risks or comparative effectiveness of an intervention when used in clinical practice. The IMI GetReal Glossary defines a pragmatic clinical trial (PCT) as ‘a study comparing several health interventions among a randomised, diverse population representing clinical practice, and measuring a broad range of health outcomes’. The publication Series: Pragmatic trials and real world evidence: Paper 1. Introduction (J Clin Epidemiol. 2017;88:7-13) describes the main characteristics of this design and the complex interplay between design options, feasibility, acceptability, validity, precision, and generalisability of the results, and the review Pragmatic Trials (N Engl J Med. 2016;375(5):454-63) discusses the contexts in which a pragmatic design is relevant and illustrates its strengths and limitations with examples.

 

PCTs are focused on evaluating benefits and risks of treatments in patient populations and settings that are more representative of routine clinical practice. To ensure generalisability, PCTs should represent the patients to whom the treatment will be applied: for instance, inclusion criteria may be broader (e.g. allowing co-morbidity, co-medication, a wider age range), and the follow-up may be minimised and allow for treatment switching. Real-World Data and Randomised Controlled Trials: The Salford Lung Study (Adv Ther. 2020;37(3):977-997) and Monitoring safety in a phase III real-world effectiveness trial: use of novel methodology in the Salford Lung Study (Pharmacoepidemiol Drug Saf. 2017;26(3):344-352) describe the model of a phase III PCT where patients were enrolled through primary care practices using minimal exclusion criteria and without extensive diagnostic testing, and where potential safety events were captured through patients’ electronic health records and triggered review by the specialist safety team.

 

Pragmatic explanatory continuum summary (PRECIS): a tool to help trial designers (CMAJ. 2009;180(10):E45-E57) describes a tool to support pragmatic trial design and to help define and evaluate the degree of pragmatism. The Pragmatic–Explanatory Continuum Indicator Summary (PRECIS) tool has been further refined and now comprises nine domains, each scored on a 5-point Likert scale ranging from very explanatory to very pragmatic, with an exclusive focus on the issue of applicability (The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147). A checklist and additional guidance are provided in Improving the reporting of pragmatic trials: an extension of the CONSORT statement (BMJ. 2008;337(a2390):1-8), and Good Clinical Practice Guidance and Pragmatic Clinical Trials: Balancing the Best of Both Worlds (Circulation 2016;133(9):872-80) discusses the application of Good Clinical Practice to pragmatic trials, and the use of additional data sources such as registries and electronic health records for “EHR-facilitated” PCTs.
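
As a simple illustration of how PRECIS-2 domain scores might be tabulated in practice, the sketch below profiles a hypothetical trial design. The domain names follow the PRECIS-2 paper cited above, but the scores are purely illustrative; the tool itself presents domain scores on a wheel rather than as a single summary.

```python
# Hypothetical PRECIS-2 profile: each of the nine domains is scored from
# 1 (very explanatory) to 5 (very pragmatic). All scores below are invented
# and serve only to show how a design could be profiled and inspected.
precis2_scores = {
    "eligibility": 5,
    "recruitment": 4,
    "setting": 5,
    "organisation": 4,
    "flexibility (delivery)": 4,
    "flexibility (adherence)": 5,
    "follow-up": 3,
    "primary outcome": 5,
    "primary analysis": 4,
}

for domain, score in precis2_scores.items():
    print(f"{domain:25s} {'#' * score}  ({score}/5)")
```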

 

Based on the evidence that the current costs and complexity of conducting randomised trials lead to more restrictive eligibility criteria and short durations of trials, and therefore reduce the generalisability and reliability of the evidence about the efficacy and safety of interventions, the article The Magic of Randomization versus the Myth of Real-World Evidence (N Engl J Med. 2020;382(7):674-678) proposes measures to remove practical obstacles to the conduct of randomised trials of appropriate size.

 

The BRACE CORONA study (Effect of Discontinuing vs Continuing Angiotensin-Converting Enzyme Inhibitors and Angiotensin II Receptor Blockers on Days Alive and Out of the Hospital in Patients Admitted With COVID-19: A Randomized Clinical Trial, JAMA. 2021;325(3):254-64) is a registry-based pragmatic trial that included patients hospitalised with COVID-19 who were taking ACEIs or ARBs prior to hospital admission, to determine whether discontinuation vs. continuation of these drugs affects the number of days alive and out of the hospital. Patients with a suspected COVID-19 diagnosis were included in the registry, followed up until diagnosis confirmation and randomised to either discontinue or continue ACEI or ARB therapy for 30 days. There was no specific treatment modification beyond discontinuing or continuing use of ACEIs or ARBs; the study team provided oversight on drug replacement based on current treatment guidelines. Treatment adherence was assessed based on medical prescriptions recorded in electronic health records after discharge.

 

5.4.1.2 Large simple trials

 

Large simple trials are pragmatic clinical trials with minimal data collection narrowly focused on clearly defined outcomes important to patients as well as clinicians. Their large sample size provides adequate statistical power to detect even small differences in effects. Additionally, large simple trials include a follow-up time that mimics routine clinical practice.

 

Large simple trials are particularly suited when an adverse event is very rare or has a delayed latency (with a large expected attrition rate), when the population exposed to the risk is heterogeneous (e.g. different indications and age groups), when several risks need to be assessed in the same trial or when many confounding factors need to be balanced between treatment groups. In these circumstances, the cost and complexity of a traditional RCT may outweigh its advantages and large simple trials can help keep the volume and complexity of data collection to a minimum.

 

Outcomes that are simple and objective can also be measured from the routine process of care using epidemiological follow-up methods, for example by using questionnaires or hospital discharge records. Classical examples of published large simple trials are An assessment of the safety of paediatric ibuprofen: a practitioner based randomised clinical trial (JAMA. 1995;279:929-33) and Comparative mortality associated with ziprasidone and olanzapine in real-world use among 18,154 patients with schizophrenia: The Zodiac Observational Study of Cardiac Outcomes (ZODIAC) (Am J Psychiatry 2011;168(2):193-201).

Note that the use of the term ‘simple’ in the expression ‘Large simple trials’ refers to data structure and not to data collection. It is used in relation to situations in which a small number of outcomes are measured. The term may therefore not adequately reflect the complexity of the studies undertaken.

 

5.4.1.3 Randomised database studies

 

Randomised database studies can be considered a special form of a large simple trial where patients included in the trial are enrolled in a healthcare system with electronic records. Eligible patients may be identified and flagged automatically by the software, with the advantage of allowing comparison of included and non-included patients. Database screening or record linkage can be used to detect and measure outcomes of interest otherwise assessed through the normal process of care. Patient recruitment, informed consent and proper documentation of patient information are hurdles that still need to be addressed in accordance with the applicable legislation for RCTs. Randomised database studies attempt to combine the advantages of randomisation and observational database studies. These and other aspects of randomised database studies are discussed in The opportunities and challenges of pragmatic point-of-care randomised trials using routinely collected electronic records: evaluations of two exemplar trials (Health Technol Assess. 2014;18(43):1-146) which illustrates the practical implementation of randomised studies in general practice databases.

 

There are few published examples of randomised database studies, but this design could become more common in the near future with the increasing computerisation of medical records. Pragmatic randomised trials using routine electronic health records: putting them to the test (BMJ 2012;344:e55) describes a project to implement randomised trials in the everyday clinical work of general practitioners, comparing treatments that are already in common use, and using routinely collected electronic healthcare records both to identify participants and to gather results. The above-mentioned Salford Lung Study also belongs to this category.

 

A particular form of randomised database studies is the registry-based randomised trial, which uses an existing registry as a platform for the identification of cases, their randomisation and their follow-up. The editorial The randomized registry trial - the next disruptive technology in clinical research? (N Engl J Med. 2013;369(17):1579-81) introduces the concept. This hybrid design tries to achieve both internal and external validity by performing an RCT in a data source with higher generalisability (such as registries). Examples are the TASTE trial, which followed patients in the long term using data from a Scandinavian registry (Thrombus aspiration during ST-segment elevation myocardial infarction, N Engl J Med. 2013;369:1587-97), and A registry-based randomized trial comparing radial and femoral approaches in women undergoing percutaneous coronary intervention: the SAFE-PCI for Women (Study of Access Site for Enhancement of PCI for Women) trial (JACC Cardiovasc Interv. 2014;7:857-67).

 

The importance of large simple trials has been highlighted by their role in evaluating well-established products that were repurposed for the treatment of COVID-19. The PRINCIPLE Trial platform (for trials in primary care) and the RECOVERY Trial platform (for trials in hospitals) recruited large numbers of study participants and sites within short periods of time. In addition to brief case report forms, important clinical outcomes such as death, intensive care admission and ventilation were ascertained through data linkage to existing data streams. As an example of these platform trials, the study Lopinavir-ritonavir in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial (Lancet 2020;396:1345–52) found that, in patients admitted to hospital with COVID-19, lopinavir–ritonavir was not associated with reductions in 28-day mortality or other clinical outcomes. On the other hand, in Dexamethasone in Hospitalized Patients with Covid-19 (N Engl J Med. 2021;384(8):693-704), the RECOVERY trial also reported that the use of dexamethasone resulted in lower 28-day mortality in patients who were receiving either invasive mechanical ventilation or oxygen alone at randomisation. The streamlined and reusable approaches to data collection in these platform trials were clearly essential to enrolling large numbers of trial participants and evaluating multiple treatments rapidly.

 

5.4.2. The target trial approach

 

The target trial approach and its emulation by an observational study were initially introduced in 1989 (The clinical trial as a paradigm for epidemiologic research. J Clin Epidemiol. 1989;42(6):491-6) and later extended to pharmacoepidemiology as a conceptual framework helping researchers to identify and avoid potential biases (Using Big Data to Emulate a Target Trial When a Randomized Trial Is Not Available. Am J Epidemiol. 2016;183(8):758-64). The underlying idea is to “design” a hypothetical ideal randomised trial (“target trial”) that would answer the research question. The target trial is described with regard to all design elements: the eligibility criteria, the treatment strategies, the assignment procedure, the follow-up, the outcome, the causal contrasts and the analysis plan. In a second step, the researcher specifies how to emulate the design elements of the target trial and what analytic approaches to take, given the trade-offs in an observational setting.
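
As a schematic illustration of this two-step structure, the sketch below lists the protocol elements named above for a hypothetical target trial and, next to each, a possible corresponding emulation choice in an observational database. The example (initiation of drug A versus drug B) and every entry are hypothetical and only show how such a specification could be laid out.

```python
# Hypothetical target trial specification and its observational emulation.
# All entries are illustrative; only the protocol elements follow the text above.
target_trial = {
    "eligibility criteria": "adults with indication X, no prior use of A or B",
    "treatment strategies": "initiate drug A vs. initiate drug B",
    "assignment procedure": "randomisation at baseline",
    "follow-up": "from assignment until outcome, death or 5 years",
    "outcome": "hospitalisation for Y",
    "causal contrasts": "intention-to-treat and per-protocol effects",
    "analysis plan": "survival analysis of time to first event",
}

emulation = {
    "eligibility criteria": "same criteria applied in the database at time zero",
    "treatment strategies": "new users of A vs. new users of B (active comparator)",
    "assignment procedure": "adjustment for baseline confounders, e.g. via propensity scores",
    "follow-up": "starts at first dispensing (time zero) to avoid immortal time",
    "outcome": "outcome algorithm validated in the data source",
    "causal contrasts": "observational analogues of the same contrasts",
    "analysis plan": "same analysis with adjustment for baseline confounding",
}

for element, spec in target_trial.items():
    print(f"{element}\n  target trial: {spec}\n  emulation:    {emulation[element]}\n")
```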

 

The target trial paradigm aims to prevent common biases, such as immortal time bias or prevalent user bias. It also facilitates a systematic methodological evaluation and comparison of observational studies (Specifying a target trial prevents immortal time bias and other self-inflicted injuries in observational analyses. J Clin Epidemiol. 2016;79:70-5). How to estimate the effect of treatment duration on survival outcomes using observational data (BMJ. 2018;360:k182) proposes methods for overcoming bias with this approach when quantifying the effect of treatment duration. An example of application of the target trial approach is described in The value of explicitly emulating a target trial when using real world evidence: an application to colorectal cancer screening (Eur J Epidemiol. 2017;32(6):495-500). Emulating a target trial in case-control designs: an application to statins and colorectal cancer (Int J Epidemiol. 2020;49(5):1637–46) describes how to emulate a target trial using case-control data and demonstrates that correct emulation reduces the discrepancies between observational and randomised trial evidence. Empirical research on this method is ongoing, one example being Emulating Randomized Clinical Trials With Nonrandomized Real-World Evidence Studies: First Results From the RCT DUPLICATE Initiative (Circulation 2021;143(10):1002-13).

The observational study BNT162b2 mRNA Covid-19 Vaccine in a Nationwide Mass Vaccination Setting (N Engl J Med. 2021;384(15):1412-23) emulated a target trial of the causal effect of the BNT162b2 vaccine on Covid-19 outcomes by matching vaccine recipients and controls on a daily basis on a wide range of potential confounding factors. The large population size of four large health care organisations allowed nearly perfect matching, which resulted in a consistent pattern of similarity between the groups in the days just before day 12 after the first dose, the anticipated onset of the vaccine effect.

 

ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions (BMJ. 2016;355:i4919) supports the evaluation of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation and can be applied to target trials and to systematic reviews that include non-randomised studies.

 

Target trials are discussed in Chapters 3.6 (The target trial) and 22 (Target trial emulation) of the Causal Inference Book (Hernán MA, Robins JM (2020). Causal Inference: What If. Boca Raton: Chapman & Hall/CRC).

 

5.4.3. Self-controlled case series and self-controlled risk interval designs

 

The self-controlled case series (SCCS) design was initially developed for vaccines (see also Chapter 14.2). It is a case-only design where the observation period for each exposed case is divided into risk period(s) (e.g. a number of days following each exposure) and a control period (observed time outside this risk period). Incidence rates within the risk period after exposure are compared with incidence rates within the control period. The SCCS design inherently controls for unmeasured time-invariant and between-individual confounding, but factors that vary over time within the same person still need to be controlled for. The three assumptions of the SCCS are that 1) events arise independently within individuals (e.g. a first fracture does not affect the occurrence of a subsequent fracture), 2) events do not influence subsequent follow-up, and 3) the event itself does not affect the chance of being exposed.
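
As a minimal sketch of the comparison of incidence rates described above, the following code fits an SCCS model to synthetic data by Poisson regression with a fixed effect per case and an offset for the time spent in each period, one of the standard ways to obtain the within-person relative incidence. The data, period lengths and variable names are illustrative only and are not taken from any cited study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic example: one row per case and period, with the number of events
# and the number of days observed in each period. Every case has at least
# one event, as required by a case-only design.
sccs = pd.DataFrame({
    "case_id": [1, 1, 2, 2, 3, 3],
    "period":  ["control", "risk"] * 3,   # e.g. risk = 28 days after exposure start
    "events":  [1, 2, 2, 1, 1, 1],
    "days":    [337, 28, 337, 28, 337, 28],
})

# Poisson regression with case fixed effects and an offset for log(period length);
# this reproduces the conditional (within-person) estimate of the relative incidence.
fit = smf.glm(
    "events ~ C(case_id) + period",
    data=sccs,
    family=sm.families.Poisson(),
    offset=np.log(sccs["days"]),
).fit()

print("Relative incidence (risk vs control):", np.exp(fit.params["period[T.risk]"]))
```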

 

An illustrative example is Opioids and the Risk of Fracture: a Self-Controlled Case Series Study in the Clinical Practice Research Datalink (Am J Epidemiol. 2021:kwab042), where the relative incidence of fracture was estimated by comparing periods when cases were exposed and unexposed to opioids. Each risk period was divided into smaller periods to account for changes throughout follow-up in age, season and exposure to fracture risk–increasing drugs, and the assumptions required for an SCCS were tested in sensitivity analyses. Inaccurate specification of the risk window can, however, lead to bias, and a data-based approach for identifying the optimal risk windows is proposed in Identifying optimal risk windows for self-controlled case series studies of vaccine safety (Stat Med. 2011;30(7):742-52). The pseudo-likelihood method developed to address this possible issue is described in Case series analysis for censored, perturbed, or curtailed post-event exposures (Biostatistics 2009;10(1):3-16).

 

The Tutorial in biostatistics: the self-controlled case series method (Stat Med. 2006;25(10):1768-97) explains how to fit SCCS models using standard statistical packages.

 

Use of the self-controlled case-series method in vaccine safety studies: review and recommendations for best practice (Epidemiol Infect. 2011;139(12):1805-17) assesses how the SCCS method has been used across 40 vaccine studies, highlights good practice and gives guidance on how the method should be used and reported. Using several methods of analysis is recommended, as it can reinforce conclusions or shed light on possible sources of bias when results differ between study designs. When should case-only designs be used for safety monitoring of medical products? (Pharmacoepidemiol Drug Saf 2012;21(Suppl. 1):50-61) compares the SCCS and case-crossover methods as to their use, strengths and major difference (directionality). It concludes that case-only analyses of intermittent users complement the cohort analyses of prolonged users because their different biases compensate for one another. It also provides recommendations on when case-only designs should and should not be used for drug safety monitoring. Empirical performance of the self-controlled case series design: lessons for developing a risk identification and analysis system (Drug Saf. 2013;36(Suppl. 1):S83-S93) evaluates the performance of the SCCS design using 399 drug-health outcome pairs in 5 observational databases and 6 simulated datasets. Four outcomes and five design choices were assessed. Within-person study designs had lower precision and greater susceptibility to bias because of trends in exposure than cohort and nested case-control designs (J Clin Epidemiol. 2012;65(4):384-93) compares cohort, case-control, case-crossover and SCCS designs to explore the association between thiazolidinediones and the risks of heart failure and fracture, and between anticonvulsants and the risk of fracture. Bias was removed when follow-up was sampled both before and after the outcome, or when a case-time-control design was used.

 

The self-controlled risk interval (SCRI) design has been mostly used in vaccine safety studies. Its limitation is a vulnerability to time-varying confounders over the observation window or duration of follow-up. It has been infrequently used in studies of chronic drug exposures but is appropriate when there are no suitable between-person designs and the study question pertains to comparisons of time periods during which an elevated risk of the outcome can occur. Generally, observation windows are kept short to minimise the potential for time-varying confounding. In Use of FDA's Sentinel System to Quantify Seizure Risk Immediately Following New Ranolazine Exposure (Drug Saf. 2019;42(7):897-906), new users were restricted to patients with 32 days of continuous exposure to ranolazine (i.e., capturing individuals that typically would have a 30-day dispensing). The observation window began the day after the start of the incident ranolazine dispensing and ended on the 32nd day after the index date. An extended observation window (up to 62 days) was used in a sensitivity analysis. The relative risk was calculated as the ratio of the number of events in the risk interval to the number of events in the control interval, multiplied by the ratio of the length of the control interval to the length of the risk interval, using data from cases only.
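
The relative risk calculation described in the last sentence can be written as a few lines of arithmetic. The sketch below assumes hypothetical event counts and interval lengths chosen only to illustrate the formula; none of the numbers are taken from the cited study.

```python
def scri_relative_risk(events_risk, events_control, days_risk, days_control):
    """Ratio of event counts in the risk vs. control interval, scaled by the
    ratio of interval lengths, using data from cases only."""
    return (events_risk / events_control) * (days_control / days_risk)

# Hypothetical counts: 6 events in a 31-day risk interval and 4 events in a
# 62-day control interval give a relative risk of (6/4) * (62/31) = 3.0.
print(scri_relative_risk(6, 4, 31, 62))
```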

 

According to the Master Protocol: Assessment of Risk of Safety Outcomes Following COVID-19 Vaccination (bestinitiative.org), the standard SCCS design is more adaptable and is thus preferred when risk or control windows may be less well-defined, when there is a need to increase statistical power, or when time-varying confounding is a lesser concern. The SCCS design can also be more easily used to assess multiple occurrences of independent events within an individual. The SCRI design is preferred when it is feasible to have strictly defined risk and control windows for the outcomes of interest, or when time-varying confounding is a concern. The use of active comparators in self-controlled designs (Am J Epidemiol. 2021) showed that confounding by indication can be mitigated by using an active comparator, using an empirical example of a study of the association between penicillin and venous thromboembolism (VTE), with roxithromycin, a macrolide antibiotic, as the comparator, and upper respiratory infection, a transient risk factor for VTE, representing time-dependent confounding by indication.

 

5.4.4. Positive and negative control exposures and outcomes

 

One may test the validity of putative causal associations by using control exposures or outcomes. Well-chosen positive and negative controls support decision-making on whether the data at hand correctly support the detection of existing associations or correctly demonstrate lack of association when none is expected. Positive controls that turn out negative and negative controls that turn out positive may signal the presence of bias, as illustrated in a study demonstrating healthy adherer bias by showing that adherence to statins was associated with decreased risks of biologically implausible outcomes (Statin adherence and risk of accidents: a cautionary tale, Circulation 2009;119(15):2051-7) and in Utilization of Positive and Negative Controls to Examine Comorbid Associations in Observational Database Studies (Med Care 2017;55(3):244-51). The general principle, with additional examples, is described in Control Outcomes and Exposures for Improving Internal Validity of Nonrandomized Studies (Health Serv Res. 2015;50(5):1432-51).

 

Chapter 18. Method Validity of The Book of OHDSI (2021) recommends the use of negative and positive controls as a diagnostic test to evaluate whether the study design produced valid results and proposes practical considerations for their selection. Selecting drug-event combinations as reliable controls nevertheless poses important challenges: for negative controls, it is difficult to establish proof of absence of an association, and it is still more problematic to select positive controls because it is desirable not only to establish an association but also to obtain an accurate estimate of the effect size. This has led to attempts to establish libraries of controls that can be used to characterise the performance of different observational datasets in detecting various types of associations using a number of different study designs. Although the methods used to identify negative and positive controls may be questioned according to Evidence of Misclassification of Drug-Event Associations Classified as Gold Standard 'Negative Controls' by the Observational Medical Outcomes Partnership (OMOP) (Drug Saf. 2016;39(5):421-32), this approach may allow separate characterisation of random and systematic errors in epidemiological studies, providing a context for evaluating uncertainty surrounding effect estimates. It has not been widely used, but examples are found in Interpreting observational studies: Why empirical calibration is needed to correct p-values (Stat Med. 2014;33(2):209-18), Robust empirical calibration of p-values using observational data (Stat Med. 2016;35(22):3883-8), Empirical confidence interval calibration for population-level effect estimation studies in observational healthcare data (Proc Natl Acad Sci USA 2018;115(11):2571-7), and Empirical assessment of case-based methods for identification of drugs associated with acute liver injury in the French National Healthcare System database (SNDS) (Pharmacoepidemiol Drug Saf. 2021;30(3):320-33). However, Limitations of empirical calibration of p-values using observational data (Stat Med. 2016;35(22):3869-82) concludes that, although the method may reduce the number of false-positive results, it may also reduce the ability to detect a true safety or efficacy signal.
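
As a rough sketch of the calibration idea discussed above, the code below fits a simple empirical null distribution to hypothetical effect estimates obtained for negative-control outcomes and uses it to judge an estimate of interest. This is a simplification of the published calibration methods, which also model the sampling error of each control estimate; all values here are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical log relative-risk estimates for negative-control outcomes
# (the true log RR is assumed to be 0 for all of them).
neg_control_log_rr = np.array([0.05, -0.10, 0.20, 0.15, -0.05, 0.30, 0.10, 0.00])

# Fit a simple empirical null distribution from the negative controls.
mu = neg_control_log_rr.mean()
sigma = neg_control_log_rr.std(ddof=1)

def calibrated_p_value(log_rr):
    """Two-sided p-value of an estimate judged against the empirical null."""
    z = (log_rr - mu) / sigma
    return 2 * (1 - stats.norm.cdf(abs(z)))

print(calibrated_p_value(np.log(1.8)))  # estimate of interest, e.g. RR = 1.8
```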

 

5.4.5. Use of an active comparator

 

The main purpose of using an active comparator is to reduce confounding by indication or by severity. Its use is optimal in the context of the new user design, whereby comparison is between patients with the same indication initiating different treatments (The active comparator, new user study design in pharmacoepidemiology: historical foundations and contemporary application, Curr Epidemiol Rep. 2015;2(4):221-8). An example is Risk of skin cancer in new users of thiazides and thiazide-like diuretics: a cohort study using an active comparator group (Br J Dermatol. 2021).

 

Ideally, an active comparator should be chosen to represent the counterfactual risk of a given outcome with a different treatment, i.e. it should have a known and positive safety profile with respect to the events of interest and ideally represent the background risk in the diseased population (for example, the safety of antiepileptics in pregnancy in relation to the risk of congenital malformations could be compared against that of lamotrigine, which is not known to be teratogenic). The paper Using Big Data to Emulate a Target Trial When a Randomized Trial Is Not Available (Am J Epidemiol. 2016;183(8):758-64) proposes the target trial approach for comparing the effects of treatment strategies, helping to avoid common methodological pitfalls (see also Chapter 5.4.2). The C-Word: Scientific Euphemisms Do Not Improve Causal Inference From Observational Data (Am J Public Health 2018;108(5):616-19) highlights the need to be explicit about the causal objective of a study to help with the emulation of a particular target trial and to support the choice of confounding adjustment variables.

 

With newly marketed medicines, an active comparator with ideal comparability of patients’ characteristics may be unavailable because prescribing of newly marketed medicines may be driven to a greater extent by patients’ prognostic characteristics (early users may be either sicker or healthier than all patients with the indication) and by reimbursement considerations than prescribing of established medicines. This is described for comparative effectiveness studies in Assessing the comparative effectiveness of newly marketed medications: methodological challenges and implications for drug development (Clin Pharmacol Ther. 2011;90(6):777-90) and in Newly marketed medications present unique challenges for nonrandomized comparative effectiveness analyses (J Comp Eff Res. 2012;1(2):109-11). Other challenges include treatment effect heterogeneity as the characteristics of users evolve over time, and low precision owing to slow drug uptake.

 

5.4.6. Interrupted time series analyses

 

Interrupted time series (ITS) studies are becoming the standard approach for evaluating the effectiveness of population-level interventions that are implemented at a specific point in time (i.e. with clear before-after periods, such as a policy effect date or regulatory action date). The ITS analysis establishes the expected pre-intervention trend for the outcome of interest. This expected trend, i.e. the counterfactual scenario in the absence of the intervention, serves as the comparator against which the impact of the intervention is evaluated by examining any change occurring after the intervention (Interrupted time series regression for the evaluation of public health interventions: a tutorial, Int J Epidemiol. 2017;46(1):348-55).

ITS is a quasi-experimental design and has been described as the “next best” approach for dealing with interventions in the absence of randomisation. ITS analysis requires several assumptions and its implementation is technically sophisticated, as explained in Regression based quasi-experimental approach when randomisation is not an option: Interrupted time series analysis (BMJ. 2015; 350:h2750). The use of ITS regression in impact research is illustrated in Chapter 14.4, Methods for pharmacovigilance impact research.
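
A minimal sketch of the segmented-regression specification commonly used in ITS analyses is shown below, assuming a simulated monthly series with an intervention at month 24. All values are hypothetical and the model is deliberately simplified; in practice, autocorrelation and seasonality should also be addressed, as discussed in the tutorial cited above.

```python
import numpy as np
import statsmodels.api as sm

# Simulated monthly outcome rates with an intervention at month 24
# (hypothetical data, only to illustrate the model specification).
rng = np.random.default_rng(0)
t = np.arange(48)
post = (t >= 24).astype(float)        # indicator for the post-intervention period
time_since = np.clip(t - 24, 0, None) # months elapsed since the intervention
y = 50 + 0.2 * t - 5 * post - 0.3 * time_since + rng.normal(0, 1, size=t.size)

# Segmented regression: baseline level, pre-intervention trend,
# immediate level change and change in trend after the intervention.
X = sm.add_constant(np.column_stack([t, post, time_since]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # in practice, use e.g. Newey-West errors for autocorrelation
```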

 

5.4.7. Case-population studies

 

Case-population studies are a form of ecological study in which cases are compared to an aggregated comparator consisting of population data. The case-population study design: an analysis of its application in pharmacovigilance (Drug Saf. 2011;34(10):861-8) explains its design and its application in pharmacovigilance for signal generation and drug surveillance. The design is also explained in Chapter 2: Study designs in drug utilization research of the textbook Drug Utilization Research - Methods and Applications (M Elseviers, B Wettermark, AB Almarsdóttir, et al. Editors. Wiley Blackwell, 2016). An example is a multinational case-population study aiming to estimate population rates of a suspected adverse event using national sales data (see Transplantation for Acute Liver Failure in Patients Exposed to NSAIDs or Paracetamol, Drug Saf. 2013;36(2):135–44). Based on the same study, Choice of the denominator in case population studies: event rates for registration for liver transplantation after exposure to NSAIDs in the SALT study in France (Pharmacoepidemiol Drug Saf. 2013;22(2):160-7) compared sales data and healthcare insurance data as denominators to estimate population exposure and found large differences in the event rates. Choosing the wrong denominator in case-population studies might generate erroneous results. The choice of the right denominator depends not only on a valid data source but also on the hazard function of the adverse event.
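
To illustrate how a sales-based denominator is used in such studies, the sketch below converts hypothetical national sales figures into exposed person-time and derives an event rate, assuming that one defined daily dose corresponds to one treatment day. None of the numbers are taken from the cited studies.

```python
# Hypothetical numbers illustrating a case-population event rate with a
# sales-based denominator (assumption: 1 defined daily dose = 1 treatment day).
ddd_sold = 120_000_000   # defined daily doses sold nationally over the study period
cases = 12               # events attributed to the drug (numerator)

person_years_exposed = ddd_sold / 365.25
rate_per_million_person_years = cases / person_years_exposed * 1_000_000
print(round(rate_per_million_person_years, 2))  # ~36.5 events per million person-years
```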

 

The case-population approach has also been adapted for vaccine safety surveillance, in particular for prospective investigation of urgent vaccine safety concerns or for the prospective generation of vaccine safety signals (see Vaccine Case-Population: A New Method for Vaccine Safety Surveillance, Drug Saf. 2016 Dec;39(12):1197-1209).

 

Use of the case-population design for fast investigation is illustrated in Use of renin-angiotensin-aldosterone system inhibitors and risk of COVID-19 requiring admission to hospital: a case-population study (Lancet 2020;395(10238):1705-14), in which the authors consecutively selected patients aged 18 years or older with a PCR-confirmed diagnosis of COVID-19 requiring admission to hospital from seven hospitals between March 1 and March 24, 2020. As a reference group, ten patients per case were randomly sampled from a primary health-care database (available year: 2018), individually matched for age, sex, region and date of admission to hospital. Information on comorbidities and prescriptions up to the month before the index date was extracted from the electronic clinical records of both cases and controls. Although the cases and controls originated from different data sources in different years, it was assumed that the primary health-care database of controls represented the source population of the cases and that a random sample of controls from that database would provide a valid estimate of the prevalence of the exposure and covariates in the source population, approaching the primary base paradigm of case-control studies.

 

A pragmatic attitude towards case-population studies is recommended: in situations where nation-wide or region-wide electronic health records (EHR) are available and allow assessing the outcomes and confounders with sufficient validity, a case-population approach is neither necessary nor desirable, as one can perform a population-based cohort or case-control study with adequate control for confounding. In situations where outcomes are difficult to ascertain in EHR or where such databases do not exist, the case-population design might give an approximation of the absolute and relative risk when both events and exposures are rare. This is limited by the ecological nature of the reference data that restricts the ability to control for confounding.

 
