of the outcome (https://www.ema.europa.eu/documents/scientific-guideline/draft-ich-e9-r1-addendum-estimands-sensitivity-analysis-clinical-trials-guideline-statistical_en.pdf). At the end of Stage 1, you have a clear goal that allows development of an analysis plan.

      Stage 2 is the design stage. The goal here is to approximate the conditions of the conceptualized randomized trial and ensure balance in covariates between treatment groups. This stage includes a quantitative assessment of the feasibility of the study and confirmation that the bias adjustment methods (such as propensity matching) produce balance similar to that of a randomized study. Directed acyclic graphs (DAGs) are very useful here, as constructing them informs both the feasibility assessment (do we even have the right covariates?) and the selection of variables for the bias adjustment models. A key issue is that the design stage is conducted “outcome free.” That is, one conducts the feasibility assessment, then finalizes and documents the statistical analysis methods, all prior to accessing the outcome data. One can use the baseline (pre-index) data to confirm that the data can support the research objectives, but should have no outcome data in sight. For a detailed practical discussion of design-phase planning for causal inference studies, we recommend following the concepts described by Hernán and Robins (2016) in their target trial approach.
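      To make the outcome-free design stage concrete, the following minimal sketch shows one way to carry it out in SAS with PROC PSMATCH: estimate a propensity score, perform 1:1 greedy matching, and assess covariate balance using only baseline data. The data set BASELINE, the treatment indicator TRT, and the covariates AGE, SEX, and COMORBID are hypothetical names; note that no outcome variable appears anywhere in the step.

/* Design-stage sketch: propensity score estimation, 1:1 greedy
   matching, and balance assessment on baseline data only. All data
   set and variable names are hypothetical. */
proc psmatch data=baseline region=allobs;
   class trt sex;                                /* treatment indicator and categorical covariate */
   psmodel trt(Treated='1') = age sex comorbid;  /* propensity score model */
   match method=greedy(k=1) distance=lps caliper=0.25;
   assess lps var=(age comorbid) / plots=stddiff; /* standardized mean differences */
   output out(obs=match)=matched matchid=_matchid;
run;

      If the standardized differences remain large after matching (a common rule of thumb flags values above 0.1), the design is revisited and the propensity model refined, all before any outcome data are accessed.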

      Stage 3 is the analysis stage. Too often researchers begin here, which can lead to “cherry-picking” of methods that give the desired results or to analyses not tied to the estimand of interest. In this stage, the researcher conducts the pre-planned analyses for the estimand, sensitivity analyses to assess the robustness of the results, analyses of secondary objectives (different estimands), and any ad hoc analyses driven by the results (which should be clearly labeled as ad hoc). Note that while some sensitivity analyses should cover study-specific analytic issues, in general researchers should include an assessment of the core assumptions needed for causal inference using real world data (no unmeasured confounding, appropriate modeling, positivity; see Chapter 2).
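      As one concrete example of such an assessment, the following minimal sketch computes the E-value of VanderWeele and Ding (2017): the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need to have with both treatment and outcome to fully explain away an observed effect. The book’s own approaches to this topic appear in Chapter 13; the risk ratio of 1.8 and lower confidence limit of 1.2 below are hypothetical inputs.

/* Sensitivity analysis sketch: E-value for an observed risk ratio.
   For RR > 1, E-value = RR + sqrt(RR * (RR - 1)). Inputs are
   hypothetical. */
data evalue;
   rr  = 1.8;   /* hypothetical estimated risk ratio       */
   lcl = 1.2;   /* hypothetical lower 95% confidence limit */
   e_point = rr  + sqrt(rr  * (rr  - 1));   /* = 3.00 */
   e_lcl   = lcl + sqrt(lcl * (lcl - 1));   /* = 1.69 */
   put 'E-value for point estimate:   ' e_point 5.2;
   put 'E-value for confidence limit: ' e_lcl 5.2;
run;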

      Lastly, Stage 4 addresses drawing causal conclusions from the findings. Because this text is focused on the analytic portions of real world research, we will focus primarily on Stages 2 and 3 of this process in the chapters that follow.

      The book is organized as follows. This chapter and Chapter 2 provide foundational information about real world data research, with a focus on causal inference in Chapter 2. Chapter 3 introduces the data sets used in the example analyses throughout the remainder of the book, along with a brief discussion of how to simulate real world data. Chapters 4–10 present specific methods for comparative (causal) analyses of outcomes between two or more interventions that adjust for baseline confounding using propensity matching, stratification, weighting methods, and model averaging. Chapters 11 and 12 demonstrate more complex methods that can adjust for both baseline and time-varying confounders and are applicable to longitudinal data, for example, to account for changes in the interventions over time. Lastly, Chapters 13–15 present analyses regarding the emerging topics of unmeasured confounding sensitivity analyses, quantitative generalizability analyses, and personalized medicine.

      Each chapter (beginning with Chapter 3) contains: (1) an introduction to the topic and a discussion of the methods at a level sufficient to understand the implementation and the pros and cons of each approach, (2) a brief discussion of best practices and guidance on the use of the methods, (3) SAS code to implement the methods, and (4) an example analysis applying the SAS code to one of the data sets discussed in Chapter 3.

      Berger ML, Mamdani M, Atkins D, Johnson ML (2009). Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: The ISPOR good research practices for retrospective database analysis task force report—Part I. Value in Health 12(8):1044-1052.

      Berger M, Martin B, Husereau D, Worley K, Allen D, Yang W, Mullins CD, Kahler K, Quon NC, Devine S, Graham J, Cannon E, Crown W (2014). A Questionnaire to Assess the Relevance and Credibility of Observational Studies to Inform Healthcare Decision Making: An ISPOR-AMCP-NPC Good Practice Task Force. Value in Health 17(2):143-156.

      Berger ML, Sox H, Willke RJ, Brixner DL, Eichler HG, Goettsch W, Madigan D, Makady A, Schneeweiss S, Tarricone R, Wang SV, Watkins J, Mullins CD (2017). Good Practices for Real-World Data Studies of Treatment and/or Comparative Effectiveness: Recommendations from the Joint ISPOR-ISPE Special Task Force on Real-World Evidence in Health Care Decision Making. Pharmacoepidemiology and Drug Safety 26(9):1033-1039.

      Bind MAC, Rubin DB (2017). Bridging Observational Studies and Randomized Experiments by Embedding the Former in the Latter. Statistical Methods in Medical Research 28(7):1958-1978.

      Cox E, Martin BC, Van Staa T, Garbe E, Siebert U, Johnson ML (2009). Good Research Practices for Comparative Effectiveness Research: Approaches to Mitigate Bias and Confounding in the Design of Non-randomized Studies of Treatment Effects Using Secondary Databases: Part II. Value in Health 12(8):1053-1061.

      Des Jarlais DC, Lyles C, Crepaz N, TREND Group (2004). Improving the Reporting Quality of Nonrandomized Evaluations of Behavioral and Public Health Interventions: The TREND Statement. American Journal of Public Health 94:361-366.

      Dreyer NA, Bryant A, Velentgas P (2016). The GRACE Checklist: A Validated Assessment Tool for High Quality Observational Studies of Comparative Effectiveness. Journal of Managed Care and Specialty Pharmacy 22(10):1107-1113.

      Dreyer NA, Schneeweiss S, McNeil B, et al. (2010). GRACE Principles: Recognizing high-quality observational studies of comparative effectiveness. American Journal of Managed Care 16(6):467-471.

      Dreyer NA, Velentgas P, Westrich K, et al. (2014). The GRACE Checklist for Rating the Quality of Observational Studies of Comparative Effectiveness: A Tale of Hope and Caution. Journal of Managed Care Pharmacy 20(3):301-308.

      Duke Margolis Center for Health Policy White Paper (2017). A Framework for Regulatory Use of Real World Evidence. Accessed January 12, 2019, at https://healthpolicy.duke.edu/sites/default/files/atoms/files/rwe_white_paper_2017.09.06.pdf.

      Faries D, Leon AC, Haro JM, Obenchain RL (2010). Analysis of Observational Health Care Data Using SAS. Cary, NC: SAS Institute Inc.

      Fletcher RH, Fletcher SW, Fletcher GS (2014). Clinical Epidemiology, 5th Edition. Baltimore, MD: Wolters Kluwer.

      Food and Drug Administration (FDA). Use of Real World Evidence to Support Regulatory Decision-Making for Medical Devices. https://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuemnts/UCM513027.pdf. Accessed October 3, 2019.

      Food and Drug Administration (FDA). Framework for FDA’s Real World Evidence Program. https://www.fda.gov/media/120060/download. Accessed October 3, 2019.

      Garrison LP, Neumann PJ, Erickson P, Marshall D, Mullins CD (2007). Using Real-World Data for Coverage and Payment Decisions: The ISPOR Real-World Data Task Force Report. Value in Health 10(5): 326-335.

      Gilbody S, Wahlbeck K, Adams C (2002). Randomized controlled trials in schizophrenia: a critical perspective on the literature. Acta Psychiatrica Scandinavica 105:243-251.

      Guidance for Industry and FDA Staff: Best Practices for Conducting and Reporting Epidemiologic Safety Studies Using Electronic Healthcare Data (2013). Accessed January 2019 at: https://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM243537.pdf.
