ENAR Webinar Series (WebENARs)

Past Webinars

 

The Overlap between Statisticians and Pharmacometricians in Clinical Drug Development and a Case Study

Friday, May 19, 2017
10:00 am – 12:00 pm Eastern

Presenters:
Kenneth G. Kowalski, MS
Kowalski PMetrics Consulting, LLC

Wenping Wang, PhD
Novartis Pharmaceuticals Corporation

Description:
This WebENAR will be presented in two parts. The first part will focus on a commentary presented by Ken Kowalski discussing the overlap between statisticians and pharmacometricians working in clinical drug development. Individuals with training in various academic disciplines, including pharmacokinetics, pharmacology, engineering and statistics, to name a few, have pursued careers as pharmacometricians. While pharmacometrics has benefited greatly from advances in statistical methodology, there is considerable tension and skepticism between biostatisticians and pharmacometricians as they apply their expertise to drug development applications. This talk explores some of the root causes of this tension and offers suggestions for improving collaborations between statisticians and pharmacometricians. The talk concludes with a plea for more statisticians to consider careers as pharmacometrics practitioners. The second part of the WebENAR will highlight a case study presented by Wenping Wang illustrating the application of pharmacokinetic-pharmacodynamic modeling of the time to first flare to support dose justification of canakinumab in an sBLA submission. The case study will conclude with some observations on team interactions between statisticians and pharmacometricians that resulted in a successful sBLA submission.
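To give a rough, purely illustrative flavor of the kind of exposure-response, time-to-event analysis the case study describes, the sketch below fits a parametric model for time to first flare against drug exposure in R using the survival package. The data, variable names, and effect sizes are invented and are not from the canakinumab submission.

    library(survival)

    set.seed(4)
    n <- 300
    conc <- rexp(n, rate = 1 / 10)                      # invented drug exposure
    t_flare <- rweibull(n, shape = 1.2,
                        scale = 20 * exp(0.05 * conc))  # higher exposure delays flare
    cens <- runif(n, 0, 60)                             # administrative censoring
    time <- pmin(t_flare, cens)
    status <- as.numeric(t_flare <= cens)

    # Parametric (Weibull AFT) time-to-first-flare model with exposure as covariate
    fit <- survreg(Surv(time, status) ~ conc, dist = "weibull")
    summary(fit)  # positive conc coefficient: longer time to flare at higher exposure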

 

Evaluation and Use of Surrogate Markers

Friday, April 21, 2017
11:00 am – 1:00 pm Eastern

Presenter:
Dr. Layla Parast
Statistician
RAND Corporation

Description:
The use of surrogate markers to estimate and test for a treatment effect has been a popular area of research. Given the long follow-up periods often required for treatment or intervention studies, appropriate use of surrogate marker information has the potential to decrease the required follow-up time. However, previous studies have shown that using inadequate markers or making inappropriate assumptions about the relationship between the primary outcome and the surrogate marker can lead to inaccurate conclusions regarding the treatment effect. Many of the available methods for identifying, validating and using surrogate markers to test for a treatment effect rely on restrictive model assumptions and/or focus on uncensored outcomes. In this webinar, I will describe different approaches to quantifying the proportion of treatment effect explained by surrogate marker information in both non-survival and censored survival outcome settings. One of the approaches is a nonparametric method that can accommodate settings where individuals may experience the primary outcome before the surrogate marker is measured. I will illustrate the procedures using an R package available on CRAN to examine potential surrogate markers for diabetes with data from the Diabetes Prevention Program.
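As a small illustration of the estimand described above, the sketch below computes the proportion of treatment effect explained by a surrogate on simulated data. It assumes the CRAN package in question is Parast's Rsurrogate and uses its R.s.estimate() function; the argument and output names shown may differ across package versions.

    # install.packages("Rsurrogate")  # assumed CRAN package
    library(Rsurrogate)

    set.seed(123)
    n <- 500
    s1 <- rnorm(n, 3, 1)             # surrogate marker, treated arm
    s0 <- rnorm(n, 2, 1)             # surrogate marker, control arm
    y1 <- 2 + 0.8 * s1 + rnorm(n)    # primary outcome, treated arm
    y0 <- 2 + 0.8 * s0 + rnorm(n)    # primary outcome, control arm

    # Robust (nonparametric) estimation in the non-survival setting
    fit <- R.s.estimate(sone = s1, szero = s0, yone = y1, yzero = y0,
                        type = "robust")
    fit$delta    # overall treatment effect
    fit$delta.s  # residual effect after accounting for the surrogate
    fit$R.s      # estimated proportion of treatment effect explained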

Purchase Webinar Recording (4/21/17)

 

How Credible are Your Conclusions about Treatment Effect When There are Missing Data? Sensitivity Analyses for Time-to-event and Recurrent-event Outcomes

Friday, February 24, 2017
10:00 am – 12:00 pm Eastern

Presenters:
Dr. Michael O'Kelly
Dr. Bohdana Ratitch
Dr. Ilya Lipkovich
Center for Statistics in Drug Development
Quintiles

Description:
Most experiments have missing data. When data are missing, it is useful to provide sensitivity analyses that allow readers to assess the robustness of any conclusions to the missing data. Using the pattern-mixture framework, a variety of assumptions can be implemented with regard to categories of missing outcomes. Assumptions that would tend to undermine the alternative hypothesis can be especially useful for assessing robustness of conclusions. Multiple imputation (MI) is one quite straightforward way of implementing such pattern-mixture approaches. While MI is a standard tool for continuous outcomes, researchers have recently developed ways of implementing MI for other outcomes, such as time-to-event and recurrent-event outcomes. This webinar describes a number of these new applications of the MI idea. The strengths and weaknesses of these approaches are described and illustrated via examples and simulations.
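As a generic illustration of the pattern-mixture idea (not taken from the presenters' materials), the sketch below implements a delta-adjustment sensitivity analysis for a continuous outcome: impute under MAR with the mice package, worsen the imputed values by an assumed penalty delta, then pool with Rubin's rules. The same tipping-point logic extends to the time-to-event and recurrent-event settings the webinar covers.

    library(mice)

    set.seed(42)
    n <- 200
    d <- data.frame(arm = rep(0:1, each = n / 2), base = rnorm(n))
    d$y <- 1 + 0.5 * d$arm + 0.7 * d$base + rnorm(n)
    miss <- runif(n) < 0.25              # 25% of outcomes missing
    d$y[miss] <- NA

    imp <- mice(d, m = 20, method = "norm", printFlag = FALSE)
    delta <- -0.3                        # assumed penalty for missing outcomes

    ests <- vars <- numeric(imp$m)
    for (i in seq_len(imp$m)) {
      di <- complete(imp, i)
      di$y[miss] <- di$y[miss] + delta   # shift imputed values (pattern mixture)
      fit <- lm(y ~ arm + base, data = di)
      ests[i] <- coef(fit)["arm"]
      vars[i] <- vcov(fit)["arm", "arm"]
    }
    # Rubin's rules: total variance = within + (1 + 1/m) * between
    qbar <- mean(ests); ubar <- mean(vars); b <- var(ests)
    se <- sqrt(ubar + (1 + 1 / imp$m) * b)
    c(estimate = qbar, se = se)  # repeat over a grid of deltas to find the tipping point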

This webinar was not recorded and is not available for on-demand purchase.

 

Introduction to Clinical Trial Optimization

Friday, February 3, 2017
10:00 am – 12:00 pm Eastern

Presenter:
Dr. Alex Dmitrienko
Founder & President
Mediana Inc.

Description:
This webinar focuses on a broad class of statistical problems related to optimizing the design and analysis of Phase II and III trials (Dmitrienko and Pulkstenis, 2017). This general topic has attracted much attention across the clinical trial community due to increasing pressure to reduce implementation costs and shorten timelines in individual trials and development programs.

The Clinical Scenario Evaluation (CSE) framework (Benda et al., 2010) will be described to formulate a general approach to clinical trial optimization and decision-making. Using the CSE approach, the main objectives of clinical trial optimization will be formulated, including selection of clinically relevant optimization criteria, identification of sets of optimal and nearly optimal values of the parameters of interest, and sensitivity assessments. Key principles of clinical trial optimization will be illustrated using the problem of identifying efficient and robust multiplicity adjustment strategies in late-stage trials (Dmitrienko et al., 2009; Dmitrienko, D’Agostino and Huque, 2013; Dmitrienko, Paux and Brechenmacher, 2015).

Software tools for applying optimization methods will be presented, including R software (Mediana package) and Windows applications with a graphical user interface (MedianaFixedDesign application).
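The base-R sketch below (deliberately not using the Mediana API) illustrates the CSE logic on a toy scale: specify assumed effect-size scenarios, simulate a two-endpoint trial under each, and compare multiplicity adjustments by a clinically motivated criterion, here the probability of winning on both endpoints.

    set.seed(1)
    one_trial <- function(n, eff, adjust) {
      p <- sapply(eff, function(e) t.test(rnorm(n, e), rnorm(n))$p.value)
      p.adjust(p, method = adjust) < 0.05       # which endpoints are significant
    }
    scenarios <- list(optimistic = c(0.40, 0.35), pessimistic = c(0.40, 0.15))
    for (sc in names(scenarios)) {
      for (adj in c("bonferroni", "holm")) {
        rej <- replicate(2000, one_trial(100, scenarios[[sc]], adj))
        cat(sprintf("%-12s %-10s P(win both endpoints) = %.3f\n",
                    sc, adj, mean(rej[1, ] & rej[2, ])))
      }
    }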

Purchase Webinar Recording (2/3/17)

 

Design and Analysis of Genome-Wide Association Studies

Friday, December 9, 2016
10:00 am – 12:00 pm Eastern

Presenter:
Dr. Nilanjan Chatterjee
Bloomberg Distinguished Professor
Department of Biostatistics, Bloomberg School of Public Health
Department of Oncology, School of Medicine
Johns Hopkins University

Description:
The decreasing cost of large-scale genotyping and sequencing technologies is fueling investigation of associations between complex traits and genetic variants across the whole genome in studies of massive sample sizes. Recent genome-wide association studies (GWAS) focused on common variants have already led to the discovery of thousands of genetic loci across a variety of complex traits, including chronic diseases such as cancers, heart diseases and type-2 diabetes. Future studies of less common and rare variants hold further promise for the discovery of new genetic loci and a better understanding of the causal mechanisms underlying existing loci. The webinar will provide a brief review of state-of-the-art design and analysis issues in the field. Topics will include sample size requirements and power calculations, methods for single- and multi-marker association testing, estimation of heritability and effect-size distributions, techniques for pleiotropic and Mendelian randomization analyses, and genetic risk prediction.
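As a small worked example of the power-calculation topic, the base-R sketch below approximates single-marker power at genome-wide significance (alpha = 5e-8) for a quantitative trait under an additive model, using the standard 1-df chi-square noncentrality argument.

    gwas_power <- function(n, maf, beta, alpha = 5e-8) {
      h2 <- 2 * maf * (1 - maf) * beta^2   # variance explained, standardized trait
      ncp <- n * h2                        # noncentrality of the 1-df test
      crit <- qchisq(1 - alpha, df = 1)
      pchisq(crit, df = 1, ncp = ncp, lower.tail = FALSE)
    }
    # Power rises steeply with sample size for a common, small-effect variant:
    sapply(c(5000, 20000, 100000), gwas_power, maf = 0.3, beta = 0.05)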

Purchase Webinar Recording (12/9/16)

 

Nonparametric Bayes Biostatistics

Friday, October 28, 2016
10:00 am – 12:00 pm Eastern

Presenter:
Dr. David Dunson
Arts & Sciences Professor of Statistical Science, Mathematics and Electrical & Computer Engineering
Duke University

Description:
This webinar will provide an introduction to the practical use of nonparametric Bayesian methods in the analysis and interpretation of data from biomedical studies. I will start with a very brief review of the Bayesian paradigm, rapidly leading into what is meant by "Nonparametric Bayes." I'll then describe some canonical nonparametric Bayes models, including Dirichlet process mixtures and Gaussian processes. Basic practical properties and approaches for computation will be sketched, and I'll provide a practical motivation through some biomedical applications ranging from genomics to epidemiology to neuroscience. I'll finish up by describing some possibilities in terms of more advanced models that allow the density of a response variable to change flexibly with predictors, while providing practical motivation and implementation details.
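For a concrete handle on the Dirichlet process before the webinar, the base-R sketch below draws from a truncated stick-breaking representation of DP(alpha, N(0,1)). A DP draw is a discrete random measure, which is exactly why DP mixtures (atoms smoothed by a kernel) are used for density estimation.

    set.seed(7)
    alpha <- 2; K <- 100
    v <- rbeta(K, 1, alpha)                  # stick-breaking fractions
    w <- v * cumprod(c(1, 1 - v[-K]))        # weights w_k = v_k * prod_{j<k}(1 - v_j)
    theta <- rnorm(K)                        # atoms drawn from the base measure
    z <- sample(theta, 1000, replace = TRUE, prob = w)  # draws from the random measure
    x <- z + rnorm(1000, sd = 0.3)           # DP mixture of normals: smooth the atoms
    hist(x, breaks = 50, main = "Draw from a DP mixture of normals")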

 

Pragmatic Trials in Public Health and Medicine

Friday, May 20, 2016
11:00 am – 1:00 pm Eastern

Presenter:
David M. Murray, Ph.D.
Associate Director for Prevention
Director, Office of Disease Prevention
Office of the Director
National Institutes of Health

Description:
This webinar will review key issues and their solutions for pragmatic trials in public health and medicine. Pragmatic trials are used increasingly in health care settings to help clinicians choose between options for care. They often involve group- or cluster-randomization, though alternatives to randomized trials are also available. Many current trials rely on electronic health records as their major source of data. These studies face a variety of challenges in the development and delivery of their interventions, research design, informed consent, data collection, and data analysis. This webinar will review these issues both generally and using examples from the Health Care Systems (HCS) Collaboratory. The HCS Collaboratory is an NIH-funded consortium of nine pragmatic trials that address a variety of health issues and outcomes, all conducted within health care systems, all relying on electronic health records as their primary source of data, and most implemented as group- or cluster-randomized trials.
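The sketch below (a generic illustration, not taken from the webinar or the Collaboratory trials) shows the core analytic point for group-randomized designs in R: a mixed model with a random intercept per cluster, so that inference respects the cluster as the unit of randomization. It assumes the lme4 package.

    library(lme4)

    set.seed(11)
    n_clusters <- 20; m <- 50                              # clinics, patients per clinic
    cl <- factor(rep(seq_len(n_clusters), each = m))
    arm <- rep(rep(0:1, each = n_clusters / 2), each = m)  # arm assigned by clinic
    u <- rnorm(n_clusters, sd = 0.5)[cl]                   # clinic-level random effect
    y <- 10 + 1.5 * arm + u + rnorm(n_clusters * m)

    fit <- lmer(y ~ arm + (1 | cl))
    summary(fit)  # arm effect with a standard error that accounts for clustering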

 

Analytic Methods for Functional Neuroimaging Data

Friday, April 15, 2016
10:00 am – 12:00 pm Eastern

Presenters:
Dr. F. DuBois Bowman
Dr. Daniel Drake
Dr. Ben Cassidy
Department of Biostatistics, Mailman School of Public Health
Columbia University

Description:
Brain imaging scanners collect detailed information on brain function and various aspects of brain structure. When used as a research tool, imaging enables studies to investigate brain function related to emotion, cognition, language, memory, and responses to numerous other external stimuli, as well as resting-state brain function. Brain imaging studies also attempt to determine the functional or structural basis for psychiatric or neurological disorders and to examine the responses of these disorders to treatment. Neuroimaging data, particularly functional images, are massive and exhibit complex patterns of temporal and spatial dependence, which pose analytic challenges. There is a critical need for statisticians to establish rigorous methods to extract information and to quantify evidence for formal inferences. In this webinar, I briefly provide background on various types of neuroimaging data (with an emphasis on functional data) and analysis objectives that are commonly targeted in the field. I also present a survey of existing methods aimed at these objectives and identify particular areas offering opportunities for future statistical contribution.

 

Bayesian Population Projections

Friday, February 12, 2016
10:00 am - 12:00 pm Eastern

Presenter:
Adrian E. Raftery
Professor of Statistics and Sociology
University of Washington

Description:
Projections of countries' future populations, broken down by age and sex, are widely used for planning and research. They are mostly done deterministically, but there is a widespread need for probabilistic projections. I will describe a Bayesian statistical method for probabilistic population projections for all countries. These new methods have been used by the United Nations to produce their most recent population projections for all countries.
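A toy illustration of the difference between deterministic and probabilistic projection (this is not the UN methodology, just the underlying idea): propagate posterior uncertainty about a vital rate through the projection and report an interval rather than a single trajectory. All numbers below are invented.

    set.seed(3)
    pop0 <- 5.0                                    # current population, millions (invented)
    draws <- rnorm(4000, mean = 0.010, sd = 0.004) # posterior draws of annual growth rate
    horizon <- 30                                  # years ahead
    traj <- sapply(draws, function(r) pop0 * (1 + r)^horizon)
    quantile(traj, c(0.05, 0.5, 0.95))             # projection interval, not a point forecast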

 

Regulatory Perspective on Subgroup Analysis in Clinical Trials

December 4, 2015
10:00 am - 12:00 pm Eastern

Presenters:
Dr. Mohamed Alosh & Dr. Kathleen Fritsch
Division of Biometrics III, Office of Biostatistics, OTS, CDER, FDA

Description:
For a confirmatory clinical trial that established treatment efficacy in the overall population, subgroup analysis aims to investigate the extent of benefits from the therapy for the major subgroups. Consequently, findings from the subgroup analysis play a major role in interpreting the trial results. This presentation focuses on two areas related to subgroup analysis in a confirmatory clinical trial: (i) investigating consistency of treatment effect across subgroups, and (ii) designing a clinical trial with the objective of establishing treatment efficacy in a targeted subgroup in addition to the overall population. The presentation also outlines the regulatory guidelines for subgroup analysis in such trials and provides examples of clinical trials where subgroup analysis played a role in determining the population for treatment use.
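A minimal base-R sketch of the consistency question in (i): fit a treatment-by-subgroup interaction and inspect subgroup-specific effect estimates. The data are simulated for illustration only.

    set.seed(5)
    n <- 400
    trt <- rbinom(n, 1, 0.5)
    sub <- factor(sample(c("biomarker+", "biomarker-"), n, replace = TRUE))
    y <- 0.2 + 0.1 * trt + 0.5 * trt * (sub == "biomarker+") + rnorm(n)

    fit <- lm(y ~ trt * sub)
    anova(fit)        # the trt:sub row gauges heterogeneity of the treatment effect
    # Subgroup-specific treatment effect estimates:
    sapply(levels(sub), function(s) coef(lm(y ~ trt, subset = sub == s))["trt"])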

 

Reproducible Research: The Time is Now

Friday, November 20, 2015
10:00 am - 12:00 pm Eastern

Presenter:
Dr. Keith Baggerly
The University of Texas MD Anderson Cancer Center

Description:
The buzz phrase "Reproducible Research" refers to studies where the raw data and code supplied are enough to let a new investigator exactly match the reported results without a huge amount of effort. "Replicable Research" refers to studies whose methods, when applied to new data, give rise to qualitatively similar results. Particularly as experiments get bigger, more involved, and more expensive, reproducibility should precede replication. Unfortunately, more attention is now being focused on such issues due to some high-profile failures.

In this talk, we first illustrate the issues with some case studies from oncology showing the types of things that can go wrong, the simple nature of the most common mistakes, and what the implications can be: e.g., treating patients incorrectly. We then give some point estimates of how widespread the problems of reproducibility and replicability are thought to be, and discuss some additional problems associated with replication. We survey tools introduced in the past few years that have made assembling reproducible studies markedly easier, discuss considerations to apply when contemplating replication, and give pointers to some resources for further information.
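As a minimal, generic example of the kind of tooling surveyed (assumptions: the knitr package; file names invented), the snippet below builds a tiny literate report in which code, seed, and session state travel with the results, so a new investigator can match them exactly.

    # Write a minimal R Markdown source file, then knit it to Markdown.
    writeLines(c(
      "## Analysis report",
      "```{r}",
      "set.seed(2015)   # fix randomness so results can be matched exactly",
      "summary(rnorm(100))",
      "sessionInfo()    # record R and package versions with the results",
      "```"
    ), "report.Rmd")
    knitr::knit("report.Rmd")  # produces report.md containing code plus output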

 

The Statistical Analysis of fMRI Data

Friday, June 26, 2015
10:00 am to 12:00 pm Eastern

Presenter:
Martin Lindquist
Professor
Department of Biostatistics
Johns Hopkins University

Description:
Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique for studying brain activity. During the past two decades, fMRI has provided researchers with unprecedented access to the inner workings of the brain, leading to countless new insights into how the brain processes information. The field that has grown up around the acquisition and analysis of fMRI data has expanded rapidly in recent years and found applications in a wide variety of areas. This webinar introduces fMRI and discusses key statistical aspects of the analysis of fMRI data. Topics include: (a) an overview of the acquisition and reconstruction of fMRI data; (b) the physiological basis of the fMRI signal; (c) common experimental designs; (d) pre-processing steps; (e) methods for localizing areas activated by a task; (f) connectivity analysis; and (g) prediction and brain decoding.
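To make topic (e) concrete, here is a schematic, base-R version of the voxelwise general linear model: convolve the task design with a canonical double-gamma HRF and regress a (simulated) voxel time series on the expected BOLD response. Real pipelines add drift terms, autocorrelation corrections, and multiplicity control.

    set.seed(9)
    TR <- 2; n_scans <- 120
    t <- seq(0, 30, by = TR)
    hrf <- dgamma(t, shape = 6) - dgamma(t, shape = 16) / 6    # canonical double-gamma HRF
    stim <- rep(rep(c(0, 1), each = 10), length.out = n_scans) # on/off block design
    x <- convolve(stim, rev(hrf), type = "open")[1:n_scans]    # expected BOLD response
    y <- 0.8 * x + arima.sim(list(ar = 0.3), n_scans)          # simulated voxel time series

    fit <- lm(y ~ x)
    coef(summary(fit))["x", ]  # voxelwise activation estimate and t-statistic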

 

Sparse Logical Models for Interpretable Machine Learning

Friday, May 8, 2015
10:00 am to 12:00 pm Eastern

Presenter:
Cynthia Rudin, PhD, Associate Professor of Statistics, MIT CSAIL and Sloan School of Management, Massachusetts Institute of Technology

Description:
Possibly *the* most important obstacle in the deployment of predictive models is the fact that humans simply do not trust them. If it is known exactly which variables were important for the prediction and how they were combined, this information can be very powerful in helping to convince people to believe (or not believe) the prediction and make the right decision. In this talk I will discuss algorithms for making these non-black-box predictions, including the four below (a toy decision-list sketch follows the list):

  1. "Bayesian Rule Lists" - This algorithm builds a decision list using a probabilistic model over permutations of IF-THEN rules. It competes with the CART algorithm for building accurate-yet-interpretable logical models. It is not a greedy algorithm like CART.

  2. "Falling Rule Lists" - These are decision lists where the probabilities decrease monotonically along the list. These are really useful for medical applications because they stratify patients into risk categories from highest to lowest risk.

  3. "Bayesian Or's of And's" - These are disjunctions of conjunction models (disjunctive normal forms). These models are natural for modeling customer preferences in marketing.

  4. "The Bayesian Case Model" - This is a case-based reasoning clustering method. It provides a prototypical exemplar from each cluster along with the subspace that is important for the cluster.

 

Statistical Issues in Comparative Effectiveness Research

Friday, February 20, 2015
11:00 am to 1:00 pm EST

Presenter:
Sharon-Lise Normand, Department of Health Care Policy, Harvard Medical School & Department of Biostatistics, Harvard School of Public Health

Description:
Comparative Effectiveness Research (CER) refers to a body of research that generates and synthesizes evidence on the comparative benefits and harms of alternative interventions to prevent, diagnose, treat, and monitor clinical conditions, or to improve the delivery of health care. The evidence from CER is intended to support clinical and policy decision making at both the individual and the population level. While the growth of massive health care data sources has given rise to new opportunities for CER, several statistical challenges have also emerged. This tutorial will provide an overview of the types of research questions addressed by CER, review the main statistical methodology currently utilized, and highlight areas where new methodology is required. Inferential issues in the "big data" context are identified. Examples from cardiology and mental illness will illustrate methodological issues.

 

Statistical Challenges in Genomics High Throughput Data

Friday, January 30, 2015
11:00 am to 1:00 pm EST

Presenter: Rafa Irizarry, PhD
Professor of Biostatistics and Computational Biology at the Dana-Farber Cancer Institute
Professor of Biostatistics at the Harvard School of Public Health
http://rafalab.dfci.harvard.edu/

Description:
In this webinar I will give an overview of genomics technologies and the challenges arising when analyzing the data they produce. Specifically, I will focus on microarrays and next generation sequencing technologies. We will cover statistical issues related to preprocessing, normalization, detecting differential expression, and dealing with batch effects.
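As a concrete taste of the normalization topic, the base-R sketch below implements plain quantile normalization, a workhorse of microarray preprocessing: every sample is forced to share the same empirical distribution by averaging across sorted values.

    set.seed(8)
    expr <- matrix(rnorm(5000, mean = rep(c(6, 7, 6.5, 8), each = 1250)),
                   ncol = 4)                        # 4 samples with batch-like shifts
    ranks <- apply(expr, 2, rank, ties.method = "first")
    ref <- rowMeans(apply(expr, 2, sort))           # reference (averaged) distribution
    normed <- apply(ranks, 2, function(r) ref[r])   # map each sample onto the reference
    round(colMeans(expr), 2)    # unequal sample means before normalization
    round(colMeans(normed), 2)  # identical after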

 

An Introduction to Dynamic Treatment Regimes

December 5, 2014
11:00 am to 1:00 pm (EST)

Presenter: Marie Davidian, PhD, North Carolina State University

Description: Treatment of patients with chronic diseases or disorders in clinical practice involves a series of decisions made over time. Clinicians adjust, change, modify, or discontinue therapies based on the patient's observed progress, side effects, compliance, and so on, with the goal of "personalizing" treatment to the patient in order to provide the best care. The decisions are typically based on synthesis of the available information on the patient using clinical experience and judgment.

A "dynamic treatment regime," also referred to as an "adaptive treatment strategy," is a set of sequential rules that dictate how to make decisions on treatment of a patient over time. Each rule corresponds to a key decision point at which a decision on which treatment action to take from among the available options must be made. Based on patient information, the rule outputs the next treatment action. Thus, a dynamic treatment regime is an algorithm that formalizes the way clinicians manage patients in practice.

In this presentation, we introduce the notion of a dynamic treatment regime and an appropriate statistical framework in which treatment regimes can be studied. We demonstrate how statistical inference may be made on the effects of different regimes based on data from so-called sequential, multiple assignment, randomized trials (SMARTs). We conclude with a discussion of current challenges, including the development of "optimal" treatment regimes. The material presented is ideal background for the short course on personalized medicine and optimal dynamic treatment regimes to be offered at the ENAR Spring Meeting in March 2015.
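For attendees who want to experiment beforehand, the base-R sketch below runs two-stage Q-learning, one standard estimator of an optimal dynamic treatment regime from SMART data, on simulated data: fit the stage-2 regression, form the pseudo-outcome under the optimal stage-2 decision, then fit the stage-1 regression.

    set.seed(6)
    n <- 1000
    x1 <- rnorm(n)                          # baseline severity
    a1 <- rbinom(n, 1, 0.5)                 # stage-1 randomized treatment
    x2 <- 0.5 * x1 + 0.3 * a1 + rnorm(n)    # intermediate response
    a2 <- rbinom(n, 1, 0.5)                 # stage-2 randomized treatment
    y  <- 1 + x2 + a2 * (0.8 - x2) + 0.5 * a1 + rnorm(n)  # final outcome

    # Stage 2: the optimal rule treats when the estimated benefit of a2 = 1 is positive
    q2 <- lm(y ~ x2 + a2 + a2:x2 + a1)
    benefit2 <- coef(q2)["a2"] + coef(q2)["x2:a2"] * x2
    a2_opt <- as.numeric(benefit2 > 0)

    # Pseudo-outcome: predicted value had the optimal stage-2 decision been followed
    y_opt <- predict(q2, data.frame(x2 = x2, a1 = a1, a2 = a2_opt))

    # Stage 1: regress the pseudo-outcome on baseline history
    q1 <- lm(y_opt ~ x1 + a1 + a1:x1)
    coef(q1)  # treat at stage 1 when coef["a1"] + coef["x1:a1"] * x1 > 0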