Korea Institute for Health and Social Affairs (KIHASA) Electronic Library

Book (Wiley Series in Probability and Statistics)

Statistical analysis with missing data

Title/Author
Statistical analysis with missing data
Edition
3rd ed.
Publication
Hoboken, N.J. : Wiley, 2020.
Physical description
xii, 449 p. : ill. ; 24 cm.
ISBN
9780470526798 (hbk.) ; 9781118595695 (ebk.)
Notes
Includes bibliographical references (p. 405-427) and index
Holdings
Available (1)
  • Location: Reading room
  • Registration no.: WM020947
  • Status / Due date: Available for loan / -
Book description
An up-to-date, comprehensive treatment of a classic text on missing data in statistics

The topic of missing data has gained considerable attention in recent decades. This new edition by two acknowledged experts on the subject offers an up-to-date account of practical methodology for handling missing data problems. Blending theory and application, authors Roderick Little and Donald Rubin review historical approaches to the subject and describe simple methods for multivariate analysis with missing values. They then provide a coherent theory for analysis of problems based on likelihoods derived from statistical models for the data and the missing-data mechanism, and apply that theory to a wide range of important missing data problems.

Statistical Analysis with Missing Data, Third Edition starts by introducing readers to the subject and approaches toward solving it. It looks at the patterns and mechanisms that create the missing data, as well as a taxonomy of missing data. It then goes on to examine missing data in experiments, before discussing complete-case and available-case analysis, including weighting methods. The new edition expands its coverage to include recent work on topics such as nonresponse in sample surveys, causal inference, diagnostic methods, and sensitivity analysis, among a host of other topics.
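
To make the complete-case analysis and weighting adjustments mentioned above concrete, here is a minimal Python sketch (written for this record, not taken from the book; the toy data, the two response-rate cells, and all variable names are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: outcome y and a fully observed grouping variable x.
    n = 1000
    x = rng.integers(0, 2, size=n)              # adjustment cell (0 or 1)
    y = 50 + 10 * x + rng.normal(0, 5, size=n)

    # Response indicator: group 1 responds less often, so the complete
    # cases over-represent group 0 and the naive mean is pulled low.
    p_respond = np.where(x == 1, 0.4, 0.9)
    observed = rng.random(n) < p_respond

    # Complete-case estimate: mean of the observed y's only.
    cc_mean = y[observed].mean()

    # Weighting-class estimate: weight each respondent by the inverse of
    # the estimated response rate in its cell (here the cell is x).
    rate = np.array([observed[x == g].mean() for g in (0, 1)])
    w = 1.0 / rate[x[observed]]
    ipw_mean = np.sum(w * y[observed]) / np.sum(w)

    print(f"true mean      {y.mean():6.2f}")
    print(f"complete-case  {cc_mean:6.2f}")   # biased toward group 0's mean
    print(f"weighted       {ipw_mean:6.2f}")  # roughly restores the full-sample mean
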
  • An updated “classic” written by renowned authorities on the subject
  • Features over 150 exercises (including many new ones)
  • Covers recent work on important methods like multiple imputation, robust alternatives to weighting, and Bayesian methods (a short multiple-imputation sketch follows this list)
  • Revises previous topics based on past student feedback and class experience
  • Contains an updated and expanded bibliography
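
As one concrete illustration of the multiple-imputation idea listed above, the following minimal Python sketch imputes a partially missing variable from a regression fitted to the complete cases, repeats the imputation M times, and combines the results with Rubin's rules. It is written for this record, not taken from the book; the toy data, the single-regression imputation model, and the choice M = 20 are assumptions, and a fully proper procedure would also draw the regression parameters from their posterior.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy bivariate data: x fully observed, y missing for some units.
    n = 200
    x = rng.normal(size=n)
    y = 2.0 + 1.5 * x + rng.normal(0, 1, size=n)
    miss = rng.random(n) < 0.3
    y_obs = np.where(miss, np.nan, y)
    obs = ~miss

    M = 20
    estimates, variances = [], []
    for m in range(M):
        # Fit y ~ x on the observed cases.
        X = np.column_stack([np.ones(obs.sum()), x[obs]])
        beta, *_ = np.linalg.lstsq(X, y_obs[obs], rcond=None)
        resid = y_obs[obs] - X @ beta
        sigma = resid.std(ddof=2)
        # Impute missing y's as draws from the estimated predictive distribution.
        y_imp = y_obs.copy()
        y_imp[miss] = beta[0] + beta[1] * x[miss] + rng.normal(0, sigma, miss.sum())
        # Analysis of the completed data: the mean of y and its sampling variance.
        estimates.append(y_imp.mean())
        variances.append(y_imp.var(ddof=1) / n)

    # Rubin's combining rules: total variance = within + (1 + 1/M) * between.
    qbar = np.mean(estimates)
    ubar = np.mean(variances)
    b = np.var(estimates, ddof=1)
    total_var = ubar + (1 + 1 / M) * b
    print(f"MI estimate of E[y]: {qbar:.3f}  (se {np.sqrt(total_var):.3f})")
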

The authors were awarded The Karl Pearson Prize in 2017 by the International Statistical Institute, for a research contribution that has had profound influence on statistical theory, methodology or applications. Their work "has been no less than defining and transforming." (ISI)

Statistical Analysis with Missing Data, Third Edition is an ideal textbook for upper undergraduate and/or beginning graduate level students of the subject. It is also an excellent source of information for applied statisticians and practitioners in government and industry.

Table of contents

Preface

Part I: Overview and Basic Approaches

Chapter 1. Introduction

1.1. The Problem of Missing Data

Example 1.1. Nonresponse for a Binary Outcome Measured at Three Time Points.

Example 1.2. Causal Effects of Treatments with Survival and Quality of Life Outcomes.

Example 1.3. Nonresponse in Opinion Polls.

1.2. Missingness Patterns and Mechanisms

Example 1.4. Univariate Missing Data.

Example 1.5. Unit and Item Nonresponse in Surveys.

Example 1.6. Attrition in Longitudinal Studies.

Example 1.7. The File-Matching Problem, with Two Sets of Variables Never Jointly Observed.

Example 1.8. Patterns with Latent Variables That Are Never Observed.

Example 1.9. Missing Data in Clinical Trials.

1.3. Mechanisms that Lead to Missing Data

Example 1.10. Artificially-Created Missing Data in a Univariate Normal Sample.

Example 1.11. Right-Censored Survival Data.

Example 1.12. Historical Heights.

Example 1.13. MAR for Univariate Missing Data.

Example 1.14. Missing Data by Design: Double and Matrix Sampling.

Example 1.15. Measurement Error as a Missing-Data Problem.

Example 1.16. Missing Data by Design: Disclosure Limitation.

Example 1.17. Income Nonresponse.

Example 1.18. Mechanisms of Attrition in Longitudinal Data (Example 1.6 continued).

Example 1.19. MAR for a General Bivariate Pattern.

1.4. A Taxonomy of Missing-Data Methods

Example 1.20. Estimating the Mean and Covariance Matrix with Monotone Missingness Pattern.

Example 1.21. Estimating the Mean and Covariance Matrix with General Missingness Pattern.

Example 1.22. Estimation When Some Variables Are Categorical.

Example 1.23. Estimation When the Data May Not Be Missing at Random.

Chapter 2. Missing Data in Experiments

2.1. Introduction

2.2. The Exact Least Squares Solution with Complete Data

2.3. The Correct Least Squares Analysis with Missing Data

2.4. Filling in Least Squares Estimates

2.4.1. Yates's Method

2.4.2. Using a Formula for the Missing Values

2.4.3. Iterating to Find the Missing Values

2.4.4. ANCOVA with Missing-Value Covariates

2.5. Bartlett's ANCOVA Method

2.5.1. Useful Properties of Bartlett's Method

2.5.2. Notation

2.5.3. The ANCOVA Estimates of Parameters and Missing Y- Values

2.5.4. ANCOVA Estimates of the Residual Sums of Squares and the Covariance Matrix of β̂

2.6. Least Squares Estimates of Missing Values by ANCOVA using only Complete-Data Methods

Example 2.1. Estimating Missing Values in a Randomized Block.

2.7. Correct Least Squares Estimates of Standard Errors and One Degree of Freedom Sums of Squares

Example 2.2. Adjusting Standard Errors for Filled-In Missing Values (Example 2.1 continued).

2.8. Correct Least Squares Sums of Squares with more than One Degree of Freedom

Example 2.3. Adjusting Sums of Squares for the Filled-In Values (Example 2.2 continued).

Chapter 3. Complete-Case and Available-Case Analysis, Including Weighting Methods

3.1. Introduction

3.2. Complete-Case Analysis

Example 3.1. Efficiency of Complete-Case Analysis for Bivariate Normal Monotone Data.

Example 3.2. Bias of Complete-Case Inferences for Means.

Example 3.3. Bias and Precision of Complete-Case Inferences for Regression Coefficients.

Example 3.4. Bias and Precision of Complete-Case Inferences for an Odds Ratio.

3.3. Weighted Complete-Case Analysis

3.3.1. Weighting Adjustments

Example 3.5. Randomization Inference in Surveys with Complete Response.

Example 3.6. Weighting Class Estimate of the Mean.

Example 3.7. Propensity Weighting.

Example 3.8. Inverse-Probability Weighted Generalized Estimating Equations.

3.3.2. Post-stratification and Raking to Known Margins

Example 3.9. Post-Stratification.

Example 3.10. Raking-Ratio Estimation.

3.3.3 Inference from Weighted Data

3.3.4 Summary of Weighting Methods

3.4. Available-Case Analysis

Chapter 4. Single Imputation Methods

4.1. Introduction

4.2. Imputing Means from a Predictive Distribution

4.2.1 Unconditional Mean Imputation

4.2.2. Conditional Mean Imputation

Example 4.1. Imputing Means within Adjustment Cells.

Example 4.2. Regression Imputation.

Example 4.3. Buck’s Method.

4.3. Imputing Draws from a Predictive Distribution

4.3.1 Draws Based on Explicit Models

Example 4.4. Stochastic Regression Imputation.

Example 4.5. Comparison of Methods for Bivariate Monotone MCAR Data.

Example 4.6. Missing Covariates in Regression.

Example 4.7. Regression Calibration for Measurement Error in Regression.

4.3.2. Draws Based on Implicit Models – Hot Deck Methods.

Example 4.8. The Hot Deck by Simple Random Sampling with Replacement.

Example 4.9. Hot Deck within Adjustment Cells.

Example 4.10. Hot Deck Based on a Matching Metric.

Example 4.11. Hot Decks for Multivariate Missing Data.

Example 4.12. Imputation Methods for Repeated Measures with Dropouts.

4.4. Conclusions

Chapter 5. Estimation of Imputation Uncertainty

5.1. Introduction

5.2. Imputation Methods that Provide Valid Standard Errors from a Single Filled-In Dataset.

Example 5.1. Standard Errors from Cluster Samples with Imputed Data.

Example 5.2. Standard Errors from Stratified Cluster Samples with Imputed Data.

5.3. Standard Errors for Imputed Data by Resampling

5.3.1 Bootstrap Standard Errors.

Example 5.3. The Simple Bootstrap for Complete Data.

Example 5.4. The Simple Bootstrap Applied to Data Completed by Imputation.

5.3.2. Jackknife Standard Errors.

Example 5.5. The Simple Jackknife for Complete Data.

Example 5.6. The Simple Jackknife Applied to Data Completed by Imputation.

Example 5.7. Standard Errors from Stratified Cluster Samples (Example 5.2 continued).

5.4. Introduction to Multiple Imputation

Example 5.8. Multiple Imputation for Stratified Random Samples.

5.5. Comparison of Resampling Methods and Multiple Imputation

Part II: Likelihood-Based Approaches to the Analysis of Data with Missing Values

Chapter 6. Theory of Inference Based on the Likelihood Function

6.1. Review of Likelihood-Based Estimation for Complete Data

6.1.1 Maximum Likelihood Estimation

Example 6.1. Univariate Normal Sample.

Example 6.2. Exponential Sample.

Example 6.3. Multinomial Sample.

Example 6.4. Multivariate Normal Sample

Example 6.5. Exponential Sample (Example 6.2 continued).

Example 6.6. Multinomial Sample (Example 6.3 continued).

Example 6.7. Univariate Normal Sample (Example 6.1 continued).

Example 6.8. Multivariate Normal Sample (Example 6.4 continued).

Example 6.9. A Conditional Distribution Derived from a Bivariate Normal Sample.

Example 6.10. Multiple Linear Regression, Unweighted and Weighted.

Example 6.11. Generalized Linear Models.

Example 6.12. Normal Repeated Measures Models

6.1.2. Inference Based on the Likelihood

6.1.3. Large-Sample Maximum Likelihood and Bayes Inference

Example 6.13. Exponential Sample (Example 6.2 continued).

Example 6.14. Univariate Normal Sample (Example 6.1 continued).

Example 6.15. Univariate Normal Sample (Example 6.1 continued).

6.1.4 Bayes Inference Based on the Full Posterior Distribution

Example 6.16. Bayes Inference for a Univariate Normal Sample with Conjugate Prior (Example 6.1 continued).

Example 6.17. Bayes Inference for Unweighted and Weighted Multiple Linear Regression (Example 6.10 continued).

Example 6.18. Bayes Inference for a Multinomial Sample (Example 6.3 continued).

Example 6.19. Bayes Inference for a Multivariate Normal Sample (Example 6.4 continued).

6.1.5. Simulating Posterior Distributions

Example 6.20. Bayes Inference for Multiple Linear Regression (Example 6.17 continued).

Example 6.21. Bayes Inference for Multinomial Sample (Example 6.18 continued).

Example 6.22. Bayes Inference for Multivariate Normal Sample (Example 6.19 continued).

6.2. Likelihood-Based Inference with Incomplete Data

Example 6.23. Incomplete Exponential Sample.

Example 6.24. Bivariate Normal Sample with One Variable Subject to Missingness.

Example 6.25. One-Way ANOVA with Missing Values, when Missingness Depends on the Unobserved Group Means.

Example 6.26. Regression Where Missingness Is a Function of the Covariates.

6.3. A Flawed Alternative to Maximum Likelihood: Maximizing over the Parameters and the Missing Data

6.3.1. The Method

6.3.2. Background

6.3.3. Examples

Example 6.27. Univariate Normal Sample with Missing Data.

Example 6.28. Missing-Plot Analysis of Variance.

Example 6.29. An Exponential Sample with Censored Values.

Example 6.30. Maximum Likelihood for Generalized Linear Mixed Models.

6.4. Likelihood Theory for Coarsened Data

Example 6.31. Censoring with stochastic censoring time.

Example 6.32. Censoring Mechanisms (Example 6.29 continued).

Chapter 7. Methods based on factoring the likelihood, ignoring the missing-data mechanism

7.1. Introduction

7.2. Bivariate Normal Data with One Variable Subject to Nonresponse: ML Estimation

7.2.1 ML Estimates

Example 7.1. Bivariate Normal Sample with One Variable Subject to Nonresponse (Example 6.24 continued).

Example 7.2. Bivariate Normal Numerical Illustration.

7.2.2 Large-Sample Covariance Matrix

7.3 Bivariate Normal Monotone Data: Small-Sample Inference

Example 7.3. Bayes Interval Estimation for the Bivariate Normal (Example 7.2 continued).

7.4 Monotone Data with More Than Two Variables

7.4.1. Multivariate Data with One Normal Variable Subject to Missingness

Example 7.4. K + 1 Variables, One Subject to Missingness.

7.4.2. The Factored Likelihood for a General Monotone Pattern

Example 7.5. Multivariate Normal Monotone Data.

7.4.3. ML Computation for Monotone Normal Data via the Sweep Operator

Example 7.6. Bivariate Normal Monotone Data (Example 7.1 continued).

Example 7.7. Multivariate Normal Monotone Data (Example 7.5 continued).

Example 7.8. A Numerical Example.

7.4.4. Bayes Computation for Monotone Normal Data via the Sweep Operator.

Example 7.9. Inferences for Data in Example 7.8.

7.5. Factored Likelihoods for Special Nonmonotone Patterns

Example 7.10. A Normal Three-Variable Example.

Example 7.11. An Application to Educational Data.

Example 7.12. Correcting for Measurement Error Using External Calibration Data (Example 1.15 continued).

Chapter 8. Maximum Likelihood for General Patterns of Missing Data: Introduction and Theory with Ignorable Nonresponse

8.1. Alternative Computational Strategies

8.2. Introduction to the EM Algorithm

8.3. The E Step and the M Step of EM

Example 8.1. Univariate Normal Data.

Example 8.2. A Multinomial Example.

Example 8.3. Bivariate Normal Sample with Missingness on Both Variables.

8.4. Theory of the EM Algorithm

8.4.1 Convergence Properties

8.4.2 EM for Exponential Families

Example 8.4. ML Estimation for a Sample from the Univariate t Distribution with Known Degrees of Freedom.

8.4.3 Rate of Convergence of EM

Example 8.5. A Multinomial Example (Example 8.2 continued).

8.5. Extensions of EM

8.5.1 The ECM Algorithm.

Example 8.6. A Multivariate Normal Regression Model with Incomplete Data.

Example 8.7. A Log-Linear Model for Contingency Tables with Incomplete Data.

Example 8.8. Univariate t with Unknown Degrees of Freedom (Example 8.4 continued).

8.5.2 The ECME and AECM Algorithms

Example 8.9. Univariate t with Unknown Degrees of Freedom (Example 8.8 continued).

8.5.3 The PX-EM Algorithm

Example 8.10. PX-EM Algorithm for the Univariate t with Known Degrees of Freedom (Example 8.4 continued).

8.6. Hybrid Maximization Methods.

Chapter 9. Large-Sample Inference Based on Maximum Likelihood Estimates

9.1. Standard Errors Based on the Information Matrix

9.2. Standard Errors via Methods that Do Not Require Computing and Inverting an Estimate of the Observed Information Matrix

9.2.1 The Supplemented EM Algorithm

Example 9.1. Standard Errors for Multinomial Example (Example 8.5 continued).

Example 9.2. Standard Errors for a Bivariate Normal Sample with Monotone Missing Data (Example 7.6 continued).

9.2.2 Bootstrapping the Observed Data

9.2.3 Other Large-Sample Methods

9.2.4 Posterior Standard Errors from Bayesian Methods

Chapter 10. Bayes and Multiple Imputation

10.1. Bayesian Iterative Simulation Methods

10.1.1. Data Augmentation

Example 10.1. Bivariate Normal Data with Ignorable Nonresponse and a General Pattern of Missingness (Example 8.3 continued).

Example 10.2. Bayesian Computations for a One-Parameter Multinomial Model (Example 9.1 continued).

10.1.2. The Gibbs Sampler

Example 10.3. A Multivariate Normal Regression Model with Incomplete Data (Example 8.6 continued).

Example 10.4. Univariate t Sample with Known Degrees of Freedom (Example 8.10 continued).

10.1.3. Assessing Convergence of Iterative Simulations

10.1.4. Some Other Simulation Methods.

10.2. Multiple Imputation

10.2.1. Large-Sample Bayesian Approximations of the Posterior Mean and Variance Based on a Small Number of Draws

Example 10.5. Bivariate Normal Data with Ignorable Nonresponse and a General Pattern of Missingness (Example 10.1 continued).

10.2.2. Approximations Using Test Statistics

10.2.3. Other Methods for Creating Multiple Imputations

10.2.4. Chained-Equation Multiple Imputation

10.2.5. Use of Different Models for Imputation and Analysis

Example 10.6. Inference Under the Approximate Bayesian Bootstrap (Example 5.8 continued).

Part III: Likelihood-Based Approaches to the Analysis of Missing Data: Applications to Some Common Models

Chapter 11. Multivariate Normal Examples, Ignoring the Missing-Data Mechanism

11.1. Introduction

11.2. Inference for a Mean Vector and Covariance Matrix with Missing Data Under Normality

11.2.1. The EM Algorithm for Incomplete Multivariate Normal Samples

11.2.2. Estimated Asymptotic Covariance Matrix of (θ̂ − θ)

11.2.3. Bayes Inference and Multiple Imputation for the Normal Model

Example 11.1. St. Louis Risk Research Data.

11.3. The Normal Model with a Restricted Covariance Matrix

Example 11.2. Patterned Covariance Matrices.

Example 11.3. Exploratory Factor Analysis.

Example 11.4. Variance Component Models.

11.4. Multiple Linear Regression

11.4.1. Linear Regression with Missingness Confined to the Dependent Variables

Example 11.5. Missing Outcomes in ANOVA.

11.4.2. More General Linear Regression Problems with Missing Data.

Example 11.6. MANOVA with Missing Data Illustrated Using the St. Louis Data (Example 11.1 continued).

11.5. A General Repeated-Measures Model With Missing Data

Example 11.7. Growth Curve Models with Missing Data.

11.6. Time Series Models

11.6.1. Introduction

11.6.2. Autoregressive Models for Univariate Time Series with Missing Values

Example 11.8. The AR(1) Model for Time Series with Missing Values.

11.6.3. Kalman Filter Models

Example 11.9. A Bivariate Time Series Measuring an Underlying Series with Error.

11.7. Measurement Error Formulated as Missing Data

Example 11.10. Measurement Error as Missing Data: A Normal Model for External Calibration.

Chapter 12. Models for Robust Estimation

12.1. Introduction

12.2. Reducing the Influence of Outliers by Replacing the Normal Distribution by a Longer-Tailed Distribution

12.2.1 Estimation for a Univariate Sample

Example 12.1. The Univariate Contaminated Normal Model.

12.2.2 Robust Estimation of the Mean and Covariance Matrix with Complete Data

12.2.3 Robust Estimation of the Mean and Covariance Matrix from Data with Missing Values

Example 12.2. Distribution of Weights from Multivariate t and Contaminated Multivariate Normal Models.

12.2.4 Adaptive Robust Multivariate Estimation

12.2.5 Bayes Inferences for the t Model

Example 12.3. Robust MANOVA with Missing Data Illustrated Using the St. Louis Data (Example 11.6 continued)

12.2.6. Further Extensions of the t Model

Example 12.4. Robust ML Estimation of Repeated Lung-Function Measures with Missing Values.

12.3. Penalized Spline of Propensity Prediction

Chapter 13. Models for Partially Classified Contingency Tables, Ignoring the Missing-Data Mechanism

13.1. Introduction

13.2. Factored Likelihoods for Monotone Multinomial Data

13.2.1. Introduction

13.2.2. ML and Bayes for Monotone Patterns

Example 13.1. Two-Way Contingency Table with One Supplemental One-Way Margin.

Example 13.2. Numerical Illustration of ML and Bayes for Monotone Bivariate Counted Data.

Example 13.3. Application to a Six-Way Table.

Example 13.4. Tables with Refined and Coarse Classifications.

13.2.3. Precision of Estimation

Example 13.5. Estimates of Precision for Bivariate Monotone Multinomial Data (Example 13.2 continued)

13.3. ML and Bayes Estimation for Multinomial Samples with General Patterns of Missingness.

Example 13.6. A 2×2 Table with Supplemental Data on Both Margins.

Example 13.7. Application of EM to Positron Emission Tomography.

13.4. Loglinear Models for Partially Classified Contingency Tables

13.4.1. The Complete-Data Case

Example 13.8. A Complete Three-Way Table.

13.4.2. Loglinear Models for Partially Classified Tables

Example 13.9. Bayesian IPF for the No Three-Way Association Model for a 2×2×2 Table.

Example 13.10. ML Estimates for an Incomplete Three-Way Table (Example 13.8 continued).

13.4.3. Goodness-of-Fit Tests for Partially-Classified Data

Example 13.11. Goodness-of-Fit Statistics for Incomplete Three-Way Table (Example 13.10 continued).

Chapter 14. Mixed Normal and Nonnormal Data with Missing Values, Ignoring the Missing-Data Mechanism

14.1. Introduction

14.2. The General Location Model

14.2.1. The Complete-Data Model and Parameter Estimates

14.2.2. ML Estimation with Missing Values

Example 14.1. ML Analysis of Categorical and Continuous Outcomes in St. Louis Risk Research Data (Example 11.1 continued).

14.2.3. Details of the E-Step Calculations

14.2.4. Bayes Computations for the Unrestricted General Location Model

Example 14.2. Bayes Analysis of St. Louis Data (Example 14.1 continued).

14.3. The General Location Model with Parameter Constraints

14.3.1. Introduction

14.3.2. Restricted Models for the Cell Means

14.3.3. Loglinear Models for the Cell Probabilities

14.3.4. Modifications to the Algorithms of Sections 14.2.2 and 14.2.3 to Accommodate Parameter Restrictions

Example 14.3. Restricted Models for St. Louis Data (Example 14.1 continued).

14.3.5. Simplifications when the Categorical Variables are More Observed than the Continuous Variables

14.4. Regression Problems Involving Mixtures of Continuous and Categorical Variables

14.4.1. Normal Linear Regression with Missing Continuous or Categorical Covariates

Example 14.4. Univariate Mixture Model for Biological Data (Aitkin and Wilson 1980).

14.4.2. Logistic Regression with Missing Continuous or Categorical Covariates

14.5. Further Extensions of the General Location Model

Chapter 15. Missing Not at Random Models

15.1. Introduction

Example 15.1. Pattern-Mixture and Selection Models for Univariate Nonresponse.

Example 15.2. Pattern-Set Mixture Models for Survey Nonresponse.

15.2. Models with Known MNAR Missingness Mechanisms: Grouped and Rounded Data

Example 15.3. Grouped Exponential Sample.

Example 15.4. Grouped Normal Data with Covariates.

Example 15.5. Censored Normal Data with Covariates (Tobit Model).

Example 15.6. Multiple Imputation of Coarsened Data from the Health and Retirement Survey (HRS)

15.3. Normal Models for MNAR Missing Data

15.3.1. Normal Selection and Pattern-Mixture Models for Univariate Missingness

Example 15.7. A Probit Selection Model for Univariate Missingness.

Example 15.8. A Normal Pattern-Mixture Model for Univariate Missingness.

15.3.2. Following up a Sample of Nonrespondents.

Example 15.9. Decreased Sensitivity of Inference with Follow-ups.

15.3.3. The Bayesian Approach.

Example 15.10. Sensitivity of Inference about the Sample Mean to MNAR Nonresponse, in the Presence of Covariates.

15.3.4. Imposing Restrictions on Model Parameters.

Example 15.11. Income Nonresponse in the U.S. Current Population Survey.

Example 15.12. Estimating Incidence of AIDS from Demographic Surveys with Randomly Assigned Interviewers.

Example 15.13. A Bivariate Normal Pattern-Mixture Model with Parameter Restrictions.

Example 15.14. Nonresponse Adjustment of Survey Estimates Based on Auxiliary Variables Subject to Measurement Error.

15.3.5. Sensitivity Analysis

Example 15.15. Sensitivity Analysis for Bivariate Normal Pattern-Mixture Model.

15.3.6. Subsample Ignorable Likelihood for Regression with Missing Data.

Example 15.16. Application to Regression of Blood Pressure.

15.4. Other Models for MNAR Missing Data.

15.4.1. MNAR Models for Repeated-Measures Data

15.4.2. MNAR Models for Categorical Data

Example 15.17. Two-Way Contingency Table with One Supplemental Margin.

Example 15.18. Predicting Results in the Slovenian Plebiscite with Polling Data.

15.4.3. Sensitivity Analyses for Chained-Equation Multiple Imputations.

Example 15.19. Sensitivity Analysis for Income Nonresponse in a Rotating Panel Survey.

15.4.4. Sensitivity Analyses in Pharmaceutical Applications

Example 15.20. A Sensitivity Analysis to Assess the Potential Impact of Differential Nonignorable Censoring in Survival Analysis.

Example 15.21. Enhanced Tipping Point Displays.