Generalization of degrees of freedom for t distribution for coefficients after multiple imputation

Donald Rubin has shown that regression coefficient estimates have fatter tails after multiple imputation, and he provided a formula for the degrees of freedom to use in a t-distribution approximation to the coefficient estimates that result from Rubin’s rule for combining multiple imputations. I would like a generalization of this approach that handles tests with more than a single degree of freedom, e.g., an adjustment to the multiple-degree-of-freedom χ2 distribution used for “chunk tests” of combinations of parameters. Has anyone seen such a procedure? The goal is to improve (raise) the coverage probability of confidence intervals after multiple imputation and to better preserve the type I error.
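For reference, the single-parameter version I am referring to pools the within-imputation variance $\bar U$ and the between-imputation variance $B$ across the $m$ imputations, and refers the estimate to a t distribution whose degrees of freedom, in the usual form of Rubin's formula, are

$$
T = \bar U + \left(1 + \frac{1}{m}\right) B,
\qquad
\nu = (m-1)\left[1 + \frac{\bar U}{\left(1 + \frac{1}{m}\right) B}\right]^{2}.
$$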

Ultimately it would be nice to have a way to convert covariance matrices arising from Rubin’s rule so that normal and χ2 distributions can still be used and confidence coverage is more accurate. In the single-parameter case one could just inflate the standard error of β̂ by a factor equal to the t critical value divided by the normal critical value.
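To make the single-parameter fix concrete, here is a minimal sketch (mine, not part of the original question) of that standard-error inflation, assuming scipy is available and that `se`, `cov`, and `df` come from Rubin’s rule and its degrees-of-freedom formula:

```python
from scipy.stats import norm, t

def inflate_se(se, df, alpha=0.05):
    """Scale a Rubin's-rule standard error by t_crit / z_crit so that a
    normal-quantile confidence interval has roughly the width of the
    t-quantile interval."""
    factor = t.ppf(1 - alpha / 2, df) / norm.ppf(1 - alpha / 2)
    return se * factor

def inflate_cov(cov, df, alpha=0.05):
    """Heuristic multiparameter analogue: scale the pooled covariance
    matrix by factor**2.  Whether this gives adequate coverage for
    chunk tests is exactly the open question in the post."""
    factor = t.ppf(1 - alpha / 2, df) / norm.ppf(1 - alpha / 2)
    return cov * factor ** 2
```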

Follow-up

As I’ve gotten more into Bayesian modeling I have come to know about posterior stacking for incorporating multiple imputation into a Bayesian analysis that uses MCMC posterior sampling. This makes the tails of the posterior distribution appropriately heavier, automatically, and avoids the complexities of the approximate Bayesian approach to multiple imputation that requires Rubin’s rule. An example is in the RMS course notes.
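A minimal sketch of the posterior stacking idea (again mine, not from the original post): fit the same Bayesian model to each completed dataset and pool the posterior draws. The `sample_posterior` function below is a hypothetical placeholder standing in for whatever MCMC backend is used.

```python
import numpy as np

def stacked_posterior(imputed_datasets, sample_posterior, n_draws=1000):
    """Posterior stacking for multiple imputation: draw from the posterior
    given each completed dataset, then concatenate the draws.  The stacked
    draws form a mixture over imputations, which is what widens the tails.

    `sample_posterior(data, n_draws)` is a hypothetical placeholder that
    returns an (n_draws, n_parameters) array of posterior samples."""
    draws = [sample_posterior(d, n_draws) for d in imputed_datasets]
    return np.concatenate(draws, axis=0)
```

Inference (posterior means, credible intervals, probabilities of effects) then proceeds directly from the stacked draws, with no Rubin’s-rule combination step.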

Answer

Attribution
Source: Link, Question Author: Frank Harrell, Answer Author: Community
