MCMCglmm multivariate with multiple residuals

I am trying to extend a univariate model to a multivariate one in MCMCglmm. Multiple (some correlated) traits were assessed for a number of individuals whose mothers are known (i.e. we know half-sib families). There is replication of the families (not the individuals); however, due to processing constraints, for some replicates everything was taken, and … Read more
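A minimal sketch of what the multivariate extension typically looks like in MCMCglmm, assuming a hypothetical data frame `dat` with two Gaussian traits and a `dam` column identifying the mothers (the names, prior, and two-trait setup are illustrative, not the asker's actual data):

```r
library(MCMCglmm)

# Weakly informative inverse-Wishart priors for the 2x2 (co)variance matrices
prior <- list(
  G = list(G1 = list(V = diag(2), nu = 1.002)),  # maternal (half-sib) effects
  R = list(V = diag(2), nu = 1.002)              # residual (co)variances
)

m <- MCMCglmm(
  cbind(trait1, trait2) ~ trait - 1,  # one intercept per trait
  random = ~ us(trait):dam,           # unstructured trait covariance among dams
  rcov   = ~ us(trait):units,         # unstructured residual trait covariance
  family = c("gaussian", "gaussian"),
  prior  = prior,
  data   = dat
)
summary(m)
```

The `us(trait)` structures are what allow both the dam-level and the residual (co)variances to differ across, and correlate between, traits.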

Conditional Gaussian Distribution

I am trying to study this paper on Linear Gaussian Models. I’m a little stuck on the following result for finding the conditional of a Gaussian. The paper states this is done ‘simply by linear matrix projection’; could anyone provide me with a starting point or some hints so I can try … Read more
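For reference, the standard result the paper is presumably invoking: if $x$ and $y$ are jointly Gaussian with

$$\begin{pmatrix} x \\ y \end{pmatrix} \sim N\!\left(\begin{pmatrix} \mu_x \\ \mu_y \end{pmatrix}, \begin{pmatrix} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{yx} & \Sigma_{yy} \end{pmatrix}\right),$$

then the conditional is again Gaussian,

$$y \mid x \sim N\!\left(\mu_y + \Sigma_{yx}\Sigma_{xx}^{-1}(x - \mu_x),\; \Sigma_{yy} - \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}\right),$$

and the ‘linear matrix projection’ is the affine map $x \mapsto \mu_y + \Sigma_{yx}\Sigma_{xx}^{-1}(x - \mu_x)$ giving the conditional mean.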

Comparing approximating mixture distributions

Setup: Say I have some Bayesian predictive model I assume to be true for each observation $x_1$. Each $x_2$ is a latent/unseen/hidden random variable. The parameters are $\theta$. It’s a mixture distribution: $$p(x_1) = \int p(x_1 \mid x_2, \theta)\, p(x_2 \mid \theta)\, p(\theta)\, dx_2\, d\theta.$$ I can sample, for $i = 1, \dots, N$, $\theta^i \sim p(\theta)$, then sample $X_2^i \sim p(x_2 \mid \theta^i)$, and then have the approximation $$\hat{p}_1(x_1) = \frac{1}{N}\sum_{i=1}^{N} p(x_1 \mid X_2^i, \theta^i).$$ Now, assume I cannot practically do this, … Read more
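A toy version of that nested sampling scheme, with hypothetical stand-in distributions ($\theta \sim N(0,1)$, $x_2 \mid \theta \sim N(\theta, 1)$, $x_1 \mid x_2 \sim N(x_2, 1)$) chosen so the estimate can be checked against the exact marginal:

```r
set.seed(1)
N     <- 1e4
theta <- rnorm(N, 0, 1)       # theta^i ~ p(theta)
x2    <- rnorm(N, theta, 1)   # X2^i ~ p(x2 | theta^i)

# p_hat_1(x1) = (1/N) sum_i p(x1 | X2^i, theta^i); here p(x1 | x2) = N(x2, 1)
p_hat <- function(x1) mean(dnorm(x1, mean = x2, sd = 1))

p_hat(0.5)               # Monte Carlo estimate of p(x1 = 0.5)
dnorm(0.5, 0, sqrt(3))   # exact marginal N(0, 1 + 1 + 1) for comparison
```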

Discrete Kernel for Sequential Monte Carlo (population Monte Carlo)

I’m attempting to understand, and use, the population Monte Carlo algorithm found here https://arxiv.org/abs/0805.2256 for approximate Bayesian computation. However, I think this is a general SMC question, or even an SIS question. It’s motivated by my wish to do parameter inference when my parameters take values in a subset of $\mathbb{R}^+ \times \mathbb{N}$. At iteration $n-1$ we have some … Read more
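For the integer coordinate specifically, one hedged option (my illustration, not the paper's kernel) is a $\pm 1$ random walk reflected at the boundary, whose pmf is easy to evaluate when computing the importance weights:

```r
step_probs <- c(0.25, 0.5, 0.25)  # P(d = -1), P(d = 0), P(d = +1)

# Propose k' = max(1, k + d): a +/-1 random walk reflected at k = 1
propose_k <- function(k) {
  d <- sample(c(-1L, 0L, 1L), 1, prob = step_probs)
  max(1L, k + d)
}

# q(k_new | k_old), needed in the denominator of the importance weights
dkernel <- function(k_new, k_old) {
  if (k_old == 1L && k_new == 1L) return(step_probs[1] + step_probs[2])
  d <- k_new - k_old
  if (abs(d) > 1) return(0)
  step_probs[d + 2]
}
```

The reflection at $k = 1$ keeps proposals inside the support; note that the stay-put probability at the boundary absorbs the mass of the rejected downward step, which is why `dkernel` handles that case separately.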

In the Metropolis algorithm, if draws $\theta^{t-1}$ and $\theta^t$ have the same marginals, why is the target the same as the stationary distribution?

In the Metropolis algorithm, suppose I start my algorithm at time $t-1$ with a draw $\theta^{t-1}$ from my target distribution $p(\theta|y)$. It can be shown that $\theta^t$ and $\theta^{t-1}$ are symmetric in that $$ P(\theta^t = \theta_a, \theta^{t-1} = \theta_b) = P(\theta^{t-1} = \theta_a, \theta^{t} = \theta_b) $$ Based on this, it is written in … Read more
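The step connecting the two claims is just marginalisation: summing the symmetry identity over $\theta_b$,

$$P(\theta^t = \theta_a) = \sum_{\theta_b} P(\theta^t = \theta_a, \theta^{t-1} = \theta_b) = \sum_{\theta_b} P(\theta^{t-1} = \theta_a, \theta^t = \theta_b) = P(\theta^{t-1} = \theta_a),$$

so $\theta^t$ has the same marginal as $\theta^{t-1}$. Since $\theta^{t-1} \sim p(\theta \mid y)$ by assumption, $\theta^t \sim p(\theta \mid y)$ as well, which is exactly the statement that the target is a stationary distribution of the chain.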

Team Expected Points Given Very Limited Information

Suppose we were interested in projecting Team A’s expected points in a given game. Let’s say that, on average over a very large sample size, they score 3 points a game against an average team. Then let’s say that Team B allows 2 points a game against an average team. Finally, we will say that the … Read more
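One common multiplicative heuristic for this kind of limited-information projection (an illustration, not necessarily the setup the truncated question completes): scale Team A's offence by Team B's defence relative to the league average. Assuming a hypothetical league average of 2.5 points per game,

$$\hat{\lambda}_{A \text{ vs } B} = \frac{\lambda_{A,\text{off}} \times \lambda_{B,\text{def}}}{\lambda_{\text{league}}} = \frac{3 \times 2}{2.5} = 2.4 \text{ points.}$$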

Can someone give me an intuition of congeniality in multiple imputation?

As the title says. I have read a lot about congeniality of Bayesian models (e.g. Meng, 1994) and I do know some definitions, but I don’t feel I can get a grip on what happens when models are congenial or uncongenial. My goal is to write an R script that shows the difference (numerically), but I am … Read more
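A minimal numeric sketch in that direction (hypothetical data, and improper imputation without parameter draws, for brevity): the analysis model regresses $y$ on $x$; a congenial imputation model conditions on $x$, while an uncongenial one imputes from the marginal of $y$ and attenuates the pooled slope:

```r
set.seed(1)
n <- 500
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
miss  <- runif(n) < 0.4          # 40% of y missing completely at random
y_obs <- ifelse(miss, NA, y)

M <- 50                          # number of imputations
slope <- function(y_imp) unname(coef(lm(y_imp ~ x))[2])

# Impute M times with impute(m) (m = number of missing values),
# refit the analysis model each time, and pool the slope estimates
pool <- function(impute) {
  mean(replicate(M, { yi <- y_obs; yi[miss] <- impute(sum(miss)); slope(yi) }))
}

# Congenial: impute from y | x, the same conditional the analysis model uses
fit <- lm(y_obs ~ x, subset = !miss)
cong <- pool(function(m)
  predict(fit, newdata = data.frame(x = x[miss])) + rnorm(m, 0, summary(fit)$sigma))

# Uncongenial: impute from the marginal of y, ignoring x entirely
uncong <- pool(function(m)
  rnorm(m, mean(y_obs, na.rm = TRUE), sd(y_obs, na.rm = TRUE)))

c(congenial = cong, uncongenial = uncong)  # ~2 vs. a clearly attenuated slope
```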

Derivation for full conditional distribution?

I’m a biochem grad student trying to understand part of my research group’s math. Could someone help me understand the derivation of the full conditional distribution for the $i$-th individual mean, which is given by $$\alpha_i \mid y, \mu_\alpha, \alpha_{k \ne i}, \sigma_y, \sigma_\alpha \sim N(m, v),$$ where $$v = \left(n_i/\sigma_y^2 + 1/\sigma_\alpha^2\right)^{-1}$$ $$m = v\left(n_i \bar{y}_i/\sigma_y^2 + \mu_\alpha/\sigma_\alpha^2\right) = \frac{n_i \bar{y}_i/\sigma_y^2 + \mu_\alpha/\sigma_\alpha^2}{n_i/\sigma_y^2 + 1/\sigma_\alpha^2}?$$ Understood by the Bayesian … Read more
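Assuming the usual normal-normal hierarchical setup behind those formulas, $y_{ij} \mid \alpha_i \sim N(\alpha_i, \sigma_y^2)$ for $j = 1, \dots, n_i$ and $\alpha_i \sim N(\mu_\alpha, \sigma_\alpha^2)$, the derivation is completing the square in $\alpha_i$:

$$p(\alpha_i \mid \cdot) \propto \exp\!\left(-\frac{1}{2\sigma_y^2}\sum_{j=1}^{n_i}(y_{ij} - \alpha_i)^2\right)\exp\!\left(-\frac{(\alpha_i - \mu_\alpha)^2}{2\sigma_\alpha^2}\right) \propto \exp\!\left(-\frac{(\alpha_i - m)^2}{2v}\right).$$

Collecting the coefficient of $\alpha_i^2$ gives the posterior precision $1/v = n_i/\sigma_y^2 + 1/\sigma_\alpha^2$, and the coefficient of $\alpha_i$ gives $m/v = n_i\bar{y}_i/\sigma_y^2 + \mu_\alpha/\sigma_\alpha^2$, matching the stated $m$ and $v$: a precision-weighted average of the data mean and the prior mean.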

Comparing multivariate posteriors

Do there exist standard methods for summarizing high-dimensional multivariate posteriors, in particular to determine how much overlap exists between two separately estimated posteriors? The R package ‘ks’ provides functionality for kernel smooths in up to 6 dimensions. In a different context (designed for summarizing multivariate niche space in ecology), the R package ‘hypervolume’ has some … Read more
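There is no single standard summary, but one sample-based option is the overlap coefficient $\int \min(p, q)$, estimated here with mixture importance sampling and `ks::kde`; the two posteriors below are hypothetical bivariate Gaussian samples standing in for MCMC output:

```r
library(ks)
set.seed(1)
A <- matrix(rnorm(2000, 0, 1),   ncol = 2)  # samples from posterior 1
B <- matrix(rnorm(2000, 0.5, 1), ncol = 2)  # samples from posterior 2

Z  <- rbind(A, B)                       # draws from the mixture (p + q)/2
pA <- kde(A, eval.points = Z)$estimate  # KDE of posterior 1 at the draws
pB <- kde(B, eval.points = Z)$estimate  # KDE of posterior 2 at the draws

# int min(p, q) = E_mix[min(p, q) / ((p + q)/2)]: 1 = identical, 0 = disjoint
mean(pmin(pA, pB) / ((pA + pB) / 2))
```

Like any KDE-based summary, this inherits the curse of dimensionality, which is why `ks` caps out around 6 dimensions.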

Metropolis-Hastings: What motivates the use of Metropolis-Hastings?

I am confused about Metropolis-Hastings. This is a simple question. In Metropolis-Hastings, it is assumed that we know the unnormalised posterior $\pi(x)$. We could obtain the density by normalising $\pi(x)$. No doubt, even if you have a closed-form expression for $\pi(x)$, this can be difficult because you still have to evaluate the integral $\int \pi(x)\,dx$. If … Read more
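The motivation in one runnable picture, a minimal random-walk Metropolis sketch on a toy target (my own example): the unknown normalising constant of $\pi(x)$ cancels in the acceptance ratio, so $\int \pi(x)\,dx$ is never evaluated:

```r
set.seed(1)
log_post <- function(x) -x^2 / 2   # unnormalised N(0,1): constant dropped

n <- 5000
x <- numeric(n)
for (t in 2:n) {
  prop <- x[t - 1] + rnorm(1)                        # symmetric proposal
  log_alpha <- log_post(prop) - log_post(x[t - 1])   # constant cancels here
  x[t] <- if (log(runif(1)) < log_alpha) prop else x[t - 1]
}
c(mean = mean(x), sd = sd(x))   # should be close to 0 and 1
```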