
Suppose we have a posterior sample of parameter $\theta$ obtained by fitting some Bayesian model to $n$ data points.

In black is the empirical posterior density and in red is a normal approximation to the posterior.

[Figure: empirical posterior density (black) with the normal approximation (red) overlaid.]

Is there a way to write the normal approximation to the posterior in terms of $n$?

(My goal is to have an idea of what the posterior distribution would have been if I had had a larger $n$.)
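
For concreteness (and assuming such an approximation even applies to my model), the kind of statement I am hoping for is something like

$$\theta \mid y_{1:n} \;\approx\; \mathcal{N}\big(\mu_n,\ \sigma_n^2\big), \qquad \sigma_n \propto \frac{1}{\sqrt{n}},$$

so that the posterior standard deviation for a hypothetical larger sample of size $m$ would be roughly $\sqrt{n/m}$ times the one I see now. (The symbols $\mu_n$, $\sigma_n$, and $m$ are just illustrative notation.)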

Comments:

  • "(My goal is to have an idea of what the posterior distribution would have been if I had had a larger $n$.)" If you could know this without actually observing more data, observing more data would be pointless, wouldn't it? (Mar 13, 2023 at 11:06)
  • The standard error of the sample mean is $\sigma/\sqrt{n}$, so I can infer what the width of the confidence interval would be if I changed $n$. I would like to do something similar by approximating my posterior with a normal distribution. (Mar 13, 2023 at 13:42)
  • This is not the same thing. The sampling distribution of the sample mean is unobservable, since it depends on the true parameters; the posterior is observable. What it will look like with more data, if you assume that true parameters exist, depends on how the prior relates to the truth. If the truth is in a low-probability region of the posterior for the given $n$, the posterior variance for a larger $n$ can get larger, although at some point it will get smaller. Maybe something can be shown, but I don't think it will let you say, for fixed $n$ and data, what will happen for a fixed larger $n$. (Mar 13, 2023 at 17:45)
  • Are you thinking about a Laplace approximation? (Mar 14, 2023 at 22:59)
  • Dear OP: the root-$n$ asymptotics are still relevant; see the Bernstein–von Mises theorems for details. @ChristianHennig is arguing that in pathological cases we might not get this behavior in finite samples, which is good to keep in mind (but I would argue it shouldn't force us into a program of "asymptotic skepticism" in which we ignore asymptotic results entirely). A sketch of the implied $\sqrt{n}$ rescaling follows below. (Jun 16, 2023 at 20:32)
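
Below is a minimal sketch of the $\sqrt{n}$ rescaling suggested in the comments. It assumes the normal (Laplace-type) approximation is adequate for the model at hand, and the names (`rescaled_normal_approx`, `n_current`, `n_new`) are purely illustrative, not from any particular library:

```python
import numpy as np

def rescaled_normal_approx(posterior_draws, n_current, n_new):
    """Normal approximation to a scalar posterior, with its spread rescaled
    from the observed sample size n_current to a hypothetical size n_new,
    under the assumption that the posterior s.d. scales like 1/sqrt(n)."""
    mean = np.mean(posterior_draws)
    sd = np.std(posterior_draws, ddof=1)
    # If sd is roughly proportional to 1/sqrt(n), projecting from n_current
    # to n_new multiplies it by sqrt(n_current / n_new).
    sd_new = sd * np.sqrt(n_current / n_new)
    return mean, sd_new

# Illustration with a stand-in posterior sample (in practice, use the MCMC
# draws from the fitted model).
rng = np.random.default_rng(0)
draws = rng.normal(loc=1.3, scale=0.2, size=5000)
mu, sd_at_1000 = rescaled_normal_approx(draws, n_current=200, n_new=1000)
print(mu, sd_at_1000)  # sd shrinks by sqrt(200/1000) ~ 0.45
```

Note that this projection inherits the caveat raised above: it takes the current posterior location as given and only shrinks the spread, whereas additional data could also move the posterior, especially if the truth currently sits in a low-probability region.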
