I'm reading Casella-Berger, Chapter 10, where they introduce asymptotic evaluations. I don't quite understand how the factor $\sqrt{n}$ works when asymptotic results are used to obtain the approximate variance of an estimator.
Following their example (p. 473), and given that, by the asymptotic efficiency of MLEs (where $\hat{\theta}$ is the MLE from an iid sample, $\tau(\cdot)$ is continuous and differentiable, and $I(\theta)$ is the information number for a single observation),
$\sqrt{n} \left[ \tau(\hat{\theta}) - \tau(\theta) \right] \rightarrow_d N \left(0, \frac{[\tau'(\theta)]^2}{I(\theta)}\right)$
and that, according to the Delta method, if $\sqrt{n} \left[ \hat{\theta} - \theta \right] \rightarrow_d N\left(0, \sigma^2\right)$, then:
$\sqrt{n} \left[ h(\hat{\theta}) - h(\theta) \right] \rightarrow_d N\left(0, \sigma^2 [h'(\theta)]^2\right)$
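To spell out where my extra $\frac{1}{n}$ comes from: dividing the Delta-method limit above by $\sqrt{n}$, I read off the approximation

$Var(h(\hat{\theta})) \approx \frac{\sigma^2 [h'(\theta)]^2}{n},$

so I would expect a $\frac{1}{n}$ to appear whenever the limiting variance is converted into the approximate variance of the estimator itself.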
Why is it that (according to Casella-Berger, p. 473) the approximate variance of the estimator is given by
$Var(h(\hat{\theta})|\theta) \approx \frac{[h'(\theta)]^2}{I_n(\theta)}$
instead of:
$Var(h(\hat{\theta})|\theta) \approx \frac{[h'(\theta)]^2}{I_n(\theta)} \cdot \frac{1}{n}$
I can't see why, in this example and in others (e.g. Example 5.5.25 on p. 243), the approximate variance of the estimator doesn't pick up the extra $\frac{1}{n}$ factor. Can anyone help me understand? Thanks in advance.
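For what it's worth, here is a quick simulation sketch of how I read the Delta-method scaling (a toy Bernoulli example of my own, not from the book, with $h(p) = p^2$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example (mine, not from Casella-Berger):
# X_i ~ Bernoulli(p), MLE p_hat = sample mean, h(p) = p**2, h'(p) = 2p.
# Delta method: sqrt(n) * (h(p_hat) - h(p)) -> N(0, sigma^2 * h'(p)**2),
# with sigma^2 = p*(1-p), so I expect Var(h(p_hat)) ~ sigma^2 * h'(p)**2 / n.
p, n, reps = 0.3, 500, 20000

samples = rng.binomial(1, p, size=(reps, n))  # reps independent samples of size n
p_hat = samples.mean(axis=1)                  # MLE in each replication
h_hat = p_hat ** 2                            # plug-in estimator h(p_hat)

empirical_var = h_hat.var()
delta_var = p * (1 - p) * (2 * p) ** 2 / n    # note the explicit 1/n here

print(empirical_var, delta_var)
```

The empirical variance of $h(\hat{p})$ matches the formula with the explicit $\frac{1}{n}$, which is exactly what makes the book's formula without it confusing to me.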