
Let $X_1, X_2, \ldots, X_n$ be i.i.d. $N(\theta, \sigma^2)$. Let $\delta_{n} = \bar{X}^2 - \frac{1}{n(n-1)}S^2$, where $S^2 = \sum_{i=1}^n (X_i - \bar{X})^2$. Find the limiting distribution of

(i) $\sqrt{n} (\delta_{n} - \theta^2)$ for $\theta \neq 0$

(ii) $n(\delta_{n} - \theta^2)$ for $\theta = 0$

(i) I know that $\sqrt{n} (\bar{X}^2 - \theta^2) \overset{L}{\to} N(0, 4\sigma^2 \theta^2)$ using the delta method. For $\sqrt{n} (\bar{X}^2 - \frac{1}{n(n-1)}S^2 -\theta^2)$, I can write it as $\sqrt{n} (\bar{X}^2 -\theta^2) - \sqrt{n}\left(\frac{1}{n(n-1)}S^2\right)$. What is the limiting distribution of $\sqrt{n}\left(\frac{1}{n(n-1)}S^2\right)$?

(ii) I know that $n \bar{X}^2 \overset{L}{\to} \sigma^2 \chi_{1}^2$, but I don't know how to find the limit of $n \left(\bar{X}^2 - \frac{1}{n(n-1)}S^2\right)$.

  • Are the $X_i$'s i.i.d. normal? Commented Oct 6, 2021 at 21:25
  • @StubbornAtom yes, the $X_i$'s are i.i.d. $N(\theta, \sigma^2)$ Commented Oct 6, 2021 at 21:40
  • For (i), one option is to use the bivariate CLT combined with the multivariate delta method. But there might be an easier way that uses the independence of $\overline X$ and $S^2$. Commented Oct 6, 2021 at 22:08
  • @StubbornAtom I still don't see how I can use the bivariate CLT, since $\theta^2$ is a scalar. I think there needs to be an extra term $\sigma^2$, since $\frac{1}{n(n-1)}S^2$ is the sample variance divided by $n$, and its expected value is $\sigma^2/n$? Commented Oct 6, 2021 at 22:53

1 Answer


The following relies only on the fact that $X_1,\ldots,X_n$ are i.i.d. with finite mean $\theta$, finite variance $\sigma^2\,(>0)$, and $E|\delta_n|<\infty$ for every $n\ge 1$.

Your $\delta_n$ is a U-statistic, since it can be written in the form

$$\delta_n=\frac1{\binom{n}{2}}\sum_{1\le \alpha<\beta\le n}X_\alpha X_\beta$$
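This representation is an exact algebraic identity, not an asymptotic statement. A quick numerical sketch (using NumPy, with arbitrary sample values) confirms that the two forms of $\delta_n$ agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
x = rng.normal(1.0, 2.0, size=n)  # arbitrary sample; the distribution is irrelevant here

# delta_n as defined in the question: xbar^2 - S^2 / (n(n-1))
xbar = x.mean()
S2 = np.sum((x - xbar) ** 2)
delta_direct = xbar**2 - S2 / (n * (n - 1))

# delta_n as a U-statistic: the average of X_a * X_b over all pairs a < b
delta_ustat = np.mean([x[a] * x[b] for a in range(n) for b in range(a + 1, n)])

print(delta_direct, delta_ustat)  # agree up to floating-point error
```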

Asymptotic distributions of U-statistics in general can be derived using projections.

If $\theta\ne 0$, the general result gives

$$\sqrt n(\delta_n-\theta^2)\stackrel{d}\longrightarrow N(0,4\theta^2\sigma^2) \tag{1}$$

And if $\theta=0$,

$$n(\delta_n-\theta^2)\stackrel{d}\longrightarrow \sigma^2(\chi^2_1-1) \tag{2}$$


To prove $(1)$ using the projection theorem, let $T_n=\sqrt n(\delta_n-\theta^2)$, so that $E(T_n)=0$.

Define $K_P(X_i)=E(T_n\mid X_i)$, which simplifies to

$$K_P(X_i)=\frac{\sqrt n}{\binom{n}{2}}(n-1)(\phi_1(X_i)-\theta^2)=\frac2{\sqrt n}(\phi_1(X_i)-\theta^2)\,,$$

with $\phi_1(X_1)=E[X_1X_2\mid X_1]=\theta X_1$.

Note that $E(\phi_1)=\theta^2$ and $\operatorname{Var}(\phi_1)=\theta^2\sigma^2=\sigma_1^2$ (say).

So by classical CLT, $$V_P=\sum_{i=1}^n K_P(X_i) \stackrel{d}\longrightarrow N(0,4\sigma_1^2)$$

It can be shown that $\operatorname{Var}(\delta_n)=\frac{4\sigma_1^2}{n}+O\left(\frac1{n^2}\right)$. And clearly, $\operatorname{Var}(V_P)=4\sigma_1^2$.

Therefore, as $n\to \infty$, $$\operatorname{Var}(T_n)-\operatorname{Var}(V_P)=O\left(\frac1n\right)\longrightarrow 0$$

Projection theorem (which is another application of Slutsky's theorem) then says that limiting distributions of $T_n$ and $V_P$ are identical.


Edit:

I might have complicated things before. Ignore the proof above as it is missing some details.

For $\theta \ne 0$, simply write

$$\sqrt n(\delta_n-\theta^2)=\sqrt n\left(\overline X^2 -\theta^2\right)-\frac1{\sqrt n}\left(\frac{S^2}{n-1}\right)$$

Using the delta method, OP can show that

$$\sqrt n\left(\overline X^2 -\theta^2\right)\stackrel{d}\longrightarrow N(0,4\theta^2\sigma^2)$$

And we know that

$$\frac{S^2}{n-1} \stackrel{P}\longrightarrow \sigma^2\,,$$

which implies $$\frac1{\sqrt n}\left(\frac{S^2}{n-1}\right)\stackrel{P}\longrightarrow 0$$

Hence $(1)$ follows directly from Slutsky's theorem.
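As a Monte Carlo sanity check of $(1)$ (a sketch only; the parameter values $\theta=2$, $\sigma=1.5$ and the sample sizes are arbitrary choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma = 2.0, 1.5          # arbitrary theta != 0
n, reps = 500, 10_000            # sample size and number of Monte Carlo replications

# reps independent samples of size n from N(theta, sigma^2)
x = rng.normal(theta, sigma, size=(reps, n))
xbar = x.mean(axis=1)
S2 = ((x - xbar[:, None]) ** 2).sum(axis=1)
delta = xbar**2 - S2 / (n * (n - 1))
T = np.sqrt(n) * (delta - theta**2)

# The limit N(0, 4 theta^2 sigma^2) has mean 0 and variance 36 for these parameters
print(T.mean(), T.var())
```

The empirical mean should come out near $0$ and the empirical variance near $4\theta^2\sigma^2 = 36$.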

Now when $\theta=0$,

$$n(\delta_n-\theta^2)=n\overline X^2 - \frac{S^2}{n-1}$$

We have from CLT and continuous mapping theorem that

$$n\overline X^2\stackrel{d}\longrightarrow \sigma^2\chi^2_1$$

And as before, $$\frac{S^2}{n-1} \stackrel{P}\longrightarrow \sigma^2$$

Therefore, Slutsky's theorem again proves $(2)$.
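The $\theta=0$ limit $\sigma^2(\chi^2_1-1)$ has mean $0$ and variance $2\sigma^4$, which a similar simulation sketch can check (again with an arbitrary $\sigma=1.5$):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.5                       # arbitrary; theta = 0 in this case
n, reps = 500, 10_000

x = rng.normal(0.0, sigma, size=(reps, n))
xbar = x.mean(axis=1)
S2 = ((x - xbar[:, None]) ** 2).sum(axis=1)
W = n * (xbar**2 - S2 / (n * (n - 1)))  # n * (delta_n - theta^2) with theta = 0

# Limit sigma^2 * (chi^2_1 - 1): mean 0, variance 2 * sigma^4 = 10.125 here
print(W.mean(), W.var())
```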

  • Trying to read your answer and then trying to replicate it. But where is $\phi_1$ defined? Further, you use the classical CLT with i.i.d. assumptions, right? Are they trivially satisfied here? Commented Nov 27, 2021 at 17:33
  • $\phi_1$ is defined as $\phi_1(x)=\theta x$. The i.i.d. assumption is in the question. And yes, the limiting distribution of $V_P$ in this answer follows from the usual CLT. Commented Nov 27, 2021 at 18:14
  • Thanks, I will try to put everything together :) Commented Nov 27, 2021 at 19:13
