Questions tagged [parameter-estimation]
Questions about parameter estimation. Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured/empirical data that has a random component. (Def: http://en.m.wikipedia.org/wiki/Estimation_theory)
1,975 questions
1
vote
2
answers
97
views
The method of moments and maximum likelihood estimator for exponential distribution $\lambda^2 + \lambda$
Let $X_1,...,X_n$ be a random sample with exponential distribution Exp$(\lambda^2+\lambda)$
What is the method of moments estimator of $\lambda$?
So the PDF is $f(x;\lambda) = (\lambda^2+\lambda)\exp(-(\...
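For reference, the standard route for this question: under the stated rate parameterization, $E[X] = 1/(\lambda^2+\lambda)$, so the method-of-moments estimator solves $\lambda^2+\lambda = 1/\bar X$ and takes the positive root. A minimal sketch (the rate parameterization is assumed from the excerpt):

```python
import math
import random

def mom_lambda(xs):
    """Method-of-moments estimate of lambda for X ~ Exp(rate = lambda^2 + lambda).

    E[X] = 1/(lambda^2 + lambda), so set lambda^2 + lambda = 1/xbar and
    take the positive root of the quadratic lambda^2 + lambda - 1/xbar = 0.
    """
    xbar = sum(xs) / len(xs)
    return (-1.0 + math.sqrt(1.0 + 4.0 / xbar)) / 2.0

random.seed(0)
true_lam = 1.0  # so the rate is lambda^2 + lambda = 2
sample = [random.expovariate(true_lam**2 + true_lam) for _ in range(100_000)]
print(mom_lambda(sample))  # should be close to 1.0
```

With a degenerate sample of constant value 0.5 the estimator returns exactly 1, since $1/\bar X = 2 = \lambda^2+\lambda$ at $\lambda = 1$.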
0
votes
1
answer
67
views
Expectation under true distribution with mixture samples
Let $X_1, X_2, \dots , X_n$ be an i.i.d. sample from the mixture distribution
\begin{equation} \label{eqn:mixture distribution}
p_{\epsilon,\theta} = (1 - \epsilon)p_{\theta} + \epsilon \delta,
\end{...
1
vote
1
answer
93
views
Uniqueness of the function $\tau(\theta)$ in the Cramér-Rao inequality
Theorem (Cramér-Rao inequality).
Consider a sample from a parametric model satisfying regularity conditions.
Let $\theta^*$ be an unbiased estimator of $\tau(\theta)$. Then for any $\theta \in \Theta$,...
0
votes
2
answers
63
views
Transfer of strong consistency
Consider a sequence of i.i.d. random variables $X_1,\, \dots,\, X_n$ whose mean is denoted as $x_0$ and variance $\sigma^2 < \infty$.
From the Strong Law of Large Numbers, the empirical mean $\bar ...
0
votes
0
answers
35
views
Identifiability & estimation: $d$ and $\underline c$ from $\lvert S \rvert = 6$
The scalar target $z$ is modeled as $$f(x,y) = \underline c^T \underline b, \qquad \underline b=\begin{bmatrix} 1 \\ x \\ \ln(y+d) \\ x \cdot \ln(y+d) \end{bmatrix},$$
with unknown parameter $d$ and ...
2
votes
0
answers
51
views
Reference request for theory of estimation
I am trying to learn the theory of estimation, primarily from a mathematical (measure-theoretic/probabilistic) perspective. More specifically, I'm looking for resources that cover one-parameter and ...
2
votes
1
answer
84
views
Show that powers of an MVUE is an MVUE
Question is in the title. Given that $\delta:=\delta(\mathbf X_n)$ is MVUE (minimum variance unbiased estimator) of a scalar parameter $\theta$, we are asked to show that for all natural numbers $k$, $...
2
votes
2
answers
220
views
Robust Method to Fit an Ellipse in $\mathbb{R}^{2}$
Summary
I am looking for a convex and robust formulation to fit an ellipse to a set of points.
Specifically, one that can handle an extreme condition number of the scatter matrix.
Full Question
The ...
0
votes
0
answers
41
views
Does it make sense to consider situations where the MLE exists only for infinitely many $n$?
Does it make sense to study statistical models in which the maximum likelihood estimator (MLE) $ \hat{\theta}_n $ exists only for infinitely many $ n $, but not necessarily for all $ n $?
Suppose, for ...
0
votes
1
answer
127
views
Intuition behind matrix form of Fisher information
Throughout mathematical statistics, the Fisher information comes up quite frequently as a measure of information. I understand that in the case where you have a single parameter, the Fisher ...
0
votes
0
answers
43
views
Proving completeness for a statistic [duplicate]
Let $X_1, X_2,...,X_n$ be i.i.d. continuous uniform
$\mathcal{U}(0,\theta)$ and let $T=\max_i X_i$. Show that the family of
distributions of $T$ is complete.
Step I: Find the CDF (using independence ...
0
votes
0
answers
44
views
Is a linear model an instance of a parametric model?
In All of Statistics, chapter 6.2, it states "a parametric model takes the form $F=\{f(x; θ) : θ ∈ Θ\}$ ...". Then, in chapter 13.1, it states, "The Simple Linear Regression Model
$Y_i =...
1
vote
1
answer
96
views
Existence of unbiased efficient estimators
Let $X_1,X_2,...,X_n (n\geq 2)$ be a random sample from a distribution
with probability density function:
$$ f(x;\theta) = \begin{cases}
\theta x^{\theta-1}, \hspace{1 cm} 0\leq x \leq 1 \\
...
1
vote
1
answer
91
views
Unbiased Estimator
Consider $X_1, X_2, \dots, X_n$ as a random sample from a distribution with the probability density function (pdf):
$$
f(x) = \begin{cases}
e^{-(x - \theta)}, & \text{for } x > \theta, \; -\infty<...
0
votes
0
answers
27
views
Find a CI for a sample of Poisson distributions
Let $X_1,X_2,\dots,X_n$ be i.i.d. Poisson random variables with unknown mean $\lambda>0$.
Find the $100(1-\alpha)\%$ confidence interval for $\lambda$, where $\alpha\in(0,1)$.
I think we need to find an ...
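One standard approach, sketched below, uses the large-sample normal approximation $\bar X \approx N(\lambda, \lambda/n)$, giving the interval $\bar X \pm z_{1-\alpha/2}\sqrt{\bar X/n}$ (this approximation is an assumption; an exact interval would instead invert the Poisson/chi-squared relation):

```python
import math
from statistics import NormalDist

def poisson_ci(xs, alpha=0.05):
    """Approximate 100(1-alpha)% CI for the Poisson mean via the CLT:
    xbar +/- z_{1-alpha/2} * sqrt(xbar / n)."""
    n = len(xs)
    xbar = sum(xs) / n
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * math.sqrt(xbar / n)
    return xbar - half, xbar + half

lo, hi = poisson_ci([3, 5, 4, 6, 2, 4, 5, 3, 4, 4])
print(lo, hi)  # interval centered at the sample mean 4.0
```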
0
votes
1
answer
95
views
How can I derive the update equations for a discrete-time extended Kalman filter with non-additive noise?
I would like to know how the update equations for a discrete-time extended Kalman filter (EKF), in the case of non-additive noise, are derived. This section on Wikipedia briefly mentions the update ...
4
votes
1
answer
84
views
Is there a name for the estimator that minimizes $E (\frac{(X-a)^2}{X})$, like Huber's function?
We can consider the mean of a random variable or a sample (plug-in estimation) as the minimizer of $E(X-a)^2$. There are also ways to describe other statistics in this style. For the value of $(E ...
1
vote
1
answer
45
views
Determining the parameter space for a density function (in estimation)
Let $X_1,X_2,...,X_n$ be a random sample from a distribution with
density function $$ f(x;\theta)= \begin{cases}
\sqrt{\frac{2}{\pi}}e^{-\frac{1}{2}(x-\theta)^2} & \text{if } x\geq\theta \\...
3
votes
3
answers
121
views
The method of moments estimator
Let $X_1, X_2, \dots, X_n$ be a random sample from the probability
density function: $$ f(x; \theta) = \theta x^{-2}, \quad 0 < \theta <
x < \infty. $$Find the method of moments estimator ...
0
votes
1
answer
55
views
Matricial approach to parameter estimation of a linear function
If $f(x) = ax + b$ a linear function, and $(x_0, y_0), \ldots, (x_n, y_n)$ are observed values of $f$, then we can estimate the values of $a, b$ which minimize the sum of squares with the following ...
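The matrix approach the excerpt refers to is the normal equations $(X^{\mathsf T}X)\hat\beta = X^{\mathsf T}y$ with rows $(x_i, 1)$ in the design matrix. For the two-parameter case the resulting $2\times 2$ system can be solved directly with Cramer's rule, as in this sketch:

```python
def fit_line(xs, ys):
    """Least-squares fit of f(x) = a*x + b via the normal equations
    (X^T X) beta = X^T y, solved with Cramer's rule for the 2x2 system."""
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Normal equations: [sxx sx; sx n] [a; b] = [sxy; sy]
    det = sxx * n - sx * sx
    a = (sxy * n - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data lie exactly on y = 2x + 1
print(a, b)  # 2.0 1.0
```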
3
votes
1
answer
95
views
Eliminating the doubtful asymptotic normality proof of MLE
Let the random variables $X_1, \ldots, X_n \overset{i.i.d.}{\sim}f(x\mid\theta)$ for a p.d.f. $f$.
Under general regularity conditions (e.g., Lehmann (1999)), let the MLE of $\theta$ be $\hat{\theta}$....
6
votes
1
answer
103
views
Estimation of the $\sigma$ parameter in the Laplace distribution $(p(x) = \frac{1}{2\sigma}e^{-\frac{|x|}{\sigma}})$
In my problem, there is a collection of $n$ random variable $X_1, \dots,X_n$, being all independent and identically distributed according to a Laplace distribution with density $p(x) = \frac{1}{2\...
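For this model the log-likelihood is $-n\log(2\sigma) - \sum_i |x_i|/\sigma$, whose stationary point gives the MLE $\hat\sigma = \frac{1}{n}\sum_i |x_i|$. A minimal sketch, assuming the i.i.d. zero-location Laplace model from the excerpt:

```python
def laplace_sigma_mle(xs):
    """MLE of sigma for p(x) = exp(-|x|/sigma) / (2*sigma).

    Setting d/dsigma [-n*log(2*sigma) - sum|x_i|/sigma] = 0
    yields sigma_hat = mean of the absolute values.
    """
    return sum(abs(x) for x in xs) / len(xs)

print(laplace_sigma_mle([-2.0, -1.0, 1.0, 2.0]))  # 1.5
```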
2
votes
1
answer
150
views
Showing that $\hat{\alpha}$ is the MLE of the Pareto distribution
I am currently deriving the Pareto distribution MLE, where the Pareto distribution is given by:
$$f_X(x)= \begin{cases}\frac{\alpha x_{\mathrm{m}}^\alpha}{x^{\alpha+1}} & x \geq x_{\mathrm{m}} \\ ...
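Assuming $x_m$ is known (the usual setup for this derivation), the log-likelihood $n\log\alpha + n\alpha\log x_m - (\alpha+1)\sum_i \log x_i$ has the stationary point $\hat\alpha = n \big/ \sum_i \log(x_i/x_m)$. A minimal numerical check:

```python
import math

def pareto_alpha_mle(xs, xm):
    """MLE of the Pareto shape parameter with known scale x_m:
    alpha_hat = n / sum(log(x_i / x_m)), from setting the derivative of
    n*log(a) + n*a*log(xm) - (a+1)*sum(log(x_i)) to zero."""
    return len(xs) / sum(math.log(x / xm) for x in xs)

# All log-ratios equal 1, so alpha_hat = n / n = 1
print(pareto_alpha_mle([math.e, math.e, math.e], 1.0))
```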
4
votes
1
answer
57
views
U-statistics and independence testing
I wish you a happy new year.
I am reading this paper.
I am struggling to understand a small part of Section 5.1, "Independence testing for multinomials", on page 17. Specifically, I am having difficulty ...
1
vote
0
answers
39
views
Calculating the Uncertainty in the Parameters of a 2 Parameter Fit
I have three sets of time-series data, which for now I will call $y(t)$, $f_1(t)$ and $f_2(t)$. I am using a model of the form
$$ \hat{y}(t) = \beta f_1(t) - \lambda f_2(t)$$
(which is physically ...
2
votes
0
answers
97
views
Check my derivation please - MLE for a gaussian with $\theta = \mu = \sigma^2 >0$
I was tasked with the following:
Let $X_1, X_2, \dots, X_n \sim N(\mu, \sigma^2)$, random i.i.d. samples where $\mu = \sigma^2 = \theta > 0$ and $\theta$ is an unknown parameter.
Let's find the ...
0
votes
0
answers
58
views
Need resources for estimating parameters with constraints
I have a problem with the parameters of the dynamic system model here, which consists of 3 equations and 11 parameters. In the beginning, I thought that once I found the stability conditions for my ...
2
votes
0
answers
107
views
Fitting a probability distribution to another
I’m looking for standard methods (e.g. methods accepted and used by the community) for fitting a probability distribution (either a probability density function (PDF) or cumulative distribution ...
1
vote
0
answers
60
views
Uncertainty Propagation in Linear Systems
Suppose we have a linear dynamical system, written in discrete form as $x_{k+1} = Ax_k$, with $x_\cdot \in \mathbb{R}^n$ and $A$ a constant matrix. Now suppose that we are not certain about $A$. ...
1
vote
1
answer
66
views
Should I scale my coordinates before least squares?
I assume a linear relationship between a point observed in two coordinate systems:
$$
\begin{bmatrix}
x' \\
y' \\
z'
\end{bmatrix}
=
M_{3\times 4}
\begin{bmatrix}
x \\
y \\
z \\
1
\end{bmatrix}
$$
...
0
votes
1
answer
56
views
Given an unbiased estimator, prove that all unbiased estimators are that estimator minus any estimator with expectation zero
Lehmann (Theory of Point Estimation, second edition, Lemma 1.4) doesn't prove this statement because he says it is obvious. I would like a proof.
If $\delta_0$ is any unbiased estimator of $g(\theta)$,...
0
votes
0
answers
71
views
Sample complexity of covariance matrix estimation of a Gaussian random variable (with explicit constants)
I'm looking for an explicit bound for the number of samples required to estimate the covariance matrix of a Gaussian distribution. In https://arxiv.org/pdf/1011.3027v7 (end of page 31), the following ...
0
votes
1
answer
81
views
How do I prove this statement in this article?
So, a friend of mine showed me this article, "A Geometric Perspective on Quantum Parameter Estimation" (link at the end of the question), and there is a statement on page 5 (equation 23) that ...
0
votes
0
answers
67
views
Estimator of θ using the method of moments, using the transformation g(θ)=ln x
Question:
We have a continuous random variable $X$ with the probability density function (pdf):
$$f_\theta(x) = \begin{cases}\theta/x^{\theta+1}, & x > 1,\ \theta > 0\\ 0,
&\textrm{...
1
vote
1
answer
31
views
Finding an observer fitted for a ramp
According to control theory, an optimal observer for a linear process with random noise is the discrete Kalman filter:
$$\hat{x}_{k+1|k}=F_k \hat{x}_k + G_k u_k$$
$$P_{k+1|k}=F^t_k P_{k} F_k + Q_k$$
$$...
1
vote
1
answer
67
views
How to find the Bayesian estimator for θ using the posterior mean with the prior for θ being a double exponential distribution?
This is the setup problem:
Suppose $X_1,X_2,...,X_n$ are i.i.d. samples from the $N(θ,1)$ distribution. Assume the prior for $θ$ is a double exponential distribution, i.e. $f(θ)=\frac{1}{2} e^{−|θ|}$....
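Here the posterior is proportional to $\exp\!\big(-n(\theta-\bar x)^2/2\big)\cdot\tfrac12 e^{-|\theta|}$, and the posterior mean has no simple closed form, so numerical integration is a natural way to check any derivation. A sketch using trapezoidal quadrature (the grid limits and resolution are assumptions):

```python
import math

def posterior_mean(xs, lo=-10.0, hi=10.0, m=20001):
    """Posterior mean of theta for X_i ~ N(theta, 1) with double-exponential
    prior f(theta) = exp(-|theta|)/2, computed by trapezoidal quadrature.

    The likelihood, up to a constant in theta, is exp(-n*(theta - xbar)^2 / 2).
    """
    n = len(xs)
    xbar = sum(xs) / n
    h = (hi - lo) / (m - 1)
    num = den = 0.0
    for i in range(m):
        t = lo + i * h
        w = math.exp(-n * (t - xbar) ** 2 / 2.0 - abs(t))
        c = 0.5 if i in (0, m - 1) else 1.0  # trapezoid endpoint weights
        num += c * t * w
        den += c * w
    return num / den

# Symmetric data (xbar = 0) gives a symmetric posterior, so the mean is ~0
print(posterior_mean([0.5, -0.5, 1.0, -1.0]))
```

For nonzero $\bar x$ the prior pulls the posterior mean toward 0, so the result should land strictly between 0 and $\bar x$.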
1
vote
4
answers
200
views
Least Squares Curve Fit with 2 Exponential Terms
We have a measured data set with a relationship with the following formula:
$$Y = A\cdot \exp(B\cdot T) + C\cdot \exp(D\cdot T)$$
How can we approximate the values of A, B, C and D? The method used ...
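One possibility, sketched below, is variable projection: the model is linear in $A$ and $C$ once $B$ and $D$ are fixed, so one can grid-search over $(B, D)$ and solve a $2\times 2$ linear least-squares problem at each pair. The grid here is a hypothetical choice; in practice a nonlinear solver (e.g. scipy.optimize.curve_fit) would refine the best grid point.

```python
import math

def fit_two_exponentials(ts, ys, rate_grid):
    """Fit Y = A*exp(B*t) + C*exp(D*t) by variable projection:
    for each candidate pair (B, D) solve the 2x2 normal equations for
    (A, C) and keep the pair with the smallest squared residual."""
    best = None
    for i, B in enumerate(rate_grid):
        for D in rate_grid[i + 1:]:  # enforce B != D; order fixed by the grid
            g1 = [math.exp(B * t) for t in ts]
            g2 = [math.exp(D * t) for t in ts]
            a11 = sum(u * u for u in g1)
            a12 = sum(u * v for u, v in zip(g1, g2))
            a22 = sum(v * v for v in g2)
            b1 = sum(u * y for u, y in zip(g1, ys))
            b2 = sum(v * y for v, y in zip(g2, ys))
            det = a11 * a22 - a12 * a12
            if abs(det) < 1e-12:
                continue
            A = (b1 * a22 - a12 * b2) / det
            C = (a11 * b2 - a12 * b1) / det
            r = sum((A * u + C * v - y) ** 2 for u, v, y in zip(g1, g2, ys))
            if best is None or r < best[0]:
                best = (r, A, B, C, D)
    return best[1:]

ts = [0.1 * k for k in range(50)]
ys = [2.0 * math.exp(-1.0 * t) + 3.0 * math.exp(-0.2 * t) for t in ts]
grid = [-1.0, -0.8, -0.6, -0.4, -0.2]
print(fit_two_exponentials(ts, ys, grid))  # recovers A=2, B=-1, C=3, D=-0.2
```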
0
votes
0
answers
79
views
Estimator of Poisson Distribution parameter
I had a doubt regarding the properties of an estimator, specifically in the case of the Poisson distribution with parameter $\lambda$. I know $\mu=\lambda$ and $\sigma^2=\lambda$. Further, $\overline{X}$ ...
2
votes
1
answer
68
views
Question about Bernoulli Estimator
I am self-studying statistical inference, and I got confused by an example about a biased Bernoulli estimator. Please do not refer to measure theory when answering this question, because it is not ...
1
vote
1
answer
212
views
MLE of Uniform($\theta$, $\theta+|\theta|$)
Let $X_1,...,X_n$ be a random sample from a uniform distribution on the interval $(\theta, \theta+|\theta|)$, where $\theta \in (-\infty, \infty)$, $\theta \neq 0$. I want to find the MLE of $\theta$.
...
2
votes
1
answer
165
views
Show that estimator $\hat{\sigma}^2$ of $\sigma^2$ is unbiased.
For the linear model $y_i=2+\beta_1\exp(x_i)+\varepsilon_i$ for $i=1,...,n$ where $\beta_1$ is an unknown slope parameter, and errors $\varepsilon_i$ are uncorrelated
with zero mean and common ...
1
vote
1
answer
44
views
Estimation of upper interval bound $\theta$ for $X_1,\dots,X_n$ uniformly i.i.d. on $[0,\theta]$
I have $X_1,\dots , X_n \sim\text{unif}([0,\theta])$ i.i.d. and I want to compute the maximum likelihood estimator for $\theta$.
My log-likelihood function is:
$$L_x(\theta)= \sum_{i=1}^{n}\log(...
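For this classic problem the likelihood is $\theta^{-n}\,\mathbf 1\{\theta \ge \max_i x_i\}$, which is strictly decreasing on $[\max_i x_i, \infty)$, so the maximizer sits at the boundary rather than at a stationary point of the log-likelihood. The estimator itself is a one-liner:

```python
def uniform_theta_mle(xs):
    """MLE for Unif[0, theta]: the likelihood theta^(-n) on theta >= max(xs)
    (and zero otherwise) is decreasing, so the maximizer is the sample max."""
    return max(xs)

print(uniform_theta_mle([0.2, 0.9, 0.5]))  # 0.9
```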
1
vote
1
answer
73
views
Computing MSE of a linear combination of order statistics
Let $X_1 , \dots , X_n$ be IID Uniform$(\theta, 2\theta)$ with $\theta>0$. Consider estimators for $\theta$ of the form:
\begin{equation}
Z_a = aX_{(1)} + \frac{1}{2}(1-a)X_{(n)}
\end{equation}
...
1
vote
1
answer
86
views
Is identifiability a necessary assumption for the Cramer-Rao bound?
Consider the following setup: a random variable with two outcomes (0,1) with probability mass function parameterized by $x$ as $p(0|x)=\frac{1}{2}+x^2$ and $p(1|x)=\frac{1}{2}-x^2$ with $x\in[0,1/\...
0
votes
0
answers
68
views
How to optimize for rotation and translation in 3D for the next equation?
Consider two known $SE(3)$ transformations with matrix representations $H_1$ and $H_2$ of the form $H = [R \mid t]$, where $R$ is a $3\times 3$ rotation matrix and $t$ a $3\times 1$ translation vector. I am ...
1
vote
1
answer
109
views
UMVUE for the Product of Means from Independent Samples [closed]
Let $ x_1, \ldots, x_m $ be i.i.d. samples drawn from a distribution $P$, and $ y_1, \ldots, y_n $ be i.i.d. samples drawn from a distribution $Q$. Assume that the samples $x_i$ and $y_j$ are ...
0
votes
0
answers
71
views
Prove that $\frac{X_{(1)}+X_{(n)}}{2}$ is minimax for $\theta$ when $X_i\sim U(\theta-\frac{1}{2},\theta+\frac{1}{2})$
Let $(X_1,..., X_n)$ be a random sample from the uniform distribution on the interval $U(\theta-\frac{1}{2},\theta+\frac{1}{2})$ with an unknown $\theta\in\mathbb{R}$. Under the squared error loss, ...
5
votes
3
answers
553
views
Maximum Likelihood Estimation for Poisson Mean with Given Observations
You have a sample of $n$ i.i.d. realizations of the random variable $X$ distributed as a Poisson with parameter $\lambda$. It is known that:
$n_1$ values are greater than or equal to $2$;
$n_2$ ...
0
votes
0
answers
83
views
Expected value of an estimator of shape parameter of the generalized Pareto distribution
I would like to compute the expected value and variance of the kappa parameter for the generalized Pareto distribution, where
$$
\hat{\kappa} = \frac{\hat{\sigma}^2}{{s}}
$$
where
$$
s = \frac{1}{n} \...
1
vote
2
answers
147
views
Is $\frac{1}{n}\sum_{i=1}^{n}X_i$ is a sufficient estimator for $\lambda$ in the Poisson distribution?
I know from this question that $\sum_{i=1}^{n}X_i$ is a sufficient estimator for $\lambda$ in the Poisson distribution. However, from looking at the proof I can see that $\frac{1}{n}\sum_{i=1}^{n}X_i$ ...
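The affirmative answer can be sketched via the factorization theorem. The joint pmf factors as

$$\prod_{i=1}^{n}\frac{e^{-\lambda}\lambda^{x_i}}{x_i!} \;=\; \underbrace{e^{-n\lambda}\,\lambda^{\sum_i x_i}}_{g\left(\sum_i x_i;\,\lambda\right)} \;\cdot\; \underbrace{\left(\prod_{i=1}^{n} x_i!\right)^{-1}}_{h(x_1,\dots,x_n)},$$

so $\sum_i X_i$ is sufficient; and since $\sum_i X_i = n\bar X$ is a one-to-one function of $\bar X$ (for fixed $n$), the same factorization exhibits $\bar X = \frac{1}{n}\sum_i X_i$ as sufficient too.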