
Questions tagged [parameter-estimation]

Questions about parameter estimation. Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured/empirical data that has a random component. (Def: http://en.m.wikipedia.org/wiki/Estimation_theory)

1 vote
2 answers
97 views

Let $X_1,...,X_n$ be a random sample with exponential distribution Exp$(\lambda^2+\lambda)$. What is the method of moments estimator of $\lambda$? So the pdf is $f(x;\lambda) = (\lambda^2+\lambda)\exp(-(\...
Antony • 85
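For this first question, equating the first moment works cleanly: $E[X] = 1/(\lambda^2+\lambda)$, so setting $\bar X = 1/(\lambda^2+\lambda)$ and taking the positive root of the quadratic gives the estimator. A minimal numerical sketch (the true $\lambda = 1.5$, the seed, and the sample size are made-up values; the rate parameterization of the exponential is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.5                      # true lambda (made-up value)
rate = lam**2 + lam            # Exp(rate) in the rate parameterization
x = rng.exponential(scale=1/rate, size=100_000)

# Method of moments: x_bar = 1/(lam^2 + lam)  =>  lam^2 + lam - 1/x_bar = 0,
# so take the positive root of the quadratic.
xbar = x.mean()
lam_hat = (-1 + np.sqrt(1 + 4/xbar)) / 2
print(lam_hat)
```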
0 votes
1 answer
67 views

Let $X_1, X_2, \dots , X_n$ be an i.i.d. sample from the mixture distribution \begin{equation} \label{eqn:mixture distribution} p_{\epsilon,\theta} = (1 - \epsilon)p_{\theta} + \epsilon \delta, \end{...
SATYA • 77
1 vote
1 answer
93 views

Theorem (Cramér-Rao inequality). Consider a sample from a parametric model satisfying regularity conditions. Let $\theta^*$ be an unbiased estimator of $\tau(\theta)$. Then for any $\theta \in \Theta$,...
Ritabrata
0 votes
2 answers
63 views

Consider a sequence of i.i.d. random variables $X_1,\, \dots,\, X_n$ whose mean is denoted as $x_0$ and variance $\sigma^2 < \infty$. From the Strong Law of Large Numbers, the empirical mean $\bar ...
Tasty • 104
0 votes
0 answers
35 views

The scalar target $z$ is modeled as $$f(x,y) = \underline c^T \underline b, \qquad \underline b=\begin{bmatrix} 1 \\ x \\ \ln(y+d) \\ x \cdot \ln(y+d) \end{bmatrix},$$ with unknown parameter $d$ and ...
lmixa • 65
2 votes
0 answers
51 views

I am trying to learn the theory of estimation, primarily from a mathematical (measure-theoretic/probabilistic) perspective. More specifically, I'm looking for resources that cover one-parameter and ...
RSG • 1,353
2 votes
1 answer
84 views

Question is in the title. Given that $\delta:=\delta(\mathbf X_n)$ is MVUE (minimum variance unbiased estimator) of a scalar parameter $\theta$, we are asked to show that for all natural numbers $k$, $...
Martund • 15k
2 votes
2 answers
220 views

Summary: I am looking for a convex and robust formulation to fit an ellipse to a set of points; specifically, one that can handle an extreme condition number of the Scattering Matrix. Full Question: The ...
Royi • 10.6k
0 votes
0 answers
41 views

Does it make sense to study statistical models in which the maximum likelihood estimator (MLE) $ \hat{\theta}_n $ exists only for infinitely many $ n $, but not necessarily for all $ n $? Suppose, for ...
randomwalker
0 votes
1 answer
127 views

Throughout mathematical statistics, the Fisher information comes up quite frequently as a measure of information. I understand that in the case where you have a single parameter, the Fisher ...
LSK21 • 1,283
0 votes
0 answers
43 views

Let $X_1, X_2,...,X_n$ be i.i.d. continuous uniform $\mathcal{U}(0,\theta)$ and let $T=\max_i X_i$. Show that the family of distributions of $T$ is complete. Step I: Find the CDF (using independence ...
Starlight • 2,674
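Step I in this kind of problem follows directly from independence; a sketch of the computation:

```latex
F_T(t) = P\left(\max_{1\le i\le n} X_i \le t\right)
       = \prod_{i=1}^{n} P(X_i \le t)
       = \left(\frac{t}{\theta}\right)^{n}, \qquad 0 \le t \le \theta,
```

and differentiating gives $f_T(t) = n\,t^{\,n-1}/\theta^{\,n}$ on $[0,\theta]$, the density used in the completeness argument.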
0 votes
0 answers
44 views

In All of Statistics, chapter 6.2, it states "a parametric model takes the form $F=\{f(x; θ) : θ ∈ Θ\}$ ...". Then, in chapter 13.1, it states, "The Simple Linear Regression Model $Y_i =...
Joe C. • 221
1 vote
1 answer
96 views

Let $X_1,X_2,...,X_n (n\geq 2)$ be a random sample from a distribution with probability density function: $$ f(x;\theta) = \begin{cases} \theta x^{\theta-1}, \hspace{1 cm} 0\leq x \leq 1 \\ ...
Starlight • 2,674
1 vote
1 answer
91 views

Consider $X_1, X_2, \dots, X_n$ as a random sample from a distribution with the probability density function (pdf): $$ f(x) = \begin{cases} e^{-(x - \theta)}, & \text{for } x < \theta, -\infty&...
heynakata
0 votes
0 answers
27 views

Let $X_1,X_2,\dots,X_n$ be i.i.d. Poisson random variables with unknown mean $\lambda>0$. Find the $100(1-\alpha)\%$ confidence interval for $\lambda$, where $\alpha\in(0,1)$. I think we need to find an ...
Addem • 6,197
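One standard route here (an asymptotic sketch, not the exact interval): for large $n$, $\bar X$ is approximately $N(\lambda, \lambda/n)$, giving the Wald-type interval $\bar X \pm z_{1-\alpha/2}\sqrt{\bar X/n}$. Assuming SciPy for the normal quantile, with made-up $\lambda$ and $n$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lam = 4.0                                # true mean (made-up value)
x = rng.poisson(lam, size=2_000)

alpha = 0.05
xbar = x.mean()
z = stats.norm.ppf(1 - alpha/2)          # ~1.96 for alpha = 0.05
half = z * np.sqrt(xbar / len(x))        # plug-in standard error sqrt(lam_hat/n)
lo, hi = xbar - half, xbar + half
print(lo, hi)
```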
0 votes
1 answer
95 views

I would like to know how the update equations for a discrete-time extended Kalman filter (EKF), in the case of non-additive noise, are derived. This section on Wikipedia briefly mentions the update ...
Mahmoud • 1,538
4 votes
1 answer
84 views

We can consider the mean value of a random variable or a sample (plug-in estimation) as the minimizer of $E(X-a)^2$. There are also ways to describe other statistics in this style. For the value of $(E ...
Саня Честер
1 vote
1 answer
45 views

Let $X_1,X_2,...,X_n$ be a random sample from a distribution with density function $$ f(x;\theta)= \begin{cases} \sqrt{\frac{2}{\pi}}e^{-\frac{1}{2}(x-\theta)^2} & \text{if } x\geq\theta \\...
Starlight • 2,674
3 votes
3 answers
121 views

Let $X_1, X_2, \dots, X_n$ be a random sample from the probability density function: $$ f(x; \theta) = \theta x^{-2}, \quad 0 < \theta < x < \infty. $$Find the method of moments estimator ...
Alex Nguyen
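One wrinkle worth flagging for this density: its first moment does not exist, so the naive first-moment equation cannot be used. A sketch of the issue and one common workaround (matching a moment of $1/X$, which does exist):

```latex
E[X] = \int_{\theta}^{\infty} x \cdot \theta x^{-2}\,dx
     = \theta \int_{\theta}^{\infty} \frac{dx}{x} = \infty,
\qquad\text{but}\qquad
E\!\left[\tfrac{1}{X}\right] = \int_{\theta}^{\infty} \theta x^{-3}\,dx = \frac{1}{2\theta},
```

so matching $\frac{1}{2\theta} = \frac{1}{n}\sum_i X_i^{-1}$ gives $\hat\theta = n \big/ \big(2\sum_i X_i^{-1}\big)$.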
0 votes
1 answer
55 views

If $f(x) = ax + b$ is a linear function, and $(x_0, y_0), \ldots, (x_n, y_n)$ are observed values of $f$, then we can estimate the values of $a, b$ which minimize the sum of squares with the following ...
lafinur • 3,595
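The minimizers have the familiar closed form from the normal equations: $a = \operatorname{cov}(x,y)/\operatorname{var}(x)$ and $b = \bar y - a\bar x$. A minimal sketch with synthetic data (the true $a=2$, $b=-1$, noise level, and sample size are made-up values):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=500)
y = 2.0 * x - 1.0 + rng.normal(0, 0.1, size=500)   # f(x) = 2x - 1 plus noise

# Normal equations for least squares on f(x) = a*x + b:
#   a = cov(x, y) / var(x),   b = y_bar - a * x_bar
xbar, ybar = x.mean(), y.mean()
a = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
b = ybar - a * xbar
print(a, b)
```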
3 votes
1 answer
95 views

Let the random variables $X_1, \ldots, X_n \overset{i.i.d.}{\sim}f(x\mid\theta)$ for a p.d.f. $f$. Under general regularity conditions (e.g., Lehmann (1999)), let the MLE of $\theta$ be $\hat{\theta}$....
ytnb • 682
6 votes
1 answer
103 views

In my problem, there is a collection of $n$ random variables $X_1, \dots, X_n$, all independent and identically distributed according to a Laplace distribution with density $p(x) = \frac{1}{2\...
Wither_1422883
2 votes
1 answer
150 views

I am currently deriving the Pareto distribution MLE, where the Pareto distribution is given by: $$f_X(x)= \begin{cases}\frac{\alpha x_{\mathrm{m}}^\alpha}{x^{\alpha+1}} & x \geq x_{\mathrm{m}} \\ ...
JDoe2 • 825
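With $x_m$ known, setting the derivative of the log-likelihood to zero gives the closed form $\hat\alpha = n \big/ \sum_i \ln(x_i/x_m)$. A minimal numerical check (true $\alpha = 2.5$, $x_m = 1$, and the sample size are made-up values; sampling uses the inverse-CDF trick):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, xm = 2.5, 1.0
# Inverse-CDF sampling: if U ~ Uniform(0,1), then xm * U**(-1/alpha) is Pareto.
x = xm * rng.uniform(size=50_000) ** (-1 / alpha)

# MLE with x_m known: alpha_hat = n / sum(log(x_i / x_m)).
alpha_hat = len(x) / np.sum(np.log(x / xm))
print(alpha_hat)
```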
4 votes
1 answer
57 views

I wish you a happy new year. I am reading this paper, and I am struggling to understand a small part of Section 5.1 (independence testing for multinomials) on page 17. Specifically, I am having difficulty ...
Pipnap • 549
1 vote
0 answers
39 views

I have three sets of time-series data, which for now I will call $y(t)$, $f_1(t)$ and $f_2(t)$. I am using a model of the form $$ \hat{y}(t) = \beta f_1(t) - \lambda f_2(t)$$ (which is physically ...
pboardman449
2 votes
0 answers
97 views

I was tasked with the following: let $X_1, X_2, \dots, X_n \sim N(\mu, \sigma^2)$ be i.i.d. random samples, where $\mu = \sigma^2 = \theta > 0$ and $\theta$ is an unknown parameter. Let's find the ...
kal_elk122
0 votes
0 answers
58 views

I have a problem with the parameters of my dynamical system model, which consists of 3 equations and 11 parameters. In the beginning, I thought that once I had already found the stability conditions for my ...
Aji Wibowo
2 votes
0 answers
107 views

I’m looking for standard methods (e.g. methods accepted and used by the community) for fitting a probability distribution (either a probability density function (PDF) or cumulative distribution ...
Comnicron
1 vote
0 answers
60 views

Suppose we have a linear dynamical system, written in discrete form as $x_{k+1} = Ax_k$, with $x_\cdot \in \mathbb{R}^n$ and $A$ a constant matrix. Now suppose that we are not certain about $A$. ...
Ramin Abbasi
1 vote
1 answer
66 views

I assume a linear relationship between a point observed in two coordinate systems: $$ \begin{bmatrix} l \\ y' \\ z' \end{bmatrix} = M_{3\times 4} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} $$ ...
Andy • 877
0 votes
1 answer
56 views

Lehmann (Theory of Point Estimation, second edition, Lemma 1.4) doesn't prove this statement because he says it is obvious; I would like a proof. If $\delta_0$ is any unbiased estimator of $g(\theta)$,...
Alex • 414
0 votes
0 answers
71 views

I'm looking for an explicit bound for the number of samples required to estimate the covariance matrix of a Gaussian distribution. In https://arxiv.org/pdf/1011.3027v7 (end of page 31), the following ...
Antonio Anna Mele
0 votes
1 answer
81 views

So, a friend of mine showed me this article "A Geometric Perspective on Quantum Parameter Estimation" (link in the end of the question), and there is a statement on page 5 (equation 23) that ...
brschultze
0 votes
0 answers
67 views

Question: We have a continuous random variable $X$ with the probability density function (pdf): $$f_\theta(x) = \begin{cases}\theta/x^{\theta+1}, & x > 1,\ \theta > 0\\ 0, &\textrm{...
ARKA • 21
1 vote
1 answer
31 views

According to control theory, an optimal observer for a linear process with random noise is the discrete Kalman filter: $$\hat{x}_{k+1|k}=F_k \hat{x}_k + G_k u_k$$ $$P_{k+1|k}=F^t_k P_{k} F_k + Q_k$$ $$...
Frederic Peugny
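The prediction step quoted above can be sketched numerically; this is a minimal example using the standard covariance-propagation convention $P_{k+1|k} = F_k P_k F_k^T + Q_k$, with made-up matrices for a constant-velocity model:

```python
import numpy as np

# Constant-velocity model, state = [position, velocity]; all numbers made up.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition F_k
G = np.array([[0.5 * dt**2], [dt]])     # control (acceleration) input G_k
Q = 0.01 * np.eye(2)                    # process noise covariance Q_k

x = np.array([[0.0], [1.0]])            # current estimate x_hat_k
P = np.eye(2)                           # current covariance P_k
u = np.array([[0.2]])                   # control input u_k

# Prediction step of the discrete Kalman filter:
x_pred = F @ x + G @ u                  # x_hat_{k+1|k}
P_pred = F @ P @ F.T + Q                # P_{k+1|k}
print(x_pred.ravel(), P_pred)
```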
1 vote
1 answer
67 views

This is the setup problem: Suppose $X_1,X_2,...,X_n$ are i.i.d. samples from the $N(θ,1)$ distribution. Assume the prior for $θ$ is a double exponential distribution, i.e. $f(θ)=\frac{1}{2} e^{−|θ|}$....
Chris • 13
1 vote
4 answers
200 views

We have a measured data set with a relationship with the following formula: $$Y = A\cdot \exp(B\cdot T) + C\cdot \exp(D\cdot T)$$ How can we approximate the values of A, B, C and D? The method used ...
Rose • 11
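One common approach (a sketch, not necessarily the method the answers propose) is nonlinear least squares, e.g. SciPy's `curve_fit`. The true parameters, noise level, and initial guess `p0` below are made-up values; fits of exponential sums are sensitive to the starting decay rates:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(T, A, B, C, D):
    return A * np.exp(B * T) + C * np.exp(D * T)

rng = np.random.default_rng(4)
T = np.linspace(0, 10, 200)
y = model(T, 2.0, -0.3, 1.0, -1.5) + rng.normal(0, 0.01, T.size)

# Initial guesses matter: widely separated decay rates help identifiability.
p0 = [1.0, -0.1, 1.0, -1.0]
params, _ = curve_fit(model, T, y, p0=p0)
print(params)
```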
0 votes
0 answers
79 views

I had a doubt regarding the Properties of an Estimator specifically in the case of Poisson Distribution with parameter $\lambda$. I know $\mu=\lambda$ and $\sigma^2=\lambda$. Further, $\overline{X}$ ...
Upstart • 2,712
2 votes
1 answer
68 views

I am self-studying statistical inference, and I got confused by an example about a biased Bernoulli estimator. Please do not refer to measure theory when answering this question, because it is not ...
Beerus • 2,949
1 vote
1 answer
212 views

Let $X_1,...,X_n$ be a random sample from a uniform distribution on the interval $(\theta, \theta+|\theta|)$, where $\theta \in (-\infty, \infty)$, $\theta \neq 0$. I want to find the MLE of $\theta$. ...
Alex He • 235
2 votes
1 answer
165 views

For the linear model $y_i=2+\beta_1\exp(x_i)+\varepsilon_i$ for $i=1,...,n$ where $\beta_1$ is an unknown slope parameter, and errors $\varepsilon_i$ are uncorrelated with zero mean and common ...
Peter Chen
1 vote
1 answer
44 views

I have $X_1,\dots , X_n \sim\text{unif}([0,\theta])$ i.i.d., and I want to compute the maximum likelihood estimator for $\theta$. My log-likelihood function is: $$L_x(\theta)= \sum_{i=1}^{n}\log(...
Sgt. Slothrop
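Differentiating is a trap in this problem: on the admissible set $\{\theta \ge \max_i x_i\}$ the likelihood $\theta^{-n}$ is strictly decreasing, so the maximum sits at the boundary, $\hat\theta = \max_i X_i$. A quick check (true $\theta = 3$ and the sample size are made-up values):

```python
import numpy as np

rng = np.random.default_rng(5)
theta = 3.0
x = rng.uniform(0, theta, size=10_000)

# The likelihood theta**(-n) * 1{theta >= max x_i} is decreasing in theta,
# so it is maximized at the smallest admissible value: theta_hat = max(x_i).
theta_hat = x.max()
print(theta_hat)
```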
1 vote
1 answer
73 views

Let $X_1 , \dots , X_n$ be IID Uniform$(\theta, 2\theta)$ with $\theta>0$. Consider estimators for $\theta$ of the form: \begin{equation} Z_a = aX_{(1)} + \frac{1}{2}(1-a)X_{(n)} \end{equation} ...
Bastiza • 467
1 vote
1 answer
86 views

Consider the following setup: a random variable with two outcomes (0,1) with probability mass function parameterized by $x$ as $p(0|x)=\frac{1}{2}+x^2$ and $p(1|x)=\frac{1}{2}-x^2$ with $x\in[0,1/\...
Jake • 121
0 votes
0 answers
68 views

Let us consider we have two known $SE(3)$ transformations with matrix representations $H_1$ and $H_2$ of the form $H= [R; t]$, where $R$ is a $3\times 3$ rotation matrix and $t$ a $3\times 1$ translation vector. I am ...
tricostume
1 vote
1 answer
109 views

Let $ x_1, \ldots, x_m $ be i.i.d. samples drawn from a distribution $P$, and $ y_1, \ldots, y_n $ be i.i.d. samples drawn from a distribution $Q$. Assume that the samples $x_i$ and $y_j$ are ...
milad jalali
0 votes
0 answers
71 views

Let $(X_1,..., X_n)$ be a random sample from the uniform distribution on the interval $U(\theta-\frac{1}{2},\theta+\frac{1}{2})$ with an unknown $\theta\in\mathbb{R}$. Under the squared error loss, ...
Ho-Oh • 956
5 votes
3 answers
553 views

You have a sample of $n$ i.i.d. realizations of the random variable $X$ distributed as a Poisson with parameter $\lambda$. It is known that: $n_1$ values are greater than or equal to $2$; $n_2$ ...
Emalas • 53
0 votes
0 answers
83 views

I would like to compute the expected value and variance of the kappa parameter for the generalized Pareto distribution, where $$ \hat{\kappa} = \frac{\hat{\sigma}^2}{s} $$ where $$ s = \frac{1}{n} \...
norh • 1
1 vote
2 answers
147 views

I know from this question that $\sum_{i=1}^{n}X_i$ is a sufficient statistic for $\lambda$ in the Poisson distribution. However, from looking at the proof I can see that $\frac{1}{n}\sum_{i=1}^{n}X_i$ ...
gbd • 2,148
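A general fact that settles questions like this: any one-to-one function of a sufficient statistic is again sufficient, because the Fisher–Neyman factorization is preserved. For the Poisson case:

```latex
p(x_1,\dots,x_n;\lambda) = \prod_{i=1}^{n} \frac{\lambda^{x_i} e^{-\lambda}}{x_i!}
= \underbrace{\lambda^{\sum_i x_i}\, e^{-n\lambda}}_{g\left(\sum_i x_i;\ \lambda\right)}
  \cdot \underbrace{\prod_{i=1}^{n} \frac{1}{x_i!}}_{h(x)},
```

and since $\bar X = \frac{1}{n}\sum_i X_i$ determines $\sum_i X_i$ (and vice versa), $g$ can equally be written as a function of $\bar X$ and $\lambda$, so $\bar X$ is sufficient as well.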
