
Questions tagged [approximation]

Approximations to distributions, functions, or other mathematical objects. To approximate something means to find some representation of it which is simpler in some respect, but not exact.

166 questions with no upvoted or accepted answers
7 votes
1 answer
156 views

Let $\mathbf{a}_1,\ldots,\mathbf{a}_n \in \mathbb R^d$ and $b_1,\ldots,b_n \in \mathbb R$ be fixed. For $\mathbf{x} \sim \mathcal U([0,1]^d)$ and $j \in \{1,\ldots,n\}$, consider the real variable ...
asked by dohmatob
7 votes
0 answers
140 views

Suppose I have a distribution over the real line ($p$) and I'm approximating it by matching its first $N$ moments. What can I say about the approximation error as a function of $N$? Alternatively, ...
asked by David J. Harris
7 votes
0 answers
2k views

My goal is to find a faster way to calculate something like mvtnorm::pmvnorm(upper = rep(1,100)), that is, the tail probability of a multivariate normal distribution ...
asked by Randel
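Lacking the mvtnorm package, a crude Monte Carlo sketch in Python shows the quantity being approximated; an identity covariance is assumed here only so the estimate can be checked against the closed form $\Phi(1)^d$ (a correlated case would need a Cholesky factor):

```python
import numpy as np
from scipy.stats import norm

# Plain Monte Carlo estimate of P(X_1 <= 1, ..., X_d <= 1) for a
# d-dimensional standard normal vector.  Identity covariance is an
# assumption made purely so the answer has a closed form to check against.
rng = np.random.default_rng(0)
d, n = 10, 200_000
samples = rng.standard_normal((n, d))
p_mc = np.mean(np.all(samples <= 1.0, axis=1))
p_exact = norm.cdf(1.0) ** d  # independent case: product of marginals
```

For correlated covariances and large $d$, quasi-Monte Carlo methods such as the Genz algorithm (which mvtnorm::pmvnorm itself uses) converge far faster than this plain sampler.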
6 votes
0 answers
248 views

Suppose we have several data points $x_1,\ldots,x_m\in\mathbb R^n$ as well as a positive definite kernel $K(x,y):\mathbb R^n\times\mathbb R^n\to\mathbb R$ that can be written in the form $$K(x,y)=\...
asked by Justin Solomon
5 votes
0 answers
2k views

In the book Asymptotic theory of statistics and probability by Anirban DasGupta (2008, Springer Science & Business Media), on page 109, Example 8.13, I found the following approximation $$\Phi^{-1}\...
asked by Mur1lo
5 votes
0 answers
3k views

For a bivariate zero-mean normal distribution $P(x_1,x_2)$, the quadrant probability is defined as $P(x_1>0,x_2>0)$ or $P(x_1<0,x_2<0)$. $P(x_1>0,x_2>0) = \frac{1}{4}+\frac{\sin^{-1}(\...
asked by smo
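The quoted quadrant-probability identity $P(x_1>0,x_2>0) = \frac{1}{4}+\frac{\sin^{-1}\rho}{2\pi}$ can be checked by simulation (unit variances and a correlation $\rho$ are assumed):

```python
import numpy as np

# Monte Carlo check of the quadrant probability
#   P(X1 > 0, X2 > 0) = 1/4 + arcsin(rho) / (2*pi)
# for a zero-mean bivariate normal with unit variances and correlation rho.
rng = np.random.default_rng(0)
rho = 0.5
cov = [[1.0, rho], [rho, 1.0]]
x = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)
p_mc = np.mean((x[:, 0] > 0) & (x[:, 1] > 0))
p_formula = 0.25 + np.arcsin(rho) / (2 * np.pi)
```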
5 votes
0 answers
83 views

I am working on joint and conditional density trees for approximating clique potentials in Bayesian Belief Networks. A brief introduction to the topic is available from this paper in case you'd like to ...
asked by mahonya
4 votes
0 answers
970 views

I am currently studying Bayesian models, and am still new to probability theory. I learned that the Gaussian Mixture Model is used to represent the distribution of a given population as a weighted sum of ...
asked by esh3390
4 votes
0 answers
570 views

We have a bivariate random variable $(X,Y)$ for which sampling is challenging. If we were to know how to sample from the conditionals $(X|Y)$ and $(Y|X)$, we could get samples from the joint using ...
asked by PAM
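The standard route from conditionals to the joint is Gibbs sampling; a minimal sketch for a standard bivariate normal, where both conditionals are known in closed form (the correlation value is chosen arbitrarily):

```python
import numpy as np

# Gibbs sampler for a standard bivariate normal with correlation rho,
# using only the conditionals X|Y ~ N(rho*y, 1-rho^2) and Y|X ~ N(rho*x, 1-rho^2).
rng = np.random.default_rng(0)
rho = 0.8
sd = np.sqrt(1 - rho**2)          # conditional standard deviation
n_iter, burn = 100_000, 1_000
x, y = 0.0, 0.0
xs, ys = np.empty(n_iter), np.empty(n_iter)
for i in range(n_iter):
    x = rng.normal(rho * y, sd)   # draw X | Y = y
    y = rng.normal(rho * x, sd)   # draw Y | X = x
    xs[i], ys[i] = x, y
corr = np.corrcoef(xs[burn:], ys[burn:])[0, 1]  # should approach rho
```

After burn-in the pairs are (dependent) draws from the joint; the sample correlation recovers $\rho$.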
4 votes
0 answers
795 views

I am trying to approximate a set of quantiles from the estimated mean, variance, skewness and kurtosis of a random variable with unknown distribution. I tried to apply the Cornish-Fisher expansion of ...
asked by Trung Le
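For reference, a sketch of the standard four-moment Cornish-Fisher quantile formula, taking skewness and excess kurtosis as inputs (coefficients per the usual second-order expansion; with both shape terms zero it reduces to the exact normal quantile):

```python
from scipy.stats import norm

def cornish_fisher_quantile(p, mu, sigma, skew, ekurt):
    """Four-moment Cornish-Fisher quantile approximation.

    skew is the skewness and ekurt the *excess* kurtosis; with both zero
    the formula reduces to the exact normal quantile mu + sigma * z.
    """
    z = norm.ppf(p)
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * ekurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)
    return mu + sigma * z_cf
```

The expansion is known to misbehave (non-monotone quantiles) for large skewness or kurtosis, which may be the issue the asker ran into.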
4 votes
0 answers
189 views

Suppose that data X have a Normal distribution with some mean $\mu$ and some variance $\sigma^2$. However, you don't get to see X. Instead, you see $Y = g(X)$ where $g$ is a known function. Assume ...
asked by tom_0
4 votes
0 answers
119 views

When an observation $x$ is generated by $P(x|\theta)$ for a parameter $\theta$ the Bayesian optimal estimator for the value of $\theta$ is $\hat\theta_{BEST}=\mathbb{E}[\theta|x]=\frac{1}{P(x)}\int d\...
asked by Uri Cohen
4 votes
0 answers
122 views

I am trying to compute an expectation $E[f(X;\theta,n)]$ where $\theta$ and $n$ are known parameters. I have an easy-to-compute deterministic function $\tilde{f}(\theta,n)$ that provides an ...
asked by bnaul
3 votes
1 answer
141 views

Consider the following random quadratic equation, $$ x^2 + Z x + Y = 0, $$ where, $$ \begin{gathered} Z \sim \mathcal{N}(\mu_Z,\sigma_Z), \qquad Y \sim \mathcal{N}(\mu_Y,\sigma_Y). \end{gathered} $$ ...
asked by Emmy B
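A quick simulation of the probability that the roots are real: the quadratic $x^2 + Zx + Y = 0$ has real roots exactly when the discriminant $Z^2 - 4Y$ is nonnegative. Standard-normal $Z$ and $Y$ are assumed here purely for illustration, since the excerpt leaves the parameters unspecified:

```python
import numpy as np

# P(real roots) = P(Z^2 - 4Y >= 0), estimated by Monte Carlo.
# Standard-normal Z and Y are an assumption for this sketch.
rng = np.random.default_rng(0)
n = 1_000_000
z = rng.standard_normal(n)
y = rng.standard_normal(n)
p_real = np.mean(z**2 - 4 * y >= 0)
```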
3 votes
0 answers
147 views

I give a try to read the arXiv paper Distributed Adaptive Sampling for Kernel Matrix Approximation, Calandriello et al. 2017. I got a code implementation where they compute ridge leverage scores ...
asked by Emon Hossain
3 votes
0 answers
170 views

We are trying to understand the number of points that a neural network of a particular size can interpolate. I think this may be isomorphic to its degrees of freedom? We are not interested in whether ...
asked by jlperla
3 votes
0 answers
76 views

Suppose you have a smooth function $f^*:D_1 \times D_2\rightarrow\mathbb{R}$ that you observe with error as $f$ such that $$f(x,y)=f^*(x,y)+\epsilon$$ where $\epsilon$ has zero expectation (you can ...
asked by nothing
3 votes
0 answers
76 views

I'm developing a logistic regression used for prediction. I have pre-selected, based on previous literature, 15 candidate predictors (fitting my ~200 events). Now, I want a reduced/more parsimonious ...
asked by Ozeuss
3 votes
0 answers
476 views

I am exploring ways to reduce the noise of a covariance matrix estimator when the number of variables is greater than the number of observations, i.e. $n > t$. First, I tried using a low rank ...
asked by J.K.
3 votes
1 answer
171 views

There is a ton of literature (see, for example, a highly cited paper by Huang et al. (2006)) on neural networks with random weights (NNRWs), i.e. neural networks whose weights are random except for ...
asked by Vivek Subramanian
3 votes
0 answers
838 views

Let $T$ be a compact, connected, proper subset of $\mathbb{R}^3:\quad T \subset \mathbb{R}^3$. Further let $\left\{ \boldsymbol{\mu}_i \right\}_{i=1}^n$ be a given finite set of $n$ points in $T$: $$ \...
asked by aberdysh
3 votes
1 answer
73 views

I am working with a blackbox prediction model which takes known inputs and outputs a single mean response. I know this model's residuals to be heteroskedastic, but also can assume the error term of ...
asked by ballboy
3 votes
0 answers
143 views

I'm looking for an analytic formula. Approximate formulas are welcome, in which case I give more importance to simple and nice expressions rather than to precision of the approximation. I'm looking ...
asked by matus
3 votes
0 answers
167 views

I am studying the Gaussian Process Emulator (GPE) to approximate computationally expensive computer models. Basically, we suppose the computer model, or simulator, is denoted by $f(x)$, where $x$ is the ...
asked by Eridk Poliruyt
3 votes
0 answers
318 views

I am trying to approximate a multivariate function $y = f(x_1, ...x_n)$, which I have reason to believe will be well approximated by a classification and regression tree. Some of the variables are ...
asked by nikosd
3 votes
0 answers
167 views

Consider a probability distribution $P(x)$, a set of observed samples $S = \{x_1,\cdots, x_n\}$ where $x_i \stackrel{iid}{\sim} P(x)$ for $i \leq n$, and a symmetric function $h(x,y)$. How can one ...
asked by abari
3 votes
0 answers
64 views

I was curious whether it is possible to obtain approximation error bounds on continuous densities from approximation error bounds of random variables. To make my question more precise: We consider ...
asked by Igor
3 votes
0 answers
199 views

For me, the first thing that comes to mind when I come across the term continuity correction is that when $X\sim\mathrm{Bin}(n,p)$, one approximates $\Pr(X\le x)=\Pr(X<x)$ by $\Pr(Y<x+1/2)$ ...
asked by Michael Hardy
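The half-unit correction described above, in code (the binomial parameters are chosen arbitrarily for illustration):

```python
import numpy as np
from scipy.stats import binom, norm

# Normal approximation to a binomial CDF, with and without the
# half-unit continuity correction: P(X <= x) ~ Phi((x + 1/2 - np) / sd).
n, p, x = 30, 0.4, 10
exact = binom.cdf(x, n, p)
mu, sd = n * p, np.sqrt(n * p * (1 - p))
plain = norm.cdf((x - mu) / sd)
corrected = norm.cdf((x + 0.5 - mu) / sd)
```

For these parameters the corrected approximation lands much closer to the exact CDF than the uncorrected one.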
3 votes
0 answers
4k views

According to various sources, the variance of ML estimators can be obtained from the Hessian matrix of the likelihood function. If $H$ is the Hessian of the negative log-likelihood function, then $H^{-...
asked by Ernest A
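A numerical sanity check of the claim: for $n$ normal observations with known $\sigma$, the Hessian of the negative log-likelihood in $\mu$ is $n/\sigma^2$, so its inverse at the MLE recovers $\mathrm{Var}(\hat\mu) = \sigma^2/n$ exactly:

```python
import numpy as np

# Inverse Hessian of the negative log-likelihood as a variance estimate,
# illustrated with a numerical second difference for the normal-mean MLE.
rng = np.random.default_rng(0)
sigma = 2.0
x = rng.normal(5.0, sigma, size=100)

def negloglik(mu):
    return 0.5 * np.sum((x - mu) ** 2) / sigma**2  # additive constants dropped

mu_hat = x.mean()                 # MLE of mu
h = 1e-4                          # central second-difference step
hess = (negloglik(mu_hat + h) - 2 * negloglik(mu_hat) + negloglik(mu_hat - h)) / h**2
var_hat = 1.0 / hess              # equals sigma^2 / n up to rounding
```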
3 votes
0 answers
692 views

I am reading a paper "A note on the Delta Method" by Gary Oehlert, JASA, 1992. I am trying to estimate the variance of a function of a random variable, but first I want to understand the limitations ...
asked by David LeBauer
2 votes
0 answers
54 views

I have been using Taylor expansions to build intuition and obtain many approximate results while trying to find innovative ideas for my research. And I have seen a lot of approximate equals or asymptotically ...
asked by Gerry G
2 votes
1 answer
79 views

In theory the differential privacy guarantee comes from adding randomness to an algorithm so whatever is output is a sample from a target distribution (e.g., the Laplacian, Gaussian, Exponential ...
asked by travelingbones
2 votes
0 answers
79 views

I have a function that takes a few hundred parameters and which returns a score I want to optimize for - It's a piece of software attempting to play a game against another player. The parameters ...
asked by Borborbor
2 votes
1 answer
143 views

I am running a generalized linear mixed effects model with a Poisson distribution to analyse count data. The model has a random effect that takes into account multiple observations obtained by the same ...
asked by Lenakeiz
2 votes
0 answers
188 views

Suppose I have a data set $X_1, \ldots, X_n$, and from that I compute a statistic $T(X_1, \ldots, X_n) := T$. I want to assess how reactive/sensitive this calculation is to changes in parameter values....
asked by Taylor
2 votes
0 answers
157 views

I've been trying to experiment and test the extent to which a neural network works. I was only able to make something with broad categorical variables function in an acceptable amount of time and in ...
asked by John Sohn
2 votes
0 answers
146 views

I'd like to draw samples from some "target" probability density function $f(x)$. However, I don't have a way to do that -- instead I just have access to $N$ samples, each drawn from one of $...
asked by mbarete
2 votes
0 answers
250 views

I've encountered a sentence: In information theory, Kullback–Leibler divergence is regarded as a measure of the information lost when probability distribution Q is used to approximate a true ...
asked by Martyna
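The "information lost" reading can be made concrete with a small discrete example; note that the divergence is asymmetric in its arguments, which is why the direction $KL(P\|Q)$ matters in the quoted sentence:

```python
import numpy as np

def kl_divergence(p, q):
    """KL(P || Q) in nats for discrete distributions; q must be positive
    wherever p is positive.  Note the asymmetry: KL(P||Q) != KL(Q||P)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```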
2 votes
0 answers
144 views

Let $m\geq 1$ be an integer and $F\in \mathbb{R}[x_1, \dots, x_m]$ be a polynomial. I want to approximate $F$ on the unit hypercube $[0, 1]^m$ by a (possibly multilayer) feedforward neural network. ...
asked by ofow
2 votes
0 answers
209 views

Consider a random uniform distribution of $N$ points in $2D$ space bounded by $[0, 1]$ in both dimensions. Example: If I want to estimate the mean area of their Voronoi cells, I have to obtain the ...
asked by Gabriel
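One workaround is to estimate cell areas on a dense probe grid instead of computing the exact Voronoi diagram, whose boundary cells are unbounded and would need clipping at $[0,1]^2$. A sketch, with point count and grid resolution chosen arbitrarily:

```python
import numpy as np

# Assign each probe point on a lattice over [0, 1]^2 to its nearest
# generator; the share of probes per generator approximates that Voronoi
# cell's area, with the square's boundary handled implicitly.
rng = np.random.default_rng(0)
n_pts = 50
pts = rng.random((n_pts, 2))

g = np.linspace(0.0, 1.0, 300)
gx, gy = np.meshgrid(g, g)
probes = np.column_stack([gx.ravel(), gy.ravel()])
d2 = ((probes[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
owner = d2.argmin(axis=1)
areas = np.bincount(owner, minlength=n_pts) / len(probes)
mean_area = areas.mean()  # cells tile the square, so this is exactly 1/n_pts
```

As the comment notes, the *mean* cell area is trivially $1/N$; the simulation is only informative about the distribution of areas around that mean.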
2 votes
0 answers
995 views

The desired percentage of SiO$_2$ in a certain type of aluminous cement is 5.5. To test whether the true average percentage is 5.5 for a particular production facility, 16 independently obtained ...
asked by Been
2 votes
0 answers
68 views

Consider a random variable $X \sim p_{n,\theta}$ where the first four moments are given by known functions: $$\begin{matrix} \ \ \ \ \ \ \mathbb{E}(X) \equiv \mu(n,\theta) & & & \ \ \ \ \ ...
asked by Ben
2 votes
0 answers
82 views

In Bayesian statistics, you have a likelihood and a prior, $f(x_1,\ldots,x_n \mid \theta)$ and $\pi(\theta)$ respectively, and you use these to obtain the posterior $\pi(\theta \mid x_1, \ldots, x_n) \...
asked by Taylor
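When the prior is conjugate to the likelihood, the posterior is available in closed form; a Beta-Bernoulli sketch with made-up data:

```python
import numpy as np

# Conjugate update: a Bernoulli likelihood with a Beta(a, b) prior yields a
# Beta(a + k, b + n - k) posterior, where k is the number of successes.
# The data below are invented purely for illustration.
a, b = 2.0, 2.0
data = np.array([1, 0, 1, 1, 1, 0, 1, 1])
k, n = int(data.sum()), len(data)
a_post, b_post = a + k, b + (n - k)
post_mean = a_post / (a_post + b_post)
```

Outside conjugate families the normalizing integral is generally intractable, which is what motivates the approximation schemes this tag collects.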
2 votes
0 answers
134 views

I am trying to grapple with the following problem. I have an application that develops empirical distributions. In essence, I end up with a histogram of equally spaced $x$ values, with both a $max$...
asked by eSurfsnake
2 votes
0 answers
812 views

I'm trying to understand the proof of an expression for the asymptotic bias in local polynomial regression of degree $p\ge0$. Specifically, I'm distraught with equation $(3.59)$ on page 102 of this ...
asked by Epiousios
2 votes
0 answers
52 views

Are there any theorems which tell us how well AR(p) models are able to approximate any stationary finite time series? If so, what are the relevant results?
asked by Mikkel Rev
2 votes
0 answers
186 views

If I fit a model m <- glm/gam/gamm/lme/whatever(y ~ x + z, family = some exponential family) and extract coef(m) and vcov(m) ...
asked by Malik
2 votes
0 answers
67 views

The most common form of linear regression estimates the best values of $\vec{\beta}$ and $\sigma^2$ assuming that data is sampled from a model $y = \vec{\beta} \cdot \vec{x} + \vec{\epsilon}$ where $\...
asked by Ilya Grigoriev
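The "most common form" referred to above is ordinary least squares; a minimal sketch with simulated data, estimating both $\vec\beta$ and $\sigma^2$:

```python
import numpy as np

# OLS for y = X beta + eps: beta_hat by least squares, sigma^2 by the
# usual unbiased residual-variance estimator.  Data are simulated.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.random(n)])  # intercept + one covariate
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(0.0, 0.5, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - X.shape[1])  # divide by n - p, not n
```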
2 votes
0 answers
159 views

Is it possible to choose the parameters of an RBM to maximize the likelihood of the observed data? (I follow the notation of the deeplearning tutorial.) Denote the observable data by $x$, hidden data ...
asked by fabian
2 votes
0 answers
2k views

I learnt from section 10.3 of statistical inference that both Wald test statistic $\frac{W_n-\theta_0}{S_n}\approx\frac{W_n-\theta_0}{\sqrt{\hat I_n(W_n)}}$ and score test statistic $\frac{S(\theta_0)}...
asked by user3813057