
Questions tagged [approximation]

Approximations to distributions, functions, or other mathematical objects. To approximate something means to find some representation of it which is simpler in some respect, but not exact.

1 vote
0 answers
173 views

I have a function $s(\omega)$ that is a sum over a function of random numbers $a_m$ and looks something like the following. $$ s(\omega) = \sum_{m = 1}^{M} f(a_m, \omega) $$ where all the $a_m$ are ...
CfourPiO · 325
0 votes
0 answers
117 views

Is anyone aware of an approximation to the density function for the studentized range distribution https://en.wikipedia.org/wiki/Studentized_range_distribution ? I've found a fast approximation for ...
Brian Powers
1 vote
0 answers
66 views

I'm trying to approximate an unknown distribution by a truncated Edgeworth series, with cumulants/central moments estimated from a large sample. I notice though that I am getting negative tail ...
Frido · 171
1 vote
0 answers
167 views

Suppose we have a posterior sample of parameter $\theta$ obtained by fitting some Bayesian model to $n$ data points. In black is the empirical posterior density and in red is a normal approximation to ...
user7064 · 2,277
0 votes
0 answers
238 views

How do you compute a binomial probability distribution for large $n$? If I try the following, I get an integer overflow in any programming language: ...
at01 · 111
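One standard workaround for the overflow described above is to work in log-space and exponentiate only at the end. A minimal sketch using `math.lgamma`, assuming $0 < p < 1$:

```python
import math

def binom_logpmf(k: int, n: int, p: float) -> float:
    """Log of the Binomial(n, p) pmf at k, via log-gamma to avoid overflow.
    Assumes 0 < p < 1."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log1p(-p))

def binom_pmf(k: int, n: int, p: float) -> float:
    return math.exp(binom_logpmf(k, n, p))

# Small case matches the exact value C(10,5)/2^10:
print(binom_pmf(5, 10, 0.5))
# Works far beyond where factorials or n-choose-k overflow:
print(binom_pmf(500_000, 1_000_000, 0.5))
```

The intermediate log-gamma terms stay small even when the factorials themselves are astronomically large, which is why this avoids the integer/float overflow.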
3 votes
2 answers
270 views

I read this paper "voom: precision weights unlock linear model analysis tools for RNA-seq read counts", in the methods, the "Delta rule for log-cpm" section: The RNA-seq data ...
Dan Li · 179
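The "Delta rule" the voom paper refers to is the classical delta method, $\operatorname{Var}(g(X)) \approx g'(\mu)^2 \operatorname{Var}(X)$. A minimal numerical check with $g = \log$ and illustrative numbers (not voom's actual counts):

```python
import math
import random
import statistics

# Delta method: for X with mean mu and small variance sigma^2,
# Var(log X) ~ (1/mu)^2 * sigma^2. Check by simulation with X ~ N(10, 0.5^2).
random.seed(1)
mu, sigma = 10.0, 0.5
xs = [random.gauss(mu, sigma) for _ in range(100_000)]

emp = statistics.variance(math.log(x) for x in xs)  # empirical Var(log X)
approx = (1.0 / mu) ** 2 * sigma ** 2               # delta-method value 0.0025
print(emp, approx)
```

The approximation is good here because $\sigma/\mu$ is small; for counts near zero (the low-count regime voom worries about) the first-order expansion degrades.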
3 votes
1 answer
99 views

Suppose we have $$ X_1, \ldots, X_n \mid \theta \, \mathop{\sim}^{iid} \, L(\cdot \mid \theta), \quad \theta \sim \pi $$ By Bayes' theorem, the corresponding posterior distribution is $$ \pi_n(\mathrm ...
mariob6 · 550
2 votes
2 answers
462 views

The central limit theorem says that $$ \frac{\bar{X}-\mu}{\frac{\sigma}{\sqrt{n}}} \stackrel{\mathcal{D}}{\rightarrow} N(0,1) $$ What is the distribution of $\bar{X}$? I've seen it given as $\sum X \...
user1141170
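For the CLT question above: rearranging the same statement, $\bar{X}$ is approximately $N(\mu, \sigma^2/n)$ for large $n$. A quick simulation sketch using Exponential(1) draws, where $\mu = \sigma = 1$:

```python
import random
import statistics

# By the CLT, the mean of n iid finite-variance draws is approximately
# N(mu, sigma^2 / n). Exponential(1): mu = sigma = 1, so the simulated
# means should have mean near 1 and sd near 1/sqrt(400) = 0.05.
random.seed(0)
n, reps = 400, 2000
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]

print(statistics.fmean(means))   # near mu = 1
print(statistics.stdev(means))   # near sigma / sqrt(n) = 0.05
```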
1 vote
1 answer
135 views

A lot of ML models, such as neural networks, are universal function approximators. But when evaluating ML models, we usually use metrics, such as MSE or accuracy, to assess the performance of an ML ...
Frank Gallagher
1 vote
0 answers
75 views

My NN (a few linear layers with ReLUs + batch normalization, no activation in the last layer) learns to approximate vector-valued labels $y_z$ from data $z\sim\rho_z$ in a supervised way, i.e. net$(z)=...
joinijo · 111
1 vote
1 answer
245 views

Goal Given a curve defined by a set of (x, y) coordinates with linear interpolation, we want to find the best approximation using a smaller set of points (w/ linear interpolation) that fall along a ...
Kungfunk
1 vote
1 answer
314 views

I am learning about variational inference and the Gibbs sampler. I am in the process of deriving variational inference on my own. In this process, I need to make a comparison with the Gibbs sampler. I am ...
sam · 447
0 votes
0 answers
148 views

While searching around, I found this question Expected value of a natural logarithm dealing with the expected value of the natural log. The top answer references a paper that approximates the expected ...
Jama · 101
0 votes
0 answers
99 views

Given two multivariate Gaussians $G_1(\mathbf{x}), G_2(\mathbf{x})$ (not PDFs!) with the same center at the coordinate origin and different covariance matrix $\mathbf{F}_1, \mathbf{F}_2$, where $\...
logocar3
4 votes
2 answers
277 views

Consider a 1-layer fully-connected neural network (FCNN) given by $$ f(x) = \sum_{i=1}^n v_i\sigma\!\left({w_i}^T x\right) $$ where $x,w_i\in\mathbb{R}^d$, $v_i\in\mathbb{R}$, and $\sigma(y)=\max(y,0)$...
Tham · 141
0 votes
0 answers
153 views

We know that ${f'(x) \approx \frac{f(x+h)- f(x)}{h}}$. If we have three points ${x_0 = x-h}$, ${x_1 = x}$, ${x_2 = x + h}$, we can compute the 3-point centered-difference formula using Newton's ...
Ele975 · 217
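For reference, the 3-point centered-difference formula the question leads up to is $f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}$, with $O(h^2)$ truncation error versus $O(h)$ for the forward difference. A minimal sketch:

```python
def forward_diff(f, x, h=1e-5):
    """Two-point forward difference: O(h) truncation error."""
    return (f(x + h) - f(x)) / h

def centered_diff(f, x, h=1e-5):
    """Three-point centered difference: O(h^2) truncation error."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# For f(t) = t^3 at x = 2, the exact derivative is 12; the centered
# formula is markedly more accurate at the same h.
f = lambda t: t ** 3
print(forward_diff(f, 2.0), centered_diff(f, 2.0))
```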
0 votes
0 answers
188 views

I'm looking to find an approximation of $\mathbb{E}\left[ XY \right]$ in terms of $\mathbb{E}\left[ X \right]^n$, $\mathbb{E}\left[ Y \right]^n$, $\mathbb{E}\left[ X^n \right]$, $\mathbb{E}\left[ Y^n \...
abnowack · 111
2 votes
2 answers
300 views

So, a little context. The image you see is from the GCE A-Level syllabus, where they define the conditions for approximating a binomial by a Poisson. But why did they mention that the ...
Mile Stone
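The usual condition is that Binomial$(n, p) \approx$ Poisson$(np)$ when $n$ is large and $p$ is small, so that $np$ stays moderate. A quick numerical comparison with illustrative values:

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * (p ** k) * ((1 - p) ** (n - k))

def poisson_pmf(k, lam):
    return (lam ** k) * math.exp(-lam) / math.factorial(k)

# n large, p small, np = 3: the two pmfs agree to a few decimal places.
n, p = 1000, 0.003
for k in range(6):
    print(k, binom_pmf(k, n, p), poisson_pmf(k, n * p))
```

A standard bound on the quality of this approximation is that the total variation distance is at most $np^2$, which here is $0.009$.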
1 vote
0 answers
71 views

Let $X \sim \mathrm{Bernoulli}(\vartheta)$ for some unknown $\vartheta \in (0,1)$, and let $(X_1, …, X_n)$ be a moderately large IID sample for $X$. Let $\vartheta_0 \in (0,1)$. I want to test $H_0 \...
Federico · 111
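A common approach to the test described above is the score z-test, which uses the normal approximation to the binomial with the standard error evaluated under $H_0$. A sketch, with the function name being illustrative:

```python
import math

def z_test_proportion(successes: int, n: int, theta0: float):
    """Score z-test of H0: theta = theta0 for Bernoulli data, using the
    normal approximation to the binomial (reasonable for moderately large n).
    Returns (z, two-sided p-value)."""
    phat = successes / n
    se = math.sqrt(theta0 * (1 - theta0) / n)   # standard error under H0
    z = (phat - theta0) / se
    # two-sided p-value from the standard normal CDF, via erf
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# 60 successes out of 100 against theta0 = 0.5: z = 2, p about 0.046.
print(z_test_proportion(60, 100, 0.5))
```

Using $\vartheta_0$ rather than $\hat\vartheta$ in the standard error is what distinguishes the score test from the Wald test; both rely on the same normal approximation.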
4 votes
0 answers
970 views

I am currently studying Bayesian models, and am still new to probability theory. I learned that the Gaussian Mixture Model is used to represent the distribution of a given population as a weighted sum of ...
esh3390 · 107
0 votes
1 answer
244 views

We know a series of probability distribution approximations that are considered good as long as some condition holds. A few examples are: Binomial can be approximated by Normal if $np(1-p) > 10$ ...
Gian · 455
1 vote
1 answer
100 views

Suppose I have a population whose distribution is definitely not normal, but both the population and sample size will be large. Is there any way I can prove/show that the mean of a large enough sample ...
Aryaman Darda
0 votes
2 answers
162 views

Let $$ R_{i}(t) \sim \mathcal{N}(\mu_i, \sigma_i^2), $$ denote the one period return distribution for asset $i$, from which we observe the iid samples $\{R_i(t)\}_{t=1}^{n_i}$. The MLE sample mean and ...
Pontus Hultkrantz
3 votes
2 answers
834 views

Can Gaussian kernels reproduce non-continuous $L^2$-integrable functions? (Do non-continuous $L^2$-integrable functions lie in the RKHS constructed by a Gaussian kernel?) Edit: I think my question is being ...
Harduin · 77
1 vote
0 answers
423 views

I want to find an approximation of a mixture of probability distributions that minimises the Kullback-Leibler divergence (KLD). I need to verify my result, as it seems suspect. We have a joint ...
scj · 111
1 vote
0 answers
104 views

Can a mixture of $N$ Weibull distributions approximate any continuous density with non-negative support, if $N$ is sufficiently large? (If so, a reference to the proof would be greatly appreciated). (...
zen_of_python
2 votes
1 answer
143 views

I am running a generalized linear mixed effect model with a Poisson distribution to analyse count data. The model has a random effect that takes into account multiple observations obtained by the same ...
Lenakeiz
2 votes
1 answer
227 views

Data and objective I have count data from two groups, A and B, from across multiple samples. I want to estimate the average ratio of A to B across all samples, along with a confidence interval. Issues ...
G. Channing
0 votes
0 answers
54 views

The difference between the value of a statistical estimate and its parameter's value is almost never exactly $0$. For example, $r - \rho$, for a unique sample $r$, is likely to be some non-zero ...
virtuolie · 862
1 vote
0 answers
74 views

We know that the Wold Decomposition Theorem says that any purely nondeterministic covariance-stationary process, $x = [x_t : t \in \mathbb{Z}]$, can be written as a linear combination of lagged values ...
Fam · 1,007
9 votes
1 answer
636 views

Let's assume the likelihood $$ y \sim\mathcal N_p(0, \Sigma + \Lambda\Lambda^\top) $$ where $\Sigma$ is a diagonal $p \times p$ matrix and $\Lambda$ is a $p \times d$ matrix with $d \ll p$. What is ...
mariob6 · 550
3 votes
0 answers
170 views

We are trying to understand the number of points that a neural network of a particular size can interpolate. I think this may be isomorphic to its degrees of freedom? We are not interested in whether ...
jlperla · 535
1 vote
0 answers
139 views

In variational inference, the mean-field family of probability distributions is the set of distributions that factors over its terms (i.e. each component is independent of all others). This allows us ...
blue_egg · 399
4 votes
1 answer
671 views

Given fixed shape and scale parameters $k$, $\theta$ for some Gamma distribution which has a CDF $F$. Let $G^{-1}$ be the inverse CDF of the standard normal distribution. Consider the composition $H(x)...
Apen13 · 43
0 votes
0 answers
167 views

Note: I originally tried to pose this question generally, without discussing the specific type of stochastic process. I hope that this can still be an interesting question generally. Assume that we ...
knightontable
3 votes
0 answers
199 views

I want to estimate a negative binomial regression from scratch, i.e. I want to write a script that maximizes the likelihood to obtain optimal parameters. To do so we can calculate derivatives ...
John · 542
4 votes
1 answer
185 views

I have a posterior function which is easy to approximate using numerical methods (the posterior has only 2 parameters, and is approximately Gaussian because of the large sample). However, I need to ...
Closed Limelike Curves
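A standard way to turn a numerically evaluated, approximately Gaussian posterior into a closed form is the Laplace approximation. A one-dimensional sketch follows; the two-parameter case in the question replaces the second derivative with the $2 \times 2$ Hessian, and all names here are illustrative:

```python
def laplace_approx(log_post, theta_hat, h=1e-4):
    """One-dimensional Laplace approximation: fit N(theta_hat, -1/L''(theta_hat))
    around the posterior mode theta_hat (assumed already located, e.g. by a
    numerical optimizer), where L is the log-posterior. The second derivative
    is estimated with a centered second difference."""
    d2 = (log_post(theta_hat + h) - 2.0 * log_post(theta_hat)
          + log_post(theta_hat - h)) / h ** 2
    return theta_hat, -1.0 / d2   # (mean, variance) of the Gaussian fit

# Sanity check: for a standard normal log-density, the fit recovers
# mean 0 and variance 1.
print(laplace_approx(lambda t: -0.5 * t * t, 0.0))
```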
1 vote
0 answers
393 views

I am trying to find the distribution of the following variable $Z$: the $X_i$ are each independent with Lognormal$(\mu_i, \sigma^2_i)$ distributions, $X_i \in L^2$ for all $i$, and $Z = \sum_i c X_i$ where ...
mathcomp guy
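There is no closed form for a sum of lognormals, but a common approximation is Fenton–Wilkinson: match the mean and variance of $Z$ to a single lognormal. A sketch under the independence stated in the question, with illustrative names:

```python
import math

def fenton_wilkinson(params, c=1.0):
    """Fenton-Wilkinson approximation for Z = sum_i c * X_i with independent
    X_i ~ Lognormal(mu_i, sigma_i^2). params is a list of (mu_i, sigma_i^2)
    pairs. Returns (mu_Z, sigma2_Z) of the single lognormal whose mean and
    variance match those of Z exactly."""
    m = sum(c * math.exp(mu + 0.5 * s2) for mu, s2 in params)          # E[Z]
    v = sum((c ** 2) * (math.exp(s2) - 1.0) * math.exp(2.0 * mu + s2)
            for mu, s2 in params)                                      # Var(Z)
    sigma2_z = math.log(1.0 + v / m ** 2)
    mu_z = math.log(m) - 0.5 * sigma2_z
    return mu_z, sigma2_z

# Sanity check: a "sum" of one lognormal recovers its own parameters.
print(fenton_wilkinson([(0.5, 0.25)]))
```

The lognormal shape is only an assumption; Fenton–Wilkinson is known to be accurate in the body of the distribution but less so in the tails, especially when the $\sigma_i^2$ are large.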
2 votes
0 answers
188 views

Suppose I have a data set $X_1, \ldots, X_n$, and from that I compute a statistic $T(X_1, \ldots, X_n) := T$. I want to assess how reactive/sensitive this calculation is to changes in parameter values....
Taylor · 22.4k
0 votes
0 answers
53 views

How can the expression on the left-hand side be approximated by $\sum_{i=1}^Nx_i$ as $n\to \infty$? $$ \frac{\sum\limits_{i=1}^{N}x_i^2}{n-2\frac{\sum\limits_{i=1}^{N}x_i}{N}} \left(\sqrt{1+\frac{Nn\left(...
Sara · 11
9 votes
3 answers
1k views

Suppose $X$ and $Y$ are real random variables that are uncorrelated. Now, uncorrelated does not imply independence, so $E[X \mid Y] \ne E[X]$. However, can they be said to be approximately equal? If ...
Bridgeburners
2 votes
1 answer
391 views

I'm looking for a normal approximation for a Bernoulli variable (so I can later sum multiple correlated approximated variables). The trivial approximation is taking the mean and variance of the ...
user107511
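The trivial moment-matching approximation mentioned above extends directly to a correlated sum: match the sum's mean and variance, including the cross-covariances. A sketch with illustrative names:

```python
def sum_normal_approx(ps, cov):
    """Moment-matched normal approximation for S = sum_i X_i with
    X_i ~ Bernoulli(ps[i]), possibly correlated. cov[i][j] holds
    Cov(X_i, X_j) for i != j (diagonal entries are ignored).
    Returns (mean, variance) of the approximating normal."""
    n = len(ps)
    mean = sum(ps)
    var = sum(p * (1.0 - p) for p in ps)                  # own variances
    var += sum(cov[i][j] for i in range(n)
               for j in range(n) if i != j)               # cross terms
    return mean, var

# Two independent fair coins: S has mean 1 and variance 0.5.
print(sum_normal_approx([0.5, 0.5], [[0.0, 0.0], [0.0, 0.0]]))
```

The match is only in the first two moments; for a single Bernoulli the normal is a crude stand-in, but for sums of many of them the CLT makes it progressively better.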
0 votes
0 answers
136 views

I have two time series of financial returns for assets $A$ and $B$ defined below for $n$ periods. The return $a_i$ is the percent growth in the asset price of $A$ using period-end values for $i-1$ and ...
AndrewLong
1 vote
0 answers
152 views

I was reading this great article on deriving the equation for the line of best fit (https://www.neelocean.com/simple-ols-estimators/), and got confused when I came across: Rearranging: $$\hat \beta = \...
rayaantaneja
2 votes
0 answers
157 views

I've been trying to experiment and test the extents to which a neural network works. I was only able to make something with broad categorical variables function in an acceptable amount of time and in ...
John Sohn · 121
1 vote
1 answer
692 views

Let $X_1, X_2,...$ be i.i.d. random variables with finite mean $\mu$ and finite variance $\sigma^2$. From the Central Limit Theorem, we know that $\sqrt{n}(\bar{X_n}-\mu)$ tends in distribution to $N(...
DM-97 · 107
1 vote
1 answer
411 views

Assume that the conditional density of $ y \vert x $ is a Beta distribution for all values of x. Can a Beta distribution with parameters computed by a neural net, i.e. Beta($\hat{\alpha}$, $\hat{\beta}...
1 vote
0 answers
44 views

I am currently reading the paper on Gradient Boosting Machines - J. H. Friedman, “Greedy function approximation: A gradient boosting machine,” Ann. Stat., vol. 29, no. 5, pp. 1189–1232, 2001, doi: 10....
Karina · 85
2 votes
1 answer
789 views

There is a known approximation of the Dirichlet distribution by a logit-normal, as presented on Wikipedia. However, I am interested in the reverse: can I approximate a logit-normal by a Dirichlet? I.e....
Andreas Look
1 vote
0 answers
65 views

Some definitions first: Acquired customers: Customers placing an order for their first time. Cohort: Group of customers that have been acquired during the same time period. Repurchase: An order placed ...
Facundo Iannello
