
Questions tagged [projection]

For on-topic questions involving the mathematical concept of projection, a linear transformation $P$ such that $P=P^2$. Please also include a more specific statistical-methods tag. Purely mathematical questions about projections are better asked on Math SE: https://math.stackexchange.com/

0 votes
0 answers
28 views

LEfSe is a widely used tool in metagenomics with implementations in Python and R. (The former is the original one, which under the hood relies on R; the latter is a re-implementation in R.) The final ...
Roger V.
2 votes
1 answer
94 views

I am working with the following mixed model: $$ y = Xb + Zu + e $$ with variance components: $$ V(u) = G, V(e) = R, V(y) = V = ZGZ' + R $$ The inverse of $V$ is given by: $$ V^{-1} = R^{-1} - R^{-1} ...
Byoungho Park
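The excerpt cuts off mid-identity; the standard Woodbury/Henderson form is $V^{-1} = R^{-1} - R^{-1}Z(G^{-1} + Z'R^{-1}Z)^{-1}Z'R^{-1}$. A minimal numpy sketch, with made-up positive-definite $G$ and $R$, checking this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 6, 3
Z = rng.standard_normal((n, q))
A = rng.standard_normal((q, q)); G = A @ A.T + q * np.eye(q)  # made-up PD G
B = rng.standard_normal((n, n)); R = B @ B.T + n * np.eye(n)  # made-up PD R

V = Z @ G @ Z.T + R
Rinv = np.linalg.inv(R)
# Woodbury/Henderson: V^{-1} = R^{-1} - R^{-1} Z (G^{-1} + Z'R^{-1}Z)^{-1} Z'R^{-1}
Vinv = Rinv - Rinv @ Z @ np.linalg.inv(np.linalg.inv(G) + Z.T @ Rinv @ Z) @ Z.T @ Rinv
assert np.allclose(Vinv, np.linalg.inv(V))
```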
2 votes
1 answer
138 views

Recently, in my GLM course, we analyzed linear regression from a more philosophical perspective using Hilbert spaces. My teacher suggests that we be careful when two models are "nested" ...
daniel
1 vote
0 answers
120 views

Here is my interpretation of the hat matrix, decomposed so that I have an intuitive understanding of its parts: $H=X(X^TX)^{-1}X^T \Longrightarrow \hat y = Hy$, where $\hat y$ is the ...
amz_fan
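A minimal numpy sketch (made-up data) of the properties this interpretation rests on: $H$ is symmetric and idempotent, and $Hy$ equals the fitted vector $X\hat\beta$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.standard_normal((20, 2))])
y = rng.standard_normal(20)

H = X @ np.linalg.inv(X.T @ X) @ X.T
beta = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.allclose(H, H.T)           # symmetric
assert np.allclose(H @ H, H)         # idempotent: projecting twice changes nothing
assert np.allclose(H @ y, X @ beta)  # Hy is the fitted vector X beta-hat
```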
2 votes
0 answers
48 views

I am trying to understand the CVA function in the Morpho package in R, which performs canonical variate analysis, a.k.a. linear discriminant analysis. I am confused as to how the canonical variates (CV),...
Patrick
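Not the Morpho API itself, but an analogous computation in Python: scikit-learn's LinearDiscriminantAnalysis exposes canonical-variate scores through transform(). A hedged sketch with toy groups:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.standard_normal((90, 4))
y = np.repeat([0, 1, 2], 30)        # three groups -> at most 2 canonical variates
X[y == 1] += 2.0                    # shift groups apart so the CVs are meaningful
X[y == 2] -= 2.0

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
cv_scores = lda.transform(X)        # samples projected onto the canonical axes
print(cv_scores.shape)              # (90, 2)
```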
0 votes
0 answers
92 views

I am trying to develop a more robust methodology for a forecast model. This attempts to project the final number of recruits for this cycle based on comparing the current recruitment cycle to previous ...
Richard Manser
1 vote
1 answer
231 views

Background My context is I am thinking about maximum likelihood estimation of a parameter $\theta\in \mathbb{R}^r$ by drawing samples $\{x_i\}$ from a distribution which depends on $\theta$. I know ...
Jagerber48
10 votes
2 answers
508 views

I'm trying to draw a parallel between the concept of projections in a finite-dimensional linear space and in an infinite-dimensional one. Here is the set-up, first in the finite-dimensional case, and then second in the ...
absolutelyzeroEQ
3 votes
1 answer
186 views

I am reading up on linear regression from MIT 16.850. Here is how the lecture goes: Given: $Y_{n,1}$ (targets), $X_{n, p}$ (data), $t_{p, 1}$ (the parameters I'm optimizing over), True model: $Y = \...
figs_and_nuts
3 votes
1 answer
181 views

Consider a uniform probability distribution on a circle of radius $r$, i.e. $\{(x,y) \in \mathbb{R}^2: x^2 + y^2 = r^2 \}$. If we wish to project onto the x-axis, we can consider each point on the circle ...
SSD
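A Monte Carlo sketch of this projection (made-up radius): the projected x-coordinate follows the arcsine law with density $1/(\pi\sqrt{r^2-x^2})$, equivalently CDF $F(x) = 1/2 + \arcsin(x/r)/\pi$:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2.0
theta = rng.uniform(0, 2 * np.pi, 200_000)
x = r * np.cos(theta)                         # the projected coordinate

t = np.linspace(-0.99 * r, 0.99 * r, 9)
emp = np.array([np.mean(x <= ti) for ti in t])
theo = 0.5 + np.arcsin(t / r) / np.pi
print(np.max(np.abs(emp - theo)))             # small, up to Monte Carlo error
```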
0 votes
0 answers
65 views

Every textbook I encounter tells me that this is simply a meaningful relationship of a hat matrix, without explaining why: If H is the hat (projection) matrix, and our X matrix has full rank and a ...
Shebb
0 votes
0 answers
110 views

I want to estimate the impact of volatility shocks on cross-asset spillovers. I have series of spillovers, and I want to use a local projection model, and the volatility of some financial assets ...
justaneconomist
1 vote
2 answers
968 views

In the book I am currently reading, they propose the following proof: $\hat{Y} = X_1 \hat{\beta}_1 + X_2 \hat{\beta}_2$ Now because $M_1 = I - P_1$, we can always derive a formula for $X_2$: $X_2 = ...
Marlon Brando
2 votes
0 answers
272 views

I would be grateful if you could help me clear up some confusion regarding conditional expectation and regression. I have seen two formulations of the linear regression framework: $$Y=a+bX+\varepsilon\...
abeeisnotabug
2 votes
1 answer
161 views

I have data that look like this. My goal is to reduce the data from 3D to 2D so that they might look like this, turning the angle so that the distance between all classes becomes maximal. So ...
euraad
0 votes
1 answer
914 views

I would really appreciate it if anyone can guide me through this. I have a $n \times (p+1)$ matrix $X$. The projection matrix $P = X(X'X)^{-1}X'$. I want to prove that $P(i,i)$ is in $[0,1]$, where $P(...
Tahmid Mahmud
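The proof usually goes through symmetry and idempotence: $P_{ii} = \sum_j P_{ij}^2 \ge P_{ii}^2 \ge 0$, hence $0 \le P_{ii} \le 1$. A quick numeric illustration with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.standard_normal((30, 4))])  # n x (p+1)
P = X @ np.linalg.inv(X.T @ X) @ X.T
d = np.diag(P)
assert np.allclose(d, np.sum(P**2, axis=1))   # P_ii = sum_j P_ij^2 (P = P', P^2 = P)
assert d.min() >= -1e-12 and d.max() <= 1 + 1e-12
```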
2 votes
1 answer
105 views

Given $X=(d\; X_1)$, we want to prove that $$(d'd)^{-1}d'P_{[d X_1]}y=(d'd)^{-1}d'y,$$ where $P_{[X]}y=\hat{y}.$ That is, restricting the regression to the subsample for which $d_i=1$, we have that ...
pommefatale
1 vote
0 answers
115 views

I've come across a phenomenon from a simulation that I'm very curious about. But I don't know how to start my analysis. So, I am asking for some guidance. Thanks! Denote by $\mathbf{H}$ the principal ...
Huihang
1 vote
2 answers
159 views

According to 4.5.13 $\hat{A}_{mh}=\left(\frac{1}{n}\sum_{i=1}^{n}z_{im}x_{i}^{\prime}\right)\left(\frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}^{\prime}\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_{i}z_{ih}^{\...
Hagan Ross
1 vote
0 answers
114 views

Let $\mu$ be some probability measure and consider its information projection defined by $P_* = \arg\min_{P \in \mathcal{P}} KL(P || \mu)$ and $\mathcal{P}$ is some convex family of probability ...
yprobnoob
0 votes
0 answers
53 views

As shown in this Cross-Validated post, Close curves on an Andrews plot, I don't understand how, in the accepted answer, the cross-covariance can be defined as $$\int_{-\pi}^{\pi}f_xf_ydt$$ Considering ...
WorseThanEinstein
1 vote
0 answers
46 views

In linear regression whose matrix form is $$Y = Xb + E$$ where $X=[X_1,X_2]$: if the error of using only $X_1$ (i.e. $Y = X_1b+E$) is $e_1$, and the error of using $X_2$ (i.e. $Y=X_2b+E$) is $e_2$, ...
Gavin
0 votes
0 answers
56 views

Phrasing Attempt 1 If I have one function $f_1: \mathcal{X} \rightarrow \mathbb{R}^{D_1}$ that yields a particular hat matrix $P_1 \in \mathbb{R}^{N \times N}$, how do I find the function $f_2: \...
Rylan Schaeffer
0 votes
0 answers
13 views

From my understanding, this formula is used for least-squares when we're interested in minimizing the distance between a point and some space we are projecting on. Somebody can correct me if this is ...
Lydia
0 votes
0 answers
96 views

I understand that in the least squares setting, the projection of $\mathbf{y}$ onto the column space of $\mathbf{X}$ is the vector $\mathbf{Xb}$, where the vector $$\mathbf{b=(X'X)^{-1}X'y}$$ Thus the ...
eisendon
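A minimal sketch of this projection reading (made-up data): with $b=(X'X)^{-1}X'y$, the residual $y - Xb$ is orthogonal to every column of $X$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((25, 3))
y = rng.standard_normal(25)

b = np.linalg.inv(X.T @ X) @ X.T @ y   # b = (X'X)^{-1} X'y
resid = y - X @ b
assert np.allclose(X.T @ resid, 0)     # normal equations: X'(y - Xb) = 0
```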
1 vote
1 answer
224 views

I am working through the proof of Lemma 2 in this paper (page 25, need it for my own research) and I am stuck at the very first step. Here, I will formulate a bit simplified version of this step. ...
Misius
3 votes
1 answer
98 views

Let $X_t = \phi X_{t-12}+Z_t+\theta Z_{t-1}$ where $Z_t\sim WN(0,1)$. I need to find the prediction error for projecting $X_t$ onto $H_{t-3}(X)$ (a Hilbert space). So, I know that $X_t \perp P_{H_{t-3}}X_t$ ...
thesecond
1 vote
2 answers
305 views

We need to show that a smoothing spline of $y_i$ to $x_i$ retains the local regression part of the fit. For linear regression, this problem seems trivial because it is relatively easy to move from $...
Bruh
1 vote
0 answers
60 views

I am trying to build a solid background in linear regression using linear algebra. In linear algebra, there are some chapters related to linear regression (orthogonality, projection). I learned some ...
Dougie Hwang
1 vote
0 answers
63 views

My understanding could be wrong, as the lecture series from my university isn't clear and there are no links to share. But having rewatched a few times, I think the point being made is: We want the ...
Mr. Johnny Doe
0 votes
1 answer
288 views

For OLS in matrix form, we are taught that the hat matrix is $X(X^TX)^{-}X^T$ and is idempotent, i.e. when it is multiplied by itself, it self-cancels and leads back to the same hat matrix. I ...
jojorabbit
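A numeric sketch of the "self-cancelling" claim; here the generalized inverse $(X^TX)^{-}$ is taken to be numpy's pseudoinverse, so it also covers a rank-deficient $X$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((15, 3))
X = np.column_stack([X, X[:, 0] + X[:, 1]])   # deliberately rank-deficient
H = X @ np.linalg.pinv(X.T @ X) @ X.T         # (X'X)^- as a pseudoinverse
assert np.allclose(H @ H, H)                  # H "self-cancels": HH = H
assert np.allclose(H, H.T)                    # and is symmetric
```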
5 votes
1 answer
2k views

Linear projection (or fully connected layer) is perhaps one of the most common operations in deep learning models. When doing linear projection, we can project a vector $x$ of dimension $n$ to a ...
Yu Gu
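A minimal sketch of such a projection with plain numpy (in a real model $W$ would be a learned parameter):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
W = rng.standard_normal((m, n))   # weight matrix; learned in a real network
x = rng.standard_normal(n)
y = W @ x                         # project x from dimension n down to m
print(y.shape)                    # (3,)
```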
4 votes
1 answer
1k views

I've been doing a lot of Kalman filtering work recently. I've derived all the equations starting from a basic linear inverse problem, so strictly speaking I know where everything comes from. I also ...
Pavel Komarov
1 vote
1 answer
87 views

I have a chart that displays numbers for 2017, 2018, 2019 and 2020, but the 2020 numbers are based on projections - I want to display the "actual" value on the data label next to the ...
dwirony
1 vote
0 answers
169 views

I have a set of data points in high-dimensional space that I wish to map onto a lower dimension (3D or 2D). Question: How do I obtain the projection (hyper)plane (e.g., its normal vector or its set ...
Miss Swiss
5 votes
1 answer
2k views

I'm new to machine learning and came across the projection matrix. In a random thread it was interpreted as: The matrix $X(X^\text{T} X)^{-1} X^\text{T}$ is a projection matrix, as it does precisely that:...
offset-null1
3 votes
1 answer
3k views

I vaguely remember seeing somewhere that the conditional expectation $E(Y|X)$ can be interpreted as projection of random variable $Y$ onto random variable $X$. My question is: Is the aforementioned ...
ExcitedSnail
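One concrete reading of that projection: among all functions $g(X)$, $g(X)=E[Y\mid X]$ minimizes $E[(Y-g(X))^2]$. A Monte Carlo sketch with a made-up model $Y = X^2 + \varepsilon$, where $E[Y\mid X] = X^2$ and the best constant (and, here, best linear) predictor is $E[Y]=1$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(100_000)
Y = X**2 + rng.standard_normal(100_000)

mse_cond = np.mean((Y - X**2) ** 2)   # g(X) = E[Y|X] = X^2
mse_const = np.mean((Y - 1.0) ** 2)   # best constant predictor E[Y] = 1
print(mse_cond, mse_const)            # ~1.0 versus ~3.0
```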
3 votes
1 answer
198 views

Suppose I have $n$ observation indexed by $i$ and that each observation is part of a group $g$. I want to compare two regressions. First regression: $$ Y_i=\beta X_i + \alpha F_{g(i)}+\varepsilon_i $$ ...
user_lambda
1 vote
0 answers
179 views

Suppose that I have a set of observations indexed by $i$. Each observation belongs to a group $g$. Let's define $\hat{Z}_i=Z_i - E\left[Z_i\vert g(i)\right]$, the residual after projecting $Z$ on ...
user_lambda
3 votes
1 answer
2k views

I have a point $\mathbf{x}$ in 3-dimensional space, which is measured with a degree of uncertainty. The point falls within a unit cube, and the uncertainty is assumed to follow a multivariate normal ...
Joe Baker
0 votes
1 answer
111 views

Say I want to open a shop, but first I want to project the likely sales in the first 5 years to see if it is a viable option. I have data pertaining to hundreds of other start-ups, including their success ...
JFG123
1 vote
1 answer
531 views

Starting with the 'residual maker' defined by $M$ in $e = Y-\hat{Y} = Y-X(X'X)^{-1}X'Y = [I-X(X'X)^{-1}X']Y = MY$, where $e$ is the regression residual: one common equality I see relating the regression ...
Steve
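A minimal numpy sketch (made-up data) of the residual maker's defining properties: $M$ is idempotent, annihilates $X$, and $MY$ gives residuals orthogonal to the regressors:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.standard_normal((20, 2))])
Y = rng.standard_normal(20)

P = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(20) - P
e = M @ Y
assert np.allclose(M @ M, M)       # idempotent
assert np.allclose(M @ X, 0)       # annihilates the regressors
assert np.allclose(X.T @ e, 0)     # residuals orthogonal to X
```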
3 votes
0 answers
157 views

$$\hat \beta_{GLS} = (X'V^{-1}X)^{-1}X'V^{-1}Y,$$ $$Y=X\hat \beta_{GLS} + \epsilon = X(X'V^{-1}X)^{-1}X'V^{-1}Y + \epsilon = X(X'V^{-1}X)^{-1}X'V^{-1}Y + (I-X(X'V^{-1}X)^{-1}X'V^{-1})Y.$$ It is clear that ...
Maverick Meerkat
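The matrix $A = X(X'V^{-1}X)^{-1}X'V^{-1}$ appearing here is an oblique projector: $A^2 = A$ but $A \ne A'$ in general. A quick numeric check with made-up $V$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
B = rng.standard_normal((n, n))
V = B @ B.T + n * np.eye(n)           # made-up positive-definite covariance
Y = rng.standard_normal(n)

Vinv = np.linalg.inv(V)
A = X @ np.linalg.inv(X.T @ Vinv @ X) @ X.T @ Vinv
assert np.allclose(A @ A, A)          # idempotent (oblique: A != A.T)
assert np.allclose(A @ Y + (np.eye(n) - A) @ Y, Y)   # the decomposition of Y
```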
1 vote
0 answers
99 views

I'm reading a paper and wondering whether I get things right: Say I have a variable Y and two further variables $X_1$ and $X_2$. I want to determine the part of $Y$ which is not linearly explained by $...
Laura
2 votes
1 answer
2k views

After performing PCA I would like to project any new samples into the principal component space (I would like to see how the samples cluster together). I did the PCA in R: ...
Something like that
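The question uses R, but the same idea in Python/scikit-learn is a one-liner: fit PCA on the training samples, then transform() the new ones (it centers with the training mean before projecting). A hedged sketch:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.standard_normal((100, 5))
X_new = rng.standard_normal((10, 5))

pca = PCA(n_components=2).fit(X_train)
scores_new = pca.transform(X_new)   # new samples in the fitted component space
print(scores_new.shape)             # (10, 2)
```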
3 votes
1 answer
864 views

I'm trying to finish proving that in simple-regression analysis, $h_i = \frac{1}{n} + \frac{(X_i - \bar{X})^2}{\sum_{j=1}^{n}(X_j - \bar{X})^2}$, where $h_i := h_{ii} = \sum_{j=1}^nh_{ij}^2$, the ...
Jake
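A numeric check of that closed form against the diagonal of $H$ (made-up data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 15
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

H = X @ np.linalg.inv(X.T @ X) @ X.T
h_formula = 1 / n + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
assert np.allclose(np.diag(H), h_formula)
```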
2 votes
0 answers
115 views

I am preparing a small example of a projection using Python, numpy and sklearn to perform ...
Luis
1 vote
2 answers
335 views

The proof of the Wold Decomposition [1] of $x_t$ involves the definition of the process $$w_t = x_t - P_{\mathcal{M}_{t-1}^x} x_t,$$ where $x_t$ is a stationary zero-mean process, $\mathcal{M}_n^x = ...
toliveira
3 votes
0 answers
262 views

I recently read this article explaining the geometric meaning of the covariance matrix: http://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/ My question is: is there an ...
user152503
2 votes
1 answer
356 views

Consider two random variables $Y$ and $X$. In the context of best linear prediction, if we would like to predict $Y$ given $X$, we derive the solution by solving the following minimization problem ...
Fam
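The minimizer of $E[(Y-a-bX)^2]$ is $b = \mathrm{Cov}(X,Y)/\mathrm{Var}(X)$ and $a = E[Y]-bE[X]$. A Monte Carlo sketch recovering made-up population values:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(100_000)
Y = 2.0 + 3.0 * X + rng.standard_normal(100_000)   # made-up a = 2, b = 3

b = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)
a = Y.mean() - b * X.mean()
print(a, b)   # close to 2 and 3
```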