
Suppose we are given a list of $N$ positive definite quadratic forms $X^TQ_k X$ (where $k\in[1,N]$ and $Q_k\in\mathbb{R}^{p\times p}$ $\forall k$), and a positive vector $V$ of the same length $N$, i.e. $V=(v_k)_{k\in[1,N]}$ with $v_k>0$.

My question is the following: how do we find the vector $X^*\in\mathbb{R}^p$ minimizing the least-squares distance between the array of quadratic forms and the vector $V$, i.e. solving $$X^*=\underset{X}{\mathrm{argmin}} \sum_{k=1}^N \left(X^TQ_k X - v_k\right)^2 \quad ?$$ The scalar case ($p=1$) is easy to solve, giving $X^*=\pm\sqrt{\frac{\sum q_kv_k}{\sum q_k^2}}$, but I'm not sure how to proceed in the general case $p>1$.
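For reference, the scalar closed form can be checked numerically; a minimal sketch (NumPy/SciPy, random illustrative data):

```python
# Sanity check of the scalar (p = 1) closed form X* = ±sqrt(Σ q_k v_k / Σ q_k²)
# against a generic 1-D optimizer, on an illustrative random instance.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
N = 50
q = rng.uniform(0.1, 2.0, size=N)   # positive "forms" for p = 1
v = rng.uniform(0.1, 2.0, size=N)   # positive targets

def loss(x):
    return np.sum((q * x**2 - v) ** 2)

x_closed = np.sqrt(np.sum(q * v) / np.sum(q**2))
x_num = minimize_scalar(loss, bounds=(0.0, 10.0), method="bounded").x

print(x_closed, x_num)  # agree up to solver tolerance
```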

  • Writing the minimisation problem as a zero first-order-derivative condition in $X$ when $p>1$, we end up with the following equation for $X^*$: $$\sum_{k=1}^N \left({X^*}^T Q_k X^*-v_k\right) Q_kX^*=0.$$ Yet I have no clue how to solve this, whether analytically or numerically. Does this correspond to a known type of equation? Commented Feb 14 at 14:56
  • It's a quartic optimization problem, so I wouldn't expect an analytical or algebraic solution. Furthermore, in most statistical applications this would not be a good loss function to use: the distributions of quadratic forms tend to be asymmetric. Do you have a statistical application in mind? BTW, are there any restrictions on these forms? Your specification of "positive vectors" suggests the $Q_k$ are at least positive definite. Commented Feb 14 at 15:04
  • Yes, the $Q_k$ matrices are PD (just added this to the original post). Commented Feb 14 at 15:10
  • The context is (remotely) SVD on time-series. @whuber: any quartic solvers to recommend? Commented Feb 14 at 15:16
  • $Q_k$'s can be taken to be Toeplitz matrices. Commented Feb 14 at 15:46
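As suggested in the comments, a direct numerical route is to minimize the quartic objective with a quasi-Newton method, supplying the analytic gradient $\nabla f(X) = 4\sum_k (X^TQ_kX - v_k)\,Q_kX$ derived above. A minimal sketch on an illustrative random instance (non-convex, so a few restarts are used):

```python
# Direct minimization of the quartic objective with BFGS and its analytic
# gradient: grad f(x) = 4 * sum_k (x^T Q_k x - v_k) Q_k x.
# Random SPD instance for illustration only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
p, N = 4, 30

Qs = []
for _ in range(N):
    A = rng.standard_normal((p, p))
    Qs.append(A @ A.T + p * np.eye(p))   # SPD Q_k
x_true = rng.standard_normal(p)
v = np.array([x_true @ Q @ x_true for Q in Qs])

def f(x):
    return sum((x @ Q @ x - vk) ** 2 for Q, vk in zip(Qs, v))

def grad(x):
    return 4 * sum((x @ Q @ x - vk) * (Q @ x) for Q, vk in zip(Qs, v))

# The objective is quartic and non-convex, so try a few random restarts
# and keep the best local minimizer found.
best = min(
    (minimize(f, rng.standard_normal(p), jac=grad, method="BFGS")
     for _ in range(5)),
    key=lambda r: r.fun,
)
print(best.fun, best.x)
```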

2 Answers


The problem is given by:

$$ \arg \min_{\boldsymbol{x}} \sum_{k = 1}^{N} {\left( \boldsymbol{x}^{\top} \boldsymbol{Q}_{k} \boldsymbol{x} - {v}_{k} \right)}^{2} $$

Since $\boldsymbol{x}^{\top} \boldsymbol{Q}_{k} \boldsymbol{x} = \operatorname{vec} \left( \boldsymbol{Q}_{k} \right)^{\top} \operatorname{vec} \left( \boldsymbol{x} \boldsymbol{x}^{\top} \right)$, the vector of the $N$ quadratic forms is $\tilde{\boldsymbol{Q}}^{\top} \operatorname{vec} \left( \boldsymbol{x} \boldsymbol{x}^{\top} \right)$, where $\tilde{\boldsymbol{Q}} = \begin{bmatrix} \mid & & \mid \\ \operatorname{vec} \left( \boldsymbol{Q}_{1} \right) & \cdots & \operatorname{vec} \left( \boldsymbol{Q}_{N} \right) \\ \mid & & \mid \end{bmatrix}$ and $\operatorname{vec} \left( \cdot \right)$ is the vectorization operator.

Then, the problem can be written as:

$$ \arg \min_{\boldsymbol{z}} \frac{1}{2} {\left\| \tilde{\boldsymbol{Q}}^{T} \boldsymbol{z} - \boldsymbol{v} \right\|}_{2}^{2} $$

Where $\boldsymbol{z} = \operatorname{vec} \left( \boldsymbol{x} \boldsymbol{x}^{\top} \right)$.

As a linear least-squares problem in $\boldsymbol{z}$, this has a closed-form solution.
Once you solve it, you need to recover $\boldsymbol{x}$ from $\boldsymbol{z}$.
This step requires some prior, as the problem on its own has infinitely many solutions: the least-squares minimizer $\boldsymbol{z}^{\star}$ is generally not of the form $\operatorname{vec} \left( \boldsymbol{x} \boldsymbol{x}^{\top} \right)$. A common heuristic is to reshape $\boldsymbol{z}^{\star}$ into a $p \times p$ matrix, symmetrize it, and take its best rank-one approximation $\lambda_{1} \boldsymbol{u}_{1} \boldsymbol{u}_{1}^{\top}$, giving $\boldsymbol{x} = \pm \sqrt{\lambda_{1}} \boldsymbol{u}_{1}$.
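A minimal NumPy sketch of this pipeline (the random instance and all names are illustrative, and the rank-one step is a heuristic projection back to feasible $\boldsymbol{z}$):

```python
# Lifted least-squares: solve for z ≈ vec(x x^T), then recover x from the
# best rank-one approximation. Illustrative noiseless random SPD instance.
import numpy as np

rng = np.random.default_rng(1)
p, N = 3, 40

Qs = []
for _ in range(N):
    A = rng.standard_normal((p, p))
    Qs.append(A @ A.T + p * np.eye(p))        # SPD Q_k
x_true = rng.standard_normal(p)
v = np.array([x_true @ Q @ x_true for Q in Qs])

# Columns of Q_tilde are vec(Q_k); the model reads Q_tilde^T z = v.
Q_tilde = np.column_stack([Q.reshape(-1) for Q in Qs])
z, *_ = np.linalg.lstsq(Q_tilde.T, v, rcond=None)   # min-norm LS solution

# Recovery heuristic: symmetrize the reshaped z, take the scaled top eigenvector.
Z = z.reshape(p, p)
Z = 0.5 * (Z + Z.T)
w, U = np.linalg.eigh(Z)
x_hat = np.sqrt(max(w[-1], 0.0)) * U[:, -1]   # sign is inherently ambiguous

print(np.abs(x_hat), np.abs(x_true))
```

In this noiseless setting the minimum-norm solution already lies in the symmetric subspace, so the recovery is exact up to the sign ambiguity.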

---

Another formulation can take advantage of the Symmetric Positive Definite (SPD) property of the matrices $\boldsymbol{Q}_{k}$.

For a specific $k$ one can write, using the eigendecomposition of the matrix:

$$ \boldsymbol{x}^{\top} \boldsymbol{Q}_{k} \boldsymbol{x} = \boldsymbol{x}^{\top} \boldsymbol{U}_{k}^{\top} \boldsymbol{D}_{k} \boldsymbol{U}_{k} \boldsymbol{x} $$

where $\boldsymbol{U}_{k}^{\top} \boldsymbol{D}_{k} \boldsymbol{U}_{k} = \boldsymbol{U}_{k}^{\top} \operatorname{Diag} \left( \boldsymbol{d}_{k} \right) \boldsymbol{U}_{k}$ is the eigenvalue decomposition of $\boldsymbol{Q}_{k}$, with $\boldsymbol{d}_{k} > \boldsymbol{0}$ since $\boldsymbol{Q}_{k}$ is SPD.
Setting $\boldsymbol{z}_{k} = \boldsymbol{U}_{k} \boldsymbol{x}$, the problem becomes $\sum_{k = 1}^{N} {\left( \sum_{i = 1}^{p} {d}_{k, i} {z}_{k, i}^{2} - {v}_{k} \right)}^{2}$.

This can be relaxed into a linear least-squares problem in the variables $\boldsymbol{y}_{k} = \boldsymbol{z}_{k}^{\circ 2}$ (element-wise squares).
It requires building a matrix $\tilde{\boldsymbol{D}} \in \mathbb{R}^{N \times N p}$ whose $k$-th row holds the $p$ values of $\boldsymbol{d}_{k}$ in columns $(k - 1) p + 1, \dots, k p$ and zeros elsewhere; it multiplies the vector stacking all the $\boldsymbol{y}_{k}$.
Since non-negativity of the $\boldsymbol{y}_{k}$ is not automatic, this should be solved as a non-negative least-squares (NNLS) problem, so that $\sqrt{\boldsymbol{y}_{k}}$ is well defined.
Note, however, that the relaxation drops the coupling $\boldsymbol{z}_{k} = \boldsymbol{U}_{k} \boldsymbol{x}$ between the blocks (there are $N p$ unknowns for only $N$ equations), so its solution is far from unique.

Then the relations $\boldsymbol{U}_{k} \boldsymbol{x} = \boldsymbol{z}_{k}$ can be used to recover $\boldsymbol{x}$.

I didn't solve it all the way, but I think the end result will be something like:

$$ \boldsymbol{x} = \frac{1}{N} \sum_{k = 1}^{N} \boldsymbol{U}_{k}^{\top} \sqrt{\boldsymbol{y}_{k}} $$

There will be an ambiguity about the sign of each element.
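A literal sketch of this construction (illustrative random instance; given the underdetermination and the lost element-wise signs, this is a heuristic rather than an exact solver, and an arbitrary positive sign is chosen per component):

```python
# Per-k eigendecomposition + NNLS relaxation, taken literally.
# The NNLS system has N equations but N*p unknowns, and y_k = z_k^2 loses
# the signs of z_k, so the final averaging is only a heuristic.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
p, N = 3, 40

Qs = []
for _ in range(N):
    A = rng.standard_normal((p, p))
    Qs.append(A @ A.T + p * np.eye(p))   # SPD Q_k
x_true = rng.standard_normal(p)
v = np.array([x_true @ Q @ x_true for Q in Qs])

# Eigendecompositions Q_k = U_k^T Diag(d_k) U_k (rows of U_k are eigenvectors).
ds, Us = [], []
for Q in Qs:
    w, V = np.linalg.eigh(Q)             # Q = V diag(w) V^T
    ds.append(w)
    Us.append(V.T)                       # U = V^T so that Q = U^T diag(d) U

# Row k of D_tilde holds d_k in columns k*p .. (k+1)*p - 1, zeros elsewhere.
D_tilde = np.zeros((N, N * p))
for k in range(N):
    D_tilde[k, k * p:(k + 1) * p] = ds[k]

y, _ = nnls(D_tilde, v)                  # y >= 0 stacks the blocks y_k

# Heuristic recovery with an arbitrary (+) sign choice per component.
x_hat = sum(Us[k].T @ np.sqrt(y[k * p:(k + 1) * p]) for k in range(N)) / N
print(x_hat)
```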

