
I am following a paper that uses a Hermitian covariance matrix

[image: the paper's definition of the Hermitian covariance matrix]

and inverts it to produce a Fisher matrix. I construct my covariance as Gamma[i,m,n], stored inside a larger array of shape (n_z, n_k, n_mu, 3, 3), where n_z, n_k and n_mu are numbers of data points in z, k, and mu. Each inner 3×3 block should be Hermitian and invertible.

I verified Hermiticity with:

for i in range(n_z):
    for m in range(n_k):
        for n in range(n_mu):
            print(np.allclose(Gamma[i, m, n], Gamma[i, m, n].conj().T, atol=1e-10, rtol=0))

This prints True for all indices. I also symmetrized explicitly:

A = np.zeros_like(Gamma)
for i in range(n_z):
    for m in range(n_k):
        for n in range(n_mu):
            A[i, m, n] = 0.5*(Gamma[i, m, n] + Gamma[i, m, n].T.conj())
            print(np.allclose(A[i, m, n], Gamma[i, m, n], atol=1e-10))

Again True for all. The diagonal entries are real (+0j) and off-diagonals are mutual conjugates as expected.
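
For completeness, the same Hermiticity check can be done with one broadcasted comparison over all blocks at once (a minimal sketch, assuming Gamma has shape (n_z, n_k, n_mu, 3, 3)):

# Conjugate-transpose the trailing 3x3 axes of every block and compare in a single call.
Gamma_H = np.conjugate(np.swapaxes(Gamma, -1, -2))
print(np.allclose(Gamma, Gamma_H, atol=1e-10, rtol=0))  # True iff every block is Hermitian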

However, when I attempt to invert each 3×3 block I sometimes get LinAlgError: Singular matrix. Example output from:

for l in range(n_z):
    for m in range(n_k):
        for n in range(n_mu):
            g = Gamma[l, m, n]
            try:
                invg = np.linalg.inv(g)
            except np.linalg.LinAlgError:
                print(f"Matrix not invertible at (l={l}, m={m}, n={n})")
                print("det:", np.linalg.det(g))

prints for example:

Matrix not invertible at (l=1, m=3, n=7)
det: 0j
Matrix not invertible at (l=1, m=5, n=3)
det: 0j

So for those indices det(g) == 0 (within floating point), hence singular. This happens for 23 of the 900 blocks, not all of them, which is unexpected because each block is supposed to be a valid, positive (semi-)definite covariance matrix.
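
To quantify how close the failing blocks are to exact singularity, the rank and condition number can be evaluated over the whole stack in one go; this is only a diagnostic sketch relying on NumPy's broadcasting over the leading axes:

# Both routines accept stacks of matrices and broadcast over (n_z, n_k, n_mu).
conds = np.linalg.cond(Gamma)                          # shape (n_z, n_k, n_mu)
ranks = np.linalg.matrix_rank(Gamma, hermitian=True)   # rank of each 3x3 block

bad = np.argwhere(ranks < 3)
print(len(bad), "rank-deficient blocks out of", ranks.size)
for l, m, n in bad:
    print((l, m, n), "cond =", conds[l, m, n])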

I inspected eigenvalues with:

for i in range(n_z):
    for m in range(n_k):
        for n in range(n_mu):
            print(np.linalg.eigvals(Gamma[i, m, n]))

For example, at (0,0,0) I get:

[ 1.70674158e+09-3.5e-09j,  2.29744006e+01-8.1e-09j,
 -1.01287448e-08+1.2e-08j ]

Small imaginary parts look like numeric noise, and the small negative real part of the third eigenvalue is also consistent with rounding to zero. But the determinant for (0,0,0) is:

(4783.194482059961+83.43640215104267j)

which has a noticeable imaginary part. Determinants like this still allow me to invert the block; it is only when the determinant comes out as exactly 0 that the inversion fails.
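
Since each block is (numerically) Hermitian, np.linalg.eigvalsh is the more appropriate routine for this inspection: it treats its input as exactly Hermitian and returns real, sorted eigenvalues, so imaginary-part noise cannot obscure whether a block genuinely has a zero or negative eigenvalue. A minimal sketch:

# eigvalsh broadcasts over the leading axes and returns real eigenvalues in ascending order.
eigs = np.linalg.eigvalsh(Gamma)                 # shape (n_z, n_k, n_mu, 3)
print("smallest eigenvalue over all blocks:", eigs[..., 0].min())
print("largest eigenvalue over all blocks:", eigs[..., -1].max())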

The code that builds Gamma (most constants fixed; CAMB is an astronomy library) is:

import camb
from camb import model
import numpy as np
from scipy.integrate import quad

# --- Constants and conversions ---
c_light = 2.998e5  # km/s

h0 = 0.6774
Om = 0.31
Ob = 0.05

H0 = h0 * 100
ns = 0.967
As = 2.142e-9

# --- Cosmology functions (H, chi, Vsur, Nk, Pkm) ---
# (definitions omitted here for brevity; full code used in my script)

# --- CAMB setup ---
pars = camb.CAMBparams()
pars.set_cosmology(H0=H0, ombh2=Ob*h0**2, omch2=(Om-Ob)*h0**2, omk=0, mnu=0)
pars.InitPower.set_params(ns=ns, r=0, As=As)
pars.set_matter_power(redshifts=[0.133, 0.3, 0.467], kmax=0.2)
pars.NonLinear = model.NonLinear_none
results = camb.get_results(pars)

# --- Arrays ---
S_area = 10000
omega = S_area*(np.pi/180)**2
z = np.array([0.133, 0.3, 0.467])
Dz = 0.111

deltak = [kmin(zi, Dz, omega) for zi in z]
k = [np.logspace(np.log10(dk), np.log10(0.2), num=30) for dk in deltak]
k = np.array(k)                  # -> shape (n_z, n_kpoints)
mu = np.array([np.linspace(-1, 1, num=10) for _ in z])

Deltamu = 2
n_z = 3
n_k = 30
n_mu = 10

pk = np.array([[Pkm(kv, z[i]) for kv in k[i]] for i in range(n_z)])  # shape (n_z, n_k); pk[i, m] is a scalar
# ... compute f, h, biases, alphas, n_g arrays ...

Gamma = np.zeros((n_z, n_k, n_mu, 3, 3), dtype=complex)

def P_auto_tilde(mui, hi, ki, alpha, b, fi, ng, pki):
    return ((b + fi*mui**2)**2 + (hi/ki)**2 * alpha**2 * fi**2 * mui**2) * pki + ng

def Pxy(mui, hi, ki, a1, a2, b1, b2, fi):
    return (
        (b1 + fi*mui**2)*(b2 + fi*mui**2)
        + (hi/ki)**2 * a1*a2 * fi**2 * mui**2
        - 1j * (fi*hi*mui*(a1*(b2+fi*mui**2) - a2*(b1+fi*mui**2)) / ki)
    )

for i in range(n_z):
    for m in range(n_k):
        for n in range(n_mu):
            mu_val = mu[i, n]
            h_val  = h[i]
            k_val  = k[i, m]
            f_val  = f[i]
            pk_val = pk[i, m]

            Pxx = P_auto_tilde(mu_val, h_val, k_val, a1_val, b1_val, f_val, ng1_val, pk_val)
            Pyy = P_auto_tilde(mu_val, h_val, k_val, a2_val, b2_val, f_val, ng2_val, pk_val)
            Pxy_val = Pxy(mu_val, h_val, k_val, a1_val, a2_val, b1_val, b2_val, f_val) * pk_val
            Pyx_val = Pxy_val.conj()

            pref = 2.0 / Nk(z[i], k_val, deltak[i], Deltamu, Dz, omega)

            M = np.zeros((3, 3), dtype=complex)
            M[0, 0] = Pxx**2
            M[0, 1] = Pxx * Pxy_val
            M[0, 2] = Pxy_val**2

            M[1, 0] = M[0, 1].conj()
            M[1, 1] = 0.5 * (Pxx*Pyy + Pxy_val*Pyx_val)
            M[1, 2] = Pxy_val * Pyy

            M[2, 0] = M[0, 2].conj()
            M[2, 1] = M[1, 2].conj()
            M[2, 2] = Pyy**2

            Gamma[i, m, n] = M * pref
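
For reference, the determinants can also be computed over the entire (n_z, n_k, n_mu, 3, 3) stack in a single broadcasted call, which makes it easy to list exactly which blocks come out singular; a short sketch:

# np.linalg.det broadcasts over the leading axes and returns a complex (n_z, n_k, n_mu) array here.
dets = np.linalg.det(Gamma)
singular = np.isclose(np.abs(dets), 0.0)
print(singular.sum(), "blocks with |det| ~ 0, at indices:")
print(np.argwhere(singular))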

Why are some 3×3 blocks singular (zero determinant) even though they look Hermitian, and how can I fix or diagnose this? What are robust ways to ensure invertibility, for example via regularization?

Edit: thanks to the hint from @NickODell, it seems likely that for the problematic data points some rows/columns are numerically linearly dependent, i.e. they are approximately related by a linear scaling. How would you recommend resolving that without resorting to pinv()?
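
(To check that hint, one can inspect the singular values of a problematic block: rows/columns that are approximately a linear rescaling of each other produce at least one singular value near zero. A small diagnostic sketch for one of the failing indices reported above:)

# Singular values of a failing block; a near-zero value signals
# numerically linearly dependent rows/columns.
g = Gamma[1, 3, 7]
print(np.linalg.svd(g, compute_uv=False))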

Comments

  • Please make sure your minimal reproducible example has all the necessary info; it is currently missing n_z, n_k, n_mu, mu, pref, and so on.
  • Also, "Gamma shape (3,3)" doesn't match Gamma = np.zeros((n_z, n_k, n_mu, 3, 3), dtype=complex)...
  • "One might think this is just numerical noise": note that the imaginary part of each eigenvalue is approximately 10**17 times smaller than the real part of the largest eigenvalue. NumPy is computing an approximation of an eigenvalue with a limited number of bits of precision, and it is not clear to me that being approximately Hermitian guarantees either an approximately real determinant or approximately real eigenvalues. It is not clear to me that you can rely on this complex value being zero, even if the algebra guarantees it.
  • Also, can you be more specific about how you are finding the eigenvalues and determinant? Are you running np.linalg.det() over a (3, 3) array, or are you using broadcasting to run np.linalg.det() over a (n_z, n_k, n_mu, 3, 3) array? They should give the same result, but occasionally this kind of thing gives a different result in NumPy. Same question for eigenvalues.
  • @NickODell I have already made an edit that hopefully addresses your questions. My main concern is not simply that the eigenvalues or determinant are complex, since that could be attributed to numerical error. The real issue arises when I try to invert the matrix: the small imaginary parts are large enough to prevent the inversion, which doesn't happen when it is only numerical error, from what I have observed.

1 Answer


The issue wasn't with the code itself, but with the expression: it should be 1/ng instead of ng.
That change makes a big difference, since 1/ng keeps the values on a proper scale and ensures the determinant is non-zero.
After changing it to 1/ng, the inversion works as expected.
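
Concretely, a sketch of the corrected auto-power term from the question (with ng presumably the tracer number density, so that 1/ng plays the role of the shot-noise contribution):

def P_auto_tilde(mui, hi, ki, alpha, b, fi, ng, pki):
    # Shot noise enters as 1/ng, not ng.
    return ((b + fi*mui**2)**2 + (hi/ki)**2 * alpha**2 * fi**2 * mui**2) * pki + 1.0/ng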


