(Log-)likelihood of a mixture model
You have a model $g_{\theta}$ that describes a data sample $\mathbf{x}$, in this case your mixture model. This model depends on its parameters, here the means, covariances, and weights of the mixture components; for simplicity's sake we gather them in $\theta$.
The likelihood function gives us "the probability of the observed data under the model $g_{\theta}$. [...] We think of [the likelihood function] as a function of $\theta$, with our data $\mathbf{x}$ fixed." (1). That is, the value of the likelihood function tells us, in some sense, how plausible the model at hand is given the data we have. As a value on its own this isn't very helpful, but it lets us compare models with different parameter values on the same data, which is why it is useful, for example, in model inference.
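To make this concrete, here is a minimal sketch (my own illustration, not from your question; the sample and the two candidate means are made up) that evaluates the log-likelihood of one fixed sample under two parameter settings:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=500)  # fixed data sample

# Same data, two candidate parameter settings: only theta changes
ll_mu0 = norm.logpdf(x, loc=0.0, scale=1.0).sum()
ll_mu5 = norm.logpdf(x, loc=5.0, scale=1.0).sum()

print(ll_mu0, ll_mu5)  # ll_mu0 is far larger: mu=0 is the more plausible model
```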
Assuming independent observations $x_1, \dots, x_N$, the log-likelihood function is
$$\ell(\theta|\mathbf{x})=\sum_{i=1}^{N} \log( g_\theta(x_i))$$
where, in your case of a Gaussian mixture model, $g_\theta(x_i)$ is the mixture density $f(x_i)$
$$ f(x_i)= \sum_{m=1}^{M} \alpha_m \phi(x_i|\mu_m,\Sigma_m) $$
and the log-likelihood is
$$\ell(\theta|\mathbf{x})=\sum_{i=1}^{N} \log\left( \sum_{m=1}^{M} \alpha_m \phi(x_i|\mu_m,\Sigma_m)\right)$$
given the parameters $\theta=\{\alpha_1,\dots,\alpha_M,\mu_1,\dots,\mu_M,\Sigma_1,\dots,\Sigma_M\}$.
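A direct translation of this formula into code might look like the following sketch (my own illustration; I assume scipy is available and use `logsumexp` for numerical stability, since the inner sum of densities can underflow in higher dimensions):

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def gmm_log_likelihood(X, alphas, mus, Sigmas):
    """Log-likelihood of the data X under a Gaussian mixture with
    weights alphas, means mus, and covariance matrices Sigmas."""
    # log_dens[i, m] = log(alpha_m * phi(x_i | mu_m, Sigma_m))
    log_dens = np.column_stack([
        np.log(a) + multivariate_normal.logpdf(X, mean=mu, cov=S)
        for a, mu, S in zip(alphas, mus, Sigmas)
    ])
    # logsumexp over the components m, then sum over the observations i
    return logsumexp(log_dens, axis=1).sum()

# Example: a two-component mixture in two dimensions
X = np.random.default_rng(1).normal(size=(100, 2))
print(gmm_log_likelihood(X,
                         alphas=[0.5, 0.5],
                         mus=[np.zeros(2), np.ones(2)],
                         Sigmas=[np.eye(2), np.eye(2)]))
```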
Why the log-likelihood values in your example are different
We have seen by now that both the parameters and the data affect the value of the log-likelihood.
The models in your first image are trained on the original data set, while those in the second are trained on the result of the PCA you performed. As a consequence, the dimensionality of the two data sets, and probably also the ranges of the features, differ.
As you might know, the parameters of your mixture models are determined via EM (expectation-maximization) based on the data. I hope you see where this is going.
In the case you described, the data, and with it the fitted parameters of the models, are different; comparing the log-likelihood values in your example won't give you the insight you were looking for.
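You can see the effect with scikit-learn (a sketch under the assumption that you used `GaussianMixture` and `PCA`; the random data below just stands in for yours):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

X = np.random.default_rng(2).normal(size=(500, 10))   # stand-in for your data
X_pca = PCA(n_components=2).fit_transform(X)          # your second data set

gmm_raw = GaussianMixture(n_components=3, random_state=0).fit(X)
gmm_pca = GaussianMixture(n_components=3, random_state=0).fit(X_pca)

# score() is the average log-likelihood per sample; the two values are not
# comparable, because the densities are defined over different spaces.
print(gmm_raw.score(X), gmm_pca.score(X_pca))
```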
References
(1) Hastie, T., Tibshirani, R., & Friedman, J. (2001). The Elements of Statistical Learning. Springer Series in Statistics. New York: Springer.