In some papers on $\ell_1$-penalized regression, $$ \hat{\beta}=\underset{\beta\in\mathbb{R}^{p}}{\arg\min}\;\|Y-X\beta\|_2^2+\lambda\|\beta\|_1, $$ the authors state that they obtained theoretical results for this estimator; one of these results is the consistency of $\hat{\beta}$.
In high-dimensional settings, what does consistency really mean? Does "consistency" mean different things in different papers?
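For concreteness, here is a minimal numerical sketch of the estimator I have in mind, using simulated data and scikit-learn's `Lasso` (scikit-learn minimizes $\tfrac{1}{2n}\|Y-X\beta\|_2^2+\alpha\|\beta\|_1$, so $\alpha=\lambda/(2n)$ corresponds to the objective above). The $\ell_2$ error at the end is one of the quantities whose convergence I assume these consistency results are about:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Simulated high-dimensional example (illustrative only): n < p, sparse true beta
rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                          # samples, features, nonzero coefficients
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 1.0
Y = X @ beta_true + 0.5 * rng.standard_normal(n)

# The objective in the question is ||Y - X beta||_2^2 + lam * ||beta||_1.
# scikit-learn minimizes (1 / (2n)) * ||Y - X beta||_2^2 + alpha * ||beta||_1,
# so alpha = lam / (2n) yields the same minimizer.
lam = 50.0                                     # illustrative choice; in practice set by cross-validation or theory
alpha = lam / (2 * n)

beta_hat = Lasso(alpha=alpha, fit_intercept=False, max_iter=50_000).fit(X, Y).coef_

# One notion of consistency concerns the l2 estimation error ||beta_hat - beta_true||_2;
# another concerns whether the estimated support matches the true support.
print("l2 estimation error:", np.linalg.norm(beta_hat - beta_true))
print("estimated support:", np.flatnonzero(beta_hat != 0))
```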