Sampling error is usually the only one we can quantify using data from within the sample itself. For the others (the nonsampling errors), we need to do extra research beyond the survey at hand to find out how each error tends to add bias or variance to estimates -- and the answer will vary across populations, across survey topics, over time, across survey modes, etc.
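To make "quantify using data within the sample itself" concrete, here is a minimal Python sketch, with entirely made-up numbers, of estimating the sampling error of a simple mean from nothing but the sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical responses on a 0-10 scale; the numbers are made up.
responses = rng.integers(0, 11, size=400).astype(float)

n = responses.size
mean = responses.mean()

# Standard error of the mean, computed from the sample alone
# (ignoring design features like stratification or clustering).
se = responses.std(ddof=1) / np.sqrt(n)

# Approximate 95% confidence interval for the population mean.
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, SE = {se:.2f}, 95% CI ~ ({lo:.2f}, {hi:.2f})")
```

No outside information was needed -- that's what makes sampling error special. Nothing like this is possible for the coverage error in the example below.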
For example, maybe you want to survey the population of "residents of town A," but you can only find a list of "homeowners in town A." This means your sample will not include renters, unhoused people, etc. If renters etc. tend to answer your survey question differently than homeowners do, that's an example of "coverage error."
To quantify this, you could run a separate study that takes extra time, expense, and effort to include renters etc. as well as homeowners... then estimate the bias from this 2nd study (roughly, the difference in average responses between the two groups, weighted by the share of the population your original frame missed)... then use that estimated bias to correct the estimates from your original homeowners-only study, as sketched below.
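Here is a minimal sketch of that correction in Python, with entirely fabricated numbers. The renter share is assumed to come from an outside source such as census figures, and a real follow-up study would need proper design weights; this just shows the arithmetic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Original survey: homeowners only (the frame excludes renters).
# All numbers are made up for illustration.
homeowner_sample = rng.normal(loc=7.0, scale=2.0, size=500)
naive_estimate = homeowner_sample.mean()

# Separate, costlier follow-up study that reaches both groups.
followup_homeowners = rng.normal(loc=7.0, scale=2.0, size=80)
followup_renters = rng.normal(loc=5.5, scale=2.0, size=40)

# Share of town residents who rent -- assumed known from an outside
# source (e.g., census data), not from the survey itself.
renter_share = 0.30

# Coverage bias of the homeowners-only mean:
#   bias = mean_homeowners - population_mean
#        = renter_share * (mean_homeowners - mean_renters)
bias_hat = renter_share * (followup_homeowners.mean()
                           - followup_renters.mean())

# Bias-corrected estimate for "all residents of town A".
corrected = naive_estimate - bias_hat

print(f"naive (homeowners only): {naive_estimate:.2f}")
print(f"estimated coverage bias: {bias_hat:.2f}")
print(f"corrected estimate:      {corrected:.2f}")
```

Note that the correction itself is estimated from a small follow-up sample, so it carries its own sampling error; the corrected estimate is less certain than its naive standard error would suggest.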
But there is no way to quantify this coverage error using only internal data from your survey of homeowners. And the bias estimate might vary a lot from one survey question to another. It may also vary from year to year, or from town to town. Finally, an estimate of the bias due to coverage error might not tell you anything about other biases, such as measurement error due to poorly worded questions; nonresponse error due to respondents differing from sampled-but-nonresponding people; social-desirability bias due to having face-to-face interviewers instead of paper or electronic forms; etc. That's what the UK ONS means by "These errors are usually very difficult to quantify and to do so would require additional and specific research."
For a very good (if somewhat dated) reference, I recommend starting with Groves et al. (2009), Survey Methodology, 2nd edition. That will get you started on understanding these concepts, so you'll be prepared to read more up-to-date literature on specific situations.