The phrase "the z-score method" in your title seems to assume I should know of some method by that name. If something specific is meant by it, it would be best to explain what it consists of (perhaps with a reference). While I know what a z-score is, and many things one might do with a z-score, I know of nothing called "the" z-score method. To be clear, this ignorance of the term is not because I am ignorant of statistics; I have a fairly broad knowledge of the subject.
A z-score is simply a standardized value, $Z=\frac{X-\mu}{\sigma}$, where $\mu$ and $\sigma$ are the population mean and standard deviation.
Calculating a z-score carries no distributional assumption.
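A minimal sketch of that standardization step in Python (the data and population parameters below are made up purely for illustration; nothing here depends on any distributional assumption):

```python
# Assumed (made-up) population parameters and data, for illustration only.
mu, sigma = 50.0, 10.0
data = [38.0, 50.0, 65.0, 83.0]

# Standardize: subtract the mean, divide by the standard deviation.
# No distributional assumption is involved in this step.
z_scores = [(x - mu) / sigma for x in data]
print(z_scores)  # [-1.2, 0.0, 1.5, 3.3]
```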
For example, we can write Chebyshev's inequality -- which clearly has nothing whatever to do with assumptions relating to normality -- quite directly in terms of z-scores.
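To make that concrete: Chebyshev's inequality says $P(|Z|\geq k)\leq 1/k^2$ for any distribution with finite variance. A quick stdlib-Python check against a standard exponential distribution (an illustrative non-normal choice of mine, not something from the question):

```python
import math

# Chebyshev: P(|Z| >= k) <= 1/k^2 for ANY distribution with finite variance.
# Check against a standard exponential (mean 1, sd 1) -- clearly non-normal.
for k in [2, 3, 4]:
    # For X ~ Exp(1): P(|X - 1| >= k) = P(X >= 1 + k) = exp(-(1 + k)),
    # since the lower tail is empty (X >= 0 and k >= 1).
    exact_tail = math.exp(-(1 + k))
    bound = 1 / k**2
    print(f"k={k}: exact tail {exact_tail:.5f} <= Chebyshev bound {bound:.5f}")
    assert exact_tail <= bound
```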
However, once you start saying that a certain proportion of the distribution of the standardized variable lies between this point and that point, and you base that calculation on the normal distribution, then you have chosen to assume normality, simply because you used properties of the normal distribution to do the calculation. Nothing about calculating a z-score implies that you should do that; it's something you add on to the calculation yourself.
> For example, if I specify a z-value of 3, then I would look at both sides and know its position in the distribution (99.97%).
With any normal distribution, 99.73% of the probability (NB not 99.97%) lies between $\mu-3\sigma$ and $\mu+3\sigma$. That fact in itself has nothing inherently to do with z-scores, but yes, if you subtract $\mu$ and divide by $\sigma$, then it corresponds to looking between $-3$ and $3$ on a standard normal. It's not the z-score that does it; it's the "normal" that does it.
So all you're doing there is saying "if I assume normality, I can compute that 99.73% of the population falls between $\mu-3\sigma$ and $\mu+3\sigma$" ... sure, that follows directly from the assumption of normality.
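Under the normality assumption those proportions are easy to compute directly. A stdlib-only Python sketch, using the error function for the standard normal CDF:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability within k standard deviations of the mean, IF normality holds.
for k in [1, 2, 3]:
    p = normal_cdf(k) - normal_cdf(-k)
    print(f"within ±{k} sd: {p:.4%}")
# The k=3 line gives 99.73% (not 99.97%), matching the correction above.
```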
> Would this change if I have a left or right skewed distribution?
Typically, yes: the proportion within a given distance of the mean changes if you change distributions, so it will typically change if you pick a left-skewed or right-skewed distribution, some other symmetric distribution, or indeed a distribution that is neither symmetric nor straightforwardly left- or right-skewed. But it can certainly happen that some other distribution -- not necessarily symmetric -- shares some or all of the percentages shown in your diagram between those whole numbers of standard deviations above and below the mean, without being normal.
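As one illustration of how the proportion shifts, here is the same "within $\pm 3$ standard deviations" calculation for a standard exponential, a right-skewed distribution I've picked purely for illustration:

```python
import math

# Standard exponential: mean mu = 1, sd sigma = 1, right-skewed.
# P(mu - 3*sigma <= X <= mu + 3*sigma) = P(-2 <= X <= 4) = P(X <= 4),
# since X >= 0 makes the interval's lower end irrelevant.
p_within_3sd = 1.0 - math.exp(-4.0)
print(f"{p_within_3sd:.4%}")  # about 98.17%, not the normal's 99.73%
```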
Normality is more specific than the diagram might suggest, since it pins down every quantile of $Z$, not just those at whole numbers of standard deviations from the mean.
> Would I have to choose a lower threshold for the positive or negative calculation of the z-value to fall into the actual range of outliers? Is my thinking wrong?
I'm not sure what you mean by "actual range of outliers" here. You seem to be assuming I would understand something specific by the term "outlier" that relates to z-values.
There's no specific, generally applicable definition of "outlier" in terms of standard deviations from the mean -- whether you assume normality or not. A z-value of 3.35 (say) isn't an outlier, per se. It's a rare value in terms of a normal distribution, but being rare doesn't automatically make it an "outlier" unless you choose a specific definition of that term.
The problem with doing that is that it often leads people to do things with their data that I wouldn't typically advise, rather than, say, reconsider the suitability of their model when such values are considered a likely possibility (e.g. maybe the model should include some allowance for wilder observations than expected under the basic distributional model, such as a contaminating mixture, or use more robust estimation methods that would not be much affected by their presence).
If you mean "If I assume normality, and then choose to treat a value more than 3 sd's from the mean as an outlier" ... then fine, but I don't know what would be a good reason to do that (nor do I know how you'd know what the population mean and variance are).
If you do the same thing with some other distribution, whatever it might be, then yes the tail proportions generally change, so if you use some "3 standard deviations from the mean" rule to label points as outliers, you'll generally do that with a somewhat different proportion of the population than if it were normal.
If you were to specify that you wanted the same proportion in each tail as in your diagram beyond some specific number of standard deviations from the mean (such as 0.15% for $\pm 3$ sds in the normal), you could do the corresponding quantile calculation for any given choice of population distribution. It would in general be a different number of sds in each direction. However, I wouldn't generally go about identifying outliers this way, nor would I typically advise this as a way to omit data, if that was the plan.
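As a sketch of that quantile calculation, again using a standard exponential (mean 1, sd 1) as an illustrative non-normal population, and targeting the normal's one-sided tail proportion beyond 3 sds (about 0.135%):

```python
import math

# Target: the same tail proportion as the normal beyond +/- 3 sds,
# i.e. roughly 0.135% in each tail.
alpha = 0.00135

# Exponential(1) quantile function: F^{-1}(p) = -ln(1 - p).
lower_q = -math.log(1 - alpha)  # 0.135% quantile
upper_q = -math.log(alpha)      # 99.865% quantile

# Express the cutoffs as standard deviations from the mean (mu = 1, sigma = 1).
# Note how asymmetric they are, unlike the normal's +/- 3.
print(f"lower cutoff: {lower_q - 1:.3f} sds")  # about -0.999 sds
print(f"upper cutoff: {upper_q - 1:+.3f} sds") # about +5.608 sds
```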
On the other hand, if you use the sample mean and standard deviation to standardize, then even if the original population were somehow normal (generally a pretty dubious assumption), the proportion of the population within $3s$ of $\bar{x}$ will also differ from that given by the normal distribution; and if you are talking about proportions within the very sample used to obtain $s$ and $\bar{x}$, it changes again.
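To illustrate the within-sample point, a quick simulation using only the Python standard library (the sample size, seed, and normal population here are arbitrary choices of mine):

```python
import random
import statistics

# Simulate a sample from an actual normal population, then ask what fraction
# of the SAME sample lies within 3 sample-sds of the sample mean. It
# fluctuates around, but is not pinned to, the population figure of 99.73%.
random.seed(1)
n = 1000
sample = [random.gauss(0, 1) for _ in range(n)]

xbar = statistics.fmean(sample)
s = statistics.stdev(sample)
inside = sum(1 for x in sample if abs(x - xbar) <= 3 * s)
print(f"{inside / n:.2%} of the sample lies within 3s of x-bar")
```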
If you're looking to make your analysis robust to potentially wild observations, rather than applying some arbitrary rule for leaving out observations, it's probably best to step back, consider whether omitting data is necessary at all, and think carefully about what other things you might do and what the impacts of your various choices would be.