In the literature, a Markov chain (in either discrete or continuous time) is a stochastic process that evolves in a finite or countable state space. A Markov process, on the other hand, is usually defined as a stochastic process with a continuous state space.
However, every paper and book I have read that describes the Metropolis–Hastings algorithm (or MCMC more generally) states that the goal is to construct a Markov chain whose stationary distribution is the target distribution. A new state is proposed by drawing from a proposal distribution, typically a normal distribution centered at the current state. Since the states then lie in a continuous space, I would be inclined to call this a Markov process rather than a Markov chain.
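For concreteness, here is a minimal sketch of the kind of sampler I have in mind: a random-walk Metropolis sampler with a Gaussian proposal on a continuous state space. The target density and step size here are arbitrary choices for illustration, not from any particular reference.

```python
import math
import random

def target_density(x):
    # Unnormalized target density: a standard normal, chosen only as an example.
    return math.exp(-0.5 * x * x)

def metropolis_hastings(n_steps, step_size=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_steps):
        # Proposal: normal distribution centered at the current state,
        # so each state lives in the continuous space R, not a countable set.
        proposal = rng.gauss(x, step_size)
        # The proposal is symmetric, so the Hastings ratio reduces
        # to the ratio of target densities.
        accept_prob = min(1.0, target_density(proposal) / target_density(x))
        if rng.random() < accept_prob:
            x = proposal
        samples.append(x)
    return samples

samples = metropolis_hastings(50_000)
mean = sum(samples) / len(samples)
print(mean)  # should be near 0 for this target
```

Even in this simple example, every state is a real number drawn from a continuous distribution, which is what prompts my question about the terminology.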
So my question is: why is it called a “chain” and not a “process”?