I have a semi-Markov process in which the time between states is log-normally distributed, with parameters that depend on the state $n$ (the mean and variance are state-dependent). That is, the transition time from state $n$ to $n+1$ follows $\tau_{n\rightarrow n+1}\sim \textrm{Lognormal}(\mu_n,\sigma_n^2)$. These transition times are correlated across state number, with covariance matrix $\textbf{C}$, meaning that $\tau_n$ is influenced by the particular value of $\tau_{n-1}$, while still having a known lognormal marginal distribution.
How can I generate a sequence of simulated transition times that respects both the state-dependent distribution of times, and their correlations?
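To make the setup concrete, here is a minimal sketch of one standard approach (a Gaussian copula in log space): if $\textbf{C}$ is taken to be the covariance of the *log*-times, one can draw the log-times jointly from a multivariate normal with mean $(\mu_1,\dots,\mu_N)$ and covariance $\textbf{C}$, then exponentiate, which preserves the lognormal marginals exactly. The parameter values and the AR(1)-style structure of `C` below are purely illustrative assumptions, not part of my actual problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical state-dependent log-scale parameters for 5 transitions
mu = np.array([0.0, 0.2, 0.4, 0.1, 0.3])       # mu_n
sigma = np.array([0.3, 0.25, 0.2, 0.35, 0.3])  # sigma_n

# Assumed: C is the covariance of the log-times; build an example
# AR(1)-style matrix with correlation rho between neighboring states
rho = 0.6
idx = np.arange(len(mu))
C = np.outer(sigma, sigma) * rho ** np.abs(idx[:, None] - idx[None, :])

# Draw log-times jointly normal via a Cholesky factor, then exponentiate
L = np.linalg.cholesky(C)
z = mu + L @ rng.standard_normal(len(mu))
tau = np.exp(z)  # one simulated sequence of correlated transition times
```

Is this log-space construction the right way to respect both constraints, or is there a better-suited method when $\textbf{C}$ is specified for the times themselves rather than their logs?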