Questions tagged [markov-process]
A stochastic process with the property that the future is conditionally independent of the past, given the present.
1,338 questions
0
votes
0
answers
47
views
How to decide if the inclusion of a variable violates the Markov property?
My question assumes the following setup for a continuous-time, discrete-space process:
Consider $n$ individuals indexed by $i = 1, \ldots, n$. For each individual $i$:
$Y_i(t) \in \{1, 2, \ldots, k\}$ denotes ...
1
vote
1
answer
68
views
Metropolis–Hastings algorithm and MCMC
In the literature, a Markov chain (which can be either in continuous or discrete time) is a stochastic process that evolves in a finite or countable state space. A Markov process, on the other hand, ...
0
votes
0
answers
66
views
Online estimation of time dependent mean
This is my first question in this community. I would appreciate feedback on whether the following problem formulation makes sense.
Consider the following sequential online setting:
At each round $t = ...
4
votes
1
answer
161
views
Markov ordinal longitudinal model or discrete time multistate model for analysing recurrent events
Inspired by this discussion (Martingale and Deviance residuals in parametric recurrent event analysis?), I would like to extend it a bit for collecting more ideas and tips. To be clear, my questions ...
0
votes
0
answers
34
views
Uniqueness of solution for bias vector in policy evaluation
I have a two-dimensional state space MDP with state space $(s,i)$, where $s$ can take values in the natural numbers and $i \in \{1,2\}$. I have written the policy evaluation equation for a policy
$$r-g+(P-...
0
votes
0
answers
117
views
Reverse absorption probability of Markov chain
Let $X_0, X_1, ..., X_n$ be a Markov chain with state space $S$ and transition probability matrix $T = \{p_{i,j}\}$. There are $A$ absorbing states.
I'm interested in $P({X_{t-1}=j}|X_t=i)$.
By Bayes'...
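Sketching that Bayes' inversion numerically (the three-state chain, start distribution, and the indices $t$ and $i$ below are illustrative assumptions, not taken from the question):

```python
import numpy as np

# Hypothetical 3-state chain; state 2 is absorbing.
T = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
pi0 = np.array([1.0, 0.0, 0.0])  # start in state 0

t = 4
p_prev = pi0 @ np.linalg.matrix_power(T, t - 1)  # P(X_{t-1} = j)
p_curr = p_prev @ T                              # P(X_t = i)

# Bayes: P(X_{t-1}=j | X_t=i) = P(X_{t-1}=j) T[j,i] / P(X_t=i)
i = 2
reverse = p_prev * T[:, i] / p_curr[i]
print(reverse, reverse.sum())  # the reverse probabilities sum to 1
```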
3
votes
1
answer
201
views
Comparison of number of visits to transient states in Markov chain
Let $X_0, X_1, ..., X_n$ be a Markov chain with state space $S$, initial probability distribution $\pi$ and transition probability matrix $P = \{p_{i,j}\}$.
The first passage time from state $i$ to $...
1
vote
0
answers
35
views
Is a time-inhomogeneous Markov Process also a Markov Renewal Process? [closed]
Is a time-inhomogeneous CTMC a Markov renewal process?
Many online sources say that the MRP generalizes the CTMC.
A time-homogeneous CTMC or a Markov chain can be expressed as an MRP; however, for ...
3
votes
1
answer
224
views
Conditional probability of absorption in Markov chain
Consider an absorbing Markov chain with $s$ absorbing states and $r$ transient states.
Its canonical form is written as follows:
$$P = \left(\begin{array}{cc} I & O \\ R & Q \end{array}\right)$...
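The standard route in this canonical form is the fundamental matrix $N=(I-Q)^{-1}$, with absorption probabilities $B=NR$; a minimal sketch with illustrative $R$ and $Q$ blocks:

```python
import numpy as np

# Illustrative chain with s = 2 absorbing and r = 2 transient states,
# in the canonical ordering (absorbing first): P = [[I, O], [R, Q]].
R = np.array([[0.1, 0.2],
              [0.3, 0.0]])   # transient -> absorbing
Q = np.array([[0.4, 0.3],
              [0.2, 0.5]])   # transient -> transient

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix: expected visit counts
B = N @ R                         # B[i, a] = P(absorbed in a | start in transient i)
print(B, B.sum(axis=1))           # each row sums to 1
```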
4
votes
1
answer
81
views
Continuous-time Markov chain Monte Carlo
Given a probability distribution of the form $\pi(z) = Z^{-1}e^{-\beta H(z)}$, MCMC algorithms sample from $\pi$ by constructing an ergodic Markov process $\{z_n\}$ whose limiting distribution is $\pi$...
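As a reference point for the construction the excerpt describes, here is a minimal random-walk Metropolis sketch targeting $\pi(z) \propto e^{-\beta H(z)}$ with an illustrative quadratic $H$; note $Z^{-1}$ never appears, since only ratios of $\pi$ enter the accept/reject step:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, step, n = 1.0, 0.5, 50_000
H = lambda z: 0.5 * z**2  # illustrative energy; pi is then N(0, 1/beta)

z = np.empty(n)
z[0] = 0.0
for k in range(n - 1):
    prop = z[k] + step * rng.standard_normal()
    # Metropolis acceptance: min(1, exp(-beta * (H(prop) - H(z[k]))))
    if rng.random() < np.exp(-beta * (H(prop) - H(z[k]))):
        z[k + 1] = prop
    else:
        z[k + 1] = z[k]

print(z.mean(), z.var())  # ≈ 0 and 1/beta for this quadratic H
```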
1
vote
0
answers
37
views
Relationship between CTMC and its embedded DTMC
Let $\mathbb{X} = \{X(t) : t \ge 0\}$ be a regular and irreducible CTMC
over $S$ with generator $Q=(\lambda_{i,j} : i,j \in S)$ and embedded DTMC
$\{X_n : n \in \mathbb{N}_0\}$. Prove that if $p = (P_j : j \in S)$ is
...
1
vote
0
answers
32
views
Limiting distribution of CTMCs
For a continuous-time Markov chain with state space $S$ and with an irreducible positive recurrent embedded chain, it can be proved that the transition probabilities $p_{xy}(t)$ for any pair of states ...
1
vote
0
answers
45
views
Inference on large discrete Markov chains using trajectories
Suppose we have a finite, time-homogeneous Markov chain given by some initial distribution $I$ on a state space $S$, and transition probabilities $P_{ij}(\theta) = P_{\theta}(X_{k+1}=j \mid X_{k}=i)$ that ...
4
votes
1
answer
202
views
Action Independent Transition Probability in Reinforcement Learning
I am doing a finance-related project in which we take the 'market' into account, represented by covariance matrices and economic indicators.
As market participants are price takers, we cannot ...
1
vote
1
answer
110
views
Likelihood ratio test with a modified likelihood function
I have decided to use the likelihood ratio test to evaluate whether all the covariates my model considers are strictly necessary, as explained on page 388 and later illustrated in Example 14.4 of Statistical ...
6
votes
3
answers
386
views
What is Not a Markov Chain?
I learned about Markov chains a while ago and they make sense to me. If the system transitions between states and the probabilities of these transitions do not depend on the system's prior states, then ...
2
votes
0
answers
59
views
Closed-form for Markov Chain CLT: Special case
Under the assumptions given in the statement of the Markov Chain CLT on Wikipedia, is there a general closed-form for $\sigma^2$ in a three-state chain when $g(X)=1_{\{X=n\}}$?
Calculations so far:
...
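For a finite ergodic chain there is a standard closed form, $\sigma^2 = \bar g^\top D_\pi (2Z - I)\bar g$, where $\bar g$ is the $\pi$-centered $g$, $D_\pi = \mathrm{diag}(\pi)$, and $Z = (I - P + \mathbf{1}\pi^\top)^{-1}$ is the fundamental matrix; a sketch on an illustrative three-state chain with $g = 1_{\{X=2\}}$:

```python
import numpy as np

# Illustrative 3-state ergodic chain; g = indicator of state 2.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

g = np.array([0.0, 0.0, 1.0])    # g(X) = 1{X = 2}
gbar = g - pi @ g                # centered so that pi @ gbar = 0
Z = np.linalg.inv(np.eye(3) - P + np.outer(np.ones(3), pi))
sigma2 = pi @ (gbar * ((2 * Z - np.eye(3)) @ gbar))
print(sigma2)
```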
5
votes
1
answer
209
views
Can this problem only be solved using Bayesian?
Here is an open-ended problem.
There is a vacation resort with unlimited room.
There is an aggregated dataset at the weekly level which shows how many people have been here 0-1 weeks, 1-2 weeks, .... 10 ...
1
vote
1
answer
73
views
Proving Termination Properties of a Markov Chain with Non-Constant Transition Probabilities
I am working through a problem in my textbook regarding Markov chains and hidden Markov models (HMMs). The problem is as follows:
Prove that $P(\pi)$ is equal to the product of the transition ...
0
votes
0
answers
74
views
Trends in Type I error when testing Markov memoryless assumption
Question
Are the following trends in Type I error (false positive / incorrectly reject $H_0$) expected when testing the Markov memoryless assumption? Specifically, Type I error seems to increase as:
...
1
vote
1
answer
135
views
How do I compute the KL divergence between my MCMC samples and the target distribution?
Assume $(E,\mathcal E,\lambda)$ is a $\sigma$-finite measure space and $\nu$ is a probability measure on $(E,\mathcal E)$ with $\nu\ll\lambda$. Furthermore, assume that $\mu=\sum_{i=0}^{n-1}\delta_{...
0
votes
0
answers
43
views
Can a minimal separating set in a chordal graph ever be maximal?
I'm currently reading Daphne Koller's book on Probabilistic Graphical Models, and, in the proof of theorem 4.12, the authors state something that I cannot wrap my head around.
The authors state that ...
1
vote
0
answers
36
views
What Makes Phase Type Distributions Interesting
I am studying phase type distributions in my introductory stochastic processes course. I understand what they are and how to calculate with them.
What I don't understand is, what makes them special? ...
1
vote
0
answers
70
views
Max Likelihood of GBM with 2 Markov States
Consider the stochastic process
$$dX_t = \mu_{\epsilon_t}X_tdt + \sigma_{\epsilon_t}X_tdW_t$$
where $W_t$ is a standard Brownian motion. The process $X_t$ is a geometric Brownian motion (GBM) whose ...
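A minimal simulation sketch of such a regime-switching GBM, assuming per-step regime switching and illustrative $(\mu_s, \sigma_s)$ values rather than anything estimated:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = {0: 0.05, 1: -0.02}           # illustrative drift per regime
sig = {0: 0.1, 1: 0.3}             # illustrative volatility per regime
switch = np.array([[0.99, 0.01],   # per-step regime transition probs
                   [0.02, 0.98]])

dt, n = 1 / 252, 1000
x, s = np.empty(n), 0
x[0] = 1.0
for t in range(n - 1):
    s = rng.choice(2, p=switch[s])              # update hidden regime
    dW = np.sqrt(dt) * rng.standard_normal()
    # exact GBM step within the current regime
    x[t + 1] = x[t] * np.exp((mu[s] - 0.5 * sig[s]**2) * dt + sig[s] * dW)

print(x[-1])
```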
1
vote
0
answers
91
views
Creating a correlated sequence of 1s and 0s where the frequency of 1s occurring is fixed
I want to generate two sequences of length 100 consisting of 1s and 0s. The sequences should exhibit both a correlated case and an uncorrelated case, but the number of 1s should be the same in both the ...
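One way (among many) to hold the count of 1s fixed while varying serial dependence is to rearrange a fixed multiset of 0s and 1s; the clumped arrangement below is a deliberately extreme illustration of the correlated case:

```python
import numpy as np

rng = np.random.default_rng(1)
base = np.array([1] * 40 + [0] * 60)   # fixed multiset: exactly 40 ones

uncorrelated = rng.permutation(base)       # exchangeable order, no memory
correlated = np.sort(base)[::-1].copy()    # extreme clumping of the 1s

def lag1(x):
    """Lag-1 autocorrelation as a quick check of serial dependence."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(lag1(uncorrelated), lag1(correlated))
```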
1
vote
0
answers
81
views
Is an integrated exponential alternating renewal process a Markov chain?
Consider an alternating renewal process where the alternating extents are independent and exponentially distributed with means $\mu_0$ and $\mu_1$. The alternating extents determine whether the outcome at any ...
0
votes
0
answers
54
views
Understanding the definition of $n$-th iterate of a transition kernel
I recently came across the definition of the transition kernel for a continuous state space, which is defined recursively as follows:
\begin{aligned} & P^{(1)}(x, A)=P(x, A) \\ & P^{(n)}(x, A)...
1
vote
1
answer
48
views
R MSM package: how to flag incident and prevalent disease?
I am trying to create a 2-state Markov model (state 1 to 2 and 2 to 1) with the msm package in R. hpv_state is 2 when infected, 1 when not infected, and 999 when ...
9
votes
2
answers
458
views
Sums of exponentials joint probability
If we have that $\tau_i \overset{\text{independent}}{\sim} \text{Exp}(\lambda_i)$ for $i=1,2,3,\ldots,n$, where $\lambda_i \neq \lambda_j$ for all $i \neq j$, then I would like to find a general form for the ...
9
votes
2
answers
718
views
Markov Chains with Changing Number of States
I have seen these kinds of Discrete State Markov Chains before (Continuous Time or Discrete Time):
Homogeneous (Probability Transition Matrix is constant)
Non-Homogeneous (Probability Transition ...
3
votes
0
answers
125
views
How to tune the unadjusted Langevin algorithm?
I want to start investigating the (unadjusted) simulation of the Langevin process $${\rm d}X_t=b(X_t){\rm d}t+\sigma{\rm d}W_t,$$ where $$b:=\frac{\sigma^2}2\nabla\ln p.$$ I don't want to simulate ...
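A minimal Euler–Maruyama / ULA sketch for a 1-D standard normal target, where $\nabla\ln p(x) = -x$; the step size $h$ here is an illustrative choice, not a tuning recommendation:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, h, n = 1.0, 0.1, 100_000

x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    drift = -(sigma**2 / 2) * x[k]   # b(x) = (sigma^2 / 2) * grad ln p(x)
    x[k + 1] = x[k] + h * drift + sigma * np.sqrt(h) * rng.standard_normal()

print(x.mean(), x.var())  # close to N(0, 1); the bias shrinks with h
```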
3
votes
3
answers
300
views
Why is the reverse diffusion process not a Gaussian distribution?
The forward diffusion process, which goes from $x_t$ to $x_{t+1}$, is Gaussian, which is very reasonable as we go to the next state by adding random Gaussian noise. However, I do not understand why the ...
0
votes
0
answers
170
views
Sum of powers (geometric series) of state transition matrix
I am working on discrete-time Markov chain analysis for a large state transition graph. I want to find the reward/cost to reach the terminal/accepting states from the initial state.
I have the state ...
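When the spectral radius of the transient block $Q$ is below 1, the geometric series $\sum_{k \ge 0} Q^k$ collapses to $(I-Q)^{-1}$; a quick numerical check with an illustrative $Q$:

```python
import numpy as np

# Transient block of an absorbing chain (illustrative; spectral radius < 1).
Q = np.array([[0.5, 0.2],
              [0.1, 0.6]])

N = np.linalg.inv(np.eye(2) - Q)  # closed form for sum_{k>=0} Q^k
partial = sum(np.linalg.matrix_power(Q, k) for k in range(200))
print(np.allclose(N, partial))    # True: the truncated series matches
```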
0
votes
1
answer
398
views
msm package: Multi-state model initial value in 'vmmin' is not finite
I am new to the msm package and Markov models. I have a randomized trial dataset with readings from three time points: baseline, at 1 year, and at 2 years. I am trying to calculate annual transition ...
4
votes
1
answer
381
views
Can MCMC sample any probability distributions?
I have three fundamental questions related to MCMC. I would appreciate the help on any one of those.
The most fundamental question in the MCMC field, for which I can't find a reference, is: can MCMC generate ...
1
vote
1
answer
120
views
Show that the total variation distance of the Metropolis kernel to its proposal kernel is equal to the rejection probability
Furthermore, let
$(E,\mathcal E,\lambda)$ be a $\sigma$-finite measure space;
$Q$ be a Markov kernel on $(E,\mathcal E)$ with density $q$ with respect to $\lambda$;
$\mu$ be a probability measure on $...
-1
votes
0
answers
33
views
Help with Gambler's ruin problem, can't solve abstraction [duplicate]
I'm having difficulty solving this exercise. When I assume that p=0.4 and player A's fortune is 99 dollars and B's fortune is 1 dollar, I can find that the probability of player A losing to player B ...
1
vote
1
answer
107
views
Question about the period of a transient state in a Markov chain
I have a quick question. I have a Markov chain with this transition matrix $$\begin{pmatrix}1&0\\1/2&1/2\end{pmatrix}$$
It has two states, $[0,1]$.
So I know state 0 has period 1 and is ...
0
votes
0
answers
67
views
A recurrent Markov Chain implies its k-step version is also recurrent?
I am curious whether a Markov chain $X_n$ being recurrent implies that, for any $k > 0$, $X_{kn}$ is also recurrent.
Here are my observations. If $X_n$ is transient, $X_{kn}$ must be transient by ...
9
votes
6
answers
2k
views
Infinite dice roll probability
The following is an interview question:
Two players A and B play a game rolling a fair die. If A rolls a 1, they immediately reroll, and if the reroll is less than 4 then A wins. Otherwise, B rolls. ...
4
votes
3
answers
1k
views
Deriving the Distribution of Markov Chain Times
I am interested in learning how to derive the probability distributions for the Time to Absorption in Markov Chains (Discrete and Continuous).
In the past, I have usually done one of the following:
...
1
vote
0
answers
166
views
About paper "Deep Unsupervised Learning using Nonequilibrium Thermodynamics"
Here is the paper link related to the question from my title.
In Appendix B, it computes the entropy of $p(X^T)$ and says "By design, the cross entropy to $\pi(x^t)$ is constant under our ...
1
vote
1
answer
143
views
Help with Gambler's ruin problem
Players A and B each have $10 at the beginning of a game in which each player bets at each play, and the game continues until one player is broke. Suppose that Player A wins on a single bet with ...
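The classic ruin formula covers this setting; since the question's single-bet win probability is truncated above, $p = 0.6$ below is only an illustrative value:

```python
def ruin_win_prob(a: int, b: int, p: float) -> float:
    """P(A reaches a+b before 0), starting from a, betting 1 per play."""
    if p == 0.5:
        return a / (a + b)         # symmetric case: simple ratio
    r = (1 - p) / p
    return (1 - r**a) / (1 - r**(a + b))

# Both players start with $10; p = 0.6 is an assumed, illustrative value.
print(ruin_win_prob(10, 10, 0.6))
```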
1
vote
0
answers
54
views
Is it possible to Discretize a Continuous Time Markov Chain?
In a Continuous Time Markov Chain (CTMC), the following properties are said to hold:
Discrete (Embedded Jump Process):
$$P_{ij} = \frac{q_{ij}}{\sum_{k \neq i} q_{ik}}$$
$$q_{ij} = \lim_{{h \to 0}} \frac{...
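Turning a generator $Q$ into its embedded jump chain is mechanical; a sketch with illustrative rates, using the denominator $\sum_{k \neq i} q_{ik}$ from the formula above:

```python
import numpy as np

# Generator of a 3-state CTMC (illustrative rates; rows sum to 0).
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

rates = -np.diag(Q)          # exit rate q_i of each state
P = Q / rates[:, None]       # off-diagonal entries become q_ij / q_i
np.fill_diagonal(P, 0.0)     # the jump chain never stays put
print(P, P.sum(axis=1))      # rows sum to 1
```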
1
vote
0
answers
121
views
Is a sub Markov chain also a Markov chain?
Let us assume $A \rightarrow B \rightarrow C \rightarrow D$ is a Markov chain. Can we state that $A \rightarrow C \rightarrow D$ is also a Markov chain? It intuitively feels right. Can anyone ...
0
votes
0
answers
265
views
Probability that team A will win overall match
Team A and Team B are competing in a sports game and the score is currently tied at 10-10. The first team to win by a margin of two will win the tournament. Team A has a 65% chance of winning each point ...
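Conditioning on pairs of points gives the usual deuce argument: A wins a pair with probability $p^2$, B with $q^2$, and otherwise the score resets, so $P(\text{A wins}) = p^2/(p^2+q^2)$; a one-line check:

```python
# First to lead by two points wins; p is A's per-point win probability.
p = 0.65
q = 1 - p
print(p**2 / (p**2 + q**2))  # ≈ 0.775
```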
1
vote
0
answers
55
views
Exponential families as families of limiting distributions of Markov processes
An exponential family satisfies a maximum entropy property: each density is the maximum entropy density given the expectation of its sufficient statistic.
On the other hand, from my understanding, the ...
1
vote
0
answers
54
views
Question about the mean first passage time
Let $\{X_n\}_{n\in\mathbb N}$ be a homogeneous Markov chain with discrete state space $\mathcal{S}$.
Consider the minimum number of steps to visit $k\in \mathcal{S},$
$$\tau_{k}:=\min \left\{n\ge 1:\, ...
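Mean first passage times to a fixed target $k$ solve the linear system $m_i = 1 + \sum_{j \neq k} p_{ij} m_j$ for $i \neq k$; a sketch with an illustrative three-state chain:

```python
import numpy as np

# Illustrative transition matrix; target state k = 2.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
k = 2

idx = [i for i in range(3) if i != k]
A = np.eye(2) - P[np.ix_(idx, idx)]   # (I - P) restricted to S \ {k}
m = np.linalg.solve(A, np.ones(2))    # m[i] = E[steps to reach k from idx[i]]
print(dict(zip(idx, m)))
```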
1
vote
1
answer
101
views
Markov Chain and deterministic function
Here is a problem I am trying to solve:
Consider a sequence of IID random variables $Y_1,Y_2,Y_3,...$ with values in $E$ and let the function $\varphi: E^2 \rightarrow E$ define the corresponding ...
1
vote
0
answers
55
views
Normalization for time series comparison
I have a time series Markov Switching model, which is estimated in about 15 different versions. One or two of the time series had to be normalized in order to converge. That is 1-2 out of 15. My ...