Questions tagged [markov-process]

A stochastic process with the property that the future is conditionally independent of the past, given the present.

0 votes
0 answers
47 views

My question assumes the following set up for continuous time discrete space: Consider $n$ individuals indexed by $i = 1, \ldots, n$. For each individual $i$: $Y_i(t) \in \{1, 2, \ldots, k\}$ denotes ...
shaktidev
1 vote
1 answer
68 views

In the literature, a Markov chain (which can be either in continuous or discrete time) is a stochastic process that evolves in a finite or countable state space. A Markov process, on the other hand, ...
Alby • 19
0 votes
0 answers
66 views

This is my first question in this community. I would appreciate feedback on whether the following problem formulation makes sense. Consider the following sequential online setting: At each round $t = ...
Guesttilunderstandingnature
4 votes
1 answer
161 views

Inspired by this discussion (Martingale and deviance residuals in parametric recurrent event analysis?), I would like to extend it a bit to collect more ideas and tips. To be clear, my questions ...
C.W. • 51
0 votes
0 answers
34 views

I have a Markov decision process (MDP) with a two-dimensional state space of pairs $(s,i)$, where $s$ takes values in the natural numbers and $i \in \{1,2\}$. I have written the policy evaluation equation for a policy $$r-g+(P-...
0 votes
0 answers
117 views

Let $X_0, X_1, ..., X_n$ be a Markov chain with state space $S$ and transition probability matrix $T = \{p_{i,j}\}$. There are $A$ absorbing states. I'm interested in $P(X_{t-1}=j \mid X_t=i)$. By Bayes'...
HJA24 • 25
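The reversal asked about here follows directly from Bayes' rule, $P(X_{t-1}=j \mid X_t=i) = p_{j,i}\,P(X_{t-1}=j)/P(X_t=i)$. A minimal numeric sketch, with a made-up 3-state matrix since the question's chain is not given:

```python
import numpy as np

# Hypothetical 3-state chain; state 2 is absorbing.
T = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
pi0 = np.array([1.0, 0.0, 0.0])  # start in state 0

t = 3
mu_prev = pi0 @ np.linalg.matrix_power(T, t - 1)  # P(X_{t-1} = j)
mu_t = mu_prev @ T                                # P(X_t = i)

# Bayes: P(X_{t-1}=j | X_t=i) = p_{j,i} * P(X_{t-1}=j) / P(X_t=i)
i = 2
backward = T[:, i] * mu_prev / mu_t[i]
```

The vector `backward` is a proper distribution over the previous state $j$, since its normalizer is exactly $P(X_t=i)$.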
3 votes
1 answer
201 views

Let $X_0, X_1, ..., X_n$ be a Markov chain with state space $S$, initial probability distribution $\pi$ and transition probability matrix $P = \{p_{i,j}\}$. The first passage time from state $i$ to $...
HJA24 • 25
1 vote
0 answers
35 views

Is a time-inhomogeneous CTMC a Markov renewal process? Many online sources say that the MRP generalizes the CTMC. A time-homogeneous CTMC or a Markov chain can be expressed as an MRP; however, for ...
Daniel Koiti Oshiro
3 votes
1 answer
224 views

Consider an absorbing Markov chain with $s$ absorbing states and $r$ transient states. Its canonical form is written as follows: $$P = \left(\begin{array}{cc} I & O \\ R & Q \end{array}\right)$...
HJA24 • 25
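For the canonical form above, the standard absorption quantities come from the fundamental matrix $N=(I-Q)^{-1}$: absorption probabilities $B = NR$ and expected steps to absorption $t = N\mathbf{1}$. A sketch with an assumed $Q$ and $R$ (one absorbing, two transient states):

```python
import numpy as np

# Hypothetical canonical form: 1 absorbing state, 2 transient states.
Q = np.array([[0.4, 0.3],
              [0.2, 0.5]])          # transient -> transient
R = np.array([[0.3],
              [0.3]])               # transient -> absorbing

N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix (I - Q)^{-1}
B = N @ R                           # absorption probabilities
t = N @ np.ones(2)                  # expected steps until absorption
```

With a single absorbing state every row of `B` must equal 1, which is a useful sanity check on the inversion.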
4 votes
1 answer
81 views

Given a probability distribution of the form $\pi(z) = Z^{-1}e^{-\beta H(z)}$, MCMC algorithms sample from $\pi$ by constructing an ergodic Markov process $\{z_n\}$ whose limiting distribution is $\pi$...
Daniel Shapero
1 vote
0 answers
37 views

Let $ \mathbb{X} = \{X(t) : t \ge 0\} $ be a regular and irreducible CTMC over $S$ with $Q=(\lambda_{i,j} : i,j \in S)$ and embedded DTMC $\{X_n : n \in \mathbb{N}_0\}$. Prove that if $p = (P_j : j \in S)$ is ...
Jhon Knows
1 vote
0 answers
32 views

For a continuous-time Markov chain with state space $S$ and with an irreducible positive recurrent embedded chain, it can be proved that the transition probabilities $p_{xy}(t)$ for any pair of states ...
Arindam • 51
1 vote
0 answers
45 views

Suppose we have a finite, time-homogeneous Markov chain given by some initial distribution $I$ on a state space $S$, and transition probabilities $P_{ij}(\theta) = P_{\theta}(X_{k+1}=j \mid X_{k}=i)$ that ...
Sina • 141
4 votes
1 answer
202 views

I am doing a finance-related project, where we take the 'market' into account, represented by covariance matrices and economic indicators. Since market participants are price takers, we cannot ...
dragonforce
1 vote
1 answer
110 views

I have decided to use the likelihood ratio test to evaluate whether all the covariates my model considers are strictly necessary, as explained on page 388 and later illustrated in Example 14.4 of Statistical ...
XavierO • 31
6 votes
3 answers
386 views

I learned about Markov chains a while ago and they made sense to me. If the system transitions between states and the probabilities of these transitions do not depend on the system's prior states, then ...
E Tam • 299
2 votes
0 answers
59 views

Under the assumptions given in the statement of the Markov Chain CLT on Wikipedia, is there a general closed-form for $\sigma^2$ in a three-state chain when $g(X)=1_{\{X=n\}}$? Calculations so far: ...
Aaron Hendrickson
5 votes
1 answer
209 views

Here is an open-ended problem. There is a vacation resort with unlimited room. There is an aggregated dataset at the weekly level which shows how many people have been here 0-1 weeks, 1-2 weeks, ..., 10 ...
stats_noob
1 vote
1 answer
73 views

I am working through a problem in my textbook regarding Markov chains and hidden Markov models (HMMs). The problem is as follows: Prove that $P(\pi)$ is equal to the product of the transition ...
EngineerMathlover
0 votes
0 answers
74 views

Question Are the following trends in Type I error (false positive / incorrectly reject $H_0$) expected when testing the Markov memoryless assumption? Specifically, Type I error seems to increase as: ...
jessexknight
1 vote
1 answer
135 views

Assume $(E,\mathcal E,\lambda)$ is a $\sigma$-finite measure space and $\nu$ is a probability measure on $(E,\mathcal E)$ with $\nu\ll\lambda$. Furthermore, assume that $\mu=\sum_{i=0}^{n-1}\delta_{...
0xbadf00d • 223
0 votes
0 answers
43 views

I'm currently reading Daphne Koller's book on Probabilistic Graphical Models, and, in the proof of theorem 4.12, the authors state something that I cannot wrap my head around. The authors state that ...
tim0 • 1
1 vote
0 answers
36 views

I am studying phase type distributions in my introductory stochastic processes course. I understand what they are and how to calculate with them. What I don't understand is, what makes them special? ...
BurgerMan
1 vote
0 answers
70 views

Consider the stochastic process $$dX_t = \mu_{\epsilon_t}X_tdt + \sigma_{\epsilon_t}X_tdW_t$$ where $W_t$ is a standard Brownian motion. The process $X_t$ is a geometric Brownian motion (GBM) whose ...
Alex • 387
1 vote
0 answers
91 views

I want to generate two sequences of length 100 consisting of 1s and 0s. The sequences should cover both a correlated and an uncorrelated case, but the number of 1s should be the same in both the ...
the-nihilist-ninja
1 vote
0 answers
81 views

Consider an alternating renewal process where the alternating extents are independent and exponentially distributed with means $\mu_0$ and $\mu_1$. The alternating extents determine whether the outcome at any ...
Aleph • 41
0 votes
0 answers
54 views

I recently came across the definition of the transition kernel for a continuous state space, which is defined recursively as follows: \begin{aligned} & P^{(1)}(x, A)=P(x, A) \\ & P^{(n)}(x, A)...
user1141170
1 vote
1 answer
48 views

I am trying to create a 2-state Markov model (state 1 to 2 and 2 to 1) with the msm package in R. hpv_state is 2 when infected, 1 when not infected, and 999 when ...
Ken Supanat
9 votes
2 answers
458 views

If we have that: $\tau_i \overset{\text{independent}}{\sim} \exp(\lambda_i)$, for $i=1,2,3,...,n$, where $\lambda_i\neq \lambda_j, \forall i\neq j$ then I would like to find a general form for the ...
ben18785 • 940
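If the quantity sought is the density of the sum $\sum_i \tau_i$ (the hypoexponential distribution), the standard partial-fraction form for pairwise-distinct rates can be sanity-checked numerically; the rates below are illustrative, not from the question:

```python
import math

# Assumed illustrative rates; all distinct, as the question requires.
lam = [1.0, 2.0, 3.5]

def hypoexp_pdf(t, rates):
    """Density of a sum of independent Exp(rate_i) variables with
    pairwise-distinct rates: sum_i [prod_{j != i} l_j/(l_j - l_i)]
    * l_i * exp(-l_i * t)."""
    total = 0.0
    for i, li in enumerate(rates):
        coeff = 1.0
        for j, lj in enumerate(rates):
            if j != i:
                coeff *= lj / (lj - li)
        total += coeff * li * math.exp(-li * t)
    return total

# Sanity checks via a Riemann sum on [0, 20]:
# the density integrates to ~1 and its mean is sum(1/l_i).
dt = 0.001
grid = [k * dt for k in range(20_000)]
mass = sum(hypoexp_pdf(t, lam) * dt for t in grid)
mean = sum(t * hypoexp_pdf(t, lam) * dt for t in grid)
```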
9 votes
2 answers
718 views

I have seen these kinds of Discrete State Markov Chains before (Continuous Time or Discrete Time): Homogeneous (Probability Transition Matrix is constant) Non-Homogeneous (Probability Transition ...
udon762 • 91
3 votes
0 answers
125 views

I want to start investigating the (unadjusted) simulation of the Langevin process $${\rm d}X_t=b(X_t){\rm d}t+\sigma{\rm d}W_t,$$ where $$b:=\frac{\sigma^2}2\nabla\ln p.$$ I don't want to simulate ...
0xbadf00d • 223
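A minimal Euler–Maruyama sketch of this Langevin dynamics, assuming a standard-normal target $p$ so that $\nabla \ln p(x) = -x$ and hence $b(x) = -\sigma^2 x/2$ (an Ornstein–Uhlenbeck process with stationary variance 1):

```python
import math
import random

random.seed(1)
sigma = 1.0

def drift(x):
    # For a standard-normal target p, grad log p(x) = -x, so
    # b(x) = (sigma**2 / 2) * grad log p(x) = -(sigma**2 / 2) * x.
    return -(sigma ** 2 / 2) * x

# Euler-Maruyama discretisation of dX = b(X) dt + sigma dW.
dt, n_steps = 0.01, 200_000
x, xs = 0.0, []
for _ in range(n_steps):
    x += drift(x) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    xs.append(x)

var = sum(v * v for v in xs) / len(xs)  # should approach Var(p) = 1
```

The unadjusted scheme has an $O(\mathrm{d}t)$ bias in the invariant law, which is why one often adds a Metropolis correction (MALA).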
3 votes
3 answers
300 views

The forward diffusion process, which goes from $x_t$ to $x_{t+1}$, is Gaussian, which is very reasonable since we move to the next state by adding random Gaussian noise. However, I do not understand why the ...
levitatmas
0 votes
0 answers
170 views

I am working on discrete-time Markov chain analysis for a large state-transition graph. I want to find the rewards/costs to reach the terminal/accepting states from the initial state. I have the state ...
JackDaniels
0 votes
1 answer
398 views

I am new to the msm package and Markov models. I have a randomized trial dataset with readings from three time points: baseline, 1 year, and 2 years. I am trying to calculate annual transition ...
spri0330
4 votes
1 answer
381 views

I have three fundamental questions related to MCMC. I would appreciate the help on any one of those. The most fundamental question in MCMC field, which I can't find a reference, is: Can MCMC generate ...
George Lu
1 vote
1 answer
120 views

Furthermore, let $(E,\mathcal E,\lambda)$ be a $\sigma$-finite measure space; $Q$ be a Markov kernel on $(E,\mathcal E)$ with density $q$ with respect to $\lambda$; $\mu$ be a probability measure on $...
0xbadf00d • 223
-1 votes
0 answers
33 views

I'm having difficulty solving this exercise. When I assume that p=0.4 and player A's fortune is 99 dollars and B's fortune is 1 dollar, I can find that the probability of player A losing to player B ...
Insomnia
1 vote
1 answer
107 views

Quick question: I have a Markov chain with the transition matrix $$\begin{pmatrix}1&0\\1/2&1/2\end{pmatrix}$$ and two states, $\{0,1\}$. I know state 0 has period 1 and is ...
Fernando Martinez
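The structure of this particular matrix can be checked directly: state 0 is absorbing (hence aperiodic, period 1), state 1 escapes to state 0 with probability 1/2 per step, and powers of the matrix converge to a rank-one limit. A quick sketch:

```python
import numpy as np

P = np.array([[1.0, 0.0],
              [0.5, 0.5]])

# State 0 is absorbing (P[0, 0] = 1), so it has period 1.
# State 1 is transient: P[1, 1]**n = 0.5**n -> 0.
Pn = np.linalg.matrix_power(P, 50)
# Pn is (numerically) [[1, 0], [1, 0]]: both states end up absorbed in 0.
```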
0 votes
0 answers
67 views

I am curious about whether a Markov Chain $X_n$ is recurrent implies that for any $k > 0$, $X_{kn}$ is also recurrent. Here are my observations. If $X_n$ is transient, $X_{kn}$ must be transient by ...
qqhgsjah8221
9 votes
6 answers
2k views

The following is an interview question: Two players A and B play a game rolling a fair die. If A rolls a 1, they immediately reroll, and if the reroll is less than 4 then A wins. Otherwise, B rolls. ...
Ria • 511
4 votes
3 answers
1k views

I am interested in learning how to derive the probability distributions for the Time to Absorption in Markov Chains (Discrete and Continuous). In the past, I have usually done one of the following: ...
Uk rain troll
1 vote
0 answers
166 views

Here is the paper link related to the question from my title. In Appendix B, it computes the entropy of $p(X^T)$ and says "By design, the cross entropy to $\pi(x^t)$ is constant under our ...
user405729
1 vote
1 answer
143 views

Players A and B each have $10 at the beginning of a game in which each player bets at each play, and the game continues until one player is broke. Suppose that Player A wins on a single bet with ...
waterr • 41
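The excerpt's per-bet win probability is cut off, so $p = 0.6$ below is purely illustrative; with that assumption the classic gambler's-ruin formula $P(\text{A reaches } n \text{ before } 0 \mid \text{start } a) = \frac{1-(q/p)^a}{1-(q/p)^n}$ (and $a/n$ when $p = 1/2$) can be sketched as:

```python
def ruin_win_prob(a, n, p):
    """P(A's fortune reaches n before 0, starting from a,
    when A wins each $1 bet with probability p)."""
    if p == 0.5:
        return a / n            # symmetric case
    r = (1 - p) / p             # ratio q/p
    return (1 - r ** a) / (1 - r ** n)

# Both players start with $10, so the total pot is n = 20.
win = ruin_win_prob(10, 20, 0.6)   # p = 0.6 is an assumed value
```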
1 vote
0 answers
54 views

In a Continuous Time Markov Chain (CTMC), the following properties are said to hold: Discrete (Embedded Jump Process): $$P_{ij} = \frac{q_{ij}}{\sum_{k \neq i} q_{ik}}$$ $$q_{ij} = \lim_{{h \to 0}} \frac{...
Uk rain troll
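The embedded jump chain divides each off-diagonal rate by the total exit rate of its row, and never self-jumps. A sketch with a made-up 3-state generator:

```python
import numpy as np

# Hypothetical 3-state generator: off-diagonal rates q_ij >= 0,
# each row sums to zero (q_ii = -sum of the other entries in row i).
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

rates = -np.diag(Q)          # total exit rate of each state
P = Q / rates[:, None]       # divide each row by its exit rate
np.fill_diagonal(P, 0.0)     # the embedded chain never self-jumps
# Each row of P is now a probability distribution over the next state.
```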
1 vote
0 answers
121 views

Let us assume $A \rightarrow B \rightarrow C \rightarrow D$ is a Markov chain. Can we also state that $A \rightarrow C \rightarrow D$ is a Markov chain? It intuitively feels right. Can anyone ...
Bhutum Banerjee
0 votes
0 answers
265 views

Team A and Team B are competing in a sports game and the score is currently tied at 10-10. The first team to win by a margin of two will win the tournament. Team A has a 65% chance of winning each point ...
Ria • 511
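From a tied score this is the classic "deuce" calculation: conditioning on the next two points gives a closed form, sketched here with the question's $p = 0.65$:

```python
# From a tie, look at the next two points:
#   A takes both -> A wins            (probability p**2)
#   B takes both -> A loses           (probability q**2)
#   they split   -> back to a tie     (probability 2*p*q)
# So P = p**2 + 2*p*q*P, which solves to P = p**2 / (p**2 + q**2).
p = 0.65
q = 1 - p
win = p ** 2 / (p ** 2 + q ** 2)   # about 0.775 for p = 0.65
```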
1 vote
0 answers
55 views

An exponential family verifies a maximum entropy property: each density is the maximum entropy density given the expectation of its sufficient statistic. On the other hand, from my understanding, the ...
Chevallier
1 vote
0 answers
54 views

Consider a homogeneous Markov chain $\{X_n\}_{n\in\mathbb N}$ with discrete state space $\mathcal{S}$ and the minimum number of steps to visit $k\in \mathcal{S}$, $$\tau_{k}:=\min \left\{n\ge 1:\, ...
user553010
1 vote
1 answer
101 views

Here is a problem I am trying to solve: Consider a sequence of IID random variables $Y_1,Y_2,Y_3,...$ with values in $E$ and let the function $\varphi: E^2 \rightarrow E$ define the corresponding ...
HellBoy • 123
1 vote
0 answers
55 views

I have a time series Markov Switching model, which is estimated in about 15 different versions. One or two of the time series had to be normalized in order to converge. That is 1-2 out of 15. My ...
David K • 31
