[lake_model] Comments #169

Open

oyamad opened this issue Jul 18, 2021 · 2 comments

Comments

@oyamad
Member

oyamad commented Jul 18, 2021

  • This part https://github.com/QuantEcon/lecture-python.myst/blame/main/lectures/lake_model.md#L260-L274 is not very wise (it computes the stationary distribution of a 2-state (column-)stochastic matrix by iteration). As shown in the Finite Markov Chains chapter, and as will be discussed in the current chapter, the distribution can simply be computed exactly (up to floating point error) by the code below (see also the first sketch after this list):

    def rate_steady_state(self):
        # Exact stationary distribution of the 2x2 column-stochastic
        # matrix A_hat: proportional to its off-diagonal entries
        x = np.array([self.A_hat[0, 1], self.A_hat[1, 0]])
        return x / x.sum()
  • The discussion in "Aggregate Dynamics":
    This part is hard to read. The same discussion as in "Finite Markov Chains" is repeated in different language, without any indication that it is a restatement. From the discussion in "Finite Markov Chains", we know that

    • the (column-)stochastic matrix A_hat has a stationary distribution (or equivalently, it has a nonnegative eigenvector with eigenvalue one); and
    • since A_hat is (irreducible and) aperiodic (or equivalently, its other eigenvalues are less than one in magnitude), we have convergence from any initial distribution to the (unique) stationary distribution.

    In my view, this new language (with eigenvalues) is not necessary, and it would be enough to refer to the previous discussion in "Finite Markov Chains" (as is to be done below). The second sketch after this list illustrates both facts numerically.

  • There are a few places where the inner product of two vectors (1d-ndarrays) a and b is computed by

    np.sum(a * b)

    instead of

    a @ b

    Is there any purpose for this? (See the third sketch after this list.)
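Regarding the first point, here is a minimal verification sketch; the parameter values are illustrative assumptions, and A_hat is built by hand rather than taken from the lecture's class:

    import numpy as np

    # Illustrative parameter values (not the lecture's)
    lam, alpha = 0.283, 0.013
    A_hat = np.array([[1 - lam, alpha],
                      [lam,     1 - alpha]])   # column-stochastic

    # Exact stationary distribution: proportional to the off-diagonal entries
    x = np.array([A_hat[0, 1], A_hat[1, 0]])
    pi = x / x.sum()

    print(np.allclose(A_hat @ pi, pi))   # True: pi is a fixed point of A_hat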
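Regarding the second point, a short numerical illustration of the two facts (a stationary distribution exists, and iteration converges to it from any initial distribution), again with assumed parameter values:

    import numpy as np

    lam, alpha = 0.283, 0.013              # illustrative values
    A_hat = np.array([[1 - lam, alpha],
                      [lam,     1 - alpha]])

    # Eigenvalues: one equals 1, the other is 1 - lam - alpha,
    # which has magnitude less than one for these values
    print(np.linalg.eigvals(A_hat))

    # Iterating from an arbitrary initial distribution converges
    # to the stationary distribution (alpha, lam) / (alpha + lam)
    x = np.array([0.9, 0.1])
    for _ in range(200):
        x = A_hat @ x
    print(x)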
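And on the third point, for 1d arrays the two expressions give the same inner product (up to floating point rounding):

    import numpy as np

    a = np.array([0.25, 0.75, 1.5])
    b = np.array([4.0, 8.0, 2.0])

    print(np.sum(a * b))   # 10.0
    print(a @ b)           # 10.0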

@jstac
Contributor

jstac commented Jul 19, 2021

Thanks @oyamad. I agree. I'll fix this when I get some time.

@shlff
Member

shlff commented Jul 11, 2023

Thanks @jstac and @oyamad.

My comment on point 1: it is a smart change that makes the most of the analytical solution for the stationary distribution of the Markov chain when the stochastic matrix is positive.

For example, let the stochastic matrix be

$$ P = \left( \begin{matrix} 1 - \lambda & \lambda \\ \alpha & 1 - \alpha \end{matrix} \right) $$

If $\alpha \in (0, 1)$ and $\lambda \in (0, 1)$, then $P$ has a unique stationary distribution.

However, it cannot handle the case where $\alpha$ or $\lambda$ takes a boundary value, that is, $0$ or $1$.
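For example, here is a minimal sketch of the closed form and of the degenerate case $\alpha = \lambda = 0$; the helper stationary_dist and the parameter values are hypothetical:

    import numpy as np

    def stationary_dist(alpha, lam):
        # Closed form for P = [[1-lam, lam], [alpha, 1-alpha]]:
        # the stationary distribution is proportional to (alpha, lam)
        x = np.array([alpha, lam])
        return x / x.sum()

    print(stationary_dist(0.013, 0.283))   # interior case: well defined

    # At alpha = lam = 0, P is the identity matrix: every distribution
    # is stationary, and the closed form divides by zero (yields nans)
    print(stationary_dist(0.0, 0.0))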

I suggest that we turn this change or the original one into an exercise.
