
Commit 0a8c7cb
Update links to turing.ml (#396)
devmotion authored Jun 30, 2023
1 parent 8b4f970 commit 0a8c7cb
Showing 6 changed files with 8 additions and 8 deletions.
@@ -121,7 +121,7 @@ end;

## Sampling

-Now we can run our sampler. This time we'll use [`HMC`](http://turing.ml/docs/library/#Turing.HMC) to sample from our posterior.
+Now we can run our sampler. This time we'll use [`NUTS`](https://turinglang.org/stable/docs/library/#Turing.Inference.NUTS) to sample from our posterior.

```julia
# Retrieve the number of observations.
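The new anchor points at `Turing.Inference.NUTS`. As a minimal sketch of such a sampler call, assuming a hypothetical toy model (the model, data, and sample count below are illustrative, not taken from the tutorial):

```julia
using Turing

# A deliberately small model, just to have something to sample from.
@model function coinflip(y)
    p ~ Beta(1, 1)       # flat prior on the probability of heads
    y .~ Bernoulli(p)    # one Bernoulli likelihood term per observation
end

model = coinflip([1, 0, 1, 1, 0])

# NUTS(0.65) is the No-U-Turn sampler with a 0.65 target acceptance ratio.
chain = sample(model, NUTS(0.65), 1_000)
```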
tutorials/04-hidden-markov-model/04_hidden-markov-model.jmd (6 changes: 3 additions & 3 deletions)
@@ -10,7 +10,7 @@ This tutorial illustrates training Bayesian [Hidden Markov Models](https://en.wi

In this tutorial, we assume there are $k$ discrete hidden states; the observations are continuous and normally distributed - centered around the hidden states. This assumption reduces the number of parameters to be estimated in the emission matrix.

-Let's load the libraries we'll need. We also set a random seed (for reproducibility) and the automatic differentiation backend to forward mode (more [here](http://turing.ml/docs/autodiff/) on why this is useful).
+Let's load the libraries we'll need. We also set a random seed (for reproducibility) and the automatic differentiation backend to forward mode (more [here](https://turinglang.org/stable/docs/using-turing/autodiff) on why this is useful).

```julia
# Load libraries.
@@ -117,11 +117,11 @@ The priors on our transition matrix are noninformative, using `T[i] ~ Dirichlet(
end;
```

-We will use a combination of two samplers ([HMC](http://turing.ml/docs/library/#Turing.HMC) and [Particle Gibbs](http://turing.ml/docs/library/#Turing.PG)) by passing them to the [Gibbs](http://turing.ml/docs/library/#Turing.Gibbs) sampler. The Gibbs sampler allows for compositional inference, where we can utilize different samplers on different parameters.
+We will use a combination of two samplers ([HMC](https://turinglang.org/stable/docs/library/#Turing.Inference.HMC) and [Particle Gibbs](https://turinglang.org/stable/docs/library/#Turing.Inference.PG)) by passing them to the [Gibbs](https://turinglang.org/stable/docs/library/#Turing.Inference.Gibbs) sampler. The Gibbs sampler allows for compositional inference, where we can utilize different samplers on different parameters.

In this case, we use HMC for `m` and `T`, representing the emission and transition matrices respectively. We use the Particle Gibbs sampler for `s`, the state sequence. You may wonder why we are not assigning `s` to the HMC sampler, and why we need compositional Gibbs sampling at all.

-The parameter `s` is not a continuous variable. It is a vector of **integers**, and thus Hamiltonian methods like HMC and [NUTS](http://turing.ml/docs/library/#-turingnuts--type) won't work correctly. Gibbs allows us to apply the right tools to the best effect. If you are a particularly advanced user interested in higher performance, you may benefit from setting up your Gibbs sampler to use [different automatic differentiation](http://turing.ml/stable/docs/autodiff/#compositional-sampling-with-differing-ad-modes) backends for each parameter space.
+The parameter `s` is not a continuous variable. It is a vector of **integers**, and thus Hamiltonian methods like HMC and [NUTS](https://turinglang.org/stable/docs/library/#Turing.Inference.NUTS) won't work correctly. Gibbs allows us to apply the right tools to the best effect. If you are a particularly advanced user interested in higher performance, you may benefit from setting up your Gibbs sampler to use [different automatic differentiation](https://turinglang.org/stable/docs/using-turing/autodiff#compositional-sampling-with-differing-ad-modes) backends for each parameter space.

Time to run our sampler.

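A sketch of the compositional Gibbs setup described in the hunk above, assuming `model` is an instantiated HMM with continuous parameters `m` and `T` and discrete state vector `s` (the step size, leapfrog steps, and particle count are illustrative choices):

```julia
using Turing

# Forward-mode AD, as the tutorial recommends (API as of this commit's date).
Turing.setadbackend(:forwarddiff)

# HMC updates the continuous emission and transition parameters;
# Particle Gibbs updates the integer-valued state sequence.
sampler = Gibbs(
    HMC(0.001, 7, :m, :T),  # leapfrog step size 0.001, 7 leapfrog steps
    PG(20, :s),             # 20 particles for the discrete states
)
chain = sample(model, sampler, 100)
```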
tutorials/05-linear-regression/05_linear-regression.jmd (2 changes: 1 addition & 1 deletion)
@@ -128,7 +128,7 @@ Lastly, each observation $y_i$ is distributed according to the calculated `mu` t
end
```

-With our model specified, we can call the sampler. We will use the No U-Turn Sampler ([NUTS](http://turing.ml/docs/library/#-turingnuts--type)) here.
+With our model specified, we can call the sampler. We will use the No U-Turn Sampler ([NUTS](https://turinglang.org/stable/docs/library/#Turing.Inference.NUTS)) here.

```julia
model = linear_regression(train, train_target)
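With default settings the call is simply `NUTS()`. A hedged sketch that continues from the `model` defined in the context above and summarizes the posterior with MCMCChains' `describe`:

```julia
using Turing

chain = sample(model, NUTS(), 1_000)
describe(chain)  # summary statistics and quantiles for each parameter
```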
@@ -77,7 +77,7 @@ x &\sim \mathrm{Normal}(\mu_z, \Sigma)
\end{align}
$$

-which resembles the model in the [Gaussian mixture model tutorial](https://turing.ml/dev/tutorials/1-gaussianmixturemodel/) with a slightly different notation.
+which resembles the model in the [Gaussian mixture model tutorial](https://turinglang.org/stable/tutorials/01-gaussian-mixture-model/) with a slightly different notation.

## Infinite Mixture Model

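A hypothetical Turing sketch of the mixture step written above, with k = 2 components, Σ reduced to a unit variance, and the names `w`, `μ`, and `z` chosen for illustration:

```julia
using Turing
using LinearAlgebra: I

@model function gmm_step(x)
    w ~ Dirichlet(2, 1.0)      # mixture weights over k = 2 components
    μ ~ MvNormal(zeros(2), I)  # prior on the component means μ_1, μ_2
    z ~ Categorical(w)         # latent component assignment z
    x ~ Normal(μ[z], 1.0)      # x ~ Normal(μ_z, σ) with σ fixed to 1
end
```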
@@ -119,7 +119,7 @@ end;

## Sampling

-Now we can run our sampler. This time we'll use [`HMC`](http://turing.ml/docs/library/#Turing.HMC) to sample from our posterior.
+Now we can run our sampler. This time we'll use [`NUTS`](https://turinglang.org/stable/docs/library/#Turing.Inference.NUTS) to sample from our posterior.

```julia
m = logistic_regression(train_features, train_target, 1)
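As in the first file, the link text changes from HMC to NUTS. One hedged variation on the sampling call, using the AbstractMCMC multi-chain interface (the model `m` follows the diff context; the sample and chain counts are illustrative):

```julia
using Turing

# Draw 4 chains in parallel across Julia threads (start Julia with -t 4).
chains = sample(m, NUTS(), MCMCThreads(), 1_000, 4)
```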
tutorials/11-probabilistic-pca/11_probabilistic-pca.jmd (2 changes: 1 addition & 1 deletion)
@@ -189,7 +189,7 @@ Specifically:
Here we aim to perform MCMC sampling to infer the projection matrix $\mathbf{W}_{D \times k}$, the latent variable matrix $\mathbf{Z}_{k \times N}$, and the offsets $\boldsymbol{\mu}_{N \times 1}$.

We run inference using the NUTS sampler, with the chain length set to 500, a target acceptance ratio of 0.65, and an initial step size of 0.1. By default, the NUTS sampler samples 1 chain.
-You are free to try [different samplers](https://turing.ml/stable/docs/library/#samplers).
+You are free to try [different samplers](https://turinglang.org/stable/docs/library/#samplers).

```julia
k = 2 # k is the dimension of the projected space, i.e. the number of principal components/axes of choice
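A sketch of a NUTS call matching the quoted settings, assuming `model` is the instantiated pPCA model and that `init_ϵ` is the keyword for the initial leapfrog step size:

```julia
using Turing

# Target acceptance ratio 0.65, initial step size 0.1, 500 samples, 1 chain.
chain = sample(model, NUTS(0.65; init_ϵ=0.1), 500)
```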
