From 0a8c7cb2a991c1e65e0c33d0965c7e3689362163 Mon Sep 17 00:00:00 2001
From: David Widmann
Date: Fri, 30 Jun 2023 10:49:30 +0200
Subject: [PATCH] Update links to turing.ml (#396)

---
 tutorials/02-logistic-regression/02_logistic-regression.jmd | 2 +-
 tutorials/04-hidden-markov-model/04_hidden-markov-model.jmd | 6 +++---
 tutorials/05-linear-regression/05_linear-regression.jmd | 2 +-
 .../06-infinite-mixture-model/06_infinite-mixture-model.jmd | 2 +-
 .../08_multinomial-logistic-regression.jmd | 2 +-
 tutorials/11-probabilistic-pca/11_probabilistic-pca.jmd | 2 +-
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/tutorials/02-logistic-regression/02_logistic-regression.jmd b/tutorials/02-logistic-regression/02_logistic-regression.jmd
index 8143f50ff..151a58a8a 100644
--- a/tutorials/02-logistic-regression/02_logistic-regression.jmd
+++ b/tutorials/02-logistic-regression/02_logistic-regression.jmd
@@ -121,7 +121,7 @@ end;

 ## Sampling

-Now we can run our sampler. This time we'll use [`HMC`](http://turing.ml/docs/library/#Turing.HMC) to sample from our posterior.
+Now we can run our sampler. This time we'll use [`NUTS`](https://turinglang.org/stable/docs/library/#Turing.Inference.NUTS) to sample from our posterior.

 ```julia
 # Retrieve the number of observations.
diff --git a/tutorials/04-hidden-markov-model/04_hidden-markov-model.jmd b/tutorials/04-hidden-markov-model/04_hidden-markov-model.jmd
index db51e1f80..45b0b9f31 100644
--- a/tutorials/04-hidden-markov-model/04_hidden-markov-model.jmd
+++ b/tutorials/04-hidden-markov-model/04_hidden-markov-model.jmd
@@ -10,7 +10,7 @@ This tutorial illustrates training Bayesian [Hidden Markov Models](https://en.wi

 In this tutorial, we assume there are $k$ discrete hidden states; the observations are continuous and normally distributed - centered around the hidden states. This assumption reduces the number of parameters to be estimated in the emission matrix.

-Let's load the libraries we'll need. We also set a random seed (for reproducibility) and the automatic differentiation backend to forward mode (more [here](http://turing.ml/docs/autodiff/) on why this is useful).
+Let's load the libraries we'll need. We also set a random seed (for reproducibility) and the automatic differentiation backend to forward mode (more [here](https://turinglang.org/stable/docs/using-turing/autodiff) on why this is useful).

 ```julia
 # Load libraries.
@@ -117,11 +117,11 @@ The priors on our transition matrix are noninformative, using `T[i] ~ Dirichlet(
 end;
 ```

-We will use a combination of two samplers ([HMC](http://turing.ml/docs/library/#Turing.HMC) and [Particle Gibbs](http://turing.ml/docs/library/#Turing.PG)) by passing them to the [Gibbs](http://turing.ml/docs/library/#Turing.Gibbs) sampler. The Gibbs sampler allows for compositional inference, where we can utilize different samplers on different parameters.
+We will use a combination of two samplers ([HMC](https://turinglang.org/stable/docs/library/#Turing.Inference.HMC) and [Particle Gibbs](https://turinglang.org/stable/docs/library/#Turing.Inference.PG)) by passing them to the [Gibbs](https://turinglang.org/stable/docs/library/#Turing.Inference.Gibbs) sampler. The Gibbs sampler allows for compositional inference, where we can utilize different samplers on different parameters.

 In this case, we use HMC for `m` and `T`, representing the emission and transition matrices respectively. We use the Particle Gibbs sampler for `s`, the state sequence.
 You may wonder why it is that we are not assigning `s` to the HMC sampler, and why it is that we need compositional Gibbs sampling at all.
-The parameter `s` is not a continuous variable. It is a vector of **integers**, and thus Hamiltonian methods like HMC and [NUTS](http://turing.ml/docs/library/#-turingnuts--type) won't work correctly. Gibbs allows us to apply the right tools to the best effect. If you are a particularly advanced user interested in higher performance, you may benefit from setting up your Gibbs sampler to use [different automatic differentiation](http://turing.ml/stable/docs/autodiff/#compositional-sampling-with-differing-ad-modes) backends for each parameter space.
+The parameter `s` is not a continuous variable. It is a vector of **integers**, and thus Hamiltonian methods like HMC and [NUTS](https://turinglang.org/stable/docs/library/#Turing.Inference.NUTS) won't work correctly. Gibbs allows us to apply the right tools to the best effect. If you are a particularly advanced user interested in higher performance, you may benefit from setting up your Gibbs sampler to use [different automatic differentiation](https://turinglang.org/stable/docs/using-turing/autodiff#compositional-sampling-with-differing-ad-modes) backends for each parameter space.

 Time to run our sampler.

diff --git a/tutorials/05-linear-regression/05_linear-regression.jmd b/tutorials/05-linear-regression/05_linear-regression.jmd
index 57af19b6c..641f27eea 100644
--- a/tutorials/05-linear-regression/05_linear-regression.jmd
+++ b/tutorials/05-linear-regression/05_linear-regression.jmd
@@ -128,7 +128,7 @@ Lastly, each observation $y_i$ is distributed according to the calculated `mu` t
 end
 ```

-With our model specified, we can call the sampler. We will use the No U-Turn Sampler ([NUTS](http://turing.ml/docs/library/#-turingnuts--type)) here.
+With our model specified, we can call the sampler. We will use the No U-Turn Sampler ([NUTS](https://turinglang.org/stable/docs/library/#Turing.Inference.NUTS)) here.

 ```julia
 model = linear_regression(train, train_target)
diff --git a/tutorials/06-infinite-mixture-model/06_infinite-mixture-model.jmd b/tutorials/06-infinite-mixture-model/06_infinite-mixture-model.jmd
index 67107609b..e2a426f73 100644
--- a/tutorials/06-infinite-mixture-model/06_infinite-mixture-model.jmd
+++ b/tutorials/06-infinite-mixture-model/06_infinite-mixture-model.jmd
@@ -77,7 +77,7 @@ x &\sim \mathrm{Normal}(\mu_z, \Sigma)
 \end{align}
 $$

-which resembles the model in the [Gaussian mixture model tutorial](https://turing.ml/dev/tutorials/1-gaussianmixturemodel/) with a slightly different notation.
+which resembles the model in the [Gaussian mixture model tutorial](https://turinglang.org/stable/tutorials/01-gaussian-mixture-model/) with a slightly different notation.

 ## Infinite Mixture Model

diff --git a/tutorials/08-multinomial-logistic-regression/08_multinomial-logistic-regression.jmd b/tutorials/08-multinomial-logistic-regression/08_multinomial-logistic-regression.jmd
index 525f8811b..bc7d9f347 100644
--- a/tutorials/08-multinomial-logistic-regression/08_multinomial-logistic-regression.jmd
+++ b/tutorials/08-multinomial-logistic-regression/08_multinomial-logistic-regression.jmd
@@ -119,7 +119,7 @@ end;

 ## Sampling

-Now we can run our sampler. This time we'll use [`HMC`](http://turing.ml/docs/library/#Turing.HMC) to sample from our posterior.
+Now we can run our sampler. This time we'll use [`NUTS`](https://turinglang.org/stable/docs/library/#Turing.Inference.NUTS) to sample from our posterior.

 ```julia
 m = logistic_regression(train_features, train_target, 1)
diff --git a/tutorials/11-probabilistic-pca/11_probabilistic-pca.jmd b/tutorials/11-probabilistic-pca/11_probabilistic-pca.jmd
index 633ab0ba6..433986e5a 100644
--- a/tutorials/11-probabilistic-pca/11_probabilistic-pca.jmd
+++ b/tutorials/11-probabilistic-pca/11_probabilistic-pca.jmd
@@ -189,7 +189,7 @@ Specifically:
 Here we aim to perform MCMC sampling to infer the projection matrix $\mathbf{W}_{D \times k}$, the latent variable matrix $\mathbf{Z}_{k \times N}$, and the offsets $\boldsymbol{\mu}_{N \times 1}$.

 We run the inference using the NUTS sampler, of which the chain length is set to be 500, target accept ratio 0.65 and initial stepsize 0.1. By default, the NUTS sampler samples 1 chain.
-You are free to try [different samplers](https://turing.ml/stable/docs/library/#samplers).
+You are free to try [different samplers](https://turinglang.org/stable/docs/library/#samplers).

 ```julia
 k = 2 # k is the dimension of the projected space, i.e. the number of principal components/axes of choice