Training neural network is not working on GPU #55

Open
sametsoekel opened this issue Jun 15, 2022 · 6 comments
Labels
feature a feature request or enhancement

Comments

@sametsoekel

sametsoekel commented Jun 15, 2022

I'm having trouble training an mlp() specification with the brulee engine. I know that brulee uses torch, and I've checked that my torch/GPU setup is fine, but in the example below training runs on the CPU.

suppressPackageStartupMessages({
    library(tidymodels)
    library(torch)
})

torch::cuda_is_available()

#> [1] TRUE

torch::cuda_device_count()

#> [1] 1

set.seed(1)


modspec <- mlp(
  hidden_units = tune(),
  penalty = tune(),
  epochs = tune(),
  activation = tune(),
  learn_rate = tune()
) %>%
  set_mode("classification") %>%
  set_engine("brulee")

fk_param <- modspec %>%
  extract_parameter_set_dials() %>%
  grid_max_entropy(size = 50)

spl_obj <- initial_split(iris, prop = 0.7)

cv_obj <- vfold_cv(training(spl_obj), v = 5)

rcp <- recipe(
  Species ~ Sepal.Width + Sepal.Length + Petal.Width + Petal.Length,
  data = training(spl_obj)
) %>%
  step_normalize(all_numeric_predictors())

wf <- workflow() %>%
  add_model(modspec) %>%
  add_recipe(rcp)

cv_fit <- wf %>%
  tune_grid(resamples = cv_obj, grid = fk_param)
@juliasilge
Member

juliasilge commented Jun 16, 2022

Hmmm, that sounds frustrating @sametsoekel!

  • Can you train a "bare" torch model like this one or this one using your GPU?
  • Can you train an unwrapped (non-parsnip) brulee model like this one using your GPU?
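
(For context, a minimal "bare" torch training loop on the GPU looks roughly like the sketch below. This is only an illustration, not the linked example; the toy model and random data are made up.)

library(torch)

# Use the GPU when one is available, otherwise fall back to the CPU
device <- torch_device(if (cuda_is_available()) "cuda" else "cpu")

# Toy classifier: 4 inputs, 3 classes, with model and data on the chosen device
model <- nn_linear(4, 3)
model$to(device = device)

x <- torch_randn(150, 4, device = device)
y <- torch_randint(1, 4, size = 150, dtype = torch_long(), device = device)

loss_fn <- nn_cross_entropy_loss()
optimizer <- optim_sgd(model$parameters, lr = 0.1)

for (epoch in 1:10) {
  optimizer$zero_grad()
  loss <- loss_fn(model(x), y)
  loss$backward()
  optimizer$step()
}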

@sametsoekel
Author

sametsoekel commented Jun 16, 2022

> Hmmm, that sounds frustrating @sametsoekel!
>
>   • Can you train a "bare" torch model like this one or this one using your GPU?
>   • Can you train an unwrapped (non-parsnip) brulee model like this one using your GPU?

Hi @juliasilge, I tried training the first example you linked. At the beginning of training about 4% of my GPU was in use, and roughly 5 minutes later I saw usage rise to about 15%. So in short, yes. I'll let you know whether the unwrapped brulee model also uses my GPU.

Update:

I also tried the unwrapped brulee model you linked, with epochs set to 200. While training, my GPU is never used.

Thanks for your interest.
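
(For reference, the unwrapped test presumably looked something like the sketch below; the exact call from the linked example is assumed, not quoted. While it runs, GPU utilization can be watched with a tool such as nvidia-smi.)

library(brulee)

# Fit brulee directly, bypassing parsnip, with epochs raised to 200 as described
fit <- brulee_mlp(Species ~ ., data = iris, epochs = 200)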

@juliasilge
Member

Thank you @sametsoekel! That sounds like it is an issue with brulee then. 👍

@juliasilge juliasilge added the bug an unexpected problem or unintended behavior label Jun 16, 2022
@jonthegeek
Copy link

I suspect this is because brulee pre-dates luz. It doesn't look like brulee does anything to auto-switch between CPU and GPU. For example, there would have to be code to switch between CPU and GPU as needed here: https://github.com/tidymodels/brulee/blob/main/R/mlp-fit.R#L635

I think this might require a pretty big refactor to implement, although you might be able to do the switching based on cuda_is_available (I've already forgotten how to do that cleanly without luz, I'm spoiled!).
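
A hypothetical sketch of that switching (not brulee's actual code): pick a device once from cuda_is_available(), then move the model and the training tensors onto it before the fit loop.

library(torch)

# `model`, `x`, and `y` are assumed to already exist; only the
# device plumbing is the point here
device <- torch_device(if (cuda_is_available()) "cuda" else "cpu")

model$to(device = device)   # modules move in place
x <- x$to(device = device)  # tensors return a copy on the new device
y <- y$to(device = device)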

@dfalbel
Collaborator

dfalbel commented Jun 16, 2022

@sametsoekel Thanks for filing the issue. We currently don't support GPUs in brulee; we'll add support for it soon and let you know here.

@juliasilge juliasilge added feature a feature request or enhancement and removed bug an unexpected problem or unintended behavior labels Jun 16, 2022
@sametsoekel
Author

Thank you all for your attention; I'll be looking forward to the feature being added.

topepo added a commit that referenced this issue Nov 2, 2023