
`nothing` does not correspond to updating the state with a zero gradient. #140

Closed

CarloLucibello opened this issue Apr 7, 2023 · 1 comment

@CarloLucibello (Member) commented Apr 7, 2023

As mentioned in #137 (comment), when a `nothing` gradient is encountered, the `apply!` rule is not called at all and the state is not updated. So these two calls

```julia
Optimisers.update!(st, x, nothing)
Optimisers.update!(st, x, zero(x))
```

give different results.
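
To make the divergence concrete, here is a minimal sketch (assuming the `Momentum` rule and the standard `setup`/`update!` API): after one step with a real gradient the velocity buffer is nonzero, so a zero gradient still decays it and keeps moving `x`, while `nothing` leaves everything untouched.

```julia
using Optimisers

x  = [1.0, 2.0]
st = Optimisers.setup(Momentum(0.1, 0.9), x)

# Take one real step so the momentum buffer is nonzero.
st, x = Optimisers.update!(st, x, [1.0, 1.0])

# A `nothing` gradient skips `apply!` entirely: x and the state are untouched.
st_a, x_a = Optimisers.update!(deepcopy(st), copy(x), nothing)

# A zero gradient still runs the rule: the velocity decays by ρ, and the
# stale momentum keeps moving x.
st_b, x_b = Optimisers.update!(deepcopy(st), copy(x), zero(x))

x_a == x  # true:  nothing changed nothing
x_b == x  # false: the zero gradient still produced an update
```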

In the same discussion @mcabbott said:

> I suspect this is more an accident than a design, but I'm not sure it's an awful one.
> If you are doing ordinary AD and happen to get an array of zeros on some batch, probably you do want that to update the momenta etc.
> But you won't get `nothing` just because of the data in that batch. Instead, you'll get it because you are e.g. doing transfer learning, or the generator & discriminator on even/odd steps, or something like that. You will get `nothing` not for one array, but for a whole part of the model. And it seems like you probably don't want to update the momenta for the part of the model not being trained, but instead just ignore them completely.

But I think those examples should instead correspond to the optimiser tree covering only part of the model, or to using separate trees for the discriminator and the generator.

So in this issue I argue that we should treat `nothing` as semantically equivalent to a zero gradient, and define another type, e.g. `NoUpdate`, to signal that the `apply!` rule should not be called at all (so no momentum updates, etc.); a sketch of the proposed dispatch follows.
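
For concreteness, the dispatch could look something like this (a sketch only; `NoUpdate` is the hypothetical name floated above, not an existing Optimisers.jl type):

```julia
struct NoUpdate end

# Proposed: `nothing` is materialized as an explicit zero gradient,
# so the rule still runs and momenta decay as usual.
update!(st, x, ::Nothing) = update!(st, x, zero(x))

# Proposed: the new sentinel skips `apply!` entirely,
# leaving both the state and x untouched.
update!(st, x, ::NoUpdate) = (st, x)
```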

CarloLucibello added this to the 0.4 milestone on Nov 6, 2024
@CarloLucibello (Member, Author) commented:

After some extra thought, I have convinced myself that it is OK to have `nothing` signal no update. Otherwise, we would have to materialize the zero gradient (and possibly higher-order derivatives), and that should be the responsibility of the user.

We just need to document this.
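
In the meantime, a user who does want the zero-gradient semantics can materialize the zero themselves, along these lines (a sketch; `g` stands for a possibly-`nothing` gradient):

```julia
# Base.something returns its first non-nothing argument, so a `nothing`
# gradient is replaced by an explicit zero before the rule runs.
Optimisers.update!(st, x, something(g, zero(x)))
```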
