
Fix prob macros #147

Merged: 4 commits merged into dev from probmacro_bugfixes on Aug 11, 2020

Conversation

devmotion
Member

This PR fixes #137 and #138, and allows updating a `VarInfo` object with a named tuple.

vi
end
_setval!(vi::TypedVarInfo, values, keys) = _typed_setval!(vi, vi.metadata, values, keys)
@generated function _typed_setval!(
Member

I think simple generated functions like this can be replaced by a `map` do-block on the named tuple directly. Last I checked, the Julia compiler inferred and inlined it just fine with a do-block.
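As a hedged illustration of this suggestion (using hypothetical stand-in functions, not DynamicPPL's actual `_typed_setval!`), the two styles being compared might look like this:

```julia
# Hypothetical stand-ins, not the actual DynamicPPL code.

# @generated version: builds one expression per NamedTuple field name,
# so the loop over fields is unrolled at compile time.
@generated function scale_gen(nt::NamedTuple{names}, c) where {names}
    vals = [:(c * nt.$n) for n in names]
    return :(NamedTuple{names}(($(vals...),)))
end

# map do-block version: the compiler can usually infer the result type
# because the NamedTuple's field names and types are known statically.
function scale_map(nt::NamedTuple, c)
    map(nt) do x
        c * x
    end
end

scale_gen((a = 1.0, b = 2.0), 2.0)  # (a = 2.0, b = 4.0)
scale_map((a = 1.0, b = 2.0), 2.0)  # (a = 2.0, b = 4.0)
```

Both return the same `NamedTuple`; the question in this thread is whether the compiler always unrolls and inlines the `map` version as effectively as the explicitly generated one.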

Member

This can be done in many places in DPPL.

Member Author

Yeah, that's what I assumed as well, but it's not true in general. At least in March I found the generated function to be much more efficient on a simple example (TuringLang/Turing.jl#1167 (comment)), so IMO one really has to benchmark every possible switch from a generated function to a regular `map` (which is a bit unfortunate, since I'd like to just use regular functions wherever possible...).

I'll benchmark this example to see if we could get rid of the generated function here.
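Such a benchmark might be sketched roughly as follows (hypothetical stand-in functions `f_gen`/`f_map` and Base's `@elapsed` for a quick-and-dirty measurement; BenchmarkTools' `@btime` would give more reliable numbers, and neither function is the actual `_typed_setval!` code):

```julia
# Hypothetical stand-ins for the generated vs. map-based strategies.
@generated function f_gen(nt::NamedTuple{names}, c) where {names}
    vals = [:(c * nt.$n) for n in names]
    return :(NamedTuple{names}(($(vals...),)))
end
f_map(nt::NamedTuple, c) = map(x -> c * x, nt)

# A NamedTuple with a realistic number of distinct symbols.
const nt20 = NamedTuple{Tuple(Symbol(:x, i) for i in 1:20)}(ntuple(float, 20))

f_gen(nt20, 2.0); f_map(nt20, 2.0)  # warm up: force compilation first
t_gen = @elapsed for _ in 1:10^5; f_gen(nt20, 2.0); end
t_map = @elapsed for _ in 1:10^5; f_map(nt20, 2.0); end
```

Comparing `t_gen` and `t_map` across different numbers of fields would show whether the `map` version stays on par with the generated one as the NamedTuple grows.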

Member

Probably ok to keep as-is here, and perform refactoring to change all places in another PR.

Member

Is this one of those things where the compiler will "inline" everything up to some fixed threshold? Same as with recursion: I remember looking into this when working on Bijectors.jl, and there was essentially a fixed depth (I think ~20 or so) at which the recursion wouldn't be unrolled, even though the methods were type-stable. I bet it's the same here: if `names` is sufficiently small, `map` will perform the same as the generated function, but if `names` is large it won't, since large unrolls can cause issues, e.g. a Turing model with millions of univariate parameters will probably not be too comfortable for the compiler. With that said, I agree with forcing "inlining" via generated functions, since even if someone is running a model with millions of parameters, they're likely still looking at <30 distinct symbols.

@devmotion
Member Author

bors try

bors bot added a commit that referenced this pull request Jul 3, 2020
@bors
Contributor

bors bot commented Jul 3, 2020

try

Build failed:

@devmotion
Member Author

bors try

bors bot added a commit that referenced this pull request Jul 3, 2020
@yebai
Member

yebai commented Jul 20, 2020

bors r+

@bors
Contributor

bors bot commented Jul 20, 2020

👎 Rejected by too few approved reviews

@yebai
Member

yebai commented Jul 20, 2020

bors r+

bors bot pushed a commit that referenced this pull request Jul 20, 2020
This PR fixes #137 and #138, and allows updating a `VarInfo` object with a named tuple.

Co-authored-by: David Widmann <[email protected]>
@bors
Contributor

bors bot commented Jul 20, 2020

Build failed:

@yebai
Member

yebai commented Jul 20, 2020

Test error seems related to TuringLang/Turing.jl#1358

Member

@torfjelde left a comment

Awesome stuff @devmotion !

I've added a couple of style-changes and a have a couple of questions (that might not be issues, but just me misunderstanding stuff!).


Resolved (outdated) review threads on src/varinfo.jl (3) and src/prob_macro.jl (1).
) where {names}
updates = map(names) do n
quote
for vn in metadata.$n.vns
Member

Isn't this always going to be just a single element now, which is the same as `VarName($n)`? If not, could you be so kind and explain? 👼 That was quite confusing to me when working on the other PR.

Member Author

AFAIK `metadata.$n.vns` will be a proper vector of multiple elements as soon as the model contains random variables with different indexing. E.g., a vector-valued random variable `x` for which separate samples `x[1]` and `x[2]` are drawn in the generative model definition will lead to a `TypedVarInfo` object in which `metadata` is a `NamedTuple` with key `:x` and a corresponding value of type `Metadata` that contains all information about `x[1]` and `x[2]`, and hence a field `vns` that contains the `VarName` instances for both `x[1]` and `x[2]`. See

"""
TypedVarInfo(vi::UntypedVarInfo)
This function finds all the unique `sym`s from the instances of `VarName{sym}` found in
`vi.metadata.vns`. It then extracts the metadata associated with each symbol from the
global `vi.metadata` field. Finally, a new `VarInfo` is created with a new `metadata` as
a `NamedTuple` mapping from symbols to type-stable `Metadata` instances, one for each
symbol.
"""
for the current implementation that creates `TypedVarInfo` objects from untyped `VarInfo` objects (usually after running the model once).

Member

Ah, I see. So if `x ~ MvNormal` it won't be different, but if `x[1] ~ Normal; x[2] ~ Normal` then it will be?

Thanks man:)

Member Author

Yes, exactly.
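As a hedged sketch of the distinction confirmed above (assuming Turing.jl's `@model` macro; the model names are illustrative, and the comments describe the expected `TypedVarInfo` contents per the explanation in this thread):

```julia
using Turing

# One multivariate draw: metadata.x.vns should hold a single VarName, x.
@model function joint_model()
    x ~ MvNormal(zeros(2), 1.0)  # isotropic unit covariance
end

# Two univariate draws into one array: metadata.x.vns should hold two
# VarNames, x[1] and x[2], each with its own indexing.
@model function separate_model()
    x = Vector{Float64}(undef, 2)
    x[1] ~ Normal()
    x[2] ~ Normal()
end
```

In the first model the whole vector is a single random variable, while in the second each element is tracked separately, which is why `metadata.$n.vns` can contain more than one element.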

Co-authored-by: Tor Erlend Fjelde <[email protected]>
@torfjelde
Member

Is there anything keeping this from being merged?

@mohamed82008
Member

bors r+

@bors
Contributor

bors bot commented Aug 11, 2020

👎 Rejected by code reviews

@mohamed82008
Member

bors r+

@bors
Contributor

bors bot commented Aug 11, 2020

👎 Rejected by code reviews

@mohamed82008
Member

mohamed82008 commented Aug 11, 2020

@torfjelde It seems that your requested changes are blocking bors 😄

@mohamed82008
Member

bors try

bors bot added a commit that referenced this pull request Aug 11, 2020
@bors
Contributor

bors bot commented Aug 11, 2020

try

Build failed:

@mohamed82008
Member

There seem to be issues with Bijectors: https://github.com/TuringLang/DynamicPPL.jl/runs/969319108#step:5:58.

Member

@torfjelde left a comment

To satisfy bors!

@torfjelde
Member

Btw, bors is probably complaining for the same reason as this: TuringLang/Bijectors.jl#126 (comment)

@yebai
Member

yebai commented Aug 11, 2020

Maybe merge this one manually as an exception, since the failing test is from Bijectors?

@mohamed82008 mohamed82008 merged commit 5d72768 into dev Aug 11, 2020
@bors bors bot deleted the probmacro_bugfixes branch August 11, 2020 14:48
@mohamed82008
Member

Thanks @devmotion!
