
Inconsistent sampling due to intermediate quantity not being updated #255

Closed
miguelbiron opened this issue Dec 10, 2024 · 5 comments

Hi @sunxd3 -- there's an issue with iid simulation from a `BUGSModel`. Apparently, calls to `evaluate!!(rng, model)` are not updating deterministic functions of unconditioned parameters. In the example below, this is `p_prod`. As a result, descendants of such nodes -- like `n_heads` -- have the wrong distribution.

This simulation test shows that the distribution of `n_heads` is dictated by the value of `p_prod` set at compile time, not by the values generated during calls to `evaluate!!`.

```julia
using JuliaBUGS
using Random
using Statistics
using Distributions

# unidentifiable coin-toss model
unid_model_def = @bugs begin
    for i in 1:2
        p[i] ~ dunif(0, 1)
    end
    p_prod = p[1] * p[2]
    n_heads ~ dbin(p_prod, n_flips)
end
data = (; n_flips=100000)
n_sim = 1000
true_prop = 0.25 # = E[p_prod] = E[p[1]]*E[p[2]] = 0.5^2
rng = MersenneTwister(123)

# do multiple initializations to check for the bug
for _ in 1:10
    model = compile(unid_model_def, data)
    original_env = deepcopy(model.evaluation_env)

    # simulate flips and compute the rate of heads
    heads_rate = mean(first(JuliaBUGS.evaluate!!(rng, model)).n_heads/data.n_flips for _ in 1:n_sim)

    # compute p-value for a one-sample test against the true proportion
    z_true = (heads_rate - true_prop)/sqrt(true_prop*(1-true_prop)/n_sim)
    pval_true = 2*ccdf(Normal(), abs(z_true))

    # compute p-value for a one-sample test against the initial p_prod
    z_alt = (heads_rate - original_env.p_prod)/sqrt(original_env.p_prod*(1-original_env.p_prod)/n_sim)
    pval_alt = 2*ccdf(Normal(), abs(z_alt))

    # check: the simulated heads rate fits the compile-time p_prod better than the true proportion
    @assert pval_true < pval_alt
end
```
sunxd3 (Member) commented Dec 10, 2024

interesting, let me investigate, thanks a lot!

sunxd3 (Member) commented Dec 10, 2024

ah, I caught the error, a really silly one, thanks for figuring this out @miguelbiron

the issue is on this line:

https://github.com/TuringLang/JuliaBUGS.jl/blob/4c5ab1a706a5150b94517635287f52e29544a865/src/model.jl#L444C13-L444C27

where I should've used `evaluation_env` instead of `model.evaluation_env`. I am fixing it now
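For readers skimming the thread, here is a minimal, self-contained sketch of that bug pattern (hypothetical toy code, not JuliaBUGS's actual implementation): evaluation threads an updated environment through the nodes, but one branch computes a deterministic node from the stale `model.evaluation_env` instead of the updated local copy.

```julia
# Hypothetical toy model: `evaluation_env` holds values fixed at compile time.
struct ToyModel
    evaluation_env::Dict{Symbol,Float64}
end

function evaluate_buggy(model::ToyModel, p_new::Float64)
    evaluation_env = copy(model.evaluation_env)
    evaluation_env[:p] = p_new                        # stochastic node resampled
    # BUG: deterministic node computed from the stale stored environment
    evaluation_env[:p_prod] = model.evaluation_env[:p]^2
    return evaluation_env
end

function evaluate_fixed(model::ToyModel, p_new::Float64)
    evaluation_env = copy(model.evaluation_env)
    evaluation_env[:p] = p_new
    # FIX: read the parent's value from the updated local environment
    evaluation_env[:p_prod] = evaluation_env[:p]^2
    return evaluation_env
end

model = ToyModel(Dict(:p => 0.5, :p_prod => 0.25))
evaluate_buggy(model, 0.9)[:p_prod]  # 0.25 -- still the compile-time value
evaluate_fixed(model, 0.9)[:p_prod]  # 0.81 -- tracks the new sample
```

In the issue's MWE, `p_prod` plays the role of the deterministic node stuck at its compile-time value, which is exactly what the hypothesis tests above detect.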

question about the MWE: should the final assertion be `pval_true > pval_alt`?

miguelbiron (Collaborator, author) commented Dec 11, 2024

It was fine for showing that the problem existed (if the assertion holds, the bug is there). For adding it to the test suite, yeah, you would reverse it.

But it might be overkill for diagnosing the issue in a unit test---this was just my way of triple-checking that there was a problem. Maybe just testing `p_prod == p[1]*p[2]` after calling `evaluate!!` would suffice?
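That simpler check might look something like this (a sketch only; it assumes the `unid_model_def` and `data` from the MWE above, and that `evaluate!!` returns the updated evaluation environment as its first element, as in the MWE):

```julia
using JuliaBUGS
using Random

model = compile(unid_model_def, data)  # model/data from the MWE above
rng = MersenneTwister(123)

env = first(JuliaBUGS.evaluate!!(rng, model))
# the deterministic node must reflect the freshly sampled parents
@assert env.p_prod == env.p[1] * env.p[2]
```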

miguelbiron (Collaborator, author) commented Dec 11, 2024

Btw thanks for fixing this so quickly!

sunxd3 added a commit that referenced this issue Dec 11, 2024
sunxd3 (Member) commented Dec 11, 2024

no problem man, thanks for the effort!

version v0.7.5 should have the fix

sunxd3 closed this as completed Dec 11, 2024