
threads: Addresses for atomic operations silently wrap instead of using an effective 33-bit index #3132

Closed · Fixed by #3143

alexcrichton opened this issue on Jul 30, 2021 · 1 comment

Labels: bug (Incorrect behavior in the current implementation that needs fixing)

alexcrichton (Member) commented on Jul 30, 2021

This module, for example:

(module
    (memory 1)

    (func
        i32.const 0xffff_fff0
        i32.atomic.load offset=16
        drop
    )
    (start 0)
)

should fail with an out-of-bounds trap, but it succeeds:

$ cargo run -- --enable-threads foo.wat
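For context, the spec computes the effective address as index + offset without 32-bit wrapping, so 0xffff_fff0 + 16 = 0x1_0000_0000, a 33-bit value well past the single 64 KiB page; a wrapping 32-bit add lands back at 0 instead. A minimal sketch of that arithmetic (the names here are illustrative, not Wasmtime's actual code):

// Sketch of the spec's effective-address arithmetic; illustrative only.
const WASM_PAGE_SIZE: u64 = 0x1_0000; // 64 KiB

fn effective_addr(index: u32, offset: u32) -> u64 {
    // Widen before adding so the sum cannot wrap at 32 bits.
    index as u64 + offset as u64
}

fn main() {
    let addr = effective_addr(0xffff_fff0, 16);
    assert_eq!(addr, 0x1_0000_0000); // a 33-bit address, past a 1-page memory
    assert!(addr + 4 > WASM_PAGE_SIZE); // so a 4-byte atomic load must trap
    // Wrapping 32-bit addition instead lands back at 0, which is in bounds:
    assert_eq!(0xffff_fff0u32.wrapping_add(16), 0);
}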
alexcrichton added the bug label on Jul 30, 2021
cfallin (Member) commented on Jul 30, 2021

It appears that atomic loads handle their offset differently than other loads/stores: in #2077 we introduced a new LoadNoOffset instruction class, and instead of folding the offset into the memory operation, the Wasm translator inserts a bare iadd_imm (with no overflow check).

@julian-seward1, do you recall why we didn't just take an offset on the atomic load/store ops like we do on regular ones? Was it in order to simplify the implementation or is there some other reason?

IMHO the right fix (assuming the above question doesn't reveal a blocking reason) is to harmonize atomic loads/stores with regular ones and use the same offset handling / overflow checking logic.
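Concretely, that would route atomics through the same bounds check regular loads/stores get. A rough sketch of the heap_addr-style logic (the helper and type names here are hypothetical, not Cranelift's actual API):

// Rough sketch of a heap_addr-style bounds check; names are hypothetical.
#[derive(Debug)]
enum Trap {
    OutOfBounds,
}

fn bounds_checked_addr(
    base: u64,     // host base address of the linear memory
    mem_size: u64, // current memory size in bytes
    index: u32,    // dynamic i32 index popped off the Wasm stack
    offset: u32,   // static offset immediate on the instruction
    access: u64,   // size of the access in bytes
) -> Result<u64, Trap> {
    // Widening addition: the sum is at most 33 bits and cannot wrap in u64.
    let effective = index as u64 + offset as u64;
    if effective + access > mem_size {
        return Err(Trap::OutOfBounds);
    }
    Ok(base + effective)
}

fn main() {
    // The module above: 1 page of memory, index 0xffff_fff0, offset 16,
    // 4-byte access (base 0 is a dummy value for illustration).
    let r = bounds_checked_addr(0, 0x1_0000, 0xffff_fff0, 16, 4);
    assert!(matches!(r, Err(Trap::OutOfBounds)));
}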

alexcrichton added a commit to alexcrichton/wasmtime that referenced this issue Aug 4, 2021
This commit consolidates all calculations of guest addresses into one
`prepare_addr` function. This notably removes the atomics-specific paths
as well as the `prepare_load` function (now renamed to `prepare_addr`
and folded into `get_heap_addr`).

The goal of this commit is to simplify how addresses are managed in the
code generator so that atomics use all the shared infrastructure of
other loads/stores. This additionally fixes bytecodealliance#3132 via
the use of `heap_addr` in clif for all operations.

I also added a number of tests for loads/stores with varying alignments.
Originally I was going to allow loads/stores to be unaligned, since
that's what the current formal specification says, but the overview of
the threads proposal disagrees with the formal specification, so I
figured I'd leave the behavior as-is; adding tests probably doesn't hurt.

Closes bytecodealliance#3132
alexcrichton added a commit that referenced this issue Aug 4, 2021
* Consolidate address calculations for atomics

* Fix old backend

* Guarantee misalignment checks happen before out-of-bounds
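That last point fixes the trap precedence for atomics: misalignment must be reported even when the access is also out of bounds. A small sketch of the guaranteed ordering (hypothetical names, not the code Cranelift emits):

// Sketch of the guaranteed check order for atomic accesses; hypothetical.
#[derive(Debug)]
enum Trap {
    UnalignedAtomic,
    OutOfBounds,
}

fn check_atomic_access(addr: u64, access: u64, mem_size: u64) -> Result<(), Trap> {
    // Misalignment is checked first: the effective address must be a
    // multiple of the access size, even if the access is also out of bounds.
    if addr % access != 0 {
        return Err(Trap::UnalignedAtomic);
    }
    // Only then is the bounds check performed.
    if addr + access > mem_size {
        return Err(Trap::OutOfBounds);
    }
    Ok(())
}

fn main() {
    // Address 3 is both misaligned for a 4-byte access and, with a tiny
    // memory, out of bounds; the unaligned trap wins.
    assert!(matches!(
        check_atomic_access(3, 4, 2),
        Err(Trap::UnalignedAtomic)
    ));
}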