
feat(perf): Remove unused last loads in mem2reg #5905

Closed

Conversation

@vezenovm (Contributor) commented Sep 3, 2024

Description

Problem*

Resolves

Summary*

Follow-up to #5865

Make the last-loads removal smarter by checking whether each last load is actually used anywhere. If it is not, remove the load and allow stores to that load's address to be removed as well.

Additional Context

Documentation*

Check one:

  • No documentation needed.
  • Documentation included in this PR.
  • [For Experimental Features] Documentation to be submitted in a separate PR.

PR Checklist*

  • I have tested the changes locally.
  • I have formatted the changes with Prettier and/or cargo fmt on default settings.

@vezenovm vezenovm changed the base branch from master to mv/simplify-immediate-stores September 3, 2024 17:04
@@ -7,10 +7,8 @@ struct EnumEmulation {
unconstrained fn main() -> pub Field {
let mut emulated_enum = EnumEmulation { a: Option::some(1), b: Option::none(), c: Option::none() };

// Do a copy to optimize out loads in the loop
let copy_enum = emulated_enum;
@vezenovm (Contributor, Author) commented Sep 3, 2024
After this PR:
Even after removing this copy optimization, we get a Brillig bytecode size improvement from 55 -> 44 opcodes.

Keeping the copy optimization, we get the following final SSA, which produces 24 Brillig opcodes:

After Array Set Optimizations:
brillig fn main f0 {
  b0():
    jmp b1(u32 0)
  b1(v6: u32):
    v27 = eq v6, u32 0
    jmpif v27 then: b2, else: b3
  b2():
    v28 = add v6, u32 1
    jmp b1(v28)
  b3():
    return Field 2
}

@vezenovm (Contributor, Author) commented:

Ok, this PR now produces the optimal SSA specified above without the copy optimization. The only remaining failure is on uhashmap, the same one as in #5897.

github-actions bot commented Sep 3, 2024

Changes to Brillig bytecode sizes

Generated at commit: 85faf641c1cba5d018c805e76b45c58875ffeb60, compared to commit: 0db5610b501013e8b549102815f909aa0b656db7

🧾 Summary (10% most significant diffs)

Program                      | Brillig opcodes (+/-) | %
-----------------------------|-----------------------|--------
poseidon2                    | -15 ✅                | -4.52%
brillig_loop_size_regression | -31 ✅                | -56.36%

Full diff report 👇
Program                                | Brillig opcodes | (+/-)  | %
---------------------------------------|-----------------|--------|--------
u128                                   | 3,794           | (-4)   | -0.11%
hashmap                                | 36,837          | (-105) | -0.28%
uhashmap                               | 24,210          | (-97)  | -0.40%
fold_complex_outputs                   | 1,015           | (-5)   | -0.49%
fold_numeric_generic_poseidon          | 1,114           | (-15)  | -1.33%
no_predicates_numeric_generic_poseidon | 1,114           | (-15)  | -1.33%
poseidon2                              | 317             | (-15)  | -4.52%
brillig_loop_size_regression           | 24              | (-31)  | -56.36%

@vezenovm (Contributor, Author) commented Sep 4, 2024

Closing in favor of #5925

@vezenovm vezenovm closed this Sep 4, 2024