Segfault with function splatting #1942
Here's a pure-Enzyme reproducer:

```julia
using Enzyme

f(x::T...) where {T} = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

x = [2.0, 3.0]
hvp(splat(f), x, zero(x))
```

Error log:
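As a sanity check on what `hvp` should return here: since `zero(x)` is the zero vector, the expected HVP is exactly zero, but with a nonzero seed the value can be verified independently by nested finite differences. Here's a sketch in Python with NumPy (my own illustration, not part of Enzyme; the seed `v = [1.0, 0.0]` is an arbitrary choice):

```python
import numpy as np

def f(x):
    # Same test function as the reproducer: (1 - x1)^2 + 100*(x2 - x1^2)^2
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x, eps=1e-6):
    # Central-difference gradient of f
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def hvp(x, v, eps=1e-5):
    # Hessian-vector product H(x) @ v via a directional difference of the gradient
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

x = np.array([2.0, 3.0])
v = np.array([1.0, 0.0])
print(hvp(x, v))  # analytic Hessian at x is [[3602, -800], [-800, 200]], so Hv ≈ [3602, -800]
```

This is only a numerical reference point for the values the Enzyme call should produce once the bug is fixed.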
gdalle changed the title from "Segfault with DI and splatting" to "Segfault with function splatting" (Oct 8, 2024)
Note that with the latest changes to DI's handling of StaticArrays, both the splatted and non-splatted versions now work:

```julia
import DifferentiationInterface as DI
using Enzyme: Enzyme
using StaticArrays

f(x::T...) where {T} = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
f_nosplat(x::AbstractVector) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

xs = SVector(2.0, 3.0)
backend = AutoEnzyme()

DI.hessian(f_nosplat, backend, xs) # works
DI.hessian(splat(f), backend, xs)  # works
```
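For reference, the Hessian both calls should agree on can be written down analytically. A quick check in Python (my own helper for illustration, unrelated to DI's API):

```python
def rosenbrock_hessian(x1, x2):
    # Analytic Hessian of (1 - x1)^2 + 100*(x2 - x1^2)^2:
    #   d2f/dx1^2   = 2 - 400*x2 + 1200*x1^2
    #   d2f/dx1dx2  = -400*x1
    #   d2f/dx2^2   = 200
    return [[2 - 400 * x2 + 1200 * x1**2, -400 * x1],
            [-400 * x1, 200]]

print(rosenbrock_hessian(2.0, 3.0))  # -> [[3602.0, -800.0], [-800.0, 200]]
```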
Resolved by #1975
I don't think this is completely solved? With Enzyme v0.13.11 I don't get a segfault, but I do get an error:

```julia
julia> Enzyme.hvp(splat(f), x, zero(x))
ERROR: Enzyme execution failed.
Enzyme: Not yet implemented, mixed activity for jl_new_struct constants=Bool[1, 1, 1, 1, 1, 1, 0, 1, 1] %16 = call noalias nonnull "enzyme_type"="{[-1]:Pointer}" {} addrspace(10)* ({} addrspace(10)* ({} addrspace(10)*, {} addrspace(10)**, i32)*, {} addrspace(10)*, ...) @julia.call({} addrspace(10)* ({} addrspace(10)*, {} addrspace(10)**, i32)* noundef nonnull @jl_f_tuple, {} addrspace(10)* noundef null, {} addrspace(10)* addrspacecast ({}* inttoptr (i64 135706086968176 to {}*) to {} addrspace(10)*), {} addrspace(10)* addrspacecast ({}* inttoptr (i64 135706049721088 to {}*) to {} addrspace(10)*), {} addrspace(10)* addrspacecast ({}* inttoptr (i64 135706086968176 to {}*) to {} addrspace(10)*), {} addrspace(10)* addrspacecast ({}* inttoptr (i64 135703490524368 to {}*) to {} addrspace(10)*), {} addrspace(10)* nonnull %15, {} addrspace(10)* addrspacecast ({}* inttoptr (i64 135703919856144 to {}*) to {} addrspace(10)*), {} addrspace(10)* nonnull %11, {} addrspace(10)* addrspacecast ({}* inttoptr (i64 135705534029648 to {}*) to {} addrspace(10)*), {} addrspace(10)* addrspacecast ({}* inttoptr (i64 135706276003848 to {}*) to {} addrspace(10)*)) #17, !dbg !58 Tuple{Bool, Any, Any}[(1, Val{false}, GPUCompiler.BITS_REF), (1, Val{1}, GPUCompiler.BITS_REF), (1, Val{false}, GPUCompiler.BITS_REF), (1, Type{@NamedTuple{1, 2, 3}}, GPUCompiler.BITS_REF), (0, nothing, nothing), (1, Type{typeof(f)}, GPUCompiler.BITS_REF), (0, nothing, nothing), (1, typeof(f), GPUCompiler.BITS_REF), (1, Nothing, GPUCompiler.BITS_REF)] LLVM.User[LLVM.ConstantExpr(0x0000000019ed5530), LLVM.ConstantExpr(0x000000001affa930), LLVM.ConstantExpr(0x0000000019ed5530), LLVM.ConstantExpr(0x000000001df82d30), LLVM.CallInst(%15 = call nonnull "enzyme_type"="{[-1]:Pointer}" {} addrspace(10)* ({} addrspace(10)* ({} addrspace(10)*, {} addrspace(10)**, i32)*, {} addrspace(10)*, ...)
@julia.call({} addrspace(10)* ({} addrspace(10)*, {} addrspace(10)**, i32)* noundef nonnull @ijl_apply_generic, {} addrspace(10)* noundef addrspacecast ({}* inttoptr (i64 135706048967360 to {}*) to {} addrspace(10)*), {} addrspace(10)* nonnull %14) #16, !dbg !58), LLVM.ConstantExpr(0x000000001b075fb0), LLVM.CallInst(%11 = call nonnull "enzyme_type"="{[-1]:Pointer}" {} addrspace(10)* ({} addrspace(10)* ({} addrspace(10)*, {} addrspace(10)**, i32)*, {} addrspace(10)*, ...) @julia.call({} addrspace(10)* ({} addrspace(10)*, {} addrspace(10)**, i32)* noundef nonnull @jl_f__apply_iterate, {} addrspace(10)* noundef null, {} addrspace(10)* addrspacecast ({}* inttoptr (i64 135706084209456 to {}*) to {} addrspace(10)*), {} addrspace(10)* addrspacecast ({}* inttoptr (i64 135702402236400 to {}*) to {} addrspace(10)*), {} addrspace(10)* nonnull %10) #15, !dbg !55), LLVM.ConstantExpr(0x0000000014b05f30), LLVM.ConstantExpr(0x0000000012df5ff0)]
Stacktrace:
  [1] runtime_iterate_augfwd
    @ ~/.julia/packages/Enzyme/vgArw/src/rules/jitrules.jl:77 [inlined]
  [2] runtime_iterate_augfwd
    @ ~/.julia/packages/Enzyme/vgArw/src/rules/jitrules.jl:0 [inlined]
  [3] fwddiffejulia_runtime_iterate_augfwd_29831_inner_1wrap
    @ ~/.julia/packages/Enzyme/vgArw/src/rules/jitrules.jl:0
  [4] macro expansion
    @ ~/.julia/packages/Enzyme/vgArw/src/compiler.jl:8136 [inlined]
  [5] enzyme_call
    @ ~/.julia/packages/Enzyme/vgArw/src/compiler.jl:7702 [inlined]
  [6] ForwardModeThunk
    @ ~/.julia/packages/Enzyme/vgArw/src/compiler.jl:7491 [inlined]
  [7] runtime_generic_fwd(activity::Type{…}, runtimeActivity::Val{…}, width::Val{…}, RT::Val{…}, f::typeof(Enzyme.Compiler.runtime_iterate_augfwd), df::Nothing, primal_1::Type{…}, shadow_1_1::Nothing, primal_2::Val{…}, shadow_2_1::Nothing, primal_3::Val{…}, shadow_3_1::Nothing, primal_4::Val{…}, shadow_4_1::Nothing, primal_5::Val{…}, shadow_5_1::Nothing, primal_6::typeof(f), shadow_6_1::Nothing, primal_7::Nothing, shadow_7_1::Nothing, primal_8::Vector{…}, shadow_8_1::Vector{…}, primal_9::Vector{…}, shadow_9_1::Vector{…})
    @ Enzyme.Compiler ~/.julia/packages/Enzyme/vgArw/src/rules/jitrules.jl:305
  [8] Splat
    @ ./operators.jl:1271 [inlined]
  [9] diffejulia_Splat_30897wrap
    @ ./operators.jl:0 [inlined]
 [10] macro expansion
    @ ~/.julia/packages/Enzyme/vgArw/src/compiler.jl:8136 [inlined]
 [11] enzyme_call
    @ ~/.julia/packages/Enzyme/vgArw/src/compiler.jl:7702 [inlined]
 [12] CombinedAdjointThunk
    @ ~/.julia/packages/Enzyme/vgArw/src/compiler.jl:7475 [inlined]
 [13] autodiff_deferred
    @ ~/.julia/packages/Enzyme/vgArw/src/Enzyme.jl:781 [inlined]
 [14] autodiff
    @ ~/.julia/packages/Enzyme/vgArw/src/Enzyme.jl:512 [inlined]
 [15] gradient!
    @ ~/.julia/packages/Enzyme/vgArw/src/Enzyme.jl:1786 [inlined]
 [16] fwddiffejulia_gradient__30894wrap
    @ ~/.julia/packages/Enzyme/vgArw/src/Enzyme.jl:0
 [17] macro expansion
    @ ~/.julia/packages/Enzyme/vgArw/src/compiler.jl:8136 [inlined]
 [18] enzyme_call
    @ ~/.julia/packages/Enzyme/vgArw/src/compiler.jl:7702 [inlined]
 [19] ForwardModeThunk
    @ ~/.julia/packages/Enzyme/vgArw/src/compiler.jl:7491 [inlined]
 [20] autodiff
    @ ~/.julia/packages/Enzyme/vgArw/src/Enzyme.jl:647 [inlined]
 [21] autodiff
    @ ~/.julia/packages/Enzyme/vgArw/src/Enzyme.jl:537 [inlined]
 [22] autodiff
    @ ~/.julia/packages/Enzyme/vgArw/src/Enzyme.jl:504 [inlined]
 [23] hvp!
    @ ~/.julia/packages/Enzyme/vgArw/src/Enzyme.jl:2497 [inlined]
 [24] hvp(f::Base.Splat{typeof(f)}, x::Vector{Float64}, v::Vector{Float64})
    @ Enzyme ~/.julia/packages/Enzyme/vgArw/src/Enzyme.jl:2464
 [25] top-level scope
    @ REPL[50]:1
Some type information was truncated. Use `show(err)` to see complete types.
```
EDIT: pure Enzyme MWE is below.
Here's an MWE for the bug I uncovered in the JuMP docs PR (jump-dev/JuMP.jl#3836). It seems to be due to splatting, which is only necessary because JuMP doesn't use vector arguments:
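(The JuMP MWE itself is not reproduced in this excerpt.) The splatting adapter at the heart of the issue, Julia's `Base.splat`, can be sketched in Python to show why it's needed when an autodiff API takes a single vector but the user's function takes positional scalar arguments. All names here are illustrative:

```python
def f(x1, x2):
    # Multi-argument scalar function, the style JuMP operators use
    return (1 - x1)**2 + 100 * (x2 - x1**2)**2

def splat(func):
    # Rough analogue of Julia's Base.splat: turn f(a, b, ...) into g(v) = f(*v)
    return lambda v: func(*v)

g = splat(f)
print(g([2.0, 3.0]))  # same as f(2.0, 3.0) -> 101.0
```

The bug reported here is that differentiating through this kind of wrapper (iterating a collection into positional arguments) crashed Enzyme.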
And here's the error log (on 1.10):