Overflow checked arithmetics #41

Open
Petr-Hlavenka opened this issue Mar 30, 2016 · 26 comments

@Petr-Hlavenka

As discussed in https://github.com/JuliaLang/julia/issues/15690 with @timholy, I've recently been bitten by the inconsistency regarding overflow in FixedPoint arithmetic when dealing with Images.jl.

FixedPointNumbers pretend to be <:Real numbers, but their arithmetic behaves more like Integer arithmetic. I cannot think of any use case where one would gain an advantage from "modulo" arithmetic when dealing with numbers guaranteed to fall within the [0, 1] range. I'd be glad if my code stopped with an OverflowError exception indicating a problem in my algorithm, which would then be easily fixable by manually widening the operands. With the current silent overflow, the algorithm just continues, ultimately giving wrong results.

Before writing a PR to introduce arithmetic that throws on overflow or underflow, I'd like to know your opinion on this change. I was also thinking of using a global "flag" to dynamically opt in to the overflow-checking behavior, but I'm worried about the branching costs. Would it be possible to use, e.g., traits for this behavior change?

@timholy
Member

timholy commented Mar 30, 2016

Agreed that this can be an annoyance, but the alternative is annoying, too. Operations with FixedPointNumbers used to return Float32 or Float64 (see #12), and that changed for consistency with base julia in #27.

Let's consider alternatives. With images, one approach would be float(img1)+img2. float was deprecated in Base, but it's too useful in the image-processing world not to have it. Another (more performant) option would be to make the output type the first argument, +(Image{Float32}, img1, img2) instead of img1+img2. Both of these have strengths and weaknesses. Any other ideas?

@Petr-Hlavenka
Author

The problem is that usually you don't even know you should take care of overflow arithmetic when dealing with images (recall FixedPoint <: Real). Usually, any math (applying kernels, multiplying by a number, etc.) you do to a pixel widens implicitly (just because you don't specify the constant as UFixed8):

println(typeof(im[1,1]))
println(typeof(im[1,1]*1))

gives you implicit widening

ColorTypes.Gray{FixedPointNumbers.UFixed{UInt8,8}}
ColorTypes.Gray{Float64}

Unfortunately, overflow pops up quickly as soon as you try to subtract one pixel value from another (both UFixed) and forget to convert the operands to a float.

I think that the only time the FixedPointNumbers arithmetic ever gets called is by mistake, when the function one actually wanted to call was the floating-point arithmetic.
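The wraparound described above can be seen directly on the raw storage values. This is a minimal sketch using plain UInt8 arithmetic, which UFixed8 shares (the type wraps a UInt8 field):

```julia
# Subtracting a larger pixel value from a smaller one wraps around,
# because the underlying storage is a UInt8 with modular arithmetic.
a = 0x33   # raw value of a dark pixel (51/255 ≈ 0.2)
b = 0xcc   # raw value of a bright pixel (204/255 ≈ 0.8)
d = a - b  # wraps: 51 - 204 ≡ 103 (mod 256)
println(d) # 0x67, i.e. ≈ 0.404 when reinterpreted as UFixed8
```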

@Petr-Hlavenka
Author

Actually, I'd prefer the same behavior as rational numbers (throw an OverflowError, don't implicitly widen):

julia> typeof(0xff//0x33)
Rational{UInt8}

julia> typeof(0xff//0x33 + 0xff)
LoadError: OverflowError()
while loading In[76], in expression starting on line 1
 in + at rational.jl:180

@timholy
Member

timholy commented Mar 30, 2016

A simpler example:

julia> x = UFixed8(0.5)
UFixed8(0.502)

julia> x*1
0.5019607843137255

julia> x*one(UFixed8)
UFixed8(0.502)

It is a little disturbing that multiplying by Int(1) modifies the type, but of course you have to do that if you want type-stability.

One of the points I'd raise, though, is that this has very little to do with FixedPointNumbers. Let's say we did the same thing that many other image packages do, interpreting floating point images within the scale 0 to 1, and UInt8 images within the range 0 to 255 (which is pretty yucky). Add two UInt8 images together, and you get another UInt8 image out, with likely overflow. So the behavior you dislike has nothing to do with FixedPointNumbers.

In other words, you're really asking julia to switch to checked arithmetic for all operations, or go back to the old way of promoting all integer arithmetic to the machine width (UInt64/UInt32 for most operations).

For the record, from Matlab you get this:

>> a8 = uint8([1, 250])

a8 =

    1  250

>> a8+a8

ans =

    2  255

@timholy
Member

timholy commented Mar 30, 2016

...and lest you think that's better, consider:

>> (a8+a8)-a8

ans =

    1    5

which is a disaster. Compare julia:

julia> x = UInt8[1,250]
2-element Array{UInt8,1}:
 0x01
 0xfa

julia> (x+x)-x
2-element Array{UInt8,1}:
 0x01
 0xfa

which is at least good mathematics.

@timholy
Member

timholy commented Mar 30, 2016

@phlavenk, all this is to say it's not an easy problem, and that all choices have negatives. It would be lovely to find a nice syntax for "safe" arithmetic operations (i.e., ones that promote).

Some experiments (julia-0.5):

julia> a = 0xff
0xff

julia> b = 0xf0
0xf0

julia> a + b
0xef

julia> safeplus(x::UInt8, y::UInt8) = UInt16(x)+UInt16(y)
safeplus (generic function with 1 method)

julia> safe(::typeof(+)) = safeplus
safe (generic function with 1 method)

julia> a safe(+) b
ERROR: syntax: extra token "safe" after end of expression
 in eval(::Module, ::Any) at ./boot.jl:237

julia> safe(+)(a, b)
0x01ef

Works, but it's not particularly nice. Any ideas?

@andyferris

The reason in Base that it is safe to e.g. have an operation on two Float16s return another Float16 is that these types have NaN16, Inf16 and -Inf16.

For UInt8 it is reasonably obvious that we are just doing CPU-style arithmetic on 8 bits. Wrapping overflows might even be desired for some applications.

Here we are sitting in between... we are dealing with "real" numbers but with a more efficient representation than floating point for our given problem, but there is no NaN or Inf values to fall back upon. In theory, these could be implemented and checked-for at runtime (at the cost of speed, complexity, and reserving some of the values - the last of which is really bad for e.g. standard 8-bit per channel RGB).

Unlike the solution that Base takes to address the problems particular to the types defined in Base, perhaps automatic widening is warranted for these types? Generally, if I'm adding, subtracting, or multiplying color-channel intensities, I want to do so without strange artefacts in the middle of my computation. When the value is assigned to another array/image, the widened pixel values will be squashed and clipped to conform to the array/image eltype.

@timholy
Member

timholy commented Feb 2, 2017

I have more than a little sympathy for this viewpoint, but also some experience with its negatives. In old versions of Julia, UInt8 + UInt8 returned a UInt64. Perhaps surprisingly, this proved to be pretty unpopular. Aside from the "we want to exploit the wacky properties of overflow" reason, probably the main negative to automatic widening is all the subtle ways you can break type-stability or make it very challenging to initialize an accumulator. Back then, these problems were confounded by the empty-array problem, but now that we're allowed to use inference to solve that problem, maybe it would be better.

One thing you can't do is "minimal" widening, i.e., ::UInt8 + ::UInt8 -> ::UInt16. The problem is that if you add 3 ::UInt8 numbers together you get a UInt32; add 4, and you get a UInt64. This is a nightmare in a language like Julia where types matter. If you're going to widen, you have to immediately go all the way.
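The type growth under "minimal" widening can be sketched with a hypothetical wplus that widens each operand before adding (wplus is illustrative only, not a real API):

```julia
# Hypothetical "minimal widening" addition: widen both operands, then add.
# Each level of nesting produces a wider result type.
wplus(x, y) = widen(x) + widen(y)

s2 = wplus(0x01, 0x02)        # UInt8 operands  -> UInt16 result
s3 = wplus(s2, 0x03)          # UInt16 in chain -> UInt32 result
println(typeof(s2), " ", typeof(s3))
```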

@andyferris

Good points. It's definitely a tricky one. I note that some of these things really need fixing, outside of this package, like initializing accumulators, but these are also very tricky problems.

@Petr-Hlavenka
Author

Is there a way to at least use a macro that would widen all variables of the N0fx types in an expression to the F32f32 type? I must admit the learning curve for metaprogramming is too steep for me and I treat it as a kind of black-magic art. Therefore I'm asking here for help.

For a trivial example like doing average (sure, I know how to write it correctly for trivial cases like this):

    function avr(im_arr, blackim_arr)
        tmp = @widen similar(im_arr[1])      # here tmp will be an array of e.g. Fixed{Int64,8}
        for (i, im) in enumerate(im_arr)
            tmp .+= @widen im .- blackim_arr[i]    # some "fancy math" on pixels where underflow/overflow matters
        end
        round.(eltype(im_arr[1]), tmp ./ length(im_arr))
    end

The widen macro would transform all the Normed{UIntxx, xx} values by widening them into a type with the best arithmetic performance and enough bits for the sign and the value - I assume Int64.
It is then the user's responsibility to convert the widened value back to the desired type/precision. In the future, the macro could ensure the Julia code is transformed in such a way that instead of calling a widening function, it uses optimized operators as @timholy speculated above, using bit shifts and other CPU magic.

For now, unfortunately, to ensure correctness, I convert all input images to Float64 and do all operations on them. The downside is the speed and the 8-times-higher memory consumption. This is a problem when processing large 8/16-bit images.

I like the idea behind Images.jl a lot, where image algorithms work on normalized black/white values regardless of representation. I'd be glad to use dot fusion with implicit widening and store the results again as an 8/16-bit image. This is usually sufficient precision. The only problem I'm facing is the correctness of the implementation - simple with Float64, tricky with the overflow/underflow gotchas of 8-bit integer arithmetic, where making a mistake is pretty easy.
Thank you for your feedback.
Petr

@timholy
Member

timholy commented Feb 13, 2017

ImageCore includes functions like float32 that work "through" colors, etc. Might that help?

My general strategy for writing code like what you have above is to

  • separate out the initialization of the output into a separate function. This allows users to specify a different output type if they want to control it
  • accumulate internally into a "safe" type. The storage into the output will then trigger an InexactError if it fails.

For example:

function avr(im_arr, blackim_arr)
    Tout = typeof(first(im_arr[1]) - first(blackim_arr[1]))
    avr!(similar(im_arr[1], Tout), im_arr, blackim_arr)
end

function avr!(out, im_arr, blackim_arr)
    z = zero(accumtype(im_arr..., blackim_arr...))
    for I in eachindex(out)
        tmp = z
        for i = 1:length(im_arr)
            tmp += oftype(z, im_arr[i][I]) - oftype(z, blackim_arr[i][I])
        end
        out[I] = tmp   # this will throw an InexactError if there's overflow that eltype(out) can't handle
    end
    out
end

This has several advantages over the version you wrote above, including not creating temporary arrays at each stage.

The tricky part is accumtype, which might look something like this:

"""
    accumtype(arrays...) -> T

Compute a type safe for accumulating arithmetic on elements of the given `arrays`. For "small integer" element types, type `T` will be wider so as to avoid overflow.
"""  
accumtype(imgs::AbstractArray...) = _accumtype(Union{}, imgs...)

using FixedPointNumbers: floattype
_accumtype{Tout,T<:FixedPoint}(::Type{Tout}, img::AbstractArray{T}, imgs...) = _accumtype(promote_type(Tout, floattype(T)), imgs...)
_accumtype{Tout,T<:Base.WidenReduceResult}(::Type{Tout}, img::AbstractArray{T}, imgs...) = _accumtype(promote_type(Tout, widen(T)), imgs...)
_accumtype{Tout,T}(::Type{Tout}, img::AbstractArray{T}, imgs...) = _accumtype(promote_type(Tout, T), imgs...)
# This is called when we've exhausted all the inputs
_accumtype{Tout}(::Type{Tout}) = Tout

This calls itself recursively so that it's type-stable.

I wonder if we should put accumtype into FixedPointNumbers? And extend it in ColorTypes?

@Petr-Hlavenka
Author

The pattern you suggest is highly type-unstable because of the unknown type of z during compilation. A slight modification of your code to

using BenchmarkTools
n = 2000; a = [UFixed8.(rand(n,n)) for i=1:20]; b = [UFixed8.(rand(n,n)) for i=1:20];
function avr2!(out, im_arr, blackim_arr, z=zero(accumtype(im_arr..., blackim_arr...)))    
    for I in eachindex(out)
        tmp = z
        for i = 1:length(im_arr)
            tmp += oftype(z, im_arr[i][I]) - oftype(z, blackim_arr[i][I])
        end
        out[I] = tmp   # this will throw an InexactError if there's overflow that eltype(out) can't handle
    end
    out
end
out = zeros(Float64, size(a[1])...)
@benchmark avr2!(out, a,b)

gives 750 ms per run, instead of 17.7 s with the accumtype determination inside the body.

The fastest implementation is the following:

w{T<:FixedPoint}(x::T) = Fixed{Int64, 8}(x.i, 0)

function avr2!(out, im_arr, blackim_arr)        
    for i = 1:length(im_arr)
        for I in eachindex(out)                
            out[I] += w(im_arr[i][I]) - w(blackim_arr[i][I])
        end
    end
    out
end

out = zeros(Fixed{Int64, 8}, size(a[1])...)
@benchmark avr2!(out, a,b)

which gives 250 ms per run. It widens to Fixed{Int64, 8}, so it still uses integer arithmetic and traverses memory contiguously.

The most convenient way for me would be the possibility of rewriting the inner-loop line

out[I] += w(im_arr[i][I]) - w(blackim_arr[i][I])

with

@widen out[I] += im_arr[i][I] - blackim_arr[i][I]

that would do the transformation for all fixed-point numbers (expecting that Normed -> Fixed conversion will be allowed; it is not possible now):

T = Fixed{Int64, 8}
out[I] += convert(T, im_arr[i][I]) - convert(T, blackim_arr[i][I])
#or
T = typeof(out[I])
out[I] += convert(T, im_arr[i][I]) - convert(T, blackim_arr[i][I])

This way we can reach speed, correctness, and readability at the same time, all of which are so important for image processing.
What's your opinion?

@timholy
Member

timholy commented Feb 14, 2017

If you were using tuples rather than arrays for im_arr and blackim_arr, then the original version would have been fine. But yes, using z as an argument is indeed safer.

Also, using a FixedPoint type rather than a Float32 is great when all the inputs (and the output) are consistent with respect to Fixed or Normed and also have the same f. In a "real" implementation that would need to be checked for, so the rules could get a little complex, but it's totally doable.

Nice! Yes, let's do some version of this. If you feel comfortable with it, please submit a PR, otherwise I will get to it eventually.

@timholy
Member

timholy commented Nov 22, 2019

To tackle this, I'm considering writing a package called CheckedArithmetic. I'd appreciate feedback about whether the strategies I'm planning to implement make sense, and whether they do as much as we reasonably can to solve the problem. TBH I am not sure how useful @checked will be, but @check (assuming it doesn't prove more difficult to implement than I expect) seems likely to be useful. accumulatortype and acc are less about checking and more about writing safe algorithms, but I don't think they are "big" enough to merit a package on their own.

Here's the proposed README:

CheckedArithmetic

This package aims to make it easier to detect overflow in numeric computations.
It exports two macros, @check and @checked, as well as functions
accumulatortype and acc.
Packages can add support for their own types to interact appropriately
with these tools.

@checked

@checked converts arithmetic operators to their checked variants.
For example,

@checked z = x + y

rewrites this expression as

z = Base.Checked.checked_add(x, y)

Note that this macro only operates at the level of surface syntax, i.e.,

@checked z = f(x) + f(y)

will not detect overflow caused by f.

The Base.Checked module defines numerous checked operations.
These can be specialized for custom types.
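A minimal sketch of such a surface-syntax rewrite might look like the following (handling only binary +; the names rewrite and @checked_sketch are illustrative, not the actual package implementation):

```julia
# Minimal sketch: rewrite binary `+` calls into Base.Checked.checked_add.
# A full implementation would map every arithmetic operator.
rewrite(ex) = ex  # literals and symbols pass through unchanged
function rewrite(ex::Expr)
    if ex.head == :call && ex.args[1] == :+ && length(ex.args) == 3
        return Expr(:call, :(Base.Checked.checked_add),
                    rewrite(ex.args[2]), rewrite(ex.args[3]))
    end
    # recurse into all other expressions (assignments, nested calls, ...)
    return Expr(ex.head, map(rewrite, ex.args)...)
end

macro checked_sketch(ex)
    esc(rewrite(ex))
end

@checked_sketch 0x01 + 0x02    # 0x03
# @checked_sketch 0xff + 0x01  # throws OverflowError
```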

@check

@check performs an operation in two different ways,
checking that both approaches agree.
@check is primarily useful in tests.

For example,

d = @check ssd(a, b)

would perform ssd(a, b) with the inputs as given, and also compute ssd(asafe, bsafe)
where asafe and bsafe are "safer" variants of a and b. [EDITORIAL NOTE: for example, if a is an array of N0f8, then asafe = float.(a) would be a reasonable choice.]
It then tests whether the result obtained from the "safe" arguments is consistent with
the result obtained from a and b.
If the two differ to within the precision of the "ordinary" (unsafe) result, an
error is thrown.

Packages can specialize CheckedArithmetic.safearg to control how asafe and bsafe
are generated. To guard against oversights, safearg must be explicitly defined for
each numeric type---the fallback method for safearg(::Number) is to throw an error.

accumulatortype and acc

These functions are useful for writing overflow-safe algorithms.
accumulatortype(T) or accumulatortype(T1, T2...) takes types as input arguments
and returns a type suitable for limited-risk arithmetic operations.
acc(x) is just shorthand for convert(accumulatortype(typeof(x)), x).

For example, this package includes the definition

accumulatortype(::Type{UInt8}) = Int

If you're computing a sum of squares and want to make sure your algorithm is (reasonably)
safe for an input array of UInt8 numbers, you might want to write that as

function sumsquares(A::AbstractArray)
    s = zero(accumulatortype(eltype(A)))
    for a in A
        s += acc(a)^2
    end
    return s
end

@kimikage
Collaborator

kimikage commented Nov 23, 2019

CheckedArithmetic looks good to me!

IMO, accumulatortype should depend not only on the element type but also on the array size, if we can know it. It is a common technique to split an array into sub-arrays.

@johnnychen94
Collaborator

johnnychen94 commented Nov 23, 2019

As a possible alternative to @check, I made some efforts in ImageDistances.jl to guarantee numeric accuracy (including overflow) with the help of ReferenceTests.jl.

E.g., https://github.com/JuliaImages/ImageDistances.jl/blob/d43368fba73fcebb0413bf65ab32370e2fa43108/test/metrics.jl#L25-L39

@timholy
Member

timholy commented Nov 23, 2019

IMO, accumulatortype should depend on not only the element type but also the array size if we can know it. It is a common technique to split an array to sub-arrays.

Can you give an example of how it would be used? For example, if we use Float64 as an accumulatortype for N0f8 images, then the image array would have to have >10^13 elements before we'd have to start worrying about roundoff error. That's not out of the realm of possibility (we have some that are right around there) but it is a big array...

@timholy
Member

timholy commented Nov 23, 2019

Thanks for the reminder about ReferenceTests! Definitely a useful resource.

@kimikage
Collaborator

Can you give an example of how it would be used?

What I want to say is in your past comments.

Also, using a FixedPoint type rather than a Float32 is great when all the inputs (and the output) are consistent with respect to Fixed or Normed and also have the same f. In a "real" implementation that would need to be checked for, so the rules could get a little complex, but it's totally doable.

For example, the sum of the pixel values of a 4096 x 4096 N0f8 image can be represented exactly in N24f8. It is a little unreliable, but when we divide the image into 16 sub-images, for example, it will be enough for general image processing and also suitable for parallel computing. Concerning type stability, the conversion (e.g. to Float64) should be done once in the final stage, and there is no need to convert each element to the final type.

I understand that I am asking for too much. However, if we just want to calculate with Float64 / Float32, we should simply convert the values to Float64 / Float32 first, if memory allows.

@kimikage
Collaborator

@timholy, I have another question about accumulatortype.
Who defines particular accumulatortypes (e.g. accumulatortype(::Type{<:Normed{UInt8}}))?

I think CheckedArithmetic is a higher-level package and FixedPointNumbers should not depend on it.

@timholy
Member

timholy commented Nov 30, 2019

No, I was planning to make it a dependency of FixedPointNumbers. There is no way to have a common interface without having a common package.

We could split the @check and @checked out from the accumulatortype and acc, but it's such a small package currently that I don't see a huge advantage in slimming it down further.

@timholy
Member

timholy commented Nov 30, 2019

For example, the sum of the pixel values of a 4096 x 4096 N0f8 image can be represented exactly in N24f8. It is a little unreliable, but when we divide the image into 16 sub-images, for example, it will be enough for general image processing and also suitable for parallel computing. Concerning type stability, the conversion (e.g. to Float64) should be done once in the final stage, and there is no need to convert each element to the final type.

I think having a size-dependence would introduce an undesirable type-instability. Moreover,

julia> function accum(s, A::AbstractArray)
           @inbounds @simd for a in A
               s += a
           end
           return s
       end
accum (generic function with 1 method)

julia> A = rand(UInt8, 100);

julia> @btime accum(0, $A)
  10.704 ns (0 allocations: 0 bytes)
12174

julia> @btime accum(UInt(0), $A)
  9.551 ns (0 allocations: 0 bytes)
0x0000000000002f8e

julia> @btime accum(UInt16(0), $A)
  12.597 ns (0 allocations: 0 bytes)
0x2f8e

so I think we can just use "native" types for all accumulator operations. When adding FixedPoint types it seems to make sense to use a FixedPoint type, hence accumulatortype allows you to pass in an operator: accumulatortype(+, N0f8). Handling accumulatortype(-, N0f8) was the primary motivation for opening the discussion in #143.
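The operator dependence can be sketched as follows (accT is a hypothetical stand-in for accumulatortype; these are not the actual CheckedArithmetic definitions):

```julia
# Hypothetical sketch: the safe accumulator type depends on the operation.
accT(::typeof(+), ::Type{UInt8}) = UInt   # sums of unsigned values stay unsigned
accT(::typeof(-), ::Type{UInt8}) = Int    # differences need a sign bit

s = zero(accT(+, UInt8)) + 0xff + 0xff    # 510, no wraparound
d = zero(accT(-, UInt8)) + 0x01 - 0xff    # -254, representable thanks to the sign bit
```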

@kimikage
Collaborator

Is there any reason not to do the following?

julia> function accum(s, A::AbstractArray)
           @inbounds @simd for a in A
               s += a
           end
           return Float64(s) # for example
       end

My concern is that accumulatortype is not completely safe, as you suggested. If there is a significant advantage in calculation speed, we will be motivated to use accumulatortype. However, incomplete abstractions and generalizations can cause deep bugs. Therefore, I think it is never a bad practice to use a concrete type instead of accumulatortype.
Of course, if users don't want to use it, they don't need to. However, this actually affects the discussion in #143.

@timholy
Member

timholy commented Nov 30, 2019

Now that CheckedArithmetic is registered, I should just push my branch, which will help clarify this discussion.

@timholy
Member

timholy commented Nov 30, 2019

See #146
