Change printing of Float32/Float16 #7298
Comments
Just a note: the `f0` suffix is not meaningless; `f` is the Float32 analogue of the `e` exponent marker:

```julia
julia> 10f0^10
1.0f10
```
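For concreteness, a quick illustration of current behavior (my own example, not from the thread): the digits after `f` are a base-10 exponent, just like after `e`, and the result is a `Float32`.

```julia
julia> 1.5f3            # 1.5 × 10^3, as a Float32
1500.0f0

julia> typeof(1.5f3)
Float32

julia> 1.5f3 == 1.5e3   # same value as the Float64 form
true
```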
Ah, thanks for the clarification, guys. Not sure how I got that mixed up. I guess my proposal wouldn't really work then… hmm.
Ok, I figured out where I went wrong. C/C++ puts `f` at the end of the literal as a type suffix rather than using it as an exponent marker. What about:

```julia
1e-45 || 1e-45f64  # => Float64
1e-45f32           # => Float32
1e-45f16           # => Float16
```

Similar idea, but the exponent would always be written with `e`, and the trailing `f64`/`f32`/`f16` would select the type.
That conflicts with implicit multiplication by numbers. It's also hard to generalize, e.g. to fixed point.
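To make the conflict concrete (my own illustration of current behavior, not from the thread): a numeric literal immediately followed by an identifier already means multiplication, and digits after `f` are already parsed as an exponent.

```julia
julia> x = 3
3

julia> 2x        # literal juxtaposition: 2 * x
6

julia> 2f32      # already valid today: 2 × 10^32 as a Float32, not "2 of type Float32"
2.0f32
```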
Oh drat. Seems there's not an easy solution here.
Ok, one more. It's actually been suggested before through a combination of #25 and #964:

```julia
1.0           # => default Float64
1.0::Float32  # => obvious
1.0::Float16  # => again

# exponentials
1e-45
1f-45 or 1e-45::Float32  # I'd be in favor of getting rid of the 'f' exponential syntax since it doesn't generalize well
1e-45::Float16
```

And as #25 suggests, I think we could go further and do the same with Int8, Int16, and Int32. Any time you're working with those you're constantly doing conversions.
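For context on the small-integer pain point, a quick illustration of current behavior (my own example, not from the thread): without literal syntax every constant has to be wrapped, and a bare literal promotes the result back to `Int64`.

```julia
julia> x = Int8(3) + Int8(1)   # both operands need explicit wrapping to stay Int8
4

julia> typeof(x)
Int8

julia> typeof(Int8(3) + 1)     # a plain literal promotes the whole expression to Int64
Int64
```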
That can't work either: …
It might work if there were more concise syntax for `convert`.
I think there's a case for having:

```julia
1.0::BigFloat
1.0::Float64
1.0::Float32
1.0::Float16
```

Or maybe I'm not understanding the parsing problem. Is it that we don't want to litter code with all these calls to convert?
The parsing problem is that if I write `0.1::BigFloat`, the parser has already turned `0.1` into a `Float64` (and rounded it) before the annotation is ever seen, so the extra precision is lost.
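To illustrate the kind of precision loss at stake (my own example of current behavior): converting a value that has already been rounded to `Float64` is not the same as parsing the decimal text at `BigFloat` precision.

```julia
julia> BigFloat(0.1)   # 0.1 was rounded to the nearest Float64 first; BigFloat then keeps that rounded value exactly
0.1000000000000000055511151231257827021181583404541015625

julia> BigFloat(0.1) == big"0.1"   # big"0.1" parses the string directly at BigFloat precision
false
```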
Part of the subtlety and confusion is how `::` behaves in each of these positions:

```julia
f(a,b)::Int64  # => return type declaration; I know I'll get an Int64 back; :: has forceful behavior

1.0::Float32   # => float literal syntax that makes sure I get a Float32; :: has forceful behavior

function foo()
    x::Int16
end            # => declaration syntax works as it currently does, leading to calls to convert; :: has forceful behavior
```

With BigFloat parsing, is it just a "hard to get right" problem, or a "not really possible" problem?
As far as I remember (don't quote me!), the parser itself turns most numeric literals into numbers directly (using femtolisp's own numeric types) rather than postponing that step to Julia, which makes a potential BigFloat literal problematic.
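A quick way to see this from the REPL (my own example of current behavior): a floating-point literal is already a concrete `Float64` value by the time it appears in the AST.

```julia
julia> ex = :(y = 0.1)      # quote an expression containing a float literal
:(y = 0.1)

julia> ex.args[2]           # the literal is stored as an actual Float64 value, not as source text
0.1

julia> typeof(ex.args[2])
Float64
```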
The BigFloat literal is problematic because not every decimal floating-point value is machine representable. You need a default float type in the parser; otherwise you would get inconsistent rounding behavior if you still want floating-point numbers to appear as literal values in the AST.

"Forceful behavior" is not a meaningful concept. Right now parsing can be handled; we are able to parse …

I think a lot of the confusion about …

We could add …
Is there any possibility of a Float16 literal notation being introduced? It seems inconsistent to have literals for the other types but not Float16. Also, I would argue that Float16 has become more important in the past 5 years, with increasing interest from machine learners and (in my case) atmospheric modellers. What about …
It would be a breaking change, so not until Julia 2.0 at the earliest.
One stopgap in the meantime could be:

```julia
struct Float16Scalar
    exp::Int
end

# Multiplying any number by a Float16Scalar scales it by 10^exp and converts to Float16.
Base.:(*)(x, u::Float16Scalar) = convert(Float16, x * 10.0^(u.exp))

# g"..." parses its contents as an Int exponent.
macro g_str(sn)
    Float16Scalar(Meta.parse(sn)::Int)
end
```

```julia
julia> 1g"0"
Float16(1.0)

julia> 2g"10"
Inf16
```

Inspired by Unitful.jl.
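A simpler variant in the same spirit (a sketch of my own, not from the thread; the `f16` macro name is an assumption) would skip the intermediate scalar type and build the `Float16` directly in a string macro:

```julia
# A minimal sketch (assumed): a non-standard string literal that parses its
# contents at macro-expansion time and splices in a Float16 value.
macro f16_str(s)
    return Float16(Meta.parse(s))
end

# Usage:
#   julia> f16"1.5"
#   Float16(1.5)
#
#   julia> f16"1e5"   # beyond Float16's finite range (max ≈ 6.55e4)
#   Inf16
```

Because the conversion happens during macro expansion, the `Float16` value is baked into the AST, so there is no run-time parsing cost.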
Or of course a …
Having mucked around in the float printing code quite a bit lately, I found it unfortunate that Float16 is more lame than Float32. It's also unfortunate that there's no literal syntax for Float16.

I'd like to propose the following:

Currently, Float32 is printed as `1.0f0`, which, while visually indicative of its type, doesn't mean anything more (the `f0` at the end doesn't indicate anything). Using `1.0f32` instead gives a hint to the bit-size of the float you're dealing with, along with allowing a natural extension to other float types: `1.0f16`, `1.0f80`, `1.0f128`, etc. The other advantage is it could allow literal syntax as well for different floats, and we all know how great it is to have literal syntax for types. I'm not sure if changing `1.0f0` to `1.0f32` would be breaking or not (does anyone rely on how Float32s are printed?), but we could always parse `1.0f0` as Float32 for another release if need be.

Is this really that important? Maybe not. Would it provide a little more polish and consistency? I think so.
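For reference, current behavior at the REPL (my own example): the digits after `f` are a base-10 exponent, so `1.0f32` already denotes a different value today, which is the wrinkle the first reply above points out.

```julia
julia> Float32(1)    # current printing: mantissa, `f`, then a base-10 exponent
1.0f0

julia> 1.0f32        # already valid syntax: 1.0 × 10^32 as a Float32
1.0f32
```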