
Also support floats in NewValue #47

Merged: paultyng merged 1 commit from the float branch into main on Nov 17, 2020

Conversation

paultyng (Contributor)

No description provided.

@paultyng (Contributor, Author)

I think I added non-pointer types to the godoc, but I realize now that I haven't tested those. I'm not sure they actually work; they probably don't, as they aren't coded as such, so that needs to be addressed. Perhaps in a follow-up PR?

@paddycarver (Contributor) left a comment


Fair enough.

paultyng merged commit b9d1615 into main on Nov 17, 2020
paultyng deleted the float branch on November 17, 2020 at 02:40
paddycarver added a commit that referenced this pull request Mar 24, 2021
Previously, we were validating that the value passed to
`tftypes.NewValue` was a valid value for _any_ `tftypes.Type`, but not
for the specific `tftypes.Type` we passed to `tftypes.NewValue`. This
meant, for example, `tftypes.NewValue(tftypes.Bool, "hello, world")`
would return a `tftypes.Value` that was invalid. The invalid value would
eventually cause an error when we try to marshal it into msgpack to send
back to Terraform, but by that point, it's hard to track down where the
invalid `tftypes.Value` even came from.

This PR switches to much stricter validation in `tftypes.NewValue`,
checking that the value passed can be used for the specific
`tftypes.Type` passed. This goes so far as to validate that the types
of the `tftypes.Value`s used as attribute or element values on aggregate
types are valid.
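
As a rough illustration of the new behavior (a minimal sketch; the import path matches current terraform-plugin-go releases and may differ for the release this PR targeted):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-go/tftypes"
)

func main() {
	// A value whose Go type matches the tftypes.Type is constructed as before.
	ok := tftypes.NewValue(tftypes.Bool, true)
	fmt.Println(ok)

	// With the stricter validation, a mismatched Go type panics at the call
	// site instead of producing an invalid Value that only fails later, when
	// it gets marshaled into msgpack to send back to Terraform.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	_ = tftypes.NewValue(tftypes.Bool, "hello, world")
}
```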

I took the opportunity to refactor a bit, as well, splitting the
conversions in `tftypes.NewValue` out into type-specific helper
functions, and to test the conversions done for `tftypes.Number`, the
only type we currently convert for.

Testing `tftypes.Number` surfaced that the conversion of `float32`
values to `float64`s that can be used in `big.NewFloat` was lossy in
precision. I tried a couple of different ways to make the conversion
lossless, but wasn't able to. I therefore removed support for `float32`
values from `tftypes.NewValue`--provider developers can convert them
themselves using whatever lossy strategy they like. If we can find a
lossless strategy for turning a `float32` into a `*big.Float`, I'm all
for adding support back, but I'd like our conversions to be lossless to
prevent future situations like hashicorp/terraform-plugin-sdk#655.
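
The precision issue can be seen with just the standard library (a sketch; the exact digits printed depend on `big.Float`'s default print precision):

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	var f32 float32 = 0.1

	// float32(0.1) is really 0.100000001490116..., and float64(f32) keeps
	// those exact bits, so the big.Float carries the float32 rounding error
	// rather than the 0.1 the caller meant.
	fmt.Println(big.NewFloat(float64(f32))) // prints roughly 0.1000000015

	// Starting from a float64 literal, the error is small enough to vanish
	// at big.Float's default print precision.
	fmt.Println(big.NewFloat(0.1)) // prints 0.1
}
```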

I took this opportunity to also properly support uint, uint8, uint16,
uint32, uint64, int, int8, int16, int32, int64, and float64 in
`tftypes.NewValue`. #46 and #47 meant to do this, but forgot to include
them, only including their pointer variations instead, by accident.
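
With this fix, the plain (non-pointer) numeric types work directly (a sketch, with the same import-path caveat as above):

```go
package example

import "github.com/hashicorp/terraform-plugin-go/tftypes"

// numberValues shows non-pointer numeric Go types being passed straight
// to tftypes.NewValue for tftypes.Number.
func numberValues() []tftypes.Value {
	return []tftypes.Value{
		tftypes.NewValue(tftypes.Number, 42),            // int
		tftypes.NewValue(tftypes.Number, uint8(7)),      // uint8
		tftypes.NewValue(tftypes.Number, float64(1.25)), // float64
	}
}
```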

I noticed our `cmp.Diff` and `cmp.Equal` calls were still using the
`tftypes.ValueComparer()` option, which is no longer needed because
`tftypes.Value` now has an `Equal` method, which `cmp` will call. So I
removed that option to clean up.
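
In test code that looks something like this (a hypothetical test; `cmp` calls an `Equal` method automatically when one is defined):

```go
package example

import (
	"testing"

	"github.com/google/go-cmp/cmp"
	"github.com/hashicorp/terraform-plugin-go/tftypes"
)

func TestValueDiff(t *testing.T) {
	want := tftypes.NewValue(tftypes.String, "hello")
	got := tftypes.NewValue(tftypes.String, "hello")

	// No tftypes.ValueComparer() option needed: cmp detects the Equal method
	// on tftypes.Value and uses it for the comparison.
	if diff := cmp.Diff(want, got); diff != "" {
		t.Errorf("unexpected difference: %s", diff)
	}
}
```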

The `tftypes.Type` interface also got a `supportedGoTypes` method, which
returns a slice of strings, indicating which Go types a `tftypes.Type`
supports. This was helpful in crafting error messages and kept the
amount of copying and pasting I was doing to a minimum. It also lets us
define the list of Go types we tell people a `tftypes.Type` supports
right next to the code that handles those Go types, making it more
likely that the list will stay in sync with the code.
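
The shape of that is roughly the following (an illustrative sketch with made-up type names, not the package's actual internals):

```go
package example

import (
	"fmt"
	"strings"
)

// boolType stands in for one of the tftypes types; the list of Go types it
// accepts lives right next to the code that would convert them, so error
// messages can't drift out of sync with the conversion logic.
type boolType struct{}

func (boolType) supportedGoTypes() []string {
	return []string{"bool", "*bool"}
}

func (b boolType) errUnsupported(goType string) error {
	return fmt.Errorf("unsupported Go type %s, expected one of: %s",
		goType, strings.Join(b.supportedGoTypes(), ", "))
}
```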

`tftypes.DynamicPseudoType` proved to be a problem when validating
values in `tftypes.NewValue`. In theory, provider developers shouldn't
be using it at all, as `tftypes.DynamicPseudoType` means the type _and_
value are both unknown. But we need it in our msgpack and JSON
marshaling tests, and those live outside the `tftypes` package, so they
can't just skip the validation and create a `tftypes.Value` by hand. My
solution was to allow `tftypes.DynamicPseudoType` to contain any Go type
that `tftypes.String`, `tftypes.Bool`, `tftypes.Number`,
`tftypes.Tuple`, or `tftypes.Object` could. This may need future
revisions and rethinking based on conversations with the Terraform Core
team about how they expect `tftypes.DynamicPseudoType` to be used.
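
Per that rule, a marshaling test outside the package can do something like this (sketch):

```go
package example

import "github.com/hashicorp/terraform-plugin-go/tftypes"

// tftypes.DynamicPseudoType accepts any Go type that tftypes.String,
// tftypes.Bool, tftypes.Number, tftypes.Tuple, or tftypes.Object would,
// which keeps the msgpack and JSON marshaling tests able to build values
// by hand from outside the tftypes package.
var (
	dynamicString = tftypes.NewValue(tftypes.DynamicPseudoType, "hello")
	dynamicBool   = tftypes.NewValue(tftypes.DynamicPseudoType, true)
)
```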

Because the stricter validation will lead to more panics, I added a
`tftypes.ValidateValue` function that returns an error if
`tftypes.NewValue` would panic for that combination of `tftypes.Type`
and Go value. I think panicking is usually the right answer: it
represents a programming error that can never be gracefully handled,
the most rudimentary testing will find it, and returning an error would
create a prohibitive number of branches in provider code and make
working with `tftypes` much, much harder. For example, constructing a map, object,
list, set, or tuple would now require provider developers to construct
the `tftypes.Value` for each element and attribute individually, and
handle the errors from each. By panicking, we allow the inline
definition of these values, reducing them to a one-liner that doesn't
pollute the scope with a bunch of variables. However, the above isn't
true in one case that I can think of: if the provider is constructing a
`tftypes.Value` out of a user- or API-defined type that can change at
runtime, that is no longer a programming error and is now an
environmental error--i.e., it can be prompted by changes in the context
the code is running in. This makes testing to avoid panics much, much
harder. In that case, panicking is an unfortunate situation. I think
that's a rare case, however, and providers that find themselves in that
case can use `tftypes.ValidateValue` to handle the error as they see
fit.
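
For that runtime-driven case, the escape hatch looks roughly like this (a sketch; the helper name is made up):

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-go/tftypes"
)

// buildValue constructs a tftypes.Value for a type that is only known at
// runtime (for example, from user config or an API response). Validating
// first turns what would otherwise be a panic into an error the caller can
// handle.
func buildValue(typ tftypes.Type, raw interface{}) (tftypes.Value, error) {
	if err := tftypes.ValidateValue(typ, raw); err != nil {
		return tftypes.Value{}, fmt.Errorf("cannot build value: %w", err)
	}
	return tftypes.NewValue(typ, raw), nil
}
```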

I removed a number of test cases from the `TestValueAs` test function,
as they were really testing `tftypes.NewValue` instead of
`tftypes.Value.As`, which was confusing. These tests are replicated and
expanded upon in the new tests for `tftypes.NewValue`.

Finally, the stricter validation in `tftypes.NewValue` found an error in
the `TestTransform` tests, where an object didn't have values for all
the attributes listed in its type. The extra attributes listed in the
type were removed.
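
The rule that surfaced there: an object value has to supply a `tftypes.Value` for every attribute its type declares (sketch):

```go
package example

import "github.com/hashicorp/terraform-plugin-go/tftypes"

var objType = tftypes.Object{
	AttributeTypes: map[string]tftypes.Type{
		"name":    tftypes.String,
		"enabled": tftypes.Bool,
	},
}

// Every attribute declared in objType gets a value; an attribute listed in
// the type with no matching value (or vice versa) now fails validation.
var objValue = tftypes.NewValue(objType, map[string]tftypes.Value{
	"name":    tftypes.NewValue(tftypes.String, "example"),
	"enabled": tftypes.NewValue(tftypes.Bool, true),
})
```
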
paddycarver added a commit that referenced this pull request Mar 31, 2021
paddycarver added a commit that referenced this pull request Mar 31, 2021

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions (bot) locked as resolved and limited conversation to collaborators on May 28, 2024