
concatkdf fails on big endian architectures #77

Closed
jdennis opened this issue Aug 2, 2018 · 0 comments

Comments

jdennis (Contributor) commented Aug 2, 2018

All of the concatkdf unit tests fail on big endian architectures. The failure occurs because one of the operations must emit a 32-bit integer length specifier in big endian format. The code correctly used htonl() to produce the 32-bit big endian integer, but then manipulated the result further with bit shifting and octet indexing, which was unnecessary; that extra manipulation produced the correct octet order only on little endian architectures.

I will be submitting a pull request momentarily that fixes the issue; the commit message tries to explain why the mistake occurred.

jdennis pushed a commit to jdennis/cjose that referenced this issue Aug 3, 2018
Several of the elements used to compute the digest in ECDH-ES key
agreement computation are represented in binary form as a 32-bit
integer length followed by that number of octets. The 32-bit length
integer is represented in big endian format (the eight most significant
bits are in the first octet).

The conversion to a 4 byte big endian integer was being computed
in a manner that only worked on little endian architectures. The
function htonl() returns a 32-bit integer whose octet sequence given
the address of the integer is big endian. There is no need for any
further manipulation.

The existing code used bit shifting on a 32-bit value. In C, bit
shifting is endian agnostic for multi-octet values: a right shift
moves most significant bits toward least significant bits. The result
of a bit shift of a multi-octet value on either big or little endian
architectures will always be the same, provided you "view" it as the same
data type (e.g. a 32-bit integer). But indexing the octets of that
multi-octet value will differ depending on endianness, hence the
assembled octets differed depending on endianness.

Issue: cisco#77
Signed-off-by: John Dennis <[email protected]>
linuxwolf pushed a commit that referenced this issue Aug 3, 2018