We have two non-allocating hash functions: `xor` and `concat_and_hash`. `xor` just XORs two hashes together. `concat_and_hash` concatenates the two 32-byte hashes into one 64-byte value and then hashes that to get a 32-byte hash back out.
`xor` is faster, but if you XOR A, B, and C together you get the same result as C, B, A or A, C, B etc., i.e. the result is the same irrespective of the order of the things we're XORing.
If you XOR two identical hashes you end up with all zeroes, which can obscure a lot of mismatches (i.e. `xor(A, A) == xor(B, B) == xor(C, C)`, so all of them would appear identical when they are in fact things we probably want to hash differently).
`concat_and_hash` is slower (it actually hashes), but `concat_and_hash(A, B)` gives different output to `concat_and_hash(B, A)`, i.e. the order is preserved.
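To make the trade-off concrete, here's a minimal sketch of the two functions and the properties described above. The `hash` helper here is a stand-in built from the std hasher purely to keep the example dependency-free; the real code would use a proper cryptographic hash.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

type H256 = [u8; 32];

// Stand-in 32-byte hash (NOT the real hash function; just for illustration).
fn hash(data: &[u8]) -> H256 {
    let mut out = [0u8; 32];
    for (i, chunk) in out.chunks_mut(8).enumerate() {
        let mut h = DefaultHasher::new();
        (i as u64, data).hash(&mut h);
        chunk.copy_from_slice(&h.finish().to_le_bytes());
    }
    out
}

// Fast and allocation-free, but order-insensitive, and xor(a, a) == 0.
fn xor(a: H256, b: H256) -> H256 {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

// Slower (it actually hashes), still allocation-free thanks to the
// fixed 64-byte buffer, and order-sensitive.
fn concat_and_hash(a: H256, b: H256) -> H256 {
    let mut buf = [0u8; 64];
    buf[..32].copy_from_slice(&a);
    buf[32..].copy_from_slice(&b);
    hash(&buf)
}

fn main() {
    let a = hash(b"A");
    let b = hash(b"B");

    // xor is order-insensitive, and identical inputs cancel to zero:
    assert_eq!(xor(a, b), xor(b, a));
    assert_eq!(xor(a, a), [0u8; 32]);
    assert_eq!(xor(a, a), xor(b, b)); // indistinguishable!

    // concat_and_hash preserves order:
    assert_ne!(concat_and_hash(a, b), concat_and_hash(b, a));
}
```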
We use `xor` fairly liberally. We also allocate in a few places (i.e. allocate a `Vec`, append some things to it, sort it, and hash that).
We should:
- Ensure that we use `concat_and_hash` everywhere that order etc. matters, and that we aren't over-using `xor`.
- See whether we can get rid of the allocations; can we just XOR e.g. pallet hashes together rather than doing any sorting based on pallet names? Things like hashing the pallet name into the per-pallet hashes will help ensure they are unique.
- Think about validation in terms of `DecodeAsType` and `EncodeAsType`; e.g. if field names in some struct change places, that's mostly OK now (this is an optimisation though; we can be stricter too if we want).
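A hypothetical sketch of the allocation-free idea: if each pallet's name is mixed into its own hash first, the per-pallet hashes are already unique, so we can XOR them together in whatever order they arrive, with no `Vec`, no sorting by name. The `pallets_hash` helper and the stand-in `hash` below are illustrative names, not the actual API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

type H256 = [u8; 32];

// Stand-in 32-byte hash (NOT the real hash function; just for illustration).
fn hash(data: &[u8]) -> H256 {
    let mut out = [0u8; 32];
    for (i, chunk) in out.chunks_mut(8).enumerate() {
        let mut h = DefaultHasher::new();
        (i as u64, data).hash(&mut h);
        chunk.copy_from_slice(&h.finish().to_le_bytes());
    }
    out
}

fn xor(a: H256, b: H256) -> H256 {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

fn concat_and_hash(a: H256, b: H256) -> H256 {
    let mut buf = [0u8; 64];
    buf[..32].copy_from_slice(&a);
    buf[32..].copy_from_slice(&b);
    hash(&buf)
}

// Mix each pallet's name into its hash (order matters here, so use
// concat_and_hash), then XOR the results together (order doesn't
// matter across pallets, so no sorting or allocation is needed).
fn pallets_hash<'a>(pallets: impl Iterator<Item = (&'a str, H256)>) -> H256 {
    pallets.fold([0u8; 32], |acc, (name, pallet_hash)| {
        xor(acc, concat_and_hash(hash(name.as_bytes()), pallet_hash))
    })
}

fn main() {
    let system = ("System", hash(b"system contents"));
    let balances = ("Balances", hash(b"balances contents"));

    // The same set of pallets in any order gives the same result:
    let h1 = pallets_hash([system, balances].into_iter());
    let h2 = pallets_hash([balances, system].into_iter());
    assert_eq!(h1, h2);

    // Mixing the name in keeps pallets with identical contents distinct:
    let a = pallets_hash([("A", hash(b"same"))].into_iter());
    let b = pallets_hash([("B", hash(b"same"))].into_iter());
    assert_ne!(a, b);
}
```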
Ultimately we want validation to be as fast as possible (so that people have as few reasons as possible to opt out), but also to actually protect as well as possible against things that `DecodeAsType` and `EncodeAsType` would consider different.