fix: Check the max data size against the final message payload #212

Merged — 3 commits from fix/payload-size-check merged into master on Aug 10, 2020

Conversation

AzureMarker (Contributor) opened this pull request.

jrconlin (Member) left a comment:


crap, saw that this was pending.

Looks like you've already addressed part of the comment.

@@ -33,6 +24,17 @@ pub fn build_message_data(
Ok(message_data)
}

/// Check the data against the max data size and return an error if there is too
/// much data.
jrconlin (Member):
Kind of depends on how you want to do this, but for the Python side, I determined that the max payload size depends on the encoding type. aesgcm requires the header keys, which need to be included with the overall payload and impact the size of the data that can be sent over. aes128gcm includes the decryption keys in the body payload, so the body can be larger. The other problem is that the body is base64 encoded, which inflates things by about 1.3x and may confuse update publishers that don't know that.

Of course, key sizes can vary, as can the result of base64 encoding, so things are a bit fuzzy as far as how big things are or could get.
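
To make the inflation concrete, here is a rough sketch (editorial, not code from this PR): standard padded base64 emits 4 output characters for every 3 input bytes, rounded up, which is where the roughly 1.3x figure above comes from.

```rust
// Rough sketch, not code from this PR: how much base64 inflates a payload.
// Standard (padded) base64 emits 4 output characters per 3 input bytes.
fn base64_len(raw_len: usize) -> usize {
    4 * ((raw_len + 2) / 3)
}

fn main() {
    // A payload that looks safely under a 4096-byte bridge limit before
    // encoding can end up over it once base64 encoded.
    for raw in [1024_usize, 3 * 1024, 4 * 1024] {
        println!("{} raw bytes -> {} base64 bytes", raw, base64_len(raw));
    }
}
```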

AzureMarker (Contributor, Author):

The payload size error returns the number of excess bytes, so publishers will have an idea of how much data to cut out. I feel like doing the fuzzy calculation of guessing the raw data size and accounting for base64 adds a lot of unnecessary complexity/uncertainty and is too hand-holding. We should view this from the perspective of the messaging services (like FCM) and just make sure the data we are passing along will fit through the bridge services. If it doesn't fit, we relay the excess byte count to the publisher. We can talk about this in the docs so publishers have more insight into why they might be over the payload limit, but guessing the reason in code seems error-prone.
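
A minimal sketch of the check being described, using hypothetical names rather than the actual autopush-rs API: compare the final, already-encoded payload against the bridge's limit and surface the overage in the error.

```rust
// Hypothetical sketch of the approach described above; the type and function
// names are illustrative, not the actual autopush-rs code.
#[derive(Debug)]
enum RouterError {
    TooMuchData { excess: usize },
}

/// Check the final (already encoded) message payload against the bridge's
/// size limit and report how many bytes over the limit it is.
fn check_payload_size(payload: &[u8], max_data: usize) -> Result<(), RouterError> {
    if payload.len() > max_data {
        // Relay the excess byte count so the publisher knows how much to trim.
        return Err(RouterError::TooMuchData {
            excess: payload.len() - max_data,
        });
    }
    Ok(())
}

fn main() {
    let payload = vec![0u8; 4200];
    match check_payload_size(&payload, 4096) {
        Ok(()) => println!("payload fits"),
        Err(RouterError::TooMuchData { excess }) => {
            println!("payload is {} bytes over the limit", excess);
        }
    }
}
```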

jrconlin (Member):

I suppose putting a note in the docs might be just as useful. One thing I've discovered is that the more data you can put into error messages, the less customer support and invalid bug triage you have to do in the future. Folks are less inclined to read docs than error messages, but at least we can point them in the right direction.

AzureMarker merged commit 4e07ff0 into master on Aug 10, 2020.
AzureMarker deleted the fix/payload-size-check branch on August 10, 2020 at 17:54.

Successfully merging this pull request may close these issues.

FCM/ADM router notification size limit check is inaccurate
2 participants