smsc/xpmem: alignment and stack memory space #10127
Conversation
For rcache registration, the upper bound is the last byte of the range. It probably doesn't matter much here, since no rcache is in use.
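As an illustration of the convention mentioned above, a minimal sketch (hypothetical helper name, not the actual Open MPI code) of an inclusive upper bound as used for rcache-style registration:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Sketch only: an rcache-style registration treats the upper bound as
 * the address of the LAST byte in the range, i.e. base + size - 1,
 * not the one-past-the-end address base + size. */
static uintptr_t inclusive_upper_bound(uintptr_t base, size_t size)
{
    return base + size - 1;
}
```

The off-by-one matters because `base + size` points one byte past the registered region, which can make a range appear to overlap a neighboring registration.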
Force-pushed from ae71d72 to 70b7ee2
I changed the fallback to align to the actual page size, which seems to work as well and reduces the number of remaps we may need until all the relevant stack positions are mapped.
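A minimal sketch of the page-size alignment described above (function and parameter names are hypothetical, not Open MPI's; the real code queries the page size from the system):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Sketch only: expand a [base, base + size) range outward to the actual
 * page size, rather than to a coarser attachment granularity. `page`
 * must be a power of two. */
static void page_align_range(uintptr_t *base, size_t *size, size_t page)
{
    uintptr_t aligned_base = *base & ~(uintptr_t)(page - 1);       /* round down */
    uintptr_t aligned_end  = (*base + *size + page - 1)
                             & ~(uintptr_t)(page - 1);             /* round up */
    *base = aligned_base;
    *size = aligned_end - aligned_base;
}
```

Aligning to the actual page size keeps the mapped range as tight as possible, so it is less likely to reach past the end of the peer's stack.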
@hjelmn Can you please take a look at this PR?
Force-pushed from 70b7ee2 to 92f2e0e
Rebased to current
bot:aws:retest
@devreal I think you'll want to rebase and force push again.
The upper bound of the mapped region must include the last byte of the range and not reach past the aligned range.

Signed-off-by: Joseph Schuchart <[email protected]>
…pped

The aligned range computed in mca_smsc_xpmem_map_peer_region may reach past the end of the stack, which may cause the mapping to fail. Retrying with an actual page as upper bound has a better chance to succeed.

Signed-off-by: Joseph Schuchart <[email protected]>
Force-pushed from 92f2e0e to 4201c94
bot:aws:retest
This PR addresses two issues identified in #10121:
Signed-off-by: Joseph Schuchart [email protected]