Serialization and deserialization of objects passed between Pyodide and the main thread cause very large donations to fail with memory overflows.
Serialization is not necessary. Make it optional.
This means the pd.DataFrame.to_json() step is skipped, and json.dumps can be avoided as well. Both steps consume too much memory.
Instead, the data needs to be extracted into the dict equivalent of pd.DataFrame.to_json() and sent back as a dict object like so:
{ "variable_x": {"0": 1}, "variable_y": {"0": 2} }
This allows tables to load massive amounts of data.