
geth process is stopped because debug_traceBlockByNumber #21879

Closed
forencen opened this issue Nov 20, 2020 · 11 comments · Fixed by #22857
@forencen

My RPC request (tracing block "0x242c60"):
[image]

My geth logs:
[image]

When I send the request, the geth process stops.
How should I solve this problem?

@forencen changed the title from "geth process is stopped because debug_traceBlockByNumber stop" to "geth process is stopped because debug_traceBlockByNumber" on Nov 20, 2020
@forencen
Author

geth version:
[image]

@ligi
Member

ligi commented Nov 20, 2020

Please provide your geth version and details about the environment. I just tried this call against an archive node (though it is currently not fully synced) and it does not crash for me.
And just to make sure: this is the command that crashes your node?

```shell
curl -X POST --header "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["0x242c60",{"tracer":"callTracer"}],"id":1}' https://yournode
```
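For anyone scripting the reproduction, the same call can be issued from Python's standard library. This is only a sketch: the endpoint `http://127.0.0.1:8546` is a placeholder for your node's RPC address, and the timeout makes a crashed or hung node surface as an exception rather than an indefinite stall.

```python
import json
import urllib.request

def build_trace_payload(number_hex):
    # JSON-RPC payload equivalent to the curl command above.
    return {
        "jsonrpc": "2.0",
        "method": "debug_traceBlockByNumber",
        "params": [number_hex, {"tracer": "callTracer"}],
        "id": 1,
    }

def trace_block(number_hex, endpoint="http://127.0.0.1:8546", timeout=60):
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(build_trace_payload(number_hex)).encode(),
        headers={"Content-Type": "application/json"},
    )
    # The timeout turns a dead node into an exception instead of a hang.
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())
```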

@forencen
Author

```shell
geth --rpc --rpcapi web3,eth,net,db,txpool,admin,personal,debug --rpcaddr 0.0.0.0 --rpcport 8546 --datadir /data/eth/data --syncmode full --gcmode=archive --cache 2048 --maxpeers 5000
```

geth version

```shell
root@full-node-eth-new:~# geth version
Geth
Version: 1.9.24-stable
Git Commit: cc05b050df5f88e80bb26aaf6d2f339c49c2d702
Architecture: amd64
Protocol Versions: [65 64 63]
Go Version: go1.15.5
Operating System: linux
GOPATH=
GOROOT=go
```

server configuration

```shell
Linux full-node-eth-new 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
CPU: 32
Mem: 60G
HDD: 6T
```

I have another SSD node with the same problem.

@forencen
Author

debug_traceTransaction

Now, I have the same problem with debug_traceTransaction.

```json
{
    "jsonrpc": "2.0",
    "method": "debug_traceTransaction",
    "params": [
        "0x0c10fafe0cdbfff32abfe53d57ec861d09986cc1050c850481f79b1a862bb10a",
        {"tracer": "callTracer"}
    ],
    "id": 1
}
```

This request will crash my node. I suspect it's due to the number of internal transactions.

But this request works normally:

```json
{
    "jsonrpc": "2.0",
    "method": "debug_traceTransaction",
    "params": [
        "0xbcd7f45d90c46c86fb8c71471b57f9c2fee3eb4672efffd8487d9634079a3341",
        {"tracer": "callTracer"}
    ],
    "id": 1
}
```

How can I avoid this problem?
Waiting for help, thanks~
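Until the underlying crash is fixed, a batch job can at least isolate each trace call so that one crash-inducing transaction is reported rather than silently killing the whole run. A minimal sketch; `trace_fn` is a hypothetical callable that performs the `debug_traceTransaction` JSON-RPC request, not a geth API:

```python
def trace_many(tx_hashes, trace_fn):
    # Run traces one by one and collect failures instead of letting a
    # single bad transaction abort the batch.
    results, failures = {}, []
    for tx in tx_hashes:
        try:
            results[tx] = trace_fn(tx)
        except Exception as exc:  # e.g. connection reset when the node dies
            failures.append((tx, repr(exc)))
    return results, failures
```

Note that once the node process actually dies, every subsequent call in the loop will also fail until the node is restarted; this sketch only makes the failure visible, it does not prevent it.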

@ligi
Member

ligi commented Nov 26, 2020

Can you provide the full log of the crash?
Also wondering why you run with `--maxpeers 5000` - this might lead to resource problems.

@holiman
Contributor

holiman commented Nov 26, 2020

> this might lead to resource problems

To clarify, your node will be bombarded with transaction and block announcements, as well as requests for various data.

@forencen
Author

```shell
nohup geth --rpc --rpcapi web3,eth,net,db,txpool,admin,personal,debug --rpcaddr 0.0.0.0 --rpcport 8546 --datadir /data/eth/data --syncmode full --gcmode=archive --cache 2048 --maxpeers 150 &
```

Geth version: 1.9.24-stable
OS: Linux
CPU: 32
Mem: 60G
HDD: 6T

rpc request

```json
{
    "jsonrpc": "2.0",
    "method": "debug_traceTransaction",
    "params": [
        "0x0c10fafe0cdbfff32abfe53d57ec861d09986cc1050c850481f79b1a862bb10a",
        {"tracer": "callTracer"}
    ],
    "id": 1
}
```

error logs:

error_logs.txt

@holiman
Contributor

holiman commented Nov 27, 2020

Thanks, yes, I can repro this!

holiman added a commit to holiman/go-ethereum that referenced this issue Nov 27, 2020
@holiman
Contributor

holiman commented Nov 27, 2020

So apparently this is already known; it's due to a rather conservative choice of stack limit in the upstream library we use for tracing: #16426

@holiman
Contributor

holiman commented Apr 22, 2021

Triage:
We have decided to make a workaround in the tracer(s) and simply abort the tracing if the depth becomes too large (which would result in too deeply nested JS). It will then return an error for this case instead of crashing.
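To illustrate the idea behind this triage decision (not geth's actual fix, which lives in its Go tracers; the limit and the call shape below are invented for demonstration), the workaround amounts to threading a depth counter through the call-tree walk and aborting with an error once it exceeds a limit:

```python
MAX_DEPTH = 64  # hypothetical limit; geth's real value is set in Go code

def walk_calls(call, depth=0):
    # Walk a nested call tree, aborting with an error instead of
    # recursing until the interpreter's stack blows up.
    if depth > MAX_DEPTH:
        raise ValueError("trace aborted: call tree nested too deeply")
    return {
        "type": call.get("type", "CALL"),
        "calls": [walk_calls(c, depth + 1) for c in call.get("calls", [])],
    }
```

The point is that an over-deep trace now produces a catchable error response for that one request, while the node process itself keeps running.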

@afmsavage

> Triage:
> We have decided to make a workaround in the tracer(s) and simply abort the tracing if the depth becomes too large (which would result in too deeply nested JS). It will then return an error for this case instead of crashing.

Do you know which versions of Geth this is occurring on? Also, do you know what response it will return if it returns the error instead of crashing? Seeing this happen on a Rinkeby archival node and just trying to get some more information.
