Error in upgrading orderer from 2.4.2 to 2.5.0 (wal: max entry size limit exceeded) #4290
Comments
Do you have any transactions that are extremely big?
Yes, I have a few transactions with sizes like 31180 KB, 9404 KB, 3453 KB, etc.
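To see why these sizes matter, here is a minimal sketch comparing the reported transaction sizes against a fixed 10 MB per-record cap. The 10 MB value is an assumption about the hard limit in etcd's WAL decoder at the version Fabric was using; only the largest transaction would trip it.

```go
package main

import "fmt"

func main() {
	// Transaction sizes reported above, in KB.
	sizesKB := []int64{31180, 9404, 3453}

	// Assumed fixed per-record cap in etcd's WAL decoder
	// before the fix discussed later in this thread.
	const capBytes int64 = 10 * 1024 * 1024

	for _, kb := range sizesKB {
		fmt.Printf("%d KB exceeds cap: %v\n", kb, kb*1024 > capBytes)
	}
}
```

Only the ~31 MB transaction exceeds the assumed cap; the 9404 KB and 3453 KB ones fit under 10 MB.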
Any suggestion or solution to fix this? Are these transactions a showstopper for upgrading from 2.4.2 to 2.5.0?
You only have 1 orderer, so this is not a production environment, I hope?
Yes, this is not a production environment. But I have the following concerns, and I would appreciate any suggestion or help with them.
Based on the git blame when I looked at the code, I think it's related to some update, but I'm not sure about the version. But what are you trying to say? Do you want to put in a release note message or something?
Adding more details about the error when trying to upgrade with a single orderer:
Tried the steps below to solve the issue, but no luck:
The other orderer nodes show the details below:
@yacovm I'm just trying to assess for now... is this a legitimate new problem that can impact production environments? If so, I'm assuming we should investigate further and either fix it, put a mitigation in place, or mention it in the release notes and docs.
A similar problem was reported at etcd-io/etcd#14025. etcd-io/etcd#14114 was used to fix it in etcd raft v3.5.5 (Fabric is still on v3.5.1). Here is the commit with the fix: So moving up to the latest etcd v3.5.9 may help to resolve it. Although it looks like there may still be an upper limit due to the write-ahead log file size, which appears to be 64 MB.
https://pkg.go.dev/go.etcd.io/etcd/server/v3/wal#pkg-overview https://github.com/etcd-io/etcd/blob/release-3.5/server/wal/wal.go#L55 Am I reading it correctly that we may still have an issue with very large transactions? WDYT?
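The remaining concern can be sketched as follows: even with the decoder fix, a single record larger than one WAL segment file could not be written and read back, so transactions near or above the segment size may still be a problem. The 64 MB constant mirrors the segment size in the wal.go link above; the helper name is an illustration, not etcd API.

```go
package main

import "fmt"

// segmentSizeBytes mirrors the 64 MB WAL segment size from
// etcd's server/wal/wal.go (linked above).
const segmentSizeBytes int64 = 64 * 1024 * 1024

// fitsInSegment sketches the residual upper bound: a record must
// fit inside one segment file to be persisted and replayed.
func fitsInSegment(recordBytes int64) bool {
	return recordBytes < segmentSizeBytes
}

func main() {
	fmt.Println(fitsInSegment(31180 * 1024))     // ~31 MB transaction from this issue
	fmt.Println(fitsInSegment(70 * 1024 * 1024)) // a hypothetical 70 MB transaction
}
```

The ~31 MB transaction from this issue fits; a hypothetical 70 MB one would not.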
Possible. I think we should consider restricting overly large Raft transactions. Well, they clearly say:
So a WAL entry that is not corrupted is always smaller than the remaining file size, by definition, no? Since the remaining file size includes the entry's "header" and the data itself. But I would still like someone to write a test reproducing this problem on our current latest code base, so we can be sure the problem only reproduces at startup, and then see whether the upgrade to the latest Raft version fixes it or not. @semil do you want to take a look at this?
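The decoder change under discussion can be modeled with a minimal sketch. Assumptions: the pre-fix decoder rejected any record over a fixed cap (10 MB here), while the fix in etcd-io/etcd#14114 bounds each record by the bytes remaining in the segment file, since an uncorrupted entry can never be larger than that. Function names are illustrative, not etcd's.

```go
package main

import "fmt"

const fixedCap int64 = 10 * 1024 * 1024 // assumed old hard limit

// preFixValid models the old decoder: any record over the fixed cap
// fails, even when the file plainly contains that many bytes.
func preFixValid(recordLen int64) bool {
	return recordLen <= fixedCap
}

// postFixValid models the fixed decoder: a record is plausible as long
// as it fits in what is left of the file being read.
func postFixValid(recordLen, bytesLeftInFile int64) bool {
	return recordLen <= bytesLeftInFile
}

func main() {
	rec := int64(31180) * 1024             // ~31 MB record from this issue
	left := int64(64*1024*1024) - 8192     // e.g. near the start of a 64 MB segment
	fmt.Println(preFixValid(rec))          // rejected: "max entry size limit exceeded"
	fmt.Println(postFixValid(rec, left))   // accepted after the fix
}
```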
Thanks @semil!
Thanks @semil, this fix will be included in Fabric v2.5.4.
Thank you All! :) |
Description
When upgrading the orderer from 2.4.2 to 2.5.0, it gives the error below. Starting the orderer container with image tag 2.4.2 works fine, but not with 2.5.0.
[orderer.commmon.multichannel] initAppChannels -> Failed to create chain support for channel 'testchannel', error: error creating consenter for channel: testchannel: failed to restore persisted raft data: failed to create or read WAL: failed to read WAL and cannot repair: wal: max entry size limit exceeded
Steps to reproduce
Upgrade the Hyperledger Fabric components from 2.4.2 to 2.5.0. The setup has 1 orderer and 2 peers. Binaries and other config files were updated per the 2.5.0 release.
Followed the step below to reproduce.
[orderer.commmon.multichannel] initAppChannels -> Failed to create chain support for channel 'testchannel', error: error creating consenter for channel: testchannel: failed to restore persisted raft data: failed to create or read WAL: failed to read WAL and cannot repair: wal: max entry size limit exceeded