[DocDB][LST] Packed columns: FATAL: Failed to write a batch with 0 operations into RocksDB: Corruption (yb/docdb/docdb_compaction_context.cc:265): Unable to pack old value for 50 #13037
def- added the area/docdb (YugabyteDB core features), priority/high (High Priority), and status/awaiting-triage (Issue awaiting triage) labels on Jun 24, 2022.
def- changed the title on Jun 24, 2022
from: [DocDB][LST] Failed to write a batch with 0 operations into RocksDB: Corruption (yb/docdb/docdb_compaction_context.cc:265): Unable to pack old value for 50
to: [DocDB][LST] Packed columns? Failed to write a batch with 0 operations into RocksDB: Corruption (yb/docdb/docdb_compaction_context.cc:265): Unable to pack old value for 50
def- changed the title on Jun 24, 2022
from: [DocDB][LST] Packed columns? Failed to write a batch with 0 operations into RocksDB: Corruption (yb/docdb/docdb_compaction_context.cc:265): Unable to pack old value for 50
to: [DocDB][LST] Packed columns: Failed to write a batch with 0 operations into RocksDB: Corruption (yb/docdb/docdb_compaction_context.cc:265): Unable to pack old value for 50
spolitov added a commit that referenced this issue on Jun 28, 2022:
Summary: It could happen that a packed row is already near the size limit, and then the user adds new columns to the table. Since each column uses 4 bytes in a packed row, such a row could grow over the limit after repacking. Previously we assumed that repacking a row could not make it larger than the limit, and there is a check for that. But clearly that is not the case in the scenario above. Changed the code to force-repack a row even if the repacked row overflows the specified limit, so we can have rows larger than the limit without crashing. This should not be an issue, since we expect it to happen quite rarely in an actual DB, and large packed rows are merely less efficient but still work fine.
Test Plan: PgPackedRowTest.PackOverflow
Reviewers: mbautin
Reviewed By: mbautin
Subscribers: bogdan, ybase
Differential Revision: https://phabricator.dev.yugabyte.com/D17904
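For illustration, here is a minimal sketch of the arithmetic the summary describes: a row already near the packed-row size limit gains roughly 4 bytes per column added by an ALTER TABLE, so repacking can push it past the limit that the old code treated as an invariant. The limit value, function names, and structure below are assumptions made for the sketch, not the actual code in docdb_compaction_context.cc.

```cpp
// Hypothetical sketch of the packed-row overflow scenario; not YugabyteDB code.
#include <cstddef>
#include <iostream>

constexpr std::size_t kPackedRowSizeLimit = 1024;  // assumed limit, for illustration only
constexpr std::size_t kBytesPerColumn = 4;         // per-column cost cited in the summary

// Size of the row after repacking once new columns were added via ALTER TABLE.
std::size_t RepackedSize(std::size_t current_size, std::size_t added_columns) {
  return current_size + added_columns * kBytesPerColumn;
}

int main() {
  // A row that was already near the limit before the schema change.
  std::size_t current_size = kPackedRowSizeLimit - 2;

  // Adding even one column pushes the repacked row over the limit.
  std::size_t new_size = RepackedSize(current_size, /*added_columns=*/1);

  // Old behavior (sketch): "repacked size <= limit" was treated as an invariant,
  // so this case crashed with the Corruption/FATAL reported in this issue.
  // New behavior (sketch): force-repack anyway; an oversized packed row is
  // merely less space-efficient, not incorrect.
  std::cout << "repacked size " << new_size << " bytes exceeds limit "
            << kPackedRowSizeLimit << " -> force-repack instead of failing\n";
  return 0;
}
```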
spolitov added a commit that referenced this issue on Jun 30, 2022:
Summary: It could happen that a packed row is already near the size limit, and then the user adds new columns to the table. Since each column uses 4 bytes in a packed row, such a row could grow over the limit after repacking. Previously we assumed that repacking a row could not make it larger than the limit, and there is a check for that. But clearly that is not the case in the scenario above. Changed the code to force-repack a row even if the repacked row overflows the specified limit, so we can have rows larger than the limit without crashing. This should not be an issue, since we expect it to happen quite rarely in an actual DB, and large packed rows are merely less efficient but still work fine.
Original diff: b608dda/D17904
Test Plan: PgPackedRowTest.PackOverflow
Reviewers: mbautin, rthallam
Reviewed By: mbautin, rthallam
Subscribers: ybase, bogdan
Differential Revision: https://phabricator.dev.yugabyte.com/D18012
@def- Can you re-run this and confirm if Sergei's diff above fixes this?
Yes, running. Will report if I see this again.
@def- Is this good to close?
Not seen in 6 days, good enough to close. |
def- changed the title on Jul 12, 2022
from: [DocDB][LST] Packed columns: Failed to write a batch with 0 operations into RocksDB: Corruption (yb/docdb/docdb_compaction_context.cc:265): Unable to pack old value for 50
to: [DocDB][LST] Packed columns: FATAL: Failed to write a batch with 0 operations into RocksDB: Corruption (yb/docdb/docdb_compaction_context.cc:265): Unable to pack old value for 50
Jira Link: DB-2767
Description
With LST on my dev server I have run into this issue with packed columns enabled.
On state 2e8c2fc with 85ac8d8 reverted locally (unrelated bug), I still got a corruption.
The FATAL tserver log file contains:
As @spolitov indicated, this is a separate issue, so I have opened a new bug for it. Initially I thought it was related to #12813.
I have shared the full yugabyte-data directory following this corruption for analysis.