runtime_error: log offset is outside the translation range #11403

Closed
kolayuk opened this issue Jun 13, 2023 · 5 comments · Fixed by #11450 or #11838
Labels
kind/bug Something isn't working

Comments


kolayuk commented Jun 13, 2023

Version & Environment

Redpanda version (rpk version): 23.1.11
Running in Docker on Ubuntu 20.04

What went wrong?

I receive this error in the logs:

WARN  2023-06-13 16:57:43,942 [shard 0] kafka - connection_context.cc:451 - Error processing request: std::runtime_error (ntp {kafka/achieved-badges/0}: log offset 9 is outside the translation range (starting at 10))

After this error, consuming stops, including for the other topics (I'm not sure whether it's not handled on the client or the server stops providing data; we're using the kgo Golang library, https://github.com/twmb/franz-go).
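For context, here is a minimal franz-go consumer along the lines of what we run (a sketch only, not our actual code; the broker address, group, and topic names are placeholders). With kgo, the broker-side failure shows up as per-partition fetch errors rather than records:

```go
package main

import (
	"context"
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	// Placeholders: adjust the broker address, group, and topic names.
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ConsumerGroup("badges-consumer"),
		kgo.ConsumeTopics("achieved-badges"),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cl.Close()

	for {
		fetches := cl.PollFetches(context.Background())
		// Surface per-topic/partition fetch errors instead of silently stalling.
		fetches.EachError(func(topic string, partition int32, err error) {
			log.Printf("fetch error on %s/%d: %v", topic, partition, err)
		})
		fetches.EachRecord(func(r *kgo.Record) {
			log.Printf("consumed %s/%d offset %d", r.Topic, r.Partition, r.Offset)
		})
	}
}
```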

What should have happened instead?

How to reproduce the issue?

The steps to reproduce are a bit unclear, but it is definitely reproducible with the "delete" retention policy; to reproduce it more easily you can set the retention time to a small value (I used 5 min). A rough Go sketch of this setup follows the steps below.

  1. Start Redpanda.
  2. Create topics with the settings above (at least 2).
  3. Start a consumer and a non-stop producer to one of the topics.
  4. (not sure) Stop the consumer so that some data accumulates.
  5. Wait until retention cleans some data.
  6. (not sure) Start the consumer again (it may be necessary to start / stop / wait multiple times; the pattern is a bit unclear to me).
  7. You will probably get errors like fetch offset out of range for {kafka/topic1/0}, requested offset: 19, partition start offset: 20, high watermark: 20, ec: { error_code: offset_out_of_range [1] } because retention has cleaned the old data.
  8. Publish data to the partition from step 7.
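A rough Go sketch of the setup from the steps above, using franz-go's kadm package (the broker address, partition count, and topic names are placeholders; this is not the exact code from the report):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/twmb/franz-go/pkg/kadm"
	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cl, err := kgo.NewClient(kgo.SeedBrokers("localhost:9092"))
	if err != nil {
		log.Fatal(err)
	}
	defer cl.Close()

	ctx := context.Background()

	// Create two topics with a short retention.ms (5 minutes, as in the report).
	retention := "300000"
	adm := kadm.NewClient(cl)
	if _, err := adm.CreateTopics(ctx, 1, 1,
		map[string]*string{"retention.ms": &retention},
		"topic1", "topic2"); err != nil {
		log.Fatal(err)
	}

	// Non-stop producer to one of the topics (step 3).
	for {
		rec := &kgo.Record{Topic: "topic1", Value: []byte("payload")}
		if err := cl.ProduceSync(ctx, rec).FirstErr(); err != nil {
			log.Printf("produce error: %v", err)
		}
		time.Sleep(time.Second)
	}
}
```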

Additional information

Logs: https://gist.github.com/kolayuk/04a6c7a776be3e8aaf42a40de06a48a6

@kolayuk kolayuk added the kind/bug Something isn't working label Jun 13, 2023

kolayuk commented Jun 14, 2023

Just reproduced it once again. It seems the issue appears when I try to publish something to the partition close to, or exactly at, the cleanup time.
This time I got
[shard 0] kafka - connection_context.cc:451 - Error processing request: std::runtime_error (ntp {kafka/achieved-badges/8}: log offset 19 is outside the translation range (starting at 20))
The issue is fixed after a restart: on restart it cleans up the extra offsets (19-20) and falls back to 17.

INFO  2023-06-14 08:51:45,504 [shard 0] storage - disk_log_impl.cc:1245 - Removing "/var/lib/redpanda/data/kafka/achieved-badges/8_164/19-4-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:4, base_offset:19, committed_offset:20, dirty_offset:20}, compacted_segment=0, finished_self_compaction=0, generation={2}, reader={/var/lib/redpanda/data/kafka/achieved-badges/8_164/19-4-v1.log, (245 bytes)}, writer=nullptr, cache={cache_size=1}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/achieved-badges/8_164/19-4-v1.base_index, offsets:{19}, index:{header_bitflags:0, base_offset:{19}, max_offset:{20}, base_timestamp:{timestamp: 1686728062118}, max_timestamp:{timestamp: 1686728062118}, batch_timestamps_are_monotonic:1, index(1,1,1)}, step:32768, needs_persistence:0}})

WARN  2023-06-14 08:51:46,023 [shard 0] kafka - fetch.cc:198 - fetch offset out of range for {kafka/achieved-badges/8}, requested offset: 16, partition start offset: 17, high watermark: 17, ec: { error_code: offset_out_of_range [1] }

Then the topic works normally.

@VladLazar
Contributor

Thanks for reporting this! Could you attempt to reproduce with trace logging for a few subsystems? You'd have to append the following to your redpanda invocation: --logger-log-level=offset_translator=trace --logger-log-level=cluster=trace --logger-log-level=storage=trace --logger-log-level=storage-gc=trace --logger-log-level=kafka=trace.

Alternatively, you can use the admin API to enable them at runtime (in the example below the logs revert to the info level after 5 minutes):

curl -X PUT "<admin_api_host>:<admin_api_port>/v1/config/log_level/offset_translator?level=trace&expires=300"
curl -X PUT "<admin_api_host>:<admin_api_port>/v1/config/log_level/cluster?level=trace&expires=300"
curl -X PUT "<admin_api_host>:<admin_api_port>/v1/config/log_level/storage?level=trace&expires=300"
curl -X PUT "<admin_api_host>:<admin_api_port>/v1/config/log_level/storage-gc?level=trace&expires=300"
curl -X PUT "<admin_api_host>:<admin_api_port>/v1/config/log_level/kafka?level=trace&expires=300"
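If it is more convenient than curl, the same requests can be made from a small Go program. This is only a sketch; it assumes the admin API is reachable at localhost:9644 (Redpanda's default admin API port) and uses only the endpoint shown above:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	adminAPI := "http://localhost:9644" // assumption: default admin API address
	loggers := []string{"offset_translator", "cluster", "storage", "storage-gc", "kafka"}

	for _, name := range loggers {
		// Enable trace logging for 300 seconds, mirroring the curl commands above.
		url := fmt.Sprintf("%s/v1/config/log_level/%s?level=trace&expires=300", adminAPI, name)
		req, err := http.NewRequest(http.MethodPut, url, nil)
		if err != nil {
			log.Fatal(err)
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
		log.Printf("set %s to trace: %s", name, resp.Status)
	}
}
```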


kolayuk commented Jun 14, 2023

I failed to capture them while reproducing (it is not easy to reproduce), but here they are from continuous fetching by the client while the issue is active (grepped by the problematic partition):

TRACE 2023-06-14 11:04:46,019 [shard 0] cluster - partition_leaders_table.cc:196 - updated partition: {kafka/achieved-badges/9} leader: {term: 5, current leader: {0}, previous leader: {0}, revision: 164}
TRACE 2023-06-14 11:04:46,390 [shard 0] storage - readers_cache.cc:101 - {kafka/achieved-badges/9} - trying to get reader for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:46,390 [shard 0] storage - readers_cache.cc:130 - {kafka/achieved-badges/9} - reader cache miss for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:46,390 [shard 0] storage - readers_cache.cc:75 - {kafka/achieved-badges/9} - adding reader [9,10]
WARN  2023-06-14 11:04:46,390 [shard 0] kafka - connection_context.cc:451 - Error processing request: std::runtime_error (ntp {kafka/achieved-badges/9}: log offset 9 is outside the translation range (starting at 10))
TRACE 2023-06-14 11:04:48,891 [shard 0] storage - readers_cache.cc:101 - {kafka/achieved-badges/9} - trying to get reader for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:48,891 [shard 0] storage - readers_cache.cc:130 - {kafka/achieved-badges/9} - reader cache miss for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:48,891 [shard 0] storage - readers_cache.cc:75 - {kafka/achieved-badges/9} - adding reader [9,10]
WARN  2023-06-14 11:04:48,892 [shard 0] kafka - connection_context.cc:451 - Error processing request: std::runtime_error (ntp {kafka/achieved-badges/9}: log offset 9 is outside the translation range (starting at 10))
TRACE 2023-06-14 11:04:49,020 [shard 0] cluster - partition_leaders_table.cc:196 - updated partition: {kafka/achieved-badges/9} leader: {term: 5, current leader: {0}, previous leader: {0}, revision: 164}
TRACE 2023-06-14 11:04:49,226 [shard 0] storage-gc - disk_log_impl.cc:740 - [{kafka/achieved-badges/9}] house keeping with configuration from manager: {evicition_time:{timestamp: 1686135889224}, max_bytes:18446744073709551615, max_collectible_offset:11, should_sanitize:false}
TRACE 2023-06-14 11:04:49,226 [shard 0] storage-gc - disk_log_impl.cc:769 - [{kafka/achieved-badges/9}] applying 'deletion' log cleanup policy with config: {evicition_time:{timestamp: 1686740389226}, max_bytes:18446744073709551615, max_collectible_offset:11, should_sanitize:false}
DEBUG 2023-06-14 11:04:49,226 [shard 0] storage-gc - disk_log_impl.cc:340 - [{kafka/achieved-badges/9}] time retention timestamp: {timestamp: 1686740389226}, first segment max timestamp: {timestamp: 1686738622758}
DEBUG 2023-06-14 11:04:49,226 [shard 0] storage-gc - disk_log_impl.cc:289 - [{kafka/achieved-badges/9}] gc[time_based_retention] requested to remove segments up to 11 offset
TRACE 2023-06-14 11:04:51,394 [shard 0] storage - readers_cache.cc:101 - {kafka/achieved-badges/9} - trying to get reader for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:51,394 [shard 0] storage - readers_cache.cc:130 - {kafka/achieved-badges/9} - reader cache miss for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:51,394 [shard 0] storage - readers_cache.cc:75 - {kafka/achieved-badges/9} - adding reader [9,10]
WARN  2023-06-14 11:04:51,394 [shard 0] kafka - connection_context.cc:451 - Error processing request: std::runtime_error (ntp {kafka/achieved-badges/9}: log offset 9 is outside the translation range (starting at 10))
TRACE 2023-06-14 11:04:52,021 [shard 0] cluster - partition_leaders_table.cc:196 - updated partition: {kafka/achieved-badges/9} leader: {term: 5, current leader: {0}, previous leader: {0}, revision: 164}
TRACE 2023-06-14 11:04:53,895 [shard 0] storage - readers_cache.cc:101 - {kafka/achieved-badges/9} - trying to get reader for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:53,895 [shard 0] storage - readers_cache.cc:130 - {kafka/achieved-badges/9} - reader cache miss for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:53,895 [shard 0] storage - readers_cache.cc:75 - {kafka/achieved-badges/9} - adding reader [9,10]
WARN  2023-06-14 11:04:53,896 [shard 0] kafka - connection_context.cc:451 - Error processing request: std::runtime_error (ntp {kafka/achieved-badges/9}: log offset 9 is outside the translation range (starting at 10))
TRACE 2023-06-14 11:04:55,023 [shard 0] cluster - partition_leaders_table.cc:196 - updated partition: {kafka/achieved-badges/9} leader: {term: 5, current leader: {0}, previous leader: {0}, revision: 164}
TRACE 2023-06-14 11:04:56,397 [shard 0] storage - readers_cache.cc:101 - {kafka/achieved-badges/9} - trying to get reader for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:56,397 [shard 0] storage - readers_cache.cc:130 - {kafka/achieved-badges/9} - reader cache miss for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:56,397 [shard 0] storage - readers_cache.cc:75 - {kafka/achieved-badges/9} - adding reader [9,10]
WARN  2023-06-14 11:04:56,397 [shard 0] kafka - connection_context.cc:451 - Error processing request: std::runtime_error (ntp {kafka/achieved-badges/9}: log offset 9 is outside the translation range (starting at 10))
TRACE 2023-06-14 11:04:58,024 [shard 0] cluster - partition_leaders_table.cc:196 - updated partition: {kafka/achieved-badges/9} leader: {term: 5, current leader: {0}, previous leader: {0}, revision: 164}
TRACE 2023-06-14 11:04:58,900 [shard 0] storage - readers_cache.cc:101 - {kafka/achieved-badges/9} - trying to get reader for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:58,900 [shard 0] storage - readers_cache.cc:130 - {kafka/achieved-badges/9} - reader cache miss for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:04:58,900 [shard 0] storage - readers_cache.cc:75 - {kafka/achieved-badges/9} - adding reader [9,10]
WARN  2023-06-14 11:04:58,901 [shard 0] kafka - connection_context.cc:451 - Error processing request: std::runtime_error (ntp {kafka/achieved-badges/9}: log offset 9 is outside the translation range (starting at 10))
TRACE 2023-06-14 11:04:59,643 [shard 0] storage-gc - disk_log_impl.cc:740 - [{kafka/achieved-badges/9}] house keeping with configuration from manager: {evicition_time:{timestamp: 1686135899642}, max_bytes:18446744073709551615, max_collectible_offset:11, should_sanitize:false}
TRACE 2023-06-14 11:04:59,643 [shard 0] storage-gc - disk_log_impl.cc:769 - [{kafka/achieved-badges/9}] applying 'deletion' log cleanup policy with config: {evicition_time:{timestamp: 1686740399643}, max_bytes:18446744073709551615, max_collectible_offset:11, should_sanitize:false}
DEBUG 2023-06-14 11:04:59,643 [shard 0] storage-gc - disk_log_impl.cc:340 - [{kafka/achieved-badges/9}] time retention timestamp: {timestamp: 1686740399643}, first segment max timestamp: {timestamp: 1686738622758}
DEBUG 2023-06-14 11:04:59,643 [shard 0] storage-gc - disk_log_impl.cc:289 - [{kafka/achieved-badges/9}] gc[time_based_retention] requested to remove segments up to 11 offset
TRACE 2023-06-14 11:05:01,024 [shard 0] cluster - partition_leaders_table.cc:196 - updated partition: {kafka/achieved-badges/9} leader: {term: 5, current leader: {0}, previous leader: {0}, revision: 164}
TRACE 2023-06-14 11:05:01,404 [shard 0] storage - readers_cache.cc:101 - {kafka/achieved-badges/9} - trying to get reader for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:05:01,404 [shard 0] storage - readers_cache.cc:130 - {kafka/achieved-badges/9} - reader cache miss for: {start_offset:{10}, max_offset:{10}, min_bytes:0, max_bytes:958698, type_filter:batch_type::raft_data, first_timestamp:nullopt}
TRACE 2023-06-14 11:05:01,404 [shard 0] storage - readers_cache.cc:75 - {kafka/achieved-badges/9} - adding reader [9,10]
WARN  2023-06-14 11:05:01,404 [shard 0] kafka - connection_context.cc:451 - Error processing request: std::runtime_error (ntp {kafka/achieved-badges/9}: log offset 9 is outside the translation range (starting at 10))
^C

@mmaslankaprv
Member

thank you very much, this is very helpful

mmaslankaprv added a commit to mmaslankaprv/redpanda that referenced this issue Jun 15, 2023
When a segment is rolled due to the `segment.ms` property, its base
offset is set to the committed offset of the last segment plus one. In
the segment constructor, all of the segment offset tracker's offsets
were set to the base offset passed as the constructor argument. As a
result, an empty segment had exactly the same set of offsets as a
segment containing a single batch with one record whose base offset is
equal to the segment base offset.

Incorrect offset accounting in empty segments led to eviction-driven
`log::truncate_prefix` being called with an incorrect offset, so the
log was not truncated at a segment boundary. This ultimately resulted
in a disconnect between the offset translator truncation point and the
first log segment's start offset.

For example:

(we represent a segment as `[base_offset,end_offset]`)

Consider a log with the segments:

```
[0,10][11,15]
```

After `segment.ms` a new, empty segment is created:

```
[0,10][11,15][16,16]
```

When the eviction point is established, the last offset is checked; in
this case, if all the segments are to be evicted, it will be equal to
16. `16` is the last offset that is going to be evicted (the last one
included in the Raft snapshot). Hence `log::truncate_prefix` is called
with offset `17`.

If some batches are then appended to the rolled segment, the log will
contain, for example:

```
[0,10][11,15][16,25]
```

Now if the log is prefix-truncated at offset `17`, its start offset is
updated to `17` but the underlying segment is kept:

```
[16,25]
```

Now when a reader starts reading the log it will request to start from
the log start offset, which is `17`, but it will have to skip over the
batch starting at `16`. If the batch at `16` has more than one record,
it will still be returned to the reader, which will have to translate
its base offset into the Kafka offset space. This is impossible, as the
offset_translator was already truncated to `16`, i.e. there is no
information in the translator to correctly translate the offset.

The fix is to set the empty segment's dirty, committed and stable
offsets to its base_offset minus one. This way all the semantics of log
operations hold, as the offset returned is equal to the previous
segment's last offset, which is what it would be if there were no empty
segment at the head of the log.

Fixes: redpanda-data#11403

Signed-off-by: Michal Maslanka <[email protected]>
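To make the arithmetic above easier to follow, here is a small, purely illustrative Go model of the offset bookkeeping (the types and names are invented for this sketch and are not Redpanda's actual implementation). It shows how the eviction point, and therefore the offset passed to `log::truncate_prefix`, differs between the old empty-segment initialization and the fixed one:

```go
package main

import "fmt"

// Toy segment: only the two offsets relevant to the explanation above.
type segment struct {
	baseOffset      int64
	committedOffset int64 // last offset actually written to the segment
}

// newEmptySegment models rolling an empty segment due to segment.ms.
// If buggy is true, the committed offset starts at the base offset (the
// behaviour described above); otherwise it starts at base-1 (the fix).
func newEmptySegment(base int64, buggy bool) segment {
	if buggy {
		return segment{baseOffset: base, committedOffset: base}
	}
	return segment{baseOffset: base, committedOffset: base - 1}
}

// evictionPoint models "last offset included in the raft snapshot" when all
// listed segments are eligible for retention-based eviction.
func evictionPoint(segs []segment) int64 {
	return segs[len(segs)-1].committedOffset
}

func main() {
	// Log [0,10][11,15] plus an empty segment rolled at base offset 16.
	for _, buggy := range []bool{true, false} {
		segs := []segment{
			{baseOffset: 0, committedOffset: 10},
			{baseOffset: 11, committedOffset: 15},
			newEmptySegment(16, buggy),
		}
		evict := evictionPoint(segs)
		truncateAt := evict + 1 // truncate_prefix is called with eviction point + 1
		fmt.Printf("buggy=%v eviction point=%d truncate_prefix(%d)\n",
			buggy, evict, truncateAt)
		// buggy=true  -> truncate_prefix(17): not a segment boundary, so the
		//                [16,...] segment survives with start offset 17 and the
		//                offset translator loses the range needed for offset 16.
		// buggy=false -> truncate_prefix(16): exactly the new segment's base
		//                offset, a clean segment boundary.
	}
}
```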
mmaslankaprv added a commit to mmaslankaprv/redpanda that referenced this issue Jun 15, 2023
vbotbuildovich pushed a commit to vbotbuildovich/redpanda that referenced this issue Jun 19, 2023
(cherry picked from commit 76a9e6f)

kolayuk commented Jul 2, 2023

The issue is still reproducible in Redpanda v23.2.1-rc4 - ec050f4

Here are the logs right before the first occurrence after the update:

 
INFO  2023-07-02 11:21:20,299 [shard 1] storage - segment.cc:759 - Creating new segment /var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.log


INFO  2023-07-02 11:21:20,528 [shard 0] storage - segment.cc:759 - Creating new segment /var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.log
INFO  2023-07-02 11:21:27,850 [shard 0] storage - segment.cc:759 - Creating new segment /var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.log
INFO  2023-07-02 11:22:51,139 [shard 1] storage - disk_log_impl.cc:1362 - Removing "/var/lib/redpanda/data/kafka/viewed-news/76_94/673-194-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:194, base_offset:673, committed_offset:675, dirty_offset:675}, compacted_segment=0, finished_self_compaction=0, generation={7}, reader={/var/lib/redpanda/data/kafka/viewed-news/76_94/673-194-v1.log, (1207 bytes)}, writer=nullptr, cache={cache_size=3}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/viewed-news/76_94/673-194-v1.base_index, offsets:{673}, index:{header_bitflags:0, base_offset:{673}, max_offset:{675}, base_timestamp:{timestamp: 1688210476775}, max_timestamp:{timestamp: 1688210571080}, batch_timestamps_are_monotonic:1, index(1,1,1)}, step:32768, needs_persistence:0}})
INFO  2023-07-02 11:22:51,139 [shard 1] storage - disk_log_impl.cc:1362 - Removing "/var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:194, base_offset:676, committed_offset:675, dirty_offset:675}, compacted_segment=0, finished_self_compaction=0, generation={0}, reader={/var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.log, (0 bytes)}, writer={no_of_chunks:64, closed:0, fallocation_offset:0, committed_offset:0, bytes_flush_pending:0}, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.base_index, offsets:{676}, index:{header_bitflags:0, base_offset:{676}, max_offset:{0}, base_timestamp:{timestamp: 0}, max_timestamp:{timestamp: 0}, batch_timestamps_are_monotonic:1, index(0,0,0)}, step:32768, needs_persistence:0}})
INFO  2023-07-02 11:22:57,764 [shard 0] storage - disk_log_impl.cc:1362 - Removing "/var/lib/redpanda/data/kafka/viewed-news/76_94/673-194-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:194, base_offset:673, committed_offset:675, dirty_offset:675}, compacted_segment=0, finished_self_compaction=0, generation={7}, reader={/var/lib/redpanda/data/kafka/viewed-news/76_94/673-194-v1.log, (1207 bytes)}, writer=nullptr, cache={cache_size=3}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/viewed-news/76_94/673-194-v1.base_index, offsets:{673}, index:{header_bitflags:0, base_offset:{673}, max_offset:{675}, base_timestamp:{timestamp: 1688210476775}, max_timestamp:{timestamp: 1688210571080}, batch_timestamps_are_monotonic:1, index(1,1,1)}, step:32768, needs_persistence:0}})
INFO  2023-07-02 11:22:57,765 [shard 0] storage - disk_log_impl.cc:1362 - Removing "/var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:194, base_offset:676, committed_offset:675, dirty_offset:675}, compacted_segment=0, finished_self_compaction=0, generation={0}, reader={/var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.log, (0 bytes)}, writer={no_of_chunks:64, closed:0, fallocation_offset:0, committed_offset:0, bytes_flush_pending:0}, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.base_index, offsets:{676}, index:{header_bitflags:0, base_offset:{676}, max_offset:{0}, base_timestamp:{timestamp: 0}, max_timestamp:{timestamp: 0}, batch_timestamps_are_monotonic:1, index(0,0,0)}, step:32768, needs_persistence:0}})
INFO  2023-07-02 11:23:01,420 [shard 0] storage - disk_log_impl.cc:1362 - Removing "/var/lib/redpanda/data/kafka/viewed-news/76_94/673-194-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:194, base_offset:673, committed_offset:675, dirty_offset:675}, compacted_segment=0, finished_self_compaction=0, generation={7}, reader={/var/lib/redpanda/data/kafka/viewed-news/76_94/673-194-v1.log, (1207 bytes)}, writer=nullptr, cache={cache_size=3}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/viewed-news/76_94/673-194-v1.base_index, offsets:{673}, index:{header_bitflags:0, base_offset:{673}, max_offset:{675}, base_timestamp:{timestamp: 1688210476775}, max_timestamp:{timestamp: 1688210571080}, batch_timestamps_are_monotonic:1, index(1,1,1)}, step:32768, needs_persistence:0}})
INFO  2023-07-02 11:23:01,420 [shard 0] storage - disk_log_impl.cc:1362 - Removing "/var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:194, base_offset:676, committed_offset:675, dirty_offset:675}, compacted_segment=0, finished_self_compaction=0, generation={0}, reader={/var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.log, (0 bytes)}, writer={no_of_chunks:64, closed:0, fallocation_offset:0, committed_offset:0, bytes_flush_pending:0}, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/viewed-news/76_94/676-194-v1.base_index, offsets:{676}, index:{header_bitflags:0, base_offset:{676}, max_offset:{0}, base_timestamp:{timestamp: 0}, max_timestamp:{timestamp: 0}, batch_timestamps_are_monotonic:1, index(0,0,0)}, step:32768, needs_persistence:0}})
WARN  2023-07-02 11:23:01,446 [shard 3] kafka - connection_context.cc:497 - Error processing request: std::runtime_error (ntp {kafka/viewed-news/76}: log offset 675 is outside the translation range (starting at 676))
