
Proposal: Redesign of Pegasus Scanner #723

Open
Smityz opened this issue Apr 22, 2021 · 6 comments
Labels: type/enhancement (Indicates new feature requests)

@Smityz (Contributor) commented Apr 22, 2021

Proposal: Redesign of Pegasus Scanner

Background

Pegasus provides three interfaces, on_get_scanner, on_scan, and on_clear_scanner, for clients to execute scanning tasks.

To scan a whole table, the client first calls on_get_scanner on each partition; each partition returns a context_id, a random number generated by the server that identifies parameters of the scanning task, such as hash_key_filter_type and batch_size, along with its context.

Next, the client uses this context_id to call on_scan and completes the scan of each partition in turn. The server scans the table's data on disk and returns the matching values to the client in batches.

When the task ends, or if any error occurs, the client calls on_clear_scanner to release its context_id on the server.
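
To make the flow concrete, here is a minimal client-side sketch of this three-step protocol. The RPC wrappers (call_get_scanner, call_scan, call_clear_scanner) and the response shape are hypothetical stand-ins for calls into the real on_get_scanner / on_scan / on_clear_scanner handlers; a real client also passes filter options and handles errors.

```cpp
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

struct scan_response {
    std::vector<std::pair<std::string, std::string>> kvs;  // one batch of rows
    bool complete = false;  // true once the partition is exhausted
};

// Hypothetical RPC wrappers, one per server-side handler (stubbed out here).
int64_t call_get_scanner(int partition) { return 42; }
scan_response call_scan(int partition, int64_t context_id) { return {{}, true}; }
void call_clear_scanner(int partition, int64_t context_id) {}

void full_scan(int partition_count) {
    for (int p = 0; p < partition_count; ++p) {
        // 1. on_get_scanner: the server builds a scan context and returns a
        //    random context_id identifying it in later calls.
        int64_t ctx = call_get_scanner(p);
        // 2. on_scan: pull matching rows batch by batch until the partition
        //    has been fully scanned.
        scan_response resp;
        do {
            resp = call_scan(p, ctx);
            // ... consume resp.kvs ...
        } while (!resp.complete);
        // 3. on_clear_scanner: release the server-side scan context.
        call_clear_scanner(p, ctx);
    }
}
```

Note that the partitions are visited one by one; the proposal below suggests parallelizing this loop.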

Problem Statement

In practice, this design causes some problems.

  1. Prefix scan is too slow

If we execute this scanning task:

full_scan --hash_key_filter_type prefix --hash_key_filter_pattern 2021-04-21

The server will scan all the data in the table and then return only the keys that match the pattern as a prefix. But we can speed this up by using the prefix-seek features of RocksDB.

  2. Scanning tasks fail easily

Although we have a batch size to limit the scan time, it does not help when matching data is sparse. In the case above, we may need to scan almost the whole partition even though no row matches the prefix, so the request can easily time out.

Proposal

For problem 1

  1. The Pegasus key schema in RocksDB is [hashkey_len(2bytes)][hashkey][sortkey], so we can't use prefix seeking directly. But we can prefix-seek [01][prefix_pattern], [02][prefix_pattern], [03][prefix_pattern] ... [65535][prefix_pattern] in RocksDB, as sketched after this list.
  2. The client can scan all the partitions in parallel instead of one by one.
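
Here is a minimal sketch of item 1, assuming the [hashkey_len(2 bytes, big-endian)][hashkey][sortkey] schema: for every candidate hashkey length, seek to [len][pattern] and iterate while keys keep that prefix. Names are illustrative, and hashkey lengths shorter than the pattern are skipped since they can never match.

```cpp
#include <rocksdb/db.h>

#include <cstdint>
#include <memory>
#include <string>

// Emit every key whose hashkey starts with `pattern`, using one short
// prefix seek per candidate hashkey length instead of a full-table scan.
void prefix_scan(rocksdb::DB* db, const std::string& pattern) {
    std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(rocksdb::ReadOptions()));
    for (uint32_t len = pattern.size(); len <= 0xFFFF; ++len) {
        std::string prefix;
        prefix.push_back(static_cast<char>((len >> 8) & 0xFF));  // hashkey_len, high byte
        prefix.push_back(static_cast<char>(len & 0xFF));         // hashkey_len, low byte
        prefix += pattern;
        for (it->Seek(prefix); it->Valid() && it->key().starts_with(prefix); it->Next()) {
            // ... matching row: it->key() / it->value() ...
        }
    }
}
```

Even in the worst case this costs at most 65536 short seeks, which is far cheaper than examining every key in the partition.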

For problem 2

  1. We can add a heartbeat check during scanning, like HBase's StoreScanner: the Pegasus server sends heartbeat packets periodically to avoid timeouts, so the scan behaves like a stream.

  2. We can change what the batch size counts: from the number of matching values to the number of values already scanned (see the sketch below).
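
A sketch of item 2, with illustrative names: one on_scan round stops after batch_size keys have been examined rather than after batch_size keys have matched, so a sparse filter can no longer stretch a single round past the RPC timeout.

```cpp
#include <rocksdb/db.h>

#include <string>
#include <vector>

// One on_scan round. The iterator keeps its position in the partition
// across rounds; `pattern` stands in for the hashkey filter.
std::vector<std::string> scan_one_round(rocksdb::Iterator* it, int batch_size,
                                        const std::string& pattern) {
    std::vector<std::string> batch;
    int examined = 0;
    // The old behavior bounded batch.size(), so on sparse data one round
    // could examine nearly a whole partition. Bounding `examined` instead
    // keeps each round's work (and latency) predictable.
    for (; it->Valid() && examined < batch_size; it->Next(), ++examined) {
        if (it->key().starts_with(pattern)) {  // simplified filter check
            batch.push_back(it->value().ToString());
        }
    }
    return batch;
}
```

The trade-off is that a round may now return fewer than batch_size rows (possibly zero), so the client must keep calling until the server reports the scan complete.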

@shenxingwuying (Contributor) commented Apr 25, 2021

Redesigning the Pegasus scanner should solve the scan-timeout problem.
In my opinion, the root cause of the problem is how the data is sorted.
RocksDB should use a customized Comparator that keeps the data sorted by user key (hash_key, sort_key); then prefix filtering would be very fast.

Why did the comparator use the default ByteWiseComparator in the beginning?
Perhaps Pegasus can now switch to the new (customized) Comparator.
To avoid incompatible data, we can support two comparators (i.e., add the new Comparator), with new Pegasus clusters using the new one. Then:

1. Postfix filtering still has to scan all the data, so the cost is as before; maybe that filter is not important.
2. Prefix filtering no longer needs to scan all the data; speed increases because fewer keys are scanned.
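
To make the idea concrete, here is a minimal sketch of such a customized comparator, assuming the [hashkey_len(2 bytes)][hashkey][sortkey] encoding described later in this thread; it is an illustration, not Pegasus code. It orders keys by the raw hashkey bytes first and then by the sortkey, so all hashkeys sharing a prefix become contiguous on disk.

```cpp
#include <rocksdb/comparator.h>
#include <rocksdb/slice.h>

#include <cstdint>
#include <string>

// Split an encoded key [hashkey_len(2 bytes)][hashkey][sortkey] into parts.
static void decode(const rocksdb::Slice& raw, rocksdb::Slice* hashkey,
                   rocksdb::Slice* sortkey) {
    const uint16_t len =
        (static_cast<uint8_t>(raw[0]) << 8) | static_cast<uint8_t>(raw[1]);
    *hashkey = rocksdb::Slice(raw.data() + 2, len);
    *sortkey = rocksdb::Slice(raw.data() + 2 + len, raw.size() - 2 - len);
}

class HashkeyFirstComparator : public rocksdb::Comparator {
public:
    const char* Name() const override { return "illustrative.HashkeyFirstComparator"; }

    int Compare(const rocksdb::Slice& a, const rocksdb::Slice& b) const override {
        rocksdb::Slice ah, as, bh, bs;
        decode(a, &ah, &as);
        decode(b, &bh, &bs);
        const int r = ah.compare(bh);        // user hashkey bytes first...
        return r != 0 ? r : as.compare(bs);  // ...then sortkey
    }

    // Conservative no-ops: correct, but they give up RocksDB's
    // key-shortening optimizations for index blocks.
    void FindShortestSeparator(std::string*, const rocksdb::Slice&) const override {}
    void FindShortSuccessor(std::string*) const override {}
};
```

As noted below, switching an existing table to such a comparator would make its old SST files unreadable, so it could only apply to new clusters or be gated per table.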

@Apache9 (Contributor) commented Apr 25, 2021

Changing the comparator will be a pain, as all the old data could no longer be read. Should we introduce a table-level flag to indicate whether to use the customized comparator? We also need to test the performance impact of using the customized comparator.

@neverchanje (Contributor) commented Apr 26, 2021

First of all, we use the default ByteWiseComparator because we designed the key schema based on it.
We place the hashkey length ahead of the hashkey bytes in order to prevent key conflicts like:

  1. hashkey = a, sortkey = xxx

  2. hashkey = ax, sortkey = xx

With the default comparator, the two keys are seen as distinct:

01axxx
02axxx

So we chose this method, but we didn't anticipate that one day we would need prefix filtering on hashkeys. So now the problem is:
how can we upgrade our key schema version to support efficient hashkey prefix-filtering, or find another workaround that doesn't modify the key schema (also giving up support for hashkey sorting), like the solution that @Smityz came up with above.
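
As a concrete check of the example above (the encoding helper below is illustrative, following the [hashkey_len(2bytes)][hashkey][sortkey] schema already quoted): with the 2-byte length prefix, the two keys encode to different byte strings, so the default bytewise order keeps them apart.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Illustrative encoder for [hashkey_len(2 bytes, big-endian)][hashkey][sortkey].
std::string encode_key(const std::string& hashkey, const std::string& sortkey) {
    const uint16_t len = static_cast<uint16_t>(hashkey.size());
    std::string key;
    key.push_back(static_cast<char>((len >> 8) & 0xFF));
    key.push_back(static_cast<char>(len & 0xFF));
    key += hashkey;
    key += sortkey;
    return key;
}

int main() {
    // "\x00\x01" "axxx"  vs  "\x00\x02" "axxx": same user bytes, distinct keys.
    const std::string k1 = encode_key("a", "xxx");
    const std::string k2 = encode_key("ax", "xx");
    std::printf("%s\n", k1 == k2 ? "collide" : "distinct");  // prints "distinct"
}
```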

@Apache9 (Contributor) commented Apr 27, 2021

So let's change the comparator and check the performance impact first?

@Smityz (Contributor, Author) commented May 7, 2021

If there are no compatibility issues, I think changing the comparator is feasible. Looking forward to your PR, @shenxingwuying.

@foreverneverer (Contributor) commented May 17, 2021

  1. We can add a heartbeat check during scanning, like HBase's StoreScanner: the Pegasus server sends heartbeat packets periodically to avoid timeouts, so the scan behaves like a stream.

@Apache9 @Smityz XiaoMi/pegasus-java-client#156 and XiaoMi/pegasus-go-client#86 have fixed the next-retry failure after a timeout; you can use them to mitigate the problem before the scanner is refactored.
