*: Support 'TTL' (like the one in HBase) in TiDB #18731
Comments
Hi @tianyu4552, thanks for reporting this request! Would you mind changing the title to English, please? Generally, it would be better if everyone is able to understand it.
Hi @tianyu4552, I've translated the title into English; I hope it can be accepted, thank you!
MongoDB also has something called capped collections. I think an SQL-like way to do this would be with partitioning + truncate partition. We could look at the syntax of …
Read from the old partition, update it, write it to the new partition, and truncate the old partition periodically. @nullnotnil Am I correct?
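As a rough illustration of the partitioning + truncate-partition approach described above (a sketch only; the table and partition names are made up, and the rolling window would normally be advanced by an external scheduled job):

```sql
-- Range-partition by time so that expired data can be discarded per partition.
CREATE TABLE events (
    id         BIGINT NOT NULL,
    created_at DATETIME NOT NULL,
    payload    VARCHAR(255),
    PRIMARY KEY (id, created_at)   -- the partition column must be part of the primary key
)
PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p_w1  VALUES LESS THAN (TO_DAYS('2020-08-10')),
    PARTITION p_w2  VALUES LESS THAN (TO_DAYS('2020-08-17')),
    PARTITION p_max VALUES LESS THAN (MAXVALUE)
);

-- Periodically (e.g. from a cron job) discard the oldest data in one cheap operation:
ALTER TABLE events TRUNCATE PARTITION p_w1;
```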
@zz-jason To be TTL-based, there would need to be a partition on …
Here is the reference from HBase for Time To Live (TTL):
See also: https://www.percona.com/blog/2020/08/04/the-road-story-of-a-myrocks-mariadb-migration/ This describes the use of a TTL in the …
I feel that we could declare a TTL mechanism that works on a row or partition by …
We need to investigate the user scenario in depth, and then make a decision after the research is completed. |
See https://brandur.org/fragments/ttl-indexes and https://docs.mongodb.com/manual/core/index-ttl/ for another implementation type. This makes it an index property rather than a table property. |
The TTL feature was released with TiDB v6.5: https://docs.pingcap.com/tidb/stable/time-to-live#periodically-delete-expired-data-using-ttl-time-to-live (tracking issue: #39262)
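For reference, a minimal sketch of the released feature based on the linked documentation (the table and column names here are illustrative):

```sql
-- Rows whose created_at is older than three months are deleted by background TTL jobs.
CREATE TABLE t1 (
    id         INT PRIMARY KEY,
    created_at TIMESTAMP
) TTL = `created_at` + INTERVAL 3 MONTH;

-- TTL can also be added to or changed on an existing table:
ALTER TABLE t1 TTL = `created_at` + INTERVAL 1 MONTH;
```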
We have now run into a problem: we plan to build a basic dimension table whose data is refreshed weekly.
However, the weekly refresh cannot guarantee 100% full coverage, and we do not want stale rows to be hit by queries; otherwise, the upper-layer consumers would need to do additional handling.
Could there be an expiration and eviction policy like the TTL mechanism in HBase?
It would be acceptable to use rowId as the primary key and expire whole rows.
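Applied to this scenario, the TTL feature referenced in the comment above could look roughly like this (a sketch; the table and column names are hypothetical, and 14 days is an arbitrary window longer than one weekly refresh):

```sql
-- Dimension table refreshed weekly; rows not touched by a refresh expire automatically.
CREATE TABLE dim_item (
    row_id     BIGINT PRIMARY KEY,
    attrs      JSON,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
) TTL = `updated_at` + INTERVAL 14 DAY;
```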