
fix(nft): add log_index to history table and use in PK #1926

Merged
merged 3 commits into from
Aug 7, 2023

Conversation

shamardy
Collaborator

@shamardy shamardy commented Aug 1, 2023

fixes KomodoPlatform/komodo-wallet#1239 (comment)
related comment KomodoPlatform/komodo-wallet#1239 (comment)

  • This PR fixes saving transactions that transfer multiple NFT tokens to the DB. Such transactions previously caused errors on insert due to the DB's uniqueness constraint on the transaction hash. To solve this, the PR uses log_index as part of the transfers history table's primary key.
  • The nft_tx_history table is now called nft_transfer_history, and tx/txs are renamed to transfer/transfers throughout the NFT code, since what is added to and retrieved from the DB is NFT transfers, not transactions (one transaction can contain multiple NFT transfers). Because the table is renamed, there is no need for DB migrations despite the addition of the log_index column. Although NFT is not used in production yet, if anybody has used it, transfers will be re-fetched and saved to the new DB table when using the related API methods.
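Based on the primary-key line visible in this PR's diff, the reworked table looks roughly like the following sketch (the column list is abridged and the exact column types are assumptions; only the PK shape matters here):

```sql
-- Abridged sketch of the renamed nft_transfer_history table.
CREATE TABLE IF NOT EXISTS nft_transfer_history (
    transaction_hash TEXT NOT NULL,
    log_index INTEGER NOT NULL,
    chain TEXT NOT NULL,
    details_json TEXT,
    -- Two transfers from the same transaction now differ in log_index,
    -- so inserting both no longer violates the key.
    PRIMARY KEY (transaction_hash, log_index, chain)
);
```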

@shamardy shamardy added the P0, 1.0.7-beta, and under review labels and removed the in progress (Changes will be made from the author) label Aug 1, 2023
@shamardy
Collaborator Author

shamardy commented Aug 1, 2023

@anarkiVan can you please check if this PR fixes this error KomodoPlatform/komodo-wallet#1239 (comment) or not?

@anarkiVan

@anarkiVan can you please check if this PR fixes this error KomodoPlatform/komodo-wallet#1239 (comment) or not?

@shamardy the error is gone. I was able to see NFTs and transactions using the seed from the issue.

onur-ozkan
onur-ozkan previously approved these changes Aug 1, 2023
Member

@onur-ozkan onur-ozkan left a comment


LGTM.

@@ -63,7 +64,8 @@ fn create_tx_history_table_sql(chain: &Chain) -> MmResult<String, SqlError> {
 collection_name TEXT,
 image_url TEXT,
 token_name TEXT,
-details_json TEXT
+details_json TEXT,
+PRIMARY KEY (transaction_hash, log_index, chain)
Member


note: I don't know the frequency of write operations this table will get, but this PK design will definitely slow those operations down a lot, especially when there are many rows.

Collaborator Author


I can remove this primary key altogether; we already avoid writing duplicate transfers with the same chain/transaction_hash/log_index by using the last block number when requesting new transfers to save. It would be difficult, though, to detect problems/bugs like the one this PR fixes if I did that.

Member


we already avoid writing duplicate transfers that have the same chain/transaction_hash/log_index by using the last block number when requesting new transfers that will be saved.

If it's done in a thread-safe way, that seems fine, yeah.

It will be difficult though to detect problems/bugs like the one that this PR fixes if I did that.

The error seems quite clear if I am not missing something.

I don't have hard feelings to remove this PK, just wanted to leave a note on it. I leave it up to you :)

Collaborator Author


KomodoPlatform/komodo-wallet#1239 (comment) seems quite clear if I am not missing something.

It was a DB error due to the unique constraint on the transaction hash. If Moralis duplicates data for some reason, or there is a problem in our implementation that makes us try to save duplicate data, it won't be caught without some kind of constraint. Adding a constraint on these values instead of a primary key would also slow down operations, AFAIK.

I don't have hard feelings to remove this PK, just wanted to leave a note on it. I leave it up to you :)

Let's leave it as is then and see if there are performance problems in the future :)

Member


Adding a constraint on these values instead of a primary key will also slow down operations afaik.

The idea is not to merge multiple values into a PK/constraint, because on each write all of these fields have to be checked for every candidate row. A better way to deal with this would be creating another field (maybe id?) and inserting rows with a pre-calculated id (which can be a combination of the values currently used as the PK) from mm2.
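The suggestion above can be sketched as follows; the function and key format are hypothetical, not part of this PR:

```rust
// Hypothetical sketch of the suggested pre-calculated id: collapse the three
// columns of the current composite PK into one surrogate key computed in mm2.
fn transfer_row_id(chain: &str, transaction_hash: &str, log_index: u32) -> String {
    // A single TEXT primary key means each insert compares one indexed
    // column instead of three.
    format!("{chain}:{transaction_hash}:{log_index}")
}

fn main() {
    let id = transfer_row_id("BSC", "0xabc123", 42);
    assert_eq!(id, "BSC:0xabc123:42");
    println!("{id}");
}
```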

Let's leave it as is then and see if there are performance problems in the future :)

Totally fine 👍

Collaborator Author


Is the column necessary?

It's not, but I guess it was added to make this

fn nft_from_row(row: &Row<'_>) -> Result<Nft, SqlError> {
    let json_string: String = row.get(0)?;
    json::from_str(&json_string).map_err(|e| SqlError::FromSqlConversionFailure(0, Type::Text, Box::new(e)))
}

fn transfer_history_from_row(row: &Row<'_>) -> Result<NftTransferHistory, SqlError> {
    let json_string: String = row.get(0)?;
    json::from_str(&json_string).map_err(|e| SqlError::FromSqlConversionFailure(0, Type::Text, Box::new(e)))
}
deserialize to the structs straight away. I can change the function to transfer_history_from_row_and_chain and insert the chain value into the JSON object before returning it, or just leave it as is. What do you think is better?
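For illustration, a std-only sketch of what splicing the chain value into the stored JSON before deserializing could look like; inject_chain and the sample fields are hypothetical, and real code would go through serde_json rather than manipulate strings:

```rust
// Hypothetical std-only sketch: splice a "chain" field into the stored JSON
// string before it is deserialized. Naive on purpose (assumes a non-empty
// object); it just shows the extra step the alternative design would involve.
fn inject_chain(details_json: &str, chain: &str) -> String {
    match details_json.find('{') {
        // Insert the new field right after the opening brace.
        Some(pos) => {
            let (head, tail) = details_json.split_at(pos + 1);
            format!("{head}\"chain\":\"{chain}\",{tail}")
        }
        // Not a JSON object; return unchanged.
        None => details_json.to_string(),
    }
}

fn main() {
    let row = r#"{"token_name":"Doge","block_number":28056721}"#;
    let patched = inject_chain(row, "BSC");
    assert_eq!(
        patched,
        r#"{"chain":"BSC","token_name":"Doge","block_number":28056721}"#
    );
    println!("{patched}");
}
```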


I think rather than keeping a field where all rows have the same value, inserting the value while deserializing is better.

Collaborator Author


Inserting the value after deserialization and removing it before serialization would complicate the code a lot. Additionally, adding the chain field after fetching the row(s) requires parsing the JSON string into a serde_json::Value, modifying it, and then deserializing it into the Nft/NftTransferHistory structs. This process likely has a greater performance impact than simply fetching the chain value directly from the database and deserializing the entire row in one operation. Ideally, the chain field should not have been included in the Nft/NftTransferHistory structs from the beginning, but it cannot be removed now as it is part of the responses used by GUIs.

Collaborator Author

@shamardy shamardy Aug 3, 2023


I can of course change these structs

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Nft {
    #[serde(flatten)]
    pub(crate) common: NftCommon,
    pub(crate) chain: Chain,
    pub(crate) block_number_minted: Option<u64>,
    pub(crate) block_number: u64,
    pub(crate) contract_type: ContractType,
    pub(crate) uri_meta: UriMeta,
}

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct NftTransferHistory {
    #[serde(flatten)]
    pub(crate) common: NftTransferCommon,
    pub(crate) chain: Chain,
    pub(crate) block_number: u64,
    pub(crate) block_timestamp: u64,
    pub(crate) contract_type: ContractType,
    pub(crate) token_uri: Option<String>,
    pub(crate) collection_name: Option<String>,
    pub(crate) image_url: Option<String>,
    pub(crate) token_name: Option<String>,
    pub(crate) status: TransferStatus,
}

to make all fields other than chain part of a new struct that is used for DB operations and use #[serde(flatten)] for this new struct, but the Nft/NftTransferHistory structs will get complicated too. I think I would rather keep the chain column :)


Okay I thought it would be simple. In that case, it can stay as it is.

@onur-ozkan
Member

For future PRs, we can split the changes into multiple commits (for this PR it would be one for the renaming and one for the actual fix); it would be very useful for the reviewers. It was quite confusing for me :)

Member

@borngraced borngraced left a comment


Great work, only one comment.

@shamardy
Collaborator Author

shamardy commented Aug 1, 2023

For future PRs, we can split the changes into multiple commits (for this PR it would be one for the renaming and one for the actual fix); it would be very useful for the reviewers. It was quite confusing for me :)

Yeah, sorry about that. I thought the name changes were going to be minimal and limited to DB methods when I started renaming, but it got out of control and I couldn't separate it into a different commit. I will keep that in mind in the future and try to separate commits even if I think they will be small at the start.

borngraced
borngraced previously approved these changes Aug 1, 2023
Member

@borngraced borngraced left a comment


LGTM 🚀


@caglaryucekaya caglaryucekaya left a comment


LGTM, I only have some questions

mm2src/coins/nft/storage/db_test_helpers.rs
@@ -63,7 +64,8 @@ fn create_tx_history_table_sql(chain: &Chain) -> MmResult<String, SqlError> {
 collection_name TEXT,
 image_url TEXT,
 token_name TEXT,
-details_json TEXT
+details_json TEXT,
+PRIMARY KEY (transaction_hash, log_index, chain)


Isn't there a different transfer history table for each chain? Why do we need the chain as part of the primary key?

@caglaryucekaya

pub(crate) async fn test_add_get_nfts_impl() {
    let chain = Chain::Bsc;
    let storage = init_nft_list_storage(&chain).await;
    let nft_list = nft_list();
    storage.add_nfts_to_list(&chain, nft_list, 28056726).await.unwrap();
    let token_id = BigDecimal::from_str(TOKEN_ID).unwrap();
    let nft = storage
        .get_nft(&chain, TOKEN_ADD.to_string(), token_id)
        .await
        .unwrap()
        .unwrap();
    assert_eq!(nft.block_number, 28056721);
}

pub(crate) async fn test_last_nft_blocks_impl() {
    let chain = Chain::Bsc;
    let storage = init_nft_list_storage(&chain).await;
    let nft_list = nft_list();
    storage.add_nfts_to_list(&chain, nft_list, 28056726).await.unwrap();
    let token_id = BigDecimal::from_str(TOKEN_ID).unwrap();
    let nft = storage
        .get_nft(&chain, TOKEN_ADD.to_string(), token_id)
        .await
        .unwrap()
        .unwrap();
    assert_eq!(nft.block_number, 28056721);
}

@laruh It's not related to this PR, but when looking at it I saw that these two tests are doing the same thing. Did you mean to do something else with the test_last_nft_blocks maybe?

@shamardy shamardy merged commit 92372cb into dev Aug 7, 2023
@shamardy shamardy deleted the fix-nft-db-multi-token-tx branch August 7, 2023 03:32