alternate approach to transaction validation #3191
Conversation
So I like this approach to doing the validation. By adding some more info about the chain state when outputs and transactions are updated, we are able to reduce the complexity of the polling and the assumptions around chain state by quite a bit. It also means these calls can be batched quite nicely, which will be a major load off.
- I do think we need to be smarter about the interval at which the validation is done. At the very least the interval should be configurable and linked to the power mode, so that clients can choose less frequent base node polling
Output Validation:
- The output validation will set a pending output to be spendable and not encumbered without taking into account the confirmation time
- The get_balance call is still using the `tx_id` field and the PendingTransactionOutputs encumbrance mechanism
- Cancellation also uses the deprecated `tx_id` field
- Output validation is performed for all outputs, including CancelledInbound, which is a DoS vector: you could send a wallet many transactions and then cancel them, and the wallet will now include them in all batch validations.
- Seems like we should remove the pending_transaction_outputs table completely and rather make a new status for pending coinbases
- Should remove the concept of Invalid outputs, because they are now either spent, unspent or encumbered (a rough sketch of the resulting status set follows after this comment)
Transaction Validation:
- Should create and emit events for when reorgs occur that can cause a transaction's state to change
- If there are no active transactions we should reduce the frequency of these validations significantly
- PR doesn't invalidate imported Faux transactions when the imported output is invalidated
It is a WIP PR, but lots of cleanup is needed.
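To make the status suggestion above concrete, here is a minimal sketch of what the output status set could look like without the pending_transaction_outputs table and without an Invalid state. The variant names are illustrative assumptions, not the ones used in the wallet code.

```rust
// Illustrative only: a hypothetical status set for wallet outputs if the
// PendingTransactionOutputs mechanism and the Invalid state were removed.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum SketchOutputStatus {
    /// Mined and available for spending.
    Unspent,
    /// Mined and already spent.
    Spent,
    /// Reserved as an input or change output of an in-flight transaction.
    Encumbered,
    /// A coinbase the wallet mined that has not yet matured/confirmed.
    PendingCoinbase,
}

fn main() {
    // An output that fails validation would simply return to one of the
    // remaining states (e.g. back to Unspent after a reorg) instead of Invalid.
    let status = SketchOutputStatus::PendingCoinbase;
    println!("example status: {:?}", status);
}
```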
/// This method is called when a pending transaction is to be confirmed. It must move the `outputs_to_be_spent` and
/// `outputs_to_be_received` from a `PendingTransactionOutputs` record into the `unspent_outputs` and
/// `spent_outputs` collections.
fn confirm_transaction(&self, tx_id: TxId) -> Result<(), OutputManagerStorageError>;
fn confirm_transaction_encumberance(&self, tx_id: TxId) -> Result<(), OutputManagerStorageError>;
Nit: I don't really agree with the change to this name. Confirming the transaction actually removes the encumbrance.
The idea here was to distinguish it from 'Mined confirmed'. I think it was originally used when a transaction was confirmed, but now I need to have the mined_height and hash to confirm it. Let me have a look and see if I can add it.
.await
.for_protocol(self.operation_id)?;

if num_confirmations >= self.config.num_confirmations_required {
This feels like a weird Event to use for a lost coinbase that is actually not mined, perhaps a new kind of event for this case?
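If a dedicated event were added for this case, it might look roughly like the sketch below; the enum and the `CoinbaseAbandoned` variant are hypothetical stand-ins, not the wallet's actual TransactionEvent type.

```rust
// Hypothetical stand-in for the wallet's transaction event type, showing how a
// dedicated variant could be emitted for a coinbase that never made it into
// the chain instead of reusing an unrelated event.
#[derive(Debug)]
enum SketchTransactionEvent {
    TransactionMinedUnconfirmed { tx_id: u64, num_confirmations: u64 },
    /// The coinbase the wallet was expecting is no longer part of the chain.
    CoinbaseAbandoned { tx_id: u64 },
}

fn publish(event: SketchTransactionEvent) {
    // In the wallet this would go out over the event broadcast channel.
    println!("publishing {:?}", event);
}

fn main() {
    publish(SketchTransactionEvent::CoinbaseAbandoned { tx_id: 122 });
}
```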
.set_completed_transaction_validity(tx_id, validity)
.await
.map(|_| TransactionServiceResponse::CompletedTransactionValidityChanged),
/* TransactionServiceRequest::SetCompletedTransactionValidity(tx_id, validity) => self
This is used by the output manager to be able to set the validity of imported transactions if the output becomes invalid. This is needed because the faux transactions we create to represent imported UTXOs are not checked for validity, but they should be marked as invalid if the output itself is invalidated.
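As a rough illustration of that flow, with hypothetical types standing in for the real output manager and transaction service handles, the output manager would call back into the transaction service when it invalidates an imported output:

```rust
// Sketch of the described flow; the types and method here are stand-ins, not
// the wallet's real service handles.
struct SketchTransactionService;

impl SketchTransactionService {
    // Stand-in for the SetCompletedTransactionValidity request.
    fn set_completed_transaction_validity(&self, tx_id: u64, valid: bool) {
        println!("marking faux transaction {} valid={}", tx_id, valid);
    }
}

fn on_imported_output_invalidated(ts: &SketchTransactionService, received_in_tx_id: u64) {
    // Faux transactions are never validated directly, so when the underlying
    // imported output becomes invalid the transaction record is updated here.
    ts.set_completed_transaction_validity(received_in_tx_id, false);
}

fn main() {
    on_imported_output_invalidated(&SketchTransactionService, 42);
}
```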
@@ -276,9 +275,22 @@ where
JoinHandle<Result<u64, TransactionServiceProtocolError>>,
> = FuturesUnordered::new();

// Should probably be block time or at least configurable
Should take into account the power mode timeout that is currently set. These timeouts allow the wallet client to set a much lower interval to reduce polling frequency and save power.
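A minimal sketch of tying the polling interval to the power mode; the mode names and durations are placeholder assumptions, not the wallet's actual configuration values.

```rust
use std::time::Duration;

// Placeholder power modes; the wallet has its own power mode and config values.
enum SketchPowerMode {
    Normal,
    Low,
}

// Pick a validation polling interval based on the current power mode instead
// of a fixed constant, so mobile clients can poll the base node less often.
fn validation_interval(mode: &SketchPowerMode) -> Duration {
    match mode {
        SketchPowerMode::Normal => Duration::from_secs(2 * 60),
        SketchPowerMode::Low => Duration::from_secs(30 * 60),
    }
}

fn main() {
    let interval = validation_interval(&SketchPowerMode::Low);
    println!("next validation in {:?}", interval);
}
```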
) -> Result<(), OutputManagerProtocolError> {
    let unmined_outputs = self
        .db
        .fetch_unmined_received_outputs()
This doesn't seem correct to me. This function gets all outputs that have `mined_in_block` as null, but doesn't this include outputs that are being spent, not just received ones? Below, if these outputs are mined, `update_output_as_mined` is called on them, which gives them the Unspent status; that seems wrong because they were outgoing funds. I see that the next stage, `update_spent_outputs`, will correct this, but the database state at this stage is still incorrect.
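One way to keep outgoing funds out of this step would be to filter on the output's status as well as the missing `mined_in_block`. The sketch below does this over an in-memory collection with made-up field and variant names; the real code would do it in the database query.

```rust
// Illustrative in-memory filter; the actual wallet queries the database, and
// the field and variant names here are stand-ins.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum SketchStatus {
    EncumberedToBeReceived,
    EncumberedToBeSpent,
}

#[derive(Debug)]
struct SketchOutput {
    mined_in_block: Option<Vec<u8>>,
    status: SketchStatus,
}

// Only outputs that are both unmined and incoming are treated as "unmined
// received", so outgoing funds are never promoted to Unspent in this step.
fn unmined_received(outputs: &[SketchOutput]) -> Vec<&SketchOutput> {
    outputs
        .iter()
        .filter(|o| o.mined_in_block.is_none() && o.status == SketchStatus::EncumberedToBeReceived)
        .collect()
}

fn main() {
    let outputs = vec![
        SketchOutput { mined_in_block: None, status: SketchStatus::EncumberedToBeReceived },
        SketchOutput { mined_in_block: None, status: SketchStatus::EncumberedToBeSpent },
    ];
    println!("{} output(s) to check", unmined_received(&outputs).len());
}
```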
fn mark_output_as_unspent(&self, hash: Vec<u8>) -> Result<(), OutputManagerStorageError> {
    let conn = self.database_connection.acquire_lock();
    // Only allow updating of non-deleted utxos
Not sure what this comment means?
Looks good so far. I tried at the beginning to mark all the commented-out and unimplemented pieces so that we don't forget about them, but there were still too many, so I stopped halfway through. Those comments can be disregarded.
let deleted_bitmap = self
    .db
    .fetch_complete_deleted_bitmap_at(metadata.best_block().clone())
This is a note: this can be an expensive call to run if the agent making this request keeps asking for a header way back in the past.
// }
// });

let utxo_validation = TxoValidationTaskV2::new(
Nit: I would keep this as `TxoValidationTask`, and only start using a `TxoValidationTaskV2` when we need two versions at the same time, for example around something akin to a hard fork. If we replace one with the other, I would just replace it and keep the name of the replaced version, in this case `TxoValidationTask`.
mined_mmr_position: Option<i64>,
marked_deleted_at_height: Option<i64>,
marked_deleted_in_block: Option<Vec<u8>>,
received_in_tx_id: Option<i64>,
Should this not just be an i64 rather than an Option? You should always receive an output before you have it, so you should receive it in some transaction. Or is this for faucet and imported outputs? If that's the case, can't we add a special receive id for outputs that don't have a matching transaction id?
@@ -276,9 +275,22 @@ where
JoinHandle<Result<u64, TransactionServiceProtocolError>>,
> = FuturesUnordered::new();

// Should probably be block time or at least configurable
Yeah, a time faster than the block time does not make sense.
join_handles: &mut FuturesUnordered<JoinHandle<Result<u64, TransactionServiceProtocolError>>>,
) -> Result<u64, TransactionServiceError> {
    if self.base_node_public_key.is_none() {
        return Err(TransactionServiceError::NoBaseNodeKeysProvided);
    }
    trace!(target: LOG_TARGET, "Starting transaction validation protocols");
    let id = OsRng.next_u64();
    let timeout = match self.power_mode {
    let _timeout = match self.power_mode {
remove?
if deleted_bitmap_response
    .not_deleted_positions
    .contains(&output.mined_mmr_position.unwrap()) &&
    output.marked_deleted_at_height.is_some()
The word 'deleted' here makes this difficult to follow. Am I correct in saying that in this case the base node said this UTXO is still unspent, it has a mined MMR position, and the wallet thinks the UTXO was spent at some height?
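If that reading is correct, the branch handles a spend that was reorged out: the wallet recorded the output as spent at some height, but the base node's deleted bitmap says the position is not deleted, so the output should go back to unspent. A rough sketch of that decision with hypothetical field names:

```rust
// Hypothetical shapes standing in for the wallet output record and the base
// node's deleted-bitmap response; only the branching logic is of interest.
struct SketchOutput {
    mined_mmr_position: Option<u64>,
    marked_deleted_at_height: Option<u64>,
}

struct SketchDeletedBitmapResponse {
    not_deleted_positions: Vec<u64>,
}

// Returns true when the base node reports the position as not deleted while
// the wallet previously recorded the output as spent: the spend was reorged
// out and the output should be marked unspent again.
fn spend_was_reorged_out(output: &SketchOutput, resp: &SketchDeletedBitmapResponse) -> bool {
    match output.mined_mmr_position {
        Some(pos) => {
            resp.not_deleted_positions.contains(&pos) && output.marked_deleted_at_height.is_some()
        },
        None => false,
    }
}

fn main() {
    let output = SketchOutput { mined_mmr_position: Some(7), marked_deleted_at_height: Some(100) };
    let resp = SketchDeletedBitmapResponse { not_deleted_positions: vec![7] };
    println!("reorged spend: {}", spend_was_reorged_out(&output, &resp));
}
```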
.for_protocol(self.operation_id)?;

for output in batch {
    if deleted_bitmap_response
This `if` checks whether the base node said this UTXO was mined at some MMR position but has since been spent.
    }
}

loop {
Won't it be more efficient to rather do this in a single loop? We ask for both the mined and unmined outputs from the db, then ask for the block that is furthest back, and repeat this until we find a height that the base node still has.
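A sketch of that suggested single loop, assuming hypothetical helpers for fetching the block heights the wallet's outputs reference and for asking the base node whether it still has a header at a given height:

```rust
// Sketch only: the db and base-node calls below are hypothetical stand-ins for
// the wallet's real async storage and RPC interfaces.
fn wallet_block_heights_furthest_first() -> Vec<u64> {
    // Heights referenced by both mined and unmined outputs, furthest back first.
    vec![900, 1200, 1500]
}

fn base_node_has_header_at(height: u64) -> bool {
    // Stand-in for an RPC call asking the base node for the header at `height`.
    height >= 1000
}

// Walk from the furthest-back referenced height towards the tip until the base
// node still knows the block; outputs referencing earlier blocks need revalidation.
fn find_common_height() -> Option<u64> {
    for height in wallet_block_heights_furthest_first() {
        if base_node_has_header_at(height) {
            return Some(height);
        }
    }
    None
}

fn main() {
    println!("first height still on the base node: {:?}", find_common_height());
}
```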
event_publisher: TransactionEventSender,
) -> Self {
    Self {
        operation_id: 122, // Get a real tx id
What is this used for, tracking? If so, can't we just use a random number here?
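If it is only used to correlate log lines, the same pattern the transaction service already uses elsewhere (`OsRng.next_u64()`, as in an earlier hunk) would do; a minimal sketch, assuming the rand crate the wallet already depends on:

```rust
use rand::{rngs::OsRng, RngCore};

fn main() {
    // A random operation id is enough if the value is only used to correlate
    // log lines for one validation run, rather than the hard-coded 122.
    let operation_id = OsRng.next_u64();
    println!("operation_id = {}", operation_id);
}
```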
@@ -1012,7 +1012,7 @@ impl InboundTransactionSql {

if num_updated == 0 {
    return Err(TransactionStorageError::UnexpectedResult(
        "Database update error".to_string(),
        "Updating inbound transactions failed. No rows were affected".to_string(),
Why not use `num_rows_affected_or_not_found` here?
This review focussed on system-level behaviour.
General observations are listed below:
- Mini stress test to three receiving wallets of 2000 transactions each:
  - All transactions completed successfully in the receiving wallets
  - All transactions detected as mined confirmed in the receiving wallets

Update 2021/08/19 14:34

- Mini stress test from three sending wallets of 2000 transactions each:
  - All transactions completed successfully in the sending wallets
  - All transactions detected as mined confirmed in the sending wallets
Noteworthy findings are listed below:
- Some transaction monitoring process steps are repeated unnecessarily after successful submission to the unconfirmed mempool, draining resources:
  - Updated transaction ____ as unmined
  - Wallet Event Monitor received wallet transaction service event TransactionBroadcast
  - Updating transaction ____ as mined
  - Wallet Event Monitor received wallet transaction service event TransactionMined
- The wallet consumes a lot of CPU resources, more than one full-time CPU thread, when apparently doing nothing; this was noted to continue long after many transactions had been received.

Update 2021/08/19 14:34
- I could not pick up any transaction events in the log files, i.e. messages containing `event for tx_id`. It seems the transaction service event stream is broken or the publishing of events is unreachable in the code (`async fn update_transaction_as_mined(`).
- The log message `trace!(target: LOG_TARGET, "Transaction Validation protocol has ended with result {:?}", join_result);` obfuscates warnings and errors; those should be logged as warnings or errors (see the sketch after this list):
  - TRACE Transaction Validation protocol has ended with result Ok(Ok(122))
  - TRACE Transaction Validation protocol has ended with result Ok(Err(TransactionServiceProtocolError { id: 122, error: ConnectivityError { source: ConnectionFailed(ConnectFailedMaximumAttemptsReached) } }))
  - TRACE Transaction Validation protocol has ended with result Ok(Err(TransactionServiceProtocolError { id: 122, error: RpcError(HandshakeError(TimedOut)) }))
- The console wallet transaction statuses are not always updated, e.g. it shows completed when the log shows it is mined.
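One way to surface those failures, sketched with the `log` crate and a simplified result type in place of the real protocol error (these are assumptions, not the wallet's actual code):

```rust
use log::{error, info};

const LOG_TARGET: &str = "wallet::transaction_service";

// Simplified stand-in for the nested Result the spawned validation task returns.
type SketchJoinResult = Result<u64, String>;

// Log successes at info level but surface failures as errors, instead of
// hiding both behind a single trace! call.
fn log_validation_result(join_result: SketchJoinResult) {
    match join_result {
        Ok(id) => info!(target: LOG_TARGET, "Transaction validation protocol {} completed", id),
        Err(e) => error!(target: LOG_TARGET, "Transaction validation protocol failed: {}", e),
    }
}

fn main() {
    // A logger backend (e.g. log4rs, as the wallet uses) is assumed to be
    // initialised elsewhere; without one these calls are silent no-ops.
    log_validation_result(Ok(122));
    log_validation_result(Err("RpcError(HandshakeError(TimedOut))".to_string()));
}
```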
- Removed OpenSSL installers and dependencies from Windows runtime - Removed stdout as appender from console wallet's log4rs logger
Description --- Exit command didn't exit the cli loop. Rustyline was holding up a tokio thread - used a blocking thread instead Motivation and Context --- Bug How Has This Been Tested? --- Manually on base node
…3244) Description --- Add status output to logs in non-interactive mode Motivation and Context --- Base node in non-interactive mode was logging status How Has This Been Tested? --- Manually run base node in non-interactive mode
Description --- Add a github action to check PR titles follow conventional commit spec Motivation and Context --- Automate checks How Has This Been Tested? --- github actions
Description --- - Removed OpenSSL installers and dependencies from Windows runtime - Removed stdout as appender from console wallet's log4rs logger Motivation and Context --- - OpenSSL is not a Windows runtime dependency anymore - The console wallet cannot handle log4rs logger messages to its stdout as it breaks the TUI How Has This Been Tested? --- Build the installer, ran an installer, ran all the executable components
Description --- Add a new tab in the console wallet. In the new tab is a stdout log from the wallet. The stdout is redirected to a new logfile called "stdout.log" I've added also some coloring, it's much easier to read the log file with some simple colors. Motivation and Context --- Easier to debug the wallet. How Has This Been Tested? --- Manually.
## Description Add support for forcing sync to a seed node. Specify index for the node from peer_seeds list. ## How Has This Been Tested? Manually.
## Description Add cucumber test that tests forced sync to single node. ## How Has This Been Tested? npm test -- --name "Force sync many nodes agains one peer"
Description Added multiple wallet recovery from peer Motivation and Context Additional tests How Has This Been Tested? Manually (npm test -- --name "Multiple Wallet recovery from seed node")
## Description - Use the GRPC client connection given `WalletProcess` - Outputs error details in webhook in recovery cron job - Adds very basic mocha tests ## Motivation and Context Recovery daily is failing even though it succeeds.
…reate` (tari-project#3249) Description --- ### Note: This is a breaking change to LibWallet FFI. Currently, if a wallet recovery was in progress and the wallet was shut down, the next time that wallet is started by an FFI client using the `wallet_create` method there is no way for the FFI client to know that the recovery should be continued. The wallet is able to resume the recovery from where it left off, and it should, so as not to lose funds, but the FFI client must restart the recovery process with the same seed words. The FFI client has to do the restarting so that it can provide the callback through which the process is monitored. Furthermore, the wallet does not respond to P2P transaction negotiation messages if a recovery process is in progress, so it is important that an FFI client completes any outstanding recoveries ASAP. How Has This Been Tested? --- Untested in the backend.
Description --- Fixed the base_node_service_config not being initialized with values from the config file. Motivation and Context --- See above How Has This Been Tested? --- System level testing
alternate approach to transaction validation (#3191)
feat!: integration of new transaction and output validation (#3352)
feat: update coinbase handling for new tx and output validation (#3383)
refactor: use wallet connectivity in wallet services (#3391)
refactor: mapping for deleted mmr position to height/hash for perf (#3394)
feat: Fix reinstating CancelledInbound outputs and fix tests (#3400)
chore: merge development into tx-validation2 and fix issues (#3417)
Uncomment cucumber tests