services/horizon/internal/db2/history: Implement account loader and future account ids #5015
Conversation
65ee6e4 to 6853ee0 (compare)
for address := range a.set {
    addresses = append(addresses, address)
}
// sort entries before inserting rows to prevent deadlocks on acquiring a ShareLock
Is there a prior issue that explains this aspect further? Since ingestion is single threaded and all of this happens in the same db tx, I'm trying to understand where a deadlock could happen.
Ingestion is single threaded. However, it is possible to do parallel reingestion with multiple workers, where each worker has a separate but concurrent transaction.
Ok, then there is potential for the same accounts to be processed from different ledger ranges by different worker threads at the same time. How does sorting avoid a db deadlock in that case? I'm not suggesting removing it, just trying to understand. It sticks out in the application code as seemingly unrelated complexity that I would have expected to be mitigated at the db level with repeatable read transaction isolation.
Let's say there are two workers ingesting different ledger ranges, and in both ledger ranges the workers have to insert the same set of accounts into the history_accounts table. If the workers insert the accounts in the same order they will avoid a deadlock, because whichever worker wins the race acquires the lock and the other worker blocks until that transaction completes. Now consider the worst case scenario: worker 1 inserts accounts A, B, C and worker 2 inserts accounts C, B, A. Say worker 1 is faster, so it inserts accounts A and B while worker 2 inserts account C. When worker 1 tries to insert account C it blocks, because worker 2 already holds a lock on that row; when worker 2 then tries to insert account B, it blocks on the lock worker 1 holds. Each transaction is now waiting on the other, which is a deadlock.
I don't think changing the transaction isolation level could avoid the deadlock
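The lock-ordering argument above can be sketched in Go. This is a minimal illustration, not Horizon's actual code: `sortedAddresses` is a hypothetical helper showing that once the address list is sorted, every worker requests the history_accounts row locks in the same global order, so a slower worker simply blocks behind a faster one instead of the two waiting on each other.

```go
package main

import (
	"fmt"
	"sort"
)

// sortedAddresses collects the registered addresses and sorts them so
// that concurrent workers inserting the same accounts acquire row locks
// in a consistent global order (illustrative helper, not the PR's code).
func sortedAddresses(set map[string]struct{}) []string {
	addresses := make([]string, 0, len(set))
	for address := range set {
		addresses = append(addresses, address)
	}
	// sort entries before inserting rows to prevent deadlocks on
	// acquiring a ShareLock
	sort.Strings(addresses)
	return addresses
}

func main() {
	set := map[string]struct{}{"GC": {}, "GA": {}, "GB": {}}
	fmt.Println(sortedAddresses(set)) // [GA GB GC]
}
```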
// GetNow should only be called on values which were registered by
// GetFuture() calls. Also, Exec() must be called before any GetNow
// call can succeed.
func (a AccountLoader) GetNow(address string) (int64, error) {
Could this follow the map interface and return a bool indicating presence, rather than an error?
If the loader is used correctly then the absence of an account is always an error. You should never look up an address which you have never inserted.
I changed this so that GetNow() panics instead of returning an error.
insert := 0
for _, address := range addresses {
    if _, ok := a.ids[address]; ok {
Can you apply this filter, which skips `set` addresses that are already resolved in `ids`, up in the loop that builds the query on line 100, to avoid going through db i/o? Nvm, I don't think there could be a case where `set` initially overlaps with any keys in `ids` at the start of Exec.
if insert == 0 {
    return nil
}
addresses = addresses[:insert]
Neat, this re-uses the same array in place.
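The in-place idiom being praised here can be shown in isolation. `filterUnresolved` is a hypothetical stand-in for the snippet above: entries already resolved in `ids` are skipped, and the survivors are compacted into the front of the same backing array, so no second slice is allocated.

```go
package main

import "fmt"

// filterUnresolved drops addresses that already have a resolved id,
// reusing the input slice's backing array in place (illustrative helper,
// not the PR's actual function).
func filterUnresolved(addresses []string, ids map[string]int64) []string {
	insert := 0
	for _, address := range addresses {
		if _, ok := ids[address]; ok {
			continue // already resolved, no db insert needed
		}
		addresses[insert] = address
		insert++
	}
	if insert == 0 {
		return nil
	}
	return addresses[:insert]
}

func main() {
	ids := map[string]int64{"B": 2}
	fmt.Println(filterUnresolved([]string{"A", "B", "C"}, ids)) // [A C]
}
```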
sql := ` |
It would be compelling, at some point when we have time, to evaluate sql options that provide compile-time type safety on sql statements, with fewer embedded string fragments and less concatenation, like go-jet. It may not even be viable for our setup, but it's worth keeping in mind.
return err
}

return a.lookupKeys(ctx, q, addresses)
Should `AccountLoader.set` be cleared at this point, since those 'future' requests have now been realized into `AccountLoader.ids`?
I intended that the account loader should only be used once and not be reused. I will add some code to enforce that intention.
it looks like in example code
Great design!
We will actually want to orchestrate the ordering, because in parallel reingestion we want the account loader to execute in a separate db transaction from the rest of ingestion. The reason is that parallel workers are likely to have their db transactions block on insertions into the history_accounts table (e.g. when multiple workers try to insert the same row). So if we isolate the call to
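The two-transaction orchestration described here could look roughly like the following sketch. The `DB`/`Tx` interfaces, the fakes, and `ingestLedgerRange` are illustrative stand-ins (Horizon's real session types differ); the point is only the ordering: commit the loader's transaction first, releasing the history_accounts row locks, before the main ingestion transaction begins.

```go
package main

import "fmt"

// Tx and DB abstract the small slice of a db session this sketch needs.
// These are illustrative stand-ins, not Horizon's actual types.
type Tx interface {
	Commit() error
	Rollback() error
}

type DB interface {
	Begin() (Tx, error)
}

// ingestLedgerRange runs the account loader in its own short-lived
// transaction and commits it before starting the main ingestion
// transaction, so concurrent workers only contend briefly on
// history_accounts row locks.
func ingestLedgerRange(db DB, loaderExec, ingest func(Tx) error) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	if err := loaderExec(tx); err != nil {
		tx.Rollback()
		return err
	}
	if err := tx.Commit(); err != nil { // row locks released here
		return err
	}
	tx, err = db.Begin()
	if err != nil {
		return err
	}
	if err := ingest(tx); err != nil {
		tx.Rollback()
		return err
	}
	return tx.Commit()
}

// fakeTx / fakeDB record the call order so main can demonstrate the
// orchestration without a real database.
type fakeTx struct{ log *[]string }

func (t fakeTx) Commit() error   { *t.log = append(*t.log, "commit"); return nil }
func (t fakeTx) Rollback() error { *t.log = append(*t.log, "rollback"); return nil }

type fakeDB struct{ log *[]string }

func (d fakeDB) Begin() (Tx, error) { *d.log = append(*d.log, "begin"); return fakeTx{d.log}, nil }

func main() {
	var log []string
	ingestLedgerRange(fakeDB{&log},
		func(Tx) error { log = append(log, "loader.Exec"); return nil },
		func(Tx) error { log = append(log, "ingest"); return nil },
	)
	fmt.Println(log) // [begin loader.Exec commit begin ingest commit]
}
```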
What

Some of the history tables have columns referencing accounts. However, these columns do not include the text representation of an account address. Instead, they include an integer id which points to the account address in the `history_accounts` table. When we ingest into history tables that have account integer id columns, we first have to look up the integer id from the `history_accounts` table, and then we can construct and insert the row into the history table.

This PR introduces the concept of an account loader, which encapsulates the management of account integer ids. Using the account loader we can more easily refactor the ingestion data flow. Whenever the history processor encounters an account string in `ProcessTransaction()`, the processor can register the account string in the loader component and store the resulting `FutureAccountID` in a `FastBatchInsertBuilder`. We can ensure that `Exec()` is called on the account loader before it is called on the `FastBatchInsertBuilder`.

Why
This PR is required to implement #4909
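As a rough sketch of the data flow described above (a toy model, not the PR's implementation: `Exec` here assigns fake ids in sorted order instead of inserting into and selecting from `history_accounts`, and the method signatures are simplified):

```go
package main

import (
	"fmt"
	"sort"
)

// AccountLoader collects addresses whose integer ids will be resolved
// in one batch by Exec (toy model of the PR's loader).
type AccountLoader struct {
	set map[string]struct{}
	ids map[string]int64
}

// FutureAccountID is a placeholder for an id that Exec has not yet resolved.
type FutureAccountID struct{ address string }

func NewAccountLoader() *AccountLoader {
	return &AccountLoader{set: map[string]struct{}{}, ids: map[string]int64{}}
}

// GetFuture registers an address and returns a placeholder for its id.
func (a *AccountLoader) GetFuture(address string) FutureAccountID {
	a.set[address] = struct{}{}
	return FutureAccountID{address}
}

// Exec resolves every registered address. The real loader inserts into
// and queries history_accounts; here we fake the id assignment.
func (a *AccountLoader) Exec() {
	addresses := make([]string, 0, len(a.set))
	for address := range a.set {
		addresses = append(addresses, address)
	}
	sort.Strings(addresses) // consistent insert order, per the deadlock discussion
	for i, address := range addresses {
		a.ids[address] = int64(i + 1)
	}
}

// GetNow panics if the address was never registered via GetFuture,
// matching the "misuse is a programming error" decision in the review.
func (a *AccountLoader) GetNow(address string) int64 {
	id, ok := a.ids[address]
	if !ok {
		panic("account " + address + " was not registered before Exec")
	}
	return id
}

func main() {
	loader := NewAccountLoader()
	future := loader.GetFuture("GABC")
	loader.Exec() // must run before any GetNow call
	fmt.Println(loader.GetNow(future.address)) // 1
}
```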
Known limitations
[N/A]