[merkledb benchmark] implement simple write profile benchmark #3372
base: master
Conversation
x/merkledb/benchmarks/benchmark.go (outdated)

```go
)

func getMerkleDBConfig(promRegistry prometheus.Registerer) merkledb.Config {
	const defaultHistoryLength = 300
```
Let's use the same number as firewood, 120: https://github.com/ava-labs/firewood/blob/main/firewood/src/manager.rs#L21
done.
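For reference, a minimal sketch of the config after that change, assuming only the field names quoted in the diff above; the remaining fields and how the registerer is wired in are omitted:

```go
package benchmarks

import (
	"github.com/ava-labs/avalanchego/utils/units"
	"github.com/ava-labs/avalanchego/x/merkledb"
	"github.com/prometheus/client_golang/prometheus"
)

// getMerkleDBConfig builds the benchmark's merkledb config. Only the fields
// quoted in the diff above are shown; the prometheus registerer and the
// remaining cache settings are wired up in the real benchmark.
func getMerkleDBConfig(promRegistry prometheus.Registerer) merkledb.Config {
	const defaultHistoryLength = 120 // match firewood's default instead of 300
	_ = promRegistry
	return merkledb.Config{
		Hasher:             merkledb.DefaultHasher,
		RootGenConcurrency: 0,
		HistoryLength:      defaultHistoryLength,
		ValueNodeCacheSize: units.MiB,
	}
}
```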
```go
	Hasher:             merkledb.DefaultHasher,
	RootGenConcurrency: 0,
	HistoryLength:      defaultHistoryLength,
	ValueNodeCacheSize: units.MiB,
```
These seem really small. If I am reading this correctly, there is only about 2 MiB of total cache?
I attempted to tweak this value, but it had no performance impact.
```go
}

fmt.Printf("Initializing database.")
ticksCh := make(chan interface{})
```
nit: do you really need this ticker? Might be easier to report every 100k rows or something.
Reporting every 100k rows doesn't work nicely because of the batch writing (which blocks for a long time).
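For context, a rough sketch of the ticker pattern under discussion; the channel and counter names here are illustrative, not the PR's actual identifiers:

```go
package benchmarks

import (
	"fmt"
	"sync/atomic"
	"time"
)

// reportProgress prints the number of initialized entries once per second.
// Because it runs in its own goroutine, it keeps reporting even while a
// large batch Write() blocks the insertion loop.
func reportProgress(inserted *atomic.Uint64, done <-chan struct{}) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			fmt.Printf("initialized %d entries\n", inserted.Load())
		case <-done:
			return
		}
	}
}
```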
x/merkledb/benchmarks/benchmark.go (outdated)

```go
const (
	defaultDatabaseEntries    = 2000000
	databaseCreationBatchSize = 1000000
```
Batch size is supposed to be 10k. This is 1M.
done.
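A minimal sketch of what the 10k creation batch size looks like in use, assuming calculateIndexEncoding returns an 8-byte key (the helper exists in the PR, but its exact encoding is an assumption here):

```go
package benchmarks

import (
	"encoding/binary"

	"github.com/ava-labs/avalanchego/database"
)

const (
	defaultDatabaseEntries    = 2_000_000
	databaseCreationBatchSize = 10_000 // was 1_000_000 in the outdated diff
)

// calculateIndexEncoding mirrors the helper referenced in the diff; the
// 8-byte big-endian encoding is an assumption for this sketch.
func calculateIndexEncoding(i uint64) []byte {
	b := make([]byte, 8)
	binary.BigEndian.PutUint64(b, i)
	return b
}

// populate fills the database in batches of databaseCreationBatchSize entries.
func populate(db database.Database, entries uint64) error {
	for low := uint64(0); low < entries; low += databaseCreationBatchSize {
		batch := db.NewBatch()
		for i := low; i < low+databaseCreationBatchSize && i < entries; i++ {
			key := calculateIndexEncoding(i)
			if err := batch.Put(key, key); err != nil {
				return err
			}
		}
		if err := batch.Write(); err != nil {
			return err
		}
	}
	return nil
}
```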
```go
		return err
	}
}
deleteDuration = time.Since(startDeleteTime)
```
You can avoid all this math and just report the raw number of deletes. Grafana can convert this to a rate for you.
I've added both. I believe that my calculation would be more accurate, but let's have both for the time being.
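A sketch of the counter-based alternative mentioned above; the metric name is illustrative, and the Grafana-side derivation is shown as a comment:

```go
package benchmarks

import "github.com/prometheus/client_golang/prometheus"

// Raw counter: the Prometheus server (or Grafana) can derive the rate with
//   rate(merkledb_bench_entry_delete_count[1m])
// so the benchmark itself never has to do the division.
var deleteCount = prometheus.NewCounter(prometheus.CounterOpts{
	Namespace: "merkledb_bench",
	Name:      "entry_delete_count",
	Help:      "Total number of deleted entries",
})

// recordDeletes adds the number of deletes performed in one pass.
func recordDeletes(n int) {
	deleteCount.Add(float64(n))
}
```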
```go
deleteRate = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "merkledb_bench",
	Name:      "entry_delete_rate",
	Help:      "The rate at which elements are deleted",
})
updateRate = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "merkledb_bench",
	Name:      "entry_update_rate",
	Help:      "The rate at which elements are updated",
})
insertRate = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "merkledb_bench",
	Name:      "entry_insert_rate",
	Help:      "The rate at which elements are inserted",
})
batchWriteRate = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "merkledb_bench",
	Name:      "batch_write_rate",
	Help:      "The rate at which the batch was written",
})
```
We should not be calculating the rates in the benchmark. The Prometheus server should do this based on the counts.
Suggested change: remove the deleteRate, updateRate, insertRate, and batchWriteRate gauge definitions.
I believe that it won't generate accurate results, since we're mixing batch writing and puts in the same sequence. I've included both the counter and the rate metrics so that we can get both numbers in Grafana.
x/merkledb/benchmarks/benchmark.go (outdated)

```go
err = mdb.Close()
if err != nil {
	fmt.Fprintf(os.Stderr, "unable to close levelDB database : %v\n", err)
	return err
}
err = levelDB.Close()
if err != nil {
	fmt.Fprintf(os.Stderr, "unable to close merkleDB database : %v\n", err)
	return err
}
```
The log messages seem inverted here.
fixed.
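For clarity, the corrected pairing of close calls and messages looks roughly like this:

```go
// Corrected pairing: each error message now names the database actually
// being closed.
if err := mdb.Close(); err != nil {
	fmt.Fprintf(os.Stderr, "unable to close merkleDB database : %v\n", err)
	return err
}
if err := levelDB.Close(); err != nil {
	fmt.Fprintf(os.Stderr, "unable to close levelDB database : %v\n", err)
	return err
}
```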
```go
	ValueNodeCacheSize:        units.MiB,
	IntermediateNodeCacheSize: 1024 * units.MiB,
```
How much memory are we using? It feels like we could probably increase these.
I've attempted to tweak these, but haven't seen any concrete gains. Adjusting the LevelDB config was helpful, though.
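Since the reply says tuning the LevelDB config was what helped, here is a rough sketch of the kind of knobs involved, written against goleveldb's options directly; whether the benchmark goes through avalanchego's leveldb wrapper, and the specific values, are assumptions:

```go
package benchmarks

import (
	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
)

// openTunedLevelDB opens the backing store with larger-than-default caches
// and write buffer. The values are illustrative, not the ones from the PR.
func openTunedLevelDB(path string) (*leveldb.DB, error) {
	return leveldb.OpenFile(path, &opt.Options{
		BlockCacheCapacity:     512 * opt.MiB,
		WriteBuffer:            128 * opt.MiB,
		OpenFilesCacheCapacity: 1024,
	})
}
```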
x/merkledb/benchmarks/benchmark.go (outdated)

```go
startUpdateTime := time.Now()
for keyToUpdateIdx := low + ((*databaseEntries) / 2); keyToUpdateIdx < low+((*databaseEntries)/2)+databaseRunningUpdateSize; keyToUpdateIdx++ {
	updateEntryKey := calculateIndexEncoding(keyToUpdateIdx)
	updateEntryValue := calculateIndexEncoding(keyToUpdateIdx - ((*databaseEntries) / 2))
```
This is incorrect; it should be:

```diff
- updateEntryValue := calculateIndexEncoding(keyToUpdateIdx - ((*databaseEntries) / 2))
+ updateEntryValue := calculateIndexEncoding(low)
```
Hmm, I think it's OK to use `low` as you suggested, although using the original expression would yield different and unique values (i.e. [low..low+5k]).
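To make the trade-off concrete, here is the same line annotated with the value ranges each variant produces (sketch; variable names follow the diff above):

```go
// keyToUpdateIdx ranges over [low+N/2, low+N/2+databaseRunningUpdateSize),
// where N = *databaseEntries.

// Original: unique values per key, covering [low, low+databaseRunningUpdateSize).
updateEntryValue := calculateIndexEncoding(keyToUpdateIdx - ((*databaseEntries) / 2))

// Reviewer's suggestion: the same value (the encoding of low) for every updated key.
// updateEntryValue := calculateIndexEncoding(low)

_ = updateEntryValue // written to the batch in the real loop
```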
```go
	levelDB.Close()
}()

low := uint64(0)
```
`low` never changes; it should be increased by 2.5k each pass.
good catch; fixed.
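A sketch of the acknowledged fix: advance `low` by the per-pass delete count (2.5k in the comment above); the constant name here is an assumption:

```go
low := uint64(0)
for {
	// ... delete [low, low+databaseRunningDeleteSize), then perform the
	// pass's inserts and updates ...

	// Previously low never changed; advancing it moves each pass onto a
	// fresh key range.
	low += databaseRunningDeleteSize // 2.5k per the review comment
}
```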
This PR has become stale because it has been open for 30 days with no activity.