This repository has been archived by the owner on Feb 21, 2018. It is now read-only.
Here is a comparison of leveldown's runtime on its benchmarks and tests (total wall clock time) between the V8 API implementation and the NAPI implementation.
This is using x64 release builds of Node.js and leveldown, running on:
Windows 10 10586
Intel Xeon E5-1620 @ 3.60 GHz
16 GB 1600 MHz DDR3 RAM
Kingston SHPM2280P2 240 GB SSD
Node.js and leveldown are built from these commits:
db-bench.js and write-random.js appear to perform equally well, while write-sorted.js appears to have become slightly slower. The test suite is taking significantly longer, over twice as long.
These are interesting results. They support our belief that performance suffers only when JavaScript code calls into native module code very frequently. I have not verified this, but I suspect the benchmarks are exercising LevelDOWN's internals and LevelDB itself, rather than LevelDOWN's API layer.
So in the case of the benchmarks the NAPI overhead appears insignificant relative to the workload LevelDB is handling, whereas in the case of the tests it is significant, presumably because the tests are focused on exercising the API that LevelDOWN exposes.
We currently know of two areas where our NAPI prototype has room to improve:
creating constructor functions (e.g. Database, Iterator, Batch) currently does not take advantage of V8's v8::FunctionTemplate optimization
throwing an exception with a simple text message is a chatty operation requiring three NAPI calls (napi_create_string, napi_create_error, napi_throw)
The next thing I will do is whip up an API for creating a constructor with methods using a v8::FunctionTemplate properly, and see how that changes these numbers. I expect this will make a large difference. After that I'll add an API to create and throw a new error from a text message in one call; I expect this to have a minor to nil effect on performance, but will try it since it will be easy and quick. Finally, if there is still a gap after that, I will profile to see where the time is being spent.
I will also get timing numbers for x86 release builds sometime this week.
…instead of napi_create_constructor_for_wrap. Passing the methods in together allows the V8 implementation to utilize v8::FunctionTemplate correctly and get the optimization benefit. It also makes the code much smaller and easier to read.
This has given a significant reduction in the test suite runtime. See
nodejs/api#25
Getting the API to use v8::FunctionTemplate improved the numbers significantly for the test suite. Benchmarks remained the same, including the ~5% slowdown in write-sorted.js.
Here is a comparison of leveldown's runtime on its benchmarks and tests (total wall clock time) between the V8 API implementation and the NAPI implementation.
This is using x64 release builds of node.js and leveldown, running on
Node.js and leveldown are built from these commits:
Each test was run three times.
Raw data here: https://gist.github.com/ianwjhalliday/236bdb53448a372536793580c0882197
Averaged results: