Run cpu instruction calibration on a variety of hardware #1020

Closed
jayz22 opened this issue Aug 28, 2023 · 8 comments · Fixed by #1105

jayz22 (Contributor) commented Aug 28, 2023

What

Calibrate the CPU instruction costs on a variety of hardware that validators run on.

Why

The metering model is deterministic across all nodes. The model is currently calibrated on a single machine (M1), which may differ from the actual hardware that validators use. This can make the actual compute time vary for the same number of CPU instructions, which could affect ledger close time. The network resource limits need to be set conservatively w.r.t. the worst case. We need to calibrate them on various hardware architectures in order to figure out the correct bounds.

graydon self-assigned this Sep 28, 2023

graydon (Contributor) commented Sep 28, 2023

I talked to @anupsdf about this and we reached two conclusions:

  1. The key question isn't how many "model CPU instructions" a contract takes, it's how much time, so we really will want to set our network limits by reasoning backwards from the observed virtual-instructions-per-unit-real-time value we see in the network, empirically, on the nodes we're running (which, as you mention, are likely to be different enough from our workstations to warrant empirical observation). This is fine, but it means that the actual instruction-count values are mostly irrelevant. They're just a term in an equation that we divide out to get the number we set the limit to. E.g. if the network says it's processing 20 virtual instructions per nanosecond and we want to limit contracts to 1ms, then we set the instruction limit to 20M instructions. But if it says 5 instructions per ns, we set the limit to 5M instructions. The actual "instructions number" doesn't matter when setting a "time target".
  2. That said, it's a little confusing to think about and might be misleading to users to see "virtual instruction counts" that are much higher than the (unknown but plausibly estimable) true instruction count for the machine they're on. So for the sake of not confusing people, it'd be good to calibrate the model instruction counts to their values as measured on x86-64 machines, because we expect most validators to be on that arch.

So... I'm going to take this and just run calibration on the x86-64 machine I have here. It doesn't matter what its clock frequency is; we're only talking about the instruction counts of the cost centers.
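To make the arithmetic in point 1 above concrete, here is a minimal Rust sketch (purely illustrative -- the function name, units, and values are assumptions, not soroban-env code) of how the limit is derived by dividing out the instruction count:

```rust
// Illustrative sketch only: derive a network instruction limit from an
// empirically observed rate and a wall-clock target. The model instruction
// count itself just divides out of the equation.

/// Given the observed virtual-instructions-per-nanosecond rate on validator
/// hardware and a per-contract time budget in nanoseconds, return the
/// instruction limit to configure.
fn instruction_limit(observed_insns_per_ns: f64, time_budget_ns: f64) -> u64 {
    (observed_insns_per_ns * time_budget_ns) as u64
}

fn main() {
    // 20 virtual insns/ns with a 1ms (1,000,000 ns) budget => 20M limit.
    assert_eq!(instruction_limit(20.0, 1_000_000.0), 20_000_000);
    // 5 virtual insns/ns with the same budget => 5M limit.
    assert_eq!(instruction_limit(5.0, 1_000_000.0), 5_000_000);
}
```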

graydon (Contributor) commented Oct 3, 2023

Some investigation and results here (I meant to discuss this with @jayz22 but I'll make a note here for future reference too):

  • x64 actually gives very incorrect-seeming numbers when I try to calibrate
  • I think part of what's happening is that for memcpy (and memcpy-like cost centers, of which there are a few) we pass large buffers through to rust's slice-copying code, which will engage a throughput-oriented fast path for memory copies -- using AVX2 instructions but with a relatively high setup/teardown overhead for the process.
  • I think we actually don't want to calibrate against this path at all; it's misleading, since almost all calls to memcpy-like cost centers will be for much smaller single-struct or small-buffer chunks of memory.
  • I think it might just make sense to define the cost for memcpy-like costs analytically, from reason, rather than from measurement. I think we can assume that a memory copy can move, for example, 8 bytes per instruction on a machine with 8-byte (64-bit) words, and then just define the cost of moving N bytes as N/8 instructions (or perhaps 1 + N/8 so we never charge zero); see the sketch after this list.
  • I also noticed while I was thinking about this and exploring the cost runners that there are multiple cost types that all probably cost the same thing. I think HostMemCpy, HostMemCmp, ValSet, ValDeser, MapEntry, VecEntry, VmMemRead and VmMemWrite should all logically be the same cost -- the cost of N bytes of main-memory access -- and we might consider merging them for simplicity.
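For illustration, a minimal Rust sketch of the `1 + N/8` analytical model from the fourth bullet (the helper name and constant are hypothetical, not the actual budget code):

```rust
// Hypothetical sketch of an "analytical" (from-reason) cost model for
// memcpy-like cost centers: assume one 8-byte word moved per instruction,
// plus 1 so we never charge zero.

const WORD_SIZE_BYTES: u64 = 8;

/// Analytical instruction cost for touching `n_bytes` of memory.
fn memcpy_like_cost(n_bytes: u64) -> u64 {
    1 + n_bytes / WORD_SIZE_BYTES
}

fn main() {
    assert_eq!(memcpy_like_cost(0), 1);      // never zero
    assert_eq!(memcpy_like_cost(64), 9);     // 1 + 64/8
    assert_eq!(memcpy_like_cost(4096), 513); // 1 + 4096/8
}
```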

jayz22 (Contributor, Author) commented Oct 4, 2023

Posting my calibration results on M1 and x86 (- M1, + x86; full outputs attached below):

-                cost_type     cpu_model_const_param     cpu_model_lin_param     mem_model_const_param     mem_model_lin_param
-             HostMemAlloc                      1123                       1                        16     128
-               HostMemCpy                        32                      24                         0     0
-               HostMemCmp                        24                      64                         0     0
-     DispatchHostFunction                       262                       0                         0     0
-              VisitObject                       158                       0                         0     0
-                   ValSer                       646                      66                        18     384
-                 ValDeser                      1127                      34                        16     128
-        ComputeSha256Hash                      2877                    4125                        40     0
-     ComputeEd25519PubKey                     25640                       0                         0     0
-                 MapEntry                        84                       0                         0     0
-                 VecEntry                        35                       0                         0     0
-         VerifyEd25519Sig                    400983                    2685                         0     0
-                VmMemRead                       182                      24                         0     0
-               VmMemWrite                       178                      25                         0     0
-          VmInstantiation                    916377                   68226                    129471     5080
-         InvokeVmFunction                      1128                       0                        14     0
-     ComputeKeccak256Hash                      2882                    3561                        40     0
- ComputeEcdsaSecp256k1Key                     37899                       0                         0     0
- ComputeEcdsaSecp256k1Sig                       224                       0                         0     0
- RecoverEcdsaSecp256k1Key                   1667731                       0                       201     0
-             Int256AddSub                      1714                       0                       119     0
-                Int256Mul                      2226                       0                       119     0
-                Int256Div                      2332                       0                       119     0
-                Int256Pow                      5223                       0                       119     0
-              Int256Shift                       415                       0                       119     0
-        ChaCha20DrawBytes                      4857                    2461                         0     0


+                cost_type     cpu_model_const_param     cpu_model_lin_param     mem_model_const_param     mem_model_lin_param
+             HostMemAlloc                       310                       0                        16     128
+               HostMemCpy                        52                       0                         0     0
+               HostMemCmp                        55                      36                         0     0
+     DispatchHostFunction                       239                       0                         0     0
+              VisitObject                        34                       0                         0     0
+                   ValSer                       564                       0                        18     384
+                 ValDeser                      1104                       0                        16     128
+        ComputeSha256Hash                      3943                    6812                        40     0
+     ComputeEd25519PubKey                     40356                       0                         0     0
+                 MapEntry                        55                       0                         0     0
+                 VecEntry                         0                       0                         0     0
+         VerifyEd25519Sig                    654651                    4288                         0     0
+                VmMemRead                       210                       0                         0     0
+               VmMemWrite                       209                       0                         0     0
+          VmInstantiation                    459816                   49469                    129471     5080
+         InvokeVmFunction                      1189                       0                        14     0
+     ComputeKeccak256Hash                      4076                    5962                        40     0
+ ComputeEcdsaSecp256k1Key                     58314                       0                         0     0
+ ComputeEcdsaSecp256k1Sig                       249                       0                         0     0
+ RecoverEcdsaSecp256k1Key                   2323402                       0                       181     0
+             Int256AddSub                      1620                       0                        99     0
+                Int256Mul                      2209                       0                        99     0
+                Int256Div                      2150                       0                        99     0
+                Int256Pow                      3925                       0                        99     0
+              Int256Shift                       379                       0                        99     0
+        ChaCha20DrawBytes                      2155                    1051                         0     0

The main differences are, as @graydon pointed out, that the memory-related operations appear to have constant (with a larger constant factor) costs on x86. I believe this is what you were referring to?

I think the analytical approach makes sense. I've noticed that some of those memory-related calibration results are pretty sensitive to the sample size (e.g. VecEntry #1051) and haven't found a good way to get around that.

Re: cost type consolidation, I think it makes sense to consolidate some of those types, especially the {host, vm} mem-cmp/cpy/read/write ones. I will look into it further.

output_m1.txt
output_x86.txt

(A bit of extra information: my x86 CPU is an Intel 2012Q2 model, with the AVX (not AVX2) extension.)

jayz22 (Contributor, Author) commented Oct 5, 2023

Re: cost type consolidation and using analytical models

  • HostMemCpy vs HostMemCmp: from what I understand (also from the calibrated results), memcmp requires loading values from two memory locations and comparing them (2 MOV + 1 CMP), while memcpy is logically just 1 MOV. So they should probably be two different analytical models, with the linear coefficient of memcmp being 3x larger (which also somewhat matches the calibration results)?
  • VmMemRead and VmMemWrite: I think these can be consolidated into HostMemCpy, since underneath they are just doing `copy_from_slice` (plus some small overhead of resolving the memory entity).
  • VecEntry and MapEntry: these two are just memory accesses and can probably be consolidated into HostMemCpy, although I'm not sure the coefficients should be the same, since there is a bit of extra container logic like index bounds checking. (Calibration numbers do not provide good guidance here; see "VecEntry contract cost parameters are zero" #1051.)
  • ValSer and ValDeser: these I'm least sure about. Logically they are also just doing memory copying; however, there can be a fair amount of overhead due to XDR structuring and recursion. Judging from the M1 results, at least the linear coefficients are comparable to HostMemCpy/Cmp.

This is a very crude analysis and is stretching my low-level knowledge a bit. @graydon, let me know what you think.
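To make the per-word reasoning in the first bullet concrete, here is a hedged Rust sketch (hypothetical names and types, not soroban-env code) of what a consolidated analytical model might look like, with memcmp charged roughly 3x per word relative to a plain copy:

```rust
// Hypothetical sketch of consolidating several memory-related cost centers
// onto one "N bytes of memory access" model. The per-word coefficients follow
// the reasoning above: 1 MOV per word for a copy, 2 MOV + 1 CMP for a compare.

const WORD_SIZE_BYTES: u64 = 8;

#[derive(Clone, Copy)]
enum MemCostKind {
    /// HostMemCpy, VmMemRead, VmMemWrite (and possibly VecEntry/MapEntry,
    /// modulo extra container logic): plain word moves.
    Copy,
    /// HostMemCmp: load two words and compare them.
    Compare,
}

fn mem_access_cost(kind: MemCostKind, n_bytes: u64) -> u64 {
    let insns_per_word = match kind {
        MemCostKind::Copy => 1,    // 1 MOV
        MemCostKind::Compare => 3, // 2 MOV + 1 CMP
    };
    1 + insns_per_word * (n_bytes / WORD_SIZE_BYTES)
}

fn main() {
    // For large N, a compare is charged roughly 3x a copy.
    assert_eq!(mem_access_cost(MemCostKind::Copy, 8000), 1001);
    assert_eq!(mem_access_cost(MemCostKind::Compare, 8000), 3001);
}
```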

jayz22 (Contributor, Author) commented Oct 9, 2023

ValSer and ValDeser are clearly different from a simple memcpy once deep structure nesting is taken into account. See #1102

jayz22 (Contributor, Author) commented Oct 9, 2023

Re: cost type consolidation

WasmMemAlloc can be removed now (use HostMemAlloc instead), since we have moved away from the memory fuel concept and all memory allocation is now done on the host side via ResourceLimiter.

jayz22 (Contributor, Author) commented Oct 10, 2023

Just had a conversation with @MonsieurNicolas. He expressed concerns about the calibration numbers not being accurate and reproducible due to advanced instruction-set extensions (e.g. AVX, AVX2).
While the first-principles model for memory copying works, AVX might be skewing other calibration numbers on x86_64. So in order to have more confidence in the calibration numbers and to improve reproducibility, he suggested:

  • During calibration, compile for the generic x86 target via march=x86-64 (more info can be found here). This will still include some extensions such as MMX and SSE, but hopefully (1) they don't distort the calibration results too much (i.e. they preserve the correct linear characteristics), and (2) they are ubiquitous enough that every node should have them.

I will give it a try.
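For reference, one way to try this (an assumed invocation, not an established command in this repo; rustc's equivalent of march=x86-64 is target-cpu=x86-64) would be:

```sh
# Assumed invocation for illustration; the exact calibration command isn't
# spelled out here. `-C target-cpu=x86-64` makes rustc emit code for the
# baseline x86-64 target (SSE2 etc., but no AVX/AVX2).
RUSTFLAGS="-C target-cpu=x86-64" cargo bench
```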

graydon (Contributor) commented Oct 10, 2023

Hmm. AVX2 is 10 years old; there's nothing in the field that doesn't speak AVX2. I am not sure this is really related to the constant-factor-ness of our measurements on those machines -- if we really want to correct that, I think we should figure out why it's happening rather than just fiddling with codegen options (which none of our users will fiddle with anyway).

github-merge-queue bot pushed a commit that referenced this issue Oct 11, 2023
A bunch of exploration of minor issues discovered in budget calibration,
ostensibly about #1020 but also involving some tracy markup, some
internal cleanup, machinery to allow excluding the overwhelming VM
instantiation cost center, and some attempts at setting budget costs
more from first principles.
github-merge-queue bot pushed a commit that referenced this issue Oct 12, 2023
### What

- Update to the latest XDR, consolidating a few memory-related
`ContractCostType` -- resolves
#1020
- Resolve #1087
- Move some no-longer-used cost type calibrations (VecEntry, MapEntry)
to an `experimental` directory; they are currently not used anywhere but
would be useful for experimentation. Will do a follow-up to make
them usable.
- Fixed a bug in the `memory_grow` function, where we were checking the
limit against the wrong input
- Add test helpers to make wasm memory alloc accessible
- Various test fixes and clarifications

### Why

[TODO: Why this change is being made. Include any context required to
understand the why.]

### Known limitations

Follow ups:
- Refactor the analytical models
- Make `experimental` directory usable
- Recalibrate numbers on an x86 machine and update the parameters
- Write a more complex test than the current `complex` one, and use that for
the metering benchmark

---------

Co-authored-by: Graydon Hoare <[email protected]>