
feat(graphsync): allow setting of per-peer incoming requests for miners #7578

Merged
magik6k merged 1 commit into master from rvagg/SimultaneousTransfersForStoragePerClient on Dec 17, 2021

Conversation

rvagg
Member

@rvagg rvagg commented Oct 28, 2021

@jennijuju to get this started. Exposes graphsync 0.10.3's MaxInProgressIncomingRequestsPerPeer as SimultaneousTransfersForStoragePerClient for storage providers only (the other gs options are exposed for fullnodes/clients too, I'm assuming this is appropriate behaviour).

Will need @hannahhoward's eye, and tests would be nice but I have no idea what that might look like.
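
For readers following along, here is a minimal sketch of what exposing this option amounts to. The constructor below is illustrative only (the real wiring is in node/modules/storageminer.go and node/config/def.go, which this PR touches); the MaxInProgressIncomingRequestsPerPeer option name is the one referenced above, but treat the exact shapes and signatures here as assumptions rather than a copy of the diff:

package modules

import (
	"context"

	graphsync "github.com/ipfs/go-graphsync"
	gsimpl "github.com/ipfs/go-graphsync/impl"
	gsnet "github.com/ipfs/go-graphsync/network"
	"github.com/ipld/go-ipld-prime"
)

// newStagingGraphsync is a sketch of the storage-provider graphsync constructor:
// the [Dealmaking] limits from the miner config are passed straight through as
// go-graphsync options when the staging graphsync instance is built.
func newStagingGraphsync(ctx context.Context, net gsnet.GraphSyncNetwork, lsys ipld.LinkSystem,
	transfersForStorage uint64, // SimultaneousTransfersForStorage: global cap on incoming transfers
	transfersForStoragePerClient uint64, // SimultaneousTransfersForStoragePerClient: per-peer cap, 0 = no limit
	transfersForRetrieval uint64, // SimultaneousTransfersForRetrieval: cap on outgoing transfers
) graphsync.GraphExchange {
	return gsimpl.New(ctx, net, lsys,
		gsimpl.MaxInProgressIncomingRequests(transfersForStorage),
		// New in go-graphsync 0.10.3; this is what SimultaneousTransfersForStoragePerClient maps to.
		gsimpl.MaxInProgressIncomingRequestsPerPeer(transfersForStoragePerClient),
		gsimpl.MaxInProgressOutgoingRequests(transfersForRetrieval),
	)
}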

@rvagg rvagg requested a review from a team as a code owner October 28, 2021 11:43
@rvagg rvagg force-pushed the rvagg/SimultaneousTransfersForStoragePerClient branch 2 times, most recently from 7c052a2 to 908e515 on October 28, 2021 at 11:55
@codecov

codecov bot commented Oct 28, 2021

Codecov Report

Merging #7578 (d336b0b) into master (690be5b) will increase coverage by 0.32%.
The diff coverage is 100.00%.

❗ Current head d336b0b differs from pull request most recent head 9e7d9af. Consider uploading reports for the commit 9e7d9af to get more accurate results
Impacted file tree graph

@@            Coverage Diff             @@
##           master    #7578      +/-   ##
==========================================
+ Coverage   39.48%   39.81%   +0.32%     
==========================================
  Files         653      633      -20     
  Lines       69925    67100    -2825     
==========================================
- Hits        27611    26715     -896     
+ Misses      37567    35780    -1787     
+ Partials     4747     4605     -142     
Impacted Files Coverage Δ
node/builder_miner.go 94.85% <100.00%> (-0.04%) ⬇️
node/config/def.go 97.41% <100.00%> (+0.01%) ⬆️
node/modules/storageminer.go 63.17% <100.00%> (ø)
node/impl/full/wallet.go 35.89% <0.00%> (-28.21%) ⬇️
extern/sector-storage/storiface/worker.go 53.84% <0.00%> (-22.16%) ⬇️
chain/events/message_cache.go 87.50% <0.00%> (-12.50%) ⬇️
api/v0api/v1_wrapper.go 2.22% <0.00%> (-11.99%) ⬇️
markets/dagstore/mount.go 73.33% <0.00%> (-10.88%) ⬇️
markets/loggers/loggers.go 89.28% <0.00%> (-10.72%) ⬇️
extern/sector-storage/fr32/fr32.go 91.86% <0.00%> (-8.14%) ⬇️
... and 159 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 690be5b...9e7d9af. Read the comment docs.

@jennijuju jennijuju added this to the v1.13.3 milestone Dec 9, 2021
@jennijuju jennijuju added the P2 P2: Should be resolved label Dec 9, 2021
Contributor

@hannahhoward hannahhoward left a comment


LGTM as long as folks are ok with the slightly confusing behavior of 0 = no per peer limit.

The 0 made sense as a hidden default, but is perhaps a tad confusing as explicit config. I think we documented it, though, so it's probably fine?

@jennijuju
Member

LGTM as long as folks are ok with the slightly confusing behavior of 0 = no per peer limit.

I think it's fine, as we have similar behavior in the sealing configurations as well.

@jennijuju
Member

@rvagg @hannahhoward is the total number of transfers under SimultaneousTransfersForStoragePerClient bounded by SimultaneousTransfersForStorage? For example, if SimultaneousTransfersForStorage is set to 20, SimultaneousTransfersForStoragePerClient is set to 10, and the SP has incoming data transfers from 5 clients, will the SP only have 20 ongoing data transfers across these 5 clients on a first-come, first-served basis?

@rvagg
Member Author

rvagg commented Dec 15, 2021

@jennijuju yes, SimultaneousTransfersForStorage is the upper limit regardless. SimultaneousTransfersForStoragePerClient is bounded by it (the docs in here say that), and your example is correct: capped at 20 simultaneous transfers in total and 10 per client. Do you think it could be made clearer in the docs here, perhaps?
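
To pin down that example, here is an illustrative excerpt of the two [Dealmaking] fields; the real DealmakingConfig in node/config/def.go has many more fields, and the comments below simply restate the semantics agreed in this thread:

// Illustrative excerpt only; the field names are the ones used by this PR.
type DealmakingConfig struct {
	// Global cap: at most this many storage-deal data transfers in flight at once,
	// regardless of how many clients they come from.
	SimultaneousTransfersForStorage uint64

	// Per-client cap: at most this many of those in-flight transfers may come
	// from any single client. 0 means no per-client limit, and values above
	// SimultaneousTransfersForStorage have no additional effect.
	SimultaneousTransfersForStoragePerClient uint64
}

// Worked example from this thread: with SimultaneousTransfersForStorage = 20,
// SimultaneousTransfersForStoragePerClient = 10 and 5 clients transferring at
// once, the SP still serves at most 20 transfers in total, first come first
// served, with no single client holding more than 10 of them.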

documentation/en/default-lotus-miner-config.toml (outdated, resolved)
// The maximum number of simultaneous data transfers from any single client
// for storage deals.
// Unset by default (0), and values higher than SimultaneousTransfersForStorage
// will have no effect.
@jennijuju (Member)

Suggested change:
- // will have no effect.
+ // will have no effect. Total amount of simultaneous data transfer across all storage clients is also bounded by `SimultaneousTransfersForStorage`.

@jennijuju
Member

@jennijuju yes, SimultaneousTransfersForStorage is the upper limit regardless. SimultaneousTransfersForStoragePerClient is bounded by it (the docs in here say that), and your example is correct: capped at 20 simultaneous transfers in total and 10 per client. Do you think it could be made clearer in the docs here, perhaps?

Thanks @rvagg! I made one small suggestion on the doc - do you think it makes sense?

@rvagg rvagg force-pushed the rvagg/SimultaneousTransfersForStoragePerClient branch from d336b0b to 9e7d9af on December 17, 2021 at 04:04
@rvagg
Member Author

rvagg commented Dec 17, 2021

@jennijuju yep, I've added your words (fairly closely anyway), rebased and squashed this branch so it's up to date and just one commit now.

@magik6k magik6k merged commit a4728d3 into master Dec 17, 2021
@magik6k magik6k deleted the rvagg/SimultaneousTransfersForStoragePerClient branch December 17, 2021 13:27
dirkmc pushed a commit that referenced this pull request on Dec 17, 2021:
…sfersForStoragePerClient

feat(graphsync): allow setting of per-peer incoming requests for miners
@jennijuju jennijuju modified the milestones: v1.13.3, v1.15.0 Jan 2, 2022