Community Diligence Review of IPFSTT Allocator #9
Second example: nicelove666/Allocator-Pathway-IPFSTT#10
1st point)
2nd point)
3rd point) Actual data storage report:
| Provider | Location | Total Deals Sealed | Percentage | Unique Data | Duplicate Deals |
Most SP IDs taking deals did match per the report. Additional diligence is needed to confirm the entity and the actual storage locations. The allocator appears to have done an initial review of the SPs and asked about retrievals, but never followed up.
4th point) The allocator showed no sign of diligence after the 1st allocation and gave the client the 2nd, 3rd, and 4th allocations, totaling 3.6PiB.
We attended three notary meetings in a row and spoke twice in a row. Some SPs can be successfully retrieved on Spark. We hope to contact the Spark team for more solutions.
Based on an additional compliance review, it appears this allocator is attempting to work with public open dataset clients. However, the data associated with this pathway is not currently able to be retrieved at scale, and testing for retrieval is currently noncompliant. As a reminder, the allocator team is responsible for verifying, supporting, and intervening with their clients. If a client is NOT providing accurate deal-making info (such as incomplete or inaccurate SP details) or is making deals with noncompliant, unretrievable SPs, then the allocator needs to intervene and require client updates before more DataCap is awarded. Before we will submit a request for more DataCap to this allocator, please verify that you will instruct, support, and require your clients to work with retrievable storage providers. @nicelove666 can you verify that you will enforce retrievability requirements, such as through Spark? Please reply here with acknowledgement and any additional details for our review.
Dear Galen @galen-mcandrew , We confirm that we will guide, support, and require our clients to work with retrievable SPs to ensure their data can be successfully stored and retrieved. Our allocator supports three main types of applications: public datasets, enterprise clients, and individual clients. In the first round, we received applications from individuals and public datasets; in the next round, we will support applications from enterprise clients. Because our allocator launched early, before Spark was introduced, we primarily used Boost and Lassie for retrieval testing, and the retrieved data looked good at the time. However, after Spark emerged, we found that the success rate on that platform was relatively low. We have communicated with the Spark team (filecoin-station/spark#74) and are actively seeking solutions. Going forward, we will focus on data retrieval and require clients to provide more accurate deal information. If the deal information is inaccurate, we will intervene and ask the client to update it; only after the information is updated will we consider granting more DataCap. I will closely monitor the SPs' disclosures and retrieval performance. If you have any guidance, please contact us at any time.
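For illustration, a retrieval spot check of the kind described above can be scripted. The sketch below is a minimal example, assuming Lassie is installed locally and that its `fetch` command accepts an `-o` output path; the CID list is a placeholder, not data from this review, and would in practice come from the client's published deal list.

```python
import subprocess

# Placeholder payload CIDs; in practice, take these from the client's deal list.
TEST_CIDS = [
    "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi",
]


def lassie_fetch_ok(cid: str, timeout_s: int = 120) -> bool:
    """Try to retrieve a CID with the Lassie CLI; return True if the fetch succeeds."""
    try:
        result = subprocess.run(
            ["lassie", "fetch", "-o", "/dev/null", cid],  # assumes the -o output flag
            timeout=timeout_s,
            capture_output=True,
        )
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False


if __name__ == "__main__":
    for cid in TEST_CIDS:
        print(f"{cid}: {'OK' if lassie_fetch_ok(cid) else 'FAILED'}")
```

A check like this only confirms that at least one provider can serve the CID; per-SP retrievability (which is what Spark measures) still needs to be verified against each provider individually.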
Thank you for the confirmation and update! Looking forward to seeing the continued diligence & onboarding through this pathway. We will be requesting 10PiB of DataCap for this allocator to increase runway and scale.
Hey, dear Galen, thanks for your reply, support, and help! Through our efforts, the number of SPs supporting Spark has increased by one. Currently, there are two SPs supporting Spark: f02951213 and f02894875. Moving forward, we will focus on the retrieval functionality of SPs on Spark, and I believe we can do a good job! Our communication with Spark will mainly be reflected in filecoin-station/spark#74. If there is important content in phone communication, we will also synchronize it there.
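As a rough illustration of how the per-SP Spark retrieval rates mentioned above could be monitored programmatically, here is a minimal sketch. The stats endpoint URL and response shape are assumptions made for illustration only, not a documented Spark API; refer to the Spark team's documentation (or the linked filecoin-station/spark#74 thread) for the actual interface.

```python
import json
import urllib.request

# Miner IDs mentioned in this thread as currently passing Spark retrieval checks.
MINERS = ["f02951213", "f02894875"]

# NOTE: hypothetical endpoint for illustration only; the real Spark stats service
# may expose a different URL and response format.
STATS_URL = "https://stats.filspark.com/miner/{miner}/retrieval-success-rate"


def spark_success_rate(miner: str):
    """Fetch an assumed per-miner retrieval success rate; return None on any error."""
    try:
        with urllib.request.urlopen(STATS_URL.format(miner=miner), timeout=30) as resp:
            data = json.load(resp)
        return float(data.get("success_rate", 0.0))
    except Exception:
        return None


if __name__ == "__main__":
    for miner in MINERS:
        rate = spark_success_rate(miner)
        if rate is None:
            print(f"{miner}: stats unavailable")
        else:
            print(f"{miner}: {rate:.1%} retrieval success")
```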
@nicelove666 It looks like you gave a brand-new GitHub ID 1.75PiB over 4 days, with no retrievals on most SPs after each allocation: nicelove666/Allocator-Pathway-IPFSTT#27. Can you explain why you keep giving DataCap to non-retrievable SPs? cc @galen-mcandrew
@filecoin-watchdog It's been two years, and you've left messages on nearly all of our LDNs (over 20 LDNs). It seems you are particularly concerned about us. Thanks for your attention. Given the limitations of written expression, we can have an in-depth discussion at next week's meeting.
Finally, we are also actively following you. Of this round's 10PiB, we used 1.75PiB and you used 1.95PiB. Could you please explain how your 1.95PiB was allocated?
Review of Top Value Allocations from @nicelove666
Allocator Application: filecoin-project/notary-governance#1006
First example:
DataCap was given to:
nicelove666/Allocator-Pathway-IPFSTT#3
This client was given 1PiB (100% of the requested weekly amount) of DataCap, instead of 50% of their request as stated in the allocator application. The allocator later claimed it was a mistake and asked for the DataCap to be removed. Asking @galen-mcandrew for more details on the removal.