Community Diligence Review of ND Cloud (ND Labs) Allocator #13
Second example:
Public Open Dataset - key compliance requirement: Retrievability
1st point)
Actual allocation: 50 TiB, 1 PiB - this follows the guidelines.
2nd point)
3rd point)
Actual data storage report:
Provider | Location | Total Deals Sealed | Percentage | Unique Data | Duplicate Deals
2 SPs match from the original list, accounting for 4% of all deals. Additional diligence is needed to confirm entity and location.
4th point)
The Allocator showed no sign of diligence after the 1st allocation and gave a 2nd allocation of 1 PiB to the client.
Third example:
DataCap given to:
NDLABS-Leo/Allocator-Pathway-ND-CLOUD#15
Public Open Dataset - key compliance requirement: Retrievability
SPs provided:
Actual report data:
Provider | Location | Total Deals Sealed | Percentage | Unique Data | Duplicate Deals
2 SPs match. 0% retrievable, similar to the SPs in all other applications.
What are the key questions, please?
@NDLABS-Leo - just pointing out what I see in the data, for the Governance Team to use in their review. Yes, retrievals are the main problem - why did clients continue to receive DataCap when their SPs were not following the guidelines?
Hi @NDLABS-Leo,
At the next Fil+ Allocator meeting we will be going over each refill application. I wanted to ensure you were tracking the review discussion taking place in #13. If your schedule allows, I recommend coming to the May 28th meeting to answer and discuss the issues raised about the recent distributions - that will let you address them faster; alternatively, the issue in Allocator Governance remains open for ongoing written discussion.
Warmly,
Hi, @Kevin-FF-USA
Hi, @filecoin-watchdog
Thank you for asking questions based on facts. No offence intended, but I would like to point out that, as Allocator admins, we are responsible for our review: if we continued to issue DataCap while a client's SP nodes did not comply with the rules, there would be something wrong with our review. However, our signing decisions are based on the bot's status at the time of review, and we will not delegate DataCap if we find any non-compliance issues.
Also, here are three points that I would like to clarify:
First point: the retrieval rates for the nodes in your screenshots all come from the bot's recently added retrieval feature. At the time of my review, which was relatively early, no retrieval rate was displayed, so I could only run sample retrieval tests against CIDs from the chain. And, as you can see, the bot reports were good before I signed.
Second point: the retrieval tests we conducted and the retrieval rates given by the current bot are not consistent. The previous retrieval checks supported three kinds of retrieval (HTTP/GRAPHSYNC/BITSWAP), but the current metric is Mean Spark Retrieval, and as you can see on Spark's website, only 824 nodes in the whole network are currently included in retrieval testing; the rest of the nodes cannot be monitored. This is the inconsistency I would like to raise between our manual retrieval tests and the current bot's retrieval data.
Third point: following on from the second point, we have also contacted the customer to troubleshoot the problem. I believe there will be a conclusion soon, and I will make that conclusion public here. I also hope that RKH can record this situation, to prevent some SPs from being misunderstood because of technical problems.
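The discrepancy Leo describes between manual multi-protocol tests and the bot's mean retrieval score can be illustrated with a toy calculation. All numbers and the single-protocol simplification below are invented for illustration; this is not Spark's actual methodology.

```python
# Toy illustration (invented data): an SP can pass a manual
# multi-protocol spot check yet score lower on a single-protocol mean.

def success_rate(attempts):
    """Fraction of retrieval attempts that succeeded."""
    return sum(attempts) / len(attempts) if attempts else 0.0

# Outcomes of sampled-CID retrievals for one SP, per protocol
# (True = the CID was retrieved). All values are made up.
attempts = {
    "http":      [True, True, False, True],
    "graphsync": [True, False, True, True],
    "bitswap":   [False, True, True, True],
}

# Manual check: count a CID as retrievable if ANY protocol serves it.
any_protocol = [any(per_cid) for per_cid in zip(*attempts.values())]
print(success_rate(any_protocol))      # 1.0 on this sample

# Single-protocol mean over the same sample (a crude stand-in for a
# bot-style score; not how Spark actually measures).
print(success_rate(attempts["http"]))  # 0.75
```

The point of the sketch is only that the two figures are computed over different protocol sets and different samples, so they can legitimately disagree without either side fabricating results.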
> our signature operation is a decision based on the bot situation at the time of the review

This is part of a "race to the bottom": the bots are designed to flag a set of compliance issues to make your life easier, but they have never been claimed to be sufficient. They do not relieve allocators of responsibility for their own diligence or for validating their customers' compliance. This conversation has happened in previous rounds of allocators as well, where retrieval would be challenged separately from whatever retrieval bot existed at the time. The bot's report status is not meant to allow a rubber stamp by allocators.
Hi, @willscott
@Kevin-FF-USA @galen-mcandrew @filecoin-watchdog
Based on a further diligence review, this allocator pathway is partially in compliance with their application. Specifically:
Given this mixed review, we are requesting that the allocator verify that they will uphold all aspects and requirements of their initial application. If so, we will request an additional 2.5 PiB of DataCap from RKH, to allow this allocator to show increased diligence and alignment. @NDLABS-Leo, can you verify that you will enforce program and allocator requirements (for example: public diligence, tranche schedules, and public-scale retrievability like Spark)? Please reply here with acknowledgement and any additional details for our review.
@galen-mcandrew We have also communicated with the Spark team on Slack several times about this issue. We are advancing our technology to interface with the Spark technical team and are actively trying to run tests with Spark. So I am not happy with the result of assigning 2.5P to our pathway in the second round, and I hope RKH can reassess in light of this latest situation. Since Spark has a large reach, most Allocator reviews will probably be affected, and we are pushing through our own efforts to deal with this issue. Once the Spark issue is dealt with, the retrieval rate data will be corrected.
@galen-mcandrew The problems we have identified so far are as follows: We are currently conveying these findings to the SPs, and I believe you will soon see the Spark retrieval rates improve!
@Kevin-FF-USA @galen-mcandrew
Review of Top Value Allocations from @NDLABS-Leo
Allocator Application: filecoin-project/notary-governance#1026
First example:
DataCap was given to:
NDLABS-Leo/Allocator-Pathway-ND-CLOUD#11
1st Point:
with a quick search this client has a history of questionable node usage: filecoin-project/filecoin-plus-large-datasets#2077 (comment) - should be a flag
Public Open Dataset - key compliance requirement: Retrievability
2nd point)
Allocation schedule per allocator:
First: The client will provide their weekly application volume, and for the initial allocation, we will allocate 50% of the weekly application volume.
Second: After review, the client will be allocated 100% of the weekly application volume.
Third: After review, the client will be allocated 200% of the weekly application volume.
Fourth: After review, the client will be allocated 200% of the weekly application volume.
Max per client overall: Upon successful review, the client will be allocated the weekly application volume minus the already allocated quota, with a maximum single application of 5P.
Actual allocation: 500 TiB, 1 PiB - this follows the guidelines
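As a reading aid, the tranche schedule quoted above can be expressed as a small function. This is a sketch, not code used by the pathway; the function name and signature are invented here, and 5P is assumed to mean 5 PiB = 5120 TiB (binary units).

```python
def tranche_amount(round_num, weekly_volume, already_allocated=0.0,
                   cap=5 * 1024.0):
    """Per-round allocation under the quoted schedule (volumes in TiB).

    Rounds 1-4 grant 50%, 100%, 200%, 200% of the client's weekly
    application volume; later rounds grant the weekly volume minus what
    has already been allocated. Every single grant is capped at 5P
    (assumed here to be 5120 TiB). This is an illustrative sketch only.
    """
    multipliers = {1: 0.5, 2: 1.0, 3: 2.0, 4: 2.0}
    if round_num in multipliers:
        amount = multipliers[round_num] * weekly_volume
    else:
        amount = max(weekly_volume - already_allocated, 0.0)
    return min(amount, cap)

# A 1000 TiB weekly volume would yield a 500 TiB first tranche,
# consistent with the 500 TiB first allocation observed above.
print(tranche_amount(1, 1000.0))  # 500.0
```

Under this reading, a 1 PiB second tranche implies the allocator treated the client's weekly volume as roughly 1 PiB, which is the figure the review compares against.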
3rd point)
No sign of KYC or KYB of the client or dataset, as required in the allocator application. Questions were asked of the allocator about the client's previous LDN applications and the SP nodes used.
4th point)
Client said these were the SPs:
f02211572 | Chengdu, Sichuan | MicroAnt
f02814600 | Chengdu, Sichuan | BigMax
f02226869 | Nanchang, Jiangxi | LuckyMine
f02274508 | Hong Kong | H&W
f02329119 | Hangzhou, Zhejiang | Cryptomage
f02837293 | Seoul, Seoul | FiveByte
f01159754 | Singapore, Singapore | VITACapital
f01852363 | Singapore, Singapore | HectorLi
f02321504 | Los Angeles, California | ipollo
f02320312 | Los Angeles, California | R1
f02327534 | Los Angeles, California | ipollo
f02322031 | Los Angeles, California | ipollo
f02320270 | Los Angeles, California | R1
f01853077 | Singapore, Singapore | Alpha100
Actual data storage report:
https://check.allocator.tech/report/NDLABS-Leo/Allocator-Pathway-ND-CLOUD/issues/11/1715238018641.md
Provider | Location | Total Deals Sealed | Percentage | Unique Data | Duplicate Deals
f03046248 | Hong Kong, Hong Kong, HK (China Unicom Global) | 319.44 TiB | 47.27% | 319.44 TiB | 0.00%
f02023435 | Hong Kong, Hong Kong, HK (HK Broadband Network Ltd.) | 119.44 TiB | 17.67% | 94.19 TiB | 21.14%
f02894855 | Unknown | 119.28 TiB | 17.65% | 119.28 TiB | 0.00%
f02956383 | Hong Kong, Hong Kong, HK (ANYUN INTERNET TECHNOLOGY (HK) CO., LIMITED) | 78.03 TiB | 11.55% | 78.03 TiB | 0.00%
f02948413 | Chengdu, Sichuan, CN (China Mobile Communications Group Co., Ltd.) | 39.59 TiB | 5.86% | 39.59 TiB | 0.00%
None of the SP IDs taking deals match the client's list, per the report. Additional diligence is needed to confirm entities and actual storage locations.
5th point)
Second allocation awarded to this client.
However, per the Spark dashboard, all SPs are either unavailable or show 0% retrievability.
The Allocator showed no sign of diligence after the 1st allocation and gave a 2nd allocation of 1 PiB to the client.
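The SP mismatch flagged in the 4th point is a mechanical set comparison between the IDs the client disclosed and the provider IDs in the storage report. A minimal sketch, with both ID sets copied from the lists above:

```python
# SP IDs the client disclosed in the application (from the list above).
claimed = {
    "f02211572", "f02814600", "f02226869", "f02274508", "f02329119",
    "f02837293", "f01159754", "f01852363", "f02321504", "f02320312",
    "f02327534", "f02322031", "f02320270", "f01853077",
}

# Provider IDs actually taking deals, per the check.allocator.tech report.
reported = {"f03046248", "f02023435", "f02894855", "f02956383", "f02948413"}

# An empty intersection means every deal went to an undisclosed SP.
overlap = claimed & reported
print(sorted(overlap))  # []
```

A check of this shape is cheap enough to run before every tranche, which is why the review treats the complete mismatch as a diligence failure rather than an oversight.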