Community Diligence Review of IPFSTT Allocator #93
Retrievals still at 0%: https://compliance.allocator.tech/report/f03011612/1721396249/report.md @galen-mcandrew First Diligence review: #9
@filecoin-watchdog First, please state the facts truthfully. If the content you are presenting is deceptive in nature, please remain silent; otherwise it will keep lowering your credibility. Second, I hope you can explain in detail why you consecutively signed three 1.95 PiB allocations and issued them only to yourself (DCent). The key point is: your three 1.95 PiB signatures have no record on GitHub and were signed manually.
I think you have the wrong watchdog, @nicelove666. I'll also take a look at the DCent allocator when they are close to renewal, thanks.
Let me provide a detailed and impartial explanation of the IPFSTT allocator. Thank you for taking the time to read, @galen-mcandrew. First, this is the allocation situation for the first round of 5P: #9. For the first round of 5P, detailed discussions have already taken place, including communication on Slack (https://filecoinproject.slack.com/archives/C06MTBZ44P2/p1718358842699939), on GitHub (filecoin-station/spark#74), and statements made at the notary meetings, so the details of the 5P allocation will not be repeated; interested readers can look here: https://github.com//issues/9. In the first round of 5P, we made the following commitments:
In the second round of 10P, to fulfill these commitments, we have done the following work:
Although the Spark statistics for the enterprise client nicelove666/Allocator-Pathway-IPFSTT#34 cannot be counted, we can manually check that the SPs support boost and lassie retrieval, and we can use run_retrieval_test to count each SP's retrieval success rate and number of successful retrievals more precisely. In fact, this enterprise customer has done very well: every SP they cooperate with stores unsealed files and supports retrieval, and we can count this data in real time. Therefore, in the second-round 10P allocation, all our clients support retrieval, with an average retrieval success rate of over 90%.

Finally, this is our overall allocation situation: https://compliance.allocator.tech/report/f03011612/1721396249/report.md. We have collaborated with 6 clients and 28 SPs, distributed across mainland China, Hong Kong, Japan, Vietnam, Singapore, and other regions. Although Spark cannot count all of our SPs, we can verify by technical means that these SPs have stored unsealed files and can support retrieval. Going forward, we will be more proactive in finding SP solutions that support Spark, strengthen communication with the Spark team, and explore more enterprise clients. We believe more and more SPs supporting Spark will emerge!
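For illustration, here is a minimal sketch of how such a per-SP tally could be computed. The attempt records below are hypothetical placeholders, and the tally logic is a stand-in sketch rather than the run_retrieval_test tool itself:

```python
from collections import defaultdict

# Hypothetical records of manual retrieval attempts: (SP ID, success flag).
# In practice these would come from a tool such as the run_retrieval_test
# script mentioned above, or from manual boost / lassie fetches.
attempts = [
    ("f03035686", True),
    ("f03035686", False),
    ("f02951213", True),
    ("f03144077", True),
]

def summarize(attempts):
    """Count successful retrievals and compute a success rate per SP."""
    stats = defaultdict(lambda: {"ok": 0, "total": 0})
    for sp_id, ok in attempts:
        stats[sp_id]["total"] += 1
        if ok:
            stats[sp_id]["ok"] += 1
    return {sp: {**s, "rate": s["ok"] / s["total"]} for sp, s in stats.items()}

for sp, s in summarize(attempts).items():
    print(f"{sp}: {s['ok']}/{s['total']} retrievals succeeded ({s['rate']:.0%})")
```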
Finally, let me repeat it once again:
The retrieval results show everything. Most of the SPs mentioned below show 0% successful retrievals, and some of your SPs cannot even be found on the Spark dashboard.
Everything you said seems to be empty words. There has been no progress since the last time your allocator was allocated 10 PiB.
@TrueBlood1
1. The link you cited is from a week ago, and the data has since been updated. You need to look at this link: https://compliance.allocator.tech/report/f03011612/1721999568/report.md (a sketch for extracting the per-SP Spark rates from that report follows below).
2. Why are you ignoring the SPs that support Spark and pretending not to see those 50-97% Spark success rates?
3. The SPs in your screenshot that do not support Spark are mostly from the first round of 5P. Additionally, one belongs to the enterprise customer dangbei, at nicelove666/Allocator-Pathway-IPFSTT#34. These SPs actually store the unsealed files and can support boost and other retrieval methods. The reason they do not support Spark was explained by Josh in a meeting: their open-source solution is coming soon.
4. Why did I help Josh and allocate 2.75P of DC to their customers? Because they have brought us enterprise-level customer applications, which is exactly what we need, and our team has verified that their technical solution is indeed feasible. Of course, the positions of PL and FF take precedence, but regardless of whether PL and FF accept Josh's solution, we have only allocated 2.75P of DC to Josh; the remaining 7.25P all goes to clients that Spark can measure. So in our second round of 10P, 70% of the quota still goes to SPs that support Spark, and 30% to SPs that support boost but do not yet support Spark.
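For anyone who wants to verify these rates themselves, here is a minimal sketch that pulls the compliance report linked above and extracts the per-SP Spark column. The table layout (provider ID in the first column, Spark rate in the last) is an assumption based on the report excerpts quoted later in this thread:

```python
import re
import urllib.request

# Compliance report cited above. The assumption here is that the provider
# tables in the report end with the "Mean Spark Retrieval Success Rate"
# column, as in the excerpts quoted later in this thread.
REPORT_URL = "https://compliance.allocator.tech/report/f03011612/1721999568/report.md"

def spark_rates(url):
    """Return {provider ID: Spark success rate} parsed from the report's markdown tables."""
    text = urllib.request.urlopen(url).read().decode("utf-8")
    rates = {}
    for line in text.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Provider rows start with an f0... ID; the Spark rate is the last cell.
        if cells and re.fullmatch(r"f0\d+", cells[0]):
            rates[cells[0]] = cells[-1]
    return rates

if __name__ == "__main__":
    for sp, rate in sorted(spark_rates(REPORT_URL).items()):
        print(sp, rate)
```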
@TrueBlood1 Finally, I want to tell you that defamation not based on facts is very easily clarified. A person who does not state the facts cannot have high credibility; in the long run, people will treat their words as mere air. The latest data and information always appear first at the notary meetings. I hope you fully understand the facts before making any statements.
@nicelove666 Can you provide more information here? I want to make sure I understand which "solution" you are referring to in this comment.
Good morning @galen-mcandrew. The "josh" I mentioned is joshua-ne, #63. They explained their situation at the notary meeting: they are currently facing some issues, and our team suspects that SPs using other systems such as Venus are also unable to be counted by Spark, which may be the reason. Therefore, the Josh team has integrated the retrieval functionality of the v3.1 LDN check-bot and developed a tool on top of it: a command-line retrieval tool built upon systems like boost. Our team has tested their solution and found it feasible. Most importantly, we have manually verified that their SPs do support retrieval through various means such as boost, so we decided to help them by allocating 2.75P of DC. Of course, we are also grateful that they have brought us enterprise-level customer applications, which is something we have been lacking and have promised to pursue. An hour ago, I communicated with their team:
1. They will submit an issue to explain the situation in detail.
2. They need to open-source the tool as soon as possible; the first version of the test tool will be open-sourced next week for the community to test.
3. They will resolve the indexing issue as soon as possible and are expected to be able to support Spark after the next network upgrade on August 6th.
Next week we will be able to see the progress, and our team will continue to follow up. Sincerely, have a great day.
I attempted to pull recent deals from several of the SPs listed in #63 and was unable to connect to them with lassie / boost. e.g.
You will need to allow others to also run your check-bot independently if it is to be convincing. |
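For reference, here is a minimal sketch (not the exact command used above) of this kind of independent spot check, assuming a recent lassie CLI (`lassie fetch --providers <multiaddr> -o <file> <cid>`). The CID and provider multiaddr are placeholders and would need to be replaced with values taken from the SP's recent deals:

```python
import subprocess
import tempfile

# Placeholder payload CID and provider multiaddr; replace with values taken
# from the SP's recent deals. Flags assume a recent lassie CLI:
#   lassie fetch --providers <multiaddr> -o <output.car> <cid>
CID = "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"
PROVIDER = "/ip4/203.0.113.10/tcp/24001/p2p/12D3KooW..."  # hypothetical

def try_retrieval(cid: str, provider: str, timeout: int = 300) -> bool:
    """Return True if lassie manages to fetch the CID from the given provider."""
    with tempfile.NamedTemporaryFile(suffix=".car") as out:
        try:
            result = subprocess.run(
                ["lassie", "fetch", "--providers", provider, "-o", out.name, cid],
                capture_output=True, text=True, timeout=timeout,
            )
        except (subprocess.TimeoutExpired, FileNotFoundError):
            return False
    return result.returncode == 0

if __name__ == "__main__":
    print("retrieval succeeded" if try_retrieval(CID, PROVIDER) else "retrieval failed")
```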
Can you provide the specific SP IDs? Additionally, I wholeheartedly agree with your viewpoint: open source is a must, and they will also publish the indexes quickly to support Spark.
@nicelove666 All the data in the image is what I found that day by checking the Spark dashboard. I am just pointing out the problems you have. I don't think you should be allocated any more DataCap until your upgrades are completed. Please finish your upgrade as soon as possible.
The SPs that cannot be retrieved via Spark belong to dangbei; they account for less than 30%, and this can be resolved immediately.
I've looked at all of your applications, and for a long time, from the first application to the latest one, we have not been able to see any change.
I will not reply to biased comments.
If you are in Dubai, we can also meet to communicate.
What you need to do now is make sure that everyone can retrieve from most of your SPs.
@galen-mcandrew @willscott The Josh team released a roadmap: joshua-ne/FIL_DC_Allocator_1022#22
Until there are more valid retrieval results to show, it is recommended to re-evaluate the refill of this allocator, in order not to cause arguments or dissatisfaction in the Fil+ community.
First, all the SPs that Josh cooperates with now support Spark. All data sealed from now on will support it, but the previously sealed data did not.
We did not see you or your partner at the meeting.
Based on a further diligence review, this allocator pathway is in compliance with their application. They are continuing to increase visibility and compliance, while working towards additional scale tools to support the growth of the ecosystem. I will continue to ask all commenters to refrain from antagonistic attacks or claims, and limit these threads to the relevant details. For example, there is no reason to deflect and call into question other allocators. We are requesting 20 PiB of DataCap from RKH for this pathway to increase runway and scale. @nicelove666 Please reply if there are any issues, concerns, or updates while we initiate the request to the RKH.
Yes, we succeeded and you can view the data. @Yvette516
Review of Allocations from IPFSTT
Allocator Application: filecoin-project/notary-governance#1006
First example:
DataCap was given to:
nicelove666/Allocator-Pathway-IPFSTT#27
First: 256 TiB • Second: 512 TiB • Third: 1 PiB • Fourth: 2 PiB • Fifth: 1 PiB • Sixth: 1 PiB
Due diligence was conducted before each signing, paying close attention to Spark data, SP disclosure, etc.
SP disclosure:
Polaris f02951213 Singapore
ShenSuanCloud f03035686 Nanchang, Jiangxi, CN
CoffeeCloud f03086293 Hong Kong
CoffeeCloud f03136267 Hong Kong
Round Arithmetic f02200472 Chengdu, Sichuan, CN
Individual f02956383 Hong Kong
Lucky Star f03068013 Hong Kong
Actual data storage report:
nicelove666/Allocator-Pathway-IPFSTT#27 (comment)
| Provider | Location | Total Deals Sealed | Percentage | Unique Data | Duplicate Deals | Mean Spark Retrieval Success Rate (7d) |
| --- | --- | --- | --- | --- | --- | --- |
| f02956383 | Hong Kong, Hong Kong, HK<br>ANYUN INTERNET TECHNOLOGY (HK) CO.,LIMITED | 630.09 TiB | 13.10% | 630.06 TiB | 0.00% | 9.25% |
| f02200472 | Chengdu, Sichuan, CN<br>CHINANET SiChuan Telecom Internet Data Center | 59.88 TiB | 1.24% | 59.88 TiB | 0.00% | 0.63% |
| f03035686 | Shenzhen, Guangdong, CN<br>CHINANET-BACKBONE | 1.24 PiB | 26.35% | 1.24 PiB | 0.00% | 42.14% |
| f03136267 | Hong Kong, Hong Kong, HK<br>HK Broadband Network Ltd. | 1.09 PiB | 23.19% | 1.09 PiB | 0.00% | 12.57% |
| f03144077 | Hong Kong, Hong Kong, HK<br>HK Broadband Network Ltd. | 651.44 TiB | 13.54% | 651.44 TiB | 0.00% | 71.26% |
| f03086293 | Hong Kong, Hong Kong, HK<br>HK Broadband Network Ltd. | 299.31 TiB | 6.22% | 299.31 TiB | 0.00% | 0.00% |
| f03068013 | Hong Kong, Hong Kong, HK<br>PCCW Global, Inc. | 424.91 TiB | 8.83% | 424.91 TiB | 0.00% | 8.78% |
| f02951213 | Singapore, Singapore, SG<br>StarHub Ltd | 362.28 TiB | 7.53% | 362.28 TiB | 0.00% | 68.44% |
The client disclosed 7 SPs but actually collaborated with 8 SPs, adding 1 new SP. The SPs are generally well matched.
For the 1 new SP that was added, the client has provided detailed information about its geographic location and company on GitHub: nicelove666/Allocator-Pathway-IPFSTT#27 (comment)
All SPs support Spark
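As a quick cross-check, the disclosed and actual SP lists above can be compared mechanically; a minimal sketch using the two lists quoted in this example:

```python
# SPs disclosed in the application (from the "SP disclosure" list above).
disclosed = {
    "f02951213", "f03035686", "f03086293", "f03136267",
    "f02200472", "f02956383", "f03068013",
}

# SPs that actually received deals (from the storage report table above).
actual = {
    "f02956383", "f02200472", "f03035686", "f03136267",
    "f03144077", "f03086293", "f03068013", "f02951213",
}

print("Used but not disclosed:", sorted(actual - disclosed))   # ['f03144077']
print("Disclosed but not used:", sorted(disclosed - actual))   # []
```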
Second example
DataCap was given to:
nicelove666/Allocator-Pathway-IPFSTT#34
First: 256 TiB • Second: 512 TiB • Third: 1 PiB • Fourth: 1 PiB
Due diligence was conducted before each signing, paying close attention to Spark data, SP disclosure, etc.
SP disclosure:
f01422327 - Japan
f02252023 - Japan
f02252024 - Japan
f01989013 - Malaysia
f01989014 - Malaysia
f01989015 - Malaysia
f02105010 - Malaysia
Actual data storage report:
nicelove666/Allocator-Pathway-IPFSTT#34 (comment)
| Provider | Location | Total Deals Sealed | Percentage | Unique Data | Duplicate Deals | Mean Spark Retrieval Success Rate (7d) |
| --- | --- | --- | --- | --- | --- | --- |
| f02105010 | Kuala Lumpur, MY<br>Extreme Broadband - Total Broadband Experience | 300.00 TiB | 17.01% | 300.00 TiB | 0.00% | - |
| f01989015 | Kuala Lumpur, MY<br>Extreme Broadband - Total Broadband Experience | 300.00 TiB | 17.01% | 300.00 TiB | 0.00% | - |
| f01989013 | Kuala Lumpur, MY<br>Extreme Broadband - Total Broadband Experience | 300.00 TiB | 17.01% | 300.00 TiB | 0.00% | - |
| f01989014 | Kuala Lumpur, MY<br>Extreme Broadband - Total Broadband Experience | 300.00 TiB | 17.01% | 300.00 TiB | 0.00% | - |
| f02252024 | JP<br>TOKAI Communications Corporation | 188.00 TiB | 10.66% | 188.00 TiB | 0.00% | - |
| f01422327 | JP<br>TOKAI Communications Corporation | 188.00 TiB | 10.66% | 188.00 TiB | 0.00% | - |
| f02252023 | JP<br>TOKAI Communications Corporation | 188.00 TiB | 10.66% | 188.00 TiB | 0.00% | - |
All disclosed SPs exactly match the SPs actually cooperated with.
Retrieval from these SPs works normally.
This is an enterprise customer. We mainly confirmed the customer's identity through emails and conference calls. The domain-name email has been forwarded to filplus-app-review@fil.org; please review it.
Third example
DataCap was given to:
nicelove666/Allocator-Pathway-IPFSTT#41
First: 256TiB
Only 256 TiB has been approved, and the check-bot report has not been updated yet. We will continue to pay attention.