This repository has been archived by the owner on Jul 18, 2024. It is now read-only.

[DataCap Application] FileDrive Labs - Datasets Landing Plan V2 - [1/5] #1623

Closed
1 of 2 tasks
laurarenpanda opened this issue Feb 17, 2023 · 149 comments

Comments

@laurarenpanda

laurarenpanda commented Feb 17, 2023

Data Owner Name

FileDrive Labs

Data Owner Country/Region

China

Data Owner Industry

Life Science / Healthcare

Website

https://filedrive.io/

Social Media

Twitter: https://twitter.com/FileDrive1
Medium: https://medium.com/@FileDrive1
WeChat Official Account: FileDrive

Total amount of DataCap being requested

5PiB

Weekly allocation of DataCap requested

500TiB

On-chain address for first allocation

f1mnahpxpyrazryxuh24rcyelb4ksgwaztvskjzcq

Custom multisig

  • Use Custom Multisig

Identifier

No response

Share a brief history of your project and organization

FileDrive Datasets Landing Plan is a project for onboarding more valuable public datasets onto the Filecoin network. Through several phases, we plan to bring 10 PiB of data to Filecoin and drive 100 PiB of storage power growth.


About FileDrive Datasets

FileDrive Datasets is a platform that effectively connects the huge storage market Filecoin has built with publishers of public datasets.
The Filecoin network provides reliable, secure, and affordable decentralized storage services, and FileDrive Labs wants to deliver these benefits to end users by building a public dataset platform.
It is challenging to attract traditional Cloud Storage and Object-based Storage users to the Filecoin network and help them benefit from it. Developers in the Filecoin ecosystem, such as FileDrive Labs, need to face this challenge together.
As a member of the Filecoin ecosystem, FileDrive Labs has been committed to developing useful tools that make it easier for users to store their data on the Filecoin network.

FileDrive Datasets integrates a group of tools to provide a storage service compatible with both Cloud Storage and Object-based Storage, with a better user experience to attract more users.
Ongoing projects behind it:
- Go-Graphsplit: https://github.com/filedrive-team/go-graphsplit
- DS-Cluster: https://github.com/filedrive-team/go-ds-cluster
- Filejoy: https://github.com/filedrive-team/filejoy
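
To illustrate the kind of data prep a chunking tool like Go-Graphsplit performs, here is a minimal Python sketch of the slice-planning arithmetic: a dataset is split into fixed-size slices so each resulting CAR file fits a deal. The function name and slice size are illustrative only, not the tool's actual API.

```python
# Hypothetical sketch of fixed-size slice planning, as done by data prep
# tools such as go-graphsplit. Names and sizes are illustrative.
def plan_slices(total_bytes: int, slice_size: int):
    """Return (offset, length) pairs covering a dataset of total_bytes."""
    slices = []
    offset = 0
    while offset < total_bytes:
        # The final slice may be shorter than slice_size.
        length = min(slice_size, total_bytes - offset)
        slices.append((offset, length))
        offset += length
    return slices

GiB = 1 << 30
# A 100 GiB dataset with 16 GiB slices -> 7 slices (6 full + one 4 GiB tail)
plan = plan_slices(100 * GiB, 16 * GiB)
```

Each planned slice would then be packed into its own CAR file and offered to storage providers as a separate deal.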

Article about FileDrive Datasets on Filecoin Blog:
- Large Datasets: FileDrive: https://filecoin.io/blog/posts/large-datasets-filedrive/



About FileDrive Labs

FileDrive Labs has always defined itself as a tool developer and infrastructure builder in the Filecoin ecosystem. Since 2019, we have continuously focused on technical solutions and development based on the IPFS protocol and the Filecoin network, doing our best to contribute to the community.
Over 80% of our team are qualified engineers, and half of them have more than ten years of development experience across multiple industries, including telecommunications, the Internet, and blockchain.
Since 2020, we have participated in the Slingshot Competition, become one of the top teams, and stored over 5 PiB of useful data from public datasets on the Filecoin network.
To contribute to the Filecoin community, we developed an open-source data prep tool, Graphsplit; the FIL+ project dashboard filplus.info; and the storage provider discovery platform filfind.info.
We have also held weekly online virtual events, FileDrive Meetup, since March 2022, which aim to give community members a platform to grasp the latest trends in the Filecoin network and in our work and research.

Please check the following links for more details.
- GitHub: https://github.com/filedrive-team
- Twitter: https://twitter.com/FileDrive1
- Eventbrite: https://www.eventbrite.hk/o/filedrive-labs-42456337463
- YouTube Channel: https://www.youtube.com/channel/UCxcZC1dtBUlQvZY7DX13W1w
- Medium: https://medium.com/@FileDrive1

Is this project associated with other projects/ecosystem stakeholders?

No

If answered yes, what are the other projects/ecosystem stakeholders

No response

Describe the data being stored onto Filecoin

FileDrive Datasets Landing Plan #2
- Datasets: 10


List of Datasets in #2:

1. Transiting Exoplanet Survey Satellite (TESS)
- The Transiting Exoplanet Survey Satellite (TESS) is a multi-year survey that will discover exoplanets in orbit around bright stars across the entire sky using high-precision photometry. The survey will also enable a wide variety of stellar astrophysics, solar system science, and extragalactic variability studies. More information about TESS is available at MAST and the TESS Science Support Center.
- https://registry.opendata.aws/tess/
- License: STScI hereby grants the non-exclusive, royalty-free, non-transferable, worldwide right and license to use, reproduce and publicly display in all media public data from the TESS mission.
- Size: 285.6 TiB

2. Oxford Nanopore Technologies Benchmark Datasets
- The ont-open-data registry provides reference sequencing data from Oxford Nanopore Technologies to support: 1) exploration of the characteristics of nanopore sequence data; 2) assessment and reproduction of performance benchmarks; 3) development of tools and methods. The data deposited showcases DNA sequences from a representative subset of sequencing chemistries. The datasets correspond to publicly available reference samples (e.g. Genome In A Bottle reference cell lines). Raw data are provided with metadata and scripts to describe sample and data provenance.
- https://registry.opendata.aws/ont-open-data/
- License: Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
- Size: 60.3 TiB

3. Community Earth System Model v2 ARISE (CESM2 ARISE)
- Data from ARISE-SAI Experiments with CESM2
- https://registry.opendata.aws/ncar-cesm2-arise/
- License: Creative Commons Attribution 4.0 International (CC BY 4.0)
- Size: 263.5 TiB

4. NOAA Wave Ensemble Reforecast
- This is a 20-year global wave reforecast generated by WAVEWATCH III model (https://github.com/NOAA-EMC/WW3) forced by GEFSv12 winds (https://noaa-gefs-retrospective.s3.amazonaws.com/index.html). The wave ensemble was run with one cycle per day (at 03Z), spatial resolution of 0.25°X0.25° and temporal resolution of 3 hours. There are five ensemble members (control plus four perturbed members) and, once a week (Wednesdays), the ensemble is expanded to eleven members. The forecast range is 16 days and, once a week (Wednesdays), it extends to 35 days. More information about the wave modeling, wave grids and calibration can be found in the WAVEWATCH III regtest ww3_ufs1.3 (https://github.com/NOAA-EMC/WW3/tree/develop/regtests/ww3_ufs1.3).
- https://registry.opendata.aws/noaa-wave-ensemble-reforecast/
- License: Open Data. There are no restrictions on the use of this data.
- Size: 114.3 TiB

5. UCSC Genome Browser Sequence and Annotations
- The UCSC Genome Browser is an online graphical viewer for genomes, a genome browser, hosted by the University of California, Santa Cruz (UCSC). The interactive website offers access to genome sequence data from a variety of vertebrate and invertebrate species and major model organisms, integrated with a large collection of aligned annotations. This dataset is a copy of the MySQL tables in MyISAM binary and tab-sep format and all binary files in custom formats, sometimes referred as 'gbdb'-files. Data from the UCSC Genome Browser is free and open for use by anyone. However, every genome annotation track has been created by an academic research group, or, in a few cases, by commercial companies. Please acknowledge them by citing them. The information can be found by going to https://genome.ucsc.edu, selecting the respective genome assembly and clicking on the data track. At the end of the documentation, we provide a list of references and acknowledgements.
- https://registry.opendata.aws/ucsc-genome-browser/
- License: https://genome.ucsc.edu/license/
- Size: 81.7 TiB

6. Open Observatory of Network Interference (OONI)
- A free software, global observation network for detecting censorship, surveillance and traffic manipulation on the internet.
- https://registry.opendata.aws/ooni/
- License: Creative Commons Attribution 4.0 International (CC BY 4.0)
- Size: 135 TiB


7. OpenProteinSet
- Multiple sequence alignments (MSAs) for 132,000 unique Protein Data Bank (PDB) chains, covering 640,000 PDB chains in total, and 4,850,000 UniClust30 clusters. Template hits are also provided for the PDB chains and 270,000 UniClust30 clusters chosen for maximal diversity and MSA depth. MSAs were generated with HHBlits (-n3) and JackHMMER against MGnify, BFD, UniRef90, and UniClust30 while templates were identified from PDB70 with HHSearch, all according to procedures outlined in the supplement to the AlphaFold 2 Nature paper, Jumper et al. 2021. We expect the database to be broadly useful to structural biologists training or validating deep learning models for protein structure prediction and related tasks.
- https://registry.opendata.aws/openfold/
- License: Creative Commons Attribution 4.0 International (CC BY 4.0)
- Size: 4.9 TiB

8. AI2 Diagram Dataset (AI2D)
- 4,817 illustrative diagrams for research on diagram understanding and associated question answering.
- https://registry.opendata.aws/allenai-diagrams/
- License: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
- Size: 6.4 TiB

9. Legal Entity Identifier (LEI) and Legal Entity Reference Data (LE-RD)
- The Legal Entity Identifier (LEI) is a 20-character, alpha-numeric code based on the ISO 17442 standard developed by the International Organization for Standardization (ISO). It connects to key reference information that enables clear and unique identification of legal entities participating in financial transactions. Each LEI contains information about an entity’s ownership structure and thus answers the questions of 'who is who’ and ‘who owns whom’. Simply put, the publicly available LEI data pool can be regarded as a global directory, which greatly enhances transparency in the global marketplace. The Financial Stability Board (FSB) has reiterated that global LEI adoption underpins “multiple financial stability objectives” such as improved risk management in firms as well as better assessment of micro and macro prudential risks. As a result, it promotes market integrity while containing market abuse and financial fraud. Last but not least, LEI rollout “supports higher quality and accuracy of financial data overall”. The publicly available LEI data pool is a unique key to standardized information on legal entities globally. The data is registered and regularly verified according to protocols and procedures established by the Regulatory Oversight Committee. In cooperation with its partners in the Global LEI System, the Global Legal Entity Identifier Foundation (GLEIF) continues to focus on further optimizing the quality, reliability and usability of LEI data, empowering market participants to benefit from the wealth of information available with the LEI population. The drivers of the LEI initiative, i.e. the Group of 20, the FSB and many regulators around the world, have emphasized the need to make the LEI a broad public good. The Global LEI Index, made available by GLEIF, greatly contributes to meeting this objective. It puts the complete LEI data at the disposal of any interested party, conveniently and free of charge. 
The benefits for the wider business community to be generated with the Global LEI Index grow in line with the rate of LEI adoption. To maximize the benefits of entity identification across financial markets and beyond, firms are therefore encouraged to engage in the process and get their own LEI. Obtaining an LEI is easy. Registrants simply contact their preferred business partner from the list of LEI issuing organizations available on the GLEIF website.
- https://registry.opendata.aws/lei/
- License: Creative Commons (CC0) license
- Size: 6.0 TiB

10. COVID-19 Genome Sequence Dataset
- A centralized sequence repository for all records containing sequence associated with the novel coronavirus (SARS-CoV-2) submitted to the National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA). Included are both the original sequences submitted by the principal investigator as well as SRA-processed sequences that require the SRA Toolkit for analysis. Additionally, submitter-provided metadata included in associated BioSample and BioProject records is available alongside NCBI-calculated data, such as k-mer-based taxonomy analysis results, contiguous assemblies (contigs) and associated statistics such as contig length, blast results for the assembled contigs, contig annotation, blast databases of contigs and their annotated peptides, and VCF files generated for each record relative to the SARS-CoV-2 RefSeq record. Finally, metadata is additionally made available in parquet format to facilitate search and filtering using the AWS Athena Service.
- https://registry.opendata.aws/ncbi-covid-19/
- License: NIH Genomic Data Sharing Policy
- Size: 1.2 PiB

Where was the data currently stored in this dataset sourced from

My Own Storage Infra

If you answered "Other" in the previous question, enter the details here

No response

How do you plan to prepare the dataset

IPFS, graphsplit

If you answered "other/custom tool" in the previous question, enter the details here

No response

Please share a sample of the data

FileDrive Datasets: 
https://datasets.filedrive.io/

Original Source:
1. Transiting Exoplanet Survey Satellite (TESS)
- https://registry.opendata.aws/tess/

2. Oxford Nanopore Technologies Benchmark Datasets
- https://registry.opendata.aws/ont-open-data/

3. Community Earth System Model v2 ARISE (CESM2 ARISE)
- https://registry.opendata.aws/ncar-cesm2-arise/

4. NOAA Wave Ensemble Reforecast
- https://registry.opendata.aws/noaa-wave-ensemble-reforecast/

5. UCSC Genome Browser Sequence and Annotations
- https://registry.opendata.aws/ucsc-genome-browser/

6. Open Observatory of Network Interference (OONI)
- https://registry.opendata.aws/ooni/

7. OpenProteinSet
- https://registry.opendata.aws/openfold/

8. AI2 Diagram Dataset (AI2D)
- https://registry.opendata.aws/allenai-diagrams/

9. Legal Entity Identifier (LEI) and Legal Entity Reference Data (LE-RD)
- https://registry.opendata.aws/lei/

10. COVID-19 Genome Sequence Dataset
- https://registry.opendata.aws/ncbi-covid-19/

Confirm that this is a public dataset that can be retrieved by anyone on the Network

  • I confirm

If you chose not to confirm, what was the reason

No response

What is the expected retrieval frequency for this data

Weekly

For how long do you plan to keep this dataset stored on Filecoin

More than 3 years

In which geographies do you plan on making storage deals

Greater China, Asia other than Greater China, North America, Europe, Australia (continent)

How will you be distributing your data to storage providers

HTTP or FTP server, IPFS, Shipping hard drives

How do you plan to choose storage providers

Slack, Filmine

If you answered "Others" in the previous question, what is the tool or platform you plan to use

No response

If you already have a list of storage providers to work with, fill out their names and provider IDs below

Please check the Checker Reports of our previous LDN applications:
- https://github.com/filecoin-project/filecoin-plus-large-datasets/issues/1266
- https://github.com/filecoin-project/filecoin-plus-large-datasets/issues/1267
- https://github.com/filecoin-project/filecoin-plus-large-datasets/issues/1268

How do you plan to make deals to your storage providers

Lotus client

If you answered "Others/custom tool" in the previous question, enter the details here

No response

Can you confirm that you will follow the Fil+ guideline

Yes

@large-datacap-requests

Thanks for your request!

Heads up, you’re requesting more than the typical weekly onboarding rate of DataCap!

@large-datacap-requests

Thanks for your request!
Everything looks good. 👌

A Governance Team member will review the information provided and get back to you soon.

@Sunnyiscoming
Collaborator

It is recommended that applications be submitted separately for each dataset.

Take #1267 for example.
(screenshots of the checker report omitted)
It seems that you stored too many copies. Can you explain that?

@Sunnyiscoming
Collaborator

If you mix all your datasets, rather than apply for them separately, it will be difficult to identify the number of your backups.

@herrehesse

@laurarenpanda Greetings, it appears that several of these datasets have been previously stored multiple times and are currently present on the chain. Therefore, I'm uncertain why they would qualify for a datacap request once more.

I'm not in favor of creating numerous unnecessary copies of identical sets, as it wouldn't serve any practical purpose and would waste storage capacity.

@laurarenpanda
Author

laurarenpanda commented Feb 20, 2023

If you mix all your datasets, rather than apply for them separately, it will be difficult to identify the number of your backups.

Hi @Sunnyiscoming.

I understand your concern.
Since November 2020, FileDrive Labs has participated in the FIL+ program, followed its rules, and built a reputation. We have brought more than 5 PiB of valuable data to Filecoin with about 50 SPs in total.
According to the rules of FIL+ and methods recommended by the community, we usually store 6-10 copies with different SPs.
To verify our behavior, I highly recommend synthesizing reports with all CIDs and deal info from #1266, #1267, and #1268. If the Checker Bot doesn't support this yet, I could also offer a report with deal info from filplus.info.
Many applications storing public datasets may face the same problem. Instead of splitting 5 applications into 8 or more, IMO it would be better to support a multi-LDN checker report.

Would like to know your advice on it.
Thank you.

@laurarenpanda
Author

@laurarenpanda Greetings, it appears that several of these datasets have been previously stored multiple times and are currently present on the chain. Therefore, I'm uncertain why they would qualify for a datacap request once more.

I'm not in favor of creating numerous unnecessary copies of identical sets, as it wouldn't serve any practical purpose and would waste storage capacity.

Hi @herrehesse.

Thanks for pointing it out.
It's a little hard to know how many copies of each dataset have been stored and for what deal duration.
It would help a lot if you could give us some specific info. Then we could consider removing some datasets and adding new ones instead.
Thanks again.

@herrehesse

herrehesse commented Feb 20, 2023

@laurarenpanda Thank you for your professional reply. I will investigate this matter by examining precise duplication figures and geographical locations. In my professional opinion, it would be prudent to limit each set to no more than 10 copies. As for the primary sets under consideration, they were successfully restored during the Filecoin Slingshot Restore initiative and previous Slingshot rounds (some of which may have already expired).

I will get back to you with my findings as soon as possible.

@laurarenpanda
Author

Hi @Sunnyiscoming.

Please check the following report on Deal Data Replication of #1266, #1267, and #1268.
We used the code from filplus-checker with filplus.info's database.

If necessary, we can regularly provide a multi-LDN report for FileDrive Datasets Landing Plan V2.
We'd like to know your suggestions on it.

Thank you!

Deal Data Replication

| Unique Data Size (TiB) | Total Deal Size (TiB) | Num of Replicas | Percent |
| --- | --- | --- | --- |
| 68.171875 | 68.171875 | 1 | 0.42% |
| 232.38427734375 | 464.7685546875 | 2 | 2.85% |
| 520.5380859375 | 1561.6142578125 | 3 | 9.59% |
| 377.2109375 | 1508.84375 | 4 | 9.26% |
| 145.5085563659668 | 727.683406829834 | 5 | 4.47% |
| 3.84375 | 23.0625 | 6 | 0.14% |
| 377.09375 | 2639.71875 | 7 | 16.20% |
| 195.984375 | 1568.25 | 8 | 9.63% |
| 44.3125 | 398.96875 | 9 | 2.45% |
| 439.13671875 | 4494.9453125 | 10 | 27.59% |
| 257.718994140625 | 2834.908935546875 | 11 | 17.40% |

@cryptowhizzard

Hi @laurarenpanda

Can you do KYC please with me? I am willing to propose here.

https://form.jotform.com/230337462961356

@laurarenpanda
Author

Hi @laurarenpanda

Can you do KYC please with me? I am willing to propose here.

https://form.jotform.com/230337462961356

@cryptowhizzard

Done.

BTW, I would like to point out that the input limiter for Phone Number does not work for some regions like China (11 digits).

@Sunnyiscoming
Collaborator

@laurarenpanda How do you decide on the number of backups? Some data was even backed up in 11 copies.

@laurarenpanda
Author

laurarenpanda commented Feb 22, 2023

@laurarenpanda How do you decide on the number of backups? Some data was even backed up in 11 copies.

@Sunnyiscoming
We try to distribute data to SPs located in more places. The ideal state is for each piece of data to have 6-10 copies on the network. However, the speed of data transmission and sealing differs among SPs. As a result, some data has 11 copies while some has only one. But we are still working with these SPs and trying our best to reach the ideal result.
Besides, from my perspective, 11 copies doesn't mean over-replication.
If the rules of FIL+ and the consensus of Notaries hold that copies should be limited to 10, we could change our distribution strategy.
We'd like to provide regular reports for our multi-LDN applications until the Checker Bot supports this function. As for the limit on copies, we could hold some discussions among community members.
Thanks!

@herrehesse

Dear Filecoin+ Github applicant,

We encourage you to review the discussions in issue #832. It's important to ensure that your datacap requests are valid, necessary, and add value to the network. By doing so, you can help to maintain the integrity and sustainability of the Filecoin network.

You can find the link to issue #832 here: filecoin-project/notary-governance#832

Thank you for your understanding and cooperation.

@herrehesse

(screenshot omitted)

@laurarenpanda
Author

About the above topic, please check my comment in filecoin-project/notary-governance#832.

@laurarenpanda
Author

laurarenpanda commented Feb 23, 2023

Sorry for missing this important info in the README.
The Checker Bot already supports multi-LDN checker reports.
Check the report for #1266, #1267, and #1268: #1266 (comment)


@Sunnyiscoming
Collaborator

There is some discussion about whether clients should open a separate issue for each dataset. So I advise you to open one issue per dataset for now.

@laurarenpanda
Author

@Sunnyiscoming
As @kernelogic highlighted in filecoin-project/notary-governance#832 (comment):

There are also benefits, actually: some public datasets are quite small (< 10 TiB) and no one will want to do them. Having a merged LDN will give them a chance to be onboarded.

We have FileDrive Datasets for browsing these datasets, and it will help onboard more valuable data to Filecoin.
Please lend a hand to accelerate our Datasets Landing Plan.

@data-programs added the kyc verified (User has passed KYC check) label Dec 27, 2023

mikezli commented Dec 27, 2023

Request Approved

Your Datacap Allocation Request has been approved by the Notary

Message sent to Filecoin Network

bafy2bzacedyfsks2tzmt6j53rkimpicglr3j2t63v2b66spgeu5xjxtasbi5g

Address

f1mnahpxpyrazryxuh24rcyelb4ksgwaztvskjzcq

Datacap Allocated

750.00 TiB

Signer Address

f1dnb3uz7sylxk6emti3ififcvu3nlufnnsjui6ea

Id

e32f675f-57d9-469d-b836-19cd2c312800

You can check the status of the message here: https://filfox.info/en/message/bafy2bzacedyfsks2tzmt6j53rkimpicglr3j2t63v2b66spgeu5xjxtasbi5g


This issue has reached the total DataCap requested and should be closed.


github-actions bot commented Jan 7, 2024

This application has not seen any responses in the last 10 days. This issue will be marked with Stale label and will be closed in 4 days. Comment if you want to keep this application open.

--
Commented by Stale Bot.

@laurarenpanda
Author

Please keep this open.


This application has not seen any responses in the last 10 days. This issue will be marked with Stale label and will be closed in 4 days. Comment if you want to keep this application open.

--
Commented by Stale Bot.

@laurarenpanda
Author

Please keep this application open.


This application has not seen any responses in the last 10 days. This issue will be marked with Stale label and will be closed in 4 days. Comment if you want to keep this application open.

--
Commented by Stale Bot.

@laurarenpanda
Author

Please keep this application open.


This application has not seen any responses in the last 10 days. This issue will be marked with Stale label and will be closed in 4 days. Comment if you want to keep this application open.

--
Commented by Stale Bot.

@laurarenpanda
Author

Please keep this application open.


This application has not seen any responses in the last 10 days. This issue will be marked with Stale label and will be closed in 4 days. Comment if you want to keep this application open.

--
Commented by Stale Bot.


This application has not seen any responses in the last 14 days, so for now it is being closed. Please feel free to contact the Fil+ Gov team to re-open the application if it is still being processed. Thank you!

--
Commented by Stale Bot.

@laurarenpanda
Author

Please keep this application open.
