CIP-0119: Extend dRep metadata for better discovery / adoption #977

Open
MadOrkestra opened this issue Feb 1, 2025 · 14 comments
Comments

@MadOrkestra
Contributor

MadOrkestra commented Feb 1, 2025

I'm opening this issue as a starting point for a discussion about extending the dRep metadata fields defined in CIP-119 for better discovery on dRep explorers. From a UX perspective, we currently have very few filter options to offer users on frontends so they can actually find a dRep that fits their needs, because freeform text fields don't do the trick. dRep discovery right now depends purely on recommendation or social media visibility, and filter options on dRep explorers are mostly based on one thing: delegation.

This has already led to a dRep distribution that is far from good, because instead of browsing through 500+ dReps, people simply choose dReps with high delegation.

Information that could be included for better discovery:

  • Location/Region
  • Languages spoken (different from the language used in the metadata file)
  • Association with a stake pool (thinking of WEED stake pool e.g.)
  • Category/Areas of Interest (Yuta has assembled a list of dev dReps, Jenny is collecting female dReps, etc.)
  • Ecosystem Roles (Stake Pool Operator, Intersect Member, etc.)

The challenge with most of these fields/arrays will be agreeing on standards for how to create and store this data, so that all platforms can implement it when creating metadata files and use it as filter options when displaying search results. A location/region field becomes utterly useless if you can throw "Europe", "Europa", "EU" and "European Union" in there, for example.
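
For illustration only, here is a rough sketch (as a TypeScript type, since none of this is standardized anywhere) of the kind of optional, controlled-vocabulary fields meant above; every name and value is a hypothetical placeholder, not part of any CIP:

// Hypothetical sketch only: none of these field names are standardized anywhere yet.
// The idea is that values come from a shared, controlled vocabulary rather than free text.
type DRepDiscoveryFields = {
  location?: string;               // e.g. "europe" from an agreed region list, not freeform text
  languages?: string[];            // e.g. ISO 639-1 codes such as ["en", "de"]
  associatedStakePools?: string[]; // e.g. bech32 pool IDs
  areasOfInterest?: string[];      // e.g. ["development", "education"] from a shared list
  ecosystemRoles?: string[];       // e.g. ["spo", "intersect-member"]
};

// How such a fragment might look for one dRep:
const exampleDiscoveryFields: DRepDiscoveryFields = {
  location: "europe",
  languages: ["en", "de"],
  areasOfInterest: ["development"],
  ecosystemRoles: ["spo"],
};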

This needs some work from all of us, at least if we want better dRep discovery and a more decentralized delegation distribution among dReps. Otherwise we will continue the current popularity contest, where dReps with existing platforms or already high delegation continue to grow their delegation, leading to a governance system that, in reality, is based on very few opinions instead of many.

We are already seeing different groups of people working around the proposed CIP-119 (the "DRep Collective" is working on an unofficial CIP that wants to move dRep data into NFTs, tempo.vote has gone to the lengths of scraping the web to come up with dRep locations, etc.), so I'd urge all of us to find an ecosystem-wide solution to this problem before we end up in a scenario where platforms and projects throw random stuff into metadata files to fulfill their users' needs without any other platform being able to provide the same discovery.

References:

@Ryun1
Collaborator

Ryun1 commented Feb 1, 2025

The metadata standards we have right now were all designed before we had any of this properly online, so I'm eager to see these standards evolve and be extended to solve today's problems.

I would maybe encourage this to become its own proposal, rather than adding to CIP-119, just because tooling considered CIP-119 compliant would become non-compliant, which can be frustrating and confusing.

@MadOrkestra
Contributor Author

I'd really argue the other way around: let's make these changes now and move on them quickly, before we have more gov tool platforms building on it. None of these are breaking changes, as the fields are optional; the proposed character limit change (#975) becomes more crucial with them, so why not do this all in one go while we still can.

@MadOrkestra
Contributor Author

Is there any precedent in other CIPs for how you'd go about something like the arrays I proposed? Some kind of registry or process for something like that? Host a JSON file on GitHub and allow pull requests?

I can't emphasize enough how crucial it is not to just define a datatype and then let everyone throw their stuff into it. It will become cluttered over time and ultimately become useless as a filter option for frontends. I am aware we probably won't solve this problem at the CIP level, and we won't be able to prevent people from manually adding entries to their metadata, but if we had some kind of single source of data that frontends could pull from to populate dropdowns during metadata creation, that would already go a long way imo.
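
A sketch of the kind of thing meant here (the URL and file layout are hypothetical): frontends fetch one shared vocabulary file and use it to populate their dropdowns, instead of everyone inventing their own values.

// Hypothetical shared vocabulary hosted in a public repo and fetched by frontends.
type DiscoveryVocabulary = {
  regions: string[];        // e.g. ["europe", "north-america", ...]
  areasOfInterest: string[];
  ecosystemRoles: string[];
};

async function loadVocabulary(): Promise<DiscoveryVocabulary> {
  // Placeholder URL: a JSON file maintained via pull requests, as suggested above.
  const res = await fetch("https://raw.githubusercontent.com/example/drep-discovery/main/vocabulary.json");
  if (!res.ok) throw new Error(`Failed to load vocabulary: ${res.status}`);
  return (await res.json()) as DiscoveryVocabulary;
}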

@Quantumplation
Contributor

The whole point of CIP-100 is that it can be extended by layering, rather than by changing specifications, exactly so that the spec can evolve over time without breaking existing implementations/CIPs and can support experimentation like this. It's even designed to be resilient to a breakdown in the CIP process (such as the editors refusing to merge a CIP, etc.).

CIP-108 / CIP-119 just define certain fields that governance metadata might include; they do not preclude defining other fields in the metadata document, in new CIPs, etc.

For example, a governance metadata document can include fields from CIP-100, CIP-108, CIP-119, CIP-1234567, and even non-CIP fields where the specification is self-hosted, so long as it uniquely identifies those fields via the @context. It even supports the same field being defined differently by different CIPs.

Then, tool authors can add support for these standards as they see fit: they fetch the metadata document, normalize the fields so they're all uniquely identified, and then display the fields they support in the UI, etc.
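
As a sketch of that normalization step (assuming the jsonld npm package; the IRI in the usage comment is a made-up example), a tool might expand the document so every short key becomes its full IRI and then look fields up by that IRI:

import jsonld from "jsonld";

// Expand a CIP-100-style document so every short key is replaced by the full IRI
// declared in its @context, then look a field up by that IRI.
// Note: jsonld.expand returns JSON-LD expanded form, so the value comes back
// wrapped, e.g. [{ "@value": "ABC" }], and still needs unwrapping before display.
async function getField(doc: object, iri: string): Promise<unknown> {
  const expanded = await jsonld.expand(doc); // array of node objects keyed by full IRIs
  for (const node of expanded) {
    if (iri in node) return (node as Record<string, unknown>)[iri];
  }
  return undefined;
}

// Usage (the IRI below is illustrative, not a real CIP definition):
// const title = await getField(metadataDoc, "https://example.org/fields/title");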

Adding support for all possible CIPs isn't mandatory: social consensus decides which fields are the most useful and widely used.

The idea would be that every explorer provides some very basic low-effort view for every field that is present on the document, regardless of whether it 'understands' this field or not: either just show the raw JSON, or iterate over each field and render it as text, etc, as if it were just a completely arbitrary JSON document.

Then, for the fields that the explorer wants to / has time to add support for, you can build more feature rich interfaces, search capabilities, etc.

Trying to force a single standard on everyone is going to be more painful: the CIP process will devolve into endless arguing about what the right fields are, what the right type for those fields is, etc; the spec will be slow to evolve, and tooling authors will resist changes because of the social stigma of being seen as "non-compliant" when it updates, and the time required to go back and update, etc.

So, I'd highly recommend just creating a new CIP that defines the fields you want to be able to filter on, arguing for the benefit of this, socializing the proposal so that you get feedback, implementing those fields in your own tools, and then advocating for the adoption of those fields in tools that are handling the dRep registration.

@MadOrkestra
Contributor Author

MadOrkestra commented Feb 1, 2025

So then why have standards at all? I can now build a tool that throws some random fields into metadata files, every explorer does the same, and all we will ever accomplish is a complete mess that will not have any usability advantages on any frontend except my own platform. We don't even need a CIP for that.

Are we creating these standards to drive decentralization and build better frontends (something everyone is complaining about all the time), or for what reason exactly? So that db-sync is happy?

This CIP is open and, as far as I understand the signature issue, will undergo some serious changes anyway. Why not use this opportunity, where no gov tool provider will be compliant anyway because everyone has to implement changes? And optional fields do not force anything on anyone. I don't get it.

(Sorry if I sound frustrated. I am. I have been advocating all month for people not to build their own independent solutions for this, but apparently they were right. Let's just connect dReps to our platforms, fill out some forms, and store the data in centralized databases like good old Web2; we can do this faster than anyone can even spell CIP, and with explorers being built on zero budgets, this seems the way to go then.)

@Quantumplation
Contributor

Quantumplation commented Feb 1, 2025

What you describe in your first paragraph is a straw-man representation of what CIP-100 seeks to achieve. What you're worried about would be more apt if we had just said the governance metadata was a raw JSON document, or a raw markdown document, but CIP-100 does impose structure.

The goal of CIP-100 is to enable structured and documented experimentation and allow an organic ecosystem of metadata to evolve, with a richly interconnected body of metadata, balancing the machine interpretability of that metadata with the expressiveness we know we'll need, exactly because what we need is an open question, one that ultimately doesn't have a single answer.

Let me try to motivate the current design in two ways, first by comparison to another "standard" that used versioning, and then by construction.

CIP-25

CIP-0025 is a standard governing the metadata fields that can be attached to a token, to provide richer experiences in tools: logos to display in explorers, names and descriptions, etc.

It chose a "versioned" approach, where non-standard fields could be added to the metadata document, and future versions could add new fields, or make breaking changes if needed.

This has had the following effects:

  • The standard is largely static; it has had a version 2, but has been touched very little since, despite there being lots of fields that would be useful for tools to support
  • I'm sure people like @Ryun1 or @Crypto2099 can attest to how much discussion has gone into potential updates to the spec that have gone nowhere
  • Because of the lack of updates, competing and often incompatible standards have emerged, like CIP-26, CIP-68, CIP-60, CIP-86, CIP-99, CIP-102, CIP-124, and even some now-defunct standards
  • Tools and dApps are generally hesitant or slow to adopt these, or the updates to them, because of the confusing and competing formats
  • There is substantial metadata in the wild that mixes these standards, leading to ambiguous resolution rules; when showing the name for a token that publishes its metadata via CIP-25, the Cardano token registry, and as a CIP-68 token, which takes precedence?

Just ask @Crypto2099 how much of a pain in the ass this has been! 😅

Construction

Instead, let's try building up from the simplest possible solution, think through what each actor actually wants to do, and see how that leads us to the design of CIP-100.

The simplest possible solution is that every document is treated as raw binary data, shown as is, and the on-chain anchor must be the hash of that exact content.

This has zero ambiguity, and full flexibility for DReps and proposers to tell exactly the story/information they want to tell.

However, the tooling experience is terrible: the only user experience that can be faithfully provided is a list of title-less, timestamped governance actions / dreps / votes, with a link to the raw document itself.

Because the ledger itself can't enforce anything about the structure (it would certainly be a terrifying thing for block producing nodes to start fetching large documents from the internet while validating a block), this will always be a base case that every tool must support; an escape hatch to the authentic, native content that the proposer uploaded.

But, we want to do better when we can, so let's say that we'll show the raw content faithfully as a fallback, but we recommend that it is JSON, so we can at least represent structured data and pull out things like "title", "author name", "motivation", etc.

At this point, an example JSON document might look like this:

{
  "title": "ABC",
  "author": "Pi",
  "motivation": "# ABC\n\n123",
  "budget": { "item1": 100, "item2": 200 }
}

This has the benefit that a computer can pluck out those values, and even though it might look intimidating, even a nontechnical user can look at that and probably work out what it's trying to say.

However, on-chain governance has never been done before. We don't know, today, what the best format for that will be. Is it better to have a single "motivation" field that's free-text? or maybe a motivation field that is markdown? or HTML? or some future structured language?

What about representing budgets? Should it be a key value pair of "description" to "amount"? should it include/represent a payment schedule? etc.

Because a tooling author needs to know how to interpret the field in order to display it (rendering raw text is different from rendering markdown is different from rendering HTML) or to create it (a rich text editing experience in the browser needs its UI structured differently and produces different payloads to upload), any individual field should have a canonical answer for how a machine should interpret or produce it.

And what about the fact that all these terms are very English-centric: the only Japanese people who can natively write their own governance metadata, or build tools that construct or interpret governance metadata, are those who understand enough English to know the terms "budget", "author", etc., meaning they are beholden, in some small way, to others. (As an aside, this is a big problem across all of programming, not unique to us, so it's not the primary motivator for CIP-100, but it would at least be nice not to contribute to making the problem worse.)

Because of the broad scope and interest of governance, any one of these fields is a potential source for devolving into endless debates about the "right" structure in CIP threads, and governance will stall. Tooling authors will hold off implementing any enhancements to governance until a proposed field has a broadly accepted structure.

Beyond just machine interpretability, we want people to be able to understand the differences of interpretation between fields too: What does "budget" mean? Is it a strong commitment and guarantee? Is it a best-guess estimate? Is the "author" the person who actually sat down and typed up the proposal, or the main advocate behind the proposal, even if someone at their company did most of the actual typing and submitted it on-chain? American readers might be unfamiliar with the term "family name", etc.

All of that ambiguity compounds the challenge in getting agreement on what a field should mean, as well as contributing to our inability to change it in the future if we get that definition wrong.

One way to solve this is to uniquely identify the fields: let's say, now, that the metadata document is a JSON document where every key is a UUID. Then, we have a database of UUIDs and exactly what they mean, and how they should be interpreted by a computer.

At this point, the document would look like this:

{
  "8af21032-38d4-4a5a-a3c1-11e3c51e7476": "ABC",
  "8d613547-b438-4898-be4c-c5cfb446b95c": "Pi",
  "b5d02633-2124-4bd8-91b5-9b512dbbc325": "# ABC\n\n123",
  "1e924ad7-844d-4824-ac4b-1cd6db3915a6": { "item1": 100, "item2": 200 },
}

An explorer can now iterate over the fields in the JSON document, and switch based on the key:

  • if it's 8af21032-38d4-4a5a-a3c1-11e3c51e7476, display the word "Title" localized to the local language, and then a plain text field.
  • if it's b5d02633-2124-4bd8-91b5-9b512dbbc325, display the word "Motivation", and render the field value as markdown
  • if it's eb57e955-05a0-4d81-aea1-cb523eb705bb, display the word "Motivation", and render the field value as LaTeX
  • ...
  • otherwise, if we don't recognize it, have a section at the bottom of the page that displays the UUID, and a code block with the raw content of the field value

Now, this design has the benefit that there is never any ambiguity: two different interpretations of "budget" are two different UUIDs. The code to render each one, if you choose to support it, is relatively trivial, and doesn't require complex "if it looks like a duck, quacks like a duck..." style case analysis like CIP-25 does today.
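
A minimal sketch of that rendering loop for a hypothetical explorer, using the illustrative UUIDs above (a real tool would hand markdown off to a proper renderer; the functions here just return strings):

// Hypothetical sketch: walk a document whose keys are unique identifiers and render
// each field, falling back to the raw content for anything the tool doesn't recognize.
const TITLE = "8af21032-38d4-4a5a-a3c1-11e3c51e7476";
const MOTIVATION_MD = "b5d02633-2124-4bd8-91b5-9b512dbbc325";

function renderField(key: string, value: unknown): string {
  switch (key) {
    case TITLE:
      return `Title: ${String(value)}`;                  // plain text field
    case MOTIVATION_MD:
      return `Motivation (markdown):\n${String(value)}`; // would go through a markdown renderer in a real tool
    default:
      // Unknown field: show the identifier and the raw content as a fallback.
      return `Unknown field ${key}:\n${JSON.stringify(value, null, 2)}`;
  }
}

function renderDocument(doc: Record<string, unknown>): string {
  return Object.entries(doc).map(([k, v]) => renderField(k, v)).join("\n\n");
}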

However, it obviously has some strong drawbacks:

  • It has damaged the human interpretability; when the explorer falls back to the default and displays the UUID, that doesn't give the human any hint as to what it means, how to interpret it, or even where to find a description of it; at least with raw JSON, even a non-technical user could open the file and read the characters "motivation" and get some gist of what the governance proposal was trying to say
  • It's beholden to a central registry of these UUIDs; That central registry could become a bottleneck, could go offline with no successor, or could become politically motivated and start rejecting/purging the database of any fields that are "too woke" (or are "racially insensitive", from the other end of the political spectrum)

To mitigate the first bullet point, what if instead of a UUID, we used a URL like https://cardano-gov-metadata-database.com/fields/b5d02633-2124-4bd8-91b5-9b512dbbc325, and that URL linked to the documentation about how to interpret and render that field? So, the JSON document might look like this:

{
  "https://cardano-gov-metadata-database.com/fields/8af21032-38d4-4a5a-a3c1-11e3c51e7476": "ABC",
  "https://cardano-gov-metadata-database.com/fields/8d613547-b438-4898-be4c-c5cfb446b95c": "Pi",
  "https://cardano-gov-metadata-database.com/fields/b5d02633-2124-4bd8-91b5-9b512dbbc325": "# ABC\n\n123",
  "https://cardano-gov-metadata-database.com/fields/1e924ad7-844d-4824-ac4b-1cd6db3915a6": { "item1": 100, "item2": 200 },
}

Suddenly:

  • Even a non-technical user recognizes links, and can click on a link to read about it
  • A tooling author who gets a support ticket about an unsupported field has a convenient link to the documentation from seeing it in the wild

But, if we're making it a URL, URLs can be made pretty unique, so we can make them a bit more human readable now by adding the name back in.

{
  "https://cardano-gov-metadata-database.com/fields/8af21032-38d4-4a5a-a3c1-11e3c51e7476/title": "ABC",
  "https://cardano-gov-metadata-database.com/fields/8d613547-b438-4898-be4c-c5cfb446b95c/author": "Pi",
  "https://cardano-gov-metadata-database.com/fields/b5d02633-2124-4bd8-91b5-9b512dbbc325/motivation": "# ABC\n\n123",
  "https://cardano-gov-metadata-database.com/fields/1e924ad7-844d-4824-ac4b-1cd6db3915a6/budget": { "item1": 100, "item2": 200 },
}

And, the fact that they're URLs solves the second issue: if "cardano-gov-metadata-database.com" goes down without a clear successor, or is censoring a certain field from appearing, someone can quickly step in and host that field on their own domain:

{
  "https://cardano-gov-metadata-database.com/fields/8af21032-38d4-4a5a-a3c1-11e3c51e7476/title": "ABC",
  "https://cardano-gov-metadata-database.com/fields/8d613547-b438-4898-be4c-c5cfb446b95c/author": "Pi",
  "https://cardano-gov-metadata-database.com/fields/b5d02633-2124-4bd8-91b5-9b512dbbc325/motivation": "# ABC\n\n123",
  "https://cardano-gov-metadata-database.com/fields/1e924ad7-844d-4824-ac4b-1cd6db3915a6/budget": { "item1": 100, "item2": 200 },
  "https://314pool.com/0f193b1d-1861-430c-bd99-057e918a9058/cant_censor_me": "XYZ",
}

But, there's a lot of noise in the payload, visually, and a lot of repetition, especially if the document is large or one field appears multiple times.

So, let's just create a little dictionary at the top of the document that makes it self-describing:

{
  "@definitions": {
    "title": "https://cardano-gov-metadata-database.com/fields/8af21032-38d4-4a5a-a3c1-11e3c51e7476/title",
    "author": "https://cardano-gov-metadata-database.com/fields/8d613547-b438-4898-be4c-c5cfb446b95c/author",
    "motivation": "https://cardano-gov-metadata-database.com/fields/b5d02633-2124-4bd8-91b5-9b512dbbc325/motivation",
    "budget": "https://cardano-gov-metadata-database.com/fields/1e924ad7-844d-4824-ac4b-1cd6db3915a6/budget",
    "cant_censor_me": "https://314pool.com/0f193b1d-1861-430c-bd99-057e918a9058/cant_censor_me"
  },
  "title": "ABC",
  "author": "Pi",
  "motivation": "# ABC\n\n123",
  "budget": { "item1": 100, "item2": 200 },
  "cant_censor_me": "XYZ",
}

This is essentially the JSON-LD specification, but the dictionary is called @context, they've thought about this a lot more to make it as flexible as you'd need, and there's a rich ecosystem of tooling emerging around the spec.
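
For comparison, here is roughly the same document written with an actual JSON-LD @context (shown as a TypeScript object literal to stay consistent with the other sketches in this thread; this illustrates the shape only, not the exact CIP-100 vocabulary):

// The @definitions dictionary from above becomes the JSON-LD @context;
// short keys in the body resolve to the full URLs declared in the context.
const exampleDocument = {
  "@context": {
    title: "https://cardano-gov-metadata-database.com/fields/8af21032-38d4-4a5a-a3c1-11e3c51e7476/title",
    author: "https://cardano-gov-metadata-database.com/fields/8d613547-b438-4898-be4c-c5cfb446b95c/author",
    motivation: "https://cardano-gov-metadata-database.com/fields/b5d02633-2124-4bd8-91b5-9b512dbbc325/motivation",
    budget: "https://cardano-gov-metadata-database.com/fields/1e924ad7-844d-4824-ac4b-1cd6db3915a6/budget",
    cant_censor_me: "https://314pool.com/0f193b1d-1861-430c-bd99-057e918a9058/cant_censor_me",
  },
  title: "ABC",
  author: "Pi",
  motivation: "# ABC\n\n123",
  budget: { item1: 100, item2: 200 },
  cant_censor_me: "XYZ",
};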

So at this point, the ideal tool:

  • gives users access to the raw metadata as needed, via a separate tab, a link to the hosted document, etc.
  • makes a best effort to display everything in the metadata: rich experiences for the things it understands, and some kind of fallback for the fields it doesn't understand
  • provides a hyperlink for every field, for the users who are curious what the field actually means

Now, let's look at what happens when some requirement like the one you outline here comes along. You, as the maintainer of cgov.app, want to provide richer discoverability for DReps. You have some idea about how that can best be done (e.g. defining a Location field with a standardized set of options).

In the CIP-25 style standard, where we try to agree on a universal definition of what you "should" provide, you have to start a revision to the CIP, argue a ton with highly opinionated people about what the exact right list of locations should be, get it approved and merged by the CIP process, and hound a bunch of other tools to implement it.

As they/you implement it, you discover there's already metadata out there that used the "location" field in a non-standard way to give a list of URLs to find additional information; someone else put their home town, and someone else used it to call out their favorite travel destination, and still someone else uses it to report their current location, updated in real time as they travel around the world.

Sure, these people were playing with fire by using a field that wasn't standardized, but hey, they did so, and now you have to live with it.

If you're building the best tool you can, you probably should have some kind of interpretation: if it "looks like" a plain text field set to one of the defined values outlined in the spec, you can use that to place them on a big map widget; otherwise, you just render it as text on the page.

This pattern plays out for every single field we want to experiment with or A/B test, and for every new way that someone wants to express themselves via governance. The practical effect is that updates to the spec slow to a crawl, people get worn out trying to get improvements made, and we ossify around a format that was designed when we knew the least about what makes for useful and effective governance.

Let's walk through what I believe will happen with CIP-100:

You, as the maintainer of cgov.app, want to provide richer discoverability for DReps. You have some idea about how that can best be done (e.g. defining a Location field with a standardized set of options).

You draft up a markdown file and host it on your server, and then publish some metadata by hand that points location to this markdown file. It's a net-new artifact, uniquely identified by its URL, so if there are existing metadata documents out there that used location to record "favorite travel destination" and gov.tools decided to implement that, your DRep page on gov.tools won't accidentally misattribute and lie about your favorite travel destination.
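
A sketch of what that hand-published metadata could look like (the cgov URL, the field name, and the values are entirely hypothetical; only the general CIP-100/CIP-119 layering pattern is taken from the specs):

// Hypothetical: a dRep metadata body that layers a self-hosted "location" field
// on top of the CIP-119 fields, identified by the URL of the spec you wrote yourself.
const dRepMetadata = {
  "@context": {
    // ...existing CIP-100 / CIP-119 context entries would go here...
    location: "https://cgov.app/specs/drep-location.md#location", // hypothetical self-hosted definition
  },
  body: {
    givenName: "Some dRep",  // CIP-119 field
    location: "europe",      // value from the option list documented at the URL above
  },
};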

You can add support for location immediately to cgov.app, and work with Eternl wallet to create a quick pilot to let people set that when registering as a DRep. This goes really quickly, because it's low commitment, can be deprecated without breaking anyone else, and won't interfere with any existing standards. You've eliminated the "people problem" of trying to argue by committee what the "right thing" is.

You can then demo this! You can show screenshots of the app working this way and invite people to try it out. DReps can use Eternl to set this property themselves, voters can use cgov.app to filter this way and give you feedback about the feature: maybe you forgot a country, maybe there's a typo. You can evolve the definition of this field to improve your product as quickly as you can update some documentation and a frontend.

After getting some feedback, maybe it's popular and other tools express interest in adopting the same thing, or tons of DReps start to use it. While that achieves social adoption, the data is never hidden from anyone: even using another tool like cexplorer, you can (or should be able to!) see that non-standard metadata.

It's popular enough that other tools start to document this and ask you questions, so you create a CIP to get a bit of "formal" recognition and make it discoverable to other tool authors who may not have heard about it; you're essentially just publishing a bulletin about the standard, which provides examples, justification from real-world feedback, maybe even example code, etc. There's very little to argue about, because your specification is isolated and namespaced from any other tools; it's not going to interfere with other tools or cause them to lie to users, and so your CIP gets approved and merged pretty fast.

If it's popular, then authors of other explorers and wallets can start to implement it at a pace that makes sense with their roadmap: they don't need to be rushed, because their non-adoption isn't breaking the ecosystem in any way, because no data is hidden from the user.

If, on the other hand, your proposal isn't popular, DReps hardly use it, and it turns out not to be a valuable thing to filter by, then the field can be quietly deprecated; Eternl can stop providing that field in their wallet and instead adopt a different proposal with a better-thought-out / more useful way to structure location that did end up being popular. You can either leave that code in cgov.app for browsing historical data, or remove it to clean up some technical debt, and that field just falls back to the default display. Ideally the documentation about the field was hosted on some kind of permanent storage for historians who are crawling through the historical data, or it was indexed by an archiver service designed to do exactly that; but if not, nothing about the ecosystem breaks, and the document itself is self-describing enough that those historians can likely get by with the metadata that was in the governance document itself.

What constitutes governance metadata is fundamentally an organic, socially-competitive thing, and CIP-100 leans into and acknowledges that, rather than fighting against it and trying to mandate one specification.

It's not perfect: people can create ambiguous fields; people can publish metadata that doesn't adhere to the standard; we can forget important security implications and need to make revisions; people can just maliciously lie and put the country they want to bomb in the location field instead of their actual region.

And yes, it creates a non-uniform experience for governance; different governance tools might support different features. But the hypothesis behind CIP-100 is that flexibility, expressiveness, and the ability to evolve are way more important than any kind of "uniformity".

But all of those things are true if we try to "standardize" on one set of fields anyway, as seen with CIP-25, and so CIP-100 seeks to minimize the harm that that causes, rather than try to pretend the real world isn't messy, or try to whip everyone into line.

@Quantumplation
Contributor

(Note that I don't care at all whether DB Sync is happy 😅 In fact, I don't know why it's trying to index governance metadata at all, it seems like not what db sync was designed to do. It should be an entirely separate tool that indexes governance metadata, IMO)

@Quantumplation
Contributor

Also note that the "host it yourself" step can also start with a draft CIP if you prefer; as soon as the CIP number is assigned, you can assign a URL to it, and start prototyping the implementation, leading to all the same benefits of increased velocity, lack of breakage, and flexibility.

@MadOrkestra
Contributor Author

(Note that I don't care at all whether DB Sync is happy 😅 In fact, I don't know why it's trying to index governance metadata at all, it seems like not what db sync was designed to do. It should be an entirely separate tool that indexes governance metadata, IMO)

Well, at least this we can agree on :)

Thanks for the comprehensive explanation. This issue can be closed as far as I am concerned, but maybe let's wait and see if any of the "CIP148" people come around to attempt this. I am building cgov on a zero budget with no outlook on funding or a business model, and this process is not something I can afford to put my time towards.

I'll find another solution, maybe indeed just let dReps connect their wallets and update their info in the cgov DB; this also circumvents the cluttered data. Not my idea of decentralization, but if it gains traction, we can always move to a CIP later, or forget about it altogether and just open the API for other explorers to query and submit to.

@Quantumplation
Contributor

The point of my comprehensive answer is that CIP-100 is designed to actually be much cheaper and easier for you to execute on (on a shoestring budget, for example) compared to trying to update an existing CIP. You simply write down what you want to do, host that description somewhere (whether on your own servers or as a new draft CIP), and CIP-100 then provides you a way to immediately start reading and writing that field from the DRep metadata in a way that doesn't break the ecosystem. You could likely have it implemented in an afternoon. The social adoption by other tools might take some time, but it will take longer, and more of your energy, if we try to update an existing CIP, argue about definitions, etc.

CIP-100 gives you the low-effort path to adoption that you're describing (i.e. instead of "storing it in your database", just store it in the DRep metadata with a context that points to where you wrote down what the fields mean) while also being more decentralized than either a DB on your server or a centralized gateway process like the CIP process.

@Quantumplation
Contributor

The path to getting such a change adopted and implemented in at least two tools if we try to do it by modifying the existing CIP is probably ~6 months, optimistically.

The path to getting such a change adopted and implemented in at least two tools if you embrace the way CIP-100 is designed could be as low as a few weeks if the idea is popular.

@MadOrkestra
Contributor Author

Yeah, I got that. I'll discuss this in the comms group for gov tool builders; maybe we can agree on fields there, so it'll be worth the time for all of us and we can push this through as easily as you describe.

Integrating a centralized solution is literally a question of minutes, though: the wallet connect and a profile page are already there, so it'll just be another dropdown, one line in a schema, and a tiny addition to an endpoint validation, and this is done. That's what this whole process is up against, CIP-100 or not.

@Quantumplation
Contributor

For sure, decentralized technologies are harder to build than centralized databases, I won't disagree there. But the whole hypothesis of our industry is that the extra effort is worth it.

Feel free to reach out if you want me to join any calls to help out, or if you want help drafting / building a solution :)

@Crypto2099
Collaborator

Early Cardano Atlantic Council rationale files had to reference CIP-136 while it was still in draft form and hadn't even been assigned a number yet. Because of this, you can see that I simply opted to use a commit-locked link to @Ryun1's fork, where he had submitted his draft, in order to make it valid JSON-LD.
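
For illustration, that kind of commit-locked reference inside an @context looks roughly like this (the repo path, commit hash, and field name below are placeholders, not the actual link that was used):

// Pinning a draft spec by commit hash keeps the @context stable even if the draft later moves:
const context = {
  // placeholder repo path, commit hash, and fragment; illustrative only
  summary:
    "https://raw.githubusercontent.com/Ryun1/CIPs/0123456789abcdef0123456789abcdef01234567/CIP-XXXX/README.md#summary",
};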

There is definitely value in allowing any user or platform to invent whichever new fields they want and try them out, but there is also value in the traditional CIP route, where we propose, refactor, and refine prior to moving forward. The bigger question is: do we add these to the existing CIP-119 definition, even though it is technically already "merged" (and so a bit beyond the early ideation and refinement stage) and we do not have any current feedback or buy-in from the original author of that standard, or do we propose this as a new/alternate/additional standard that complements CIP-119?

For reference, here is one of, if not the first, Atlantic Council JSON-LD rationale documents with the "do it yourself" JSON-LD schema that Pi was discussing at length above: https://github.com/Cardano-Atlantic-Council/rationale/blob/main/gov_action1zhuz5djmmmjg8f9s8pe6grfc98xg3szglums8cgm6qwancp4eytqqmpu0pr.json
