web of trust score #612
see iOS damus-io/damus#2127
A few caveats first: any ideas I add here have probably long since been resolved by people smarter than me; I have likely misinterpreted, missed, or misunderstood something; I don't understand any of the formulas, I only have some statistics experience; I don't know the lingo; and a wiser man would ask questions rather than draw conclusions and type them here, but I don't want to take more of everyone's time than reading this already might. Take it or leave it, for whatever it's worth, hopefully with a good idea here or there. That is all.

Thoughts: the models surveyed seem biased toward finding an answer, rather than an estimate the user can easily adapt to see what intuitively works, i.e. trying to determine an optimal hard-coded formula and settings, versus using data over time and user input to determine retroactively which formula and variable settings would have produced the most appropriate recommendation. The models also seem biased toward optimizing the recommendation-engine formula to produce stronger trust and confidence conclusions, rather than fleshing out the ways the user can use multiple dimensions to define his own trust and evaluate the results with discretion over time. Trust and confidence are treated as a single dimension, whereas a complete trust model would require dimensions like 'in the realm of ideas', 'in the realm of action', 'in the realm of emotional interoperability', and 'in the current realm'. This bias surely reflects centralization hubris: all resources go toward 'how do we create something to put on the user so that we get the results we want'. It results from blind elitism in a paradigm where few children are raised with the ability to use discretion, and therefore from thinking it isn't possible for people to control their own destinies, whereas here we are building a new future.
For example: inaccuracy arises from ignoring the varying reasons people become a neighbor. Most people, most of the time, just choose friends who are friends in friend contexts, connections they want to be positively connected with. Even when run locally, the confidence a user has in the system's ability to hide his/her personal preferences from others will dictate the degree of preference falsification, and therefore the accuracy of any recommendation or algorithmic ordering; it's worse, and more so, where trust attestations are public. Possible mitigations: 'look at your trust network based on your public attestations only', 'look at it based on private attestations only', or 'make temporary changes to attestations privately, cloaked with many random changes that are not used, to display a testing-environment visualization of how the networks would change if, hypothetically, these test changes were real attestation changes, and we all know they aren't, wink wink', or something like that. *This entire post, above and below, has been written without any need to over- or under-represent opinion or ability, except in whatever way ego has done so beyond my knowledge.
Path dependence: the question seems to be 'when there is overlap, which to prefer?', whereas our context seems to allow for 'ask the user'.

Trust decay: only if the user thinks it should apply, for first-level connections; the user could then click and drag to decide whether second- and third-level publicly attested preferences are also used in the calculations, and see visually how that changes the trust network recommendations. I have no context to judge how computationally inefficient or impractical this is (a rough sketch of what I mean follows).
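A minimal sketch of that idea, purely as illustration: walk the follow graph out to a user-chosen depth, with a user-adjustable weight per hop level. None of the names or types below are Damus APIs; they are assumptions for the sketch.

```swift
// Hypothetical sketch (not Damus internals): score candidate pubkeys by walking
// the user's follow graph out to a user-chosen depth, with a draggable weight
// per hop level. All names here are illustrative assumptions.

struct WotSettings {
    var maxDepth: Int = 2                          // how many hop levels the user opted in to
    var levelWeights: [Double] = [1.0, 0.5, 0.25]  // weight for hops 1, 2, 3...
}

func wotScores(root: String,
               follows: (String) -> [String],      // who a given pubkey follows
               settings: WotSettings) -> [String: Double] {
    var scores: [String: Double] = [:]
    var visited: Set<String> = [root]
    var frontier = [root]

    for depth in 1...settings.maxDepth {
        let weight = depth <= settings.levelWeights.count
            ? settings.levelWeights[depth - 1] : 0.0
        var next: [String] = []
        for pubkey in frontier {
            for followee in follows(pubkey) where followee != root {
                // More independent paths at closer depths -> higher score.
                scores[followee, default: 0] += weight
                if !visited.contains(followee) {
                    visited.insert(followee)
                    next.append(followee)
                }
            }
        }
        frontier = next
    }
    return scores
}
```

Dragging the depth or level weights and re-running this would be enough to re-render the visualization the comment describes.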
... I was just thinking about who I trust the most, and it's people I knew growing up, and then some I just know are like-minded as soon as I see their writing or speaking. There are surely kinds of trust in our context that could logically and effectively (in a way that produces results) fade according to user preference, but not the one-dimensional trust attestation of 'follow' or 'connect' or 'I trust this person IRL'. Perhaps more important is 'how long since the last time I audited my trust network', or 'this branch of my trust network' (a rough sketch of that kind of decay follows below).

Opinion conflict: the risk of preference falsification for social/peer-pressure reasons or fear of retaliation has me thinking local, private attestation. But does a market of free trust eventually clean house? I.e. I will certainly rate those who seem unafraid to speak more highly, and perhaps that will result in better decision-making, which will result in better results. Or perhaps that will not correlate, and/or will come with a cost, as people who go out on limbs fall out of trees with me, or I with them.

Attack resistance: 'take a look at these recently, quickly changing metrics in your trust network; do they indicate anything sus?'
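A minimal sketch of the 'decay by audit recency' idea, assuming each attestation stores when the user last reviewed it; the function name, half-life, and formula are all assumptions, not a proposed design.

```swift
import Foundation

// Hypothetical sketch: instead of decaying trust by the age of the attestation,
// decay it by how long it has been since the user last audited (re-confirmed) it.
// The half-life is a user preference, not a hard-coded constant.
func auditDecayedScore(baseScore: Double,
                       lastAudited: Date,
                       halfLifeDays: Double = 180,
                       now: Date = Date()) -> Double {
    let daysSinceAudit = now.timeIntervalSince(lastAudited) / 86_400
    // Exponential decay: the score halves every `halfLifeDays` without a re-audit.
    return baseScore * pow(0.5, daysSinceAudit / halfLifeDays)
}
```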
... I don't see any engineering problems here; I see only discretion and education problems.
... and another example
... Perhaps these styles of central engineering are adequate for recommendation systems, where I would favor visible caveats at minimum (a culture of skepticism is better), but it always seems to me to be hubris, and punishing of edge cases: those who have a pattern of never fitting into checklists / forms / boxes. Side note: 'weighting' alone always seems like engineering hubris to me, though I need to watch for when it's OK to have something that works well enough, even if only for now, while staying biased toward long-term defense against capture, mission creep, slippery slopes, etc.
... This is not a solution I expected to find near the end of my reading; it's new to me, and it seems like a key point. The system can a) look at the connections, b) guess what the user will say about trust when asked, using objective data (follows, zaps, replies, reply responses, avatar similarity, the choice to change the profile picture from the default, use of a gravatar-style avatar, ...), and then c) evaluate itself, i.e. its own ability to predict, based on whether its guesses match the user's later attestations (a rough sketch follows). Attack vectors, weaknesses, opportunities for inaccuracy, and potential bugginess would remain from users who lie about their preferences, accidentally click a lot, or are moody.
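A minimal sketch of that predict-then-evaluate loop. The signal set, weights, and function names are made up for illustration; nothing here is a real Damus or nostr API.

```swift
// Hypothetical sketch: predict whether the user would attest trust in a pubkey
// from objective signals, then score the predictor against the attestations the
// user actually makes later. All names and weights are illustrative assumptions.

struct TrustSignals {
    var follows: Bool
    var zapCount: Int
    var replyCount: Int
    var repliedBack: Bool
}

// A naive linear predictor over the signals; the weights themselves are the kind
// of thing that could be re-fit retroactively from the user's later attestations.
func predictTrust(_ s: TrustSignals) -> Double {
    var score = 0.0
    if s.follows { score += 0.4 }
    score += min(Double(s.zapCount) * 0.05, 0.3)
    score += min(Double(s.replyCount) * 0.02, 0.2)
    if s.repliedBack { score += 0.1 }
    return score   // roughly in 0...1
}

// Self-evaluation: the fraction of predictions that matched what the user later said.
func predictorAccuracy(predictions: [String: Double],
                       laterAttestations: [String: Bool],
                       threshold: Double = 0.5) -> Double {
    let checked = laterAttestations.compactMap { (pubkey, trusted) -> Bool? in
        guard let p = predictions[pubkey] else { return nil }
        return (p >= threshold) == trusted
    }
    guard !checked.isEmpty else { return 0 }
    return Double(checked.filter { $0 }.count) / Double(checked.count)
}
```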
I don't like the word 'given' here, but the model of an oracle / guardian / watchtower / trusted advisor ... might functionally be more appropriate for initial recommendations than a system's guess. Optional visualization of varied algorithms, models, and trust defaults would be sweet. But defaulting to only a few will be like Congress enacting 'temporary' taxation: even if thought to be temporary, it will too likely become an institution to be captured.
(technical) user story
As a Damus dev who wants to enable building on WOT, I would like Damus to know which of an npub's follows are most in their WOT, so that Damus can surface when the npub's WOT participates in or performs certain actions (e.g. replies, zaps, DMs, etc.).
acceptance criteria
there is a formula according to which follows are ranked from a high to a low WOT score (e.g. using shared/mutual follows, mute lists, etc.); one possible shape for such a formula is sketched below
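A hedged sketch of one such ranking, using only the signals named in the criterion above; the weights, struct, and field names are assumptions for illustration, not an agreed design or existing Damus code.

```swift
// Hypothetical sketch: rank a target npub's follows by a simple WOT score built
// from shared follows, mutuality, and mute lists. All weights and names are
// placeholders for whatever formula is actually agreed on.

struct FollowCandidate {
    let pubkey: String
    let mutualFollowCount: Int   // follows shared with the target npub
    let followsBack: Bool        // the candidate follows the target npub back
    let mutedByNetworkCount: Int // how many of the target's follows mute the candidate
}

func wotScore(_ c: FollowCandidate) -> Double {
    var score = Double(c.mutualFollowCount)     // shared follows dominate
    if c.followsBack { score += 2 }             // small boost for mutuality
    score -= Double(c.mutedByNetworkCount) * 3  // mute lists count against
    return score
}

// Rank an npub's follows from highest to lowest WOT score.
func rankFollows(_ candidates: [FollowCandidate]) -> [FollowCandidate] {
    candidates.sorted { wotScore($0) > wotScore($1) }
}
```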
implementation & context
some WOT models are discussed here: https://dl.acm.org/doi/pdf/10.1145/2906151