web of trust score #612

Open
Tracked by #611
alltheseas opened this issue Dec 25, 2024 · 2 comments
Labels
web-of-trust Web of trust, spam filtering, etc

Comments

alltheseas (Contributor) commented Dec 25, 2024

(technical) user story

As a Damus dev who wants to enable building on WOT, I would like Damus to know which of an npub's follows are most in their WOT, so that Damus can surface when the npub's WOT participates in or performs certain actions (e.g. replies, zaps, DMs).

acceptance criteria

there is a formula according to which follows are ranked from a high to a low WOT score (e.g. using shared/mutual follows, mute lists, etc.); a rough sketch of one possible scoring appears below
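A minimal sketch of one such ranking, in Rust since this is on the notedeck milestone; all inputs, names, and weights here are hypothetical placeholders, not a proposed final formula:

```rust
use std::collections::HashSet;

/// Score one of the npub's follows against the evaluator's graph.
/// Shared follows raise the score; a mute is treated as a hard veto.
fn wot_score(
    my_follows: &HashSet<String>,      // npubs the evaluator follows
    their_followers: &HashSet<String>, // npubs that follow the candidate
    my_mutes: &HashSet<String>,        // the evaluator's mute list
    candidate: &str,
) -> f64 {
    if my_mutes.contains(candidate) {
        return 0.0;
    }
    // Shared/mutual follows: accounts I follow that also follow the candidate.
    let shared = my_follows.intersection(their_followers).count() as f64;
    // Normalize by the evaluator's follow count so the score stays in [0, 1].
    shared / my_follows.len().max(1) as f64
}
```

Ranking is then a descending sort of the npub's follow list by this score.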

implementation & context

Some WOT models are discussed in this survey: https://dl.acm.org/doi/pdf/10.1145/2906151

alltheseas mentioned this issue Dec 25, 2024
alltheseas (Contributor, Author) commented:

see iOS damus-io/damus#2127

alltheseas added the web-of-trust label Dec 25, 2024
alltheseas added this to the notedeck beta milestone Dec 25, 2024
johnbnevin commented:

first, I suspect that any ideas I add here will long since have been resolved by those smarter than me.

second, I suspect that I have misinterpreted, missed, or misunderstood something.

third, I don't understand any of the formulas; I only have some statistics experience.

fourth, I don't know the lingo.

fifth, a wiser man would ask questions rather than draw conclusions and type them here, but I don't want to take more of everyone's time than reading this already might.

last, take it or leave it, for what it's worth, if anything; hopefully there's a good idea here or there, and that is all:

thoughts on
https://dl.acm.org/doi/pdf/10.1145/2906151
in the context of subjective, user-side reputation/trust models over the social graph, run locally:

The models surveyed seem biased toward finding an answer, rather than an estimate the user can easily adapt to see what intuitively works; i.e. trying to determine an optimal hardcoded formula and settings up front, vs. using data over time plus user input to determine retroactively which formula and variable settings would have produced the most appropriate recommendations.

The models also seem biased toward optimizing the recommendation-engine formula to produce stronger trust and confidence conclusions, rather than fleshing out the ways a user can define his own trust along multiple dimensions and evaluate the results using discretion over time. That is, trust and confidence on only one dimension, whereas a complete trust model would require dimensions such as 'in the realm of ideas', 'in the realm of action', 'in the realm of emotional interoperability', and 'in the current realm'. This bias is surely reflective of centralization hubris: all resources go toward 'how do we create something to put on the user so that we get the results we want'. It results from blind elitism in a paradigm where few children are raised with the ability to use discretion, and therefore from thinking it isn't possible for people to control their own destinies, whereas here we build the new future.

'weighted average value among all direct neighbors of d'

for example, suffers inaccuracy from ignorance of the varying reasons to become a neighbor. Most people, most of the time, just choose friends who are friends in friend contexts; they choose connections they want to be positively connected with.
Edge cases: most obviously, in general terms, if one chooses to 'keep enemies closer', a naive system fails. Most relevantly, in social-graph terms, 'I trust this genius to have interesting ideas' does not speak to trust on the dimension of diet; 'I trust this guy to make a funny meme one try in ten' does not speak to morality. Is encouraging the user to curate his trust network via a guided walkthrough, Clippy-avatar friend and all, inevitable?
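For reference, one common reading of that quoted phrase, with $s$ the evaluator, $d$ the target, $N(d)$ the direct neighbors of $d$, $w(s,n)$ the evaluator's trust in neighbor $n$, and $r(n,d)$ that neighbor's rating of $d$ (notation mine, not the survey's):

$$t(s,d) = \frac{\sum_{n \in N(d)} w(s,n)\, r(n,d)}{\sum_{n \in N(d)} w(s,n)}$$

This is exactly where the one-dimensionality complaint bites: a single $r(n,d)$ flattens 'I trust his ideas' and 'I trust his diet' into one number.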

Even when run locally, the confidence one has in the system's ability to hide one's personal preferences from others will dictate the degree of preference falsification, and therefore the accuracy of any recommendation or algorithmic ordering. Worse, and more so, where trust attestations are public. 'Look at the trust network based on your public attestations only'; 'look at it based on private ones only'; ... 'make temporary changes to attestations privately, cloaked among many random changes that are not then used, to display a testing-environment visualization of how the networks would change if, hypothetically, these test changes were real attestation changes, and we all know they aren't, wink wink', or something.

*This entire post, above and below, has been written without any need to over- or under-represent opinion or ability, except in whatever way ego has done so beyond my knowledge.

'The most important four challenges are path dependence, trust decay, opinion conflict, and attack resistance.'

path dependence: the question seems to be 'when paths overlap, which to prefer?', whereas our context seems to allow for 'ask the user'.

trust decay:

'The two types of decay indicate that time should be an essential factor of a comprehensive trust model, and the length of a trusted path cannot be too long.'

If the user thinks it should be, then for first-level connections the user can click and drag to decide whether second- and third-level publicly attested preferences will also be used in calculations, and see visually how that changes the trust-network recommendations. I have no context to judge how computationally inefficient or impractical this is.

'trust should decay with time'

... I was just thinking about who I trust the most, and it's people I knew growing up, plus some whom I just know are like-minded as soon as I see their writing or speaking. There are surely kinds of trust in our context that could logically and effectively (in a way that produces results) fade according to user preference, but not the one-dimensional trust attestation 'follow' or 'connect' or 'I trust this person IRL'. Perhaps more important is 'how long since I last audited my trust network', or 'this branch of my trust network'.
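Written out, the survey's decay idea might look like this (notation mine; $t_0$ the original attestation strength, $\Delta t$ the time since the attestation or since the last audit, $\ell$ the trusted-path length, with $\lambda \ge 0$ and $0 < \gamma \le 1$ user-tunable):

$$t_{\mathrm{eff}} = t_0 \, e^{-\lambda \Delta t} \, \gamma^{\ell}$$

Setting $\lambda = 0$ for first-level connections would capture the 'people I knew growing up' case: decay off for direct trust, on for inherited trust.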

opinion conflict: the risk of preference falsification for social/peer-pressure reasons or fear of retaliation has me thinking local, private attestation. But does a market of free trust eventually clean house? I.e., I will certainly rate those who seem unafraid to speak more highly, and perhaps that will result in greater decision-making, which will result in greater results. Or perhaps that will not correlate, and/or will come at a cost, as people who go out on limbs fall out of trees with me, or I with them.

attack resistance: 'take a look at these rapidly changing metrics in your trust network; do they indicate anything sus?'
... As trust models are used in the wild, a science of attacking them, and a science of teaching resistance to those attacks, will develop: 'look out for the fabled booby trap' ...

'We can see that there is a need for comprehensive trust models that can handle more (or even all) possible challenges.'

... I don't see any engineering problems; I see only discretion and education problems.

'a more general confidence measure that depends on both the frequency and duration of contact. Jøsang [1999] uses subjective logic and proposes using a triplet to represent trust, with belief (b), disbelief (d), and uncertainty (u), normalized such that b + d + u = 1'

... and another example

'On Advogato, users can rank others with four choices: Observer, Apprentice, Journeyer, and Master, which can be assigned 0.4, 0.6, 0.8, and 1.0, respectively, to numerate the level of trust. The snapshots can be found at [Trustlet 2014].'
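A toy encoding of both of those examples, assuming nothing beyond what the quotes state (the struct and function names are mine):

```rust
/// Jøsang-style subjective-logic opinion: belief, disbelief, uncertainty.
struct Opinion {
    b: f64, // belief
    d: f64, // disbelief
    u: f64, // uncertainty
}

impl Opinion {
    /// Normalize so that b + d + u = 1, per Jøsang [1999].
    fn normalized(b: f64, d: f64, u: f64) -> Opinion {
        let sum = b + d + u; // assumed non-zero for this sketch
        Opinion { b: b / sum, d: d / sum, u: u / sum }
    }
}

/// Advogato's four ranks, mapped to the numeric trust levels from the survey.
fn advogato_level(rank: &str) -> Option<f64> {
    match rank {
        "Observer" => Some(0.4),
        "Apprentice" => Some(0.6),
        "Journeyer" => Some(0.8),
        "Master" => Some(1.0),
        _ => None,
    }
}
```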

... Perhaps these styles of central engineering are adequate for recommendation systems, where I would favor visible caveats at minimum, and a culture of skepticism would be better; but it always seems to me to be hubris, and punishing of edge cases: those who have a pattern of never fitting into checklists/forms/boxes. Side note: 'weighting' alone always seems like engineering hubris to me, though I need to watch for when it's OK to have something that works well enough, even if only for now, while staying biased toward long-term defense against capture, mission creep, slippery slopes, etc.

'The two most commonly used metrics are the coverage and accuracy of trust prediction. The former represents the ability of algorithms to provide a prediction; that is, the percentage of trust relationships that are predictable (at least one trusted path is available between two users). The latter represents "the ability of predicting whether a user will be trusted or not".'

... This is not a solution I predicted to find near the end of my reading; it's new to me, and it seems like a key point. The system can (a) look at the connections, (b) guess what the user will say about trust when asked, using objective data (follows, zaps, replies, reply responses, avatar similarity, the choice to change one's profile picture from the default, use of a Gravatar style, ...), and then evaluate itself, i.e. its own ability to predict, based on whether it was correct and matches the user's later attestations. Attack vectors, weaknesses, opportunities for inaccuracy, and potential bugginess would remain from users who lie about preferences, accidentally click a lot, or are moody.
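Those two metrics are easy enough to state in code. A sketch, assuming each pair holds the system's prediction (None when no trusted path existed, so no prediction was possible) and the user's later attestation:

```rust
/// Returns (coverage, accuracy) over (prediction, later_attestation) pairs.
fn coverage_and_accuracy(pairs: &[(Option<bool>, bool)]) -> (f64, f64) {
    // Keep only the pairs where a prediction could be made at all.
    let predicted: Vec<(bool, bool)> = pairs
        .iter()
        .filter_map(|&(p, actual)| p.map(|pv| (pv, actual)))
        .collect();
    // Coverage: fraction of relationships that were predictable.
    let coverage = predicted.len() as f64 / pairs.len().max(1) as f64;
    // Accuracy: among those, fraction matching the user's later attestation.
    let correct = predicted.iter().filter(|(p, a)| p == a).count();
    let accuracy = correct as f64 / predicted.len().max(1) as f64;
    (coverage, accuracy)
}
```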

'Because heat always flows from a position with a high temperature to a position with a low temperature, seeding users are given a high amount of heat that will be diffused to other users.'

I don't like the word 'given' here, but the oracle/guardian/watchtower/trusted-advisor model might functionally be more appropriate for initial recommendations than a system's guess. Optional visualization of varied algorithms, models, and trust defaults would be sweeeet. But defaulting to only a few will be like Congress enacting temporary taxation: even if thought to be temporary, it will too likely become an institution to be captured.
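A sketch of the diffusion step as I read that sentence: seed users (the oracle/guardian set) start with heat, and each step a fraction flows along follows. The graph shape and the `alpha` constant are assumptions for illustration, not anything the survey specifies:

```rust
use std::collections::HashMap;

/// One diffusion step: each user keeps (1 - alpha) of their heat and
/// spreads the remaining alpha evenly across the accounts they follow.
fn diffuse(
    heat: &HashMap<String, f64>,
    follows: &HashMap<String, Vec<String>>,
    alpha: f64,
) -> HashMap<String, f64> {
    let mut next: HashMap<String, f64> = HashMap::new();
    for (user, h) in heat {
        let outgoing = follows.get(user).map(|v| v.as_slice()).unwrap_or(&[]);
        if outgoing.is_empty() {
            // Nowhere to flow: keep all heat.
            *next.entry(user.clone()).or_insert(0.0) += *h;
            continue;
        }
        *next.entry(user.clone()).or_insert(0.0) += *h * (1.0 - alpha);
        let share = *h * alpha / outgoing.len() as f64;
        for target in outgoing {
            *next.entry(target.clone()).or_insert(0.0) += share;
        }
    }
    next
}
```

Iterating this for a few steps (or to a fixed point) yields the 'heat' ranking; high-heat accounts near the seeds come out on top.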
