Related link: http://wgz.org/chromatic/perl/trustabit/Trustabit.tar.gz
Existing social networks have several flaws of implementation. Most obviously, they’re too coarse-grained. Are you my friend or not? Do I trust you or not? My personal and professional relationships aren’t that binary, and yours aren’t either.
Of course, if you want to make these relationships flow between users, you run into scary math with directed graphs. If someone well-connected changes something, how many nodes and edges do you have to recalculate? Yuck.
Another flaw is that most don’t let you retain control of your own information. Some details of my work aren’t appropriate to share with acquaintances (or friends, in the cases of books as yet unsigned) and some details of my private life aren’t appropriate (or even interesting) to my work colleagues. It’s important to be able to control what information I provide as well as who sees it.
I’ve been thinking about this problem off and on for a year or so. During ETech, the two problems above started to solve each other. Why not let users control their own data and make them calculate their own trust flows?
That’s where Trustabit comes in. It’s a proof-of-concept trust and rating network with working (if limited) code.
In brief, Alice rates items, such as “The Princess Bride”, and publishes a list of those ratings on her site. Her friends fetch that list, apply their own ratings of Alice’s ratings to the items, and publish the results as they choose. If Bob trusts Alice’s taste in movies but not in restaurants, he can ignore her ratings of restaurants while viewing (and passing along) her ratings of movies to his friends.
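To make the mechanism concrete, here’s a minimal sketch of the per-category filtering step described above. This is not the Trustabit code (which is Perl; see the README for the real data model); the function name, field names, and threshold are my own illustrative assumptions.

```python
# Hypothetical sketch of Trustabit-style per-category trust filtering.
# Not the actual Trustabit implementation; names and fields are invented.

def filter_ratings(ratings, trust, threshold=0.5):
    """Keep only ratings whose category the reader trusts enough.

    ratings:   list of dicts with 'item', 'category', and 'score' keys
    trust:     dict mapping category -> trust in the rater (0.0 to 1.0)
    threshold: minimum trust required to view or pass along a rating
    """
    return [r for r in ratings if trust.get(r["category"], 0.0) >= threshold]

# Alice's published ratings (illustrative data).
alice = [
    {"item": "The Princess Bride", "category": "movies", "score": 9},
    {"item": "Chez Hypothetical", "category": "restaurants", "score": 8},
]

# Bob trusts Alice's taste in movies, but not in restaurants.
bob_trust = {"movies": 0.9, "restaurants": 0.2}

passed_along = filter_ratings(alice, bob_trust)
# Only the movie rating survives the filter; the restaurant rating is ignored.
```

Because Bob filters at read time and republishes only what survives, no central server ever has to recalculate a global trust graph when someone changes a rating.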
There’s a bit more to it, so see the Trustabit README for the fuller, gorier details. You’ll need Perl 5.6 or newer with the YAML module to run the code, though it should be reasonably easy to reimplement in any other decent language.
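Since the published list is just YAML, a rating file might look something like the fragment below. To be clear, the README defines the real format; these field names are my invention, purely to show the flavor of the idea.

```yaml
# Hypothetical published rating list (field names invented for illustration;
# consult the Trustabit README for the actual format)
ratings:
  - item: The Princess Bride
    category: movies
    score: 9
  - item: Chez Hypothetical
    category: restaurants
    score: 8
```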
I’ve tried to stay away from dictating much policy. I’ve limited the code deliberately. The idea is pretty simple and I think it’s rather workable as it is.
(I did look at FOAF briefly, but the complexity is too high. Try to explain the mechanism in two paragraphs as I did above. Also, its goals are different. I don’t particularly care if machines can infer semantic meaning from Trustabit data. It’s for humans anyway.)
That’s my wacky idea. It may not be entirely practical, but I think it’s worth playing with. Drop me a line if you find it useful.
Suggestions? Enhancements? Existing projects I overlooked?