What Is Trust?

posted by eric

The deeper we dive into this study, the more we realize how vague the term “trust” really is. This becomes especially evident as soon as one starts to study a particular trust-related phenomenon (such as buyer-seller trust in online marketplaces). One quickly realizes that “trust” is often far too general a term; it is commonly used as a “catch-all” for more specific concepts. There is clearly a problem of “semantic discrepancies” in the current discourse (but then again, that seems to be the case in almost any sufficiently hot discourse).

Trust, at least seen pragmatically, can be broken down into more precise concepts such as reputation, credibility, predictability, and consistency. The credibility people at Stanford have done a fairly good job of distinguishing credibility from trust, whereas Seligman tries to separate out “simple” predictability from trust. Many more similar efforts can be found in the literature. We’ll dig more deeply into these semantic issues within the next few days.

In the meantime, here’s an independent study commissioned by Rapleaf on online marketplaces, which concludes that “posted ratings are the most important factor in determining their level of trust in sellers” (my emphasis). We hope to do an interview with the Rapleaf guys soon. Their plan is to make an eBay-like reputation system available through open APIs, in essence creating a global, open reputation system. I really think it’s about time, but it remains to be seen how well they can solve the obvious fraud problems.

UPDATE: Here are the full details of the study.

Lying Autonomous Agents

posted by alex

BarCamp San Francisco was a GREAT event; big, big thanks to Tara Hunt and crew for the excellent arrangements. We instantly found help for our long-standing server issues, and the overall atmosphere was that of a helping-hand community. We met up with Ben Metcalfe, who previously led BBC Backstage development and is now “grassroots architect & CTO” over at Citizen Agency. Ben was off to Gnomedex and then continuing back to London, but we’re hoping to bump into each other again soon.

We had a late-night (3 or 4 am?) session with Jordan Sissel, who just moved here to work for Google. Jordan is currently involved in arranging more BarCamps (BarCamp Stanford looks interesting!). An evening microformats session held by Chris & Tantek was interesting, with some discussion on, among other things, how to achieve an Apple Human Interface Guidelines-esque mentality for web 2.0 sites. In effect, this could entail cohesive interactions across websites, much in the same way that you would expect an OS keyboard shortcut to function identically across several applications. This, of course, has to be balanced against the creative freedom of each site’s UI designer, and the greatest obstacle to overcome is probably agreeing on what good practice actually is.

Later the same week I joined Felix at the Adam Greenfield Everyware talk at Adaptive Path. Adam was a really nice guy, and we had some discussions about the function of trust in a ubicomp setting and the importance of privacy and self-control over distributed data. Felix actually had the most interesting idea on the topic: instead of constantly trying to control what is published about ourselves, we should add fake services or fake agents that populate the web with false information. Placing the responsibility on the user to constantly decide and state “what level of privacy I would like right now” increases the transaction costs of using a service dramatically. By running lying autonomous agents in parallel, I could ease up on the level of control over the distribution of my real personal data. Since my real data is only valuable if you know it’s true, I won’t have to worry about disclosing personal information (since there will also be conflicting information available).

The increase in complexity would make it hard for anybody without the right contextual understanding of me to decipher what is the “real me” and what is the fake agent. However, somebody who knows enough about me (e.g. what city I am in today) can filter out the fake info, leaving the true data. Some pretty heavy algorithm work would be necessary, but the more I think about it, the more sense it actually makes. Of course, if a flood of information becomes available, the issue of trust in the source becomes highly important. What, how, and when do you trust certain sources to be credible? This will certainly be one of the toughest questions when exploring online trust, but certainly also one of the most fascinating ones.
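To make the idea a bit more concrete, here is a minimal Python sketch of how such decoy agents might work. Everything in it is invented for illustration: the profile fields, the city list, and the decoy strategy are all assumptions, not anything that has actually been built. The point is just that an observer with no context sees several equally plausible profiles, while one who knows a single true fact (the real city) can filter the decoys out.

```python
import random

CITIES = ["San Francisco", "London", "Berlin", "Tokyo", "Stockholm"]
EMPLOYERS = ["Acme Corp", "Globex", "Initech"]  # hypothetical decoy values


def make_decoys(real_profile, n):
    """Generate n fake profiles that deliberately contradict the real one."""
    decoys = []
    for _ in range(n):
        fake = dict(real_profile)
        # Each decoy claims a different city than the real profile.
        fake["city"] = random.choice([c for c in CITIES if c != real_profile["city"]])
        fake["employer"] = random.choice(EMPLOYERS)
        decoys.append(fake)
    return decoys


def published_profiles(real_profile, n_decoys=4):
    """What an outside observer sees: the real profile mixed with decoys."""
    profiles = make_decoys(real_profile, n_decoys) + [real_profile]
    random.shuffle(profiles)
    return profiles


def filter_with_context(profiles, known_city):
    """An observer who knows one true contextual fact can narrow the field;
    an observer with no context cannot tell the profiles apart."""
    return [p for p in profiles if p["city"] == known_city]


real = {"name": "Alex", "city": "San Francisco", "employer": "Trust Study"}
seen = published_profiles(real)
# Without context, all five profiles look equally plausible; knowing the
# real city today filters every decoy out.
candidates = filter_with_context(seen, "San Francisco")
```

Since every decoy contradicts the real city, the contextual filter here recovers exactly the true profile; a real system would of course need decoys that are only distinguishable with much richer context than a single field.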

Anyway, we’re off now to Ritual Coffee Roasters to start working on what trust is, how it functions and what the consequences for the new web will be. Stay tuned!