Eric and I are currently down at the Disneyland of the tech world: Google. We’re just about to check out a tech talk by Will Wright on Spore, but I thought I’d post a few reflections that surfaced during yesterday’s meeting with Professor BJ Fogg and David Danielsson from the Stanford Persuasive Technology Lab.
We were talking about establishing online trust when BJ had an interesting idea for a slightly unorthodox case-study approach: instead of looking at the desirable mechanics of trust online (how trust can enhance the online experience), we could flip the whole concept around and learn about the same things by studying the people who maliciously establish trust as a profession. I am, of course, talking about the predators.
Predators are probably the best educated in the mechanics of establishing trust online, since their whole agenda depends on it. Myspace predators constantly seek to establish trust as quickly as possible and are sure to know the ins and outs of trust-enhancing social interaction within that system. To exploit a system, technical or social, you really have to know how to “work it”.
Now, what I have been thinking about is how to get in contact with serious predators and gain insight into their tactics and their views on their “work”. One idea I had, after talking to Mike Micucci, CEO of TN20, and hearing about the scam proposals he received when selling his car on eBay, would be to create a potential victim online. I could post a fake ad for an expensive car, add a made-up person to Myspace, enter a nonexistent CEO on LinkedIn, and then wait for scammers to contact me. Once contact has been established I could “come clean”, explain the research, and try to start a conversation with the aim of getting their comments on trust. Am I being naive in thinking this might yield some results?
On the other hand, the whole idea of faking identities and ads makes me slightly uneasy. Is this an ethical way of finding interview subjects? Is it safe? Let me know what you think!
The deeper we dive into this study, the more we realize how vague the term “trust” really is. This becomes especially evident as soon as one starts to study a particular trust-related phenomenon, such as buyer–seller trust in online marketplaces. One quickly realizes that “trust” is often far too general a term: it is commonly used as a “catch-all” for more specific concepts. There is clearly a problem of “semantic discrepancies” in the current discourse (but then again, that seems to be the case in almost any sufficiently hot discourse).
Trust, at least seen pragmatically, can be broken down into more precise concepts such as reputation, credibility, predictability, and consistency. The credibility people at Stanford have done a fairly good job of distinguishing credibility from trust, whereas Seligman tries to separate out “simple” predictability from trust. Many more similar efforts can be found in the literature. We’ll dig more deeply into these semantic issues within the next few days.
In the meantime, here’s an independent study commissioned by Rapleaf on online marketplaces, which concludes that “posted ratings are the most important factor in determining their level of trust in sellers” (my emphasis). We hope to interview the Rapleaf guys soon. Their plan is to make an eBay-like reputation system available through open APIs, in essence creating a global, open reputation system. I really think it’s about time, but it remains to be seen how well they can solve the obvious fraud problems…
UPDATE: Here are the full details of the study.