Lying Autonomous Agents

posted by alex

BarCamp San Francisco was a GREAT event, big big thanks to Tara Hunt and crew for the excellent arrangements. We instantly found help for the server issues that had been bugging us for ages, and the overall atmosphere was that of a helping-hand community. We met up with Ben Metcalfe, who previously led the BBC Backstage development and is now "grassroots architect & CTO" over at Citizen Agency. Ben was off to Gnomedex and then continuing back to London, but we're hoping to bump into each other again soon.

We had a late-night (3 or 4 am?) session with Jordan Sissel, who just moved here to work for Google. Jordan is currently involved in arranging more BarCamps (BarCamp Stanford looks interesting!). An evening microformats session held by Chris & Tantek was interesting, with some discussion on, among other things, how to achieve an Apple Human Interface Guidelines-esque mentality for web 2.0 sites. In effect, this could entail cohesive interactions across websites, much in the same way you would expect an OS keyboard shortcut to function the same way across several applications. This, of course, has to be balanced against the creative freedom of each site's UI designer, and the greatest obstacle to overcome is probably agreeing on what good practice actually is.

Later the same week I joined Felix at Adam Greenfield's Everyware talk at Adaptive Path. Adam was a really nice guy and we had some discussions about the function of trust in a ubiComp setting and the importance of privacy and self-control over distributed data. Felix actually had the most interesting idea on the topic: instead of constantly trying to control what is being published about ourselves, we should add fake services or fake agents that populate the web with false information. Placing on the user the responsibility of constantly deciding and stating "what level of privacy I as a user would like right now" increases the transaction costs of using a service dramatically. By running lying autonomous agents in parallel, I could ease up on the level of control over the distribution of my real personal data. Since my real data is only valuable if you know it's true, I won't have to worry about disclosing personal information (since there will also be conflicting information available). The increase in complexity would make it hard for anybody without the right contextual understanding of me to decipher what is the "real me" and what is the fake agent. However, somebody who knows enough about me (e.g. what city I am in today) can filter out the fake info, leaving the true data. Some pretty heavy algorithm work would be necessary, but the more I think about it the more sense it actually makes. Of course, if a flood of information becomes available, the issue of trust in the source becomes highly important. What, how and when do you trust certain sources to be credible? This will certainly be one of the toughest questions when exploring online trust, but certainly also one of the most fascinating ones.
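To make the idea a bit more concrete, here is a minimal Python sketch of what a lying agent could look like. Everything here is made up for illustration (the field names, the city list, the function names); the point is just the two halves of the scheme: publish the true record mixed in with plausible decoys, and show that an observer who already has the right context (my real city) can filter the noise back out, while anyone else sees only conflicting records.

```python
import random

# Hypothetical decoy values; a real agent would generate far more
# varied and plausible fakes.
CITIES = ["San Francisco", "London", "Berlin", "Tokyo", "Stockholm"]

def publish_with_decoys(true_record, n_decoys=4, seed=None):
    """Return the true record mixed in with decoy records.

    Each decoy keeps the name but claims a different city, so the
    published set contains mutually conflicting information.
    """
    rng = random.Random(seed)
    other_cities = [c for c in CITIES if c != true_record["city"]]
    decoys = [
        {"name": true_record["name"], "city": rng.choice(other_cities)}
        for _ in range(n_decoys)
    ]
    records = decoys + [true_record]
    rng.shuffle(records)
    return records

def filter_with_context(records, known_city):
    """An observer who already knows my city can discard the fakes."""
    return [r for r in records if r["city"] == known_city]

if __name__ == "__main__":
    published = publish_with_decoys({"name": "alex", "city": "San Francisco"})
    print(published)                                      # five conflicting records
    print(filter_with_context(published, "San Francisco"))  # only the true one
```

This toy version leaks the true record to anyone who knows one fact about me, which is exactly the property described above; the heavy algorithm work would go into making the decoys statistically indistinguishable from the real data for anyone without that context.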

Anyway, we're off now to Ritual Coffee Roasters to start working on what trust is, how it functions, and what the consequences for the new web will be. Stay tuned!