Abstract
In this article, we examine how the relevance and trustworthiness of information that an agent X acquires from a source F depend directly on X's trust in F with respect to that kind of information. In particular, we analyze the relevance of F's category as an indicator of its trustworthiness with respect to X's specific informative goals. We present an interactive cognitive model for searching information in a world in which each agent belongs to a specific category, and we consider variability within the canonical categorical behavior and its influence on the trustworthiness of the information provided. The model allows a source's trustworthiness to be evaluated both on the basis of its category and on past direct experience with it, so that the most adequate source for the informative goals at hand can be selected. Finally, we present a computational approach based on fuzzy sets and a set of selected simulation scenarios, together with a discussion of their most interesting results.
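To make the idea concrete, the sketch below illustrates one simple way category-based trust and experience-based trust could be blended when choosing a source. This is only an illustration of the general mechanism the abstract describes, not the paper's actual fuzzy-set model: the function names, the linear blending rule, and the saturation parameter are all assumptions introduced here.

```python
def source_trust(category_trust: float,
                 experience_trust: float,
                 n_interactions: int,
                 saturation: int = 10) -> float:
    """Blend category-based trust with direct-experience trust.

    With no direct experience, the estimate falls back entirely on the
    source's category; as interactions accumulate, direct experience
    dominates. Trust values are treated as degrees in [0, 1], in the
    spirit of a fuzzy membership evaluation.
    """
    if not (0.0 <= category_trust <= 1.0 and 0.0 <= experience_trust <= 1.0):
        raise ValueError("trust degrees must lie in [0, 1]")
    # Weight on direct experience grows with past interactions,
    # capped at 1.0 once `saturation` interactions are reached.
    w = min(n_interactions, saturation) / saturation
    return (1.0 - w) * category_trust + w * experience_trust


def best_source(candidates: dict) -> str:
    """Select the candidate source with the highest blended trust.

    `candidates` maps a source name to a tuple
    (category_trust, experience_trust, n_interactions).
    """
    return max(candidates, key=lambda s: source_trust(*candidates[s]))
```

For example, a never-consulted source is judged purely by its category (`source_trust(0.8, 0.2, 0)` returns `0.8`), while after many interactions the track record prevails, so a disappointing member of a well-regarded category can be passed over in favor of a reliable member of a mediocre one.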
The Relevance of Categories for Trusting Information Sources