Reporting incentives and biases in online review forums

Abstract
Online reviews have become an increasingly popular way to judge the quality of various products and services. However, recent work demonstrates that the absence of reporting incentives leads to a biased set of reviews that may not reflect true quality. In this paper, we investigate the underlying factors that influence users when they report feedback. In particular, we study both reporting incentives and reporting biases observed in a widely used review forum, the TripAdvisor website. We consider three sources of information: first, the numerical ratings left by the user for different aspects of quality; second, the textual comment accompanying a review; and third, the patterns in the time sequence of reports. We first show that groups of users who discuss a certain feature at length are more likely to agree in their ratings. Second, we show that users are more motivated to give feedback when they perceive a greater risk involved in a transaction. Third, a user's rating partly reflects the difference between true quality and the prior expectation of quality, as inferred from previous reviews. Finally, we observe that these biases produce strong differences between the mean and the median of review scores, and we speculate that the median may be a better way to summarize the ratings.
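The gap between the mean and the median can be illustrated with a small sketch. The ratings below are hypothetical, chosen only to mimic the skewed, bimodal distributions that reporting biases tend to produce on review sites; they are not data from the paper.

```python
from statistics import mean, median

# Hypothetical 1-5 star ratings with the skewed shape typical of biased
# reporting: many satisfied 5-star reviews plus a cluster of extreme
# 1-star reviews, with few moderate scores in between.
ratings = [5, 5, 5, 5, 5, 5, 4, 1, 1, 1]

print(mean(ratings))    # 3.7 -- dragged down by the few extreme low scores
print(median(ratings))  # 5.0 -- the score a typical reviewer actually gave
```

Under such a distribution, the mean sits in a region where almost no individual rating lies, while the median matches the most common experience, which is the intuition behind preferring the median as a summary.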