Ben A. Carterette

Bibliometrics: publication history
Average citations per article: 15.00
Citation count: 1,020
Publication count: 68
Publication years: 2005–2017
Available for download: 52
Average downloads per article: 360.56
Downloads (cumulative): 18,749
Downloads (12 months): 1,540
Downloads (6 weeks): 161
68 results found

Results 1–20 of 68

1 published by ACM
August 2017 SIGIR '17: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 17,   Downloads (12 Months): 86,   Downloads (Overall): 86

Full text available: PDF
We analyze 5,792 IR conference papers published over 20 years to investigate how researchers have used and are using statistical significance testing in their experiments.
Keywords: evaluation, information retrieval, statistical significance

2 published by ACM
August 2017 SIGIR '17: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 11,   Downloads (12 Months): 60,   Downloads (Overall): 60

Full text available: PDF
The past 20 years have seen a great improvement in the rigor of information retrieval experimentation, due primarily to two factors: high-quality, public, portable test collections such as those produced by TREC (the Text REtrieval Conference), and the increased practice of statistical hypothesis testing to determine whether measured improvements can ...
Keywords: evaluation, information retrieval, reproducibility, statistical significance testing

3 published by ACM
March 2017 CHIIR '17: Proceedings of the 2017 Conference on Human Information Interaction and Retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 8,   Downloads (12 Months): 78,   Downloads (Overall): 78

Full text available: PDF
Users interact with search engine result pages in various ways, including their clicks, cursor movements, and page scrolls. Researchers model such user interaction behavior in order to understand users and improve search result presentation. In this paper we propose nine different user click models that take various real life search ...
Keywords: users, clicks, detection, simulation

4 published by ACM
March 2017 IUI '17: Proceedings of the 22nd International Conference on Intelligent User Interfaces
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 6,   Downloads (12 Months): 75,   Downloads (Overall): 75

Full text available: PDF
We present a user-based model for rating concepts (i.e., words and phrases) in clinical queries based on their relevance to clinical decision making. Our approach can be adopted by information retrieval systems (e.g., search engines) to identify the most important concepts in user queries in order to better understand user ...
Keywords: clinical concept rating, user simulated modeling, clinical decision support (cds)

5 published by ACM
September 2016 ICTIR '16: Proceedings of the 2016 ACM International Conference on the Theory of Information Retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 2,   Downloads (12 Months): 49,   Downloads (Overall): 108

Full text available: PDF
Data fusion has been shown to be a simple and effective way to improve retrieval results. Most existing data fusion methods combine ranked lists from different retrieval functions for a single given query. But in many real search settings, the diversity of retrieval functions required to achieve good fusion performance ...
Keywords: probabilistic data fusion, search over sessions, retrieval models, diversified ranking

6 published by ACM
July 2016 SIGIR '16: Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 4
Downloads (6 Weeks): 9,   Downloads (12 Months): 60,   Downloads (Overall): 112

Full text available: PDF
Information Retrieval (IR) research has traditionally focused on serving the best results for a single query - so-called ad hoc retrieval. However, users typically search iteratively, refining and reformulating their queries during a session. A key challenge in the study of this interaction is the creation of suitable evaluation resources ...
Keywords: evaluation, test collection, trec session track

7
June 2016 Information Retrieval: Volume 19 Issue 3, June 2016
Publisher: Kluwer Academic Publishers
Bibliometrics:
Citation Count: 0


8 published by ACM
October 2015 CIKM '15: Proceedings of the 24th ACM International on Conference on Information and Knowledge Management
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 5,   Downloads (12 Months): 44,   Downloads (Overall): 159

Full text available: PDF
Similarity measures have been used widely in information retrieval research. Most research has been done on query-document or document-document similarity without much attention to the user's perception of similarity in the context of the information need. In this study, we collect user preference judgements of web document similarity in order ...
Keywords: document similarity, users, similarity measures

9 published by ACM
September 2015 ICTIR '15: Proceedings of the 2015 International Conference on The Theory of Information Retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 6
Downloads (6 Weeks): 3,   Downloads (12 Months): 68,   Downloads (Overall): 194

Full text available: PDF
Batch evaluation with test collections of documents, search topics, and relevance judgments has been the bedrock of IR evaluation since its adoption by Salton for his experiments on vector space systems. Such test collections have limitations: they contain no user interaction data; there is typically only one query per topic; ...
Keywords: evaluation, information retrieval, user simulation, sessions, test collections

10 published by ACM
September 2015 ICTIR '15: Proceedings of the 2015 International Conference on The Theory of Information Retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 1
Downloads (6 Weeks): 7,   Downloads (12 Months): 67,   Downloads (Overall): 234

Full text available: PDF
The past 20 years have seen a great improvement in the rigor of information retrieval experimentation, due primarily to two factors: high-quality, public, portable test collections such as those produced by TREC (the Text REtrieval Conference [28]), and the increased practice of statistical hypothesis testing to determine whether measured ...
Keywords: statistical significance testing, evaluation, information retrieval, reproducibility

11 published by ACM
September 2015 ICTIR '15: Proceedings of the 2015 International Conference on The Theory of Information Retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 4
Downloads (6 Weeks): 6,   Downloads (12 Months): 62,   Downloads (Overall): 611

Full text available: PDF
A key component of experimentation in IR is statistical hypothesis testing , which researchers and developers use to make inferences about the effectiveness of their system relative to others. A statistical hypothesis test can tell us the likelihood that small mean differences in effectiveness (on the order of 5%, say) ...
Keywords: evaluation, information retrieval, bayesian inference, statistical testing

12 published by ACM
August 2015 SIGIR '15: Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 1
Downloads (6 Weeks): 2,   Downloads (12 Months): 30,   Downloads (Overall): 139

Full text available: PDF
Reusable test collections allow researchers to rapidly test different algorithms to find the one that works "best". But because of randomness in the topic sample, or in relevance judgments, or in interactions among system components, extreme results can be seen entirely due to chance, particularly when a collection becomes very ...
Keywords: test collections, evaluation, information retrieval, statistical analysis

13 published by ACM
August 2015 SIGIR '15: Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 4,   Downloads (12 Months): 29,   Downloads (Overall): 185

Full text available: PDF
Different users may be attempting to satisfy different information needs while providing the same query to a search engine. Addressing that issue is the goal of novelty and diversity in information retrieval. The novelty and diversity search task models the task wherein users are interested in seeing more and more documents that are ...
Keywords: user study, diversity, preference judgment

14 published by ACM
July 2014 SIGIR '14: Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 1
Downloads (6 Weeks): 2,   Downloads (12 Months): 35,   Downloads (Overall): 216

Full text available: PDF
The past 20 years have seen a great improvement in the rigor of information retrieval experimentation, due primarily to two factors: high-quality, public, portable test collections such as those produced by TREC (the Text REtrieval Conference [2]), and the increased practice of statistical hypothesis testing to determine whether measured ...
Keywords: information retrieval, statistical significance

15
June 2014 Journal of Biomedical Informatics: Volume 49 Issue C, June 2014
Publisher: Elsevier Science
Bibliometrics:
Citation Count: 3

Highlights: demonstrated the utility of an in-domain collection (clinical text) for query expansion; analyzed the effect of external collection size on a mixture of relevance models; showed that any existing query expansion configuration can benefit from an in-domain collection. In light of the heightened problems of polysemy, synonymy, and hyponymy in clinical text, we hypothesize ...
Keywords: Clinical text, Cohort identification, Information retrieval, Electronic medical records, Query expansion

16 published by ACM
September 2013 ICTIR '13: Proceedings of the 2013 Conference on the Theory of Information Retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 2
Downloads (6 Weeks): 3,   Downloads (12 Months): 24,   Downloads (Overall): 123

Full text available: PDF
The past 20 years have seen a great improvement in the rigor of information retrieval experimentation, due primarily to two factors: high-quality, public, portable test collections such as those produced by TREC (the Text REtrieval Conference [2]), and the increased practice of statistical hypothesis testing to determine whether measured improvements ...
Keywords: statistical significance, information retrieval

17 published by ACM
July 2013 SIGIR '13: Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 6
Downloads (6 Weeks): 3,   Downloads (12 Months): 29,   Downloads (Overall): 346

Full text available: PDF
Novel and diverse document ranking is an effective strategy that involves reducing redundancy in a ranked list to maximize the amount of novel and relevant information available to users. Evaluation for novelty and diversity typically involves an assessor judging each document for relevance against a set of pre-identified subtopics, which ...
Keywords: novelty and diversity, evaluation

18 published by ACM
July 2013 SIGIR '13: Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 4
Downloads (6 Weeks): 2,   Downloads (12 Months): 21,   Downloads (Overall): 226

Full text available: PDF
In this paper, we present a medical record search system which is useful for identifying cohorts required in clinical studies. In particular, we propose a query-adaptive weighting method that can dynamically aggregate and score evidence in multiple medical reports (from different hospital departments or from different tests within the same ...
Keywords: cohort identification, information retrieval, emr, language models, medical record search

19 published by ACM
July 2013 SIGIR '13: Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
Publisher: ACM
Bibliometrics:
Citation Count: 4
Downloads (6 Weeks): 2,   Downloads (12 Months): 24,   Downloads (Overall): 155

Full text available: PDF
The notion of relevance differs between assessors, thus giving rise to assessor disagreement. Although assessor disagreement has been frequently observed, the factors leading to disagreement are still an open problem. In this paper we study the relationship between assessor disagreement and various topic independent factors such as readability and cohesiveness. ...
Keywords: evaluation, retrieval experiment

20
March 2013 ECIR'13: Proceedings of the 35th European conference on Advances in Information Retrieval
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 4

Twitter is an accepted platform among users for expressing views in a short text called a "Tweet". Application of search models to platforms like Twitter is still an open-ended question, though the creation of the TREC Microblog track in 2011 aims to help resolve it. In this paper, we propose ...



The ACM Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.