DOI: 10.1145/3571884.3604316
Work in Progress

Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators

Published: 19 July 2023

Abstract

Large language models that exhibit instruction-following behaviour represent one of the biggest recent upheavals in conversational interfaces, a trend in large part fuelled by the release of OpenAI’s ChatGPT, a proprietary large language model for text generation fine-tuned through reinforcement learning from human feedback (LLM+RLHF). We review the risks of relying on proprietary software and survey the first crop of open-source projects of comparable architecture and functionality. The main contribution of this paper is to show that openness is differentiated, and to offer scientific documentation of degrees of openness in this fast-moving field. We evaluate projects in terms of openness of code, training data, model weights, RLHF data, licensing, scientific documentation, and access methods. We find that while there is a fast-growing list of projects billing themselves as ‘open source’, many inherit undocumented data of dubious legality, few share the all-important instruction-tuning (a key site where human annotation labour is involved), and careful scientific documentation is exceedingly rare. Degrees of openness are relevant to fairness and accountability at all points, from data collection and curation to model architecture, and from training and fine-tuning to release and deployment.




Published In

CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces
July 2023, 504 pages
ISBN: 9798400700149
DOI: 10.1145/3571884


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. RLHF
  2. chatGPT
  3. large language models
  4. open source
  5. survey

Qualifiers

  • Work in progress
  • Research
  • Refereed limited

Conference

CUI '23: ACM conference on Conversational User Interfaces
July 19–21, 2023
Eindhoven, Netherlands

Acceptance Rates

Overall Acceptance Rate 34 of 100 submissions, 34%



