
Keynote: Geoffrey Hinton (Cohere)
Monday, July 10 - Time: 09:30–10:30 EDT
Abstract: I will briefly describe the forty-year history of neural net language models, with particular attention to whether they understand what they are saying. I will then discuss some of the main differences between digital and biological intelligences and speculate on how the brain could implement something like transformers. I will conclude by addressing the contentious issue of whether current multimodal LLMs have subjective experience.

Mausam, Professor, IIT Delhi (ARR EIC), and Jonathan K. Kummerfeld, Assistant Professor, University of Sydney (ARR CTO) Tuesday, July 11, 2023 - Room: Metropolitan - Time: 14:15–14:45
This session will contain a presentation on progress in ARR over the past year and provide an opportunity for community questions and discussion.
We will briefly present:
- Personnel updates
- New aspects: tracks, senior action editors
- Improvements, e.g., changes to the review–paper matching process
- Statistics on timeliness and paper outcomes
- Next steps
With that context we will open the floor to questions.

Karën Fort, Min-Yen Kan and Yulia Tsvetkov (ACL Ethics Committee co-chairs) Committee Members: Luciana Benotti, Mark Dredze, Pascale Fung, Dirk Hovy, Jin-Dong Kim, Malvina Nissim Tuesday, July 11, 2023 - Room: Pier 4&5 - Time: 16:15–17:45
We present the ACL Ethics Committee’s progress over the last few years. Of core interest, we will present the results of the ACL stakeholder survey about the role of ethics and exposure to ethics training. Results from the survey respondents indicate that ethics is of primary interest to the community and that there is a mandate for the further creation and dissemination of ethics-related training for authors, reviewers, and event organisers. We will briefly review the survey results and feature an extended question and answer session in support of continued dialogue with our community. Our session will culminate in a moderated panel discussion with our session’s participants, with participation from the entire ethics committee.

Tuesday, July 11, 2023 - Room: Metropolitan - Time: 13:00–13:30
Dragomir Radev, the A. Bartlett Giamatti Professor of Computer Science at Yale University, passed away this year on Wednesday, March 29. Drago contributed in substantial ways to research in NLP, to the organization of the ACL, and to mentoring the next generation of computational linguists. Drago’s role in our ACL community spanned four decades. He was recognized for his work over this period through his selection as an ACL Fellow in 2018 for his significant contributions to text summarization and question answering, and through his receipt of the Distinguished ACL Service Award in 2022. In this session, speakers from different periods of his life will discuss his contributions to the field and the impact his life had on so many of us.

Join us for a panel featuring experts Sara Hooker (Cohere), Swaroop Mishra (Google DeepMind), and Danqi Chen (Princeton), who will provide invaluable insights into navigating the tempestuous seas of NLP in the era of large language models. This discussion will guide students and early-career researchers through impactful research directions and strategies for making progress, offering perspectives from both academia and industry.
Tuesday, July 11 - Time: 13:45–14:30 EDT
Room: Pier 2&3

Chair: Iryna Gurevych Technische Universität Darmstadt
Tuesday, July 11 - Time: 14:45-15:45
This is a panel discussion with:
- Dan Klein (UC Berkeley)
- Meg Mitchell (Hugging Face)
- Roy Schwartz (The Hebrew University of Jerusalem)

They will present short statements (5 to 7 minutes) related to the main topics of the panel:
- New opportunities (e.g., artificial general intelligence, responsible NLP)
- Technical challenges (e.g., multimodality, instruction-tuning)
- Real-life problems and societal implications (e.g., hallucinations, biases, the future job market)
- LLMs and the future of NLP
- Open-science vs. commercial LLMs

This will be followed by discussion with the panel and the audience.

Alison Gopnik University of California, Berkeley
Wednesday, July 12 - Time: 14:00–15:00 EDT
Abstract: It’s natural to ask whether large language models like LaMDA or GPT-3 are intelligent agents. But I argue that this is the wrong question. Intelligence and agency are the wrong categories for understanding them. Instead, these AI systems are what we might call cultural technologies, like writing, print, libraries, internet search engines, or even language itself. They are new techniques for passing on information from one group of people to another. Cultural technologies aren’t like intelligent humans, but they are essential for human intelligence. Many animals can transmit some information from one individual or one generation to another, but no animal does it as much as we do or accumulates as much information over time. New technologies that make cultural transmission easier and more effective have been among the greatest engines of human progress, but they have also led to negative as well as positive social consequences. Moreover, while cultural technologies allow transmission of existing information, cultural evolution, which is central to human success, also depends on innovation, exploration, and causal learning. Comparing LLMs’ responses to prompts based on developmental psychology experiments with the responses of children may provide insight into which capacities can be learned through language and cultural transmission, and which require innovation and exploration in the physical world. I will present results from several studies making such comparisons.