Neural language modelling based on enormous text corpora has been a remarkably successful programme of research and development. Recent large-scale models have attracted significant media attention because their output often appears strikingly human-like, whether in narrative generation, philosophical conversation, or other domains. On closer inspection, however, significant gaps emerge between their behaviour and what we would expect from humans. This raises questions about their applicability both to natural language processing tasks and as models of human linguistic cognition.
At the same time, it is undeniable that these models have made progress. But what are the limits of "big text corpus" models that do not learn from the kinds of grounded interactions, environmental and social, through which living humans learn? To what extent can these systems be said to capture what living, embodied humans mean when they use language?
At (Dis)embodiment, we aim to discuss these questions from many viewpoints at an academic conference organized by the Centre for Linguistic Theory and Studies in Probability (CLASP), http://clasp.gu.se, at the Department of Philosophy, Linguistics and Theory of Science (FLoV). The conference is sponsored by SIGSEM, http://sigsem.org, the ACL special interest group on computational semantics. The (Dis)embodiment conference proceedings will be published online in the ACL Anthology for 2022 as a SIGSEM workshop event.