Page Manager: Webmaster
Last update: 9/11/2012 3:13 PM

Learning to Compose Spatial Relations with Grounded Neural Language Models

Conference paper
Authors: Mehdi Ghanimifard, Simon Dobnik
Published in: Proceedings of IWCS 2017: 12th International Conference on Computational Semantics, Montpellier, 19-22 September 2017 / Claire Gardent and Christian Retoré (eds.)
Publisher: Association for Computational Linguistics
Publication year: 2017
Published at: Department of Philosophy, Linguistics and Theory of Science
Language: English
Keywords: Symbol Grounding, Grounded Language Model, Language and Vision, Recurrent Neural Networks, Representation Learning, Representation of Meaning
Subject categories: Computational linguistics, Linguistics


Language is compositional: we can generate and interpret novel sentences because we have a notion of the meaning of their individual parts. Spatial descriptions are grounded in perceptual representations, but their meaning is also defined by the neighbouring words they co-occur with. In this paper, we examine how language models conditioned on perceptual features can capture the semantics of composed phrases as well as of individual words. We generate a synthetic dataset of spatial descriptions referring to perceptual scenes and examine how grounded language models built with deep neural networks can account for the compositionality of descriptions: we evaluate how the learned language models deal with novel grounded composed descriptions and with novel grounded decomposed descriptions (constituents previously not seen in isolation).
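The paper's exact architecture is not given on this page, but the idea of a language model conditioned on perceptual features can be sketched as a recurrent network whose input at each step is a word embedding concatenated with a scene feature vector. The following is a minimal, untrained NumPy sketch under that assumption; the vocabulary, dimensions, and random scene vector are illustrative stand-ins, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["<s>", "the", "circle", "left", "of", "square", "</s>"]
V = len(vocab)
d_emb, d_vis, d_hid = 8, 4, 16  # embedding, visual-feature, hidden sizes

# Randomly initialised parameters (training is omitted in this sketch).
E  = rng.normal(0, 0.1, (V, d_emb))              # word embeddings
Wx = rng.normal(0, 0.1, (d_hid, d_emb + d_vis))  # input-to-hidden
Wh = rng.normal(0, 0.1, (d_hid, d_hid))          # hidden-to-hidden
Wo = rng.normal(0, 0.1, (V, d_hid))              # hidden-to-vocab

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def grounded_lm_step(word_id, h, vis):
    """One RNN step conditioned on perceptual features: the scene
    vector `vis` is concatenated to the current word embedding."""
    x = np.concatenate([E[word_id], vis])
    h = np.tanh(Wx @ x + Wh @ h)
    return softmax(Wo @ h), h  # next-word distribution, new state

# Score a spatial description against a (random stand-in) scene vector.
scene = rng.normal(0, 1, d_vis)
tokens = ["<s>", "the", "circle", "left", "of", "the", "square", "</s>"]
h = np.zeros(d_hid)
log_prob = 0.0
for prev, nxt in zip(tokens[:-1], tokens[1:]):
    p, h = grounded_lm_step(vocab.index(prev), h, scene)
    log_prob += np.log(p[vocab.index(nxt)])
print(round(log_prob, 3))
```

Because the scene vector enters every step, the model can in principle assign different probabilities to the same description under different scenes, which is what the compositionality evaluation described above probes.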
