Learning to Compose Spatial Relations with Grounded Neural Language Models

Paper in proceedings
Authors Mehdi Ghanimifard
Simon Dobnik
Published in Proceedings of IWCS 2017
Publication year 2017
Published at Department of Philosophy, Linguistics and Theory of Science
Language English
Links aclweb.org/anthology/W17-6808
https://gup-server.ub.gu.se/v1/asse...
Keywords Symbol Grounding, Grounded Language Model, Language and Vision, Recurrent Neural Networks, Representation Learning, Representation of Meaning
Subject categories Computational linguistics, Language technology (computational linguistics), Linguistics

Abstract

Language is compositional: we can generate and interpret novel sentences by having a notion of the meaning of their individual parts. Spatial descriptions are grounded in perceptual representations, but their meaning is also defined by the neighbouring words they co-occur with. In this paper, we examine how language models conditioned on perceptual features can capture the semantics of composed phrases as well as of individual words. We generate a synthetic dataset of spatial descriptions referring to perceptual scenes and examine how grounded language models built with deep neural networks can account for the compositionality of descriptions, evaluating how the learned language models deal with novel grounded composed descriptions and with novel grounded decomposed descriptions, constituents previously not seen in isolation.
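To make the idea of a language model conditioned on perceptual features concrete, the following is a minimal sketch in PyTorch, not the authors' implementation: a recurrent language model in which a scene feature vector is concatenated with every word embedding, so each next-word prediction is grounded in perception. All class names, dimensions, and the conditioning scheme are illustrative assumptions.

import torch
import torch.nn as nn


class GroundedLanguageModel(nn.Module):
    """Hypothetical sketch: an LSTM language model conditioned on scene features."""

    def __init__(self, vocab_size, embed_dim=64, feat_dim=32, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Scene features are concatenated with each word embedding, so every
        # step of the recurrent model is conditioned on the perceptual input.
        self.lstm = nn.LSTM(embed_dim + feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, scene_feats):
        # tokens: (batch, seq_len) word ids of a spatial description
        # scene_feats: (batch, feat_dim) perceptual features of the scene
        emb = self.embed(tokens)                                   # (B, T, E)
        feats = scene_feats.unsqueeze(1).expand(-1, emb.size(1), -1)
        hidden, _ = self.lstm(torch.cat([emb, feats], dim=-1))     # (B, T, H)
        return self.out(hidden)                                    # next-word logits


# Toy usage: train or score (scene, description) pairs with cross-entropy.
model = GroundedLanguageModel(vocab_size=50)
tokens = torch.randint(0, 50, (2, 6))      # two toy descriptions
scene = torch.randn(2, 32)                 # two toy scene feature vectors
logits = model(tokens[:, :-1], scene)      # predict each following word
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 50), tokens[:, 1:].reshape(-1))

Under this kind of setup, compositionality can be probed by holding out particular grounded phrase combinations during training and evaluating the model's scores on those novel composed or decomposed descriptions, as the abstract describes.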
