Interactive visual grounding with neural networks

Conference paper
Authors: José Miguel Cano Santín, Simon Dobnik, Mehdi Ghanimifard
Published in: Proceedings of LondonLogue - Semdial 2019: The 23rd Workshop on the Semantics and Pragmatics of Dialogue
ISSN: 2308-2275
Publisher: Queen Mary University of London
Place of publication: London, UK
Publication year: 2019
Published at: Department of Philosophy, Linguistics and Theory of Science
Language: English
Links: semdial.org/anthology/papers/Z/Z19/...
https://semdial2019.github.io/#
https://gup.ub.gu.se/file/207842
https://gup.ub.gu.se/file/207881
Keywords: grounding, object learning, interactive learning, transfer learning, neural networks
Subject categories: Computational linguistics, Linguistics, Cognitive science

Abstract

Training strategies for neural networks are not suitable for real-time human-robot interaction. Few-shot learning approaches have been developed for low-resource scenarios, but without the usual teacher/learner supervision. In this work we present a combination of both: a situated dialogue system to teach object names to a robot from its camera images using Matching Networks (Vinyals et al., 2016). We compare the performance of the system with transfer learning from pre-trained models and with different conversational strategies with a human tutor.
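
The abstract refers to Matching Networks (Vinyals et al., 2016) for few-shot object learning. As a rough illustration of that classification step only (not the authors' implementation), the sketch below labels a query image embedding by cosine-similarity attention over a small labelled support set; the function names, shapes, and use of pre-trained CNN features are illustrative assumptions.

```python
# Minimal sketch of the Matching Networks classification step:
# a query embedding is classified by attention (softmax over cosine
# similarities) against a handful of labelled support embeddings.
import torch
import torch.nn.functional as F


def matching_networks_predict(query_emb, support_embs, support_labels, n_classes):
    """Classify one query embedding against a few-shot support set.

    query_emb:      (d,)   embedding of the new camera image
    support_embs:   (k, d) embeddings of the k labelled support examples
    support_labels: (k,)   integer class ids of the support examples
    """
    # Attention weights: softmax over cosine similarities to the support set.
    sims = F.cosine_similarity(query_emb.unsqueeze(0), support_embs, dim=1)  # (k,)
    attn = F.softmax(sims, dim=0)                                            # (k,)
    # Predicted label distribution: attention-weighted sum of one-hot labels.
    one_hot = F.one_hot(support_labels, n_classes).float()                   # (k, n_classes)
    return attn @ one_hot                                                    # (n_classes,)


if __name__ == "__main__":
    torch.manual_seed(0)
    d, k, n_classes = 512, 5, 3                      # e.g. features from a pre-trained CNN
    support = torch.randn(k, d)
    labels = torch.tensor([0, 0, 1, 1, 2])
    query = support[2] + 0.01 * torch.randn(d)       # query close to a class-1 example
    print(matching_networks_predict(query, support, labels, n_classes))
```

Because classification reduces to comparing embeddings against the current support set, new object names supplied by a tutor can be added at interaction time without retraining the network, which is what makes this kind of model attractive for the interactive setting described above.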
