Page Manager: Webmaster
Last update: 9/11/2012 3:13 PM


Improving Neural Network Performance by Injecting Background Knowledge: Detecting Code-switching and Borrowing in Algerian texts

Conference paper
Authors: Wafia Adouane, Jean-Philippe Bernardy, Simon Dobnik
Published in: Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-switching, Melbourne, Australia, July 19, 2018
ISBN: 978-1-948087-45-2
Publisher: Association for Computational Linguistics
Place of publication: Melbourne, Australia
Publication year: 2018
Published at: Department of Philosophy, Linguistics and Theory of Science
Language: English
Keywords: injecting background knowledge into DNNs, low-resourced languages, code-switching, sub-word embeddings, non-standardised user-generated Algerian texts
Subject categories: Language Technology (Computational Linguistics)


We explore the effect of injecting background knowledge into different deep neural network (DNN) configurations in order to mitigate the scarcity of annotated data when applying these models to datasets of low-resourced languages. The background knowledge is encoded in the form of lexicons and pre-trained sub-word embeddings. The DNN models are evaluated on the task of detecting code-switching and borrowing points in non-standardised user-generated Algerian texts. Overall, the results show that DNNs benefit from the added background knowledge, although the gain varies between models and categories. The proposed DNN architectures are generic and could be applied to other low-resourced languages.
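To illustrate the general idea, the following minimal sketch shows one common way such background knowledge can be turned into per-token input features: lexicon-membership flags alongside character n-gram (sub-word) features. The lexicon entries, feature layout, and function names here are illustrative assumptions, not the paper's actual resources or architecture; in the paper, the features feed DNN configurations rather than being used directly.

```python
# Hedged sketch: combining lexicon-based background knowledge with
# sub-word (character n-gram) features for per-token tagging input.
# Lexicon contents below are hypothetical stand-ins.

def char_ngrams(token, n=3):
    """Sub-word features: character n-grams with boundary markers."""
    padded = f"<{token}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

# Toy lexicons standing in for the background-knowledge resources.
LEXICONS = {
    "ALG": {"wesh", "rak"},    # hypothetical Algerian Arabic entries
    "FRA": {"oui", "merci"},   # hypothetical French borrowing entries
}

def token_features(token):
    """Concatenate lexicon-membership flags with sub-word features.
    A DNN would consume these alongside learned embeddings."""
    flags = [int(token.lower() in lex) for lex in LEXICONS.values()]
    return {"lexicon_flags": flags, "subwords": char_ngrams(token)}
```

For example, `token_features("merci")` yields the membership flags `[0, 1]` (not in the Algerian lexicon, present in the French one) together with the n-grams of `<merci>`, making the dialect signal explicit to the model even when training data is scarce.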

