Gradient Probabilistic Models vs Categorical Grammars: A Reply to Sprouse et al. (2018)

Working paper
Authors Shalom Lappin
Jey Han Lau
Publisher The Science of Language (blog)
Place of publication MIT
Year of publication 2018
Published at Department of Philosophy, Linguistics and Theory of Science (Institutionen för filosofi, lingvistik och vetenskapsteori)
Language English
Links thescienceoflanguage.com/2018/07/22...
https://gup.ub.gu.se/file/207512
Keywords Gradience in sentence acceptability, probabilistic models of linguistic knowledge, machine learning applied to natural language, deep neural network models of sentence acceptability, categorial grammars, RNNs and LSTMs
Subject categories Language technology (computational linguistics), Computational linguistics

Abstract

In Lau et al. (2017) we present two claims, which we support through two sets of experiments. The first claim is that speakers' acceptability judgments for sentences are intrinsically gradient rather than binary. The second is that probabilistic machine learning models trained on corpora of naturally occurring text can predict human acceptability judgments with an encouraging degree of accuracy. Sprouse et al. (2018) (SYIFB) argue that our models capture gradience in human acceptability ratings at the cost of accuracy in binary classification of sentences as acceptable or unacceptable. They support this argument by training two of our models, trigram + SLOR and RNN + SLOR, on the BNC and then testing them on three crowdsource-annotated test sets. We show that SYIFB's "binary grammaticality" metric corresponds to neither a model nor a grammaticality classifier, and so their criticisms of our models lack force. We consider recent work on the use of deep neural networks to learn and represent properties of natural language, and we speculate on the prospects that these models will achieve human-level performance across a wide range of cognitively interesting NLP tasks. We conclude by suggesting that, for the discussion to move forward, advocates of a categorial grammar derived from a strong-bias UG view of language acquisition need to produce a genuine computational model that provides a non-trivial classifier for acceptability. Only when such a system is available can we compare it to the machine learning models that we and other computational linguists are using to acquire and represent linguistic knowledge.
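The SLOR score mentioned above normalizes a language model's log-probability for a sentence by the unigram log-probability of its words and by sentence length, so that acceptability estimates are not dominated by word frequency or length. The following is a minimal Python sketch, not the paper's implementation: the toy corpus, the add-one smoothing, and the model_logprob argument (which would come from a trained trigram or RNN language model) are assumptions made here for illustration.

import math
from collections import Counter

# Toy corpus, assumed here only to estimate unigram probabilities;
# Lau et al. (2017) train their models on the BNC.
corpus = "the cat sat on the mat the dog sat on the rug".split()
unigram_counts = Counter(corpus)
total_tokens = sum(unigram_counts.values())
vocab_size = len(unigram_counts)

def unigram_logprob(tokens):
    # Sum of log unigram probabilities with add-one smoothing
    # (the smoothing choice is an assumption of this sketch).
    return sum(
        math.log((unigram_counts[t] + 1) / (total_tokens + vocab_size))
        for t in tokens
    )

def slor(tokens, model_logprob):
    # SLOR(s) = (log P_model(s) - log P_unigram(s)) / |s|
    # model_logprob is the sentence log-probability assigned by a
    # trained language model (e.g. a trigram model or an RNN).
    return (model_logprob - unigram_logprob(tokens)) / len(tokens)

# Hypothetical usage: -12.3 stands in for a model's sentence log-probability.
sentence = "the cat sat on the mat".split()
print(slor(sentence, model_logprob=-12.3))

Higher SLOR values indicate sentences that the model finds more probable than their word frequencies alone would predict, which Lau et al. (2017) report correlates with gradient human acceptability ratings.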
