Reading list

Computational semantics (Swedish: Komputationell semantik)

Course
LT2813
Second cycle
7.5 credits (ECTS)

About the Reading list

Valid from
Spring semester 2026 (2026-01-19)
Decision date
2025-11-13

Registration number (diarienummer) GU 2025/4255


- Adesam, Y., Berdicevskis, A., and Morger, F. (2020). SwedishGLUE – towards a Swedish test set for evaluating natural language understanding models. Research reports from the Department of Swedish, University of Gothenburg, Gothenburg, Sweden. (15 pages)

- Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, pages 610–623, New York, NY, USA. Association for Computing Machinery. (14 pages)

- Bender, E. M. and Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5185–5198, Online. Association for Computational Linguistics. (14 pages)

- Bengio, Y., Ducharme, R., Vincent, P., and Janvin, C. (2003). A neural probabilistic language model. Journal of Machine Learning Research, 3(6):1137–1155. (19 pages)

- Bird, S., Klein, E., and Loper, E. (2009). Analyzing the meaning of sentences. In Natural Language Processing with Python, chapter 10, pages 1–36. O'Reilly. (37 pages)

- Bowman, S. R., Angeli, G., Potts, C., and Manning, C. D. (2015). A large annotated corpus for learning natural language inference. In Màrquez, L., Callison-Burch, C., and Su, J., editors, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. (11 pages)

- Clark, S. (2015). Vector space models of lexical meaning. In Lappin, S. and Fox, C., editors, Handbook of Contemporary Semantics, second edition, chapter 16, pages 493–522. Wiley-Blackwell. (30 pages)

- Cooper, R., Crouch, D., Van Eijck, J., Fox, C., Van Genabith, J., Jaspars, J., Kamp, H., Milward, D., Pinkal, M., Poesio, M., et al. (1996). Using the framework. Technical report LRE 62-051 d-16, The FraCaS Consortium. (136 pages)

- Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2018). BERT: pre-training of deep bidirectional transformers for language understanding. arXiv, arXiv:1810.04805 [cs.CL]:1–14. (14 pages)

- Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Burstein, J., Doran, C., and Solorio, T., editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. (16 pages)

- Dobnik, S., Cooper, R., Ek, A., Noble, B., Larsson, S., Ilinykh, N., Maraev, V., and Somashekarappa, V. (2022). In search of meaning and its representations for computational linguistics. In Proceedings of the 2022 CLASP Conference on (Dis)embodiment, pages 30–44, Gothenburg, Sweden. Association for Computational Linguistics. (15 pages)

- Duchnowski, A., Pavlick, E., and Koller, A. (2025). EHOP: A dataset of everyday NP-hard optimization problems. arXiv, arXiv:2502.13776 [cs.CL]:1–18. (18 pages)

- Erk, K. (2012). Vector space models of word meaning and phrase meaning: A survey. Language and Linguistics Compass, 6(10):635–653. (19 pages)

- Ghanimifard, M. and Dobnik, S. (2017). Learning to compose spatial relations with grounded neural language models. In Gardent, C. and Retoré, C., editors, Proceedings of IWCS 2017: 12th International Conference on Computational Semantics, pages 1–12, Montpellier, France. Association for Computational Linguistics. (12 pages)

- Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space. arXiv, arXiv:1301.3781 [cs.CL]:1–12. (12 pages)

- Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. (9 pages)

- Mitchell, J. and Lapata, M. (2008). Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236–244, Columbus, Ohio. Association for Computational Linguistics. (9 pages)

- Mitchell, J. and Lapata, M. (2010). Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429. (42 pages)

- Stone, M. (2016). Semantics and computation. In Aloni, M. and Dekker, P., editors, The Cambridge Handbook of Formal Semantics, Cambridge Handbooks in Language and Linguistics, chapter 25, pages 775–800. Cambridge University Press, Cambridge, UK. (26 pages)

- Talman, A., Yli-Jyrä, A., and Tiedemann, J. (2019). Sentence embeddings in NLI with iterative refinement encoders. Natural Language Engineering, 25(4):467–482. (16 pages)

- Turney, P. D. and Pantel, P. (2010). From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37(1):141–188. (48 pages)

- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 30, pages 5998–6008. Curran Associates, Inc. (11 pages)

- Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. (2019). SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 32, pages 1–15. Curran Associates, Inc. (15 pages)

- Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. (2018). GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Linzen, T., Chrupala, G., and Alishahi, A., editors, Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. (3 pages)


Approximately 561 pages in total