Here you can find a number of courses that are offered at PhD level at CLASP, as part of the PhD degree in Computational Linguistics. Some of the courses are reading courses. See each course for more details on the format.
- Constructive Type Theories and Natural Language Semantics
- Dialogue Systems 2
- Language, Action, and Perception (APL)
- Representations of Meaning (RoM)
- Topics in Deep Machine Learning (reading course)
- Type Theory with Records: From Perception to Communication
- Sociolinguistics and Bilingualism for Natural Language Processing (CSoc)
- Machine Learning Methods For Vision and Language (ML-V&L)
Constructive Type Theories and Natural Language Semantics
The course concentrates on the application of constructive type theories to the study of natural language semantics. It presents an alternative representation language for NL semantics based on the notion of proof, rather than the notion of truth with respect to a model, as is standard in classical Montague semantics.
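The proof-based view can be glossed via the Curry-Howard correspondence: a proposition is identified with the type of its proofs, so a conjunction is proved by a pair and an implication by a function. A minimal sketch (an illustrative aside, not material from the course; all names are my own):

```python
# Proofs-as-programs in miniature: proofs of "A and B" are pairs,
# proofs of "A implies B" are functions. Illustrative names only.

def and_intro(proof_a, proof_b):
    """Introduce A & B: pair a proof of A with a proof of B."""
    return (proof_a, proof_b)

def and_elim_left(proof_ab):
    """Eliminate A & B on the left: recover the proof of A."""
    return proof_ab[0]

def implies_trans(f, g):
    """Compose proofs of A -> B and B -> C into a proof of A -> C."""
    return lambda proof_a: g(f(proof_a))

# On this view, the meaning of a sentence is the type of objects that
# witness (prove) it, rather than a truth value in a model.
print(and_elim_left(and_intro("proof of A", "proof of B")))  # proof of A
```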
Dialogue Systems 2
The course gives in-depth knowledge about theories and methods for the design, implementation and evaluation of dialogue systems by focusing especially on:
- Semantics and pragmatics for dialogue systems
- Data collection and analysis
- Advanced dialogue management
- Evaluation of dialogue systems
- Advanced implementation techniques
Language, Action, and Perception (APL)
This is a PhD course that explores computational modelling of language and vision, in particular in relation to situated dialogue agents and image classification. There is a parallel course at the master's level with which this course may partially overlap: LT2308 ESLP: Embodied and Situated Language Processing or LT2318: Artificial Intelligence: Cognitive Systems.
The course gives a survey of theory and practical computational implementations of how natural language interacts with the physical world through action and perception. We will look at topics such as semantic theories and computational approaches to modelling natural language, action and perception (grounding), situated dialogue systems, integrated robotic systems, grounding of language in action and perception, generation and interpretation of scene descriptions from images and videos, spatial cognition, and others.
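One way to picture grounding, as surveyed above, is that the meaning of a spatial word is a classifier over perceptual features rather than an unanalysed logical symbol. A toy sketch under that assumption (the scene and function names are invented for illustration):

```python
# Grounded meaning of "left of": a predicate over image coordinates,
# rather than an uninterpreted symbol left_of(x, y) in a logic.

def left_of(landmark, target):
    """True if the target's x-coordinate is smaller than the landmark's."""
    return target[0] < landmark[0]

# A minimal "scene": objects mapped to (x, y) image coordinates.
scene = {"cup": (40, 100), "plate": (120, 100)}

# "The cup is left of the plate."
print(left_of(scene["plate"], scene["cup"]))  # True
```

A situated dialogue agent would learn such classifiers from perceptual data rather than hand-code them, but the interface between language and perception has this general shape.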
As the course studies how humans structure and interact with the physical world and express it in language, it bridges into the domains of cognitive science, computer vision, and robotics, and therefore belongs more broadly to the field of cognitive artificial intelligence. Typical applications of computational models of language, action, and perception are image search and retrieval on the web, navigation systems that provide more natural, human-like instructions, and personal robots and situated conversational agents that interact with us in our home environment through language.
The learning outcomes of the course are based on covering three topics: (i) the relation between language and perception in human interaction, (ii) how language and perception are modelled with formal and computational models and methods, and how these are integrated with different applications, and (iii) how research in the field is communicated scientifically.
Representations of Meaning (RoM)
The course gives a survey of theory and computational implementations of representing and reasoning with meaning in natural languages from cognitive, linguistic and computational perspectives. We will look at formal theories and computational implementations of model-theoretic semantics (lambda calculus), situated and grounded representations of meaning, semantic grammars (CCG, dependency grammar), distributional representations of lexical meaning and their compositional extensions, approaches to unsupervised machine learning of linguistic representations, and others. The emphasis of the course will be on (i) the nature of representations, (ii) how they satisfy the notion of compositionality, (iii) how they are used in inference and reasoning, and (iv) which natural language processing applications they are useful for.
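The model-theoretic, lambda-calculus side of this can be sketched in a few lines: lexical meanings are values or functions over a model, and compositionality is implemented by function application. A minimal illustration (the toy model and names are my own, not course material):

```python
# A toy model: entities, and the denotation of an intransitive verb
# as the set of entities it holds of.
entities = {"fido", "rex", "felix"}
barks = {"fido", "rex"}  # the barkers in this model

# Lexical entries as lambda terms:
fido_sem = "fido"                     # type e: an entity
barks_sem = lambda x: x in barks      # type <e,t>: entity -> truth value

# Compositionality by function application:
# [[Fido barks]] = [[barks]]([[Fido]])
print(barks_sem(fido_sem))  # True
```

Distributional representations replace these set-theoretic denotations with vectors, and the course's compositional extensions ask what plays the role of function application there.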
Topics in Deep Machine Learning
This course comes in an introductory as well as an advanced version. It is a reading course.
Course content for the introductory version: An introduction to the basic concepts of deep machine learning as applied to problems in natural language processing.
Course content for the advanced version: Advanced applications of deep machine learning as applied to problems in natural language processing and artificial intelligence.
Type Theory with Records: From Perception to Communication
The course introduces TTR, a Type Theory with Records, as a framework for natural language grammar and interaction. We follow Cooper (in preparation) in taking a dialogical view of semantics. The course covers the formal foundations of TTR as well as TTR accounts of perception, intensionality, information exchange, grammar (syntax and semantics), quantification, modality and other linguistic phenomena. It also covers the relation between TTR and other type theories for natural language semantics, as well as recent extensions and applications of TTR.
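The central data structures of TTR, records and record types, can be roughly pictured as labelled fields checked against (possibly dependent) types. The sketch below is my own loose illustration in Python, not Cooper's formalism: a record type maps labels to checks, and a record is of that type when every field is witnessed.

```python
# A record type roughly like [ x : Ind, c_dog : dog(x) ]:
# the field for c_dog is dependent, i.e. its check may consult
# the rest of the record (here, the value of x).

def is_individual(v):
    return isinstance(v, str)

dog_type = {
    "x": lambda rec, v: is_individual(v),
    "c_dog": lambda rec, v: v == ("dog", rec["x"]),  # a "proof object"
}

def of_type(record, record_type):
    """A record is of a record type if every labelled field checks out."""
    return all(label in record and check(record, record[label])
               for label, check in record_type.items())

r = {"x": "fido", "c_dog": ("dog", "fido")}
print(of_type(r, dog_type))  # True
```

In TTR proper, types are first-class objects and perception is modelled as judging that a situation is of a type; this dict-based sketch only conveys the record/record-type shape.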
Sociolinguistics and Bilingualism for Natural Language Processing (CSoc)
The course overviews basic concepts and theories in sociolinguistics and bilingualism. It examines their implications for computational approaches to language with respect to the collection and processing of corpora of speech, writing or social media that display sociolinguistic variation and/or code-switching, borrowing and similar phenomena of multilingual communities.
Machine Learning Methods For Vision and Language (ML-V&L)
The course focuses on machine learning/deep learning models and techniques such as Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), Convolutional Neural Networks (ConvNets), Neural Auto-Encoders, Memory Networks, and others, applied to computational modelling of natural language, images, and other sensory information.
Theoretically, it examines how machine learning approaches address topics such as multi-modal grounded representations of meaning, representing and resolving semantic ambiguity, attention and salience, perception and dialogue interaction, natural language interpretation, natural language generation, natural language reasoning and inference, and collection of perceptual and linguistic data.
Practically, the course overviews contemporary computer vision and natural language processing tasks such as generating image and video descriptions, visual question answering, image retrieval using text queries, aligning images and text in large data collections, image generation from textual descriptions, and others.
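The recurrent models listed above all elaborate on one building block: a step function that folds an input sequence into a hidden state. A minimal NumPy sketch of a plain (Elman) RNN step, with illustrative shapes and names of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, emb = 4, 3
W_xh = rng.normal(size=(hidden, emb))     # input-to-hidden weights
W_hh = rng.normal(size=(hidden, hidden))  # recurrent (hidden-to-hidden) weights
b_h = np.zeros(hidden)

def rnn_step(h_prev, x):
    """One step: h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b)."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Encode a "sentence" of three word embeddings into a final hidden state,
# e.g. as the text half of an image-text alignment model.
h = np.zeros(hidden)
for x in rng.normal(size=(3, emb)):
    h = rnn_step(h, x)
print(h.shape)  # (4,)
```

LSTMs add gating to this step to preserve information over long sequences, and in vision-and-language models such a sentence encoding is typically combined with ConvNet image features.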