- Basic tutorial on Probability Theory and Linear Algebra - Mário Figueiredo (slides)
- Introduction to Python - Luís Pedro Coelho (notebook)
- Introduction to Machine Learning: Linear Learners - Stefan Riezler (slides)
- Multi-view representation learning for speech and language - Karen Livescu (slides)
- Sequence Models - Noah Smith (slides)
- Making neural generation better: with practice and common sense - Yejin Choi
- Introduction to Neural Networks - Bhiksha Raj (slides)
- Learning language by grounding language - Karl Moritz Hermann (slides)
- Modeling sequential data with Recurrent Networks - Chris Dyer (slides)
- Meta-learning of Neural Machine Translation for low-resource language pairs - Kyunghyun Cho
- Install:
- Resources:
- Labs Guide
- Labs Sources (Github)
- LxMLS 2018 website
- Slack Group
- Other Lectures:
- Previous Lectures (2017):
- Learning and representation in language understanding - Fernando Pereira (slides)
- Smaller, faster, deeper: Univ. Edinburgh MT submission to WMT 2017 - Alexandra Birch (slides)
- Deep Learning for Speech Recognition - Mark Gales (slides)
- Syntax and Parsing - Yoav Goldberg (slides)
- Simple and efficient learning with dynamic Neural Networks - Graham Neubig (slides)
- Neural Machine Translation and beyond - Kyunghyun Cho (slides)
- Previous Lectures (2016):
- Structured prediction in Natural Language Processing with imitation learning - Andreas Vlachos (slides)
- Machine Translation as Sequence Modelling - Philipp Koehn (slides)
- Turbo Parser Redux: from dependencies to constituents - André Martins (slides)
- Deep Neural Networks are our friends - Wang Ling (slides)
- Memory Networks for language understanding - Antoine Bordes (slides)
Lisbon, 2018