This course covers advanced machine learning methods for natural language processing (NLP) applications: word embeddings, Markov chains, hidden Markov models, topic modeling, and recurrent neural networks.
Course Learning Outcomes
By the end of the course, students are expected to be able to
- Explain and use word embeddings for word meaning representation.
- Train their own word embeddings and use pre-trained word embeddings.
- Specify a Markov chain and carry out generation and inference with it.
- Explain the idea of a stationary distribution of a Markov chain.
- Explain hidden Markov models and carry out decoding with them.
- Explain the Latent Dirichlet Allocation (LDA) approach to topic modeling and carry out topic modeling on text data.
- Explain Recurrent Neural Networks (RNNs) and use them for classification, generation, and image captioning.
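As a taste of the material, the stationary-distribution outcome above can be illustrated with a few lines of NumPy. This is a minimal sketch, not course material: the two-state transition matrix is a hypothetical example, and the stationary distribution is found by power iteration (repeatedly applying the transition matrix until the state distribution stops changing).

```python
import numpy as np

# Hypothetical 2-state transition matrix; rows sum to 1,
# P[i, j] = probability of moving from state i to state j.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Power iteration: start from a point mass on state 0 and
# repeatedly apply P until the distribution converges.
pi = np.array([1.0, 0.0])
for _ in range(1000):
    pi = pi @ P

print(pi)  # converges to the stationary distribution, which satisfies pi = pi @ P
```

For this matrix the iteration converges to roughly [0.833, 0.167], and one can check that applying P once more leaves the distribution unchanged.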
All videos are available here.