Abstract: This talk will give a short introduction to neural networks and how they are used for machine translation. The primary goal of the talk is to give a no-tears introduction to neural machine translation (NMT) to people who do not have a computer science or mathematical background. The secondary goal is to provide a deep enough understanding of NMT that the audience can appreciate the strengths and weaknesses of the technology. The talk starts with a brief introduction to standard feed-forward neural networks (what they are, how they work, and how they are trained). This is followed by an introduction to word embeddings (vector representations of words), after which we introduce recurrent neural networks. Once these fundamentals have been covered, we then focus on the components of a standard neural machine translation architecture, namely: encoder networks, decoder language models, and the encoder-decoder architecture.
Bio: Prof. John D. Kelleher is the Academic Leader of the Information, Communication and Entertainment Research Institute at the Dublin Institute of Technology. His areas of expertise include machine learning, artificial intelligence, natural language processing, and spatial cognition. John has worked in a number of different academic and research-focused institutes, including Dublin City University, Media Lab Europe, and DFKI (the German Research Centre for Artificial Intelligence). Currently, his research is supported by the Science Foundation Ireland ADAPT Research Centre (Grant Number 13/RC/2016). He is the co-author of Fundamentals of Machine Learning for Predictive Data Analytics (MIT Press, 2015).