
Abstract: A neural network model, no matter how deep or complex, implements a function: a mapping from inputs to outputs. The function implemented by a network is determined by the weights the network uses. So, training a network (learning the function the network should implement) on data involves searching for the set of weights that best enable the network to model the patterns in the data. The most commonly used algorithm for learning patterns from data is the gradient descent algorithm. By itself, the gradient descent algorithm can be used to train a single neuron; however, it cannot be used to train a deep network with multiple hidden layers. Training a deep neural network involves using both the gradient descent algorithm and the backpropagation algorithm in tandem. These algorithms are at the core of deep learning, and understanding how they work is, possibly, the most direct way of understanding the potential and limitations of deep learning. This talk provides a gentle but still comprehensive introduction to these two important algorithms. I will also explain how the problem of vanishing gradients arises, and how this problem has been addressed in deep learning.
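
The abstract notes that gradient descent by itself suffices to train a single neuron. As a concrete illustration (not material from the talk itself), here is a minimal sketch in Python, assuming a sigmoid neuron, a squared-error loss, and a toy OR dataset; all of these choices are illustrative assumptions.

```python
# Minimal sketch: training a single sigmoid neuron with gradient descent.
# The toy data (the OR function) and all names here are illustrative.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: learn OR over two binary inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# One weight per input plus a bias, initialised randomly.
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)
lr = 0.5  # learning rate

for epoch in range(1000):
    for x, y in data:
        # Forward pass: weighted sum plus bias, squashed by the sigmoid.
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        a = sigmoid(z)
        # Gradient of the error E = 0.5 * (a - y)^2 with respect to z:
        # dE/dz = (a - y) * a * (1 - a), using the sigmoid derivative.
        delta = (a - y) * a * (1 - a)
        # Gradient descent step: move each weight against its gradient.
        w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
        b -= lr * delta

# After training, the neuron's outputs should approximate the OR targets.
for x, y in data:
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    print(x, y, round(sigmoid(z), 3))
```

Stacking such neurons into hidden layers is what requires backpropagation: the gradient for a weight in an early layer is a product of per-layer terms, and because the sigmoid derivative a(1-a) is at most 0.25, that product can shrink toward zero in deep stacks. This is the vanishing-gradient problem the abstract refers to, and activation functions such as ReLU are one common remedy.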
Bio: Prof. John D. Kelleher is the Academic Leader of the Information, Communication and Entertainment Research Institute at the Dublin Institute of Technology. His areas of expertise include machine learning, artificial intelligence, natural language processing, and spatial cognition. John has worked in a number of academic and research-focused institutes, including Dublin City University, Media Lab Europe, and DFKI (the German Research Centre for Artificial Intelligence). Currently, his research is supported by the Science Foundation Ireland ADAPT Research Centre (Grant Number 13/RC/2016). He is the co-author of Fundamentals of Machine Learning for Predictive Data Analytics, MIT Press, 2015.

Speaker: Dr. John D. Kelleher
Title: Author and Academic Leader, Dublin Institute of Technology
Category: europe-2018-talks
