Abstract: Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalisations of CNNs to graph-structured data, and neural message-passing approaches. These advances in graph neural networks (GNNs) and related techniques have led to new state-of-the-art results in numerous domains: chemical synthesis, vehicle routing, 3D vision, recommender systems, question answering, continuous control, self-driving vehicles and social network analysis. Accordingly, GNNs regularly feature among the fastest-growing research trends, with dedicated workshops at virtually all top machine learning conferences.
In this talk, I will attempt to provide several “bird’s eye” views on GNNs. Following a quick motivation on the utility of graph representation learning, I will derive GNNs from first principles of permutation invariance and equivariance. We will discuss how we can build GNNs that are not strictly reliant on the input graph structure.
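To give a concrete flavour of the "first principles" mentioned above, here is a minimal NumPy sketch (my own illustration, not material from the talk): a sum-based message-passing layer is permutation equivariant, meaning that relabelling the nodes before applying the layer yields the same result as relabelling its output.

```python
import numpy as np

def gnn_layer(A, X, W_self, W_neigh):
    """A minimal message-passing layer (illustrative sketch): each node
    combines its own features with the sum of its neighbours' features."""
    return np.maximum(0, X @ W_self + A @ X @ W_neigh)  # element-wise ReLU

rng = np.random.default_rng(0)
n, d = 4, 3
A = rng.integers(0, 2, size=(n, n))
A = np.maximum(A, A.T)              # symmetric adjacency matrix
X = rng.normal(size=(n, d))         # node feature matrix
W_self = rng.normal(size=(d, d))
W_neigh = rng.normal(size=(d, d))

P = np.eye(n)[[2, 0, 3, 1]]         # a permutation matrix relabelling the nodes

# Equivariance: permuting the output equals running the layer on permuted inputs.
out_then_permute = P @ gnn_layer(A, X, W_self, W_neigh)
permute_then_out = gnn_layer(P @ A @ P.T, P @ X, W_self, W_neigh)
assert np.allclose(out_then_permute, permute_then_out)
```

The assertion holds exactly because the ReLU acts element-wise and a permutation matrix merely reorders rows, which is the symmetry argument the talk builds GNNs from.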
The talk will be geared towards a generic computer science audience, though some basic knowledge of machine learning with neural networks will be a useful prerequisite.
Bio: Petar Veličković is a Staff Research Scientist at Google DeepMind, an Affiliated Lecturer at the University of Cambridge, and an Associate of Clare Hall, Cambridge. Petar holds a PhD in Computer Science from the University of Cambridge (Trinity College), obtained under the supervision of Pietro Liò. His research concerns geometric deep learning—devising neural network architectures that respect the invariances and symmetries in data (a topic he has co-written a proto-book about). Petar's research has been used to substantially improve travel-time predictions in Google Maps, and to guide mathematicians' intuition towards new top-tier theorems and conjectures.