Deep Learning for NLP – Part 8

Graph Neural Networks

A growing body of evidence demonstrates that graph representation learning, especially graph neural networks (GNNs), has greatly facilitated computational tasks on graphs, both node-focused and graph-focused. The advances brought by GNNs have also broadened and deepened the adoption of graph representation learning in real-world applications. In classical application domains of graph representation learning, such as recommender systems and social network analysis, GNNs achieve state-of-the-art performance and push these fields into new frontiers. Meanwhile, new application domains for GNNs continue to emerge, such as combinatorial optimization, physics, and healthcare. These wide-ranging applications draw contributions and perspectives from disparate disciplines and make this research field truly interdisciplinary.

What you’ll learn

  • Deep Learning for Natural Language Processing.
  • Graph Neural Networks.
  • Graph convolutions.
  • Graph pooling.
  • Applications of GNNs for NLP.

Course Content

  • Introduction – 13 lectures • 2hr 34min.


Requirements

  • Basics of machine learning.
  • Basic understanding of convolution and pooling operations.


In this course, I will start with basic graph data representations and concepts such as node features, edge types, the adjacency matrix, and the Laplacian matrix. Next, we will survey the broad kinds of graph learning tasks and the two basic operations a GNN needs: filtering and pooling.

We will then discuss different graph filtering (i.e., neighborhood aggregation) methods in detail: graph convolutional networks (GCNs), graph attention networks (GATs), Confidence GCNs, Syntactic GCNs, and the general message passing neural network framework. After that, we will cover the three main families of graph pooling methods: topology-based pooling, global pooling, and hierarchical pooling, discussing popular methods within each. For topology-based pooling we will mainly talk about Normalized Cut and Graclus; for global pooling, Set2Set and SortPool; for hierarchical pooling, DiffPool, gPool, and SAGPool.

Next, we will talk about three unsupervised graph neural network architectures: GraphSAGE, graph auto-encoders, and Deep Graph Infomax. Lastly, we will look at applications of GNNs for NLP, including semantic role labeling, event detection, multiple event extraction, neural machine translation, document timestamping, and relation extraction.
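To make the two basic GNN operations above concrete, here is a minimal NumPy sketch of one graph filtering step (a GCN-style layer with symmetric normalization, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W)) followed by a global mean-pooling readout. The function names and the toy triangle graph are illustrative, not from the course materials.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN filtering step: normalize the adjacency (with self-loops),
    aggregate each node's neighborhood, then apply weights and ReLU.
    A: (n, n) adjacency, H: (n, d) node features, W: (d, d') weights."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)                   # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # aggregate, transform, ReLU

def global_mean_pool(H):
    """Global pooling (readout): average node embeddings into one graph vector."""
    return H.mean(axis=0)

# Toy graph: a triangle (3 nodes, all pairwise connected)
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
H = np.eye(3)                    # one-hot node features
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))      # random layer weights (illustrative)

H1 = gcn_layer(A, H, W)          # (3, 4) filtered node embeddings
g = global_mean_pool(H1)         # (4,) graph-level embedding
print(H1.shape, g.shape)
```

Because the triangle is vertex-symmetric and the features are one-hot, all three node embeddings come out identical after one layer; a hierarchical method like DiffPool would instead learn a soft clustering of nodes rather than a single global average.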
