Eran Rosenbluth: Analysis of the expressivity of Graph Neural Networks (GNNs) and similar deep learning architectures for graphs
Graph Neural Networks (GNNs) are a class of computation models for graph processing, also used for learning tasks on graphs. Such tasks may entail learning to classify graphs (or their nodes), or learning to regress unknown features of graphs (or their nodes). In any case, it is desirable that the computational model of choice be both isomorphism-invariant and inherently scalable to graphs of arbitrary size. GNNs combine a Message-Passing computation scheme with Aggregation-and-Neural-Network computation blocks. The computation blocks are identical for every vertex, which makes GNNs isomorphism-invariant and inherently scalable algorithms. The use of neural networks makes GNNs deep learning algorithms, potentially benefiting from the power of that paradigm.
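To make this computation scheme concrete, here is a minimal sketch of one such block: a single message-passing round with sum aggregation followed by a shared one-layer update network. The choice of sum aggregation, the ReLU update, and all names and dimensions are illustrative assumptions, not a prescribed architecture.

    import numpy as np

    rng = np.random.default_rng(0)
    W_self = rng.standard_normal((4, 4))  # weights of the shared update network
    W_agg = rng.standard_normal((4, 4))

    def mp_layer(adj: np.ndarray, x: np.ndarray) -> np.ndarray:
        """One message-passing round: every vertex sums its neighbours'
        features (aggregation) and is updated by the same small network."""
        messages = adj @ x                                   # sum-aggregate neighbour features
        return np.maximum(x @ W_self + messages @ W_agg, 0)  # shared ReLU update

    # The same weights serve every vertex and any graph size, which is what
    # makes the computation isomorphism-invariant and inherently scalable.
    triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
    print(mp_layer(triangle, np.ones((3, 4))))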
A key characteristic of any computational model used in learning is its expressivity. While there are significant results concerning the expressivity of GNNs, interesting and important questions remain open. In my research I try to answer some of these questions, e.g. how a certain property of a GNN's architecture affects its expressivity, or whether a specific configuration subsumes others. While my work is mainly theoretical, formulating and proving theorems, it is complemented by experiments that test how the theoretical results play out in practice.
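As a small illustration of the kind of expressivity limit at stake (a sketch under assumptions, not the experiments mentioned above): a message-passing GNN with sum aggregation, as sketched earlier, assigns identical embeddings to two non-isomorphic graphs, two disjoint triangles versus a single 6-cycle, because both graphs are 2-regular and every vertex therefore receives the same multiset of messages in every round. This mirrors the well-known bound on such GNNs by the 1-dimensional Weisfeiler-Leman test.

    import numpy as np

    rng = np.random.default_rng(1)
    W_self = rng.standard_normal((4, 4))
    W_agg = rng.standard_normal((4, 4))

    def embed(adj: np.ndarray, rounds: int = 3) -> np.ndarray:
        x = np.ones((adj.shape[0], 4))  # identical initial node features
        for _ in range(rounds):         # repeated message-passing rounds
            x = np.maximum(x @ W_self + (adj @ x) @ W_agg, 0)
        return x.sum(axis=0)            # sum readout over all vertices

    def cycle(n: int) -> np.ndarray:
        a = np.zeros((n, n))
        for i in range(n):
            a[i, (i + 1) % n] = a[(i + 1) % n, i] = 1.0
        return a

    two_triangles = np.block([[cycle(3), np.zeros((3, 3))],
                              [np.zeros((3, 3)), cycle(3)]])
    # Non-isomorphic graphs, yet the embeddings coincide.
    print(np.allclose(embed(two_triangles), embed(cycle(6))))  # True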