The power of Graph Neural Networks (GNNs) is commonly measured in terms of their ability to separate graphs: a GNN is more powerful when it can recognize more graphs as being different. Studying this separation power helps us understand the limitations of GNNs for graph learning tasks, but there are few general techniques for doing so, and most existing results are tailored to specific GNN architectures.
In this talk I will review our recent work on the separation power of GNNs. Our approach is to view GNNs as expressions in procedural languages that describe the computations in the layers of a GNN, and then to analyze these expressions to obtain bounds on the GNN's separation power. As we will see, this technique gives us an elegant way to obtain bounds on the separation power of GNNs in terms of the Weisfeiler-Leman (WL) tests, which have become the yardstick for measuring the separation power of GNNs. If time permits, I will also review some by-products of our characterization, including connections to logic and approximation results for GNNs.
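Since the WL tests serve as the yardstick here, the following minimal Python sketch of the 1-dimensional WL test (color refinement) may help fix ideas. It is standard background rather than code from the work being presented, and all function and variable names are illustrative; graphs are assumed to be given as adjacency lists.

    from collections import Counter

    def wl_histogram(adj):
        """Run 1-WL color refinement; adj maps each node to its neighbor list.
        Colors are kept as nested tuples so they are comparable across graphs."""
        colors = {v: () for v in adj}      # uniform initial coloring
        for _ in range(len(adj)):          # 1-WL stabilizes within |V| rounds
            # A node's new color pairs its old color with the sorted
            # multiset of its neighbors' colors.
            new_colors = {
                v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj
            }
            # Stop once the partition is stable (no class was split this round).
            if len(set(new_colors.values())) == len(set(colors.values())):
                break
            colors = new_colors
        return Counter(colors.values())    # multiset of stable colors

    def wl_separates(adj1, adj2):
        """1-WL separates two graphs iff their stable color histograms differ.
        If the histograms agree, the graphs may still be non-isomorphic."""
        return wl_histogram(adj1) != wl_histogram(adj2)

    # Example: a 6-cycle vs. two disjoint triangles. Both are 2-regular, so
    # 1-WL cannot separate them, a classic blind spot of the test.
    cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
    print(wl_separates(cycle6, triangles))  # False

A GNN whose layers aggregate neighbor features with a multiset function can refine node representations at most as fast as this procedure refines colors, which is why bounds on GNN separation power are naturally stated in terms of WL.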