Graph embedding is the problem of learning a mapping from each node in a graph to a low-dimensional latent representation. These representations can then be used as features for common graph tasks such as multi-label node classification, clustering, and link prediction. Most recent graph embedding methods are neural-network based and have proven both highly scalable and effective. However, they all rely on non-convex optimization solved with stochastic gradient descent, which can become stuck in poor local minima.
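As a concrete illustration of this downstream use, below is a minimal sketch of multi-label node classification on top of learned embeddings. It assumes the embeddings are already available as a node-by-dimension matrix; the data here is randomly generated placeholder data, not output of any particular embedding method.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Hypothetical inputs: X holds one learned embedding per node,
# Y is a binary indicator matrix of each node's labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))         # 1000 nodes, 64-dim embeddings
Y = rng.integers(0, 2, size=(1000, 5))  # 5 possible labels per node

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.5, random_state=0)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_tr, Y_tr)
# score() reports exact-match (subset) accuracy for multi-label targets.
print("subset accuracy:", clf.score(X_te, Y_te))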
In this talk, we will present a new method for learning low-dimensional embeddings that preserve higher-order structural features. Simplify and Embed achieves this by recursively compressing the input graph before embedding it, which helps avoid troublesome embedding configurations (i.e., poor local minima) encountered during non-convex optimization. Experiments on several real-world graphs show that Simplify and Embed is a general strategy for improving all of the popular neural algorithms for graph embedding.
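The abstract does not spell out the compression or embedding procedures, so the sketch below is only one plausible realization of the coarsen-then-embed idea: a simple edge-matching coarsening, plus a placeholder embed() function that any neural embedding method (e.g., DeepWalk or node2vec) could stand in for. The names coarsen, embed, and simplify_and_embed are illustrative, not the method's actual API.

import networkx as nx
import numpy as np

def coarsen(G):
    """One level of compression: greedily merge matched neighbor pairs
    into supernodes (an assumed edge-collapsing heuristic)."""
    matched, mapping, supernode = set(), {}, 0
    for u, v in G.edges():
        if u not in matched and v not in matched:
            mapping[u] = mapping[v] = supernode
            matched.update((u, v))
            supernode += 1
    for u in G.nodes():
        if u not in matched:
            mapping[u] = supernode
            supernode += 1
    H = nx.Graph()
    H.add_nodes_from(range(supernode))
    for u, v in G.edges():
        if mapping[u] != mapping[v]:
            H.add_edge(mapping[u], mapping[v])
    return H, mapping

def embed(G, dim, init=None):
    """Placeholder for a neural embedding method; here just small random
    vectors, optionally warm-started from a coarser level."""
    emb = {n: np.random.randn(dim) * 0.01 for n in G.nodes()}
    if init is not None:
        for n, vec in init.items():
            emb[n] = vec.copy()
    return emb

def simplify_and_embed(G, dim=64, levels=3):
    """Recursively coarsen, embed the coarsest graph first, then prolong
    each embedding down as initialization for the next finer level."""
    graphs, mappings = [G], []
    for _ in range(levels):
        H, m = coarsen(graphs[-1])
        if H.number_of_nodes() == graphs[-1].number_of_nodes():
            break
        graphs.append(H)
        mappings.append(m)
    emb = embed(graphs[-1], dim)  # embed the most compressed graph
    for G_fine, m in zip(reversed(graphs[:-1]), reversed(mappings)):
        init = {n: emb[m[n]] for n in G_fine.nodes()}  # copy supernode vectors down
        emb = embed(G_fine, dim, init=init)
    return emb

if __name__ == "__main__":
    G = nx.karate_club_graph()
    emb = simplify_and_embed(G, dim=16)
    print(len(emb), emb[0].shape)

The design point this sketch tries to capture is that the final, fine-grained optimization starts from an initialization derived from the compressed graph, so it already reflects coarse structure rather than beginning from an arbitrary configuration; whether the actual method prolongs embeddings exactly this way is an assumption here.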