As the pursuit of ever-higher accuracies on large perceptual
datasets slowly reaches its limits, deep learning researchers
are exploring increasingly ambitious goals for neural networks, with carefully tuned neural architectures
taking over new domains on an almost daily basis.
Despite their significant success, existing neural
architectures based on static computational graphs processing fixed tensor representations face fundamental limitations when presented with dynamically sized and
structured data. Examples include the sparse multi-relational
structures found everywhere from biological networks and
complex knowledge hyper-graphs to logical theories. Likewise, given the opaque nature of generalization and representation learning in neural networks, integrating
the vast amounts of existing symbolic abstractions present
in human knowledge remains highly problematic.
Here, we argue that these abilities, naturally present in
symbolic approaches based on the expressive power of relational logic, must be adopted for neural networks
to progress further into more complex domains.