The learnability of different neural architectures can be characterized directly by computable topological measures of data complexity. Neural homology theory gives a topological description of the expressive power of neural networks, with applications to architecture selection and computational topology.
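As a rough illustration of what a "computable topological measure of data complexity" can look like (this sketch is not taken from the paper; the function `betti0` and the threshold `eps` are illustrative assumptions), one can estimate the zeroth Betti number of a dataset by linking nearby points and counting connected components:

```python
import numpy as np

def betti0(points, eps):
    """Estimate the zeroth Betti number (number of connected components)
    of a point cloud: link points closer than eps, then count components
    with union-find. A crude stand-in for the homological complexity
    measures discussed in the work above."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Path-halving find.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < eps:
                union(i, j)

    return len({find(i) for i in range(n)})

# Two well-separated Gaussian clusters -> two components (Betti-0 = 2).
rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=0.0, scale=0.1, size=(20, 2))
cluster_b = rng.normal(loc=5.0, scale=0.1, size=(20, 2))
data = np.vstack([cluster_a, cluster_b])
print(betti0(data, eps=1.0))  # -> 2
```

Higher Betti numbers (holes, voids) require building an actual simplicial complex, e.g. with a persistent-homology library, but the same idea applies: the dataset's topology yields a number an architecture-selection procedure can consume.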
On Characterizing the Capacity of Neural Networks using Algebraic Topology. William H. Guss, Ruslan Salakhutdinov. Preprint; NIPS 2017 DLTP Workshop. [arXiv] [poster]
Towards Neural Homology Theory. William H. Guss, Ruslan Salakhutdinov. Talk, Microsoft Research, 2018. [slides]