Abstract

Deep learning models have seen significant successes in numerous applications, but their inner workings remain elusive. The purpose of this work is to quantify the learning process of deep neural networks through the lens of a novel topological invariant called magnitude. Magnitude is an isometry invariant; its properties are an active area of research as it encodes many known invariants of a metric space. We use magnitude to study the internal representations of neural networks and propose a new method for determining their generalisation capabilities. Moreover, we theoretically connect magnitude dimension and the generalisation error, and demonstrate experimentally that the proposed framework can be a good indicator of the latter.

Magnitude

Magnitude describes the effective number of points in a metric space. Magnitude finds uses in many fields, such as

  • Biodiversity
  • Clustering
  • Deep Learning
  • Representation learning

We want to focus on neural networks. There is evidence, in the form of theorems, that the lower the magnitude dimension, the better a neural network generalises. (Limbeck, Andreeva)
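The magnitude dimension can be defined as the growth rate of the magnitude function, dim_Mag(X) = lim_{t→∞} log Mag(tX) / log t. A crude numerical estimate (a sketch under that definition, not the estimator used by the cited authors) fits a slope to log Mag(tX) against log t over a range of scales:

```python
import numpy as np

def magnitude(X, t=1.0):
    """Magnitude at scale t, via the similarity matrix Z = exp(-t * D)."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    return float(np.linalg.solve(np.exp(-t * D), np.ones(len(X))).sum())

def magnitude_dimension(X, ts):
    """Rough magnitude-dimension estimate: the slope of log Mag(tX)
    against log t over the scale range ts (chosen by the user)."""
    mags = np.array([magnitude(X, t) for t in ts])
    slope, _intercept = np.polyfit(np.log(ts), np.log(mags), 1)
    return float(slope)
```

For points sampled densely along a line segment, the estimated slope comes out close to 1, matching the intuition that the underlying space is one-dimensional; the estimate depends on picking scales large enough to see the growth but small enough that the finite sample still looks like the continuum.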

Representation learning

High-dimensional representations are expensive to compute with. Can we reduce the dimensionality while preserving the most important structures? Can this be used to reduce a Cayley graph to a lower dimension?
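One hypothetical way to make the question concrete (an illustration, not a method from the source): project the data to a lower dimension with a random linear map and compare the magnitude before and after across several scales. If the projection roughly preserves distances, the two magnitude profiles should agree.

```python
import numpy as np

def magnitude(X, t=1.0):
    """Magnitude at scale t, via the similarity matrix Z = exp(-t * D)."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    return float(np.linalg.solve(np.exp(-t * D), np.ones(len(X))).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                 # toy data in R^50
P = rng.normal(size=(50, 10)) / np.sqrt(10.0)  # random projection to R^10
Y = X @ P                                      # reduced representation

# Compare the magnitude of the original and reduced spaces over a few scales.
for t in (0.25, 0.5, 1.0):
    print(f"t={t}: Mag(X)={magnitude(X, t):.1f}  Mag(Y)={magnitude(Y, t):.1f}")
```

For any finite Euclidean space of n points, the magnitude stays between 1 and n, so the printed values give a scale-dependent "effective point count" that can be tracked through the reduction.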

References

  • Tom Leinster, Entropy and Diversity: The Axiomatic Approach (book, 2022)