โ† back to index

Categorical Magnitude and Entropy

Stephanie Chen & Juan Pablo Vigneaux · 2023 · arXiv:2303.00879

Prereqs: Leinster 2021 (magnitude, Hill numbers). Baez, Fritz, Leinster 2011 (entropy characterization) also helps.

Shannon entropy and magnitude are the same invariant in disguise. Under a uniform distribution, log(magnitude) = entropy. The paper unifies them through a single categorical construction: the Euler characteristic of an enriched category.

[Diagram: a uniform distribution with H = log 4 and a four-point metric space with magnitude 4, both arising as an Euler characteristic. log|A| = H(p): same functor, different enrichment.]

Entropy from a uniform distribution

Shannon entropy of a uniform distribution over n outcomes is log(n). The magnitude of n points approaches n as the points become well separated (and equals n in the discrete limit). So log(magnitude) = log(n) = entropy. This is the simplest case of the unification.

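A minimal Python sketch of the uniform-case identity, using the known closed form for the magnitude of a two-point metric space at distance d, namely 2/(1 + e^(-d)): as the points separate, log(magnitude) approaches log 2, the entropy of a fair coin. (Function names here are illustrative, not from the paper.)

```python
import math

def uniform_entropy(n):
    # Shannon entropy (natural log) of the uniform distribution on n outcomes
    return math.log(n)

def two_point_magnitude(d):
    # Closed form for the magnitude of a two-point space at distance d
    return 2 / (1 + math.exp(-d))

H = uniform_entropy(2)             # log 2
mag = two_point_magnitude(50.0)    # well separated: magnitude is close to 2
print(H, math.log(mag))            # the two numbers agree to high precision
```

At small distances the magnitude drops toward 1 (the two points behave like one), so the identity really is a well-separated (or discrete) limit.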

Beyond uniform: the weighted case

For non-uniform distributions, the connection goes through weighted magnitude. Given a metric space with a probability distribution (weights), the log of the weighted magnitude recovers the entropy of that distribution. The uniform case is the special case where all weights are equal.

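The paper's weighted magnitude is the general construction; a quick numerical way to see the non-uniform side is through exp(H), the order-1 Hill number (the "effective number of outcomes"), which equals n exactly when the distribution is uniform and shrinks as weight concentrates. A Python sketch (helper names are illustrative, not from the paper):

```python
import math

def entropy(p):
    # Shannon entropy with natural log; terms with p = 0 contribute nothing
    return -sum(x * math.log(x) for x in p if x > 0)

def effective_size(p):
    # exp(H): the order-1 Hill number, an "effective number of outcomes"
    return math.exp(entropy(p))

print(effective_size([0.25] * 4))            # uniform on 4 outcomes: exactly 4
print(effective_size([0.7, 0.1, 0.1, 0.1]))  # skewed: strictly between 1 and 4
```

The uniform case recovers log(magnitude) = log(n) = H; the skewed case is the weighted generalization in miniature.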

The Euler characteristic connection

Both entropy and magnitude arise as the Euler characteristic of an enriched category. A finite metric space is a category enriched over [0,∞). Its Euler characteristic is the magnitude. A finite probability space is a category enriched over [0,1]. Its Euler characteristic is exp(entropy). Same construction, different enrichment.

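For a finite metric space the Euler-characteristic recipe is concrete: form the similarity matrix Z_ij = e^(-d_ij), solve Z w = 1 for the weighting w, and sum the weights. A dependency-free Python sketch (the paper works at the level of enriched categories; this is just the metric-space instance, with a tiny Gauss-Jordan solver):

```python
import math

def magnitude(dist):
    # Magnitude of a finite metric space: sum of the weighting w solving Z w = 1,
    # where Z[i][j] = exp(-d(i, j)).  Gauss-Jordan with partial pivoting.
    n = len(dist)
    Z = [[math.exp(-dist[i][j]) for j in range(n)] + [1.0] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(Z[r][col]))
        Z[col], Z[piv] = Z[piv], Z[col]
        for r in range(n):
            if r != col:
                f = Z[r][col] / Z[col][col]
                Z[r] = [a - f * b for a, b in zip(Z[r], Z[col])]
    return sum(Z[i][n] / Z[i][i] for i in range(n))

# Three points at mutual distance t: as t grows, magnitude tends to 3,
# so log(magnitude) tends to log 3, the entropy of the uniform distribution.
for t in (0.1, 1.0, 10.0):
    d = [[0 if i == j else t for j in range(3)] for i in range(3)]
    print(t, magnitude(d))
```

At small t the three points merge and the magnitude falls toward 1; the "count" only emerges at large scale, which is why magnitude is read as an effective number of points.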

Why this matters

The unification means entropy and magnitude aren't separate concepts that happen to look similar. They're the same functor evaluated on different enriched categories. Baez, Fritz, and Leinster characterized entropy as the unique information loss measure. Leinster characterized magnitude as the unique notion of size for metric spaces. Chen and Vigneaux show these uniqueness results are two faces of one theorem.


Notation reference

Paper               Scheme                             Meaning
|A|                 magnitude (Euler characteristic)   Size of enriched category A
H(p)                (shannon-entropy p)                Shannon entropy
log|A| = H          (log magnitude) = H                The unification (uniform case)
Z_ij = e^(-d_ij)    (similarity d)                     Similarity matrix from distances
V-Cat               category enriched over V           Enriched category (V = metric or probability)

Translation notes

The examples demonstrate the uniform-case identity (log magnitude = entropy) and the intuition behind weighted magnitude. Chen and Vigneaux's actual construction works through the Euler characteristic of categories enriched over a quantale, unifying the metric-space and probability-space cases via a change-of-base functor. For example: the "same construction, two enrichments" example on this page computes exp(H) and compares it to a count. In the paper, the comparison is between two Euler characteristics: one for a [0,∞)-enriched category (metric space) and one for a [0,1]-enriched category (probability space), connected by a lax monoidal functor between the enrichment bases. The numerical agreement is the same; the functorial explanation is not.

Uniform case: Exact. Non-uniform and magnitude examples: Simplified.

Ready for the real thing? Read the paper (arXiv:2303.00879). Start at §2 for the enriched category setup, §4 for the magnitude-entropy theorem.

Framework connection: The magnitude-entropy unification gives the Natural Framework's information budget a single invariant; compression ratio is the same functor applied to different enriched categories. (The Natural Framework)