Categorical Magnitude and Entropy
Stephanie Chen & Juan Pablo Vigneaux · 2023 · arXiv:2303.00879
Prereqs: Leinster 2021 (magnitude, Hill numbers). Baez, Fritz, Leinster 2011 (entropy characterization) helps.
Shannon entropy and magnitude are the same invariant in disguise. Under a uniform distribution, log(magnitude) = entropy. The paper unifies them through a single categorical construction: the Euler characteristic of an enriched category.
Entropy from a uniform distribution
Shannon entropy of a uniform distribution over n outcomes is log(n). Magnitude of n points at mutual distance t approaches n as t grows (exactly n in the limit of infinite separation). So log(magnitude) = log(n) = entropy. This is the simplest case of the unification.
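A minimal numeric check of the uniform-case identity, as a Python sketch. It uses the standard magnitude formula for a finite metric space (sum of all entries of the inverse similarity matrix Z, with Z_ij = e^(-d_ij)); the variable names are illustrative, not from the paper.

```python
import math
import numpy as np

def magnitude(D):
    """Magnitude of a finite metric space: sum of all entries of Z^-1,
    where Z_ij = exp(-d_ij) is the similarity matrix."""
    Z = np.exp(-np.asarray(D, dtype=float))
    return float(np.linalg.inv(Z).sum())

def shannon_entropy(p):
    """Shannon entropy in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

n, t = 4, 50.0                       # n points, mutual distance t (large)
D = t * (1 - np.eye(n))              # nearly "discrete" metric space
H = shannon_entropy([1 / n] * n)     # entropy of the uniform distribution: log 4
print(math.log(magnitude(D)), H)     # both approx 1.3863
```

At t = 50 the off-diagonal similarities are about e^(-50), so the magnitude is already n to within rounding; the identity log(magnitude) = log(n) = H holds exactly only in the limit.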
Beyond uniform: the weighted case
For non-uniform distributions, the connection goes through weighted magnitude. Given a metric space with a probability distribution (weights), the log of the weighted magnitude recovers the entropy of that distribution. The uniform case is the special case where all weights are equal.
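The non-uniform case can be sanity-checked through the diversity interpretation (Hill number of order 1), which is standard but not the paper's construction: exp(H(p)) is the "effective number of outcomes", equal to n exactly when p is uniform and strictly smaller otherwise. A Python sketch:

```python
import math

def shannon_entropy(p):
    """Shannon entropy in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]

eff_uniform = math.exp(shannon_entropy(uniform))  # 4.0: every outcome counts fully
eff_skewed = math.exp(shannon_entropy(skewed))    # approx 2.56: fewer effective outcomes
print(eff_uniform, eff_skewed)
```

This is the quantity that weighted magnitude recovers: skewing the weights shrinks the effective size, exactly as entropy shrinks.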
The Euler characteristic connection
Both entropy and magnitude arise as the Euler characteristic of an enriched category. A finite metric space is a category enriched over [0,∞). Its Euler characteristic is the magnitude. A finite probability space is a category enriched over [0,1]. Its Euler characteristic is exp(entropy). Same construction, different enrichment.
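In the metric case, the Euler characteristic can be computed through weightings (Leinster's standard definition, not specific to this paper): a weighting is a vector w with Zw = (1, …, 1) componentwise, and the magnitude is the sum of the weights. A Python sketch for three points at mutual distance 1:

```python
import numpy as np

d = 1.0                              # three points, pairwise distance 1
Z = np.full((3, 3), np.exp(-d))      # similarity matrix Z_ij = exp(-d_ij)
np.fill_diagonal(Z, 1.0)

# A weighting w solves Z w = (1, 1, 1); the Euler characteristic
# (= magnitude) of the enriched category is the sum of the weights.
w = np.linalg.solve(Z, np.ones(3))
print(w.sum())                       # 3 / (1 + 2 e^-1), approx 1.728
```

The magnitude 1.728 sits between 1 and 3: the three points are close enough to partially "overlap", so the effective size is less than the point count.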
Why this matters
The unification means entropy and magnitude aren't separate concepts that happen to look similar. They're the same functor evaluated on different enriched categories. Baez, Fritz, and Leinster characterized entropy as the unique information loss measure. Leinster characterized magnitude as the unique notion of size for metric spaces. Chen and Vigneaux show these uniqueness results are two faces of one theorem.
Notation reference
| Paper | Scheme | Meaning |
|---|---|---|
| \|A\| | ; magnitude (Euler characteristic) | Size of enriched category A |
| H(p) | (shannon-entropy p) | Shannon entropy |
| log\|A\| = H | (log magnitude) = H | The unification (uniform case) |
| Z_ij = e^(-d_ij) | (similarity d) | Similarity matrix from distances |
| V-Cat | ; category enriched over V | Enriched category (V = metric or probability) |
Neighbors
Other paper pages
- Leinster 2021: magnitude and diversity (the metric side)
- Baez, Fritz, Leinster 2011: entropy characterization (the information side)
- Sato 2023: divergences on monads (related information measures)
Foundations (Wikipedia)
Translation notes
The examples demonstrate the uniform-case identity (log magnitude = entropy) and the intuition behind weighted magnitude. Chen and Vigneaux's actual construction works through the Euler characteristic of categories enriched over a quantale, unifying the metric-space and probability-space cases via a change-of-base functor. For example: the "same construction, two enrichments" example on this page computes exp(H) and compares it to a count. In the paper, the comparison is between two Euler characteristics: one for a [0,โ)-enriched category (metric space) and one for a [0,1]-enriched category (probability space), connected by a lax monoidal functor between the enrichment bases. The numerical agreement is the same; the functorial explanation is not.
Uniform case: Exact. Non-uniform and magnitude examples: Simplified.
Read the paper. Start at §2 for the enriched category setup, §4 for the magnitude-entropy theorem.
Framework connection: The magnitude-entropy unification gives the Natural Framework's information budget a single invariant: compression ratio is the same functor applied to different enriched categories. (The Natural Framework)