Introduction

There is an amazing paper I got to read last week by [Rieck19] on the subject of persistent homology. The idea is that, by borrowing ideas from topological data analysis, we can construct a per-layer complexity metric. This metric can then shed light on how well our learned representation generalizes. Namely, if …
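To make the idea concrete, here is a minimal sketch of such a per-layer metric in the spirit of neural persistence: treat a layer's weight matrix as a bipartite graph, filter its edges by normalized absolute weight, and summarize the zero-dimensional persistent homology (connected components) of that filtration. The function name `neural_persistence` and the union-find implementation are my own illustration, not code from the paper.

```python
import numpy as np

def neural_persistence(W, p=2):
    """Sketch of a per-layer complexity metric: 0-dimensional persistent
    homology of the layer's bipartite weight graph, with edges filtered
    by normalized absolute weight (strongest connections enter first)."""
    n_in, n_out = W.shape
    w = np.abs(W) / np.abs(W).max()  # normalize weights into [0, 1]
    # Edges of the bipartite graph: (weight, input unit, output unit).
    edges = [(w[i, j], i, n_in + j) for i in range(n_in) for j in range(n_out)]
    edges.sort(reverse=True)  # descending-weight filtration

    # Union-find over the layer's vertices to track connected components.
    parent = list(range(n_in + n_out))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    persistences = []
    for weight, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                   # this edge merges two components:
            parent[ru] = rv            # one component "dies" here
            persistences.append(1.0 - float(weight))  # all born at weight 1
    # Summarize the persistence diagram with a p-norm.
    return sum(d ** p for d in persistences) ** (1.0 / p)
```

For example, `neural_persistence(np.eye(2))` returns `1.0`: the two strong diagonal connections merge pairs immediately (zero persistence), and the final merge only happens at weight 0, giving a single long-lived component.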

Continue reading “On Neural Persistence, Improved Language Models, and Narrative Complexity”