Blog

On the structure between narrators and readers.

Foreword: I’ve been meaning to write this blog post for a long time. I had the idea sometime in September 2020 and started doing a bit of a lit review. Sometime around Christmas that year, I was finally going to sit down and write the blog post, but I quickly realized that even after all the…

Joint Representations of Connectionism vs. Symbolism via Attractor Networks and Self Attention

Introduction: In modern cognitive computing, as it pertains to natural language processing, symbolism and connectionism each offer distinct benefits. Symbolism is the view that most cognitive processes can be computed as the manipulation of symbols. In connectionism, by contrast, computation is done by a metaphorical “black box” in which the…
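
As a bit of context for where those two views can meet (a general result, not a claim about how the post itself proceeds): one update step of a modern continuous Hopfield network, which is an attractor network, has the same form as a single-query softmax attention lookup over the stored patterns. A minimal NumPy sketch of that update, with made-up toy data:

```python
import numpy as np

def hopfield_attention_update(query, patterns, beta=1.0):
    """One update step of a continuous ("modern") Hopfield attractor network.

    query    : (d,)   current state vector.
    patterns : (n, d) stored patterns, one per row.

    The update patterns.T @ softmax(beta * patterns @ query) has the same
    form as single-query softmax attention with keys = values = patterns,
    which is one concrete bridge between attractor networks and self-attention.
    """
    scores = beta * (patterns @ query)        # similarity of the state to each pattern
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return patterns.T @ weights               # state is pulled toward the nearest attractor

# Toy usage: a noisy copy of a stored pattern is recalled after one update.
rng = np.random.default_rng(0)
stored = rng.normal(size=(4, 16))
noisy = stored[2] + 0.3 * rng.normal(size=16)
recalled = hopfield_attention_update(noisy, stored, beta=4.0)
```

Iterating the update drives the state toward one of the stored attractors, which is the associative-memory reading of attention.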

On Neural Persistence, Improved Language Models, and Narrative Complexity

Introduction: There is an amazing paper I got to read last week by [Rieck19] on the subject of persistent homology. The idea is that by borrowing tools from topological data analysis, we can construct a per-layer complexity metric. This metric can then shed light on how well our learned representation generalizes. Namely, if we…
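
To make that teaser a little more concrete, here is a rough sketch in the spirit of the neural persistence measure from [Rieck19], not its exact implementation: for a fully connected layer, normalise the absolute weights, build the bipartite weight graph, and take the p-norm of its zero-dimensional persistence values, which reduces to a maximum-spanning-forest computation:

```python
import numpy as np

def neural_persistence(W, p=2):
    """Per-layer complexity score in the spirit of neural persistence [Rieck19].

    This is a rough sketch, not the paper's reference implementation.
    W : 2D weight matrix of a fully connected layer.
    Returns the p-norm of the zero-dimensional persistence values of the
    bipartite weight graph, with |weights| normalised to [0, 1].
    """
    W = np.abs(np.asarray(W, dtype=float))
    W = W / W.max()                            # filtration parameter runs over [0, 1]
    n_in, n_out = W.shape

    # Union-find over the bipartite vertex set (inputs first, then outputs).
    parent = list(range(n_in + n_out))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    # Add edges in decreasing weight order; each edge that merges two
    # components lies in the maximum spanning forest and contributes a
    # zero-dimensional persistence value of 1 - weight.
    edges = sorted(
        ((W[i, j], i, n_in + j) for i in range(n_in) for j in range(n_out)),
        reverse=True,
    )
    persistence = []
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            persistence.append(1.0 - w)

    return float(np.linalg.norm(persistence, ord=p))

# Toy usage: compare a random layer to a near-constant one.
rng = np.random.default_rng(0)
print(neural_persistence(rng.normal(size=(64, 32))))
print(neural_persistence(np.ones((64, 32)) + 1e-3 * rng.normal(size=(64, 32))))
```

Whether and how such per-layer scores track generalization is exactly the kind of question the post goes on to discuss.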