Top-Down Interpretability Through Eigenspectra


About

Random matrix theory (RMT) offers a host of tools for making sense of neural networks. In this paper, we look at the heavy-tailed random matrix theory developed by Martin and Mahoney (2021). From the spectrum of eigenvalues, it is possible to derive generalization metrics that are independent of the data, and to decompose the training process into five distinct phases. Additionally, the theory predicts and tests a key form of learning bias known as "self-regularization." In this paper, we extend the results from computer vision to language models, finding many similarities and a few potentially meaningful differences. This provides a glimpse of what more "top-down" interpretability approaches might accomplish: from a deeper understanding of the training process and path-dependence to inductive bias and generalization.
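The core computation behind these data-independent metrics can be sketched in a few lines: take a layer's weight matrix, form its correlation matrix, compute the empirical spectral density (ESD) of eigenvalues, and estimate the power-law exponent of its tail. The sketch below is illustrative only, not the paper's actual pipeline: it uses a simple Hill-style maximum-likelihood estimator rather than the full fitting procedure of Martin and Mahoney, and the function names (`esd`, `tail_alpha`) and the top-10% tail cutoff are assumptions for the example.

```python
import numpy as np

def esd(weight_matrix):
    """Empirical spectral density: eigenvalues of X = W^T W / n.

    (Correlation-matrix convention; normalization choices vary in the literature.)
    """
    n, _ = weight_matrix.shape
    X = weight_matrix.T @ weight_matrix / n
    return np.linalg.eigvalsh(X)

def tail_alpha(eigenvalues, k=None):
    """Hill-style MLE of a power-law exponent for the ESD's upper tail.

    Fits p(lam) ~ lam^{-alpha} to the top-k eigenvalues. In heavy-tailed RMT,
    a heavier tail (smaller alpha) is associated with stronger implicit
    self-regularization. The top-10% cutoff here is an assumed heuristic.
    """
    lam = np.sort(eigenvalues)[::-1]
    if k is None:
        k = max(2, len(lam) // 10)
    lam_min = lam[k - 1]
    return 1.0 + k / np.sum(np.log(lam[:k] / lam_min))

# Compare a Gaussian (Marchenko-Pastur-like) layer with a heavy-tailed one.
rng = np.random.default_rng(0)
W_gauss = rng.normal(size=(1000, 300))
W_heavy = rng.standard_t(df=2.5, size=(1000, 300))  # heavy-tailed entries
print(tail_alpha(esd(W_gauss)), tail_alpha(esd(W_heavy)))
```

Note that no training data enters this computation: everything is read off the trained weights, which is what makes the resulting metrics data-independent.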

Author: jhoogland

Last Modified: Nov 14, 2022

Available on itch.io