Deep learning explained

A collection of papers that try to explain the mysteries of deep learning through theory and empirical evidence. See also the curated list of deep learning theory papers maintained by Prof. Boris Hanin at Princeton.

Theory-oriented explanations

Differential equation view

Interpolation-Extrapolation tradeoffs

Inductive Bias

Deep PAC and PAC-Bayes

Information-theoretic

Theory of training

SGD, loss landscape, learning dynamics, stochasticity, SGD for feature learning, curriculum learning, etc.

Neural Tangent Kernel

Understanding training tricks

Implicit regularization

Theory of representation learning

Self-supervised learning

Contrastive learning

Explaining representational power

Neural collapse


Empirical observations and explanations

Double descent

Mechanistic interpretability of DL

Generalization metrics

Flatness

Decision boundary

Data-centric understanding

Spurious correlation

See here for a detailed discussion of spurious correlation.

Lottery ticket hypothesis

Memorization
