Implementing something from first principles is a good way to make sure you understand at a deep level how that thing works, and trying to build it from scratch will quickly make you aware of what parts you still don't understand.
I started working with ML, and more specifically DL, in 2016; my personal interests are NLP and Reinforcement Learning.
From 2016 to 2019 I worked on everything from Seq2Seq models to ConvNets focused on text prediction tasks. I was particularly interested in the attention mechanism.
I wrote a few papers as part of independent studies and a stint as a research assistant in 2018. Prior to that I had been working full-time as a SWE while taking a break from college. From the end of 2018 to the start of 2020 I kept up ML as a hobby. From 2020 to 2021 I was working on my startup for better code search and documentation, which ultimately didn't go anywhere. From September 2021 to June 2022 I was burnt out and just doing my day job, avoiding computers outside of that.
In 2020 I had used BERT and various flavors of it like RoBERTa for semantic search, but I did not take the time to understand the architecture, and more generally I started getting rusty. That continued through my period of burnout.
Now that I'm no longer burnt out and am working in a job that I enjoy, I've been taking the time to get back into ML-related projects, and as part of that I'm doing a couple of different refresher courses.
As of this writing I've just finished Lesson 5 of "Practical Deep Learning for Coders" by FastAI, and I wanted to implement a full NN from scratch, from memory, to test my fundamentals. This is the result of that.