
Bug in build_makemore_mlp.ipynb on colab #23

Open
MithrilMan opened this issue Dec 15, 2024 · 0 comments

Comments

@MithrilMan

First of all, thanks from a 45-year-old full-stack dev who never stops learning and really enjoyed your videos!!

I know I'm being picky, but as a dev I feel the need to point out a small bug in the build_makemore_mlp.ipynb Colab notebook.

You moved the

lri = []
lossi = []
stepi = []

initialization up into its own cell so that the history of your traces isn't lost. Good! But...
In the next cell, stepi still appends the raw i from the for loop, so every run restarts from step 0. This means that if you run the training cell multiple times, the chart overwrites itself over and over. It's very visible if you run, for example, just 20 iterations (instead of 200000) multiple times.

The solution is very simple: just add

last_step = stepi[-1] + 1 if stepi else 0

before the for loop (the + 1 avoids duplicating the last recorded step index), and change stepi.append(i) to stepi.append(i + last_step).

Here is the full revised cell, to ease the update :)

last_step = stepi[-1] + 1 if stepi else 0
for i in range(20):
  
  # minibatch construct
  ix = torch.randint(0, Xtr.shape[0], (32,))
  
  # forward pass
  emb = C[Xtr[ix]] # (32, 3, 10)
  h = torch.tanh(emb.view(-1, 30) @ W1 + b1) # (32, 100)
  logits = h @ W2 + b2 # (32, 27)
  loss = F.cross_entropy(logits, Ytr[ix])
  #print(loss.item())
  
  # backward pass
  for p in parameters:
    p.grad = None
  loss.backward()
  
  # update
  #lr = lrs[i]
  lr = 0.1 if i < 100000 else 0.01
  for p in parameters:
    p.data += -lr * p.grad

  # track stats
  #lri.append(lre[i])
  stepi.append(i + last_step)
  lossi.append(loss.log10().item())

#print(loss.item())
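To illustrate why the offset matters, here is a minimal standalone sketch (the `run_session` helper is hypothetical, just standing in for re-running the training cell; no PyTorch needed). With the offset, step numbers keep increasing across reruns instead of restarting at 0:

```python
# Toy illustration of the step-offset fix. run_session() stands in for
# re-executing the notebook's training cell; lossi gets a placeholder
# value instead of loss.log10().item().
stepi = []
lossi = []

def run_session(n_steps):
    # Continue numbering where the previous run left off,
    # instead of restarting at step 0.
    last_step = stepi[-1] + 1 if stepi else 0
    for i in range(n_steps):
        stepi.append(i + last_step)
        lossi.append(0.0)  # placeholder for the logged loss

run_session(5)
run_session(5)
print(stepi)  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Without the offset, the second call would append 0..4 again, and a plot of lossi against stepi would draw the new curve on top of the old one.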

I have to say I'm not used to Colab; I don't know if I can open a pull request against it, and I didn't find the notebook in this repo, so I couldn't fix it myself. If there is an easier way, let me know, in case I find something else to fix.

Thank you again for your amazing content!
