Welcome to the npnn wiki!
Find the gradient of $\|\operatorname{softmax}(Ax + b)\|_2$ with respect to $A$ and $b$:
```python
from npnn import Tensor, np  # np is numpy or cupy, depending on the backend
from npnn.functional import Inner, Add, Norm, Softmax

rand = np.random.random

# npnn api
A = Tensor(rand((1, 5, 5)), requires_grad=True)
b = Tensor(rand((1, 5, 1)), requires_grad=True)
x = Tensor(rand((1, 5, 1)))
inner, add, norm, act_npnn = Inner(), Add(), Norm(), Softmax()
y = norm(act_npnn(add(inner(A, x), b)))  # y = ||softmax(Ax + b)||_2
y.backward()  # populate A.grad and b.grad
A_grad_npnn = A.grad
b_grad_npnn = b.grad
```
```python
import torch

# torch api: rebuild the same computation in PyTorch for comparison
if np.__name__ == "cupy":
    # npnn arrays live on the GPU; copy them to host memory first
    A = torch.from_numpy(A.data.get())
    b = torch.from_numpy(b.data.get())
    x = torch.from_numpy(x.data.get())
elif np.__name__ == "numpy":
    A = torch.from_numpy(A.data)
    b = torch.from_numpy(b.data)
    x = torch.from_numpy(x.data)
A.requires_grad = True
b.requires_grad = True
act_torch = torch.nn.Softmax(dim=1)
y = torch.norm(act_torch(A @ x + b), p=2, dim=(1, 2)).sum()  # sum over batch
y.backward()
A_grad_torch = A.grad.numpy()
b_grad_torch = b.grad.numpy()

# check float almost equal
print('check A_grad all close:', np.allclose(A_grad_npnn, A_grad_torch))
print('check b_grad all close:', np.allclose(b_grad_npnn, b_grad_torch))
```
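PyTorch is only used here as a reference; a central finite difference gives an independent sanity check with no extra dependency. Below is a minimal sketch, assuming the snippet above has just run: the names `A_fd`, `b_fd`, and `loss_at` are hypothetical, and since `A`, `b`, `x` were rebound to torch tensors above, npnn `Tensor`s are rebuilt first.

```python
# Hypothetical finite-difference spot check (not part of npnn): nudge one
# entry of b and compare the central-difference slope with b_grad_npnn.
A_fd = Tensor(np.asarray(A.detach().numpy()))  # back to the npnn backend
x_fd = Tensor(np.asarray(x.detach().numpy()))
b_fd = np.asarray(b.detach().numpy())

def loss_at(b_value):
    """Recompute the scalar loss for a given value of b."""
    y_fd = norm(act_npnn(add(inner(A_fd, x_fd), Tensor(b_value))))
    return float(y_fd.data.sum())  # assumes y.data is a (summable) array

eps = 1e-6
b_plus, b_minus = b_fd.copy(), b_fd.copy()
b_plus[0, 0, 0] += eps
b_minus[0, 0, 0] -= eps
slope = (loss_at(b_plus) - loss_at(b_minus)) / (2 * eps)
print('finite-difference check:', np.allclose(slope, float(b_grad_npnn[0, 0, 0])))
```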
`npnn` has five parts, which compose as sketched after this list:

- `base.py`: provides some base classes, e.g. `Module`.
- `autograd.py`: provides the `Tensor` class with the autograd algorithm.
- `functional.py`: provides operations and their gradients for tensors, e.g. `Inner`.
- `nn.py`: provides neural network building blocks, e.g. `Sequential`.
- `optim.py`: provides gradient descent optimizers, e.g. `SGD`.
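Even before reaching `nn.py` and `optim.py`, the `Tensor` machinery from `autograd.py` and the ops in `functional.py` are enough for a hand-rolled gradient-descent step. The sketch below uses only the API shown earlier; the in-place update on `.data` is an assumption about how parameters are stored, and `optim.SGD` presumably automates exactly this kind of update.

```python
from npnn import Tensor, np
from npnn.functional import Inner, Add, Norm, Softmax

inner, add, norm, act = Inner(), Add(), Norm(), Softmax()
A = Tensor(np.random.random((1, 5, 5)), requires_grad=True)
b = Tensor(np.random.random((1, 5, 1)), requires_grad=True)
x = Tensor(np.random.random((1, 5, 1)))

lr = 0.1
y = norm(act(add(inner(A, x), b)))  # forward pass builds the graph
y.backward()                        # autograd fills A.grad and b.grad
A.data -= lr * A.grad               # the plain SGD update that optim.SGD wraps
b.data -= lr * b.grad               # (in-place .data update is an assumption)
```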
Gradients of second- (or higher-) order terms are not considered, so `npnn` will give a wrong answer here:
```python
from npnn import Tensor, np
from npnn.functional import Inner

x = Tensor(np.random.random((1, 3, 1)), requires_grad=True)
loss = Inner()(x.T, x)  # x^T x, a second-order term in x
loss.backward()
print(x.grad)  # wrong: the true gradient is 2x
```
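For reference, the analytic gradient of $x^\top x$ is $2x$, so a quick PyTorch check shows what the correct answer should look like:

```python
import numpy
import torch

x_ref = torch.from_numpy(numpy.random.random((1, 3, 1)))
x_ref.requires_grad = True
loss = (x_ref.transpose(1, 2) @ x_ref).sum()  # x^T x, summed over batch
loss.backward()
# d(x^T x)/dx = 2x
print(torch.allclose(x_ref.grad, 2 * x_ref.detach()))  # True
```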