Added bias vector and regularizers #44
Conversation
keras_lmu/tests/test_layers.py (Outdated)

    if bias:
        assert np.isclose(diff, 0.22)
    else:
        assert np.isclose(diff, 0.2)
Where do these target numbers (e.g. 0.2) come from?
Figured it out. They're based on the default L1 regularization amount and on the sizes and values of the respective kernels and bias term. I've redone the test to compute the expected values from those terms instead of hardcoding them.
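The computed-target approach described above might look something like this minimal sketch. The shapes, the 0.01 rate, and the helper name are all illustrative assumptions, not the actual test code:

```python
import numpy as np

# Hypothetical sketch: derive the expected L1 penalty from the weights
# themselves, rather than hardcoding magic numbers like 0.2 or 0.22.
L1_RATE = 0.01  # assumed default L1 regularization amount

rng = np.random.RandomState(0)
kernel = rng.uniform(-1, 1, size=(10, 4))  # illustrative kernel shape
bias = rng.uniform(-1, 1, size=(4,))       # illustrative bias shape


def l1_penalty(weights, rate=L1_RATE):
    """L1 penalty in the usual Keras sense: rate * sum(|w|)."""
    return rate * np.sum(np.abs(weights))


expected = l1_penalty(kernel)
expected_with_bias = l1_penalty(kernel) + l1_penalty(bias)

# With the bias enabled, the extra term makes the penalty strictly larger.
assert expected_with_bias > expected
```

The point is only that the test target tracks the kernel/bias sizes and the regularization rate, so changing a default does not silently invalidate the test.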
I've made a few modifications. The main one is that the bias is no longer a subsidiary of the kernel, because a user may want a bias for the recurrent kernel even when the encoding kernel is turned off. I've also added a separate initializer and regularizer for the bias, since these are typically treated separately (e.g. in
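A minimal sketch of the design change described above, with the bias decoupled from the kernel and given its own initializer and regularizer. The class and parameter names here are hypothetical, not KerasLMU's actual API:

```python
import numpy as np


class RecurrentCellSketch:
    """Illustrative only: the bias is handled independently of the kernels,
    with its own initializer and regularizer (all names are hypothetical)."""

    def __init__(self, units, use_kernel=True, use_bias=False,
                 bias_initializer=np.zeros, bias_regularizer=None):
        self.units = units
        self.use_kernel = use_kernel
        # use_bias is valid even when use_kernel is False, which is the
        # case the decoupling above is meant to support.
        self.use_bias = use_bias
        self.bias = bias_initializer(units) if use_bias else None
        self.bias_regularizer = bias_regularizer

    def regularization_loss(self):
        """Bias contributes its own penalty, independent of any kernel."""
        if self.use_bias and self.bias_regularizer is not None:
            return self.bias_regularizer(self.bias)
        return 0.0


# A bias can be enabled even with the encoding kernel turned off:
cell = RecurrentCellSketch(
    4, use_kernel=False, use_bias=True,
    bias_regularizer=lambda b: 0.01 * np.sum(np.abs(b)),  # L1-style penalty
)
```

Treating the bias this way mirrors the common convention of zero-initializing biases and regularizing them (if at all) separately from the weight matrices.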
Changes are looking good to me!
Added options to include a bias vector and to apply regularizers to the kernels