Make general components #370
Conversation
- Remember to update NEWS.md.
- The failure in CI seems to be unrelated to this PR. I'll take a look into it.
It's quite strange that the tests now run much slower...
Thanks @findmyway for finding so many bugs!
@pilgrimygy could you confirm that the
Now I get it. Will fix it soon.
Well. Why
I must say, this is the weirdest thing I've seen in Julia so far. And I really don't know why...

(@v1.6) pkg> activate src/ReinforcementLearningExperiments/
(ReinforcementLearningExperiments) pkg> build
(ReinforcementLearningExperiments) pkg> test
┌ Warning: `InplaceableThunk(t::Thunk, add!)` is deprecated, use `InplaceableThunk(add!, t)` instead.
│ caller = ip:0x0
└ @ Core :-1

Then it is stuck, just like what we see in the CI. However, if we run it in the REPL:

(@v1.6) pkg> activate src/ReinforcementLearningExperiments/
julia> using ReinforcementLearningExperiments
julia> ex = E`JuliaRL_BasicDQN_CartPole`;
julia> run(ex)

things work as usual.
Put GaussianNetwork and DuelingNetwork into RLCore as general components, and make a general evaluate function.
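To illustrate the kind of component being generalized here, below is a hypothetical sketch of a dueling architecture, not the PR's actual RLCore code. The struct and field names are illustrative assumptions; it assumes Flux is the underlying framework. The dueling idea is to split a shared feature extractor into a value stream and an advantage stream, then recombine them as Q(s, a) = V(s) + A(s, a) - mean(A(s, ·)):

```julia
# Hypothetical sketch (illustrative names, not the RLCore definitions).
using Flux
using Statistics: mean

struct DuelingNetwork{B,V,A}
    base::B   # shared feature extractor
    val::V    # value stream, outputs V(s), size 1 per sample
    adv::A    # advantage stream, outputs A(s, a), one entry per action
end

Flux.@functor DuelingNetwork

# Combine the streams: Q(s, a) = V(s) + A(s, a) - mean over actions of A
function (m::DuelingNetwork)(s)
    x = m.base(s)
    v = m.val(x)
    a = m.adv(x)
    v .+ a .- mean(a; dims=1)
end
```

Subtracting the mean advantage keeps the value/advantage decomposition identifiable, which is what makes the component reusable as a drop-in Q-value approximator across algorithms.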