Possible error when predicting next action (class RolloutGenerator) #28
Comments
I'm seeing the same thing, and am running a comparison between the current model and a modified model that is passed the sampled latent `z` instead of `latent_mu`.
You are right, we are passing the mean instead of a sample. I don't think this will make a significant difference, and notably it's unclear whether it is going to improve the results, but I may be wrong. Typically, I don't think this could explain the lack of necessity for a model of the world: our observation is not that we obtain significantly worse results than (Ha and Schmidhuber, 2018), but that we already have very good performance without training the model. Anyway, @wildermuthn, thanks for running this experiment. Could you keep us updated on the results? Besides, if you have time for that, and code that is ready to be integrated, don't hesitate to issue a pull request. Otherwise I'll be fixing that soon.
@wildermuthn, in your experiments, are you using the carRacing environment? I modified this library slightly and I should (hopefully!) be ready to run a few experiments on the ViZDoom environment by the end of the week. I could program a few extra runs to test performance if you haven't done so already!
@AlexGonRo I am using the carRacing environment. I'm switching to a cloud server with multiple V100s, as my experiments were inconclusive running on my single 1080ti with only 8 CPUs. I did notice that […]. @ctallec, I've got some nvidia-docker and GCP code that is messy, but will see about putting up a PR for it.
@wildermuthn You may want to be extra cautious with the hyperparameters you use and the training duration: typically, you want to use the same hyperparameters for CMA as in the original paper, not the ones we provided. With the ones we provided, you will get lower final performance. The original paper used 16 rollouts per return evaluation and a population size of 64; that's what we used for reproducing, but this also means you'll need on the order of 32 jobs and 4 GPUs to get it to run in a reasonable amount of time.
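For concreteness, a rough sketch of what those settings look like with the `cma` package (`initial_params` and `evaluate` are placeholders here, not names from this repo):

```python
import cma  # pip install cma

# Population of 64 candidate parameter vectors; each candidate's return is
# averaged over 16 rollouts, matching the original paper's settings.
es = cma.CMAEvolutionStrategy(initial_params, 0.1, {'popsize': 64})
while not es.stop():
    candidates = es.ask()
    returns = [evaluate(c, n_rollouts=16) for c in candidates]
    es.tell(candidates, [-r for r in returns])  # cma minimizes, so negate returns
```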
I've been running into an issue for a while now, which may be related to this one (though my controller is a bit different). I trained the LSTM on a GPU, and used it in a different controller setup on a local CPU machine with PyTorch 0.4.1. I finally isolated the problem, which seems to be related to this issue in PyTorch. Basically, the torch.exp function didn't work properly for me, and it turned the entire hidden state of the LSTM into garbage (when running on the CPU). When I ran my controller on the GPU on the server (with the same PyTorch version, 0.4.1), it worked.
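In case it helps anyone, a quick sanity check along the lines of what I used to isolate it (my own check, not from the linked issue):

```python
import torch

x = torch.randn(256)
cpu_exp = torch.exp(x)
if torch.cuda.is_available():
    gpu_exp = torch.exp(x.cuda()).cpu()
    # On an affected build, the CPU result diverges wildly from the GPU one;
    # on a healthy install this difference is tiny.
    print(torch.max(torch.abs(cpu_exp - gpu_exp)))
```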
Despite being a bit late, I created a new pull request fixing the issue. I did some testing with my library and didn't find any significant boost in performance from these changes. However, as we discussed here, this should be the expected behaviour of the code. I must clarify that I did not perform any extensive testing of these changes on the current version of this library (ctallec's master branch). I did, however, make sure that the new lines of code do not cause any errors.
Hello!
Thanks, first of all, for the library. It has been of great help to me!
Now, I wanted to discuss a portion of the code that I believe to be erroneous. In class `RolloutGenerator`, function `get_action_and_transition()`, we have the following code:
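A sketch of the relevant lines (paraphrased from memory, so the exact code in the repo may differ slightly):

```python
def get_action_and_transition(self, obs, hidden):
    """Encode the observation, compute the action,
    and step the MDRNN to get the next hidden state."""
    _, latent_mu, _ = self.vae(obs)                 # VAE forward returns (recon, mu, logsigma)
    action = self.controller(latent_mu, hidden[0])  # controller sees the mean, not a sample
    _, _, _, _, _, next_hidden = self.mdrnn(action, latent_mu, hidden)
    return action.squeeze().cpu().numpy(), next_hidden
```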
I think the function's description is quite clear. The problem is, it feeds `latent_mu` to both the controller and the MDRNN network. I would argue that we should use the real latent vector instead (let's call it `z`).

First, the current implementation is not what they do in the original World Models paper, which describes the controller as a simple linear map acting directly on the sampled latent vector: a_t = W_c [z_t h_t] + b_c.
Second, we train the MDRNN network using the latent vector `z` (see file `trainmdrnn.py`, function `to_latent()`). Therefore, why do we use `latent_mu` now?

This problem affects both the training and testing of the controller. It might be the reason why you report that the memory module is of little to no help in your experiments (https://ctallec.github.io/world-models/). However, I must say I haven't done any proper testing yet.
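A minimal sketch of the change I have in mind, mirroring the sampling in `to_latent()` (I'm assuming the VAE forward also returns the log standard deviation, called `latent_logsigma` here):

```python
# Inside get_action_and_transition(): sample the latent with the
# reparameterization trick instead of using its mean.
_, latent_mu, latent_logsigma = self.vae(obs)
z = latent_mu + latent_logsigma.exp() * torch.randn_like(latent_mu)
action = self.controller(z, hidden[0])
_, _, _, _, _, next_hidden = self.mdrnn(action, z, hidden)
```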
I would like to hear your thoughts on this.