
Possible error when predicting next action (class RolloutGenerator) #28

Open
AlexGonRo opened this issue Jun 12, 2019 · 7 comments

@AlexGonRo

Hello!

Thanks, first of all, for the library. It has been of great help to me!

Now, I'd like to discuss a portion of the code that I believe is erroneous. In the RolloutGenerator class, the get_action_and_transition() function contains the following code:

def get_action_and_transition(self, obs, hidden):
    """ Get action and transition.
    Encode obs to latent using the VAE, then obtain estimation for next
    latent and next hidden state using the MDRNN and compute the controller
    corresponding action.
    :args obs: current observation (1 x 3 x 64 x 64) torch tensor
    :args hidden: current hidden state (1 x 256) torch tensor
    :returns: (action, next_hidden)
        - action: 1D np array
        - next_hidden (1 x 256) torch tensor
    """
    _, latent_mu, _ = self.vae(obs)
    action = self.controller(latent_mu, hidden[0])
    _, _, _, _, _, next_hidden = self.mdrnn(action, latent_mu, hidden)
    return action.squeeze().cpu().numpy(), next_hidden

I think the function's docstring is quite clear. The problem is that it feeds latent_mu to both the controller and the mdrnn network. I would argue that we should use the actual sampled latent vector instead (let's call it z).

First, the current implementation is not what is done in the original World Models paper, which describes the controller as follows:

C is a simple single layer linear model that maps z_t and h_t directly to action a_t at each time step.

Second, we train the mdrnn network using sampled latent vectors z (see file trainmdrnn.py, function to_latent()). Why, then, do we use latent_mu here?
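
For concreteness, this is roughly the change I have in mind (an untested sketch on my side; it assumes the third output of self.vae(obs) is logsigma, which is how trainmdrnn.py's to_latent() treats it, and that torch is already imported in the module):

def get_action_and_transition(self, obs, hidden):
    # Encode obs; the VAE forward pass returns (reconstruction, mu, logsigma)
    _, latent_mu, latent_logsigma = self.vae(obs)
    # Sample z with the reparameterisation trick instead of passing the mean,
    # mirroring what to_latent() does when training the MDRNN
    latent_z = latent_mu + latent_logsigma.exp() * torch.randn_like(latent_mu)
    action = self.controller(latent_z, hidden[0])
    _, _, _, _, _, next_hidden = self.mdrnn(action, latent_z, hidden)
    return action.squeeze().cpu().numpy(), next_hidden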

This problem affects both the training and testing of the controller. It might be the reason why you report that the memory module is of little to no help in your experiments (https://ctallec.github.io/world-models/). However, I must say I haven't done any proper testing yet.

I would like to hear your thoughts on this.

@wildermuthn

I'm seeing the same thing, and I'm running a comparison between the current model and a modified model that is passed z rather than latent_mu.

@ctallec
Owner

ctallec commented Jun 22, 2019

You are right, we are passing the mean instead of a sample. I don't think this will make a significant difference, and notably it's unclear whether it is going to improve the results, but I may be wrong. In particular, I don't think this explains the lack of necessity for a model of the world, since our observation is not that we obtain significantly worse results than (Ha and Schmidhuber, 2018), but that we already get very good performance without training the model. Anyway, @wildermuthn, thanks for running this experiment. Could you keep us updated on the results? Also, if you have the time and code that is ready to be integrated, don't hesitate to open a pull request. Otherwise I'll fix this soon.

@AlexGonRo
Author

@wildermuthn, in your experiments, are you using the CarRacing environment? I modified this library slightly, and I should (hopefully!) be ready to run a few experiments on the ViZDoom environment by the end of the week. I could schedule a few extra runs to test performance if you haven't done so already!

@wildermuthn

@AlexGonRo I am using the CarRacing environment. I'm switching to a cloud server with multiple V100s, as my experiments were inconclusive running on my single 1080 Ti with only 8 CPUs. I did notice that with z I was able to reach a return of about 700 in a day, whereas without z training seemed to stall around 500. But like I said, inconclusive. I will report back once I actually run it on a good machine.

@ctallec I've got some nvidia-docker and GCP code that is messy, but I will see about putting up a PR for the z change. It's just a few lines, borrowed from the mdrnn code.

@ctallec
Owner

ctallec commented Jun 24, 2019

@wildermuthn You may want to be extra cautious with the hyperparameters you use and with the training duration. Typically, you want to use the same CMA-ES hyperparameters as in the original paper, not the ones we provided; with ours, you will get lower final performance. The original paper used 16 rollouts per return evaluation and a population size of 64, which is what we used when reproducing the results, but this also means you'll need on the order of 32 jobs and 4 GPUs to get it to run in a reasonable amount of time.
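
To make the numbers concrete, the evaluation scheme is roughly the following (a standalone sketch, not a copy of traincontroller.py; rollout, the parameter dimension, the sigma value and the maxiter cap are placeholders):

import cma
import numpy as np

def rollout(params):
    # Placeholder: in the real setup this runs the controller with the given
    # flat parameter vector in the environment and returns the episode return.
    return float(np.random.randn())

# Flat controller parameters; the dimension is purely illustrative:
# (32 latent + 256 hidden + 1 bias) * 3 actions = 867 values.
initial_params = np.zeros(867)

# Paper-style setting: population size 64 ('maxiter' only keeps this demo short).
es = cma.CMAEvolutionStrategy(initial_params, 0.1, {'popsize': 64, 'maxiter': 3})

while not es.stop():
    candidates = es.ask()
    # Average each candidate's return over 16 rollouts, as in the original paper,
    # and feed back the negated averages since cma minimises.
    avg_returns = [np.mean([rollout(p) for _ in range(16)]) for p in candidates]
    es.tell(candidates, [-r for r in avg_returns])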

@ranahanocka

I have been running into an issue for a while now that may be related to this one (though my controller is a bit different from the one here). I trained the LSTM on a GPU and then used it in a different controller setup on a local CPU machine with PyTorch 0.4.1. I finally isolated the problem, which seems to be related to this issue in PyTorch. Basically, torch.exp didn't work properly for me and turned the entire hidden state of the LSTM into garbage (when running on the CPU). When I ran my controller on the GPU on the server (with the same PyTorch version, 0.4.1), it worked.
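
For anyone hitting something similar, a quick sanity check is to compare torch.exp on CPU and GPU for the same tensor (just a sketch, independent of this repo's code; it assumes a CUDA device is available):

import torch

# Compare torch.exp on CPU and GPU for the same input; a large discrepancy
# points to the kind of numerical problem described above.
x = torch.randn(1000)
cpu_out = torch.exp(x)
gpu_out = torch.exp(x.cuda()).cpu()
print("max abs difference:", (cpu_out - gpu_out).abs().max().item())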

@AlexGonRo
Author

AlexGonRo commented Sep 3, 2019

Despite being a bit late, I have created a pull request fixing the issue.

I did some testing with my own fork and didn't find any significant boost in performance from these changes. However, as we discussed above, that is the expected behaviour.

I must clarify that I did not perform any extensive testing of these changes against the current version of this library (ctallec's master branch). I did, however, make sure that the new lines of code run without errors.
