A model saved with Python3.5 cannot be loaded in Python3.6 #9595
Comments
Digging a bit more, I found that the marshal serialization format has changed slightly between Python 3.5 and 3.6. The change is not documented in the release notes, but the documentation doesn't promise to keep the format stable either. I am following the logic Keras uses to serialise Lambda layers, mostly found in https://github.com/keras-team/keras/blob/master/keras/utils/generic_utils.py#L171-L241
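For context, a minimal sketch of the mechanism (not the exact Keras code): the Lambda's code object is dumped with marshal, roughly what the func_dump/func_load helpers around the linked lines do, and neither the marshal framing nor the bytecode inside it is guaranteed to round-trip across interpreter versions. The snippet below only shows the round-trip on a single interpreter; the failure reported here happens when the payload produced on 3.5 is read on 3.6.

```python
import marshal
import sys
import types

# A stand-in for the kind of expression a Lambda layer might wrap.
f = lambda x: x + 1

# Serialize the function's bytecode, as Keras does for Lambda layers.
payload = marshal.dumps(f.__code__)

# The payload is raw CPython bytecode plus marshal framing, both of which
# are only guaranteed to be readable by the Python version that wrote them.
print(sys.version_info, len(payload))

# Loading it back works on the same interpreter...
code = marshal.loads(payload)
g = types.FunctionType(code, globals())
print(g(41))  # 42

# ...but feeding a payload written by Python 3.5 to marshal.loads on
# Python 3.6 can fail, e.g. with a SystemError, because the format and
# the bytecode are version-specific.
```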
A similar issue occurs when loading a model built with Python 3.5 in a Python 3.6 notebook on Google Colab. Is there any workaround?
@tadbeer the best way is to save the architecture as code, and the weights in an h5 file. This will be compatible across versions. Saving to YAML or JSON will only work across versions if you don't have Lambda layers.
Thanks @Dapid! Writing the model architecture in the code itself and then loading the weights on top of it worked. I was thus able to load a model built on my machine with Python 3.5 into Google Colab with Python 3.6.
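A minimal sketch of that recipe, with a toy build_model() and file names chosen here for illustration:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Lambda

def build_model():
    """The architecture lives in code, so it can be rebuilt on any Python version."""
    model = Sequential()
    model.add(Dense(8, input_shape=(4,)))
    model.add(Lambda(lambda x: x * 2))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')
    return model

# --- on the machine running Python 3.5 ---
model = build_model()
model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=0)
model.save_weights('model_weights.h5')   # weights only, no bytecode involved

# --- on the machine running Python 3.6 (e.g. Colab) ---
model = build_model()                    # same code, re-imported or copy-pasted
model.load_weights('model_weights.h5')
```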
A Lambda layer contains arbitrary code. That is not safely serializable, at all. We serialize this arbitrary code on a best-effort basis. There are no guarantees whatsoever that it will be loadable on a different system or a different Python version. In order to safely serialize a custom expression, you need to store its code somewhere. Not as bytecode -- you need the original source code. So you can do:
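The snippet that originally followed this comment was not captured in this copy. As one hedged illustration of "keep the original source code around": replace the Lambda with a small custom layer and hand its class to load_model via custom_objects; only the class name goes into the saved config, so nothing version-specific is serialized. The class name and layer body are made up here, and the import path matches Keras 2.1.x.

```python
from keras.engine.topology import Layer
from keras.models import Sequential, load_model
from keras.layers import Dense

class Double(Layer):
    """Toy stateless layer standing in for the Lambda expression."""
    def call(self, inputs):
        return inputs * 2.0

    def compute_output_shape(self, input_shape):
        return input_shape

def build_model():
    model = Sequential()
    model.add(Dense(8, input_shape=(4,)))
    model.add(Double())
    model.compile(optimizer='adam', loss='mse')
    return model

model = build_model()
model.save('model.h5')

# At load time the class is looked up by name; its source code lives in
# this module, so no bytecode has to survive a Python version change.
restored = load_model('model.h5', custom_objects={'Double': Double})
```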
Hi! The workaround was to rebuild my architecture and then use model.load_weights('trained_model.h5'). In more detail: do the preprocessing of your inputs, then compile the model and do the training, and save your trained model weights. To load the model for future use, first rebuild your model again and then call model.load_weights('trained_model.h5'). The important parts are model.save_weights('trained_model.h5'), rebuilding the architecture from code, and model.load_weights('trained_model.h5'); see the sketch below.
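The detailed example referenced above did not survive the copy. A hedged sketch of the same idea, here using a weights-only ModelCheckpoint during training instead of a manual save_weights() call (file names, shapes, and the tiny architecture are invented for illustration):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint

def build_model():
    # Same principle as above: the architecture is ordinary code.
    model = Sequential()
    model.add(Dense(8, input_shape=(4,)))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')
    return model

model = build_model()

# save_weights_only=True writes just the weights to trained_model.h5 during
# training, which is exactly what load_weights() expects later.
checkpoint = ModelCheckpoint('trained_model.h5',
                             monitor='loss',
                             save_best_only=True,
                             save_weights_only=True)

model.fit(np.random.rand(64, 4), np.random.rand(64, 1),
          epochs=3, verbose=0, callbacks=[checkpoint])

# Later, possibly on a different Python version: rebuild, then restore weights.
restored = build_model()
restored.load_weights('trained_model.h5')
```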
Hello, I read your code, but I can't understand why you use the ModelCheckpoint callback; it seems you don't use the ".ckpt" file. Could you tell me the reason? Thanks.
A model saved with Python 3.5 cannot be loaded in Python 3.6, at least if it includes a Lambda layer.
Minimal example:
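The original snippet was not carried over in this copy; a save_model.py along these lines (layer sizes and file name are my own choices) matches the setup described:

```python
# save_model.py -- run under Python 3.5
from keras.models import Sequential
from keras.layers import Dense, Lambda

model = Sequential()
model.add(Dense(8, input_shape=(4,)))
model.add(Lambda(lambda x: x + 1))   # the Lambda is what gets marshal-serialized
model.compile(optimizer='adam', loss='mse')

model.save('model.h5')   # full save: architecture (incl. Lambda bytecode) + weights
```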
Run save_model.py on Python 3.5:
Loading the model from Python 3.5 also works:
But Python 3.6 raises a SystemError:
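The loading script and the traceback were likewise not captured here; the load side is just the following, with the exact error text omitted since it depends on version and backend:

```python
# load_model.py -- works under Python 3.5, raises SystemError under Python 3.6
from keras.models import load_model

model = load_model('model.h5')   # fails while unmarshalling the Lambda's 3.5 bytecode
```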
Contrary to what was stated in bug #7297, the problem is indeed in Keras, since the error is triggered before any call to the backend. The problem appears with both TensorFlow and Theano, and in both Keras 2.1.5 and master.