03_03_vae_digits_train Error
Where the error occurs
```python
vae.train(
    x_train
    , batch_size = BATCH_SIZE
    , epochs = EPOCHS
    , run_folder = RUN_FOLDER
    , print_every_n_batches = PRINT_EVERY_N_BATCHES
    , initial_epoch = INITIAL_EPOCH
)
```
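For context, the constants fed into this call are defined earlier in the notebook. From the log below one can infer EPOCHS = 200 and, with 1875 steps per epoch over the 60,000 MNIST training digits, BATCH_SIZE = 32; the remaining values here are illustrative placeholders rather than the notebook's exact settings:

```python
# Rough reconstruction of the notebook's constants (EPOCHS and BATCH_SIZE
# inferred from the training log below; the rest are placeholders).
RUN_FOLDER = 'run/vae/0002_digits/'   # placeholder path
BATCH_SIZE = 32
EPOCHS = 200
PRINT_EVERY_N_BATCHES = 100           # placeholder value
INITIAL_EPOCH = 0
```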
Error Message
```
Epoch 1/200
   2/1875 [..............................] - ETA: 53s - loss: 230.6028 - reconstruction_loss: 230.6014 - kl_loss: 0.0014
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.173996). Check your callbacks.
1875/1875 [==============================] - ETA: 0s - loss: 58.3136 - reconstruction_loss: 55.0746 - kl_loss: 3.2390

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-11-a0cdb3ff19b5> in <module>
      5     , run_folder = RUN_FOLDER
      6     , print_every_n_batches = PRINT_EVERY_N_BATCHES
----> 7     , initial_epoch = INITIAL_EPOCH
      8     )

~\work\Generative_Deep_Learning\GDL_code\models\VAE.py in train(self, x_train, batch_size, epochs, run_folder, print_every_n_batches, initial_epoch, lr_decay)
    224             , epochs = epochs
    225             , initial_epoch = initial_epoch
--> 226             , callbacks = callbacks_list
    227             )
    228

~\anaconda3\envs\generative\lib\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs)
     64   def _method_wrapper(self, *args, **kwargs):
     65     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
---> 66       return method(self, *args, **kwargs)
     67
     68     # Running inside `run_distribute_coordinator` already.

~\anaconda3\envs\generative\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
    874         epoch_logs.update(val_logs)
    875
--> 876       callbacks.on_epoch_end(epoch, epoch_logs)
    877       if self.stop_training:
    878         break

~\anaconda3\envs\generative\lib\site-packages\tensorflow\python\keras\callbacks.py in on_epoch_end(self, epoch, logs)
    363     logs = self._process_logs(logs)
    364     for callback in self.callbacks:
--> 365       callback.on_epoch_end(epoch, logs)
    366
    367   def on_train_batch_begin(self, batch, logs=None):

~\anaconda3\envs\generative\lib\site-packages\tensorflow\python\keras\callbacks.py in on_epoch_end(self, epoch, logs)
   1175           self._save_model(epoch=epoch, logs=logs)
   1176       else:
-> 1177         self._save_model(epoch=epoch, logs=logs)
   1178       if self.model._in_multi_worker_mode():
   1179         # For multi-worker training, back up the weights and current training

~\anaconda3\envs\generative\lib\site-packages\tensorflow\python\keras\callbacks.py in _save_model(self, epoch, logs)
   1194                       int) or self.epochs_since_last_save >= self.period:
   1195       self.epochs_since_last_save = 0
-> 1196       filepath = self._get_file_path(epoch, logs)
   1197
   1198       try:

~\anaconda3\envs\generative\lib\site-packages\tensorflow\python\keras\callbacks.py in _get_file_path(self, epoch, logs)
   1242       # `{mape:.2f}`. A mismatch between logged metrics and the path's
   1243       # placeholders can cause formatting to fail.
-> 1244       return self.filepath.format(epoch=epoch + 1, **logs)
   1245     except KeyError as e:
   1246       raise KeyError('Failed to format this callback filepath: "{}". '

TypeError: unsupported format string passed to numpy.ndarray.__format__
```
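The failure appears to come from the `ModelCheckpoint` filename pattern: `_get_file_path` runs `self.filepath.format(epoch=epoch + 1, **logs)`, and in this run the logged `loss` arrives as a NumPy array rather than a plain float, so the `:.2f` format spec cannot be applied. A minimal reproduction of the same `TypeError`, assuming the logged value really is an `ndarray`:

```python
import numpy as np

# ModelCheckpoint._get_file_path effectively does:
#     self.filepath.format(epoch=epoch + 1, **logs)
# If logs['loss'] is a NumPy array instead of a plain float, :.2f fails.
loss = np.array([58.3136])   # stand-in for the logged value
try:
    "weights-{epoch:03d}-{loss:.2f}.h5".format(epoch=1, loss=loss)
except TypeError as e:
    print(e)   # unsupported format string passed to numpy.ndarray.__format__
```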
Change the code in GDL_code/models/VAE.py, around line 213:

```diff
 custom_callback = CustomCallback(run_folder, print_every_n_batches, initial_epoch, self)
 lr_sched = step_decay_schedule(initial_lr=self.learning_rate, decay_factor=lr_decay, step_size=1)

-checkpoint_filepath=os.path.join(run_folder, "weights/weights-{epoch:03d}-{loss:.2f}.h5")
+checkpoint_filepath=os.path.join(run_folder, "weights/weights.h5")

 checkpoint1 = ModelCheckpoint(checkpoint_filepath, save_weights_only = True, verbose=1)
 checkpoint2 = ModelCheckpoint(os.path.join(run_folder, 'weights/weights.h5'), save_weights_only = True, verbose=1)
```
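The change above sidesteps the problem by dropping the `{epoch}`/`{loss}` placeholders from the checkpoint filename. If you would rather keep per-epoch filenames, a possible alternative (not from this thread, and dependent on the TensorFlow version passing the same `logs` dict to every callback) is to cast the logged metrics to plain Python floats before `ModelCheckpoint` formats the path. A minimal sketch, to be placed ahead of `checkpoint1`/`checkpoint2` in `callbacks_list` inside `VAE.train`:

```python
from tensorflow.keras.callbacks import Callback

class CastLogsToFloat(Callback):
    """Cast NumPy/tensor metric values in `logs` to Python floats so that
    filename patterns such as weights-{epoch:03d}-{loss:.2f}.h5 can be formatted."""

    def on_epoch_end(self, epoch, logs=None):
        if not logs:
            return
        for key, value in list(logs.items()):
            try:
                logs[key] = float(value)   # works for scalars and 1-element arrays
            except (TypeError, ValueError):
                pass                       # leave non-numeric entries untouched
```

This only helps if `CastLogsToFloat()` runs before the checkpoint callbacks, so its position in `callbacks_list` matters.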
Fixed with the change above! Thanks to the person who wrote the answer!!!!
Original issue: davidADSP/GDL_code#73
Solution: davidADSP/GDL_code#73 (comment)
@chc1129 I've made notebooks that run on Google Colab; feel free to use them as a reference. "Generative Deep Learning" is the most exciting AI-related book I've read in a long time.
I made Google Colab notebooks for the scripts in this repository. Refer to the link below: https://github.com/karaage0703/GDL_code/tree/karaage/colab