fix the bug for eval function while variable_update=parameter_server|distributed_replicated #47
First, the `_eval` function currently doesn't support `variable_update=parameter_server` or `variable_update=distributed_replicated`, and errors occur when using `variable_update=replicated` to restore parameters from a checkpoint file created by training with `variable_update=parameter_server|distributed_replicated`. So I changed the `target` to fix it, roughly as in the sketch below.
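A minimal sketch of what that `target` change looks like. The names `params`, `server`, and `make_eval_session` are placeholders based on the description above, not the repo's exact identifiers:

```python
import tensorflow as tf

def make_eval_session(params, server):
    """Pick the session target based on the variable_update mode."""
    if params.variable_update in ('parameter_server', 'distributed_replicated'):
        # In the distributed modes, the eval session must connect to the
        # tf.train.Server created for this task rather than using an
        # in-process session, or the checkpointed variables can't be found.
        target = server.target
    else:
        # Single-machine modes can use the default in-process target.
        target = ''
    return tf.Session(target=target,
                      config=tf.ConfigProto(allow_soft_placement=True))
```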
Second, with `variable_update=distributed_replicated`, the result of the eval function looked incorrect. I found that `tf.global_variables()` contained no parameters when restoring the checkpoint, and even during training `tf.global_variables()` only contained 190+ parameters (copies of the trainable local variables), without `batchnorm/gamma`, `batchnorm/moving_mean`, or `batchnorm/moving_variance`. So I changed the code to save/restore parameters from/to `tf.local_variables()`, and it worked; see the sketch below.
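A sketch of the second fix under the same assumptions (`params` is a placeholder): build the `tf.train.Saver` over the local variables, which in `distributed_replicated` mode hold the full model state including the batchnorm moving statistics, instead of the global copies, which only mirror the trainable variables:

```python
import tensorflow as tf

def make_saver(params):
    """Choose which variable collection the checkpoint covers."""
    if params.variable_update == 'distributed_replicated':
        # tf.local_variables() contains batchnorm/gamma, moving_mean and
        # moving_variance here; tf.global_variables() does not, so a Saver
        # built over the global collection silently drops them.
        var_list = tf.local_variables()
    else:
        var_list = tf.global_variables()
    return tf.train.Saver(var_list)

# Usage during eval:
#   saver = make_saver(params)
#   saver.restore(sess, checkpoint_path)
```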