
Add python script to plot the training log #86

Closed
kloudkl opened this issue Feb 9, 2014 · 4 comments

Comments

kloudkl commented Feb 9, 2014

No description provided.

ghost commented Feb 9, 2014

Would be good to have. One can better formulate the solver messages or potentially have the solver write the key parameters like current loss to a separate log file so further scripts could analyze it more easily.

sguada commented Feb 9, 2014

I have a bash script that extracts the relevant information from the log,
which can later be easily plotted with python, gnuplot, matlab, etc.
I can share it, or simply add it to the code base; see #89.

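The extraction step described above could equally be sketched in Python (a minimal sketch; the `Iteration N, loss = X` line format is an assumption about the solver log, not its confirmed format):

```python
import re

# Hypothetical solver-log line format; adjust the pattern to the real log.
LOSS_RE = re.compile(r"Iteration (\d+), loss = ([0-9.eE+-]+)")

def extract_loss(lines):
    """Return (iteration, loss) pairs found in an iterable of log lines."""
    pairs = []
    for line in lines:
        m = LOSS_RE.search(line)
        if m:
            pairs.append((int(m.group(1)), float(m.group(2))))
    return pairs

if __name__ == "__main__":
    sample = [
        "I0209 solver.cpp:204] Iteration 100, loss = 0.85",
        "I0209 solver.cpp:204] Iteration 200, loss = 0.61",
    ]
    for it, loss in extract_loss(sample):
        print(it, loss)
```

The pairs could then be written to a CSV file, which gnuplot, matlab, or matplotlib can all consume directly.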

kloudkl commented Feb 10, 2014

@sguada, thanks for sharing your script in #89 and #90! I found it very inspiring.

Given the diverse requirements of Caffe's users, the best we can do is provide a sample python script as a basis for various customizations. To this end, data extraction and plotting should be separated, as you have done. For benchmarking purposes, the results of multiple experiments often need to be plotted in one chart.

Based on the information present in the current version of the training log, the possible useful plots are summarized below. In the future, this script should be kept in sync with changes to the training log format, such as the addition of training or validation accuracy.

  1. Test accuracy (test score 0) vs. training iterations / time;
  2. Test loss (test score 1) vs. training iterations / time;
  3. Training loss vs. training iterations / time;
  4. Learning rate vs. training iterations / time;
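
A script along these lines could produce such plots and overlay multiple experiments in one chart (a sketch only; the log-line patterns, the `logs` dict shape, and the matplotlib usage are assumptions, not the final script):

```python
import re

# Assumed log-line formats (hypothetical; adjust to the actual solver output).
PATTERNS = {
    "training loss": re.compile(r"Iteration (\d+), loss = ([0-9.eE+-]+)"),
    "learning rate": re.compile(r"Iteration (\d+), lr = ([0-9.eE+-]+)"),
}

def parse_series(lines, pattern):
    """Collect (iteration, value) pairs for one pattern from log lines."""
    out = []
    for line in lines:
        m = pattern.search(line)
        if m:
            out.append((int(m.group(1)), float(m.group(2))))
    return out

def plot_experiments(logs, quantity="training loss"):
    """Overlay one quantity from several experiments in a single chart.

    `logs` maps an experiment name to an iterable of its log lines.
    """
    import matplotlib.pyplot as plt  # imported here so parsing works without it
    for name, lines in logs.items():
        series = parse_series(lines, PATTERNS[quantity])
        if series:
            iters, vals = zip(*series)
            plt.plot(iters, vals, label=name)
    plt.xlabel("training iterations")
    plt.ylabel(quantity)
    plt.legend()
    plt.show()
```

Keeping `parse_series` separate from `plot_experiments` follows the extraction/plotting split above, so users can reuse the parsed data with any plotting tool.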

@shelhamer
Member

Good job, everyone. This is done with #89, #90, and #91.

thatguymike pushed a commit to thatguymike/caffe that referenced this issue Dec 3, 2015
myfavouritekk pushed a commit to myfavouritekk/caffe that referenced this issue Jul 28, 2016
* Specify whether bottom and top blobs are sharing data

* Reuse data memory if not propagate down

* Skip a layer if no_mem_opt is set

* Fix a bug and add optimize_mem enum

* Code formatting