
Added ChannelConstantFiller #96

Closed
wants to merge 9 commits

Conversation
Conversation

@sguada (Contributor) commented Feb 12, 2014

Added the option to fill a blob with a different constant value per channel.

This change allows filling the mean_image of a data_layer with the mean of the RGB values.
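The behavior described above, one constant per channel, can be sketched outside Caffe with numpy; the function name and the mean values below are hypothetical, not taken from the patch:

```python
import numpy as np

def channel_constant_fill(shape, values):
    """Fill a (channels, height, width) blob with one constant per channel."""
    channels, height, width = shape
    assert len(values) == channels, "need one fill value per channel"
    blob = np.empty(shape, dtype=np.float32)
    for c, v in enumerate(values):
        blob[c, :, :] = v  # every pixel in channel c gets the same constant
    return blob

# Fill a 3-channel mean blob with placeholder per-channel means.
mean = channel_constant_fill((3, 4, 4), [104.0, 117.0, 123.0])
```

This is the per-channel generalization of a plain constant filler, which would write a single value into the whole blob.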

@shelhamer (Member)

Is this better than passing a mean image file with the same value at every pixel?

@sguada (Contributor, Author) commented Feb 12, 2014

I think this way is better because one can define it in the prototxt, change it when needed, and doesn't have to create a separate mean image file for every different size or value.

Sergio


@shelhamer (Member)

Ok, good motivation. Could you include an example network using this instead of the meanfile field? Can this be included in deployment nets like examples/imagenet_deploy.prototxt to automatically subtract the mean from inputs and avoid the need to do it in user code (like the wrappers)?

@sguada (Contributor, Author) commented Feb 13, 2014

I have written some prototxt examples showing how to use the bias_filler to fill the data_mean instead of using a meanfile.
I have run some tests to see the effect on validation accuracy and found the following interesting results:
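A sketch of the kind of prototxt fragment this refers to, in the V0 layer format of the period; none of the field names or values below are taken from the actual patch, they only illustrate the idea of replacing a meanfile with per-channel constants:

```
# Hypothetical data layer config: per-channel constants instead of meanfile.
layers {
  layer {
    name: "data"
    type: "data"
    source: "imagenet_val_leveldb"
    # Instead of: meanfile: "imagenet_mean.binaryproto"
    bias_filler {
      type: "channel_constant"
      value: 104  # mean of channel 0 (placeholder)
      value: 117  # mean of channel 1 (placeholder)
      value: 123  # mean of channel 2 (placeholder)
    }
  }
  top: "data"
  top: "label"
}
```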

==> logs/imagenet_test_zero_mean.log <==
E0212 16:52:36.766688  6231 test_net.cpp:54] Test accuracy:0.49616

==> logs/imagenet_test_constant_mean.log <==
E0212 16:48:28.909536  5106 test_net.cpp:54] Test accuracy:0.56152

==> logs/imagenet_test_channelconstant_mean.log <==
E0212 17:10:05.759047 11184 test_net.cpp:54] Test accuracy:0.57048

==> logs/imagenet_test_mean_file.log <==
E0212 17:01:32.425643  8061 test_net.cpp:54] Test accuracy:0.57278

This means that if one does not subtract the mean from the images, accuracy drops substantially, by 7.66%. If one subtracts the average pixel value, accuracy drops by 1.13%, while subtracting the average RGB values of the training data costs only 0.23%.
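The quoted drops follow directly from the logged accuracies, taking the meanfile run as the baseline:

```python
# Accuracies copied from the test logs above.
baseline = 0.57278   # mean_file
zero     = 0.49616   # no mean subtraction
constant = 0.56152   # single average pixel value
channel  = 0.57048   # per-channel RGB means

# Drops relative to the meanfile baseline, in percentage points.
drop_zero     = (baseline - zero) * 100      # about 7.66
drop_constant = (baseline - constant) * 100  # about 1.13
drop_channel  = (baseline - channel) * 100   # about 0.23
```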

So anyone using a wrapper can simply subtract the mean RGB values and lose very little performance.
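In a wrapper, that per-channel subtraction is a one-liner thanks to numpy broadcasting; a minimal sketch, with placeholder mean values rather than the actual ImageNet statistics:

```python
import numpy as np

# Hypothetical per-channel means of the training data (placeholders).
MEAN_RGB = np.array([123.0, 117.0, 104.0], dtype=np.float32)

def preprocess(image):
    """Subtract the per-channel mean from an (H, W, 3) RGB image."""
    # MEAN_RGB broadcasts across the H and W axes.
    return image.astype(np.float32) - MEAN_RGB

img = np.full((2, 2, 3), 130.0, dtype=np.float32)
out = preprocess(img)
```

This avoids shipping a full mean image file with the wrapper at the cost of the small 0.23% accuracy gap measured above.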

@shelhamer (Member)

Could this be better as a DataProcessing layer #148? With a MeanSubtraction layer one could define a mean image, single number, or channel as desired and use it for training, test, and deployment. A workaround is to add an InnerProduct layer immediately after the input with the bias filled to the negative mean, but that doesn't feel clean.
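The InnerProduct workaround rests on a simple identity: identity weights with the bias filled to the negative mean compute exactly a mean subtraction. A numpy sketch of that equivalence, reduced to a single pixel's channel values (not Caffe code):

```python
import numpy as np

mean = np.array([104.0, 117.0, 123.0], dtype=np.float32)  # placeholder means
x = np.array([110.0, 120.0, 130.0], dtype=np.float32)     # one pixel's channels

W = np.eye(3, dtype=np.float32)  # identity weights: pass the input through
b = -mean                        # bias filled with the negative mean

y = W @ x + b                    # the InnerProduct forward computation
assert np.allclose(y, x - mean)  # identical to plain mean subtraction
```

A dedicated MeanSubtraction layer would avoid carrying the large identity weight matrix, which is why the workaround doesn't feel clean.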

@shelhamer (Member)

@sguada let's wait for #148 since this is more appropriately implemented in a MeanSubtractionLayer data preprocessing layer. Thanks for the constant / channel constant / mean image analysis. We should add that to the docs somewhere.

shelhamer closed this Mar 13, 2014
thatguymike pushed a commit to thatguymike/caffe that referenced this pull request Mar 11, 2016
myfavouritekk pushed a commit to myfavouritekk/caffe that referenced this pull request Aug 11, 2016
myfavouritekk added a commit to myfavouritekk/caffe that referenced this pull request Aug 11, 2016
standardize memory optimization configurations

* yjxiong/fix/mem_config:
  take care of share data with excluded blob
  improvise memory opt configs
  fix cudnn conv legacy bug (BVLC#96)
  add TOC
  Update README.md
  Update README.md (BVLC#95)
  Update README.md
  Improve the python interface (BVLC#80)
  Update README.md
myfavouritekk added a commit to myfavouritekk/caffe that referenced this pull request Aug 15, 2016
…caffe into imagenet_vid_2016

* 'imagenet_vid_2016' of https://github.com/myfavouritekk/caffe:
  take care of share data with excluded blob
  Revert "Fix a but when setting no_mem_opt: true for layers near in-place layers."
  improvise memory opt configs
  fix cudnn conv legacy bug (BVLC#96)
  add TOC
  Update README.md
  Update README.md (BVLC#95)
  Update README.md
  Improve the python interface (BVLC#80)
  Update README.md