If I understand correctly from reading the report (http://arxiv.org/pdf/1506.07552v1.pdf), the new algorithm is not a batch-wise one. The philosophy behind it is that a batch-wise approach makes less progress per pass than a fully sequential update.
Yet from an implementation perspective, processing a batch can be faster than processing the same number of points one by one (because of dense matrix multiplication). I guess this at least leaves some room to speed up the optimization procedure.
Would adding an extra interface to support mini-batches be beneficial for further speed-ups?
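To illustrate the point about dense matrix multiplication, here is a minimal NumPy sketch (not Splash's actual code; the shapes and the linear model are made up for illustration). Both forms compute the same result, but the batched form does one dense matrix-matrix product that BLAS can optimize, instead of many small matrix-vector products.

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((64, 10))   # 64 samples, 10 features each
weights = rng.standard_normal((10, 3))  # a hypothetical 10 -> 3 linear map

# Point-by-point: one matrix-vector product per sample.
pointwise = np.stack([x @ weights for x in batch])

# Batch-wise: a single dense matrix-matrix product over all 64 samples.
batched = batch @ weights

# Identical results; the batched form is what makes mini-batches
# attractive from a raw-throughput perspective.
assert np.allclose(pointwise, batched)
```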
Jianbo
It is our intention to implement a native batch-processing API. Until that is available, you can do it yourself: make every RDD element a mini-batch of samples, so that in each iteration the processing function is fed a mini-batch instead of a single sample.
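The suggested workaround amounts to regrouping the dataset before handing it to Splash. A minimal sketch of the grouping step, with a plain Python list standing in for the RDD and `batch_size` as an assumed parameter:

```python
def to_minibatches(samples, batch_size):
    """Group a flat sequence of samples into mini-batch elements, so that
    each element handed to the processing function is a list of up to
    `batch_size` samples (the last batch may be smaller)."""
    return [samples[i:i + batch_size]
            for i in range(0, len(samples), batch_size)]

data = list(range(10))            # stand-in for the distributed dataset
batches = to_minibatches(data, 4)
# Each element is now a mini-batch rather than a single sample:
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

In actual Spark code the same regrouping could be done once up front (e.g. by mapping over partitions), after which the per-element processing function receives a whole mini-batch per call.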