My current understanding of how Aloha handles off-heap models (currently VW and H2O models) when the model is embedded in the JSON is that it reads the entire JSON into memory, then extracts the model portion and either writes it to disk or compiles it. This requires a JVM heap roughly twice as large as it needs to be. I think this can be remedied with the following procedure:
1. Pass over the model JSON and note the start and end offsets of the embedded model bytes.
2. Replace the model field in the JSON with an empty value.
3. Instantiate the model as usual.
4. Stream the model bytes either to disk (in the off-heap case) or to the compiler (in the H2O case).
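The steps above can be sketched roughly as follows. This is only an illustration, not Aloha's actual implementation: the `"model"` field name, the compact JSON layout (no whitespace around the colon), and the base64 encoding of the payload are all assumptions made for the sketch. The key point is that the payload offsets are found by scanning bytes and the decoded model is streamed out in fixed-size chunks, so the decoded model never sits on the heap alongside the full JSON.

```scala
import java.io.{ByteArrayInputStream, InputStream, OutputStream}
import java.util.Base64

object ModelByteStreamer {
  // Assumed field name; Aloha's real schema may differ. The naive scan also
  // assumes compact JSON, i.e. the literal byte sequence "model":" appears.
  private val Key: Array[Byte] = "\"model\":\"".getBytes("UTF-8")

  // Byte-level substring search so we never decode the whole JSON to a String.
  private def indexOf(hay: Array[Byte], needle: Array[Byte]): Int = {
    var i = 0
    while (i + needle.length <= hay.length) {
      var j = 0
      while (j < needle.length && hay(i + j) == needle(j)) j += 1
      if (j == needle.length) return i
      i += 1
    }
    -1
  }

  // Step 1: note the start and end offsets of the base64 model payload.
  def locate(json: Array[Byte]): Option[(Int, Int)] = {
    val i = indexOf(json, Key)
    if (i < 0) None
    else {
      val start = i + Key.length
      var end = start
      while (end < json.length && json(end) != '"') end += 1
      if (end < json.length) Some((start, end)) else None
    }
  }

  // Step 4: stream-decode the payload in 8 KiB chunks to disk or to the
  // compiler, without ever materializing the decoded bytes in one array.
  def copyModel(json: Array[Byte], out: OutputStream): Unit =
    locate(json).foreach { case (start, end) =>
      val in: InputStream =
        Base64.getDecoder.wrap(new ByteArrayInputStream(json, start, end - start))
      val buf = new Array[Byte](8192)
      var n = in.read(buf)
      while (n >= 0) { out.write(buf, 0, n); n = in.read(buf) }
    }
}
```

In a real implementation the JSON itself would also be read as a stream (e.g. via a streaming JSON parser) rather than held in a byte array, but the chunked decode above is where the bulk of the saving comes from for large VW models.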
By doing this we should need a much smaller heap when reading very large VW models, which means we can use smaller machines to load them.