Data set which is much bigger than RAM #10897
Comments
Hi @Smotrov, given your description and code, I would expect this query to run incrementally and not buffer all the results in memory -- that is, I would expect the query to stream. Some operators potentially need to buffer all of their input (grouping, joins, sorts), but you don't seem to be using any of those. I am not super familiar with exactly how the JSON writing is implemented, but I believe that should be streaming as well.

You can limit the amount of memory using https://docs.rs/datafusion/latest/datafusion/execution/memory_pool/trait.MemoryPool.html. However, as I mentioned, I wouldn't expect your query to buffer large amounts of memory, so if it does, maybe we need to adjust the writer settings or there is some improvement to make to DataFusion. Let us know how it goes!
Thank you @alamb. This is what I actually did:

```rust
use std::path::PathBuf;
use std::sync::Arc;

use datafusion::error::Result;
use datafusion::execution::memory_pool::FairSpillPool;
use datafusion::execution::runtime_env::{RuntimeConfig, RuntimeEnv};
use datafusion::prelude::{SessionConfig, SessionContext};

const MEMORY_LIMIT: usize = 8 * 1024 * 1024 * 1024; // 8GB

fn create_context() -> Result<SessionContext> {
    // Create a memory pool with a limit
    let memory_pool = Arc::new(FairSpillPool::new(MEMORY_LIMIT));

    // Configure the runtime environment to use the memory pool
    // and spill to a local temporary directory
    let rt_config = RuntimeConfig::new()
        .with_memory_pool(memory_pool)
        .with_temp_file_path(PathBuf::from("./tmp"));
    let runtime_env = Arc::new(RuntimeEnv::new(rt_config)?);

    // Configure the session context to use the runtime environment
    let session_config = SessionConfig::new();
    let ctx = SessionContext::new_with_config_rt(session_config, runtime_env);
    Ok(ctx)
}
```

However, it easily takes 20-30GB of RAM and, interestingly, the CPU load stays relatively small, around 20-30%. The memory consumption gets that high when I set at least 4 target partitions.

```rust
// Define the partitioned Listing Table
let listing_options = ListingOptions::new(file_format)
    .with_table_partition_cols(part)
    .with_target_partitions(4)
    .with_file_extension(".ndjson.zst");
```

It would be great if it were possible to set an actual hard limit for the memory, otherwise I can't use it in Docker :-(
Hi @Smotrov -- I agree that using 20-30 GB does not seem good. Perhaps there is something in DataFusion that is not accounting for memory correctly (perhaps it is the decoding of the ndjson / zstd stream) 🤔
FWIW: I was able to reproduce it while writing a single file into a partitioned table with multiple partitions (~5M rows, 4k partitions). Update: for the single file -> non-partitioned table case, DataFusion works just fine (~75MB peak in total, ~15MB of which is memory for the encoder), and it is also OK when writing multiple partitions without compression (~500MB in total, due to buffering for 4k writes), so it is only an issue in the case of dozens / hundreds of partitions + ZSTD.
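A rough sketch of that reproduction scenario is below. The table, column names, and paths are made up for illustration, and the exact DDL and option syntax depend on the DataFusion version in use.

```rust
use datafusion::error::Result;
use datafusion::prelude::*;

// Sketch only: one input file written into a table partitioned by a
// high-cardinality column with ZSTD-compressed output. Names, paths, and the
// exact DDL / option syntax are illustrative and version-dependent.
async fn reproduce(ctx: &SessionContext) -> Result<()> {
    // Single (uncompressed) NDJSON input file.
    ctx.register_json("source", "./input.ndjson", NdJsonReadOptions::default())
        .await?;

    // Partitioned, ZSTD-compressed JSON sink; assume `part_col` has
    // thousands of distinct values.
    ctx.sql(
        "CREATE EXTERNAL TABLE sink (value BIGINT, part_col VARCHAR) \
         STORED AS JSON \
         PARTITIONED BY (part_col) \
         LOCATION './out/' \
         OPTIONS ('format.compression' 'zstd')",
    )
    .await?
    .collect()
    .await?;

    // Writing the single source into the partitioned table is the step
    // where the memory usage was observed to balloon.
    ctx.sql("INSERT INTO sink SELECT value, part_col FROM source")
        .await?
        .collect()
        .await?;

    Ok(())
}
```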
Thanks @korowa -- this analysis makes sense (i.e. that there is some constant overhead per active partition). @Smotrov, does this match your dataset? As in, how many partitions (aka files) are created by your query? Some other ideas for improvements:
I'm using Rust, and I'm new to DataFusion.
I need to repartition a big dataset, hundreds of GB in size. It is stored on S3 as multiple compressed files.
It should be partitioned by the value of a column. Here is what I'm doing.
Will it swallow all the memory and fail, or will it run in a kind of streaming fashion?
How can I limit the amount of memory the app can use when running inside Docker, and make sure it does not run out?
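For illustration, a repartitioning job of this kind might look roughly like the following sketch. The bucket, paths, and column names are invented, registering the S3 object store with the SessionContext is omitted, and the exact read APIs and COPY option names vary between DataFusion versions.

```rust
use datafusion::datasource::file_format::file_compression_type::FileCompressionType;
use datafusion::error::Result;
use datafusion::prelude::*;

// Sketch only: names and paths are hypothetical; an S3 object store must be
// registered with the context separately; APIs vary between versions.
async fn repartition(ctx: &SessionContext) -> Result<()> {
    // Source: zstd-compressed NDJSON files on S3.
    let read_options = NdJsonReadOptions::default()
        .file_extension(".ndjson.zst")
        .file_compression_type(FileCompressionType::ZSTD);
    ctx.register_json("source", "s3://my-bucket/input/", read_options)
        .await?;

    // Rewrite the data so each distinct value of `category` lands in its own
    // directory of compressed output files.
    ctx.sql(
        "COPY (SELECT * FROM source) TO 's3://my-bucket/output/' \
         STORED AS JSON PARTITIONED BY (category) \
         OPTIONS ('format.compression' 'zstd')",
    )
    .await?
    .collect()
    .await?;

    Ok(())
}
```

The `ctx` here would come from the memory-limited `create_context()` shown earlier in the thread, so the configured pool limit is meant to apply to this query as well.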