
[SPARK-51351][SS] Do not materialize the output in Python worker for TWS #50110

Closed
wants to merge 3 commits into apache:master from HeartSaVioR:SPARK-51351

Conversation

HeartSaVioR
Contributor

@HeartSaVioR commented Feb 28, 2025

What changes were proposed in this pull request?

This PR fixes the serializer logic in the PySpark version of TWS (transformWithState) so that it does NOT materialize the output entirely. Instead of building a list, the serializer now creates a generator, so that the output can be consumed lazily.

Why are the changes needed?

Without this PR, all the outputs are materialized and only written out once the JVM signals the Python worker that there is no further input (at task completion), which brings up two critical issues:

  • the downstream operator can only see outputs after the TWS operator has processed all inputs
  • all the outputs are materialized in memory in the Python worker, which could lead to memory issues

Does this PR introduce any user-facing change?

No.

How was this patch tested?

Existing UTs. I've also confirmed the following manually:

  • Before this PR, outputs only become available after all inputs have been processed
  • After this PR, outputs become available while inputs are still being processed

The change I have made to verify the fix manually:
HeartSaVioR@cd30db0

With this fix, if we call run_test() in testcode.py in PySpark, the log messages `Spark pulls the iterators` and `The data is being retrieved from Python worker` are interleaved. Without the fix, the log messages are strictly sequential: all `Spark pulls the iterators` messages come first, and then all `The data is being retrieved from Python worker` messages come later.
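
For intuition, here is a minimal, self-contained sketch (hypothetical illustration code, not the linked testcode.py) of the same list-vs-generator behavior: with a list, every "producing" line prints before any "consuming" line; with a generator, the two interleave, mirroring the log pattern described above.

```python
def inputs():
    for i in range(3):
        print(f"producing input {i}")  # stands in for "Spark pulls the iterators"
        yield i


def eager(it):
    # Materializes everything first, like the old list comprehension in dump_stream.
    return [x * 2 for x in it]


def lazy(it):
    # Yields one result at a time, like the generator-based fix.
    return (x * 2 for x in it)


print("-- eager (list): all producing lines appear before any consuming line --")
for out in eager(inputs()):
    print(f"consuming output {out}")  # stands in for "The data is being retrieved from Python worker"

print("-- lazy (generator): producing and consuming lines interleave --")
for out in lazy(inputs()):
    print(f"consuming output {out}")
```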

Was this patch authored or co-authored using generative AI tooling?

No.

Contributor

@bogao007 left a comment


LGTM

@HeartSaVioR
Contributor Author

I'm going to provide the branch I used to test this behavior. I'll update the PR description once it is ready.

@HeartSaVioR
Contributor Author

Also updated the PR description to explain how I figured out the issue and how I verified the fix.

@HeartSaVioR
Contributor Author

cc. @HyukjinKwon Would you mind taking a look? Thanks!

@@ -1223,8 +1223,17 @@ def dump_stream(self, iterator, stream):
Read through an iterator of (iterator of pandas DataFrame), serialize them to Arrow
RecordBatches, and write batches to stream.
"""
result = [(b, t) for x in iterator for y, t in x for b in y]
super().dump_stream(result, stream)

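For reference, here is a minimal sketch of what the generator-based replacement looks like (the inner helper name flatten_iterator and the exact code shape are illustrative; see the merged commit for the actual change):

```python
def dump_stream(self, iterator, stream):
    """
    Read through an iterator of (iterator of pandas DataFrame), serialize them to Arrow
    RecordBatches, and write batches to stream.
    """

    def flatten_iterator():
        # Same flattening as the removed list comprehension, but yields one
        # (batch, type) pair at a time so the parent serializer can write each
        # Arrow RecordBatch as soon as it is produced, instead of only after
        # the whole input iterator has been consumed.
        for x in iterator:
            for y, t in x:
                for b in y:
                    yield (b, t)

    super().dump_stream(flatten_iterator(), stream)
```

Because nothing is collected into an intermediate list, peak memory in the Python worker is roughly bounded by a single batch rather than by the entire output of the task.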
Contributor Author


nit: the linter asked me to add an empty line here - probably due to the inner function?

@HeartSaVioR
Contributor Author

https://github.com/HeartSaVioR/spark/actions/runs/13583292499/job/37988558904

One module failed due to heap space during compilation. I'm rerunning it.

@HeartSaVioR
Contributor Author

Thanks! Merging to master/4.0.

HeartSaVioR added a commit that referenced this pull request Feb 28, 2025
Closes #50110 from HeartSaVioR/SPARK-51351.

Authored-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
(cherry picked from commit 496fe7a)
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>