[SPARK-19659][Core]Disable spark.reducer.maxReqSizeShuffleToMem #18467
Conversation
cc @JoshRosen As it's impossible to safely revert #16989, I just changed the default value to Long.MaxValue.
@@ -529,14 +529,6 @@ Apart from these, the following properties are also available, and may be useful
</td>
</tr>
<tr>
<td><code>spark.reducer.maxReqSizeShuffleToMem</code></td>
don't document it as it's not a safe feature now.
Test build #78924 has finished for PR 18467 at commit
@@ -326,7 +326,7 @@ package object config {
.doc("The blocks of a shuffle request will be fetched to disk when size of the request is " +
"above this threshold. This is to avoid a giant request takes too much memory.")
.bytesConf(ByteUnit.BYTE)
.createWithDefaultString("200m")
Might want to put a .internal() here.
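To make the suggestion concrete, here is a sketch of what the adjusted config entry could look like with `.internal()` applied and the default raised to Long.MaxValue as described above. This is a hedged illustration following Spark's ConfigBuilder style seen in the diff, not the exact merged code; the val name shown is hypothetical.

```scala
// Hypothetical sketch of the adjusted entry in
// core/src/main/scala/org/apache/spark/internal/config/package.scala
private[spark] val REDUCER_MAX_REQ_SIZE_SHUFFLE_TO_MEM =
  ConfigBuilder("spark.reducer.maxReqSizeShuffleToMem")
    .internal()  // hide from user-facing docs, per the review suggestion
    .doc("The blocks of a shuffle request will be fetched to disk when size of the request is " +
      "above this threshold. This is to avoid a giant request takes too much memory.")
    .bytesConf(ByteUnit.BYTE)
    // Effectively disable shuffle-to-disk fetching by default; users can
    // still opt in by setting a lower value explicitly.
    .createWithDefault(Long.MaxValue)
```

With Long.MaxValue as the default, no request ever crosses the threshold, so the feature is off unless a user deliberately lowers it.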
LGTM overall. This seems like the cleanest fix for now to unblock the RC.
I do think this feature is valuable so we should see if there's a way to support it without requiring any backend changes to the shuffle service.
Let's make sure to update the "Fix Version" on the JIRA which added this feature so that it's clear that we've not shipped it in 2.2.x.
thanks, merging to master/2.2!
Disable spark.reducer.maxReqSizeShuffleToMem because it breaks the old shuffle service.

Credits to wangyum

Closes #18466

Jenkins

Author: Shixiong Zhu <shixiong@databricks.com>
Author: Yuming Wang <wgyumg@gmail.com>

Closes #18467 from zsxwing/SPARK-21253.

(cherry picked from commit 80f7ac3)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
@JoshRosen I'd argue that this feature is shipped in 2.2.x, as users can change the config value to enable it. Though this is kind of an experimental feature that breaks the old shuffle service.
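For illustration, the opt-in path mentioned above could look like this (a hedged sketch: the property name comes from this PR, and "200m" is simply the old default being restored; only safe when any external shuffle service in use supports fetch-to-disk):

```scala
import org.apache.spark.SparkConf

// Re-enable the experimental fetch-to-disk behavior by lowering the
// threshold from its new Long.MaxValue default back to, e.g., 200m.
val conf = new SparkConf()
  .set("spark.reducer.maxReqSizeShuffleToMem", "200m")
```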
Oops, @wangyum sorry I forgot to set the author name... will be more careful next time :)
Test build #78943 has finished for PR 18467 at commit
What changes were proposed in this pull request?
Disable spark.reducer.maxReqSizeShuffleToMem because it breaks the old shuffle service.
Credits to @wangyum
Closes #18466
How was this patch tested?
Jenkins