[SPARK-21253][CORE] Disable spark.reducer.maxReqSizeShuffleToMem
Disable spark.reducer.maxReqSizeShuffleToMem because it breaks the old shuffle service.

Credits to wangyum

Closes #18466

Tested via Jenkins.

Author: Shixiong Zhu <shixiong@databricks.com>
Author: Yuming Wang <wgyumg@gmail.com>

Closes #18467 from zsxwing/SPARK-21253.

(cherry picked from commit 80f7ac3)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
zsxwing authored and cloud-fan committed Jun 30, 2017
1 parent 20cf511 commit 8de67e3
Showing 2 changed files with 2 additions and 9 deletions.
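
Before the diff itself, a note on mechanics: raising the default from 200m to Long.MaxValue means the size check that routes oversized shuffle fetch requests to disk can never fire in practice, so the stream-to-disk path that breaks old shuffle services stays off unless a user opts back in. A minimal sketch of that comparison, with hypothetical names (`ThresholdSketch` and `shouldFetchToDisk` are illustrative, not Spark internals):

```scala
// Illustrative sketch only: the names here are hypothetical, not Spark's
// actual shuffle internals.
object ThresholdSketch {
  // New default after this commit: effectively "never fetch to disk".
  val maxReqSizeShuffleToMem: Long = Long.MaxValue
  // Previous default: 200m, i.e. 200 * 1024 * 1024 bytes.
  val oldDefault: Long = 200L * 1024 * 1024

  // A fetch request is streamed to disk only when it exceeds the threshold.
  def shouldFetchToDisk(requestSizeBytes: Long, threshold: Long): Boolean =
    requestSizeBytes > threshold

  def main(args: Array[String]): Unit = {
    val giantRequest = 1L << 31 // ~2 GB shuffle fetch request
    println(shouldFetchToDisk(giantRequest, oldDefault))             // true
    println(shouldFetchToDisk(giantRequest, maxReqSizeShuffleToMem)) // false
  }
}
```

The same ~2 GB request crosses the old 200m threshold but can never cross Long.MaxValue, which is what "disable" means here.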
3 changes: 2 additions & 1 deletion core/src/main/scala/org/apache/spark/internal/config/package.scala

@@ -289,8 +289,9 @@ package object config {
 
   private[spark] val REDUCER_MAX_REQ_SIZE_SHUFFLE_TO_MEM =
     ConfigBuilder("spark.reducer.maxReqSizeShuffleToMem")
+      .internal()
       .doc("The blocks of a shuffle request will be fetched to disk when size of the request is " +
         "above this threshold. This is to avoid a giant request takes too much memory.")
       .bytesConf(ByteUnit.BYTE)
-      .createWithDefaultString("200m")
+      .createWithDefault(Long.MaxValue)
 }
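
`.internal()` marks the entry as an internal configuration not intended for end users, which is why the same commit also removes it from configuration.md below; it does not prevent setting the key. An operator whose external shuffle services are all new enough could still opt back in explicitly. A hedged sketch using the standard SparkConf API; the 200m value just mirrors the old default and is illustrative, not a recommendation:

```scala
import org.apache.spark.SparkConf

object OptBackIn {
  def main(args: Array[String]): Unit = {
    // Re-enable fetch-to-disk for giant shuffle requests by setting the
    // (now internal) key explicitly. Only safe when no old external
    // shuffle services are running in the cluster.
    val conf = new SparkConf()
      .set("spark.reducer.maxReqSizeShuffleToMem", "200m")
    println(conf.get("spark.reducer.maxReqSizeShuffleToMem")) // 200m
  }
}
```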
8 changes: 0 additions & 8 deletions docs/configuration.md

@@ -519,14 +519,6 @@ Apart from these, the following properties are also available, and may be useful
   By allowing it to limit the number of fetch requests, this scenario can be mitigated.
   </td>
 </tr>
-<tr>
-  <td><code>spark.reducer.maxReqSizeShuffleToMem</code></td>
-  <td>200m</td>
-  <td>
-    The blocks of a shuffle request will be fetched to disk when size of the request is above
-    this threshold. This is to avoid a giant request takes too much memory.
-  </td>
-</tr>
 <tr>
   <td><code>spark.shuffle.compress</code></td>
   <td>true</td>
