
Commit

Disable spark.reducer.maxReqSizeShuffleToMem
zsxwing committed Jun 29, 2017
1 parent bcae03f commit d49e3f1
Showing 2 changed files with 1 addition and 9 deletions.
@@ -326,7 +326,7 @@ package object config {
       .doc("The blocks of a shuffle request will be fetched to disk when size of the request is " +
         "above this threshold. This is to avoid a giant request takes too much memory.")
       .bytesConf(ByteUnit.BYTE)
-      .createWithDefaultString("200m")
+      .createWithDefault(Long.MaxValue)
 
   private[spark] val TASK_METRICS_TRACK_UPDATED_BLOCK_STATUSES =
     ConfigBuilder("spark.taskMetrics.trackUpdatedBlockStatuses")
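This hunk only flips the default: the configuration entry itself stays in place, so the fetch-to-disk behavior can still be opted into explicitly. A minimal sketch, assuming a standard Spark 2.x application submitted locally (the app name, local master, and the 200m value — taken from the old default — are illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: restore the pre-commit behavior by setting the key explicitly.
    // Shuffle requests larger than 200 MB would again be fetched to disk.
    val conf = new SparkConf()
      .setAppName("shuffle-fetch-to-disk-opt-in")  // illustrative app name
      .setMaster("local[*]")                        // local master for a self-contained example
      .set("spark.reducer.maxReqSizeShuffleToMem", "200m")

    val sc = new SparkContext(conf)
    // Without the .set above, the default after this commit is Long.MaxValue,
    // so shuffle blocks are effectively always fetched into memory.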
8 changes: 0 additions & 8 deletions docs/configuration.md
@@ -528,14 +528,6 @@ Apart from these, the following properties are also available, and may be useful
   By allowing it to limit the number of fetch requests, this scenario can be mitigated.
   </td>
 </tr>
-<tr>
-  <td><code>spark.reducer.maxReqSizeShuffleToMem</code></td>
-  <td>200m</td>
-  <td>
-    The blocks of a shuffle request will be fetched to disk when size of the request is above
-    this threshold. This is to avoid a giant request takes too much memory.
-  </td>
-</tr>
 <tr>
   <td><code>spark.shuffle.compress</code></td>
   <td>true</td>
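The removed table row documented the same threshold semantics as the doc string in the first hunk. As a rough illustration of what the change means at runtime (simplified pseudologic, not the actual shuffle fetch code in Spark):

    // Simplified sketch of the threshold check the doc string describes;
    // the real logic lives in Spark's shuffle block fetcher.
    def fetchDestination(requestSizeBytes: Long, maxReqSizeShuffleToMem: Long): String =
      if (requestSizeBytes > maxReqSizeShuffleToMem) "disk" else "memory"

    // Old default: a request over 200 MB spills to disk.
    fetchDestination(500L * 1024 * 1024, 200L * 1024 * 1024)  // "disk"

    // New default: Long.MaxValue is never exceeded, so fetches stay in memory.
    fetchDestination(500L * 1024 * 1024, Long.MaxValue)       // "memory"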
