
[SPARK-4183] Enable NettyBlockTransferService by default
aarondav committed Nov 1, 2014
1 parent ee29ef3 commit bb981cc
Showing 2 changed files with 11 additions and 1 deletion.
2 changes: 1 addition & 1 deletion in core/src/main/scala/org/apache/spark/SparkEnv.scala

@@ -274,7 +274,7 @@ object SparkEnv extends Logging {
     val shuffleMemoryManager = new ShuffleMemoryManager(conf)
 
     val blockTransferService =
-      conf.get("spark.shuffle.blockTransferService", "nio").toLowerCase match {
+      conf.get("spark.shuffle.blockTransferService", "netty").toLowerCase match {
         case "netty" =>
           new NettyBlockTransferService(conf)
         case "nio" =>
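The one-line change above only swaps the default value passed to `conf.get`. A standalone sketch of that selection logic (simplified: a plain `Map` stands in for `SparkConf`, and strings stand in for the transfer-service classes; the function name here is illustrative, not from the commit):

```scala
// Sketch of the default-selection logic in SparkEnv after this commit:
// when the key is unset, the netty implementation is chosen.
def chooseTransferService(conf: Map[String, String]): String =
  conf.getOrElse("spark.shuffle.blockTransferService", "netty").toLowerCase match {
    case "netty" => "NettyBlockTransferService"
    case "nio"   => "NioBlockTransferService"
  }
```

With no value set, the lookup now resolves to `netty`; an explicit `nio` setting (any case, thanks to `toLowerCase`) still selects the old implementation.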
10 changes: 10 additions & 0 deletions in docs/configuration.md

@@ -359,6 +359,16 @@ Apart from these, the following properties are also available, and may be useful
     map-side aggregation and there are at most this many reduce partitions.
   </td>
 </tr>
+<tr>
+  <td><code>spark.shuffle.blockTransferService</code></td>
+  <td>netty</td>
+  <td>
+    Implementation to use for transferring shuffle and cached blocks between executors. There
+    are two implementations available: <code>netty</code> and <code>nio</code>. Netty-based
+    block transfer is intended to be simpler but equally efficient and is the default option
+    starting in 1.2.
+  </td>
+</tr>
 </table>
 
 #### Spark UI
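Since the documented default flips to `netty`, users who hit a regression can opt back into the previous behavior with the same property. A configuration sketch, using the `spark-defaults.conf` properties-file form (the property name and both legal values come from the docs table above):

```
# Revert to the pre-1.2 NIO-based block transfer service
spark.shuffle.blockTransferService  nio
```

The same key can equally be set programmatically on a `SparkConf` or via `--conf` on spark-submit; it is read once in `SparkEnv` when the block transfer service is constructed.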
