
misleading task number of groupByKey
"By default, this uses only 8 parallel tasks to do the grouping." is a big misleading. Please refer to #389 

The details are in the following code:
<code>
  def defaultPartitioner(rdd: RDD[_], others: RDD[_]*): Partitioner = {
    // Prefer the partitioner of the RDD with the most partitions, if any RDD has one.
    val bySize = (Seq(rdd) ++ others).sortBy(_.partitions.size).reverse
    for (r <- bySize if r.partitioner.isDefined) {
      return r.partitioner.get
    }
    // Otherwise fall back to spark.default.parallelism if it is set,
    // else to the partition count of the largest RDD.
    if (rdd.context.conf.contains("spark.default.parallelism")) {
      new HashPartitioner(rdd.context.defaultParallelism)
    } else {
      new HashPartitioner(bySize.head.partitions.size)
    }
  }
</code>
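
A minimal sketch of how that resolution order plays out, assuming a local master; the partition counts, app name, and object name are illustrative, and the pair-RDD import reflects the pre-1.3 style:
<code>
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}
import org.apache.spark.SparkContext._  // pair-RDD operations (pre-1.3 style)

object GroupByKeyTasksDemo {
  def main(args: Array[String]): Unit = {
    // spark.default.parallelism is deliberately left unset here.
    val sc = new SparkContext(new SparkConf().setMaster("local[4]").setAppName("demo"))

    // 16 partitions and no partitioner: groupByKey falls back to the
    // RDD's own partition count, so 16 tasks.
    val pairs = sc.parallelize(1 to 100, 16).map(x => (x % 10, x))
    println(pairs.groupByKey().partitions.size)  // 16

    // An RDD that already has a partitioner: that partitioner's
    // partition count wins, so 4 tasks.
    val prePartitioned = pairs.partitionBy(new HashPartitioner(4))
    println(prePartitioned.groupByKey().partitions.size)  // 4

    sc.stop()
  }
}
</code>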
CrazyJvm committed Apr 14, 2014
1 parent 037fe4d commit 1568336
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion docs/scala-programming-guide.md
@@ -189,7 +189,7 @@ The following tables list the transformations and actions currently supported (s
<tr>
<td> <b>groupByKey</b>([<i>numTasks</i>]) </td>
<td> When called on a dataset of (K, V) pairs, returns a dataset of (K, Seq[V]) pairs. <br />
- <b>Note:</b> By default, this uses only 8 parallel tasks to do the grouping. You can pass an optional <code>numTasks</code> argument to set a different number of tasks.
+ <b>Note:</b> By default, if the RDD already has a partitioner, the number of tasks is determined by that partitioner's number of partitions; otherwise it is taken from <code>spark.default.parallelism</code> if that property is set, and otherwise from the RDD's number of partitions. You can pass an optional <code>numTasks</code> argument to set a different number of tasks.
</td>
</tr>
<tr>
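
A hedged illustration of the two remaining knobs the revised note describes: the <code>spark.default.parallelism</code> fallback and the explicit <code>numTasks</code> argument. A sketch only; master, app name, and partition counts are illustrative:
<code>
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._  // pair-RDD operations (pre-1.3 style)

// With spark.default.parallelism set, an un-partitioned groupByKey()
// falls back to that value rather than the RDD's partition count.
val conf = new SparkConf()
  .setMaster("local[4]")
  .setAppName("numTasks-demo")
  .set("spark.default.parallelism", "8")
val sc = new SparkContext(conf)

val pairs = sc.parallelize(1 to 100, 16).map(x => (x % 10, x))
println(pairs.groupByKey().partitions.size)   // 8, from spark.default.parallelism

// An explicit numTasks argument overrides the default resolution entirely.
println(pairs.groupByKey(32).partitions.size) // 32

sc.stop()
</code>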
