[SPARK-17738][TEST] Fix flaky test in ColumnTypeSuite
## What changes were proposed in this pull request?

The default buffer size is not big enough for a randomly generated MapType.

## How was this patch tested?

Ran the tests 100 times; they never failed (they failed 8 times before the patch).

Author: Davies Liu <davies@databricks.com>

Closes #15395 from davies/flaky_map.
Davies Liu authored and zsxwing committed Oct 11, 2016
1 parent 03c4020 commit d5ec4a3
Showing 1 changed file with 4 additions and 3 deletions.
```diff
@@ -101,14 +101,15 @@ class ColumnTypeSuite extends SparkFunSuite with Logging {

   def testColumnType[JvmType](columnType: ColumnType[JvmType]): Unit = {

-    val buffer = ByteBuffer.allocate(DEFAULT_BUFFER_SIZE).order(ByteOrder.nativeOrder())
     val proj = UnsafeProjection.create(Array[DataType](columnType.dataType))
     val converter = CatalystTypeConverters.createToScalaConverter(columnType.dataType)
     val seq = (0 until 4).map(_ => proj(makeRandomRow(columnType)).copy())
+    val totalSize = seq.map(_.getSizeInBytes).sum
+    val bufferSize = Math.max(DEFAULT_BUFFER_SIZE, totalSize)

     test(s"$columnType append/extract") {
-      buffer.rewind()
-      seq.foreach(columnType.append(_, 0, buffer))
+      val buffer = ByteBuffer.allocate(bufferSize).order(ByteOrder.nativeOrder())
+      seq.foreach(r => columnType.append(columnType.getField(r, 0), buffer))

       buffer.rewind()
       seq.foreach { row =>
```
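The core of the fix is sizing the buffer from the actual serialized rows (`Math.max(DEFAULT_BUFFER_SIZE, totalSize)`) instead of a fixed default, so large random MapType rows cannot overflow it. A minimal standalone sketch of that idea, using plain `java.nio` with a hypothetical small `DEFAULT_BUFFER_SIZE` and byte arrays standing in for serialized rows (not Spark's actual `ColumnType` code):

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BufferSizing {
    // Hypothetical small default, standing in for the suite's DEFAULT_BUFFER_SIZE.
    static final int DEFAULT_BUFFER_SIZE = 16;

    // Returns true if appending the rows into a fixed-capacity buffer overflows.
    static boolean overflowsFixed(byte[][] rows, int capacity) {
        ByteBuffer buffer = ByteBuffer.allocate(capacity).order(ByteOrder.nativeOrder());
        try {
            for (byte[] r : rows) buffer.put(r);
            return false;
        } catch (BufferOverflowException e) {
            return true;
        }
    }

    // Sizes the buffer from the actual payload, as the patch does, then appends.
    static int appendSized(byte[][] rows) {
        int totalSize = 0;
        for (byte[] r : rows) totalSize += r.length;
        ByteBuffer buffer = ByteBuffer
                .allocate(Math.max(DEFAULT_BUFFER_SIZE, totalSize))
                .order(ByteOrder.nativeOrder());
        for (byte[] r : rows) buffer.put(r);
        return buffer.position(); // number of bytes written
    }

    public static void main(String[] args) {
        byte[][] rows = { new byte[10], new byte[12] }; // 22 bytes, more than the default
        System.out.println(overflowsFixed(rows, DEFAULT_BUFFER_SIZE)); // true
        System.out.println(appendSized(rows)); // 22
    }
}
```

With the fixed default capacity the append throws `BufferOverflowException`; sizing from the payload always succeeds, which mirrors why the flaky test stopped failing.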
