[SPARK-20343][BUILD] Avoid Unidoc build only if Hadoop 2.6 is explicitly set in SBT build #17669

Status: Closed · wants to merge 2 commits
12 changes: 10 additions & 2 deletions dev/run-tests.py
@@ -365,8 +365,16 @@ def build_spark_assembly_sbt(hadoop_version):
     print("[info] Building Spark assembly (w/Hive 1.2.1) using SBT with these arguments: ",
           " ".join(profiles_and_goals))
     exec_sbt(profiles_and_goals)
-    # Make sure that Java and Scala API documentation can be generated
-    build_spark_unidoc_sbt(hadoop_version)
+
+    # Note that we skip the Unidoc build only if Hadoop 2.6 is explicitly set in this SBT build.
+    # For an unknown reason, dependency resolution differs between SBT and Unidoc, and the
+    # documentation build fails on a specific machine and environment in Jenkins that we have
+    # been unable to reproduce. Please see SPARK-20343. This is a band-aid fix that should be
+    # removed in the future.
+    is_hadoop_version_2_6 = os.environ.get("AMPLAB_JENKINS_BUILD_PROFILE") == "hadoop2.6"
+    if not is_hadoop_version_2_6:
+        # Make sure that Java and Scala API documentation can be generated
+        build_spark_unidoc_sbt(hadoop_version)
 
 
 def build_apache_spark(build_tool, hadoop_version):
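Note that the guard above only consults the AMPLAB_JENKINS_BUILD_PROFILE environment variable that Jenkins exports; running dev/run-tests.py locally without that variable still exercises the Unidoc build. A minimal sketch of the same condition, expressed in Scala as a hypothetical standalone helper (not part of the Spark codebase), may make the intent easier to see:

// UnidocGate.scala -- hypothetical illustration of the guard added in dev/run-tests.py above.
object UnidocGate {
  // True when the Unidoc build should run, i.e. for every profile except
  // an explicitly requested "hadoop2.6".
  def shouldBuildUnidoc(env: Map[String, String] = sys.env): Boolean =
    !env.get("AMPLAB_JENKINS_BUILD_PROFILE").contains("hadoop2.6")

  def main(args: Array[String]): Unit =
    println(s"Run Unidoc? ${shouldBuildUnidoc()}")
}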
1 change: 0 additions & 1 deletion pom.xml
@@ -142,7 +142,6 @@
     <ivy.version>2.4.0</ivy.version>
     <oro.version>2.0.8</oro.version>
     <codahale.metrics.version>3.1.2</codahale.metrics.version>
-    <!-- Keep consistent with Avro vesion in SBT build for SPARK-20343 -->
     <avro.version>1.7.7</avro.version>
     <avro.mapred.classifier>hadoop2</avro.mapred.classifier>
     <jets3t.version>0.9.3</jets3t.version>
14 changes: 2 additions & 12 deletions project/SparkBuild.scala
@@ -318,8 +318,8 @@ object SparkBuild extends PomBuild {
     enable(MimaBuild.mimaSettings(sparkHome, x))(x)
   }
 
-  /* Generate and pick the spark build info from extra-resources and override a dependency */
-  enable(Core.settings ++ CoreDependencyOverrides.settings)(core)
+  /* Generate and pick the spark build info from extra-resources */
+  enable(Core.settings)(core)
 
   /* Unsafe settings */
   enable(Unsafe.settings)(unsafe)
@@ -443,16 +443,6 @@ object DockerIntegrationTests {
   )
 }
 
-/**
- * Overrides to work around sbt's dependency resolution being different from Maven's in Unidoc.
- *
- * Note that, this is a hack that should be removed in the future. See SPARK-20343
- */
-object CoreDependencyOverrides {
-  lazy val settings = Seq(
-    dependencyOverrides += "org.apache.avro" % "avro" % "1.7.7")
-}
-
 /**
  * Overrides to work around sbt's dependency resolution being different from Maven's.
  */
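For context on what is being removed: CoreDependencyOverrides used sbt's dependencyOverrides setting to pin Avro to 1.7.7 on the core module so that the SBT/Unidoc build would resolve the same Avro version as the Maven build. Below is a minimal, standalone build.sbt sketch of that mechanism; the project name and the hadoop-client example dependency are illustrative assumptions, not part of Spark's actual build.

// build.sbt -- illustrative sketch of the sbt mechanism the removed object relied on.
name := "dependency-override-example"

scalaVersion := "2.11.8"

// A regular dependency that pulls in Avro transitively (the transitive version
// may differ from the one the Maven build resolves).
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.5"

// dependencyOverrides forces the resolved Avro version without adding a new
// direct dependency; this is the exact line deleted from project/SparkBuild.scala.
dependencyOverrides += "org.apache.avro" % "avro" % "1.7.7"

With the override gone, sbt/Ivy falls back to its default latest-revision conflict resolution, which is also why the related pom.xml comment is dropped in the same change.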