Run tests with SBT instead of Maven #273
Currently in this repo we're running tests via Maven:
http://spark-k8s-jenkins.pepperdata.org:8080/job/PR-spark-k8s-unit-tests/
./build/mvn clean test -Pmesos -Pyarn -Phadoop-2.7 -Pkubernetes -pl core,resource-managers/kubernetes/core -am -Dtest=none -Dsuffixes='^org\.apache\.spark\.(?!SortShuffleSuite$|rdd\.LocalCheckpointSuite$|deploy\.SparkSubmitSuite$|deploy\.StandaloneDynamicAllocationSuite$).*'
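(In that invocation, -pl core,resource-managers/kubernetes/core -am restricts Maven to the core and Kubernetes modules plus the modules they depend on, -Dtest=none skips the JUnit tests, and the -Dsuffixes regex runs every org.apache.spark ScalaTest suite except the four it explicitly excludes.)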
In Apache Spark, by contrast, the tests are run via SBT:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/
https://github.com/apache/spark/blob/bbd163d589e7503c5cb150d934e7565b18a908f2/dev/run-tests.py#L527
[info] Running Spark tests using SBT with these arguments: -Phadoop-2.6 -Phive -Pyarn -Pmesos -Phive-thriftserver -Pkinesis-asl -Dtest.exclude.tags=org.apache.spark.tags.ExtendedYarnTest test
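Those SBT arguments are assembled by dev/run-tests.py from the module definitions in dev/sparktestsupport/modules.py, so getting our tests into an SBT run would mean registering a kubernetes module there and listing it in that file's all_modules. A minimal sketch of such an entry, assuming the SBT project is named kubernetes (the real wiring may differ):

# Hypothetical modules.py entry: the name, source paths, profile flag,
# and test goal are assumptions for illustration.
kubernetes = Module(
    name="kubernetes",
    dependencies=[],
    source_file_regexes=["resource-managers/kubernetes/"],
    build_profile_flags=["-Pkubernetes"],
    sbt_test_goals=["kubernetes/test"],
)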
There are subtle differences between SBT and Maven in how tests are run, largely around dependency resolution: Maven mediates version conflicts by picking the nearest definition in the dependency tree, while SBT's Ivy-based resolution defaults to the latest version, so the two builds can put different jars on the test classpath. For maximal compatibility with Apache we should be running with SBT.
This has been causing problems in the Palantir Spark repo, into which we cherry-pick these commits.
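Concretely, the SBT analogue of the Maven invocation above might look something like the line below. This is a sketch only: it assumes the kubernetes module is exposed to SBT under the project name kubernetes, and it does not reproduce the -Dsuffixes suite exclusions, which would have to be handled separately (for example via test tags):

./build/sbt -Pmesos -Pyarn -Phadoop-2.7 -Pkubernetes "core/test" "kubernetes/test"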
Comments

Kicked off a build at http://spark-k8s-jenkins.pepperdata.org:8080/job/PR-spark-k8s-unit-tests-SBT-TESTING to Spark's …

The question again is: what tests should we run? I think we decided in the original Maven build to run only the things related to Kubernetes, in order to cut down the build time by avoiding building things we don't touch.

Maybe the SBT tests would catch checkstyle issues like unused identifiers: #281 (comment)

We need to enable running SBT tests on the kubernetes module via palantir@de3e772. Also running …

ifilonenko pushed a commit to ifilonenko/spark that referenced this issue on Feb 26, 2019: …apache2 [NOSQUASH] Resync Apache