
[SPARK-30090][SHELL] Adapt Spark REPL to Scala 2.13 #28545

Closed
wants to merge 3 commits

Conversation

karolchmist
Contributor

What changes were proposed in this pull request?

This is an attempt to adapt the Spark REPL to Scala 2.13.

It is based on a scala-2.13 branch made by @smarter.

I had to set the Scala version to 2.13 in some places, and to adapt some other modules, before I could start working on the REPL itself. These are separate commits on the branch that would probably be fixed beforehand, and thus dropped before this PR is merged.

I couldn't find a way to run the initialization code with the existing REPL classes in Scala 2.13.2, so I modified the Scala REPL to make it work. With this modification I managed to run the Spark shell, and the unit tests pass, which is good news.

The bad news is that it requires an upstream change in Scala, which must be accepted first. I'd be happy to change it if someone points out a way to do it differently. If not, I'd propose a PR in Scala to introduce ILoop.internalReplAutorunCode.
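
For illustration, a minimal sketch of the shape such a hook could take. The trait and object names here are hypothetical, as is the wiring; only the hook name and Spark's repl.Main come from this PR:

// Hypothetical sketch only: the real hook would live on Scala's ILoop.
trait ReplAutorun {
  // Lines the REPL runs automatically before showing the first prompt.
  def internalReplAutorunCode(): Seq[String]
}

// Spark's side would then simply supply its initialization commands.
// (Simplified; Spark's real init commands are more elaborate.)
object SparkReplInit extends ReplAutorun {
  def internalReplAutorunCode(): Seq[String] = Seq(
    "@transient val spark = org.apache.spark.repl.Main.createSparkSession()",
    "@transient val sc = spark.sparkContext"
  )
}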

Why are the changes needed?

The Scala REPL changed quite a lot, so the current version of the Spark REPL needs to be adapted.

Does this PR introduce any user-facing change?

In the previous version of SparkILoop, a lot of Scala's ILoop code was overridden and duplicated to make the welcome message a bit more pleasant. In this PR, the message appears in a slightly different order, but it's still acceptable IMHO.

Before this PR:

20/05/15 15:32:39 WARN Utils: Your hostname, hermes resolves to a loopback address: 127.0.1.1; using 192.168.1.28 instead (on interface enp0s31f6)
20/05/15 15:32:39 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/05/15 15:32:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
20/05/15 15:32:45 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
Spark context Web UI available at http://192.168.1.28:4041
Spark context available as 'sc' (master = local[*], app id = local-1589549565502).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.0.1-SNAPSHOT
      /_/
         
Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_242)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 

With this PR:

20/05/15 15:32:15 WARN Utils: Your hostname, hermes resolves to a loopback address: 127.0.1.1; using 192.168.1.28 instead (on interface enp0s31f6)
20/05/15 15:32:15 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/05/15 15:32:15 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.0.0-SNAPSHOT
      /_/
         
Using Scala version 2.13.2-20200422-211118-706ef1b (OpenJDK 64-Bit Server VM, Java 1.8.0_242)
Type in expressions to have them evaluated.
Type :help for more information.
Spark context Web UI available at http://192.168.1.28:4040
Spark context available as 'sc' (master = local[*], app id = local-1589549541259).
Spark session available as 'spark'.

scala> 

The welcome message is still an improvement over the one from the original ticket, albeit in a different order. As a bonus, some fragile code duplication was removed.

How was this patch tested?

Existing tests pass in the repl module. The REPL runs in a terminal, and the following code executes correctly:

scala> spark.range(1000 * 1000 * 1000).count()
val res0: Long = 1000000000                                                     

@srowen
Member

srowen commented May 15, 2020

Yea, this is fine as a test, but in the end we will need to make the .toSeq / .toMap changes first, I think, to get it close to compilable. (And we need a new profile rather than modifying _2.12 -> _2.13, etc.) Thanks! Please test away.

I was thinking we'd get 3.0 out the door before returning to the 2.13 work, but it's OK to start on the last few big chunky changes. I have a branch for .toSeq somewhere, but it's already out of date.
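
For instance, the kind of change meant here, in a standalone sketch (not Spark code): in 2.13, scala.Seq is an alias for immutable.Seq, so mutable collections no longer conform to it and need an explicit conversion.

import scala.collection.mutable.ArrayBuffer

object ToSeqDemo extends App {
  // In Scala 2.13, Seq means scala.collection.immutable.Seq.
  def takesSeq(xs: Seq[Int]): Int = xs.sum

  val buf = ArrayBuffer(1, 2, 3)
  // takesSeq(buf)             // compiled under 2.12 (Seq was collection.Seq),
  //                           // but fails to type-check under 2.13
  println(takesSeq(buf.toSeq)) // the explicit conversion works under both
}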

@srowen
Member

srowen commented May 15, 2020

(I'd ask it to test, but there is no 2.13 build available, so it would just fail.)

@SethTisue

SethTisue commented May 15, 2020

If not, I'd propose a PR in Scala to introduce ILoop.internalReplAutorunCode

Patch LGTM. We expect to roll Scala 2.13.3 in about two weeks, so there's an opportunity to get the change in now. (2.13.4 will likely follow circa early August, though our release scheduling is malleable depending on what issues are reported. In any case, we publish nightlies, so development work needn't wait on our releases.)

@retronym

I couldn't find a way to run the initialization code with existing REPL classes in Scala 2.13.2

Did you try writing the auto-run script to a temporary file and calling System.setProperty("scala.repl.autoruncode", "/tmp/path/to/script")?
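
For illustration, a minimal sketch of that suggestion; the scala.repl.autoruncode property comes from the comment above, while the init commands and the AutorunSetup name are placeholders:

import java.nio.charset.StandardCharsets
import java.nio.file.Files

object AutorunSetup {
  // Placeholder init commands; Spark's real ones are more elaborate.
  private val initCommands =
    """@transient val spark = org.apache.spark.repl.Main.createSparkSession()
      |@transient val sc = spark.sparkContext
      |""".stripMargin

  // Write the commands to a temp file and point the stock REPL at it.
  def install(): Unit = {
    val script = Files.createTempFile("spark-repl-init", ".scala")
    script.toFile.deleteOnExit() // avoid piling up temp files across runs
    Files.write(script, initCommands.getBytes(StandardCharsets.UTF_8))
    System.setProperty("scala.repl.autoruncode", script.toString)
  }
}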

@karolchmist
Contributor Author

I couldn't find a way to run the initialization code with existing REPL classes in Scala 2.13.2

Did you try writing the auto-run script to a temporary file and calling System.setProperty("scala.repl.autoruncode", "/tmp/path/to/script")?

I considered it, but then a new temp file would be created every time the REPL is started... It would probably work, though.

Also, I thought that maybe this setting should be left to the user.

@dongjoon-hyun
Member

Hi, @karolchmist. Thank you for your contribution.
Apache Spark 3.0.0 is released. Can we resume this work for Apache Spark 3.1.0 (December 2020)?

@karolchmist
Contributor Author

Hey @dongjoon-hyun, I'd gladly resume it. Is there a branch I should rebase my work on? My current branch is based on smarter/scala-2.13, which contains fixes from @smarter, but I'm not sure it's an "official" Scala 2.13 branch.

BTW, Scala 2.13.3 was just released, containing the proposed change to make this PR work.

@srowen
Member

srowen commented Jun 29, 2020

You can just add more commits to the branch in this PR, if possible; it's already based on apache/master, which is what we need here. You can open a new PR too. It would be great if you can get the 2.13 REPL working, and I can review.

@wjoel

wjoel commented Aug 26, 2020

@karolchmist is this something you'd still be willing to continue? It would be fantastic to have support for Scala 2.13 in Spark 3.1.0!

@srowen
Member

srowen commented Aug 26, 2020

Yeah, we are pretty close to done otherwise - it should all compile, and tests pass up through the REPL at least. It should be open for anyone to take a run at it.

@karolchmist
Contributor Author

I'm on it!

@karolchmist
Contributor Author

karolchmist commented Aug 27, 2020

I can't build master with scala-2.13... (It builds with 2.12.) Any ideas why? Maybe there should be a CI job for Scala 2.13?

➜  spark git:(master) ✗ ./build/mvn -Pscala-2.13 compile
Using `mvn` from path: /home/karol/.nix-profile/bin/mvn
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO] 
[INFO] Spark Project Parent POM                                           [pom]
[INFO] Spark Project Tags                                                 [jar]
[INFO] Spark Project Sketch                                               [jar]
[INFO] Spark Project Local DB                                             [jar]
[INFO] Spark Project Networking                                           [jar]
[INFO] Spark Project Shuffle Streaming Service                            [jar]
[INFO] Spark Project Unsafe                                               [jar]
[INFO] Spark Project Launcher                                             [jar]
[INFO] Spark Project Core                                                 [jar]
[INFO] Spark Project ML Local Library                                     [jar]
[INFO] Spark Project GraphX                                               [jar]
[INFO] Spark Project Streaming                                            [jar]
[INFO] Spark Project Catalyst                                             [jar]
[INFO] Spark Project SQL                                                  [jar]
[INFO] Spark Project ML Library                                           [jar]
[INFO] Spark Project Tools                                                [jar]
[INFO] Spark Project Hive                                                 [jar]
[INFO] Spark Project REPL                                                 [jar]
[INFO] Spark Project Assembly                                             [pom]
[INFO] Kafka 0.10+ Token Provider for Streaming                           [jar]
[INFO] Spark Integration for Kafka 0.10                                   [jar]
[INFO] Kafka 0.10+ Source for Structured Streaming                        [jar]
[INFO] Spark Project Examples                                             [jar]
[INFO] Spark Integration for Kafka 0.10 Assembly                          [jar]
[INFO] Spark Avro                                                         [jar]
[INFO] 
[INFO] -----------------< org.apache.spark:spark-parent_2.13 >-----------------
[INFO] Building Spark Project Parent POM 3.1.0-SNAPSHOT                  [1/25]
[INFO] --------------------------------[ pom ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-versions) @ spark-parent_2.13 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-duplicate-dependencies) @ spark-parent_2.13 ---
[INFO] 
[INFO] --- mvn-scalafmt_2.13:1.0.3:format (default) @ spark-parent_2.13 ---
[WARNING] format.skipSources set, ignoring main directories
[WARNING] format.skipTestSources set, ignoring validateOnly directories
[WARNING] No sources specified, skipping formatting
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:add-source (eclipse-add-source) @ spark-parent_2.13 ---
[INFO] Add Source directory: /home/karol/workspace/open-source/spark/src/main/scala
[INFO] Add Test Source directory: /home/karol/workspace/open-source/spark/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (default-cli) @ spark-parent_2.13 ---
[INFO] Dependencies classpath:
/home/karol/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-parent_2.13 ---
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:compile (scala-compile-first) @ spark-parent_2.13 ---
[INFO] compile in 0.0 s
[INFO] No sources to compile
[INFO] 
[INFO] ------------------< org.apache.spark:spark-tags_2.13 >------------------
[INFO] Building Spark Project Tags 3.1.0-SNAPSHOT                        [2/25]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-versions) @ spark-tags_2.13 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-duplicate-dependencies) @ spark-tags_2.13 ---
[INFO] 
[INFO] --- mvn-scalafmt_2.13:1.0.3:format (default) @ spark-tags_2.13 ---
[WARNING] format.skipSources set, ignoring main directories
[WARNING] format.skipTestSources set, ignoring validateOnly directories
[WARNING] No sources specified, skipping formatting
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:add-source (eclipse-add-source) @ spark-tags_2.13 ---
[INFO] Add Source directory: /home/karol/workspace/open-source/spark/common/tags/src/main/scala
[INFO] Add Test Source directory: /home/karol/workspace/open-source/spark/common/tags/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (default-cli) @ spark-tags_2.13 ---
[INFO] Dependencies classpath:
/home/karol/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/karol/.m2/repository/org/scala-lang/scala-library/2.13.3/scala-library-2.13.3.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-tags_2.13 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-tags_2.13 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/karol/workspace/open-source/spark/common/tags/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ spark-tags_2.13 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:compile (scala-compile-first) @ spark-tags_2.13 ---
[INFO] Using incremental compilation using Mixed compile order
[INFO] Compiler bridge file: /home/karol/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.13-1.3.1-bin_2.13.3__52.0-1.3.1_20191012T045515.jar
[INFO] compile in 1.5 s
[INFO] 
[INFO] -----------------< org.apache.spark:spark-sketch_2.13 >-----------------
[INFO] Building Spark Project Sketch 3.1.0-SNAPSHOT                      [3/25]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Spark Project Parent POM 3.1.0-SNAPSHOT:
[INFO] 
[INFO] Spark Project Parent POM ........................... SUCCESS [  1.077 s]
[INFO] Spark Project Tags ................................. SUCCESS [  1.614 s]
[INFO] Spark Project Sketch ............................... FAILURE [  0.016 s]
[INFO] Spark Project Local DB ............................. SKIPPED
[INFO] Spark Project Networking ........................... SKIPPED
[INFO] Spark Project Shuffle Streaming Service ............ SKIPPED
[INFO] Spark Project Unsafe ............................... SKIPPED
[INFO] Spark Project Launcher ............................. SKIPPED
[INFO] Spark Project Core ................................. SKIPPED
[INFO] Spark Project ML Local Library ..................... SKIPPED
[INFO] Spark Project GraphX ............................... SKIPPED
[INFO] Spark Project Streaming ............................ SKIPPED
[INFO] Spark Project Catalyst ............................. SKIPPED
[INFO] Spark Project SQL .................................. SKIPPED
[INFO] Spark Project ML Library ........................... SKIPPED
[INFO] Spark Project Tools ................................ SKIPPED
[INFO] Spark Project Hive ................................. SKIPPED
[INFO] Spark Project REPL ................................. SKIPPED
[INFO] Spark Project Assembly ............................. SKIPPED
[INFO] Kafka 0.10+ Token Provider for Streaming ........... SKIPPED
[INFO] Spark Integration for Kafka 0.10 ................... SKIPPED
[INFO] Kafka 0.10+ Source for Structured Streaming ........ SKIPPED
[INFO] Spark Project Examples ............................. SKIPPED
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SKIPPED
[INFO] Spark Avro ......................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  3.429 s
[INFO] Finished at: 2020-08-27T10:38:33+02:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project spark-sketch_2.13: Could not resolve dependencies for project org.apache.spark:spark-sketch_2.13:jar:3.1.0-SNAPSHOT: Failure to find org.apache.spark:spark-tags_2.13:jar:tests:3.1.0-SNAPSHOT in https://repository.apache.org/snapshots was cached in the local repository, resolution will not be reattempted until the update interval of apache.snapshots has elapsed or updates are forced -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <args> -rf :spark-sketch_2.13

@srowen
Member

srowen commented Aug 27, 2020

Try ... -DskipTests clean install
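
Combined with the earlier invocation, that would presumably be:

./build/mvn -Pscala-2.13 -DskipTests clean install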

@karolchmist
Contributor Author

Thanks @srowen, that worked better, but now it fails in spark-sql. I can try to fix it if no one else is working on it...

...
[INFO] --- scala-maven-plugin:4.3.0:compile (scala-compile-first) @ spark-sql_2.13 ---
[INFO] Using incremental compilation using Mixed compile order
[INFO] Compiler bridge file: /home/karol/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.13-1.3.1-bin_2.13.3__52.0-1.3.1_20191012T045515.jar
[INFO] Compiling 473 Scala sources and 59 Java sources to /home/karol/workspace/open-source/spark/sql/core/target/scala-2.13/classes ...
[ERROR] [Error] /home/karol/workspace/open-source/spark/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala:121: value += is not a member of org.apache.spark.sql.catalyst.util.CaseInsensitiveMap[String]
  Expression does not convert to assignment because:
    type mismatch;
     found   : scala.collection.immutable.Map[String,String]
     required: org.apache.spark.sql.catalyst.util.CaseInsensitiveMap[String]
    expansion: this.extraOptions = this.extraOptions.+(key.$minus$greater(value))
[ERROR] [Error] /home/karol/workspace/open-source/spark/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala:132: value += is not a member of org.apache.spark.sql.catalyst.util.CaseInsensitiveMap[String]
  Expression does not convert to assignment because:
    type mismatch;
     found   : scala.collection.immutable.Map[String,String]
     required: org.apache.spark.sql.catalyst.util.CaseInsensitiveMap[String]
    expansion: this.extraOptions = this.extraOptions.+(key.$minus$greater(value))
[ERROR] [Error] /home/karol/workspace/open-source/spark/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala:294: value += is not a member of org.apache.spark.sql.catalyst.util.CaseInsensitiveMap[String]
  Expression does not convert to assignment because:
    type mismatch;
     found   : scala.collection.immutable.Map[String,String]
     required: org.apache.spark.sql.catalyst.util.CaseInsensitiveMap[String]
    expansion: this.extraOptions = this.extraOptions.+("path".$minus$greater(path))
Error occurred in an application involving default arguments.
[ERROR] [Error] /home/karol/workspace/open-source/spark/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala:317: type mismatch;
 found   : Iterable[(String, String)]
 required: java.util.Map[String,String]
Error occurred in an application involving default arguments.
[INFO] [Info] : Iterable[(String, String)] <: java.util.Map[String,String]?
[INFO] [Info] : false
[ERROR] [Error] /home/karol/workspace/open-source/spark/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala:412: value += is not a member of org.apache.spark.sql.catalyst.util.CaseInsensitiveMap[String]
  Expression does not convert to assignment because:
    type mismatch;
     found   : scala.collection.immutable.Map[String,String]
     required: org.apache.spark.sql.catalyst.util.CaseInsensitiveMap[String]
    expansion: DataFrameWriter.this.extraOptions = DataFrameWriter.this.extraOptions.+(DataSourceUtils.PARTITIONING_COLUMNS_KEY.$minus$greater(DataSourceUtils.encodePartitioningColumns(columns)))
[ERROR] [Error] /home/karol/workspace/open-source/spark/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcFiltersBase.scala:85: type mismatch;
 found   : scala.collection.MapView[String,OrcFiltersBase.this.OrcPrimitiveField]
 required: Map[String,OrcFiltersBase.this.OrcPrimitiveField]
[ERROR] [Error] /home/karol/workspace/open-source/spark/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/FileDataSourceV2.scala:64: type mismatch;
 found   : Iterable[(String, String)]
 required: java.util.Map[String,String]
[INFO] [Info] : Iterable[(String, String)] <: java.util.Map[String,String]?
[INFO] [Info] : false
[ERROR] 7 errors found
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Spark Project Parent POM 3.1.0-SNAPSHOT:
[INFO] 
[INFO] Spark Project Parent POM ........................... SUCCESS [  3.160 s]
[INFO] Spark Project Tags ................................. SUCCESS [  4.082 s]
[INFO] Spark Project Sketch ............................... SUCCESS [  2.523 s]
[INFO] Spark Project Local DB ............................. SUCCESS [  3.184 s]
[INFO] Spark Project Networking ........................... SUCCESS [  5.766 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  5.206 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [  3.028 s]
[INFO] Spark Project Launcher ............................. SUCCESS [  3.517 s]
[INFO] Spark Project Core ................................. SUCCESS [02:02 min]
[INFO] Spark Project ML Local Library ..................... SUCCESS [ 23.898 s]
[INFO] Spark Project GraphX ............................... SUCCESS [ 30.828 s]
[INFO] Spark Project Streaming ............................ SUCCESS [ 56.213 s]
[INFO] Spark Project Catalyst ............................. SUCCESS [02:44 min]
[INFO] Spark Project SQL .................................. FAILURE [ 24.530 s]
[INFO] Spark Project ML Library ........................... SKIPPED
[INFO] Spark Project REPL ................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  07:33 min
[INFO] Finished at: 2020-08-27T15:42:43+02:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:4.3.0:compile (scala-compile-first) on project spark-sql_2.13: Execution scala-compile-first of goal net.alchim31.maven:scala-maven-plugin:4.3.0:compile failed.: CompileFailed -> [Help 1]

@srowen
Member

srowen commented Aug 27, 2020

Yeah, we're not tracking/enforcing 2.13 compilation in CI/CD yet, so it's possible that recent changes will occasionally break 2.13 compilation. It looks like this happened in SPARK-32364. You are welcome to open a follow-up PR for that JIRA to fix this - or I am happy to. I think the problem is that the map in question is now a CaseInsensitiveMap, which is immutable but is used with +=, which in Scala 2.12, I guess, converted to an assignment. Now we would have to unroll it to foo = foo + bar instead of foo += bar.
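
A toy sketch of the desugaring (OptionsMap is a hypothetical stand-in for CaseInsensitiveMap, not Spark code):

// Under 2.13, `+` on the wrapper effectively widens to a plain Map once the
// 2.12-era override no longer applies; this stand-in mimics that behavior.
final case class OptionsMap(underlying: Map[String, String]) {
  def +(kv: (String, String)): Map[String, String] = underlying + kv
}

object PlusEqualsDemo extends App {
  var extraOptions = OptionsMap(Map("k" -> "v"))

  // `extraOptions += ("path" -> "/tmp")` desugars to
  // `extraOptions = extraOptions + ("path" -> "/tmp")`; the right-hand side
  // is a Map[String, String], not an OptionsMap, so it fails to type-check.

  // The unrolled form re-wraps the widened result explicitly:
  extraOptions = OptionsMap(extraOptions + ("path" -> "/tmp"))
  println(extraOptions)
}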

@karolchmist
Contributor Author

I'll fix it in a new PR 👍

srowen pushed a commit that referenced this pull request Sep 2, 2020
### What changes were proposed in this pull request?

This is a follow-up of #29160. It allows the Spark SQL project to compile for Scala 2.13.

### Why are the changes needed?

It's needed for #28545

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

I compiled with Scala 2.13. It fails in the `Spark REPL` project, which will be fixed by #28545.

Closes #29584 from karolchmist/SPARK-32364-scala-2.13.

Authored-by: Karol Chmist <info+github@chmist.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
@srowen
Member

srowen commented Sep 2, 2020

@karolchmist your PR is merged; hope that helps unblock you

@karolchmist karolchmist marked this pull request as draft September 6, 2020 12:39
@karolchmist karolchmist force-pushed the scala-2.13-repl branch 2 times, most recently from 6727171 to b0842ee on September 7, 2020 11:14
@karolchmist karolchmist marked this pull request as ready for review September 9, 2020 14:17
@karolchmist karolchmist changed the title [WIP][SPARK-30090][SHELL] Adapt Spark REPL to Scala 2.13 [SPARK-30090][SHELL] Adapt Spark REPL to Scala 2.13 Sep 9, 2020
@karolchmist
Contributor Author

karolchmist commented Sep 10, 2020

Hello @srowen, the PR is ready to be reviewed.

I extracted the Scala 2.12/2.13-specific code into separate directories, as the interface of ILoop changed. I also had to extract some fragments of the tests for the same reason. I put them in new files with a 2 infix in the name, for example ReplSuite.scala -> Repl2Suite.scala, to avoid a name collision. I would gladly change the name if you have a better one...
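
A layout along these lines (illustrative paths, not necessarily the exact ones in this PR):

repl/src/main/scala/          shared sources
repl/src/main/scala-2.12/     2.12-specific code (old ILoop API)
repl/src/main/scala-2.13/     2.13-specific code (new ILoop API)
repl/src/test/scala-2.12/     version-specific tests, e.g. Repl2Suite.scala
repl/src/test/scala-2.13/     version-specific tests, e.g. Repl2Suite.scala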

Member

@srowen srowen left a comment


Looks good if it works!

@@ -86,31 +84,6 @@ class ReplSuite extends SparkFunSuite with BeforeAndAfterAll {
"Interpreter output contained '" + message + "':\n" + output)
}

test("propagation of local properties") {
Member


Is this test just obsolete now? Just want to understand where we might be taking away tests.

Contributor Author


I moved it to Repl2Suite (because it differs between Scala 2.12 and 2.13).

@srowen
Member

srowen commented Sep 10, 2020

Jenkins test this please

@SparkQA

SparkQA commented Sep 10, 2020

Test build #128527 has finished for PR 28545 at commit ae447fb.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@srowen
Member

srowen commented Sep 12, 2020

Looks good. As long as 2.12 still works, we can fix any other small issues with the REPL in 2.13 as we enable tests later in the 3.1.0 cycle. Thanks a lot! This was one of the few remaining pieces.

@srowen srowen closed this in 3be552c Sep 12, 2020
dongjoon-hyun pushed a commit that referenced this pull request Sep 26, 2023
Merge test cases from `SingletonRepl2Suite/Repl2Suite` back into `SingletonReplSuite/ReplSuite`

### What changes were proposed in this pull request?
This PR aims to merge test cases from `SingletonRepl2Suite/Repl2Suite` back into `SingletonReplSuite/ReplSuite` to reduce duplicate code.

### Why are the changes needed?
#28545 split the relevant test cases from `SingletonReplSuite/ReplSuite` into `SingletonRepl2Suite/Repl2Suite` to distinguish between the Scala 2.12 and Scala 2.13 versions of the tests.

Spark 4.0 no longer supports Scala 2.12, so they can be merged back into the original files to reduce duplicate code.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Pass GitHub Actions

### Was this patch authored or co-authored using generative AI tooling?
No

Closes #43104 from LuciferYang/SPARK-45318.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>