[SPARK-3410] The priority of the shutdown hook for ApplicationMaster should not be an integer literal

I think the priority of the shutdown hook for ApplicationMaster needs to be kept higher than the priority of the shutdown hook for o.a.h.FileSystem, rather than relying on an integer literal that could silently break if FileSystem's priority ever changes.
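
For context, Hadoop's org.apache.hadoop.util.ShutdownHookManager runs registered hooks highest-priority first, and FileSystem registers its own hook (which closes cached filesystems) at FileSystem.SHUTDOWN_HOOK_PRIORITY. A minimal standalone sketch of the pattern follows; ShutdownHookExample, the cleanupHook body, and the +20 offset are illustrative assumptions, not Spark's actual code (the real change is in the diff below, which keeps a named constant and asserts the ordering):

    import org.apache.hadoop.fs.FileSystem
    import org.apache.hadoop.util.ShutdownHookManager

    object ShutdownHookExample {
      // Illustrative: derive the priority from FileSystem's constant instead of
      // hardcoding a literal, so the hook stays ahead of HDFS even if
      // FileSystem.SHUTDOWN_HOOK_PRIORITY changes in a future Hadoop release.
      val CleanupHookPriority: Int = FileSystem.SHUTDOWN_HOOK_PRIORITY + 20

      def main(args: Array[String]): Unit = {
        val cleanupHook = new Runnable {
          override def run(): Unit = {
            // Cleanup that may still need a working FileSystem,
            // e.g. deleting a staging directory on HDFS.
            println("running cleanup before FileSystem shutdown hooks")
          }
        }
        // Higher-priority hooks run first, so this executes before the hook
        // Hadoop registers at FileSystem.SHUTDOWN_HOOK_PRIORITY.
        ShutdownHookManager.get().addShutdownHook(cleanupHook, CleanupHookPriority)
      }
    }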

Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>

Closes #2283 from sarutak/SPARK-3410 and squashes the following commits:

1d44fef [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-3410
bd6cc53 [Kousuke Saruta] Modified style
ee6f1aa [Kousuke Saruta] Added constant "SHUTDOWN_HOOK_PRIORITY" to ApplicationMaster
54eb68f [Kousuke Saruta] Changed Shutdown hook priority to 20
2f0aee3 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-3410
4c5cb93 [Kousuke Saruta] Modified the priority for AM's shutdown hook
217d1a4 [Kousuke Saruta] Removed unused import statements
717aba2 [Kousuke Saruta] Modified ApplicationMaster to keep the priority of its shutdown hook higher than the priority of the shutdown hook for HDFS
sarutak authored and tgravescs committed Sep 15, 2014
1 parent f493f79 commit cc14644
Showing 1 changed file with 7 additions and 6 deletions.
@@ -21,12 +21,8 @@ import java.io.IOException
import java.net.Socket
import java.util.concurrent.atomic.AtomicReference

import scala.collection.JavaConversions._
import scala.util.Try

import akka.actor._
import akka.remote._
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.util.ShutdownHookManager
import org.apache.hadoop.yarn.api._
@@ -107,8 +103,11 @@ private[spark] class ApplicationMaster(args: ApplicationMasterArguments,
}
}
}
// Use priority 30 as it's higher than HDFS. It's the same priority MapReduce is using.
ShutdownHookManager.get().addShutdownHook(cleanupHook, 30)

// Use higher priority than FileSystem.
assert(ApplicationMaster.SHUTDOWN_HOOK_PRIORITY > FileSystem.SHUTDOWN_HOOK_PRIORITY)
ShutdownHookManager
.get().addShutdownHook(cleanupHook, ApplicationMaster.SHUTDOWN_HOOK_PRIORITY)

// Call this to force generation of secret so it gets populated into the
// Hadoop UGI. This has to happen before the startUserClass which does a
@@ -407,6 +406,8 @@ private[spark] class ApplicationMaster(args: ApplicationMasterArguments,

object ApplicationMaster extends Logging {

val SHUTDOWN_HOOK_PRIORITY: Int = 30

private var master: ApplicationMaster = _

def main(args: Array[String]) = {
