This repository has been archived by the owner on Jan 9, 2020. It is now read-only.

Added service name as prefix to executor pods #14

Merged: 2 commits into k8s-support-alternate-incremental on Jan 13, 2017

Conversation

@foxish (Member) commented on Jan 12, 2017

Minor change to identify, by name, the executor pods created by a particular driver.
/cc @mccheah @ash211

@@ -155,7 +155,7 @@ private[spark] class KubernetesClusterSchedulerBackend(
   private def allocateNewExecutorPod(): (String, Pod) = {
     val executorKubernetesId = UUID.randomUUID().toString.replaceAll("-", "")
     val executorId = EXECUTOR_ID_COUNTER.incrementAndGet().toString
-    val name = s"exec$executorKubernetesId"
+    val name = s"$kubernetesDriverServiceName-exec$executorKubernetesId"
A reviewer commented:
I ran into name length constraints before while trying to figure out the name. But if this works then I'm fine with this change.

@foxish (Member, Author) replied:

That is a good point. I'll check the constraints we impose and see if some truncation is needed. IIRC, it's the service names that typically complain about length because they're involved in DNS domain creation.
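The truncation foxish alludes to could look something like the following standalone sketch. The 253-character cap comes from Kubernetes pod names being DNS subdomains (service names are DNS labels, capped at 63); the object and helper names here are illustrative, not from the PR:

```scala
import java.util.UUID

object PodNameBudget {
  // Kubernetes pod names are DNS subdomains: at most 253 characters.
  val MaxPodNameLength = 253

  // Hypothetical helper: keep the unique "exec<uuid>" suffix intact and
  // trim the service-name prefix if the combined name would be too long.
  def executorPodName(serviceName: String): String = {
    val uuid = UUID.randomUUID().toString.replaceAll("-", "") // 32 chars
    val suffix = s"-exec$uuid"
    val prefix = serviceName.take(MaxPodNameLength - suffix.length)
    s"$prefix$suffix"
  }
}
```

Since Kubernetes already caps service names at 63 characters, the prefix is never actually truncated here; the guard only matters if the prefix could ever exceed the remaining budget.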

@@ -155,7 +155,7 @@ private[spark] class KubernetesClusterSchedulerBackend(
   private def allocateNewExecutorPod(): (String, Pod) = {
     val executorKubernetesId = UUID.randomUUID().toString.replaceAll("-", "")
     val executorId = EXECUTOR_ID_COUNTER.incrementAndGet().toString
-    val name = s"exec$executorKubernetesId"
+    val name = s"$kubernetesDriverServiceName-exec$executorKubernetesId"
@ash211 commented on Jan 12, 2017:
is there a common convention in k8s for pod names? kebab-case? I might suggest we make this slightly more verbose:

s"$kubernetesDriverServiceName-executor-$executorKubernetesId"

Reply:

I believe the common convention is kebab-case.

+1 on making the pod name verbose.
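For reference, kebab-case is the convention because Kubernetes object names must be valid DNS names: lowercase alphanumerics and '-', starting and ending with an alphanumeric. A small sketch of such a check (the object and method names are my own, not from the PR):

```scala
object KebabCheck {
  // RFC 1123 DNS-label shape: lowercase alphanumerics and '-',
  // beginning and ending with an alphanumeric character.
  private val Rfc1123 = "[a-z0-9]([-a-z0-9]*[a-z0-9])?".r

  def isValidName(s: String): Boolean =
    s.nonEmpty && Rfc1123.pattern.matcher(s).matches()
}
```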

@foxish mentioned this pull request on Jan 13, 2017
@ash211 commented on Jan 13, 2017

@foxish are you good with modifying that pod name to have the fuller -executor- and then merging?

Any news on the pod name length constraints?

@foxish (Member, Author) commented on Jan 13, 2017

The limits are 253 characters for pod names and 63 characters for service names, so we should be well within them. Will make the change to kebab-case shortly.
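That conclusion checks out with quick arithmetic: a maximal 63-character service name, plus the "-executor-" separator, plus a 32-character dash-stripped UUID still sits well under the 253-character pod-name limit. A sketch (the object and value names are illustrative):

```scala
object LengthBudget {
  val maxServiceName = 63                  // DNS-label limit for Services
  val separator = "-executor-".length      // 10 characters
  val uuidNoDashes = 32                    // 36-char UUID minus 4 dashes

  // Worst-case executor pod name length under the proposed scheme.
  val worstCase = maxServiceName + separator + uuidNoDashes
}
```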

@ash211 ash211 merged commit 98b5edc into k8s-support-alternate-incremental Jan 13, 2017
@ash211 ash211 deleted the plumb-exec-pod-names branch on January 13, 2017 at 23:05
ash211 pushed a commit that referenced this pull request Feb 8, 2017
* Added service name as prefix to executor pods to be able to tell them apart from kubectl output

* Addressed comments
ash211 pushed a commit that referenced this pull request Mar 8, 2017
* Added service name as prefix to executor pods to be able to tell them apart from kubectl output

* Addressed comments
foxish added a commit that referenced this pull request Jul 24, 2017
* Added service name as prefix to executor pods to be able to tell them apart from kubectl output

* Addressed comments
puneetloya pushed a commit to puneetloya/spark that referenced this pull request Mar 11, 2019
* Added service name as prefix to executor pods to be able to tell them apart from kubectl output

* Addressed comments
ifilonenko referenced this pull request in bloomberg/apache-spark-on-k8s Mar 13, 2019
* Added service name as prefix to executor pods to be able to tell them apart from kubectl output

* Addressed comments
4 participants