[SPARK-22648][K8s] Add documentation covering init containers and secrets #20059
Conversation
Thanks for this PR @liyinan926. Left some comments, PTAL. Reviewers: note that this documentation covers the as-yet unmerged PR #19954.
docs/running-on-kubernetes.md
Outdated
<td>
Add the secret named <code>SecretName</code> to the executor pod on the path specified in the value. For example,
<code>spark.kubernetes.executor.secrets.spark-secret=/etc/secrets</code>. Note that if an init-container is used,
the secret will also be add to the init-container in the executor pod.
s/add/added/
docs/running-on-kubernetes.md
Outdated
<td><code>spark.kubernetes.mountDependencies.mountTimeout</code></td>
<td>5 minutes</td>
<td>
Timeout before aborting the attempt to download and unpack dependencies from remote locations when initializing
Let's be more precise on what operation is happening.
"initializing" -> "when downloading and unpacking dependencies into"
docs/running-on-kubernetes.md
Outdated
<td><code>spark.kubernetes.driver.secrets.[SecretName]</code></td>
<td>(none)</td>
<td>
Add the secret named <code>SecretName</code> to the driver pod on the path specified in the value. For example,
secret -> Kubernetes Secret
Please also link to the Kubernetes secrets docs page.
docs/running-on-kubernetes.md
Outdated
<td>
Add the secret named <code>SecretName</code> to the driver pod on the path specified in the value. For example,
<code>spark.kubernetes.driver.secrets.spark-secret=/etc/secrets</code>. Note that if an init-container is used,
the secret will also be add to the init-container in the driver pod.
s/add/added/
docs/running-on-kubernetes.md
Outdated
<td><code>spark.kubernetes.initContainer.image</code></td>
<td>(none)</td>
<td>
Container image for the init-container of the driver and executors for downloading dependencies.
Link to init-container docs.
docs/running-on-kubernetes.md
Outdated
</td>
</tr>
<tr>
<td><code>spark.kubernetes.mountDependencies.mountTimeout</code></td>
We should rename this to `mountDependencies.timeout` to avoid reiterating "mount".
docs/running-on-kubernetes.md
Outdated
</td>
</tr>
<tr>
<td><code>spark.kubernetes.initContainer.maxThreadPoolSize</code></td>
I feel like this option name needs fixing. Maybe `spark.kubernetes.mountDependencies.maxThreadPoolSize`?
@foxish addressed your comments.
Thanks for addressing those. Sorry, missed a couple of minor ones.
docs/running-on-kubernetes.md
Outdated
@@ -120,6 +120,23 @@ by their appropriate remote URIs. Also, application dependencies can be pre-moun
Those dependencies can be added to the classpath by referencing them with `local://` URIs and/or setting the
`SPARK_EXTRA_CLASSPATH` environment variable in your Dockerfiles.

### Using Remote Dependencies
When there are application dependencies hosted in remote locations like HDFS or HTTP servers, the driver and executor pods need a Kubernetes [init-container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) for downloading the dependencies so the driver and executor containers can use them locally. This requires users to specify the container image for the init-container using the configuration property `spark.kubernetes.initContainer.image`. For example, users simply add the following option to the `spark-submit` command to specify the init-container image:
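As a concrete sketch of what this looks like in practice (the image names, host, and paths below are placeholders of my own, not values from the PR), such a submission might be:

```sh
bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
  --deploy-mode cluster \
  --name spark-example \
  --jars https://example.com/path/to/dep1.jar \
  --files hdfs://<namenode-host>:<port>/path/to/file1 \
  --conf spark.kubernetes.initContainer.image=<init-container-image> \
  https://example.com/path/to/app.jar
```

The `--jars` and `--files` entries point at remote locations; the init-container fetches them into the pods before the driver and executor containers start.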
This and below text should be broken up into multiple lines.
Maybe we should include 2-3 examples of remote file usage - ideally, showing that one can use http, hdfs, gcs, s3 in dependencies.
Do we need to break them into lines? I thought this should be automatically wrapped when being viewed.
Regarding examples, I can add one spark-submit example showing how to use remote jars/files on http/https and hdfs. But gcs requires the connector in the init-container, which is non-trivial. I'm not sure about s3. I think we should avoid doing so.
HDFS or HTTP sound good. We can cover GCS elsewhere. Line breaks were for ease of reviewing by others (being able to comment on individual lines) and for consistency with the rest of the docs.
Updated in fbb2112.
Looks ok. Can you use "k8s" to save space in the PR title? And also make it self-contained instead of referencing another PR?
e.g.
"Add documentation covering init containers and secrets."
docs/running-on-kubernetes.md
Outdated
```

## Secret Management
In some cases, a Spark application may need to use some credentials, e.g., for accessing data on a secured HDFS cluster
I'd rewrite this.
"Kubernetes Secrets can be used to provide credentials for a Spark application to access secured services. To mount secrets into a driver container, ..."
Done.
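To make the suggested wording concrete (the secret name, key, and mount path below are illustrative, not from the PR), creating and mounting a Secret could look like:

```sh
# Create a Kubernetes Secret holding the credentials
# (name "spark-secret" and key "token" are illustrative)
kubectl create secret generic spark-secret --from-literal=token=<credential-value>

# Mount it into both the driver and executor pods at /etc/secrets
bin/spark-submit \
  ... \
  --conf spark.kubernetes.driver.secrets.spark-secret=/etc/secrets \
  --conf spark.kubernetes.executor.secrets.spark-secret=/etc/secrets \
  ...
```

The application then reads the credential from the file mounted under `/etc/secrets` inside the pods.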
docs/running-on-kubernetes.md
Outdated
<td><code>spark.kubernetes.mountDependencies.maxThreadPoolSize</code></td>
<td>5</td>
<td>
Maximum size of the thread pool for downloading remote dependencies into the driver and executor pods.
I'd clarify this controls how many downloads happen simultaneously; could even change the name of the config to reflect that.
Done.
Test build #85321 has finished for PR 20059 at commit
Test build #85323 has finished for PR 20059 at commit
Test build #85325 has finished for PR 20059 at commit
Test build #85331 has finished for PR 20059 at commit
Test build #85385 has finished for PR 20059 at commit
mostly LGTM
docs/running-on-kubernetes.md
Outdated
</tr>
<tr>
<td><code>spark.kubernetes.executor.secrets.[SecretName]</code></td>
<td>5</td>
what's the meaning of `5` here?
Hmm, copy and paste error. Fixed.
Test build #85411 has finished for PR 20059 at commit
docs/running-on-kubernetes.md
Outdated
--files hdfs://host:port/path/to/file1,hdfs://host:port/path/to/file2 \
--conf spark.executor.instances=5 \
--conf spark.kubernetes.driver.docker.image=<driver-image> \
--conf spark.kubernetes.executor.docker.image=<executor-image> \
`container.image` instead of `docker.image`. We need to modify lines 79-80 as well.
Done.
docs/running-on-kubernetes.md
Outdated
</tr>
<tr>
<td><code>spark.kubernetes.mountDependencies.timeout</code></td>
<td>300 seconds</td>
`300s` instead of `300 seconds`, which is the form we can specify in the config string.
Done.
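For illustration, assuming Spark's usual suffixed time-string format for duration configs (units such as `s`, `m`, `h`), the default could then be written as either of:

```sh
# These two settings should be equivalent, assuming standard Spark time suffixes
--conf spark.kubernetes.mountDependencies.timeout=300s
--conf spark.kubernetes.mountDependencies.timeout=5m
```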
Test build #85428 has finished for PR 20059 at commit
retest this please
Test build #85431 has finished for PR 20059 at commit
Test build #85440 has finished for PR 20059 at commit
Thanks! merging to master.
What changes were proposed in this pull request?
This PR updates the Kubernetes documentation corresponding to the following features/changes in #19954.
@vanzin @jiangxb1987 @foxish