ConnectionResetError: [Errno 104] Connection reset by peer #86
Hi, this looks like a kind of timeout. Note that the restart policy should be Never. What is your system? This seems like a timeout that is known to happen on Amazon Kubernetes; see the other open conversation. Can you run this locally with Docker Desktop Kubernetes or some other local cluster? I can only get to this mid next week, apologies about that.
Hi @LamaAni, I've changed the RestartPolicy to Never and the problem keeps happening. Sorry, but I don't know what you meant about running the example in the repo... I'm new to this. Could you explain how to do it? Thanks
Can you please test the example here:

```python
from airflow import DAG
from airflow_kubernetes_job_operator.kubernetes_job_operator import KubernetesJobOperator
from airflow.utils.dates import days_ago

default_args = {
    "owner": "tester",
    "start_date": days_ago(2),
    "retries": 0,
}

dag = DAG(
    "job-operator-simple-test",
    default_args=default_args,
    description="Test base job operator",
    schedule_interval=None,
)

KubernetesJobOperator(
    task_id="very-simple-job",
    dag=dag,
    image="ubuntu",
    command=[
        "bash",
        "-c",
        "echo start; sleep 5; echo end",
    ],
)
```

The above should make a very fast image run; check whether the timeout is the issue. If that passes, just increase the sleep time in there and you will get a test of the timeout limit on the cluster. I forced the restart policy to Never in the code. The error sometimes comes from a timeout on the open connection with the Kubernetes cluster (a connection timeout forced by the server), hence testing locally may shed light on the issue.
I've just tested your proposed DAG, and the task ended successfully when the sleep value was 5 seconds. After increasing it to 300 seconds, the task ends with Connection reset by peer. Since this problem is critical to the delivery of a project, we had to create our own custom operator to submit Kubernetes jobs. During the creation and testing process we came across a similar problem when we tried to stream-read the logs generated by Kubernetes using the official Python library for k8s.
Yeah, this issue was already mentioned here: #54. I need to fix that but have not had the time. If you can add a reconnect methodology, I will definitely accept a PR. If I get some time I'll fix it up, but currently it is an open issue. Feel free to close this issue when you are done.
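For anyone who wants to attempt the reconnect methodology mentioned above, here is a minimal, hypothetical sketch of the general idea: wrap the log stream in a generator that reopens it when the connection is reset. The names (`stream_factory`, `max_retries`, `backoff`) are illustrative and not part of this operator's API; this is not the repository's actual implementation.

```python
import time


def stream_with_reconnect(stream_factory, max_retries=5, backoff=1.0):
    """Yield items from stream_factory(), reopening the stream when the
    connection is reset by the server, up to max_retries consecutive failures.
    """
    failures = 0
    while True:
        try:
            for line in stream_factory():
                failures = 0  # progress was made; reset the failure counter
                yield line
            return  # the stream ended normally
        except ConnectionResetError:
            failures += 1
            if failures > max_retries:
                raise  # give up after too many consecutive resets
            time.sleep(backoff * failures)  # back off before reconnecting
```

With the official Kubernetes client, `stream_factory` would be a callable that reopens the pod-log stream. Note that a naive reconnect like this restarts the stream from the server's current position, so lines may be duplicated or lost across reconnects; a real fix would need to track how much of the log has already been read (e.g. via `since_seconds` or a line offset).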
Describe the bug
I am getting this error in our DAGs running KubernetesJobOperator.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
No error is raised during job execution.
Screenshots
If applicable, add screenshots to help explain your problem.
Environment
Airflow is deployed on Azure Kubernetes Service with the KubernetesExecutor, which spawns the worker pods in the same AKS cluster.
Kubernetes Version 1.24.9
Airflow Version 2.4.3
airflow_kubernetes_job_operator-2.0.12
Log
Complete log:
dag_id=logs-job-operator_run_id=manual__2023-02-23T14_31_39.933191+00_00_task_id=test-job-success_attempt=1.log