This issue was moved to a discussion.

You can continue the conversation there.


Unexpected error when using Cloud Run jobs command in BashOperator #29459

Closed
2 tasks done
janos0207 opened this issue Feb 10, 2023 · 4 comments
Labels
area:core kind:bug This is clearly a bug

Comments

@janos0207

janos0207 commented Feb 10, 2023

Apache Airflow version

2.5.1

What happened

When I ran the Cloud Run jobs execute command in a BashOperator:

```
gcloud beta run jobs execute extract-dev-****** --project "products-*****" --region "us-east1" --wait
```

the task failed with an error saying that the task's process group received SIGTERM.

However, the task went green when I dropped the `--wait` option.

What you think should happen instead

The error message says, "WARNING - Recorded pid 6032 does not match the current pid 6033",
so Airflow sent Signals.SIGTERM to process group 6033, which contained the PIDs [6036, 6033].

I think this SIGTERM signal is what makes the BashOperator task crash.
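
The kill path can be sketched in plain Python (a minimal sketch of the generic POSIX mechanism, not Airflow's actual code): sending SIGTERM to a process group takes down every PID in it, which is why both 6036 and 6033 died together:

```python
import os
import signal
import subprocess
import time

# Start a shell that spawns a background child, in its own session/process
# group -- roughly the shape of BashOperator's subprocess tree.
proc = subprocess.Popen(
    ["sh", "-c", "sleep 60 & wait"],
    start_new_session=True,  # new process group, so killpg won't hit us
)
time.sleep(0.2)  # give the background child a moment to start

# On a pid mismatch, the supervisor terminates the whole group, killing
# both the shell and its background child in one shot.
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)

rc = proc.wait()
# A negative return code means the shell died from a signal.
print(rc)
```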

How to reproduce

No response

Operating System

Ubuntu 22.04.1 LTS

Versions of Apache Airflow Providers

apache-airflow-providers-common-sql==1.3.3
apache-airflow-providers-ftp==3.3.0
apache-airflow-providers-google==8.8.0
apache-airflow-providers-http==4.1.1
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-sqlite==3.3.1

Deployment

Virtualenv installation

Deployment details

No response

Anything else

There are some issues similar to this problem.

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

Code of Conduct

@janos0207 janos0207 added area:core kind:bug This is a clearly a bug labels Feb 10, 2023
@boring-cyborg

boring-cyborg bot commented Feb 10, 2023

Thanks for opening your first issue here! Be sure to follow the issue template!

@Taragolis
Contributor

As I understand it, this only happens when you add the --wait flag, and the task does not fail without it.
I'm not familiar with gcloud, but if it creates a new process and closes the previous one, then this behaviour of local_task_job.py is valid: it detects that the process which initially started the task run no longer exists and kills the whole process group.
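
The check described above can be sketched like this (a hypothetical simplification; `heartbeat_check` is an illustrative name, not Airflow's actual API):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("local_task_job")


def heartbeat_check(recorded_pid: int, current_pid: int) -> bool:
    """Return True when the supervisor should kill the task's process group.

    Hypothetical simplification: if the pid recorded at task start no
    longer matches the pid observed now, the original process is assumed
    gone and the whole group gets terminated.
    """
    if recorded_pid != current_pid:
        log.warning(
            "Recorded pid %s does not match the current pid %s",
            recorded_pid,
            current_pid,
        )
        return True
    return False


# The reporter's case: 6032 recorded, 6033 observed -> kill the group.
should_kill = heartbeat_check(6032, 6033)
```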

@janos0207
Author

Thank you for the comment, and sorry for my late response.
So if the gcloud CLI ended the original process and created a new one in this case, is this the correct behavior of BashOperator?
Should I report the crash to the gcloud CLI developers rather than to the Airflow community?

@Taragolis
Contributor

This is just my assumption about gcloud; I could be wrong.
The main question is: do you have problems with any other tasks?

@apache apache locked and limited conversation to collaborators Feb 18, 2023
@eladkal eladkal converted this issue into discussion #29605 Feb 18, 2023
