
CI connector ops: introduce QA checks in /test #21699

Merged
merged 3 commits into from
Jan 23, 2023

Conversation

alafanechere
Contributor

@alafanechere alafanechere commented Jan 23, 2023

What

Closes #21694
I want to run QA checks during the integration test workflow run.
These checks will run on /test and nightly builds.
Their failure will lead to a build failure in the CI.
They are not meant to be run manually by connector developers to test actual connector behavior.
They are run in the CI to assess whether a connector meets all the QA requirements to join our connector catalog.

How

  • Introduce the run-qa-checks command in the ci-connector-ops Python package
  • Create empty functions returning True for all the planned checks
  • Run run-qa-checks $connector_name in the integration test workflow
  • Discard the use of virtualenv in the "Run Integration Test" step

    Virtualenvs are not required, and their absence simplifies making the run-qa-checks command available on the PATH.
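The placeholder checks described above can be sketched roughly as follows. The two check names are borrowed from the review discussion below; the exact function names, signatures, and module layout in ci-connector-ops are assumptions, not the actual implementation.

```python
# Hypothetical sketch of the "empty functks returning True" approach.
# Each check takes a connector name and returns a pass/fail boolean.

def check_connector_icon_is_available(connector_name: str) -> bool:
    """Placeholder: always passes until the real check is implemented."""
    return True

def check_changelog_entry_is_updated(connector_name: str) -> bool:
    """Placeholder: always passes until the real check is implemented."""
    return True

QA_CHECKS = [
    check_connector_icon_is_available,
    check_changelog_entry_is_updated,
]

def run_qa_checks(connector_name: str) -> bool:
    """Run every planned check; all placeholders currently return True."""
    return all(qa_check(connector_name) for qa_check in QA_CHECKS)
```

Because every placeholder returns True, the CI step always succeeds for now; real check logic lands in follow-up PRs.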

🚨 User Impact 🚨

None: all the checks are currently forced to pass; the real check logic is yet to be implemented in future PRs.

@alafanechere
Contributor Author

An example of a /test execution on this branch can be found here. Check the head of the Test connectors/source-facebook-marketing logs.

@alafanechere alafanechere marked this pull request as ready for review January 23, 2023 12:16
@alafanechere alafanechere requested a review from a team January 23, 2023 12:16
@github-actions
Contributor

Airbyte Code Coverage

There is no coverage information present for the Files changed

Total Project Coverage 26.72% 🍏

Contributor

@erohmensing erohmensing left a comment


These checks will run on /test and nightly builds.
Their failure will lead to a build failure in the CI.
They are not meant to be run manually by connector developers to test actual connector behavior.
They are run in the CI to assess whether a connector meets all the QA requirements to join our connector catalog.

Just wondering where we plan to run/consume this info in the future - since we're tying it to a /test failure; if we're using it to remove connectors from cloud, does that also then mean we'd remove them if their integration tests fail, even if QA checks are passing?

This is my only open question re: approval

Comment on lines 83 to 89
for qa_check in QA_CHECKS:
    successful_check = qa_check(connector_name)
    check_result_prefix = "✅" if successful_check else "❌"
    print(f"{check_result_prefix} - {qa_check.__name__}")
    qa_check_results.append(successful_check)
if not all(qa_check_results):
    print(f"QA checks failed for {connector_name}")
Contributor


Non-blocking, but here I find the naming of successful_check a bit confusing. Also, we're only storing the boolean result, so we have to rely on the print statements to interpret which check failed. WDYT about a dictionary of check_results, {"check_name": bool}, e.g. {"check_connector_icon_is_available": True, "check_changelog_entry_is_updated": False}?
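The reviewer's dictionary suggestion could look something like the sketch below: results are keyed by check name, so failing checks can be identified programmatically instead of via print output. The check functions and the simulated failure are illustrative assumptions, not the merged code.

```python
# Sketch of the suggested check_results dictionary, keyed by check name.
# The two placeholder checks below are hypothetical; one is hard-coded to
# fail to show how a failing check would be reported.

def check_connector_icon_is_available(connector_name: str) -> bool:
    return True  # placeholder

def check_changelog_entry_is_updated(connector_name: str) -> bool:
    return False  # placeholder, simulating a failing check

QA_CHECKS = [
    check_connector_icon_is_available,
    check_changelog_entry_is_updated,
]

def run_qa_checks(connector_name: str) -> dict:
    # Map each check's name to its boolean result.
    check_results = {check.__name__: check(connector_name) for check in QA_CHECKS}
    for name, passed in check_results.items():
        print(f"{'✅' if passed else '❌'} - {name}")
    if not all(check_results.values()):
        failing = [name for name, ok in check_results.items() if not ok]
        print(f"QA checks failed for {connector_name}: {failing}")
    return check_results
```

Returning the dict (rather than only printing) also lets a caller decide how to surface the failures, e.g. in a CI summary.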

Contributor Author


I improved it as you suggested 😄

@alafanechere
Contributor Author

Just wondering where we plan to run/consume this info in the future - since we're tying it to a /test failure; if we're using it to remove connectors from cloud, does that also then mean we'd remove them if their integration tests fail, even if QA checks are passing?

These new checks will be run on each /test command dispatch. This slash command is run manually from a PR or during the nightly builds.
These new checks are not meant to decide whether a connector gets added to or removed from Cloud (that will be a different action based on these test results and other logic).
This PR only adds a new set of QA checks, run on connectors built in the CI, that will make the build fail if the QA requirements are not met.

Contributor

@erohmensing erohmensing left a comment


These new checks are not meant to decide if a connector gets added or removed from Cloud (this will be a different action based on these test results and other logic).

Got it - that was the clarification I needed!

@alafanechere alafanechere merged commit d77514d into master Jan 23, 2023
@alafanechere alafanechere deleted the augustin/introduce-qa-checks branch January 23, 2023 17:25
Successfully merging this pull request may close these issues.

Define where and how to declare QA tests for connectors