Document MAX_*_WORKERS in Workers & Jobs #9191

Merged: 1 commit merged into master on Jan 3, 2022

Conversation

alafanechere (Contributor)

We often receive messages from users (e.g. this one) asking for help with scaling and parallelization.
I thought it would be helpful to add more details about the `MAX_*_WORKERS` env vars to the Workers & Jobs documentation.

github-actions bot added the area/documentation label on Dec 29, 2021
alafanechere added the type/enhancement label on Dec 29, 2021

marcosmarxm (Member) left a comment

See comment.

Comment on lines +42 to +50
## Worker parallelization
Airbyte exposes the following environment variables to change the maximum number of each type of worker allowed to run in parallel.
Tweaking these values might help you run more jobs in parallel and increase the workload your Airbyte instance can handle:
* `MAX_SPEC_WORKERS`: Maximum number of *Spec* workers allowed to run in parallel.
* `MAX_CHECK_WORKERS`: Maximum number of *Check connection* workers allowed to run in parallel.
* `MAX_DISCOVERY_WORKERS`: Maximum number of *Discovery* workers allowed to run in parallel.
* `MAX_SYNC_WORKERS`: Maximum number of *Sync* workers allowed to run in parallel.

The default value for each of these environment variables is currently set to **5**.
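
For illustration only (this is not part of the PR diff), a minimal sketch of how these variables might be set for a docker-compose deployment, assuming the values are read from the `.env` file that the Airbyte `docker-compose.yaml` consumes; exact file and variable wiring may differ:

    # .env (hypothetical values; the documented default for each is 5)
    MAX_SPEC_WORKERS=5
    MAX_CHECK_WORKERS=5
    MAX_DISCOVERY_WORKERS=5
    MAX_SYNC_WORKERS=10

In the discussion below, `MAX_SYNC_WORKERS` is the value that gets tuned most, since sync jobs are the ones that tend to queue up.
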
Member

I don't think this is the right place for this. Worker parallelization only works for Kubernetes deployments, and we already have a section about the topic: https://docs.airbyte.io/deploying-airbyte/on-kubernetes#increasing-job-parallelism

alafanechere (Contributor, Author)

Are you sure they only work for Kubernetes? When I check the call stack of these env vars, they are used to set Temporal `WorkerOptions` in airbyte-workers/src/main/java/io/airbyte/workers/WorkerApp.java:

  private static WorkerOptions getWorkerOptions(final int max) {
    return WorkerOptions.newBuilder()
        .setMaxConcurrentActivityExecutionSize(max)
        .build();
  }

I am not sure this Temporal configuration is exclusive to K8s.

Contributor

Yeah, I am sure it works for K8s, but I was not able to get it working in Docker.

Contributor

@harshithmullapudi this should work in Docker too. Why do you say you aren't able to get it working?

harshithmullapudi (Contributor), Dec 30, 2021

In K8s I played with these parameters a bit earlier and found the right config to run about 2000 syncs in 4 hours (I could see 50 running at any point), with 50 parallel syncs (SUBMITTER_NUM_THREADS=50) and about 5 workers (with MAX_SYNC_WORKERS=20).

I tried doing the same with Docker and could only run 15 at max. Not sure if I am missing something.

Contributor

Do you remember what your SUBMITTER_NUM_THREADS and MAX_SYNC_WORKERS values were?

harshithmullapudi (Contributor), Dec 30, 2021

Ah, I think we also need to increase the ports, right? I allocated 60 ports in K8s but didn't change that in Docker.

davinchia (Contributor), Dec 30, 2021

Ah, I think it's because the max sync worker variable is not actually passed into the worker container in the Docker build config.

Contributor

> Ah, I think we also need to increase the ports, right? I allocated 60 ports in K8s but didn't change that in Docker.

Nope. This is only used in Kube. Docker doesn't need explicit port allocation.

Contributor

I'm going to fix this in #9209.

Will tag folks when I'm done. Apart from this change, what we have here is correct.
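
For context, a hypothetical sketch (not necessarily the actual change made in #9209) of what passing these variables through to the worker container in a docker-compose file could look like; the real Airbyte service names and variable wiring may differ:

    # docker-compose.yaml (hypothetical excerpt)
    services:
      worker:
        image: airbyte/worker:${VERSION}
        environment:
          # forward the .env values into the worker container
          - MAX_SPEC_WORKERS=${MAX_SPEC_WORKERS}
          - MAX_CHECK_WORKERS=${MAX_CHECK_WORKERS}
          - MAX_DISCOVERY_WORKERS=${MAX_DISCOVERY_WORKERS}
          - MAX_SYNC_WORKERS=${MAX_SYNC_WORKERS}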

alafanechere (Contributor, Author)

@marcosmarxm, as discussed with @davinchia above, these env vars can be used on both Docker and K8s deployments, and are now set in the docker-compose file. Can we merge this now?

marcosmarxm (Member) left a comment

Thanks, @alafanechere!

@alafanechere alafanechere merged commit 7300240 into master Jan 3, 2022
@alafanechere alafanechere deleted the augustin/doc/worker-parallelization branch January 3, 2022 18:13