
Regularly Hit API limits on Zendesk Support #11899

Closed
danieldiamond opened this issue Apr 11, 2022 · 15 comments · Fixed by #12122 or #13757
Labels: autoteam, community, team/tse (Technical Support Engineers), type/bug

@danieldiamond (Contributor)

Environment

  • Airbyte version: 0.35.64-alpha
  • OS Version / Instance: AWS EC2
  • Deployment: Docker
  • Source Connector and version: Zendesk Support (0.2.5)
  • Destination Connector and version: Snowflake (0.4.24)
  • Severity: Critical
  • Step where error happened: Sync

Current Behavior

We regularly hit the API limits with this connector. A few other applications also call the Zendesk Support API on our account, but this Airbyte connector generates a large surge of requests that triggers rate limiting. That in turn causes the Airbyte sync to fail, as the connector does not appear to handle rate limiting appropriately.

This would be less of an issue if I could use the incremental stream. However, the incremental stream has its own problems, e.g. returning duplicate records.
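For context, duplicates from an incremental stream can be worked around downstream by keeping only the latest record per primary key. A minimal sketch (the field names `id` and `updated_at` are assumptions for illustration, not the connector's actual schema):

```python
def dedupe_latest(records, key="id", cursor="updated_at"):
    """Keep only the most recent record per key, ranked by the cursor field.

    Assumes each record is a dict containing `key` and a comparable `cursor`
    value (e.g. an ISO-8601 timestamp string).
    """
    latest = {}
    for rec in records:
        k = rec[key]
        if k not in latest or rec[cursor] > latest[k][cursor]:
            latest[k] = rec
    return list(latest.values())
```

This is a workaround, not a fix; the connector should ideally emit each record once.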

Expected Behavior

The connector should pace its requests to stay within the API limits, or back off gracefully when a 429 is returned instead of failing the sync. Alternatively, there should be an option for users to tune the rate-limiting behaviour of the connector.
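As an illustration of the expected behaviour (a sketch, not the connector's actual implementation): when a 429 is returned, a client can honour a server-provided `Retry-After` value and otherwise fall back to capped exponential backoff with jitter:

```python
import random

def backoff_seconds(attempt, retry_after=None, cap=60):
    """Seconds to sleep before retrying a rate-limited (HTTP 429) request.

    Prefers the server-provided Retry-After value when present; otherwise
    uses capped exponential backoff plus up to 1s of jitter so that
    concurrent workers don't all retry at the same instant.
    """
    if retry_after is not None:
        return float(retry_after)
    base = min(cap, 2 ** attempt)
    return base + random.uniform(0, 1)
```

With this shape of logic the sync slows down under pressure rather than crashing, which is what the logs below show the connector only partially doing.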

Logs


LOG

2022-04-10 04:47:37 INFO i.a.w.w.WorkerRun(call):49 - Executing worker wrapper. Airbyte version: 0.35.64-alpha
2022-04-10 04:47:37 INFO i.a.w.t.TemporalAttemptExecution(get):105 - Docker volume job log path: /tmp/workspace/15684/0/logs.log
2022-04-10 04:47:37 INFO i.a.w.t.TemporalAttemptExecution(get):110 - Executing worker wrapper. Airbyte version: 0.35.64-alpha
2022-04-10 04:47:37 INFO i.a.w.DefaultReplicationWorker(run):104 - start sync worker. job id: 15684 attempt id: 0
2022-04-10 04:47:37 INFO i.a.w.DefaultReplicationWorker(run):116 - configured sync modes: {null.tickets=incremental - append, null.ticket_comments=incremental - append, null.groups=incremental - append, null.organizations=incremental - append, null.ticket_fields=incremental - append, null.ticket_metric_events=incremental - append, null.ticket_metrics=incremental - append, null.users=incremental - append, null.tags=full_refresh - overwrite}
2022-04-10 04:47:37 INFO i.a.w.p.a.DefaultAirbyteDestination(start):69 - Running destination...
2022-04-10 04:47:37 INFO i.a.c.i.LineGobbler(voidCall):82 - Checking if airbyte/destination-snowflake:0.4.24 exists...
2022-04-10 04:47:38 INFO i.a.c.i.LineGobbler(voidCall):82 - airbyte/destination-snowflake:0.4.24 was found locally.
2022-04-10 04:47:38 INFO i.a.w.p.DockerProcessFactory(create):106 - Creating docker job ID: 15684
2022-04-10 04:47:38 INFO i.a.w.p.DockerProcessFactory(create):158 - Preparing command: docker run --rm --init -i -w /data/15684/0 --log-driver none --network host -v airbyte_workspace:/data -v /tmp/airbyte_local:/local -e WORKER_CONNECTOR_IMAGE=airbyte/destination-snowflake:0.4.24 -e WORKER_JOB_ATTEMPT=0 -e AIRBYTE_ROLE= -e WORKER_ENVIRONMENT=DOCKER -e AIRBYTE_VERSION=0.35.64-alpha -e WORKER_JOB_ID=15684 airbyte/destination-snowflake:0.4.24 write --config destination_config.json --catalog destination_catalog.json
2022-04-10 04:47:38 INFO i.a.c.i.LineGobbler(voidCall):82 - Checking if airbyte/source-zendesk-support:0.2.5 exists...
2022-04-10 04:47:38 INFO i.a.c.i.LineGobbler(voidCall):82 - airbyte/source-zendesk-support:0.2.5 was found locally.
2022-04-10 04:47:38 INFO i.a.w.p.DockerProcessFactory(create):106 - Creating docker job ID: 15684
2022-04-10 04:47:38 INFO i.a.w.p.DockerProcessFactory(create):158 - Preparing command: docker run --rm --init -i -w /data/15684/0 --log-driver none --network host -v airbyte_workspace:/data -v /tmp/airbyte_local:/local -e WORKER_CONNECTOR_IMAGE=airbyte/source-zendesk-support:0.2.5 -e WORKER_JOB_ATTEMPT=0 -e AIRBYTE_ROLE= -e WORKER_ENVIRONMENT=DOCKER -e AIRBYTE_VERSION=0.35.64-alpha -e WORKER_JOB_ID=15684 airbyte/source-zendesk-support:0.2.5 read --config source_config.json --catalog source_catalog.json --state input_state.json
2022-04-10 04:47:38 INFO i.a.w.DefaultReplicationWorker(run):158 - Waiting for source and destination threads to complete.
2022-04-10 04:47:38 INFO i.a.w.DefaultReplicationWorker(lambda$getDestinationOutputRunnable$6):339 - Destination output thread started.
2022-04-10 04:47:38 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):279 - Replication thread started.
2022-04-10 04:47:40 destination > SLF4J: Class path contains multiple SLF4J bindings.
2022-04-10 04:47:40 destination > SLF4J: Found binding in [jar:file:/airbyte/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2022-04-10 04:47:40 destination > SLF4J: Found binding in [jar:file:/airbyte/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2022-04-10 04:47:40 destination > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2022-04-10 04:47:40 destination > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2022-04-10 04:47:41 destination > 2022-04-10 04:47:41 INFO i.a.i.b.IntegrationCliParser(parseOptions):118 - integration args: {catalog=destination_catalog.json, write=null, config=destination_config.json}
2022-04-10 04:47:41 destination > 2022-04-10 04:47:41 INFO i.a.i.b.IntegrationRunner(runInternal):121 - Running integration: io.airbyte.integrations.destination.snowflake.SnowflakeDestination
2022-04-10 04:47:41 destination > 2022-04-10 04:47:41 INFO i.a.i.b.IntegrationRunner(runInternal):122 - Command: WRITE
2022-04-10 04:47:41 destination > 2022-04-10 04:47:41 INFO i.a.i.b.IntegrationRunner(runInternal):123 - Integration config: IntegrationConfig{command=WRITE, configPath='destination_config.json', catalogPath='destination_catalog.json', statePath='null'}
2022-04-10 04:47:41 destination > 2022-04-10 04:47:41 WARN c.n.s.JsonMetaSchema(newValidator):338 - Unknown keyword examples - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2022-04-10 04:47:41 destination > 2022-04-10 04:47:41 WARN c.n.s.JsonMetaSchema(newValidator):338 - Unknown keyword order - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2022-04-10 04:47:41 destination > 2022-04-10 04:47:41 WARN c.n.s.JsonMetaSchema(newValidator):338 - Unknown keyword airbyte_secret - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2022-04-10 04:47:41 destination > 2022-04-10 04:47:41 WARN c.n.s.JsonMetaSchema(newValidator):338 - Unknown keyword multiline - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2022-04-10 04:47:41 source > Starting syncing SourceZendeskSupport
2022-04-10 04:47:41 source > Syncing stream: groups
2022-04-10 04:47:41 destination > 2022-04-10 04:47:41 INFO i.a.i.d.j.c.SwitchingDestination(getConsumer):65 - Using destination type: COPY_S3
2022-04-10 04:47:41 destination > 2022-04-10 04:47:41 WARN i.a.i.d.s.SnowflakeDatabase(createDataSource):96 - Obsolete User/password login mode is used. Please re-create a connection to use the latest connector's version
2022-04-10 04:47:41 destination > 2022-04-10 04:47:41 INFO i.a.i.d.s.S3DestinationConfig(createS3Client):169 - Creating S3 client...
2022-04-10 04:47:42 source > Read 57 records from groups stream
2022-04-10 04:47:42 source > Finished syncing groups
2022-04-10 04:47:42 source > SourceZendeskSupport runtimes:
Syncing stream groups 0:00:00.599228
2022-04-10 04:47:42 source > Syncing stream: organizations
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.s.StagingConsumerFactory(lambda$toWriteConfig$0):96 - Write config: WriteConfig{streamName=groups, namespace=zendesk_support, outputSchemaName=zendesk_support, tmpTableName=_airbyte_tmp_jbc_groups, outputTableName=_airbyte_raw_groups, syncMode=append}
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.s.StagingConsumerFactory(lambda$toWriteConfig$0):96 - Write config: WriteConfig{streamName=organizations, namespace=zendesk_support, outputSchemaName=zendesk_support, tmpTableName=_airbyte_tmp_tpr_organizations, outputTableName=_airbyte_raw_organizations, syncMode=append}
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.s.StagingConsumerFactory(lambda$toWriteConfig$0):96 - Write config: WriteConfig{streamName=tags, namespace=zendesk_support, outputSchemaName=zendesk_support, tmpTableName=_airbyte_tmp_ahc_tags, outputTableName=_airbyte_raw_tags, syncMode=overwrite}
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.s.StagingConsumerFactory(lambda$toWriteConfig$0):96 - Write config: WriteConfig{streamName=ticket_comments, namespace=zendesk_support, outputSchemaName=zendesk_support, tmpTableName=_airbyte_tmp_vdz_ticket_comments, outputTableName=_airbyte_raw_ticket_comments, syncMode=append}
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.s.StagingConsumerFactory(lambda$toWriteConfig$0):96 - Write config: WriteConfig{streamName=ticket_fields, namespace=zendesk_support, outputSchemaName=zendesk_support, tmpTableName=_airbyte_tmp_osm_ticket_fields, outputTableName=_airbyte_raw_ticket_fields, syncMode=append}
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.s.StagingConsumerFactory(lambda$toWriteConfig$0):96 - Write config: WriteConfig{streamName=ticket_metric_events, namespace=zendesk_support, outputSchemaName=zendesk_support, tmpTableName=_airbyte_tmp_lxu_ticket_metric_events, outputTableName=_airbyte_raw_ticket_metric_events, syncMode=append}
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.s.StagingConsumerFactory(lambda$toWriteConfig$0):96 - Write config: WriteConfig{streamName=ticket_metrics, namespace=zendesk_support, outputSchemaName=zendesk_support, tmpTableName=_airbyte_tmp_fqo_ticket_metrics, outputTableName=_airbyte_raw_ticket_metrics, syncMode=append}
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.s.StagingConsumerFactory(lambda$toWriteConfig$0):96 - Write config: WriteConfig{streamName=tickets, namespace=zendesk_support, outputSchemaName=zendesk_support, tmpTableName=_airbyte_tmp_nuv_tickets, outputTableName=_airbyte_raw_tickets, syncMode=append}
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.s.StagingConsumerFactory(lambda$toWriteConfig$0):96 - Write config: WriteConfig{streamName=users, namespace=zendesk_support, outputSchemaName=zendesk_support, tmpTableName=_airbyte_tmp_lql_users, outputTableName=_airbyte_raw_users, syncMode=append}
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.b.BufferedStreamConsumer(startTracked):116 - class io.airbyte.integrations.destination.buffered_stream_consumer.BufferedStreamConsumer started.
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):114 - Preparing tmp tables in destination started for 9 streams
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):122 - Preparing staging area in destination started for schema zendesk_support stream groups: tmp table: _airbyte_tmp_jbc_groups, stage: ZENDESK_SUPPORT_GROUPS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/
2022-04-10 04:47:42 destination > 2022-04-10 04:47:42 INFO c.z.h.HikariDataSource(getConnection):110 - HikariPool-1 - Starting...
2022-04-10 04:47:43 destination > 2022-04-10 04:47:43 INFO c.z.h.p.HikariPool(checkFailFast):565 - HikariPool-1 - Added connection net.snowflake.client.jdbc.SnowflakeConnectionV1@611b35d6
2022-04-10 04:47:43 destination > 2022-04-10 04:47:43 INFO c.z.h.HikariDataSource(getConnection):123 - HikariPool-1 - Start completed.
2022-04-10 04:47:44 destination > 2022-04-10 04:47:44 INFO i.a.d.j.DefaultJdbcDatabase(lambda$unsafeQuery$1):106 - closing connection
2022-04-10 04:47:44 destination > 2022-04-10 04:47:44 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):72 - Storage Object my-bucket/ZENDESK_SUPPORT_GROUPS does not exist in bucket; creating...
2022-04-10 04:47:44 destination > 2022-04-10 04:47:44 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):74 - Storage Object my-bucket/ZENDESK_SUPPORT_GROUPS has been created in bucket.
2022-04-10 04:47:44 destination > 2022-04-10 04:47:44 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):133 - Preparing staging area in destination completed for schema zendesk_support stream groups
2022-04-10 04:47:44 destination > 2022-04-10 04:47:44 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):122 - Preparing staging area in destination started for schema zendesk_support stream organizations: tmp table: _airbyte_tmp_tpr_organizations, stage: ZENDESK_SUPPORT_ORGANIZATIONS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/
2022-04-10 04:47:44 destination > 2022-04-10 04:47:44 INFO i.a.d.j.DefaultJdbcDatabase(lambda$unsafeQuery$1):106 - closing connection
2022-04-10 04:47:45 destination > 2022-04-10 04:47:45 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):72 - Storage Object my-bucket/ZENDESK_SUPPORT_ORGANIZATIONS does not exist in bucket; creating...
2022-04-10 04:47:45 destination > 2022-04-10 04:47:45 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):74 - Storage Object my-bucket/ZENDESK_SUPPORT_ORGANIZATIONS has been created in bucket.
2022-04-10 04:47:45 destination > 2022-04-10 04:47:45 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):133 - Preparing staging area in destination completed for schema zendesk_support stream organizations
2022-04-10 04:47:45 destination > 2022-04-10 04:47:45 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):122 - Preparing staging area in destination started for schema zendesk_support stream tags: tmp table: _airbyte_tmp_ahc_tags, stage: ZENDESK_SUPPORT_TAGS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/
2022-04-10 04:47:45 destination > 2022-04-10 04:47:45 INFO i.a.d.j.DefaultJdbcDatabase(lambda$unsafeQuery$1):106 - closing connection
2022-04-10 04:47:45 destination > 2022-04-10 04:47:45 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):72 - Storage Object my-bucket/ZENDESK_SUPPORT_TAGS does not exist in bucket; creating...
2022-04-10 04:47:45 destination > 2022-04-10 04:47:45 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):74 - Storage Object my-bucket/ZENDESK_SUPPORT_TAGS has been created in bucket.
2022-04-10 04:47:45 destination > 2022-04-10 04:47:45 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):133 - Preparing staging area in destination completed for schema zendesk_support stream tags
2022-04-10 04:47:45 destination > 2022-04-10 04:47:45 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):122 - Preparing staging area in destination started for schema zendesk_support stream ticket_comments: tmp table: _airbyte_tmp_vdz_ticket_comments, stage: ZENDESK_SUPPORT_TICKET_COMMENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/
2022-04-10 04:47:45 destination > 2022-04-10 04:47:45 INFO i.a.d.j.DefaultJdbcDatabase(lambda$unsafeQuery$1):106 - closing connection
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):72 - Storage Object my-bucket/ZENDESK_SUPPORT_TICKET_COMMENTS does not exist in bucket; creating...
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):74 - Storage Object my-bucket/ZENDESK_SUPPORT_TICKET_COMMENTS has been created in bucket.
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):133 - Preparing staging area in destination completed for schema zendesk_support stream ticket_comments
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):122 - Preparing staging area in destination started for schema zendesk_support stream ticket_fields: tmp table: _airbyte_tmp_osm_ticket_fields, stage: ZENDESK_SUPPORT_TICKET_FIELDS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.d.j.DefaultJdbcDatabase(lambda$unsafeQuery$1):106 - closing connection
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):72 - Storage Object my-bucket/ZENDESK_SUPPORT_TICKET_FIELDS does not exist in bucket; creating...
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):74 - Storage Object my-bucket/ZENDESK_SUPPORT_TICKET_FIELDS has been created in bucket.
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):133 - Preparing staging area in destination completed for schema zendesk_support stream ticket_fields
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):122 - Preparing staging area in destination started for schema zendesk_support stream ticket_metric_events: tmp table: _airbyte_tmp_lxu_ticket_metric_events, stage: ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.d.j.DefaultJdbcDatabase(lambda$unsafeQuery$1):106 - closing connection
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):72 - Storage Object my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS does not exist in bucket; creating...
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):74 - Storage Object my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS has been created in bucket.
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):133 - Preparing staging area in destination completed for schema zendesk_support stream ticket_metric_events
2022-04-10 04:47:46 destination > 2022-04-10 04:47:46 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):122 - Preparing staging area in destination started for schema zendesk_support stream ticket_metrics: tmp table: _airbyte_tmp_fqo_ticket_metrics, stage: ZENDESK_SUPPORT_TICKET_METRICS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/
2022-04-10 04:47:47 destination > 2022-04-10 04:47:47 INFO i.a.d.j.DefaultJdbcDatabase(lambda$unsafeQuery$1):106 - closing connection
2022-04-10 04:47:47 destination > 2022-04-10 04:47:47 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):72 - Storage Object my-bucket/ZENDESK_SUPPORT_TICKET_METRICS does not exist in bucket; creating...
2022-04-10 04:47:47 destination > 2022-04-10 04:47:47 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):74 - Storage Object my-bucket/ZENDESK_SUPPORT_TICKET_METRICS has been created in bucket.
2022-04-10 04:47:47 destination > 2022-04-10 04:47:47 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):133 - Preparing staging area in destination completed for schema zendesk_support stream ticket_metrics
2022-04-10 04:47:47 destination > 2022-04-10 04:47:47 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):122 - Preparing staging area in destination started for schema zendesk_support stream tickets: tmp table: _airbyte_tmp_nuv_tickets, stage: ZENDESK_SUPPORT_TICKETS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/
2022-04-10 04:47:47 destination > 2022-04-10 04:47:47 INFO i.a.d.j.DefaultJdbcDatabase(lambda$unsafeQuery$1):106 - closing connection
2022-04-10 04:47:47 destination > 2022-04-10 04:47:47 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):72 - Storage Object my-bucket/ZENDESK_SUPPORT_TICKETS does not exist in bucket; creating...
2022-04-10 04:47:47 destination > 2022-04-10 04:47:47 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):74 - Storage Object my-bucket/ZENDESK_SUPPORT_TICKETS has been created in bucket.
2022-04-10 04:47:48 destination > 2022-04-10 04:47:47 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):133 - Preparing staging area in destination completed for schema zendesk_support stream tickets
2022-04-10 04:47:48 destination > 2022-04-10 04:47:48 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):122 - Preparing staging area in destination started for schema zendesk_support stream users: tmp table: _airbyte_tmp_lql_users, stage: ZENDESK_SUPPORT_USERS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/
2022-04-10 04:47:48 destination > 2022-04-10 04:47:48 INFO i.a.d.j.DefaultJdbcDatabase(lambda$unsafeQuery$1):106 - closing connection
2022-04-10 04:47:48 destination > 2022-04-10 04:47:48 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):72 - Storage Object my-bucket/ZENDESK_SUPPORT_USERS does not exist in bucket; creating...
2022-04-10 04:47:48 destination > 2022-04-10 04:47:48 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):74 - Storage Object my-bucket/ZENDESK_SUPPORT_USERS has been created in bucket.
2022-04-10 04:47:48 destination > 2022-04-10 04:47:48 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):133 - Preparing staging area in destination completed for schema zendesk_support stream users
2022-04-10 04:47:48 destination > 2022-04-10 04:47:48 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):136 - Preparing tmp tables in destination completed.
2022-04-10 04:47:48 destination > 2022-04-10 04:47:48 INFO i.a.i.d.r.SerializedBufferingStrategy(lambda$addRecord$0):55 - Starting a new buffer for stream groups (current state: 0 bytes in 0 buffers)
2022-04-10 04:47:48 destination > 2022-04-10 04:47:48 INFO i.a.i.d.r.SerializedBufferingStrategy(lambda$addRecord$0):55 - Starting a new buffer for stream organizations (current state: 0 bytes in 1 buffers)
2022-04-10 04:47:48 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1000 (414 KB)
2022-04-10 04:47:48 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 2000 (844 KB)
2022-04-10 04:47:48 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 3000 (1 MB)
2022-04-10 04:47:48 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 4000 (1 MB)
...
...
2022-04-10 04:48:18 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 74000 (31 MB)
2022-04-10 04:48:19 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 75000 (31 MB)
2022-04-10 04:48:19 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 76000 (31 MB)
2022-04-10 04:48:19 source > Read 76331 records from organizations stream
2022-04-10 04:48:19 source > Finished syncing organizations
2022-04-10 04:48:19 source > SourceZendeskSupport runtimes:
Syncing stream groups 0:00:00.599228
Syncing stream organizations 0:00:37.764184
2022-04-10 04:48:19 source > Syncing stream: tags
2022-04-10 04:48:20 destination > 2022-04-10 04:48:20 INFO i.a.i.d.r.SerializedBufferingStrategy(lambda$addRecord$0):55 - Starting a new buffer for stream tags (current state: 4 MB in 2 buffers)
2022-04-10 04:48:21 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 77000 (32 MB)
2022-04-10 04:48:21 source > Read 612 records from tags stream
2022-04-10 04:48:21 source > Finished syncing tags
2022-04-10 04:48:21 source > SourceZendeskSupport runtimes:
Syncing stream groups 0:00:00.599228
Syncing stream organizations 0:00:37.764184
Syncing stream tags 0:00:01.396089
2022-04-10 04:48:21 source > Syncing stream: ticket_comments
2022-04-10 04:48:30 destination > 2022-04-10 04:48:30 INFO i.a.i.d.r.SerializedBufferingStrategy(lambda$addRecord$0):55 - Starting a new buffer for stream ticket_comments (current state: 4 MB in 3 buffers)
2022-04-10 04:48:37 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 78000 (37 MB)
2022-04-10 04:48:50 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 79000 (41 MB)
2022-04-10 04:49:03 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 80000 (44 MB)
2022-04-10 04:49:15 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 81000 (49 MB)
2022-04-10 04:49:28 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 82000 (54 MB)
2022-04-10 04:49:40 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 83000 (58 MB)
...
...
2022-04-10 04:56:12 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 114000 (170 MB)
2022-04-10 04:56:18 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 115000 (174 MB)
2022-04-10 04:56:33 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 116000 (178 MB)
2022-04-10 04:56:45 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 117000 (181 MB)
2022-04-10 04:56:56 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 118000 (185 MB)
2022-04-10 04:56:56 source > Backing off _send(...) for 0.0s (airbyte_cdk.sources.streams.http.exceptions.UserDefinedBackoffException: Request URL: https://mydomain.zendesk.com/api/v2/incremental/ticket_events.json?start_time=1579676644&include=comment_events, Response Code: 429, Response Text: {"error":"APIRateLimitExceeded","description":"Number of allowed incremental export API requests per minute exceeded"})
2022-04-10 04:57:06 source > Retrying. Sleeping for 4 seconds
2022-04-10 04:57:12 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 119000 (189 MB)
2022-04-10 04:57:25 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 120000 (193 MB)
2022-04-10 04:57:38 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 121000 (196 MB)
2022-04-10 04:57:51 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 122000 (200 MB)
...
...
2022-04-10 09:05:45 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1343000 (3 GB)
2022-04-10 09:05:59 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1344000 (3 GB)
2022-04-10 09:05:59 source > Backing off _send(...) for 0.0s (airbyte_cdk.sources.streams.http.exceptions.UserDefinedBackoffException: Request URL: https://mydomain.zendesk.com/api/v2/incremental/ticket_events.json?start_time=1637003171&include=comment_events, Response Code: 429, Response Text: {"error":"APIRateLimitExceeded","description":"Number of allowed incremental export API requests per minute exceeded"})
2022-04-10 09:06:09 source > Retrying. Sleeping for 1 seconds
2022-04-10 09:06:15 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1345000 (3 GB)
2022-04-10 09:06:20 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1346000 (3 GB)
2022-04-10 09:06:32 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1347000 (3 GB)
...
...
2022-04-10 09:09:26 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1365000 (3 GB)
2022-04-10 09:09:37 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1366000 (3 GB)
2022-04-10 09:09:44 source > Backing off _send(...) for 0.0s (airbyte_cdk.sources.streams.http.exceptions.UserDefinedBackoffException: Request URL: https://mydomain.zendesk.com/api/v2/incremental/ticket_events.json?start_time=1637622084&include=comment_events, Response Code: 429, Response Text: {"error":"APIRateLimitExceeded","description":"Number of allowed incremental export API requests per minute exceeded"})
2022-04-10 09:10:06 source > Retrying. Sleeping for 16 seconds
2022-04-10 09:10:06 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1367000 (3 GB)
2022-04-10 09:10:10 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1368000 (3 GB)
2022-04-10 09:10:22 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1369000 (3 GB)
2022-04-10 09:10:28 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1370000 (3 GB)
2022-04-10 09:10:40 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1371000 (3 GB)
2022-04-10 09:10:52 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1372000 (3 GB)
2022-04-10 09:10:52 source > Backing off _send(...) for 0.0s (airbyte_cdk.sources.streams.http.exceptions.UserDefinedBackoffException: Request URL: https://mydomain.zendesk.com/api/v2/incremental/ticket_events.json?start_time=1637747683&include=comment_events, Response Code: 429, Response Text: {"error":"APIRateLimitExceeded","description":"Number of allowed incremental export API requests per minute exceeded"})
2022-04-10 09:11:06 source > Retrying. Sleeping for 8 seconds
2022-04-10 09:11:06 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1373000 (3 GB)
2022-04-10 09:11:17 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1374000 (3 GB)
2022-04-10 09:11:25 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1375000 (3 GB)
2022-04-10 09:11:34 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1376000 (3 GB)
2022-04-10 09:11:47 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1377000 (3 GB)
2022-04-10 09:11:53 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1378000 (3 GB)
2022-04-10 09:11:59 source > Backing off _send(...) for 0.0s (airbyte_cdk.sources.streams.http.exceptions.UserDefinedBackoffException: Request URL: https://mydomain.zendesk.com/api/v2/incremental/ticket_events.json?start_time=1637893888&include=comment_events, Response Code: 429, Response Text: {"error":"APIRateLimitExceeded","description":"Number of allowed incremental export API requests per minute exceeded"})
2022-04-10 09:12:05 source > Retrying. Sleeping for 1 seconds
2022-04-10 09:12:06 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1379000 (3 GB)
2022-04-10 09:12:10 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1380000 (3 GB)
2022-04-10 09:12:21 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1381000 (3 GB)
...
...
2022-04-10 09:53:30 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1615000 (4 GB)
2022-04-10 09:53:38 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1616000 (4 GB)
2022-04-10 09:53:39 �[44msource�[0m > Backing off _send(...) for 0.0s (airbyte_cdk.sources.streams.http.exceptions.UserDefinedBackoffException: Request URL: https://mydomain.zendesk.com/api/v2/incremental/ticket_events.json?start_time=1649390729&include=comment_events, Response Code: 429, Response Text: {"error":"APIRateLimitExceeded","description":"Number of allowed incremental export API requests per minute exceeded"})
2022-04-10 09:54:05 �[44msource�[0m > Retrying. Sleeping for 22 seconds
2022-04-10 09:54:05 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1617000 (4 GB)
2022-04-10 09:54:14 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1618000 (4 GB)
2022-04-10 09:54:17 �[44msource�[0m > Read 1541783 records from ticket_comments stream
2022-04-10 09:54:17 �[44msource�[0m > Finished syncing ticket_comments
2022-04-10 09:54:17 �[44msource�[0m > SourceZendeskSupport runtimes:
Syncing stream groups 0:00:00.599228
Syncing stream organizations 0:00:37.764184
Syncing stream tags 0:00:01.396089
Syncing stream ticket_comments 5:05:56.198026
2022-04-10 09:54:17 �[44msource�[0m > Syncing stream: ticket_fields
2022-04-10 09:54:17 �[43mdestination�[0m > 2022-04-10 09:54:17 �[32mINFO�[m i.a.i.d.r.SerializedBufferingStrategy(lambda$addRecord$0):55 - Starting a new buffer for stream ticket_fields (current state: 177 MB in 4 buffers)
2022-04-10 09:54:18 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1619000 (4 GB)
2022-04-10 09:54:18 �[44msource�[0m > Read 292 records from ticket_fields stream
2022-04-10 09:54:18 �[44msource�[0m > Finished syncing ticket_fields
2022-04-10 09:54:18 �[44msource�[0m > SourceZendeskSupport runtimes:
Syncing stream groups 0:00:00.599228
Syncing stream organizations 0:00:37.764184
Syncing stream tags 0:00:01.396089
Syncing stream ticket_comments 5:05:56.198026
Syncing stream ticket_fields 0:00:00.772529
2022-04-10 09:54:18 �[44msource�[0m > Syncing stream: ticket_metric_events
2022-04-10 09:54:21 �[43mdestination�[0m > 2022-04-10 09:54:21 �[32mINFO�[m i.a.i.d.r.SerializedBufferingStrategy(lambda$addRecord$0):55 - Starting a new buffer for stream ticket_metric_events (current state: 177 MB in 5 buffers)
2022-04-10 09:54:21 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1620000 (4 GB)
2022-04-10 09:54:21 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1621000 (4 GB)
2022-04-10 09:54:21 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1622000 (4 GB)
...
...
...
...
...
...
2022-04-10 10:34:46 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 7373000 (4 GB)
2022-04-10 10:34:46 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 7374000 (4 GB)
2022-04-10 10:34:46 �[43mdestination�[0m > 2022-04-10 10:34:46 �[32mINFO�[m i.a.i.d.r.SerializedBufferingStrategy(flushWriter):82 - Flushing buffer of stream ticket_metric_events (200 MB)
2022-04-10 10:34:46 �[43mdestination�[0m > 2022-04-10 10:34:46 �[32mINFO�[m i.a.i.d.s.StagingConsumerFactory(lambda$flushBufferFunction$3):155 - Flushing buffer for stream ticket_metric_events (200 MB) to staging
2022-04-10 10:34:46 �[43mdestination�[0m > 2022-04-10 10:34:46 �[32mINFO�[m i.a.i.d.r.BaseSerializedBuffer(flush):123 - Finished writing data to 61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz (200 MB)
2022-04-10 10:34:46 �[43mdestination�[0m > 2022-04-10 10:34:46 �[32mINFO�[m i.a.i.d.s.u.S3StreamTransferManagerHelper(getDefault):55 - PartSize arg is set to 10 MB
2022-04-10 10:34:46 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 7375000 (4 GB)
2022-04-10 10:34:46 �[43mdestination�[0m > 2022-04-10 10:34:46 �[32mINFO�[m a.m.s.StreamTransferManager(getMultiPartOutputStreams):329 - Initiated multipart upload to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with full ID BVGMe4jJBXyQDdM32HS_zbNtVHZCyzCDZgDusNvYGdiYBngj3NRlubV5VmXxLOJ5IIt6JZi1gftCA3c4TLLsJ4VF7bAsqsxETyy2cHaGwA1xFWqBu8HEQyCuw3t_0Vrt
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 1 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.MultiPartOutputStream(close):158 - Called close() on [MultipartOutputStream for parts 1 - 10000]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 4 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 2 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 3 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 8 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 6 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 5 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 7 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 11 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 9 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 13 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 12 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 10 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 14 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 15 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 16 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 17 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 18 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 20 containing 9.87 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Finished uploading [Part number 19 containing 10.01 MB]
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m a.m.s.StreamTransferManager(complete):395 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TICKET_METRIC_EVENTS/2022/04/10/04/34B26E9E-6EE4-46B9-8648-7137D30EE82F/61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz with id BVGMe4jJB...uw3t_0Vrt]: Completed
2022-04-10 10:34:47 �[43mdestination�[0m > 2022-04-10 10:34:47 �[32mINFO�[m i.a.i.d.r.FileBuffer(deleteFile):78 - Deleting tempFile data 61e3d532-62d7-478e-813b-32e84b5d8f293551921784484819405.csv.gz
2022-04-10 10:34:48 �[43mdestination�[0m > 2022-04-10 10:34:48 �[32mINFO�[m i.a.i.d.r.SerializedBufferingStrategy(lambda$addRecord$0):55 - Starting a new buffer for stream ticket_metric_events (current state: 177 MB in 5 buffers)
2022-04-10 10:34:48 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 7376000 (4 GB)
2022-04-10 10:34:48 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 7377000 (4 GB)
2022-04-10 10:34:48 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 7378000 (4 GB)
2022-04-10 10:34:48 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 7379000 (4 GB)
2022-04-10 10:34:50 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 7380000 (4 GB)
...
...
...
...
...
...
...
...
...
...
2022-04-10 11:08:06 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12137000 (5 GB)
2022-04-10 11:08:06 �[44msource�[0m > Read 10518449 records from ticket_metric_events stream
2022-04-10 11:08:06 �[44msource�[0m > Finished syncing ticket_metric_events
2022-04-10 11:08:06 �[44msource�[0m > SourceZendeskSupport runtimes:
Syncing stream groups 0:00:00.599228
Syncing stream organizations 0:00:37.764184
Syncing stream tags 0:00:01.396089
Syncing stream ticket_comments 5:05:56.198026
Syncing stream ticket_fields 0:00:00.772529
Syncing stream ticket_metric_events 1:13:48.532793
2022-04-10 11:08:06 �[44msource�[0m > Syncing stream: ticket_metrics
2022-04-10 11:08:07 �[43mdestination�[0m > 2022-04-10 11:08:07 �[32mINFO�[m i.a.i.d.r.SerializedBufferingStrategy(lambda$addRecord$0):55 - Starting a new buffer for stream ticket_metrics (current state: 344 MB in 6 buffers)
2022-04-10 11:08:08 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12138000 (5 GB)
2022-04-10 11:08:08 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12139000 (5 GB)
2022-04-10 11:08:09 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12140000 (5 GB)
2022-04-10 11:08:10 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12141000 (5 GB)
2022-04-10 11:08:10 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12142000 (5 GB)
...
...
2022-04-10 11:08:51 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12203000 (5 GB)
2022-04-10 11:08:52 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12204000 (5 GB)
2022-04-10 11:08:53 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12205000 (5 GB)
2022-04-10 11:08:53 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12206000 (5 GB)
2022-04-10 12:01:57 �[32mINFO�[m i.a.w.t.TemporalAttemptExecution(lambda$getCancellationChecker$3):191 - Running sync worker cancellation...
2022-04-10 12:01:57 �[32mINFO�[m i.a.w.DefaultReplicationWorker(cancel):377 - Cancelling replication worker...
2022-04-10 12:02:07 �[32mINFO�[m i.a.w.DefaultReplicationWorker(cancel):385 - Cancelling destination...
2022-04-10 12:02:07 �[32mINFO�[m i.a.w.p.a.DefaultAirbyteDestination(cancel):125 - Attempting to cancel destination process...
2022-04-10 12:02:07 �[32mINFO�[m i.a.w.p.a.DefaultAirbyteDestination(cancel):130 - Destination process exists, cancelling...
2022-04-10 12:02:07 �[32mINFO�[m i.a.w.DefaultReplicationWorker(run):163 - One of source or destination thread complete. Waiting on the other.
2022-04-10 12:02:07 �[33mWARN�[m i.a.c.i.LineGobbler(voidCall):86 - airbyte-destination gobbler IOException: Stream closed. Typically happens when cancelling a job.
2022-04-10 12:02:07 �[32mINFO�[m i.a.w.p.a.DefaultAirbyteDestination(cancel):132 - Cancelled destination process!
2022-04-10 12:02:07 �[32mINFO�[m i.a.w.DefaultReplicationWorker(cancel):392 - Cancelling source...
2022-04-10 12:02:07 �[32mINFO�[m i.a.w.p.a.DefaultAirbyteSource(cancel):142 - Attempting to cancel source process...
2022-04-10 12:02:07 �[32mINFO�[m i.a.w.p.a.DefaultAirbyteSource(cancel):147 - Source process exists, cancelling...
2022-04-10 12:02:10 �[32mINFO�[m i.a.w.p.a.DefaultAirbyteSource(cancel):149 - Cancelled source process!
2022-04-10 12:02:10 �[32mINFO�[m i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):305 - Total records read: 12206226 (5 GB)
2022-04-10 12:02:10 �[32mINFO�[m i.a.w.t.TemporalAttemptExecution(lambda$getCancellationChecker$3):195 - Interrupting worker thread...
2022-04-10 12:02:10 �[32mINFO�[m i.a.w.t.TemporalAttemptExecution(lambda$getCancellationChecker$3):198 - Cancelling completable future...
2022-04-10 12:02:10 �[33mWARN�[m i.a.w.t.CancellationHandler$TemporalCancellationHandler(checkAndHandleCancellation):54 - Job either timed out or was cancelled.
2022-04-10 12:02:10 �[33mWARN�[m i.a.w.t.CancellationHandler$TemporalCancellationHandler(checkAndHandleCancellation):54 - Job either timed out or was cancelled.
2022-04-10 12:02:10 �[32mINFO�[m i.a.w.t.TemporalAttemptExecution(get):131 - Stopping cancellation check scheduling...
2022-04-10 12:02:10 �[32mINFO�[m i.a.w.t.TemporalUtils(withBackgroundHeartbeat):235 - Stopping temporal heartbeating...
2022-04-10 12:02:10 �[1;31mERROR�[m i.a.w.DefaultReplicationWorker(run):169 - Sync worker failed.
java.lang.InterruptedException: null
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:386) ~[?:?]
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) ~[?:?]
	at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:164) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
	at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:57) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
	at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getWorkerThread$2(TemporalAttemptExecution.java:155) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
	Suppressed: io.airbyte.workers.WorkerException: Source process exit with code 143. This warning is normal if the job was cancelled.
		at io.airbyte.workers.protocols.airbyte.DefaultAirbyteSource.close(DefaultAirbyteSource.java:136) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
		at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:126) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
		at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:57) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
		at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getWorkerThread$2(TemporalAttemptExecution.java:155) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
		at java.lang.Thread.run(Thread.java:833) [?:?]
	Suppressed: java.io.IOException: Stream closed
		at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:445) ~[?:?]
		at java.io.OutputStream.write(OutputStream.java:162) ~[?:?]
		at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81) ~[?:?]
		at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142) ~[?:?]
		at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:320) ~[?:?]
		at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:160) ~[?:?]
		at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:248) ~[?:?]
		at java.io.BufferedWriter.flush(BufferedWriter.java:257) ~[?:?]
		at io.airbyte.workers.protocols.airbyte.DefaultAirbyteDestination.notifyEndOfStream(DefaultAirbyteDestination.java:98) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
		at io.airbyte.workers.protocols.airbyte.DefaultAirbyteDestination.close(DefaultAirbyteDestination.java:111) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
		at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:126) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
		at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:57) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
		at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getWorkerThread$2(TemporalAttemptExecution.java:155) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
		at java.lang.Thread.run(Thread.java:833) [?:?]
2022-04-10 12:02:10 �[33mWARN�[m i.t.i.a.POJOActivityTaskHandler(activityFailureToResult):307 - Activity failure. ActivityId=8e6c4fda-2eab-3300-a619-1c6adfaef928, activityType=Replicate, attempt=1
java.lang.RuntimeException: java.util.concurrent.CancellationException
	at io.airbyte.workers.temporal.TemporalUtils.withBackgroundHeartbeat(TemporalUtils.java:233) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
	at io.airbyte.workers.temporal.sync.ReplicationActivityImpl.replicate(ReplicationActivityImpl.java:106) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
	at jdk.internal.reflect.GeneratedMethodAccessor183.invoke(Unknown Source) ~[?:?]
	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
	at java.lang.reflect.Method.invoke(Method.java:568) ~[?:?]
	at io.temporal.internal.activity.POJOActivityTaskHandler$POJOActivityInboundCallsInterceptor.execute(POJOActivityTaskHandler.java:214) ~[temporal-sdk-1.8.1.jar:?]
	at io.temporal.internal.activity.POJOActivityTaskHandler$POJOActivityImplementation.execute(POJOActivityTaskHandler.java:180) ~[temporal-sdk-1.8.1.jar:?]
	at io.temporal.internal.activity.POJOActivityTaskHandler.handle(POJOActivityTaskHandler.java:120) ~[temporal-sdk-1.8.1.jar:?]
	at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:204) ~[temporal-sdk-1.8.1.jar:?]
	at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:164) ~[temporal-sdk-1.8.1.jar:?]
	at io.temporal.internal.worker.PollTaskExecutor.lambda$process$0(PollTaskExecutor.java:93) ~[temporal-sdk-1.8.1.jar:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: java.util.concurrent.CancellationException
	at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2478) ~[?:?]
	at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getCancellationChecker$3(TemporalAttemptExecution.java:201) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
	at io.airbyte.workers.temporal.CancellationHandler$TemporalCancellationHandler.checkAndHandleCancellation(CancellationHandler.java:53) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
	at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getCancellationChecker$4(TemporalAttemptExecution.java:204) ~[io.airbyte-airbyte-workers-0.35.64-alpha.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) ~[?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) ~[?:?]
	... 3 more
2022-04-10 12:02:10 �[32mINFO�[m i.a.w.DefaultReplicationWorker(run):228 - sync summary: io.airbyte.config.ReplicationAttemptSummary@d72992e[status=cancelled,recordsSynced=12206221,bytesSynced=6047463191,startTime=1649566057982,endTime=1649592130286,totalStats=io.airbyte.config.SyncStats@285c4666[recordsEmitted=12206221,bytesEmitted=6047463191,stateMessagesEmitted=5,recordsCommitted=0],streamStats=[io.airbyte.config.StreamSyncStats@169d41df[streamName=ticket_fields,stats=io.airbyte.config.SyncStats@4e5c44a0[recordsEmitted=292,bytesEmitted=404510,stateMessagesEmitted=<null>,recordsCommitted=<null>]], io.airbyte.config.StreamSyncStats@577e1fac[streamName=organizations,stats=io.airbyte.config.SyncStats@1d5b1c53[recordsEmitted=76331,bytesEmitted=33623227,stateMessagesEmitted=<null>,recordsCommitted=<null>]], io.airbyte.config.StreamSyncStats@61b4259e[streamName=groups,stats=io.airbyte.config.SyncStats@4d61044c[recordsEmitted=57,bytesEmitted=13742,stateMessagesEmitted=<null>,recordsCommitted=<null>]], io.airbyte.config.StreamSyncStats@6ac5f133[streamName=ticket_comments,stats=io.airbyte.config.SyncStats@2f39c1bc[recordsEmitted=1541783,bytesEmitted=4471454351,stateMessagesEmitted=<null>,recordsCommitted=<null>]], io.airbyte.config.StreamSyncStats@6fe18910[streamName=ticket_metrics,stats=io.airbyte.config.SyncStats@2ae7a3c4[recordsEmitted=68697,bytesEmitted=62794907,stateMessagesEmitted=<null>,recordsCommitted=<null>]], io.airbyte.config.StreamSyncStats@6fee83c2[streamName=ticket_metric_events,stats=io.airbyte.config.SyncStats@e3b9ba2[recordsEmitted=10518449,bytesEmitted=1479148462,stateMessagesEmitted=<null>,recordsCommitted=<null>]], io.airbyte.config.StreamSyncStats@6df12a7c[streamName=tags,stats=io.airbyte.config.SyncStats@7a1f4d85[recordsEmitted=612,bytesEmitted=23992,stateMessagesEmitted=<null>,recordsCommitted=<null>]]]]
2022-04-10 12:02:10 �[32mINFO�[m i.a.w.DefaultReplicationWorker(run):248 - Source output at least one state message
2022-04-10 12:02:10 �[33mWARN�[m i.a.w.DefaultReplicationWorker(run):258 - State capture: No new state, falling back on input state: io.airbyte.config.State@61f43197[state={}]
2022-04-10 12:02:10 �[32mINFO�[m i.a.v.j.JsonSchemaValidator(test):56 - JSON schema validation failed.
errors: $.access_token: is missing but it is required, $.credentials: does not have a value in the enumeration [oauth2.0], $.credentials: must be a constant value oauth2.0
2022-04-10 12:02:10 �[32mINFO�[m i.a.v.j.JsonSchemaValidator(test):56 - JSON schema validation failed.
errors: $.part_size: is not defined in the schema and the schema does not allow additional properties, $.access_key_id: is not defined in the schema and the schema does not allow additional properties, $.s3_bucket_name: is not defined in the schema and the schema does not allow additional properties, $.s3_bucket_region: is not defined in the schema and the schema does not allow additional properties, $.secret_access_key: is not defined in the schema and the schema does not allow additional properties, $.purge_staging_data: is not defined in the schema and the schema does not allow additional properties, $.method: does not have a value in the enumeration [Standard]
2022-04-10 12:02:10 �[32mINFO�[m i.a.v.j.JsonSchemaValidator(test):56 - JSON schema validation failed.
errors: $.part_size: is not defined in the schema and the schema does not allow additional properties, $.access_key_id: is not defined in the schema and the schema does not allow additional properties, $.s3_bucket_name: is not defined in the schema and the schema does not allow additional properties, $.s3_bucket_region: is not defined in the schema and the schema does not allow additional properties, $.secret_access_key: is not defined in the schema and the schema does not allow additional properties, $.purge_staging_data: is not defined in the schema and the schema does not allow additional properties, $.method: does not have a value in the enumeration [Internal Staging]


Steps to Reproduce

  1. Set up the Zendesk Support connector
  2. Run a sync

Are you willing to submit a PR?

No

@danieldiamond
Contributor Author

The issue seems to have been raised here before but was not resolved.

@bazarnov
Collaborator

Please share the full log, and do not cancel the sync manually; we need to see the exact place where the issue appears. From the partial logs you've shared in the description, I cannot confirm that the "Tickets" stream was even synced.

A 429 error is the typical API response when you hit the API rate limits. As far as I can see, it is handled correctly; this message is just informational and should not cause the sync to fail directly.
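For context on why a 429 should not kill the sync: Zendesk's rate-limit responses include a `Retry-After` header, and a well-behaved client sleeps for that long before retrying, falling back to exponential backoff when the header is absent. A minimal sketch of that pattern (not the connector's actual implementation — the real connector relies on the Airbyte CDK's backoff hooks, and `send` below is a hypothetical callable):

```python
import time

def backoff_seconds(headers, attempt):
    """Seconds to wait after a 429: prefer the server's Retry-After
    header, otherwise fall back to capped exponential backoff."""
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return max(int(retry_after), 0)
    return min(2 ** attempt, 60)

def get_with_retries(send, max_retries=5):
    """`send` is any callable returning (status_code, headers, body).

    Retries on 429 up to max_retries times, sleeping between attempts.
    """
    for attempt in range(max_retries):
        status, headers, body = send()
        if status != 429:
            return body
        time.sleep(backoff_seconds(headers, attempt))
    raise RuntimeError(f"still rate limited after {max_retries} retries")
```

Under this scheme a burst of 429s only slows the sync down; it should never crash the worker outright.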

@danieldiamond
Contributor Author

danieldiamond commented Apr 12, 2022

@bazarnov take a look at the logs above, specifically the time gap between these two lines:

2022-04-10 11:08:53 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12206000 (5 GB)
2022-04-10 12:01:57 INFO i.a.w.t.TemporalAttemptExecution(lambda$getCancellationChecker$3):191 - Running sync worker cancellation...

I have to cancel it; otherwise it just hangs and eventually my instance crashes.

It hangs at various points, but always on a "Records read" log line, e.g.:

2022-04-10 11:08:53 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 12206000 (5 GB)

How far it gets varies each time; sometimes I wait up to 6 hours before my instance crashes.
(screenshot attached: Screen Shot 2022-04-12 at 10 11 17 pm)

@danieldiamond
Contributor Author

danieldiamond commented Apr 12, 2022

I have reverted to Zendesk Support 0.1.12 and the sync appears to rate limit appropriately, albeit much slower since it doesn't use the latest version's request configuration. It also contains the proper logs to show backing off:

2022-04-12 12:12:33 source > The rate limit of requests is exceeded. Waiting for 27 seconds.
2022-04-12 12:12:33 source > Backing off _send(...) for 0.0s (airbyte_cdk.sources.streams.http.exceptions.UserDefinedBackoffException)
2022-04-12 12:13:03 source > Retrying. Sleeping for 27 seconds
2022-04-12 12:13:03 source > Setting state of users stream to {'updated_at': '2016-01-12T10:26:52Z', '_last_end_time': 1452711416}
2022-04-12 12:13:03 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 551000 (549 MB)

which the latest version does not show.

Also, the number of users is being capped at 100,000 in the latest version, but that does not appear to be the case with the earlier version (see other issue #11895)
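The "Waiting for 27 seconds" message above suggests the older connector derives its wait time from the response headers. A rough sketch of that idea is below; the header names (`Retry-After`, `X-Rate-Limit-Remaining`) are the ones Zendesk documents, but this is an illustration, not the connector's actual implementation.

```python
def seconds_to_wait(headers):
    """Derive a polite wait time from Zendesk-style rate-limit headers.

    On a 429, Zendesk sends Retry-After with the number of seconds to
    wait. On normal responses, X-Rate-Limit-Remaining reports how much
    of the per-minute quota is left, so a client can slow down before
    it gets throttled. Illustrative sketch only.
    """
    if "Retry-After" in headers:
        return int(headers["Retry-After"])
    # Proactively pause for the rest of the minute window when the quota runs out.
    remaining = int(headers.get("X-Rate-Limit-Remaining", 1))
    return 0 if remaining > 0 else 60
```

A client that checks these headers on every response can throttle itself smoothly instead of slamming into the limit and stalling.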

@danieldiamond
Contributor Author

Please see the attached full logs for a sync. There is no 429 error; the sync just hangs for 7 hours before I decide to cancel it.
logs-15777.txt

specific lines

2022-04-10 22:19:08 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 81000 (59 MB)
2022-04-11 05:03:18 INFO i.a.w.t.TemporalAttemptExecution(lambda$getCancellationChecker$3):191 - Running sync worker cancellation...

@bazarnov
Collaborator

You've got a lot of data. Please do not cancel the sync; let it fail, and if it fails, send the logs here. Thank you.

@danieldiamond
Contributor Author

Is something happening in those 7 hours that I'm not aware of?

In the previous comment I attached full logs, although yes, I cancelled it. Is there a reason I should let it hang beyond 7 hours? It ends up crashing our production Airbyte instance.

If there is a valid reason to let it run until failure (or until the Airbyte instance crashes, at which point the logs are still of no use), then please let me know.

@bazarnov
Collaborator

The reason I'm asking you to let it run until it crashes is the actual crash log; we need to understand the main point of failure, because from the logs provided above there is no sign that the Zendesk Support connector fails at all.
The backoff and retry messages in the logs are just part of the API calls. Sometimes it hits the rate limits, sometimes it doesn't; it depends on how much data you have, I believe, and on the API call rate.

So my suggestion here is to decrease the Start Date to the minimum value you can, say a couple of days before now, and proceed with the full refresh sync. As an option, you can select the Local CSV or Local JSON destinations to check the actual data afterwards.
It will also help us understand the core of the problem; if it crashes, we'll try to catch the bug.

The expected result: the Zendesk Support connector doesn't crash, and neither does your instance.

@misteryeo
Contributor

misteryeo commented Apr 19, 2022

Sorry to hijack this thread but wanted to confirm with you @danieldiamond that when you say:

This would be less of an issue if I were able to use the incremental stream. However, I am unable to use the incremental stream as it has its own issues i.e. returning duplicate records.

Does this issue accurately describe what you're surfacing: #10829?

We're actively working on improving this connector so thank you for helping us surface all these bugs and reporting them!

@danieldiamond
Contributor Author

Hey @misteryeo, thanks for reaching out. The issue on duplicate records was actually related to the "at-least-once" delivery framework, which is discussed further here.

TL;DR: I am not using any normalization method, so Incremental + Dedupe is not available, and incremental will always pull the latest record again.

@danieldiamond
Contributor Author

danieldiamond commented Apr 19, 2022

@bazarnov I left out this screenshot, but essentially on the older version 0.1.12 the incremental sync is fine, while on the later versions the API call count skyrockets and the sync hangs; see image below.

Screen Shot 2022-04-15 at 1 11 54 pm

@danieldiamond
Contributor Author

@bazarnov just FYI, there's still something going on with the main incremental stream.
I'm seeing the ticket_metric stream freeze on incremental sync.
The Users and Tickets streams work fine, but I assume that's because they use the updated class. All the other incremental streams based on SourceZendeskSupportStream seem to hang.

@alexchouraki
Contributor

I'm having the same issue as you, with the ticket_metric stream failing and this error message: "Rate limited by Zendesk edge protection". See logs attached.
logs-15174.txt
I'll revert the connector until it's fixed, I guess. Thanks for the details around this issue @danieldiamond!

@lluisgassovillarejo

Hi,
I'd like to add to this that I am getting the 429 error on satisfaction_ratings as well:

2022-06-15 07:44:59 source > Encountered an exception while reading stream satisfaction_ratings
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/abstract_source.py", line 115, in read
    yield from self._read_stream(
  File "/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/abstract_source.py", line 165, in _read_stream
    for record in record_iterator:
  File "/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/abstract_source.py", line 221, in _read_incremental
    for record_counter, record_data in enumerate(records, start=1):
  File "/airbyte/integration_code/source_zendesk_support/streams.py", line 294, in read_records
    self._retry(request=request, retries=retries, response=response, **kwargs)
  File "/airbyte/integration_code/source_zendesk_support/streams.py", line 262, in _retry
    raise DefaultBackoffException(request=request, response=response)
airbyte_cdk.sources.streams.http.exceptions.DefaultBackoffException: Request URL: https://honeylovehelp.zendesk.com/api/v2/satisfaction_ratings?start_time=1577836800&sort_by=asc&page=1011, Response Code: 429, Response Text: {
  "status": "429",
  "title":"Automated response",
  "detail":"Rate limited by Zendesk edge protection"
}

What is very odd (and I don't understand) is that Airbyte reports the sync as Failed after the 3 retries. However, at the bottom of the logs it says:

2022-06-15 07:47:30 normalization > 07:47:30 Completed successfully

And I see data loaded in BigQuery from the 1st of January 2020 (initial date in the source) to the 26th of September 2021. It contains around 102K rows.

It seems to me as if the connector is trying to synchronise all the data from the start date every time, instead of picking up on the last synchronised date.

I uploaded the log to Google drive for reference: https://drive.google.com/file/d/1u2dB6xx5Mqfyqsqn8MOlN1Au4n5L3xMh/view?usp=sharing

Thanks.
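Incremental syncs are supposed to avoid resyncing from the start date by checkpointing a cursor between runs. A minimal sketch of that idea is below; `fetch_page` and `last_end_time` are illustrative names, not the connector's real interface.

```python
def read_incremental(fetch_page, state):
    """Sketch of cursor-based incremental reading.

    fetch_page(cursor) returns (records, next_cursor). The state dict
    persists the cursor between syncs, so the next run resumes from the
    last synced position instead of the configured start date.
    Illustrative only, not the connector's actual code.
    """
    cursor = state.get("last_end_time", 0)
    records, next_cursor = fetch_page(cursor)
    yield from records
    # Checkpoint only forward, never backward.
    state["last_end_time"] = max(cursor, next_cursor)
```

If the saved cursor is never read back (or never emitted), every sync starts from the configured start date again, which would match the behaviour described above.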

@bazarnov
Collaborator

@lluisgassovillarejo
This is also covered in #13757
