Description: We encountered an issue where some Google Ads connections were failing with a "Null primary key found" error while syncing the campaign_shared_set_history table after the upgrade to Google Ads API v20.
Timeline: The issue started on Aug 01, 2025, at 11:30 UTC and was resolved on Aug 01, 2025, at 17:00 UTC.
Cause: The Google Ads API v20 upgrade introduced delete events for CampaignSharedCriterion, and our handling of these events did not reconcile the deletion key definition with the primary key definition for campaign_shared_set_history.
Resolution: The issue has been fully resolved, and all the affected connectors have returned to normal functionality.
Aug 1, 18:04 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Aug 1, 16:40 UTC
Update -
With the upgrade to v20, we started receiving deletes for CampaignSharedCriterion. The primary key behavior for campaign_shared_set_history changed with the Google API upgrade, and our logic did not reconcile the Deletion Key definition with the Primary Key definition for that table, which caused the failures.
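For illustration only, here is a minimal sketch (Python) of the kind of check involved. The primary key columns and helper shown are assumptions, not Fivetran's actual implementation; the point is that a delete event must carry a value for every primary key column of campaign_shared_set_history, otherwise the row cannot be matched and a "Null primary key found" error is raised.

# Hypothetical primary key columns assumed for campaign_shared_set_history.
PRIMARY_KEY = ("campaign_id", "shared_set_id")

def delete_key_for(event: dict) -> tuple:
    """Build the key used to mark history rows as deleted.

    Fails fast if any primary key component is missing or None, which is the
    symptom seen when the deletion payload no longer carries every primary
    key column after the API upgrade.
    """
    key = tuple(event.get(col) for col in PRIMARY_KEY)
    if any(part is None for part in key):
        raise ValueError(f"Null primary key found: {dict(zip(PRIMARY_KEY, key))}")
    return key

delete_key_for({"campaign_id": 123, "shared_set_id": 456})   # ok
# delete_key_for({"campaign_id": 123})                        # raises ValueError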
Aug 1, 16:39 UTC
Identified -
The issue has been identified and we are working to resolve it.
Aug 1, 16:30 UTC
Resolved -
Since releasing the hotfix, we have seen the number of pending syncs reduce back to the normal level. We do not expect any further delayed syncs related to this issue.
Jul 31, 11:40 UTC
Monitoring -
The root cause has been identified as a problem with passing requests between Fivetran's orchestrator services.
We have deployed a hotfix which is expected to resolve the issue. We will continue to monitor for delayed syncs.
Jul 31, 10:20 UTC
Identified -
An issue has been identified that is preventing Hybrid deployment syncs from being scheduled correctly.
Jul 31, 04:00 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description:
Some Greenhouse connectors began to fail, either with a NullPointerException or because the literal text [null] was appearing in some date-time columns. This was due to a recently introduced, incorrect regex transformation as well as improperly handled null webhook events.
Timeline:
This issue began on July 29, 2025, at 09:48 UTC and was resolved on July 30, 2025, at 17:08 UTC.
Cause:
We recently identified incorrect regex transformations that caused sync failures. Compounding this, the system encountered null webhook events, which were not correctly handled.
Resolution:
The incorrect regex was removed, and robust null event handling was implemented for webhooks.
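As an illustration of this resolution, here is a minimal sketch (Python) of null-safe date-time handling for webhook payloads. The field name and timestamp format are assumptions; the idea is simply to return a proper null instead of transforming a missing value, which is how the literal text [null] can otherwise end up in a date-time column.

from datetime import datetime
from typing import Optional

def parse_event_timestamp(payload: dict) -> Optional[datetime]:
    # Hypothetical field name; return None rather than transforming a missing value.
    raw = payload.get("created_at")
    if raw is None or str(raw).strip().lower() in ("", "null", "[null]"):
        return None
    # Assumed ISO-8601 input; this replaces the faulty regex transformation.
    return datetime.fromisoformat(str(raw).replace("Z", "+00:00"))

print(parse_event_timestamp({"created_at": "2025-07-29T09:48:00Z"}))  # 2025-07-29 09:48:00+00:00
print(parse_event_timestamp({"created_at": None}))                    # None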
Jul 31, 00:18 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Jul 30, 18:16 UTC
Update -
A fix has been raised and is now in the testing phase to confirm stability.
Jul 30, 14:46 UTC
Update -
We are continuing to work on a fix for this issue.
Jul 30, 07:00 UTC
Identified -
We’re actively investigating sync issues impacting some Greenhouse connectors.
Jul 30, 06:55 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified two issues for Shopify:
- Syncs were failing with a "Cannot find entity" error.
- Tables were missing from the connector schema tab.
Timeline: This issue began on July 28, 2025 around 5:00 AM UTC and was resolved on July 29, 2025 at 7:00 PM UTC.
Cause: There was an issue with the logic we used to populate the schema tab.
Resolution: We isolated the defect in the logic used to populate the schema tab and deployed a fix.
Jul 30, 09:02 UTC
Monitoring -
We have applied a fix that resolves the issue of tables not displaying in the schema tab and allows syncs to increment successfully. We will continue to monitor all affected connectors until they resume their normal sync functionality.
Jul 29, 20:03 UTC
Update -
We are currently investigating an issue with our Shopify connectors that is causing some connectors to fail with a variation of the following exception:
"Cannot find entity 'Table'"
In addition, certain tables are no longer displaying on the schema tab.
Jul 29, 17:20 UTC
Identified -
The issue has been identified and we are working to resolve it.
Jul 29, 17:15 UTC
Description: Triggering a subsequent manual sync via the API or the UI threw an error if the connector was still running a sync that had itself been started via the API or the UI.
Timeline: The issue began on July 24, 2025, at 10:00 UTC and was resolved on July 29, 2025, at 14:00 UTC.
Cause: Consecutive manual sync triggers were not handled correctly while a sync was already in progress, causing the trigger requests to fail.
Resolution: The issue has been fully resolved, and all affected services have returned to normal functionality.
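As a client-side illustration of how to avoid this error, here is a minimal sketch (Python) that checks whether a sync is already running before triggering another one. The endpoint paths follow Fivetran's public REST API as we understand it, but the response field names used here (data.status.sync_state) should be treated as assumptions; consult the REST API documentation for your account.

import requests

API = "https://api.fivetran.com/v1"
AUTH = ("API_KEY", "API_SECRET")  # placeholder credentials

def trigger_sync_if_idle(connector_id: str) -> bool:
    # Assumed response shape: data.status.sync_state is "syncing" while a sync runs.
    details = requests.get(f"{API}/connectors/{connector_id}", auth=AUTH, timeout=30)
    details.raise_for_status()
    state = details.json().get("data", {}).get("status", {}).get("sync_state")
    if state == "syncing":
        return False  # skip the trigger instead of hitting the error described above
    resp = requests.post(f"{API}/connectors/{connector_id}/sync", auth=AUTH, timeout=30)
    resp.raise_for_status()
    return True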
Jul 29, 10:00 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: Some SendGrid connectors began failing with a missing "marketing_campaigns.read" permission error. This was due to recent changes in how SendGrid handles API key scopes, which impacted access to marketing-related tables.
Timeline: The issue began on Jul 26, 2025, at 15:00 UTC and was resolved on Jul 28, 2025, at 15:30 UTC.
Cause: SendGrid began enforcing scope restrictions on API keys, removing marketing_campaigns.read for accounts without Marketing Campaigns enabled. This caused sync failures for connectors trying to access marketing-related tables.
Resolution: We deployed a backend fix to automatically exclude unsupported Marketing tables when the required scopes are missing, allowing affected connectors to sync successfully.
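For illustration, here is a minimal sketch (Python) of the scope-driven exclusion described above. The scopes endpoint is part of SendGrid's public v3 API, but the table names and filtering helper are assumptions rather than Fivetran's actual code.

import requests

# Hypothetical set of connector tables that require the Marketing Campaigns scope.
MARKETING_TABLES = {"campaign", "contact", "list", "segment"}  # assumed names

def tables_to_sync(api_key: str, all_tables: set) -> set:
    resp = requests.get(
        "https://api.sendgrid.com/v3/scopes",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    scopes = set(resp.json().get("scopes", []))
    if "marketing_campaigns.read" in scopes:
        return all_tables
    # Scope missing: skip the Marketing tables instead of failing the whole sync.
    return all_tables - MARKETING_TABLES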
Jul 29, 08:33 UTC
Monitoring -
We have implemented the fix and are now closely monitoring the results to ensure the issue is fully resolved.
Jul 28, 19:14 UTC
Update -
We are preparing a fix to exclude all Marketing tables when the required scope is absent.
Jul 28, 11:55 UTC
Identified -
Summary: Some SendGrid connectors are currently failing due to missing `marketing_campaigns.read` permissions. This permission is now being enforced by the SendGrid API for non-Event tables. Accounts without Marketing Campaigns enabled cannot grant this scope, leading to sync failures.
Workaround: If your SendGrid account does not use Marketing Campaigns, please go to your connector’s Schema tab and de-select all tables except Event. This will allow the connector to sync successfully.
Next Steps: Our engineering team is actively working on a permanent fix that will automatically exclude unsupported tables for accounts lacking the required permissions.
Jul 28, 07:55 UTC
Description: dbt could not install packages; the error was "External call exception: not a gzip file" when fetching a package from dbt Hub. This was an issue on the dbt Hub side.
Timeline: July 29, 2025, from 00:47 until 00:51 UTC.
Cause: The issue was caused by a failure on the dbt Hub side.
Resolution: The issue has been fully resolved, and all affected clusters have returned to normal functionality.
Jul 29, 01:00 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue for Facebook Ads connectors, which resulted in syncs failing with 500 Internal Server errors and socket connection errors.
Timeline: This issue began on 2025-07-28 at 9:52 AM UTC and was resolved on 2025-07-28 at 3:18 PM UTC.
Cause: The /advideos endpoint from the Facebook API started returning 500 Internal Server errors.
Resolution: A fix was implemented to handle the 500 errors and the number of retries on socket connection errors was increased.
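For illustration, here is a minimal sketch (Python) of the retry pattern described in the resolution, with bounded retries for transient HTTP 500 responses and connection errors. The Graph API version, retry counts, and parameters are assumptions.

import time
import requests

def fetch_ad_videos(account_id: str, token: str, max_retries: int = 5) -> dict:
    url = f"https://graph.facebook.com/v23.0/{account_id}/advideos"  # API version assumed
    for attempt in range(1, max_retries + 1):
        try:
            resp = requests.get(url, params={"access_token": token}, timeout=60)
            if resp.status_code == 500:
                raise requests.HTTPError("500 Internal Server Error", response=resp)
            resp.raise_for_status()
            return resp.json()
        except (requests.HTTPError, requests.ConnectionError, requests.Timeout):
            if attempt == max_retries:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff between retries
    return {}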
Jul 28, 21:00 UTC
Monitoring -
A fix has been implemented, and we are currently monitoring the results.
Jul 28, 16:08 UTC
Update -
We are currently encountering issues with the /advideos endpoint. The source is returning a 500 Internal Server Error along with socket connection errors when requests are made. Our team has implemented a fix to handle the 500 error and has increased the retry attempts for socket connection errors.
Jul 28, 15:46 UTC
Identified -
The issue has been identified and we are working to resolve it.
Jul 28, 15:30 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue affecting the SDK Connectors where the syncs are failing with the error "Unable to start the docker daemon process within timeout".
Timeline: This issue began on 28/07/2025 at 07:00 UTC and was resolved on 28/07/2025 at 10:30 UTC.
Cause: The issue was caused by a recent change in the connector startup process.
Resolution: The changes were reverted to resolve the issue.
Jul 28, 09:33 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Jul 28, 08:26 UTC
Identified -
We identified an issue affecting the SDK Connectors where the syncs are failing with the error "Unable to start the docker daemon process within timeout".
Jul 28, 08:20 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified missing retry logic for REST API-triggered syncs in our internal infrastructure, causing some syncs to silently fail.
Timeline: This issue began on 2025-07-24 at 11:25 UTC and was resolved on 2025-07-25 at 18:15 UTC.
Cause: The issue was due to missing retry logic after migrating API-triggered connectors to the scheduling system.
Resolution: We implemented and deployed a hot-fix to add the necessary missing retry logic for manually triggered syncs.
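For illustration, here is a minimal sketch (Python) of the kind of retry logic described, retrying the hand-off of an API-triggered sync to the scheduler instead of dropping it silently. The function names and retry policy are assumptions, not Fivetran's internal code.

import logging
import time

def submit_with_retry(submit, sync_request, attempts: int = 3, delay_s: float = 5.0) -> bool:
    """Retry the scheduler hand-off; 'submit' is any callable that may fail transiently."""
    for attempt in range(1, attempts + 1):
        try:
            submit(sync_request)
            return True
        except Exception as exc:  # narrow this to transport errors in real code
            logging.warning("sync hand-off failed (attempt %d/%d): %s", attempt, attempts, exc)
            if attempt < attempts:
                time.sleep(delay_s)
    # Surface the failure instead of letting the request disappear silently.
    raise RuntimeError(f"sync request not scheduled after {attempts} attempts: {sync_request}")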
Jul 25, 20:35 UTC
Monitoring -
A fix has been implemented, and we are monitoring the results.
Jul 25, 18:56 UTC
Update -
We've prepared a fix, and it is currently under review. We expect that adding the retry logic to REST API-triggered syncs will resolve the problem.
Jul 25, 11:00 UTC
Identified -
We've identified missing retry logic for REST API-triggered syncs in our internal infrastructure, causing some syncs to silently fail. Our team is actively working on a fix.
Jul 25, 03:46 UTC
Resolved -
We have resolved this incident.
Jul 24, 14:30 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Jul 24, 11:30 UTC
Update -
A fix has been created for this issue. We are now testing it to confirm that the degraded performance is resolved once the fix is applied.
Jul 24, 08:40 UTC
Update -
We are continuing to work on a fix for this issue.
The common unexpected behaviors seen with this issue are:
- Syncs do not get triggered.
- Airflow pipelines miss acknowledgement.
- The /sync endpoint fails to return a response.
This issue is specific to workflows that use Fivetran's REST API.
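For workflows affected by this incident, here is a minimal client-side sketch (Python) that triggers a sync and verifies the acknowledgement, so a missing or failed response is surfaced rather than silently ignored. The endpoint path follows Fivetran's public REST API, but the response field checked here ("code") is an assumption.

import requests

def trigger_sync(connector_id: str, auth: tuple) -> dict:
    resp = requests.post(
        f"https://api.fivetran.com/v1/connectors/{connector_id}/sync",
        auth=auth,
        timeout=30,
    )
    resp.raise_for_status()
    body = resp.json()
    # Fail loudly if the acknowledgement is missing, rather than assuming the sync was queued.
    if body.get("code") != "Success":
        raise RuntimeError(f"sync trigger not acknowledged: {body}")
    return body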
Jul 23, 21:07 UTC
Identified -
The issue has been identified and we are working to resolve it.
Jul 23, 20:45 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Jul 24, 14:12 UTC
Monitoring -
We have deployed a fix for this issue, and we will continue to monitor all affected connections until they resume their normal sync functionality.
Jul 24, 13:44 UTC
Update -
We have identified an issue where Oracle RDS connections are failing with an InvalidFormatException error. We are currently investigating in more detail and will provide additional updates accordingly.
Jul 24, 11:07 UTC
Identified -
The issue has been identified and we are working to resolve it.
Jul 24, 11:05 UTC
Resolved -
This incident has been resolved. We have observed that instance rates returned to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue with the Amplitude connection, which resulted in syncs failing with a "Credentials for Amazon S3 were not provided" error.
Timeline: This issue began on 2025-07-22 at 04:09 PM UTC and was resolved on 2025-07-22 at 09:29 PM UTC.
Cause: The issue occurred due to a recent change related to the support for the S3 bucket export mechanism.
Resolution: We have deployed a code fix to resolve the issue.
Jul 23, 06:00 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Jul 23, 03:55 UTC
Identified -
We have identified that the Amplitude connections are failing with this error "Credentials for Amazon S3 were not provided".
Jul 22, 20:05 UTC
Resolved -
This incident has been resolved. The fix for the "HTTP 401 Unauthorized" error has been deployed. We have observed that instance rates returned to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue for Criteo, which resulted in syncs failing with a 500 Internal Server and 401 Unauthorized error.
Timeline: This issue began on 2025-07-21 at 02:04 PM UTC and was resolved on 2025-07-22 at 12:44 AM UTC.
Cause: The issue occurred from the source (Criteo) API endpoint, causing 500 and 401 errors.
Resolution:
1. For the 500 Internal Server Error, we deployed a fix to skip the problematic advertiser IDs.
2. For the 401 Unauthorized error, we deployed a fix to lower the number of retries.
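For illustration, here is a minimal sketch (Python) of the skip-and-continue pattern from the resolution. The advertiser IDs, fetch helper, and retry cap are assumptions.

import logging

# Hypothetical advertiser IDs whose Creatives API calls return HTTP 500.
SKIPPED_ADVERTISER_IDS = {"12345", "67890"}

def sync_creatives(advertiser_ids, fetch_creatives, max_retries: int = 2):
    """fetch_creatives(advertiser_id) is assumed to raise on HTTP 500 / 401 responses."""
    results = {}
    for advertiser_id in advertiser_ids:
        if advertiser_id in SKIPPED_ADVERTISER_IDS:
            logging.info("skipping advertiser %s (known 500s from the Creatives API)", advertiser_id)
            continue
        for attempt in range(1, max_retries + 1):  # lowered retry cap for 401s
            try:
                results[advertiser_id] = fetch_creatives(advertiser_id)
                break
            except Exception:  # narrow to HTTP errors in real code
                if attempt == max_retries:
                    raise
    return results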
Jul 22, 07:38 UTC
Update -
The fix has been deployed and connectors are no longer failing with "Error: HTTP 500 Internal Server Error".
We have found a few instances still failing with "HTTP 401 Unauthorized", and a solution is currently being deployed.
Jul 21, 22:46 UTC
Monitoring -
We are currently skipping the problematic advertiser IDs that were causing 500 Internal Server Errors in the Creatives API to ensure successful syncs. A fix has been deployed, and we are monitoring the connectors.
Jul 21, 19:30 UTC
Update -
The Criteo API is currently returning an HTTP 500 Internal Server Error when accessing the Creatives API endpoints for certain advertisers.
Jul 21, 17:33 UTC
Identified -
The issue has been identified and we are working to resolve it.
Jul 21, 17:15 UTC
Completed -
The scheduled maintenance has been completed.
Jul 19, 10:30 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jul 19, 10:00 UTC
Scheduled -
We are performing scheduled maintenance on the Polaris Catalog. No downtime or connection interruptions are expected, but brief disruptions may occur for Managed Data Lakes Service destinations.
Jul 11, 14:19 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue for pinterest_ads, which resulted in syncs failing with a 500 Internal Server Error.
Timeline: This issue began on 2025-07-18 at 11:20 PM UTC and was resolved on 2025-07-19 at 1:10 AM UTC.