Update - We are continuing to observe improvement in sync success rates for the impacted Qualtrics connectors. Over the last 3 hours, failure rates have decreased.
Our team remains actively engaged in monitoring all affected connectors to ensure stability.
Aug 08, 2025 - 03:57 UTC
Monitoring - We have deployed a fix for this issue.
We will continue to monitor all affected connectors until they resume their normal sync functionality.
Aug 07, 2025 - 19:50 UTC
Update - Our IP address has been blocked by Akamai, the content delivery network used by Qualtrics. We are in active communication with the Qualtrics team to expedite a resolution to this issue.
Aug 07, 2025 - 14:19 UTC
Identified - We are observing continued issues with the directory_contacts endpoint in SAP Qualtrics affecting some connectors. Specifically, we are receiving 403 Forbidden errors when calling the directories/directoryId/contacts endpoint. This appears to be a source-side issue, and we have reached out to Qualtrics support.
We are actively following up with Qualtrics and continuing our internal investigation.
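For reference, the failing request pattern resembles the minimal sketch below; the data center host, token handling, and IDs are illustrative placeholders, not our production code.

```python
import requests

# Illustrative placeholders only; the real data center, token, and IDs differ.
BASE_URL = "https://yul1.qualtrics.com/API/v3"
API_TOKEN = "<qualtrics-api-token>"

def list_directory_contacts(directory_id: str) -> dict:
    """Fetch one page of directory contacts; a 403 here is the failure described above."""
    resp = requests.get(
        f"{BASE_URL}/directories/{directory_id}/contacts",
        headers={"X-API-TOKEN": API_TOKEN},
        timeout=30,
    )
    if resp.status_code == 403:
        # Matches the intermittent "HTTP 403 Forbidden" responses from the source.
        raise PermissionError(f"403 Forbidden from Qualtrics: {resp.text[:200]}")
    resp.raise_for_status()
    return resp.json()
```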
Aug 07, 2025 - 07:40 UTC
Investigating - We have identified an issue where some of the Qualtrics connectors are failing with an "Uh oh! We were unable to connect to SAP Qualtrics server using your Credentials.
Error : HTTP 403 Forbidden" exception. We are currently investigating in more detail and will provide additional updates accordingly.
Aug 07, 2025 - 07:32 UTC
Database connectors: Operational, 99.99% uptime over the past 90 days
Per-connector status and uptime over the past 90 days:
Amazon Aurora MySQL: Operational, 99.97%
Amazon Aurora PostgreSQL: Operational, 99.97%
Azure Database for MariaDB: Operational, 100.0%
Azure Database for MySQL: Operational, 100.0%
Azure Database for PostgreSQL: Operational, 99.97%
Azure SQL Database: Operational, 99.97%
Azure SQL Managed Instance: Operational, 100.0%
Amazon DynamoDB: Operational, 100.0%
Google Cloud SQL for MySQL: Operational, 100.0%
Google Cloud SQL for PostgreSQL: Operational, 99.97%
Google Cloud SQL for SQL Server: Operational, 100.0%
Heroku PostgreSQL: Operational, 100.0%
Magento MySQL: Operational, 100.0%
Magento MySQL RDS: Operational, 100.0%
MariaDB: Operational, 100.0%
Amazon RDS for MariaDB: Operational, 100.0%
MongoDB: Operational, 99.97%
MongoDB Sharded: Operational, 100.0%
MySQL: Operational, 100.0%
MySQL RDS: Operational, 100.0%
Oracle: Operational, 100.0%
Oracle EBS: Operational, 100.0%
Oracle RAC: Operational, 100.0%
Oracle RDS: Operational, 99.96%
PostgreSQL: Operational, 100.0%
Amazon RDS for PostgreSQL: Operational, 99.97%
SAP HANA: Operational, 100.0%
SQL Server: Operational, 100.0%
Amazon RDS for SQL Server: Operational, 100.0%
Amazon DocumentDB: Operational, 100.0%
High-Volume Agent Oracle: Operational, 100.0%
High Volume Agent Db2 for i: Operational, 100.0%
High-Volume Agent SQL Server: Operational, 100.0%
Snowflake: Operational, 100.0%
High-Volume Agent SAP ECC on Oracle: Operational, 100.0%
Elastic Cloud: Operational, 99.97%
Self Hosted Elasticsearch: Operational, 99.97%
Open Distro: Operational, 100.0%
Opensearch: Operational, 100.0%
High-Volume Agent SAP ECC on Oracle with NetWeaver: Operational, 100.0%
Azure Cosmos DB for NoSQL: Operational, 100.0%
High-Volume Agent SAP ECC on Db2 for i: Operational
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary Description: We identified an issue for Qualtrics which resulted in syncs failing with an "Uh oh! We were unable to connect to SAP Qualtrics server using your Credentials. Error : HTTP 403 Forbidden" error.
Timeline: This issue began on 2025-08-06 at 01:00 UTC and was resolved on 2025-08-06 at 18:30 UTC.
Cause: There was an issue from the Qualtrics side; we were receiving 403 Forbidden Exception intermittently for a few API endpoints.
Resolution: We have mitigated the issue on our end and contacted Qualtrics support to investigate further. We will update the incident summary once we receive more information from them.
Aug 6, 20:30 UTC
Monitoring -
We have deployed a fix for this issue.
We will continue to monitor all affected connectors until they resume their normal sync functionality.
Aug 6, 19:20 UTC
Identified -
We have identified an issue where some of the Qualtrics connectors are failing with an "Uh oh! We were unable to connect to SAP Qualtrics server using your Credentials. Error : HTTP 403 Forbidden" exception. We are currently investigating in more detail and will provide additional updates accordingly.
Aug 6, 15:50 UTC
Resolved -
Incident Summary: This incident has been fully resolved. Instance rates have returned to normal, and logs are now correctly showing failed transformation runs.
Description: DBT logs were not appearing for failed transformation jobs.
Timeline: The issue started on Aug 04, 2025, at 12:00 UTC and was resolved on Aug 06, 2025, at 00:45 UTC
Cause: The issue was introduced by new logic related to auto-pausing failing jobs, which contained faulty conditional handling.
Resolution: Corrected logic was deployed, restoring proper logging of failed runs.
Note: Transformation execution was never impacted. The issue only affected the saving and display of failure causes and results.
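As a rough illustration of the bug class described above, an early return in a newly added auto-pause branch can skip persisting the failure result; the names and structure below are hypothetical, not Fivetran's actual code.

```python
# Hypothetical sketch of the bug class; names are illustrative, not actual code.

class ResultStore:
    def __init__(self):
        self.saved = []

    def save_result(self, run):
        self.saved.append(run)

def handle_run_result_buggy(run, store, pause_job):
    if run["failed"]:
        if run["should_auto_pause"]:
            pause_job(run["job"])
            return  # Bug: early return skips saving the failure result and its logs.
    store.save_result(run)

def handle_run_result_fixed(run, store, pause_job):
    # Corrected ordering: always persist the result (and its logs) first,
    # then decide whether to auto-pause the failing job.
    store.save_result(run)
    if run["failed"] and run["should_auto_pause"]:
        pause_job(run["job"])

if __name__ == "__main__":
    store = ResultStore()
    run = {"job": "daily_models", "failed": True, "should_auto_pause": True}
    handle_run_result_fixed(run, store, lambda job: print(f"paused {job}"))
    print(store.saved)  # The failed run is recorded, so its failure cause stays visible.
```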
Aug 6, 02:52 UTC
Update -
Corrected logic was deployed, restoring proper logging of failed runs.
Aug 6, 02:48 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Aug 6, 01:40 UTC
Update -
The issue does not impact the actual execution of transformations. Instead, it affects the handling of failed transformation runs, specifically the saving of results and related logs. When a transformation run fails, we are currently unable to correctly store the failure result, which is why the expected logs or error messages are not visible. However, this does not interfere with the transformation process itself.
Aug 6, 00:36 UTC
Identified -
The issue has been identified and we are working to resolve it.
Aug 5, 21:35 UTC
Resolved -
Description: Several connector services faced sync failures due to a transient Cloud KMS issue.
Timeline: The issue began on Aug 06, 2025, at 02:36 UTC and was resolved on Aug 06, 2025, at 04:00 UTC.
Cause: The incident was caused by an external issue with Cloud KMS, which temporarily disrupted key operations required during sync processing.
Resolution: The issue auto-resolved once the underlying Cloud KMS service recovered, restoring normal sync behavior. Note: No data loss occurred during this period. Impact was limited to sync execution delays and failures.
Aug 6, 02:30 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We encountered an issue where PayPal connectors started failing due to an API rate limit issue.
Timeline: The issue started on Aug 03, 2025, at 08:30 UTC and was resolved on Aug 03, 2025, at 15:19 UTC
Cause: The issue was caused by sustained rate limiting from the source API endpoint (/v1/reporting/transactions) for all transaction-related tables (e.g., TRANSACTION and BALANCE).
Resolution: We initially disabled the balance endpoint to mitigate the issue. We later re-enabled the balance endpoint and added logic to handle rate limits more gracefully (using proactive rate-limit handling and rescheduling the sync for one hour when rate limits are encountered).
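A minimal sketch of that rescheduling behavior, assuming the rate limit surfaces as an HTTP 429; the exception and delay handling shown are illustrative, not the connector's actual implementation.

```python
import datetime
import requests

RESCHEDULE_DELAY = datetime.timedelta(hours=1)  # per the resolution above

class RateLimited(Exception):
    """Raised so the scheduler can reschedule the sync instead of failing it."""
    def __init__(self, retry_at: datetime.datetime):
        super().__init__(f"rate limited; retry at {retry_at.isoformat()}")
        self.retry_at = retry_at

def fetch_transactions(session: requests.Session, params: dict) -> dict:
    # Illustrative call to the reporting endpoint named in this incident.
    resp = session.get(
        "https://api-m.paypal.com/v1/reporting/transactions",
        params=params,
        timeout=60,
    )
    if resp.status_code == 429:
        # Instead of retrying in a tight loop, surface a signal that lets the
        # scheduler push the whole sync out by one hour.
        retry_at = datetime.datetime.now(datetime.timezone.utc) + RESCHEDULE_DELAY
        raise RateLimited(retry_at)
    resp.raise_for_status()
    return resp.json()
```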
Aug 4, 08:38 UTC
Monitoring -
We have raised a fix to disable the affected endpoint "Balance" and we are monitoring the results.
Aug 3, 15:12 UTC
Identified -
We have identified that the failures are occurring on transaction-related tables (TRANSACTION, BALANCE, INCENTIVE, etc.) due to persistent rate limiting from the PayPal API endpoint /v1/reporting/transactions.
Aug 3, 11:15 UTC
Description: We encountered an issue where some Google Ads connections were failing with a "Null primary key found" error while syncing the campaign_shared_set_history table after the update to Google Ads API v20.
Timeline: The issue started on Aug 01, 2025, at 11:30 UTC and was resolved on Aug 01, 2025, at 17:00 UTC.
Cause: The v20 upgrade changed behavior related to the table's primary key and introduced deletes for CampaignSharedCriterion, which our handling logic did not account for (see the update below).
Resolution: The issue has been fully resolved, and all the affected connectors have returned to normal functionality.
Aug 1, 18:04 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Aug 1, 16:40 UTC
Update -
With the upgrade to v20, we started to receive deletes for CampaignSharedCriterion. Something related to the primary key changed with the upgrade in the Google APIs, which exposed a mismatch between the Deletion Key definition and the Primary Key definition for campaign_shared_set_history.
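Schematically, the failure mode can be pictured as below; the column names and key definitions are illustrative assumptions, not the connector's actual schema code.

```python
# Illustrative sketch of the key mismatch; not actual connector code.

PRIMARY_KEY = ("campaign_id", "shared_set_id", "criterion_id")   # expected by the table
DELETION_KEY = ("campaign_id", "shared_set_id")                  # carried by delete events

def build_delete_row(event: dict) -> dict:
    """Build a campaign_shared_set_history row from a CampaignSharedCriterion delete."""
    row = {col: event.get(col) for col in PRIMARY_KEY}
    missing = [col for col in PRIMARY_KEY if row[col] is None]
    if missing:
        # This is the "Null primary key found" condition: the delete event does
        # not supply every column the primary key definition expects.
        raise ValueError(f"Null primary key found for columns: {missing}")
    return row

if __name__ == "__main__":
    delete_event = {"campaign_id": "123", "shared_set_id": "456"}
    try:
        build_delete_row(delete_event)
    except ValueError as err:
        print(err)  # mirrors the sync failure seen during this incident
```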
Aug 1, 16:39 UTC
Identified -
The issue has been identified and we are working to resolve it.
Aug 1, 16:30 UTC
Resolved -
Since releasing the hotfix, we have seen the number of pending syncs reduce back to the normal level. We do not expect any further delayed syncs related to this issue.
Jul 31, 11:40 UTC
Monitoring -
The root cause of the issue has been identified as an issue with how requests were passed between Fivetran's orchestrator services.
We have deployed a hotfix which is expected to resolve the issue. We will continue to monitor for delayed syncs.
Jul 31, 10:20 UTC
Identified -
An issue has been identified that is preventing Hybrid deployment syncs from being scheduled correctly.
Jul 31, 04:00 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description:
Some Greenhouse connectors began to fail, either with a NullPointerException or because the literal text [null] appeared in some date-time columns. This was due to a recently introduced, incorrect regex transformation as well as improperly handled null webhook events.
Timeline:
This issue began on July 29, 2025, at 09:48 UTC and was resolved on July 30, 2025, at 17:08 UTC.
Cause:
We recently identified incorrect regex transformations that caused sync failures. Compounding this, the system encountered null webhook events, which were not correctly handled.
Resolution:
The incorrect regex was removed, and robust null event handling was implemented for webhooks.
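A simplified sketch of the kind of defensive handling described in the resolution; the event shape and field names are assumptions for illustration.

```python
import logging

logger = logging.getLogger("greenhouse_webhooks")

def handle_webhook_event(event):
    """Process one webhook event, tolerating null events and null payloads."""
    # Robust null handling: skip empty events instead of failing the sync
    # with a NullPointerException-style error further down the pipeline.
    if event is None or event.get("payload") is None:
        logger.warning("Skipping null or empty webhook event: %r", event)
        return None
    payload = event["payload"]
    # Pass date-time values through unchanged; the removed regex transformation
    # had been rewriting some of them to the literal text "[null]".
    payload["updated_at"] = payload.get("updated_at")
    return payload

if __name__ == "__main__":
    print(handle_webhook_event(None))  # skipped safely
    print(handle_webhook_event({"payload": {"updated_at": "2025-07-30T12:00:00Z"}}))
```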
Jul 31, 00:18 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Jul 30, 18:16 UTC
Update -
A fix has been raised and is now in the testing phase to confirm stability.
Jul 30, 14:46 UTC
Update -
We are continuing to work on a fix for this issue.
Jul 30, 07:00 UTC
Identified -
We’re actively investigating sync issues impacting some Greenhouse connectors.
Jul 30, 06:55 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary Description: We identified two issues for Shopify: syncs were failing with a "Cannot find entity" error, and tables were missing from the connector schema tab.
Timeline: This issue began on July 28, 2025 around 5:00 AM UTC and was resolved on July 29, 2025 at 7:00 PM UTC.
Cause: There was an issue with the logic we used to populate the schema tab.
Resolution: We isolated the defect in the logic used to populate the schema tab and deployed a fix.
Jul 30, 09:02 UTC
Monitoring -
We have applied a fix that resolves the issue of tables not displaying in the schema tab and allows syncs to increment successfully. We will continue to monitor all affected connectors until they resume their normal sync functionality.
Jul 29, 20:03 UTC
Update -
We are currently investigating an issue with our Shopify connectors that is causing some connectors to fail with a variation of the following exception:
"Cannot find entity 'Table'"
In addition, certain tables are no longer displaying on the schema tab.
Jul 29, 17:20 UTC
Identified -
The issue has been identified and we are working to resolve it.
Jul 29, 17:15 UTC
Description: Triggering a sync manually via the API or the UI threw an error if the connector was still running a sync that had also been started via the API or from the UI.
Timeline: July 24 at 10:00 UTC until July 29 at 14:00 UTC.
Cause: Consecutive manual sync triggers caused the syncs to fail.
Resolution: The issue has been fully resolved, and all affected services have returned to normal functionality.
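For reference, a manual trigger of this kind resembles the sketch below against Fivetran's REST API; the connector ID and credentials are placeholders, and the error handling shown is illustrative rather than documented behavior.

```python
import requests

API_BASE = "https://api.fivetran.com/v1"
CONNECTOR_ID = "<connector_id>"          # placeholder
AUTH = ("<api_key>", "<api_secret>")     # placeholder credentials

def trigger_manual_sync() -> dict:
    """Manually trigger a sync, as the affected API/UI actions do."""
    resp = requests.post(
        f"{API_BASE}/connectors/{CONNECTOR_ID}/sync",
        auth=AUTH,
        timeout=30,
    )
    if resp.status_code >= 400:
        # During the incident window, a second manual trigger issued while a
        # previous manual sync was still running landed in this branch.
        raise RuntimeError(f"Sync trigger failed: {resp.status_code} {resp.text[:200]}")
    return resp.json()
```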
Jul 29, 10:00 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: Some SendGrid connectors began failing with a missing "marketing_campaigns.read" permission error. This was due to recent changes in how SendGrid handles API key scopes, which impacted access to marketing-related tables.
Timeline: The issue began on Jul 26, 2025, at 15:00 UTC and was resolved on Jul 28, 2025, at 15:30 UTC.
Cause: SendGrid began enforcing scope restrictions on API keys, removing marketing_campaigns.read for accounts without Marketing Campaigns enabled. This caused sync failures for connectors trying to access marketing-related tables.
Resolution: We deployed a backend fix to automatically exclude unsupported Marketing tables when the required scopes are missing, allowing affected connectors to sync successfully.
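A rough sketch of the scope-based exclusion described above, using SendGrid's scopes endpoint; the table-to-scope mapping is an illustrative assumption, not the connector's actual table list.

```python
import requests

SENDGRID_API = "https://api.sendgrid.com/v3"
MARKETING_TABLES = {"campaign", "contact_list", "sender"}  # illustrative mapping

def get_api_key_scopes(api_key: str) -> set:
    """Return the set of scopes granted to this SendGrid API key."""
    resp = requests.get(
        f"{SENDGRID_API}/scopes",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return set(resp.json().get("scopes", []))

def select_syncable_tables(api_key: str, requested: set) -> set:
    # Mirror the backend fix: drop marketing-related tables when the key lacks
    # the marketing_campaigns.read scope, instead of failing the sync.
    if "marketing_campaigns.read" not in get_api_key_scopes(api_key):
        return requested - MARKETING_TABLES
    return requested
```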
Jul 29, 08:33 UTC
Monitoring -
We have implemented the fix and are now closely monitoring the results to ensure the issue is fully resolved.
Jul 28, 19:14 UTC
Update -
We are preparing a fix to exclude all marketing tables in case of scope absence.
Jul 28, 11:55 UTC
Identified -
Summary: Some SendGrid connectors are currently failing due to missing `marketing_campaigns.read` permissions. This permission is now being enforced by the SendGrid API for non-Event tables. Accounts without Marketing Campaigns enabled cannot grant this scope, leading to sync failures.
Workaround: If your SendGrid account does not use Marketing Campaigns, please go to your connector’s Schema tab and de-select all tables except Event. This will allow the connector to sync successfully.
Next Steps: Our engineering team is actively working on a permanent fix that will automatically exclude unsupported tables for accounts lacking the required permissions.
Jul 28, 07:55 UTC
Description: dbt could not install packages; the error "External call exception: not a gzip file" occurred when fetching a package from dbt Hub. This was an issue on the dbt Hub side.
Timeline: July 29, from 00:47 until 00:51 UTC.
Cause: The issue was caused by some failure on the dbt hub side.
Resolution: The issue has been fully resolved, and all affected clusters have returned to normal functionality.
Jul 29, 01:00 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue for Facebook Ads connectors which resulted in syncs failing with 500 Internal Server and Socket Connection Errors.
Timeline: This issue began on 2025-07-28 at 09:52 UTC and was resolved on 2025-07-28 at 15:18 UTC.
Cause: The /advideos endpoint from the Facebook API started returning 500 Internal Server errors.
Resolution: A fix was implemented to handle the 500 errors and the number of retries on socket connection errors was increased.
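A simplified sketch of that retry behavior; the Graph API version, retry count, and backoff values are illustrative assumptions rather than the connector's actual settings.

```python
import time
import requests

MAX_ATTEMPTS = 5        # illustrative; the actual retry count was simply increased
BACKOFF_SECONDS = 10

def fetch_ad_videos(session: requests.Session, account_id: str, token: str) -> dict:
    """Fetch /advideos, retrying transient 500s and socket-level failures."""
    url = f"https://graph.facebook.com/v19.0/act_{account_id}/advideos"
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            resp = session.get(url, params={"access_token": token}, timeout=60)
        except (requests.ConnectionError, requests.Timeout):
            # Socket connection errors are retried with a simple linear backoff.
            time.sleep(BACKOFF_SECONDS * attempt)
            continue
        if resp.status_code == 500:
            # Transient 500s from /advideos are retried rather than failing the sync.
            time.sleep(BACKOFF_SECONDS * attempt)
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("/advideos request kept failing after retries")
```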
Jul 28, 21:00 UTC
Monitoring -
A fix has been implemented, and we are currently monitoring the results.
Jul 28, 16:08 UTC
Update -
We are currently encountering issues with the /advideos endpoint. The source is returning a 500 Internal Server Error along with socket connection errors when requests are made. Our team has implemented a fix to handle the 500 error and has increased the retry attempts for socket connection errors.
Jul 28, 15:46 UTC
Identified -
The issue has been identified and we are working to resolve it.
Jul 28, 15:30 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue affecting the SDK Connectors where the syncs are failing with the error "Unable to start the docker daemon process within timeout".
Timeline: This issue began on 2025-07-28 at 07:00 UTC and was resolved on 2025-07-28 at 10:30 UTC.
Cause: The issue was caused by a recent change in the connector startup process.
Resolution: The changes were reverted to resolve the issue.
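For context, the failure corresponds to a readiness wait like the sketch below timing out; the timeout value and the `docker info` readiness check are illustrative assumptions.

```python
import subprocess
import time

STARTUP_TIMEOUT = 120  # seconds; illustrative value

def wait_for_docker_daemon(timeout: int = STARTUP_TIMEOUT) -> None:
    """Poll `docker info` until the daemon responds or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = subprocess.run(["docker", "info"], capture_output=True, text=True)
        if result.returncode == 0:
            return  # daemon is up; the connector run can proceed
        time.sleep(2)
    # Mirrors the error message reported by the affected SDK connector syncs.
    raise TimeoutError("Unable to start the docker daemon process within timeout")
```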
Jul 28, 09:33 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Jul 28, 08:26 UTC
Identified -
We identified an issue affecting the SDK Connectors where the syncs are failing with the error "Unable to start the docker daemon process within timeout".
Jul 28, 08:20 UTC
Resolved -
This incident has been resolved. We have observed that instance rates are returning to normal levels, and affected connectors are syncing successfully.
Incident Summary Description: We identified missing retry logic for REST API-triggered syncs in our internal infrastructure, causing some syncs to silently fail.
Timeline: This issue began on 2025-07-24 at 11:25 UTC and was resolved on 2025-07-25 at 18:15 UTC.
Cause: The issue was due to missing retry logic after migrating API-triggered connectors to the scheduling system.
Resolution: We implemented and deployed a hot-fix to add the necessary missing retry logic for manually triggered syncs.
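Conceptually, the hotfix adds a retry wrapper along these lines; the function names, retry count, and backoff are illustrative assumptions, not the actual scheduler code.

```python
import logging
import random
import time

logger = logging.getLogger("manual_sync_scheduler")

MAX_ATTEMPTS = 3  # illustrative

def schedule_with_retry(schedule_sync, connector_id: str) -> None:
    """Retry scheduling of an API-triggered sync instead of letting a
    transient scheduling failure drop it silently."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            schedule_sync(connector_id)
            return
        except Exception as exc:  # in practice, only transient scheduling errors
            logger.warning("Scheduling attempt %d for %s failed: %s",
                           attempt, connector_id, exc)
            if attempt == MAX_ATTEMPTS:
                raise
            time.sleep(2 ** attempt + random.random())
```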
Jul 25, 20:35 UTC
Monitoring -
A fix has been implemented, and we are monitoring the results.
Jul 25, 18:56 UTC
Update -
We've prepared a fix, which is currently under review. We expect that adding retry logic to REST API-triggered syncs will resolve the problem.
Jul 25, 11:00 UTC
Identified -
We've identified missing retry logic for REST API-triggered syncs in our internal infrastructure, causing some syncs to silently fail. Our team is actively working on a fix.
Jul 25, 03:46 UTC