Resolved -
This incident has been resolved. We observed Stripe connector sync success rates returning to normal levels, and syncs are now running as expected.
Description: We identified an issue where some Stripe connector syncs were failing with the error: "Transaction fee report: HTTP 404 Not Found"
Timeline: The issue began on March 10, 2026, at 19:00 UTC and was resolved at 21:07 UTC.
Cause: The issue was caused by Stripe's transaction fee report file downloads returning HTTP 404 errors.
Resolution: No changes were required on our side. The issue was resolved automatically once access to file downloads was restored.
Mar 10, 22:23 UTC
Resolved -
This incident has been resolved. We have observed sync success rates returning to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue impacting Criteo connections, which resulted in syncs failing with the following error: "Failed to upsert additional attributes for creative"
Timeline: This issue began on 2026-03-10 at 15:15 UTC and was resolved on 2026-03-10 at 21:12 UTC.
Cause:
The issue occurred when the source returned "unknown" attribute types for certain creatives, resulting in sync failures.
Resolution:
A fix has been implemented to skip the problematic attribute types.
Mar 10, 22:01 UTC
Update -
Connector syncs are currently failing due to an unknown creative type format returned by the source API. Our team is actively working on a hotfix to treat this as a warning instead of causing sync failures.
We will share another update once the fix has been deployed.
Mar 10, 20:15 UTC
Identified -
The issue has been identified and we are working to resolve it.
Mar 10, 18:25 UTC
Resolved -
This incident has been resolved. We have observed that destination setup tests are now running successfully and operating as expected.
Incident Summary
Description: We identified an issue impacting destination setup tests, causing them to fail across regions.
Timeline: This issue began on 2026-03-10 at 02:40 UTC and was resolved on 2026-03-10 at 19:20 UTC.
Cause: The issue was caused by a faulty pull request related to a new feature, which introduced changes to the token registration process used for setup test jobs.
Resolution: Engineering identified the root cause and reverted the recent changes to restore the original token registration model. Following the revert and hotfix deployment, destination setup tests resumed functioning normally.
Mar 10, 19:15 UTC
Monitoring -
A fix has been implemented, and we are currently monitoring the results.
Mar 10, 17:59 UTC
Update -
We have identified the root cause of the issue and are actively working on a hotfix to resolve it.
Mar 10, 15:41 UTC
Update -
We are continuing to work on a fix for this issue.
Mar 10, 11:23 UTC
Identified -
The issue has been identified and we are working to resolve it.
Mar 10, 11:15 UTC
Resolved -
This incident has been resolved. We observed a short delay in sending Webhook Events, and events are now being delivered as expected following the fix.
Description: We identified an issue where there was a gap in the delivery of Webhook Sender events for around 40 minutes.
Timeline: The issue began on March 9, 2026, at 15:23 UTC and was resolved at 16:07 UTC.
Cause: The issue was caused by the scheduled maintenance of the Webhook Sender Service completed on March 7, 2026.
Resolution: A fix was deployed as part of the post-maintenance action items, and the issue is now resolved.
Mar 9, 10:00 UTC
Completed -
The scheduled maintenance has been completed.
Mar 7, 16:00 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 7, 15:00 UTC
Scheduled -
We are consolidating our Webhook Sender so all webhook egress traffic will originate from a single IP address. No downtime is expected; however, you may see short delays in webhook delivery while the change is applied and during the maintenance window.
Feb 23, 14:52 UTC
Resolved -
This incident has been resolved. We observed HubSpot connector sync success rates returning to normal levels, and syncs are now running as expected.
Description: We identified an issue where some HubSpot connector syncs were failing with the error: "Unknown failure."
Timeline: The issue began on March 6, 2026, at 15:00 UTC and was resolved at 15:45 UTC.
Cause: The issue was caused by timeouts when attempting to reach HubSpot's OAuth endpoint.
Resolution: No changes were required on our side. The issue was resolved automatically once connectivity to HubSpot's OAuth endpoint was restored.
Mar 6, 16:00 UTC
Resolved -
This incident has been resolved. We observed connector sync success rates returning to normal levels, and HubSpot connector syncs are now running as expected.
Incident Summary
Description: We identified an issue where HubSpot connector syncs were failing with the error: Endpoint [marketing_email] encountered an exception: "The requested resource does not exist."
Timeline: The issue began on March 5, 2026, at 10:30 UTC and was resolved at 12:10 UTC.
Resolution: HubSpot identified and resolved the issue on their end. No changes were required on our side.
Mar 5, 13:29 UTC
Monitoring -
HubSpot has resolved the issue, and we are currently monitoring the results to ensure sync operations continue to run as expected.
Mar 5, 12:50 UTC
Update -
The issue has been identified as a HubSpot outage that caused temporary unavailability of certain API services. As a result, affected connections experienced intermittent sync failures.
HubSpot has implemented a fix, and services are currently recovering.
Resolved -
This incident has been resolved. We have observed that success rates are returning to normal levels and affected models are syncing successfully.
Incident Summary
Description: We identified an issue impacting dbt and Quickstart Transformations, which resulted in models failing with the error below.
Error: ValueError: Proto enum SubmitSQLResultType has values not defined in Python enum SubmitSQLResultType: ['GET_MODIFICATION_INFO']. All proto enum values must have corresponding Python enum members
Timeline: This issue began on March 5, 2026, at 10:02 UTC and was resolved on March 5, 2026, at 11:22 UTC.
Cause: A recently released dbt image introduced a compatibility issue with a dependent internal service, which led to service disruption.
Resolution: We have rolled back the affected images to the previous stable version to restore services.
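For context on this class of failure: the error above is raised when a newly shipped proto definition contains enum values that the corresponding Python enum has not yet been updated to include. The sketch below is purely illustrative of that mismatch, not Fivetran's actual code; the enum members and the `check_enum_parity` helper are hypothetical.

```python
from enum import Enum

# Hypothetical Python-side enum that has fallen behind the proto definition.
class SubmitSQLResultType(Enum):
    SUCCESS = 1
    FAILURE = 2

def check_enum_parity(proto_values, py_enum):
    """Raise ValueError when the proto defines values the Python enum lacks."""
    missing = [v for v in proto_values if v not in py_enum.__members__]
    if missing:
        raise ValueError(
            f"Proto enum {py_enum.__name__} has values not defined in "
            f"Python enum {py_enum.__name__}: {missing}. "
            "All proto enum values must have corresponding Python enum members"
        )

# The new image's proto added GET_MODIFICATION_INFO, which the older
# Python enum did not define, reproducing the failure mode:
try:
    check_enum_parity(
        ["SUCCESS", "FAILURE", "GET_MODIFICATION_INFO"], SubmitSQLResultType
    )
except ValueError as e:
    print(e)
```

Rolling back the image restores parity between the two definitions, which is why the revert resolved the incident.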
Mar 5, 13:04 UTC
Monitoring -
A fix has been implemented and we are monitoring the results on our end.
Mar 5, 11:33 UTC
Identified -
We have identified that the models are failing with the following error.
Error: ValueError: Proto enum SubmitSQLResultType has values not defined in Python enum SubmitSQLResultType: ['GET_MODIFICATION_INFO']. All proto enum values must have corresponding Python enum members.
Mar 5, 11:05 UTC
Resolved -
This incident has been resolved. We have observed sync success rates returning to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue impacting Hubspot connections, which resulted in syncs failing with the following error: "Failed to sync 1 endpoint(s) with error: {email_campaign=java.lang.NullPointerException: Cannot invoke <>"
Timeline: This issue began on 2026-03-03 at 18:00 UTC and was resolved on 2026-03-03 at 22:00 UTC.
Cause: This was determined to be caused by a recent change in connector sync behavior, which introduced a NullPointerException in the email_campaign object.
Resolution: The changes to the sync strategy have been reverted and connections are now syncing successfully.
Mar 3, 22:20 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Mar 3, 22:05 UTC
Update -
We have identified the root cause of the issue: recent changes to the HubSpot connector introduced a NullPointerException in the email_campaign object, resulting in sync failures, and we are actively working on a hotfix to resolve it.
Mar 3, 19:52 UTC
Identified -
The issue has been identified and we are working to resolve it.
Mar 3, 18:25 UTC
Resolved -
This incident has been resolved. We have observed that the setup tests are successful.
Incident Summary
Description: We identified an issue for Partner-Built Destinations and Connections where setup was failing with an "Operation timeout" error.
Timeline: This issue began on March 3, 2026, at 10:00 UTC and was resolved on March 3, 2026, at 14:30 UTC.
Cause: Recent changes to the setup workflow caused the issue.
Resolution: The changes have been reverted to ensure the setup is successful.
Mar 3, 14:50 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Mar 3, 14:09 UTC
Update -
Recent changes to the setup workflow caused the issue. A hotfix is being deployed to resolve it.
Mar 3, 12:41 UTC
Identified -
We identified an issue for the Partner Built Destinations and Connections where the setup is failing with the "Operation timeout" error.
Mar 3, 12:10 UTC
Resolved -
This incident has been resolved. We have observed sync success rates returning to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue impacting Oracle BIP connections, which resulted in syncs failing with the following error: "java.lang.RuntimeException: oracle.xdo.service.client.scan.ScanException: An error occurred in ApplCore Virus Scanner."
Timeline: This issue began on 2026-02-25 at 03:43:21 UTC and was resolved on 2026-02-25 at 04:34:33 UTC.
Cause: This was determined to be a source-side issue.
Resolution: The issue was resolved at the source side, and connections are now syncing successfully. We have reached out to Oracle Support for additional information regarding the underlying root cause.
Feb 25, 06:22 UTC
Identified -
The issue has been identified and we are working to resolve it.
Feb 25, 06:20 UTC
Resolved -
Description: We identified an issue where some of Fivetran's Box connections were failing their sync runs with HTTP "429 Too Many Requests" errors.
Timeline: The incident began on 2026-02-24 at 23:35 UTC and ended on 2026-02-25 at 00:02 UTC.
Cause: The sync failures occurred due to an issue in Box's API. Box created a status page incident for this issue and deployed measures to fix the API errors.
Resolution: Box has resolved their incident, and our affected Box connection sync runs have recovered.
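As background, transient HTTP 429 responses like these are conventionally handled by retrying with exponential backoff rather than failing the run outright. The sketch below illustrates that general pattern only; it is not Fivetran's or Box's implementation, and `fetch` is a hypothetical callable standing in for an API request (a real client would also honor any Retry-After header the API returns).

```python
import time

def fetch_with_backoff(fetch, max_retries=5, base_delay=1.0):
    """Retry a callable that returns (status_code, body) while it
    signals HTTP 429, sleeping with exponential backoff between tries."""
    for attempt in range(max_retries):
        status, body = fetch()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body  # give up after max_retries, surface last response

# Simulated source that rate-limits twice before succeeding:
responses = iter([(429, ""), (429, ""), (200, "ok")])
status, body = fetch_with_backoff(lambda: next(responses), base_delay=0.01)
print(status, body)  # 200 ok
```

Because the 429s here came from an outage on Box's side, no such change was needed in the connector; once Box's fix landed, subsequent sync runs succeeded on their own.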
Feb 25, 03:30 UTC
Monitoring -
3rd Party: Box has resolved their incident. We are seeing connections recover, and we are monitoring as new sync runs trigger.
Feb 25, 00:20 UTC
Update -
This issue is occurring due to an incident on Box's end.