Resolved -
This incident has been resolved. We have observed that error rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary:
Description: We identified an issue affecting ADP Workforce connectors, which resulted in sync failures.
Timeline: This issue began on August 24, 2025, at 02:06 UTC and was resolved on August 25, 2025, at 10:30 UTC.
Resolution: A hotfix was applied to automatically reschedule syncs. As of 2025-08-25 10:30 UTC, we have confirmed that the ADP API is working and syncs are completing successfully.
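For illustration, the reschedule pattern looks roughly like the sketch below (a minimal sketch, not the actual hotfix; the probe endpoint, the set of status codes, and the 30-minute delay are assumptions):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.Set;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class RescheduleOnOutage {
        // Status codes observed during the ADP maintenance window.
        private static final Set<Integer> OUTAGE_CODES = Set.of(404, 501, 503);
        private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor();
        private static final HttpClient HTTP = HttpClient.newHttpClient();

        static void runSync() {
            try {
                HttpRequest probe = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/health")) // placeholder endpoint
                    .timeout(Duration.ofSeconds(30))
                    .build();
                HttpResponse<Void> response =
                    HTTP.send(probe, HttpResponse.BodyHandlers.discarding());
                if (OUTAGE_CODES.contains(response.statusCode())) {
                    // Source still down: put the sync back on the scheduler
                    // instead of surfacing a sync failure.
                    SCHEDULER.schedule(RescheduleOnOutage::runSync, 30, TimeUnit.MINUTES);
                    return;
                }
                // ... proceed with the normal sync ...
            } catch (Exception e) {
                // Network-level failures get the same treatment as outage responses.
                SCHEDULER.schedule(RescheduleOnOutage::runSync, 30, TimeUnit.MINUTES);
            }
        }
    }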
Aug 25, 01:36 UTC
Monitoring -
The hotfix has been applied to automatically reschedule syncs until the source system is back up.
Aug 24, 06:49 UTC
Identified -
Multiple ADP Workforce connectors are failing with 404, 501, and 503 errors, likely due to ongoing ADP maintenance. The ADP maintenance page confirms downtime but provides no ETA for restoration. In a previous similar incident, their system was down for approximately 6 hours.
Resolved -
We have resolved this incident.
Aug 23, 06:05 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Aug 23, 03:05 UTC
Update -
We have deployed a temporary fix to resolve syncs for some connections. Historical syncs and new connection syncs will continue to fail until the underlying issue is resolved. We are continuing to work with Zendesk towards a full resolution.
Aug 22, 21:21 UTC
Update -
We are receiving unexpected 503 responses from the Zendesk API. We've reached out to their team for more information on why this is happening.
Aug 22, 17:02 UTC
Identified -
The issue has been identified and we are working to resolve it.
Aug 22, 16:10 UTC
Resolved -
This incident has been resolved. We have observed that error rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary:
Description: Multiple Managed/S3 data lake destinations were failing, causing connection sync failures.
Timeline: The issue began on Aug 22, 2025, at 01:20 UTC and was resolved on Aug 22, 2025, at 15:32 UTC.
Cause: Port exhaustion on the Cloud NAT gateway led to the subsequent sync failures.
Resolution: Dynamic port allocation was enabled on the Cloud NAT on Fivetran’s end, mitigating the issue. Connections that were paused are being gradually unpaused in batches while we monitor relevant metrics to ensure stability.
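Dynamic port allocation is a Cloud NAT setting; a minimal sketch of enabling it, assuming the google-cloud-compute v1 Java client (the project, region, router name, and port values here are placeholders, not Fivetran's actual configuration):

    import com.google.cloud.compute.v1.Router;
    import com.google.cloud.compute.v1.RouterNat;
    import com.google.cloud.compute.v1.RoutersClient;

    public class EnableDynamicNatPorts {
        public static void main(String[] args) throws Exception {
            String project = "my-project", region = "us-east4", routerName = "my-router";
            try (RoutersClient routers = RoutersClient.create()) {
                Router router = routers.get(project, region, routerName);
                // Rebuild the NAT config with dynamic port allocation enabled:
                // each VM starts at minPortsPerVm and can grow up to maxPortsPerVm.
                RouterNat nat = router.getNats(0).toBuilder()
                    .setEnableDynamicPortAllocation(true)
                    .setMinPortsPerVm(64)
                    .setMaxPortsPerVm(4096)
                    .build();
                routers.patchAsync(project, region, routerName,
                    router.toBuilder().setNats(0, nat).build()).get();
            }
        }
    }

With static allocation, every VM holds a fixed port block whether or not it needs it; dynamic allocation lets busy VMs grow their block instead of failing new connections.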
Aug 22, 15:35 UTC
Monitoring -
The issue has been mitigated, sync failure rates have returned to normal, and we are continuing to monitor connections.
Aug 22, 10:34 UTC
Update -
Our engineering team is still actively investigating the issue. While some connections have been restored, others are still failing. We are continuing to work towards a full resolution and will provide the next update as soon as more information is available.
Aug 22, 09:52 UTC
Update -
A fix has been deployed and connections are being restored; however, many connections are still failing. We are continuing to investigate in order to fully restore service.
Aug 22, 05:04 UTC
Identified -
The issue has been identified and we are working to resolve it.
Aug 22, 01:20 UTC
Resolved -
Description: Some PostgreSQL connectors failed due to a problem with jsonb binary decoding.
Timeline: This issue began on Aug 21, 2025, at 16:00 UTC and was resolved at 20:30 UTC.
Cause: An issue with jsonb binary decoding caused PostgreSQL connections to fail.
Resolution: Engineering implemented a hotfix, which restored normal operations.
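For context, PostgreSQL's binary wire format for jsonb prefixes the JSON text with a one-byte version header (currently 1); a decoder that mishandles that header fails in exactly this way. A minimal sketch of the decoding step (illustrative only, not the connector's actual code):

    import java.nio.charset.StandardCharsets;

    final class JsonbBinary {
        // Decode a jsonb value from PostgreSQL's binary wire format: a one-byte
        // version header (currently 1) followed by the JSON text in UTF-8.
        static String decode(byte[] raw) {
            if (raw == null || raw.length == 0) {
                throw new IllegalArgumentException("empty jsonb payload");
            }
            if (raw[0] != 1) {
                // Unknown future versions must be rejected, not decoded blindly.
                throw new IllegalArgumentException("unsupported jsonb version: " + raw[0]);
            }
            return new String(raw, 1, raw.length - 1, StandardCharsets.UTF_8);
        }
    }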
Aug 21, 22:46 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Aug 21, 21:12 UTC
Update -
Engineering has identified the issue as a problem with jsonb binary decoding and is working to resolve it.
Aug 21, 20:10 UTC
Identified -
The issue has been identified and we are working to resolve it.
Aug 21, 20:05 UTC
Resolved -
Description: Connectors experienced delays in the AWS us-west-2 region.
Timeline: This issue began on August 21, 2025, at 24:00 UTC and was resolved on August 25, 2025, at 24:00 UTC.
Cause: Connector scheduling was delayed due to infrastructure processes becoming stuck and exhausting available resources.
Resolution: We increased resource capacity in the region, which resolved the issue. Syncs are now running as expected without delays.
Aug 20, 23:00 UTC
Resolved -
We have confirmed that the fix resolved the issue and connections are syncing as expected.
Incident Summary:
Description: We observed multiple Connector SDK connections failing with the error: "Error getting access token for service account: 400 Bad Request."
Timeline: Start: Aug 20, 2025, 11:20 UTC; Resolution: 13:20 UTC.
Cause: An internal automation issue mistakenly affected a production environment, causing a service interruption.
Resolution: The interruption has been resolved, and we have implemented new safeguards to prevent a recurrence.
Aug 20, 15:10 UTC
Monitoring -
A fix has been implemented and we are observing successful connections. We are continuing to monitor for any further issues.
Aug 20, 13:28 UTC
Update -
We are seeing multiple Connector SDK connections failing with the error: "Error getting access token for service account: 400 Bad Request".
The connections display a broken state on the Fivetran dashboard. The Fivetran engineering team is working on a fix. We will provide updates as we learn more.
Aug 20, 13:21 UTC
Identified -
The issue has been identified and we are working to resolve it.
Aug 20, 12:55 UTC
Resolved -
We have confirmed that error rates have returned to normal and connectors are now syncing successfully.
Incident Summary:
Description: HubSpot connectors were failing with a reconnect error: "We were denied access to HubSpot. Please check the accuracy of your credentials."
Timeline: Start: Aug 19, 20:30 UTC; Resolution: Aug 20, 08:30 UTC.
Cause: A change introduced on HubSpot’s side caused unexpected 403 errors from the /property/{crm_object} endpoint.
Resolution: HubSpot identified the issue and deployed a fix on their end, restoring normal functionality.
Aug 20, 10:48 UTC
Update -
We are continuing to monitor for any further issues.
Aug 20, 10:17 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Aug 20, 08:08 UTC
Update -
We have merged a hotfix to skip the affected endpoints. We will monitor the connections once the hotfix is deployed.
Aug 20, 07:02 UTC
Update -
We are currently seeing failures for HubSpot connectors due to changes in source API requirements for certain Engagement CRM objects (Calls, Notes, Meetings, Tasks). Connections are returning 403 errors when accessing affected endpoints.
Our team has contacted the HubSpot team and is working on a temporary fix that skips these objects until we receive further clarification. We will provide updates as we learn more.
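Illustratively, the temporary skip behaves like the sketch below (hypothetical code using Java's built-in HTTP client; the endpoint base follows the "/property/{crm_object}" template named in the summary above, and the token handling is a placeholder):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class SkipForbiddenEngagementObjects {
        // Placeholder base for the "/property/{crm_object}" endpoint in this incident.
        private static final String PROPERTY_ENDPOINT_BASE = "https://api.hubapi.com/property";
        private static final HttpClient HTTP = HttpClient.newHttpClient();

        public static void main(String[] args) throws Exception {
            // The Engagement CRM objects that were returning 403s.
            for (String crmObject : List.of("calls", "notes", "meetings", "tasks")) {
                HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(PROPERTY_ENDPOINT_BASE + "/" + crmObject))
                    .header("Authorization", "Bearer " + System.getenv("HUBSPOT_TOKEN"))
                    .GET()
                    .build();
                HttpResponse<String> response =
                    HTTP.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() == 403) {
                    // Temporary mitigation: skip the object rather than fail the sync.
                    System.err.println("Skipping " + crmObject + ": endpoint returned 403");
                    continue;
                }
                // ... sync properties for this object as usual ...
            }
        }
    }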
Aug 20, 04:05 UTC
Identified -
The issue has been identified and we are working to resolve it.
Aug 20, 01:25 UTC
Resolved -
Description: We identified an issue with Email connectors, which resulted in syncs failing with an error message stating, "INTERNAL: Connector setup not found for integration email and version 1."
Timeline: This issue began on Aug 18, 2025, at 20:30 UTC and was resolved on Aug 19, 2025, at 01:30 UTC.
Cause: The issue was caused by a bug introduced in a recent code change.
Resolution: We resolved the issue by rolling back the change, which restored normal functionality.
Aug 19, 07:12 UTC
Resolved -
This incident has been resolved. We have observed that error rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary:
Description: We identified an issue with Apple Search Ads connectors failing with 503 errors, which resulted in sync failures.
Timeline: This issue began on Aug 15 at 21:55 UTC and was resolved on Aug 17 at 09:30 UTC.
Cause: Third-party service unavailability.
Resolution: The issue appears to have been resolved on Apple's end, and the connections are now syncing successfully.
Aug 17, 14:35 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Aug 17, 11:35 UTC
Update -
Our team is working with the Apple support team to investigate.
Aug 16, 18:58 UTC
Update -
We’ve reached out to the Apple support team to investigate the 503 response being returned from their endpoint.
Aug 16, 04:57 UTC
Identified -
The Apple Search Ads API is returning "503 Service Temporarily Unavailable" when making requests.
We are working to identify the cause of the failures.
Aug 15, 21:55 UTC
Resolved -
Description: Connections to the Managed Data Lake Service in the AWS region were failing due to Polaris connectivity issues with AWS STS. Connections then hit a second issue and began failing with the error: "reason":"com.fivetran.warehouses.data_lake_v2.exception.UnifiedDataLakeException: org.apache.iceberg.exceptions.ValidationException: Found conflicting files that can contain records [...] "
Timeline: The first issue began on Aug 11 at 09:48 UTC and was resolved on Aug 11 at 19:30 UTC. The second issue began on Aug 11 at 19:30 UTC and was resolved on Aug 13 at 13:54 UTC.
Cause: A recent change to the retry mechanism increased the number of connections, which, together with ongoing customer migrations onto the Polaris service, resulted in high-volume traffic through Cloud NAT in the GCP us-east4 region, where Polaris runs.
Resolution: The Polaris retry mechanism change has been reverted, and the Cloud NAT minimum port count per VM has been increased from the default 64 to 512.
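For intuition on the port arithmetic, a simplified model (real Cloud NAT allocation also depends on protocol and destination endpoints):

    public class NatPortBudget {
        // Each concurrent outbound flow through Cloud NAT pins one allocated source
        // port, so a VM's port block caps its concurrent flows.
        static boolean canOpenNewFlow(int activeFlows, int portsPerVm) {
            return activeFlows < portsPerVm;
        }

        public static void main(String[] args) {
            System.out.println(canOpenNewFlow(64, 64));  // false: default 64-port block exhausted by a retry burst
            System.out.println(canOpenNewFlow(64, 512)); // true: the raised 512-port minimum absorbs it
        }
    }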
Aug 15, 16:30 UTC
Monitoring -
We have deployed a fix to roll back and recover affected snapshots to the latest stable version.
Affected connectors are resuming their normal sync functionality and we are continuing to monitor progress.
Aug 14, 23:27 UTC
Update -
We are continuing to work on a fix for this issue. We will additionally contact customers directly for resolutions in certain cases.
Aug 14, 08:00 UTC
Update -
We are making code changes to roll back to the previous snapshot when a table is found to be corrupted. We will share further updates as soon as more information becomes available.
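Conceptually, the rollback resembles this sketch using the Iceberg Java API (hypothetical; the corruption check is a placeholder, and the actual fix is Fivetran-internal):

    import org.apache.iceberg.Snapshot;
    import org.apache.iceberg.Table;

    public class SnapshotRollback {
        // If the table's current snapshot is bad, point the table back at the
        // snapshot's parent using Iceberg's snapshot-management API.
        static void rollbackIfCorrupted(Table table) {
            Snapshot current = table.currentSnapshot();
            if (current == null || current.parentId() == null) {
                return; // no earlier snapshot to roll back to
            }
            if (isCorrupted(current)) {
                table.manageSnapshots()
                     .rollbackTo(current.parentId()) // restore the parent as current
                     .commit();
            }
        }

        // Placeholder for the validation that detects conflicting data files.
        static boolean isCorrupted(Snapshot snapshot) {
            return false; // real detection logic is incident-specific
        }
    }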
Aug 12, 19:00 UTC
Update -
We are continuing to investigate the root cause and work on a fix for this issue.
Aug 12, 07:22 UTC
Update -
We are currently observing sync failures across multiple connectors, with the error "Found conflicting files that can contain records matching".
Our team is actively investigating the root cause of this issue. We will provide further updates as soon as more information becomes available.
Aug 12, 02:54 UTC
Update -
We have deployed a fix to reduce the number of retries and have also increased the minimum number of ports to help handle the large volume of requests. This has led to a reduction in failures, but some connectors are still affected.
Remaining connectors are still being investigated for intermittent connectivity issues with Polaris.
Aug 11, 22:00 UTC
Update -
We are still working on a fix for this issue.
Aug 11, 19:01 UTC
Update -
We are continuing to work on a fix for this issue.
Aug 11, 16:51 UTC
Update -
We are currently investigating an issue affecting multiple connections, which are failing with the following error: "org.apache.iceberg.exceptions.RESTException: Unable to process: Failed to get subscoped credentials: Unable to execute HTTP request, Connect timed out"
Aug 11, 14:56 UTC
Identified -
The issue has been identified and we are working to resolve it.
Aug 11, 14:50 UTC