Resolved -
This incident has been resolved. We have observed that error rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description:
Following a release to a shared component, several database connectors — including Oracle, SQL Server HVA, LogMiner, and Teleport-based connections — failed with the error "Failed to get or decrypt encryption key from file header." This affected sync and re-sync operations across multiple accounts. This was not a security incident, and no data exposure or unauthorized access occurred.
Timeline:
This issue began on 2025-11-06 23:04 UTC and was resolved on 2025-11-11 00:24 UTC.
Cause:
A code change to a central component introduced an incompatible encryption mechanism, preventing connectors from decrypting existing data. We rolled back to a previous version; however, we identified incompatibilities between services that led to connections running with an increased sync frequency, overloading our systems.
Resolution:
The issue was mitigated by reverting the affected release and scaling the necessary infrastructure to restore stability.
Nov 7, 23:21 UTC
Update -
We are continuing to see connectors recover. Monitoring is ongoing to ensure all failing connectors recover successfully.
Nov 7, 20:21 UTC
Update -
We are continuing to see a steady increase in recovery and we are still monitoring syncs and completion rates.
Nov 7, 18:28 UTC
Update -
We are seeing a steady increase in recovery and we are still monitoring syncs and completion rates.
Nov 7, 16:49 UTC
Update -
We have observed that connections are still recovering. We will continue to monitor sync performance and completion rates until full restoration.
Nov 7, 14:36 UTC
Update -
We are continuing to observe syncs recovering and will actively monitor sync performance and completion rates. We will provide another update within the next hour.
Nov 7, 11:52 UTC
Update -
We have observed that syncs are starting to recover successfully. We are continuing to closely monitor sync performance and completion rates and will provide another update in the next hour.
Nov 7, 10:49 UTC
Update -
We have identified and deployed a fix for the issue, and we expect affected connectors to recover automatically. Our team is closely monitoring sync performance and will provide a further update in 30 minutes.
Nov 7, 10:16 UTC
Monitoring -
A fix has been implemented. We are monitoring syncs as they recover and will provide another update in 30 minutes.
Nov 7, 09:35 UTC
Update -
We are continuing to investigate the issue affecting Teleport-based PostgreSQL and Snowflake connections, and we are monitoring closely.
Nov 7, 04:42 UTC
Identified -
All connections except Teleport-based PostgreSQL and Snowflake have now recovered. We are actively investigating the issue affecting these connections and will continue to monitor.
Nov 7, 04:42 UTC
Monitoring -
The system has been rolled back to a previous version, and error rates are improving. We continue to closely monitor performance and stability.
Nov 7, 03:45 UTC
Update -
We are continuing to work on a fix for this issue.
Nov 7, 02:55 UTC
Identified -
The issue has been identified and we are working to resolve it.
Nov 7, 00:00 UTC