Degraded performance on DeepL Pro and DeepL Pro API
Incident Report for DeepL
Postmortem

On September 17, our shared webservice component encountered an overload condition, leading to elevated latency and errors for API requests at several points throughout the day. The most notable impact was observed at 11:16 and 14:41, when response times rose sharply and error rates climbed.

The root cause was a resource misconfiguration in one of the shared web components: under heavy load, the component could not allocate sufficient resources, so request latency grew and API requests eventually failed.

We sincerely apologize for the impact this incident had on our users. We are committed to preventing similar occurrences through better configuration management, proactive scaling, and improved monitoring practices, and we will continue to monitor the system closely and take the steps necessary to keep the service stable.
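For illustration only, the short Python sketch below shows the general failure pattern described above: a service whose capacity setting is sized too low for peak traffic first queues requests (latency grows), then rejects them outright once the backlog fills. It is not DeepL's actual code or configuration; all names and numbers are hypothetical.

import concurrent.futures
import queue
import time

# Hypothetical capacity settings, sized for average load rather than peak.
MAX_WORKERS = 2
MAX_QUEUE_DEPTH = 4

pool = concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS)
backlog = queue.Queue(maxsize=MAX_QUEUE_DEPTH)

def handle_request(request_id: int) -> str:
    time.sleep(0.5)  # simulated processing work
    return f"request {request_id}: 200 OK"

def submit(request_id: int) -> str:
    try:
        # A full backlog means no slot can be allocated for the request:
        # the client sees an immediate error instead of a slow response.
        backlog.put_nowait(request_id)
    except queue.Full:
        return f"request {request_id}: 503 overloaded"
    future = pool.submit(handle_request, request_id)
    # Free the backlog slot once the request finishes.
    future.add_done_callback(lambda _: backlog.get_nowait())
    return f"request {request_id}: queued (latency grows with backlog)"

# A burst of traffic: the first few requests queue with rising latency,
# later ones fail once the backlog is exhausted.
for i in range(10):
    print(submit(i))
pool.shutdown(wait=True)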

Posted Sep 17, 2024 - 15:30 UTC

Resolved
This incident has been resolved.
Posted Sep 17, 2024 - 15:13 UTC
Monitoring
A fix has been implemented and we are monitoring the results.
Posted Sep 17, 2024 - 15:05 UTC
Identified
The issue has been identified and a fix is being implemented.
Posted Sep 17, 2024 - 14:38 UTC
Investigating
We are investigating this issue.

Affected Services: Translations
Additional Info: We are investigating increased error rates on DeepL Pro
Posted Sep 17, 2024 - 14:15 UTC
This incident affected: DeepL Pro and DeepL Pro API.