DeepL Limitations - Timeout Errors during concurrent calls #33
Hi @razvan-zavalichi, thanks for your detailed report! For your question around how many requests you can send us, we generally advise limiting your usage to 50 requests per second; above that you should see HTTP 429 errors. The library will then use exponential backoff to retry those requests (this behaviour can be configured via the `TranslatorOptions`). I tested the API last week and was able to sustain 20 QPS without seeing any errors, so to better reproduce your issues I have the following questions:
```ts
import * as deepl from 'deepl-node';

const translator = new deepl.Translator(process.env.DEEPL_AUTH_KEY ?? '');
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

const textsToTranslate: string[][] = [['First sentence.', 'Second sentence.', 'Third sentence.'], ['Erster Satz.', 'Zweiter Satz.'], /* ... */]; // 15 entries
const startTime = Date.now();
let translations = await Promise.all(textsToTranslate.map((texts) => translator.translateText(texts, null, 'en-US'))); // target language is illustrative
// Do something with translations
const timeDiff = Date.now() - startTime;
if (timeDiff < 1000) {
    await sleep(1000 - timeDiff); // wait out the rest of the 1s window to limit to 15 QPS
}
translations = await Promise.all(textsToTranslate.map((texts) => translator.translateText(texts, null, 'en-US'))); // probably with different texts :)
```
For your questions:
Hello @JanEbbing! Here are my responses:
Q2: How exactly do you perform the concurrency? Something like the snippet above?
Q3: By "The current timeout configuration is set to 30000ms.", do you mean the minTimeout value in the TranslatorOptions you use to construct your Translator object? (See the sketch after these questions.)
Q4: As it may have performance implications, roughly what language pair distribution do you have in your requests? Is it all the same language pair, or automatic source language detection into the same target language (with the source language being anything provided by users)?
Q5: You mentioned that putting everything into a single request solves the issue. So basically, when you split this request into 15 smaller ones to adhere to the 50-text limit, you observe the timeouts?
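
For context on Q3, here is a minimal sketch of where these knobs live, using the `minTimeout` and `maxRetries` fields of `TranslatorOptions` from the deepl-node README (the values shown are illustrative, not recommendations):

```ts
import * as deepl from 'deepl-node';

// Sketch only: both the retry behaviour and the per-attempt timeout are
// configured when constructing the Translator, not per call.
const translator = new deepl.Translator(process.env.DEEPL_AUTH_KEY ?? '', {
    minTimeout: 30000, // connection timeout per HTTP request attempt, in ms (the 30000 ms quoted above)
    maxRetries: 5,     // failed requests are retried with exponential backoff
});
```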
In my situation, I encountered two significant issues:
More context:
This behavior means that if the DeepL API consistently delivers translated data within a 1-second window for all 15 concurrent calls, the maximum achievable rate is 15 concurrent requests per second (15 requests / 1 s = 15 QPS).
Hi,

a) 413 Payload Too Large

With different string lengths in the request, I can't reproduce that 85 KiB is too large of a request.

b) Timeouts

To simplify, my concurrency logic is a bit different (start 50 requests, wait till they are complete, then start 50 new requests), but it should still trigger the timeouts.
For me, it finished in ~63 s, which comes out to about 15 QPS, without any errors. Please let me know if anything in my reproduction attempt is off.
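
A minimal sketch of that batch-and-wait loop, assuming batches of up to 50 `string[]` payloads and an illustrative `en-US` target:

```ts
import * as deepl from 'deepl-node';

const translator = new deepl.Translator(process.env.DEEPL_AUTH_KEY ?? '');

// Start up to 50 requests at once, wait until all of them complete,
// then launch the next batch of 50.
async function translateInBatches(batches: string[][][]): Promise<deepl.TextResult[][]> {
    const results: deepl.TextResult[][] = [];
    for (const batch of batches) {
        const translated = await Promise.all(
            batch.map((texts) => translator.translateText(texts, null, 'en-US'))
        );
        results.push(...translated);
    }
    return results;
}
```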
Is it possible that this error is caused by the text format? This is the format of the translation text: `<table cellspacing=0 cellpadding=0 class...`
I use "deepl-node": "^1.7.2".
Thanks, that helps. For the timeouts, have you tried simply increasing the timeout limit? Even with a simple table with a few rows and columns, a large request like this takes >10 s for me, so if you have a complex table or get unlucky with API response time, you can easily hit the 30 s window. I can also reproduce a lower size limit with tag handling; I'm following up internally with another team.
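
For reference, a sketch of the tag-handling option mentioned here (the HTML string and target language are placeholders; `minTimeout` could also be raised as in the earlier sketch):

```ts
import * as deepl from 'deepl-node';

const translator = new deepl.Translator(process.env.DEEPL_AUTH_KEY ?? '');

// Tell the API the input is HTML so markup is handled as tags
// rather than translated as text.
const html = '<table cellspacing=0 cellpadding=0><tr><td>First sentence.</td></tr></table>';
const result = await translator.translateText(html, null, 'de', { tagHandling: 'html' });
console.log(result.text);
```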
Everything works now on my side when following these rules:

Anyway, we don't have any text larger than 70 KiB at the moment, so there is no blocking point for us. The previous text was taken from a development environment.
Hello @JanEbbing, I occasionally encountered the following error:
This error caused 56 of the 2000 requests to fail. I can increase the number of retries, but if a request times out and I retry it, the DeepL API charges for these retries, making it more expensive. Are there other constraints regarding this?
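
One way to keep those retries deliberate rather than automatic is to collect failures with `Promise.allSettled`; a sketch, with placeholder `batches` payloads (`maxRetries: 0` is assumed here to switch off the client's automatic retries, so every billed request is one you started explicitly):

```ts
import * as deepl from 'deepl-node';

const translator = new deepl.Translator(process.env.DEEPL_AUTH_KEY ?? '', { maxRetries: 0 });

const batches: string[][] = [['First sentence.'], ['Second sentence.']]; // placeholder payloads

// allSettled never rejects, so one timed-out request doesn't fail the batch;
// failed payloads can then be retried (and billed) on purpose.
const outcomes = await Promise.allSettled(
    batches.map((texts) => translator.translateText(texts, null, 'en-US'))
);
const failed = outcomes.filter((o) => o.status === 'rejected').length;
console.log(`${failed} of ${outcomes.length} requests failed`); // e.g. "56 of 2000 requests failed"
```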
Hello,
I am using the DeepL API Pro plan and have encountered some challenges related to concurrent requests while using the deepl-node client.
Here's the scenario: I have a service hosted on Azure, leveraging Azure Functions. Within this setup, there are 15 concurrent functions that attempt to translate a total of 72 KiB of text. The `translateText<string[]>` function takes an array of strings as input. The interesting aspect is that when these 15 functions are triggered, I encounter 'timeout' errors for all the requests. It's important to note that all these functions run on the same virtual machine, which implies that a single DeepL client handles these 15 concurrent requests.

My questions are as follows:
Your insights and guidance on optimizing these concurrent translation calls would be greatly appreciated. The service aims to translate ~1,000,000,000 characters.
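
For illustration, a sketch of the shared-client shape described above, using the v4 `@azure/functions` programming model (the function name, request shape, and target language are made up for the example):

```ts
import { app, HttpRequest, HttpResponseInit, InvocationContext } from '@azure/functions';
import * as deepl from 'deepl-node';

// Module scope: one Translator instance is reused by every invocation
// running on the same instance, matching the single-client setup above.
const translator = new deepl.Translator(process.env.DEEPL_AUTH_KEY ?? '');

app.http('translate', {
    methods: ['POST'],
    handler: async (request: HttpRequest, _context: InvocationContext): Promise<HttpResponseInit> => {
        const texts = (await request.json()) as string[];
        const results = await translator.translateText(texts, null, 'en-US');
        return { jsonBody: results.map((r) => r.text) };
    },
});
```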
Please be aware that the translation process functions correctly when a single request is made. This single request encompasses the following elements:

- An array input containing 600 texts (note that the documentation indicates a limit of 50 texts per request).
- A maximum of 72 KiB of text, equating to a request payload of approximately 76 KiB (the documentation specifies a payload size allowance of up to 128 KiB; however, I have observed that if the payload exceeds 85 KiB, it results in a '413 - Payload Too Large' error).
Could you provide more information regarding the limits documented?
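
Relatedly, a sketch of splitting a large batch so each request stays under both caps discussed above (the 70 KiB byte limit is an assumption derived from the numbers in this thread, not a documented value):

```ts
const MAX_TEXTS_PER_REQUEST = 50;        // documented texts-per-request limit
const MAX_BYTES_PER_REQUEST = 70 * 1024; // assumed safe cap, below the observed 85 KiB failures

// Split texts into chunks that respect both the text-count and byte-size caps.
function chunkTexts(texts: string[]): string[][] {
    const chunks: string[][] = [];
    let current: string[] = [];
    let currentBytes = 0;
    for (const text of texts) {
        const bytes = Buffer.byteLength(text, 'utf8');
        const wouldOverflow =
            current.length >= MAX_TEXTS_PER_REQUEST ||
            currentBytes + bytes > MAX_BYTES_PER_REQUEST;
        if (wouldOverflow && current.length > 0) {
            chunks.push(current);
            current = [];
            currentBytes = 0;
        }
        current.push(text);
        currentBytes += bytes;
    }
    if (current.length > 0) chunks.push(current);
    return chunks;
}
```

Each chunk can then be passed to `translateText` individually, e.g. batch-by-batch as in the loop sketched earlier in the thread.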