On 4/25/21 4:37 PM, tlhackque via Mailman-users wrote:
> The described timeouts are something that HyperKitty ought to be able
> to avoid. For Apache, the timeout is idle time between blocks of
> output. HyperKitty can avoid this by generating the archive in
> segments (based on size or elapsed time), flushing its output buffer,
> generating a multi-file archive, and/or using Transfer-Encoding:
> chunked (chunked doesn't work for HTTP/2). It ought to be able to
> break the work into blocks of "n" messages and do something to
> generate output. Besides avoiding timeouts, working in segments
> allows the GUI to display meaningful progress (e.g. if you're loading
> with XMLHttpRequest, via "onprogress"). It really oughtn't be up to
> the user to break up the request.
It is not the web server that times out. I'm not sure about uwsgi because I don't use it, but the timeouts I see are on servers that use gunicorn as the WSGI interface to Django and the timeout is in a gunicorn worker. This is controlled by the timeout setting in the gunicorn config. <https://docs.gunicorn.org/en/stable/settings.html#timeout>
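As a point of reference, this is how the worker timeout would be raised in a gunicorn config file (the file name and the value 300 are illustrative; tune them for your deployment):

```python
# gunicorn.conf.py -- example only.
# Raise the per-request worker timeout from the 30-second default.
# A worker that has not completed a request within this many seconds
# is killed and restarted, which is what aborts long archive downloads.
timeout = 300
```

The same setting can be passed on the command line as `--timeout 300`. Note the caveat below: even 300 seconds may not be enough for a very large archive, which is why chunking the response is the better fix.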
Note that even 300 seconds is not enough to download the entire <https://mail.python.org/archives/list/python-dev@python.org/> archive.
It may be possible to get HyperKitty to chunk the output to avoid this, but it doesn't currently do that. Care to submit an MR?
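A minimal sketch of what chunking might look like, not HyperKitty's actual code: a generator that emits the mbox in batches of messages, so each batch is written (and the worker/proxy idle timer reset) before the whole archive is built. In Django, such a generator would typically be wrapped in a `StreamingHttpResponse`. The `iter_mbox` name, the batch size, and the sample messages are all assumptions for illustration.

```python
def iter_mbox(messages, batch_size=100):
    """Yield the archive in segments of `batch_size` messages each.

    Yielding per-batch lets the response be streamed incrementally
    instead of being assembled in memory and sent in one long write.
    """
    batch = []
    for msg in messages:
        batch.append(msg)
        if len(batch) >= batch_size:
            yield "".join(batch)
            batch = []
    if batch:  # flush any trailing partial batch
        yield "".join(batch)

# Hypothetical usage: `messages` stands in for the archived emails.
messages = [f"From dev@example.com\nMessage {i}\n\n" for i in range(250)]
chunks = list(iter_mbox(messages, batch_size=100))
```

With 250 messages and a batch size of 100, this yields three chunks; a Django view would return something like `StreamingHttpResponse(iter_mbox(qs), content_type="application/mbox")` so the server emits each chunk as it is produced.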
--
Mark Sapiro <mark@msapiro.net>
San Francisco Bay Area, California
"The highway is for gamblers, better use your sense" - B. Dylan