On 12/13/20 7:11 AM, Eric Broens via Mailman-users wrote:
> Hi Mark,
>
> 1. Regarding the repeated access of /archives/api/mailman/urls: it seems to be related to Mailman not being able to open port 25. This is weird because other mails have been distributed. The mailman log shows:
>
> Dec 12 22:16:48 2020 (1471) ACCEPT: < message id >
> Dec 12 22:16:52 2020 (1474) Cannot connect to SMTP server localhost on port 25
That message is in response to a socket.error exception in attempted delivery. It is logged only once until there is a successful delivery, but the message keeps being retried.
> The webserver logs show:
>
> <host > - - [12/Dec/2020:22:16:50 +0100] "GET /archives/api/mailman/urls?mlist=...&key=*** HTTP/1.1" 200 64 "-" "python-requests/2.25.0"
> <host > - - [12/Dec/2020:22:16:52 +0100] "GET /archives/api/mailman/urls?mlist=...&msgid=< message id >&key=*** HTTP/1.1" 200 105 "-" "python-requests/2.25.0"
>
> This last entry is repeated forever (until I stop the mailserver for a few minutes, but later on this happens again for other mails too).
Those GETs are part of normal message processing. As I said, they result from core handlers asking HyperKitty for the URL at which the message will be archived so they can add that URL to the Archived-At: header and maybe to a message header or footer.
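If you want to check that lookup by hand, you can make the same request with curl. This is only a sketch with placeholder values for the host, list address, message-id and key; the key is the MAILMAN_ARCHIVER_KEY value from HyperKitty's Django settings:

  curl 'https://lists.example.com/archives/api/mailman/urls?mlist=list@lists.example.com&msgid=<message id>&key=<archiver key>'

A successful lookup returns the URL at which that message will be archived, which is what the core handlers put into the Archived-At: header.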
Presumably the one that keeps repeating is the one message/message-id that is throwing the socket.error and being continuously retried.
I would stop Mailman, move that one .pck or .bak file out of Mailman's var/queue/out/ directory, and then start Mailman again. You can then examine the queue entry with the mailman qfile command and maybe see what the issue is.
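Roughly like this, run as the mailman user; the var path and file name here are only illustrative, so substitute your installation's var directory and the actual queue file:

  mailman stop
  mv /opt/mailman/var/queue/out/<entry>.pck /tmp/
  mailman start
  mailman qfile /tmp/<entry>.pck

qfile prints the pickled queue entry (the message and its metadata), which should show which message keeps being retried.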
> 2. Regarding the mails on the archive page: for most of the lists it is solved now, so probably one of the periodic jobs fixed that. The mailing list which showed 0 participants, 0 subscribers now shows 0 participants, 66 discussions. I have checked the member tables, and the members for this mailing list are included there.
OK.
> 3. What I notice now too is that the hourly runjobs tasks don't seem to finish. I would have to check what exactly they do. Can I somehow activate logging for these?
>
> mailman 300042 300016 1 08:00 ? 00:09:18 /opt/mailman/venv/bin/python3 /opt/mailman/venv/bin/django-admin runjobs hourly --pythonpath /opt/mailman/mailman-suite/mailman-suite_project --settings settings
> mailman 302158 302149 2 09:00 ? 00:09:10 /opt/mailman/venv/bin/python3 /opt/mailman/venv/bin/django-admin runjobs hourly --pythonpath /opt/mailman/mailman-suite/mailman-suite_project --settings settings
> mailman 304513 304506 2 10:00 ? 00:09:38 /opt/mailman/venv/bin/python3 /opt/mailman/venv/bin/django-admin runjobs hourly --pythonpath /opt/mailman/mailman-suite/mailman-suite_project --settings settings
> mailman 308631 308620 3 11:00 ? 00:09:29 /opt/mailman/venv/bin/python3 /opt/mailman/venv/bin/django-admin runjobs hourly --pythonpath /opt/mailman/mailman-suite/mailman-suite_project --settings settings
> mailman 312847 312832 3 12:00 ? 00:09:15 /opt/mailman/venv/bin/python3 /opt/mailman/venv/bin/django-admin runjobs hourly --pythonpath /opt/mailman/mailman-suite/mailman-suite_project --settings settings
> mailman 317343 317332 4 13:00 ? 00:09:20 /opt/mailman/venv/bin/python3 /opt/mailman/venv/bin/django-admin runjobs hourly --pythonpath /opt/mailman/mailman-suite/mailman-suite_project --settings settings
> mailman 320079 320072 7 14:00 ? 00:09:08 /opt/mailman/venv/bin/python3 /opt/mailman/venv/bin/django-admin runjobs hourly --pythonpath /opt/mailman/mailman-suite/mailman-suite_project --settings settings
> mailman 322618 322610 13 15:00 ? 00:08:54 /opt/mailman/venv/bin/python3 /opt/mailman/venv/bin/django-admin runjobs hourly --pythonpath /opt/mailman/mailman-suite/mailman-suite_project --settings settings
> mailman 325205 325198 84 16:00 ? 00:06:56 /opt/mailman/venv/bin/python3 /opt/mailman/venv/bin/django-admin runjobs hourly --pythonpath /opt/mailman/mailman-suite/mailman-suite_project --settings settings
/opt/mailman/venv/bin/django-admin runjobs --list
will show you what the jobs are. The hourly jobs are new_lists_from_mailman, thread_starting_email and update_index. The one running long is almost certainly update_index. I would stop running the hourly jobs until you have successfully updated the search index for all lists.
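If you want to run that indexing yourself while the hourly cron entry is disabled, django-extensions' runjob command (the single-job companion to runjobs) can run just that one job. A sketch reusing the same options as your cron lines, which you may need to adjust for your setup:

  /opt/mailman/venv/bin/django-admin runjob update_index --pythonpath /opt/mailman/mailman-suite/mailman-suite_project --settings settings

Run it as the mailman user, ideally in a screen or tmux session since a first full indexing can take a long time, and only re-enable the hourly cron job once it has completed.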
--
Mark Sapiro <mark@msapiro.net>        The highway is for gamblers,
San Francisco Bay Area, California    better use your sense - B. Dylan