No recent activity, discussions, or activity overview shown; only a spinning wheel.
I have tried to find an existing report of this issue, but could not find one. If this is a duplicate, sorry for the doubling...
I have recently set up a fresh Mailman 3 installation on my Debian 12 box. It works fine. The only thing that is not working is the archive overview, where I only see a spinning wheel. The number of participants and discussions is increasing, and the archive is receiving the archived messages.
mailman info shows:

GNU Mailman 3.3.8 (Tom Sawyer)
Python 3.11.2 (main, Nov 30 2024, 21:22:50) [GCC 12.2.0]
config file: /etc/mailman3/mailman.cfg
db url: mysql+pymysql://.....:....@localhost/mailman3?charset=utf8mb4&use_unicode=1
devmode: DISABLED
REST root url: http://localhost:8001/3.1/
REST credentials: restadmin:.......
mailman-web qinfo shows:
-- Default 1.3.9 on ORM default --
Clusters 0 Workers 0 Restarts 0
Queued 0 Successes 100 Failures 0
Schedules 0 Tasks/day 14.00 Avg time 0.1756
Webserver: apache2/2.4.64
It looks like an AJAX problem, but I am not sure about that.
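If it really is an AJAX problem, I suppose it should show up in the browser's developer-tools Network tab while the overview loads; one could also fetch the page headers directly. A rough sketch (the hostname and list address below are placeholders, not my real ones):

curl -sI https://lists.example.org/hyperkitty/list/mylist@example.com/ | head -n 5
# -I fetches only the response headers; a 200 here while the browser still
# shows a spinner would point at the AJAX calls rather than the page itself.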
Any help would be appreciated.
Freerk

What is missing

Everything seems to work as it should, although I cannot see any recent activity and the activity overview doesn't show a graphical view.
The output of mailman-web qcluster is:

13:19:16 [Q] INFO Q Cluster double-india-romeo-two starting.
13:19:16 [Q] INFO Process-1:1 ready for work at 288786
13:19:16 [Q] INFO Process-1:2 ready for work at 288787
13:19:16 [Q] INFO Process-1:3 ready for work at 288788
13:19:16 [Q] INFO Process-1:4 ready for work at 288789
13:19:16 [Q] INFO Process-1:5 monitoring at 288790
13:19:16 [Q] INFO Process-1 guarding cluster double-india-romeo-two
13:19:16 [Q] INFO Process-1:6 pushing tasks at 288791
13:19:16 [Q] INFO Q Cluster double-india-romeo-two running.
The HyperKitty version I use is 1.3.7.
I can't attach a screenshot, I think?
In the earlier screen dump, "Clusters 0 Workers 0" doesn't seem right. I get: "Clusters 1 Workers 2".
ps ax | grep uwsgi results in:
1248 ?  Ss  0:00 /usr/bin/uwsgi --plugin python3 --ini /etc/mailman3/uwsgi.ini
1249 ?  Sl  0:00 /usr/bin/uwsgi --plugin python3 --ini /etc/mailman3/uwsgi.ini
ps ax | grep manage results in:
1017 ?  S  0:00 python3 manage.py qcluster
1018 ?  S  0:00 python3 manage.py qcluster
1019 ?  S  0:00 python3 manage.py qcluster
1020 ?  S  0:00 python3 manage.py qcluster
1021 ?  S  0:00 python3 manage.py qcluster
1022 ?  S  0:00 python3 manage.py qcluster
1023 ?  S  0:00 python3 manage.py qcluster
In my opinion this looks OK, and the qcluster is running.
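For completeness, the same statistics can presumably also be pulled from the Django shell; a sketch, assuming django-q's Stat API (as far as I understand, the same API qinfo reads) and the same mailman-web wrapper used above:

mailman-web shell -c "from django_q.status import Stat; print(Stat.get_all())"
# Prints the live cluster statistics django-q reports over its broker;
# an empty list here would match the "Clusters 0 Workers 0" symptom.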
Freerk Bosscha via Mailman-users writes:
I have recently setup a fresh mailman3 installation on my debian12 box.
If this is a Debian package, you should report to them. Debian maintainers make the patches they think are wise, and often go well beyond conformance to Debian policy on file locations and the like. Also, while MySQL is a supported backend database, and the main problem I know of seems to be addressed with the "?charset=utf8mb4&use_unicode=1" parameters in the db url, I believe the developers mostly run PostgreSQL. So if there is a database communication issue, the Debian developers may be more aware of it.
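(If you want to rule out a plain configuration mismatch on the Django side, you can print the database settings the web stack actually loads; a sketch, assuming the mailman-web wrapper used earlier in this thread. Note this will echo the database password, so don't paste the output to the list:)

mailman-web shell -c "from django.conf import settings; import pprint; pprint.pprint(settings.DATABASES)"
# Check that ENGINE is the MySQL backend and that the charset options
# mirror the "?charset=utf8mb4&use_unicode=1" parameters in the core db url.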
In my opinion this looks OK, and the qcluster is running.
I don't know what you meant by "looks like an AJAX problem" -- if you have specific knowledge of such problems, please enlighten us, but otherwise it is very unhelpful to guess.
Also, what do you mean by "the archive files are getting the archived messages"? If you know what you're doing you may be able to find archived messages in your RDBMS, but in principle HyperKitty just stores them as blobs in the RDBMS -- HyperKitty doesn't see them as files.
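(You can confirm that from the ORM; a sketch, assuming HyperKitty's Email model and the mailman-web wrapper:)

mailman-web shell -c "from hyperkitty.models import Email; print(Email.objects.count())"
# Counts archived messages as HyperKitty itself sees them:
# database rows, not files on disk.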
Now, what Sam is pointing out suggests that qcluster is not talking to qinfo, and therefore it may not be talking to HyperKitty. Since queues are implemented as files on disk, it's likely that qinfo is counting them using filesystem utilities, while it's talking to qcluster over a socket. Anyway, I don't think the "clusters 0, workers 0" report is good news. Also, I believe the "successful" queue is a measure of backlog. I've seen backlogs of that size on a site with thousands of archived lists and hundreds of thousands of posts per day, but I don't think it's normal for typical scales. I have seen it take many minutes for the recent activity to show up in HyperKitty on that huge site. I don't recall what the problem was, maybe something to do with a backlog in the full-text indexer (Xapian on that site). Unfortunately I don't recall how that was resolved, and don't have access to that system any more.
In general, IIRC, qcluster is a red herring for the activity display. It's used to manage the incoming queue from Mailman core for the archive but I don't think it's relevant to fetching data. HyperKitty fetches that directly from the database. I would guess that the issue has to do with communication between HyperKitty and the database.
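One could test that path without the browser at all; a sketch, assuming HyperKitty's Thread model, its mailinglist foreign key, and a placeholder list address:

mailman-web shell -c "from hyperkitty.models import Thread; print(Thread.objects.filter(mailinglist__name='mylist@example.com').count())"
# If this returns promptly with a sane count, HyperKitty can reach the
# database, and the problem is more likely in the browser-facing layer.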
Steve
Thank you for your reply. I will try to get it up and running the way I want.
On 30 Jan 2025, at 15:58, Stephen J. Turnbull <turnbull@sk.tsukuba.ac.jp> wrote:
And this is in my logfile after a restart:
*** Starting uWSGI 2.0.21-debian (64bit) on [Thu Jan 30 14:24:45 2025] ***
compiled with version: 12.2.0 on 19 May 2023 13:59:29
os: Linux-6.1.0-30-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.124-1 (2025-01-12)
nodename: mailman.pgtrynwalden.nl
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /
detected binary path: /usr/bin/uwsgi-core
setgid() to 33
setuid() to 33
chdir() to /usr/share/mailman3-web
your processes number limit is 31478
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /run/mailman3-web/uwsgi.sock fd 4
Python version: 3.11.2 (main, Nov 30 2024, 21:22:50) [GCC 12.2.0]
Python main interpreter initialized at 0x7f3ca0160018
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
[uwsgi-cron] command "./manage.py runjobs minutely" registered as cron task
[uwsgi-cron] command "./manage.py runjobs quarter_hourly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs hourly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs daily" registered as cron task
[uwsgi-cron] command "./manage.py runjobs monthly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs weekly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs yearly" registered as cron task
mapped 166752 bytes (162 KB) for 2 cores
*** Operational MODE: threaded ***
WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x7f3ca0160018 pid: 465 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 465)
spawned uWSGI worker 1 (pid: 983, cores: 2)
[uwsgi-daemons] spawning "python3 manage.py qcluster" (uid: 33 gid: 33)
14:24:47 [Q] INFO Q Cluster gee-fruit-vegan-leopard starting.
14:24:47 [Q] INFO Process-1:1 ready for work at 1018
14:24:47 [Q] INFO Process-1:2 ready for work at 1019
14:24:47 [Q] INFO Process-1:3 ready for work at 1020
14:24:47 [Q] INFO Process-1:4 ready for work at 1021
14:24:47 [Q] INFO Process-1:5 monitoring at 1022
14:24:47 [Q] INFO Process-1 guarding cluster gee-fruit-vegan-leopard
14:24:47 [Q] INFO Process-1:6 pushing tasks at 1023
14:24:47 [Q] INFO Q Cluster gee-fruit-vegan-leopard running.
Thu Jan 30 14:24:48 2025 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 1024)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 1024 exited after 1 second(s)
Thu Jan 30 14:25:48 2025 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 1118)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 1118 exited after 1 second(s)
Thu Jan 30 14:26:48 2025 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 1128)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 1128 exited after 1 second(s)
participants (4)

- Freerk Bosscha
- Freerk bosscha
- Sam Darwin
- Stephen J. Turnbull