Search results for query "sapiro"
- 5813 messages

[MM3-users] Re: Postorius no connection to REST API
by Richard Rosner
Mark Sapiro wrote:
> > You are in a better position to answer that than am I.
> What does sudo netstat -lntp show?
A lot. But since most of that isn't relevant here:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:465 0.0.0.0:* LISTEN 20241/master
tcp 0 0 127.0.0.1:8024 0.0.0.0:* LISTEN 14076/python3
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 20241/master
tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 14080/python3
tcp 0 0 0.0.0.0:587 0.0.0.0:* LISTEN 20241/master
tcp6 0 0 :::80 :::* LISTEN 13882/apache2
tcp6 0 0 :::465 :::* LISTEN 20241/master
tcp6 0 0 :::25 :::* LISTEN 20241/master
tcp6 0 0 :::443 :::* LISTEN 13882/apache2
tcp6 0 0 :::587 :::* LISTEN 20241/master
> What does ps -fwwa|grep rest show?
root 15055 14843 0 12:58 pts/1 00:00:00 grep rest
So whatever it's supposed to find, it's not there.
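As a quick cross-check of the 127.0.0.1:8001 listener shown in the netstat output above, Mailman core's REST API can be probed by hand. This is only a sketch: port 8001 and the /3.1/system/versions endpoint are Mailman defaults, and the credentials below are placeholders for the [webservice] admin_user/admin_pass values in mailman.cfg.
```python
#!/usr/bin/env python3
"""Probe the Mailman core REST API that Postorius needs to reach."""
import base64
import urllib.request

USER, PASSWORD = "restadmin", "restpass"   # placeholders; use the mailman.cfg values
URL = "http://127.0.0.1:8001/3.1/system/versions"

token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req = urllib.request.Request(URL, headers={"Authorization": f"Basic {token}"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.status, resp.read().decode())
```
A 200 response here means core's REST API is up and the problem is on the Django/uwsgi side; a connection error points back at Mailman core.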
> > mailman3-web.service must also run as list.
I changed that. It didn't like it.
systemctl status mailman3-web.service
● mailman3-web.service - Mailman3-web uWSGI service
Loaded: loaded (/lib/systemd/system/mailman3-web.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2021-08-11 13:05:39 CEST; 32s ago
Docs: file:///usr/share/doc/mailman3-web/README.rst
Process: 15570 ExecStart=/usr/bin/uwsgi --plugin python3 --ini /etc/mailman3/uwsgi.ini (code=exited, status=1/FAILURE)
Main PID: 15570 (code=exited, status=1/FAILURE)
Status: "initializing uWSGI"
Aug 11 13:05:39 mail systemd[1]: Starting Mailman3-web uWSGI service...
Aug 11 13:05:39 mail systemd[1]: mailman3-web.service: Main process exited, code=exited, status=1/FAILURE
Aug 11 13:05:39 mail systemd[1]: mailman3-web.service: Failed with result 'exit-code'.
Aug 11 13:05:39 mail systemd[1]: Failed to start Mailman3-web uWSGI service.
Aug 11 13:05:39 mail systemd[1]: mailman3-web.service: Service RestartSec=100ms expired, scheduling restart.
Aug 11 13:05:39 mail systemd[1]: mailman3-web.service: Scheduled restart job, restart counter is at 5.
Aug 11 13:05:39 mail systemd[1]: Stopped Mailman3-web uWSGI service.
Aug 11 13:05:39 mail systemd[1]: mailman3-web.service: Start request repeated too quickly.
Aug 11 13:05:39 mail systemd[1]: mailman3-web.service: Failed with result 'exit-code'.
Aug 11 13:05:39 mail systemd[1]: Failed to start Mailman3-web uWSGI service.
> > So, what do you have in your apache config for proxying to uwsgi and
> what's your uwsgi configuration.
lists-ssl.conf:
<VirtualHost *:443>
ServerAdmin admin(a)domain.de
ServerName lists.domain.de
Alias /mailman3/favicon.ico /var/lib/mailman3/web/static/postorius/img/favicon.ico
Alias /mailman3/static /var/lib/mailman3/web/static
<Directory "/var/lib/mailman3/web/static">
Require all granted
</Directory>
<IfModule mod_proxy_uwsgi.c>
ProxyPass /mailman3/favicon.ico !
ProxyPass /mailman3/static !
ProxyPass "/mailman3" "unix:/run/mailman3-web/uwsgi.sock|uwsgi://localhost:8001/"
</IfModule>
# SSL Engine Switch:
# Enable/Disable SSL for this virtual host.
SSLEngine on
SSLCertificateFile /etc/ssl/certs/lists.domain.de.cert.pem
SSLCertificateKeyFile /etc/ssl/private/lists.domain.de.private.pem
SSLCertificateChainFile /etc/ssl/certs/dfnca.pem
SSLCACertificateFile /etc/ssl/certs/rwthcert.pem
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
Protocols h2 http/1.1
SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256
SSLHonorCipherOrder off
#RewriteEngine on
#RewriteRule ^/$ https://lists.domain.de/listinfo
Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
Header always set X-Frame-Options: "SAMEORIGIN"
Header always set X-Xss-Protection "1; mode=block"
Header always set X-Content-Type-Options "nosniff"
Header always set Content-Security-Policy "default-src 'self' *.domain.de; script-src 'self' *.domain.de; connect-src 'self' *.domain.de; img-src 'self' *.domain.de; style-src 'self' 'unsafe-inline' *.domain.de; object-src 'self' *.domain.de; frame-src 'self' *.domain.de;"
Header always set Referrer-Policy "no-referrer-when-downgrade"
</VirtualHost>
<VirtualHost *:80>
RewriteEngine On
RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
</VirtualHost>
I guess by uwsgi config you mean the /etc/mailman3/uwsgi.ini file?
[uwsgi]
# Port on which uwsgi will be listening.
uwsgi-socket = /run/mailman3-web/uwsgi.sock
#Enable threading for python
enable-threads = true
# Move to the directory where the django files are.
chdir = /usr/share/mailman3-web
# Use the wsgi file provided with the django project.
wsgi-file = wsgi.py
# Setup default number of processes and threads per process.
master = true
process = 2
threads = 2
# Drop privileges and don't run as root.
uid = www-data
gid = www-data
plugins = python3
# Setup the django_q related worker processes.
attach-daemon = python3 manage.py qcluster
# Setup hyperkitty's cron jobs.
#unique-cron = -1 -1 -1 -1 -1 ./manage.py runjobs minutely
#unique-cron = -15 -1 -1 -1 -1 ./manage.py runjobs quarter_hourly
#unique-cron = 0 -1 -1 -1 -1 ./manage.py runjobs hourly
#unique-cron = 0 0 -1 -1 -1 ./manage.py runjobs daily
#unique-cron = 0 0 1 -1 -1 ./manage.py runjobs monthly
#unique-cron = 0 0 -1 -1 0 ./manage.py runjobs weekly
#unique-cron = 0 0 1 1 -1 ./manage.py runjobs yearly
# Setup the request log.
#req-logger = file:/var/log/mailman3/web/mailman-web.log
# Log cron separately.
#logger = cron file:/var/log/mailman3/web/mailman-web-cron.log
#log-route = cron uwsgi-cron
# Log qcluster commands separately.
#logger = qcluster file:/var/log/mailman3/web/mailman-web-qcluster.log
#log-route = qcluster uwsgi-daemons
# Last log and it logs the rest of the stuff.
#logger = file:/var/log/mailman3/web/mailman-web-error.log
logto = /var/log/mailman3/web/mailman-web.log
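One thing worth checking with a configuration like the above is whether the unix socket referenced by both the ProxyPass line and uwsgi-socket actually exists and accepts connections while mailman3-web is (briefly) running. A small sketch, assuming the socket path from the config above:
```python
#!/usr/bin/env python3
"""Check the uwsgi socket that Apache's ProxyPass points at."""
import os
import socket
import stat

SOCK = "/run/mailman3-web/uwsgi.sock"   # path taken from uwsgi.ini above

if not os.path.exists(SOCK):
    print(f"{SOCK} does not exist -- uwsgi is not running or uses another path")
elif not stat.S_ISSOCK(os.stat(SOCK).st_mode):
    print(f"{SOCK} exists but is not a socket")
else:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(SOCK)
        print("connected -- uwsgi is listening; check Apache's permissions next")
    except OSError as exc:
        print(f"cannot connect: {exc}")
    finally:
        s.close()
```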
4 years

[MM3-users] Re: LTMP to Postfix problem
by Brian Carpenter
On 6/18/20 7:36 PM, Mark Sapiro wrote:
> Check your system logs to see if there's anything about the OS killing
> the process. Also check the Postfix log for deliveries to mailman just
> before it died. Just stabbing in the dark, but maybe some huge message
> caused it to grow beyond some memory limit and get killed by the OS.
I am replying off list at this point. It is definitely an OOM issue. Here
are the log entries right before ltmp crashed:
Jun 18 00:00:12 mm4 kernel: [19548549.158382] postgres invoked
oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null),
order=0, oom_score_adj=0
Jun 18 00:00:12 mm4 kernel: [19548549.160719] postgres cpuset=/
mems_allowed=0
Jun 18 00:00:12 mm4 kernel: [19548549.161524] CPU: 0 PID: 7243 Comm:
postgres Not tainted 4.19.0-6-amd64 #1 Debian 4.19.67-2
Jun 18 00:00:12 mm4 kernel: [19548549.163047] Hardware name: QEMU
Standard PC (Q35 + ICH9, 2009), BIOS
rel-1.12.0-0-ga698c8995f-prebuilt.qemu.org 04/01/2014
Jun 18 00:00:12 mm4 kernel: [19548549.164885] Call Trace:
Jun 18 00:00:12 mm4 kernel: [19548549.165402] dump_stack+0x5c/0x80
Jun 18 00:00:12 mm4 kernel: [19548549.166097] dump_header+0x6b/0x283
Jun 18 00:00:12 mm4 kernel: [19548549.167019] ?
do_try_to_free_pages+0x2ec/0x370
Jun 18 00:00:12 mm4 kernel: [19548549.167864]
oom_kill_process.cold.30+0xb/0x1cf
Jun 18 00:00:12 mm4 kernel: [19548549.168760] ? oom_badness+0x23/0x140
Jun 18 00:00:12 mm4 kernel: [19548549.169534] out_of_memory+0x1a5/0x430
Jun 18 00:00:12 mm4 kernel: [19548549.170174]
__alloc_pages_slowpath+0xbd8/0xcb0
Jun 18 00:00:12 mm4 kernel: [19548549.171060]
__alloc_pages_nodemask+0x28b/0x2b0
Jun 18 00:00:12 mm4 kernel: [19548549.171973] filemap_fault+0x3bd/0x780
Jun 18 00:00:12 mm4 kernel: [19548549.172643] ? alloc_set_pte+0xf2/0x560
Jun 18 00:00:12 mm4 kernel: [19548549.173369] ?
filemap_map_pages+0x1ed/0x3a0
Jun 18 00:00:12 mm4 kernel: [19548549.174404]
ext4_filemap_fault+0x2c/0x40 [ext4]
Jun 18 00:00:12 mm4 kernel: [19548549.175259] __do_fault+0x36/0x130
Jun 18 00:00:12 mm4 kernel: [19548549.175930] __handle_mm_fault+0xe6c/0x1270
Jun 18 00:00:12 mm4 kernel: [19548549.176765] handle_mm_fault+0xd6/0x200
Jun 18 00:00:12 mm4 kernel: [19548549.177644] __do_page_fault+0x249/0x4f0
Jun 18 00:00:12 mm4 kernel: [19548549.178407] ? async_page_fault+0x8/0x30
Jun 18 00:00:12 mm4 kernel: [19548549.179156] async_page_fault+0x1e/0x30
Jun 18 00:00:12 mm4 kernel: [19548549.179915] RIP: 0033:0x55d875a3fe10
Jun 18 00:00:12 mm4 kernel: [19548549.180569] Code: Bad RIP value.
Jun 18 00:00:12 mm4 kernel: [19548549.181181] RSP: 002b:00007ffcad32f248
EFLAGS: 00010206
Jun 18 00:00:12 mm4 kernel: [19548549.182132] RAX: 000055d877d512b0 RBX:
000055d877d60b93 RCX: 00007ffcad32f2c0
Jun 18 00:00:12 mm4 kernel: [19548549.183365] RDX: 0000000000000005 RSI:
000055d877d60b93 RDI: 000055d877d512b0
Jun 18 00:00:12 mm4 kernel: [19548549.184712] RBP: 00007ffcad32f270 R08:
0000000000000001 R09: 00007ffcad32f388
Jun 18 00:00:12 mm4 kernel: [19548549.186338] R10: 00007f2585002d40 R11:
0000000000000000 R12: 0000000000000005
Jun 18 00:00:12 mm4 kernel: [19548549.187583] R13: 000055d877d4b3a0 R14:
00007f2581ad05b8 R15: 0000000000000000
Jun 18 00:00:12 mm4 kernel: [19548549.188952] Mem-Info:
Jun 18 00:00:12 mm4 kernel: [19548549.189468] active_anon:197899
inactive_anon:206498 isolated_anon:0
Jun 18 00:00:12 mm4 kernel: [19548549.189468] active_file:253
inactive_file:278 isolated_file:23
Jun 18 00:00:12 mm4 kernel: [19548549.189468] unevictable:0 dirty:0
writeback:0 unstable:0
Jun 18 00:00:12 mm4 kernel: [19548549.189468] slab_reclaimable:21867
slab_unreclaimable:55017
Jun 18 00:00:12 mm4 kernel: [19548549.189468] mapped:17008 shmem:24450
pagetables:4890 bounce:0
Jun 18 00:00:12 mm4 kernel: [19548549.189468] free:13184 free_pcp:340
free_cma:0
Jun 18 00:00:12 mm4 kernel: [19548549.195739] Node 0
active_anon:791596kB inactive_anon:825992kB active_file:1012kB
inactive_file:1112kB unevictable:0kB isolated(anon):0kB
isolated(file):92kB mapped:68032kB dirty$
Jun 18 00:00:12 mm4 kernel: [19548549.200559] Node 0 DMA free:8152kB
min:352kB low:440kB high:528kB active_anon:1056kB inactive_anon:5048kB
active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB prese$
Jun 18 00:00:12 mm4 kernel: [19548549.204993] lowmem_reserve[]: 0 1950
1950 1950 1950
Jun 18 00:00:12 mm4 kernel: [19548549.205877] Node 0 DMA32 free:44584kB
min:44700kB low:55872kB high:67044kB active_anon:790540kB
inactive_anon:820944kB active_file:1012kB inactive_file:1112kB
unevictable:0kB wri$
Jun 18 00:00:12 mm4 kernel: [19548549.211109] lowmem_reserve[]: 0 0 0 0 0
Jun 18 00:00:12 mm4 kernel: [19548549.211826] Node 0 DMA: 8*4kB (UE)
31*8kB (UME) 20*16kB (UME) 22*32kB (UME) 13*64kB (UME) 5*128kB (UME)
5*256kB (UM) 4*512kB (UM) 2*1024kB (UM) 0*2048kB 0*4096kB = 8152kB
Jun 18 00:00:12 mm4 kernel: [19548549.214414] Node 0 DMA32: 378*4kB
(UMEH) 472*8kB (UEH) 412*16kB (UEH) 196*32kB (UMEH) 63*64kB (UMEH)
63*128kB (UE) 16*256kB (UE) 4*512kB (ME) 0*1024kB 2*2048kB (M) 1*4096kB
(M) =$
Jun 18 00:00:12 mm4 kernel: [19548549.217336] Node 0 hugepages_total=0
hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Jun 18 00:00:12 mm4 kernel: [19548549.218937] Node 0 hugepages_total=0
hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Jun 18 00:00:12 mm4 kernel: [19548549.220513] 39729 total pagecache pages
Jun 18 00:00:12 mm4 kernel: [19548549.221286] 14715 pages in swap cache
Jun 18 00:00:12 mm4 kernel: [19548549.222022] Swap cache stats: add
10809560, delete 10794845, find 18524127383/18527544787
Jun 18 00:00:12 mm4 kernel: [19548549.223482] Free swap = 0kB
Jun 18 00:00:12 mm4 kernel: [19548549.224081] Total swap = 524284kB
Jun 18 00:00:12 mm4 kernel: [19548549.224712] 524154 pages RAM
Jun 18 00:00:12 mm4 kernel: [19548549.225222] 0 pages HighMem/MovableOnly
Jun 18 00:00:12 mm4 kernel: [19548549.225894] 13385 pages reserved
Then there was this little bit of info:
Jun 18 00:00:12 mm4 kernel: [19548549.442555] Out of memory: Kill
process 29676 (python3) score 37 or sacrifice child
Jun 18 00:00:12 mm4 kernel: [19548549.444508] Killed process 29676
(python3) total-vm:192224kB, anon-rss:93084kB, file-rss:0kB, shmem-rss:0kB
Jun 18 00:00:12 mm4 kernel: [19548549.464907] oom_reaper: reaped process
29676 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
I suspect the above process was ltmp.
So the server has 2 GB of RAM with about a dozen MM3 lists on it. Is
this an issue with tuning the performance of the PostgreSQL server, or do I
need to add more memory? I am curious how much RAM your server is
running with and whether you have run into any OOM problems.
If you want me to post this to the list, I will.
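For what it's worth, kills like the one in the excerpt above are easy to tally across a whole log file. A small sketch, assuming the standard kernel log format shown above and a log path that may differ on your system:
```python
#!/usr/bin/env python3
"""Tally OOM-killer victims in a kernel/syslog file."""
import re
import sys

# Matches lines such as:
#   ... Out of memory: Kill process 29676 (python3) score 37 or sacrifice child
PATTERN = re.compile(r"Out of memory: Kill process (\d+) \(([^)]+)\)")

def report(path):
    with open(path, errors="replace") as fh:
        for line in fh:
            m = PATTERN.search(line)
            if m:
                pid, name = m.groups()
                stamp = line.split(" kernel:")[0].strip()
                print(f"OOM kill: pid={pid} comm={name} ({stamp})")

if __name__ == "__main__":
    report(sys.argv[1] if len(sys.argv) > 1 else "/var/log/kern.log")
```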
--
Please let me know if you need further assistance.
Thank you for your business. We appreciate our clients.
Brian Carpenter
EMWD.com
--
EMWD's Knowledgebase:
https://clientarea.emwd.com/index.php/knowledgebase
EMWD's Community Forums
http://discourse.emwd.com/
5 years, 2 months

[MM3-users] Re: search this list searches more then just this list
by Marco van Tol
Op 26 jan 2024, om 22:55 heeft Mark Sapiro <mark(a)msapiro.net> het volgende geschreven:
> On 1/26/24 02:44, Marco van Tol wrote:
>> Op 25 jan 2024, om 14:16 heeft Marco van Tol <mvantol(a)ripe.net> het volgende geschreven:
>>>
>>> Okay, so, I got a bit further, but something still gets stuck.
>>>
>>> Here’s what I did.
>>> Keep in mind I’m using containers that are built from some CI/CD pipeline, so I updated the pipeline to apply the patch attached to this email to `/usr/lib/python3.11/site-packages/xapian_backend.py`.
>>>
>>> Before I had 2 list servers with the “Term too long” issue, 1 got resolved by this, and the other did not.
>>> I opened a shell in the newly deployed container to confirm the patch was applied in it.
>>>
>>> The other attachment to this email is a copy/paste from the full error from `./manage.py rebuild_index`.
>>>
>>> Is there something else special in the email that makes it choke that evades the xapian patch?
>>>
>>> Thank you very much in advance!
>
> The message above with the attached "full error" never got to the list. What is the error report?
Hm, I see. Not sure why. The reply I got from mail.mailman3.org at 2024-01-25 13:16:25.486 UTC was:
"250 2.0.0 Ok: 12772 bytes queued as 55AFD105C02"
I’m pasting it at the bottom of this email. Sorry it didn’t come through.
>> I tried to change to ‘hash’, but the code in that bit of the function has not been tested enough.
>> For example, `hole = sha224(hole.encode('utf8')).hexdigest()` fails because the bytes object hole does not have an encode() method.
>> When I change it to `hole = sha224(hole).hexdigest()`, the following error is:
>
> That's only part of it. You need
>
> hole = sha224(hole).hexdigest().encode('utf8')
I ended up changing it to this, which fixed that bit, and led to the next issue. :)
>> text = text[:match.start()] + hole + text[match.end():]
>> TypeError: can't concat str to bytes
>> The ‘hash’ part of that function needs some debugging.
>
>
> Yes, presumably because no one sets `XAPIAN_LONG_TERM_METHOD=hash` in the environment. Do you have a reason for this?
I wanted to check and see if I could avoid the “Term too long (>245)” issue this way, but I haven’t gotten to the point where xapian is successful.
Right now I’m back at the original issue as I see no other solution than to go back to whoosh.
Thanks!
Marco van Tol
Paste:
----
Indexing 194620 emails
[ERROR/MainProcess] Failed indexing 156001 - 157000 (retry 5/5): Term too long (> 245): XSUBJECThttp://www.google.com/url?q=%68%74%74%70%73%3a%2f%2f%68%64%72%65%64… (pid 32): Term too long (> 245): XSUBJECThttp://www.google.com/url?q=%68%74%74%70%73%3a%2f%2f%68%64%72%65%64…
Traceback (most recent call last):
File "/usr/lib/python3.11/site-packages/haystack/management/commands/update_index.py", line 119, in do_update
backend.update(index, current_qs, commit=commit)
File "/usr/lib/python3.11/site-packages/xapian_backend.py", line 98, in wrapper
func(self, *args, **kwargs)
File "/usr/lib/python3.11/site-packages/xapian_backend.py", line 505, in update
database.replace_document(document_id, document)
xapian.InvalidArgumentError: Term too long (> 245): XSUBJECThttp://www.google.com/url?q=%68%74%74%70%73%3a%2f%2f%68%64%72%65%64…
[ERROR/MainProcess] Error updating hyperkitty using default
Traceback (most recent call last):
File "/usr/lib/python3.11/site-packages/haystack/management/commands/update_index.py", line 297, in handle
self.update_backend(label, using)
File "/usr/lib/python3.11/site-packages/haystack/management/commands/update_index.py", line 342, in update_backend
max_pk = do_update(
^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/haystack/management/commands/update_index.py", line 119, in do_update
backend.update(index, current_qs, commit=commit)
File "/usr/lib/python3.11/site-packages/xapian_backend.py", line 98, in wrapper
func(self, *args, **kwargs)
File "/usr/lib/python3.11/site-packages/xapian_backend.py", line 505, in update
database.replace_document(document_id, document)
xapian.InvalidArgumentError: Term too long (> 245): XSUBJECThttp://www.google.com/url?q=%68%74%74%70%73%3a%2f%2f%68%64%72%65%64…
Traceback (most recent call last):
File "/opt/mailman-web/./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/lib/python3.11/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/usr/lib/python3.11/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python3.11/site-packages/django/core/management/base.py", line 402, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/lib/python3.11/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/haystack/management/commands/rebuild_index.py", line 65, in handle
call_command("update_index", **update_options)
File "/usr/lib/python3.11/site-packages/django/core/management/__init__.py", line 198, in call_command
return command.execute(*args, **defaults)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/haystack/management/commands/update_index.py", line 297, in handle
self.update_backend(label, using)
File "/usr/lib/python3.11/site-packages/haystack/management/commands/update_index.py", line 342, in update_backend
max_pk = do_update(
^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/haystack/management/commands/update_index.py", line 119, in do_update
backend.update(index, current_qs, commit=commit)
File "/usr/lib/python3.11/site-packages/xapian_backend.py", line 98, in wrapper
func(self, *args, **kwargs)
File "/usr/lib/python3.11/site-packages/xapian_backend.py", line 505, in update
database.replace_document(document_id, document)
xapian.InvalidArgumentError: Term too long (> 245): XSUBJECThttp://www.google.com/url?q=%68%74%74%70%73%3a%2f%2f%68%64%72%65%64…
----
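For readers following along, the hashing idea discussed above boils down to replacing any term that would exceed xapian's limit with a fixed-length digest. A standalone sketch: the 245-byte limit, the XSUBJECT prefix and the sha224 call mirror the quoted snippets, but the helper name and everything else is illustrative, not the actual xapian_backend.py code.
```python
#!/usr/bin/env python3
"""Sketch of hashing over-long index terms so xapian will accept them."""
from hashlib import sha224

XAPIAN_TERM_LIMIT = 245   # xapian rejects terms longer than this many bytes

def shorten_term(term: bytes, prefix: bytes = b"XSUBJECT") -> bytes:
    """Return prefix+term, hashing the term when it would exceed the limit."""
    if len(prefix + term) <= XAPIAN_TERM_LIMIT:
        return prefix + term
    # As in the quoted fix: hash, take the hexdigest, re-encode to bytes.
    return prefix + sha224(term).hexdigest().encode("utf8")

if __name__ == "__main__":
    long_url = ("http://www.google.com/url?q=" + "%68%74%74%70" * 100).encode("utf8")
    shortened = shorten_term(long_url)
    print(len(shortened), shortened[:60])
    assert len(shortened) <= XAPIAN_TERM_LIMIT
```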
1 year, 6 months

[MM3-users] Re: External MTA incoming mail: configuration
by Odhiambo Washington
On Tue, Aug 6, 2024 at 10:46 AM Roland Giesler via Mailman-users <
mailman-users(a)mailman3.org> wrote:
> On 2024/08/05 19:59, Mark Sapiro wrote:
> > I see this reply is now moot as you have now configured list mail to
> > go directly to the Mailman server, but ...
> >
> > On 8/5/24 03:44, Roland Giesler via Mailman-users wrote:
> >>
> >> In the logs of the MTA I see this however: warning: do not list
> >> domain fast.za.net in BOTH virtual_mailbox_domains and relay_domains
> >>
> >> Mailman creates these entries, but postfix doesn't like it. I don't
> >> see any mail delivered to the mailman yet. Is this the problem?
> >
> > Probably not. It is telling you that mail to the fast.za.net domain
> > cannot both be delivered to local mailboxes (virtual_mailbox_domains)
> > and relayed to foreign hosts (relay_domains)
> >
> Thanks, yes, I have since assumed that to be the case.
> >
> >> In the MTA postfix main.cf:
> >>
> >> relay_domains = hash:/etc/mailman3/data/postfix_domains
> > >
> >> cat /etc/mailman3/data/postfix_domains
> >> ...
> >>
> >> and also
> >>
> >> local_recipient_maps=$virtual_mailbox_maps,
> >> hash:/etc/mailman3/data/postfix_lmtp
> >>
> >> cat /etc/mailman3/data/postfix_lmtp
> >> ...
> >
> > How about
> >
> > transport_maps = hash:/etc/mailman3/data/postfix_lmtp
> I can't remove the $virtual_mailbox_maps entry, since Power-Mailinabox
> (PMiaB) uses that. It may make Mailman3 work, but break PMiaB.
> >
> >
> >>
> >> Then there's:
> >> virtual_mailbox_domains=sqlite:/etc/postfix/virtual-mailbox-domains.cf
> >>
> >> cat /etc/postfix/virtual-mailbox-domains.cf
> >> dbpath=/home/user-data/mail/users.sqlite
> >> query = SELECT 1 FROM users WHERE email LIKE '%%@%s' UNION SELECT 1
> >> FROM aliases WHERE source LIKE '%%@%s' UNION SELECT 1 FROM
> >> auto_aliases WHERE source LIKE '%%@%s'
> >>
> >> When I run that query in sqlite3, it returns no records, so I'm not
> >> sure how this is supposed to work. %s to me means that first
> >> argument, so is this used in python and then %s is the argument sent
> >> to this query?
> >
> >
> > See https://www.postfix.org/sqlite_table.5.html
> >
> > `%%` is replaced with `%` which is a SQL wildcard matching anything
> > and `%s` is replaced by the key postfix is looking for, i.e. the
> > domain that it is asking about.
> >
> > So, that query becomes
> >
> > SELECT 1 FROM users WHERE email LIKE '%(a)fast.za.net' UNION SELECT 1
> > FROM aliases WHERE source LIKE '%(a)fast.za.net' UNION SELECT 1 FROM
> > auto_aliases WHERE source LIKE '%(a)fast.za.net'
> >
> > I.e, it returns true if any user or alias or auto_alias has an address
> > ending in '@fast.za.net' and if that's true the mail to any
> > '@fast.za.net' address including list mail will be stored locally.
>
> Ah, thank you! I created a ticket at MiaB about this, so I'll post your
> response there. The %s had me stumped at first, but now it's clear.
>
>
> >
> > If you really have local users on box2.gtahardware.co.za with
> > addresses '@fast.za.net' and you want to relay list mail to lists
> > '@fast.za.net', you need to see
> >
> https://docs.mailman3.org/projects/mailman/en/latest/src/mailman/docs/mta.h…
> .
>
> Thank you for that! From that it seems it may still be possible to use
> PMiaB as my MTA after, but I'll work through that reference and test it
> and report back.
>
I think that ALL MTAs have the concept of local domains (for which mail
is delivered to 'local mailboxes') and remote domains (aka relay domains)
for which mail is relayed to another host which has the mailboxes.
So in your case box2.gtahardware.co.za (this is an FQDN) could be handling
local emails, e.g. roland(a)gtahardware.co.za, johndoe(a)gtahardware.co.za, etc.
Those are local, and so gtahardware.co.za is a local domain. However,
fast.za.net is a relay domain and all mail to XXX(a)fast.za.net should be
relayed to the MM3 server.
If your MTA does not have this concept, then it's either not ready for
prime time or it wasn't intended to have such ability.
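The %%/%s expansion Mark describes in the quoted part above is easy to reproduce outside Postfix. A sketch with a throwaway in-memory database: the table layout is a stand-in for Mail-in-a-Box's users.sqlite and the query is abridged; only the expansion rule itself comes from the sqlite_table(5) man page.
```python
#!/usr/bin/env python3
"""Reproduce Postfix's %%/%s expansion for the sqlite query quoted above."""
import sqlite3

RAW_QUERY = ("SELECT 1 FROM users WHERE email LIKE '%%@%s' "
             "UNION SELECT 1 FROM aliases WHERE source LIKE '%%@%s'")

def expand(query: str, key: str) -> str:
    # Postfix replaces %s with the lookup key and %% with a literal %.
    return query.replace("%s", key).replace("%%", "%")

conn = sqlite3.connect(":memory:")          # stand-in for users.sqlite
conn.executescript("""
    CREATE TABLE users   (email  TEXT);
    CREATE TABLE aliases (source TEXT);
    INSERT INTO users VALUES ('roland@fast.za.net');
""")

sql = expand(RAW_QUERY, "fast.za.net")
print(sql)                                  # ... LIKE '%@fast.za.net' ...
print(conn.execute(sql).fetchall())         # [(1,)] -> domain is treated as local
```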
--
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE
+254 7 3200 0004/+254 7 2274 3223
In an Internet failure case, the #1 suspect is a constant: DNS.
"Oh, the cruft.", egrep -v '^$|^.*#' ¯\_(ツ)_/¯ :-)
[How to ask smart questions:
http://www.catb.org/~esr/faqs/smart-questions.html]
1 year

[MM3-users] Re: Custom templates (was: Re: Re: nginx configuration on a multitasking server)
by David Newman
On 1/14/22 4:54 PM, Mark Sapiro wrote:
> On 1/14/22 11:56 AM, David Newman wrote:
>>
>> I created a pending subscription request on a test list. Here is the
>> text I'm using in a file called 'list:admin:action:subscribe.txt'
>> (minus the rows of hypens):
>>
>> ----------
>>
>>
>> As list administrator, your authorization is required for a mailing
>> list subscription request approval:
>>
>> For: $member
>> List: $listname
>>
>> At your convenience, visit:
>>
>> https://${domain}/mailman/admindb/lists/${short_listname}.${domain}/
>
> This looks like a Mailman 2.1 URL. Are you sure you don't want something
> like
>
> https://${domain}/mailman3/lists/${short_listname}.${domain}/
Yes. The server redirects MM3-style requests to MM2.1 URLs. I need to
determine why, but that may be a different issue than the templates one.
There's nothing obvious in this vhost's nginx config that would generate
the redirect. Will check; if I can't resolve this, I'll open a different
thread.
>
>>
>> to approve or deny the request.
>>
>> ----------
>>
>> Instead the message that goes out at midnight daily reads like this:
>
>
> That's a different message. It is sent from the `mailman notify` command
> run by cron. It's built from the list:admin:notice:pending template.
OK, wrong template then.
>
>>
>> On successive nights I have placed copies of the
>> 'list:admin:action:subscribe.txt' file above in these locations:
>>
>> mm/var/templates/lists/test.lists.domain.tld/en
>
> I hope you mean
>
> /opt/mailman/mm/var/templates/lists/test.lists.domain.tld/en
Yes
>
>> /opt/mailman/mm/var/templates/site/en
>
>
> Either of those should work. The lists one will take priority over the
> site one for that list.
>
>
>> and then in Postorius, with the same contents as above.
>
> If you set a template in Postorius, it takes priority. If you later
> decide you want to use one in /opt/mailman/mm/var/templates/ you have to
> delete the postorius one.
>
>
>> The file versions are owned by the mailman user, and have 0644
>> permissions. Not sure this is necessary but I've restarted the
>> mailman3 and mailmanweb services each time after a change.
>>
>> There is no sign of trouble in the MM3 or web logs. The only thing I
>> see is in the Postfix log, where the shorter and less helpful message
>> goes out each night. I don't see anything in the template docs about
>> this.
>>
>> Questions:
>>
>> 1. What to do to get the custom message working?
>
>
> It should work. I suspect you need to go to the list's Settings ->
> Automatic Responses and set `Admin immed notify` to Yes.
It's enabled. I think this triggers notifications only for new
subscription requests, not pending ones.
>
>
>> 2. Is there a way to trigger a subscribe reminder for one list only?
>
>
> If by subscribe reminder you mean the `list:admin:action:subscribe.txt`
> message, yes that's the above list setting. If you mean the one sent
> from the `mailman notify` command, see `mailman notify --help` for the
> options. You can specify one or more lists (default is all lists) via
> options and put them in your cron.
>
>
>> Asking because there are other lists on this server with other
>> requests pending, and I don't want to bother other moderators with
>> multiple reminders per day. Not a big deal, but it would be nice to
>> test this with one list rather than waiting up to 24 hours after each
>> change.
>
>
> You can test `mailman notify` at any time by running it by hand.
Thanks. This still isn't working as intended. I took these steps:
1. Created the file
'/opt/mailman/mm/var/templates/lists/test.lists.domain.tld/en/list:admin:action:pending.txt'
with these contents (minus the hyphens top and bottom):
----------
The $listname list has $count moderation requests waiting.
$data
At your convenience, visit:
https://${domain}/mailman3/lists/${short_listname}.${domain}/
to approve or deny the request.
----------
2. Restarted the mailman3 and mailmanweb services
3. As the mailman user, ran the command 'mailman notify -l
test.lists.domain.tld -v'
This generated an email message but its contents were the same as in my
previous post:
----------
The test(a)lists.domain.tld list has 1 moderation requests waiting.
Held Subscriptions:
User: letmein(a)domain.tld
Please attend to this at your earliest convenience.
----------
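For reference, the $-placeholders in the custom template (step 1 above) can be previewed with Python's string.Template, which understands the same $name/${name} syntax. The values below are made up for the preview; whether `mailman notify` actually provides each of these substitutions for the template it sends is exactly the open question here.
```python
#!/usr/bin/env python3
"""Preview the $-substitutions for the custom pending-requests template."""
from string import Template

TEMPLATE = """\
The $listname list has $count moderation requests waiting.

$data

At your convenience, visit:

https://${domain}/mailman3/lists/${short_listname}.${domain}/

to approve or deny the request.
"""

preview = Template(TEMPLATE).safe_substitute(
    listname="test@lists.domain.tld",            # hypothetical preview values
    count=1,
    data="Held Subscriptions:\n    User: letmein@domain.tld",
    domain="lists.domain.tld",
    short_listname="test",
)
print(preview)
```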
What I'm trying to do here is replicate a feature from MM2.1, where
pending notifications had URLs embedded in the message. That's not
available out of the box in MM3, and if there's a place for feature
requests I'd be glad to ask that this be made the default.
For now, though, I'm looking to get this customization working.
Thanks.
dn
3 years, 7 months

[MM3-users] Re: messages stuck in the bad queue
by Ken Alker
--On Sunday, June 25, 2023 9:18 AM -0700 Mark Sapiro <mark(a)msapiro.net>
wrote:
> On 6/24/23 9:54 PM, Ken Alker wrote:
>> I am working on a mail server that was migrated from mailman V2 to
>> mailman V3 in February. I just noticed that there are over 1000
>> messages in the bad queue. All of the messages in the bad queue are
>> dated 2/2/23 and all have a timestamp within a one-hour window of each
>> other.
>>
>> I used mailman qfile to inspect five random emails in the bad queue and
>> so far they are all "successfully subscribed" emails,
>
> This is a bit strange. See below.
>
>> but I presume
>> there are no guarantees there aren't others mixed in there that might be
>> legitimate emails (ie. I just unshunted 140 emails in the shunt queue
>> and they were all good emails and all got processed).
>
>
> Presumably these were shunted due to some issue that was subsequently
fixed.
That is my assumption. I didn't unshunt them until I had V3.3.8 completed,
figuring the previous version/install had some bug/mistake. Fortunately,
they all got processed (with one problem, which I brought up in another
thread).
>> I figure that the easiest way to inspect these 1000 emails is just to
>> have them re-delivered. I tried moving one from the bad queue to the
>> shunt queue and I ran "mailman unshunt" but nothing happened.
>
> What does nothing happened mean? "mailman unshunt" should move the
> message to the original queue which was stored in the 'whichq' attribute
> in the msgdata when the message was shunted. Since this wasn't a shunted
> message, there's no 'whichq' attribute so it goes to the 'in' queue.
> I.e., "mailman unshunt" would have moved the message from the 'shunt'
> queue to the 'in' queue. If the message wound up back in the shunt queue,
> there should be messages in mailman.log indicating why.
I moved the .psv file from the bad queue into the shunt queue. I then ran
"mailman unshunt" (as user 'mailman' while in the virtual environment). I
tailed mailman.log during this process and no logs were spit out. The date
stamp on the .psv file never changed (maybe it does not when being moved
between queues?) and, AFAICT, the file never moved from the shunt queue. I
waited maybe five minutes, tops.
>> Is there a way to reprocess the bad queue?
>
> You could just move the messages to the 'in' queue.
I just now tried moving the same message into the 'in' queue but, again,
nothing happened. I left it in there for five minutes. Do I have to run a
program to get it to act on the 'in' queue? (I presume there is a
"runner" that is always watching and taking care of this already, as I
presume this is the queue where all 'normal' traffic is handled.)
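A quick way to confirm where a moved file actually ended up is to list every queue directory. A sketch only, using the var directory visible in the qfile command below; adjust the path for your install:
```python
#!/usr/bin/env python3
"""List what is sitting in each Mailman queue directory."""
from pathlib import Path

VAR_QUEUE = Path("/opt/mailman/mm/var/queue")   # same var dir as the qfile command

for qdir in sorted(p for p in VAR_QUEUE.iterdir() if p.is_dir()):
    entries = sorted(qdir.iterdir())
    if entries:
        print(f"{qdir.name}: {len(entries)} file(s)")
        for entry in entries[:5]:
            print(f"    {entry.name}")
```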
Here are the (obfuscated) results of "mailman qfile
/opt/mailman/mm/var/queue/shunt/1675389793.6945386+2aeaf0015558c9d8380c96142c3fa9d03a8142bc.psv"
(the message I was experimenting with):
[----- start pickle -----]
<----- start object 1 ----->
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Subject: THE-list subscription notification
From: email(a)obfuscated.com
To: the-list-owner(a)lists.obfuscated.com
Message-ID: <167538979369.2390432.521344217673230839(a)obfuscated.net>
Date: Thu, 02 Feb 2023 18:03:13 -0800
Precedence: bulk
user(a)domain.tld has been successfully subscribed to THE-list.
<----- start object 2 ----->
{ '_parsemsg': False,
'envsender': 'email(a)obfuscated.com',
'listid': 'the-list.obfuscated.com',
'nodecorate': True,
'recipients': {'person(a)domain.tld'},
'reduced_list_headers': True,
'version': 3}
[----- end pickle -----]
>> Also, what exactly is the bad queue?
>
> If Mailman's content filtering is enabled and Filter Action is Preserve,
> messages which have no remaining content after content filtering are put
> in the 'bad' queue. These are the only messages put there. When this
> happens there should be a log message like
>
> <message-id> preserved in file base <queue_file>
>
> It is unclear to me how these 1000+ messages wound up in the 'bad' queue.
> If you have logs from Feb 2, they might help. My guess is there was some
> MTA misrouting that caused these list welcome messages (from some mass
> subscribe?) to be rerouted to the list posting address combined with some
> bad content filtering settings that removed all the content, but that
> seems pretty far fetched.
Unfortunately, I don't have logs going that far back. I don't think your
concept is far fetched. I'm 99% sure this was the day that the migration
from V2 to V3 took place so this was certainly the result of some type of
mass-subscription-import into V3. The V3 that was installed or the install
itself definitely had problems, which is why I just did an upgrade/overhaul
to V3.3.8 this past week. Many/most of the strange issues that were
occurring seem to be cleared up.
2 years, 1 month

[MM3-users] Time Stand Still
by Barry Warsaw
Somewhen in the dark recesses of intarweb history, I found myself as the project leader for both Jython (née JPython) and GNU Mailman. I'd been involved with Jython since it was invented by Jim Hugunin around the time he came to work with us at Pythonlabs. I'd been contributing to Mailman since we inherited John Viega's Python-based Dave Matthews Band list server, and put it to use replacing python.org's Majordomo installation.
I'd enjoyed both projects, but knew I could not lead both, so I had to make a choice. I chose to turn over Jython to a team that's done a much better job over the years than I ever could have. Something about email, and especially the communication and collaboration patterns that it facilitates, really fascinated me. I know, I know, but we all have our lapses of sanity. Mine has lasted almost 20 years, a bit more than "momentary" perhaps.
I've rarely gotten paid to work on Mailman, but it did provide me some great opportunities. Most notably it led to my 10 year stint at Canonical. I was originally hired on there to integrate mailing lists with Launchpad, and Mailman was the obvious choice. I learned a ton doing that project, and working within the constraints of integrating the two Python-based systems, especially since Launchpad was originally not free software and Mailman was GPL'd. Later, the Zope-based Launchpad source code was released under the AGPL, making much of the monkeypatching unnecessary, but by then the system was solid and reliable, and you don't fix what's not broken.
Except, I guess I did. I took a lot of the lessons from that work, along with a good hard look at all the problems with Mailman 2, and began to break another cardinal rule of software development: second system syndrome. The result is Mailman 3. It took forever, and we're still not at complete feature parity with Mailman 2, but at least it's Real Enough to be used at many Real Sites, including python.org and lists.fedoraprojects.org.
It would be ridiculous for me to take significant credit for this. I have to acknowledge the amazing user community -- you! -- for all the support, patches, suggestions, feedback, patience, criticism, donations, and contributions that you've given to the project, and to me personally over the years. And my deepest gratitude goes to all the core developers that have stayed or come and gone, but most especially the current Cabal: Abhilash Raj, Aurelien Bompard, Florian Fuchs, Mark Sapiro, Stephen J. Turnbull, Terri Oda. You should know that each and every one of them is truly awesome, both in what they contribute technically, and in their amazing friendships. Mailman is infinitely better because of their involvement, and I've loved spending time with them over the years at the Pycon sprints, making releases and sharing teas and meals.
My blog is called We Fear Change, and that's humorously taken from a '90s bit in Mike Myers' excellent Wayne's World movie (a phrase actually uttered by the brilliant Dana Carvey as Garth). The irony of course is that while we all may fear change, it's the one constant thing we can count on. And in fact, we *require* change to thrive, because if you aren't changing, you aren't alive. Time, and being engaged with life's vagaries, means there's no alternative to change; it must be embraced.
And so, with a vague reference to the many (good!) changes in my personal and professional life, I'm announcing that I'm stepping down from the project leadership role of GNU Mailman, effective... nowish! And it's with unanimous agreement among the GNU Mailman Steering Committee (a.k.a. the Mailman Cabal), that we are announcing Abhilash Raj as the new project leader.
If you don't recognize Abhilash's name, you probably aren't paying attention, at least to Mailman 3. Abhilash came to us in 2013 as a Google Summer of Code student, and he's become one of the project's most valuable contributors. His list of accomplishments is long, and it includes everything from redesigning the website, to integrating CI with our GitLab build system, porting our code to the SQLAlchemy ORM, adding MySQL support, revving up adoption through his Docker images, along with his great coding work on Core, Postorius, HyperKitty, and mailmanclient.
This transition is good for the project too. Email, its defining protocols and standards, and its role in our daily lives, has changed profoundly since the early days of Mailman. A fresh perspective and enthusiasm will help keep Mailman relevant to the changing ways we -- especially the FLOSS and tech communities -- communicate.
Please join me in supporting Abhilash in every way possible as he takes over in this new role as project leader. I'll be here when and if needed, even as I create space in my "spare" time for... Something Else. I look forward to the vision that Abhilash will bring to the project, and I know that he will do a great job. To me, Mailman has always been about collaboration, and the best
way for it to succeed is for you to continue to contribute your insights, experiences, opinions, and skills with positive intention.
-Barry
https://www.wefearchange.org/
7 years, 9 months

[MM3-users] Re: E-mail every minute: "Cron <www-data@sharky5> ..."
by Robert Heller
At Mon, 8 Jul 2024 00:16:48 +0300 Odhiambo Washington <odhiambo(a)gmail.com> wrote:
>
> On Sun, Jul 7, 2024 at 8:43 PM Robert Heller <heller(a)deepsoft.com> wrote:
>
> > At Sun, 7 Jul 2024 19:48:02 +0300 Odhiambo Washington <odhiambo(a)gmail.com>
> > wrote:
> >
> > >
> > > On Sun, Jul 7, 2024 at 7:16 PM Robert Heller <heller(a)deepsoft.com>
> > wrote:
> > >
> > > > At Sun, 7 Jul 2024 18:52:05 +0300 Odhiambo Washington <
> > odhiambo(a)gmail.com>
> > > > wrote:
> > > >
> > > > >
> > > > > On Sun, Jul 7, 2024 at 4:12 PM Robert
> > Heller <heller(a)deepsoft.com>
> > > > wrote:
> > > > >
> > > > > > What am I missing? I *think* I have mailman3 *mostly* setup, but
> > there
> > > > > > are
> > > > > > still some configuration things that are missing, but I am not sure
> > > > how to
> > > > > > fix
> > > > > > them (the docs are NOT clear).
> > > > > >
> > > > >
> > > > > Which docs are you relying on?
> > > >
> > > > https://docs.mailman3.org/en/latest/config-web.html
> > > >
> > > > I presume these are the official docs for mailman3 -- maybe they
> > aren't?
> > > >
> > > > >
> > > > > How about this -
> > > > https://docs.mailman3.org/en/latest/install/virtualenv.html
> > > > > ??
> > > >
> > > > I'm not using a virtual environment. I'm using all native Debian 12
> > > > packages,
> > > > installed via apt. The virtual environment docs are actually even worse
> > > > (even
> > > > more confusing).
> > >
> > >
> > > Worse? :-)
> >
> > Even more confusing. Both sets of docs make various assumptions and don't
> > really explain things properly. Like everywhere where "settings.py" is
> > mentioned, it really means "/etc/mailman3/mailman-web.py"
> >
>
> No! It means /etc/mailman3/settings.py - literally!
>
>
> > In any case, the virtual environment docs are hard to relate to a "native"
> > install and are generally hard to follow, since they seem to jump all over
> > the
> > place.
>
>
> When one day you'll be able to internalize what a Python virtual
> environment is, you'll realize that it's VERY convenient.
> You will actually embrace it from that point onwards.
>
> (Spaghetti docs?)
> > And it is hard to relate the various (and not always consistent) virtual
> > environment paths and settings files to the "native" paths.
>
>
> Actually, if you're this inclined to run everything natively, MM3 is
> perhaps not for you. Why? Because you'll not easily find help here.
> We focus on the virtual environment only as the standard. Why? Because no
> one is willing to deal with ALL the OS-centric packaging
> out there. Python virtual environment is universal across all the OSes, I
> can say.
>
>
> > The "official" docs are just not useful to me, since I am not using a
> > virtual
> > environment. If a virtual environment is recommended, what is the point of
> > the
> > Debian 12 packages?
>
>
> We cannot answer that here. I guess they are meant for people like you who
> strive under pain :-)
> With the Python virtual environment, I can install and manage MM3 in almost
> any *nix OS.
>
>
> > Are they just not meant to be used? Really? Do you mean that I should use
> > a separate package management system for Mailman3? That
> > really sucks.
> >
>
> Yes, they are meant to be used. No one denies that. However, they are not
> packaged by the Mailman Developers.
> Did you read one response from Mark Sapiro where he said, and I quote:
> ```
> If you prefer to use the Debian packages, that's fine, but if using the
> Debian packages, your primary resource for support, documentation, bug
> reports, etc. should be Debian. See https://wiki.list.org/x/12812344
> ```
> So yes, go ahead and use the Debian packages. No one is stopping you.
I uninstalled them and have given up on mm3. It means the main mailing list
I have been hosting for the past while (more than a decade) will have to
migrate to something else... :-(
>
>
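For anyone else reading this thread and weighing the two approaches, the virtual-environment route discussed above amounts to very little code. A sketch only, with an assumed install location; the canonical steps are in the virtualenv guide linked earlier (https://docs.mailman3.org/en/latest/install/virtualenv.html):
```python
#!/usr/bin/env python3
"""Bootstrap a Mailman core virtualenv, roughly as the virtualenv guide does."""
import subprocess
import venv

VENV_DIR = "/opt/mailman/venv"    # assumed location, not mandated by the docs

venv.create(VENV_DIR, with_pip=True)
# Core only; the guide goes on to install postorius, hyperkitty, mailman-web, etc.
subprocess.run([f"{VENV_DIR}/bin/pip", "install", "mailman"], check=True)
print(f"Run {VENV_DIR}/bin/mailman info to verify the install.")
```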
--
Robert Heller -- Cell: 413-658-7953 GV: 978-633-5364
Deepwoods Software -- Custom Software Services
http://www.deepsoft.com/ -- Linux Administration Services
heller(a)deepsoft.com -- Webhosting Services
1 year, 1 month

[MM3-users] Re: How to delete non-users
by Allan Hansen
Hi Mark,
Thank you for looking into it.
We are very strict about memberships because of spam, bots and malicious contributors, and because we don't want anyone to think that our lists are used for spam:
When a user applies, the server sends a message to the moderator.
The moderator communicates with the potential member and accepts or does not accept the application.
At this point, if the user has not been accepted but has tried to send a message to the list, a non-member membership is created.
When he/she logs in to view his/her account, the lists for which he/she holds non-memberships are shown, and the user will think that he/she has been properly subscribed (why else would the lists be listed?). No one notices the column that shows the role as 'nonmember.' So he/she thinks that the subscription request has been accepted, but nothing is working. That's why the 'non-member' record is an issue. I also don't see why non-members are automatically added, filling up the database with junk (at least from our point of view, with all respect).
But our lists don't accept messages from non-members. Such messages are quietly discarded, as most are spam, as mentioned above.
So now the user is neither getting emails from the lists nor able to send messages to the list.
The next step for the user is to complain to me. ☹
I have looked for a template that could be used to warn someone when he/she is added as a non-member, but did not find one. It's also not clear that I'd want one, as most of these non-subscriptions are by spammers and I prefer not to reply to spammers.
I tried your suggestion below, Mark.
Here's my transcript:
**********
Welcome to the GNU Mailman shell
Use commit() to commit changes.
Use abort() to discard changes since the last commit.
Exit with ctrl+D does an implicit commit() but exit() does not.
>>> lm = getUtility(IListManager)
>>> for l in lm:
... for nm in l.nonmembers.members:
... nm.unsubscribe()
...
>>>
**********
I did this twice, once with commit() and once typing ctrl/d (ctrl/D just gave me a beep).
Calling commit() did not return to the '...' prompt as in your example, but showed the >>> prompt, so I tried ctrl/D (beep) and then ctrl/d (exit).
I then went to the Postorius page for one of the lists and found that all the non-members were still present.
Yours,
Allan
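For comparison, here is a variant of the quoted loop that snapshots each roster before unsubscribing (on the chance that unsubscribing while iterating the live roster interferes with the loop), counts what it removed, and then commits explicitly. A sketch only, untested, using the same names as Mark's snippet:
```
$ mailman shell
Welcome to the GNU Mailman shell
Use commit() to commit changes.
>>> lm = getUtility(IListManager)
>>> removed = 0
>>> for l in lm:
...     for nm in list(l.nonmembers.members):   # snapshot the roster first
...         nm.unsubscribe()
...         removed += 1
...
>>> removed    # should match the nonmember counts seen in Postorius
>>> commit()
```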
On 9/16/22, 15:01 , "Mark Sapiro" <mark(a)msapiro.net> wrote:
On 9/16/22 11:19, Allan Hansen wrote:
> Hi all,
>
> This is a bit of an emergency:
>
> I am getting a bunch of complaints from potential list members of my lists that they can't subscribe and they don't get messages. Apparently, the issue is that they are non-members. I have never created any non-members but looking at the docs it seems that if someone sends a message to the list, they automatically become non-members.
> For individuals I have been able to delete their non-membership and they then could subscribe properly.
I don't understand. The fact that an address is a non-member of a list
should not impact subscribing that address as a member.
> I have looked at some of my most popular lists and they have hundreds of non-members! It will take me an awful amount of time to remove each one manually, and not doing it - handling each as they complain - is also a lot of waste of time and cause of frustration for all involved.
What is actually happening when the non-member attempts to subscribe as
a member? What do they do and what is the result?
> I have tried scripting it with delmember, but it does not take the '-r nonmember' option ('findmember' does!).
>
> Can anyone help me find out how to
> a. delete all non-members of all my lists
If you have access to `mailman shell` you could do:
```
$ mailman shell
Welcome to the GNU Mailman shell
Use commit() to commit changes.
Use abort() to discard changes since the last commit.
Exit with ctrl+D does an implicit commit() but exit() does not.
>>> lm = getUtility(IListManager)
>>> for l in lm:
... for nm in l.nonmembers.members:
... nm.unsubscribe()
...
>>> commit()
```
> b. prevent MM3 from creating new non-members in the future (so I don't have to keep removing them)
Nonmembers are an integral part of Mailman 3's architecture. The basic
idea is that a nonmember has a moderation action, and setting a nonmember's
moderation action replaces MM 2.1's adding that address to one of the
*_these_nonmembers attributes (the legacy *_these_nonmembers attributes still
exist, but only to support regexps).
It would require extensive modification to not create nonmembers.
However, I still don't understand why the presence of a nonmember record
is an issue.
--
Mark Sapiro <mark(a)msapiro.net> The highway is for gamblers,
San Francisco Bay Area, California better use your sense - B. Dylan
2 years, 11 months

[MM3-users] Re: Held messages not delivered after approval
by Krinetzki, Stephan
Hi Mark, Hi Stephen,
I have now updated my configuration a bit. The logging is now the default:
[logging.archiver] datefmt: %b %d %H:%M:%S %Y
[logging.archiver] format: %(asctime)s (%(process)d) %(message)s
[logging.archiver] level: info
[logging.archiver] path: mailman.log
[logging.archiver] propagate: no
[logging.bounce] datefmt: %b %d %H:%M:%S %Y
[logging.bounce] format: %(asctime)s (%(process)d) %(message)s
[logging.bounce] level: info
[logging.bounce] path: bounce.log
[logging.bounce] propagate: no
[logging.config] datefmt: %b %d %H:%M:%S %Y
[logging.config] format: %(asctime)s (%(process)d) %(message)s
[logging.config] level: info
[logging.config] path: mailman.log
[logging.config] propagate: no
[logging.database] datefmt: %b %d %H:%M:%S %Y
[logging.database] format: %(asctime)s (%(process)d) %(message)s
[logging.database] level: warn
[logging.database] path: mailman.log
[logging.database] propagate: no
[logging.debug] datefmt: %b %d %H:%M:%S %Y
[logging.debug] format: %(asctime)s (%(process)d) %(message)s
[logging.debug] level: info
[logging.debug] path: debug.log
[logging.debug] propagate: no
[logging.error] datefmt: %b %d %H:%M:%S %Y
[logging.error] format: %(asctime)s (%(process)d) %(message)s
[logging.error] level: info
[logging.error] path: mailman.log
[logging.error] propagate: no
[logging.fromusenet] datefmt: %b %d %H:%M:%S %Y
[logging.fromusenet] format: %(asctime)s (%(process)d) %(message)s
[logging.fromusenet] level: info
[logging.fromusenet] path: mailman.log
[logging.fromusenet] propagate: no
[logging.gunicorn] datefmt: %b %d %H:%M:%S %Y
[logging.gunicorn] format: %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
[logging.gunicorn] level: info
[logging.gunicorn] path: mailman.log
[logging.gunicorn] propagate: no
[logging.http] datefmt: %b %d %H:%M:%S %Y
[logging.http] format: %(asctime)s (%(process)d) %(message)s
[logging.http] level: info
[logging.http] path: mailman.log
[logging.http] propagate: no
[logging.locks] datefmt: %b %d %H:%M:%S %Y
[logging.locks] format: %(asctime)s (%(process)d) %(message)s
[logging.locks] level: info
[logging.locks] path: mailman.log
[logging.locks] propagate: no
[logging.mischief] datefmt: %b %d %H:%M:%S %Y
[logging.mischief] format: %(asctime)s (%(process)d) %(message)s
[logging.mischief] level: info
[logging.mischief] path: mailman.log
[logging.mischief] propagate: no
[logging.plugins] datefmt: %b %d %H:%M:%S %Y
[logging.plugins] format: %(asctime)s (%(process)d) %(message)s
[logging.plugins] level: info
[logging.plugins] path: plugins.log
[logging.plugins] propagate: no
[logging.root] datefmt: %b %d %H:%M:%S %Y
[logging.root] format: %(asctime)s (%(process)d) %(message)s
[logging.root] level: info
[logging.root] path: mailman.log
[logging.root] propagate: no
[logging.runner] datefmt: %b %d %H:%M:%S %Y
[logging.runner] format: %(asctime)s (%(process)d) %(message)s
[logging.runner] level: info
[logging.runner] path: mailman.log
[logging.runner] propagate: no
[logging.smtp] datefmt: %b %d %H:%M:%S %Y
[logging.smtp] every: $msgid smtp to $listname for $recip recips, completed in $time seconds
[logging.smtp] failure: $msgid delivery to $recip failed with code $smtpcode, $smtpmsg
[logging.smtp] format: %(asctime)s (%(process)d) %(message)s
[logging.smtp] level: info
[logging.smtp] path: smtp.log
[logging.smtp] propagate: no
[logging.smtp] refused: $msgid post to $listname from $sender, $size bytes, $refused failures
[logging.smtp] success: $msgid post to $listname from $sender, $size bytes
[logging.subscribe] datefmt: %b %d %H:%M:%S %Y
[logging.subscribe] format: %(asctime)s (%(process)d) %(message)s
[logging.subscribe] level: info
[logging.subscribe] path: mailman.log
[logging.subscribe] propagate: no
[logging.task] datefmt: %b %d %H:%M:%S %Y
[logging.task] format: %(asctime)s (%(process)d) %(message)s
[logging.task] level: info
[logging.task] path: mailman.log
[logging.task] propagate: no
[logging.vette] datefmt: %b %d %H:%M:%S %Y
[logging.vette] format: %(asctime)s (%(process)d) %(message)s
[logging.vette] level: info
[logging.vette] path: mailman.log
[logging.vette] propagate: no
Further, I edited the logrotate:
/var/log/mailman/mailman-logs/*.log {
missingok
daily
compress
delaycompress
nomail
notifempty
rotate 14
dateext
su mailman mailman
olddir /var/log/mailman/mailman-logs/oldlogs
postrotate
/bin/kill -HUP `cat /opt/mailman/var/master.pid 2>/dev/null` 2>/dev/null || true
# Don't run "mailman3 reopen" with SELinux on here in the logrotate
# context, it will be blocked
/opt/mailman/mailman-venv/bin/mailman reopen >/dev/null 2>&1 || true
endscript
}
It is now more like the Fedora one and seems to be better.
Now in the mailman.log I see the HOLD messages and the approved messages. The smtp.log logs just the incoming mail, which seems fine.
So, here is a complete trace:
smtp.log:
Aug 05 10:18:40 2025 (217984) Available AUTH mechanisms: LOGIN(builtin) PLAIN(builtin)
Aug 05 10:18:40 2025 (217984) Peer: ('127.0.0.1', 57074)
Aug 05 10:18:40 2025 (217984) ('127.0.0.1', 57074) handling connection
Aug 05 10:18:40 2025 (217984) ('127.0.0.1', 57074) >> b'LHLO lists.rwth-aachen.de'
Aug 05 10:18:40 2025 (217984) ('127.0.0.1', 57074) >> b'MAIL FROM:<SENDER> SIZE=15187'
Aug 05 10:18:40 2025 (217984) ('127.0.0.1', 57074) sender: SENDER
Aug 05 10:18:40 2025 (217984) ('127.0.0.1', 57074) >> b'RCPT TO:<stephansmodliste(a)lists.example.com>'
Aug 05 10:18:40 2025 (217984) ('127.0.0.1', 57074) recip: stephansmodliste(a)lists.example.com
Aug 05 10:18:40 2025 (217984) ('127.0.0.1', 57074) >> b'DATA'
Aug 05 10:18:40 2025 (217984) ('127.0.0.1', 57074) >> b'QUIT'
Aug 05 10:18:40 2025 (217984) ('127.0.0.1', 57074) connection lost
Aug 05 10:18:40 2025 (217984) ('127.0.0.1', 57074) Connection lost during _handle_client()
The mailman.log:
Aug 05 10:18:40 2025 (217983) HOLD: stephansmodliste(a)lists.example.com post from SENDER held, message-id=<b93e46d8342f42039c99e1d2d036c711@SENDERDOMAIN>: The message is not from a list member
Aug 05 10:21:04 2025 (218015) held message approved, message-id: <b93e46d8342f42039c99e1d2d036c711@SENDERDOMAIN>
[05/Aug/2025:10:21:04 +0200] "POST /3.1/lists/stephansmodliste(a)lists.example.com/held/211056 HTTP/1.1" 204 0 "-" "GNU Mailman REST client v3.3.5"
[05/Aug/2025:10:21:04 +0200] "GET /3.1/lists/stephansmodliste(a)lists.example.com/held?count=0&page=1 HTTP/1.1" 200 90 "-" "GNU Mailman REST client v3.3.5"
[05/Aug/2025:10:21:04 +0200] "GET /3.1/lists/stephansmodliste(a)lists.example.com/held?count=10&page=1 HTTP/1.1" 200 90 "-" "GNU Mailman REST client v3.3.5"
[05/Aug/2025:10:21:04 +0200] "GET /3.1/lists/stephansmodliste(a)lists.example.com/requests/count?token_owner=moderator HTTP/1.1" 200 73 "-" "GNU Mailman REST client v3.3.5"
[05/Aug/2025:10:21:04 +0200] "GET /3.1/lists/stephansmodliste(a)lists.example.com/held/count HTTP/1.1" 200 73 "-" "GNU Mailman REST client v3.3.5"
Aug 05 10:21:05 2025 (217986) Cannot connect to SMTP server localhost on port 25
The last error appears very often in mailman.log:
Aug 05 09:46:12 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 09:46:13 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 09:46:13 2025 (217988) Cannot connect to SMTP server localhost on port 25
Aug 05 09:46:14 2025 (217987) Cannot connect to SMTP server localhost on port 25
Aug 05 09:46:24 2025 (217987) Cannot connect to SMTP server localhost on port 25
Aug 05 09:50:22 2025 (217988) Cannot connect to SMTP server localhost on port 25
Aug 05 09:50:24 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 09:53:30 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 09:55:01 2025 (217987) Cannot connect to SMTP server localhost on port 25
Aug 05 09:55:01 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 09:55:09 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 09:57:44 2025 (217987) Cannot connect to SMTP server localhost on port 25
Aug 05 09:57:46 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 09:58:50 2025 (217988) Cannot connect to SMTP server localhost on port 25
Aug 05 09:58:52 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 09:58:52 2025 (217987) Cannot connect to SMTP server localhost on port 25
Aug 05 09:58:53 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 09:58:53 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 09:58:53 2025 (217988) Cannot connect to SMTP server localhost on port 25
Aug 05 10:01:18 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 10:02:02 2025 (217987) Cannot connect to SMTP server localhost on port 25
Aug 05 10:07:11 2025 (217987) Cannot connect to SMTP server localhost on port 25
Aug 05 10:10:36 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 10:11:35 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:16:04 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:16:05 2025 (217988) Cannot connect to SMTP server localhost on port 25
Aug 05 10:16:07 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:16:58 2025 (217988) Cannot connect to SMTP server localhost on port 25
Aug 05 10:17:42 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 10:18:30 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:20:13 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 10:21:05 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:28:08 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:28:08 2025 (217988) Cannot connect to SMTP server localhost on port 25
Aug 05 10:28:31 2025 (217987) Cannot connect to SMTP server localhost on port 25
Aug 05 10:28:37 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:28:44 2025 (217988) Cannot connect to SMTP server localhost on port 25
Aug 05 10:28:47 2025 (217988) Cannot connect to SMTP server localhost on port 25
Aug 05 10:28:50 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:28:57 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 10:29:22 2025 (217988) Cannot connect to SMTP server localhost on port 25
Aug 05 10:29:40 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 10:29:41 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:30:37 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:30:37 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 10:30:50 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:31:15 2025 (217987) Cannot connect to SMTP server localhost on port 25
Aug 05 10:33:42 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:33:53 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:34:27 2025 (217985) Cannot connect to SMTP server localhost on port 25
Aug 05 10:34:28 2025 (217986) Cannot connect to SMTP server localhost on port 25
Aug 05 10:38:25 2025 (230250) Cannot connect to SMTP server lists.example.com on port 25
Aug 05 10:38:25 2025 (230247) Cannot connect to SMTP server lists.example.com on port 25
Even after changing the smtp_host in mailman.cfg, this error appears randomly.
Sadly, the mail does not get delivered.
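The "Cannot connect to SMTP server" lines can be reproduced outside Mailman with a few lines of smtplib, which at least separates a Postfix/firewall problem from a Mailman configuration problem. A sketch, using the two hosts that appear in the log above:
```python
#!/usr/bin/env python3
"""Try the SMTP connections Mailman's outgoing runner keeps failing to make."""
import smtplib

for host in ("localhost", "lists.example.com"):   # hosts taken from the log above
    try:
        with smtplib.SMTP(host, 25, timeout=10) as smtp:
            code, banner = smtp.ehlo()
            print(f"{host}:25 reachable, EHLO -> {code} {banner.decode(errors='replace')}")
    except (OSError, smtplib.SMTPException) as exc:
        print(f"{host}:25 NOT reachable: {exc}")
```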
--
Stephan Krinetzki
IT Center
Group: Anwendungsbetrieb und Cloud
Department: Systeme und Betrieb
RWTH Aachen University
Seffenter Weg 23
52074 Aachen
Tel: +49 241 80-24866
Fax: +49 241 80-22134
krinetzki(a)itc.rwth-aachen.de
www.itc.rwth-aachen.de
Social media channels of the IT Center:
https://blog.rwth-aachen.de/itc/
https://www.facebook.com/itcenterrwth
https://www.linkedin.com/company/itcenterrwth
https://twitter.com/ITCenterRWTH
https://www.youtube.com/channel/UCKKDJJukeRwO0LP-ac8x8rQ
-----Original Message-----
From: Mark Sapiro <mark(a)msapiro.net>
Sent: Tuesday, August 5, 2025 12:51 AM
To: mailman-users(a)mailman3.org
Subject: [MM3-users] Re: Held messages not delivered after approval
On 8/4/25 00:21, Krinetzki, Stephan wrote:
>>
>> And for every one of those shunted messages there should be an exception with traceback logged in mailman.log. Those tracebacks should be helpful.
>
> If there were any. Maybe the "debug" level should be "info". But for which logs?
The standard logging levels from lowest to highest are
debug
info
warning
error
critical
Whatever level is set for a log results in all messages of that level or higher being logged. I.e. if the log's level is debug, all messages for that log of any level should be logged.
For every shunted message, a message like
`SHUNTING: <file name without the .pck extension>` preceded by the exception and traceback is logged to error.log with level error. See
https://gitlab.com/mailman/mailman/-/blob/master/src/mailman/core/runner.py…
> Maybe the restart at night after the logrotate is the issue.
As I said before, blindly restarting Mailman is a bad idea. On servers that I maintain, I always verify that all queues are empty before stopping or restarting Mailman. If necessary, I'll kill the incoming runner and wait for the out queue to empty and then stop mailman. If you want to do this daily, you could automate that, e.g.
```
if queues empty:
    restart Mailman
else:
    when in queue is empty, sigterm incoming runner
    when out queue is empty, stop Mailman
    when Mailman stopped, start Mailman
```
The stop/start is needed because a simple restart at that point won't start the sigtermed incoming runner.
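A minimal sketch of that check, assuming the queue layout of a venv install under /opt/mailman/var and a systemd unit name that may differ on your machine; it only covers the "queues empty" branch and leaves the drain procedure above to the operator:
```python
#!/usr/bin/env python3
"""Only restart Mailman when every queue directory is free of .pck entries."""
import subprocess
from pathlib import Path

QUEUE_DIR = Path("/opt/mailman/var/queue")   # adjust to your var_dir
SERVICE = "mailman3"                         # assumed systemd unit name

def queues_empty() -> bool:
    return not any(f for q in QUEUE_DIR.iterdir() if q.is_dir()
                   for f in q.iterdir() if f.suffix == ".pck")

if queues_empty():
    subprocess.run(["systemctl", "restart", SERVICE], check=True)
else:
    print("queues not empty; follow the drain procedure quoted above instead")
```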
--
Mark Sapiro <mark(a)msapiro.net> The highway is for gamblers,
San Francisco Bay Area, California better use your sense - B. Dylan
2 weeks, 5 days