
[MM3-users] Re: Migration from old server to new server failed
by Helio Loureiro
Hi,
Any suggestions here?
Best Regards,
Helio Loureiro
https://helio.loureiro.eng.br
https://github.com/helioloureiro
https://mastodon.social/@helioloureiro
On Mon, 19 Feb 2024 at 14:44, Helio Loureiro <helio(a)loureiro.eng.br> wrote:
> Hi,
>
> I repeated the steps today.
>
> First I waited out the errors that were raised because mailman3-web wasn't
> running.
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File
> "/local/mailman/venv/lib/python3.10/site-packages/urllib3/connectionpool.py",
> line 793, in urlopen
> response = self._make_request(
> File
> "/local/mailman/venv/lib/python3.10/site-packages/urllib3/connectionpool.py",
> line 496, in _make_request
> conn.request(
> File
> "/local/mailman/venv/lib/python3.10/site-packages/urllib3/connection.py",
> line 400, in request
> self.endheaders()
> File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders
> self._send_output(message_body, encode_chunked=encode_chunked)
> File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output
> self.send(msg)
> File "/usr/lib/python3.10/http/client.py", line 976, in send
> self.connect()
> File
> "/local/mailman/venv/lib/python3.10/site-packages/urllib3/connection.py",
> line 238, in connect
> self.sock = self._new_conn()
> File
> "/local/mailman/venv/lib/python3.10/site-packages/urllib3/connection.py",
> line 213, in _new_conn
> raise NewConnectionError(
> urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection
> object at 0x7f0fdc2f72b0>: Failed to establish a new connection: [Errno
> 111] Connection refused
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File
> "/local/mailman/venv/lib/python3.10/site-packages/requests/adapters.py",
> line 486, in send
> resp = conn.urlopen(
> File
> "/local/mailman/venv/lib/python3.10/site-packages/urllib3/connectionpool.py",
> line 847, in urlopen
> retries = retries.increment(
> File
> "/local/mailman/venv/lib/python3.10/site-packages/urllib3/util/retry.py",
> line 515, in increment
> raise MaxRetryError(_pool, url, reason) from reason # type:
> ignore[arg-type]
> urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1',
> port=8000): Max retries exceeded with url: /archives/api/mailman/archive
> (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at
> 0x7f0fdc2f72b0>: Failed to establish a new connection: [Errno 111]
> Connection refused'))
>
> This seemed to go on forever, so eventually I gave up and moved on,
> and changed to using the uwsgi socket instead of the direct port.
>
> (venv) mailman@new-mailman3 ~ (v3.1.1) [0|SIGINT]> mailman-web migrate
> System check identified some issues:
>
> WARNINGS:
> account.EmailAddress: (models.W036) MariaDB does not support unique
> constraints with conditions.
> HINT: A constraint won't be created. Silence this warning if you
> don't care about it.
> account.EmailAddress: (models.W043) MariaDB does not support indexes on
> expressions.
> HINT: An index won't be created. Silence this warning if you don't
> care about it.
> Operations to perform:
> Apply all migrations: account, admin, auth, contenttypes,
> django_mailman3, django_q, hyperkitty, postorius, sessions, sites,
> socialaccount
> Running migrations:
> Applying account.0003_alter_emailaddress_create_unique_verified_email...
> OK
> Applying account.0004_alter_emailaddress_drop_unique_email... OK
> Applying account.0005_emailaddress_idx_upper_email... OK
> Applying admin.0003_logentry_add_action_flag_choices... OK
> Applying auth.0009_alter_user_last_name_max_length... OK
> Applying auth.0010_alter_group_name_max_length... OK
> Applying auth.0011_update_proxy_permissions... OK
> Applying auth.0012_alter_user_first_name_max_length... OK
> Applying django_mailman3.0003_sessions... OK
> Applying django_q.0010_auto_20200610_0856... OK
> Applying django_q.0011_auto_20200628_1055... OK
> Applying django_q.0012_auto_20200702_1608... OK
> Applying django_q.0013_task_attempt_count... OK
> Applying django_q.0014_schedule_cluster... OK
> Applying hyperkitty.0016_auto_20180309_0056... OK
> Applying hyperkitty.0017_file_attachments... OK
> Applying hyperkitty.0018_threadcategory_color... OK
> Applying hyperkitty.0019_auto_20190127_null_description... OK
> Applying hyperkitty.0020_auto_20190907_1927... OK
> Applying hyperkitty.0021_add_owners_mods...Traceback (most recent call
> last):
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/utils.py",
> line 87, in _execute
> return self.cursor.execute(sql)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/mysql/base.py",
> line 75, in execute
> return self.cursor.execute(query, args)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
> 179, in execute
> res = self._query(mogrified_query)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
> 330, in _query
> db.query(q)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/connections.py",
> line 261, in query
> _mysql.connection.query(self, query)
> MySQLdb.OperationalError: (1050, "Table
> 'hyperkitty_mailinglist_moderators' already exists")
>
> Then I dropped table hyperkitty_mailinglist_moderators and ran the migration again.
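The 1050 error above is MariaDB's generic "table already exists" failure: the migration tries to CREATE a table left over from an earlier attempt. A minimal sketch of the same failure mode and remedy, using stdlib SQLite as a stand-in for MariaDB (only the table name is taken from the log; the schema is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Simulate a table left behind by an earlier, partially-applied migration.
con.execute("CREATE TABLE hyperkitty_mailinglist_moderators (id INTEGER)")

# A migration that re-creates the same table fails, the same class of
# failure as MySQL/MariaDB error 1050.
try:
    con.execute("CREATE TABLE hyperkitty_mailinglist_moderators (id INTEGER)")
except sqlite3.OperationalError as exc:
    print(exc)  # table hyperkitty_mailinglist_moderators already exists

# Dropping the leftover table lets the migration create it cleanly.
con.execute("DROP TABLE IF EXISTS hyperkitty_mailinglist_moderators")
con.execute("CREATE TABLE hyperkitty_mailinglist_moderators (id INTEGER)")
```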
>
> (venv) mailman@new-mailman3 ~ (v3.1.1)> mailman-web migrate
> System check identified some issues:
>
> WARNINGS:
> account.EmailAddress: (models.W036) MariaDB does not support unique
> constraints with conditions.
> HINT: A constraint won't be created. Silence this warning if you
> don't care about it.
> account.EmailAddress: (models.W043) MariaDB does not support indexes on
> expressions.
> HINT: An index won't be created. Silence this warning if you don't
> care about it.
> Operations to perform:
> Apply all migrations: account, admin, auth, contenttypes,
> django_mailman3, django_q, hyperkitty, postorius, sessions, sites,
> socialaccount
> Running migrations:
> Applying hyperkitty.0021_add_owners_mods...Traceback (most recent call
> last):
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/utils.py",
> line 87, in _execute
> return self.cursor.execute(sql)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/mysql/base.py",
> line 75, in execute
> return self.cursor.execute(query, args)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
> 179, in execute
> res = self._query(mogrified_query)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
> 330, in _query
> db.query(q)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/connections.py",
> line 261, in query
> _mysql.connection.query(self, query)
> MySQLdb.OperationalError: (1050, "Table 'hyperkitty_mailinglist_owners'
> already exists")
>
> After dropping table hyperkitty_mailinglist_owners.
>
> (venv) mailman@new-mailman3 ~ (v3.1.1) [0|1]> mailman-web migrate
> System check identified some issues:
>
> WARNINGS:
> account.EmailAddress: (models.W036) MariaDB does not support unique
> constraints with conditions.
> HINT: A constraint won't be created. Silence this warning if you
> don't care about it.
> account.EmailAddress: (models.W043) MariaDB does not support indexes on
> expressions.
> HINT: An index won't be created. Silence this warning if you don't
> care about it.
> Operations to perform:
> Apply all migrations: account, admin, auth, contenttypes,
> django_mailman3, django_q, hyperkitty, postorius, sessions, sites,
> socialaccount
> Running migrations:
> Applying hyperkitty.0021_add_owners_mods...Traceback (most recent call
> last):
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/utils.py",
> line 87, in _execute
> return self.cursor.execute(sql)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/mysql/base.py",
> line 75, in execute
> return self.cursor.execute(query, args)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
> 179, in execute
> res = self._query(mogrified_query)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
> 330, in _query
> db.query(q)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/connections.py",
> line 261, in query
> _mysql.connection.query(self, query)
> MySQLdb.OperationalError: (1050, "Table
> 'hyperkitty_mailinglist_moderators' already exists")
>
> After dropping table hyperkitty_mailinglist_moderators.
>
> (venv) mailman@new-mailman3 ~ (v3.1.1) [0|1]> mailman-web migrate
> System check identified some issues:
>
> WARNINGS:
> account.EmailAddress: (models.W036) MariaDB does not support unique
> constraints with conditions.
> HINT: A constraint won't be created. Silence this warning if you
> don't care about it.
> account.EmailAddress: (models.W043) MariaDB does not support indexes on
> expressions.
> HINT: An index won't be created. Silence this warning if you don't
> care about it.
> Operations to perform:
> Apply all migrations: account, admin, auth, contenttypes,
> django_mailman3, django_q, hyperkitty, postorius, sessions, sites,
> socialaccount
> Running migrations:
> Applying hyperkitty.0021_add_owners_mods... OK
> Applying hyperkitty.0022_mailinglist_archive_rendering_mode... OK
> Applying hyperkitty.0023_alter_mailinglist_name... OK
> Applying postorius.0004_create_email_template...Traceback (most recent
> call last):
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/utils.py",
> line 87, in _execute
> return self.cursor.execute(sql)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/mysql/base.py",
> line 75, in execute
> return self.cursor.execute(query, args)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
> 179, in execute
> res = self._query(mogrified_query)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
> 330, in _query
> db.query(q)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/connections.py",
> line 261, in query
> _mysql.connection.query(self, query)
> MySQLdb.OperationalError: (1050, "Table 'postorius_emailtemplate' already
> exists")
>
> After dropping table postorius_emailtemplate.
>
> (venv) mailman@new-mailman3 ~ (v3.1.1) [0|1]> mailman-web migrate
> System check identified some issues:
>
> WARNINGS:
> account.EmailAddress: (models.W036) MariaDB does not support unique
> constraints with conditions.
> HINT: A constraint won't be created. Silence this warning if you
> don't care about it.
> account.EmailAddress: (models.W043) MariaDB does not support indexes on
> expressions.
> HINT: An index won't be created. Silence this warning if you don't
> care about it.
> Operations to perform:
> Apply all migrations: account, admin, auth, contenttypes,
> django_mailman3, django_q, hyperkitty, postorius, sessions, sites,
> socialaccount
> Running migrations:
> Applying postorius.0004_create_email_template... OK
> Applying postorius.0005_auto_20180707_1107... OK
> Applying postorius.0006_auto_20180711_1359... OK
> Applying postorius.0007_auto_20180712_0536... OK
> Applying postorius.0008_auto_20190310_0717... OK
> Applying postorius.0009_auto_20190508_1604... OK
> Applying postorius.0010_auto_20190821_0621... OK
> Applying postorius.0011_auto_20191109_1219... OK
> Applying postorius.0012_auto_20200420_2136... OK
> Applying postorius.0013_auto_20201116_0058... OK
> Applying postorius.0014_auto_20210329_2248... OK
> Applying postorius.0015_auto_20210619_0509... OK
> Applying postorius.0016_auto_20210810_2157... OK
> Applying postorius.0017_alter_emailtemplate_language... OK
> Applying postorius.0018_alter_emailtemplate_language... OK
> Applying socialaccount.0004_app_provider_id_settings... OK
> Applying socialaccount.0005_socialtoken_nullable_app... OK
> Applying socialaccount.0006_alter_socialaccount_extra_data... OK
>
> At this point the errors about not being able to reach the web part remained,
> so I started mailman3-web.
>
> Then I saw these errors in var/log/mailman.log:
>
> Feb 19 15:35:14 2024 (543890) Traceback (most recent call last):
> File
> "/local/mailman/venv/lib/python3.10/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>404 Not Found</title>
> </head><body>
> <h1>Not Found</h1>
> <p>The requested URL was not found on this server.</p>
> <hr>
> <address>Apache/2.4.52 (Ubuntu) Server at 127.0.0.1 Port 80</address>
> </body></html>
>
> Feb 19 15:35:14 2024 (543890) HyperKitty failure on
> http://127.0.0.1/archives/api/mailman/archive: <!DOCTYPE HTML PUBLIC
> "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>404 Not Found</title>
> </head><body>
> <h1>Not Found</h1>
> <p>The requested URL was not found on this server.</p>
> <hr>
> <address>Apache/2.4.52 (Ubuntu) Server at 127.0.0.1 Port 80</address>
> </body></html>
> (404)
> Feb 19 15:35:14 2024 (543890) Could not archive the message with id <
> ff1707bb-d105-4f37-b3f8-9766b3563127(a)SESAMR604.domain.com>
> Feb 19 15:35:14 2024 (543890) archiving failed, re-queuing (mailing-list
> mylist.new-mailman.domain.com, message <
> ff1707bb-d105-4f37-b3f8-9766b3563127(a)machine.domain.com>)
> Feb 19 15:35:14 2024 (543890) Exception in the HyperKitty archiver:
> <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>404 Not Found</title>
> </head><body>
> <h1>Not Found</h1>
> <p>The requested URL was not found on this server.</p>
> <hr>
> <address>Apache/2.4.52 (Ubuntu) Server at 127.0.0.1 Port 80</address>
> </body></html>
>
> And I can't get access to the hyperkitty at all.
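One plausible reading of this 404, assuming mailman-hyperkitty's base_url is set to http://127.0.0.1/ (that config file isn't shown in the thread, so this is an assumption): the archiver joins base_url with archives/api/mailman/archive, producing a path that the Apache config quoted later in this message only proxies under /mailman3 and /mailman, so Apache itself answers the request with 404. A small sketch of that URL arithmetic:

```python
from urllib.parse import urljoin, urlsplit

# Hypothetical base_url from mailman-hyperkitty.cfg (not shown in the thread).
base_url = "http://127.0.0.1/"
archive_url = urljoin(base_url, "archives/api/mailman/archive")

# Prefixes the Apache config quoted below actually proxies to uwsgi.
proxied = ("/mailman3", "/mailman")
path = urlsplit(archive_url).path
print(path, any(path.startswith(p) for p in proxied))
```

If that is the cause, pointing base_url at a proxied prefix (e.g. http://127.0.0.1/mailman3/) would be the matching fix, but again, that is an assumption.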
>
> In var/logs/uwsgi-error.log:
>
> --- Logging error ---
> Traceback (most recent call last):
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/cluster.py",
> line 356, in pusher
> task = SignedPackage.loads(task[1])
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/signing.py",
> line 25, in loads
> return signing.loads(
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/core_signing.py",
> line 35, in loads
> base64d = force_bytes(TimestampSigner(key, salt=salt).unsign(s,
> max_age=max_age))
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/core_signing.py",
> line 70, in unsign
> result = super(TimestampSigner, self).unsign(value)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/core_signing.py",
> line 55, in unsign
> raise BadSignature('Signature "%s" does not match' % sig)
> django.core.signing.BadSignature: Signature "48do9gGvueBxakX_a4uNmSwD7cs"
> does not match
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "/usr/lib/python3.10/logging/__init__.py", line 1100, in emit
> msg = self.format(record)
> File "/usr/lib/python3.10/logging/__init__.py", line 943, in format
> return fmt.format(record)
> File "/usr/lib/python3.10/logging/__init__.py", line 678, in format
> record.message = record.getMessage()
> File "/usr/lib/python3.10/logging/__init__.py", line 368, in getMessage
> msg = msg % self.args
> TypeError: not all arguments converted during string formatting
> Call stack:
> File "/local/mailman/venv/bin/mailman-web", line 8, in <module>
> sys.exit(main())
> File
> "/local/mailman/venv/lib/python3.10/site-packages/mailman_web/manage.py",
> line 90, in main
> execute_from_command_line(sys.argv)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/core/management/__init__.py",
> line 446, in execute_from_command_line
> utility.execute()
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/core/management/__init__.py",
> line 440, in execute
> self.fetch_command(subcommand).run_from_argv(self.argv)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/core/management/base.py",
> line 402, in run_from_argv
> self.execute(*args, **cmd_options)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django/core/management/base.py",
> line 448, in execute
> output = self.handle(*args, **options)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/management/commands/qcluster.py",
> line 22, in handle
> q.start()
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/cluster.py",
> line 78, in start
> self.sentinel.start()
> File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start
> self._popen = self._Popen(self)
> File "/usr/lib/python3.10/multiprocessing/context.py", line 224, in
> _Popen
> return _default_context.get_context().Process._Popen(process_obj)
> File "/usr/lib/python3.10/multiprocessing/context.py", line 281, in
> _Popen
> return Popen(process_obj)
> File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in
> __init__
> self._launch(process_obj)
> File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 71, in
> _launch
> code = process_obj._bootstrap(parent_sentinel=child_r)
> File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in
> _bootstrap
> self.run()
> File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
> self._target(*self._args, **self._kwargs)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/cluster.py",
> line 168, in __init__
> self.start()
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/cluster.py",
> line 172, in start
> self.spawn_cluster()
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/cluster.py",
> line 248, in spawn_cluster
> self.pusher = self.spawn_pusher()
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/cluster.py",
> line 201, in spawn_pusher
> return self.spawn_process(pusher, self.task_queue, self.event_out,
> self.broker)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/cluster.py",
> line 197, in spawn_process
> p.start()
> File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start
> self._popen = self._Popen(self)
> File "/usr/lib/python3.10/multiprocessing/context.py", line 224, in
> _Popen
> return _default_context.get_context().Process._Popen(process_obj)
> File "/usr/lib/python3.10/multiprocessing/context.py", line 281, in
> _Popen
> return Popen(process_obj)
> File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in
> __init__
> self._launch(process_obj)
> File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 71, in
> _launch
> code = process_obj._bootstrap(parent_sentinel=child_r)
> File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in
> _bootstrap
> self.run()
> File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
> self._target(*self._args, **self._kwargs)
> File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/cluster.py",
> line 358, in pusher
> logger.error(e, traceback.format_exc())
> Message: BadSignature('Signature "48do9gGvueBxakX_a4uNmSwD7cs" does not
> match')
> Arguments: ('Traceback (most recent call last):\n File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/cluster.py",
> line 356, in pusher\n task = SignedPackage.loads(task[1])\n File
> "/local/mailman/venv/lib/python3.10/sit
> e-packages/django_q/signing.py", line 25, in loads\n return
> signing.loads(\n File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/core_signing.py",
> line 35, in loads\n base64d = force_bytes(TimestampSigner(key, salt=sa
> lt).unsign(s, max_age=max_age))\n File
> "/local/mailman/venv/lib/python3.10/site-packages/django_q/core_signing.py",
> line 70, in unsign\n result = super(TimestampSigner,
> self).unsign(value)\n File "/local/mailman/venv/lib/python3.10
> /site-packages/django_q/core_signing.py", line 55, in unsign\n raise
> BadSignature(\'Signature "%s" does not match\' %
> sig)\ndjango.core.signing.BadSignature: Signature
> "48do9gGvueBxakX_a4uNmSwD7cs" does not match\n',)
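django_q signs queued task payloads with Django's SECRET_KEY, so a BadSignature while unpacking usually means the task was enqueued under a different SECRET_KEY than the one the worker now uses, e.g. stale tasks left in the broker from the old server. A simplified stand-in for that mechanism, using stdlib HMAC rather than Django's actual TimestampSigner:

```python
import base64
import hashlib
import hmac

def sign(secret_key: bytes, payload: bytes) -> str:
    # Simplified stand-in for Django's signer: HMAC over the payload.
    digest = hmac.new(secret_key, payload, hashlib.sha1).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

payload = b"queued task payload"
enqueued_sig = sign(b"old-SECRET_KEY", payload)  # signed by the old deployment
verified_sig = sign(b"new-SECRET_KEY", payload)  # checked by the new one

# Different keys -> different signatures -> BadSignature when unpacking.
print(enqueued_sig == verified_sig)  # False
```

If that is what happened here, purging the stale django_q queue or keeping SECRET_KEY identical across the old and new servers would be the corresponding fix (an assumption, since the settings files aren't shown).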
>
> I'm using uwsgi via Apache:
>
> (venv) mailman@new-mailman3 ~ (v3.1.1)> more
> /etc/apache2/conf-enabled/mailman3.conf
> Alias /mailman3/favicon.ico /local/mailman/web/static/favicon.ico
> Alias /mailman/favicon.ico /local/mailman/web/static/favicon.ico
> Alias /favicon.ico /local/mailman/web/static/favicon.ico
> Alias /mailman3/static /local/mailman/web/static
> Alias /mailman/static /local/mailman/web/static
> Alias /static /local/mailman/web/static
>
> <Directory "/local/mailman/web/static">
> Require all granted
> </Directory>
>
> <IfModule mod_proxy_uwsgi.c>
> ProxyPass /mailman3/favicon.ico !
> ProxyPass /mailman/favicon.ico !
> ProxyPass /favicon.ico !
> ProxyPass /mailman3/static !
> ProxyPass /mailman/static !
> ProxyPass /static !
> ProxyPass /mailman3
> unix:/local/mailman/var/uwsgi.sock|uwsgi://localhost/
> ProxyPass /mailman
> unix:/local/mailman/var/uwsgi.sock|uwsgi://localhost/
> #ProxyPass /mailman3 http://localhost:8000/ timeout=180
> #ProxyPass /mailman http://localhost:8000/ timeout=180
> #ProxyPass / http://localhost:8000/ timeout=180
> </IfModule>
>
> Best Regards,
> Helio Loureiro
> https://helio.loureiro.eng.br
> https://github.com/helioloureiro
> https://mastodon.social/@helioloureiro
>
>
> On Fri, 16 Feb 2024 at 22:39, Mark Sapiro <mark(a)msapiro.net> wrote:
>
>> On 2/16/24 07:12, Helio Loureiro wrote:
>>
>> > - start mailman3 and mailman3-web
>> > - run mailman3-web migrate
>>
>> You should run mailman3-web migrate when mailman3-web is not running.
>>
>> > django.db.utils.OperationalError: (1050, "Table
>> > 'hyperkitty_mailinglist_moderators' already exists")
>> >
>> > I tried to drop the table, but then it complained about another one. I
>> > dropped all the complained-about tables and as a result hyperkitty never
>> > started. So I went back to see why the first error happened.
>>
>> The errors occurred because those tables already existed in your
>> database, so they couldn't be created by the migrations. Dropping the
>> tables was the correct thing to do.
>>
>> Did you ultimately successfully run all the migrations?
>>
>> When you say `hyperkitty never started`, what exactly does that mean?
>> What happens when you try to access the archives? Can you access
>> postorius? Are any errors logged in mailman3-web's log?
>>
>> --
>> Mark Sapiro <mark(a)msapiro.net> The highway is for gamblers,
>> San Francisco Bay Area, California better use your sense - B. Dylan
>>
>> _______________________________________________
>> Mailman-users mailing list -- mailman-users(a)mailman3.org
>> To unsubscribe send an email to mailman-users-leave(a)mailman3.org
>> https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
>> Archived at:
>> https://lists.mailman3.org/archives/list/mailman-users@mailman3.org/message…
>>
>> This message sent to helio(a)loureiro.eng.br
>>
>

[MM3-users] RFC-2369 List-Unsubscribe header missing
by paul.sparks@us.idemia.com
Thanks to Mark Sapiro and others who have answered questions on this list; it has made my job easier. I'm hoping someone can help with another issue I'm seeing.
I'm setting up a new Mailman 3 server and doing some test posts to a list. This server is intended only as an announcement-list server, with a limited set of members allowed to post. The resulting emails coming from the list do not include the List-Unsubscribe header, although other "List-XXX" headers are included. I'm trying to figure out why the header is missing; it seems clear from the code that the header should be included.
Here is partial text of the email source as received from the list. I've tried viewing it in Outlook.com and Thunderbird, and even captured the email text at postfix prior to sending, to look for the missing header. My domain has been replaced with "example.com" throughout. This is from the postfix capture:
________
...
To: "test1(a)example.com"
<test1(a)example.com>
Thread-Topic: Post 8 test
Thread-Index: AQHb8oBBzPBOYgrDxUOIiAxhvUzthA==
Date: Fri, 11 Jul 2025 16:24:38 +0000
Accept-Language: en-US
Content-Language: en-US
msip_labels:
authentication-results: dkim=none (message not signed)
header.d=none;dmarc=none action=none header.from=example.com;
MIME-Version: 1.0
Message-ID-Hash: FWHZCRE4EA6IFP4BZTPIF23DKAKVRMDT
X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; loop;
banned-address; emergency; member-moderation; nonmember-moderation;
administrivia; implicit-dest; max-recipients; max-size; news-moderation;
no-subject; digests; suspicious-header
X-Content-Filtered-By: Mailman/MimeDel 3.3.10
From: "Somewhere1, SW" <test1(a)example.com>
X-Mailman-Version: 3.3.10
Precedence: list
Reply-To: no-reply(a)example.com
Subject: [Test1] Post 8 test
List-Id: "Somewhere1, SW" <test1.example.com>
List-Help:
<mailto:test1-request@example.com?subject=help>
List-Owner: <mailto:test1-owner@example.com>
List-Post: NO
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-Id: <20250711163343.017C31DBB79(a)example.com>
Body text
...
__________
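The missing header can be confirmed mechanically by parsing the captured message with Python's stdlib email package; this sketch uses an abridged copy of the headers above:

```python
from email import message_from_string

# Trimmed-down version of the captured message (headers abridged).
raw = """\
Subject: [Test1] Post 8 test
List-Id: "Somewhere1, SW" <test1.example.com>
List-Help: <mailto:test1-request@example.com?subject=help>
List-Owner: <mailto:test1-owner@example.com>
List-Post: NO

Body text
"""
msg = message_from_string(raw)

# Which RFC 2369 headers made it through, and which didn't.
present = sorted(h for h in msg.keys() if h.startswith("List-"))
print(present)
print("List-Unsubscribe" in msg)  # False
```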
Details on my test setup:
# Mailman version.
__version__ = '3.3.10'
OS: Amazon Linux 2023 (RHEL 9 compatible)
MTA: postfix-3.7.2
I am using the following code with mailman shell to configure the list. include_rfc2369_headers is left at the default value of True (verified via mailman shell).
____________
# Description value must be passed as a parameter
def configure_list(ml, description):
    # ml.accept_these_nonmembers = allowed_to_post
    ml.admin_immed_notify = False
    ml.advertised = False
    ml.allow_list_posts = False
    ml.anonymous_list = True
    ml.archive_policy = ArchivePolicy.never
    ml.bounce_info_stale_after = datetime.timedelta(days=90)
    ml.bounce_notify_owner_on_disable = False
    ml.bounce_notify_owner_on_removal = False
    ml.bounce_score_threshold = 3
    ml.bounce_you_are_disabled_warnings = 0
    ml.default_member_action = Action.discard
    ml.default_nonmember_action = Action.discard
    ml.description = description
    ml.digests_enabled = False
    ml.filter_content = True
    ml.first_strip_reply_to = True
    ml.forward_auto_discards = False
    ml.max_days_to_hold = 1
    ml.max_message_size = 0
    ml.nntp_prefix_subject_too = False
    ml.pass_extensions = []
    ml.pass_types = ['text/plain', 'multipart/alternative']
    ml.reply_goes_to_list = ReplyToMunging.explicit_header
    ml.reply_to_address = "no-reply(a)example.com"
    ml.require_explicit_destination = False
    ml.respond_to_post_requests = False
    # Allow select members to post to the list
    for mbr in ml.members.members:
        if str(mbr.address) in allowed_to_post:
            mbr.moderation_action = Action.defer
        print('end', mbr.address, mbr.moderation_action)
_______________
/etc/mailman3/mailman.cfg file:
# /etc/mailman3/mailman.cfg
# These settings override the defaults in the following 2 config files
# /opt/mailman/lib/python3.9/site-packages/mailman/config/schema.cfg
# /opt/mailman/lib/python3.9/site-packages/mailman/config/mailman.cfg
[paths.custom]
var_dir: /data/lib/mailman
pid_file: /data/lib/mailman/pid/master.pid
lock_file: /data/lib/mailman/pid/master.lock
[mailman]
layout: custom
# This address is the "site owner" address. Certain messages which must be
# delivered to a human, but which can't be delivered to a list owner (e.g. a
# bounce from a list owner), will be sent to this address. It should point to
# a human.
site_owner: ml-owner(a)example.com
[database]
class: mailman.database.postgresql.PostgreSQLDatabase
url: postgresql:///mailman
# No archiving enabled
[archiver.prototype]
enable: no
[archiver.hyperkitty]
enable: no
[shell]
history_file: $var_dir/history.py
[mta]
verp_confirmations: yes
verp_personalized_deliveries: yes
verp_delivery_interval: 1
[passwords]
# When Mailman generates them, this is the default length of passwords.
password_length: 15
[logging.debug]
path: debug.log
level: debug
______________
With debug enabled, I see the rfc-2369 handler being run when processing the post to the list.
Jul 11 17:51:32 2025 (444088) [IncomingRunner] starting oneloop
Jul 11 17:51:32 2025 (444088) [IncomingRunner] processing filebase: 1752256291.9136803+bfd305a28151ce750dde5114b413b351fa4d1284
Jul 11 17:51:32 2025 (444088) [IncomingRunner] processing onefile
Jul 11 17:51:32 2025 (444090) [OutgoingRunner] starting oneloop
Jul 11 17:51:32 2025 (444090) [OutgoingRunner] ending oneloop: 0
Jul 11 17:51:32 2025 (444097) [DigestRunner] starting oneloop
Jul 11 17:51:32 2025 (444097) [DigestRunner] ending oneloop: 0
Jul 11 17:51:32 2025 (444088) [IncomingRunner] finishing filebase: 1752256291.9136803+bfd305a28151ce750dde5114b413b351fa4d1284
Jul 11 17:51:32 2025 (444088) [IncomingRunner] doing periodic
Jul 11 17:51:32 2025 (444088) [IncomingRunner] committing transaction
Jul 11 17:51:32 2025 (444088) [IncomingRunner] checking short circuit
Jul 11 17:51:32 2025 (444088) [IncomingRunner] ending oneloop: 1
Jul 11 17:51:32 2025 (444088) [IncomingRunner] starting oneloop
Jul 11 17:51:32 2025 (444088) [IncomingRunner] ending oneloop: 0
Jul 11 17:51:32 2025 (444087) [CommandRunner] starting oneloop
Jul 11 17:51:32 2025 (444087) [CommandRunner] ending oneloop: 0
Jul 11 17:51:32 2025 (444091) [PipelineRunner] starting oneloop
Jul 11 17:51:32 2025 (444091) [PipelineRunner] processing filebase: 1752256292.373131+a372784a3ff7a76a662ed1c62c224a53a8008b8b
Jul 11 17:51:32 2025 (444091) [PipelineRunner] processing onefile
Jul 11 17:51:32 2025 (444096) [VirginRunner] starting oneloop
Jul 11 17:51:32 2025 (444096) [VirginRunner] ending oneloop: 0
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: validate-authenticity
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: mime-delete
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: tagger
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: member-recipients
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: avoid-duplicates
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: cleanse
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: cleanse-dkim
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: cook-headers
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: subject-prefix
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: rfc-2369
Jul 11 17:51:32 2025 (444091) RFC 2369 process
Jul 11 17:51:32 2025 (444091) RFC 2369 process continue 1
Jul 11 17:51:32 2025 (444091) RFC 2369 process headers extend
Jul 11 17:51:32 2025 (444091) RFC 2369 process header for loop
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: to-archive
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: to-digest
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: to-usenet
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: after-delivery
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: acknowledge
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: dmarc
Jul 11 17:51:32 2025 (444091) <20250711175131.B8CEC1DA51B(a)example.com> pipeline default-posting-pipeline processing: to-outgoing
Jul 11 17:51:32 2025 (444091) [PipelineRunner] finishing filebase: 1752256292.373131+a372784a3ff7a76a662ed1c62c224a53a8008b8b
Jul 11 17:51:32 2025 (444091) [PipelineRunner] doing periodic
Jul 11 17:51:32 2025 (444091) [PipelineRunner] committing transaction
Jul 11 17:51:32 2025 (444091) [PipelineRunner] checking short circuit
Jul 11 17:51:32 2025 (444091) [PipelineRunner] ending oneloop: 1
Jul 11 17:51:32 2025 (444091) [PipelineRunner] starting oneloop
Jul 11 17:51:32 2025 (444091) [PipelineRunner] ending oneloop: 0
Jul 11 17:51:32 2025 (444085) [ArchiveRunner] starting oneloop
Jul 11 17:51:32 2025 (444085) [ArchiveRunner] ending oneloop: 0
Jul 11 17:51:33 2025 (444090) [OutgoingRunner] starting oneloop
Jul 11 17:51:33 2025 (444090) [OutgoingRunner] processing filebase: 1752256292.8885148+5e5c4a68eacfead13561df616fe7ca19e1d1d182
Jul 11 17:51:33 2025 (444090) [OutgoingRunner] processing onefile
Jul 11 17:51:33 2025 (444097) [DigestRunner] starting oneloop
Jul 11 17:51:33 2025 (444097) [DigestRunner] ending oneloop: 0
Jul 11 17:51:33 2025 (444090) [outgoing] <function deliver at 0x7f21ed7b1310>: <175225629286.444091.14942604217243398224(a)ip-10-0-0-168.us-gov-west-1.compute.internal>
Jul 11 17:51:33 2025 (444088) [IncomingRunner] starting oneloop
_____________
I tried inserting some log.debug calls into lib/python3.9/site-packages/mailman/handlers/rfc_2369.py to see where headers were being removed and added.
I changed the logger line to: log = logging.getLogger('mailman.debug')
and inserted log.debug calls throughout the process function, plus one at the top level.
___________
...
@public
@implementer(IHandler)
class RFC2369:
"""Add the RFC 2369 List-* headers."""
name = 'rfc-2369'
description = _('Add the RFC 2369 List-* headers.')
log.debug('RFC 2369 processing')
def process(self, mlist, msg, msgdata):
"""See `IHandler`."""
process(mlist, msg, msgdata)
__________
When running with debug enabled, I see this top-level message many times at the top of the debug.log file during mailman startup, but not after that. I don't see any of my other debug messages.
_____________
Jul 11 16:03:36 2025 (409226) RFC 2369 processing
Jul 11 16:03:40 2025 (409244) RFC 2369 processing
Jul 11 16:03:41 2025 (409250) RFC 2369 processing
Jul 11 16:03:49 2025 (409271) RFC 2369 processing
Jul 11 16:03:50 2025 (409271) [DigestRunner] starting oneloop
Jul 11 16:03:50 2025 (409271) [DigestRunner] ending oneloop: 0
Jul 11 16:03:50 2025 (409261) RFC 2369 processing
Jul 11 16:03:50 2025 (409261) [BounceRunner] starting oneloop
Jul 11 16:03:50 2025 (409261) [BounceRunner] ending oneloop: 0
Jul 11 16:03:50 2025 (409264) RFC 2369 processing
Jul 11 16:03:50 2025 (409267) RFC 2369 processing
Jul 11 16:03:51 2025 (409262) RFC 2369 processing
Jul 11 16:03:51 2025 (409262) [CommandRunner] starting oneloop
Jul 11 16:03:51 2025 (409262) [CommandRunner] ending oneloop: 0
Jul 11 16:03:51 2025 (409266) RFC 2369 processing
Jul 11 16:03:51 2025 (409266) [PipelineRunner] starting oneloop
Jul 11 16:03:51 2025 (409266) [PipelineRunner] ending oneloop: 0
...
__________
Any suggestions on where else I can look? Any thoughts on why my debug calls are not showing in the logs?
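One Python detail that may be relevant here: a statement placed directly in a class body, like the log.debug('RFC 2369 processing') above, runs exactly once per process, at import time, while only code inside process() runs per message. That would match seeing the message just at runner startup. A minimal sketch of the difference (illustrative only, not Mailman code):

```python
calls = []

class Demo:
    # This line executes exactly once, when the class statement itself
    # is executed (i.e. at module import) -- not when instances are used.
    calls.append('class body')

    def process(self):
        # This runs every time the method is called.
        calls.append('process')

d = Demo()
d.process()
d.process()
print(calls)  # ['class body', 'process', 'process']
```

So a debug call that should fire per message has to go inside process() (or the module-level process function it delegates to), not into the class body.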
3 months

[MM3-users] Re: Hyperkitty CPU usage
by Alain Kohli
I'm running a custom image which is based on an older version of the one
here: https://github.com/maxking/docker-mailman. I attached it below.
But I separated postorius and hyperkitty, so hyperkitty is running in
its own container. I'm deploying the image with a plain 'docker run'
behind nginx. I made fulltext_index persistent now, but it didn't get
populated with anything yet. I don't really have an error traceback
because no error is ever thrown. The only file with some content
is uwsgi-error.log, which you can find below. I'm also still getting the
"A string literal cannot contain NUL (0x00) characters." messages. I
also noticed that it takes incredibly long for the web interface to load
(several minutes), even though there doesn't seem to be any process
consuming notable resources apart from the minutely job.
Funnily enough, I have the exact same image deployed on a second server
as well for testing. On that one everything works fine. The only
difference is that on the problematic one I have a lot more mailing
lists/archives and that I imported them from mailman2. Could something
have gone wrong during the import? I used the regular hyperkitty_import
command.
uwsgi-error.log:
*** Starting uWSGI 2.0.18 (64bit) on [Sat Apr 27 22:50:17 2019] ***
compiled with version: 6.4.0 on 27 April 2019 22:48:42
os: Linux-4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19)
nodename: hyperkitty.docker
machine: x86_64
clock source: unix
detected number of CPU cores: 4
current working directory: /home/hyperkitty
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
setgid() to 82
setuid() to 82
chdir() to /home/hyperkitty
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address 0.0.0.0:8081 fd 8
uwsgi socket 1 bound to TCP address 0.0.0.0:8080 fd 9
Python version: 3.6.8 (default, Jan 30 2019, 23:54:38) [GCC 6.4.0]
Python main interpreter initialized at 0x55dfaa41c980
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
[uwsgi-cron] command "./manage.py runjobs minutely" registered as
cron task
[uwsgi-cron] command "./manage.py runjobs quarter_hourly" registered
as cron task
[uwsgi-cron] command "./manage.py runjobs hourly" registered as cron
task
[uwsgi-cron] command "./manage.py runjobs daily" registered as cron task
[uwsgi-cron] command "./manage.py runjobs monthly" registered as
cron task
[uwsgi-cron] command "./manage.py runjobs weekly" registered as cron
task
[uwsgi-cron] command "./manage.py runjobs yearly" registered as cron
task
mapped 208576 bytes (203 KB) for 4 cores
*** Operational MODE: threaded ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter
0x55dfaa41c980 pid: 1 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1)
spawned uWSGI worker 1 (pid: 40, cores: 4)
Sat Apr 27 22:50:18 2019 - [uwsgi-cron] running "./manage.py runjobs
minutely" (pid 45)
[uwsgi-daemons] spawning "./manage.py qcluster" (uid: 82 gid: 82)
22:50:21 [Q] INFO Q Cluster-47 starting.
22:50:21 [Q] INFO Process-1:1 ready for work at 59
22:50:21 [Q] INFO Process-1:2 ready for work at 60
22:50:21 [Q] INFO Process-1:3 ready for work at 61
22:50:21 [Q] INFO Process-1:4 ready for work at 62
22:50:21 [Q] INFO Process-1:5 monitoring at 63
22:50:21 [Q] INFO Process-1 guarding cluster at 58
22:50:21 [Q] INFO Process-1:6 pushing tasks at 64
22:50:21 [Q] INFO Q Cluster-47 running.
22:59:31 [Q] INFO Enqueued 3403
22:59:31 [Q] INFO Process-1:1 processing [update_from_mailman]
22:59:33 [Q] INFO Processed [update_from_mailman]
Sat Apr 27 23:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
quarter_hourly" (pid 73)
Sat Apr 27 23:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
hourly" (pid 74)
[uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
with pid 73 exited after 64 second(s)
23:01:28 [Q] INFO Enqueued 3404
23:01:29 [Q] INFO Process-1:2 processing
[rebuild_mailinglist_cache_recent]
[uwsgi-cron] command "./manage.py runjobs hourly" running with pid
74 exited after 91 second(s)
Sat Apr 27 23:01:36 2019 - uwsgi_response_write_body_do(): Broken
pipe [core/writer.c line 341] during GET / (212.203.58.154)
OSError: write error
23:01:36 [Q] INFO Processed [rebuild_mailinglist_cache_recent]
Sat Apr 27 23:15:00 2019 - [uwsgi-cron] running "./manage.py runjobs
quarter_hourly" (pid 88)
[uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
with pid 88 exited after 4 second(s)
23:28:24 [Q] INFO Enqueued 3405
23:28:24 [Q] INFO Process-1:3 processing [update_from_mailman]
23:28:25 [Q] INFO Processed [update_from_mailman]
Sat Apr 27 23:30:00 2019 - [uwsgi-cron] running "./manage.py runjobs
quarter_hourly" (pid 96)
[uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
with pid 96 exited after 4 second(s)
23:44:40 [Q] INFO Enqueued 3406
23:44:40 [Q] INFO Process-1:4 processing [update_from_mailman]
23:44:41 [Q] INFO Processed [update_from_mailman]
Sat Apr 27 23:45:00 2019 - [uwsgi-cron] running "./manage.py runjobs
quarter_hourly" (pid 104)
[uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
with pid 104 exited after 4 second(s)
Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
quarter_hourly" (pid 113)
Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
hourly" (pid 114)
Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
daily" (pid 115)
Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
weekly" (pid 116)
[uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
with pid 113 exited after 55 second(s)
[uwsgi-cron] command "./manage.py runjobs weekly" running with pid
116 exited after 55 second(s)
00:01:36 [Q] INFO Enqueued 3407
00:01:36 [Q] INFO Process-1:1 processing
[rebuild_mailinglist_cache_recent]
[uwsgi-cron] command "./manage.py runjobs hourly" running with pid
114 exited after 99 second(s)
00:01:50 [Q] INFO Processed [rebuild_mailinglist_cache_recent]
00:04:52 [Q] INFO Enqueued 3408
00:04:52 [Q] INFO Process-1:2 processing [update_from_mailman]
00:04:54 [Q] INFO Processed [update_from_mailman]
Dockerfile:
FROM python:3.6-alpine3.7

# Add startup script to container
COPY assets/docker-entrypoint.sh /usr/local/bin/

# Install packages and dependencies for hyperkitty and add user for executing apps.
# It's important that the user has the UID/GID 82 so nginx can access the files.
RUN set -ex \
    && apk add --no-cache --virtual .build-deps gcc libc-dev linux-headers git \
       postgresql-dev \
    && apk add --no-cache --virtual .mailman-rundeps bash sassc mailcap \
       postgresql-client curl \
    && pip install -U django==2.2 \
    && pip install git+https://gitlab.com/eestec/mailmanclient \
       git+https://gitlab.com/mailman/hyperkitty@c9fa4d4bfc295438d3e01cd93090064d004cf44d \
       git+https://gitlab.com/eestec/django-mailman3 \
       whoosh \
       uwsgi \
       psycopg2 \
       dj-database-url \
       typing \
    && apk del .build-deps \
    && addgroup -S -g 82 hyperkitty \
    && adduser -S -u 82 -G hyperkitty hyperkitty \
    && chmod u+x /usr/local/bin/docker-entrypoint.sh

# Add needed files for uwsgi server + settings for django
COPY assets/__init__.py /home/hyperkitty
COPY assets/manage.py /home/hyperkitty
COPY assets/urls.py /home/hyperkitty
COPY assets/wsgi.py /home/hyperkitty
COPY assets/uwsgi.ini /home/hyperkitty
COPY assets/settings.py /home/hyperkitty

# Change ownership for uwsgi+django files and set execution rights for management script
RUN chown -R hyperkitty /home/hyperkitty && chmod u+x /home/hyperkitty/manage.py

# Make sure we are in the correct working dir
WORKDIR /home/hyperkitty

EXPOSE 8080 8081

# Use stop signal for uwsgi server
STOPSIGNAL SIGINT

ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["uwsgi", "--ini", "/home/hyperkitty/uwsgi.ini"]
On 4/27/19 7:58 PM, Abhilash Raj wrote:
> On Sat, Apr 27, 2019, at 9:40 AM, Alain Kohli wrote:
>> I have run "python manage.py rebuild_index" before, doesn't that do
>> clear_index as well? Apart from that, I run hyperkitty in a docker
>> container and didn't know fulltext_index should be persistent, so that
>> got deleted after every version update for sure.
> Which images are you using and how are you deploying them?
>
> You should persist fulltext_index, yes, and possibly logs if you need
> them for debugging later.
>
> Can you paste the entire error traceback?
>
>>
>> On 4/26/19 10:18 PM, Mark Sapiro wrote:
>>> On 4/26/19 11:14 AM, Alain Kohli wrote:
>>>> I see loads of "A string literal cannot contain NUL (0x00) characters."
>>>> messages, but I haven't found missing messages in the archives yet. Not
>>>> sure how that could be related, though. Apart from that I don't see
>>>> anything unusual. The other jobs (quarter_hourly, hourly, etc.) seem to
>>>> run and finish normally.
>>> Did you upgrade from a Python 2.7 version of HyperKitty to a Python 3
>>> version? The Haystack/Whoosh search engine databases are not compatible
>>> between the two and "A string literal cannot contain NUL (0x00)
>>> characters." is the symptom.
>>>
>>> You need to run 'python manage.py clear_index' or just remove all the
>>> files from the directory defined as 'PATH' under HAYSTACK_CONNECTIONS in
>>> your settings file (normally 'fulltext_index' in the same directory that
>>> contains your settings.py).
>>>
>> _______________________________________________
>> Mailman-users mailing list -- mailman-users(a)mailman3.org
>> To unsubscribe send an email to mailman-users-leave(a)mailman3.org
>> https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
>>
6 years, 5 months

[MM3-users] Re: Held messages not delivered after approval
by Krinetzki, Stephan
Hi all,
after the weekend, this is the status of the queues:
/opt/mailman/var/queue/archive:
total 0
drwxrwx--- 2 mailman mailman 6 Aug 4 08:47 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
/opt/mailman/var/queue/bad:
total 3016
drwxrwx--- 2 mailman mailman 4096 Aug 2 00:02 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
-rw-rw---- 1 mailman mailman 221723 Aug 1 17:26 1754061973.4885209+46b1ae3716439bf3ef98090296dfce0320fc3017.psv
-rw-rw---- 1 mailman mailman 32912 Aug 2 00:00 1754085602.191851+3576cf33232db110fa7761233f67245564553652.psv
-rw-rw---- 1 mailman mailman 416 Aug 2 00:00 1754085604.0204346+ad485da0c45cb0ad17a5dc42613c3eb3f313c20e.psv
-rw-rw---- 1 mailman mailman 1407649 Aug 2 00:00 1754085623.275817+f23139c8127c454b4fe65453af3db18e558b0e87.psv
-rw-rw---- 1 mailman mailman 1407634 Aug 2 00:02 1754085729.3529432+1643f907bac39a22a7d71e50b031c4f8a574082c.psv
/opt/mailman/var/queue/bounces:
total 0
drwxrwx--- 2 mailman mailman 6 Aug 4 05:22 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
/opt/mailman/var/queue/command:
total 0
drwxrwx--- 2 mailman mailman 6 Aug 4 08:14 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
/opt/mailman/var/queue/digest:
total 0
drwxrwx--- 2 mailman mailman 6 Aug 4 08:21 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
/opt/mailman/var/queue/in:
total 0
drwxrwx--- 2 mailman mailman 6 Aug 4 08:47 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
/opt/mailman/var/queue/nntp:
total 0
drwxrwx--- 2 mailman mailman 6 Jun 27 2024 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
/opt/mailman/var/queue/out:
total 1772
drwxrwx--- 2 mailman mailman 4096 Aug 4 08:49 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
-rw-rw---- 1 mailman mailman 1407649 Aug 2 00:00 1754085626.995262+ebf03275f7441b1bc7bbaf063cb6238bec30ff9f.pck.tmp
-rw-rw---- 1 mailman mailman 50244 Aug 3 00:00 1754172045.4400518+1da9c6fd82ee0dbc0e893eeca713d498a7273150.pck.tmp
-rw-rw---- 1 mailman mailman 31122 Aug 4 08:49 1754290173.7476344+7237c062741059807b66024091571ce3399d8eda.pck
-rw-rw---- 1 mailman mailman 18091 Aug 4 08:49 1754290173.8104873+cb4c78dffcd7147a5deca51b2cfd92829c06af9d.pck
-rw-rw---- 1 mailman mailman 18243 Aug 4 08:49 1754290173.8333225+d77876f47de73141dd5f6f9ac8d33d937bf6a727.pck
-rw-rw---- 1 mailman mailman 17585 Aug 4 08:49 1754290173.8802657+cb8103f6db5193a75706a7c1bc0d4f90126e587a.pck
-rw-rw---- 1 mailman mailman 217073 Aug 4 08:49 1754290173.9709926+90b2bfd3adf5faa7d0ec336f07dbe2267e666c75.pck
-rw-rw---- 1 mailman mailman 33494 Aug 4 08:49 1754290174.019171+553fc3546ae6c5f8e1b0f27ffc60e154629b9f1d.pck
/opt/mailman/var/queue/pipeline:
total 0
drwxrwx--- 2 mailman mailman 6 Aug 4 08:47 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
/opt/mailman/var/queue/retry:
total 0
drwxrwx--- 2 mailman mailman 6 Feb 18 08:57 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
/opt/mailman/var/queue/shunt:
total 9012
drwxrwx--- 2 mailman mailman 8192 Aug 4 00:00 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
-rw-rw---- 1 mailman mailman 490 Aug 1 10:26 1754036797.3635633+4a7750d6b8765f9f982dbfcbf9e972d8055bb4c5.pck
-rw-rw---- 1 mailman mailman 445 Aug 1 10:26 1754036797.618993+a6e6aefca6a76ed55c5bb4fae448f59c42a6f246.pck
-rw-rw---- 1 mailman mailman 443 Aug 1 10:26 1754036797.647361+67d36f287c76ea09b0524dcf03e080d68604d063.pck
-rw-rw---- 1 mailman mailman 443 Aug 1 10:26 1754036797.67375+0a78febb8e89d073607be70ccf90196b3e7fac17.pck
-rw-rw---- 1 mailman mailman 14233 Aug 1 10:50 1754038238.585611+3814bb1ce4232c97991a9cbcaed482966788e7c6.pck
-rw-rw---- 1 mailman mailman 14206 Aug 1 10:50 1754038251.6686325+2cacc5d5709cf5c9d8ce571947c8395eb0da37c9.pck
-rw-rw---- 1 mailman mailman 14466 Aug 1 10:51 1754038263.7415857+7f8260ca4bdf6f109be00206e07d03e717bfa6e1.pck
-rw-rw---- 1 mailman mailman 14297 Aug 1 10:51 1754038273.8673453+9ccb00ea1dc05fcbfb2bc0ef071e1f3083b0e73f.pck
-rw-rw---- 1 mailman mailman 443 Aug 1 13:30 1754047820.8963482+a122a01a47aa5c6dd8240d8ea7bc35fb3960a46d.pck
-rw-rw---- 1 mailman mailman 10870 Aug 1 15:21 1754054475.6638494+f8156ebc84effc7680b64ec03edc7d86b8f6eb65.pck
-rw-rw---- 1 mailman mailman 14782 Aug 1 16:02 1754056965.1972625+8273637db3056c2325a0903b7171725fb6f4d5e8.pck
-rw-rw---- 1 mailman mailman 221723 Aug 1 17:26 1754061973.4941757+4dc6368d88536bc195afdbce9432375166817413.pck
-rw-rw---- 1 mailman mailman 32912 Aug 2 00:00 1754085603.42874+d0b308c26d31e682d256e6cda3786797cad7b062.pck.tmp
-rw-rw---- 1 mailman mailman 17773 Aug 2 00:00 1754085627.0487497+cf822cd6608fcca545f5350124eae405df598f59.pck
-rw-rw---- 1 mailman mailman 478 Aug 2 00:00 1754085627.2411385+cbc654e510b8d618a2999274476b000542feff0e.pck
-rw-rw---- 1 mailman mailman 733213 Aug 2 00:00 1754085627.264005+5a16138d0e33252f2606ceb69577bb58d88bb2e5.pck.tmp
-rw-rw---- 1 mailman mailman 86108 Aug 2 00:00 1754085646.590158+624db84af49b742971e10004557f4289a2c5bccf.pck
-rw-rw---- 1 mailman mailman 446 Aug 2 00:00 1754085646.7170782+9ee0dedf32447f0e860dbe589fed9a272ceeadf8.pck.tmp
-rw-rw---- 1 mailman mailman 18750 Aug 2 00:01 1754085665.8180223+2d11df5f3b4b96ba710fc1bd24551cf67598be79.pck
-rw-rw---- 1 mailman mailman 21926 Aug 2 00:01 1754085665.823551+63198e22da918bc7193d93ec2e3cc01f8aaf5e44.pck
-rw-rw---- 1 mailman mailman 60639 Aug 2 00:01 1754085665.901899+4e47db603c553fa357350865de35092a4ebef5fe.pck
-rw-rw---- 1 mailman mailman 85888 Aug 2 00:01 1754085665.9268048+8ec4679a850410c2e574d4e63f0a4f3ab602af3b.pck
-rw-rw---- 1 mailman mailman 449 Aug 2 00:01 1754085666.0164568+fee1f138066b27b2752812f59e8154f47a534df7.pck
-rw-rw---- 1 mailman mailman 1407870 Aug 2 00:01 1754085696.4017332+94ab6c71d0e984764abd160f14775bb7f73b1222.pck
-rw-rw---- 1 mailman mailman 475 Aug 2 00:01 1754085696.6109+ced325862a46537b64b26257214a9190c3dde0f2.pck
-rw-rw---- 1 mailman mailman 311196 Aug 2 00:01 1754085715.0974422+7b9989730c03cff548e08961591887e17cc2324c.pck
-rw-rw---- 1 mailman mailman 437 Aug 2 00:01 1754085715.1049805+27e5b8943991e847faf240e5e29f8d0c08af4503.pck
-rw-rw---- 1 mailman mailman 29278 Aug 2 00:01 1754085715.1054037+b2fa0b025564ef9ba8d6917b8390cbe8fadf7b94.pck
-rw-rw---- 1 mailman mailman 20878 Aug 2 00:01 1754085715.1229646+805071d12ddfe493768a1e83f05db70aa92fb3bb.pck
-rw-rw---- 1 mailman mailman 7376 Aug 2 00:01 1754085715.2501519+16602d78716c922920e770b6d9e17609bdbc4adb.pck
-rw-rw---- 1 mailman mailman 1407649 Aug 2 00:02 1754085734.1192129+c43923f712a65cc4a111ad09a8f610df855a8692.pck.tmp
-rw-rw---- 1 mailman mailman 458 Aug 2 00:02 1754085734.1719563+31c0d3d15642b9c7c9f8fbfb94cc0ad8bfccd912.pck
-rw-rw---- 1 mailman mailman 12960 Aug 2 00:02 1754085734.21608+0265fcd448c9fb539c801f3a19ddb18c4956aa0e.pck
-rw-rw---- 1 mailman mailman 17254 Aug 2 15:26 1754141215.9892564+3b3dc4f6023516ad41b8a0430887f10701fccb3c.pck
-rw-rw---- 1 mailman mailman 38992 Aug 3 00:00 1754172001.3344436+8a1c1a9b3703f0421263c2b24fc27c9a9bb9116d.pck
-rw-rw---- 1 mailman mailman 1332842 Aug 3 00:00 1754172001.3540711+13acca94280dbcbb1a1cb8e730fbf87971b31aa9.pck
-rw-rw---- 1 mailman mailman 752482 Aug 3 00:00 1754172001.3596764+d2d772c50ff9812632bd6b054dfb9973758ed5c4.pck
-rw-rw---- 1 mailman mailman 18267 Aug 3 00:00 1754172026.4244561+b2e097258a1123092a199858a50f7d4cb4d5ca65.pck
-rw-rw---- 1 mailman mailman 17241 Aug 3 00:00 1754172026.4761071+9818446a413647ef57fc5f60d8fa51da43d85da8.pck
-rw-rw---- 1 mailman mailman 1407668 Aug 3 00:00 1754172026.4913034+c3055e1f383c0b3226169006e1cb79322e09846d.pck
-rw-rw---- 1 mailman mailman 50244 Aug 3 00:00 1754172045.543335+521b1a4d393691e85c86fd3a313efc6d90008b30.pck
-rw-rw---- 1 mailman mailman 32173 Aug 3 00:00 1754172045.5646617+ea2a4f2a232dfa9750e58c5b8cf9150c933e8ae7.pck
-rw-rw---- 1 mailman mailman 37411 Aug 3 00:01 1754172102.5104077+c9cabab5571fb2db62cbc688b6cc0e05f2fa2bfb.pck
-rw-rw---- 1 mailman mailman 25419 Aug 3 00:01 1754172102.5282032+20946f26d1adf7c481e504de4d589cdb1cc21edb.pck
-rw-rw---- 1 mailman mailman 276542 Aug 3 00:02 1754172120.9135556+ddbc3a3e9ce4afc78b95d03d73b0c8737f3f6606.pck
-rw-rw---- 1 mailman mailman 18018 Aug 3 00:02 1754172140.6559258+621dad8dc6a84e2ddc5413cedd7f8860b267c27c.pck
-rw-rw---- 1 mailman mailman 12887 Aug 4 00:00 1754258401.069818+189eeede43be7771f2932f2fb0954f1b09cb9fb9.pck
-rw-rw---- 1 mailman mailman 311196 Aug 4 00:00 1754258421.5529852+ca4e152d308b15023f5203309cc583929177cda2.pck
-rw-rw---- 1 mailman mailman 66982 Aug 4 00:00 1754258441.022148+91e14500fe3abe2bdcf69e56f5f200ca851e3a7e.pck
-rw-rw---- 1 mailman mailman 84863 Aug 4 00:00 1754258441.0448039+48b144d7abc1fe88b2fc6aef25398993c6bdc2e5.pck
-rw-rw---- 1 mailman mailman 20951 Aug 4 00:00 1754258459.2980123+485ba78453a56be2211e9a02277087ad5cf12b22.pck
/opt/mailman/var/queue/virgin:
total 0
drwxrwx--- 2 mailman mailman 6 Aug 4 08:47 .
drwxr-xr-x 14 mailman mailman 165 Jun 27 2024 ..
So let's start with the bad queue:
mailman qfile /opt/mailman/var/queue/bad/1754061973.4885209+46b1ae3716439bf3ef98090296dfce0320fc3017.psv
Traceback (most recent call last):
File "/opt/mailman/mailman-venv/bin/mailman", line 8, in <module>
sys.exit(main())
^^^^^^
File "/opt/mailman/mailman-venv/lib64/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/mailman/mailman-venv/lib64/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/opt/mailman/mailman-venv/lib64/python3.11/site-packages/mailman/bin/mailman.py", line 69, in invoke
return super().invoke(ctx)
^^^^^^^^^^^^^^^^^^^
File "/opt/mailman/mailman-venv/lib64/python3.11/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/mailman/mailman-venv/lib64/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/mailman/mailman-venv/lib64/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/mailman/mailman-venv/lib64/python3.11/site-packages/mailman/commands/cli_qfile.py", line 63, in qfile
m.append(pickle.load(fp))
^^^^^^^^^^^^^^^
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x95 in position 25: invalid start byte
Seems to be a decoding error.
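For what it's worth, this failure mode can be reproduced with a pickle whose string frame contains bytes that are not valid UTF-8 — a speculative illustration of a corrupted queue file, not reconstructed from the actual .psv contents:

```python
import pickle

# Hand-built protocol-2 pickle: PROTO 2, then BINUNICODE ('X') claiming a
# 2-byte UTF-8 string whose payload (0x95 0x96) is not valid UTF-8, then STOP.
# (Hypothetical corruption -- not the real file's bytes.)
corrupt = b'\x80\x02X\x02\x00\x00\x00\x95\x96.'

try:
    pickle.loads(corrupt)
except UnicodeDecodeError as exc:
    print(exc)  # 'utf-8' codec can't decode byte 0x95 ...
```

That suggests the file itself is damaged rather than `mailman qfile` being at fault, which would be consistent with it landing in the bad queue.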
mailman qfile /opt/mailman/var/queue/bad/1754061973.4885209+46b1ae3716439bf3ef98090296dfce0320fc3017.psv
The first object is the e-mail, with a lot of HTML (Outlook seems to be the client, so...)
The Second:
{ '_parsemsg': False,
'approved': True,
'envsender': 'noreply(a)lists.example.com',
'lang': 'de',
'listid': 'kennziffern.lists.example.com',
'member_moderation_action': 'hold',
'moderation_reasons': ['The message comes from a moderated member'],
'moderation_sender': '<Sender Address>',
'moderator_approved': True,
'original_sender': '<Sender Address>',
'original_size': 28614,
'original_subject': '=?iso-8859-1?Q?=C4nderungen_im_Orgaverzeichnis?=',
'received_time': datetime.datetime(2025, 7, 31, 11, 16, 46, 475199),
'recipients': { <some recipients>},
'rule_hits': ['member-moderation'],
'rule_misses': [ 'dmarc-mitigation',
'no-senders',
'approved',
'loop',
'banned-address',
'header-match-config-1',
'emergency'],
'stripped_subject': 'Änderungen im Orgaverzeichnis',
'to_list': True,
'type': 'data',
'verp': False,
'version': 3,
'whichq': 'out'}
I don't see a problem here. But the timestamp seems to be related to the restart of mailman. Can I skip this in the logrotate?
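For poking at these files by hand, here is a small sketch along the lines of what `mailman qfile` does: open the file in binary mode and unpickle objects until EOF (assumption: the queue file holds a sequence of pickles, the message first and the msgdata dict second, as in the dump above):

```python
import pickle

def read_queue_file(path):
    """Load every pickled object from a Mailman queue file (.pck/.psv);
    typically the message comes first, then the msgdata dictionary."""
    objects = []
    with open(path, 'rb') as fp:  # binary mode: these are raw pickles
        while True:
            try:
                objects.append(pickle.load(fp))
            except EOFError:
                break
    return objects
```

This makes it easy to inspect msgdata keys like 'whichq' or 'received_time' programmatically instead of reading the pretty-printed dump.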
mailman qfile /opt/mailman/var/queue/bad/1754085604.0204346+ad485da0c45cb0ad17a5dc42613c3eb3f313c20e.psv
It's a digest:
[----- start pickle -----]
<----- start object 1 ----->
<----- start object 2 ----->
{ '_parsemsg': False,
'digest_number': 7,
'digest_path': '/opt/mailman/var/lists/doc-infos.lists.example.com/digest.134.7.mmdf',
'listid': 'doc-infos.lists.example.com',
'version': 3,
'volume': 134}
[----- end pickle -----]
Btw: The crontab is the following:
#####0 */2 * * * apache /opt/mailman/mailman-venv/bin/django-admin runjobs minutely --pythonpath /opt/mailman/mailman-suite/mailman-suite_project --settings settings
#*/30 * * * * mailman /opt/mailman/mailman-venv/bin/django-admin runjobs minutely --pythonpath /etc/mailman3/ --settings settings
@hourly mailman /opt/mailman/mailman-venv/bin/django-admin runjobs hourly --pythonpath /etc/mailman3/ --settings settings
#####@daily apache /opt/mailman/mailman-venv/bin/django-admin runjobs daily --pythonpath /etc/mailman3/ --settings settings
@monthly mailman /opt/mailman/mailman-venv/bin/django-admin runjobs monthly --pythonpath /etc/mailman3/ --settings settings
@yearly mailman /opt/mailman/mailman-venv/bin/django-admin runjobs yearly --pythonpath /etc/mailman3/ --settings settings
@daily mailman cd /opt/mailman; source /opt/mailman/mailman-venv/bin/activate; /opt/mailman/mailman-venv/bin/mailman digests --send > /dev/null 2>&1
mailman qfile /opt/mailman/var/queue/bad/1754085623.275817+f23139c8127c454b4fe65453af3db18e558b0e87.psv
That's the mail which should be sent to ~43000 members.
The header says:
Received: from <ext. Server> (<IP>)
by lists.example.com (Postfix) with ESMTPS id F16D88016D1C
for <mm(a)lists.example.com>; Thu, 31 Jul 2025 09:58:38 +0200 (CEST)
The date of the file is
-rw-rw---- 1 mailman mailman 1407649 Aug 2 00:00 1754085623.275817+f23139c8127c454b4fe65453af3db18e558b0e87.psv
So I checked the mailman.log:
[2025-08-01 00:00:02 +0200] [324558] [INFO] Handling signal: term
[2025-08-01 00:00:02 +0200] [324568] [INFO] Worker exiting (pid: 324568)
[2025-08-01 00:00:02 +0200] [324571] [INFO] Worker exiting (pid: 324571)
[2025-08-01 00:00:02 +0200] [324572] [INFO] Worker exiting (pid: 324572)
[2025-08-01 00:00:02 +0200] [324574] [INFO] Worker exiting (pid: 324574)
[2025-08-01 00:00:02 +0200] [324558] [ERROR] Worker (pid:324571) was sent SIGTERM!
[2025-08-01 00:00:02 +0200] [324558] [ERROR] Worker (pid:324572) was sent SIGTERM!
[2025-08-01 00:00:02 +0200] [324558] [ERROR] Worker (pid:324568) was sent SIGTERM!
[2025-08-01 00:00:02 +0200] [324558] [ERROR] Worker (pid:324574) was sent SIGTERM!
[2025-08-01 00:00:02 +0200] [324558] [INFO] Shutting down: Master
Aug 01 00:00:11 2025 (567061) Task runner evicted 0 expired pendings
[2025-08-01 00:00:12 +0200] [567059] [INFO] Starting gunicorn 22.0.0
[2025-08-01 00:00:12 +0200] [567059] [INFO] Listening at: http://127.0.0.1:8001 (567059)
[2025-08-01 00:00:12 +0200] [567059] [INFO] Using worker: sync
[2025-08-01 00:00:12 +0200] [567069] [INFO] Booting worker with pid: 567069
[2025-08-01 00:00:12 +0200] [567070] [INFO] Booting worker with pid: 567070
[2025-08-01 00:00:12 +0200] [567071] [INFO] Booting worker with pid: 567071
[2025-08-01 00:00:12 +0200] [567073] [INFO] Booting worker with pid: 567073
Aug 01 00:00:13 2025 (567061) Task runner deleted 0 orphaned workflows
<Omitted GET request>
Aug 01 00:00:21 2025 (567061) Task runner deleted 0 orphaned requests
[2025-08-01 00:00:23 +0200] [567059] [INFO] Handling signal: term
[2025-08-01 00:00:23 +0200] [567073] [INFO] Worker exiting (pid: 567073)
[2025-08-01 00:00:23 +0200] [567069] [INFO] Worker exiting (pid: 567069)
[2025-08-01 00:00:23 +0200] [567070] [INFO] Worker exiting (pid: 567070)
[2025-08-01 00:00:23 +0200] [567071] [INFO] Worker exiting (pid: 567071)
[2025-08-01 00:00:23 +0200] [567059] [ERROR] Worker (pid:567073) was sent SIGTERM!
[2025-08-01 00:00:23 +0200] [567059] [ERROR] Worker (pid:567070) was sent SIGTERM!
[2025-08-01 00:00:23 +0200] [567059] [ERROR] Worker (pid:567071) was sent SIGTERM!
[2025-08-01 00:00:23 +0200] [567059] [ERROR] Worker (pid:567069) was sent SIGTERM!
[2025-08-01 00:00:23 +0200] [567059] [INFO] Shutting down: Master
[2025-08-01 00:00:35 +0200] [567206] [INFO] Starting gunicorn 22.0.0
[2025-08-01 00:00:35 +0200] [567206] [INFO] Listening at: http://127.0.0.1:8001 (567206)
[2025-08-01 00:00:35 +0200] [567206] [INFO] Using worker: sync
[2025-08-01 00:00:35 +0200] [567246] [INFO] Booting worker with pid: 567246
[2025-08-01 00:00:35 +0200] [567250] [INFO] Booting worker with pid: 567250
[2025-08-01 00:00:35 +0200] [567252] [INFO] Booting worker with pid: 567252
[2025-08-01 00:00:35 +0200] [567253] [INFO] Booting worker with pid: 567253
<Omitted GET requests>
Aug 01 00:00:36 2025 (567208) Task runner evicted 0 expired pendings
<Omitted GET requests>
Aug 01 00:00:38 2025 (567208) Task runner deleted 0 orphaned workflows
[2025-08-01 00:00:42 +0200] [567206] [INFO] Handling signal: term
[2025-08-01 00:00:42 +0200] [567246] [INFO] Worker exiting (pid: 567246)
[2025-08-01 00:00:42 +0200] [567253] [INFO] Worker exiting (pid: 567253)
[2025-08-01 00:00:42 +0200] [567250] [INFO] Worker exiting (pid: 567250)
[2025-08-01 00:00:42 +0200] [567252] [INFO] Worker exiting (pid: 567252)
[2025-08-01 00:00:42 +0200] [567206] [ERROR] Worker (pid:567250) was sent SIGTERM!
[2025-08-01 00:00:42 +0200] [567206] [ERROR] Worker (pid:567252) was sent SIGTERM!
[2025-08-01 00:00:42 +0200] [567206] [ERROR] Worker (pid:567246) was sent SIGTERM!
[2025-08-01 00:00:42 +0200] [567206] [ERROR] Worker (pid:567253) was sent SIGTERM!
[2025-08-01 00:00:42 +0200] [567206] [INFO] Shutting down: Master
Aug 01 00:00:54 2025 (567280) Task runner evicted 2 expired pendings
Aug 01 00:00:56 2025 (567280) Task runner deleted 0 orphaned workflows
[2025-08-01 00:00:58 +0200] [567278] [INFO] Starting gunicorn 22.0.0
[2025-08-01 00:00:58 +0200] [567278] [INFO] Listening at: http://127.0.0.1:8001 (567278)
[2025-08-01 00:00:58 +0200] [567278] [INFO] Using worker: sync
[2025-08-01 00:00:58 +0200] [567327] [INFO] Booting worker with pid: 567327
[2025-08-01 00:00:58 +0200] [567328] [INFO] Booting worker with pid: 567328
[2025-08-01 00:00:58 +0200] [567329] [INFO] Booting worker with pid: 567329
[2025-08-01 00:00:58 +0200] [567330] [INFO] Booting worker with pid: 567330
<Omitted GET requests>
[2025-08-01 00:01:00 +0200] [567278] [INFO] Handling signal: term
[2025-08-01 00:01:00 +0200] [567327] [INFO] Worker exiting (pid: 567327)
[2025-08-01 00:01:00 +0200] [567328] [INFO] Worker exiting (pid: 567328)
[2025-08-01 00:01:00 +0200] [567329] [INFO] Worker exiting (pid: 567329)
[2025-08-01 00:01:00 +0200] [567330] [INFO] Worker exiting (pid: 567330)
[2025-08-01 00:01:01 +0200] [567278] [ERROR] Worker (pid:567328) was sent SIGTERM!
[2025-08-01 00:01:01 +0200] [567278] [ERROR] Worker (pid:567329) was sent SIGTERM!
[2025-08-01 00:01:01 +0200] [567278] [ERROR] Worker (pid:567330) was sent SIGTERM!
[2025-08-01 00:01:01 +0200] [567278] [ERROR] Worker (pid:567327) was sent SIGTERM!
[2025-08-01 00:01:01 +0200] [567278] [INFO] Shutting down: Master
Aug 01 00:01:12 2025 (567381) Task runner evicted 2 expired pendings
[2025-08-01 00:01:13 +0200] [567379] [INFO] Starting gunicorn 22.0.0
[2025-08-01 00:01:13 +0200] [567379] [INFO] Listening at: http://127.0.0.1:8001 (567379)
[2025-08-01 00:01:13 +0200] [567379] [INFO] Using worker: sync
[2025-08-01 00:01:13 +0200] [567397] [INFO] Booting worker with pid: 567397
[2025-08-01 00:01:13 +0200] [567398] [INFO] Booting worker with pid: 567398
[2025-08-01 00:01:13 +0200] [567399] [INFO] Booting worker with pid: 567399
[2025-08-01 00:01:13 +0200] [567400] [INFO] Booting worker with pid: 567400
<Omitted GET Request>
Aug 01 00:01:13 2025 (567381) Task runner deleted 0 orphaned workflows
<Omitted GET Requests>
Aug 01 00:01:19 2025 (567381) Task runner deleted 0 orphaned requests
<Omitted GET Requests>
[2025-08-01 00:01:34 +0200] [567379] [INFO] Handling signal: term
[2025-08-01 00:01:34 +0200] [567397] [INFO] Worker exiting (pid: 567397)
[2025-08-01 00:01:34 +0200] [567399] [INFO] Worker exiting (pid: 567399)
[2025-08-01 00:01:34 +0200] [567398] [INFO] Worker exiting (pid: 567398)
[2025-08-01 00:01:34 +0200] [567400] [INFO] Worker exiting (pid: 567400)
[2025-08-01 00:01:34 +0200] [567379] [ERROR] Worker (pid:567399) was sent SIGTERM!
[2025-08-01 00:01:34 +0200] [567379] [ERROR] Worker (pid:567398) was sent SIGTERM!
[2025-08-01 00:01:34 +0200] [567379] [ERROR] Worker (pid:567397) was sent SIGTERM!
[2025-08-01 00:01:34 +0200] [567379] [ERROR] Worker (pid:567400) was sent SIGTERM!
[2025-08-01 00:01:34 +0200] [567379] [INFO] Shutting down: Master
[2025-08-01 00:01:46 +0200] [567516] [INFO] Starting gunicorn 22.0.0
[2025-08-01 00:01:46 +0200] [567516] [INFO] Listening at: http://127.0.0.1:8001 (567516)
[2025-08-01 00:01:46 +0200] [567516] [INFO] Using worker: sync
[2025-08-01 00:01:46 +0200] [567525] [INFO] Booting worker with pid: 567525
[2025-08-01 00:01:46 +0200] [567526] [INFO] Booting worker with pid: 567526
[2025-08-01 00:01:46 +0200] [567527] [INFO] Booting worker with pid: 567527
[2025-08-01 00:01:46 +0200] [567528] [INFO] Booting worker with pid: 567528
Aug 01 00:01:47 2025 (567518) Task runner evicted 2 expired pendings
Aug 01 00:01:48 2025 (567518) Task runner deleted 0 orphaned workflows
<Omitted GET Request>
[2025-08-01 00:01:52 +0200] [567516] [INFO] Handling signal: term
[2025-08-01 00:01:52 +0200] [567526] [INFO] Worker exiting (pid: 567526)
[2025-08-01 00:01:52 +0200] [567525] [INFO] Worker exiting (pid: 567525)
[2025-08-01 00:01:52 +0200] [567527] [INFO] Worker exiting (pid: 567527)
[2025-08-01 00:01:52 +0200] [567528] [INFO] Worker exiting (pid: 567528)
[2025-08-01 00:01:52 +0200] [567516] [ERROR] Worker (pid:567526) was sent SIGTERM!
[2025-08-01 00:01:52 +0200] [567516] [ERROR] Worker (pid:567525) was sent SIGTERM!
[2025-08-01 00:01:52 +0200] [567516] [ERROR] Worker (pid:567527) was sent SIGTERM!
[2025-08-01 00:01:52 +0200] [567516] [ERROR] Worker (pid:567528) was sent SIGTERM!
[2025-08-01 00:01:52 +0200] [567516] [INFO] Shutting down: Master
Aug 01 00:02:06 2025 (567648) Task runner evicted 2 expired pendings
[2025-08-01 00:02:06 +0200] [567646] [INFO] Starting gunicorn 22.0.0
[2025-08-01 00:02:06 +0200] [567646] [INFO] Listening at: http://127.0.0.1:8001 (567646)
[2025-08-01 00:02:06 +0200] [567646] [INFO] Using worker: sync
[2025-08-01 00:02:06 +0200] [567688] [INFO] Booting worker with pid: 567688
[2025-08-01 00:02:06 +0200] [567689] [INFO] Booting worker with pid: 567689
[2025-08-01 00:02:06 +0200] [567690] [INFO] Booting worker with pid: 567690
[2025-08-01 00:02:06 +0200] [567691] [INFO] Booting worker with pid: 567691
Aug 01 00:02:07 2025 (567648) Task runner deleted 0 orphaned workflows
[2025-08-01 00:02:11 +0200] [567646] [INFO] Handling signal: term
[2025-08-01 00:02:11 +0200] [567689] [INFO] Worker exiting (pid: 567689)
[2025-08-01 00:02:11 +0200] [567688] [INFO] Worker exiting (pid: 567688)
[2025-08-01 00:02:11 +0200] [567690] [INFO] Worker exiting (pid: 567690)
[2025-08-01 00:02:11 +0200] [567691] [INFO] Worker exiting (pid: 567691)
[2025-08-01 00:02:11 +0200] [567646] [ERROR] Worker (pid:567688) was sent SIGTERM!
[2025-08-01 00:02:11 +0200] [567646] [ERROR] Worker (pid:567689) was sent SIGTERM!
[2025-08-01 00:02:11 +0200] [567646] [ERROR] Worker (pid:567690) was sent SIGTERM!
[2025-08-01 00:02:11 +0200] [567646] [ERROR] Worker (pid:567691) was sent SIGTERM!
[2025-08-01 00:02:11 +0200] [567646] [INFO] Shutting down: Master
[2025-08-01 00:02:24 +0200] [567717] [INFO] Starting gunicorn 22.0.0
[2025-08-01 00:02:24 +0200] [567717] [INFO] Listening at: http://127.0.0.1:8001 (567717)
[2025-08-01 00:02:24 +0200] [567717] [INFO] Using worker: sync
[2025-08-01 00:02:24 +0200] [567786] [INFO] Booting worker with pid: 567786
[2025-08-01 00:02:24 +0200] [567789] [INFO] Booting worker with pid: 567789
[2025-08-01 00:02:24 +0200] [567792] [INFO] Booting worker with pid: 567792
[2025-08-01 00:02:24 +0200] [567794] [INFO] Booting worker with pid: 567794
<Omitted GET Requests>
Aug 01 00:02:25 2025 (567719) Task runner evicted 2 expired pendings
Aug 01 00:02:26 2025 (567719) Task runner deleted 0 orphaned workflows
<Omitted GET Requests>
Aug 01 00:02:33 2025 (567719) Task runner deleted 0 orphaned requests
[01/Aug/2025:00:02:35 +0200] "GET /3.1/lists/ifip-tc6(a)lists.rwth-aachen.de HTTP/1.1" 200 423 "-" "GNU Mailman REST client v3.3.5"
[01/Aug/2025:00:02:35 +0200] "GET /3.1/lists/smartlist(a)lists.rwth-aachen.de HTTP/1.1" 200 438 "-" "GNU Mailman REST client v3.3.5"
Aug 01 00:02:42 2025 (567719) Task runner deleted 2 orphaned messages
Aug 01 00:02:42 2025 (567719) Task runner deleted 0 orphaned message files
Aug 01 00:02:42 2025 (567719) Task runner evicted 2 expired bounce events
Aug 01 00:02:42 2025 (567719) Task runner evicted expired cache entries
Well... I will stop the restart after the logrotate today.
>>
>> IIRC all of the shunted messages that Stephen looked at with qfiles
>> were those special digest messages (ie, message component empty,
>> pointer to lists/$LIST/something.mmdf in the msg_data component). So
>> something is going wrong in the to-digest handler.
>
>
>And for every one of those shunted messages there should be an exception with traceback logged in mailman.log. Those tracebacks should be helpful.
If there were any. Maybe the "debug" level should be "info". But for which logs?
Maybe the restart at night after the logrotate is the issue.
--
Stephan Krinetzki
IT Center
Gruppe: Anwendungsbetrieb und Cloud
Abteilung: Systeme und Betrieb
RWTH Aachen University
Seffenter Weg 23
52074 Aachen
Tel: +49 241 80-24866
Fax: +49 241 80-22134
krinetzki(a)itc.rwth-aachen.de
www.itc.rwth-aachen.de
Social media channels of the IT Center:
https://blog.rwth-aachen.de/itc/
https://www.facebook.com/itcenterrwth
https://www.linkedin.com/company/itcenterrwth
https://twitter.com/ITCenterRWTH
https://www.youtube.com/channel/UCKKDJJukeRwO0LP-ac8x8rQ
-----Original Message-----
From: Mark Sapiro <mark(a)msapiro.net>
Sent: Saturday, August 2, 2025 6:11 PM
To: Stephen J. Turnbull <steve(a)turnbull.jp>
Cc: mailman-users(a)mailman3.org
Subject: [MM3-users] Re: Held messages not delivered after approval
On 8/2/25 01:51, Stephen J. Turnbull wrote:
>
> IIRC all of the shunted messages that Stephen looked at with qfiles
> were those special digest messages (ie, message component empty,
> pointer to lists/$LIST/something.mmdf in the msg_data component). So
> something is going wrong in the to-digest handler.
And for every one of those shunted messages there should be an exception with traceback logged in mailman.log. Those tracebacks should be helpful.
> The number of .tmp files lying around bothers me. AFA grep CS, the
> only place that can happen is in switchboard.py:136 in .enqueue:
>
> with open(tmpfile, 'wb') as fp:
> fp.write(msgsave)
> pickle.dump(data, fp, protocol)
> fp.flush()
> os.fsync(fp.fileno())
> os.rename(tmpfile, filename)
>
> where `msgsave` is already a pickled object. So either pickle.dump is
> choking on something in data (the metadata, which I believe is all
> primitive Python data types), or something (OOM kill?) is happening at
> the OS level. A crash in pickle.dump should leave an exception log
> and backtrace in the logs.
>
> AFAIK, Mailman does not clean up .tmp files at startup, right?
That is correct. The *.pck.tmp file is created by the above code and immediately after writing is renamed to *.pck. It is done this way to prevent another process picking up a partially written *.pck.
If a *.pck.tmp file is somehow left behind, it is never looked at or deleted by any Mailman code.
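Since no Mailman code ever looks at or deletes a leftover *.pck.tmp, a site-level cron job can sweep them up. A minimal sketch; the one-hour age threshold is an assumption, not anything Mailman defines:

```python
import os
import time

def remove_stale_tmp(queue_dir, max_age=3600):
    """Delete *.pck.tmp files older than max_age seconds.

    Safe because Switchboard.enqueue() renames the .tmp to .pck
    immediately after writing; anything still named .tmp long
    afterwards was abandoned by a crashed writer.
    """
    removed = []
    now = time.time()
    for name in os.listdir(queue_dir):
        if not name.endswith('.pck.tmp'):
            continue
        path = os.path.join(queue_dir, name)
        if now - os.path.getmtime(path) > max_age:
            os.remove(path)
            removed.append(name)
    return removed
```

Run it against each var/queue/* subdirectory from cron if the stray .tmp files keep accumulating.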
--
Mark Sapiro <mark(a)msapiro.net> The highway is for gamblers,
San Francisco Bay Area, California better use your sense - B. Dylan
_______________________________________________
Mailman-users mailing list -- mailman-users(a)mailman3.org
To unsubscribe send an email to mailman-users-leave(a)mailman3.org
https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
Archived at: https://lists.mailman3.org/archives/list/mailman-users@mailman3.org/message…
This message sent to krinetzki(a)itc.rwth-aachen.de

[MM3-users] Re: Fwd: Re: Removing a mail addresses and users
by Abhilash Raj
On Sun, Jun 14, 2020, at 12:45 PM, Allan Hansen wrote:
> All,
>
> Thank you for your thoughtful responses to my call for a removal of the
> core vs. Django disconnect. From your responses it appears that my
> suggestion that my issues were caused by the Mailman Core was not based in
> reality. Instead, the issue is how Postorius interacts with the Core.
> My apologies for overreaching, as the cause of my issues is not an
> issue to me. Keeping Core a simple list manager is fine, if Postorius
> is easy to use and does the expected.
>
> As a Subscriber, you are able to do switch emails in subscriptions from
> your options page. The URL to which is displayed in the List's Summary
> page when you are logged in. Yes, it requires an approval if the list's
> settings are set to moderate but that is going to be fixed, see this
> issue[1]. It is quite simple IMO to fix this one, if someone wants to
> take this up.
>
> Well, it’s more than that. Based on the current setup, I have asked my
> subscribers who want to change their subscription addresses to:
> a. Create an account if you do not already have one.
> b. Go to the user profile page.
> c. Select ‘E-mail Addresses’.
> d. Add the new e-mail to the user profile.
> e. Wait for the verification notice and verify the new address.
> e1. If it does not show send email to hansen(a)rc.org
> <mailto:hansen@rc.org> to get the email verified manually.
> f. Sign in to the account again and to the user profile (or refresh
> the page if not signed out).
> g. Select the new address as the primary address.
> h. Click on ‘Manage Lists’
> i. For each list you are subscribed to:
> i1. Select the list
> i2. Click ‘List Options Page.’
> i3. Pull down the ’Select Email’ menu.
> i4 Select your new email.
> i5. Press ‘Change email used for subscription.
> i6. When the moderator contacts you, explain that you are just
> changing your email.
> i6.1. If the moderator is late, send a reminder or send email to
> hansen(a)rc.org <mailto:hansen@rc.org> to bypass the moderator.
So, a-f is something we still always want to keep, since we aren't going to deal with unverified email addresses.
Now, Primary Address is something interesting in Mailman 3, which wasn't utilized until now in Postorius. I recently added a new feature to Postorius which allows users to subscribe via their Primary Address and it switches the delivery address when you switch your primary address. You can play with how it looks at https://lists.mailman3.org (Mailman instance serving this list) where you will see "Primary Address (myprimaryemail(a)example.com)" as an option when subscribing/switching address.
This is still a somewhat new feature and there are gaps in it, but it will grow over time. For example, as an admin, you can only subscribe a User's address and not their Primary Address (from the Mass Subscription page, or through the CLI commands).
As a User, you are able to switch to using your primary address, and then every time you change your primary address, all your subscriptions shift to the new address without any extra steps.
Also, I just created a fix for i6[1], so switching email addresses won't require approvals anymore. I am still debating if this should be configurable by list owners.
[1]: https://gitlab.com/mailman/postorius/-/merge_requests/532
>
> Most people choke on these instructions, so I have unfortunately
> resorted to just asking them to subscribe the new address and forget
> about the old. This is not good for our reputation when the old
> addresses start bouncing. So, in more detail:
>
> I would like to ask my subscribers who want to change their address to:
> a. Create an account if you did not already have one.
> b. Go to the user profile page.
> c. Select ‘E-mail Addresses’.
> d. Select an existing address e-mail listed the user profile.
> e. Press new button ‘Change address’.
> f. When page refreshes, enter the new e-mail address.
> g. Press ‘Apply.'
> d. Wait for the verification notice.
> d1. If it does not show send email to hansen(a)rc.org
> <mailto:hansen@rc.org> to get the email verified manually.
>
> In other words, in addition to ‘Make Primary’ and ‘Re-send
> Verification’ and ‘Remove’, another button says: ‘Change Address’ or’
> ‘Replace Address’ or some such.
>
> This button brings up another page that looks the same, but instead of
> ‘Add E-mail Address’ it says: ‘Enter New E-mail Address’
> and instead of ‘Add E-mail’ it says: ‘Apply’. Then the page refreshes
> back to the original page, but the new address is now
> replacing the old address. Optionally, it can show the old address
> still with a ‘pending change’ until verification.
I am still thinking about a few implementation details of this one, I've created an issue[2]
[2]: https://gitlab.com/mailman/postorius/-/issues/435
I am thinking we could minimize changes by retaining the same workflow for adding and verifying a new address, but add a new workflow of some sort to switch subscriptions to a new address with a single button click. This should simplify the loop of going to each options page and switching addresses. Finally, the old address can be removed.
Another train of thought is that when you delete your email address, you get an option to switch all the subscriptions on that address to some other address. I could do this by adding a page after you click the "Remove" button. However, this would tie switching subscriptions to deleting the original address, which I don't want.
Maybe I'll end up implementing both of these.
What do others think about this?
>
> Further, when the new address is verified, all lists get updated with
> the new address replacing the old
> address and all settings associated with the old address are now
> associated with the new address. No moderator or admin
> involvement needed (other than d1).
The fix[1] I mentioned above should already resolve this issue.
>
> You are able to sign in via web and manage your email and
> subscriptions. The preferences also work exactly how you described
> above where the lower level override the default and/or upper level
> settings.
>
> To some extent that is true, Abhilash, and I appreciate that. The above
> (address change) is a missing piece in that picture and I look forward
> to seeing it added, if you have time.
>
> Yours,
>
> Allan Hansen
> hansen(a)rc.org
>
>
>
>
> > On Jun 14, 2020, at 0:19 , Stephen J. Turnbull <turnbull.stephen.fw(a)u.tsukuba.ac.jp> wrote:
> >
> > Mark Sapiro writes:
> >
> >> Then the people who developed the web based management UI (Postorius)
> >> and archive UI (HyperKitty) chose to develop those within a Django
> >> framework and Django has its own concept of User separate from Mailman
> >> Core and that is where the disconnect occurs.
> >>
> >> It's not that Mailman Core lacks what you want. It's that Django doesn't
> >> use it.
> >
> > I think that's mostly right, in terms of the features that users miss.
> > However, as far as I know, Mailman core does lack facilities for
> > identification, authentication, and authorization of connections to
> > the REST API. And that means that the front ends have to handle
> > this. I would guess that's why the web interfaces are built around
> > Django user authentication.
> >
> > I think it would be possible to have somewhat tighter integration
> > between the Django "web users" and the Mailman core User objects, but
> > it's not necessarily going to be trivial.
> >
> > I see that Abhilash is pretty optimistic, but I fear this is
> > going to be a long-tail situation where we're going to be seeing core
> > user vs. web-gui user integration issues in 2030 (maybe by then only 1
> > every 450 days ;-). I have some ideas, maybe in a couple weeks I can
> > sketch them out.
> >
> > Steve
> > _______________________________________________
> > Mailman-users mailing list -- mailman-users(a)mailman3.org
> > To unsubscribe send an email to mailman-users-leave(a)mailman3.org
> > https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
>
> _______________________________________________
> Mailman-users mailing list -- mailman-users(a)mailman3.org
> To unsubscribe send an email to mailman-users-leave(a)mailman3.org
> https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
>
--
thanks,
Abhilash Raj (maxking)

[MM3-users] Re: Held messages not delivered after approval
by Krinetzki, Stephan
>The django-admin commands aren't directly related, I'm going to ignore them for now. The only thing I know for *sure* runs at midnight daily is "mailman digests --send". On my Debian Linode, the default (which I left alone) is for logrotate's cron job to live in /etc/cron.daily, which is run at 06:25 daily using "run-parts". (This is quite a common setup on Linux.) So we need to know where the logrotate job is specified (crontab, cron.d, or cron.daily) and at what time (@daily = midnight) to be sure that the mailman restart is related to the bad and shunt queue files.
Logrotate is executed by a systemd timer (Rocky 9 OS, btw) and is scheduled for:
Tue 2025-08-05 00:00:00 CEST 7h left Mon 2025-08-04 00:00:00 CEST 16h ago logrotate.timer logrotate.service
So every day at midnight.
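If the midnight timer turns out to collide with Mailman's own midnight "mailman digests --send" cron job, the timer can be shifted with a systemd drop-in. A sketch; the 06:25 time only mirrors Debian's cron.daily default and is an arbitrary choice:

```ini
# /etc/systemd/system/logrotate.timer.d/override.conf
[Timer]
# Clear the packaged schedule, then rotate at 06:25 instead of midnight.
OnCalendar=
OnCalendar=*-*-* 06:25:00
```

Apply with `systemctl daemon-reload` followed by `systemctl restart logrotate.timer`.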
>That is not normal. Your control process is crashing every 15-20 seconds. I think it probably is a problem with the digests, not with the restart. What appears to be happening is that the digest process gets triggered, it creates a message and queues it, then fails to send it so nastily that Mailman restarts (or stops and something like systemd restarts it). On restart, Mailman finds the digest message (probably in the out queue), tries to send it again, crashes again, and eventually decides that isn't going to work, sends it to bad, and stops crashing.
I saw this, but I have no idea how it happens. Currently there are ~42 mails after 'mailman unshunt' and I think Mailman loops over them (the queue doesn't get shorter). But mail is being delivered for a lot of lists.
>According to the config you posted earlier, you're sending most channels to separate log files. Have you checked any of them other than mailman.log and smtp.log? Also, note that httpd.log and error.log are normally used by Mailman core's gunicorn (ie, the REST API). I'm not sure what effect directing Mailman's error channel to error.log will have, but I suspect you could end up losing logs or having text from different sources mixed.
So I should update my logging config. Do you have a good example, or maybe even the distributed default?
--
Stephan Krinetzki
IT Center
Gruppe: Anwendungsbetrieb und Cloud
Abteilung: Systeme und Betrieb
RWTH Aachen University
Seffenter Weg 23
52074 Aachen
Tel: +49 241 80-24866
Fax: +49 241 80-22134
krinetzki(a)itc.rwth-aachen.de
www.itc.rwth-aachen.de
-----Original Message-----
From: Stephen J. Turnbull <steve(a)turnbull.jp>
Sent: Monday, August 4, 2025 1:51 PM
To: Krinetzki, Stephan <Krinetzki(a)itc.rwth-aachen.de>
Cc: Mark Sapiro <mark(a)msapiro.net>; mailman-users(a)mailman3.org
Subject: RE: [MM3-users] Re: Held messages not delivered after approval
Krinetzki, Stephan writes:
> /opt/mailman/var/queue/bad:
> -rw-rw---- 1 mailman mailman 221723 Aug 1 17:26 1754061973.4885209+46b1ae3716439bf3ef98090296dfce0320fc3017.psv
This one might be spam, but it's weird that it managed to get pickled but can't be read.
> -rw-rw---- 1 mailman mailman 32912 Aug 2 00:00 1754085602.191851+3576cf33232db110fa7761233f67245564553652.psv
> -rw-rw---- 1 mailman mailman 416 Aug 2 00:00 1754085604.0204346+ad485da0c45cb0ad17a5dc42613c3eb3f313c20e.psv
> -rw-rw---- 1 mailman mailman 1407649 Aug 2 00:00 1754085623.275817+f23139c8127c454b4fe65453af3db18e558b0e87.psv
> -rw-rw---- 1 mailman mailman 1407634 Aug 2 00:02 1754085729.3529432+1643f907bac39a22a7d71e50b031c4f8a574082c.psv
I have no clue about these four (see below for comments on cron).
> /opt/mailman/var/queue/out:
Looks normal for your configuration.
> /opt/mailman/var/queue/shunt:
I don't understand why on August 1st you see shunts at intervals throughout the working day, then suddenly on the 2nd they all happen at midnight.
Have you tried "mailman unshunt"? If not what happens when you do?
If the shunts are happening because of the restart, then they should go through on unshunt. If they don't, there's some other problem.
You can also try renaming the .psvs to .pck, and check the metadata in the pickle for which queue to move it to. That's more risky, and you shouldn't try it if the output of "mailman qfile" isn't as expected.
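"mailman qfile" is the supported way to look inside these files; for scripting over many of them, here is a sketch that assumes the two-pickle layout visible in the switchboard.py excerpt quoted earlier (the pickled message first, then a pickled metadata dict):

```python
import pickle

def read_queue_entry(path):
    """Return (msg, data) from a Mailman queue file.

    Assumes the Switchboard.enqueue() layout: a pickled message
    object followed by a pickled metadata dictionary.  The metadata
    should indicate which queue the entry belongs to; inspect what
    your files actually carry before renaming any .psv back to .pck.
    """
    with open(path, 'rb') as fp:
        msg = pickle.load(fp)    # may be empty for the digest entries
        data = pickle.load(fp)   # metadata dict
    return msg, data
```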
> I don't see a problem here. But the timestamp seems to be related
> to the restart of mailman. Can I skip this in the logrotate?
As I mentioned before, there was (and may still be) a bug in Mailman's logging such that Mailman fails to reopen the logs, and typically after a couple of days you end up with a nameless open file collecting the logs and uselessly consuming more and more disk space. The restart is intended to work around this problem.
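If the nightly restart is dropped because of this, logrotate's copytruncate mode is one way to rotate without Mailman having to reopen its log files. A sketch of /etc/logrotate.d/mailman; the paths are assumptions for an /opt/mailman layout:

```
/opt/mailman/var/logs/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    # Copy then truncate in place, so Mailman's open file
    # descriptors stay valid and no restart is needed.  A few
    # lines written between the copy and the truncate can be lost.
    copytruncate
}
```

Losing a few log lines at rotation time is usually an acceptable trade against the nameless-open-file problem.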
> Btw: The crontab is the following:
> @daily mailman cd /opt/mailman; source /opt/mailman/mailman-venv/bin/activate; /opt/mailman/mailman-venv/bin/mailman digests --send > /dev/null 2>&1
The django-admin commands aren't directly related, I'm going to ignore them for now. The only thing I know for *sure* runs at midnight daily is "mailman digests --send". On my Debian Linode, the default (which I left alone) is for logrotate's cron job to live in /etc/cron.daily, which is run at 06:25 daily using "run-parts". (This is quite a common setup on Linux.) So we need to know where the logrotate job is specified (crontab, cron.d, or cron.daily) and at what time (@daily =
midnight) to be sure that the mailman restart is related to the bad and shunt queue files.
> So I checked the mailman.log:
>
> [2025-08-01 00:00:02 +0200] [324558] [INFO] Shutting down: Master > [2025-08-01 00:00:23 +0200] [567059] [INFO] Shutting down: Master > [2025-08-01 00:00:42 +0200] [567206] [INFO] Shutting down: Master > [2025-08-01 00:01:01 +0200] [567278] [INFO] Shutting down: Master > [2025-08-01 00:01:34 +0200] [567379] [INFO] Shutting down: Master > [2025-08-01 00:01:52 +0200] [567516] [INFO] Shutting down: Master > [2025-08-01 00:02:11 +0200] [567646] [INFO] Shutting down: Master
That is not normal. Your control process is crashing every 15-20 seconds. I think it probably is a problem with the digests, not with the restart. What appears to be happening is that the digest process gets triggered, it creates a message and queues it, then fails to send it so nastily that Mailman restarts (or stops and something like systemd restarts it). On restart, Mailman finds the digest message (probably in the out queue), tries to send it again, crashes again, and eventually decides that isn't going to work, sends it to bad, and stops crashing.
There's normally a lot more chatter at startup and shutdown, for example about runners being started. That's probably because you have that redirected to a separate log file, or maybe that information doesn't get output with a log level of "warn". Maybe the crash information is in the runner.log.
According to the config you posted earlier, you're sending most channels to separate log files. Have you checked any of them other than mailman.log and smtp.log? Also, note that httpd.log and error.log are normally used by Mailman core's gunicorn (ie, the REST API). I'm not sure what effect directing Mailman's error channel to error.log will have, but I suspect you could end up losing logs or having text from different sources mixed.
I haven't thought about it carefully, but I would have separate logs for bounces, subscriptions, smtp, and nntp because they are quite separate. Everything else would go into mailman.log, because that makes it easier to trace a single message through the whole process.
Until you know that you don't need it, I would have most channels at the info level. The debug level is almost never useful unless you're a developer trying to fix something (vs a troubleshooter trying to diagnose the problem). The logs compress very well (often 70% reduction), so it's generally a good idea to include the extra information at info level. Remember, the real explosion in logging is that outgoing mail gets logged up to 43k times per incoming post. Of course you can do quite a bit better if you can sacrifice the personalized footers, but most sites don't anymore because there are strict rules about convenience of unsubscription.
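The layout described here could be expressed in mailman.cfg roughly as follows; a sketch using the [logging.*] sections from Mailman core's configuration schema, with the file names as assumptions:

```ini
# Everything defaults to mailman.log at info level ...
[logging.template]
level: info
path: mailman.log

# ... except the genuinely separate traffic channels.
[logging.bounce]
path: bounce.log

[logging.subscribe]
path: subscribe.log

[logging.smtp]
path: smtp.log

[logging.nntp]
path: nntp.log
```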
> Well...i will stop the restart after the log rotate today.
You can do that if you want, but it's likely that you'll end up losing logs.
> > And for every one of those shunted messages there should be an
> > exception with traceback logged in mailman.log. Those tracebacks
> > should be helpful.
>
> If there were any. Maybe the "debug" level should be "info". But
> for which logs?
Setting the channel to "debug" gives maximum verbosity, and unhandled exceptions are logged at "warn" or "error" level (maximum severity).
> Maybe the restart at night after the lograte maybe the issue.
Not with Mailman bouncing up and down pretty much as fast as it can.
The scheduled restart can account for only one of them; the other 6 were caused by something else.
--
GNU Mailman consultant (installation, migration, customization)
Sirius Open Source https://www.siriusopensource.com/
Software systems consulting in Europe, North America, and Japan

[MM3-users] Re: MTA setup
by Arte Chambers
Update: After changing the DMARC settings, messages are now going through
to my inbox; however, they are still not being archived.
Thank you,
Paul 'Arte Chambers' Robey
502-408-6922
On Mon, Dec 23, 2024 at 12:01 PM Arte Chambers <paul.m.robey(a)gmail.com>
wrote:
> Output from mailman log after sending a test message to the list:
>
> Dec 23 16:52:23 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:52:23 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:52:23 2024 (684048) Could not archive the message with id <
> CAM05vAcLAhTzjdcFKsp6i0RSKpfhQ9H_KpOLJyVmNZy4KcaWzQ(a)mail.gmail.com>
> Dec 23 16:52:23 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message <
> CAM05vAcLAhTzjdcFKsp6i0RSKpfhQ9H_KpOLJyVmNZy4KcaWzQ(a)mail.gmail.com>)
> Dec 23 16:52:23 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:52:23 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:52:23 2024 (684058) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/urls:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
>
>
> I sent another test after changing the DMARC settings for the list
> mailman.log:
>
> Dec 23 16:58:48 2024 (684052) ACCEPT:
> <CAM05vAcs3mL2MMX_NNsrHHyLz8p_tTvCHRRiROCgt=U5Oi6x5w(a)mail.gmail.com>
> Dec 23 16:58:49 2024 (684063) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/urls:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:49 2024 (684063) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/urls:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:50 2024 (684058) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/urls:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:50 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:50 2024 (684048) Could not archive the message with id
> <CAPesOD2KavP948Oq6n9mvYF9WW3-sXRRTxAMm2roMgzL+=s8Dw(a)mail.gmail.com>
> Dec 23 16:58:50 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message
> <CAPesOD2KavP948Oq6n9mvYF9WW3-sXRRTxAMm2roMgzL+=s8Dw(a)mail.gmail.com>)
> Dec 23 16:58:50 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:58:50 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:58:50 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:50 2024 (684048) Could not archive the message with id
> <CAM05vAcN5RZ3Mzg-xXd-u=X7_inDULsBR8A=ehPwk5oy+y+ZLg(a)mail.gmail.com>
> Dec 23 16:58:50 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message
> <CAM05vAcN5RZ3Mzg-xXd-u=X7_inDULsBR8A=ehPwk5oy+y+ZLg(a)mail.gmail.com>)
> Dec 23 16:58:50 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:58:50 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:58:50 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:50 2024 (684048) Could not archive the message with id
> <CAM05vAfoHZmvu42wojk5SpCeynQs4rM01iKw+U5KpWMChHXGWw(a)mail.gmail.com>
> Dec 23 16:58:50 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message
> <CAM05vAfoHZmvu42wojk5SpCeynQs4rM01iKw+U5KpWMChHXGWw(a)mail.gmail.com>)
> Dec 23 16:58:50 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:58:50 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:58:50 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:50 2024 (684048) Could not archive the message with id
> <CAM05vAdx7VGPpApAEtF_J8RW2eXBoS6D-_6BCp8wKGUF5_Cyyw(a)mail.gmail.com>
> Dec 23 16:58:50 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message
> <CAM05vAdx7VGPpApAEtF_J8RW2eXBoS6D-_6BCp8wKGUF5_Cyyw(a)mail.gmail.com>)
> Dec 23 16:58:50 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:58:50 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:58:50 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:50 2024 (684048) Could not archive the message with id
> <CAPesOD1y0fEaG=GFodz=Lgs1m_hqRErPnQcpXCNtKsBbJ=BJ7Q(a)mail.gmail.com>
> Dec 23 16:58:50 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message
> <CAPesOD1y0fEaG=GFodz=Lgs1m_hqRErPnQcpXCNtKsBbJ=BJ7Q(a)mail.gmail.com>)
> Dec 23 16:58:50 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:58:50 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:58:51 2024 (684058) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/urls:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:51 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:51 2024 (684048) Could not archive the message with id
> <CAM05vAdLwKNNLgFGMKPm_yv6YykcL=SynJSDYf70hyGufryyCw(a)mail.gmail.com>
> Dec 23 16:58:51 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message
> <CAM05vAdLwKNNLgFGMKPm_yv6YykcL=SynJSDYf70hyGufryyCw(a)mail.gmail.com>)
> Dec 23 16:58:51 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:58:51 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:58:51 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:51 2024 (684048) Could not archive the message with id
> <CAM05vAcG99mHboPkZ82q=+d+uB3L3+iEmhwSCNAVFwSua80LoQ(a)mail.gmail.com>
> Dec 23 16:58:51 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message
> <CAM05vAcG99mHboPkZ82q=+d+uB3L3+iEmhwSCNAVFwSua80LoQ(a)mail.gmail.com>)
> Dec 23 16:58:51 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:58:51 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:58:51 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:51 2024 (684048) Could not archive the message with id
> <CAPesOD1ZJG3zyEagCqYd+nce76g_hbwJK9wQF51Pi6iyJhBKvA(a)mail.gmail.com>
> Dec 23 16:58:51 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message
> <CAPesOD1ZJG3zyEagCqYd+nce76g_hbwJK9wQF51Pi6iyJhBKvA(a)mail.gmail.com>)
> Dec 23 16:58:51 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:58:51 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:58:51 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:51 2024 (684048) Could not archive the message with id
> <CAM05vAdR3n-zhrLHCPhzi4xHOkLsmTUUgXnwgSwJM1oao2j3tg(a)mail.gmail.com>
> Dec 23 16:58:51 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message
> <CAM05vAdR3n-zhrLHCPhzi4xHOkLsmTUUgXnwgSwJM1oao2j3tg(a)mail.gmail.com>)
> Dec 23 16:58:51 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:58:51 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:58:51 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:51 2024 (684048) Could not archive the message with id
> <CAM05vAcLAhTzjdcFKsp6i0RSKpfhQ9H_KpOLJyVmNZy4KcaWzQ(a)mail.gmail.com>
> Dec 23 16:58:51 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message
> <CAM05vAcLAhTzjdcFKsp6i0RSKpfhQ9H_KpOLJyVmNZy4KcaWzQ(a)mail.gmail.com>)
> Dec 23 16:58:51 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:58:51 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:58:51 2024 (684048) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/archive:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
> Dec 23 16:58:51 2024 (684048) Could not archive the message with id
> <CAM05vAcs3mL2MMX_NNsrHHyLz8p_tTvCHRRiROCgt=U5Oi6x5w(a)mail.gmail.com>
> Dec 23 16:58:51 2024 (684048) archiving failed, re-queuing (mailing-list
> testing.list.louisvillecommunitygrocery.com, message
> <CAM05vAcs3mL2MMX_NNsrHHyLz8p_tTvCHRRiROCgt=U5Oi6x5w(a)mail.gmail.com>)
> Dec 23 16:58:51 2024 (684048) Exception in the HyperKitty archiver:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> Dec 23 16:58:51 2024 (684048) Traceback (most recent call last):
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 158, in _archive_message
> url = self._send_message(mlist, msg)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File
> "/opt/mailman/venv/lib/python3.12/site-packages/mailman_hyperkitty/__init__.py",
> line 228, in _send_message
> raise ValueError(result.text)
> ValueError:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
>
> Dec 23 16:58:51 2024 (684058) HyperKitty failure on
> http://127.0.0.1:8000/archives/api/mailman/urls:
> <!doctype html>
> <html lang="en">
> <head>
> <title>Bad Request (400)</title>
> </head>
> <body>
> <h1>Bad Request (400)</h1><p></p>
> </body>
> </html>
> (400)
>
> Thank you,
> Paul 'Arte Chambers' Robey
> 502-408-6922
>
>
> On Sun, Dec 22, 2024 at 11:31 PM Mark Sapiro <mark(a)msapiro.net> wrote:
>
>> On 12/22/24 20:10, Arte Chambers via Mailman-users wrote:
>> > I'm not sure how to check Mailman's shunt queue.
>>
>> `ls var/queue/shunt`
>>
>> > I'm not seeing any errors in mailman logs
>> >
>> > There are several .pck files in Mailman's
>> > var/archives/hyperkitty/spool/
>>
>> These are messages that failed to archive. There should be messages in
>> mailman.log about these failures indicating what the issue is.
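An aside on the `.pck` files mentioned above: they are Python pickle streams, so they can be inspected directly. This is a hedged sketch, since the exact on-disk layout is a Mailman/HyperKitty internal that can vary by version; the demo file written here is fabricated so the snippet is self-contained.

```python
# Hedged sketch: Mailman queue/spool *.pck files are Python pickles.
# The real stream layout is an internal detail; this just unpickles
# every object it finds. The demo file below is fabricated.
import email.message
import os
import pickle
import tempfile

def dump_pck(path):
    """Yield every pickled object found in a .pck file."""
    with open(path, "rb") as fp:
        while True:
            try:
                yield pickle.load(fp)
            except EOFError:
                break

# Build a stand-in spool file so the sketch is runnable anywhere.
msg = email.message.Message()
msg["Subject"] = "test archive failure"
path = os.path.join(tempfile.mkdtemp(), "demo.pck")
with open(path, "wb") as fp:
    pickle.dump(msg, fp)                                 # the queued message
    pickle.dump({"listid": "testing.example.com"}, fp)   # its metadata

objs = list(dump_pck(path))
print(objs[0]["Subject"], objs[1]["listid"])
```

Against a real spool you would pass the actual file path instead of the fabricated one.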
>>
>> > I can send email from the server using mail utilities as long as the
>> > "from email address" contains the server's domain name. I also notice that in
>> > this MM3-Users list the "from" email shows senders(a)emailaddress.com VIA
>> > mailman3.org. I'm wondering if I've missed something that would allow
>> > my server to behave this way.
>>
>>
>> In the list's Settings -> DMARC Mitigations set DMARC mitigation action
>> to Replace From: with list address and set DMARC Mitigate
>> unconditionally to Yes.
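The same change can also be expressed against Mailman core's REST API instead of the Postorius UI. A hedged sketch follows: the setting names match core's list configuration resource, but the list id is a placeholder and no request is actually sent here, only the PATCH target and payload are built.

```python
# Hedged sketch of the DMARC settings change as a REST PATCH.
# "munge_from" corresponds to "Replace From: with list address";
# the list id below is a placeholder, and nothing is sent on the wire.
import json

def dmarc_munge_patch(list_id):
    """Return (url_path, payload) for enabling unconditional From: munging."""
    payload = {
        "dmarc_mitigate_action": "munge_from",
        "dmarc_mitigate_unconditionally": True,
    }
    return f"/3.1/lists/{list_id}/config", payload

path, payload = dmarc_munge_patch("testing.list.example.com")
print(path)
print(json.dumps(payload, sort_keys=True))
```

In practice you would send this with any HTTP client authenticated with core's REST credentials.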
>>
>> --
>> Mark Sapiro <mark(a)msapiro.net> The highway is for gamblers,
>> San Francisco Bay Area, California better use your sense - B. Dylan
>>
>> _______________________________________________
>> Mailman-users mailing list -- mailman-users(a)mailman3.org
>> To unsubscribe send an email to mailman-users-leave(a)mailman3.org
>> https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
>> Archived at:
>> https://lists.mailman3.org/archives/list/mailman-users@mailman3.org/message…
>>
>> This message sent to paul.m.robey(a)gmail.com
>>
>
9 months, 3 weeks

[MM3-users] Re: A little stuck with installation of MM3 - ModuleNotFoundError: No module named 'flufl.lock'
by Odhiambo Washington
On Sun, 26 Jul 2020 at 00:17, Mark Sapiro <mark(a)msapiro.net> wrote:
> On 7/25/20 12:54 PM, Odhiambo Washington wrote:
> >
> > Finally, I have adapted your init script to get some rudimentary one
> that I
> > could use on FreeBSD.
> > I have to change /opt/mailman to be owned by mailman3:mailman3 (because I
> > have "live" MLs on this server, using mailman-2.1.34.
> > I am not sure if they can co-exist. I suppose they could, but what might
> be
> > the security implication, if any??
>
> I manage more than one server that supports both Mailman 2.1 and Mailman
> 3 running as user `mailman`. I'm not aware of any security issues.
>
>
> > This is what the init script looks like (rudimentary!!):
> > (venv) [root@gw /usr/local/etc/rc.d]# less mailman3
> > ### BEGIN INIT INFO
> > # Provides: GNU Mailman
> > # Short-Description: Mailman3 Service
> > # Description: service control for Mailman3
> > ### END INIT INFO
> >
> >
> PATH=/opt/mailman/mm/bin:/opt/mailman/mm/venv/bin:/usr/sbin:/usr/bin:/bin:/sbin:
> > DESC="GNU Mailman service"
> > DAEMON=/opt/mailman/mm/bin/mailman
> > NAME=mailman
> > USER=mailman3
> > GROUP=mailman3
> >
> > # Needed by click
> > export LANG=en_US.UTF-8
> >
> > # Exit if the package is not installed
> > [ -x "$DAEMON" ] || exit 0
> >
> > # Load the VERBOSE setting and other rcS variables
> > #. /lib/init/vars.sh
> >
> > # Define LSB log_* functions.
> > # Depend on lsb-base (>= 3.2-14) to ensure that this file is present
> > # and status_of_proc is working.
> > #. /lib/lsb/init-functions
> >
> > case "$1" in
> > start)
> > [ "$VERBOSE" != no ] && echo "Starting $DESC" "$NAME"
> > # use --force to remove a stale lock.
> > /usr/local/bin/sudo -u $USER $DAEMON start --force
>
> I don't recommend using --force in init scripts. It *shouldn't* matter
> because even with --force the master shouldn't break the lock if the pid
> that set it still exists, but I prefer not to use it.
>
>
> > ;;
> > stop)
> > [ "$VERBOSE" != no ] && echo "Stopping $DESC" "$NAME"
> > /usr/local/bin/sudo -u $USER $DAEMON stop
> > ;;
> > status)
> > /usr/local/bin/sudo -u $USER $DAEMON status
> > ;;
> > reopen)
> > /usr/local/bin/sudo -u $USER $DAEMON reopen
> > ;;
> > restart)
> > [ "$VERBOSE" != no ] && echo "Restarting $DESC" "$NAME"
> > /usr/local/bin/sudo -u $USER $DAEMON restart
> > ;;
> > *)
> > echo "Usage: $0 {start|stop|status|reopen|restart}" >&2
> > exit 3
> > ;;
> > esac
> >
> > It does start mailman3 for sure, but also complains after a few seconds
> > with the message:
> >
> > (venv) [root@gw /usr/local/etc/rc.d]#
> >
> */opt/mailman/mm/venv/lib/python3.7/site-packages/mailman-3.3.2b1-py3.7.egg/mailman/rest/wsgiapp.py:180:
> > DeprecatedWarning: Call to deprecated function __init__(...). API class
> may
> > be removed in a future release, use falcon.App instead.*
> > * **kws)*
>
>
> Hmm... strange, My various MM 3 installs are running falcon 1.4.0, 2.0.0
> and 3.0.0a1 and I don't see this warning, but it is only a deprecation
> warning. It doesn't mean it won't work.
>
>
> > But:
> >
> > (venv) [root@gw /usr/local/etc/rc.d]# ps ax | grep mailman
> ...
>
> It seems Mailman core and the rest runner and its workers are all running.
>
>
> >
> > Assuming that I am using mod_wsgi with Apache, and that I have configured
> > apache right using
> >
> https://wiki.list.org/DOC/Mailman%203%20installation%20experience?action=At…
> > ..
> > should I be able to access the MM3 web UI??
>
> Yes, I think so.
>
>
> > At that point, I am feeling somewhat confused as to what to try next.
> >
> > (venv) [root@gw /opt/mailman/mm/var/logs]# less mailman.log
> > Jul 25 22:53:16 2020 (40670) Master started
> > Jul 25 22:53:18 2020 (42036) bounces runner started.
> > Jul 25 22:53:18 2020 (45205) out runner started.
> > Jul 25 22:53:19 2020 (41191) archive runner started.
> > Jul 25 22:53:19 2020 (46670) retry runner started.
> > Jul 25 22:53:19 2020 (44567) nntp runner started.
> > Jul 25 22:53:19 2020 (43069) in runner started.
> > Jul 25 22:53:19 2020 (46960) virgin runner started.
> > Jul 25 22:53:20 2020 (45720) pipeline runner started.
> > Jul 25 22:53:20 2020 (47326) digest runner started.
> > Jul 25 22:53:20 2020 (43755) lmtp runner started.
> > Jul 25 22:53:20 2020 (45922) rest runner started.
> > [2020-07-25 22:53:20 +0300] [45922] [INFO] Starting gunicorn 20.0.4
> > [2020-07-25 22:53:20 +0300] [45922] [INFO] Listening at:
> > http://127.0.0.1:8001 (45922)
> > [2020-07-25 22:53:20 +0300] [45922] [INFO] Using worker: sync
> > [2020-07-25 22:53:20 +0300] [54732] [INFO] Booting worker with pid: 54732
> > [2020-07-25 22:53:20 +0300] [55467] [INFO] Booting worker with pid: 55467
> > Jul 25 22:53:21 2020 (42743) command runner started.
>
>
> The above is all fine, but it is just Mailman core.
>
>
> > How do I access the web UI for MM3 now?
>
>
> The Apache config points to /opt/mailman/mm/wsgi.py, a sample of which
> is at
> <
> https://wiki.list.org/DOC/Mailman%203%20installation%20experience?action=At…
> >.
> Do you have that?
>
Yes, I have that file.
> If you installed the Apache config literally, the location for Mailman 3
> is defined by
> <
> https://wiki.list.org/DOC/Mailman%203%20installation%20experience?action=At…
> >
> and the URL would be http(s)://your.server/mm3 - have you tried that?
>
>
There is still some confusion on my part about the directives in the file
so allow me to seek some clarifications.
The following are the contents of a file I have created and placed in my
Apache Includes/ directory.
I was hoping that with it, I can now access http://lists.my.server/mm3 and
get the UI.
<CUT>
# Global section
WSGIDaemonProcess mailman-web display-name=mailman-web
maximum-requests=1000 umask=0002 user=mailman3 \
group=mailman3
python-path=/opt/mailman/mm/venv/lib/python3.7/site-packages:/opt/mailman/mm/venv/lib/python3.7
\
python-home=/opt/mailman/mm/venv home=/opt/mailman/mm/var
WSGIRestrictSignal Off
<VirtualHost *:80>
ServerName lists.my.server
ServerAdmin odhiambo(a)gmail.com
# (I'm not sure that WSGIRestrictSignal Off is required, but it was in the
# provided example so I kept it. I also made changes to WSGIDaemonProcess
# based on my own mod_wsgi experience elsewhere.)
ErrorLog /var/log/myserver-error.log
LogLevel debug
# This goes in the VirtualHost block for the domain.
# Mailman 3 stuff
Alias /static "/var/spool/mailman-web/static"
# <----- Where is this directory supposed to be, and what/who creates it,
# with what permissions?
<Directory "/var/spool/mailman-web/static">
Require all granted
</Directory>
WSGIScriptAlias /mm3 /opt/mailman/mm/wsgi.py
<Directory "/opt/mailman/mm/">
<Files wsgi.py>
Order deny,allow
Allow from all
Require all granted
</Files>
WSGIProcessGroup mailman-web
</Directory>
</VirtualHost>
</CUT-HERE>
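On the /static question embedded in the config above: that directory is Django's STATIC_ROOT. It is populated by running `manage.py collectstatic`, not created by Apache, and must be readable by the Apache/mod_wsgi user. A minimal sketch under those assumptions, with paths copied from the thread:

```python
# Hedged sketch: the Alias target is Django's STATIC_ROOT, filled by
# `python manage.py collectstatic`. Paths mirror the config in this
# thread and are assumptions about this particular install.
import pathlib

STATIC_ROOT = "/var/spool/mailman-web/static"
STATIC_URL = "/static/"

# After collectstatic you would expect subtrees like admin/css/ in there
# (django.contrib.admin ships static assets):
expected = pathlib.PurePosixPath(STATIC_ROOT) / "admin" / "css"
print(expected)
```

If the directory is missing or empty, running collectstatic as the mailman-web user is the usual first step.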
I end up with an "Internal server error" and from the error log I see:
[Sun Jul 26 12:04:43.444127 2020] [wsgi:info] [pid 6444] [remote
197.232.81.246:53383] mod_wsgi (pid=6444, process='mailman-web',
application='lists.my.server|/mm3'): Loading Python script file
'/opt/mailman/mm/wsgi.py'.
[Sun Jul 26 12:04:44.091922 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] mod_wsgi (pid=6444): Failed to exec Python script
file '/opt/mailman/mm/wsgi.py'.
[Sun Jul 26 12:04:44.092006 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] mod_wsgi (pid=6444): Exception occurred processing
WSGI script '/opt/mailman/mm/wsgi.py'.
[Sun Jul 26 12:04:44.092940 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] Traceback (most recent call last):
[Sun Jul 26 12:04:44.093019 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] File "/opt/mailman/mm/wsgi.py", line 38, in <module>
[Sun Jul 26 12:04:44.093029 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] application = get_wsgi_application()
[Sun Jul 26 12:04:44.093048 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] File
"/opt/mailman/mm/venv/lib/python3.7/site-packages/Django-3.0.8-py3.7.egg/django/core/wsgi.py",
line 12, in get_wsgi_application
[Sun Jul 26 12:04:44.093057 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] django.setup(set_prefix=False)
[Sun Jul 26 12:04:44.093071 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] File
"/opt/mailman/mm/venv/lib/python3.7/site-packages/Django-3.0.8-py3.7.egg/django/__init__.py",
line 19, in setup
[Sun Jul 26 12:04:44.093090 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] configure_logging(settings.LOGGING_CONFIG,
settings.LOGGING)
[Sun Jul 26 12:04:44.093108 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] File
"/opt/mailman/mm/venv/lib/python3.7/site-packages/Django-3.0.8-py3.7.egg/django/conf/__init__.py",
line 76, in __getattr__
[Sun Jul 26 12:04:44.093117 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] self._setup(name)
[Sun Jul 26 12:04:44.093130 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] File
"/opt/mailman/mm/venv/lib/python3.7/site-packages/Django-3.0.8-py3.7.egg/django/conf/__init__.py",
line 63, in _setup
[Sun Jul 26 12:04:44.093138 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] self._wrapped = Settings(settings_module)
[Sun Jul 26 12:04:44.093158 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] File
"/opt/mailman/mm/venv/lib/python3.7/site-packages/Django-3.0.8-py3.7.egg/django/conf/__init__.py",
line 142, in __init__
[Sun Jul 26 12:04:44.093166 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] mod =
importlib.import_module(self.SETTINGS_MODULE)
[Sun Jul 26 12:04:44.093179 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] File
"/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
[Sun Jul 26 12:04:44.093187 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] return _bootstrap._gcd_import(name[level:],
package, level)
[Sun Jul 26 12:04:44.093210 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] File "<frozen importlib._bootstrap>", line 994, in
_gcd_import
[Sun Jul 26 12:04:44.093224 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] File "<frozen importlib._bootstrap>", line 971, in
_find_and_load
[Sun Jul 26 12:04:44.093239 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] File "<frozen importlib._bootstrap>", line 953, in
_find_and_load_unlocked
[Sun Jul 26 12:04:44.093265 2020] [wsgi:error] [pid 6444] [remote
197.232.81.246:53383] ModuleNotFoundError: No module named 'settings'
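The final `ModuleNotFoundError: No module named 'settings'` means the module named by DJANGO_SETTINGS_MODULE could not be imported from the wsgi process's sys.path. A hedged sketch of the relevant top of a wsgi.py for this layout; the module name `settings` and the /opt/mailman/mm path are assumptions drawn from this thread, not the actual file:

```python
# Hedged sketch: ensure the directory holding settings.py is importable
# before Django reads DJANGO_SETTINGS_MODULE. Paths/names are assumptions
# based on the /opt/mailman/mm layout discussed above.
import os
import sys

BASE_DIR = "/opt/mailman/mm"
if BASE_DIR not in sys.path:
    sys.path.insert(0, BASE_DIR)  # directory that contains settings.py

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")

# The real wsgi.py would then call
# django.core.wsgi.get_wsgi_application() to build `application`.
print(sys.path[0], os.environ["DJANGO_SETTINGS_MODULE"])
```

Note the traceback also shows importlib running from /usr/local/lib/python3.6 while Django sits under a python3.7 venv, which is worth checking separately.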
--
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE
+254 7 3200 0004/+254 7 2274 3223
"Oh, the cruft.", grep ^[^#] :-)
5 years, 2 months

[MM3-users] Re: error changed after restart
by Guillermo Hernandez (Oldno7)
On 7/2/21 19:04, Abhilash Raj wrote:
>
> On Sun, Feb 7, 2021, at 1:07 AM, Guillermo Hernandez (Oldno7) via Mailman-users wrote:
>> On 6/2/21 21:19, Abhilash Raj wrote:
>>>
>>> On Sat, Feb 6, 2021 at 19:44, Guillermo Hernandez (Oldno7) via
>>> Mailman-users <mailman-users(a)mailman3.org> wrote:
>>>> On 6/2/21 18:08, Abhilash Raj wrote:
>>>>
>>>> On Sat, Feb 6, 2021, at 3:04 AM, Guillermo Hernandez (Oldno7) via
>>>> Mailman-users wrote:
>>>>
>>>> I restarted the server and the error changed. Now the log
>>>> shows "KeyError: 'subscription_mode'":
>>>>
>>>> Did you also restart Mailman Core after the upgrade?
>>>>
>>>> Yes, indeed: I stopped mailman core and all the processes related.
>>>> Did the upgrades. Started all again. Find the errors in the web user
>>>> interface. Stopped all again. Looked for errors in the log. Restarted
>>>> the complete server. Found the second error that this second mail is
>>>> about to. It happens when, in the main page that shows all the lists
>>>> you click to see one of them. It seems to me that it has been
>>>> database structure changes in django that the upgrade is not aware...
>>>> but it's a very long shot from my side.
>>> `subscription_mode` was added in Mailman Core 3.3.2 and it is actually
>>> a derived attribute and not stored in the database. Mailman's API should be
>>> returning this attribute for each Member, but for some reason it seems
>>> to me like it isn't doing that, even though you do have Mailman 3.3.3
>>> running like you said.
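To illustrate why this surfaces as an AttributeError rather than a blank value: mailmanclient resolves attributes from the JSON the core API returned, so a field an older (or misbehaving) core never sent raises instead of defaulting. `ToyMember` below is a hypothetical stand-in for that behavior, not the real mailmanclient class:

```python
# Hypothetical stand-in showing attribute lookup backed by a REST
# payload: missing fields raise, mirroring the traceback in this thread.
class ToyMember:
    def __init__(self, rest_data):
        self._data = rest_data

    def __getattr__(self, name):
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(
                f"{self.__class__.__name__!r} object has no attribute {name!r}")

old_core = ToyMember({"email": "a@example.com"})           # pre-3.3.2 response
new_core = ToyMember({"email": "a@example.com",
                      "subscription_mode": "as_address"})  # 3.3.2+ response

print(getattr(old_core, "subscription_mode", "missing"))   # -> missing
print(new_core.subscription_mode)                          # -> as_address
```

Defensive callers can use getattr with a default, which is effectively what newer Postorius releases do.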
>>> If you have Curl installed, can you send me the output of:
>>> $ curl -u <user>:<pass> "http://localhost:8001/3.1/members?count=5&page=1"
>> This is the output you asked for (it's the same you can see when you try
>> to interact with one list):
>>
>> /usr/local/mailman3 # curl -u XXXXXX:XXXXXX
>> http://localhost:8001/3.1/members?count=5&page=1
>> /usr/local/mailman3 # <html>
>> <head>
>> <title>Internal Server Error</title>
>> </head>
>> <body>
>> <h1><p>Internal Server Error</p></h1>
>>
>> </body>
>> </html>
>>
>> No log entry has been produced...
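A hedged aside on the command as actually run above: the URL there is unquoted, so the shell treats `&` as its background operator and curl never receives the `page=1` part of the query string. A small illustration of how the line splits:

```python
# Illustration of the shell-quoting pitfall in the unquoted curl call:
# everything after `&` is cut off before curl runs. shlex does not
# interpret `&` itself, so the split on `&` is done manually here.
import shlex

cmd = 'curl -u user:pass http://localhost:8001/3.1/members?count=5&page=1'
fg, _, rest = cmd.partition('&')
print(shlex.split(fg)[-1])   # the URL curl actually receives
print(rest)                  # left behind for the shell: page=1
```

Quoting the whole URL (or escaping the `&`) avoids the truncation.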
> This is weird, if you have working Core, then there should be *some* json returned
> from the above command. Do you have the Gunicorn running?
I have the uwsgi approach running...
> What is the
> output of `ps -ef | grep mailman`?
none
but if I do an ps ax | grep mailman it shows
/usr/local/mailman3 # ps ax | grep mailman
26803 - IsJ 0:00.87 /usr/local/bin/python3.7 /usr/local/bin/master
--force -C /usr/local/mailman3/var/et
26834 - SJ 0:07.57 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26835 - IJ 0:05.84 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26836 - SJ 0:07.16 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26837 - SJ 0:08.74 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26838 - SJ 0:03.16 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26839 - SJ 0:07.07 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26840 - SJ 0:08.12 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26841 - SJ 0:09.07 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26842 - SJ 0:07.45 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26843 - IJ 0:00.95 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26844 - SJ 0:07.27 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26845 - SJ 0:07.69 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26859 - IJ 0:01.01 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
26860 - IJ 0:05.75 /usr/local/bin/python3.7 /usr/local/bin/runner -C
/usr/local/mailman3/var/etc/mailma
50153 - IsJ 0:00.16 /usr/local/bin/uwsgi --ini
/usr/local/mailman3/uwsgi.ini
50481 2 S+J 0:00.00 grep mailman
26808 1 SJ 0:02.09 /usr/local/bin/uwsgi --ini
/usr/local/mailman3/uwsgi.ini
26832 1 IJ 0:00.00 /usr/local/bin/uwsgi --ini
/usr/local/mailman3/uwsgi.ini
> Are you able to run `mailman members` command to list the members of a list?
Yes. All the commands are working as expected.
> Also, how did you actually install Mailman?
Using pip. Here is a detailled post of how I did put all together:
https://forums.FreeBSD.org/threads/mailman-3.61050/post-488128
Thanks for your support
>
> Abhilash.
>
>> TIA.
>>
>>
>>
>>
>>> It should ideally return an output that looks something like shown
>>> here[1]. You
>>> can put the username/password of Core's API server in the above command.
>>> [1]:
>>> https://docs.mailman3.org/projects/mailman/en/latest/src/mailman/rest/docs/…
>>>
>>> Abhilash
>>>> I'm using sqlite as django database and mysql for mailman.
>>>>
>>>> ERROR 2021-02-06 11:47:49,015 26798 django.request Internal Server Error:
>>>> /mailman3/mailman3/lists/name_and_domain.of.the.list
>>>> Traceback (most recent call last):
>>>>   File "/usr/local/lib/python3.7/site-packages/mailmanclient/restbase/base.py", line 119, in __getattr__
>>>>     return self._get(name)
>>>>   File "/usr/local/lib/python3.7/site-packages/mailmanclient/restbase/base.py", line 86, in _get
>>>>     raise KeyError(key)
>>>> KeyError: 'subscription_mode'
>>>>
>>>> During handling of the above exception, another exception occurred:
>>>>
>>>> Traceback (most recent call last):
>>>>   File "/usr/local/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
>>>>     response = get_response(request)
>>>>   File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
>>>>     response = self.process_exception_by_middleware(e, request)
>>>>   File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
>>>>     response = wrapped_callback(request, *callback_args, **callback_kwargs)
>>>>   File "/usr/local/lib/python3.7/site-packages/django/views/generic/base.py", line 71, in view
>>>>     return self.dispatch(request, *args, **kwargs)
>>>>   File "/usr/local/lib/python3.7/site-packages/postorius/views/generic.py", line 74, in dispatch
>>>>     return super(MailingListView, self).dispatch(request, *args, **kwargs)
>>>>   File "/usr/local/lib/python3.7/site-packages/django/views/generic/base.py", line 97, in dispatch
>>>>     return handler(request, *args, **kwargs)
>>>>   File "/usr/local/lib/python3.7/site-packages/postorius/views/list.py", line 295, in get
>>>>     member.subscription_mode ==
>>>>   File "/usr/local/lib/python3.7/site-packages/mailmanclient/restbase/base.py", line 124, in __getattr__
>>>>     self.__class__.__name__, name))
>>>> AttributeError: 'Member' object has no attribute 'subscription_mode'
>>>>
>>>> ***********************
>>>> Some info about the installed versions via pip list:
>>>> pip list | grep django
>>>> django-allauth 0.44.0
>>>> django-appconf 1.0.4
>>>> django-compressor 2.4
>>>> django-extensions 3.1.0
>>>> django-gravatar2 1.4.4
>>>> django-haystack 3.0
>>>> django-mailman3 1.3.5
>>>> django-picklefield 3.0.1
>>>> django-q 1.3.4
>>>> djangorestframework 3.12.2 pip list | grep mailman
>>>> django-mailman3 1.3.5
>>>> mailman 3.3.3
>>>> mailman-hyperkitty 1.1.0
>>>> mailmanclient 3.3.2 pip list | grep postorius
>>>> postorius 1.3.4 On 6/2/21 11:12,
>>>> Guillermo Hernandez (Oldno7) via Mailman-users wrote:
>>>>
>>>> I've just upgraded my Mailman 3 installation following Mr. Sapiro's
>>>> advice: did a pip install --upgrade django-mailman3 hyperkitty mailman
>>>> mailmanclient mailman-hyperkitty postorius. I did a "python3 manage.py
>>>> migrate" after, too. And all seemed to run well. All the lists showed
>>>> in Postorius via web, but when I try to access one of them the browser
>>>> shows an error. In the log you can see:
>>>> *-*-*-*-*-*-*
>>>> Traceback (most recent call last):
>>>>   File "/usr/local/lib/python3.7/site-packages/mailmanclient/restbase/base.py", line 119, in __getattr__
>>>>     return self._get(name)
>>>>   File "/usr/local/lib/python3.7/site-packages/mailmanclient/restbase/base.py", line 86, in _get
>>>>     raise KeyError(key)
>>>> KeyError: 'get_requests_count'
>>>> .. (And after all the traceback lines)
>>>> AttributeError: 'MailingList' object has no attribute 'get_requests_count'
>>>> *-*-*-*-*-*-*-*
>>>> The lists seem to be distributing messages well, but I cannot access
>>>> the web administration (django/postorius). Can anyone point me in the
>>>> right direction to solve this, please?
>>>> _______________________________________________
>>>> Mailman-users mailing list -- mailman-users(a)mailman3.org
>>>> To unsubscribe send an email to mailman-users-leave(a)mailman3.org
>>>> https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
>>
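As an aside for readers hitting the same KeyError/AttributeError pair: it is the signature of mailmanclient's lazy REST-attribute lookup asking Core for a field it does not expose, typically a version mismatch between Postorius/mailmanclient and Mailman Core. A minimal sketch of that pattern (NOT mailmanclient's actual code, just an illustration of the mechanism):

```python
# Minimal sketch of lazy attribute lookup over a REST payload: a missing
# key raises KeyError internally, which is re-raised as AttributeError --
# the same chain seen in the tracebacks above when Postorius expects a
# field that the installed Core version does not send.
class RestObject:
    def __init__(self, data):
        self._data = data

    def _get(self, key):
        # Unknown fields raise KeyError, as in restbase/base.py.
        if key not in self._data:
            raise KeyError(key)
        return self._data[key]

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails.
        try:
            return self._get(name)
        except KeyError:
            raise AttributeError("'{}' object has no attribute '{}'".format(
                self.__class__.__name__, name))


member = RestObject({"address": "user@example.com"})
print(member.address)  # -> user@example.com
```

Accessing `member.subscription_mode` on this object raises the same AttributeError shape as above, which is why upgrading the pieces in lockstep resolves it.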
4 years, 8 months

[MM3-users] Re: Hyperkitty CPU usage
by Abhilash Raj
On Sat, Apr 27, 2019, at 6:22 PM, Alain Kohli wrote:
> I'm running a custom image which is based on an older version of the one
> here: https://github.com/maxking/docker-mailman. I attached it below.
> But I separated postorius and hyperkitty, so hyperkitty is running in
> its own container. I'm deploying the image with a plain 'docker run'
> behind nginx. I made fulltext_index persistent now, but it didn't get
> populated with anything yet. I don't really have an error traceback
> because there is never an error thrown. The only thing with some content
> is uwsgi-error.log, which you can find below. I'm also still getting the
> "A string literal cannot contain NUL (0x00) characters." messages. I
> also noticed that it takes incredibly long for the webinterface to load
> (several minutes) even though there doesn't seem to be any process
> consuming notable resources apart from the minutely job.
>
> Funnily enough, I have the exact same image deployed on a second server
> as well for testing. On that one everything works fine. The only
> difference is that on the problematic one I have a lot more mailing
> lists/archives and that I imported them from mailman2. Could something
> have gone wrong during the import? I used the regular hyperkitty_import
> command.
Yes, this is because `whoosh`, the library used by default for full-text
indexing, is a pure Python implementation and quite slow on busy lists.
We do support more backends though; see [1] for a list of all the supported
search backends. Something like Xapian (C++) or Elasticsearch/Solr (Java)
should be much better in terms of performance.
[1]: https://django-haystack.readthedocs.io/en/master/backend_support.html
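For example, switching Haystack to an Elasticsearch backend is a settings.py change along these lines. The URL, index name, and engine class here are assumptions, not from this thread; pick the engine matching your installed Elasticsearch version from the backend-support table in [1]:

```python
# Sketch of a settings.py fragment swapping Whoosh for Elasticsearch in
# django-haystack. URL, index name, and engine class are assumptions;
# match the engine to your Elasticsearch version per the haystack docs.
HAYSTACK_CONNECTIONS = {
    "default": {
        "ENGINE": "haystack.backends.elasticsearch2_backend.Elasticsearch2SearchEngine",
        "URL": "http://127.0.0.1:9200/",
        "INDEX_NAME": "hyperkitty",
    },
}
```

After changing the backend, the index has to be rebuilt with `python manage.py rebuild_index`.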
>
> uwsgi-error.log:
>
> *** Starting uWSGI 2.0.18 (64bit) on [Sat Apr 27 22:50:17 2019] ***
> compiled with version: 6.4.0 on 27 April 2019 22:48:42
> os: Linux-4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19)
> nodename: hyperkitty.docker
> machine: x86_64
> clock source: unix
> detected number of CPU cores: 4
> current working directory: /home/hyperkitty
> detected binary path: /usr/local/bin/uwsgi
> !!! no internal routing support, rebuild with pcre support !!!
> setgid() to 82
> setuid() to 82
> chdir() to /home/hyperkitty
> your memory page size is 4096 bytes
> detected max file descriptor number: 1048576
> lock engine: pthread robust mutexes
> thunder lock: disabled (you can enable it with --thunder-lock)
> uwsgi socket 0 bound to TCP address 0.0.0.0:8081 fd 8
> uwsgi socket 1 bound to TCP address 0.0.0.0:8080 fd 9
> Python version: 3.6.8 (default, Jan 30 2019, 23:54:38) [GCC 6.4.0]
> Python main interpreter initialized at 0x55dfaa41c980
> python threads support enabled
> your server socket listen backlog is limited to 100 connections
> your mercy for graceful operations on workers is 60 seconds
> [uwsgi-cron] command "./manage.py runjobs minutely" registered as
> cron task
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" registered
> as cron task
> [uwsgi-cron] command "./manage.py runjobs hourly" registered as cron
> task
> [uwsgi-cron] command "./manage.py runjobs daily" registered as cron task
> [uwsgi-cron] command "./manage.py runjobs monthly" registered as
> cron task
> [uwsgi-cron] command "./manage.py runjobs weekly" registered as cron
> task
> [uwsgi-cron] command "./manage.py runjobs yearly" registered as cron
> task
> mapped 208576 bytes (203 KB) for 4 cores
> *** Operational MODE: threaded ***
> WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter
> 0x55dfaa41c980 pid: 1 (default app)
> *** uWSGI is running in multiple interpreter mode ***
> spawned uWSGI master process (pid: 1)
> spawned uWSGI worker 1 (pid: 40, cores: 4)
> Sat Apr 27 22:50:18 2019 - [uwsgi-cron] running "./manage.py runjobs
> minutely" (pid 45)
> [uwsgi-daemons] spawning "./manage.py qcluster" (uid: 82 gid: 82)
> 22:50:21 [Q] INFO Q Cluster-47 starting.
> 22:50:21 [Q] INFO Process-1:1 ready for work at 59
> 22:50:21 [Q] INFO Process-1:2 ready for work at 60
> 22:50:21 [Q] INFO Process-1:3 ready for work at 61
> 22:50:21 [Q] INFO Process-1:4 ready for work at 62
> 22:50:21 [Q] INFO Process-1:5 monitoring at 63
> 22:50:21 [Q] INFO Process-1 guarding cluster at 58
> 22:50:21 [Q] INFO Process-1:6 pushing tasks at 64
> 22:50:21 [Q] INFO Q Cluster-47 running.
> 22:59:31 [Q] INFO Enqueued 3403
> 22:59:31 [Q] INFO Process-1:1 processing [update_from_mailman]
> 22:59:33 [Q] INFO Processed [update_from_mailman]
> Sat Apr 27 23:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> quarter_hourly" (pid 73)
> Sat Apr 27 23:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> hourly" (pid 74)
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
> with pid 73 exited after 64 second(s)
> 23:01:28 [Q] INFO Enqueued 3404
> 23:01:29 [Q] INFO Process-1:2 processing
> [rebuild_mailinglist_cache_recent]
> [uwsgi-cron] command "./manage.py runjobs hourly" running with pid
> 74 exited after 91 second(s)
> Sat Apr 27 23:01:36 2019 - uwsgi_response_write_body_do(): Broken
> pipe [core/writer.c line 341] during GET / (212.203.58.154)
> OSError: write error
> 23:01:36 [Q] INFO Processed [rebuild_mailinglist_cache_recent]
> Sat Apr 27 23:15:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> quarter_hourly" (pid 88)
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
> with pid 88 exited after 4 second(s)
> 23:28:24 [Q] INFO Enqueued 3405
> 23:28:24 [Q] INFO Process-1:3 processing [update_from_mailman]
> 23:28:25 [Q] INFO Processed [update_from_mailman]
> Sat Apr 27 23:30:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> quarter_hourly" (pid 96)
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
> with pid 96 exited after 4 second(s)
> 23:44:40 [Q] INFO Enqueued 3406
> 23:44:40 [Q] INFO Process-1:4 processing [update_from_mailman]
> 23:44:41 [Q] INFO Processed [update_from_mailman]
> Sat Apr 27 23:45:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> quarter_hourly" (pid 104)
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
> with pid 104 exited after 4 second(s)
> Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> quarter_hourly" (pid 113)
> Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> hourly" (pid 114)
> Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> daily" (pid 115)
> Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> weekly" (pid 116)
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
> with pid 113 exited after 55 second(s)
> [uwsgi-cron] command "./manage.py runjobs weekly" running with pid
> 116 exited after 55 second(s)
> 00:01:36 [Q] INFO Enqueued 3407
> 00:01:36 [Q] INFO Process-1:1 processing
> [rebuild_mailinglist_cache_recent]
> [uwsgi-cron] command "./manage.py runjobs hourly" running with pid
> 114 exited after 99 second(s)
> 00:01:50 [Q] INFO Processed [rebuild_mailinglist_cache_recent]
> 00:04:52 [Q] INFO Enqueued 3408
> 00:04:52 [Q] INFO Process-1:2 processing [update_from_mailman]
> 00:04:54 [Q] INFO Processed [update_from_mailman]
>
> Dockerfile:
>
> FROM python:3.6-alpine3.7
>
> # Add startup script to container
> COPY assets/docker-entrypoint.sh /usr/local/bin/
>
> # Install packages and dependencies for hyperkitty and add user for
> # executing apps. It's important that the user has the UID/GID 82 so
> # nginx can access the files.
> RUN set -ex \
>     && apk add --no-cache --virtual .build-deps gcc libc-dev \
>         linux-headers git postgresql-dev \
>     && apk add --no-cache --virtual .mailman-rundeps bash sassc mailcap \
>         postgresql-client curl \
>     && pip install -U django==2.2 \
>     && pip install \
>         git+https://gitlab.com/eestec/mailmanclient \
>         git+https://gitlab.com/mailman/hyperkitty@c9fa4d4bfc295438d3e01cd93090064d004cf44d \
>         git+https://gitlab.com/eestec/django-mailman3 \
>         whoosh uwsgi psycopg2 dj-database-url typing \
>     && apk del .build-deps \
>     && addgroup -S -g 82 hyperkitty \
>     && adduser -S -u 82 -G hyperkitty hyperkitty \
>     && chmod u+x /usr/local/bin/docker-entrypoint.sh
>
> # Add needed files for uwsgi server + settings for django
> COPY assets/__init__.py /home/hyperkitty
> COPY assets/manage.py /home/hyperkitty
> COPY assets/urls.py /home/hyperkitty
> COPY assets/wsgi.py /home/hyperkitty
> COPY assets/uwsgi.ini /home/hyperkitty
> COPY assets/settings.py /home/hyperkitty
>
> # Change ownership for uwsgi+django files and set execution rights for
> # management script
> RUN chown -R hyperkitty /home/hyperkitty && chmod u+x /home/hyperkitty/manage.py
>
> # Make sure we are in the correct working dir
> WORKDIR /home/hyperkitty
>
> EXPOSE 8080 8081
>
> # Use stop signal for uwsgi server
> STOPSIGNAL SIGINT
>
> ENTRYPOINT ["docker-entrypoint.sh"]
> CMD ["uwsgi", "--ini", "/home/hyperkitty/uwsgi.ini"]
>
> On 4/27/19 7:58 PM, Abhilash Raj wrote:
> > On Sat, Apr 27, 2019, at 9:40 AM, Alain Kohli wrote:
> >> I have run "python manage.py rebuild_index" before, doesn't that do
> >> clear_index as well? Apart from that, I run hyperkitty in a docker
> >> container and didn't know fulltext_index should be persistent, so that
> >> got deleted after every version update for sure.
> > Which images are you using and how are you deploying them?
> >
> > You should persist fulltext_index, yes, and possibly logs if you need
> > them for debugging later.
> >
> > Can you paste the entire error traceback?
> >
> >>
> >> On 4/26/19 10:18 PM, Mark Sapiro wrote:
> >>> On 4/26/19 11:14 AM, Alain Kohli wrote:
> >>>> I see loads of "A string literal cannot contain NUL (0x00) characters."
> >>>> messages, but I haven't found missing messages in the archives yet. Not
> >>>> sure how that could be related, though. Apart from that I don't see
> >>>> anything unusual. The other jobs (quarter_hourly, hourly, etc.) seem to
> >>>> run and finish normally.
> >>> Did you upgrade from a Python 2.7 version of HyperKitty to a Python 3
> >>> version? The Haystack/Whoosh search engine databases are not compatible
> >>> between the two and "A string literal cannot contain NUL (0x00)
> >>> characters." is the symptom.
> >>>
> >>> You need to run 'python manage.py clear_index' or just remove all the
> >>> files from the directory defined as 'PATH' under HAYSTACK_CONNECTIONS in
> >>> your settings file (normally 'fulltext_index' in the same directory that
> >>> contains your settings.py).
> >>>
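The clear-and-rebuild sequence described above boils down to the following; the project path is an assumption, and it must be run from the directory holding your manage.py:

```shell
# Sketch of the reindex sequence: wipe the stale (Python 2) Whoosh index,
# then rebuild it. --noinput skips the confirmation prompts.
cd /path/to/hyperkitty-project
python manage.py clear_index --noinput    # or: rm -rf fulltext_index/*
python manage.py rebuild_index --noinput
```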
> >> _______________________________________________
> >> Mailman-users mailing list -- mailman-users(a)mailman3.org
> >> To unsubscribe send an email to mailman-users-leave(a)mailman3.org
> >> https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
> >>
>
> _______________________________________________
> Mailman-users mailing list -- mailman-users(a)mailman3.org
> To unsubscribe send an email to mailman-users-leave(a)mailman3.org
> https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
>
--
thanks,
Abhilash Raj (maxking)
6 years, 5 months