Re: Getting connection refused on postfix trying to connect to mailman-core:lmtp
by Abhilash Raj
On Thu, Oct 5, 2017, at 08:56 PM, Dmitry Makovey wrote:
> On 10/05/2017 07:20 PM, Abhilash Raj wrote:
> > On Thu, Oct 5, 2017, at 05:36 PM, Dmitry Makovey wrote:
> >> On 10/05/2017 03:20 PM, Abhilash Raj wrote:
> >>> On Thu, Oct 5, 2017, at 03:08 PM, Dmitry Makovey wrote:
> >>>>
> >>>> I've got a setup where postfix runs inside one VM(container) and mailman
> >>>> runs inside another one (maxking containers). I've wired everything
> >>>> according to docs yet I'm getting:
> >>>>
> >>>> postfix/lmtp[266]: 66A72800A87: to=<somelist(a)lists.here.stanford.edu>,
> >>>> relay=none, delay=0.5, delays=0.48/0.01/0/0, dsn=4.4.1, status=deferred
> >>>> (connect to mailman-01.stanford.edu[1.2.3.4]:8024: Connection refused)
> >
> > I found this in your settings:
> >
> > [mta]
> > lmtp_host: mailman-01.stanford.edu
> > lmtp_port: 8024
> >
> >
> > And the log message above.
> >
> > I believe that the LMTP runner died because it wasn't able to bind to
> > `mailman-01.stanford.edu`, which I am assuming is the hostname assigned
> > to the host running these containers.
> >
> > `MM_HOSTNAME` env variable in the docker containers should be something
> > that the process inside mailman-core container can bind to and can be
> > reached by postfix (which can run either on host or on another
> > container). (Now that I read it myself, I agree that the name of the
> > variable sounds not-so-intuitive.)
>
> thank you so much for the hints! I've changed docker-compose to include
> MM_HOSTNAME variable *and* made sure that for the mailman-core I've got:
>
> services:
>   mailman-core:
>     hostname: mailman-01
>     domainname: stanford.edu
>     ...
>     environment:
>       ...
>       MM_HOSTNAME: mailman-01.stanford.edu
>
>
> that solved the issue while keeping the mailman config mentioned above. Is
> that what you had in mind? I was looking for a quick hack, but would still
> like to find out the proper solution if that isn't the one.
If that is the address that can be used to reach the container from
postfix, nothing else is needed. That is the proper solution.
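As a quick sanity check, reachability can be probed from the postfix side before digging into Mailman itself. A minimal sketch (the host and port below are the ones from the log in this thread; substitute your own):

```python
import socket

def lmtp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds.

    This is the same first step postfix's lmtp client performs, so a
    False here reproduces the "Connection refused" deferral
    independently of postfix.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. lmtp_reachable("mailman-01.stanford.edu", 8024)
```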
>
> > The default configuration (and docker-compose.yaml) sets this value to
> > the IP Address of the container (172.19.199.2), which is reachable from
> > the host. If you set this value to whatever IP the mailman-core is
> > assigned and re-create the containers (or just re-start and run `mailman
> > aliases` in mailman-core to re-generate transport_maps), it would work
> > out.
>
> if I understand above correctly that means semi-manual mangling of
> postfix aliases file which I'd rather not do.
Not exactly manual: the next time you re-create the container, these aliases
should be generated correctly. I asked for re-generating the aliases because
you'd otherwise have had wrong ones. But because `mailman-01.stanford.edu`
was actually the correct address, it probably wasn't needed.
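For reference, the re-generation step being discussed can be sketched as follows (the service name `mailman-core` is taken from the maxking docker-compose setup and may differ in your deployment):

```shell
# Re-create (or restart) the container so the new MM_HOSTNAME takes
# effect, then regenerate the postfix transport maps from inside it.
docker-compose up -d --force-recreate mailman-core
docker-compose exec mailman-core mailman aliases
```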
> Using the above technique mailman-core does generate proper aliases
> while binding to the appropriate IP. Kind of icky, but it seems to work.
>
> >
> > Also, I see your docker-compose.yaml configuration (`MAILMAN_CORE_IP:
> > 172.19.199.2`) is not consistent with your output of `mailman conf`
> > (`[webservice] hostname: 172.19.199.5`). I am not sure how that
> > happened, just wanted to point it out. This *might* cause the REST
> > runner to die too, and thus Postorius/HyperKitty wouldn't work.
>
> right, some of these IPs keep popping up and I have to hunt for them. I
> intentionally removed the direct IP assignment in docker-compose.yaml,
> yet I get the feeling there's another IP assignment someplace that I've
> missed.
>
> > Hope that was helpful!
>
> very much so! thanks for your quick responses - given my existing
> deadlines I value them all the more :)
>
> --
> Sr System and DevOps Engineer SoM IRT
>
--
Abhilash Raj
maxking(a)asynchronous.in
[Mailman-Users] Listservers currently hosted with EMWD - urgent request for assistance
by Stephen J. Turnbull
Note: Reply-To set to mailman-users(a)mailman3.org. Please check that
any replies are addressed to that list, and not mailman-users(a)python.org.
Jonathan,
FWIW, I haven't heard anything. I have 3 students submitting MS/PhD
theses in less than (checks watch) 18 hours, but after that I'll see
if I can find out anything. I wouldn't bet on it, though, as I don't
think anyone here has a connection to the Carpenter family or whoever
is left of the EMWD staff. I assume you have checked the emwd.com
website?
> We have our listservers hosted with EMWD which stopped working on
> 23 December.
What does "stopped working" mean? I assume that list traffic is no
longer being forwarded to subscribers, but perhaps other services are
working. It may help if you can answer the following questions.
1. Are you using the GNU Mailman-supplied Postorius/HyperKitty
combination for administration and archiving, or EMWD's
proprietary "Harmony" (Affinity/Empathy) platform?
2. Can you see archives (if any)?
3. Can you log in to the administrative interface?
4. Is mail to the lists being returned to sender, or just
disappearing into the void?
5. You mention an automated response in a later message. Is that to
an email you sent, or from a web-based issue-tracking system?
What does it say about the current status of EMWD services, if
anything?
Regards,
Steve
> We have since found out the devastating news that
> Brian Carpenter passed away last year. Brian was really helpful
> when we moved our listservers from a legacy system to hosting on
> the EMWD platform. My sincere condolences to his family and
> friends.
>
> As we can no longer get a response from EMWD about our listserver issue I was wondering if any one else in this community could help.
>
> Many thanks.
>
> Jonathan
>
> Jonathan Ashby
> ICT Strategic Communications Project Manager
> Londonwide LMCs and Londonwide Enterprise Ltd
> Working days: Tuesday and Wednesday
> Direct dial: 020 3818 6228
> Mobile: 07768 109601
> Fax: 020 7383 7442
> Email: jonathan.ashby(a)lmc.org.uk
> Web: www.lmc.org.uk
> Twitter: @LondonwideLMCs
>
>
>
> ------------------------------------------------------
> Mailman-Users mailing list -- mailman-users(a)python.org
> To unsubscribe send an email to mailman-users-leave(a)python.org
> https://mail.python.org/mailman3/lists/mailman-users.python.org/
> Mailman FAQ: http://wiki.list.org/x/AgA3
> Security Policy: http://wiki.list.org/x/QIA9
> Searchable Archives: https://www.mail-archive.com/mailman-users@python.org/
> https://mail.python.org/archives/list/mailman-users@python.org/
Re: Digests not working correctly
by Joel Lord
Now I'm on one of the lists in digest mode and I can see that it's a
mess. Periodic digests are definitely NOT working, so I'll lay that out
here.
root@host2:/# cat /etc/cron.d/mailman
# This goes in /etc/cron.d/mailman
# Replace "apache" by your webserver user ("www-data" on Debian systems) and
# set the path to the Django project directory
0 23 * * * lists /usr/local/bin/mailman digests --periodic
0 23 * * * lists /usr/local/bin/mailman notify
root@host2:/# grep digests /var/log/cron.log
Jun 11 23:00:01 host2 CRON[1632765]: (lists) CMD (/usr/local/bin/mailman digests --periodic)
Jun 12 23:00:01 host2 CRON[2177286]: (lists) CMD (/usr/local/bin/mailman digests --periodic)
root@host2:/home/members/directory# su - lists
lists@host2:~$ /usr/local/bin/mailman digests --periodic
lists@host2:~$ ls var/lists/<list>/
digest.mmdf
In this case I've got /usr/local/bin/mailman as a symlink to the mailman
binary inside the venv's bin directory, just for simplicity. That
digest.mmdf file is dated June 9th and clearly ought to have been
cleared out on any of the nightly runs between then and today but has
not. There are no errors anywhere I can find.
How can I try and track this down?
-Joel
On 6/4/2023 10:15 PM, Joel Lord wrote:
> The May 4th digest that went out was _also_ size-triggered, so this may
> have nothing to do with periodic digests at all, and possibly my
> periodic digests aren't working. I'm not on any of my own lists in
> digest mode, I'm slowly extracting diagnostic information out of people
> who are. Also, since this is a ~2 month cycle, it's really difficult to
> get data points to work with. I'll need to remember to go in and look
> when this settles down again (new cycle of activity started last night)
> to see if there's anything left pending.
>
> (venv) root@host2:/home/lists/mailman/venv/bin# pip freeze | grep -i hyper
> HyperKitty==1.3.7
>
> On 6/4/2023 10:05 PM, Mark Sapiro wrote:
>> On 6/4/23 18:35, Joel Lord wrote:
>>>
>>> The periodic digests do seem to be coming out. I also now have
>>> confirmation that the one message in this morning's digest that was
>>> from May 4th was also included in the last digest back on May 4th, so
>>> it seems that the one message was left behind in the digest queue
>>> when the periodic digest was sent.
>>
>> I don't see how that can happen. The process that sends a digest
>> renames the var/lists/<list-id>/digest.mmdf mailbox file in which the
>> messages are accumulated to
>> var/lists/<list-id>/digest.<volume>.<issue>.mmdf, where <volume> and
>> <issue> are the volume and issue numbers of that digest, and then
>> queues a message in the `digest` queue to tell the digest runner to
>> create the digest from the messages in that mbox and send it. Thus, it
>> leaves no var/lists/<list-id>/digest.mmdf mailbox file behind and that
>> is created anew when the next post arrives. Further, if there is a
>> non-empty digest.mmdf file, its messages should be sent no later than
>> the next 11 PM `cron digests`.
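The rename-then-queue behavior Mark describes can be sketched like this (the function name and arguments are illustrative of the behavior, not Mailman's actual code; only the file naming follows the thread):

```python
import os

def rotate_digest_mbox(list_dir: str, volume: int, issue: int) -> str:
    """Move the accumulating digest mailbox aside for the digest runner.

    After the rename, no digest.mmdf remains in the list directory; it
    is created anew when the next post arrives.
    """
    src = os.path.join(list_dir, "digest.mmdf")
    dst = os.path.join(list_dir, f"digest.{volume}.{issue}.mmdf")
    os.rename(src, dst)
    return dst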
>>
>>
>>> There was one earlier message to the list back on May 4th, before the
>>> one that got duplicated, but I can't tell if that triggered a
>>> size-based digest to be sent: the logs aren't clear enough on that
> >>> detail for me to tell.
>>
>> OK
>>
>>
>>> Just to inform things:
>>>
>>> (venv) lists@host2:~/mailman/venv/bin$ pip freeze | grep mailman
>>> django-mailman3==1.3.9
>>> mailman==3.3.8
>>> mailman-hyperkitty==1.2.1
>>> mailman-web==0.0.6
>>> mailmanclient==3.3.5
>>> (venv) lists@host2:~/mailman/venv/bin$ pip freeze | grep hyper
>>> mailman-hyperkitty==1.2.1
>>
>> Actually, it's HyperKitty, not hyperkitty, but I assume HyperKitty is
>> up to date as are the others.
>>
>>> (venv) lists@host2:~/mailman/venv/bin$ pip freeze | grep post
>>> postorius==1.3.8
>>>
>>>
>>
>
--
Joel Lord
Re: [Django] ERROR (EXTERNAL IP): Service Unavailable
by dancab@caltech.edu
Thanks Mark. I ended up just restoring the database from a backup.
One last question.
I'm testing a custom footer via the template creation tool in Postorius.
However, my messages aren't being delivered when I have the template activated.
If I delete the template then suddenly my message goes through.
The message seems to be stuck in the out queue folder.
The smtp.log is just filling up with the output below.
Mar 04 11:44:33 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:33 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f49976f0710>
Mar 04 11:44:33 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:33 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f49977ea390>
Mar 04 11:44:33 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:33 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f4997895e48>
Mar 04 11:44:33 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499770e908>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f49978f2f28>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f49977e8320>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499784c588>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499775c860>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f49976f79e8>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499775e9e8>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499aa18b00>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499aa66ac8>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499784fcf8>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f49976df080>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499d64d438>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499780b160>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:34 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499784c048>
Mar 04 11:44:34 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499784ceb8>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f49977f8828>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f4997764160>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499784c2e8>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f49978b2cc0>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f49978d89b0>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f4997a374e0>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499775c470>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f49978d8dd8>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499784f9e8>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499784ce48>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f499aa60860>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Mar 04 11:44:35 2021 (416) Using agent: <mailman.mta.deliver.Deliver object at 0x7f4997819e80>
Mar 04 11:44:35 2021 (416) IndividualDelivery to: dancab(a)systems.caltech.edu
Re: Hyperkitty CPU usage
by Abhilash Raj
On Sat, Apr 27, 2019, at 6:22 PM, Alain Kohli wrote:
> I'm running a custom image which is based on an older version of the one
> here: https://github.com/maxking/docker-mailman. I attached it below.
> But I separated postorius and hyperkitty, so hyperkitty is running in
> its own container. I'm deploying the image with a plain 'docker run'
> behind nginx. I made fulltext_index persistent now, but it didn't get
> populated with anything yet. I don't really have an error traceback
> because there is never an error thrown. The only thing with some content
> is uwsgi-error.log, which you can find below. I'm also still getting the
> "A string literal cannot contain NUL (0x00) characters." messages. I
> also noticed that it takes incredibly long for the webinterface to load
> (several minutes) even though there doesn't seem to be any process
> consuming notable resources apart from the minutely job.
>
> Funnily enough, I have the exact same image deployed on a second server
> as well for testing. On that one everything works fine. The only
> difference is that on the problematic one I have a lot more mailing
> lists/archives and that I imported them from mailman2. Could something
> have gone wrong during the import? I used the regular hyperkitty_import
> command.
Yes, this is because `whoosh`, the library used by default for full-text
indexing, is a pure-Python implementation and quite slow on busy lists.
We do support more backends though; see [1] for a list of all the supported
search backends. Something like Xapian (C++) or Elasticsearch/Solr (Java)
should be much better in terms of performance.
[1]: https://django-haystack.readthedocs.io/en/master/backend_support.html
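Switching backends is a Django settings change. The fragment below is a sketch assuming an Elasticsearch 2.x instance on localhost; the engine dotted paths are django-haystack's documented backend classes, while the index path, URL, and index name are placeholders to adapt:

```python
# Sketch of a HyperKitty settings.py fragment. HAYSTACK_CONNECTIONS
# selects the search backend. Default (pure-Python Whoosh):
HAYSTACK_CONNECTIONS = {
    "default": {
        "ENGINE": "haystack.backends.whoosh_backend.WhooshEngine",
        "PATH": "/home/hyperkitty/fulltext_index",  # must be persistent
    }
}

# Faster alternative, assuming Elasticsearch 2.x at localhost:9200:
HAYSTACK_CONNECTIONS = {
    "default": {
        "ENGINE": "haystack.backends.elasticsearch2_backend.Elasticsearch2SearchEngine",
        "URL": "http://127.0.0.1:9200/",
        "INDEX_NAME": "hyperkitty",
    }
}
```

After changing the backend, re-run `python manage.py rebuild_index` so the new index gets populated.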
>
> uwsgi-error.log:
>
> *** Starting uWSGI 2.0.18 (64bit) on [Sat Apr 27 22:50:17 2019] ***
> compiled with version: 6.4.0 on 27 April 2019 22:48:42
> os: Linux-4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19)
> nodename: hyperkitty.docker
> machine: x86_64
> clock source: unix
> detected number of CPU cores: 4
> current working directory: /home/hyperkitty
> detected binary path: /usr/local/bin/uwsgi
> !!! no internal routing support, rebuild with pcre support !!!
> setgid() to 82
> setuid() to 82
> chdir() to /home/hyperkitty
> your memory page size is 4096 bytes
> detected max file descriptor number: 1048576
> lock engine: pthread robust mutexes
> thunder lock: disabled (you can enable it with --thunder-lock)
> uwsgi socket 0 bound to TCP address 0.0.0.0:8081 fd 8
> uwsgi socket 1 bound to TCP address 0.0.0.0:8080 fd 9
> Python version: 3.6.8 (default, Jan 30 2019, 23:54:38) [GCC 6.4.0]
> Python main interpreter initialized at 0x55dfaa41c980
> python threads support enabled
> your server socket listen backlog is limited to 100 connections
> your mercy for graceful operations on workers is 60 seconds
> [uwsgi-cron] command "./manage.py runjobs minutely" registered as
> cron task
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" registered
> as cron task
> [uwsgi-cron] command "./manage.py runjobs hourly" registered as cron
> task
> [uwsgi-cron] command "./manage.py runjobs daily" registered as cron task
> [uwsgi-cron] command "./manage.py runjobs monthly" registered as
> cron task
> [uwsgi-cron] command "./manage.py runjobs weekly" registered as cron
> task
> [uwsgi-cron] command "./manage.py runjobs yearly" registered as cron
> task
> mapped 208576 bytes (203 KB) for 4 cores
> *** Operational MODE: threaded ***
> WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter
> 0x55dfaa41c980 pid: 1 (default app)
> *** uWSGI is running in multiple interpreter mode ***
> spawned uWSGI master process (pid: 1)
> spawned uWSGI worker 1 (pid: 40, cores: 4)
> Sat Apr 27 22:50:18 2019 - [uwsgi-cron] running "./manage.py runjobs
> minutely" (pid 45)
> [uwsgi-daemons] spawning "./manage.py qcluster" (uid: 82 gid: 82)
> 22:50:21 [Q] INFO Q Cluster-47 starting.
> 22:50:21 [Q] INFO Process-1:1 ready for work at 59
> 22:50:21 [Q] INFO Process-1:2 ready for work at 60
> 22:50:21 [Q] INFO Process-1:3 ready for work at 61
> 22:50:21 [Q] INFO Process-1:4 ready for work at 62
> 22:50:21 [Q] INFO Process-1:5 monitoring at 63
> 22:50:21 [Q] INFO Process-1 guarding cluster at 58
> 22:50:21 [Q] INFO Process-1:6 pushing tasks at 64
> 22:50:21 [Q] INFO Q Cluster-47 running.
> 22:59:31 [Q] INFO Enqueued 3403
> 22:59:31 [Q] INFO Process-1:1 processing [update_from_mailman]
> 22:59:33 [Q] INFO Processed [update_from_mailman]
> Sat Apr 27 23:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> quarter_hourly" (pid 73)
> Sat Apr 27 23:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> hourly" (pid 74)
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
> with pid 73 exited after 64 second(s)
> 23:01:28 [Q] INFO Enqueued 3404
> 23:01:29 [Q] INFO Process-1:2 processing
> [rebuild_mailinglist_cache_recent]
> [uwsgi-cron] command "./manage.py runjobs hourly" running with pid
> 74 exited after 91 second(s)
> Sat Apr 27 23:01:36 2019 - uwsgi_response_write_body_do(): Broken
> pipe [core/writer.c line 341] during GET / (212.203.58.154)
> OSError: write error
> 23:01:36 [Q] INFO Processed [rebuild_mailinglist_cache_recent]
> Sat Apr 27 23:15:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> quarter_hourly" (pid 88)
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
> with pid 88 exited after 4 second(s)
> 23:28:24 [Q] INFO Enqueued 3405
> 23:28:24 [Q] INFO Process-1:3 processing [update_from_mailman]
> 23:28:25 [Q] INFO Processed [update_from_mailman]
> Sat Apr 27 23:30:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> quarter_hourly" (pid 96)
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
> with pid 96 exited after 4 second(s)
> 23:44:40 [Q] INFO Enqueued 3406
> 23:44:40 [Q] INFO Process-1:4 processing [update_from_mailman]
> 23:44:41 [Q] INFO Processed [update_from_mailman]
> Sat Apr 27 23:45:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> quarter_hourly" (pid 104)
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
> with pid 104 exited after 4 second(s)
> Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> quarter_hourly" (pid 113)
> Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> hourly" (pid 114)
> Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> daily" (pid 115)
> Sun Apr 28 00:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs
> weekly" (pid 116)
> [uwsgi-cron] command "./manage.py runjobs quarter_hourly" running
> with pid 113 exited after 55 second(s)
> [uwsgi-cron] command "./manage.py runjobs weekly" running with pid
> 116 exited after 55 second(s)
> 00:01:36 [Q] INFO Enqueued 3407
> 00:01:36 [Q] INFO Process-1:1 processing
> [rebuild_mailinglist_cache_recent]
> [uwsgi-cron] command "./manage.py runjobs hourly" running with pid
> 114 exited after 99 second(s)
> 00:01:50 [Q] INFO Processed [rebuild_mailinglist_cache_recent]
> 00:04:52 [Q] INFO Enqueued 3408
> 00:04:52 [Q] INFO Process-1:2 processing [update_from_mailman]
> 00:04:54 [Q] INFO Processed [update_from_mailman]
>
> Dockerfile:
>
> FROM python:3.6-alpine3.7
>
> # Add startup script to container
> COPY assets/docker-entrypoint.sh /usr/local/bin/
>
> # Install packages and dependencies for hyperkitty and add user for executing apps.
> # It's important that the user has the UID/GID 82 so nginx can access the files.
> RUN set -ex \
>     && apk add --no-cache --virtual .build-deps gcc libc-dev linux-headers git \
>        postgresql-dev \
>     && apk add --no-cache --virtual .mailman-rundeps bash sassc mailcap \
>        postgresql-client curl \
>     && pip install -U django==2.2 \
>     && pip install git+https://gitlab.com/eestec/mailmanclient \
>        git+https://gitlab.com/mailman/hyperkitty@c9fa4d4bfc295438d3e01cd93090064d004cf44d \
>        git+https://gitlab.com/eestec/django-mailman3 \
>        whoosh \
>        uwsgi \
>        psycopg2 \
>        dj-database-url \
>        typing \
>     && apk del .build-deps \
>     && addgroup -S -g 82 hyperkitty \
>     && adduser -S -u 82 -G hyperkitty hyperkitty \
>     && chmod u+x /usr/local/bin/docker-entrypoint.sh
>
> # Add needed files for uwsgi server + settings for django
> COPY assets/__init__.py /home/hyperkitty
> COPY assets/manage.py /home/hyperkitty
> COPY assets/urls.py /home/hyperkitty
> COPY assets/wsgi.py /home/hyperkitty
> COPY assets/uwsgi.ini /home/hyperkitty
> COPY assets/settings.py /home/hyperkitty
>
> # Change ownership for uwsgi+django files and set execution rights for management script
> RUN chown -R hyperkitty /home/hyperkitty && chmod u+x /home/hyperkitty/manage.py
>
> # Make sure we are in the correct working dir
> WORKDIR /home/hyperkitty
>
> EXPOSE 8080 8081
>
> # Use stop signal for uwsgi server
> STOPSIGNAL SIGINT
>
> ENTRYPOINT ["docker-entrypoint.sh"]
> CMD ["uwsgi", "--ini", "/home/hyperkitty/uwsgi.ini"]
>
> On 4/27/19 7:58 PM, Abhilash Raj wrote:
> > On Sat, Apr 27, 2019, at 9:40 AM, Alain Kohli wrote:
> >> I have run "python manage.py rebuild_index" before, doesn't that do
> >> clear_index as well? Apart from that, I run hyperkitty in a docker
> >> container and didn't know fulltext_index should be persistent, so that
> >> got deleted after every version update for sure.
> > Which images are you using and how are you deploying them?
> >
> > You should persist fulltext_index, yes, and possibly logs if you need
> > them for debugging later.
> >
> > Can you paste the entire error traceback?
> >
> >>
> >> On 4/26/19 10:18 PM, Mark Sapiro wrote:
> >>> On 4/26/19 11:14 AM, Alain Kohli wrote:
> >>>> I see loads of "A string literal cannot contain NUL (0x00) characters."
> >>>> messages, but I haven't found missing messages in the archives yet. Not
> >>>> sure how that could be related, though. Apart from that I don't see
> >>>> anything unusual. The other jobs (quarter_hourly, hourly, etc.) seem to
> >>>> run and finish normally.
> >>> Did you upgrade from a Python 2.7 version of HyperKitty to a Python 3
> >>> version? The Haystack/Whoosh search engine databases are not compatible
> >>> between the two and "A string literal cannot contain NUL (0x00)
> >>> characters." is the symptom.
> >>>
> >>> You need to run 'python manage.py clear_index' or just remove all the
> >>> files from the directory defined as 'PATH' under HAYSTACK_CONNECTIONS in
> >>> your settings file (normally 'fulltext_index' in the same directory that
> >>> contains your settings.py).
> >>>
> >> _______________________________________________
> >> Mailman-users mailing list -- mailman-users(a)mailman3.org
> >> To unsubscribe send an email to mailman-users-leave(a)mailman3.org
> >> https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
> >>
>
> _______________________________________________
> Mailman-users mailing list -- mailman-users(a)mailman3.org
> To unsubscribe send an email to mailman-users-leave(a)mailman3.org
> https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
>
--
thanks,
Abhilash Raj (maxking)
Re: Member Issue Discovered
by Stephen J. Turnbull
Brian Carpenter writes:
> If you mean if they don't have a Django user account, they can't
> unsubscribe?
The Django user account is more or less a proxy for the underlying
Mailman User object, which is what we're concerned with here because
that's where the data you want deleted is stored.
> I think that is true if they are wanting to unsubscribe via
> the web interface. But sending an email to
> listname-unsubscribe@listdomain allows them to unsubscribe without such
> user authentication. Isn't that correct?
They don't need a Django password or social auth, but they'll still
have to do the OTK dance. I don't see a reason to distinguish the
methods of authentication. They all reduce to the OTK dance to prove
ownership of a mailbox, then optional *delegation* to a password in
Django or social auth via Gmail etc.
I understand that your users don't understand this or care to. Thing
is, some of *our* users care about the features that this architecture
enables.
> My intent in our changes is to give the list owner and member
> exactly what they expect: when they explicitly remove a
> subscription and/or user profile, that they expect their data to be
> totally removed.
That's fine, and you're welcome to implement that in Affinity. In
fact, as I understand it, you have already done so. *I'm* not saying
*nobody* should implement that.
I'm saying that *Mailman* shouldn't, because a lot of subscribers to
Mailman lists won't like it. The whole point of Mailman 3 is to cater
to users who want powerful control over their subscriptions. For
Mailman (Postorius), prompting the user "If you delete this
subscription, you will have no subscriptions linked to this account.
Would you like to delete the account as well?" (as I have proposed
three times now) is the right way to go. This can be implemented via
email with a second OTK.
> I can easily picture a scenario where an older list has a number of
> list members move on to other things in life
And I can picture a scenario where they take a summer break and
reactivate a decade later. I would be *pissed off* if my data got
deleted in that scenario. (Yes, I've done that in the five year
variant, if not the full decade.)
> and even abandon their email accounts totally.
Which is a totally different scenario. I deny your "imperative" in
the former case; inactivity from the point of view of a list owner is
(usually) not abandonment. I question it in the latter, because
"abandon" is an inference drawn from lack of activity or bounces,
either of which could be inadvertent (respectively the "summer break"
scenario, and separation from an employer).
> In those cases it is imperative that those list addresses are
> removed either via bounce processing or list owner intervention and
> that no legacy data on such members remain.
But how do you propose to identify "legacy data"? In Mailman 3,
members are *people*, not addresses. Do you really want it to be the
case that if Albert signs up with albert(a)example.com, and later gets
fired by example.com, then all his subscriptions get bounce-cancelled,
and his User profile gets irreversibly trashed? Or some large email
provider hosting lots of posters decides to suddenly implement some
spam protection that causes a ton of bounces (as DMARC p=reject did in
2014) and hundreds of users have their accounts trashed?
You just can't know without asking Albert what he wants done in his
case and you can be quite sure you're doing the wrong thing in the
DMARC-like case. In either case, resubscribing is a click per list if
his data is retained (and Albert needs an OTK confirmation of another
address). It's an annoying session of duplicating his configuration
(including passwords and social auth links) if not, and probably some
months of discovering that a configuration that built up over years
wasn't accurately reproduced for some lists.
List owners, of course, can do what they want with their lists. If
they want to add automation for their subscribers that simplifies the
lives of people who have simple needs, that's not something Mailman
can, should, or will try to stop. That's *why* I support efforts like
Affinity, Empathy, and Harmony.
Steve
Re: using SSH/TLS with external MTA
by Odhiambo Washington
On Wed, Jul 24, 2024 at 4:36 PM Roland Giesler via Mailman-users <
mailman-users(a)mailman3.org> wrote:
> I have managed thus far to get things working on my new install, but I
> need to use a secure logon to send mail from an external MTA. I have
> set up:
>
> /etc/mailman/mailman.cfg:
>
> smtp_host: box2.gtahardware.co.za
> smtp_port: 465
> smtp_user: roland(a)giesler.za.net
> smtp_pass: <hidden>
> smtp_secure_mode: smtps
> smtp_verify_cert: no
> smtp_verify_hostname: no
>
> I'll get a cert installed later, for now just want to get it going.
>
> This error occurs when I try to create a new user in postorius.
>
> ERROR 2024-07-24 15:29:45,572 2150 django.request Internal Server Error:
> /accounts/login/
> Traceback (most recent call last):
> File
> "/usr/lib/python3/dist-packages/django/core/handlers/exception.py", line
> 47, in inner
> response = get_response(request)
> File "/usr/lib/python3/dist-packages/django/core/handlers/base.py",
> line 181, in _get_response
> response = wrapped_callback(request, *callback_args,
> **callback_kwargs)
> File "/usr/lib/python3/dist-packages/django/views/generic/base.py",
> line 70, in view
> return self.dispatch(request, *args, **kwargs)
> File "/usr/lib/python3/dist-packages/django/utils/decorators.py",
> line 43, in _wrapper
> return bound_method(*args, **kwargs)
> File
> "/usr/lib/python3/dist-packages/django/views/decorators/debug.py", line
> 89, in sensitive_post_parameters_wrapper
> return view(request, *args, **kwargs)
> File "/usr/lib/python3/dist-packages/allauth/account/views.py", line
> 146, in dispatch
> return super(LoginView, self).dispatch(request, *args, **kwargs)
> File "/usr/lib/python3/dist-packages/allauth/account/views.py", line
> 74, in dispatch
> response = super(RedirectAuthenticatedUserMixin, self).dispatch(
> File "/usr/lib/python3/dist-packages/django/views/generic/base.py",
> line 98, in dispatch
> return handler(request, *args, **kwargs)
> File "/usr/lib/python3/dist-packages/allauth/account/views.py", line
> 102, in post
> response = self.form_valid(form)
> File "/usr/lib/python3/dist-packages/allauth/account/views.py", line
> 159, in form_valid
> return form.login(self.request, redirect_url=success_url)
> File "/usr/lib/python3/dist-packages/allauth/account/forms.py", line
> 196, in login
> ret = perform_login(
> File "/usr/lib/python3/dist-packages/allauth/account/utils.py", line
> 175, in perform_login
> send_email_confirmation(request, user, signup=signup, email=email)
> File "/usr/lib/python3/dist-packages/allauth/account/utils.py", line
> 346, in send_email_confirmation
> email_address.send_confirmation(request, signup=signup)
> File "/usr/lib/python3/dist-packages/allauth/account/models.py", line
> 62, in send_confirmation
> confirmation.send(request, signup=signup)
> File "/usr/lib/python3/dist-packages/allauth/account/models.py", line
> 169, in send
> get_adapter(request).send_confirmation_mail(request, self, signup)
> File "/usr/lib/python3/dist-packages/allauth/account/adapter.py",
> line 464, in send_confirmation_mail
> self.send_mail(email_template,
> emailconfirmation.email_address.email, ctx)
> File "/usr/lib/python3/dist-packages/allauth/account/adapter.py",
> line 136, in send_mail
> msg.send()
> File "/usr/lib/python3/dist-packages/django/core/mail/message.py",
> line 284, in send
> return self.get_connection(fail_silently).send_messages([self])
> File
> "/usr/lib/python3/dist-packages/django/core/mail/backends/smtp.py", line
> 109, in send_messages
> sent = self._send(message)
> File
> "/usr/lib/python3/dist-packages/django/core/mail/backends/smtp.py", line
> 125, in _send
> self.connection.sendmail(from_email, recipients,
> message.as_bytes(linesep='\r\n'))
> File "/usr/lib/python3.10/smtplib.py", line 901, in sendmail
> raise SMTPRecipientsRefused(senderrs)
> smtplib.SMTPRecipientsRefused: {'roland(a)giesler.za.net': (454, b'4.7.1
> <roland(a)giesler.za.net>: Relay access denied')}
>
> However, the mailserver is in daily use and accepts logins all the time.
> I believe I'm missing something that should be installed to make SMTPS
> (SSL) work, but what?
>
How about the config below?
[mta]
# The class defining the interface to the incoming mail transport agent.
incoming: mailman.mta.postfix.LMTP
# The callable implementing delivery to the outgoing mail transport agent.
# This must accept three arguments, the mailing list, the message, and the
# message metadata dictionary.
*outgoing: mailman.mta.deliver.deliver* <=========
smtp_host: box2.gtahardware.co.za
smtp_port: 465
smtp_user: roland(a)giesler.za.net
smtp_pass: <hidden>
smtp_secure_mode: smtps
smtp_verify_cert: no
smtp_verify_hostname: no
To be honest, I have never used this as I always use the localhost MTA. I
think you could use a local MTA, configured to authenticate to the remote
MTA.
Details:
https://docs.mailman3.org/projects/mailman/en/latest/src/mailman/mta/docs/c…
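One detail worth keeping in mind with `smtp_secure_mode: smtps`: port 465 expects a TLS handshake before any SMTP dialogue, while STARTTLS (usually port 587) connects in plain text and upgrades afterwards, so the mode and port must agree or the client fails before authentication even starts. A minimal sketch of that distinction using Python's smtplib (the mode names here just mirror the mailman.cfg values; this is an illustration, not Mailman's actual delivery code):

```python
import smtplib

def make_smtp_client(mode: str):
    """Map a mailman.cfg-style smtp_secure_mode value to an smtplib class.

    'smtps'    -> implicit TLS from the first byte (typically port 465)
    'starttls' -> plain connect, then STARTTLS upgrade (typically port 587)
    anything else -> plain SMTP
    This mapping is a sketch for illustration, not Mailman's own code.
    """
    if mode == "smtps":
        return smtplib.SMTP_SSL  # TLS handshake precedes the SMTP banner
    return smtplib.SMTP          # 'starttls' callers must call .starttls()

# A plain SMTP client pointed at port 465 never sees a banner, because
# the server is waiting for a TLS ClientHello -- mode and port must agree.
assert make_smtp_client("smtps") is smtplib.SMTP_SSL
assert make_smtp_client("starttls") is smtplib.SMTP
```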
--
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE
+254 7 3200 0004/+254 7 2274 3223
In an Internet failure case, the #1 suspect is a constant: DNS.
"Oh, the cruft.", egrep -v '^$|^.*#' ¯\_(ツ)_/¯ :-)
[How to ask smart questions:
http://www.catb.org/~esr/faqs/smart-questions.html]
Re: Migrating mailman3 to latest ubuntu lts
by Helio Loureiro
Hi,
No luck :(
(venv) mailman@new-server ~ (v3.3.9)> *pip freeze | egrep
"mailman-web|django-mailman3|django-allauth"*
django-allauth==0.59.0
django-mailman3==1.3.11
mailman-web==0.0.8
(venv) mailman@new-server ~ (v3.3.9)> *pip install -U
django-allauth==0.58.0*
Collecting django-allauth==0.58.0
Downloading django-allauth-0.58.0.tar.gz (861 kB)
---------------------------------------- 861.7/861.7 KB 9.4 MB/s eta
0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: requests-oauthlib>=0.3.0 in
./venv/lib/python3.10/site-packages (from django-allauth==0.58.0) (1.3.1)
Requirement already satisfied: Django>=3.2 in
./venv/lib/python3.10/site-packages (from django-allauth==0.58.0) (4.1.13)
Requirement already satisfied: pyjwt[crypto]>=1.7 in
./venv/lib/python3.10/site-packages (from django-allauth==0.58.0) (2.8.0)
Requirement already satisfied: requests>=2.0.0 in
./venv/lib/python3.10/site-packages (from django-allauth==0.58.0) (2.31.0)
Requirement already satisfied: python3-openid>=3.0.8 in
./venv/lib/python3.10/site-packages (from django-allauth==0.58.0) (3.2.0)
Requirement already satisfied: asgiref<4,>=3.5.2 in
./venv/lib/python3.10/site-packages (from
Django>=3.2->django-allauth==0.58.0) (3.7.2)
Requirement already satisfied: sqlparse>=0.2.2 in
./venv/lib/python3.10/site-packages (from
Django>=3.2->django-allauth==0.58.0) (0.4.4)
Requirement already satisfied: cryptography>=3.4.0 in
./venv/lib/python3.10/site-packages (from
pyjwt[crypto]>=1.7->django-allauth==0.58.0) (41.0.7)
Requirement already satisfied: defusedxml in
./venv/lib/python3.10/site-packages (from
python3-openid>=3.0.8->django-allauth==0.58.0) (0.7.1)
Requirement already satisfied: urllib3<3,>=1.21.1 in
./venv/lib/python3.10/site-packages (from
requests>=2.0.0->django-allauth==0.58.0) (2.1.0)
Requirement already satisfied: charset-normalizer<4,>=2 in
./venv/lib/python3.10/site-packages (from
requests>=2.0.0->django-allauth==0.58.0) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in
./venv/lib/python3.10/site-packages (from
requests>=2.0.0->django-allauth==0.58.0) (3.6)
Requirement already satisfied: certifi>=2017.4.17 in
./venv/lib/python3.10/site-packages (from
requests>=2.0.0->django-allauth==0.58.0) (2023.11.17)
Requirement already satisfied: oauthlib>=3.0.0 in
./venv/lib/python3.10/site-packages (from
requests-oauthlib>=0.3.0->django-allauth==0.58.0) (3.2.2)
Requirement already satisfied: typing-extensions>=4 in
./venv/lib/python3.10/site-packages (from
asgiref<4,>=3.5.2->Django>=3.2->django-allauth==0.58.0) (4.9.0)
Requirement already satisfied: cffi>=1.12 in
./venv/lib/python3.10/site-packages (from
cryptography>=3.4.0->pyjwt[crypto]>=1.7->django-allauth==0.58.0) (1.16.0)
Requirement already satisfied: pycparser in
./venv/lib/python3.10/site-packages (from
cffi>=1.12->cryptography>=3.4.0->pyjwt[crypto]>=1.7->django-allauth==0.58.0)
(2.21)
Building wheels for collected packages: django-allauth
Building wheel for django-allauth (pyproject.toml) ... done
Created wheel for django-allauth:
filename=django_allauth-0.58.0-py3-none-any.whl size=1157319
sha256=a430c552101d1ad47bc00b16d1c1d6df728afacdd13823927b4cbfb02c35dbfc
Stored in directory:
/local/mailman/.cache-ubuntu-22.04/pip/wheels/55/0a/79/e199827a18f310906c2a90b0e92b89c41daf21d2a502db6710
Successfully built django-allauth
Installing collected packages: django-allauth
Attempting uninstall: django-allauth
Found existing installation: django-allauth 0.59.0
Uninstalling django-allauth-0.59.0:
Successfully uninstalled django-allauth-0.59.0
Successfully installed django-allauth-0.58.0
(venv) mailman@new-server ~ (v3.3.9)> *mailman-web migrate*
System check identified some issues:
WARNINGS:
account.EmailAddress: (models.W036) MariaDB does not support unique
constraints with conditions.
HINT: A constraint won't be created. Silence this warning if you don't care
about it.
account.EmailAddress: (models.W043) MariaDB does not support indexes on
expressions.
HINT: An index won't be created. Silence this warning if you don't care
about it.
Operations to perform:
Apply all migrations: account, admin, auth, contenttypes,
django_mailman3, django_q, hyperkitty, postorius, sessions, sites,
socialaccount
Running migrations:
Applying account.0004_alter_emailaddress_drop_unique_email...Traceback
(most recent call last):
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/utils.py",
line 89, in _execute
return self.cursor.execute(sql, params)
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/mysql/base.py",
line 75, in execute
return self.cursor.execute(query, args)
File
"/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
179, in execute
res = self._query(mogrified_query)
File
"/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
330, in _query
db.query(q)
File
"/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/connections.py",
line 257, in query
_mysql.connection.query(self, query)
MySQLdb.OperationalError: (2013, 'Lost connection to MySQL server during
query')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/local/mailman/venv/bin/mailman-web", line 8, in <module>
sys.exit(main())
File
"/local/mailman/venv/lib/python3.10/site-packages/mailman_web/manage.py",
line 90, in main
execute_from_command_line(sys.argv)
File
"/local/mailman/venv/lib/python3.10/site-packages/django/core/management/__init__.py",
line 446, in execute_from_command_line
utility.execute()
File
"/local/mailman/venv/lib/python3.10/site-packages/django/core/management/__init__.py",
line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File
"/local/mailman/venv/lib/python3.10/site-packages/django/core/management/base.py",
line 402, in run_from_argv
self.execute(*args, **cmd_options)
File
"/local/mailman/venv/lib/python3.10/site-packages/django/core/management/base.py",
line 448, in execute
output = self.handle(*args, **options)
File
"/local/mailman/venv/lib/python3.10/site-packages/django/core/management/base.py",
line 96, in wrapped
res = handle_func(*args, **kwargs)
File
"/local/mailman/venv/lib/python3.10/site-packages/django/core/management/commands/migrate.py",
line 349, in handle
post_migrate_state = executor.migrate(
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/migrations/executor.py",
line 135, in migrate
state = self._migrate_all_forwards(
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/migrations/executor.py",
line 167, in _migrate_all_forwards
state = self.apply_migration(
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/migrations/executor.py",
line 252, in apply_migration
state = migration.apply(state, schema_editor)
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/migrations/migration.py",
line 130, in apply
operation.database_forwards(
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/migrations/operations/fields.py",
line 235, in database_forwards
schema_editor.alter_field(from_model, from_field, to_field)
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/base/schema.py",
line 788, in alter_field
self._alter_field(
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/base/schema.py",
line 858, in _alter_field
self.execute(self._delete_unique_sql(model, constraint_name))
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/base/schema.py",
line 199, in execute
cursor.execute(sql, params)
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/utils.py",
line 67, in execute
return self._execute_with_wrappers(
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/utils.py",
line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/utils.py",
line 84, in _execute
with self.db.wrap_database_errors:
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/utils.py", line
91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/utils.py",
line 89, in _execute
return self.cursor.execute(sql, params)
File
"/local/mailman/venv/lib/python3.10/site-packages/django/db/backends/mysql/base.py",
line 75, in execute
return self.cursor.execute(query, args)
File
"/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
179, in execute
res = self._query(mogrified_query)
File
"/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/cursors.py", line
330, in _query
db.query(q)
File
"/local/mailman/venv/lib/python3.10/site-packages/MySQLdb/connections.py",
line 257, in query
_mysql.connection.query(self, query)
django.db.utils.OperationalError: (2013, 'Lost connection to MySQL server
during query')
(venv) mailman@new-server ~ (v3.3.9) [0|1]> *more /etc/mailman3/settings.py*
# Mailman Web configuration file.
# /etc/mailman3/settings.py
# Get the default settings.
from mailman_web.settings.base import *
from mailman_web.settings.mailman import *
# Settings below supplement or override the defaults.
#: Default list of admins who receive the emails from error logging.
ADMINS = (
('Mailman Suite Admin', 'root@localhost'),
)
# Postgresql database setup.
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'mailman3web',
'USER': 'mailman3web',
# TODO: Replace this with the password.
'PASSWORD': '***********',
'HOST': 'localhost',
# PORT: set to empty string for default.
'PORT': '3306',
# OPTIONS: Extra parameters to use when connecting to the database.
#'OPTIONS': {
# Set sql_mode to 'STRICT_TRANS_TABLES' for MySQL. See
# https://docs.djangoproject.com/en/1.11/ref/
# databases/#setting-sql-mode
# 'init_command': "SET sql_mode='STRICT_TRANS_TABLES'",
# 'charset': 'utf8mb4',
#},
}
}
# 'collectstatic' command will copy all the static files here.
# Alias this location from your webserver to `/static`
STATIC_ROOT = '/local/mailman/web/static'
# enable the 'compress' command.
COMPRESS_ENABLED = True
# Make sure that this directory is created or Django will fail on start.
LOGGING['handlers']['file']['filename'] = '/local/mailman/web/logs/mailmanweb.log'
#: See https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
ALLOWED_HOSTS = [
"localhost", # Archiving API from Mailman, keep it.
"127.0.0.1",
# "lists.your-domain.org",
# Add here all production domains you have.
"*"
]
#: See https://docs.djangoproject.com/en/dev/ref/settings/#csrf-trusted-origins
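One thing worth trying for the "Lost connection to MySQL server during query" failure above is raising the client-side timeouts: the `account.0004` migration runs a long ALTER TABLE, and the mysqlclient driver accepts `connect_timeout`/`read_timeout`/`write_timeout` through Django's `OPTIONS`. This is only a sketch of that idea; whether it helps depends on where the disconnect actually originates (the server's own `wait_timeout` or `max_allowed_packet` may matter instead):

```python
# Sketch: the DATABASES entry from settings.py above, extended with
# client-side timeouts. The OPTIONS keys are mysqlclient connection
# parameters; the values here are guesses, not tested recommendations.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mailman3web',
        'USER': 'mailman3web',
        'PASSWORD': '***********',
        'HOST': 'localhost',
        'PORT': '3306',
        'OPTIONS': {
            'connect_timeout': 30,
            'read_timeout': 300,   # seconds; long ALTER TABLEs during migrate
            'write_timeout': 300,
        },
    }
}
```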
(venv) mailman@new-server ~ (v3.3.9)> *mysql -umailman3web -p -h localhost
mailman3web*
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 32
Server version: 10.6.12-MariaDB-0ubuntu0.22.04.1 Ubuntu 22.04
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input
statement.
MariaDB [mailman3web]> show tables;
+-------------------------------+
| Tables_in_mailman3web |
+-------------------------------+
| account_emailaddress |
| account_emailconfirmation |
| auth_group |
| auth_group_permissions |
| auth_permission |
| auth_user |
| auth_user_groups |
| auth_user_user_permissions |
| django_admin_log |
| django_content_type |
| django_mailman3_maildomain |
| django_mailman3_profile |
| django_migrations |
| django_q_ormq |
| django_q_schedule |
| django_q_task |
| django_session |
| django_site |
| hyperkitty_attachment |
| hyperkitty_email |
| hyperkitty_favorite |
| hyperkitty_lastview |
| hyperkitty_mailinglist |
| hyperkitty_profile |
| hyperkitty_sender |
| hyperkitty_tag |
| hyperkitty_tagging |
| hyperkitty_thread |
| hyperkitty_threadcategory |
| hyperkitty_vote |
| socialaccount_socialaccount |
| socialaccount_socialapp |
| socialaccount_socialapp_sites |
| socialaccount_socialtoken |
+-------------------------------+
34 rows in set (0.000 sec)
Best Regards,
Helio Loureiro
https://helio.loureiro.eng.br
https://github.com/helioloureiro
https://mastodon.social/@helioloureiro
On Mon, 18 Dec 2023 at 17:11, Mark Sapiro <mark(a)msapiro.net> wrote:
> On 12/18/23 6:24 AM, Helio Loureiro wrote:
> > Hi,
> >
> > Indeed it was the configuration. It was placed into
> > /etc/mailman3/mailman-web.py. After I changed it to
> > /etc/mailman3/settings.py a few things advanced a little bit more.
> >
> > I had to figure out how to fix mysqlclient installation since there
> isn't a
> > mention about it and the simple "pip install mysqlclient" was breaking
> with
> > pkg-config issues. But it did work at the end.
> >
> > Now I can see further messages on mailman3-web than before.
> >
> > (venv) mailman@new-server ~ (v3.3.9)> mailman-web migrate
> > System check identified some issues:
> >
> > WARNINGS:
> > account.EmailAddress: (models.W036) MariaDB does not support unique
> > constraints with conditions.
> > HINT: A constraint won't be created. Silence this warning if you don't
> care
> > about it.
> > account.EmailAddress: (models.W043) MariaDB does not support indexes on
> > expressions.
> > HINT: An index won't be created. Silence this warning if you don't care
> > about it.
> > Operations to perform:
> > Apply all migrations: account, admin, auth, contenttypes,
> > django_mailman3, django_q, hyperkitty, postorius, sessions, sites,
> > socialaccount
> > Running migrations:
> > Applying account.0004_alter_emailaddress_drop_unique_email...Traceback
> > (most recent call last):
>
>
> I'm not sure why there would be an issue with this migration, but there
> is a possible compatibility issue depending on how you installed things.
>
> django-mailman3<=1.3.11 is not compatible with django-allauth>=0.58.
>
> In your venv, try
> ```
> pip install django-allauth\<0.58
> ```
>
> --
> Mark Sapiro <mark(a)msapiro.net> The highway is for gamblers,
> San Francisco Bay Area, California better use your sense - B. Dylan
>
> _______________________________________________
> Mailman-users mailing list -- mailman-users(a)mailman3.org
> To unsubscribe send an email to mailman-users-leave(a)mailman3.org
> https://lists.mailman3.org/mailman3/lists/mailman-users.mailman3.org/
> Archived at:
> https://lists.mailman3.org/archives/list/mailman-users@mailman3.org/message…
>
> This message sent to helio(a)loureiro.eng.br
>
Re: admin/login/ cannot be accessed
by jean-christophe manciot
- Django version is 2.2.6-1ubuntu1
- Disabling HTTP2 in nginx means disabling it for all server blocks
listening on the same IP, which would degrade all other servers.
- Doing so leads to another error:
This page isn’t working
<mysite> didn’t send any data.
ERR_EMPTY_RESPONSE
- I cannot run nginx and apache on the same <ip_address>:443 port either.
I found no error in mailman3 or syslog logs.
In ```/etc/mailman3/mailman.cfg```, I have:
```
[logging.debian]
format: %(asctime)s (%(process)d) %(message)s
datefmt: %b %d %H:%M:%S %Y
propagate: no
level: debug
path: mailman.log
```
Yet, mailman.log does not seem to show debug level information.
On Wed, Dec 11, 2019 at 7:04 PM Abhilash Raj <maxking(a)asynchronous.in>
wrote:
>
>
> On Wed, Dec 11, 2019, at 9:25 AM, jean-christophe manciot wrote:
> > Ubuntu 20.04
> > python3-django 2:2.2.6-1ubuntu1
> > python3-django-hyperkitty 1.3.1 (built from sources)
> > mailman3-full 3.2.2-1
>
> Which version of Django are you using?
>
> >
> > Nginx server configuration:
> > ```
> > ...
> > ########
> > # Static
> > ########
> > location /favicon.ico
> > {
> > alias <mysite_dir>/static/hyperkitty/img/favicon.ico;
> > }
> > location /static/favicon.ico
> > {
> > alias <mysite_dir>/static/postorius/img/favicon.ico;
> > }
> > location /static/
> > {
> > alias <mysite_dir>/static/;
> > }
> >
> > #######################
> > # Upstream uwsgi server
> > #######################
> > location /
> > {
> > include /etc/nginx/uwsgi_params;
> > uwsgi_pass 127.0.0.1:<uwsgi_server_port>;
> > }
> > ...
> > ```
> > where:
> > - <mysite_dir> is a symlink to <django_dir>/static
> > - <uwsgi_server_port> matches the one defined in
> ```/etc/mailman3/uwsgi.ini```:
> > ```
> > [uwsgi]
> > # Port on which uwsgi will be listening.
> > uwsgi-socket = 127.0.0.1:<uwsgi_server_port>
> > ```
> >
>
> The config looks good to me in a quick glance.
>
>
> > All 3 systemd services run fine:
> > - mailman3
> > - mailman3-web
> > - qcluster
> >
> > I'm trying to login to the django administration pages.
> > I get the django administration login page at:
> > https://mysite/admin/login/
> > Logging in with the admin credentials leads to:
> > ```
> > This site can’t be reached
> > The webpage at https://mysite/admin/login/ might be temporarily down or
> > it may have moved permanently to a new web address.
> > ERR_HTTP2_PROTOCOL_ERROR
> > ```
> > This is very strange because it is the URL which I used to get the
> > login page in the first place.
>
>
> Looking at the error, it seems like something somewhere is re-directing to
> HTTP/2 or the request is based off of HTTP/2 and all the components in the
> stack don't support HTTP/2, leading to the error message.
>
> I haven't played a lot with HTTP/2 yet so I am not sure which specific
> component in the stack could be incompatible here.
>
> >
> > If I launch a test web server at another port with:
> > ```
> > <django_dir># python3 manage.py runserver <mysite_ip_address>:8080
> > Performing system checks...
> >
> > System check identified no issues (0 silenced).
> > December 11, 2019 - 17:50:48
> > Django version 2.2.6, using settings 'settings'
> > Starting development server at http://<mysite_ip_address>:8080/
> > Quit the server with CONTROL-C.
> > ```
> > and access it at ```http://<mysite_ip_address>:8080/admin/login/``` to
> > login with the same credentials as before, I get through and all the
> > django administration lines appear, although in a degraded layout:
> > ```
> > Site administration
> > Accounts
> > Email addresses Add Change
> > Authentication and Authorization
> > Groups Add Change
> > Users Add Change
> > Django Mailman 3
> > Mail domains Add Change
> > Profiles Add Change
> > ...
> > ```
> > Any idea what could be happening here?
>
>
> Degraded layout is due to missing static files since the development
> server that you spun off doesn't serve static files. So, that is okay.
>
> --
> thanks,
> Abhilash Raj (maxking)
>
--
Jean-Christophe
Re: Importing mbox files into archive defect with lines with From
by Stephen J. Turnbull
Alex Schuilenburg via Mailman-users writes:
> Nope, apologies. I'm happy to add the list back in - I thought I
> hit reply-to-list.
No apologies necessary.
> As an email separator agreed, provided the body has all "First "
> (preceded by a blank line and nothing preceding on the same line)
> suitably escaped.
That's not the way this works. You don't get to choose, only to
defend your system. See
https://www.jwz.org/doc/content-length.html
> I have to move the lists onto a new Debian 12 server using the
> native mailman 3.3.8 & mailman-web 0+20200530-2 packages.
I think the relevant package version is HyperKitty's. Mailman-Web
should just be a wrapper around HyperKitty and Postorius.
> > The preferred way is to dump the database to SQL, and then load
> > it in to the new database directly rather than downloading the
> > mbox files and importing.
>
> That's what I thought initially, but that failed as per
> https://lists.mailman3.org/archives/list/mailman-users@mailman3.org/message….
>
> My old installation appears to have the django_migrations table
> inconsistent with the state of the database, and Debian has been
> unresponsive so far.
Hm. I think it's more likely that the load overwrote the
django_migrations table with the old migrations table, but the
Debian-supplied database already had migrations applied on the
assumption that you're either creating a new Mailman instance or
upgrading in place. When you load the dumped database, that is
probably smart enough to delete tables before creating them (or
perhaps you just get lucky that the load doesn't try to delete or
rename columns) BUT new tables do NOT get that treatment. They just
sit around waiting for you to apply the migration that creates them
and "what migration doing?" KA-BOOM.
I suspect that "DROP DATABASE mailmanweb;" and then loading the dumped
database, followed by `mailman-web migrate` will work. (Usual caveat
of you should have a backup onsite, a backup in a bank vault, and a
backup store on the dark side of the moon before trying this!)
Thank you for going to Debian first (at about the same time is fine),
by the way. We really appreciate that. We try hard to make sure
Mailman works in all *our* common use cases, but distros have a
different set. It's very common that a distro will do something that
makes sense for their usual use cases but fails badly for cases
they didn't anticipate.
> Then I guess that is the case in Debian 12.
Maybe, it's often hard to tell where distros go wrong. If my
guesswork above is correct, it's simply the assumption that the user
is either doing a greenfield installation or an upgrade in place.
Surely those are the great majority of cases.
> I thought that ">From " would be escaped to ">>From ", and so on,
> so the escape could easily be reversed when imported.
Ah, you are an honest person. Do not commit crimes, my friend, you
will get caught. More devious thinkers quote messages by prepending
only ">" to the line. Or, knowing about From-stuffing, senders of
signed mail might pre-stuff From lines so that signature
validation succeeds by default. Either way, if you unstuff you will
break the message. Maybe ChatGPT-10 will get it right. ;-)
The only way to win is to not play the mbox game.
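The ambiguity is easy to reproduce with Python's own mailbox module, which writes the classic mboxo format: a body line starting with "From " gets ">"-stuffed on storage, after which it is indistinguishable from a line that said ">From " all along. A small self-contained demo (unrelated to HyperKitty's importer, just the format itself):

```python
import mailbox
import os
import tempfile

# One message whose body contains both a bare "From " line and a line
# that already reads ">From ". mboxo storage stuffs the former, making
# the two indistinguishable -- unstuffing cannot be done losslessly.
msg = mailbox.mboxMessage()
msg["From"] = "alice@example.com"
msg["Subject"] = "From-stuffing demo"
msg.set_payload(
    "quoting someone:\n"
    ">From the start it was ambiguous\n"   # already starts with >From
    "From here on it gets stuffed\n"       # becomes >From on disk
)

path = os.path.join(tempfile.mkdtemp(), "demo.mbox")
box = mailbox.mbox(path)
box.add(msg)
box.close()

raw = open(path).read()
# Both lines now begin with ">From " -- the original distinction is gone.
assert ">From the start it was ambiguous" in raw
assert ">From here on it gets stuffed" in raw
assert "\nFrom here on" not in raw
```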
> After all, I would expect that an export of the archive to mbox,
> followed by a delete of the archive, followed by a
> hyperkitty_import of the archive, should leave you at the same
> place.
You would expect, for sure. You would be wrong, because mbox is a
lossy format by design. (Or by lack of design, if you prefer.)
> Not with ">From " escapes in the new archives. In fact I
> also had a number of messages with "Message-ID: <>" and worse: all
> messages with attachments had the text/plain content empty.
I don't EVEN want to think why that might be.
> The following dump and import worked.
>
> oldhost> mysqldump --no-create-info --no-create-db --disable-keys
> --complete-insert mailman3web > mailman3web.sql
>
> newhost> mysql
> MariaDB [(none]> use mailman3web
> MariaDB [mailman3web]> source mailman3web.sql
Yeah!!
Except I forgot how to update the FAQ. Now I have to learn again!
:-)
Steve