Mailman 3 in Docker on EC2 - running out of memory and puking everywhere.
Hi all -- really need some help. I'm tech literate, but definitely not a sysadmin or dev.
I'm following the tutorial at "Olay's Farmland" at https://www.olay.xyz/2018/01/01/deploy-mailman-3-on-aws-using-docker/ which seems to be a clone of the same content here https://xiaoxing.us/2018/01/01/deploy-mailman-3-on-aws-using-docker/
Easy enough to get most of the way ... set up EC2 instance, configure SES and add DNS settings with DKIM etc. Set up Postfix, test it. All working groovy.
I ran through the how-to all the way to setting up the containers and firing them up. This is where things start to get funky.
Once docker-compose is up, I run "curl http://172.19.199.3:8000/postorius/lists/" to test. The first time it said it couldn't connect to the database, so I nuked everything and restarted from scratch. The second time it said the same thing... I read around a bit, tried again, reran that curl, and it worked -- it returned the HTML of a page. But the VM started running slowly, taking forever to respond to keypresses.
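For what it's worth, this is roughly the sanity check I probably should have done before curling -- waiting for the database container to actually report ready. Service names here are the ones from the tutorial's compose file, so double-check against yours:

```shell
# Are all three containers actually "Up"?
docker-compose ps

# Has postgres finished initialising? (first boot takes a while)
docker-compose logs database | grep "ready to accept connections"

# Only then test Postorius:
curl -v http://172.19.199.3:8000/postorius/lists/
```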
Eventually it became unresponsive, so I rebooted the machine via the AWS console. After that, docker ps showed only the postgres db running, and docker-compose ps shows:

Name                       Command                         State      Ports
docker-mailman_database_1  docker-entrypoint.sh postgres   Up         5432/tcp
mailman-core               docker-entrypoint.sh maste ...  Exit 255   8001/tcp, 8024/tcp
mailman-web                docker-entrypoint.sh uwsgi ...  Exit 255   8000/tcp, 8080/tcp
So I rebuilt the containers -- same thing. It looks and feels like it's running out of memory.
I used the compose file from https://gist.githubusercontent.com/Yexiaoxing/833bfcc5d3e4e0c06a8b7f0bac7c4c... (with appropriate edits).
I have no idea how to proceed from here...
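Since it smells like OOM, here's roughly what I'm planning to try next: check actual memory use, and (as a stopgap guess on my part, not something from the tutorial) add a swap file so the kernel stops killing things. Size and path below are arbitrary choices:

```shell
# How much memory does the instance actually have free?
free -m

# Which container is eating it? (snapshot, not streaming)
docker stats --no-stream

# Stopgap guess: a 2 GB swap file (path and size are arbitrary)
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist it across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```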
So, to the logs... partial excerpts below (they're LONG):
$ docker logs docker-mailman_database_1
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
sh: locale: not found
performing post-bootstrap initialization ... No usable system locales were found.
Use the option "--debug" to see details.
ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgresql/data -l logfile start

waiting for server to start....LOG: database system was shut down at 2019-04-09 14:37:48 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
 done
server started
CREATE DATABASE

/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

LOG: received fast shutdown request
LOG: aborting any active transactions
LOG: autovacuum launcher shutting down
LOG: shutting down
waiting for server to shut down....LOG: database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.
LOG: database system was shut down at 2019-04-09 14:37:50 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
ERROR: relation "social_auth_usersocialauth" does not exist at character 15
STATEMENT: SELECT 1 from social_auth_usersocialauth
LOG: database system was interrupted; last known up at 2019-04-09 14:48:37 UTC
LOG: database system was not properly shut down; automatic recovery in progress
LOG: invalid record length at 0/1767BB8: wanted 24, got 0
LOG: redo is not required
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
LOG: received smart shutdown request
LOG: autovacuum launcher shutting down
LOG: shutting down
LOG: database system is shut down
LOG: database system was shut down at 2019-04-09 15:18:06 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
ERROR: duplicate key value violates unique constraint "auth_user_username_key"
DETAIL: Key (username)=(lists) already exists.
STATEMENT: INSERT INTO "auth_user" ("password", "last_login", "is_superuser", "username", "first_name", "last_name", "email", "is_staff", "is_active", "date_joined") VALUES ('!lS8pgTTV7CyhUbZItku00ZnUGxLQkVEuc3ENZKbV', NULL, true, 'lists', '', '', 'iotic-za@reasondigital.co.za', true, true, '2019-04-09T15:39:57.715036+00:00'::timestamptz) RETURNING "auth_user"."id"
TopMemoryContext: 8192 total in 1 blocks; 3480 free (0 chunks); 4712 used
smgr relation table: 24576 total in 2 blocks; 13008 free (4 chunks); 11568 used
WAL record construction: 49768 total in 2 blocks; 6584 free (0 chunks); 43184 used
Checkpointer: 8192 total in 1 blocks; 7992 free (3 chunks); 200 used
PrivateRefCount: 8192 total in 1 blocks; 2840 free (0 chunks); 5352 used
MdSmgr: 8192 total in 1 blocks; 8120 free (0 chunks); 72 used
Pending ops context: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Pending Ops Table: 8192 total in 1 blocks; 2840 free (5 chunks); 5352 used
LOCALLOCK hash: 8192 total in 1 blocks; 776 free (0 chunks); 7416 used
Timezones: 104120 total in 2 blocks; 2840 free (0 chunks); 101280 used
ErrorContext: 8192 total in 1 blocks; 8152 free (0 chunks); 40 used
Grand total: 235808 bytes in 13 blocks; 56632 free (12 chunks); 179176 used
ERROR: out of memory
DETAIL: Failed on request of size 4032.
CONTEXT: writing block 0 of relation base/16384/16797
TopMemoryContext: 8192 total in 1 blocks; 3480 free (0 chunks); 4712 used
smgr relation table: 24576 total in 2 blocks; 13008 free (4 chunks); 11568 used
WAL record construction: 49768 total in 2 blocks; 6584 free (0 chunks); 43184 used
Checkpointer: 8192 total in 1 blocks; 7992 free (2 chunks); 200 used
PrivateRefCount: 8192 total in 1 blocks; 2840 free (0 chunks); 5352 used
LOG: could not fork autovacuum worker process: Out of memory
MdSmgr: 8192 total in 1 blocks; 8120 free (0 chunks); 72 used
Pending ops context: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Pending Ops Table: 8192 total in 1 blocks; 2840 free (5 chunks); 5352 used
LOCALLOCK hash: 8192 total in 1 blocks; 776 free (0 chunks); 7416 used
LOG: could not fork autovacuum worker process: Out of memory
Timezones: 104120 total in 2 blocks; 2840 free (0 chunks); 101280 used
ErrorContext: 8192 total in 1 blocks; 8152 free (0 chunks); 40 used
Grand total: 235808 bytes in 13 blocks; 56632 free (11 chunks); 179176 used
LOG: could not fork autovacuum worker process: Out of memory
ERROR: out of memory
DETAIL: Failed on request of size 4032.
CONTEXT: writing block 0 of relation base/16384/16797
LOG: could not fork autovacuum worker process: Out of memory
WARNING: could not write block 0 of base/16384/16797
DETAIL: Multiple failures --- write error might be permanent.
TopMemoryContext: 8192 total in 1 blocks; 3480 free (0 chunks); 4712 used
smgr relation table: 24576 total in 2 blocks; 13008 free (4 chunks); 11568 used
WAL record construction: 49768 total in 2 blocks; 6584 free (0 chunks); 43184 used
Checkpointer: 8192 total in 1 blocks; 7992 free (2 chunks); 200 used
PrivateRefCount: 8192 total in 1 blocks; 2840 free (0 chunks); 5352 used
MdSmgr: 8192 total in 1 blocks; 8120 free (0 chunks); 72 used
Pending ops context: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Pending Ops Table: 8192 total in 1 blocks; 2840 free (5 chunks); 5352 used
LOCALLOCK hash: 8192 total in 1 blocks; 776 free (0 chunks); 7416 used
TopMemoryContext: 51040 total in 4 blocks; 1344 free (1 chunks); 49696 used
TopTransactionContext: 8192 total in 1 blocks; 6968 free (0 chunks); 1224 used
Timezones: 104120 total in 2 blocks; 2840 free (0 chunks); 101280 used
Statistics snapshot: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ErrorContext: 8192 total in 1 blocks; 8152 free (0 chunks); 40 used
Grand total: 235808 bytes in 13 blocks; 56632 free (11 chunks); 179176 used
Per-database function: 8192 total in 1 blocks; 776 free (0 chunks); 7416 used
Per-database table: 8192 total in 1 blocks; 776 free (0 chunks); 7416 used
Per-database function: 8192 total in 1 blocks; 776 free (0 chunks); 7416 used
Per-database table: 8192 total in 1 blocks; 776 free (0 chunks); 7416 used
Databases hash: 24576 total in 2 blocks; 15072 free (5 chunks); 9504 used
ERROR: out of memory
For uWSGI:
$ cat uwsgi.log
[pid: 49|app: 0|req: 1/1] 172.19.199.1 () {24 vars in 291 bytes} [Tue Apr 9 14:43:43 2019] GET /postorius/lists/ => generated 3710 bytes in 705 msecs (HTTP/1.1 200) 5 headers in 163 bytes (1 switches on core 0)
[pid: 39|app: 0|req: 1/1] 172.19.199.1 () {24 vars in 291 bytes} [Tue Apr 9 15:40:12 2019] GET /postorius/lists/ => generated 3710 bytes in 694 msecs (HTTP/1.1 200) 5 headers in 163 bytes (1 switches on core 0)

[ec2-user@ip-172-31-23-250 logs]$ cat uwsgi-error.log
cat: uwsgi-error.log: Permission denied
[ec2-user@ip-172-31-23-250 logs]$ sudo cat uwsgi-error.log
*** Starting uWSGI 2.0.15 (64bit) on [Tue Apr 9 14:38:03 2019] ***
compiled with version: 6.3.0 on 29 December 2017 00:16:31
os: Linux-4.14.77-70.59.amzn1.x86_64 #1 SMP Mon Nov 12 22:02:45 UTC 2018
nodename: mailman-web
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /opt/mailman-web
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
setgid() to 101
setuid() to 100
chdir() to /opt/mailman-web
your processes number limit is 3860
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address 0.0.0.0:8080 fd 8
uwsgi socket 1 bound to TCP address 0.0.0.0:8000 fd 9
Python version: 2.7.14 (default, Dec 19 2017, 17:52:21) [GCC 6.3.0]
Python main interpreter initialized at 0x55ca1d90e1e0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
[uwsgi-cron] command "./manage.py runjobs minutely" registered as cron task
[uwsgi-cron] command "./manage.py runjobs quarter_hourly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs hourly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs daily" registered as cron task
[uwsgi-cron] command "./manage.py runjobs monthly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs weekly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs yearly" registered as cron task
mapped 166144 bytes (162 KB) for 2 cores
*** Operational MODE: threaded ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x55ca1d90e1e0 pid: 1 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1)
spawned uWSGI worker 1 (pid: 49, cores: 2)
Tue Apr 9 14:38:04 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 51)
[uwsgi-daemons] spawning "./manage.py qcluster" (uid: 100 gid: 101)
14:38:07 [Q] INFO Q Cluster-52 starting.
14:38:07 [Q] INFO Process-1 guarding cluster at 67
14:38:07 [Q] INFO Process-1:1 ready for work at 68
14:38:07 [Q] INFO Process-1:3 pushing tasks at 70
14:38:07 [Q] INFO Q Cluster-52 running.
14:38:07 [Q] INFO Process-1:2 monitoring at 69
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 51 exited after 4 second(s)
Tue Apr 9 14:39:04 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 71)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 71 exited after 269 second(s)
Tue Apr 9 14:43:33 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 79)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 79 exited after 3 second(s)
Tue Apr 9 14:51:35 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 89)
*** Starting uWSGI 2.0.15 (64bit) on [Tue Apr 9 15:39:58 2019] ***
compiled with version: 6.3.0 on 29 December 2017 00:16:31
os: Linux-4.14.109-80.92.amzn1.x86_64 #1 SMP Mon Apr 1 23:07:39 UTC 2019
nodename: mailman-web
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /opt/mailman-web
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
setgid() to 101
setuid() to 100
chdir() to /opt/mailman-web
your processes number limit is 3860
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address 0.0.0.0:8080 fd 8
uwsgi socket 1 bound to TCP address 0.0.0.0:8000 fd 9
Python version: 2.7.14 (default, Dec 19 2017, 17:52:21) [GCC 6.3.0]
Python main interpreter initialized at 0x562e2c0b11e0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
[uwsgi-cron] command "./manage.py runjobs minutely" registered as cron task
[uwsgi-cron] command "./manage.py runjobs quarter_hourly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs hourly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs daily" registered as cron task
[uwsgi-cron] command "./manage.py runjobs monthly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs weekly" registered as cron task
[uwsgi-cron] command "./manage.py runjobs yearly" registered as cron task
mapped 166144 bytes (162 KB) for 2 cores
*** Operational MODE: threaded ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x562e2c0b11e0 pid: 1 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1)
spawned uWSGI worker 1 (pid: 39, cores: 2)
Tue Apr 9 15:39:59 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 41)
[uwsgi-daemons] spawning "./manage.py qcluster" (uid: 100 gid: 101)
15:40:02 [Q] INFO Q Cluster-43 starting.
15:40:02 [Q] INFO Process-1:1 ready for work at 58
15:40:02 [Q] INFO Process-1:2 monitoring at 59
15:40:02 [Q] INFO Process-1 guarding cluster at 57
15:40:02 [Q] INFO Process-1:3 pushing tasks at 60
15:40:02 [Q] INFO Q Cluster-43 running.
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 41 exited after 4 second(s)
Tue Apr 9 15:40:59 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 63)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 63 exited after 13 second(s)
Tue Apr 9 15:41:59 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 71)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 71 exited after 33 second(s)
Tue Apr 9 15:42:59 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 79)
16:33:36 [Q] ERROR could not translate host name "database" to address: Try again
16:35:17 [Q] INFO Process-1:3 stopped pushing tasks
16:35:25 [Q] ERROR reincarnated pusher Process-1:3 after sudden death
16:35:25 [Q] INFO Process-1:4 pushing tasks at 87
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 79 exited after 3146 second(s)
Tue Apr 9 16:35:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 88)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 88 exited after 3 second(s)
Tue Apr 9 16:36:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 96)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 96 exited after 3 second(s)
Tue Apr 9 16:37:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 104)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 104 exited after 3 second(s)
Tue Apr 9 16:38:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 112)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 112 exited after 3 second(s)
Tue Apr 9 16:39:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 120)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 120 exited after 3 second(s)
Tue Apr 9 16:40:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 128)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 128 exited after 3 second(s)
Tue Apr 9 16:41:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 136)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 136 exited after 3 second(s)
Tue Apr 9 16:42:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 144)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 144 exited after 3 second(s)
Tue Apr 9 16:43:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 152)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 152 exited after 3 second(s)
Tue Apr 9 16:44:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 160)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 160 exited after 3 second(s)
Tue Apr 9 16:45:00 2019 - [uwsgi-cron] running "./manage.py runjobs quarter_hourly" (pid 168)
[uwsgi-cron] command "./manage.py runjobs quarter_hourly" running with pid 168 exited after 2 second(s)
Tue Apr 9 16:45:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 176)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 176 exited after 3 second(s)
Tue Apr 9 16:46:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 184)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 184 exited after 2 second(s)
Tue Apr 9 16:47:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 192)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 192 exited after 3 second(s)
Tue Apr 9 16:48:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 200)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 200 exited after 3 second(s)
Tue Apr 9 16:49:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 208)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 208 exited after 3 second(s)
Tue Apr 9 16:50:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 216)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 216 exited after 3 second(s)
Tue Apr 9 16:51:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 224)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 224 exited after 3 second(s)
Tue Apr 9 16:52:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 232)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 232 exited after 3 second(s)
Tue Apr 9 16:53:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 240)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 240 exited after 2 second(s)
Tue Apr 9 16:54:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 248)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 248 exited after 2 second(s)
Tue Apr 9 16:55:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 256)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 256 exited after 3 second(s)
Tue Apr 9 16:56:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 264)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 264 exited after 3 second(s)
Tue Apr 9 16:57:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 272)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 272 exited after 3 second(s)
Tue Apr 9 16:58:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 280)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 280 exited after 3 second(s)
Tue Apr 9 16:59:25 2019 - [uwsgi-cron] running "./manage.py runjobs minutely" (pid 288)
[uwsgi-cron] command "./manage.py runjobs minutely" running with pid 288 exited after 3 second(s)
Tue Apr 9 17:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs quarter_hourly" (pid 296)
Tue Apr 9 17:00:00 2019 - [uwsgi-cron] running "./manage.py runjobs hourly" (pid 297)
(the two jobs crash simultaneously, so their tracebacks are interleaved in the log)
Traceback (most recent call last):
  File "./manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 363, in execute_from_command_line
Traceback (most recent call last):
  File "./manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
    utility.execute()
  File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 363, in execute_from_command_line
  File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 337, in execute
    utility.execute()
  File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 337, in execute
    django.setup()
    django.setup()
  File "/usr/local/lib/python2.7/site-packages/django/__init__.py", line 27, in setup
  File "/usr/local/lib/python2.7/site-packages/django/__init__.py", line 27, in setup
    apps.populate(settings.INSTALLED_APPS)
    apps.populate(settings.INSTALLED_APPS)
  File "/usr/local/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
  File "/usr/local/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
    app_config.import_models()
    app_config.import_models()
  File "/usr/local/lib/python2.7/site-packages/django/apps/config.py", line 202, in import_models
  File "/usr/local/lib/python2.7/site-packages/django/apps/config.py", line 202, in import_models
    self.models_module = import_module(models_module_name)
    self.models_module = import_module(models_module_name)
  File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
  File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
    __import__(name)
  File "/usr/local/lib/python2.7/site-packages/hyperkitty/models/__init__.py", line 28, in <module>
  File "/usr/local/lib/python2.7/site-packages/hyperkitty/models/__init__.py", line 28, in <module>
    from .email import Email, Attachment
    from .email import Email, Attachment
  File "/usr/local/lib/python2.7/site-packages/hyperkitty/models/email.py", line 37, in <module>
  File "/usr/local/lib/python2.7/site-packages/hyperkitty/models/email.py", line 37, in <module>
    from hyperkitty.lib.analysis import compute_thread_order_and_depth
    from hyperkitty.lib.analysis import compute_thread_order_and_depth
  File "/usr/local/lib/python2.7/site-packages/hyperkitty/lib/analysis.py", line 28, in <module>
  File "/usr/local/lib/python2.7/site-packages/hyperkitty/lib/analysis.py", line 28, in <module>
    import networkx as nx
    import networkx as nx
  File "/usr/local/lib/python2.7/site-packages/networkx/__init__.py", line 114, in <module>
  File "/usr/local/lib/python2.7/site-packages/networkx/__init__.py", line 114, in <module>
    import networkx.generators
    import networkx.generators
  File "/usr/local/lib/python2.7/site-packages/networkx/generators/__init__.py", line 6, in <module>
  File "/usr/local/lib/python2.7/site-packages/networkx/generators/__init__.py", line 6, in <module>
    from networkx.generators.classic import *
    from networkx.generators.classic import *
  File "/usr/local/lib/python2.7/site-packages/networkx/generators/classic.py", line 26, in <module>
  File "/usr/local/lib/python2.7/site-packages/networkx/generators/classic.py", line 26, in <module>
    from networkx.algorithms.bipartite.generators import complete_bipartite_graph
    from networkx.algorithms.bipartite.generators import complete_bipartite_graph
  File "/usr/local/lib/python2.7/site-packages/networkx/algorithms/__init__.py", line 57, in <module>
  File "/usr/local/lib/python2.7/site-packages/networkx/algorithms/__init__.py", line 57, in <module>
    import networkx.algorithms.connectivity
    import networkx.algorithms.connectivity
  File "/usr/local/lib/python2.7/site-packages/networkx/algorithms/connectivity/__init__.py", line 3, in <module>
  File "/usr/local/lib/python2.7/site-packages/networkx/algorithms/connectivity/__init__.py", line 3, in <module>
    from .connectivity import *
    from .connectivity import *
  File "/usr/local/lib/python2.7/site-packages/networkx/algorithms/connectivity/connectivity.py", line 13, in <module>
  File "/usr/local/lib/python2.7/site-packages/networkx/algorithms/connectivity/connectivity.py", line 13, in <module>
    from networkx.algorithms.flow import boykov_kolmogorov
    from networkx.algorithms.flow import boykov_kolmogorov
  File "/usr/local/lib/python2.7/site-packages/networkx/algorithms/flow/__init__.py", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/networkx/algorithms/flow/__init__.py", line 1, in <module>
    from .maxflow import *
    from .maxflow import *
MemoryError
MemoryError
18:16:37 [Q] ERROR could not translate host name "database" to address: Try again
...etc ...
For Mailman:

$ cat mailmanweb.log
ERROR 2019-04-09 20:05:48,878 934 hyperkitty.lib.utils Failed to update the fulltext index: could not translate host name "database" to address: Try again
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/hyperkitty/lib/utils.py", line 186, in run_with_lock
    fn(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/hyperkitty/search_indexes.py", line 87, in update_index
    update_cmd.update_backend("hyperkitty", "default")
  File "/usr/local/lib/python2.7/site-packages/haystack/management/commands/update_index.py", line 240, in update_backend
    total = qs.count()
  File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 363, in count
    return self.query.get_count(using=self.db)
  File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/query.py", line 498, in get_count
    number = obj.get_aggregation(using, ['__count'])['__count']
  File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/query.py", line 479, in get_aggregation
    result = compiler.execute_sql(SINGLE)
  File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 874, in execute_sql
    cursor = self.connection.cursor()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 254, in cursor
    return self._cursor()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 229, in _cursor
    self.ensure_connection()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
    self.connect()
  File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 94, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
    self.connect()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 189, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/usr/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 130, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
OperationalError: could not translate host name "database" to address: Try again
ERROR 2019-04-09 20:15:08,192 983 hyperkitty.lib.utils Failed to update the fulltext index: could not translate host name "database" to address: Try again
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/hyperkitty/lib/utils.py", line 186, in run_with_lock
    fn(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/hyperkitty/search_indexes.py", line 87, in update_index
    update_cmd.update_backend("hyperkitty", "default")
  File "/usr/local/lib/python2.7/site-packages/haystack/management/commands/update_index.py", line 240, in update_backend
    total = qs.count()
  File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 363, in count
    return self.query.get_count(using=self.db)
  File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/query.py", line 498, in get_count
    number = obj.get_aggregation(using, ['__count'])['__count']
  File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/query.py", line 479, in get_aggregation
    result = compiler.execute_sql(SINGLE)
  File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 874, in execute_sql
    cursor = self.connection.cursor()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 254, in cursor
    return self._cursor()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 229, in _cursor
    self.ensure_connection()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
    self.connect()
  File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 94, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
    self.connect()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 189, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/usr/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 130, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
OperationalError: could not translate host name "database" to address: Try again
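Those 'could not translate host name "database"' errors look like Docker's embedded DNS inside the compose network failing to resolve the service name (which could itself be fallout from the memory pressure). A few checks I'm going to try -- note the network name below is my guess based on the docker-mailman directory name, so check docker network ls for the real one:

```shell
# Is the compose network still there, and which containers are attached?
docker network ls
docker network inspect docker-mailman_default   # network name is a guess

# Can the web container resolve the "database" service name at all?
docker exec mailman-web getent hosts database
```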
On Tue, Apr 9, 2019, at 2:34 PM, roger.hislop@is.co.za wrote:
Hi all -- really need some help. I'm tech literate, but definitely not a sysadmin or dev.
I'm following the tutorial at "Olay's Farmland" at https://www.olay.xyz/2018/01/01/deploy-mailman-3-on-aws-using-docker/ which seems to be a clone of the same content here https://xiaoxing.us/2018/01/01/deploy-mailman-3-on-aws-using-docker/
The right place to get help regarding these guides would be the author of the posts, who, AFAIK, has no affiliation with the Mailman Core Team.
On a (very) quick glance, it seems fine.
Easy enough to get most of the way ... set up EC2 instance, configure SES and add DNS settings with DKIM etc. Set up Postfix, test it. All working groovy.
I run through the how to all the way through setting up the containers and firing it up. This is where things start to get funky.
Once the docker-compose is run, I run "curl http://172.19.199.3:8000/postorius/lists/" to test. First time it said it couldn't connect to database. So I nuked and restarted from scratch. Second time it said the same thing... was reading around the place and tried again, and then reran that curl and it worked -- gave me html from a page. But the VM started running slowly --taking forever to follow keypresses.
Eventually became unresponsive. So I rebooted the machine via AWS console, and docker ps showed only the postgres db running. docker-compose ps shows:

Name                       Command                         State      Ports
docker-mailman_database_1  docker-entrypoint.sh postgres   Up         5432/tcp
mailman-core               docker-entrypoint.sh maste ...  Exit 255   8001/tcp, 8024/tcp
mailman-web                docker-entrypoint.sh uwsgi ...  Exit 255   8000/tcp, 8080/tcp

So rebuilt the container -- same thing. It looks and feels like it's running out of memory.
I used the compose file from https://gist.githubusercontent.com/Yexiaoxing/833bfcc5d3e4e0c06a8b7f0bac7c4c... (with appropriate edits).
How much memory do you have in your VM? Have you tried using a bigger VM?
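If out-of-memory is the suspicion, a quick way to confirm it on the host is to check free memory, look for OOM-killer entries, and (if the daemon is up) check per-container usage. This is a sketch: the syslog path is a guess for Debian/Ubuntu-style systems, and the docker call is guarded since the daemon may be down.

```shell
#!/bin/sh
# Show total/used memory and swap on the host.
free -m

# Look for kernel OOM-killer entries (log path varies by distro;
# /var/log/syslog is assumed here).
grep -i "out of memory" /var/log/syslog 2>/dev/null \
  || echo "no OOM entries found in /var/log/syslog"

# Per-container memory usage, if docker is available and running.
if command -v docker >/dev/null 2>&1; then
  docker stats --no-stream || echo "docker daemon not reachable"
else
  echo "docker not installed"
fi
```

Exit code 137 or State "Exit 255" after the host locks up, combined with OOM-killer lines in the log, is a strong sign the instance simply has too little RAM.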
thanks Abhilash
Hi,
I assume a micro instance has been used, for the obvious reason: to leverage the AWS Free Tier.
Unfortunately it won't work for Mailman 3 with HyperKitty because of the applications' large memory footprint.
However, a Postorius-only version works just fine on a micro instance.
I'm providing free AMIs for both cases with zero configuration needed (but without SES integration) in the AWS Marketplace; links can be found on the page https://mailman3.com/aws/
Best regards, Danil Smirnov
Thanks Danil, Abhilash ... moved the instance to a *small, rebooted and restarted the containers. Boom, all working.
The guide is fine on the surface, but the middle section is a direct copy-paste from the Mailman3 docker instructions page, and there are a number of small but important mistakes (syntax errors, for example) -- and the page owner provides no contact details, or hints on how to google for them.
I guess I'm so used to out-of-memory errors being a symptom of a giant bork that I don't take them as a sign I don't have enough memory.
OK -- so everything is almost good... but nginx is still showing the default page.
Could it be as stupid as not using the right single quotes around the domain name (hostname in this case)?
I tried this config saved in /etc/nginx/conf.d/listman.blahblah.org.za.conf
server {
listen 80;
server_name 'listman.blahblah.org.za';
location /static/ {
root /var/spool/mailman-web/;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://127.0.0.1:8000/;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# Error pages
error_page 500 502 503 504 /500.html;
location = /500.html {
root /var/spool/mailman-web/;
}
}
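Before suspecting the quoting, it's worth checking that the config parses and that the vhost actually matches a by-name request. A sketch (the hostname is the one from this thread; calls are guarded so the snippet degrades gracefully where nginx isn't installed):

```shell
#!/bin/sh
HOST=listman.blahblah.org.za   # hostname from this thread

if command -v nginx >/dev/null 2>&1; then
  # Validate syntax of all loaded configuration.
  sudo nginx -t || echo "config test failed"
  # Request the site by name from the local server; if the default
  # page still comes back, this server_name is not matching.
  curl -s -o /dev/null -w "%{http_code}\n" -H "Host: $HOST" http://127.0.0.1/ \
    || echo "request to local nginx failed"
else
  echo "nginx not installed on this machine"
fi
```

If the default page is served for a by-name request, nginx is falling through to its default server block, which points at the server_name (or a competing default vhost) rather than the quotes.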
I also tried the config in the Olay Farm cookbook
server {
listen 80;
server_name listman.blahblah.org.za;
location /static/ {
alias /opt/mailman/web/static/;
}
location / {
include uwsgi_params;
uwsgi_pass 172.19.199.3:8080;
uwsgi_read_timeout 300;
}
}
Even tried the config at https://wiki.list.org/DOC/Mailman%203%20installation%20experience?action=AttachFile&do=view&target=lm3o_nginx.txt but when I test the config I get:
nginx: [emerg] a duplicate default server for 0.0.0.0:80 in /etc/nginx/nginx.conf:41
nginx: configuration file /etc/nginx/nginx.conf test failed
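That "duplicate default server" emerg means two server blocks both claim the default for 0.0.0.0:80 -- distros commonly ship one in nginx.conf or sites-enabled, and the wiki config adds another. A sketch to locate the competing listen directives (guarded for machines without /etc/nginx):

```shell
#!/bin/sh
# List every listen directive so the two competing defaults are visible.
# Fix: remove "default_server" (or the whole shipped default vhost) from
# all but one of them, then re-run "nginx -t".
if [ -d /etc/nginx ]; then
  grep -Rn "listen" /etc/nginx/ || echo "no listen directives found"
else
  echo "/etc/nginx not found"
fi
```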
And finally the instructions given here https://github.com/Yexiaoxing/mailman-on-aws/blob/master/05-nginx-proxy.md
I'm basically flailing in the dark here...
participants (3)
- Abhilash Raj
- Danil Smirnov
- roger.hislop@is.co.za