Error after HyperKitty 1.3.x upgrade
Hi!
After upgrading to HyperKitty 1.3.x I get the following error for qcluster:
File ".../lib/python3.7/site-packages/django_q/cluster.py", line 300, in pusher task = SignedPackage.loads(task[1]) File ".../lib/python3.7/site-packages/django_q/signing.py", line 31, in loads serializer=PickleSerializer) File ".../lib/python3.7/site-packages/django_q/core_signing.py", line 38, in loads return serializer().loads(data) File ".../lib/python3.7/site-packages/django_q/signing.py", line 44, in loads return pickle.loads(data) AttributeError: Can't get attribute 'process_task_result' on <module 'hyperkitty.tasks' from '.../lib/python3.7/site-packages/hyperkitty/tasks.py'>
How do I recover from that? It seems like there are still tasks persisted in the qcluster queue that were enqueued by the old 1.2.x code with its old locking implementation.
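If I understand it correctly, pickle only stores a reference to the callable, so loading an old task package fails as soon as the attribute is gone from hyperkitty.tasks. A minimal sketch of that failure mode (plain Python, not HyperKitty code, with a stand-in function defined locally):

    import pickle

    def process_task_result(result):
        """Stand-in for the callable the 1.2.x tasks module used to export."""
        return result

    # At enqueue time pickle stores only a reference to the function
    # (here: __main__.process_task_result), not the function body itself.
    payload = pickle.dumps(process_task_result)

    # Simulate the upgrade: the attribute disappears from the module.
    del process_task_result

    try:
        pickle.loads(payload)
    except AttributeError as exc:
        # Can't get attribute 'process_task_result' on <module '__main__' ...>
        print(exc)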
Regards, Florian Schulze
On 6/29/20 1:38 AM, Florian Schulze wrote:
Hi!
After upgrading to HyperKitty 1.3.x I get the following error for qcluster:
File ".../lib/python3.7/site-packages/django_q/cluster.py", line 300, in pusher task = SignedPackage.loads(task[1]) File ".../lib/python3.7/site-packages/django_q/signing.py", line 31, in loads serializer=PickleSerializer) File ".../lib/python3.7/site-packages/django_q/core_signing.py", line 38, in loads return serializer().loads(data) File ".../lib/python3.7/site-packages/django_q/signing.py", line 44, in loads return pickle.loads(data) AttributeError: Can't get attribute 'process_task_result' on <module 'hyperkitty.tasks' from '.../lib/python3.7/site-packages/hyperkitty/tasks.py'>
How do I recover from that? It seems like there are still persisted tasks in the qcluster queue using the old code from 1.2.x with the old locking implementation.
Have you stopped and started qcluster since upgrading?
Are you running the Django periodic jobs?
--
Mark Sapiro <mark@msapiro.net>        The highway is for gamblers,
San Francisco Bay Area, California    better use your sense - B. Dylan
On 29 Jun 2020, at 23:40, Mark Sapiro wrote:
On 6/29/20 1:38 AM, Florian Schulze wrote:
Hi!
After upgrading to HyperKitty 1.3.x I get the following error for qcluster:
File ".../lib/python3.7/site-packages/django_q/cluster.py", line 300, in pusher task = SignedPackage.loads(task[1]) File ".../lib/python3.7/site-packages/django_q/signing.py", line 31, in loads serializer=PickleSerializer) File ".../lib/python3.7/site-packages/django_q/core_signing.py", line 38, in loads return serializer().loads(data) File ".../lib/python3.7/site-packages/django_q/signing.py", line 44, in loads return pickle.loads(data) AttributeError: Can't get attribute 'process_task_result' on <module 'hyperkitty.tasks' from '.../lib/python3.7/site-packages/hyperkitty/tasks.py'>
How do I recover from that? It seems like there are still persisted tasks in the qcluster queue using the old code from 1.2.x with the old locking implementation.
Have you stopped and started qcluster since upgrading?
Yes
Are you running the Django periodic jobs?
They were disabled because of the errors (someone else did the upgrade). I have enabled them again and will watch it and report back.
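For the record, the job buckets can also be run once by hand from a Django shell for the Mailman/HyperKitty web project; a sketch assuming the django-extensions "runjobs" command that the documented cron entries call:

    # Inside e.g. "django-admin shell" with the HyperKitty settings module loaded,
    # run each job bucket once (same effect as the documented cron entries).
    from django.core.management import call_command

    for frequency in ("minutely", "quarter_hourly", "hourly", "daily",
                      "weekly", "monthly", "yearly"):
        call_command("runjobs", frequency)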
Regards, Florian Schulze
On 30 Jun 2020, at 8:22, Florian Schulze wrote:
On 29 Jun 2020, at 23:40, Mark Sapiro wrote:
[quoted text snipped]
Have you stopped and started qcluster since upgrading?
Yes
Are you running the Django periodic jobs?
They were disabled because of the errors (someone else did the upgrade). I have enabled them again and will watch it and report back.
The cronjobs seem to work fine, no errors so far.
The queue doesn't seem to change at all. In "qmonitor" there is nothing listed and the statusline remains the same at
    ORM  default  Queued 303(118)  Success 100  Failures 0
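For what it's worth, the same counters can be read from a Django shell; a sketch assuming the ORM broker that the "ORM default" line suggests is in use:

    # Inspect django_q's queue and result counts via its models.
    from django_q.models import OrmQ, Success, Failure

    print("queued:  ", OrmQ.objects.count())      # pending task packages
    print("success: ", Success.objects.count())   # stored successful results
    print("failures:", Failure.objects.count())   # stored failed results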
Regards, Florian Schulze
On 2 Jul 2020, at 8:38, Florian Schulze wrote:
On 30 Jun 2020, at 8:22, Florian Schulze wrote:
[quoted text snipped]
The cronjobs seem to work fine, no errors so far.
The queue doesn't seem to change at all. In "qmonitor" there is nothing listed and the statusline remains the same at
    ORM  default  Queued 303(118)  Success 100  Failures 0
I managed to figure out how to inspect the queue in the admin interface. I just deleted all the repeatedly failing tasks and ran all cron jobs. I hope it is fine now.
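In case it helps someone else, the same cleanup should also be possible from a Django shell instead of the admin interface, assuming the ORM broker is in use (note that this drops legitimately pending tasks as well):

    # Delete the stale queued task packages and the recorded failures.
    # WARNING: this also removes any tasks that were still waiting to run.
    from django_q.models import OrmQ, Failure

    OrmQ.objects.all().delete()      # pending queue entries
    Failure.objects.all().delete()   # failed task results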
Regards, Florian Schulze