My application uses a background queue handler to dispatch customized emails to mailing lists. It is currently in production on GAE, where it uses taskqueue.
On PA, I have a scheduled job to start the queue handler:
#!/usr/bin/env python
# for the PythonAnywhere environment:
# /home/ocsnedb/web2py/applications/init/private/ocsnedb_mail_queue.py
import logging
import socket
import sys
import subprocess
lock_socket = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
lock_id = "ocsnedb_mail_queue"  # should be unique; using your username as a prefix is the convention
try:
    lock_socket.bind('\0' + lock_id)
    logging.debug("Acquired lock %r", lock_id)
except socket.error:
    # socket already bound, so the task must already be running
    logging.info("Failed to acquire lock %r", lock_id)
    print("Failed to acquire lock, task must already be running")
    sys.exit()
subprocess.call(["python", "/home/ocsnedb/web2py/web2py.py", "-S", "init", "-M", "-R", "applications/init/mail_queue.py"])
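(For context, the abstract-namespace socket bind is the standard PythonAnywhere single-instance lock: binding a name starting with a NUL byte succeeds for only one process at a time, and the lock disappears automatically when the process exits. A minimal standalone sketch of the pattern, with illustrative names not taken from my actual script:)

```python
import socket

def acquire_lock(lock_id):
    """Bind an abstract Unix-domain socket name; return the socket on
    success, or None if another process already holds that name."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        s.bind('\0' + lock_id)  # leading NUL = abstract namespace (Linux)
        return s  # keep a reference alive, or the lock is released
    except socket.error:
        return None

first = acquire_lock("demo_lock")
second = acquire_lock("demo_lock")
# while `first` stays open, `second` comes back as None
```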
The actual queue processor is:
## in file /applications/init/mail_queue.py
# This is used both in the test environment on the PC, where it is started via mail_queue.cmd,
# and on PythonAnywhere, where it is run by /applications/init/ocsnedb_mail_queue.py,
# which in turn is run by the PythonAnywhere scheduler.
import time

mail = auth.settings.mailer  # auth and db come from the web2py environment (-S init -M)

while True:
    queue = db(db.emailqueue.id > 0).select(orderby=db.emailqueue.Created)
    for row in queue:
        while len(row.targets) > 0:
            e = row.targets[0]
            email = db.Emails[e]
            message = emailrender(row.subject, row.body, email, row.list, row.event)
            mail.send(to=email.Email, bcc=row.bcc, reply_to=row.reply_to,
                      subject=row.subject, message=message)
            row.update_record(targets=row.targets[1:])
        db(db.emailqueue.id == row.id).delete()
    db.commit()
    time.sleep(5)
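(To rule out the poll/sleep cycle itself, here is a stripped-down version of the loop with no web2py involved, using hypothetical names; run standalone it resumes after every sleep, which is why I suspect the environment rather than the code. The explicit flush is there because scheduled-task logs buffer stdout, which can make a loop look stalled when it is only quiet:)

```python
import sys
import time

def poll_loop(iterations, handler, interval=0.01):
    """Simplified poll/sleep cycle: call handler, flush output, sleep, repeat."""
    for i in range(iterations):
        handler(i)
        sys.stdout.flush()  # without this, buffered output may not appear in task logs
        time.sleep(interval)

ticks = []
poll_loop(3, ticks.append)
print(ticks)  # each iteration ran, so the loop resumed after every sleep
```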
On the PC, this process running in a cmd shell works perfectly. On PA, the existing queue is processed just fine when the job starts, but the loop apparently never resumes after the time.sleep(5) call, so new messages simply sit in the queue. The PA account is currently a free account, and I realize this won't work as-is due to the time limit on scheduled jobs; if/when I move production to PA I'll upgrade to a paid account, and in the worst case the queue handler would be inactive for up to an hour until the next scheduled-job run. But why doesn't the background loop work?