Forums

Signal Kill "process does not exist"

I'm having trouble getting this signal thing to work. I have a receive script and a send script. I run the receive script and then the send script, and it works most of the time. However, sometimes it will randomly give me:

Traceback (most recent call last):
  File "sender.py", line 8, in <module>
    os.kill(PID, signal.SIGUSR1)
OSError: [Errno 3] No such process

or

 File "sender.py", line 8, in <module>
    os.kill(PID, signal.SIGUSR1)
OSError: [Errno 1] Operation not permitted

Which makes no sense, since I can have two consoles both running the send script, with one sending the signal successfully while the other gives that error, even though both are reading the same PID. So I'm not quite sure what I'm doing wrong. Here are both scripts.

receive:

import time
import signal
import os

with open("processid.txt","w+") as processPID:
    processPID.write(str(os.getpid()))

received_signal = False
def handler(signum, frame):
    global received_signal
    received_signal = True

signal.signal(signal.SIGUSR1, handler)

while True:
    time.sleep(1)
    print("waiting for signal")
    print(str(os.getpid()))
    if received_signal:
        received_signal = False
        print('signal received')

send:

import os
import signal
with open("processid.txt","r") as pidinfo:
    PID = int(pidinfo.read())
print(PID)
os.kill(PID, signal.SIGUSR1)

If these could also be improved upon in any way I'd appreciate any tips

Different consoles can potentially be running on different servers, so that would explain the variation you're seeing. If your two consoles are on the same server, then your code will work, but if they're on different ones, when you send the signal it will either be going to a process ID that doesn't exist on the server, causing the "No such process" error, or it will go to a process that's owned by another user and you'll get the "Operation not permitted".
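Incidentally, both failure modes can be caught explicitly in the sender, since Python raises distinct OSError subclasses for them. A minimal sketch (the `send_signal` wrapper is an illustrative helper of mine, not part of the scripts above):

```python
import os
import signal

def send_signal(pid):
    """Send SIGUSR1 to pid; return True on success, False on either
    of the two failures described above."""
    try:
        os.kill(pid, signal.SIGUSR1)
        return True
    except ProcessLookupError:
        # errno 3 -- no process with that PID exists on this server
        print("no such process on this server")
        return False
    except PermissionError:
        # errno 1 -- the PID exists here but belongs to another user
        print("operation not permitted (PID owned by another user)")
        return False
```

That at least turns the crash into something the sender can report or retry on.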

Allocation of a console to a server happens when you create it, so if you're doing this in order to experiment with signals, then your best bet is probably to start a console for your first process, run hostname to find out which server it's on, then start more new consoles until you get another on the same server. Alternatively you could use just one console, and background the first process.
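(If it's more convenient to check from inside Python, socket.gethostname gives the same answer as the hostname command; signals can only work if it prints the same name in both consoles:)

```python
import socket

# Run this in each console; the names must match for os.kill to reach
# the other process.
print(socket.gethostname())
```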

If you're looking into using signals for something that's not just experimentation, perhaps there's a different way to achieve your aim?

Right now I have a django app and a separate long running task. The task is running a continuous loop, and on each iteration it's checking the database for changes. I wanted to change it so that the loop is still running, but it only checks the database after receiving a signal when data has been posted through the django app, in a similar way to the code I posted.

Or can I simply place the background task in the django project itself so it runs when I start the django app? Though it needs to be running indefinitely. I currently have a scheduled task that checks every so often that it's still running.

Ah, I see. That definitely won't work with signals -- web app code runs on different servers to scheduled task code (and, indeed, can move from server to server as we load-balance the cluster).

To be honest, I think polling is your best solution right now. If you're worried about the load on the database from very frequent polling, perhaps you could have the Django code create a file to say "stuff needs to be done", and have the task poll that?
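A sketch of what the task side of that could look like -- the file name and the check_and_clear_flag helper are illustrative names of mine, not an existing API:

```python
import os

FLAG_PATH = "needs_checking.flag"  # hypothetical path; both sides must agree on it

def check_and_clear_flag(path=FLAG_PATH):
    """Return True and remove the flag file if it exists, else False.

    Removing the file (rather than rewriting its contents) means that if
    another process deleted it first, we simply see "no work to do".
    """
    try:
        os.remove(path)
        return True
    except FileNotFoundError:
        return False

# In the long-running task, the loop would then become something like:
#
# while True:
#     time.sleep(1)
#     if check_and_clear_flag():
#         ...query the database here...
```

The Django side would just need to create that file whenever data is posted.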

(Just in case -- because the filesystem is networked, inotify won't work, so you can't use that to watch for changes to the file. That won't matter if you're polling, but I thought I'd mention it to save you time in case you considered that as an alternative.)

Ok so every time someone submits data through django I'd have it change some value in a txt file to True. Then with each loop iteration in the background task it will read the txt file and, if it's true, change it back to False and then query the database.

Would this be more efficient than querying the database every iteration?

I think I'd personally use the existence/non-existence of the file as the signal rather than its contents -- you can create a file atomically using the os.link function, and remove it likewise atomically using os.unlink.

And yes, checking for a file's existence (or, indeed, its contents) would definitely be faster than scanning the DB.

Ok that sounds like a good idea thanks, I think I'll go with that method. Just one more thing, if you don't mind: is there any benefit to using os.link vs os.open to create the file?

I think Giles was thinking of something else: os.mknod (instead of os.link), but you could also just use the built-in open(path, "w").close() as well (note that os.open returns a raw file descriptor, not a file object, so you'd need os.close on that one).
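For the record, an atomic create via os.open with the O_CREAT | O_EXCL flags looks something like this (set_flag is an illustrative name of mine):

```python
import os

def set_flag(path):
    """Create the flag file atomically; a no-op if it already exists.

    O_EXCL makes the creation fail with FileExistsError rather than
    silently reusing an existing file, so exactly one creator "wins".
    """
    try:
        os.close(os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY))
    except FileExistsError:
        pass  # an earlier request already set the flag
```

Because only the file's existence carries the signal, it never needs any contents.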

You can still use os.unlink to delete the file, though.

Ok thanks