
I need to run a shell command asynchronously from a Python script. By this I mean that I want my Python script to continue running while the external command goes off and does whatever it needs to do.

I read this post:

Calling an external command in Python

I then went off and did some testing, and it looks like os.system() will do the job provided that I use & at the end of the command so that I don't have to wait for it to return. What I am wondering is if this is the proper way to accomplish such a thing? I tried commands.call() but it will not work for me because it blocks on the external command.

Please let me know if using os.system() for this is advisable or if I should try some other route.


10 Answers


subprocess.Popen does exactly what you want.

from subprocess import Popen
p = Popen(['watch', 'ls']) # something long running
# ... do other stuff while subprocess is running
p.terminate()

(Edit to complete the answer from comments)

The Popen instance can do various other things: you can poll() it to see if it is still running, and you can communicate() with it to send it data on stdin and wait for it to terminate.
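A minimal sketch of those calls (the echoing `python -c` child here is just an illustrative stand-in for a real long-running command):

```python
import subprocess
import sys

# a child that echoes its stdin back; Popen returns immediately
p = subprocess.Popen(
    [sys.executable, '-c', 'import sys; sys.stdout.write(sys.stdin.read())'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

still_running = p.poll()            # None while the child waits on stdin
out, _ = p.communicate('hello\n')   # send stdin, close it, wait for exit
finished = p.poll()                 # the exit code, 0 on success
```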

Ali Afshar
    You can also use poll() to check if the child process has terminated, or use wait() to wait for it to terminate. – Adam Rosenfield Mar 11 '09 at 22:09
  • Adam, very true, although it could be better to use communicate() to wait because that has better handling of in/out buffers and there are situations where flooding these might block. – Ali Afshar Mar 11 '09 at 22:11
  • Adam: docs say "Warning This will deadlock if the child process generates enough output to a stdout or stderr pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that. " – Ali Afshar Mar 11 '09 at 22:12
  • communicate() and wait() are blocking operations, though. You won't be able to parallelize commands like the OP seems to ask for if you use them. – cdleary Mar 11 '09 at 22:24
  • cdleary is absolutely correct; it should be mentioned that communicate and wait do block, so only call them when you are waiting for things to shut down. (Which you should really do, to be well-behaved.) – Ali Afshar Mar 11 '09 at 22:29
  • Of course, calling communicate on one subprocess will not block other running subprocesses... – Ali Afshar Mar 11 '09 at 22:33
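A small sketch of that last point, using throwaway `python -c` children as stand-ins for real commands: calling communicate() on one child does not stop the others.

```python
import subprocess
import sys

# two children: one quick, one slower
slow = subprocess.Popen(
    [sys.executable, '-c', 'import time; time.sleep(0.5); print("slow")'],
    stdout=subprocess.PIPE, text=True)
quick = subprocess.Popen(
    [sys.executable, '-c', 'print("quick")'],
    stdout=subprocess.PIPE, text=True)

out_quick, _ = quick.communicate()  # blocks only on the quick child
out_slow, _ = slow.communicate()    # the slow child kept running meanwhile
```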

If you want to run many processes in parallel and then handle them when they yield results, you can use polling like in the following:

from subprocess import Popen, PIPE
import time

running_procs = [
    Popen(['/usr/bin/my_cmd', '-i', path], stdout=PIPE, stderr=PIPE)
    for path in '/tmp/file0 /tmp/file1 /tmp/file2'.split()]

while running_procs:
    for proc in running_procs:
        retcode = proc.poll()
        if retcode is not None:  # Process finished.
            running_procs.remove(proc)
            break
    else:  # No process is done; wait a bit and check again.
        time.sleep(.1)
        continue

    # Here, `proc` has finished with return code `retcode`
    if retcode != 0:
        pass  # Error handling goes here.
    handle_results(proc.stdout)

The control flow there is a little bit convoluted because I'm trying to make it small -- you can refactor to your taste. :-)

This has the advantage of servicing the early-finishing processes first. If you call communicate on the first running process and that one turns out to run the longest, the other processes may have already finished, and their results will have been sitting there unhandled while you waited.

yizzlez
cdleary
    @Tino It depends on how you define busy-wait. See [What is the difference between busy-wait and polling?](http://stackoverflow.com/questions/10594426/) – Piotr Dobrogost Sep 18 '12 at 19:20
    Is there any way to poll a set of processes not only one? – Piotr Dobrogost Sep 18 '12 at 19:21
    note: it might hang if a process generates enough output. You should consume stdout concurrently if you use PIPE (there are (too many but not enough) warnings in the subprocess' docs about it). – jfs Nov 22 '12 at 16:31
  • @PiotrDobrogost: you could use `os.waitpid` directly which allows to check whether *any* child process has changed its status. – jfs Dec 21 '13 at 05:44
    use `['/usr/bin/my_cmd', '-i', path]` instead of `['/usr/bin/my_cmd', '-i %s' % path]` – jfs Apr 12 '14 at 02:43
  • This will fail in some cases, e.g. running_procs = ['sleep 5', 'sleep 2']. The code after the loop can be reached the first time around with retcode = None, when you have polled all running_procs and none has finished. That is not an error case; you just need to re-poll. The correct check would be `if retcode is not None and retcode != 0: # Error handling.` – ramgo Jun 30 '15 at 20:10
    Just wonder, how come you use `proc.stdout` outside of `for proc in running_procs:` loop where you declared `proc`? – Anar Salimkhanov May 05 '20 at 20:24
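A rough POSIX-only sketch of the os.waitpid suggestion from the comments (the sleeping `python -c` children are illustrative): waitpid(-1, 0) blocks until *any* child exits, avoiding the sleep-and-poll loop.

```python
import os
import subprocess
import sys

# two children with different runtimes (illustrative sleeps)
children = [
    subprocess.Popen([sys.executable, '-c', f'import time; time.sleep({t})'])
    for t in (0.1, 0.3)]
pending = {p.pid for p in children}

while pending:
    pid, status = os.waitpid(-1, 0)  # block until *any* child exits
    pending.discard(pid)
```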

What I am wondering is if this [os.system()] is the proper way to accomplish such a thing?

No. os.system() is not the proper way. That's why everyone says to use subprocess.

For more information, read http://docs.python.org/library/os.html#os.system

The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function.

Use the subprocess module. Check especially the Replacing Older Functions with the subprocess Module section.
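For example, an os.system(...) call with a trailing & can be replaced by a non-blocking Popen roughly like this (the child command here is illustrative):

```python
import subprocess
import sys

# os.system('some_cmd args &') becomes: start the child and keep going
p = subprocess.Popen([sys.executable, '-c', 'print("working")'],
                     stdout=subprocess.DEVNULL)
# ... the script carries on immediately; reap the child when convenient ...
rc = p.wait()
```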

S.Lott

This is covered by Python 3 Subprocess Examples under "Wait for command to terminate asynchronously":

import asyncio

async def main():
    proc = await asyncio.create_subprocess_exec(
        'ls', '-lha',
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE)

    # do something else while ls is working

    # if proc takes very long to complete, the CPUs are free to use cycles for
    # other processes
    stdout, stderr = await proc.communicate()

asyncio.run(main())

The process starts running as soon as await asyncio.create_subprocess_exec(...) returns. If it hasn't finished by the time you call await proc.communicate(), that call will wait in order to give you your output. If it has finished, proc.communicate() will return immediately.
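To make that behaviour concrete, here is a self-contained sketch where the child (an illustrative `python -c` sleep) runs while the script does other work before collecting the output:

```python
import asyncio
import sys

async def main():
    # the child sleeps briefly and prints; it stands in for any slow command
    proc = await asyncio.create_subprocess_exec(
        sys.executable, '-c', 'import time; time.sleep(0.3); print("done")',
        stdout=asyncio.subprocess.PIPE)
    # "do something else" happens here while the child runs
    other_work = sum(range(10))
    stdout, _ = await proc.communicate()  # waits only until the child exits
    return other_work, stdout.decode().strip()

other_work, out = asyncio.run(main())
```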

The gist here is similar to Terrel's answer, but I think Terrel's answer overcomplicates things.

See asyncio.create_subprocess_exec for more information.

gerrit

The accepted answer is very old.

I found a better modern answer here:

https://kevinmccarthy.org/2016/07/25/streaming-subprocess-stdin-and-stdout-with-asyncio-in-python/

and made some changes:

  1. make it work on Windows
  2. make it work with multiple commands

import sys
import asyncio

if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())


async def _read_stream(stream, cb):
    while True:
        line = await stream.readline()
        if line:
            cb(line)
        else:
            break


async def _stream_subprocess(cmd, stdout_cb, stderr_cb):
    try:
        process = await asyncio.create_subprocess_exec(
            *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
        )

        await asyncio.gather(
            _read_stream(process.stdout, stdout_cb),
            _read_stream(process.stderr, stderr_cb),
        )
        rc = await process.wait()
        return process.pid, rc
    except OSError as e:
        # the program will hang if we let any exception propagate
        return e


async def _gather(*aws):
    return await asyncio.gather(*aws)


def execute(*aws):
    """Run the given coroutines in an asyncio loop.

    Returns a list containing the values returned from each coroutine.
    """
    return asyncio.run(_gather(*aws))


def printer(label):
    def pr(*args, **kw):
        print(label, *args, **kw)

    return pr


def name_it(start=0, template="s{}"):
    """a simple generator for task names
    """
    while True:
        yield template.format(start)
        start += 1


def runners(cmds):
    """
    cmds is a list of commands to execute as subprocesses
    each item is a list appropriate for use by subprocess.call
    """
    next_name = name_it().__next__
    for cmd in cmds:
        name = next_name()
        out = printer(f"{name}.stdout")
        err = printer(f"{name}.stderr")
        yield _stream_subprocess(cmd, out, err)


if __name__ == "__main__":
    cmds = (
        [
            "sh",
            "-c",
            """echo "$SHELL"-stdout && sleep 1 && echo stderr 1>&2 && sleep 1 && echo done""",
        ],
        [
            "bash",
            "-c",
            "echo 'hello, Dave.' && sleep 1 && echo dave_err 1>&2 && sleep 1 && echo done",
        ],
        [sys.executable, "-c", 'print("hello from python");import sys;sys.exit(2)'],
    )

    print(execute(*runners(cmds)))

It is unlikely that the example commands will work perfectly on your system, and it doesn't handle weird errors, but this code does demonstrate one way to run multiple subprocesses using asyncio and stream the output.

Terrel Shumway
  • I tested this on cpython 3.7.4 running on windows and cpython 3.7.3 running on Ubuntu WSL and native Alpine Linux – Terrel Shumway Jul 24 '19 at 16:18
  • [Passing coroutines objects to `wait()` directly is deprecated](https://docs.python.org/3/library/asyncio-task.html) – gerrit Apr 16 '20 at 14:58

I've had good success with the asyncproc module, which deals nicely with the output from the processes. For example:

import os
from asyncproc import Process

myProc = Process("myprogram.app")

while True:
    # check to see if process has ended
    poll = myProc.wait(os.WNOHANG)
    if poll is not None:
        break
    # print any new output
    out = myProc.read()
    if out != "":
        print(out)
Noah
  • is this anywhere on github? – Nick Nov 12 '14 at 19:25
  • It's gpl license, so I'm sure it's on there many times. Here's one: https://github.com/albertz/helpers/blob/master/asyncproc.py – Noah Nov 12 '14 at 22:03
  • I added a gist with some modifications to make it work with python3. (mostly replaces the str with bytes). See https://gist.github.com/grandemk/cbc528719e46b5a0ffbd07e3054aab83 – Tic Dec 04 '18 at 09:59
    Also, you need to read the output one more time after going out of the loop or you will lose some of the output. – Tic Dec 05 '18 at 17:06

Using pexpect with non-blocking readlines is another way to do this. Pexpect solves the deadlock problems, allows you to easily run the processes in the background, and gives easy ways to have callbacks when your process spits out predefined strings, and generally makes interacting with the process much easier.

Jean-François Fabre
Gabe

Considering "I don't have to wait for it to return", one of the easiest solutions will be this:

subprocess.Popen(
    [path_to_executable, arg1, arg2, ... argN],
    creationflags=subprocess.CREATE_NEW_CONSOLE,
).pid

But... From what I read this is not "the proper way to accomplish such a thing" because of security risks created by subprocess.CREATE_NEW_CONSOLE flag.

The key things happening here are the use of subprocess.CREATE_NEW_CONSOLE to create a new console, and of .pid (which returns the process ID, so that you can check on the program later if you want to), so that you don't wait for the program to finish its job.
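A rough cross-platform sketch of the same idea (launch_detached is a hypothetical helper name; CREATE_NEW_CONSOLE exists only on Windows, and start_new_session=True is the usual POSIX way to detach a child):

```python
import subprocess
import sys

def launch_detached(cmd):
    # hypothetical helper: fire-and-forget launch, returning only the PID
    if sys.platform == 'win32':
        return subprocess.Popen(
            cmd, creationflags=subprocess.CREATE_NEW_CONSOLE).pid
    return subprocess.Popen(cmd, start_new_session=True).pid

pid = launch_detached([sys.executable, '-c', 'pass'])
```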

Pugsley

I had the same problem trying to connect to a 3270 terminal using the s3270 scripting software from Python. Now I'm solving the problem with a subclass of Process that I found here:

http://code.activestate.com/recipes/440554/

And here is the sample taken from the file:

def recv_some(p, t=.1, e=1, tr=5, stderr=0):
    if tr < 1:
        tr = 1
    x = time.time()+t
    y = []
    r = ''
    pr = p.recv
    if stderr:
        pr = p.recv_err
    while time.time() < x or r:
        r = pr()
        if r is None:
            if e:
                raise Exception(message)
            else:
                break
        elif r:
            y.append(r)
        else:
            time.sleep(max((x-time.time())/tr, 0))
    return ''.join(y)

def send_all(p, data):
    while len(data):
        sent = p.send(data)
        if sent is None:
            raise Exception(message)
        data = buffer(data, sent)

if __name__ == '__main__':
    if sys.platform == 'win32':
        shell, commands, tail = ('cmd', ('dir /w', 'echo HELLO WORLD'), '\r\n')
    else:
        shell, commands, tail = ('sh', ('ls', 'echo HELLO WORLD'), '\n')

    a = Popen(shell, stdin=PIPE, stdout=PIPE)
    print recv_some(a),
    for cmd in commands:
        send_all(a, cmd + tail)
        print recv_some(a),
    send_all(a, 'exit' + tail)
    print recv_some(a, e=0)
    a.wait()
Patrizio Rullo

There are several answers here, but none of them satisfied the requirements below:

  1. I don't want to wait for command to finish or pollute my terminal with subprocess outputs.

  2. I want to run bash script with redirects.

  3. I want to support piping within my bash script (for example find ... | tar ...).

The only combination that satisfies the requirements above is:

subprocess.Popen(['./my_script.sh "arg1" > "redirect/path/to"'],
                 stdout=subprocess.PIPE, 
                 stderr=subprocess.PIPE,
                 shell=True)
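A minimal sketch of the same shell=True pattern with a pipe inside the command (echo and tr stand in for a real script; assumes a POSIX shell):

```python
import subprocess

# a pipe inside the command, run through the shell without blocking the script
p = subprocess.Popen('echo hello | tr a-z A-Z',
                     shell=True, stdout=subprocess.PIPE, text=True)
# ... do other work here; collect output once the pipeline is done ...
out, _ = p.communicate()
```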
Shital Shah