5310

How do you call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script?

Peter Mortensen
freshWoWer

62 Answers

5118

Use the subprocess module in the standard library:

import subprocess
import sys

command = subprocess.run(['ls', '-l'], capture_output=True)

sys.stdout.buffer.write(command.stdout)
sys.stderr.buffer.write(command.stderr)
sys.exit(command.returncode)

The advantage of subprocess.run over os.system is that it is more flexible (you can get the stdout, stderr, the "real" status code, better error handling, etc.).
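For example, a minimal sketch of capturing output as text (the `text=True` parameter, available since Python 3.7, decodes the bytes for you):

```python
import subprocess

# Capture stdout/stderr as str instead of bytes (text=True, Python 3.7+)
result = subprocess.run(["echo", "hello"], capture_output=True, text=True)

print(result.returncode)  # 0 on success
print(result.stdout)      # the command's standard output as a string
```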

Even the documentation for os.system recommends using subprocess instead:

The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.

On Python 3.4 and earlier, use subprocess.call instead of .run:

subprocess.call(["ls", "-l"])
John Mee
David Cournapeau
  • Is there a way to use variable substitution? IE I tried to do `echo $PATH` by using `call(["echo", "$PATH"])`, but it just echoed the literal string `$PATH` instead of doing any substitution. I know I could get the PATH environment variable, but I'm wondering if there is an easy way to have the command behave exactly as if I had executed it in bash. – Kevin Wheeler Sep 01 '15 at 23:17
  • @KevinWheeler You'll have to use `shell=True` for that to work. – SethMMorton Sep 02 '15 at 20:38
  • @KevinWheeler You should NOT use `shell=True`, for this purpose Python comes with [os.path.expandvars](https://docs.python.org/2/library/os.path.html#os.path.expandvars). In your case you can write: `os.path.expandvars("$PATH")`. @SethMMorton please reconsider your comment -> [Why not to use shell=True](https://docs.python.org/2/library/subprocess.html#frequently-used-arguments) – Murmel Nov 11 '15 at 20:24
  • does call block? i.e. if I want to run multiple commands in a `for` loop how do I do it without it blocking my python script? I don't care about the output of the command I just want to run lots of them. – Charlie Parker Oct 24 '17 at 19:07
  • If you want to **create a list out of a command with parameters**, a list which can be used with `subprocess` when `shell=False`, then use `shlex.split` for an easy way to do this https://docs.python.org/2/library/shlex.html#shlex.split – Daniel F Sep 20 '18 at 18:05
  • `subprocess` also allows you to directly pipe two commands together, there's an example in the docs. – Daniel F Sep 20 '18 at 18:15
  • @CharlieParker: `call`, `check_call`, `check_output`, and `run` block. If you want non-blocking, use `subprocess.Popen`. – Lie Ryan Dec 03 '18 at 13:44
  • You can also pass in as string instead of a list of strings: `subprocess.run("ls -l")` – tuket Sep 23 '19 at 14:33
  • If they want to name it `run` instead of `call` why not just make it an alias? Some types of backwards compatibility are difficult and allow error-prone patterns to continue, but making `run == call` is easy, simple, and almost no cost. Why `call` for versions of Python before 3.5 instead of both new and old versions? – Samuel Muldoon Nov 07 '19 at 04:28
  • @SamuelMuldoon The issue is that `subprocess.call` has completely different semantics from `subprocess.run`. `subprocess.call` still exists in Python 3.5–3.8 (for backwards compatibility), it's just that if you have it available, `subprocess.run` is _better_ by far. – FeRD Nov 08 '19 at 12:54
  • I'm trying to use that command to run excalibur commands with python to connect with the localhost, but it doesn't work I get `FileNotFoundError`. `import subprocess subprocess.run(["excalibur initdb", "excalibur webserver"])` – Chacho Fuva May 01 '20 at 20:26
  • @ChachoFuva the list you pass to [`subprocess.run`](https://docs.python.org/3/library/subprocess.html#subprocess.run) is a list of words that represent 1 command, not a list of commands. It should be `subprocess.run(["excalibur", "initdb"])` then `subprocess.run(["excalibur", "webserver"])` – Boris Nov 17 '20 at 18:35
  • Any value in spinning a new thread with the subprocess.run() and having the original thread kill it if it exceeds some timeout? – grambo Dec 23 '20 at 20:12
3118

Here's a summary of the ways to call external programs and the advantages and disadvantages of each:

  1. os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example:

    os.system("some_command < input_file | another_command > output_file")  
    

However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs. See the documentation.

  2. stream = os.popen("some_command with args") will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the i/o slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass them as a list then you don't need to worry about escaping anything. See the documentation.

  3. The Popen class of the subprocess module. This is intended as a replacement for os.popen but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say:

    print(subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE).stdout.read())
    

    instead of:

    print(os.popen("echo Hello World").read())
    

    but it is nice to have all of the options there in one unified class instead of 4 different popen functions. See the documentation.

  4. The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:

    return_code = subprocess.call("echo Hello World", shell=True)  
    

    See the documentation.

  5. If you're on Python 3.5 or later, you can use the new subprocess.run function, which is a lot like the above but even more flexible and returns a CompletedProcess object when the command finishes executing.

  6. The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.

The subprocess module should probably be what you use.

Finally, please be aware that for all methods where you pass the final command as a string to be executed by the shell, you are responsible for escaping it. There are serious security implications if any part of the string that you pass cannot be fully trusted (for example, if a user is entering some or any part of the string). If you are unsure, only use these methods with constants. To give you a hint of the implications, consider this code:

print(subprocess.Popen("echo %s" % user_input, shell=True, stdout=subprocess.PIPE).stdout.read())

and imagine that the user enters something like "my mama didnt love me && rm -rf /", which could erase the whole filesystem.
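If you really must build a shell command line from untrusted input, `shlex.quote` (Python 3.3+) can escape a value so the shell treats it as a single literal word; a hedged sketch, where `user_input` stands in for the untrusted value:

```python
import shlex
import subprocess

user_input = "my mama didnt love me && rm -rf /"  # hypothetical untrusted input

# shlex.quote escapes the value so the && is data, not a shell operator
safe = shlex.quote(user_input)
out = subprocess.check_output("echo %s" % safe, shell=True, text=True)
print(out)
```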

Trenton McKinney
Eli Courtwright
  • Nice answer/explanation. How is this answer justifying Python's motto as described in this article ? http://www.fastcompany.com/3026446/the-fall-of-perl-the-webs-most-promising-language "Stylistically, Perl and Python have different philosophies. Perl’s best known mottos is " There’s More Than One Way to Do It". Python is designed to have one obvious way to do it" Seem like it should be the other way! In Perl I know only two ways to execute a command - using back-tick or `open`. – Jean May 26 '15 at 21:16
  • If using Python 3.5+, use `subprocess.run()`. https://docs.python.org/3.5/library/subprocess.html#subprocess.run – phoenix Oct 07 '15 at 16:37
  • What one typically needs to know is what is done with the child process's STDOUT and STDERR, because if they are ignored, under some (quite common) conditions, eventually the child process will issue a system call to write to STDOUT (STDERR too?) that would exceed the output buffer provided for the process by the OS, and the OS will cause it to block until some process reads from that buffer. So, with the currently recommended ways, `subprocess.run(..)`, what exactly does *"This does not capture stdout or stderr by default."* imply? What about `subprocess.check_output(..)` and STDERR? – Evgeni Sergeev Jun 01 '16 at 10:44
  • which of the commands you recommended block my script? i.e. if I want to run multiple commands in a `for` loop how do I do it without it blocking my python script? I don't care about the output of the command I just want to run lots of them. – Charlie Parker Oct 24 '17 at 19:08
  • @phoenix I disagree. There is nothing preventing you from using os.system in python3 https://docs.python.org/3/library/os.html#os.system – Qback Dec 08 '17 at 09:27
  • This is arguably the wrong way around. Most people only need `subprocess.run()` or its older siblings `subprocess.check_call()` et al. For cases where these do not suffice, see `subprocess.Popen()`. `os.popen()` should perhaps not be mentioned at all, or come even after "hack your own fork/exec/spawn code". – tripleee Dec 03 '18 at 06:00
391

Typical implementation:

import subprocess

p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
    print(line, end="")
retval = p.wait()

You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it'll behave like os.system().
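In Python 3 the same pattern might look like this sketch, iterating the pipe directly and using `text=True` so lines arrive as strings:

```python
import subprocess

# Merge stderr into stdout, as above, and decode the output as text
p = subprocess.Popen(["ls"], stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT, text=True)
for line in p.stdout:    # iterate lines as they arrive, no readlines() needed
    print(line, end="")
retval = p.wait()        # reap the process and collect its exit status
```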

Trenton McKinney
EmmEff
  • `.readlines()` reads *all* lines at once i.e., it blocks until the subprocess exits (closes its end of the pipe). To read in real time (if there is no buffering issues) you could: `for line in iter(p.stdout.readline, ''): print line,` – jfs Nov 16 '12 at 14:12
  • Could you elaborate on what you mean by "if there is no buffering issues"? If the process blocks definitely, the subprocess call also blocks. The same could happen with my original example as well. What else could happen with respect to buffering? – EmmEff Nov 17 '12 at 13:25
  • the child process may use block-buffering in non-interactive mode instead of line-buffering so `p.stdout.readline()` (note: no `s` at the end) won't see any data until the child fills its buffer. If the child doesn't produce much data then the output won't be in real time. See the second reason in [Q: Why not just use a pipe (popen())?](http://www.noah.org/wiki/Pexpect#Q:_Why_not_just_use_a_pipe_.28popen.28.29.29.3F). Some workarounds are provided [in this answer](http://stackoverflow.com/a/12471855/4279) (pexpect, pty, stdbuf) – jfs Nov 17 '12 at 13:51
  • the buffering issue only matters if you want output in real time and doesn't apply to your code that doesn't print anything until *all* data is received – jfs Nov 17 '12 at 13:53
  • This answer was fine for its time, but we should no longer recommend `Popen` for simple tasks. This also needlessly specifies `shell=True`. Try one of the `subprocess.run()` answers. – tripleee Dec 03 '18 at 05:39
256

Some hints on detaching the child process from the calling one (starting the child process in background).

Suppose you want to start a long task from a CGI script. That is, the child process should live longer than the CGI script execution process.

The classical example from the subprocess module documentation is:

import subprocess
import sys

# Some code here

p = subprocess.Popen([sys.executable, "longtask.py"]) # Call subprocess

# Some more code here

The idea here is that you do not want to wait in the line 'call subprocess' until the longtask.py is finished. But it is not clear what happens after the line 'some more code here' from the example.

My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.

On Windows (Windows XP), the parent process will not finish until the longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.

The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in Windows API. If you happen to have installed pywin32, you can import the flag from the win32process module, otherwise you should define it yourself:

DETACHED_PROCESS = 0x00000008

pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid

Update (2015.10.27): @eryksun notes in a comment below that the semantically correct flag is CREATE_NEW_CONSOLE (0x00000010).

On FreeBSD we have another problem: when the parent process is finished, it finishes the child processes as well. And that is not what you want in a CGI script either. Some experiments showed that the problem seemed to be in sharing sys.stdout. And the working solution was the following:

pid = subprocess.Popen([sys.executable, "longtask.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

I have not checked the code on other platforms and do not know the reasons for the behaviour on FreeBSD. If anyone knows, please share your ideas. Googling about starting background processes in Python has not shed any light yet.
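A hedged, more recent sketch of the same idea that picks a mechanism per platform (`subprocess.DETACHED_PROCESS` is exposed as a constant since Python 3.7; `start_new_session=True` runs the child in its own session on POSIX):

```python
import subprocess
import sys

kwargs = {}
if sys.platform == "win32":
    # Don't inherit or create a console on Windows
    kwargs["creationflags"] = subprocess.DETACHED_PROCESS
else:
    # Detach from the parent's session on POSIX (calls setsid in the child)
    kwargs["start_new_session"] = True

# A trivial stand-in for longtask.py; note the stdio handles are detached too
p = subprocess.Popen([sys.executable, "-c", "pass"],
                     stdin=subprocess.DEVNULL,
                     stdout=subprocess.DEVNULL,
                     stderr=subprocess.DEVNULL,
                     **kwargs)
```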

Peter Mortensen
newtover
  • i noticed a possible "quirk" with developing py2exe apps in pydev+eclipse. i was able to tell that the main script was not detached because eclipse's output window was not terminating; even if the script executes to completion it is still waiting for returns. but, when i tried compiling to a py2exe executable, the expected behavior occurs (runs the processes as detached, then quits). i am not sure, but the executable name is not in the process list anymore. this works for all approaches (os.system("start *"), os.spawnl with os.P_DETACH, subprocs, etc.) – maranas Apr 09 '10 at 08:09
  • Windows gotcha: even though I spawned process with DETACHED_PROCESS, when I killed my Python daemon all ports opened by it wouldn't free until all spawned processes terminate. WScript.Shell solved all my problems. Example here: http://pastebin.com/xGmuvwSx – Alexey Lebedev Apr 16 '12 at 10:04
  • you might also need CREATE_NEW_PROCESS_GROUP flag. See [Popen waiting for child process even when the immediate child has terminated](http://stackoverflow.com/q/13243807/4279) – jfs Nov 16 '12 at 14:16
  • I'm seeing `import subprocess as sp;sp.Popen('calc')` not waiting for the subprocess to complete. It seems the creationflags aren't necessary. What am I missing? – ubershmekel Oct 27 '14 at 21:01
  • @ubershmekel, I am not sure what you mean and don't have a windows installation. If I recall correctly, without the flags you can not close the `cmd` instance from which you started the `calc`. – newtover Oct 28 '14 at 12:25
  • I'm on Windows 8.1 and `calc` seems to survive the closing of `python`. – ubershmekel Oct 30 '14 at 05:45
  • Is there any significance to using '0x00000008'? Is that a specific value that has to be used or one of multiple options? – SuperBiasedMan May 05 '15 at 13:13
  • The following is incorrect: "[o]n windows (win xp), the parent process will not finish until the longtask.py has finished its work". The parent will exit normally, but the console window (conhost.exe instance) only closes when the last attached process exits, and the child may have inherited the parent's console. Setting `DETACHED_PROCESS` in `creationflags` avoids this by preventing the child from inheriting or creating a console. If you instead want a new console, use `CREATE_NEW_CONSOLE` (0x00000010). – Eryk Sun Oct 27 '15 at 00:27
  • I didn't mean that executing as a detached process is incorrect. That said, you may need to set the standard handles to files, pipes, or `os.devnull` because some console programs exit with an error otherwise. Create a new console when you want the child process to interact with the user concurrently with the parent process. It would be confusing to try to do both in a single window. – Eryk Sun Oct 27 '15 at 17:37
  • `stdout=subprocess.PIPE` will make your code hang up if you have long output from a child. For more details see https://thraxil.org/users/anders/posts/2008/03/13/Subprocess-Hanging-PIPE-is-your-enemy/ – Dr_Zaszuś Mar 08 '18 at 08:56
  • is there not an OS-agnostic way to have the process run in the background? – Charlie Parker Feb 24 '19 at 19:05
  • your answer seems strange to me. I just opened a `subprocess.Popen` and nothing bad happened (not had to wait). Why exactly do we need to worry about the scenario you are pointing out? I'm skeptical. – Charlie Parker Feb 24 '19 at 19:38
174
import os
os.system("your command")

Note that this is dangerous, since the command isn't cleaned. I leave it up to you to google for the relevant documentation on the 'os' and 'sys' modules. There are a bunch of functions (exec* and spawn*) that will do similar things.
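Note that `os.system` returns the shell's exit status, not the command's output; on Unix it is an encoded wait status that you can unpack with `os.WEXITSTATUS`. A small sketch:

```python
import os

# os.system runs via the shell and returns a status code; output is not captured
status = os.system("exit 3")

# On Unix, decode the exit-code portion of the wait status
if hasattr(os, "WEXITSTATUS"):
    print(os.WEXITSTATUS(status))  # 3
else:
    print(status)  # on Windows the return value is the exit code itself
```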

Peter Mortensen
nimish
  • No idea what I meant nearly a decade ago (check the date!), but if I had to guess, it would be that there's no validation done. – nimish Jun 06 '18 at 16:01
  • This should now point to `subprocess` as a slightly more versatile and portable solution. Running external commands is of course inherently unportable (you have to make sure the command is available on every architecture you need to support) and passing user input as an external command is inherently unsafe. – tripleee Dec 03 '18 at 05:11
  • Note the timestamp on this guy: the "correct" answer has 40x the votes and is answer #1. – nimish Dec 03 '18 at 18:41
  • The one solution that worked for me for running NodeJS stuff. – Nikolay Shindarov Oct 29 '19 at 20:49
163

I'd recommend using the subprocess module instead of os.system because it does shell escaping for you and is therefore much safer.

subprocess.call(['ping', 'localhost'])
Nicolas Gervais
sirwart
  • If you want to **create a list out of a command with parameters**, a list which can be used with `subprocess` when `shell=False`, then use `shlex.split` for an easy way to do this https://docs.python.org/2/library/shlex.html#shlex.split (it's the recommended way according to the docs https://docs.python.org/2/library/subprocess.html#popen-constructor) – Daniel F Sep 20 '18 at 18:07
  • This is incorrect: "**it does shell escaping for you and is therefore much safer**". subprocess doesn't do shell escaping, subprocess doesn't pass your command through the shell, so there's no need to shell escape. – Lie Ryan Dec 04 '18 at 08:36
154
import os
cmd = 'ls -al'
os.system(cmd)

If you want to return the results of the command, you can use os.popen. However, this is deprecated since version 2.6 in favor of the subprocess module, which other answers have covered well.

Patrick M
Alexandra Franks
  • popen [is deprecated](https://docs.python.org/2/library/os.html#os.popen) in favor of [subprocess](https://docs.python.org/2/library/subprocess.html). – Tris - archived Aug 08 '14 at 00:22
  • You can also save your result with the os.system call, since it works like the UNIX shell itself, like for example os.system('ls -l > test2.txt') – Stefan Gruenwald Nov 07 '17 at 23:19
111

There are lots of different libraries which allow you to call external commands with Python. For each library I've given a description and shown an example of calling an external command. The command I used as the example is ls -l (list all files). If you want to find out more about any of the libraries, I've listed and linked the documentation for each of them.


Hopefully this will help you make a decision on which library to use :)

subprocess

Subprocess allows you to call external commands and connect them to their input/output/error pipes (stdin, stdout, and stderr). Subprocess is the default choice for running commands, but sometimes other modules are better.

subprocess.run(["ls", "-l"]) # Run command
subprocess.run(["ls", "-l"], stdout=subprocess.PIPE) # This will run the command and return any output
subprocess.run(shlex.split("ls -l")) # You can also use the shlex library to split the command

os

os is used for "operating system dependent functionality". It can also be used to call external commands with os.system and os.popen (note: there is also a subprocess.Popen). os.system always runs the command in a shell and is a simple alternative for people who don't need to, or don't know how to, use subprocess.run.

os.system("ls -l") # Run command
os.popen("ls -l").read() # This will run the command and return any output

sh

sh is a subprocess interface which lets you call programs as if they were functions. This is useful if you want to run a command multiple times.

sh.ls("-l") # Run command normally
ls_cmd = sh.Command("ls") # Save command as a variable
ls_cmd() # Run command as if it were a function

plumbum

plumbum is a library for "script-like" Python programs. You can call programs like functions as in sh. Plumbum is useful if you want to run a pipeline without the shell.

ls_cmd = plumbum.local("ls -l") # Get command
ls_cmd() # Run command

pexpect

pexpect lets you spawn child applications, control them and find patterns in their output. This is a better alternative to subprocess for commands that expect a tty on Unix.

pexpect.run("ls -l") # Run command as normal
child = pexpect.spawn('scp foo user@example.com:.') # Spawns child application
child.expect('Password:') # When this is the output
child.sendline('mypassword')

fabric

fabric is a Python 2.5 and 2.7 library. It allows you to execute local and remote shell commands. Fabric is a simple alternative for running commands over a secure shell (SSH).

fabric.operations.local('ls -l') # Run command as normal
fabric.operations.local('ls -l', capture = True) # Run command and receive output

envoy

envoy is known as "subprocess for humans". It is used as a convenience wrapper around the subprocess module.

r = envoy.run("ls -l") # Run command
r.std_out # Get output

commands

commands contains wrapper functions for os.popen, but it has been removed from Python 3 since subprocess is a better alternative.
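The closest replacements in the subprocess module are `subprocess.getoutput` and `subprocess.getstatusoutput`, which mirror the old commands functions:

```python
import subprocess

# Runs the command through the shell and returns its decoded output
output = subprocess.getoutput("echo hello")
print(output)  # hello

# getstatusoutput also returns the exit code alongside the output
status, output = subprocess.getstatusoutput("echo hello")
print(status, output)  # 0 hello
```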

Peter Mortensen
Tom Fuller
80

With the standard library

Use the subprocess module (Python 3):

import subprocess
subprocess.run(['ls', '-l'])

It is the recommended standard way. However, more complicated tasks (pipes, output, input, etc.) can be tedious to construct and write.

Note on Python version: If you are still using Python 2, subprocess.call works in a similar way.

ProTip: shlex.split can help you to parse the command for run, call, and other subprocess functions in case you don't want (or you can't!) provide them in form of lists:

import shlex
import subprocess
subprocess.run(shlex.split('ls -l'))

With external dependencies

If you do not mind external dependencies, use plumbum:

from plumbum.cmd import ifconfig
print(ifconfig['wlan0']())

It is the best subprocess wrapper. It's cross-platform, i.e. it works on both Windows and Unix-like systems. Install by pip install plumbum.

Another popular library is sh:

from sh import ifconfig
print(ifconfig('wlan0'))

However, sh dropped Windows support, so it's not as awesome as it used to be. Install by pip install sh.

Peter Mortensen
Honza Javorek
80

I always use fabric for things like this:

from fabric.operations import local
result = local('ls', capture=True)
print("Content:\n%s" % (result,))

But this seems to be a good tool: sh (Python subprocess interface).

Look at an example:

from sh import vgdisplay
print(vgdisplay())
print(vgdisplay('-v'))
print(vgdisplay(v=True))
Peter Mortensen
Jorge E. Cardona
78

Check the "pexpect" Python library, too.

It allows for interactive controlling of external programs/commands, even ssh, ftp, telnet, etc. You can just type something like:

child = pexpect.spawn('ftp 192.168.0.24')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')
Peter Mortensen
athanassis
76

If you need the output from the command you are calling, then you can use subprocess.check_output (Python 2.7+).

>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18  2007 /dev/null\n'

Also note the shell parameter.

If shell is True, the specified command will be executed through the shell. This can be useful if you are using Python primarily for the enhanced control flow it offers over most system shells and still want convenient access to other shell features such as shell pipes, filename wildcards, environment variable expansion, and expansion of ~ to a user’s home directory. However, note that Python itself offers implementations of many shell-like features (in particular, glob, fnmatch, os.walk(), os.path.expandvars(), os.path.expanduser(), and shutil).
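For example, a pipe or a wildcard is only interpreted when `shell=True` (and the command is then a single string, not a list); a sketch:

```python
import subprocess

# The pipe is interpreted by the shell, so shell=True is required here
out = subprocess.check_output("echo foo | tr a-z A-Z", shell=True, text=True)
print(out)  # FOO
```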

Peter Mortensen
Facundo Casco
  • Note that `check_output` requires a list rather than a string. If you don't rely on quoted spaces to make your call valid, the simplest, most readable way to do this is `subprocess.check_output("ls -l /dev/null".split())`. – Bruno Bronosky Jan 30 '18 at 18:18
60

This is how I run my commands. This code has pretty much everything you need:

from subprocess import Popen, PIPE
cmd = "ls -l ~/"
p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE, text=True)
out, err = p.communicate()
print("Return code:", p.returncode)
print(out.rstrip(), err.rstrip())
Usman Khan
60

Update:

subprocess.run is the recommended approach as of Python 3.5 if your code does not need to maintain compatibility with earlier Python versions. It's more consistent and offers similar ease-of-use as Envoy. (Piping isn't as straightforward though. See this question for how.)

Here's some examples from the documentation.

Run a process:

>>> subprocess.run(["ls", "-l"])  # Doesn't capture output
CompletedProcess(args=['ls', '-l'], returncode=0)

Raise on failed run:

>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
  ...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1

Capture output:

>>> subprocess.run(["ls", "-l", "/dev/null"], stdout=subprocess.PIPE)
CompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0,
stdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\n')
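As a sketch of the piping workaround mentioned above: capture the first command's stdout, then hand it to the second command via the `input` argument:

```python
import subprocess

# Equivalent of the shell pipeline: echo hello | tr a-z A-Z
first = subprocess.run(["echo", "hello"], stdout=subprocess.PIPE)
second = subprocess.run(["tr", "a-z", "A-Z"],
                        input=first.stdout, stdout=subprocess.PIPE)
print(second.stdout)  # b'HELLO\n'
```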

Original answer:

I recommend trying Envoy. It's a wrapper for subprocess, which in turn aims to replace the older modules and functions. Envoy is subprocess for humans.

Example usage from the README:

>>> r = envoy.run('git config', data='data to pipe in', timeout=2)

>>> r.status_code
129
>>> r.std_out
'usage: git config [options]'
>>> r.std_err
''

Pipe stuff around too:

>>> r = envoy.run('uptime | pbcopy')

>>> r.command
'pbcopy'
>>> r.status_code
0

>>> r.history
[<Response 'uptime'>]
Peter Mortensen
Joe
49

Use subprocess.

...or for a very simple command:

import os
os.system('cat testfile')
Peter Mortensen
Ben Hoffstein
40

Calling an external command in Python

Simple, use subprocess.run, which returns a CompletedProcess object:

>>> import subprocess
>>> completed_process = subprocess.run('python --version')
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
>>> completed_process
CompletedProcess(args='python --version', returncode=0)

Why?

As of Python 3.5, the documentation recommends subprocess.run:

The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle. For more advanced use cases, the underlying Popen interface can be used directly.

Here's an example of the simplest possible usage - and it does exactly as asked:

>>> import subprocess
>>> completed_process = subprocess.run('python --version')
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
>>> completed_process
CompletedProcess(args='python --version', returncode=0)

run waits for the command to successfully finish, then returns a CompletedProcess object. It may instead raise TimeoutExpired (if you give it a timeout= argument) or CalledProcessError (if it fails and you pass check=True).

As you might infer from the above example, stdout and stderr both get piped to your own stdout and stderr by default.

We can inspect the returned object and see the command that was given and the returncode:

>>> completed_process.args
'python --version'
>>> completed_process.returncode
0

Capturing output

If you want to capture the output, you can pass subprocess.PIPE to the appropriate stderr or stdout:

>>> cp = subprocess.run('python --version', 
                        stderr=subprocess.PIPE, 
                        stdout=subprocess.PIPE)
>>> cp.stderr
b'Python 3.6.1 :: Anaconda 4.4.0 (64-bit)\r\n'
>>> cp.stdout
b''

(I find it interesting and slightly counterintuitive that the version info gets put to stderr instead of stdout.)

Pass a command list

One might easily move from manually providing a command string (like the question suggests) to providing a string built programmatically. Don't build strings programmatically. This is a potential security issue. It's better to assume you don't trust the input.

>>> import textwrap
>>> args = ['python', textwrap.__file__]
>>> cp = subprocess.run(args, stdout=subprocess.PIPE)
>>> cp.stdout
b'Hello there.\r\n  This is indented.\r\n'

Note, only args should be passed positionally.

Full Signature

Here's the actual signature in the source and as shown by help(run):

def run(*popenargs, input=None, timeout=None, check=False, **kwargs):

The popenargs and kwargs are given to the Popen constructor. input can be a bytes object (or a str, if you specify encoding or universal_newlines=True) that will be piped to the subprocess's stdin.
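A quick sketch of the input argument, piping a str to the child's stdin (universal_newlines=True makes run accept and return text):

```python
import subprocess

# cat simply echoes its stdin back, so we can see the piped-in data
cp = subprocess.run(["cat"], input="hello\n",
                    universal_newlines=True, stdout=subprocess.PIPE)
print(cp.stdout)  # hello
```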

The documentation describes timeout= and check=True better than I could:

The timeout argument is passed to Popen.communicate(). If the timeout expires, the child process will be killed and waited for. The TimeoutExpired exception will be re-raised after the child process has terminated.

If check is true, and the process exits with a non-zero exit code, a CalledProcessError exception will be raised. Attributes of that exception hold the arguments, the exit code, and stdout and stderr if they were captured.

and this example for check=True is better than one I could come up with:

>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
  ...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1

Expanded Signature

Here's an expanded signature, as given in the documentation:

subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None, 
shell=False, cwd=None, timeout=None, check=False, encoding=None, 
errors=None)

Note that this indicates that only the args list should be passed positionally. So pass the remaining arguments as keyword arguments.

Popen

When should you use Popen instead? I would struggle to find a use-case based on the arguments alone. Direct usage of Popen would, however, give you access to its methods, including poll, send_signal, terminate, and wait.
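A minimal sketch of those methods in action (assumes a POSIX sleep):

```python
import subprocess
import time

proc = subprocess.Popen(["sleep", "10"])  # start a long-running child
time.sleep(0.1)
if proc.poll() is None:   # None means the child is still running
    proc.terminate()      # send SIGTERM
proc.wait()               # reap the child; returncode is now set
print(proc.returncode)    # negative signal number on POSIX
```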

Here's the Popen signature as given in the source. I think this is the most precise encapsulation of the information (as opposed to help(Popen)):

def __init__(self, args, bufsize=-1, executable=None,
             stdin=None, stdout=None, stderr=None,
             preexec_fn=None, close_fds=_PLATFORM_DEFAULT_CLOSE_FDS,
             shell=False, cwd=None, env=None, universal_newlines=False,
             startupinfo=None, creationflags=0,
             restore_signals=True, start_new_session=False,
             pass_fds=(), *, encoding=None, errors=None):

But more informative is the Popen documentation:

subprocess.Popen(args, bufsize=-1, executable=None, stdin=None,
                 stdout=None, stderr=None, preexec_fn=None, close_fds=True,
                 shell=False, cwd=None, env=None, universal_newlines=False,
                 startupinfo=None, creationflags=0, restore_signals=True,
                 start_new_session=False, pass_fds=(), *, encoding=None, errors=None)

Execute a child program in a new process. On POSIX, the class uses os.execvp()-like behavior to execute the child program. On Windows, the class uses the Windows CreateProcess() function. The arguments to Popen are as follows.

Understanding the remaining documentation on Popen will be left as an exercise for the reader.

Aaron Hall
  • 291,450
  • 75
  • 369
  • 312
  • A simple example of two-way communication between a primary process and a subprocess can be found here: https://stackoverflow.com/a/52841475/1349673 – James Hirschorn Oct 16 '18 at 18:05
  • The first example should probably have `shell=True` or (better yet) pass the command as a list. – tripleee Dec 03 '18 at 05:16
38

os.system is OK, but kind of dated. It's also not very secure. Instead, try subprocess. subprocess does not call sh directly and is therefore more secure than os.system.

Get more information here.
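A minimal sketch of the recommended replacement: no shell is involved, and the arguments go in a list.

```python
import subprocess

# run the command directly, capture its output as text
cp = subprocess.run(["echo", "hello"], capture_output=True, text=True)
print(cp.stdout)
```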

Dimitris Fasarakis Hilliard
  • 119,766
  • 27
  • 228
  • 224
Martin W
  • 1,279
  • 7
  • 12
  • 2
    While I agree with the overall recommendation, `subprocess` does not remove all of the security problems, and has some pesky issues of its own. – tripleee Dec 03 '18 at 05:36
35

There is also Plumbum

>>> from plumbum import local
>>> ls = local["ls"]
>>> ls
LocalCommand(<LocalPath /bin/ls>)
>>> ls()
u'build.py\ndist\ndocs\nLICENSE\nplumbum\nREADME.rst\nsetup.py\ntests\ntodo.txt\n'
>>> notepad = local["c:\\windows\\notepad.exe"]
>>> notepad()                                   # Notepad window pops up
u''                                             # Notepad window is closed by user, command returns
stuckintheshuck
  • 2,283
  • 2
  • 24
  • 31
30

It can be this simple:

import os
cmd = "your command"
os.system(cmd)
  • 2
    This fails to point out the drawbacks, which are explained in much more detail in [PEP-324](https://www.python.org/dev/peps/pep-0324/). The documentation for `os.system` explicitly recommends avoiding it in favor of `subprocess`. – tripleee Dec 03 '18 at 05:02
29

Use:

import os

cmd = 'ls -al'

os.system(cmd)

os - This module provides a portable way of using operating system-dependent functionality.

For more os functions, here is the documentation.

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Priyankara
  • 720
  • 11
  • 21
28

I quite like shell_command for its simplicity. It's built on top of the subprocess module.

Here's an example from the documentation:

>>> from shell_command import shell_call
>>> shell_call("ls *.py")
setup.py  shell_command.py  test_shell_command.py
0
>>> shell_call("ls -l *.py")
-rw-r--r-- 1 ncoghlan ncoghlan  391 2011-12-11 12:07 setup.py
-rw-r--r-- 1 ncoghlan ncoghlan 7855 2011-12-11 16:16 shell_command.py
-rwxr-xr-x 1 ncoghlan ncoghlan 8463 2011-12-11 16:17 test_shell_command.py
0
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
mdwhatcott
  • 4,533
  • 3
  • 30
  • 45
28

As of Python 3.7.0 released on June 27th 2018 (https://docs.python.org/3/whatsnew/3.7.html), you can achieve your desired result in the most powerful while equally simple way. This answer intends to show you the essential summary of various options in a short manner. For in-depth answers, please see the other ones.


TL;DR in 2021

The big advantage of os.system(...) was its simplicity. subprocess is better and still easy to use, especially as of Python 3.5.

import subprocess
subprocess.run("ls -a", shell=True)

Note: This is the exact answer to your question - running a command

like in a shell


Preferred Way

If possible, remove the shell overhead and run the command directly (requires a list).

import subprocess
subprocess.run(["help"])
subprocess.run(["ls", "-a"])

Pass program arguments in a list. Don't add "-quoting or escaping for arguments containing spaces; each list element is passed to the program as a single argument.


Advanced Use Cases

Checking The Output

The following code speaks for itself:

import subprocess
result = subprocess.run(["ls", "-a"], capture_output=True, text=True)
if "stackoverflow-logo.png" in result.stdout:
    print("You're a fan!")
else:
    print("You're not a fan?")

result.stdout is all normal program output excluding errors. Read result.stderr to get them.

capture_output=True - turns capturing on. Otherwise result.stderr and result.stdout would be None. Available from Python 3.7.

text=True - a convenience argument added in Python 3.7 which converts the received binary data to Python strings you can easily work with.

Checking the returncode

Do

if result.returncode == 127:
    print("The program failed for some weird reason")
elif result.returncode == 0:
    print("The program succeeded")
else:
    print("The program failed unexpectedly")

If you just want to check if the program succeeded (returncode == 0) and otherwise throw an Exception, there is a more convenient function:

result.check_returncode()

But it's Python, so there's an even more convenient argument check which does the same thing automatically for you:

result = subprocess.run(..., check=True)

stderr should be inside stdout

You might want to have all program output inside stdout, even errors. To accomplish this, run

result = subprocess.run(..., stderr=subprocess.STDOUT)

result.stderr will then be None and result.stdout will contain everything.

Using shell=False with an argument string

shell=False expects a list of arguments. You might however, split an argument string on your own using shlex.

import subprocess
import shlex
subprocess.run(shlex.split("ls -a"))

That's it.

Common Problems

Chances are high you just started using Python when you come across this question. Let's look at some common problems.

FileNotFoundError: [Errno 2] No such file or directory: 'ls -a': 'ls -a'

You're running a subprocess without shell=True. Either use a list (["ls", "-a"]) or set shell=True.

TypeError: [...] NoneType [...]

Check that you've set capture_output=True.

TypeError: a bytes-like object is required, not [...]

You always receive byte results from your program. If you want to work with it like a normal string, set text=True.

subprocess.CalledProcessError: Command '[...]' returned non-zero exit status 1.

Your command didn't run successfully. You could disable returncode checking or check your actual program's validity.

TypeError: init() got an unexpected keyword argument [...]

You're likely using a version of Python older than 3.7.0; update it to the most recent one available. Otherwise there are other answers in this Stack Overflow post showing you older alternative solutions.

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
fameman
  • 1,737
  • 10
  • 28
  • "The big advantage of os.system(...) was its simplicity. subprocess is better" - how subprocess is better? I am happily using os.system, not sure how switching to subprocess and remembering extra `shell=True` benefits me. What kind of thing is better in subprocess? – reducing activity Mar 26 '21 at 09:14
  • You're right in that `os.system(...)` is a reasonable choice for executing commands in terms of simple "blind" execution. However, the use cases are rather limited - as soon as you want to capture the output, you have to use a whole other library and then you start having both - subprocess and os for similar use cases in your code. I prefer to keep the code clean and use only one of them. Second, and I would have put that section at the top but the TL;DR has to answer the question **exactly**, you should **not** use `shell=True`, but instead what I've written in the `Preferred Way` section. – fameman Mar 27 '21 at 11:30
  • The problem with `os.system(...)` and `shell=True` is that you're spawning a new shell process, just to execute your command. This means, you have to do manual escaping which is not as simple as you might think - especially when targeting both POSIX and Windows. For user-supplied input, this is a no-go (just imagine the user entered something with `"` quotes - you'd have to escape them as well). Also, the shell process itself could load code you don't need - not only does it delay the program, but it could also lead to unexpected side effects, ending with a wrong return code. – fameman Mar 27 '21 at 11:34
  • 1
    Summing up, `os.system(...)` is valid to use, indeed. But as soon as you're writing more than a quick python helper script, I'd recommend you to go for subprocess.run without `shell=True`. For more information about the drawbacks of os.system, I'd like to propose you a read through this SO answer: https://stackoverflow.com/a/44731082/6685358 – fameman Mar 27 '21 at 11:35
  • thanks! I wanted to edit "better" to include that link, but I got error about full edit queue. – reducing activity Mar 27 '21 at 15:01
24

os.system does not allow you to store results, so if you want to store results in some list or variable, use subprocess.check_output (or subprocess.run with capture_output=True) instead.
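A minimal sketch of capturing output into a list (assumes a POSIX ls):

```python
import subprocess

# check_output returns the command's stdout; splitlines() gives a list
output = subprocess.check_output(["ls", "-l"], text=True)
lines = output.splitlines()
print(lines)
```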

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Saurabh Bangad
  • 357
  • 2
  • 2
24

There is another difference here which is not mentioned previously.

subprocess.Popen executes the <command> as a subprocess. In my case, I need to execute file <a> which needs to communicate with another program, <b>.

I tried subprocess, and execution was successful. However <b> could not communicate with <a>. Everything is normal when I run both from the terminal.

One more: (NOTE: kwrite behaves different from other applications. If you try the below with Firefox, the results will not be the same.)

If you try os.system("kwrite"), program flow freezes until the user closes kwrite. To overcome that I tried os.system("konsole -e kwrite") instead. This time the program continued to flow, but kwrite became a subprocess of the console.

Does anyone know how to run kwrite so that it is not a subprocess (i.e., in the system monitor it must appear at the leftmost edge of the tree)?

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Atinc Delican
  • 265
  • 2
  • 2
22

I tend to use subprocess together with shlex (to handle escaping of quoted strings):

>>> import subprocess, shlex
>>> command = 'ls -l "/your/path/with spaces/"'
>>> call_params = shlex.split(command)
>>> print(call_params)
['ls', '-l', '/your/path/with spaces/']
>>> subprocess.call(call_params)
Emil Stenström
  • 10,369
  • 8
  • 45
  • 70
22

subprocess.check_call is convenient if you don't want to test return values. It throws an exception on any error.
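A quick sketch of that behavior (POSIX `false` always exits with status 1, triggering the exception):

```python
import subprocess

status = None
try:
    subprocess.check_call(["false"])
except subprocess.CalledProcessError as e:
    status = e.returncode  # the non-zero exit status that raised the exception
print("command failed with status", status)
```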

cdunn2001
  • 15,667
  • 7
  • 51
  • 42
17

I wrote a library for this, shell.py.

It's basically a wrapper for popen and shlex for now. It also supports piping commands, so you can chain commands easier in Python. So you can do things like:

ex('echo hello shell.py') | "awk '{print $2}'"
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
houqp
  • 643
  • 8
  • 11
17

In Windows you can just import the subprocess module and run external commands by calling subprocess.Popen(), subprocess.Popen().communicate() and subprocess.Popen().wait() as below:

# Python script to run a command line
import subprocess

def execute(cmd):
    """
        Purpose  : To execute a command and return exit status
        Argument : cmd - command to execute
        Return   : exit_code
    """
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (result, error) = process.communicate()

    rc = process.wait()

    if rc != 0:
        print("Error: failed to execute command:", cmd)
        print(error.decode())
    return result
# def

command = "tasklist | grep python"
print("This process detail: \n", execute(command).decode())

Output:

This process detail:
python.exe                     604 RDP-Tcp#0                  4      5,660 K
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Swadhikar
  • 1,817
  • 1
  • 17
  • 30
16

You can use Popen, and then you can check the process's status:

from subprocess import Popen

proc = Popen(['ls', '-l'])
if proc.poll() is None:
    proc.kill()

Check out subprocess.Popen.

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
admire
  • 320
  • 2
  • 6
16

To fetch the network id from the OpenStack Neutron:

#!/usr/bin/python
import os
netid = "nova net-list | awk '/ External / { print $2 }'"
temp = os.popen(netid).read()  # temp also contains a trailing newline (\n)
networkId = temp.rstrip()
print(networkId)

Output of nova net-list

+--------------------------------------+------------+------+
| ID                                   | Label      | CIDR |
+--------------------------------------+------------+------+
| 431c9014-5b5d-4b51-a357-66020ffbb123 | test1      | None |
| 27a74fcd-37c0-4789-9414-9531b7e3f126 | External   | None |
| 5a2712e9-70dc-4b0e-9281-17e02f4684c9 | management | None |
| 7aa697f5-0e60-4c15-b4cc-9cb659698512 | Internal   | None |
+--------------------------------------+------------+------+

Output of print(networkId)

27a74fcd-37c0-4789-9414-9531b7e3f126
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
IRSHAD
  • 2,302
  • 25
  • 35
  • You should not recommend `os.popen()` in 2016. The Awk script could easily be replaced with native Python code. – tripleee Dec 03 '18 at 05:49
15

Under Linux, in case you would like to call an external command that will execute independently (i.e., keep running after the Python script terminates), you can use a simple queue such as task spooler or the at command.

An example with task spooler:

import os
os.system('ts <your-command>')

Notes about task spooler (ts):

  1. You could set the number of concurrent processes to be run ("slots") with:

    ts -S <number-of-slots>

  2. Installing ts doesn't require admin privileges. You can download and compile it from source with a simple make, add it to your path and you're done.
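If ts is not available, a similar fire-and-forget effect can be sketched with subprocess alone on POSIX systems, by detaching the child into its own session:

```python
import subprocess

# start_new_session=True puts the child in its own session (POSIX),
# so it keeps running even if the Python script exits first
proc = subprocess.Popen(["sleep", "1"], start_new_session=True,
                        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
print(proc.pid)
```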

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Yuval Atzmon
  • 4,675
  • 2
  • 29
  • 64
  • 1
    `ts` is not standard on any distro I know of, though the pointer to `at` is mildly useful. You should probably also mention `batch`. As elsewhere, the `os.system()` recommendation should probably at least mention that `subprocess` is its recommended replacement. – tripleee Dec 03 '18 at 05:43
15

Invoke is a Python (2.7 and 3.4+) task execution tool and library. It provides a clean, high-level API for running shell commands:

>>> from invoke import run
>>> cmd = "pip install -r requirements.txt"
>>> result = run(cmd, hide=True, warn=True)
>>> print(result.ok)
True
>>> print(result.stdout.splitlines()[-1])
Successfully installed invocations-0.13.0 pep8-1.5.7 spec-1.3.1
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Valery Ramusik
  • 1,134
  • 12
  • 17
  • This is a great library. I was trying to explain it to a coworker the other day adn described it like this: `invoke` is to `subprocess` as `requests` is to `urllib3`. – user9074332 Mar 12 '19 at 02:00
14

The simplest way to run any command and get the result back (note: the commands module was deprecated in Python 2.6 and removed in Python 3):

from commands import getstatusoutput

try:
    return getstatusoutput("ls -ltr")
except Exception, e:
    return None
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Garfield
  • 2,257
  • 3
  • 25
  • 51
  • 6
    Is this going to be deprecated in python 3.0? – 719016 Mar 04 '14 at 14:16
  • 3
    Indeed, the [`commands` documentation from Python 2.7](https://docs.python.org/2/library/commands.html) says it was deprecated in 2.6 and will be removed in 3.0. – tripleee Dec 03 '18 at 05:06
14

A simple way is to use the os module:

import os
os.system('ls')

Alternatively, you can also use the subprocess module:

import subprocess
subprocess.check_call('ls')

If you want the result to be stored in a variable try:

import subprocess
r = subprocess.check_output('ls')
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
amehta
  • 1,027
  • 3
  • 14
  • 21
13

MOST OF THE CASES:

For most cases, a short snippet of code like this is all you are going to need:

import subprocess
import shlex

source = "test.txt"
destination = "test_copy.txt"

base = "cp {source} {destination}"
cmd = base.format(source=source, destination=destination)
subprocess.check_call(shlex.split(cmd))

It is clean and simple.

subprocess.check_call runs the command with arguments and waits for it to complete.

shlex.split splits the string cmd using shell-like syntax.

REST OF THE CASES:

If this does not work for some specific command, you most probably have a problem with the command-line interpreter. The operating system may have chosen a default one that is not suitable for your type of program, or may not have found an adequate one on the system executable path.

Example:

Using the redirection operator on a Unix system

input_1 = "input_1.txt"
input_2 = "input_2.txt"
output = "merged.txt"
base_command = "/bin/bash -c 'cat {input} >> {output}'"

subprocess.check_call(shlex.split(base_command.format(input=input_1, output=output)))

subprocess.check_call(shlex.split(base_command.format(input=input_2, output=output)))

As it is stated in The Zen of Python: Explicit is better than implicit

So if using a Python >=3.6 function, it would look something like this:

import subprocess
import shlex

def run_command(cmd_interpreter: str, command: str) -> None:
    base_command = f"{cmd_interpreter} -c '{command}'"
    subprocess.check_call(shlex.split(base_command))

N.Nonkovic
  • 460
  • 3
  • 6
12

Often, I use the following function for external commands, and it is especially handy for long running processes. The method below tails process output while it is running, returns the output, and raises an exception if the process fails.

It exits when the process is done, which is detected using the poll() method on the process.

import subprocess,sys

def exec_long_running_proc(command, args):
    cmd = "{} {}".format(command, " ".join(str(arg) if ' ' not in arg else arg.replace(' ','\ ') for arg in args))
    print(cmd)
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

    # Poll process for new output until finished
    while True:
        nextline = process.stdout.readline().decode('UTF-8')
        if nextline == '' and process.poll() is not None:
            break
        sys.stdout.write(nextline)
        sys.stdout.flush()

    output = process.communicate()[0]
    exitCode = process.returncode

    if (exitCode == 0):
        return output
    else:
        raise Exception(command, exitCode, output)

You can invoke it like this:

exec_long_running_proc(command = "hive", args=["-f", hql_path])
am5
  • 1,782
  • 1
  • 9
  • 10
  • 1
    You'll get unexpected results passing an arg with space. Using `repr(arg)` instead of `str(arg)` might help by the mere coincidence that python and sh escape quotes the same way – sbk May 17 '18 at 12:08
  • 1
    @sbk `repr(arg)` didn't really help, the above code handles spaces as well. Now the following works `exec_long_running_proc(command = "ls", args=["-l", "~/test file*"])` – am5 Nov 17 '18 at 00:07
11

Here are my two cents: In my view, this is the best practice when dealing with external commands...

These are the return values from the execute method...

success, stdout, stderr = execute(["ls", "-la"], "/home/user/desktop")

This is the execute method...

def execute(cmdArray,workingDir):

    stdout = ''
    stderr = ''

    try:
        try:
            process = subprocess.Popen(cmdArray,cwd=workingDir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=1)
        except OSError:
            return [False, '', 'ERROR : command(' + ' '.join(cmdArray) + ') could not get executed!']

        for line in iter(process.stdout.readline, b''):

            try:
                echoLine = line.decode("utf-8")
            except:
                echoLine = str(line)

            stdout += echoLine

        for line in iter(process.stderr.readline, b''):

            try:
                echoLine = line.decode("utf-8")
            except:
                echoLine = str(line)

            stderr += echoLine

    except (KeyboardInterrupt,SystemExit) as err:
        return [False,'',str(err)]

    process.stdout.close()

    returnCode = process.wait()
    if returnCode != 0 or stderr != '':
        return [False, stdout, stderr]
    else:
        return [True, stdout, stderr]
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
9

Just to add to the discussion, if you use an interactive Python console, you can call external commands from IPython. While in the IPython prompt, you can call shell commands by prefixing '!'. You can also combine Python code with the shell, and assign the output of shell scripts to Python variables.

For instance:

In [9]: mylist = !ls

In [10]: mylist
Out[10]:
['file1',
 'file2',
 'file3']
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
imagineerThat
  • 4,293
  • 7
  • 30
  • 61
9

Calling an external command in Python

A simple way to call an external command is using os.system(...). And this function returns the exit value of the command. But the drawback is we won't get stdout and stderr.

ret = os.system('some_cmd.sh')
if ret != 0:
    print('some_cmd.sh execution returned failure')

Calling an external command in Python in background

subprocess.Popen provides more flexibility for running an external command rather than using os.system. We can start a command in the background and wait for it to finish. And after that we can get the stdout and stderr.

proc = subprocess.Popen(["./some_cmd.sh"], stdout=subprocess.PIPE)
print('waiting for ' + str(proc.pid))
proc.wait()
print('some_cmd.sh execution finished')
(out, err) = proc.communicate()
print('some_cmd.sh output : ' + out.decode())

Calling a long running external command in Python in the background and stop after some time

We can even start a long running process in the background using subprocess.Popen and kill it after sometime once its task is done.

proc = subprocess.Popen(["./some_long_run_cmd.sh"], stdout=subprocess.PIPE)
# Do something else
# Now some_long_run_cmd.sh execution is no longer needed, so kill it
os.system('kill -15 ' + str(proc.pid))
print('Output : ' + proc.communicate()[0].decode())
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
rashok
  • 10,508
  • 11
  • 76
  • 90
9

I wrote a small library to help with this use case:

https://pypi.org/project/citizenshell/

It can be installed using

pip install citizenshell

And then used as follows:

from citizenshell import sh
assert sh("echo Hello World") == "Hello World"

You can separate standard output from standard error and extract the exit code as follows:

result = sh(">&2 echo error && echo output && exit 13")
assert result.stdout() == ["output"]
assert result.stderr() == ["error"]
assert result.exit_code() == 13

And the cool thing is that you don't have to wait for the underlying shell to exit before starting processing the output:

for line in sh("for i in 1 2 3 4; do echo -n 'It is '; date +%H:%M:%S; sleep 1; done", wait=False):
    print(">>> " + line + "!")

will print the lines as they are available thanks to the wait=False

>>> It is 14:24:52!
>>> It is 14:24:53!
>>> It is 14:24:54!
>>> It is 14:24:55!

More examples can be found at https://github.com/meuter/citizenshell

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Cédric
  • 136
  • 1
  • 5
8

Use:

import subprocess

p = subprocess.Popen("df -h", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print(p.decode().split("\n"))

It gives nice output which is easier to work with:

['Filesystem      Size  Used Avail Use% Mounted on',
 '/dev/sda6        32G   21G   11G  67% /',
 'none            4.0K     0  4.0K   0% /sys/fs/cgroup',
 'udev            1.9G  4.0K  1.9G   1% /dev',
 'tmpfs           387M  1.4M  386M   1% /run',
 'none            5.0M     0  5.0M   0% /run/lock',
 'none            1.9G   58M  1.9G   3% /run/shm',
 'none            100M   32K  100M   1% /run/user',
 '/dev/sda5       340G  222G  100G  69% /home',
 '']
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
David Okwii
  • 6,261
  • 2
  • 29
  • 25
8

As an example (in Linux):

import subprocess
subprocess.run('mkdir test.dir', shell=True)

This creates test.dir in the current directory. Note that this also works:

import subprocess
subprocess.call('mkdir test.dir', shell=True)

The equivalent code using os.system is:

import os
os.system('mkdir test.dir')

Best practice would be to use subprocess instead of os, with .run favored over .call. All you need to know about subprocess is here. Also, note that all Python documentation is available for download from here. I downloaded the PDF packed as .zip. I mention this because there's a nice overview of the os module in tutorial.pdf (page 81). Besides, it's an authoritative resource for Python coders.

  • 2
    According to https://docs.python.org/2/library/subprocess.html#frequently-used-arguments, "shell=True" may raise a security concern. – Nick Predey Mar 20 '18 at 18:54
  • @Nick Predley: noted, but "shell=False" doesn't perform the desired function. What specifically are the security concerns and what's the alternative? Please let me know asap: I do not wish to post anything which may cause problems for anyone viewing this. –  Mar 21 '18 at 19:49
  • 1
    The basic warning is in the documentation but this question explains it in more detail: https://stackoverflow.com/questions/3172470/actual-meaning-of-shell-true-in-subprocess – tripleee Dec 03 '18 at 05:14
7

There are a lot of different ways to run external commands in Python, and all of them have their own plus sides and drawbacks.

My colleagues and me have been writing Python system administration tools, so we need to run a lot of external commands, and sometimes you want them to block or run asynchronously, time-out, update every second, etc.

There are also different ways of handling the return code and errors, and you might want to parse the output, and provide new input (in an expect kind of style). Or you will need to redirect standard input, standard output, and standard error to run in a different tty (e.g., when using GNU Screen).

You will probably have to write a lot of wrappers around the external command. So here is a Python module which we have written that can handle almost anything you would want, and if not, it's very flexible so you can easily extend it:

https://github.com/hpcugent/vsc-base/blob/master/lib/vsc/utils/run.py

It doesn't work stand-alone and requires some of our other tools, and got a lot of specialised functionality over the years, so it might not be a drop-in replacement for you, but it can give you a lot of information on how the internals of Python for running commands work and ideas on how to handle certain situations.

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Jens Timmerman
  • 7,500
  • 1
  • 34
  • 45
7

Use subprocess.call:

from subprocess import call

# Using list
call(["echo", "Hello", "world"])

# Single string argument varies across platforms so better split it
call("echo Hello world".split(" "))
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
andruso
  • 1,423
  • 1
  • 14
  • 22
7

Here is how to call an external command and return or print the command's output.

subprocess.check_output is good when you want to:

Run command with arguments and return its output as a byte string.

import subprocess
proc = subprocess.check_output('ipconfig /all')
print(proc)
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Rajiv Sharma
  • 5,330
  • 44
  • 44
  • The argument should properly be tokenized into a list, or you should explicitly pass in `shell=True`. In Python 3.x (where x > 3 I think) you can retrieve the output as a proper string with `universal_newlines=True` and you probably want to switch to `subproces.run()` – tripleee Dec 03 '18 at 05:22
7

If you need to call a shell command from a Python notebook (like Jupyter, Zeppelin, Databricks, or Google Cloud Datalab) you can just use the ! prefix.

For example,

!ls -ilF
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
dportman
  • 1,055
  • 7
  • 20
7

For using subprocess in Python 3.5+, the following did the trick for me on Linux:

import subprocess

# subprocess.run() returns a completed process object that can be inspected
c = subprocess.run(["ls", "-ltrh"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(c.stdout.decode('utf-8'))

As mentioned in the documentation, PIPE values are byte sequences, and decoding is needed to display them properly. In later versions of Python, text=True or encoding='utf-8' can be passed as keyword arguments to subprocess.run() to receive str output directly.

The output of the abovementioned code is:

total 113M
-rwxr-xr-x  1 farzad farzad  307 Jan 15  2018 vpnscript
-rwxrwxr-x  1 farzad farzad  204 Jan 15  2018 ex
drwxrwxr-x  4 farzad farzad 4.0K Jan 22  2018 scripts
.... # Some other lines
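With text=True (Python 3.7+) the decode step disappears. A sketch, with the same POSIX ls assumption as above:

```python
import subprocess

# stdout arrives as str, so no .decode() is needed
c = subprocess.run(["ls", "-ltrh"], capture_output=True, text=True)
print(c.stdout)
```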
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Farzad Vertigo
  • 1,711
  • 16
  • 25
7

If you're writing a Python shell script and have IPython installed on your system, you can use the bang prefix to run a shell command inside IPython:

!ls
filelist = !ls
Nuno André
  • 2,869
  • 1
  • 21
  • 31
noɥʇʎԀʎzɐɹƆ
  • 6,405
  • 2
  • 38
  • 63
6

After some research, I have the following code which works very well for me. It basically prints both standard output and standard error in real time.

import subprocess
import sys
import threading

stdout_result = 1
stderr_result = 1


def stdout_thread(pipe):
    global stdout_result
    while True:
        out = pipe.stdout.read(1)
        stdout_result = pipe.poll()
        if out == '' and stdout_result is not None:
            break

        if out != '':
            sys.stdout.write(out)
            sys.stdout.flush()


def stderr_thread(pipe):
    global stderr_result
    while True:
        err = pipe.stderr.read(1)
        stderr_result = pipe.poll()
        if err == '' and stderr_result is not None:
            break

        if err != '':
            sys.stdout.write(err)
            sys.stdout.flush()


def exec_command(command, cwd=None):
    if cwd is not None:
        print('[' + ' '.join(command) + '] in ' + cwd)
    else:
        print('[' + ' '.join(command) + ']')

    p = subprocess.Popen(
        command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd
    )

    out_thread = threading.Thread(name='stdout_thread', target=stdout_thread, args=(p,))
    err_thread = threading.Thread(name='stderr_thread', target=stderr_thread, args=(p,))

    err_thread.start()
    out_thread.start()

    out_thread.join()
    err_thread.join()

    return stdout_result + stderr_result
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Jake W
  • 2,559
  • 31
  • 36
  • 4
    your code may lose data when the subprocess exits while there is some data is buffered. Read until EOF instead, see [teed_call()](http://stackoverflow.com/q/4984428/4279) – jfs Jul 13 '15 at 18:52
6
import subprocess

p = subprocess.run(["ls", "-ltr"], capture_output=True)
print(p.stdout.decode(), p.stderr.decode())

Try online

Vishal
  • 17,727
  • 17
  • 72
  • 91
5

For Python 3.5+ it is recommended that you use the run function from the subprocess module. This returns a CompletedProcess object, from which you can easily obtain the output as well as return code.

from subprocess import PIPE, run

command = ['echo', 'hello']
result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True)
print(result.returncode, result.stdout, result.stderr)
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Chiel ten Brinke
  • 12,091
  • 12
  • 60
  • 104
5

There are a number of ways of calling an external command from Python. There are some functions and modules with good helper functions that can make it really easy. But the recommended one among all of them is the subprocess module.

import subprocess as s
s.call(["command.exe", "..."])

The call function will start the external process, pass some command line arguments and wait for it to finish. When it finishes, you continue executing. Arguments to the call function are passed in a list. The first argument in the list is the command, typically in the form of an executable file, and the subsequent arguments in the list are whatever you want to pass to it.

If you have called processes from the command line on Windows before, you'll know that you often need to quote arguments: quotation marks around values, backslashes before embedded quotes, and some complicated rules. You can avoid a lot of that in Python by using the subprocess module, because the arguments are a list, each item is known to be distinct, and Python can get the quoting right for you.

In the end, after the list, there are a number of optional parameters. One of these is shell; if you set shell=True, your command is run as if you had typed it at the command prompt.

s.call(["command.exe", "..."], shell=True)

This gives you access to functionality like piping, you can redirect to files, you can call multiple commands in one thing.
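For example, on a POSIX system a whole pipeline can be run as one string when shell=True is set (echo and tr here are just stand-ins for your own commands):

```python
import subprocess

# With shell=True the string is interpreted by the shell, so pipes and
# redirection work as they would at a prompt.
result = subprocess.run("echo hello | tr a-z A-Z", shell=True,
                        stdout=subprocess.PIPE, universal_newlines=True)
print(result.stdout.strip())  # HELLO
```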

One more thing: if your script relies on the process succeeding, then you want to check the result, and the result can be checked with the check_call helper function.

s.check_call(...)

It is exactly the same as the call function: it takes the same arguments and the same list, and you can pass in any of the extra arguments. But it waits for the command to complete, and if the exit code of the command is anything other than zero, it raises an exception in the Python script.
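A short sketch of what that looks like in practice (using the standard POSIX false utility, which always exits with a non-zero status):

```python
import subprocess

# check_call raises CalledProcessError when the exit code is non-zero.
try:
    subprocess.check_call(["false"])
except subprocess.CalledProcessError as e:
    print("command failed with exit code", e.returncode)
```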

Finally, if you want tighter control, there is the Popen constructor, which is also from the subprocess module. It takes the same arguments as the call and check_call functions, but it returns an object representing the running process.

p=s.Popen("...")

It does not wait for the running process to finish, and it does not throw any exception immediately. Instead, it gives you an object that lets you wait for the process to finish, communicate with it, redirect standard input and standard output if you want to send the output somewhere else, and a lot more.
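A minimal sketch of that extra control (echo is just a placeholder command):

```python
import subprocess

# The child runs in the background while the script continues; output is
# collected later with communicate(), which also waits for it to finish.
p = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE,
                     universal_newlines=True)
out, _ = p.communicate()
print(out.strip(), p.returncode)
```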

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Kashif Iftikhar
  • 1,005
  • 7
  • 23
4

Using the Popen constructor from the subprocess module is a simple way of running Linux commands. Its Popen.communicate() function will give you your command's output. For example

import subprocess

..
process = subprocess.Popen(..)   # Pass command and arguments to the function
stdout, stderr = process.communicate()   # Get command output and error
..
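A complete, runnable version of that sketch might look like this (echo is just a stand-in for your own command):

```python
import subprocess

# Pass the command and its arguments as a list, then collect output/error.
process = subprocess.Popen(["echo", "hi there"],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                           universal_newlines=True)
stdout, stderr = process.communicate()  # get command output and error
print("output:", stdout.strip())
print("errors:", stderr.strip())
```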
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Asif Hasnain
  • 181
  • 1
  • 1
  • This is no longer true, and probably wasn't when this answer was posted. You should prefer `subprocess.check_call()` and friends unless you absolutely need the lower-level control of the more-complex `Popen()`. In recent Python versions, the go-to workhorse is `subprocess.run()` – tripleee Dec 03 '18 at 05:30
4

There are many ways to call a command.

  • For example:

Suppose and.exe needs two parameters. In cmd we can call it with and.exe 2 3 and it shows 5 on the screen.

If we use a Python script to call and.exe, we can do it like this:

  1. os.system(cmd,...)

    • os.system(("and.exe" + " " + "2" + " " + "3"))
  2. os.popen(cmd,...)

    • os.popen(("and.exe" + " " + "2" + " " + "3"))
  3. subprocess.Popen(cmd,...)
    • subprocess.Popen(("and.exe" + " " + "2" + " " + "3"))

Building the command string by concatenation is clumsy, so we can join the executable name and the parameters with spaces instead:

import os
cmd = " ".join([exename] + parameters)
os.popen(cmd)
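Alternatively, with the subprocess module you can skip the joining entirely and pass the arguments as a list (echo stands in for and.exe here so the sketch is runnable on a POSIX system):

```python
import subprocess

# Passing arguments as a list avoids manual quoting and joining entirely.
exename = "echo"
parameters = ["2", "3"]
p = subprocess.Popen([exename] + parameters, stdout=subprocess.PIPE,
                     universal_newlines=True)
out, _ = p.communicate()
print(out.strip())  # 2 3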
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
liuyip
  • 95
  • 1
  • 2
  • `os.popen` should not be recommended and perhaps even mentioned any longer. The `subpocess` example should pass the arguments as a list instead of joining them with spaces. – tripleee Dec 03 '18 at 05:25
4

os.popen() is one of the easiest ways to execute a command, though note that it runs the command through a shell, so it should not be used with untrusted input. You can execute any command that you run on the command line. In addition, you will also be able to capture the output of the command using os.popen().read()

You can do it like this:

import os
output = os.popen('Your Command Here').read()
print (output)

An example where you list all the files in the current directory:

import os
output = os.popen('ls').read()
print (output)
# Outputs list of files in the directory
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Trect
  • 1,662
  • 2
  • 18
  • 32
3

The subprocess module described previously by Eli is very powerful, but the syntax to make a bog-standard system call and inspect its output, is unnecessarily prolix.

The easiest way to make a system call is with the commands module (Linux only).

> import commands
> commands.getstatusoutput("grep matter alice-in-wonderland.txt")
(0, "'Then it doesn't matter which way you go,' said the Cat.")

The first item in the tuple is the return code of the process. The second item is its standard output (and standard error, merged).

Python 2.6 (2008) deprecated the commands module, but that doesn't mean you shouldn't use it. Only that they're not developing it any more, which is okay, because it's already perfect (it's a small, but important, function).


Update 2015: Python 3.5 added subprocess.run, which is much easier to use than the older subprocess API: https://docs.python.org/3/library/subprocess.html#subprocess.run. I recommend that.
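For the same (status, output) interface in Python 3, there is also subprocess.getstatusoutput, the direct replacement for commands.getstatusoutput:

```python
import subprocess

# Returns a (exit_status, merged_stdout_and_stderr) tuple, like the old
# commands.getstatusoutput did.
status, output = subprocess.getstatusoutput("echo hello")
print(status, output)  # 0 hello
```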

Colonel Panic
  • 119,181
  • 74
  • 363
  • 435
  • 8
    Deprecated doesn't only mean "isn't developed anymore" but also "you are discouraged from using this". Deprecated features may break anytime, may be removed anytime, or may dangerous. You should never use this in important code. Deprecation is merely a better way than removing a feature immediately, because it gives programmers the time to adapt and replace their deprecated functions. – Misch Apr 19 '13 at 08:07
  • 3
    Just to prove my point: "Deprecated since version 2.6: The commands module has been removed in Python 3. Use the subprocess module instead." – Misch Apr 19 '13 at 08:14
  • It's not dangerous! The Python devs are careful only to break features between major releases (ie. between 2.x and 3.x). I've been using the commands module since 2004's Python 2.4. It works the same today in Python 2.7. – Colonel Panic Apr 23 '13 at 16:09
  • 6
    With dangerous, I didn't mean that it may be removed anytime (that's a different problem), neither did I say that it is dangerous to use this specific module. However it may become dangerous if a security vulnerability is discovered but the module isn't further developed or maintained. (I don't want to say that this module is or isn't vulnerable to security issues, just talking about deprecated stuff in general) – Misch Apr 23 '13 at 16:23
3

If you are not using user input in the commands, you can use this:

from os import getcwd
from subprocess import check_output
from shlex import quote

def sh(command):
    return check_output(command, shell=True, cwd=getcwd(), universal_newlines=True).strip()

And use it as

branch = sh('git rev-parse --abbrev-ref HEAD')

shell=True will spawn a shell, so you can use pipes and other shell features: sh('ps aux | grep python'). This is very handy for running hardcoded commands and processing their output. The universal_newlines=True makes sure the output is returned as a string instead of binary.

cwd=getcwd() will make sure that the command is run with the same working directory as the interpreter. This is handy for Git commands to work like the Git branch name example above.

Some recipes

  • free memory in megabytes: sh('free -m').split('\n')[1].split()[1]
  • free space on / in percent sh('df -m /').split('\n')[1].split()[4][0:-1]
  • CPU load sum(map(float, sh('ps -ef -o pcpu').split('\n')[1:]))

But this isn't safe for user input, from the documentation:

Security Considerations

Unlike some other popen functions, this implementation will never implicitly call a system shell. This means that all characters, including shell metacharacters, can safely be passed to child processes. If the shell is invoked explicitly, via shell=True, it is the application’s responsibility to ensure that all whitespace and metacharacters are quoted appropriately to avoid shell injection vulnerabilities.

When using shell=True, the shlex.quote() function can be used to properly escape whitespace and shell metacharacters in strings that are going to be used to construct shell commands.

Even when using shlex.quote(), it is good to stay a little paranoid when using user input in shell commands. One option is using a hardcoded command to take some generic output and filtering by user input. Anyway, using shell=False will make sure that only exactly the process that you want to execute will be executed, or you get a No such file or directory error.
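For example, a sketch of passing untrusted input safely with shell=False (the default), where shell metacharacters are never interpreted:

```python
import subprocess

# Hypothetical user input containing shell metacharacters. With the default
# shell=False and a list argument, it reaches the child as one literal
# argument and no shell ever interprets it.
user_input = "foo; rm -rf $HOME"
out = subprocess.check_output(["echo", user_input], universal_newlines=True)
print(out.strip())
```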

Also, there is some performance impact with shell=True; from my tests it seems about 20% slower than shell=False (the default).

In [50]: timeit("check_output('ls -l'.split(), universal_newlines=True)", number=1000, globals=globals())
Out[50]: 2.6801227919995654

In [51]: timeit("check_output('ls -l', universal_newlines=True, shell=True)", number=1000, globals=globals())
Out[51]: 3.243950183999914
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
geckos
  • 3,955
  • 1
  • 30
  • 35
2

I would recommend the following 'run' method; it returns standard output, standard error, and the exit status as a dictionary, so the caller can read the dictionary returned by 'run' to know the actual state of the process.

import subprocess

def run(cmd):
    print "+ DEBUG exec({0})".format(cmd)
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         universal_newlines=True, shell=True)
    (out, err) = p.communicate()
    ret        = p.wait()
    out        = filter(None, out.split('\n'))
    err        = filter(None, err.split('\n'))
    ret        = True if ret == 0 else False
    return dict({'output': out, 'error': err, 'status': ret})
# end
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Viswesn
  • 4,030
  • 1
  • 26
  • 38
2

I have written a wrapper to handle errors and redirecting output and other stuff.

import shlex
import sys
import psutil
import subprocess


class CalledProcessTimeout(subprocess.CalledProcessError):
    """Raised when the external command exceeds the given timeout."""


def call_cmd(cmd, stdout=sys.stdout, quiet=False, shell=False, raise_exceptions=True, use_shlex=True, timeout=None):
    """Exec command by command line like 'ln -ls "/var/log"'
    """
    if not quiet:
        print("Run %s" % str(cmd))
    if use_shlex and isinstance(cmd, str):
        cmd = shlex.split(cmd)
    if timeout is None:
        process = subprocess.Popen(cmd, stdout=stdout, stderr=sys.stderr, shell=shell)
        retcode = process.wait()
    else:
        process = subprocess.Popen(cmd, stdout=stdout, stderr=sys.stderr, shell=shell)
        p = psutil.Process(process.pid)
        finish, alive = psutil.wait_procs([p], timeout)
        if len(alive) > 0:
            ps = p.children()
            ps.insert(0, p)
            print('waiting for timeout again due to child process check')
            finish, alive = psutil.wait_procs(ps, 0)
        if len(alive) > 0:
            print('process {} will be killed'.format([p.pid for p in alive]))
            for p in alive:
                p.kill()
            if raise_exceptions:
                print('External program timeout at {} {}'.format(timeout, cmd))
                raise CalledProcessTimeout(1, cmd)
        retcode = process.wait()
    if retcode and raise_exceptions:
        print("External program failed %s" % str(cmd))
        raise subprocess.CalledProcessError(retcode, cmd)
    return retcode

You can call it like this:

cmd = 'ln -ls "/var/log"'
with open('out.txt', 'w') as stdout:
    call_cmd(cmd, stdout)
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Asav Patel
  • 928
  • 6
  • 23
0

Sultan is a recent-ish package meant for this purpose. It provides some niceties around managing user privileges and adding helpful error messages.

from sultan.api import Sultan

with Sultan.load(sudo=True, hostname="myserver.com") as sultan:
  sultan.yum("install -y tree").run()
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Zach Valenta
  • 1,301
  • 1
  • 11
  • 30
-3

I use this for Python 3.6+:

import subprocess
def execute(cmd):
    """
        Purpose  : To execute a command and return exit status
        Argument : cmd - command to execute
        Return   : result, exit_code
    """
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (result, error) = process.communicate()
    rc = process.wait()
    if rc != 0:
        print ("Error: failed to execute command: ", cmd)
        print (error.rstrip().decode("utf-8"))
    return result.rstrip().decode("utf-8"), error.rstrip().decode("utf-8")
# def
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
ivanmara
  • 71
  • 1
  • 5
  • 1
    Don't use set ``shell=True`` to run commands, it opens the program to command injection vulnerabilities. You're supposed to pass the command as a list with arguments ``cmd=["/bin/echo", "hello word"]``. https://docs.python.org/3/library/subprocess.html#security-considerations – user5994461 Apr 19 '20 at 16:52