1

I would like to run several commands in the same shell. After some research I found that I could keep a shell open using the process returned by Popen. I can then write to its stdin and read from its stdout. I tried implementing it like this:

from subprocess import Popen, PIPE

process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
process.stdin.write('ls -al\n')
out = ' '
while not out == '':
    out = process.stdout.readline().rstrip('\n')
    print out

Not only is my solution ugly, it doesn't work: `out` is never empty because the loop hangs on `readline()`. How can I successfully end the while loop when there is nothing left to read?

Juicy
  • related to the title: [Python: read streaming input from subprocess.communicate()](http://stackoverflow.com/a/17698359/4279) – jfs Jul 29 '15 at 19:27
  • think about why [there is `prompt()` method in `pexpect` module](https://pexpect.readthedocs.org/en/latest/api/pxssh.html#pexpect.pxssh.pxssh.prompt) – jfs Jul 29 '15 at 19:30

2 Answers

2

Use iter to read data in real time:

for line in iter(process.stdout.readline, ""):
    print line
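
For the question's long-running shell, that loop on its own still never ends, because `sh` keeps its stdout open. One way around it is to echo a sentinel after each command and stop reading once the sentinel appears. A rough sketch of that idea (the marker string and the `run` helper are invented for illustration, not part of this answer):

from subprocess import Popen, PIPE

process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE, bufsize=0)

def run(cmd, marker='__CMD_DONE__'):
    # Echo the marker after the command so we know when its output ends.
    process.stdin.write(cmd + '; echo %s\n' % marker)
    output = []
    for line in iter(process.stdout.readline, ''):
        line = line.rstrip('\n')
        if line == marker:
            break
        output.append(line)
    return output

print run('cd /tmp && ls -al')
print run('pwd')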

If you just want to write to stdin and get the output, you can use `communicate` to make the process end:

process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
out, err = process.communicate('ls -al\n')
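
Note that `communicate` closes stdin and waits for the shell to exit, so it can only be called once per `Popen` object; if that is acceptable, several commands can still share the one shell by sending them together. A small sketch (the commands are placeholders):

from subprocess import Popen, PIPE

process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
# All three commands run in the same shell, which then exits.
out, err = process.communicate('cd /tmp\nls -al\npwd\n')
print out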

Or, to simply get the output, use `check_output`:

from subprocess import check_output

out = check_output(["ls", "-al"])
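
As the comments below point out, each `check_output` call starts a fresh process, so a `cd` in one call does not affect the next. Commands that depend on each other can still be chained inside a single `sh -c` string; a short sketch (the path is a placeholder):

from subprocess import check_output

# Both commands run in one throwaway shell; the cd only affects that shell.
out = check_output(['/bin/sh', '-c', 'cd /tmp && ls -al'])
print out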
Padraic Cunningham
  • @AnandSKumar, yes indeed, used the OP's. – Padraic Cunningham Jul 29 '15 at 18:56
  • But if I use `check_output` then it doesn't keep the same shell, no? I.e. if one of the commands changes the current directory, then the next time I call `check_output` it will still be in the start directory. – Juicy Jul 29 '15 at 20:46
  • what do you actually want to do? – Padraic Cunningham Jul 29 '15 at 20:46
  • I want to run a long list of commands in the same shell. Some of these commands change the current working directory with `cd`; I run a few commands, change the directory, run a few more, etc. So it is crucial that these commands run in the same shell for the working directories to be set correctly. – Juicy Jul 29 '15 at 20:48
  • Ok, do you need to parse the output to know what to write to stdin? – Padraic Cunningham Jul 29 '15 at 20:50
  • Yes I would need to both have the output in a variable and print it to screen. Also process return code if possible. – Juicy Jul 29 '15 at 20:52
  • Then parse the output and write to stdin, are you actually using bash or sh? – Padraic Cunningham Jul 29 '15 at 20:56
  • Also essentially what do you want to achieve? – Padraic Cunningham Jul 29 '15 at 21:04
  • The problem with `process.communicate` is that it kills my process. I can't use the same one again later. As an oversimplified example, let's say I want to run four commands *in the same shell and get their output*: first `cd ~/Documents/`, then `ls -al`, then `cd mydir`, then `cat somefile`. This sequence of commands can only work if they're run in the same shell, as the working directory has to be modified by calls to `cd` or `chdir`. For that reason I would like to run all these commands in the same `process`. – Juicy Jul 29 '15 at 21:07
  • I know there would be ways of handling the file directories directly in Python and doing essentially the same, but this is an oversimplified example. I would actually need full access to a shell for most of my commands. – Juicy Jul 29 '15 at 21:09
  • To change dir I would use `os.chdir`; the only problem I see is storing the output. What do you want to do with the output from the commands? – Padraic Cunningham Jul 29 '15 at 21:20
  • Parse it and use some of it as arguments for further inputs. – Juicy Jul 29 '15 at 21:21
  • I think you may be trying to do what pexpect will do for you, if you know the order and how many commands you will be running then it is probably easy enough at a basic level to do what you want. You are only going to be using the command output in the loop itself and maybe the final output outside? – Padraic Cunningham Jul 29 '15 at 21:22
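
Since the comments above mention pexpect: its `replwrap` helper (available in recent pexpect versions) wraps exactly this keep-one-shell-alive, wait-for-the-prompt pattern. A hedged sketch, assuming pexpect is installed and with placeholder commands:

from pexpect import replwrap

# Spawns a single bash process and waits for a known prompt after each command.
shell = replwrap.bash()
print shell.run_command('cd /tmp')
print shell.run_command('ls -al')
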
1

The command you're running in a subprocess is `sh`, so the output you're reading is `sh`'s output. Since you didn't tell the shell to quit, it is still alive, and thus its stdout is still open.

You can perhaps write `exit` to its stdin to make it quit, but be aware that in any case you may end up reading things you don't need from its stdout, e.g. the prompt.
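
A minimal sketch of that "write exit" idea, assuming all commands can be sent up front and the output read in one pass afterwards (the commands are placeholders):

from subprocess import Popen, PIPE

process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
process.stdin.write('cd /tmp\n')
process.stdin.write('ls -al\n')
process.stdin.write('exit\n')

# Once the shell exits, its stdout is closed and readline() returns ''.
for line in iter(process.stdout.readline, ''):
    print line.rstrip('\n')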

Bottom line, this approach is flawed to start with...

shx2
  • How else would I approach this? The reason I want to keep the same shell is because some of the commands I will be running will be changing the working directory. That is why I would like to run them all in the same "shell" process to keep all relatives paths correct. – Juicy Jul 29 '15 at 20:49
  • @Juicy, One option is to put all the commands in a shell script and run it directly, instead of feeding the commands one-by-one to `sh`. Another is to feed them one-by-one like you do, but to add `exit` at the end. Parsing the output in the latter can be tricky. You might want to set `$PROMPT` to be empty. – shx2 Jul 30 '15 at 05:52
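
A sketch of the first option from the comment above: write the whole command sequence to a temporary script and run it in a single shell (the file handling and the commands themselves are illustrative choices):

import os
import tempfile
from subprocess import check_output

# Write all commands to one script so they share a single shell and working directory.
script = tempfile.NamedTemporaryFile(suffix='.sh', delete=False)
script.write('cd /tmp\nls -al\npwd\n')
script.close()

out = check_output(['/bin/sh', script.name])
print out
os.unlink(script.name)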