
I want to know whether any commands in a bash script exited with a non-zero status.

I want something similar to set -e functionality, except that I don't want it to exit when a command exits with a non-zero status. I want it to run the whole script, and then I want to know that either:

a) all commands exited with exit status 0
-or-
b) one or more commands exited with a non-zero status


e.g., given the following:

#!/bin/bash

command1  # exits with status 1
command2  # exits with status 0
command3  # exits with status 0

I want all three commands to run. After running the script, I want an indication that at least one of the commands exited with a non-zero status.

Rob Bednark

7 Answers


Set a trap on ERR:

#!/bin/bash

err=0
trap 'err=1' ERR

command1
command2
command3
test $err = 0 # Return non-zero if any command failed
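One caveat worth mentioning (not part of the answer above): like set -e, the ERR trap does not fire for commands in an if/while condition, for commands in a pipeline other than the last one, or on the left-hand side of && / ||, and it is not inherited by shell functions, command substitutions, or subshells unless errtrace is enabled. A minimal variant with errtrace turned on, using the same placeholder commands:

#!/bin/bash
set -E            # errtrace: functions, command substitutions and subshells inherit the ERR trap

err=0
trap 'err=1' ERR

command1
command2
command3

exit "$err"       # non-zero if any command failed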

You might even throw in a little introspection to get data about where the error occurred:

#!/bin/bash
for i in 1 2 3; do
        eval "command$i() { echo command$i; test $i != 2; }"
done

err=0
report() {
        err=1
        echo -n "error at line ${BASH_LINENO[0]}, in call to "
        sed -n ${BASH_LINENO[0]}p $0
} >&2
trap report ERR

command1
command2
command3
exit $err
William Pursell

You could try to do something with a trap for the DEBUG pseudosignal, such as

trap '(( $? && ++errcount ))' DEBUG

The DEBUG trap is executed

before every simple command, for command, case command, select command, every arithmetic for command, and before the first command executes in a shell function

(quote from manual).

So if you add this trap plus, as the last command, something that prints the error count, you get the proper value:

#!/usr/bin/env bash

trap '(( $? && ++errcount ))' DEBUG

true
false
true

echo "Errors: $errcount"

prints Errors: 1, and

#!/usr/bin/env bash

trap '(( $? && ++errcount ))' DEBUG

true
false
true
false

echo "Errors: $errcount"

prints Errors: 2. Note that the final echo statement really is required to account for the second false: since the trap is executed before each command, the exit status of the second false is only checked when the trap for the echo line runs.
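If you also want the script itself to exit non-zero when anything failed, as asked in the question, a minimal sketch along the same lines (using the placeholder command1..command3 from the question):

#!/usr/bin/env bash

trap '(( $? && ++errcount ))' DEBUG

command1
command2
command3

# exit is itself a simple command, so the DEBUG trap has already accounted
# for command3's status by the time this arithmetic expansion is evaluated
exit "$(( errcount > 0 ? 1 : 0 ))"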

Benjamin W.

I am not sure if there is a ready-made solution for your requirement. I would write a function like this:

function run_cmd_with_check() {
  "$@"
  [[ $? -ne 0 ]] && ((non_zero++))
}

Then, use the function to run all the commands that need tracking:

run_cmd_with_check command1
run_cmd_with_check command2
run_cmd_with_check command3
printf "$non_zero commands exited with non-zero exit code\n"

If required, the function can be enhanced to store all failed commands in an array which can be printed out at the end.
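For example, a sketch of that enhancement (the failed_cmds array name is just illustrative):

non_zero=0
failed_cmds=()

run_cmd_with_check() {
  if ! "$@"; then
    ((non_zero++))
    failed_cmds+=("$*")    # remember the command line that failed
  fi
}

run_cmd_with_check command1
run_cmd_with_check command2
run_cmd_with_check command3

printf '%s commands exited with a non-zero exit code\n' "$non_zero"
if (( ${#failed_cmds[@]} > 0 )); then
  printf 'failed: %s\n' "${failed_cmds[@]}"
fi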


You may want to take a look at this post for more info: Error handling in Bash

codeforester
  • @GrishaLevit - you are right - I corrected the answer. Looks like the quoting topic never stops being a little confusing... – codeforester Feb 14 '17 at 06:48

Bash gives you the special variable $?, which holds the exit status of the last command:

#!/bin/bash

command1  # exits with status 1
C1_output=$?   # will be 1
command2  # exits with status 0
C2_output=$?   # will be 0
command3  # exits with status 0
C3_output=$?   # will be 0
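To turn those captured statuses into the single indication you're after, you could then check them together at the end (a small addition, not part of the answer itself):

if (( C1_output || C2_output || C3_output )); then
    echo "at least one command exited with a non-zero status" >&2
    exit 1
fi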
hjpotter92
  • Yes, I'm aware of that. That doesn't directly answer my question. You might be implying that I could modify the script to keep a running total of all exit statuses. That would be a solution, but not a desirable one, as it requires adding 30 lines of code to a script with 30 commands in it, making it more difficult to read, and is prone to human-error by forgetting to add it to every command. I suppose one could create a function and pass every command to the function. Still rather cumbersome. I'm hoping for something like a simple `set -e` type of solution. – Rob Bednark Feb 14 '17 at 04:51
  • @RobBednark This is probably the best you can do. Consider a command like `((x=0))`. That is technically a failed command, because the contained expression evaluates to 0. Anything automated would have to take things like that into account. The problem with `set -e` is that a failed command isn't necessarily an error. – chepner Feb 14 '17 at 12:05

For each command you could do this:

if ! command1; then an_error=1; fi

Repeat this for each of your commands.

At the end, an_error will be 1 if any of them failed.

If you want a count of failures instead, set an_error to 0 at the beginning and use ((an_error++)) instead of an_error=1.
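Putting that together with the placeholder commands from the question, a minimal sketch:

#!/bin/bash

an_error=0

if ! command1; then ((an_error++)); fi
if ! command2; then ((an_error++)); fi
if ! command3; then ((an_error++)); fi

echo "$an_error command(s) exited with a non-zero status"
exit "$(( an_error > 0 ? 1 : 0 ))"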

Mike Wodarczyk

You could place your list of commands into an array and then loop over them. Any command that returns a non-zero exit code gets its output saved for later viewing.

declare -A results

commands=("your" "commands")

for cmd in "${commands[@]}"; do 
    out=$($cmd 2>&1)
    [[ $? -eq 0 ]] || results[$cmd]="$out"
done    

Then, to see which commands failed (and their captured output):

for cmd in "${!results[@]}"; do echo "$cmd = ${results[$cmd]}"; done

If the length of results is 0, there were no errors in your list of commands.

This requires Bash 4+ (for the associative array)
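If the script's own exit status should reflect that as well, one small addition (assuming the same results array as above):

# exit non-zero if any command in the list failed
(( ${#results[@]} == 0 )) || exit 1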

dawg

You can use the DEBUG trap like this:

declare -i code=0      # integer attribute so += adds statuses instead of appending strings
trap 'code+=$?' DEBUG

# run commands here normally

exit $code
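Note that because the statuses are summed, the total can exceed 255 and wrap around in the final exit status (in the worst case back to 0); if you only need a pass/fail flag, exiting with $(( code ? 1 : 0 )) instead of $code (a small tweak, not part of the answer above) avoids that.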
Grisha Levit