I was playing around with the _ underscore in the Python interpreter and wanted to see whether it behaves the same way in code. I have used the underscore in code as a 'don't care' variable, like this:
_, a = someFunction()
And in the interpreter to get the last stored value, like this:
>>> 2 + 2
4
>>> a = _
>>> a
4
Now I tried to execute the following example code:
for i in range(5):
    2 + 1
    a = _
print(a)
In the interpreter as well as saved in a Python script and run using python underscore.py.
With the behavior in mind that the _ underscore saves the last stored value (because here it is not used as a 'don't care' variable), the expected outcome would be 2 + 1 = 3, making 3 the last stored value, which is then saved into the a variable with a = _.
The outcome of the interpreter was the following:
>>> for i in range(5):
...     2 + 1
...     a = _
...
3
3
3
3
3
>>> print(a)
3
This outcome works as expected, while the same code saved in a Python script and run using python underscore.py resulted in a NameError:
C:\Users\..\Python files>python underscore.py
Traceback (most recent call last):
File "underscore.py", line 3, in <module>
a = _
NameError: name '_' is not defined
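A minimal sketch (my own check, not part of the original script) shows that in a plain script the name _ is simply unbound until something assigns it:

```python
# Run as a script: '_' is not predefined outside the interactive interpreter,
# so reading it raises NameError unless we catch it.
try:
    a = _
except NameError:
    a = None

print(a)  # None, because '_' was never bound in script mode
```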
When reading the error, it sounds logical that the _ variable is not defined, and it probably has something to do with how Python runs a script. I was just wondering: what is the difference between these two cases that makes one produce a somewhat logical answer (once you've been using the interpreter like this for a while) and the other a NameError?
So don't get me wrong, I do know what the _ symbol does in Python. What I'm asking is why the exact same code behaves differently in the interpreter than when run as a Python program from the terminal.