Is there a math equivalent of the ternary conditional operator as used in programming?
a = b + (c > 0 ? 1 : 2)
The above means that if $c$ is greater than $0$ then $a = b + 1$, otherwise $a = b + 2$.
From physics, I'm used to seeing the Kronecker delta,$$ {\delta}_{ij} \equiv \left\{ \begin{array}{lll} 1 &\text{if} & i=j \\ 0 &\text{else} \end{array} \right. _{,} $$and I think people who work with it find the slightly generalized notation$$ {\delta}_{\left[\text{condition}\right]} \equiv \left\{ \begin{array}{lll} 1 &\text{if} & \left[\text{condition}\right] \\ 0 &\text{else} \end{array} \right. $$to be pretty natural to them.
So, I tend to use $\delta_{\left[\text{condition}\right]}$ for a lot of things. Just seems so simple and well-understood.
Transforms:
Basic Kronecker delta:
To write the basic Kronecker delta in terms of the generalized Kronecker delta, it's just$$
\delta_{ij}
\Rightarrow
\delta_{i=j}
\,.$$It's almost the same notation, and I think most folks can figure it out pretty easily without needing it explained.
Conditional operator:
The "conditional operator" or "ternary operator" for the simple case of ?1:0:$$
\begin{array}{ccccc}
\boxed{
\begin{array}{l}
\texttt{if}~\left(\texttt{condition}\right) \\
\{ \\
~~~~\texttt{return 1;} \\
\} \\
\texttt{else} \\
\{ \\
~~~~\texttt{return 0;} \\
\}
\end{array}
~} &
\Rightarrow &
\boxed{~
\texttt{condition ? 1 : 0}
~} &
\Rightarrow &
\delta_{\left[\text{condition}\right]}
\end{array}
_{.}
$$Then if you want a non-zero value for the false case, you'd just add another Kronecker delta, $\delta_{\operatorname{NOT}\left(\left[\text{condition}\right]\right)} ,$ e.g. $\delta_{i \neq j} .$
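As a sketch of how this maps to code (Python here is an assumption, since the thread doesn't fix a language; the function names are made up for illustration), the boolean-to-integer conversion plays the role of $\delta_{\left[\text{condition}\right]}$:

```python
def delta(condition):
    """Generalized Kronecker delta: 1 if the condition holds, 0 otherwise."""
    return int(condition)

def a_of(b, c):
    # a = b + delta_{c > 0} * 1 + delta_{NOT(c > 0)} * 2
    return b + delta(c > 0) * 1 + delta(not (c > 0)) * 2
```

With this, `a_of(b, c)` reproduces the question's `a = b + (c > 0 ? 1 : 2)`.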
Indicator function:
@SiongThyeGoh's answer recommended using indicator function notation. I'd rewrite their example like$$
\begin{array}{ccccc}
\underbrace{a=b+1+\mathbb{1}_{(-\infty, 0]}(c)}
_{\text{their example}}
&
\Rightarrow &
\underbrace{a=b+1+ \delta_{c \in \left(-\infty, 0\right]}}
_{\text{direct translation}} &
\Rightarrow &
\underbrace{a=b+1+ \delta_{c \, {\small{\leq}} \, 0}}
_{\text{cleaner form}}
\end{array}
\,.
$$
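A quick numerical sanity check that the three forms agree (a Python sketch; the helper names are invented for illustration):

```python
def indicator_nonpositive(c):
    # the indicator function 1_{(-inf, 0]} applied to c
    return int(c <= 0)

def delta(condition):
    # generalized Kronecker delta
    return int(condition)

def a_their(b, c):
    return b + 1 + indicator_nonpositive(c)   # their example

def a_clean(b, c):
    return b + 1 + delta(c <= 0)              # cleaner form
```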
Iverson bracket:
Iverson bracket notation, as suggested in @FredH's answer, is very similar to the generalized Kronecker delta. For example:$$
\delta_{i=j}
~~ \Rightarrow ~~
\left[i = j \right]
\,.$$Dropping the $`` \delta "$ reduces backwards-compatibility with the basic Kronecker delta, plus it weakens the signal about what the notation means, so it's probably not as good in general contexts right now. However, Iverson bracket notation should be easier to read and write, so when reinforcing the meaning of the notation isn't a big issue, it could be preferable.
The conditional operator, condition ? trueValue : falseValue, has 3 arguments, making it an example of a ternary operator. By contrast, most other operators in programming tend to be unary operators (which have 1 argument) or binary operators (which have 2 arguments).
Since the conditional operator is fairly unique in being a ternary operator, it's often been called "the ternary operator", leading many to believe that that's its name. However, "conditional operator" is more specific and should generally be preferred.
The expression b + (c > 0 ? 1 : 2) is not a ternary operator; it is a function of two variables. There is one operation that results in $a$. You can certainly define a function
$$f(b,c)=\begin {cases} b+1&c \gt 0\\
b+2 & c \le 0 \end {cases}$$
You can also define functions with any number of inputs you want, so you can define $f(a,b,c)=a(b+c^2)$, for example. This is a ternary function.
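As a sketch (Python is assumed here purely for illustration), the two-variable function $f(b,c)$ defined by cases above is simply:

```python
def f(b, c):
    # f(b, c) = b + 1 if c > 0, and b + 2 if c <= 0
    if c > 0:
        return b + 1
    return b + 2
```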
In Concrete Mathematics by Graham, Knuth and Patashnik, the authors use the "Iverson bracket" notation: Square brackets around a statement represent $1$ if the statement is true and $0$ otherwise. Using this notation, you could write $$ a = b + 2 - [c \gt 0]. $$
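Since many languages treat a true comparison as 1 in arithmetic, this Iverson-bracket form translates almost verbatim into code (a Python sketch, where `(c > 0)` plays the role of $[c \gt 0]$):

```python
def a_iverson(b, c):
    # a = b + 2 - [c > 0]; the comparison itself already yields 0 or 1
    return b + 2 - (c > 0)
```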
Using the indicator function notation:$$a=b+1+\mathbb{1}_{(-\infty, 0]}(c)$$
In math, equations are written in piecewise form by having a curly brace enclose multiple lines, each with a condition, except the last, which has "otherwise".
There are a few custom operators that also occasionally make an appearance. E.g. the Heaviside function mentioned by Alex, the Dirac delta function, and the cyclic operator $\delta_{ijk}$ - all of which can be used to emulate conditional behaviour.
I think mathematicians should not be afraid to use the Iverson bracket, including when teaching, as this is a generally very useful notation whose only slightly unusual feature is to introduce a logical expression in the middle of an algebraic one (but one already regularly finds conditions inside set-theoretic expressions, so it really is not a big deal). It may avoid a lot of clutter, notably many instances of clumsy expressions by cases with a big unmatched brace (which is usually only usable as the right hand side of a definition). Since brackets do have many other uses in mathematics, I personally prefer a typographically distinct representation of Iverson brackets, rendering your example as $$\def\[#1]{[\![{#1}]\!]} a= b + \[c>0]1 + \[c\not>0]2. $$ This works best in additive context (though one can use Iverson brackets in the exponent for optional multiplicative factors). It is not really ideal for general two-way branches, as the condition must be repeated twice, one of them in negated form, but it happens that most of the time one needs $0$ as the value for one branch anyway.
As a more concise two-way branch, I can recall that Algol68 introduced the notation $b+(c>0\mid 1\mid 2)$ for the right-hand side of your equation; though this is a programming language and not mathematics, it was designed by mathematicians. They also had notation for multi-way branching: thus the solution to the recursion $a_{n+2}=a_{n+1}-a_n$ with initial values $a_0=0$, $a_1=1$ can be written $$ a_n=(n\bmod 6+1\mid 0,1,1,0,-1,-1) $$ (where the "${}+1$" is needed because in 1968 they still counted starting from $1$, which is a mistake), which is reasonably concise and readable, compared to other ways to express this result. Also consider, for month $m$ in year $y$, the number $$ ( m \mid 31,(y\bmod 4=0\land y\bmod 100\neq0\lor y\bmod400=0\mid 29\mid 28) ,31,30,31,30,31,31,30,31,30,31). $$
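The Algol68-style multi-way branch can be emulated with tuple indexing (a Python sketch; Python indexes from 0, so the "${}+1$" disappears):

```python
def a(n):
    # a_n = (n mod 6 + 1 | 0, 1, 1, 0, -1, -1) in Algol68 notation
    return (0, 1, 1, 0, -1, -1)[n % 6]

def days_in_month(m, y):
    # (m | 31, (leap | 29 | 28), 31, 30, ...) with 1-based month m
    leap = (y % 4 == 0 and y % 100 != 0) or y % 400 == 0
    return (31, 29 if leap else 28, 31, 30, 31, 30,
            31, 31, 30, 31, 30, 31)[m - 1]
```

The first function can be checked against the recursion $a_{n+2}=a_{n+1}-a_n$ directly.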
The accepted answer from Nat, suggesting the Kronecker delta, is correct. However, it is also important to note that one of the highly upvoted answers here, which claims the C ternary operator x?y:z is not ternary, is incorrect.
Mathematically, the expression x?y:z can be expressed as a function of three variables:
$$f(x,y,z)=\begin {cases} y&x\neq 0\\
z & x=0 \end {cases}$$
Note that in programming an expression such as $a<b$ could be used for $x$. If the expression is true, then $x=1$, otherwise $x=0$.
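A sketch of that three-variable function (Python is assumed here just for illustration):

```python
def ternary(x, y, z):
    # f(x, y, z) = y when x != 0, z when x == 0  (C's x ? y : z)
    return y if x != 0 else z
```

For example, a comparison like $a<b$ supplies $x$ as 1 (true) or 0 (false).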
About nomenclature: computer programmers have used the phrase the ternary operator to mean exactly this since at least the 1970s. Of course, among mathematicians, it would simply be a ternary operator and we would qualify it by either stating a programming language, e.g., the C ternary operator, or by calling it the conditional operator.
One should realize that operators are just a fancy way of using functions. So a ternary operator is a function of 3 variables that is notated in a different way. Is that useful? The answer is mostly not. Also realize that any mathematician is allowed to introduce any notation he feels is illustrative.
Let's review why we use binary operators at all, as in a+b*c. Because parameters and results are of the same type, it makes sense to leave out parentheses and introduce complicated priority rules. Imagine that a, b, c are numbers and we have a normal + and a peculiar * that results in dragons. Now the expression doesn't make sense (assuming a high-priority *), because there is no way to add numbers and dragons. Thus most ternary operators result in a mess.
With a proper notation there are examples of ternary operations. For example, there is a special notation for "the sum for i from a to b of an expression". This takes two bounds (numbers) and a function from such numbers to numbers. (Mathematician, read "element of an additive group" for number.)
The notation for integration is similarly ternary.
So in short, ternary operators exist, and you can define your own. They are generally accompanied by a special notation; without one, they are not helpful.
Now back to the special case you mention. Because truth values are built into math, an expression like "if a then b else c" makes sense if a represents a truth value like (7<12). The above expression is understood in every mathematical context. However, in a context where truth values are not considered a set, (if .. then .. else ..) would not be considered an operator/function, but a textual explanation. A generally accepted notation could be useful in math, but I'm not aware that there is one. That is probably because, as in the above, informal notations are readily understood.
Fundamentally, the non-answer is the answer. Whatever notation you think you might want to use to express "$a=b+1$ if $c>0$ and $a=b+2$ otherwise" or $$a=\begin{cases}b+1 &\text{if }c>0\\b+2 &\text{otherwise}\end{cases}$$ is much harder to read than either of those two things.
There are many good answers that give notation for "if this condition holds, then 1, else 0." This corresponds to an even simpler expression in C: (x>1) is equivalent to (x>1 ? 1 : 0).
It’s worth noting that the ternary operator is more general than that. If the arguments are elements of a ring, you could express c ? a : b with (using Iverson-bracket notation) $(a-b) \cdot [c] + b$, but not otherwise. (And compilers frequently use this trick, in a Boolean ring, to compile conditionals without needing to execute a branch instruction.) In a C program, evaluating the expressions $a$ or $b$ might have side-effects, such as deleting a file or printing a message to the screen. In a mathematical function, this isn’t something you would worry about, and a programming language where this is impossible is called functional.
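A sketch of this branchless trick over the integers (a ring), checked against the conditional itself (Python is assumed here for illustration):

```python
def select(c, a, b):
    # branchless two-way choice over a ring: (a - b) * [c] + b
    return (a - b) * int(bool(c)) + b

# [c] = 1 selects a; [c] = 0 selects b, with no branch in the arithmetic
```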
Ross Millikan gave the most standard notation, a cases block. The closest equivalent in mathematical computer science is the if-then-else function of Lambda Calculus.
The following solution is not defined for $c = 0$; however, it uses only very basic operations, which might be useful since you are probably looking for an expression to implement in a program:
$$a = b + 1\lambda + 2(1-\lambda)$$
where
$$\lambda = \frac{ 1 + \frac{|c|}{c} }{2}$$
You need to make the problem discrete and make a choice between two values. So, given some value $c \in \mathbb{R}$, we need to calculate some value $\lambda \in \{0,1\}$ depending on whether $c<0$ or $c>0$.
Knowing that
$$\frac{|c|}{c} \in \{1,-1\}$$
we can calculate the $\lambda$ as follows:
$$\lambda = \frac{ 1 + \frac{|c|}{c} }{2}$$
Now that our $\lambda \in \{0,1\}$ we can do the "choice" between the two constants $d$ and $e$ as follows:
$$d\lambda + e(1-\lambda)$$
which equals $d$ for $\lambda = 1$, and $e$ for $\lambda = 0$.
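A sketch of this construction (Python is assumed for illustration; as stated above, it is undefined at $c = 0$):

```python
def two_way(d, e, c):
    # lambda = (1 + |c|/c) / 2 is 1 for c > 0 and 0 for c < 0;
    # the whole expression is undefined at c == 0, as noted above
    lam = (1 + abs(c) / c) / 2
    return d * lam + e * (1 - lam)

# the original example is then a = b + two_way(1, 2, c)
```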
The best idea is probably to split the world into different cases above the context in which your expression lives, so you consider the cases where $c$ is positive and the cases where $c$ is nonpositive separately. Or impose conditions on $c$ like $|c| = 1$ .
Another alternative is to create some new ad hoc indicator-like functions whose algebraic properties are as strong as possible. I'm partial to signum because it's odd and multiplicative.
b + (c > 0 ? 1 : 2) can be written nicely using two new functions: $S$ for signum and $Z$ for the zero indicator function:
$$ S(x) \stackrel{\small{\text{def}}}{=} \begin{cases} \;\;1 & x > 0 \\ \;\;0 & x = 0 \\ -1 & x < 0 \end{cases}$$
And $Z$ is $1$ when its argument is $0$ and $0$ otherwise.
So we can write the expression as
$$ b + \frac{3}{2} - \frac{1}{2} S(c) + \frac{1}{2} Z(c) $$
or
$$ b + 2 - \frac{1}{2} S(c) - \frac{1}{2} S^2(c) $$
with $S^2(c)$ denoting $S(c^2)$ or $S(c)^2$ since they're equivalent.
This expression isn't that great in this case, but $S$ and $Z$ have some nice properties:
$$ S(a)S(b) = S(ab) $$ $$ S(a)Z(a) = 0 $$ $$ Z(a) = 1 - S(a)^2 $$ $$ S(S(a)) = S(a) $$ $$ S(a)^n = S(a^n) \;\;\;\;\forall n \in \mathbb{N} $$ $$ S(a)^m = S(a)^m \cdot S(a)^{(2n)} \;\;\;\;\forall m, n \in \mathbb{N} $$
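A numerical sanity check of these identities, and of one $S$/$Z$ encoding of b + (c > 0 ? 1 : 2), as a Python sketch (the language choice is an assumption):

```python
def S(x):
    # signum: 1, 0, or -1
    return (x > 0) - (x < 0)

def Z(x):
    # zero indicator: 1 at 0, else 0
    return int(x == 0)

# spot-check the identities on a small grid
for x in range(-3, 4):
    for y in range(-3, 4):
        assert S(x) * S(y) == S(x * y)
        assert S(x) * Z(x) == 0
        assert Z(x) == 1 - S(x) ** 2
        assert S(S(x)) == S(x)
```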
The concepts of lambda calculus and combinatory logic are worth mentioning here.
We define lambda calculus by two constructions:
- You can make a function using abstraction; e.g., to define the function $f(x)=x+1$ (assuming that numbers and addition are defined), we write $\lambda x.x+1$.
- You can use a function using application; e.g., given a function $M$, we can evaluate it at $x$ by writing $M\ x$.
Surprisingly, these two constructions are enough to express all of mathematics, with nothing else assumed! For natural numbers, there are Church numerals. To define boolean values, we can write $$\mathbf{True} \equiv \lambda x. \lambda y. x$$ and $$\mathbf{False} \equiv \lambda x. \lambda y. y.$$ From this we can define all of Boolean algebra. Specifically, for the ternary operator $b?x:y$, we can construct the expression $b\ x\ y.$
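The Church-boolean construction can be tried directly (a Python sketch; Python's `lambda` stands in for $\lambda$-abstraction):

```python
TRUE = lambda x: lambda y: x    # λx.λy.x
FALSE = lambda x: lambda y: y   # λx.λy.y

def ternary(b, x, y):
    # b ? x : y is just the application  b x y
    return b(x)(y)
```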
For mathematical logic, let's say that the ternary operator A ? B : C represents "for the predicates $A$, $B$, $C$: $B$ is true iff $A$ is true, and $C$ is true iff $A$ is false". We can then represent your problem as $A := (c > 0)$, $B := (a = b + 1)$, and $C := (a = b + 2)$. We can write this in predicate logic as the true statement
$$(\neg A \iff C) \land (A \iff B)$$
We can verify the correctness of this with a truth table:
$$\begin{array} {|c c c | c |} \hline A & B & C & (\neg A \iff C) \land (A \iff B) \\ \hline F & F & F & F \\ \hline F & F & T & \color{blue}T \\ \hline F & T & F & F \\ \hline F & T & T & F \\ \hline T & F & F & F \\ \hline T & F & T & F \\ \hline T & T & F & \color{blue}T \\ \hline T & T & T & F \\ \hline \end{array} $$
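The truth table can also be verified mechanically (a Python sketch; the language choice is an assumption):

```python
from itertools import product

def iff(p, q):
    return p == q

def formula(A, B, C):
    # (not A <-> C) and (A <-> B)
    return iff(not A, C) and iff(A, B)

# the formula holds exactly on the two rows marked T in the table: FFT and TTF
true_rows = {(A, B, C) for A, B, C in product((False, True), repeat=3)
             if formula(A, B, C)}
assert true_rows == {(False, False, True), (True, True, False)}
```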