
I know what covariance and contravariance of types are. My question is why haven't I encountered discussion of these concepts yet in my study of Haskell (as opposed to, say, Scala)?

It seems there is a fundamental difference in the way Haskell views types as opposed to Scala or C#, and I'd like to articulate what that difference is.

Or maybe I'm wrong and I just haven't learned enough Haskell yet :-)

  • It's been a while but I seem to recall some functional/haskellish dialog in this video about co/contra-variance: http://channel9.msdn.com/shows/Going+Deep/E2E-Brian-Beckman-and-Erik-Meijer-CoContravariance-in-Physics-and-Programming-2-of-2/ – steamer25 Feb 16 '12 at 19:43

3 Answers


There are two main reasons:

  • Haskell lacks an inherent notion of subtyping, so in general variance is less relevant.
  • Contravariance mostly appears where mutability is involved, so most data types in Haskell would simply be covariant and there'd be little value in distinguishing that explicitly.

However, the concepts do apply--for instance, the lifting operation performed by fmap for Functor instances is actually covariant, and the terms co-/contravariance are used in category theory to talk about functors. The contravariant package defines a type class for contravariant functors, and if you look at the instance list you'll see why I said contravariance is much less common.
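
A small sketch of the two directions (the helper names here are invented; the import assumes the Data.Functor.Contravariant module from the contravariant package, which newer versions of base also ship):

import Data.Functor.Contravariant (Contravariant (..), Predicate (..))

-- Covariant: fmap post-composes a function on the output side.
halve :: Maybe Int -> Maybe Double
halve = fmap (\n -> fromIntegral n / 2)

-- Contravariant: contramap pre-composes a function on the input side.
isLong :: Predicate String
isLong = Predicate (\s -> length s > 10)

isLongWhenShown :: Show a => Predicate a
isLongWhenShown = contramap show isLong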

There are also places where the idea shows up implicitly, in how manual conversions work--the various numeric type classes define conversions to and from basic types like Integer and Rational, and the module Data.List contains generic versions of some standard functions. If you look at the types of these generic versions you'll see that Integral constraints (giving toInteger) are used on types in contravariant position, while Num constraints (giving fromInteger) are used on types in covariant position.
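
For example (a small sketch; countUp and takeSome are names I made up for monomorphic specializations of the real Data.List functions):

import Data.List (genericLength, genericTake)

-- genericLength :: Num i => [a] -> i
-- The Num constraint (fromInteger) sits on the result, a covariant position.
countUp :: [a] -> Double
countUp = genericLength

-- genericTake :: Integral i => i -> [a] -> [a]
-- The Integral constraint (toInteger) sits on an argument, a contravariant position.
takeSome :: Integer -> [a] -> [a]
takeSome = genericTake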

– C. A. McCann
  • I see no relation between contravariance and mutability. In fact, mutability leads to invariance. Examples of contravariance are function inputs, `Ord` and `Eq` (their equivalents, of course), none of which even have data to be mutated. – Daniel C. Sobral Feb 16 '12 at 17:08
  • @DanielC.Sobral: Writing to a mutable reference is contravariant, being a special case of function input. So while simple data types often allow covariance, types that allow contravariance tend to represent sinks or outputs, which are likely to be trivial unless there are some sort of side effects involved. A mutable reference with read and write operations is necessarily invariant, though. – C. A. McCann Feb 16 '12 at 18:06
  • @Daniel Contravariance related to mutability can be seen in the [with function](http://hackage.haskell.org/packages/archive/snap/0.7/doc/html/Snap-Snaplet.html#v:with) provided by the Snap Framework's Snaplet API. It was designed precisely to allow hierarchies of "mutable" state to be manipulated easily. – mightybyte Feb 16 '12 at 18:09
  • I never noticed how `Num` and `Integral` were dual like that! – J. Abrahamson Jul 19 '13 at 05:04
  • @C.A.McCann Bit of a confusion of terminology here. You say that *"types that allow contravariance tend to represent sinks or outputs"* - like function inputs. The use of *"sinks or outputs"* and *"function inputs"* seems like an oxymoron. From what I understand, function inputs are contravariant and function outputs are covariant. I believe that you are trying to say that function inputs are like sinks into the function. Hence it is an output into the function (which means that it's an input). Am I correct? Sorry, I am just terribly confused. – Aadit M Shah Dec 27 '14 at 09:03
  • @AaditMShah: You are correct, on both the concepts and it being an unfortunate choice of words on my part. :] When calling a function its arguments are outputs, but to the body of the function they're inputs. The issue here is *caller's vs. callee's perspective*, and switching between them inverts all variances. Above, I used "inputs" to mean "parameters" while otherwise talking from the caller's perspective, hence the confusion. The same applies (with opposite variance) to function return values, which are effectively "outputs into the caller's continuation". – C. A. McCann Jan 01 '15 at 00:10

There are no "sub-types" in Haskell, so covariance and contravariance don't make any sense.

In Scala, you have e.g. Option[+A] with the subtypes Some[+A] and None. You have to provide the covariance annotation + to say that an Option[Foo] is an Option[Bar] if Foo extends Bar. Because of the presence of sub-types, this annotation is necessary.

In Haskell, there are no sub-types. The equivalent of Option in Haskell, called Maybe, has this definition:

data Maybe a = Nothing | Just a

The type variable a can only ever be one type, so no further information about it is necessary.
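
To make the contrast concrete, here is a minimal Haskell sketch (Foo, Bar, and fooToBar are invented names): with no subtype relation between Foo and Bar, turning a Maybe Foo into a Maybe Bar is an explicit conversion via fmap rather than something a variance annotation grants you.

data Bar = Bar
data Foo = Foo              -- in Scala you might write: class Foo extends Bar

fooToBar :: Foo -> Bar      -- the "upcast" is just an ordinary function
fooToBar Foo = Bar

widen :: Maybe Foo -> Maybe Bar
widen = fmap fooToBar       -- no implicit coercion from Maybe Foo to Maybe Bar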

– dflemstr

As mentioned, Haskell does not have subtypes. However, if you're looking at typeclasses, it may not be clear how they work without subtyping.

Typeclasses specify predicates on types, not types themselves. So when a typeclass has a superclass (e.g. class Eq a => Ord a), that doesn't mean instances are subtypes, because only the predicates are inherited, not the types themselves.
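
A minimal sketch of what the superclass relation does give you (Version is an invented example type): an Ord constraint also makes Eq's methods available, but this relates the two predicates, not any pair of types.

newtype Version = Version Int

instance Eq Version where
  Version a == Version b = a == b

instance Ord Version where
  compare (Version a) (Version b) = compare a b

-- The Ord constraint implies Eq via the superclass, so (/=) is usable here,
-- but this is a relation between constraints (predicates), not between types.
newer :: Ord a => a -> a -> Bool
newer x y = x /= y && x > y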

Also, co-, contra-, and in-variance mean different things in different fields of math (see Wikipedia). For example, the terms covariant and contravariant are used for functors in category theory (which in turn are used in Haskell), but there the terms mean something completely different. The term invariant is used in a lot of places as well.

– Hans