45

Background

Several SQL dialects (I mostly use PostgreSQL) have a function called coalesce, which returns, for each row, the first non-NULL element across its arguments. This can be very efficient when tables contain a lot of NULL values.

I run into the same need in a lot of scenarios in R as well, when dealing with not-so-structured data that contains a lot of NAs.

I have written a naive implementation myself, but it is ridiculously slow.

coalesce <- function(...) {
  # bind the vectors into a matrix and take the first non-NA entry per row
  apply(cbind(...), 1, function(x) {
    x[which(!is.na(x))[1]]
  })
}

Example

a <- c(1,  2,  NA, 4, NA)
b <- c(NA, NA, NA, 5, 6)
c <- c(7,  8,  NA, 9, 10)
coalesce(a,b,c)
# [1]  1  2 NA  4  6

Question

Is there any efficient way to implement coalesce in R?

while
  • How do you define "ridiculously slow"? (The call you describe takes ~100 microseconds on my machine). How many vectors do you have, and how long are they? (Incidentally, one way that speeds it up *slightly* (~5%) is to do `x[!is.na(x)][1]` instead of `x[which(!is.na(x))[1]]`) – David Robinson Oct 08 '13 at 17:26
  • The main difficulty here IMO is vectorization doesn't help much in solving this problem; a lot of elements are needlessly probed by `which` and `is.na`, and `cbind` + `apply` will make copies of the data and will be slow for large vectors. I would recommend an `Rcpp` solution (and may try to cook something up later). – Kevin Ushey Oct 08 '13 at 17:38
  • 1
    Yeah, I have quite large vectors. The problem I'm working on atm uses vectors of length `608247`. Which is a bit longer than the example. – while Oct 08 '13 at 17:43
  • coalesce() also fails if all parameters are NULL. This is a quick fix: `"%??%" – Kevin Jin Jul 26 '15 at 01:39
  • See answers to [my question](http://stackoverflow.com/questions/37714533/merge-join-prioritizing-non-missing-values) for suggestions for incorporating coalesce functions into merges/joins. – Richard Border Jun 09 '16 at 20:11

8 Answers

43

On my machine, using Reduce gets a 5x performance improvement:

coalesce2 <- function(...) {
  Reduce(function(x, y) {
    # fill NA positions in the accumulated result from the next vector
    i <- which(is.na(x))
    x[i] <- y[i]
    x
  }, list(...))
}
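
Reduce() folds the arguments pairwise, so the call above first fills b into a's NA positions and then fills c into whatever NAs remain. A quick sketch of that equivalence, where fill is just an illustrative name for the anonymous function above:

fill <- function(x, y) { i <- which(is.na(x)); x[i] <- y[i]; x }
identical(coalesce2(a, b, c), fill(fill(a, b), c))
# [1] TRUE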

> microbenchmark(coalesce(a,b,c),coalesce2(a,b,c))
Unit: microseconds
               expr    min       lq   median       uq     max neval
  coalesce(a, b, c) 97.669 100.7950 102.0120 103.0505 243.438   100
 coalesce2(a, b, c) 19.601  21.4055  22.8835  23.8315  45.419   100
mrip
  • Thanks! This is tremendously faster! Second time today Reduce saves my ass. Tried to do something like this myself but I couldn't figure it out. Awesome. – while Oct 08 '13 at 17:47
  • Works great if vectors passed are the same length. I had problems with this because I was trying coalesce2(NULL, c(3, 2)), so is.na() gave me warnings whereas coalesce() just repeats NA twice for the first column. – Kevin Jin Jul 26 '15 at 01:34
  • 2
    This fails if either is `NULL` instead of just `NA`. Recommend use of `is.null`. – r2evans Apr 05 '19 at 01:26
23

Looks like coalesce1 is still available

coalesce1 <- function(...) {
    ans <- ..1
    for (elt in list(...)[-1]) {
        i <- is.na(ans)
        ans[i] <- elt[i]
    }
    ans
}

which is faster still (but it is more or less a hand rewrite of Reduce, so less general)

> identical(coalesce(a, b, c), coalesce1(a, b, c))
[1] TRUE
> microbenchmark(coalesce(a,b,c), coalesce1(a, b, c), coalesce2(a,b,c))
Unit: microseconds
               expr     min       lq   median       uq     max neval
  coalesce(a, b, c) 336.266 341.6385 344.7320 355.4935 538.348   100
 coalesce1(a, b, c)   8.287   9.4110  10.9515  12.1295  20.940   100
 coalesce2(a, b, c)  37.711  40.1615  42.0885  45.1705  67.258   100

Or for larger data compare

coalesce1a <- function(...) {
    ans <- ..1
    for (elt in list(...)[-1]) {
        i <- which(is.na(ans))
        ans[i] <- elt[i]
    }
    ans
}

showing that which() can sometimes be effective, even though it implies a second pass through the index.

> aa <- sample(a, 100000, TRUE)
> bb <- sample(b, 100000, TRUE)
> cc <- sample(c, 100000, TRUE)
> microbenchmark(coalesce1(aa, bb, cc),
+                coalesce1a(aa, bb, cc),
+                coalesce2(aa,bb,cc), times=10)
Unit: milliseconds
                   expr       min        lq    median        uq       max neval
  coalesce1(aa, bb, cc) 11.110024 11.137963 11.145723 11.212907 11.270533    10
 coalesce1a(aa, bb, cc)  2.906067  2.953266  2.962729  2.971761  3.452251    10
  coalesce2(aa, bb, cc)  3.080842  3.115607  3.139484  3.166642  3.198977    10
Martin Morgan
  • 1
    Interesting that it's so much faster than Reduce. My guess would be that it's because the loop version avoids the overhead of a function call (or possibly some parameter checking), which might be a bottleneck when the inputs are small. That would explain why the times are much closer with large inputs. – mrip Oct 10 '13 at 13:31
  • @MartinMorgan +1! Maybe you should add a test on the number of arguments, something like `if (length(list(...)) > 0) {}`, to avoid an error when you call `coalesce1()` without arguments. – agstudy Oct 11 '13 at 12:58
18

Using the dplyr package:

library(dplyr)
coalesce(a, b, c)
# [1]  1  2 NA  4  6
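
dplyr's coalesce() also recycles a length-one default, which is handy for filling remaining NAs with a constant:

coalesce(a, 0)
# [1] 1 2 0 4 0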

Benchmark; it is not as fast as the accepted solution:

coalesce2 <- function(...) {
  Reduce(function(x, y) {
    i <- which(is.na(x))
    x[i] <- y[i]
    x
  }, list(...))
}

microbenchmark::microbenchmark(
  coalesce(a, b, c),
  coalesce2(a, b, c)
)

# Unit: microseconds
#                expr    min     lq     mean median      uq     max neval cld
#   coalesce(a, b, c) 21.951 24.518 27.28264 25.515 26.9405 126.293   100   b
#  coalesce2(a, b, c)  7.127  8.553  9.68731  9.123  9.6930  27.368   100  a 

But on a larger dataset, it is comparable:

aa <- sample(a, 100000, TRUE)
bb <- sample(b, 100000, TRUE)
cc <- sample(c, 100000, TRUE)

microbenchmark::microbenchmark(
  coalesce(aa, bb, cc),
  coalesce2(aa, bb, cc))

# Unit: milliseconds
#                   expr      min       lq     mean   median       uq      max neval cld
#   coalesce(aa, bb, cc) 1.708511 1.837368 5.468123 3.268492 3.511241 96.99766   100   a
#  coalesce2(aa, bb, cc) 1.474171 1.516506 3.312153 1.957104 3.253240 91.05223   100   a
zx8754
15

From data.table >= 1.12.3 you can use fcoalesce.

library(data.table)
fcoalesce(a, b, c)
# [1]  1  2 NA  4  6

fcoalesce can also take "a single plain list, data.table or data.frame". Thus, if the vectors above were columns in a data.frame (or a data.table), we could simply supply the name of the data set:

d = data.frame(a, b, c)
# or d = data.table(a, b, c) 
fcoalesce(d)
# [1]  1  2 NA  4  6

For more info, including a benchmark, see NEWS item #18 for development version 1.12.3.

Henrik
9

I have a ready-to-use implementation called coalesce.na in my misc package. It seems to be competitive, but it is not the fastest. It also works for vectors of different lengths, and it has special treatment for vectors of length one:

                    expr        min          lq      median          uq         max neval
    coalesce(aa, bb, cc) 990.060402 1030.708466 1067.000698 1083.301986 1280.734389    10
   coalesce1(aa, bb, cc)  11.356584   11.448455   11.804239   12.507659   14.922052    10
  coalesce1a(aa, bb, cc)   2.739395    2.786594    2.852942    3.312728    5.529927    10
   coalesce2(aa, bb, cc)   2.929364    3.041345    3.593424    3.868032    7.838552    10
 coalesce.na(aa, bb, cc)   4.640552    4.691107    4.858385    4.973895    5.676463    10

Here's the code:

coalesce.na <- function(x, ...) {
  x.len <- length(x)
  ly <- list(...)
  for (y in ly) {
    y.len <- length(y)
    if (y.len == 1) {
      # a length-one fallback fills every remaining NA
      x[is.na(x)] <- y
    } else {
      if (x.len %% y.len != 0)
        warning('object length is not a multiple of first object length')
      # recycle y to the length of x when filling the NA positions
      pos <- which(is.na(x))
      x[pos] <- y[(pos - 1) %% y.len + 1]
    }
  }
  x
}
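
For instance, a length-one fallback goes through the scalar branch above and fills every remaining NA (using a and b from the question):

coalesce.na(a, b, 0)
# [1] 1 2 0 4 6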

Of course, as Kevin pointed out, an Rcpp solution might be faster by orders of magnitude.
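
A minimal sketch of what such an Rcpp version might look like (coalesce_cpp is a hypothetical name; this assumes the Rcpp package with a working compiler toolchain, and equal-length numeric vectors):

library(Rcpp)
cppFunction('
NumericVector coalesce_cpp(List vecs) {
  // start from a copy of the first vector so the input is not modified
  NumericVector ans = clone(as<NumericVector>(vecs[0]));
  for (int j = 1; j < vecs.size(); j++) {
    NumericVector y = vecs[j];
    for (int i = 0; i < ans.size(); i++) {
      // fill positions that are still NA from the next vector
      if (NumericVector::is_na(ans[i])) ans[i] = y[i];
    }
  }
  return ans;
}')
coalesce_cpp(list(a, b, c))
# [1]  1  2 NA  4  6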

krlmlr
4

A very simple solution is to use the ifelse function from the base package:

coalesce3 <- function(x, y) {
  ifelse(is.na(x), y, x)
}

Although it appears to be slower than coalesce2 above:

test <- function(a, b, func) {
  for (i in 1:10000) {
    func(a, b)
  }
}

system.time(test(a, b, coalesce2))
user  system elapsed 
0.11    0.00    0.10 

system.time(test(a, b, coalesce3))
user  system elapsed 
0.16    0.00    0.15 

You can use Reduce to make it work for an arbitrary number of vectors:

coalesce4 <- function(...) {
  Reduce(coalesce3, list(...))
}
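
For example, with the vectors from the question:

coalesce4(a, b, c)
# [1]  1  2 NA  4  6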
sdgfsdh
  • 1
    Sure. But this does not work for more than two vectors. What if you have an arbitrary amount of vectors? – while Sep 09 '15 at 14:14
  • Yes that is a limitation. Quick fix is to use `Reduce`. – sdgfsdh Sep 09 '15 at 14:21
  • 1
    interestingly, an explicit if-else does not convert dates to numbers, but `ifelse()` does. So your function does not work on dates: `coalesce4(NULL, lubridate::ymd('2019-05-01'))` returns `18017` – Ufos May 15 '19 at 13:02
1

Here is my solution:

coalesce <- function(x) {
  # return the first non-NA element of x
  y <- head(x[!is.na(x)], 1)
  return(y)
}

It returns the first value which is not NA, and it works with data.table, for example if you want to use coalesce on a few columns whose names are stored in a vector of strings:

column_names <- c("col1", "col2", "col3")

how to use:

ranking[, coalesce_column := coalesce( mget(column_names) ), by = 1:nrow(ranking)]
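
As a quick sanity check, the helper itself on a plain vector:

coalesce(c(NA, 5, 2))
# [1] 5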

Taz
1

Another apply method, with mapply:

mapply(function(...) {temp <- c(...); temp[!is.na(temp)][1]}, a, b, c)
[1]  1  2 NA  4  6

This selects the first non-NA value if more than one exists. The last non-missing element could be selected using tail.
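
For instance, a sketch of that variant; rev() is used here instead of tail() so that an all-NA row still yields NA rather than a zero-length result (which would stop mapply from simplifying to a vector):

mapply(function(...) {temp <- c(...); rev(temp[!is.na(temp)])[1]}, a, b, c)
# [1]  7  8 NA  9 10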

Maybe a bit more speed could be squeezed out of this alternative using the bare-bones .mapply function, which looks a little different.

unlist(.mapply(function(...) {temp <- c(...); temp[!is.na(temp)][1]},
               dots=list(a, b, c), MoreArgs=NULL))
[1]  1  2 NA  4  6

.mapply differs in important ways from its non-dotted cousin:

  • it returns a list (like Map), so it must be wrapped in a function such as unlist or c to return a vector.
  • the set of arguments to be fed in parallel to the function in FUN must be given as a list to the dots argument.
  • finally, unlike mapply, the MoreArgs argument has no default, so it must explicitly be given NULL.
lmo