
So I have a very large dataset (>1000 obs. of >15000 variables) and I want to replace all values >1 with 1 and leave the rest unchanged.

Example data:

data <- data.frame(a = 1:10, b = -1:-10, c = letters[1:10])

    a   b c
1   1  -1 a
2   2  -2 b
3   3  -3 c
4   4  -4 d
5   5  -5 e
6   6  -6 f
7   7  -7 g
8   8  -8 h
9   9  -9 i
10 10 -10 j

This is my dplyr approach:

data %>% mutate_if(is.numeric,
                   funs(case_when(. >= 1 ~ 1,
                                  TRUE ~ as.double(.))))
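
On the example data this should give a capped at 1, with b and c unchanged:

    a   b c
1   1  -1 a
2   1  -2 b
...
10  1 -10 j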

This takes ages on the original data. Any idea how to speed this up? data.table?

ceefel

2 Answers


This data.table solution seems to work, though to be fair it gives a warning:

library(data.table)
library(purrr)
num_cols <- colnames(data)[map_lgl(data, is.numeric)] # select only the numeric columns

data[, (num_cols) := lapply(.SD, function(x) {
  x[x > 1] <- 1
  x
}), .SDcols = num_cols]
data
#      a aa   b c
#  1:  1  1  -1 a
#  2:  1  1  -2 b
#  3:  1  1  -3 c
#  4:  1  1  -4 d
#  5:  1  1  -5 e
#  6:  1  1  -6 f
#  7:  1  1  -7 g
#  8:  1  1  -8 h
#  9:  1  1  -9 i
# 10:  1  1 -10 j

Warning message:
In `[.data.table`(data, , `:=`((num_cols), lapply(.SD, function(x) { :
  Supplied 2 columns to be assigned a list (length 3) of values (1 unused)

Data used:

data <- data.table(a = 1:10, aa = 1:10, b = -1:-10, c = letters[1:10])

Benchmark:

microbenchmark::microbenchmark(
  dplyr = data %>% mutate_if(is.numeric,
                             funs(case_when(. >= 1 ~ 1,
                                            TRUE ~ as.double(.)))),
  datatable = data[, (num_cols) := lapply(.SD, function(x) {
    x[x > 1] <- 1
    x
  })],
  times = 100
)

# Unit: microseconds
#       expr      min        lq      mean    median        uq       max neval
#      dplyr 1465.088 1644.7690 2012.3148 1775.4730 1989.1065 19992.621   100
#  datatable  372.282  399.0235  480.9405  440.0375  547.3055   831.398   100

Update: to be fair, Ronak Shah's solution (the `base` entry below) is faster:

microbenchmark::microbenchmark(
  dplyr = data %>% mutate_if(is.numeric,
                             funs(case_when(. >= 1 ~ 1,
                                            TRUE ~ as.double(.)))),
  datatable = data[, (num_cols) := lapply(.SD, function(x) {
    x[x > 1] <- 1
    x
  })],
  base = {
    dataframe <- as.data.frame(data)
    dataframe[dataframe > 1] <- 1
  },
  times = 100
)
# Unit: microseconds
#       expr      min        lq      mean   median        uq       max neval
#      dplyr 1782.384 1902.1210 2549.3977 1995.116 2099.9800 55628.570   100
#  datatable  394.817  422.7605  466.5329  441.690  512.9020   628.282   100
#       base  118.987  135.5120  160.1595  154.291  176.2255   300.469   100
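
One caveat with the base approach: `dataframe > 1` also compares the character column lexicographically ("a" > "1" is TRUE), so on the example data it overwrites c with "1" as well. A minimal sketch that restricts the replacement to the numeric columns (the helper name `num` is ours, not from the answer):

num <- sapply(dataframe, is.numeric)      # flag the numeric columns
dataframe[num][dataframe[num] > 1] <- 1   # cap only those columns at 1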
RLave
Great approach! I think the warning comes from the one non-numeric variable; it disappears when this variable is excluded. – ceefel Oct 16 '18 at 10:01
Re the warning, you should use `.SDcols=num_cols` (see `?data.table`). – Frank Oct 16 '18 at 15:45
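
Following up on Frank's comment: the benchmark snippets above omit `.SDcols`, so `.SD` still includes the non-numeric column and the number of target columns disagrees with the length of the supplied list, hence the warning. A sketch of the warning-free call:

data[, (num_cols) := lapply(.SD, function(x) {
  x[x > 1] <- 1
  x
}), .SDcols = num_cols]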

You can try:

# cap the numeric columns at 1; note that apply() returns a matrix
apply(data[, which(sapply(data, is.numeric))], 2,
      function(x) ifelse(x > 1, 1, x))

It leaves out the c column, but you can easily merge it back afterwards (a sketch below).
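
If you would rather do the replacement in place so c is never dropped, a minimal sketch using `lapply` (which keeps a data.frame, unlike the matrix `apply()` returns; the name `num` is ours):

num <- sapply(data, is.numeric)   # flag the numeric columns
data[num] <- lapply(data[num], function(x) ifelse(x > 1, 1, x))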

tmfmnk