
I am getting heap space errors even on fairly small datasets, and I can be reasonably sure that I'm not running out of system memory. For example, consider a dataset containing about 20M rows and 9 columns that takes up 1GB on disk. I am playing with it on a Google Compute Engine node with 30GB of memory.
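
For anyone wanting to reproduce the setup, a synthetic stand-in of roughly that shape (purely illustrative, not the real data) could be generated like this:

library(tidyverse)

# Purely illustrative stand-in: ~20M rows, 9 columns, with a repeated key
# column so that group_by(my_key) has plenty of groups to build.
n <- 2e7
df <- tibble(
  my_key = sample(1e6, n, replace = TRUE),
  x1 = runif(n), x2 = runif(n), x3 = runif(n), x4 = runif(n),
  x5 = runif(n), x6 = runif(n), x7 = runif(n), x8 = runif(n)
)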

Let's say that I have this data in a dataframe called df. The following works fine, albeit somewhat slowly:

library(tidyverse) 
uniques <- df %>%
    group_by(my_key) %>%
    summarise() %>%
    ungroup()

The following throws java.lang.OutOfMemoryError: Java heap space.

library(tidyverse)
library(sparklyr)
sc <- spark_connect(master = "local")

df_tbl <- copy_to(sc, df)

unique_spark <- df_tbl %>%
  group_by(my_key) %>%
  summarise() %>%
  ungroup() %>%
  collect()

I tried this suggestion for increasing the heap space available to Spark, but the problem persists. Watching the machine's state in htop, I see that total memory usage never goes over about 10GB.

library(tidyverse)
library(sparklyr)

config <- spark_config()
config[["sparklyr.shell.conf"]] <- "spark.driver.extraJavaOptions=-XX:MaxHeapSize=24G"

sc <- spark_connect(master = "local", config = config)

df_tbl <- copy_to(sc, df)

unique_spark <- df_tbl %>%
  group_by(my_key) %>%
  summarise() %>%
  ungroup() %>%
  collect()

Finally, per Sandeep's comment, I tried lowering MaxHeapSize to 4G. (Is MaxHeapSize per virtual worker or for the entire Spark local instance?) I still got the heap space error, and again, I did not use much of the system's memory.
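
Concretely, that attempt was just the earlier configuration with the value lowered, i.e. something along these lines:

config <- spark_config()
config[["sparklyr.shell.conf"]] <- "spark.driver.extraJavaOptions=-XX:MaxHeapSize=4G"
sc <- spark_connect(master = "local", config = config)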

  • Reduce `MaxHeapSize=24G` to `MaxHeapSize=4G`, since you only have 1GB of data; it doesn't need 24GB of memory. Even 4GB is enough for this. – Sandeep Singh Dec 29 '16 at 17:45
  • Thanks; it still gets the error. I clarified the text of the question to address this. – David Bruce Borenstein Dec 29 '16 at 18:01
  • Can you also post the spark-submit command you are using to run this job? – Sandeep Singh Dec 29 '16 at 18:02
  • **From the spark documentation:** `spark.driver.extraJavaOptions`: A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the `--driver-java-options` command line option or in your default properties file. Are you doing it that way? – Sandeep Singh Dec 29 '16 at 18:13
  • Do you know how to get the spark-submit command from sparklyr? I can get the log, but not the actual job submitted to the cluster. – David Bruce Borenstein Dec 29 '16 at 18:16
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/131830/discussion-between-david-bruce-borenstein-and-sandeep-singh). – David Bruce Borenstein Dec 29 '16 at 18:21
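
Following up on Sandeep's point about `--driver-java-options`: if I read the sparklyr configuration docs correctly, entries under `sparklyr.shell.*` are passed to spark-submit as command-line flags, so the documented route might look roughly like this (untested):

library(sparklyr)

# Untested sketch: sparklyr.shell.* config entries become spark-submit flags,
# so this key should correspond to --driver-java-options on the command line.
config <- spark_config()
config[["sparklyr.shell.driver-java-options"]] <- "-XX:MaxHeapSize=4G"
sc <- spark_connect(master = "local", config = config)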

1 Answer


In looking into Sandeep's suggestions, I started digging into the sparklyr deployment notes. These mention that the driver might run out of memory at this stage and suggest tweaking some settings to correct it.

These settings did not solve the problem, at least not at first. However, isolating the problem to the collect stage allowed me to find similar questions about SparkR on Stack Overflow.
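
(One way to do that isolation, as a sketch: run the same pipeline but keep the result in Spark, for example by counting the grouped rows with sparklyr's sdf_nrow(). If that succeeds while collect() throws, the failure is in transferring results to R, not in the aggregation itself.)

# Sketch: the same aggregation, but the result stays in Spark.
df_tbl %>%
  group_by(my_key) %>%
  summarise() %>%
  ungroup() %>%
  sdf_nrow()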

Those SparkR answers depended in part on setting the environment variable SPARK_MEM. Putting it all together, I got it to work as follows:

library(tidyverse)
library(sparklyr)

# Set memory allocation for whole local Spark instance
Sys.setenv("SPARK_MEM" = "13g")

# Set driver and executor memory allocations
config <- spark_config()
config$spark.driver.memory <- "4G"
config$spark.executor.memory <- "1G"

# Connect to Spark instance
sc <- spark_connect(master = "local", config = config)

# Load data into Spark
df_tbl <- copy_to(sc, df)

# Summarise data
uniques <- df_tbl %>%
  group_by(my_key) %>%
  summarise() %>%
  ungroup() %>%
  collect()
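
(As an optional sanity check, the driver memory setting the session actually picked up can be read back from the running SparkContext; this is an untested sketch using sparklyr's low-level invoke() API.)

# Untested sketch: read the effective value of spark.driver.memory back
# from the running SparkContext's configuration.
sc %>%
  spark_context() %>%
  invoke("getConf") %>%
  invoke("get", "spark.driver.memory")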