86

I am using Spark 1.3.1 (PySpark) and I have generated a table using a SQL query. I now have an object that is a DataFrame. I want to export this DataFrame object (I have called it "table") to a csv file so I can manipulate it and plot the columns. How do I export the DataFrame "table" to a csv file?

Thanks!

PyRsquared

5 Answers

204

If the data frame fits in driver memory and you want to save it to the local file system, you can convert the Spark DataFrame to a local Pandas DataFrame using the toPandas method and then simply use to_csv:

df.toPandas().to_csv('mycsv.csv')
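
Since this is plain Pandas at that point, to_csv accepts the usual Pandas options; for example, to drop the index column or to write a pipe-delimited file instead of a comma-delimited one (a minimal sketch using standard Pandas parameters):

# index=False omits the row-index column; sep='|' makes the file pipe-delimited
df.toPandas().to_csv('mycsv.csv', index=False, sep='|')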

Otherwise you can use spark-csv:

  • Spark 1.3

    df.save('mycsv.csv', 'com.databricks.spark.csv')
    
  • Spark 1.4+

    df.write.format('com.databricks.spark.csv').save('mycsv.csv')
    

In Spark 2.0+ you can use csv data source directly:

df.write.csv('mycsv.csv')
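
The csv writer also accepts options directly; a minimal sketch (header, sep and mode are standard DataFrameWriter.csv parameters in Spark 2.0+):

# write a header row, use '|' as the delimiter, and overwrite any existing output
df.write.csv('mycsv.csv', header=True, sep='|', mode='overwrite')
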
zero323
  • Super answer. For the first option, if I want to write to a pipe-delimited file rather than comma-delimited CSV, is this possible? – Brian Waters Jul 19 '17 at 11:16
  • If you have spark dataframes you can use `df.write.csv('/tmp/lookatme/')` and that will drop a set of csv files in `/tmp/lookatme`. Using spark is significantly faster than serializing it in pandas. The only drawback is that you'll end up with a set of csvs instead of a single one, and if the destination tool doesn't know how to concatenate them you will need to do it yourself. – Txangel Jul 19 '17 at 16:39
  • What a big deal it is to get a csv out of spark. Something interesting about that first solution is that `to_csv` works without needing to import Pandas. `.toPandas` is part of Spark; maybe it implicitly imports it. – cardamom Sep 14 '17 at 01:12
  • You should be able to use `df.coalesce(1).write.csv('mycsv.csv')` if you insist on having a single output file. – MichaelChirico Feb 21 '18 at 03:31
  • @Txangel thanks for your answer. However, when I use that it runs without any error but I can't find *any* csv created in the target location. Any thoughts? – Rotail Jun 13 '19 at 17:14
  • @Rotail [That's because it is not how this works in general](https://stackoverflow.com/a/51603898/10938362). – user10938362 Jun 14 '19 at 22:37
  • This is for an RDD dataframe right? As opposed to an SQL dataframe? I think the author wanted something for the latter, which I do not see in this solution. – spacedustpi Dec 09 '19 at 18:14
  • Using `df.write.csv('mycsv.csv')` exports the csv to the hdfs environment. How can I get it in my local environment? – Tracy Jun 10 '20 at 14:57
  • @Tracy exact same question that I've been having. Have you found a solution to this? – etjk May 12 '21 at 15:16
36

For Apache Spark 2+, to save the dataframe into a single csv file, use the following command:

query.repartition(1).write.csv("cc_out.csv", sep='|')

Here 1 indicates that only one partition of the csv is needed; you can change it according to your requirements.
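
Note that cc_out.csv above ends up as a directory containing a single part-*.csv file, not a bare file. A minimal sketch of writing and then reading the result back, assuming a Spark 2.x SparkSession named `spark` (the name df2 is just a placeholder):

# repartition(1) forces all rows into one partition, so Spark writes a single part file
query.repartition(1).write.csv("cc_out.csv", sep='|', header=True)

# Spark reads the whole directory back as one dataset
df2 = spark.read.csv("cc_out.csv", sep='|', header=True)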

Hafiz Muhammad Shafiq
  • As indicated here: https://spark.apache.org/docs/2.2.0/api/python/pyspark.html#pyspark.RDD.repartition it is recommended to use coalesce() instead of repartition() to increase performance ("If you are decreasing the number of partitions in this RDD, consider using coalesce, which can avoid performing a shuffle.") – Seastar Nov 28 '18 at 15:34
  • @Seastar: While coalescing might have advantages in several use cases, your comment does not apply in this special case. If you want to have a .csv in your hdfs (or whatever), you will usually want one file and not dozens of files spread across your cluster (the whole sense of doing `repartition(1)`). You need to shuffle the data for this either way, so coalescing will not help at all in the bigger picture. – Markus Jun 24 '20 at 17:17
19

If you cannot use spark-csv, you can do the following:

df.rdd.map(lambda x: ",".join(map(str, x))).coalesce(1).saveAsTextFile("file.csv")

If you need to handle strings with linebreaks or commas, that will not work. Use this:

import csv
import cStringIO  # Python 2 only; see the Python 3 variant below

def row2csv(row):
    # csv.writer takes care of quoting fields with embedded commas or linebreaks
    buffer = cStringIO.StringIO()
    writer = csv.writer(buffer)
    writer.writerow([str(s).encode("utf-8") for s in row])
    buffer.seek(0)
    return buffer.read().strip()

df.rdd.map(row2csv).coalesce(1).saveAsTextFile("file.csv")
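
The snippet above is Python 2 (cStringIO was removed in Python 3). A Python 3 equivalent, assuming the same DataFrame df, might look like this:

import csv
import io

def row2csv(row):
    # io.StringIO replaces cStringIO; str is already unicode in Python 3
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow([str(s) for s in row])
    return buffer.getvalue().strip()

df.rdd.map(row2csv).coalesce(1).saveAsTextFile("file.csv")
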
jbochi
7

You need to repartition the DataFrame into a single partition and then define the format, path and other parameters for the file in Unix file-system format, and here you go:

df.repartition(1).write.format('com.databricks.spark.csv').save("/path/to/file/myfile.csv", header='true')

Read more about the repartition function and the save function.

However, repartition is a costly function and toPandas() is the worst. Try using .coalesce(1) instead of .repartition(1) in the previous syntax for better performance.
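
For example, keeping the same spark-csv format and options, only the partitioning call changes:

# coalesce(1) merges partitions into one without a full shuffle
df.coalesce(1).write.format('com.databricks.spark.csv').save("/path/to/file/myfile.csv", header='true')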

Read more on repartition vs coalesce functions.

Gazal Patel
3

How about this (if you don't want a one-liner)?

for row in df.collect():
    d = row.asDict()
    s = "%d\t%s\t%s\n" % (d["int_column"], d["string_column"], d["string_column"])
    f.write(s)

f is an open file handle. Also, the separator is a TAB character, but it's easy to change it to whatever you want.
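
A self-contained sketch of the same idea; the column names and the output path here are hypothetical placeholders:

# collect() pulls every row to the driver, so this only works for small DataFrames
with open("/tmp/table.tsv", "w") as f:
    for row in df.collect():
        d = row.asDict()
        # adjust the keys and the format string to match your schema
        f.write("%d\t%s\t%s\n" % (d["int_column"], d["string_column"], d["other_column"]))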

Matei Florescu