89

Is it possible to save DataFrame in spark directly to Hive?

I have tried converting the DataFrame to an RDD, saving it as a text file, and then loading it in Hive. But I am wondering if I can directly save the dataframe to Hive.
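For reference, the roundabout route described above looks roughly like this (a sketch; the path, delimiter, and table name are hypothetical):

// Sketch of the workaround: dump the DataFrame as delimited text, then load it in Hive.
myDf.rdd.map(_.mkString(",")).saveAsTextFile("hdfs:///tmp/mydf_text")

// then, from Hive:
// LOAD DATA INPATH '/tmp/mydf_text' INTO TABLE mytable;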

mrsrinivas
Gourav

11 Answers

119

You can create an in-memory temporary table and store it in a Hive table using sqlContext.

Let's say your data frame is myDf. You can create a temporary table using:

myDf.createOrReplaceTempView("mytempTable") 

Then you can use a simple Hive statement to create the table and dump the data from your temp table:

sqlContext.sql("create table mytable as select * from mytempTable");
Vinay Kumar
  • This is not a valid HQL statement: cannot recognize input near 'select' '*' 'from' in create table statement; line 1 pos 16 – lazywiz Mar 11 '16 at 19:02
  • 2
    this got around the parquet read errors I was getting when using write.saveAsTable in spark 2.0 – ski_squaw Nov 30 '16 at 18:08
  • No problem. Btw, I just found out you can't use `PARTITIONED BY` clause in this statement. – chhantyal May 05 '17 at 07:59
  • 2
    Yes.However, we can use partition by on data frame before creating the temp table. @chhantyal – Vinay Kumar May 26 '17 at 06:10
  • Thanks for this answer. I've tried to do the same thing in my program as well. `dataframe.registerTempTable("RiskRecon_tmp") hiveContext.sql("CREATE TABLE IF NOT EXISTS RiskRecon_TOES as select * from RiskRecon_tmp")`. But I get this error: `java.lang.IllegalArgumentException: Wrong FS: file:/tmp/spark-a68a9fc7-50f3-43ae-ac06-8c07ba7253c2/scratch_hive_2017-07-12_07-12-57_948_8232393446428506434-1, expected: hdfs://nameservice1` at the line where I am passing the query. Do you have any idea regarding this? @VinayKumar – Hemanth Annavarapu Jul 12 '17 at 12:17
  • @HemanthAnnavarapu check this(https://community.hortonworks.com/content/supportkb/48759/javalangillegalargumentexception-wrong-fs-running.html) – Vinay Kumar Jul 12 '17 at 19:08
  • 1
    How were you able to mix and match the `temporary` table with the `hive` table? When doing `show tables` it only includes the `hive` tables for my `spark 2.3.0` installation – StephenBoesch Nov 23 '17 at 01:59
  • 1
    this temporary table will be saved to your hive context and doesn't belong to hive tables in any way. – Vinay Kumar Nov 23 '17 at 07:05
  • 1
    hi @VinayKumar why you say "If you are using saveAsTable(its more like persisting your dataframe) , you have to make sure that you have enough memory allocated to your spark application". could you explain this point? – enneppi Aug 31 '18 at 15:49
  • 1
    @enneppi its irrelevant. I have updated the answer now. – Vinay Kumar Apr 25 '19 at 05:00
  • @VinayKumar : I tried partitioning DF with partitionBy($column) before storing as temp table, but it did not create any partitions in HIVE. Could you please comment on this. Thnx – DrthSprk_ Apr 20 '20 at 17:12
  • Hi @VinayKumar how should I import sqlcontext so that I use it this way – Scope Apr 19 '21 at 14:41
28

Use DataFrameWriter.saveAsTable (df.write.saveAsTable(...)). See the Spark SQL and DataFrame Guide.
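A minimal sketch, assuming a Hive-enabled SparkSession or HiveContext and a hypothetical table name:

// Persist the DataFrame as a Hive table; "overwrite" replaces any existing contents.
df.write
  .mode("overwrite")
  .saveAsTable("mydb.mytable")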

Daniel Darabos
  • 4
    saveAsTable does not create Hive compatible tables. The best solution I found is of Vinay Kumar. – RChat Jul 29 '16 at 06:15
  • @Jacek: I have added this note myself, because I think my answer is wrong. I would delete it, except that it is accepted. Do you think the note is wrong? – Daniel Darabos Dec 27 '16 at 12:42
  • Yes. The note was wrong and that's why I removed it. "Please correct me if I'm wrong" applies here :) – Jacek Laskowski Dec 27 '16 at 13:05
  • @DanielDarabos, why "saveAsTable is deprecated and removed in Spark 2.0.0"? I see it is still quite supported and documented in Spark 2.1: http://spark.apache.org/docs/latest/sql-programming-guide.html#saving-to-persistent-tables – Tagar Jun 22 '17 at 09:38
  • I think it used to be `df.saveAsTable`. That is gone now, but there is `df.write.saveAsTable`. I don't have a Hive installation to test it against, but it does do something, so you're right. I have no clue. Okay, I'll remove the note! – Daniel Darabos Jun 22 '17 at 16:07
  • 1
    will this `df.write().saveAsTable(tableName)` also write streaming data into the table? – user1870400 Aug 21 '17 at 11:19
  • 1
    no you can't save streaming data with saveAsTable it's not even in the api – Brian Jul 29 '18 at 22:27
22

I don't see df.write.saveAsTable(...) deprecated in the Spark 2.0 documentation. It has worked for us on Amazon EMR. We were perfectly able to read data from S3 into a dataframe, process it, create a table from the result and read it with MicroStrategy. Vinay's answer also works, though.

Tshilidzi Mudau
Alex
  • 5
    Somebody flagged this answer as low-quality due to length and content. To be honest it probably would have been better as a comment. I guess it's been up for two years and some people have found it helpful so might be good to leave things as is? – serakfalcon Jan 22 '18 at 16:31
  • I agree, comment would have been the better choice. Lesson learned :-) – Alex Jan 24 '18 at 08:17
15

You need to have/create a HiveContext:

import org.apache.spark.sql.hive.HiveContext;

HiveContext sqlContext = new org.apache.spark.sql.hive.HiveContext(sc.sc());

Then directly save the dataframe, or select the columns to store as a Hive table.

Here df is the dataframe:

df.write().mode("overwrite").saveAsTable("schemaName.tableName");

or

df.select(df.col("col1"), df.col("col2"), df.col("col3")).write().mode("overwrite").saveAsTable("schemaName.tableName");

or

df.write().mode(SaveMode.Overwrite).saveAsTable("dbName.tableName");

SaveModes are Append/Ignore/Overwrite/ErrorIfExists

I have added here the definition for HiveContext from the Spark documentation:

In addition to the basic SQLContext, you can also create a HiveContext, which provides a superset of the functionality provided by the basic SQLContext. Additional features include the ability to write queries using the more complete HiveQL parser, access to Hive UDFs, and the ability to read data from Hive tables. To use a HiveContext, you do not need to have an existing Hive setup, and all of the data sources available to a SQLContext are still available. HiveContext is only packaged separately to avoid including all of Hive’s dependencies in the default Spark build.
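On Spark 2.x the same functionality comes from a SparkSession built with Hive support; a minimal sketch, assuming Spark 2.0+ and a hypothetical app name:

import org.apache.spark.sql.SparkSession

// enableHiveSupport() wires in the Hive metastore, superseding HiveContext.
val spark = SparkSession.builder()
  .appName("save-to-hive")
  .enableHiveSupport()
  .getOrCreate()

df.write.mode("overwrite").saveAsTable("schemaName.tableName")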


On Spark version 1.6.2, using "dbName.tableName" gives this error:

org.apache.spark.sql.AnalysisException: Specifying database name or other qualifiers are not allowed for temporary tables. If the table name has dots (.) in it, please quote the table name with backticks (`).

Jacek Laskowski
Anandkumar
  • Is the second command: 'df.select(df.col("col1"),df.col("col2"), df.col("col3")) .write().mode("overwrite").saveAsTable("schemaName.tableName");' requiring that the selected columns that you intend to overwrite already exist in the table? So you have the existing table and you only overwrite the existing columns 1,2,3 with the new data from your df in spark? is that interpreted right? – dieHellste Aug 15 '16 at 07:31
  • 3
    `df.write().mode...` needs to be changed to `df.write.mode...` – user 923227 Jul 26 '18 at 21:27
8

Saving to Hive is just a matter of using the write() method of your DataFrame:

df.write.saveAsTable(tableName)

See https://spark.apache.org/docs/2.1.0/api/java/org/apache/spark/sql/DataFrameWriter.html#saveAsTable(java.lang.String)

From Spark 2.2: use DataSet instead of DataFrame.
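A minimal sketch of the Dataset route (the case class and table name are hypothetical, and df's schema must match the case class):

// Convert the untyped DataFrame to a typed Dataset before saving.
case class Person(name: String, age: Int)

import spark.implicits._
val ds = df.as[Person]
ds.write.mode("overwrite").saveAsTable("mydb.people")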

Thomas Decaux
  • I seem to have an error which states Job aborted. I tried the following code pyspark_df.write.mode("overwrite").saveAsTable("InjuryTab2") – Sade Nov 20 '18 at 07:44
  • Hi! why this? `From Spark 2.2: use DataSet instead DataFrame.` – onofricamila May 06 '20 at 13:04
4

Sorry for writing late to the post, but I see no accepted answer.

df.write().saveAsTable will throw AnalysisException and is not Hive-table compatible.

Storing the DF with df.write().format("hive") should do the trick!

However, if that doesn't work, then going by the previous comments and answers, this is the best solution in my opinion (open to suggestions, though).

The best approach is to explicitly create the Hive table (including a PARTITIONED table):

def createHiveTable(): Unit = {
  spark.sql(s"CREATE TABLE $hive_table_name ($fields) " +
    s"PARTITIONED BY ($partition_column STRING) STORED AS $StorageType")
}

save the DF as a temp table:

df.createOrReplaceTempView(s"$tempTableName")

and insert into the PARTITIONED Hive table:

spark.sql("insert into table default.$hive_table_name PARTITION($partition_column) select * from $tempTableName")
spark.sql("select * from default.$hive_table_name").show(1000,false)

Of course, the LAST COLUMN in the DF will be the PARTITION COLUMN, so create the Hive table accordingly!

Please comment whether it works or not!


--UPDATE--

import org.apache.spark.sql.SaveMode

df.write
  .partitionBy(partition_column)
  .format("hive")
  .mode(SaveMode.Append)
  .saveAsTable(new_table_name_to_be_created_in_hive)  // Table should not exist OR should be a PARTITIONED table in Hive
DrthSprk_
2

For Hive external tables I use this function in PySpark:

import re

def save_table(sparkSession, dataframe, database, table_name, save_format="PARQUET"):
    print("Saving result in {}.{}".format(database, table_name))
    output_schema = "," \
        .join(["{} {}".format(x.name.lower(), x.dataType) for x in list(dataframe.schema)]) \
        .replace("StringType", "STRING") \
        .replace("IntegerType", "INT") \
        .replace("DateType", "DATE") \
        .replace("LongType", "INT") \
        .replace("TimestampType", "INT") \
        .replace("BooleanType", "BOOLEAN") \
        .replace("FloatType", "FLOAT")\
        .replace("DoubleType","FLOAT")
    output_schema = re.sub(r'DecimalType[(][0-9]+,[0-9]+[)]', 'FLOAT', output_schema)

    sparkSession.sql("DROP TABLE IF EXISTS {}.{}".format(database, table_name))

    query = "CREATE EXTERNAL TABLE IF NOT EXISTS {}.{} ({}) STORED AS {} LOCATION '/user/hive/{}/{}'" \
        .format(database, table_name, output_schema, save_format, database, table_name)
    sparkSession.sql(query)
    dataframe.write.insertInto('{}.{}'.format(database, table_name),overwrite = True)
Shadowtrooper
1

Here is a PySpark version to create a Hive table from a parquet file. You may have generated Parquet files using an inferred schema and now want to push the definition to the Hive metastore. You can also push the definition to systems like AWS Glue or AWS Athena, not just to the Hive metastore. Here I am using spark.sql to push/create a permanent table.

# Location where my parquet files are present.
df = spark.read.parquet("s3://my-location/data/")

buf = []
buf.append('CREATE EXTERNAL TABLE test123 (')
keyanddatatypes = df.dtypes
sizeof = len(keyanddatatypes)
print("size----------", sizeof)
count = 1
for eachvalue in keyanddatatypes:
    print(count, sizeof, eachvalue)
    # the last column gets no trailing comma
    if count == sizeof:
        total = str(eachvalue[0]) + ' ' + str(eachvalue[1])
    else:
        total = str(eachvalue[0]) + ' ' + str(eachvalue[1]) + ','
    buf.append(total)
    count = count + 1

buf.append(' )')
buf.append(' STORED AS parquet ')
buf.append("LOCATION ")
buf.append("'s3://my-location/data/'")
## partition by pt
tabledef = ''.join(buf)

print("---------print definition ---------")
print(tabledef)
## create a table using spark.sql. Assuming you are using spark 2.1+
spark.sql(tabledef)
kartik
1

In my case this works fine:

from pyspark_llap import HiveWarehouseSession
hive = HiveWarehouseSession.session(spark).build()
hive.setDatabase("DatabaseName")
df = spark.read.format("csv").option("header", True).load("/user/csvlocation.csv")
df.write.format(HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR).option("table", <tablename>).save()

Done!!

To read the data back, say from a table named "Employee":

hive.executeQuery("select * from Employee").show()

For more details use this URL: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/integrating-hive/content/hive-read-write-operations.html

MD Rijwan
0

You could use the Hortonworks spark-llap library like this:

import com.hortonworks.hwc.HiveWarehouseSession

df.write
  .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
  .mode("append")
  .option("table", "myDatabase.myTable")
  .save()
mike
-1

If you want to create a Hive table (which does not exist) from a dataframe (sometimes it fails to create with DataFrameWriter.saveAsTable), StructType.toDDL will help in listing the columns as a string.

val df = ...

val schemaStr = df.schema.toDDL // This gives the columns
spark.sql(s"""create table hive_table ( ${schemaStr})""")

//Now write the dataframe to the table
df.write.saveAsTable("hive_table")

hive_table will be created in the default database since we did not provide any database at spark.sql(). stg.hive_table can be used to create hive_table in the stg database.
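For example, a hedged sketch of the same pattern targeting a hypothetical stg database:

// Hypothetical: create and fill the table in the "stg" database instead of "default".
spark.sql(s"""create table stg.hive_table ( ${schemaStr})""")
df.write.saveAsTable("stg.hive_table")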

mrsrinivas