15

I want to create a hive table using my Spark dataframe's schema. How can I do that?

For fixed columns, I can use:

val CreateTable_query = "Create Table my_table(a string, b string, c double)"
sparksession.sql(CreateTable_query) 

But I have many columns in my dataframe, so is there a way to automatically generate such a query?

lserlohn
  • Possible duplicate of [Hadoop Hive unable to move source to destination](http://stackoverflow.com/questions/30483296/hadoop-hive-unable-to-move-source-to-destination) – Knows Not Much Feb 16 '17 at 03:33
  • Create HiveContext and then run : `val CreateTable_query = hiveContext.sql("Create Table myTable as select * from mytempTable")` This will solve your issue – Ashish Singh Feb 16 '17 at 05:50

5 Answers

24

Assuming you are using Spark 2.1.0 or later and my_DF is your dataframe:

//needed imports
import java.util.Arrays;
import java.util.stream.Collectors;
import org.apache.spark.sql.types.StructType;

//get the schema as a string of comma-separated "field datatype" pairs
StructType my_schema = my_DF.schema();
String columns = Arrays.stream(my_schema.fields())
                       .map(field -> field.name() + " " + field.dataType().typeName())
                       .collect(Collectors.joining(","));

//drop the table if it already exists
spark.sql("drop table if exists my_table");
//create the table using the dataframe schema
spark.sql("create table my_table(" + columns + ") "
        + "row format delimited fields terminated by '|' location '/my/hdfs/location'");

//write the dataframe data to the hdfs location backing the created Hive table
my_DF.write()
     .format("com.databricks.spark.csv")
     .option("delimiter", "|")
     .mode("overwrite")
     .save("/my/hdfs/location");

The other method uses a temp table:

my_DF.createOrReplaceTempView("my_temp_table");
spark.sql("drop table if exists my_table");
spark.sql("create table my_table as select * from my_temp_table");
Thomas Decaux
somnathchakrabarti
  • why do we need to create temp tables? is there any benefit over `my_DF.write.saveAsTable(...)`? – nefo_x Mar 12 '18 at 21:56
  • https://stackoverflow.com/questions/30664008/how-to-save-dataframe-directly-to-hive TL;DR saveAsTable doesn't create a Hive-compatible table. The question asks for a Hive table specifically, so... – Robert Beatty Mar 26 '18 at 15:44
  • I would change field.dataType().typeName() to field.dataType().sql(); it handles complex/array types better – Slava Dec 19 '19 at 13:13
  • **Scala** translation `val tableColumns = df.schema.filter(_.name != partCol).map(field => field.name + " " + field.dataType.typeName).mkString(",")` – Nav Feb 03 '21 at 11:22
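Putting the last two comments together, a minimal Scala sketch of the column-list construction that uses `dataType.sql` instead of `typeName` (assuming `df` is your DataFrame and `spark` is a Hive-enabled SparkSession; `my_table` is just a placeholder):

    // Build "name TYPE" pairs from the DataFrame schema;
    // dataType.sql renders complex types (arrays, structs, maps) as valid DDL.
    val tableColumns = df.schema
      .map(field => s"${field.name} ${field.dataType.sql}")
      .mkString(", ")

    spark.sql(s"create table if not exists my_table ($tableColumns)")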
9

As per your question, it looks like you want to create a table in Hive using your DataFrame's schema. But since you say the DataFrame has many columns, there are two options:

  • 1st is to create the Hive table directly from the DataFrame.
  • 2nd is to take the schema of the DataFrame and create the table in Hive from it.

Consider this code:

package hive.example

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.Row
import org.apache.spark.sql.SparkSession

object checkDFSchema extends App {
  val cc = new SparkConf;
  val sc = new SparkContext(cc)
  val sparkSession = SparkSession.builder().enableHiveSupport().getOrCreate()
  //First option for creating hive table through dataframe 
  val DF = sparkSession.sql("select * from salary")
  DF.createOrReplaceTempView("tempTable")
  sparkSession.sql("Create table yourtable as select * form tempTable")
  //Second option for creating hive table from schema
  val oldDFF = sparkSession.sql("select * from salary")
  //Generate the schema out of dataframe  
  val schema = oldDFF.schema
  //Generate RDD of you data 
  val rowRDD = sc.parallelize(Seq(Row(100, "a", 123)))
  //Creating new DF from data and schema 
  val newDFwithSchema = sparkSession.createDataFrame(rowRDD, schema)
  newDFwithSchema.createOrReplaceTempView("tempTable")
  sparkSession.sql("create table FinalTable AS select * from tempTable")
}
SilverNak
Nilesh Shinde
  • `temp` view? This appears to be creating a temporary table - not in `hive` .. ? Please show that the table was actually created *in hive* - e.g. in which hive database – StephenBoesch Nov 22 '17 at 16:50
  • This "Create table yourtable as select * from tempTable" command will create a table in Hive with "yourtable" as the table name. Here I haven't mentioned any db name, so it will be created in the default database. – Nilesh Shinde Nov 23 '17 at 11:47
  • I had done some additional research: and it seems your approach *should* be correct. The reason for my skepticism is: *it is not working for me*. I will have to create a separate question about **how to intermix in-memory (temp) and hive tables** – StephenBoesch Nov 23 '17 at 13:33
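To address the doubt raised in these comments, here is a minimal sketch (assuming a Hive-enabled SparkSession) that makes the distinction visible: the temp view is session-scoped only, while the CTAS statement persists a real table to the Hive metastore (the default database unless qualified):

    // The temp view lives only in this Spark session; it is not a Hive object.
    DF.createOrReplaceTempView("tempTable")

    // CTAS persists a table into the Hive metastore (default db when unqualified).
    sparkSession.sql("create table default.yourtable as select * from tempTable")

    // Verify where it landed: 'yourtable' should be listed as a permanent table.
    sparkSession.sql("show tables in default").show()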
6

Another way is to use the methods available on StructType: sql, simpleString, treeString, etc.

You can create DDLs from a DataFrame's schema, and you can create a DataFrame's schema from your DDLs.

Here is one example (up to Spark 2.3):

    // Setup Sample Test Table to create Dataframe from
    spark.sql(""" drop database hive_test cascade""")
    spark.sql(""" create database hive_test""")
    spark.sql("use hive_test")
    spark.sql("""CREATE TABLE hive_test.department(
    department_id int ,
    department_name string
    )    
    """)
    spark.sql("""
    INSERT INTO hive_test.department values ("101","Oncology")    
    """)

    spark.sql("SELECT * FROM hive_test.department").show()

/***************************************************************/

Now I have a DataFrame to play with. In real cases you'd use DataFrame readers to create a DataFrame from files/databases. Let's use its schema to create DDLs.

    // Create DDL from the Spark DataFrame schema using the simpleString function

    // Regex to remove unwanted characters
    val sqlrgx = """(struct<)|(>)|(:)""".r

    // Create the DDL sql string and remove unwanted characters
    val sqlString = sqlrgx.replaceAllIn(spark.table("hive_test.department").schema.simpleString, " ")

    // Create the table with sqlString
    spark.sql(s"create table hive_test.department2( $sqlString )")

From Spark 2.4 onward, you can use the fromDDL and toDDL methods on StructType:

val fddl = """
      department_id int ,
      department_name string,
      business_unit string
      """


    // Easily create StructType from DDL String using fromDDL
    val schema3: StructType = org.apache.spark.sql.types.StructType.fromDDL(fddl)


    // Create DDL String from StructType using toDDL
    val tddl = schema3.toDDL

    spark.sql(s"drop table if exists hive_test.department2 purge")

   // Create Table using string tddl
    spark.sql(s"""create table hive_test.department2 ( $tddl )""")

    // Test by inserting sample rows and selecting
    spark.sql("""
    INSERT INTO hive_test.department2 values ("101","Oncology","MDACC Texas")    
    """)
    spark.table("hive_test.department2").show()
    spark.sql(s"drop table hive_test.department2")

ValaravausBlack
4

Here is a PySpark version to create a Hive table from a Parquet file. You may have generated Parquet files using an inferred schema and now want to push the definition to the Hive metastore. You can also push the definition to a system like AWS Glue or AWS Athena, not just to the Hive metastore. Here I am using spark.sql to create a permanent table.

    # Location where my parquet files are present.
    df = spark.read.parquet("s3://my-location/data/")

    buf = []
    buf.append('CREATE EXTERNAL TABLE test123 (')
    keyanddatatypes = df.dtypes
    sizeof = len(df.dtypes)
    print("size----------", sizeof)
    count = 1
    for eachvalue in keyanddatatypes:
        print(count, sizeof, eachvalue)
        if count == sizeof:
            # last column: no trailing comma
            total = str(eachvalue[0]) + ' ' + str(eachvalue[1])
        else:
            total = str(eachvalue[0]) + ' ' + str(eachvalue[1]) + ','
        buf.append(total)
        count = count + 1

    buf.append(' )')
    buf.append(' STORED as parquet ')
    buf.append(" LOCATION ")
    buf.append("'s3://my-location/data/'")
    ## partition by pt
    tabledef = ''.join(buf)

    print("---------print definition ---------")
    print(tabledef)
    ## create a table using spark.sql. Assuming you are using spark 2.1+
    spark.sql(tabledef)
kartik
4

From Spark 2.4 onward you can use the function dataframe.schema.toDDL to get the column names and types (even for nested structs).
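A minimal Scala sketch of that, assuming `df` is your DataFrame and `spark` is a Hive-enabled SparkSession (the table name is just a placeholder):

    // toDDL renders the schema as DDL, e.g. "`id` INT,`name` STRING", including nested structs
    val ddl = df.schema.toDDL

    spark.sql(s"create table if not exists my_table ($ddl)")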

Aparee