85

Let's say I have a Spark DataFrame df1 with several columns (among which is the column id), and a DataFrame df2 with two columns, id and other.

Is there a way to replicate the following command

sqlContext.sql("SELECT df1.*, df2.other FROM df1 JOIN df2 ON df1.id = df2.id")

by using only pyspark functions such as join(), select() and the like?

I have to implement this join in a function and I don't want to be forced to have sqlContext as a function parameter.

Thanks!

11 Answers

78

Asterisk (*) works with an alias. Calling alias() attaches a name to the DataFrame, and that name, not the Python variable, is what string references like 'df1.*' resolve against, so aliasing df1 as 'df1' is not redundant. Ex:

from pyspark.sql.functions import *

df1 = df1.alias('df1')
df2 = df2.alias('df2')

df1.join(df2, df1.id == df2.id).select('df1.*')
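
To make it visible that the alias, not the variable name, is what matters, here is a quick sketch with deliberately different names (left and right are hypothetical):

left = df1.alias('left')    # the alias 'left' need not match the variable name
right = df2.alias('right')

left.join(right, left.id == right.id).select('left.*', 'right.other')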
  • perfect -- full solution; { df1.join(df2, df1.id == df2.id).select('df1.*', 'df2.other') } – Andre Odendaal Apr 18 '18 at 20:35
  • You wrote `df1 = df1.alias('df1')` and `df2 = df2.alias('df2')`. What is the purpose here? You are renaming `df1` as `df1`. Isn't this useless? – Sheldore Jan 31 '21 at 15:30
70

Not sure if the most efficient way, but this worked for me:

from pyspark.sql.functions import col

df1.alias('a').join(df2.alias('b'), col('b.id') == col('a.id')) \
   .select([col('a.' + xx) for xx in df1.columns] + [col('b.other1'), col('b.other2')])

The trick is in:

[col('a.' + xx) for xx in df1.columns] : all the columns of a (note df1.columns rather than a.columns; 'a' is only an alias string, not a Python variable)

[col('b.other1'), col('b.other2')] : some columns of b
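
If the columns of b to keep are themselves in a Python list, the same pattern generalizes. A minimal sketch, with a hypothetical list cols_b of df2 column names:

from pyspark.sql.functions import col

cols_b = ['other1', 'other2']  # hypothetical: the df2 columns to keep

df1.alias('a').join(df2.alias('b'), col('b.id') == col('a.id')) \
   .select([col('a.' + xx) for xx in df1.columns] + [col('b.' + c) for c in cols_b])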
  • In spark2, I had to change this to col('b.id') == col('a.id') (with two equals signs). Otherwise, it gives me a 'SyntaxError: keyword can't be an expression' exception – void Jul 29 '17 at 06:43
  • Hi, how can I pass multiple columns as a list, instead of individual cols like [col('b.other1'), col('b.other2')], for the df2 dataset? – Manu Sharma Jul 04 '20 at 05:32
40

Without using an alias:

df1.join(df2, df1.id == df2.id).select(df1["*"], df2["other"])
  • I notice that when joined dataframes have same-named columns, doing `df1["*"]` in the select method correctly gets the columns from that dataframe, even if `df2` has columns with some of the same names as `df1`. Would you mind explaining (or linking to docs on) how this works? – lampShadesDrifter Dec 23 '19 at 20:21
12

Here is a solution that does not require a SQL context, but maintains the metadata of a DataFrame.

a = sc.parallelize([['a', 'foo'], ['b', 'hem'], ['c', 'haw']]).toDF(['a_id', 'extra'])
b = sc.parallelize([['p1', 'a'], ['p2', 'b'], ['p3', 'c']]).toDF(["other", "b_id"])

c = a.join(b, a.a_id == b.b_id)

Then, c.show() yields:

+----+-----+-----+----+
|a_id|extra|other|b_id|
+----+-----+-----+----+
|   a|  foo|   p1|   a|
|   b|  hem|   p2|   b|
|   c|  haw|   p3|   c|
+----+-----+-----+----+
  • Well, the OP asked for selecting only a few cols, i.e. filtering; this answer keeps all the columns after the join. – Viv Apr 16 '19 at 14:45
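
As the comment notes, this keeps every column of both frames. To keep only a's columns plus other, a select can be chained; a minimal sketch reusing the a and b frames above:

c = a.join(b, a.a_id == b.b_id).select(a['*'], b['other'])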
5

I believe that this would be the easiest and most intuitive way:

final = (df1.alias('df1')
            .join(df2.alias('df2'), on=df1['id'] == df2['id'], how='inner')
            .select('df1.*', 'df2.other'))
4

Drop the duplicate b_id column after the join:

c = a.join(b, a.a_id == b.b_id).drop(b.b_id)
2

Here is a snippet that does an inner join, selects columns from both DataFrames, and aliases the duplicated Name column to a different name (DName):

emp_df = spark.read.csv('Employees.csv', header=True)
dept_df = spark.read.csv('dept.csv', header=True)

emp_dept_df = emp_df.join(dept_df, 'DeptID').select(emp_df['*'], dept_df['Name'].alias('DName'))
emp_df.show()
dept_df.show()
emp_dept_df.show()

Output for emp_df.show():

+---+---------+------+------+
| ID|     Name|Salary|DeptID|
+---+---------+------+------+
|  1|     John| 20000|     1|
|  2|    Rohit| 15000|     2|
|  3|    Parth| 14600|     3|
|  4|  Rishabh| 20500|     1|
|  5|    Daisy| 34000|     2|
|  6|    Annie| 23000|     1|
|  7| Sushmita| 50000|     3|
|  8| Kaivalya| 20000|     1|
|  9|    Varun| 70000|     3|
| 10|Shambhavi| 21500|     2|
| 11|  Johnson| 25500|     3|
| 12|     Riya| 17000|     2|
| 13|    Krish| 17000|     1|
| 14| Akanksha| 20000|     2|
| 15|   Rutuja| 21000|     3|
+---+---------+------+------+

Output for dept_df.show():
+------+----------+
|DeptID|      Name|
+------+----------+
|     1|     Sales|
|     2|Accounting|
|     3| Marketing|
+------+----------+

Join Output:
+---+---------+------+------+----------+
| ID|     Name|Salary|DeptID|     DName|
+---+---------+------+------+----------+
|  1|     John| 20000|     1|     Sales|
|  2|    Rohit| 15000|     2|Accounting|
|  3|    Parth| 14600|     3| Marketing|
|  4|  Rishabh| 20500|     1|     Sales|
|  5|    Daisy| 34000|     2|Accounting|
|  6|    Annie| 23000|     1|     Sales|
|  7| Sushmita| 50000|     3| Marketing|
|  8| Kaivalya| 20000|     1|     Sales|
|  9|    Varun| 70000|     3| Marketing|
| 10|Shambhavi| 21500|     2|Accounting|
| 11|  Johnson| 25500|     3| Marketing|
| 12|     Riya| 17000|     2|Accounting|
| 13|    Krish| 17000|     1|     Sales|
| 14| Akanksha| 20000|     2|Accounting|
| 15|   Rutuja| 21000|     3| Marketing|
+---+---------+------+------+----------+
1

A function to drop duplicate columns after joining:

def dropDupeDfCols(df):
    newcols = []
    dupcols = []

    # Keep the first occurrence of each column name; record the positions of duplicates.
    for i in range(len(df.columns)):
        if df.columns[i] not in newcols:
            newcols.append(df.columns[i])
        else:
            dupcols.append(i)

    # Rename all columns to their positional index so duplicates can be dropped unambiguously.
    df = df.toDF(*[str(i) for i in range(len(df.columns))])
    for dupcol in dupcols:
        df = df.drop(str(dupcol))

    return df.toDF(*newcols)
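
For example, applied to the question's frames, where both df1 and df2 have an id column, the joined result carries two id columns and the function keeps the first. A sketch assuming that setup:

joined = df1.join(df2, df1.id == df2.id)  # result contains two 'id' columns
deduped = dropDupeDfCols(joined)          # only the first 'id' survives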
0

You could just perform the join and afterwards select the wanted columns: https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=dataframe%20join#pyspark.sql.DataFrame.join
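
A minimal sketch of that approach, reusing the question's df1 and df2 (joining on the column name deduplicates the shared id):

result = df1.join(df2, on='id').select(df1['*'], df2['other'])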

  • My question is exactly how to select all columns from one data frame (without enumerating them one by one) and one column from the other – Francesco Sambo Mar 23 '16 at 13:51
0

I got an error: 'a not found' using the code as originally suggested:

from pyspark.sql.functions import col

df1.alias('a').join(df2.alias('b'), col('b.id') == col('a.id')).select([col('a.' + xx) for xx in a.columns] + [col('b.other1'), col('b.other2')])

I changed a.columns to df1.columns and it worked out.

0

I just dropped the columns I didn't need from df2 and joined:

sliced_df = df2.select(columns_of_interest)
df1.join(sliced_df, on=['id'], how='left')
Note that `id` must be included in `columns_of_interest`, since it is the join key.