I have a join between cleanDF and sentiment_df using array_contains that works fine (from solution 61687997), and now I need to include a new column ('Year') from cleanDF in the Result df.
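For reference, here is a minimal sketch of the two input DataFrames (toy rows I made up just to show the shapes; the real cleanDF has more columns, Year among them):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical stand-ins for my real DataFrames
cleanDF = spark.createDataFrame(
    [("a0U3Y00000p1IzjUAE", 2020, ["buen", "servicio"])],
    ["ID", "Year", "MeaningfulWords"])
sentiment_df = spark.createDataFrame(
    [("buen", 6.39), ("servicio", 1.82)],
    ["word", "score"])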
This is the join:
from pyspark.sql.functions import expr, first, collect_list, mean, col

Result = cleanDF.join(sentiment_df, expr("array_contains(MeaningfulWords, word)"), how='left')\
    .groupBy("ID")\
    .agg(first("MeaningfulWords").alias("MeaningfulWords"),
         collect_list("score").alias("ScoreList"),
         mean("score").alias("MeanScore"))
This is the structure of Result:
Result.show(5)
#+------------------+--------------------+--------------------+-----------------+
#|                ID|     MeaningfulWords|           ScoreList|        MeanScore|
#+------------------+--------------------+--------------------+-----------------+
#|a0U3Y00000p1IzjUAE|[buen, servicio, ...|        [6.39, 1.82]|            4.105|
#|a0U3Y00000p1KhGUAU|              [mala]|              [2.02]|             2.02|
#|a0U3Y00000p1M1oUAE|[cliente, content...|        [6.39, 8.41]|              7.4|
#|a0U3Y00000p1OnTUAU|[positivo, trato,...|               [8.2]|             8.19|
#|a0U3Y00000p1R5DUAU|[momento, sido, g...|               [6.0]|              6.0|
#+------------------+--------------------+--------------------+-----------------+
I added a .select (following answer 36132322) to include the Year column from cleanDF:
Result1 = cleanDF.alias('a').join(sentiment_df.alias('b'), expr("array_contains(a.MeaningfulWords, b.word)"), how='left')\
    .select(col('a.ID'), col('a.Year'), col('a.MeaningfulWords'), col('b.word'), col('b.score'))\
    .groupBy("ID")\
    .agg(first("a.MeaningfulWords").alias("MeaningfulWords"),
         collect_list("score").alias("ScoreList"),
         mean("score").alias("MeanScore"))
But Result1 has the same columns as Result:
display(Result1)
#DataFrame[ID: string, MeaningfulWords: array<string>, ScoreList: array<double>, MeanScore: double]
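My understanding (which may be wrong) is that groupBy/agg only returns the grouping keys plus the aggregated expressions, so the Year column from the select is discarded. A toy sketch of that behavior (hypothetical DataFrame, not my real data):

# Year is dropped because it is neither a grouping key nor aggregated
toy = spark.createDataFrame([(1, 2020, "x"), (1, 2020, "y")], ["ID", "Year", "w"])
print(toy.groupBy("ID").agg(first("w").alias("w")).columns)
# ['ID', 'w']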
That is why I then tried to include Year in the .agg function:
Result2 = cleanDF.join(sentiment_df, expr("array_contains(MeaningfulWords, word)"), how='left')\
    .groupBy("ID")\
    .agg(first("MeaningfulWords").alias("MeaningfulWords"),
         first("Year").alias("Year"),
         collect_list("score").alias("ScoreList"),
         mean("score").alias("MeanScore"))
Result2.show()
Py4JJavaError: An error occurred while calling o3205.showString.
: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:146)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:144)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:140)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:140)
at org.apache.spark.sql.execution.joins.BroadcastNestedLoopJoinExec.doExecute(BroadcastNestedLoopJoinExec.scala:343)
...
...
...
Caused by: org.apache.spark.SparkException: Failed to execute user defined function($anonfun$createTransformFunc$1: (string) => array<string>)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF.eval(ScalaUDF.scala:1066)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF$$anonfun$2.apply(ScalaUDF.scala:109)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF$$anonfun$2.apply(ScalaUDF.scala:107)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF.eval(ScalaUDF.scala:1063)
...
...
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 411.0 failed 1 times, most recent failure: Lost task 2.0 in stage 411.0 (TID 9719, localhost, executor driver): org.apache.spark.SparkException: Failed to execute user defined function($anonfun$5: (array<string>) => array<string>)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF.eval(ScalaUDF.scala:1066)
at org.apache.spark.sql.catalyst.expressions.SimpleHigherOrderFunction$class.eval(higherOrderFunctions.scala:208)
at org.apache.spark.sql.catalyst.expressions.ArrayFilter.eval(higherOrderFunctions.scala:296)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
...
...
... 20 more
Caused by: java.lang.NullPointerException
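For clarity, the output I am ultimately after is simply Result with one extra Year column, something like this (hand-written; the Year values here are made up):

#+------------------+----+--------------------+--------------------+-----------------+
#|                ID|Year|     MeaningfulWords|           ScoreList|        MeanScore|
#+------------------+----+--------------------+--------------------+-----------------+
#|a0U3Y00000p1IzjUAE|2020|[buen, servicio, ...|        [6.39, 1.82]|            4.105|
#|a0U3Y00000p1KhGUAU|2019|              [mala]|              [2.02]|             2.02|
#+------------------+----+--------------------+--------------------+-----------------+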
I'm using PySpark on Spark 2.4.5.
Thanks in advance for your help.