Code:
import sparkSession.sqlContext.implicits._
val table_df = Seq((1, 20, 1), (2, 200, 2), (3, 222, 3), (4, 2123, 4), (5, 2321, 5)).toDF("ID", "Weight", "ID")
table_df.show(false)
Input:
+---+------+---+
|ID |Weight|ID |
+---+------+---+
|1 |20 |1 |
|2 |200 |2 |
|3 |222 |3 |
|4 |2123 |4 |
|5 |2321 |5 |
+---+------+---+
Expected Output:
+---+------+
|ID |Weight|
+---+------+
|1 |20 |
|2 |200 |
|3 |222 |
|4 |2123 |
|5 |2321 |
+---+------+
I am using drop in the following way:
table_df.drop("ID").show(false)
This drops both "ID" columns, since drop removes every column whose name matches. How can I drop only the duplicated second "ID" column?
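One approach that should work here: because `Dataset.drop` removes every column matching the given name, first give the columns unique names positionally with `toDF`, then drop the duplicate by its new name. A minimal sketch (the temporary name `ID_dup` is an arbitrary placeholder I chose, not anything from the original):

```scala
// Sketch, assuming the same 3-column DataFrame built above.
// toDF reassigns names by position, so the two "ID" columns become distinguishable.
val renamed = table_df.toDF("ID", "Weight", "ID_dup") // "ID_dup" is a hypothetical temp name
val result  = renamed.drop("ID_dup")
result.show(false)
// result now has only the "ID" and "Weight" columns
```

Renaming by position sidesteps the ambiguity: referring to the duplicate via `table_df.col("ID")` would raise an ambiguous-reference error, whereas `toDF` never needs to resolve a name.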