0

I am running a SQL query on top of a view and trying to save the query output to a CSV file using PySpark. While validating the file, I found that some records are not in their respective columns. Column names: PARNT_CASE_NMBR|CASE_NMBR. Is this a problem with PySpark? If yes, how can I solve it?

I am using the line below to save the query output to a file:

df.repartition(1).write.format('com.databricks.spark.csv').mode('overwrite').save(row['TargetPath'], quote='',sep='|',header='True',nullValue=None)

Output:

CASE_ID|PARNT_CASE_ID|CREATED_DT|RECORD_TYPE|THREPTC_ID|CHANNEL_ID|SRC|SUB_SRC|ACCT_ID|ADDR_ID|PROD_ID|TERRITORY_ID|CC_LOCTN_GRP_ID|**PARNT_CASE_NMBR|CASE_NMBR**|CATEGORY_ID|CTGRY|SUBCAT_ID|SUB_CTGRY|CATG_ID_MIR|CATG_MIR|SUBCAT_ID_MIR|SUBCAT_MIR|FIRST_NAME|LAST_NM|CUSTOMER_TYPE|PRIM_RPRTR_TYPE|ACCT_TYPE|ACCT_SUB_TYPE|QSTN|STLMNT_RSN|STLMNT_TYPE|USR_ERR_ISSUE|RPRTR_NM|RRA_NO|JJ_SECRY_RPTR_EMAIL|JJ_SECRY_RPTR_OTHR_TYP|QUICK_CD_TITLE|QUICK_CODE|QUICK_SUB_CTGRY|QUICKCDE_CATGRY|RPRTR_ID|SEC_FIRST_NAME|SEC_LAST_NM|SECNDARY_REPORTER_TYPE|ADDR_LINE_1|ADDR_LINE_2|CITY|STATE|ZIP|CNTRY|SEC_ADDR|SEC_ADDR_LINE_1|SEC_ADDR_LINE_2|SEC_CITY|SEC_STATE|ZIP_SEC|CNTRY_SEC|LAST_MOD_DT|SECTOR_CRTD_BY|SECTOR_RQSTD_BY|RQST_TYP|SRVY_FLG|SRCH_TERMS|CONSULT_FLG|ESCLTN_DT|ESCALTD|ESCLTD_TO|MIA_OWNR|CLOSED|CLOSED_DT|SPCL_EVENT|CMMNTS|CASE_REASON|OVRD_REASON|KRA_MET_FLG|THRPTC_AREA|LABEL_TYPE|HLTH_POLCY_DESCN_MKR|PQC_NMBR|J_J_FIRST_NTFYD|CCC_NOTFD_DT|PRCTC_NM|SENT_TO_PRTNR|AE_PQC_RPTD|SAFETY_SRVCS_PRTNR_NO|DCMNT_ITEM_NMBR|PROD_QTY|ITEM_NM|JJ_PURGE|NRRTV|RPRTR_IS_PATIENT|FLFLMNT_DT|TRGT_AUDI|SLTTN|AE_PARTNER_CASE|RELEVANT_DATA_AVAIL|RQST_DT|EMAIL|LOT_NMBR|ANON|RPRT_TYPE|FLLW_UP_SQNC_NMBR|DEL_FLAG|DSTRBTN_LIST|CREATE_AE|CREATE_PQ|CASE_NOTES|OWNER_ID|APPROVER|COMPONENT_IDS|CREATED_BY_ID|STATUS|SUB_STATUS|EMPLY_NAME|IM_INSERT_DT
5000B00000WmmZNQAZ|5000B00000WmkZaQAJ|2016-08-29 18:58:50.0|012U0000000QIzaIAG||61|Phone|Xarelto Carepath|001U000001jrip7IAA|a01U000001W484XIAR|1645||51|**00671609|00672036**|2301||1680||2301||1680||Information Redacted|Information Redacted|Consumer|Employee|Consumer|Consumer|Is there any information on using lavender or eucalyptus in a diffuser while on XARELTO?|||||a3tU0000000CpBhIAK|||||||||||Information Redacted||Frisco|TX|75034|USA|||||||USA|2017-09-12 16:50:17.0|MIC|Janssen Pharmaceuticals||N||||N|||Y|2016-08-29 20:11:34.0|||||N|Cardiovascular||N|||||||||||N||N|2016-08-29 18:59:59.0|Consumer|Mr.|||2016-08-29 00:00:00.0|Information Redacted||N|Initial||N||N|N||005U0000004Lo5jIAC|005U0000004Lo5jIAC|XAR-CV-Cons PI;XARELTO - Package Insert Information [and Medication Guide];Xarelto##ENC-010330-11;Xarelto (rivaroxaban) Prescribing Information - May 2016 ;Xarelto|005U0000004Lo5jIAC|Closed|Fulfilled|0010B00001rneZ5QAI|1579623290022000

Wrong output in file:

5000B00000XdxnQQAR|||||288|||||||20||Janssen Pharmaceutical Canada 19 Green Belt Dr., Toronto, ON,  M3C1L9 Office: +1 (416) 382 5182|||766|Cancelled|2077|Janssen Canada|1680|||N|N|Initial|005U0000003U3PYIA0||N|||||2016-09-28 00:00:00.0||N||Clinical Use|||||||||**00694588**|||||||||||N|||Canada|2016-10-03 20:56:44.0||N||annvbertrand@gmail.com||||||N|||N|0||Email|||N||CCC|||||||||Comparative Use|Y|Y||N|||Ms.|Y|N||2016-09-27 13:30:28.0||Comparison of Bioequivalent MPH ER Preparations|HCP|"From: Azadtalab, Maysam [JOICA Non-J&J] Sent: Monday, September 26, 2016 1:42 PM To: MedInfo Canada [JOICA] Subject: Request for studies  Hi MedInfo,  Could you please send the following information:  Request:  - Difference between brand and generic - OROS tech with Graph - Fallu study - Van Stralen study  Product:  CONCERTA  Reporter Type:  Pharmacist  Salutation:  Ms.  First Name:   Ann  Last Name:  Bertrand  Province:  ON  Email/Address/Fax:  annvbertrand@gmail.com<mailto:annvbertrand@gmail.com>  Language:  English  Fulfillment Mode:  Email  Employee Name:  Maysam Azadtalab   Thanks Maysam Azadtalab Healthcare Relationship Specialist - Concerta(r)||||||On|N|1579623290022000

5000B00000XdxnQQAR|||||25|||||||20||Janssen Pharmaceutical Canada 19 Green Belt Dr., Toronto, ON,  M3C1L9 Office: +1 (416) 382 5182|||766|Cancelled|2077|Janssen Canada|1680|||N|N|Initial|005U0000003U3PYIA0||N|||||2016-09-28 00:00:00.0||N||Clinical Use|||||||||**00694588**|||||||||||N|||Canada|2016-10-03 20:56:44.0||N||annvbertrand@gmail.com||||||N|||N|0||Email|||N||CCC|||||||||Comparative Use|Y|Y||N|||Ms.|Y|N||2016-09-27 13:30:28.0||Comparison of Bioequivalent MPH ER Preparations|HCP|"From: Azadtalab, Maysam [JOICA Non-J&J] Sent: Monday, September 26, 2016 1:42 PM To: MedInfo Canada [JOICA] Subject: Request for studies  Hi MedInfo,  Could you please send the following information:  Request:  - Difference between brand and generic - OROS tech with Graph - Fallu study - Van Stralen study  Product:  CONCERTA  Reporter Type:  Pharmacist  Salutation:  Ms.  First Name:   Ann  Last Name:  Bertrand  Province:  ON  Email/Address/Fax:  annvbertrand@gmail.com<mailto:annvbertrand@gmail.com>  Language:  English  Fulfillment Mode:  Email  Employee Name:  Maysam Azadtalab   Thanks Maysam Azadtalab Healthcare Relationship Specialist - Concerta(r)||||||On|N|1579623290022000
shivi
  • 15
  • 6

2 Answers

0

If you are using Spark 2.0+, you can write CSV directly; refer to How to export a table dataframe in PySpark to csv?

df.repartition(1).write.csv(row['TargetPath'])

Doing a select on the DataFrame and then writing may solve your problem.

df.repartition(1).select('CASE_ID','PARNT_CASE_ID'...'IM_INSERT_DT').write.format('com.databricks.spark.csv').mode('overwrite').save(row['TargetPath'], quote='',sep='|',header='True',nullValue=None)
Vishnu P N
  • 385
  • 5
  • 16
0

You just need to use the same options and functions that you would use when creating a DataFrame from a CSV file.

from pyspark.sql import SparkSession
spark=SparkSession.builder.appName("SparkTest").master("local[*]").getOrCreate()

sampleDF=spark.createDataFrame([("manoj","gwalior","mp"),("kumar","delhi","delhi"),("dhakad","chennai","tn")],["name","city","state"])

sampleDF.coalesce(1).write.mode("overwrite").format("csv").option("delimiter","|").option("header","true").save("temp")

Sample output:

name|city|state
manoj|gwalior|mp
kumar|delhi|delhi
dhakad|chennai|tn

If you want to change the sequence of the columns, select the columns in that order.

Manoj Kumar Dhakad
  • 1,684
  • 1
  • 10
  • 21