
I am getting the below error on my Ubuntu 18.04 machine while running a Sqoop import (Sqoop 1.4.7, Hadoop 3.1.3).

Command used: sqoop import --connect jdbc:mysql://localhost/myhadoop --username hiveuser --password xxxx --table employee --split-by --target-dir /employee2

Error:

2020-04-30 15:28:01,570 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
Thu Apr 30 15:28:01 IST 2020 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2020-04-30 15:28:01,727 INFO db.DBInputFormat: Using read commited transaction isolation
2020-04-30 15:28:01,736 INFO mapred.MapTask: Processing split: 1=1 AND 1=1
2020-04-30 15:28:01,772 INFO mapred.LocalJobRunner: map task executor complete.
2020-04-30 15:28:01,801 WARN mapred.LocalJobRunner: job_local1054959073_0001
java.lang.Exception: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class employee not found
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class employee not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2638)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getInputClass(DBConfiguration.java:403)
at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.createDBRecordReader(DataDrivenDBInputFormat.java:270)
at org.apache.sqoop.mapreduce.db.DBInputFormat.createRecordReader(DBInputFormat.java:266)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:527)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: Class employee not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2542)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2636)
... 12 more
2020-04-30 15:28:02,089 INFO mapreduce.Job: Job job_local1054959073_0001 running in uber mode : false
2020-04-30 15:28:02,094 INFO mapreduce.Job:  map 0% reduce 0%
2020-04-30 15:28:02,104 INFO mapreduce.Job: Job job_local1054959073_0001 failed with state FAILED due to: NA
2020-04-30 15:28:02,143 INFO mapreduce.Job: Counters: 0
2020-04-30 15:28:02,151 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2020-04-30 15:28:02,155 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 3.8645 seconds (0 bytes/sec)
2020-04-30 15:28:02,160 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2020-04-30 15:28:02,160 INFO mapreduce.ImportJobBase: Retrieved 0 records.
2020-04-30 15:28:02,160 ERROR tool.ImportTool: Import failed: Import job failed!

Please advise.

suneesh

1 Answer


You have to specify --bindir for the sqoop import. It can be any writable directory.

From the official documentation:

The import process compiles the source into .class and .jar files; these are ordinarily stored under /tmp. You can select an alternate target directory with --bindir. For example, --bindir /scratch.

sqoop import --connect jdbc:mysql://localhost/myhadoop --username hiveuser \
    --password xxxx --table employee --split-by <split-column> \
    --target-dir /employee2 --bindir /tmp

(Note that --split-by also expects a column name as its argument; in your original command it is left empty, so supply a column, typically the table's primary key.)
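For context, a Sqoop import first generates an employee.java ORM class, compiles it into employee.class and employee.jar, and the MapReduce tasks then load that class at runtime; the ClassNotFoundException: Class employee not found above means the job could not find the compiled class on its classpath. If you want to check what Sqoop generated, the compiled artifacts normally sit under a hashed subdirectory of /tmp (a sketch; the exact path varies per run and per user):

# list the generated ORM class and jar for the employee table
ls /tmp/sqoop-$USER/compile/*/employee.*

With --bindir /tmp the .class and .jar files are written directly to /tmp instead, so you can easily verify after the import that they exist.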
Piyush Patel