
My Spark Scala code is:

val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row])

The CqlInputFormat class comes from Cassandra's source code. The equivalent Java code compiles and works, but the Scala version fails to build with these errors:

inferred type arguments [org.apache.hadoop.io.LongWritable,com.datastax.driver.core.Row,org.apache.cassandra.hadoop.cql3.CqlInputFormat] do not conform to method newAPIHadoopRDD's type parameter bounds [K,V,F <: org.apache.hadoop.mapreduce.InputFormat[K,V]]
[error]         val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);

[error] /home/project/past/experiments/query/SparkApp/src/main/scala/SparkReader.scala:46: type mismatch;
[error]  found   : Class[org.apache.cassandra.hadoop.cql3.CqlInputFormat](classOf[org.apache.cassandra.hadoop.cql3.CqlInputFormat])
[error]  required: Class[F]
[error]         val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);
[error]                                                      ^
[error] /home/project/past/experiments/query/SparkApp/src/main/scala/SparkReader.scala:46: type mismatch;
[error]  found   : Class[org.apache.hadoop.io.LongWritable](classOf[org.apache.hadoop.io.LongWritable])
[error]  required: Class[K]
[error]         val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);

[error] /home/project/past/experiments/query/SparkApp/src/main/scala/SparkReader.scala:46: type mismatch;
[error]  found   : Class[com.datastax.driver.core.Row](classOf[com.datastax.driver.core.Row])
[error]  required: Class[V]
[error]         val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);
[error] four errors found
[error] (compile:compileIncremental) Compilation failed

Any suggestions? Thank you.

asked by Jenny.D
    The question currently looks like a wall of noise. You probably should take a look at [Markdown Editing Help](https://stackoverflow.com/editing-help), and also [How to Ask](https://stackoverflow.com/help/how-to-ask). – Andrey Tyukin Sep 23 '18 at 13:02
  • @Andrey Tyukin Thanks for your suggestions. – Jenny.D Sep 24 '18 at 02:37

1 Answer

If you're using Spark, you should use the Spark Cassandra Connector instead of the Hadoop integration. And it's better to use DataFrames...

I recommend taking the DS320 course to learn more about Spark + Cassandra.
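A minimal sketch of reading a Cassandra table into a DataFrame via the connector (the host, keyspace, table names, and connector version below are placeholders, not taken from your setup):

```scala
// Assumes the spark-cassandra-connector package is on the classpath, e.g.
// spark-submit --packages com.datastax.spark:spark-cassandra-connector_2.12:3.4.1 ...
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("CassandraReader")
  .config("spark.cassandra.connection.host", "127.0.0.1") // placeholder host
  .getOrCreate()

// Read the table directly into a DataFrame -- no InputFormat or Writable classes involved
val df = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "my_keyspace", "table" -> "my_table")) // placeholder names
  .load()

df.printSchema()
df.show()
```

This avoids the newAPIHadoopRDD type-parameter problem entirely, and the connector also pushes filters down to Cassandra where possible.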

answered by Alex Ott