How to fix java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List to field of type scala.collection.Seq?
This error has been the hardest to trace, and I am not sure what is going on. A Spark cluster is running on my local machine, so the entire Spark cluster lives under a single host at 127.0.0.1, and I am running in standalone mode:
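The driver setup is not shown in the question; a minimal sketch of the assumed configuration for a standalone master on the local host (the app name, master URL, port, and Cassandra host below are my guesses, not taken from the question) would look something like:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Hypothetical context setup: standalone master and Cassandra both on 127.0.0.1.
// All names and property values here are assumptions for illustration.
SparkConf conf = new SparkConf()
        .setAppName("batchprocessing")
        .setMaster("spark://127.0.0.1:7077")
        .set("spark.cassandra.connection.host", "127.0.0.1");
JavaSparkContext sc = new JavaSparkContext(conf);
```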
JavaPairRDD<byte[], Iterable<CassandraRow>> cassandraRowsRDD = javaFunctions(sc)
        .cassandraTable("test", "hello")
        .select("rowkey", "col1", "col2", "col3")
        .spanBy(new Function<CassandraRow, byte[]>() {
            @Override
            public byte[] call(CassandraRow v1) {
                return v1.getBytes("rowkey").array();
            }
        }, byte[].class);
Iterable<Tuple2<byte[], Iterable<CassandraRow>>> listOfTuples = cassandraRowsRDD.collect(); // ERROR HAPPENS HERE
Tuple2<byte[], Iterable<CassandraRow>> tuple = listOfTuples.iterator().next();
byte[] partitionKey = tuple._1();
for (CassandraRow cassandraRow : tuple._2()) {
    System.out.println("************START************");
    System.out.println(new String(partitionKey));
    System.out.println("************END************");
}
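Separate from the crash itself: `new String(partitionKey)` in the loop above decodes raw key bytes with the platform default charset, which garbles keys that are not text. A small plain-Java helper (no Spark involved; the class and method names are my own) prints arbitrary key bytes as hex instead:

```java
import java.nio.charset.StandardCharsets;

public class HexDump {
    // Render raw key bytes as lowercase hex so non-printable
    // bytes cannot corrupt the console output.
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] key = "rowkey-1".getBytes(StandardCharsets.UTF_8);
        System.out.println(toHex(key)); // prints 726f776b65792d31
    }
}
```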
This error has been the hardest to trace. It clearly happens at cassandraRowsRDD.collect(), and I do not know why:
16/10/09 23:36:21 ERROR Executor: Exception in task 2.3 in stage 0.0 (TID 21)
java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2006)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Here are the versions I am using:
Scala code runner version 2.11.8 // when I run scala -version or even ./spark-shell
compile group: 'org.apache.spark' name: 'spark-core_2.11' version: '2.0.0'
compile group: 'org.apache.spark' name: 'spark-streaming_2.11' version: '2.0.0'
compile group: 'org.apache.spark' name: 'spark-sql_2.11' version: '2.0.0'
compile group: 'com.datastax.spark' name: 'spark-cassandra-connector_2.11' version: '2.0.0-M3'
My gradle file is shown below, after introducing something called "provided", which apparently does not exist out of the box. Google said to create such a configuration, though, so my build.gradle looks like this:
group 'com.company'
version '1.0-SNAPSHOT'

apply plugin: 'java'
apply plugin: 'idea'

repositories {
    mavenCentral()
    mavenLocal()
}

configurations {
    provided
}

sourceSets {
    main {
        compileClasspath += configurations.provided
        test.compileClasspath += configurations.provided
        test.runtimeClasspath += configurations.provided
    }
}

idea {
    module {
        scopes.PROVIDED.plus += [configurations.provided]
    }
}

dependencies {
    compile 'org.slf4j:slf4j-log4j12:1.7.12'
    provided group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.0.0'
    provided group: 'org.apache.spark', name: 'spark-streaming_2.11', version: '2.0.0'
    provided group: 'org.apache.spark', name: 'spark-sql_2.11', version: '2.0.0'
    provided group: 'com.datastax.spark', name: 'spark-cassandra-connector_2.11', version: '2.0.0-M3'
}

jar {
    from { configurations.provided.collect { it.isDirectory() ? it : zipTree(it) } }
    // with jar
    from sourceSets.test.output
    manifest {
        attributes 'Main-Class': "com.company.batchprocessing.Hello"
    }
    exclude 'META-INF/*.RSA', 'META-INF/*.SF', 'META-INF/*.DSA'
    zip64 true
}