Mockito mock throws ClassNotFoundException in a Spark application

I found that a mock object created with Mockito throws a ClassNotFoundException when used in Spark. Here is a minimal example:

import org.apache.spark.{SparkConf, SparkContext}
import org.mockito.{Matchers, Mockito}
import org.scalatest.FlatSpec
import org.scalatest.mockito.MockitoSugar

trait MyTrait {
  def myMethod(a: Int): Int
}

class MyTraitTest extends FlatSpec with MockitoSugar {
  "Mock" should "work in Spark" in {
    val m = mock[MyTrait](Mockito.withSettings().serializable())
    Mockito.when(m.myMethod(Matchers.any())).thenReturn(1)

    val conf = new SparkConf().setAppName("testApp").setMaster("local")
    val sc = new SparkContext(conf)

    assert(sc.makeRDD(Seq(1, 2, 3)).map(m.myMethod).first() == 1)
  }
}

which throws the following exception:

[info] MyTraitTest:
[info] Mock
[info] - should work in Spark *** FAILED ***
[info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.ClassNotFoundException: MyTrait$EnhancerByMockitoWithCGLIB$6d9e95a8
[info]  at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
[info]  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
[info]  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
[info]  at java.lang.Class.forName0(Native Method)
[info]  at java.lang.Class.forName(Class.java:348)
[info]  at org.apache.spark.serializer.JavaDeserializationStream$anon$1.resolveClass(JavaSerializer.scala:67)
[info]  at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1819)
[info]  at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
[info]  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1986)
[info]  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info]  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info]  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info]  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info]  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info]  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info]  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info]  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info]  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info]  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info]  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info]  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info]  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info]  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info]  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info]  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info]  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info]  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
[info]  at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
[info]  at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
[info]  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
[info]  at org.apache.spark.scheduler.Task.run(Task.scala:99)
[info]  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
[info]  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info]  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info]  at java.lang.Thread.run(Thread.java:745)

The stack trace suggests this is related to dynamic class loading, but I don't know how to fix it.

Update: apparently, changing

val m = mock[MyTrait](Mockito.withSettings().serializable())

to

val m = mock[MyTrait](Mockito.withSettings().serializable(SerializableMode.ACROSS_CLASSLOADERS))

makes the exception disappear. However, I don't understand why this fix is necessary. I thought that in Spark local mode, a single JVM hosts both the driver and the executor. Is a different ClassLoader used to load the deserialized class on the executor side?
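For reference, here is a sketch of the full working test with the fix applied, including the extra import it needs (this assumes a Mockito version where `SerializableMode` lives in `org.mockito.mock`; the `sc.stop()` cleanup and the test class name are my additions, not part of the original repro):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.mockito.{Matchers, Mockito}
import org.mockito.mock.SerializableMode
import org.scalatest.FlatSpec
import org.scalatest.mockito.MockitoSugar

class MyTraitAcrossClassloadersTest extends FlatSpec with MockitoSugar {
  "Mock" should "work in Spark" in {
    // ACROSS_CLASSLOADERS tells Mockito to serialize the mock in a form
    // that can be deserialized even where the generated mock class itself
    // is not visible to the deserializing classloader.
    val m = mock[MyTrait](
      Mockito.withSettings().serializable(SerializableMode.ACROSS_CLASSLOADERS))
    Mockito.when(m.myMethod(Matchers.any())).thenReturn(1)

    val conf = new SparkConf().setAppName("testApp").setMaster("local")
    val sc = new SparkContext(conf)
    try {
      // The mock is captured by the closure, serialized by Spark, and
      // deserialized in the executor's task classloader.
      assert(sc.makeRDD(Seq(1, 2, 3)).map(m.myMethod).first() == 1)
    } finally {
      sc.stop()
    }
  }
}
```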
