Hi, my Scala version is 2.12.15, my Spark version is 3.0, and I am using spark-tfrecord_2.12:0.40, but I hit this error:
Caused by: java.lang.NullPointerException
at org.apache.hadoop.conf.Configuration.&lt;init&gt;(Configuration.java:821)
at org.apache.hadoop.mapred.JobConf.&lt;init&gt;(JobConf.java:440)
at org.apache.hadoop.mapreduce.task.JobContextImpl.&lt;init&gt;(JobContextImpl.java:67)
at org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.&lt;init&gt;(TaskAttemptContextImpl.java:49)
at org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.&lt;init&gt;(TaskAttemptContextImpl.java:44)
at com.linkedin.spark.datasources.tfrecord.TFRecordFileReader$.readFile(TFRecordFileReader.scala:32)
at com.linkedin.spark.datasources.tfrecord.DefaultSource.$anonfun$buildReader$1(DefaultSource.scala:132)
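For context, a read of roughly this shape is what goes through DefaultSource.buildReader and TFRecordFileReader.readFile in the trace above; the path and the recordType option here are placeholders, not the exact call from my job:

// Sketch of the failing read (placeholder path and options)
val df = spark.read
  .format("tfrecord")
  .option("recordType", "Example")
  .load("xxxx")
df.show()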
I tested the following cases, and they run well:
sc.textFile("xxxx")
spark.read.textFile("xxx")
Do you have any idea what might cause this error? I am really confused.
It is hard to say where the problem is with the limited information provided here.
If you can run the examples in the README file (a minimal round trip is sketched below), then your setup is likely correct, and the problem is more likely in the TFRecord files themselves.
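As a quick sanity check, a minimal write/read round trip along the lines of the README example looks roughly like this; the schema, values, and output path below are illustrative assumptions, not your data:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

// Illustrative schema and rows; any small DataFrame works for this check.
val schema = StructType(List(
  StructField("id", IntegerType),
  StructField("label", FloatType),
  StructField("name", StringType)
))
val rows = Seq(Row(1, 0.5f, "a"), Row(2, 1.5f, "b"))
val df = spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)

// Write and read back with the tfrecord data source.
val path = "/tmp/tfrecord-check"  // placeholder output path
df.write.format("tfrecord").option("recordType", "Example").mode("overwrite").save(path)

val readBack = spark.read
  .format("tfrecord")
  .option("recordType", "Example")
  .schema(schema)  // passing the schema avoids relying on schema inference
  .load(path)
readBack.show()

If this round trip succeeds but reading your own files still throws the NullPointerException, the files (or the Hadoop configuration used to read them) are the more likely culprit.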