SPARK-5063 error #22
Comments
Getting the same error.
@ramondalmau what was the solution you found two years ago?
@gunjan075 Admittedly, I'm kind of late to the party, but I just stumbled upon this problem myself, so here's the workaround. First of all, if you check out the SparkTorch documentation (the README page), you will find a note about this very issue.
From my experience, using a Sequential network is not sufficient in itself, because simply adding a forward pass also leads to this problem. So, the workaround in Databricks is to keep the network in a separate file and upload it, either locally or via the Repos functionality. One rather straightforward way of doing this is to upload the file to DBFS (this can be done manually from your Databricks notebook by selecting File -> Upload data to DBFS...), inform Spark about it, and then import the network class in your code, as in the sketch below. And that's pretty much it.
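A minimal sketch of that workflow, assuming the file is named network.py, the class is named Network, and the file was uploaded under dbfs:/FileStore/ (all of these names are placeholders, not from the original post):

```python
# network.py -- the file uploaded to DBFS. It holds only the model definition,
# so workers can import and unpickle the class without touching SparkContext.
#
#   import torch.nn as nn
#
#   class Network(nn.Module):
#       def __init__(self):
#           super().__init__()
#           self.fc = nn.Linear(10, 2)
#
#       def forward(self, x):
#           return self.fc(x)

# In the Databricks notebook (spark is the session Databricks provides):
# ship the file to the driver and every worker so the class is importable.
# The dbfs:/ path is an assumption; the FUSE path /dbfs/FileStore/network.py
# is a common alternative.
spark.sparkContext.addPyFile("dbfs:/FileStore/network.py")

from network import Network  # now resolvable on the driver and the workers
net = Network()
```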
Dear all,
Many thanks for this nice contribution. SparkTorch is exactly what I was looking for! :)
I am trying to attach a trained PyTorch network to a fitted ML PipelineModel. My attempt looks like this:
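(The original snippet was not preserved in this thread. A hypothetical reconstruction, assuming SparkTorch's attach_pytorch_model_to_pipeline helper with its documented keyword arguments, and a fitted PipelineModel named fitted_pipeline, might look like this:)

```python
import torch.nn as nn
from sparktorch import attach_pytorch_model_to_pipeline

# A trained network (weights assumed to be loaded already).
network = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

# Attach the network as an extra stage of the already-fitted Spark ML pipeline.
# fitted_pipeline is a pyspark.ml.PipelineModel fitted earlier; column names
# are assumptions for illustration.
spark_pipeline = attach_pytorch_model_to_pipeline(
    network,
    fitted_pipeline,
    inputCol='features',
    predictionCol='predictions',
)
```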
Unfortunately, I get the following error:
```
Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
```
I also tried with the following (simple) neural network and command, and I receive exactly the same error:
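(Again reconstructing from context, since the snippet itself is missing: a basic training attempt in the style of the SparkTorch README, with the DataFrame name, column names, and hyperparameters all assumed:)

```python
import torch
import torch.nn as nn
from sparktorch import serialize_torch_obj, SparkTorch

# A deliberately simple network.
network = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
    nn.Softmax(dim=1),
)

# Serialize the model, loss, and optimizer for shipping to the workers.
torch_obj = serialize_torch_obj(
    model=network,
    criterion=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam,
    lr=0.001,
)

# df is assumed to be a Spark DataFrame with 'features' and 'label' columns.
spark_model = SparkTorch(
    inputCol='features',
    labelCol='label',
    predictionCol='predictions',
    torchObj=torch_obj,
    iters=50,
).fit(df)
```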
However, the following code runs smoothly, without errors:
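(This snippet is also missing from the thread. Given the conclusion below, it presumably exercised the pieces in isolation, along these lines, with the same assumed names as above:)

```python
import torch

# The fitted Spark ML pipeline transforms on its own without complaint...
fitted_pipeline.transform(df).show(5)

# ...and the PyTorch network runs fine locally on the driver.
out = network(torch.randn(1, 10))
print(out)
```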
That is, the error seems to be caused neither by my specific neural network nor by the pipeline.
I am using Databricks with Apache Spark 3.0.1. Do you know how to solve this issue?
Many thanks in advance
Ramon