Hi,
I ran into an issue. Can anyone help? Thanks in advance.
After deploying docker-spark to a server (192.168.10.8), I tried to test it from another server (192.168.10.7).
The same Spark version is installed on 192.168.10.7.
Steps:
spark-shell --master spark://192.168.10.8:7077 --total-executor-cores 1 --executor-memory 512M
# xxxx
# some output here
# xxxx
val textFile = sc.textFile("file:///opt/spark/README.md")
textFile.first()
I get the error below (the message repeats in an infinite loop):
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Thanks a lot for reporting this. Could you tell us how you are running the spark-shell command: within docker exec, or from outside the Docker network? Have you tried using one of our docker templates as an example?
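For reference, with the templates the usual pattern is to exec into the master container and run spark-shell from there, so the driver runs inside the Docker network. A minimal sketch (the container name spark-master below is just an example; use whatever name your deployment gives it):

docker exec -it spark-master /bin/bash
spark-shell --master spark://spark-master:7077 --total-executor-cores 1 --executor-memory 512M

When the driver runs outside the Docker network, the executors often cannot connect back to it, which typically surfaces as exactly this "Initial job has not accepted any resources" warning.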
I got the same error.
Can I use a configuration like the one below from outside the Spark cluster?
spark = SparkSession.builder.appName("SparkSample2").master("spark://192.XX.X.XX:7077").getOrCreate()
I'd like to run this application from the client side.
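This is roughly what I have in mind; a minimal sketch, assuming the workers can reach my client machine directly (the driver address 192.168.10.7 and the port values below are assumptions about my own setup, not confirmed values):

from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .appName("SparkSample2")
    .master("spark://192.XX.X.XX:7077")
    # assumption: IP of this client machine, reachable from the workers
    .config("spark.driver.host", "192.168.10.7")
    # assumption: fixed ports so they can be opened in the firewall
    .config("spark.driver.port", "40000")
    .config("spark.blockManager.port", "40001")
    .getOrCreate())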
Thank you for your great support.
Best,