I'm using the Spark connector to import nearly 200M records. I'd like to use bigger batches and take advantage of the asynchronous importing introduced in Weaviate 1.22, but the Spark connector seems to have trouble with batch sizes beyond 200. Specifically, when going beyond 200 I often see errors like the following:
reason=ExceptionFailure(io.weaviate.spark.WeaviateResultError,error getting result and no more retries left. Error from Weaviate: [WeaviateErrorMessage(message=java.lang.IllegalStateException: Expected BEGIN_OBJECT but was STRING at line 1 column 1 path $, throwable=com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_OBJECT but was STRING at line 1 column 1 path $), WeaviateErrorMessage(message=Failed ids: 42946687-9c7b-5a99-b5a5-60f2216e894d,...
Any help would be appreciated!
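For reference, the batch size is set through the connector's write options. A minimal PySpark sketch is below; the host, class name, and the existence of a populated DataFrame `df` are placeholder assumptions, not details taken from this issue:

```python
# Sketch of a Weaviate Spark connector write, assuming a SparkSession and a
# DataFrame `df` already exist. Host and className are placeholders.
(
    df.write
    .format("io.weaviate.spark.Weaviate")
    .option("scheme", "http")
    .option("host", "localhost:8080")   # placeholder endpoint
    .option("className", "Article")     # placeholder Weaviate class
    .option("batchSize", 200)           # raising this past 200 triggers the error above
    .mode("append")
    .save()
)
```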
@lbakshi are you testing this in WCS or in your self-hosted Weaviate cluster?
Expected BEGIN_OBJECT but was STRING indicates the response isn't coming back as valid JSON. The most likely cause is a load balancer or proxy between Spark and Weaviate returning a non-JSON error body (for example a plain-text or HTML error page) that the client then tries to parse as JSON.
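One way to confirm this diagnosis is to look at the raw response body instead of letting the JSON parser fail on it. A generic, self-contained sketch (not connector code, just an illustration of the failure mode):

```python
import json


def inspect_body(body: str):
    """Try to parse a response body as JSON; if it isn't JSON, return the
    raw text so the real error (e.g. a proxy error page) is visible
    instead of a parse error like 'Expected BEGIN_OBJECT but was STRING'."""
    try:
        return ("json", json.loads(body))
    except json.JSONDecodeError:
        return ("raw", body[:200])  # first 200 chars of the non-JSON body


# A JSON body parses normally:
print(inspect_body('{"status": "ok"}'))  # ('json', {'status': 'ok'})
# A load-balancer error page does not, so the raw text is surfaced:
print(inspect_body("502 Bad Gateway"))   # ('raw', '502 Bad Gateway')
```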