Error Writing Request Body #204

mtgnoah opened this issue Nov 22, 2022 · 2 comments

Comments


mtgnoah commented Nov 22, 2022

One or two tasks out of many fail when writing a heavily partitioned table that uses a UDF to ClickHouse, and I can't figure out what is causing it.

xenon.clickhouse.exception.CHServerException: [HTTP][email protected]:8123}/default [1002] Error writing request body to server, server ClickHouseNode [uri=http://192.168.0.202:8123/default]@1139606723
	at xenon.clickhouse.client.NodeClient.syncInsert(NodeClient.scala:126)
	at xenon.clickhouse.client.NodeClient.syncInsertOutputJSONEachRow(NodeClient.scala:85)
	at xenon.clickhouse.write.ClickHouseWriter.$anonfun$doFlush$1(ClickHouseWriter.scala:231)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.util.Try$.apply(Try.scala:213)
	at xenon.clickhouse.Utils$.retry(Utils.scala:87)
	at xenon.clickhouse.write.ClickHouseWriter.doFlush(ClickHouseWriter.scala:229)
	at xenon.clickhouse.write.ClickHouseWriter.flush(ClickHouseWriter.scala:217)
	at xenon.clickhouse.write.ClickHouseWriter.commit(ClickHouseWriter.scala:262)
	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$1(WriteToDataSourceV2Exec.scala:453)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1538)
	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:480)
	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:381)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:136)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
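For reference, a minimal sketch of the kind of job that hits this (catalog settings, table name, column, and the UDF body are placeholders, not the actual job):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

// Hypothetical sketch: a partitioned write through the spark-clickhouse-connector
// with a UDF applied before the write. All names and settings are placeholders.
val spark = SparkSession.builder()
  .appName("clickhouse-write")
  .config("spark.sql.catalog.clickhouse", "xenon.clickhouse.ClickHouseCatalog")
  .config("spark.sql.catalog.clickhouse.host", "192.168.0.202")
  .config("spark.sql.catalog.clickhouse.protocol", "http")
  .config("spark.sql.catalog.clickhouse.http_port", "8123")
  .getOrCreate()

import spark.implicits._

// UDF applied to a column before the write (placeholder logic)
val normalize = udf((s: String) => if (s == null) null else s.trim.toLowerCase)

val df = spark.table("source_table") // placeholder source
  .withColumn("k", normalize($"k"))

// Append into a heavily partitioned ClickHouse table; a couple of the
// resulting write tasks intermittently fail with the CHServerException above.
df.writeTo("clickhouse.default.target_table").append()
```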
pan3793 (Collaborator) commented Nov 23, 2022

Error code 1002 is a generic unknown exception:

UNKNOWN_EXCEPTION(1002, "UNKNOWN_EXCEPTION");

Lowering the spark.clickhouse.write.batchSize may help.

https://housepower.github.io/spark-clickhouse-connector/configurations/
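For example, something like the following (the value is only an illustration; see the configurations page above for the default and related write options):

```scala
// Try a smaller write batch size so each insert request body is smaller.
// 2000 is just an example value, not a recommendation.
spark.conf.set("spark.clickhouse.write.batchSize", "2000")

// or pass it at submit time:
//   spark-submit --conf spark.clickhouse.write.batchSize=2000 ...
```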

mzitnik (Collaborator) commented Jun 30, 2024

@mtgnoah did it help you?
