remote write size larger than 104857600 #106

Open
jerryum opened this issue Feb 8, 2023 · 3 comments

Comments

jerryum commented Feb 8, 2023

[2023-02-08 19:56:35,993] WARN [SocketServer listenerType=ZK_BROKER, nodeId=0] Unexpected error from /10.138.0.12 (channelId=10.32.2.15:9092-10.138.0.12:37806-76); closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1347375956 larger than 104857600)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:105)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
        at kafka.network.Processor.poll(SocketServer.scala:1055)
        at kafka.network.Processor.run(SocketServer.scala:959)
        at java.base/java.lang.Thread.run(Thread.java:829)

This is what I received when I connected prometheus-kafka-adapter to Prometheus. I modified the max receive size of Kafka to be larger than 1347375956, but I still get the same error. Any advice is welcome!
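A side note on the reported size: 1347375956 is 0x504F5354, the ASCII bytes for `POST`. The broker interprets the first four bytes of an incoming connection as a big-endian length prefix, so this particular value typically means an HTTP client is talking to the Kafka listener port directly (for example, a Prometheus remote write URL pointed at the broker instead of the adapter's HTTP endpoint). A quick check in Python:

```python
# Kafka reads the first 4 bytes of a request as a big-endian length.
# If an HTTP request hits the broker port, those bytes are literally "POST".
size = int.from_bytes(b"POST", byteorder="big")
print(size)       # 1347375956 -- the exact value in the error above
print(hex(size))  # 0x504f5354
```

If the size in the log were random-looking instead of this constant, a genuinely oversized batch would be the more likely cause.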

@palmerabollo
Member

Hi @jerryum, that's a big message. I think it has nothing to do with prometheus-kafka-adapter, but with the Kafka configuration itself. Could you try increasing message.max.bytes on your Kafka brokers? Is that what you changed?
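For reference, the 104857600 (100 MB) cap in the error message is the default of the broker's `socket.request.max.bytes`, which limits the whole request, not just the record batch; raising `message.max.bytes` alone will not lift it. A sketch of the related broker settings (the values below are illustrative, not recommendations):

```properties
# server.properties (broker side) -- illustrative values
# The 104857600 limit in InvalidReceiveException comes from this setting:
socket.request.max.bytes=209715200
# Largest record batch the broker will accept:
message.max.bytes=209715200
# Followers must be able to replicate those batches:
replica.fetch.max.bytes=209715200
```

Topic-level overrides use the `max.message.bytes` topic config, and any consumer-side `fetch.max.bytes` should be raised to match.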

@jerryum
Author

jerryum commented Apr 16, 2023

Yes, that's what I did. I couldn't find a solution, so I forked the repo and modified the adapter to write to two different Kafka topics, split by exporter, to reduce the message size. The pod metrics were by far the largest share, so I separated them: one topic for pod metrics and another for the rest.
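The split described above could look roughly like this. This is a hypothetical sketch, not the fork's actual code; the topic names and metric-name prefixes are made up:

```python
# Hypothetical routing of metrics to two topics by metric name,
# so no single produce request carries the full scrape volume.
POD_METRIC_PREFIXES = ("container_", "kube_pod_")

def choose_topic(metric_name: str) -> str:
    """Send high-volume pod metrics to a dedicated topic."""
    if metric_name.startswith(POD_METRIC_PREFIXES):
        return "metrics-pods"
    return "metrics-other"

print(choose_topic("container_cpu_usage_seconds_total"))  # metrics-pods
print(choose_topic("node_memory_MemAvailable_bytes"))     # metrics-other
```

Routing by name prefix keeps the change local to the produce path; enabling producer compression would be a complementary way to shrink each request.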

@roshan989

Hi @jerryum,

I faced a similar issue with Spark writes. I believe you may need to adjust the producer properties, specifically max.request.size. Please take a look at this resource: How to Send Large Messages in Apache Kafka.

You might need to make changes to the producer configuration in the adapter code or tweak some settings in the configuration. I'll update you once I find the necessary changes.
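A sketch of the producer-side setting, using kafka-python for illustration (prometheus-kafka-adapter itself is a Go service, where the librdkafka equivalent of `max.request.size` is the producer-side `message.max.bytes`); the broker address is a placeholder:

```python
# Illustrative kafka-python producer settings for large messages.
# max_request_size must stay within the broker's message.max.bytes
# and socket.request.max.bytes limits.
producer_config = {
    "bootstrap_servers": "kafka:9092",      # hypothetical address
    "max_request_size": 20 * 1024 * 1024,   # 20 MB per produce request
    "compression_type": "gzip",             # shrinks large metric batches
}
# producer = KafkaProducer(**producer_config)  # requires a reachable broker
print(producer_config["max_request_size"])
```

Raising only the producer limit without the matching broker and consumer settings just moves the failure to a different component.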
