
[SOCKET_TIMEOUT] Code: 209. DB::NetException: Timeout exceeded while reading from socket #175

Closed
daemon027 opened this issue Nov 14, 2024 · 3 comments
Labels: bug


@daemon027

Describe the bug

I use the inserter feature to insert continuously, committing once the event count reaches 1k.
I find that if the time between the first inserter.write() and the 1000th inserter.write() exceeds the default 30 s, the ClickHouse server complains:
2024.11.13 16:02:42.114222 [ 16283 ] {89a5291d-9220-4532-ab74-377c70a9156d} <Error> DynamicQueryHandler: Code: 209. DB::NetException: Timeout exceeded while reading from socket (x.x.x.x:59460, 30000 ms). (SOCKET_TIMEOUT), Stack trace

The code snippet is as follows:

let mut event_count = 0;
loop {
    ...
    let e = parse_line(line.clone(), &mut trace_fields);

    inserter.write(&e).expect("[ERR] write event failed");
    event_count += 1;
    if event_count == MAX_ROWS {
        let stats = inserter.commit().await.expect("[ERR] commit to clickhouse failed");
        event_count = 0;
    }
}

How can I set the socket timeout? I see that JDBC has a setting like this:
jdbc:clickhouse://127.0.0.1:8123/default?socket_timeout=3600
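
For comparison, a minimal sketch of what I imagine the analogous knob could look like with this crate, assuming Client::with_option forwards key/value pairs as ClickHouse settings with each query (the http_receive_timeout name comes from the server docs, not from this crate):

use clickhouse::Client;

fn main() {
    // Hedged sketch: pass the server-side `http_receive_timeout` setting
    // (in seconds) with every query issued by this client, assuming that
    // `with_option` appends it as a ClickHouse setting, analogous to
    // `?socket_timeout=...` in the JDBC URL above.
    let _client = Client::default()
        .with_url("http://127.0.0.1:8123")
        .with_database("default")
        .with_option("http_receive_timeout", "3600");
}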

Environment

  • Client version: clickhouse 0.13.1
  • OS: linux

ClickHouse server

  • ClickHouse Server version: 23.7.3.14 (official build)

The detailed server error is as follows:

(version 23.7.3.14 (official build))
2024.11.13 16:03:12.114726 [ 16283 ] {89a5291d-9220-4532-ab74-377c70a9156d} <Error> DynamicQueryHandler: Cannot send exception to client: Code: 209. DB::NetException: Timeout exceeded while reading from socket (x.x.x.x:59460, 30000 ms). (SOCKET_TIMEOUT), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000e91fbb7 in /usr/bin/clickhouse
1. ? @ 0x000000000eb97436 in /usr/bin/clickhouse
2. DB::ReadBufferFromPocoSocket::nextImpl() @ 0x000000000eb96fd7 in /usr/bin/clickhouse
3. DB::HTTPChunkedReadBuffer::readChunkHeader() @ 0x000000001529b570 in /usr/bin/clickhouse
4. DB::HTTPChunkedReadBuffer::nextImpl() @ 0x000000001529bb7c in /usr/bin/clickhouse
5. ? @ 0x00000000131e83cc in /usr/bin/clickhouse
6. DB::CompressedReadBufferBase::readCompressedData(unsigned long&, unsigned long&, bool) @ 0x000000001341491b in /usr/bin/clickhouse
7. non-virtual thunk to DB::CompressedReadBuffer::nextImpl() @ 0x0000000013414322 in /usr/bin/clickhouse
8. ? @ 0x00000000131e83cc in /usr/bin/clickhouse
9. ? @ 0x000000001379fc76 in /usr/bin/clickhouse
10. DB::LimitReadBuffer::nextImpl() @ 0x000000000ea526e4 in /usr/bin/clickhouse
11. ? @ 0x000000000e99fe27 in /usr/bin/clickhouse
12. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::shared_ptr<DB::Context>, std::function<void (DB::QueryResultDetails const&)>, std::optional<DB::FormatSettings> const&) @ 0x000000001442d657 in /usr/bin/clickhouse
13. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::optional<DB::CurrentThread::QueryScope>&) @ 0x0000000015224bad in /usr/bin/clickhouse
14. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x0000000015228fe9 in /usr/bin/clickhouse
15. DB::HTTPServerConnection::run() @ 0x0000000015298532 in /usr/bin/clickhouse
16. Poco::Net::TCPServerConnection::start() @ 0x0000000018293df4 in /usr/bin/clickhouse
17. Poco::Net::TCPServerDispatcher::run() @ 0x0000000018295011 in /usr/bin/clickhouse
18. Poco::PooledThread::run() @ 0x000000001841e227 in /usr/bin/clickhouse
19. Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001841bc5c in /usr/bin/clickhouse
20. start_thread @ 0x0000000000007fa3 in /usr/lib/x86_64-linux-gnu/libpthread-2.28.so
21. clone @ 0x00000000000f94cf in /usr/lib/x86_64-linux-gnu/libc-2.28.so

daemon027 added the bug label Nov 14, 2024
@daemon027 (Author)

I can set http_receive_timeout on the ClickHouse server to extend the timeout.
Can we do a similar setting with the ClickHouse client?
If the issue is only related to the ClickHouse server settings, I will close this.
Thanks.

@serprex (Member)

serprex commented Nov 20, 2024

You should be able to adjust some settings via client parameters too. Similar issues exist in other language bindings: ClickHouse/clickhouse-java#159

ClickHouse/clickhouse-java#1822 implies that server settings are the right place to tune this.

@serprex serprex closed this as completed Nov 20, 2024
@loyd (Collaborator)

loyd commented Nov 21, 2024

I find that if the time between the first inserter.write() and the 1000th inserter.write() exceeds the default 30 s, the ClickHouse server complains

You should call inserter.commit() regularly to let the inserter check its time limits (if configured) and complete the active INSERT query.
Long-running queries without activity are not expected, so increasing timeouts is not the right way to handle sparse streams.

One example of how to do it: #92 (comment)
(OK, this should be added as an example.)
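
A rough sketch of that pattern, assuming this crate version exposes Inserter::with_max_rows and Inserter::with_period with the signatures below (treat the exact names/signatures, the Event row type, the "events" table, and the tokio runtime as assumptions for illustration):

use std::time::Duration;

use clickhouse::{error::Error, Client, Row};
use serde::Serialize;

// Hypothetical row type standing in for the output of the issue's parse_line().
#[derive(Row, Serialize)]
struct Event {
    msg: String,
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    let client = Client::default().with_url("http://127.0.0.1:8123");

    // Assumed configuration: flush after 1000 rows OR after 10 seconds,
    // whichever comes first, so a sparse stream never keeps a single
    // INSERT open long enough to hit the server's 30 s receive timeout.
    let mut inserter = client
        .inserter::<Event>("events")?
        .with_max_rows(1_000)
        .with_period(Some(Duration::from_secs(10)));

    loop {
        let e = Event { msg: "line".into() };
        inserter.write(&e)?;
        // Calling commit() after every write lets the inserter check its
        // row/period thresholds and end the active INSERT when they are hit;
        // when no threshold is reached it is a cheap no-op.
        inserter.commit().await?;
    }
}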
