
dataplaneapi timeout and 500 error #327

Open
SimoneBosa opened this issue Mar 13, 2024 · 1 comment

@SimoneBosa

Hi,

we are encountering the following issue when calling the Data Plane API with curl.

When I call the Data Plane API with this curl command:

curl -k -X POST --user admin:adminpwd \
  -H "Content-Type: application/json" \
  -d '{"address": "1.2.3.4", "name": "my-server",  "port": 80}' \
  "http://10.10.10.1:5555/v2/services/haproxy/runtime/servers?backend=mybackend&version=1"

we get this message after 60 seconds:

curl: (52) Empty reply from server

Looking at the Data Plane API logs, we see this message:

"POST /v2/services/haproxy/configuration/servers?backend=mybackend&version=1 HTTP/1.1\" 500 43 \"-\" \"curl/7.68.0\""

and we get a 500 error.

Any idea about this strange behaviour?
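
For reference, rerunning the same request with verbose output and a longer client-side timeout (just extra curl flags, the request itself is unchanged) can help show whether the Data Plane API closes the connection or it times out somewhere in between:

curl -kv --max-time 120 -X POST --user admin:adminpwd \
  -H "Content-Type: application/json" \
  -d '{"address": "1.2.3.4", "name": "my-server", "port": 80}' \
  "http://10.10.10.1:5555/v2/services/haproxy/runtime/servers?backend=mybackend&version=1"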

Starting the Data Plane API as root, we have:

dataplaneapi -f /etc/dataplaneapi/dataplaneapi.yml

time="2024-03-13T17:05:05+01:00" level=info msg="Build date: 2024-02-15T08:20:47Z"
time="2024-03-13T17:05:05+01:00" level=info msg="Build from: https://github.com/haproxytech/dataplaneapi"
time="2024-03-13T17:05:05+01:00" level=info msg="HAProxy Data Plane API v2.9.1 4d10854"
time="2024-03-13T17:05:05+01:00" level=info msg="Reload strategy: systemd"
time="2024-03-13T17:05:05+01:00" level=info msg="Serving data plane at http://[::]:5555"

/etc/dataplaneapi/dataplaneapi.yml

config_version: 2
name: lnx
mode: single
status: ""
dataplaneapi:
  host: 0.0.0.0
  port: 5555
  advertised:
    api_address: ""
    api_port: 0
  scheme:
  - http
  userlist:
    userlist: dataplaneapi
  transaction:
    transaction_dir: /var/lib/dataplaneapi/transactions
    backups_number: 10
    backups_dir: /var/lib/dataplaneapi/backups
  resources:
    maps_dir: /etc/haproxy/maps
    ssl_certs_dir: /etc/haproxy/ssl
    general_storage_dir: /etc/haproxy/general
    spoe_dir: /etc/haproxy/spoe
haproxy:
  config_file: /etc/haproxy/haproxy.cfg
  haproxy_bin: /usr/local/sbin/haproxy
  reload:
    reload_delay: 5
    service_name: haproxy
    reload_strategy: systemd
log_targets:
- log_to: file
  log_file: /var/log/dataplaneapi.log
  log_level: info
  log_types:
  - access
  - app

HAProxy specs (haproxy -vv output):

HAProxy version 2.8.7-1a82cdf 2024/02/26 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2028.
Known bugs: http://www.haproxy.org/bugs/bugs-2.8.7.html
Running on: Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64
Build options :
  TARGET  = linux-glibc
  CPU     = native
  CC      = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef -Wdeclaration-after-statement -Wfatal-errors -Wtype-limits -fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_GETADDRINFO=1 USE_OPENSSL=1 USE_ZLIB=1 USE_NS=1 USE_SYSTEMD=1 USE_PCRE2=1 USE_PCRE2_JIT=1
  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE -LIBATOMIC +LIBCRYPT +LINUX_CAP +LINUX_SPLICE +LINUX_TPROXY -LUA -MATH -MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_WOLFSSL -OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL -PROCCTL -PROMEX -PTHREAD_EMULATION -QUIC -QUIC_OPENSSL_COMPAT +RT +SHM_OPEN -SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY -WURFL +ZLIB

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=8).
Built with OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with network namespace support.
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE2 version : 10.23 2017-02-14
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 4.8.5 20150623 (Red Hat 4.8.5-39)

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
        [BWLIM] bwlim-in
        [BWLIM] bwlim-out
        [CACHE] cache
        [COMP] compression
        [FCGI] fcgi-app
        [SPOE] spoe
        [TRACE] trace

haproxy.cfg also includes:

stats socket /run/haproxy/admin.sock mode 660 level admin
userlist dataplaneapi
  user admin insecure-password adminpwd

@SimoneBosa
Author

Hi,

I just found my issue.

The folder /etc/haproxy is a symbolic link to a path mounted from an NFS server (itself backed by GFS). The HAProxy configuration is shared between the nodes of a cluster.

After changing this mountpoint to use GFS directly (no longer going through NFS), everything works without timeouts or errors.

Our suspicion falls on the NFS server side (Ganesha) and some unexplained or unexpected locks on the cfg file.
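
For anyone with a similar setup, this is a quick way to check which filesystem actually backs the HAProxy configuration directory (standard tools only; adjust the path to your layout):

readlink -f /etc/haproxy                   # resolve the symlink to its real target
findmnt -T "$(readlink -f /etc/haproxy)"   # show the mountpoint and filesystem type (nfs, gfs2, ...) behind it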

Hope this can help others with the same setup.

SB
