[Bug]: Behind a reverse proxy, the web interface stops opening after a few hours #821

Open
phillipunzen opened this issue Sep 30, 2024 · 13 comments
Labels
bug Something isn't working

Comments

@phillipunzen

Operating system

Unraid (Latest Stable Version)

Description

My V-DSM container is running, and I have made the web interface reachable through Nginx Proxy Manager: nas.domain.de points to the local IP of the NAS on port 5000.

After a few hours of uptime, I can no longer open the DSM web interface. I only get the message "The IP of the DSM is: :5000" and am then redirected to that page, which of course cannot be reached from my phone over the Internet.

I therefore have to restart the container every 6 hours... that can't be normal. After the restart, everything works again.

Docker compose

I use the Docker template from Unraid (no compose file).
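
(The actual Unraid template isn't reproduced here. As a rough equivalent, a minimal bridge-mode compose sketched after the project README would look like the block below; the values are illustrative, not my exact settings.)

services:
  dsm:
    container_name: dsm
    image: vdsm/virtual-dsm        # assumed image name, as published in the project README
    environment:
      DISK_SIZE: "16G"             # illustrative value
    devices:
      - /dev/kvm
    cap_add:
      - NET_ADMIN
    ports:
      - "5000:5000"                # DSM web interface published on the Docker host
    volumes:
      - ./dsm:/storage
    restart: always
    stop_grace_period: 2m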

Docker log


❯ -----------------------------------------------------------
❯ You can now login to DSM at http://192.168.10.4:5000
❯ -----------------------------------------------------------

[ OK ] Started Samba Environment Setup Unit.
[ 36.298661] Installing knfsd (copyright (C) 1996 [email protected]).
[ OK ] Started always preprocess jobs before any package is started.
[ OK ] Reached target Hook on root ready.
Starting SMBService's service unit...
Starting Python2's service unit...
Starting QuickConnect's service unit...
Starting SecureSignIn's service unit...
[ OK ] Started NFS related modules.
Starting NFSD configuration filesystem...
Starting RPC Pipe File System...
[ OK ] Started synoindex mediad.
[ OK ] Started NFSD configuration filesystem.
[ OK ] Started RPC Pipe File System.
[ OK ] Reached target rpc_pipefs.target.
Starting NFSv4 ID-name mapping service...
[ OK ] Started NFSv4 ID-name mapping service.
[ OK ] Started Python2's service unit.
[ OK ] Started Synology space service.
Starting Synology virtual space service...
[ OK ] Started Synology virtual space service.
Starting Synology virtual space service phase2...
[ OK ] Started Synology virtual space service phase2.
[ OK ] Reached target Synology storage pool.
Starting Check Synology HotSpare Config...
Starting Synology filesystem check service...
Starting StorageManager's service unit...
Starting Synology space table update for Storage Manager...
Starting Synology Building Tasks Restore for Storage Manager...
[ OK ] Started Check Synology HotSpare Config.
[ OK ] Started Synology log notification service.
Stopping Synology Task Scheduler Vmtouch...
[ OK ] Stopped Synology Task Scheduler Vmtouch.
[ OK ] Started Synology Task Scheduler Vmtouch.
Starting Synology Task Scheduler Vmtouch...
[ OK ] Started Synology filesystem check service.
[ OK ] Started Synology Building Tasks Restore for Storage Manager.
[ OK ] Started Synology space table update for Storage Manager.
[ 40.092507] EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null)
[ 40.150578] 8021q: 802.1Q VLAN Support v1.8
[ 40.153237] 8021q: adding VLAN 0 to HW filter on device eth0

V-DSM login: [ 40.753689] BTRFS: device label 2024.09.15-13:31:50 v72806 devid 1 transid 57553 /dev/sdb1
[ 40.797687] BTRFS info (device sdb1): enabling auto syno reclaim space
[ 40.801945] BTRFS info (device sdb1): use ssd allocation scheme
[ 40.805591] BTRFS info (device sdb1): turning on discard
[ 40.808782] BTRFS info (device sdb1): using free space tree
[ 40.812382] BTRFS info (device sdb1): using free block group cache tree
[ 40.816613] BTRFS info (device sdb1): has skinny extents
[ 41.128718] BTRFS: device label 2024.09.15-15:20:03 v72806 devid 1 transid 18201 /dev/sdc1
[ 41.160980] BTRFS info (device sdc1): enabling auto syno reclaim space
[ 41.166355] BTRFS info (device sdc1): use ssd allocation scheme
[ 41.170216] BTRFS info (device sdc1): turning on discard
[ 41.173317] BTRFS info (device sdc1): using free space tree
[ 41.176796] BTRFS info (device sdc1): using free block group cache tree
[ 41.180918] BTRFS info (device sdc1): has skinny extents
[ 41.278612] BTRFS info (device sdb1): BTRFS: root of syno feature tree is null
[ 41.282855] BTRFS info (device sdc1): BTRFS: root of syno feature tree is null
[ 47.411767] capability: warning: `nginx' uses 32-bit capabilities (legacy support in use)
[ 50.934788] Synotify use 16384 event queue size
[ 50.968386] Synotify use 16384 event queue size
[ 51.323274] Synotify use 16384 event queue size
[ 51.359728] Synotify use 16384 event queue size
[ 51.977446] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 51.983805] NFSD: starting 90-second grace period (net ffffffff818b6f00)
[ 52.208058] EXT4-fs (loop1): mounted filesystem with ordered data mode. Opts: (null)
[ 52.660343] iSCSI:target_core_rodsp_server.c:1025:rodsp_server_init RODSP server started, login_key(b34aaf02d462).
[ 52.809226] syno_extent_pool: module license 'Proprietary' taints kernel.
[ 52.815905] Disabling lock debugging due to kernel taint
[ 52.842209] iSCSI:extent_pool.c:766:ep_init syno_extent_pool successfully initialized
[ 52.945157] iSCSI:target_core_device.c:612:se_dev_align_max_sectors Rounding down aligned max_sectors from 4294967295 to 4294967288
[ 52.987813] iSCSI:target_core_configfs.c:5763:target_init_dbroot db_root: cannot open: /etc/target
[ 52.996479] iSCSI:target_core_lunbackup.c:366:init_io_buffer_head 2048 buffers allocated, total 8388608 bytes successfully
[ 53.078565] iSCSI:target_core_file.c:152:fd_attach_hba RODSP plugin for fileio is enabled.
[ 53.086875] iSCSI:target_core_file.c:159:fd_attach_hba ODX Token Manager is enabled.
[ 53.095495] iSCSI:target_core_multi_file.c:91:fd_attach_hba RODSP plugin for multifile is enabled.
[ 53.110723] iSCSI:target_core_ep.c:795:ep_attach_hba RODSP plugin for epio is enabled.
[ 53.134685] iSCSI:target_core_ep.c:802:ep_attach_hba ODX Token Manager is enabled.
[ 53.449967] workqueue: max_active 1024 requested for vhost_scsi is out of range, clamping between 1 and 512
[ 57.298902] findhostd uses obsolete (PF_INET,SOCK_PACKET)
[ 63.299611] Synotify use 16384 event queue size
[ 64.692322] fuse init (API version 7.23)
[ 84.094938] Synotify use 16384 event queue size
[ 86.009437] Synotify use 16384 event queue size
[ 86.012220] Synotify use 16384 event queue size

Screenshots (optional)

Error-Screen

phillipunzen added the bug (Something isn't working) label on Sep 30, 2024
@kroese
Collaborator

kroese commented Oct 14, 2024

Very interesting... I have no clue why this happens.

@Skyfay

Skyfay commented Oct 15, 2024

  • Does the Nginx Proxy Manager also run on Unraid?
  • Can you share the vdsm Docker config you used?
  • Have you ever run a second vdsm instance and tested whether it happens there too?
  • In which network mode does vdsm run in Docker?

@kroese
Collaborator

kroese commented Oct 15, 2024

I'm pretty sure you are redirecting to the wrong IP. Most likely you are using DHCP mode, so the container and the VM running DSM have separate IPs. The blue screen is what you get when visiting port 5000 on the container IP; you should redirect to http://192.168.10.4:5000 instead (the DSM IP).

The only thing I cannot explain is why it works correctly for the first 6 hours; that is really weird.

@phillipunzen
Author

@Skyfay

  • No, the Nginx Proxy Manager runs on a dedicated VM on my Proxmox host. I have also tried it with the Caddy server; same result.

  • Here is the config of the container:
    [screenshot of the container config]

  • Yes, I ran a "test container" with vdsm. Same result.

  • I run the container in bridge mode.

@kroese
I have set a static IP in the DSM VM.

@kroese
Collaborator

kroese commented Oct 15, 2024

@phillipunzen But the screen in your screenshot is a web page that I generate in my code. If the request were redirected to the DSM IP, it would be impossible for you to see it, because DSM does not contain my code.

@phillipunzen
Author

Do you think I should disable DHCP mode?

@kroese
Collaborator

kroese commented Oct 15, 2024

No? I just think you configured the NPM proxy wrongly; otherwise the screen in your screenshot would not be possible.

@phillipunzen
Author

The NPM uses 192.168.10.4 and TCP port 5000.
But since I also had the problem with the Caddy reverse proxy, the problem must be somewhere else, right?

@Skyfay

Skyfay commented Oct 15, 2024

So you set a static IP in the DSM settings, but you are using bridge mode with ports mapped 80:5000 / 443:5001?
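
(In compose terms, such a mapping would look roughly like this; host:container, values hypothetical:)

services:
  dsm:
    image: vdsm/virtual-dsm
    ports:
      - "80:5000"      # host port 80  -> container port 5000 (DSM HTTP)
      - "443:5001"     # host port 443 -> container port 5001 (DSM HTTPS)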

@xrh0905
Contributor

xrh0905 commented Oct 19, 2024

Hi there!
I have a theory about what is going on here.
@phillipunzen Did you try to assign DSM the same address as the container, either via the DSM control panel or a static address assignment on your router? I mean, did you set the DSM address to 192.168.10.4, the same one as in the unRAID DockerMAN template? That would explain the weird screenshot and why it works at first but then stops.

@kroese
Collaborator

kroese commented Oct 19, 2024

@xrh0905 That is a very good theory! It would perfectly explain why it stops working after a couple of hours: a DHCP lease that expires.

@phillipunzen
Author

@xrh0905 I have assigned a static IP in the VDSM machine.
I am trying another solution: I have added more RAM and CPU cores, and the VM has now been running for 10 hours. I will keep an eye on this and post an update here!
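
(For reference, on a compose-based setup the equivalent would be environment variables along these lines; the variable names follow the virtual-dsm README and the values are only examples, not my real allocation.)

services:
  dsm:
    image: vdsm/virtual-dsm
    environment:
      RAM_SIZE: "4G"       # example value
      CPU_CORES: "4"       # example value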

@xrh0905
Contributor

xrh0905 commented Oct 21, 2024

@phillipunzen The IPs should differ. The container shouldn't have the same IP as DSM; they are separate hosts.
