Virtual DSM in a Docker container inside an unprivileged LXC container #382
Wow, thank you very much! This will be helpful for a lot of LXC users, and I will include a link in the readme file. But it would be even better to also modify the container so that it produces the fewest possible errors when running under LXC, even without any special preparations like your script performs. If you let me know which errors you received when running under LXC, I will take a look at them and see if we can handle them differently, because I suspect that most of them can be solved in the compose file without the need for a preparation script.
It would be highly advantageous if you could devise a method to ensure seamless execution of "virtual-dsm" within an unprivileged LXC container, requiring no specific preconditions. The errors encountered when running under an unprivileged LXC container are as follows.

Reproducible steps, starting from a fresh installation of a Debian 12 Standard LXC container with a default installation of Docker:

1. After adding `lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file 0 0` to the LXC configuration:
2. After adding `lxc.mount.entry: /dev/kvm dev/kvm none bind,create=file 0 0` to the LXC configuration:
3. After adding `lxc.mount.entry: /dev/vhost-net dev/vhost-net none bind,create=file 0 0` to the LXC configuration:
4. After changing the `cpio -idm <"$TMP/rd"` code of install.sh to ignore mknod errors:
5. After changing the `tar xpfJ "$HDA.txz"` code of install.sh to ignore mknod errors:
6. After running `chown 100000:100000` on /dev/net/tun, /dev/kvm and /dev/vhost-net on the Proxmox host:

Note: The script does not make use of the Proxmox host device nodes. Instead, new device nodes are created (using mknod) to protect the Proxmox host. The "virtual-dsm" device nodes can be found on the Proxmox host in the "/dev-CT ID" folder. These device nodes are also used in the lxc.mount.entry lines of the LXC configuration.

Docker Commands:
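As an aside, the three lxc.mount.entry lines quoted earlier can be sketched together as a single config fragment for the container's Proxmox configuration file (the CT ID in the path is a placeholder; adjust it to your setup):

```
# /etc/pve/lxc/<CTID>.conf -- example entries for an unprivileged container,
# bind-mounting the host-side device nodes into the container.
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file 0 0
lxc.mount.entry: /dev/kvm dev/kvm none bind,create=file 0 0
lxc.mount.entry: /dev/vhost-net dev/vhost-net none bind,create=file 0 0
```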
Thanks, I will look into these ASAP. Right now you are patching the source code with fixes, but if those parts of the code in this repository change in the future, those patches will stop working. So it would be much better if you submitted those patches as pull requests (provided they cause no problems for non-LXC users), so that they become available for everyone.
I tried to implement your patches, but I was getting all kinds of warnings. I can see from your script that you are a much better Bash coder than I am, as the quality is miles ahead. But it's difficult for me to accept the patches if I don't fully understand how everything works with all the pipes.

So my next attempt was to solve it differently. In the end, I was unable to find a way to implement your fixes while still being sure that they don't cause any negative side-effects for people who do not need them.
The initial script was a quick fix, altering the source code to enable "virtual-dsm" to operate within an unprivileged LXC container. However, a more refined solution has been developed, eliminating the need for such workarounds and ensuring compatibility with diverse use cases. A pull request for this improvement has been submitted.

Enhancements: The pull request introduces a crucial enhancement: the ability to control the creation of device nodes. This feature empowers users to run "virtual-dsm" seamlessly within a Docker container inside an unprivileged LXC container, all without the necessity of modifying the source code.

Usage Note: If users prefer or need to create the device nodes manually (e.g., in the case of an unprivileged LXC container), they can utilize the Docker environment variable DEV="N". This flexibility ensures a more versatile application.

Device Configuration: Please note that for successful execution, the tun, kvm, and vhost-net devices must be established on the Proxmox host. These devices should be granted LXC ownership and subsequently added as mount entries to their respective LXC containers. The initial script has been updated to simplify and streamline this process.
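A sketch of how the DEV="N" variable could be combined with host-prepared device nodes in a compose file (the volume path and some values are placeholder assumptions, not the project's canonical example):

```yaml
# Hypothetical compose sketch: DEV=N tells the container not to create
# device nodes itself; the host-prepared /dev/kvm, /dev/net/tun and
# /dev/vhost-net are passed through instead.
services:
  dsm:
    image: vdsm/virtual-dsm:latest
    environment:
      DEV: "N"
    devices:
      - /dev/kvm
      - /dev/net/tun
      - /dev/vhost-net
    cap_add:
      - NET_ADMIN
    ports:
      - 5000:5000
    volumes:
      - /vdsm/storage:/storage
    stop_grace_period: 1m
```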
Thanks! I merged this improved version; it looks good! But it did not include any workaround for the /dev/tap devices.
Thank you for merging the improved version, and I appreciate your keen observation. You're correct; the workaround for /dev/tap devices is not included. Even with the previously employed workaround, I encounter a persistent issue marked by an error message. Regrettably, this indicates that DHCP (macvlan) remains non-operational for the time being when the application runs within an unprivileged LXC container. I will try to identify and resolve the issue, but your expertise on this front (qemu) would be invaluable to pinpoint the root cause. If you have any insights or suggestions, they would be greatly appreciated.
Yes, that is an annoying limitation of QEMU: it communicates with those devices via a file descriptor instead of by device name. So instead of specifying which device you want to use, you must pass QEMU an already-opened file descriptor for it.
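For illustration, a minimal sketch of what fd-based tap usage looks like on a QEMU command line; the device path /dev/tap2 and the fd number 30 are assumptions for the example, not the container's actual invocation:

```sh
# Open the macvtap device on file descriptor 30 in the shell, then hand
# that descriptor to QEMU instead of a device path.
exec 30<>/dev/tap2
qemu-system-x86_64 \
  -netdev tap,id=net0,fd=30 \
  -device virtio-net-pci,netdev=net0
```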
Hi, sorry for the noob question here, but what is the right process to make this work? 1. Create an LXC container with Debian 12 + Docker installed. Again, sorry for the noob question.
Usage
Example

In this example we will set up virtual-dsm using multiple storage locations. The LXC disk will be used for installing the Debian OS and storing the virtual-dsm Docker image. The first mount point will be used for installing virtual-dsm and uses a fast storage pool.

1. Create the LXC container using the Proxmox UI

2. Execute the virtual-dsm-lxc script
Wow! Thanks! Great doc! :-) I enabled GPU passthrough and added the right things to the conf within Proxmox to enable passthrough. But I'm running into the error that chmod 666 can't be executed on card128 because of some permissions. Any fix for that? I think this will fix the issue: --device-cgroup-rule='c : rwm', but I didn't try that yet.
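For reference, a sketch of what passing the DRI devices through to Docker could look like; the major number 226 (the conventional character major for /dev/dri devices) and the card0/renderD128 names are assumptions, so verify them with `ls -l /dev/dri` on your system:

```sh
# Allow the DRI character devices (major 226) in the container's device
# cgroup and pass the nodes through to the container.
docker run -it --rm \
  --device-cgroup-rule='c 226:* rwm' \
  --device /dev/dri/card0 \
  --device /dev/dri/renderD128 \
  -e GPU=Y \
  vdsm/virtual-dsm:latest
```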
If I follow your step-by-step instructions, I get this message:
The above is fixed, but the other error remains:
Extra info on the Proxmox server:
Thank you for reporting this. I missed that one, as I have IP forwarding enabled on my test Proxmox host. The guide has been updated to include the IP forwarding argument.
No problem. I hope you can also help me with this?
Extra info:
vainfo gives me this:
I do not use GPU / vGPU passthrough on LXC containers. However, I have created a modified version of the script which should make the card and renderD device nodes accessible in the LXC container. This should fix your permission errors. Important: GPU / vGPU passthrough on LXC containers requires configuration on both the Proxmox host and the LXC container. Success depends on settings, drivers and GPU compatibility. The following steps differ from the guide above: 2. Execute the virtual-dsm-lxc script
3. Install Docker and run virtual-dsm
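Based on the patterns used elsewhere in this thread (the chown 100000:100000 and lxc.mount.entry examples), granting an unprivileged container access to the DRI nodes could look roughly like this on the Proxmox host. This is a sketch, not the script's exact behavior; the device names, CT ID, and the 100000 UID offset depend on your setup:

```sh
# On the Proxmox host: give the unprivileged container's root user
# (UID 100000 by default) ownership of the DRI nodes...
chown 100000:100000 /dev/dri/card0 /dev/dri/renderD128

# ...and bind-mount them into the container via its config
# (/etc/pve/lxc/<CTID>.conf):
#   lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,create=file 0 0
#   lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,create=file 0 0
```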
Edits
Yes, that worked! Thanks for your great work! |
GPU passthrough doesn't work for me because I can't find /dev/dri in the Synology container. GPU=Y is set and it's passed through via Proxmox. If you know something to fix this, let me know.
@BliXem1 Do you mean that the device is missing inside the LXC container, or that you have no /dev/dri inside DSM itself?
Alright, I got the card within the container: 00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P GT1 [UHD Graphics] (rev 0c). Then what I thought is not possible. I need /dev/dri for Jellyfin within Docker, but yeah, not possible :)
Maybe it's possible by installing some extra module or drivers inside DSM that creates /dev/dri; I am not sure.
To rule out a non-working GPU inside DSM, you could check if face recognition and/or video thumbnails are working. |
@databreach In your edits you wrote:
and that worked a couple of versions ago, because the allocation flag did not apply to qcow2. But in recent versions allocation support for qcow2 was added, so now you will also need to set ALLOCATE=N.
The guides have been updated. Thank you! |
If I want to use macvlan, how would I create the network using tuntap? Currently I create the network on the LXC Debian 12 host and reference it in my compose file with a fixed ipv4_address. This currently fails with the mknod /dev/tap2 error.
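For reference, a sketch of creating a macvlan Docker network on the host; the parent interface eth0, the subnet values, and the network name vdsm_net are placeholder assumptions:

```sh
# Create a macvlan network bridged onto the host NIC (names and
# addresses here are example placeholders; match your LAN).
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  vdsm_net
```

The compose file would then reference this network as external and pin an `ipv4_address` from the subnet for the container.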
@Stan-Gobien That is because you have DHCP enabled, which requires creating a /dev/tap device; that does not work in an unprivileged LXC container.
Okay, so the only drawback is no DHCP-assigned address then?
Edit: I did a test without the ipv4_address line and it does assign a random address in the range.
Yes, you need to specify an address like that. The drawback is that you don't have DHCP and that the VM will have its traffic tunneled between an internal address (20.20.20.21) and the outside world. So DSM will not know its "real" address, but that should not be a problem for the average user. Also, you can remove that section from your compose file, because it's not needed when you use macvlan.
This works very well. Splendid project! |
I followed the instructions and all is OK. One problem only: Proxmox backups and Proxmox Backup Server don't work. I can't back up the CT when DSM is running. Please, can someone confirm this or offer a fix?
For backups, I sometimes had this with other LXC containers; my solution was to do the backup in stop/start mode instead of snapshot mode. Recreating the LXC several months later somehow solved my issue.

I see people are getting this working inside LXC, so I was wondering if any of you were able to use mount points inside DSM. I was looking at virtual DSM to keep using Hyper Backup, but it seems it refuses to use mounted filesystems as the source of the backup. As such, I only have one option left: pass the data from my PVE host to the LXC as a mount point and then hopefully pass it on to DSM, making it believe it is local storage. I fear that's going to be impossible, but I wanted to ask the LXC DSM specialists here...
I have a server without KVM support. I only use LXC. I get an error when starting the container:

```
root@DSM1:~# docker run -it --rm -p 5000:5000 --cap-add NET_ADMIN --device-cgroup-rule='c : rwm' --sysctl net.ipv4.ip_forward=1 --device /dev/net/tun --device /dev/vhost-net --stop-timeout 60 -v /vdsm/storage1:/storage -v /vdsm/storage2:/storage2 -e CPU_CORES=2 -e RAM_SIZE=4096M -e DISK_SIZE=16G -e DISK2_SIZE=100G -e DISK_FMT=qcow2 -e ALLOCATE=N vdsm/virtual-dsm:latest KVM=NO
❯ ERROR: KVM acceleration not available (device file missing), this will cause a major loss of performance.
```
Try (note that `-e KVM=NO` must come before the image name, so it is passed as an environment variable instead of a command argument):

```
docker run -it --rm -p 5000:5000 --cap-add NET_ADMIN --device-cgroup-rule='c : rwm' --sysctl net.ipv4.ip_forward=1 --device /dev/net/tun --device /dev/vhost-net --stop-timeout 60 -v /vdsm/storage1:/storage -v /vdsm/storage2:/storage2 -e CPU_CORES=2 -e RAM_SIZE=4096M -e DISK_SIZE=16G -e DISK2_SIZE=100G -e DISK_FMT=qcow2 -e ALLOCATE=N -e KVM=NO vdsm/virtual-dsm:latest
```
Everything works inside the container. I can connect to [local address]:5000. I installed Plex and Emby; they are installed, but I can't connect to the address [local address]:8096/web/index.html (ERR_CONNECTION_REFUSED). I have enabled the Samba server, but on a Windows PC the address \\[local address] is not detected. The Proxmox firewall is disabled. Is there anything else that needs to be done?
I wanted to share insights into why "virtual-dsm" encounters challenges running within an unprivileged Proxmox LXC container by default and how the provided script addresses these issues.
Core Challenges:
Script Solutions:
Executing the script below on your Proxmox host (not within the LXC container) will pave the way for a more seamless "virtual-dsm" experience within an unprivileged LXC on Proxmox.
```
bash -c "$(wget -qLO - https://raw.githubusercontent.com/databreach/virtual-dsm-lxc/main/virtual-dsm-lxc.sh)"
```
Feel free to delve into the script for a detailed understanding, and don't hesitate to share your insights or report any observations.
Best,
databreach